# Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models

**Authors:** Yaohua Zha, Jinpeng Wang, Tao Dai, Bin Chen, Zhi Wang, Shu-Tao Xia
**Published:** 2023-04-14 | **Link:** http://arxiv.org/abs/2304.07221v2

**arXiv abstract:** Pre-trained point cloud models have found extensive applications in 3D understanding tasks like object classification and part segmentation. However, the prevailing strategy of full fine-tuning in downstream tasks leads to large per-task storage overhead for model parameters, which limits the efficiency when applying large-scale pre-trained models. Inspired by the recent success of visual prompt tuning (VPT), this paper attempts to explore prompt tuning on pre-trained point cloud models, to pursue an elegant balance between performance and parameter efficiency. We find that while instance-agnostic static prompting, e.g. VPT, shows some efficacy in downstream transfer, it is vulnerable to the distribution diversity caused by various types of noises in real-world point cloud data. To conquer this limitation, we propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models. The essence of IDPT is to develop a dynamic prompt generation module to perceive semantic prior features of each point cloud instance and generate adaptive prompt tokens to enhance the model's robustness. Notably, extensive experiments demonstrate that IDPT outperforms full fine-tuning in most tasks with a mere 7% of the trainable parameters, providing a promising solution to parameter-efficient learning for pre-trained point cloud models. Code is available at https://github.com/zyh16143998882/ICCV23-IDPT.
###### Abstract
Recently, pre-trained point cloud models have found extensive applications in downstream tasks like object classification. However, these tasks often require full fine-tuning of models, leading to storage-intensive procedures and thus limiting the real-world applications of pre-trained models. Inspired by the great success of visual prompt tuning (VPT) in vision, we attempt to apply prompt tuning, which serves as an efficient alternative to full fine-tuning for large-scale models, to point cloud pre-trained models to reduce storage costs. However, it is non-trivial to apply the traditional static VPT to point clouds, owing to the distribution diversity of point cloud data. For instance, scanned point clouds exhibit various types of missing or noisy points. To address this issue, we propose an Instance-aware Dynamic Prompt Tuning (IDPT) for point cloud pre-trained models, which utilizes a prompt module to perceive the semantic prior features of each instance. This semantic prior facilitates the learning of unique prompts for each instance, thus enabling downstream tasks to robustly adapt to pre-trained point cloud models. Notably, extensive experiments conducted on downstream tasks demonstrate that IDPT outperforms full fine-tuning in most tasks with a mere 7% of the trainable parameters, thus significantly reducing the storage pressure. Code is available at [https://github.com/zyh16143998882/ICCV23-IDPT](https://github.com/zyh16143998882/ICCV23-IDPT).
## 1 Introduction
With the rapid development of 3D scanning technology, point clouds, as irregular point sets that represent 3D geometry, have been widely used in various fields and tasks. Deep learning-based point cloud processing techniques [16, 25, 32, 34, 35, 45] have drawn considerable attention as they can directly process raw point cloud data while preserving its rich information. As a classic deep learning paradigm, fine-tuning the foundation model pre-trained [11, 27, 33, 51, 54] on massive raw point clouds in specific downstream tasks has achieved state-of-the-art performance. However, this approach is storage-intensive, as it requires storing and deploying a separate copy of the backbone parameters for each task.
Recently, prompt tuning has surpassed fine-tuning in multiple downstream tasks in the language and image domains, significantly reducing storage requirements by fixing the parameters of a pre-trained model and introducing a small number of task-specific learnable parameters into the input space. Although some works [22, 21, 46, 53] attempt to introduce prompts into point cloud processing, they all rely on pre-trained image models (_e.g_. CLIP [38], ViT [12], etc.). To date, little research has been devoted to prompt tuning of point cloud pre-trained models.
Inspired by the success of visual prompt tuning (VPT) [23], it is natural to adapt the static prompt tuning of VPT to point clouds, as shown in Figure 1(a), whereby we introduce a few learnable parameters as prompts to the input of each layer of a pre-trained point cloud Transformer. During the tuning phase in downstream tasks, we freeze the pre-trained Transformer parameters and update only the downstream task head and the prompt parameters. Although such a strategy performs well on synthetic datasets (_e.g_. ModelNet40 [47]), it causes significant performance degradation on real scanned point cloud datasets (_e.g_. ScanObjectNN [41]). Thus, static prompt tuning is not suitable for real point clouds, where point clouds with different types of missing or noisy points belong to different distributions. These observations motivate us to design a universal prompt tuning strategy for both synthetic and real point clouds.

Figure 1: The pipeline of (a) the previous static prompt tuning in VPT [23] and (b) our dynamic prompt tuning. Unlike static prompt tuning, which is instance-agnostic, ours is adaptive to the input: an instance-adaptive prompt generated by a prompt module is concatenated into the input of the last Transformer layer.
To address this issue, we propose an Instance-aware Dynamic Prompt Tuning (IDPT) for point cloud pre-trained models, which is an instance-adaptive prompt tuning strategy. As shown in Figure 1(b), IDPT uses a prompt module to perceive the semantic prior of each instance and learns a unique prompt for each point cloud via this semantic prior. The instance-aware dynamic prompt enables us to map inputs from different distribution domains into the same domain, thereby enhancing the robustness of the Transformer [43] backbone. Our IDPT only concatenates prompts into the input of the last Transformer layer, which allows for a more accurate representation of the semantic prior of each point cloud. This semantic prior facilitates the learning of unique prompts for each instance, thus enabling downstream tasks to robustly adapt to pre-trained point cloud models.
The main contributions can be summarized as follows:
* We adapt static prompt tuning from vision, which is instance-agnostic, to point cloud pre-trained models, and find that static prompt tuning cannot work well on real point clouds due to their distribution diversity. To the best of our knowledge, this is the first exploration of prompt tuning in point cloud pre-trained models.
* We propose an Instance-aware Dynamic Prompt Tuning (IDPT) to produce a universal prompt for both synthetic and real point clouds, which is adaptive to instance input. Specifically, IDPT learns a unique prompt from the semantic prior of each point cloud to cope with the distribution diversity issues existing in point cloud data.
* Extensive experiments on a variety of downstream tasks have shown that our IDPT reaches state-of-the-art performance in most tasks, and particularly, our IDPT significantly reduces the storage cost of pre-trained models by using only 7% of the trainable parameters of full fine-tuning.
## 2 Related Work
### Point Cloud Pre-training Model
Recently, studies on pre-trained foundational models for 3D point clouds have achieved remarkable success. These approaches first apply a pretext task to pre-train the foundational model to learn the latent semantic information of the point cloud and then fine-tune the model weights on the target task to achieve higher performance. Existing pre-training pretext tasks can be divided into discriminative tasks [4, 9, 14, 48] and generative tasks [3, 11, 19, 26, 27, 33, 51, 54]. Discriminative approaches distinguish different views of the same point cloud instance from other instances; PointContrast [49] and CrossPoint [1] explore the use of contrastive learning on intra-domain and cross-domain features to obtain rich self-supervised information. Generative methods typically rely on an autoencoder to learn the latent features of the data by reconstructing the original input. Point-BERT [51], Point-MAE [33] and Point-M2AE [54], based on masked autoencoders, have been very successful. Additionally, Point-DAE [54] explores a more general denoising autoencoder for point cloud learning by investigating more types of corruption beyond masking. ACT [11] achieves a significant improvement on real scanned point clouds by using pre-trained language and image models as cross-modal teachers to guide the learning of 3D self-supervised networks. However, the above methods all utilize full fine-tuning to adapt pre-trained models to various downstream tasks. Building upon these approaches, our work explores how to reduce parameter storage in downstream tasks via prompt tuning.
### Prompt Learning in Computer Vision
Prompt tuning involves adding specific prompt information to the input of a pre-trained model and adjusting downstream tasks to fit the pre-trained model. This is achieved by fixing the pre-trained model parameters and fine-tuning only the prompt. It was first proposed for language models [7, 13, 24, 28, 29, 30] and later gained popularity in image models [38, 39, 40, 56, 55] due to its flexibility and high performance. CLIP [38] uses fixed class-specific text labels as prompts for prediction. Later, CoOp [56] learns class-specific continuous prompts, and CoCoOp [55] builds upon CoOp by introducing a lightweight network to learn dynamic prompts for each instance. VPT [23], inspired by P-Tuning [29], first introduces the continuous prompt tuning framework into image pre-trained models. Additionally, P2P [46] achieved the first application of prompts to point clouds by learning color information in the input space of point cloud rendering images as prompts for a 2D backbone network. PointCLIP [53] and CLIP2Point [21] project the point cloud to a depth map and then use the pre-trained CLIP [38] to understand the point cloud. However, all the
above work relies on pre-trained image models. Our work discusses the tuning of pre-trained point cloud models with appropriate prompting mechanisms.
## 3 Methodology
In this section, we first introduce the tuning pipeline for a pre-trained point cloud model (§ 3.1). Next, we present the empirical observation of static prompt tuning (_e.g_. VPT [23]) and discuss its weaknesses (§ 3.2) that highlight our motivations. At last, we describe our Instance-aware Dynamic Prompt Tuning strategy in detail (§ 3.3).
### Preliminaries
When fine-tuning a pre-trained point cloud model (_e.g_. Point-MAE [33]), a point cloud \(\mathbf{X}\in\mathbb{R}^{M\times 3}\) with \(M\) points is first divided into \(m\) point patches \(\mathbf{X}^{\prime}\in\mathbb{R}^{m\times k\times 3}\) via Farthest Point Sampling (FPS) and K-Nearest Neighbor (KNN) algorithms, where each patch has \(k\) local points. Then, all point patches are embedded into a series of input tokens \(\mathbf{E}_{0}\in\mathbb{R}^{m\times d}\) with positional encoding via a point patch embedding module. Next, we insert a classification token (_i.e_., [CLS]) \(\mathbf{c}_{0}\) at the head of the patch embeddings and forward the token embeddings to the pre-trained model. Specifically, the forward process of each transformer layer is defined as
\[[\mathbf{c}_{i};\mathbf{E}_{i}]=f_{i}([\mathbf{c}_{i-1};\mathbf{E}_{i-1}]),\;i=1,2,\cdots,N, \tag{1}\]
where \(f_{i}\) denotes the \(i\)-th transformer encoder layer. \(N\) is the total transformer layer number of the pre-trained backbone. Finally, the model makes predictions by building a task-specific head \(g_{h}\) upon the output of the pre-trained backbone:
\[\mathbf{y}=g_{h}([\mathbf{c}_{N};\mathbf{E}_{N}]). \tag{2}\]
All the parameters of \(\{f_{i}\}_{i=1}^{N}\) and \(g_{h}\) will be updated in a downstream tuning task, which burdens the storage cost for per-task model weights.
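To make this pipeline concrete, the following is a minimal PyTorch-style sketch of the forward pass in Eqs. (1) and (2). It is an illustrative reconstruction, not the authors' code; the sizes (`d`, `N`, `m`, `num_classes`) and the choice of pooling fed to the head are assumptions made for readability.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions for readability, not the paper's values).
d, N, m, num_classes = 384, 12, 64, 40

# Pre-trained backbone f_1 ... f_N and task-specific head g_h.
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=d, nhead=6, batch_first=True)
     for _ in range(N)])
head = nn.Linear(2 * d, num_classes)

def forward_full_finetune(E0, c0):
    """E0: (B, m, d) patch embeddings; c0: (B, 1, d) [CLS] token."""
    tokens = torch.cat([c0, E0], dim=1)          # [c_0; E_0]
    for f in layers:                             # Eq. (1)
        tokens = f(tokens)
    c_N, E_N = tokens[:, 0], tokens[:, 1:]
    # One common choice: feed [CLS] plus max-pooled patch tokens to the head.
    feat = torch.cat([c_N, E_N.max(dim=1).values], dim=-1)
    return head(feat)                            # Eq. (2)
```

In full fine-tuning, every parameter in `layers` and `head` receives gradients, which is exactly what creates the per-task storage overhead described above.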
Recently, prompt tuning [56, 23] has been shown to be effective for parameter-efficient tuning. The basic idea of prompt tuning is to insert a few learnable prompt tokens into the input token sequence, _i.e_., we modify Eq.(1) and Eq.(2) as
\[[\mathbf{c}_{i};\mathbf{P}_{i};\mathbf{E}_{i}]=f_{i}([\mathbf{c}_{i-1};\mathbf{P}_{i-1};\mathbf{E}_{i- 1}]),\;i=1,2,\cdots,N, \tag{3}\]
\[\mathbf{y}=g_{h}([\mathbf{c}_{N};\mathbf{P}_{N};\mathbf{E}_{N}]), \tag{4}\]
where \(\mathbf{P}_{i}\) is the inserted prompt tokens at the \(i\)-th layer. During the tuning process, we freeze the parameters of \(\{f_{i}\}_{i=1}^{N}\) and only update prompt \(\{\mathbf{P}_{i}\}_{i=1}^{N}\) and task-specific head \(g_{h}\), which can largely reduce per-task storage cost.
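A corresponding sketch of static prompt tuning in the VPT-Deep style (Eqs. (3) and (4)), reusing `layers`, `head`, and the sizes defined in the previous sketch; the number of prompts, the zero initialization, and the head pooling are again illustrative assumptions.

```python
num_prompts = 10
prompts = nn.ParameterList(
    [nn.Parameter(torch.zeros(1, num_prompts, d)) for _ in range(N)])

for p in layers.parameters():       # freeze the pre-trained backbone
    p.requires_grad = False

def forward_static_prompt(E0, c0):
    c, E = c0, E0
    B = E0.shape[0]
    for f, P in zip(layers, prompts):
        # Eq. (3): each layer consumes [c; P_{i-1}; E]; the transformed
        # prompts are discarded and replaced by fresh learnable prompts.
        out = f(torch.cat([c, P.expand(B, -1, -1), E], dim=1))
        c, E = out[:, :1], out[:, 1 + num_prompts:]
    P_N = out[:, 1:1 + num_prompts]
    feat = torch.cat(
        [c[:, 0], torch.cat([P_N, E], dim=1).max(dim=1).values], dim=-1)
    return head(feat)               # Eq. (4); only prompts and head train
```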
### Observation and Discussion
Inspired by the success of Visual Prompt Tuning (VPT) [23], it is natural to extend such a prompt tuning strategy to point clouds, as shown in Figure 1(a). As the prompt tokens are instance-independent and shared by all samples during downstream tuning, we term this kind of strategy _static prompt tuning_. Our empirical study showed
Figure 2: Overall pipeline of Instance-aware **D**ynamic **P**rompt **T**uning (**IDPT**) for pre-trained point cloud models, which only updates the parameters of the dynamic prompt generation module and downstream task head during a downstream tuning task. To capture various sub-modes existing in the real-world data and enhance the robustness against noises (_e.g_., with different types of missing or noisy points), we design a dynamic prompt generation module with graph convolution [45] layers to aggregate multi-scale contextual features and dynamically generate instance-adaptive prompt. Empirically, inserting the dynamic prompt before the last transformer layer yields promising performance and enjoys decent efficiency at the same time.
that although VPT improves downstream performance compared with tuning the task head only, it underperforms full fine-tuning by considerable margins. In particular, Table 1 presents the classification results on several benchmark datasets. We can see that static prompt tuning significantly reduces the number of trainable parameters (_i.e_., to about 1% to 2% of the backbone parameters) compared with full fine-tuning. In terms of accuracy, although VPT-Deep performs well on synthetic datasets (_e.g_. ModelNet40 [47]), it presents significant performance degradation on real scanned point cloud datasets. For example, on the PB_T50_RS variant of the ScanObjectNN [41] dataset, full fine-tuning outperforms VPT-Deep by 4.5% in accuracy.
Here we briefly discuss why static prompt tuning performs well on synthetic datasets but poorly on real scanned datasets. We adopt a perspective from Domain Adaptation (DA) [2, 5, 6, 17] and consider the transfer from pre-trained models to downstream tasks. Our goal is to bridge the source and target domains with different distributions so as to enhance the prediction robustness of the pre-trained model. By definition, the source domain \(p_{s}(\mathbf{x}_{s})\) refers to the distribution of pre-training data, and the target domain \(p_{t}(\mathbf{x}_{t})\) refers to the distribution of downstream task data. Empirically, we found that
\[p_{t}(\mathbf{x}_{t})\neq p_{s}(\mathbf{x}_{s}), \tag{5}\]
while we ask for a robust model such that
\[p_{t}(\mathbf{y}_{t}|\mathbf{x}_{t})=p_{s}(\mathbf{y}_{s}|\mathbf{x}_{s}), \tag{6}\]
where \(\mathbf{x}\) denotes the input and \(\mathbf{y}\) denotes the output. Therefore, the goal of domain adaptation is to find a transformation \(\Phi(\cdot)\), such that
\[p_{t}(\Phi(\mathbf{x}_{t}))=p_{s}(\Phi(\mathbf{x}_{s})). \tag{7}\]
Both fine-tuning and prompt tuning can be regarded as approaches to approximating the transformation \(\Phi(\cdot)\). Fine-tuning adjusts all parameters of the pre-trained model to fit \(\Phi\), whereas prompt tuning introduces additional prompt parameters to fit \(\Phi\). From Table 1, we can see that full fine-tuning achieves satisfactory performance by fitting all model parameters. In contrast, the static prompt shows inflexibility in mitigating the domain gap, suffering from the noise in the target domain.
We also provide an intuitive analysis of this phenomenon. Specifically, Figure 3 shows the t-SNE [42] visualization of point cloud features extracted with Point-MAE [33] on the test sets of three datasets: (a) ModelNet40 [47], (b) ScanObjectNN [41], and (c) ShapeNetPart [50]. It reflects the downstream task data distribution (_i.e_., the target domain) to some degree. As shown in the figure, on synthetic datasets like ModelNet40 and ShapeNetPart, instances from the same class tend to form relatively clear and tight clusters. Quite differently, on the real scanned dataset ScanObjectNN, instances from the same class scatter into many sub-clusters in the feature space, indicating a mixture of various sub-distributions corresponding to different sub-modes. Our intuition is that synthetic datasets like ModelNet40 and ShapeNetPart contain complete, uniform, and relatively clean point clouds, such as the airplanes shown in Figures 3(a) and 3(c). In contrast, ScanObjectNN consists of real scanned point clouds with varying types of missing or noisy points, making up different sub-modes within the same class. For example, Figure 3(b) shows two
Table 1: Classification accuracy (%) for different tuning strategies. All experiments were conducted based on a pre-trained Point-MAE [33] model, and the simple rotation augmentation from ACT [11] was employed on the ScanObjectNN [41] dataset. '#TP' denotes the number of trainable parameters (in millions). The last three columns are the three variants of ScanObjectNN.

| Tuning Strategy | #TP (M) | ModelNet40 | OBJ_BG | OBJ_ONLY | PB_T50_RS |
|---|---|---|---|---|---|
| Full Fine-tuning | 22.10 | 93.8 | 92.12 | 92.01 | 88.41 |
| Only Head Tuning | 0.27 | 93.2 | 87.40 | 87.13 | 80.33 |
| VPT-Shallow [23] | 0.28 | 93.4 | 87.61 | 89.04 | 80.99 |
| VPT-Deep [23] | 0.36 | 93.6 | 89.98 | 90.19 | 83.96 |
Figure 3: The t-SNE [42] visualization of point cloud features extracted with Point-MAE [33] on three datasets: (a) ModelNet40, (b) ScanObjectNN, and (c) ShapeNetPart. Different from synthetic datasets (_e.g_. ModelNet40 and ShapeNetPart) with clean and compact cluster structures, in real-world datasets (_e.g_. ScanObjectNN), instances within the same category can present various sub-modes (_i.e_. sub-clusters scattered in the embedding space) because real-world point clouds contain varying types of missing or noisy points.
kinds of chairs: one is mostly missing, while the other is more complete. Since static prompt tuning fails to capture the various sub-modes in the real-world data distribution, this highlights the necessity of an adaptive or dynamic prompting strategy.
### Instance-aware Dynamic Prompt Tuning
To address the aforementioned issues of static prompt tuning, we propose **I**nstance-aware **D**ynamic **P**rompt **T**uning (**IDPT**). Figure 2 shows the pipeline of IDPT.
#### 3.3.1 Dynamic Prompt Generation Module
To capture the various sub-modes existing in real-world data and enhance robustness against noises (_e.g_., different types of missing or noisy points), we utilize EdgeConv [45] at the patch level to perceive local point cloud shapes at a larger scale. Specifically, as shown in Figure 2, the point patch tokens \(\mathbf{E}_{N-1}\) are processed by three EdgeConv layers to generate patch features at three different scales. Then, the multi-scale patch features are concatenated and fed into a linear layer, followed by max pooling, to generate an instance-aware dynamic prompt \(\mathbf{P}_{N-1}\):
\[\mathbf{P}_{N-1}=\varphi_{P}(\mathbf{E}_{N-1}). \tag{8}\]
\(\varphi_{P}(\cdot)\) denotes the dynamic prompt generation module.
Next, we forward \(\mathbf{P}_{N-1}\) together with \(\mathbf{E}_{N-1}\) to the last transformer layer \(f_{N}\):
\[[\mathbf{c}_{N};\mathbf{P}_{N};\mathbf{E}_{N}]=f_{N}([\mathbf{c}_{N-1};\mathbf{P}_{N-1};\mathbf{E}_{N- 1}]). \tag{9}\]
Finally, we concatenate [CLS] token \(\mathbf{c}_{N}\), prompt token \(\mathbf{P}_{N}\), and patch tokens \(\mathbf{E}_{N}\) together before feeding them into a task head. The final prediction is made by:
\[\mathbf{y}=g_{h}([\mathbf{c}_{N};\mathbf{P}_{N};\mathbf{E}_{N}]). \tag{10}\]
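The following sketch shows one way to implement the dynamic prompt generation module \(\varphi_{P}\) of Eq. (8) and the forward pass of Eqs. (9) and (10), reusing `layers`, `d`, and `num_classes` from the earlier sketches. The neighborhood size `k`, the feature-space kNN graph (as in DGCNN [45]), and the layer widths are assumptions, since the paper does not pin these details down here.

```python
def knn_indices(x, k):
    """x: (B, m, d). Indices (B, m, k) of the k nearest patches in
    feature space, self excluded."""
    dist = torch.cdist(x, x)                                  # (B, m, m)
    return dist.topk(k + 1, largest=False).indices[..., 1:]

class EdgeConv(nn.Module):
    def __init__(self, d_in, d_out, k=4):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * d_in, d_out), nn.GELU())

    def forward(self, x):                                     # x: (B, m, d)
        B, m, d_in = x.shape
        idx = knn_indices(x, self.k)                          # (B, m, k)
        nbr = torch.gather(
            x.unsqueeze(1).expand(B, m, m, d_in), 2,
            idx.unsqueeze(-1).expand(B, m, self.k, d_in))     # neighbour feats
        ctr = x.unsqueeze(2).expand_as(nbr)
        edge = torch.cat([ctr, nbr - ctr], dim=-1)            # edge features
        return self.mlp(edge).max(dim=2).values               # max over nbrs

class DynamicPromptModule(nn.Module):
    """phi_P in Eq. (8): 3 stacked EdgeConvs -> concat -> linear -> max pool."""
    def __init__(self, d):
        super().__init__()
        self.convs = nn.ModuleList([EdgeConv(d, d) for _ in range(3)])
        self.proj = nn.Linear(3 * d, d)

    def forward(self, E):                                     # E = E_{N-1}
        feats, x = [], E
        for conv in self.convs:
            x = conv(x)
            feats.append(x)                                   # multi-scale
        fused = self.proj(torch.cat(feats, dim=-1))           # (B, m, d)
        return fused.max(dim=1, keepdim=True).values          # P_{N-1}: (B,1,d)

pm = DynamicPromptModule(d)
idpt_head = nn.Linear(3 * d, num_classes)

def forward_idpt(E0, c0):
    tokens = torch.cat([c0, E0], dim=1)
    for f in layers[:-1]:                                     # frozen f_1..f_{N-1}
        tokens = f(tokens)
    c, E = tokens[:, :1], tokens[:, 1:]
    P = pm(E)                                                 # Eq. (8)
    out = layers[-1](torch.cat([c, P, E], dim=1))             # Eq. (9)
    c_N, P_N, E_N = out[:, 0], out[:, 1], out[:, 2:]
    feat = torch.cat([c_N, P_N, E_N.max(dim=1).values], dim=-1)
    return idpt_head(feat)                                    # Eq. (10)
```

Because the kNN graph is recomputed from the current features on every forward pass, the prompt is recomputed per instance, which is what makes the prompt "dynamic" rather than a shared learned constant.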
#### 3.3.2 Prompt Insert Position
Perceiving various sub-modes in real-world point cloud data relies on high-level semantic information. As higher (or deeper) transformer layers grasp global semantic information (_e.g_. density, shape, or categorical information) better, IDPT prefers to insert prompts at deeper transformer layers to ensure an accurate perception of point cloud semantics. In particular, we found that inserting the dynamic prompt before the last transformer layer yields robust performance and also enjoys decent efficiency. We provide detailed quantitative analysis in § 4.3.4.
## 4 Experiments
We evaluated the performance of our proposed Instance-aware Dynamic Prompt Tuning (IDPT) on classification, few-shot learning, and segmentation tasks. Then, we discuss the benefits of our tuning strategy and architecture in the ablation study section. We used three classic pre-trained models (Point-BERT [51], Point-MAE [33], and ACT [11]) as our baselines. Notably, IDPT is a universal paradigm that can be applied to any pre-trained point cloud model.
### Experiment Settings
To ensure a fair comparison, we used the same experimental settings as the default fine-tuning method for each baseline. This involves freezing the weights of the pre-trained point cloud model and updating only the parameters of the prompt module and the head during downstream task training. All experiments were conducted on a single GeForce RTX 3090 (24 GB). We explored the performance of the simple rotation augmentation from ACT [11] on the ScanObjectNN [41] dataset, denoted by \(\dagger\) in our tables. For all other experiments, default data augmentation was used. We provide segmentation results and additional ablation results in the **Appendix** due to space limitations.
### Prompt Tuning on Downstream Tasks
#### 4.2.1 Object Classification on Real-World Dataset
In the study of point cloud pre-training models, it is common practice to conduct pre-training on the ShapeNet [8] dataset, which typically contains only clean point clouds and assumes that all point clouds are identically distributed. However, in reality, point clouds often suffer from issues such as noise and missing points, resulting in a diverse distribution. We first assess IDPT's performance on the ScanObjectNN [41] dataset, which consists of about 15K point cloud samples in 15 categories. These objects are scans of indoor scene data, usually cluttered with background and occluded by other objects.
We conducted experiments on three variants of ScanObjectNN [41] (OBJ_BG, OBJ_ONLY, and PB_T50_RS). The results are shown in Table 2; \(\dagger\) indicates that the pre-trained model used the simple rotational augmentation of ACT during fine-tuning or prompt tuning, and its absence indicates the default augmentation method. We observe that: (i) IDPT on Point-MAE\({}^{\dagger}\) achieves state-of-the-art (SOTA) performance. In comparison to the current state-of-the-art method ACT, we achieve gains of 0.34%, 1.21%, and 0.3%, respectively, on the three variants of ScanObjectNN, while utilizing only 7% of its trainable parameters. (ii) IDPT outperforms full fine-tuning in most cases with fewer trainable parameters. These results demonstrate the excellent performance of our method on real scanned point clouds with diverse data distributions. We believe this is due to the introduction of a semantic prior of real point cloud data on the one hand, and fewer trainable parameters mitigating overfitting on the other.
#### 4.2.2 Object Classification on Synthetic Dataset
We evaluate our pre-trained model on the ModelNet40 [47] dataset for object classification. ModelNet40 includes 12,311 clean 3D CAD models covering 40 categories. Each point cloud is complete, uniform, and noise-free, and all point clouds in the dataset are independently and identically distributed. We follow standard protocols to split ModelNet40 into 9,843 instances for the training set and 2,468 for the testing set. Standard random scaling and random translation are applied for data augmentation during training. For fair comparison, we also use the standard voting method [31] during testing.
As shown in Table 3, Point-MAE with IDPT achieves state-of-the-art performance with an accuracy of 94.4%. This represents a 0.6% improvement compared to fine-tuning. Additionally, other pre-trained models with IDPT, such as Point-BERT and ACT, demonstrate certain improvements compared to full fine-tuning. These results suggest that the incorporation of semantic priors of each instance can yield significant improvements, even when the downstream task dataset is identically distributed.
#### 4.2.3 Few-shot Learning
We conducted few-shot learning experiments on ModelNet40 using the \(n\)-way, \(m\)-shot setting, following previous works [33, 51, 54]. The results for \(n\in\{5,10\}\) and \(m\in\{10,20\}\) are presented in Table 4. IDPT achieves performance gains in most cases compared to full fine-tuning, demonstrating its effectiveness in few-shot learning as well.
### Ablation Study

In this subsection, we conduct ablation experiments on two variants of ScanObjectNN [41] (OBJ_BG and OBJ_ONLY), and we report the average results of 10 repeated experiments.
#### 4.3.1 Advantages of IDPT over Other Tuning Strategies
In order to demonstrate the superiority of our proposed instance-aware dynamic prompt tuning over other tuning strategies (fine-tuning, VPT [23], and Adapter [20]), we conducted extensive ablation studies, as shown in Table 5. The specific tuning strategies are as follows: (A) Baseline (we fix the pre-trained model and tune only the head), (B) VPT-Shallow [23], (C) VPT-Deep [23], (D) Adapter [20], (E) our IDPT, and (F) Fine-tuning.
Our IDPT tuning strategy outperforms all other tuning strategies, achieving the highest performance while utilizing fewer trainable parameters. Compared to the baseline without prompts, we observe improvements of 5.08% and 5.06% on the two variants of ScanObjectNN, respectively. Our strategy also significantly outperforms traditional static prompt tuning and the classic Adapter. Additionally, our method demonstrates a clear advantage over fine-tuning, as it greatly reduces the number of trainable parameters while still improving performance. Overall, these experiments clearly demonstrate that our IDPT tuning strategy is highly effective.
#### 4.3.2 The Effect of the Number of Trainable Parameters and PM Structure
We conducted experiments with varying numbers of trainable parameters to evaluate the effectiveness of our models. In VPT-Deep, we adjusted the number of trainable parameters by controlling the number of prompts in each layer's input. Meanwhile, in IDPT, we experimented with various network structures, including MLPs, graph convolutions (EdgeConv), and Transformer layers; a sketch of the simplest variant follows this paragraph. Our experimental results, shown in Table 6, demonstrate the following: (i) Using a single-layer MLP in IDPT results in significant improvements of 4.03% and 3.85% on OBJ_BG and OBJ_ONLY, respectively, compared to the baseline without prompts. These results suggest that introducing semantic priors from the downstream task data can be highly effective. (ii) Increasing the number of parameters in VPT results in only limited performance improvements. This shows that merely increasing the number of parameters without introducing a semantic prior yields limited gains. (iii) For IDPT, EdgeConv proved to be an effective prompt module. Compared to the MLP and Transformer, EdgeConv focuses more on local neighborhood information, allowing it to better perceive the specific shape of the point cloud and provide a better representation of the downstream data's semantic prior.
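As a concrete illustration of the ablated structures, a single-layer-MLP prompt module (the simplest row of Table 6) would replace the EdgeConv stack in the sketch of § 3.3.1 as follows; this is an assumed minimal form, not the paper's exact architecture.

```python
import torch.nn as nn

class MLPPromptModule(nn.Module):
    """Single-layer-MLP variant of the prompt module (cf. Table 6)."""
    def __init__(self, d):
        super().__init__()
        self.fc = nn.Linear(d, d)

    def forward(self, E):                      # E: (B, m, d) patch tokens
        # Per-patch projection, then max pooling to one instance-level prompt.
        return self.fc(E).max(dim=1, keepdim=True).values   # (B, 1, d)
```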
#### 4.3.3 The Effect of Head inputs
We investigated the impact of the downstream task head's input features, which include the prompt token, the [CLS] token, and the max pooling of point patch tokens. The results are shown in Figure 4. We found that the highest performance was achieved when all three features were included (d): the prompt token, the [CLS] token, and the point patch tokens.
Table 6: Effect of the number of trainable parameters and PM structure.

| Tuning Strategy | Trainable Parameters Type | #TP (M) | OBJ_BG | OBJ_ONLY |
|---|---|---|---|---|
| w/o prompt | Head | 0.27 | 87.40 | 87.13 |
| VPT-Deep | 10 prompts + Head | 0.36 | 89.98 | 90.19 |
| VPT-Deep | 28 prompts + Head | 0.52 | 90.02 | 90.53 |
| VPT-Deep | 156 prompts + Head | 1.70 | 90.19 | 90.53 |
| IDPT | PM (1-layer MLP) + Head | 0.52 | 91.43 | 90.98 |
| IDPT | PM (3-layer MLP) + Head | 1.49 | 91.54 | 91.34 |
| IDPT | PM (1 EdgeConv) + Head | 0.81 | 91.77 | 91.67 |
| IDPT | PM (2 EdgeConv) + Head | 1.25 | 91.95 | 91.67 |
| IDPT | PM (3 EdgeConv) + Head | 1.70 | **92.48** | **92.19** |
| IDPT | PM (1 Transformer layer) + Head | 2.14 | 92.03 | 91.22 |
Figure 4: The effect of different Head inputs.
Table 5: Influence of different tuning strategies. Our IDPT tuning strategy achieved the best performance and efficiency compared to other tuning strategies.

| Index | Model | VPT | IDPT | Adapter | #TP (M) | OBJ_BG | OBJ_ONLY |
|---|---|---|---|---|---|---|---|
| A (Baseline) | Fixed | ✘ | ✘ | ✘ | 0.27 | 87.40 | 87.13 |
| B (VPT-Shallow) | Fixed | Shallow | ✘ | ✘ | 0.28 | 87.61 | 89.04 |
| C (VPT-Deep) | Fixed | Deep | ✘ | ✘ | 0.36 | 89.98 | 90.19 |
| D (Adapter) | Fixed | ✘ | ✘ | ✓ | 2.04 | 89.33 | 89.90 |
| E (IDPT) | Fixed | ✘ | ✓ | ✘ | 1.70 | **92.48** | **92.19** |
| F (Fine-tuning) | Trainable | ✘ | ✘ | ✘ | 22.10 | 92.12 | 92.01 |
When using only the dynamic prompt token (b), the performance was still strong and second only to the previous case. However, removing our prompt token (c) resulted in a slight decline in performance. These results indicate that the dynamic prompt token plays a critical role in guiding the downstream task fitting, as it contains specific semantic information about the task data.
Although omitting the prompt feature in the head results in a performance decline, there is still a significant improvement compared to traditional static prompts, as shown in Table 6. This suggests that our dynamic prompt is effective in aligning data from different distributions.
#### 4.3.4 Effect of Prompt Insert Position
We analyzed the impact of inserting our dynamic prompt at various depths of the Transformer. The experimental results, shown in Figure 5 (a) and (b), indicate that adding the prompt to deeper layers results in better performance. This can be attributed to the fact that our prompt generation module utilizes patch tokens to perceive the semantic prior of each point cloud, and deeper patch tokens comprise more comprehensive semantic information.
Additionally, we found that applying the prompt to all layers using a prompt module with shared parameters in the "all_layer" setting did not produce satisfactory results. This is due to the fact that the shared prompt module can cause degradation in the semantic prior representation, and using independent parameters for each layer would result in an unacceptable increase in the number of parameters. As a result, we decided to add the prompt to the input of the final layer of the Transformer.
#### 4.3.5 Qualitative analysis of the ability to approximate the transformation \(\Phi(\cdot)\) in DA
We analyzed the effectiveness of various tuning strategies in approximating the transformation function \(\Phi(\cdot)\) by conducting a qualitative analysis of their fitting ability. The strategies we evaluated include (a) the pre-trained model, (b) VPT-Deep, and (c) IDPT. To visualize the input (\(\mathbf{E}_{N-1}\)) and output (\(\mathbf{E}_{N}\)) features of all point patches at the \(N\)-th layer of the Transformer on ScanObjectNN, we utilized a pre-trained model based on Point-MAE. The results are presented in Figure 6, which displays the visualization outcomes of the three tuning strategies.
The ability of different strategies to approximate the function \(\Phi(\cdot)\) with the Transformer model varies. The pre-trained model (a) uses the Transformer with fixed parameters and shows the worst performance. VPT-Deep (b), which adds trainable static prompt parameters to all layers' inputs to approximate \(\Phi(\cdot)\), yields a better input feature distribution \(\mathbf{E}_{N-1}\) at the \(N\)-th layer than (a). However, the output features \(\mathbf{E}_{N}\) are still scattered, indicating a poorer approximation of \(\Phi(\cdot)\). IDPT takes a different approach by introducing the semantic prior of each instance and adding dynamic prompts to the input space of the \(N\)-th layer to approximate \(\Phi(\cdot)\). Although the input features \(\mathbf{E}_{N-1}\) at the \(N\)-th layer are as scattered as in (a), the output features \(\mathbf{E}_{N}\) are tightly clustered for the same category after concatenating \(\mathbf{E}_{N-1}\) with our prompt and passing them through the same network as (a). This indicates that our strategy can effectively align different distributions and provides the best approximation of \(\Phi(\cdot)\).
Figure 5: The effect of prompt insert position. \(E_{i}\) means inserting our dynamic prompt into the input of the \(i\)-th layer of the pre-trained Transformer. 'All Layers' means inserting our dynamic prompt into each layer of the Transformer using a prompt module with shared parameters.
Figure 6: The t-SNE visualization of the point patch features \(\mathbf{E}_{N-1}\) and \(\mathbf{E}_{N}\) extracted from the test set of ScanObjectNN (PB_T50_RS) using a pre-trained Point-MAE with different tuning strategies. This visualization partly reflects how well the transformation function \(\Phi(\cdot)\) is approximated by each tuning strategy.
## 5 Conclusion
In this paper, we examined the introduction of prompt tuning into pre-trained point cloud models as a strategy for reducing the number of trainable parameters in downstream tasks. Initially, we adapted visual prompt tuning [23] to point cloud pre-trained models and found that static prompt tuning cannot work well on real point clouds due to their distribution diversity. To address this issue, we proposed an instance-aware dynamic prompt tuning (IDPT) that produces a universal prompt for both synthetic and real point clouds and is adaptive to the instance input. Extensive experimentation validates our IDPT strategy as a universal and effective solution.
## Appendix A More Experimental Analysis
### Part segmentation Performance
For part segmentation, we follow previous work [11, 33] and add prompts to the inputs of the 3rd, 7th, and 11th layers and to the task head. Since we empirically observed that a single-layer MLP achieves performance comparable to the three-layer EdgeConv architecture on the segmentation task, we adopt a simple single-layer MLP as the dynamic prompt generation module at each layer to reduce the number of trainable parameters.
According to the experimental results in Table 7, our IDPT outperforms both the baseline without prompts and the static prompting strategy, VPT [23], verifying the effectiveness of our dynamic prompting strategy. We can also see that prompt tuning strategies still underperform state-of-the-art methods (ACT [11] and Point-MAE [33]) with full fine-tuning on segmentation. We attribute this gap to the difficulty of fine-grained understanding of point clouds, which makes the downstream adaptation of pre-trained models with limited tunable parameters a challenging task. Fortunately, our IDPT, with its idea of instance-aware dynamics, provides an effective solution to mitigate the performance gap. We hope it can provide some inspiration towards better parameter-efficient tuning strategies for fine-grained point cloud tasks.
### Effect of Prompt Number
In this section, we investigate the impact of the number of prompts in IDPT on classification tasks. By default, we use three EdgeConv [45] layers and one MLP layer to extract the semantic information \(\mathbf{E}_{P}\in\mathbb{R}^{m\times d}\) from all patches and then use max pooling along the feature dimension to aggregate the semantic information of all patches into the prompt \(\mathbf{P}_{N-1}\in\mathbb{R}^{1\times d}\).
To generate multiple representative prompts, we replace the max pooling operation along the feature dimension with a top-\(K\) operation, resulting in \(K\) prompts \(\mathbf{P}^{\prime}_{N-1}\in\mathbb{R}^{K\times d}\). We then concatenate \(\mathbf{P}^{\prime}_{N-1}\), \(\mathbf{c}_{N-1}\), and \(\mathbf{E}_{N-1}\) and feed them to the last transformer layer \(f_{N}\):
\[[\mathbf{c}_{N};\mathbf{P_{N}^{{}^{\prime}}};\mathbf{E}_{N}]=f_{N}([\mathbf{c}_{N-1};\mathbf{P_{N -1}^{{}^{\prime}}};\mathbf{E}_{N-1}]). \tag{11}\]
For the classification head, we perform max pooling over \(\mathbf{P}^{\prime}_{N}\) to obtain \(\mathbf{P}_{N}\in\mathbb{R}^{1\times d}\) as the prompt-related input.
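As a minimal sketch of this variant (an assumption about the exact tensor layout, consistent with the shapes quoted above), the top-\(K\) change touches only the two pooling steps:

```python
def topk_prompts(E_P, K):
    """E_P: (B, m, d) per-patch semantic features. Returns (B, K, d):
    the K largest activations per feature channel, replacing max pooling."""
    return E_P.topk(K, dim=1).values          # P'_{N-1}, fed into Eq. (11)

def pool_prompts(P_N_prime):
    """(B, K, d) -> (B, 1, d): max pool back to a single prompt for the head."""
    return P_N_prime.max(dim=1, keepdim=True).values
```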
We analyze the impact of different numbers of prompts on classification tasks. Figure 7 presents the experimental results on two variants of ScanObjectNN. The results indicate that simply increasing the number of prompts does not contribute to performance gains. Therefore, we set a single prompt in IDPT to improve efficiency.
### Inserting Independent Prompt Modules to All Layers
In Figure 5 of the main paper, we demonstrated the effect of inserting prompts into multiple layers of the pre-trained point cloud model. Note that we shared the parameters of the prompt generation module among the multiple layers in Figure 5, in the spirit of parameter-efficient tuning.
Table 7: Part segmentation results on the ShapeNetPart dataset. The mean IoU across all categories, \(\mathrm{mIoU}_{C}\) (%), and the mean IoU across all instances, \(\mathrm{mIoU}_{I}\) (%), are reported.

| Methods | #TP (M) | \(\mathrm{mIoU}_{C}\) | \(\mathrm{mIoU}_{I}\) |
|---|---|---|---|
| PointNet [34] | - | 80.39 | 83.7 |
| PointNet++ [35] | - | 81.85 | 85.1 |
| DGCNN [45] | - | 82.33 | 85.2 |
| Transformer [51] | 27.09 | 83.42 | 85.1 |
| OcCo [51] | 27.09 | 83.42 | 85.1 |
| MaskPoint [27] | - | 84.60 | 86.0 |
| Point-BERT [51] | 27.09 | 84.11 | 85.6 |
| Point-MAE [33] | 27.06 | 84.19 | 86.1 |
| ACT [11] | 27.06 | **84.66** | **86.1** |
| Point-MAE w/o prompt | 5.24 | 83.58 | 85.3 |
| Point-MAE w/ VPT | 5.35 | 83.64 | 85.4 |
| Point-MAE w/ IDPT | 5.69 | 83.79 | 85.7 |
Table 8: The effect of inserting independent prompt modules into all layers. PM indicates the dynamic prompt generation module.

| Trainable Parameters Type | #TP (M) | OBJ_BG | OBJ_ONLY |
|---|---|---|---|
| 1 PM + Head | 1.70 | 92.48 | 92.19 |
| 12 PM + Head | 16.34 | **92.60** | **92.22** |
Figure 7: The effect of different numbers of prompts.
Nevertheless, it would be interesting to see the results of inserting independent prompt generation modules into different layers. In particular, here we provide the results of all-layer insertion, as shown in Table 8. The results indicate that incorporating a parameter-independent prompt generation module at every layer brings only marginal improvement with a significant increase in trainable parameters, deviating from the goal of parameter-efficient tuning. Given the empirical observations in Figure 5 of the main paper and Table 8, we only insert the dynamic prompt generation module into the last layer of the pre-trained model.
### Convergence of Different Tuning Strategies
In this section, we study how the performance of different tuning strategies changes over the whole training process. The accuracy curves of fine-tuning, VPT, and IDPT on two datasets (_i.e_., ModelNet40 and ScanObjectNN) are illustrated in Figure 8.
As shown in Figure 8, our IDPT strategy achieves significant improvements over VPT. The performance of IDPT is competitive with fine-tuning on most datasets. Moreover, we can see that IDPT yields faster convergence by incorporating the prior semantic information of instances, revealing the merit of instance-aware dynamics for model adaptation.
Figure 8: The classification accuracy curves of fine-tuning, VPT, and our IDPT strategy on two datasets.
Figure 9: Different sub-modes in each category of ScanObjectNN
### Demonstration of Sub-modes in Real-world Scanned Data
Due to the limitations of scanning techniques, various kinds of missing or noisy points are prevalent in real-world point clouds, corresponding to different sub-modes in the data distribution. Such inconsistent noise threatens the robustness of prompt-based adaptation, especially for static prompt strategies like VPT [23]. Here we give some point cloud samples to facilitate an intuitive understanding of _what different sub-modes look like_. Specifically, Figure 9 presents different missing types w.r.t. different categories in the ScanObjectNN dataset. We use sub_mode1 and sub_mode2 to differentiate missing types. For each scanned object, we show its projection images from three different viewpoints (_i.e_. view1, view2, and view3) to simulate stereoscopy.
---

# Chiral Magnets from String Theory

**Authors:** Yuki Amari, Muneto Nitta
**Published:** 2023-07-20 | **Link:** http://arxiv.org/abs/2307.11113v3

**arXiv abstract:** Chiral magnets with the Dzyaloshinskii-Moriya (DM) interaction have received quite an intensive focus in condensed matter physics because of the presence of a chiral soliton lattice (CSL), an array of magnetic domain walls and anti-domain walls, and magnetic skyrmions. In this paper, we realize chiral magnets in type-IIA/B string theory by using the Hanany-Witten brane configuration (consisting of D3, D5 and NS5-branes) and the fractional D2 and D6 branes on the Eguchi-Hanson manifold. In both cases, we put constant non-Abelian magnetic fluxes on flavor D-branes, turning them into magnetized D-branes. The \(O(3)\) sigma model with an easy-axis or easy-plane potential and the DM interaction is realized on the worldvolume of the color D-branes. The ground state is the ferromagnetic (uniform) phase and the color D-brane is straight when the DM interaction is small compared with the scalar mass. However, when the DM interaction is larger, the uniform state is no longer stable and the ground state is inhomogeneous: the CSL phases and the helimagnetic phase. In this case, the color D-brane is no longer straight but is snaky (zigzag) when the DM interaction is smaller (larger) than a critical value. A magnetic domain wall in the ferromagnetic phase is realized as a kinky D-brane. We further construct magnetic skyrmions in the ferromagnetic phase, realized as D1-branes (fractional D0-branes) in the former (latter) configuration. We see that the host D2-brane is bent around the position of a D0-brane as a magnetic skyrmion. Finally, we construct, in the ferromagnetic phase, domain-wall skyrmions, that is, composite states of a domain wall and skyrmions, and find that the domain wall is no longer flat in the vicinity of the skyrmion. Consequently, a kinky D2-brane worldvolume is pulled or pushed in the vicinity of the D0-brane depending on the sign of the skyrmion topological charge.

# Chiral Magnets from String Theory
###### Abstract
Chiral magnets with the Dzyaloshinskii-Moriya (DM) interaction have received quite intensive focus in condensed matter physics because of the presence of a chiral soliton lattice (CSL), an array of magnetic domain walls and anti-domain walls, and magnetic skyrmions, both of which are important ingredients in current nanotechnology. In this paper, we realize chiral magnets in type-IIA/B string theory by using the Hanany-Witten brane configuration (consisting of D3, D5 and NS5-branes) and the fractional D2 and D6 branes on the Eguchi-Hanson manifold. In both cases, we put constant non-Abelian magnetic fluxes on the higher dimensional (flavor) D-branes, turning them into magnetized D-branes. The \(O(3)\) sigma model with an easy-axis or easy-plane potential and the DM interaction is realized on the worldvolume of the lower dimensional (color) D-branes. The ground state is the ferromagnetic (uniform) phase and the color D-brane is straight when the DM interaction is small compared with the scalar mass. However, when the DM interaction is larger, the uniform state is no longer stable and the ground state is inhomogeneous: the CSL phases and the helimagnetic phase. In this case, the color D-brane is no longer straight but is snaky (zigzag) when the DM interaction is smaller (larger) than a critical value. A magnetic domain wall in the ferromagnetic phase is realized as a kinky D-brane. We further construct magnetic skyrmions in the ferromagnetic phase, realized as D1-branes (fractional D0-branes) in the former (latter) configuration. We see that the host D2-brane is bent around the position of a D0-brane as a magnetic skyrmion. Finally, we construct, in the ferromagnetic phase, domain-wall skyrmions, that is, composite states of a domain wall and skyrmions, and find that the domain wall is no longer flat in the vicinity of the skyrmion. Consequently, a kinky D2-brane worldvolume is pulled or pushed in the vicinity of the D0-brane depending on the sign of the skyrmion topological charge.
## 1 Introduction
Recently, chiral magnets with the Dzyaloshinskii-Moriya (DM) interaction [1; 2] have attracted great attention in condensed matter physics because of the presence of a chiral soliton lattice (CSL), an array of solitons or pairs of magnetic domain walls and anti-domain walls [3; 4; 5; 6; 7; 8], and magnetic skyrmions [9; 10]. In a certain parameter region of chiral magnets, a CSL is the ground state [3; 4; 5; 6; 7; 8], where the energy of a single soliton is negative and one-dimensionally modulated states have lower energy than a uniform state (ferromagnetic phase). On the other hand, magnetic skyrmions [9; 10] have received quite intensive focus due to their realization in the form of skyrmion lattices in the laboratory in chiral magnets [11; 12; 13] (see also Refs. [14; 15; 16; 17]) in addition to noncentrosymmetric magnets [18; 19; 20; 21], and their possible applications as components for data storage with low energy consumption [22]; see Ref. [23] for a review. As a combination of a domain wall and a skyrmion, domain-wall skyrmions were first proposed in quantum field theory [24; 25]
(see also Refs. [26; 27; 28])¹ and have recently been observed experimentally in chiral magnets [36; 37; 38] (see also [39]). A first step toward treating chiral magnetic domain walls theoretically is given in Refs. [40; 17; 41]. Apart from domain walls and skyrmions, many studies have been devoted to various topological objects such as monopoles [42; 43], Hopfions [44] and instantons [45]; see Ref. [46] for a review.
Footnote 1: The term “domain wall skyrmions” was first introduced in Ref. [29] in which Yang-Mills instantons in the bulk are 3D skyrmions inside a domain wall. Domain-wall skyrmions in 3+1 dimensions for which 3D skyrmions are 2D skyrmions on a domain wall were proposed in field theory [30; 31; 32; 33; 34] and are recently realized in QCD [35].
In spite of such great interest in condensed matter physics and materials science, chiral magnets have not received much attention in high energy physics, since the DM interaction is not Lorentz invariant. One exception would be the finding of Bogomolnyi-Prasad-Sommerfield (BPS) magnetic skyrmions [47; 48; 49].
In this paper, we give a possible realization of chiral magnets in string theory. One of the key points is the DM interaction realized as a background \(SU(2)\) gauge field [47; 48]. The other is the Hanany-Witten brane configuration [50; 51] and magnetized D-branes [52; 53; 54; 55; 56; 57; 58]. We first formulate the \(O(3)\) model, or the \(\mathbb{C}P^{1}\) model, with the DM interaction in terms of a \(U(1)\times SU(2)\) gauge theory, where the \(U(1)\) gauge field is auxiliary and the \(SU(2)\) gauge field is a background field. Next, we embed this gauge theory into two kinds of D-brane configurations: one is the Hanany-Witten type brane configuration in type-IIB string theory, composed of two NS5-branes stretched by D3-branes and orthogonal D5-branes [50; 51], and the other is a D2-D6-brane bound state on the Eguchi-Hanson manifold in type-IIA string theory. A chiral magnet is realized on the worldvolume of the lower dimensional D-branes: D3-branes in the former and D2-branes in the latter.
The potential is classified into an easy-axis or easy-plane potential. The ground state is either a ferromagnetic² (uniform) phase or an inhomogeneous phase when the DM interaction is smaller or larger than a critical value, respectively. The inhomogeneous ground states are further classified into three cases: two kinds of CSL phases with the easy-axis and easy-plane potentials, and a linearly modulated phase (with no potential) at the boundary between these two CSL phases. In these different phases, the lower dimensional color D-branes behave differently in the brane configurations. In the ferromagnetic phase, the color D-brane is straight in the vacuum, and a magnetic domain wall as an excited state is realized as a kinky D-brane [59; 60; 61; 62; 63]. In the CSL phase with the easy-axis potential, the color D-brane is _snaky_ (an array of kinky D-branes and anti-kinky D-branes) between the two separated flavor D-branes. On the other hand, in the CSL phase with the easy-plane potential, the color D-brane is rather _zigzag_ between the two flavor D-branes. In this case, each separated (anti-)domain wall is nontopological. Between these two CSL phases, the ground state is helimagnetic, in which the color D-brane modulates as a sine function.
Footnote 2: The \(O(3)\) model with the Lorentz invariant kinetic term that we are considering is relevant to rather antiferromagnets than ferromagnets, and thus it should be called antiferromagnetic strictly speaking. However, in this paper, we call the uniform state “ferromagnetic” for simplicity.
We further study magnetic skyrmions. They are realized as D1-branes in the Hanany-Witten type brane configuration as an analog of vortices [64]. On the other hand, in the D2-D6-ALE system, they are fractional D0-branes, that is, D2-branes two of whose worldvolume directions wrap around the \(S^{2}\) cycle blowing up the \(\mathbb{Z}_{2}\) orbifold singularity. We show that the host D2(D3)-brane is bent at the position of a D0(D1)-brane as a magnetic skyrmion. Finally, we construct domain-wall skyrmions [17; 24; 25; 65] in the ferromagnetic phase. Magnetic (anti-)skyrmions are realized as (anti-)sine-Gordon solitons in a magnetic domain wall, whose worldvolume theory is the sine-Gordon model with a potential term coming from the DM interaction [17]. The domain-wall worldvolume is no longer flat in the vicinity of the skyrmion [41]. Consequently, the color D-brane worldvolume, forming a kink for the magnetic domain wall, is pulled in the vicinity of the (anti-)skyrmion to either side of the domain wall depending on whether it is a skyrmion or an anti-skyrmion.
This paper is organized as follows. In Sec. 2 we formulate the \(O(3)\) model, or the \(\mathbb{C}P^{1}\) model, with the DM interaction as a \(U(1)\times SU(2)\) gauge theory. In Sec. 3, we present D-brane configurations for chiral magnets: the Hanany-Witten brane configuration and the D2-D6-ALE system. We put a non-Abelian magnetic flux breaking the \(SU(2)\) flavor symmetry to \(U(1)\), making the D5-branes in the former or the D6-branes in the latter magnetized. In Sec. 4, we construct a magnetic domain wall and the CSL ground states, forming a snaky D-brane and a zigzag D-brane in the former and latter brane configurations, respectively. In Sec. 5, we construct magnetic skyrmions and domain-wall skyrmions and discuss their D-brane configurations. Sec. 6 is devoted to a summary and discussion. In Appendix A, we give a derivation of the DM interaction from the gauged linear sigma model.
## 2 Chiral Magnets as Gauge Theory
In this section, we formulate the \(O(3)\) model describing Heisenberg magnets as a gauge theory for the cases without the DM term in Subsec. 2.1 and with the DM term in Subsec. 2.2.
### Heisenberg magnets from gauge theory
We start with a \(U(1)\) gauge theory with a gauge field \(a_{\mu}\) coupled with complex scalar fields written as \(\Phi^{T}=(\Phi_{1},\Phi_{2})\) and a real scalar field \(\Sigma\). The Lagrangian is
\[\mathcal{L} =-\frac{1}{4g^{2}}F_{\mu\nu}F^{\mu\nu}+\frac{1}{g^{2}}(\partial_{ \mu}\Sigma)^{2}+2|D_{\mu}\Phi|^{2}-V \tag{1}\] \[V =\frac{g^{2}}{2}(\Phi^{\dagger}\Phi-v^{2})^{2}+\Phi^{\dagger}( \Sigma\mathbf{1}_{2}-M)^{2}\Phi \tag{2}\]
with the gauge coupling \(g\), the vacuum expectation value (VEV) \(v\) of \(\Phi\), and the covariant derivative \(D_{\mu}\Phi=(\partial_{\mu}-ia_{\mu})\Phi\). Here, \(M\) is the mass matrix of \(\Phi\) given by \(M=\mathrm{diag}(m,-m)\) with a constant \(m\), where an overall diagonal constant can be eliminated by a redefinition of \(\Sigma\). This model can be made \(\mathcal{N}=2\) supersymmetric (SUSY) with eight supercharges by adding a complex scalar field \(\tilde{\Phi}\) and fermionic superpartners [66]: \((\Phi,\tilde{\Phi})\) with fermionic superpartners called Higgsinos form hypermultiplets, and \((a_{\mu},\Sigma)\) with a fermionic superpartner called a gaugino forms a gauge or vector multiplet.
In the strong coupling limit \(g^{2}\to\infty\), the kinetic terms of \(a_{\mu}\) and \(\Sigma\) disappear and they become auxiliary fields, which can be eliminated by their equations of motion:
\[a_{\mu}=\frac{i}{2v^{2}}(\partial_{\mu}\Phi^{\dagger}\cdot\Phi- \Phi^{\dagger}\partial_{\mu}\Phi),\quad\Sigma=\frac{1}{v^{2}}\Phi^{\dagger}M \Phi=\frac{mn_{3}}{v^{2}}. \tag{3}\]
Then, the model reduces to the \(\mathbb{C}P^{1}\) model with a potential term. By rewriting
\[\Phi^{T}=v(1,u)/\sqrt{1+|u|^{2}} \tag{4}\]
with a complex projective coordinate \(u\), the Lagrangian becomes
\[\mathcal{L}=\frac{2\partial_{\mu}u^{*}\partial^{\mu}u-4m^{2}|u|^{2 }}{(1+|u|^{2})^{2}}. \tag{5}\]
We have set \(v=1\) for simplicity. This model is known as the massive \(\mathbb{C}P^{1}\) model with the potential term
\[V=\frac{4m^{2}|u|^{2}}{(1+|u|^{2})^{2}} \tag{6}\]
which is the Killing vector squared corresponding to the isometry generated by \(\sigma_{3}\), and admits two discrete vacua, \(u=0\) and \(u=\infty\). This construction is known as a Kähler quotient.
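As a consistency check not spelled out in the text, substituting \(\Sigma\) from Eq. (3) with \(v=1\) into the potential of Eq. (2) indeed reproduces Eq. (6):

\[\Sigma=\frac{m(1-|u|^{2})}{1+|u|^{2}},\qquad\Sigma-m=\frac{-2m|u|^{2}}{1+|u|^{2}},\qquad\Sigma+m=\frac{2m}{1+|u|^{2}},\]

\[\Phi^{\dagger}(\Sigma\mathbf{1}_{2}-M)^{2}\Phi=\frac{(\Sigma-m)^{2}+(\Sigma+m)^{2}|u|^{2}}{1+|u|^{2}}=\frac{4m^{2}|u|^{2}\,(|u|^{2}+1)}{(1+|u|^{2})^{3}}=\frac{4m^{2}|u|^{2}}{(1+|u|^{2})^{2}}.\]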
Introducing a three-vector of scalar fields by
\[\mathbf{n}=\Phi^{\dagger}\boldsymbol{\sigma}\Phi \tag{7}\]
with the Pauli matrices \(\boldsymbol{\sigma}\), the Lagrangian can be rewritten in the form of the \(O(3)\) model:
\[\mathcal{L}=\frac{1}{2}\partial_{\mu}\mathbf{n}\cdot\partial^{ \mu}\mathbf{n}-m^{2}(1-n_{3}^{2}),\quad\mathbf{n}^{2}=1. \tag{8}\]
This model is known as a continuum limit of the (anti-ferromagnetic) Heisenberg model with an easy-axis potential \(V=m^{2}(1-n_{3}^{2})\).
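Explicitly, with the parametrization in Eq. (4), Eq. (7) gives the standard stereographic map (a step left implicit in the text), confirming that the potential in Eq. (8) coincides with Eq. (6):

\[\mathbf{n}=\frac{1}{1+|u|^{2}}\big(u+u^{*},\,-i(u-u^{*}),\,1-|u|^{2}\big),\qquad 1-n_{3}^{2}=\frac{(1+|u|^{2})^{2}-(1-|u|^{2})^{2}}{(1+|u|^{2})^{2}}=\frac{4|u|^{2}}{(1+|u|^{2})^{2}}.\]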
### Dzyaloshinskii-Moriya interaction
Now we are ready to introduce the DM term. We can achieve this by gauging the \(SU(2)\) flavor symmetry with a background gauge field [47; 48]. The Lagrangian is now a \(U(1)\times SU(2)\) gauge theory
\[\mathcal{L}=-\frac{1}{4g^{2}}F_{\mu\nu}F^{\mu\nu}+\frac{1}{g^{2}} (\partial_{\mu}\Sigma)^{2}+2|\mathcal{D}_{\mu}\Phi|^{2}-V \tag{9}\]
with a \(U(1)\times SU(2)\) covariant derivative
\[\mathcal{D}_{\mu}\Phi=\left(\partial_{\mu}-ia_{\mu}-\frac{i}{2}A_ {\mu}\right)\Phi \tag{10}\]
and an \(SU(2)\) background gauge field \(A_{\mu}=A_{\mu}^{a}\sigma_{a}\).
In the strong gauge coupling limit \(g^{2}\rightarrow\infty\), \(a_{\mu}\) and \(\Sigma\) become auxiliary fields as before. After eliminating these auxiliary fields \(a_{\mu}\) and \(\Sigma\) as in the previous subsection, we arrive at (see Appendix A for a derivation):
\[\mathcal{L}=\frac{1}{2}\left(D_{\mu}\mathbf{n}\right)^{2}-m^{2}(1-n_{3}^{2}) \tag{11}\]
with \(D_{\mu}\mathbf{n}=\partial_{\mu}\mathbf{n}+\mathbf{A}_{\mu}\times\mathbf{n}\). Here we took the temporal gauge \(\mathbf{A}_{0}=(0,0,0)\), and \(\mathbf{A}_{\mu=k}=(A_{k}^{1},A_{k}^{2},A_{k}^{3})\) are spatial components of a three vector with the \(SU(2)\) adjoint index for which the product \(\times\) is defined. The corresponding Hamiltonian density is
\[\mathcal{H}=\frac{1}{2}\left(D_{k}\mathbf{n}\right)^{2}+m^{2}(1-n_{3}^{2})=\frac{1}{2}\partial_{k}\mathbf{n}\cdot\partial_{k}\mathbf{n}+\mathbf{A}_{k}\cdot\left(\mathbf{n}\times\partial_{k}\mathbf{n}\right)+\frac{1}{2}\left(\mathbf{A}_{k}\times\mathbf{n}\right)^{2}+m^{2}(1-n_{3}^{2}) \tag{12}\]
where the second term gives the DM term
\[\mathcal{H}_{\text{DM}}\equiv\mathbf{A}_{k}\cdot\left(\mathbf{n}\times\partial_{k }\mathbf{n}\right) \tag{13}\]
and the third term gives an additional potential term. The DM term is also known as an effect of spin-orbit coupling (SOC) in condensed matter physics.
We employ the background \(SU(2)\) gauge field with the field strength \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-i[A_{\mu},A_{\nu}]\). Here, we consider a nonzero constant non-Abelian magnetic field
\[F_{12}=\kappa^{2}\sigma_{3}, \tag{14}\]
with a constant \(\kappa\). Because of this field strength, the flavor symmetry is explicitly broken to the \(U(1)\) subgroup generated by \(\sigma_{3}\). The gauge potential leading to the field strength given in Eq. (14) is for instance
\[A_{0}^{a}=(0,0,0),\qquad A_{1}^{a}=-\kappa(\cos\vartheta,-\sin\vartheta,0),\qquad A_{2}^{a}=-\kappa(\sin\vartheta,\cos\vartheta,0) \tag{15}\]
or
\[A_{\mu} = A_{\mu}^{a}\sigma_{a}=-\kappa(0,\cos\vartheta\sigma_{1}-\sin \vartheta\sigma_{2},\sin\vartheta\sigma_{1}+\cos\vartheta\sigma_{2}) \tag{16}\] \[= -\kappa\left(0,\left(\begin{array}{cc}0&e^{i\vartheta}\\ e^{-i\vartheta}&0\end{array}\right),\left(\begin{array}{cc}0&-ie^{i\vartheta }\\ ie^{-i\vartheta}&0\end{array}\right)\right)\]
with a constant \(\vartheta\). This parameter \(\vartheta\) corresponds to a mere gauge choice. Nevertheless, each \(\vartheta\) gives a different-looking term in the Lagrangian. In particular, the case of \(\vartheta=0\) is called the Dresselhaus SOC
\[A_{\mu}=A_{\mu}^{a}\sigma_{a}=-\kappa(0,\sigma_{1},\sigma_{2},0) \tag{17}\]
yielding the DM term in the form of
\[\mathcal{H}_{\text{DM}}=\kappa\mathbf{n}\cdot(\nabla\times\mathbf{n}), \tag{18}\]
and the case of \(\vartheta=-\pi/2\) is called the Rashba SOC
\[A_{\mu}=A_{\mu}^{a}\sigma_{a}=\kappa(0,-\sigma_{2},\sigma_{1},0) \tag{19}\]
that yields the DM term in the form of
\[\mathcal{H}_{\text{DM}}=\kappa(\mathbf{n}\cdot\nabla n_{3}-n_{3}\nabla\cdot \mathbf{n}). \tag{20}\]
The DM terms in Eqs. (18) and (20) are known to admit magnetic domain walls of the Bloch and Neel types, respectively. They also admit magnetic skyrmions of the Bloch and Neel types, respectively. We emphasize that these two terms as well as the terms for general \(\vartheta\) look different but physically are equivalent to each other under a field redefinition (gauge transformation) because they give the same field strength in Eq. (14).
The total potential term is3
Footnote 3: Because of the additional constant term \(\kappa^{2}\) in the potential, SUSY is broken if the original Lagrangian is SUSY.
\[V_{\text{tot}} =\frac{1}{2}\left(\mathbf{A}_{k}\times\mathbf{n}\right)^{2}+m^{2}(1- n_{3}^{2})\] \[=\frac{1}{2}\left|\mathbf{A}_{k}\right|^{2}\left|\mathbf{n}\right|^{ 2}-\frac{1}{2}\left(\mathbf{A}_{k}\cdot\mathbf{n}\right)^{2}+m^{2}(1-n_{3}^{2})\] \[=\kappa^{2}-\frac{\kappa^{2}}{2}\left(n_{1}^{2}+n_{2}^{2}\right) +m^{2}(1-n_{3}^{2})\] \[=\left(\frac{\kappa^{2}}{2}-m^{2}\right)n_{3}^{2}+\frac{\kappa^{ 2}}{2}+m^{2}. \tag{21}\]
Apart from the constant terms, this potential is called
\[\left\{\begin{array}{rl}\text{(i)}&\kappa^{2}-2m^{2}>0\text{ : Easy-plane}\\ \text{(ii)}&\kappa^{2}-2m^{2}=0\text{ : No potential }\text{.}\\ \text{(iii)}&\kappa^{2}-2m^{2}<0\text{ : Easy-axis}\end{array}\right. \tag{22}\]
These potentials are drawn schematically in Fig. 1.
As we show in a later section, the ground state is not a uniform state but is inhomogeneous, or forms a CSL, when the following inequality holds:
\[4\left|\kappa^{2}-2m^{2}\right|<\kappa^{2}\pi^{2}. \tag{23}\]
Then, there are a ferromagnetic phase with the easy-axis potential, the CSL phases with the easy-axis or easy-plane potential, and a helimagnetic phase at the boundary between the latter two. In summary, by gradually increasing the DM interaction \(\kappa\), there are the following phases:
\[\left\{\begin{array}{rl}\kappa^{2}=0&\text{: SUSY}\\ 0\leq\kappa^{2}\leq\frac{8m^{2}}{\pi^{2}+4}&\text{: Ferromagnetic}\\ \frac{8m^{2}}{\pi^{2}+4}\leq\kappa^{2}<2m^{2}&\text{: Easy-axis CSL}\\ \kappa^{2}=2m^{2}&\text{: Helimagnetic}\\ 2m^{2}<\kappa^{2}&\text{: Easy-plane CSL}\end{array}\right.. \tag{24}\]
The phase diagram is given in Fig. 2.
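For orientation, the phase assignment in Eq. (24) is easy to automate; the following Python sketch is ours (the function name and string labels are illustrative, only the inequalities come from Eq. (24)).

```python
# Hedged sketch of the phase-diagram logic in Eq. (24).
import math

def chiral_magnet_phase(kappa2: float, m2: float) -> str:
    """Ground state for given kappa^2 and m^2 (both >= 0), per Eq. (24)."""
    if kappa2 == 0:
        return "SUSY (no DM term)"
    if kappa2 <= 8 * m2 / (math.pi**2 + 4):
        return "ferromagnetic"
    if kappa2 < 2 * m2:
        return "easy-axis CSL"
    if kappa2 == 2 * m2:
        return "helimagnetic"
    return "easy-plane CSL"

# Example: kappa = 1.0, m = 1.3 (used later in Fig. 6) lies in the easy-axis CSL.
print(chiral_magnet_phase(1.0**2, 1.3**2))
```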
Figure 1: The potentials with vacua denoted in red. (a) Easy-axis potential. Vacua are the north (\(n_{3}=+1\)) and south (\(n_{3}=-1\)) poles. (b) Easy-plane potential. Vacua are on the equator (\(n_{3}=0\)).

Figure 2: The phase diagram of the chiral magnet from D-branes. FM and HM denote ferromagnetic and helimagnetic, respectively. In the FM phase, the ground state denoted by a blue dot is either the north pole \(n_{3}=+1\) or south pole \(n_{3}=-1\). In the easy-axis (plane) CSL, while the vacua are the north and south poles (equator), the ground state is a CSL represented by a blue circle.

## 3 Chiral Magnets from D-branes

In this section, we introduce two D-brane configurations in type-IIA/B string theory. In Sec. 3.1, we give the Hanany-Witten brane configuration consisting of D3, D5 and NS5-branes in type-IIB string theory. In Sec. 3.2, we give fractional D2-D6 branes on the Eguchi-Hanson manifold. In both cases, the \(O(3)\) nonlinear sigma model is realized on the color D-brane, and we further introduce non-Abelian magnetic fluxes on the flavor D-branes inducing the DM interaction.
### Hanany-Witten D-brane configuration
As mentioned, the gauge theory introduced in the last section can be made \(\mathcal{N}=2\) SUSY by introducing the Higgs scalar fields \(\tilde{\Phi}\) and adding fermionic superpartners (Higgsinos) for hypermultiplets, and gauginos for gauge or vector multiplets [66]. Then, the theory can be realized by D-brane configurations. We first consider the Hanany-Witten brane configuration in type IIB string theory [50; 51]. Here, we construct a more general Grassmann sigma model with the target space
\[Gr_{N_{\rm F},N_{\rm C}}=\frac{SU(N_{\rm F})}{SU(N_{\rm C})\times SU(N_{\rm F} -N_{\rm C})\times U(1)} \tag{11}\]
by considering the \(\mathcal{N}=2\) SUSY \(U(N_{\rm C})\) gauge theory coupled with \(N_{\rm F}\) hypermultiplets in the fundamental representation, and later restrict ourselves to \(N_{\rm F}=2\) and \(N_{\rm C}=1\) for the \(\mathbb{C}P^{1}\) model.4
Footnote 4: More precisely, for \(\mathcal{N}=2\) SUSY the vacuum manifolds are cotangent bundles \(T^{*}Gr_{N_{\rm F},N_{\rm C}},T^{*}\mathbb{C}P^{1}\) and so on.
In Table 1, we summarize the directions in which the D-branes extend, and the brane configuration is schematically drawn in Fig. 3.
In Fig. 3 a), \(N_{\rm C}\) D3-branes are stretched between two NS5-branes separated in the \(x^{3}\) direction. The \(U(N_{\rm C})\) gauge theory is realized on the \(N_{\rm C}\) coincident D3-brane world-volume. The D3-brane world-volume has a finite length \(\Delta x^{3}\) between the two NS5-branes, and therefore the D3-brane world-volume theory is a \((2+1)\)-dimensional \(U(N_{\rm C})\) gauge theory.5
Footnote 5: The gauge coupling is given by \(\frac{1}{g^{2}}=|\Delta x^{3}|\tau_{3}l_{s}^{4}=\frac{|\Delta x^{3}|}{g_{s}}\), with the string coupling constant \(g_{s}\) and string length \(l_{s}\) in type IIB string theory and the D3-brane tension \(\tau_{3}=1/g_{s}l_{s}^{4}\).
The positions of the \(N_{\rm F}\) D5-branes in the \(x^{7}\)-, \(x^{8}\)- and \(x^{9}\)-directions coincide with those of the D3-branes. Strings connecting D3- and D5-branes give rise to the \(N_{\rm F}\) hypermultiplets (the Higgs fields \(\Phi,\tilde{\Phi}\) and Higgsinos) in the D3-brane worldvolume theory.
Next, we put the system into the Higgs phase by separating the positions of the two NS5-branes in the \(x^{4,5,6}\) directions, \((\Delta x^{4},\Delta x^{5},\Delta x^{6})\neq 0\), as in Fig. 3 b). This gives rise to the triplet of the Fayet-Iliopoulos (FI) parameters \(c^{a}\) [64; 67], and we choose it as \(c^{a}=(0,0,v^{2}=\Delta x^{4}/g_{s}l_{s}^{2}>0)\). Then, the D3-brane worldvolume is cut and each segment of a D3-brane ends on one D5-brane.

| | \(x^{0}\) | \(x^{1}\) | \(x^{2}\) | \(x^{3}\) | \(x^{4}\) | \(x^{5}\) | \(x^{6}\) | \(x^{7}\) | \(x^{8}\) | \(x^{9}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(N_{\rm C}\) D3 | \(\circ\) | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) |
| \(N_{\rm F}\) D5 | \(\circ\) | \(\circ\ast\) | \(\circ\ast\) | \(-\) | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) |
| 2 NS5 | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) | \(-\) | \(\circ\) | \(\circ\) | \(\circ\) |

Table 1: The Hanany-Witten brane configuration. Branes are extended along directions denoted by \(\circ\), and are not extended along directions denoted by \(-\). \(\ast\) denotes a background gauge field strength making the D5-branes magnetized.
In the third step, we introduce masses to the hypermultiplets, by separating the positions of the D5-branes in the \(x^{7,8,9}\) directions as in Fig. 3 c) and d). This gives rise to a triplet of masses for the hypermultiplets. We consider real masses, separating the D5-branes only in the \(x^{7}\)-direction (\(\Delta x^{8}=\Delta x^{9}=0\)), for simplicity.
Figure 3: The Hanany-Witten brane configuration. The \(U(N_{\rm C})\) gauge theory is realized on the worldvolume of the \(N_{\rm C}\) D3-branes. Strings connecting two D3-branes give gauge multiplets, while strings connecting a D3-brane and a D5-brane give hypermultiplets. The separation \(\Delta x^{3}\) of the two NS5-branes in the \(x^{3}\) direction corresponds to \(1/g^{2}\). a) Hypermultiplets are massless, and do not have a VEV. b) A triplet of FI-terms is introduced by the separation \((\Delta x^{4},\Delta x^{5},\Delta x^{6})\) of the two NS5-branes in the \(x^{4,5,6}\) directions. c) and d) The masses of the hypermultiplets are introduced by the separation of D5-branes in the \(x^{7,8,9}\) directions.
The vacua of the D3-brane worldvolume theory can be considered as follows. As shown in Fig. 3 c) and d), each D3-brane ends on one of the D5-branes, on each of which at most one D3-brane can end, which is known as the s-rule [50]. There are \({}_{N_{\rm F}}C_{N_{\rm C}}=N_{\rm F}!/N_{\rm C}!(N_{\rm F}-N_{\rm C})!\) vacua in the Grassmann sigma model [68]. In the case of \(N_{\rm F}=2,N_{\rm C}=1\) that concerns us in this paper, there are two vacua as in Fig. 3 c) and d), corresponding to the antipodal points on \(\mathbb{C}P^{1}\simeq S^{2}\).
Finally, we turn on a background gauge field in Eq. (14), Eq. (15) or (16), on the \(x^{1}\)-\(x^{2}\) plane in the D5-brane worldvolume, which are the common directions with the D3-brane worldvolume not shown in Fig. 3. Such a background gauge field makes the D5-branes magnetized [52; 53; 54; 55; 56; 57; 58], and the \(SU(N_{\rm F}=2)\) symmetry, which is a gauge symmetry on the D5-branes, is spontaneously broken to the \(U(1)\) subgroup generated by \(\sigma_{3}\). This background gauge field induces the DM term on the D3-brane worldvolume theory, as in the second term in Eq. (12), or more explicitly Eq. (18) or (20). SUSY is completely broken at this step. We will see that this final step gives very nontrivial physics. In particular, we will find that the ground states are not uniform anymore in general as summarized in Fig. 2. Before discussing that, we give another useful brane configuration related to this brane configuration.
### D2-D6 system on Eguchi-Hanson space
We take a T-duality in the \(x^{3}\) direction along which the NS5-branes are separated to obtain a D2-D6 system with \(N_{\rm C}\) D2-branes and \(N_{\rm F}\) D6-branes in type-IIA string theory [60], see Table 2.
The hypermultiplets come from strings stretched between D2- and D6-branes. First, we consider the case that all hypermultiplets are massless. In this duality, NS5-branes are mapped to a hyper-Kähler geometry. The orthogonal space \(\mathbb{C}^{2}\) (the \(x^{3,4,5,6}\) directions) perpendicular to the D2-branes inside the D6-brane world-volume is divided by \(\mathbb{Z}_{2}\), and there is a constant self-dual NS-NS \(B\)-field on \(\mathbb{C}^{2}/\mathbb{Z}_{2}\). The asymptotically locally Euclidean (ALE) space of the \(A_{1}\)-type, the Eguchi-Hanson space \(T^{*}\mathbb{C}P^{1}\), is obtained by blowing up the orbifold singularity by inserting \(S^{2}\). The D2-branes are actually fractional D2-branes, that is, D4-branes two of whose spatial directions in the whole 4+1 dimensional world-volumes are wrapped around \(S^{2}\), which blows up the orbifold singularity of \(\mathbb{C}^{2}/\mathbb{Z}_{2}\). Thus, the positions of fractional D2-branes inside the D6-branes are fixed at the fixed point of the \(\mathbb{Z}_{2}\) action, and thus there are no adjoint hypermultiplets on the D2-brane world-volume theory, see Fig. 4 a).

| | \(x^{0}\) | \(x^{1}\) | \(x^{2}\) | \(x^{3}\) | \(x^{4}\) | \(x^{5}\) | \(x^{6}\) | \(x^{7}\) | \(x^{8}\) | \(x^{9}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(N_{\rm C}\) D2 | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) |
| \(N_{\rm F}\) D6 | \(\circ\) | \(\circ\ast\) | \(\circ\ast\) | \(\circ\) | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) |
| ALE \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) | \(-\) | \(-\) | \(-\) | \(\circ\) | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) |

Table 2: D2-D6-brane configuration on the Eguchi-Hanson manifold in type-IIA string theory. Branes are extended along directions denoted by \(\circ\), and are not extended along directions denoted by \(-\).
The \(N_{\rm C}\) D2-branes can be interpreted as Yang-Mills instantons (with the instanton number \(N_{\rm C}\)) on the Eguchi-Hanson manifold in the effective \(U(N_{\rm F})\) gauge theory on the D6-branes. The Kronheimer-Nakajima construction [69] of the moduli space of instantons in the ALE space gives the same moduli space of vacua \(T^{*}Gr_{N_{\rm F},N_{\rm C}}\) as that of the Hanany-Witten brane configuration.
Now we turn on the masses of the hypermultiplets. The masses of the hypermultiplets correspond to the positions of the D6-branes in the \(x^{7,8,9}\)-directions. Thus, the hypermultiplets coming from strings stretched between D2- and D6-branes become massive by this separation. We assume real masses for the hypermultiplets by separating the parallel D6-branes only along the \(x^{7}\)-direction. To be consistent with the s-rule in the T-dual picture, at most one D2-brane can be absorbed into the \(\mathbb{Z}_{2}\) fixed point of one D6-brane.
Figure 4: The D2-D6 branes in the Eguchi-Hanson manifold in type-IIA string theory. The case of \(N_{\rm F}=2\) and \(N_{\rm C}=1\) is drawn. The dashed lines denote the fixed points (orbifold singularity) of the \(\mathbb{Z}_{2}\) action on the orbifold \(\mathbb{C}^{2}/\mathbb{Z}_{2}\). The singularity is blown up by \(S^{2}\) to become the Eguchi-Hanson manifold. The D2-branes are fractional D2-branes, that is D4-branes two of whose worldvolume wrap \(S^{2}\). a) The D-brane configuration for massless hypermultiplets in the fundamental representation. b) The brane configuration for massive hypermultiplets. Hypermultiplets coming from strings stretched between D2- and D6-branes become massive by placing D6-branes with distances in the \(x^{7,8,9}\)-coordinates.
Again, we finally turn on a background gauge field (14), making the D6-branes magnetized. This yields the DM term, given in Eq. (18) or (20), on the D2-brane worldvolume theory.
In the following sections, we discuss brane configurations corresponding to topological solitons and modulated ground states. For this purpose, we will see that the latter brane configuration is simpler.
## 4 Domain Walls and Chiral Soliton Lattice Phases as D-branes
In this section, we analytically consider topological solitons of codimension one and the ground states which are also of codimension one (or uniform). In Subsec. 4.1, we reduce the model to the so-called chiral sine-Gordon model by assuming one-dimensional dependence. In Subsec. 4.2, we construct a magnetic domain wall as an excited state in the ferromagnetic phase and a kinky D-brane configuration. In Subsec. 4.3, we construct the CSL phase with the easy-axis potential and a snaky D-brane configuration. In Subsec. 4.4, we study the helimagnetic phase. In Subsec. 4.5, we construct the CSL phase with the easy-plane potential and a zigzag D-brane configuration.
### Chiral sine-Gordon model
First, we introduce rotated coordinates \((\tilde{x}^{1},\tilde{x}^{2})\) by
\[\left\{\begin{aligned} &\tilde{x}^{1}=\cos\beta x^{1}+\sin\beta x^{2} \\ &\tilde{x}^{2}=-\sin\beta x^{1}+\cos\beta x^{2}\end{aligned} \right.. \tag{17}\]
In the next subsection, we will take \(\tilde{x}^{1}\) as the direction perpendicular to the domain wall, \(\tilde{x}^{2}\) the direction along the domain-wall worldvolume. Writing \(\frac{\partial}{\partial\tilde{x}^{k}}=\tilde{\partial}_{k}\), we have
\[\mathcal{H}_{\text{DM}}=\mathbf{A}_{k}\cdot(\mathbf{n}\times\partial_{k}\mathbf{n })=\tilde{\mathbf{A}}_{k}\cdot\left(\mathbf{n}\times\tilde{\partial}_{k}\mathbf{n }\right) \tag{18}\]
where
\[\begin{aligned} &\tilde{\mathbf{A}}_{1}=-\kappa(\cos\tilde{ \vartheta},-\sin\tilde{\vartheta},0)\\ &\tilde{\mathbf{A}}_{2}=-\kappa(\sin\tilde{\vartheta},\cos\tilde{ \vartheta},0)\end{aligned} \tag{19}\]
with \(\tilde{\vartheta}\equiv\vartheta-\beta\).
We employ an ansatz for configurations depending on only one direction, which we take to be \(\tilde{x}^{1}\) (domain walls and chiral soliton lattices), of the form
\[\mathbf{n}=\left(\cos\phi\sin f(\tilde{x}^{1}),\sin\phi\sin f(\tilde{x}^{1}), \cos f(\tilde{x}^{1})\right). \tag{20}\]
Then, the DM interaction can be written as
\[\mathcal{H}_{\text{DM}}=\tilde{\mathbf{A}}_{k}\cdot\left(\mathbf{n}\times\tilde{ \partial}_{k}\mathbf{n}\right)=\tilde{\mathbf{A}}_{1}\cdot\left(\mathbf{n}\times \tilde{\partial}_{1}\mathbf{n}\right)=\kappa\sin(\tilde{\vartheta}+\phi) \tilde{\partial}_{1}f \tag{21}\]
So, one gets the chiral sine-Gordon model
\[\mathcal{H}_{\text{SG}}=\frac{1}{2}\left(\tilde{\partial}_{1}f\right)^{2}+ \kappa\sin(\tilde{\vartheta}+\phi)\tilde{\partial}_{1}f+\left(\frac{\kappa^{ 2}}{2}-m^{2}\right)\cos^{2}f+\frac{\kappa^{2}}{2}+m^{2}. \tag{22}\]
The second term is a total derivative term specific to the _chiral_ sine-Gordon model, or a topological term counting the number of sine-Gordon solitons. Note that this term does not contribute to the equation of motion.
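One can confirm the reduction (4.5) symbolically; the short SymPy sketch below (ours) inserts the ansatz (4.4) into the DM term with \(\tilde{\mathbf{A}}_{1}\) from (4.3) and checks that it reproduces \(\kappa\sin(\tilde{\vartheta}+\phi)\tilde{\partial}_{1}f\).

```python
# SymPy check of Eq. (4.5): DM term under the one-dimensional ansatz (4.4).
import sympy as sp

x, kappa, th, phi = sp.symbols('x kappa vartheta phi', real=True)
f = sp.Function('f')(x)
n = sp.Matrix([sp.cos(phi)*sp.sin(f), sp.sin(phi)*sp.sin(f), sp.cos(f)])
A1 = -kappa * sp.Matrix([sp.cos(th), -sp.sin(th), 0])   # \tilde A_1 of (4.3)
dm = A1.dot(n.cross(sp.diff(n, x)))                      # only x~^1 dependence
target = kappa * sp.expand_trig(sp.sin(th + phi)) * sp.diff(f, x)
assert sp.simplify(dm - target) == 0
```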
The trivial vacuum solutions are given by
\[f_{\rm vac}=\left\{\begin{array}{cl}\left(l+\frac{1}{2}\right)\pi&\text{if $ \kappa^{2}-2m^{2}>0$ \ (easy-plane)}\\ l\pi&\text{if $\kappa^{2}-2m^{2}<0$ \ (easy-axis)}\end{array}\right. \tag{4.7}\]
with an integer \(l\). When the potential term vanishes, i.e., \(\kappa^{2}-2m^{2}=0\), any constant can be the vacuum solution.
The chiral sine-Gordon model or the chiral double sine-Gordon model also appears in QCD at finite density in the presence of a strong magnetic field [70; 71] or rapid rotation [72; 73]. In such cases, the Wess-Zumino-Witten term [74] gives a topological term instead of the DM term.
### Domain walls in ferromagnetic phase with easy-axis potential
First, we consider the ferromagnetic phase. In this phase, there are two vacua corresponding to the north and south poles \(n_{3}=\pm 1\). In terms of the D-brane configurations, they correspond to the straight D-branes in Fig. 3 c) and d) for the Hanany-Witten brane configuration in type-IIB string theory and Fig. 4 b) and c) for the D2-D6-ALE system in type-IIA string theory.
Then, we consider a magnetic domain wall interpolating between these two vacua. In this case, the energy per unit length in the \(\tilde{x}^{2}\)-direction is given by
\[E[f] \equiv\int d\tilde{x}^{1}\mathcal{H}_{\rm SG}\] \[=\int d\tilde{x}^{1}\left[\frac{1}{2}\left(\tilde{\partial}_{1}f \right)^{2}+\kappa\sin(\tilde{\vartheta}+\phi)\tilde{\partial}_{1}f+\left(m^{ 2}-\frac{\kappa^{2}}{2}\right)\sin^{2}f\right]+E^{\rm e.a.}_{\rm vac}\, \tag{4.8}\]
where \(E^{\rm e.a.}_{\rm vac}\) denotes the vacuum energy with the easy-axis potential. Let us study a single kink solution. The (anti-)BPS equation for the magnetic domain wall can be given by
\[\tilde{\partial}_{1}f=\pm\sqrt{2m^{2}-\kappa^{2}}\sin f. \tag{4.9}\]
The solution can be obtained as
\[f_{\rm kink}=2\arctan\left[\exp\left(\pm\sqrt{2m^{2}-\kappa^{2}}\tilde{x}^{1}+ X\right)\right]\, \tag{4.10}\]
where \(X\) is a position moduli parameter. If we choose the plus (minus) sign for the (anti-)BPS soliton, the function \(f\) monotonically increases (decreases). The equation for the phase \(\phi\) is simply given by
\[\cos(\tilde{\vartheta}+\phi)=0. \tag{4.11}\]
For the lowest energy kink solutions, the second term in the energy (4.8), which stems from the DM interaction, should be negative, so that \(\phi\) is chosen as
\[\phi=\begin{cases}-\tilde{\vartheta}-\frac{\pi}{2}&\text{if }\kappa\tilde{\partial}_{1}f>0,\\ -\tilde{\vartheta}+\frac{\pi}{2}&\text{if }\kappa\tilde{\partial}_{1}f<0.\end{cases} \tag{4.12}\]
For the phase \(\phi\) giving the lowest energy kink solution, one finds that the energy difference between the single kink solution and the vacuum state is given by
\[E\left[f_{\rm kink}\ \right]-E_{\rm vac}^{\rm e.a.}=2\sqrt{2m^{2}-\kappa^{2}}-| \kappa|\pi. \tag{4.13}\]
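These statements are easy to verify; the sketch below (ours) checks symbolically that the profile (4.10) solves the BPS equation (4.9), and that at the parameters of Fig. 5, \(\kappa=1\) and \(m^{2}=(\pi^{2}+4)/8\), the gap (4.13) indeed closes.

```python
# SymPy/NumPy sketch: (4.10) solves (4.9); the gap (4.13) vanishes at the
# critical point m^2 = kappa^2 (pi^2 + 4)/8 separating FM and easy-axis CSL.
import math
import sympy as sp

x, X = sp.symbols('x X', real=True)
mu = sp.symbols('mu', positive=True)          # mu = sqrt(2 m^2 - kappa^2)
f = 2 * sp.atan(sp.exp(mu*x + X))             # kink profile (4.10), upper sign
assert sp.simplify(sp.diff(f, x) - mu*sp.expand_trig(sp.sin(f))) == 0

kappa, m2 = 1.0, (math.pi**2 + 4) / 8
print(2*math.sqrt(2*m2 - kappa**2) - abs(kappa)*math.pi)   # ~ 0.0
```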
See Fig. 5 a) for a plot of the single domain wall (4.10). One of the most important roles of the presence of the DM term is that this domain wall does not carry a \(U(1)\) modulus [17]; the \(U(1)\) phase \(\phi\) is fixed to be a constant determined from \(\vartheta\) through Eq. (4.11). When \(\mathbf{n}\) at the domain wall (\(n_{3}=0\)) is parallel or orthogonal to the domain wall worldvolume, the wall is called Bloch or Néel, respectively, see Table 3.7 We can change the direction of the domain wall worldvolume by changing \(\beta\) in Eq. (4.1) with the same ansatz in Eq. (4.4). Then, the \(U(1)\) phase \(\phi\) changes accordingly through Eq. (4.11), and the angle between the \(U(1)\) phase \(\phi\) and the spatial direction of the domain wall worldvolume is preserved under a rotation. This is well known and was reconfirmed in the effective theory of the domain wall [17].
Footnote 7: This situation is in contrast to the case in the absence of the DM term in which a domain wall carries a \(U(1)\) modulus \(\phi\)[75; 76; 77].
Now let us discuss D-brane configurations for the magnetic domain wall. In the D2-D6-ALE system in type-IIA string theory, the effective theory on the D2-brane is the SUSY \(U(N_{\rm C})\) gauge theory with massive \(N_{\rm F}\) hypermultiplets and the FI-term. For \(N_{\rm C}=1,N_{\rm F}=2\) that we are considering, \(\Sigma=mn_{3}\) for a single magnetic domain-wall solution is plotted as a function of \(x^{1}\): \(x^{7}=\Sigma(x^{1})\) in Fig. 5 (b) left.8 It represents a kinky D2-brane curved in the (\(x^{1}\),\(x^{7}\))-plane and the curve is determined by the solution (4.10) [59; 60; 61; 62; 63]. In the limit of a thin domain wall [78], the part of the kinky D2-brane can be regarded as a D2-brane extending into the \(x^{7}\)-direction instead of the \(x^{1}\) direction (the codimension of the wall). We denote it by D2\({}^{*}\) (see Fig. 5(b) right) and the brane configuration is summarized in Table 4.
Footnote 8: In the \(U(N_{\rm C})\) case, the adjoint scalar field \(\Sigma\) represents transverse fluctuations of the \(N_{\rm C}\) D2-branes. In this case, the diagonal components of (the vacuum expectation value of) \(\Sigma\), in the gauge of diagonal \(\Sigma\), can be identified as the positions of the \(N_{\rm C}\) D2-branes along the \(x^{7}\)-coordinate. In that gauge, the \(N_{\rm C}\) diagonal components of \(\Sigma\) represent the wall profiles of domain wall solutions.
| Type | \(\tilde{\vartheta}\) | \(\phi\) | \(S^{1}\) | to DW |
| --- | --- | --- | --- | --- |
| Bloch | \(0\) | \(\pm\pi/2\) | \(n_{1}=0\) | parallel |
| Néel | \(\pm\pi/2\) | \(\pm\pi\) | \(n_{2}=0\) | orthogonal |

Table 3: Bloch or Néel type magnetic domain wall: the parameter \(\tilde{\vartheta}\), the \(U(1)\) phase \(\phi\) of the domain wall, the \(S^{1}\) submanifold inside the target space \(S^{2}\), and the relation of \(\mathbf{n}\) to the domain wall worldvolume.

| | \(x^{0}\) | \(x^{1}\) | \(x^{2}\) | \(x^{3}\) | \(x^{4}\) | \(x^{5}\) | \(x^{6}\) | \(x^{7}\) | \(x^{8}\) | \(x^{9}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(N_{\rm C}\) D2 | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) |
| D2\({}^{*}\) | \(\circ\) | \(-\) | \(\circ\) | \(-\) | \(-\) | \(-\) | \(-\) | \(\circ\) | \(-\) | \(-\) |
| \(N_{\rm F}\) D6 | \(\circ\) | \(\circ\ast\) | \(\circ\ast\) | \(\circ\) | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) |
| ALE \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) | \(-\) | \(-\) | \(-\) | \(\circ\) | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) |

Table 4: The domain wall in the D2-D6-brane configuration on the Eguchi-Hanson manifold in type-IIA string theory. Branes are extended along directions denoted by \(\circ\), and are not extended along directions denoted by \(-\).

Next, let us discuss a magnetic domain wall in the Hanany-Witten configuration in type-IIB string theory [60]. In this case, the position of the D3-brane in the \(x^{7}\)-coordinate depends on the \(x^{1}\)-coordinate for a magnetic domain wall. Around the domain wall at \(x^{1}=x_{0}\), they move from one D5-brane to the other D5-brane. Here, we consider the thin-wall limit for simplicity. In Fig. 5 (c), they are represented by D3\({}^{*}\). However, these can end on no D5-brane and must be bent in the \(x^{4}\)-direction to join each other by creating a segment represented by D3\({}^{\prime}\). In Fig. 5 (d), the same configuration is shown by plotting the \(x^{1}\) direction and suppressing the \(x^{4}\) direction. The brane configuration is summarized in Table 5.

| | \(x^{0}\) | \(x^{1}\) | \(x^{2}\) | \(x^{3}\) | \(x^{4}\) | \(x^{5}\) | \(x^{6}\) | \(x^{7}\) | \(x^{8}\) | \(x^{9}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(N_{\rm C}\) D3 | \(\circ\) | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) | \(-\) |
| D3\({}^{\prime}\) | \(\circ\) | \(-\) | \(\circ\) | \(-\) | \(\circ\) | \(-\) | \(-\) | \(\circ\) | \(-\) | \(-\) |
| D3\({}^{*}\) | \(\circ\) | \(-\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) | \(\circ\) | \(-\) | \(-\) |
| \(N_{\rm F}\) D5 | \(\circ\) | \(\circ\ast\) | \(\circ\ast\) | \(-\) | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) |
| 2 NS5 | \(\circ\) | \(\circ\) | \(\circ\) | \(-\) | \(-\) | \(-\) | \(-\) | \(\circ\) | \(\circ\) | \(\circ\) |

Table 5: The domain wall in the Hanany-Witten brane configuration. Branes are extended along directions denoted by \(\circ\), and are not extended along directions denoted by \(-\).

Figure 5: (a) A single magnetic domain wall with the easy-axis potential at the critical point (\(\kappa=1,m^{2}=(\pi^{2}+4)/8\)). The corresponding kinky brane configuration in (b) the D2-D6-ALE system and (c), (d) the Hanany-Witten brane configuration.
### Chiral soliton lattice phase with easy-axis potential
We now discuss inhomogeneous ground states. Here, we give the condition that the CSL is the ground state instead of uniform configurations. As implied by the energy difference between the single soliton and the vacuum (uniform) configuration (4.13), the kink energy can be lower than the vacuum energy. In such a case, solitons are created in the vacuum, and eventually they form a CSL, an array of kinks and anti-kinks. Thus, a CSL is the ground state if
\[4\left(2m^{2}-\kappa^{2}\right)<\kappa^{2}\pi^{2}. \tag{4.14}\]
The CSL solutions are given in terms of the Jacobi amplitude function as
\[f_{\rm CSL}=\pm{\rm am}\left(\frac{\sqrt{2m^{2}-\kappa^{2}}}{\lambda}\tilde{x} ^{1}+X,\lambda\right)+\frac{\pi}{2}. \tag{4.15}\]
It would be worth noting that Eq. (4.15) solves the Euler-Lagrange equation obtained from the energy functional for any \(\lambda\in(0,1]\), but does not satisfy the BPS equation (4.9), except for the case \(\lambda=1\) where it reduces to the single-kink solution (4.10). The solution (4.15) with the plus (minus) sign is a monotonically increasing (decreasing) function of \(\tilde{x}^{1}\). To obtain the ground state, the phase \(\phi\) should be taken as Eq. (4.12). In addition, the modulus \(\lambda\) for the ground state is determined through
\[2\text{E}(\lambda)=\frac{|\kappa|\pi}{\sqrt{2m^{2}-\kappa^{2}}}\lambda \tag{4.16}\]
with the elliptic integral of the second kind \(\text{E}(\lambda)\), which can be derived from \(dE[f_{\text{CSL}}]/d\lambda=0\). Fig. 6 shows a CSL ground state for the easy-axis potential. Panel a) is a plot of the CSL solution, and panel b) is a schematic plot of the shape of the D2-brane in the D2-D6-ALE system, which may be called a snaky D-brane.

Figure 6: a) A CSL solution with the easy-axis potential as the ground state (\(\kappa=1.0,m=1.3\)). b) A snaky D2-brane corresponding to a).
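Numerically, Eq. (4.16) is a one-dimensional root-finding problem; the sketch below (ours) solves it with SciPy and samples the profile (4.15). Note the SciPy convention that `ellipe` and `ellipj` take the parameter \(\lambda^{2}\) rather than the modulus \(\lambda\).

```python
# Hedged numerical sketch: solve Eq. (4.16) for lambda and build the
# easy-axis CSL profile (4.15) at the Fig. 6 parameters.
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipe, ellipj

kappa, m = 1.0, 1.3
mu = np.sqrt(2*m**2 - kappa**2)          # sqrt(2 m^2 - kappa^2)
slope = abs(kappa) * np.pi / mu          # r.h.s. coefficient in (4.16)
assert slope > 2.0                       # CSL condition (4.14)

lam = brentq(lambda l: 2*ellipe(l**2) - slope*l, 1e-9, 1 - 1e-12)
x = np.linspace(-20.0, 20.0, 2001)
f_csl = ellipj(mu*x/lam, lam**2)[3] + np.pi/2   # am(mu x/lam, lam) + pi/2
```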
### Helimagnetic phase
When the relation \(2m^{2}=\kappa^{2}\) holds, the total potential term vanishes, as can be seen in Eq. (2.22), and the energy per unit length can be written as
\[E[f]=\frac{1}{2}\int d\tilde{x}^{1}\left[\left\{\tilde{\partial}_{1}f+\kappa \sin\left(\tilde{\vartheta}+\phi\right)\right\}^{2}-\kappa^{2}\sin^{2}\left( \tilde{\vartheta}+\phi\right)\right]+\text{const}. \tag{4.17}\]
In this case, domain walls do not exist. However, the ground state is uniformly modulated due to the DM interaction. Since the BPS equation is given by
\[\tilde{\partial}_{1}f=-\kappa\sin\left(\tilde{\vartheta}+\phi\right)\, \tag{4.18}\]
the solution for the ground state is
\[\left\{\begin{aligned} & f=\pm\kappa\tilde{x}^{1}\\ &\phi=-\tilde{\vartheta}\mp\frac{\pi}{2}\end{aligned}\right.. \tag{4.19}\]
Thus, in this case, the phase varies linearly.
### Chiral soliton lattice phase with easy-plane potential
Next, we consider the case of the easy-plane potential. A domain wall with the easy-plane potential is not topologically stable in the sense that the two boundary values are connected and degenerate. Such an unstable domain wall should decay into a vacuum state when the DM interaction is absent. However, in the presence of the DM interaction, such domain walls can be energetically stable. Indeed, in our setting, the ground state is always given by a CSL composed of such unstable domain walls and anti-domain walls.
For the easy-plane case, the energy per unit length in the \(\tilde{x}^{2}\)-direction can be defined as
\[E[f]=\int d\tilde{x}^{1}\left[\frac{1}{2}\left(\tilde{\partial}_{1}f\right)^{ 2}+\kappa\sin(\tilde{\vartheta}+\phi)\tilde{\partial}_{1}f+\left(\frac{\kappa ^{2}}{2}-m^{2}\right)\cos^{2}f\right]+E_{\rm vac}^{\rm e.p.}\, \tag{4.20}\]
where \(E_{\rm vac}^{\rm e.p.}\) denotes the vacuum energy with the easy-plane potential. Let us begin with studying a single kink solution. The (anti-)BPS equation for a single kink is given by
\[\tilde{\partial}_{1}f=\pm\sqrt{\kappa^{2}-2m^{2}}\cos f \tag{4.21}\]
This equation can be solved by
\[f_{\rm kink}=\pm 2\arctan\left[\tanh\left(\frac{\sqrt{\kappa^{2}-2m^{2}}}{2} \tilde{x}^{1}+X\right)\right] \tag{4.22}\]
where \(X\) is a position moduli parameter. For the plus (minus) sign for an (anti-)BPS kink, the function (4.22) is a monotonically increasing (decreasing) function of \(\tilde{x}^{1}\). Among the kink solutions, the lowest energy is attained when \(\phi\) is taken as Eq. (4.12), as in the easy-axis case. Fig. 7 a) shows a single kink solution in the easy-plane potential.
As in the case of the easy-axis potential, the kink can have negative energy. In such a case, the kink does not decay to the vacuum. The difference between the kink energy and the vacuum energy is
\[E\left[f_{\rm kink}\right]-E_{\rm vac}^{\rm e.p.}=2\sqrt{\kappa^{2}-2m^{2}}-| \kappa|\pi. \tag{4.23}\]
If the kink energy is less than that of the uniform state (vacuum), i.e.,
\[4\left(\kappa^{2}-2m^{2}\right)<\kappa^{2}\pi^{2} \tag{4.24}\]
then, a CSL, an array of kinks and anti-kinks, is the ground state. Since \(\pi^{2}>4\), however, this inequality always holds. Thus, for the easy-plane potential, the ferromagnetic state (vacuum) is unstable, and the CSL phase is always the ground state for any \(m^{2}\in\left[0,\kappa^{2}/2\right)\).
In the case of the easy-plane potential, the solution describing a CSL is given by
\[f_{\rm CSL}=\pm{\rm am}\left(\frac{\sqrt{\kappa^{2}-2m^{2}}}{\lambda}\tilde{x}^{1} +X,\lambda\right) \tag{4.25}\]
with a position moduli parameter \(X\). Similar to the easy-axis case, the modulus \(\lambda\) providing the ground state is determined through
\[2{\rm E}(\lambda)=\frac{|\kappa|\pi}{\sqrt{\kappa^{2}-2m^{2}}}\lambda. \tag{4.26}\]
Fig. 7 b) shows a plot of the CSL solution with the easy-plane potential, which is rather zigzag compared with that for the easy-axis potential. This can be identified with a D2-brane configuration for a CSL, as schematically drawn in Fig. 7 c). The D2-brane touches the D6-branes over a shorter range than the one in the easy-axis case shown in Fig. 6.
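The same construction applies verbatim here with \(\sqrt{\kappa^{2}-2m^{2}}\); at the Fig. 7 parameters the sketch below (ours, same SciPy conventions as before) yields a modulus \(\lambda\) clearly below one, i.e. a nearly sinusoidal profile, consistent with the zigzag shape.

```python
# Easy-plane CSL: solve (4.26) and sample (4.25) at the Fig. 7 parameters.
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipe, ellipj

kappa, m = 1.2, 0.1
mu = np.sqrt(kappa**2 - 2*m**2)
lam = brentq(lambda l: 2*ellipe(l**2) - abs(kappa)*np.pi/mu*l, 1e-9, 1 - 1e-12)
x = np.linspace(-10.0, 10.0, 1001)
f_csl = ellipj(mu*x/lam, lam**2)[3]      # am(mu x/lam, lam), Eq. (4.25)
print(lam)                               # roughly 0.8 for these parameters
```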
Figure 7: a) A single domain wall and b) chiral soliton lattices with the easy-plane potential as the ground state (\(\kappa=1.2,m=0.1\)). c) A zigzag D2-brane corresponding to b).

## 5 Magnetic Skyrmions as D-branes

In this section, we discuss topologically excited states of two codimensions, that is, magnetic skyrmions and domain-wall skyrmions in chiral magnets. In our model, these are excited states on top of the easy-axis ferromagnetic ground state. In Subsec. 5.1, we numerically construct magnetic skyrmions and find that they are represented by D1-branes (fractional D0-branes) in the Hanany-Witten brane configuration (D2-D6-ALE system) in type-IIB(A) string theory. In Subsec. 5.2, we numerically construct explicit solutions of domain-wall skyrmions, and see the shape of the color D-brane in the vicinity of a skyrmion.
### Magnetic skyrmions
In this subsection, we show that magnetic skyrmions can be described by D1-branes in the Hanany-Witten brane configuration in type-IIB string theory, or by D0-branes in the D2-D6-ALE system in type-IIA string theory. The latter are in fact fractional D0-branes, that is, D2-branes two of whose spatial directions wrap around the \(S^{2}\) cycle blowing up the orbifold singularity \(\mathbb{C}^{2}/\mathbb{Z}_{2}\).
Let us start with the case in the absence of the DM term and masses of scalar fields (\(\Delta x^{7,8,9}=0\)). In such a case, vortices were previously realized as D1-branes in the Hanany-Witten brane configuration, or as D0-branes in the D2-D6 system, as in Fig. 8 a) or b), respectively [64]. These vortices are non-Abelian vortices in general [79, 80, 81, 82, 83, 84]. For \(N_{\rm F}=N_{\rm C}\equiv N\) they are local non-Abelian vortices with non-Abelian orientational moduli \(\mathbb{C}P^{N-1}\), where the case of \(N=1\) corresponds to Abrikosov-Nielsen-Olesen (ANO) vortices [85, 86] in superconductors. For \(N_{\rm F}>N_{\rm C}\) such as our concern of \(N_{\rm C}=1,N_{\rm F}=2\), they are semi-local vortices [87, 88, 89, 90] having size and phase moduli in addition to non-Abelian orientational moduli.
In the strong gauge coupling limit, \(\Delta x^{3}\sim 1/g^{2}=0\), these semi-local vortices (\(N_{\rm F}>N_{\rm C}\)) become lumps in a nonlinear sigma model, which still possess size and phase moduli. In the case of \(N_{\rm C}=1,N_{\rm F}=2\), they are \(\mathbb{C}P^{1}\) lumps.
Figure 8: D-brane configurations for a magnetic skyrmion in a) the Hanany-Witten brane configuration and b) the D2-D6-ALE system. A magnetic skyrmion is realized by a) a D1-brane and b) a fractional D0-brane, respectively.
If we turn on masses of scalar fields (\(\Delta x^{7,8,9}\neq 0\)), semilocal vortices at finite gauge coupling \(g\) shrink, the size modulus becomes zero, and they eventually become ANO vortices. However, in the strong gauge coupling limit, lumps are not stable under such a mass deformation; they shrink to zero size and the configurations become singular (called the small lump singularity).

Figure 9: Skyrmion and anti-skyrmion with the easy-axis potential (\(\kappa=1.0,m^{2}=2.0,\vartheta=0\)). The top panels represent quantities of a skyrmion with the boundary condition \(n_{3}=1\), and the bottom panels those of an anti-skyrmion with the boundary condition \(n_{3}=-1\). The panels a) and d) show the value of \(n_{3}\); b) and e) the energy density; c) and f) the topological charge density.

Figure 10: Shape of a D2-brane for a single skyrmion (left) and anti-skyrmion (right) with the easy-axis potential (\(\kappa=1.0,m^{2}=2.0,\vartheta=0\)).
Now we introduce the DM interaction by turning on the background gauge field (14) on the D5(D6)-branes in the Hanany-Witten configuration (the D2-D6-ALE system). Then, the D5(D6)-branes become so-called magnetized D-branes [52; 53; 54; 55; 56; 57; 58], and the \(SU(2)\) gauge symmetry on the brane is spontaneously broken to \(U(1)\). This induces the DM interaction on the D3(D2)-brane worldvolume field theory, where the \(SU(2)\) flavor symmetry is broken to \(U(1)\). The Hamiltonian density is given in Eq. (12), and the energy can be written as
\[E\equiv\int d^{2}x\,\mathcal{H}=\int d^{2}x\left[\frac{1}{2}\partial_{k}\mathbf{n}\cdot\partial_{k}\mathbf{n}+\kappa\left\{\cos\vartheta\ \mathbf{n}\cdot(\nabla\times\mathbf{n})-\sin\vartheta(\mathbf{n}\cdot\nabla n_{3}-n_{3}\nabla\cdot\mathbf{n})\right\}+\left(m^{2}-\frac{\kappa^{2}}{2}\right)(1-n_{3}^{2})\right]+\text{const.} \tag{5.1}\]
where the second term is the induced DM interaction, and the third is the easy-axis potential. This induced DM interaction prevents lumps from shrinking; such stabilized lumps are nothing but magnetic skyrmions. Indeed, Derrick's scaling argument [17] requires that for stable configurations of codimension two, the energy contribution from the DM interaction \(E_{\text{DM}}\) and the potential energy \(E_{\text{pot}}\) satisfy the relation
\[E_{\text{DM}}+2E_{\text{pot}}=0\, \tag{5.2}\]
implying that non-trivial skyrmion solutions can exist only when the DM interaction energy is a finite negative value. The topological charge of the skyrmion, \(\pi_{2}(S^{2})\), is defined by
\[Q=\frac{1}{8\pi}\int d^{2}x\ \varepsilon^{jk}\mathbf{n}\cdot\left(\partial_{j}\mathbf{n}\times\partial_{k}\mathbf{n}\right). \tag{5.3}\]
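On a lattice, the charge (5.3) can be evaluated by finite differences; the sketch below is ours (the function name is illustrative and a unit-norm field `n` of shape `(Nx, Ny, 3)` is assumed) and reproduces \(Q=-1\) for a test profile with unit winding, \(f(0)=\pi\) and \(f(\infty)=0\).

```python
# Hedged sketch: topological charge (5.3) via central differences.
import numpy as np

def skyrmion_charge(n: np.ndarray, dx: float, dy: float) -> float:
    dnx = np.gradient(n, dx, axis=0)                     # d n / d x^1
    dny = np.gradient(n, dy, axis=1)                     # d n / d x^2
    dens = np.einsum('ija,ija->ij', n, np.cross(dnx, dny))
    return dens.sum() * dx * dy / (4.0 * np.pi)

xs = np.linspace(-10.0, 10.0, 401)
X, Y = np.meshgrid(xs, xs, indexing='ij')
r, th = np.hypot(X, Y), np.arctan2(Y, X)
f = np.pi * np.exp(-r)                                   # f(0)=pi, f(inf)=0
n = np.stack([np.cos(th)*np.sin(f), np.sin(th)*np.sin(f), np.cos(f)], -1)
print(skyrmion_charge(n, xs[1]-xs[0], xs[1]-xs[0]))      # ~ -1, cf. Eq. (5.6)
```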
Even in the presence of the DM interaction, only one of either the skyrmion (\(Q=+1\)) or the anti-skyrmion (\(Q=-1\)) is stable depending on the vacuum: when the vacuum is \(n_{3}=-1\), a skyrmion is stable and an anti-skyrmion is unstable; on the other hand, when the vacuum is \(n_{3}=+1\), an anti-skyrmion is stable and a skyrmion is unstable.10 This can easily be seen when we consider a rotationally symmetric ansatz of the form
Footnote 10: This fact does not seem to be widely known in the condensed matter community. Usually, one has the Zeeman energy \(Bn_{3}\). In such a case, only either a skyrmion or an anti-skyrmion is stable because of the unique vacuum.
\[\mathbf{n}=(\cos(\nu\theta+\gamma)\sin f(r),\sin(\nu\theta+\gamma)\sin f(r),\cos f(r)) \tag{5.4}\]
with the polar coordinates \((r,\theta)\), where \(\nu\in\mathbb{Z}\) denotes a winding number, \(\gamma\in[0,2\pi)\) is a constant describing the internal orientation of the skyrmion, and \(f(r)\in[0,\pi]\) is a monotonic function satisfying the boundary conditions \(\{f(0)=0,f(\infty)=\pi\}\) or \(\{f(0)=\pi,f(\infty)=0\}\). Then, the DM term can be written as
\[E_{\text{DM}}=-\kappa\int d^{2}x\ \sin[(1-\nu)\theta-\vartheta-\gamma]\left(\frac{\nu\sin(2f)}{2r}+\partial_{r}f\right). \tag{5.5}\]
It indicates that the energy contribution from the DM interaction vanishes except for \(\nu=1\). If \(E_{\rm DM}=0\), no non-trivial configuration can satisfy the relation (5.2). Therefore, stable axisymmetric configurations must have the winding number \(\nu=1\). In addition, by substituting the ansatz into the topological charge, one obtains
\[Q=-\frac{\nu}{2}\left[\cos f(r)\right]_{r=0}^{r=\infty}=-\frac{\nu}{2}\left[n_{3}(r)\right]_{r=0}^{r=\infty}. \tag{5.6}\]
Therefore, a stable configuration with \(\nu=1\) possesses \(Q=\pm 1\) when the vacuum is \(n_{3}=\mp 1\).
The Euler-Lagrange equation of this model is given by
\[\partial_{b}^{2}n_{a}+2\kappa\left[\sin\vartheta(\partial_{a}n_{3}-\delta_{a3}\partial_{b}n_{b})+\cos\vartheta\ \varepsilon_{abc}\partial_{b}n_{c}\right]+(2m^{2}-\kappa^{2})\delta_{a3}n_{3}+\Lambda n_{a}=0 \tag{5.7}\]
where \(\Lambda\) is a Lagrange multiplier. Note that, unlike in the kink and CSL cases, the DM interaction contributes not only to the energy but also to the equation of motion. We numerically solve this equation using a conjugate gradient method with the fourth-order finite difference approximation. Our simulations are performed on a grid with \(201\times 201\) lattice points and a lattice spacing \(\Delta=0.1\). For the initial input, we employed the rotationally invariant configuration (5.4) with \(\nu=1,\gamma=0\), and a smooth monotonic function \(f(r)\) satisfying the boundary condition. In Fig. 9, we give numerical solutions for a single magnetic (anti-)skyrmion with the easy-axis potential and the Bloch-type DM interaction, i.e., \(\vartheta=0\). As one can see from the energy density plots, the stable (anti-)skyrmion has the shape of a domain-wall ring,11 inside which the energy density is negative due to the DM interaction. Fig. 10 shows that the color D2(D3)-brane worldvolume is bent and touches the flavor D6(D5)-brane on the opposite side around the position of the D0(D1)-brane corresponding to the magnetic skyrmion in type IIA(B) string theory.
Footnote 11: A domain wall ring as a skyrmion was found in the context of the baby Skyrme model [25].
Since there is repulsion between skyrmions, multiple skyrmion states are always unstable, just as for magnetic skyrmions with a Zeeman interaction [91].12
Footnote 12: The presence of the Zeeman interaction \(Bn_{3}\) changes only the exponent of the asymptotic behavior, and thus only the strength of repulsion is different.
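The paper's solutions were obtained with a conjugate-gradient method and fourth-order differences; as a much cruder but self-contained alternative, the following sketch (ours, not the authors' code) relaxes the energy (5.1) for the Bloch case \(\vartheta=0\) by damped gradient flow with a projection keeping \(|\mathbf{n}|=1\), starting from the ansatz (5.4) with \(\nu=1\) on top of the \(n_{3}=-1\) vacuum.

```python
# Schematic relaxation of (5.1): damped gradient flow with tangential
# projection; Bloch DM term (theta = 0), kappa = 1, m^2 = 2 as in Fig. 9.
import numpy as np

kappa, m2, dx, N, eta, steps = 1.0, 2.0, 0.1, 201, 0.002, 5000
xs = (np.arange(N) - N//2) * dx
X, Y = np.meshgrid(xs, xs, indexing='ij')
r, th = np.hypot(X, Y), np.arctan2(Y, X)
f = np.pi * (1.0 - np.exp(-r))                 # f(0)=0, f(inf)=pi: vacuum n3=-1
n = np.stack([np.cos(th)*np.sin(f), np.sin(th)*np.sin(f), np.cos(f)], -1)

def d(a, axis):                                 # central differences
    return np.gradient(a, dx, axis=axis)

for _ in range(steps):
    lap = sum((np.roll(n, 1, ax) + np.roll(n, -1, ax) - 2*n) / dx**2
              for ax in (0, 1))
    curl = np.stack([d(n[..., 2], 1), -d(n[..., 2], 0),
                     d(n[..., 1], 0) - d(n[..., 0], 1)], -1)
    g = -lap + 2*kappa*curl                     # variation of kinetic + DM terms
    g[..., 2] -= (2*m2 - kappa**2) * n[..., 2]  # easy-axis potential term
    g -= np.einsum('ija,ija->ij', g, n)[..., None] * n   # project onto sphere
    n -= eta * g
    n[0, :] = n[-1, :] = n[:, 0] = n[:, -1] = (0.0, 0.0, -1.0)  # Dirichlet edges
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
```

The small step size is dictated by the explicit treatment of the Laplacian (\(\eta\lesssim\Delta^{2}/4\)); a few thousand sweeps should already give a qualitative domain-wall-ring profile of the kind shown in Fig. 9.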
### Domain-wall Skyrmions
The ferromagnetic phase admits both magnetic domain walls and magnetic skyrmions. When they coexist, they feel attraction. Thus, a magnetic skyrmion should be absorbed into a domain wall. The final state is a stable composite state called a domain-wall skyrmion [17; 24; 25]. In such a situation, the magnetic skyrmion is realized as a sine-Gordon soliton in the domain-wall effective field theory which is a sine-Gordon model with a potential term induced by the DM term [17].
Figure 11: Boundary condition used in the numerical simulation for the domain-wall skyrmions with the parameters \(\kappa>0,\vartheta=0\).

Figure 12: Domain-wall skyrmion and domain-wall anti-skyrmion with the easy-axis potential (\(\kappa=1.0,m^{2}=2.0,\vartheta=0\)). The top panels represent quantities of a domain-wall skyrmion, and the bottom panels those of a domain-wall anti-skyrmion. The panels a) and d) show the value of \(n_{3}\); b) and e) the energy density; c) and f) the topological charge density.

To obtain explicit solutions of domain-wall skyrmions, we numerically solve the equation of motion in Eq. (5.7) with the same method used in the last subsection. We run our simulations by replacing the spatial domain \([-10,10]\times[-10,10]\subset\mathbb{R}^{2}\) with a grid with \(201\times 201\) lattice points and lattice spacing \(\Delta=0.1\). We impose the Dirichlet boundary conditions that assign a different vacuum to each boundary in the \(x^{2}\)-direction and the lowest energy single-kink solution with the easy-axis potential in Eq. (4.10) to the boundaries in the \(x^{1}\)-direction. In Fig. 11, we show a concrete example of the boundary condition. In the bulk, we employ the following configuration as an initial input by choosing parameters compatible with the boundary conditions:
\[\mathbf{n}=\left(\cos\phi(x^{2})\sin f_{\rm kink}(x^{1}),\sin\phi(x^{2})\sin f_{\rm kink}(x^{1}),\cos f_{\rm kink}(x^{1})\right) \tag{5.8}\]
where \(f_{\rm kink}\) is the single kink solution for the easy-axis case in Eq. (4.10) with the moduli parameter \(X=0\), and
\[\phi=4\arctan e^{cx^{2}}-\vartheta\mp\frac{\pi}{2} \tag{5.9}\]
with a real parameter \(c\). In Fig. 12, we present numerical solutions for domain-wall (anti-)skyrmions. The upper figures show a single skyrmion on the wall and the lower figures show a single anti-skyrmion on the wall. It is important to emphasize that both the skyrmion and the anti-skyrmion are stable. We can observe that the domain wall is bent in the vicinity of the (anti-)skyrmion, which is consistent with Ref. [41]. The direction of the bending depends on the sign of the topological charge of the skyrmion.

Figure 13: A kinky D2-D0-brane bound state in the D2-D6-ALE system in type-IIA string theory describing a domain-wall skyrmion. A similar configuration can be considered in the Hanany-Witten brane configuration in type-IIB string theory.

Figure 14: Shapes of the D2-brane for a domain-wall skyrmion (left) and anti-skyrmion (right) with the easy-axis potential (\(\kappa=1.0,m^{2}=2.0,\vartheta=0\)). The kinky shape of the D2-brane is pulled or pushed due to the D0-brane.
In Fig. 13, we schematically draw brane configurations for a domain-wall skyrmion, as a combination of those for a domain wall and magnetic skyrmion. As a consequence of the bending of the domain wall in Fig. 12, the D2-brane worldvolume forming a kink is pulled to either direction of the wall in the vicinity of a D0-brane corresponding to the magnetic skyrmion, as shown in Fig. 14.
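For completeness, the initial configuration (5.8)-(5.9) is straightforward to set up; the sketch below (ours) uses \(\kappa=1\), \(m^{2}=2\), \(\vartheta=0\), the upper sign, and an illustrative value \(c=1\).

```python
# Sketch of the initial input (5.8)-(5.9) for the domain-wall skyrmion run.
import numpy as np

kappa, m2, th, c = 1.0, 2.0, 0.0, 1.0
mu = np.sqrt(2*m2 - kappa**2)                    # kink scale of (4.10)
xs = np.linspace(-10.0, 10.0, 201)
X1, X2 = np.meshgrid(xs, xs, indexing='ij')
f = 2*np.arctan(np.exp(mu*X1))                   # easy-axis kink (4.10), X=0
phi = 4*np.arctan(np.exp(c*X2)) - th - np.pi/2   # sine-Gordon twist (5.9)
n = np.stack([np.cos(phi)*np.sin(f), np.sin(phi)*np.sin(f), np.cos(f)], -1)
```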
## 6 Summary and Discussion
We have given a string theory construction for chiral magnets in terms of the Hanany-Witten brane configuration (consisting of D3, D5 and NS5-branes) in type-IIB string theory, and the fractional D2- and D6-branes on the Eguchi-Hanson manifold in type-IIA string theory. In both cases, the flavor branes are magnetized by a constant magnetic flux. The \(O(3)\) sigma model with the DM interaction describing chiral magnets is realized on the worldvolume of the color D-branes. As summarized in Fig. 2, we have found that the ground states are not uniform in general: The ground state is either a ferromagnetic (uniform) state, a CSL phase with the easy-axis potential or the easy-plane potential, or the helimagnetic state. A magnetic domain wall in the ferromagnetic phase is realized by a kinky D-brane. In the CSL phase with the easy-axis (plane) potential, the uniform state is unstable because a single (non)topological domain wall has negative energy due to the DM interaction. Consequently, the color D-brane is snaky (zigzag) between the two separated flavor D-branes as in Fig. 6 (7). We also have constructed magnetic skyrmions realized as D1-branes (fractional D0-branes) in the former (latter) configuration. We have shown that the worldvolume of the host D2-brane is bent at the position of the D0-brane representing the magnetic skyrmion and touches the other flavor D-brane, see Fig. 10. Finally, we have constructed domain-wall skyrmions in the ferromagnetic phase. The domain-wall worldvolume is no longer flat in the vicinity of the (anti-)skyrmion and is pulled in a direction determined by the skyrmion topological charge as in Fig. 12. Consequently, the D2-brane worldvolume is pulled in the vicinity of the D0-brane as in Fig. 14.
Before closing this paper, let us discuss future directions. One of the most important directions may be the introduction of the Zeeman term \(n_{3}\) induced by an applied magnetic field. In such a case, a skyrmion lattice phase is also a possible ground state in the phase diagram [11; 12; 13], where skyrmions have negative energy due to the DM term [14; 15; 16; 49]. Whether such a term can be introduced in brane configurations will be an interesting and important open question.
The three inhomogeneous ground states, the CSL phases with the easy-axis (plane) potential and the helimagnetic phase, are continuously connected, i.e., related by crossovers. On the other hand, a first-order phase transition exists between the ferromagnetic phase and the easy-axis CSL phase. Soliton formations at this transition were studied in Refs. [92; 93] in the
framework of the chiral sine-Gordon model. These studies should be extended to the case of chiral magnets, the \(O(3)\) model with the DM term.
Domain walls and skyrmions are related by a Scherk-Schwarz dimensional reduction [84] or a T-duality [61; 62] in the string theory language, at least in the absence of the DM term. It is not clear whether this duality holds with the DM term. Physically, this is related to the aforementioned phase transition.
It is known that when a D\(p\)-brane and an anti-D\(p\)-brane pair annihilate, D\((p-2)\)-branes are created as a consequence of tachyon condensation [94; 95]. A kinky D-brane and an anti-kinky D-brane can annihilate, for instance, at a phase transition from the CSL phase to the ferromagnetic phase. Locally this annihilation can be regarded as a D2-brane anti-D2-brane pair annihilation as in Fig. 15 (left).
Consequently, there appear D0-branes after the pair annihilation as in Fig. 15 (right). These are nothing but magnetic skyrmions. A pair annihilation of a domain wall and an anti-domain wall thus results in the creation of magnetic skyrmions. This process was studied in the \(O(3)\) model without the DM term [96; 97] and in Bose-Einstein condensates [98; 99; 100; 101]. In these cases, the \(U(1)\) moduli exist on the domain walls (see footnote 7), and the creation rate of skyrmions depends on the relative phase. The creation rate is maximized when the relative phase modulus is \(\pi\), which is precisely the case with the DM term.
Figure 15: A pair annihilation of a kinky D2-brane and an anti-kinky D2-brane resulting in the creation of D0-branes.

Generalizations of chiral magnets to the \(\mathbb{C}P^{2}\) model or more generally to the \(\mathbb{C}P^{N-1}\) model were studied before [102; 103; 104], in which magnetic skyrmions were mainly investigated. On the other hand, multiple domain walls were studied without the DM interaction in the \(\mathbb{C}P^{N-1}\) model [105; 106] and the Grassmann model [107; 108; 109], for which multiple kinky D-brane configurations were explored in Ref. [60]. Domain-wall skyrmions were also constructed without the DM interaction on parallel multiple walls in the \(\mathbb{C}P^{N-1}\) model [110] and on a single non-Abelian domain wall in the Grassmann model [111; 112]. Furthermore, domain wall junctions or networks [113; 114; 115; 116; 117; 118] and their D-brane configurations [119] were also studied. Introducing the DM interaction in these models will be interesting to explore because these models, at least the \(\mathbb{C}P^{2}\) model, can be experimentally realized in laboratory experiments of ultracold atomic gases.
Finally, modulated ground states (vacua) were discussed in string theory [120; 121; 122; 123; 124] and relativistic field theory [125; 126; 127; 128; 129]. These studies may be useful for analyzing various aspects of the modulated phases found in this paper, for instance phonons in the CSL.
###### Acknowledgements.
We thank Ryo Yokokura and Tetsutaro Higaki for their useful comments. This work is supported in part by JSPS KAKENHI [Grants No. JP23KJ1881 (YA) and No. JP22H01221 (MN)], the WPI program "Sustainability with Knotted Chiral Meta Matter (SKCM\({}^{2}\))" at Hiroshima University. The numerical computations in this paper were run on the "GOVORUN" cluster supported by the LIT, JINR.
## Appendix A The Dzyaloshinskii-Moriya interaction as a background gauge field
In this Appendix, we show that the \(SU(2)\) gauged \(O(3)\) nonlinear sigma model in Eq. (11) can be derived from the strong coupling limit of the \(U(1)\times SU(2)\) gauged linear sigma model given in Eq. (9), i.e.,
\[\mathcal{L}=2\left(\mathcal{D}_{\mu}\Phi\right)^{\dagger}(\mathcal{D}^{\mu}\Phi)-\Phi^{\dagger}(\Sigma\mathbf{1}_{2}-M)^{2}\Phi \tag{A.1}\]
with \(\mathcal{D}_{\mu}\Phi=\left(\partial_{\mu}-ia_{\mu}-\frac{i}{2}A_{\mu}\right)\Phi\) and a \(U(1)\) auxiliary gauge field \(a_{\mu}\). Since the Lagrangian does not contain the kinetic term of \(a_{\mu}\) and \(\Sigma\) in the strong coupling limit, they are auxiliary fields. Similar to the case in which \(A_{\mu}\) is absent given in Eq. (3), one can eliminate the fields using the equations of motion as
\[a_{\mu}=\frac{i}{2}\left\{(\mathscr{D}_{\mu}\Phi)^{\dagger}\Phi-\Phi^{\dagger}(\mathscr{D}_{\mu}\Phi)\right\},\qquad\Sigma=\Phi^{\dagger}M\Phi=mn_{3} \tag{A.2}\]
with \(\mathscr{D}_{\mu}\equiv\partial_{\mu}-\frac{i}{2}A_{\mu}\). Substituting them into the Lagrangian in Eq. (A.1), we obtain
\[\mathcal{L}=2\left\{\left(\mathscr{D}_{\mu}\Phi\right)^{\dagger}(\mathscr{D}^{\mu}\Phi)+\left(\Phi^{\dagger}\mathscr{D}_{\mu}\Phi\right)^{2}\right\}-m^{2}(1-n_{3}^{2}). \tag{A.3}\]
By expanding the expression, we can rewrite the Lagrangian as
\[\mathcal{L}=2\left\{\partial_{\mu}\Phi^{\dagger}\partial^{\mu}\Phi+(\Phi^{\dagger}\partial_{\mu}\Phi)^{2}\right\}+i\Phi^{\dagger}A_{\mu}\partial^{\mu}\Phi-i\partial^{\mu}\Phi^{\dagger}A_{\mu}\Phi-2i\Phi^{\dagger}A_{\mu}\Phi\,\Phi^{\dagger}\partial^{\mu}\Phi+\frac{1}{2}\left\{(A_{\mu}^{a})^{2}-(\Phi^{\dagger}A_{\mu}\Phi)^{2}\right\}-m^{2}(1-n_{3}^{2})\. \tag{A.4}\]
Here, we have used a relation
\[\Phi^{\dagger}A_{\mu}A^{\mu}\Phi=A_{\mu}^{a}\left(A^{\mu}\right)^{b}\Phi^{\dagger}\sigma_{a}\sigma_{b}\Phi=A_{\mu}^{a}\left(A^{\mu}\right)^{b}\ \Phi^{\dagger}(\delta_{ab}\mathbf{1}_{2}+i\varepsilon^{abc}\sigma_{c})\Phi=(A_{\mu}^{a})^{2}. \tag{A.5}\]
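Both Pauli-matrix identities used in this appendix, \(\sigma_{a}\sigma_{b}=\delta_{ab}\mathbf{1}_{2}+i\varepsilon^{abc}\sigma_{c}\) and the Fierz-type relation \((\sigma_{a})_{\alpha\beta}(\sigma_{a})_{\gamma\delta}=2\delta_{\alpha\delta}\delta_{\beta\gamma}-\delta_{\alpha\beta}\delta_{\gamma\delta}\), can be verified in a few lines; the NumPy sketch below is ours.

```python
# NumPy check of the two Pauli-matrix identities used in this appendix.
import numpy as np

s = np.array([[[0, 1], [1, 0]],
              [[0, -1j], [1j, 0]],
              [[1, 0], [0, -1]]])
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

lhs = np.einsum('aij,bjk->abik', s, s)            # sigma_a sigma_b
rhs = (np.einsum('ab,ik->abik', np.eye(3), np.eye(2))
       + 1j * np.einsum('abc,cik->abik', eps, s))
assert np.allclose(lhs, rhs)

fierz = np.einsum('aij,akl->ijkl', s, s)          # (sigma_a)_{ij}(sigma_a)_{kl}
target = (2 * np.einsum('il,jk->ijkl', np.eye(2), np.eye(2))
          - np.einsum('ij,kl->ijkl', np.eye(2), np.eye(2)))
assert np.allclose(fierz, target)
```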
We now compare this Lagrangian and the gauged nonlinear sigma model. First, we can write the Dirichlet term as
\[\partial_{\mu}\mathbf{n}\cdot\partial^{\mu}\mathbf{n}=(\sigma_{a})_{\alpha\beta}(\sigma_{a})_{\gamma\delta}\ \partial_{\mu}(\Phi_{\alpha}^{*}\Phi_{\beta})\partial^{\mu}(\Phi_{\gamma}^{*}\Phi_{\delta})=(2\delta_{\alpha\delta}\delta_{\beta\gamma}-\delta_{\alpha\beta}\delta_{\gamma\delta})\partial_{\mu}(\Phi_{\alpha}^{*}\Phi_{\beta})\partial^{\mu}(\Phi_{\gamma}^{*}\Phi_{\delta})=2\partial_{\mu}(\Phi_{\alpha}^{*}\Phi_{\beta})\partial^{\mu}(\Phi_{\beta}^{*}\Phi_{\alpha})=4\left\{\partial_{\mu}\Phi^{\dagger}\partial^{\mu}\Phi+(\Phi^{\dagger}\partial_{\mu}\Phi)^{2}\right\}. \tag{A.6}\]
The DM interaction can be rewritten as
\[\mathbf{A}_{\mu}\cdot(\mathbf{n}\times\partial^{\mu}\mathbf{n})=\varepsilon^{abc}A_{\mu}^{a}n_{b}\partial^{\mu}n_{c}=\varepsilon^{abc}A_{\mu}^{a}\ \Phi^{\dagger}\sigma_{b}\Phi\ \partial^{\mu}\left(\Phi^{\dagger}\sigma_{c}\Phi\right)=iA_{\mu}^{a}\ \Phi^{\dagger}\left(\sigma_{a}\sigma_{b}-\delta_{ab}\mathbf{1}_{2}\right)\Phi\ \partial^{\mu}\left(\Phi^{\dagger}\sigma_{b}\Phi\right)=iA_{\mu}^{a}\Phi^{\dagger}\ \sigma_{a}\sigma_{b}\Phi\ \partial^{\mu}\left(\Phi^{\dagger}\sigma_{b}\Phi\right)-i\ \partial^{\mu}\left(\Phi^{\dagger}A_{\mu}\Phi\right) \tag{A.7}\]
where we have used \(\varepsilon^{abc}\sigma_{c}=-i\left(\sigma_{a}\sigma_{b}-\delta_{ab}\mathbf{1}_ {2}\right)\). Using the relation \((\sigma_{a})_{\alpha\beta}(\sigma_{a})_{\gamma\delta}=(2\delta_{\alpha\delta} \delta_{\beta\gamma}-\delta_{\alpha\beta}\delta_{\gamma\delta})\), one obtains
\[\Phi^{\dagger}\sigma_{a}\sigma_{b}\Phi\,\Phi^{\dagger}\sigma_{b}\partial_{\mu}\Phi=\left(\Phi^{\dagger}\sigma_{a}\right)_{\alpha}(\sigma_{b})_{\alpha\beta}\Phi_{\beta}\ \Phi_{\gamma}^{\dagger}(\sigma_{b})_{\gamma\delta}\partial_{\mu}\Phi_{\delta}=(2\delta_{\alpha\delta}\delta_{\beta\gamma}-\delta_{\alpha\beta}\delta_{\gamma\delta})\left(\Phi^{\dagger}\sigma_{a}\right)_{\alpha}\Phi_{\beta}\ \Phi_{\gamma}^{\dagger}\partial_{\mu}\Phi_{\delta}=2\Phi^{\dagger}\sigma_{a}\partial_{\mu}\Phi-\Phi^{\dagger}\sigma_{a}\Phi\ \Phi^{\dagger}\partial_{\mu}\Phi \tag{A.8}\]
and
\[\Phi^{\dagger}\sigma_{a}\sigma_{b}\Phi\ \partial_{\mu}\Phi^{\dagger}\sigma_{b}\Phi =(\sigma_{b})_{\alpha\beta}(\sigma_{b})_{\gamma\delta}\left(\Phi^{\dagger}\sigma_{a}\right)_{\alpha}\Phi_{\beta}\partial_{\mu}\Phi_{\gamma}^{\dagger}\Phi_{\delta}\] \[=(2\delta_{\alpha\delta}\delta_{\beta\gamma}-\delta_{\alpha\beta}\delta_{\gamma\delta})\left(\Phi^{\dagger}\sigma_{a}\right)_{\alpha}\Phi_{\beta}\partial_{\mu}\Phi_{\gamma}^{\dagger}\Phi_{\delta}\] \[=2\Phi^{\dagger}\sigma_{a}\Phi\partial_{\mu}\Phi^{\dagger}\Phi-\Phi^{\dagger}\sigma_{a}\Phi\partial_{\mu}\Phi^{\dagger}\Phi\] \[=\Phi^{\dagger}\sigma_{a}\Phi\partial_{\mu}\Phi^{\dagger}\Phi. \tag{132}\]
Combining them, we can rewrite the DM interaction as
\[\mathbf{A}_{\mu}\cdot(\mathbf{n}\times\partial^{\mu}\mathbf{n})= i\left\{2\Phi^{\dagger}A_{\mu}\partial^{\mu}\Phi-\Phi^{\dagger}A_{\mu}\Phi\Phi^{\dagger}\partial^{\mu}\Phi+\Phi^{\dagger}A_{\mu}\Phi\partial^{\mu}\Phi^{\dagger}\Phi\right\}-i\partial^{\mu}\left(\Phi^{\dagger}A_{\mu}\Phi\right)\] \[= i\Phi^{\dagger}A_{\mu}\partial^{\mu}\Phi-i\partial^{\mu}\Phi^{\dagger}A_{\mu}\Phi-2i\Phi^{\dagger}A_{\mu}\Phi\Phi^{\dagger}\partial^{\mu}\Phi\, \tag{133}\]
where we have used \(\partial_{\mu}\Phi^{\dagger}\Phi=-\Phi^{\dagger}\partial_{\mu}\Phi\). In addition, we have
\[(\mathbf{A}_{\mu}\times\mathbf{n})^{2}=|\mathbf{A}_{\mu}|^{2}|\mathbf{n}|^{2}-(\mathbf{A}_{\mu}\cdot\mathbf{n})^{2}=(A_{\mu}^{a})^{2}-(\Phi^{\dagger}A_{\mu}\Phi)^{2}. \tag{134}\]
We finally find from Eqs. (129), (130), (133) and (134) that the \(U(1)\times SU(2)\) gauged linear sigma model given in Eq. (9) reduces in the strong coupling limit to the \(SU(2)\) gauged \(O(3)\) nonlinear sigma model in Eq. (11).
2308.01585 | A footnote to a paper of Deodhar | Let $X\subseteq G\slash B$ be a Schubert variety in a flag manifold and let
$\pi: \tilde X \rightarrow X$ be a Bott-Samelson resolution of $X$. In this
paper we prove an effective version of the decomposition theorem for the
derived pushforward $R \pi_{*} \mathbb{Q}_{\tilde{X}}$. As a by-product, we
obtain a recursive procedure to extract Kazhdan-Lusztig polynomials from the
polynomials introduced by V. Deodhar in \cite{Deo}, which does not require
prior knowledge of a minimal set. We also observe that any family of
equivariant resolutions of Schubert varieties allows one to define a new basis in
the Hecke algebra and we show a way to compute the transition matrix, from the
Kazhdan-Lusztig basis to the new one. | Davide Franco | 2023-08-03T07:41:47Z | http://arxiv.org/abs/2308.01585v1 | # A Footnote to a paper of Deodhar
###### Abstract.
Let \(X\subseteq G/B\) be a Schubert variety in a flag manifold and let \(\pi:\tilde{X}\to X\) be a Bott-Samelson resolution of \(X\). In this paper we prove an effective version of the decomposition theorem for the derived pushforward \(R\pi_{*}\mathbb{Q}_{\tilde{X}}\). As a by-product, we obtain a recursive procedure to extract Kazhdan-Lusztig polynomials from the polynomials introduced by V. Deodhar in [6], which does not require prior knowledge of a minimal set. We also observe that any family of equivariant resolutions of Schubert varieties allows one to define a new basis in the Hecke algebra and we show a way to compute the transition matrix, from the Kazhdan-Lusztig basis to the new one.
_Keywords_: Kazhdan-Lusztig polynomials, Intersection cohomology, Decomposition theorem, Schubert varieties, Bott-Samelson resolution, Hecke algebra.
_MSC2010_: Primary 14B05, 14M15; Secondary 14E15, 14F45, 32S20, 32S60, 58K15.
## 1. Introduction
As the title suggests, this work is a sort of appendix to [6]. In such a paper, Vinay Deodhar introduces a statistic, called _defect_, on the subexpressions of a given reduced expression of an element of a Coxeter group \(W\) (see also [2, Section 6.3.17]). Specifically, let \(w\in W\) be an element of length \(l(w)\) and let
\[w=s_{1}\ldots s_{l},\quad l=l(w)\]
be a reduced expression of \(w\). A _subexpression_\(\sigma=(\sigma_{0},\ldots,\sigma_{l})\) is a sequence of Coxeter group elements such that \(\sigma_{0}=id\) and
\[\sigma_{j-1}^{-1}\sigma_{j}\in\{id,s_{j}\},\quad\text{for all}\quad 1\leq j \leq l.\]
Let \(\mathcal{S}\) be the set of such sequences for the given reduced expression and let
\[\pi(\sigma):=\sigma_{l},\quad\text{if}\quad\sigma=(\sigma_{0},\ldots,\sigma_{l })\in\mathcal{S}.\]
For any subexpression \(\sigma=(\sigma_{0},\ldots,\sigma_{l})\in\mathcal{S}\), Deodhar defines the _defect_ of \(\sigma\) by
\[d(\sigma):=\#\{1\leq j\leq l\mid\sigma_{j-1}^{-1}s_{j}<\sigma_{j-1}\}.\]
If one fix a reduced expression for all \(w\in W\) and consider \(v\in W\) such that \(v\leq w\) in the Bruhat order, then one can use the defect to define the following polynomial
\[Q_{w,v}:=\sum_{\sigma\in\mathcal{S},\,\pi(\sigma)=v}q^{d(\sigma)}\in\mathbb{Z} [q]. \tag{1}\]
In [6], Deodhar proves that the Kazhdan-Lusztig polynomial \(P_{w,v}\) admits a description as a subsum of (1). More precisely, he gives a recursive algorithm for computing a minimal set
\(E_{min}\subseteq\mathcal{S}\) such that
\[P_{w,v}:=\sum_{\sigma\in E_{min},\,\pi(\sigma)=v}q^{d(\sigma)}\in\mathbb{Z}[q], \tag{2}\]
for any pair \(w,v\in W\), \(v\leq w\). What is more, in [6] one can find a new basis of the Hecke algebra of \(W\) that is defined starting from the polynomials (1).
The main aim of this paper is to give a recursive procedure to extract the Kazhdan-Lusztig polynomials \(P_{w,\,v}\) from the polynomials \(Q_{w,\,v}\), without going through the computation of the minimal set \(E_{min}\subseteq\mathcal{S}\), in the case where \(W\) is the Weyl group of a semisimple connected algebraic group over \(\mathbb{C}\). Instead, our approach is based on an effective version of the Beilinson-Bernstein-Deligne-Gabber decomposition theorem (BBDG for short) for the Bott-Samelson resolution. As a by-product of our analysis, we provide a recursive procedure to compute the change-of-basis matrix, from the Kazhdan-Lusztig basis of the Hecke algebra to the basis defined in [6].
The starting point of our analysis is Proposition 3.9 of [6], where the author shows that, when \(W\) is the Weyl group of a semisimple connected algebraic group, the polynomial (1) has a nice geometric interpretation as the Poincaré polynomial of a suitable fiber of the _Bott-Samelson resolution_ of the Schubert variety \(X(w)\) (see Section 2 for a short review of some standard definitions and notations concerning Schubert varieties). More precisely, if we denote by \(G\) an algebraic group with Weyl group \(W\) and Borel subgroup \(B\), then to the chosen reduced expression \(w=s_{1}\ldots s_{l}\) one also associates the Bott-Samelson resolution
\[\pi_{w}:\tilde{X}(w)\to X(w).\]
The smooth variety \(\tilde{X}(w)\) is defined as the subvariety of \((G/B)^{l}\) consisting of \(l\)-tuples \((g_{1}B,\ldots,g_{l}B)\), such that
\[g_{i-1}^{-1}g_{i}\in\overline{Bs_{i}B},\quad 1\leq i\leq l\quad\text{(by convention, $g_{0}=1$)},\]
and \(\pi_{w}\) is the projection on the last factor.
The polynomial (1) is the Poincaré polynomial of the fiber of \(\pi_{w}\) over the cell \(\Omega(v)\subset X(w)\) associated to \(v\):
\[Q_{w,\,v}=\sum_{i}\dim H^{2i}(\pi_{w}^{-1}(x))q^{i},\quad\forall x\in\Omega(v). \tag{3}\]
Our approach for extracting the Kazhdan-Lusztig polynomials from the polynomials defined in (1) is to prove an effective version of the BBDG decomposition theorem for the Bott-Samelson resolution and, more generally, for any equivariant resolution of a Schubert variety (in [9], [11], and [3] partial results in this direction were previously obtained). Specifically, let \(D^{b}_{c}(X)\) be the derived category of bounded complexes of constructible \(\mathbb{Q}\)-vector sheaves on a Schubert variety \(X\subseteq G/B\). The decomposition theorem applied to an equivariant resolution \(\pi:\tilde{X}\to X\) states that the derived direct image \(R\pi_{*}\mathbb{Q}_{\tilde{X}}[\dim X]\) splits in \(D^{b}_{c}(X)\) as a direct sum of shifts of irreducible perverse sheaves on \(X\). By [5, § 1.5], we have a non-canonical decomposition
\[R\pi_{*}\mathbb{Q}_{\tilde{X}}[\dim X]\cong\bigoplus_{i\in\mathbb{Z}}\bigoplus _{j\in\mathbb{N}}IC(L_{ij})[-i], \tag{4}\]
where the summands are shifted intersection cohomology complexes of the semisimple local systems \(L_{ij}\), each of which is supported on a suitable locally closed stratum of codimension \(j\), usually called a _support_ of the decomposition. The summand supported at the general point
is precisely the intersection cohomology of \(X\). The supports appearing in the splitting (4) and the local systems \(L_{ij}\) are, generally, rather mysterious objects when \(j\geq 1\).
Quite luckily, in our case a crucial simplification arises because all the local systems \(L_{ij}\) appearing in the decomposition (4) are trivial by an easy argument that is explained in Proposition 3.2. As a consequence, we have
\[R\pi_{*}\mathbb{Q}_{\tilde{X}}[\dim X]\cong\bigoplus_{X(v)\subseteq X}\bigoplus _{\alpha\in\mathbb{Z}}IC^{\oplus s_{v,\alpha}}_{X(v)}, \tag{5}\]
for suitable multiplicities \(s_{v,\alpha}\). Since we have a splitting like (5) for any equivariant resolution \(\pi_{w}:\tilde{X}(w)\to X(w)\), we can define a Laurent polynomial recording the contribution to the decomposition theorem for \(\pi_{w}\), with support \(X(v)\):
\[D_{w,v}(t):=\sum_{\alpha\in\mathbb{Z}}s_{v,\alpha}^{w}\cdot t^{\alpha}\in \mathbb{Z}[t,t^{-1}],\]
for every pair \((v,w)\) in \(W\) such that \(v\leq w\), and for every resolution.
Now, assume we have fixed an equivariant resolution \(\pi_{w}:\tilde{X}(w)\to X(w)\) for every Schubert variety \(X(w)\) and assume that the cohomology of the fibers of \(\pi_{w}\) vanishes in odd degrees (this property is satisfied by any reasonable resolution of Schubert varieties). Similarly as in (3), define the analogue of Deodhar's polynomial \(Q_{w,v}\) as the Poincaré polynomial of the fibers of \(\pi_{w}\) over the cell \(\Omega(v)\). The main results contained in this paper can be summarized as follows:
a) we set up an iterative procedure that allows one to compute **both** the Kazhdan-Lusztig polynomials \(P_{w,v}\) and the Laurent polynomials \(D_{w,v}\) starting from Deodhar's polynomials \(Q_{w,v}\);
b) we observe that the polynomials \(Q_{w,v}\) allow one to construct a new basis \(\{B_{w}\ |\ w\in W\}\) of the Hecke algebra;
c) we prove that the transition matrix, from the Kazhdan-Lusztig basis to the new basis \(\{B_{w}\ |\ w\in W\}\), can be easily deduced from the Laurent polynomials \(D_{w,v}\)_and does not require prior knowledge of the transition matrix from the Kazhdan-Lusztig basis to the standard one_ (compare with Remark 5.2).
## 2. Notations and basic facts
In this section we review some basic facts concerning the Bruhat decomposition, Schubert varieties and the combinatorics of subexpressions that are needed in the following.
(i) Let \(G\) be a semisimple connected algebraic group over \(\mathbb{C}\). Let \(T\) and \(B\) be a _maximal torus_ and a _Borel subgroup_ of \(G\), respectively. Denote by \(W\) the _Weyl group_ of \(G\). If we consider
\[\{e_{w}\ |\ \ w\in W\}\subset G/B,\]
the set of fixed points for the torus action on \(G/B\), then we have the _Bruhat decomposition_ of \(G/B\) i.e. the disjoint union of _Bruhat cells_
\[G/B=\bigsqcup_{w\in W}\Omega(w),\quad\Omega(w):=Be_{w}.\]
For every \(w\in W\) the _Schubert variety_ associated to \(w\) is defined as the Zariski closure of the corresponding Bruhat cell:
\[X(w):=\overline{\Omega(w)}.\]
(ii) There is a partial order on the Weyl group \(W\) determined by the decomposition above. Specifically, for \(w_{1},w_{2}\in W\) we have
\[w_{1}\geq w_{2}\quad\Leftrightarrow\quad X(w_{1})\supseteq X(w_{2}).\]
Furthermore, we have
\[X(w)=\bigcup_{v\leq w}\Omega(v).\]
We borrow from Deodhar's paper [6] some crucial definitions concerning the combinatorics of subexpressions of a reduced word in a Coxeter group.
**Definition 2.1**.: _[_6_, Def. 2.1 - 2.2]___
1. _Let_ \(w\in W\) _be an element of length_ \(l(w)\) _and let_ \[w=s_{1}\ldots s_{l},\quad l=l(w)\] _be a reduced expression of_ \(w\)_. A subexpression_ \(\sigma=(\sigma_{0},\ldots,\sigma_{l})\) _is a sequence of Weyl group elements such that_ \(\sigma_{0}=id\) _and_ \[\sigma_{j-1}^{-1}\sigma_{j}\in\{id,s_{j}\},\quad\text{for all}\quad 1\leq j \leq l.\] _For any reduced word_ \(r\)_, let_ \(\mathcal{S}_{r}\) _be the set of subexpressions of_ \(r\) _and let_ \[\pi(\sigma):=\sigma_{l},\quad\text{if}\quad\sigma=(\sigma_{0},\ldots,\sigma_{ l})\in\mathcal{S}_{r}.\]
2. _For any_ \(\sigma=(\sigma_{0},\ldots,\sigma_{l})\in\mathcal{S}_{r}\) _define the_ defect _of_ \(\sigma\) _by_ \[d(\sigma):=\#\{1\leq j\leq l\mid\sigma_{j-1}^{-1}s_{j}<\sigma_{j-1}\}.\]
_If_ \(v\leq w\)_, we let_
\[Q_{w,v}:=\sum_{\sigma\in\mathcal{S}_{r},\,\pi(\sigma)=v}q^{d(\sigma)}\in \mathbb{Z}[q]. \tag{6}\]
_From now on we assume that we have chosen a reduced expression for all \(w\in W\), hence (6) provides a polynomial \(Q_{w,v}\in\mathbb{Z}[q]\) for any pair \((w,v)\) such that \(v\leq w\)._
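For small Weyl groups, the defect statistic and the polynomials (6) can be enumerated by brute force over all \(2^{l}\) subexpressions. The following Python sketch is our own illustration (not code from [6]): it works in the symmetric group \(S_{n}\), with permutations as tuples and adjacent transpositions as simple reflections, and uses the standard defect convention comparing \(\sigma_{j-1}s_{j}\) with \(\sigma_{j-1}\) in the Bruhat order:

```python
from itertools import product
from collections import Counter

def compose(p, r):
    # (p * r)(i) = p[r[i]]; permutations of {0, ..., n-1} as tuples
    return tuple(p[r[i]] for i in range(len(p)))

def length(p):
    # Coxeter length of a permutation = number of inversions
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def s(i, n):
    # simple transposition swapping positions i and i+1 (0-based)
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def deodhar_Q(word, n):
    """Enumerate all subexpressions of the reduced word `word` (a list of
    generator indices) and return, for each endpoint v = pi(sigma), the
    polynomial Q_{w,v} as a Counter {defect: multiplicity}."""
    Q = {}
    for picks in product((0, 1), repeat=len(word)):
        sigma, defect = tuple(range(n)), 0
        for pick, i in zip(picks, word):
            step = compose(sigma, s(i, n))      # sigma_{j-1} * s_j
            if length(step) < length(sigma):    # defect position
                defect += 1
            if pick:
                sigma = step
        Q.setdefault(sigma, Counter())[defect] += 1
    return Q

# w = s_1 s_2 s_1, the longest element of S_3 (0-based word [0, 1, 0]);
# the endpoint id receives Q_{w,id} = 1 + q
print({v: dict(c) for v, c in deodhar_Q([0, 1, 0], 3).items()})
```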
As explained in [6, Proposition 3.9], the polynomial above has a nice geometric interpretation as the Poincaré polynomial of a fiber of the _Bott-Samelson resolution_. We recall that to each reduced expression \(w=s_{1}\ldots s_{l}\) one also associates the _Bott-Samelson resolution_
\[\pi_{w}:\tilde{X}(w)\to X(w).\]
The smooth variety \(\tilde{X}(w)\) is defined as the subvariety of \((G/B)^{l}\) consisting of \(l\)-tuples \((g_{1}B,\ldots,g_{l}B)\), such that
\[g_{i-1}^{-1}g_{i}\in\overline{Bs_{i}B},\quad 1\leq i\leq l\quad\text{(by convention, $g_{0}=1$)},\]
and \(\pi_{w}\) is the projection on the last factor. It is clear that \(\pi_{w}\) is equivariant under the action of the Borel subgroup \(B\).
By [6, Proposition 3.9], (6) is the Poincaré polynomial of the fiber of \(\pi_{w}\) over the cell \(\Omega(v)\subset X(w)\):
\[Q_{w,v}=\sum_{i}\dim H^{2i}(\pi_{w}^{-1}(x))q^{i},\quad\forall x\in\Omega(v). \tag{7}\]
## 3. The Decomposition theorem
Formula (7) and Deodhar's paper suggest that there should be a close relationship between the Poincaré polynomials of the stalk cohomology of the complex \(R\pi_{*}\mathbb{Q}_{\tilde{X}(w)}\) and the Kazhdan-Lusztig polynomials. The most important result concerning the complex \(R\pi_{*}\mathbb{Q}_{\tilde{X}(w)}\) and, in general, concerning the topology of proper algebraic maps is the _decomposition theorem_ of Beilinson, Bernstein, Deligne and Gabber, which we now recall.
In what follows, we shall work with cohomology with \(\mathbb{Q}\)-coefficients and the self-dual perversity \(\mathfrak{p}\) (see [1, § 2.1] and [12, p. 79]).
**Theorem 3.1**.: _(**Decomposition theorem**[5, 1.6.1]) Let \(f:X\to Y\) be a proper map of complex algebraic varieties. In \(D^{b}_{c}(Y)\), the derived category of bounded complexes of constructible \(\mathbb{Q}\)-vector sheaves on \(Y\), there is a non-canonical isomorphism_
\[Rf_{*}IC_{X}\cong\bigoplus_{\alpha\in\mathbb{Z}}{}^{\mathfrak{p}}\mathcal{H}^ {\alpha}(Rf_{*}IC_{X})\left[-\alpha\right]. \tag{8}\]
_Furthermore, the perverse sheaves \({}^{\mathfrak{p}}\mathcal{H}^{\alpha}(Rf_{*}IC_{X})\) are semisimple; i.e. there is a decomposition into finitely many disjoint locally closed and nonsingular subvarieties \(Y=\coprod S_{\beta}\) and a canonical decomposition into a direct sum of intersection complexes of semisimple local systems_
\[{}^{\mathfrak{p}}\mathcal{H}^{\alpha}(Rf_{*}IC_{X})\cong\bigoplus_{\beta}IC_ {\overline{S_{\beta}}}(L_{\alpha,S_{\beta}}). \tag{9}\]
Combining (8) and (9) we have
\[Rf_{*}IC_{X}\cong\bigoplus_{\alpha\in\mathbb{Z}}{}^{\mathfrak{p}}\mathcal{H}^ {\alpha}(Rf_{*}IC_{X})[-\alpha]\cong\bigoplus_{\alpha\in\mathbb{Z}}\bigoplus _{\beta}IC_{\overline{S_{\beta}}}(L_{\alpha,S_{\beta}})[-\alpha], \tag{10}\]
which can be written in the form
\[Rf_{*}IC_{X}\cong\bigoplus_{\alpha\in\mathbb{Z}}{}^{\mathfrak{p}}\mathcal{H}^ {\alpha}(Rf_{*}IC_{X})[-\alpha]\cong\bigoplus_{\alpha\in\mathbb{Z}}\bigoplus _{S}{}^{\mathfrak{p}}\mathcal{H}^{\alpha}(Rf_{*}IC_{X})_{S}[-\alpha],\]
where \(S\), called a **support** of \(f\), is any \(\overline{S_{\beta}}\) associated to a non-zero local system \(L_{\alpha,S_{\beta}}\) (see [14, Definition 9.3.41]). In the literature one can find different approaches to the Decomposition Theorem (see [1], [15], [5], [16]), which is a very general result but also rather implicit. On the other hand, there are many special cases for which the Decomposition Theorem admits a simplified and explicit approach. One of these is the case of varieties with isolated singularities. For instance, in the work [10], a simplified approach to the Decomposition Theorem for varieties with isolated singularities is developed, in connection with the existence of a _natural Gysin morphism_, as defined in [7, Definition 2.3] (see also [8] for other applications of the Decomposition Theorem to the Noether-Lefschetz Theory).
As remarked before, the Bott-Samelson resolution
\[\pi_{w}:\tilde{X}(w)\to X(w),\]
is equivariant under the action of the Borel subgroup \(B\), hence \(\pi_{w}\) is stratified according to the Bruhat decomposition
\[X(w)=\bigsqcup_{v\leq w}\Omega(v).\]
In this case, the supports of the decomposition theorem applied to the resolution \(\pi_{w}\) are the Schubert subvarieties
\[X(v)=\overline{\Omega}(v),\qquad v\leq w.\]
Furthermore, all local systems appearing in the decomposition theorem are trivial since the isotropy subgroup of each orbit \(\Omega(v)\) is connected [13, Remark 11.6.2].
We include in the following proposition a proof of these facts, although they are probably well-known, in an attempt to make the present paper reasonably self-contained and also because the simple argument is very close to the rest of the paper.
**Proposition 3.2**.: _Let \(\pi:\tilde{X}\to X\) be an equivariant resolution of a Schubert variety \(X=X(w)\) of dimension \(l\). In the derived category \(D^{b}_{c}(X)\), we have a splitting_
\[R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l]=\bigoplus_{v\leq w}\bigoplus_{\alpha\in \mathbb{Z}}IC^{\oplus s_{v,\alpha}}_{X(v)}[-\alpha],\]
_for suitable multiplicities \(s_{v,\alpha}\). In other words, the supports of the decomposition theorem applied to the resolution \(\pi\) are the Schubert varieties contained in \(X\) and all local systems are trivial._
Proof.: Let \(l:=l(w)\), fix \(r\) such that \(-1\leq r\leq l\) and define the following decreasing sequence of open sets of \(X\)
\[\mathcal{U}_{r}:=X(w)\backslash\bigsqcup_{v\leq w,\,l(v)\leq r}\Omega(v).\]
Clearly we have \(\mathcal{U}_{l}=\emptyset\) and \(\mathcal{U}_{-1}=X\). We are going to prove, by decreasing induction on \(r\), that
\[R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l]\mid_{\mathcal{U}_{r}}=\bigoplus_{v\leq w, \,r<l(v)}\bigoplus_{\alpha\in\mathbb{Z}}IC^{\oplus s_{v,\alpha}}_{X(v)}[- \alpha]\mid_{\mathcal{U}_{r}}, \tag{11}\]
for suitable multiplicities \(s_{v,\alpha}\).
Since \(\mathcal{U}_{l-1}=\Omega:=\Omega(w)\) and since \(\pi\) is an isomorphism over \(\Omega\), we have
\[R\pi_{*}\mathbb{Q}_{\tilde{X}}[l]\mid_{\Omega}\cong\mathbb{Q}_{\Omega}[l],\]
hence the _base step_ follows from the well known isomorphism
\[\mathbb{Q}_{\Omega}[l]\cong IC_{X}\mid_{\Omega}\]
(compare e.g. with [14, Definition 6.3.1]).
As for the _inductive step_, let
\[\mathcal{D}:=\bigsqcup_{v\leq w,\,r=l(v)}\Omega(v).\]
By induction, we have
\[R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l]\mid_{\mathcal{U}_{r}}=\bigoplus_{v\leq w, \,r<l(v)}\bigoplus_{\alpha\in\mathbb{Z}}IC^{\oplus s_{v,\alpha}}_{X(v)}[- \alpha]\mid_{\mathcal{U}_{r}},\]
hence we deduce
\[R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l]\mid_{\mathcal{U}_{r-1}}=\mathcal{L}\mid_ {\mathcal{D}}\oplus\bigoplus_{v\leq w,\,r<l(v)}\bigoplus_{\alpha\in\mathbb{Z}}IC ^{\oplus s_{v,\alpha}}_{X(v)}[-\alpha]\mid_{\mathcal{U}_{r-1}}, \tag{12}\]
where \(\mathcal{L}\) gathers all the summands supported in \(\mathcal{U}_{r}^{*}=\bigsqcup_{v\leq w,l(v)\leq r}\Omega(v)\)
\[\mathcal{L}:=\bigoplus_{\alpha\in\mathbb{Z}}\bigoplus_{S\subseteq\mathcal{U}_{r} ^{*}}{}^{\mathfrak{p}}\mathcal{H}^{\alpha}(R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l] )_{S}[-\alpha].\]
In the previous formula, the summand \({}^{\mathfrak{p}}\mathcal{H}^{\alpha}(R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l])_{S}\) denotes the \(S\) component of \({}^{\mathfrak{p}}\mathcal{H}^{\alpha}(R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l])\) in the decomposition by supports [4, Section 1.1]. By proper base change we also have
\[R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l]\mid_{\mathcal{D}}=\mathcal{L}\mid_{ \mathcal{D}}\oplus\bigoplus_{v\leq w,\,r<l(v)}\bigoplus_{\alpha\in\mathbb{Z}} IC_{X(v)}^{\oplus s_{v,\alpha}}[-\alpha]\mid_{\mathcal{D}}. \tag{13}\]
Since \(\pi:\tilde{X}\to X\) is equivariant and \(\mathcal{D}\) is a disjoint union of \(B\)-orbits, the dimension of the cohomology stalk \(\mathcal{H}^{i}R\pi_{*}(\mathbb{Q}_{\tilde{X}})[l]_{x}\) is independent of \(x\in\mathcal{D}\), for all \(i\). The same holds true also for all \(IC_{X(v)}[-\alpha]\mid_{\mathcal{D}}\). Then (13) shows that the dimension of the cohomology stalk \(\mathcal{H}^{i}\mathcal{L}_{x}\) is independent of \(x\in\mathcal{D}\), for all \(i\). Thus \(\mathcal{L}\mid_{\mathcal{D}}\) is a direct sum of shifted local systems because, by [5, Remark 1.5.1], the perverse cohomology sheaves \({}^{\mathfrak{p}}\mathcal{H}^{i}(\mathcal{L}\mid_{\mathcal{D}})\) coincide, up to a shift, with the ordinary cohomology
\[{}^{\mathfrak{p}}\mathcal{H}^{i}(\mathcal{L}\mid_{\mathcal{D}})\cong\mathcal{ H}^{i-r}\mathcal{L}\mid_{\mathcal{D}}[r],\quad\forall i.\]
We are done, because \(\mathcal{D}=\bigsqcup_{v\leq w,\,r=l(v)}\Omega(v)\) is a disjoint union of affine spaces of dimension \(r\) (compare e.g. with [13, Theorem 9.9.5 (i)]) so any local system on \(\mathcal{D}\) is trivial and (11) follows for the restriction to the open set \(\mathcal{U}_{r-1}\).
## 4. A consequence of the decomposition theorem
In this section we assume to have fixed an equivariant resolution \(\pi_{w}:\tilde{X}(w)\to X(w)\), for any Schubert variety \(X(w)\), \(w\in W\). As a consequence of Proposition 3.2, the decomposition theorem for \(\pi_{w}\) can be stated as
\[(R\pi_{w})_{*}\mathbb{Q}_{\tilde{X}(w)}[l(w)]=\bigoplus_{v\leq w}\bigoplus_{ \alpha\in\mathbb{Z}}IC_{X(v)}^{\oplus s_{v,\alpha}^{w}}[-\alpha], \tag{14}\]
for suitable multiplicities \(s_{v,\alpha}^{w}\), where recall that \(l(w)=\dim X(w)\).
**Notations 4.1**.: For any pair \((v,w)\) of permutations such that \(v\leq w\), let
\[D_{w,v}(t):=\sum_{\alpha\in\mathbb{Z}}s_{v,\alpha}^{w}\cdot t^{\alpha}\in \mathbb{Z}[t,t^{-1}] \tag{15}\]
be the Laurent polynomial recording the contribution to the decomposition theorem (14) coming from support \(X(v)\). Let moreover
\[F_{w,v}(t):=\sum_{\alpha\in\mathbb{Z}}f_{w,v}^{l(w)+\alpha}t^{\alpha}\in \mathbb{Z}[t,t^{-1}],\quad f_{w,v}^{i}:=\dim H^{i}(\pi_{w}^{-1}(x)),\,\,\,x\in \Omega(v), \tag{16}\]
be the shifted Poincaré polynomial of the fibers of \(\pi_{w}\) over \(\Omega(v)\).
**Remark 4.2**.: Let \(X(w)\) be a Schubert variety and let \(\Omega(v)\subseteq X(w)\) be a Schubert cell. It is well known that the dimensions of the stalks \(\mathcal{H}^{\alpha}(IC_{X(w)})_{x}\) do not depend on \(x\in\Omega(v)\).
The Laurent polynomial encoding the dimensions of the stalks \(\mathcal{H}^{\alpha}(IC_{X(w)})_{x}\) is the shifted Kazhdan-Lusztig polynomial:
\[H_{w,v}(t):=\sum_{\alpha\in\mathbb{Z}}h_{w,v}^{\alpha}t^{\alpha},\quad h_{w,v}^ {\alpha}:=\dim\mathcal{H}^{\alpha}(IC_{X(w)})_{x},\,\,\,x\in\Omega(v).\]
Recall that we have
\[P_{w,v}(q)=q^{\frac{l(w)}{2}}H_{w,v}(\sqrt{q}), \tag{17}\]
where \(P_{w,v}(q)\) is the corresponding Kazhdan-Lusztig polynomial (2) (compare e.g. with [2, Theorem 6.1.11]).
Before stating the main result of this section, let us introduce the truncation \(U\) and symmetrizing \(S\) operators:
\[U :\sum_{\alpha\in\mathbb{Z}}a_{\alpha}t^{\alpha}\in\mathbb{Z}[t,t ^{-1}]\mapsto\sum_{\alpha\geq 0}a_{\alpha}t^{\alpha}\in\mathbb{Z}[t];\] \[S :\sum_{\alpha\geq 0}a_{\alpha}t^{\alpha}\in\mathbb{Z}[t]\mapsto a _{0}+\sum_{\alpha>0}a_{\alpha}(t^{\alpha}+t^{-\alpha})\in\mathbb{Z}[t,t^{-1}].\]
**Theorem 4.3**.: _With notations as above, let \(u\leq w\) in \(W\). Then we have the following recursive formulae for the computation of the Laurent polynomials \(D_{w,u}\) and the shifted Kazhdan-Lusztig polynomials \(H_{w,u}\):_
\[\begin{cases}D_{w,u}=S\circ U(R_{w,u})\\ H_{w,u}=t^{-l(u)}(R_{w,u}-D_{w,u})\end{cases}\]
_where_
\[R_{w,u}:=t^{l(u)}\left(F_{w,u}-\sum_{u<v<w}D_{w,v}\cdot H_{v,u}\right).\]
Proof.: For the sake of simplicity, in the proof we set \(\pi:\tilde{X}\to X\) instead of
\(\pi_{w}:\tilde{X}(w)\to X(w)\). By Proposition 3.2, we have
\[R\pi_{*}\mathbb{Q}_{\tilde{X}}[l]=\bigoplus_{v\leq w}\bigoplus_{\alpha\in \mathbb{Z}}IC_{X(v)}^{\oplus s_{v,\alpha}^{w}}[-\alpha]. \tag{18}\]
Consider a cell \(\Omega(u)\subset X\), take the stalk cohomology at \(x\in\Omega(u)\) and recall (15) and (16). From (18) we infer
\[F_{w,u}(t)=\sum_{u\leq v\leq w}D_{w,v}(t)\cdot H_{v,u}(t)=\] \[D_{w,w}(t)\cdot H_{w,u}(t)+D_{w,u}(t)\cdot H_{u,u}(t)+\sum_{u<v <w}D_{w,v}(t)\cdot H_{v,u}(t). \tag{19}\]
Since the resolution \(\pi:\tilde{X}\to X\) is equivariant, it must be an isomorphism over \(\Omega(w)\) and we have \(D_{w,w}(t)=1\) (recall (14) and (15)). Furthermore, from the well known isomorphism
\(IC_{X(u)}|_{\Omega(u)}\cong\mathbb{Q}_{\Omega(u)}[l(u)]\) (compare e.g. with [14, Definition 6.3.1]) we deduce \(H_{u,u}(t)=t^{-l(u)}\). Hence, from (19) we get
\[F_{w,u}(t)=H_{w,u}(t)+t^{-l(u)}\cdot D_{w,u}(t)+\sum_{u<v<w}D_{w,v}(t)\cdot H_{ v,u}(t). \tag{20}\]
The last equality can be written as
\[D_{w,u}(t)=t^{l(u)}\cdot\left(F_{w,u}(t)-\sum_{u<v<w}D_{w,v}(t)\cdot H_{v,u}(t )\right)-t^{l(u)}\cdot H_{w,u}(t)=R_{w,u}-t^{l(u)}\cdot H_{w,u}(t).\]
The support conditions for perverse sheaves imply that \(t^{l(u)}\cdot H_{w,u}(t)\) is concentrated in negative degrees (see [5, p. 552, equation 12]), thus
\[U(D_{w,u}(t))=U(R_{w,u}(t)).\]
Finally, the Laurent polynomials \(D_{w,u}(t)\) are symmetric because of the Hard Lefschetz theorem (see [5, Theorem 1.6.3]), that is to say
\[D_{w,u}(t)=D_{w,u}(t^{-1}).\]
Therefore we have
\[\begin{cases}D_{w,u}(t)=S\circ U(R_{w,u}(t))\\ H_{w,u}(t)=t^{-l(u)}(R_{w,u}(t)-D_{w,u}(t))\end{cases} \tag{21}\]
and the statement follows.
By (17), the last theorem provides an iterative procedure to compute Kazhdan-Lusztig polynomials \(P_{w,v}\) from Poincaré polynomials \(Q_{w,v}\). To this end, let us introduce the following operators:
\[U_{\beta}:\sum_{\alpha\geq 0}c_{\alpha}t^{\alpha}\in\mathbb{Z}\left[t\right] \mapsto\sum_{\alpha\geq\beta}c_{\alpha}t^{\alpha}\in\mathbb{Z}\left[t\right], \ \forall\beta\geq 0.\]
Next statement follows from Theorem 4.3 and collects all informations we obtained until now.
**Corollary 4.4**.: _Assume we have fixed an equivariant resolution \(\pi_{w}:\tilde{X}(w)\to X(w),\) for any Schubert variety \(X(w)\), \(w\in W\). For any pair \(w,u\) in \(W\) such that \(u\leq w\), let \(\tilde{F}_{w,u}\) be the Poincaré polynomial of the fiber \(\pi_{w}^{-1}(x)\), \(\forall x\in\Omega(u)\). Then we have the following recursive formulae:_
\[\begin{cases}\tilde{D}_{w,u}=t^{l(w)-l(u)}\circ S\circ t^{l(u)-l(w)}\circ U_{ l(w)-l(u)}(\tilde{R}_{w,u})\\ \tilde{H}_{w,u}=\tilde{R}_{w,u}-\tilde{D}_{w,u}\end{cases}\]
_where_
\[\tilde{D}_{w,u}:=t^{l(w)-l(u)}D_{w,u},\quad\tilde{H}_{w,u}:=t^{l(w)}H_{w,u}, \quad\tilde{R}_{w,u}:=\tilde{F}_{w,u}-\sum_{u<v<w}\tilde{D}_{w,v}\cdot\tilde{ H}_{v,u}.\]
Proof.: From (16) we find \(\tilde{F}_{w,u}=t^{l(w)}F_{w,u}\). Thus we have
\[\tilde{R}_{w,u}:=\tilde{F}_{w,u}-\sum_{u<v<w}\tilde{D}_{w,v}\cdot\tilde{H}_{v,u}=t^{l(w)}F_{w,u}-\sum_{u<v<w}t^{l(w)-l(v)}D_{w,v}\cdot t^{l(v)}H_{v,u}=t^{l(w)-l(u)}R_{w,u},\]
and the statement straightforwardly follows just combining Theorem 4.3 with
\[t^{l(u)-l(w)}\circ U_{l(w)-l(u)}\circ t^{l(w)-l(u)}=U_{0}.\]
**Remark 4.5**.: By (17), we have
\[P_{w,v}(q)=q^{\frac{l(w)}{2}}H_{w,v}(\sqrt{q})=\tilde{H}_{w,v}(\sqrt{q}),\]
hence the previous corollary provides an iterative procedure that allows one to compute **both** the Kazhdan-Lusztig polynomials \(P_{w,v}\) and the Laurent polynomials \(D_{w,v}\).
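The recursion is readily mechanized. The sketch below is our own illustration, not the paper's code: Laurent polynomials in \(t\) are dictionaries mapping exponents to integer coefficients, `order` is assumed to list the group elements compatibly with the Bruhat order, and membership of a pair among the keys of `F` stands in for a genuine Bruhat comparison:

```python
from collections import defaultdict

def pmul(p, r):
    out = defaultdict(int)
    for a, ca in p.items():
        for b, cb in r.items():
            out[a + b] += ca * cb
    return dict(out)

def psub(p, r):
    out = defaultdict(int, p)
    for a, c in r.items():
        out[a] -= c
    return {a: c for a, c in out.items() if c}

def shift(p, k):   # multiplication by t^k
    return {a + k: c for a, c in p.items()}

def U(p, beta):    # truncation U_beta
    return {a: c for a, c in p.items() if a >= beta}

def S(p):          # symmetrization about degree 0
    out = defaultdict(int)
    for a, c in p.items():
        out[a] += c
        if a > 0:
            out[-a] += c
    return dict(out)

def decompose(F, lengths, order):
    """F[(w, u)] = shifted fiber Poincare polynomial tilde-F_{w,u} in t,
    given for all comparable pairs u <= w. Returns tilde-D and tilde-H;
    tilde-H_{w,u}(sqrt(q)) is the Kazhdan-Lusztig polynomial P_{w,u}."""
    D, H = {}, {}
    for w in order:
        for u in reversed(order):      # top-down, so D[(w, v)] with v > u is ready
            if (w, u) not in F:
                continue
            if u == w:                 # pi_w is an isomorphism over Omega(w)
                D[(w, u)], H[(w, u)] = {0: 1}, {0: 1}
                continue
            R = dict(F[(w, u)])
            for v in order:            # v strictly between u and w
                if v != u and v != w and (w, v) in D and (v, u) in H:
                    R = psub(R, pmul(D[(w, v)], H[(v, u)]))
            k = lengths[w] - lengths[u]
            D[(w, u)] = shift(S(shift(U(R, k), -k)), k)
            H[(w, u)] = psub(R, D[(w, u)])
    return D, H

# smallest example: W = S_2, X(s) = P^1, the Bott-Samelson map is an isomorphism
F = {('e', 'e'): {0: 1}, ('s', 's'): {0: 1}, ('s', 'e'): {0: 1}}
D, H = decompose(F, {'e': 0, 's': 1}, ['e', 's'])
print(H[('s', 'e')])   # {0: 1}, i.e. P_{s,e} = 1
```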
## 5. Bases for the Hecke algebra
As in the previous section, we assume we have fixed an equivariant resolution \(\pi_{w}\) for any Schubert variety \(X(w)\), \(w\in W\), and we assume in addition that the cohomology of the fibers of \(\pi_{w}\) vanishes in odd degrees (this property is satisfied by any reasonable resolution of Schubert varieties). As a consequence, the coefficients of the polynomials \(\tilde{F}_{w,v}\) vanish in odd degrees. Furthermore, from the relations
\[\begin{cases}\tilde{D}_{w,u}=t^{l(w)-l(u)}\circ S\circ t^{l(u)-l(w)}\circ U_{l (w)-l(u)}(\tilde{R}_{w,u})\\ \tilde{R}_{w,u}:=\tilde{F}_{w,u}-\sum_{u<v<w}\tilde{D}_{w,v}\cdot\tilde{H}_{v, u}\end{cases}\]
one deduces immediately that the same holds true for the polynomials \(\tilde{D}_{w,v}\). We define
\[Q_{w,v}(q)=\tilde{F}_{w,v}(\sqrt{q})\in\mathbb{Z}[q], \tag{22}\]
\[S_{w,v}(q)=\tilde{D}_{w,v}(\sqrt{q})\in\mathbb{Z}[q], \tag{23}\]
for all \(v,w\in W\) such that \(v\leq w\). Our aim in this section is to define a new basis for the Hecke algebra by means of the polynomials \(Q_{w,v}(q)\). We start by recalling the definition of Hecke algebra.
Let \(\mathcal{H}\) be the _Hecke algebra_ of \(W\) i.e. the algebra over \(\mathbb{Z}[q^{\frac{1}{2}},q^{-\frac{1}{2}}]\) with basis elements
\(\{T_{w}\ |\ \ w\in W\}\) and relations [2, Sec. 6.1]
\[\begin{cases}T_{s_{i}}T_{w}=T_{s_{i}w}\quad\text{if}\quad l(s_{i}w)>l(w),\\ T_{s_{i}}T_{s_{i}}=(q-1)T_{s_{i}}+qT_{id}.\end{cases}\]
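These two relations already determine left multiplication by any generator in the standard basis; in particular, the descent case \(T_{s_{i}}T_{w}=(q-1)T_{w}+qT_{s_{i}w}\) for \(l(s_{i}w)<l(w)\) follows from the quadratic relation. A small sketch of ours for \(W=S_{n}\), with SymPy handling the coefficients in \(q\):

```python
import sympy as sp

q = sp.Symbol('q')

def compose(p, r):
    return tuple(p[r[i]] for i in range(len(p)))

def length(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def s(i, n):
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def mul_Ts(i, h, n):
    """Left-multiply h = {w: coefficient in q} by T_{s_i} in the basis {T_w}."""
    out = {}
    si = s(i, n)
    for w, c in h.items():
        sw = compose(si, w)                    # s_i * w
        if length(sw) > length(w):
            out[sw] = sp.expand(out.get(sw, 0) + c)
        else:                                  # descent case
            out[w] = sp.expand(out.get(w, 0) + (q - 1) * c)
            out[sw] = sp.expand(out.get(sw, 0) + q * c)
    return out

# T_{s_0} T_{s_0} = (q - 1) T_{s_0} + q T_id in the Hecke algebra of S_2
s0 = (1, 0)
print(mul_Ts(0, {s0: 1}, 2))   # {(1, 0): q - 1, (0, 1): q}
```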
The Hecke algebra is also equipped with the Kazhdan-Lusztig basis \(\{C_{w}\ |\ \ w\in W\}\) where
\[C_{w}=T_{w}+\sum_{v<w}P_{w,v}T_{v} \tag{24}\]
where \(P_{w,v}\in\mathbb{Z}[q]\) are the Kazhdan-Lusztig polynomials.
**Theorem 5.1**.: _For any \(w\in W\), let_
\[B_{w}=T_{w}+\sum_{v<w}Q_{w,v}T_{v}\in\mathcal{H}.\]
_The polynomials \(S_{w,v}(q)\in\mathbb{Z}[q]\) are the coefficients of \(B_{w}\) with respect to the Kazhdan-Lusztig basis_
\[B_{w}=C_{w}+\sum_{v<w}S_{w,v}C_{v}\in\mathcal{H}.\]
Proof.: From (20) and recalling \(\tilde{F}_{w,u}=t^{l(w)}F_{w,u}\), we get
\[\tilde{F}_{w,u}=t^{l(w)}F_{w,u}=\sum_{u\leq v\leq w}t^{l(w)-l(v)}D_{w,v}\cdot t ^{l(v)}H_{v,u}=\sum_{u\leq v\leq w}\tilde{D}_{w,v}\cdot\tilde{H}_{v,u}, \tag{25}\]
where we have taken into account 4.4. Combining 4.5 with (22) and (23) and evaluating in \(t=\sqrt{q}\) the last equality, we get
\[Q_{w,u}=\sum_{u\leq v\leq w}S_{w,v}\cdot P_{v,u}\in\mathbb{Z}[q]. \tag{26}\]
Since the resolution \(\pi:\tilde{X}\to X\) is equivariant, it must be an isomorphism over \(\Omega(w)\), so we have \(Q_{w,w}=1\) and
\[B_{w}=T_{w}+\sum_{u<w}Q_{w,u}T_{u}=\sum_{u\leq w}Q_{w,u}T_{u}=\sum_{u\leq w} \left(\sum_{u\leq v\leq w}S_{w,v}\cdot P_{v,u}\right)T_{u}=\sum_{v\leq w}S_{w, v}\left(\sum_{u\leq v}P_{v,u}T_{u}\right).\]
Again, since \(\pi_{w}\) is an isomorphism over \(\Omega(w)\), we have \(D_{w,w}=S_{w,w}=1\) (recall (14) and (15)) and the last sum can be written as
\[\sum_{u\leq w}P_{w,u}T_{u}+\sum_{v<w}S_{w,v}\left(\sum_{u\leq v}P_{v,u}T_{u}\right)\]
that coincides with
\[C_{w}+\sum_{v<w}S_{w,v}C_{v}\]
in view of (24).
**Remark 5.2**.: Theorem above shows that the transition matrix, from the Kazhdan-Lusztig basis \(\{C_{w}\mid w\in W\}\) to the new one \(\{B_{w}\mid w\in W\}\), is triangular with coefficients \(S_{w,v}\in\mathbb{Z}[q]\). In this work we have set up an iterative procedure that allows the computation of such a matrix and which _does not require prior knowledge of the transition matrix from the Kazhdan-Lusztig basis to the standard one \(\{T_{w}\mid w\in W\}\)_.
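In terms of the Laurent-polynomial sketch given after Remark 4.5 (our own illustration, not the paper's code), the entries of this triangular matrix are read off from the output `D` by the substitution \(t=\sqrt{q}\) of (23), which is well defined because only even exponents occur:

```python
def to_q(p):
    # substitute t = sqrt(q); tilde-D and tilde-F have even exponents only
    assert all(a % 2 == 0 for a in p)
    return {a // 2: c for a, c in p.items()}

# S_{w,v}(q) = to_q(D[(w, v)]) and Q_{w,v}(q) = to_q(F[(w, v)]); the matrix
# (S_{w,v}) is triangular with ones on the diagonal, as in Remark 5.2
```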
**Conflicts of interest**
The author has no conflicts of interest to declare that are relevant to this article.
2305.18028 | ADAPTERMIX: Exploring the Efficacy of Mixture of Adapters for
Low-Resource TTS Adaptation | There are significant challenges for speaker adaptation in text-to-speech for
languages that are not widely spoken or for speakers with accents or dialects
that are not well-represented in the training data. To address this issue, we
propose the use of the "mixture of adapters" method. This approach involves
adding multiple adapters within a backbone-model layer to learn the unique
characteristics of different speakers. Our approach outperforms the baseline,
with a noticeable improvement of 5% observed in speaker preference tests when
using only one minute of data for each new speaker. Moreover, following the
adapter paradigm, we fine-tune only the adapter parameters (11% of the total
model parameters). This is a significant achievement in parameter-efficient
speaker adaptation, and one of the first models of its kind. Overall, our
proposed approach offers a promising solution for speech synthesis,
particularly for adapting to speakers from diverse backgrounds. | Ambuj Mehrish, Abhinav Ramesh Kashyap, Li Yingting, Navonil Majumder, Soujanya Poria | 2023-05-29T11:39:01Z | http://arxiv.org/abs/2305.18028v1 | # Adaptermix: Exploring the Efficacy of _Mixture of Adapters_ for Low-Resource TTS Adaptation
###### Abstract
There are significant challenges for speaker adaptation in text-to-speech for languages that are not widely spoken or for speakers with accents or dialects that are not well-represented in the training data. To address this issue, we propose the use of the "mixture of adapters" method. This approach involves adding multiple adapters within a backbone-model layer to learn the unique characteristics of different speakers. Our approach outperforms the baseline, with a noticeable improvement of 5% observed in speaker preference tests when using only one minute of data for each new speaker. Moreover, following the adapter paradigm, we fine-tune only the adapter parameters (11% of the total model parameters). This is a significant achievement in parameter-efficient speaker adaptation, and one of the first models of its kind. Overall, our proposed approach offers a promising solution for speech synthesis, particularly for adapting to speakers from diverse backgrounds.
Ambuj Mehrish\({}^{1}\), Abhinav Ramesh Kashyap\({}^{2}\), Li Yingting\({}^{3}\), Navonil Majumder\({}^{1}\), Soujanya Poria\({}^{1}\)\({}^{1}\)Singapore University of Technology and Design, Singapore
\({}^{2}\)ASUS Intelligent Cloud Services (AICS) Singapore,
\({}^{3}\)Beijing University of Posts and Telecommunications, China [email protected], [email protected], [email protected], [email protected], [email protected]
**Index Terms**: Text to Speech, Adapters, Mixture of Adapters
## 1 Introduction
One of the key aspects of text-to-speech (TTS) technology is the ability to capture the unique acoustic mannerisms of a given speaker, which can include characteristics such as accent, intonation, rhythm, and other vocal traits that are associated with a speaker's identity [1]. This can be especially important in applications where the speaker's identity is a key factor, such as in voice assistants or interactive voice response systems. Thus, capturing these vocal idiosyncrasies in the generated speech is challenging and often requires many hours of reference speech samples from the speaker. Performing TTS tasks using large reference samples could be infeasible due to various reasons, such as limited memory budget, privacy concerns, or logistical issues. To address this challenge, we suggest a low-resource TTS approach that utilizes reference samples no longer than 1 minute. This strategy will help overcome the aforementioned limitations and enable efficient TTS performance with minimal resources.
TTS is being integrated into a wide range of applications, from virtual assistants to audiobooks, making it more accessible and useful than ever before. As text-to-speech technology continues to evolve, it has the potential to revolutionize communication and accessibility for people with disabilities or language barriers. One of the biggest challenges in TTS research is improving the naturalness and expressiveness of speech [2, 3, 4]. This involves using algorithms to convert written text into spoken words, and there are many techniques available to make the resulting speech sound more natural and expressive. For example, prosody modelling [5, 6] can help to convey intonation, stress, and rhythm, while neural network-based models such as [7, 8] can produce speech with more natural sounding timbre and articulation.
Few-shot _speaker adaptation_ is a challenging task in TTS, which aims to personalize synthesized speech to match the characteristics of a specific speaker. This can involve modifying the acoustic model of the TTS to adjust for factors such as pitch, tone, and pronunciation [9, 10, 11]. Adaptive TTS models can be used to achieve few-shot speaker adaptation through methods such as pre-trained speaker embedding models with a small amount of reference speech [10] or fine-tuning a multi-speaker TTS model. However, fine-tuning requires a large amount of training data to avoid over-fitting and catastrophic forgetting [12].
In this paper, we propose a low-resource speaker adaptation approach based on the mixture of adapters (MoA) [13, 14, 15]. MoA has been widely used in NLP tasks such as text classification [13], question answering [14], and machine translation [16], by fine-tuning pre-trained models with minimal adapter parameters. In TTS, MoA can train a parameter-efficient module on a small amount of data, enabling fine-tuning of a pre-trained TTS model for a new speaker without forgetting previous knowledge. MoA's multiple adapter modules capture fine-grained information about the speaker's speech such as prosody, speaking rate or accent and adapt the acoustic model more effectively than traditional fine-tuning, saving time and computational resources while maintaining or improving performance.
MoA has several advantages over traditional fine-tuning approaches, such as faster training time, better generalization to new tasks, and improved robustness to domain shifts. However, MoA also has some limitations, such as the need for many adapters to achieve good performance, the risk of overfitting on small datasets, and the difficulty of interpreting the adapter weights. While MoA is a promising technique for improving the performance of TTS systems with minimal resources, its use is likely to increase in the future.
To this end, we take a two-phase training approach: i) train a transformer-based encoder-decoder TTS model on a large TTS dataset, LibriTTS [17] with \(100\)h of clean speech samples from \(251\) speakers; ii) adapt this backbone model to work with short-duration speech samples. The first phase is meant to learn the TTS task, traditionally by optimizing all the parameters in the network. The second phase, in contrast, adds only a small fraction of the parameters, in the form of Adapters [18], and optimizes only them to work with shorter reference speech samples. Compared to the approach recently proposed by Hsieh [19], which trains an adapter for each speaker, our approach is much more scalable to a large number of speakers as the adapters are shared among the speakers. To learn the key lower-dimensional vocal qualities from the speech samples in the context of the target text, we add multiple parallel adapters to the transformer decoder and aggregate their outputs, akin to _mixture of experts_[20]. We call our setup adaptermix.
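A minimal PyTorch sketch of such a mixture-of-adapters block is given below. It is our own illustration rather than the exact adaptermix architecture: the layer sizes, the token-wise softmax gate and the residual connection are assumptions, but the pattern of several parallel bottleneck adapters whose aggregated output is added inside a frozen backbone layer is the one described above:

```python
import torch
import torch.nn as nn

class MixtureOfAdapters(nn.Module):
    """K parallel bottleneck adapters whose outputs are mixed by a gate."""
    def __init__(self, d_model=256, bottleneck=32, num_adapters=4):
        super().__init__()
        self.adapters = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, bottleneck),
                          nn.ReLU(),
                          nn.Linear(bottleneck, d_model))
            for _ in range(num_adapters)
        ])
        self.gate = nn.Linear(d_model, num_adapters)   # token-wise mixing weights

    def forward(self, x):                              # x: (batch, time, d_model)
        weights = torch.softmax(self.gate(x), dim=-1)              # (B, T, K)
        outs = torch.stack([a(x) for a in self.adapters], dim=-1)  # (B, T, d, K)
        mixed = (outs * weights.unsqueeze(2)).sum(dim=-1)          # (B, T, d)
        return x + mixed                               # residual connection

# only modules like this one would be updated during adaptation; the
# backbone parameters stay frozen, keeping the trainable fraction small
block = MixtureOfAdapters()
print(block(torch.randn(2, 50, 256)).shape)            # torch.Size([2, 50, 256])
```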
2305.00667 | RISnet: A Scalable Approach for Reconfigurable Intelligent Surface
Optimization with Partial CSI | The reconfigurable intelligent surface (RIS) is a promising technology that
enables wireless communication systems to achieve improved performance by
intelligently manipulating wireless channels. In this paper, we consider the
sum-rate maximization problem in a downlink multi-user
multi-input-single-output (MISO) channel via space-division multiple access
(SDMA). Two major challenges of this problem are the high dimensionality due to
the large number of RIS elements and the difficulty to obtain the full channel
state information (CSI), which is assumed known in many algorithms proposed in
the literature. Instead, we propose a hybrid machine learning approach using
the weighted minimum mean squared error (WMMSE) precoder at the base station
(BS) and a dedicated neural network (NN) architecture, RISnet, for RIS
configuration. The RISnet has a good scalability to optimize 1296 RIS elements
and requires partial CSI of only 16 RIS elements as input. We show it achieves
a high performance with low requirement for channel estimation for geometric
channel models obtained with ray-tracing simulation. The unsupervised learning
lets the RISnet find an optimized RIS configuration by itself. Numerical
results show that a trained model configures the RIS with low computational
effort, considerably outperforms the baselines, and can work with discrete
phase shifts. | Bile Peng, Karl-Ludwig Besser, Ramprasad Raghunath, Vahid Jamali, Eduard A. Jorswieck | 2023-05-01T05:36:38Z | http://arxiv.org/abs/2305.00667v2 | # RISnet: A Scalable Approach for Reconfigurable Intelligent Surface Optimization with Partial CSI
###### Abstract
The reconfigurable intelligent surface (RIS) is a promising technology that enables wireless communication systems to achieve improved performance by intelligently manipulating wireless channels. In this paper, we consider the sum-rate maximization problem in a downlink multi-user multi-input-single-output (MISO) channel via space-division multiple access (SDMA). Two major challenges of this problem are the high dimensionality due to the large number of RIS elements and the difficulty to obtain the full channel state information (CSI), which is assumed known in many algorithms proposed in the literature. Instead, we propose a hybrid machine learning approach using the weighted minimum mean squared error (WMMSE) precoder at the base station (BS) and a dedicated neural network (NN) architecture, RISnet, for RIS configuration. The RISnet has a good scalability to optimize 1296 RIS elements and requires partial CSI of only 16 RIS elements as input. We show it achieves a high performance with low requirement for channel estimation for geometric channel models obtained with ray-tracing simulation. The unsupervised learning lets the RISnet find an optimized RIS configuration by itself. Numerical results show that a trained model configures the RIS with low computational effort, considerably outperforms the baselines, and can work with discrete phase shifts.
Reconfigurable intelligent surfaces, space-division multiple access, unsupervised machine learning, partial channel state information, ray-tracing channel model.
## I Introduction
The reconfigurable intelligent surface (RIS) is a promising technology that has been attracting increasing attention in recent years. An RIS is a planar array of many passive, reconfigurable elements that can reflect incoming electromagnetic waves with shifted complex phases [1]. As an important application of the RIS, we consider the sum-rate maximization problem in a downlink RIS-assisted multi-user multi-input-single-output (MISO) channel with space-division multiple access (SDMA), which is a joint optimization problem of precoding at the base station (BS) and the RIS configuration. In the literature, the weighted minimum mean squared error (WMMSE) precoder [2] is proposed as an optimal precoder for weighted sum-rate maximization with SDMA. On the RIS side, many different approaches for optimizing the RIS configuration have been presented in previous works. This includes block coordinate descent (BCD) to maximize the weighted sum-rate [3], majorization-maximization and the efficient alternating direction method of multipliers (ADMM) to optimize the sum-rate [4, 5] and the sum-rate of multiple user groups [6]. Riemannian manifold conjugate gradient (RMCG) and the Lagrangian method are applied to optimize multiple RISs and BSs to serve users on the cell edge [7]. Exhaustive search and successive refinement algorithm are applied to improve passive beamforming [8]. The configuration of active RISs is optimized with the successive convex approximation algorithm to maximize the signal-to-noise ratio in [9]. Gradient-based optimization is applied to maximize the effective rank and the minimum singular value [10].
These analytical iterative methods do not scale well with the number of RIS elements in general. No more than 100 RIS elements are assumed in [3, 4, 5, 6, 7, 8, 9] and up to 400 RIS elements are assumed in [10], which is still far from the vision of more than 1000 RIS elements [1]. In fact, it has been shown that for most typical scenarios, large RISs with several hundreds to several thousands of elements are needed in order to establish a sufficient link budget [11].
Moreover, it is common that suboptimal approximations (e.g., [3, 8]) have to be made for these approaches, which degrade the performance. On the other hand, machine learning models require much computational resources in training but the trained model can perform inference almost instantly. They are more flexible due to the universal approximation theorem [12] such that they do not have to make suboptimal approximations. In recent years, deep learning [13, 14, 15], deep reinforcement learning [16] and meta learning [17] have been applied to optimize the RIS. However, scalability is still an open problem because the model complexity grows with the number of RIS elements. Therefore, existing works employing machine learning only assume a limited number of RIS elements (no more than 100 in [14, 15, 16, 17] and 256 in [13]).
Another common disadvantage of many references on RIS-assisted communication systems is the full channel state information (CSI) assumption (e.g., [3, 5, 6, 8, 9, 15]). Due to the large number of RIS elements, the full CSI of all elements is extremely difficult to obtain in real time. Possible countermeasures are, e.g., codebook-based RIS optimization [11]. However, the beam training is still a bottleneck.
To address the above issues, we introduce a neural network (NN) architecture _RISnet_, which was first proposed in [18] and receives a major improvement in this work. It has the following two advantages:
1. We use the same filters to all RIS elements and users. The number of parameters in the NN is therefore independent from the number of RIS elements, which improves the scalability significantly. We demonstrate this by applying RISnet to a scenario with 4 users and 1296 RIS elements in Section IV.
2. RISnet supports partial CSI as input, which reduces the difficulty of channel estimation considerably (phase shifts of 1296 RIS elements are computed based on the CSI of 16 RIS elements).
Furthermore, we show that our proposed approach computes the precoding matrix and RIS phase shifts in milliseconds, outperforms the baselines significantly, and works with discrete phase shifts, which is a more realistic assumption compared to continuous phase shifts for, e.g., PIN-diode-based RIS elements.
## II System Model and Problem Formulation
We consider a downlink RIS aided communication from a multi-antenna BS to multiple users, as shown in Figure 1. The channel from BS to RIS is denoted as \(\mathbf{H}\in\mathbb{C}^{N\times M}\), where \(N\) is the number of RIS elements and \(M\) is the number of BS antennas. The channel from RIS to users is denoted as \(\mathbf{G}\in\mathbb{C}^{U\times N}\), where \(U\) is the number of users. The channel from BS directly to users is denoted as \(\mathbf{D}\in\mathbb{C}^{U\times M}\).
The objective is to use SDMA to maximize the sum-rate of the users. The BS can perform precoding subject to the maximum transmit power constraint \(E_{Tr}\). Each RIS element can receive the signal from the BS, shift the complex phase of the signal, and reflect the signal without changing its amplitude.
We denote the precoding matrix as \(\mathbf{V}\in\mathbb{C}^{M\times U}\). The diagonal matrix \(\mathbf{\Phi}\in\mathbb{C}^{N\times N}\) is the signal processing matrix at the RIS. The diagonal element in row \(n\) and column \(n\) is \(\phi_{nn}=e^{j\psi_{n}}\), where \(\psi_{n}\in[0,2\pi)\) is the phase shift of RIS element \(n\). The signal received at the users is given as
\[\mathbf{y}=\left(\mathbf{G}\mathbf{\Phi}\mathbf{H}+\mathbf{D} \right)\mathbf{V}\mathbf{x}+\mathbf{n}, \tag{1}\]
where \(\mathbf{x}\in\mathbb{C}^{U\times 1}\) is the transmitted symbols, \(\mathbf{y}\in\mathbb{C}^{U\times 1}\) is the received symbols and \(\mathbf{n}\in\mathbb{C}^{U\times 1}\) is the thermal noise.
Let \(\mathbf{C}\in\mathbb{C}^{U\times U}\) be the combined matrix of precoding and transmission, i.e.,
\[\mathbf{C}=(\mathbf{G}\mathbf{\Phi}\mathbf{H}+\mathbf{D})\mathbf{V} \tag{2}\]
and \(c_{uv}\) be the element in row \(u\) and column \(v\) of \(\mathbf{C}\). The problem to maximize the sum-rate subject to the maximum transmit power can be given by
\[\max_{\mathbf{V},\mathbf{\Phi}} L=\sum_{u=1}^{U}\log_{2}\left(1+\frac{|c_{uu}|^{2}}{\sum_{v\neq u}|c_{uv}|^{2}+\sigma^{2}}\right)\] (3a) s.t. \[\mathrm{tr}\left(\mathbf{V}\mathbf{V}^{H}\right)\leq E_{Tr} \tag{3b}\] \[|\phi_{nn}|=1\] (3c) \[|\phi_{nn^{\prime}}|=0\text{ for }n\neq n^{\prime}, \tag{3d}\]
where \(\sigma^{2}\) is the noise power.
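For reference, the objective (3a) is cheap to evaluate for a given pair \((\mathbf{V},\mathbf{\Phi})\). The NumPy sketch below is our own illustration with arbitrary random channels and unit transmit power; it only fixes the shapes of the system model:

```python
import numpy as np

def sum_rate(H, G, D, V, psi, sigma2):
    """Sum-rate (3a) for precoder V and RIS phase shifts psi (length N)."""
    Phi = np.diag(np.exp(1j * psi))           # unit-modulus diagonal RIS response
    C = (G @ Phi @ H + D) @ V                 # combined matrix, U x U
    p = np.abs(C) ** 2
    sig = np.diag(p)                          # |c_uu|^2
    interference = p.sum(axis=1) - sig        # sum over v != u of |c_uv|^2
    return np.log2(1.0 + sig / (interference + sigma2)).sum()

# random example: M = 8 BS antennas, N = 64 RIS elements, U = 4 users
rng = np.random.default_rng(0)
M, N, U = 8, 64, 4
H = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
G = rng.normal(size=(U, N)) + 1j * rng.normal(size=(U, N))
D = rng.normal(size=(U, M)) + 1j * rng.normal(size=(U, M))
V = rng.normal(size=(M, U)) + 1j * rng.normal(size=(M, U))
V *= np.sqrt(1.0 / np.trace(V @ V.conj().T).real)   # tr(V V^H) = E_Tr = 1
print(sum_rate(H, G, D, V, rng.uniform(0, 2 * np.pi, N), sigma2=0.1))
```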
In this problem, we optimize both the precoding \(\mathbf{V}\) at the BS and the RIS phase shifts \(\mathbf{\Phi}\). While optimizing \(\mathbf{\Phi}\) is a new and open problem, the WMMSE precoder is proved to be the optimal precoder to maximize the weighted sum rate (WSR). It is shown that minimizing the weighted mean squared error (MSE) is equivalent to maximizing the WSR [2]. By iteratively updating the precoding vectors and the weights of the MSE, the WSR is maximized. We choose the WMMSE precoder for the precoding at the BS. Note that \(\mathbf{V}\) and \(\mathbf{\Phi}\) must be jointly optimized because their optimal values depend on each other. However, the iterative nature of the WMMSE precoder makes it non-differentiable, so gradient ascent cannot be applied through it to optimize the NN. To tackle this problem, we apply alternating optimization (AO), which updates \(\mathbf{V}\) and the phase shift matrix \(\mathbf{\Phi}\) alternately while keeping the other constant for all the data samples.
## III Proposed Machine Learning Solution
The aim of solving problem (3) is to jointly optimize the precoding matrix \(\mathbf{V}\) and the phase adjustments of the RIS elements \(\mathbf{\Phi}\). Since it is difficult to find an optimal solution to this problem due to its high dimensionality (the large number of RIS elements), we propose solving (3) by employing machine learning. In particular, we train an NN to learn the mapping between the available CSI, the optimal precoding and phase adjustments.
### _Framework of Unsupervised Machine Learning_
Figure 1: System model.

We define a neural network \(N_{\theta}\) parameterized by \(\theta\), which maps from the CSI to the RIS phase shifts \(\mathbf{\Phi}\). Since \(N_{\theta}\) cannot take the original CSI as input, we define the channel feature \(\mathbf{\Gamma}\) as an equivalent CSI representation, as will be explained in Section III-B. Hence, we have \(\mathbf{\Phi}=N_{\theta}(\mathbf{\Gamma})\). Applying the WMMSE precoder, the objective function \(L\) defined in (3) is fully determined by \(\mathbf{\Gamma}\) and \(\mathbf{\Phi}\). We can write the objective as \(L(\mathbf{\Gamma},\mathbf{\Phi})=L(\mathbf{\Gamma},N_{\theta}(\mathbf{\Gamma});\theta)\). Note that we emphasize that \(L\) depends on the parameter \(\theta\) given \(\mathbf{\Gamma}\) here. We collect massive channel features in a training set \(\mathcal{D}\) and formulate the unsupervised machine learning problem as
\[\max_{\theta}\sum_{\mathbf{\Gamma}\in\mathcal{D}}L(\mathbf{\Gamma},N_{\theta}( \mathbf{\Gamma});\theta). \tag{4}\]
In this way, we optimize \(N_{\theta}\) which maps from any \(\mathbf{\Gamma}\in\mathcal{D}\) to \(\mathbf{\Phi}\). If the training set is general enough, we would expect that a channel feature \(\mathbf{\Gamma}^{\prime}\notin\mathcal{D}\) can also be mapped to a good \(\mathbf{\Phi}\).
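A hedged sketch of the resulting training loop is shown below. The helper names (`risnet`, `wmmse_precoder`, `loader`) are assumptions standing in for the components described in this paper; the precoder output is held fixed while \(\theta\) is updated, matching the alternating optimization of Section II:

```python
import torch

def sum_rate_torch(H, G, D, V, psi, sigma2=0.1):
    # differentiable version of the sum-rate (3a); psi is a real tensor
    Phi = torch.diag_embed(torch.exp(1j * psi))
    C = (G @ Phi @ H + D) @ V
    p = C.abs() ** 2
    sig = torch.diagonal(p, dim1=-2, dim2=-1)
    interference = p.sum(dim=-1) - sig
    return torch.log2(1 + sig / (interference + sigma2)).sum()

def train_risnet(risnet, wmmse_precoder, loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(risnet.parameters(), lr=lr)
    for _ in range(epochs):
        for Gamma, H, G, D in loader:            # batched channel samples
            psi = risnet(Gamma)                  # RIS phase shifts from features
            with torch.no_grad():                # precoder fixed in this step
                V = wmmse_precoder(H, G, D, psi)
            loss = -sum_rate_torch(H, G, D, V, psi)   # ascend the sum-rate
            opt.zero_grad()
            loss.backward()
            opt.step()
```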
### _Channel Feature_
In the RIS-assisted channel, there are three channel matrices \(\mathbf{H}\), \(\mathbf{G}\) and \(\mathbf{D}\), among which \(\mathbf{H}\) is assumed to be constant because the BS and RIS are stationary and the environment is relatively invariant, while \(\mathbf{G}\) and \(\mathbf{D}\) depend on the user positions and are inputs of \(N_{\theta}\). We would like to define a feature \(\boldsymbol{\gamma}_{un}\) for user \(u\) and RIS element \(n\) such that we can apply the same filters to every user and RIS element to enable scalability. Since \(g_{un}\) in row \(u\) and column \(n\) of \(\mathbf{G}\) is the channel gain from RIS element \(n\) to user \(u\), we can simply include the amplitude and phase of \(g_{un}\) in \(\boldsymbol{\gamma}_{un}\)1. On the other hand, elements in \(\mathbf{D}\) cannot be mapped to RIS elements because \(\mathbf{D}\) is the channel from the BS directly to the users. Therefore, we define \(\mathbf{J}=\mathbf{D}\mathbf{H}^{+}\), where \((\cdot)^{+}\) denotes the pseudo-inverse operation, and (1) becomes \(\mathbf{y}=\left(\mathbf{G}\mathbf{\Phi}+\mathbf{J}\right)\mathbf{H}\mathbf{V}\mathbf{x}+\mathbf{n}\), i.e., signal \(\mathbf{x}\) is precoded with \(\mathbf{V}\), transmitted through channel \(\mathbf{H}\) to the RIS, and through channel \(\mathbf{G}\mathbf{\Phi}+\mathbf{J}\) to the users. Element \(j_{un}\) of \(\mathbf{J}\) is the channel gain from RIS element \(n\) to user \(u\). The features of user \(u\) and RIS element \(n\) can then be defined as \(\boldsymbol{\gamma}_{un}=(|g_{un}|,\arg(g_{un}),|j_{un}|,\arg(j_{un}))^{T}\in\mathbb{R}^{4\times 1}\), where \(|a|\) and \(\arg(a)\) are the amplitude and phase of the complex number \(a\), respectively. The complete channel feature \(\mathbf{\Gamma}\in\mathbb{R}^{4\times U\times N}\) is a three-dimensional tensor, whose elements with index \(u\) and \(n\) in the second and third dimensions are \(\boldsymbol{\gamma}_{un}\).
Footnote 1: The NN does not take complex numbers as input.
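Assembling \(\mathbf{\Gamma}\) is a one-liner given the channel matrices; the following sketch (ours, for illustration) stacks the four feature maps:

```python
import numpy as np

def channel_features(G, D, H):
    """Gamma in R^{4 x U x N}: amplitude/phase of G and of J = D H^+."""
    J = D @ np.linalg.pinv(H)     # U x N, via the Moore-Penrose pseudo-inverse
    return np.stack([np.abs(G), np.angle(G), np.abs(J), np.angle(J)])
```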
### _RISnet Architecture with Full CSI_
The RISnet comprises \(L\) layers. In each layer, we would like to apply the same filters to all users and RIS elements. However, the optimal phase shift of an RIS element depends on all users and the other RIS elements. Therefore, in every layer of the NN, we apply four filters for information processing of current user and current RIS element (cc), current user and other RIS elements (co), other users and current RIS element (oc), and other users and other RIS elements (oo). Denote the input feature of user \(u\) and RIS element \(n\) in layer \(i\) as \(\mathbf{f}_{un,i}\) (in particular, \(\mathbf{f}_{un,1}=\boldsymbol{\gamma}_{un}\)), the output feature of user \(u\) and RIS element \(n\) in layer \(i\) is calculated as
\[\begin{split}&\mathbf{f}_{un,i+1}\\ =&\left(\begin{array}{c}\text{ReLU}(\mathbf{W}_{i}^{cc}\mathbf{f}_{un,i}+\mathbf{b}_{i}^{cc})\\ \left(\sum_{n^{\prime}\neq n}\text{ReLU}(\mathbf{W}_{i}^{co}\mathbf{f}_{un^{\prime},i}+\mathbf{b}_{i}^{co})\right)/(N-1)\\ \left(\sum_{u^{\prime}\neq u}\text{ReLU}(\mathbf{W}_{i}^{oc}\mathbf{f}_{u^{\prime}n,i}+\mathbf{b}_{i}^{oc})\right)/(U-1)\\ \left(\sum_{u^{\prime}\neq u}\sum_{n^{\prime}\neq n}\text{ReLU}(\mathbf{W}_{i}^{oo}\mathbf{f}_{u^{\prime}n^{\prime},i}+\mathbf{b}_{i}^{oo})\right)/\\ \left((N-1)(U-1)\right)\end{array}\right)\end{split} \tag{5}\]
for \(i<L\), where \(\mathbf{W}_{i}^{cc}\in\mathbb{R}^{Q_{i}\times P_{i}}\) is the trainable weight matrix of class cc in layer \(i\), with input feature dimension \(P_{i}\) in layer \(i\) (i.e., \(\mathbf{f}_{un,i}\in\mathbb{R}^{P_{i}\times 1}\)) and output feature dimension \(Q_{i}\) in layer \(i\) of class cc, and \(\mathbf{b}_{i}^{cc}\in\mathbb{R}^{Q_{i}\times 1}\) is the trainable bias of class cc in layer \(i\). Similar definitions and the same dimensions apply to classes co, oc and oo. We can infer from the above description that \(\mathbf{f}_{un,i+1}\in\mathbb{R}^{4Q_{i}\times 1}\). Therefore \(P_{i+1}=4Q_{i}\). The whole output feature \(\mathbf{F}_{i+1}\in\mathbb{R}^{4Q_{i}\times U\times N}\) is a three-dimensional tensor, whose elements with index \(u\) and \(n\) in the second and third dimensions are \(\mathbf{f}_{un,i+1}\). We see from (5) that all four parts of \(\mathbf{f}_{un,i+1}\) use \(\mathbf{F}_{i}\) to compute the output features. The output feature of the current user and RIS element depends exclusively on the current user and RIS element (class cc, i.e., the first part of \(\mathbf{f}_{un,i+1}\)), and the output feature of the other users and/or other RIS elements is the mean of the raw output of the other users and/or other RIS elements (other classes, i.e., the second to fourth parts of \(\mathbf{f}_{un,i+1}\)). Therefore, it should contain sufficient local and global information to make a wise decision on \(\psi_{n}\). The information processing of layer \(i<L\) is illustrated in Figure 2. For the final layer, we use one filter to process the information. Features of different users are summed up to form the phase shifts because all the users share the same phase shifts. Because the information processing before the final layer is symmetric in the users and the addition in the final layer is commutative, the RISnet is permutation-invariant to users, i.e., if we permute the users in the input, the output RIS phase shifts do not change. This is a desired property that can enhance the generality of the NN.
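The layer update (5) amounts to shared linear maps followed by mean-pooling over the complementary users/elements, so the parameter count is independent of \(U\) and \(N\). The PyTorch sketch below is our own illustration of one such layer (it assumes \(U,N\geq 2\)):

```python
import torch
import torch.nn as nn

class RISnetLayer(nn.Module):
    """One layer per Eq. (5): four filters (cc, co, oc, oo) shared by all pairs."""
    def __init__(self, p_in, q_out):
        super().__init__()
        self.fc = nn.ModuleDict({k: nn.Linear(p_in, q_out)
                                 for k in ("cc", "co", "oc", "oo")})

    def forward(self, f):                        # f: (batch, P_i, U, N)
        x = f.permute(0, 2, 3, 1)                # (B, U, N, P_i)
        h = {k: torch.relu(m(x)) for k, m in self.fc.items()}
        U, N = x.shape[1], x.shape[2]
        co = (h["co"].sum(2, keepdim=True) - h["co"]) / (N - 1)   # other elements
        oc = (h["oc"].sum(1, keepdim=True) - h["oc"]) / (U - 1)   # other users
        oo = (h["oo"].sum((1, 2), keepdim=True) - h["oo"].sum(1, keepdim=True)
              - h["oo"].sum(2, keepdim=True) + h["oo"]) / ((U - 1) * (N - 1))
        out = torch.cat([h["cc"], co, oc, oo], dim=-1)            # (B, U, N, 4 Q_i)
        return out.permute(0, 3, 1, 2)           # (B, 4 Q_i, U, N)

# stacking layers: the feature dimension quadruples, P_{i+1} = 4 Q_i
layer = RISnetLayer(4, 8)
print(layer(torch.randn(2, 4, 3, 16)).shape)     # torch.Size([2, 32, 3, 16])
```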
In this way, we achieve good scalability with the RISnet. However, it is difficult to obtain the full CSI (\(\boldsymbol{\gamma}_{un}\) for every user and RIS element) in practice. Therefore, we propose to use partial CSI as input to the RISnet.
### _RISnet Architecture with Partial CSI_
If the propagation paths in the channel are mostly specular or line-of-sight (LoS) paths, the full CSI can be inferred from partial CSI because all RIS elements share the same propagation paths (with different path losses and complex phases). This fact suggests that we can use partial CSI as input to the RISnet. We define an RIS element that has already been taken into consideration by the RISnet as an _anchor element_, and a layer in the RISnet that expands the set of anchor elements as an _expansion layer_; the RISnet expands the anchor elements from the RIS elements with CSI to all RIS elements. The basic idea of the expansion layer is to apply the same filter to an adjacent RIS element with the same relative position to the anchor element, as shown in Figure 3. Instead of applying one filter to every RIS element of a class to produce its own output, as described in Section III-C, we apply 9 filters to every anchor element to produce the outputs of itself and of the surrounding 8 RIS elements. The output produced from anchor element \(n\) by filter \(j\) is computed as
\[\mathbf{f}_{\nu(n,j),i+1}=\left(\begin{array}{c}\text{ReLU}(\mathbf{W}_{i,j}^{cc}\mathbf{f}_{un,i}+\mathbf{b}_{i,j}^{cc})\\ \left(\sum_{n^{\prime}\neq n}\text{ReLU}(\mathbf{W}_{i,j}^{co}\mathbf{f}_{un^{\prime},i}+\mathbf{b}_{i,j}^{co})\right)/(N-1)\\ \left(\sum_{u^{\prime}\neq u}\text{ReLU}(\mathbf{W}_{i,j}^{oc}\mathbf{f}_{u^{\prime}n,i}+\mathbf{b}_{i,j}^{oc})\right)/(U-1)\\ \left(\sum_{u^{\prime}\neq u}\sum_{n^{\prime}\neq n}\text{ReLU}(\mathbf{W}_{i,j}^{oo}\mathbf{f}_{u^{\prime}n^{\prime},i}+\mathbf{b}_{i,j}^{oo})\right)/\left((N-1)(U-1)\right)\end{array}\right), \tag{6}\]
where \(\nu(n,j)\) is the index of the RIS element whose output is produced when applying filter \(j\) to the input of anchor element \(n\). According to Figure 3, and assuming that the RIS element indices begin at 1 in the upper left corner and increase first along the rows of a column before moving to the next column (i.e., the index in row \(w\) and column \(h\) is \(w+(h-1)\cdot W\), where \(W\) is the number of rows of the RIS array), we have
\[\nu(n,j)=\begin{cases}n-W-2+j&j=1,2,3,\\ n-5+j&j=4,5,6,\\ n+W-8+j&j=7,8,9.\end{cases} \tag{7}\]
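Equation (7) is easy to implement directly; a small Python helper (with 1-based indices, as in the text) might look as follows.

```python
def nu(n, j, W):
    """RIS element index addressed by filter j (1..9) for anchor element n,
    following eq. (7); W is the number of rows of the RIS array."""
    if j <= 3:
        return n - W - 2 + j   # neighbouring line at offset -W
    if j <= 6:
        return n - 5 + j       # same line as the anchor element
    return n + W - 8 + j       # neighbouring line at offset +W
```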
By defining two such expansion layers, we can generate the phase shifts of 1296 (36\(\times\)36) RIS elements from the channel features of only 16 (4\(\times\)4) RIS elements. This process is illustrated in Figure 4, where the blue RIS elements can estimate the channel from the pilot signals of the users. Such RIS elements make up only 1/81 of all RIS elements. In an RISnet of 8 layers, layers 3 and 6 are such expansion layers, such that the expansion layers are roughly equally spaced among the 8 layers. The other layers are normal layers as described in Section III-C. Note that we can use tensor multiplication to implement (5) and (6) and a permutation matrix to sort the RIS elements into the correct order according to (7), such that we can exploit the power of parallel computing to accelerate training and testing.
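To illustrate the expansion step, the sketch below scatters the 9 filter outputs of each anchor element to its neighbourhood using the `nu` helper above; `apply_filter` abbreviates the full four-class computation of (6) and is a hypothetical callable, not part of the original description.

```python
def expansion_layer(F_in, anchor_idx, apply_filter, W):
    """Grow the set of anchor elements by a factor of 9 (sketch):
    filter j applied to anchor n writes the feature of element nu(n, j)."""
    F_out = {}
    for n in anchor_idx:
        for j in range(1, 10):
            F_out[nu(n, j, W)] = apply_filter(j, n, F_in)
    return F_out
```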
## IV Training and Testing Results
We use the open-source DeepMIMO ray-tracing data set [19] to generate realistic ray-tracing channel data. The positions of BS, RIS and users are chosen to satisfy the following criteria: 1) There are LoS propagation paths between BS and RIS and between RIS and users, but not between BS and users directly; otherwise, the direct channel is so strong that the RIS does not play a significant role. 2) There are multi-path components (MPCs) between BS and RIS such that the channel matrix \(\mathbf{H}\) has a high rank, because the number of served users cannot be greater than the matrix rank. 3) The minimum distance between users in a group for SDMA is 8 meters, such that the channel matrix \(\mathbf{G}\) has a high rank. Following these criteria, the designed scenario is shown in Figure 5.
In the ray-tracing model, all the RIS elements share the same propagation paths (e.g., the LoS path or a reflection path on the ground or on a building) with different path losses and complex phases due to the different positions of the RIS elements.
Figure 4: Expansion of considered RIS elements. Blue: RIS elements with CSI (i.e., anchor RIS elements before first expansion), gray: anchor RIS elements after first expansion, white: anchor RIS elements after second expansion (all RIS elements). Lower left corner: example of the first expansion to extend the anchor RIS elements from the blue element to the other gray elements. Lower right corner: example of the second expansion to extend the anchor RIS elements from the gray element to all other elements.
Figure 3: Application of 9 filters to expand from one anchor RIS element to 9 RIS elements, where \(\mathbf{f}_{5}\) is the channel feature of RIS element 5. Indices of user and layer are omitted for simplicity since the expansion is for RIS elements.
Figure 2: Class of channels between RIS elements and users
The strongest path (the LoS path) is at least 1.5 times stronger than the second-strongest path. There are up to four propagation paths from the RIS to a user due to the limited number of reflectors nearby. In this case, the CSI of a few RIS elements is representative of the CSI of all RIS elements. In the opposite extreme case, if the channel comprises infinitely many weak MPCs due to isotropic scattering, the channel gains are independent and identically distributed (i.i.d.) complex Gaussian random variables across the RIS elements [20]. In this case, the full CSI cannot be inferred from the partial CSI. Between the two extreme cases, the channel gain is contributed by both deterministic strong MPCs and i.i.d. complex Gaussian gains due to weak scattering. Training and testing results using the above three channel models are shown in Figure 6, where the testing results are obtained with channel realizations different from the training set.
Besides the observation that the training and testing results are very similar, which suggests that there is no overfitting, an important conclusion is that the performance with partial CSI depends on the channel model. Assuming the ray-tracing channel model (i.e., there are multiple specular propagation paths, which contribute to the channel gains of all RIS elements), the RISnet with partial CSI achieves a performance similar to the RISnet with full CSI (Figure 6(a)). The ray-tracing channel model fulfills the requirement for applying partial CSI. On the contrary, assuming i.i.d. complex Gaussian channel gains, the 1/81 partial CSI does not provide sufficient information about the full channel, and the RISnet with partial CSI does not work (Figure 6(c)). Between the two extreme cases, i.i.d. channel gains due to infinitely many weak scattering MPCs exist but are not dominant compared to the deterministic strong propagation paths. The RISnet with partial CSI has a slightly worse performance than the RISnet with full CSI because the channel gain is contributed by both deterministic paths and i.i.d. scattering paths (Figure 6(b)).
The contribution of scattering paths depends mainly on the number, sizes and roughness of the scatterers, as well as on the frequency. The more numerous, rougher and smaller the scatterers in the surrounding environment are, the more important the scattering paths in the channel become [21]. According to [22], scattering can be safely ignored in the frequency band between 300 MHz and 3 GHz, suggesting that the proposed method would work in this frequency band.
Figure 7 shows the achieved sum-rate with the three channel models and different settings, where the discrete phase shifts are 0, \(\frac{1}{2}\pi\), \(\pi\) and \(\frac{3}{2}\pi\). Simple rounding is applied to convert continuous phase shifts to discrete phase shifts. We choose random phase shifts and the BCD algorithm [3] as baselines. We observe that the difference between continuous and discrete phase shifts is only marginal for a resolution of \(\frac{1}{2}\pi\). The sum-rate with partial CSI is comparable to the sum-rate with full CSI for the deterministic ray-tracing channel model and for the deterministic channel model plus i.i.d. perturbation. Moreover, the RISnet outperforms the two baselines considerably, except for the RISnet with partial CSI under the i.i.d. channel model. The BCD algorithm employs full CSI but applies a suboptimal approximation, which makes it more efficient at the cost of degraded performance.
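The rounding step mentioned above is straightforward; a one-line sketch for an arbitrary phase resolution could be:

```python
import numpy as np

def round_phase(psi, resolution=np.pi / 2):
    """Round continuous phase shifts to the nearest discrete level,
    e.g. to {0, pi/2, pi, 3*pi/2} for a pi/2 resolution (sketch)."""
    return np.mod(np.round(psi / resolution) * resolution, 2 * np.pi)
```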
Figure 5: The ray-tracing scenario.
Figure 6: Training results with different spatial correlation between RIS elements.
Nevertheless, it still requires more than 10 minutes to run 2000 iterations in order to compute the suboptimal RIS phase shifts for one channel realization. In comparison, although the training of the RISnet takes a long time, applying a trained RISnet to a new channel realization is done in a few milliseconds, and the performance is better than that of the BCD algorithm, which makes the proposed method more suitable as a high-performance, real-time solution.
## V Conclusion
The RIS is a promising technology for future wireless communications to optimize the channel properties for better performance, but it faces two major challenges: scalability and the assumption of full CSI, which can be difficult to obtain in practical scenarios. In this work, we propose an NN architecture, RISnet, which is scalable because its number of parameters is independent of the number of RIS elements (we assume 1296 RIS elements in the simulation) and which requires the CSI of only a small portion of the elements (16 elements in our simulation, i.e., 1/81 of all elements). The RISnet and the WMMSE precoder are applied for SDMA with a multi-antenna BS and multiple users. Simulation results show that the RISnet with full CSI outperforms two baselines for all types of channels and that the RISnet with partial CSI outperforms the baselines for deterministic ray-tracing channels and ray-tracing channels plus i.i.d. complex Gaussian channel gains due to scattering. Moreover, discrete phase shifts with a granularity of \(\frac{1}{2}\pi\) cause only a moderate performance loss.
|
2307.01300 | Traffic Centralization and Digital Sovereignty: An Analysis Under the
Lens of DNS Servers | The Domain Name System (DNS) service is one of the pillars of the Internet.
This service allows users to access websites on the Internet through
easy-to-remember domain names rather than complex numeric IP addresses. DNS
acts as a directory that translates the domain names into a corresponding IP
address, allowing communication between computers on different networks.
However, the concentration of DNS service providers on the Internet affects
user security, privacy, and network accessibility. The reliance on a small
number of large DNS providers can lead to (a) risks of data breaches and
disruption of service in the event of failures and (b) concerns about the
digital sovereignty of countries regarding DNS hosting. In this sense, this
work approaches this issue of DNS concentration on the Internet by presenting a
solution to measure DNS hosting centralization and digital sovereignty in
countries. With the data obtained through these measurements, relevant
questions are answered, such as which are the top-10 DNS providers, if there is
DNS centralization, and how dependent countries are on such providers. | Demétrio F. Boeira, Eder J. Scheid, Muriel F. Franco, Luciano Zembruzki, Lisandro Z. Granville | 2023-07-03T19:09:26Z | http://arxiv.org/abs/2307.01300v1 | # Traffic Centralization and Digital Sovereignity:
###### Abstract
The Domain Name System (DNS) service is one of the pillars of the Internet. This service allows users to access websites on the Internet through easy-to-remember domain names rather than complex numeric IP addresses. DNS acts as a directory that translates the domain names into a corresponding IP address, allowing communication between computers on different networks. However, the concentration of DNS service providers on the Internet affects user security, privacy, and network accessibility. The reliance on a small number of large DNS providers can lead to (a) risks of data breaches and disruption of service in the event of failures and (b) concerns about the digital sovereignty of countries regarding DNS hosting. In this sense, this work approaches this issue of DNS concentration on the Internet by presenting a solution to measure DNS hosting centralization and digital sovereignty in countries. With the data obtained through these measurements, relevant questions are answered, such as which are the top-10 DNS providers, if there is DNS centralization, and how dependent countries are on such providers.
DNS, Internet Access, Communication Protocols, Digital Sovereignty, Measurement
## I Introduction
The Internet's Domain Name System (DNS) is a globally hierarchical naming mechanism that enables the association of networks, servers, and services to Internet Protocol (IP) addresses [1]. DNS enables, for example, accessing Websites through easy-to-remember domain names rather than IP addresses, meaning that wikipedia.org would be translated to 208.80.154.224. The records that map domain names and IP addresses are maintained by authoritative DNS servers that provide authoritative and up-to-date records.
Because deploying a local DNS server requires technical expertise [2], companies often delegate the task of maintaining their authoritative NameServer (NS) records to third-party DNS providers (_e.g._, Cloudflare [3] and Akamai [4]). Such delegation, which has been increasing over the years [5], has led to the current scenario where DNS resolution is concentrated in a small number of large providers. For the sake of the business model, each large DNS provider multiplexes its Information Technology (IT) or data center infrastructure among its client companies [6]. As a result, DNS centralization inevitably leads to security and availability risks, such as threats to user privacy and the inability to resolve domain names in case of an outage or service failure at one of the large providers. The dependency on a few providers creates concerns regarding the _(a)_ dependability and _(b)_ digital sovereignty of countries, especially considering compliance regulations, such as Europe's General Data Protection Regulation (GDPR) and Brazil's Data Protection Law (LGPD).
DNS centralization has been widely investigated in the literature. There exist a number of research efforts on assessing the degree of centralization in authoritative DNS servers [7][5][8], showing, _e.g._, that popular domains share the same authoritative DNS servers. Thus, disruptions (_e.g._, due to cyberattacks or sabotage) of DNS infrastructure providers could lead to collateral damage to multiple DNS domains. Although this centralization aspect has been previously addressed, further research on the digital sovereignty implications of such a DNS dependency is necessary. Analyzing digital sovereignty is crucial because it ensures a country's autonomy, control, and security over its digital infrastructure [9]. Efforts to quantify the dependency of different countries on DNS providers are thus required to uncover possible sovereignty risks for nations and their critical infrastructures (_e.g._, the healthcare, banking, and education sectors).
In this paper, we investigate how country code top-level domains (ccTLDs) from two conglomerates of countries, _(i)_ Brazil, Russia, India, China, and South Africa (BRICS) and _(ii)_ the European Union (EU), are resolved, and we quantify their dependency on foreign public DNS providers. For that, we define an approach that periodically collects measurements of NS records, A records, and AAAA records in order to identify and map the organizations responsible for managing such providers' infrastructure. These measurements use domains extracted from the Tranco list [10]. Thus, we also analyze how domains are managed and discuss the implications for regulations, compliance, and digital sovereignty. The results show that DNS centralization is a reality and a key concern for digital sovereignty, especially for countries that do not have relevant DNS providers and rely on infrastructure providers from countries or companies with different regulations and interests.
The rest of this paper is organized as follows. In Section II, we review background knowledge and discuss related work on DNS centralization and digital sovereignty. In Section III, we introduce our _DNS Measurement_ approach and its components, including implementation details. In Section IV, we present the evaluation and results, followed by a discussion. Finally, in Section V, we conclude the paper and discuss future work.
## II Background and Related Work
Due to the massive damage it may bring to the Internet infrastructure [5], academia has started worrying about and discussing DNS centralization. Some observations found an alarming concentration of DNS traffic, with more than 50% of the observed traffic being handled by only 10 AS operators [11]. There are also efforts toward emerging topics such as building a responsible Internet [12], which proposes more transparency and trust within networks, independent of the vendors and countries that run the underlying infrastructure. Thus, it is clear that companies from the technology and telecommunication sectors have a role to play in ensuring secure communication and are key actors in digital sovereignty.
There are significant concerns about DNS centralization and the impacts it may cause. One big concern relates to performance and how a centralized environment may negatively affect the response time of DNS in some regions of the globe [8]. Internet Service Providers (ISPs) typically operate DNS resolvers for their customers, which means they have access to users' DNS queries and can potentially monitor or manipulate the data, a fact that warrants close attention. This centralization of DNS resolution can raise privacy and political concerns, mainly if ISPs engage in activities like DNS filtering, censorship, or surveillance [13].
Furthermore, DNS centralization raises two additional concerns. The first is security, since cyberattacks are evolving and becoming more sophisticated [14], including those that target or exploit DNS (_e.g._, tunneling, amplification, and floods) to cause technical, economic, and societal impacts [15]; this includes attacks on DNS authoritative servers [16, 17]. The second is the availability of services worldwide, since the phenomenon of centralization is not only logical but also physical and geographical [18].
Another concept that emerges from the discussions on DNS centralization is digital sovereignty. Digital sovereignty refers to a nation's ability to control its digital infrastructure, data, and digital technologies within its territorial borders [19]. It encompasses the idea that countries should be able to shape their digital policies, regulations, and frameworks to protect their national interests, security, and values in the digital realm. Digital sovereignty relies on certain aspects, such as data protection and privacy regulations, domestic digital infrastructure, digital trade and economic policies, and Internet governance [20]. Different works have focused on sovereignty from different perspectives, such as the use of the decentralization provided by blockchain technology [21] as a potential ally of digital sovereignty [22]. However, it is unlikely that fundamental changes will become a reality in the short term since, besides enormous technological efforts and associated costs, they depend on convergence between the technical and political spheres.
It is important to note that digital sovereignty is a complex and evolving topic, and there are debates around its implementation and potential trade-offs. Striking a balance between digital sovereignty and the benefits of an interconnected global digital ecosystem remains challenging for policymakers worldwide. The question, thus, is how the digital sovereignty of countries is affected by the current Internet and its underlying infrastructure. This work, therefore, explores DNS and its centralization in a few companies and governments to shed light on discussions about digital sovereignty under the lens of DNS.
## III Measurement Approach
The approach consists of mapping lists of popular Internet domains (_e.g._, based on publicly available rankings) to their authoritative NSes and the organizations behind providing such a service. This allows identifying _(i)_ who provides the correct IPs, _(ii)_ which organization operates the NS infrastructure, and _(iii)_ to which country and regulations the operator is subject. For that, the approach combines information from domains (_e.g._, A, AAAA, and NS records) and Autonomous System (AS) records provided by Internet registries (_e.g._, LACNIC, RIPE, and ARIN).
An AS is a network of interconnected computing devices that operate under the same policy. It is often managed by a single entity (_e.g._, an ISP or a technology organization) and is identified by an AS Number (ASN). Each AS manages one or more unique IP ranges; for example, _Wikimedia Foundation Inc._ has ASN 14907 and manages the IP range 208.80.152.0/22 in the United States of America and 185.71.138.0/24 in the Netherlands. Thus, it is possible to associate the IP of any NS with an AS and, consequently, with its operator and region.
Therefore, the approach is able to determine the entire flow from the domain name to the organization handling the AS that manages the IP of the associated NS. This allows understanding the different points where centralization and digital sovereignty risks might occur. For example, the owner of an NS can tamper with the DNS records, while the AS operator can disrupt the communication to make the DNS translation unavailable. In both scenarios, a clear DNS-related dependence on a few players that maintain the underlying infrastructure (_e.g._, those that operate ASes and NSes) can be identified. This makes the analysis of such players and of centralization a key pillar for discussing digital sovereignty.
Figure 1 depicts the components that are part of the approach and the flow of information between them. They are organized in three main groups, namely **Datasets**, **Approach**, and **Outputs**. Datasets containing information regarding _Autonomous Systems (AS)_ and a list of _Domains_ are used as inputs to the approach. The ASes responsible for each NS are determined using the lists provided by the Center for Applied Internet Data Analysis (CAIDA). For that, the network prefix-to-AS mapping [23] and the AS-to-organization mapping [24] were used. This allows determining the AS, the organization managing the AS, and, thus, the country/region of DNS providers (based on the IP of the NS). For each measurement, an updated CAIDA list is obtained by the _Data Gatherer_ and processed to ensure that the information is correct and up-to-date.
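As an illustration of the prefix-to-AS step, the sketch below parses a CAIDA prefix-to-AS file and performs a longest-prefix match. The three-column file format and the linear scan are our own simplifying assumptions (a radix tree would be used at scale in practice).

```python
import ipaddress

def load_pfx2as(path):
    """Parse a CAIDA prefix-to-AS file (assumed: prefix, length, ASN
    per whitespace-separated line) into (network, asn) pairs."""
    table = []
    with open(path) as fh:
        for line in fh:
            prefix, length, asn = line.split()[:3]
            table.append((ipaddress.ip_network(f"{prefix}/{length}",
                                               strict=False), asn))
    return table

def ip_to_asn(ip, table):
    """Longest-prefix match of an IP address against the table."""
    addr = ipaddress.ip_address(ip)
    matches = [(net, asn) for net, asn in table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None
```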
For the domains, the Tranco list [10] is used as a dataset since it provides an updated source of the top 1 million Websites on the Internet based on popularity and access traffic. The list is updated considering a variety of sources, such as Alexa, SimilarWeb, and Moz. This offers a reliable and transparent list that can be used to conduct research that requires popular domains. The _Data Gatherer_ also obtains the updated Tranco list for each measurement (using a diff approach to identify changes), and the _Data Processor_ organizes the information of both ASes and domains to be used in further steps.
Next, the _Records Retriever_ analyzes each one of the 1 million domains and retrieves information regarding the A, AAAA, and NS records. For example, for the domain **wikipedia.org**, the A record is 208.80.154.224, the AAAA record is 2620:0:861:ED1A::1, and the NS record is ns0.wikimedia.org. This information is sent to the mapper, which reconstructs the entire DNS resolution path in order to build the _NS Resolution Flow_ and also collects statistics (_e.g._, organization concentration, measurement errors, and identified IPs) for further analysis.
The _NS Mapper_ receives the records regarding the domain and obtains the IP of the NS. This information is then used to map the IP to the corresponding AS managing it. For that, the A record can be used in the case of an IPv4 prefix or the AAAA record for IPv6. Finally, the organization name is obtained by looking at the CAIDA AS organization rank mapping dataset [24], and a complete analysis can be conducted to identify its region and relevant characteristics (_e.g._, regulations and number of ASes being operated). The _NS Mapper_ stores the information obtained in the _Database_ and builds, as output, the _NS Resolution Flow_. This flow shows how the domain is resolved until the organization/company managing the infrastructure is discovered, which is a point that may directly impact DNS resolution in case of network disruption. Further, identifying NSes is crucial as they might tamper with DNS records, since they answer the requests in an authoritative manner.
Figure 2 illustrates a graph-like structure of the _NS Resolution Flow_ for the domains **wikipedia.org** and **dns.br**. In the example, **wikipedia.org** has two NS records, _(i)_ ns0.wikimedia.org and _(ii)_ ns1.wikimedia.org, while **dns.br** has one, a.dns.br. This means these NSes are authoritative servers for these domains and are crucial to their operation. This also applies to the organization that manages the IP addresses and advertises routing information of such servers (_i.e._, their ASes). Such organizations, including the countries in which they operate, are retrieved using the A and AAAA records of the NSes by identifying the resolved IPs using their prefixes and mapping them with the AS dataset list. Thus, in the example, **wikipedia.org** is managed by the _Wikimedia Foundation Inc._, placed in the United States of America, and **dns.br** is managed by _NIC.BR_, placed in Brazil.
The approach implementation and results are publicly available at [25]. Python was used to implement the approach's components, with the _dnspython_[26], a Python library to request and manipulate DNS records, being used to implement the _Records Retriever_. The _NS Mapper_ connects with a _SQLite3_ database to store and manipulate the data required to build the _NS Resolution Flow_. Further, statistics can be retrieved and processed from such a database.
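For illustration, a minimal _Records Retriever_ built on _dnspython_ could look as follows; this is a sketch rather than the authors' code, and error handling is reduced to the essentials.

```python
import dns.resolver

def ns_resolution_flow(domain):
    """Retrieve the NS records of a domain and the A/AAAA records of
    each name server (sketch of the Records Retriever step)."""
    flow = {}
    for ns in dns.resolver.resolve(domain, "NS"):
        name = str(ns.target).rstrip(".")
        ips = []
        for rtype in ("A", "AAAA"):
            try:
                ips += [r.address for r in dns.resolver.resolve(name, rtype)]
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                pass
        flow[name] = ips
    return flow

# e.g. ns_resolution_flow("wikipedia.org")
# -> {"ns0.wikimedia.org": ["<IPv4>", "<IPv6>"], ...}
```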
Fig. 1: Overview of the Approach to Analyze Domains
Fig. 2: NS Resolution Flow Example
## IV Evaluation and Analysis
The measurements considered all the 1 million domains from the Tranco list [10], using only the pay-level domains filter, with the latest list used in the experiments generated on June 16, 2023. To infer the AS names and countries, CAIDA's AS-to-organization dataset [23] was leveraged. The measurements were conducted on a six-core AMD Ryzen 5-5500U @ 2.1 GHz with 8 GB of RAM, connected to the Internet via an Ethernet cable to maintain a stable network connection. Its operating system was a Debian 11 "_bullseye_" stable distribution.
It is essential to mention that during the experiments, not all domains from the Tranco list were resolved correctly (_e.g._, NS servers not found or incorrectly configured), nor was their NS or ASN always identified; this hinders the possibility of identifying the country where their DNS was managed. However, such a limitation does not invalidate the results provided herein.
### _Identifying Top-10 DNS Providers_
Table I lists the top-10 ranking of the DNS providers identified during the analysis of the centralization of DNS traffic. The position in the ranking is based on the number of domains that relied on each DNS provider during the indicated period. Three periods were defined: **Period 1** from 16/12/2022 to 23/01/2023, **Period 2** from 23/01/2023 to 13/02/2023, and **Period 3** from 13/02/2023 to 15/03/2023. As can be seen in the table, the ranking remained stable during these periods; there was only one change (rows highlighted in gray in the table), where TIGGEE was 6th during the first two periods but replaced MICROSOFT-CORP-MSN-AS-BLOCK as 7th in the third period.
Within this context, it was also investigated whether the domains of such DNS providers (_e.g._, cloudflare.com) were managed by them or whether they relied on services from competitors. Table II presents the results of this investigation. The results indicate that not all DNS providers rely on their own DNS services for their domains. For example, Amazon, the second-largest DNS provider according to Table I, uses Oracle's DNS services, and GoDaddy employs its own DNS service but also relies on Akamai's DNS service. However, all the major providers use their own DNS service. This paper does not delve into why **tiggee.com** is among the largest DNS providers on the Internet but is not listed in the top one million domains according to the Tranco list; this aspect could be investigated in future research.
### _Measuring DNS Centralization_
Having identified the top-10 DNS providers responsible for hosting the highest number of domains in the list, one question that arises is whether there is an apparent centralization in those providers or whether the DNS providing service is highly distributed to avoid Single Points of Failure (SPoF) or monopoly. To answer this question, the concentration of domains resolved by the providers listed in Table I was measured from 16/12/2022 until 15/03/2023.
Figure 3 depicts the results of the performed concentration measurements. In the figure, the \(x\)-axis represents the date on which the concentration percentage was calculated, and the \(y\)-axis represents the concentration in the top-10 providers. Over the period, the average concentration was 30% of the measured domains. This means that, on average, 30% of the one million domains of the Tranco list (_i.e._, 300 000 domains) had their DNS records hosted by the top-10 DNS providers (_cf._ Table I). Further, considering that this concentration peaked at 39% on 29/01/2023 and that around 3000 DNS providers were identified as responsible for managing all of the one million domains, there is strong evidence that centralization in the DNS hosting industry is a reality.
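Computing this concentration from a domain-to-provider mapping is simple; a sketch (with a hypothetical input dictionary) is shown below.

```python
from collections import Counter

def top10_concentration(domain_to_provider):
    """Percentage of domains whose DNS is hosted by the ten largest
    providers in a {domain: provider} mapping (sketch)."""
    counts = Counter(domain_to_provider.values())
    top10 = sum(c for _, c in counts.most_common(10))
    return 100.0 * top10 / len(domain_to_provider)
```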
### _Measuring Digital Sovereignty_

Having measured DNS centralization, the next step is to analyze each country's dependency on these providers and quantify how sovereign its Internet infrastructure is in terms of DNS hosting. For that, domains from the Tranco list were selected based on their ccTLD (_e.g._, .br and .cn) and grouped into their political conglomerates. In total, 91 286 out of 95 792 domains using the BRICS and EU ccTLDs were resolved, and their DNS hosting organization was identified. This represents 9.1% and 9.5% of the Tranco list, respectively. Russia's ccTLD (.ru) represented 59% of the resolved domains, approximately 54 168 domains. Results from this analysis, categorized by these groups, are presented in the following sections.
#### IV-C1 BRICS Domains
BRICS represents a conglomerate of five major emerging economies, namely _(a)_ Brazil, _(b)_ China, _(c)_ India, _(d)_ Russia, and _(e)_ South Africa, formed to promote inter-economic cooperation and inter-political discussions. As BRICS does not have an official ccTLD as Europe does, the ccTLDs for the BRICS countries are, respectively, _(a)_ .br, _(b)_ .cn, _(c)_ .in, _(d)_ .ru, and _(e)_ .za.
Figure 4 depicts the results of the BRICS analysis. Each section of the chart represents the percentage of domains from the given ccTLD whose authoritative DNS servers are located in the country of the section. The countries are represented as Alpha-2 ISO country codes [27], and countries with less than 4% of domains were aggregated in the "Others" section. For example, in Brazil (_cf._ Figure 4a), there was a tie between .br domains of the Tranco list that relied on DNS providers from the United States (US) (_i.e._, 46.9%) and domains provided by Brazilian-based companies (_i.e._, 46.8%); the rest of the share (_i.e._, 6.2%) was located in other countries (_e.g._, France and Germany).
It is possible to observe that US-based DNS providers, such as Cloudflare, Inc., Amazon.com, Inc., and Google LLC, represent a significant portion of the DNS hosting industry in the BRICS, with India presenting the highest dependence (_i.e._, 60.3%) of the five nations. The exceptions are Russia (60.8%) and South Africa (53.2%), where most domains are provided by national DNS companies (_e.g._, Yandex.Cloud LLC for Russia and Xneelo (Pty) Ltd for South Africa). This shows indications of concern regarding digital sovereignty.
Further, to provide an overview of the digital sovereignty of the BRICS as a conglomerate, the five countries' results were aggregated and are illustrated in Figure 5. Russia and the United States appear to host the majority of the domains (_i.e._, a total of 73.3%), followed by Brazil, China, and Germany. This behaviour is logical considering the division in Figure 4. This paints a dystopian view of digital sovereignty, in which the BRICS are subject to and dependent on the United States regarding DNS regulations and infrastructure.
#### IV-C2 European Union
The EU is a political and economic union composed of 27 member states (_e.g._, Portugal, Spain, France, Italy, Germany, and Hungary) located in Europe. For these countries, the .eu ccTLD was examined. Any person, company, or organization within the EU may register domains with this ccTLD. Figure 6 illustrates a different scenario than that of the BRICS (_cf._ Figure 5), with more countries sharing the DNS hosting infrastructure of the EU. Germany (_i.e._, DE) represents a significant portion, given its size and number of DNS hosting providers.
However, the US also concentrates a significant portion of the DNS hosting industry for .eu domains. After Germany, France and the Netherlands appear as major countries hosting DNS domains for Europe. This supports the data presented in Table I, where OVH, a French cloud computing company, appears as the 10th DNS provider in the ranking. This concentration in a cloud provider might indicate that other services besides DNS are being hosted in France and the Netherlands, given that such companies offer more than DNS, such as virtual machines, Function-as-a-Service (FaaS), and web hosting, all of which require a DNS provider.
### _Hosting Governmental Domains_
One highly relevant analysis dimension concerning digital sovereignty and centralization is to investigate where restricted TLDs, such as .gov, are hosted. These domains are intended to be used only by federal government institutions (_e.g._, security agencies and institutes). Thus, their DNS should be hosted within federal organizations to maintain critical services for citizens and control over the infrastructure during critical periods (_e.g._, global conflicts, pandemics, or sanctions).
Figure 7 depicts the results of the analysis of the BRICS domains: .gov.br, .gov.cn, .gov.in, and .gov.za. Russia did not present .gov domains in the Tranco list; hence, it is not included in the results. It can be seen that Brazil's governmental domains are mostly resolved within Brazil, specifically in the Federal Data Processing Service (Serviço Federal de Processamento de Dados - SERPRO, in Portuguese), which is the biggest government-owned corporation of IT services in Brazil. Further, Indian and South African government domains are mostly hosted in their own countries, with the National Informatics Centre (NIC) hosting most domains for India and the State Information Technology Agency (SITA) for South Africa. These results show a concern within the BRICS about hosting governmental DNS domains for federal services within government organizations to avoid censorship, data leakage, and disruption of critical services.
### _Discussion and Key Observations_
Different insights can be obtained from our experiments under different lenses. From the technical dimension, we have shown that DNS is centralized in a few players. We also showed that DNS centralization is economic in nature, since big techs from developed countries lead the market. Moreover, several economic impacts (_e.g._, business disruption and reputation harm) may affect companies and governments in case of intentional or non-intentional disruption of the DNS underlying infrastructure. Our findings can also be explored from a legal dimension, since digital sovereignty involves regulations and actions that can be taken by policy-makers based on the technical analysis of the different protocols and dependencies (_e.g._, DNS and its centralization in a few companies and countries). The rest of this section provides a discussion of each of these dimensions.
On the **technical** dimension, the results provide a clear indication of DNS centralization, which can lead to a scenario where the Internet's infrastructure and management are directly dependent on a few players (_e.g._, governments and companies with different technical and political characteristics). This is not an ideal scenario, since it can lead to the issues discussed in Section II, such as security, availability, and performance problems. Moreover, by allowing such centralization in a given country, region, or company, the risk of Internet censorship increases, as such control can be achieved by injecting fake DNS replies to block access to certain content [28]. Thus, the DNS infrastructure and its distribution concentrated on a few authoritative servers may lead to Internet outages (due to misconfigurations) and Internet censorship, as the technical enablers for implementing this control are in place.
On the **economic** dimension of DNS centralization, one related point is the possibility of DNS providers profiting from DNS lookup data. [29] argues that DNS providers do not commercialize such information because of the potential consumer and regulatory backlash of such monetization. However, if the centralization of DNS providers occurs in a country with poorly defined regulations concerning the commercialization of user-sensitive data, further monopoly is risky, as DNS lookups can be valuable for advertisement. Thus, monitoring and addressing DNS centralization and digital sovereignty is critical to tackling this economic perspective. Further, most DNS providers (_e.g._, Amazon, Google, and Microsoft) are also major cloud provider companies [30], whose business is strongly tied to providing a reliable DNS infrastructure to access such cloud instances. However, such a combined service offering leads to a vendor lock-in issue [31] and even further dependence on their infrastructure, in which companies are subject to such providers' pricing policies.
In addition to these possible economic impacts, DNS centralization also has an economic motivation, since big techs (often based in the US) offer DNS resolvers and associated services as part of their business core. In 2020, the DNS market was worth USD 372 million, and it is expected to be worth USD 862 million by 2025 [32]. This growth expectation is attributed to the growing number of domain name registrations and increasing Web traffic.
Fig. 4: Results from the BRICS Domains Separated by ccTLD
Fig. 5: Results from the Aggregated BRICS Domains
Fig. 6: Results from the .eu Domain
Fig. 7: Results from the Analysis of the .gov Domains
Concerns about security, centralization, and digital sovereignty may be part of the marketing and product development strategies of DNS providers and big techs operating the underlying infrastructure.
Lastly, in the **legal and political** dimension, there are different efforts from the EU to strengthen the EU's digital sovereignty, such as the GDPR for the idea of data sovereignty and the action plan for more digital sovereignty called for by the governments of Germany, Estonia, Denmark, and Finland [33]. Cybersecurity experts, entrepreneurs, and decision-makers have also joined the discussion to highlight the need to develop and promote digital infrastructures under European technological sovereignty [34]. However, even though digital sovereignty is receiving much political attention around the world, the discussions still need to evolve toward a common understanding in order to succeed. In Brazil, attention to the topic is also increasing, since regulations are still needed to strengthen national cybersecurity and digital sovereignty [35]. Thus, as seen from these examples and discussions, digital sovereignty is a matter that many stakeholders (_e.g._, governments, companies, and society) have to address from technical, economic, and legal perspectives. Otherwise, digital colonialism may become more prominent and dangerous in the following years, providing mechanisms that increase censorship and digital warfare.
Thus, as shown in this work and its experiments, we advocate that analyses and discussions under different lenses are needed. Besides the centralization of protocols such as DNS, examples of aspects that must be investigated to guide the discussions of digital sovereignty include cybersecurity, regulations and investments in technology, and mobile communications and its vendors.
## V Conclusion and Future Work
The Domain Name System (DNS) infrastructure plays an essential role in the Internet access infrastructure by allowing content and services to be reached using easy-to-remember names (_i_.\(e\)., domains). However, during its development, it was never imagined that such a system would become a market of global proportions. Thus, aspects such as its centralization and governmental regulations were disregarded. In this sense, given its central role in society and concerns regarding the level of control that DNS providers could enforce if the system becomes centralized, understanding and identifying DNS centralization is a key concern.
Thus, in this paper, we presented an approach to measure DNS centralization and digital sovereignty based on DNS domain resolution. The approach relies on a list of 1M popular domains (_i.e._, the Tranco list) and, for each one, identifies the name server responsible for hosting the domain (_i.e._, its authoritative server) and, based on its IP address, maps it to the Autonomous System (AS) managing the IP address. Further, with the AS information, the approach identifies the country in which the AS is located to analyze which regulations the AS is subject to. Consequently, with that information, the approach infers the top-10 DNS providers, the percentage of centralization of the Tranco list in these providers, and also the portion of domains that are managed within their own country based on their country-code Top-Level Domain (ccTLD).
Results from the analysis show that most of the top-10 DNS providers identified in the Tranco list are located in the US, with Cloudflare being the 1st DNS provider. Further, the analysis of how centralized the DNS hosting industry is revealed that the concentration of domains resolved by the identified top-10 providers peaked at almost 40%, which signals centralization. Lastly, the results of measuring digital sovereignty in Brazil, Russia, India, China, and South Africa (BRICS) and the European Union (EU) unveiled a scenario where a significant percentage of domains within these countries are hosted not by national companies but by US-based organizations, the exceptions being Russia and South Africa. Based on the results, it can be said not only that DNS centralization is occurring on the Internet, as previous literature showed (_cf._ Section II), but also that countries are becoming less sovereign in terms of control over their national DNS infrastructure.
Considering future work, it is planned to _(a)_ analyze the DNS provider distribution in additional countries that are discussing digital sovereignty, _(b)_ address the limitations of the work discussed in Section IV, and _(c)_ create a tool to analyze the DNS provider distribution periodically. Furthermore, our measurement approach can be extended to analyze additional protocols and technologies to provide a more granular technical view of the digital sovereignty landscape.
|
2310.01917 | Hierarchical Evaluation Framework: Best Practices for Human Evaluation | Human evaluation plays a crucial role in Natural Language Processing (NLP) as
it assesses the quality and relevance of developed systems, thereby
facilitating their enhancement. However, the absence of widely accepted human
evaluation metrics in NLP hampers fair comparisons among different systems and
the establishment of universal assessment standards. Through an extensive
analysis of existing literature on human evaluation metrics, we identified
several gaps in NLP evaluation methodologies. These gaps served as motivation
for developing our own hierarchical evaluation framework. The proposed
framework offers notable advantages, particularly in providing a more
comprehensive representation of the NLP system's performance. We applied this
framework to evaluate the developed Machine Reading Comprehension system, which
was utilized within a human-AI symbiosis model. The results highlighted the
associations between the quality of inputs and outputs, underscoring the
necessity to evaluate both components rather than solely focusing on outputs.
In future work, we will investigate the potential time-saving benefits of our
proposed framework for evaluators assessing NLP systems. | Iva Bojic, Jessica Chen, Si Yuan Chang, Qi Chwen Ong, Shafiq Joty, Josip Car | 2023-10-03T09:46:02Z | http://arxiv.org/abs/2310.01917v2 | # Hierarchical Evaluation Framework: Best Practices for Human Evaluation
###### Abstract
Human evaluation plays a crucial role in Natural Language Processing (NLP) as it assesses the quality and relevance of developed systems, thereby facilitating their enhancement. However, the absence of widely accepted human evaluation metrics in NLP hampers fair comparisons among different systems and the establishment of universal assessment standards. Through an extensive analysis of existing literature on human evaluation metrics, we identified several gaps in NLP evaluation methodologies. These gaps served as motivation for developing our own hierarchical evaluation framework. The proposed framework offers notable advantages, particularly in providing a more comprehensive representation of the NLP system's performance. We applied this framework to evaluate the developed Machine Reading Comprehension system, which was utilized within a human-AI symbiosis model. The results highlighted the associations between the quality of inputs and outputs, underscoring the necessity to evaluate both components rather than solely focusing on outputs. In future work, we will investigate the potential time-saving benefits of our proposed framework for evaluators assessing NLP systems.
## 1 Introduction
Human evaluation is crucial for assessing the quality, validity, and performance of Natural Language Processing (NLP) systems especially as automatic metrics are usually not sufficient Van Der Lee et al. (2019). Human evaluation can deal with complex generated natural language and its nuances such as pragmatics, context and semantics which often requires some expert knowledge Sudoh et al. (2021). Automatic evaluation may be used to assess individual dimensions (e.g., fluency, accuracy) of natural language, however, may often lose to humans in terms of accuracy and understanding.
Various methodologies are often employed in human evaluation such as ranking, pairwise comparison, or a state-of-the-art machine translation metric that was used in Castilho Castilho (2021). They can provide valuable insights into the strengths and limitations of an NLP system; however, it is notably time-consuming and expensive and significant trade-offs may exist in consideration of different goals or requirements Zhang et al. (2020). The human evaluation also comes with its own set of limitations, such as fatigue effect van der Lee et al. (2021) and inconsistencies between evaluators. The role of human evaluators should also be considered as some tasks may require domain expert knowledge or provide specific training evaluators.
There is currently a lack of consensus on which metrics to use for the human evaluation of NLP systems Paroubek et al. (2007). As there tend to be different research goals, requirements and task-dependent metrics, there exists the challenge of standardizing human evaluation metrics and essentially reaching an overall consensus. A unique combination of metrics can be used for a more comprehensive assessment depending on the desired objectives. These combinations can be grouped based on different evaluation aspects Liang and Li (2021). Metrics may also vary depending on the task (e.g., machine translation, sentiment analysis) and thus task design can affect the criteria used for evaluation Iskender et al. (2021).
To identify gaps in the literature pertaining to human evaluation, we conducted a scoping review to systematically examine various aspects of human evaluation experiments in NLP tasks, including the characteristics of evaluators, evaluation samples, scoring methods, design of evaluation and statistical analysis. The findings of our literature review revealed three significant gaps: (i) the absence of evaluation metrics for NLP system inputs, (ii) the lack of consideration for interdependencies among different characteristics of assessed NLP systems, and (iii) a limited utilization of metrics for extrinsic evaluation of NLP systems.
We hope to bridge the aforementioned gaps by providing a standardized human evaluation framework that can be used across different NLP tasks. Our proposed framework employs a hierarchical structure that divides the human evaluation process into two phases: testing and evaluation. This division enables evaluators to assess the quality of inputs used by testers when evaluating NLP systems. Furthermore, the hierarchical design of the evaluation metric allows for the computation of a composite score that reflects the overall quality of the NLP system.
This paper is organized as follows. Section 2 presents the analysis from a scoping review that included more than 200 papers published within the last three years in the top 5 NLP venues. The results of the aforementioned analysis informed the development of the proposed hierarchical evaluation framework, which is presented in Section 3. Section 4 presents the results of adopting the proposed framework for the human evaluation of the Machine Reading Comprehension (MRC) system developed as a part of the human-AI symbiosis model. Finally, Section 5 concludes the paper.
## 2 Scoping Review
### Structured Review
To inform our development of a hierarchical framework for human evaluation, we conducted a scoping review to examine existing literature systematically. Our paper selection process followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for Scoping Reviews checklist (PRISMA-ScR) (Peters et al., 2015) (see Figure 1). We searched for relevant publication venues on Google Scholar. We selected the category of Engineering and Computer Science, followed by the sub-category of Computational Linguistics. Subsequently, we chose the top five venues with the highest h5-index, namely:
* Meeting of the Association for Computational Linguistics (ACL),
* Conference on Empirical Methods in Natural Language Processing (EMNLP),
* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL),
* Conference of the European Chapter of the Association for Computational Linguistics (EACL),
* International Conference on Computational Linguistics (COLING).
Due to the rapid development in the NLP field, only studies published between 2019 and 2023 were included. The Google Scholar search strategy is shown in Figure 2.
### Selection of Articles
Eligible articles were identified in two stages: (1) title and abstract screening, (2) full-text screening. To maintain consistency of decision-making in the selection process, both title and abstract screening and full-text screening were conducted by two of the three reviewers (IB, JC, QCO) independently based on pre-defined inclusion and exclusion criteria (see Figure 3). Conflicts were resolved through discussion with a third reviewer to establish consensus. The resolution of inconsistencies or disagreements amongst reviewers was guided by pre-defined eligibility criteria and reference to initial objectives. Reasons for exclusion were recorded during full-text screening.
Figure 1: This PRISMA flow diagram depicts the study selection process throughout this scoping review. 203 studies in total were identified through a search on Google Scholar. After one duplicate was removed, the total remaining studies was 202. After title and abstract screening, 16 studies were excluded, leaving 186 studies for full-text screening. A final 173 studies were included in this scoping review for data extraction and analysis.
### Data Extraction
A standardized data extraction form (see Appendix 1) was developed through iterative discussions between three reviewers (IB, JC, QCO) based on insights gained during the initial literature review of related work. The data extraction form was first piloted on three randomly selected articles by the three reviewers to ensure consistent and accurate extraction of data. The data extraction process involved all three reviewers and was done independently. Ambiguities or uncertainties were resolved by discussion between reviewers and by referring to the original papers used for the creation of the extraction matrix (Van Der Lee et al., 2019; Amidei et al., 2018; Liang and Li, 2021; Howcroft et al., 2020). We extracted a range of variables from certain chosen sources and tailored them to the objectives of our review. These variables are categorized as follows in Section 2.4: (1) characteristics of evaluators, (2) evaluation samples, (3) scoring methods, (4) design of evaluation and (5) statistical analysis.
### Synthesis of Results
#### 2.4.1 Characteristics of Evaluators
A large proportion of papers (83%, 144/173) provided information on the number of evaluators that participated in the human evaluation. This shows that there is a general consistency in the reporting of human evaluation methods across all papers reviewed. The number of evaluators employed can be defined as _small_ (1-5), _medium_ (6-9) and _large_ (\(\geq 10\)) scale (van der Lee et al., 2021). Papers reported a small number of evaluators in 62% of cases (107/173), a medium number in 6% (11/173), and a large number in 15% (26/173). The median number of evaluators was three per study.
71% of the reviewed papers (122/173) reported the background of the evaluators, differentiating between _experts_ and _non-experts_, detailed which platform they were from or set standards for crowdsourced workers. One example, proposed in Zhu et al. (2020), was to set standards by only using workers with a high enough approval rate to ensure quality. This helps alleviate the problem of quality control when using larger-scale crowd-sourcing platforms such as Amazon Mechanical Turk.
#### 2.4.2 Evaluation Samples
All of the papers reported that human evaluation was done only on _outputs_ of NLP systems, with the median number of evaluation instances being 100. Most papers (60%, 103/173) created samples _randomly_, but some (3%, 6/173) specified _their methodology_. For instance, in Zeng and Nie (2021), discussions that were difficult to understand were filtered out. In this case, human evaluation was used to compare the dialogue generation between two different models. In order to create a more relevant dataset for human evaluation, filtering out professional texts that were difficult to understand, ensured that the data was closer to daily dialogue. This allowed for more accurate and reproducible human evaluation results. Using alternative methods to random sampling can have certain benefits such as cost-effectiveness, time efficiency and focused research objectives (Zeng and Nie, 2021).
Figure 3: This figure lists the inclusion and exclusion criteria that formed the basis of our screening process.
Figure 2: Search strategy used for the scoping review. After performing 1, we also performed 2-6 to find all papers from individual venues that did not appear after the first combined search.
#### 2.4.3 Scoring Methods
Overall, 68% of papers (118/173) used a _scale_ as their evaluation scoring system. A scoring system should also be defined by assigning attributes or certain qualities to a number in the scale that they are using. Further, 23% of papers (39/173) reported using _comparison_ between different models or question answering to achieve more qualitative results. Examples include win, tie, loss, A/B testing, and a direct comparison.
The characteristics of evaluation can be referred to as evaluation attributes or text quality dimensions such as _fluency_, _adequacy_, and _grammar_ (Gehrmann et al., 2023). These characteristics can be considered for both qualitative and quantitative methods and are often specified to guide the evaluation task. For example, Liang and Li (2021) divided various characteristics into seven groups based on their similarity and overall purpose for the human evaluation of chatbots. These groups further tailor the characteristics of evaluation to the unique task, allowing the reader to understand the reason for their selection.
Dependencies can exist among characteristics of evaluation. In other words, human evaluation can be done in sequential order when the order in which characteristics are evaluated matters. Moreover, evaluation can be stopped prematurely if some characteristics were not deemed of satisfactory quality. Consequently, dependencies among characteristics of evaluation could also allow an NLP system to have a composite score that reflects its overall quality. For instance, an overall performance score can be produced based on pre-defined threshold criteria that need to be fulfilled. This threshold could be a specified performance level reached by a specific combination of characteristics. We have not observed any dependencies reported among different evaluated characteristics in the reviewed literature. Namely, all characteristics were evaluated separately, and the quality of a certain characteristic was never put in relation to the quality of another.
#### 2.4.4 Design of Evaluation
_Extrinsic_ and _intrinsic_ evaluation are two different types of human evaluation. Extrinsic evaluation assesses the ability of the system to perform an over-arching task with a real-world application. On the other hand, intrinsic evaluation assesses specific qualities or attributes independently of the over-arching task. Therefore, a system could perform well intrinsically without performing well extrinsically. Most papers (88%, 153/173) performed intrinsic evaluation, 4% (7/173) performed extrinsic evaluation, and 8% (13/173) involved aspects of both. Intrinsic evaluation likely remains popular due to its simplicity, cost-efficiency, and ease of tracking progress and benchmarking (Gehrmann et al., 2023; Belz and Gatt, 2008). The lack of extrinsic evaluation may also be explained by the difficulty of designing an evaluation that effectively emulates usage in a real-world setting.
Bias mitigation is important due to the potential compromise of human evaluation caused by order effects (van der Lee et al., 2019). Order effects include practice, carryover, and fatigue effects (van der Lee et al., 2019), all of which have the potential to affect human evaluation and lead to misleading and biased results. To mitigate this, van der Lee et al. (2019) suggested potential solutions including practice trials, increasing the time between tasks, shortening tasks, and proposed specific evaluation designs such as counterbalancing (systematically varying the order of presentation) and randomization. Further solutions include multiple evaluators assessing the same point (Son et al., 2022) to increase the reliability of their human evaluation and randomized counterbalancing, which is a combination of randomization and counterbalancing methods (Kurisinkel and Chen, 2019). However, the method of bias mitigation was only specified in 14% (24/173) of papers. This may be due to the high costs of evaluation designs, specifically counterbalancing. However, according to van der Lee et al. (2019), randomization or limiting the evaluation to one judge per system (if order effects are suspected) should be sufficient to mitigate order effects and avoid biased results.
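The order-effect mitigations above are straightforward to operationalise. The sketch below is a minimal illustration rather than a prescribed procedure: it generates both fully counterbalanced and per-evaluator randomized presentation orders; the system names and evaluator counts are hypothetical.

```python
import itertools
import random

def counterbalanced_orders(systems):
    """Enumerate every presentation order (full counterbalancing).

    With k systems this yields k! orders, so it is only practical for
    small k; evaluators are then assigned to orders round-robin."""
    return list(itertools.permutations(systems))

def randomized_order(systems, seed=None):
    """Independently shuffle the presentation order for one evaluator."""
    rng = random.Random(seed)
    order = list(systems)
    rng.shuffle(order)
    return order

systems = ["baseline", "model_A", "model_B"]
orders = counterbalanced_orders(systems)            # 6 orders for 3 systems
for evaluator_id in range(12):
    print(evaluator_id, orders[evaluator_id % len(orders)])
```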
#### 2.4.5 Statistical Analysis
Inter-annotator agreement (IAA) scores should be reported to confirm consistency between evaluators and the reliability of the evaluation. Typically, a higher score indicates stronger agreement. 34% of included papers (58/173) reported IAA using Kendall's \(\tau\), Fleiss' \(\kappa\), Cohen's \(\kappa\), Krippendorff's \(\alpha\) and percentage agreement, to name a few. However, a detailed analysis of the IAA scores and how they affected the overall evaluation is important. In some cases, IAA scores can prove not to be a useful measurement of agreement, as discussed further in Amidei et al. (2018).
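For concreteness, several of the agreement measures named above can be computed with widely used Python libraries. The following sketch assumes a small, made-up matrix of ratings (evaluators by items) and shows Cohen's \(\kappa\) (pairwise), Fleiss' \(\kappa\) (multi-rater), and raw percentage agreement; it is illustrative only.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: 3 evaluators x 8 items, categories 1-3.
ratings = np.array([
    [2, 3, 1, 2, 3, 3, 1, 2],
    [2, 3, 1, 2, 2, 3, 1, 2],
    [2, 3, 2, 2, 3, 3, 1, 1],
])

# Cohen's kappa: agreement between one pair of evaluators.
print("Cohen's kappa (rater 0 vs 1):", cohen_kappa_score(ratings[0], ratings[1]))

# Fleiss' kappa: agreement across all evaluators; statsmodels expects
# an (items x categories) table of counts.
table, _ = aggregate_raters(ratings.T)
print("Fleiss' kappa:", fleiss_kappa(table))

# Percentage agreement: fraction of items on which all evaluators agree.
print("Percentage agreement:", np.mean([len(set(col)) == 1 for col in ratings.T]))
```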
The importance of ensuring the reliability and validity of human evaluation is further highlighted by Liu et al. (2022) through the need for using statistical tests. Other methods of presenting data and analyzing results include highlighting the 1st and 2nd best performance values in a table (Gangal et al., 2022), or reporting summary statistics such as standard deviations or mean scores (Qian and Levy, 2022). Only 16% of papers (28/173) used statistical tests, such as Student's t-test and the Wilcoxon signed-rank test, as a form of analysis of their human evaluation (van der Lee et al., 2019). This could be due to a lack of statistical power attributed to inadequate sample sizes, which could lead to misleading or different conclusions as they are more subject to the effects of chance (Otani et al., 2023).
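As a hedged illustration of such tests, the snippet below applies a paired Student's t-test and the Wilcoxon signed-rank test to hypothetical per-item scores of two systems judged on the same items; the scores are invented and the paired design is an assumption.

```python
from scipy import stats

# Hypothetical per-item mean human scores for two systems (paired design:
# the same items were judged for both systems).
system_a = [4.1, 3.8, 4.5, 3.9, 4.2, 3.6, 4.0, 4.4, 3.7, 4.3]
system_b = [3.9, 3.5, 4.4, 3.6, 4.0, 3.7, 3.8, 4.1, 3.4, 4.0]

# Paired t-test (assumes roughly normal score differences).
t_stat, p_t = stats.ttest_rel(system_a, system_b)

# Wilcoxon signed-rank test: non-parametric alternative for paired scores.
w_stat, p_w = stats.wilcoxon(system_a, system_b)

print(f"paired t-test: t={t_stat:.3f}, p={p_t:.3f}")
print(f"Wilcoxon signed-rank: W={w_stat:.1f}, p={p_w:.3f}")
```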
## 3 Hierarchical Evaluation Framework
The review of existing literature identified 3 gaps:
* Majority of human evaluation was _intrinsic_.
* The characteristics of NLP systems were evaluated _independently_.
* Human evaluation focused on assessing the _outputs_ of NLP systems, neglecting the evaluation of their _inputs_.
The analysis of existing literature revealed that the majority of papers (88%, 153/173) focused solely on an intrinsic evaluation of NLP systems. To avoid conducting an evaluation merely for the sake of it, we suggest that first a clear purpose for an NLP system is defined, and subsequently, an extrinsic evaluation is designed to gauge the systems' performance in fulfilling that specific purpose.
Additionally, the evaluation of various aspects of NLP systems' outputs (e.g., truthfulness) is usually conducted independently, without providing a composite score for the overall system performance. We suggest adopting a hierarchical approach, where the characteristics of the systems are interdependent, and the evaluation process continues only if the preceding characteristic(s) is deemed satisfactory. Conversely, if a characteristic is unsatisfactory, the evaluation can be discontinued, allowing evaluators to save time by not evaluating all characteristics for the low-quality outputs.
Lastly, to date, the existing literature has focused solely on the human evaluation of NLP systems' outputs, assuming that the inputs provided to these systems were of good quality. However, this assumption may not always hold true. We thus propose a two-phase approach for human evaluation, wherein testers initially assess NLP systems, followed by evaluators who evaluate both the inputs and outputs of the systems. By dividing the evaluation process into two phases, we enable evaluators to also assess the quality of the inputs used by testers during the testing phase of NLP systems. In essence, our hypothesis is that the quality of the outputs may not only be influenced by the system itself but also by the quality of the inputs.
In order to address those gaps, we propose a framework as shown in Figure 4. By defining a system's purpose as the first step, our framework supports extrinsic evaluation. The second step is to define interdependencies between the evaluated characteristics and consequently to design a hierarchical evaluation metric that supports calculating a composite score that encompasses the overall quality of an NLP system. Namely, the evaluation stops if any of the evaluated characteristics is deemed unsatisfactory and, in this case, the composite score is "bad" as the system did not pass the evaluation. Otherwise, if the evaluation goes to the end, then the composite score is "good". We hypothesize that our framework facilitates a shorter evaluation time for evaluators by allowing early termination of evaluation in cases where any evaluated characteristic does not meet satisfactory quality. The third step is to do testing of the system according to the defined purpose. Testers are independent of evaluators who evaluate the system's inputs and outputs using the designed hierarchical evaluation metric in the fourth step. This allows for independent evaluation of the system's inputs as well. Consequently, our framework enables an examination of whether the quality of a system's outputs is influenced by the quality of its inputs.
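A minimal sketch of the early-termination logic at the heart of this framework is given below. The characteristic names, thresholds, and rating fields are illustrative assumptions; the point is that checks run in a fixed order and evaluation stops at the first unsatisfactory characteristic, yielding the composite "good"/"bad" score.

```python
from typing import Callable, List, Tuple

# Ordered characteristics: evaluation proceeds down the list and stops at
# the first characteristic judged unsatisfactory (early termination).
Check = Tuple[str, Callable[[dict], bool]]

def hierarchical_score(output: dict, checks: List[Check]) -> Tuple[str, List[str]]:
    passed = []
    for name, is_satisfactory in checks:
        if not is_satisfactory(output):
            return "bad", passed       # stop early; remaining checks are skipped
        passed.append(name)
    return "good", passed              # all interdependent characteristics passed

# Illustrative checks for a generated answer (thresholds are assumptions).
checks = [
    ("clear",    lambda o: o["clarity"] >= 3),
    ("relevant", lambda o: o["relevance"] >= 3),
    ("accurate", lambda o: o["accuracy"] >= 3),
    ("useful",   lambda o: o["usefulness"] >= 3),
]

output = {"clarity": 4, "relevance": 2, "accuracy": 5, "usefulness": 4}
print(hierarchical_score(output, checks))  # ('bad', ['clear']): stopped at 'relevant'
```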
Figure 4: Steps explaining how to create a hierarchical evaluation framework for an NLP system.
## 4 Case Study: Hierarchical Evaluation for an MRC System
We evaluated a Machine Reading Comprehension (MRC) system using the framework outlined in the previous section. In an MRC system, answers come in the form of short text spans which are directly extracted from the text corpus (i.e., relevant text database). Questions asked, on the other hand, need to be relevant to the topic that the text corpus covers, factoid, answerable and mistake-free (i.e., no spelling or grammar mistakes).
### The purpose of the MRC System
The purpose of the developed MRC system was to support health coaches during their sessions with clients, coaching them on the importance of good quality sleep. Namely, the developed system is part of the human-AI symbiosis model shown in Figure 5 (Bojic et al., 2023). The system is a pre-trained BERT model that was fine-tuned on a human-annotated domain-specific dataset.
The entire health coaching process takes place online through text messaging. To address factoid questions raised by clients, the health coach may utilize the MRC system for additional support during coaching sessions (Bojic et al., 2022, 2023). Health coaches were given the liberty to use, modify, or disregard the answers provided by the MRC system. This integration enhances the human coaching experience by incorporating evidence-based knowledge given by the MRC system. As a result, the health coaches' response time improves, and the information they offer is grounded in reliable evidence.
### Hierarchical Evaluation Metrics
We developed two evaluation metrics: one for the inputs (i.e., questions) of the MRC system and the other for the outputs (i.e., answers), in order to be able to detect whether the quality of the MRC system output is affected by the quality of its input.
#### 4.2.1 Evaluation of Inputs
Figure 6 shows a set of evaluation criteria for evaluating the MRC questions. The question is _relevant_ if it is on the topic covered in the corresponding text corpus. _Factoid_ questions are questions that start with one of the following words: "who", "what", "where", "when", "why" or "how". They ask about facts that can be expressed as short texts (Parsing, 2009). The question is _answerable_ if there exists an answer to it. The evaluators are asked if the posed question contains any _spelling_ or _grammar_ errors. The _difficulty_ of the posed question can be chosen from three levels - _easy_, _medium_, or _hard_ (please refer to Table 1).
| Difficulty | Description |
| --- | --- |
| Easy | The correct answer is obvious after reading the passage only one time. |
| Medium | To find the correct answer, one needs to carefully read and understand both the question and the paragraph. |
| Hard | To find the correct answer, one needs to read the paragraph many times, sometimes even using logical reasoning. |

Table 1: Three different levels of difficulty of the posed questions.
Figure 5: Human-AI health coaching model.
Figure 6: Hierarchical evaluation of the questions.
#### 4.2.2 Evaluation of Outputs
The evaluators were asked to evaluate the retrieved _short answer_ and, if necessary, its _explanation_. Namely, the output of the whole MRC system is a text span (i.e., short answer). However, an MRC system can be seen as a pipeline of two NLP models - _document retrieval_ and _document reader_ - where the output of the former model is the _relevant passage(s)_ and the output of the latter model (i.e., the whole system) is a _text span_. Our metric first evaluates the characteristics of the output of the whole system (i.e., the text span). If the output of the whole system was not satisfactory, then we evaluate its explanation (i.e., the relevant passage) provided by the document retrieval component.
The retrieved short answer is _clear_ if its meaning is easy to understand. The retrieved short answer/explanation is _relevant_ if it answers the posed question. _Clinical accuracy_ of the retrieved short answer/explanation denotes the degree to which it is clinically accurate - (i) clinically accurate, (ii) partially clinically accurate, and (iii) clinically inaccurate (see Table 2). Finally, the health coaches judged the usefulness of the retrieved short answer/explanation (see Figure 7).
### Testing of the MRC System
Testing of the developed MRC system was conducted during a pilot Randomized Controlled Trial (RCT). In this RCT, 30 participants in the intervention group (i.e., clients) interacted with 10 health coaches who utilized the MRC system to answer factoid questions. Clients were recruited from a general student population if they (1) were older than 21 years, (2) were available for weekly interaction with a health coach for four weeks, (3) were not currently undergoing any treatment for a sleep disorder or mental disorder and were not under the care of a psychologist or psychiatrist, and (4) had a PHQ-9 score of less than 10.
Health coaches were recruited from the cohorts of graduated students from the health coaching course if they (1) were older than 21 years, (2) were available for weekly interaction with three clients for four weeks, and (3) successfully completed and passed the health coaching course. During the study period of four weeks, clients had weekly 30-minute sessions with their respective health coaches. All questions asked by health coaches and their corresponding answers were saved during the testing phase and were subsequently used in the evaluation phase. By dividing human evaluation into two parts, we were also able to judge whether questions were posed in the way we asked our health coaches to ask them, i.e., if they can be answered by the developed MRC system.
### Evaluation of the MRC System
Following a 4-week pilot RCT, the developed MRC system underwent evaluation by 10 health coaches. A total of 387 unique question-answer pairs were evaluated by the health coaches during this period. The heat map depicted in Figure 8 illustrates the number of inputs and outputs evaluated by each health coach, while Figure 9 showcases the average evaluation time required for each input/output assessed by the health coaches.
Almost all questions (99%, 383/387) were evaluated as _relevant_. One example of a question that was marked as not relevant was: _"Food nutrition tips"_. Of the relevant questions, 87% (335/383) were judged as factoid. Some examples of non-factoid questions are as follows: _"About REM sleep, is it the phase that I'm dreaming?"_, _"Can you exercise before sleeping?"_, _"I often run around campus for 3-5km at night 1-2h before sleeping. Is it good or bad for sleep?"_. 2% of the remaining questions (8/335) were marked as not answerable: _"How long should I be awake during sleep?"_, _"How bad would you say is my sleep health like compared to the average?"_, while an additional 2% (6/327) had spelling errors (e.g., _"How long before bedtime shld i stop screentime?"_). Finally, 23% of the remaining questions (74/321) had grammar errors: _"How do ensure naps have good quality?"_, _"Why wake up during night?"_. The results of the complete external human evaluation for questions are shown in Figure 10.
| Level | Description |
| --- | --- |
| Clinically accurate | The retrieved short answer/explanation is clinically accurate and is based on evidence-based information. |
| Partially clinically accurate | The retrieved short answer/explanation is partially clinically accurate and somewhat lacks evidence-based information. |
| Clinically inaccurate | The retrieved short answer/explanation is not clinically accurate and is not based on evidence-based information. |

Table 2: Three different levels of clinical accuracy.
More than 40% (157/387) of short answers were evaluated as not _clear_, out of which in 57% of cases (89/157) their explanations were marked as relevant. For example: _**Question**: When does melatonin peak? **Answer**: release of melatonin, the hormone that induces feelings of tiredness and relaxation. **Explanation**: When the sun goes down, your eyes will perceive darkness and signal the scn accordingly. This triggers the release of melatonin, the hormone that induces feelings of tiredness and relaxation. This also causes your core temperature to dip._ 63% of clear answers (146/230) were also evaluated as relevant, of which 99% (144/146) were indicated as being (partly) clinically accurate. Furthermore, 97% (113/116) of the short answers that were not clear, but whose explanations were relevant, were (partly) clinically accurate. The results of the complete external human evaluation for answers are shown in Figure 11.
Figure 8: The total number of questions and answers evaluated by each health coach.
Figure 10: Extrinsic evaluation of questions.
Figure 7: Hierarchical evaluation of the answers.
Figure 9: Average time in seconds per health coach needed to evaluate questions and answers.
### Composite scores of the MRC System
The results of our evaluation showed that 63.8% (247/387) of unique questions were evaluated as relevant, factoid, answerable, spelling and grammar mistakes-free (i.e., _good_ questions). Out of those, 63% (155/247) were judged as easy, 30% (74/247) as medium and 7% (18/247) as hard questions. Furthermore, 49.4% (191/387) of unique answers were evaluated as clear, relevant, clinically accurate and useful (i.e., _good_ answers). In order to check if there are any associations between the quality of outputs and inputs, we performed a \(\chi^{2}\) test. The result showed significant associations between the two (\(\chi^{2}\) = 4.56, p=0.03). The distribution of the performance matrix is shown in Table 3.
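The association test can be reproduced from the counts in Table 3. The sketch below uses SciPy's \(\chi^{2}\) test of independence; note that disabling Yates' continuity correction (applied by default for 2\(\times\)2 tables) appears to match the reported statistic.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 matrix from Table 3: rows = answer quality, columns = question quality.
table = np.array([[132,  59],    # good answers: good / bad questions
                  [115,  81]])   # bad answers:  good / bad questions

# Without Yates' continuity correction this approximately matches the
# reported chi^2 = 4.56, p = 0.03.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```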
## 5 Discussion and Conclusions
In this study, we conducted a scoping review to identify gaps in the literature regarding human evaluation in NLP. The findings revealed three significant gaps that need to be addressed: the lack of evaluation metrics for NLP system inputs, limited consideration for interdependencies among different characteristics of NLP systems, and a scarcity of metrics for extrinsic evaluation.
To bridge these gaps and enhance human evaluation in NLP, we proposed a hierarchical evaluation framework. Our framework offers a standardized approach that considers both the inputs and outputs of NLP systems, allowing for a more comprehensive assessment. Moreover, our hierarchical approach considers the interdependencies among different characteristics of NLP systems. Rather than evaluating characteristics independently, our framework emphasizes their interconnectedness and the impact they may have on each other. This approach enables a more holistic evaluation that captures the overall performance of NLP systems.
To validate the effectiveness of our proposed framework, we conducted a pilot RCT evaluating an MRC system. The evaluation phase of our study involved 10 health coaches who evaluated a total of 387 question-answer pairs generated during the RCT. The evaluation metrics developed for inputs focused on aspects such as relevance, factoid nature, answerability, spelling, grammar errors, and difficulty levels of the questions. For outputs, the evaluation criteria included clarity, relevance, clinical accuracy, and usefulness of the retrieved short answers and explanations.
The results of the evaluation provided valuable insights into the strengths and weaknesses of the MRC system and demonstrated the practical application of our hierarchical evaluation framework. The findings supported the notion that evaluating both inputs and outputs is crucial for obtaining a comprehensive understanding of the performance and effectiveness of NLP systems. Future research should focus on validating the scalability and time-saving benefits of our proposed framework.
## Limitations
We recognize the potential limitations that may arise with a small-scale scoping review that is limited to a few venues. As our sample size is small, our results and proposed solutions may lack generalizability and applicability. To mitigate the potentially negative effects, we carefully chose the most appropriate venues - as further explained in Section 2.1 - and limited the search to the most recent papers, as the field of computer science is rapidly and constantly evolving. Solely reviewing papers in the English language could also potentially limit the scope of our research. We also tried to delve into a broad range of aspects of human evaluation whilst keeping our objectives focused. However, we recognize the inevitability of potential factors that may exist outside of our considerations, which may also affect results and conclusions.
Figure 11: Extrinsic evaluation of answers.
| | Questions: good | Questions: bad |
| --- | --- | --- |
| Answers: good | 132 | 59 |
| Answers: bad | 115 | 81 |

Table 3: 2\(\times\)2 matrix for the performed \(\chi^{2}\) test.
## Ethics Statement
We aim to conduct our study with the highest ethical standards and maintain continuous referral to the ACL code of ethics throughout our research. We obtained articles via Google Scholar and have anonymized most of the papers and authors - excluding a few that were cited in our main text. This paper should be used to provide insight into the current practices of human evaluation and a potential solution to streamline the process. It is not used to penalize any research or draw any negative attention to certain papers.
We also recognize that some potential biases and errors may arise amongst human reviewers which may lead to potentially inaccurate data extraction. This may have a potential knock-on effect on derived conclusions. These issues are considered and mitigated through multiple reviewers performing the same task, frequent discussions, and good communication.
## Acknowledgements
The authors would like to acknowledge the Accelerating Creativity and Excellence (ACE) Award (NTU-ACE2020-05) and center funding from Nanyang Technological University, Singapore. Josip Car's post at Imperial College London is supported by the NIHR NW London Applied Research Collaboration. Finally, the authors would also like to acknowledge Jintana Liu and Ashwini Lawate who were included in the pilot RCT running and supported the data collection process.
|
2307.02609 | MRecGen: Multimodal Appropriate Reaction Generator | Verbal and non-verbal human reaction generation is a challenging task, as
different reactions could be appropriate for responding to the same behaviour.
This paper proposes the first multiple and multimodal (verbal and nonverbal)
appropriate human reaction generation framework that can generate appropriate
and realistic human-style reactions (displayed in the form of synchronised
text, audio and video streams) in response to an input user behaviour. This
novel technique can be applied to various human-computer interaction scenarios
by generating appropriate virtual agent/robot behaviours. Our demo is available
at \url{https://github.com/SSYSteve/MRecGen}. | Jiaqi Xu, Cheng Luo, Weicheng Xie, Linlin Shen, Xiaofeng Liu, Lu Liu, Hatice Gunes, Siyang Song | 2023-07-05T19:07:00Z | http://arxiv.org/abs/2307.02609v1 | # MRecGen: Multimodal Appropriate Reaction Generator
###### Abstract.
Verbal and non-verbal human reaction generation is a challenging task, as different reactions could be appropriate for responding to the same behaviour. This paper proposes the first multiple and multimodal (verbal and nonverbal) appropriate human reaction generation framework that can generate appropriate and realistic human-style reactions (displayed in the form of synchronised text, audio and video streams) in response to an input user behaviour. This novel technique can be applied to various human-computer interaction scenarios by generating appropriate virtual agent/robot behaviours. Our demo is available at [https://github.com/SSYSteve/MRecGen](https://github.com/SSYSteve/MRecGen).
**Behaviour synchronisation**: This module conducts the following operations: (i) synchronising the multiple user behaviour representations generated by the UBE module and combining them into a single user behaviour representation; (ii) synchronising the multiple reaction representations generated by the ARP module, which represent the multi-modal reaction behaviour; and (iii) synchronising the synchronised multi-modal reaction representations with the synchronised and combined multi-modal user behaviour representation.
**Reaction display:** This module finally displays the generated reactions in the form of audio, text and face video.
_Implementation details:_ Our demo employs the architecture proposed in (Han et al., 2017) as the SBE module. The distribution learning strategy in ARP module is inherited from (Krishnan et al., 2017), where the speaker representation is transformed into graph representation to predict appropriate reaction distribution. The transformer-based multi-modal and inter-person behaviour synchronisation operations defined by the BS module are built based on the similar strategy proposed in (Krishnan et al., 2017). For the reaction display module, GPT4 (Girard et al., 2017), Bark 1, and SadTalker (Sad et al., 2017) are employed to generate the final text, audio and facial video, respectively.
Footnote 1: [https://github.com/suno-ai/bark](https://github.com/suno-ai/bark)
## 3. Demo Evaluation
We recruited 58 volunteers (19 females and 39 males) online for the following two user studies, where each volunteer is asked to evaluate the performances of two tasks:
* (i) Evaluating the demo's ability in generating reactions in response to users' audio-visual behaviours. Each volunteer is asked to watch five examples, where each example includes: (1) an audio-visual user clip; (2) a corresponding audio-visual-text human reaction clip; and (3) an audio-visual-text reaction clip generated by our demo.
* (ii) Evaluating the demo's ability in generating reactions in response to users' verbal textual behaviours. Each volunteer is asked to watch five examples, where each example includes: (1) an textual input by user; and (2) an audio-visual-text reaction clip generated by our demo.
We employ the widely-used Mean Opinion Scores (MOS) rating protocol (Girard et al., 2017), where users are required to give their ratings (1-5) on the following seven aspects for each video: (1) textual response appropriateness, (2) audio response appropriateness, (3) audio response smoothness, (4) facial reaction appropriateness, (5) facial reaction smoothness, (6) lip sync quality, and (7) video realism. The results show that volunteers feel that the reactions generated by our demo are good in all aspects (i.e., scores are above 3.0 for all 7 aspects). Particularly, our demo generated appropriate textual/facial reactions with high lip sync quality (their scores are above 3.4). The details of these ratings are provided in the Supplementary Material.
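As an illustration of how such MOS ratings are typically aggregated, the sketch below computes per-aspect means with approximate 95% confidence intervals over a hypothetical rating matrix; the ratings are randomly generated for demonstration, not our study data.

```python
import numpy as np

aspects = ["text_appropriateness", "audio_appropriateness", "audio_smoothness",
           "facial_appropriateness", "facial_smoothness", "lip_sync", "realism"]

# Hypothetical ratings: 58 volunteers x 7 aspects, each on the 1-5 MOS scale.
ratings = np.random.default_rng(0).integers(1, 6, size=(58, 7))

mos = ratings.mean(axis=0)                       # mean opinion score per aspect
ci95 = 1.96 * ratings.std(axis=0, ddof=1) / np.sqrt(ratings.shape[0])
for name, m, c in zip(aspects, mos, ci95):
    print(f"{name:24s} MOS = {m:.2f} +/- {c:.2f}")
```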
## 4. Conclusion
This paper proposes the first multiple and multi-modal appropriate human behaviour reaction generation framework, and provides a well-trained model (demo) that can generate multiple appropriate, synchronised and realistic human textual, audio and facial behaviour reactions in response to user behaviours.
Figure 1. Overview of proposed MRecGen framework.
Figure 2. Example of the generated multi-modal reaction. |
2305.14612 | Assessment of Anterior Cruciate Ligament Injury Risk Based on Human Key
Points Detection Algorithm | This paper aims to detect the potential injury risk of the anterior cruciate
ligament (ACL) by proposing an ACL potential injury risk assessment algorithm
based on key points of the human body detected using computer vision
technology. To obtain the key points data of the human body in each frame,
OpenPose, an open source computer vision algorithm, was employed. The obtained
data underwent preprocessing and were then fed into an ACL potential injury
feature extraction model based on the Landing Error Evaluation System (LESS).
This model extracted several important parameters, including the knee flexion
angle, the trunk flexion on the sagittal plane, trunk flexion angle on the
frontal plane, the ankle knee horizontal distance, and the ankle shoulder
horizontal distance. Each of these features was assigned a threshold interval,
and a segmented evaluation function was utilized to score them accordingly. To
calculate the final score of the participant, the score values were input into
a weighted scoring model designed based on the Analytic Hierarchy Process
(AHP). The AHP based model takes into account the relative importance of each
feature in the overall assessment. The results demonstrate that the proposed
algorithm effectively detects the potential risk of ACL injury. The proposed
algorithm demonstrates its effectiveness in detecting ACL injury risk, offering
valuable insights for injury prevention and intervention strategies in sports
and related fields. Code is available at:
https://github.com/ZiyuGong-proj/Assessment-of-ACL-Injury-Risk-Based-on-Openpose | Ziyu Gong, Xiong Zhao, Chen Yang | 2023-05-24T01:22:26Z | http://arxiv.org/abs/2305.14612v1 | ## Assessment of Anterior Cruciate Ligament Injury Risk Based on Human Key Points Detection Algorithm
## Abstract
This paper aims to detect the potential injury risk of the anterior cruciate ligament (ACL) by proposing an ACL potential injury risk assessment algorithm based on key points of the human body detected using computer vision technology. To obtain the key points data of the human body in each frame, OpenPose, an open-source computer vision algorithm, was employed. The obtained data underwent preprocessing and were then fed into an ACL potential injury feature extraction model based on the Landing Error Evaluation System (LESS). This model extracted several important parameters, including the knee flexion angle, the trunk flexion on the sagittal plane, trunk flexion angle on the frontal plane, the ankle-knee horizontal distance, and the ankle-shoulder horizontal distance. Each of these features was assigned a threshold interval, and a segmented evaluation function was utilized to score them accordingly. To calculate the final score of the participant, the score values were input into a weighted scoring model designed based on the Analytic Hierarchy Process (AHP). The AHP-based model takes into account the relative importance of each feature in the overall assessment. The results demonstrate that the proposed algorithm effectively detects the potential risk of ACL injury. The proposed algorithm demonstrates its effectiveness in detecting ACL injury risk, offering valuable insights for injury prevention and intervention strategies in sports and related fields. Code is available at: [https://github.com/ZiyuGong-proj/Assessment-of-ACL-Injury-Risk-Based-on-Openpose](https://github.com/ZiyuGong-proj/Assessment-of-ACL-Injury-Risk-Based-on-Openpose)
**Keywords**: Human key points; the Landing Error Evaluation System (LESS); ACL injury risk; Analytic Hierarchy Process (AHP)
## Introduction
Improper movement patterns can greatly increase the risk of injury and are detrimental to the quality of exercise. For athletes, a fixed movement pattern often results in wear and tear on certain soft tissues or joints and increases the risk of injuries. Therefore, screening for potential injury risks is of great significance to athletes and sports enthusiasts. Movement pattern assessment has become a common method to identify potential sports injury risks [1]. The LESS
and the Functional Movement Screen (FMS) are frequently used to identify human movement patterns. However, the LESS and FMS usually rely on the visual evaluation of a clinician or trainer through video playback. The process can be subjective and time-consuming [2]. Mauntel et al. [3] used the 3D depth camera Kinect to achieve automatic scoring of the LESS, but automation of the LESS using a 2D camera is still not available. The increasing popularity of utilizing 2D cameras in motion analysis is attributed to the advancements in computer vision, which have led to improved accuracy.
This study implements the automatic scoring function of the LESS by utilizing human body key points data generated by OpenPose from a 2D camera [4]. To begin with, we introduce a method for extracting features related to the potential injury risk of the ACL based on the key points of the human body. Subsequently, utilizing the ACL potential injury risk scoring function outlined in this paper, the extracted features are assigned scores. The process involves initially utilizing OpenPose to capture the key points data of the human body in each image frame. The data is then preprocessed and input into the feature extraction model to obtain the corresponding feature values, including the knee flexion and trunk flexion angles in the sagittal plane, the lateral trunk angle in the frontal plane, as well as the ankle-knee horizontal distance and the ankle-shoulder horizontal distance.
Using the defined scoring function, the extracted feature values undergo scoring operations based on distinct threshold intervals assigned to each feature. Building upon the five characteristic values obtained in the previous step, a weighted scoring model, developed through the analytic hierarchy process (AHP), is employed to calculate the participant's final score. The individual score and final score are then combined to comprehensively evaluate the participant's risk.
Finally, the recognition accuracy of OpenPose is analyzed and the experimental results of the ACL potential injury risk assessment method are reported, demonstrating the innovation and practical value of this research in the field of movement evaluation.
## Related work
In 2009, Padua et al. [2] first proposed the LESS to identify subjects with a higher risk of ACL injury. The test process was recorded using cameras positioned in the sagittal and frontal planes to capture the entirety of the evaluation. The subjects' scoring was performed manually by observing the recorded videos. The study revealed significant differences in lower limb kinetics between subjects with low LESS scores (indicating excellent landing techniques) and those with high LESS scores (indicating poor landing technique). Therefore, it is concluded that LESS is a reliable kinematic evaluation method that can be used to identify the potential risk of injury during the landing process.
In 2012, Clark et al. [5] conducted a study to assess the effectiveness of Kinect for posture evaluation. The research involved administering three balance tests to a group of 20 healthy subjects. The study's findings indicated that Kinect can be successfully utilized for human posture evaluation. This research holds significance in demonstrating the potential value of integrating Kinect depth cameras into human kinematics analysis. In a study conducted by Schmitz et al. [6] in 2015, a comparison was made between a marker-based motion capture system and a markerless motion capture system utilizing the Kinect depth camera. The
investigation specifically focused on analyzing the peaks of the flexion angles of lower limb joints during squatting exercises performed by 15 subjects. The results demonstrated a high correlation between the two systems, indicating the feasibility and reliability of the markerless motion capture system based on the Kinect depth camera for capturing lower limb joint angles. In a similar fashion, Perrott et al. [7] conducted a comparison in 2017 between the marker-based and the markerless motion capture system, examining 13 clinically relevant joint angles during the squatting motion. Their findings indicated that in 9 out of the 13 joint angles, there was no significant difference between the two systems. This suggests that the markerless motion capture system, based on the Kinect depth camera, can provide results comparable to the marker-based system for assessing joint angles during squatting exercises.
In 2017, Mauntel et al. [3] introduced automated enhancements to the LESS test method by leveraging the Kinect depth camera, effectively addressing the research gap in automated injury assessment within the field of movement evaluation. Furthermore, Mentiplay et al. [8], in their 2018 study, utilized the Kinect depth camera to measure the kinematics of the hips and knees in both frontal and sagittal planes during single-leg squat (SLS) and drop vertical jump (DVJ) exercises. The findings indicated that the Kinect depth camera holds potential in screening for ACL injury risk. Additionally, in their 2019 study, Clark et al. [9] explored the quest for an optimal movement assessment method and alternative approaches to Kinect. Their work emphasized the future development of a movement assessment system based on OpenPose, which holds implications for the present research.
This article focuses on the automatic evaluation of the LESS using a 2D camera and OpenPose. To extract key points, top-down multi-person pose estimation approaches first identify the position of an individual within the image using a bounding box, and subsequently estimate the pose of the person within the bounding box. In the initial stages of this research field, Ladicky et al. [10] employed HOG features for segmenting and locating different parts of the human body. Gkioxari et al. [11] proposed a method based on K-poselets for detecting human body key points. Pishchulin et al. [12] utilized deep features to predict joint positions and applied the DeepCut clustering method for joint point classification. However, this approach suffered from increased algorithmic time complexity, resulting in slower processing speeds. In a subsequent study, Insafutdinov et al. [13] introduced the ResNet framework for human body part detection and incorporated incremental optimization strategies to enhance the computational efficiency of the DeepCut algorithm. Furthermore, Cao et al. [4] developed the OpenPose multi-person pose estimation open-source library, which follows a bottom-up strategy. This library utilizes a network with two stacked and cross-linked convolution branches, where one branch predicts key points and the other branch infers limb relationships. The real-time generation of key points data by OpenPose renders it highly practical and valuable for automating the LESS.
## Methods
### ACL Injury feature extraction model
Figure 1 illustrates the flow chart of the proposed potential injury assessment system based on human body key point data in this paper. The process begins by obtaining and processing Body_25 human body key point data in each frame using OpenPose. Next, the
system calculates the five feature values of the human body. Based on their respective threshold scoring functions, the individual test values are assigned scores. Specifically, a score of 9 denotes excellent performance, 5 represents good performance, and 1 indicates poor performance. Subsequently, the calculated five features are fed into the Analytic Hierarchy Process (AHP) weight evaluation model, resulting in the final evaluation score.
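The first stage of this pipeline, obtaining per-frame key points, can be sketched as follows. The snippet reads the per-frame JSON files that OpenPose writes with its `--write_json` option and masks low-confidence BODY_25 key points; the 0.4 confidence threshold follows the error analysis reported later in this paper, and the helper name is our own.

```python
import json
import numpy as np

def load_body25(json_path, conf_threshold=0.4):
    """Read one OpenPose --write_json frame and return a (25, 3) array of
    (x, y, confidence); key points below the confidence threshold are
    masked with NaN so downstream feature extraction can skip them."""
    with open(json_path) as f:
        frame = json.load(f)
    if not frame["people"]:
        return None                      # no person detected in this frame
    kps = np.array(frame["people"][0]["pose_keypoints_2d"]).reshape(25, 3)
    kps[kps[:, 2] < conf_threshold, :2] = np.nan
    return kps

# BODY_25 indices used by the feature extraction model.
NECK, MID_HIP = 1, 8
R_HIP, R_KNEE, R_ANKLE = 9, 10, 11
L_HIP, L_KNEE, L_ANKLE = 12, 13, 14
R_SHOULDER, L_SHOULDER = 2, 5
```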
Following the evaluation and consolidation of the 17 scoring rules in the LESS, it was determined that four out of the 17 individual scoring indicators pertained to assessing the subject's feet. Taking into account the recognition accuracy of OpenPose [14], five evaluation indicators were selected to specifically identify the potential risk of ACL injury. In conjunction with the study conducted by Sinsurin et al. [15], it was observed that an increase in the flexion angles of the hip and knee in the sagittal plane can effectively mitigate the risk of injury. Ameer et al. [16] revealed that increased peak knee flexion during single-leg landings may serve as a protective mechanism against ACL injury. Specifically, when the knee flexion angle is less than 30\({}^{\circ}\), the strain on the ACL is significantly higher compared to when the knee flexion angle is larger. On the other hand, when the knee flexion angle exceeds 60\({}^{\circ}\), the load on the ACL is considerably reduced. Moreover, increasing the knee flexion angle after landing can effectively reduce the impact force [17], consequently reducing the risk of ACL injury.
This paper uses the platform landing support (Drop Landing, DL) test to extract the aforementioned five features, and employed the modified LESS scoring criteria to evaluate the ACL potential injury risk as shown in Table 1.
Figure 1: Flow chart of potential injury assessment system based on human body key points data.
| Scoring item | Criterion | Score |
| --- | --- | --- |
| A1: Sagittal plane: peak flexion angle of right thigh and shank | A peak value between 0 and 30 degrees represents poor; between 30 and 60 degrees represents good; greater than 60 degrees represents excellent | 1=poor, 5=good, 9=excellent |
| A2: Sagittal plane: peak flexion angle of right thigh and trunk | A peak value between 0 and 30 degrees represents poor; between 30 and 60 degrees represents good; greater than 60 degrees represents excellent | 1=poor, 5=good, 9=excellent |
| A3: Frontal plane: average of the peak values of the flexion angles between the two thighs and the trunk | A peak value greater than 60 degrees means poor; between 30 and 60 degrees means good; between 0 and 30 degrees means excellent | 1=poor, 5=good, 9=excellent |
| D1: Frontal plane: absolute value of the maximum difference between the horizontal distance between the two knee joints and the horizontal distance between the two ankles | A value greater than 50 means poor; between 30 and 50 means good; less than 30 means excellent | 1=poor, 5=good, 9=excellent |
| D2: Frontal plane: absolute value of the maximum difference between the width of the shoulders and the width of the feet | A value greater than 50 means poor; between 30 and 50 means good; less than 30 means excellent | 1=poor, 5=good, 9=excellent |

Table 1: ACL potential injury risk assessment index.
As illustrated in Figure 2, \(p_{1}\) is used to represent the peak value of the cosine of the supplementary angle of the knee flexion angle, where the thigh vector is defined as \(\alpha\) and the shank vector as \(\beta\). Likewise, \(p_{2}\) is used to represent the peak value of the cosine of the supplementary angle of the trunk and thigh flexion angle, where the trunk vector is defined as \(\gamma\).
The model is constructed using the key points data generated by OpenPose, together with the variables \(p_{1}\) and \(p_{2}\) computed during the DL test. The knee angle feature and the feature representing the angle between the trunk and the thigh are given by equations 3 and 4:
\[p_{1}=\max\left\{\frac{\alpha\cdot\beta}{|\alpha|\cdot|\beta|}\right\},\ \text{s.t.}\ \alpha=(x_{9}-x_{10},y_{9}-y_{10}),\ \beta=(x_{11}-x_{10},y_{11}-y_{10}) \tag{3}\]

\[p_{2}=\max\left\{\frac{\alpha\cdot\gamma}{|\alpha|\cdot|\gamma|}\right\},\ \text{s.t.}\ \alpha=(x_{9}-x_{10},y_{9}-y_{10}),\ \gamma=(x_{8}-x_{1},y_{8}-y_{1}) \tag{4}\]
The segmented scoring functions, denoted as \(f(t_{n},p_{1})\) and \(f(t_{n},p_{2})\), are defined individually. These functions signify that at time frame \(t_{n}\ (n>0)\), the value of \(p_{1}\) (or \(p_{2}\)) is evaluated and mapped to a score according to the piecewise function and the corresponding thresholds on the sagittal plane. Specifically, a score of 9 represents excellent performance, 5 denotes good performance, and 1 indicates poor performance, as shown in equations 5 and 6:
\[f(t_{n},p_{1})=\begin{cases}9,&-\frac{1}{2}<p_{1}\leq 1\\ 5,&-\frac{\sqrt{3}}{2}<p_{1}\leq-\frac{1}{2}\\ 1,&-1\leq p_{1}\leq-\frac{\sqrt{3}}{2}\end{cases} \tag{5}\]
\[f(t_{n},p_{2})=\begin{cases}9,&-\frac{1}{2}<p_{2}\leq 1\\ 5,&-\frac{\sqrt{3}}{2}<p_{2}\leq-\frac{1}{2}\\ 1,&-1\leq p_{2}\leq-\frac{\sqrt{3}}{2}\end{cases} \tag{6}\]
Figure 2: Illustration of the model and the sagittal plane.
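A compact sketch of the sagittal-plane model (equations 3 to 6) is shown below; it assumes per-frame BODY_25 arrays such as those produced by the loading helper sketched earlier, and it skips frames containing masked key points.

```python
import numpy as np

def cos_between(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sagittal_peaks(frames):
    """frames: list of (25, 3) BODY_25 arrays; returns (p1, p2) of eqs 3-4."""
    p1_vals, p2_vals = [], []
    for kps in frames:
        alpha = kps[9, :2] - kps[10, :2]    # thigh: hip - knee
        beta  = kps[11, :2] - kps[10, :2]   # shank: ankle - knee
        gamma = kps[8, :2] - kps[1, :2]     # trunk: mid-hip - neck
        if not np.isnan([*alpha, *beta, *gamma]).any():
            p1_vals.append(cos_between(alpha, beta))
            p2_vals.append(cos_between(alpha, gamma))
    return max(p1_vals), max(p2_vals)

def piecewise_score(p):
    """Eqs 5-6: map a cosine peak to the 9 / 5 / 1 score."""
    if p > -0.5:
        return 9          # flexion angle greater than 60 degrees: excellent
    if p > -np.sqrt(3) / 2:
        return 5          # flexion angle between 30 and 60 degrees: good
    return 1              # flexion angle below 30 degrees: poor
```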
_Frontal plane feature extraction model_
Similarly, this article first defines the time series frame \(T\) and the frontal plane feature set \(S\), as shown in equations 7 and 8:
\[\text{Time series frames: }\ T=\{t_{1},t_{2},t_{3},\cdots,t_{n}\},\ n>0 \tag{7}\]

\[\text{Frontal plane features: }\ S=\{s_{1},s_{2},s_{3},s_{4}\} \tag{8}\]
Equations 9 to 15 describe the calculations used to determine specific measurements on the frontal plane. In these equations, \(s_{1}\) represents the Euclidean distance (i.e., horizontal width) between the two ankles, \(s_{2}\) represents the horizontal Euclidean distance between the two knee joints, \(s_{3}\) stands for the shoulder width, and \(s_{4}\) represents the peak value of the average flexion angle of the two thighs and the trunk. These equations are used to model the human body key points generated in real time by OpenPose in the frontal plane. For a visual representation of the human body key points in the frontal plane, please refer to Figure 3.
\[s_{1}=\sqrt{(x_{14}-x_{11})^{2}+(y_{14}-y_{11})^{2}} \tag{9}\]
\[s_{2}=\sqrt{(x_{13}-x_{10})^{2}+(y_{13}-y_{10})^{2}} \tag{10}\]
\[s_{3}=\sqrt{(x_{5}-x_{2})^{2}+(y_{5}-y_{2})^{2}} \tag{11}\]
\[\text{Trunk vector: }\ d=(x_{8}-x_{1},y_{8}-y_{1}) \tag{12}\]
\[\text{Left thigh vector: }\ e=(x_{9}-x_{10},y_{9}-y_{10}) \tag{13}\]
\[\text{Right thigh vector: }\ f=(x_{12}-x_{13},y_{12}-y_{13}) \tag{14}\]
\[s_{4}=\max\left\{\frac{1}{2}\left(\frac{d\cdot e}{|d|\cdot|e|}+\frac{d\cdot f}{|d|\cdot|f|}\right)\right\} \tag{15}\]
By combining the frontal features mentioned earlier, this paper establishes segmented evaluation functions \(f(t_{n},s_{1},s_{2})\) and \(f(t_{n},s_{1},s_{3})\). These functions represent the score values
Figure 3: Illustration of the model and the frontal plane.
obtained in frame \(t_{n}\) based on the absolute differences \(|s_{1}-s_{2}|\) and \(|s_{1}-s_{3}|\), respectively. For the angle-related feature, the function \(f(t_{n},s_{4})\) is defined to represent the score value obtained by the feature \(s_{4}\) in frame \(t_{n}\). The following equations are defined to construct the model using the key points data generated by OpenPose. Similarly, a score of 9 represents excellent performance, 5 denotes good, and 1 indicates poor, as shown in equations 16 to 18:
\[f(t_{n},s_{1},s_{2})=\begin{cases}9,&\max\{|s_{1}-s_{2}|\}<30\\ 5,&30\leq\max\{|s_{1}-s_{2}|\}<50\\ 1,&50\leq\max\{|s_{1}-s_{2}|\}\end{cases} \tag{16}\]

\[f(t_{n},s_{1},s_{3})=\begin{cases}9,&\max\{|s_{1}-s_{3}|\}<30\\ 5,&30\leq\max\{|s_{1}-s_{3}|\}<50\\ 1,&50\leq\max\{|s_{1}-s_{3}|\}\end{cases} \tag{17}\]

\[f(t_{n},s_{4})=\begin{cases}9,&-1\leq s_{4}\leq-\frac{\sqrt{3}}{2}\\ 5,&-\frac{\sqrt{3}}{2}<s_{4}\leq-\frac{1}{2}\\ 1,&-\frac{1}{2}<s_{4}\leq 1\end{cases} \tag{18}\]
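Similarly, the frontal-plane features and their piecewise scores (equations 9 to 18) can be sketched as follows; the function assumes NaN-free BODY_25 frames and returns the three frontal scores.

```python
import numpy as np

def frontal_scores(frames):
    """frames: list of (25, 3) BODY_25 arrays; implements eqs 9-18."""
    d12, d13, s4_vals = [], [], []
    for kps in frames:
        s1 = np.linalg.norm(kps[14, :2] - kps[11, :2])   # ankle width    (eq 9)
        s2 = np.linalg.norm(kps[13, :2] - kps[10, :2])   # knee width     (eq 10)
        s3 = np.linalg.norm(kps[5, :2] - kps[2, :2])     # shoulder width (eq 11)
        d = kps[8, :2] - kps[1, :2]                      # trunk vector   (eq 12)
        e = kps[9, :2] - kps[10, :2]                     # thigh vectors  (eqs 13-14)
        f = kps[12, :2] - kps[13, :2]
        cos_de = np.dot(d, e) / (np.linalg.norm(d) * np.linalg.norm(e))
        cos_df = np.dot(d, f) / (np.linalg.norm(d) * np.linalg.norm(f))
        d12.append(abs(s1 - s2))
        d13.append(abs(s1 - s3))
        s4_vals.append(0.5 * (cos_de + cos_df))          # eq 15

    def dist_score(peak):                                # eqs 16-17
        return 9 if peak < 30 else (5 if peak < 50 else 1)

    s4 = max(s4_vals)                                    # eq 18
    s4_score = 9 if s4 <= -np.sqrt(3) / 2 else (5 if s4 <= -0.5 else 1)
    return dist_score(max(d12)), dist_score(max(d13)), s4_score
```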
### Comprehensive evaluation model
#### Construction of the scoring system
The Analytic Hierarchy Process (AHP) is a multi-dimensional decision-making method that incorporates both qualitative and quantitative analysis. It allows for judgments to be made based on expert opinions and the experiences of decision makers. The AHP is particularly suitable for conducting multi-index evaluations of complex systems. In the context of evaluating LESS scoring results, it is crucial to establish an objective and rational evaluation system. By utilizing the AHP, subjective judgments and preferences can be quantified and combined with objective measurements to create a comprehensive evaluation framework.
To ensure the consideration of both independence and comprehensiveness of indicators, the AHP is employed in this study to construct a three-level structure comprising the target level, criterion level, and index level when establishing the evaluation index system. A weighted scoring model is established based on the AHP to provide a more visually intuitive representation of the test results. This model incorporates the assigned weights for each criterion and index, allowing for a comprehensive evaluation of the subjects. The results of this weighted scoring model are presented in Figure 4.
_Determination of the weight coefficients of criterion level and index level_
At the criterion level, the number of indicators is relatively small, making it suitable for a preliminary hierarchical judgment of plan quality. This level allows for intuitive and easily achievable pairwise comparisons. To determine the weights of the indicators at this level, the expert comparison method is utilized. To quantify the comparative indicators, a nine-level scale method is introduced in the evaluation process. This method allows experts to make pairwise comparisons based on the relative importance or preference of the indicators. The nine-level scale provides a structured framework for assigning values to the indicators, facilitating a more systematic and consistent assessment process.
By employing this method, the evaluation process becomes more rigorous and transparent, enabling experts to effectively compare and prioritize the indicators at the criterion level. This approach ensures a more accurate determination of the weights assigned to each indicator, contributing to the overall evaluation of the plan's quality.
| Relative importance scale | Implication |
| --- | --- |
| 1 | The two elements are equally important |
| 3 | \(i\) is slightly more important than \(j\) |
| 5 | \(i\) is obviously more important than \(j\) |
| 7 | \(i\) is much more important than \(j\) |
| 9 | \(i\) is extremely more important than \(j\) |
| 2, 4, 6, 8 | The median of the above values |
| Reciprocals of above | If activity \(i\) has one of the above non-zero numbers assigned to it when compared with activity \(j\), then \(j\) has the reciprocal value when compared with \(i\) |

Table 2: AHP evaluation index scale.
Figure 4: LESS structure chart of scoring results.
Index \(A\) is the parent index of indices \(B(B_{1},B_{2},\cdots,B_{n})\). After experts compare the indices pairwise and the judgments are transformed according to the scale in Table 2, the judgment matrix \(B\) can be established. The index weights can then be solved by the sum-product method, and finally a consistency test is carried out as follows:
2,4,6,8 & The median of the above values \\ \hline Reciprocals of above & If activity \(i\) & has one of the above non-zero numbers signed to \\ & it when compared with activity \(j\), then \(j\) & has the \\ & reciprocal value when compared with \(i\) \\ \hline \hline \end{tabular} Index \(A\) is the parent index of index \(B(B_{1},B_{2}\cdots B_{n})\). After comparing and judging by experts and transforming according to the scale of Table 1, the judgment matrix \(B\) can be established, and the index weight can be solved by the sum-product method, and finally the consistency test will be carried out as follows:
The judgment matrix \(A\) is established by using the relative importance of the indicators, namely:
\[A=\left(a_{ij}\right)_{n\times n}=\left(w_{i}/w_{j}\right)_{n\times n} \tag{19}\]
Calculate \(M_{i}\), the product of the elements in each row of the judgment matrix, and its \(n\)-th root \(\overline{W}_{i}\):

\[M_{i}=\prod_{j=1}^{n}a_{ij},\quad\overline{W}_{i}=\sqrt[n]{M_{i}}\quad(i,j=1,2,3,\cdots,n) \tag{20}\]

Normalize \(\overline{W}_{i}\) to obtain the weight vector \(W_{i}\):

\[W_{i}=\frac{\overline{W}_{i}}{\sum_{j=1}^{n}\overline{W}_{j}}\quad(i,j=1,2,3,\cdots,n) \tag{21}\]
Calculate the maximum eigenvalue \(\lambda_{\max}\):
\[\lambda_{\max}=\sum_{i=1}^{n}\frac{(AW)_{i}}{nW_{i}} \tag{22}\]
Finally, the consistency check:
\[CI=\frac{\lambda_{\max}-n}{n-1} \tag{23}\]
A consistency check is important to ensure reliable evaluations. The consistency index (\(CI\)), which equals the negative mean of the remaining characteristic roots of the judgment matrix, measures the deviation from perfect consistency. The random consistency ratio (\(CR\)) is then compared to a threshold (as in equation 24) to determine whether the judgments pass the consistency test. If not, the judgment matrix needs to be adjusted to enhance consistency and reliability.
\[CR=CI/RI,\quad\text{s.t. }CR<0.1 \tag{24}\]
| \(n\) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(RI\) | 0.00 | 0.00 | 0.58 | 0.90 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 |

Table 3: Random consistency index values.
_LESS overall score_
Using the weight set \(C_{i}\) of the index layer and the index value matrix \(R_{i}\), the evaluation matrix \(B_{i}\) of the criterion layer can be obtained:
\[\mathbf{B}_{i}=C_{i}\times\mathbf{R}_{i} \tag{25}\]
Finally, the overall evaluation result \(P\) can be obtained by multiplying with the weight coefficient of each index of the criterion layer:
\[P=\mathbf{B}_{i}\times W_{i} \tag{26}\]
To make the evaluation results easy to interpret visually, the \(P\) value is mapped to the evaluation set in Table 4.
### Verification of LESS scoring results
_Establishment of index weights at the criterion level_
In this paper, the number of indicators at the criterion level is small, and the method of pairwise comparison is also easy to implement. Therefore, a number of experts are asked to make judgments to determine the judgment matrix at this level, and the judgment matrix \(B\) is:
\[\mathbf{B}=\begin{bmatrix}1&3\\ \frac{1}{3}&1\end{bmatrix} \tag{27}\]
The eigenvector of the judgment matrix \(B\) is \(W=[0.25,0.75]^{T}\), the maximum eigenvalue \(\lambda_{\max}=2\), \(CI=0\), and \(CR=0\), which meets the consistency requirements. Therefore, the index weight coefficient of the criterion layer is: (0.25, 0.75).
_Establishing the index weight of the index layer_
Index \(B\) is the parent index of index \(C\). After experts compare and judge and transform according to the scale, a judgment matrix \(R\) can be established, as shown in equation 28:
| Evaluation grade | Evaluation level | Evaluation value |
| --- | --- | --- |
| 1 | excellent | 9 |
| 2 | good | 5 |
| 3 | poor | 1 |

Table 4: Index score table.
\[\mathbf{R}=\begin{bmatrix}1&2&3&5&5\\ \frac{1}{2}&1&2&3&4\\ \frac{1}{3}&\frac{1}{2}&1&3&2\\ \frac{1}{5}&\frac{1}{3}&\frac{1}{3}&1&2\\ \frac{1}{5}&\frac{1}{4}&\frac{1}{2}&\frac{1}{2}&1\end{bmatrix} \tag{28}\]
The calculated weight vector of the judgment matrix is \(W=[0.4267,0.2574,0.1602,0.0886,0.0671]^{T}\), the maximum eigenvalue is \(\lambda_{\max}=5.126\), \(CI\) is 0.031, and \(CR\) is 0.028\(<\)0.1, which meets the consistency requirements. Therefore, the index weight coefficients of the index layer are: (0.4267, 0.2574, 0.1602, 0.0886, 0.0671).
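The weight computation and consistency test above can be reproduced numerically. The sketch below implements the sum-product method (column normalization followed by row averaging) together with equations 22 to 24, using the \(RI\) values from Table 3; applied to the judgment matrix \(R\) of equation 28 it recovers the reported weights, \(\lambda_{\max}\), and \(CR\).

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}   # Table 3

def ahp_weights(A):
    """Sum-product method: normalize each column, then average the rows."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    W = (A / A.sum(axis=0)).mean(axis=1)     # column-normalized row averages
    lam_max = np.sum(A @ W / (n * W))        # eq 22
    CI = (lam_max - n) / (n - 1)             # eq 23
    CR = CI / RI[n] if RI[n] > 0 else 0.0    # eq 24
    return W, lam_max, CR

R = [[1,   2,   3,   5,   5],
     [1/2, 1,   2,   3,   4],
     [1/3, 1/2, 1,   3,   2],
     [1/5, 1/3, 1/3, 1,   2],
     [1/5, 1/4, 1/2, 1/2, 1]]

W, lam_max, CR = ahp_weights(R)
print(np.round(W, 4))                    # ~ [0.4267 0.2574 0.1602 0.0886 0.0671]
print(round(lam_max, 3), round(CR, 3))   # ~ 5.126, 0.028
```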
**Participants and protocol**:
30 young male participants (10 varsity basketball players, 10 active participants and 10 less active participants) were recruited to test whether there are significant differences in the feature values extracted by the LESS-based ACL potential injury risk feature extraction model between different populations.
The participants were categorized into three groups, and the platform landing support test method was used to extract the five characteristic values from each group. Subsequently, the comprehensive scores for the five individual indicators were calculated using the AHP weighted model, as described in equations (25) and (26). The scoring results are presented in detail in Table 5.
| Number | X1 | X2 | X3 | X4 | X5 | Total Score |
| --- | --- | --- | --- | --- | --- | --- |
| [MISSING_PAGE_POST] | | | | | | |
| | | | | | | 2.3262 |
| 26 | 1 | 1 | 1 | 1 | 5 | 1.9066 |
| 27 | 5 | 9 | 1 | 1 | 1 | 4.7634 |
| 28 | 1 | 5 | 1 | 1 | 9 | 2.5638 |
| 29 | 1 | 1 | 1 | 5 | 9 | 1.8782 |
| 30 | 1 | 1 | 9 | 5 | 5 | 2.8914 |

Table 5: Test results of the three groups of subjects. *X represents the feature.
Among them, the scores of the basketball player group (#1-10) are significantly higher than those of the active participant group (#11-20) (\(p=0.0193<0.05\)) and of the less active participant group (#21-30) (\(p=3.59\times 10^{-8}<0.05\)). The active participant group also scores significantly higher than the less active participant group (\(p=0.0486<0.05\)). These significant differences between the three groups indicate that the LESS-based feature extraction model has reference value for distinguishing people with different athletic abilities and for identifying individuals liable to potential sports injury risk.
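For reference, the group comparison can be sketched as follows; the paper reports p-values without naming the test, so independent two-sample (Welch's) t-tests are assumed, and the score arrays are random placeholders standing in for the Table 5 totals.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "varsity":     rng.normal(4.5, 0.4, 10),   # placeholders, not real data
    "active":      rng.normal(3.0, 0.4, 10),
    "less_active": rng.normal(2.1, 0.4, 10),
}
for a, b in [("varsity", "active"), ("varsity", "less_active"),
             ("active", "less_active")]:
    t, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)
    print(f"{a} vs {b}: t = {t:.3f}, p = {p:.4g}")
```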
## Results
The experimental equipment utilized in this paper was the Jetson Nano; the experimental environment consisted of a quad-core ARM Cortex-A57 MPCore CPU, 4GB of memory, the Ubuntu 18.04 operating system, and PyCharm Community 2019.
### Error analysis of human posture estimation
Based on the OpenPose error analysis experiment conducted by Kimberley-Dale Ng et al. [14], the PCKh@0.5 (the ratio of correctly estimated key points) was found to be 88.9% when the confidence threshold was set to 0.2, resulting in a 7.1% loss of human key point data. With a confidence threshold of 0.4, the PCKh@0.5 improved to 91.3%, but there was an 18.5% loss of key point data. For confidence thresholds of 0.6 and 0.8, the PCKh@0.5 values were 90.9% and 93.5% respectively, but the corresponding key point losses were 36.7% and 76.2%. In this paper, a confidence threshold greater than 0.4 was chosen to analyze the experimental results of the ACL potential injury risk assessment algorithm.
Figure 5 provides visual representations of the experimental results during the landing stage of the platform landing test. In (a), the cosine of the knee flexion angle in the sagittal plane shows a peak value between 0 and -0.5, indicating a recorded maximum knee flexion angle close to 65 degrees, which is considered excellent. (b) illustrates the change in the cosine of the sagittal hip flexion angle, showing a peak value between 0 and -0.5, representing a recorded maximum hip flexion angle close to 70 degrees, also considered excellent. (c) displays the average flexion angle of the thighs and the trunk in the frontal plane, showing that the two are nearly parallel with a cosine value always close to -1. Finally, (d) presents the peak cosine value of the flexion angle between the two thighs and the trunk on the frontal plane, which is close to -0.75, indicating a flexion angle between 30 degrees and 60 degrees, signifying a good test result.
Figure 5: The first three features. (a) and (b) show the cosine of the knee flexion and hip flexion angles in the sagittal plane, respectively. (c) indicates the average and (d) the peak cosine value of the flexion angle between the thighs and the trunk in the frontal plane.
In Figure 6, (a) represents the correct take-off posture, where the subject's take-off data are stable. In this phase, the absolute difference between the distance of the subject's knees and that of the ankles should be less than 30. On the other hand, (b) and (c) illustrate the errors of knee varus and knee valgus, respectively. When the knee is varus, there are fluctuations in the real-time data, and the difference between the knee and ankle distances falls between 30 and 50. When the knee is valgus, the absolute difference exceeds 50. Both knee valgus and knee varus are prone to cause ACL tears. Notably, a larger displacement in knee varus results in a more severe ACL injury.
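The take-off posture check described above amounts to a simple thresholding rule; a minimal sketch, assuming the knee and ankle distances are measured in the same image units and using the thresholds quoted in the text:

```python
def knee_alignment(knee_dist: float, ankle_dist: float) -> str:
    """Classify frontal-plane knee alignment from the inter-knee and
    inter-ankle distances during take-off (same image units)."""
    diff = abs(knee_dist - ankle_dist)
    if diff < 30:
        return "normal"       # stable take-off posture
    if diff <= 50:
        return "knee varus"   # fluctuating data, elevated ACL risk
    return "knee valgus"      # largest deviation, highest ACL risk
```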
## Conclusion
This paper introduces an ACL potential injury risk assessment method that utilizes key point detection of the human body. The study includes handling missing values in key point data, establishing a feature extraction model for ACL injury risk based on LESS, and calculating the overall score of the subject using a weighted scoring model designed with the AHP method. Through error analysis of the OpenPose algorithm and feature value comparison among three subject types, the proposed ACL injury risk assessment method based on LESS demonstrates its effectiveness in identifying potential anterior cruciate ligament injury risks in individuals.
|
2304.03047 | ETPNav: Evolving Topological Planning for Vision-Language Navigation in
Continuous Environments | Vision-language navigation is a task that requires an agent to follow
instructions to navigate in environments. It becomes increasingly crucial in
the field of embodied AI, with potential applications in autonomous navigation,
search and rescue, and human-robot interaction. In this paper, we propose to
address a more practical yet challenging counterpart setting - vision-language
navigation in continuous environments (VLN-CE). To develop a robust VLN-CE
agent, we propose a new navigation framework, ETPNav, which focuses on two
critical skills: 1) the capability to abstract environments and generate
long-range navigation plans, and 2) the ability of obstacle-avoiding control in
continuous environments. ETPNav performs online topological mapping of
environments by self-organizing predicted waypoints along a traversed path,
without prior environmental experience. It privileges the agent to break down
the navigation procedure into high-level planning and low-level control.
Concurrently, ETPNav utilizes a transformer-based cross-modal planner to
generate navigation plans based on topological maps and instructions. The plan
is then performed through an obstacle-avoiding controller that leverages a
trial-and-error heuristic to prevent navigation from getting stuck in
obstacles. Experimental results demonstrate the effectiveness of the proposed
method. ETPNav yields more than 10% and 20% improvements over prior
state-of-the-art on R2R-CE and RxR-CE datasets, respectively. Our code is
available at https://github.com/MarSaKi/ETPNav. | Dong An, Hanqing Wang, Wenguan Wang, Zun Wang, Yan Huang, Keji He, Liang Wang | 2023-04-06T13:07:17Z | http://arxiv.org/abs/2304.03047v3 | # ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments
###### Abstract
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments. It becomes increasingly crucial in the field of embodied AI, with potential applications in autonomous navigation, search and rescue, and human-robot interaction. In this paper, we propose to address a more practical yet challenging counterpart setting - vision-language navigation in continuous environments (VLN-CE). To develop a robust VLN-CE agent, we propose a new navigation framework, ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments. ETPNav performs online topological mapping of environments by self-organizing predicted waypoints along a traversed path, without prior environmental experience. It privileges the agent to break down the navigation procedure into high-level planning and low-level control. Concurrently, ETPNav utilizes a transformer-based cross-modal planner to generate navigation plans based on topological maps and instructions. The plan is then performed through an obstacle-avoiding controller that leverages a trial-and-error heuristic to prevent navigation from getting stuck in obstacles. Experimental results demonstrate the effectiveness of the proposed method. ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets, respectively. Our code is available at [https://github.com/MarSaKi/ETPNav](https://github.com/MarSaKi/ETPNav).
Vision-Language Navigation, Continuous Environments, Topological Maps
## 1 Introduction
Given a natural language instruction, the task of vision-language navigation (VLN) [1] requires an agent to interpret and follow the instruction to reach the target location. This task has been well-studied over the past few years [2, 3, 4, 5]; however, the majority of works focus on the discrete VLN setting. This setting simplifies navigation as traversing on a predefined graph of an environment, which significantly narrows down the possible locations of the agent and target. Recognizing this cannot reflect the challenges of a deployed system encountered in a real environment, Krantz _et al._[6] introduce VLN in continuous environments (VLN-CE), which discards the strong graph assumption and instead requires the agent to navigate freely on a 3D mesh with low-level actions.
So far, VLN-CE has been shown to be far more difficult than VLN, with a few published works revealing episode success rates less than half of those reported in VLN. Early efforts for VLN-CE are end-to-end trained systems to directly predict low-level actions (or waypoints) from language and observations [6, 7, 8]. This scheme can be challenged by joint learning of navigation and language grounding in a long-horizon task, thus leading to a lower performance compared to VLN. Recently, there has been an emerging trend towards modular waypoint-based approaches [9, 10, 11] that divide the complex task into waypoint generation, subgoal planning, and navigation control. Concretely, in each decision loop, the agent uses a pre-trained network to predict several nearby candidate waypoints, and then performs cross-modal grounding to select a subgoal from the waypoints. After that, a controller drives the agent to reach the selected subgoal with low-level actions. Overall, this modular pipeline simplifies policy learning and closes the performance gap between VLN-CE and VLN.
Despite the progress, we find these waypoint-based methods still have drawbacks in three aspects. **First**, the predicted waypoints are still local and constrained to a nearby area of the agent, which is insufficient to capture the global environment layout and may hinder the agent's long-range planning capacity. For example, to backtrack to a previous remote location for past decision correction, the agent has to run multiple plan-control flows, which can introduce unstable accumulation bias. **Second**, the key design choices for waypoint prediction have not been well-studied. One representative predictor [9] takes RGBD images as inputs, but whether or not the semantic-level RGB inputs are valid remains unknown, since it is only tasked with inferring spatial accessibility. **Third**, obstacle-avoiding control remains unstudied. These methods, instead, employ either a straightforward heuristic [9] or an off-the-shelf controller [12]. As a result, the agent is likely to get stuck in obstacles and stop early, leading to navigation failure.
To address the above problems, we propose a hierarchical navigation framework powered by topological (topo)
maps, and a low-level controller. Topo maps, partially inspired by cognitive science [13], typically depict environments as low-dimensional graph representations with nodes for places and edges for reachability. They can efficiently capture environment layouts and long-range navigation dependency, thereby easing the agent to make long-range goal plans, such as planning a shortest path within the map to reach a remote location. But what makes our topo maps novel is that they are constructed via online self-organization of predicted waypoints, which are concise and meet the assumption of partial observability in a real environment. Notably, this scheme is greatly distinct from previous VLN literature regarding topo mapping, which requires either predefined graphs [14, 3, 15] or environment pre-exploration [16].
To better capture the environment layouts, we systematically examine our topo map's key design choices, such as waypoint prediction, node density, and node representation. In particular, we find that a depth-only waypoint predictor aids generalization in novel environments, while RGB information may undermine spatial accessibility inferring. Moreover, we explicitly consider the obstacle-avoidance problem in VLN-CE. We find this problem is especially crucial in a more challenging and practical scenario - sliding along obstacles is forbidden, where commonly used controllers [9, 12] can cause navigation to get stuck in obstacles frequently, leading to a severe performance drop. Accordingly, we propose a new controller via a trial-and-error heuristic to explicitly help the agent escape from deadlocks, nearly eliminating the performance loss caused by sliding-forbidden.
Altogether, we propose a full navigation system for VLN-CE. For each episode, our agent updates a topo map through online self-organization of waypoints predicted so far. The map decomposes the navigation problem into planning and control. Within each decision loop, the agent uses a cross-modal transformer [17] to compute a global navigation plan from the instruction and the topo map. Then, this plan is executed by a robust obstacle-avoiding controller with low-level actions.
Extensive experiments demonstrate the effectiveness of the proposed method, and our system achieves state-of-the-art on two VLN-CE benchmarks (_e.g._, in test unseen splits, 55% SR and 48% SPL on R2R-CE dataset, 51.21% SR and 41.30% SDTW on RxR-CE dataset). Based on the algorithm described in this paper, we won the CVPR 2022 RxR-Habitat Challenge [11, 18]. In summary, the contributions of this work are four-fold:
* We propose a new topological map-based method for robust navigation planning in VLN-CE. It can efficiently abstract the continuous environments and facilitates the agent's long-range goal planning.
* We investigate the essential design choices for building topological maps through comprehensive experiments, demonstrating that a concise depth-only design is optimal for waypoint prediction.
* We reveal a crucial issue for practical VLN-CE agents - obstacle avoidance, and propose an effective heuristic controller to address the problem.
* The proposed system won the CVPR 2022 RxR-Habitat Challenge and doubled the SDTW of the second-best model. It can serve as a strong baseline for further research on this challenging task.1
Footnote 1: Our code is available at [https://github.com/MarSaKi/ETPNav](https://github.com/MarSaKi/ETPNav).
The rest of this paper is organized as follows. In § 2, we give a brief review of the related work. § 3 describes the task setup of vision-language navigation in continuous environments and then introduces our proposed method. Experimental results are provided in § 4. Lastly, we conclude this work in § 5.
## 2 Related Work
### _Vision-Language Navigation_
Learning navigation with language guidance has drawn significant research interest in recent years. R2R [1] and RxR [19] datasets introduce low-level human language instructions and photo-realistic environments for indoor navigation, while Touchdown [20] further extends this task in an outdoor navigation context. Following these works, dialogue-based navigation such as CVDN [21] and HANNA [22], and navigation for remote object-finding such as REVERIE [23] and SOON [24] have been proposed for further research.
Early VLN methods use sequence-to-sequence LSTMs to predict low-level actions [1] or high-level actions from panoramas [25]. Various attention mechanisms [26, 27, 28, 29] are proposed to improve the learning of visual-textual correspondence. Reinforcement learning is also explored to improve policy learning [30, 31, 2, 32]. Different strategies are also investigated to form a more robust navigation policy, such as environment pre-exploration [2], active perception [33, 34], and planning with graph memory [3, 14]. To enhance an agent's generalization ability to novel environments, various data augmentation strategies are studied to mimic new environments [31, 35, 36, 37, 38] or synthesize new instructions [39, 40, 41, 42, 25]. Recently, transformer-based models have shown superior performance thanks to their powerful ability to learn generic multi-modal representations [43, 44, 45]. This scheme is further extended by recurrent agent state [46, 47, 48, 49], episodic memory [5, 48, 49], graph memory [50, 51] and prompt learning [52, 53] that significantly improves sequential action prediction.
Despite the progress, these agents are developed under the discrete VLN setting, which simplifies navigation as traversing on a predefined graph of an environment. In effect, this setup greatly narrows down the possible locations of the agent and target, while ignoring the low-level control problem that arises in a real-world navigation system. As a result, directly transferring these agents into the real world [54] or continuous environments [6] can cause a severe performance drop.
### _VLN in Continuous Environments_
Recognizing the navigation graph assumption cannot reflect the challenges a deployed system would experience in a real environment, Krantz _et al._[6] introduce VLN in continuous environments (VLN-CE) - requiring the agent to navigate
on a 3D mesh freely with low-level actions (_e.g._, FORWARD 0.25m, ROTATE 15\({}^{\circ}\)). To benchmark VLN-CE agents, discrete paths in R2R [1] and RxR [19] are transferred to continuous environments through the Habitat Simulator [55].
Initial methods for VLN-CE are end-to-end trained systems to directly predict low-level actions from language and observations [6, 7, 56], but demonstrate a huge performance gap to VLN, because jointly learning language grounding and low-level control in a fully end-to-end manner can be difficult and expensive, typically requiring millions of frames of experience [12]. Thus, Krantz _et al._[8] propose to decouple the navigation process as subgoal planning and low-level control, where the model predicts a language-conditioned waypoint as subgoal at each step, and then reaches the subgoal with an off-the-shelf controller [12]. This idea is further extended with semantic maps for better perception of environments [57, 58], finding increased performance but still far below VLN. One potential reason is that the prediction of language-conditioned waypoints requires the model to learn spatial accessibility inferring and cross-modal reasoning simultaneously, which is difficult and may need massive training [8].
Recently, there has been an emerging trend towards modular waypoint-based approaches [9, 10, 11]. Instead of directly predicting a language-conditioned waypoint, these methods further decouple the subgoal prediction as candidate waypoint generation and subgoal selection. Concretely, within each decision loop, the agent first uses a pre-trained network to predict several nearby candidate waypoints, and then performs cross-modal grounding over the waypoints to select a subgoal. Similarly, the subgoal-reaching is conducted by a follow-up controller. Overall, this modular scheme simplifies policy learning and further closes the gap between VLN-CE and VLN. But the drawback is the local waypoint representations which are insufficient to capture global environment layouts and navigation dependency, leading to the agent's non-ideal long-range planning capability. Meanwhile, the widely used controllers are unaware of obstacles, and we find they can cause navigation to get stuck in obstacles frequently in a practical sliding-forbidden scenario. To address these limitations, we not only propose an online constructed topo map for long-range planning, but also devise an obstacle-avoiding controller.
### _Maps for Navigation_
Works on robot navigation have a long tradition of using spatial or topological space representations to enhance environmental perception. Researchers have investigated explicit metric spatial representations [59], and examined the construction of these representations using various sensors [60, 61], as well as how to locate agents with such representations [62, 63]. Modern literature has begun to integrate spatial representations with semantics, yielding promising results in various tasks, such as object-goal navigation [64, 65], vision-language navigation [15, 57, 66], and active perception [67, 68]. However, these metric representations typically suffer from scalability issues and require meticulous map construction [69], which may not be suitable for long-range navigation tasks. Thus, non-metric topological representations have also been considered in classical literature [70, 71], and researchers have investigated the use of semantic topo maps for high-level navigation tasks [72, 73]. Topo maps are based on low-dimensional graph representations and are efficient in capturing environment layouts, thereby benefiting exploration or long-range planning.
In VLN, several works have employed topo maps and demonstrated superior performance [3, 14, 15, 50]. This is because the long-range map helps the agent learn a self-correction policy, which is crucial when the agent loses track of an instruction. However, these maps are derived from predefined graphs by marking observed nodes, which are unavailable in continuous or real-world environments. Chen _et al._[16] explored topo maps in VLN-CE, but their proposed map is built offline through environment pre-exploration and assumes the agent has access to global topology priors, which limits its use in more realistic scenarios. Inspired by their novel ideas, we propose a more practical solution for topo mapping in VLN-CE. Without the need for predefined graphs or environment pre-exploration, our map is built online through the self-organization of predicted waypoints at each step. It is scalable as navigation progresses and meets the assumption of partial observability in a real environment.
## 3 Method
**Task Setup.** We address instruction-following navigation in indoor environments, where an agent is required to follow a specific path described by a natural language instruction to reach the target location. In particular, we focus on a practical setup - vision-language navigation in continuous environments (VLN-CE) [6], where the agent navigates on a 3D mesh of an environment with low-level actions. The action space consists of a set of parameterized discrete actions (_e.g._, FORWARD (0.25m), ROTATE LEFT/RIGHT (15\({}^{\circ}\)), and STOP). VLN-CE uses the Habitat Simulator [55]
Fig. 1: Overview of the proposed model, ETPNav. It consists of three modules, a topological mapping module that gradually updates the topological map as it receives new observations, a cross-modal planning module that computes a navigational plan based on the instruction and map, and a control module that executes the plan with low-level actions.
to render environmental observations based on the Matterport3D scene dataset [74]. Following the panoramic VLN-CE setting [8, 9, 10], at each step \(t\), the agent receives panoramic RGB-D observations \(O_{t}=\{I_{t}^{rgb},I_{t}^{d}\}\) consisting of 12 RGB images and 12 depth images, which are captured from different views at 12 equally-spaced horizontal heading angles, _i.e._, \((0^{\circ},30^{\circ},...,330^{\circ})\). The agent also receives an instruction for each episode. We denote the embeddings of the instruction with \(L\) words by \(W=\{\mathbf{w}_{i}\}_{i=1}^{L}\).
**Overview of Our Approach.** We propose a hierarchical navigation model, named 'ETPNav', which leverages high-level topological map-based planning and a low-level controller for the VLN-CE task. As illustrated in Figure 1, ETPNav comprises three modules: topological mapping, cross-modal planning, and control. In each episode, the topological mapping module gradually updates and maintains a topo map by incorporating observations along the traversed path. Subsequently, the planning module conducts cross-modal reasoning over the map and instruction to predict a long-term goal, and then crafts a high-level topological path plan. The plan is then executed by the control module, which drives the agent towards the goal using low-level actions.
Similar to recent work [57, 10, 58], we presume that the agent can access the ground-truth pose provided by the simulator to facilitate mapping and control. Note that this work does not address the challenge of estimating pose based on noisy sensor readings. However, we suggest that visual odometry techniques [75] may be adaptable to our model in this context. This paper proceeds by introducing topological mapping in § 3.1, followed by cross-modal planning in § 3.2, and the presentation of our control policy in § 3.3. Finally, we provide detailed expositions of training and inference of our model in § 3.4.
### _Topological Mapping_
To facilitate long-term planning, our agent constructs a topo map on-the-fly. This map abstracts the visited or observed locations along the traversed path as a graph representation, denoted as \(G_{t}=\langle N_{t},E_{t}\rangle\) at step \(t\). Each node (\(n_{i}\in N_{t}\)) contains visual information observed at its location as well as position information. Two nodes are connected by an edge (\(e_{i,j}\in E_{t}\)) if their represented locations are directly reachable from each other. Each edge also stores the relative Euclidean distance between two nodes. We divide these nodes into visited nodes, the current node, and ghost nodes, where 'ghost' denotes that nodes have been observed but left unexplored.
Different from prior work [14, 15, 16, 3], our method assumes no prior knowledge of the environmental structure and we propose to construct the topo map via online self-organization of predicted waypoints. As depicted in Figure 2, at each step \(t\), the agent first predicts several nearby waypoints, representing possibly accessible locations near the agent. A current node is also initialized at the agent's current location and connects to the last visited node (if it exists). The predicted waypoints and current node are represented by feature embeddings of the current observations \(O_{t}\). These waypoints will be organized to update the previous topo map \(G_{t-1}\) and obtain the current map \(G_{t}\).
**Image Processing.** Given the current step's RGBD observations \(O_{t}=\{I_{t}^{rgb},I_{t}^{d}\}\), two different pre-trained visual encoders are used to extract RGB feature vectors \(\mathbf{V}_{t}^{rgb}=\{\mathbf{v}_{i}^{rgb}\}_{i=1}^{12}\) and depth feature vectors \(\mathbf{V}_{t}^{d}=\{\mathbf{v}_{i}^{d}\}_{i=1}^{12}\), respectively. To distinguish the features captured from different views of the panorama, we also apply orientation features \(\mathbf{V}_{t}^{\text{ori}}=\{(\cos\theta_{i},\sin\theta_{i})\}_{i=1}^{12}\), where \(\theta_{i}\) represents heading angle. The parameters of the two visual encoders are fixed. More details of pre-processing are introduced in SS 4.1.3.
**Waypoint Prediction.** We employ a transformer-based waypoint predictor [9] to generate the nearby waypoints. The predictor takes the depth feature vectors \(V_{t}^{\text{d}}\) and orientation feature vectors \(V_{t}^{\text{ori}}\) to predict the relative poses of these waypoints. Concretely, feature vectors in \(V_{t}^{\text{d}}\) and \(V_{t}^{\text{ori}}\) are first fused using a linear layer. The resulting vectors are fed into a two-layer transformer to conduct inter-view interaction and obtain contextual depth embeddings. These embeddings are then fed into a multi-layer perceptron to obtain a heatmap representing probabilities of nearby waypoints in space. \(K\) waypoints \(\triangle P^{w}=\{\triangle p^{w}_{i}\}_{i=1}^{K}\) are sampled from the heatmap using a non-maximum-suppression (NMS), where \(\triangle p^{w}_{i}\) denotes the relative pose to the agent. The predictor is pre-trained on the MP3D graph dataset [9], and its parameters are fixed.
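For illustration, the \(K\)-waypoint sampling can be sketched as a greedy non-maximum suppression over an angle-by-distance heatmap; the grid shape, \(K\), and suppression radius below are assumptions rather than the predictor's actual configuration.

```python
import numpy as np

def sample_waypoints(heatmap: np.ndarray, k: int = 5, radius: int = 1):
    """Greedy NMS: repeatedly take the peak cell, then suppress its
    neighborhood (wrapping over the angle axis)."""
    h = heatmap.astype(float)
    a_bins, d_bins = h.shape
    picks = []
    for _ in range(k):
        a, d = np.unravel_index(np.argmax(h), h.shape)
        picks.append((int(a), int(d)))
        for da in range(-radius, radius + 1):
            for dd in range(-radius, radius + 1):
                if 0 <= d + dd < d_bins:
                    h[(a + da) % a_bins, d + dd] = -np.inf  # suppress
    return picks  # (angle_bin, distance_bin) indices -> relative poses
```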
To be noted, our predictor only takes depth images as inputs, instead of RGBD images used in [9]. Such depth-only design is motivated by the fact that waypoints only represent spatial accessibility, while semantic-level RGB information may be not helpful or even detrimental. We provide an ablation analysis of this design in SS 4.3.1.
**Visual Representations for Waypoints and the Current Node.** We conduct feature mapping of the current observations \(O_{t}\) to represent the predicted waypoints and the current node. Specifically, RGB features \(\mathbf{V}_{t}^{\text{rgb}}\), depth features \(\mathbf{V}_{t}^{\text{d}}\) and orientation features \(\mathbf{V}_{t}^{\text{ori}}\) are fused using a
Fig. 2: Illustration of the topological mapping module. It takes the previous graph (\(G_{t-1}\)) and the agent observation (\(O_{t}\)) as input. The waypoint prediction submodule first predicts several nearby waypoints. The graph update submodule organizes these waypoints and incorporates them to update the graph using a waypoint localization function (\(\mathcal{F}_{L}\)).
linear layer, and then fed into a panorama encoder. The panorama encoder uses a multi-layer transformer to perform inter-view interaction and outputs contextual visual embeddings \(\mathbf{\tilde{V}}_{t}=\{\hat{v}_{i}\}_{i=1}^{12}\). The current node has access to the panoramic observations and thus is represented as an average of \(\mathbf{\tilde{V}}_{t}\). The waypoints are partially observed and are represented by embeddings of the views from which they can be observed. For example, if the relative heading angle of a waypoint to the agent is within \(0^{\circ}\sim 30^{\circ}\), the waypoint is represented by the first view embedding \(\mathbf{\tilde{v}}_{1}\). The waypoint representations will be incorporated to update the representations of ghost nodes.
**Graph Update.** We update the topo map with the predicted waypoints based on their spatial relations with existing nodes in the graph. This process utilizes a Waypoint Localization function (\(\mathcal{F}_{L}\)) to localize waypoints in the graph. \(\mathcal{F}_{L}\) takes the position of a waypoint as input and computes its Euclidean distances to all nodes in the graph. If the minimum distance is less than a threshold \(\gamma\), \(\mathcal{F}_{L}\) returns the corresponding node as the localized node. Each waypoint is localized in the graph using \(\mathcal{F}_{L}\), and the graph is updated according to three cases (a minimal sketch follows the list):
1. If a visited node is localized, delete the input waypoint and add an edge between the current node and the localized visited node.
2. If a ghost node is localized, accumulate the position and visual representation of the input waypoint into the localized ghost node. The new position and representation of the localized ghost node are updated as the average of its accumulated waypoint positions and representations.
3. If no node is localized, we take the input waypoint as a new ghost node.
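A minimal sketch of this update follows, assuming each waypoint carries an estimated position and a view embedding; the `Node` class, the set-based edge store, and the threshold value are illustrative rather than the released implementation.

```python
import numpy as np

GAMMA = 0.5  # localization threshold gamma in meters (illustrative value)

class Node:
    """Topo-map node holding an averaged position and feature embedding."""
    def __init__(self, pos, feat, visited=False):
        self.pos = np.asarray(pos, dtype=float)
        self.feat = np.asarray(feat, dtype=float)
        self.visited = visited
        self.count = 1  # number of accumulated waypoint observations

    def accumulate(self, pos, feat):
        # Running average of accumulated waypoint positions / representations.
        self.count += 1
        self.pos += (np.asarray(pos, dtype=float) - self.pos) / self.count
        self.feat += (np.asarray(feat, dtype=float) - self.feat) / self.count

def localize(pos, nodes):
    """F_L: return the nearest existing node if within GAMMA, else None."""
    if not nodes:
        return None
    dists = [np.linalg.norm(np.asarray(pos, dtype=float) - n.pos) for n in nodes]
    i = int(np.argmin(dists))
    return nodes[i] if dists[i] < GAMMA else None

def update_graph(nodes, edges, current, waypoints):
    for pos, feat in waypoints:
        n = localize(pos, nodes)
        if n is None:            # case 3: no node localized -> new ghost node
            ghost = Node(pos, feat)
            nodes.append(ghost)
            edges.add((current, ghost))
        elif n.visited:          # case 1: drop the waypoint, connect the nodes
            edges.add((current, n))
        else:                    # case 2: accumulate into the ghost node
            n.accumulate(pos, feat)
```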
### _Cross-Modal Planning_
Figure 3 illustrates the cross-modal planning module. It consists of a text encoder and a cross-modal graph encoder. The instruction of the current episode is encoded by the text encoder. Then, the cross-modal graph encoder conducts reasoning over the topo map and encoded instruction to predict a long-term goal node. The output is a planned topological path to the goal.
#### 3.2.1 Text Encoder
Each word embedding \(\mathbf{w}_{i}\) is added a positional embedding [76] corresponding to the position of the word in the sentence and a type embedding for text [77]. We denote the word embeddings with positional information as \(\widehat{W}=\{\mathbf{\hat{w}}_{i}\}_{i=1}^{L}\). Those embeddings are then fed into a multi-layer transformer to obtain contextual word representations.
#### 3.2.2 Cross-Modal Graph Encoder
The module takes the topo map \(G_{t}\) and encoded instruction \(\widehat{W}\) to predict a long-term goal node in the topo map.
**Node Encoding.** The visual feature in node \(n_{i}\) is added with a pose encoding and a navigation step encoding. The pose encoding embeds the global relative pose information of a node _w.r.t._ the agent's current location, including its orientation and Euclidean distance relative to the current node. The navigation step encoding embeds the latest visited time step for visited nodes and 0 for ghost nodes. This allows visited nodes to be encoded with different histories to capture navigation dependencies and facilitate alignment with the instruction. The encoding of \(n_{i}\) is denoted as \(\mathbf{n}_{i}\). To represent a STOP action, we add a'stop' node in the graph and connect it with all other nodes.
**Cross-Modal Graph Transformer.** The encoded node and word embeddings are fed into a multi-layer transformer to conduct cross-modal interaction. The transformer architecture is similar to LXMERT [77], with each layer comprising one bi-directional cross-attention sub-layer, two self-attention sub-layers, and two feed-forward sub-layers. For node encoding, the standard self-attention layer [17] only considers visual similarity among nodes, which may overlook nearby nodes that are more relevant than distant nodes. To this end, we devise a graph-aware self-attention (GASA) that further takes into account the graph topology when computing inter-node attention for node encoding:
\[\text{GASA}(\mathbf{X})=\text{softmax}(\frac{\mathbf{X}\mathbf{W}_{\text{q}}(\mathbf{X}\mathbf{W}_{\text{k}})^{\top}}{\sqrt{d}}+\mathbf{E}\mathbf{W}_{\text{e}})\mathbf{X}\mathbf{W}_{\text{v}}, \tag{1}\]
where \(\mathbf{X}\) represents the stack of all node encodings, \(\mathbf{E}\) is the spatial matrix constructed by all-pair shortest distances obtained from the graph edges \(E_{t}\), and \(\mathbf{W}_{\text{q}},\mathbf{W}_{\text{k}},\mathbf{W}_{\text{e}},\mathbf{W}_{\text{v}}\) are learnable matrices. The produced visual-textual associated representation of nodes is formulated as \([\tilde{\mathbf{n}}_{1},\dots,\tilde{\mathbf{n}}_{|N_{t}|}]=\text{GASA}([\mathbf{n}_{1},\dots,\mathbf{n}_{|N_{t}|}])\).
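A minimal PyTorch sketch of Equation (1) is given below; multi-head attention is omitted for brevity, and the way the pairwise-distance matrix \(\mathbf{E}\) is embedded into an attention bias is an assumption.

```python
import torch
import torch.nn as nn

class GASA(nn.Module):
    """Single-head graph-aware self-attention over node encodings."""
    def __init__(self, d: int):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)
        self.wk = nn.Linear(d, d, bias=False)
        self.wv = nn.Linear(d, d, bias=False)
        self.we = nn.Linear(1, 1, bias=False)  # distance -> attention bias
        self.d = d

    def forward(self, x: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        # x: (n, d) node encodings; dist: (n, n) all-pair shortest distances.
        attn = self.wq(x) @ self.wk(x).T / self.d ** 0.5
        attn = attn + self.we(dist.unsqueeze(-1)).squeeze(-1)  # spatial term
        return torch.softmax(attn, dim=-1) @ self.wv(x)
```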
**Long-term Goal Prediction.** We predict a navigation goal score for each node in the topo map \(G_{t}\) as follows:
\[\mathbf{s}_{i}=\text{FFN}(\tilde{\mathbf{n}}_{i}), \tag{2}\]
where FFN denotes a feed-forward network and \(\tilde{\mathbf{n}}_{i}\) is the multimodal representation of node \(n_{i}\). Note that \(s_{0}\) corresponds to the'stop' node and it represents the score of the STOP action. To avoid unnecessary repeated visits to visited nodes, we mask the score for visited nodes and the current node. As such, a long-term goal is picked from ghost nodes or the'stop' node.
Finally, the agent selects a long-term goal according to the predicted goal scores (_e.g._, picking the node with the maximum score). If the selected goal is the 'stop' node, navigation of the current episode terminates. If the selected goal is a ghost node, the agent computes a shortest path to the goal by performing Dijkstra's algorithm on the graph. The resulting path plan consists of a sequence of subgoal nodes, denoted as \(\mathcal{P}_{t}=\{p_{m}\}_{m=1}^{M}\) where \(p_{m}\) represents node position.
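A minimal sketch of goal selection and path planning, assuming a `networkx` graph whose edges store Euclidean lengths under a `distance` key and whose ghost nodes carry a boolean `ghost` attribute; these conventions are illustrative.

```python
import networkx as nx

def plan_path(graph: nx.Graph, scores: dict, current):
    """Greedily pick the highest-scoring candidate ('stop' or a ghost node),
    then plan a shortest path to it over the topo map."""
    candidates = {n: s for n, s in scores.items()
                  if n == "stop" or graph.nodes[n].get("ghost", False)}
    goal = max(candidates, key=candidates.get)
    if goal == "stop":
        return None                      # terminate the episode
    return nx.dijkstra_path(graph, current, goal, weight="distance")
```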
Fig. 3: The planning module consists of a text encoder for instruction encoding, and a graph encoder to conduct cross-modal reasoning over the map to generate a path plan.
### _Control_
The control module is responsible for converting the topological plan \(\mathcal{P}_{t}\) into a series of low-level actions that guide the agent to the goal. Inputs for the control module include the sequence of subgoal nodes spanning \(\mathcal{P}_{t}\) and the agent's pose at each time step. The output action space of navigation control is a set of parameterized low-level actions defined by the VLN-CE task, _e.g._, FORWARD (0.25m), ROTATE LEFT/RIGHT (15\({}^{\circ}\)), and STOP.
The control module produces actions that move the agent from one node to another, where we employ a heuristic policy to generate these actions. Specifically, to reach a subgoal node \(p_{m}\), the agent accesses its current pose and computes its relative orientation and distance (\(\triangle\theta,\triangle\rho\)) from \(p_{m}\). After that, the agent applies a rotate-then-forward control flow, where (\(\triangle\theta,\triangle\rho\)) are quantized and translated to a series of ROTATE (15\({}^{\circ}\)) actions, followed by a FORWARD (0.25m) action sequence. The agent executes these translated low-level actions sequentially, _i.e._, it first rotates to face toward the subgoal and then moves on. After the translated action sequence has been completed, the current subgoal is consumed and the subsequent node in plan \(\mathcal{P}_{t}\) becomes the new subgoal. The cycle repeats until no more nodes remain in \(\mathcal{P}_{t}\).
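A minimal sketch of this quantization, with the sign convention (positive heading = clockwise) as an assumption:

```python
import math

def rotate_then_forward(d_theta: float, d_rho: float):
    """Quantize a relative heading (radians) and distance (meters) to a
    subgoal into the discrete VLN-CE action sequence."""
    actions = []
    n_rot = round(abs(d_theta) / math.radians(15))
    actions += ["ROTATE RIGHT" if d_theta > 0 else "ROTATE LEFT"] * n_rot
    actions += ["FORWARD"] * round(d_rho / 0.25)
    return actions

# e.g. rotate_then_forward(math.radians(45), 1.0)
# -> ['ROTATE RIGHT'] * 3 + ['FORWARD'] * 4
```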
**Handling Unreachable Goal.** It is possible that the predicted long-term goal (a ghost node) is unreachable, due to its position being estimated by predicted waypoints that might not be on the navigation mesh. In such cases, there is a risk of the agent repeatedly selecting the same unreachable goal node in alternating planning stages, inevitably leading to no progress in navigation control. To alleviate this issue, we employ a simple strategy - delete the selected ghost node from the graph map \(G_{t}\) before trying to reach it using navigation control. This approach not only avoids the repeated selection of unfeasible ghost nodes but also reduces the pool of candidates available for long-term goal prediction, thereby easing policy learning.
**Obstacle Avoidance.** The VLN-CE task simulates a practical navigation scenario where collision with obstacles is taken into account during navigation control. Obstacle avoidance is essential, especially when sliding along obstacles is forbidden, such as on the RxR-CE dataset [19]. In such cases, the agent is unable to move forward if its chassis comes into contact with an obstacle. This can result in deadlocks and no progress in control, and in extreme cases, early termination of the episode and navigation failure. To address this issue, we devise a heuristic called 'Tryout' that leverages a trial-and-error approach to prevent navigation from getting stuck. During the execution of a sequence of FORWARD actions by the control module, the Tryout comes into play. Specifically, it detects navigation deadlocks by checking if the agent's position changes after executing a FORWARD action. If a deadlock is identified, the Tryout compels the agent to rotate with a set of predefined headings \(\triangle\Theta^{\text{try}}\) and attempt to move on with a single FORWARD action. If the agent moves away from its previous position after trying the FORWARD action, it indicates that the agent has exited the dead-end. The agent then returns to its original heading and continues with the remaining FORWARD action sequence. However, if the agent remains in the same position, it proceeds to try other headings in \(\triangle\Theta^{\text{try}}\). In practice, \(\triangle\Theta^{\text{try}}\) consists of 7 equally-spaced horizontal heading angles, ranging from 90\({}^{\circ}\) counterclockwise (\(-90^{\circ}\)) to 90\({}^{\circ}\) clockwise (\(90^{\circ}\)), _i.e._, \((-90^{\circ},-60^{\circ},-30^{\circ},0^{\circ},30^{\circ},60^{\circ},90^{\circ})\).
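A minimal sketch of 'Tryout', assuming a hypothetical `env` interface over the simulator's discrete actions:

```python
TRY_HEADINGS = [-90, -60, -30, 0, 30, 60, 90]  # relative headings (degrees)

def forward_with_tryout(env, n_steps: int):
    """Execute FORWARD actions; on deadlock, probe alternative headings.

    `env` is a hypothetical interface with agent_position(), step(action),
    and rotate_by(degrees); it stands in for the Habitat control loop.
    """
    for _ in range(n_steps):
        before = env.agent_position()
        env.step("FORWARD")
        if env.agent_position() != before:
            continue                         # no deadlock, keep going
        for h in TRY_HEADINGS:               # trial-and-error escape
            env.rotate_by(h)
            before = env.agent_position()
            env.step("FORWARD")
            env.rotate_by(-h)                # restore the original heading
            if env.agent_position() != before:
                break                        # escaped the dead-end
```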
### _Training and Inference_
**Pre-training.** To improve the generalization ability of our agent, we pre-train the planning module with self-supervised proxy tasks following the common practice in transformer-based VLN models [4, 5, 45]. In this stage, the input topo maps are constructed offline and derived from predefined graphs used by the Matterport3D simulator [1]. Given a discrete expert trajectory of VLN, the map is generated by marking the current node, visited node, and observed ghost nodes along the trajectory, while inheriting node positions and edges from the predefined graph. Further, we align rendered RGBD images in the Habitat Simulator [55] onto the predefined graph for feature mapping in the map construction process. We adopt Masked Language Modeling (MLM) [76] and Single Action Prediction (SAP) [43] proxy tasks for pre-training. In the MLM task, the input instructions are randomly masked and the planning module is optimized by recovering the masked words after map-instruction interaction as described in SS 3.2. As for the SAP task, we randomly chunk an input expert trajectory and build its corresponding topo map. The objective of this task is to predict the next teacher action, _i.e._, the subsequent action node of the chunked trajectory.
**Fine-tuning.** We further fine-tune our model on downstream VLN-CE tasks to adapt navigation on 3D meshes in the Habitat Simulator [55]. To avoid overfitting to expert experience, we use 'student-forcing' [6] to train the model, where the predicted long-term goal of each step is sampled through the probability distribution of the predicted scores (Equation 2). In each decision loop, the agent updates the topo map as described in § 3.1, and then conducts cross-modal map-instruction reasoning to predict a long-term goal as explained in § 3.2. The planned path is executed by a controller as presented in § 3.3. To determine the teacher action node of each step, we employ an interactive demonstrator \(*\) similar to the DAgger algorithm [78]. The demonstrator \(*\) accesses the ground-truth 3D mesh and selects the ghost node with the shortest geodesic distance to the final target as the teacher node. Note that we determine the real positions of ghost nodes on the mesh by running a rotate-then-forward attempt control after their generation in § 3.1. Overall, the policy learning objective is formulated as:
\[L=\sum_{t=1}^{T}-\log p(a_{t}^{*}|W,G_{t}) \tag{3}\]
where \(a_{t}^{*}\) denotes the teacher action node at step \(t\).
**Inference.** During the testing phase, the agent consistently runs the mapping-planning-control cycle, which is analogous to the fine-tuning stage. The primary distinction between the two stages pertains to the long-term goal sampling strategy employed at each planning step. In this case, the agent selects the ghost node with the maximum predicted scores (Equation 2) greedily. In the event of the agent triggering a STOP action or surpassing the maximum action steps, the navigation of the ongoing episode will terminate.
## 4 Experiment
### _Experimental Setup_
#### 4.1.1 Datasets
We conduct experiments on R2R-CE and RxR-CE datasets, which are created by converting discrete paths of R2R [1] and RxR [19] datasets into continuous environments through the Habitat Simulator [55]. While both datasets provide step-by-step language guidance, they differ in various aspects such as path length, guidance granularity, and agent embodiment as summarized in Table I.
The R2R-CE dataset comprises a total of 5,611 shortest-path trajectories, encompassing train, validation, and test splits. Each trajectory corresponds to approximately 3 English instructions. The average path length is 9.89m and each instruction consists of an average of 32 words. We report performance on several validation splits. Val-Seen contains episodes with novel paths and instructions but from scenes observed in training. Val-Unseen contains novel paths, instructions, and scenes. Agents in R2R-CE have a chassis radius of 0.10m and can slide along obstacles while navigating.
RxR-CE is larger and more challenging compared to R2R-CE. While having similar scene splits as R2R-CE, RxR-CE provides substantially more instructions, spanning multilingual descriptions in English, Hindi, and Telugu, with an average of 120 words per instruction. Additionally, annotated paths in RxR-CE are much longer than those in R2R-CE (15.23m v.s. 9.89m). To be noted, agents in RxR-CE are forbidden to slide along obstacles, and the larger chassis radius (0.18m) makes them prone to collide with obstacles. This also makes RxR-CE more challenging because navigation can easily get stuck when encountering obstacles, underscoring the vital role of obstacle avoidance in this challenging task.
#### 4.1.2 Evaluation Metrics
As in [1, 80, 79], we adopt the following navigation metrics. Trajectory Length (TL): average path length in meters; Navigation Error (NE): average geometric distance in meters between the final and target location; Success Rate (SR): the ratio of paths with NE less than 3 meters; Oracle SR (OSR): SR given an oracle stop policy; SR penalized by Path Length (SPL); Normalized Dynamic Time Warping (NDTW): the fidelity between the predicted and annotated paths; and NDTW penalized by SR (SDTW). R2R-CE uses SR and SPL as its primary metrics, whereas RxR-CE is more concerned with path fidelity and uses NDTW and SDTW as its primary metrics.
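For reference, SR and SPL can be computed from per-episode statistics as follows; the function signature is illustrative, with SPL following its standard definition.

```python
def sr_spl(success, shortest, taken):
    """success: per-episode 0/1 flags; shortest: geodesic shortest-path
    lengths; taken: lengths of the paths actually traversed (meters)."""
    n = len(success)
    sr = sum(success) / n
    spl = sum(s * (l / max(p, l))
              for s, l, p in zip(success, shortest, taken)) / n
    return sr, spl
```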
#### 4.1.3 Implementation Details
**Model Configuration.** For visual encoding, we use ViT-B/32 [81] pre-trained in CLIP [82] to encode RGB images as in [11], and ResNet-50 [83] pre-trained in point-goal navigation [12] to encode depth images following [6]. The same as [77, 5, 9], we set the numbers of layers of the panorama encoder, the text encoder, and the cross-modal graph encoder to 2, 9, and 4, respectively. Other hyperparameters are the same as LXMERT [77] (e.g. the hidden layer size is 768). In the pre-training stage, we initialize the model with pre-trained LXMERT [77] for the R2R-CE dataset and pre-trained RoBERTa [84] for the multilingual RxR-CE dataset.
**Training Details.** Our experiments were performed using the PyTorch framework [85] and executed on two NVIDIA RTX 3090 GPUs. Our model includes two trainable modules: the panorama encoder used in the topological mapping module, and the cross-modal planning module. We pre-train our model for 100,000 iterations (\(\sim\) 20 hours), with a batch size of 64 and a learning rate of 5e-5, utilizing the AdamW optimizer [86]. In this stage, topological maps are built offline and derived from the predefined graph of discrete VLN [1]. We leverage the discrete paths in the R2R and RxR datasets for pre-training purposes and augment the data using synthetic instructions from Prevalent [43] and Marky-mT5 [40]. After pre-training, we choose the model weights producing the best zero-shot navigation performance (_e.g._, SPL on R2R-CE, SDTW on RxR-CE) to initialize the fine-tuning stage. During fine-tuning, the agent interacts with the environments online through the Habitat Simulator [55] and is supervised by the teacher node generated by the demonstrator \(*\). We leverage scheduled sampling [87] to train the model, shifting from teacher-forcing to student-forcing with a decay frequency of every 3,000 iterations and a decay ratio of 0.75. The fine-tuning iterations amount to 15,000 (\(\sim\) 30 hours) with a batch size of 16 and a learning rate of 1e-5. The best iterations are determined by the best performance on the validation unseen splits.
### _Comparison with State-of-the-art Methods_
#### 4.2.1 R2R-CE
In Table II, we compare our ETPNav with current state-of-the-art methods on the R2R-CE dataset. The results demonstrate that our model outperforms the existing models on all splits in terms of NE, OSR, SR, and SPL. Particularly, on the val unseen split, ETPNav surpasses the second-best model CWP-RecBERT [9] by 13% on SR and 10% on SPL. Moreover, our model also generalizes well on the test unseen split, as it outperforms Sim2Sim [10] by 11% on SR and 11% on SPL. Reborn [11] serves as the initial version for the 2022 RxR-Habitat Challenge. It uses a local planning space, which consists of nearby waypoints, and utilizes an unstructured memory bank to capture navigation dependency. The performance gap between Reborn and ETPNav is substantial, with ETPNav outperforming Reborn on the test unseen split by 6% on SR and 3% on SPL. This highlights the efficacy of global planning with topo maps, enabling the agent to encode structured environmental priors and allowing for long-term planning, leading to a more robust policy. We also note that compared to Reborn, ETPNav's improvement
TABLE I: Data statistics and agent embodiment of R2R-CE and RxR-CE datasets.

| Dataset | Language | Path length | Sentence length | Train (#house / #instr) | Val-Seen (#house / #instr) | Val-Unseen (#house / #instr) | Test-Unseen (#house / #instr) | Chassis | Sliding |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R2R-CE | en | 9.89m | 32 words | 61 / 10,819 | 53 / 778 | 11 / 1,839 | 18 / 3,408 | 0.10m | Allowed |
| RxR-CE | en, hi, te | 15.23m | 120 words | 59 / 60,300 | 57 / 6,746 | 11 / 11,006 | 17 / 9,557 | 0.18m | Forbidden |
on SPL is less prominent than that on SR. We attribute this to the global planning of ETPNav, which encourages backtracking and culminates in longer trajectories.
#### 4.2.2 RxR-CE
Table III compares our ETPNav model with the current state-of-the-art methods on the RxR-CE dataset. Our model outperforms the existing best model, CWP-RecBERT [9], on all evaluation metrics on the three splits. For instance, on the val unseen split, ETPNav surpasses CWP-RecBERT by 27.71% on SR, 22.24% on SPL, and 15.19% on NDTW. ETPNav also generalizes well on the test unseen split, where it outperforms CWP-RecBERT by 26.36% on SR, 20.25% on SPL, 16.81% on NDTW, and 22.25% on SDTW. For a fair comparison, we also report our results without Marky-mT5 [40] data augmentation, where ETPNav still beats CWP-RecBERT by a significant margin, for instance, 25.99% SR and 14.78% NDTW on the val unseen split. Please note that Reborn [11] is our winning entry for the 2022 RxR-Habitat Challenge, which employs a local planning space composed of nearby waypoints. While Reborn achieves slightly better NDTW (_e.g._, 55.43% v.s. 54.11%) on the test unseen split, it has significantly worse SDTW (_e.g._, 38.43% v.s. 41.30%). We attribute this to the global planning space of ETPNav, which promotes backtracking and may impact path fidelity. However, this global planning space enables the agent to make long-term plans, resulting in better SR and SDTW.
### _Ablation Study_
In this section, we provide detailed ablation experiments to evaluate specific components of ETPNav, including critical design choices of the topological mapping module (§ 4.3.1) and the cross-modal planning module (§ 4.3.2). Additionally, we compare the proposed heuristic controller with other alternatives (§ 4.3.3). Finally, we visualize the trajectories predicted by our model and compare them with other variants (§ 4.3.4).
#### 4.3.1 Key Design Choices of Topological Mapping
**Waypoint Prediction.** Table IV presents a comparison between three different waypoint predictors on the R2R-CE dataset. In Row 1, RGB and depth features are utilized as inputs, where both feature types are linearly transformed to the same dimension, fused, and then fed into the transformer layer to predict waypoints. This approach is also the default choice in [9]. Row 2 only takes RGB features as inputs, while Row 3 shows our approach that uses only depth features for waypoint prediction. We apply waypoint metrics [9] and navigation results to assess the quality of predicted waypoints. These waypoint metrics are as follows: \(|\triangle|\) measures the difference in the number of target waypoints and predicted waypoints. %Open measures the ratio of waypoints that are in open space (not hindered by any obstacle). \(d_{C}\) and \(d_{H}\) are the Chamfer distance and the Hausdorff distance, respectively, commonly used metrics to measure the distance between point clouds.
As shown in Table IV, Row 1 achieves a decent performance in both waypoint and navigation metrics on the val unseen split, with 82.87 %Open, 1.05 \(d_{C}\), and 56.44% SR. Conversely, Row 2 only utilizes RGB to predict waypoints, resulting in the worst performance of all with 65.34 %Open and 1.08 \(d_{C}\). Without depth information, the %Open metric drops severely, indicating that many waypoints are obstructed by obstacles or not on the navigation mesh. Consequently, the navigation performance also declines considerably, for example, compared to Row 1, SR drops by 4.78% on the val unseen split. It is noteworthy that the depth-only predictor (Row 3) yields the best performance, achieving 84.05 %Open and 1.04 \(d_{C}\). Additionally, the navigation
TABLE II: Comparison with state-of-the-art methods on R2R-CE dataset. Metric columns are grouped left-to-right as Val Seen, Val Unseen, and Test Unseen.

| Methods | TL | NE↓ | OSR↑ | SR↑ | SPL↑ | TL | NE↓ | OSR↑ | SR↑ | SPL↑ | TL | NE↓ | OSR↑ | SR↑ | SPL↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seq2Seq [6] | 9.26 | 7.12 | 46 | 37 | 35 | 8.64 | 7.37 | 40 | 32 | 30 | 8.85 | 7.91 | 36 | 28 | 25 |
| SASRA [56] | 8.89 | 7.71 | - | 36 | 34 | 7.89 | 8.32 | - | 24 | 22 | - | - | - | - | - |
| CMTP [16] | - | 7.10 | 56 | 36 | 31 | - | 7.90 | 38 | 26 | 23 | - | - | - | - | - |
| LAW [7] | - | - | - | 40 | 37 | - | - | - | 35 | 31 | - | - | - | - | - |
| HPN [8] | 8.54 | 5.48 | 53 | 46 | 43 | 7.62 | 6.31 | 40 | 36 | 34 | 8.02 | 6.65 | 37 | 32 | 30 |
| CM2 [57] | 12.05 | 6.10 | 51 | 43 | 35 | 11.54 | 7.02 | 42 | 34 | 28 | 13.90 | 7.70 | 39 | 31 | 24 |
| WS-MGMAP [58] | 10.12 | 5.65 | 52 | 47 | 43 | 10.00 | 6.28 | 48 | 39 | 34 | 12.30 | 7.11 | 45 | 35 | 28 |
| CWP-CMA [9] | 11.47 | 5.20 | 61 | 51 | 45 | 10.90 | 6.20 | 52 | 41 | 36 | 11.85 | 6.30 | 49 | 38 | 33 |
| CWP-RecBERT [9] | 12.50 | 5.02 | 59 | 50 | 44 | 12.23 | 5.74 | 53 | 44 | 39 | 13.51 | 5.89 | 51 | 42 | 36 |
| Sim2Sim [10] | 11.18 | 4.67 | 61 | 52 | 44 | 10.69 | 6.07 | 52 | 43 | 36 | 11.43 | 6.17 | 52 | 44 | 37 |
| Reborn (ours) [11] | 10.29 | 4.34 | 67 | 59 | 56 | 10.06 | 5.40 | 57 | 50 | 46 | 11.47 | 5.55 | 57 | 49 | 45 |
| ETPNav (ours) | 11.78 | 3.95 | 72 | **66** | **59** | 11.99 | **4.71** | 65 | **57** | **49** | 12.87 | 5.12 | 63 | **55** | **48** |
TABLE III: Comparison with state-of-the-art methods on RxR-CE dataset. Metric columns are grouped left-to-right as Val Seen, Val Unseen, and Test Unseen; # marks results without Marky-mT5 [40] data augmentation.

| Methods | NE↓ | SR↑ | SPL↑ | **NDTW↑** | **SDTW↑** | NE↓ | SR↑ | SPL↑ | **NDTW↑** | **SDTW↑** | NE↓ | SR↑ | SPL↑ | **NDTW↑** | **SDTW↑** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seq2Seq [6] | - | - | - | - | - | - | - | - | - | - | 12.10 | 13.93 | 11.96 | 30.86 | 11.01 |
| CWP-CMA [9] | - | - | - | - | - | 8.76 | 26.59 | 22.16 | 47.05 | - | 10.40 | 24.08 | 19.07 | 37.39 | 18.65 |
| CWP-RecBERT [9] | - | - | - | - | - | 8.98 | 27.08 | 22.65 | 46.71 | - | 10.40 | 24.85 | 19.61 | 37.30 | 19.05 |
| Reborn# (ours) [11] | 5.73 | 51.14 | 44.78 | 65.72 | 43.84 | 5.82 | 47.56 | 41.65 | 63.02 | 41.16 | - | - | - | - | - |
| Reborn (ours) [11] | 5.69 | 52.43 | 45.46 | 66.27 | 44.47 | 5.98 | 48.60 | 42.05 | **63.35** | 41.82 | 7.10 | 45.82 | 38.82 | **55.43** | 38.42 |
| ETPNav# (ours) | 5.55 | 57.26 | 47.67 | 64.15 | 47.57 | 5.80 | 53.07 | 44.16 | 61.49 | 43.92 | - | - | - | - | - |
| ETPNav (ours) | 5.03 | 61.46 | 50.83 | **66.41** | **51.28** | 5.64 | **54.79** | 44.89 | 61.90 | **45.33** | 6.99 | 51.21 | 39.86 | 54.11 | **41.30** |
performance is also superior, with 57.21% SR and 49.15% SPL on the val unseen split, compared to 56.44% SR and 48.53% SPL respectively. These findings suggest that RGB information is ineffective and even detrimental to waypoint prediction. One possible explanation is that low-level semantics in RGB features can make the predictor overfit to seen environments, while such semantics are unnecessary for inferring spatial accessibility.
**Different Options for Map Construction.** Table V compares different options for map construction on the R2R-CE dataset, including the localization threshold \(\gamma\) and the waypoint accumulation in § 3.1, as well as the ghost node deleting in § 3.3.
As the localization threshold \(\gamma\) increases, the number of nodes \(N_{\text{node}}\) shows a downward trend. This is because a higher \(\gamma\) encourages the agent to localize predicted waypoints onto existing nodes of the graph, thereby reducing the creation of new nodes. Meanwhile, the overall navigation performance is sensitive to the number of nodes \(N_{\text{node}}\). For example, on the val unseen split, there is approximately 12% difference on SR comparing (Row 10 \(\sim\) Row 12) to (Row 1 \(\sim\) Row 3). The reason is that a higher \(\gamma\) results in too few nodes to depict the environment well, limiting the agent's accurate perception and efficient planning. However, a large \(N_{\text{node}}\) also affects the navigation performance, _e.g._, on the val unseen split, 56.71% SR of Row 1 v.s. 57.21% SR of Row 4. One potential reason is that a larger number of candidate nodes increases the learning difficulty of the planning module.
Moreover, both 'Accumulation' and 'Deleting' are beneficial. For instance, comparing Row 4 and Row 5, without 'Accumulation', SR and SPL on the val unseen split decrease by 1.32% and 1.23% respectively. 'Accumulation' allows the agent to integrate multi-step waypoint observations to represent ghost nodes, helping the planning module predict an accurate long-term goal. Similarly, comparing Row 4 and Row 6, without 'Deleting', the performance decreases significantly, with SR and SPL on the val unseen split dropping by 4.80% and 4.13% respectively. Without 'Deleting', unreachable ghost nodes can be selected endlessly by the agent, resulting in no progress in navigation. In subsequent experiments, Row 4 is taken as the default setup.
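To make these options concrete, the following is a minimal sketch of how a localization threshold, accumulation, and deleting could interact; the data structures, the feature-averaging rule, and all function names are our own illustrative assumptions rather than the paper's implementation of § 3.1 and § 3.3.

```
import numpy as np

def update_topo_map(nodes, feats, waypoints, wp_feats, gamma):
    """Toy sketch of the localization function F_L with threshold gamma:
    a predicted waypoint is snapped onto an existing node if one lies within
    gamma meters, otherwise it spawns a new ghost node. 'Accumulation' here
    fuses repeated observations of the same ghost node by averaging."""
    for w, f in zip(waypoints, wp_feats):
        dists = [np.linalg.norm(w - n) for n in nodes]
        i = int(np.argmin(dists)) if nodes else -1
        if nodes and dists[i] < gamma:
            feats[i] = (feats[i] + f) / 2.0      # 'Accumulation'
        else:
            nodes.append(w)                      # new ghost node
            feats.append(f)
    return nodes, feats

def select_goal(scores, ghost_ids):
    """'Deleting': the chosen ghost node is removed once selected, so an
    unreachable goal cannot be picked endlessly."""
    i = int(np.argmax(scores))
    scores.pop(i)
    return ghost_ids.pop(i)

# a larger gamma snaps the second waypoint onto the first node (fewer nodes)
nodes, feats = update_topo_map([], [], [np.array([0., 1.0]), np.array([0., 1.2])],
                               [np.ones(4), np.zeros(4)], gamma=0.5)
```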
#### 4.3.2 Key Design Choices of Cross-Modal Planning
**Comparison of Different Planning Spaces.** Table VI compares different planning spaces on the R2R-CE dataset as well as the effect of GASA in Equation 1. The local planning space only considers adjacent ghost nodes of the agent as candidate goals, while the global planning space consists of all observed ghost nodes along the traversed path. Global planning results in better navigation performance, _e.g._, on the val unseen split, Row 4 achieves a 57.21% SR compared to Row 2 at 53.92% SR. This demonstrates the superiority of global planning, as it allows efficient backtracking to a previous location, providing a self-correction policy. In contrast, local planning requires multiple plan-control flows to reach a remote location, introducing unstable accumulation bias, making it challenging to achieve such intelligent behavior. GASA is also shown to be effective, as it increases SR by about 1%, comparing (Row 2, Row 4) to (Row 1, Row 3) where it is not used. GASA introduces topology for node encoding, facilitating the agent's ability to capture environmental structural priors. We also note that the gain of GASA to global planning is more significant than that to local planning, comparing \(\uparrow\) 1.24% SR in global planning and \(\uparrow\) 0.77% SR in local planning. We suspect this is because conducting global planning requires an understanding of the house structure, while local planning is restricted to nearby areas, thereby reducing the need for structure priors.
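Equation 1 is not reproduced in this excerpt, so the sketch below only illustrates the stated idea behind GASA: we assume the topology enters as an additive bias on standard attention logits derived from pairwise graph distances, with all names and the scalar weight `m_proj` being hypothetical.

```
import numpy as np

def gasa(x, graph_dist, w_q, w_k, w_v, m_proj=-0.5):
    """Hypothetical graph-aware self-attention over node features x (n x d):
    scaled dot-product logits plus a bias that decays attention with the
    pairwise graph distance, injecting the map's structural prior."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    logits = q @ k.T / np.sqrt(k.shape[1]) + m_proj * graph_dist
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)        # row-wise softmax
    return attn @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                        # 5 nodes, 8-dim features
hops = rng.integers(0, 4, size=(5, 5)).astype(float)   # toy hop-count matrix
out = gasa(x, hops, *(rng.normal(size=(8, 8)) for _ in range(3)))
```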
**The Effect of Pre-training.** Table VII presents the benefits of various pre-training tasks on the downstream R2R-CE
\begin{table}
\begin{tabular}{l c|c c c c|c c c c|c c c c c} \hline \hline \multirow{2}{*}{\#} & \multirow{2}{*}{inputs} & \multicolumn{4}{c}{Waypoint Prediction} & \multicolumn{4}{c}{Val-Seen} & \multicolumn{4}{c}{Val-Unseen} \\ & & & \(\mid\)\(\mid\)\(\mid\) & \%Open\(\uparrow\) & \(d_{C}\)\(\downarrow\) & \(d_{H}\)\(\downarrow\) & TL & NE\(\downarrow\) & OSR\(\uparrow\) & **SR\(\uparrow\)** & **SPL\(\uparrow\)** & TL & NE\(\downarrow\) & OSR\(\uparrow\) & **SR\(\uparrow\)** & **SPL\(\uparrow\)** \\ \hline
1 & RGBD & 1.40 & 82.87 & 1.05 & **2.01** & 11.22 & 3.87 & 70.56 & 63.88 & 57.11 & 11.77 & 4.73 & 63.24 & 56.44 & 48.53 \\
2 & RGB & 1.38 & 65.34 & 1.08 & 2.03 & 13.38 & 4.38 & 63.49 & 56.29 & 46.42 & 12.81 & 4.99 & 57.91 & 51.66 & 42.21 \\
3 & Depth & 1.39 & **84.05** & **1.04** & **2.01** & 11.78 & 3.95 & 71.85 & **66.19** & **59.37** & 11.99 & 4.71 & 64.71 & **57.21** & **49.15** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Comparison of different waypoint predictors.
\begin{table}
\begin{tabular}{l c c c|c c c c c|c c c c c c} \hline \hline \multirow{2}{*}{\#} & \multirow{2}{*}{\(\gamma\) (m)} & \multirow{2}{*}{Accumulation} & \multirow{2}{*}{Deleting} & \multicolumn{4}{c}{Val-Seen} & \multicolumn{4}{c}{Val-Unseen} \\ & & & & & \(N_{\text{node}}\) & TL & NE\(\downarrow\) & OSR\(\uparrow\) & **SR\(\uparrow\)** & **SPL\(\uparrow\)** & \(N_{\text{node}}\) & TL & NE\(\downarrow\) & OSR\(\uparrow\) & **SR\(\uparrow\)** & **SPL\(\uparrow\)** \\ \hline
1 & & ✓ & ✓ & 32.17 & 11.43 & 3.86 & 70.82 & **66.58** & **60.17** & 32.18 & 11.68 & 4.70 & 63.34 & 56.71 & 48.71 \\
2 & 0.25 & & ✓ & 33.62 & 10.96 & 3.70 & 70.43 & 65.42 & 59.67 & 34.13 & 11.33 & 4.81 & 62.26 & 55.19 & 48.30 \\
3 & & ✓ & & 31.07 & 12.15 & 3.91 & 71.46 & 64.65 & 57.51 & 30.80 & 13.46 & 5.08 & 62.15 & 53.34 & 45.64 \\ \hline
4 & & ✓ & ✓ & 24.34 & 11.78 & 3.95 & 71.85 & 66.19 & 59.37 & 23.76 & 11.99 & 4.71 & 64.71 & **57.21** & **49.15** \\
5 & 0.50 & & ✓ & 30.46 & 11.48 & 3.71 & 72.49 & 66.19 & 60.01 & 31.02 & 12.61 & 4.68 & 63.13 & 55.89 & 47.92 \\
6 & & ✓ & & 22.12 & 11.97 & 4.02 & 70.05 & 63.93 & 57.46 & 21.01 & 13.38 & 5.03 & 59.70 & 52.41 & 45.02 \\ \hline
7 & & ✓ & ✓ & 18.37 & 12.22 & 3.68 & 73.52 & 66.45 & 58.64 & 18.23 & 13.97 & 4.94 & 64.11 & 54.75 & 45.42 \\
8 & 0.75 & & ✓ & 25.71 & 13.20 & 3.92 & 69.15 & 64.01 & 57.58 & 25.45 & 14.48 & 4.81 & 61.06 & 53.61 & 45.86 \\
9 & & ✓ & & & 16.52 & 12.79 & 4.14 & 67.86 & 61.95 & 55.96 & 15.57 & 15.04 & 5.07 & 58.78 & 51.16 & 42.38 \\ \hline
10 & & ✓ & ✓ & 14.62 & 14.92 & 4.96 & 65.55 & 56.04 & 47.92 & 14.43 & 18.60 & 6.13 & 52.31 & 42.30 & 33.60 \\
11 & 1.00 & & ✓ & 20.55 & 17.05 & 4.66 & 62.59 & 53.85 & 45.53 & 20.29 & 21.02 & 5.68 & 53.12 & 41.59 & 32.17 \\
12 & & ✓ & & & 11.87 & 16.16 & 4.53 & 59.51 & 55.14 & 45.63 & 10.98 & 18.52 & 5.50 & 49.32 & 42.36 & 33.09 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Comparison of different options for map construction. \(\gamma\) is the threshold of the waypoint localization function \(\mathcal{F}_{L}\). ‘Accumulation’ denotes accumulating multiple waypoints to represent one ghost node. ‘Deleting’ denotes deleting the selected ghost node in each planning step. \(N_{\text{node}}\) denotes the average number of nodes per episode.
task. Row 1 shows results from the model trained from scratch, which yields the worst performance (_e.g._, on the val unseen split, 37.41% on SR and 30.28% on SPL). Row 2 displays the outcomes of pre-training with the MLM task, indicating a significant gain from the generic pre-training task (_e.g._, \(\uparrow\) 10.82% SR and \(\uparrow\) 7.45% SPL on the val unseen split). This is because the MLM task enables the model to learn transferable visiolinguistic representations, enhancing the agent's generalization ability. When applying the downstream-specific SAP task, the navigation performance of the model in Row 3 further improves (_e.g._, compared to Row 2, \(\uparrow\) 8.98% SR and \(\uparrow\) 11.42% SPL). This is due to the SAP task promoting the model to learn navigation-oriented representations, which is crucial for efficient navigation. For example, comparing Row 3 to Row 2 on the val unseen split, TL decreases remarkably (\(\downarrow\) 4.70m) and SPL increases significantly (\(\uparrow\) 11.42%). Thus, we assemble MLM and SAP as our pre-training tasks.
Table VIII presents the effect of using different visual inputs for pre-training on the downstream R2R-CE task. Row 1 and Row 2 pre-train the model with RGB images captured in the Matterport3D Simulator [1], which is a common practice in existing pre-training-based VLN-CE models [9, 10, 11]. In contrast, Row 3 and Row 4 use RGB images reconstructed in the Habitat Simulator [55]. Notably, Row 3 outperforms Row 1 by 2.68% on SR and 2.88% on SPL on the val unseen split, highlighting a performance gap between the MP3D and Habitat simulators due to their visual domain gap. Although the model can be fine-tuned using Habitat images, the performance loss caused by the domain gap in the pre-training stage cannot be eliminated. Additionally, adding depth information for pre-training (_e.g._, Row 4) yields better performance than Row 3, with gains of 1.29% on SR and 2.18% on SPL on the val unseen split. Without it, the depth embedding must be learned from scratch during fine-tuning, which may disturb the visual encoding ability acquired in the pre-training stage and hurt downstream navigation performance. Therefore, Row 4 is our default pre-training setup.
#### 4.3.3 Comparison of Different Controllers
Table IX compares the performance of various navigation controllers on the val unseen splits of the R2R-CE and RxR-CE datasets. The Teleportation controller serves as the performance upper bound. It transports the agent to the goal predicted by the planning module. However, since the goal (a ghost node) might not be on the navigation mesh, in practice, we first transport the agent to an adjacent node of the goal, then drive it towards the goal using our heuristic control. We also consider PointGoal and Heuristic controllers that are admissible in VLN-CE. PointGoal represents the off-the-shelf point-goal navigator [12]. Heuristic is the proposed controller described in § 3.3, and Tryout is active when sliding along obstacles is forbidden (_i.e._, on the RxR-CE dataset).
Row 1 establishes the upper bound, reaching 57.97% SR and 49.76% SPL on the R2R-CE dataset and 64.43% NDTW and 46.04% SDTW on the RxR-CE dataset. In Row 2, PointGoal shows satisfactory performance, but there is a clear gap to Row 1, with a decrease of 5.71% SPL on the R2R-CE dataset and 2.25% SDTW on the RxR-CE dataset. Row 3 shows that our Heuristic controller manages to narrow the gap on the R2R-CE dataset, reaching 49.15% SPL compared to Row 1's 49.76% SPL. However, this controller results in significant performance drops on the RxR-CE dataset, with a 27.4% SDTW decrease compared to Row 1. This is because it is unaware of collisions and suffers frequent deadlocks against obstacles under the challenging sliding-forbidden setup, resulting in navigation failure. The proposed Tryout satisfactorily handles this problem, nearly eliminating the performance loss caused by the sliding-forbidden setup, with 45.33% SDTW on Row 4 compared to 46.04% SDTW on Row 1. Tryout even sur
\begin{table}
\begin{tabular}{l l c|c c c c c c|c c c c} \hline \hline \multirow{2}{*}{\#} & \multirow{2}{*}{Planning Space} & \multirow{2}{*}{GASA} & \multicolumn{3}{c}{Val-Seen} & \multicolumn{3}{c}{Val-Unseen} \\ & & & \multicolumn{1}{c}{NL} & \multicolumn{1}{c}{OSR\(\uparrow\)} & \multicolumn{1}{c}{**SR\(\uparrow\)**} & \multicolumn{1}{c}{**SPL\(\uparrow\)**} & \multicolumn{1}{c}{TL} & \multicolumn{1}{c}{NE\(\downarrow\)} & \multicolumn{1}{c}{OSR\(\uparrow\)} & \multicolumn{1}{c}{**SR\(\uparrow\)**} & \multicolumn{1}{c}{**SPL\(\uparrow\)**} \\ \hline
1 & & & 11.01 & 4.07 & 68.51 & 62.60 & 56.79 & 11.37 & 4.92 & 61.28 & 53.15 & 46.83 \\
2 & Local & ✓ & 11.53 & 3.95 & 70.05 & 63.11 & 56.69 & 12.12 & 4.94 & 62.18 & 53.92 & 46.43 \\ \hline
3 & & & 12.34 & 3.89 & 72.37 & 65.17 & 57.11 & 12.04 & 4.83 & 63.48 & 55.97 & 48.08 \\
4 & Global & ✓ & 11.78 & 3.95 & 71.85 & **66.19** & **59.37** & 11.99 & 4.71 & 64.71 & **57.21** & **49.15** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Comparison of different planning spaces (Local vs. Global). GASA represents graph-aware self-attention.
\begin{table}
\begin{tabular}{l c c|c c c c c|c c c c c} \hline \hline \multirow{2}{*}{\#} & \multirow{2}{*}{RGB} & \multirow{2}{*}{Depth} & \multicolumn{3}{c}{Val-Seen} & \multicolumn{3}{c}{Val-Unseen} \\ & & & \multicolumn{1}{c}{NL} & \multicolumn{1}{c}{OSR\(\uparrow\)} & \multicolumn{1}{c}{**SR\(\uparrow\)**} & \multicolumn{1}{c}{**SPL\(\uparrow\)**} & \multicolumn{1}{c}{TL} & \multicolumn{1}{c}{NE\(\downarrow\)} & \multicolumn{1}{c}{OSR\(\uparrow\)} & \multicolumn{1}{c}{**SR\(\uparrow\)**} & \multicolumn{1}{c}{**SPL\(\uparrow\)**} \\ \hline
1 & MP3D & & 12.33 & 3.86 & 73.78 & **66.20** & 57.90 & 13.54 & 5.04 & 62.04 & 53.24 & 44.09 \\
2 & MP3D & ✓ & 11.84 & 3.97 & 71.59 & 63.88 & 56.11 & 13.13 & 4.98 & 64.49 & 55.74 & 47.48 \\ \hline
3 & Habitat & & 11.71 & 3.98 & 70.82 & 62.34 & 55.07 & 12.90 & 4.98 & 62.59 & 55.92 & 46.97 \\
4 & Habitat & ✓ & 11.78 & 3.95 & 71.85 & 66.19 & **59.37** & 11.99 & 4.71 & 64.71 & **57.21** & **49.15** \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: The effect of different visual inputs used for pre-training.
passes the learning-based PointGoal controller with 45.33% SDTW on Row 4 compared to 43.79% SDTW on Row 2.
Also, we are interested in how the agent's chassis radius may impact navigational performance. Figure 5 shows the episodic success rate of ETPNav on the R2R-CE and RxR-CE datasets when employing either the PointGoal or the Heuristic controller. For both datasets, the success rate of the two controllers decreases as the chassis radius increases, since a bigger chassis may lead the agent to collide with obstacles more frequently, increasing the risk of navigation failure. The proposed heuristic controller, however, is resilient to this problem and consistently outperforms the PointGoal controller, particularly on the RxR-CE dataset, where it beats PointGoal by about 3% on SR across all chassis radii. The main reason is that it uses Tryout to explicitly prevent the agent from getting stuck in obstacles, which aids the adaptation of the navigation policy to various chassis sizes. This further verifies the robustness of the proposed obstacle-avoiding controller.
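For intuition, a toy version of such an obstacle-avoiding step is sketched below; the turn schedule, step size, and endpoint-only collision test are illustrative simplifications of our own, not the paper's exact § 3.3 procedure.

```
import numpy as np

def tryout_step(pos, goal, blocked, step=0.25,
                trial_turns=(30, -30, 60, -60, 90, -90)):
    """Toy heuristic controller with Tryout on a 2-D plane: face the subgoal
    and step forward; on collision, twist through a sequence of alternative
    headings instead of deadlocking (collision is only checked at the step's
    endpoint here, for simplicity)."""
    heading = np.degrees(np.arctan2(*(goal - pos)[::-1]))
    for turn in (0,) + trial_turns:                 # 0 deg = the normal step
        ang = np.radians(heading + turn)
        nxt = pos + step * np.array([np.cos(ang), np.sin(ang)])
        if not blocked(nxt):
            return nxt
    return pos                                      # deadlocked this step

# staggering around a toy wall segment near x = 1
pos, goal = np.array([0.0, 0.0]), np.array([2.0, 0.0])
wall = lambda p: 0.9 < p[0] < 1.1 and abs(p[1]) < 0.5
for _ in range(12):
    pos = tryout_step(pos, goal, wall)
```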
#### 4.3.4 Qualitative Results
Figure 4 and Figure 6 visualize trajectories predicted by our model compared to the variant using local planning (on the R2R-CE dataset) and the variant without Tryout control (on the RxR-CE dataset), respectively.
As shown in Figure 4, the local planning space is insufficient to capture the global environment layout and hinders the agent's long-term planning capacity. For example, at step 7, the agent seems to realize that it is navigating in the wrong direction and intends to backtrack. However, after completing a single-step backtracking at step 8, it again decides to go back to the wrong place it visited at step 7. This behavior of oscillating between two locations persists until navigation failure at step 15. On the other hand, the global planning space enables the agent to capture the global environment layout and successfully correct previous wrong decisions. At step 4, the agent also starts by navigating in the wrong direction, just like the local planning variant. But the predicted long-term goal effectively guides it back onto the right track at step 8, concluding with successful navigation.
\begin{table}
\begin{tabular}{l l|c c c c|c c c c c} \hline \hline \multirow{2}{*}{\#} & \multirow{2}{*}{Controllers} & \multicolumn{4}{c|}{R2R-CE Val-Unseen} & \multicolumn{4}{c}{RxR-CE Val-Unseen} \\ & & TL & NE\(\downarrow\) & OSR\(\uparrow\) & **SR\(\uparrow\)** & **SPL\(\uparrow\)** & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & **NDTW\(\uparrow\)** & **SDTW\(\uparrow\)** \\ \hline
1 & Teleportation & 11.31 & 4.64 & 65.14 & **57.97** & **49.76** & 5.80 & 54.98 & 45.23 & **64.33** & **46.04** \\
2 & PointGoal [12] & 13.35 & 4.87 & 63.40 & 54.86 & 44.05 & 5.89 & 52.77 & 43.50 & 61.03 & 43.79 \\
3 & Heuristic w/o Tryout & 11.99 & 4.71 & 64.71 & 57.21 & 49.15 & 9.22 & 22.61 & 20.06 & 44.30 & 18.64 \\
4 & Heuristic w/ Tryout & - & - & - & - & - & 5.64 & 54.79 & 44.89 & 61.90 & 45.33 \\ \hline \hline \end{tabular}
\end{table} TABLE IX: Comparison of different controllers.
Fig. 4: Comparison of the same episode’s trajectories predicted by different model variants. (Top) The trajectory predicted by ETPNav using local planning. (Bottom) The trajectory predicted by ETPNav using global planning.
Fig. 5: The effect of the agent’s chassis radius on SR.
As shown in Figure 6, the practical sliding-forbidden setup can cause the agent to get stuck in obstacles and lead to navigation failure. For instance, in the absence of Tryout, the agent is unable to proceed forward once its chassis collides with the wall (at steps 6 and 7). This situation persists until the end of navigation (at step 14), where the agent does not succeed in escaping the deadlock, ultimately leading to navigation failure. Conversely, the integration of Tryout control in our model effectively addresses this issue. At step 4, Tryout is triggered upon colliding with the wall, causing the agent to twist and stagger away from the obstacle. This helps the agent navigate around the obstacle, and the navigation is successfully accomplished at step 6.
## 5 Conclusion
In summary, this paper introduces ETPNav, a novel navigation system that leverages topological maps for VLN-CE. We first propose an online mapping method via waypoint self-organization to enable robust long-range planning of an agent. This scheme does not require any prior environmental experience and meets the demands of navigation in realistic scenarios. Then, we systematically examine our topo map's key design choices and empirically show that a concise depth-only design can be optimal for waypoint prediction. Furthermore, we address an often-neglected issue in VLN-CE, obstacle avoidance, with a simple and effective heuristic controller. Extensive experiments demonstrate the effectiveness of the proposed method, yielding more than 10% and 20% absolute improvements over prior state-of-the-art on R2R-CE and RxR-CE benchmarks, respectively. We hope this work can serve as a strong baseline for further research on this challenging task.
|
2303.07286 | Gradient-Descent Based Optimization of Constant Envelope OFDM Waveforms | This paper describes a gradient-descent based optimization algorithm for
synthesizing Constant Envelope Orthogonal Frequency Division Multiplexing
(CE-OFDM) waveforms with low Auto-Correlation Function (ACF) sidelobes in a
specified region of time-delays. The algorithm optimizes the Generalized
Integrated Sidelobe Level (GISL) which controls the mainlobe and sidelobe
structure of the waveform's ACF. The operations of this Gradient-Descent GISL
(GD-GISL) algorithm are FFT-based making it computationally efficient. This
computational efficiency facilitates the design of large dimensional waveform
design problems. Simulations demonstrate the GD-GISL algorithm on CE-OFDM
waveforms employing Phase-Shift Keying (PSK) symbols that take on a continuum
of values (i.e., $M_{\text{PSK}} = \infty$). Results from these simulations show
that the GD-GISL algorithm can indeed reduce ACF sidelobes in a desired region
of time-delays. However, truncating the symbols to finite M-ary alphabets
introduces perturbations to the waveform's instantaneous phase which increases
the waveform's ACF sidelobe levels. | David G. Felton, David A. Hague | 2023-03-13T17:02:58Z | http://arxiv.org/abs/2303.07286v2 | # Gradient-Descent Based Optimization of Constant Envelope OFDM Waveforms
###### Abstract
This paper describes a gradient-descent based optimization algorithm for synthesizing Constant Envelope Orthogonal Frequency Division Multiplexing (CE-OFDM) waveforms with low Auto-Correlation Function (ACF) sidelobes in a specified region of time-delays. The algorithm optimizes the Generalized Integrated Sidelobe Level (GISL) which controls the mainlobe and sidelobe structure of the waveform's ACF. The operations of this Gradient-Descent GISL (GD-GISL) algorithm are FFT-based, making it computationally efficient. This computational efficiency facilitates the design of large dimensional waveform design problems. Simulations demonstrate the GD-GISL algorithm on CE-OFDM waveforms employing Phase-Shift Keying (PSK) symbols that take on a continuum of values (i.e., \(M_{\text{PSK}}=\infty\)). Results from these simulations show that the GD-GISL algorithm can indeed reduce ACF sidelobes in a desired region of time-delays. However, truncating the symbols to finite M-ary alphabets introduces perturbations to the waveform's instantaneous phase which increases the waveform's ACF sidelobe levels.
CE-OFDM, Waveform Design, Gradient-Descent, Dual-Function Radar/Communications
## I Introduction
The Constant-Envelope Orthogonal Frequency Division Multiplexing (CE-OFDM) waveform is a constant envelope analogue of the standard OFDM waveform model. The OFDM modulation is performed in either the instantaneous phase or frequency domain [1, 2] and is essentially a form of Frequency Modulation (FM) which guarantees a constant envelope. This makes the waveform better suited for transmission on real-world radar and communications transmitters than standard OFDM waveforms [3] whose complex envelope can vary substantially [4]. CE-OFDM waveforms have recently been proposed for use in Dual-Function Radar/Communication (DFRC) applications [5, 6, 7] as the encoded communication symbols can also be utilized as a discrete set of parameters that can realize optimized constant modulus radar waveforms [8, 9].
Recent efforts in the literature have studied CE-OFDM as a potential radar waveform and analyzed the structure of the waveform's Ambiguity Function (AF) and Auto/Cross-Correlation Functions (ACF/CCF) [10, 11]. From a waveform design perspective, the OFDM symbols represent a discrete set of parameters that can be optimized to synthesize CE-OFDM waveforms with desirable AF/ACF characteristics. However, to the best of the authors' knowledge, optimization methods to synthesize novel CE-OFDM waveform designs have not yet been developed. This paper introduces a gradient-descent based algorithm that synthesizes CE-OFDM waveforms with low ACF sidelobes in a specified region of time-delays via minimization of a Generalized Integrated Sidelobe Level (GISL) metric. This Gradient-Descent GISL (GD-GISL) algorithm leverages methods developed in [12] that were used to optimize Polyphase-Coded FM (PCFM) waveforms. Since the GD-GISL algorithm's operations are largely comprised of FFTs, it is computationally efficient which facilitates synthesizing large-dimensional waveform design problems.
The capabilities of this GD-GISL algorithm to optimize CE-OFDM waveforms are demonstrated by several illustrative design examples. These examples use CE-OFDM waveforms that employ Phase-Shift Keying (PSK) where the symbols reside on the unit-circle and take on a continuum of values (i.e., \(M_{\text{PSK}}=\infty\)). The results from these design examples show that the CE-OFDM waveform model can indeed be optimized to possess lower ACF sidelobes in a specific region of time-delays than pseudo-randomly generated PSK coefficients. However, truncating the CE-OFDM symbols to finite M-ary alphabets will increase the waveform's ACF sidelobes. The rest of this paper is organized as follows: Section II describes the CE-OFDM waveform model and the design metrics used to assess the waveform's ACF characteristics. Section III describes the GD-GISL algorithm. Section IV evaluates the performance of this algorithm via several illustrative design examples. Finally, Section V concludes the paper.
## II CE-OFDM Waveform Model
A basebanded FM waveform with unit energy is expressed as
\[s\left(t\right)=\frac{\text{rect}\left(t/T\right)}{\sqrt{T}}e^{j\varphi\left( t\right)} \tag{1}\]
where \(\varphi\left(t\right)\) is the waveform's phase modulation function, \(T\) is the waveform's duration, and \(1/\sqrt{T}\) normalizes the signal energy to unity. A CE-OFDM waveform utilizing Phase-Shift Keying (PSK) possesses a phase modulation function that is expressed as [13, 11]
\[\varphi(t)=2\pi h\sum_{\ell=1}^{L}\left|\Gamma_{\ell}\right|\cos\left(\frac{2 \pi\ell t}{T}+\phi_{\ell}\right) \tag{2}\]
where \(L\) is the number of phase modulation sub-carriers, \(\left|\Gamma_{\ell}\right|=1\) and \(\phi_{\ell}\) are the magnitude and phase respectively of
the CE-OFDM symbol associated with the \(\ell^{\text{th}}\) subcarrier, and \(h\) is the modulation index which along with \(L\) determines the waveform's bandwidth [13]. The waveform's corresponding frequency modulation function \(m\left(t\right)\) is expressed as
\[m\left(t\right)=\frac{1}{2\pi}\frac{\partial\varphi\left(t\right)}{\partial t}= -\frac{2\pi h}{T}\sum_{\ell=1}^{L}\ell|\Gamma_{\ell}|\sin\left(\frac{2\pi \ell t}{T}+\phi_{\ell}\right). \tag{3}\]
The CE-OFDM symbols \(\phi_{\ell}\) are utilized as a discrete set of design parameters. Modifying these parameters in an intelligent manner can produce waveforms with low AF/ACF sidelobes. Additionally, the CE-OFDM's phase and frequency modulation functions are expressed as finite Fourier series, which are infinitely differentiable [14]. This property makes these functions smooth and devoid of any transient components, resulting in the vast majority of the CE-OFDM waveform's energy being densely concentrated in a compact band of frequencies. Coupling this spectral compactness property with its natural constant envelope makes the CE-OFDM waveform model well-suited for transmission on real-world radar transmitter devices. The design versatility and the transmitter amenability of the CE-OFDM waveform model make it an attractive choice for a variety of radar applications.
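To make the signal model concrete, here is a minimal numpy sketch of (1)-(2); the function name, the sample count \(M\), and the unit duration are illustrative choices of our own.

```
import numpy as np

def ce_ofdm(phi, h, M=1000, T=1.0):
    """Minimal sketch of eqs. (1)-(2): unit-energy CE-OFDM samples from the
    PSK symbols phi; M and T are illustrative choices (M must satisfy the
    Nyquist criterion for the waveform's occupied bandwidth)."""
    L = len(phi)
    t = np.arange(M) * T / M                    # M samples over [0, T)
    ell = np.arange(1, L + 1)
    # phase modulation function, eq. (2), with |Gamma_ell| = 1
    varphi = 2 * np.pi * h * np.cos(2 * np.pi * np.outer(t, ell) / T + phi).sum(axis=1)
    s = np.exp(1j * varphi)
    return s / np.linalg.norm(s)                # unit energy

rng = np.random.default_rng(0)
s = ce_ofdm(rng.uniform(0, 2 * np.pi, 24), h=0.1856)
```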
This paper assumes a Matched-Filter (MF) receiver is used to process the target's echo signal. The narrowband AF measures the MF response to Doppler shifted versions of the transmit waveform and is expressed as
\[\chi\left(\tau,\nu\right)=\int_{-\infty}^{\infty}s\left(t-\frac{\tau}{2} \right)s^{*}\left(t+\frac{\tau}{2}\right)e^{j2\pi\nu t}dt \tag{4}\]
where \(\nu\) is the Doppler shift. The zero-Doppler cut of the AF, the ACF, provides the range response of the waveform's MF output and is expressed as
\[R\left(\tau\right)=\chi\left(\tau,\nu\right)|_{\nu=0}=\int_{-\infty}^{\infty} s\left(t-\frac{\tau}{2}\right)s^{*}\left(t+\frac{\tau}{2}\right)dt. \tag{5}\]
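Once discretized, (5) reduces to FFT operations via the autocorrelation theorem; a sketch is given below, where the fftshift lag ordering is our own convention.

```
import numpy as np

def acf(s):
    """Discrete ACF of a unit-energy waveform, cf. eq. (5): zero-padding to
    2M - 1 points makes the circular correlation equal the linear one."""
    M = len(s)
    S = np.fft.fft(s, 2 * M - 1)
    r = np.fft.ifft(np.abs(S) ** 2)       # IFFT of the energy spectrum
    return np.fft.fftshift(r)             # lags -(M-1), ..., 0, ..., M-1
```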
There are several metrics that describe the sidelobe structure of a waveform's ACF. Two of the most common metrics are the Peak-to-Sidelobe Level Ratio (PSLR) and the Integrated Sidelobe Level (ISL). The PSLR is expressed as
\[\text{PSLR}=\frac{\underset{\Delta\tau\leq\left|\tau\right|\leq T}{\text{max}}\left\{\left|R\left(\tau\right)\right|^{2}\right\}}{\underset{0\leq\left|\tau\right|\leq\Delta\tau}{\text{max}}\left\{\left|R\left(\tau\right)\right|^{2}\right\}}=\underset{\Delta\tau\leq\left|\tau\right|\leq T}{\text{max}}\{\left|R\left(\tau\right)\right|^{2}\} \tag{6}\]
where \(\Delta\tau\) denotes the first null of the ACF and therefore establishes the mainlobe width of the ACF as \(2\Delta\tau\). Note that the rightmost expression in (6) results from the assumption that the waveform is unit energy and thus the maximum value of \(\left|R\left(\tau\right)\right|^{2}\) is unity which occurs at \(\tau=0\). The ISL is the ratio of the area under the sidelobe region \(A_{\tau}\) of \(\left|R\left(\tau\right)\right|^{2}\) to the area under the mainlobe region \(A_{0}\) of \(\left|R\left(\tau\right)\right|^{2}\) expressed as
\[\text{ISL }=\frac{A_{\tau}}{A_{0}}=\frac{\int_{\Omega_{\tau}}\left|R\left(\tau \right)\right|^{2}d\tau}{\int_{-\Delta\tau}^{\Delta\tau}\left|R\left(\tau \right)\right|^{2}d\tau} \tag{7}\]
where the term \(\Omega_{\tau}\) denotes any sub-region of time-delays of the ACF excluding the mainlobe region \(-\Delta\tau\leq\tau\leq\Delta\tau\). A lower ISL corresponds to an ACF with lower overall sidelobe levels in the region \(\Omega_{\tau}\) and/or a larger area \(A_{0}\) under the mainlobe region which implies that the mainlobe width \(2\Delta\tau\) increases. Note that while reducing the area \(A_{\tau}\) will generally reduce sidelobe levels, it does not always directly translate to a lower PSLR [15].
The Generalized Integrated Sidelobe Level (GISL) [12] generalizes the ISL metric in (7) by evaluating the \(\ell_{p}\)-norm [16, 17] of the sidelobe and mainlobe regions of the ACF expressed as
\[\text{GISL }=\left(\frac{\int_{\Omega_{\tau}}\left|R\left(\tau\right)\right|^{p}d \tau}{\int_{-\Delta\tau}^{\Delta\tau}\left|R\left(\tau\right)\right|^{p}d \tau}\right)^{2/p} \tag{8}\]
where \(p\geq 2\) is an integer. When \(p=2\), the GISL becomes the standard ISL metric. As \(p\rightarrow\infty\), the integrals in (8) approach the infinity norm \(\left\|\cdot\right\|_{\infty}^{2}\), also known as the max-norm, and correspondingly the GISL approaches the PSLR metric (6) [12]. However, from a waveform optimization perspective, the max-norm can produce a discontinuous objective function which prevents the efficient use of gradient-descent based waveform optimization methods. Making \(p\) large but finite results in a smooth objective function which facilitates the use of gradient-descent based methods in the waveform optimization problem. Empirical analysis [12] suggests that values of \(p\geq 6\) or so produce such a PSLR-like metric. For this reason, the GISL is the design metric this paper uses to optimize the ACF sidelobe levels of CE-OFDM waveforms.
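A direct discretization of (8) can be written as follows; the mainlobe mask is specified by a half-width in lag bins and \(\Omega_{\tau}\) defaults to all remaining delays, both of which are our own masking conventions.

```
import numpy as np

def gisl(r, ml_halfwidth, p=20, sidelobe_mask=None):
    """Discretized eq. (8) on ACF samples r with zero delay at the center
    index; Omega_tau defaults to every delay outside the mainlobe."""
    lags = np.arange(len(r)) - len(r) // 2
    ml = np.abs(lags) <= ml_halfwidth                       # mainlobe region
    sl = ~ml if sidelobe_mask is None else sidelobe_mask    # Omega_tau
    return (np.sum(np.abs(r[sl]) ** p) / np.sum(np.abs(r[ml]) ** p)) ** (2 / p)
```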
Figure 1 shows an example CE-OFDM waveform's spectrogram, spectrum, AF, and ACF. The waveform is composed of \(L=24\) sub-carriers that employ PSK symbols \(\phi_{\ell}\) with \(M_{\text{PSK}}=32\) generated from a pseudo-random sequence as described in [11]. The modulation index \(h=0.1856\) is chosen to synthesize a waveform with the same RMS bandwidth [18] as that of a LFM waveform with a Time-Bandwidth Product (TBP) of 200 which can be calculated in closed form as [13]
\[h=\frac{T\Delta f}{2\pi\sqrt{2L^{3}+3L^{2}+L}}. \tag{9}\]
As explained earlier, the CE-OFDM waveform's frequency modulation function is a finite Fourier series and is therefore an infinitely differentiable and smooth function. This smooth FM function is shown in panel (a) of Figure 1. As a result of the smooth FM function, the vast majority of the waveform's energy is densely concentrated in a compact band of frequencies as is shown in panel (b) of Figure 1. As is described in [13], a CE-OFDM waveform utilizing PSK coding will essentially always produce a "Thumbtack-Like" AF shape. The sidelobe pedestal is clearly visible in the ACF of the waveform shown in panel (d). It is the goal of this paper to develop an algorithm that will further reduce the ACF sidelobe levels in a specified region \(\Omega_{\tau}\) of time-delays.
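As a quick arithmetic check of (9) for the Figure 1 parameters, take \(L=24\) and \(T\Delta f=200\):

\[2L^{3}+3L^{2}+L=27648+1728+24=29400,\qquad h=\frac{200}{2\pi\sqrt{29400}}\approx 0.1856,\]

which matches the modulation index quoted above.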
## III The GD-GISL Algorithm
The first step in developing the GD-GISL algorithm is to discretize the waveform signal model and its design metrics. The CE-OFDM waveform's instantaneous phase (2) can be transformed from the amplitude-phase Fourier series representation to the standard real-valued Fourier series as
\[\varphi\left(t\right)=2\pi h\sum_{\ell=1}^{L}\tilde{\alpha}_{\ell}\cos\left( \frac{2\pi\ell t}{T}\right)+\tilde{\beta}_{\ell}\sin\left(\frac{2\pi\ell t}{T} \right). \tag{10}\]
The Fourier coefficients \(\tilde{\alpha}_{\ell}\) and \(\tilde{\beta}_{\ell}\) lie on the unit circle such that \(|\Gamma_{\ell}|=\sqrt{\tilde{\alpha}_{\ell}^{2}+\tilde{\beta}_{\ell}^{2}}=1\) and are derived from \(\phi_{\ell}\)
\[\tilde{\alpha}_{\ell} =|\Gamma_{\ell}|\cos\phi_{\ell}, \tag{11}\] \[\tilde{\beta}_{\ell} =|\Gamma_{\ell}|\sin\phi_{\ell}. \tag{12}\]
From here, (10) can be written as a linear sum using discrete variables as
\[\varphi=\begin{bmatrix}\mathbf{B}_{\mathrm{c}}&\mathbf{B}_{\mathrm{s}}\end{bmatrix} \begin{bmatrix}\cos\left(\mathbf{\phi}\right)\\ \sin\left(\mathbf{\phi}\right)\end{bmatrix} \tag{13}\]
where the phase values are grouped into a \(L\times 1\) vector \(\mathbf{\phi}=\left[\phi_{1},\phi_{2},\ldots,\phi_{L}\right]^{\mathrm{T}}\) and the \(M\times L\) basis matrices \(\mathbf{B}_{\mathrm{c}}\) and \(\mathbf{B}_{\mathrm{s}}\) contain cosine and sine harmonics respectively such that the \(\ell^{\text{th}}\) columns
\[\mathbf{b}_{\mathrm{c},\ell} =\cos\left(\frac{2\pi\ell t}{T}\right), \tag{14}\] \[\mathbf{b}_{\mathrm{s},\ell} =\sin\left(\frac{2\pi\ell t}{T}\right) \tag{15}\]
are sampled at some sampling rate \(f_{s}\) which satisfies the Nyquist criterion.
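In code, the basis matrices of (14)-(15) and the linear phase map of (13) look as follows; note that we keep the \(2\pi h\) scaling of (10) explicit rather than folding it into the bases.

```
import numpy as np

def fourier_bases(L, M, T=1.0):
    """Cosine/sine basis matrices B_c, B_s of eqs. (14)-(15) on M time samples."""
    t = np.arange(M) * T / M
    arg = 2 * np.pi * np.outer(t, np.arange(1, L + 1)) / T
    return np.cos(arg), np.sin(arg)             # each M x L

Bc, Bs = fourier_bases(L=24, M=1000)            # f_s = 5 Delta f for TBP 200
phi = np.zeros(24)
varphi = 2 * np.pi * 0.1856 * (Bc @ np.cos(phi) + Bs @ np.sin(phi))   # eq. (13)
```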
From here, the development of the gradient-based GISL algorithm largely follows the description given in [12]. The GISL metric can be expressed in terms of the discretized ACF which is expressed as
\[\mathbf{r}=\mathbf{A}^{\mathrm{H}}|\mathbf{A}\bar{\mathbf{s}}|^{2} \tag{16}\]
where \(\mathbf{r}\in\mathbb{C}^{(2M-1)}\) contains discretized samples of the ACF, \(\bar{\mathbf{s}}\in\mathbb{C}^{(2M-1)}\) is a discretized and zero-padded version of \(\mathbf{s}\), and \(\mathbf{A}\) and \(\mathbf{A}^{\mathrm{H}}\) are \((2M-1)\times(2M-1)\) Discrete Fourier Transform (DFT) and Inverse DFT matrices respectively. The GISL metric is then expressed as the cost function [12]
\[J_{p}=\frac{\|\mathbf{w}_{\text{SL}}\circ\mathbf{r}\|_{p}^{2}}{\|\mathbf{w}_{ \text{ML}}\odot\mathbf{r}\|_{p}^{2}} \tag{17}\]
where the vectors \(\mathbf{w}_{\text{SL}}\) and \(\mathbf{w}_{\text{ML}}\in\mathbb{R}^{(2M-1)}\) are non-zero in the extent of the sidelobe and mainlobe regions respectively. In setting the cost function \(J_{p}\) to the GISL metric, the optimization problem can be formally stated as
\[\underset{\mathbf{\phi}}{\text{min}}\ J_{p}. \tag{18}\]
The GISL is an \(L\)-dimensional, highly non-convex objective function over the CE-OFDM parameter space \(\phi_{\ell}\), of order \(2p\) in each dimension. Therefore, convergence to the global minimum is most certainly not guaranteed. Non-convex functions require a non-linear optimization routine; here we exploit the continuous nature of the CE-OFDM waveform and its modulation function to employ gradient-descent optimization. Gradient-descent is an iterative approach which takes some step \(\mu\) in the direction of steepest descent \(q_{i}\)
\[\mathbf{\phi}_{i+1} =\mathbf{\phi}_{i}+\mu q_{i} \tag{19}\] \[\mathbf{q}_{i} =-\nabla_{\mathbf{\phi}_{i}}J_{p} \tag{20}\]
where \(\nabla_{\mathbf{\phi}}\) is the gradient operator. The gradient of (18), derived in the Appendix, is expressed as
\[\nabla_{\mathbf{\phi}}J_{p}=4J_{p}\bar{\mathbf{D}}^{\mathrm{T}}\Im\left\{\bar{ \mathbf{s}}^{\ast}\odot\mathbf{A}^{\mathrm{H}}\left[\left(\mathbf{A}\bar{ \mathbf{s}}\right)\odot\mathbf{P}\right]\right\} \tag{21}\]
where
\[\mathbf{P}=\Re\left\{\mathbf{A}\left(|\mathbf{r}|^{p-2}\odot\mathbf{r}\odot \left[\frac{\mathbf{w}_{\text{SL}}}{\mathbf{w}_{\text{SL}}^{\mathrm{T}}| \mathbf{r}|^{p}}-\frac{\mathbf{w}_{\text{ML}}}{\mathbf{w}_{\text{ML}}^{\mathrm{ T}}|\mathbf{r}|^{p}}\right]\right)\right\}. \tag{22}\]
Performing (19) and (20) iteratively until the Euclidean length of \(q_{i}\) is below some threshold \(g_{\text{min}}\) ensures that \(J_{p}\) is very near a local minima. Alternatively, the routine may continue until it reaches a predetermined number of iterations \(I_{\text{max}}\).
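Assembling (16)-(17) with (21)-(22), and letting numpy's fft/ifft stand in for \(\mathbf{A}\) and \(\mathbf{A}^{\mathrm{H}}\), gives the sketch below; it reuses fourier_bases from the previous sketch, takes the weight vectors in FFT lag order (index 0 corresponding to zero delay), and keeps the \(2\pi h\) factor of (10) explicit in the chain rule.

```
import numpy as np

def gisl_and_grad(phi, h, M, w_sl, w_ml, p=20, T=1.0):
    """Sketch of eqs. (16)-(17) and (21)-(22); reuses fourier_bases from the
    sketch above. Weight vectors are in FFT lag order (index 0 = zero delay)."""
    L, N = len(phi), 2 * M - 1
    Bc, Bs = fourier_bases(L, M, T)
    varphi = 2 * np.pi * h * (Bc @ np.cos(phi) + Bs @ np.sin(phi))
    s = np.exp(1j * varphi) / np.sqrt(M)                 # unit energy
    s_pad = np.concatenate([s, np.zeros(N - M)])         # \bar{s}
    u = np.fft.fft(s_pad)                                # A \bar{s}
    r = np.fft.ifft(np.abs(u) ** 2)                      # eq. (16)
    rp = np.abs(r) ** p
    a, b = w_sl @ rp, w_ml @ rp
    J = (a / b) ** (2.0 / p)                             # eq. (17)
    c = np.abs(r) ** (p - 2) * r * (w_sl / a - w_ml / b)
    P = np.real(np.fft.fft(c))                           # eq. (22)
    inner = np.fft.ifft(u * P)[:M]                       # A^H[(A s_bar) o P]
    D = -Bc * np.sin(phi) + Bs * np.cos(phi)             # eq. (29)
    # eq. (21), with the 2*pi*h factor of (10) kept explicit in the chain rule
    grad = 4 * J * 2 * np.pi * h * (D.T @ np.imag(np.conj(s) * inner))
    return J, grad
```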
We employ heavy-ball gradient-descent which includes weighted versions of the previous search-directions with the current gradient. This has been shown to converge quickly for these types of problems by dampening rapid transitions of the gradient thereby enforcing a smooth path to the minima. The search direction is altered by inclusion of previous gradients as
\[\mathbf{q}_{i}=-\nabla_{\mathbf{\phi}_{i}}J_{p}+\beta\mathbf{q}_{i-1} \tag{23}\]
where \(\beta\in[0,1]\). Since this method does not always ensure a descent, if in fact the current search direction is an ascent (i.e., the projection of the gradient onto the current search direction is positive), the current search direction is reset to the negative of the current gradient.
\[\text{if}\ \mathbf{q}_{i}^{\mathrm{T}}(\nabla_{\mathbf{\phi}_{i}}J_{p})>0,\ \text{then}\ \mathbf{q}_{i}=-\nabla_{\mathbf{\phi}_{i}}J_{p}. \tag{24}\]
Fig. 1: Spectrogram (a), spectrum (b), AF (c), and ACF (d) of a CE-OFDM waveform with a TBP of 200 (\(h=0.1856\)) and \(L=24\) randomly generated PSK coefficients where \(M_{\text{PSK}}=32\). Also shown in panel (b) is the spectrum of a LFM waveform of the same TBP. The CE-OFDM waveform possesses a spectrum that is densely concentrated in a compact band of frequencies and possesses a “Thumbtack-Like” AF shape.
Once the search direction is established, a simple backtracking method is used to calculate the step size \(\mu\) for the line search that satisfies sufficient decrease via the Armijo condition [19]. As mentioned earlier, to maintain continuity of the GISL gradient and to avoid diminishing returns thereafter, the parameter \(p\) should take on values much lower than infinity but larger than roughly 6 [12]. These values for \(p\) have been shown to consistently generate waveforms with a flat sidelobe response. On the other hand, letting \(p=2\) results in waveforms whose ACFs possess more peaks and valleys, but overall less area in the specified sidelobe region \(\Omega_{\tau}\). The steps of the GD-GISL algorithm are listed in Algorithm 1. Since the algorithm makes extensive use of FFTs in computing the GISL metric (21), it is substantially more computationally efficient compared to a brute-force numerical computation.
```
0: Initialize \(\mathbf{B}\), \(\phi^{(0)}\), \(P\), \(L\), \(\mathbf{q}_{0}=\mathbf{0}_{\mathrm{N}\times 1}\), \(\beta\), \(\mu\), \(\rho_{\text{up}}\), \(\rho_{\text{down}}\), \(c\), and set \(i=1\).
0: Final CE-OFDM coefficient vector \(\boldsymbol{\phi}\) with refined ACF properties that locally solves the criteria in (18)
1: Evaluate \(J_{p}\left(\boldsymbol{\phi}_{i-1}\right)\) and \(\nabla_{\boldsymbol{\phi}_{i}}J_{p}\left(\boldsymbol{\phi}_{i-1}\right)\) via (17) and (21).
2:\(\mathbf{q}_{i}=-\nabla_{\boldsymbol{\phi}_{i}}J_{p}+\beta\mathbf{q}_{i-1}\)
3:If\(\left(\nabla_{\boldsymbol{\phi}_{i}}J_{p}\left(\boldsymbol{\phi}_{i-1}\right) \right)^{\mathrm{T}}\ \mathbf{q}_{i}\geq 0\)
4:\(\mathbf{q}_{i}=-\nabla_{\boldsymbol{\phi}_{i}}J_{p}\left(\boldsymbol{\phi}_{i-1}\right)\)
5:End(If)
6:While\(J_{p}(\boldsymbol{\phi}_{i}+\mu\mathbf{q}_{i})\)\(>\)\(J_{p}(\boldsymbol{\phi}_{i-1})+c\mu\left(\nabla_{\boldsymbol{\phi}_{i}}J_{p}\left( \boldsymbol{\phi}_{i-1}\right)\right)^{\mathrm{T}}\ \mathbf{q}_{i}\)
7:\(\mu=\rho_{\text{down}}\mu\)
8:End(While)
9:\(\boldsymbol{\phi}_{i}=\boldsymbol{\phi}_{i-1}+\mu\mathbf{q}_{i},\ \ \ \mu=\rho_{\text{up}}\mu\)
10:\(i=i+1\)
11:Repeat steps 1-9 until \(i=I\) or \(\left\|\nabla_{\boldsymbol{\phi}}J_{p}\left(\boldsymbol{\phi}_{i}\right) \right\|\leq g_{\text{min}}\)
```
**Algorithm 1** The GD-GISL Algorithm
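For concreteness, a compact driver for Algorithm 1 built on gisl_and_grad from the sketch above might look as follows; the constants \(\beta\), \(\rho_{\text{up}}\), \(\rho_{\text{down}}\), \(c\), and the mainlobe mask half-width are illustrative assumptions, not the authors' settings.

```
import numpy as np
# Illustrative driver for Algorithm 1; requires gisl_and_grad and
# fourier_bases from the sketches in Section III.

def gd_gisl(phi0, h, M, w_sl, w_ml, p=20, iters=100, beta=0.5, mu=1.0,
            rho_up=1.5, rho_down=0.5, c_armijo=1e-4, g_min=1e-6):
    phi, q = phi0.copy(), np.zeros_like(phi0)
    J, g = gisl_and_grad(phi, h, M, w_sl, w_ml, p)
    for _ in range(iters):
        q = -g + beta * q                       # heavy-ball direction, eq. (23)
        if q @ g > 0:                           # ascent direction? reset, eq. (24)
            q = -g
        while mu > 1e-12:                       # Armijo backtracking line search
            J_new, g_new = gisl_and_grad(phi + mu * q, h, M, w_sl, w_ml, p)
            if J_new <= J + c_armijo * mu * (g @ q):
                break
            mu *= rho_down
        phi, J, g, mu = phi + mu * q, J_new, g_new, mu * rho_up
        if np.linalg.norm(g) <= g_min:
            break
    return phi

L, M = 24, 1000                                 # f_s = 5 Delta f, TBP = 200
N = 2 * M - 1
lag = np.minimum(np.arange(N), N - np.arange(N))    # |delay| in bins, FFT order
w_ml = (lag <= 5).astype(float)                 # ~ one inverse bandwidth (assumed)
w_sl = 1.0 - w_ml                               # Omega_tau: all other delays
rng = np.random.default_rng(0)
phi_opt = gd_gisl(rng.uniform(0, 2 * np.pi, L), h=0.1856, M=M,
                  w_sl=w_sl, w_ml=w_ml)
```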
## IV Several Illustrative Design Examples
This section demonstrates the GD-GISL algorithm using two waveform optimization design examples. Both examples optimize the GISL (\(p=20\)) of the waveform from Figure 1 over different sub-regions \(\Omega_{\tau}\) of the waveform's ACF. The waveform time-series are sampled at a rate \(f_{s}=5\Delta f\). The algorithm was set to run for a max number of iterations \(I=100\). Figure 2 shows the ACF, spectrum, and ACF zoomed in at the origin of the CE-OFDM waveform from Figure 1 and the resulting optimized CE-OFDM waveform whose ACF sidelobes were minimized over all time-delays \(\tau\) excluding the mainlobe region. The initial waveform's GISL for \(p=20\) and PSLR values were -14.73 dB and -15.21 dB respectively. The optimized waveform's GISL was -22.14 dB corresponding to a PSLR of -20.72 dB. It is also important to note that the mainlobe width of the optimized waveform stayed the same as that of its initial seed waveform. As explained in [13], the CE-OFDM's RMS bandwidth, which controls the ACF mainlobe width, remains the same value for fixed \(L\) and \(h\) regardless of the PSK values \(\phi_{\ell}\). This obviates the need to introduce constraints on the RMS bandwidth to keep the waveform's ACF mainlobe width largely fixed, as was done for a closely related waveform model in [20].
Figure 3 shows the result of optimizing the GISL of the CE-OFDM waveform from Figure 1 over a sub-region of time-delays \(\Omega_{\tau}\in\Delta\tau\leq|\tau|\leq 0.1T\). The initial waveform's GISL and PSLR over this sub-region of time-delays was -16.8 dB and -17.51 dB respectively. The resulting optimized waveform's GISL and PSLR values were -30.34 dB and -31.39 dB respectively, a substantial reduction of sidelobes in the specified region \(\Omega_{\tau}\). Again, the optimized waveform's mainlobe width stays fixed as does the waveform's spectral extent.
One final important point is that the GD-GISL optimization routine assumes the CE-OFDM waveforms employ PSK with infinite granularity (i.e., \(M_{\text{PSK}}=\infty\)). Practical implementations of CE-OFDM waveforms will almost certainly employ a finite M-ary alphabet for the PSK symbols. Truncating these continuous symbols to an M-ary representation will introduce perturbations in the waveform's instantaneous phase. These perturbations are likely to degrade the desirably low ACF sidelobes of the \(M_{\text{PSK}}=\infty\) optimized waveform designs. This issue is illustrated in Figure 4 where the PSK symbols from the optimized waveform in Figure 3 are approximated by finite M-ary alphabets. As can be seen from the figure, as \(M_{\text{PSK}}\) is reduced, the degree of perturbations in the PSK symbols is greater and the ACF sidelobes increase in the region of time-delays \(\Omega_{\tau}\) over which the optimization routine was run. There are two ways to mitigate this issue. The first is to utilize a large \(M_{\text{PSK}}\) value to reduce the degree of perturbations in the implemented waveform. The second is to modify the GD-GISL algorithm to operate on finite M-ary alphabets. This approach may produce more favorable optimal designs with
Fig. 2: ACFs (a), Spectra (b), and zoomed ACFs (c) of initial and optimized CE-OFDM waveforms. The CE-OFDM waveform resulting from the GISL optimization algorithm possesses clearly lower ACF sidelobes while also maintaining the same mainlobe width as the initial waveform.
fewer perturbations than the waveforms shown in Figure 4. This will be a topic of future investigation.
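The truncation studied in Figure 4 amounts to snapping each symbol onto the nearest point of the M-ary alphabet; a one-line sketch:

```
import numpy as np

def quantize_psk(phi, m_psk):
    """Snap continuous symbols to the nearest point of an M-ary PSK alphabet
    {2*pi*k / m_psk : k = 0, ..., m_psk - 1}, as studied in Figure 4."""
    step = 2 * np.pi / m_psk
    return (step * np.round(phi / step)) % (2 * np.pi)
```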
## V Conclusion
The GD-GISL minimization algorithm synthesizes CE-OFDM waveforms with low ACF sidelobes in a specified region of time-delays. The algorithm leverages methods developed in [12] which exploits FFTs in the majority of its operations making it computationally feasible to optimize CE-OFDM waveforms with a large number of sub-carriers \(L\). Results show that the algorithm is indeed capable of reducing CE-OFDM ACF sidelobes in a user-specified sub-region of time-delays while also maintaining the waveform's ACF main-lobe width. Truncating these PSK symbols to M-ary alphabets introduces perturbations to the optimal design that results in increased ACF sidelobes. There are a number of avenues to pursue for future work. The first obvious one is to modify the algorithm to work with finite M-ary alphabets for PSK and other digital modulation techniques such as Quadrature Amplitude Modulation (QAM). The algorithm should also be generalizable to generating families of waveforms with desirable ACF and Cross-Correlation Function (CCF) properties such as the efforts of [16] and extendable to minimizing \(\ell_{p}\)-norms on the AF of the waveform as well. Lastly, these optimization algorithms should also be applicable to the Multi-Tone Sinusoidal FM (MTSFM) waveform [20] of which the CE-OFDM waveform is a special case.
## Appendix
We define the cost metric as the GISL in (17) as
\[J_{p}=\frac{\|\mathbf{w}_{\mathbf{SL}}\odot\mathbf{r}\|_{p}^{2}}{\|\mathbf{ w}_{\mathbf{ML}}\odot\mathbf{r}\|_{p}^{2}}=\left(\frac{\mathbf{w}_{\mathbf{SL}}^{ \mathsf{T}}|\mathbf{r}|^{p}}{\mathbf{w}_{\mathbf{ML}}^{\mathsf{T}}|\mathbf{r} |^{p}}\right)^{2/p}. \tag{25}\]
The Wirtinger definition of a complex derivative extended to vector calculus is [21]
\[\nabla_{\mathbf{x}}=\begin{cases}\left(\frac{\partial\mathbf{y}}{\partial \mathbf{x}}\right)^{\mathsf{T}}\nabla_{\mathbf{y}}+\left(\frac{\partial \mathbf{y}^{*}}{\partial\mathbf{x}}\right)^{\mathsf{T}}\nabla_{\mathbf{y}^{*}} &\mathbf{y}\in\mathbb{C}\\ \left(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\right)^{\mathsf{T}}\nabla_{ \mathbf{y}}&\mathbf{y}\in\mathbb{R}\end{cases} \tag{26}\]
where for an arbitrary \(\{\mathbf{y}(\mathbf{x}):n_{x}\to n_{y}\}\) transformation from a \(n_{x}\) to a \(n_{y}\) dimensional space,
\[\frac{\partial\mathbf{y}}{\partial\mathbf{x}}=\left[\nabla_{\mathbf{x}}y_{1} \quad\nabla_{\mathbf{x}}y_{2}\quad\ldots\quad\nabla_{\mathbf{x}}y_{n_{y}} \right]^{\mathsf{T}} \tag{27}\]
is the \(n_{y}\)\(\times n_{x}\) Jacobian matrix.
We begin with the real to complex mapping from \(\boldsymbol{\phi}\) to \(\mathbf{\bar{s}}^{*}\)
\[\begin{split}\nabla_{\boldsymbol{\phi}}J_{p}&= \left(\frac{\partial\mathbf{\bar{s}}^{*}}{\partial\boldsymbol{\phi}}\right)^{ \mathsf{T}}\nabla_{\mathbf{\bar{s}}^{*}}J_{p}+\left(\frac{\partial\mathbf{ \bar{s}}}{\partial\boldsymbol{\phi}}\right)^{\mathsf{T}}\nabla_{\mathbf{\bar{s }}}J_{p}\\ &=-j\mathbf{\bar{D}}^{\mathsf{T}}\left(\mathbf{\bar{s}}^{*}\odot \nabla_{\mathbf{\bar{s}}^{*}}J_{p}\right)+j\mathbf{\bar{D}}^{\mathsf{T}} \left(\mathbf{\bar{s}}\odot\nabla_{\mathbf{\bar{s}}}J_{p}\right)\\ &=2\mathbf{\bar{D}}^{\mathsf{T}}\Im\{\mathbf{\bar{s}}^{*}\odot \nabla_{\mathbf{\bar{s}}^{*}}J_{p}\}\end{split} \tag{28}\]
where the matrix
\[\mathbf{\bar{D}}=-\mathbf{\bar{B}}_{c}\text{diag}\left\{\sin\left(\boldsymbol {\phi}\right)\right\}+\mathbf{\bar{B}}_{s}\text{diag}\left\{\cos\left( \boldsymbol{\phi}\right)\right\} \tag{29}\]
results from the chain rule, and where \(\text{diag}\left\{\bullet\right\}\) places the elements of the operand along the diagonal of a square matrix such that the \(\ell\)th column is
\[\mathbf{\bar{d}}_{\ell}=-\mathbf{\bar{b}}_{c,\ell}\sin\left(\phi_{\ell}\right) +\mathbf{\bar{b}}_{s,\ell}\cos\left(\phi_{\ell}\right) \tag{30}\]
Note that \(\mathbf{\bar{B}}_{c}\), \(\mathbf{\bar{B}}_{s}\), and \(\mathbf{\bar{D}}\) are zero padded such that their dimensionality is now \((2M-1)\)\(\times L\). Next, we map from \(\mathbf{\bar{s}}^{*}\) to \((\mathbf{A}\mathbf{\bar{s}})^{*}\)
\[\nabla_{\mathbf{\bar{s}}^{*}}J_{p} =\left(\frac{\partial(\mathbf{A}\mathbf{\bar{s}})^{*}}{\partial \mathbf{\bar{s}}^{*}}\right)^{\mathsf{T}}\nabla_{(\mathbf{A}\mathbf{\bar{s}} )^{*}}J_{p}+\left(\frac{\partial(\mathbf{A}\mathbf{\bar{s}})}{\partial \mathbf{\bar{s}}^{*}}\right)^{\mathsf{T}}\nabla_{(\mathbf{A}\mathbf{\bar{s}} )}J_{p}\] \[=\mathbf{A}^{\mathsf{H}}\nabla_{(\mathbf{A}\mathbf{\bar{s}})^{*}}J _{p} \tag{31}\]
Fig. 4: ACF of the CE-OFDM in Figure 3 and corresponding ACFs of the same waveform using finite M-ary alphabets to represent the PSK symbols. Lower \(M_{\text{PSK}}\) introduces stronger perturbations to the original waveform’s PSK symbols. This in turn degrades the original waveform’s desirably low ACF sidelobe levels.
Fig. 3: ACFs (a), Spectra (b), and zoomed ACFs (c) of initial and optimized CE-OFDM waveforms. The CE-OFDM waveform resulting from the GISL optimization algorithm possesses clearly lower ACF sidelobes in the specified sub-region of time-delays while also maintaining the same mainlobe width.
where the second term simplifies to 0 since \(\overline{\mathbf{s}}\) and \(\overline{\mathbf{s}}^{*}\) are considered independent variables. Next, mapping from \((\mathbf{A}\bar{\mathbf{s}})^{*}\) to \(|\mathbf{A}\bar{\mathbf{s}}|^{2}\) yields
\[\nabla_{(\mathbf{A}\bar{\mathbf{s}})^{*}}J_{p} =\left(\frac{\partial|\mathbf{A}\bar{\mathbf{s}}|^{2}}{\partial( \mathbf{A}\bar{\mathbf{s}})^{*}}\right)^{\mathsf{T}}\nabla_{|\mathbf{A}\bar{ \mathbf{s}}|^{2}}J_{p} \tag{32}\] \[=\text{diag}\left\{\mathbf{A}\mathbf{s}\right\}\nabla_{|\mathbf{ A}\bar{\mathbf{s}}|^{2}}J_{p}\] \[=(\mathbf{A}\bar{\mathbf{s}})\odot\nabla_{|\mathbf{A}\bar{ \mathbf{s}}|^{2}}J_{p}\]
Then, since \(\mathbf{r}\) is a function of \(|\mathbf{A}\bar{\mathbf{s}}|^{2}\) via (16),
\[\nabla_{|\mathbf{A}\bar{\mathbf{s}}|^{2}}J_{p} =\left(\frac{\partial\mathbf{r}}{\partial|\mathbf{A}\bar{ \mathbf{s}}|^{2}}\right)^{\mathsf{T}}\nabla_{\mathbf{r}}J_{p}+\left(\frac{ \partial\mathbf{r}^{*}}{\partial|\mathbf{A}\bar{\mathbf{s}}|^{2}}\right)^{ \mathsf{T}}\nabla_{\mathbf{r}^{*}}J_{p} \tag{33}\] \[=\mathbf{A}^{*}\nabla_{\mathbf{r}}J_{p}+\mathbf{A}\nabla_{ \mathbf{r}^{*}}J_{p}\] \[=2\Re\left\{\mathbf{A}\nabla_{\mathbf{r}^{*}}J_{p}\right\}\]
And, mapping from \(\mathbf{r}^{*}\) to \(|\mathbf{r}|^{p}\)
\[\nabla_{\mathbf{r}^{*}}J_{p} =\left(\frac{\partial|\mathbf{r}|^{p}}{\partial\mathbf{r}^{*}} \right)^{\mathsf{T}}\nabla_{|\mathbf{r}|^{p}}J_{p} \tag{34}\] \[=\frac{p}{2}\text{diag}\left\{|\mathbf{r}|^{p-2}\odot\mathbf{r} \right\}\nabla_{|\mathbf{r}|^{p}}J_{p}\] \[=\frac{p}{2}|\mathbf{r}|^{p-2}\odot\mathbf{r}\odot\nabla_{| \mathbf{r}|^{p}}J_{p}\]
And, since \(J_{p}\) is directly a function of \(|\mathbf{r}|^{p}\), we solve for \(\nabla_{|\mathbf{r}|^{p}}J_{p}\) by use of the quotient rule followed by chain rule
\[\nabla_{|\mathbf{r}|^{p}}J_{p} =\frac{2}{p}\left(\frac{\mathbf{w}_{\text{SL}}^{\mathsf{T}}| \mathbf{r}|^{p}}{\mathbf{w}_{\text{ML}}^{\mathsf{T}}|\mathbf{r}|^{p}}\right)^ {2/p-1} \tag{35}\] \[\frac{\left(\mathbf{w}_{\text{ML}}^{\mathsf{T}}|\mathbf{r}|^{p} \right)\frac{\partial\left(\mathbf{w}_{\text{SL}}^{\mathsf{T}}|\mathbf{r}|^{p }\right)}{\partial|\mathbf{r}|^{p}}-\left(\mathbf{w}_{\text{SL}}^{\mathsf{T}}| \mathbf{r}|^{p}\right)\frac{\partial\left(\mathbf{w}_{\text{ML}}^{\mathsf{T}} |\mathbf{r}|^{p}\right)}{\partial|\mathbf{r}|^{p}}}\] \[=\frac{2}{p}J_{p}\left(\frac{\mathbf{w}_{\text{ML}}^{\mathsf{T}}| \mathbf{r}|^{p}}{\mathbf{w}_{\text{SL}}^{\mathsf{T}}|\mathbf{r}|^{p}}\right) \frac{\left(\mathbf{w}_{\text{ML}}^{\mathsf{T}}|\mathbf{r}|^{p}\right)\mathbf{ w}_{\text{SL}}-\left(\mathbf{w}_{\text{SL}}^{\mathsf{T}}|\mathbf{r}|^{p}\right) \mathbf{w}_{\text{ML}}}{\left(\mathbf{w}_{\text{ML}}^{\mathsf{T}}|\mathbf{r}|^ {p}\right)^{2}}\] \[=\frac{2}{p}J_{p}\left[\frac{\mathbf{w}_{\text{SL}}}{\mathbf{w}_{ \text{SL}}^{\mathsf{T}}|\mathbf{r}|^{p}}-\frac{\mathbf{w}_{\text{ML}}}{ \mathbf{w}_{\text{ML}}^{\mathsf{T}}|\mathbf{r}|^{p}}\right]\]
Finally, using (32)-(35) and inserting (31) into (28) produces the final result
\[\nabla_{\boldsymbol{\phi}}J_{p} =2\bar{\mathbf{D}}^{\mathsf{T}}\Im\{\bar{\mathbf{s}}^{*}\odot \nabla_{\mathbf{s}^{*}}J_{p}\} \tag{36}\] \[=2\bar{\mathbf{D}}^{\mathsf{T}}\Im\{\bar{\mathbf{s}}^{*}\odot \mathbf{A}^{\mathsf{H}}\nabla_{(\mathbf{A}\bar{\mathbf{s}})^{*}}J_{p}\}\] \[=2\bar{\mathbf{D}}^{\mathsf{T}}\Im\{\bar{\mathbf{s}}^{*}\odot \mathbf{A}^{\mathsf{H}}\left[(\mathbf{A}\bar{\mathbf{s}})\odot\nabla_{| \mathbf{A}\bar{\mathbf{s}}|^{2}}J_{p}\right]\}\] \[=2\bar{\mathbf{D}}^{\mathsf{T}}\Im\{\bar{\mathbf{s}}^{*}\odot \mathbf{A}^{\mathsf{H}}\left[(\mathbf{A}\bar{\mathbf{s}})\odot 2\Re\left\{\mathbf{A}\nabla_{\mathbf{r}^{*}}J_{p}\right\}\right]\}\] \[=4J_{p}\bar{\mathbf{D}}^{\mathsf{T}}\Im\{\bar{\mathbf{s}}^{*} \odot\mathbf{A}^{\mathsf{H}}\left[(\mathbf{A}\bar{\mathbf{s}})\odot\mathbf{P} \right]\}\]
where
\[\mathbf{P}=\Re\left\{\mathbf{A}\left(|\mathbf{r}|^{p-2}\odot\mathbf{r}\odot \left[\frac{\mathbf{w}_{\text{SL}}}{\mathbf{w}_{\text{SL}}^{\mathsf{T}}| \mathbf{r}|^{p}}-\frac{\mathbf{w}_{\text{ML}}}{\mathbf{w}_{\text{ML}}^{\mathsf{ T}}|\mathbf{r}|^{p}}\right]\right)\right\}. \tag{37}\]
Note that a function is said to be conjugate symmetric if and only if its Fourier transform is real valued [22]. Under the assumption that \(\mathbf{w}_{\text{SL}}\) and \(\mathbf{w}_{\text{ML}}\) are defined to be symmetric about 0 delay, the \(\left(|\mathbf{r}|^{p-2}\odot\mathbf{r}\odot\left[\frac{\mathbf{w}_{\text{SL}}}{\mathbf{w}_{\text{SL}}^{\mathsf{T}}|\mathbf{r}|^{p}}-\frac{\mathbf{w}_{\text{ML}}}{\mathbf{w}_{\text{ML}}^{\mathsf{T}}|\mathbf{r}|^{p}}\right]\right)\) term exhibits conjugate symmetry since, by definition, a waveform's ACF is conjugate symmetric. Additionally, since we are applying a DFT matrix to a conjugate symmetric vector, the result is purely real and so the \(\Re\{\bullet\}\) operator can be discarded.
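The derivation above can be spot-checked numerically: with fft/ifft standing in for \(\mathbf{A}\) and \(\mathbf{A}^{\mathrm{H}}\), the analytic gradient (21) should agree with central finite differences of (17). A sketch reusing gisl_and_grad from Section III (sizes and \(p\) are arbitrary test values):

```
import numpy as np
# Requires gisl_and_grad and fourier_bases from the sketches in Section III.
rng = np.random.default_rng(1)
L, M, p = 8, 64, 8
N = 2 * M - 1
lag = np.minimum(np.arange(N), N - np.arange(N))
w_ml = (lag <= 3).astype(float)
w_sl = 1.0 - w_ml
phi = rng.uniform(0, 2 * np.pi, L)
J, g = gisl_and_grad(phi, 0.2, M, w_sl, w_ml, p)
eps = 1e-6
for ell in range(3):                          # a few coordinates suffice
    e = np.zeros(L); e[ell] = eps
    Jp, _ = gisl_and_grad(phi + e, 0.2, M, w_sl, w_ml, p)
    Jm, _ = gisl_and_grad(phi - e, 0.2, M, w_sl, w_ml, p)
    print(g[ell], (Jp - Jm) / (2 * eps))      # analytic vs numerical
```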
|
2308.14836 | Data fusion using weakly aligned sources | We introduce a new data fusion method that utilizes multiple data sources to
estimate a smooth, finite-dimensional parameter. Most existing methods only
make use of fully aligned data sources that share common conditional
distributions of one or more variables of interest. However, in many settings,
the scarcity of fully aligned sources can make existing methods require unduly
large sample sizes to be useful. Our approach enables the incorporation of
weakly aligned data sources that are not perfectly aligned, provided their
degree of misalignment can be characterized by a prespecified density ratio
model. We describe gains in efficiency and provide a general means to construct
estimators achieving these gains. We illustrate our results by fusing data from
two harmonized HIV monoclonal antibody prevention efficacy trials to study how
a neutralizing antibody biomarker associates with HIV genotype. | Sijia Li, Peter B. Gilbert, Alex Luedtke | 2023-08-28T18:47:14Z | http://arxiv.org/abs/2308.14836v1 | # Data fusion using weakly aligned sources
###### Abstract
We introduce a new data fusion method that utilizes multiple data sources to estimate a smooth, finite-dimensional parameter. Most existing methods only make use of fully aligned data sources that share common conditional distributions of one or more variables of interest. However, in many settings, the scarcity of fully aligned sources can make existing methods require unduly large sample sizes to be useful. Our approach enables the incorporation of weakly aligned data sources that are not perfectly aligned, provided their degree of misalignment can be characterized by a prespecified density ratio model. We describe gains in efficiency and provide a general means to construct estimators achieving these gains. We illustrate our results by fusing data from two harmonized HIV monoclonal antibody prevention efficacy trials to study how a neutralizing antibody biomarker associates with HIV genotype.
## 1 Introduction
The increasing availability of data has led to a growing interest in data fusion, which involves combining data from multiple sources to obtain a global summary of a target population of interest. Existing data fusion approaches require the data sources to be fused to share a common conditional distribution of one or more variables of interest with the target distribution (Pearl and Bareinboim, 2011; Hernan and VanderWeele, 2011; Bareinboim and Pearl, 2014; Stuart et al., 2015; Dahabreh et al., 2019; Kallus et al., 2020; Li and Luedtke, 2023). This is known as the exchangeability over data sources condition, or the common distributions condition, which enables the transfer of conclusions across data sources and thus facilitates data fusion. In some special cases, this condition can be depicted in a causal diagram as a lack of directional edges connecting data source nodes to those of other variables (Pearl and Bareinboim, 2011). With the use of such aligned data sources, data fusion often results in efficiency gains compared to traditional analyses that only leverage one data source.
Several variants of the common distributions condition have been proposed in the literature. For instance, this condition can be relaxed to only require a common outcome model (Rudolph and van der Laan, 2017; Dahabreh
et al., 2019) or other common statistical summaries (Taylor et al., 2023). These weaker conditions generally only enable valid inferences for particular summaries of the target distribution, such as an average treatment effect. To infer about a generic summary, the common distributions condition is still generally required.
An aligned conditional distribution condition is most likely to hold when the data source contains a rich set of variables that fully explains the variability between the observed and target populations. This condition often fails when these populations have different sets of covariates measured, or in cases when the conditional distributions across data sources are fundamentally different. For example, evaluating the efficacy of the same HIV vaccine regimen to prevent HIV-1 acquisition diagnosis using pooled data from trials with different prevailing HIV strains may not be possible due to the variability in the conditional distributions of new HIV-1 diagnosis across trials, even with the inclusion of baseline covariates. As a result, the lack of exact alignment among data sources often leaves little room for existing data fusion techniques to be applied. This raises an important question: can researchers unlock efficiency gains by making use of slightly misaligned data sources? In this work, we answer that question affirmatively by introducing a novel data fusion method that can take advantage of such data sources provided appropriate conditions are satisfied.
We examine the general case where different data sources weakly align with different parts of the distribution of the target population, in the sense that the ratio of certain conditional densities between these sources and the target distribution can be characterized by selection bias models. Selection bias models have been discussed by Vardi (1982, 1985), Gill et al. (1988), and Bickel et al. (1993) in the case of completely known density ratios, and by Qin (1998) and Gilbert (2000) in the case of density ratios that are unknown up to finite dimensional parameters. In the aforementioned HIV vaccine efficacy example, a selection bias model can be used to describe a covariate-dependent shift in the log odds of new HIV-1 diagnosis across trials. We show that using such a model makes it possible to calibrate the discrepancy in the conditional distributions of the outcome between data sources before data fusion.
Our approach is applicable in any setting where the general data fusion framework described in Li and Luedtke (2023) can be used. Like the method in that earlier work, our approach relies on having at least one data source fully aligned with each relevant conditional distribution under the target distribution of interest. The estimators presented in the current work improve on earlier ones by allowing the incorporation of more data sources -- namely, weakly aligned ones -- in an analysis, thereby improving the precision of the resulting estimator.
## 2 Notations and Assumptions
We will use the same notation as in Li and Luedtke (2023). We use \([d]\) to denote \(\{1,\ldots,d\}\) for a natural number \(d\) and let \(E_{\nu}\) denote the expectation operator under a distribution \(\nu\). We use \(Z=(Z_{1},\ldots,Z_{d})\) to denote a random variable and we let \(\bar{Z}_{j}=(Z_{1},\ldots,Z_{j})\) for \(j\in[d]\), where we let \(\bar{Z}_{0}\) denote an empty set. Random variables will be denoted by capital letters whereas their realizations will be denoted by corresponding lowercase letters.
To indicate conditioning on a random variable taking a specific value, we will condition on lowercase letters in expectations. For instance, \(E_{\nu}(Z_{2}|z_{1})\) will denote \(E_{\nu}(Z_{2}|Z_{1}=z_{1})\). For a distribution \(Q\) of \(Z\) and \(j\in[d]\), \(Q_{j}(\,\cdot\mid\bar{z}_{j-1})\) represents the conditional distribution of \(Z_{j}\mid\bar{Z}_{j-1}=\bar{z}_{j-1}\). Similarly, for a distribution \(P\) of \((Z,S)\), \(P_{j}(\,\cdot\mid\bar{z}_{j-1},s)\) denotes the conditional distribution of \(Z_{j}\mid\bar{Z}_{j-1}=\bar{z}_{j-1},S=s\). We will assume sufficient regularity conditions so that all such conditional distributions are well-defined, and that all discussed distributions of \(Z_{j}\mid\bar{Z}_{j-1}=\bar{z}_{j-1}\) and \(Z_{j}\mid\bar{Z}_{j-1}=\bar{z}_{j-1},S=s\) are defined on some common measurable space. We will denote the conditional densities of \(P_{j}^{0}\) and \(Q_{j}^{0}\) by \(p_{j}^{0}\) and \(q_{j}^{0}\), respectively, and if there is no ambiguity we will drop the \(j\) subscript, such as by writing \(p^{0}(z_{j}\mid\bar{z}_{j-1},s)\) rather than \(p_{j}^{0}(z_{j}\mid\bar{z}_{j-1},s)\). For a set \(\mathcal{D}\), we use \(|\mathcal{D}|\) to denote its cardinality. For a collection of vectors \(v_{\ell}\in\mathbb{R}^{d_{\ell}}\), \(\ell\in L\), we write \((v_{\ell})_{\ell\in L}\in\mathbb{R}^{\sum_{\ell\in L}d_{\ell}}\) to denote the concatenation of these vectors.
Suppose we have a collection of \(k\) data sources and want to estimate the target parameter \(\psi:\mathcal{Q}\rightarrow\mathbb{R}\) of a target distribution \(Q^{0}\). This distribution is known to belong to a collection \(\mathcal{Q}\) of distributions of a random variable \(Z=(Z_{1},\ldots,Z_{d})\), where \(Z\) takes values in \(\mathcal{Z}=\prod_{j=1}^{d}\mathcal{Z}_{j}\). Since the summary \(\psi\) may only depend on certain components of \(Q^{0}\), we let \(\mathcal{I}\subset[d]\) denote a set of irrelevant indices \(j\) such that \(\psi\) is not a function of the distribution of \(Z_{j}\mid\bar{Z}_{j-1}\) -- for example, if \(d=3\) and \(\psi\) is the G-computation parameter \(\psi(Q):=\iint z_{3}\,Q_{3}(dz_{3}\mid Z_{2}=1,z_{1})\,Q_{1}(dz_{1})\) (Robins, 1986), then \(\mathcal{I}=\{2\}\). We use \(\mathcal{J}=[d]\backslash\mathcal{I}\) to denote the set of indices that may be relevant to the evaluation of \(\psi\), which we call the set of relevant indices.
Instead of directly obtaining samples from the target distribution \(Q^{0}\), we observe \(n\) independent copies of \(X=(Z,S)\) drawn from some common distribution \(P^{0}\), where \(Z\) takes values in \(\mathcal{Z}\) and \(S\) is a categorical random variable with support \([k]\) denoting the data source. The distribution \(P^{0}\) is known to weakly align with \(Q^{0}\) in the sense that, for each \(j\in\mathcal{J}\) and certain data sources \(s\), the conditional density under \(P^{0}\) satisfies
\[p^{0}(z_{j}\mid\bar{z}_{j-1},s)=\frac{w_{j,s}(\bar{z}_{j};\beta_{j,s}^{0})}{W_ {j,s}(Q_{j}^{0};\beta_{j,s}^{0})(\bar{z}_{j-1})}q^{0}(z_{j}\mid\bar{z}_{j-1}), \tag{1}\]
where the form of weight function \(w_{j,s}\) is known, \(\beta_{j,s}^{0}\in\mathbb{R}^{c_{j,s}}\) is a vector of length \(c_{j,s}\), and \(W_{j,s}(Q_{j}^{0};\beta_{j,s}^{0})(\bar{z}_{j-1})\) is the normalizing function given by
\[W_{j,s}(Q_{j}^{0};\beta_{j,s}^{0})(\bar{z}_{j-1}):=\int_{\mathcal{Z}_{j}}w_{j,s}(\bar{z}_{j};\beta_{j,s}^{0})\,Q_{j}^{0}(dz_{j}\mid\bar{z}_{j-1}).\]
The quantity \(W_{j,s}(Q_{j}^{0},\beta_{j,s}^{0})(\bar{z}_{j-1})\) is assumed to be strictly positive and finite for all possible values of \(\beta_{j,s}^{0}\). The values of \(\beta_{j,s}^{0}\) can vary in magnitude and in length across different conditional distribution indices \(j\) and data sources \(s\). We let \(w_{j,s}^{*}(\bar{z}_{j};\beta_{j,s}):=w_{j,s}(\bar{z}_{j};\beta_{j,s})/W_{j,s}(Q_{j}^{0};\beta_{j,s})(\bar{z}_{j-1})\) hereafter, and we refer to this function as a density ratio.
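As a concrete one-line illustration of the normalizing function (our own example), suppose \(Z_{j}\) is binary and \(w_{j,s}(\bar{z}_{j};\beta)=\exp(\beta z_{j})\), as in Example 2.1 below. Then
\[W_{j,s}(Q_{j}^{0};\beta)(\bar{z}_{j-1})=e^{\beta}\,q^{0}(1\mid\bar{z}_{j-1})+q^{0}(0\mid\bar{z}_{j-1}),\]
so the density ratio \(w_{j,s}^{*}\) simply reweights the two support points of \(Z_{j}\) and renormalizes.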
The selection bias model specified in (1) generalizes a variety of models, including univariate and multivariate logistic models (Gilbert, 2000), density ratios of distributions from an exponential family (Bickel et al., 1993), and truncated regression (Bickel et al., 1993). We give two examples below to illustrate the wide applicability of selection bias models and refer readers to Chapter 4.4 of Bickel et al. (1993) for additional examples.
**Example 2.1**.: _Multivariate logistic regression model. Suppose \(Z=(Z_{1},Z_{2},Z_{3})=(X,A,Y)\), where \(X\) is a covariate, \(A\) is an indicator of treatment, and \(Y\) is a binary outcome. When \(w_{3,s}(z;\beta_{3,s})=\exp(\beta_{3,s}y)\) for data source \(s\), (1) can be rewritten as follows when \(j=3\):_
\[\log\left\{\frac{p^{0}(Y=1\mid A=a,X=x,S=s)}{p^{0}(Y=0\mid A=a,X=x,S=s)}\right\} =\log\left\{\frac{q^{0}(Y=1\mid A=a,X=x)}{q^{0}(Y=0\mid A=a,X=x)}\right\}+\beta _{3,s}.\]
_The above corresponds to a constant shift in log odds between the observed population and the target population across all strata defined by treatment \(A\) and covariate \(X\). With a more complex choice of \(w_{3,s}(z;\beta_{3,s})=\exp(\beta_{3,s}^{1}y+\beta_{3,s}^{2}ay+\beta_{3,s}^{3} xy+\beta_{3,s}^{4}axy)\), we arrive at a more flexible shift in log odds that differs across strata defined by \(A\) and \(X\). This biased sampling model can be generalized naturally for multi-level categorical outcomes \(Y\) and can be useful in interpreting differential vaccine efficacy against different strains (Gilbert et al., 1999). Even more generally, with a choice of \(w_{j,s}(\bar{z}_{j};\beta_{j,s})=\exp\{f(\bar{z}_{j};\beta_{j,s})\}\) for some function \(f\), the selection bias model (1) can accurately model shifts between any two distributions that are from the same exponential family (Patil and Rao, 1978)._
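To make the exponential tilt concrete, the following minimal R sketch (all numbers and the target model `q1` are hypothetical) verifies that the weight \(w_{3,s}(z;\beta_{3,s})=\exp(\beta_{3,s}y)\) shifts the log odds by exactly \(\beta_{3,s}\) in every \((a,x)\) stratum:

```r
# Hypothetical target conditional probability q^0(Y = 1 | A = a, X = x)
q1 <- function(a, x) plogis(-0.5 + 0.8 * a + 0.3 * x)

beta <- 0.7  # assumed log-odds shift for data source s
# Tilting q^0 by w(z; beta) = exp(beta * y) and renormalizing gives
# p^0(Y = 1 | a, x, s) = q * exp(beta) / {q * exp(beta) + (1 - q)}
p1 <- function(a, x) {
  q <- q1(a, x)
  q * exp(beta) / (q * exp(beta) + (1 - q))
}

# The difference in log odds is the constant beta, regardless of (a, x):
a <- 1; x <- 0.2
qlogis(p1(a, x)) - qlogis(q1(a, x))  # returns 0.7
```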
**Example 2.2**.: _Truncated regression model (Bickel et al., 1993). Truncated regression models arise naturally in a variety of fields, including astronomy and biostatistics (Bhattacharya et al., 1983; Jewell, 1985). Suppose \(Z=(Z_{1},Z_{2})\) and that data source \(1\) consists of observations that are directly sampled from \(Q^{0}\). In data source \(2\), the outcome \(Z_{2}\) is only observed if \(Z_{2}\geq c_{0}\) for some threshold \(c_{0}\in\mathbb{R}\). For example, \(Z_{2}\) may denote the luminosity of a distant celestial object, and data source \(2\) may come from a low-sensitivity telescope that only observes objects whose luminosity exceeds some threshold \(c_{0}\). To use both data sources to study the association between the size of a celestial object, \(Z_{1}\), and its luminosity, \(Z_{2}\), it is helpful to note that the observed conditional density in data source \(s=2\) is given by_
\[p^{0}(z_{2}\mid z_{1},s=2)=\frac{\mathbb{1}(z_{2}\geq c_{0})q^{0}(z_{2}\mid z_{1})}{E_{Q^{0}}[\mathbb{1}(Z_{2}\geq c_{0})\mid z_{1}]}.\]
_Hence, (1) is satisfied when \(j=2\) with \(w_{2,s}(\bar{z}_{2})=\mathbb{1}(s=1)+\mathbb{1}(s=2,z_{2}\geq c_{0})\)._
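As a quick numerical illustration of this truncation scheme (a sketch; the threshold and data-generating values are hypothetical), the two sources can be simulated as follows:

```r
set.seed(1)
c0 <- 1.0                    # truncation threshold for data source 2
z1 <- runif(1e5, 1, 2)       # size of the celestial object
z2 <- rnorm(1e5, mean = z1)  # luminosity drawn from q^0(. | z1)

# Source 1 observes (z1, z2) directly; source 2 retains only z2 >= c0,
# i.e. its weight function is the indicator 1(z2 >= c0).
keep2 <- z2 >= c0
mean(z2)         # mean luminosity in the target population
mean(z2[keep2])  # mean in source 2 is inflated by the truncation
```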
Other forms of the weight function can also be appropriate to model density ratios (Cook and Martin, 1974; Rao, 1965; Patil and Rao, 1978; Bickel et al., 1993). Depending on the extent of shifts between the observed and the target distribution, the form of the weight function will vary. In what follows, we formally outline an alignment condition that makes it possible to relate the conditional distributions \(P^{0}_{j}(\cdot\mid\bar{z}_{j-1},s)\) and \(Q^{0}_{j}(\cdot\mid\bar{z}_{j-1})\).
**Condition 1**.: _(Sufficient alignment) For each relevant index \(j\in\mathcal{J}\), there exist known, disjoint subsets \(\mathcal{A}_{j}\) and \(\mathcal{W}_{j}\) of \([k]\) such that all of the following hold:_
**1a**: _(Sufficient overlap) for all \(s\in\mathcal{S}_{j}:=\mathcal{A}_{j}\cup\mathcal{W}_{j}\), the marginal distribution of \(\bar{Z}_{j-1}\) under sampling from \(Q^{0}\) is absolutely continuous with respect to the conditional distribution of \(\bar{Z}_{j-1}\mid S=s\) under sampling from \(P^{0}\);_
**1b**: _(At least one aligned data source)_ \(\mathcal{A}_{j}\neq\emptyset\) _and, for all_ \(s\in\mathcal{A}_{j}\)_,_ \(p^{0}(z_{j}\mid\bar{z}_{j-1},s)=q^{0}(z_{j}\mid\bar{z}_{j-1})\)_;_
**1c**: _(Weakly aligned data sources) for all_ \(s\in\mathcal{W}_{j}\)_,_ \(p^{0}(z_{j}\mid\bar{z}_{j-1},s)=w^{*}_{j,s}(\bar{z}_{j};\beta^{0}_{j,s})q^{0} (z_{j}\mid\bar{z}_{j-1})\)_._
For a given \(j\), we refer to \(\mathcal{A}_{j}\) and \(\mathcal{W}_{j}\) as the aligned fusion set and weakly aligned fusion set, respectively. We denote the concatenation of vectors \(\beta_{j,s}\) over all \(s\in\mathcal{W}_{j}\) as \(\beta_{j}=(\beta_{j,s})_{s\in\mathcal{W}_{j}}\), and \(\beta_{j,s}\) over all \(j\in\mathcal{J}\) and \(s\in\mathcal{W}_{j}\) as \(\beta=(\beta_{j,s})_{j\in\mathcal{J},s\in\mathcal{W}_{j}}\). The true parameter \(\beta^{0}\) is known to belong to a collection \(\mathcal{B}:=\mathbb{R}^{t}\) of vectors of length \(t:=\sum_{j\in\mathcal{J}}\sum_{s\in\mathcal{W}_{j}}c_{j,s}\). For ease of notation, we define a mapping \(\mathrm{B}:\mathcal{P}_{\mathcal{Q},\mathcal{B}}\rightarrow\mathcal{B}\) such that \(\beta^{0}:=\mathrm{B}(P^{0})\). Similarly, the true parameter \(\beta^{0}_{j,s}\) for each \(j\in\mathcal{J}\) and \(s\in\mathcal{W}_{j}\) is known to belong to a collection \(\mathcal{B}_{j,s}:=\mathbb{R}^{\,c_{j,s}}\) of vectors of length \(c_{j,s}\). Condition 1 implies that \(P^{0}\) is known to belong to the collection \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) of distributions \(P_{Q,\beta}\) with support on \(\mathcal{Z}\times[k]\) for which there exists a \(Q\in\mathcal{Q}\) and \(\beta_{j,s}\in\mathcal{B}_{j,s}\) such that, for all \(j\in\mathcal{J}\) and \(s\in\mathcal{S}_{j}\): (a) the marginal distribution of \(\bar{Z}_{j-1}\) under sampling from \(Q\) is absolutely continuous with respect to the conditional distribution \(\bar{Z}_{j-1}\mid S=s\) under sampling from \(P\), (b) \(P_{j}(\cdot\mid\bar{z}_{j-1},s)=Q_{j}(\cdot\mid\bar{z}_{j-1})\)\(Q\)-almost everywhere for all \(s\in\mathcal{A}_{j}\), and (c) \(p(z_{j}\mid\bar{z}_{j-1},s)=w^{*}_{j,s}(\bar{z}_{j};\beta_{j,s})q(z_{j}\mid\bar {z}_{j-1})\) for all \(s\in\mathcal{W}_{j}\). In addition, we define \(\mathcal{P}_{Q^{0},\mathcal{B}}=\{P_{Q^{0},\beta}:\beta\in\mathcal{B}\}\) and \(\mathcal{P}_{\mathcal{Q},\beta^{0}}=\{P_{Q,\beta^{0}}:Q\in\mathcal{Q}\}\). We will refer to \(\mathcal{Q}\), \(\mathcal{P}_{Q^{0},\mathcal{B}}\), \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\), and \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) as models.
Condition 1a ensures the conditional distributions appearing in Conditions 1b and 1c are uniquely defined up to \(Q^{0}\)-null sets. Condition 1c imposes that data sources in \(\mathcal{W}_{j}\) are weakly aligned with a known \(w_{j,s}\) but unknown \(\beta^{0}_{j,s}\). Each function \(w_{j,s}(\,\cdot\,;\beta^{0}_{j,s})\) calibrates the conditional distribution of \(Z_{j}\) given \(\bar{Z}_{j-1}\) in data source \(s\) to that of the target population. As the name suggests, weak alignment is typically a less restrictive property than alignment. It is important to note that if no data sources are known to be aligned with the target distribution at an index \(j\), then the parameters \(\beta^{0}_{j,s}\) describing the shift of the weakly aligned data sources \(s\in\mathcal{W}_{j}\) will not generally be identifiable. Therefore, we impose Condition 1b, which requires having and correctly identifying nonempty sets \(\mathcal{A}_{j}\) of data sources that align with the target. Condition 1b is the same as the alignment condition imposed in Li and Luedtke (2023). Since we can always take the weakly aligned fusion sets \(\mathcal{W}_{j}\) to be empty for all \(j\), Condition 1 in the current work is no more restrictive than Condition 1 from Li and Luedtke (2023). However, we will show that, when there is at least one \(j\) such that \(\mathcal{W}_{j}\) is nonempty, it will generally be possible to gain statistical efficiency when estimating summaries of the target distribution \(Q^{0}\). See Figure 1 for an illustration comparing the alignment condition from the current work to that in Li and Luedtke (2023).
Together, Conditions 1a and 1b enable the identification of the summary \(\psi(Q^{0})\) of the target population using the observed data sources in \(\mathcal{A}_{j},j\in\mathcal{J}\) only, thus making it feasible to learn about \(\psi(Q^{0})\) solely from the observed data distribution \(P^{0}\). To express this identifiability result, we define a mapping \(\theta:\mathcal{P}_{\mathcal{Q},\mathcal{B}}\rightarrow\mathcal{Q}\). In particular, for any \(P\in\mathcal{P}_{\mathcal{Q},\mathcal{B}}\), we let \(\theta(P)\) denote an arbitrarily selected distribution from the set \(\mathcal{Q}(P)\) of distributions \(Q\in\mathcal{Q}\) that are such that, for each \(j\in\mathcal{J}\), \(Z_{j}\mid\bar{Z}_{j-1}\) under sampling from \(Q\) has the same distribution as \(Z_{j}\mid\bar{Z}_{j-1},S\in\mathcal{A}_{j}\) under sampling from \(P\). We let \(\phi=\psi\circ\theta\). Then it can be shown that under Condition 1, \(\psi(Q^{0})=\phi(P^{0})\). The proof of this result can be found in Theorem 1 of Li and Luedtke (2023).
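As a concrete illustration of this identification (our own worked example), take the G-computation parameter from Section 2 with \(d=3\) and \(\mathcal{J}=\{1,3\}\). Conditions 1a and 1b give
\[\psi(Q^{0})=\iint z_{3}\,P^{0}_{3}(dz_{3}\mid Z_{2}=1,z_{1},S\in\mathcal{A}_{3})\,P^{0}_{1}(dz_{1}\mid S\in\mathcal{A}_{1})=\phi(P^{0}),\]
so the target summary can be computed from conditional distributions of the aligned data sources alone.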
## 3 Methods
We now outline strategies to construct regular and asymptotically linear estimators of \(\psi(Q^{0})\) with the observed data when \(\mathcal{Q}\) is a collection of nonparametric distributions \(Q\). Such estimators are appealing since they are consistent and asymptotically normal under mild conditions, thus facilitating the construction of confidence intervals and hypothesis tests. The construction of such estimators requires a key object, namely, gradients of statistical parameters (Bickel et al., 1993). We begin by deriving the form of such a gradient \(D^{\dagger}_{P^{0}}\) in the model \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) implied by \(\mathcal{Q}\) and the data fusion conditions. We then use knowledge of this gradient to construct a one-step estimator of \(\psi(Q^{0})\) (Bickel, 1982). Given an estimate \(\widehat{P}\) of \(P^{0}\), the one-step estimator takes the form \(\hat{\phi}\equiv\phi(\widehat{P})+\sum_{i=1}^{n}D^{\dagger}_{\widehat{P}}(X_{i})/n\). When the remainder term \(R(\widehat{P},P^{0})\equiv\phi(\widehat{P})-\phi(P^{0})+E_{P^{0}}[D^{\dagger}_{\widehat{P}}(X)]\) is \(o_{p}(n^{-1/2})\) and the empirical mean of \(D^{\dagger}_{\widehat{P}}(X)-D^{\dagger}_{P^{0}}(X)\) is within \(o_{p}(n^{-1/2})\) of the mean of this term when \(X\sim P^{0}\), this estimator will be asymptotically linear with influence function \(D^{\dagger}_{P^{0}}\), that is,
\[\sqrt{n}[\hat{\phi}-\phi(P^{0})]\xrightarrow{d}N(0,\sigma^{2}_{P^{0}}),\]
where \(\sigma^{2}_{P^{0}}\) is the variance of \(D^{\dagger}_{P^{0}}(X)\) when \(X\sim P^{0}\). Since the asymptotic variance of a one-step estimator is, under conditions, the variance of the gradient used to construct it, it is desirable to seek gradients with low variance. The gradient with the lowest variance is called the canonical gradient. We refer the readers to Bickel et al. (1993), Ibragimov and Hasminskii (1981), and Bickel (1982) for overviews of influence functions, gradients of statistical parameters, semiparametric efficiency theory, and one-step estimators.
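The arithmetic of this construction is simple. The following schematic R sketch assembles the one-step estimate and a Wald confidence interval; `phi_hat` and `D_hat_values` are placeholders that a user must supply from their own estimates of \(\phi(\widehat{P})\) and of \(D^{\dagger}_{\widehat{P}}(X_{i})\) for \(i=1,\ldots,n\):

```r
# Generic one-step estimator: plug-in estimate plus the empirical mean of
# the estimated gradient, with a Wald interval based on the sample
# standard deviation of the gradient values.
one_step <- function(phi_hat, D_hat_values) {
  n   <- length(D_hat_values)
  est <- phi_hat + mean(D_hat_values)
  se  <- sd(D_hat_values) / sqrt(n)
  list(estimate = est,
       se       = se,
       ci95     = est + c(-1, 1) * qnorm(0.975) * se)
}
```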
Figure 1: Previous data fusion methods propose to use aligned fusion sets only (top row). We advocate fusing weakly aligned data sources (bottom row) in addition to aligned data sources. While aligned data sources might be scarce (small \(|\mathcal{A}_{d}|\)), there can be more weakly aligned sources (larger \(|\mathcal{W}_{d}|\)). If \(\mathcal{W}_{j}\) is empty for some \(j\in[d]\), as illustrated by \(\mathcal{W}_{1}\) above, we can still identify the target estimand as long as \(\mathcal{A}_{j}\) is nonempty.
Our proposed data fusion procedure proceeds as follows. The first step is to obtain a gradient of \(\psi\) at \(Q^{0}\) in a model where direct sampling from the target distribution \(Q^{0}\) is possible. For many estimands of interest, this step can be accomplished with relative ease as these gradients are readily accessible in existing literature. As the second step, we leverage the findings from this section to establish a correspondence between the aforementioned gradient and a gradient of \(\phi\) at \(P^{0}\) within our data fusion framework. Lastly, we use the derived gradient to construct a one-step estimator. There are also alternative approaches for using knowledge of a gradient to construct asymptotically linear estimators, such as those based on estimating equations (Van der Laan and Robins, 2003; Tsiatis, 2006) and targeted minimum loss-based estimation (Van Der Laan and Rubin, 2006).
We start by deriving the gradients of (i) \(\phi\) in a model where \(\beta^{0}\) is known and (ii) \(\beta^{0}\) in a model where it is not known. We will subsequently use these functions to obtain gradients of \(\phi\) under \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\). To derive the form of a gradient of the target estimand when \(\beta^{0}\) is known, we require one additional regularity condition, which we now introduce. To express this condition, we let \(\lambda_{j-1}\) denote the Radon-Nikodym derivative of the marginal distribution of \(\bar{Z}_{j-1}\) under sampling from \(Q^{0}\) relative to the conditional distribution of \(\bar{Z}_{j-1}\mid S\in\mathcal{S}_{j}\) under sampling from \(P^{0}\) and \(\lambda_{j}^{\dagger}(\bar{z}_{j},s):=\lambda_{j-1}(\bar{z}_{j-1})q^{0}(z_{j} \mid\bar{z}_{j-1})/p^{0}(z_{j}\mid\bar{z}_{j-1},s)\).
**Condition 2**.: _(Strong overlap) For each fixed \(j\in\mathcal{J}\) and \(s\in\mathcal{W}_{j}\), there exists a \(u_{j,s}\in(0,\infty)\) such that \(Q^{0}\{u_{j,s}^{-1}\leq\lambda_{j}^{\dagger}(\bar{Z}_{j},s)\leq u_{j,s}\}=1\)._
Condition 2 strengthens the overlap in \(\bar{Z}_{j-1}\) between data sources specified by Condition 1a. Under this strengthening, for \(Q^{0}\)-almost all \(\bar{z}_{j-1}\), an observation with \(\bar{Z}_{j-1}=\bar{z}_{j-1}\) has a positive probability of being observed in any data source \(s\in\mathcal{S}_{j}\). In the meantime, it also requires the Radon-Nikodym derivative of \(Q^{0}_{j}(\cdot\mid\bar{z}_{j-1})\) relative to \(P^{0}_{j}(\cdot\mid\bar{z}_{j-1},s)\) for each \(s\in\mathcal{W}_{j}\) and \(j\in\mathcal{J}\) to be bounded for each \(\bar{z}_{j-1}\) in the support of \(Q^{0}\).
**Lemma 1** (_A gradient of \(\phi\) when \(\beta^{0}\) is known_).: _Suppose that Conditions 1 and 2 hold, \(\beta^{0}\) is known and \(\psi\) is pathwise differentiable at \(Q^{0}\) relative to \(\mathcal{Q}\) with gradient \(D_{Q^{0}}\). A gradient of \(\phi\) relative to \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\) is given by_
\[D_{P^{0}}(z,s;\beta^{0})=\sum_{j\in\mathcal{J}}\mathbb{1}(\bar{z}_{j-1}\in \bar{\mathcal{Z}}_{j-1}^{\dagger})\frac{\mathbb{1}(s\in\mathcal{S}_{j})}{P^{0 }(S\in\mathcal{S}_{j})}\lambda_{j}^{\dagger}(\bar{z}_{j},s;\beta^{0}_{j,s})D_ {Q^{0},j}(\bar{z}_{j}), \tag{2}\]
_where \(\bar{\mathcal{Z}}_{j-1}^{\dagger}\) denotes the support of \(\bar{Z}_{j-1}\) under sampling from \(Q^{0}\) and \(D_{Q^{0},j}(\bar{z}_{j})=E_{Q^{0}}[D_{Q^{0}}(Z)\mid\bar{Z}_{j}=\bar{z}_{j}]-E _{Q^{0}}[D_{Q^{0}}(Z)\mid\bar{Z}_{j-1}=\bar{z}_{j-1}]\)._
This result resembles Corollary 1 in Li and Luedtke (2023) which proposes to use only aligned data sources by constructing an estimator using the following gradient:
\[D_{P^{0}}^{\mathcal{A}}(z,s)=\sum_{j\in\mathcal{J}}\mathbb{1}(\bar{z}_{j-1}\in \bar{\mathcal{Z}}_{j-1}^{\dagger})\frac{\mathbb{1}(s\in\mathcal{A}_{j})}{P(S \in\mathcal{A}_{j})}\lambda_{j-1}^{\natural}(\bar{z}_{j-1})D_{Q^{0},j}(\bar{z}_ {j}), \tag{3}\]
where \(\lambda_{j-1}^{\natural}\) denotes the Radon-Nikodym derivative of the marginal distribution of \(\bar{Z}_{j-1}\) under sampling from \(Q^{0}\) relative to the conditional distribution of \(\bar{Z}_{j-1}\mid S\in\mathcal{A}_{j}\) under \(P^{0}\). The gradients in (2) and (3) differ in some key
aspects. The gradient in (2) uses not only aligned data sources in \(\mathcal{A}_{j}\) but also weakly aligned data sources in \(\mathcal{W}_{j}\), as reflected by the indicator term \(\mathbb{1}\left(s\in\mathcal{S}_{j}\right)\). Because of the inclusion of these slightly misaligned sources, we need to additionally correct for the potential shifts in the conditional distribution of \(Z_{j}\mid\bar{Z}_{j-1},S\) for data sources in \(\mathcal{W}_{j}\), besides correcting the shifts in the joint distribution of \(\bar{Z}_{j-1}\mid S\). As a result, we have \(\lambda_{j}^{\dagger}\) in the projection term compared to \(\lambda_{j-1}^{\natural}\) in (3). When \(\mathcal{W}_{j}=\emptyset\) for all \(j\), all the data sources in \(\mathcal{S}_{j}\) are aligned in the sense that \(p^{0}(z_{j}\mid\bar{z}_{j-1},s)=q^{0}(z_{j}\mid\bar{z}_{j-1})\), which in turn gives \(\lambda_{j}^{\dagger}(\bar{z}_{j},s)=\lambda_{j-1}^{\natural}(\bar{z}_{j-1})\) \(Q^{0}\)-almost surely. In this special case, Lemma 1 recovers Corollary 1 of Li and Luedtke (2023).
While Lemma 1 provides a preliminary solution for estimating \(\psi(Q^{0})\), it relies on the assumption that \(\beta^{0}\) is known. However, in practice, the value of \(\beta^{0}\) needs to be estimated. To this end, we propose to efficiently estimate \(\beta^{0}\) based on its corresponding canonical gradient. We let \(\delta_{j,s}(\bar{z}_{j-1}):=P^{0}(S=s\mid\bar{z}_{j-1})\), \(r_{j}(\bar{z}_{j};\beta_{j}):=\left\{\sum_{m=1}^{|\mathcal{S}_{j}|}\delta_{j,m}(\bar{z}_{j-1})w_{j,m}^{*}(\bar{z}_{j};\beta_{j,m})\right\}^{-1}\) and \(r_{j,s}(\bar{z}_{j};\beta_{j,s}):=\delta_{j,s}(\bar{z}_{j-1})w_{j,s}^{*}(\bar{z}_{j};\beta_{j,s})r_{j}(\bar{z}_{j};\beta_{j})\). In addition, we let \(\bar{w}_{j}^{*}:=(w_{j,m}^{*})_{m\in\mathcal{S}_{j}}\), \(\bar{r}_{j}:=(r_{j,m})_{m\in\mathcal{S}_{j}}\), and \(\Delta\) be the diagonal matrix with diagonal \(((\delta_{j,m})_{m\in\mathcal{S}_{j}})^{\top}\). We define an \(|\mathcal{S}_{j}|\times|\mathcal{S}_{j}|\) matrix \(M_{j}(\bar{z}_{j-1};\beta_{j})=\Delta^{-1}(\bar{z}_{j-1})-\int r_{j}(\bar{z}_{j};\beta_{j})\bar{w}_{j}^{*}(\bar{z}_{j};\beta_{j})\bar{w}_{j}^{*\top}(\bar{z}_{j};\beta_{j})\,Q^{0}_{j}(dz_{j}\mid\bar{z}_{j-1})\). Let \(M_{j}^{-}\) be the generalized inverse of \(M_{j}\) and \(\dot{w}_{j,s}(\bar{z}_{m},s^{\prime};\beta^{0}_{m,s^{\prime}})\) be the derivative of \(w_{m,s^{\prime}}(\bar{z}_{m};\beta_{m,s^{\prime}})\) with respect to \(\beta_{j,s}\) evaluated at \(\beta^{0}_{j,s}\). For a fixed \(s\in\mathcal{W}_{j}\) and \(j\in\mathcal{J}\), the score function of \(\beta^{0}_{j,s}\) when \(Q^{0}\) is known is \(\dot{\ell}_{\beta_{j,s}}(\bar{z}_{j},s^{\prime};\beta^{0}_{j,s}):=\frac{\dot{w}_{j,s}(\bar{z}_{j},s^{\prime};\beta^{0}_{j,s})}{w_{j,s}(\bar{z}_{j},s^{\prime};\beta^{0}_{j,s})}-E_{P^{0}}\left[\frac{\dot{w}_{j,s}(\bar{Z}_{j},S;\beta^{0}_{j,s})}{w_{j,s}(\bar{Z}_{j},S;\beta^{0}_{j,s})}\mid\bar{z}_{j-1},S=s^{\prime}\right]\). We denote \(\tau_{j,m}(\bar{z}_{j};\beta^{0}_{j,m}):=r_{j,m}(\bar{z}_{j};\beta^{0}_{j,m})\dot{\ell}_{\beta_{j,s}}(\bar{z}_{j},m;\beta^{0}_{j,m})\), and
\[a_{j}^{*}(\bar{z}_{j};\beta^{0}_{j}) :=\sum_{m=1}^{|\mathcal{S}_{j}|}\tau_{j,m}(\bar{z}_{j};\beta^{0}_ {j,m})-E_{P^{0}}\left[\sum_{m=1}^{|\mathcal{S}_{j}|}\tau_{j,m}(\bar{Z}_{j}; \beta^{0}_{j,m})\mid\bar{z}_{j-1},S\in\mathcal{A}_{j}\right]\] \[\quad+E_{P^{0}}\left[\sum_{m=1}^{|\mathcal{S}_{j}|}\tau_{j,m}( \bar{Z}_{j};\beta^{0}_{j,m})\bar{w}_{j}{}^{*\top}(\bar{Z}_{j};\beta^{0}_{j}) \mid\bar{z}_{j-1},S\in\mathcal{A}_{j}\right]\] \[\quad\cdot M_{j}^{-}(\bar{z}_{j-1};\beta^{0}_{j})^{\top}\bigg{\{} \bar{w}_{j}{}^{*\top}(\bar{z}_{j};\beta^{0}_{j})r_{j}(\bar{z}_{j};\beta^{0}_{j} )-E_{P^{0}}\left[\bar{w}_{j}{}^{*\top}(\bar{Z}_{j};\beta^{0}_{j})r_{j}(\bar{Z}_ {j};\beta^{0}_{j})\mid\bar{z}_{j-1},S\in\mathcal{A}_{j}\right]\bigg{\}}.\]
**Lemma 2** (_The canonical gradient of \(\beta^{0}\)_).: _The efficient score function of \(\beta^{0}_{j,s}\) under \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) for each \(s\in\mathcal{W}_{j}\) and \(j\in\mathcal{J}\) is_
\[\dot{\ell}^{*}_{\beta_{j,s}}(\bar{z}_{j},s^{\prime};\beta^{0}_{j,s})=\dot{\ell}_{\beta_{j,s}}(\bar{z}_{j},s^{\prime};\beta^{0}_{j,s})-\left\{a_{j}^{*}(\bar{z}_{j};\beta^{0}_{j})-E_{P^{0}}\left[a_{j}^{*}(\bar{Z}_{j};\beta^{0}_{j})\mid\bar{Z}_{j-1}=\bar{z}_{j-1},S=s^{\prime}\right]\right\}.\]
_Furthermore, the canonical gradient of \(\beta^{0}\) is \(D^{\beta}_{P^{0}}(z,s;\beta^{0})=I^{-1}_{\beta^{0}}\dot{\ell}^{*}_{\beta}(z,s;\beta^{0})\), where \(I_{\beta^{0}}:=E_{P^{0}}[\dot{\ell}^{*}_{\beta}(Z,S;\beta^{0})\dot{\ell}^{*\top}_{\beta}(Z,S;\beta^{0})]\)._
The above results can be derived via a score function argument for finite-dimensional parameters under semiparametric models -- see Supplementary Appendix B for details. The results of Gilbert (2000) emerge as a special case of Lemma 2 when \(Z\) is one-dimensional and \(\mathcal{S}_{1}=[k]\).
Now we make use of the preceding lemmas and derive a valid gradient via a variant of an efficient score function argument for semiparametric models, where some adaptation is needed to account for the fact that the unknown parameter \(Q^{0}\) is infinite-dimensional. To begin with, we define a new mapping \(\gamma:\mathcal{B}\rightarrow\mathbb{R}\) by \(\gamma(\beta):=E_{P_{\underline{Q}^{0},\beta}}[D_{P^{0}}(Z,S;\beta^{0})]\), where \(\underline{Q}^{0}:=\theta(P^{0})\). Note that \(\gamma\) depends on \(P^{0}\) both through \(D_{P^{0}}\) and through the \(\underline{Q}^{0}\) indexing \(P_{\underline{Q}^{0},\beta}\) in the expectation.
**Lemma 3** (_A gradient of \(\phi\)_).: _Suppose that \(\gamma\) is differentiable in \(\beta\), and Conditions 1 and 2 hold. If \(D_{P^{0}}\) is a gradient in the model \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\) where \(\beta^{0}\) is known, then a gradient of \(\phi\) relative to \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) is given by_
\[D_{P^{0}}^{\dagger}(z,s;\beta^{0})=D_{P^{0}}(z,s;\beta^{0})-\{\nabla_{\beta} \gamma(\beta)\mid_{\beta=\beta^{0}}\}^{\top}D_{P^{0}}^{\beta}(z,s;\beta^{0}), \tag{4}\]
_where \(\nabla_{\beta}\gamma(\beta)\mid_{\beta=\beta^{0}}\) denotes the partial derivative of \(\gamma(\beta)\) with respect to \(\beta\) evaluated at \(\beta=\beta^{0}\). Moreover, if \(D_{P^{0}}\) is the canonical gradient in \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\), then (4) is the canonical gradient of \(\phi\) relative to \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\)._
The first term in (4) is a gradient of \(\phi\) under \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\). The second term is a transformation of a gradient of \(\beta^{0}\) under \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) that accounts for the fact that \(\beta^{0}\) needs to be estimated. Viewed differently, the second term can also be regarded as the projection of \(D_{P^{0}}\) onto the tangent space of the nuisance parameter \(\beta^{0}\), which arises because the target estimand \(\psi(Q^{0})\) does not depend on \(\beta^{0}\). It can be shown that the coefficient satisfies \(\nabla_{\beta}\gamma(\beta)\mid_{\beta=\beta^{0}}=E_{P^{0}}[D_{P^{0}}(Z,S;\beta^{0})\dot{\ell}_{\beta}(Z,S;\beta^{0})]\), and hence this result is consistent with Theorem 2.1 and part C of Proposition 4.1 in Klaassen and Putter (2005).
It is interesting to contrast the gradient that was previously presented in Corollary 1 of Li and Luedtke (2023) with the one presented in Lemma 3 of this work. The former only uses aligned data sources, whereas the latter uses both aligned and weakly aligned data sources. While it would thus seem natural that the gradient that leverages more data sources must have lower variance, this turns out not to be the case in general. This disappointing phenomenon can occur if some weakly aligned data sources are far from the target distribution, with large \(|\beta^{0}|\) -- see Appendix D for an example. It can arise because the gradient in Lemma 3 is not necessarily the canonical gradient of \(\phi\) relative to the model where \(\beta^{0}\) is not known. While deriving the canonical gradient would be one approach to finding a gradient that leverages weakly aligned data sources and dominates the best possible gradient that does not, we have not been able to find the form of this gradient; hence, we leave its derivation to future work.
To find a gradient that necessarily has variance at least as small as \(D_{P^{0}}^{\mathcal{A}}\) from Corollary 1 of Li and Luedtke (2023), we find the gradient with the smallest variance in a class of gradients that may not contain the canonical gradient, but does contain both \(D_{P^{0}}^{\mathcal{A}}\) and \(D_{P^{0}}^{\dagger}\). We define this class of gradients as \(\mathcal{D}:=\{D_{P^{0}}^{v}(\,\cdot\,;\beta^{0}):v=(v_{j})_{j\in\mathcal{J}}\in\mathbb{R}^{|\mathcal{J}|}\}\), where
\[D_{P^{0}}^{v}(z,s;\beta^{0}):=\sum_{j\in\mathcal{J}}\Big{\{}v_{j}D_{P^{0},j}^{ \mathcal{A}}(\bar{z}_{j},s)+(1-v_{j})D_{P^{0},j}^{\dagger}(\bar{z}_{j},s; \beta_{j}^{0})\Big{\}} \tag{5}\]
with \(D_{P^{0},j}^{\mathcal{A}}(\bar{z}_{j},s):=E_{P^{0}}[D_{P^{0}}^{\mathcal{A}}(Z, S)\mid\bar{z}_{j},s]-E_{P^{0}}[D_{P^{0}}^{\mathcal{A}}(Z,S)\mid\bar{z}_{j-1},s]\) and \(D_{P^{0},j}^{\dagger}(\bar{z}_{j},s;\beta_{j}^{0}):=E_{P^{0}}[D_{P^{0}}^{ \dagger}(Z,S;\beta^{0})\mid\bar{z}_{j},s]-E_{P^{0}}[D_{P^{0}}^{\dagger}(Z,S; \beta^{0})\mid\bar{z}_{j-1},s]\).
In addition, we define \(D^{\star}_{P^{0}}(z,s;\beta^{0}):=D^{v^{\star}}_{P^{0}}(z,s;\beta^{0})\), where for each \(j\in\mathcal{J}\),
\[v^{\star}_{j}=\frac{\mathrm{var}[D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{ j})]-\mathrm{cov}[D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S),D^{\dagger}_{P^{0},j}( \bar{Z}_{j},S;\beta^{0}_{j})]}{\mathrm{var}[D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_ {j},S)-D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})]}.\]
We use the convention that \(v^{\star}_{j}=1\) if \(\mathrm{var}(D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S)-D^{\dagger}_{P^{0},j}( \bar{Z}_{j},S;\beta^{0}_{j}))=0\).
**Theorem 1** (A gradient with lower variance than any gradient based only on aligned sources).: _If the conditions of Lemma 3 hold, then \(D^{\star}_{P^{0}}(\,\cdot\,;\beta^{0})\) is a gradient of \(\phi\) relative to \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\). Moreover, the weights used to define this gradient satisfy \(v^{\star}=\operatorname*{argmin}_{v}\mathrm{var}[D^{v}_{P^{0}}(Z,S;\beta^{0})]\), and so_
\[\mathrm{var}[D^{\star}_{P^{0}}(Z,S;\beta^{0})]\leq\min\{\mathrm{var}[D^{ \mathcal{A}}_{P^{0}}(Z,S)],\mathrm{var}[D^{\dagger}_{P^{0}}(Z,S;\beta^{0})]\}.\]
By construction, \(D^{\star}_{P^{0}}\) is optimal in the sense that it never has higher variance than any gradient in \(\mathcal{D}\), including the gradients in (3) and (4). In the absence of weakly aligned sources, \(D^{\dagger}_{P^{0},j}=D^{\mathcal{A}}_{P^{0},j}\) for all \(j\) and \(D^{\star}_{P^{0}}\) reduces to (3). Consequently, our weighting scheme yields estimators that are asymptotically equivalent to established methods in these cases (Stuart et al., 2015; Rudolph and van der Laan, 2017; Dahabreh et al., 2019; Li and Luedtke, 2023). When weakly aligned sources are available, our proposed scheme incorporates them by assigning the weights that minimize the variance of the resulting gradient. We numerically compare the variances of \(D^{\mathcal{A}}_{P^{0}}\), \(D^{\dagger}_{P^{0}}\) and \(D^{\star}_{P^{0}}\) in the example in Appendix D and, as expected by theory, the gradient \(D^{\star}_{P^{0}}\) from Theorem 1 dominates \(D^{\mathcal{A}}_{P^{0}}\).
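In practice, \(v^{\star}_{j}\) can be estimated by replacing the variances and covariance above with empirical moments of estimated gradient components. A minimal R sketch follows, where `DA_j` and `Ddag_j` are placeholder \(n\)-vectors holding the estimated components \(D^{\mathcal{A}}_{P^{0},j}\) and \(D^{\dagger}_{P^{0},j}\) evaluated at the observations:

```r
# Empirical analogue of the variance-minimizing weight v_j^* in Theorem 1.
optimal_weight <- function(DA_j, Ddag_j) {
  d <- DA_j - Ddag_j
  if (var(d) == 0) return(1)  # convention when the two components coincide
  (var(Ddag_j) - cov(DA_j, Ddag_j)) / var(d)
}

# Combined j-th gradient component; its sample variance is no larger than
# that of either input component.
combine_components <- function(DA_j, Ddag_j) {
  v <- optimal_weight(DA_j, Ddag_j)
  v * DA_j + (1 - v) * Ddag_j
}
```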
Lastly, we note that similar theoretical guarantees can also be established for the case where data sources have fixed sample sizes \(n_{1},\ldots,n_{k}\) such that \(n=\sum_{i=1}^{k}n_{i}\) (Li and Luedtke, 2023). The results for this case are similar, but the expression \(P^{0}(S\in\mathcal{S}_{j})\) in the gradients is replaced by \(\{\sum_{i=1}^{k}n_{i}\mathbb{1}(i\in\mathcal{S}_{j})\}/\sum_{i=1}^{k}n_{i}\) and \(P^{0}_{j}(\cdot\mid\bar{z}_{j-1},s)\) is replaced by the conditional probability of \(Z_{j}\) given \(\bar{Z}_{j-1}\) for a random draw from data source \(s\). Further details can be found in Appendix I of Li and Luedtke (2023).
## 4 Simulation
We simulated \(Z=(Z_{1},Z_{2},Z_{3})\) from \(k=4\) data sources with fixed sample sizes of \(2000\) observations per data source. Conditional on \(S\), \(Z\) is distributed as follows: \(Z_{1}\mid S\sim\mathrm{Uniform}(1,2)\), \(Z_{2}\mid Z_{1},S\sim\mathrm{Bernoulli}(1/2)\), and \(Z_{3}\mid Z_{2},Z_{1},S\sim\mathrm{Beta}\big(\{2-0.5\mathbb{1}(S=2)\}Z_{1},\,\{2-0.5\mathbb{1}(S=3)\}Z_{1}-0.5\mathbb{1}(S=4)\big)\). Data source 1 perfectly aligns with the target distribution. All other sources are aligned in the distributions of \(Z_{2}\mid Z_{1}\) and \(Z_{1}\), and are weakly aligned in the distribution of \(Z_{3}\mid Z_{2},Z_{1}\) for appropriately parameterized weight functions such that Conditions 1 and 2 hold. We consider two different parameterizations. In the first, which we refer to as the "simple weight parameterization", each data source has a distinct form of \(w^{*}_{3,s}\) with distinct values of \(\beta^{0}_{3,s}\in\mathbb{R}\). Specifically, \(w_{3,2}(\bar{z}_{3};\beta^{0}_{3,2})=\exp(\beta^{0}_{3,2}z_{1}\log z_{3})\), \(w_{3,3}(\bar{z}_{3};\beta^{0}_{3,3})=\exp[\beta^{0}_{3,3}z_{1}\log(1-z_{3})]\), and \(w_{3,4}(\bar{z}_{3};\beta^{0}_{3,4})=\exp[\beta^{0}_{3,4}\log(1-z_{3})]\). In the second, which we refer to as the "complex weight parameterization", \(w_{3,s}\in\mathcal{G}\) for all \(s\in\{2,3,4\}\), where
\[\mathcal{G}:=\Big\{(\bar{z}_{3},s)\mapsto\exp\big(\beta^{1}_{3,s}z_{1}\log z_{3}+\beta^{2}_{3,s}z_{1}\log(1-z_{3})+\beta^{3}_{3,s}\log(1-z_{3})\big):\beta_{3,s}=(\beta^{1}_{3,s},\beta^{2}_{3,s},\beta^{3}_{3,s})\in\mathbb{R}^{3}\Big\}.\]
Even though only one of the three coefficients is non-zero for each data source, we estimate the whole vector \(\beta^{0}_{3,s}\) for each source \(s\). This mimics a scenario where there is limited knowledge of the possible form of the density ratio model, so a flexible one is used.
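For concreteness, the data-generating mechanism above can be reproduced with the following R sketch (the seed is arbitrary):

```r
set.seed(2023)
n_per <- 2000
S  <- rep(1:4, each = n_per)        # k = 4 data sources
Z1 <- runif(length(S), 1, 2)
Z2 <- rbinom(length(S), 1, 0.5)
# Source 1 matches the target Beta(2 * Z1, 2 * Z1); sources 2-4 each
# perturb one shape parameter, inducing weak alignment in Z3 | Z2, Z1.
shape1 <- (2 - 0.5 * (S == 2)) * Z1
shape2 <- (2 - 0.5 * (S == 3)) * Z1 - 0.5 * (S == 4)
Z3 <- rbeta(length(S), shape1, shape2)
dat <- data.frame(S, Z1, Z2, Z3)
```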
We aim to estimate two parameters: the mean outcome under missingness and the variance-weighted treatment effect. The mean outcome under missingness is defined as \(\psi_{1}(Q^{0})=E_{Q^{0}}[E_{Q^{0}}(Z_{3}\mid Z_{2}=1,Z_{1})]\) under positivity and missing at random assumptions (Rubin, 1976), and is mathematically equivalent to the G-computed mean outcome for the treated population (Robins, 1986). The variance-weighted treatment effect is defined as \(\psi_{2}(Q^{0})=E_{Q^{0}}[\text{cov}(Z_{2},Z_{3}\mid Z_{1})]/E_{Q^{0}}[\text{var}(Z_{2}\mid Z_{1})]\) under positivity, consistency, and no unmeasured confounding assumptions (Rubin, 1980; Rosenbaum and Rubin, 1983), and can be of special interest when one wants to estimate a treatment effect in the presence of a high-dimensional vector of confounding covariates \(Z_{1}\) (Li et al., 2011; Robins et al., 2008). We compare one-step estimators of \(\psi_{1}(Q^{0})\) and \(\psi_{2}(Q^{0})\) under the following scenarios, all with \(\mathcal{W}_{1}=\mathcal{W}_{2}=\emptyset\): (1) no data fusion with \(\mathcal{S}_{1}=\mathcal{S}_{2}=\mathcal{S}_{3}=\{1\}\) and \(\mathcal{W}_{3}=\emptyset\); (2) some data fusion with \(\mathcal{S}_{1}=\mathcal{S}_{2}=\mathcal{S}_{3}=\{1,2\}\) and \(\mathcal{W}_{3}=\{2\}\); (3) some data fusion with \(\mathcal{S}_{1}=\mathcal{S}_{2}=\mathcal{S}_{3}=\{1,3\}\) and \(\mathcal{W}_{3}=\{3\}\); (4) some data fusion with \(\mathcal{S}_{1}=\mathcal{S}_{2}=\mathcal{S}_{3}=\{1,4\}\) and \(\mathcal{W}_{3}=\{4\}\); and (5) complete data fusion with \(\mathcal{S}_{1}=\mathcal{S}_{2}=\mathcal{S}_{3}=\{1,2,3,4\}\) and \(\mathcal{W}_{3}=\{2,3,4\}\). Each fusion scenario was run under both the simple weight parameterization and the more complex one with \(w_{3,s}\in\mathcal{G}\). We also examined the complete aligned fusion scenario using the methods proposed in Li and Luedtke (2023), where \(\mathcal{S}_{1}=\mathcal{S}_{2}=\{1,2,3,4\}\) with \(\mathcal{W}_{1}=\mathcal{W}_{2}=\emptyset\) and \(\mathcal{S}_{3}=\{1\}\) with \(\mathcal{W}_{3}=\emptyset\). Due to the data-generating mechanism in our simulation setting, the resulting one-step estimators had nearly identical performance to those under (1); thus, only the estimator from (1) is reported, and any observed improvements on that estimator also represent improvements over the one proposed in Li and Luedtke (2023).
Initial estimates of \(\beta^{0}\) were obtained via maximum likelihood using Rowan's Subplex method (Rowan, 1990) in the nloptr R package (Johnson et al., 2014), a variant of Nelder-Mead that applies Nelder-Mead on a sequence of subspaces (Nelder and Mead, 1965), with a maximum of 500 evaluation steps and an absolute tolerance of \(10^{-8}\). One-step estimators of \(\beta^{0}\) were then constructed using these initial estimates and the canonical gradient of \(\beta^{0}\). The nuisance parameters, including the outcome regressions, the Radon-Nikodym derivatives \(\lambda_{j-1}\) for each \(j\in\mathcal{J}\), and the scores of \(\beta^{0}\), were estimated via kernel regression under default settings in the np R package (Hayfield and Racine, 2008), while propensity scores were estimated via main-terms linear-logistic regression. While it may seem intuitive to model the density ratios \(q^{0}_{j}(\cdot\mid\bar{z}_{j-1})/p^{0}_{j}(\cdot\mid\bar{z}_{j-1},s)\) nonparametrically, it is important to note that the initial estimate \(\widehat{P}\) of \(P^{0}\) must belong to \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\). We ensured this by estimating this
quantity by using the corresponding parametric form imposed by \(w_{j,s}\) and estimates of \(\beta^{0}\). For each simulation study presented in this work, 1000 Monte Carlo replications were conducted.
Table 1 displays results for estimating the mean missing outcome. Relative to a method that did not use fusion, fusing with one of the weakly aligned data sources yields around a 26%-30% reduction in variance, and fusing all weakly aligned data sources together yields a 40%-45% reduction. The one-step estimator constructed using the more complex weight function performed similarly to the one using the simple form. We observe similar trends in estimating the variance-weighted treatment effect (Table 2), where data fusion yields around a 50% reduction in variance for partial fusion and around a 70% reduction for complete fusion. Coverage was nearly nominal for all estimators.
## 5 Data illustration
We illustrate our approach using data from two harmonized phase IIb trials that evaluated the efficacy of the broadly neutralizing antibody (bnAb) VRC01 to prevent new HIV-1 diagnosis in different populations. One of these studies, HVTN 703, enrolled women in sub-Saharan Africa, while the other, HVTN 704, enrolled men and transgender people who have sex with men in North America, South America, and Switzerland.
Table 1: Bias, variance, and coverage of estimators of the mean missing outcome under no data fusion, some data fusion, and complete data fusion.

| Fusion sets | No fusion \(\{1\}\) | Some fusion \(\{1,2\}\) | Some fusion \(\{1,3\}\) | Some fusion \(\{1,4\}\) | Complete fusion \(\{1,2,3,4\}\) |
|---|---|---|---|---|---|
| **Simple weight parameterization** | | | | | |
| Bias\(^{2}\times 10^{4}\) | 0.002 | 0.001 | 0.004 | 0.001 | 0.001 |
| Variance \(\times 10^{4}\) | 0.38 | 0.27 | 0.28 | 0.27 | 0.21 |
| Coverage | 0.94 | 0.94 | 0.92 | 0.92 | 0.93 |
| **Complex weight parameterization** | | | | | |
| Bias\(^{2}\times 10^{4}\) | 0.002 | 0.002 | 0.001 | 0.001 | 0.001 |
| Variance \(\times 10^{4}\) | 0.38 | 0.28 | 0.28 | 0.28 | 0.23 |
| Coverage | 0.94 | 0.94 | 0.93 | 0.92 | 0.94 |
Table 2: Bias, variance, and coverage of estimators of the variance-weighted treatment effect under no data fusion, some data fusion, and complete data fusion.

| Fusion sets | No fusion \(\{1\}\) | Some fusion \(\{1,2\}\) | Some fusion \(\{1,3\}\) | Some fusion \(\{1,4\}\) | Complete fusion \(\{1,2,3,4\}\) |
|---|---|---|---|---|---|
| **Simple weight parameterization** | | | | | |
| Bias\(^{2}\times 10^{5}\) | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Variance \(\times 10^{5}\) | 0.45 | 0.24 | 0.25 | 0.23 | 0.13 |
| Coverage | 0.95 | 0.95 | 0.95 | 0.95 | 0.94 |
| **Complex weight parameterization** | | | | | |
| Bias\(^{2}\times 10^{5}\) | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Variance \(\times 10^{5}\) | 0.45 | 0.24 | 0.25 | 0.23 | 0.12 |
| Coverage | 0.95 | 0.95 | 0.95 | 0.95 | 0.94 |
Results from both trials supported that the bnAb did not prevent overall HIV-1 acquisition, but did prevent acquisition of HIV-1 strains that were sensitive to neutralization by VRC01 (Corey et al., 2021), providing proof-of-concept evidence supporting the development of bnAb cocktails that provide neutralization coverage of more HIV-1 strains. Using data from both trials, Gilbert et al. (2022) showed that a biomarker that quantifies the neutralization potency of antibodies in an individual's serum against a given HIV-1 isolate, the predicted 80% neutralizing antibody titer (PT80), is a promising potential surrogate endpoint for new HIV-1 diagnosis with application to future HIV-1 bnAb studies. This putative surrogate endpoint can be used for ranking candidate bnAb cocktails by their predicted HIV-1 prevention efficacy, aiding down-selection of the most promising cocktails for study in future prevention efficacy trials. In this article, for VRC01 recipients (pooling over the low-dose and high-dose arms) who acquired the new HIV-1 diagnosis primary endpoint, we studied the association of the PT80 biomarker with amino acid sequence features of the HIV-1 virus. The viral sequence was measured from a blood sample drawn at the time of HIV-1 diagnosis, and the PT80 was calculated as the concentration of VRC01 in this same blood sample divided by the measured level of resistance of the virus to neutralization by VRC01. Typically, about 200 HIV-1 sequences were measured in an individual's blood sample with the PacBio sequencing technology, and the sequence that had the greatest sequence-based predicted resistance (Williamson et al., 2021) to neutralization by VRC01 was used. Studying the associations of the PT80 marker with sequence features of HIV-1 can provide insights into "viral genetic signatures" that may be important for whether VRC01 can protect against a given viral strain.
The data consist of 98 VRC01 new HIV-1 diagnosis endpoint cases with PT80 measured (\(n_{703}=43\), \(n_{704}=55\)). There are 246 residue indicator variables for the 50 amino acid positions of the HIV-1 Envelope protein that comprise CD4 binding sites and the VRC01 binding footprint that are most relevant for potential protection by VRC01. Only the subset of these residue indicators with enough variation to reasonably study an association, quantified by requiring at least 10 participants with the observed residue at the position and at least 10 participants without it, is analyzed. This process yielded 22 amino acid features of interest for HVTN 703 and 30 for HVTN 704. The data also include a single genetic distance variable defined based on all 50 amino acid positions mentioned previously: the physicochemical-weighted Hamming distance between the observed sequence and the HIV-1 sequence in the CATNAP database (Yoon et al., 2015) with the least neutralization resistance to VRC01, calculated to match a given participant's geographic region (HVTN 703 South Africa, HVTN 703 outside South Africa, HVTN 704 U.S. or Switzerland, HVTN 704 South America) and the subtype of their acquired virus (mostly subtype C for HVTN 703 and subtype B for HVTN 704). Our objective is to study the coefficients corresponding to the selected residue indicators and the genetic distance in univariate linear working models with outcome \(\log_{10}\) PT80. The definitions of these parameters and models, and the forms of the gradients, can be found in Supplementary Appendix G.
Data fusion is especially appealing in this problem due to the limited HIV-1 diagnosis endpoints in each study (Corey et al., 2021). There is a need to use a fusion method that only requires weak alignment because participants in HVTN 703 and HVTN 704 had different sexes assigned at birth and HIV-1 acquisition routes, and were exposed
to different HIV-1 circulating subtypes. As a result, the common distributions condition considered in the previous data fusion literature likely fails (Li and Luedtke, 2023). The density plots of the \(\log_{10}\) PT80 marker also suggest that the conditional distribution indeed differs between the two studies (Figure 2). We assume the density ratio between the conditional densities of the biomarker in the two trials takes the following form:
\[\frac{p_{703}(Y\mid W,X)}{p_{704}(Y\mid W,X)}=\frac{\exp\{\beta_{0}Y+\beta_{1} XY+\beta_{2}Y^{2}\}}{E_{704}\left[\exp\{\beta_{0}Y+\beta_{1}XY+\beta_{2}Y^{2}\} \mid W,X\right]},\]
where \(Y=\log_{10}\) PT80, \(X\) denotes genetic distance, and \(W\) denotes the HIV-1 amino acid features. If \(Y\) were to follow a normal distribution in both trials conditional on \(W\) and \(X\), then the form of the above density ratio allows shifts in the mean of \(Y\) stratified by genetic distance \(X\), while also accommodating a different variance of \(Y\) across the two studies due to the inclusion of \(\beta_{2}\). We separately treated each of the two study populations as the target population and compared the estimation results obtained using a single dataset versus both datasets.
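To make this claim explicit (the following algebra is our own illustration), suppose \(Y\mid W,X\) is \(N(\mu(W,X),\sigma^{2})\) under the HVTN 704 distribution and \(1/\sigma^{2}-2\beta_{2}>0\). Multiplying this normal density by \(\exp\{\beta_{0}Y+\beta_{1}XY+\beta_{2}Y^{2}\}\) and renormalizing yields
\[Y\mid W,X\;\sim\;N\!\left(\frac{\mu(W,X)/\sigma^{2}+\beta_{0}+\beta_{1}X}{1/\sigma^{2}-2\beta_{2}},\;\frac{1}{1/\sigma^{2}-2\beta_{2}}\right)\]
under the HVTN 703 distribution, so \(\beta_{0}\) and \(\beta_{1}\) shift the mean in a way that may depend on genetic distance \(X\), while \(\beta_{2}\) rescales the variance.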
All covariates of interest were standardized to have mean 0 and standard deviation 1. We estimated density ratios via kernel regression using the Nadaraya-Watson estimator, adaptive nearest-neighbor bandwidths, and expected Kullback-Leibler cross-validation for bandwidth selection in the np R package (Hayfield and Racine, 2008). Nuisance parameters such as conditional expectations were estimated using SuperLearner (Van der Laan et al., 2007; Polley and Van Der Laan, 2010) with a library consisting of a generalized linear model, LASSO regression, a generalized additive model, random forests, and xgboost.
Results for a selected list of the residue indicators based on alphabetical order and genetic distance are presented in Table 3. Additional results for the remaining residue indicators can be found in Table 4 in the supplement.
Figure 2: Density plots of the \(\log_{10}\) PT80 marker using a kernel density estimator with a bandwidth selected by cross-validation, separately for HVTN 703 and HVTN 704. The left panel compares the density of \(\log_{10}\) PT80 in all groups. The right panels compare density plots of \(\log_{10}\) PT80 stratified by genetic distance in tertiles. From left to right, we observe a monotone shift in the relative positions of the means and also in the spread of the curves.
Augmenting HVTN 703 leads to a 4% to 41% reduction in confidence interval width, corresponding to a 7% to 66% reduction in the sample size required to achieve the same level of precision. Similarly, augmenting HVTN 704 yields a 3% to 52% reduction in confidence interval width, resulting in a 6% to 77% reduction in sample size. Notably, the residue indicator glycine at position 471 (G471) demonstrates significant associations with relatively small standard errors. The 95% confidence interval for the G471 coefficient is [0.25, 1.19] when augmenting HVTN 703. This feature has also been identified as having high variable importance in predicting neutralization sensitivity (Hake and Pfeifer, 2017; Magaret et al., 2019). By reducing variability in the estimators of coefficients in working models, these findings provide a clearer understanding of the relative predictive importance of HIV-1 Envelope amino acid sequence features for PT80. These results are particularly valuable for feature selection and the development of models aimed at predicting neutralization readouts, as amino-acid sequence sieve analysis often faces the challenge of analyzing a large number of sequence features (Magaret et al., 2019).
## 6 Discussion
We begin by discussing a key question: what factors influence the efficiency gains within our introduced data fusion framework? We identify two principal factors. First, the parsimony of the density ratios plays a role in determining the extent of the efficiency gain we can derive from fusion. As a general rule, simpler functions offer greater potential for efficiency gains. In contrast, if the density ratios are complex so that \(\beta^{0}\) is high-dimensional, then weakly aligned data sources will contribute little information due to the difficulty of estimating \(\beta^{0}\). Second, the magnitudes and ranges of the density ratios \(w^{*}_{j,s}\) play a significant role in the extent of possible efficiency gains. These density ratios measure the dissimilarity between distributions of interest across data sources. If \(w^{*}_{j,s}\) can take large values for some \(j\), there is a substantial deviation of the distribution in data source \(s\) from that in the target population. Combining such dissimilar data sources can lead to unstable estimates of density ratios, limiting the potential efficiency gains that can be achieved through fusion. Data fusion tends to yield the greatest gains when data sources that exhibit a higher degree of similarity with the target are available.
Table 3: Estimated coefficients using the HVTN 703 and HVTN 704 data. Results are presented as estimates (standard errors).

| Residue | Site | 703 only (N=43) | Both, augmenting 703 (N=98) | 704 only (N=55) | Both, augmenting 704 (N=98) |
|---|---|---|---|---|---|
| A | 281 | 0.66 (0.31) | 0.76 (0.27) | 0.61 (0.31) | 0.53 (0.23) |
| D | 279 | -0.22 (0.34) | -0.26 (0.27) | -0.08 (0.31) | -0.17 (0.23) |
| D | 474 | -0.23 (0.32) | 0.19 (0.24) | 0.56 (0.30) | 0.55 (0.29) |
| E | 429 | -0.07 (0.32) | -0.09 (0.27) | 0.09 (0.32) | 0.16 (0.28) |
| G | 429 | 0.20 (0.37) | 0.36 (0.22) | | |
| G | 471 | 0.80 (0.28) | 0.72 (0.24) | 0.39 (0.42) | 0.57 (0.33) |
| Gap | 463 | -0.21 (0.31) | -0.08 (0.26) | -0.06 (0.32) | -0.23 (0.23) |
| Genetic distance | | -0.32 (0.14) | -0.32 (0.13) | -0.43 (0.16) | -0.48 (0.13) |
These similarities can arise from various factors such as study design, data collection methods, or the populations being sampled. Therefore, identifying such data sources should be a key preliminary step before conducting data fusion.
Our method relies on a key assumption: that the density ratio model, as defined in Condition 1c, is correctly specified. Naturally, there is always a risk of misspecification. Nevertheless, certain techniques can be employed to mitigate this risk. One approach is to use domain knowledge to specify the form of the density ratio. Another is to use historical data to infer its form. For instance, in the context of Example 2.1, historical data can be leveraged to make informed assumptions about whether we expect an overall constant shift in the log odds or a shift that differs across a specific stratification variable. Moreover, Gilbert (2004) introduced three goodness-of-fit tests and provided guidance on their use. Sensitivity analyses can also be a useful approach to assess the robustness of a density ratio model (Gilbert et al., 2003; Jemiai et al., 2007).
Our focus has been on nonparametric \(\mathcal{Q}\), where we aim to leverage data from both aligned and weakly aligned sources. We attained lower asymptotic variance than estimators from Li and Luedtke (2023), which are efficient among all estimators that rely solely on aligned data. In future work, it would be interesting to explore whether further efficiency gains can be achieved. We foresee two approaches for doing this. First, while we have shown that properly leveraging weakly aligned data sources always results in efficiency gains over methods that do not, we have not been able to derive the efficiency lower bound in our data fusion model \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\). Therefore, it is possible that there exists a more efficient estimator than the ones presented in this work when the model is \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\). Second, it would be interesting to explore what further gains are available when the model \(\mathcal{Q}\) is semiparametric, rather than nonparametric.
## Acknowledgements
We thank the participants and investigators of HVTN 703 and HVTN 704. Research reported in this publication was supported by the NIH under award numbers DP2-LM013340 and R37AI054165, and by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under U.S. Public Health Service Grant AI068635. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
## Appendix A Proof of Lemma 1
We prove Lemma 1 by first characterizing the tangent space. Let \(\mathcal{T}(P^{0},\mathcal{P})\) denote the tangent set of a model \(\mathcal{P}\) at \(P^{0}\). We assume that \(\mathcal{T}(Q^{0},\mathcal{Q})\) is a closed linear subspace of \(L^{2}_{0}(Q^{0})\), and it can be verified that \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\), the tangent set of a model that is nonparametric up to the restrictions imposed by the data fusion alignment condition and a known value of \(\beta^{0}\), is itself a closed linear subspace of \(L^{2}_{0}(P^{0})\). The same is true for \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\mathcal{B}})\), the tangent set of the corresponding model in which \(\beta^{0}\) is unknown. Therefore, we also refer to these tangent sets as the tangent spaces of \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\) and \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) at \(P^{0}\), respectively. Let \(L^{2}_{0}(Q^{0}_{j})\) denote the subspace of \(L^{2}_{0}(Q^{0})\) consisting of all functions \(f:\prod_{i=1}^{j}\mathcal{Z}_{i}\mapsto\mathbb{R}\) such that \(E_{Q^{0}}[f(\bar{Z}_{j})\mid\bar{Z}_{j-1}]=0\) with \(Q^{0}\)-probability one. Similarly, let \(L^{2}_{0}(P^{0}_{j})\) denote the subspace of \(L^{2}_{0}(P^{0})\) consisting of all functions \(g:(\prod_{i=1}^{j}\mathcal{Z}_{i})\times[k]\mapsto\mathbb{R}\) such that \(E_{P^{0}}[g(\bar{Z}_{j},S)\mid\bar{Z}_{j-1},S]=0\) with \(P^{0}\)-probability one. For each \(j\in[d]\), let \(\mathcal{T}(Q^{0},\mathcal{Q}_{j})\) be the subspace of \(L^{2}_{0}(Q^{0}_{j})\) that consists of all \(f_{j}\) that arise as scores of univariate submodels \(\{Q^{(\epsilon)}:\epsilon\in[0,\delta)\}\) for which \(Q^{(\epsilon)}=Q^{0}\) when \(\epsilon=0\). Similarly, we define \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\) to be the subspace of \(L^{2}_{0}(P^{0}_{j})\) that consists of all \(h_{j}\) that arise as scores of univariate submodels \(\{P^{(\epsilon)}:\epsilon\in[0,\delta)\}\) for which \(P^{(\epsilon)}=P^{0}\) when \(\epsilon=0\).
Throughout this work, we assume \(\mathcal{Q}\) is a collection of nonparametric distributions \(Q\), i.e., \(\mathcal{T}(Q^{0},\mathcal{Q})=L^{2}_{0}(Q^{0})\). When \(\mathcal{Q}\) is nonparametric, there exist sets \(\mathcal{Q}_{j}\) of conditional distributions of \(Z_{j}\mid\bar{Z}_{j-1}\), \(j\in[d]\), such that \(\mathcal{Q}\) is equal to the set of all distributions \(Q\) such that, for all \(j\in[d]\), the conditional distribution \(Q_{j}\) belongs to \(\mathcal{Q}_{j}\). In other words, \(\mathcal{Q}\) is variation independent: distributions in \(\mathcal{Q}\) can be defined separately via their conditional distributions, so that it is possible to modify a conditional distribution \(Q^{0}_{j}\) without affecting the others, namely \(Q^{0}_{j^{\prime}}\) with \(j^{\prime}\neq j\). This condition enables the derivation of a gradient by summing up individual projections of a gradient onto different subspaces of the \(L^{2}_{0}\) space defined by perturbing the conditional distributions \(Q^{0}_{j}\) or \(P^{0}_{j}\) separately over \(j\in[d]\). By Lemma 1.6 of Van der Laan and Robins (2003), and the fact that the tangent set of \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\) at \(P^{0}\) is a closed linear space, the tangent space \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\) of \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\) at \(P^{0}\) takes the form \(\bigoplus_{j=0}^{d}\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}}):=\{\sum_{j=0}^{d}h_{j}:h_{j}\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\}\), and the \(L^{2}_{0}(P^{0})\) projection of a function onto \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\) is equal to the sum of its projections onto \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\), \(j=0,1,\ldots,d\). Since the marginal distribution of \(S\) is unrestricted and therefore independent of \(\beta^{0}\), \(\mathcal{T}(P^{0}_{0},\mathcal{P}_{0})=L^{2}_{0}(P^{0}_{0})\) and \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{0},\beta^{0}_{0}})=\mathcal{T}(P^{0},\mathcal{P}_{0})\). Moreover, for all \(j\in\mathcal{I}\), the conditional distribution of \(Z_{j}\mid\bar{Z}_{j-1},S\) is also unrestricted, and so \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})=L^{2}_{0}(P^{0}_{j})\). The following result characterizes the other tangent spaces that appear in the direct sum defining \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\).
**Lemma S1** (_Tangent Space of \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\)_). _If Conditions 1 and 2 from the main text hold and \(j\in\mathcal{J}\), then_
\[\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})= \{(z,s)\mapsto g_{j}(\bar{z}_{j},s)+\mathbb{1}_{\mathcal{S}_{j}}(s) \mathbb{1}_{\bar{\mathcal{Z}}^{\dagger}_{j-1}}(\bar{z}_{j-1})\left[\left(f_{j}(\bar{z}_{j} )-E_{P^{0}}[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s]\right)-g_{j}(\bar{z}_{j},s)\right]\] \[:f_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j}),g_{j}\in L^{2}_{0}(P^{ 0}_{j})\}. \tag{6}\]
When \(\mathcal{S}_{j}=\mathcal{A}_{j}\), this result collapses to Lemma S1 in Li and Luedtke (2023), since \(E_{P^{0}}[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s]=0\) for all \(s\in\mathcal{A}_{j}\). We provide a heuristic proof here and direct the reader's attention to the aforementioned paper for more details.
Proof of Lemma S1. Throughout this appendix, we assume enough regularity so that the scores considered correspond to derivatives of log-likelihoods. The following proof could be modified to apply even when this fails by adopting similar arguments to those used in the proof of Lemma S1 from Li and Luedtke (2023). Fix \(j\in\mathcal{J}\) and let \(\mathcal{R}_{j}\) denote the right-hand side of (6). We first show that \(\mathcal{R}_{j}\subseteq\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\), and then we show \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\subseteq\mathcal{R}_{j}\).
**Part 1 of proof:**\(\mathcal{R}_{j}\subseteq\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\). Fix \(f_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\) and \(g_{j}\in L_{0}^{2}(P_{j}^{0})\). Since \(f_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\), there exists a univariate submodel \(\{Q^{(\epsilon)}:\epsilon\in[0,\delta)\}\) with score \(f_{j}\) at \(\epsilon=0\) and with \(Q^{(0)}=Q^{0}\). For each \(\epsilon\in[0,\delta)\), \(s\in\mathcal{S}_{j}\), and \(j\in\mathcal{J}\), we let \(P_{j}^{(\epsilon)}\in\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}}\) be such that \(dP_{j}^{(\epsilon)}(\cdot\mid\bar{z}_{j-1},s)=\{w_{j,s}(\bar{z}_{j};\beta_{j,s}^{0})/E_{Q^{(\epsilon)}}[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})\mid\bar{z}_{j-1}]\}\,dQ_{j}^{(\epsilon)}(\cdot\mid\bar{z}_{j-1})\) for \(P^{0}\)-almost all \(\bar{z}_{j-1}\in\bar{\mathcal{Z}}_{j-1}^{\dagger}\). By the variation independence of \(P^{0}\), we can suppose without loss of generality that \(P_{i}^{(\epsilon)}=P_{i}^{0}\) for all \(i\neq j\) and that the marginal distribution of \(S\) under \(P^{(\epsilon)}\) is equal to that under \(P^{0}\). It can be shown that \(P^{(\epsilon)}\) belongs to \(\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}}\). In addition, for \(s\in\mathcal{S}_{j}\) and all \(\bar{z}_{j-1}\in\bar{\mathcal{Z}}_{j-1}^{\dagger}\), \(\{P^{(\epsilon)}:\epsilon\in[0,\delta)\}\) is quadratic mean differentiable at \(\epsilon=0\) with the following score:
\[h_{j}(\bar{z}_{j},s) =\left.\frac{d}{d\epsilon}\log p_{j}^{(\epsilon)}(\bar{z}_{j},s;\beta_{j,s}^{0})\right|_{\epsilon=0}\] \[=\left.\frac{d}{d\epsilon}\log\frac{w_{j,s}(\bar{z}_{j};\beta_{j,s}^{0})\,q_{j}^{(\epsilon)}(\bar{z}_{j})}{E_{Q^{(\epsilon)}}[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})\mid\bar{z}_{j-1}]}\right|_{\epsilon=0}\] \[=\left.\frac{d}{d\epsilon}\log q_{j}^{(\epsilon)}(\bar{z}_{j})\right|_{\epsilon=0}-\frac{1}{E_{Q^{0}}[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})\mid\bar{z}_{j-1}]}\left.\frac{d}{d\epsilon}E_{Q^{(\epsilon)}}\left[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})\mid\bar{z}_{j-1}\right]\right|_{\epsilon=0}\] \[=f_{j}(\bar{z}_{j})-\frac{E_{Q^{0}}[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1}]}{E_{Q^{0}}[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})\mid\bar{z}_{j-1}]}\] \[=f_{j}(\bar{z}_{j})-E_{P^{0}}[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s].\]
As \(f_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\) and \(g_{j}\in L_{0}^{2}(P_{j}^{0})\) were arbitrary, \(\mathcal{R}_{j}\subseteq\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta _{j}^{0}})\).
**Part 2 of proof:**\(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\subseteq\mathcal{R}_{j}\). Fix \(h_{j}\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\) and let \(P^{(\epsilon)}\) be a submodel such that \(P^{(\epsilon)}=P^{0}\) at \(\epsilon=0\) with score \(h_{j}(\bar{z}_{j},s)\). We will show that there exist \(f_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\) and \(g_{j}\in L_{0}^{2}(P_{j}^{0})\) such that \(h_{j}\) takes the form \(h_{j}(z,s)=g_{j}(\bar{z}_{j},s)+\mathbb{1}_{\mathcal{S}_{j}}(s)\mathbb{1}_{\bar{\mathcal{Z}}_{j-1}^{\dagger}}(\bar{z}_{j-1})[\left(f_{j}(\bar{z}_{j})-E_{P^{0}}[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s]\right)-g_{j}(\bar{z}_{j},s)]\) \(P^{0}\)-almost everywhere. Since \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\) is a subset of the maximal tangent space \(L_{0}^{2}(P_{j}^{0})\), we can let \(g_{j}=h_{j}\). Then it remains to show that there exists an \(f_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\) such that \(h_{j}(z,s)=f_{j}(\bar{z}_{j})-E_{P^{0}}[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s]\) for \(P^{0}\)-almost all \(\bar{z}_{j-1}\in\bar{\mathcal{Z}}_{j-1}^{\dagger}\) and all \(s\in\mathcal{S}_{j}\).
For each \(\epsilon\in[0,\delta)\), \(s\in\mathcal{S}_{j}\), and \(j\in\mathcal{J}\), we let \(Q_{j}^{(\epsilon)}\in\mathcal{Q}\) be such that \(dQ_{j}^{(\epsilon)}(\cdot\mid\bar{z}_{j-1})\,w_{j,s}(\bar{z}_{j};\beta_{j}^{0})/E_{Q^{(\epsilon)}}[w_{j,s}(\bar{Z}_{j};\beta_{j}^{0})\mid\bar{z}_{j-1}]=dP_{j}^{(\epsilon)}(\cdot\mid\bar{z}_{j-1},s)\) for \(Q^{0}\)-almost all \(\bar{z}_{j-1}\in\bar{\mathcal{Z}}_{j-1}^{\dagger}\), and \(Q_{i}^{(\epsilon)}=Q_{i}^{0}\) for all \(i\neq j\). Then \(\{Q^{(\epsilon)}:\epsilon\in[0,\delta)\}\) is a submodel of \(\mathcal{Q}\) that is quadratic mean differentiable at \(\epsilon=0\) with the following score \(f_{j}\):
\[f_{j}(\bar{z}_{j}) =\left.\frac{d}{d\epsilon}\log q_{j}^{(\epsilon)}(\bar{z}_{j})\right|_{\epsilon=0}\] \[=\left.\frac{d}{d\epsilon}\log\frac{p_{j}^{(\epsilon)}(\bar{z}_{j},s;\beta_{j,s}^{0})\,E_{Q^{(\epsilon)}}[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})\mid\bar{z}_{j-1}]}{w_{j,s}(\bar{z}_{j};\beta_{j,s}^{0})}\right|_{\epsilon=0}\] \[=h_{j}(\bar{z}_{j},s)+\frac{E_{Q^{0}}[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1}]}{E_{Q^{0}}[w_{j,s}(\bar{Z}_{j};\beta_{j,s}^{0})\mid\bar{z}_{j-1}]}\] \[=h_{j}(\bar{z}_{j},s)+E_{P^{0}}\left[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s\right].\]
From the above, we have \(h_{j}(\bar{z}_{j},s)=f_{j}(\bar{z}_{j})-E_{P^{0}}[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s]\) for \(P^{0}\)-almost all \((\bar{z}_{j},s)\) such that \(\bar{z}_{j-1}\in\bar{\mathcal{Z}}_{j-1}^{\dagger}\) and \(s\in\mathcal{S}_{j}\), and we see that \(h_{j}(z,s)=g_{j}(\bar{z}_{j},s)+\mathbb{1}_{\mathcal{S}_{j}}(s)\mathbb{1}_{\bar{\mathcal{Z}}_{j-1}^{\dagger}}(\bar{z}_{j-1})[(f_{j}(\bar{z}_{j})-E_{P^{0}}[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s])-g_{j}(\bar{z}_{j},s)]\) \(P^{0}\)-almost everywhere. Hence, \(h_{j}\in\mathcal{R}_{j}\). As \(h_{j}\) was an arbitrary element of \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\), we have proved that \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\subseteq\mathcal{R}_{j}\).
The proof of Lemma 1 is very similar to the proof of Theorem 2 in Supplementary Section A.3 of Li and Luedtke (2023). We suppose that \(\psi\) is pathwise differentiable at \(Q^{0}\) relative to \(\mathcal{Q}\) and fix a gradient \(D_{Q^{0}}\) of \(\psi\). We will show that, for any submodel \(\{P^{(\epsilon)}:\epsilon\in[0,\delta)\}\) with score \(h\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\) and with \(P^{(\epsilon)}=P^{0}\) when \(\epsilon=0\), it holds that \(\frac{d}{d\epsilon}\phi^{\dagger}(P^{(\epsilon)})\mid_{\epsilon=0}=E_{P^{0}}\{D_{P^{0}}(Z,S)h(Z,S)\}\), where \(D_{P^{0}}\) takes the form given in Lemma 1. This will show that \(D_{P^{0}}\) is a gradient of \(\phi^{\dagger}\), which will complete the proof of Lemma 1. We first provide a useful lemma that will be used later in the proof.
**Lemma S2** (_Exchanging between \(E_{Q^{0}}\) and \(E_{P^{0}}\)_). _For an arbitrary function \(f:\bar{\mathcal{Z}}_{j}\to\mathbb{R}\) with \(E_{Q^{0}}[f^{2}(\bar{Z}_{j})]<\infty\), \(E_{Q^{0}}[f(\bar{Z}_{j})]=E_{P^{0}}[\lambda_{j}^{\dagger}(\bar{Z}_{j},S;\beta_{j}^{0})f(\bar{Z}_{j})\mid S\in\mathcal{S}_{j}]\), where \(\lambda_{j}^{\dagger}(\bar{z}_{j},s;\beta_{j}^{0}):=\{q^{0}(z_{j}\mid\bar{z}_{j-1})/p^{0}(z_{j}\mid\bar{z}_{j-1},s)\}\,\lambda_{j-1}(\bar{z}_{j-1})\) and \(\lambda_{j-1}\) denotes the Radon-Nikodym derivative of the marginal distribution of \(\bar{Z}_{j-1}\) under sampling from \(Q^{0}\) relative to the conditional distribution of \(\bar{Z}_{j-1}\mid S\in\mathcal{S}_{j}\) under sampling from \(P^{0}\)._
Proof of Lemma S2. Using the tower property and the definition of \(\lambda_{j}^{\dagger}\), we have
\[E_{Q^{0}}[f(\bar{Z}_{j})] =E_{Q^{0}}[E_{Q^{0}}[f(\bar{Z}_{j})\mid\bar{Z}_{j-1}]]\] \[=E_{Q^{0}}\left[E_{P^{0}}\left[\frac{q^{0}(Z_{j}\mid\bar{Z}_{j-1})}{p^{0}(Z_{j}\mid\bar{Z}_{j-1},S\in\mathcal{S}_{j})}f(\bar{Z}_{j})\mid\bar{Z}_{j-1},S\in\mathcal{S}_{j}\right]\right]\] \[=E_{P^{0}}\Bigg{[}\lambda_{j-1}(\bar{Z}_{j-1})E_{P^{0}}\bigg{[}\frac{q^{0}(Z_{j}\mid\bar{Z}_{j-1})}{p^{0}(Z_{j}\mid\bar{Z}_{j-1},S\in\mathcal{S}_{j})}f(\bar{Z}_{j})\mid\bar{Z}_{j-1},S\in\mathcal{S}_{j}\bigg{]}\mid S\in\mathcal{S}_{j}\Bigg{]}\] \[=E_{P^{0}}\Bigg{[}\lambda_{j-1}(\bar{Z}_{j-1})E_{P^{0}}\bigg{[}\frac{q^{0}(Z_{j}\mid\bar{Z}_{j-1})}{p^{0}(Z_{j}\mid\bar{Z}_{j-1},S)}f(\bar{Z}_{j})\mid\bar{Z}_{j-1},S\bigg{]}\mid S\in\mathcal{S}_{j}\Bigg{]}\] \[=E_{P^{0}}\left[\lambda_{j}^{\dagger}(\bar{Z}_{j},S;\beta_{j}^{0})f(\bar{Z}_{j})\mid S\in\mathcal{S}_{j}\right]\] \[=E_{P^{0}}\left[\frac{\mathbb{1}(S\in\mathcal{S}_{j})}{P^{0}(S\in\mathcal{S}_{j})}\lambda_{j}^{\dagger}(\bar{Z}_{j},S;\beta_{j}^{0})f(\bar{Z}_{j})\right].\]
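To make this identity concrete, the following minimal Monte Carlo sketch verifies it numerically in a toy one-step setting (so \(\lambda_{0}\equiv 1\)); the Gaussian choices for \(Q^{0}\) and \(P^{0}\) below are purely illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (hypothetical): one step (d = 1), two sources S in {0, 1}, both in
# S_1, with Z | S = s ~ N(mu_s, 1) under P^0 and Z ~ N(0, 1) under Q^0.
mu = np.array([0.5, -0.3])
n = 10**6
s = rng.choice([0, 1], size=n, p=[0.6, 0.4])
z = rng.normal(loc=mu[s], scale=1.0)

def normal_pdf(x, loc):
    return np.exp(-0.5 * (x - loc) ** 2) / np.sqrt(2.0 * np.pi)

# lambda_1^dagger = q^0(z) / p^0(z | s); lambda_0 = 1 since there is no history,
# and P^0(S in S_1) = 1 because both sources belong to S_1.
lam = normal_pdf(z, 0.0) / normal_pdf(z, mu[s])

f = z**2                 # test function f(z) = z^2, with E_Q[f(Z)] = 1
print(np.mean(lam * f))  # close to 1 up to Monte Carlo error
```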
Proof of Lemma 1. Fix a function \(h\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\) and a submodel \(\{P^{(\epsilon)}:\epsilon\in[0,\delta)\}\). Since \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})=\bigoplus_{j=0}^{d}\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\), there exist \(h_{j}\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\), \(j\in\{0\}\cup[d]\), such that \(h(z,s)=\sum_{j=0}^{d}h_{j}(\bar{z}_{j},s)\). Based on the previous results, for each \(j\in\mathcal{J}\), there exists an \(f_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\) such that \(f_{j}(\bar{z}_{j})=h_{j}(\bar{z}_{j},s)+E_{P^{0}}[f_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},s]\) for \((s,\bar{z}_{j-1})\in\mathcal{S}_{j}\times\bar{\mathcal{Z}}_{j-1}^{\dagger}\). For each \(\epsilon\in[0,\delta)\), let \(Q^{(\epsilon)}\in\mathcal{Q}\) be such that \(Q^{(\epsilon)}_{j}(\cdot\mid\bar{z}_{j-1})=P^{(\epsilon)}_{j}(\cdot\mid\bar{z}_{j-1},S\in\mathcal{A}_{j})\) for \(Q^{0}\)-almost all \(\bar{z}_{j-1}\in\bar{\mathcal{Z}}_{j-1}^{\dagger}\). By analogous arguments to those given in the proof of Lemma S1, \(\{Q^{(\epsilon)}:\epsilon\in[0,\delta)\}\) has score \(\sum_{j\in\mathcal{J}}f_{j}\) at \(\epsilon=0\). As \(\psi\) is pathwise differentiable at \(Q^{0}\) relative to \(\mathcal{Q}\),
\[\frac{d}{d\epsilon}\phi^{\dagger}(P^{(\epsilon)})\bigg{|}_{ \epsilon=0}=\left.\frac{d}{d\epsilon}\psi\circ\theta^{\dagger}(P^{(\epsilon)} )\right|_{\epsilon=0}\] \[=\left.\frac{d}{d\epsilon}\psi(Q^{(\epsilon)})\right|_{\epsilon=0}\] \[=E_{Q^{0}}\left[D_{Q^{0}}(Z)\sum_{j\in\mathcal{J}}f_{j}(\bar{Z}_ {j})\right]\] (by the pathwise differentiability of \[\psi\] ) \[=E_{Q^{0}}\left[\sum_{j\in\mathcal{J}}D_{Q^{0},j}(\bar{Z}_{j})f_{ j}(\bar{Z}_{j})\right] \tag{7}\] \[=E_{P^{0}}\left[\sum_{j\in\mathcal{J}}\frac{1(S\in\mathcal{S}_{ j})}{P^{0}(S\in\mathcal{S}_{j})}\lambda_{j}^{\dagger}(\bar{Z}_{j},S;\beta^{0}_{j})D_{Q ^{0},j}(\bar{Z}_{j})f_{j}(\bar{Z}_{j})\right] \tag{8}\]
Letting \(b_{j}(\bar{z}_{j},s):=\lambda_{j}^{\dagger}(\bar{z}_{j},s;\beta^{0}_{j})D_{Q^{0},j}(\bar{z}_{j})\), the previous display becomes
\[=E_{P^{0}}\left[\sum_{j\in\mathcal{J}}\frac{1(S\in\mathcal{S}_{ j})}{P^{0}(S\in\mathcal{S}_{j})}b_{j}(\bar{Z}_{j},S)f_{j}(\bar{Z}_{j})\right]\] \[=E_{P^{0}}\bigg{[}\sum_{j\in\mathcal{J}}\frac{1(S\in\mathcal{S}_{ j})}{P^{0}(S\in\mathcal{S}_{j})}b_{j}(\bar{Z}_{j},S)\left\{h_{j}(\bar{Z}_{j},S)+E_{P^ {0}}\left[f_{j}(\bar{Z}_{j})\mid\bar{Z}_{j-1},S\right]\right\}\bigg{]} \tag{9}\] \[=E_{P^{0}}\left[\sum_{j\in\mathcal{J}}\frac{1(S\in\mathcal{S}_{ j})}{P^{0}(S\in\mathcal{S}_{j})}b_{j}(\bar{Z}_{j},S)h_{j}(\bar{Z}_{j},S)\right] \tag{10}\]
where (9) holds by Lemma S1, which shows that, for a submodel \(Q^{(\epsilon)}\) with score \(f_{j}(\bar{z}_{j})\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\), there exists a submodel \(P^{(\epsilon)}\) whose score \(h_{j}(\bar{z}_{j},s)\) takes the above form, and (10) holds since
\[E_{P^{0}}\left[\sum_{j\in\mathcal{J}}\frac{1(S\in\mathcal{S}_{j} )}{P^{0}(S\in\mathcal{S}_{j})}b_{j}(\bar{Z}_{j},S)E_{P^{0}}\left[f_{j}(\bar{Z}_ {j})\mid\bar{Z}_{j-1},S\right]\right]\] \[=E_{P^{0}}\bigg{[}E_{P^{0}}\left[\sum_{j\in\mathcal{J}}\frac{1(S \in\mathcal{S}_{j})}{P^{0}(S\in\mathcal{S}_{j})}b_{j}(\bar{Z}_{j},S)\mid\bar{Z}_ {j-1},S\right]E_{P^{0}}\left[f_{j}(\bar{Z}_{j})\mid\bar{Z}_{j-1},S\right]\bigg{]}\] \[=E_{P^{0}}\bigg{[}\sum_{j\in\mathcal{J}}\frac{1(S\in\mathcal{S}_{ j})}{P^{0}(S\in\mathcal{S}_{j})}\lambda_{j-1}(\bar{Z}_{j-1})E_{P^{0}}\left[f_{j}(\bar{Z}_ {j})\mid\bar{Z}_{j-1},S\right]E_{P^{0}}\left[\frac{q^{0}(Z_{j}\mid\bar{Z}_{j-1 })}{p^{0}(Z_{j}\mid\bar{Z}_{j-1},S)}D_{Q^{0},j}(\bar{Z}_{j})\mid\bar{Z}_{j-1},S \right]\bigg{]}\] \[=0.\]
It is straightforward to show that \(E_{P^{0}}[\sum_{j\in\mathcal{J}}\frac{\mathbb{1}(S\in\mathcal{S}_{j})}{P^{0}(S\in\mathcal{S}_{j})}b_{j}(\bar{Z}_{j},S)]=0\). As \(h\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\) was arbitrary, \(\phi^{\dagger}\) is pathwise differentiable at \(P^{0}\) relative to \(\mathcal{P}_{\mathcal{Q},\beta^{0}}\) with the gradient \(D_{P^{0}}\) given in Lemma 1.
## Appendix B Proof of Lemma 2
Proof of Lemma 2. By the variation independence of \(\beta^{0}\) and \(Q^{0}\), Lemma 25.25 of van der Vaart (2000) implies that the canonical gradient of \(\beta^{0}\) relative to \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) is \(D_{\beta}^{*}=I_{\beta^{0}}^{-1}\dot{\ell}_{\beta}^{*}\), where \(I_{\beta^{0}}:=E_{P^{0}}[\dot{\ell}_{\beta}^{*}(Z,S;\beta^{0})\dot{\ell}_{\beta}^{*\top}(Z,S;\beta^{0})]\) and \(\dot{\ell}_{\beta}^{*}\) is the efficient score function of \(\beta^{0}\). To find the efficient score \(\dot{\ell}_{\beta}^{*}\), we first compute the score of \(\beta^{0}\) when \(Q^{0}\) is known and then project it onto the orthogonal complement of the nuisance tangent space \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\) in a model where \(Q^{0}\) is not known. To begin with, the log-likelihood of \((\beta^{0},Q^{0})\) at a given data point \((z,s)\) is
\[\ell(\beta^{0},Q^{0}\mid z,s)=\sum_{j=1}^{d}\Big{[}\mathbb{1}(s\in\mathcal{S}_ {j})\left\{\log w_{j,s}^{*}(\bar{z}_{j};\beta_{j,s}^{0})+\log q_{j}^{0}(\bar{z }_{j})\right\}+\mathbb{1}(s\notin\mathcal{S}_{j})\log p_{j}^{0}(\bar{z}_{j},s) +\log P^{0}(S=s)\Big{]}.\]
For fixed \(s\in\mathcal{W}_{j}\) and \(j\in\mathcal{J}\), the score for \(\beta_{j,s}^{0}\), namely \(\dot{\ell}_{\beta_{j,s}}(\bar{z}_{j},s^{\prime};\beta_{j,s}^{0}):=\nabla_{ \beta_{j,s}^{0}}\ell(\beta^{0},Q^{0}\mid z,s^{\prime})\), takes the form
\[\dot{\ell}_{\beta_{j,s}}(\bar{z}_{j},s^{\prime};\beta_{j,s}^{0})=\mathbb{1}(s ^{\prime}\in\mathcal{S}_{j})\left\{\frac{\dot{w}_{j,s}(\bar{z}_{j},s^{\prime };\beta_{j,s}^{0})}{w_{j,s}(\bar{z}_{j},s^{\prime};\beta_{j,s}^{0})}-E_{P^{0} }\left[\frac{\dot{w}_{j,s}(\bar{Z}_{j},S;\beta_{j,s}^{0})}{w_{j,s}(\bar{Z}_{j},S;\beta_{j,s}^{0})}\mid\bar{z}_{j-1},S=s^{\prime}\right]\right\}.\]
For each \(\beta_{j,s}^{0}\), the nuisance tangent space is \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})=\bigoplus_{j=0}^{d}\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\). In what follows, we use \(\Pi_{P^{0},\beta^{0}}\{\cdot\mid\mathcal{R}\}\) to denote the \(L_{0}^{2}(P^{0})\)-projection operator onto a subspace \(\mathcal{R}\) of \(L_{0}^{2}(P^{0})\) under fixed \(\beta^{0}\). The projection of the score \(\dot{\ell}_{\beta_{j,s}}\) onto \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\) is then
\[\Pi_{P^{0},\beta^{0}}\{\dot{\ell}_{\beta_{j,s}}\mid\mathcal{T}(P ^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\}(z,s;\beta^{0})\] \[=\sum_{m=0}^{d}\Pi_{P^{0},\beta^{0}}\{\dot{\ell}_{\beta_{j,s}}\mid \mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{m},\beta_{m}^{0}})\}(z,s;\beta^{0}) \tag{11}\] \[=\Pi_{P^{0},\beta^{0}}\{\dot{\ell}_{\beta_{j,s}}\mid\mathcal{T}(P ^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta_{j}^{0}})\}(z,s;\beta^{0}). \tag{12}\]
Equation (11) holds because \(\mathcal{Q}\) is nonparametric, so that \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\) is the orthogonal sum of \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{m},\beta_{m}^{0}})\), \(m=0,1,\ldots,d\). As we prove below, the latter equality holds since \(\dot{\ell}_{\beta_{j,s}}\) is orthogonal to the subspaces \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{m},\beta_{m}^{0}})\) for all \(m\neq j\), so that most of the projections vanish.
Proof of Equation (12). We now show that \(\dot{\ell}_{\beta_{j}}\perp g_{m}\) in \(L_{0}^{2}(P^{0})\) for all \(g_{m}\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{m},\beta_{m}^{0}})\), where \(m\neq j\), \(j\in\mathcal{J}\) and \(m\in\mathcal{J}\cup\{0\}\).
\[E_{P^{0}}\left[\dot{\ell}_{\beta_{j}}(\bar{Z}_{j},S)g_{m}(\bar{Z}_{m},S)\right] =\begin{cases}E_{P^{0}}\left[\dot{\ell}_{\beta_{j}}(\bar{Z}_{j},S)E_{P^{0}} \left[g_{m}(\bar{Z}_{m},S)\mid\bar{Z}_{m-1},S\right]\right]=0,&\text{if $j<m$,}\\ E_{P^{0}}\left[E_{P^{0}}\left[\dot{\ell}_{\beta_{j}}(\bar{Z}_{j},S)\mid\bar{Z}_{j-1 },S\right]g_{m}(\bar{Z}_{m},S)\right]=0,&\text{if $j>m$.}\end{cases}\]
Since \(\beta^{0}_{j}\) is the concatenation of all \(\beta^{0}_{j,s}\) for \(s\in\mathcal{W}_{j}\), we have shown that for all \(s\in\mathcal{W}_{j}\), \(\Pi_{P^{0},\beta^{0}}\{\hat{\ell}_{\beta_{j,s}}\mid\mathcal{T}(P^{0},\mathcal{ P}_{\mathcal{Q}_{m},\beta^{0}_{m}})\}(z,s^{\prime};\beta^{0})=0\) when \(m\neq j\).
Continuing from Equation (12), we claim that \(\Pi_{P^{0},\beta^{0}}\{\dot{\ell}_{\beta_{j,s}}\mid\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\}(\bar{z}_{j},s^{\prime};\beta^{0}_{j})=\mathbb{P}_{j,s^{\prime}}a^{*}_{j}(\bar{z}_{j};\beta^{0}_{j})\), where \(\mathbb{P}_{j,s}a_{j}(\bar{z}_{j};\beta^{0}_{j}):=a_{j}(\bar{z}_{j};\beta^{0}_{j})-E_{P^{0}}[a_{j}(\bar{Z}_{j};\beta^{0}_{j})\mid\bar{z}_{j-1},s]\) and \(a^{*}_{j}(\bar{z}_{j})\) takes the form in Lemma 2. For ease of exposition, we focus on the case where \(\bar{\mathcal{Z}}^{\dagger}_{j-1}=\bar{\mathcal{Z}}_{j-1}\) and use the fact that, as implied by Lemma S1, a generic element of \(\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\) takes the form \(\mathbb{P}_{j,s}a_{j}\) for some \(a_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\). We first show that \(\mathbb{P}_{j,s}a^{*}_{j}\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\), and then we show \(E_{P^{0}}[\{\dot{\ell}_{\beta_{j,s}}(\bar{Z}_{j},S;\beta^{0}_{j,s})-\mathbb{P}_{j,S}a^{*}_{j}(\bar{Z}_{j};\beta^{0}_{j})\}\mathbb{P}_{j,S}a_{j}(\bar{Z}_{j};\beta^{0}_{j})]=0\) for all \(a_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\).
**Part I of showing that \(\Pi_{P^{0},\beta^{0}}\{\dot{\ell}_{\beta_{j,s}}\mid\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\}(\bar{z}_{j},s^{\prime};\beta^{0}_{j})=\mathbb{P}_{j,s^{\prime}}a^{*}_{j}(\bar{z}_{j};\beta^{0}_{j})\):**\(\mathbb{P}_{j,s}a^{*}_{j}\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\). It is straightforward to show that \(E_{Q^{0}}[a^{*}_{j}(\bar{Z}_{j};\beta^{0}_{j})\mid\bar{z}_{j-1}]=0\) everywhere. Since \(Q^{0}\) is nonparametric, we have \(a^{*}_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\). By Lemma S1, it follows that \(\mathbb{P}_{j,s}a^{*}_{j}\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\) for any \(s\in\mathcal{W}_{j}\).
**Part II of showing that \(\Pi_{P^{0},\beta^{0}}\{\dot{\ell}_{\beta_{j,s}}\mid\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q}_{j},\beta^{0}_{j}})\}(\bar{z}_{j},s^{\prime};\beta^{0}_{j})=\mathbb{P}_{j,s^{\prime}}a^{*}_{j}(\bar{z}_{j};\beta^{0}_{j})\):**\(E_{P^{0}}[\{\dot{\ell}_{\beta_{j,s}}(\bar{Z}_{j},S;\beta^{0}_{j,s})-\mathbb{P}_{j,S}a^{*}_{j}(\bar{Z}_{j};\beta^{0}_{j})\}\mathbb{P}_{j,S}a_{j}(\bar{Z}_{j};\beta^{0}_{j})]=0\) for all \(a_{j}\in\mathcal{T}(Q^{0},\mathcal{Q}_{j})\). To verify this, we note that:
\[0 =E_{P^{0}}\left[\left(\dot{\ell}_{\beta_{j,s}}(\bar{Z}_{j},S)-\mathbb{P}_{j,S}a^{*}_{j}(\bar{Z}_{j})\right)\mathbb{P}_{j,S}a_{j}(\bar{Z}_{j})\right]\] \[=E_{P^{0}}\left[\sum_{m=1}^{d}P^{0}_{m}(\bar{Z}_{j-1})E_{P^{0}}\left[\left(\dot{\ell}_{\beta_{j,s}}(\bar{Z}_{j},S)-\mathbb{P}_{j,m}a^{*}_{j}(\bar{Z}_{j})\right)\mathbb{P}_{j,m}a_{j}(\bar{Z}_{j})\mid\bar{Z}_{j-1},S=m\right]\right]\] \[=E_{P^{0}}\bigg{[}\sum_{m=1}^{d}P^{0}_{m}(\bar{Z}_{j-1})\mathbb{1}(m\in\mathcal{S}_{j})E_{P^{0}}\left[\left(\dot{\ell}_{\beta_{j,s}}(\bar{Z}_{j},S)-\mathbb{P}_{j,m}a^{*}_{j}(\bar{Z}_{j})\right)\mathbb{P}_{j,m}a_{j}(\bar{Z}_{j})\mid\bar{Z}_{j-1},S=m\right]\bigg{]}\] \[=E_{P^{0}}\bigg{[}\sum_{m\in\mathcal{S}_{j}}P^{0}_{m}(\bar{Z}_{j-1})E_{P^{0}}\left[\left(\dot{\ell}_{\beta_{j,s}}(\bar{Z}_{j},S)-\mathbb{P}_{j,m}a^{*}_{j}(\bar{Z}_{j})\right)\mathbb{P}_{j,m}a_{j}(\bar{Z}_{j})\mid\bar{Z}_{j-1},S=m\right]\bigg{]}\] \[=E_{P^{0}}\bigg{[}\sum_{m\in\mathcal{S}_{j}}P^{0}_{m}(\bar{Z}_{j-1})E_{P^{0}}\left[\left\{\left(\frac{\dot{w}_{j,s}(\bar{Z}_{j},m;\beta^{0}_{j,s})}{w_{j,s}(\bar{Z}_{j},m;\beta^{0}_{j,s})}-a^{*}_{j}(\bar{Z}_{j})\right)-E_{P^{0}}\left[\frac{\dot{w}_{j,s}(\bar{Z}_{j},m;\beta^{0}_{j,s})}{w_{j,s}(\bar{Z}_{j},m;\beta^{0}_{j,s})}-a^{*}_{j}(\bar{Z}_{j})\mid\bar{Z}_{j-1},m\right]\right\}\cdot a_{j}(\bar{Z}_{j})\mid\bar{Z}_{j-1},m\right]\bigg{]}\] \[=\int\int\bigg{\{}\sum_{m\in\mathcal{S}_{j}}P^{0}_{m}(\bar{z}_{j-1})\bigg{(}\frac{\dot{w}_{j,s}(\bar{z}_{j},m;\beta^{0}_{j,s})}{w_{j,s}(\bar{z}_{j},m;\beta^{0}_{j,s})}-a^{*}_{j}(\bar{z}_{j})-E_{P^{0}}\left[\frac{\dot{w}_{j,s}(\bar{Z}_{j},m;\beta^{0}_{j,s})}{w_{j,s}(\bar{Z}_{j},m;\beta^{0}_{j,s})}-a^{*}_{j}(\bar{Z}_{j})\mid\bar{z}_{j-1},m\right]\bigg{)}\cdot w^{*}_{j,m}(\bar{z}_{j},m;\beta^{0}_{j,m})\bigg{\}}a_{j}(\bar{z}_{j})\,dQ^{0}_{j}(z_{j}\mid\bar{z}_{j-1})\,d\bar{P}^{0}_{j-1}(\bar{z}_{j-1}),\]
where \(P^{0}_{m}(\bar{z}_{j-1}):=P^{0}(S=m\mid\bar{z}_{j-1})\). Requiring the above to vanish for all \(a_{j}\) amounts to a Fredholm integral equation of the second kind in \(a^{*}_{j}\),
with kernel \(K(z_{j}^{\prime},\bar{z}_{j}):=\sum_{m=1}^{|\mathcal{S}_{j}|}r_{j,m}(\bar{z}_{j};\beta_{j,m}^{0})w_{j,m}^{*}(z_{j}^{\prime},\bar{z}_{j-1};\beta_{j,m}^{0})\) and \(b_{j}(\bar{z}_{j};\beta_{j}^{0})=\sum_{m=1}^{|\mathcal{S}_{j}|}r_{j,m}(\bar{z}_{j};\beta_{j,m}^{0})\dot{\ell}_{\beta_{j,s}}(\bar{z}_{j},m)\). Since the kernel is Pincherle-Goursat (i.e., it factors in \(z_{j}^{\prime}\) and \(z_{j}\) given values of \(\bar{z}_{j-1}\)), for any given values of \(\bar{z}_{j-1}\) the Fredholm equation can be equivalently expressed as an algebraic system of \(|\mathcal{S}_{j}|\) equations in the \(|\mathcal{S}_{j}|\) unknowns \(x=(x_{1},\ldots,x_{|\mathcal{S}_{j}|})^{\top}\), namely \(A_{j}x=B_{j}\), where \(A_{j}=I_{|\mathcal{S}_{j}|}-(a_{im})=I_{|\mathcal{S}_{j}|}-E_{Q^{0}}[\bar{r}_{j}(\bar{Z}_{j})\bar{w}_{j}{}^{*\top}(\bar{Z}_{j})\mid\bar{z}_{j-1}]\) and \(B_{j}=[B_{1}^{\top},\ldots,B_{|\mathcal{S}_{j}|}^{\top}]^{\top}\), with
\[a_{im}(\bar{z}_{j-1})=\int r_{j,m}(\bar{z}_{j})w_{j,i}^{*}(\bar{ z}_{j})dQ^{0}_{j}(z_{j}\mid\bar{z}_{j-1})=\delta_{j,m}(\bar{z}_{j-1})E_{Q^{0}_{j} }[r_{j}(\bar{Z}_{j})w_{j,i}^{*}(\bar{Z}_{j})w_{j,m}^{*}(\bar{Z}_{j})\mid\bar{z }_{j-1}],\] \[B_{m}(\bar{z}_{j-1})=\int b_{j}(\bar{z}_{j})w_{j,m}^{*}(\bar{z}_{ j})dQ^{0}_{j}(z_{j}\mid\bar{z}_{j-1})=\sum_{i=1}^{|\mathcal{S}_{j}|}\delta_{j,i}( \bar{z}_{j-1})E_{Q^{0}_{j}}[\dot{\ell}_{\beta_{j,s}}(\bar{Z}_{j},i)r_{j}w_{j,i} ^{*}(\bar{Z}_{j})w_{j,m}^{*}(\bar{Z}_{j})\mid\bar{z}_{j-1}].\]
We note that \(A_{j}=M_{j}\Delta\), where \(M_{j}\) is the matrix defined in the main text. Consequently, \(A_{j}\) has rank \(|\mathcal{S}_{j}|-1\). Thus, the solution to the aforementioned system of equations can be equivalently obtained as \(x=\Delta^{-1}M_{j}^{-}B_{j}\), where \(M_{j}^{-}\) denotes a generalized inverse of \(M_{j}\).
\[a^{*}_{j}(\bar{z}_{j}) =b_{j}(\bar{z}_{j})+B_{j}^{\top}(\bar{z}_{j-1})(M_{j}^{-})^{\top}(\bar{z}_{j-1})\Delta^{-1}(\bar{z}_{j-1})\bar{r}_{j}(\bar{z}_{j})\] \[=b_{j}(\bar{z}_{j})+E_{Q^{0}_{j}}[b_{j}(\bar{Z}_{j})\bar{w}_{j}^{*\top}(\bar{Z}_{j})\mid\bar{z}_{j-1}](M_{j}^{-})^{\top}(\bar{z}_{j-1})r_{j}(\bar{z}_{j})\bar{w}_{j}^{*}(\bar{z}_{j}).\]
We \(Q^{0}\)-center the above to obtain the form of \(a_{j}^{*}\) given in the main text. The above argument is adapted from previous work by Gilbert et al. (1999) and Gilbert (2000). Putting the pieces together, we have proved that the projection of the score of \(\beta_{j,s}^{0}\) onto the nuisance tangent space is \(\Pi_{P^{0},\beta^{0}}\{\dot{\ell}_{\beta_{j,s}}\mid\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\}(\bar{z}_{j},s^{\prime};\beta_{j}^{0})=\mathbb{P}_{j,s^{\prime}}a_{j}^{*}(\bar{z}_{j};\beta_{j}^{0})\), and therefore the efficient score function takes the form \(\dot{\ell}_{\beta_{j,s}}^{*}(\bar{z}_{j},s^{\prime};\beta_{j,s}^{0})=\dot{\ell}_{\beta_{j,s}}(\bar{z}_{j},s^{\prime};\beta_{j,s}^{0})-\mathbb{P}_{j,s^{\prime}}a_{j}^{*}(\bar{z}_{j};\beta_{j}^{0})\), as specified in Lemma 2. Since \(j\) and \(s\) were arbitrary, we have derived the efficient score function of \(\beta^{0}\), which we denote by \(\dot{\ell}_{\beta}^{*}\).
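For intuition on the finite-dimensional reduction above, the following sketch solves a small system \(A_{j}x=B_{j}\) with \(A_{j}=M_{j}\Delta\) and \(\mathrm{rank}(M_{j})=|\mathcal{S}_{j}|-1\) through a Moore-Penrose generalized inverse; the matrices are arbitrary illustrative stand-ins for the conditional moment matrices of the proof, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3  # |S_j|, the number of weakly aligned sources (illustrative)

# Illustrative stand-ins: Delta diagonal and positive, M_j of rank k - 1.
Delta = np.diag(rng.uniform(0.5, 1.5, size=k))
M = rng.normal(size=(k, k))
M[-1] = M[:-1].sum(axis=0)      # force rank k - 1, mirroring rank(A_j) = |S_j| - 1
A = M @ Delta

B = A @ rng.normal(size=k)      # a consistent right-hand side (B lies in col(A))

# x = Delta^{-1} M_j^- B_j, with pinv as the generalized inverse.
x = np.linalg.inv(Delta) @ np.linalg.pinv(M) @ B
print(np.allclose(A @ x, B))    # True: x solves the (consistent) singular system
```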
The canonical gradient of \(\beta^{0}\) is \(D_{P^{0}}^{\beta}=I_{\beta^{0}}^{-1}\dot{\ell}_{\beta}^{*}\).
We conclude by noting that \(I_{\beta^{0}}^{-1}\) is block-diagonal, which makes the canonical gradient of \(\beta^{0}\) straightforward to compute.
Proof of \(I_{\beta^{0}}^{-1}\) being block-diagonal. We start by noticing that \(\tilde{I}_{\beta^{0}}:=E_{P^{0}}[\dot{\ell}_{\beta}(Z,S;\beta^{0})\dot{\ell}_{\beta}^{\top}(Z,S;\beta^{0})]\) is block-diagonal, since it is straightforward to verify that \(\frac{\partial^{2}}{\partial\beta_{j}\partial\beta_{m}}\ell(\beta^{0},Q^{0}\mid z,s)=\vec{0}\) when \(j\neq m\). Fix \(j,m\in\mathcal{J}\) with \(j\neq m\); the \(\{j,m\}\)-block of the Fisher information matrix under unknown \(Q^{0}\) is
\[I_{\beta^{0}}^{j,m} =E_{P^{0}}\left[\dot{\ell}_{\beta_{j}}^{*}(Z,S;\beta_{j}^{0})\{\dot{\ell}_{\beta_{m}}^{*}(Z,S;\beta_{m}^{0})\}^{\top}\right]\] \[=E_{P^{0}}\bigg{[}\left(\dot{\ell}_{\beta_{j}}(\bar{Z}_{j},S;\beta_{j}^{0})-\mathbb{P}_{j,S}a_{j}^{*}(\bar{Z}_{j};\beta_{j}^{0})\right)\left(\dot{\ell}_{\beta_{m}}(\bar{Z}_{m},S;\beta_{m}^{0})-\mathbb{P}_{m,S}a_{m}^{*}(\bar{Z}_{m};\beta_{m}^{0})\right)^{\top}\bigg{]}\] \[=E_{P^{0}}\left[\dot{\ell}_{\beta_{j}}(\bar{Z}_{j},S;\beta_{j}^{0})\dot{\ell}_{\beta_{m}}^{\top}(\bar{Z}_{m},S;\beta_{m}^{0})\right]-E_{P^{0}}\left[\mathbb{P}_{j,S}a_{j}^{*}(\bar{Z}_{j};\beta_{j}^{0})\dot{\ell}_{\beta_{m}}^{\top}(\bar{Z}_{m},S;\beta_{m}^{0})\right]\] \[\quad-E_{P^{0}}\left[\dot{\ell}_{\beta_{j}}(\bar{Z}_{j},S;\beta_{j}^{0})\{\mathbb{P}_{m,S}a_{m}^{*}(\bar{Z}_{m};\beta_{m}^{0})\}^{\top}\right]+E_{P^{0}}\left[\mathbb{P}_{j,S}a_{j}^{*}(\bar{Z}_{j};\beta_{j}^{0})\{\mathbb{P}_{m,S}a_{m}^{*}(\bar{Z}_{m};\beta_{m}^{0})\}^{\top}\right]\] \[=0^{c_{j}\times c_{m}},\]
where each of the four terms above is a \(c_{j}\times c_{m}\) matrix of zeros by the tower property.
## Appendix C Proof of Lemma 3
Proof. We let \(\mathcal{T}_{\mathcal{B}}(P^{0})\) denote the tangent space of \(\mathcal{P}_{Q^{0},\mathcal{B}}:=\{P_{Q^{0},\beta}:\beta\in\mathcal{B}\}\) at \(P^{0}\). Throughout, we suppose that the tangent space of \(\mathcal{P}\) at \(P^{0}\) writes as
\[\mathcal{T}(P^{0},\mathcal{P}):=\left\{g+h:g\in\mathcal{T}(P^{0},\mathcal{P}_ {\mathcal{Q},\beta^{0}}),h\in\mathcal{T}_{\mathcal{B}}(P^{0})\right\}. \tag{13}\]
We denote a gradient of \(\phi\) under \(\mathcal{P}_{\mathcal{Q},\mathcal{B}}\) as \(D_{P^{0}}^{\dagger}\). We begin by noting that, if \(\{P_{\epsilon}:=P_{Q_{\epsilon},\beta_{\epsilon}}:\epsilon\}\subset\mathcal{P}_ {Q^{0},\mathcal{B}}\) is a submodel with \(P_{\epsilon=0}=P^{0}\), score \(h\in\mathcal{T}_{\mathcal{B}}(P^{0})\), and \(Q_{\epsilon}=Q^{0}\) for all \(\epsilon>0\), then \(\psi\circ\phi(P_{\epsilon})=\psi\circ\phi(P^{0})\) for all \(\epsilon\), and so
\[\left.\frac{d}{d\epsilon}\psi\circ\phi(P_{\epsilon})\right|_{ \epsilon=0}=P^{0}D_{P^{0}}^{\dagger}h=0. \tag{14}\]
We further note that, if \(\{P_{\epsilon}:=P_{Q_{\epsilon},\beta_{\epsilon}}:\epsilon\}\subset\mathcal{P} _{\mathcal{Q},\beta^{0}}\) is a submodel with \(P_{\epsilon=0}=P^{0}\), score \(g\), and \(\beta_{\epsilon}=\beta^{0}\) for all \(\epsilon>0\), then
\[\left.\frac{d}{d\epsilon}\psi\circ\phi(P_{\epsilon})\right|_{ \epsilon=0}=P^{0}D_{P^{0}}g=P^{0}D_{P^{0}}^{\dagger}g. \tag{15}\]
Combining the preceding two displays shows that the gradient \(D_{P^{0}}^{\dagger}\) must be a function \(f\in L_{0}^{2}(P^{0})\) that satisfies the following:
\[P^{0}fg=P^{0}D_{P^{0}}g\;\;\text{for all}\;g\in\mathcal{T}(P^{0 },\mathcal{P}_{\mathcal{Q},\beta^{0}}), \tag{16}\] \[P^{0}fh=0\;\;\text{for all}\;h\in\mathcal{T}_{\mathcal{B}}(P^{0 }). \tag{17}\]
Furthermore, because Lemma S1 shows that the submodels used to define (14) and (15) span the tangent space, any function \(f\in L_{0}^{2}(P^{0})\) that satisfies the above two conditions is a gradient. We now show that such a function does indeed exist provided the functional \(\tilde{\gamma}:\mathcal{P}\rightarrow\mathbb{R}\) defined below is pathwise differentiable relative to \(\mathcal{P}\):
\[\tilde{\gamma}(P_{Q,\beta}):=\gamma(\beta).\]
We note that \(E_{P_{\underline{Q}^{0},\beta}}[D_{P^{0}}(Z,S;\beta^{0})]=E_{P_{Q^{0},\beta}}[D_{P^{0}}(Z,S;\beta^{0})]\) since \(Q^{0}\) and \(\underline{Q}^{0}\) agree on all the relevant conditional distributions, and so \(\gamma(\beta)=E_{P_{Q^{0},\beta}}[D_{P^{0}}(Z,S;\beta^{0})]\) for all \(\beta\). Let \(D_{P^{0}}^{\tilde{\gamma}}\) denote a gradient of this functional at \(P^{0}\) relative to \(\mathcal{P}\). Because the right-hand side of the above does not depend on \(Q\), we have that, for any submodel
\(\{P_{\epsilon}:=P_{Q_{\epsilon},\beta_{\epsilon}}:\epsilon\}\subset\mathcal{P}_{ \mathcal{Q},\beta^{0}}\) with \(P_{\epsilon=0}=P^{0}\), score \(g\), and \(\beta_{\epsilon}=\beta^{0}\) for all \(\epsilon>0\), it holds that
\[\frac{d}{d\epsilon}\tilde{\gamma}(P_{\epsilon})\bigg{|}_{\epsilon=0}=P^{0}D_{ P^{0}}^{\tilde{\gamma}}g=0. \tag{18}\]
We claim that a valid gradient \(D_{P^{0}}^{\dagger}\) of \(\psi\circ\phi\) relative to \(\mathcal{P}\) is equal to \(D_{P^{0}}-D_{P^{0}}^{\tilde{\gamma}}\). To show this, it suffices to establish that \(D_{P^{0}}-D_{P^{0}}^{\tilde{\gamma}}\) belongs to \(L_{0}^{2}(P^{0})\) and satisfies (16) and (17). To see that \(D_{P^{0}}-D_{P^{0}}^{\tilde{\gamma}}\) belongs to \(L_{0}^{2}(P^{0})\), note that \(D_{P^{0}}\in L_{0}^{2}(P^{0})\) and \(D_{P^{0}}^{\tilde{\gamma}}\in\mathcal{T}(P^{0},\mathcal{P})\). To see that (16) is satisfied, note that (18) implies that, for all \(g\in\mathcal{T}(P^{0},\mathcal{P}_{\mathcal{Q},\beta^{0}})\), \(P^{0}[D_{P^{0}}-D_{P^{0}}^{\tilde{\gamma}}]g=P^{0}D_{P^{0}}g\). We now show that (17) is satisfied. Fix a submodel \(\{P_{\epsilon}:=P_{Q_{\epsilon},\beta_{\epsilon}}:\epsilon\}\subset\mathcal{P}_{Q^{0},\mathcal{B}}\) with \(P_{\epsilon=0}=P^{0}\), score \(h\in\mathcal{T}_{\mathcal{B}}(P^{0})\), and \(Q_{\epsilon}=Q^{0}\) for all \(\epsilon>0\). For any \(\epsilon>0\),
\[P^{0}[D_{P^{0}}-D_{P^{0}}^{\tilde{\gamma}}]h =P^{0}D_{P^{0}}h-P^{0}D_{P^{0}}^{\tilde{\gamma}}h\] \[=P^{0}D_{P^{0}}h-\tilde{\gamma}(P_{Q^{0},\beta_{\epsilon}})+\tilde{\gamma}(P^{0})+o(\epsilon)\] \[=P^{0}D_{P^{0}}h-P_{Q^{0},\beta_{\epsilon}}D_{P^{0}}+P^{0}D_{P^{0}}+o(\epsilon)\] \[=P^{0}D_{P^{0}}h-P_{Q^{0},\beta_{\epsilon}}D_{P^{0}}+o(\epsilon)\] \[=P^{0}D_{P^{0}}h-P_{\epsilon}D_{P^{0}}+o(\epsilon)\] \[=o(\epsilon).\]
Taking \(\epsilon\to 0\) on both sides shows that \(P^{0}[D_{P^{0}}-D_{P^{0}}^{\tilde{\gamma}}]h=0\). As \(h\) was arbitrary, (17) is satisfied. It is straightforward to verify that \(D_{P^{0}}^{\tilde{\gamma}}\) takes the form in Lemma 3 by employing the delta method.
## Appendix D Example where \(D_{P^{0}}^{\dagger}\) loses efficiency
Suppose \(k=2\) and that source \(S=0\) fully aligns with the target distribution while source \(S=1\) weakly aligns with the target distribution. Further suppose \(X=(Z_{1},S)\) and the parameter of interest is \(\psi(Q^{0})=E_{Q^{0}}[Z_{1}]\). According to the results in Li and Luedtke (2023), an efficient estimator that only leverages fully aligned data sources can be built using the gradient
\[D_{P^{0}}^{\mathcal{A}}(z_{1},s)=\frac{\mathbb{1}\left(s=0\right)}{P^{0}(S=0)} \left(z_{1}-E_{P^{0}}[Z_{1}\mid S=0]\right).\]
If the weakly aligned data sources are also used, then Lemma 3 suggests instead building an estimator using the gradient
\[D_{P^{0}}^{\dagger}(z_{1},s;\beta^{0})=\] \[\frac{1}{w_{1}^{*}(z_{1};\beta^{0})}\left(z_{1}-E_{P^{0}}[Z_{1}\mid S=0]\right)-E_{P^{0}}\left[\frac{1}{w_{1}^{*}(Z_{1};\beta^{0})}\left(Z_{1}-E_{P^{0}}[Z_{1}\mid S=0]\right)\dot{\ell}_{\beta}(Z_{1},S;\beta^{0})\right]^{\top}I_{\beta^{0}}^{-1}\dot{\ell}_{\beta}^{*}(z_{1},s;\beta^{0}).\]
In the case of \(Z_{1}\mid S=0\sim\mathrm{Beta}(2,2)\) and \(Z_{1}\mid S=1\sim\mathrm{Beta}(2+\beta^{0},2)\), so that \(w_{1}(z_{1};\beta^{0})=\exp\left(\beta^{0}\log(z_{1})\right)\), we numerically approximated the difference in variances using a sample size of \(10^{7}\); the results are shown in Figure 3. We also compared it to the difference in variances for \(D^{\star}_{P^{0}}\) and \(D^{\mathcal{A}}_{P^{0}}\). As expected by theory, the gradient \(D^{\star}_{P^{0}}\) from Theorem 1 dominates \(D^{\mathcal{A}}_{P^{0}}\) in this example.
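For reference, the variance of the aligned-data-only gradient in this example can be approximated with a short simulation like the sketch below; the values of \(\beta^{0}\) and \(P^{0}(S=0)\) are illustrative, and reproducing the full comparison for \(D^{\dagger}_{P^{0}}\) and \(D^{\star}_{P^{0}}\) would additionally require the efficient score machinery of Lemma 2, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, p0, n = 0.8, 0.5, 10**7   # beta0 and P(S = 0) are illustrative

s0 = rng.random(n) < p0                           # True where S = 0 (aligned)
z = np.where(s0,
             rng.beta(2.0, 2.0, size=n),          # Z_1 | S = 0 ~ Beta(2, 2)
             rng.beta(2.0 + beta0, 2.0, size=n))  # Z_1 | S = 1 ~ Beta(2 + beta0, 2)

mu0 = 0.5                                         # E[Z_1 | S = 0] for Beta(2, 2)
DA = s0 / p0 * (z - mu0)                          # aligned-data-only gradient D^A
print(DA.var())                                   # Monte Carlo estimate of var(D^A)
```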
## Appendix E Proof of Theorem 1
We show that (1) any gradient in \(\mathcal{D}\) is a valid gradient of \(\phi(P^{0})\), (2) the gradients in (3) and (4) belong to \(\mathcal{D}\), and (3) the proposed weighting scheme \(v^{\star}\) is equal to \(\mathrm{argmin}_{v}\mathrm{var}(D^{v}_{P^{0}}(Z,S;\beta^{0}))\).
Proof.
1. We begin the proof by noting that any valid gradient for \(\phi(P^{0})\) can be written as a sum of \(j\)-specific gradients, where each \(j\)-specific gradient is a gradient for the mapping \(\Gamma_{j}:\mathcal{P}_{\mathcal{Q},\mathcal{B}}\mapsto\mathbb{R}\) such that \(\Gamma_{j}(P):=\phi(P^{0}_{0},\ldots,P^{0}_{j-1},P_{j},P^{0}_{j+1},\ldots,P^{0}_{d})\). Both \(D^{\mathcal{A}}_{P^{0},j}\) and \(D^{\dagger}_{P^{0},j}\) are such \(j\)-specific gradients, and summing the \(j\)-specific gradients \(D^{\mathcal{A}}_{P^{0},j}\) and \(D^{\dagger}_{P^{0},j}\) over \(j\) gives \(D^{\mathcal{A}}_{P^{0}}\) and \(D^{\dagger}_{P^{0}}\), respectively. Fix \(j\in\mathcal{J}\); both \(D^{\mathcal{A}}_{P^{0},j}\) and \(D^{\dagger}_{P^{0},j}\) can be decomposed into the sum of the canonical gradient of \(\Gamma_{j}(P^{0})\), denoted \(\tilde{D}_{P^{0},j}\), and a projection term that lies in the orthogonal complement of the tangent space, so that \(D^{\mathcal{A}}_{P^{0},j}=\tilde{D}_{P^{0},j}+D^{\mathcal{A},\perp}_{P^{0},j}\) and \(D^{\dagger}_{P^{0},j}=\tilde{D}_{P^{0},j}+D^{\dagger,\perp}_{P^{0},j}\). With this notation, we have \[v_{j}D^{\mathcal{A}}_{P^{0},j}+(1-v_{j})D^{\dagger}_{P^{0},j}\] \[=v_{j}(\tilde{D}_{P^{0},j}+D^{\mathcal{A},\perp}_{P^{0},j})+(1-v_{j})(\tilde{D}_{P^{0},j}+D^{\dagger,\perp}_{P^{0},j})\] \[=\tilde{D}_{P^{0},j}+\left(v_{j}D^{\mathcal{A},\perp}_{P^{0},j}+(1-v_{j})D^{\dagger,\perp}_{P^{0},j}\right),\] which is a \(j\)-specific gradient since the second term resides in the orthogonal complement of the tangent
space. As a result, the sum over \(j\) of any such combinations of the \(j\)-specific gradients \(D^{\mathcal{A}}_{P^{0},j}\) and \(D^{\dagger}_{P^{0},j}\), denoted \(D^{v}_{P^{0}}\) for a weighting scheme \(v=(v_{j})_{j\in\mathcal{J}}\), is itself a valid gradient.
2. It is straightforward to show that the gradient in (3) is a special case of \(D^{v}_{P^{0}}\) with a weighting scheme \(v_{j}=1\) for each \(j\in\mathcal{J}\) and the gradient in (4) is a special case of \(D^{v}_{P^{0}}\) with a weighting scheme \(v_{j}=0\) for each \(j\in\mathcal{J}\).
3. In order to find the optimal weighting scheme, we aim to solve the following optimization problem: \[\min_{v_{j},j\in\mathcal{J}}\quad\mathrm{var}\sum_{j\in\mathcal{J}}\left\{v_{j}D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S)+(1-v_{j})D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right\}.\] Let us denote \(D^{v}_{P^{0},j}(\bar{z}_{j},s):=v_{j}D^{\mathcal{A}}_{P^{0},j}(\bar{z}_{j},s)+(1-v_{j})D^{\dagger}_{P^{0},j}(\bar{z}_{j},s;\beta^{0}_{j})\). It is straightforward to show that, by the tower property, \(\text{cov}\left(D^{v}_{P^{0},j}(\bar{Z}_{j},S),D^{v}_{P^{0},m}(\bar{Z}_{m},S)\right)=0\) for any \(j,m\in\mathcal{J}\) with \(j\neq m\). Hence, it suffices to solve the following optimization problem for each fixed \(j\in\mathcal{J}\): \[\min_{v_{j}}\quad\text{var}\left(D^{v}_{P^{0},j}(\bar{Z}_{j},S)\right).\] Writing out the objective function, we have \[\mathrm{var}\left(v_{j}D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S)+(1-v_{j})D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right)\] \[=v_{j}^{2}\mathrm{var}\left(D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S)\right)+(1-v_{j})^{2}\mathrm{var}\left(D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right)+2v_{j}(1-v_{j})\text{cov}\left(D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S),D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right).\] Taking the derivative of the above with respect to \(v_{j}\) and setting it to 0, we have \[0 =\frac{1}{2}\frac{\partial}{\partial v_{j}}\mathrm{var}\left(v_{j}D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S)+(1-v_{j})D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right)\] \[=v_{j}\bigg{\{}\mathrm{var}\left(D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S)\right)+\mathrm{var}\left(D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right)-2\text{cov}\left(D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S),D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right)\bigg{\}}\] \[\quad-\mathrm{var}\left(D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right)+\mathrm{cov}\left(D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S),D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right).\] Hence the solution is \[v_{j}^{\star}=\frac{\sigma_{\dagger}^{2}-\sigma_{\mathcal{A},\dagger}}{\mathrm{var}\left(D^{\mathcal{A}}_{P^{0},j}(\bar{Z}_{j},S)-D^{\dagger}_{P^{0},j}(\bar{Z}_{j},S;\beta^{0}_{j})\right)},\]
where \(\sigma_{\dagger}^{2}:=\mathrm{var}\left(D_{P^{0},j}^{\dagger}(\bar{Z}_{j},S;\beta_{j}^{0})\right)\) and \(\sigma_{\mathcal{A},\dagger}:=\mathrm{cov}\left(D_{P^{0},j}^{\mathcal{A}}(\bar{Z}_{j},S),D_{P^{0},j}^{\dagger}(\bar{Z}_{j},S;\beta_{j}^{0})\right)\). It is straightforward to verify that this solution is unique, as \(\frac{\partial^{2}}{\partial v_{j}^{2}}\mathrm{var}(D_{P^{0},j}^{v}(\bar{Z}_{j},S))=2\mathrm{var}(D_{P^{0},j}^{\mathcal{A}}(\bar{Z}_{j},S)-D_{P^{0},j}^{\dagger}(\bar{Z}_{j},S;\beta_{j}^{0}))>0\) when \(D_{P^{0},j}^{\dagger}(\bar{Z}_{j},S;\beta_{j}^{0})\neq D_{P^{0},j}^{\mathcal{A}}(\bar{Z}_{j},S)\). Hence the solution \(v^{\star}:=(v_{j}^{\star})_{j\in\mathcal{J}}\) is the unique minimizer when \(\mathcal{S}_{j}\neq\mathcal{A}_{j}\) for each \(j\in\mathcal{J}\).
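The closed form for \(v_{j}^{\star}\) is easy to validate against a brute-force scan; in the sketch below, correlated Gaussian draws stand in for evaluations of \(D^{\mathcal{A}}_{P^{0},j}\) and \(D^{\dagger}_{P^{0},j}\), with an illustrative covariance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative joint draws standing in for (D^A, D^dagger) evaluations.
cov = np.array([[2.0, 0.8],
                [0.8, 1.0]])
DA, Dd = rng.multivariate_normal([0.0, 0.0], cov, size=10**5).T

# Closed-form optimal weight: (sigma_dagger^2 - sigma_{A,dagger}) / var(D^A - D^dagger).
v_star = (Dd.var() - np.cov(DA, Dd)[0, 1]) / (DA - Dd).var()

# Brute-force check: scan candidate weights, minimizing the combination's variance.
grid = np.linspace(-1.0, 2.0, 61)
variances = [(v * DA + (1.0 - v) * Dd).var() for v in grid]
print(v_star, grid[int(np.argmin(variances))])   # the two should roughly agree
```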
## Appendix F Form of gradients for estimands in Section 4
Following Lemma 3 and Kennedy (2022), a gradient of the mean missing outcome \(\phi_{1}(P^{0})\) is
\[D_{P^{0}}^{\dagger}(z,s;\beta^{0})\] \[=\frac{\mathbb{1}(s\in\mathcal{S}_{3})}{P^{0}(S\in\mathcal{S}_{3})}\frac{\mathbb{1}\left(z_{2}=1\right)}{P^{0}(Z_{2}=1\mid z_{1},S\in\mathcal{S}_{3})}\frac{1}{w^{\star}_{3,s}(z;\beta^{0})}\frac{p^{0}(z_{1}\mid S\in\mathcal{A}_{1})}{p^{0}(z_{1}\mid S\in\mathcal{S}_{3})}\left(z_{3}-E_{P^{0}}[Z_{3}\mid Z_{2}=1,z_{1},S\in\mathcal{A}_{3}]\right)\] \[\quad+\frac{\mathbb{1}(s\in\mathcal{S}_{1})}{P^{0}(S\in\mathcal{S}_{1})}\frac{1}{w^{\star}_{1,s}(z;\beta^{0})}\left(E_{P^{0}}[Z_{3}\mid Z_{2}=1,z_{1},S\in\mathcal{A}_{3}]-\phi_{1}(P^{0})\right)-\{\nabla_{\beta}\gamma(\beta)\mid_{\beta=\beta^{0}}\}^{\top}D_{P^{0}}^{\beta}(z,s;\beta^{0}).\]
A gradient of the variance-weighted treatment effect \(\phi_{2}(P^{0})\) is (Li et al., 2011)
\[D_{P^{0}}^{\dagger}(z,s;\beta^{0})\] \[=\frac{1(s\in\mathcal{S}_{3})}{{P^{0}(S\in\mathcal{S}_{3})}}\frac {1}{{w^{\star}}_{3,s}(z;\beta^{0})}\frac{p^{0}(z_{2}\mid z_{1},S\in\mathcal{A} _{2})}{{p^{0}(z_{2}\mid z_{1},S\in\mathcal{S}_{3})}}\frac{p^{0}(z_{1}\mid S\in \mathcal{A}_{1})}{{p^{0}(z_{1}\mid S\in\mathcal{S}_{3})}}\left(z_{2}-E_{P^{0} }[Z_{2}\mid z_{1},S\in\mathcal{A}_{2}]\right)\] \[\quad(z_{3}-E_{P^{0}}[Z_{3}\mid Z_{2}=1,Z_{1},S\in\mathcal{A}_{3}])\] \[\quad+\bigg{\{}\frac{1(s\in\mathcal{S}_{2})}{{P^{0}(S\in\mathcal{ S}_{2})}}\frac{1}{{w^{\star}}_{2,s}(z;\beta^{0})}\frac{p^{0}(z_{1}\mid S\in \mathcal{A}_{1})}{{p^{0}(z_{1}\mid S\in\mathcal{S}_{2})}}\left(z_{2}-2E_{P^{0} }[Z_{2}\mid z_{1},S\in\mathcal{A}_{2}]\right)\] \[\quad+\frac{1(s\in\mathcal{S}_{1})}{{P^{0}(S\in\mathcal{S}_{1})}} \frac{1}{{w^{\star}}_{1,s}(z;\beta^{0})}E_{P^{0}}[Z_{2}\mid z_{1},S\in \mathcal{A}_{2}]\bigg{\}}\bigg{\{}E_{P^{0}}[Z_{3}\mid Z_{2}=1,z_{1},S\in \mathcal{A}_{3}]\] \[\quad-\sum_{z_{2}=0}^{1}E_{P^{0}}[Z_{3}\mid z_{2},z_{1},S\in \mathcal{A}_{3}]E_{P^{0}}[\mathbb{1}\left(Z_{2}=z_{2}\right)\mid z_{1},S\in \mathcal{A}_{2}]\bigg{\}}\] \[\quad-\frac{1(s\in\mathcal{S}_{1})}{{P^{0}(S\in\mathcal{S}_{1})}} \frac{1}{{w^{\star}}_{1,s}(z;\beta^{0})}\phi_{2}(P^{0})-\{\nabla_{\beta}\gamma( \beta)\mid_{\beta=\beta^{0}}\}^{\top}D_{P^{0}}^{\beta}(z,s;\beta^{0}).\]
The weighted versions of these gradients can be derived using the result in Theorem 1.
## Appendix G Data analysis: estimand of interest, gradients and continued results
For each selected amino acid feature, as well as the genetic distance feature, we studied the association between PT80 and the covariate via least squares projections onto univariate working linear regression models. We take the genetic distance \(X\) as an example. We are interested in estimating
\[\theta^{0}:=\underset{\theta}{\mathrm{argmin}}\ E_{Q^{0}}[l(X,Y;\theta)],\]
where \(l(X,Y;\theta)\) denotes the loss function, given by
\[l(x,y;\theta):=\left[y-\mu(x;\theta)\right]^{2},\]
where \(\mu(x;\theta)=\theta_{0}+\theta_{1}x\) denotes the mean function. We adopt the Z-estimation framework, under which our problem is equivalent to solving
\[0=E_{P^{0}}[E_{P^{0}}[m_{\theta}(X,Y)\mid X,S\in\mathcal{A}_{3}]\mid S\in \mathcal{A}_{1}],\]
where \(m_{\theta^{\prime}}(x,y)=\frac{\partial l(x,y;\theta)}{\partial\theta}\mid_{ \theta=\theta^{\prime}}\). A gradient of \(\theta^{0}\) assuming \(\beta^{0}\) is known is thus given by
\[D_{P^{0}}(x,y,s;\theta^{0},\beta^{0})=-\left(\frac{\partial E_{P^{0}}[E_{P^{0 }}[m_{\theta}(X,Y)\mid X,S\in\mathcal{A}_{3}]\mid S\in\mathcal{A}_{1}]}{ \partial\theta}\mid_{\theta=\theta^{0}}\right)^{-1}F_{P^{0}}(x,y,s;\theta^{0},\beta^{0}),\]
where
\[F_{P^{0}}(x,y,s;\theta,\beta)\] \[=\frac{\mathbb{1}\left(s\in\mathcal{S}_{3}\right)}{P^{0}(S\in\mathcal{S}_{3})}\frac{1}{w_{3,s}^{*}(x,y;\beta)}\frac{dP^{0}(x\mid S\in\mathcal{A}_{1})}{dP^{0}(x\mid S\in\mathcal{S}_{3})}\left(m_{\theta}-E_{P^{0}}[m_{\theta}\mid x,S\in\mathcal{A}_{3}]\right)\] \[\quad+\frac{\mathbb{1}\left(s\in\mathcal{S}_{1}\right)}{P^{0}(S\in\mathcal{S}_{1})}\frac{1}{w_{1,s}^{*}(x;\beta)}\left(E_{P^{0}}[m_{\theta}\mid x,S\in\mathcal{A}_{3}]-E_{P^{0}}[E_{P^{0}}[m_{\theta}\mid x,S\in\mathcal{A}_{3}]\mid S\in\mathcal{A}_{1}]\right).\]
Hence, a gradient of \(\theta^{0}\) is
\[D_{P^{0}}^{\dagger}(x,y,s;\theta^{0},\beta^{0})\] \[=D_{P^{0}}(x,y,s;\theta^{0},\beta^{0})-E_{P^{0}}\left[D_{P^{0}}(X,Y,S;\theta^{0},\beta^{0})\dot{\ell}_{\beta}(X,Y,S;\beta^{0})\right]^{\top}D_{P^{0}}^{\beta}(x,y,s;\beta^{0}).\]
A more efficient weighted gradient of \(\theta^{0}\) can be constructed using the result in Theorem 1. |
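As a minimal illustration of the working-model machinery, the sketch below implements the moment function \(m_{\theta}\) for this squared-error loss and solves the empirical moment condition in a single-source toy; the synthetic data are placeholders for the PT80 and genetic distance variables, and the sequential-regression and weighting layers of the actual estimator are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder data standing in for (X, Y) = (genetic distance, PT80).
x = rng.normal(size=500)
y = 1.0 + 0.5 * x + rng.normal(scale=0.7, size=500)

def m(theta, x, y):
    """Moment function m_theta(x, y) = d/d(theta) [y - theta0 - theta1 * x]^2."""
    resid = y - theta[0] - theta[1] * x
    return np.stack([-2.0 * resid, -2.0 * resid * x], axis=-1)

# Solving 0 = E[m_theta(X, Y)] in this single-source toy reduces to least squares.
X = np.stack([np.ones_like(x), x], axis=-1)
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(theta_hat, m(theta_hat, x, y).mean(axis=0))  # mean moment is ~ (0, 0)
```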
2304.13254 | The directional isotropy of LIGO-Virgo binaries | We demonstrate how to constrain the degree of absolute alignment of the total angular momenta of LIGO-Virgo binary black holes, looking for a special direction in space that would break isotropy. We also allow for inhomogeneities in the distribution of black holes over the sky. Making use of dipolar models for the spatial distribution and orientation of the sources, we analyze 57 signals with false-alarm rates < 1/yr from the third LIGO-Virgo observing run. Accounting for selection biases, we find the population of LIGO-Virgo black holes to be fully consistent with both homogeneity and isotropy. We additionally find the data to constrain some directions of alignment more than others, and produce posteriors for the directions of total angular momentum of all binaries in our set. All code and data are made publicly available in https://github.com/maxisi/gwisotropy/. | Maximiliano Isi, Will M. Farr, Vijay Varma | 2023-04-26T02:54:27Z | http://arxiv.org/abs/2304.13254v1 | # The directional isotropy of LIGO-Virgo binaries
###### Abstract
We demonstrate how to constrain the degree of absolute alignment of the total angular momenta of LIGO-Virgo binary black holes, looking for a special direction in space that would break isotropy. We also allow for inhomogeneities in the distribution of black holes over the sky. Making use of dipolar models for the spatial distribution and orientation of the sources, we analyze 57 signals with false-alarm rates \(\leq 1/\)yr from the third LIGO-Virgo observing run. Accounting for selection biases, we find the population of LIGO-Virgo black holes to be fully consistent with both homogeneity and isotropy. We additionally find the data to constrain some directions of alignment more than others, and produce posteriors for the directions of total angular momentum of all binaries in our set. All code and data are made publicly available in [https://github.com/maxisi/gwisotropy/](https://github.com/maxisi/gwisotropy/).
## I Introduction
With the increasing number of binary black holes (BBHs) detected by LIGO [1] and Virgo [2], it has become possible to study the distribution of such gravitational wave (GW) sources over time and space [3; 4; 5; 6; 7; 8; 9]. Since BBHs can be detected up to nonnegligible redshifts (currently, \(\lesssim 1\)), we expect their distribution at large scales to reflect the homogeneity and isotropy that characterize the universe cosmologically--a departure from that expectation would reveal a major shortcoming in our understanding of the detection biases affecting the LIGO-Virgo instruments, or, more tantalizingly, point to fundamentally new physics or astrophysics.
The homogeneity [6; 7; 8; 9] and isotropy [10] of BBHs have been studied before under different frameworks. In this work, we reconsider the problem from a new point of view by quantifying the degree of alignment of the BBH orbits; in other words, we ask the question: could the total angular momentum vectors of LIGO-Virgo BBHs be preferentially aligned with a special direction in space? We consider this possibility as we simultaneously search for angular inhomogeneities in the spatial distribution of sources, thus constraining the existence of special directions controlling both the alignment and location of BBHs.
Unlike previous studies, we look for a breaking of isotropy in the angular momenta through the existence of a special direction in space with reference to some absolute frame, like the cosmic microwave background or far away stars (Fig. 1, second panel), and not with respect to Earth. The discovery of such a special direction could reveal the presence of a vector field breaking Lorentz symmetry. This differs from previous works like Ref. [10], which checked for anomalies in the alignment of sources with respect to Earth, as reflected in the distribution of BBH inclinations relative to the line of sight (Fig. 1, third panel). Such studies are not sensitive to the kind of overall alignment in absolute space that we look for here.
In Sec. II we describe the population model we use to constrain anisotropies, as well as our assumptions about the astrophysical distribution of BBH properties like masses and redshifts, and our treatment of selection biases. In Sec. III we outline the data products used in our analysis. We present our results in Sec. IV, including constraints on the degree of orientation and location anisotropies, as well as maps of possible preferred directions. In Sec. V, we summarize validation tests of our infrastructure to contextualize our measurement. We conclude in Sec. VI. Appendices show how to obtain posteriors on the angular-momentum direction, discuss hyperparameter prior choices and display posteriors for the angular-momentum direction for all events in our set.
## II Method
### Isotropy modeling
In order to study the spatial distribution and orientation of BBHs, we must analyze the collection of detections under a hierarchical population model that allows for variable degrees of spatial and directional correlations (see, e.g., Ref. [9]). This requires modeling the distribution of source locations, \(\hat{N}\), and total-angular-momentum directions, \(\hat{J}\). For modeling purposes, the \(\hat{N}\) and \(\hat{J}\) vectors must be expressed through their Cartesian components in some absolute reference frame.
We choose to work in geocentric, equatorial celestial coordinates, with the vernal equinox as the \(x\) axis. In that frame, the sky-location vector is, for each BBH,
\[\hat{N}=\left(\cos\alpha\cos\delta,\sin\alpha\cos\delta,\sin\delta\right), \tag{1}\]
for the right ascension \(\alpha\) and declination \(\delta\); meanwhile, the orientation vector is \(\hat{J}\equiv\vec{J}/|\vec{J}|\) where
\[\vec{J}=\vec{L}+\vec{S}_{1}+\vec{S}_{2}\,, \tag{2}\]
for the orbital angular momentum \(\vec{L}\) and individual (dimensionful) black hole (BH) spin angular momenta \(\vec{S}_{1/2}\).
The Cartesian components of \(\hat{J}\) can be computed in the above frame as a function of \(\alpha\), \(\delta\), and the polarization angle \(\psi\) [11], as well as the component masses \(m_{1/2}\), the dimensionless spin vectors \(\vec{\chi}_{1/2}\), and the orbital phase \(\phi_{\rm ref}\) at some reference frequency \(f_{\rm ref}\); we outline this calculation in Appendix A and release relevant code in [12]. We prefer to work with \(\hat{J}\) rather than \(\hat{L}\) because the former is conserved over the coalescence to a high degree, even for precessing systems [13].
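As a schematic of these two ingredients, the minimal sketch below implements Eq. (1) for \(\hat{N}\) and Eq. (2) for \(\hat{J}\) in a source frame whose \(z\) axis lies along the orbital angular momentum (in geometric units, \(\vec{S}_{i}=m_{i}^{2}\vec{\chi}_{i}\)); the further rotation of \(\hat{J}\) into equatorial coordinates using \(\alpha\), \(\delta\), and \(\psi\) is the Appendix A calculation and is omitted here, and the value of \(|\vec{L}|\) is an arbitrary placeholder.

```python
import numpy as np

def n_hat(alpha, delta):
    """Sky-location unit vector of Eq. (1) in equatorial celestial coordinates."""
    return np.array([np.cos(alpha) * np.cos(delta),
                     np.sin(alpha) * np.cos(delta),
                     np.sin(delta)])

def j_hat_source_frame(m1, m2, chi1, chi2, L_norm):
    """Unit total angular momentum J = L + S1 + S2 (Eq. 2), in a frame where the
    orbital angular momentum points along z; S_i = m_i**2 * chi_i with G = c = 1."""
    J = np.array([0.0, 0.0, L_norm]) \
        + m1**2 * np.asarray(chi1) + m2**2 * np.asarray(chi2)
    return J / np.linalg.norm(J)

# Example: a mildly precessing binary (all numbers are placeholders).
print(n_hat(alpha=1.2, delta=-0.3))
print(j_hat_source_frame(30.0, 20.0, [0.2, 0.0, 0.3], [0.0, 0.1, 0.1], L_norm=600.0))
```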
Our goal is to quantify the degree of isotropy in \(\hat{N}\) and \(\hat{J}\). To this end, we model each of those vectors as drawn from an isotropic distribution with a dipolar correction of variable magnitude. This _ad hoc_ modification may be generally seen as the first term in a harmonic expansion around isotropy, and is specifically well suited to capture the existence of a preferred direction in space. The dipolar components in the location and orientation distributions are defined by dipole vectors \(\vec{v}_{N/J}\) whose magnitudes control the degree of deviation from isotropy for \(\hat{N}\) and \(\hat{J}\) respectively.
Concretely, we implement the following population-level likelihood for each event (indexed by \(i\))
\[p(\hat{N}_{i},\hat{J}_{i}\mid\vec{v}_{N},\vec{v}_{J})\sim\frac{1}{16\pi^{2}} \Big{(}1+\vec{v}_{N}\cdot\hat{N}_{i}\Big{)}\Big{(}1+\vec{v}_{J}\cdot\hat{J}_{ i}\Big{)}, \tag{3}\]
for some special-direction vectors \(\vec{v}_{N/J}\), with \(0\leq\left|\vec{v}_{N/J}\right|<1\), to be inferred as hyperparameters from the collection of detections. The fully isotropic case is recovered for \(\vec{v}_{N}=\vec{v}_{J}=0\). On the other hand, setting \(\vec{v}_{J}=0\) alone allows for a nonhomogenous distribution of sources in the sky while assuming isotropic source orientations; this reduces to the "dipole" model in [9]. Modeling the likelihood as in Eq. (3), with \(\left|\vec{v}_{N/J}\right|<1\), ensures that the likelihood itself remains positive everywhere.
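Schematically, this per-event density can be evaluated as in the brief sketch below, where the unit vectors and dipole vectors in the example are arbitrary.

```python
import numpy as np

def dipole_likelihood(N_hat, J_hat, v_N, v_J):
    """Per-event population likelihood of Eq. (3); N_hat and J_hat are unit vectors
    and v_N, v_J are dipole vectors with |v| < 1, keeping the density positive."""
    return (1.0 + N_hat @ v_N) * (1.0 + J_hat @ v_J) / (16.0 * np.pi**2)

N = np.array([1.0, 0.0, 0.0])
J = np.array([0.0, 0.0, 1.0])
print(dipole_likelihood(N, J, np.zeros(3), np.zeros(3)))  # isotropic: 1 / (16 pi^2)
print(dipole_likelihood(N, J, np.array([0.5, 0.0, 0.0]), np.zeros(3)))
```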
It is convenient to rephrase the constraint that \(0\leq\left|\vec{v}_{N/J}\right|<1\) by re-parameterizing the population in Eq. (3), through two corresponding, auxiliary vectors \(\vec{u}_{N/J}\) defined such that
\[\vec{v}_{N/J}=\frac{\vec{u}_{N/J}}{\sqrt{1+|\vec{u}_{N/J}|^{2}}}. \tag{4}\]
In this way, we ensure \(0\leq\left|\vec{v}_{N/J}\right|<1\) for \(-\infty<\vec{u}_{N/J}<\infty\); \(\vec{u}_{N/J}\) are unconstrained. This "decompactifying" transformation is more effective for sampling purposes than enforcing a sharp constraint on the magnitudes of \(\vec{v}_{N/J}\). For small \(\left|\vec{v}_{N/J}\right|\ll 1\), \(\vec{v}_{N/J}\simeq\vec{u}_{N/J}\) to second order in \(\left|\vec{v}_{N/J}\right|\). As a prior, we choose a three-dimensional Gaussian distribution on the components of each \(\vec{u}_{N}\) and \(\vec{u}_{J}\), with zero mean and standard deviation \(\sigma=0.4\). This choice of prior is designed to be fairly uninformative about \(\vec{v}_{N/J}\) (i.e., lacking a strong gradient) while still peaking at \(\vec{v}_{N/J}=\vec{0}\); see Appendix B for a description of this feature.
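The transformation and prior are simple to implement; the sketch below draws from the Gaussian prior on \(\vec{u}_{N/J}\) and confirms that the mapped \(\vec{v}_{N/J}\) always land strictly inside the unit ball.

```python
import numpy as np

def decompactify(u):
    """Map an unconstrained vector u to v with |v| < 1, per Eq. (4)."""
    u = np.asarray(u, dtype=float)
    return u / np.sqrt(1.0 + u @ u)

print(decompactify([0.3, -0.2, 0.1]))             # a single mapped vector

rng = np.random.default_rng(5)
u = rng.normal(scale=0.4, size=(10**5, 3))        # prior draws for u_N or u_J
v = u / np.sqrt(1.0 + np.sum(u**2, axis=1, keepdims=True))
print(np.linalg.norm(v, axis=1).max() < 1.0)      # True: |v| < 1 for all draws
```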
Figure 1: _BBH orientation models._ By default, we expect BBH angular momenta (arrows) to be oriented randomly with respect to Earth (circle) or each other, reflecting isotropy (first panel). In this study, we consider the possibility that BBH orbits follow a special direction in space, the extreme of which is full alignment (second panel). Previous studies, like Ref. [10], have considered models in which binaries are (or are perceived to be) aligned anomalously with respect to Earth, e.g., pointing preferentially towards it (third panel). The first two panels both have a distribution of inclinations that looks isotropic to analyses like Ref. [10].
### Reweighting to an astrophysical population
Besides the location and orientation modeling described above, we need to ensure that our assumptions about the parameters of each individual BBH, like masses and redshift, are astrophysically sensible. To that end, we assume a redshift distribution that follows the Madau-Dickinson star formation rate [14] in the comoving frame. For the masses, we assume a prior distribution inversely proportional to the heaviest component mass (\(\propto 1/m_{1}\)) and uniform in the mass ratio (constant in \(q=m_{2}/m_{1}\)), and restrict our sample to BHs with (posterior-median) masses in the range \(5\,M_{\odot}<m_{2}\leq m_{1}<150\,M_{\odot}\); within that range, this choice is a simple approximation to the measurement in [3]. We assume the component spins are isotropically oriented with respect to the orbital angular momentum, with a uniform distribution over their dimensionless magnitudes.1 A future analysis may fit the astrophysical distribution of these parameters jointly with the orientation and location of the binaries.
Footnote 1: This implies that our modeling of \(\hat{J}\) following Eq. (3) can be reinterpreted as a nontrivial modeling of \(\vec{L}\) through Eq. (2).
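A schematic of this reweighting step is sketched below. This is our illustration rather than the analysis code: the parameter-estimation prior density `pi_pe` is assumed to be supplied by the user, and comoving-volume and time-dilation factors in the redshift distribution are omitted for brevity.

```python
import numpy as np

def madau_dickinson(z):
    """Madau-Dickinson star formation rate density, up to normalization [14]."""
    return (1.0 + z) ** 2.7 / (1.0 + ((1.0 + z) / 2.9) ** 5.6)

def astro_weight(m1, q, z, pi_pe):
    """Unnormalized importance weight taking one posterior sample, drawn under
    the sampling prior with density pi_pe at (m1, q, z), to the astrophysical
    population of Sec. II.2: p(m1) ~ 1/m1, flat in q, SFR-like in redshift."""
    m2 = q * m1
    in_range = (5.0 < m2) & (m1 < 150.0)           # component-mass cut
    p_astro = (1.0 / m1) * madau_dickinson(z)      # mass x redshift density
    return np.where(in_range, p_astro / pi_pe, 0.0)
```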
### Selection biases
Since we are attempting to model the _intrinsic_ distribution of all BBH sources, not merely those that were detected, we must account for the difference in LIGO-Virgo's sensitivity to various sources. This is true for both intrinsic parameters, like BH masses and spin magnitudes, and the location and orientation parameters in which we are interested for this work (namely, \(\hat{N}\) and \(\hat{J}\)). With knowledge of the instruments' sensitivity over parameter space, we use the measured selection function to recover the intrinsic distribution of parameters from the distribution of detected sources [15; 16].
Evaluating the detectors' sensitivity over parameter space requires large simulation campaigns that quantify the end-to-end performance of LIGO-Virgo detection pipelines by injecting and recovering synthetic signals. As in [9], we take advantage of the BBH dataset in [17] for this purpose.2 Since this injection campaign only covered LIGO-Virgo's third observing run, we only consider events detected during that period; together with the mass constraints cited above, this means there are 57 BBH events to be included in our analysis (listed in Table 1).
Footnote 2: Specifically, the endo3_bbhpop-LIGO-T2100113-v12 injection set.
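The resulting selection-corrected hierarchical likelihood takes the standard Monte Carlo form (see [15; 16]); the sketch below is ours, with the per-sample weights \(p(\theta\mid\Lambda)/\pi(\theta)\) assumed to be precomputed for each event's posterior samples and for the found injections.

```python
import numpy as np

def hierarchical_log_likelihood(event_weights, inj_weights, n_inj_total):
    """event_weights: list of arrays of p(theta|Lambda)/pi(theta) over each
    event's posterior samples; inj_weights: the same ratio over the found
    injections, with the draw density p_draw in the denominator;
    n_inj_total: total number of injections performed."""
    per_event = sum(np.log(np.mean(w)) for w in event_weights)
    xi = np.sum(inj_weights) / n_inj_total   # detection-efficiency estimate
    return per_event - len(event_weights) * np.log(xi)
```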
## III Data
Our analysis starts from posterior samples for individual events reported by LIGO-Virgo in [18; 19] and publicly released in [20; 21] through the Gravitational Wave Open Science Center [22; 23; 24]. Specifically, we make use of results obtained with the IMRPhenomXPHM waveform [25; 26; 27; 28] that have been already reweighted to a distance prior uniform in comoving volume. The single-event inference was carried out by the LIGO-Virgo collaborations using the Bilby parameter estimation pipeline [29; 30], as detailed in [18; 19]. We reweight those samples following the astrophysical population described above (see, e.g., [31; 32] for methodology) and then use them to produce distributions for the components of \(\hat{N}\) and \(\hat{J}\), which we use as input for our hierarchical analysis based on Eq. (3).
The six-dimensional posteriors for the components of \(\hat{J}\) and \(\hat{N}\) for each event are the primary input for our hierarchical isotropy analysis. We show the posteriors for \(\hat{J}\) in Fig. 8 as skymaps for all events in our set, produced using standard LIGO-Virgo tools for representing probability densities over the sky [33; 34; 35]; we make these posteriors, resulting from the calculation detailed in Appendix A, available in our data release [12]. The equivalent figures for \(\hat{N}\) are nothing but the sky-localization maps already made available by LIGO-Virgo [20; 21].
## IV Results
We showcase the full result of our analysis in Fig. 2, which represents the simultaneous measurement of location and orientation anisotropies through the six-dimensional posterior on the components of \(\vec{v}_{N}\) and \(\vec{v}_{J}\). The result is fully consistent with both kinds of isotropy, with \(\vec{v}_{N}=0\) and \(\vec{v}_{J}=0\) supported with high credibility, falling close to the peak of the marginal distributions on
\begin{table}
\begin{tabular}{l l l} \hline \hline
GW190408\_181802 & GW190708\_232457 & GW191129\_134029 \\
GW190412\_053044 & GW190719\_215514 & GW191204\_171526 \\
GW190413\_052954 & GW190720\_000836 & GW191215\_223052 \\
GW190413\_134308 & GW190727\_060333 & GW191216\_213338 \\
GW190421\_213856 & GW190728\_064510 & GW191222\_033537 \\
GW190503\_185404 & GW190731\_140936 & GW191230\_180458 \\
GW190512\_180714 & GW190803\_022701 & GW200112\_155838 \\
GW190513\_025428 & GW190805\_211137 & GW200128\_020211 \\
GW190517\_055101 & GW190828\_063405 & GW200129\_065458 \\
GW190519\_153544 & GW190828\_065509 & GW200202\_154313 \\
GW190521\_030229 & GW190910\_112807 & GW200208\_130117 \\
GW190521\_074359 & GW190915\_235702 & GW200209\_085452 \\
GW190527\_092055 & GW190925\_232845 & GW200216\_220804 \\
GW190602\_175927 & GW190929\_012149 & GW200219\_094415 \\
GW190620\_030421 & GW190930\_133541 & GW200224\_222234 \\
GW190630\_185205 & GW191103\_012549 & GW200225\_060421 \\
GW190701\_203306 & GW191105\_143521 & GW200302\_015811 \\
GW190706\_222641 & GW191109\_010717 & GW200311\_115853 \\
GW190707\_093326 & GW191127\_050227 & GW200316\_215756 \\ \hline \end{tabular}
\end{table}
Table 1: Events considered.
Figure 2: _Isotropy measurement._ Result of the simultaneous measurement of location and orientation isotropy through the model in Eq. (3), as represented by the posterior distribution on the dipole vectors \(\vec{v}_{N/J}\) (corner plot), and the corresponding projections over the sky (Mollweide insets). The six-dimensional posterior distribution is represented through credible levels over two-dimensional slices (blue contours, spaced at intervals corresponding to 10% increments in probability mass, with the outer contour enclosing 90% of the probability), and one-dimensional marginals (diagonal). The upper-left and lower-right sub-corners encode constraints on the individual components of each \(\vec{v}_{N}\) and \(\vec{v}_{J}\) respectively (highlighted with vertical and horizontal lines in the margin), while the other panels encode potential correlations between the location and orientation anisotropies. The measurements for \(\vec{v}_{N/J}\) can be projected into distribution over the sky as in the top-right insets, which show the allowed dipole orientations for \(\hat{v}_{N}=\vec{v}_{N}/|\vec{v}_{N}|\) (top) or \(\hat{v}_{J}\equiv\vec{v}_{J}/|\vec{v}_{J}|\) (bottom), with lighter colors encoding higher probability density over the celestial sphere [33; 34; 35]; inhomogeneities in these sky-maps do not constitute evidence for anisotropies. Isotropy is recovered for \(\vec{v}_{N}=\vec{v}_{J}=0\) (dotted lines), which is well supported by the 6D posterior.
the 1% and 49% quantiles, respectively;3 this feature is also reflected in the fact that the origin is well supported in all the panels of the corner plot in Fig. 2. The result for \(\vec{v}_{N}\) is consistent with previous studies [9], which did not find evidence against isotropy in the location of LIGO-Virgo sources.
Footnote 3: These three-dimensional quantiles correspond to the fraction of \(\vec{v}_{J/N}\) samples with higher probability density than the origin.
To the extent that there is any support for nonzero dipolar contributions to the location or orientation densities (namely, for \(|\vec{v}_{J/N}|>0\)), their possible directions in the sky are represented by the insets on the top right of Fig. 2: a higher density indicates a potentially allowed orientation for the \(\vec{v}_{N}\) (top) or \(\vec{v}_{J}\) (bottom) dipole vectors. These skymaps reveal that the data are not in conflict with the existence of a weak dipole along the vernal equinox in the celestial equatorial plane (the direction of the \(x\) axis in our Cartesian coordinate system), as implied by the marginal on \(\vec{v}_{J,x}\) in Fig. 2; this dipole is allowed, but not required, by the data, since the posterior is fully consistent with \(\vec{v}_{J}=0\).
Indeed, although inhomogeneities appear in these Mollweide projections, this should not be interpreted as evidence for anisotropies: the density of points in those maps only encodes _permissible_ directions for the dipole, without implications for its magnitude. In fact, inhomogeneities will appear in such plots any time the posterior does not happen to peak exactly at the three-dimensional origin of \(\vec{v}_{J/N}\), as we expect to be commonly the case even if \(\vec{v}_{J/N}=0\) is the underlying truth.4
Footnote 4: With a finite number of events in the catalog, even if \(\vec{v}_{J/N}\) is zero in truth, the maximum-likelihood estimate will not, in general, be zero. The posterior will therefore peak away from zero (but be fully consistent with zero). In this situation inhomogeneity can appear as in Fig. 2.
In Fig. 3, we provide an additional representation of this posterior projected onto the three-dimensional spaces of \(\vec{v}_{J}\) and \(\vec{v}_{N}\); we also present the prior for comparison. Large dipolar contributions are disfavored by the posterior in all cases, and the posterior distributions are more concentrated than the prior, indicating that the data are informative. The posterior standard deviations for the three Cartesian components of \(\vec{v}_{J}\) and \(\vec{v}_{N}\) are smaller by \(\{15\%,10\%,9\%\}\) and \(\{25\%,20\%,15\%\}\) respectively with respect to the prior. The stronger tightening
Figure 4: _Posterior on dipole magnitudes._ The measurement of Fig. 2 translated into one-dimensional posteriors on the \(|\vec{v}_{J}|\) (blue) and \(|\vec{v}_{N}|\) (orange) magnitudes. These are shifted rightward with respect to the implied prior (gray), which itself heavily disfavors \(|\vec{v}_{J/N}|=0\) due to the reduced phase space near the origin in a three-dimensional space.
Figure 3: _3D distributions._ Three-dimensional representation of the \(\vec{v}_{J/N}\) measurement in Fig. 2 (first two panels), in comparison to the prior (last panel). Each point is drawn from the corresponding three-dimensional distribution, with color proportional to the probability density (lighter colors for higher density). The origin, representing isotropy, is well favored in all cases (intersection of gray dashed lines). The posteriors are tighter with respect to the prior, as is also seen in Fig. 4.
in the \(\vec{v}_{N}\) distribution is a consequence of \(\hat{N}\) being better constrained than \(\hat{J}\) in individual events. Figure 3 again makes it apparent that the data disfavor certain directions for the \(\vec{v}_{J}\) dipole (towards the negative-\(x\) quadrants, away from the vernal equinox).
We may translate the result in Fig. 2 into a posterior on the magnitudes \(\left|\vec{v}_{J/N}\right|\) of the dipole components, as we do in Fig. 4. However, the one-dimensional posteriors on these quantities are heavily dominated by the dimensionality of the problem, which results in a Jacobian disfavoring small values of \(\left|\vec{v}_{J/N}\right|\) due to the limited phase space near \(\vec{v}_{J}=0\) or \(\vec{v}_{N}=0\).5 Therefore, even though our three-dimensional prior does not treat \(\vec{v}_{J/N}=0\) as a special point (Appendix B), the effective prior induced on the magnitudes heavily disfavors the origin in \(\left|\vec{v}_{J/N}\right|\) due to the reduced available prior volume (gray curve in Fig. 4); that explains why the posteriors in Fig. 4 themselves appear to disfavor \(\left|\vec{v}_{J/N}\right|=0\). With that in mind, the influence of the data can be seen in the leftward shift of the \(\left|\vec{v}_{J/N}\right|\) distributions with respect to the prior; this effect is more pronounced for \(\vec{v}_{N}\), which is a consequence of the fact that \(\hat{N}\) is generally better measured than \(\hat{J}\). Although the shift in \(\left|\vec{v}_{J}\right|\) is slight, the data _are_ informative about \(\vec{v}_{J}\)--constraining some of its possible orientations (Figs. 2 and 3), if not its overall magnitude.
Footnote 5: In other words, although the probability density is high near the origin, this is outweighed by the greater volume found far away, so that the overall probability of finding \(\vec{v}_{J/N}=0\) is small. Any density in \(\vec{v}_{J/N}\) that is finite at the origin will produce a density on \(\left|\vec{v}_{J/N}\right|\) that behaves as \(\left|\vec{v}_{J/N}\right|^{2}\) near the origin.
## V Validation
In this section, we validate our setup by studying simulated datasets in which \(\hat{J}\) and \(\hat{N}\) are isotropically distributed (V.1). We also revisit our assumptions about the astrophysical distribution of BBH parameters, described in Sec. II.2, and show that they are robust.
### Injections from selection set
We validate our infrastructure on the set of injections used to evaluate the selection function of the instruments, which was drawn from an intrinsically isotropic distribution [17]. This provides an end-to-end test of our setup, including the computation of location and orientation vectors, the selection function and the inference process.
Concretely, we simulate catalogs of detections drawn from the injection set used to evaluate the selection function as described in Sec. II.3 [17]. For simplicity, we treat the injection parameters as a single sample of a fictitious posterior: the input to our hierarchical analysis is one sample per synthetic event, drawn from the distribution in Sec. II.3. At each iteration, we double the size of the simulated catalog. Since the injection distribution was constructed to be isotropic [17], we expect this experiment to indicate \(\vec{v}_{J/N}=\vec{0}\), with certainty growing with catalog size.
We show the result in Fig. 5 for \(\vec{v}_{J}\) (the result for \(\vec{v}_{N}\) is similar). As expected, \(\vec{v}_{J/N}=\vec{0}\) is supported with increasing certainty as the synthetic catalog grows. This is what we expect if our infrastructure is working as designed.
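A toy version of this test (our sketch, with a maximum-likelihood fit standing in for the full hierarchical posterior) already exhibits the expected shrinkage of the recovered dipole as the catalog grows:

```python
import numpy as np
from scipy.optimize import minimize

def random_unit_vectors(n, rng):
    """Draw n isotropic unit vectors on the sphere."""
    x = rng.normal(size=(n, 3))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def neg_log_like(u, n_hat):
    """Negative log-likelihood of the dipole model p ~ (1 + v.n)/(4 pi),
    with v parameterized through the unconstrained u of Eq. (4)."""
    v = u / np.sqrt(1.0 + u @ u)
    return -np.sum(np.log1p(n_hat @ v))

rng = np.random.default_rng(1)
for n in (4, 64, 1024, 8192):
    n_hat = random_unit_vectors(n, rng)
    u = minimize(neg_log_like, x0=np.zeros(3), args=(n_hat,)).x
    print(n, "|v_mle| =", np.linalg.norm(u / np.sqrt(1.0 + u @ u)))
```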
### Astrophysical distributions
As described in Sec. II.2, the analysis above (Figs. 2-4) hinged on simplified assumptions about the astrophysical distribution of BH masses, spins, and redshifts. To evaluate the impact of this simplification, we repeat our analysis, now reweighting to a different astrophysical population based on the measurements in [3]. Specifically, we make use of the highest-probability instantiation of the Power Law + Peak parametric mass model and the
Figure 5: _Validation on synthetic catalogs drawn from selection injection set._ Result of hierarchical analyses on synthetic catalogs of increasing size \(N\) (color), obtained by taking draws from the injection set used to evaluate the selection function (Sec. II.3) and using them as single samples from the \(\hat{J}\) and \(\hat{N}\) posterior of synthetic events. We show the posterior on the components of \(\vec{v}_{J}\) (main corner), and the implied posterior on the magnitude \(\left|\vec{v}_{J}\right|\) (upper right). A lighter color corresponds to a larger catalog: starting with \(N=4\) events for the darkest color and progressively doubling the catalog size \(11\) times to reach \(N=8192\) events for the lightest color. The hierarchical analysis measures \(\vec{v}_{J}=\vec{0}\) with growing precision as the size of the catalog increases.
Default spin model, with parameters obtained from the posterior samples released in [36]. This model treats the astrophysical distribution of BH masses as a power law plus a Gaussian peak, with density evolving over comoving volume as a power law; the spins are isotropic with a possible Gaussian overdensity in alignment with the orbital angular momentum, and with magnitudes following a Beta distribution (see [3] for details). To implement this new astrophysical prior, both individual detections and the selection injections are reweighted accordingly.
The result of assuming this different astrophysical distribution is shown in Fig. 6, compared to the main result above. Although the new posterior differs slightly from our primary one in Fig. 2, as we might expect given the difference in models, the change is quite limited and does not qualitatively impact the discussion above. The discrepancy is somewhat more pronounced for the components of \(\vec{v}_{J}\), as we might expect from the fact that the reconstruction of \(\hat{J}\) must fold in our inference on the masses and three-dimensional spins of each BBH. In the future, a more comprehensive analysis might simultaneously measure \(\vec{v}_{J/N}\) and the distribution of astrophysical properties.
## VI Conclusion
We have demonstrated a measurement constraining the potential alignment of the total angular momenta of BBHs detected by LIGO and Virgo, using 57 detections from their third observing run and duly accounting for selection effects. In addition to alignment of momenta, we simultaneously looked for inhomogeneities in the distribution of sources over the sky. We found no evidence against isotropy in either the orientation or, consistent with previous works, the location of LIGO-Virgo BBHs. Additionally, we determined that the GWTC-3 data disfavor certain orientations of the potential preferred alignment of angular momenta more than others.
Future measurements will improve as the LIGO, Virgo and KAGRA detectors grow in sensitivity, resulting in many more detections at higher signal-to-noise ratios, which will result in more precise isotropy constraints. The advent of next generation detectors, like Cosmic Explorer [37, 38, 39] or the Einstein Telescope [40], will enable the exploration of higher order anisotropies and other interesting effects, like correlations with redshift or, potentially, local correlations in the directions \(\hat{N}\) and \(\hat{J}\) on the sky.
###### Acknowledgements.
We thank Reed Essick for valuable comments. The Flatiron Institute is a division of the Simons Foundation. V.V. acknowledges funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 896869. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS),
Figure 6: _Effect of astrophysical population._ We repeat the isotropy analysis with a different assumption about the underlying distribution of BBH parameters based on the measurement in [3]. This yields the result shown in black, as opposed to the result in Fig. 2, which is reproduced here in blue for comparison. The black contours enclose 90%, 50% and 10% of the marginal probability mass.
National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan. This is a reproducible article compiled with showyourwork[41]. This paper carries LIGO document number LIGO-P2300088.
## Appendix A Computing the total angular momentum
Here we describe how to compute the total angular momentum vector \(\vec{J}\) in the celestial coordinate frame of the main text, starting from BBH parameters measured in LIGO-Virgo analyses. The total angular momentum is defined, as in Eq. (2), to be the vector sum of the orbital angular momentum \(\vec{L}\) and the dimensionful BH spins \(\vec{S}_{1/2}\). We can thus compute \(\vec{J}\) by summing the components of \(\vec{L}\) and \(\vec{S}_{1/2}\) in some common frame with known orientation relative to our target.
The Cartesian components of the BH dimensionless spins, \(\vec{\chi}_{1/2}\equiv\vec{S}_{1/2}/m_{1/2}^{2}\) in units where \(G=c=1\), can be readily obtained from the LIGO-Virgo samples, in a frame in which the \(z\)-axis points along (the Newtonian) \(\vec{L}\) and the \(x\)-axis points along the orbital vector from the lighter to the heavier body [42] (see, e.g., Fig. 18 in [11]); this specification is established in reference to some specific point in the evolution of the system, e.g., when the GW signal at the detector reaches some frequency \(f_{\rm ref}\). With that information, all we need to compute the components of \(\vec{J}\) in that same frame is the magnitude \(|L|\) of the orbital angular momentum; then, the \(\vec{J}\) vector will be specified by
\[J_{x} =S_{1,x}+S_{2,x}, \tag{22a}\]
\[J_{y} =S_{1,y}+S_{2,y}, \tag{22b}\]
\[J_{z} =S_{1,z}+S_{2,z}+|L|, \tag{22c}\]
and \(\vec{J}=J_{x}\hat{x}+J_{y}\hat{y}+J_{z}\hat{L}\), where \(\hat{L}\equiv\vec{L}/|L|\) is the direction of the orbital angular momentum, \(\hat{x}\) is the orbital vector at the reference time, and \(\hat{y}\) completes the triad.
In the above equations, \((S_{1/2,x},S_{1/2,y},S_{1/2,z})\) are the components of the dimensionful spins, which we can derive from the respective components of the dimensionless spins \(\vec{\chi}_{1/2}\) using the masses \(m_{1/2}\) and the definition above. Following the standard LIGO-Virgo calculation [42],6 we can approximate the magnitude of the orbital angular momentum as [43; 44]
Footnote 6: Here we are following the definition of \(|L|\) within the LIGO-Virgo software; this is adopted in defining the inclination parameters \(\theta_{JN}\) and \(\iota\). The effect of truncating the series at the first post-Newtonian order is expected to be small, but could be revisited in future work.
\[|L|=L_{N}\left(1+\ell_{\rm 1PN}\right), \tag{23}\]
where \(L_{N}=m_{1}m_{2}/v\) is the Newtonian angular momentum, \(v=\left(\pi M{f_{\rm ref}}\right)^{1/3}\) is the post-Newtonian expansion parameter at the reference frequency, and \(\ell_{\rm 1PN}=v^{2}\left(3+\eta/3\right)/2\) is the first-order correction to \(|L|\), for the symmetric mass ratio \(\eta\equiv m_{1}m_{2}/M^{2}\) and the total mass \(M\equiv m_{1}+m_{2}\). We can then get the components of the orientation vector \(\hat{J}\equiv\vec{J}/|J|\) by normalizing Eq. (22).
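For concreteness, here is a minimal implementation of Eqs. (22)-(23) (our sketch): working in geometric units, masses in solar masses are converted to seconds via \(GM_{\odot}/c^{3}\) so that \(\pi Mf_{\rm ref}\) is dimensionless.

```python
import numpy as np

T_SUN = 4.925490947e-6  # GM_sun / c^3 in seconds

def orbital_L_magnitude(m1, m2, f_ref):
    """1PN-accurate |L| of Eq. (23); masses in solar masses, f_ref in Hz."""
    M = (m1 + m2) * T_SUN                    # total mass in seconds
    eta = m1 * m2 / (m1 + m2) ** 2           # symmetric mass ratio
    v = (np.pi * M * f_ref) ** (1.0 / 3.0)   # PN expansion parameter
    L_N = (m1 * T_SUN) * (m2 * T_SUN) / v    # Newtonian angular momentum
    return L_N * (1.0 + 0.5 * v ** 2 * (3.0 + eta / 3.0))

def J_hat(m1, m2, chi1, chi2, f_ref):
    """Unit total angular momentum in the (x, y, L) frame of Eq. (22)."""
    S1 = np.asarray(chi1) * (m1 * T_SUN) ** 2   # dimensionful spins
    S2 = np.asarray(chi2) * (m2 * T_SUN) ** 2
    J = S1 + S2 + np.array([0.0, 0.0, orbital_L_magnitude(m1, m2, f_ref)])
    return J / np.linalg.norm(J)
```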
The components of \(\vec{J}\) in Eq. (22) are defined in a binary-specific frame which does not facilitate comparisons across different systems. To express \(\vec{J}\) in a common frame for all binaries, all we need is to express the \((\hat{x},\hat{y},\hat{L})\) coordinate basis of Eq. (22) in a Celestial coordinate frame. We can achieve this by first obtaining the Cartesian components for \(\vec{L}\) in this frame using the measured right ascension \(\alpha\), declination \(\delta\), inclination \(\iota\), polarization angle \(\psi\), and the reference orbital phase \(\phi_{\rm ref}\) (see, e.g., Figs. 6 and 8 in [11]). In doing so, however, it is important to keep in mind that, since the polarization angle only enters the waveform as \(2\psi\) (i.e., \(\psi\) and \(\psi+\pi\) are degenerate), the LIGO-Virgo analyses usually only allow \(0\leq\psi<\pi\); yet, \(\psi\) and \(\psi+\pi\) are two physically distinct configurations, so we must for our purposes mirror the samples to ensure they span the full range of polarization angles, \(0\leq\psi<2\pi\) (or, equivalently, randomly add \(0\) or \(\pi\) to \(\psi\) for each sample).
Having properly accounted for this degeneracy, \(\hat{L}\) can be obtained from the source location \(\hat{N}\equiv-\hat{k}\) in Eq. (1), \(\iota\) and \(\psi\) via rotations and geometric products as
\[\hat{L}=R_{\hat{w}_{y}}(\iota)\,\hat{k}\,, \tag{24}\]
where \(R_{\hat{v}}(\theta)\) is a right-handed rotation by an angle \(\theta\) around some direction \(\hat{v}\), and \(\hat{w}_{y}\propto\hat{k}\times\hat{w}_{x}\) with \(\hat{w}_{x}=R_{\hat{k}}(\psi)\hat{u}\), for \(\hat{u}\) due west (see [45] for details). We can similarly obtain \(\hat{x}\) by rotating \(\hat{w}_{y}\) around \(\vec{L}\) by \(\phi_{\rm ref}\), namely
\[\hat{x}=R_{\hat{L}}(\phi_{\rm ref})\,\hat{w}_{y}\,, \tag{25}\]
and \(\hat{y}\propto\hat{L}\times\hat{x}\) completes the triad. This provides expressions for the components of the \((\hat{x},\hat{y},\hat{L})\) basis for each binary in a common reference frame, from which we can obtain the components of all \(\hat{J}\) vectors in that same frame via Eq. (22). Those components are the input for the hierarchical analysis described in the main text.
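The geometric construction above can be sketched compactly with Rodrigues' rotation formula. The expressions below for \(\hat{N}\) and the due-west vector \(\hat{u}\) in equatorial Cartesian coordinates are our assumptions about the frame conventions and should be checked against [45].

```python
import numpy as np

def rotate(vec, axis, angle):
    """Right-handed rotation of vec by angle about the unit vector axis."""
    axis = axis / np.linalg.norm(axis)
    return (vec * np.cos(angle)
            + np.cross(axis, vec) * np.sin(angle)
            + axis * (axis @ vec) * (1.0 - np.cos(angle)))

def orbital_basis(alpha, delta, iota, psi, phi_ref):
    """(x, y, L) basis of Eqs. (24)-(25) in equatorial Cartesian coordinates."""
    n_hat = np.array([np.cos(delta) * np.cos(alpha),
                      np.cos(delta) * np.sin(alpha),
                      np.sin(delta)])                       # source location
    k_hat = -n_hat                                          # propagation direction
    u_hat = np.array([np.sin(alpha), -np.cos(alpha), 0.0])  # due west (assumed)
    w_x = rotate(u_hat, k_hat, psi)
    w_y = np.cross(k_hat, w_x)
    L_hat = rotate(k_hat, w_y, iota)        # Eq. (24)
    x_hat = rotate(w_y, L_hat, phi_ref)     # Eq. (25)
    y_hat = np.cross(L_hat, x_hat)
    return x_hat, y_hat, L_hat
```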
## Appendix B Prior in Unconstrained Parameters
Recall that we sample in parameters \(\vec{u}_{N/J}\) defined by
\[\vec{u}_{N/J}=\frac{\vec{v}_{N/J}}{\sqrt{1-\vec{v}_{N/J}\cdot\vec{v}_{N/J}}} \tag{26}\]
that map the unit ball \(\mathbb{B}^{3}\) to \(\mathbb{R}^{3}\). A Gaussian prior with mean \(\vec{0}\) and standard deviation in each component of \(\sigma\)
on \(\vec{u}_{N/J}\) induces a prior on \(\vec{v}_{N/J}\) that is
\[\log p\left(\vec{v}_{N/J}\right)=\text{const}\\ -\frac{\left|\vec{v}_{N/J}\right|^{2}}{2\sigma^{2}\left(1-\left| \vec{v}_{N/J}\right|^{2}\right)}-\frac{5}{2}\log\left(1-\left|\vec{v}_{N/J} \right|^{2}\right). \tag{30}\]
The derivative of the density with respect to \(\left|\vec{v}_{N/J}\right|\) vanishes at the origin, as it must by symmetry. But the second derivative does not, and is
\[\frac{\partial^{2}\log p}{\partial\left|\vec{v}_{N/J}\right|^{2}}=5-\frac{1}{ \sigma^{2}}. \tag{31}\]
Thus, such a Gaussian prior will have a _maximum_ at \(\vec{v}_{N/J}=\vec{0}\) only if \(\sigma<1/\sqrt{5}\simeq 0.45\); otherwise it places too much prior mass on large \(\left|\vec{u}_{N/J}\right|\) which generates a ring-shaped maximum in the prior for some \(\left|\vec{v}_{N/J}\right|>0\) and a _minimum_ at the origin. With the desire to be uninformative about the typical scale of the components of \(\vec{v}_{N/J}\) while keeping the maximum prior density at the origin, we choose \(\sigma=0.4<1/\sqrt{5}\) for the analysis in this paper. A two-dimensional slice of the prior on the components of \(\vec{v}_{N/J}\) for this choice is illustrated in Fig. 7.
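A quick numerical check of this condition (our sketch) evaluates the induced log-density of Eq. (30) on a grid of \(\left|\vec{v}_{N/J}\right|\):

```python
import numpy as np

def log_prior_density(v_mag, sigma):
    """Log of Eq. (30) up to an additive constant, for 0 <= v_mag < 1."""
    one_minus = 1.0 - v_mag ** 2
    return -v_mag ** 2 / (2.0 * sigma ** 2 * one_minus) - 2.5 * np.log(one_minus)

v = np.linspace(0.0, 0.9, 1000)
for sigma in (0.4, 0.6):
    lp = log_prior_density(v, sigma)
    print(f"sigma = {sigma}: density peaks at |v| = {v[np.argmax(lp)]:.2f}")
# sigma = 0.4 < 1/sqrt(5) peaks at the origin, while sigma = 0.6 develops a
# ring-shaped maximum away from it, as predicted by Eq. (31).
```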
## Appendix C Direction of total angular momentum for individual events
In Fig. 8, we present posteriors on the direction of the total angular momentum \(\hat{J}\) for all 57 events in our set. We produced these posteriors by applying the calculation described in Appendix A to the samples released by LIGO-Virgo [20, 21], after reweighting as outlined in Sec. II.2.
The skymaps in the figure provide a Mollweide projection of probability density over the celestial sphere, in the standard equatorial, geocentric coordinates used in the main text. A darker color corresponds to a direction in the sky with more probability density. From these maps, it is clear that not all events are equally informative about \(\hat{J}\); better-constrained events will tend to dominate our hierarchical measurement.
The skymaps were produced using the ligo.skymap package [33, 34, 35], a set of standard LIGO-Virgo tools for processing probability densities over the sphere. We make these figures, the corresponding Flexible Image Transport System (FITS) files, and the code used to generate them available in our data release [12].
|
2301.08914 | ExClaim: Explainable Neural Claim Verification Using Rationalization | With the advent of deep learning, text generation language models have
improved dramatically, with text at a similar level as human-written text. This
can lead to rampant misinformation because content can now be created cheaply
and distributed quickly. Automated claim verification methods exist to validate
claims, but they lack foundational data and often use mainstream news as
evidence sources that are strongly biased towards a specific agenda. Current
claim verification methods use deep neural network models and complex
algorithms for a high classification accuracy but it is at the expense of model
explainability. The models are black-boxes and their decision-making process
and the steps it took to arrive at a final prediction are obfuscated from the
user. We introduce a novel claim verification approach, namely: ExClaim, that
attempts to provide an explainable claim verification system with foundational
evidence. Inspired by the legal system, ExClaim leverages rationalization to
provide a verdict for the claim and justifies the verdict through a natural
language explanation (rationale) to describe the model's decision-making
process. ExClaim treats the verdict classification task as a question-answer
problem and achieves a performance of 0.93 F1 score. It provides subtasks
explanations to also justify the intermediate outcomes. Statistical and
Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy
outcomes. Ensuring claim verification systems are assured, rational, and
explainable is an essential step toward improving Human-AI trust and the
accessibility of black-box systems. | Sai Gurrapu, Lifu Huang, Feras A. Batarseh | 2023-01-21T08:26:27Z | http://arxiv.org/abs/2301.08914v1 | # ExClaim: Explainable Neural Claim Verification Using Rationalization
###### Abstract
With the advent of deep learning, text generation language models have improved dramatically, with text at a similar level as human-written text. This can lead to rampant misinformation because content can now be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often use mainstream news as evidence sources that are strongly biased towards a specific agenda. Current claim verification methods use deep neural network models and complex algorithms for a high classification accuracy but it is at the expense of model explainability. The models are black-boxes and their decision-making process and the steps it took to arrive at a final prediction are obfuscated from the user. We introduce a novel claim verification approach, namely: ExClaim, that attempts to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim and justifies the verdict through a natural language explanation (rationale) to describe the model's decision-making process. ExClaim treats the verdict classification task as a question-answer problem and achieves a performance of 0.93 F1 score. It provides subtasks explanations to also justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring claim verification systems are assured, rational, and explainable is an essential step toward improving Human-AI trust and the accessibility of black-box systems.
Natural Language Processing, Rationalization, Claim Verification, Explainable AI, Language Modeling
## I Introduction
Information on the web has grown remarkably in the recent few decades, and misinformation has become a significant challenge. Misinformation can weaken public trust in institutions and economies. For instance, in 2013 the Associated Press published a false tweet claiming that President Barack Obama suffered injuries from an explosion in the White House, and within seconds the S&P 500 lost $130 billion in stock value [1]. The impact of such information is not only on global economies but also on all aspects of our lives, since it has become commonplace to consume news digitally. A leading cause for the rise of misinformation is that it can now be created more cheaply and distributed faster on the internet than through traditional platforms such as newspapers and television [2]. The growth of social media usage has also allowed effortless mass sharing of information.
The recent progress with Language Models (LMs) in Natural Language Processing (NLP) has made it even easier, cheaper, and faster for a machine to generate artificial text [3]. Generative LMs such as GPT-3 have become sophisticated at text generation, and their results are almost on par with human-written text in terms of readability, coherence, and grammatical accuracy [4]. In [5], the authors demonstrate that LMs are improving 10x every year in terms of parameters and model size, and that a tremendous amount of further progress in natural language text generation is bound to happen. There is a possibility that these models will reach (or even surpass) human performance in a few years.
Such advances can likely lead to more rampant misinformation than what we have today online. There has been a growing
Fig. 1: ExClaim Approach Example
interest in identifying the authenticity and truthfulness of online content. Professional fact-checkers and independent fact-checking organizations exist, but their work cannot scale linearly with the growth of digital content. This led to an increase in interest in automated fact-checking, also known as claim verification [6]. By fusing deep learning techniques with complex networks and algorithms, claim verification systems have achieved respectable performance but have become black-boxes. Optimizing for task performance comes at the expense of model explainability. The developers of the model and the end-users, whom we refer to as the nonexperts, do not fully know the internals or understand its decision-making processes. Furthermore, [6] points out that there exists no system that offers explanations for sub-tasks in the claim verification pipeline.
Currently, a claim is fed into the system, the model checks against an evidence dataset, and it returns a binary verdict as the output: true if the evidence supports the claim, and false otherwise. This approach has become commonplace [7]; however, many questions remain unanswered. _What evidence influenced the prediction? Is the evidence trustworthy? How can the nonexpert understand the model internals?_ These are all missing components of the process, and there are no widely accepted standards to assure the outcomes of these black-box systems [8].
Explainability techniques are available, but they are insufficient because many require specialized domain knowledge to understand. Claim verification systems are tools that would generally be used by a wide public audience. They need to be explainable in the sense that they appropriately justify their outcomes and are comprehensible to nontechnical users. Local Interpretable Model-agnostic Explanations (LIME), attention scores, and saliency heatmaps are generally helpful to the model developer, but not the end-user.
An emerging NLP explainability technique is called Rationalization [9]. It provides a rationale, also known as a natural language explanation (NLE), to justify a model's prediction. The reasoning behind a model's prediction can be understood simply by reading the rationale, thereby revealing the model's decision-making process. Rationalization can be an attractive explainability technique because it is intuitive and human-comprehensible, and does not require domain knowledge [10]. It is a type of local explanation because there is a unique explanation for each prediction [11]. Rationalization frames explainability as an outcome-explanation problem. It looks at explainability from the perspective of an end-user whose aim is to understand how the model arrives at its final outcome [12]. This process is encapsulated in the rationale statement.
Rationales are also widely used in the legal system. A judge rules a verdict, and it cannot be upheld without a legal opinion, which is a written explanation laying out the rationale for the ruling. Inspired by this, we propose a novel approach, namely ExClaim, which leverages rationalization for an explainable neural claim verification system that is accessible and trustworthy to the nonexpert. It can not only predict a verdict but also defend and rationalize its output as an NLE. We also introduce NLP Assurance and incorporate extensive explainability evaluations to assure that the verdict and NLE outcomes are valid, helping reinforce Human-AI trust [13].
The following are the contributions of this paper:
1. Introduces a new benchmark claim verification dataset with credible foundational information and a novel explainable claim verification approach called ExClaim.
2. Uses transfer learning and presents an effective method to generate abstractive rationales in an unsupervised manner without any ground truth.
3. Treats the verdict classification task as a question-answer problem and demonstrates that it can significantly help improve the classification performance.
4. Presents the problem of claim verification through the lens of explainability and is the first to introduce sub-task explanations in the claim verification space.
The structure of this paper is as follows: in Section 2, we present related work. In Section 3, we introduce a new benchmark claim verification dataset. In Section 4, we demonstrate our experiment methodology, while Section 5 describes experimental results. Lastly, in Section 6, we provide a conclusion and future directions.
## II Related Work
Several studies have been conducted on FEVER, LIAR, and MultiFC datasets, the most extensive datasets available for claim verification. FEVER does not contain real-world claims; instead, they are generated from Wikipedia. MultiFC uses the top \(k\) search results from Google as its evidence for claims, which is not reliable, and quality can vary drastically. Hence, we disregarded them in this study and focused on Politifact-based data. The LIAR dataset by [14] was the first large-scale Politifact-based claim verification dataset, and it led to an increase in research in this field. It is a six-way classification dataset, and [14] presents a Convolutional Neural Network (CNN) model that uses the claim and metadata such as speaker, party, and state, amongst others, to achieve a 0.277 accuracy. Using metadata could potentially introduce unwanted biases towards certain parties and groups: the model will be influenced more by those details than by the evidence for a claim. In [15], the authors use an attention mechanism on the speaker and topic and achieve a 0.41 accuracy.
In [16], the authors publish their LIAR-PLUS dataset with human-written justifications from each Politifact article, and their model uses a P-BiLSTM with claim and justification as inputs, achieving a 0.70 accuracy on binary classification on the Politifact data. The current SOTA performance on binary verdict classification on LIAR-PLUS is by [17], whose siamese network achieves a 0.82 accuracy. Although these datasets are different from the ExClaim dataset, they shed light on the progress of verdict classification using Politifact claims. ExClaim serves as a new benchmark dataset with more credible information, and our results from Section 5 serve as the new baselines.
In the previously discussed Politifact work, the role of explainability and methods to assure the black-box predictions
are untouched. A majority of explanation methods in the broader claim verification space are quantitative. Papers [18, 19, 20, 21], and [22] all propose attention-based explanations using BiLSTMs, LSTMs, CNNs, co-attention networks, and decision trees, respectively. In [23] and [24], they both use rule discovery to derive knowledge graphs and Horn rules explanations. Given the low accessibility of these explanation methods, [25] in 2020 presents the first study that jointly explains and predicts a verdict using a rationale. They use a supervised technique based on DistilBERT for rationale extraction and perform an annotation task to evaluate rationale quality and interpretability. In [26], the authors demonstrate this technique on a public health claims dataset. The progress of natural language explanations as an explainability technique has been minimal and constrained to supervised methods.
## III ExClaim Foundational Dataset
In [6], the authors note that the availability of datasets for experimentation is the current bottleneck with automatic claim verification. The two datasets that we first considered were LIAR [14] and LIAR-PLUS [16]. The datasets were developed by scraping Politifact, an independent fact-checking platform. However, they were not sufficient for our task because we found problems with the content, and a few features that we required were missing. Every Politifact article includes a claim, a six-class verdict, ruling comments written by a human fact-checker, a short justification summary at the end of the article, and metadata such as speaker, subject, and other components. The LIAR and LIAR-PLUS datasets exclude the ruling comments, and previous work relies primarily on using the justification for claim verification. Frequently, this justification summary misses essential information from the ruling comments and serves as more of a closure to the article. Therefore, the datasets were not a reliable structure for our task.
We developed a new claim verification dataset using Politifact for our experiment that includes information on 4000+ articles, as shown in Table I. Our dataset consists of the following features: Claim, Date, Source, Verdict, the ruling comments (which we refer to as the Evidence), and the URL. Determining the authenticity and ensuring that the evidence is neutral is crucial for claim verification. Suppose the evidence includes polarizing language and has a strong True or False sentiment. In that case, the model uses those features to predict a verdict instead of truly learning the evidence content. Additionally, Politifact often cites mainstream news mediums as sources. Most of the time, these sources are heavily biased and skew to the left or right with their content production. Due to these factors, data cleaning is performed based on the following criteria.
When verifying claims, credible and trustworthy evidence is essential. Paper [8] points out that evidence is primarily available from reputable institutions such as world governments and international organizations. News outlets are mostly avenues of secondary sources. We identified the top 30 most popular news platforms and removed evidence paragraphs citing them. Our evidence primarily includes information from the U.S. government (State and Federal Laws, and government departments and agencies such as the Centers for Disease Control and Prevention, Bureau of Labor Statistics, and Census Bureau, amongst others) and also international organizations (United Nations, World Health Organization, World Trade Organization, and others). Politifact includes six classes for the verdict (True, Mostly True, Half True, Mostly False, False, Pants on Fire). We only scraped articles with True or False labels since the accuracy for the other classes was not clear. Claim verification systems that output _True_ or _False_ as the verdict can be misleading because there is always a possibility for error. We pre-process the verdict labels in our dataset and convert True and False into Supports and Refutes, respectively. From the end user's perspective, this approach makes it explicit that the evidence at hand either supports or refutes the claim. The end user has the option to conclude whether the claim is true or false from the given outputs. We release this as the ExClaim Foundational dataset, a new benchmark claim verification dataset with foundational sources2.
Footnote 2: [https://github.com/AI-VTRC/ExClaim](https://github.com/AI-VTRC/ExClaim)
## IV Methods
The Principle of Factor Sparsity, also known as the Pareto principle, states that for many outcomes in a system, 80% of the effects come from 20% of the causes [27]. This principle generally holds true in many situations and has been successfully applied in various disciplines such as medicine, engineering, and economics [28, 29]. In computer science, it was shown useful for the optimization of genetic algorithms [30]. It has also been applied in software testing, and [31] demonstrates that on average "20% of the code has 80% of the errors".
Inspired by the Pareto principle, we hypothesize that on average only 20% of the information in the evidence is important to verify a claim and the remainder 80% is nonessential. This is similar to how humans approach manual claim verification. There are a few key pieces of information in the evidence that most influence our decision and compel us to rule that a claim is either true or false. Paper [32] categorizes this as an explain-then-predict model and [33, 34, 35] cite improvements in task performance optimization after using similar techniques. This lays the foundation for our ExClaim approach as illustrated in Figure 2.
### _Rationale Generation_
To determine the salient information in the evidence, we use transfer learning and treat it as an unsupervised summarization task by using language models. Summarization techniques have been shown to distill the most important information from a given source with high accuracy [36]. We specifically use abstractive summarization because extractive techniques lack coherence, readability, and overall quality that does not reflect human-written summaries [37, 38]. We refer to the generated summary as rationales which later become part of the NLE to justify the verdict prediction.
Language models perform well at summarization, as first demonstrated by [39], where a BERT-based abstractive summarization model outperforms most non-Transformer-based models. However, a majority of the current language models have a maximum sequence length of 512 tokens. As Table I shows, the average sequence length of the evidence is 449 tokens, with some articles over the 512-token limit. To minimize truncation, we leverage the BART language model, which has a maximum sequence length of 1024 tokens and outperformed previous work on summarization at its publication [40]. BART is pre-trained on the English language, and we specifically use the bart-large-cnn checkpoint from Hugging Face that has been fine-tuned on a large corpus of text-summary pairs [41]. We learn the function \(f(E)=P^{R}\) where, given the evidence \(E=\{E_{0},E_{1},\ldots,E_{n}\}\), it predicts the paragraph of rationales where \(P^{R}=\{P^{R}_{0},P^{R}_{1},\ldots,P^{R}_{n}\}\). The terms \(E_{i}\) and \(P^{R}_{i}\) are the _i_-th words of the evidence and the rationales paragraph, respectively.
The function \(f(E)\) tokenizes the input \(E\) and passes it to a bidirectional encoder, and \(P^{R}\) is generated through an autoregressive decoder on the final hidden layer. We use this model to perform inference on each evidence sample in the train, validation, and test sets to generate their respective rationales. The output \(P^{R}\) is set to a maximum sequence length of 120 tokens and a minimum of 75 tokens, which we found to be optimal after experimentation. The claim was not an input in this process; this ensures that the rationales are balanced and not heavily biased towards either side of the verdict.
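A minimal sketch of this inference step with the Hugging Face pipeline API is shown below; the checkpoint name follows the text, and the length limits are those stated above.

```python
from transformers import pipeline

# Fine-tuned BART summarizer; inputs beyond 1024 tokens are truncated.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def generate_rationales(evidence: str) -> str:
    """f(E) = P^R: distill the evidence into a paragraph of rationales."""
    out = summarizer(evidence, min_length=75, max_length=120, truncation=True)
    return out[0]["summary_text"]
```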
### _Verdict Classification_
In [42], the authors demonstrate the benefits of multitask learning where downstream tasks are viewed as a question-answer problem using a pre-trained language model. Based on our findings, this method has not been explored in the claim verification domain. Given its promising results in other domains, we frame the ExClaim classification task as a question-answer problem.
We leverage a pre-trained T5 model, a text-to-text generation model [43] with SOTA results on many NLP tasks including question-answering. We specifically use T5's capability with the Choice of Plausible Alternatives (COPA) task. COPA was introduced by [44], and it is a type of question-answering problem that examines automated commonsense causal reasoning. The objective is, given a question, a premise as context, and two answer choices, to perform binary classification and determine the correct answer choice. For this task, we conceive a function \(g(S)=P^{V}\) where \(S\) is the input prompt sequence and \(P^{V}\in\{V^{S},V^{R}\}\) is the binary verdict prediction with \(V^{S}=\{Supports\}\) and \(V^{R}=\{Refutes\}\). We formulate \(S\) as follows: the claim serves as the question \(C=\{C_{0},C_{1},\ldots,C_{n}\}\), with \(C_{i}\) indicating the _i_-th word in the claim; the generated paragraph of rationales serves as the premise \(P^{R}\); and the possible verdicts are \(V^{S}\) and \(V^{R}\).
Fig. 2: ExClaim Architecture
The sequence \(S\) is fed in as "copa choice1: \(V^{S}\) choice2: \(V^{R}\) premise: \(P^{R}\) question: \(C\)".
We fine-tune the T5 model on our train dataset with the sequence \(S\) for each sample. We set the batch size to 8, use cross-entropy as the loss function and AdamW as the optimizer with a 2e-5 learning rate, and train our model for 20 epochs [45]. We evaluate the model's performance on the validation set every 350 steps. A majority of the input sequences were below T5's maximum sequence length of 512 tokens, and if the length of \(S\) went over the limit, T5 does not truncate automatically since it uses relative positional bucketing, or relative attention [46]. It continues to work until it runs out of memory, which never occurred during our experimentation.
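The input formatting and a single inference call are sketched below; the `t5-base` checkpoint is an assumption for illustration, since the paper does not state the model size used.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

def copa_sequence(claim: str, rationales: str) -> str:
    """Build the prompt S of Section IV-B."""
    return ("copa choice1: Supports choice2: Refutes "
            f"premise: {rationales} question: {claim}")

tok = T5Tokenizer.from_pretrained("t5-base")        # assumed model size
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Fine-tuning targets are the literal strings "Supports" / "Refutes";
# at inference, the decoded output string is taken as the verdict P^V.
s = copa_sequence("The unemployment rate doubled.",
                  "Official statistics show the rate was roughly flat.")
ids = tok(s, return_tensors="pt").input_ids
verdict = tok.decode(model.generate(ids, max_new_tokens=4)[0],
                     skip_special_tokens=True)
```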
#### Iv-B1 Baselines
We also utilize other architectures as baselines and they are as follows.
**LSTM** Previous work such as [14, 16] has relied heavily on LSTM networks for verdict classification. We designed a multi-input LSTM classification model. It takes the claim and evidence as two inputs, which are converted into GloVe embeddings and individually passed through an embedding layer and an LSTM layer with 128 features. The outputs of these are concatenated and passed through a dense layer with ReLU activation and a final dense layer with two neurons and softmax activation for the binary prediction output.
**DistilBERT** The DistilBERT model is also used often in the claim verification space. We fine-tune a pre-trained DistilBERT model. We set the batch size to 8, enable a sliding window to prevent truncation, use AdamW for optimization, and train for 2 epochs.
**BART** We use a BART Zero-Shot classification model, specifically the fine-tuned _bart-large-mnli_ model checkpoint from Hugging Face, and we directly perform inference with a zero-shot problem setup.
### _Natural Language Explanation_
The NLE is a generative task using a language model. However, during experimentation with SOTA natural language generation models such as GPT-2 [48] and GPT-3 [49], there were challenges with respect to language model hallucination. The model hallucinates by introducing unintended and irrelevant text in the generation process. Paper [50] suggests that such problems diminish user trust and fail to meet user expectations. Especially for a claim verification system, introducing unintended text makes the entire system worthless and can lead to dangerous consequences. We therefore avoid free-form generation and instead build the NLE from the rationales, which were generated with abstractive rather than extractive summarization; as a result, the rationales reflect the natural style of human-written text. We formulate the NLE by concatenating the claim \(C\), the rationales \(P^{R}\), and the verdict \(P^{V}\). The resulting NLE, \(P^{N}\), is as follows: "The evidence \(P^{V}\) the claim because \(P^{R}\)".
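Because the NLE is assembled from already-generated pieces, the final step reduces to a fixed template (our sketch):

```python
def natural_language_explanation(verdict: str, rationales: str) -> str:
    """P^N: 'The evidence {supports|refutes} the claim because <P^R>'."""
    return f"The evidence {verdict.lower()} the claim because {rationales}"
```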
## V Results and Discussion
In this section, we describe the experimental results with supporting analysis.
### _SHAP Explanation_
The process of extracting the rationales is an intermediate step in the ExClaim approach. There is also no available ground truth data to assess its accuracy. We used an explainability technique called SHAP, which leverages game theory to explain the output of any ML model [47]. SHAP is based on Shapley values from cooperative game theory in economics, which measure the marginal contribution of a feature towards the final outcome of a system. These values are computed by meticulously perturbing the input features and observing how those changes affect the final prediction of the model. While performing inference to generate the rationales, we also pass our model to a SHAP explainer. Figure 3 shows the output explanation, where the explainer provides Shapley values for input features in the text that influenced the generation of \(P^{R}_{i}\) in \(P^{R}\). The red highlights indicate features with negative Shapley values, which are negative contributions; similarly, blue represents positive contributions and values. SHAP is an accessible explanation method for the nonexpert because the explainer illustrates the marginal influence of each component by visualizing the Shapley values with colored highlights.
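A hedged sketch of this step is shown below; SHAP's text explainer supports Hugging Face pipelines, though the exact plotting call may vary across SHAP versions, and the evidence string is a placeholder.

```python
import shap
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
explainer = shap.Explainer(summarizer)  # perturbs input tokens

# One evidence paragraph from the test set (placeholder text here).
shap_values = explainer(["<evidence paragraph>"])
shap.plots.text(shap_values)  # red/blue token highlights as in Fig. 3
```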
### _Verdict Classification_
We measure the accuracy of our verdict classification models using the macro F1 score. The models we experimented with and their results for the validation and test datasets are shown in Table II. Since the BART Zero-Shot model was not fine-tuned on this dataset, we only include the F1 score from the test set inference. Our approach using ExClaim T5 COPA outperforms the baselines, specifically the DistilBERT model, by 10 points on the validation and 12 points on the test set. An advantage of language models is that they are pre-trained on existing knowledge. When fine-tuned on downstream tasks, they can use transfer learning and leverage the pre-existing data to improve task performance. Language models such as BART, DistilBERT, and T5 were pre-trained on vast amounts of internet data, which hints at why the pre-trained language models performed better than the LSTM model. Amongst the language models, zero-shot classification performed poorly; a potential reason is that fine-tuning generally helps improve task performance and accuracy.
### _Natural Language Inference_
The NLE is an open-ended text without ground truth data. This eliminated any possibility of using the ROUGE-N or BLEU score for evaluation. The work by [51] demonstrates that Natural Language Inference (NLI) can be confidently
used to evaluate the correctness of abstractive summarization results without any human-written references. In [52], they claim that assessing the accuracy of the textual entailment can be a reasonable evaluation method. NLI attempts to determine, given a premise, whether the hypothesis is true (entailment), false (contradiction), or undetermined (neutral) [53].
We use the NLI textual entailment technique to automatically evaluate our NLE. We used a pre-trained T5-large model, specifically the CB task, which was fine-tuned on [54]'s NLI dataset. We experimented with other T5 NLI tasks such as QNLI, MNLI, and RTE, and we found that CB had the best performance. For each generated NLE, \(P^{N}\), in the test set, we formulate the NLI input for our model as follows: "cb hypothesis: \(C\) premise: \(P^{N}\)". Essentially, we test whether the claim can be logically deduced from the NLE. The results are shown in Table III. Neutral NLEs are the majority with 42%, followed by contradiction and entailment. A potential reason for this is that we are currently relying on transfer learning for the NLI evaluation, and the model has not been fine-tuned on our dataset. Identifying and establishing a logical connection between the NLE and the claim can be challenging. An alternate reason is that the quality of the rationales may be good enough not to be a contradiction the majority of the time, but not good enough for the premise to entail the hypothesis, which may be why neutral is the majority.
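The evaluation prompt mirrors the COPA formatting (our sketch):

```python
def cb_sequence(claim: str, nle: str) -> str:
    """NLI input: does the NLE (premise) entail the claim (hypothesis)?"""
    return f"cb hypothesis: {claim} premise: {nle}"

# The decoded T5 output is one of {"entailment", "contradiction",
# "neutral"}; tallying these labels over the test set yields Table III.
```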
### _Manual Evaluation_
In addition to the automatic evaluation with NLI, we also conduct a manual evaluation of the generated NLEs to assess their quality. For the evaluation setup, we randomly select 100 claims and their respective NLE instances from our test set. We had three annotators who were shown the claim and its NLE, and their task was to judge the following criteria: Plausibility, Fluency, and Correctness. The annotators were required to rely only on the information provided, with each working individually. Since non-expert annotators might find it difficult to give thoughtful judgment, we adopt a rating scale from 1 to 5 for each criterion, as presented in Table IV.
We define plausibility as how convincingly the NLE explains the verdict prediction [55], and its rating scale is adopted from [56]. Fluency is defined by [57] as having correct spelling, good grammar, flow, and tone, and its rating scale is adopted from [57]. Correctness is defined as the likelihood of the NLE and the verdict being true, and its scale is from [56].
Figure 4 shows the results of the manual evaluation. We use DistilBERT as a comparison since it is often used in the claim verification space [25]. On average, ExClaim outperformed DistilBERT on all of the ratings. A major contributor is that ExClaim is an ensemble approach that leverages two strong language models (BART and T5) and is therefore better able to work in our dynamic context of rationale creation, verdict classification, and NLE generation. Both approaches have the same criteria ranking, where fluency has the highest rating, then plausibility, and lastly correctness. This is expected for fluency, since language models generally work well at producing proper text without major grammatical errors. Plausibility was the lowest on average, indicating that the rationales chosen could be more persuasive. DistilBERT has a lower rating for plausibility, which may be due to its limitations with text summarization: the BART model used in ExClaim for rationale generation outperforms BERT-based models at text summarization. Further, DistilBERT performs extractive summarization, which creates challenges for the coherency and convincingness of the rationales. Correctness on average demonstrates that there is a decent likelihood that the rationales presented with the verdict are accurate for the claim. The T5 model in ExClaim excels at transfer learning and is therefore better able to classify the verdict with respect to the rationales presented at inference time. In ExClaim, the verdict and rationales may have strong entailment in the NLE, whereas with DistilBERT they may be more random and contradictory.

Fig. 3: SHAP Explanation [47]
## VI Conclusions
In this paper, we introduced a new benchmark dataset with foundational information (from government policy documents and international organizations), and we presented promising baseline results on the verdict classification task using ExClaim's _question and answer_ approach. We also demonstrated an unsupervised method of using transfer learning and abstractive summarization to generate rationales, which are used to determine the verdict and generate an NLE. We showed multiple explainability methods to assure both the final outcomes and the intermediate sub-task outcomes, which has not been done before in a claim verification system. We presented a novel claim verification approach and tackled the problem with rigorous explainability to ensure the system is accessible, transparent, and trustworthy to the non-expert.
Certain limitations do exist, such as the verdict prediction and NLE being heavily impacted by the quality of the foundational data available. Additionally, claims need to be closely related to the information included in the foundational dataset. Many promising directions for future work exist to further progress explainable claim verification. For example, developing better abstractive methods for NLE generation from a given claim and rationales while avoiding LM hallucination. Obtaining the evidence directly from the sources listed on Politifact can yield more plausible rationales than relying on the ruling comments. Rationale generation techniques without summarization have also been explored very little. Lastly, incorporating AI assurance methods as demonstrated by [13] in NLP is also a viable path for improving general model trustworthiness.
|
2306.12733 | Logarithmic or algebraic: roughening of an active Kardar-Parisi-Zhang
surface | The Kardar-Parisi-Zhang (KPZ) equation sets the universality class for
growing and roughening of nonequilibrium surfaces without any conservation law
and nonlocal effects. We argue here that the KPZ equation can be generalised by
including a symmetry-permitted nonlocal nonlinear term of active origin that is
of the same order as the one included in the KPZ equation. Including this term,
the 2D active KPZ equation is stable in some parameter regimes, in which the
interface conformation fluctuations exhibit sub-logarithmic or
super-logarithmic roughness, with nonuniversal exponents, giving positional
generalised quasi-long-ranged order. For other parameter choices, the model is
unstable, suggesting a perturbatively inaccessible algebraically rough
interface or positional short-ranged order. Our model should serve as a
paradigmatic nonlocal growth equation. | Debayan Jana, Astik Haldar, Abhik Basu | 2023-06-22T08:18:10Z | http://arxiv.org/abs/2306.12733v3 | # Logarithmic or algebraic: roughening of an active Kardar-Parisi-Zhang surface
###### Abstract
The Kardar-Parisi-Zhang (KPZ) equation sets the universality class for growing and roughening of nonequilibrium surfaces without any conservation law that has only a rough phase in two dimensions (2D). We argue here that the KPZ equation can be generalised by including a symmetry-permitted nonlocal nonlinear gradient term of active origin that is of the _same order_ as the one included in the KPZ equation. Including this term, the 2D _active_ KPZ equation is stable in some parameter regimes, in which the interface conformation fluctuations can exhibit _sub-logarithmic_ or _super-logarithmic_ roughness, with nonuniversal exponents, giving positional generalised quasi-long-ranged order. In other parameter regimes, there is an algebraically rough interface or positional short-ranged order. Our model serves as a minimal description of a nearly flat interface with a _fast_ interacting conserved active species living on it.
The Kardar-Parisi-Zhang (KPZ) equation [1; 2] is a paradigmatic nonequilibrium model originally introduced to describe the dynamics of a growing nonequilibrium surface of height \(h({\bf x},t)\) without overhangs, measured with respect to an arbitrary base plane. The KPZ equation has been extensively studied [3; 4; 5] in the last three decades. It has \(d=2\) as the _lower critical dimension_, above which there is a nonequilibrium roughening transition between a smooth phase, whose long wavelength scaling properties are identical to those of an Edward-Wilkinson (EW) surface [6] at the same dimension \(d\), and a perturbatively inaccessible rough surface [7; 2]. Notable variants of the KPZ equation, proposed to describe nonequilibrium surfaces outside its scope, include the conserved KPZ [8; 9] and the \(|{\bf q}|\)KPZ [10] equations.
Theoretical studies of nonlocal interactions have a long-standing history in equilibrium systems [11; 12; 13; 14; 15; 16]. Their nonequilibrium analogues are, however, much less explored. Kinetic roughening in the presence of nonlocal interactions is studied in Ref. [17], predicting generic non-KPZ scaling behaviour. Other important examples include Levy-flight-like motion [18], nonlocal effects due to an underlying network architecture [19], spatially-dependent reaction rates [20], and auto-chemotactic interactions [21].
In this Letter, we set up and study a nonlocal generalisation of the KPZ equation by adding a symmetry-permitted nonlocal nonlinear gradient term that is of the same order as the usual KPZ nonlinear term. To generalise the scope of our study, we also include chiral contributions, which are ubiquitous in soft matter and biologically inspired systems; see, e.g., Refs. [22; 23]. The resulting equation in 2D, named the _active KPZ_ or a-KPZ equation, which we derive below, is
\[\frac{\partial h}{\partial t}= \nu\nabla^{2}h+\frac{\lambda}{2}(\mathbf{\nabla}h)^{2}+\lambda_{1}Q_ {ij}({\bf r})(\nabla_{i}h\nabla_{j}h) \tag{1}\] \[+\lambda_{2}Q_{ij}({\bf r})e_{jm}(\nabla_{i}h\nabla_{m}h)+\eta,\]
a _nonlocal_ generalisation of the usual KPZ equation that is distinct from the one considered in Ref. [17]. Here, the tensor \(e_{jm}\) is the 2D totally antisymmetric matrix. Further, \(Q_{ij}({\bf r})\) is the longitudinal projection operator, which in Fourier space is \(Q_{ij}({\bf k})=k_{i}k_{j}/k^{2}\), where \({\bf k}\) is a Fourier wavevector, and is nonlocal. Physically, \(\lambda_{1}Q_{ij}({\bf r})(\nabla_{i}h\nabla_{j}h)+\lambda_{2}Q_{ij}({\bf r})e_{jm}(\nabla_{i}h\nabla_{m}h)\) is the contribution to the surface velocity normal to the base plane, \(v_{p}=\partial h/\partial t\), that is _nonlocal_ in the height fluctuations \(\mathbf{\nabla}h\). This is in contrast to the KPZ equation, where \(v_{p}\) is fully local in \(\mathbf{\nabla}h\). The noise \(\eta\) is a zero-mean, Gaussian-distributed white noise with variance \(\langle\eta({\bf x},t)\eta(0,0)\rangle=2D\delta^{d}({\bf x})\delta(t)\). We extract the scaling of the stable phases, which exist for a range of the model parameters. In particular, we show that the variance \(\Delta\equiv\langle h({\bf x},t)^{2}\rangle\sim[\ln(L/a)]^{\mu}\) for a surface of lateral size \(L\), where \(\mu<(>)\,1\) for sub-(super-)logarithmic roughness and \(a\) is a microscopic cutoff. This defines positional generalised quasi-long-ranged order (QLRO), generalising the well-known quasi-long-ranged order of EW surfaces [2], for which \(\Delta\sim\ln(L/a)\), i.e., \(\mu=1\). Further, the time-scale of relaxation is \(\tau(L)\sim L^{2}\left(\ln\,L\right)^{-\kappa}\), \(\kappa>0\), i.e., logarithmically superdiffusive. Both \(\mu\) and \(\kappa\) are _non-universal_: they vary continuously with \(\lambda_{1}/\lambda\) and \(\lambda_{2}/\lambda\).
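Although the analysis below is analytical, Eq. (1) is also straightforward to integrate numerically, since the projector \(Q_{ij}\) is diagonal in Fourier space. The following is a minimal pseudo-spectral sketch (Euler-Maruyama time stepping; all parameter values are illustrative and dealiasing is omitted), not the scheme of any simulation reported here.

```python
import numpy as np

# Pseudo-spectral integration of the a-KPZ equation (1) on a periodic grid.
L, N, dt, steps = 64.0, 128, 1e-3, 1000
dx = L / N
nu, lam, lam1, lam2, D = 1.0, 1.0, 0.2, 0.1, 0.01

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2_safe = np.where(k2 == 0.0, 1.0, k2)    # regularise the projector at k = 0
e = np.array([[0.0, 1.0], [-1.0, 0.0]])   # antisymmetric tensor e_{jm}

h = np.zeros((N, N))
rng = np.random.default_rng(0)
for _ in range(steps):
    hk = np.fft.fft2(h)
    g = [np.real(np.fft.ifft2(1j * kx * hk)),   # g[i] = grad_i h
         np.real(np.fft.ifft2(1j * ky * hk))]
    rhs = nu * np.real(np.fft.ifft2(-k2 * hk)) + 0.5 * lam * (g[0]**2 + g[1]**2)
    kv = [kx, ky]
    for i in range(2):
        for j in range(2):
            Q = kv[i] * kv[j] / k2_safe         # Q_ij(k) = k_i k_j / k^2
            rhs += lam1 * np.real(np.fft.ifft2(Q * np.fft.fft2(g[i] * g[j])))
            for m in range(2):
                if e[j, m] != 0.0:
                    rhs += lam2 * e[j, m] * np.real(
                        np.fft.ifft2(Q * np.fft.fft2(g[i] * g[m])))
    # White noise with variance 2D, delta-correlated on the lattice.
    h += dt * rhs + np.sqrt(2 * D * dt) / dx * rng.standard_normal((N, N))
```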
The form of Eq. (1) can be obtained by considering the mapping from the KPZ equation to the Burgers equation [24] in terms of the "Burgers velocity" \({\bf v}=\mathbf{\nabla}h\). Now generalise the Burgers equation nonlinearity \(\lambda\mathbf{\nabla}v^{2}\) to \(\lambda_{1}\nabla_{j}(v_{i}v_{j})+\lambda_{2}e_{jm}\nabla_{j}(v_{i}v_{m})\); writing these in terms of \(h\) produces the \(\lambda_{1}\)- and \(\lambda_{2}\)-terms in (1); see Supplemental Material (SM).
The \(\lambda_{1}\)- and \(\lambda_{2}\)-terms can be motivated by considering a nearly flat nonequilibrium surface, described by a single-valued height field \(h({\bf x},t)\) in the Monge gauge [25; 26], with a conserved active density \(\rho({\bf x},t)\) living on the surface. Retaining only the lowest order in nonlinearities and spatial gradients, its hydrodynamic equation has the form
\[\frac{\partial h}{\partial t}=\nu\nabla^{2}h+\frac{\lambda}{2}(\mathbf{\nabla}h)^{2}+v (\rho)+\eta, \tag{2}\]
where \(v(\rho)\) is a local density-dependent velocity of the membrane; \(v(\rho)=v_{0}+g_{1}\rho\) to the leading order in \(\rho\), with \(g_{1}\) a coupling constant of either sign. Further, the density \(\rho\) follows \(\partial_{t}\rho=-\mathbf{\nabla}\cdot\mathbf{J}\), where \(\mathbf{J}\) is the current. The specific form of the particle dynamics decides the structure of \(\mathbf{J}\). We choose \(J_{i}=-\overline{D}\nabla_{i}\rho+\nabla_{j}\sigma_{ij}\), where \(\sigma_{ij}\equiv\alpha\nabla_{i}h\nabla_{j}h+\beta e_{jm}\nabla_{i}h\nabla_{m}h\) is reminiscent of the "active stresses" found in active matter theories [27], the \(\beta\)-term being a chiral contribution. The quadratic dependence of \(\mathbf{J}\) on \(\mathbf{\nabla}h\) implies that the active particles (i) respond, unsurprisingly, to the height fluctuations, but not to the absolute height, and (ii) in the absence of gravity, do not distinguish valleys from hills (although the surface itself breaks the inversion symmetry). Here, \(\overline{D}>0\) is a diffusivity. We focus on the quasi-static limit of infinitely fast dynamics of \(\rho\), such that \(\partial\rho/\partial t\approx 0\), giving \(\overline{D}\nabla^{2}\rho=\alpha\nabla_{i}\nabla_{j}(\nabla_{i}h\nabla_{j}h)+\beta e_{jm}\nabla_{i}\nabla_{j}(\nabla_{i}h\nabla_{m}h)\), neglecting any noise in the \(\rho\)-dynamics. Now use this to eliminate \(\rho\) in (2) to get (1), where a factor of \(\overline{D}\) has been absorbed. All of \(\lambda,\lambda_{1},\lambda_{2}\) can individually be positive or negative. It should be noted that the form of the chiral term is 2D-specific; the other two nonlinear terms, with coefficients \(\lambda\) and \(\lambda_{1}\), do not have dimension-specific forms and hence exist in any general dimension \(d\). Thus the \(\lambda_{1}\)- and \(\lambda_{2}\)-terms in (1) are physical, although our active-species origin need not be the only possible source of these two terms. In one dimension, the \(\lambda_{2}\)-term vanishes, and the \(\lambda_{1}\)-term becomes indistinguishable from the \(\lambda\)-term. The transformation \(x^{\prime}_{i}=x_{i}-(\lambda+2\lambda_{1})c_{i}t-\lambda_{2}e_{ij}c_{j}t\) and \(t^{\prime}=t\), together with the height function \(h\) transforming as \(h^{\prime}(\mathbf{x}^{\prime},t^{\prime})=h(\mathbf{x},t)+\mathbf{c}\cdot\mathbf{x}\), leaves Eq. (1) invariant; see SM. This generalises the invariance of the usual KPZ equation under a pseudo-Galilean transformation [2].
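For concreteness, the quasi-static elimination just sketched can be written out; this is a short worked step using only the definitions above, the explicit identification of the couplings being our inference rather than a statement from the original derivation:

\[\rho=\frac{1}{\overline{D}}\nabla^{-2}\left[\alpha\nabla_{i}\nabla_{j}(\nabla_{i}h\nabla_{j}h)+\beta e_{jm}\nabla_{i}\nabla_{j}(\nabla_{i}h\nabla_{m}h)\right],\]

so that inserting \(v(\rho)=v_{0}+g_{1}\rho\) into (2) reproduces (1) with \(\lambda_{1}=g_{1}\alpha/\overline{D}\) and \(\lambda_{2}=g_{1}\beta/\overline{D}\); the constant \(v_{0}\) can be removed by the shift \(h\to h+v_{0}t\).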
The nonequilibrium steady states (NESS) of Eq. (1) display universal spatiotemporal scaling, characterised by the time-dependent correlation function \(C_{h}\) of \(h\):
\[C_{h}(r,t)\equiv\langle[h(\mathbf{x},t)-h(0,0)]^{2}\rangle\sim r^{2\chi}f_{h} (r^{z}/t), \tag{3}\]
which defines the roughness exponent \(\chi\) and dynamic exponent \(z\). Here, \(r\equiv|\mathbf{x}|\) and \(f_{h}\) is a dimensionless scaling function of its argument.
Dimensional analysis via the scaling \(\mathbf{r}\to b\mathbf{r}\), \(t\to b^{z}t\), \(h\to b^{\chi}h\), together with the linear theory scaling exponents \(z=2\) and \(\chi=0\) at 2D (which keep the parameters in the linearised version of Eq. (1) unscaled), reveals that all of \(\lambda,\lambda_{1},\lambda_{2}\)_do not_ scale. This suggests that all of \(\lambda,\lambda_{1},\lambda_{2}\) are equally relevant (in the scaling sense) at 2D, and that \(d=2\) is the _critical dimension_ of Eq. (1). Whether it is the _upper_ or _lower_ critical dimension requires further analysis, which follows below.
Our first aim is to determine whether Eq. (1) has a stable NESS, and second, if so, the scaling properties of those NESS. We use the renormalisation group (RG) framework, which is well-suited to systematically handle the diverging corrections encountered in naive perturbation theories. The dynamic RG method is standard, and for our model it closely resembles the one for the KPZ equation [1; 2; 28].
We perform the Wilson momentum shell dynamic RG procedure at the one-loop order [2; 24; 28]; see SM for the one-loop Feynman diagrams. There are _no_ one-loop corrections to \(\lambda,\lambda_{1},\lambda_{2}\). However, there are diverging one-loop corrections to \(\nu\) and \(D\). Dimensional analysis allows us to identify an effective dimensionless coupling constant \(g\) and two dimensionless ratios \(\gamma_{1},\,\gamma_{2}\), defined as \(g\equiv\frac{\lambda^{2}D}{\nu^{3}}\frac{1}{2\pi}\), \(\gamma_{1}\equiv\frac{\lambda_{1}}{\lambda},\,\gamma_{2}\equiv\frac{\lambda_{2}}{\lambda}\). By following the standard steps of RG, we obtain the RG recursion relations for \(D,\,\nu\) at the one-loop order (here \(l\) is the "RG time"; \(\exp(l)\) is a length-scale):
\[\frac{dD}{dl} =D\Big{[}z-d-2\chi+g\mathcal{B}(\gamma_{1},\gamma_{2})\Big{]}, \tag{4}\] \[\frac{d\nu}{dl} =\nu\Big{[}z-2+g\mathcal{C}(\gamma_{1},\gamma_{2})\Big{]}, \tag{5}\]
with \(\gamma_{1},\,\gamma_{2}\) being marginal at the one-loop order, stemming from the nonrenormalisation of \(\lambda,\lambda_{1},\lambda_{2}\) at that order. Equations (4)-(5) are invariant under the inversion of \(\lambda_{2}\), i.e., the interchange of a left-handed coordinate system with a right-handed one, leaving the long wavelength physics unaffected. Here, \(\mathcal{B}[\gamma_{1},\gamma_{2}]=\frac{3}{8}\gamma_{1}^{2}+\frac{1}{2}\gamma_{1}+\frac{1}{8}\gamma_{2}^{2}+\frac{1}{4},\,\mathcal{C}[\gamma_{1},\gamma_{2}]=\frac{1}{2}\gamma_{1}^{2}+\frac{5}{8}\gamma_{1}+\frac{1}{8}\gamma_{2}^{2}\); \(\mathcal{B}>0\). Flow equations (4)-(5) yield the flow equation for \(g\):
\[\frac{dg}{dl}=-g^{2}\mathcal{A}[\gamma_{1},\gamma_{2}], \tag{6}\]
where \(\mathcal{A}[\gamma_{1},\gamma_{2}]=\frac{9}{8}\gamma_{1}^{2}+\frac{11}{8} \gamma_{1}+\frac{1}{4}\gamma_{2}^{2}-\frac{1}{4}\).
We first focus on the achiral case, i.e., \(\gamma_{2}=0\). An RG flow diagram in the \(g-\gamma_{1}\) plane is shown in Fig. 1(a). The condition \(\tilde{\mathcal{A}}(\gamma_{1})\equiv\mathcal{A}(\gamma_{1},\gamma_{2}=0)=0\) defines two dashed lines \(\gamma_{1}=\gamma_{+},\,\gamma_{-}\) parallel to the \(g\)-axis in the \(g-\gamma_{1}\) plane, where \(\gamma_{+}=0.161,\gamma_{-}=-1.383\), such that for \(\gamma_{+}>\gamma_{1}>\gamma_{-}\) (green region), the RG flow lines flow away parallel to the \(g\)-axis towards infinity, indicating a perturbatively inaccessible, presumably rough, phase with short-ranged positional order. In this unstable region \(g(l)\) diverges as \(l\to 1/[|\tilde{\mathcal{A}}(\gamma_{1})|]\), reminiscent of the 2D KPZ equation [2]. Outside this region, where \(\tilde{\mathcal{A}}(\gamma_{1})>0\), the flow lines flow towards \(g=0\) parallel to the \(g\)-axis, implying stability, with \(g(l)\approx 1/[\tilde{\mathcal{A}}(\gamma_{1})\,l]\) vanishing _slowly_ in the long wavelength limit \(l\to\infty\). Although \(g^{*}=0\) is the _only_ fixed point (FP) in the stable region, the vanishing of \(g(l)\) is _so slow_, being proportional to \(1/l\), that the parameters \(D\) and \(\nu\) are _infinitely_ renormalised, altering the linear theory scaling in the long wavelength limit. The simplest way to see this is to set \(z=2\), \(\chi=0\) (i.e., their linear theory values) in (4) and (5) with \(\gamma_{2}=0\), which gives \(D(l)=D(0)l^{\tilde{\mathcal{B}}/\tilde{\mathcal{A}}}\) and \(\nu(l)=\nu(0)l^{\tilde{\mathcal{C}}/\tilde{\mathcal{A}}}\), where \(\tilde{\mathcal{B}}(\gamma_{1})\equiv\mathcal{B}(\gamma_{1},\gamma_{2}=0)\), \(\tilde{\mathcal{C}}(\gamma_{1})\equiv\mathcal{C}(\gamma_{1},\gamma_{2}=0)\), and \(D(0),\nu(0)\) are the small-scale or unrenormalised values of \(D\) and \(\nu\),
respectively. Since \(\tilde{\mathcal{B}}\) is positive definite, \(D(l)\gg D(0)\) for \(l\to\infty\). On the other hand, \(\tilde{\mathcal{C}}\) is positive in the stable regions, for which \(\nu(l)\gg\nu(0)\) for \(l\to\infty\), giving the time-scale \(\tau(L)\sim L^{2}[\ln(L/a)]^{-\kappa}\) for relaxation over a lateral size \(L\), where \(\kappa(\gamma_{1})=\tilde{\mathcal{C}}/\tilde{\mathcal{A}}\) is a positive definite but nonuniversal, \(\gamma_{1}\)-dependent exponent. The logarithmic modulation in \(\tau(L)\) implies (i) a breakdown of conventional dynamic scaling [29; 30; 31], and (ii) _nonuniversally faster_ relaxation of fluctuations, parametrised by \(\gamma_{1}\). Furthermore, by defining the RG time \(l\simeq\ln(1/aq)\) in Fourier space and using, equivalently, \(\nu(q),D(q)\), the variance is
\[\Delta\equiv\langle h^{2}(\mathbf{x},t)\rangle\sim\int_{1/L}^{1/a}d^{2}q\frac{ D(q)}{\nu(q)q^{2}}\sim[\ln(L/a)]^{\mu}, \tag{7}\]
where \(\mu(\gamma_{1})=1+(\tilde{\mathcal{B}}-\tilde{\mathcal{C}})/\tilde{\mathcal{A}}\) is also _nonuniversal_, parametrised by \(\gamma_{1}\), and can be more or less than unity, depending on the sign of \(\tilde{\mathcal{B}}-\tilde{\mathcal{C}}\), as mentioned above. Variations of \(\mu\) and \(\kappa\) as functions of \(\gamma_{1}\) are shown in Fig. 1(b). For \(\mu(\gamma_{1})<1\,(>1)\), \(\Delta(\gamma_{1})\) grows with the system size \(L\) slower (faster) than positional QLRO, as in the 2D EW equation [6]. We call these _stronger_ (_weaker_) than QLRO, or SQLRO (WQLRO), corresponding to _sub (super)_ logarithmically rough surfaces with positional generalised QLRO. In particular, the minimum value of \(\mu\) is \(0.89\). In Fig. 1(a) the red outer regions (yellow inner strips) correspond to SQLRO (WQLRO). These logarithmic modulations of the linear theory results are reminiscent of the logarithmic anomalous elasticity in three-dimensional equilibrium smectics [32; 33], and of a 2D equilibrium elastic sheet having vanishing thermal expansion coupled with Ising spins [34; 35]; see also Ref. [31] for similar results.
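The \(\gamma_{1}\)-dependence of these exponents is easy to tabulate numerically; the following minimal sketch simply evaluates the one-loop expressions quoted above for the achiral case and reproduces Fig. 1(b) qualitatively.

```python
import numpy as np

# One-loop RG functions for the achiral case (gamma_2 = 0), as in the text.
def A(g1): return 9/8 * g1**2 + 11/8 * g1 - 1/4
def B(g1): return 3/8 * g1**2 + 1/2 * g1 + 1/4
def C(g1): return 1/2 * g1**2 + 5/8 * g1

g1 = np.linspace(0.17, 50.0, 5000)   # part of the stable region, A(g1) > 0
mu = 1 + (B(g1) - C(g1)) / A(g1)     # variance exponent: Delta ~ (ln L)^mu
kappa = C(g1) / A(g1)                # relaxation: tau ~ L^2 (ln L)^(-kappa)
print(mu.min())                      # -> approaches 8/9 ~ 0.89, the quoted minimum
```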
The chiral effects (\(\gamma_{2}\neq 0\)) can be included in the above calculational scheme in a straightforward way: stability of the RG flow is now determined by \(\mathcal{A}(\gamma_{1},\gamma_{2})>0\). Flow lines having initial conditions within a narrow elliptical cylinder, containing the origin (0,0,0) and having its axis parallel to the \(g\)-axis, with its surface given by \(\mathcal{A}(\gamma_{1},\gamma_{2})=0\) for any \(g\), run away parallel to the \(g\)-axis, leaving the perturbatively accessible region. Flow lines with initial conditions falling _outside_ this elliptical cylinder flow towards the \(\gamma_{1}-\gamma_{2}\) plane, with stable states. See Fig. 1(c), depicting the RG flow lines in the space spanned by \(\gamma_{1}-\gamma_{2}-g\). Outside the elliptical cylinder \(g(l)\sim 1/(\mathcal{A}l)\) for large \(l\), similar to its achiral analogue. Inside the cylinder, \(g(l)\) diverges as \(l\to 1/(|\mathcal{A}|)\) from below. Focusing on the \(\gamma_{1}-\gamma_{2}\) plane, \(\mathcal{A}(\gamma_{1},\gamma_{2})=0\) traces out an inner elliptical unstable region, whereas the outer region is stable; see Fig. 1(d). We use the above results to find that in the stable region \(\Delta\sim[\ln(L/a)]^{\mu(\gamma_{1},\gamma_{2})}\), where \(\mu=1+(\mathcal{B}-\mathcal{C})/\mathcal{A}\) is now parametrised by both \(\gamma_{1}\) and \(\gamma_{2}\). Similarly to, and quantitatively extending, the achiral case, \(\mu<1\,(>1)\) is referred to as SQLRO (WQLRO), giving positional generalised QLRO. The SQLRO and WQLRO regions are demarcated within the stable region in Fig. 1(d).
We further find that in the equal-time limit, \(C_{h}\) defined in Eq. (3) scales as \(C_{h}(r,0)\sim\frac{D(0)}{\nu(0)}[\ln(r/a)]^{\mu}\) for large \(r\gg a\), indicating a _logarithmically faster_ or _slower_ rise with the separation \(r\) for large \(r\)[29; 30], again generalising the well-known QLRO found in an EW surface at 2D.
Our continuously varying scaling exponents are a crucial outcome of the nonrenormalisation of \(\lambda,\lambda_{1}\) and \(\lambda_{2}\), rendering \(\gamma_{1},\gamma_{2}\) marginal, which has been demonstrated at the one-loop order. Unlike in the usual KPZ equation, Galilean invariance of the present model ensures nonrenormalisation of _a combination_ of \(\lambda,\,\lambda_{1},\,\lambda_{2}\), and not of each of them individually. Thus, there is no surety that \(\gamma_{1},\gamma_{2}\) should remain marginal even at higher-loop orders. We now argue that these possible higher-loop contributions, even though they may exist, actually do not matter. Since we are at the critical dimension, \(g(l)\sim 1/l\) for large \(l\) at the one-loop order. At higher-loop orders, the Feynman diagrams will contain higher powers of \(g\). Hence, a general scaling solution for \(g(l)\) should have the form \(g(l)\sim 1/l+\sum_{n}c_{n}/l^{n}\), where \(n>1\) is an integer. Thus, the higher-loop corrections to the one-loop solution of \(g(l)\) should vanish like \(1/l\) to a power greater than 1. Therefore, their integrals over \(l\) from zero to infinity will be finite, so they will not change the anomalous behaviour of \(D\) and \(\nu\). Similarly, they cannot make any divergent contribution to \(\gamma_{1}(l)\) and \(\gamma_{2}(l)\), even though there can be higher-loop diagrams. Therefore, our one-loop results are, in fact, asymptotically exact. This then implies that the continuous variation of the scaling exponents, making them nonuniversal, is also _asymptotically exact_ in the long wavelength limit. See Refs. [36, 37, 38, 39, 40, 41, 29] for similar nonuniversal scaling exponents in other models.

Figure 1: (a) RG flow diagram in the \(g-\gamma_{1}\) plane in the achiral limit (\(\gamma_{2}=0\)). Arrows indicate RG flows. Flows in the stable (unstable) regions, i.e., towards (away from) \(g=0\), are marked. (b) Variations of \(\mu\) and \(\kappa\) as functions of \(\gamma_{1}\) in the stable region for the achiral case. (c) RG flow diagram in the space spanned by \(\gamma_{1}-\gamma_{2}-g\) in the full a-KPZ equation. RG flow lines in the stable and unstable regions are shown by the arrows. (d) Phase diagram in the \(\gamma_{1}-\gamma_{2}\) plane for the a-KPZ equation. The central white region containing the origin is unstable. Regions with SQLRO and WQLRO are marked (see text).
At higher dimensions \(d>2\), the chiral term with coupling \(\lambda_{2}\) cannot exist. The other two achiral nonlinear terms in Eq. (1) are present at \(d>2\). The RG recursion relations at \(d>2\) can be read off from the 2D results with \(\gamma_{2}=0\). Using a \(d=2+\epsilon\) expansion as in the KPZ equation [7], we find, at the one-loop order, i.e., to the lowest order in \(\epsilon\),
\[\frac{dg}{dl}=-\epsilon g-\tilde{\mathcal{A}}(\gamma_{1})g^{2}. \tag{8}\]
Parameter \(\gamma_{1}\) remains marginal at the lowest order. Therefore, if \(\tilde{\mathcal{A}}(\gamma_{1})>0\), \(g(l)\) flows to zero _rapidly_, with \(g(l)\sim g(0)\exp(-\epsilon\,l)\) in the long wavelength limit; \(g^{*}=0\) is the _only_ FP, and it is globally stable. This renders the nonlinearities irrelevant in the RG sense. Therefore, scaling in the long wavelength limit is identical to that in the EW equation: \(z=2\), \(\chi=(2-d)/2\). Furthermore, \(d=2\) is then the _upper critical dimension_. On the other hand, if \(\tilde{\mathcal{A}}(\gamma_{1})<0\), \(g(l)\) has three FPs: \(g_{c}^{*}=-\epsilon/\tilde{\mathcal{A}}(\gamma_{1})\), an _unstable_ FP, parametrised by \(\gamma_{1}\) and separating two other FPs, one being the Gaussian FP at \(g^{*}=0\) with EW scaling for that \(d\), and another putative perturbatively inaccessible FP, corresponding presumably to an algebraically rough phase. This gives, with 2D as the lower critical dimension, a roughening transition at \(d>2\), very similar to the KPZ equation at \(d>2\), but with one caveat. At this unstable FP, using (4) and (5), to \(\mathcal{O}(\epsilon)\), \(z=2+\epsilon\frac{\tilde{\mathcal{C}}(\gamma_{1})}{\tilde{\mathcal{A}}(\gamma_{1})}\) and \(\chi=-\epsilon\frac{\tilde{\mathcal{C}}(\gamma_{1})}{\tilde{\mathcal{A}}(\gamma_{1})}\), which depend explicitly on \(\gamma_{1}\) and deviate from their linear theory (or EW equation) values already at \(\mathcal{O}(\epsilon)\). This is in contrast to the KPZ equation at \(d>2\), where the corrections to \(z\) and \(\chi\) at the unstable FP are at least \(\mathcal{O}(\epsilon^{2})\)[7]. In fact, application of the Cole-Hopf transformation shows that \(z=2\), \(\chi=0\) at the unstable FP of the KPZ equation at \(d>2\)[42].
In the parameter space where \(\tilde{\mathcal{A}}(\gamma_{1})<0\), the solution of \(\tilde{\mathcal{C}}(\gamma_{1})=0\) gives the red dashed lines \(\gamma_{1}=\gamma^{-},\gamma^{\dagger}\), where \(\gamma^{\dagger}=0\) and \(\gamma^{-}=-1.25\); see Fig. 2(a) for the variation of \(z\) and \(\chi\) with \(\gamma_{1}\) for a fixed \(\epsilon\). Green strips correspond to \(\tilde{\mathcal{C}}(\gamma_{1})>0\), where \(\chi>0\) and \(z<2\); \(\tilde{\mathcal{C}}(\gamma_{1})<0\) in the blue region, where \(\chi<0\) and \(z>2\). For a given \(\epsilon\), the maximum value of \(z\) and the minimum value of \(\chi\) are \(z_{\text{max}}=2+0.292\epsilon\) and \(\chi_{\text{min}}=-0.292\epsilon\) at \(\gamma_{1}=-0.651\), such that the dynamics is _slowest_ and the surface is _smoothest_ at the unstable FP. Since \(\chi_{\text{min}}>\chi_{\text{EW}}=-\epsilon/2\), an a-KPZ surface at the unstable FP is always rougher than an EW surface.
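The quoted extremum can be checked directly from the \(\mathcal{O}(\epsilon)\) expressions; the short sketch below scans \(\tilde{\mathcal{C}}/\tilde{\mathcal{A}}\) (per unit \(\epsilon\)) over the region where \(\tilde{\mathcal{A}}<0\).

```python
import numpy as np

# z = 2 + eps*C/A and chi = -eps*C/A at the unstable FP (achiral case).
def A(g1): return 9/8 * g1**2 + 11/8 * g1 - 1/4
def C(g1): return 1/2 * g1**2 + 5/8 * g1

g1 = np.linspace(-1.38, 0.16, 4000)  # interval where A(g1) < 0
ratio = C(g1) / A(g1)
i = np.argmax(ratio)                 # z is maximal (chi minimal) here
print(g1[i], ratio[i])               # -> approx -0.651 and 0.292
```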
In the \(g-\gamma_{1}\) plane, \(g_{c}^{*}=-\epsilon/\tilde{\mathcal{A}}(\gamma_{1})\) is a _fixed line_, such that RG flow lines with initial \(g\) values above the line flow to the perturbatively inaccessible FP; see Fig. 2(b). For systems with initial \(g\) values lying below the line, the RG flow lines run parallel to the \(g\)-axis _towards_ the Gaussian FP, corresponding to the smooth phase belonging to the EW class. This behaviour holds within the range \(\gamma_{+}>\gamma_{1}>\gamma_{-}\). As \(\gamma_{1}\to\gamma_{+},\,\gamma_{-}\), \(\tilde{\mathcal{A}}(\gamma_{1})\) vanishes and \(g_{c}^{*}\) diverges. As soon as \(\gamma_{1}\) exceeds \(\gamma_{+}\) or falls below \(\gamma_{-}\), \(g_{c}^{*}\) no longer exists and the roughening transition disappears. RG flow lines starting from _any_ initial condition with \(\gamma_{1}>\gamma_{+}\) or \(\gamma_{1}<\gamma_{-}\) (red region), where \(\tilde{\mathcal{A}}(\gamma_{1})>0\), flow to \(g^{*}=0\), with the ensuing scaling belonging to the EW class. Thus the roughening transition of the KPZ equation is entirely suppressed by the nonlocal nonlinear term.
In summary, we have generalised the KPZ equation by the inclusion of nonlocal nonlinear gradient terms of the same order as the KPZ nonlinearity. Surprisingly, we find stable surfaces with positional generalised QLRO and _nonuniversal exponents_ for most choices of the model parameters, unlike in the 2D KPZ equation. At \(d>2\), similar choices of the same parameters can suppress the KPZ roughening transition, resulting in only smooth surfaces statistically identical to an EW surface at the same \(d\). Heuristically, a nonlocal part in the surface propulsion velocity \(v_{p}\) means a local large fluctuation can generate a propulsion not just _locally_, but _over large scales_, which, when sufficiently strong, can suppress local variations in \(v_{p}\) due to the local KPZ nonlinear term. This in turn has the effect of reducing surface fluctuations. For other parameter choices, a KPZ-like perturbatively inaccessible rough phase is conjectured. In that parameter space, the roughening transition survives at \(d>2\), but with significantly modified scaling properties, again with nonuniversal exponents. Our scaling exponents at 2D are asymptotically exact; at \(d>2\), the exponents at the unstable FP are correct to \(\mathcal{O}(\epsilon)\). We hope our studies here will provide further impetus to study nonlocal effects in similar nonequilibrium models.
_Acknowledgement:-_ A.B. thanks the SERB, DST (India) for partial financial support through the MATRICS scheme [file no.: MTR/2020/000406].
## Appendix A Active KPZ equation
We first construct the appropriate equation in terms of the vector field \(\mathbf{v}(\mathbf{r},t)\equiv\mathbf{\nabla}h\). Field \(\mathbf{v}\) has a definite physical interpretation. For the active XY model, \(v\) is the local "superfluid velocity" [25], whereas for an active membrane \(\mathbf{v}\) is the fluctuation of the local normal to the membrane surface [25]. Noting that the Burgers velocity \(\mathbf{v}\) is a conserved vector field, it must follow a generic conservation law of the form
\[\frac{\partial v_{i}}{\partial t}=-\nabla_{j}J_{ij}, \tag{10}\]
where \(J_{ij}\) is the velocity current. In general \(J_{ij}\) can be decomposed as the sum of a symmetric part \(J_{ij}^{s}=J_{ji}^{s}\) and an antisymmetric part \(J_{ij}^{a}=-J_{ji}^{a}\): \(J_{ij}=J_{ij}^{s}+J_{ij}^{a}\). Since \(\mathbf{v}\) is purely irrotational, it can be expressed, via the Helmholtz theorem [43; 44], solely in terms of its divergence, i.e., \(\mathbf{\nabla}\cdot\mathbf{v}\equiv\mathcal{D}\), which follows the equation
\[\frac{\partial\mathcal{D}}{\partial t}=-\nabla_{i}\nabla_{j}J_{ij}=-\nabla_{ i}\nabla_{j}J_{ij}^{s}. \tag{11}\]
Thus, \(J_{ij}^{a}\) plays no role in the dynamics of \(\mathcal{D}\) and hence of \(\mathbf{v}\). Without any loss of generality, we then set \(J_{ij}^{a}=0\). We further express \(J_{ij}=J_{ij}^{s}\) as
\[J_{ij}=\frac{1}{2}(\partial_{i}\mu_{j}+\partial_{j}\mu_{i}), \tag{12}\]
where \(\mathbf{\mu}\) is the "chemical potential" vector, which in general can have irrotational and solenoidal parts. The latter part does not contribute to the dynamics of \(\mathcal{D}\) and hence of \(\mathbf{v}\), and can therefore be set to zero, leaving \(\mathbf{\mu}\) purely irrotational. In that case,
\[\frac{\partial\mathbf{v}}{\partial t}=-\nabla^{2}\mathbf{\mu}. \tag{13}\]
The Burgers equation is given by
\[\frac{\partial v_{i}}{\partial t}=\nu\nabla^{2}v_{i}+\frac{\lambda}{2}\nabla_ {i}v^{2}+f_{i}. \tag{14}\]
Therefore, in the linearised Burgers equation (\(\lambda=0\)), \(\mathbf{\mu}=-\nu\mathbf{v}\), a local quantity, whereas for the Burgers equation \(\mathbf{\mu}=-\frac{\lambda}{2}\nabla^{-2}\mathbf{\nabla}v^{2}\), a nonlocal quantity.
Now, consistent with the conservation-law form of the Burgers equation, \(\mu_{i}\) also admits, at the same order, a second term \(\sim\nabla_{j}(v_{i}v_{j})/\nabla^{2}\). Indeed, including this term, the most general equation for \(\mathbf{v}\), retaining terms only up to the lowest order in nonlinearities and spatial gradients and now including a chiral contribution, is of the form
\[\frac{\partial v_{i}}{\partial t}=\nu\nabla^{2}v_{i}+\frac{\lambda}{2}\nabla_ {i}v^{2}+\lambda_{1}\nabla_{j}(v_{i}v_{j})+\lambda_{2}\nabla_{j}e_{jm}(v_{i}v _{m})+f_{i}. \tag{15}\]
Both the additional nonlinear terms are nonlocal contributions to \(\mu\). Equation (15) reduces to the well-known Burgers equation [24] when \(\lambda_{1}=0=\lambda_{2}\). Further, the tensor \(e_{ij}\) is the 2D totally antisymmetric matrix, with \(e_{11}=0=e_{22}\) and \(e_{12}=1=-e_{21}\). Thus, the \(\lambda_{2}\)-term in Eq. (15) is the chiral contribution. Now, further demanding that \(\mathbf{v}\) is fully irrotational with \(\mathbf{v}=\mathbf{\nabla}h\), the "superfluid velocity" of an XY model, or the deviation of the local normal of an interface in the Monge gauge [26], we obtain,
\[\frac{\partial h}{\partial t}= \nu\nabla^{2}h+\tfrac{\lambda}{2}(\mathbf{\nabla}h)^{2}+\lambda_{1} Q_{ij}(\mathbf{r})(\nabla_{i}h\nabla_{j}h) \tag{16}\] \[+\lambda_{2}Q_{ij}(\mathbf{r})e_{jm}(\nabla_{i}h\nabla_{m}h)+\eta,\]
where formally \(Q_{ij}=\nabla_{i}\nabla_{j}/\nabla^{2}\) and is to be understood in terms of its Fourier transform.
## Appendix B Galilean invariance
Under the transformation \(x_{i}^{\prime}\to x_{i}-(\lambda+2\lambda_{1})c_{i}t-\lambda_{2}e_{ij}c_{j}t\) and \(t^{\prime}\to t\), where \(i\) and \(j\) can take values \(1\) or \(2\) we have,
\[\frac{\partial}{\partial t^{\prime}} \to\frac{\partial}{\partial t}+(\lambda+2\lambda_{1})\mathbf{c} \cdot\mathbf{\nabla}-\lambda_{2}e_{ij}c_{i}\frac{\partial}{\partial x_{j}}, \tag{17}\] \[\mathbf{\nabla}^{\prime} \to\mathbf{\nabla}. \tag{18}\]
Using these, we show below that if the height function \(h(\mathbf{x},t)\) transforms as \(h^{\prime}(\mathbf{x}^{\prime},t^{\prime})\to h(\mathbf{x},t)+\mathbf{c}\cdot\mathbf{x}\), then Eq. (16) is invariant. Considering the RHS of Eq. (16), the first term transforms as,
\[\nu\nabla^{\prime 2}h^{\prime}\to\nu\nabla^{2}h. \tag{19}\]
The second term transforms as,
\[\frac{\lambda}{2}(\mathbf{\nabla}^{\prime}h^{\prime})^{2}\to\frac{\lambda}{2}(\bm {\nabla}h)^{2}+\lambda\mathbf{\nabla}h\cdot\mathbf{c}+\frac{\lambda}{2}c^{2}. \tag{20}\]
The third term transforms as,
\[\lambda_{1}\frac{\nabla_{i}^{\prime}\nabla_{j}^{\prime}}{\nabla^{\prime 2 }}(\nabla_{i}^{\prime}h^{\prime}\nabla_{j}^{\prime}h^{\prime})\to\lambda_{1} \frac{\nabla_{i}\nabla_{j}}{\nabla^{2}}(\nabla_{i}h\nabla_{j}h)+2\lambda_{1} \mathbf{c}\cdot\mathbf{\nabla}h. \tag{21}\]
The chiral term transforms as,
\[\lambda_{2}\frac{\nabla_{i}^{\prime}\nabla_{j}^{\prime}}{\nabla^{ \prime 2}}e_{jm}(\nabla_{i}^{\prime}h^{\prime}\nabla_{m}^{\prime}h^{\prime})\to \lambda_{2}\frac{\nabla_{i}\nabla_{j}}{\nabla^{2}}e_{jm}(\nabla_{ i}h\nabla_{m}h)\] \[-\lambda_{2}e_{ij}c_{i}\frac{\partial h}{\partial x_{j}}. \tag{22}\]
The noise term remains invariant under this transformation. Thus, after the transformation, the extra terms on the RHS are
\[\lambda\mathbf{\nabla}h\cdot\mathbf{c}+\frac{\lambda}{2}c^{2}+2\lambda_{1} \mathbf{c}\cdot\mathbf{\nabla}h-\lambda_{2}e_{ij}c_{i}\frac{\partial h}{\partial x_ {j}}. \tag{23}\]
Similarly, the LHS of Eq. (16) transforms as,
\[\frac{\partial h^{\prime}}{\partial t^{\prime}}\to\frac{\partial h}{ \partial t}+(\lambda+2\lambda_{1})\mathbf{c}\cdot\mathbf{\nabla}h-\lambda_{2}e_{ij} c_{i}\frac{\partial h}{\partial x_{j}}+(\lambda+2\lambda_{1})c^{2}. \tag{24}\]
Comparing Eq. (23) and Eq. (24), we see that Eq. (16) is invariant up to constant terms, which can be absorbed by a shift \(h\to h+\mathrm{const}\times t\).
## Appendix C Renormalisation group calculations
The dynamic renormalisation group calculation is conveniently performed in terms of a path integral over \(h(\mathbf{r},t)\) and its dynamic conjugate field \(\hat{h}(\mathbf{r},t)\)[28; 45], equivalent to and constructed from Eq. (16) together with the noise variance. The generating functional corresponding to Eq. (16) is given by [28; 45]
\[\mathcal{Z}=\int\mathcal{D}\hat{h}\mathcal{D}he^{-\mathcal{S}[\hat{h},h]}, \tag{12}\]
where \(\hat{h}\) is the dynamic conjugate field and \(\mathcal{S}\) is the action functional:
\[S=-\int_{\mathbf{x},t}\hat{h}D\hat{h}+\int_{\mathbf{x},t}\hat{h}\bigg{\{} \partial_{t}h-\nu\nabla^{2}h-\frac{\lambda}{2}(\mathbf{\nabla}h)^{2}-\lambda_{1} \frac{\nabla_{i}\nabla_{j}}{\nabla^{2}}(\nabla_{i}h\nabla_{j}h)-\lambda_{2} \frac{\nabla_{i}\nabla_{j}}{\nabla^{2}}e_{jm}(\nabla_{i}h\nabla_{m}h)\bigg{\}}. \tag{13}\]
### Linear theory results
Defining the Fourier transform by
\[a(\mathbf{x},t)=\int_{\mathbf{q},\omega}a(\mathbf{q},\omega)e^{i(\mathbf{q} \cdot\mathbf{x}-\omega t)},\]
where \(a=h,\hat{h}\). Then from the Gaussian part of (13), the correlation functions at the harmonic order can be found:
\[\langle\hat{h}(\mathbf{q},\omega)\hat{h}(-\mathbf{q},-\omega) \rangle_{0} =0 \tag{14a}\] \[\langle\hat{h}(\mathbf{q},\omega)h(-\mathbf{q},-\omega)\rangle_{0} =\frac{1}{i\omega+\nu q^{2}}\] (14b) \[\langle\hat{h}(-\mathbf{q},-\omega)h(\mathbf{q},\omega)\rangle_{0} =\frac{1}{-i\omega+\nu q^{2}}\] (14c) \[\langle h(\mathbf{q},\omega)h(-\mathbf{q},-\omega)\rangle_{0} =\frac{2D}{\omega^{2}+\nu^{2}q^{4}} \tag{14d}\]
### Corrections to \(D\)
There are in total four one-loop Feynman diagrams which contribute to the fluctuation-corrections of \(D\); they originate from the vertex pairs \(\lambda\)-\(\lambda\), \(\lambda_{1}\)-\(\lambda_{1}\), \(\lambda\)-\(\lambda_{1}\) and \(\lambda_{2}\)-\(\lambda_{2}\).

Fig. 3\((a)\) has symmetry factor 2 and represents the contribution from the vertices \(\lambda\) - \(\lambda\),

\[\frac{2\lambda^{2}}{2!\times 4}\int_{\mathbf{q},\Omega}\frac{[\mathbf{q}\cdot(\mathbf{k}-\mathbf{q})]^{2}\times 4D^{2}}{\left(\Omega^{2}+\nu^{2}q^{4}\right)\left(\Omega^{2}+\nu^{2}(\mathbf{k}-\mathbf{q})^{4}\right)}=\frac{\lambda^{2}D^{2}}{4\nu^{3}}\int\frac{d^{2}q}{q^{2}}.\]

Fig. 3\((b)\) represents the contribution from the vertices \(\lambda_{1}\) - \(\lambda_{1}\); it contributes

\[\frac{3\lambda_{1}^{2}D^{2}}{8\nu^{3}}\int\frac{d^{2}q}{q^{2}}.\]

Fig. 3\((c)\) represents the contribution from the vertices \(\lambda\) - \(\lambda_{1}\); it contributes

\[\frac{\lambda\lambda_{1}D^{2}}{2\nu^{3}}\int\frac{d^{2}q}{q^{2}}.\]

Fig. 3\((d)\) has symmetry factor 2 and represents the chiral contribution from the vertices \(\lambda_{2}\) - \(\lambda_{2}\),

\[\frac{\lambda_{2}^{2}}{2!}\times 2\int_{\mathbf{q},\Omega}\frac{k_{i}k_{j}}{k^{2}}e_{jm}q_{i}q_{m}\frac{k_{s}k_{t}}{k^{2}}e_{tr}q_{s}q_{r}\times\frac{4D^{2}}{(\Omega-i\nu q^{2})^{2}(\Omega+i\nu q^{2})^{2}}=\frac{\lambda_{2}^{2}D^{2}}{8\nu^{3}}\int\frac{d^{2}q}{q^{2}}.\]

Adding all of these, the total one-loop correction to \(D\) is

\[D^{<}=D\Big{[}1+\Big{(}\frac{\lambda^{2}D}{4\nu^{3}}+\frac{\lambda\lambda_{1}D}{2\nu^{3}}+\frac{3\lambda_{1}^{2}D}{8\nu^{3}}+\frac{\lambda_{2}^{2}D}{8\nu^{3}}\Big{)}\int\frac{d^{2}q}{q^{2}}\Big{]},\]

consistent with the function \(\mathcal{B}(\gamma_{1},\gamma_{2})\) quoted in the main text.
### Corrections to \(\nu\)
Fig. 4\((a)\) represents the contribution originating from the vertices \(\lambda\) - \(\lambda\), which is,
\[\frac{\lambda^{2}D}{\nu^{2}}\frac{(d-2)}{d}k^{2}\int\frac{d^{2}q}{q^{2}} \tag{101}\]
Fig. 4\((b)\) has symmetry factor 8 and represents the contribution originating from the vertices \(\lambda_{1}\) - \(\lambda_{1}\), which is,
\[-\frac{\lambda_{1}^{2}}{2!}\times 8\int_{\mathbf{q},\Omega}\frac{k_{ i}k_{j}}{k^{2}}(q_{i}(k-q)_{j})\frac{q_{m}q_{n}}{q^{2}}k_{m}(k-q)_{n}\] \[\quad\times\frac{2D}{\Omega^{2}+\nu^{2}(\mathbf{k}-\mathbf{q})^{ 4}}\times\frac{1}{-i\Omega+\nu q^{2}} \tag{102}\] \[=-\frac{\lambda_{1}^{2}D}{2\nu^{2}}k^{2}\int\frac{d^{2}q}{q^{2}} \tag{103}\]
Fig. 4\((c)\) has symmetry factor 4 and represents the contribution originating from the vertices \(\lambda_{1}\) - \(\lambda\), where the external leg \(\hat{h}\) is from the vertex \(\lambda_{1}\). The contribution is,

\[-\frac{\lambda_{1}\lambda}{2}\times 4\times\frac{k_{i}k_{j}}{k^{2}}\int_{\mathbf{q},\Omega}q_{i}(k-q)_{j}(\mathbf{q}\cdot\mathbf{k})\times\frac{2D}{\Omega^{2}+\nu^{2}q^{4}}\times\frac{1}{-i\Omega+\nu(\mathbf{k}-\mathbf{q})^{2}}=-\frac{\lambda\lambda_{1}D}{2\nu^{2}}k^{2}\int\frac{d^{2}q}{q^{2}}.\]

Fig. 4\((d)\) is the analogous \(\lambda_{1}\) - \(\lambda\) diagram with the external leg \(\hat{h}\) attached to the vertex \(\lambda\); it contributes \(-\frac{\lambda\lambda_{1}D}{8\nu^{2}}k^{2}\int\frac{d^{2}q}{q^{2}}\), so that the two cross diagrams together account for the coefficient \(\frac{5}{8}\lambda\lambda_{1}\) in the total below.
Fig. 4\((e)\) has symmetry factor 2 and represents the contribution originating from the vertices \(\lambda_{2}\) - \(\lambda_{2}\), which is,
\[\frac{\lambda_{2}^{2}}{2!}\times 2\frac{k_{i}k_{j}}{k^{2}}e_{jm}e_{ lm}\int_{\mathbf{q},\Omega}\Bigl{(}-q_{i}(k-q)_{m}-(k-q)_{i}q_{m}\Bigr{)}\] \[\quad\times\frac{(k-q)_{s}(k-q)_{t}}{(\mathbf{k}-\mathbf{q})^{2} }\Bigl{(}q_{s}k_{n}+q_{n}k_{s}\Bigr{)}\] \[\quad\times\frac{2D}{\Omega^{2}+\nu^{2}q^{4}}\times\frac{1}{i \Omega+\nu(\mathbf{k}-\mathbf{q})^{2}} \tag{106}\] \[=-\frac{\lambda_{2}^{2}D}{8\nu^{2}}k^{2}\int\frac{d^{2}q}{q^{2}} \tag{107}\]
There is no correction from the vertices (\(\lambda_{2}\) - \(\lambda\)) or (\(\lambda_{2}\) - \(\lambda_{1}\)), because the presence of a single \(e_{jm}\) makes the integrand antisymmetric, and the contribution ultimately vanishes.
Adding all of these, we have the total correction to \(\nu\),
\[\nu^{<}=\nu\Bigl{[}1+\Bigl{(}\frac{\lambda_{1}^{2}D}{2\nu^{3}}+ \frac{5}{8}\frac{\lambda\lambda_{1}D}{\nu^{3}}+\frac{\lambda_{2}^{2}D}{8\nu^{ 3}}\Bigr{)}\!\int\frac{d^{2}q}{q^{2}}\Bigr{]} \tag{108}\]
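In terms of the dimensionless variables of the main text, this is simply a rewriting of the bracket above,

\[\nu^{<}=\nu\left[1+\frac{\lambda^{2}D}{\nu^{3}}\left(\frac{\gamma_{1}^{2}}{2}+\frac{5\gamma_{1}}{8}+\frac{\gamma_{2}^{2}}{8}\right)\int\frac{d^{2}q}{q^{2}}\right],\]

whose coefficient is precisely \(\mathcal{C}(\gamma_{1},\gamma_{2})\).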
### One-loop corrections to \(\lambda,\lambda_{1}\) and \(\lambda_{2}\)
Fig. 5 shows representative one-loop Feynman diagrams for the renormalisation of \(\lambda\), \(\lambda_{1}\) and \(\lambda_{2}\), where \(a,b,c\) can take values \(0,1,2\), and \(\lambda_{0}=\lambda\). There are many possible similar diagrams depending on \(a,b,c\), but for a particular set of vertices (i.e. for fixed values of \(a,b,c\)) there exist two diagrams that may contribute to the corrections of any of \(\lambda,\lambda_{1},\lambda_{2}\), as shown in Fig. 5. When \(a=b=c=1\), Fig. 5\((a)\) has symmetry factor 24 and contributes the following,
\[\frac{\lambda_{1}^{3}}{3!}\times 24\times \Bigl{[}\frac{\nabla_{i}\nabla_{j}}{\nabla^{2}}\hat{h}\nabla_{m} h\nabla_{t}h\Bigr{]}\int_{\mathbf{q},\Omega}q_{i}q_{j}q_{m}q_{t}\] \[\times\frac{2D}{(\Omega+i\nu q^{2})^{2}(\Omega-i\nu q^{2})^{2}} \tag{109}\] \[=\frac{48\times\lambda_{1}^{3}}{3!\times 8}\times \Bigl{[}\frac{\nabla_{i}\nabla_{j}}{\nabla^{2}}\hat{h}\nabla_{m} h\nabla_{t}h\Bigr{]}\frac{2D}{8\nu^{3}}\int\frac{d^{2}q}{q^{2}}\] \[\times \Bigl{[}\delta_{ij}\delta_{mt}+\delta_{im}\delta_{jt}+\delta_{it} \delta_{jm}\Bigr{]} \tag{110}\]
Figure 4: One-loop Feynman diagrams that contribute to the renormalisation of \(\nu\).

Figure 5: Representative one-loop Feynman diagrams that can contribute to the renormalisation of \(\lambda\), \(\lambda_{1}\) and \(\lambda_{2}\).

Similarly, for \(a=b=c=1\), Fig. 5\((b)\) has symmetry factor 48 and contributes the following,
\[\frac{48\lambda_{1}^{3}}{3!}\times\Bigl{[}\frac{\nabla_{i}\nabla_{j}} {\nabla^{2}}\hat{h}\nabla_{m}h\nabla_{t}h\Bigr{]}\int_{\mathbf{q},\Omega}q_{i}q_ {j}q_{m}q_{t}\] \[\qquad\qquad\times\frac{2D}{(\Omega^{2}+\nu^{2}q^{4})(\Omega+i \nu q^{2})^{2}} \tag{101}\] \[=-\frac{48\times\lambda_{1}^{3}}{3!\times 8}\times\Bigl{[}\frac{\nabla_{i}\nabla_{j}} {\nabla^{2}}\hat{h}\nabla_{m}h\nabla_{t}h\Bigr{]}\frac{2D}{8\nu^{3}}\int\frac{ d^{2}q}{q^{2}}\] \[\qquad\qquad\times\Bigl{[}\delta_{ij}\delta_{mt}+\delta_{im} \delta_{jt}+\delta_{it}\delta_{jm}\Bigr{]} \tag{102}\]
We can see that Eq. (102) is exactly the same as Eq. (110) but with a negative sign. They cancel each other, contributing nothing to the corrections of \(\lambda,\lambda_{1}\) and \(\lambda_{2}\). Thus, for any combination of \(a,b\) and \(c\), there exist diagrams like Fig. 5\((a)\) and Fig. 5\((b)\) that add up to zero, contributing nothing. This shows that there is no correction to \(\lambda,\lambda_{1}\) and \(\lambda_{2}\).
### RG flow equations and scaling
After averaging over the fields with higher momenta, the new action contains fields with upper cutoff momentum \(\frac{\Lambda}{b}\). Since we want to describe the same system with the new action, we have to rescale space, time and the fields so that the upper cutoff momentum becomes \(\Lambda\). Rescaling gives \(q\to q^{\prime}=bq\implies x^{\prime}=\frac{x}{b}\); similarly, \(\omega\to\omega^{\prime}=b^{z}\omega\implies t^{\prime}=\frac{t}{b^{z}}\). The fields scale as shown below,
\[h^{<}(q,\omega)=\xi h(q^{{}^{\prime}},\omega^{{}^{\prime}})\] \[\hat{h}^{<}(q,\omega)=\xi\hat{h}(q^{{}^{\prime}},\omega^{{}^{ \prime}})\] \[h^{<}(x,t)=\xi_{R}h(x^{{}^{\prime}},t^{{}^{\prime}})\]
Say \(\xi_{R}=b^{\chi}\), where \(\chi\) is the roughness exponent of the KPZ surface. Since there is no correction to \(\int_{x,t}\hat{h}\partial_{t}h\), we impose the condition that the coefficient of \(\int_{x,t}\hat{h}\partial_{t}h\) remains unity under rescaling. Using these, we find that the model parameters rescale as,
\[D^{{}^{\prime}} =D^{<}b^{z-d-2\chi}\] \[\nu^{{}^{\prime}} =\nu^{<}b^{z-2}\] \[\lambda^{{}^{\prime}} =\lambda b^{\chi+z-2}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) scale in the same way as \(\lambda\). Since there is no correction to the \(\lambda\)s, \(\lambda^{<}=\lambda\). Also \(D^{<}=D\Big{[}1+\Big{(}\frac{\lambda^{2}D}{4\nu^{3}}+\frac{\lambda\lambda_{1}D}{2\nu^{3}}+\frac{3\lambda_{1}^{2}D}{8\nu^{3}}+\frac{\lambda_{2}^{2}D}{8\nu^{3}}\Big{)}\int_{\frac{\Lambda}{b}}^{\Lambda}\frac{d^{2}q}{q^{2}}\Big{]}\) and \(\nu^{<}=\nu\Big{[}1+\Big{(}\frac{\lambda_{1}^{2}D}{2\nu^{3}}+\frac{5}{8}\frac{\lambda\lambda_{1}D}{\nu^{3}}+\frac{\lambda_{2}^{2}D}{8\nu^{3}}\Big{)}\int_{\frac{\Lambda}{b}}^{\Lambda}\frac{d^{2}q}{q^{2}}\Big{]}\). We set \(b=e^{\delta l}\approx 1+\delta l\) and define dimensionless coupling constants by \(g=\frac{\lambda^{2}D}{\nu^{3}}\frac{1}{2\pi}\), \(\gamma_{1}=\frac{\lambda_{1}}{\lambda}\) and \(\gamma_{2}=\frac{\lambda_{2}}{\lambda}\) to obtain the RG flow equations in the main text.
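As a consistency check (a short worked step, not part of the original appendix), the flow of the dimensionless coupling follows from these rescalings: since \(g=\lambda^{2}D/(2\pi\nu^{3})\) and \(\lambda\) does not renormalise,

\[\frac{dg}{dl}=g\left[2\frac{d\ln\lambda}{dl}+\frac{d\ln D}{dl}-3\frac{d\ln\nu}{dl}\right]=g\left[g\mathcal{B}(\gamma_{1},\gamma_{2})-3g\mathcal{C}(\gamma_{1},\gamma_{2})\right]=-g^{2}\mathcal{A}(\gamma_{1},\gamma_{2}),\]

where the \(z\)- and \(\chi\)-dependent terms cancel at \(d=2\) and \(\mathcal{B}-3\mathcal{C}=-\mathcal{A}\), recovering Eq. (6) of the main text.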
## Appendix D Correlation function
Following Refs. [29; 30; 34], we now calculate the renormalised correlation functions of \(h(\mathbf{x})\), defined as
\[C_{h}^{R}(r)\equiv\langle[h(\mathbf{x})-h(\mathbf{x}^{\prime})]^{2}\rangle_{R}, \tag{103}\]
where \(R\) refers to a renormalised quantity. We start from
\[\langle h(\mathbf{k})h(-\mathbf{k})\rangle_{R}\approx\frac{D(0)}{\nu(0)k^{2}[ \ln(\Lambda/k)]^{1-\mu}}, \tag{104}\]
Expression (104) is no longer valid over the wavevector range from \(0\) to \(\Lambda\), rather it is valid between \(0\) and \(\tilde{\Lambda}\ll\Lambda\). We then obtain,
\[C_{h}(r)\approx\int_{0}^{\tilde{\Lambda}}\frac{d^{2}k}{(2\pi)^{2}}\left[1-\exp i \mathbf{k}\cdot(\mathbf{x}-\mathbf{x}^{\prime})\right]\frac{D(0)}{\nu(0)k^{2 }[\ln(\Lambda/k)]^{1-\mu}}. \tag{105}\]
Integrating over the angular variable, we get
\[C_{h}(r) \approx \int_{0}^{\tilde{\Lambda}}\frac{dq\,D(0)}{\nu(0)q[\ln(\Lambda/q)]^{1-\mu}}\left[\frac{1}{2\pi}\int_{0}^{2\pi}d\theta\left(1-e^{iqr\cos\theta}\right)\right] \tag{106}\] \[= \int_{0}^{\tilde{\Lambda}}\frac{dq\,D(0)}{\nu(0)q[\ln(\Lambda/q)]^{1-\mu}}\left[1-J_{0}(qr)\right]\] \[= \int_{0}^{\tilde{\Lambda}r}\frac{du\,D(0)[1-J_{0}(u)]}{\nu(0)u[\ln(\frac{\Lambda r}{u})]^{1-\mu}},\]
where \(J_{0}(u)\) is the Bessel function of order zero. Then
\[C_{h}(r)=\int_{0}^{1}\frac{du\,D(0)[1-J_{0}(u)]}{\nu(0)u[-\ln u+\ln(\Lambda r)]^{1-\mu}}+\int_{1}^{\tilde{\Lambda}r}\frac{du\,D(0)}{\nu(0)u[-\ln u+\ln(\Lambda r)]^{1-\mu}}-\int_{1}^{\tilde{\Lambda}r}\frac{du\,D(0)J_{0}(u)}{\nu(0)u[-\ln u+\ln(\Lambda r)]^{1-\mu}}. \tag{107}\]
The first and the third terms on the rhs of (107) are finite. Since \(u_{max}=\tilde{\Lambda}r\ll\Lambda r\), the second contribution on the right may be integrated with the substitution \(u=\exp(z)\) giving
\[\int_{1}^{\tilde{\Lambda}r}\frac{du}{u[\ln(\Lambda\,r)]^{1-\mu}}\approx\mu[ \ln(\Lambda\,r)]^{\mu} \tag{108}\]
in the limit of large \(r\). We thus find \(C_{h}(r)\approx\mu\frac{D(0)}{\nu(0)}[\ln(\Lambda\,r)]^{\mu}\) in the limit of large \(r\), with the remaining contributions on the right hand side of (107) being finite or subleading for large \(r\).
|
2307.05166 | The Ergodic Hypothesis: A Typicality Statement | This paper analyzes the ergodic hypothesis in the context of Boltzmann's late
work in statistical mechanics, where Boltzmann lays the foundations for what is
today known as the typicality account. I argue that, based on the concepts of
stationarity (of the measure) and typicality (of the equilibrium state), the
ergodic hypothesis, as an idealization, is a consequence rather than an
assumption of Boltzmann's approach. More precisely, it can be shown that every
system with a stationary measure and an equilibrium state (be it a state of
overwhelming phase space or time average) behaves essentially as if it were
ergodic. I claim that Boltzmann was aware of this fact as it grounds both his
notion of equilibrium, relating it to the thermodynamic notion of equilibrium,
and his estimate of the fluctuation rates. | Paula Reichert | 2023-07-11T10:51:48Z | http://arxiv.org/abs/2307.05166v1 | # The ergodic hypothesis: a typicality statement
###### Abstract
This paper analyzes the ergodic hypothesis in the context of Boltzmann's late work in statistical mechanics, where Boltzmann lays the foundations for what is today known as the typicality account. I argue that, based on the concepts of stationarity (of the measure) and typicality (of the equilibrium state), the ergodic hypothesis, as an idealization, is a consequence rather than an assumption of Boltzmann's approach. More precisely, it can be shown that every system with a stationary measure and an equilibrium state (be it a state of overwhelming phase space or time average) behaves essentially as if it were ergodic. I claim that Boltzmann was aware of this fact as it grounds both his notion of equilibrium, relating it to the thermodynamic notion of equilibrium, and his estimate of the fluctuation rates.
**Keywords:** Ergodic Hypothesis, Boltzmann Equilibrium, (Essential) Ergodicity, Typicality, Thermodynamic Equilibrium
## 1 Introduction
The ergodic hypothesis has been formulated by Boltzmann (1871) and Maxwell (1879) and has famously been discussed by Ehrenfest and Ehrenfest-Afanassjewa (1911) in their influential encyclopedia article on statistical mechanics, where they provide an overview of and comment on Boltzmann's work in statistical physics.
Ever since, the ergodic hypothesis has been debated controversially. This refers not only to the status of the ergodic hypothesis within Boltzmann's work (see, e.g., Brush (1967)), but more generally to its applicability with respect to realistic systems (see, e.g., Earman and Redei (1996); Smale (1979)) and its relevance for physics as such (see, e.g., Sklar (1973); Schwartz (1992); Bricmont (1995)).
Despite its debatable status, the concept of ergodicity has attracted a lot of attention. Today there even exists a proper branch of mathematics, so-called ergodic theory, with a plenitude of rigorous mathematical results (most notably, the results of Birkhoff (1931), von Neumann (1932), and Khinchin (1949); see Petersen (1983) for an overview).
Interestingly enough, though, Boltzmann himself never highlighted the ergodic hypothesis. Although he introduces it in his early work, he mentions it not even once
in his two volumes on gas theory, which constitute his _opus magnum_ on statistical mechanics (cf. Boltzmann (1896a)). Still, he seems to rely on ergodicity, at least as an idealization, in his later work as well, for instance when he estimates the rate of fluctuations in the letter to Zermelo (cf. Boltzmann (1896b)).
This said, has ergodicity been a fundamental assumption of Boltzmann, as the Ehrenfests suggest? If so, why didn't he make this more explicit? This seems all the more surprising as he does emphasize the explanatory value of other concepts. For instance, he stresses the fact that equilibrium is a typical state, i.e., a state which is realized by an overwhelming number of micro configurations, at several points throughout his work (see, e.g., Boltzmann (1896a,b, 1897)).
In this paper, I argue that ergodicity, as an idealization, or essential ergodicity, in the strict sense (as defined in section 3.3 below), is a consequence rather than an assumption of Boltzmann's approach. Based on this, I claim that the ergodic hypothesis should be read as a typicality statement, in a way analogous to how Boltzmann taught us to read the H-theorem (see Boltzmann (1896b, 1897)). That is, just as a dynamical system of many particles approaches equilibrium not for all, but for typical initial conditions (given a low-entropy initial macrostate), and stays there not for all, but for most times, in the case of ergodicity, not all, but typical systems behave not strictly, but essentially, that is, qualitatively, as if they were ergodic.
To make this point precise, what can be shown is the following: _On typical trajectories, the time and phase space averages of physical macrostates coincide in good approximation_. This property of the dynamics, which I call 'essential ergodicity', follows from the stationarity of the measure and the typicality of the equilibrium state alone.
## 2 The ergodic hypothesis
To discuss the ergodic hypothesis, we need to introduce the realm of Boltzmann's statistical mechanics: the theory of measure-preserving dynamical systems.
### Measure-preserving dynamical systems
Let \((\Gamma,\mathcal{B}(\Gamma),T,\mu)\) denote a Hamiltonian system. For \(N\) particles, \(\Gamma\cong\mathbb{R}^{6N}\) is called phase space. It is the space of all possible microstates \(X\) of the system, where a point \(X=(q,p)\) in \(\Gamma\) represents the positions and momenta of all the particles: \((q,p)=(q_{1},...,q_{3N},p_{1},...,p_{3N})\).
The Hamiltonian flow \(T\) is a one-parameter flow \(T^{t}(q,p)=(q,p)(t)\) on \(\Gamma\) with \(t\) representing time. It is connected to the Hamiltonian vector field \(v_{H}\) as follows: \(v_{H}(T^{t}(q,p))=dT^{t}(q,p)/dt\). In other words, the flow lines are the integral curves along the Hamiltonian vector field, where the latter is specified by \(v_{H}=(\partial H/\partial p,-\partial H/\partial q)\). This is the physical vector field of the system, generated by the Hamiltonian \(H\), and the flow lines represent the possible trajectories of the system. Finally, \(\mu\) refers to the Liouville measure,
\[d\mu=\prod_{i=1}^{3N}dq_{i}dp_{i}, \tag{1}\]
or to any other stationary measure derived thereof.
Note that we call a measure \(\mu\)_stationary_ (with respect to \(T\)) if and only if the flow \(T\) is measure-preserving (with respect to \(\mu\)). Given a Hamiltonian system, it follows from Liouville's theorem that the Liouville measure is conserved under the
Hamiltonian phase flow. That is, for every \(A\in\mathcal{B}(\Gamma)\),
\[\mu(T^{-t}A)=\mu(A). \tag{2}\]
Since the Liouville measure is just the \(6N\)-dimensional Lebesgue measure, this says that phase space volume is conserved under time evolution.
If we introduce the notion of the time-evolved measure, \(\mu_{t}(A):=\mu(T^{-t}A)\), we can reformulate the condition of stationarity as follows. A measure \(\mu\) is _stationary_ if and only if, for every \(A\in\mathcal{B}(\Gamma)\),
\[\mu_{t}(A)=\mu(A). \tag{3}\]
According to this equation, the measure itself is invariant under time translation, which is the main reason for physicists to accept it as _the_ measure grounding a statistical analysis in physics (see, e.g., Ehrenfest and Ehrenfest-Afanassjewa (1911), Gibbons et al. (1987), Dürr et al. (2017)). In practice, we are not concerned with the Liouville measure _per se_, but with appropriate stationary measures derived thereof.1
Footnote 1: Consider, for instance, an isolated system. Within that system, total energy \(E\) is conserved. Hence, trajectories are restricted to the constant-energy hypersurface \(\Gamma_{E}=\{(q,p)\in\Gamma|H(q,p)=E\}\), from which it follows that the microcanonical measure
\[d\mu_{E}=\prod_{i=1}^{3N}dq_{i}dp_{i}\ \delta(H(q,p)-E)\]
is the appropriate stationary measure of the dynamics in that case.
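Before moving on, it may help to see Liouville's theorem at work numerically. The following sketch is my own addition, not part of the original argument: it integrates the pendulum Hamiltonian \(H=p^{2}/2-\cos q\) with a symplectic leapfrog step and estimates, by finite differences, the Jacobian determinant of the time-\(t\) flow map, which equals 1 precisely when phase space volume is conserved. The system, step sizes and initial point are all illustrative choices.

```python
import numpy as np

def leapfrog(q, p, dt=0.01, steps=500):
    """Symplectic (leapfrog) integration of the pendulum H = p^2/2 - cos(q)."""
    for _ in range(steps):
        p -= 0.5 * dt * np.sin(q)   # half kick: dp/dt = -dH/dq = -sin(q)
        q += dt * p                 # drift:     dq/dt =  dH/dp =  p
        p -= 0.5 * dt * np.sin(q)   # half kick
    return q, p

def flow_jacobian(q0, p0, h=1e-6):
    """Finite-difference Jacobian of the flow map (q0, p0) -> (q(t), p(t))."""
    J = np.zeros((2, 2))
    for j, (dq, dp) in enumerate([(h, 0.0), (0.0, h)]):
        qp, pp = leapfrog(q0 + dq, p0 + dp)
        qm, pm = leapfrog(q0 - dq, p0 - dp)
        J[:, j] = [(qp - qm) / (2 * h), (pp - pm) / (2 * h)]
    return J

print(np.linalg.det(flow_jacobian(0.7, 0.3)))   # ~ 1.0: phase space volume is conserved
```

A non-symplectic scheme such as explicit Euler would show the determinant drifting away from 1, which is one practical reason symplectic integrators are preferred in statistical mechanics simulations.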
### Variants of the ergodic hypothesis
Within the framework of Hamiltonian systems or, more generally, measure-preserving dynamical systems, we can analyze Boltzmann's ergodic hypothesis.
Let again \((\Gamma,\mathcal{B}(\Gamma),T,\mu)\) be a measure-preserving dynamical system and \(A\in\mathcal{B}(\Gamma)\). Let, in what follows, \(\mu(\Gamma)=1\).2 We call
Footnote 2: Throughout this paper, we deal with systems where \(\Gamma\) is finite and, hence, \(\mu\) is normalizable. In that case, we can set \(\mu(\Gamma)=1\) without loss of generality. The hard case of infinite phase spaces has to be discussed elsewhere (see Goldstein et al. (2016) and Lazarovici and Reichert (2020) for a first discussion).
\[\mu(A)=\int_{\Gamma}\chi_{A}(x)d\mu(x) \tag{4}\]
the 'phase space average' of \(A\) with \(\chi_{A}\) being the characteristic function which is \(1\) if \(x\in A\) and \(0\) otherwise. Further we call
\[\hat{A}(x)=\lim_{\mathcal{T}\to\infty}\frac{1}{\mathcal{T}}\int_{0}^{\mathcal{ T}}\chi_{A}(T^{t}x)dt \tag{5}\]
the 'time average' of \(A\) for some \(x\in\Gamma\). Here it has been proven by Birkhoff (1931) that the infinite-time limit exists pointwise almost everywhere on \(\Gamma\) and the limit function \(\hat{A}(x)\) is integrable.
A dynamical system is called _ergodic_ if and only if, for all \(A\in\mathcal{B}(\Gamma)\) and almost all \(x\in\Gamma\) (i.e. for all \(x\) except a measure-zero set), the time and phase averages coincide:
\[\mu(A)=\hat{A}(x). \tag{6}\]
In other words, a system is called ergodic if and only if, for almost all solutions, the fraction of time the system spends in a certain region in phase space (in the limit \(t\to\infty\)!) is precisely _equal_ to the phase space average of that region.
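To make the definition concrete, consider one of the simplest systems that is provably ergodic: the irrational rotation of the circle, \(x\mapsto x+\alpha\ (\mathrm{mod}\ 1)\). The small sketch below is a numerical illustration of my own rather than part of the original argument; it checks that the fraction of time a trajectory spends in an arc converges to the arc's Lebesgue measure, exactly as Eq. 6 demands. The arc, the initial point and the trajectory length are arbitrary.

```python
import numpy as np

alpha = np.sqrt(2) - 1     # irrational rotation number; the map is ergodic w.r.t. Lebesgue
a, b = 0.20, 0.50          # the set A = [a, b); its phase space average is mu(A) = 0.30
x, N, hits = 0.123, 200_000, 0

for _ in range(N):
    x = (x + alpha) % 1.0
    hits += (a <= x < b)

print(hits / N, b - a)     # time average ~ 0.3000 vs. phase space average 0.3000
```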
Historically, the ergodic hypothesis has been formulated differently. In its original version due to Boltzmann (1871) (cited by Ehrenfest and Ehrenfest-Afanassjewa
(1911)), it refers to the assertion that a trajectory literally has to _go through every point_ in phase space (more precisely, in the constant-energy hypersurface). But this would imply that there is only _one_ solution with all possible microstates belonging to one and the same solution. This has been proven impossible by Rosenthal (1913) and Plancherel (1913).
In a weaker formulation, the so-called 'quasi-ergodic hypothesis' demands that a trajectory has to _come arbitrarily close to every point_ in phase space (see Ehrenfest and Ehrenfest-Afanassjewa (1911)). Later, the results of Birkhoff (1931) and von Neumann (1932) established the precise conditions under which equality of the time and phase space average is obtained.3
Footnote 3: Birkhoff (1931) gives a definition of ergodicity in terms of invariant sets (where a set \(A\in\mathcal{B}(\Gamma)\) is called invariant if and only if \(T^{-1}A=A\)). If, for all sets \(A\in\mathcal{B}(\Gamma)\) with \(T^{-1}A=A\),
\(\mu(A)=0\) or \(\mu(A)=1\), then the system is called ‘ergodic’. Thus a system is called ‘ergodic’ if and only if all invariant sets are of full or zero measure. In other words, there exist no two (or more) disjoint invariant sets of non-zero measure. The two definitions of ergodicity relate to one another via Birkhoff’s theorem.
For realistic physical systems, this equality of the time and phase space average - that is, ergodicity - turned out to be extremely hard to prove, if it could be proven at all. To draw on the most important result: it took almost 50 years and joint efforts to extend Sinai's proof for the model of 1 billiard ball on a 2-dimensional table (cf. Sinai (1970)) to the generalized model of \(N\geq 2\) hard spheres in a container with periodic boundary conditions (i.e. a torus) of dimension \(d\geq 2\); see Simanyi (2015).
At this point, the question arises: What if we were not interested in the exact coincidence of the time and phase space average in the first place? What if all we need is an approximate equality of the time and phase space average on typical trajectories? The point I want to make is the following: Boltzmann, being concerned with the analysis of realistic physical systems, need not have been and presumably was not interested in ergodicity in the strict sense. According to Ehrenfest and Ehrenfest-Afanassjewa (1911), Boltzmann used ergodicity to estimate the fraction of time a system spends in a certain macrostate. To obtain such an estimate, however, it suffices to establish a result qualitatively comparable to ergodicity: _an almost equality of the time and phase space average of physical macrostates on typical trajectories._ This is precisely where the notion of essential ergodicity comes into play.
## 3 Essential ergodicity
We need one last ingredient to grasp the notion of essential ergodicity and that is the notion of typicality of macro- and microstates. We will then find that, given a stationary measure and a typical macrostate, that is, an equilibrium state in Boltzmann's sense, a typical system behaves essentially as if it were ergodic.
### Typicality and Boltzmann's notion of equilibrium
A measure on the space of possible states of the system - like a volume measure on phase space - is naturally a measure of probability or typicality.4 Let again \(\mu\) denote the volume measure on \(\Gamma\). We call a measurable set \(A\subset\Gamma\) 'typical' (with respect to \(\Gamma\)) if and only if
Footnote 4: There is a little caveat to this statement. While it is definitely true whenever phase space is finite and the measure is normalizable, one has to be careful with infinite phase spaces and non-normalizable measures. For problems related to the latter, see Schiffrin and Wald (2012) or Goldstein et al. (2016). The distinction between the notions of probability and typicality has been drawn and discussed elsewhere (see, e.g., Goldstein (2012), Lazarovici and Reichert (2015) or Wilhelm (2019)).
\[\mu(A)=1-\varepsilon \tag{7}\]
for \(0<\varepsilon<<1\). This definition of 'typical sets' directly entails a definition of 'typical points' (cf. Wilhelm (2019)). We say that a point \(x\) is 'typical' (with respect to \(\Gamma\)) if and only if \(x\in A\) and \(A\) is typical with respect to \(\Gamma\).
In Boltzmann's statistical mechanics, we are concerned with 'points' (microstates) and 'sets' (macro-regions). Macro-regions are regions of phase space corresponding to physical macrostates of the system. More precisely, every microstate \(X\), represented by a point \((q,p)\) on \(\Gamma\), belongs to, and thereby determines, a certain macrostate \(M(X)\), represented by an entire region \(\Gamma_{M}\subset\Gamma\) - the set of all microstates realizing that particular macrostate. While a microstate comprises the exact positions and momenta of all the particles, \(X=(q_{1},...,q_{3N},p_{1},...,p_{3N})\), a macrostate \(M(X)\) is specified by the macroscopic, thermodynamic variables of the system, like volume \(V\), temperature \(T\), and so on. By definition, any two macrostates \(M_{i}\) and \(M_{j}\) are macroscopically distinct, hence there are only finitely many macrostates \(M_{i}\), and all macrostates together provide a partition of phase space into disjoint 'macro-regions' \(\Gamma_{M_{i}}\) with \(\Gamma=\bigcup_{i=1}^{n}\Gamma_{M_{i}}\). Here it is a consequence of the large number of particles that every macrostate \(M(X)\) is realized by a huge number of microstates \(X\) and, hence, the precise way of partitioning doesn't matter.
In this set-up, Boltzmann defined 'equilibrium' precisely as the _typical_ macrostate of the system.
**Definition 1** (Boltzmann equilibrium).: _Let \((\Gamma,\mathcal{B}(\Gamma),T,\mu)\) be a dynamical system. Let \(\Gamma\) be partitioned into finitely many disjoint, measurable subsets \(\Gamma_{M_{i}},i=1,..,n\) by some (set of) physical macrovariable(s) \(M_{i}\), i.e., \(\Gamma=\bigcup_{i=1}^{n}\Gamma_{M_{i}}\). Then a set \(\Gamma_{Eq}\in\{\Gamma_{M_{1}},...,\Gamma_{M_{n}}\}\) with phase space average_
\[\mu(\Gamma_{Eq})=1-\varepsilon \tag{8}\]
_where \(\varepsilon\in\mathbb{R}\), \(0<\varepsilon<<1\), is called the 'equilibrium set' or 'equilibrium region'. The corresponding macrostate \(M_{Eq}\) is called the 'Boltzmann equilibrium' of the system._
Be aware that this definition is grounded on a particular, physical macro partition of phase space. In other words, it is not an arbitrary value of \(\varepsilon\) which, when given, determines an equilibrium state - such a definition would be meaningless from the point of view of physics. Instead, it is a partition determined by the physical macrovariables of the theory, which is given, and it is with respect to that partition that a region of overwhelming phase space measure, if it exists, defines an equilibrium state in Boltzmann's sense (and by the way determines the value of \(\varepsilon\)).
At this point, it has been Boltzmann's crucial insight that, for a realistic physical system of \(N\approx 10^{24}\) particles (where, for a medium-sized object, we take Avogadro's constant) and a partition into macroscopically distinct states, there always exists a region of overwhelming phase space measure (see, e.g., Boltzmann (1896b)).5 This follows essentially from the vast gap between micro and macro description of the system and the fact that, for a large number of particles, small differences at the macroscopic level translate into huge differences in the corresponding phase space volumes.
Footnote 5: Lanford (1973) proves the existence of a region of overwhelming phase space measure for a large class of realistic physical systems.
To obtain an idea of the numbers, consider a gas in a medium-sized box. For that model, Penrose (1989, 2004) estimates the volume of all non-equilibrium regions together as compared to the equilibrium region to be:
\[\frac{\mu(\bigcup_{i=1}^{n}\Gamma_{M_{i}}\setminus\Gamma_{Eq})}{\mu(\Gamma_{ Eq})}=\frac{\mu(\Omega\setminus\Gamma_{Eq})}{\mu(\Gamma_{Eq})}\approx 1:10^{N} \tag{9}\]
with \(N\approx 10^{24}\) and \(\Omega=\bigcup_{i=1}^{n}\Gamma_{M_{i}}\) denoting the accessible region of phase space. This implies, with \(\mu(\Gamma_{Eq})\approx\mu(\Omega)\), that \(\varepsilon\) is of the order \(10^{-N}\approx 10^{-10^{24}}\).
Given the value of \(\varepsilon\) from above, \(\varepsilon\approx 10^{-10^{24}}\), it follows that \(\sqrt{\varepsilon}\approx 10^{-10^{23}}\). Consequently, the equilibrium region is of measure \(\mu(\Gamma_{Eq})\approx 1-10^{-10^{24}}\) and, by Theorem 2 (proven in the appendix, applied with \(k=1/\sqrt{\varepsilon}\)), the measures of the set \(B\) of trajectories whose time average of equilibrium lies below \(1-\sqrt{\varepsilon}\) and of its complement \(G\) are
\[\mu(B)<10^{-10^{23}},\hskip 28.452756pt\mu(G)>1-10^{-10^{23}}. \tag{14}\]
Note that \(B\) is now the set of trajectories which spend less than \(1-10^{-10^{23}}\) and \(G\) the set of trajectories which spend at least \(1-10^{-10^{23}}\) (!) of their time in equilibrium. We thus find that trajectories which spend almost all of their time in equilibrium are typical whereas trajectories which spend less than almost all of their time in equilibrium are atypical!
The converse statement has been proven as well (Frigg and Werndl (2015a,b); see the appendix for a different proof; cf. Reichert (2020)). It says that if there exists a region \(\Gamma_{Eq^{\prime}}\subset\Gamma\) in which by far most trajectories spend by far most of their time, then this region has very large phase space measure. To be precise, if there exists a region \(G^{\prime}\) with \(\mu(G^{\prime})=1-\delta\) such that \(\forall x\in G^{\prime}\): \(\hat{\Gamma}_{Eq^{\prime}}(x)\geq 1-\varepsilon^{\prime}\), then the following holds:
\[\mu(\Gamma_{Eq^{\prime}})\geq(1-\varepsilon^{\prime})(1-\delta). \tag{15}\]
Here we are again interested in those cases where \(\delta\) and \(\varepsilon^{\prime}\) are very small, \(0<\delta<<1\) and \(0<\varepsilon^{\prime}<<1\) (while the result holds for other values of \(\delta\) and \(\varepsilon^{\prime}\) as well).
This converse result tells us that, if there exists a state in which a typical trajectory spends by far most of its time, then this state is of overwhelming phase space measure.
Why is this converse statement interesting? It doesn't start from Boltzmann's notion of equilibrium. Instead, it starts from a thermodynamic or thermodynamic-like notion of equilibrium.
According to a standard thermodynamics textbook (like, e.g., Callen (1960) or Reiss (1996)), a thermodynamic equilibrium is a state in which a system, once it is in that state, stays for all times. In what follows, we give a definition which relaxes that standard definition a little in that it allows for rare fluctuations out of equilibrium and for some atypical trajectories (all \(x\notin G^{\prime}\)) that do not behave in a thermodynamic-like way.6
Footnote 6: Lavis (2005, 2008) would call this a ‘thermodynamic-like equilibrium’ to draw the distinction between this notion and the standard textbook definition.
**Definition 2** (Thermodynamic equilibrium).: _Let \((\Gamma,\mathcal{B}(\Gamma),T,\mu)\) be a dynamical system. Let \(\Gamma\) be partitioned into finitely many disjoint, measurable subsets \(\Gamma_{M_{i}}(i=1,...,n)\) by some (set of) physical macrovariable(s) \(M_{i}\), i.e., \(\Gamma=\bigcup_{i=1}^{n}\Gamma_{M_{i}}\). Let \(G^{\prime}\subset\Gamma\) with \(\mu(G^{\prime})=1-\delta\) and \(0<\delta<<1\). Let \(0<\varepsilon^{\prime}<<1\). A set \(\Gamma_{Eq^{\prime}}\in\{\Gamma_{M_{1}},...,\Gamma_{M_{n}}\}\) (connected to a macrostate \(M_{Eq^{\prime}}\)) with time average_
\[\hat{\Gamma}_{Eq^{\prime}}(x)\geq 1-\varepsilon^{\prime} \tag{16}\]
_for all \(x\in G^{\prime}\) is called a 'thermodynamic equilibrium'._
To summarize, we obtain that, for every dynamical system with a stationary measure and a state of overwhelming phase space measure, almost all trajectories spend almost all of their time in that state, and the other way round, given a state in which almost all trajectories spend almost all of their time, that state is of overwhelming phase space measure. _Hence, an equilibrium state in Boltzmann's sense is a thermodynamic equilibrium and the other way round!7_
Footnote 7: Based on the apparently missing connection between the time and the phase space average of equilibrium, Frigg and Werndl assert that Boltzmann’s account of thermodynamic behaviour, which has later become known as the ‘typicality account’, is simply ‘mysterious’ (Frigg and Werndl, 2012, p. 918).
The only two assumptions which enter the proofs of the above assertions are:
1. that the measure is stationary (resp. the dynamics is measure-preserving), i.e., \(\mu_{t}(A)=\mu(A)\) for all \(A\in\mathcal{B}(\Gamma)\) and
2. that there is a macrostate of overwhelming phase space measure, i.e., a Boltzmann equilibrium \(\Gamma_{Eq}\) with \(\mu(\Gamma_{Eq})=1-\varepsilon\),
or, for the reverse direction, a) and
3. that there is a state in which typical trajectories spend by far most of their time, i.e., a thermodynamic equilibrium \(\Gamma_{Eq^{\prime}}\) with \(\hat{\Gamma}_{Eq^{\prime}}\geq 1-\varepsilon^{\prime}\).
Ergodicity doesn't enter the proofs, nor do we get ergodicity out of them. However, we get something similar to ergodicity, which we call 'essential ergodicity'.
### Essential ergodicity
While, for an ergodic system, the time and phase space averages _exactly coincide for all but a measure-zero set of solutions_, for an essentially ergodic system, the time and phase space averages _almost coincide on typical solutions_. To be precise, the following definition applies.
**Definition 3** (Essential ergodicity).: _Let \((\Gamma,\mathcal{B}(\Gamma),T,\mu)\) be a dynamical system. Let \(\Gamma\) be partitioned into finitely many disjoint, measurable subsets \(\Gamma_{M_{i}}(i=1,...,n)\) by some (set of) physical macrovariable(s) \(M_{i}\), i.e., \(\Gamma=\bigcup_{i=1}^{n}\Gamma_{M_{i}}\). Let \(0<\varepsilon<<1\). A system is called 'essentially ergodic' if and only if_
\[|\hat{\Gamma}_{M_{i}}(x)-\mu(\Gamma_{M_{i}})|\leq\varepsilon \tag{17}\]
\(\forall i=1,...,n\) _and \(\forall x\in G\) with \(\mu(G)\geq 1-\delta\), \(0<\delta<<1\)._
For a measure-preserving system with an equilibrium state (be it a Boltzmann or a thermodynamic equilibrium), Equations 17 follow in a straightforward way from the two definitions of equilibrium given in Eq. 8 and Eq. 16 and the corresponding results on the time and phase space average, Eq. 14 and Eq. 15, respectively. More precisely, the following holds.
**Theorem 1** (FAPP ergodic hypothesis).: _Let \((\Gamma,\mathcal{B}(\Gamma),T,\mu)\) be a measure-preserving dynamical system. Let there be an equilibrium state \(M_{Eq}\) (a Boltzmann or thermodynamic equilibrium) with corresponding equilibrium region \(\Gamma_{Eq}\subset\Gamma\)._
_Then the system is essentially ergodic. In particular, there exists an \(\varepsilon\in\mathbb{R}\) with \(0<\varepsilon<<1\) such that_
\[|\hat{\Gamma}_{Eq}(x)-\mu(\Gamma_{Eq})|\leq\varepsilon \tag{18}\]
\(\forall x\in G\) _with \(\mu(G)\geq 1-\delta\), \(0<\delta<<1\)._
Proof.: We only prove Equation 18. From that, Equations 17 follow directly.
Let \(0<\delta^{\prime},\varepsilon^{\prime},\varepsilon^{\prime\prime}<<1\). For the first direction of proof, consider a thermodynamic equilibrium, i.e., \(\hat{\Gamma}_{Eq}(x)\geq 1-\varepsilon^{\prime}\) for all \(x\in G^{\prime}\) with \(\mu(G^{\prime})=1-\delta^{\prime}\). It follows from Eq. 15 that \(\mu(\Gamma_{Eq})\geq(1-\varepsilon^{\prime})(1-\delta^{\prime})\) and, hence,
\[|\hat{\Gamma}_{Eq}(x)-\mu(\Gamma_{Eq})|\leq\varepsilon^{\prime}+\delta^{ \prime}-\varepsilon^{\prime}\delta^{\prime}. \tag{19}\]
Now set \(G=G^{\prime}\), \(\delta=\delta^{\prime}\) and \(\varepsilon=\varepsilon^{\prime}+\delta^{\prime}-\varepsilon^{\prime}\delta^{\prime}\).
For the other direction, consider a Boltzmann equilibrium, i.e., \(\mu(\Gamma_{Eq})=1-\varepsilon^{\prime\prime}\). It follows from Theorem 2 (proven in the appendix, applied with \(k=1/\sqrt{\varepsilon^{\prime\prime}}\)) that \(\mu(G^{\prime\prime})>1-\sqrt{\varepsilon^{\prime\prime}}\) with \(G^{\prime\prime}=\{x\in\Gamma|\hat{\Gamma}_{Eq}(x)\geq 1-\sqrt{\varepsilon^{ \prime\prime}}\}\). Hence, for all \(x\in G^{\prime\prime}\),

\[|\hat{\Gamma}_{Eq}(x)-\mu(\Gamma_{Eq})|\leq\sqrt{\varepsilon^{\prime\prime}}. \tag{20}\]

Now set \(G=G^{\prime\prime}\), \(\delta=\sqrt{\varepsilon^{\prime\prime}}\) and \(\varepsilon=\sqrt{\varepsilon^{\prime\prime}}\).
Bear in mind that, in this theorem, the order of \(\varepsilon\) is the order of the incredibly tiny proportion of phase space that is occupied by the system's non-equilibrium macrostates. This means that for all practical purposes (FAPP) the time and phase space averages can be taken to be equal. In other words, the system behaves essentially as if it were ergodic.
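As a toy numerical check of this FAPP statement (my own sketch; the ergodic system, the value of \(\varepsilon\) and all parameters are arbitrary illustrative choices), one can take the circle rotation with an 'equilibrium' arc of measure \(1-\varepsilon\) and verify that, for randomly drawn initial conditions, the gap between the time average of the arc and its phase space measure stays tiny:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = np.sqrt(2) - 1
eps = 0.01                              # equilibrium arc Gamma_Eq = [0, 1 - eps)
N = 100_000

gaps = []
for x0 in rng.random(20):               # 20 randomly drawn ("typical") initial points
    traj = (x0 + alpha * np.arange(N)) % 1.0
    time_avg = np.mean(traj < 1 - eps)  # fraction of time spent in the equilibrium arc
    gaps.append(abs(time_avg - (1 - eps)))

print(max(gaps))                        # tiny: time and phase space averages FAPP agree
```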
### Scope and limits of (essential) ergodicity
Although the notion of essential ergodicity is weaker than the notion of ergodicity, it predicts qualitatively the same long-time behaviour. In particular, it tells us that a typical trajectory spends by far most of its time in equilibrium, where equilibrium is defined in Boltzmann's way in terms of the phase space average, and it makes this notion of 'by far most' mathematically precise.8 This justifies, in a rigorous way, Boltzmann's assumption of ergodicity as an idealization or FAPP truth in analyzing the system's long-time behaviour (as done, e.g., in his estimate of the fluctuation rate, Boltzmann (1896b)). In other words, based on Boltzmann's account, the ergodic hypothesis is well-justified. It is a good working hypothesis for those time scales on which it begins to matter that trajectories wind around all of phase space.
Footnote 8: Goldstein makes a similar point when he asserts that, even without ergodicity, the value of any thermodynamic variable is constant ‘to all intents and purposes’ (Goldstein, 2001, p. 46).
Let us, at this point, use the above result on essential ergodicity to estimate the rate of fluctuations out of equilibrium. Recall that, according to Eq. 14, typical trajectories spend at least \(1-10^{-10^{23}}\) of their time in equilibrium, when equilibrium is of measure \(\mu(\Gamma_{Eq})=1-10^{-10^{24}}\) (which is a reasonable value for a medium-sized object). In other words, they spend a fraction of less than \(10^{-10^{23}}\) of their time out of equilibrium, that is, in a fluctuation. If we assume that fluctuations happen randomly, in accordance with a trajectory wandering around phase space erratically, we obtain the following estimate for typical trajectories: a fluctuation of 1 second occurs about every \(10^{10^{23}}\) seconds. But this means that a typical medium-sized system spends trillions of years in equilibrium as compared to one second in non-equilibrium, a time larger than the age of the universe!9
Footnote 9: This agrees with the time estimate Boltzmann presents in his letter to Zermelo (Boltzmann, 1896b, p. 577).
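The numbers in this estimate lie far beyond floating point range, so any explicit bookkeeping has to be done on the exponents. The fragment below is my own illustration; note that the text rounds \(\sqrt{\varepsilon}=10^{-5\times 10^{23}}\) to \(10^{-10^{23}}\), a difference that is irrelevant at these scales. It reproduces the comparison with the age of the universe:

```python
from math import log10

log_eps = -1e24                     # log10 of the total measure of non-equilibrium
log_time_out = log_eps / 2          # Theorem 2 with k = 1/sqrt(eps): fraction of time
                                    # spent out of equilibrium is below sqrt(eps)
log_wait = -log_time_out            # seconds of waiting per 1-second fluctuation
log_age = log10(4.35e17)            # age of the universe (~13.8 Gyr) in seconds

print(f"one-second fluctuation roughly every 10^{log_wait:.2g} seconds")
print(f"age of the universe: about 10^{log_age:.1f} seconds")
```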
So far we argued that essential ergodicity substantiates Boltzmann's assertions about the long-time behaviour of macroscopic systems. What about the short-time behaviour? In physics and philosophy, several attempts have been made to use ergodicity in some way or the other to explain a system's evolution from non-equilibrium to equilibrium (see Vranas (1998) or Frigg and Werndl (2011, 2012); for earlier attempts as well as a thorough critique, see Bricmont (1995) and the references therein).
In this paper, I argue that ergodicity - just like epsilon-ergodicity, essential ergodicity, or any other notion involving an infinite-time limit - _does not_ and _cannot_ tell us anything about the approach to equilibrium, which is a behaviour within _short times_. This is simply due to the fact that the notion of ergodicity (or any notion akin to that) involves an _infinite-time limit_. Because of that limit, ergodicity can, at best, tell us something about the system's _long-time_ behaviour where 'long-time' refers to time scales comparable to the recurrence times, where it begins to matter
that the system's trajectory winds around all of phase space. For those short time scales on which the system evolves from non-equilibrium to equilibrium, ergodicity (or any notion akin to that) doesn't play any role. In fact, for a realistic gas, the equilibration time scale (i.e. the time scale of a system's approach to equilibrium) is fractions of a second as compared to trillions of years for the recurrence time!
Boltzmann's explanation of the irreversible approach to equilibrium is a genuine typicality result (see the discussion and references at the end of section 3.1) - ergodicity doesn't add to nor take anything from that.
At this point, a quote of the mathematician Schwartz fits well.10 Schwartz writes with respect to Birkhoff's ergodic theorem and the widely-spread conception that ergodicity might help to explain thermodynamic behaviour (Schwartz, 1992, pp. 23-24):
Footnote 10: This quote was one of the first quotes (and essays) that were given to me by Detlef Dürr, to whom this memorial volume is dedicated.
The intellectual attractiveness of a mathematical argument, as well as the considerable mental labor involved in following it, makes mathematics a powerful tool of intellectual prestidigitation - a glittering deception in which some are entrapped, and some, alas, entrappers. Thus, for instance, the delicious ingenuity of the Birkhoff ergodic theorem has created the general impression that it must play a central role in the foundations of statistical mechanics. [...] The Birkhoff theorem in fact does us the service of establishing its own inability to be more than a questionably relevant superstructure upon [the] hypothesis [of typicality].
## 4 Conclusion
Based on typicality and stationarity as the two basic concepts of Boltzmann's approach, it follows that ergodicity, as an idealization, or essential ergodicity, in the strict sense, is a consequence rather than an assumption of Boltzmann's account.
I believe that Boltzmann was aware of this fact. In my opinion, he simply didn't highlight the precise mathematical connection between the concepts of typicality, stationarity, and essential ergodicity because it was absolutely clear to him that, given a state of overwhelming phase space volume and a stationary measure, by far most trajectories would stay in that state by far most of their time - just like by far most trajectories starting from non-equilibrium would move into equilibrium very quickly. He didn't need a mathematical theorem to make this more precise.
Let me now end this paper with a variation of the both picturesque and paradigmatic example of Tim Maudlin, about typicality incidents occurring in the Sahara desert.11 In what follows, I will adapt this example to the case of essential ergodicity.
Footnote 11: Known to the author from private conversation. The original version is about a person’s approach from non-equilibrium (here: an oasis) to equilibrium (here: the remainder of the desert), where it is the atypical initial condition, the special fact of ‘being in an oasis’ _in the very beginning_, which is in need of explanation. The fact that a person, walking around in an unspecific and maybe even random way, walks out of the oasis into the desert is merely typical (we call it _typical within atypicality_; see Dürr and Teufel (2009) for this phrasing). According to Goldstein (2001), it is the explanation of the atypical initial condition which constitutes the hard part of any explanation of thermodynamic irreversibility.
A person wandering through the Sahara is typically surrounded by sand by far most of her time. In other words, she is typically hardly ever in an oasis. This fact is independent of the exact form of her 'wandering about', whether she changes direction often, or not, whether she moves fast, or not, and so on. Even if she doesn't move at all, she is typically surrounded by sand (in that case, for all times). In other words, independent of the dynamics, the long-time average of 'being surrounded by sand' is close to one on typical trajectories. This follows solely from the fact that all oases together constitute a vanishingly small part of the Sahara desert and remain so for all times.
## Appendix
In what follows, I prove a theorem on the time average of the Boltzmann equilibrium.
**Theorem 2** (Time average of \(\Gamma_{Eq}\)).: _Let \((\Gamma,\mathcal{B}(\Gamma),\mu)\) be a probability space and let \(T\) be a measure-preserving transformation. Let \(\varepsilon,k\in\mathbb{R}\) with \(0<\varepsilon<<1\) and \(1\leq k\leq 1/\varepsilon\). Let \(\Gamma_{Eq}\subset\Gamma\) be an equilibrium region with \(\mu(\Gamma_{Eq})=1-\varepsilon\). Let \(B\) be the set of points for which the time average of equilibrium is smaller than \(1-k\varepsilon\), \(B=\{x\in\Gamma|\hat{\Gamma}_{Eq}(x)<1-k\varepsilon\}\). It follows that \(B\) is of measure_
\[\mu(B)<1/k. \tag{21}\]
_Let further \(G=\{x\in\Gamma|\hat{\Gamma}_{Eq}(x)\geq 1-k\varepsilon\}\) be the set of points for which the time average of equilibrium is larger than or equal to \(1-k\varepsilon\). Then_
\[\mu(G)>1-1/k. \tag{22}\]
Proof (Theorem 2).: The transformation \(T\) is measure-preserving, that is, for any set \(A\in\mathcal{B}(\Gamma)\) and \(\forall t\colon\mu(A)=\mu(T^{-t}A)\). Hence, in particular, \(\mu(\Gamma_{Eq})=\mu(T^{-t}\Gamma_{Eq})\) where \(\Gamma_{Eq}\) refers to the equilibrium state, i.e., \(\mu(\Gamma_{Eq})=1-\varepsilon\). It follows that \(\mu(T^{-t}\Gamma_{Eq})=1-\varepsilon\), as well, and thus:
\[1-\varepsilon = \mu(\Gamma_{Eq})=\mu(T^{-t}\Gamma_{Eq})=\int_{\Gamma}\chi_{T^{ -t}\Gamma_{Eq}}(x)d\mu(x)=\int_{\Gamma}\chi_{\Gamma_{Eq}}(T^{t}x)d\mu(x) \tag{23}\] \[= \lim_{\mathcal{T}\to\infty}\frac{1}{\mathcal{T}}\int_{0}^{ \mathcal{T}}dt\int_{\Gamma}\chi_{\Gamma_{Eq}}(T^{t}x)d\mu(x).\]
Here the last equation follows from the fact that the integrand is a constant.
At this point, we make use of the pointwise ergodic theorem of Birkhoff (1931)12 which says that, for any measure-preserving transformation \(T\) and for any \(\mu\)-integrable function \(f\), i.e. \(f\in L^{1}(\mu)\), the limit
Footnote 12: For a thorough presentation of Birkhoff’s theorem and its proof, see Petersen (1983).
\[\hat{f}=\lim_{\mathcal{T}\to\infty}\frac{1}{\mathcal{T}}\int_{0}^{\mathcal{T} }f(T^{t}x)dt \tag{24}\]
exists for almost every \(x\in\Gamma\) and the (almost everywhere defined) limit function \(\hat{f}\) is integrable, i.e., \(\hat{f}\in L^{1}(\mu)\).
Let us apply Birkhoff's theorem to the above equation. The characteristic function \(\chi_{\Gamma_{Eq}}\) is \(\mu\)-integrable. Hence, for almost all \(x\in\Gamma\), \(\lim_{\mathcal{T}\to\infty}\frac{1}{\mathcal{T}}\int_{0}^{\mathcal{T}}\chi_{ \Gamma_{Eq}}(T^{t}x)dt\) exists and is \(\mu\)-integrable. In other words, for almost every single trajectory the time average exists. By dominated convergence, we can thus change the order of integration and pull the limit into the \(\mu\)-integral. Let \(\Gamma^{*}\subset\Gamma\) denote the set of points for which the time average exists, with \(\mu(\Gamma^{*})=\mu(\Gamma)\). Then Eq. 23 becomes
\[1-\varepsilon=\int_{\Gamma^{*}}d\mu(x)\bigg{[}\lim_{\mathcal{T}\to\infty}\frac {1}{\mathcal{T}}\int_{0}^{\mathcal{T}}\chi_{\Gamma_{Eq}}(T^{t}x)dt\bigg{]}. \tag{25}\]
Let us analyze the general case.13 Let again \(G=\{x\in\Gamma|\hat{\Gamma}_{Eq}(x)\geq 1-k\varepsilon\}\) and \(B=\{x\in\Gamma|\hat{\Gamma}_{Eq}(x)<1-k\varepsilon\}\). It is clear that this defines a decomposition of \(\Gamma^{*}\)
into disjoint sets, \(\Gamma^{*}=G\cup B\), with \(\mu(B)=\mu(\Gamma\backslash G)=1-\mu(G)\), and where \(G\) and \(B\) are invariant sets. Hence, Eq. 25 can be rewritten as
Footnote 13: The first way to fulfill Eq. 25 is that the time average \(\hat{\Gamma}_{Eq}(x)\) is a constant (almost everywhere). In that case, it must hold that \(\hat{\Gamma}_{Eq}(x)=1-\varepsilon\). The set of all points \(x\in\Gamma\) for which the limit exists (and is constantly \(1-\varepsilon\)) defines an invariant set, \(T^{-1}\Gamma^{*}=\Gamma^{*}\), with measure \(\mu(\Gamma^{*})=1\). This is the ergodic case. The second way to fulfill Eq. 25 is that there exists an invariant set \(A\) (i.e. \(T^{-1}A=A\)) with \(\mu(A)=\varepsilon\) such that \(\forall x\in A\): \(\hat{\Gamma}_{Eq}(x)=0\) and \(\forall x\notin A\): \(\hat{\Gamma}_{Eq}(x)=1\) (again, up to a set of measure zero). Then also \(\Gamma^{*}\backslash A\) is an invariant set and \(\mu(\Gamma^{*}\backslash A)=1-\varepsilon\). This reflects the case of \(T^{t}\) being the identity, \(T^{t}x=x\), and \(\Gamma\backslash A=\Gamma_{Eq}\).
\[1-\varepsilon=\int_{G}\hat{\Gamma}_{Eq}(x)d\mu(x)+\int_{B}\hat{\Gamma}_{Eq} (x)d\mu(x). \tag{26}\]
Let now the 'mean time average' of \(G\) be defined as
\[\bar{\Gamma}_{Eq}(G)=\frac{1}{\mu(G)}\int_{G}\hat{\Gamma}_{Eq}(x)d\mu(x), \tag{27}\]
where \(\hat{\Gamma}_{Eq}(x)\) exists and is integrable for all \(x\in G\). The mean time average determines the mean fraction of time the trajectories starting in \(G\) spend in the set \(\Gamma_{Eq}\). Analogously, let \(\bar{\Gamma}_{Eq}(B)\) denote the mean time average of \(B\). With this definition, Eq. 26 can be rewritten as
\[1-\varepsilon=\bar{\Gamma}_{Eq}(G)\mu(G)+\bar{\Gamma}_{Eq}(B)\mu(B). \tag{28}\]
We want to solve this for \(\mu(B)\). Recall that \(\mu(G)=1-\mu(\Gamma\backslash G)=1-\mu(B)\). Moreover, since \(\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\chi_{\Gamma_{Eq}}(T^{t}x)dt\leq 1\), we have \(\bar{\Gamma}_{Eq}(G)\leq 1\). On the other hand, it follows from the definition of the mean time average that \(\bar{\Gamma}_{Eq}(B)<1-k\varepsilon\) (since \(\hat{\Gamma}_{Eq}(x)<1-k\varepsilon\) for all \(x\in B\)). Hence, since \(k\geq 1\), we have \(\bar{\Gamma}_{Eq}(B)<1-\varepsilon\).
Now in order for the right hand side of Eq. 28 to add up to \(1-\varepsilon\), the measure of \(B\) needs to be small. This is due to the fact that \(\mu(B)\) comes with a factor \(\bar{\Gamma}_{Eq}(B)<1-\varepsilon\) which can only be compensated by a factor \(\bar{\Gamma}_{Eq}(G)\geq 1-\varepsilon\) in front of \(\mu(G)\). However, since \(\bar{\Gamma}_{Eq}(G)\) is bounded from above by one, \(\bar{\Gamma}_{Eq}(G)\leq 1\), the first summand can outweigh the second only if \(\mu(G)\) is large enough (respectively, \(\mu(B)\) small enough). At most, \(\bar{\Gamma}_{Eq}(G)=1\). In that case, \(\mu(G)\) attains its minimum and \(\mu(B)\) its maximum (where \(\mu(B)=1-\mu(G)\)). Since we want to determine an upper bound on \(\mu(B)\), we set \(\bar{\Gamma}_{Eq}(G)=1\) (a condition we will relax later). Let, in addition, \(\Theta:=\mu(B)\). Then Eq. 28 can be rewritten as
\[1-\varepsilon=(1-\Theta)+\bar{\Gamma}_{Eq}(B)\Theta \tag{29}\]
With \(\bar{\Gamma}_{Eq}(B)<1-k\varepsilon\), it follows that
\[\Theta=\frac{\varepsilon}{1-\bar{\Gamma}_{Eq}(B)}<\frac{\varepsilon}{1-(1-k \varepsilon)}=1/k. \tag{30}\]
If we now no longer restrict the mean time average of \(G\) to be one, \(\Theta\) only becomes smaller and the inequality holds a fortiori. That way we obtain an upper bound on \(\mu(B)\):
\[\mu(B)<1/k. \tag{31}\]
From this it follows directly that
\[\mu(G)=\mu(\Gamma\backslash B)=1-\mu(B)>1-1/k. \tag{32}\]
This proves the assertion.
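The essence of this argument is a Markov-type inequality for the nonnegative variable \(1-\hat{\Gamma}_{Eq}\), whose mean equals \(\varepsilon\) by stationarity. Here is a quick Monte Carlo sketch of that step (my own illustration; the distribution of time averages is chosen arbitrarily, subject only to having the right mean):

```python
import numpy as np

rng = np.random.default_rng(0)
eps, k = 1e-3, 20.0

# a toy distribution of time averages with mean 1 - eps (in expectation):
# a fraction 10*eps of trajectories dips to 0.9, the rest sits at 1
hat = np.where(rng.random(1_000_000) < 10 * eps, 0.9, 1.0)

mu_B = np.mean(hat < 1 - k * eps)   # measure of the bad set B of the theorem
print(mu_B, "<", 1 / k)             # ~ 0.01 < 0.05, as the bound demands
```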
In what follows, I give the proof of the converse statement saying that a state in which typical solutions stay by far most of their time is a state of by far largest phase space volume.
**Proposition 3**.: _Let the setting be as in the above theorem. Let \(0<\delta<<1\) and \(0<\varepsilon<<1\). Let now \(\Gamma_{Eq^{\prime}}\subset\Gamma\) and \(G^{\prime}\subset\Gamma\) with \(\mu(G^{\prime})=1-\delta\) such that \(\forall x\in G^{\prime}\): \(\hat{\Gamma}_{Eq^{\prime}}(x)\geq 1-\varepsilon\). Then_
\[\mu(\Gamma_{Eq^{\prime}})\geq(1-\varepsilon)(1-\delta). \tag{33}\]
Proof.: When one applies Eq. 23, Eq. 25 and Eq. 26 to the set \(\Gamma_{Eq^{\prime}}\subset\Gamma\), one gets
\[\mu(\Gamma_{Eq^{\prime}}) = \int_{\Gamma^{*}}d\mu(x)\bigg{[}\lim_{\mathcal{T}\to\infty}\frac {1}{\mathcal{T}}\int_{0}^{\mathcal{T}}\chi_{\Gamma_{Eq^{\prime}}}(T^{t}x)dt \bigg{]} \tag{34}\] \[= \int_{G^{\prime}}d\mu(x)\hat{\Gamma}_{Eq^{\prime}}(x)+\int_{B^{ \prime}}d\mu(x)\hat{\Gamma}_{Eq^{\prime}}(x),\]
where the first equality holds due to Birkhoff's theorem and the second uses the definition of the time average and \(B^{\prime}=\Gamma^{*}\backslash G^{\prime}\). From \(\int_{B^{\prime}}d\mu(x)\hat{\Gamma}_{Eq^{\prime}}(x)\geq 0\) and the assumptions it follows that
\[\mu(\Gamma_{Eq^{\prime}})\geq\int_{G^{\prime}}d\mu(x)\hat{\Gamma}_{Eq^{ \prime}}(x)\geq(1-\varepsilon)(1-\delta). \tag{35}\]
This proves the assertion.
|
2303.10922 | Modular $A_4$ symmetry in 3+1 active-sterile neutrino masses and mixings | Motivated by the significance of modular symmetry in generating neutrino
masses and flavor mixings, we apply the modular $A_4$ symmetry in a 3+1 scheme
of active-sterile neutrino mixings. Neutrino oscillation observables in the
3$\sigma$ range are successfully reproduced through the vacuum expectation
value of the modulus $\tau$ in the Inverted Hierarchy(IH) only, whereas Normal
Hierarchy (NH) is ruled out. We also study phenomenologies related to the
effective neutrino masses $m_{\beta}$ in tritium beta decay and
$m_{\beta\beta}$ in neutrinoless double beta decay. Mixings between active
neutrinos and eV scale sterile neutrino are analyzed in detail. The model also
predicts the Dirac CP-violating phase $\delta_{CP}$ and Majorana phases
$\alpha$ and $\beta$. The best-fit values of the neutrino mixing angles and
ratios of two mass-squared differences are determined using minimum $\chi^2$
analysis. The best-fit values of the neutrino oscillation observables are
predicted as $\sin^2\theta_{23}=0.566,$ $\sin^2\theta_{12}=0.307,$ $
\sin^2\theta_{13}=0.023$ and $r = 0.172$. The Dirac and Majorana phases are
observed at $\delta_{CP}=350.19^o,\ \alpha=355.75^o$ and $\beta=333.10^o$. We
also observe that the predictions of effective neutrino mass parameters in the
3+1 scheme are significantly different from the three neutrino paradigm. | Mayengbam Kishan Singh, S. Robertson Singh, N. Nimai Singh | 2023-03-20T07:40:41Z | http://arxiv.org/abs/2303.10922v5 | # Modular \(A_{4}\) symmetry in 3+1 active-sterile neutrino masses and mixings
###### Abstract
Motivated by the significance of modular symmetry in describing neutrino masses and flavour structure, we apply the \(A_{4}\) modular symmetry in a 3+1 scheme of active-sterile neutrino mixings. Neutrino oscillation observables in the \(3\sigma\) range are successfully reproduced through the vev of the modulus \(\tau\) in normal hierarchy (NH) as well as inverted hierarchy (IH). We have also studied other phenomenologies regarding the effective neutrino masses \(m_{\beta}\) in tritium beta decay and \(m_{\beta\beta}\) in neutrinoless double beta decay. Mixings between active and sterile neutrinos are analysed in detail, and the mixing elements \(|V_{i4}|\) are found to satisfy the experimental bounds. The best-fit values of the neutrino mixing angles and the mass-squared-difference ratio are determined using a minimum \(\chi^{2}\) analysis. This model predicts best-fit values of the neutrino oscillation observables as \(\sin^{2}\theta_{23}=0.572\), \(\sin^{2}\theta_{12}=0.313\), \(\sin^{2}\theta_{13}=0.022\) and \(r=\sqrt{\Delta m^{2}_{21}/\Delta m^{2}_{31}}=0.172\) for NH, whereas \(\sin^{2}\theta_{23}=0.602\), \(\sin^{2}\theta_{12}=0.288\), \(\sin^{2}\theta_{13}=0.022\) and \(r=\sqrt{\Delta m^{2}_{21}/|\Delta m^{2}_{32}|}=0.172\) for IH. Our analysis is also consistent with the latest Planck cosmological upper bound on the sum of neutrino masses, \(\sum m_{i}<0.12\) eV.
## I Introduction
The origin of neutrino masses and flavour structure is one of the most important problems of the Standard Model (SM) of particle physics. Since the discovery of neutrino oscillations in experiments such as SNO and Super-Kamiokande, various extensions of the SM have been studied. Models with non-Abelian discrete symmetries such as \(A_{4}\)[1; 2; 3], \(S_{4}\)[4; 5; 6], \(S_{3}\)[7; 8; 9; 10; 11], etc. have found their distinct places in describing some of the experimental results as well as making new predictions regarding unresolved problems of the SM. Among these, the absolute neutrino mass scale, the Dirac CP-violating phase, baryogenesis, and dark matter are some of the main open questions in high energy physics.
Inspired by the observations of LSND [12] and MiniBooNE [13; 14], many authors have proposed the existence of a fourth neutrino state called the sterile neutrino. Sterile neutrinos have been incorporated into the three-neutrino theory in various works, in 3+1, 3+1+1, 3+2 schemes, etc. One of the simplest extensions is the 3+1 scheme, where a singlet sterile neutrino is added to the three active neutrinos and the sterile neutrino gets its mass through the minimal extended seesaw (MES) mechanism [15]. Many authors have used discrete symmetries to describe the neutrino masses and their flavour structures. The main drawbacks of such approaches are the presence of many hypothetical scalar fields as well as additional symmetry groups. Recently, an interesting method has been proposed in which modular symmetry is used along with the discrete symmetries as its subgroups. An important feature of the modular symmetry framework is that the Yukawa couplings can transform non-trivially as modular forms under the modular group. These modular forms are written as functions of a single parameter \(\tau\), called the modulus. Minimal or no scalar flavons are required to break the symmetry, as the lepton masses are generated from the symmetry breaking by the vev of the modulus \(\tau\). The finite modular group of level 3, \(\Gamma_{3}\), is isomorphic to \(A_{4}\). Detailed analyses of \(A_{4}\) modular groups and their application to neutrino model building are given in Refs. [16; 17; 18]. There are other works based on numerous modular groups \(A_{4}\)[19; 20; 21; 22; 23; 24; 25; 26; 27; 28], \(A_{5}\)[29; 30; 31; 32; 33; 34], \(S_{4}\)[35; 36; 37; 38; 39; 40], etc. studying neutrino phenomenology, dark matter, leptogenesis, and so on. For instance, in Ref.[25], modular \(A_{4}\) is used to study lepton and quark masses and mixing based on the inverse seesaw mechanism, Ref.[27] studies lepton mixing and leptogenesis in the linear seesaw, while a scotogenic dark matter scenario is studied in Ref.[26; 20].
The special feature of the present work is that we perform a detailed study of the effects
of an eV-scale sterile neutrino on neutrino phenomenology using modular \(A_{4}\) symmetry. We successfully reproduce neutrino oscillation data within the \(3\sigma\) bounds in normal hierarchy (NH) as well as inverted hierarchy (IH). The particle contents and model predictions are different from those of other works. The Dirac CP-violating phase and the two Majorana phases are also determined from the active-sterile mixing matrix. Finally, the best-fit values of the model parameters and neutrino observables are determined using a \(\chi^{2}\) analysis. Significant results are obtained for the active-sterile mixing parameters \(|V_{i4}|\) (where \(i=1,2,3\)) within the experimental bounds. There are other works in the literature which study 3+1 active-sterile mixing using the general \(A_{4}\) discrete symmetry group [41; 42; 43; 44; 45]. However, to the best of our knowledge, the modular \(A_{4}\) group has not been used in the MES mechanism for the 3+1 scheme. The structure of the present paper is organised as follows. We present a detailed description of the model in section II, followed by the numerical analysis of the model in section III. Results of the analysis are presented in section IV. We conclude with a brief summary and discussion in section V.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Parameter & Normal Hierarchy (best-fit\(\pm 1\sigma\)) & Inverted Hierarchy (best-fit\(\pm 1\sigma\)) \\ \hline \(|\Delta m^{2}_{21}|:[10^{-5}eV^{2}]\) & 6.82 – 8.03 (\(7.41^{+0.21}_{-0.20}\)) & 6.82 – 8.03 (\(7.41^{+0.21}_{-0.20}\)) \\ \(|\Delta m^{2}_{31}|:[10^{-3}eV^{2}]\) & 2.428 – 2.597 (\(2.511^{+0.028}_{-0.027}\)) & 2.408 – 2.581 (\(2.498^{+0.032}_{-0.025}\)) \\ \(\sin^{2}\theta_{12}\) & 0.270 – 0.341 (\(0.303^{+0.012}_{-0.011}\)) & 0.270 – 0.341 (\(0.303^{+0.012}_{-0.011}\)) \\ \(\sin^{2}\theta_{23}\) & 0.406 – 0.620 (\(0.572^{+0.018}_{-0.023}\)) & 0.412 – 0.623 (\(0.578^{+0.016}_{-0.021}\)) \\ \(\sin^{2}\theta_{13}/10^{-2}\) & 2.029 – 2.391 (\(2.203^{+0.056}_{-0.059}\)) & 2.047 – 2.396 (\(2.219^{+0.060}_{-0.057}\)) \\ \(\delta_{\rm CP}/^{o}\) & 108 – 404 (\(197^{+42}_{-25}\)) & 192 – 360 (\(286^{+27}_{-32}\)) \\ \(r=\sqrt{\frac{\Delta m^{2}_{21}}{|\Delta m^{2}_{3\ell}|}}\) & 0.1675 – 0.1759 (0.1718) & 0.1683 – 0.1765 (0.1722) \\ \(|U_{14}|^{2}\) & 0.012 – 0.047 & 0.012 – 0.047 \\ \(|U_{24}|^{2}\) & 0.005 – 0.03 & 0.005 – 0.03 \\ \(|U_{34}|^{2}\) & 0 – 0.16 & 0 – 0.16 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Updated global-fit data for three-neutrino oscillations, \(NuFit\) 2022 [46]. For 3+1 mixing, data taken from [44; 47; 48].

## II Description of the Model
In this model, we use \(A_{4}\) modular symmetry to study lepton masses and mixings. We consider three right-handed neutrinos \(N_{i}\), which form a triplet under \(A_{4}\), and a sterile neutrino field \(S\), which is a singlet. The SM lepton doublet \(L\) also transforms as a triplet of \(A_{4}\), while the Higgs doublets \(H_{u}\) and \(H_{d}\), carrying hypercharges \(+1/2\) and \(-1/2\) respectively, are taken as singlets of \(A_{4}\). They develop VEVs due to symmetry breaking along \(\left\langle H_{u}\right\rangle=\left(0,v_{u}/\sqrt{2}\right)^{T}\) and \(\left\langle H_{d}\right\rangle=\left(v_{d}/\sqrt{2},0\right)^{T}\), inducing the SM fermion mass terms. The right-handed charged leptons \(e_{r},\mu_{r}\) and \(\tau_{r}\) are assigned to the \(1,1^{\prime\prime},1^{\prime}\) representations of \(A_{4}\) respectively. As a result, there are three independent coupling constants \(\alpha^{\prime},\beta^{\prime}\) and \(\gamma^{\prime}\) in the superpotential of the charged lepton sector. We have considered one scalar field \(\zeta\), an \(A_{4}\) singlet, in order to generate the sterile neutrino mass matrix \(M_{s}\). Finally, the Yukawa coupling transforms as a modular form \(Y_{i}(\tau)\) which is a triplet under \(A_{4}\). The role of the triplet scalar field \(\phi\) with zero modular weight is to simplify our analysis by making the charged lepton mass matrix diagonal. It does not have any significance in the neutrino sector. The complete particle contents with their corresponding group charges and modular weights are given in Table 2. The modular forms \(Y_{i}(\tau)\) of weight 2, which transform as a triplet under \(A_{4}\), can be expressed in terms of the Dedekind eta-function \(\eta(\tau)\) as [16]
\[\begin{split} y_{1}=&\frac{i}{2\pi}\left[\frac{\eta^ {\prime}(\frac{\tau}{3})}{\eta(\frac{\tau}{3})}+\frac{\eta^{\prime}(\frac{\tau +1}{3})}{\eta(\frac{\tau+1}{3})}+\frac{\eta^{\prime}(\frac{\tau+2}{3})}{\eta( \frac{\tau+2}{3})}-\frac{27\eta^{\prime}(3\tau)}{\eta(3\tau)}\right]\\ y_{2}=&\frac{-i}{\pi}\left[\frac{\eta^{\prime}( \frac{\tau}{3})}{\eta(\frac{\tau}{3})}+\omega^{2}\frac{\eta^{\prime}(\frac{ \tau+1}{3})}{\eta(\frac{\tau+1}{3})}+\omega\frac{\eta^{\prime}(\frac{\tau+2}{ 3})}{\eta(\frac{\tau+2}{3})}\right]\\ y_{3}=&\frac{-i}{\pi}\left[\frac{\eta^{\prime}( \frac{\tau}{3})}{\eta(\frac{\tau}{3})}+\omega\frac{\eta^{\prime}(\frac{\tau+1 }{3})}{\eta(\frac{\tau+1}{3})}+\omega^{2}\frac{\eta^{\prime}(\frac{\tau+2}{3}) }{\eta(\frac{\tau+2}{3})}\right]\end{split} \tag{1}\]
where \(\omega=e^{2\pi i/3}\) and \(\eta(\tau)\) is defined as
\[\eta(\tau)=q^{1/24}\prod_{n=1}^{\infty}(1-q^{n}),\ \ \ \ q\equiv e^{i2\pi\tau}. \tag{2}\]
The overall coefficient in eq.(1) is one possible choice and it cannot be determined.
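For readers who wish to evaluate these modular forms explicitly, the following Python sketch (ours, purely illustrative and not part of the analysis code of this paper) computes \(\eta^{\prime}(\tau)/\eta(\tau)\) from the truncated \(q\)-product and assembles \((y_{1},y_{2},y_{3})\) according to eq.(1); as a consistency check it verifies the well-known constraint \(y_{2}^{2}+2y_{1}y_{3}=0\) obeyed by the weight-2 triplet. The sample point \(\tau=i\), the truncation order and the finite-difference step are arbitrary choices.

```python
import numpy as np

def eta(tau, nmax=200):
    """Dedekind eta function via its truncated q-product (valid for Im(tau) > 0)."""
    q = np.exp(2j * np.pi * tau)
    return q**(1 / 24) * np.prod([1 - q**n for n in range(1, nmax + 1)])

def eta_ratio(tau, h=1e-6):
    """eta'(tau)/eta(tau) by a central finite difference."""
    return (eta(tau + h) - eta(tau - h)) / (2 * h * eta(tau))

def Y_triplet(tau):
    """The weight-2 triplet (y1, y2, y3) of eq. (1)."""
    w = np.exp(2j * np.pi / 3)
    e = [eta_ratio((tau + k) / 3) for k in range(3)]
    e3 = eta_ratio(3 * tau)
    y1 = (1j / (2 * np.pi)) * (e[0] + e[1] + e[2] - 27 * e3)
    y2 = (-1j / np.pi) * (e[0] + w**2 * e[1] + w * e[2])
    y3 = (-1j / np.pi) * (e[0] + w * e[1] + w**2 * e[2])
    return y1, y2, y3

y1, y2, y3 = Y_triplet(1.0j)            # sample point tau = i
print(abs(y2**2 + 2 * y1 * y3))         # ~ 0 up to finite-difference error
```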
The modular invariant Lagrangian of the charged lepton is given below.
\[-\mathcal{L}_{c}=\ \alpha^{\prime}e_{r}^{c}(H_{d}L\phi)_{1}+\beta^{\prime}\mu_{ r}^{c}(H_{d}L\phi)_{1^{\prime}}+\gamma^{\prime}\tau_{r}^{c}(H_{d}L\phi)_{1^{ \prime\prime}}+h.c. \tag{3}\]
The charged lepton masses can be reproduced by adjusting the values of the coefficients \(\alpha^{\prime},\beta^{\prime}\) and \(\gamma^{\prime}\). Choosing the vev of \(\phi\) along \((1,0,0)\), eq.(3) gives a diagonal charged lepton mass
matrix
\[M_{L}=diag(\alpha^{\prime},\beta^{\prime},\gamma^{\prime})\ v_{\phi}v_{d} \tag{4}\]
For the neutrino sector, the invariant Lagrangian is
\[-{\cal L}_{D} = g(N_{i}^{c})^{T}H_{u}LY+h.c \tag{5}\] \[= g_{1}(N_{i}^{c})^{T}H_{u}(LY)_{3S}+g_{2}(N_{i}^{c})^{T}H_{u}(LY)_ {3A}+h.c\] \[-{\cal L}_{R} = \lambda Y(N_{i}^{c})^{T}N_{i}+h.c\] (6) \[-{\cal L}_{s} = \delta\zeta YS^{c}N_{i}+h.c \tag{7}\]
The weights \(k_{i}\) should satisfy the following conditions,
\[k_{e}+k_{H}+k_{L}+k_{\phi}=0,\ \ k_{H}+k_{L}+k_{N}+k_{Y}=0,\] \[k_{N}+k_{N}+k_{Y}=0,\ \ \ \ \ k_{\zeta}+k_{S}+k_{N}+k_{Y}=0 \tag{8}\]
For \(-k_{Y}=2\), we get
\[k_{N}=k_{L}=1,\ k_{\phi}=k_{H}=0,k_{S}=2,\ k_{\zeta}=-1,\ \mbox{and}\ k_{e}=-1 \tag{9}\]
The neutrino mass matrices along with the v.e.v \(\langle H_{u}\rangle=v_{u}\) and \(\langle\zeta\rangle=v_{\zeta}\) are
\[M_{D} = v_{u}\left(\begin{array}{ccc}2g_{1}y_{1}&y_{3}(g_{2}-g_{1})&y_ {2}(-g_{1}-g_{2})\\ y_{3}(-g_{1}-g_{2})&2g_{1}y_{2}&y_{1}(g_{2}-g_{1})\\ y_{2}(g_{2}-g_{1})&y_{1}(-g_{1}-g_{2})&2g_{1}y_{3}\end{array}\right);\] \[M_{R} = \lambda\left(\begin{array}{ccc}2y_{1}&-y_{3}&-y_{2}\\ -y_{3}&2y_{2}&-y_{1}\\ -y_{2}&-y_{1}&2y_{3}\end{array}\right);\] \[M_{s} = \delta v_{\zeta}\left(\begin{array}{ccc}y_{1}&y_{3}&y_{2}\end{array} \right).\]
\begin{table}
\begin{tabular}{|c|c c|c|c|c|c c|} \hline Fields & \(L\) & \(e_{r},\mu_{r},\tau_{r}\) & \(H_{u,d}\) & \(N_{i}\) & \(S\) & \(\phi\) & \(\zeta\) & \(Y\) \\ \hline \(SU(2)_{L}\) & 2 & 1 & 2 & 1 & 1 & 1 & 1 & 1 \\ \(U(1)_{Y}\) & \(-1/2\) & \(-1\) & \(\pm\) 1/2 & 0 & 0 & 0 & 0 & 0 \\ \(A_{4}\) & 3 & 1,1\({}^{\prime\prime}\),1\({}^{\prime}\) & 1 & 3 & 1 & 3 & 1 & 3 \\ \(k_{i}\) & \(k_{L}\) & \(k_{e}\) & \(k_{H}\) & \(k_{N}\) & \(k_{S}\) & \(k_{\phi}\) & \(k_{\zeta}\) & \(k_{Y}\) \\ \hline \end{tabular}
\end{table}
Table 2: Particle contents of the model and their group charges.
In the 3+1 MES mechanism, the active neutrino mass and the sterile neutrino mass are calculated using the following relations [15]
\[m_{\nu} \simeq M_{D}M_{R}^{-1}M_{S}^{T}\left(M_{S}M_{R}^{-1}M_{S}^{T}\right)^{-1}M_ {S}\left(M_{R}^{-1}\right)^{T}M_{D}^{T}-M_{D}M_{R}^{-1}M_{D}^{T}\ ; \tag{10}\] \[m_{s} \simeq -M_{S}M_{R}^{-1}M_{S}^{T}. \tag{11}\]
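A direct numerical implementation of these relations is straightforward; the sketch below (ours, with randomly generated real matrices purely for illustration) evaluates eqs.(10), (11) and (13) and also exhibits a characteristic feature of the MES mechanism: \(\det m_{\nu}=0\), i.e. one active neutrino remains massless, which is why \(m_{1}\approx 0\) (NH) or \(m_{3}\approx 0\) (IH) below. For complex couplings one would diagonalise \(m_{\nu}m_{\nu}^{\dagger}\) instead.

```python
import numpy as np

def mes(MD, MR, MS):
    """Minimal extended seesaw blocks of eqs. (10), (11) and (13).
    MD: 3x3 Dirac block, MR: 3x3 (symmetric) Majorana block, MS: 1x3 row."""
    MRi = np.linalg.inv(MR)
    X = MS @ MRi @ MS.T                          # the 1x1 block M_S M_R^-1 M_S^T
    m_nu = (MD @ MRi @ MS.T) @ np.linalg.inv(X) @ (MS @ MRi.T @ MD.T) \
           - MD @ MRi @ MD.T                     # light active 3x3 matrix, eq. (10)
    m_s = -X[0, 0]                               # sterile mass, eq. (11)
    R = MD @ MRi @ MS.T @ np.linalg.inv(X)       # active-sterile strength, eq. (13)
    return m_nu, m_s, R

rng = np.random.default_rng(7)
MD = rng.normal(size=(3, 3))
MR = rng.normal(size=(3, 3)); MR = MR + MR.T     # Majorana block is symmetric
MS = rng.normal(size=(1, 3))

m_nu, m_s, R = mes(MD, MR, MS)
print(np.linalg.svd(m_nu, compute_uv=False))     # smallest singular value ~ 0:
                                                 # MES leaves one active neutrino massless
```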
The \((4\times 4)\) active-sterile mass matrix is diagonalised by a unitary \((4\times 4)\) mixing matrix given by [49]
\[V\simeq\begin{pmatrix}(1-\frac{1}{2}RR^{\dagger})U&R\\ -R^{\dagger}U&1-\frac{1}{2}R^{\dagger}R\end{pmatrix}, \tag{12}\]
where \(R\) represents the strength of active-sterile mixing given by
\[R= M_{D}M_{R}^{-1}M_{S}^{T}(M_{S}M_{R}^{-1}M_{S}^{T})^{-1} \tag{13}\]
Deviation of the active neutrino mixing matrix \(U\) from unitarity due to the presence of sterile neutrino is determined by \(\frac{1}{2}RR^{\dagger}\). The \(4\times 4\) neutrino mixing matrix can be parameterized by six mixing angles \((\theta_{12},\theta_{13},\theta_{23},\theta_{14},\theta_{24},\theta_{34})\), three Dirac phases \((\delta_{13},\delta_{14},\delta_{24})\) and three Majorana phases \((\alpha,\beta,\gamma)\)[50] as,
\[V^{4\times 4}=\begin{pmatrix}c_{12}c_{13}c_{14}&c_{13}c_{14}s_{12}e^{i \frac{\alpha}{2}}&c_{14}s_{13}e^{i\frac{\beta}{2}}&s_{14}e^{-i\frac{\gamma}{2} }\\ U_{\mu 1}&U_{\mu 2}&U_{\mu 3}&c_{14}s_{24}e^{-i\left(\frac{\gamma}{2}-\delta_{14}+ \delta_{24}\right)}\\ U_{\tau 1}&U_{\tau 2}&U_{\tau 3}&c_{14}c_{24}s_{34}e^{-i\left(\frac{\gamma}{2}- \delta_{14}\right)}\\ U_{s1}&U_{s2}&U_{s3}&c_{14}c_{24}c_{34}e^{-i\left(\frac{\gamma}{2}-\delta_{14 }\right)}\end{pmatrix}. \tag{14}\]
The six neutrino mixing angles can be determined from the mixing elements of \(V\) using the following relations [51],
\[\sin^{2}\theta_{14} = |V_{e4}|^{2},\ \ \sin^{2}\theta_{24}\ =\ \frac{|V_{\mu 4}|^{2}}{1-|V_{e4}|^{2}},\ \ \sin^{2}\theta_{34}\ =\ \frac{|V_{\tau 4}|^{2}}{1-|V_{e4}|^{2}-|V_{\mu 4 }|^{2}}\] \[\sin^{2}\theta_{12} = \frac{|V_{e2}|^{2}}{1-|V_{e4}|^{2}-|V_{e3}|^{2}},\ \ \sin^{2}\theta_{13}\ =\ \frac{|V_{e3}|^{2}}{1-|V_{e4}|^{2}}, \tag{15}\]
\[\sin^{2}\theta_{23}\ =\frac{|V_{\mu 3}|^{2}(1-|V_{e4}|^{2})-|V_{e4}|^{2}|V_{\mu 4 }|^{2}}{1-|V_{e4}|^{2}-|V_{\mu 4}|^{2}}+\frac{|V_{e1}V_{\mu 1}^{*}+V_{e2}V_{\mu 2}^{*}|^{2}(1-|V_{e4}| ^{2})}{(1-|V_{e4}|^{2}-|V_{e3}|^{2})(1-|V_{e4}|^{2}-|V_{\mu 4}|^{2})}.\]
where \(V_{ij}\) are the elements of the mixing matrix in eq.(12). We also determine the Jarlskog invariant \(J\) for the active-sterile mixing. According to the parameterisation in eq.(14), the Jarlskog invariant \(J_{3+1}=Im[V_{e1}V_{\mu 2}V_{e2}^{*}V_{\mu 1}^{*}]\) takes the form [52]
\[J_{3+1}=J_{3}^{cp}c_{14}^{2}c_{24}^{2}+s_{24}s_{14}c_{24}c_{23}c_{14}^{2}c_{13} ^{3}c_{12}s_{12}\sin(\delta_{14}-\delta_{24}), \tag{16}\]
where \(J_{3}^{cp}=s_{23}c_{23}s_{12}c_{12}s_{13}c_{13}^{2}\sin\delta_{13}\) is the Jarlskog invariant for the three neutrino framework and \(s_{ij}=\sin\theta_{ij},c_{ij}=\cos\theta_{ij}\) are the mixing angles. The two physical Majorana phases \(\alpha\) and \(\beta\) are determined from \(V^{4\times 4}\) using the invariants \(I_{1}\) and \(I_{2}\) defined as follows
\[I_{1}=Im[V_{e1}^{*}V_{e2}]\ =\ c_{12}c_{13}^{2}c_{14}^{2}s_{12}\sin \left(\frac{\alpha}{2}\right), \tag{17}\] \[I_{2}=Im[V_{e1}^{*}V_{e3}]\ =\ c_{12}c_{13}c_{14}^{2}s_{13}\sin \left(\frac{\beta}{2}-\delta_{13}\right). \tag{18}\]
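The extraction of angles and invariants from a given \(4\times 4\) mixing matrix can be packaged as follows (our illustrative sketch; it assumes the row ordering \((e,\mu,\tau,s)\) and implements the \(\sin^{2}\theta_{23}\) expression of eq.(15) as written above, so it inherits that expression's conventions):

```python
import numpy as np

def angles_and_invariants(V):
    """Mixing angles of eq. (15) and the invariants I1, I2 of eqs. (17)-(18)
    from a 4x4 mixing matrix V with rows ordered as (e, mu, tau, s)."""
    a = np.abs(V) ** 2
    s14 = a[0, 3]
    s24 = a[1, 3] / (1 - a[0, 3])
    s34 = a[2, 3] / (1 - a[0, 3] - a[1, 3])
    s13 = a[0, 2] / (1 - a[0, 3])
    s12 = a[0, 1] / (1 - a[0, 3] - a[0, 2])
    s23 = ((a[1, 2] * (1 - a[0, 3]) - a[0, 3] * a[1, 3])
           / (1 - a[0, 3] - a[1, 3])
           + abs(V[0, 0] * np.conj(V[1, 0]) + V[0, 1] * np.conj(V[1, 1])) ** 2
           * (1 - a[0, 3])
           / ((1 - a[0, 3] - a[0, 2]) * (1 - a[0, 3] - a[1, 3])))
    I1 = np.imag(np.conj(V[0, 0]) * V[0, 1])   # eq. (17)
    I2 = np.imag(np.conj(V[0, 0]) * V[0, 2])   # eq. (18)
    return {"s12": s12, "s13": s13, "s23": s23,
            "s14": s14, "s24": s24, "s34": s34, "I1": I1, "I2": I2}

# quick self-test on a random unitary matrix
rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(angles_and_invariants(V))
```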
Other important parameters in neutrino physics are the effective Majorana mass \(m_{\beta\beta}\) and the effective electron neutrino mass \(m_{\beta}\). A combined analysis from KamLAND-Zen [53] and GERDA provides an upper bound on \(m_{\beta\beta}\) in the range \(m_{\beta\beta}<(0.071-0.161)\) eV [54; 55]. A recent result from the latest KATRIN [56] experiment constrains the effective electron neutrino mass \(m_{\beta}\) to be less than 1.1 eV. These parameters are determined from neutrinoless double beta decay and beta decay respectively, using the relations [57]
\[m_{\beta\beta}=|\sum_{j=1}^{4}|V_{ej}|^{2}m_{j}|, \tag{19}\]
\[m_{\beta}=\left(\sum_{i=1}^{4}|V_{ei}|^{2}m_{i}^{2}\right)^{1/2}. \tag{20}\]
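Both observables follow immediately from the first row of \(V\) and the mass eigenvalues. A minimal sketch (ours; the spectrum and mixing values are purely illustrative, with a nearly massless lightest state, an eV-scale sterile state and \(|V_{e4}|^{2}=0.01\)):

```python
import numpy as np

def effective_masses(V, m):
    """m_bb of eq. (19) and m_beta of eq. (20); V is the 4x4 mixing matrix,
    m = (m1, m2, m3, m4) the mass eigenvalues in eV."""
    m = np.asarray(m)
    w = np.abs(V[0, :]) ** 2        # |V_ei|^2 for i = 1..4, as in eqs. (19)-(20)
    return abs(np.sum(w * m)), np.sqrt(np.sum(w * m ** 2))

# illustrative NH-like spectrum with a 1 eV sterile state and |V_e4|^2 = 0.01
V = np.eye(4)
V[0, 0], V[0, 3] = np.sqrt(0.99), 0.1
print(effective_masses(V, (0.0, 0.0086, 0.05, 1.0)))
```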
## III Numerical analysis
In this section, we carry out a detailed numerical analysis to determine the allowed regions of the free parameters of the model which satisfy the current neutrino oscillation data. The lepton mass matrices in eq.(3) and eq.(10) depend on the effective parameters \(\alpha^{\prime},\beta^{\prime},\gamma^{\prime},g_{1},g_{2}\) and \(\tau.\) The parameters \(\alpha^{\prime},\beta^{\prime},\gamma^{\prime}\) in the charged lepton sector can be taken as real, and they are determined by the ratios of the charged lepton masses \(m_{e}/m_{\tau}\) and \(m_{\mu}/m_{\tau}\). On fixing \(\tau\), the modular symmetry is broken and the neutrino mass eigenvalues, mixing angles, and Dirac as well as Majorana phases are completely determined. We define the ratio of neutrino mass squared differences \(r=\sqrt{\Delta m_{21}^{2}/\Delta m_{31}^{2}}=m_{2}/m_{3}\) for NH (\(m_{1}\approx 0<<m_{2}<m_{3}<m_{4}\)) and \(r=\sqrt{\Delta m_{21}^{2}/|\Delta m_{32}^{2}|}=\sqrt{1-\frac{m_{1}^{2}}{m_{2} ^{2}}}\) for IH (\(m_{3}\approx 0<<m_{1}<m_{2}<m_{4}\)). The absolute scale of the active neutrino masses can be fixed by adjusting the overall factor \(v_{u}^{2}g_{1}/\lambda.\) The neutrino mixing angles and \(r\) depend on only two free parameters \(g_{2}/g_{1}\) and
the modulus \(\tau\). Since these parameters are complex, we consider
\[\tau=Re[\tau]+i\ Im[\tau],\ \ \frac{g_{2}}{g_{1}}=ge^{i\phi_{g}} \tag{21}\]
The fundamental domain of \(\tau\) is given in Ref.[16]. We randomly scan the parameters in the range \(\text{Re}[\tau]=[-1.5,1.5]\) for both orderings. Further, we take \(g=[1,2]\) and \(\text{Im}[\tau]=[1,1.5]\) for NH, while \(g=[1.5,2]\), \(\text{Im}[\tau]=[0.6,1]\) for IH. We also consider \(\phi_{g}=[-\pi,\pi]\). The active neutrino mass matrix is numerically diagonalised using the relation \(m_{\nu}=Um_{\nu}^{d}U^{\dagger}\), where \(m_{\nu}^{d}=diag(m_{1},m_{2},m_{3})\), and \(m_{1},m_{2},m_{3}\) are the neutrino mass eigenvalues. The mixing angles can be calculated using the general formulas given in eq.(15). We filter the allowed values of the model parameters using the \(3\sigma\) bounds of the three mixing angles and the ratio \(r\) given in Table 1. For the sterile sector, the coefficient \(k\equiv\delta v_{\zeta}\) is solved by constraining the sterile neutrino mass \(m_{s}\) in the range \([0.8,10]\) eV.

Figure 1: Variation plot for NH showing the dependence of Yukawa couplings \((y_{1},y_{2},y_{3})\) among themselves in (a). Relation of \(y_{3}\) with \(\text{Re}[\tau]\) is shown in (b) while (c) shows the variation of \(y_{1},y_{2}\) with \(\text{Re}[\tau]\). The dependence of \(\text{Im}[\tau]\) with \((y_{1},y_{2},y_{3})\) is shown in (d).

Finally, the best-fit values of the neutrino observables and the corresponding best-fit values of the model parameters \(\tau\) and \(g\) are evaluated using a \(\chi^{2}\) analysis. We use the \(\chi^{2}\) function given by
\[\chi^{2}(x_{i})=\sum_{j}\left(\frac{y_{j}(x_{i})-y_{j}^{bf}}{\sigma_{j}}\right)^ {2} \tag{22}\]
where \(x_{i}\) are the free parameters in the model and \(j\) is summed over the observables \(\{\sin^{2}\theta_{12},\sin^{2}\theta_{13},\sin^{2}\theta_{23},r\}\). Here, \(y_{j}(x_{i})\) denotes the model predictions for the observables and \(y_{j}^{bf}\) are their best-fit values obtained from the global analysis. \(\sigma_{j}\) denotes the corresponding uncertainties obtained by symmetrizing \(1\sigma\) range of the neutrino observables given in Table 1. By minimizing the overall \(\chi^{2}\) function, we can calculate the best-fit values of our model parameters and predict the values of neutrino observables.
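For concreteness, the function of eq.(22) can be coded in a few lines (our sketch; the symmetrised \(1\sigma\) errors below are indicative values read off Table 1, and the trial point is arbitrary):

```python
import numpy as np

def chi2(pred, best, sigma):
    """Gaussian chi-square of eq. (22) over {sin^2 th12, sin^2 th13, sin^2 th23, r}."""
    pred, best, sigma = map(np.asarray, (pred, best, sigma))
    return float(np.sum(((pred - best) / sigma) ** 2))

# NH best-fit values and indicative symmetrised 1-sigma errors from Table 1
best  = [0.303,  0.02203, 0.572,  0.1718]
sigma = [0.0115, 0.00058, 0.0205, 0.0014]
print(chi2([0.313, 0.022, 0.572, 0.172], best, sigma))
```

In practice one would evaluate this function over the random scan described above and keep the parameter point minimising it.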
## IV Results of the analysis
The ranges of the Yukawa couplings which satisfy all the neutrino observables are shown in Fig.(1)-(2) as variation plots with \(\text{Re}[\tau]\) and \(\text{Im}[\tau]\), respectively, for both NH and IH. The values of \(|y_{1}|\) and \(|y_{2}|\) lie very close to each other in IH, at around 0.98, while all the couplings are well separated in the case of NH. It is observed that the couplings lie in the ranges \(0.984\leq|y_{1}(\tau)|\leq 0.989\), \(0.593\leq|y_{2}(\tau)|\leq 0.662\), \(0.178\leq|y_{3}(\tau)|\leq 0.222\) for NH. The allowed values of \(\text{Im}(\tau)\) vary continuously in the range \((1.048-1.102)\), while the values of \(\text{Re}(\tau)\) are concentrated at specific regions around \(\pm 0.5\) and \(\pm 1.5\) in the fundamental domain of \(\tau\), as evident from Fig.(1): (b),(c). Predicted values of the neutrino mixing angle \(\sin^{2}\theta_{13}\) are
Figure 2: (a) and (b) show the dependence of Yukawa couplings \((y_{1},y_{2},y_{3})\) with \(\text{Im}[\tau]\) and \(\text{Re}[\tau]\) respectively for IH.
plotted against \(\sin^{2}\theta_{12}\) and \(\sin^{2}\theta_{23}\) for both orderings in Fig.(3). From these plots, the values of \(\sin^{2}\theta_{23}\) are concentrated in the region above \(0.55\), implying that \(\theta_{23}\) lies in the higher octant. Plots of the individual neutrino masses \(m_{2},m_{3}\) for NH (\(m_{1}=0\)) and \(m_{1},m_{2}\) for IH (\(m_{3}=0\)) against the sum of active neutrino masses \(\sum m_{i}\) are shown in Fig.(4). Choosing the factor \(v_{u}^{2}g_{1}/\lambda\sim{\cal O}(10^{-2})\), we observe that the sum of active neutrino masses \(\sum m_{i}\) satisfies the Planck upper bound \(\sum m_{i}<0.12\)eV. For NH, the sum of active neutrino masses is observed in a very narrow range \(\sum m_{i}\sim(0.0145-0.045)\)eV, while it is \(\sum m_{i}\sim(0.083-0.121)\)eV for IH. The individual neutrino masses are found in the ranges \(m_{2}\sim(0.00214-0.00658)\)eV, \(m_{3}\sim(0.0124-0.0382)\)eV for NH and \(m_{1}\sim(0.041-0.06)\)eV, \(m_{2}\sim(0.042-0.061)\)eV for IH. For the mixing between active and sterile neutrinos, we have plotted the active-sterile mixing elements \(|V_{14}|,|V_{24}|\) and \(|V_{34}|\) in Fig.(7) as a function of the sterile neutrino mass \(m_{s}\). It is evident from these plots that the active-sterile mixing elements satisfy the observed bounds given in Table 2. The variation of the active-sterile mixing angles \(\sin^{2}\theta_{i4}\) (\(i=1,2,3\)) with the parameter \(k\) is shown in Fig.(6) for both orderings. Fig.(5) shows the relation of the mixing angle \(\sin^{2}\theta_{14}\) with \(\sin^{2}\theta_{24}\) and \(\sin^{2}\theta_{34}\) for NH. The non-unitarity effect due to the presence of active-sterile mixing is determined by \(\frac{1}{2}RR^{\dagger}\); in our analysis it is observed to be less than \({\cal O}(10^{-3}).\) It is also important to note from the plots shown above that the number of data points in the case of NH is much larger than for IH. Thus, we can infer that our model favours NH.
Proceeding further, we have calculated the model prediction for the Jarlskog invariant in the 3+1 scenario. Fig.(8): (a) shows the variation of the mixing angles with \(J_{3+1}\) for NH. The prediction for the Dirac CP-violating phase \(\delta_{13}\) is shown in Fig.(9): (a). Majorana phases \(\alpha\) and
\(\beta\) are also evaluated using eq.(17) - (18), and the predictions from our model are shown in Fig.(9): (b). The unknown phases of the active-sterile sector, \(\delta_{14}\) and \(\delta_{24}\), are randomly chosen in the range \([-\pi,\pi]\). For the case of IH, the variation of the active neutrino mixing angles with the Jarlskog invariant \(J_{3+1}\) is shown in Fig.(8)(b). The value of \(J_{3+1}\) is observed in the vicinity of \(\pm 0.06\), which lies outside the bound \(J_{max}^{CP}=0.0332\pm 0.0008\) at \(1\sigma\) for both orderings [46].
We also carry out phenomenological studies related to neutrinoless double beta decay experiments. The effective mass parameter \(m_{\beta\beta}\) and the effective electron neutrino mass \(m_{\beta}\) are determined using eq.(19)-(20). Fig.(10) shows the variation of \(m_{\beta\beta}\) and \(m_{\beta}\) as a function of \(\sum m_{i}\) for NH. It is observed that these effective mass parameters lie in the ranges 0.03 eV\(\leq m_{\beta}\leq\) 0.18 eV and 0.001 eV \(\leq m_{\beta\beta}\leq\) 0.006 eV. In the case of IH, shown in Fig.(11), the effective parameters are in the ranges 0.179 eV\(\leq m_{\beta}\leq\) 0.413 eV and 0.005 eV \(\leq m_{\beta\beta}\leq\) 0.017 eV. The values of \(m_{\beta\beta}\) and \(m_{\beta}\) are comparatively larger in the case of IH.
Finally, the best-fit values of the model parameters and predictions of neutrino observables are evaluated using \(\chi^{2}\) analysis as shown in eq.(22). For NH at \(\chi^{2}_{min}=1.20\), the best-fit of the parameter \(\tau\) is obtained at \(\text{Re}[\tau]=-0.45\), \(\text{Im}[\tau]=1.07\) and \(g=1.49\). The corresponding values of Yukawa couplings are found to be \(|y_{1}|=0.987,|y_{2}|=0.644,|y_{3}|=0.197\). The best-fit values of neutrino observables are obtained at \(\sin^{2}\theta_{23}=0.572\), \(\sin^{2}\theta_{12}=0.313\), \(\sin^{2}\theta_{13}=0.022\) and \(r=0.172\). Similarly, for IH at \(\chi^{2}_{min}=3.54\), the values of the model parameters are found as \(\text{Re}[\tau]=0.35\), \(\text{Im}[\tau]=0.87\) and \(g=1.89\), while the Yukawa
Figure 4: Variation between neutrino masses \(m_{2}\) and \(m_{3}\) (\(m_{1}=0\)) with sum of active neutrino masses \(\sum m_{i}\) for NH is shown in (a). For IH, (b) shows plot between \(m_{1}\) and \(m_{2}\) (\(m_{3}=0\)) with \(\sum m_{i}\). The vertical dotted line is the Planck cosmological upper bound \(\sum m_{i}<0.12\)eV.
Figure 5: Plot between active-sterile mixing angle \(\sin^{2}\theta_{14}\) with \(\sin^{2}\theta_{24}\) and \(\sin^{2}\theta_{34}\) for NH.
Figure 6: Variation between active-sterile mixing angle \(\sin^{2}\theta_{i4}\) where \(i=1,2,3\) with sterile neutrino mass coefficient \(k=\delta v_{\zeta}\) for NH in (a) and IH in (b).
Figure 7: Dependence of active-sterile mixing elements \(V_{i4}\) where \(i=1,2,3\) on sterile neutrino mass \(m_{s}\) for NH in (a) and IH in (b).
couplings are found to be \(|y_{1}|=0.969,|y_{2}|=0.951,|y_{3}|=0.466\). The corresponding best-fit values of the neutrino observables are \(\sin^{2}\theta_{23}=0.602\), \(\sin^{2}\theta_{12}=0.288\), \(\sin^{2}\theta_{13}=0.022\) and \(r=0.172\).
For the charged lepton sector, by comparing eq.(3) with the experimental values for masses of the charged leptons given in Ref.[58], \(m_{e}=0.51099\) MeV, \(m_{\mu}=105.65837\) MeV and \(m_{\tau}=1776.86\) MeV, the parameters \(\alpha^{\prime},\beta^{\prime},\gamma^{\prime}\) are determined to be
\[\frac{\alpha^{\prime}}{\gamma^{\prime}}=0.287\times 10^{-2},\ \ \ \ \ \frac{\beta^{\prime}}{\gamma^{\prime}}=0.059. \tag{23}\]
Figure 9: (a) Variation of CP-violating Dirac phase \(\delta_{13}\) with \(\sin^{2}\theta_{23}\) for NH. (b) Plot between the two Majorana phases \(\alpha\) and \(\beta\) for NH.
## V Summary and discussion
We have successfully constructed a new neutrino mass model based on modular \(A_{4}\) symmetry by extending the SM with an \(A_{4}\) triplet right-handed neutrino \(N_{i}\) and a singlet sterile neutrino \(S\) in the 3+1 scheme. The MES mechanism is used to generate the active and eV-scale sterile neutrino masses in both NH and IH. The main motivation is to avoid hypothetical scalar flavons by using modular symmetry and, at the same time, to reproduce all the neutrino observables through the vev of a single parameter \(\tau.\) However, to simplify our analysis, we have used a triplet scalar \(\phi\) whose vev makes the charged lepton mass matrix diagonal. The two free model parameters \(\tau\) and \(g\) are scanned randomly in a particular domain and the active neutrino mass matrix is numerically diagonalised. We have conducted the numerical analysis using the \(3\sigma\) bounds of the neutrino observables in such a way that all the neutrino observables
evaluated from the model simultaneously satisfy these bounds. Our analysis of the neutrino masses is also consistent with the cosmological upper bound on the sum of neutrino masses, \(\sum m_{i}<0.12\) eV. Effects of the eV-scale sterile neutrino on the neutrino mixing angles, the effective mass parameters \(m_{\beta},m_{\beta\beta}\) and the unitarity of the active neutrino mixing matrix are studied in detail. Neutrino mixing angles are evaluated from the \((4\times 4)\) active-sterile mixing matrix \(V\) without neglecting the non-unitarity effects of the sterile neutrino. The active-sterile mixing angles \(\sin^{2}\theta_{14}\), \(\sin^{2}\theta_{24}\) and \(\sin^{2}\theta_{34}\) are observed to be \(0.000144\leq\sin^{2}\theta_{14}\leq 0.00236\), \(0.000133\leq\sin^{2}\theta_{24}\leq 0.00394\) and \(0.00023\leq\sin^{2}\theta_{34}\leq 0.00796\) for NH. In the case of IH, these mixing angles are observed in the ranges \(0.001032\leq\sin^{2}\theta_{14}\leq 0.002937\), \(0.00172\leq\sin^{2}\theta_{24}\leq 0.00359\) and \(0.00248\leq\sin^{2}\theta_{34}\leq 0.00606\). The Jarlskog invariant in the 3+1 sector is also determined. The CP-violating Dirac phase is successfully predicted for NH in the ranges \(\delta_{13}\sim(180.12^{\circ}-230.70^{\circ})\) and \(\delta_{13}\sim(305.15^{\circ}-358.83^{\circ})\). Finally, we have used a minimum-\(\chi^{2}\) analysis to obtain the best-fit values of the model parameters as well as of the neutrino oscillation observables. From our analysis, we can conclude that NH is more favoured in our model than IH. The MES mechanism has the advantage of generating a keV-MeV scale sterile neutrino along with active neutrino masses at the eV scale. The possibility of a keV-MeV sterile neutrino as a dark matter candidate will be addressed in future work. To conclude, we emphasize that modular \(A_{4}\) symmetry is very successful in reproducing neutrino phenomenology and addressing other beyond-Standard-Model problems without the need for the extra flavons required in conventional discrete symmetry models.
## Acknowledgements
One of the authors (MKS) would like to thank DST-INSPIRE, Govt. of India, for providing the research fellowship under the INSPIRE scheme (ID IF180349).
|
2301.06939 | Critical avalanches of Susceptible-Infected-Susceptible dynamics in
finite networks | We investigate the avalanche temporal statistics of the
Susceptible-Infected-Susceptible (SIS) model when the dynamics is critical and
takes place on finite random networks. By considering numerical simulations on
annealed topologies we show that the survival probability always exhibits three
distinct dynamical regimes. Size-dependent crossover timescales separating them
scale differently for homogeneous and for heterogeneous networks. The
phenomenology can be qualitatively understood based on known features of the
SIS dynamics on networks. A fully quantitative approach based on Langevin
theory is shown to perfectly reproduce the results for homogeneous networks,
while failing in the heterogeneous case. The analysis is extended to quenched
random networks, which behave in agreement with the annealed case for strongly
homogeneous and strongly heterogeneous networks. | Daniele Notarmuzi, Alessandro Flammini, Claudio Castellano, Filippo Radicchi | 2023-01-12T22:32:47Z | http://arxiv.org/abs/2301.06939v1 | # Critical avalanches of Susceptible-Infected-Susceptible dynamics in finite networks
###### Abstract
We investigate the avalanche temporal statistics of the Susceptible-Infected-Susceptible (SIS) model when the dynamics is critical and takes place on finite random networks. By considering numerical simulations on annealed topologies we show that the survival probability always exhibits three distinct dynamical regimes. Size-dependent crossover timescales separating them scale differently for homogeneous and for heterogeneous networks. The phenomenology can be qualitatively understood based on known features of the SIS dynamics on networks. A fully quantitative approach based on Langevin theory is shown to perfectly reproduce the results for homogeneous networks, while failing in the heterogeneous case. The analysis is extended to quenched random networks, which behave in agreement with the annealed case for strongly homogeneous and strongly heterogeneous networks.
## I Introduction
Systems undergoing continuous absorbing-state phase-transitions are, exactly at the critical point, infinitely susceptible to local perturbations. This implies that, introducing a single active seed in the absorbing state, the subsequent evolution leads to an avalanche of activation events that may span the whole range of temporal and spatial scales. Examples of this avalanche dynamics abound, both in the realm of physics and in other biological, social and technical domains [1; 2; 3; 4; 5; 6; 7].
The multiscale nature of these phenomena is quantitatively characterized by the size and duration distributions of avalanches. If the control parameter is subcritical (i.e., in the absorbing phase), the size and duration of avalanches are exponentially distributed, extending up to well-defined and finite temporal and spatial scales. In the supercritical domain, instead, a finite fraction of them spans the whole system, leading (in infinite systems) to a stationary active state. In the critical case all avalanches end in a finite time, but the distributions have power-law tails, so that events of any size and duration are possible. Denoting with \(z\) the size of an avalanche and with \(t\) its duration, we can in general write for the probability distributions of size \(z\) and duration \(t\)[8]
\[P(z)\sim z^{-\tau}\mathcal{F}(z/z_{\times})\;\;\text{and}\;\;P(t)\sim t^{- \alpha}\mathcal{G}(t/t_{\times}), \tag{1}\]
where \(\tau\) and \(\alpha\) are universal critical exponents, \(\mathcal{F}\) and \(\mathcal{G}\) are scaling functions and the cutoff scales \(z_{\times}\) and \(t_{\times}\) depend, at criticality, only on the system size. Averaging the avalanche size at fixed duration \(t\), it is possible to define another exponent \(\theta\) through \(\langle z\rangle\sim t^{\theta}\), which is related to the others as [9]
\[\theta=\frac{\alpha-1}{\tau-1}. \tag{2}\]
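This relation follows in one line from conservation of probability between durations and sizes: substituting \(z\sim t^{\theta}\) into \(P(t)\,dt=P(z)\,dz\) gives

\[P(t)\sim\left(t^{\theta}\right)^{-\tau}t^{\theta-1}=t^{-\left[\theta(\tau-1)+1\right]}\,,\]

i.e., \(\alpha=\theta(\tau-1)+1\), which is Eq. (2) solved for \(\alpha\).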
The branching process (BP) model [10] provides the natural framework for describing (at least approximately) a large class of systems undergoing continuum absorbing phase transitions. In the simplest case, when the distribution of offsprings (i.e., the probability for an active element to activate a given number of new elements) has finite variance, BP theory predicts
\[\tau=3/2,\quad\alpha=2,\quad\quad\text{and}\;\;\theta=2. \tag{3}\]
This kind of mean-field behavior is expected to occur when the system dynamics takes place on a homogeneous network, where the second moment of the degree distribution is finite. The phenomenology changes when the distribution of offsprings in the BP has infinite variance, as it happens for processes taking place on heterogeneous networks with degree distribution \(P(k)\sim k^{-\gamma}\) and \(2<\gamma\leq 3\). In such a case BP theory predicts anomalous, \(\gamma\)-dependent, exponents [11; 12; 13; 14]
\[\tau=\frac{\gamma}{\gamma-1},\quad\alpha=\frac{\gamma-1}{\gamma-2},\quad\text {and}\;\;\theta=\frac{\gamma-1}{\gamma-2}. \tag{4}\]
While standard BP behavior is observed for many systems on homogeneous networks [15; 16; 17; 18], recent work has cast doubt on the applicability of this theoretical framework to many avalanche phenomena in heterogeneous networks [19]. In all cases considered, apart from a possible preasymptotic regime valid for short durations and small sizes, the distributions decay with exponents in agreement with standard BP values, with no dependence on \(\gamma\). Possible causes of this violation of the BP predictions have been put forward in Ref. [19]. One possible reason for such a violation is the existence of loops in networks, possibly allowing nodes that are already active to be reached again by the infection, which is an explicit violation of the mapping between spreading models and the BP. Further, finite networks always have a finite second moment of the degree distribution, which prevents the anomalous exponents (4) from being asymptotically observed even in a pure BP. It was also noted in Ref. [19] that the spreading mechanism of certain models, such as the contact process (CP), does not really involve all the neighbors of a node, and hence avalanches are not impacted by unbounded fluctuations of the network degree distribution. These hypotheses, however, call for more detailed investigations of the origin of the breakdown of anomalous BP exponents in specific systems and for the formulation of alternative theoretical approaches such as the one presented in Ref. [20]. Also, the way avalanches depend on the system size in finite systems, which is an issue of crucial importance for the analysis of real systems, is still largely unexplored.
In this work we consider Susceptible-Infected-Susceptible (SIS) dynamics [21], one of the most fundamental (and simple) models deemed to be described by BP, and perform such an investigation in complex networks of variable size. Apart from the intrinsic importance of SIS dynamics, an additional motivation for our study is that it has been realized that the interplay between heterogeneous topology and SIS dynamics gives rise to highly nontrivial phenomena, such as the vanishing of the threshold in the large-network limit [22; 23; 24], the interplay between distinct subextensive subgraphs [25] and long-range percolation effects [26]. Whether and how these nontrivial properties of the stationary state are reflected also in the avalanche statistics and its dependence on the system size are other aspects deserving to be analyzed.
Our investigation is inspired by the work of Ref. [27] that deals with critical properties of the CP, a model akin to SIS, but exhibiting simpler critical dynamics characterized by a finite threshold for any value of \(\gamma\). For CP, avalanche exponents are nonanomalous also for \(2<\gamma\leq 3\), but this is easily reconciled with the BP phenomenology, as the effective distribution of the number of offsprings does not depend on the network substrate and is always homogeneous. Yet, Ref. [27] presents a detailed analytical investigation of CP based on a Langevin approach, which constitutes the natural basis also for the analytical approach to SIS avalanches and in particular for their temporal properties.
In this paper we perform numerical simulations of the critical SIS model on annealed and quenched networks, both homogeneous and heterogeneous, generated according to the configuration model (CM) [28]. We further develop a theoretical approach based on Langevin equations to study the probability that avalanches have duration at least \(t\). The theoretical approach explicitly makes use of the annealed network approximation, but we show that it also provides a partial understanding of the avalanche behavior on quenched networks. The rest of the paper is organized as follows. In Section II, the dynamics of the SIS model and the statistical features of the networks we consider are briefly introduced. In Section III, we summarize the phenomenology we observe on different network structures, and offer a clear physical interpretation of the avalanche behavior. In Section IV, numerical results for annealed networks are discussed. In Section V, the theoretical approach is presented and its predictions are compared with the results of the previous section. In Section VI, results for quenched networks are shown and compared to the theoretical predictions of the previous section. We discuss our results in Section VII.
## II Model Definitions
### Spreading dynamics
We consider the continuous-time Susceptible-Infected-Susceptible (SIS) dynamics on networks. Each node can be either infected (I) or susceptible (S). Infection events, i.e., \(I+S\to I+I\), occur according to a Poisson process with rate \(\lambda\geq 0\). Recovery events, i.e., \(I\to S\), obey a spontaneous Poisson process with rate \(\mu\). We set \(\mu=1\), with no loss of generality. The configuration where all nodes are in the S state is an absorbing configuration: once such a configuration is reached the system will remain in it forever, as no new infections can arise. In a network of infinite size, if the dynamics is started from a configuration different from the absorbing one, a critical value \(\lambda_{c}\) of the control parameter \(\lambda\) separates a phase where the system unavoidably reaches the absorbing configuration (i.e., \(\lambda\leq\lambda_{c}\)) from a phase where a stationary state with a finite fraction of infected nodes exists (\(\lambda>\lambda_{c}\)).
On a network with finite size \(N\), the SIS model does not undergo a true phase transition. As long as the rate of infection \(\lambda\) is finite, the system necessarily ends up in the absorbing configuration in a finite time. Nonetheless, it is still possible to identify a pseudo-critical point \(\lambda_{c}(N)\), distinguishing a phase where the system reaches the absorbing configuration in a time that, on average, grows at most logarithmically with the system size (i.e., \(\lambda<\lambda_{c}(N)\)), and a phase where the absorbing configuration is reached in an average time that grows exponentially with the system size (i.e., \(\lambda>\lambda_{c}(N)\)).
In this paper, we are interested in characterizing SIS spreading on networks of finite size \(N\) in their pseudo-critical regime, i.e., \(\lambda=\lambda_{c}(N)\). We perform a large-scale numerical analysis and develop a theoretical approach valid for a specific initial setup where \(i_{0}\ll N\) infected nodes are surrounded by a completely susceptible system. Our main goal is understanding in detail the behavior of the survival probability \(S(t)\), i.e., the probability that the system has not yet reached the absorbing configuration by time \(t\), connected to the duration distribution as \(P(t)=-dS(t)/dt\). To avoid verbosity, we refer to the critical regime even if a network has finite size. Also, we make use of the compact notation \(\lambda_{c}\) instead of \(\lambda_{c}(N)\) to indicate the pseudo-critical point of a network with finite size \(N\).
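To make the protocol concrete, below is a minimal Gillespie-style sketch of the continuous-time dynamics on an annealed network (a simplified illustration, not the optimized code used for the figures; the toy degree sequence and seed choice at the bottom are assumptions):

```python
import numpy as np

def sis_extinction_time(degrees, lam, seeds, t_max, rng):
    """One Gillespie realization of continuous-time SIS (mu = 1) on an
    annealed network; returns the extinction time, capped at t_max."""
    k = np.asarray(degrees, dtype=float)
    p_target = k / k.sum()   # annealed contact lands on degree k with prob. k P(k)/<k>
    infected = set(int(s) for s in seeds)
    sum_k_inf = sum(k[i] for i in infected)
    t = 0.0
    while infected and t < t_max:
        n = len(infected)
        total_rate = n + lam * sum_k_inf        # recoveries + transmission attempts
        t += rng.exponential(1.0 / total_rate)
        if rng.random() < n / total_rate:       # recovery of a uniformly chosen node
            i = list(infected)[rng.integers(n)]
            infected.discard(i)
            sum_k_inf -= k[i]
        else:                                   # transmission to a random "neighbor"
            j = int(rng.choice(k.size, p=p_target))
            if j not in infected:
                infected.add(j)
                sum_k_inf += k[j]
    return t

# S(t) is then estimated as the fraction of realizations still active at time t.
rng = np.random.default_rng(1)
deg = np.full(10_000, 10)                            # homogeneous toy example
lam = deg.mean() / (deg.astype(float)**2).mean()     # annealed threshold, cf. Sec. IV
durations = [sis_extinction_time(deg, lam, [0], 1e4, rng) for _ in range(100)]
```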
### Networks
As interaction patterns, we consider networks built according to the CM [28]. The properties of a network generated according to the CM are specified by its degree distribution, i.e., the probability \(P(k)\) that a randomly selected node has degree \(k\). Each quenched network realization of the CM is obtained by sampling the degree sequence \(\vec{k}=(k_{1},k_{2},\ldots,k_{N})\) from \(P(k)\) and pairing nodes in a completely random manner, while preserving the degree sequence. The only constraint we set on the pairing procedure is that multiedges and self-loops are prohibited. The \(q\)-th sample moment of the degree sequence
is given by
\[\langle k^{q}\rangle=\frac{1}{N}\ \sum_{i=1}^{N}(k_{i})^{q}. \tag{5}\]
The annealed version of the CM model implies that a new pairing is generated at each time step, still keeping the degree sequence fixed. In the annealed scenario one can usefully define the annealed adjacency matrix as the average over all possible pairings: \(\bar{A}(k,k^{\prime})=kk^{\prime}/(\langle k\rangle N)\). Its elements equal the probability that a node of degree \(k\) is connected to a node of degree \(k^{\prime}\)[29].
In this work, we focus on power-law degree sequences generated by selecting random variates from the distribution
\[P(k)\sim\left\{\begin{array}{ll}k^{-\gamma}&\mbox{if }k\in[k_{\rm min},k_{\rm max }]\\ 0&\mbox{otherwise}\end{array}\right.. \tag{6}\]
In the following, without loss of generality, we set \(k_{\rm min}=3\); results are qualitatively similar for other choices of \(k_{\rm min}\geq 3\), as long as \(k_{\rm min}\) is independent of \(N\). We assume that the maximum degree \(k_{\rm max}\) grows as a power of \(N\), i.e.,
\[k_{\rm max}=N^{1/\omega}\, \tag{7}\]
with
\[\omega\geq\max[2,\gamma-1]. \tag{8}\]
This specific setting of the CM model is known as the uncorrelated configuration model (UCM) [30]. Imposing the constraints of Eqs. (7) and (8) provides several advantages. For example, one can safely assume that the effective maximum degree observed in the sequence sampled from \(P(k)\) is actually \(k_{\rm max}\) if \(N\) is sufficiently large [27]. Also, quenched UCM networks have negligible degree-degree correlations.
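As an illustration, here is a minimal sketch of the degree-sequence sampling step (inverse-transform-style sampling of the truncated discrete power law; the even-sum adjustment is one simple convention, not necessarily the one used in the paper):

```python
import numpy as np

def ucm_degree_sequence(N, gamma, omega, k_min=3, rng=None):
    """Sample a UCM degree sequence: P(k) ~ k^-gamma on [k_min, N^(1/omega)].
    Remember the constraint omega >= max(2, gamma - 1), Eq. (8)."""
    rng = rng or np.random.default_rng()
    k_max = int(round(N ** (1.0 / omega)))
    ks = np.arange(k_min, k_max + 1)
    pk = ks.astype(float) ** (-gamma)
    pk /= pk.sum()                      # normalize the truncated power law
    deg = rng.choice(ks, size=N, p=pk)
    if deg.sum() % 2:                   # an even degree total is needed for the pairing
        deg[0] += 1
    return deg
```

For annealed networks the sequence fully specifies the substrate; for instance, the annealed critical point used in Sec. IV is obtained as `deg.mean() / (deg.astype(float)**2).mean()`.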
Using the above constraints, one finds that the \(q\)-th moment of a large but finite network scales approximately as
\[\langle k^{q}\rangle\sim\left\{\begin{array}{ll}\mbox{const.}&\mbox{if }q<\gamma-1\\ \log k_{\rm max}=(1/\omega)\log N&\mbox{if }q=\gamma-1\\ k_{\rm max}^{q-\gamma+1}=N^{(q-\gamma+1)/\omega}&\mbox{if }q>\gamma-1\end{array}\right.. \tag{9}\]
For \(2<\gamma\leq 3\), the power-law degree distribution of Eq. (6) has diverging second moment, i.e., \(\langle k^{2}\rangle\to\infty\) for \(N\to\infty\). We refer to networks in this class as heterogeneous networks. For \(\gamma>3\) instead, \(\langle k^{2}\rangle\) is finite, and we refer to this class as homogeneous networks.
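A quick empirical check of Eq. (9), reusing `ucm_degree_sequence` from the sketch above: for \(\gamma=2.5\) and \(\omega=2\) the second moment should grow roughly as \(N^{(3-\gamma)/\omega}=N^{0.25}\), i.e., by a factor \(10^{0.25}\approx 1.78\) per decade in \(N\).

```python
rng = np.random.default_rng(2)
for N in (10**4, 10**5, 10**6, 10**7):
    deg = ucm_degree_sequence(N, gamma=2.5, omega=2, rng=rng).astype(float)
    # successive values should grow by roughly 10**0.25 per decade in N
    print(f"N = {N:.0e}   <k^2> = {(deg**2).mean():.1f}")
```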
## III Summary of the results
We consider SIS dynamics on a finite network of size \(N\) initialized with \(i_{0}\ll N\) infected nodes. The sum of the degrees of the \(i_{0}\) seeds is \(k_{0}\). The survival function \(S(t)\) turns out to be characterized by three main regimes, describing respectively short, intermediate and long avalanches. We denote the point of transition between the first two regimes as \(t^{*}\) and the point of transition between the second and third regime as \(t_{\times}\). The transitions between the various regimes are not sharp; rather, the survival function displays smooth crossover behaviors. This qualitative behavior is valid irrespective of the exponent \(\gamma\) of the degree distribution in Eq. (6). Also, both the quenched and the annealed versions of the UCM exhibit this qualitative behavior. However, fundamental differences between homogeneous and heterogeneous networks exist, as the actual values of the transition points depend on the degree exponent \(\gamma\).
### Regime of short avalanches
For \(t<t^{*}\), in the regime of short avalanches, the survival function of each of the \(i_{0}\) avalanches is characterized by an exponential decay. This is due to the immediate recovery of each of the \(i_{0}\) initial spreaders. The characteristic time scale of the exponential decay is equal to \(1\) because we set the rate of recovery to \(\mu=1\). If the \(i_{0}\) avalanches are small enough not to interact with one another, we can write \(S(t)=1-(1-e^{-t})^{i_{0}}\), i.e., the survival probability equals the probability that at least one of the \(i_{0}\) avalanches is still active at time \(t\). Such an expression behaves as
\[S(t)\sim i_{0}\ e^{-t} \tag{10}\]
if \(t\) is sufficiently large. In particular, the above expression must be at most \(1\) so it can hold only if \(t\geq\log(i_{0})\).
The relative weight of the regime of short avalanches compared to the other regimes valid for longer avalanches, which ultimately determines the transition point \(t^{*}\), strongly depends on the initial condition of the dynamics and the topology of the underlying network. If \(\gamma>3\), then \(t^{*}\) does not display any dependence on the network size; if, instead, \(2<\gamma\leq 3\), a logarithmic divergence of \(t^{*}\) with the system size appears. For \(2<\gamma\leq 3\), the fraction of trajectories that ends in this regime also depends on the initial condition. If the spreading is initiated by spreaders with a sufficiently large \(k_{0}\), then \(t^{*}\simeq 0\) and the regime is barely visible, while \(t^{*}\) grows as \(k_{0}\) diminishes.
### Regime of intermediate avalanches
The range \(t^{*}<t<t_{\times}\) is known as the adiabatic regime [19; 27]. This regime describes avalanches that are sufficiently long to have lost any memory of the initial conditions. The survival function decays as
\[S(t)\sim t^{-1}. \tag{11}\]
The power-law decay with exponent \(-1\) is typical of critical spreading on networks with finite second moment of the degree distribution [15; 19; 20]. Here we find that the same decay describes intermediate avalanches in finite networks characterized by degree distributions with diverging second moment.
### Regime of long avalanches
Finally, avalanches that last \(t>t_{\times}\) are sufficiently large to feel the finite size of the network. An exponential cutoff characterizes the decay of the survival function in this regime, i.e.,
\[S(t)\sim e^{-t/t_{\times}}. \tag{12}\]
In particular, we have that the typical time of the exponential cutoff diverges as a power of the network size, i.e.,
\[t_{\times}(N)\sim N^{1/\sigma\nu}. \tag{13}\]
The specific value of \(\sigma\nu\) depends on \(\gamma\) and \(\omega\) only if \(\gamma\leq 3\); \(\sigma\nu=2\) otherwise.
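Numerically, Eqs. (12)-(13) are verified through data collapses of the kind shown in the figure insets of the following sections (cf. Figs. 1 and 2). A minimal sketch of such a collapse, assuming a hypothetical dictionary `runs_by_size` that maps each \(N\) to a list of measured avalanche durations:

```python
import numpy as np
import matplotlib.pyplot as plt

def survival_curve(durations, t_grid):
    """Empirical S(t): fraction of realizations lasting longer than t."""
    d = np.sort(np.asarray(durations))
    return 1.0 - np.searchsorted(d, t_grid, side="right") / d.size

def collapse_plot(runs_by_size, sigma_nu):
    """Plot t*S(t) against t / N^{1/(sigma nu)}; with the right exponent
    (sigma_nu = 2 for homogeneous networks) the curves for different N
    should superimpose."""
    t_grid = np.logspace(-1, 4, 200)
    for N, durations in sorted(runs_by_size.items()):
        S = survival_curve(durations, t_grid)
        plt.loglog(t_grid / N ** (1.0 / sigma_nu), t_grid * S, label=f"N={N:g}")
    plt.xlabel(r"$t / N^{1/\sigma\nu}$")
    plt.ylabel(r"$t\,S(t)$")
    plt.legend()
    plt.show()
```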
## IV Numerical results for annealed networks
In this section, we consider the case of uncorrelated annealed networks [27]. We consider the critical dynamics, setting \(\lambda=\lambda_{c}=\langle k\rangle/\langle k^{2}\rangle\), the exact epidemic threshold for uncorrelated networks. We consider networks of finite size, and therefore the scaling of \(\langle k\rangle\) and \(\langle k^{2}\rangle\) with \(N\) is the one in Eq. (9). It follows that, for \(2<\gamma\leq 3\), \(\lambda_{c}\to 0\) for \(N\to\infty\); instead, \(\lambda_{c}\) does not vanish for \(N\to\infty\) if \(\gamma>3\)[21].
### Homogeneous networks
Figure 1 shows results obtained by simulating critical SIS dynamics on homogeneous networks generated via the CM with \(P(k)\sim k^{-\gamma}\), \(\gamma=5.3\) and \(\omega=\gamma-1=4.3\). All realizations have been initiated with a single infected node, i.e., \(i_{0}=1\), of degree \(k_{0}=\lfloor k_{\min}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\), where \(\lfloor\cdot\rfloor\) is the floor function. This choice is motivated by the need of having an initial condition that is independent of the system size.
The three regimes anticipated in the summary above are clearly visible. Given the homogeneity of the network, short avalanches do not display any dependence on the network size. This is the standard behavior of the continuous-time BP [31] and hence it is not surprising to observe it on homogeneous networks. In particular, we observe \(t^{*}\simeq 1\). In the regime of intermediate avalanches, the survival probability shows the power-law decay of Eq. (10). The exponential cutoff emerges after a time
\[t_{\times}^{hom}(N)\sim N^{1/2}. \tag{14}\]
This is apparent from the finite-size scaling analysis shown in the inset of Fig. 1. This fact is consistent with what one finds for other critical spreading processes on homogeneous networks via a standard mean-field description [32].
The standard mean-field picture emerging from the analysis of the SIS model on homogeneous networks is further confirmed by the fact that the probability distribution of the avalanche size scales with exponent \(3/2\) and shows a cutoff diverging linearly with the system size, see Appendix A. Both these results are consequences of the behaviour of \(S(t)\) and of the scaling law that relates the average avalanche size with its duration [15].
### Heterogeneous networks
Results for heterogeneous networks with degree exponent \(2<\gamma\leq 3\) are shown in Figure 2. Also in this case, all realizations have been initiated with \(i_{0}=1\) infected node of degree \(k_{0}=\lfloor k_{min}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\). Although Eq. (9) predicts \(\langle k\rangle\sim\text{const.}\) for any \(\gamma>2\), in small systems such as the ones that can be considered numerically, \(\langle k\rangle\) has not yet reached its limit value and displays a slow dependence on \(N\). The three qualitative regimes are still present, but the crossovers between the various regimes strongly depend on \(\omega\) and \(\gamma\), in stark contrast with the homogeneous problem.
Short avalanches are described by the exponential decay of Eq. (10). We observe that \(t^{*}\) grows as \(N\) increases. The reason for such a behavior is quite intuitive. The critical value of the epidemic threshold goes to zero as the system size increases. However, the initial condition of the dynamics is unaffected by the network size, as we are still considering spreading processes initiated by a single node (\(i_{0}=1\)) with degree \(k_{0}\). Many configurations generate short avalanches simply because the probability of observing spreading events instead of recoveries becomes negligible as the size of the network increases. For example, the probability that the first event is a spreading event is \(k_{0}\lambda_{c}/(1+k_{0}\lambda_{c})\), which clearly goes to zero as the system size increases.
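As a concrete illustration, combining this expression with the annealed threshold \(\lambda_{c}=\langle k\rangle/\langle k^{2}\rangle\) used in this section and the moment scaling of Eq. (9), for \(2<\gamma<3\) one finds

\[\frac{k_{0}\lambda_{c}}{1+k_{0}\lambda_{c}}\simeq k_{0}\lambda_{c}\sim k_{0}\,N^{-(3-\gamma)/\omega}\;\xrightarrow{N\to\infty}\;0\,;\]

e.g., for \(\gamma=2.1\) and \(\omega=2\) the probability that the first event is a spreading event decays as \(N^{-0.45}\).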
This physically intuitive picture can be made more precise by studying the dependence of the survival function in the
Figure 1: Survival probability for critical SIS avalanches in homogeneous annealed networks. We set \(\gamma=5.3\), \(\omega=4.3\), \(k_{0}=\lfloor k_{min}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\) and we consider \(10^{7}\) realizations of the process for each value of \(N\). Different curves show the survival probability \(S(t)\) as \(N\) is varied. The black dashed line scales as \(t^{-1}\). In the inset, the same data as in the main panel are rescaled to make the various curves collapse one on the top of the other. The abscissa values are rescaled as \(t/t_{\times}(N)\) with \(t_{\times}(N)=N^{1/2}\), and the ordinate values are rescaled as \(tS(t)\).
regime of short avalanches on the initial condition of the dynamics. Our main finding in this regard is that the crossover from the exponential to the power-law decay starts when the survival probability has dropped by a factor that we infer to scale as \(k_{\rm max}/k_{0}\sim N^{1/\omega}/k_{0}\). This finding is deduced from the combination of the results of Figs. 3a, 3b and 3c. In Fig. 3a, we start the dynamics from a single initial seed, i.e., \(i_{0}=1\), with variable degree \(k_{0}\). Simulations are all performed on the same network, thus \(k_{\rm max}\) is constant. The drop of \(S(t)\) is proportional to \(1/k_{0}\), as apparent from the inset of Fig. 3a. In Fig. 3b, instead, we consider networks of different sizes so as to keep the ratio \(k_{\rm max}/k_{0}\) constant. In the regime of short avalanches, the different curves collapse with no rescaling. Finally, in Fig. 3c, we vary the number of initially infected nodes \(i_{0}\), but keep constant the sum of their degrees: \(k_{0}=60\). The value of the initial drop of \(S(t)\) is once again constant, as the ratio \(k_{\rm max}/k_{0}\) is not varied. The value of the time \(t^{*}\) appears the same for all settings. For \(t\geq\log(i_{0})\), the functional form of \(S(t)\) depends linearly on \(i_{0}\), as described by Eq. (10).
The scaling \(k_{\rm max}/k_{0}\) of the initial drop of the survival function \(S(t)\), together with the known exponential decay characterizing \(S(t)\) in the regime of short avalanches, i.e., Eq. (10), and the imposed \(N\) dependence of the maximum degree, i.e., Eq. (7), allows us to write that
\[t^{*}(N)\sim\log\frac{k_{max}}{k_{0}}\sim\frac{1}{\omega}\log N-\log k_{0}\, \tag{15}\]
thus a logarithmic growth of the time distinguishing short from intermediate avalanches.
Intermediate avalanches are perfectly described by the power law of Eq. (11). Our finite-size scaling analysis indicates that
\[t_{\times}^{het}(N)\sim N^{(\omega+1-\gamma)/2\omega}\,. \tag{16}\]
The scaling of Eq. (16) is clearly different from the one predicted for homogeneous networks, i.e., Eq. (14). It can be, however, linked to the same physical interpretation behind Eq. (14) using the known fact that in the SIS model on annealed networks, at criticality, the subset of nodes with largest degree plays a crucial role [21]. For \(2<\gamma\leq 3\), this set is subextensive, and contains the \(N_{h}\sim N^{1+(1-\gamma)/\omega}\) nodes with largest degree [33]. The subgraph of the \(N_{h}\) nodes with largest degree is also known to be homogeneous, in the sense that the smallest, largest and average degree of this subgraph all scale in the same way with the system size, as in homogeneous networks [33]. As a matter of fact, we can interpret the finding of Eq. (16) as \(t_{\times}^{het}(N)\sim t_{\times}^{hom}(N_{h})\sim N_{h}^{1/2}\), thus as the cutoff time valid for a homogeneous graph, i.e., Eq. (14), with \(N_{h}\) nodes. Note that Eqs. (15) and (16) explicitly depend on \(\omega\) [see also Eq. (7)]. The validity of these expressions as \(\omega\) is varied is assessed in Appendix B.
## V Langevin theory for annealed networks
The striking differences between the two cases of homogeneous and heterogeneous networks call for an analytical investigation. A natural choice for this task is to develop a theory based on a Langevin equation for the order parameter. The general theory of stochastic processes [34] provides an exact protocol to compute \(S(t)\) given a Langevin equation for the order parameter, and such a protocol has been successfully employed to describe the behaviour of avalanches in the CP [27]. Inspired by Ref. [27], we develop a theoretical approach for the SIS model on annealed networks.
For annealed networks the degree sequence fully determines the network properties: nodes with the same degree have exactly the same dynamical behavior. It is therefore convenient to define the variables \(n_{k}(t)\) and \(\rho_{k}(t)=n_{k}(t)/[NP(k)]\) which represent, respectively, the number of infected nodes with degree \(k\) at time \(t\) and the probability that a randomly selected node among those with degree \(k\) is infected at time \(t\). The epidemic phase transition can be described through the order parameter \(\rho=\sum_{k}n_{k}/N=\sum_{k}\rho_{k}P(k)\), representing the probability that a random node is infected, or through the
Figure 2: Survival probability for critical SIS avalanches in heterogeneous annealed networks. We set \(\omega=2\), \(k_{0}=\lfloor k_{\rm min}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\) and we consider \(10^{9}\) realizations of the SIS process for each value of the system size \(N\). (a) We display the survival probability for \(\gamma=2.1\). The inset displays the same data as in the main panel with the abscissa rescaled as \(t/t_{\times}(N)\) and with the ordinate rescaled as \(tS(t)k_{\rm max}\). (b) Same as in panel (a), but for \(\gamma=2.5\). (c) Same as in panel (a), but for \(\gamma=2.9\).
order parameter \(\Theta=\sum_{k}k\rho_{k}P(k)/\langle k\rangle\), representing the probability that a random neighbor of a node is infected.
The theoretical approach is based on the observation that the variables \(n_{k}\), and hence \(\rho_{k}\), are sums of a large number of binary random variables, which for annealed networks are independent [27]. Appealing to the central limit theorem, the temporal evolution of these variables can be described by means of suitable Langevin equations.
Following closely the approach of Ref. [27], valid for the CP, we find that the Langevin equation for the critical SIS model is
\[\dot{\Theta}=-\frac{\langle k\rangle\langle k^{3}\rangle}{\langle k^{2}\rangle^{2}}\Theta^{2}+\sqrt{\frac{2\Theta}{N}\frac{\langle k^{3}\rangle}{\langle k\rangle\langle k^{2}\rangle}}\,\xi\,, \tag{17}\]
where \(\xi\) is a Gaussian white noise of zero mean and unit variance. The derivation of this result is given in Appendix C and is based on three main hypotheses: we assume that the dynamics takes place on an annealed uncorrelated network, that the variables \(\rho_{k}\) evolve in time subject to Gaussian white noise and, importantly, that the adiabatic approximation is valid.
Such an approximation consists in replacing the value of the microscopic variables \(\rho_{k}\) in the equation for the evolution of the order parameter \(\Theta\) with quasi-stationary, \(\Theta\)-dependent, values, obtained by setting the time derivative to zero in their evolution equations. This is based on the fact that \(\Theta\) evolves slowly at criticality, while the \(\rho_{k}\) relax exponentially fast (see Appendix C). Only sufficiently long avalanches may be correctly described by our theory. In particular, the Langevin theory cannot explain the regime of short avalanches, where the exponential decay is observed: there, the adiabatic approximation is not yet valid.
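For illustration, here is a minimal Euler-Maruyama integration of Eq. (17) with an absorbing boundary at \(\Theta=0\) (a sketch, not the exact protocol of Ref. [34]; the fixed time step `dt` is an assumption of the sketch):

```python
import numpy as np

def langevin_extinction_time(degrees, theta0, t_max, dt=1e-3, rng=None):
    """Integrate the critical Langevin equation (17) for Theta with the
    Euler-Maruyama scheme; returns the absorption time, capped at t_max."""
    rng = rng or np.random.default_rng()
    k = np.asarray(degrees, dtype=float)
    N = k.size
    m1, m2, m3 = k.mean(), (k**2).mean(), (k**3).mean()
    drift = m1 * m3 / m2**2            # coefficient of the -Theta^2 term
    noise2 = 2.0 * m3 / (m1 * m2 * N)  # squared noise amplitude per unit Theta
    theta, t = float(theta0), 0.0
    while theta > 0.0 and t < t_max:
        dW = np.sqrt(dt) * rng.standard_normal()
        theta += -drift * theta**2 * dt + np.sqrt(noise2 * theta) * dW
        t += dt
    return t

# Averaging many trajectories gives the theoretical S(t) to compare with
# simulations; theta0 = k_0 / (<k> N) corresponds to i_0 = 1 seed of degree k_0.
```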
Eq. (17) can be recast as a partial differential equation for \(S(t)\), which in turn can be used to compute the finite-size properties of the survival function. Explicit calculations are shown in Appendix D. At criticality, the theory predicts
\[S(t)=1-e^{-\bar{t}/t} \tag{18}\]
where
\[\bar{t}=k_{0}\langle k^{2}\rangle/\langle k^{3}\rangle\,, \tag{19}\]
and with a cutoff for long times, due to the finite size of the network, scaling as
\[t_{\times}(N)\sim\sqrt{N\langle k^{2}\rangle}\bigg{(}\frac{\langle k^{2} \rangle}{\langle k^{3}\rangle}\bigg{)}^{2}\,\,. \tag{20}\]
For \(t\gg\bar{t}\), Eq. (18) predicts that \(S(t)\sim t^{-1}\). This prediction is valid for all networks, homogeneous and heterogeneous, and is in agreement with the results of numerical simulations. We stress that \(\bar{t}\) does not represent the same quantity as \(t^{*}\): the regime of short avalanches is not described under the adiabatic approximation which we do not expect to be valid during the regime of short avalanches. Indeed, the cumulative probability of Eq. (18) equals one for \(t=0\), as expected for a conditional probability such as the present one. \(S(t)\) in Eq. (18) is conditioned on \(t\) being larger than the adiabatic relaxation time. The fact that \(\bar{t}\) tends to zero in heterogeneous networks signals that the power-law decay begins immediately after the adiabatic relaxation time while it happens after a time that grows with \(k_{0}\) in homogeneous networks.
For homogeneous networks, Eq. (19) predicts \(\bar{t}\) of order \(k_{0}\) and Eq. (20) tells us that \(t_{\times}(N)\sim N^{1/2}\). These predictions are perfectly consistent with the numerical results presented in section III.
Predictions, however, are only partially in agreement with numerical simulations for heterogeneous networks. The scaling \(\bar{t}\sim k_{0}/k_{\rm max}\) predicted in Eq. (19) is consistent with the
Figure 3: Survival probability for critical SIS avalanches in heterogeneous annealed networks. We set \(\omega=2\), \(\gamma=2.1\) and we consider \(10^{9}\) realizations of the process for each curve. The process is started from \(i_{0}\) seeds; the sum of the degrees of these seeds is \(k_{0}\). (a) The process is started from \(i_{0}=1\). Different curves correspond to different values of \(k_{0}\). Here the network size is \(N=10^{8}\). The inset displays the same data as in the panel with the ordinate rescaled as \(tS(t)/k_{0}\). (b) We start the process from \(i_{0}=1\) seed. We consider different \(k_{0}\) values, but also different network sizes \(N\). The values of these two parameters are chosen such that the ratio \(k_{\rm max}/k_{0}=100\). The inset displays the same data as in the main panel with the abscissa rescaled as \(t/t_{\times}(N)\), with \(t_{\times}(N)\) given in Eq. (16). We rescale also the ordinate as \(tS(t)k_{\rm max}/k_{0}\). (c) We start the process from variable numbers of seeds \(i_{0}\), all with the same degree, keeping \(k_{0}=60\). The inset displays the same data as in the main panel with the ordinate rescaled as \(S(t)/i_{0}\). The solid black line is the exponential decay with unitary rate, \(e^{-t}\).
fact that \(tS(t)k_{\max}/k_{0}\) is constant, see Fig. 2 and Fig. 3b. The theory, however, incorrectly predicts the scaling of the cutoff for large times \(t_{\times}\). Indeed, for \(2<\gamma\leq 3\) both \(\langle k^{2}\rangle\) and \(\langle k^{3}\rangle\) diverge as described in Eq. (9), and Eq. (20) gives \(t_{\times}(N)\sim N^{(\omega-\gamma-1)/2\omega}\), which diverges with \(N\) only if \(\omega>\gamma+1\) and tends to zero otherwise. This latter finding is not only in stark contrast with the results of the numerical simulations, but is also unphysical, denoting some underlying shortcoming of the theory. We believe that the Langevin approach fails because it assumes the whole system to be involved in the critical dynamics and it does not capture the special role played by the subextensive subgraph of nodes with highest degree, as explained in the previous section. Hence, we can recover a correct scaling of \(t_{\times}\) by heuristically replacing \(N\) with \(N_{h}\) and using the scaling of the homogeneous problem.
## VI Numerical results for quenched networks
The comprehension of the phenomenology on annealed networks allows us to interpret what happens in spreading experiments on quenched networks.
The main limitation in studying large-scale networks is the computational cost of estimating the critical threshold of a network. At odds with the case of annealed networks, \(\lambda_{c}\) is not simply given by the ratio between the first and second moments of the degree sequence; rather, it must be estimated numerically [35]. We use the standard practice of identifying \(\lambda_{c}\) as the \(\lambda\) value corresponding to the maximum of the susceptibility associated with \(\rho\), see Appendix E. Numerical estimates of \(\lambda_{c}\) require high accuracy, up to six decimal digits\({}^{1}\), and these estimations are computationally demanding even though the maximization is performed using the efficient quasi-stationary method [35]. We have been able to study networks of size at most \(N=10^{7}\). Once the value of \(\lambda_{c}\) is estimated, spreading experiments for \(\lambda=\lambda_{c}\) can be performed at a relatively small computational cost.
Footnote 1: The number of digits required to have satisfactory results depends on the network.
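Schematically, the threshold-location step looks as follows. Here `qs_sampler` is a stand-in for the quasi-stationary simulation of Ref. [35], which is not reproduced, and we assume the standard susceptibility \(\chi=N\big(\langle\rho^{2}\rangle-\langle\rho\rangle^{2}\big)/\langle\rho\rangle\) (the definition detailed in Appendix E):

```python
import numpy as np

def susceptibility(rho_samples, N):
    """chi = N (<rho^2> - <rho>^2) / <rho>, computed from quasi-stationary
    samples of the density of infected nodes."""
    r = np.asarray(rho_samples, dtype=float)
    return N * (np.mean(r**2) - np.mean(r)**2) / np.mean(r)

def locate_threshold(qs_sampler, N, lam_grid):
    """Return the lambda on the grid that maximises the susceptibility.
    qs_sampler(lam) must return quasi-stationary rho samples; implementing
    the quasi-stationary method itself (Ref. [35]) is outside this sketch."""
    chis = [susceptibility(qs_sampler(lam), N) for lam in lam_grid]
    return lam_grid[int(np.argmax(chis))]

# In practice the grid is refined iteratively around the current peak until
# lambda_c is stable to the required number of decimal digits.
```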
Results of our analysis are shown in Fig. 4. We first validate our method on random regular graphs, whose phenomenology is expected to be perfectly consistent with the standard mean-field picture, i.e., consistent with the predictions of the Langevin theory for homogeneous networks and the findings for homogeneous annealed networks (Fig. 1). Numerical results, shown in Figure 4a, confirm this expectation as \(S(t)\) obeys Eq. (18) and the cutoff is given by Eq. (20). Interestingly, the scaling function displays a hump just before the cutoff. The hump is sufficiently large to be observed even in the non-rescaled data and hinders the direct observation of the exponent \(-1\) for relatively small systems, a fact that often happens when scaling functions display large peaks [9].
The results for small values of \(\gamma\) are perfectly analogous to those obtained for heterogeneous annealed networks. For example, in the case \(\gamma=2.1\), Fig. 4b, one finds that the rescaling used for annealed networks leads to the clean data collapse shown in the inset. This finding is not surprising, as the theory of annealed networks describes SIS on quenched networks well as long as \(\gamma<5/2\)[33; 35]. In Appendix E we show that the maximization of the susceptibility is a necessary step in order to obtain clean-cut results such as the ones shown in Fig. 4.
The rescaling used for annealed networks works quite well also for \(\gamma=2.7\) (see Fig. 4c). Also in this case we find that the drop of \(S(t)\) at small \(t\) is proportional to \(k_{\max}/k_{0}\) and the cutoff is again given by Eq. (16). However, this agreement is valid for the relatively small network sizes we can simulate, i.e., \(N\leq 10^{7}\). Based on the findings of Refs. [33; 35; 26], we expect that, for any value \(\gamma>5/2\), the theory for annealed networks is no longer able to describe critical SIS avalanches on sufficiently large quenched networks, and that a scaling different from Eq. (16) holds for \(N\to\infty\).
## VII Conclusions
In this paper we have performed a thorough analysis of SIS critical avalanche dynamics on random networks of finite size. While for annealed homogeneous networks the theoretical understanding is complete and quantitative, when the dynamics takes place on annealed heterogeneous substrates we have to resort to heuristic arguments to correct some shortcomings of the Langevin approach. In particular, it is necessary to assume that nodes play different roles depending on whether they are part of the subextensive subgraph of size \(N_{h}\) which includes high-degree nodes. Such a network subset is where the principal eigenvector of the adjacency matrix gets localized [33]. For stationary properties of the SIS dynamics, this localization determines the position of the epidemic threshold, but it does not affect the probability that a node is infected, which is proportional to the degree for any value of \(k\). Our results show instead that for nonstationary properties, such as avalanche statistics, localization matters: nodes outside the subset play no role in the evolution of long avalanches and as a consequence the Langevin approach, which treats all nodes on the same footing, fails.
This failure is in stark contrast with what happens for the CP. Physically, the CP differs from the SIS model in that node \(i\) spreads activity with rate \(\lambda\) regardless of its degree, rather than spreading with rate \(\lambda k_{i}\). As a consequence, the critical point of the CP is \(\lambda_{c}^{CP}=1\), i.e., it is network independent, while the critical point of the SIS is not and, in particular, it vanishes in heterogeneous networks. However, the structure of the Langevin equations describing the two processes is quite similar. The equations for the variables \(\rho_{k}\) are the same, with the only difference that the order parameter \(\Theta\) appearing in Eq. (C2) is replaced by the factor \(\rho/\langle k\rangle\), and Eq. (C3) has the same structure as the equation for \(\rho\) in the CP, with the fundamental difference that the non-linear term involves \(\sum_{k}kP(k)\rho_{k}\) in the CP, while it involves \(\sum_{k}k^{2}P(k)\rho_{k}\) in the SIS. This difference reflects the physically different spreading mechanisms of the two models. Furthermore, the analogy between the equations for the \(\rho_{k}\) and for the order parameters allows one to perform
the adiabatic approximation in both cases and the stationary values of the \(\rho_{k}\) have indeed the same form. It follows that the closed-form expressions of the Langevin equations are the same for both processes. The physical difference between the two processes, however, is reflected by a different scaling of the drift and diffusion coefficients with the system size. In particular, in the limit of small order parameter, the SIS model involves moments of \(P(k)\) up to the third (see Eq. (17)) while the highest moment in the CP is the second. Once the Langevin equations are obtained, writing the partial differential equation for \(S(t)\), its limit for large \(N\) and its integration for the calculation of \(t_{\times}\) proceeds analogously for both models. It was found that the CP is always characterized by a \(t^{*}\) that does not scale with the system size regardless of the network (as \(\lambda_{c}^{CP}\) is finite on any network). The cutoff time \(t_{\times}\), however, is correctly predicted for the CP but not for the SIS. A solid mathematical framework to explore this regime is still to be found.
We have verified that, for \(\gamma<5/2\), the avalanche statistics on quenched networks is practically identical to the case of annealed networks, in agreement with what occurs for stationary properties [35]. For larger values of \(\gamma\) the stationary properties of SIS on quenched networks (such as the value of the epidemic threshold) are governed by the interplay of hubs which effectively "interact at distance" giving rise to long-range percolation patterns [26]. The investigation of how these highly nontrivial effects influence critical avalanche statistics for quenched networks with \(\gamma>5/2\) remains a very challenging open problem both from the numerical and from the analytical point of view.
###### Acknowledgements.
F.R. acknowledges support by the Army Research Office (W911NF-21-1-0194) and by the Air Force Office of Scientific Research (FA9550-21-1-0446). The funders had no role in study design, data collection and analysis, decision to publish, or any opinions, findings, and conclusions or recommendations expressed in the manuscript.
## Appendix A Avalanche size distribution
Figure 5 displays the avalanche size distribution obtained on annealed networks. As explained in the main text, the power-law exponent \(\tau=3/2\) is easily understood from Eq. (11) and from the quadratic growth of the average avalanche size with the avalanche duration [15]. The avalanche size cutoffs \(z_{\times}\) for both classes of networks are understood again using \(\theta=2\), and in particular they are the square of the cutoffs of the survival probability. The drop of \(P(z)\) for small values of \(z\) in heterogeneous networks mirrors the exponential decay of the first temporal regime. During such a decay, avalanche size and duration scale linearly since, for short avalanches, an infection event increasing \(z\) by \(1\) generally increases \(t\) only by the healing time of the newly infected node, i.e., \(1/\mu=1\) on average. Hence, for short avalanches \(\langle z\rangle\propto t/\mu\). It follows that the rescaling is the same for both observables.
## Appendix B The role of the upper cutoff of the degree distribution
The cutoff \(t_{\times}\) is predicted to depend only on \(N\) in homogeneous networks, Eq. (14), while it explicitly depends on \(\gamma\) and \(\omega\) in heterogeneous networks, Eq. (16). The relation between \(t_{\times}\) and \(\gamma\) is explicitly assessed in Fig. 2. Figure 6 verifies the relationship between \(t_{\times}\) and \(\omega\) by keeping all parameters fixed except for \(\omega\). For large \(\gamma\) values, networks are homogeneous regardless of the \(\omega\) value. For heterogeneous networks, in turn, \(N_{h}\) depends on \(\gamma\), \(\omega\) and \(N\) [36].
Figure 4: Survival probability for critical SIS avalanches in quenched networks. We set \(k_{0}=\lfloor k_{\text{min}}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\). (a) Simulations are performed on a random regular graph with degree \(k=10\). We consider \(10^{8}\) realizations of the spreading process for each value of \(N\). The inset displays the same data as the main panel with the abscissa rescaled as \(t/t_{\times}(N)\) and with the ordinate rescaled as \(tS(t)\). (b) Same as in panel (a) but on configuration model networks with \(\gamma=2.1\) and \(\omega=2\). We consider \(10^{9}\) realizations of the SIS process for each value of \(N\). The inset displays the same data as in the main panel with the abscissa rescaled as \(t/t_{\times}(N_{h})\) and with the ordinate rescaled as \(tS(t)k_{\text{max}}\). (c) Same as in (b), but for \(\gamma=2.7\) and \(\omega=2\).
## Appendix C Derivation of the Langevin equations
The derivation of the Langevin equations proceeds in the same way as in [19; 27]. The Langevin equations for the variables \(n_{k}\) are
\[\begin{split}\dot{n}_{k}&=-n_{k}+\lambda\frac{k}{\langle k\rangle}(1-\rho_{k})\sum_{k^{\prime}}k^{\prime}P(k^{\prime})n_{k^{\prime}}\\ &\quad+\sqrt{n_{k}+\lambda(1-\rho_{k})\frac{k}{\langle k\rangle}\sum_{k^{\prime}}k^{\prime}P(k^{\prime})n_{k^{\prime}}}\,\xi_{k}\,,\end{split} \tag{C1}\]
Dividing by \(NP(k)\) we obtain
\[\begin{split}\dot{\rho}_{k}&=-\rho_{k}+\lambda k(1-\rho_{k})\Theta\\ &\quad+\sqrt{\frac{1}{NP(k)}\big[\rho_{k}+\lambda k(1-\rho_{k})\Theta\big]}\,\xi_{k}\,,\end{split} \tag{C2}\]
and using the definition of \(\Theta\)
\[\begin{split}\dot{\Theta}&=\Theta\left(-1+\lambda\sum_{k}\frac{k^{2}}{\langle k\rangle}(1-\rho_{k})P(k)\right)\\ &\quad+\frac{1}{\langle k\rangle}\sum_{k}kP(k)\sqrt{\frac{1}{NP(k)}\big[\rho_{k}+\lambda k(1-\rho_{k})\Theta\big]}\,\xi_{k}\,.\end{split} \tag{C3}\]
Note that the leading order in Eq. (C2) is linear in \(\rho_{k}\) regardless of the value of \(\lambda\), implying a fast exponential decay toward their asymptotic (average) values, while linear terms are suppressed in Eq. (C3) by setting \(\lambda=\lambda_{c}\), and hence \(\Theta\) is a slowly varying variable at criticality. Therefore, the adiabatic approximation can be performed at the critical point. Setting
Figure 5: Avalanche size distribution for critical SIS avalanches in annealed networks. (a) We set \(\gamma=5.3\), \(\omega=4.3\), \(k_{0}=\lfloor k_{\text{min}}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\) and we consider \(10^{7}\) realizations of the SIS process for each value of the system size \(N\). The inset displays the same data as in the main panel with the abscissa rescaled as \(z/N\) and with the ordinate rescaled as \(z^{3/2}P(z)\). (b) We set \(\gamma=2.1\), \(\omega=2\), \(k_{0}=\lfloor k_{\text{min}}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\) and we consider \(10^{7}\) realizations of the SIS process for each value of the system size \(N\). The inset displays the same data as in the main panel with the abscissa rescaled as \(z/N_{h}\) and with the ordinate rescaled as \(z^{3/2}P(z)k_{\rm max}\).
Figure 6: Survival probability for critical SIS avalanches in annealed networks. (a) We set \(\gamma=5.3\), \(N=10^{6}\), \(k_{0}=\lfloor k_{\text{min}}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\) and we consider \(10^{7}\) realizations of the SIS process for each value of \(\omega\). The inset displays the same data as in the main panel with the abscissa rescaled as \(t/t_{c}(N)\) and with the ordinate rescaled as \(tS(t)\). (b) We set \(\gamma=2.1\), \(N=10^{6}\), \(k_{0}=\lfloor k_{\text{min}}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\) and we consider \(10^{7}\) realizations of the SIS process for each value of \(\omega\). The inset displays the same data as in the main panel with the abscissa rescaled as \(t/t_{c}(N_{h})\) and with the ordinate rescaled as \(tS(t)k_{\text{max}}\).
\(\dot{\rho}_{k}=0\) and \(\lambda=\lambda_{c}\) yields
\[\begin{split}\rho_{k}&=\frac{\lambda_{c}k\Theta}{1+ \lambda_{c}k\Theta}\\ &+\frac{1}{1+\lambda_{c}k\Theta}\sqrt{\frac{1}{NP(k)}\Big{[}\rho_ {k}+\lambda_{c}k(1-\rho_{k})\Theta\Big{]}}\,\xi_{k}\,.\end{split} \tag{39}\]
In the above expression, the stochastic term vanishes in the thermodynamic limit. Therefore we can rewrite Eq. (39) by replacing \(\rho_{k}\) in the square root with its leading (deterministic) term, so to express \(\rho_{k}\) entirely as a function of \(\Theta\), obtaining
\[\rho_{k}=\frac{\lambda_{c}k\Theta}{1+\lambda_{c}k\Theta}+\sqrt{\frac{2\lambda _{c}k\Theta}{NP(k)(1+\lambda_{c}k\Theta)}}\,\xi_{k}. \tag{40}\]
Replacing this expression in the deterministic term of Eq. (12) gives
\[\begin{split}&\Theta\left[-1+\lambda_{c}\sum_{k}\frac{k^{2}P(k)}{ \langle k\rangle}\frac{1}{1+\lambda_{c}k\Theta}\right]-\\ &\lambda_{c}\Theta\sum_{k}\frac{k^{2}P(k)}{\langle k\rangle} \sqrt{\frac{1}{NP(k)}\frac{2\lambda_{c}k\Theta}{1+\lambda_{c}k\Theta}}\xi_{k }\,,\end{split} \tag{41}\]
while the stochastic term takes the form
\[\sum_{k}\frac{kP(k)}{\langle k\rangle}\sqrt{\frac{1}{NP(k)}\frac{2\lambda_{c}k \Theta}{1+\lambda_{c}k\Theta}}\,\xi_{k}\,. \tag{42}\]
The overall contribution to the new stochastic term in the equation for \(\dot{\Theta}\) is
\[\begin{split}&\sum_{k}\frac{kP(k)}{\langle k\rangle}\sqrt{\frac{1}{ NP(k)}\frac{2\lambda_{c}k\Theta}{1+\lambda_{c}k\Theta}}\,(1-\lambda_{c}k \Theta)\,\,\xi_{k}=\\ &\sqrt{\sum_{k}\frac{k^{2}P(k)}{N\langle k\rangle^{2}}\frac{2 \lambda_{c}k\Theta}{1+\lambda_{c}k\Theta}\big{(}1-\lambda_{c}k\Theta\big{)}^{ 2}}\,\xi\,.\end{split} \tag{43}\]
Note that, in inserting the sum into the square root, we also replace the individual noises \(\xi_{k}\) with an overall noise \(\xi\) by means of the central limit theorem. Defining \(\Delta=\lambda_{c}\frac{\langle k^{2}\rangle}{\langle k\rangle}-1\) and summing and subtracting the quantity \(\lambda_{c}\frac{\langle k^{2}\rangle}{\langle k\rangle}\Theta\), the closed-form of the Langevin equation for \(\Theta\) is
\[\dot{\Theta}=\Theta(\Delta-\lambda_{c}\Omega[\Theta])+\sqrt{\frac{2\lambda_{c}\Theta}{N}\Lambda[\Theta]}\,\xi\,, \tag{44}\]
where
\[\begin{split}\Omega[\Theta]&=\sum_{k}\frac{k^{2}P(k )}{\langle k\rangle}\frac{\lambda_{c}k\Theta}{1+\lambda_{c}k\Theta}\\ \Lambda[\Theta]&=\sum_{k}\frac{k^{3}P(k)}{\langle k \rangle^{2}}\frac{(1-\lambda_{c}k\Theta)^{2}}{1+\lambda_{c}k\Theta}\end{split}\,. \tag{45}\]
The two summations above can be easily performed if \(\Theta\ll 1/\lambda_{c}k_{max}\). Under this assumption we have \(\Omega=\lambda_{c}\Theta\langle k^{3}\rangle/\langle k\rangle\) and \(\Lambda=\langle k^{3}\rangle/\langle k\rangle^{2}\). Replacing these expressions in Eq. (44) we obtain Eq. (17). Figure 7 shows that the assumption \(\Theta\ll 1/\lambda_{c}k_{max}\) holds if the dynamics is initialized with sufficiently small values of \(\Theta\), as we always do in our simulations.
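The scaling implied by this Langevin description can be checked directly. Below is a minimal Euler–Maruyama sketch (our own illustration, not the authors' code) of the critical (\(\Delta=0\)) dynamics for \(\Theta\) obtained above, using the moments of a random regular graph and the annealed-network value \(\lambda_{c}=\langle k\rangle/\langle k^{2}\rangle\); the system size, time step, and trajectory count are arbitrary demo choices.

```python
# Euler-Maruyama integration (illustrative sketch) of the critical Langevin
# dynamics for Theta derived above:
#   dTheta = -(<k><k^3>/<k^2>^2) Theta^2 dt + sqrt(2 <k^3>/(<k><k^2>) Theta/N) dW.
# At criticality the survival probability should decay roughly as S(t) ~ 1/t.
import numpy as np

rng = np.random.default_rng(0)
k = 10.0                                  # regular graph: <k^n> = k^n
m1, m2, m3 = k, k**2, k**3
N = 1e4                                   # system size (demo choice)
drift = m1 * m3 / m2**2                   # = 1 for a regular graph
diff = m3 / (m1 * m2)                     # = 1 for a regular graph

n_traj, dt, n_steps = 20000, 0.05, 4000
theta = np.full(n_traj, 1.0 / N)          # Theta_0 = k_0/(N<k>) with k_0 = <k>
alive = np.ones(n_traj, dtype=bool)
survival = []
for _ in range(n_steps):
    th = theta[alive]
    th = th - drift * th**2 * dt \
         + np.sqrt(2.0 * diff * th / N * dt) * rng.standard_normal(th.size)
    th = np.minimum(th, 1.0)              # reflecting barrier at Theta = 1
    theta[alive] = th
    alive[alive] = th > 0.0               # absorbing barrier at Theta = 0
    survival.append(alive.mean())

t = dt * np.arange(1, n_steps + 1)
for i in (50, 200, 800, 3200):            # t*S(t) should be roughly flat
    print(f"t = {t[i]:7.1f}   t*S(t) = {t[i] * survival[i]:.3f}")
```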
## Appendix D The survival probability function
From Eq. (17) an equation for \(S(t|\Theta_{0})\) can be derived using standard techniques [34]
\[\begin{split}\frac{\partial}{\partial t}S(t|\Theta_{0})=& -\frac{\langle k\rangle\langle k^{3}\rangle}{\langle k^{2}\rangle^{2}} \Theta_{0}^{2}\frac{\partial}{\partial\Theta_{0}}S(t|\Theta_{0})\\ &+\frac{\langle k^{3}\rangle}{\langle k\rangle\langle k^{2} \rangle}\frac{\Theta_{0}}{N}\frac{\partial^{2}}{\partial\Theta_{0}^{2}}S(t| \Theta_{0})\,,\end{split} \tag{46}\]
with the boundary conditions
\[S(t|\Theta_{0}=0)=0\,\,\,\text{and}\,\,\,\,\,\frac{\partial}{\partial\Theta_{0} }S(t|\Theta_{0})\bigg{|}_{\Theta_{0}=1}=0\,, \tag{47}\]
expressing the absorbing nature of the barrier in \(\Theta=0\) and the reflecting nature of the barrier in \(\Theta=1\). The initial condition on \(\Theta\) explicitly depends on the system size as \(\Theta_{0}=k_{0}/(N\langle k\rangle)\). We are interested in deriving the decay of \(S(t)\) over time and this requires the thermodynamic limit to be taken, so we need an initial condition that does not depend on the system size [27]. We change variable from \(\Theta_{0}\) to \(k_{0}=\Theta_{0}N\langle k\rangle\). In this way, if we fix the initial condition to be such that only a single node with degree \(k_{0}=\langle k\rangle\) is infected, then \(\Theta_{0}=1/N\). We obtain
\[\frac{\partial}{\partial t}S(t|k_{0})=-\frac{\langle k^{3}\rangle}{\langle k^{2}\rangle^{2}}\frac{k_{0}^{2}}{N}\frac{\partial}{\partial k_{0}}S(t|k_{0})+\frac{\langle k^{3}\rangle}{\langle k^{2}\rangle}k_{0}\frac{\partial^{2}}{\partial k_{0}^{2}}S(t|k_{0})\,, \tag{48}\]
where now the thermodynamic limit \(N\to\infty\) can be taken. The coefficient of the drift term scales as
\[\frac{\langle k^{3}\rangle}{\langle k^{2}\rangle^{2}}N^{-1}\sim k_{max}^{\gamma -(2+\omega)} \tag{49}\]
if \(2<\gamma\leq 3\) while it scales as \(k_{max}^{-\omega}\) if \(4<\gamma\). Analogously, the diffusion coefficient scales as
\[\frac{\langle k^{3}\rangle}{\langle k^{2}\rangle}\sim k_{max} \tag{50}\]
Figure 7: Trajectories of the order parameter \(\Theta\). We use here \(N=10^{8}\) and \(\omega=2\). Each realization is initialized with \(k_{0}=\lfloor k_{min}(\gamma-1)/(\gamma-2)\rfloor=\lfloor\langle k\rangle\rfloor\). Solid lines correspond to a single trajectory of the order parameter for different values of \(\gamma\). Dashed lines correspond to the bound \(1/\lambda_{c}k_{max}\).
if \(2<\gamma\leq 3\) and it converges to a constant for \(\gamma>4\).
For \(\gamma>4\) we have a "standard" (in the sense of Ref. [15]) diffusive scenario where the drift term tends to zero (for any \(\omega>0\)) and the diffusion coefficient converges to a constant. For \(2\leq\gamma<3\) the diffusion coefficient diverges and the drift term again tends to zero unless \(\gamma>2+\omega\), which is impossible if \(\omega\geq\gamma-1\). This condition is always met in our numerical simulations when \(2<\gamma<3\). It follows that for large \(N\) the dynamics is dominated by the diffusion term, for both homogeneous and heterogeneous networks, in complete analogy with the CP [27]. In the limit of large but finite \(N\) the drift term can be neglected and the solution to the resulting equation is Eq. (18).
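The asymptotic dominance of the diffusion term is easy to check numerically. Below is a small sketch (ours, with an arbitrary \(k_{\min}=3\)) evaluating the drift and diffusion coefficients of Eqs. (49)–(50) for a truncated power-law degree distribution with \(k_{\max}=N^{1/\omega}\).

```python
# Numerical check of the drift and diffusion scalings in Eqs. (49)-(50)
# for P(k) ~ k^{-gamma}, k_min <= k <= k_max = N^{1/omega}.
import numpy as np

def moments(gamma, kmin, kmax):
    k = np.arange(kmin, int(kmax) + 1, dtype=float)
    p = k**(-gamma)
    p /= p.sum()
    return tuple((p * k**n).sum() for n in (1, 2, 3))

omega = 2.0
for gamma in (2.5, 4.5):
    print(f"gamma = {gamma}")
    for N in (1e5, 1e6, 1e7, 1e8):
        kmax = N**(1.0 / omega)
        m1, m2, m3 = moments(gamma, 3, kmax)
        drift = m3 / m2**2 / N          # coefficient in Eq. (49): tends to 0
        diffusion = m3 / m2             # coefficient in Eq. (50)
        print(f"  N={N:.0e}  k_max={kmax:8.0f}  drift={drift:.3e}  diff={diffusion:.3e}")
```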
The finite-size cutoff of \(S(t)\) can be estimated from the average duration of avalanches [27]. To compute it, we first need \(T_{1}(k_{0})=\int_{0}^{\infty}S(t|k_{0})dt\). Integrating Eq. (48) in time we obtain
\[\frac{d^{2}T_{1}(k_{0})}{dk_{0}^{2}}-\frac{k_{0}}{N\langle k^{2}\rangle}\frac{ dT_{1}(k_{0})}{dk_{0}}=-\frac{\langle k^{2}\rangle}{\langle k^{3}\rangle k_{0}}\,, \tag{19}\]
with boundary conditions, obtained by integrating Eq. (47), given by \(T_{1}(0)=0\) and \(T_{1}^{\prime}(k_{tot})=0\), where \(k_{tot}=N\langle k\rangle/2\) is the total number of edges. The solution is given by
\[T_{1}(k_{0})=\] \[\sqrt{2N\langle k^{2}\rangle}\int_{0}^{k_{0}/\sqrt{2N\langle k^{ 2}\rangle}}du\,e^{u^{2}}\int_{u}^{\sqrt{N\langle k\rangle^{2}/8\langle k^{2} \rangle}}\frac{\langle k^{2}\rangle}{\langle k^{3}\rangle}\frac{dv}{v}\,e^{-v ^{2}}\,, \tag{20}\]
as can be verified by direct substitution.
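The direct substitution can also be spot-checked numerically. The sketch below (our own, with regular-graph moments and an arbitrary \(N\)) evaluates \(T_{1}(k_{0})\) of Eq. (20) by nested quadrature and compares a finite-difference version of the left-hand side of Eq. (19) with its right-hand side.

```python
# Numerical spot-check (illustrative) that T1(k0) of Eq. (20) solves the
# ODE (19), using nested quadrature and central finite differences.
import numpy as np
from scipy.integrate import quad

m1, m2, m3 = 10.0, 100.0, 1000.0        # regular-graph moments (assumption)
N = 1e4
U = np.sqrt(N * m1**2 / (8 * m2))       # upper limit of the inner integral
s = np.sqrt(2 * N * m2)

def inner(u):
    return quad(lambda v: (m2 / m3) * np.exp(-v**2) / v, u, U)[0]

def T1(k0):
    return s * quad(lambda u: np.exp(u**2) * inner(u), 0.0, k0 / s)[0]

h, k0 = 0.5, 40.0
lhs = (T1(k0 + h) - 2 * T1(k0) + T1(k0 - h)) / h**2 \
      - (k0 / (N * m2)) * (T1(k0 + h) - T1(k0 - h)) / (2 * h)
rhs = -m2 / (m3 * k0)
print(f"LHS = {lhs:.4e}   RHS = {rhs:.4e}")   # the two should agree closely
```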
Defining \(T_{2}(k_{0})=2\int_{0}^{\infty}tS(t|k_{0})\,dt\) and integrating Eq. (48) once more we get
\[\frac{d^{2}T_{2}(k_{0})}{dk_{0}^{2}}-\frac{k_{0}}{N\langle k^{2}\rangle}\frac {dT_{2}(k_{0})}{dk_{0}}=-\frac{2}{k_{0}}\frac{\langle k^{2}\rangle}{\langle k ^{3}\rangle}T_{1}(k_{0})\,. \tag{21}\]
The solution is given by
\[T_{2}(k_{0})=2N\langle k^{2}\rangle\int_{0}^{k_{0}/\sqrt{2N\langle k^{2} \rangle}}due^{u^{2}}\int_{u}^{\sqrt{N\langle k\rangle^{2}/8\langle k^{2} \rangle}}G(t)e^{-t^{2}}dt\,, \tag{22}\]
with
\[G(t)=2\left(\frac{\langle k^{2}\rangle}{\langle k^{3}\rangle}\right)^{2}t^{ -1}\int_{0}^{t}e^{u^{2}}du\int_{u}^{\sqrt{N\langle k\rangle^{2}/8\langle k^{ 2}\rangle}}\frac{dv}{v}e^{-v^{2}}\,. \tag{23}\]
For large \(N\) the upper limit \(k_{0}/\sqrt{2N\langle k^{2}\rangle}\) of the outer integral in Eq. (22) tends to \(0\), so the outer integral can be replaced by its integrand at \(u=0\) times this upper limit; noting also that \(\sqrt{N\langle k\rangle^{2}/8\langle k^{2}\rangle}\) diverges, we get
\[T_{2}(k_{0})\approx k_{0}\sqrt{2N\langle k^{2}\rangle}\int_{0}^{\infty}G(t)e^{ -t^{2}}dt\,, \tag{24}\]
so that the size-dependent cutoff time \(t_{\times}(N)\) scales according to Eq. (14) if \(\langle k^{3}\rangle\) is asymptotically finite (\(\gamma>4\)), while it scales as Eq. (20) if \(2<\gamma\leq 3\).
## Appendix E Determination of the critical point in quenched networks
Figure 8 shows the importance of a precise estimation of the critical point on quenched networks. Figure 8a shows the susceptibility obtained on a random graph with \(N=10^{6}\) as \(\lambda\) is varied. The critical point is determined as the value of the spreading rate that maximizes the susceptibility, marked by the blue dashed line. Figure 8b shows three curves obtained using three different values of \(\lambda\): the critical one and the two closest values for which we studied the susceptibility (resolution set to \(2.5\cdot 10^{-5}\)). Although the curves look extremely similar (inset), the data collapse is strongly impacted by \(\lambda\): slightly increasing or decreasing \(\lambda\) with respect to the critical value visibly degrades the collapse.
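The peak-locating step described above can be made robust against the finite resolution in \(\lambda\) with a local quadratic fit around the maximum; a minimal sketch (ours, with synthetic susceptibility data and an illustrative peak position) is:

```python
# Locate the susceptibility maximum by a local quadratic fit around the
# highest measured point (sketch with synthetic data).
import numpy as np

def critical_lambda(lams, chis, window=2):
    i = int(np.argmax(chis))
    sl = slice(max(i - window, 0), min(i + window + 1, len(lams)))
    x = lams[sl] - lams[i]               # center for numerical stability
    a, b, c = np.polyfit(x, chis[sl], 2)
    return lams[i] - b / (2 * a)         # vertex of the fitted parabola

# Synthetic example with a peak at lambda = 0.10525 (illustrative numbers):
lams = np.arange(0.1045, 0.1060, 2.5e-5)
chis = 1.0 / (1.0 + ((lams - 0.10525) / 4e-4) ** 2)
print(critical_lambda(lams, chis))       # ~0.10525
```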
Figure 8: Critical point determination and its importance. (a) The susceptibility for the random regular graph with \(N=10^{6}\) and \(k=10\) is computed for different values of \(\lambda\), with a resolution \(2.5\cdot 10^{-5}\). (b) Rescaled survival probability for the random regular graph with \(k=10\). The abscissa is rescaled as \(t/t_{\rm c}(N)\) and the ordinate is rescaled as \(tS(t)\). The inset shows the same data but the axes are not rescaled. The two black curves represent \(N=10^{4}\) and \(N=10^{5}\). The red, blue and green lines represent data for \(N=10^{6}\) obtained using the three values of the \(\lambda\) marked by the dashed lines of the same color in panel (a). |
2301.00390 | Impact of CP violation searches at MOMENT experiment with sterile
neutrinos | We examine the scope of the MOMENT experiment in the context of CP violation
searches with the presence of extra eV scale sterile neutrino. MOMENT is a
proposed short baseline neutrino oscillation experiment using muon beams for
neutrino production, making it advantageous over $\pi_0$ background and other
technical difficulties. We work at the first oscillation maximum, which matches
the peak value of the flux, with a run time of 5 years for both neutrino and
anti-neutrino modes. We perform the bi-probability studies for both 3 and 3+1
flavor mixing schemes. The CP violation sensitivities arising from the
fundamental CP phase $\delta_{13}$ and unknown CP phase $\delta_{14}$ are
explored at the firm footing. A slight deterioration is observed in the CP
violation induced by $\delta_{13}$ when the presence of a sterile neutrino is
considered. We also look at the reconstruction of the CP violation phases
$\delta_{13}$ and $\delta_{14}$ and the MOMENT experiment shows significant
capabilities in the precise measurement of $\delta_{13}$ phase. | Kiran Sharma, Sudhanwa Patra | 2023-01-01T11:52:20Z | http://arxiv.org/abs/2301.00390v1 | # Impact of CP violation searches at MOMENT experiment with sterile neutrinos
###### Abstract
We examine the scope of the MOMENT experiment in the context of CP violation searches in the presence of an extra eV-scale sterile neutrino. MOMENT is a proposed short-baseline neutrino oscillation experiment using muon beams for neutrino production, making it advantageous over \(\pi_{0}\) background and other technical difficulties. We work at the first oscillation maximum, which matches the peak value of the flux, with a run time of 5 years for both neutrino and anti-neutrino modes. We perform bi-probability studies for both the 3 and 3+1 flavor mixing schemes. The CP violation sensitivities arising from the fundamental CP phase \(\delta_{13}\) and the unknown CP phase \(\delta_{14}\) are explored on a firm footing. A slight deterioration is observed in the CP violation induced by \(\delta_{13}\) when the presence of a sterile neutrino is considered. We also look at the reconstruction of the CP violation phases \(\delta_{13}\) and \(\delta_{14}\), and the MOMENT experiment shows significant capabilities in the precise measurement of the \(\delta_{13}\) phase.
Keywords: Neutrino Oscillation, Medium-baseline, Sterile Neutrino, MOMENT
## 1 Introduction and Motivation
Neutrino physics has made tremendous progress in the past few years. The discovery of the phenomenon of neutrino oscillations [1; 2; 3; 4; 5; 6; 7; 8] has not only answered the solar-neutrino problem [9; 10; 11; 12] and the atmospheric-neutrino problem [13; 14; 15; 16], but has also opened new doors to look at physics beyond the standard model. At present, as per the standard model, there are three active flavors of neutrinos, associated with the electron, muon, and tau leptons. Neutrino oscillations in the presence of three active neutrinos are described by six oscillation parameters: two mass-squared differences, i.e. \(\Delta m^{2}_{21}=m^{2}_{2}-m^{2}_{1}\) and \(\Delta m^{2}_{31}=m^{2}_{3}-m^{2}_{1}\), three neutrino mixing angles \(\theta_{12}\), \(\theta_{13}\), \(\theta_{23}\), and one Dirac CP phase \(\delta_{CP}\). The values of the oscillation parameters have been constrained by various neutrino experiments [17; 18; 19; 20; 21; 22; 23; 24]; the only oscillation parameters yet to be known precisely are the sign of the atmospheric mass-squared difference \(\Delta m^{2}_{31}\), which will resolve the issue of neutrino mass ordering, the octant of the mixing angle \(\theta_{23}\), and the value of the fundamental CP phase. The long-baseline experiments T2K [25] and NO\(\nu\)A [26] have independently reported the value of \(\delta_{CP}\), but there is a slight tension between the two values. This discrepancy may be because of systematic uncertainties, or
it may be an indication of new physics arising from the presence of sterile neutrinos [27; 28; 29; 30; 31; 32; 33; 34; 35] or non-standard interactions [36; 37; 38; 39; 40].
Certain short-baseline anomalies, coming from reactor and Gallium experiments [41; 42] and from the accelerator experiments LSND [43] and MiniBooNE [44], hint towards the existence of a hypothetical fourth sterile mass eigenstate. The presence of such right-handed sterile neutrinos would extend the structure of the standard 3-flavor framework. As a result, the number of neutrino oscillation parameters is enhanced in the minimally extended \(3+1\) framework: there are three additional mixing angles, \(\theta_{14}\), \(\theta_{24}\), \(\theta_{34}\), and two more CP phases, together with new mass-squared differences arising from the mixing of the fourth mass eigenstate \(\nu_{4}\) with the mass eigenstates of the three active flavors (\(\nu_{e}\), \(\nu_{\mu}\), and \(\nu_{\tau}\)). The presence of a sterile neutrino has been widely explored in the literature (see references [45; 46; 47; 48; 49; 50; 51]).
The effect of sterile neutrinos on the standard oscillation parameters has been widely explored for current and future proposed neutrino oscillation experiments like T2HK [52], T2HKK [53], and DUNE [54; 55; 56; 57]. An important point to remark is that all the above-stated experimental facilities use neutrino beams originating from pion decay, whereas in the present work we look at the potential of the upcoming neutrino oscillation experiment MOMENT (MuOn-decay MEdium baseline NeuTrino beam facility) [58], which observes a neutrino beam produced from decaying muons at relatively low energies. As an advantage, one can avoid the background effects and other technical difficulties at the experimental level.
MOMENT is a proposed short-baseline neutrino oscillation experiment with a baseline of 150 km. A detailed description of the experimental facility is provided in Section 5. The matter effects arising from the interaction of neutrinos with the matter potential as they propagate are negligible here in comparison with long-baseline neutrino experiments. As a result, fake CP violation scenarios arising from matter terms can be avoided at MOMENT. A comparative analysis of the precise measurement of the CP phase has been made among MOMENT, T2K, NO\(\nu\)A, T2HK, T2HKK, and DUNE, where it was shown that MOMENT can significantly improve the bounds on \(\delta_{CP}\)[59]. Furthermore, the effect of non-standard interactions on the CP phase has been studied in Refs. [59; 60; 61; 62]. To the best of our knowledge, the effect of sterile neutrinos at the MOMENT facility has not been studied to a large extent. In this work, we look at the influence of a sterile neutrino on the precise measurement of the fundamental CP phase. This work aims to study the capabilities of the MOMENT experiment and put it into context in the global experimental effort in neutrino physics.
This motivates us to study the physics potential of the MOMENT experiment in the presence of a hypothetical right-handed eV-scale sterile neutrino. We present a detailed discussion of the behavior of the 4-flavor transition probabilities, take a bird's-eye view of the CP trajectory curves in the standard 3-flavor scheme, and extend it to our framework of the 3+1 flavor scheme. We perform a prospective study addressing the CP violation sensitivities and look at the degeneracies among the different CP phases.
The paper is organized as follows: In the next section, we develop the theoretical understanding of the neutrino and anti-neutrino oscillation probabilities in the presence of a sterile neutrino. A discussion at the probability level is given in Section 3, and a detailed bi-probability study is performed in Section 4. The experimental details and the corresponding event spectra are explored in Section 5. We focus on the CP violation study and the reconstruction of the different CP phases in Section 6. The paper concludes with a summary of the prospective studies performed in this work.
## 2 Transition probability in the 4-flavor scheme
### Theoretical framework
The formalism of \(3+1\) neutrino oscillations can be understood in terms of the time-dependent Schrödinger equation in the mass basis,
\[i\frac{\partial\nu_{j}}{\partial t} = H_{0}\nu_{j}, \tag{1}\]
with \(j=1,2,3,4\), where \(H_{0}\) is the effective Hamiltonian in the mass basis and \(\nu_{j}\) is a neutrino mass eigenstate. The effective flavor-dependent Hamiltonian for \(3+1\) neutrino oscillations including matter effects is given by
\[H_{4\nu} = \underbrace{U_{4\nu}\left[\begin{array}{cccc}0&0&0&0\\ 0&\Delta m^{2}_{21}/2E&0&0\\ 0&0&\Delta m^{2}_{31}/2E&0\\ 0&0&0&\Delta m^{2}_{41}/2E\end{array}\right]}_{=H_{\rm vac}}U^{\dagger}_{4\nu} +\underbrace{\left[\begin{array}{cccc}V_{CC}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&-V_{NC}\end{array}\right]}_{=H_{\rm mat}}, \tag{2}\]
In the case of the \(3+1\) scenario, i.e., for three active neutrinos and one sterile neutrino, the mixing matrix \(U_{4\nu}\) can be parameterised as
\[U_{4\nu} = R\big{(}\theta_{34},\delta_{34}\big{)}\,R\big{(}\theta_{24},0\big{)}\,R\big{(}\theta_{14},\delta_{14}\big{)}R\big{(}\theta_{23},0\big{)}R\big{(}\theta_{13},\delta_{13}\big{)}R\big{(}\theta_{12},0\big{)}\equiv R\big{(}\theta_{34},\delta_{34}\big{)}\,R\big{(}\theta_{24},0\big{)}\,R\big{(}\theta_{14},\delta_{14}\big{)}U_{3\nu} \tag{3}\]
where \(U_{3\nu}=R\big{(}\theta_{23},0\big{)}R\big{(}\theta_{13},\delta_{13}\big{)}R \big{(}\theta_{12},0\big{)}\) is the standard three flavor neutrino mixing matrix. The mass eigenstates are related to flavor eigenstates as \(\nu_{j}=\big{[}U_{4\nu}\big{]}_{j\alpha}\nu_{\alpha}\).
The \(4\times 4\) real rotation matrices \(R_{ij}\) (here \(R(\theta_{24})\), \(R(\theta_{23})\), \(R(\theta_{12})\)) in the \((i,j)\) plane with \(2\times 2\) submatrices are defined as:
\[R_{ij}=\begin{pmatrix}c_{ij}&s_{ij}\\ -s_{ij}&c_{ij}\end{pmatrix} \tag{4}\]
while the complex rotation matrices (i.e. \(R(\theta_{34},\delta_{34})\), \(R(\theta_{14},\delta_{14})\), and \(R(\theta_{13},\delta_{13})\)) in the \((i,j)\) plane are defined by:
\[\tilde{R}_{ij}=\begin{pmatrix}c_{ij}&s_{ij}e^{-i\delta_{ij}}\\ -s_{ij}e^{i\delta_{ij}}&c_{ij}\end{pmatrix} \tag{5}\]
with \(c_{ij}=\cos\theta_{ij}\) and \(s_{ij}=\sin\theta_{ij}\). This parametrization of the unitary mixing matrix renders the conversion probability independent of the mixing angle \(\theta_{34}\) and the corresponding phase \(\delta_{34}\).
In the S-matrix formalism, the change of neutrino flavor after propagating a distance \(L\) is encoded in an evolution matrix \(S\) as
\[\nu_{\alpha}\big{(}L\big{)}=S_{\alpha\beta}\nu_{\beta}\big{(}0\big{)}\,. \tag{6}\]
The key point is that the evolution matrix \(S\) satisfies the same Schrödinger equation. After some simplification, the evolution matrix can be expressed in terms of \(H_{4\nu}\) as
\[S_{\beta\alpha} = \big{[}\exp(-iH_{4\nu}L)\big{]}_{\beta\alpha}\;\,, \tag{7}\]
The final expression for the neutrino oscillation probability from \(\nu_{\alpha}\) to \(\nu_{\beta}\), with neutrino energy \(E\) and baseline \(L\), in terms of the evolution matrix is
\[P(\nu_{\alpha}\rightarrow\nu_{\beta}) = \big{|}\,S_{\beta\alpha}\,\big{|}^{2} \tag{8}\]
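For readers who want to reproduce the exact probabilities, the recipe of Eqs. (2)–(8) is compact enough to implement directly. The sketch below is our own illustration (not the GLoBES setup used later); it assumes \(Y_{e}=0.5\), \(V_{NC}\simeq V_{CC}/2\) (i.e. \(N_{n}\approx N_{e}\)), the standard km\(\to\)eV\({}^{-1}\) conversion, and the parameter values of Table 1.

```python
# Exact 3+1 oscillation probability from Eqs. (2)-(8): build U_4nu from the
# rotations of Eqs. (4)-(5), form H_4nu including matter terms, and compute
# P = |S|^2 with S = exp(-i H L). Illustrative sketch only.
import numpy as np
from scipy.linalg import expm

def rot(i, j, th, delta=0.0):
    """4x4 rotation in the (i,j) plane; complex phase attached to the sine."""
    R = np.eye(4, dtype=complex)
    c, s = np.cos(th), np.sin(th)
    R[i, i] = R[j, j] = c
    R[i, j] = s * np.exp(-1j * delta)
    R[j, i] = -s * np.exp(1j * delta)
    return R

def P_mue(E_GeV, L_km, d13, d14, rho=2.7):
    th12, th13, th23 = np.arcsin(np.sqrt([0.318, 0.022, 0.574]))
    th14 = th24 = np.arcsin(np.sqrt(0.02))
    th34 = 0.0
    dm21, dm31, dm41 = 7.5e-5, 2.55e-3, 1.0            # eV^2
    U = (rot(2, 3, th34) @ rot(1, 3, th24) @ rot(0, 3, th14, d14)
         @ rot(1, 2, th23) @ rot(0, 2, th13, d13) @ rot(0, 1, th12))
    E = E_GeV * 1e9                                    # eV
    H = U @ np.diag([0.0, dm21, dm31, dm41]) / (2 * E) @ U.conj().T
    Vcc = 7.63e-14 * 0.5 * rho                         # eV, Y_e = 0.5 assumed
    H = H + np.diag([Vcc, 0.0, 0.0, -Vcc / 2])         # V_NC ~ V_CC/2 assumed
    S = expm(-1j * H * L_km * 5.068e9)                 # 1 km ~ 5.068e9 eV^-1
    return abs(S[0, 1]) ** 2                           # nu_mu -> nu_e

print(P_mue(0.3, 150.0, d13=-np.pi / 2, d14=0.0))
```

At a fixed energy this returns the unaveraged probability; the averaged curves shown later follow from smearing the fast \(\Delta m^{2}_{41}\) oscillations over the energy resolution.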
### Appearance probability \(P_{\mu e}^{4\nu}\)
The \(3+1\) appearance probability for the \(\nu_{\mu}\rightarrow\nu_{e}\) transition has been derived in Ref. [63] and is related to the three-flavor transition probability \(P_{\mu e}^{3\nu}\) and to interference terms arising from the sterile neutrino mixing parameters as follows,
\[P_{\mu e}^{4\nu} \approx \bigg{(}1-\mathrm{s}_{14}^{2}-\mathrm{s}_{24}^{2}\bigg{)}P_{\mu e }^{3\nu} \tag{9}\] \[+\mathrm{s}_{14}\mathrm{s}_{24}\mathrm{s}_{13}\mathrm{s}_{23}\sin \Delta_{31}\sin(\Delta_{31}+\delta_{13}-\delta_{14}),\] \[-4\mathrm{s}_{14}\mathrm{s}_{24}\mathrm{c}_{23}\mathrm{s}_{12} \mathrm{c}_{12}\sin(\Delta_{21})\sin\delta_{14},\] \[+2\mathrm{s}_{14}^{2}\mathrm{s}_{24}^{2}\,.\]
with \(\Delta_{ij}=\frac{\Delta m_{ij}^{2}L}{4E}\). The expression for \(P_{\mu e}^{3\nu}\) is a sum of three contributions, i.e., atmospheric \(P^{\mathrm{ATM}}\), solar \(P^{\mathrm{SOL}}\), and their interference term \(P_{\mathrm{I}}^{\mathrm{INT}}\). The recent global-fit values of the neutrino oscillation parameters displayed in Table 1 give
\[s_{23} = 0.76\;,\;c_{23}\;=\;0.65\;,\] \[s_{12} = 0.56\;,\;c_{12}\;=\;0.83\;, \tag{10}\] \[s_{13} = 0.15\;,\;c_{13}\;=\;0.99\;.\]
The sine of the reactor mixing angle, \(s_{13}\), is treated as a small parameter in comparison to the other mixing angles and is taken to be of order \(O(\varepsilon)\approx 0.15\), while all other sines and cosines are \(O(1)\). The other small parameter is the ratio between the two mass-squared differences, \(|\alpha|\;=\;\frac{\Delta m_{21}^{2}}{|\Delta m_{31}^{2}|}\;\approx\;0.03\simeq\varepsilon^{2}\). Also, the sines of the sterile mixing angles, \(\sin\theta_{14}\) and \(\sin\theta_{24}\), are considered to be small and of order \(\varepsilon\). However, the remaining sterile mixing angle \(\sin\theta_{34}\) and the corresponding phase \(\delta_{34}\) are taken to be zero, as the vacuum probability expression is independent of these
contributions. Retaining terms up to third order in \(\varepsilon\), the appearance probability \(P_{\mu e}^{4\nu}\) can be simplified as a sum of three contributions,
\[P_{\mu e}^{4\nu}\simeq P^{\rm ATM}+P_{\rm I}^{\rm INT}+P_{\rm II}^{\rm INT}\,. \tag{11}\]
with the individual contributions given by
\[P^{\rm ATM} \simeq 4s_{23}^{2}s_{13}^{2}\sin^{2}\Delta\,, \tag{12}\] \[P_{\rm I}^{\rm INT} \simeq 8s_{13}s_{12}c_{12}s_{23}c_{23}(\alpha\Delta)\sin\Delta\cos( \Delta+\delta_{13})\,,\] (13) \[P_{\rm II}^{\rm INT} \simeq 4s_{14}s_{24}s_{13}s_{23}\sin\Delta\sin(\Delta+\delta_{13}-\delta _{14})\,, \tag{14}\]
where \(\Delta\equiv\Delta_{31}\).
The first term, coming from the atmospheric oscillation parameter \(\Delta_{31}\), is a positive quantity providing the leading-order contribution to the transition probability. The sub-leading contributions arise from the interference of different frequencies. The term \(P_{\rm I}^{\rm INT}\) corresponds to the interference of the solar and atmospheric frequencies, while the term \(P_{\rm II}^{\rm INT}\) is connected with the interference of the atmospheric and sterile frequencies. One can infer from the expressions for \(P_{\rm I}^{\rm INT}\) and \(P_{\rm II}^{\rm INT}\) that the interference induced by the presence of the sterile neutrino does not carry the suppression factor \((\alpha\Delta)\), to which the solar-atmospheric interference term is directly proportional. As a result, the numerical simulation carried out for the MOMENT experiment, working at the first oscillation maximum, performs well.
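To get a feel for the relative sizes of the three contributions at the MOMENT peak, one can evaluate Eqs. (12)–(14) directly; the short sketch below (ours) uses the numerical values quoted above.

```python
# Relative sizes of the three contributions, Eqs. (12)-(14), at the first
# oscillation maximum Delta ~ pi/2 (parameter values quoted in the text).
import numpy as np

s13, s12, c12 = 0.15, 0.56, 0.83
s23, c23 = 0.76, 0.65
s14 = s24 = np.sqrt(0.02)
alpha, Delta = 0.03, np.pi / 2

def terms(d13, d14):
    P_atm = 4 * s23**2 * s13**2 * np.sin(Delta) ** 2
    P_i = 8 * s13 * s12 * c12 * s23 * c23 * (alpha * Delta) \
          * np.sin(Delta) * np.cos(Delta + d13)
    P_ii = 4 * s14 * s24 * s13 * s23 * np.sin(Delta) \
           * np.sin(Delta + d13 - d14)
    return P_atm, P_i, P_ii

for d14 in (-np.pi / 2, 0.0, np.pi / 2):
    print(np.degrees(d14), [f"{x:.4f}" for x in terms(-np.pi / 2, d14)])
```

The sterile-induced interference term comes out comparable in size to the solar-atmospheric one, consistent with the discussion above.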
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Parameters** & **True Value** & **Marginalization Range** \\ \hline \hline \(\sin^{2}\theta_{12}\) & 0.318 & Not Marginalized \\ \hline \(\sin^{2}\theta_{13}\) & 0.022 & Not Marginalized \\ \hline \(\sin^{2}\theta_{23}\) & 0.574 & [0.38,0.64] \\ \hline \(\sin^{2}\theta_{14}\) & 0.02 & Not Marginalized \\ \hline \(\sin^{2}\theta_{24}\) & 0.02 & Not Marginalized \\ \hline \(\sin^{2}\theta_{34}\) & 0 & Not Marginalized \\ \hline \(\delta_{13}\) & [-180,180] & [-180,180] \\ \hline \(\delta_{14}\) & [-180,180] & Not Marginalized \\ \hline \(\delta_{34}\) & 0 & Not Marginalized \\ \hline \(\Delta m_{21}^{2}(eV^{2})\) & \(7.50\times 10^{-5}\) & Not Marginalized \\ \hline \(\Delta m_{31}^{2}(eV^{2})\) & \(2.55\times 10^{-3}\) & \([2.4,2.7]\times 10^{-3}\) \\ \hline \(\Delta m_{41}^{2}(eV^{2})\) & 1 & Not Marginalized \\ \hline \end{tabular}
\end{table}
Table 1: The various parameters and their true values used to simulate the data in GLoBES are mentioned in columns 1 and 2. The third column depicts the range over which \(\sin^{2}\theta_{23}\), \(\delta_{13}\), and \(\Delta m_{31}^{2}\) are varied while minimizing the \(\chi^{2}\) to obtain the final results.
## 3 Discussion at the probability level
The impact of matter effects on the appearance probabilities is marginal, as MOMENT is a short-baseline experiment with a baseline of 150 km. Thus, the vacuum probability expressions can be used effectively, neglecting the suppressed contributions from the MSW effect. For illustration, let us consider the transition probability in the presence of matter effects up to leading order as [63; 64; 65; 66],
\[P_{m}^{\rm ATM}\simeq(1+2v)P^{\rm ATM}\,, \tag{1}\]
with the correction factor,
\[v=\frac{V}{k}\equiv\frac{2VE}{\Delta m_{31}^{2}}\,,\qquad\text{with}\quad V=\sqrt{2}G_{F}N_{e} \tag{2}\]
For the MOMENT experiment, with a baseline of 150 km and a first-maximum peak energy \(E\approx 0.3\) GeV, the correction factor is estimated to be \(v=0.048\). For the NO\(\nu\)A experiment, taking the benchmark peak energy \(E=2\) GeV, the correction factor is \(v\sim 0.17\), which is why matter effects are important for long-baseline studies.
We look at the variation of the appearance probability channels (\(\nu_{\mu}\to\nu_{e}\)) and (\(\overline{\nu}_{\mu}\to\overline{\nu}_{e}\)) as functions of energy for fixed values of the fundamental CP phase. The value of \(\delta_{13}\) is kept fixed at \(0^{\circ}\) and \(-90^{\circ}\), while the Dirac phase \(\delta_{14}\) is allowed to vary over the different values mentioned in the legends of Fig. 1. The values of the remaining oscillation parameters are kept fixed at the benchmark values mentioned in Table 1. The simulations are performed with a constant matter density of 2.7 g/cc, following the Preliminary Reference Earth Model (PREM) [67]. We take the value of the sterile mass-squared difference to be \(\Delta m_{41}^{2}=1\,eV^{2}\); as a result, the oscillations induced by the sterile state are averaged out by the finite energy resolution of the detector, and we look at the averaged-out behavior of the appearance probabilities. One can also recall that the anti-neutrino appearance probability can be obtained from that of the neutrinos by changing the sign of the matter potential and of the fundamental and sterile CP phases, i.e.
\[P_{\overline{\nu}_{\alpha}\to\overline{\nu}_{\beta}}(V,\delta_{13},\delta_{14 })=P_{\nu_{\alpha}\to\nu_{\beta}}(-V,-\delta_{13},-\delta_{14}) \tag{3}\]
As evident from Eq. (1), the leading-order contribution to the appearance probability is larger for neutrinos, with \(V>0\), than for anti-neutrinos, with \(V<0\). The solid black line refers to the 3-flavor appearance probability, while the colored lines correspond to the different \(3+1\) flavor mixing scenarios. The blue line indicates the contribution arising from \(\delta_{14}=-90^{\circ}\), orange from \(\delta_{14}=0^{\circ}\), green from \(\delta_{14}=90^{\circ}\), and magenta from \(\delta_{14}=180^{\circ}\). The left column of the figure represents the appearance probability for neutrinos, while the right column shows the behavior of the anti-neutrino appearance probability. The curves are peaked around the first oscillation maximum of the MOMENT experiment, i.e. \(\approx 0.3~{}GeV\). One can notice that the amplitude and shape of the probability are strongly dependent on the value of the sterile CP phase \(\delta_{14}\). There is a decrease in the amplitude of the anti-neutrino oscillation probability, as expected in accordance with Eq. (1), which will be further understood from the event spectra in Section 5. Moreover, the plot also shows the mutual swapping of the curves as one passes from the neutrino to the anti-neutrino probability.
## 4 Biprobability analysis
The bi-probability curves, as the name suggests, are parametric curves that allow us to trace the bi-probability space spanned jointly by the neutrino and anti-neutrino oscillation probabilities. The idea of bi-probability curves in the 3-flavor scheme was first introduced in Ref. [68]. The oscillation probabilities in the 3-flavor scheme are given by:
\[P=P_{0}+A(\cos\Delta\cos\delta_{13}-\sin\Delta\sin\delta_{13}) \tag{1}\]
\[\overline{P}=\overline{P_{0}}+\overline{A}(\cos\Delta\cos\delta_{13}+\sin\Delta\sin\delta_{13}) \tag{2}\]
where \(A=8s_{13}s_{12}c_{12}s_{23}c_{23}(\alpha\Delta_{31})\sin\Delta_{31}\) and \(P_{0}\approx(1+2v)P^{ATM}\).
Thus, the bi-probability curves display the effects coming from the fundamental CP phase (the \(\sin\delta_{13}\) term, which is CP violating, and the \(\cos\delta_{13}\) term, which is CP conserving) and the matter effects arising from the presence of the term A, all in a single diagram. The bi-probability plane therefore provides a bird's-eye view that is important for mass hierarchy and CP violation studies. Since
Figure 1: Transition probabilities for neutrinos and anti-neutrinos after averaging over the fast oscillations are plotted against energy (varying from 0.1 to 0.8 GeV) with the matter density kept fixed at 2.7 g/cc. The oscillation probability for 3 flavors is shown by the black curve, while the colored curves represent the oscillation probability in the 3+1 mixing scenario for four different values of \(\delta_{14}\), i.e. \(-90^{\circ}\), \(0^{\circ}\), \(90^{\circ}\), and \(180^{\circ}\). The left column corresponds to the neutrino transition probabilities for two different values of \(\delta_{13}\), whereas the right one is for the anti-neutrino transition probabilities.
MOMENT is a short-baseline facility, it is not worthwhile performing the analysis of mass hierarchy sensitivities. In this work, we fix ourselves to the normal hierarchy and look at the CP trajectory diagrams for the neutrino and anti-neutrino oscillation probabilities. The fundamental CP phase \(\delta_{13}\) is varied in the range from \(-\pi\) to \(\pi\) and the bi-probability space is spanned. As the probabilities involve the periodic sine and cosine functions, the spanned space must form a closed trajectory as \(\delta_{13}\) is varied. We generalize the bi-probability representation to the 3+1 flavor scheme, where the value of \(\delta_{14}\) is kept fixed at different values while \(\delta_{13}\) is varied over the entire range.
Under the adiabatic approximation, Eqs. (1) and (2) can be re-expressed as:
\[P=l\cos\delta_{13}+m\sin\delta_{13}+n \tag{21}\]
\[\overline{P}=\overline{l}\cos\delta_{13}-\overline{m}\sin\delta_{13}+\overline {n} \tag{22}\]
A deeper understanding of the CP trajectory curves in the 3-flavor analysis can be obtained by eliminating \(\delta_{13}\) from the above equations; under the assumption \(A=\overline{A}\), which holds true in vacuum, we obtain the equation obeyed by the neutrino and anti-neutrino probabilities:
\[\left(\frac{P+\overline{P}-2n}{2l}\right)^{2}+\left(\frac{P-\overline{P}}{2m }\right)^{2}=1 \tag{23}\]
This is the equation of an ellipse, where the lengths of the major and minor axes are measures of the coefficients of \(\sin\delta_{13}\) and \(\cos\delta_{13}\), respectively, in the oscillation probability. Further, the minor or major axis is always inclined at an angle of \(45^{\circ}\), as visible in the bi-probability plots.
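A quick way to see the closed trajectory and the \(45^{\circ}\) inclination is to trace Eqs. (21)–(22) parametrically. The following sketch (ours, with illustrative amplitudes and \(\Delta=1.2\) chosen so the ellipse is non-degenerate) extracts the inclination from the covariance of the traced points.

```python
# Parametric trace (illustrative) of the 3-flavor CP trajectory, Eqs. (21)-(22)
# with l = A cos(Delta), m = -A sin(Delta), n = P0: in vacuum the traced curve
# closes into an ellipse whose principal axes are inclined at 45 degrees.
import numpy as np

P0, A, Delta = 0.052, 0.013, 1.2          # illustrative magnitudes
d13 = np.linspace(-np.pi, np.pi, 400)
P = P0 + A * (np.cos(Delta) * np.cos(d13) - np.sin(Delta) * np.sin(d13))
Pbar = P0 + A * (np.cos(Delta) * np.cos(d13) + np.sin(Delta) * np.sin(d13))

evals, evecs = np.linalg.eigh(np.cov(P, Pbar))
angle = np.degrees(np.arctan2(evecs[1, -1], evecs[0, -1]))
angle = (angle + 90.0) % 180.0 - 90.0     # fold into (-90, 90]
print(f"inclination of the major axis: {angle:.1f} degrees")   # +/- 45
```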
The idea of bi-probability plots in the 3-flavor scheme can be extended to the \(3+1\) flavor analysis, where we have additional sources of CP violation arising from the presence of the sterile neutrino, as explored in [69] and the references therein. The analytic expression for the neutrino transition probability in the case of \(3+1\) neutrino oscillations is recast into a compact form as,
\[P(\nu)\equiv P=P_{0}+A\cos\left(\Delta+\delta_{13}\right)+B\sin\left(\Delta- \delta_{14}+\delta_{13}\right) \tag{24}\]
where the first term \(P_{0}=P^{\rm ATM}\!\simeq 4s_{23}^{2}s_{13}^{2}\sin^{2}\Delta\) is independent of the phases, while the phase-independent factors contained in the second and third terms are \(A=8s_{13}s_{12}c_{12}s_{23}c_{23}(\alpha\Delta)\sin\Delta\) and \(B=4s_{14}s_{24}s_{13}s_{23}\sin\Delta\). After a few simplifications using trigonometric relations, the transition probability given in Eq. (24) modifies to
\[P=P_{0}+A^{\prime}\cos\delta_{13}+B^{\prime}\sin\delta_{13} \tag{25}\]
Similarly, the simplified antineutrino transition probability is given by
\[\overline{P}=\overline{P}_{0}+\overline{A}^{\prime}\cos\delta_{13}-\overline{ B}^{\prime}\sin\delta_{13} \tag{26}\]
where the coefficients \(A^{\prime}\), \(B^{\prime}\),\(\overline{A}^{\prime}\) and \(\overline{B}^{\prime}\) are defined as follows
\[A^{\prime}=A\cos\Delta+B\sin\left(\Delta-\delta_{14}\right) \tag{27}\]
\[B^{\prime}=-A\sin\Delta+B\cos\left(\Delta-\delta_{14}\right) \tag{29}\]
\[\overline{A}^{\prime}=\overline{A}\cos\Delta+\overline{B}\sin\left(\Delta+ \delta_{14}\right) \tag{30}\]
\[\overline{B}^{\prime}=-\overline{A}\sin\Delta+\overline{B}\cos\left(\Delta+ \delta_{14}\right) \tag{31}\]
The detailed derivation of the CP trajectory, obtained by eliminating \(\delta_{13}\), is given in Appendix A. The angle of inclination is obtained by comparing the trajectory equation with the general equation of an ellipse and reads:
\[\tan 2\theta=\frac{(B^{2}-A^{2})\cos 2\Delta-2AB\sin 2\Delta\cos\delta_{14}}{2AB \sin\delta_{14}} \tag{32}\]
It is clear from the expression that the orientation of the ellipse is strongly dependent on the value of sterile CP phase \(\delta_{14}\) and the parameter \(\Delta\). The following conclusions can be marked depending on the values of these parameters:
1. Firstly, when the factor \(\sin\delta_{14}\) vanishes (i.e. \(\delta_{14}=n\pi\)), the inclination becomes \(\theta=\pi/4\). The orientation of the elliptical trajectory can be determined by looking at the sign of the numerator: if the numerator is a positive definite quantity the orientation is counter-clockwise, while it is clockwise for a negative numerator.
2. Also, if \(\Delta\approx n\pi/2\), which holds for the MOMENT experiment working at the first oscillation maximum, the inclination angle simplifies as \[\tan 2\theta\approx\frac{0.366}{\sin\delta_{14}} \tag{33}\] There are again two possible inclinations for \(\delta_{14}\rightarrow(2n-1)\pi/2\), depending upon the value of n (where \(n=1,2,3...\)). For the positive sign in the denominator, the minor axis is inclined by \(\approx 10^{\circ}\), whereas for the negative sign the major axis is inclined by \(\approx-10^{\circ}\), as illustrated in the numerical sketch below.
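As a quick check of Eq. (32), our own sketch below uses the illustrative amplitudes \(A\) and \(B\) from the earlier example, for which \((A^{2}-B^{2})/2AB\approx 0.36\), close to the 0.366 quoted above.

```python
# Evaluating the inclination formula, Eq. (32), at Delta = pi/2 for
# delta_14 = -90 and +90 degrees (illustrative amplitudes).
import numpy as np

A, B, Delta = 0.013, 0.0091, np.pi / 2
for d14 in np.radians([-90.0, 90.0]):
    tan2t = ((B**2 - A**2) * np.cos(2 * Delta)
             - 2 * A * B * np.sin(2 * Delta) * np.cos(d14)) \
            / (2 * A * B * np.sin(d14))
    print(f"delta_14 = {np.degrees(d14):+.0f} deg -> "
          f"theta = {np.degrees(np.arctan(tan2t)) / 2:+.1f} deg")
```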
The discussion can be easily confirmed from the bi-probability plots in Fig. 2. We show the dependence of the ellipse inclination for four different values of \(\delta_{14}\). The parameter \(\delta_{13}\) is varied over the complete allowed range \([-\pi,\pi]\), while \(\delta_{14}\) is kept fixed for each particular plot, as mentioned in the legend. The solid black ellipse is drawn for the variation of the neutrino and anti-neutrino probabilities in the 3-flavor scheme. The centers of the ellipses in the \(3+1\) flavor scheme almost coincide with those in the 3-flavor scheme; the slight variation arises from the negligible matter effects. We have marked special symbols highlighting four different fundamental CP phase values, i.e. \(\delta_{13}=-180^{\circ},-90^{\circ},0^{\circ},90^{\circ}\). For \(\delta_{14}=-180^{\circ}\) and \(\delta_{14}=0^{\circ}\), the inclination is \(45^{\circ}\), as seen from the orange and magenta curves, and the swapping among the \(\delta_{13}\) values reflects the sense in which the elliptical trajectory is traced, as expected. The blue and green curves, plotted for \(\delta_{14}=-90^{\circ}\) and \(90^{\circ}\), are inclined by \(-10^{\circ}\) and \(10^{\circ}\), respectively.
## 5 Experimental Details and Event Spectra
In this section, we shed light on the specifications of the experimental setup MOMENT. As the name suggests, MOMENT is a muon-decay medium-baseline neutrino beam facility that uses a continuous proton beam with a power of 15 MW. Since it is very difficult for any solid target to withstand such a high power, mercury jets are adopted as the target. Muons are produced in the pion decay channel, and neutrinos in turn are produced from the decaying muons. The \(\mu^{+}\) decays via the channel \(\mu^{+}\rightarrow\bar{\nu}_{\mu}\nu_{e}e^{+}\), while \(\mu^{-}\) decays as \(\mu^{-}\rightarrow\nu_{\mu}\bar{\nu}_{e}e^{-}\). Thus, we have four neutrino species, i.e. \(\nu_{e}\), \(\nu_{\mu}\), \(\bar{\nu}_{e}\) and \(\bar{\nu}_{\mu}\). Hence, the MOMENT setup would allow the study of eight oscillation channels (i.e. \(\nu_{e}\rightarrow\nu_{e}\), \(\nu_{e}\rightarrow\nu_{\mu}\), \(\nu_{\mu}\rightarrow\nu_{e}\), \(\nu_{\mu}\rightarrow\nu_{\mu}\), as well as their corresponding CP-conjugate partners). To provide sensitivity to CP violation, the distinction between neutrinos and anti-neutrinos is achieved by a 500 kton fiducial Gd-doped water Cherenkov detector as the baseline detector. Neutron capture on Gd provides a way to distinguish neutrinos from anti-neutrinos via the known interactions of neutrinos with nucleons, \(\nu_{e}+n\to p+e^{-}\), \(\nu_{\mu}+n\to p+\mu^{-}\), \(\bar{\nu}_{e}+p\to n+e^{+}\), \(\bar{\nu}_{\mu}+p\to n+\mu^{+}\). The number of protons on target (POT) with a proton beam power
Figure 2: Bi-probability plots under the normal hierarchy are shown for both the 3 flavor and 3+1 flavor mixing schemes. The CP trajectory diagram for the 3-flavor case is drawn in black, while the orange, magenta, blue and green closed curves correspond to fixed values of the \(\delta_{14}\) phase, as mentioned in each legend. The neutrino energy is kept fixed at its first oscillation peak value of 0.3 GeV. The CP phase \(\delta_{13}\) is varied in the range \([-\pi,\pi]\). Four different values of the \(\delta_{13}\) phase (i.e. \(0^{\circ}\), \(180^{\circ}\), \(-90^{\circ}\), and \(90^{\circ}\)) are marked by different symbols for comparing the orientation of the ellipses.
of 15 MW and with a proton energy of 1.5 GeV can be calculated as:
\[x=\frac{W\times y\times t\times 10^{16}}{1.6\times E_{p}} \tag{10}\]
where \(W\) is the beam power and \(E_{p}\) is the proton energy. The operational time in one year is represented by \(t\) and is roughly \(\approx 10^{7}\) seconds, while \(y\) is the number of years protons are deposited on the target. For the MOMENT experiment, the POT will be \(6.2\times 10^{22}\). The fluxes for neutrinos and anti-neutrinos arising from the 1.5 GeV protons peak around 0.30 GeV, while the total energy is varied in the range 0.10–0.80 GeV. The simulations are performed assuming equal run times for the neutrino and anti-neutrino modes. We use an uncertainty of 2.5% on the signal and 5% on the background for both the neutrino and anti-neutrino modes.
From the existing literature, we know that the number of events in the i-th energy bin is given by
\[N_{i}=\frac{Tn_{n}\epsilon}{4\pi L^{2}}\int_{0}^{E_{max}}dE\int_{E_{A_{i}}^{min }}^{E_{A_{i}}^{max}}dE_{A}\phi(E)\sigma_{\nu_{e}}(E)R(E,E_{A})P_{\mu e}(E) \tag{11}\]
where T is the total running time, \(n_{n}\) is the number of target nucleons in the detector, \(\epsilon\) is the detector efficiency, \(\phi(E)\) is the neutrino flux, \(\sigma_{\nu_{e}}\) is the neutrino interaction cross-section, and \(R(E,E_{A})\) is the Gaussian energy resolution function of the detector. The quantities
\begin{table}
\begin{tabular}{|c|c|} \hline
**Characteristics** & **MOMENT** \\ \hline \hline Beam power & 15MW \\ Fiducial Detector mass & 500 kton Gd doped Water Cherenkov \\ Baseline & 150 km \\ Flux peaks at & 0.3 GeV \\ First Oscillation maxima & \(\approx\) 0.3 GeV \\ \hline \end{tabular}
\end{table}
Table 2: The experimental specifications of MOMENT experiment.
Figure 3: The expected number of signal events is plotted against the reconstructed neutrino energy. The black curve refers to the 3-flavor case, while the colored histograms are for the (3+1) scheme with different values of \(\delta_{14}\), as mentioned in the legend. The left panel corresponds to \(\nu_{e}\) appearance events, while the right one is for \(\overline{\nu}_{e}\) appearance events.
E and \(E_{A}\) are the true and reconstructed (anti-)neutrino energies respectively and L is the baseline.
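A schematic version of this binned-event computation (our own toy sketch of Eq. (11), with made-up flux, cross-section, and resolution shapes — not the GLoBES implementation) reads:

```python
# Toy binned-event computation in the spirit of Eq. (11): integrate
# flux x cross-section x probability against a Gaussian response, bin by bin.
import numpy as np
from scipy.special import erf

E = np.linspace(0.05, 0.8, 400)                      # true energy grid (GeV)
dE = E[1] - E[0]
flux = np.exp(-0.5 * ((E - 0.3) / 0.1) ** 2)         # toy flux peaked at 0.3 GeV
xsec = E                                             # toy cross section ~ E
prob = 0.05 * np.sin(1.27 * 2.55e-3 * 150 / E) ** 2  # toy appearance probability
sigmaE = 0.08 * np.sqrt(E)                           # toy energy resolution

bins = np.linspace(0.1, 0.8, 15)                     # reconstructed-energy bins
norm = 1e3                                           # absorbs T n_n eps / 4 pi L^2
events = []
for lo, hi in zip(bins[:-1], bins[1:]):
    # Gaussian response integrated over the reconstructed bin [lo, hi):
    resp = 0.5 * (erf((hi - E) / (np.sqrt(2) * sigmaE))
                  - erf((lo - E) / (np.sqrt(2) * sigmaE)))
    events.append(norm * np.sum(flux * xsec * prob * resp) * dE)
print(np.round(events, 1))
```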
The numerical results have been obtained using the GLoBES software [70; 71] together with the additional toolkit required to incorporate the new physics arising from the presence of a sterile neutrino. We plot the number of events against the reconstructed energy. The flux is maximal around 0.3 GeV, so the largest number of events falls in that energy bin. In the plot, the thick black lines depict the 3-flavor scenario, while the colored histograms are drawn in the 3+1 scenario for different values of \(\delta_{14}\). The number of \(\nu_{e}\) appearance events is almost double the number of \(\overline{\nu}_{e}\) events, since the neutrino interaction cross section is very different from the anti-neutrino one.
## 6 CP violation sensitivities in the presence of a sterile neutrino
### Chi-square analysis
The sensitivity of an experiment to the precise values of the oscillation parameters is determined by performing a \(\chi^{2}\) analysis. It is performed by comparing the simulated true event rates, obtained from the present best-fit data, with the events generated by the test hypothesis. The theoretical uncertainties and the systematic errors at the experimental level are incorporated using the method of pulls, with \(\chi^{2}\) calculated as [72; 73],
\[\chi^{2}(p_{\rm true},p_{\rm test})=\min_{\xi}\big{[}\sum_{i\,\in\,{\rm bins}} \frac{(N_{i}^{\rm true}-N_{i}^{\rm test}(\xi))^{2}}{N_{i}^{\rm true}}+\frac{ \xi^{2}}{\sigma_{\xi}^{2}}\big{]} \tag{10}\]
where the nuisance parameter is denoted by \(\xi\) and the corresponding systematic error by \(\sigma_{\xi}\). The terms involving the nuisance parameters are called pull terms. In order to counter the effect of systematic errors, the penalty term \(\xi^{2}\) is added. The nuisance parameters depend on the fiducial mass of the detector used in a particular experiment, as well as on other experimental properties like the flux normalization and cross sections. The minimization of \(\chi^{2}\) is obtained by marginalizing over the oscillation parameter space. Therefore, one adds further penalty terms, called priors, to \(\chi^{2}\). The mathematical expression for the prior is given by
\[\chi^{2}_{\rm prior}=\bigg{(}\frac{N^{\rm true}-N^{\rm test}}{\sigma^{\rm true }}\bigg{)}^{2} \tag{11}\]
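A minimal numerical version of the pull-term \(\chi^{2}\) defined above (our own sketch, with a single overall signal-normalization nuisance, the 2.5% systematic quoted earlier, and toy event vectors) is:

```python
# Pull-method chi^2 with one nuisance parameter xi (signal normalization),
# minimized numerically over xi. Toy event vectors, for illustration only.
import numpy as np
from scipy.optimize import minimize_scalar

def chi2_pull(N_true, N_test, sigma_xi=0.025):
    def f(xi):
        pred = (1.0 + xi) * N_test
        return np.sum((N_true - pred) ** 2 / N_true) + (xi / sigma_xi) ** 2
    return minimize_scalar(f, bounds=(-0.2, 0.2), method="bounded").fun

N_true = np.array([120.0, 250.0, 180.0, 60.0])
N_test = np.array([110.0, 240.0, 185.0, 70.0])
print(chi2_pull(N_true, N_test))
```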
In our analysis, we look at the sensitivity of the MOMENT experiment in determining the precise value of the CP phase in the presence of a sterile neutrino. In the three-flavor scenario, as seen from Eq. (13), CP violation is induced by the presence of the \(\sin\delta_{13}\) term. The \(\chi^{2}\) analysis is carried out in terms of the statistical significance at which we can reject the no-CP-violation test hypothesis,
\[\chi^{2}=\frac{(N(\delta^{\rm tr}_{\rm CP})-N(\delta^{\rm test}_{\rm CP}=0,1 80))^{2}}{N(\delta^{\rm tr}_{\rm CP})} \tag{12}\]
We have fixed the mixing angles \(\theta_{12}\), \(\theta_{13}\) and \(\theta_{14}\) to their best-fit values, as mentioned in Table 1, in both the true and test spectra. We marginalize over the parameter \(\theta_{23}\) and the mass-squared difference \(\Delta m^{2}_{31}\) in two different ways, as follows (a sketch of this procedure is given after the list):
1. First Case: \(\Delta m^{2}_{31}\) is kept fixed for both the 3 flavor and 3+1 flavor schemes, while \(\theta_{23}\) is marginalized over the range mentioned in Table 1.
2. Second Case: Both \(\Delta m^{2}_{31}\) and \(\theta_{23}\) are marginalized in the 3 and 3+1 flavor schemes. Also, the projected information on \(\Delta m^{2}_{31}\) is added in the form of a prior as \[\chi^{2}_{\rm prior}=\left(\frac{\Delta m^{2}_{31}({\rm true})-\Delta m^{2}_{31}({\rm test})}{\sigma(\Delta m^{2}_{31})}\right)^{2} \tag{13}\] where \(\sigma(\Delta m^{2}_{31})\) is the \(1\sigma\) error on \(\Delta m^{2}_{31}\).
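A sketch of this marginalization (ours, with a toy event model standing in for the full GLoBES prediction and an assumed prior width \(\sigma(\Delta m^{2}_{31})\)) illustrates the second case:

```python
# Marginalizing the test chi^2 over theta_23 and Dm31^2 on a grid, with a
# Gaussian prior on Dm31^2 (toy event model, illustrative only).
import numpy as np

def rates(s23sq, dm31):
    # toy 4-bin stand-in for the full event prediction
    return 1e3 * s23sq * (dm31 / 2.55e-3) * np.array([0.1, 0.25, 0.2, 0.05])

N_true = rates(0.574, 2.55e-3)
sigma_dm = 0.05e-3                       # assumed 1-sigma prior width
best = np.inf
for s in np.linspace(0.38, 0.64, 53):    # theta_23 range from Table 1
    for dm in np.linspace(2.4e-3, 2.7e-3, 31):
        N_test = rates(s, dm)
        chi2 = np.sum((N_true - N_test) ** 2 / N_true) \
               + ((2.55e-3 - dm) / sigma_dm) ** 2
        best = min(best, chi2)
print(best)   # ~0 at the true point, as expected
```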
The numerical results are displayed in Fig. 4. The left panel of the figure corresponds to the fixed value of \(\Delta m^{2}_{31}\) with varying \(\theta_{23}\), while the right panel corresponds to the second case of marginalizing over both \(\Delta m^{2}_{31}\) and \(\theta_{23}\). The solid black line represents the estimated value of \(\Delta\chi^{2}\) for the three-flavor scenario, while all other colored lines depict the analysis for the 3+1 flavor scenario. While performing the \(\chi^{2}\) analysis for the discovery potential of the CP phases, we have considered equal neutrino and anti-neutrino modes with a run time of (5+5) years. In each case we have considered four different values of the true \(\delta_{14}\) phase, as in the probability analysis, while its test value is marginalized. The figure clearly depicts that the CP sensitivities decrease in the presence of the sterile neutrino, primarily because of the degeneracies between the fundamental and active-sterile CP phases.
### Reconstructing the CP phases
In the last subsection, we looked at the CP-violation discovery potential of the MOMENT experiment by performing the \(\chi^{2}\) analysis. But there is another way to extract complementary
Figure 4: The figure indicates the potential of the MOMENT experiment for the discovery of CP violation induced by the fundamental phase \(\delta_{13}\). The black curve shows the behavior under the 3-flavor scheme, while the colored curves in each panel correspond to different values of \(\delta_{14}\). In the left figure, \(\Delta m^{2}_{31}\) is kept fixed for both 3 and 3+1 flavor mixing while we marginalize over the \(\theta_{23}\) value, whereas in the right figure we marginalize over both \(\Delta m^{2}_{31}\) and \(\theta_{23}\).
information by reconstructing the values of the two CP phases \(\delta_{13}\) and \(\delta_{14}\), independent of the amount of CP violation. We look at the contour plots obtained by reconstructing the CP phases in the \(\delta_{13}-\delta_{14}\) plane. The test values of both CP phases are varied over \((-\pi,\pi)\) and contours are shown at the one-, two-, and three-sigma levels simultaneously. The four plots represent the regions reconstructed for the four benchmark values considered in Fig. 5. The solid black dots in the upper row represent the CP-conserving scenarios with values (0,0) and \((\pi,\pi)\), while the bottom panel shows the CP-violating picture with \((-\pi/2,-\pi/2)\) and \((\pi/2,\pi/2)\). Our results predict remarkable sensitivity for determining the precise value of the CP phase \(\delta_{13}\) in the presence of a sterile neutrino.
Figure 5: Contour plots in the plane of \(\delta_{13}\) (test) and \(\delta_{14}\) (test) for four different values of the CP phases \(\delta_{13}\) and \(\delta_{14}\). The discovery potential of \(\delta_{13}\) in the \(\delta_{13}\) vs \(\delta_{14}\) plane is presented by blue, red, and green bands at the 1\(\sigma\), 2\(\sigma\), and 3\(\sigma\) confidence levels, respectively. The left (right) panel of the first row corresponds to (0,0) (\((\pi,\pi)\)), while the left (right) panel of the second row corresponds to \((-\pi/2,-\pi/2)\) (\((\pi/2,\pi/2)\)).
## 7 Conclusions and Outlook
In this work, we have addressed the potential of the MOMENT experiment, with emphasis on the role of the presence of a sterile neutrino on the standard three-flavor oscillation parameters. We have shown the transition probabilities for both neutrinos and anti-neutrinos in the 3+1 flavor scheme and looked at the space spanned by the CP trajectory curves. The performance of MOMENT in understanding the CP violation sensitivities induced by the fundamental CP phase \(\delta_{13}\) and by the new CP phase arising from active-sterile mixing has been explored. We found that the loss of CP sensitivity depends on the value of the \(\delta_{14}\) phase. The discovery potential of CP violation in the 3-flavor scheme is quite significant, at the \(7\,\sigma\) level, though it gets reduced in the presence of the unknown CP phase \(\delta_{14}\). The reduction might have arisen from the degeneracies between the two CP phases. We have also assessed the capability of the MOMENT experiment in reconstructing the true values of these CP phases.
###### Acknowledgements.
Kiran Sharma would like to thank the Ministry of Education for the financial support for carrying out this research work. KS is very thankful to Dr. Sabya Sachi Chatterjee for the fruitful discussions carried out from time to time for the betterment of this work.
## Appendix A Detailed Description of Biprobability analysis.
The analytic expression for the neutrino transition probability in the case of \(3+1\) neutrino oscillations is recast into a compact form as,
\[P(\nu)\equiv P=P_{0}+A\cos\left(\Delta+\delta_{13}\right)+B\sin\left(\Delta-\delta_{14}+\delta_{13}\right) \tag{A1}\]
where the first term \(P_{0}=P^{\rm ATM}\!\simeq 4s_{23}^{2}s_{13}^{2}\sin^{2}\Delta\) is independent of the phases, while the phase-independent factors contained in the second and third terms are \(A=8s_{13}s_{12}c_{12}s_{23}c_{23}(\alpha\Delta)\sin\Delta\) and \(B=4s_{14}s_{24}s_{13}s_{23}\sin\Delta\). After a few simplifications using trigonometric relations, the transition probability given in Eq. (A1) modifies to
\[P=P_{0}+A^{\prime}\cos\delta_{13}+B^{\prime}\sin\delta_{13} \tag{A2}\]
Similarly, the simplified antineutrino transition probability is given by
\[\overline{P}=\overline{P}_{0}+\overline{A}^{\prime}\cos\delta_{13}-\overline{B}^{\prime}\sin\delta_{13} \tag{A3}\]
where the coefficients \(A^{\prime}\), \(B^{\prime}\),\(\overline{A}^{\prime}\) and \(\overline{B}^{\prime}\) are defined as follows
\[A^{\prime}=A\cos\Delta+B\sin\left(\Delta-\delta_{14}\right) \tag{A4}\]

\[B^{\prime}=-A\sin\Delta+B\cos\left(\Delta-\delta_{14}\right) \tag{A5}\]

\[\overline{A}^{\prime}=\overline{A}\cos\Delta+\overline{B}\sin\left(\Delta+\delta_{14}\right) \tag{A6}\]

\[\overline{B}^{\prime}=-\overline{A}\sin\Delta+\overline{B}\cos\left(\Delta+\delta_{14}\right) \tag{A7}\]
The factors \(\overline{A}^{\prime}\) and \(\overline{B}^{\prime}\) are obtained from the expressions for \(A^{\prime}\) and \(B^{\prime}\) by replacing \(\delta_{14}\rightarrow-\delta_{14}\). Eliminating \(\delta_{13}\) from the modified neutrino and antineutrino transition probabilities, we obtain
\[\frac{1}{\left[\frac{A^{\prime}}{B^{\prime}}+\frac{\overline{A}^{\prime}}{\overline{B}^{\prime}}\right]^{2}}\bigg{(}\frac{P-P_{0}}{B^{\prime}}+\frac{\overline{P}-\overline{P}_{0}}{\overline{B}^{\prime}}\bigg{)}^{2}+\frac{1}{\left[\frac{B^{\prime}}{A^{\prime}}+\frac{\overline{B}^{\prime}}{\overline{A}^{\prime}}\right]^{2}}\bigg{(}\frac{P-P_{0}}{A^{\prime}}-\frac{\overline{P}-\overline{P}_{0}}{\overline{A}^{\prime}}\bigg{)}^{2}=1 \tag{A8}\]
Expanding and collecting terms, this becomes

\[\begin{split}&\bigg{[}\frac{1}{B^{\prime 2}D_{1}}+\frac{1}{A^{\prime 2}D_{2}}\bigg{]}P^{2}+\bigg{[}\frac{1}{\overline{B}^{\prime 2}D_{1}}+\frac{1}{\overline{A}^{\prime 2}D_{2}}\bigg{]}\overline{P}^{2}+\bigg{[}\frac{2}{B^{\prime}\overline{B}^{\prime}D_{1}}-\frac{2}{A^{\prime}\overline{A}^{\prime}D_{2}}\bigg{]}P\overline{P}\\ &-2\bigg{[}\frac{P_{0}}{B^{\prime 2}D_{1}}+\frac{\overline{P}_{0}}{B^{\prime}\overline{B}^{\prime}D_{1}}+\frac{P_{0}}{A^{\prime 2}D_{2}}-\frac{\overline{P}_{0}}{A^{\prime}\overline{A}^{\prime}D_{2}}\bigg{]}P-2\bigg{[}\frac{\overline{P}_{0}}{\overline{B}^{\prime 2}D_{1}}+\frac{P_{0}}{B^{\prime}\overline{B}^{\prime}D_{1}}+\frac{\overline{P}_{0}}{\overline{A}^{\prime 2}D_{2}}-\frac{P_{0}}{A^{\prime}\overline{A}^{\prime}D_{2}}\bigg{]}\overline{P}\\ &+\frac{1}{D_{1}}\bigg{(}\frac{P_{0}}{B^{\prime}}+\frac{\overline{P}_{0}}{\overline{B}^{\prime}}\bigg{)}^{2}+\frac{1}{D_{2}}\bigg{(}\frac{P_{0}}{A^{\prime}}-\frac{\overline{P}_{0}}{\overline{A}^{\prime}}\bigg{)}^{2}-1=0\,,\end{split} \tag{A9}\]

where we abbreviate \(D_{1}=\big{(}\frac{A^{\prime}}{B^{\prime}}+\frac{\overline{A}^{\prime}}{\overline{B}^{\prime}}\big{)}^{2}\) and \(D_{2}=\big{(}\frac{B^{\prime}}{A^{\prime}}+\frac{\overline{B}^{\prime}}{\overline{A}^{\prime}}\big{)}^{2}\).
Comparing Eq. (A9) with the general quadratic curve representing the equation of an ellipse,

\[ax^{2}+bxy+cy^{2}+dx+ey+f=0\,, \tag{A10}\]

where the counterclockwise angle of rotation from the x-axis to the major axis of the ellipse is defined by \(\tan 2\theta=\frac{b}{a-c}\), we can understand the inclination of the ellipses in the bi-probability plots. The relevant coefficients read
\[a=\frac{1}{{B^{\prime}}^{2}\bigg{(}\frac{A^{\prime}}{\overline{B}^{\prime}}+ \frac{\overline{A}^{\prime}}{\overline{B}^{\prime}}\bigg{)}^{2}}+\frac{1}{{A^{ \prime}}^{2}\bigg{(}\frac{B^{\prime}}{\overline{A}^{\prime}}+\frac{\overline{B}^ {\prime}}{\overline{A}^{\prime}}\bigg{)}^{2}} \tag{105}\]
\[b=\frac{2}{B^{\prime}\overline{B}^{\prime}\left(\frac{A^{\prime}}{B^{\prime}}+ \frac{\overline{A}^{\prime}}{\overline{B}^{\prime}}\right)^{2}}-\frac{2}{A^{ \prime}\overline{A}^{\prime}\left(\frac{B^{\prime}}{A^{\prime}}+\frac{ \overline{B}^{\prime}}{\overline{A}^{\prime}}\right)^{2}} \tag{12}\]
\[c=\frac{1}{{\overline{B}^{\prime}}^{2}\left(\frac{A^{\prime}}{B^{\prime}}+\frac{\overline{A}^{\prime}}{\overline{B}^{\prime}}\right)^{2}}+\frac{1}{{\overline{A}^{\prime}}^{2}\left(\frac{B^{\prime}}{A^{\prime}}+\frac{\overline{B}^{\prime}}{\overline{A}^{\prime}}\right)^{2}} \tag{107}\]
Simplifying the results under the assumption that matter effects induce negligible perturbations on the interference terms (i.e. \(A=\overline{A}\), \(B=\overline{B}\)) and using equations 11, 12, 13 and 14, the angle of inclination becomes
\[\tan 2\theta=\frac{(B^{2}-A^{2})\cos 2\Delta-2AB\sin 2\Delta\cos\delta_{14}}{2AB\sin\delta_{14}} \tag{108}\]
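As a quick numerical illustration of how the inclination is extracted, the sketch below (ours, with placeholder values for \(a\), \(b\), \(c\) rather than numbers computed from the oscillation amplitudes) evaluates \(\theta\) from \(\tan 2\theta=b/(a-c)\).

```python
import numpy as np

# Placeholder quadratic-form coefficients of eqn (104); in practice these
# would be built from A', B', Abar', Bbar' via eqns (105)-(107).
a, b, c = 1.8, 0.9, 1.2

theta = 0.5 * np.arctan2(b, a - c)  # tan(2*theta) = b / (a - c)
print(np.degrees(theta))            # inclination of the ellipse's major axis
```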
|
2308.13181 | Emergent $\mathbb{Z}_2$ symmetry near a CDW multicritical point | We consider the critical behavior associated with incommensurate
unidirectional charge-density-wave ordering in a weakly orthorhombic system
subject to uniaxial strain as an experimentally significant example of
$U(1)\times U(1)$ multicriticality. We show that, depending on microscopic
details, the phase diagram can have qualitatively different structures which
can involve a vestigial meta-nematic critical point, a pair of tricritical
points, a decoupled tetracritical point, or (at least at mean-field level) a
bicritical point. We analyze the emergent symmetries in the critical regime and
find that these can -- at least in some cases -- involve an emergent
$\mathbb{Z}_2$ order parameter symmetry. | Steven A. Kivelson, Akshat Pandey, Anisha G. Singh, Aharon Kapitulnik, Ian R. Fisher | 2023-08-25T05:21:15Z | http://arxiv.org/abs/2308.13181v2 | # Emergent \(\mathbb{Z}_{2}\) symmetry near a CDW multicritical point
###### Abstract
We consider the critical behavior associated with incommensurate unidirectional charge-density-wave ordering in a weakly orthorhombic system subject to uniaxial strain as an experimentally significant example of \(U(1)\times U(1)\) multicriticality. We show that, depending on microscopic details, the phase diagram can have qualitatively different structures which can involve a vestigial meta-nematic critical point, a pair of tricritical points, a decoupled tetracritical point, or (at least at mean-field level) a bicritical point. We analyze the emergent symmetries in the critical regime and find that these can -- at least in some cases -- involve an emergent \(\mathbb{Z}_{2}\) order parameter symmetry.
From an electronic structure perspective, ErTe\({}_{3}\) is very nearly tetragonal, but from a structural perspective, due to the presence of a glide plane, it is intrinsically orthorhombic [1]. Below a well-characterized transition temperature, \(T_{c}\), it exhibits unidirectional incommensurate charge-density-wave (CDW) order with ordering vector along the orthorhombic \(c\) axis. In the presence of an in-plane unidirectional applied stress \(s\) in excess of a modest critical value, \(s^{*}\), (or, more generally, anisotropic in-plane strain greater than some critical value) the CDW ordering vector rotates by \(\pi/2\) to lie along the \(a\) axis [2; 3; 4]. Moreover, for temperatures slightly above \(T_{c}\), the nematic susceptibility (as inferred from the elasto-resistance) as a function of strain is strongly peaked at \(s=s^{*}\)[3]. These observations motivate us to reconsider the possible multicritical phase diagrams relevant to this general situation in which there are two distinct \(U(1)\) order parameters (associated with breaking of translational symmetry in the two directions).
From a symmetry perspective, the two CDW states found for ErTe\({}_{3}\) are fundamentally distinct; this is an inescapable consequence of the orthorhombic crystal structure. For instance, while the magnitudes of the ordering vectors in the two directions are similar, they are not identical [3]. Similarly, the magnitude of the slope of \(T_{c}\) with respect to \(s\) is different for the two states [3]. Nonetheless, we identify circumstances when, in a sense that we will define precisely, there is an emergent \(\mathbb{Z}_{2}\) symmetry under exchange of the two order parameters that characterizes the critical regime -- i.e. the system behaves as if it were truly tetragonal at a critical value of the stress, \(s=s^{*}\), and for \(T\) close to an appropriate critical point.
Specifically, we consider a Landau-Ginzburg-Wilson effective field theory with two complex scalar order parameters, \(\phi_{1}\) and \(\phi_{2}\), corresponding to the two components of the CDW order. We will consider the solution of this problem in the multicritical regime first in the context of Landau mean-field theory. Then, more qualitatively (and conjecturally) we will discuss the true three-dimensional critical fluctuations.
For simplicity, we will start by treating the case in which the system is actually tetragonal, in which case, for vanishing applied stress (\(s=0\)), there is a set of discrete (point-group) symmetries which interchange \(\phi_{1}\;\leftrightarrow\;\phi_{2}\) and at the same time interchange coordinates, \(x\;\leftrightarrow\;y\)[5]. We will then analyze the effects of including small terms consistent with an orthorhombic point group symmetry. In Figs. 1 and 2 we show the possible structures of the phase diagram that come from a saddle-point (Landau mean-field) solution of the model in various ranges of couplings for the tetragonal system and the weakly orthorhombic system respectively.
The effective field theories from which these results are derived are defined in Sec. I. The mean-field treatment and the considerations concerning the true asymptotic critical phenomena are discussed in Secs. II and III, respectively. In Sec. IV, the results are discussed with emphasis on the existence (or not) of additional emergent symmetries in the asymptotic near-critical regime.
## I The effective field theory
The effective (Landau-Ginzburg-Wilson) field theory in the neighborhood of a multicritical point for two unidirectional incommensurate CDW orders is described by a classical effective Hamiltonian density
\[\mathcal{H}[\phi_{1},\phi_{2}]=H+\tilde{H}=V+K+\tilde{V}+\tilde{K} \tag{1}\]
where \(H=V+K\) contains all the terms that are consistent with the point group symmetry of a tetragonal system (\(D_{4h}\)) and an assumed emergent translational symmetry in all three directions (\(\mathbb{R}^{3}\)), while \(\tilde{H}=\tilde{V}+\tilde{K}\) contains the additional terms that are allowed when (either due to applied strain or the intrinsic crystal structure) the point-group symmetry is reduced to that of an orthorhombic system (\(D_{2h}\)) [6]. In this treatment, because the CDW is assumed to be incommensurate, translations also act on the order parameter fields, such that under translation by \(\mathbf{a}\), the order parameters transform
as \(\phi_{1}(\mathbf{r})\rightarrow\phi_{1}(\mathbf{r}+\mathbf{a})e^{iQ_{x}a_{x}}\) and \(\phi_{2}(\mathbf{r})\rightarrow\phi_{2}(\mathbf{r}+\mathbf{a})e^{iQ_{y}a_{y}}\) (where \(Q_{x,y}\) are the CDW ordering vectors in the \(x\) and \(y\) directions respectively), while any point group element that exchanges \(x\) and \(y\) also exchanges \(\phi_{1}\) and \(\phi_{2}\). In the field-theoretic description, the translational symmetries in the \(x\) and \(y\) directions discussed above are manifested as internal \(U(1)\) symmetries associated with the order parameter fields, \(\phi_{1}\) and \(\phi_{2}\), while the \(\mathbb{Z}_{2}\) symmetry under exchange of \(\phi_{1}\) and \(\phi_{2}\) requires the simultaneous exchange of \(x\) and \(y\).
Here \(V+\tilde{V}\) is the effective potential, which we will express as a polynomial in powers of \(\phi_{1}\) and \(\phi_{2}\), keeping explicitly terms only to the lowest necessary order, while \(K+\tilde{K}\) are the lowest-order gradient terms. Explicit expressions for these various terms in the effective Hamiltonian follow:
\[V = \frac{\mu}{2}\left[|\phi_{1}|^{2}+|\phi_{2}|^{2}\right]+\frac{u }{4}\left[|\phi_{1}|^{2}+|\phi_{2}|^{2}\right]^{2}+\frac{\gamma}{2}|\phi_{1}| ^{2}|\phi_{2}|^{2},\] \[K = \frac{\kappa_{L}}{2}\left[|\partial_{x}\phi_{1}|^{2}+|\partial_{ y}\phi_{2}|^{2}\right]+\frac{\kappa_{T}}{2}\left[|\partial_{y}\phi_{1}|^{2}+| \partial_{x}\phi_{2}|^{2}\right] \tag{2}\] \[+\frac{\kappa_{\perp}}{2}\left[|\partial_{z}\phi_{1}|^{2}+| \partial_{z}\phi_{2}|^{2}\right],\]
\[\tilde{V} = \frac{\tilde{\mu}}{2}\left[|\phi_{1}|^{2}-|\phi_{2}|^{2}\right]+ \frac{\tilde{u}}{4}\left[|\phi_{1}|^{4}-|\phi_{2}|^{4}\right],\] \[\tilde{K} = \frac{\tilde{\kappa}_{L}}{2}\left[|\partial_{x}\phi_{1}|^{2}-| \partial_{y}\phi_{2}|^{2}\right]+\frac{\tilde{\kappa}_{T}}{2}\left[|\partial_ {y}\phi_{1}|^{2}-|\partial_{x}\phi_{2}|^{2}\right] \tag{3}\] \[+\frac{\tilde{\kappa}_{\perp}}{2}\left[|\partial_{z}\phi_{1}|^{2 }-|\partial_{z}\phi_{2}|^{2}\right].\]
In most cases, we assume \(u>\sqrt{\tilde{u}^{2}+\gamma^{2}}\), \(\kappa_{L}>|\tilde{\kappa}_{L}|\), \(\kappa_{T}>|\tilde{\kappa}_{T}|\), and \(\kappa_{\perp}>|\tilde{\kappa}_{\perp}|\). However, when we consider the tricritical phase diagrams in Figs. 1e and 2e, we will consider \(u\) slightly negative, in which case terms of order at least \(\phi^{6}\) are required for thermodynamic stability. Similarly, we shall see that for the special case \(\gamma=0\), higher order terms are necessary (at mean-field level) to fully determine the phase diagram; where such higher-order terms enter our discussion we will introduce them explicitly.
In general, all the coefficients appearing in these expressions are functions of the temperature, \(T\), and if we apply stress to the system, of the stress tensor, \(s_{ab}\). In an otherwise tetragonal system, all the coefficients with a tilde vanish in the absence of shear strain, and are odd functions of the shear stress, \(s_{B_{1g}}=s_{xx}-s_{yy}\), while the coefficients that do not carry tildes are even functions of all other components [7].
## II Mean-field analysis
In a mean-field analysis, we look for field configurations that minimize \(\mathcal{H}[\phi_{1},\phi_{2}]\). As \(K+\tilde{K}\) is minimized by any uniform field configuration, this means finding the minima of \(V+\tilde{V}\).
### The \(\mathbb{Z}_{2}\) symmetric case
The mean-field phase diagram in the stress-temperature plane for the simple case in which the underlying problem is tetragonal is shown in Fig. 1. In making this figure, we have taken \(\mu\) and \(\tilde{\mu}\) to be linear in \(T\) and \(s\) respectively, \(\mu=\mu_{0}[T-T_{0}]\) and \(\tilde{\mu}=\tilde{\mu}_{0}s\), and have ignored the \(T\) and \(s\) dependence of all other parameters. In particular, this means we have set \(\tilde{u}=0\). Figs. 1a, b, and c follow directly from minimizing \(V\) in Eq. (2), while Figs. 1d, e, and f involve consideration of additional terms.
The various cases are as follows (a minimal numerical sketch of the underlying minimization is given after the list):
* **Fig. 1a:** For \(u>0\) and \(\gamma>0\) there is a bicritical point at \(\tilde{\mu}=\mu=0\). There is a first-order line between two unidirectional phases that occurs along the half line \(\tilde{\mu}=0\) with \(\mu<0\). The phase boundaries between the unidirectional ordered phases and the disordered high-temperature phase correspond to \(\tilde{\mu}=\pm\mu\) with \(\mu\geq 0\).
* **Fig. 1b:** For \(u>0\) and \(-2u<\gamma<0\) there is a tetracritical point at \(\mu=\tilde{\mu}=0\). For \(\tilde{\mu}>0\), there is a transition from a disordered state for \(\mu>|\tilde{\mu}|\), to a state in which \(|\phi_{2}|>|\phi_{1}|=0\) at \(\mu<|\tilde{\mu}|\). There is then a further transition to a state with coexisting bidirectional order (but with \(\phi_{2}\) dominant) at \(\mu=-|\tilde{\mu}|(2u-|\gamma|)/|\gamma|\). For \(\tilde{\mu}<0\), the phase boundaries follow the corresponding lines, but with the roles of \(\phi_{1}\) and \(\phi_{2}\) interchanged.
* **Fig. 1c:** For \(u>0\) and \(\gamma=-u\), the system has the special feature of exhibiting a "decoupled" tetracritical point in which the two phase boundaries pass through each other without changing slope. At mean-field level, this corresponds to a third degree of fine-tuning, requiring that the three parameters \(\mu\), \(\tilde{\mu}\), and \(\gamma\) all be tuned to \(0\) simultaneously.
* **Fig. 1d:** For \(u>0\) and \(\gamma=0\), \(V\) exhibits an enhanced \(O(4)\) symmetry. Here there is no coexistence region. However, there is no latent heat associated with crossing the phase boundary at \(\mu<0\) between the two phases (as \(\tilde{\mu}\) changes sign) so the transition is not conventionally first order despite the fact that the order parameter changes direction abruptly. Indeed, as is discussed in Ref. [8], in this case, the nature of the phase diagram below the multicritical point depends on higher order terms in powers of \(\phi_{j}\). Specifically, the tetracritical phase diagram shown in Fig. 1d arises if we include next order terms \[V \to V+\frac{u_{6}}{6}\left[|\phi_{1}|^{2}+|\phi_{2}|^{2} \right]^{3}\] (4) \[+\frac{\gamma_{6}}{3}\left[|\phi_{1}|^{2}+|\phi_{2}|^{2}\right]| \phi_{1}|^{2}|\phi_{2}|^{2}\] with \(\gamma_{6}<0\) (and with \(u_{6}>|\gamma_{6}|/2\) to preserve stability). Again, at mean-field level, this form of
multicriticality involves an additional degree of fine tuning relative to a generic tetracritical point.
* **Fig. 1e:** For \(u<0\) the transition at zero stress is necessarily first order. Generically the expansion in powers of \(\phi\) is not justified at a first-order transition unless the transition is only weakly first order, as happens near a tricritical (or bicritical) point. Thus, in Fig. 1e, we have shown the phase diagram that results from a case where \(u\) changes sign in the vicinity of a putative bicritical point. Specifically, we have included an implicit (linear) \(T\) dependence of \(u\) taking the form \(u=-u^{*}+u^{\prime}\mu\), so that \(u<0\) but of small magnitude as \(\mu\to 0\), but \(u>0\) for \(\mu>\mu^{*}\equiv u^{*}/u^{\prime}\). To ensure the stability of the free energy, we are forced to include higher-order terms in the effective potential as in Eq. 4. To be concrete, we have taken \(\gamma>0\). Now, in the vicinity of the point at which, for \(u>0\), there would have been a bicritical point, we instead find a pair of tricritical points at \(\mu=\mu^{*}\) and \(\tilde{\mu}=\pm\mu^{*}\).
* **In Fig. 1f** we show a phase diagram that cannot be derived at mean-field level from a Hamiltonian of the form shown, although a phase diagram of this sort -- with a "vestigial nematic" phase -- can arise from fluctuation effects, as discussed below and in Refs. [9] and [10]. To obtain this sort of phase diagram in mean-field theory, we need to add in an additional nematic order parameter, \(\mathcal{N}\), which transforms as \(B_{1g}\) (\(x^{2}-y^{2}\)) under the point group symmetry of the tetragonal crystal. To the same order as we have considered so far, this involves \[V\to V +\tilde{\alpha}\mathcal{N}+\frac{\nu}{2}\mathcal{N}^{2}+\frac{ \tilde{w}}{3}\mathcal{N}^{3}+\frac{1}{4}\mathcal{N}^{4}\] \[+\frac{\lambda}{2}\mathcal{N}\left[|\phi_{1}|^{2}-|\phi_{2}|^{2}\right]\] (5) where the nematic field has been normalized in such a way that the quartic-coupling is equal to \(1\), and where \(\tilde{\alpha}\) and \(\tilde{w}\) both vanish by symmetry in the absence of strain. To obtain the phase diagram in Fig. 1f, we have assumed that \(\nu\) changes sign at a temperature slightly above the putative bicritical point, and in projecting the phase diagram onto the \(\mu-\tilde{\mu}\) plane we have taken \(\nu=-\nu^{*}+\nu^{\prime}\mu\), with \(\nu^{*}\) small and positive. Coefficients of terms odd in \(\mathcal{N}\) vary linearly with \(\tilde{\mu}\): \(\tilde{\alpha}=\tilde{\alpha}^{\prime}\tilde{\mu}\) and \(\tilde{w}=\tilde{w}^{\prime}\tilde{\mu}\). Now there is a nematic critical point at \(\tilde{\mu}=0\) and \(\mu=\mu^{*}\equiv\nu^{*}/\nu^{\prime}\), while the two critical endpoints (where the CDW ordering temperatures intersect a first-order line) occur at a smaller value of \(\mu\) which (for example) is at \(\mu=\mu^{*}\left[\lambda\nu/(1+\lambda\nu^{\prime})\right]\) in the limit \(\tilde{\mu}\to 0\).
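A minimal numerical sketch of the minimization underlying diagrams (a)-(c) follows. This is our own illustration, not code from the original analysis: the couplings \(u=1\) and \(\gamma=-0.5\) are arbitrary sample values, and the mean-field potential is minimized over the amplitudes \(p=|\phi_{1}|^{2}\) and \(q=|\phi_{2}|^{2}\) on a grid.

```python
import numpy as np

def phase(mu, mut, u=1.0, g=-0.5, n=201, pmax=6.0):
    """Classify the minimizer of V over p = |phi1|^2, q = |phi2|^2 >= 0
    (tetragonal case: utilde = 0, and K drops out for uniform fields)."""
    vals = np.linspace(0.0, pmax, n)
    p, q = np.meshgrid(vals, vals, indexing='ij')
    V = 0.5*mu*(p + q) + 0.25*u*(p + q)**2 + 0.5*g*p*q + 0.5*mut*(p - q)
    i = np.unravel_index(np.argmin(V), V.shape)
    p0, q0, tol = p[i], q[i], pmax / n
    if p0 < tol and q0 < tol:
        return 'disordered'
    if q0 < tol:
        return 'phi1 only'
    if p0 < tol:
        return 'phi2 only'
    return 'coexistence'

# -2u < gamma < 0: the tetracritical topology of Fig. 1b
for mu in (0.5, -1.0, -3.0):
    print(mu, [phase(mu, mut) for mut in (-0.8, 0.0, 0.8)])
```

Setting \(\gamma>0\) instead eliminates the coexistence region, collapsing the structure onto the bicritical topology of Fig. 1a.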
### Emergent \(\mathbb{Z}_{2}\) symmetry: the orthorhombic case
We now consider modifications to the mean-field phase diagrams that result from considering a system which is weakly orthorhombic, i.e. where \(\tilde{H}\neq 0\) even for \(s=0\). Since at mean-field level, we can neglect the gradient terms, to quartic order there are only two terms that break the \(\mathbb{Z}_{2}\) symmetry under exchange \(\phi_{1}\leftrightarrow\phi_{2}\): the quadratic term, \(\tilde{\mu}\), and the quartic term, \(\tilde{u}\). Rescaling the order parameter fields such that \(\phi_{1}\to Z\phi_{1}\) and \(\phi_{2}\to Z^{-1}\phi_{2}\) with \(Z=\left[(u+\tilde{u})/(u-\tilde{u})\right]^{-1/8}\) preserves the form of the potential \(V+\tilde{V}\), but with shifted parameters:
\[\mu \rightarrow\left[(Z^{4}+1)\mu+(Z^{4}-1)\tilde{\mu}\right]/(2Z^{2}),\] \[\tilde{\mu} \rightarrow\left[(Z^{4}-1)\mu+(Z^{4}+1)\tilde{\mu}\right]/(2Z^{2}),\] \[u \rightarrow\sqrt{u^{2}-\tilde{u}^{2}}, \tag{6}\] \[\gamma \rightarrow\gamma+u-\sqrt{u^{2}-\tilde{u}^{2}},\] \[\tilde{u} \to 0.\]
We see therefore that (to the extent that we can neglect higher order terms in powers of \(\phi_{j}\)) the theory possesses an emergent \(\mathbb{Z}_{2}\) symmetry under the exchange of \(\phi_{1}\) and \(\phi_{2}\) and \(\tilde{\mu}\rightarrow-\tilde{\mu}\). In this sense, it behaves as if it were tetragonal whether it is or not. Of course, the fact that this is not a true symmetry will be reflected in the presence of higher order terms, such as
\(\tilde{u}_{6}\left[|\phi_{1}|^{2}+|\phi_{2}|^{2}\right]\left[|\phi_{1}|^{4}-|\phi_{2}|^{4}\right]\), which do not vanish under the rescaling transformation. Consequently, while an appropriate coordinate transformation in the \(T-s\) plane will make the phase diagram of the orthorhombic system look like that of a tetragonal system close enough to criticality that higher order terms in \(V+\tilde{V}\) can be ignored, further from criticality where the higher order terms begin to play a role, the emergent \(\mathbb{Z}_{2}\) symmetry will be increasingly violated, as reflected in the schematic phase diagrams in Fig. 2.

Figure 1: Mean-field phase diagrams in the \(\mathbb{Z}_{2}\)-symmetric (tetragonal) cases discussed in Sec. II.1: (a) bicritical point, (b) tetracritical point, (c) decoupled tetracritical point, (d) \(O(4)\)-symmetric tetracritical point, (e) tricritical points, and (f) vestigial nematic. Thin and thick black lines denote, respectively, continuous and first-order transitions.
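The rescaling argument above can be checked mechanically. The following sketch is ours (with exact sample couplings chosen so that \(Z^{4}\) is rational); it verifies Eq. (6) in terms of \(p=|\phi_{1}|^{2}\) and \(q=|\phi_{2}|^{2}\).

```python
import sympy as sp

p, q = sp.symbols('p q', nonnegative=True)
# exact sample couplings with 0 < utilde < u, so Z**4 = 1/2 is rational
mu, mut = sp.Rational(3, 10), -sp.Rational(1, 5)
u, ut, g = sp.Integer(5), sp.Integer(3), -sp.Rational(7, 10)

Z = ((u + ut) / (u - ut)) ** sp.Rational(-1, 8)

# V + Vtilde in terms of p = |phi1|^2 and q = |phi2|^2 (gradient terms dropped)
V = (mu/2)*(p + q) + (u/4)*(p + q)**2 + (g/2)*p*q \
    + (mut/2)*(p - q) + (ut/4)*(p**2 - q**2)
Vr = sp.expand(V.subs({p: Z**2 * p, q: q / Z**2}, simultaneous=True))

# shifted parameters as given in Eq. (6)
up = sp.sqrt(u**2 - ut**2)
gp = g + u - up
mup = ((Z**4 + 1)*mu + (Z**4 - 1)*mut) / (2*Z**2)
mutp = ((Z**4 - 1)*mu + (Z**4 + 1)*mut) / (2*Z**2)

target = (mup/2)*(p + q) + (up/4)*(p + q)**2 + (gp/2)*p*q + (mutp/2)*(p - q)
print(sp.simplify(Vr - target))  # 0: the quartic anisotropy is rotated away
```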
The situation is somewhat subtle for the tricritical case in Fig. 2e since higher order (at least sixth) terms play a role in the phase diagram near criticality. Thus, the extent to which this system exhibits an approximate \(\mathbb{Z}_{2}\) symmetry can depend on other assumptions.
The case of the vestigial nematic, Fig. 2f, needs to be handled separately. In the first place there is not, strictly speaking, a nematic phase defined by a spontaneously broken symmetry, since the symmetry in question is explicitly broken. This is reflected in the presence of odd terms, \(\tilde{\alpha}\) and \(\tilde{w}\), in the effective potential in Eq. (5), where now no symmetry requires that \(\tilde{\mu}\), \(\tilde{\alpha}\), and \(\tilde{w}\) all vanish simultaneously. Thus, the first-order line immediately below the critical point corresponds to a line of meta-nematic transitions (where the value of \(\mathcal{N}\) jumps discontinuously -- typically from a negative to a positive value). The critical point at the end -- what was formerly the nematic critical point -- is now analogous to the critical point that terminates the liquid-gas coexistence line in the phase diagram of water or other common liquids.
That the vicinity of the critical point possesses an emergent \(\mathbb{Z}_{2}\) symmetry can be seen by making a shift of the order parameter, \(\mathcal{N}\to\mathcal{N}+\bar{\mathcal{N}}\), where \(\bar{\mathcal{N}}=-\tilde{w}/3\) is chosen to cancel the third order term. Then with proper rescaling of \(\mathcal{N}\), this replaces Eq. (5) with an expression of the same form, but with \(\tilde{w}=0\). The meta-nematic critical point at \(\mu=\mu^{*}\) and \(\tilde{\mu}=\tilde{\mu}^{*}\equiv-\alpha^{*}/\alpha^{\prime}\) thus has the same form as in the tetragonal case. However, as shown in the figure, the critical end-points (where the CDW transitions end on the meta-nematic line) are split by order \(\lambda\bar{\mathcal{N}}\), thus spoiling the \(\mathbb{Z}_{2}\) symmetry as one moves away from the critical point.
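A one-line symbolic check (ours) confirms that the shift by \(\bar{\mathcal{N}}=-\tilde{w}/3\) indeed cancels the cubic term when the quartic coefficient is normalized to \(1\):

```python
import sympy as sp

N, alpha, nu, w = sp.symbols('N alphatilde nu wtilde', real=True)
V = alpha*N + (nu/2)*N**2 + (w/3)*N**3 + sp.Rational(1, 4)*N**4

shifted = sp.expand(V.subs(N, N - w/3))          # N -> N + Nbar, Nbar = -w/3
print(sp.Poly(shifted, N).coeff_monomial(N**3))  # 0: cubic term cancelled
```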
## III Fluctuation effects
Mean-field theory tends to work remarkably well in metallic systems -- presumably because the effective range of interactions is relatively long. The coupling of the nematic component of the order parameter to the strain field tends to further reduce fluctuation effects (see Ref. [11] and references therein). It is, nonetheless, interesting in a more general context to analyze which features of the above structure survive (or may even be enhanced by) fluctuation effects in the true asymptotic critical regime.
The most basic question to be addressed is the stability of the various mean-field phase diagrams to fluctuations. The issue concerning the stability of \(O(N)\times O(N)\) (or more generally \(O(N)\times O(M)\)) multicritical points has been addressed in several different ways, notably through the use of the \(\epsilon\) expansion [12], the functional renormalization group [13] and, more recently, using conformal bootstrap to address the problem directly in three dimensions [14; 15].
In the context of the \(\epsilon\) expansion, there are three fixed points (in addition to the always unstable Gaussian fixed point) which correspond to the decoupled tetracritical point, a higher symmetry \(O(2N)\) symmetric multicritical point, and an interacting "biconical" fixed point. However, for given \(N\), only one of these is ever stable with the other two being unstable -- i.e. the other two require at least one additional parameter to be fine tuned (in addition to the usual two associated with "stable" multicritical points).
A number of conclusions can be reached based on non-perturbative lines of analysis [16]. For \(N=1\), the \(O(2)\) symmetric fixed point, familiar from studies of the Ashkin-Teller model, is stable [17]. This is one of the most remarkable examples of an emergent symmetry. However, it is generally accepted that the \(O(2N)\) symmetric fixed point is (at least weakly) unstable for all \(N>1\)[18; 19].
For \(N=2\) the situation is somewhat subtle. Certainly, the decoupled fixed point is stable for \(N\geq 2\). As viewed from the perspective of the \(\epsilon\) expansion, this
suggests that this is the only stable multicritical point, which among other things would imply that there is no stable \(O(N)\times O(N)\) bicritical point with \(N\geq 2\). Recently, however, initial evidence has emerged from conformal bootstrap of the existence of an additional stable conformal field theory that might correspond to the long-sought bicritical point. We will return to this below.

Figure 2: Mean-field phase diagrams in the non-\(\mathbb{Z}_{2}\)-symmetric (orthorhombic) cases discussed in Sec. II.2: (a) bicritical point, (b) tetracritical point, (c) decoupled tetracritical point, (d) \(O(4)\)-symmetric tetracritical point, (e) tricritical points, and (f) vestigial metanematic. Thin and thick black lines denote, respectively, continuous and first-order transitions. Dashed ovals indicate regimes of approximate emergent \(\mathbb{Z}_{2}\) symmetry. Grey lines are the corresponding phase boundaries from Fig. 1.
We now comment on the tricritical scenario. From the fact that there is no stable bicritical fixed point in the context of the \(\epsilon\) expansion, it was suggested by Aharony _et al._[20; 21; 22] that the bicritical transition was likely to be fluctuation-driven first order. This further led to the suggestion that a mean-field bicritical phase diagram should instead exhibit two tricritical points, and thus be of the form shown in Figs. 1e and 2e.
The tricritical points in this case are readily characterized since \(d=3\) is the upper critical dimension, and so they should exhibit mean-field behavior up to logarithmic corrections. Clearly, each tricritical point involves the ordering of only one of the CDW components, so there can be no question of an emergent \(\mathbb{Z}_{2}\) symmetry.
Another possible form of a phase diagram with a fluctuation driven first-order transition is that corresponding to a vestigial nematic phase, as in Figs. 1f and 2f. It has been shown [10] that such a vestigial nematic phase arises from fluctuations in a sufficiently layered (quasi-2D) system with \(N>2\), and strong arguments suggest that this is true as well for \(N=2\). There thus appear to be circumstances in which this form of a phase diagram arises organically from CDW fluctuations, without need to introduce an explicit nematic order parameter. (See also Refs. [23] and [9].)
In this case the critical point in question is in the 3D Ising universality class. For the tetragonal case, this is obvious in that there is an Ising symmetry -- which corresponds to the \(\mathbb{Z}_{2}\) symmetry under exchange \(\phi_{1}\leftrightarrow\phi_{2}\) -- that is broken below the critical point. But in the orthorhombic case, the analogy with the liquid-gas critical point is more apt, in that there is no actual broken symmetry associated with the first-order line below the critical point. However, the present discussion puts a somewhat different perspective on this familiar problem -- there is an emergent \(\mathbb{Z}_{2}\) symmetry at the critical point which is broken along the first-order line below the critical point. Correspondingly, there is presumably a Widom line, at which the emergent \(\mathbb{Z}_{2}\) symmetry is best defined, that extends above the critical point, and which should be observable as a peak in the nematic susceptibility [3].
The conformal bootstrap offers a new approach to identifying possible critical phenomena -- especially in 3D [14; 15]. There is not yet any systematic classification of all possible 3D conformal field theories, but (in many cases) where such field theories have been identified and characterized from this approach, these provide important complements to the \(\epsilon\) expansion and other more familiar approaches to critical phenomena. For instance, the perturbative stability of the decoupled \(O(2)\times O(2)\) tetracritical point can be corroborated by knowing precisely the critical exponents of the \(O(2)\) critical point [16].
In this context, it is interesting to note that there is preliminary indication of the existence of at least one additional stable theory with \(O(2)\times O(2)\) symmetry, distinct from the decoupled theory [24; 25; 26]. The results here are not definitive, and the identification of the physical meaning of this additional critical theory -- if it indeed exists -- is also not established. It is tempting, however, to identify this as the long-sought bicritical theory. In this context, it is interesting to note that the theories bootstrapped in these works actually have a higher, \((O(2)\times O(2))\rtimes\mathbb{Z}_{2}\) symmetry, so would correspond to a critical point with emergent \(\mathbb{Z}_{2}\) symmetry under exchange of the two order parameters.
Finally, there is an issue concerning the presence or absence of an emergent \(SO(3)\) spatial rotational symmetry at criticality. This is a feature of an \(O(2)\)-symmetric critical point, such as occurs along the narrow solid phase boundaries in the figures. In particular, by appropriate rescaling of the length scales in the \(x\), \(y\), and \(z\) directions, the system at criticality -- even on an underlying orthorhombic lattice -- can be described by an \(SO(3)\) rotationally symmetric effective field theory. The same analysis can be applied to the tricritical points or to the nematic or metanematic critical points, such as those illustrated in Figs. 1f and 2f. It would presumably be true at a bicritical point of the sort shown in Figs. 1e and 2e, if such a critical point indeed arises.
In contrast, it is easy to see that rotational symmetry does not emerge at the decoupled tetracritical points, because of the presence of an effective "spin-orbit coupling" that reflects the fact that the broken symmetries involved are spatial symmetries. Specifically, even in the tetragonal case, while the effective field theory for \(\phi_{1}\) can be made rotationally symmetric by rescaling \(x\), \(y\), and \(z\) appropriately, to achieve the same result for \(\phi_{2}\) requires interchanging the scale factors for \(x\) and \(y\). In addition, if the system is orthorhombic, there is a different scale required for the magnitude of fluctuations of \(\phi_{1}\) and \(\phi_{2}\) and for the \(z\) coordinate that enters the two theories. Thus, even at criticality there is only \(C_{2}\) rotational symmetry for the orthorhombic system, while for the tetragonal case the \(C_{4}\) rotational symmetry is represented as a further symmetry under \(\phi_{1}\leftrightarrow\phi_{2}\) and \(x\leftrightarrow y\).
## IV Discussion
The analysis in the present paper was motivated by the observation of an apparent bicritical phase diagram with an emergent tetragonal symmetry in ErTe\({}_{3}\), a weakly orthorhombic, quasi two-dimensional (layered) material with unidirectional CDW order whose direction can be reoriented with the application of uniaxial stress. Further studies of this material with the goal of exploring the behavior closer to the putative bicritical point would certainly be interesting. In particular, many of the theoretically most interesting issues concern fluctuation effects not captured by mean-field theory, while at present it is not clear whether the experiments approach close enough to criticality to be sensitive to such effects.
It is worth noting that similar considerations apply to systems with unidirectional spin density waves as well, with the modification that in that case the order parameter symmetries associated with the two order parameter fields are richer than in the present case. For instance, for a commensurate unidirectional collinear antiferromagnet -- i.e. the \((0,\pi)-(\pi,0)\) "stripe" antiferromagnet seen in the Fe-based superconductors [27] -- the two order parameter fields (to the extent that spin-orbit coupling is negligible) are three-component fields, so the relevant symmetry is \(O(3)\times O(3)\) with, for tetragonal systems, an additional \(\mathbb{Z}_{2}\) under exchange. Many other examples exist, with other symmetries or near symmetries, including materials with still lower point-group symmetry, in various strongly interacting electronic materials.
The analysis -- with suitable modifications -- is also of possible relevance to unconventional superconductors under circumstances in which, by varying some non-thermal parameter, e.g. pressure or alloy concentration, the system can be tuned from a regime with one form of superconducting order to a regime with another. Here, again, one expects a multicritical phase diagram at the point at which the two forms of superconductivity have the same \(T_{c}\). An example is the apparent change from extended s-wave to d-wave superconductivity as a function of chemical substitution in \(\mathrm{Ba}_{1-x}\mathrm{K}_{x}\mathrm{Fe}_{2}\mathrm{As}_{2}\)[28].
From a theory perspective, it remains to determine whether or not a stable bicritical point not accessible from the \(\epsilon\) expansion exists, what are the best experimental tests of the existence of an emergent \(\mathbb{Z}_{2}\) symmetry in the critical regime, and more generally how to nail down the topology of the phase diagram in the asymptotic multicritical regime where mean-field considerations are insufficient.
###### Acknowledgements.
We gratefully acknowledge useful discussions and correspondence with Vladimir Calvera, Eduardo Fradkin, Andreas Stergiou and Gilles Tarjus. The work was supported, in part, by the Department of Energy, Office of Basic Energy Sciences, under contract DE-AC02-76SF00515. AP is supported by a Stanford Graduate Fellowship. AGS was supported in part by an NSF GRFP and DOE SCGSR award. SAK was further supported by a Leverhulme Trust International Professorship grant number LIP-202-014 at Oxford.
|
2302.00818 | Analyticity of Steklov Eigenvalues in nearly-hyperspherical domains in
\mathbb{R}^{d+1} | We consider the Dirichlet-to-Neumann operator (DNO) on nearly-hyperspherical
domains in dimension greater than 3. Treating such domains as perturbations of
the ball, we prove the analytic dependence of the DNO on the shape perturbation
parameter for fixed perturbation functions. Consequently, we conclude that the
Steklov eigenvalues are analytic in the shape perturbation parameter as well.
To obtain these results, we use the strategy of Nicholls and Nigam (2004), and
of Viator and Osting (2020); we transform the Laplace-Dirichlet problem on the
perturbed domain to a more complicated, parameter-dependent equation on the
ball, and then geometrically bound the Neumann expansion of the transformed
DNO. These results are a generalization of the work of Viator and Osting (2020)
for dimension 2 and 3. | Chee Han Tan, Robert Viator | 2023-02-02T01:46:45Z | http://arxiv.org/abs/2302.00818v1 | # Analyticity of Steklov eigenvalues of nearly-hyperspherical domains in \(\mathbb{R}^{d+1}\)
###### Abstract.
We consider the Dirichlet-to-Neumann operator (DNO) on nearly-hyperspherical domains in dimension greater than \(3\). Treating such domains as perturbations of the ball, we prove the analytic dependence of the DNO on the shape perturbation parameter for fixed perturbation functions. Consequently, we conclude that the Steklov eigenvalues are analytic in the shape perturbation parameter as well. To obtain these results, we use the strategy of Nicholls and Nigam (2004), and of Viator and Osting (2020); we transform the Laplace-Dirichlet problem on the perturbed domain to a more complicated, parameter-dependent equation on the ball, and then geometrically bound the Neumann expansion of the transformed DNO. These results are a generalization of the work of Viator and Osting (2020) for dimension \(2\) and \(3\).
Key words and phrases:Dirichlet-to-Neumann operator, Steklov eigenvalues, perturbation theory, hyperspherical coordinates 2010 Mathematics Subject Classification: 26E05, 35C20, 35P05, 41A58
## 1. Introduction
For any integer \(d\geq 3\), we call \(\Omega_{\varepsilon}\subset\mathbb{R}^{d+1}\) a _nearly-hyperspherical domain_ if it is a small perturbation of the unit \((d+1)\)-ball \(B\) in \(\mathbb{R}^{d+1}\), given by
\[\Omega_{\varepsilon}=\left\{(r,\hat{\theta}):0\leq r\leq 1+\varepsilon\rho( \hat{\theta}),\,\hat{\theta}\in S^{d}\right\}.\]
Here, \(\rho\in C^{s+2}(S^{d})\) is the _domain perturbation function_ for some \(s\in\mathbb{N}=\{0,1,2,\dots\}\), \(S^{d}\) is the unit \(d\)-sphere in \(\mathbb{R}^{d+1}\), and \(\varepsilon\geq 0\) is the _perturbation parameter_ which we assumed to be small in magnitude.
For fixed \(\rho\in C^{s+2}(S^{d})\), we consider the Steklov eigenvalue problem on the nearly-hyperspherical domain \(\Omega_{\varepsilon}\):
\[\Delta u_{\varepsilon} =0 \text{in }\Omega_{\varepsilon},\] \[\partial_{\mathbf{n}_{\rho,\varepsilon}}u_{\varepsilon} =\sigma_{\varepsilon}u_{\varepsilon} \text{on }\partial\Omega_{\varepsilon}, \tag{1}\]
where \(\partial_{\mathbf{n}_{\rho,\varepsilon}}=\mathbf{n}_{\rho,\varepsilon}\cdot\nabla\) is the unit outward normal derivative on the boundary \(\partial\Omega_{\varepsilon}\) of the perturbed domain \(\Omega_{\varepsilon}\). It is well-known [1] that the Steklov spectrum is discrete, real, and nonnegative, and we arrange the eigenvalues in non-decreasing order
\[0=\sigma_{0}(\Omega_{\varepsilon})<\sigma_{1}(\Omega_{\varepsilon})\leq \sigma_{2}(\Omega_{\varepsilon})\leq\dots,\]
tending to infinity. The Steklov spectrum coincides with the spectrum of the Dirichlet-to-Neumann operator (DNO), \(G_{\rho,\varepsilon}\colon H^{s+\frac{1}{2}}(\partial\Omega_{\varepsilon})\to H^{s-\frac{1}{2}}(\partial\Omega_{\varepsilon})\), given by
\[\xi\mapsto G_{\rho,\varepsilon}\xi=\mathbf{n}_{\rho,\varepsilon}\cdot\nabla v _{\varepsilon}|_{r=1+\varepsilon\rho},\]
where \(v_{\varepsilon}\) is the _harmonic extension_ of \(\xi\) to \(\Omega_{\varepsilon}\), satisfying
\[\Delta v_{\varepsilon} =0 \text{in }\Omega_{\varepsilon},\] \[v_{\varepsilon} =\xi \text{on }\partial\Omega_{\varepsilon}. \tag{2}\]
In previous work [13], it was proven that \(\sigma_{\varepsilon}\) is analytic with respect to the perturbation parameter \(\varepsilon>0\) for nearly-circular domains (\(d=1\)) and nearly-spherical domains (\(d=2\)). The goal of this paper is to show that the same result holds for \(d\geq 3\).
**Theorem 1.1**.: _Let \(d\geq 3\) and \(s\in\mathbb{N}\). If \(\rho\in C^{s+2}(S^{d})\), then the Dirichlet-to-Neumann operator, \(G_{\rho,\varepsilon}\colon H^{s+\frac{3}{2}}(\partial\Omega_{\varepsilon})\to H ^{s+\frac{1}{2}}(\partial\Omega_{\varepsilon})\), is analytic in the perturbation parameter \(\varepsilon\). More precisely, if \(\rho\in C^{s+2}(S^{d})\), then there exists a Neumann series_
\[G_{\rho,\varepsilon}(\xi)=\sum_{n=0}^{\infty}\varepsilon^{n}G_{\rho,n}(\xi)\]
_that converges strongly as an operator from \(H^{s+\frac{3}{2}}(S^{d})\) to \(H^{s+\frac{1}{2}}(S^{d})\). That is, there exist constants \(K_{1}=K_{1}(B,d,s)>0\) and \(\alpha=\alpha(d)>1\) such that_
\[\left\|G_{\rho,n}\xi\right\|_{H^{s+\frac{1}{2}}(S^{d})}\leq K_{1}\|\xi\|_{H^{s +\frac{3}{2}}(S^{d})}A^{n}\]
_for \(A>\alpha\max\{2K_{0}C_{0}\left|\rho\right|_{C^{s+2}},M\left|\rho\right|_{C^{s+ 2}}\}\)._
We prove Theorem 1.1 for all \(d\geq 3\) simultaneously; the proof can be found in Section 5. The proof follows the strategy of [13, 13]. We first show the analyticity of the harmonic extension, _i.e.,_ that for fixed \(\xi:\partial\Omega_{\varepsilon}\to\mathbb{C}\) the solution \(v_{\varepsilon}\) to (2) is analytic in \(\varepsilon>0\); see Section 4. Using this and a recursive formula for the DNO \(G_{\rho,\varepsilon}\), we then prove that \(G_{\rho,\varepsilon}\) also depends analytically on \(\varepsilon>0\), establishing Theorem 1.1.
The next corollary follows from [10] and Theorem 1.1, establishing the analytic dependence on \(\varepsilon\) of the Steklov eigenvalues \(\sigma_{\varepsilon}\) of (1) within the same disk of convergence as in Theorem 1.1:
**Corollary 1.2**.: _The Steklov eigenvalues \(\sigma_{\varepsilon}\) of (1) consist of branches of one or several analytic functions which have at most algebraic singularities near \(\varepsilon=0\). The same is true of the corresponding eigenprojections._
The motivation of this paper is to lay the groundwork for the asymptotic study of Steklov eigenvalues of nearly-hyperspherical domains in \(\mathbb{R}^{d+1}\) for \(d\geq 3\). Asymptotic and perturbative methods have already yielded meaningful results in the study of Steklov shape optimization in two and three dimensions [13, 13]. There, the authors begin with a nearly-circular domain in \(\mathbb{R}^{2}\) or a nearly-spherical domain in \(\mathbb{R}^{3}\) and, using the analyticity result proved in [13], demonstrate that the ball is _not_ a shape optimizer for a large class of Steklov eigenvalues, while also obtaining a collection of Steklov eigenvalues which _are_ (locally) optimized by the ball. This paper provides the foundation for future work using similar techniques for nearly-hyperspherical domains in dimension greater than three.
## 2. Laplacian in Hyperspherical Coordinates in \(\mathbb{R}^{d+1}\)
Given \(d\geq 1\), let \((x_{1},x_{2},\ldots,x_{d+1})\) denote the \((d+1)\)-dimensional Cartesian coordinates. We define the \((d+1)\)-dimensional hyperspherical coordinates \((r,\theta_{1},\theta_{2},\ldots,\theta_{d})\) as follows:
\[x_{1} =r\cos\theta_{1}\sin\theta_{2}\sin\theta_{3}\ldots\sin\theta_{d},\] \[x_{2} =r\sin\theta_{1}\sin\theta_{2}\sin\theta_{3}\ldots\sin\theta_{d},\] \[x_{j} =r\cos\theta_{j-1}\sin\theta_{j}\sin\theta_{j+1}\ldots\sin\theta_ {d},\ \ j=3,4,\ldots,d,\] \[x_{d+1} =r\cos\theta_{d},\]
where \(r\geq 0\) is the radius of a \((d+1)\)-dimensional sphere, \(0\leq\theta_{1}\leq 2\pi\) is the azimuth, and \(0\leq\theta_{2},\theta_{3},\ldots,\theta_{d}\leq\pi\) are the inclinations. In particular, these correspond to polar coordinates \((x_{1},x_{2})=(r\cos\theta_{1},r\sin\theta_{1})\) for \(d=1\) and spherical coordinates \((x_{1},x_{2},x_{3})=(r\cos\theta_{1}\sin\theta_{2},r\sin\theta_{1}\sin\theta_{2 },r\cos\theta_{2})\) for \(d=2\).
The hyperspherical coordinates are an orthogonal curvilinear coordinate system in \(\mathbb{R}^{d+1}\). The associated metric tensor \(g\) is therefore diagonal, with components
\[g_{ij}=\sum_{k=1}^{d+1}\frac{\partial x_{k}}{\partial\theta_{i}}\frac{\partial x _{k}}{\partial\theta_{j}}=h_{i}^{2}\delta_{ij},\ \ 0\leq i,j\leq d,\]
where \(\theta_{0}=r\) and the scale factors are defined as \(h_{0}=1\) and \(h_{i}=r\prod_{k=i+1}^{d}\sin\theta_{k}\) for \(i=1,2,\ldots,d\); the latter includes the empty product, which gives \(h_{d}=r\). The volume element \(dV\) in hyperspherical coordinates is given by
\[dV=\sqrt{|\det g|}\,dr\,d\theta_{1}\ldots d\theta_{d}=\left(r^{d}\prod_{k=2}^ {d}\sin^{k-1}\theta_{k}\right)dr\,d\theta_{1}\ldots d\theta_{d}. \tag{3}\]
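The diagonal structure of the metric and the scale factors are easy to verify numerically; the sketch below is our own check for \(d=3\), building the Jacobian of the coordinate map by central differences.

```python
import numpy as np

def x(c):
    """Cartesian coordinates in R^4 from hyperspherical (r, t1, t2, t3), d = 3."""
    r, t1, t2, t3 = c
    return np.array([r*np.cos(t1)*np.sin(t2)*np.sin(t3),
                     r*np.sin(t1)*np.sin(t2)*np.sin(t3),
                     r*np.cos(t2)*np.sin(t3),
                     r*np.cos(t3)])

c0, eps = np.array([1.3, 0.7, 1.1, 0.9]), 1e-6
J = np.stack([(x(c0 + eps*e) - x(c0 - eps*e)) / (2*eps) for e in np.eye(4)],
             axis=1)                       # column i holds dx/dtheta_i
G = J.T @ J                                # g_ij = sum_k (dx_k/dth_i)(dx_k/dth_j)
r, t1, t2, t3 = c0
h = np.array([1.0, r*np.sin(t2)*np.sin(t3), r*np.sin(t3), r])  # h_0,...,h_3
print(np.allclose(G, np.diag(h**2), atol=1e-7))  # True: g_ij = h_i^2 delta_ij
```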
For notational convenience, we introduce new variables \(\eta_{i}=h_{i}/r\) for \(i=1,2,\ldots,d\) and \(h=\sqrt{|\det g|}\). The gradient operator in hyperspherical coordinates is defined as
\[\nabla=\sum_{i=0}^{d}\frac{1}{h_{i}}\frac{\partial}{\partial\theta_{i}}\hat{ \boldsymbol{\theta}}_{i}=\frac{\partial}{\partial r}\hat{\boldsymbol{r}}+ \frac{1}{r}\sum_{i=1}^{d}\frac{1}{\eta_{i}}\frac{\partial}{\partial\theta_{i}} \hat{\boldsymbol{\theta}}_{i}=\frac{\partial}{\partial r}\hat{\boldsymbol{r}} +\frac{1}{r}\nabla_{S^{d}}, \tag{4}\]
where \(\hat{\boldsymbol{r}},\hat{\boldsymbol{\theta}}_{1},\hat{\boldsymbol{\theta}}_ {2},\ldots,\hat{\boldsymbol{\theta}}_{d}\) are orthonormal hyperspherical basis vectors. The Laplacian is given in terms of the metric tensor by
\[\Delta=\frac{1}{h}\sum_{i=0}^{d}\frac{\partial}{\partial\theta_{i}}\left( \frac{h}{h_{i}^{2}}\frac{\partial}{\partial\theta_{i}}\right). \tag{5}\]
Since \(h_{i}\) is independent of \(\theta_{i}\) and \(h=h_{0}h_{1}\ldots h_{d}\) is separable in hyperspherical coordinates, (5) simplifies to
\[\Delta=\sum_{i=0}^{d}\frac{1}{h_{i}^{2}}\cdot\frac{1}{h}\frac{\partial}{ \partial\theta_{i}}\left(h\frac{\partial}{\partial\theta_{i}}\right)=\frac{1} {r^{d}}\frac{\partial}{\partial r}\left(r^{d}\frac{\partial}{\partial r} \right)+\frac{1}{r^{2}}\Delta_{S^{d}},\]
where \(\Delta_{S^{d}}\) is the spherical Laplacian on the unit \(d\)-sphere \(S^{d}\), given by
\[\Delta_{S^{d}}=\sum_{i=1}^{d}\frac{1}{\eta_{i}^{2}\sin^{i-1}\theta_{i}}\frac{ \partial}{\partial\theta_{i}}\left(\sin^{i-1}\theta_{i}\frac{\partial}{ \partial\theta_{i}}\right). \tag{6}\]
For \(d=2\), we recover the Laplacian in spherical coordinates \((r,\theta_{1},\theta_{2})=(r,\phi,\theta)\):
\[\Delta=\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial}{ \partial r}\right)+\frac{1}{r^{2}}\left[\frac{1}{\sin^{2}\theta}\frac{\partial ^{2}}{\partial\phi^{2}}+\frac{1}{\sin\theta}\frac{\partial}{\partial\theta} \left(\sin\theta\frac{\partial}{\partial\theta}\right)\right].\]
## 3. Change of Variables
For notational convenience, let \(\partial_{r}\) and \(\partial_{i}\) denote the partial derivative with respect to \(r\) and \(\theta_{i}\), respectively, for \(i=1,2,\ldots,d\). We first consider the problem of harmonically extending a function \(\xi(\hat{\theta})\) from \(\partial\Omega_{\varepsilon}\) to \(\Omega_{\varepsilon}\):
\[\begin{split}\frac{1}{r^{d}}\partial_{r}\left(r^{d}\partial_{r}v \right)+\frac{1}{r^{2}}\Delta_{S^{d}}v&=0\ \ \ \ \ \ \ \ \ \ \ \text{in}\ \Omega_{\varepsilon},\\ v&=\xi\ \ \ \ \ \ \ \ \ \ \text{on}\ \partial\Omega_{\varepsilon}.\end{split} \tag{7}\]
Following [18, 19, 20], we introduce the change of variables
\[(r^{\prime},\theta^{\prime})=\left(\frac{r}{1+\varepsilon\rho(\hat{\theta})},\hat {\theta}\right) \tag{8}\]
which maps \(\Omega_{\varepsilon}\) to the unit \((d+1)\)-ball \(B\coloneqq\Omega_{0}\). The partial derivatives in the new coordinates are given by
\[\partial_{r}=\frac{1}{1+\varepsilon\rho(\theta^{\prime})}\partial_{r^{\prime}},\qquad\partial_{i}=\partial_{i^{\prime}}-\frac{\varepsilon r^{\prime} \partial_{i^{\prime}}\rho(\theta^{\prime})}{1+\varepsilon\rho(\theta^{\prime} )}\partial_{r^{\prime}},\ \ i=1,2,\ldots,d.\]
The harmonic extension \(v\) transforms to
\[u_{\varepsilon}(r^{\prime},\theta^{\prime})=v\left((1+\varepsilon\rho(\theta^ {\prime}))r^{\prime},\theta^{\prime}\right).\]
The radial component of the Laplacian transforms to
\[\frac{1}{r^{d}}\partial_{r}\left(r^{d}\partial_{r}v\right) =\frac{1}{(1+\varepsilon\rho)^{d}(r^{\prime})^{d}}\cdot\frac{1}{ 1+\varepsilon\rho}\partial_{r^{\prime}}\left((1+\varepsilon\rho)^{d}(r^{ \prime})^{d}\cdot\frac{1}{1+\varepsilon\rho}\partial_{r^{\prime}}u_{ \varepsilon}\right)\] \[=\frac{1}{(1+\varepsilon\rho)^{2}(r^{\prime})^{d}}\partial_{r^{ \prime}}\left((r^{\prime})^{d}\partial_{r^{\prime}}u_{\varepsilon}\right).\]
Referring to (6), the angular component of the Laplacian transforms to
\[\partial_{i}\left(\sin^{i-1}\theta_{i}\,\partial_{i}v\right) =\partial_{i^{\prime}}\left(\sin^{i-1}\theta^{\prime}_{i}\, \partial_{i^{\prime}}u_{\varepsilon}\right)-\frac{\varepsilon r^{\prime}\sin^ {i-1}\theta^{\prime}_{i}\,\partial_{i^{\prime}}\rho}{1+\varepsilon\rho} \partial_{r^{\prime}}\partial_{i^{\prime}}u_{\varepsilon}\] \[\qquad-\varepsilon r^{\prime}\partial_{i^{\prime}}\left(\frac{ \sin^{i-1}\theta^{\prime}_{i}\,\partial_{i^{\prime}}\rho}{1+\varepsilon\rho} \partial_{r^{\prime}}u_{\varepsilon}\right)+\frac{\varepsilon^{2}\sin^{i-1} \theta^{\prime}_{i}\,(\partial_{i^{\prime}}\rho)^{2}}{(1+\varepsilon\rho)^{2 }}r^{\prime}\partial_{r^{\prime}}\left(r^{\prime}\partial_{r^{\prime}}u_{ \varepsilon}\right)\] \[=\partial_{i^{\prime}}\left(\sin^{i-1}\theta^{\prime}_{i}\, \partial_{i^{\prime}}u_{\varepsilon}\right)-\frac{\varepsilon r^{\prime}\sin^ {i-1}\theta^{\prime}_{i}\,\partial_{i^{\prime}}\rho}{1+\varepsilon\rho} \partial_{r^{\prime}}\partial_{i^{\prime}}u_{\varepsilon}\] \[\qquad-\varepsilon r^{\prime}\left[\frac{\sin^{i-1}\theta^{ \prime}_{i}\,\partial_{i^{\prime}}\rho}{1+\varepsilon\rho}\partial_{i^{ \prime}}\partial_{r^{\prime}}u_{\varepsilon}+\left[\frac{\partial_{i^{\prime} }\left(\sin^{i-1}\theta^{\prime}_{i}\,\partial_{i^{\prime}}\rho\right)}{1+ \varepsilon\rho}-\frac{\varepsilon\sin^{i-1}\theta^{\prime}_{i}\,(\partial_ {i^{\prime}}\rho)^{2}}{(1+\varepsilon\rho)^{2}}\right]\partial_{r^{\prime}}u_ {\varepsilon}\right]\] \[\qquad+\frac{\varepsilon^{2}\sin^{i-1}\theta^{\prime}_{i}\,( \partial_{i^{\prime}}\rho)^{2}}{(1+\varepsilon\rho)^{2}}\Big{[}(r^{\prime})^{2 }\partial_{r^{\prime}}^{2}u_{\varepsilon}+r^{\prime}\partial_{r^{\prime}}u_{ \varepsilon}\Big{]}\] \[=\partial_{i^{\prime}}\left(\sin^{i-1}\theta^{\prime}_{i}\, \partial_{i^{\prime}}u_{\varepsilon}\right)-\frac{\varepsilon r^{\prime} \partial_{i^{\prime}}\left(\sin^{i-1}\theta^{\prime}_{i}\,\partial_{i^{ \prime}}\rho\right)}{1+\varepsilon\rho}\partial_{r^{\prime}}u_{\varepsilon}\] \[\qquad-\frac{2\varepsilon r^{\prime}\sin^{i-1}\theta^{\prime}_{i} \,\partial_{i^{\prime}}\rho}{1+\varepsilon\rho}\partial_{i^{\prime}}\partial_{ r^{\prime}}u_{\varepsilon}+\frac{\varepsilon^{2}(r^{\prime})^{2}\sin^{i-1} \theta^{\prime}_{i}\,(\partial_{i^{\prime}}\rho)^{2}}{(1+\varepsilon\rho)^{2 }}\left[\partial_{r^{\prime}}^{2}u_{\varepsilon}+\frac{2}{r^{\prime}} \partial_{r^{\prime}}u_{\varepsilon}\right].\]
Substituting the transformed radial and angular components into Laplace's equation (7), substituting \(r^{-2}=(r^{\prime})^{-2}(1+\varepsilon\rho)^{-2}\) in front of \(\Delta_{S^{d}}\), multiplying by \((1+\varepsilon\rho)^{4}\), and dropping the primes on the transformed variables, we obtain
\[0 =(1+\varepsilon\rho)^{4}\Delta v\] \[=(1+\varepsilon\rho)^{2}\Delta u_{\varepsilon}-\frac{\varepsilon( 1+\varepsilon\rho)}{r}(\Delta_{S^{d}}\rho)\partial_{r}u_{\varepsilon}-\frac{2 \varepsilon(1+\varepsilon\rho)}{r}\sum_{i=1}^{d}\frac{\partial_{i}\rho}{\eta_{i} ^{2}}\partial_{i}\partial_{r}u_{\varepsilon}\] \[\qquad+\varepsilon^{2}\sum_{i=1}^{d}\frac{(\partial_{i}\rho)^{2 }}{\eta_{i}^{2}}\Big{[}\partial_{r}^{2}u_{\varepsilon}+\frac{2}{r}\partial_{ r}u_{\varepsilon}\Big{]}.\]
Finally, collecting terms of order \(\varepsilon\) and \(\varepsilon^{2}\), we see that \(u_{\varepsilon}\) satisfies the following transformed harmonic extension problem:
\[\Delta u_{\varepsilon} =\varepsilon L_{1}u_{\varepsilon}+\varepsilon^{2}L_{2}u_{ \varepsilon} \text{in }B,\] \[u_{\varepsilon}(1,\theta) =\xi(\theta) \text{on }S^{d}, \tag{9}\]
where \(\theta=(\theta_{1},\theta_{2},\dots,\theta_{d})\) and
\[L_{1}u_{\varepsilon} =-2\rho\Delta u_{\varepsilon}+(\Delta_{S^{d}}\rho)r^{-1}\partial _{r}u_{\varepsilon}+2\sum_{i=1}^{d}\frac{\partial_{i}\rho}{\eta_{i}^{2}}r^{-1} \partial_{i}\partial_{r}u_{\varepsilon},\] \[L_{2}u_{\varepsilon} =-\rho^{2}\Delta u_{\varepsilon}+(\rho\Delta_{S^{d}}\rho)r^{-1} \partial_{r}u_{\varepsilon}+2\rho\sum_{i=1}^{d}\frac{\partial_{i}\rho}{\eta_{ i}^{2}}r^{-1}\partial_{i}\partial_{r}u_{\varepsilon}-\sum_{i=1}^{d}\frac{( \partial_{i}\rho)^{2}}{\eta_{i}^{2}}\Big{[}\partial_{r}^{2}u_{\varepsilon}+2r^ {-1}\partial_{r}u_{\varepsilon}\Big{]}\] \[=\rho^{2}\Delta u_{\varepsilon}+\rho L_{1}u_{\varepsilon}-\sum_{i =1}^{d}\frac{(\partial_{i}\rho)^{2}}{\eta_{i}^{2}}\Big{[}\partial_{r}^{2}u_{ \varepsilon}+2r^{-1}\partial_{r}u_{\varepsilon}\Big{]}.\]
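The algebraic identity used in the last line for \(L_{2}\) can be verified symbolically. The sketch below is ours; it treats the derivatives of \(u_{\varepsilon}\) as independent symbols and collapses the sum over \(i\) to a single representative term.

```python
import sympy as sp

rho, Lrho, drho, eta, r = sp.symbols('rho Lap_rho drho eta r', real=True)
# Du = Laplacian(u), ur = d_r u, urr = d_r^2 u, uir = d_i d_r u
Du, ur, urr, uir = sp.symbols('Du ur urr uir', real=True)

L1 = -2*rho*Du + Lrho*ur/r + 2*(drho/eta**2)*uir/r
S = (drho**2/eta**2)*(urr + 2*ur/r)
L2 = -rho**2*Du + rho*Lrho*ur/r + 2*rho*(drho/eta**2)*uir/r - S

print(sp.simplify(L2 - (rho**2*Du + rho*L1 - S)))  # 0: the two forms agree
```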
## 4. Analyticity of Harmonic Extension
In this section we show that the solution \(u_{\varepsilon}\) of the transformed harmonic extension problem (9) is analytic with respect to \(\varepsilon\). We begin by formally expanding \(u_{\varepsilon}\) as a power series in \(\varepsilon\):
\[u_{\varepsilon}(r,\theta)=\sum_{n=0}^{\infty}u_{n}(r,\theta)\varepsilon^{n}. \tag{10}\]
Substituting (10) into (9) and collecting terms in powers of \(\varepsilon\), we see that \(u_{0}\) satisfies
\[\Delta u_{0} =0 \text{in }B,\] \[u_{0}(1,\theta) =\xi(\theta) \text{on }S^{d}, \tag{11}\]
and \(u_{n}\) for \(n=1,2,\dots\) satisfies the following recursive formula (with the convention \(u_{-1}\equiv 0\)):
\[\Delta u_{n} =L_{1}u_{n-1}+L_{2}u_{n-2} \text{in }B,\] \[u_{n}(1,\theta) =0 \text{on }S^{d}. \tag{12}\]
Our first lemma establishes an elliptic estimate for Poisson's equation with Dirichlet boundary data, which is analogous to [13, Lemma 3.1]. We now review some basic facts about hyperspherical harmonics before stating Lemma 4.1; see [1] for more details.
For \(\ell=0,1,2,\dots\), a hyperspherical harmonic of order \(\ell\) on \(S^{d}\) is the restriction to \(S^{d}\) of a homogeneous harmonic polynomial of degree \(\ell\). Let \(\mathbf{H}_{\ell}^{d}\) denote the space of all hyperspherical harmonics of order \(\ell\) on \(S^{d}\). The space \(\mathbf{H}_{\ell}^{d}\) has an orthonormal basis \(\{Y_{\ell}^{m}(\theta)\}_{m=1}^{N(d,\ell)}\) with respect to the \(L^{2}\)-inner product over \(S^{d}\), where \(N(d,\ell)\) is the dimension of \(\mathbf{H}_{\ell}^{d}\). It is well-known that each \(Y_{\ell}^{m}\) is an eigenfunction of \(-\Delta_{S^{d}}\) with corresponding eigenvalue \(\ell(\ell+d-1)\), _i.e.,_
\[-\Delta_{S^{d}}Y_{\ell}^{m}(\theta)=\ell(\ell+d-1)Y_{\ell}^{m}(\theta),\ \ 1 \leq m\leq N(d,\ell). \tag{13}\]
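This eigenvalue relation is easy to verify for a concrete harmonic. The sketch below is our own check for \(d=3\) and \(\ell=2\): since the degree-\(0\) homogeneous extension of \(Y\) has no radial dependence, the decomposition of \(\Delta\) from Section 2 gives \(\Delta f=r^{-2}\Delta_{S^{d}}Y\) for \(f=Y(\theta)\), so the Cartesian Laplacian of \(x_{1}x_{2}/r^{2}\) can be compared against \(-\ell(\ell+d-1)Y\).

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1:5', real=True)
X = (x1, x2, x3, x4)
r2 = sum(xi**2 for xi in X)

f = x1*x2 / r2                            # degree-0 extension of Y = x1*x2 on S^3
lap = sum(sp.diff(f, xi, 2) for xi in X)  # Cartesian Laplacian in R^4

# expect lap = -ell(ell + d - 1) * x1*x2 / r^4 with ell = 2, d = 3
print(sp.simplify(lap + 8*x1*x2/r2**2))   # 0
```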
A direct computation using Green's identity, together with (13), gives the following integral identity:
\[\int_{S^{d}}\left|\nabla_{S^{d}}Y_{\ell}^{m}(\theta)\right|^{2}d\sigma_{d}( \theta)=\int_{S^{d}}\ell(\ell+d-1)\left|Y_{\ell}^{m}(\theta)\right|^{2}d \sigma_{d}(\theta), \tag{14}\]
where \(d\sigma_{d}\) is the area element of \(S^{d}\) in hyperspherical coordinates, satisfying the relation \(dV=d\sigma_{d}(\theta)\,r^{d}dr\); see (3). The sum of the spaces \(\mathbf{H}_{\ell}^{d}\) is dense in \(L^{2}(S^{d})\); thus any \(\xi\in L^{2}(S^{d})\) can be expanded in
hyperspherical harmonics:
\[\xi(\theta)=\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)}\hat{\xi}(\ell,m)Y_{\ell} ^{m}(\theta),\ \ \hat{\xi}(\ell,m)\coloneqq\int_{S^{d}}\xi(\theta)\overline{Y_{\ell}^{m}(\theta)} \,d\sigma_{d}(\theta). \tag{15}\]
This leads us to define the Sobolev space \(H^{s}(S^{d})\) of order \(s\geq 0\) as the subspace of \(L^{2}(S^{d})\) with norm
\[\|\xi\|_{H^{s}(S^{d})}^{2}\coloneqq\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell) }\left(1+\ell(\ell+d-1)\right)^{s}\left|\hat{\xi}(\ell,m)\right|^{2}.\]
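In practice this norm is computed directly from the coefficients; a small sketch (ours, with hypothetical coefficient data) follows.

```python
import numpy as np

def hs_norm(xi_hat, ells, d, s):
    """H^s(S^d) norm of xi from hyperspherical-harmonic coefficients;
    xi_hat[k] is the coefficient of degree ells[k] (for some order m)."""
    xi_hat = np.asarray(xi_hat)
    ells = np.asarray(ells, dtype=float)
    w = (1.0 + ells * (ells + d - 1.0)) ** s
    return np.sqrt(np.sum(w * np.abs(xi_hat) ** 2))

print(hs_norm([1.0, 0.5, 0.25], ells=[0, 1, 2], d=3, s=2))  # toy data on S^3
```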
**Lemma 4.1**.: _Given \(d\geq 3\) and \(s\in\mathbb{N}\), there exists a constant \(K_{0}>0\) such that for any \(F\in H^{s-1}(B)\) and \(\xi\in H^{s+\frac{1}{2}}(S^{d})\), the solution of_
\[\Delta w(r,\theta) =F(r,\theta) \text{in }B,\] \[w(1,\theta) =\xi(\theta) \text{on }S^{d},\]
_satisfies the following estimate:_
\[\|w\|_{H^{s+1}(B)}\leq K_{0}\left(\|F\|_{H^{s-1}(B)}+\|\xi\|_{H^{s+\frac{1}{2} }(S^{d})}\right).\]
Proof.: We will prove the result for \(s=0\) only, as the proof for \(s=1,2,\dots\) is similar. Since \(\xi\in H^{\frac{1}{2}}(S^{d})\), \(\xi\) has a hyperspherical harmonic expansion (15). Let us decompose \(w=\Phi+V\) where
\[\Phi(r,\theta)=\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)}\hat{\xi}(\ell,m)r^ {\ell}Y_{\ell}^{m}(\theta).\]
It is clear that \(\Phi\) is harmonic in \(B\) and \(\Phi=\xi\) on \(S^{d}\), which means \(V\) solves the following boundary value problem:
\[\Delta V(r,\theta) =F(r,\theta) \text{in }B, \tag{16b}\] \[V(1,\theta) =0 \text{on }S^{d}. \tag{16a}\]
Multiplying (16a) by \(\overline{V}\) and integrating by parts using (16b) yields
\[\|V\|_{H^{1}_{0}(B)}^{2}\coloneqq\|\nabla V\|_{L^{2}(B)}^{2}=-\langle F,V \rangle_{H^{-1}(B),H^{1}_{0}(B)},\]
where the latter denotes the duality pairing between \(H^{1}_{0}(B)\) and \(H^{-1}(B)\). Using Poincare inequality and duality, there exists a constant \(C_{B}>0\) such that
\[\|V\|_{H^{1}(B)}\leq C_{B}\|V\|_{H^{1}_{0}(B)}\leq C_{B}\|F\|_{H^{-1}(B)}. \tag{17}\]
It remains to show that \(\|\Phi\|_{H^{1}(B)}\) is controlled by \(\|\xi\|_{H^{\frac{1}{2}}(S^{d})}\). Since \(Y_{\ell}^{m}\) is orthonormal in \(L^{2}(S^{d})\), we obtain
\[\|\Phi\|_{L^{2}(B)}^{2} =\sum_{\ell,m}\left|\hat{\xi}(\ell,m)\right|^{2}\int_{0}^{1}\left( \int_{S^{d}}r^{2\ell}\left|Y_{\ell}^{m}(\theta)\right|^{2}d\sigma_{d}(\theta) \right)r^{d}dr\] \[=\sum_{\ell,m}\left|\hat{\xi}(\ell,m)\right|^{2}\int_{0}^{1}r^{2 \ell+d}\,dr=\sum_{\ell,m}\left[\frac{1}{2\ell+d+1}\right]\left|\hat{\xi}(\ell,m)\right|^{2}.\]
Similarly, using (4) to compute \(\nabla\Phi\) and using the integral identity (14), we obtain
\[\|\nabla\Phi\|_{L^{2}(B)}^{2}=\sum_{\ell,m}\left|\hat{\xi}(\ell,m)\right|^{2} \int_{0}^{1}\left(\int_{S^{d}}\left(\ell^{2}r^{2(\ell-1)}\left|Y_{\ell}^{m}( \theta)\right|^{2}+\frac{r^{2\ell}}{r^{2}}\left|\nabla_{S^{d}}Y_{\ell}^{m}( \theta)\right|^{2}\right)d\sigma_{d}(\theta)\right)r^{d}dr\]
Evaluating the radial integrals as before and noting that \(\ell^{2}+\ell(\ell+d-1)=\ell(2\ell+d-1)\), we find

\[\|\nabla\Phi\|_{L^{2}(B)}^{2}=\sum_{\ell,m}\ell\left|\hat{\xi}(\ell,m)\right|^{2}\leq\sum_{\ell,m}\left(1+\ell(\ell+d-1)\right)^{\frac{1}{2}}\left|\hat{\xi}(\ell,m)\right|^{2}=\|\xi\|_{H^{\frac{1}{2}}(S^{d})}^{2}.\]

Since the coefficients \(1/(2\ell+d+1)\) in the expression for \(\|\Phi\|_{L^{2}(B)}^{2}\) are bounded by \(1\), we also have \(\|\Phi\|_{L^{2}(B)}\leq\|\xi\|_{H^{\frac{1}{2}}(S^{d})}\), and hence \(\|\Phi\|_{H^{1}(B)}\leq\sqrt{2}\,\|\xi\|_{H^{\frac{1}{2}}(S^{d})}\). Combining this with (17) and the triangle inequality yields the claimed estimate with \(K_{0}=\max\{\sqrt{2},C_{B}\}\).

The next lemma bounds the operators \(L_{1}\) and \(L_{2}\) appearing in the recursion (12). Here and below, \(M=M(d,s)>0\) denotes a constant for which the algebra estimate

\[\|fw\|_{H^{s}(B)}\leq M\left|f\right|_{C^{s}}\|w\|_{H^{s}(B)} \tag{19a}\]

holds.

**Lemma 4.2**.: _Given \(d\geq 3\) and \(s\in\mathbb{N}\), let \(\rho\in C^{s+2}(S^{d})\), and suppose that for some constants \(K_{1}>0\) and \(A>M\left|\rho\right|_{C^{s+2}}\) the solutions of (11)-(12) satisfy_

\[\|u_{n}\|_{H^{s+2}(B)}\leq K_{1}A^{n}\quad\text{for all }n<N. \tag{20}\]

_Then there exists a constant \(C_{0}=C_{0}(d,s)>0\) such that_

\[\|L_{1}u_{N-1}\|_{H^{s}(B)} \leq C_{0}\left|\rho\right|_{C^{s+2}}K_{1}A^{N-1},\] \[\|L_{2}u_{N-2}\|_{H^{s}(B)} \leq C_{0}\left|\rho\right|_{C^{s+2}}K_{1}A^{N-1}.\]
Proof.: For notational convenience, we suppress the dependence of \(M\) on \(d\) and \(s\). Throughout this proof we will use the fact that both \(\left|\rho\right|_{C^{s}}\) and \(\left|\rho\right|_{C^{s+1}}\) are bounded above by \(\left|\rho\right|_{C^{s+2}}\). First, a close inspection of \(L_{1}u_{N-1}\) reveals that all differential operators acting on \(u_{N-1}\) are second order, since we may write \(L_{1}u_{N-1}\) as follows:
\[L_{1}u_{N-1}=-2\rho\cdot\Delta u_{N-1}+(\Delta_{S^{d}}\rho)\cdot\frac{\partial _{r}u_{N-1}}{r}+2\sum_{i=1}^{d}\frac{\partial_{i}\rho}{\eta_{i}}\cdot\frac{1} {h_{i}}\partial_{i}\partial_{r}u_{N-1}.\]
Estimating each term in \(L_{1}u_{N-1}\) using the inequality (19a) and noting that \(\partial_{i}\rho/\eta_{i}\) is a first order partial derivative on \(S^{d}\), we obtain
\[\|L_{1}u_{N-1}\|_{H^{s}(B)} \leq\|2\rho\Delta u_{N-1}\|_{H^{s}(B)}+\|(\Delta_{S^{d}}\rho) \cdot r^{-1}\partial_{r}u_{N-1}\|_{H^{s}(B)}\] \[\qquad+2\sum_{i=1}^{d}\left\|\frac{\partial_{i}\rho}{\eta_{i}} \cdot\frac{1}{h_{i}}\partial_{i}\partial_{r}u_{N-1}\right\|_{H^{s}(B)}\]
\[\stackrel{{\eqref{eq:19a}}}{{\leq}} 2M\left|\rho\right|_{C^{s}}\left\|\Delta u_{N-1}\right\|_{H^{s}(B)}+M \left|\Delta_{S^{d}}\rho\right|_{C^{s}}\left\|r^{-1}\partial_{r}u_{N-1}\right\| _{H^{s}(B)}\] \[+2\sum_{i=1}^{d}M\left|\eta_{i}^{-1}\partial_{i}\rho\right|_{C^{s }}\left\|(h_{i})^{-1}\partial_{i}\partial_{r}u_{N-1}\right\|_{H^{s}(B)}\] \[\leq 2M\left|\rho\right|_{C^{s}}\left\|u_{N-1}\right\|_{H^{s+2}(B)} +M\left|\rho\right|_{C^{s+2}}\left\|u_{N-1}\right\|_{H^{s+2}(B)}\] \[+2\sum_{i=1}^{d}M\left|\rho\right|_{C^{s+1}}\left\|u_{N-1}\right\| _{H^{s+2}(B)}\] \[\leq(3+2d)M\left|\rho\right|_{C^{s+2}}\left\|u_{N-1}\right\|_{H^ {s+2}(B)}\] \[\stackrel{{\eqref{eq:20}}}{{\leq}} (3+2d)M\left|\rho\right|_{C^{s+2}}K_{1}A^{N-1}.\]
The estimate for \(L_{2}u_{N-2}\) is similar. Estimating the second term \(\rho L_{1}u_{N-2}\) in \(L_{2}u_{N-2}\) using the inequality (19a) once and adapting the estimate for \(L_{1}u_{N-1}\) from above, we obtain
\[\left\|\rho L_{1}u_{N-2}\right\|_{H^{s}(B)}\leq M\left|\rho\right|_{C^{s}} \left\|L_{1}u_{N-2}\right\|_{H^{s}(B)}\leq(3+2d)M^{2}\left|\rho\right|_{C^{s+2 }}^{2}K_{1}A^{N-2}.\]
Estimating the remaining terms in \(L_{2}u_{N-2}\) using the inequality (19a) twice and noticing that all differential operators acting on \(u_{N-2}\) are second order, we obtain
\[\left\|L_{2}u_{N-2}\right\|_{H^{s}(B)} \leq\left\|\rho L_{1}u_{N-2}\right\|_{H^{s}(B)}+\left\|\rho^{2} \Delta u_{N-2}\right\|_{H^{s}(B)}\] \[+\sum_{i=1}^{d}\left\|\left(\frac{\partial_{i}\rho}{\eta_{i}} \right)^{2}\left[\partial_{r}^{2}u_{N-2}+2r^{-1}\partial_{r}u_{N-2}\right]\right\|\] \[\stackrel{{\eqref{eq:19a}}}{{\leq}} (3+2d)M^{2}\left|\rho\right|_{C^{s+2}}^{2}K_{1}A^{N-2}+M^{2} \left|\rho\right|_{C^{s}}^{2}\left\|\Delta u_{N-2}\right\|_{H^{s}(B)}\] \[+\sum_{i=1}^{d}M^{2}\left|\eta_{i}^{-1}\partial_{i}\rho\right|_{C^ {s}}^{2}\left(\left\|\partial_{r}^{2}u_{N-2}\right\|_{H^{s}(B)}+2\|r^{-1} \partial_{r}u_{N-2}\right\|_{H^{s}(B)}\right)\] \[\leq(3+2d)M^{2}\left|\rho\right|_{C^{s+2}}^{2}K_{1}A^{N-2}+M^{2} \left|\rho\right|_{C^{s}}^{2}\left\|u_{N-2}\right\|_{H^{s+2}(B)}\] \[+\sum_{i=1}^{d}M^{2}\left|\rho\right|_{C^{s+1}}^{2}\left(3\|u_{N- 2}\|_{H^{s+2}(B)}\right)\] \[\leq(3+2d)M^{2}\left|\rho\right|_{C^{s+2}}^{2}K_{1}A^{N-2}+(1+3d) M^{2}\left|\rho\right|_{C^{s+2}}^{2}\left\|u_{N-2}\right\|_{H^{s+2}(B)}\] \[\stackrel{{\eqref{eq:20}}}{{\leq}} (3+2d)M^{2}\left|\rho\right|_{C^{s+2}}^{2}K_{1}A^{N-2}+(1+3d)M^{2} \left|\rho\right|_{C^{s+2}}^{2}K_{1}A^{N-2}\] \[\leq(4+5d)M\left|\rho\right|_{C^{s+2}}K_{1}A^{N-1},\]
assuming \(A>M\left|\rho\right|_{C^{s+2}}\). Finally, the desired result follows by choosing \(C_{0}=(4+5d)M\).
Finally, we justify the convergence of (10) for sufficiently small \(\varepsilon>0\).
**Theorem 4.3**.: _Given \(d\geq 3\) and \(s\in\mathbb{N}\), if \(\rho\in C^{s+2}(S^{d})\) and \(\xi\in H^{s+\frac{3}{2}}(S^{d})\), there exist constants \(C_{0}\) and \(K_{0}\) and a unique solution \(u_{\varepsilon}\) of (9) satisfying (10) such that_
\[\left\|u_{n}\right\|_{H^{s+2}(B)}\leq K_{0}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^ {n} \tag{21}\]
_for any \(A>\max\{2K_{0}C_{0}\left|\rho\right|_{C^{s+2}},M\left|\rho\right|_{C^{s+2}}\}\)._
Proof.: The proof is similar to [10, Theorem 3.3]. We proceed by induction. The case \(n=0\) follows from applying Lemma 4.1 to \(u_{0}\) satisfying (11), with \(F\equiv 0\):
\[\|u_{0}\|_{H^{s+2}(B)}\leq K_{0}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}=K_{0}\|\xi\|_ {H^{s+\frac{3}{2}}(S^{d})}A^{0}.\]
Suppose inequality (21) holds for \(n<N\). Applying Lemma 4.1 to \(u_{N}\) satisfying (12), with \(F=L_{1}u_{N-1}+L_{2}u_{N-2}\), we obtain
\[\|u_{N}\|_{H^{s+2}(B)}\leq K_{0}\left(\|L_{1}u_{N-1}\|_{H^{s}(B)}+\|L_{2}u_{N-2} \|_{H^{s}(B)}\right).\]
Using Lemma 4.2 with \(K_{1}=K_{0}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}\) from the induction hypothesis, we may bound \(\|L_{1}u_{N-1}\|_{H^{s}(B)}\) and \(\|L_{2}u_{N-2}\|_{H^{s}(B)}\) so that
\[\|u_{N}\|_{H^{s+2}(B)}\leq K_{0}\left(2C_{0}\left|\rho\right|_{C^ {s+2}}K_{1}A^{N-1}\right) =2K_{0}^{2}C_{0}\left|\rho\right|_{C^{s+2}}\left\|\xi\right\|_{H^{ s+\frac{3}{2}}(S^{d})}A^{N-1}\] \[\leq K_{0}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N},\]
thereby establishing (21) for \(n=N\), provided \(A>2K_{0}C_{0}\left|\rho\right|_{C^{s+2}}\).
## 5. Analyticity of DNO and Steklov Eigenvalues
This last section is devoted to proving analyticity of DNO and subsequently the analyticity of Steklov eigenvalues, with respect to the perturbation parameter \(\varepsilon\). Recall that the DNO \(G_{\rho,\varepsilon}\colon H^{s+\frac{1}{2}}(\partial\Omega_{\varepsilon}) \to H^{s-\frac{1}{2}}(\partial\Omega_{\varepsilon})\) is given by
\[G_{\rho,\varepsilon}\xi=\mathbf{n}_{\rho,\varepsilon}\cdot\nabla v|_{r=1+ \varepsilon\rho},\]
where \(\mathbf{n}_{\rho,\varepsilon}\) is the unit normal vector of \(\partial\Omega_{\varepsilon}\) and \(v\) is the harmonic extension of \(\xi\) from \(\partial\Omega_{\varepsilon}\) to \(\Omega_{\varepsilon}\), satisfying (7). We may represent the hypersurface \(\partial\Omega_{\varepsilon}\) as the zero level set of the implicit function \(J\), _i.e.,_
\[\partial\Omega_{\varepsilon}=\left\{(r,\hat{\theta})\colon J(r,\hat{\theta}) =\frac{r}{1+\varepsilon\rho(\hat{\theta})}-1=0\right\}.\]
A normal vector to \(\partial\Omega_{\varepsilon}\) is given by the gradient \(\nabla J\). Using (4), we obtain
\[\nabla J=\frac{1}{1+\varepsilon\rho}\hat{\boldsymbol{r}}+\frac{1}{r}\sum_{i= 1}^{d}\frac{1}{\eta_{i}}\cdot\frac{-\varepsilon r\partial_{i}\rho}{(1+ \varepsilon\rho)^{2}}\,\hat{\boldsymbol{\theta}}_{i}=\frac{1}{(1+\varepsilon \rho)^{2}}\left[(1+\varepsilon\rho)\hat{\boldsymbol{r}}-\varepsilon\sum_{i=1} ^{d}\frac{\partial_{i}\rho}{\eta_{i}}\hat{\boldsymbol{\theta}}_{i}\right].\]
It follows that
\[\mathbf{n}_{\rho,\varepsilon}=\frac{\nabla J}{\|\nabla J\|_{2}} =\left((1+\varepsilon\rho)^{2}+\varepsilon^{2}\sum_{i=1}^{d} \frac{(\partial_{i}\rho)^{2}}{\eta_{i}^{2}}\right)^{-1/2}\left[(1+\varepsilon \rho)\,\hat{\boldsymbol{r}}-\varepsilon\sum_{i=1}^{d}\frac{\partial_{i}\rho}{ \eta_{i}}\hat{\boldsymbol{\theta}}_{i}\right]\] \[=M_{\rho,\varepsilon}\left[(1+\varepsilon\rho)\,\hat{ \boldsymbol{r}}-\varepsilon\sum_{i=1}^{d}\frac{\partial_{i}\rho}{\eta_{i}}\hat {\boldsymbol{\theta}}_{i}\right]\]
and the DNO is given by
\[G_{\rho,\varepsilon}\xi=M_{\rho,\varepsilon}\left[(1+\varepsilon\rho)\partial _{r}v-\frac{\varepsilon}{r}\sum_{i=1}^{d}\frac{\partial_{i}\rho}{\eta_{i}^{2}} \partial_{i}v\right]\bigg{|}_{r=1+\varepsilon\rho}\]
Next we transform the DNO from \(\partial\Omega_{\varepsilon}\) to \(S^{d}\) using the change of variables (8). Setting
\[u_{\varepsilon}(r^{\prime},\theta^{\prime})=v\left((1+\varepsilon\rho(\theta^{ \prime}))r^{\prime},\theta^{\prime}\right)\]
and dropping the primes on the transformed variables, we obtain
\[G_{\rho,\varepsilon}\xi =M_{\rho,\varepsilon}\left[\partial_{r}u_{\varepsilon}-\frac{ \varepsilon}{r(1+\varepsilon\rho)}\sum_{i=1}^{d}\frac{\partial_{i}\rho}{\eta_{i} ^{2}}\left(\partial_{i}u_{\varepsilon}-\frac{\varepsilon r\partial_{i}\rho}{(1+ \varepsilon\rho)}\partial_{r}u_{\varepsilon}\right)\right]\bigg{|}_{r=1}\] \[=M_{\rho,\varepsilon}\widehat{G}_{\rho,\varepsilon}\xi,\]
where
\[\widehat{G}_{\rho,\varepsilon}\xi=\partial_{r}u_{\varepsilon}(1,\theta)-\frac{ \varepsilon}{(1+\varepsilon\rho)}\sum_{i=1}^{d}\frac{\partial_{i}\rho}{\eta_{i }^{2}}\partial_{i}u_{\varepsilon}(1,\theta)+\frac{\varepsilon^{2}}{(1+ \varepsilon\rho)^{2}}\sum_{i=1}^{d}\frac{(\partial_{i}\rho)^{2}}{\eta_{i}^{2}} \partial_{r}u_{\varepsilon}(1,\theta). \tag{22}\]
Since \(M_{\rho,\varepsilon}\) is clearly analytic near \(\varepsilon=0\), we need only show the analyticity of \(\widehat{G}_{\rho,\varepsilon}\) near \(\varepsilon=0\). Multiplying (22) by \((1+\varepsilon\rho)^{2}\) and collecting terms of order \(\varepsilon\) and \(\varepsilon^{2}\), we see that \(\widehat{G}_{\rho,\varepsilon}\xi\) satisfies
\[\widehat{G}_{\rho,\varepsilon}\xi=\partial_{r}u_{\varepsilon}(1,\theta)+ \varepsilon\Big{[}(T_{1}u_{\varepsilon})(1,\theta)-2\rho\widehat{G}_{\rho, \varepsilon}\xi\Big{]}+\varepsilon^{2}\Big{[}(T_{2}u_{\varepsilon})(1, \theta)-\rho^{2}\widehat{G}_{\rho,\varepsilon}\xi\Big{]}, \tag{23}\]
where
\[T_{1}u_{\varepsilon} =2\rho\partial_{r}u_{\varepsilon}-\sum_{i=1}^{d}\frac{\partial_{ i}\rho}{\eta_{i}^{2}}\partial_{i}u_{\varepsilon},\] \[T_{2}u_{\varepsilon} =\rho^{2}\partial_{r}u_{\varepsilon}-\sum_{i=1}^{d}\frac{\rho \partial_{i}\rho}{\eta_{i}^{2}}\partial_{i}u_{\varepsilon}+\sum_{i=1}^{d} \frac{(\partial_{i}\rho)^{2}}{\eta_{i}^{2}}\partial_{r}u_{\varepsilon}.\]
We now formally expand \(\widehat{G}_{\rho,\varepsilon}\) as a power series in \(\varepsilon\):
\[\widehat{G}_{\rho,\varepsilon}\xi=\sum_{n=0}^{\infty}\varepsilon^{n}\widehat{G }_{\rho,n}\xi. \tag{24}\]
Substituting (24) and (10) into (23) and collecting terms in powers of \(\varepsilon\), we see that \(\widehat{G}_{\rho,0}\) satisfies
\[\widehat{G}_{\rho,0}\xi=\partial_{r}u_{0}(1,\theta) \tag{25}\]
and \(\widehat{G}_{\rho,n}\) for \(n=1,2,\dots\) satisfies the following recursive formula:
\[\widehat{G}_{\rho,n}\xi=\partial_{r}u_{n}(1,\theta)+(T_{1}u_{n-1})(1,\theta)- 2\rho\widehat{G}_{\rho,n-1}\xi+(T_{2}u_{n-2})(1,\theta)-\rho^{2}\widehat{G}_{ \rho,n-2}\xi. \tag{26}\]
The following theorem justifies the convergence of (24) for suitably small \(\varepsilon>0\) and thereby proves Theorem 1.1 for \(d\geq 3\).
**Theorem 5.1**.: _Given \(d\geq 3\) and \(s\in\mathbb{N}\), if \(\rho\in C^{s+2}(S^{d})\) and \(\xi\in H^{s+\frac{3}{2}}(S^{d})\), then there exist constants \(K_{1}=K_{1}(B,d,s)>0\) and \(\alpha=\alpha(d)>1\) such that_
\[\left\|\widehat{G}_{\rho,n}\xi\right\|_{H^{s+\frac{1}{2}}(S^{d})}\leq K_{1} \|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{n} \tag{27}\]
_for \(A>\alpha\max\{2K_{0}C_{0}\left|\rho\right|_{C^{s+2}},M\left|\rho\right|_{C^{s +2}}\}\)._
Proof.: Throughout this proof we will use two facts: (1) both \(\left|\rho\right|_{C^{s}}\) and \(\left|\rho\right|_{C^{s+1}}\) are bounded above by \(\left|\rho\right|_{C^{s+2}}\), and (2) if \(P\) is any first-order differential operator on \(S^{d}\), then the continuity of the trace operator on \(H^{s+1}(B)\) implies the existence of a constant \(C_{B}>0\) such that
\[\left\|Pu_{n}\right\|_{H^{s+\frac{1}{2}}(S^{d})}\leq C_{B}\|Pu_{n}\|_{H^{s+1}(B )}\leq C_{B}\|u_{n}\|_{H^{s+2}(B)}\leq C_{B}K_{0}\|\xi\|_{H^{s+\frac{3}{2}}(S^{ d})}A^{n}, \tag{28}\]
where the last inequality follows from Theorem 4.3.
We proceed by induction. Set \(K_{1}=\beta C_{B}K_{0}\) for some constant \(\beta>1\). The case \(n=0\) follows directly from (25) and (28):
\[\left\|\widehat{G}_{\rho,0}\xi\right\|_{H^{s+\frac{1}{2}}(S^{d})}\leq C_{B}K_{0 }\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{0}<K_{1}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})} A^{0}.\]
Now suppose that (27) holds for \(n<N\). From the recursive formula (26), we estimate each term in \(\widehat{G}_{\rho,N}\xi\) individually. The first term can be estimated using (28) with \(n=N\):
\[\left\|\partial_{r}u_{N}\right\|_{H^{s+\frac{1}{2}}(S^{d})}\leq C_{B}K_{0}\| \xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N}.\]
To estimate the terms \(2\rho\widehat{G}_{\rho,N-1}\xi\) and \(\rho^{2}\widehat{G}_{\rho,N-2}\xi\), we use the inequality (19b) with \(\delta=1/2\) together with the induction hypothesis:
\[2\|\rho\widehat{G}_{\rho,N-1}\xi\|_{H^{s+\frac{1}{2}}(S^{d})} \stackrel{{\eqref{eq:19b}}}{{\leq}}2M\left|\rho\right|_{C^{s+1}}\left\|\widehat{G}_{\rho,N-1}\xi\right\|_{H^{s+\frac{1}{2}}(S^{d})}\leq 2M\left|\rho\right|_{C^{s+2}}K_{1}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N-1}\] \[\|\rho^{2}\widehat{G}_{\rho,N-2}\xi\|_{H^{s+\frac{1}{2}}(S^{d})} \stackrel{{\eqref{eq:19b}}}{{\leq}}M^{2}\left|\rho\right|_{C^{s+1}}^{2}\left\|\widehat{G}_{\rho,N-2}\xi\right\|_{H^{s+\frac{1}{2}}(S^{d})}\leq M^{2}\left|\rho\right|_{C^{s+2}}^{2}K_{1}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N-2}.\]
To estimate the terms \(T_{1}u_{N-1}(1,\theta)\) and \(T_{2}u_{N-2}(1,\theta)\), we observe that the differential operators acting on \(u_{N-1}\) and \(u_{N-2}\) are first order on \(S^{d}\). Using the inequality (19b) with \(\delta=1/2\) and (28), we obtain
\[\left\|T_{1}u_{N-1}\right\|_{H^{s+\frac{1}{2}}(S^{d})} \leq\left\|2\rho\partial_{r}u_{N-1}\right\|_{H^{s+\frac{1}{2}}(S^ {d})}+\sum_{i=1}^{d}\left\|\frac{\partial_{i}\rho}{\eta_{i}}\cdot\frac{ \partial_{i}u_{N-1}}{\eta_{i}}\right\|_{H^{s+\frac{1}{2}}(S^{d})}\] \[\stackrel{{\eqref{eq:20}}}{{\leq}}2M\left|\rho \right|_{C^{s+1}}\left\|\partial_{r}u_{N-1}\right\|_{H^{s+\frac{1}{2}}(S^{d}) }+\sum_{i=1}^{d}M\left|\rho\right|_{C^{s+2}}\left\|\eta_{i}^{-1}\partial_{i}u_ {N-1}\right\|_{H^{s+\frac{1}{2}}(S^{d})}\] \[\stackrel{{\eqref{eq:20}}}{{\leq}}(2+d)M\left|\rho \right|_{C^{s+2}}C_{B}K_{0}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N-1}\] \[\left\|T_{2}u_{N-2}\right\|_{H^{s+\frac{1}{2}}(S^{d})} \leq\left\|\rho^{2}\partial_{r}u_{N-2}\right\|_{H^{s+\frac{1}{2}}(S ^{d})}+\sum_{i=1}^{d}\left\|\rho\cdot\frac{\partial_{i}\rho}{\eta_{i}}\cdot \frac{\partial_{i}u_{N-2}}{\eta_{i}}\right\|_{H^{s+\frac{1}{2}}(S^{d})}\] \[\quad+\sum_{i=1}^{d}\left\|\left(\frac{\partial_{i}\rho}{\eta_{i} }\right)^{2}\partial_{r}u_{N-2}\right\|_{H^{s+\frac{1}{2}}(S^{d})}\] \[\stackrel{{\eqref{eq:20}}}{{\leq}}M^{2}\left|\rho \right|_{C^{s+1}}^{2}\left\|\partial_{r}u_{N-2}\right\|_{H^{s+\frac{1}{2}}(S^{ d})}+\sum_{i=1}^{d}M^{2}\left|\rho\right|_{C^{s+1}}\left|\rho\right|_{C^{s+2}} \left\|\eta_{i}^{-1}\partial_{i}u_{N-2}\right\|_{H^{s+\frac{1}{2}}(S^{d})}\] \[\quad+\sum_{i=1}^{d}M^{2}\left|\rho\right|_{C^{s+2}}^{2}\left\| \partial_{r}u_{N-2}\right\|_{H^{s+\frac{1}{2}}(S^{d})}\] \[\stackrel{{\eqref{eq:20}}}{{\leq}}(1+2d)M^{2}\left| \rho\right|_{C^{s+2}}^{2}C_{B}K_{0}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N-2}.\]
Adding all the estimates for the five terms in \(\widehat{G}_{\rho,N}\xi\) and using the assumption \(A>\alpha M\left|\rho\right|_{C^{s+2}}\) for some constant \(\alpha>1\), we obtain
\[\left\|\widehat{G}_{\rho,N}\xi\right\|_{H^{s+\frac{1}{2}}(S^{d})} \leq C_{B}K_{0}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N}\] \[\quad+\left(2+d)M\left|\rho\right|_{C^{s+2}}C_{B}K_{0}\|\xi\right\| _{H^{s+\frac{3}{2}}(S^{d})}A^{N-1}\] \[\quad+2M\left|\rho\right|_{C^{s+2}}K_{1}\|\xi\|_{H^{s+\frac{3}{2} }(S^{d})}A^{N-1}\] \[\quad+(1+2d)M^{2}\left|\rho\right|_{C^{s+2}}^{2}C_{B}K_{0}\|\xi\| _{H^{s+\frac{3}{2}}(S^{d})}A^{N-2}\] \[\quad+M^{2}\left|\rho\right|_{C^{s+2}}^{2}K_{1}\|\xi\|_{H^{s+ \frac{3}{2}}(S^{d})}A^{N-2}\] \[\leq C_{B}K_{0}\left[1+\frac{2+d}{\alpha}+\frac{1+2d}{\alpha^{2}} \right]\left\|\xi\right\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N}+K_{1}\left[\frac{2}{ \alpha}+\frac{1}{\alpha^{2}}\right]\left\|\xi\right\|_{H^{s+\frac{3}{2}}(S^{d})} A^{N}\] \[=C(\alpha,\beta)K_{1}\|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N},\]
where we use \(K_{1}=\beta C_{B}K_{0}\) for \(\beta>1\) for the equality and
\[C(\alpha,\beta)=\frac{1}{\beta}\left[1+\frac{2+d}{\alpha}+\frac{1+2d}{\alpha^{2} }\right]+\frac{2}{\alpha}+\frac{1}{\alpha^{2}}.\]
Choosing \(\alpha>1+\sqrt{2}>1\) and \(\beta>\frac{\alpha^{2}+\alpha(2+d)+2d+1}{\alpha^{2}-2\alpha-1}>1\), we obtain \(C(\alpha,\beta)\leq 1\) and so
\[\left\|\widehat{G}_{\rho,N}\xi\right\|_{H^{s+\frac{1}{2}}(S^{d})}\leq K_{1} \|\xi\|_{H^{s+\frac{3}{2}}(S^{d})}A^{N},\]
thereby completing the inductive argument for \(n=N\).
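For completeness, the arithmetic behind the admissible choices of \(\alpha\) and \(\beta\) above: \(\alpha>1+\sqrt{2}\) makes \(\alpha^{2}-2\alpha-1>0\), so that \(\frac{2}{\alpha}+\frac{1}{\alpha^{2}}=1-\frac{\alpha^{2}-2\alpha-1}{\alpha^{2}}<1\), and multiplying \(C(\alpha,\beta)\leq 1\) through by \(\alpha^{2}\beta\) and rearranging shows

\[C(\alpha,\beta)\leq 1\iff\beta\geq\frac{\alpha^{2}+\alpha(2+d)+2d+1}{\alpha^{2}-2\alpha-1},\]

which is exactly the stated lower bound on \(\beta\).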
Proof of Theorem 1.1 and Corollary 1.2.: The proof is identical to [23, Corollary 1.2], but we include it here for completeness. Theorem 5.1 shows that the expansion of the non-normalized DNO \(\widehat{G}_{\rho,\varepsilon}\) given in (24) is uniformly convergent for small \(\varepsilon\). It follows that \(G_{\rho,\varepsilon}\colon H^{s+\frac{3}{2}}(S^{d})\to H^{s+\frac{1}{2}}(S^{d})\) is analytic for small \(\varepsilon\). The DNO operator \(G_{\rho,\varepsilon}\colon L^{2}(S^{d})\to L^{2}(S^{d})\) is self-adjoint [1], hence closed. The result now follows from [1, Ch. 7, Thm 1.8, p. 370].
_Acknowledgements_.: The authors would like to thank John Gemmer, Jason Parsley, and Nsoki Mavinga for helpful insights.
|
2301.10183 | Mesostructures: Beyond Spectrogram Loss in Differentiable Time-Frequency
Analysis | Computer musicians refer to mesostructures as the intermediate levels of
articulation between the microstructure of waveshapes and the macrostructure of
musical forms. Examples of mesostructures include melody, arpeggios,
syncopation, polyphonic grouping, and textural contrast. Despite their central
role in musical expression, they have received limited attention in deep
learning. Currently, autoencoders and neural audio synthesizers are only
trained and evaluated at the scale of microstructure: i.e., local amplitude
variations up to 100 milliseconds or so. In this paper, we formulate and
address the problem of mesostructural audio modeling via a composition of a
differentiable arpeggiator and time-frequency scattering. We empirically
demonstrate that time--frequency scattering serves as a differentiable model of
similarity between synthesis parameters that govern mesostructure. By exposing
the sensitivity of short-time spectral distances to time alignment, we motivate
the need for a time-invariant and multiscale differentiable time--frequency
model of similarity at the level of both local spectra and spectrotemporal
modulations. | Cyrus Vahidi, Han Han, Changhong Wang, Mathieu Lagrange, György Fazekas, Vincent Lostanlen | 2023-01-24T17:50:19Z | http://arxiv.org/abs/2301.10183v1 | # Mesostructures: Beyond Spectrogram Loss
###### Abstract
Computer musicians refer to mesostructures as the intermediate levels of articulation between the microstructure of waveshapes and the macrostructure of musical forms. Examples of mesostructures include melody, arpeggios, syncopation, polyphonic grouping, and textural contrast. Despite their central role in musical expression, they have received limited attention in deep learning. Currently, autoencoders and neural audio synthesizers are only trained and evaluated at the scale of microstructure: i.e., local amplitude variations up to 100 milliseconds or so. In this paper, we formulate and address the problem of mesostructural audio modeling via a composition of a differentiable arpeggiator and time-frequency scattering. We empirically demonstrate that time-frequency scattering serves as a differentiable model of similarity between synthesis parameters that govern mesostructure. By exposing the sensitivity of short-time spectral distances to time alignment, we motivate the need for a time-invariant and multiscale differentiable time-frequency model of similarity at the level of both local spectra and spectrotemporal modulations.
## 0 Introduction
### Differentiable time-frequency analysis
Time-frequency representations (TFR) such as the short-term Fourier transform (STFT) or constant-Q transform (CQT) play a key role in music signal processing [1; 2] as they can demodulate the phase of slowly varying complex tones. As a consequence, any two sounds \(\mathbf{x}\) and \(\mathbf{y}\) with equal TFR magnitudes (i.e., spectrograms) are heard as the same by human listeners, even though the underlying waveforms may differ. For this reason, spectrograms can not only serve for visualization, but also for similarity retrieval. Denoting the spectrogram operator by \(\mathbf{\Phi}\), the Euclidean distance \(\|\mathbf{\Phi}(\mathbf{y})-\mathbf{\Phi}(\mathbf{x})\|_{2}\) is much more informative than the waveform distance \(\|\mathbf{y}-\mathbf{x}\|_{2}\), since the waveform distance diverges quickly even when phase differences are small.
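To make this concrete, here is a minimal PyTorch sketch (our illustration, not code from any library mentioned here): two sinusoids that differ only by a small phase offset are far apart as waveforms, yet nearly identical as magnitude spectrograms.

```python
import torch

# Two 440 Hz tones whose phases differ slightly: the waveforms diverge,
# but their magnitude spectrograms nearly coincide.
sr = 8192
t = torch.arange(sr) / sr
x = torch.sin(2 * torch.pi * 440 * t)
y = torch.sin(2 * torch.pi * 440 * t + 1.0)  # 1-radian phase offset

def spectrogram(sig, n_fft=512):
    # Magnitude STFT: discarding phase makes it robust to small shifts.
    S = torch.stft(sig, n_fft=n_fft, hop_length=n_fft // 4,
                   window=torch.hann_window(n_fft), return_complex=True)
    return S.abs()

wav_dist = torch.linalg.norm(y - x).item()                              # large
spec_dist = torch.linalg.norm(spectrogram(y) - spectrogram(x)).item()   # near zero
print(f"waveform: {wav_dist:.2f}  spectrogram: {spec_dist:.2f}")
```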
In recent years, existing algorithms for STFT and CQT have been ported to deep learning frameworks such as PyTorch, TensorFlow, MXNet, and JAX [3; 4; 5]. By doing so, the developers have taken advantage of the paradigm of differentiable programming, defined as the ability to compute the gradient of mathematical functions by means of reverse-mode automatic differentiation. In the context of audio processing, differentiable programming may serve to train a neural network for audio encoding, decoding, or both. Hence, we may coin the umbrella term _differentiable time-frequency analysis_ (DTFA) to describe an emerging subfield of deep learning in which stochastic gradient descent involves a composition of neural network layers as well as TFR. Previously, TFR were largely restricted to analysis frontends, but now play an integral part in learning architectures for audio generation.
The simplest example of DTFA is autoencoding. Given an input waveform \(\mathbf{x}\), the autoencoder is a neural network architecture \(f\) with weights \(\mathbf{W}\), which returns another waveform \(\mathbf{y}\)[6; 7]. During training, the neural network \(f_{\mathbf{W}}\) aims to minimize the following loss function:
\[\mathcal{L}_{\mathbf{x}}(\mathbf{W})=\|(\mathbf{\Phi}\circ f_{\mathbf{W}})(\mathbf{x} )-\mathbf{\Phi}(\mathbf{x})\|_{2}, \tag{1}\]
on average over sample \(\mathbf{x}\) in an unlabeled dataset. The function above is known as _spectrogram loss_ because \(\mathbf{\Phi}\) maps \(\mathbf{x}\) and \(\mathbf{y}\) to the time-frequency domain.
Another example of DTFA is found in audio restoration. This time, the input of \(f_{\mathbf{W}}\) is not \(\mathbf{x}\) itself but some degraded version \(h(\mathbf{x})\) -- noisy or bandlimited, for example [8; 9]. The goal of \(f_{\mathbf{W}}\) is to invert the degradation operator \(\mathbf{h}\) by producing a restored sound \((f_{\mathbf{W}}\circ\mathbf{h})(\mathbf{x})\) which is close to \(\mathbf{x}\) in terms of spectrogram loss:
\[\mathcal{L}_{\mathbf{x}}(\mathbf{W})=\|(\mathbf{\Phi}\circ f_{\mathbf{W}}\circ h)(\bm {x})-\mathbf{\Phi}(\mathbf{x})\|_{2}. \tag{2}\]
Thirdly, DTFA may serve for sound matching, also known as synthesizer parameter inversion [6; 10; 11]. Given
a parametric synthesizer \(\mathbf{g}\) and an audio query \(\mathbf{x}\), this task consists in retrieving the parameter setting \(\mathbf{\theta}\) such that \(\mathbf{y}=\mathbf{g}(\mathbf{\theta})\) resembles \(\mathbf{x}\). In practice, sound matching may be trained on synthetic data by sampling \(\mathbf{\theta}\) at random, generating \(\mathbf{x}=\mathbf{g}(\mathbf{\theta})\), and measuring the spectrogram loss between \(\mathbf{x}\) and \(\mathbf{y}\):
\[\mathcal{L}_{\mathbf{\theta}}(\mathbf{W})=\|(\mathbf{\Phi}\circ\mathbf{g}\circ f_{\mathbf{W }}\circ\mathbf{g})(\mathbf{\theta})-(\mathbf{\Phi}\circ\mathbf{g})(\mathbf{\theta})\|_{2}. \tag{3}\]
### Shortcomings of spectrogram loss
Despite its proven merits for generative audio modeling, spectrogram loss suffers from counterintuitive properties when events are unaligned in time or pitch [12]. Although a low spectrogram distance implies a judgment of high perceptual similarity, the converse is not true: one can find examples in which \(\mathbf{\Phi}(\mathbf{x})\) is far from \(\mathbf{\Phi}(\mathbf{y})\) yet judged musically similar by a human listener. First, \(\mathbf{\Phi}\) is only sensitive to time shifts up to the scale \(T\) of the spectrogram window; i.e., around 10-100 milliseconds. In the case of autoencoding, if \(f_{\mathbf{W}}(\mathbf{x})(t)=\mathbf{x}(t-\tau)\) with \(\tau\gg T\), \(\mathcal{L}_{\mathbf{x}}(\mathbf{W})\) may be as large as \(2\|\mathbf{\Phi}(\mathbf{x})\|_{2}\) even though the output of \(f_{\mathbf{W}}\) would be easily realigned onto \(\mathbf{x}\) by cross-correlation. In the case of audio restoration of pitched sounds, listeners are more sensitive to artifacts near the onset (e.g., pre-echo) [13], even though most of the spectrogram energy is contained in the sustain and release parts of the temporal profile.
Lastly, in the case of sound matching, certain synthesizers contain parameters which govern periodic structures at larger time scales while being independent of local spectral variations. In additive synthesis, periodic modulation techniques such as vibrato, tremolo, or trill have a "rate" parameter which is neither predictable from isolated spectrogram frames, nor reducible to a sequence of discrete sound events. A small perturbation \(\varepsilon\) to the synthesis parameters will induce a \(\mathbf{g}(\mathbf{\theta}+\varepsilon)\) that is globally dilated or compressed but locally misaligned in time, rendering \(\|(\mathbf{\Phi}\circ\mathbf{g})(\mathbf{\theta}+\varepsilon)-(\mathbf{\Phi}\circ\mathbf{g})(\mathbf{\theta})\|\) uninformative about the magnitude of \(\varepsilon\). Modular synthesizers shape sound via an interaction between control modules (sequencers, function generators) and sound processing & generating modules (oscillators, filters, waveshapers) [14]. In a "patch", sequencers determine the playback speed and actuate events, while amplitude envelopes, oscillator waveshapes and filters sculpt the timbre. Changing the clock speed of a patch would cause events to be unaligned in time, but not alter the spectral composition of isolated events. Therefore, comparison of timbre similarity is no longer possible at the time scale of isolated spectrogram frames.
### Musical timescales: micro, meso, macro
The shortcomings of modelling music similarity solely at the microscale of short-term spectra are exemplified by the terminology of musical structure used in algorithmic composition. Computer musicians refer to musical structures at a hierarchy of time scales. At one end is the _micro scale_: from sound particles of a few samples up to the milliseconds of short-term spectral analysis [15]. Further up the hierarchy of time is the _meso scale_: structures that emerge from the grouping of sound objects and their complex spectrotemporal evolution [16]. The _macro scale_, finally, broadly includes the arrangement of a whole composition or performance. Curtis Roads outlines the challenge of coherently modeling multiscale structures in algorithmic composition [17]. In granular synthesis, microstructure arises from individual grains, while their rate of playback forms texture clouds at the level of mesostructure. Beyond the micro scale and spectrogram analysis are sound structures that emerge from complex spectral and temporal envelopes, such as sound textures and instrumental playing techniques [18].
### Contributions
In this paper, we pave the way towards differentiable time-frequency analysis of mesostructure. The key idea is to compute a 2D wavelet decomposition ("scattering") in the time-frequency domain for a sound \(\mathbf{x}\). The result, named joint time-frequency scattering transform (JTFS), is sensitive to relative time lags and frequency intervals between musical events. Meanwhile, JTFS remains stable to global time shifts: going back to the example of autoencoding, \(f_{\mathbf{W}}(\mathbf{x})(t)=\mathbf{x}(t-\tau)\) leads to \((\mathbf{\Phi}\circ f_{\mathbf{W}})(\mathbf{x})\approx\mathbf{\Phi}(\mathbf{x})\), in line with human perception.
To illustrate the potential of JTFS in DTFA, we present an example of differentiable sound matching in which microscale distance is a poor indicator of parameter distance. In our example, the target sound \(\mathbf{x}=\mathbf{g}(\mathbf{\theta})\) is an arpeggio of short glissandi events ("chirplets") which spans a scale of two octaves. The two unknowns of the problem are the number of chirplets per unit of time and the total duration of the arpeggio. We show that it is possible to retrieve these two unknowns without any feature engineering, simply by formulating a least squares inverse problem in JTFS space of the form:
\[\mathbf{\theta}^{*} =\arg\min_{\mathbf{\bar{\theta}}}\mathcal{L}_{\mathbf{\theta}}(\mathbf{\bar{ \theta}})\] \[=\arg\min_{\mathbf{\bar{\theta}}}\|(\mathbf{\Phi}\circ\mathbf{g})(\mathbf{\bar{ \theta}})-(\mathbf{\Phi}\circ\mathbf{g})(\mathbf{\theta})\|_{2}^{2} \tag{4}\]
Intuitively, for the inverse problem above to be solvable by gradient descent, the gradient of \(\mathcal{L}_{\mathbf{\theta}}\) should point towards \(\mathbf{\theta}\) when evaluated at any initial guess \(\mathbf{\overline{\theta}}\). Our main finding is that such is the case if \(\mathbf{\Phi}\) is JTFS, but not if \(\mathbf{\Phi}\) is the multi-scale spectrogram (MSS). Moreover, we find that the gradient of \(\mathcal{L}_{\mathbf{\theta}}\) remains informative even if the target sound is subject to random time lags of several hundred milliseconds. To explain this discrepancy, we define the concept of a _differentiable mesostructural operator_ as yielding the Jacobian matrix of \((\mathbf{\Phi}\circ\mathbf{g})\) at \(\mathbf{\overline{\theta}}\), i.e., the composition between audio synthesis and JTFS analysis at the parameter setting of interest. This concept is not limited to sound matching but also finds equivalents when training neural networks for autoencoding and audio restoration.
We release a differentiable implementation of JTFS in Kymatio v0.41, an open-source software for DTFA on GPU which is interoperable with modern deep learning libraries
[19]. To encourage reproducibility of numerical experiments, we supplement this paper with open-source code2.
Footnote 2: Experiments repository: [https://github.com/cyrusvahidi/meso-dtfa](https://github.com/cyrusvahidi/meso-dtfa)
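As a usage sketch, the differentiable JTFS frontend can be called roughly as below. The class name and keyword arguments are assumptions inferred from the paper's description of Kymatio v0.4; the exact API should be checked against the library's documentation.

```python
import torch
# Hypothetical import: class name and signature assumed, not verified here.
from kymatio.torch import TimeFrequencyScattering

N = 2 ** 13  # signal length in samples
# J, Q, J_fr, Q_fr mirror the settings reported in Section 3;
# F = 0 would disable frequency averaging, as in the experiments.
jtfs = TimeFrequencyScattering(shape=(N,), J=12, Q=(8, 2), J_fr=5, Q_fr=2, F=0)

x = torch.randn(N, requires_grad=True)  # stand-in for a synthesized arpeggio
Sx = jtfs(x)          # JTFS coefficients, differentiable with respect to x
Sx.sum().backward()   # gradients flow back to the waveform (or synth parameters)
```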
## 1 motivating example
### Comparing time-delayed chirps
Fig. 1 illustrates the challenge in DTFA of reliably computing similarity between chirps synthesized by \(\mathbf{g}\). In the example, the first-order moments of two chirps in the time-frequency domain are equal, regardless of FM rate. Consider two chirps that are displaced from one another in time. Their spectrogram distance is at a maximum when the mesostructure is identical, i.e. the FM rates are equal and the two signals are disjoint. As the FM rate increases, the two chirps overlap in the time-frequency domain, resulting in a reduction of the spectrogram distance that does not correlate with correct prediction of \(\mathbf{\theta}\). The spectrogram loss changes little as \(\gamma\) is varied. Moreover, local micro segments of a chirp are periodically shifted in _both_ time and frequency under \(\gamma\), implying that comparison of microstructure is an inadequate indicator of similarity. A possible solution would be to dynamically realign the chirps; however, this operation is numerically unstable and not differentiable. In the following sections, we outline a differentiable operator that is capable of modelling distance in \(\mathbf{\theta}\) and stable to time shifts. A representation that is well-equipped to disentangle these three factors of variability should provide neighbourhood distance metrics in acoustic space that reflect distance in parameter space.
### Chirplet synthesizer
A chirplet is a short sound event which produces a diagonal line in the time-frequency plane. Generally speaking, chirplets follow an equation of the form \(\mathbf{x}(t)=\mathbf{a}(t)\cos(2\pi\mathbf{\varphi}(t))\) where \(\mathbf{a}\) and \(\mathbf{\varphi}\) denote instantaneous amplitude and phase respectively. In this paper, we generate chirplets whose instantaneous frequency grows exponentially with time, so that their perceived pitch (roughly proportional to log-frequency) grows linearly. We parametrize this frequency modulation (FM) in terms of a chirp rate \(\gamma\), measured in octaves per second. Denoting by \(f_{\mathrm{c}}\) the instantaneous frequency of the chirplet at its onset, we obtain:
\[\mathbf{\varphi}(t)=\frac{f_{\mathrm{c}}}{\gamma\log 2}2^{\gamma t}. \tag{5}\]
Then, we define the instantaneous amplitude \(\mathbf{a}\) of the chirplet as the half-period of a sine function, over a time support of \(\delta^{\mathrm{t}}\). We parameterise this half-period in terms of amplitude modulation (AM) frequency \(f_{\mathrm{m}}=\frac{1}{2\delta^{\mathrm{t}}}\). Hence:
\[\mathbf{a}(t)=\sin(2\pi f_{\mathrm{m}}t)\text{ if }0\leq f_{\mathrm{m}}t< \frac{1}{2}\text{ and }0\text{ otherwise}. \tag{6}\]
At its offset, the instantaneous frequency of the chirplet is equal to \(f_{\mathrm{c}}2^{\gamma\delta^{\mathrm{t}}}=f_{\mathrm{c}}2^{\gamma/(2f_{\mathrm{m}})}\). We use the notation \(\mathbf{\theta}\) as a shorthand for the AM/FM tuple \((f_{\mathrm{m}},\gamma)\).
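A minimal PyTorch sketch of Eqns. (5)-(6) (our reconstruction for illustration, not the authors' released code; the argument values echo the target parameters used later in Section 3):

```python
import math
import torch

def chirplet(f_c, f_m, gamma, sr=8192, dur=4.0):
    """x(t) = a(t) cos(2*pi*phi(t)): f_c in Hz, f_m in Hz, gamma in octaves/s."""
    t = torch.arange(int(sr * dur)) / sr
    phi = f_c / (gamma * math.log(2)) * 2 ** (gamma * t)   # Eqn. (5)
    # Eqn. (6): half a sine period, supported on 0 <= f_m * t < 1/2.
    amp = torch.where(f_m * t < 0.5,
                      torch.sin(2 * torch.pi * f_m * t),
                      torch.zeros_like(t))
    return amp * torch.cos(2 * torch.pi * phi)

x = chirplet(f_c=512.0, f_m=8.49, gamma=1.49)
```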
### Differentiable arpeggiator
We now define an ascending "arpeggio" such that the offset of the previous event coincides with the onset of the next event in the time-frequency domain. To do so, we shift the chirplet by \(n\delta^{\mathrm{t}}\) in time and multiply its phase by \(2^{n\gamma\delta^{\mathrm{t}}}\) for integer \(n\). Lastly, we apply a global temporal envelope to the arpeggio, by means of a Gaussian window (\(t\mapsto\mathbf{\phi}_{\mathrm{w}}(\gamma t)/\gamma\)) of temporal width \(w/\gamma\), where the bandwidth parameter \(w\) is expressed in octaves. Hence:
\[\mathbf{x}(t) =\frac{1}{\gamma}\mathbf{\phi}_{\mathrm{w}}(\gamma t)\sum_{n=-\infty}^{\infty}\mathbf{a}\left(t-n\delta^{\mathrm{t}}\right)\cos\left(2\pi\,2^{n\gamma\delta^{\mathrm{t}}}\mathbf{\varphi}\left(t-n\delta^{\mathrm{t}}\right)\right)\] \[=\mathbf{g}_{\mathbf{\theta}}(t)\text{, where }\mathbf{\theta}=(f_{\mathrm{m}},\gamma). \tag{7}\]
In the equation above, the number of events with non-negligible energy is proportional to:
\[\nu(\mathbf{\theta})=\frac{f_{\mathrm{m}}w}{\gamma}, \tag{8}\]
which is not necessarily an integer number since it varies continuously with respect to \(\mathbf{\theta}\). Here we see that our parametric model \(\mathbf{g}\), despite being very simple, controls an auditory sensation whose definition only makes sense at the mesoscale: namely, the number of notes \(\nu\) in the arpeggio that form a sequential stream. Furthermore, this number results from the entanglement between AM (\(f_{\mathrm{m}}\)) and FM (\(\gamma\)) and would remain unchanged after time shifts (replacing \(t\) by \((t-\tau)\)) or frequency transposition (varying \(f_{\mathrm{c}}\)). Thus, although the differentiable arpeggiator has limited flexibil
Figure 1: Illustration of chirps overlapping in time and log–frequency. The red chirps are of equal chirp rate \(\gamma\). The blue chirps are displaced in time from red and of increasing \(\gamma\) (left to right). The bars indicate the distance between two chirps in the multiscale spectrogram (grey) and time–frequency scattering (black) domains, respectively. We observe that when the chirp rates \(\gamma\) governing _mesostructure_ are equal, the JTFS distance is at a minimum, while spectrogram distance is around its maximum. JTFS distance correlates well with distance in \(\gamma\). We give a more detailed discussion of the importance of a time-invariant differentiable mesostructural operator in Section 3.
ity, we believe that it offers an insightful test bed for the DTFA of mesostructure.
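Continuing the sketch above, a differentiable arpeggiator in the spirit of Eqn. (7): time-shifted chirplet events whose phases are transposed by \(2^{n\gamma\delta^{\mathrm{t}}}\) so that each onset picks up where the previous offset ended, under a Gaussian envelope. This is a hedged reconstruction for illustration, not the released implementation.

```python
import math
import torch

def arpeggio(f_m, gamma, f_c=512.0, w=2.0, sr=8192, dur=4.0, n_events=32):
    t = torch.arange(int(sr * dur)) / sr - dur / 2   # center the envelope at t = 0
    delta_t = 1.0 / (2.0 * f_m)                      # duration of one event
    x = torch.zeros_like(t)
    for n in range(-n_events, n_events + 1):
        tn = t - n * delta_t
        phi = f_c / (gamma * math.log(2)) * 2 ** (gamma * tn)
        amp = torch.where((f_m * tn >= 0) & (f_m * tn < 0.5),
                          torch.sin(2 * torch.pi * f_m * tn),
                          torch.zeros_like(t))
        # The factor 2**(n * gamma * delta_t) transposes event n so that
        # its onset coincides with the previous offset in time-frequency.
        x = x + amp * torch.cos(2 * torch.pi * 2 ** (n * gamma * delta_t) * phi)
    env = torch.exp(-((gamma * t) ** 2) / (2 * w ** 2)) / gamma  # Gaussian window
    return env * x
```

Passing `f_m` and `gamma` as tensors with `requires_grad=True` makes the resulting waveform differentiable with respect to \((f_{\mathrm{m}},\gamma)\).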
## 2 Time-frequency scattering
Joint time-frequency scattering (JTFS) is a convolutional operator in the time-frequency domain [20]. Via two-dimensional wavelet filters applied in the time-frequency domain at various scales and rates, JTFS extracts multiscale spectrotemporal modulations from digital audio. When used as a frontend to a 2D convolutional neural network, JTFS enables state-of-the-art musical instrument classification with limited annotated training data [21]. Florian Hecker's compositions, e.g., _FAVN_ in 2016, mark JTFS's capability for computer music resynthesis (see a full list of compositions in [22]).
### Wavelet scalogram
Let \(\mathbf{\psi}\in\mathbf{L}^{2}(\mathbb{R},\mathbb{C})\) be a complex-valued zero-average wavelet filter of unit center frequency and bandwidth \(1/Q_{1}\). We define a constant-\(Q\) filterbank of dilations from \(\mathbf{\psi}\) as \(\mathbf{\psi}_{\lambda}:t\longmapsto\lambda\mathbf{\psi}(\lambda t)\), with constant quality factor \(Q_{1}\). Each wavelet has a centre frequency \(\lambda\) and a bandwidth of \(\lambda/Q_{1}\). We discretise the frequency variable \(\lambda\) under a geometric progression of common ratio \(2^{\frac{1}{Q_{1}}}\), starting from \(\lambda/Q_{1}\). For a constant quality factor of \(Q_{1}=1\), subsequent wavelet centre frequencies are spaced by an octave, i.e. a dyadic wavelet filterbank.
Convolving the filterbank \(\mathbf{\psi}\) with a waveform \(\mathbf{x}\in\mathbf{L}^{2}(\mathbb{R})\) and applying a pointwise complex modulus gives the wavelet scalogram \(\mathbf{U}_{1}\):
\[\mathbf{U}_{1}\mathbf{x}(t,\lambda)=|\mathbf{x}\ast\mathbf{\psi}_{\lambda}|(t) \tag{9}\]
\(\mathbf{U}_{1}\) is indexed by time and log-frequency, corresponding to the commonly known constant-Q transform in time-frequency analysis.
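For illustration, a compact sketch of Eqn. (9) with analytic, Gaussian band-pass filters built in the Fourier domain; Kymatio's actual Morlet filters differ in detail, so treat this as a toy constant-Q frontend.

```python
import torch

def scalogram(x, sr=8192, Q=8, f_min=32.0):
    """U1 x: |x * psi_lambda|(t) for a geometric family of analytic filters."""
    N = x.shape[-1]
    freqs = torch.fft.fftfreq(N, d=1.0 / sr)   # FFT bin frequencies in Hz
    X = torch.fft.fft(x)
    centers, f = [], f_min
    while f < sr / 2:                          # lambda grows by the ratio 2**(1/Q)
        centers.append(f)
        f *= 2 ** (1.0 / Q)
    rows = []
    for lam in centers:
        psi_hat = torch.exp(-0.5 * ((freqs - lam) / (lam / Q)) ** 2)
        psi_hat[freqs < 0] = 0.0               # analytic: no negative frequencies
        rows.append(torch.fft.ifft(X * psi_hat).abs())
    return torch.stack(rows)                   # shape (lambda, t)
```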
### Time-frequency wavelets
Similarly to Section 2.1, we define another two wavelets \(\mathbf{\psi}^{\mathrm{t}}\) and \(\mathbf{\psi}^{\mathrm{f}}\) along the time and log-frequency axes, with quality factors \(Q_{2}\) and \(Q_{\mathrm{fr}}\), respectively. We then derive two filterbanks \(\mathbf{\psi}_{\alpha}^{\mathrm{t}}\) and \(\mathbf{\psi}_{\beta}^{\mathrm{f}}\), with center frequencies \(\alpha\) and \(\beta\), where
\[\mathbf{\psi}_{\alpha}^{\mathrm{t}}(t)=\alpha\mathbf{\psi}^{\mathrm{t}}(\alpha t) \tag{10}\]
\[\mathbf{\psi}_{\beta}^{\mathrm{f}}(\log_{2}\lambda)=\beta\mathbf{\psi}^{\mathrm{f}}( \beta\log_{2}\lambda) \tag{11}\]
As in the computation of \(\mathbf{U}_{1}\), we discretize \(\alpha\) and \(\beta\) by geometric progressions of common ratios \(2^{\frac{1}{Q_{2}}}\) and \(2^{\frac{1}{Q_{\mathrm{fr}}}}\). We interpret the frequency variables \(\alpha\) and \(\beta\) from the perspective of auditory STRFs [23]: \(\alpha\) is the temporal modulation rate measured in Hz, while \(\beta\) is the frequential modulation scale measured in cycles per octave.
The outer product between \(\mathbf{\psi}_{\alpha}^{\mathrm{t}}\) and \(\mathbf{\psi}_{\beta}^{\mathrm{f}}\) forms a family of 2D wavelets of various rates \(\alpha\) and scales \(\beta\). We convolve \(\mathbf{\psi}_{\alpha}^{\mathrm{t}}\) and \(\mathbf{\psi}_{\beta}^{\mathrm{f}}\) with \(\mathbf{U}_{1}\mathbf{x}\) in sequence and apply a pointwise complex modulus, resulting in a four-way tensor indexed (\(t\), \(\lambda\), \(\alpha\), \(\beta\)):
\[\mathbf{U}_{2}\mathbf{x}(t,\lambda,\alpha,\beta)=|\mathbf{U}_{1}\mathbf{x}\ast\mathbf{\psi}_{\alpha}^{\mathrm{t}}\ast\mathbf{\psi}_{\beta}^{\mathrm{f}}| \tag{12}\]
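To make Eqn. (12) concrete, here is a toy computation of one \((\alpha,\beta)\) slice of \(\mathbf{U}_{2}\mathbf{x}\) from the scalogram sketched above: filter along time at rate \(\alpha\), then along log-frequency at (signed) scale \(\beta\), and take the modulus. Separable 1D convolutions suffice because the 2D wavelet is an outer product; the filter shapes here are simplified Gaussians, not Kymatio's wavelets.

```python
import torch

def u2_slice(U1, alpha, beta, sr=8192, Q=8):
    """One (alpha, beta) pane of U2 x = |U1 x * psi_t_alpha * psi_f_beta|."""
    n_lambda, n_t = U1.shape
    # Temporal modulation filter at rate alpha (Hz), along the time axis.
    ft = torch.fft.fftfreq(n_t, d=1.0 / sr)
    h_t = torch.exp(-0.5 * ((ft - alpha) / (alpha / 2)) ** 2)
    h_t[ft < 0] = 0.0
    Y = torch.fft.ifft(torch.fft.fft(U1.cfloat(), dim=-1) * h_t, dim=-1)
    # Frequential filter at scale beta (cycles/octave) along log-lambda:
    # scalogram bins are 1/Q octave apart, so the "sample rate" is Q bins/octave.
    ff = torch.fft.fftfreq(n_lambda, d=1.0 / Q)
    h_f = torch.exp(-0.5 * ((ff - beta) / (abs(beta) / 2 + 1e-6)) ** 2)
    Y = torch.fft.ifft(torch.fft.fft(Y, dim=0) * h_f.unsqueeze(-1), dim=0)
    return Y.abs()   # indexed (lambda, t) for this fixed (alpha, beta)
```

Negative \(\beta\) selects the opposite spin, i.e., the downward-oriented wavelets of Fig. 2.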
In Fig. 2 we visualize the real part of the 2D wavelet filters in the time-frequency domain. The wavelets are of rate \(\alpha\), scale \(\beta\) and orientation (upward or downward) along \(\log_{2}\lambda\), capturing multiscale oscillatory patterns in time and frequency.
### Local averaging
We compute _first-order joint time-frequency scattering_ coefficients by convolving the scalogram \(\mathbf{U}_{1}\mathbf{x}\) of Eqn. (9) with a Gaussian lowpass filter \(\mathbf{\phi}_{T}\) of width \(T\), followed by convolution with \(\mathbf{\psi}_{\beta}\) (\(\beta\geq 0\)) over the log-frequency axis, then pointwise complex modulus:
\[\mathbf{S}_{1}\mathbf{x}(t,\lambda,\alpha,\beta)=|\mathbf{U}_{1}x(t,\lambda)\ast \mathbf{\phi}_{T}\ast\mathbf{\psi}_{\beta}| \tag{13}\]
Before convolution with \(\mathbf{\psi}_{\beta}\), we subsample the output of \(\mathbf{U}_{1}x(t,\lambda)\ast\mathbf{\phi}_{T}\) along time, resulting in a sampling rate proportional to \(1/T\). Indeed, Eqn. (13) is a special case of Eqn. (12) in which modulation rate \(\alpha=0\) by the use of \(\mathbf{\phi}_{T}\).
We define the _second-order joint time-frequency scattering_ transform of \(\mathbf{x}\) as:
\[\mathbf{S}_{2}\mathbf{x}(t,\lambda,\alpha,\beta)=\mathbf{U}_{2}\mathbf{x}(t,\lambda,\alpha,\beta)\ast\mathbf{\phi}_{T}\ast\mathbf{\phi}_{F} \tag{14}\]
where \(\mathbf{\phi}_{F}\) is a Gaussian lowpass filter over the log-frequency dimension of width \(F\). For the special case of \(\beta=0\) in Eqn. 12, \(\mathbf{\psi}_{\beta}\) performs the role of \(\mathbf{\phi}_{F}\), yielding:
\[\mathbf{S}_{2}\mathbf{x}(t,\lambda,\alpha,\beta=0)=|\mathbf{U}_{1}x(t,\lambda)\ast \mathbf{\psi}_{\alpha}^{\mathrm{t}}\ast\mathbf{\phi}_{F}|\ast\mathbf{\phi}_{T} \tag{15}\]
In both Eqns. (14) and (15), we subsample \(\mathbf{S}_{2}\mathbf{x}\) to sampling rates of \(T^{-1}\) and \(F^{-1}\) over the time and log-frequency axes, respectively. Lowpass filtering with \(\mathbf{\phi}_{T}\) and \(\mathbf{\phi}_{F}\) provides invariance to time shifts and frequency transpositions up to a scale of \(T\) and \(F\) respectively. The combination of \(\mathbf{S}_{1}\mathbf{x}\) and \(\mathbf{S}_{2}\mathbf{x}\), i.e. \(\mathbf{Sx}=\{\mathbf{S}_{1}\mathbf{x},\mathbf{S}_{2}\mathbf{x}\}\), allows us to cover all paths combining the variables (\(\lambda,\alpha,\beta\)). In Section 3 we introduce the use of \(\mathbf{Sx}\) as a DTFA operator for mesostructures.
In Fig. 1, we highlighted the need for an operator that models mesostructures. The stream of chirplets is displaced in frequency at a particular rate. At second order, JTFS describes the larger-scale spectrotemporal structure that is not captured by \(\mathbf{S}_{1}\). Moreover, JTFS is time-invariant,
Figure 2: Illustration of the shape of 2D time–frequency wavelets (second-order JTFS). Red and blue indicate higher positive and lower negative values (resp.). Each pattern shows the response of the real part of two-dimensional filters that arise from the outer product between 1D wavelets \(\mathbf{\psi}_{\alpha}(t)\) and \(\mathbf{\psi}_{\beta}(\log\lambda)\) of various rates \(\alpha\) and scales \(\beta\) (resp.). Orientation is determined by the sign of \(\beta\), otherwise known as the _spin_ variable falling in \(\{-1,1\}\). See Section 2 for details on JTFS.
making it a reliable measure of mesostructural similarity up to time scale \(T\).
## 3 Differentiable mesostructural operator
In this section, we introduce a _differentiable mesostructural operator_ for time-frequency analysis. Such an operator is needed in optimization scenarios that require a differentiable measure of similarity, such as autoencoding.
In Section 1, we defined a differentiable arpeggiator \(\mathbf{g}\) whose parameters \(\mathbf{\theta}\) govern the _mesostructure_ in \(\mathbf{x}\). We now seek a differentiable operator \(\mathbf{\Phi}\circ\mathbf{g}\) that provides a model to control the low-dimensional parameter space \(\mathbf{\theta}\). By way of distance and gradient visualization under \(\mathbf{\Phi}\circ\mathbf{g}\), we set out to assess the suitability of \(\mathbf{\Phi}\) for modelling \(\mathbf{\theta}\) in a sound matching task.
We consider two DTFA operators in the role of \(\mathbf{\Phi}\): (i) the multiscale spectrogram (MSS) (approximately \(\mathbf{U}_{1}\mathbf{x}\)) and (ii) time-frequency scattering (\(\mathbf{S}\mathbf{x}=\{\mathbf{S}_{1}\mathbf{x},\mathbf{S}_{2}\mathbf{x}\}\)) (JTFS). In case (i), we deem a small distance between two sounds to be an indication of the same _microstructure_. On the contrary, similarity in case (ii) suggests the same _mesostructure_. Although an identical \(\mathbf{U}_{1}\) implies equality in mesostructure, the reverse is not true, e.g., in the case of time shifts and non-stationary frequency.
Previously, JTFS has offered assessment of similarity between musical instrument playing techniques that underlie mesostructure. With the DTFA operator \(\mathbf{\Phi}\), there is potential to model mesostructures by their similarity as expressed in terms of the raw audio waveform, synthesis parameters or neural network weights. In cases such as granular synthesis, it may be desirable to control mesostructure, while allowing microstructure to stochastically vary.
### Gradient computation & visualization
We evaluate a distance objective under the operator \(\mathbf{\Phi}\circ\mathbf{g}\) as a proxy for distance in \(\mathbf{\theta}\):
\[\mathcal{L}_{\mathbf{\theta}}(\tilde{\mathbf{\theta}})=\|(\mathbf{\Phi}\circ\mathbf{g})(\mathbf{ \theta})-(\mathbf{\Phi}\circ\mathbf{g})(\tilde{\mathbf{\theta}})\|_{2}^{2} \tag{16}\]
For a given parameter estimate \(\tilde{\mathbf{\theta}}\), the gradient \(\nabla\mathcal{L}_{\mathbf{\theta}}\) of the distance to the target \(\mathbf{\theta}\) is:
\[\nabla\mathcal{L}_{\mathbf{\theta}}(\tilde{\mathbf{\theta}})=-2\Big{(}(\mathbf{\Phi} \circ\mathbf{g})(\mathbf{\theta})-(\mathbf{\Phi}\circ\mathbf{g})(\tilde{\mathbf{\theta}})\Big{)}^ {T}\cdot\nabla(\mathbf{\Phi}\circ\mathbf{g})(\tilde{\mathbf{\theta}}) \tag{17}\]
Figure 4: Parameter distance \(||\mathbf{\theta}-\mathbf{\tilde{\theta}}||_{2}\) over gradient descent iterations with \(\mathbf{\Phi}\) as MSS and JTFS. The target sound has parameters \(\mathbf{\theta}=[8.49,1.49]\). We initialize the predicted sound at \(\mathbf{\tilde{\theta}}_{0}=[4,0.5]\). The line plots the mean distance at each iteration for multiple runs that shift the predicted sample in time by \(\tau=\{2^{2},2^{4},2^{7},2^{10}\}\) samples. The shaded region indicates the range across different time shifts.
Figure 5: Final parameter distance \(||\mathbf{\theta}-\mathbf{\tilde{\theta}}||_{2}\) after gradient descent for \(\mathbf{g}(\mathbf{\theta})(t)\) and \(\mathbf{g}(\mathbf{\tilde{\theta}})(t-\tau)\), for \(\mathbf{\theta}=[8.49,1.49]\), \(\mathbf{\tilde{\theta}}_{0}=[4,0.5]\). Each run (x-axis) is optimized under a different time shift \(\tau\) on the predicted audio. JTFS is invariant up to the support \(T=2^{13}\) of its lowpass filter. We observe that convergence in parameter recovery is stable to time shifts under our differentiable mesostructural operator \(\mathbf{\Phi}\circ\mathbf{g}\), in the case that \(\mathbf{\Phi}\) is JTFS. Optimization is unstable when \(\mathbf{\Phi}\) is a spectrogram operator.
Figure 3: Loss surface and gradient field visualization under \(\mathbf{\Phi}\) as JTFS (left) and MSS (right) for sounds synthesized by \(\mathbf{g}\) (see Section 1). Sounds are sampled from a logarithmically spaced grid on \(f_{\text{m}}\) and \(\gamma\). We plot the target sound as a green dot and compute the loss between the target and a sound generated at every point on the grid. We time shift the generated sound relative to the target by a constant of \(\tau=2^{10}\) samples. In the quiver plots, we evaluate the gradient of the loss operator with respect to synthesis parameters \(f_{\text{m}}\) and \(\gamma\). The direction of the arrows is indicative of the informativeness of the distance computed on \(\mathbf{\Phi}\circ\mathbf{g}\) with respect to \(\mathbf{\theta}\). In the case of \(\mathbf{\Phi}_{\text{JTFS}}\), we observe a 3D loss surface whose global minimum is centred around the target sound, while gradients point towards the target. Contrarily, the global minimum of \(\mathbf{\Phi}_{\text{MSS}}\) does not centre around the target or reach \(0\). In the presence of small time shifts, the MSS loss appears insensitive to differences in AM and uninformative with respect to \(\mathbf{\theta}\).
The first term in Eqn. (17) is a row vector of length \(P=\dim\left((\mathbf{\Phi}\circ\mathbf{g})(\mathbf{\theta})\right)\) and the second term is a matrix of dimension \(P\times\dim(\tilde{\mathbf{\theta}})\). The dot product between the row vector in the first term and each column vector in the high-dimensional Jacobian matrix \(\nabla(\mathbf{\Phi}\circ\mathbf{g})\) yields a low-dimensional vector of \(\dim(\mathbf{\theta})\). Each column of the Jacobian matrix can be seen as the direction of steepest descent in the parameter space, such that distance in \(\mathbf{\Phi}\) is minimized. Therefore the operator \(\mathbf{\Phi}\circ\mathbf{g}\) should result in distances that reflect sensitivity and direction of changes in \(\mathbf{\theta}\).
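In practice, Eqn. (17) never needs to be assembled explicitly: reverse-mode automatic differentiation returns the vector-Jacobian product directly. A sketch (ours), where `synth` is any differentiable synthesizer, e.g. `synth = lambda th: arpeggio(th[0], th[1])` with the arpeggiator above, and `phi` any differentiable transform:

```python
import torch

def loss_and_grad(phi, synth, theta_target, theta_hat):
    """Return L(theta_hat) of Eqn. (16) and its gradient via autodiff."""
    th = theta_hat.clone().detach().requires_grad_(True)
    with torch.no_grad():
        target = phi(synth(th.new_tensor(theta_target)))
    loss = ((phi(synth(th)) - target) ** 2).sum()
    loss.backward()   # vector-Jacobian product of Eqn. (17)
    return loss.detach(), th.grad
```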
In \(\mathcal{L}_{\theta}\) of Eqn. (16), we adopt time-frequency scattering (\(\mathbf{Sx}\)) (see Section 2) in the role of \(\mathbf{\Phi}\). In the JTFS transform, we set \(J=12,J_{fr}=5,Q_{1}=8,Q_{2}=2\), \(Q_{fr}=2\), and set \(F=0\) to disable frequency averaging.
Alternatively, we refer to \(\mathcal{L}_{\theta}^{MSS}\) when using the multi-scale spectrogram (MSS). Let \(\mathbf{\Phi}_{\text{STFT}}^{(n)}\) be the short-time Fourier transform coefficients computed with a window size of \(2^{n}\). We compute the MSS loss in Eqn. (18), which is the average of L1 distances between spectrograms at multiple STFT resolutions:
\[\mathcal{L}_{\theta}^{MSS}(\tilde{\mathbf{\theta}})=\frac{1}{N}\sum_{n=5}^{10}|(\mathbf{\Phi}_{\text{STFT}}^{(n)}\circ\mathbf{g})(\mathbf{\theta})-(\mathbf{\Phi}_{\text{STFT}}^{(n)}\circ\mathbf{g})(\tilde{\mathbf{\theta}})| \tag{18}\]
The chosen resolutions account for the sampling rate of 8192 Hz used by \(\mathbf{g}\). We set \(w=2\) octaves in all subsequent experiments and normalize the amplitude of each \(\mathbf{g_{\theta}}\).
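A sketch of Eqn. (18) in PyTorch, with window sizes \(2^{5}\) through \(2^{10}\) (normalization chosen for readability):

```python
import torch

def mss_loss(x, y, exponents=(5, 6, 7, 8, 9, 10)):
    """Multi-scale spectrogram loss: mean L1 distance across STFT resolutions."""
    total = 0.0
    for n in exponents:
        n_fft = 2 ** n
        w = torch.hann_window(n_fft)
        X = torch.stft(x, n_fft, hop_length=n_fft // 4, window=w,
                       return_complex=True).abs()
        Y = torch.stft(y, n_fft, hop_length=n_fft // 4, window=w,
                       return_complex=True).abs()
        total = total + (X - Y).abs().mean()
    return total / len(exponents)
```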
For this experiment, we uniformly sample a grid of \(20\times 20\) AM/FM rates \((\mathbf{f_{m}},\mathbf{\gamma})\) on a log-scale ranging from 4 to 16 Hz and 0.5 to 4 octaves per second, leading to 400 signals with a carrier frequency of \(\mathbf{f_{c}}=512\) Hz. We designate the centre of the grid \(\mathbf{f_{m}}=8.49\) Hz and \(\mathbf{\gamma}=1.49\) octaves / second as the target sound. We introduce a constant time shift \(\tau=2^{10}\) samples to the target sound in order to test the stability of gradients under perturbations in microstructures. We evaluate \(\mathcal{L}_{\theta}\) and \(\nabla\mathcal{L}_{\theta}\) associated with each sound for the two DTFA operators \(\mathbf{\Phi}_{\text{MSS}}\) and \(\mathbf{\Phi}_{\text{JTFS}}\).
We visualize the loss surfaces and gradient fields with respect to \(\mathbf{\widetilde{\theta}}\) in Fig. 3. We observe that the JTFS operator forms a loss surface with a single local minimum that is located at the target sound's \(\mathbf{\theta}\). Meanwhile, gradients across the sampled parameters \(\mathbf{\widetilde{\theta}}\) consistently point towards the target, despite certain exceptions at high \(\mathbf{\gamma}\), which acoustically correspond to very high FM rate. Contrarily, the MSS loss suffers from multiple local minima and does not reach its global minimum when \(\mathbf{\widetilde{\theta}}\) is located at the target, due to time-shift equivariance. We highlight that the MSS distance is insensitive to variation along AM, making it unsuitable for modelling mesostructures.
In line with our findings, previous work [21] found that 3D visualizations of the manifold embedding of JTFS' nearest neighbour graph revealed a 3D mesh whose principal components correlated with parameters describing carrier frequency, AM and FM. Moreover, \(K\)-nearest neighbours regression using a nearest neighbours graph in JTFS space produced error ratios close to unity for each of the three parameters.
### Sound Matching by gradient descent
Unlike classic sound matching literature, where \(\tilde{\mathbf{\theta}}\) is estimated from a forward pass through trainable \(f_{\mathbf{W}}\) (i.e., neural network weights), we formulate sound matching as an inverse problem in \((\mathbf{\Phi}\circ\mathbf{g})\). For the sake of simplicity, we do not learn any weights to approximate \(\mathbf{\theta}\).
Using the gradients derived in Section 3.1, we attempt sound matching of a target state in \(\mathbf{\theta}\) using a simple gradient descent scheme with bold driver heuristics. We perform additive updates to \(\tilde{\mathbf{\theta}}\) along the direction dictated by gradient
Figure 6: Parameter distance \(||\mathbf{\theta}-\mathbf{\widetilde{\theta}}||\) (log-scale) over gradient descent iterations with \(\mathbf{\Phi}\) as MSS and JTFS in 3 scenarios: (left) 5 initialisation of \(\mathbf{\widetilde{\theta}}\) far from the target, (centre) 5 initialisation of \(\mathbf{\widetilde{\theta}}\) in the neighbourhood of the target and (right) 5 initialisations of \(\mathbf{\widetilde{\theta}}\) across the range of the grid. We do not apply a time shift to the predicted sound (see Fig. 3 for gradient visualisation). The target sound has parameters \(\mathbf{\theta}=[8.49,1.49]\). The lines indicate the mean distance at each iteration across 5 runs of different \(\mathbf{\widetilde{\theta}}\) initialisation. The shaded region indicates the range across the 5 initialisations. The titles indicate the range of the initial \(\mathbf{\widetilde{\theta}}\). We highlight that even with no time shifts, MSS only recovers \(\mathbf{\theta}\) well when \(\mathbf{\widetilde{\theta}}\) is initialised in its local neighbourhood (centre). When \(\mathbf{\widetilde{\theta}}\) is initialised far from the target (left), MSS fails to converge. Starting anywhere (right), converges in the best case, but on average fails to converge and is close to the worst case.
\(\nabla_{\bar{\theta}}\mathcal{L}_{\theta}\):

\[\bar{\mathbf{\theta}}\leftarrow\bar{\mathbf{\theta}}-\alpha\nabla_{\bar{\theta}}\mathcal{L}_{\theta} \tag{19}\]
Our bold driver heuristic increases the learning rate \(\alpha\) by a factor of 1.2 when \(\mathcal{L}_{\theta}\) decreases, and decreases it by a factor of 2 otherwise. Our evaluation metric in parameter space is the squared parameter distance:

\[\|\mathbf{\theta}-\mathbf{\bar{\theta}}\|_{2}^{2} \tag{20}\]
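Putting the pieces together, the descent loop with the bold-driver schedule reads roughly as follows (our sketch; `phi` and `synth` as in the earlier sketches):

```python
import torch

def fit(phi, synth, theta_target, theta0, lr=0.1, n_iter=200):
    with torch.no_grad():
        target = phi(synth(torch.tensor(theta_target)))
    theta = torch.tensor(theta0)
    prev = float("inf")
    for _ in range(n_iter):
        th = theta.clone().requires_grad_(True)
        loss = ((phi(synth(th)) - target) ** 2).sum()
        loss.backward()
        theta = (th - lr * th.grad).detach()               # Eqn. (19)
        lr = lr * 1.2 if loss.item() < prev else lr / 2.0  # bold driver
        prev = loss.item()
    return theta
```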
Fig. 4 shows the mean L2 parameter error over gradient descent steps for each \(\mathbf{\Phi}\). We select a fixed target and initial prediction. We run multiple optimizations that consider time shifts between 0 and \(2^{10}\) samples on the target audio. Across time shifts within the support \(T\) of the lowpass filter in \(\mathbf{\Phi}_{\text{JTFS}}\), convergence is stable and the parameter error reaches close to 0. We observe that MSS does not converge and its parameter error does not advance far from its initial value, including the case of no time shifts. In Fig. 5, we further illustrate the effects of time shifts for DTFA, validating that JTFS is a time-invariant mesostructural operator up to the support \(T\).
### Time invariance
In Fig. 6, we explore the gradient convergence for different initialisations of \(\mathbf{\bar{\theta}}\) but _without_ time shifting the predicted sound. In each plot, we perform gradient descent for 5 different initialisations of \(\mathbf{\bar{\theta}}\): (i) far away from the target sound, (ii) in the local neighbourhood of the target sound and (iii) broadly across the parameter grid. We highlight that JTFS is able to converge to the solution in each of the 3 initialisation schemes, as corroborated by its gradients in Fig. 7. We observe that even without time shifts, MSS fails to recover the target sound in the case that the parameter initialisation is far from the target. MSS does indeed recover the target sound if \(\mathbf{\bar{\theta}}\) is initialised in the neighbourhood of the target. When starting anywhere, MSS converges in the best case, but on average it remains close to the worst case, which does not converge.
Fig. 7 shows the loss surface and gradient fields for \(\mathbf{\Phi}_{\text{JTFS}}\) and \(\mathbf{\Phi}_{\text{MSS}}\) with no time shifts and with random time shifts applied to the predicted sound. Despite MSS reaching the global minimum when the predicted sound is centred at the target, our experiments in gradient descent demonstrate that it is only stable when \(\mathbf{\bar{\theta}}\) is initialised within the local region of the target \(\mathbf{\theta}\). When we apply a random time shift to the predicted sound, the MSS loss is highly unstable and produces many local minima that are not located at the target sound. As expected, the JTFS gradient is highly stable with no time shifts. Even in the presence of random time shifts, JTFS is an invariant representation of spectrotemporal modulations up to time shifts of scale \(T\).
## 4 Conclusion
Differentiable time-frequency analysis (DTFA) is an emerging direction for audio deep learning tasks. The current state-of-the-art for autoencoding, audio restoration and sound matching predominantly perform DTFA in the spectrogram domain. However, spectrogram loss suffers from numerical instabilities when computing similarity in the context of: (i) time shifts beyond the scale of the spectrogram window and (ii) nonstationarity that arises from synthesis parameters. These prohibit the reliability of spec
Figure 7: Loss surfaces (top) and gradient fields (bottom) under \(\mathbf{\Phi}_{\text{JTFS}}\) and \(\mathbf{\Phi}_{\text{MSS}}\) for sounds synthesized by \(\mathbf{g}\) (see Section 1), sampled from a logarithmically spaced grid on \(f_{\text{m}}\) and \(\gamma\). Each sound is randomly shifted in time relative to the target by \(2^{n}\) samples, where \(n\) is sampled uniformly from \([8,12]\). We plot the target sound as a green dot and compute the loss under \(\mathbf{\Phi}_{\text{JTFS}}\) and \(\mathbf{\Phi}_{\text{MSS}}\) between each sound and the target. In the quiver plots, we evaluate the gradient of the loss operator with respect to the synthesis parameters \(f_{\text{m}}\) and \(\gamma\) of the generated sound. In the case of no time shifts, JTFS gradients point towards the target and the distance is around 0 at the target. Without time shifts, MSS computes distance between objects that intersect in the time–frequency domain. Its gradients appear to lead to the target; however, it suffers from local minima along AM, as demonstrated by convergence in Fig. 6. In the presence of random time shifts, JTFS appears robust while MSS is highly unstable and prone to local minima.
trogram loss as a similarity metric for modelling multiscale musical structures.
In this paper, we introduced the _differentiable mesostructural operator_, comprising DTFA and an arpeggio synthesiser, for time-frequency analysis of mesostructures. We model synthesis parameters for a sound matching task using joint time-frequency scattering (JTFS) for DTFA of structures that are identifiable beyond the locality of microstructure; i.e. amplitude and frequency modulations of a chirplet synthesizer. Notably, JTFS offers a differentiable and scalable implementation of: auditory spectrotemporal receptive fields, multiscale analysis in the time-frequency domain and invariance to time shifts.
However, despite prior evidence that JTFS accurately models similarities in signals containing spectrotemporal modulations, JTFS is yet to be assessed in DTFA for inverse problems and control in sound synthesis. By analysis of the gradient of our DTFA operator with respect to synthesis parameters, we showed that in contrast to spectrogram losses, JTFS distance is suitable for modelling similarity in synthesis parameters that describe mesostructure. We demonstrated the stability of JTFS as a DTFA operator in sound matching by gradient descent, particularly in the case of time shifts.
This work lays the foundations for further experiments in DTFA for autoencoding, sound matching, resynthesis and computer music composition. Indeed, our differentiable mesostructural operator could be used as a model of the raw audio waveform directly; however, this approach is prone to resynthesis artifacts [24, 22]. We have shown that by means of DTFA, we can model low-dimensional synthesis parameters that shape sequential audio events. A direction for future work lies in differentiable parametric texture synthesis, in which texture similarity may be optimized in terms of parameters that derive larger scale structures, e.g. beyond the definition of individual grains in granular synthesis.
## 5 Acknowledgment
Cyrus Vahidi is a researcher at the UKRI CDT in AI and Music, supported jointly by the UKRI (grant number EP/S022694/1) and Music Tribe. This work was conducted during a research visit at LS2N, CNRS. Changhong Wang is supported by an Atlanstic2020 project on Trainable Acoustic Sensors (TrAcS).
|
2307.03271 | On the spectra of multidimensional normal discrete Hausdorff operators | In the paper the general case of normal discrete Hausdorff operators in
$L^2(\mathbb{R}^d)$ is considered. The main result states that under some
natural arithmetic condition the spectrum of such an operator is rotationally
invariant. Several special cases and examples are considered. | A. R. Mirotin | 2023-07-06T20:14:35Z | http://arxiv.org/abs/2307.03271v2 | # On the spectra of multidimensional normal discrete Hausdorff operators
###### Abstract.
In the paper the general case of normal discrete Hausdorff operators in \(L^{2}(\mathbb{R}^{d})\) is considered. The main result states that under some natural arithmetic condition the spectrum of such an operator is rotationally invariant. Several special cases and examples are considered.
2020 Mathematics Subject Classification: Primary 47B38; Secondary 47B15, 47A10, 46E30
Key words and phrases. Hausdorff operator, discrete Hausdorff operator, symbol of an operator, spectrum, Weyl spectrum, Lebesgue space, pantograph equation.
## 1. Introduction
Over the last two decades, different notions of a Hausdorff operator have been suggested (see, e.g., [26, 8, 22, 36] and the bibliography therein).
Hausdorff operators over the topological group \(\mathbb{R}^{d}\) in a sense of [25] (see also [36]) have attracted much attention. But most of the work in this direction was devoted to the boundedness of such operators in various function spaces, see [12, 7, 25, 28, 47] among others. Probably the only exception is a special case of Hausdorff operators, namely the Cesaro operator (e.g., [1, 10]).
The situation has changed somewhat in recent years. It is shown in [41, 42] that every normal Hausdorff operator in \(L^{2}(\mathbb{R}^{d})\) with self-adjoint perturbation matrices is unitarily equivalent to the multiplication operator by some matrix-valued function (its matrix symbol) in the space \(L^{2}(\mathbb{R}^{d};\mathbb{C}^{2^{d}})\). This is an analogue of the Spectral Theorem for the class of operators under consideration. This made it possible to find the norm, to study the spectrum, and to develop functional calculi of such operators [27]. See also [43], where the case of the spaces \(L^{p}(\mathbb{R}^{d})\) was considered.
The structure of the spectrum of discrete normal Hausdorff operators in \(L^{2}(\mathbb{R}^{d})\) was investigated in [44] for the case of positive or negative definite perturbation matrices. In this paper, the general case of normal discrete Hausdorff operators in \(L^{2}(\mathbb{R}^{d})\) is considered. Our interest in such operators is motivated by the fact that discrete Hausdorff operators are involved in functional differential equations (see, e.g., [2, 23, 45, 46, 48] and the bibliography therein, see also Section 4 below), problems in analysis (see,
e.g., [35], especially Theorem 1 and formula (12) therein), and quantum mechanics [30, Chapter VII, §2, (7.24)].
The main result of this article states that under some natural arithmetic condition the spectrum of a normal discrete Hausdorff operator is rotationally invariant. Several special cases and examples of functional differential equations are considered.
## 2. Preliminaries
We study the following special case of a Hausdorff operator on Euclidean spaces.
**Definition 2.1**.: _Let \(c=(c(k))_{k\in\mathbb{Z}}\) be a sequence of complex numbers, and \(A(k)\in\mathrm{GL}(d,\mathbb{R})\) for all \(k\in\mathbb{Z}\). A discrete Hausdorff operator acts on a function \(f:\mathbb{R}^{d}\to\mathbb{C}\) by the rule_
\[\mathcal{H}_{c,A}f(x)=\sum_{k=-\infty}^{\infty}c(k)f(A(k)x)\]
_provided the series converges absolutely._
Without loss of generality one can assume that \(A(k)\neq A(l)\) for \(k\neq l\).
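For concreteness, the following Python sketch evaluates \(\mathcal{H}_{c,A}f\) at sample points for a finitely supported coefficient sequence; the Gaussian \(f\) and the diagonal dilation family \(A(k)\) below are illustrative choices, not part of the definition.

```python
import numpy as np

def hausdorff_apply(f, cs, As, xs):
    # (H_{c,A} f)(x) = sum_k c(k) f(A(k) x) for finitely many nonzero c(k).
    # cs: dict {k: c(k)}; As: dict {k: d x d invertible matrix}; xs: (N, d) points.
    return sum(c * f(xs @ As[k].T) for k, c in cs.items())

f = lambda x: np.exp(-np.sum(x**2, axis=-1))             # f : R^d -> C
cs = {-1: 0.25, 0: 1.0, 1: 0.5}
As = {k: np.diag([2.0**(-k), 2.0**(-k)]) for k in cs}    # A(k) = 2^{-k} I_2
xs = np.random.randn(5, 2)
print(hausdorff_apply(f, cs, As, xs))
```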
Such operators first appeared in this general form in [7, Section 4] (see also [41, 42]), but one can find many concrete examples in the literature, see, e.g., [35], [46], [45] and the bibliography therein, and the examples of functional differential equations below.
**Lemma 2.2**.: _(i) Let \(1\leq p\leq\infty\). If_
\[N_{p}(c,A):=\sum_{k=-\infty}^{\infty}|c(k)||\det A(k)|^{-1/p}<\infty,\]
_then the operator \(\mathcal{H}_{c,A}\) is bounded in \(L^{p}(\mathbb{R}^{d})\) and its norm does not exceed \(N_{p}(c,A)\)._
_(ii) The condition \(N_{p}(c,A)<\infty\) for the \(L^{p}\) boundedness of \(\mathcal{H}_{c,A}\) can't be weakened in general._
Proof.: (i) This readily follows from the Minkowski inequality [6].
(ii) Let \(1<p<\infty\), \(d=1\). Consider the case where \(c(k)\geq 0\), \(c(k)=0\) for \(k\leq 0\), and \(A(k)=1/k\) for \(k\geq 1\). We proceed as in [17, Theorem 2.1].
For each \(t\in(0,1)\) put
\[f_{t}(x)=\frac{1}{x^{1/p}}\chi_{(t,1/t)}(x),\ \ g_{t}(x)=\frac{1}{x^{1/q}}\chi_{(t,1/t)}(x)\ \ (1/p+1/q=1).\]
Then
\[\|f_{t}\|_{p}^{p}=\|g_{t}\|_{q}^{q}=2\log\frac{1}{t}.\]
For every natural \(k\) consider the function
\[\begin{split} h_{t}(k)&=\int_{0}^{\infty}g_{t}(kx)f_{t}(x)\,dx\\ &=\int_{0}^{\infty}\frac{1}{(kx)^{1/q}}\chi_{(t,1/t)}(kx)\frac{1}{x^{1/p}}\chi_{(t,1/t)}(x)\,dx\\ &=k^{-1/q}\int_{0}^{\infty}\chi_{(t/k,1/kt)}(x)\chi_{(t,1/t)}(x)\frac{dx}{x}.\end{split}\tag{2.1}\]
If \(k>1/t^{2}\), then \((t/k,1/kt)\cap(t,1/t)=\varnothing\) and so \(\chi_{(t/k,1/kt)}\chi_{(t,1/t)}=0\). In this case, \(h_{t}(k)=0\).
It is easy to compute that \(h_{t}(1)=2\log\frac{1}{t}\). Further, if \(1<k\leq 1/t^{2}\), we have \((t/k,1/kt)\cap(t,1/t)=(t,1/kt)\). Therefore (2.1) implies
\[h_{t}(k)=k^{-1/q}\int_{t}^{1/kt}\frac{dx}{x}=k^{-1/q}\left(2\log\frac{1}{t}- \log k\right).\]
Consider the non-negative quantity
\[J_{t} = \int_{0}^{\infty}g_{t}(y)(\mathcal{H}_{c,A}f_{t})(y)dy\] \[= \sum_{k=1}^{\infty}c(k)\int_{0}^{\infty}g_{t}(y)f_{t}(\frac{y}{k })dy\] \[= \sum_{k=1}^{\infty}c(k)k\int_{0}^{\infty}g_{t}(kx)f_{t}(x)dx\] \[= \sum_{k=1}^{\infty}c(k)kh_{t}(k)\] \[= c(1)2\log\frac{1}{t}+\sum_{k=2}^{\infty}c(k)k^{1-1/q}\left(2 \log\frac{1}{t}-\log k\right).\]
If \(\mathcal{H}_{c,A}\) is bounded in \(L^{p}(\mathbb{R})\), we have by Hölder's inequality
\[J_{t}\leq\|\mathcal{H}_{c,A}\|\|f_{t}\|_{p}\|g_{t}\|_{q}=\|\mathcal{H}_{c,A}\| 2\log\frac{1}{t}.\]
It follows that
\[c(1)+\sum_{k=2}^{\infty}c(k)k^{1/p}\left(1-\frac{\log k}{2\log\frac{1}{t}} \right)\leq\|\mathcal{H}_{c,A}\|.\]
Putting \(t=1/n\) (\(n\in\mathbb{N}\)) and letting \(n\rightarrow\infty\), we get by B. Levi's theorem that
\[N_{p}(c,A)=\sum_{k=1}^{\infty}c(k)k^{1/p}\leq\|\mathcal{H}_{c,A}\|,\]
which completes the proof in the case \(1<p<\infty\). For the cases \(p=1\) and \(p=\infty\) the statement (ii) is obvious.
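The bound of part (i) is also easy to probe numerically. The sketch below approximates the \(L^{p}\) norms on a grid for \(d=1\) and a finitely supported sequence; the test function and coefficients are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of Lemma 2.2 (i) for d = 1: for H f(x) = sum_k c(k) f(a(k) x),
# one should observe ||H f||_p <= N_p(c, a) ||f||_p with
# N_p = sum_k |c(k)| |a(k)|^{-1/p}.
p = 2.0
cs, as_ = np.array([1.0, -0.7, 0.3]), np.array([0.5, 2.0, -3.0])
x = np.linspace(-60, 60, 1_000_001)
dx = x[1] - x[0]
f = lambda t: np.exp(-t**2) * np.cos(3 * t)
Hf = sum(c * f(a * x) for c, a in zip(cs, as_))
lp_norm = lambda g: (np.sum(np.abs(g)**p) * dx)**(1 / p)
N_p = np.sum(np.abs(cs) * np.abs(as_)**(-1 / p))
print(lp_norm(Hf), "<=", N_p * lp_norm(f(x)))
```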
Let \(U_{j}\)\((j=1;\ldots;2^{d})\) be some fixed enumeration of the family of all open hyperoctants in \(\mathbb{R}^{d}\). For every pair \((i;j)\) of indices there is a unique \(\varepsilon(i;j)\in\{-1,1\}^{d}\) such that
\[\varepsilon(i;j)U_{i}:=\{(\varepsilon(i;j)_{1}x_{1};\ldots;\varepsilon(i;j)_{ d}x_{d}):x\in U_{i}\}=U_{j}.\]
It is clear that \(\varepsilon(i;j)U_{j}=U_{i}\) and \(\varepsilon(i;j)U_{l}\cap U_{i}=\varnothing\) for \(l\neq j\). We will assume that the matrices \(A(k)\) form a commuting family. Then, as is well known, there exist an orthogonal \(d\times d\) matrix \(C\) and a family of diagonal non-singular real matrices \(A^{\prime}(k)=\operatorname{diag}(a_{1}(k);\ldots;a_{d}(k))\) such that \(A^{\prime}(k)=C^{-1}A(k)C\) for \(k\in\mathbb{Z}\). Then
\[a(k):=(a_{1}(k);\ldots;a_{d}(k))\]
is the family of all eigenvalues (with every eigenvalue repeated according to its multiplicity) of the matrix \(A(k)\). We put
\[\Omega_{ij}:=\{k\in\mathbb{Z}:(\operatorname{sgn}(a_{1}(k));\ldots; \operatorname{sgn}(a_{d}(k)))=\varepsilon(i;j)\}.\]
If \(N_{2}(c,A)<\infty\), i. e., if \((|\det A(k)|^{-1/2}c(k))_{k\in\mathbb{Z}}\in\ell^{1}(\mathbb{Z})\), we put
\[\varphi_{ij}(s):=\sum_{k\in\Omega_{ij}}c(k)|a(k)|^{-1/2-\imath s}.\]
Above we assume that \(|a(k)|^{-1/2-\imath s}:=\prod_{l=1}^{d}|a_{l}(k)|^{-1/2-\imath s_{l}}\) where we put \(|a_{l}(k)|^{-1/2-\imath s_{l}}:=\exp((-1/2-\imath s_{l})\log|a_{l}(k)|)\). It follows that
\[\varphi_{ij}(s)=\sum_{k\in\Omega_{ij}}\frac{c(k)}{\sqrt{|\det A(k)|}}e^{- \imath s\cdot\log|a(k)|}\]
(here \(\log|a(k)|:=(\log|a_{1}(k)|,\ldots,\log|a_{d}(k)|)\); the dot denotes the inner product in \(\mathbb{R}^{d}\)).
Evidently, \(\varphi_{ij}=\varphi_{ji}\) and all functions \(\varphi_{ij}\) belong to the algebra \(C_{b}(\mathbb{R}^{d})\) of bounded and continuous functions on \(\mathbb{R}^{d}\) if \(N_{2}(c,A)<\infty\).
Note also that the operator \(\mathcal{H}_{c,A}\) is normal if the matrices \(A(k)\) form a commuting family [42].
**Definition 2.3**.: _Let the matrices \(A(k)\) form a commuting family and \(N_{2}(c,A)<\infty\). We define the matrix symbol of a Hausdorff operator \(\mathcal{H}_{c,A}\) by_
\[\Phi=(\varphi_{ij})_{i,j=1}^{2^{d}}\]
Then \(\Phi\) is a symmetric element of the matrix algebra \(\operatorname{Mat}_{2^{d}}(C_{b}(\mathbb{R}^{d}))\).
The symbol was first introduced in [41] for the case of positive definite \(A(k)\).
It is known [42, Theorem 1] that
\[\sigma(\mathcal{H}_{c,A})=\{\lambda\in\mathbb{C}:\inf_{s\in\mathbb{R}^{d}}|\det(\lambda I_{2^{d}}-\Phi(s))|=0\}, \tag{2.2}\]
and by [42, Corollary 3]
\[\|\mathcal{H}_{c,A}\|=\max\{|\lambda|:\inf_{s\in\mathbb{R}^{d}}|\det(\lambda I_{2^{d}}-\Phi(s))|=0\}=\sup_{s\in\mathbb{R}^{d}}\|\Phi(s)\|,\]
where \(\|\Phi(s)\|\) stands for the norm of the operator in \(\mathbb{C}^{2^{d}}\) of multiplication by the matrix \(\Phi(s)\) and \(I_{2^{d}}\) denotes the unit matrix of order \(2^{d}\).
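For \(d=1\) these formulas are easy to evaluate numerically: by the definitions above, the matrix symbol is the \(2\times 2\) matrix whose diagonal entries collect the terms with \(a(k)>0\) and whose off-diagonal entries collect the terms with \(a(k)<0\). The following sketch samples its eigenvalues over a grid of \(s\); by (2.2), their closure approximates \(\sigma(\mathcal{H}_{c,A})\). The data \(c(k)\), \(a(k)\) are illustrative.

```python
import numpy as np

cs = {1: 1.0, -1: 0.5}
As = {1: 2.0, -1: -3.0}   # a(1) > 0 feeds the diagonal, a(-1) < 0 the off-diagonal
s = np.linspace(-60, 60, 20001)
sym = lambda pred: sum(c * abs(As[k])**-0.5 * np.exp(-1j * s * np.log(abs(As[k])))
                       for k, c in cs.items() if pred(As[k]))
phi_p, phi_m = sym(lambda a: a > 0), sym(lambda a: a < 0)
Phi = np.moveaxis(np.array([[phi_p, phi_m], [phi_m, phi_p]]), -1, 0)  # (N, 2, 2)
eigs = np.linalg.eigvals(Phi).ravel()       # points approximating the spectrum
print("spectral radius ~ ||H|| =", np.abs(eigs).max())
```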
## 3. Main results
Recall that real numbers \(r_{1},\ldots,r_{m}\) are called _linearly independent over \(\mathbb{Z}\)_ if the equality \(\sum_{k=1}^{m}l_{k}r_{k}=0\) with all \(l_{k}\in\mathbb{Z}\) yields \(l_{k}=0\) for all \(k\). As usual we say that an infinite family of real numbers is linearly independent over \(\mathbb{Z}\) if each of its finite subfamilies is linearly independent over \(\mathbb{Z}\).
We shall say that a set of complex numbers is _rotationally invariant_ if it is invariant under all rotations around the origin.
**Theorem 3.4**.: _Let \(A(k)\in\mathrm{GL}(d,\mathbb{R})\) (\(k\in\mathbb{Z}\)) be a commuting family of self-adjoint matrices, and \(N_{2}(c,A)<\infty\)._
_(i) Let \(a(k)=(a_{1}(k);\ldots;a_{d}(k))\) be the family of all eigenvalues (with their multiplicities) of the matrix \(A(k)\). If for some \(\nu\) the numbers \(\log|a_{\nu}(k)|\) (\(k\in\mathbb{Z}\)) are linearly independent over \(\mathbb{Z}\), the spectrum \(\sigma(\mathcal{H}_{c,A})\) of a Hausdorff operator \(\mathcal{H}_{c,A}\) in \(L^{2}(\mathbb{R}^{d})\) is rotationally invariant._
_(ii) Let the symbol \(\Phi\) of \(\mathcal{H}_{c,A}\) be a real analytic matrix function on \(\mathbb{R}^{d}\). Then \(\lambda\in\sigma_{p}(\mathcal{H}_{c,A})\) if and only if all matrices \(\Phi(s)\) (\(s\in\mathbb{R}^{d}\)) have a common eigenvalue \(\lambda\)._
Proof.: (i) Consider the truncated operator
\[\mathcal{H}_{c,A}^{(n)}f(x):=\sum_{k=-n}^{n}c(k)f(A(k)x)\ \ (n\in\mathbb{N})\]
in \(L^{2}(\mathbb{R}^{d})\). Its matrix symbol is
\[\Phi^{(n)}=\left(\varphi_{ij}^{(n)}\right)_{i,j=1}^{2^{d}},\]
where
\[\varphi_{ij}^{(n)}(s) = \sum_{k\in\Omega_{ij},\atop|k|\leq n}c(k)|a(k)|^{-1/2-\imath s}\] \[= \sum_{k\in\Omega_{ij},\atop|k|\leq n}\frac{c(k)}{\sqrt{|\det A(k )|}}e^{-\imath s\cdot\log|a(k)|}.\]
Assume, for definiteness, that the numbers \(\log|a_{1}(k)|\) (\(k\in\mathbb{Z}\)) are linearly independent over \(\mathbb{Z}\). Then the corollary of Kronecker's approximation theorem (see, e.g., [24]) implies that the set
\[\{(e^{-\imath s_{1}\log|a_{1}(-n)|},\ldots,e^{-\imath s_{1}\log|a_{1}(n)|}):s_{1 }\in\mathbb{R}\}\]
is dense in the \((2n+1)\)-dimensional torus \(\mathbb{T}^{2n+1}\) for each \(n\in\mathbb{N}\). Thus the set
\[\{\Lambda(s):=(e^{-\imath s\cdot\log|a(-n)|},\ldots,e^{-\imath s\cdot\log|a(n)| }):s\in\mathbb{R}^{d}\}\]
is dense in \(\mathbb{T}^{2n+1}\) for each \(n\in\mathbb{N}\), as well.
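To build numerical intuition for this density argument (the illustration is, of course, not part of the proof), note that \(\log 2\) and \(\log 3\) are linearly independent over \(\mathbb{Z}\), so the curve \(s\mapsto(e^{-\imath s\log 2},e^{-\imath s\log 3})\) comes arbitrarily close to any point of \(\mathbb{T}^{2}\):

```python
import numpy as np

target = np.array([1.3, -2.1])          # an arbitrary pair of phases on T^2
s = np.linspace(0, 2e4, 1_000_001)
phases = np.stack([(-s * np.log(2)) % (2 * np.pi),
                   (-s * np.log(3)) % (2 * np.pi)], axis=1)
# Circular distance to the target, maximized over the two coordinates.
gap = np.abs((phases - target + np.pi) % (2 * np.pi) - np.pi).max(axis=1)
print(gap.min())                        # small, and shrinks as the s-range grows
```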
From [42, Theorem 1] it follows that
\[\sigma(\mathcal{H}^{(n)}_{c,A})=\{\lambda\in\mathbb{C}:\inf_{s\in\mathbb{R}^{d}}|\det(\lambda I_{2^{d}}-\Phi^{(n)}(s))|=0\}. \tag{3.4}\]
For each pair \(i,j=1,\ldots,2^{d}\) consider the functions
\[\xi^{(n)}_{ij}(t):=\sum_{k\in\Omega_{ij},\atop|k|\leq n}\frac{c(k)}{\sqrt{|\det A(k)|}}t_{k}\ \ \ (t=(t_{k})\in\mathbb{T}^{2n+1}).\]
These functions are continuous on \(\mathbb{T}^{2n+1}\) and \(\varphi^{(n)}_{ij}(s)=\xi^{(n)}_{ij}\circ\Lambda(s)\) for \(i,j=1,\ldots,2^{d}\). As was mentioned above, the range of \(\Lambda\) is dense in \(\mathbb{T}^{2n+1}\). Then by continuity we get
\[\inf_{s\in\mathbb{R}^{d}}|\det(\lambda I_{2^{d}}-\Phi^{(n)}(s))|=\inf_{s\in\mathbb{R}^{d}}|\det(\lambda I_{2^{d}}-(\xi^{(n)}_{ij}\circ\Lambda(s))_{i,j=1}^{2^{d}})|=\inf_{t\in\mathbb{T}^{2n+1}}|\det(\lambda I_{2^{d}}-(\xi^{(n)}_{ij}(t))_{i,j=1}^{2^{d}})|. \tag{3.5}\]
Further, for every \(\zeta\in\mathbb{C}\), \(|\zeta|=1\) we have
\[|\det(\zeta\lambda I_{2^{d}}-\Phi^{(n)}(s)| = |\det(\lambda I_{2^{d}}-\overline{\zeta}(\xi^{(n)}_{ij}(t))_{i,j= 1}^{2^{d}}|\] \[= |\det(\lambda I_{2^{d}}-(\xi^{(n)}_{ij}(\overline{\zeta}t))_{i,j =1}^{2^{d}}|.\]
The last equation, (3.4), and (3.5) imply that \(\zeta\sigma(\mathcal{H}^{(n)}_{c,A})=\sigma(\mathcal{H}^{(n)}_{c,A})\). Thus, the set \(\sigma(\mathcal{H}^{(n)}_{c,A})\) is rotationally invariant.
Let \(R_{n}:=\mathcal{H}_{c,A}-\mathcal{H}^{(n)}_{c,A}\). Since \(N_{2}(c,A)<\infty\), we have
\[\|R_{n}\|\leq\sum_{|k|>n}\frac{|c(k)|}{\sqrt{|\det A(k)|}}\to 0\ \ \mbox{ as }n\to\infty.\]
Since matrices \(A(k)\) form a commuting family, \(R_{n}\) commutes with \(\mathcal{H}^{(n)}_{c,A}\) (see the proof of Theorem 1 in [42]). Now by [18, Theorem IV.3.6] we have
\[\operatorname{dist}_{H}(\sigma(\mathcal{H}_{c,A}),\sigma(\mathcal{H}^{(n)}_{c,A}))\leq\|R_{n}\|, \tag{3.6}\]
where \(\mathrm{dist}_{H}(X,Y)\) denotes the Hausdorff distance between compact sets \(X,Y\subset\mathbb{C}\). Recall that
\[\mathrm{dist}_{H}(X,Y)=\inf\{\varepsilon:X\subseteq Y_{\varepsilon}\text{ and }Y \subseteq X_{\varepsilon}\},\]
where \(X_{\varepsilon}:=\cup_{x\in X}\{z\in\mathbb{C}:|z-x|\leq\varepsilon\}\). It is easy to verify that \(\mathrm{dist}_{H}\) is invariant with respect to rotations: \(\mathrm{dist}_{H}(\zeta X,\zeta Y)=\mathrm{dist}_{H}(X,Y)\) for all \(\zeta\in\mathbb{C}\), \(|\zeta|=1\). Then
\[\begin{split}\mathrm{dist}_{H}(\zeta\sigma(\mathcal{H}_{c,A}),\sigma(\mathcal{H}_{c,A}^{(n)}))&=\mathrm{dist}_{H}(\sigma(\mathcal{H}_{c,A}),\overline{\zeta}\sigma(\mathcal{H}_{c,A}^{(n)}))\\ &=\mathrm{dist}_{H}(\sigma(\mathcal{H}_{c,A}),\sigma(\mathcal{H}_{c,A}^{(n)}))\leq\|R_{n}\|\to 0,\end{split}\tag{3.7}\]
as \(n\to\infty\). Thus, both \(\zeta\sigma(\mathcal{H}_{c,A})\) and \(\sigma(\mathcal{H}_{c,A})\) are limits of \(\sigma(\mathcal{H}_{c,A}^{(n)})\) with respect to \(\mathrm{dist}_{H}\). It is known that \(\mathrm{dist}_{H}\) is a metric on compact subsets (see, e.g., [20, §21, VII]). Thus, (3.7) shows that \(\zeta\sigma(\mathcal{H}_{c,A})=\sigma(\mathcal{H}_{c,A})\) and (i) follows.
(ii) In [42, Theorem 1] it is proven that the point spectrum \(\sigma_{p}(\mathcal{H}_{c,A})\) consists of such complex numbers \(\lambda\) for which the closed set
\[E(\lambda):=\{s\in\mathbb{R}^{d}:\mathrm{det}(\lambda-\Phi(s))=0\}\]
is of positive Lebesgue measure.
Thus, if \(\lambda\in\cap_{s\in\mathbb{R}^{d}}\sigma(\Phi(s))\), then \(E(\lambda)=\mathbb{R}^{d}\) and therefore \(\lambda\in\sigma_{p}(\mathcal{H}_{c,A})\).
Conversely, if \(\lambda\) is an eigenvalue of \(\mathcal{H}_{c,A}\), then \(\mathrm{mes}(E(\lambda))>0\). Since each entry \(\varphi_{ij}\) of \(\Phi\) is a real analytic function on \(\mathbb{R}^{d}\), the function \(s\mapsto\mathrm{det}(\lambda-\Phi(s))\) is real analytic, as well. Since this function vanishes on the set \(E(\lambda)\) of positive measure, it is identically zero by a version of the uniqueness theorem for real analytic functions [29] (see also [19, p. 83]) and therefore \(\lambda\) is a common eigenvalue of \(\Phi(s)\) for all \(s\in\mathbb{R}^{d}\).
**Remark 1**.: _The arithmetic condition in Theorem 3.4 (i) is essential. Indeed, consider the operator \(\mathcal{H}f(x)=f(x)+f(2x)\) in \(L^{2}(\mathbb{R})\). Its scalar symbol (see (3.9) below) is \(\varphi(s)=1+2^{-1/2}e^{-\imath s\log 2}\). Then by [41] or [42, Corollary 7] we have \(\sigma(\mathcal{H})=\mathrm{cl}(\varphi(\mathbb{R}))=\{|z-1|=2^{-1/2}\}\) (here and below \(\mathrm{cl}\) stands for the closure)._
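This computation is easy to verify numerically; the following sketch samples the scalar symbol and confirms that its range lies on the circle \(|z-1|=2^{-1/2}\).

```python
import numpy as np

s = np.linspace(-50, 50, 100001)
phi = 1 + 2**-0.5 * np.exp(-1j * s * np.log(2))      # scalar symbol of f(x) + f(2x)
print(np.abs(phi - 1).min(), np.abs(phi - 1).max())  # both ~ 2^{-1/2} = 0.7071...
```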
**Remark 2**.: _Since \(\mathcal{H}_{c,A}\) is normal, the residual spectrum of \(\mathcal{H}_{c,A}\) is empty._
**Corollary 3.5**.: _Let \(X\) be a closed \(\mathcal{H}_{c,A}\)-invariant subspace of \(L^{2}(\mathbb{R}^{d})\). If the assumptions of Theorem 3.4 (i) are true and \(\mathcal{H}_{c,A}\) is a minimal normal extension of the restriction \(S:=\mathcal{H}_{c,A}|X\), then the spectrum \(\sigma(S)\) is rotationally invariant._
Proof.: It is known (see, e.g., [9, Theorem II.2.11]) that \(\sigma(\mathcal{H}_{c,A})\subseteq\sigma(S)\), and if \(\sigma(\mathcal{H}_{c,A})\neq\sigma(S)\) then \(\sigma(S)\) is a union of \(\sigma(\mathcal{H}_{c,A})\) and some bounded holes of \(\sigma(\mathcal{H}_{c,A})\) (i.e. bounded components of \(\mathbb{C}\setminus\sigma(\mathcal{H}_{c,A})\)). Since the set
\(\sigma(\mathcal{H}_{c,A})\) is rotationally invariant, each its hole is rotationally invariant, too. This completes the proof.
**Corollary 3.6**.: _Let the assumptions of Theorem 3.4 (i) hold. If the operator \(\mathcal{H}_{c,A}\) is non-null, then it is not self-adjoint._
Proof.: Indeed, since \(\mathcal{H}_{c,A}\) is normal, its spectral radius is \(\|\mathcal{H}_{c,A}\|\neq 0\). Thus, the spectrum \(\sigma(\mathcal{H}_{c,A})\) cannot be a subset of reals.
Recall that the _essential Weyl spectrum_\(\sigma_{ew}(T)\) of an (closed densely defined) operator \(T\) in a complex Banach space \(X\) can be defined as
\[\sigma_{ew}(T)=\mathbb{C}\setminus\Delta_{4}(T),\]
where
\[\Delta_{4}(T)=\{\lambda\in\mathbb{C}:T-\lambda I\text{ is Fredholm and }\mathrm{ind}(T-\lambda I)=0\}.\]
It is known that
\[\sigma_{ew}(T)=\cap_{K\in\mathcal{K}(X)}\sigma(T+K),\]
where \(\mathcal{K}(X)\) stands for the space of compact operators in \(X\) (see, e.g., [15], [3], or [13, Theorem IX.1.4].1)
Footnote 1: In [13] the Weyl spectrum is denoted by \(\sigma_{e4}(T)\).
In the following \(\pi_{00}(T)\) stands for the set of isolated points of the spectrum \(\sigma(T)\) of an operator \(T\) that are eigenvalues of finite geometric multiplicity.
**Corollary 3.7**.: _Let \(\mathcal{H}_{c,A}\neq O\) and the assumptions of Theorem 3.4 (i) hold. Then \(\sigma_{ew}(\mathcal{H}_{c,A})=\sigma(\mathcal{H}_{c,A})\) if \(0\notin\pi_{00}(\mathcal{H}_{c,A})\), and \(\sigma_{ew}(\mathcal{H}_{c,A})=\sigma(\mathcal{H}_{c,A})\setminus\{0\}\) otherwise. In particular, \(\sigma_{ew}(\mathcal{H}_{c,A})\) is rotationally invariant._
Proof.: Theorem 3.4 (i) implies that \(\pi_{00}(\mathcal{H}_{c,A})\subseteq\{0\}\). Therefore, the first assertion of the corollary follows from the Weyl theorem, which states that
\[\sigma_{ew}(\mathcal{H}_{c,A})=\sigma(\mathcal{H}_{c,A})\setminus\pi_{00}( \mathcal{H}_{c,A}) \tag{3.8}\]
(see, e.g., [15], [3]). Now the last statement follows from the Theorem 3.4 (i), as well.
For the next corollaries consider the _scalar symbol_ of a Hausdorff operator \(\mathcal{H}_{c,A}\)
\[\varphi(s):=\sum_{k=-\infty}^{\infty}c(k)|\det A(k)|^{-1/2}e^{-\imath s\cdot \log|a(k)|}. \tag{3.9}\]
**Corollary 3.8**.: _(cf. [44]) Let, in addition to the assumptions of Theorem 3.4 (i), all matrices \((A(k))_{k\in\mathbb{Z}}\) be positive definite and \(\mathcal{H}_{c,A}\neq O\). Then the spectrum \(\sigma(\mathcal{H}_{c,A})\) is an annulus (or a disc) of the form_
\[\left\{\zeta\in\mathbb{C}:\inf_{\mathbb{R}^{d}}|\varphi|\leq|\zeta|\leq\sup_{ \mathbb{R}^{d}}|\varphi|\right\} \tag{3.10}\]
_and \(\|\mathcal{H}_{c,A}\|=\sup_{\mathbb{R}^{d}}|\varphi|\). Moreover, \(\sigma_{ew}(\mathcal{H}_{c,A})=\sigma(\mathcal{H}_{c,A})\). In particular, \(\sigma(\mathcal{H}_{c,A})\) is invariant under compact perturbations of \(\mathcal{H}_{c,A}\)._
Proof.: In the case of positive definiteness the spectrum \(\sigma(\mathcal{H}_{c,A})\) equals the closure \(\mathrm{cl}(\varphi(\mathbb{R}^{d}))\) by [41], [42, Corollary 7]. Since \(N_{2}(c,A)<\infty\), the scalar symbol \(\varphi\) is continuous on \(\mathbb{R}^{d}\) and therefore its range is connected. Since the set \(\sigma(\mathcal{H}_{c,A})\) is connected and rotationally invariant, it is an annulus (or a disc) centered at the origin. Moreover, the spectral radius of the operator \(\mathcal{H}_{c,A}\) equals \(\sup_{\mathbb{R}^{d}}|\varphi|\) and coincides with its norm, since \(\mathcal{H}_{c,A}\) is normal. This proves the first statement. Finally, in our case \(\pi_{00}(\mathcal{H}_{c,A})=\varnothing\) and the last assertion follows from (3.8).
**Corollary 3.9**.: _(cf. [44]) Let, in addition to the assumptions of Theorem 3.4 (i), all matrices \((A(k))_{k\in\mathbb{Z}}\) be negative definite and \(\mathcal{H}_{c,A}\neq O\). Then the spectrum \(\sigma(\mathcal{H}_{c,A})\) is an annulus (or a disc) of the form (3.10). Moreover, \(\sigma_{ew}(\mathcal{H}_{c,A})=\sigma(\mathcal{H}_{c,A})\). In particular, \(\sigma(\mathcal{H}_{c,A})\) is invariant under compact perturbations of \(\mathcal{H}_{c,A}\)._
Proof.: The scalar symbol of the operator \(\mathcal{H}_{c,A}\) is
\[\varphi(s):=\sum_{k=-\infty}^{\infty}\frac{c(k)}{\sqrt{|\det A(k)|}}e^{-\imath s\cdot\log|a(k)|}=\sum_{k=-\infty}^{\infty}\frac{c(k)}{\sqrt{\det(-A(k))}}e^{-\imath s\cdot\log(-a(k))}.\]
Thus, \(\varphi\) coincides with the scalar symbol \(\varphi^{-}\) of the operator \(\mathcal{H}_{c,(-A)}\) where all the matrices \((-A(k))\) are positive definite.
According to [42, Corollary 8], \(\sigma(\mathcal{H}_{c,A})\) equals the set
\[-\mathrm{cl}(\varphi(\mathbb{R}^{d}))\cup\mathrm{cl}(\varphi(\mathbb{R}^{d}))= -\mathrm{cl}(\varphi^{-}(\mathbb{R}^{d}))\cup\mathrm{cl}(\varphi^{-}(\mathbb{R }^{d})).\]
But as was shown in the previous corollary \(\mathrm{cl}(\varphi^{-}(\mathbb{R}^{d}))\) is an annulus (or a disc) centered at the origin and so \(-\mathrm{cl}(\varphi^{-}(\mathbb{R}^{d}))=\mathrm{cl}(\varphi^{-}(\mathbb{R }^{d}))=\mathrm{cl}(\varphi(\mathbb{R}^{d}))\). It follows that the spectrum \(\sigma(\mathcal{H}_{c,A})\) is given by the formula (3.10). The last assertion follows from (3.8) as in the previous corollary.
Now we shall consider the case where \(A(k)=\operatorname{diag}(a(k),\ldots,a(k))\), \(a(k)\neq 0\). In other words, we consider discrete Hausdorff operators of the form
\[\mathcal{H}_{c,a}f(x):=\sum_{k=-\infty}^{\infty}c(k)f(a(k)x),\ x\in\mathbb{R}^{d} \tag{3.11}\]
provided the series converges absolutely.
Evidently every one-dimensional discrete Hausdorff operator has the form (3.11).
As above, we introduce the scalar symbol of \(\mathcal{H}_{c,a}\) as
\[\varphi(s):=\sum_{k=-\infty}^{\infty}c(k)|a(k)|^{-d/2}e^{-\imath\log|a(k)|\sum _{j=1}^{d}s_{j}}. \tag{3.12}\]
We introduce also the conjugate scalar symbol of \(\mathcal{H}_{c,a}\) as
\[\varphi^{*}(s):=\sum_{k=-\infty}^{\infty}c(k)\mathrm{sgn}(a(k))|a(k)|^{-d/2}e ^{-\imath\log|a(k)|\sum_{j=1}^{d}s_{j}}. \tag{3.13}\]
**Theorem 3.10**.: _Let \(N_{2}(c,a)<\infty\). Then the following assertions hold._
_(i)_
\[\sigma(\mathcal{H}_{c,a})=\mathrm{cl}(\varphi(\mathbb{R}^{d})\cup\varphi^{*}( \mathbb{R}^{d}))\]
_(\(\mathrm{cl}\) stands for the closure), and \(\|\mathcal{H}_{c,a}\|=\max\{\sup|\varphi|,\sup|\varphi^{*}|\}.\)_
_(ii) Let the set \(\{\log|a(k)|:k\in\mathbb{Z}\}\) be linearly independent over \(\mathbb{Z}\). Then the spectrum \(\sigma(\mathcal{H}_{c,a})\) is an annulus of the form \(\{r(c,A)\leq|\zeta|\leq\|\mathcal{H}_{c,a}\|\}\), or the disc \(\{|\zeta|\leq\|\mathcal{H}_{c,a}\|\}\). Moreover, \(\sigma_{ew}(\mathcal{H}_{c,a})=\sigma(\mathcal{H}_{c,a})\). Thus, \(\sigma(\mathcal{H}_{c,a})\) is invariant under compact perturbations of \(\mathcal{H}_{c,a}\)._
Proof.: (i) In order to employ Theorem 1 from [42] we enumerate \(d\)-hyperoctants \(U_{j}\) in such a way that \(U_{2^{d-1}+j}=-U_{j}\) for \(j=1,\ldots,2^{d-1}.\) Then \(\Omega_{ii}=\{k\in\mathbb{Z}:a(k)>0\},\)\(\Omega_{ij}=\{k\in\mathbb{Z}:a(k)<0\}\) if \(|j-i|=2^{d-1},\) and \(\Omega_{ij}=\varnothing\) otherwise. It follows that
\[\varphi_{ii}(s)=\varphi_{+}(s):=\sum_{k:a(k)>0}c(k)|a(k)|^{-d/2}e^{-\imath\log |a(k)|\sum_{j=1}^{d}s_{j}}.\]
Analogously, if \(|j-i|=2^{d-1},\)
\[\varphi_{ij}(s)=\varphi_{-}(s):=\sum_{k:a(k)<0}c(k)|a(k)|^{-d/2}e^{-\imath\log |a(k)|\sum_{j=1}^{d}s_{j}},\]
and \(\varphi_{ij}=0\) otherwise. Thus, the matrix symbol of \(\mathcal{H}_{c,a}\) is the following block matrix:
\[\Phi=\begin{pmatrix}\varphi_{+}I_{2^{d-1}}&\varphi_{-}I_{2^{d-1}}\\ \varphi_{-}I_{2^{d-1}}&\varphi_{+}I_{2^{d-1}}\end{pmatrix},\]
where \(I_{2^{d-1}}\) denotes the identity matrix of order \(2^{d-1}.\) Then for every \(\lambda\in\mathbb{C}\)
\[\lambda-\Phi=\begin{pmatrix}(\lambda-\varphi_{+})I_{2^{d-1}}&-\varphi_{-}I_{2 ^{d-1}}\\ -\varphi_{-}I_{2^{d-1}}&(\lambda-\varphi_{+})I_{2^{d-1}}\end{pmatrix}\]
and therefore by the formula of Schur (see, e. g., [14, p. 46]),
\[\begin{array}{ll}\det(\lambda-\Phi)&=\det((\lambda-\varphi_{+})^{2}I_{2^{d-1}} -\varphi_{-}^{2}I_{2^{d-1}})\\ &=((\lambda-\varphi_{+}-\varphi_{-})(\lambda-\varphi_{+}+\varphi_{-}))^{2^{d-1} }\\ &\qquad=((\lambda-\varphi)(\lambda-\varphi^{*}))^{2^{d-1}},\end{array}\]
since \(\varphi=\varphi_{+}+\varphi_{-}\), \(\varphi^{*}=\varphi_{+}-\varphi_{-}\). Now Theorem 1 from [42] (together with the boundedness of \(\varphi,\varphi^{*}\)) implies that
\[\sigma(\mathcal{H}_{c,a})=\{\lambda\in\mathbb{C}:\inf_{s\in\mathbb{R}^{d}}|( \lambda-\varphi(s))(\lambda-\varphi^{*}(s))|=0\}=\operatorname{cl}(\varphi( \mathbb{R}^{d})\cup\varphi^{*}(\mathbb{R}^{d})).\]
In view of the normality of \(\mathcal{H}_{c,a}\), this yields that \(\|\mathcal{H}_{c,a}\|=\max\{\sup|\varphi|,\sup|\varphi^{*}|\}\).
(ii) Consider the truncated operator
\[\mathcal{H}_{c,a}^{(n)}f(x):=\sum_{k=-n}^{n}c(k)f(a(k)x),\ \ n\in\mathbb{N},\]
in \(L^{2}(\mathbb{R}^{d})\). We claim that
\[\operatorname{cl}(\varphi_{n}(\mathbb{R}^{d}))=\left\{\xi_{n}(t):t=(t_{-n}, \ldots,t_{n})\in\mathbb{T}^{2n+1}\right\}, \tag{3.14}\]
where
\[\xi_{n}(t):=\sum_{k=-n}^{n}c(k)|a(k)|^{-d/2}t_{k}.\]
Indeed, the scalar symbol (3.9) of \(\mathcal{H}_{c,a}^{(n)}\) is a trigonometric polynomial of the form
\[\varphi_{n}(s)=\sum_{k=-n}^{n}c(k)|a(k)|^{-d/2}e^{-\imath\log|a(k)|\sum_{j=1}^ {d}s_{j}},\ s\in\mathbb{R}^{d}. \tag{3.15}\]
As in the proof of Theorem 3.4 the corollary of Kronecker's approximation theorem implies that the set
\[\{\Lambda(s):=(e^{-\imath\log|a(-n)|\sum_{j=1}^{d}s_{j}},\ldots,e^{-\imath\log |a(n)|\sum_{j=1}^{d}s_{j}}):s\in\mathbb{R}^{d}\} \tag{3.16}\]
is dense in \(\mathbb{T}^{2n+1}\). Moreover, since \(\xi_{n}\) is a continuous and closed map on \(\mathbb{T}^{2n+1}\), we have \(\xi_{n}(\operatorname{cl}(M))=\operatorname{cl}(\xi_{n}(M))\) for all \(M\subset\mathbb{T}^{2n+1}\) (see, e.g., [5, Chapter 1, §5, Proposition 9]). Therefore,
\[\operatorname{cl}(\varphi_{n}(\mathbb{R}^{d}))=\operatorname{cl}(\xi_{n}( \Lambda(\mathbb{R}^{d})))=\xi_{n}(\operatorname{cl}(\Lambda(\mathbb{R}^{d})))= \xi_{n}(\mathbb{T}^{2n+1}).\]
Similarly, if we let
\[\xi_{n}^{*}(t):=\sum_{k=-n}^{n}c(k)\mathrm{sgn}(a(k))|a(k)|^{-d/2}t_{k},\]
then
\[\operatorname{cl}(\varphi_{n}^{*}(\mathbb{R}^{d}))=\xi_{n}^{*}(\mathbb{T}^{2n +1}).\]
Since
\[\xi_{n}(\mathbb{T}^{2n+1})=\xi_{n}^{*}(\mathbb{T}^{2n+1}),\]
we conclude that the spectrum
\[\sigma(\mathcal{H}_{c,a}^{(n)})=\operatorname{cl}(\varphi_{n}(\mathbb{R}^{d}) \cup\varphi_{n}^{*}(\mathbb{R}^{d}))=\xi_{n}(\mathbb{T}^{2n+1})\]
is connected as the continuous image of the connected set.
Consider the operator \(R_{n}:=\mathcal{H}_{c,a}-\mathcal{H}_{c,a}^{(n)}\). Since \(\|R_{n}\|\to 0\) (\(n\to\infty\)) (see the proof of Theorem 3.4), formula (3.6) shows that \(\sigma(\mathcal{H}_{c,a})\) is a limit of \(\sigma(\mathcal{H}_{c,a}^{(n)})\) in the Hausdorff metric. Let \(K\) denote a compact plane set that contains \(\sigma(\mathcal{H}_{c,a})\) and all \(\sigma(\mathcal{H}_{c,a}^{(n)})\). By [21, Chapter 4, §42, II, Theorem 2], \((2^{K})_{m}=2^{K}\), where \((2^{K})_{m}\) denotes the space of closed subsets of \(K\) endowed with the Hausdorff metric. Since the sets \(\sigma(\mathcal{H}_{c,a}^{(n)})\) are connected, the set \(\sigma(\mathcal{H}_{c,a})\) is also connected by [21, Chapter 5, §46, II, Theorem 14]. Application of Theorem 3.4 completes the proof of the first assertion of part (ii). The proof of the second assertion is the same as for the similar assertion in Corollary 3.8.
**Remark 3**.: _Under the conditions of Theorem 3.10 (ii) or Corollaries 3.8 and 3.9, the spectrum \(\sigma(\mathcal{H}_{c,a})\) is connected. In general, the problem of connectedness of \(\sigma(\mathcal{H}_{c,a})\) under the conditions of Theorem 3.4 (i) is open._
**Corollary 3.11**.: _(cf. [44]). Let \(A(k)\in\operatorname{GL}(d,\mathbb{R})\) (\(k\in\mathbb{Z}\)) be a commuting family of positive definite matrices, and \(N_{2}(c,A)<\infty\). Let the scalar symbol \(\varphi\) of \(\mathcal{H}_{c,A}\) be real analytic. Then \(\lambda\in\sigma_{p}(\mathcal{H}_{c,A})\) if and only if \(\mathcal{H}_{c,A}=\lambda I\)._
Proof.: One can assume that \(\mathcal{H}_{c,A}\neq O\). We use statement (ii) of Theorem 3.4. It was shown in [42, Corollary 7] that in the case of positive definiteness one has \(\Phi=\operatorname{diag}(\varphi,\ldots,\varphi)\). It follows that \(\lambda\in\sigma(\Phi(s))\) for all \(s\in\mathbb{R}^{d}\), i.e., \(\det(\lambda-\Phi(s))\equiv 0\), if and only if \(\varphi(s)\equiv\lambda\). This yields by [41], [42, Corollary 7] that
\[\sigma(\mathcal{H}_{c,A})=\operatorname{cl}(\varphi(\mathbb{R}^{d}))=\{ \lambda\}.\]
Since \(\mathcal{H}_{c,A}\) is normal, this implies that \(\mathcal{H}_{c,A}=\lambda I\).
**Corollary 3.12**.: _Let \(A(k)\in\operatorname{GL}(d,\mathbb{R})\) (\(k\in\mathbb{Z}\)) be a commuting family of negative definite matrices, and \(N_{2}(c,A)<\infty\). Let the scalar symbol \(\varphi\) of \(\mathcal{H}_{c,A}\) be real analytic. Then \(\lambda\in\sigma_{p}(\mathcal{H}_{c,A})\) if and only if \(\mathcal{H}_{c,A}=\pm\lambda J\), where \(Jf(x)=f(-x)\)._
Proof.: One can assume that \(\mathcal{H}_{c,A}\neq O\). First note that if a function \(f_{e}\in L^{2}(\mathbb{R}^{d})\) is even, then

\[\mathcal{H}_{c,A}f_{e}(x)=\sum_{k\in\mathbb{Z}}c(k)f_{e}(-A(k)x)=\mathcal{H}_{c,(-A)}f_{e}(x).\]
Similarly, if a function \(f_{o}\in L^{2}(\mathbb{R}^{d})\) is odd, then

\[\mathcal{H}_{c,A}f_{o}(x)=-\sum_{k\in\mathbb{Z}}c(k)f_{o}(-A(k)x)=-\mathcal{H}_{c,(-A)}f_{o}(x).\]
Now let \(f\in L^{2}(\mathbb{R}^{d})\) be an eigenfunction of \(\mathcal{H}_{c,A}\) with an eigenvalue \(\lambda\), i.e., \(\mathcal{H}_{c,A}f=\lambda f\), and \(f\neq 0\). We have \(f=f_{e}+f_{o}\) with an even part \(f_{e}\in L^{2}(\mathbb{R}^{d})\) and an odd part \(f_{o}\in L^{2}(\mathbb{R}^{d})\), and therefore \(\mathcal{H}_{c,A}f_{e}+\mathcal{H}_{c,A}f_{o}=\lambda f_{e}+\lambda f_{o}\). Due to the previous equalities, this implies that
\[\mathcal{H}_{c,(-A)}f_{e}-\lambda f_{e}=\mathcal{H}_{c,(-A)}f_{o}+\lambda f_{ o}.\]
Since the left-hand side here is even, and the right-hand one is odd, this yields that both functions are zero. Thus,
\[\mathcal{H}_{c,(-A)}f_{e}=\lambda f_{e},\text{ and }\mathcal{H}_{c,(-A)}f_{o}=(- \lambda)f_{o}.\]
Let \(f_{e}\neq 0\). Since \(-A(k)\) is positive definite, Corollary 3.11 implies that \(\mathcal{H}_{c,(-A)}=\lambda I\), i.e., \(\mathcal{H}_{c,A}=\lambda J\). Similarly, if \(f_{o}\neq 0\), we have by Corollary 3.11 that \(\mathcal{H}_{c,A}=-\lambda J\). The converse is obvious.
## 4. Several examples
Below we list several examples of functional differential equations with discrete Hausdorff operators.
**Example 1**.: _The functional differential equation_
\[y^{\prime}(t)=\sum_{k\in E}c(k)y(a(k)t)\]
_(\(E\) is a finite subset of \(\mathbb{Z}_{+}\)) with a one-dimensional discrete Hausdorff operator \(\mathcal{H}_{c,a}\) of the form (3.11) on the right-hand side is called the multi-pantograph equation. Theorem 3.10 describes the spectrum of the operator \(\mathcal{H}_{c,a}\) even for the case where \(E=\mathbb{Z}\)._
Special cases and analogs of the multi-pantograph equation have found applications in number theory, dynamical systems, probability, quantum mechanics, a current collection system for an electric locomotive, biology, economy, control, and electrodynamics (see, e.g., [2], [23]).
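For the reader's convenience, the following sketch solves an instance of the multi-pantograph equation with \(y(0)=1\) by matching power-series coefficients, which yields the recursion \(y_{n+1}=\big(\sum_{k\in E}c(k)a(k)^{n}\big)y_{n}/(n+1)\); the coefficients below are illustrative.

```python
import numpy as np

cs, a_s = [1.0, -0.5], [0.5, 0.25]      # illustrative c(k), a(k) with |a(k)| < 1
y = [1.0]                                # Taylor coefficients of y at 0
for n in range(30):
    y.append(sum(c * a**n for c, a in zip(cs, a_s)) * y[n] / (n + 1))

t = 0.8
y_at = lambda u: sum(yn * u**n for n, yn in enumerate(y))
lhs = sum(n * yn * t**(n - 1) for n, yn in enumerate(y) if n > 0)   # y'(t)
rhs = sum(c * y_at(a * t) for c, a in zip(cs, a_s))
print(lhs, rhs)                          # the two sides agree
```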
**Example 2**.: The next functional partial differential equation arises in a model of cell growth
\[\frac{\partial n(x,t)}{\partial t}+q\frac{\partial n(x,t)}{\partial x}=c(0)n(x,t)+c(1)n(\alpha x,t)\quad(q>0,\alpha>0)\]
(see, e.g., [48]). The right-hand side here is a two-term two-dimensional discrete Hausdorff operator with positive definite commuting perturbation matrices \(A(0)=I_{2}\), \(A(1)=\operatorname{diag}(\alpha,1)\). Formula (3.4) implies that the spectrum of this operator is the circle with center \(c(0)\) and radius \(|c(1)|/\sqrt{\alpha}\).
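A quick numerical check of this circle (the values of \(c(0)\), \(c(1)\) and \(\alpha\) are illustrative):

```python
import numpy as np

# Scalar symbol: phi(s) = c(0) + c(1) * alpha^{-1/2} * e^{-i s_1 log(alpha)},
# since |det A(1)| = alpha and log|a(1)| = (log alpha, 0).
c0, c1, alpha = -1.0, 0.8, 4.0
s1 = np.linspace(-40, 40, 200001)
phi = c0 + c1 * alpha**-0.5 * np.exp(-1j * s1 * np.log(alpha))
print(np.abs(phi - c0).min(), np.abs(phi - c0).max(), c1 / np.sqrt(alpha))
```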
**Example 3**.: In [45] the following functional partial differential equation
\[-\Delta Ru(x)=f(x) \tag{4.17}\]
was considered with a discrete Hausdorff operator
\[Ru(x)=\sum_{k\in E}c(k)u(q^{-k}x),\]
where \(c(k)\in\mathbb{C}\), \(q>1\), \(E\) is a finite subset of \(\mathbb{Z}\). In this case, \(A(k)=\operatorname{diag}(q^{-k},\ldots,q^{-k})\). Here Theorem 3.10 is applicable. One has
\[\varphi(s)=\varphi^{*}(s)=\sum_{k\in E}c(k)(q^{\frac{d}{2}}e^{-\imath(\log q) \sum_{j=1}^{d}s_{j}})^{k},\ s\in\mathbb{R}^{d}.\]
Thus, by the aforementioned theorem we have \(\sigma(R)=r(q^{\frac{d}{2}}\mathbb{T})\) in \(L^{2}(\mathbb{R}^{d})\), where
\[r(z)=\sum_{k\in E}c(k)z^{k}.\]
This result was obtained in [45, §1.1] using a different method. In particular, \(R\) is invertible if and only if \(r(z)\neq 0\) for \(|z|=q^{\frac{d}{2}}\). In this case, equation (4.17) in \(L^{2}(\mathbb{R}^{d})\) reduces to the Poisson equation. Since \(R\) is normal, it follows that \(\|R\|=\max\{|r(z)|:|z|=q^{\frac{d}{2}}\}\). Under the boundedness condition given by Lemma 2.2, these results remain valid for infinite \(E\subseteq\mathbb{Z}\), too.
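The identity \(\sigma(R)=r(q^{\frac{d}{2}}\mathbb{T})\) is convenient for computation. The following sketch traces the spectral curve and tests the invertibility criterion for illustrative data \(q\), \(d\) and \(c(k)\):

```python
import numpy as np

q, d = 2.0, 3
cs = {0: 1.0, 1: -0.1, -2: 0.05}              # finitely supported c(k)
theta = np.linspace(0, 2 * np.pi, 100001)
z = q**(d / 2) * np.exp(1j * theta)           # the circle |z| = q^{d/2}
r = sum(c * z**k for k, c in cs.items())      # spectral curve r(q^{d/2} T)
print("dist(0, sigma(R)) =", np.abs(r).min(), "  ||R|| =", np.abs(r).max())
```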
We shall give some applications of Theorem 3.4 to several pantograph-type partial differential equations.
**Example 4**.: Consider the following multidimensional pantograph-type PDE in \(L^{2}(\mathbb{R}^{d})\)
\[\frac{\partial u(t,\cdot)}{\partial t}=\sum_{k=-\infty}^{\infty}c(k)u(t,A(k) \cdot)+Ku(t,\cdot)\equiv(\mathcal{H}_{c,A}+K)u(t,\cdot)\]
with unknown differentiable function \(t\mapsto u(t,\cdot):\mathbb{R}\to L^{2}(\mathbb{R}^{d})\) where \(\mathcal{H}_{c,A}\) and a compact operator \(K\) in \(L^{2}(\mathbb{R}^{d})\) act with respect to \(x\). As usual we can rewrite the last equation in the form
\[\frac{du}{dt}=(\mathcal{H}_{c,A}+K)u. \tag{4.18}\]
It is known [11, §II.3, Theorem 3.1] that if all solutions of such an equation are bounded on \(\mathbb{R}\), then \(\sigma(\mathcal{H}_{c,A}+K)\subset\imath\mathbb{R}\). Let \(\mathcal{H}_{c,A}\neq O\). Since \(\mathcal{H}_{c,A}\) is normal, it follows that \(\sigma(\mathcal{H}_{c,A})\neq\{0\}\). Then by Corollary 3.7, under the conditions of Theorem 3.4 (i) we have \(\sigma_{ew}(\mathcal{H}_{c,A})\neq\{0\}\). In turn, this implies that \(\sigma_{ew}(\mathcal{H}_{c,A}+K)=\sigma_{ew}(\mathcal{H}_{c,A})\neq\{0\}\), too. Since \(\sigma_{ew}(\mathcal{H}_{c,A})\) is rotationally invariant, we conclude that equation (4.18) has unbounded solutions in \(L^{2}(\mathbb{R}^{d})\).
**Example 5**.: Similarly to the previous example, Theorem 3.2 from [11, §II.3] shows that, under the conditions of Theorem 3.4 (i), if \(\mathcal{H}_{c,A}\neq O\) and \(K\) is a compact operator in \(L^{2}(\mathbb{R}^{d})\), the equation
\[\frac{d^{2}u}{dt^{2}}=(\mathcal{H}_{c,A}+K)u\]
has unbounded solutions in \(L^{2}(\mathbb{R}^{d})\) by Corollary 3.7.
**Example 6**.: Consider an inhomogeneous equation
\[\frac{du}{dt}=(\mathcal{H}_{c,A}+K)u+f \tag{4.19}\]
with a given continuous function \(f:\mathbb{R}\to L^{2}(\mathbb{R}^{d})\) (again, \(\mathcal{H}_{c,A}\) and the compact operator \(K\) act in \(L^{2}(\mathbb{R}^{d})\)). We say that this equation has the bounded uniqueness property if for each bounded \(f\) it has a unique bounded solution \(u:\mathbb{R}\to L^{2}(\mathbb{R}^{d})\). It is known [11, §II.4, Theorem 4.1] that if \(\sigma(\mathcal{H}_{c,A}+K)\) is disconnected, the equation (4.19) has the bounded uniqueness property if and only if the spectrum \(\sigma(\mathcal{H}_{c,A}+K)\) does not intersect \(\imath\mathbb{R}\). Thus, under the conditions of Theorem 3.4 (i), if \(\mathcal{H}_{c,A}\neq O\), Corollary 3.7 shows that the equation (4.19) does not enjoy the bounded uniqueness property if \(\sigma(\mathcal{H}_{c,A})\) is not an annulus (or a disc).
## 5. acknowledgments
The author is partially supported by the State Program of Scientific Research of Republic of Belarus, project No. 20211776 and by the Ministry of Education and Science of Russia, agreement No. 075-02-2023-924.
## 6. Data availability statement
The author confirms that all data generated or analyzed during this study are included in this article. This work does not have any conflicts of interest.
|
2310.17813 | A Spectral Condition for Feature Learning | The push to train ever larger neural networks has motivated the study of
initialization and training at large network width. A key challenge is to scale
training so that a network's internal representations evolve nontrivially at
all widths, a process known as feature learning. Here, we show that feature
learning is achieved by scaling the spectral norm of weight matrices and their
updates like $\sqrt{\texttt{fan-out}/\texttt{fan-in}}$, in contrast to widely
used but heuristic scalings based on Frobenius norm and entry size. Our
spectral scaling analysis also leads to an elementary derivation of
\emph{maximal update parametrization}. All in all, we aim to provide the reader
with a solid conceptual understanding of feature learning in neural networks. | Greg Yang, James B. Simon, Jeremy Bernstein | 2023-10-26T23:17:39Z | http://arxiv.org/abs/2310.17813v2 | # A Spectral Condition for Feature Learning
###### Abstract
The push to train ever larger neural networks has motivated the study of initialization and training at large network width. A key challenge is to scale training so that a network's internal representations evolve nontrivially at all widths, a process known as _feature learning_. Here, we show that feature learning is achieved by scaling the _spectral norm_ of weight matrices and their updates like \(\sqrt{\texttt{fan-out}/\texttt{fan-in}}\), in contrast to widely used but heuristic scalings based on Frobenius norm and entry size. Our spectral scaling analysis also leads to an elementary derivation of _maximal update parametrization_. All in all, we aim to provide the reader with a solid conceptual understanding of feature learning in neural networks.
## 1 Introduction
Recent years have seen an unprecedented push to train deep learning systems with more and more parameters, leading to powerful models across domains and the unlocking of qualitatively new capabilities (Brown et al., 2020; Ramesh et al., 2022; Silver et al., 2016). This continuing trend, combined with the technical challenges of training large models, has motivated much recent study of the dynamics of neural networks at _large width_, and more generally the study of how their dynamics scale as network width grows. This program has yielded a cornucopia of theoretical insights (Arora et al., 2019; Canatar et al., 2021; Jacot et al., 2018; Lee et al., 2018) and practical scaling recommendations (Dey et al., 2023; Yang & Hu, 2021; Yang et al., 2021).
A key challenge when training a network of large width is to ensure that _feature learning_ occurs at hidden layers. By this, we mean that the hyperparameters of the network are scaled in a manner such that the hidden representations of the network (as obtained by partial evaluation of the network up to a certain layer) change substantially over the course of training. Naive hyperparameter scaling rules, including the well-studied "neural tangent parametrization" (NTP), in fact _lose_ feature learning at large width (Lee et al., 2019; Sohl-Dickstein et al., 2020). But ample evidence supports the conclusion that proper feature learning is necessary for achieving optimal performance on many tasks (Atanasov et al., 2022; Fort et al., 2020; Lee et al., 2020; Vyas et al., 2022). Furthermore, scaling training correctly can lead to new functionality such as _hyperparameter transfer_. For instance, the recently proposed _maximal update parametrization_(Yang & Hu, 2021; Yang et al., 2021) allows for transferring hyperparameters from narrow models to wide models, avoiding the cost of tuning the wide model directly.
Maximal update parametrization (\(\mu\)P) is derived by fairly involved "tensor programs" arguments that track feature distributions analytically in the infinite width limit. Anecdotally, the principles underlying \(\mu\)P are not well understood by the community. In this paper, we provide a new perspective on \(\mu\)P, showing that its scaling relations can be obtained by elementary linear algebra arguments. In short, we show that \(\mu\)P is equivalent to scaling the _spectral norm_ of any weight matrix or update like \(\sqrt{\texttt{fan-out}/\texttt{fan-in}}\). This simple condition has various favorable numerical properties that contrast sharply with heuristic optimization strategies based on controlling the Frobenius norm (You et al., 2017) or entry size (Kingma & Ba, 2015) of updates. In the authors' experience, the spectral scaling condition both simplifies the implementation of \(\mu\)P in code, and is significantly easier to work with theoretically, leading to further conceptual advances in our research (Bernstein et al., 2023).
On a more fundamental level, an important step to solving many problems in classical computer science is to write down a suitable distance function for the problem at hand (Dhillon and Tropp, 2008). This idea is of particular importance in the design of optimization algorithms, where a notion of parameter distance is needed (Amari, 1998; Nemirovsky and Yudin, 1983). While it can be tempting to use the Euclidean norm on parameter vectors to measure distance, this naive choice risks discarding the structure of the problem. For example, neural networks involve compositions of linear operators, which we refer to as their _operator structure_. Past efforts to metrize the space of neural networks while accounting for their operator structure have included using the Frobenius norm to measure distance between matrices (Bernstein et al., 2020), which motivates various optimization algorithms that make Frobenius-normalized updates (Liu et al., 2021; Shazeer and Stern, 2018; You et al., 2017; You et al., 2020). This paper shows that the spectral norm provides a better notion of distance between operators in the context of deep learning.
### Road map for the paper
To begin, we state the precise conditions on hidden features that we wish to ensure. We will ask for two things: both the _features_ and their _updates_ upon a step of gradient descent must be the proper size.
**Desideratum 1** (Feature learning).: Let \(\mathbf{h}_{\ell}(\mathbf{x})\in\mathbb{R}^{n_{\ell}}\) denote the features of input \(\mathbf{x}\) at layer \(\ell\) of a neural network, and let \(\Delta\mathbf{h}_{\ell}(\mathbf{x})\in\mathbb{R}^{n_{\ell}}\) denote their change after a gradient step. We desire that:
\[\left|\mathbf{h}_{\ell}\right|_{2}=\Theta(\sqrt{n_{\ell}})\quad\text{and}\quad \left|\!\left|\Delta\mathbf{h}_{\ell}\right|\!\right|_{2}=\Theta(\sqrt{n_{\ell}}), \quad\text{ at layers }\ell=1,...,L\!-\!1.\]
Let us unpack Desideratum 1. These conditions treat the \(\ell^{2}\)_-norms_ of the feature vectors \(\mathbf{h}_{\ell}(\mathbf{x})\) and \(\Delta\mathbf{h}_{\ell}(\mathbf{x})\), a framing which will prove convenient. Desideratum 1 amounts to asking that the "typical element size" of vectors \(\mathbf{h}_{\ell}(\mathbf{x})\) and \(\Delta\mathbf{h}_{\ell}(\mathbf{x})\) is \(\Theta(1)\) with respect to width \(n_{\ell}\) (we give a review of big-\(\Theta\) notation in Section 2). Enforcing that hidden features have \(\Theta(1)\) element size has long been a principle of deep learning parametrization, motivated by the fact that activation functions are designed to take order-one inputs and give order-one outputs (LeCun et al., 2002). Our second requirement stipulates that feature entries also undergo \(\Theta(1)\) updates during training. Note that any larger updates would blow up at large width, and any smaller updates would vanish at large width. We take Desideratum 1 as our definition of feature learning.1
Footnote 1: Our notion of feature learning might also be called “nontrivial feature evolution.” While other authors may prefer different notions of “feature learning”—for example, the learning of interpretable, visualizing functions at hidden nodes (Olah et al., 2017; Zeiler and Fergus, 2014)—nontrivial feature evolution in our sense is necessary for any other reasonable definition of the term.
Our main message is that feature learning in the sense of Desideratum 1 may be ensured by the following _spectral scaling condition_ on the weight matrices of a deep network and their gradient updates:
**Condition 1** (Spectral scaling).: Consider applying a gradient update \(\Delta\mathbf{W}_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) to the \(\ell\)th weight matrix \(\mathbf{W}_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\). The spectral norms of these matrices should satisfy:
\[\left|\!\left|\mathbf{W}_{\ell}\right|\!\right|_{*}=\Theta\left(\sqrt{\frac{n_{ \ell}}{n_{\ell-1}}}\right)\quad\text{and}\quad\left|\!\left|\Delta\mathbf{W}_{ \ell}\right|\!\right|_{*}=\Theta\left(\sqrt{\frac{n_{\ell}}{n_{\ell-1}}} \right),\quad\text{ at layers }\ell=1,...,L.\]
We review the spectral norm in Section 2. The spectral scaling condition has two components which will serve to enforce the respective components of Desideratum 1. The first component mandates that each _weight matrix_ has a spectral norm of a certain size, which will serve to enforce that the layer passes forward features of the correct size. The second component mandates that each _gradient update_ has a spectral norm of a certain size, which will ensure that subsequent features undergo a change of the correct size. We have implicitly assumed that the input has size \(\left|\!\left|\mathbf{x}\right|\!\right|_{2}=\Theta(\sqrt{n_{0}})\), which is standard for image data. Language models are an important counterexample, where embedding matrices take one-hot inputs and the \(\sqrt{n_{0}}\) in Condition 1 should be replaced by 1. Appendix E provides a unifying treatment of these cases.
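As a minimal sketch of enforcing Condition 1 directly, one may simply rescale a matrix (or an update) to the target spectral norm; practical parametrizations, discussed later in the paper, instead choose initialization scales and learning rates that imply this scaling. The dimensions below are illustrative.

```python
import numpy as np

def spectrally_scale(M):
    # Rescale M so that ||M||_* = sqrt(fan_out / fan_in), as in Condition 1.
    fan_out, fan_in = M.shape
    target = np.sqrt(fan_out / fan_in)
    return M * (target / np.linalg.norm(M, ord=2))   # ord=2: largest singular value

W = np.random.randn(512, 128)
print(np.linalg.norm(spectrally_scale(W), ord=2), np.sqrt(512 / 128))  # equal
```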
To get some quick intuition for the origin of Condition 1, observe that under the forward propagation \(\mathbf{h}_{\ell}(\mathbf{x})=\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})\), if the layer input \(\mathbf{h}_{\ell-1}(\mathbf{x})\) aligns with the top singular vector of the weight matrix \(\mathbf{W}_{\ell}\), then \(\left|\mathbf{h}_{\ell}(\mathbf{x})\right|_{2}=\left|\mathbf{W}_{\ell}\right|_{*}\cdot \left|\mathbf{h}_{\ell-1}(\mathbf{x})\right|_{2}\). The requirement that \(\left|\!\left|\mathbf{W}_{\ell}\right|\!\right|_{*}=\Theta(\sqrt{n_{\ell}/n_{ \ell-1}})\) then follows from
Desideratum 1. The scaling of \(\left|\!\left|\Delta\!\mathbf{W}_{\ell}\right|\!\right|_{*}\) can similarly be obtained by writing \(\Delta\mathbf{h}_{\ell}(\mathbf{x})=\Delta\!\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})+\ldots\) and applying the same argument. The key missing step is to justify that layer inputs actually do line up with the top singular subspaces of both weight matrices and weight updates. As the paper will show, gradient descent training actually induces this form of alignment.
The bulk of this paper is dedicated to thoroughly demonstrating that training in accordance with our spectral scaling condition satisfies Desideratum 1 in MLPs. As an accessible path to this conclusion, we begin in Section 3 with a simple model--a deep linear MLP trained for one step on one example--and then successively extend to multiple training steps, a nonlinear model, and multiple inputs. In the process, we give a scaling analysis of the dynamics of feature learning. We then explain how Condition 1 may be achieved in a standard deep learning setting and compare-and-contrast the resulting scaling prescription with others in the literature. In particular, we recover the recent "maximal-update parametrization" (\(\mu\)P) (Yang and Hu, 2021).
### Summary of contributions
Concretely, our contributions are as follows:
* We propose the _spectral scaling condition_ (Condition 1) and show that it suffices to achieve feature learning in neural networks even at large width.
* We show how Condition 1 may be implemented: either via direct spectral normalization, or by layer-wise initialization scales \(\{\sigma_{\ell}\}\) and learning rates \(\{\eta_{\ell}\}\) that recover _maximal update parametrization_.2
* We show that other popular scaling rules, including so-called _standard parameterization_ and _neural tangent parametrization_, fail to satisfy Condition 1.

Footnote 2: In fact, Condition 1 is the _unique_ scaling on spectral norm that is equivalent to \(\mu\)P in its usual definition in terms of learning rate and initialization scaling (Theorem 1).
In the main text, we focus on MLPs trained via ordinary gradient descent for clarity. Our results may actually be extended to cover any architecture and any adaptive optimizer (for a suitable definition of _any_, c.f. Appendix B). Therefore our spectral scaling condition provides a unifying hyperparameter scaling rule that remains the same whether the underlying optimizer is, say, SGD or Adam. We suggest that, when one wishes to determine how the hyperparameters of a new deep learning system should scale with width, one might turn to the satisfaction of Condition 1 as an overarching principle.
## 2 Preliminaries
Here we review standard notations which we use in our scaling analysis.
**Scaling notation.** We will use the usual big-\(O\) notation and variants to make statements about how various quantities scale with network width. Intuitively speaking:
* \(f(n)=O(g(n))\) means that \(f(n)\) "scales no faster than" \(g(n)\),
* \(f(n)=\Theta(g(n))\) means that \(f(n)\) "scales like" or "is order" \(g(n)\),
* \(f(n)=\Omega(g(n))\) means that \(f(n)\) "scales at least as fast as" \(g(n)\).
Formally, \(f(n)=\Theta(g(n))\) is equivalent to the statement that there exist constants \(c,C>0\) such that \(c\cdot g(n)\leq f(n)\leq C\cdot g(n)\) for all sufficiently large \(n\). The weaker statements \(f(n)=O(g(n))\) and \(f(n)=\Omega(g(n))\) entail only the upper and lower bounds, respectively.
We will _only_ be concerned with scaling with respect to layer widths in this paper. Big-\(O\) notation will hide any dependence on other factors -- such as depth, dataset size, learning rate schedule, a global learning rate prefactor -- and our statements purely concern how quantities will or should scale with model width.
**Vector and matrix norms.** We will use the standard \(\ell^{2}\)-norm \(\left|\!\left|\cdot\right|\!\right|_{2}\) to assess the size of a vector. For matrices, we will principally use the _spectral norm_\(\left|\!\left|\cdot\right|\!\right|_{*}\) (a.k.a. _operator norm_) defined as follows:
**Definition 1** (Spectral norm).: The spectral norm of a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is given by
\[\left|\!\left|\mathbf{A}\right|\!\right|_{*}:=\max_{\mathbf{v}\in\mathbb{R}^{n}\setminus\{\mathbf{0}\}}\frac{\left|\!\left|\mathbf{A}\mathbf{v}\right|\!\right|_{2}}{\left|\!\left|\mathbf{v}\right|\!\right|_{2}}. \tag{1}\]
That is, the spectral norm is the largest factor by which a matrix can increase the norm of a vector on which it acts. The spectral norm of a matrix is equal to its largest singular value. We will sometimes contrast the spectral norm with the _Frobenius norm_\(\left|\!\left|\cdot\right|\!\right|_{F}\) given by \(\left|\!\left|\mathbf{A}\right|\!\right|_{F}^{2}=\sum_{ij}A_{ij}^{2}\).
**Properties of the spectral norm.** Let \(\mathbf{A},\mathbf{B}\in\mathbb{R}^{m\times n}\) be arbitrary matrices and \(\mathbf{v}\in\mathbb{R}^{n}\) be an arbitrary vector. As with all norms, the spectral norm is _subadditive_, meaning that it obeys the _triangle inequality_\(\left|\!\left|\mathbf{A}+\mathbf{B}\right|\!\right|_{*}\leq\left|\!\left|\mathbf{A} \right|\!\right|_{*}+\left|\!\left|\mathbf{B}\right|\!\right|_{*}\). The spectral norm is also _submultiplicative_ in the sense that \(\left|\!\left|\mathbf{A}\mathbf{v}\right|\!\right|_{*}\leq\left|\!\left|\mathbf{A}\right|\! \right|_{*}\cdot\left|\!\left|\mathbf{v}\right|\!\right|_{2}\) and \(\left|\!\left|\mathbf{A}\mathbf{B}\right|\!\right|_{*}\leq\left|\!\left|\mathbf{A}\right|\! \right|_{*}\cdot\left|\!\left|\mathbf{B}\right|\!\right|_{*}\). If we interpret a vector \(\mathbf{v}\in\mathbb{R}^{n}\) as a \(1\times n\) matrix, then the \(\ell^{2}\), spectral and Frobenius norms are equivalent: \(\left|\!\left|\mathbf{v}\right|\!\right|_{2}=\left|\!\left|\mathbf{v}\right|\!\right|_ {*}=\left|\!\left|\mathbf{v}\right|\!\right|_{F}\).
**Special cases of the spectral norm.** For a _rank-one_ matrix \(\mathbf{A}\), which can be written as an outer-product \(\mathbf{A}=\mathbf{u}\mathbf{v}^{\top}\), it holds that \(\left|\!\left|\mathbf{A}\right|\!\right|_{*}=\left|\!\left|\mathbf{A}\right|\!\right|_ {F}=\left|\!\left|\mathbf{u}\right|\!\right|_{2}\cdot\left|\!\left|\mathbf{v}\right|\! \right|_{2}\). A matrix \(\mathbf{B}\in\mathbb{R}^{m\times n}\) is _semi-orthogonal_ if either \(\mathbf{B}^{\top}\mathbf{B}=\mathbf{I}_{n}\) or \(\mathbf{B}\mathbf{B}^{\top}=\mathbf{I}_{m}\). A semi-orthogonal matrix has unit spectral norm: \(\left|\!\left|\mathbf{B}\right|\!\right|_{*}=1\).
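Since the spectral norm is the largest singular value, it can be computed by a full SVD or, more cheaply, by power iteration on \(\mathbf{A}^{\top}\mathbf{A}\). A minimal sketch comparing the two:

```python
import numpy as np

def spectral_norm(A, iters=200):
    # Power iteration on A^T A converges to the top right singular vector.
    v = np.random.randn(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(A @ v)

A = np.random.randn(300, 100)
print(spectral_norm(A), np.linalg.svd(A, compute_uv=False)[0])  # should agree
```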
## 3 The spectral scaling condition induces feature learning
In this section, we show that our spectral scaling condition (Condition 1) achieves feature evolution of the correct scale in multilayer perceptrons (MLPs). We begin with a toy example which conveys key intuitions and then give a series of extensions which recover a much more general case.
### Warmup: deep linear MLP, one step of SGD, on a single example
We begin with a simple model: a deep linear MLP trained for one step on a single input. While elementary, this example will be sufficient to capture the intuition for a much more general case.
**Model definition.** We will study an MLP composed of \(L\) successive linear transformations. We consider a single input \(\mathbf{x}\in\mathbb{R}^{n_{0}}\) with norm \(\left|\!\left|\mathbf{x}\right|\!\right|_{2}=\Theta(\sqrt{n_{0}})\). The first hidden representation is \(\mathbf{h}_{1}(\mathbf{x})=\mathbf{W}_{1}\mathbf{x}\), with the rest of the network following recursively as:
\[\mathbf{h}_{\ell}(\mathbf{x})=\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})\qquad\text{for }\ell=2, \dots,L. \tag{2}\]
We let \(\mathbf{h}_{L}(\mathbf{x})\in\mathbb{R}^{n_{L}}\) be the network output. We will keep the input dimension \(n_{0}\) and output dimension \(n_{L}\) fixed and consider scaling with respect to the hidden dimensions \(n_{1},\dots,n_{L\!-\!1}\).
We let the global loss be \(\mathcal{L}=g(\mathbf{h}_{L}(\mathbf{x}),\mathbf{y})\) where \(g\) and \(\mathbf{y}\) are a loss function and target vector, respectively. During training, we will take gradient steps at each layer as \(\Delta\mathbf{W}_{\ell}=-\eta_{\ell}\cdot\nabla_{\mathbf{W}_{\ell}}\mathcal{L}\), where \(\eta_{\ell}\) is a layerwise learning rate. We will ultimately solve for the scale of \(\eta_{\ell}\), but for now we will be content to discuss the perturbation \(\Delta\mathbf{W}_{\ell}\) directly. By Equation (2), hidden vector updates at subsequent layers are related by:
\[\mathbf{h}_{\ell}(\mathbf{x})+\Delta\mathbf{h}_{\ell}(\mathbf{x})=(\mathbf{W}_{\ell}+\Delta\mathbf{W}_ {\ell})(\mathbf{h}_{\ell-1}(\mathbf{x})+\Delta\mathbf{h}_{\ell-1}(\mathbf{x})). \tag{3}\]
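Concretely, here is a minimal NumPy sketch of this warmup model. The widths are arbitrary choices of ours, picked so that fan-out is at least fan-in at every layer (a condition that will matter shortly), and we initialize with semi-orthogonal matrices rescaled so that Condition 1's spectral norms hold exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
widths = [64, 256, 512, 1024]                 # n_0, ..., n_L (arbitrary, fan-out >= fan-in)

def init_weight(n_out, n_in, rng):
    # Semi-orthogonal matrix rescaled to spectral norm sqrt(n_out / n_in),
    # matching Condition 1 for the initial weights.
    Q, _ = np.linalg.qr(rng.standard_normal((n_out, n_in)))
    return np.sqrt(n_out / n_in) * Q

Ws = [init_weight(widths[l + 1], widths[l], rng) for l in range(len(widths) - 1)]

x = rng.standard_normal(widths[0])
x *= np.sqrt(widths[0]) / np.linalg.norm(x)   # enforce ||x||_2 = sqrt(n_0)

h = x
for l, W in enumerate(Ws, start=1):           # forward recursion, Equation (2)
    h = W @ h
    print(l, np.linalg.norm(h) / np.sqrt(widths[l]))   # stays Theta(1), per Desideratum 1
```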
**Hidden vector sizes.** To reiterate Desideratum 1, we wish for features at the \(\ell\)th layer to have a norm which scales as \(\left\|\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}=\Theta(\sqrt{n_{\ell}})\). Upon a gradient update, this feature vector should undergo an update of size \(\left\|\Delta\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}=\Theta(\sqrt{n_{\ell}})\). We will show that Condition 1 is sufficient to achieve these aims at all layers.
**Plan of attack.** For simplicity, we will first focus on the _first step of gradient descent_ after random initialization. We will argue recursively in depth, showing that if the features at layer \(\ell-1\) and their updates satisfy Desideratum 1, then so will those at layer \(\ell\). In order to verify the desired scalings, we will show upper and lower scaling bounds separately: we will first show that the features and their updates are not larger than asked by Desideratum 1, and then show that they are in fact also not smaller than asked by Desideratum 1.
**Hidden vector updates.** By the subadditivity and submultiplicativity of the spectral norm, Equations (2) and (3) imply that:
\[\left\|\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}\leq\left\|\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}=\Theta(\sqrt{n_{\ell}}); \tag{4}\]
\[\left\|\Delta\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}\leq\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}+\left\|\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\Delta\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}+\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\Delta\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}=\Theta(\sqrt{n_{\ell}}), \tag{5}\]
where on the right-hand sides of these inequalities we have inserted Desideratum 1 and Condition 1. The spectral scaling condition thus gives features and feature updates obeying the correct _upper_ bounds, and we need merely show comparable lower bounds.
**Tightness of bounds via matrix-vector alignment.** The upper bound in the submultiplicativity property can be very loose--in particular, this is the case when the vector only interacts with the small singular values in the matrix. We will now show that this is not the case in deep network training, and that these upper bounds provide a fairly accurate description of the way things scale. We make two observations regarding random weight matrices and gradient updates:
**Claim 1** (Alignment of initial weight matrices).: _Fix a feature vector \(\mathbf{h}_{\ell-1}(\mathbf{x})\in\mathbb{R}^{n_{\ell-1}}\). Assume \(\mathbf{W}_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) is sampled using a common weight initialization strategy (e.g., Gaussian or semi-orthogonal init). Provided that fan-out is no less than fan-in (\(n_{\ell}\geq n_{\ell-1}\)), then with high probability:_

\[\left\|\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}=\Theta(\left\|\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}).\]
**Claim 2** (Alignment of updates).: _For an update \(\Delta\mathbf{W}_{\ell}\) given by gradient descent with batch size 1,_
\[\left\|\Delta\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}=\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}.\]
In words, Claim 1 states that random hidden weight matrices scale incoming vectors by factors commensurate to their spectral norms, so long as their fan-out is not smaller than their fan-in, which we will assume is the case for all but the final layer of the network. Claim 2 states the same of weight updates, but the proportionality constant is precisely one and requires no condition on dimensionality.3
Footnote 3: Note that \(\mathbf{x}\) in Claim 2 is the same input that induced the gradient \(\Delta\mathbf{W}_{\ell}\) in the previous step. Claim 2 is generally not an equality if this is not true.
We now justify these claims in turn. For Claim 1, first suppose that \(\mathbf{W}_{\ell}\) is a random semi-orthogonal matrix, as is a popular initialization strategy. Then all singular values of \(\mathbf{W}_{\ell}\) are one and, since fan-out exceeds fan-in, the null-space of \(\mathbf{W}_{\ell}\) is empty. Taken together, these observations imply the equality \(\left\|\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}=\left\|\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}\). If the elements of \(\mathbf{W}_{\ell}\) are instead sampled i.i.d. from a centered Gaussian distribution with standard deviation \(\sigma_{\ell}\), then the situation is similar. It is easily shown by the law of large numbers that \(\left\|\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}=\Theta\big(\sigma_{\ell}\sqrt{n_{\ell}}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}\big)\), and it is a standard result in random matrix theory that \(\left\|\mathbf{W}_{\ell}\right\|_{*}=\Theta\big(\sigma_{\ell}(\sqrt{n_{\ell}}+\sqrt{n_{\ell-1}})\big)=\Theta\big(\sigma_{\ell}\sqrt{n_{\ell}}\big)\) when \(n_{\ell}\geq n_{\ell-1}\). Claim 1 for Gaussian initialization follows by combining these results.
Claim 2 is easily verified as follows. Observe that we can write the update at layer \(\ell\) as the outer-product:
\[\Delta\mathbf{W}_{\ell}=-\eta_{\ell}\cdot\nabla_{\mathbf{h}_{\ell}(\mathbf{x})}\mathcal{ L}\cdot\mathbf{h}_{\ell-1}(\mathbf{x})^{\top}. \tag{6}\]
So the update \(\Delta\mathbf{W}_{\ell}\) is rank-one with right singular vector \(\mathbf{h}_{\ell-1}(\mathbf{x})\). To verify the claim, observe that:
\[\left\|\Delta\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}=\eta_{\ell}\cdot\left\|\nabla_{\mathbf{h}_{\ell}(\mathbf{x})}\mathcal{L}\right\|_{2}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}^{2}=\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}, \tag{7}\]

where the second equality applies the rank-one norm identity \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}=\eta_{\ell}\left\|\nabla_{\mathbf{h}_{\ell}(\mathbf{x})}\mathcal{L}\right\|_{2}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}\) stated above.
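Equations (6) and (7) are easy to check numerically: with an arbitrary stand-in for the backpropagated gradient \(-\eta_{\ell}\nabla_{\mathbf{h}_{\ell}(\mathbf{x})}\mathcal{L}\), the single-example update is exactly rank-one and perfectly aligned with the incoming vector. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n_prev, n = 256, 512
h_prev = rng.standard_normal(n_prev)   # incoming feature vector h_{l-1}(x)
g = rng.standard_normal(n)             # stand-in for -eta * grad of L w.r.t. h_l(x)

dW = np.outer(g, h_prev)               # Equation (6): a rank-one update
assert np.linalg.matrix_rank(dW) == 1

# Equation (7): the update is perfectly aligned with the incoming vector.
lhs = np.linalg.norm(dW @ h_prev)
rhs = np.linalg.norm(dW, 2) * np.linalg.norm(h_prev)
assert np.isclose(lhs, rhs)
```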
With the claims established, we can now get lower bounds on hidden vector size which serve to verify Desideratum 1. The features at initialization scale correctly as:
\[\left\|\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}=\Theta\left(\left\|\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2}\right)=\Theta(\sqrt{n_{\ell}}), \tag{8}\]
where we have first used Claim 1 and then inserted Condition 1. To bound the size of \(\Delta\mathbf{h}_{\ell}(\mathbf{x})\), let us first observe from Equation (3) that
\[\Delta\mathbf{h}_{\ell}(\mathbf{x})=\Delta\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})+\mathbf{W} _{\ell}\Delta\mathbf{h}_{\ell-1}(\mathbf{x})+\Delta\mathbf{W}_{\ell}\Delta\mathbf{h}_{\ell-1} (\mathbf{x}). \tag{9}\]
So long as the first term \(\Delta\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x})\) does not perfectly cancel with the latter two, we have that:
\[\left\|\Delta\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}=\Omega(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x})\right\|_{2})=\Omega(\sqrt{n_{\ell}}), \tag{10}\]
where in the last step we have inserted Condition 1. Combining Equation (10) with our matching upper bound from Equation (5), we conclude that \(\left\|\Delta\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}=\Theta(\sqrt{n_{\ell}})\), as desired.
We have achieved both clauses of Desideratum 1 at layer \(\ell\), completing a recursive step from layer \(\ell-1\). The norm of the input \(\mathbf{x}\) is by assumption the correct size to serve as a base case, and thus we recursively have the correct feature scaling at all layers.
#### 3.1.1 Key intuitions
We now pause to discuss key intuitions from the above argument which will carry through to the general case.
**Weight updates are low-rank and aligned.** An important observation is that weight updates are highly structured: they have low rank and align to incoming vectors. This motivates the spectral norm (which is the degree by which a matrix scales a "perfectly aligned" vector) as the correct measure of size.
**Spectral variables enable simpler scaling analysis.** Most prior work on hyperparameter scaling (Yaida, 2022; Yang and Hu, 2021b) discusses layerwise initialization scales \(\{\sigma_{\ell}\}\) and learning rates \(\{\eta_{\ell}\}\) as the primary variables--although there are exceptions (Bernstein et al., 2020). By contrast, we work directly in terms of quantities that these hyperparameters regulate: the spectral norms of \(\mathbf{W}_{\ell}\) and \(\Delta\mathbf{W}_{\ell}\). This enables us to determine the sizes of hidden vectors and their updates quite easily, whereas the same calculation in terms of \(\sigma_{\ell}\) and \(\eta_{\ell}\) is more involved. Section 4 shows how to recover \(\sigma_{\ell}\) and \(\eta_{\ell}\) from our spectral scaling condition.
### 3.2 Extensions: additional gradient steps, nonlinearities, and multiple examples
We now extend our warmup example to successively more complex settings, ultimately recovering the general case. As we add back complexity, our spectral scaling condition will remain sufficient to achieve feature evolution of the proper size, and key intuitions from our warmup will continue to hold up to minor modifications. Each extension requires making a natural assumption. We empirically verify these assumptions for a deep MLP in Appendix C.
#### 3.2.1 Additional gradient steps
Our warmup argument relied on two properties of \(\mathbf{W}_{\ell}\) at initialization: its spectral norm being the correct size (Condition 1) and it passing forward features \(\mathbf{h}_{\ell}(\mathbf{x})\) of the correct size (Claim 1). Fortunately, it is easily seen that these two properties remain true of \(\mathbf{W}_{\ell}\) after a gradient step, and we can therefore treat the second (and later) steps exactly as we did the first. This follows quickly given the following assumption.
**Assumption 1**.: Updates do not perfectly cancel initial quantities. That is:
\[\left\|\mathbf{W}_{\ell}+\Delta\mathbf{W}_{\ell}\right\|_{*}=\Theta\left(\left\|\mathbf{W}_{\ell}\right\|_{*}+\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\right) \tag{11}\]
\[\left\|\mathbf{h}_{\ell}(\mathbf{x})+\Delta\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}=\Theta(\left\|\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}+\left\|\Delta\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}). \tag{12}\]
The sort of perfect cancellation required to violate this assumption will be rare in practice (and adding a small amount of randomness to the learning rate \(\eta_{\ell}\) will fix any occurrence with high probability). It follows that \(\left\|\mathbf{W}_{\ell}+\Delta\mathbf{W}_{\ell}\right\|_{*}=\Theta(\sqrt{n_{\ell}/n_{\ell-1}})\) and \(\left\|\mathbf{h}_{\ell}(\mathbf{x})+\Delta\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}=\Theta(\sqrt{n_{\ell}}).\) With these facts in place, the same argument we used in Section 3.1 for the first step ensures that Desideratum 1 also holds at later steps.
#### 3.2.2 Nonlinearities
We now add a nonlinearity \(\phi\) to each layer of our MLP. The modified forward recursion relation is:
\[\mathbf{h}_{\ell}(\mathbf{x})=\mathbf{W}_{\ell}\mathbf{h}^{\prime}_{\ell-1}(\mathbf{x}),\qquad\bm {h}^{\prime}_{\ell}(\mathbf{x})=\phi(\mathbf{h}_{\ell}(\mathbf{x}));\qquad\text{for }\ell=2, \ldots,L\!-\!1, \tag{13}\]
with base case \(\mathbf{h}_{1}(\mathbf{x})=\mathbf{W}_{1}\mathbf{x}\) and output \(\mathbf{h}_{L}(\mathbf{x})=\mathbf{W}_{L}\mathbf{h}^{\prime}_{L-1}(\mathbf{x})\). We assume that the hidden features before and after the application of the nonlinearity are of the same scale:
**Assumption 2**.: \(\left\|\mathbf{h}^{\prime}_{\ell}(\mathbf{x})\right\|_{2}=\Theta\left(\left\|\mathbf{h}_{\ell}(\mathbf{x})\right\|_{2}\right).\)
This is the expected behavior for most activation functions (which are designed to take in order-one inputs and return outputs which neither explode nor uniformly vanish) and seems like a reasonable assumption. As in the linear case, \(\Delta\mathbf{W}_{\ell}\) will be rank-one and align to incoming signal as:
\[\left\|\Delta\mathbf{W}_{\ell}\mathbf{h}^{\prime}_{\ell-1}(\mathbf{x})\right\|_{2}=\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}^{\prime}_{\ell-1}(\mathbf{x})\right\|_{2} \tag{14}\]
at each step. All our scaling arguments from the linear case therefore carry through: the term \(\Delta\mathbf{W}_{\ell}\mathbf{h}^{\prime}_{\ell-1}(\mathbf{x})\) is sufficient to induce a change \(\Delta\mathbf{h}_{\ell}(\mathbf{x})\) satisfying Desideratum 1. (The other terms which depend on \(\Delta\mathbf{h}^{\prime}_{\ell-1}(\mathbf{x})\) will be no larger.) Condition 1 therefore still achieves correctly-scaled feature evolution.
#### 3.2.3 Batch size greater than one
When training on a minibatch \(\mathcal{D}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{B}\) with size \(B>1\), each gradient step is simply an average of the steps on each example:
\[\Delta\mathbf{W}_{\ell}=\frac{1}{B}\sum_{i=1}^{B}\Delta\mathbf{W}_{\ell}^{(i)}, \tag{15}\]
where the sum runs over the minibatch index and \(\Delta\mathbf{W}_{\ell}^{(i)}\) denotes the update that would result from a step on the single example \((\mathbf{x}_{i},\mathbf{y}_{i})\). While \(\Delta\mathbf{W}_{\ell}\) is generally no longer rank one and cannot perfectly align to all \(B\) incoming vectors, Equation (15) makes it clear that at least one summand will align to each \(\mathbf{h}_{\ell-1}(\mathbf{x}_{i})\) from the batch. We first assume that this term is not perfectly cancelled by the others:
**Assumption 3**.: \(\left\|\Delta\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x}_{i})\right\|_{2}=\Theta\big(\big\|\frac{1}{B}\Delta\mathbf{W}_{\ell}^{(i)}\mathbf{h}_{\ell-1}(\mathbf{x}_{i})\big\|_{2}\big).\)
We additionally make the assumption that the batch size is fixed and independent of width:
**Assumption 4**.: The batch size is width-independent: \(B=\Theta(1)\).
Combining these assumptions, we find that \(\left\|\Delta\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}(\mathbf{x}_{i})\right\|_{2}=\Theta\left(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\cdot\left\|\mathbf{h}_{\ell-1}(\mathbf{x}_{i})\right\|_{2}\right)\): the batch update is aligned with incoming signal in a scaling sense. Our previous scaling arguments in fact needed only alignment in this "big-\(\Theta\)" sense (as opposed to the perfect rank-one sense of Claim 2), and thus they still carry through: our spectral scaling condition continues to suffice to achieve proper feature learning as per Desideratum 1.
**Empirical observation: low-rank structure remains at large batch size.** Surprisingly, we observe numerically that MLP updates remain low (effective) rank and aligned with incoming vectors _even at large batch size_ \(B\). This is demonstrated in Figure 1.
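The two quantities reported in Figure 1 are straightforward to measure. The sketch below computes the stable rank and the per-example alignment of a batch-averaged gradient step, using a toy two-layer linear network with squared loss and synthetic data as a stand-in for the paper's CIFAR-10 MLPs, so the printed values are merely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n0, n1, n2, B = 100, 300, 300, 1024
W1 = rng.standard_normal((n1, n0)) / np.sqrt(n0)
W2 = rng.standard_normal((n2, n1)) / np.sqrt(n1)
X = rng.standard_normal((B, n0))              # synthetic inputs, one per row
Y = rng.standard_normal((B, n2))              # synthetic targets

H1 = X @ W1.T                                 # (B, n1) incoming hidden vectors
E = H1 @ W2.T - Y                             # residuals = grad of 1/2 ||h2 - y||^2
dW2 = -(E.T @ H1) / B                         # batch-averaged gradient step (eta = 1)

stable_rank = np.linalg.norm(dW2, 'fro') ** 2 / np.linalg.norm(dW2, 2) ** 2
align = np.linalg.norm(H1 @ dW2.T, axis=1) / (
    np.linalg.norm(dW2, 2) * np.linalg.norm(H1, axis=1))
print(f"stable rank = {stable_rank:.1f}, mean alignment = {align.mean():.2f}")
```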
Figure 1: **Gradient updates have low effective rank and high alignment with incoming hidden vectors even at large batch size in MLPs on CIFAR-10.** We randomly initialize MLPs with depth \(L=3\), hidden widths \(n_{1}=n_{2}=300\), and ReLU and tanh activation functions. We then compute gradient updates \(\Delta\mathbf{W}_{\ell}\) for layer \(\ell=2\) on randomly-sampled size-\(B\) subsets of CIFAR-10. **Left.** As a measure of effective rank, we report the _stable rank_ \(\text{rank}(\Delta\mathbf{W}_{\ell}):=\left\|\Delta\mathbf{W}_{\ell}\right\|_{F}^{2}/\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}^{2}\) of gradient updates. The stable rank remains less than 10 even when \(B\) is large, which is much less than its maximal possible value of \(\min(n_{1},n_{2})=300\). **Right.** We report the average alignment \(\left\|\Delta\mathbf{W}_{\ell}\mathbf{h}_{\ell-1}^{\prime}(\mathbf{x})\right\|_{2}/\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\left\|\mathbf{h}_{\ell-1}^{\prime}(\mathbf{x})\right\|_{2}\) of the weight update \(\Delta\mathbf{W}_{\ell}\) to the incoming vector \(\mathbf{h}_{\ell-1}^{\prime}(\mathbf{x})\), averaged over \(\mathbf{x}\) from the batch. Observe that alignment does not decay substantially with large batch size. Dashed lines on both axes in both subplots show the network width. Shaded regions denote one standard deviation of variation over random initializations and batches. Curves look similar after training. See Appendix A for experimental details.

### 3.3 Adam and other adaptive optimizers

Adam (like most adaptive optimizers in deep learning) processes gradients into updates via an entrywise function. For example, the momentum-less version of Adam is SignSGD (Bernstein et al., 2018), which just applies the sign function to each gradient entry. Via less elementary arguments detailed in Appendix B, all the above discussion holds for \(\Delta\mathbf{W}_{\ell}\) calculated from these optimizers. In a gist, the main ingredients are 1) the nontrivial insight from _Tensor Programs_ that gradients look like outer products of i.i.d. vectors, and 2) the fact that entrywise processing preserves the Frobenius norm of such a matrix up to multiplicative constants when width is large.
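Ingredient 2) can already be glimpsed in the single-example case: applying the sign function entrywise to an outer product yields another outer product, so SignSGD's processing preserves the rank-one, aligned structure while pinning the Frobenius norm at the trivial \(\sqrt{nm}\) scale. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = rng.standard_normal(400), rng.standard_normal(300)
G = np.outer(u, v)                       # single-example gradient: an outer product

S = np.sign(G)                           # SignSGD's entrywise processing
assert np.linalg.matrix_rank(S) == 1     # sign(u v^T) = sign(u) sign(v)^T stays rank-one

# Entries are +/-1, so the Frobenius norm is exactly sqrt(n m).
assert np.isclose(np.linalg.norm(S, 'fro'), np.sqrt(S.size))
```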
### 3.4 Uniqueness of spectral scaling condition
Technically, Desideratum 1 can be satisfied by scalings other than Condition 1. For example, if one implements Condition 1 at the first layer but sets \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}=0\) for layers \(\ell=2,\ldots,L\), then Desideratum 1 still holds. However, Condition 1 is the unique _maximal_ scaling: if any of \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\) or \(\left\|\mathbf{W}_{\ell}\right\|_{*}\) exceeds Condition 1, then training will blow up as width is increased. See Appendix B for a proof.
## 4 Efficient implementation of the spectral scaling condition
We have thus far proposed the _spectral scaling condition_ (Condition 1) and argued in Section 3 that this condition induces feature learning. We now explain how this condition can be implemented in practice. Several implementation strategies are possible because relations exist between different notions of matrix norm (Section 3.4 provides a summary). In this section, we first discuss an implementation via direct spectral normalization, and ultimately recover a more standard setup in which each layer's weight matrix is controlled by an initialization scale \(\sigma_{\ell}\) and a (SGD) learning rate \(\eta_{\ell}\) as follows:
**Parametrization 1** (Spectral parametrization).: _We claim that the spectral scaling condition (Condition 1) is satisfied and feature learning is achieved (as per Desideratum 1) if the initialization scale and learning rate of each layer \(\ell\) are chosen according to:_
\[\sigma_{\ell}=\Theta\left(\frac{1}{\sqrt{n_{\ell-1}}}\min\left\{1,\sqrt{\frac {n_{\ell}}{n_{\ell-1}}}\right\}\right);\qquad\qquad\eta_{\ell}=\Theta\left( \frac{n_{\ell}}{n_{\ell-1}}\right).\]
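In code, Parametrization 1 is a two-line recipe per layer. A minimal sketch (our own helper; the \(\Theta(1)\) prefactors are set to one and the example shapes are arbitrary):

```python
import numpy as np

def spectral_parametrization(n_in, n_out):
    """Parametrization 1 for one layer, with the Theta(1) prefactors set to 1."""
    sigma = min(1.0, np.sqrt(n_out / n_in)) / np.sqrt(n_in)   # init standard deviation
    eta = n_out / n_in                                        # layerwise SGD learning rate
    return sigma, eta

# A square hidden layer, a fan-out-heavy input layer, and a narrow readout:
for n_in, n_out in [(1024, 1024), (32, 1024), (1024, 10)]:
    print((n_in, n_out), spectral_parametrization(n_in, n_out))
```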
**Naive method: direct spectral normalization.** The most naive way to impose Condition 1 is to directly normalize the relevant quantities by their spectral norm. For instance, to initialize the weight matrix \(\mathbf{W}_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) at the \(\ell\)th layer to have spectral norm \(\sqrt{n_{\ell}/n_{\ell-1}}\), one could sample a temporary matrix \(\mathbf{W}_{\ell}^{\prime}\) using any standard initializer and then re-normalize according to:
\[\mathbf{W}_{\ell}=\sigma\sqrt{\frac{n_{\ell}}{n_{\ell-1}}}\times\frac{\mathbf{W}_{\ell}^{\prime}}{\left\|\mathbf{W}_{\ell}^{\prime}\right\|_{*}}, \tag{16}\]
where \(\sigma=\Theta(1)\) is a width-independent prefactor. Similarly, to ensure that the gradient step \(\Delta\mathbf{W}_{\ell}\) also has a spectral norm of the proper size, one might spectrally normalize the gradient according to:
\[\Delta\mathbf{W}_{\ell}=-\eta\sqrt{\frac{n_{\ell}}{n_{\ell-1}}}\times\frac{\nabla _{\mathbf{W}_{\ell}}\mathcal{L}}{\left\|\nabla_{\mathbf{W}_{\ell}}\mathcal{L}\right\|_ {*}}, \tag{17}\]
where \(\eta=\Theta(1)\) is a width-independent prefactor.
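As a concrete rendering of Equations (16) and (17), the sketch below (function names are our own) normalizes by spectral norms computed with a full SVD via `np.linalg.norm(..., 2)`; this cost is exactly what one would want to avoid in practice, for instance by substituting a few steps of power iteration:

```python
import numpy as np

def init_spectral(n_out, n_in, sigma=1.0, seed=0):
    """Equation (16): rescale a Gaussian init to spectral norm sigma * sqrt(n_out/n_in)."""
    W = np.random.default_rng(seed).standard_normal((n_out, n_in))
    return sigma * np.sqrt(n_out / n_in) * W / np.linalg.norm(W, 2)

def spectral_step(W, grad, eta=0.1):
    """Equation (17): a gradient step renormalized to spectral norm eta * sqrt(n_out/n_in)."""
    n_out, n_in = W.shape
    return W - eta * np.sqrt(n_out / n_in) * grad / np.linalg.norm(grad, 2)
```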
It is quick to check that normalizing as in Equations (16) and (17) will satisfy Condition 1, but computing spectral norms is expensive and (as we will now show) can be avoided entirely by working out how \(\left\|\mathbf{W}_{\ell}^{\prime}\right\|_{*}\) and \(\left\|\nabla_{\mathbf{W}_{\ell}}\mathcal{L}\right\|_{*}\) will scale and dividing by the appropriate factor.
**Random initialization.** Let us suppose that, as is common practice, \(\mathbf{W}_{\ell}\) is initialized as \(\mathbf{W}_{\ell}=\sigma_{\ell}\cdot\mathbf{W}_{\ell}^{\prime}\), where all elements of \(\mathbf{W}_{\ell}^{\prime}\) are initialized i.i.d. from a normal distribution with mean zero and unit variance. The spectral norm of a matrix thus constructed is roughly \(\left\|\mathbf{W}_{\ell}\right\|_{*}\approx\sigma_{\ell}\cdot(\sqrt{n_{\ell}}+ \sqrt{n_{\ell-1}})\)(Rudelson and Vershynin, 2010; Vershynin, 2018). To get the desired scaling \(\left\|\mathbf{W}_{\ell}\right\|_{*}=\Theta(\sqrt{n_{\ell}/n_{\ell-1}})\), we need merely choose \(\sigma_{\ell}=\Theta(\sqrt{n_{\ell}/n_{\ell-1}}\cdot(\sqrt{n_{\ell}}+\sqrt{n _{\ell-1}})^{-1})\). Simplifying within the \(\Theta(\cdot)\), we arrive at \(\sigma_{\ell}\) scaled as in the spectral parametrization (Parametrization 1). Initializing weights with a prefactor \(\sigma_{\ell}\) scaling in this manner achieves the correct spectral norm of \(\mathbf{W}_{\ell}\). We note that the constant factor suppressed by the \(\Theta(\cdot)\) here will usually be small--for example, a prefactor of \(\sqrt{2}\) agrees with typical practice for ReLU networks at most layers. If \(\mathbf{W}_{\ell}^{\prime}\) is instead a random semi-orthogonal matrix, then we can simply use a prefactor \(\sigma_{\ell}=\Theta(\sqrt{n_{\ell}/n_{\ell-1}})\).
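This random-matrix prediction for \(\left\|\mathbf{W}_{\ell}\right\|_{*}\) is easy to check empirically; a minimal sketch with arbitrary shapes:

```python
import numpy as np

rng = np.random.default_rng(4)
for n_out, n_in in [(256, 64), (1024, 256), (4096, 1024)]:
    W = rng.standard_normal((n_out, n_in))            # sigma_l = 1
    ratio = np.linalg.norm(W, 2) / (np.sqrt(n_out) + np.sqrt(n_in))
    print(n_out, n_in, round(ratio, 3))               # tends to 1 as widths grow
```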
**Gradient updates.** Here we give two methods for obtaining weight updates \(\Delta\mathbf{W}_{\ell}\) with the correct spectral norm. The first method is to note that, for a matrix with low stable rank, the Frobenius norm scales like the spectral norm (and is cheap to compute), so we may simply approximate \(\left\|\nabla_{\mathbf{W}_{\ell}}\mathcal{L}\right\|_{*}\approx\left\|\nabla_{ \mathbf{W}_{\ell}}\mathcal{L}\right\|_{F}\) and use Equation (17) directly. This approach is useful if one wants to avoid worrying about width-scaling pre-factors entirely. The second method--on which we will spend more time--is to make standard updates
\[\Delta\mathbf{W}_{\ell}=-\eta_{\ell}\nabla_{\mathbf{W}_{\ell}}\mathcal{L} \tag{18}\]
and find a scaling of \(\eta_{\ell}\) such that \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}=\Theta(\sqrt{n_{\ell}/n_{\ell-1}})\).
The main challenge lies in finding the scaling of the gradient \(\left\|\nabla_{\mathbf{W}_{\ell}}\mathcal{L}\right\|_{*}\). Note that we expect each gradient update \(\Delta\mathbf{W}_{\ell}\) to induce a change \(\left\|\Delta\mathbf{h}_{L}(\mathbf{x})\right\|_{2}=\Theta(\sqrt{n_{L}})\) in the output which induces a change \(\Delta\mathcal{L}=\Theta(1)\) for common loss functions \(\mathcal{L}\). Taylor expanding the loss to first order, we also expect that
\[\Delta\mathcal{L}=\Theta(\langle\Delta\mathbf{W}_{\ell},\nabla_{\mathbf{W}_{\ell}} \mathcal{L}\rangle)=\Theta(\left\|\Delta\mathbf{W}_{\ell}\right\|_{F}\cdot\left\| \nabla_{\mathbf{W}_{\ell}}\mathcal{L}\right\|_{F})=\Theta\left(\left\|\Delta\mathbf{W }_{\ell}\right\|_{*}\cdot\left\|\nabla_{\mathbf{W}_{\ell}}\mathcal{L}\right\|_{*} \right), \tag{19}\]
where \(\langle\cdot,\cdot\rangle\) denotes the trace inner product and we have used the facts that the two arguments of the inner product are (a) proportional to each other and (b) low-rank. Inserting \(\Delta\mathcal{L}=\Theta(1)\) and the spectral scaling condition \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}=\Theta(\sqrt{n_{\ell}/n_{\ell-1}})\), we can conclude that
\[\left\|\nabla_{\mathbf{W}_{\ell}}\mathcal{L}\right\|_{*}=\Theta(\sqrt{n_{\ell-1}/n_ {\ell}}). \tag{20}\]
This result may also be reached via direct layerwise-recursive analysis of the size of the gradient. Returning to Equation (18), we now see that we achieve a properly-scaled update \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}=\Theta(\sqrt{n_{\ell}/n_{\ell-1}})\) if we take \(\eta_{\ell}=\Theta\left(n_{\ell}/n_{\ell-1}\right)\) as prescribed by Parametrization 1.
In summary: training with layerwise initialization \(\sigma_{\ell}\) and learning rate \(\eta_{\ell}\) scaled as in our Parametrization 1 will implement the spectral scaling condition and give features and feature evolution of the correct size.
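To tie this section together, the sketch below wires Parametrization 1 into a plain per-layer SGD loop for a small ReLU MLP. The widths, the 0.1 learning-rate prefactor, and the toy target are arbitrary choices of ours, not values from the experiments reported later:

```python
import numpy as np

rng = np.random.default_rng(5)
widths = [32, 1024, 1024, 1]                  # n_0, n_1, n_2, n_3 (hypothetical)

# Layerwise hyperparameters from Parametrization 1, with hand-tuned prefactors.
sigmas = [min(1.0, np.sqrt(widths[l + 1] / widths[l])) / np.sqrt(widths[l])
          for l in range(3)]
etas = [0.1 * widths[l + 1] / widths[l] for l in range(3)]
Ws = [s * rng.standard_normal((widths[l + 1], widths[l]))
      for l, s in enumerate(sigmas)]

x = rng.standard_normal(widths[0])
x *= np.sqrt(widths[0]) / np.linalg.norm(x)   # enforce ||x||_2 = sqrt(n_0)
y = np.ones(widths[-1])

for step in range(10):                        # SGD on the loss 1/2 ||h_L(x) - y||^2
    hs = [x]                                  # forward pass, ReLU on hidden layers
    for l, W in enumerate(Ws):
        z = W @ hs[-1]
        hs.append(np.maximum(z, 0.0) if l < len(Ws) - 1 else z)
    g = hs[-1] - y                            # gradient w.r.t. the network output
    for l in reversed(range(len(Ws))):        # backward pass with layerwise lr
        grad_W = np.outer(g, hs[l])           # dL/dW_l (rank-one, single example)
        g = Ws[l].T @ g
        if l > 0:
            g = g * (hs[l] > 0.0)             # ReLU mask (h_l > 0 iff preactivation > 0)
        Ws[l] -= etas[l] * grad_W
```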
## 5 Comparisons to existing parametrizations
We have shown that training in accordance with our _spectral scaling condition_ (Condition 1) at every layer suffices to achieve correctly-scaled feature evolution, and we have given width scalings for layerwise
hyperparameters \(\sigma_{\ell}\) and \(\eta_{\ell}\) that suffice to put this condition into action in a standard deep learning paradigm (Parametrization 1). Here we compare this "spectral parametrization" with popular parametrizations. We find that our parametrization recovers the "maximal update parametrization" (\(\mu\)P) at all layers and differs from other parametrizations.
### 5.1 Comparison with "maximal update parametrization"
Maximal update parametrization (\(\mu\)P) was recently proposed as a scaling rule that retains feature learning even at infinite width. \(\mu\)P as given in Table 3 of Yang et al. (2021) may be recovered from our Parametrization 1 by setting \(n_{0}=n_{L}=1\) and \(n_{1}=n_{2}=...=n_{L-1}\). Our Parametrization 1 actually streamlines and generalizes \(\mu\)P: we provide a unifying treatment for any rectangular matrix, rather than treating input, hidden and output layers separately. In other words, from a spectral point of view, no layer is special. Our parametrization also includes scaling with respect to the input dimension \(n_{0}\) and output dimension \(n_{L}\) (as opposed to neglecting them as agnostic \(\Theta(1)\) quantities) and treats hidden widths of unequal dimension (\(n_{\ell-1}\neq n_{\ell}\)).
### 5.2 Contrast to "standard parametrization"
At present, the vast majority of deep learning systems use either "Kaiming," "Xavier," or "LeCun" initialization (Glorot and Bengio, 2010; He et al., 2015; LeCun et al., 2002) with layer-independent learning rates. Generically, we refer to this as "standard parametrization" (SP), where layerwise initialization and learning rates scale as:
\[\sigma_{\ell}=\Theta(1/\sqrt{n_{\ell-1}})\qquad\text{and}\qquad\eta_{\ell}= \Theta(1). \tag{21}\]
Notice that SP initialization exceeds Parametrization 1 in any layer with fan-out smaller than fan-in. This includes the final layer in sufficiently wide networks. So, while Parametrization 1 implies that weight matrices have spectral norm \(\Theta(\sqrt{\texttt{fan-out}/\texttt{fan-in}})\) at initialization, under SP the spectral norms of certain layers are initialized larger than this. This means that, under SP, network outputs can blow up if training aligns the layer inputs with the top singular subspaces (and in fact this alignment generally occurs).
### 5.3 Contrast to "neural tangent parametrization"
The neural tangent parametrization (NTP) of Jacot et al. (2018) parameterizes a weight matrix as \(\mathbf{W}_{\ell}/\sqrt{n_{\ell-1}}\) where the entries of \(\mathbf{W}_{\ell}\) are sampled iid standard normal, and applies gradient descent with a layer-independent step-size. Since dividing a weight matrix by \(\sqrt{n_{\ell-1}}\) also divides the gradient of that layer by the same factor, NTP is equivalent to training in our setup with a layer-wise standard deviation and learning rate:
\[\sigma_{\ell}=\Theta(1/\sqrt{n_{\ell-1}})\qquad\text{and}\qquad\eta_{\ell}= \Theta(1/n_{\ell-1}). \tag{22}\]
Comparing Equations (21) and (22), we see that NTP shares the same initialization scaling as SP but uses a smaller step-size. To see the deficiency of NTP, notice that the output-layer \(\sigma_{L}\) is \(\sqrt{n_{L-1}}\) larger than in Parametrization 1, so that by the linearity of backpropagation, the gradient to any middle layer \(\mathbf{W}_{\ell}\) is also \(\sqrt{n_{L-1}}\) larger. The \(1/n_{\ell-1}\) learning rate in NTP then induces a change \(\Delta\mathbf{W}_{\ell}\) that is a factor \(\sqrt{n_{L-1}}/n_{\ell-1}\) times that of Parametrization 1 (i.e., smaller), which prescribes a learning rate of \(\eta_{\ell}=1\). Because Parametrization 1 guarantees \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}=\Theta(1)\), NTP causes \(\Delta\mathbf{W}_{\ell}\) to vanish in spectral norm as hidden widths \(n_{\ell-1},n_{L-1}\to\infty\).
It bears noting that an MLP parameterized with the NTP can be made to undergo feature evolution by simply rescaling the network output (and appropriately scaling down the global learning rate) (Bordelon and Pehlevan, 2022; Chizat et al., 2019). This operation transforms the NTP into \(\mu\)P.
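For side-by-side reference, here is a small helper (our own construction, with all \(\Theta(1)\) prefactors set to one) that tabulates \((\sigma_{\ell},\eta_{\ell})\) for a single layer under the spectral parametrization, SP, and NTP:

```python
import numpy as np

def layer_scales(n_in, n_out, scheme):
    """Init std sigma_l and SGD learning rate eta_l for one layer, prefactors = 1."""
    if scheme == "spectral":  # Parametrization 1 (recovers muP)
        return min(1.0, np.sqrt(n_out / n_in)) / np.sqrt(n_in), n_out / n_in
    if scheme == "SP":        # standard parametrization, Equation (21)
        return 1.0 / np.sqrt(n_in), 1.0
    if scheme == "NTP":       # neural tangent parametrization, Equation (22)
        return 1.0 / np.sqrt(n_in), 1.0 / n_in
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("spectral", "SP", "NTP"):
    print(scheme, layer_scales(n_in=4096, n_out=4096, scheme=scheme))
```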
### 5.4 Contrast to "Frobenius-normalized updates"
Condition 1 mandates that the spectral norm of the update at layer \(\ell\) be proportional to the spectral norm of the weight matrix to which it is applied: \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{*}\propto\left\|\mathbf{W}_{\ell}\right\|_{*}\). This contrasts with a body of optimization work (Bernstein et al., 2020; Liu et al., 2021; Shazeer and Stern, 2018; You et al., 2017) that has suggested instead that the _Frobenius norm_ of the update at layer \(\ell\) be proportional to the _Frobenius norm_ of the weight matrix to which it is applied: \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{F}\propto\left\|\mathbf{W}_{\ell}\right\|_{F}\). For instance, Bernstein et al. (2020) analysed the operator structure of deep neural networks and wrote down perturbation bounds on network activations in terms of perturbations to the weight matrices at each layer, and used these perturbation bounds to motivate making updates that are small in Frobenius norm. The flaw in that analysis is the assumption that weight matrices and their perturbations have identical conditioning structure (Bernstein et al., 2020, Condition 2 of Theorem 1), when in reality weight matrices have high stable rank and updates have \(\Theta(1)\) stable rank. In light of this fact, the Frobenius-normalized proportionality rule should be modified to \(\left\|\Delta\mathbf{W}_{\ell}\right\|_{F}\propto\left\|\mathbf{W}_{\ell}\right\|_{F}/\sqrt{\min(n_{\ell},n_{\ell-1})}\) in order to see proper feature evolution.
## 6 Demonstration: \(\mu\)P versus NTP
Here we discuss a simple, illustrative experiment in which we directly verify that \(\mu\)P obeys our spectral scaling condition (Condition 1) and achieves leading-order feature evolution (Desideratum 1), while the NTP does not. We do so via direct measurement of spectral quantities in MLPs of varying width trained on the same task. Appendix A provides full experimental details, and results are plotted in Figure 2.
**Model and data.** We train MLPs with \(L=3\) linear layers, no biases, and ReLU activations. For demonstration purposes, we train on a small subset of \(B=200\) examples from a two-class subset of CIFAR-10. The model has input dimension \(n_{0}=3072\), output dimension \(n_{3}=1\), and uniform hidden dimension \(n_{1}=n_{2}=n\), with \(n\) varied between training runs. We initialize and train each MLP twice with hyperparameters obtained using \(\mu\)P and NTP scalings, respectively. We train networks of widths \(n\in[16,4096]\) to near-zero training loss. After training, we compute various spectral quantities which we now discuss.
**Norms of feature updates.** We measure the average relative change in features over training:
\[\mathbb{E}_{\mathbf{x}}\!\left[\frac{\left\|\mathbf{h_{2}}(\mathbf{x})-\mathbf{h_{2}}^{0}(\bm {x})\right\|_{2}}{\left\|\mathbf{h_{2}}^{0}(\mathbf{x})\right\|_{2}}\right], \tag{23}\]
where \(\mathbf{h_{2}}^{0}(\mathbf{x})\) and \(\mathbf{h_{2}}(\mathbf{x})\) are the second (preactivation) hidden vector at initialization and after training respectively, and the expectation is over samples \(\mathbf{x}\) from the batch. As shown in Figure 2A, this feature evolution ratio remains roughly fixed as width \(n\) grows when using \(\mu\)P, in satisfaction of Desideratum 1. By contrast, it decays as \(1/\sqrt{n}\) for the NTP as predicted by Lee et al. (2019).

Figure 2: **Spectral quantities are \(\Theta(1)\) under \(\mu\)P but decay with width under NTP.** We train multilayer perceptrons of varying width, and plot the following quantities computed between the initial and final network: **(A)** Average relative change in features. **(B)** Relative change in weights in spectral norm. **(C)** Final layer alignment with incoming vectors (Equation (24)). **(D)** Relative change in weights in Frobenius norm. Labeled triangles show predicted powerlaw slopes. Shaded regions show one standard deviation over random data and initialization.
**Spectral norms of weight updates.** We measure the relative change in weights in spectral norm: \(\left\|\mathbf{W}_{2}-\mathbf{W}_{2}^{0}\right\|_{*}/\left\|\mathbf{W}_{2}^{0}\right\|_{*}\), where \(\mathbf{W}_{2}^{0}\) and \(\mathbf{W}_{2}\) are the second weight matrix before and after training, respectively. As shown in Figure 2B, this ratio remains roughly fixed with width in the case of \(\mu\)P, in accordance with Condition 1. By contrast, it decays as \(1/\sqrt{n}\) for the NTP.
**Final-layer alignment.** We measure the alignment of the final layer to incoming vectors as follows:
\[\text{Final-layer alignment}:=\mathbb{E}_{\mathbf{x}}\!\left[\frac{\left\|\mathbf{W}_{3}\mathbf{h}_{2}^{\prime}(\mathbf{x})\right\|_{2}}{\left\|\mathbf{W}_{3}\right\|_{*}\cdot\left\|\mathbf{h}_{2}^{\prime}(\mathbf{x})\right\|_{2}}\right]. \tag{24}\]
This quantity is \(\Theta(1/\sqrt{n})\) at initialization. As shown in Figure 2C, it grows to \(\Theta(1)\) when using \(\mu\)P but remains \(\Theta(1/\sqrt{n})\) when using the NTP.
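For completeness, the diagnostics in panels A-C reduce to a few lines of NumPy; the helper names below are ours, and each array is assumed to stack one example per row:

```python
import numpy as np

def relative_feature_change(H0, H):
    """Equation (23): rows of H0/H are hidden vectors h_2 before/after training."""
    return np.mean(np.linalg.norm(H - H0, axis=1) / np.linalg.norm(H0, axis=1))

def relative_spectral_change(W0, W):
    """Figure 2B: relative change of a weight matrix in spectral norm."""
    return np.linalg.norm(W - W0, 2) / np.linalg.norm(W0, 2)

def final_layer_alignment(W_last, Hp):
    """Equation (24): rows of Hp are the incoming vectors h'_2(x)."""
    num = np.linalg.norm(Hp @ W_last.T, axis=1)
    den = np.linalg.norm(W_last, 2) * np.linalg.norm(Hp, axis=1)
    return np.mean(num / den)
```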
**Frobenius norms of weight updates.** One often hears the claim that "the weights don't move" when training very wide neural networks. Here we show that the validity of this claim crucially depends on the choice of metric. In Figure 2D, we show that the Frobenius norm is deceptive: the net relative change \(\left\|\mathbf{W}_{2}-\mathbf{W}_{2}^{0}\right\|_{F}/\left\|\mathbf{W}_{2}^{0}\right\|_{F}\) can decay with width even when the relative change in spectral norm is constant. This provides crucial context for interpreting existing results in the literature (Lee et al., 2019, Figure 1).
## 7 Related work
\(\mu\)P was derived heuristically from spectral norm considerations in talks given by the first author in 2021 (Yang and Hu, 2021). Earlier work (Bernstein et al., 2020) derived a spectral analysis of feature learning based on perturbation bounds, but that work obtained the wrong scaling relation with network width due to a flawed conditioning assumption on gradients. Below, we review various strands of related work on feature learning and training strategies.
**Parametrizations for wide neural networks.** Much work has examined the scaling behavior of training dynamics of networks at large width. Together, works on the "neural tangent kernel" (NTK) limit (Jacot et al., 2018; Lee et al., 2019), the "mean-field" limit (Mei et al., 2019; Rotskoff and Vanden-Eijnden, 2022; Sirignano and Spiliopoulos, 2022), and the related "feature learning" limit (Geiger et al., 2020; Yaida, 2022; Yang and Hu, 2021) (i.e. the \(\mu\)P limit), paint a rich picture of a family of possible infinite-width scalings. After healthy debate regarding the relative empirical performance of the (more analytically tractable) NTK limit and the feature learning limit, a recent consensus holds that learning features is usually beneficial in practical large-scale deep learning settings (Chizat et al., 2019; Fort et al., 2020; Vyas et al., 2022). Our spectral scaling analysis recovers the feature learning limit in a simpler manner than previous analyses.
**Spectral normalization.** Spectral normalization emerged as a form of weight normalization in the generative adversarial network literature (Miyato et al., 2018). This form of normalization acts on the weight matrices, and is used analogously to other normalization schemes such as batchnorm (Ioffe and Szegedy, 2015) and layernorm (Ba et al., 2016). In contrast to our spectral scaling condition (Condition 1), this method treats only the weights \(\mathbf{W}_{\ell}\), not the updates \(\Delta\mathbf{W}_{\ell}\). Furthermore, spectral normalization implementations typically set the spectral norm of weight matrices either to one or to a tunable hyperparameter (Farnia et al., 2019), and do not include the key factor of \(\sqrt{n_{\ell}/n_{\ell-1}}\) in our Condition 1.
**Operator theory of neural networks.** Neural networks are constructed by composing linear operators with elementwise nonlinearities. One line of work studies this operator structure and how it behaves under perturbation to understand how step sizes should be set in gradient descent. For instance, Bernstein et al. (2020) derive perturbation bounds on the maximum amount of feature change that can be induced by a gradient step in terms of the operator properties of the weight matrices. Meanwhile, Yang and Hu (2021) study the operator structure of neural networks in the limit that width is taken to infinity, proposing a parametrization that obtains feature learning in this limit.
**Optimization theory.** A body of literature studies optimization algorithms for deep networks that take steps whose size is set relative to the weights to which they are applied (Bernstein et al., 2020; Carbonnelle and Vleeschouwer, 2019; Liu et al., 2021; Shazeer and Stern, 2018; You et al., 2017). A particular focus has
been placed on setting the Frobenius norm of update steps to be small relative to the Frobenius norm of the weight matrices (Bernstein et al., 2020; Liu et al., 2021; You et al., 2017). A main practical takeaway of this paper is that the Frobenius norm should be replaced by the spectral norm to get proper width scaling. The source of the difference between Frobenius and spectral norm is that gradient updates tend to have low stable rank, as shown in Figure 1, while the weights themselves tend to have high stable rank.
## 8 Conclusion
We have presented an analysis of the dynamics of feature learning in deep neural networks, beginning with desired conditions on feature evolution (Desideratum 1) and culminating in the demonstration that these conditions may be achieved by simple scaling rules (Condition 1 and Parametrization 1) applied uniformly to each layer. Our analysis recovers and generalizes practically-important "feature-learning parametrizations" and provides a simple, unifying perspective on the question of parametrization in wide neural networks. For comparison, formal results derived under the _tensor programs_ framework are given in Appendix B.
Our discussion has focused principally on MLPs for clarity, but our feature learning desideratum and spectral scaling condition can be directly applied to structured architectures. The spectral scaling condition may be applied to multi-index tensors as appear in convolutional architectures by applying the condition to appropriate "slices" of the full tensor. Simple application of our spectral scaling condition recovers \(\mu\)P scalings reported for these model classes (see e.g. Yang et al. (2021), Table 8 and Section J.2). We also give the hyperparameter scalings for biases (which are easily derived but omitted in the main text for clarity) in Appendix D. This architectural universality is also proven rigorously in Appendix B.
Finally, we note that under a natural redefinition of the \(\ell^{2}\)-norm of a vector \(\mathbf{v}\in\mathbb{R}^{n}\) incorporating a normalization prefactor, _all vector and matrix norms used in our spectral analysis_ become \(\Theta(1)\), permitting an elegant summary of our conclusions. We state and discuss this nondimensionalization procedure in Appendix E. This generalization permits the extension of our results to e.g. the input layer in language modeling, in which one-hot embeddings violate our assumption that \(\left\|\mathbf{x}\right\|_{2}=\Theta(\sqrt{n_{0}})\).
## Acknowledgements
The authors thank Josh Albrecht, Blake Bordelon, Alex Wei, Nikhil Ghosh, and Dhruva Karkada for useful discussions and comments on the manuscript. JS gratefully acknowledges support from the National Science Foundation Graduate Research Fellowship Program (NSF-GRFP) under grant DGE 1752814.
## Author Contributions
GY developed our core insight regarding the utility of the spectral norm, produced our tensor programs theory (Appendix B), and aided in refinement of the paper. JS spearheaded the writing of the paper, led iteration towards simple analysis which communicates our spectral picture, and ran experiments. JB developed an early incarnation of the spectral picture (Bernstein et al., 2020), contributed key insights simplifying our exposition including unifying all layers under single formulae, aided in writing the paper, and ran experiments.
|
2302.07811 | Asteroseismology of evolved stars to constrain the internal transport of
angular momentum. VI. Testing a parametric formulation for the azimuthal
magneto-rotational instability | Asteroseismic measurements of the internal rotation rate in evolved stars
pointed out to a lack of angular momentum (AM) transport in stellar evolution
models. Several physical processes in addition to hydrodynamical ones were
proposed as candidates for the missing mechanism. Nonetheless, no current
candidate can satisfy all the constraints provided by asteroseismology. We
revisit the role of a candidate process whose efficiency scales with the
contrast between the rotation rate of the core and the surface which was
proposed to be related to the azimuthal magneto-rotational instability (AMRI)
by Spada et al. We compute stellar evolution models of low- and
intermediate-mass stars with the parametric formulation of AM transport
proposed by Spada et al. until the end of the core-helium burning for low- and
intermediate-mass stars and compare our results to the latest asteroseismic
constraints available in the post main sequence phase. Both hydrogen-shell
burning stars in the red giant branch and core-helium burning stars of low- and
intermediate-mass in the mass range $1 M_{\odot} \lesssim M \lesssim 2.5
M_{\odot}$ can be simultaneously reproduced by this kind of parametrisation.
Given current constraints from asteroseismology, the core rotation rate of
post-main sequence stars seems to be well explained by a process whose
efficiency is regulated by the internal degree of differential rotation in
radiative zones. | F. D. Moyano, P. Eggenberger, B. Mosser, F. Spada | 2023-02-15T17:49:31Z | http://arxiv.org/abs/2302.07811v1 | # Asteroseismology of evolved stars to constrain the internal transport of angular momentum
###### Abstract
Context: Asteroseismic measurements of the internal rotation rate in evolved stars pointed to a lack of angular momentum (AM) transport in stellar evolution models. Several physical processes in addition to hydrodynamical ones were proposed as candidates for the missing mechanism. Nonetheless, no current candidate can satisfy all the constraints provided by asteroseismology.
Aims: We revisit the role of a candidate process whose efficiency scales with the contrast between the rotation rate of the core and the surface, which was proposed to be related to the azimuthal magneto-rotational instability (AMRI) by Spada et al.
Methods: We compute stellar evolution models of low- and intermediate-mass stars with the parametric formulation of AM transport proposed by Spada et al. until the end of the core-helium burning phase and compare our results to the latest asteroseismic constraints available in the post-main-sequence phase.
Results: Both hydrogen-shell burning stars on the red giant branch and core-helium burning stars of low and intermediate mass in the range \(1M_{\odot}\lesssim M\lesssim 2.5M_{\odot}\) can be simultaneously reproduced by this kind of parametrisation.
Conclusions: Given current constraints from asteroseismology, the core rotation rate of post-main-sequence stars seems to be well explained by a process whose efficiency is regulated by the internal degree of differential rotation in radiative zones.
## 1 Introduction
The detection of splittings of mixed modes in post-main sequence stars enabled the measurement of the internal rotation rate of stars at different evolutionary phases, particularly in subgiants (Deheuvels et al., 2014, 2020), red giant branch (RGB) stars in the hydrogen shell-burning phase (Beck et al., 2012; Deheuvels et al., 2012; Mosser et al., 2012; Di Mauro et al., 2016, 2018; Gehan et al., 2018), red giants in the core-helium burning phase (Mosser et al., 2012; Deheuvels et al., 2015; Tayar et al., 2019), among others. This information, combined with precise constraints on their structure and fundamental parameters such as the stellar mass and effective temperature, gives us information on how the physical processes redistributing angular momentum (AM) in stellar interiors must act through evolution. In particular, the first constraints on the internal rotation rate of red giants (Beck et al., 2012) led to the conclusion that hydrodynamical processes usually adopted in stellar evolution computations, such as meridional currents and shear instabilities (Zahn, 1992), cannot account for the core rotation rate of evolved low-mass stars (Eggenberger et al., 2012; Marques et al., 2013; Ceillier et al., 2013), pointing to a missing physical process in stellar models. Different processes were proposed as candidates to explain the lack of AM transport in stellar interiors, such as internal magnetic fields (Spruit, 2002; Cantiello et al., 2014; Fuller et al., 2019; Takahashi & Langer, 2021; Eggenberger et al., 2022), wave-like perturbations such as internal gravity waves (Talon & Charbonnel, 2005; Pincon et al., 2017) or mixed modes (Belkacem et al., 2015), inward pumping of AM from convective envelopes (Kissin & Thompson, 2015), among others. However, none of them has so far given a definite answer to the problem of AM transport in stellar interiors. Particularly troublesome is the subgiant-red giant connection, given the difficulty for a given transport process to reproduce the core rotation rate in both phases simultaneously (Eggenberger et al., 2019). In the same way, the high efficiency of AM transport needed to provide a good overall agreement on the RGB can lead to some discrepancies with core rotation rates observed during the core-helium burning phase of intermediate-mass stars (den Hartogh et al., 2020).
The increasing number of results from extensive studies by the asteroseismology community on internal rotation (Mosser et al., 2012; Deheuvels et al., 2014, 2015; Van Reeth et al., 2016; Gehan et al., 2018; Tayar et al., 2019; Deheuvels et al., 2020; Li et al., 2020) and from detailed studies of individual stars (Di Mauro et al., 2016; Fellay et al., 2021; Salmon et al., 2022; Tayar et al., 2022) enables us to explore new candidates for the physical processes responsible for the internal AM transport at different evolutionary phases and in different mass ranges. In a series of recent papers, the efficiency of the additional AM transport process that would be needed in stellar evolution models to fit the core rotation rate of evolved stars has been quantified (den Hartogh et al., 2019; Eggenberger et al., 2019; Deheuvels et al., 2020; Moyano et al., 2022), leading to the conclusion that the physical process needed should decrease its efficiency just after the end of the main sequence, gradually become more efficient through the early RGB and core-helium burning phases, and be more efficient in more massive stars in general. This kind of exploratory work enables us to gain information about the nature of the physical process across different stellar conditions.
In a recent work, Spada et al. (2016) showed that a physical process whose efficiency increases with the ratio of core-to-surface rotation rate can account for the internal rotation of RGB stars and is in mild agreement with that of subgiants. This kind of parametrisation for the AM transport efficiency was proposed to be related to the Azimuthal Magneto-Rotational Instability (AMRI; Rüdiger et al., 2015), a different version of the standard magneto-rotational instability (MRI; Balbus & Hawley, 1991) in which the background magnetic field is mainly azimuthal and current-free (i.e. \(B_{\phi}\propto 1/r\), with \(r\) the radial distance) and the instability arises from non-axisymmetric modes, as opposed to the MRI where the instability arises from axisymmetric perturbations in a medium with a poloidal field. The AMRI was explored in numerical simulations (Rüdiger et al., 2014, 2015; Gellert et al., 2016; Guseva et al., 2017) and its existence was verified in laboratory experiments (Seilmayer et al., 2014); this instability thus represents a possible candidate to solve the AM transport problem. Spada et al. (2016) argued that the transport efficiency of the AMRI increases as the contrast between the core and the surface increases. They thus parametrised the diffusion coefficient as a power law of the core-to-surface rotation rate and presented stellar evolution models of low-mass stars with a single mass value of \(M=1.25M_{\odot}\) in the subgiant and early red-giant phase, finding good agreement with the core rotation rate of RGB stars by fitting an exponent to the power law that is consistent with the scaling of the eddy viscosity of the AMRI. In this paper, we further explore this scenario for both low- and intermediate-mass stars until the end of the core-helium burning phase, and explore the role of the molecular viscosity. We benefit from a larger dataset of red giants in the hydrogen-shell burning phase with revised trends in the core rotation evolution (Gehan et al., 2018), as well as two early subgiants close to the end of the main sequence (Deheuvels et al., 2020).
In Sect. 2 we describe the physical ingredients of our models and the data used; in Sect. 3 we present the models obtained and their comparison with the data; we discuss the implications in Sect. 4 and conclude in Sect. 5.
## 2 Physical ingredients
### 2.1 Stellar evolution code and initial conditions
We compute stellar evolution models with the Geneva stellar evolution code (GENEC; Eggenberger et al., 2008) taking into account the transport of angular momentum in radiative regions in the shellular rotation approximation (Zahn, 1992). We include the advection of angular momentum by meridional circulation as well as diffusion by the shear instability. Convective zones are assumed to rotate rigidly. The equation that describes the AM transport in radiative regions is given by
\[\rho\frac{\mathrm{d}}{\mathrm{d}t}\left(r^{2}\Omega\right)_{M_{r}}=\frac{1}{ 5r^{2}}\frac{\partial}{\partial r}\left(\rho r^{4}\Omega U(r)\right)+\frac{1 }{r^{2}}\frac{\partial}{\partial r}\left(\rho Dr^{4}\frac{\partial\Omega}{ \partial r}\right) \tag{1}\]
with \(\Omega\) the horizontally averaged angular velocity, \(\rho\) the density, \(r\) the radial coordinate, \(U\) the vertical component of the meridional circulation velocity and \(D\) the total diffusion coefficient. The coefficient \(D\) takes into account the action of diffusive processes, which in our case are the shear instability (\(D_{\mathrm{shear}}\)) and the parametric diffusion coefficients that we explore (\(D_{\mathrm{add}}\)), so \(D=D_{\mathrm{shear}}+D_{\mathrm{add}}\). The solar metallicity is adopted for all the models with a solar chemical mixture as given by Asplund et al. (2009), and the initial period is \(P_{\mathrm{ini}}=10\) days (see Sect. 4.1 of Moyano et al. (2022) for a discussion on this choice). We do not include braking by magnetic winds. The rest of the parameters (e.g. overshooting, mass-loss, mixing-length parameter) were chosen in a similar way as in Ekström et al. (2012). The models are computed from the zero age main sequence (ZAMS) until the tip of the RGB for low-mass stars (\(M\lesssim 2M_{\odot}\)), and until the end of the core-helium burning phase for models that do not go through the helium flash (\(M\gtrsim 2M_{\odot}\)).
In addition to GENEC, we use the MESA stellar evolution code version 15140 (Paxton et al., 2011, 2013, 2015, 2019) to compute low-mass (\(M\lesssim 2M_{\odot}\)) models until the end of the core-helium burning phase 1. We do this because GENEC is not prepared to routinely follow the angular momentum transport during the helium flash. In MESA the AM transport is treated as a purely diffusive process (Paxton et al., 2013) and its evolution is given by the equation
Footnote 1: All the initial parameters and extensions necessary to reproduce our models are available at [https://zenodo.org/record/7305068](https://zenodo.org/record/7305068)
\[\left(\frac{\partial\Omega}{\partial t}\right)_{m}=\frac{1}{i}\left(\frac{\partial}{\partial m}\right)_{t}\left[(4\pi r^{2}\rho)^{2}iD\left(\frac{\partial\Omega}{\partial m}\right)\right]-\frac{2\Omega}{r}\left(\frac{\partial r}{\partial t}\right)_{m}\left(\frac{1}{2}\frac{\mathrm{d}\ln i}{\mathrm{d}\ln r}\right) \tag{2}\]
where \(i\) is the specific moment of inertia of a shell at a mass coordinate \(m\), \(D\) is the associated diffusion coefficient, and the rest of the variables keep their usual meaning.
We also include the additional diffusion coefficient that we explore (\(D_{\rm add}\)) in Eq. 2 as part of the diffusion coefficient \(D\) for our red-clump models computed with MESA, on top of the meridional circulation in the Eddington-Sweet (ES) approach and the secular shear instability (SSI; see Heger et al., 2000), thus \(D=D_{\mathrm{ES}}+D_{\mathrm{SSI}}+D_{\mathrm{add}}\). Solid-body rotation is also assumed in convective regions. We recall that the main difference in the treatment of the AM transport between both stellar codes lies in the advective character of Eq. 1, which can lead to both inward and outward AM transport depending on the (non-trivial) sense of the circulation of each circulation cell (e.g. Decressin et al., 2009). This can increase the shear rather than decrease it, as diffusive processes do. However, for the initial masses and velocities that we adopt, the meridional currents are not expected to act efficiently, so their effects remain limited.
Although different approaches are used we verified that a good agreement concerning the rotational evolution is obtained with both codes, especially in the upper RGB where the additional diffusion coefficient (\(D_{\mathrm{add}}\)) that we explore dominates.
### 2.2 Angular momentum redistribution regulated by core-envelope coupling
In line with previous efforts by Spada et al. (2016), we explore the effect of a physical process whose efficiency scales with the contrast of rotation rate between the core and the surface. We implement this as an additional diffusion coefficient in the equation of AM transport (Eq. 1) following the prescription
\[D_{\mathrm{add}}=D_{0}\left(\frac{\Omega_{\mathrm{core}}}{\Omega_{\mathrm{ surf}}}\right)^{\alpha} \tag{3}\]
where we take \(\Omega_{\rm core}\) as the mean core rotation rate in the inner radiative layers, \(\Omega_{\rm surf}\) is the surface rotation rate, and \(D_{0}\) and \(\alpha\) are free parameters. In our models we take \(\Omega_{\rm core}\) as the mean rotation rate in the region close to the core as sensed by gravity modes and whose expression is given by (Goupil et al., 2013)
\[\Omega_{\rm core}=\frac{\int_{0}^{r_{\rm g}}\Omega\,N_{\rm BV}\,{\rm d}r/r}{\int_{0}^{r_{\rm g}}N_{\rm BV}\,{\rm d}r/r} \tag{4}\]

with \(N_{\rm BV}\) the Brunt-Väisälä frequency, \(r\) the radial coordinate, and \(r_{\rm g}\) the upper boundary of the gravity-mode cavity.
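For concreteness, here is a minimal Python sketch of how Eqs. (3) and (4) can be evaluated on discretized stellar profiles; the array names and the simple discretization are our own assumptions and are not taken from GENEC. The default values in `d_add` are the calibrated ones quoted in Sect. 3 (\(D_{0}=50\) cm\({}^{2}\)/s, \(\alpha=2\)):

```python
import numpy as np

def omega_core(r, omega, n_bv, r_g):
    """Equation (4): N_BV-weighted mean rotation rate over the g-mode cavity.

    r, omega, n_bv are 1-D profiles from centre outwards; r_g is the upper
    boundary of the gravity-mode cavity (same units as r).
    """
    m = (r > 0.0) & (r <= r_g)
    w = n_bv[m] / r[m] * np.gradient(r[m])   # discretized weight N_BV dr / r
    return np.sum(omega[m] * w) / np.sum(w)

def d_add(omega_core_val, omega_surf, d0=50.0, alpha=2.0):
    """Equation (3), in cm^2/s, with the calibrated D0 = 50 cm^2/s and alpha = 2."""
    return d0 * (omega_core_val / omega_surf) ** alpha
```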
As mentioned in Sect. 1, such a dependence on the core-envelope coupling was proposed to be related to the AMRI. According to direct numerical simulations of Taylor-Couette flows, the eddy viscosity resulting from the onset of this instability can be fit with a relation that depends on adimensional numbers whose values are determined by the properties of the fluid. In particular, the turbulent eddy viscosity (\(\nu_{\rm T}\)) derived from direct numerical simulations was shown to scale as \(\nu_{\rm T}/\nu\propto Rm/\sqrt{Pm}\) (Rüdiger et al., 2018), where \(Pm=\nu/\eta\) is the magnetic Prandtl number, \(Rm=Re\,Pm\) is the magnetic Reynolds number, with \(\eta\) the magnetic diffusivity and \(\nu\) the molecular viscosity of the gas. Spada et al. (2016) argue that the eddy viscosity in such experiments not only scales with \(Rm\) and \(Pm\) but also with the degree of shear between both cylinders, and give the relation
\[\frac{\nu_{\rm T}}{\nu}\propto\frac{Rm}{\sqrt{Pm\,Ra}}\left(\frac{\Omega_{\rm i}}{\Omega_{\rm o}}\right)^{2} \tag{5}\]
where \(Ra\) is the Rayleigh number and the subindexes \(i\) and \(o\) refer to the values of the inner and outer cylinders in the Taylor-Couette simulations, respectively. Taking into account only the dependence on the angular velocities, then one can approximate \(\Omega_{\rm i}\sim\Omega_{\rm core}\) and \(\Omega_{\rm o}\sim\Omega_{\rm surf}\), leading to the functional dependence proposed for the diffusion coefficient given by Eq. 3.
Since \(Ra\equiv Qg(T_{o}-T_{i})(R_{o}-R_{i})^{3}/\nu\chi\), with \(Q\) the coefficient of thermal expansion of the gas, \(\chi\) the thermal conductivity, \(g\) the local gravity, and \(T_{\rm i,o}\) and \(R_{\rm i,o}\) the temperatures and radii of the inner and outer boundaries, the right hand side of Eq. 5 does not depend on the molecular viscosity. The turbulent viscosity alone is then proportional to the molecular viscosity (i.e. \(\nu_{\rm T}\propto\nu\)).
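This can be checked explicitly: writing \(Re\propto 1/\nu\), \(Pm=\nu/\eta\) and \(Ra\propto 1/(\nu\chi)\) at fixed geometry and stratification, one finds
\[\frac{Rm}{\sqrt{Pm\,Ra}}=Re\,\sqrt{\frac{Pm}{Ra}}\propto\frac{1}{\nu}\sqrt{\nu^{2}\frac{\chi}{\eta}}=\sqrt{\frac{\chi}{\eta}}\ \,\]
so that \(\nu_{\rm T}/\nu\) is indeed independent of \(\nu\).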
Motivated by these findings we further explore the idea that the turbulent viscosity which regulates the transport of AM scales not only with the rotation contrast between core and surface but also with the intrinsic molecular viscosity of the gas. This is a simple way to test the AMRI efficiency at different conditions in the stellar interior such as temperature and density, which occur in stars of different initial mass. We consider the most extreme case in which the diffusion coefficient \(D_{\rm add}\) is enhanced by the maximum value of the molecular viscosity in the radiative interior, thus we employ the following prescription
\[D_{\rm add}=D_{1}{\rm max}(\nu_{\rm mol})\left(\frac{\Omega_{\rm core}}{ \Omega_{\rm surf}}\right)^{\alpha} \tag{6}\]
where \(D_{1}\) is a constant adimensional free parameter, and \(\nu_{\rm mol}\) is the kinematic molecular viscosity, whose expression for a mixture of hydrogen and helium is given by (Schatzman, 1977)
\[\nu_{\rm mol}=2.2\times 10^{-15}\frac{T^{5/2}}{\rho\ln\Lambda}\frac{1+7X}{8}\,[{\rm cm^{2}/s}] \tag{7}\]
where \(\ln\Lambda\) is the Coulomb logarithm and \(X\) is the mass fraction of hydrogen. Usually the maximum of the kinematic molecular viscosity is located in the radiative layers near the hydrogen-burning shell. The behaviour of both the rotation contrast between core and surface and the maximum value of the molecular viscosity in the radiative interior for three representative models is shown in Fig. 1. These models are computed taking into account only meridional circulation and shear instabilities. This shows that the molecular viscosity is expected to be higher in more massive stars, and grows as the star climbs the RGB.
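Both prescriptions are straightforward to evaluate; the sketch below uses the low-mass calibration adopted in Sect. 3 as default values, and the function names are ours:

```python
# Sketch of the two parametric prescriptions (Eqs. 3 and 6) and of the
# molecular viscosity (Eq. 7); defaults follow the calibrations of Sect. 3.

def nu_mol(T, rho, lnLambda, X):
    """Kinematic molecular viscosity of an H/He mixture (Eq. 7), in cm^2/s."""
    return 2.2e-15 * T**2.5 / (rho * lnLambda) * (1.0 + 7.0 * X) / 8.0

def D_add(omega_core, omega_surf, alpha=2.0, D0=50.0, nu_max=None, D1=300.0):
    """Eq. 3 by default; Eq. 6 when max(nu_mol) in the radiative interior is given."""
    contrast = (omega_core / omega_surf) ** alpha
    if nu_max is not None:
        return D1 * nu_max * contrast    # Eq. 6: D1 is adimensional
    return D0 * contrast                 # Eq. 3: D0 carries the units (cm^2/s)

print(D_add(700.0, 7.0))  # a contrast of 100 with alpha = 2 gives 5.0e5 cm^2/s
```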
## 3 Stellar models
### Red giant branch stars
We computed rotating models with initial masses in the range \(M=1-2.5M_{\odot}\) from the ZAMS until the end of the core-helium burning phase, using either the prescription given by Eq. 3 or Eq. 6, in addition to the shear instability and meridional circulation. An example of the evolutionary track in the \(\Omega_{\rm core}-\log g\) diagram for a \(1.3M_{\odot}\) model computed from the ZAMS until the end of the core-helium burning phase is shown in Fig. 2. This model was computed with an initial period of \(P=10\) days employing Eq. 3 with \(D_{0}=50\) cm\({}^{2}\)/s and \(\alpha=2\). In Fig. 3, we show the evolution of the core rotation rate for different initial masses using a diffusion coefficient given by Eq. 3 and a calibrated value of \(D_{0}=50\) cm\({}^{2}\)/s and \(\alpha=2\). The models with initial masses of \(M=1.1,1.3\) and \(1.5M_{\odot}\) can reproduce simultaneously the core rotation rate of the three subgiants with the fastest spinning cores around \(\log g\sim 3.7\), and the core rotation rate of the bulk of RGB stars, which is in the range \(\Omega_{\rm c}/2\pi\sim 600-800\) nHz.
We chose a value of \(\alpha=2\) to fit the mean flat trend in the evolution of the core rotation rate for RGB stars (Gehan et al., 2018). While \(D_{0}\) regulates the maximum core rotation rate that can be achieved during the subgiant and early red-giant phase, \(\alpha\) regulates the speed with which the core is decelerated and thus sets the slope of the rotational evolution in the RGB phase; the higher the value of \(\alpha\), the steeper the slope of the core rotation rate during the RGB. Thus the large sample of RGB stars given by Gehan et al. (2018) constrains this value to \(\alpha\sim 2\) for low-mass stars. These are not completely new results since they were already presented by Spada et al. (2016) for a \(1.25M_{\odot}\) model,
Figure 1: Solid lines show the ratio of core-to-surface rotation rate while dashed lines show the maximum value of the molecular viscosity in the stellar interior through evolution from the zero age main sequence until the tip of the red giant branch, for different initial masses. In this set of models the transport of angular momentum is driven only by hydrodynamical processes.
but we confirm in this work that this scenario is valid in the mass range \(M\sim 1-1.5M_{\odot}\). However, for more massive stars (\(M\gtrsim 1.7M_{\odot}\)), the core rotation rate decreases too steeply and thus is not able to reproduce the apparently flat trend seen for RGB stars. This occurs in part because the radius of these stars is larger on the RGB compared to lower-mass stars and hence the surface rotation rate is lower, which increases the value of the diffusion coefficient and hence the deceleration rate of the core.
### Low-mass core helium burning stars
We computed models until the end of the core-helium burning phase for our low-mass models with \(M=1.3,1.5\) and \(1.7M_{\odot}\) for which constraints on the core rotation rate are available (Mosser et al., 2012). In Fig. 4, we show the evolution of the core rotation rate with an additional diffusion coefficient following Eq. 3, with \(D_{0}=50\) cm\({}^{2}\)/s and \(\alpha=2\), from the ZAMS until the end of the core helium burning phase. This is a continuation of the evolutionary scenario presented in Fig. 3 for RGB stars since we adopt the same diffusion coefficient. Our models with \(M=1.3,1.5\) and \(1.7M_{\odot}\) can reproduce the core rotation rate of the three fastest subgiants as well as their surface rotation rates, shown by the dotted line (only for the \(1.3M_{\odot}\) model). They can also reproduce the apparently flat trend of red giants in the hydrogen-shell burning phase (at \(\log g\sim 3.3-2.9\)) and the core rotation rate of core-helium burning stars for different masses (see red, green and blue lines in Fig. 4). The models spend \(\sim 85\%\) of their core-helium burning time with their core spinning at \(\Omega_{\rm core}/2\pi\sim 40-200\) nHz (denoted by grey lines), in agreement with the asteroseismic constraints given by Mosser et al. (2012) shown as magenta circles at \(\log g\sim 2.5\). This is achieved without changing any parameter during the evolution until the end of the core-helium burning phase. We also verify, as proposed by Spada et al. (2016), that if one enforces solid body rotation until a given time during the subgiant branch, one can reproduce the increase of the core rotation rate seen for the subgiants. We show this for a \(1.3M_{\odot}\) model for which we enforced solid body rotation until roughly the location of the second subgiant seen in Fig. 4 whose internal rotation is consistent with that of a solid body (Deheuvels et al., 2020). Afterwards, the core rotation rate converges towards the RGB bump, reproducing in a similar way the constraints for RGB stars and red clump stars.
In this series of models, the core rotation rate decreases by roughly one order of magnitude from the RGB base until the tip. This happens because, as the star ascends the RGB, the core contracts and the envelope expands until the tip, which in turn
Figure 4: Evolution of the core rotation rate for models with an additional diffusion coefficient of the form \(D_{\rm add}=D_{0}(\Omega_{\rm core}/\Omega_{\rm surf})^{2}\), with \(D_{0}=50\) cm\({}^{2}\)/s. The black filled (empty) symbols show the surface (core) rotation rate of the eight subgiants presented by Deheuvels et al. (2014, 2020). The grey symbols show the core rotation rate of stars in the lower RGB (Gehan et al., 2018). The magenta circles correspond to red giants presented by Mosser et al. (2012), with those grouped around \(\log g\sim 2.5\) in the core-helium burning phase (we only include those with an estimated mass \(M\lesssim 2M_{\odot}\)). The red and black dashed lines show the surface rotation rate of the \(1.3M_{\odot}\) model. The black line shows a \(1.3\) M\({}_{\odot}\) model for which we enforced rigid rotation until the location of the second subgiant at \(\log g\sim 3.9\). The grey lines at \(\log g\sim 2.5\) mark the region where the central abundance of helium is in the range \(Y=0.9-0.1\), i.e. the long and stable core-helium burning phase.
Figure 3: Core rotation rate as a function of the surface gravity for models with different initial masses, as indicated in the figure. The models were computed using Eq. 3 with \(D_{0}=50\) cm\({}^{2}\)/s and \(\alpha=2\). The black squares correspond to subgiants (Deheuvels et al., 2014, 2020) and the grey triangles to RGB stars (Gehan et al., 2018).
Figure 2: Core rotation rate as a function of the surface gravity for a \(1.3M_{\odot}\) model with an initial period of \(P=10\) days employing Eq. 3 with \(D_{0}=50\) cm\({}^{2}\)/s and \(\alpha=2\), computed from the zero age main sequence until the end of the core-helium burning phase. The different evolutionary phases are indicated with different colours and labels.
increases the rotation contrast between core and surface and thus increases the value of the diffusion coefficient, slowing down the core progressively. This is even more pronounced if one chooses higher values of \(\alpha\), which lead to lower core rotation rates towards the RGB tip. Once the models reach the tip, the helium is ignited off-center in degenerate conditions, leading to the helium flash, which produces enough energy to expand the central layers and thus allows the core to expand and slow down due to its increased moment of inertia. This is why the core rotation rate decreases abruptly at \(\log g\sim 0.5-0.2\) for the three models presented in Fig. 4. This behaviour is generic to any rotating model and does not depend on the physical process adopted, because the timescale in this phase is too short (\(\sim 2-3\) Myr) to allow for AM transport. Afterwards, the star contracts, increasing its surface gravity, and goes through a series of helium sub-flashes, which are seen as small loops around \(\log g\sim 2-2.5\), until the core becomes non-degenerate and the stable core-helium burning phase can begin. As mentioned before, once the star settles into the stable core-helium burning phase, it spends \(\sim 85\) % of its core-helium burning time with core rotation rates \(\Omega_{\rm core}/2\pi\sim 40-200\) nHz, in agreement with asteroseismic constraints. Towards the end of the core-helium burning phase, the core shifts to He- and H-shell burning and the subsequent CO core contracts while the envelope expands, leading to a spin-up of the core and a decrease in surface gravity, which occurs at \(\log g\sim 2.5\) and \(\Omega_{\rm c}/2\pi\sim 200\) nHz.
In these models, the core is found to spin up during the core-helium burning phase. This behaviour seems a priori to be counter-intuitive, since under local conservation arguments the core should spin at a rather constant rate, since its size (in radius) does not change enough to alter significantly its moment of inertia. Furthermore, if the core rotates faster than the surface and we assume that the angular velocity is distributed smoothly, then the diffusion of AM from the core to the outer layers should spin down the core. However, this counter-intuitive behaviour occurs because of the particularly low core rotation rate at the RGB tip and the dynamics of the He-flash. It can be explained by analysing the rotation profile, which in turn is essentially explained by the contraction and expansion (hence increase or decrease of the density) of the layers, since the timescales are too short for AM transport to take place\({}^{2}\).
Footnote 2: See Iben (2013) for a detailed discussion
The evolution of the rotation profile during the first He-flash is shown in Fig. 5. During the first He-flash, the peak of nuclear energy is located at the base of the convective He-shell, around \(M_{\rm r}/M\sim 0.12\) in Fig. 5, and most of the energy released produces an expansion of the inner layers, but causes the layers at the base of this convective shell to spin down faster than the surrounding layers. This explains why, in Fig. 5, the model at roughly the middle of the He-flash exhibits a rotation profile with a dip close to the base of the helium convective shell, where the uniform rotation rate is due to convection (marked by dotted lines) and the rotation rate is higher in the neighbouring hydrogen-rich layers located at \(M_{\rm r}/M\sim 0.32\) (\(M_{\rm r}\sim 0.465M_{\odot}\)). As the core expands and the regions above move outwards, the hydrogen-burning shell cools down and its nuclear energy generation rate decreases, forcing the star to contract and to release gravitational energy, decreasing the luminosity on its way to the horizontal branch. This contraction occurs outside the He-convective shell and across the whole convective envelope, which leads to faster rotating layers near the hydrogen-rich layers and to the rotation profile shown by the black line in Fig. 5. At this stage, the star has already contracted, which can be deduced from its much higher surface rotation rate. During the subsequent helium sub-flashes, a similar scenario occurs regarding the rotation rate of the layers above the convective helium shell. These layers keep rotating faster than the core until the beginning of the core-helium burning phase. Given the high degree of shear in those rapidly rotating layers, AM is easily diffused to the core by e.g. the shear instability or even viscous effects, which spins it up during the long and stable core-helium burning phase. This is the reason why the core rotation rate increases during the core-helium burning phase in these models.
### Intermediate-mass core helium burning stars
We also follow the evolution of our intermediate-mass models of \(2.5M_{\odot}\) until the end of the core-helium burning phase with different prescriptions for the transport of AM (see Fig. 6). Most of the stable core-helium burning phase (\(0.1\lesssim Y\lesssim 0.9\)) occurs once the star already descended from the RGB tip (located at \(\log g\sim 1.8\)) and starts expanding as it becomes more luminous. This spans a range of surface gravities \(\log g\sim 2.9-2.6\) in our models. In all three models, the core spins down during this phase.
In Fig. 6, we compare our different models with the data from secondary-clump stars from Deheuvels et al. (2015) and Tayar et al. (2019). The red line model corresponds to our \(2.5M_{\odot}\) model presented in Fig. 3, computed with Eq. 3 using \(\alpha=2\) and \(D_{0}=50\) cm\({}^{2}\)/s and extended until the end of the core-helium burning phase. It does not reproduce the core rotation rate of RGB stars, and we show here that it does not reproduce the constraints for stars in the core-helium burning phase either. If we re-calibrate the constant to \(D_{0}=2000\) cm\({}^{2}\)/s (without changing the power \(\alpha\)) to reproduce the RGB stars, then we can automatically reproduce the rotation rate of secondary-clump stars; this is shown by the green-line model. As mentioned in Sect. 3.1, the core rotation of our most massive models is not in agreement with the data of RGB stars. Regarding this point, we note that the molecular kinematic viscosity is higher in general for more massive stars, mainly because of higher internal temperatures, and increases monotonically through evolution on the RGB (see Fig.
Figure 5: Angular velocity as a function of the mass coordinate at different moments during the first helium flash of the \(1.5M_{\odot}\) model presented in Fig. 4. Radiative regions are shown by solid lines while dotted lines show convective regions. The inset around \(M_{\rm r}/M\sim 0.32\) shows a zoom into the location of the H-burning shell.
1). To test whether the AM transport efficiency including an enhancement by the molecular viscosity can bring the models into agreement with the data for the massive stars in our sample, we computed models following Eq. 6. When we include this additional factor and consider \(\alpha=2\) and \(D_{1}=50\), the core rotation rate of the \(2.5M_{\odot}\) model close to the RGB base is \(\Omega_{\rm core}/2\pi\sim 600\) nHz, in agreement with the data of RGB stars, but decreases too steeply with evolution, in contradiction with the apparently flat trend (see blue line in Fig. 6). Interestingly, once we take this enhancement into account, the core rotation rate of secondary clump stars can be reproduced. We find that a good fit, able to reproduce both the apparent flat trend in core rotation rate of RGB stars and the core rotation rate of secondary clump stars, can be obtained if one includes the enhancement by the molecular viscosity as in Eq. 6 with \(\alpha=1\) and \(D_{1}=300\), shown by the black-line model in Fig. 6. This provides evidence that if the AM transport efficiency is regulated by the core-envelope coupling, once the core rotation rate of RGB stars is reproduced then secondary clump stars can also be well reproduced. A value of \(\alpha=1\) is different from the values of \(\alpha\simeq 2-3\) previously suggested by Spada et al. (2016). This could point to a mass dependence of the efficiency of AM transport in this parameterisation; the efficiency of AM transport may have a weaker dependence on the degree of differential rotation for higher-mass stars.
### Comparison with additional viscosities
In previous works, we estimated the efficiency of the additional AM transport process needed to reproduce the core rotation rate of red giants via a parametric approach (Moyano et al., 2022), in particular to reproduce the apparently flat trend of core rotation rates of RGB stars. In addition to the direct comparison between predicted and observed core rotation rates presented in Fig. 3, we can then also compare the values of the diffusion coefficients predicted by Eq. 3 with the mean values along this phase determined from our previous estimates, to compare its behaviour with mass and evolution. Since the diffusion coefficient (Eq. 3) employed in the present models changes through the evolution, while the mean additional viscosities presented by Moyano et al. (2022) are defined as being constant, a direct comparison between both quantities would not be accurate, since the former are instantaneous values at a given time while the latter represent mean values through the evolution until a given constraint is satisfied. To reconcile these two aspects and make a fairer comparison between both approaches, we take the time average value of the diffusion coefficient \(D_{\rm add}\) as follows
\[\bar{D}(t)=\frac{1}{t-t_{\rm TAMS}}\int_{t_{\rm TAMS}}^{t}D(t^{\prime})dt^{\prime} \tag{8}\]
where \(t\) is the current age and \(t_{\rm TAMS}\) the age at the terminal age main sequence (TAMS). This approach can give us an estimate of the additional viscosity that would have been needed through the evolution until a certain time \(t\) to obtain the core rotation rate of a given model. The comparison is shown in Fig. 7 for the models in the whole mass range studied, as a function of the mixed mode density. The mixed mode density, defined as \(\mathcal{N}=\Delta\nu/(\Delta\Pi_{1}\nu_{\rm max}^{2})\) with \(\Delta\nu\) the large frequency separation, \(\nu_{\rm max}\) the frequency of the maximum oscillation signal, and \(\Delta\Pi_{1}\) the asymptotic period spacing of dipole modes, was proposed as an asteroseismic indicator of the evolution through the lower RGB with the advantage of showing no strong correlation with the stellar mass (Gehan et al., 2018), allowing us to study the evolutionary trends for different masses. The absolute values are within the ranges expected, where the disagreements for the 1.1 and 1.3 \(M_{\odot}\) models may arise because in Fig. 3 the models with masses \(M=1.1-1.3M_{\odot}\) evolve at core rotation rates slightly higher than \(\Omega_{\rm core}/2\pi\sim 700\) nHz, whereas Moyano et al. (2022) estimated the additional viscosity needed for the models to achieve that core rotation rate through the range of mixed mode densities presented. Interestingly, the trends with mass are roughly satisfied with the parametrisation given by Eq. 3; i.e. the efficiency increases roughly two orders of magnitude in this mass range for early RGB stars.
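Both quantities are simple to evaluate on a model track; a minimal sketch, assuming arrays for the age and the instantaneous diffusion coefficient along the track (the function names are ours):

```python
# Sketch: running time average of D_add from the TAMS (Eq. 8) and the mixed
# mode density; t and D are assumed track arrays in consistent units.
import numpy as np

def D_bar(t, D, t_tams):
    """Eq. 8 evaluated at every age of the track (NaN exactly at the TAMS)."""
    mask = t >= t_tams
    tt, DD = t[mask], D[mask]
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (DD[1:] + DD[:-1]) * np.diff(tt))))
    with np.errstate(invalid="ignore", divide="ignore"):
        return cum / (tt - t_tams)

def mixed_mode_density(Dnu, DPi1, nu_max):
    """N = Delta_nu / (Delta_Pi1 * nu_max^2), with consistent units."""
    return Dnu / (DPi1 * nu_max**2)
```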
Moyano et al. (2022) also studied how the efficiency should increase as evolution proceeds to keep the core rotating at the expected rate (\(\Omega_{\rm core}/2\pi\sim 700\) nHz) during the early RGB and found that the additional viscosity for low-mass stars should increase by roughly a factor of two. Here we find that the behaviour of the normalized values is similar for the low-mass stars, i.e. the time-averaged diffusion coefficient increases by roughly a factor of two in the early RGB in the range of mixed mode densities \(\mathcal{N}\sim 4-14\). However, in the case of more massive stars, the increase is much higher than expected, which can be directly understood from Fig. 3, where the values of the core rotation rate of the massive models (\(M=2.0,2.5M_{\odot}\)) are found to decrease during the RGB, in contradiction with the data. To take the time average of the additional diffusion coefficients we choose the TAMS as a starting point, since the evolution of the core rotation is dominated by what happens during the subgiant phase (see e.g. Eggenberger et al. 2017, 2019). However, if one enforces solid body rotation during the main sequence (as was done by Spada et al. 2016), the time-averaged diffusion coefficients would have lower values, because the core rotation rate in this case reaches lower values at the TAMS and thus remains lower during the subgiant and early RGB until it converges during the RGB. This effect on the core rotation rate can be seen by comparing the black and red lines in Fig. 4, where we enforced solid-body rotation (black line) until the location of the second
Figure 6: Core rotation rate as a function of the surface gravity for \(2.5M_{\odot}\) models until the end of the core-helium burning phase. The models are computed with Eq. 3 with \(\alpha=2\) and \(D_{0}=50\) and \(2000\) cm\({}^{2}\)/s for the red and green lines, respectively. The blue and black-line models are computed with Eq. 6 (including the enhancement by the molecular viscosity) using \(\alpha=2\), \(D_{1}=50\) and \(\alpha=1\), \(D_{1}=300\), respectively. The grey triangles correspond to red giants in the lower red giant branch (Gehan et al., 2018) and the black circles and orange squares to intermediate-mass core-helium burning stars in the secondary clump (Deheuvels et al., 2015; Tayar et al., 2019).
subgiant of the sample of Deheuvels et al. (2020) in the \(1.3M_{\odot}\) model. In this case, once the models reach the RGB bump (seen at \(\log g\sim 2.5\)), both models converge to the same core rotation rate and evolve identically, erasing any previous rotational history. However, the behaviour of the normalized values remains similar: the time-averaged diffusion coefficient increases by roughly a factor of two. We also note that considering a constant diffusion coefficient of \(D=2\times 10^{4}\) cm\({}^{2}\)/s from the TAMS until the location of the second subgiant would make our average diffusion coefficient slightly larger, and it would then be in agreement with the values inferred for red giants. This is shown by the red-dashed line in Fig. 7. This diffusion coefficient used until the location of the second subgiant predicts a slightly decoupled envelope, with a contrast of core-to-surface rotation rate of \(\Omega_{\rm c}/\Omega_{\rm s}\simeq 1.4\).
## 4 Discussion
Considering only hydrodynamical processes in stellar evolution computations, the rotation contrast between the core and the surface increases with evolution as a result of the surface expansion and the contraction of the core. Assuming that the rotation profile is smooth (i.e. it does not have sharp discontinuities), the core-envelope coupling is a global measure of the shear in the radiative regions, where we expect instabilities to redistribute AM. In previous works (Eggenberger et al., 2017; Moyano et al., 2022), we showed that the efficiency of the AM transport should increase both with mass and evolution along the RGB to satisfy the constraints from asteroseismology. Since the properties mentioned above go in this direction, we revisited the computations presented by Spada et al. (2016), which they argue could be related with the AMRI.
Spada et al. (2016) showed that employing a diffusion coefficient that scales with the inverse of the core-envelope coupling (see Eq. 3) and enforcing rigid rotation until approximately the middle of the subgiant phase would reproduce simultaneously the core rotation rate of subgiants and red giants (see the upper panel of their Fig. 4). Adopting the same approach and following the evolution until the end of the core-helium burning, we find that all three evolutionary phases can be well reproduced. Moreover, recent results by Deheuvels et al. (2020) show that two young subgiants close to the TAMS are rotating roughly as solid bodies, consistent with the above-mentioned hypothesis. Regarding the main sequence, there is some evidence from \(\gamma\) Dor stars that the core is strongly coupled to the envelope, with a core that spins no more than \(\sim 5\) % faster than its surface (Li et al., 2020; Saio et al., 2021), while models with only hydrodynamical processes attain a contrast of \(\Omega_{\rm core}/\Omega_{\rm surf}\sim 2\) by the middle of the MS. Also, recent theoretical works on AM transport in \(\gamma\) Dor stars, combining AM constraints on both surface and core, favor highly efficient transport of AM during the MS, such that solid-body rotation in radiative zones is a good approximation (Moyano et al., in prep.). This group of stars represents the MS counterpart of the subgiants and red giants studied in this work since their mass range is \(M\sim 1.4-2.0M_{\odot}\). However, for our low-mass models, the core rotation rate from the RGB onwards is roughly independent of the rotational history during the main sequence.
We note that the diffusion coefficient explored in this work cannot couple the core efficiently with the envelope during the main sequence, so that a more efficient process, possibly scaling with other properties of the star (as for instance transport by the Tayler instability, see Eggenberger et al., 2022), would be needed during the MS and early subgiant branch. Concerning later evolutionary phases, and in particular evolution along the asymptotic giant branch, we found that the envelope expands and leads to very low surface rotation rates, which in turn leads to a very high rotation contrast between core and surface. Because of this, the diffusion coefficient we adopted reaches very high values (\(\log D_{\rm add}\sim 8-10\) cm\({}^{2}\)/s), which slows down the core to periods in strong disagreement with those of white dwarfs. Specifically, our models with initial masses of \(1.3\), \(1.5\) and \(1.7\)\(M_{\odot}\) presented in Fig. 4 reach core rotation periods of \(P_{\rm core}\sim 30-300\) days, while those of observed white dwarfs in the corresponding mass range (\(M\sim 0.5-0.6M_{\odot}\)) are of \(P_{\rm core}\sim 10-100\) hours (Hermes et al., 2017). This, along with the whole evolutionary path towards the white dwarf phase, is shown in Fig. 8.
Figure 8: Core rotation rate as a function of the surface gravity for a \(1.3M_{\odot}\) model with an initial period of \(P=10\) days. Angular momentum transport is driven by meridional circulation, shear instabilities, and the parametric diffusion coefficient of the AMRI (Eq. 3). Evolutionary phases are indicated in the figure. The data points correspond to the rotation periods of white dwarfs (Hermes et al., 2017).
Figure 7: Time-averaged diffusion coefficient as a function of the mixed mode density for different masses computed with Eq. 3 using \(\alpha=2\) and \(D_{0}=50\) cm\({}^{2}\)/s. The points correspond to the additional viscosities estimated by Moyano et al. (2022) to reproduce the core-rotation rate of RGB stars at different mixed-mode densities. The red-dashed line corresponds to a model where a constant value of \(D=2\times 10^{4}\) cm\({}^{2}\)/s was used until \(\log g\simeq 3.9\).
For clarity, we show only the evolutionary track of a single mass and we do not show the data of subgiants or red giants. The evolution for our two models of 1.5 and 1.7 \(M_{\odot}\) is similar. The evolution until the end of the core-helium burning phase corresponds to the same \(1.3M_{\odot}\) model presented in Fig. 4. The later evolution corresponds to the early asymptotic giant branch (AGB), followed by the thermal pulses. The small hook seen at \(\log g\simeq 2\) corresponds to the AGB bump, which is associated with the ignition of the He-shell. Afterwards the progressive expansion of the convective envelope leads to strong transport of AM, slowing down the core as the star goes through thermal pulses. These pulses are seen as small loops at \(\log g\simeq 0\) until the star finally leaves the AGB phase, contracts towards the white dwarf phase and enters the cooling branch. The track ends when the stellar luminosity drops below \(\log L/L_{\odot}=-3\).
Another relevant point is that, in the evolutionary scenario studied here, the core of low-mass stars spins up with evolution during the core-helium burning phase. This is contrary to what is obtained with AM transport by the Tayler instability (e.g. Fuller et al., 2019; Eggenberger et al., 2022) and offers an interesting way to distinguish between the scenario of AM transport by the parametric formulation of the AMRI presented here and AM transport by the Tayler instability. Indeed, a possible way to do this is to study the evolution of the core rotation rate as a function of the period spacing, since the asymptotic period spacing of g-modes in stellar models increases during \(\sim 80\%\) of the core-helium burning time: as the convective core grows, the layers contributing the most to the integral move out in radius, thus decreasing the value of the integral; i.e. it essentially traces the size of the convective core (see Fig. 3 of Bossini et al., 2015). The asymptotic period spacing of dipole g-modes can be computed as
\[\Delta\Pi_{1}=\frac{2\pi^{2}}{\sqrt{2}}\left(\int_{0}^{r_{\rm g}}{\rm N}_{\rm BV}\frac{{\rm d}r}{r}\right)^{-1} \tag{9}\]
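A minimal numerical sketch of Eq. 9, assuming arrays for the Brunt-Väisälä frequency (in rad/s) and the radius over the g-mode cavity:

```python
# Sketch: asymptotic dipole period spacing (Eq. 9), in seconds.
import numpy as np

def delta_pi1(r, n_bv):
    """2 pi^2 / sqrt(2) divided by the integral of N_BV dr / r."""
    return 2.0 * np.pi**2 / np.sqrt(2.0) / np.trapz(n_bv / r, r)
```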
The evolution of the core rotation rate as a function of \(\Delta\Pi_{1}\) is shown in Fig. 9 for models with masses \(M=1.3,1.5\) and \(1.7M_{\odot}\) during the core-helium burning phase. The data points correspond to red-clump stars analyzed by Mosser et al. (2012). The noisy behaviour seen at period spacings higher than \(\Delta\Pi_{1}\gtrsim 260\) s occurs because of the oscillatory nature of the mass of the convective core during this phase. This occurs because, in our models, during the core-helium burning phase the convective core grows until the radiative gradient starts developing a minimum inside and close to the edge of the convective core, which becomes lower than the adiabatic gradient and splits the convective region into two. Afterwards, the convective core grows until it can merge with the convective shell and the core enlarges again, repeating the cycle until the helium is depleted (see Salaris & Cassisi, 2005). With a large enough sample of stars in this phase, one could then probe the trend predicted by this scenario.
We do a first comparison with the data set of Mosser et al. (2012). Although our models can reproduce roughly the range of asymptotic spacings and core rotation rates expected, the trends in the data are not strong enough to either discard or support the scenario studied (see Fig. 9). However, we do note that there is an accumulation of stars at high period spacings and low core-rotation rates, which would hint at a rather constant to decreasing core rotation rate during the core-helium burning phase. This is further supported by the lack of fast rotators at high period spacings, which are rather seen at low period spacings, but would probably correspond to stars that have just finished burning helium in their cores and are thus climbing the asymptotic giant branch.
We also note that our standard models with an exponential overshooting extended over 0.1 \(H_{\rm p}\) (red, green and blue line models in Fig. 9) cannot reproduce the highest period spacings at \(\Delta\Pi_{1}\gtrsim 315\) s. In this phase, the value of \(\Delta\Pi_{1}\) is proportional to the size of the convective core (Montalban et al., 2013). Thus a possible way to reproduce the highest \(\Delta\Pi_{1}\) values is to increase the degree of overshooting. We do this by adopting a step overshooting scheme in the convective penetration approach, that is, the core is extended over a distance dictated by the free parameter \(\alpha_{\rm OV}\) and the region is assumed to be adiabatic (i.e. \(\nabla=\nabla_{\rm AD}\)). To reproduce the highest \(\Delta\Pi_{1}\) we needed to extend the core overshooting to \(\alpha_{\rm OV}=1.0H_{\rm p}\), which is a rather high value (for comparison, on the main sequence \(\alpha_{\rm OV}\simeq 0.1-0.2H_{\rm p}\), see Claret & Torres 2016, 2018). The result is shown as the magenta line in Fig. 9 for our \(1.3M_{\odot}\) model and is similar for different masses. This test shows that in our model of AM transport the discrepancy seen with the period spacing is not a product of an incorrect treatment of the overshooting during the core-helium burning phase. We also note that a proper treatment of convective boundary mixing to reproduce the asymptotic period spacing remains challenging (Bossini et al., 2015), but is out of the main scope of this work. Regarding the empirical data, it is still difficult to detect gravity-dominated mixed modes of stars in the upper RGB at high luminosities, since the non-radial modes trapped in the core are expected to vanish due to the strong radiative damping (Dupret et al., 2009; Grosjean et al., 2014), and hence the core rotation rate in this phase remains unknown. It is then still possible to propose a scenario where the core actually spins down during the advanced RGB since it does not pose any contradictions.
The effect of the prescription studied in this work on the efficiency of the chemical mixing remains to be explored, since in some cases, such as media with low \(Pm\) and \(Rm\) numbers, the kinetic energy is non-negligible compared to the magnetic one (Gellert et al., 2016). We finally note that the role of chemical gradients in reducing the efficiency of AM transport by the AMRI remains to be explored. This is of course a key point recalling the
Figure 9: Evolution of the core rotation rate during the core-helium burning phase, as a function of the asymptotic period spacing for different initial masses. The models correspond to those presented in Fig. 4. The solid lines show the region where the central helium decreases from \(Y=0.9\) to \(Y=0.1\). The evolution proceeds from left to right during most of the core-helium burning phase (solid lines). The data points correspond to the red-clump stars presented by Mosser et al. (2012). The magenta line shows a model where we assumed strong penetrative convection.
strong inhibiting effects of chemical gradients in the case of the standard magneto-rotational instability (see for instance Wheeler et al. 2015; Griffiths et al. 2022) and AM transport by the Tayler instability. Moreover, preliminary detailed asteroseismic studies of the internal rotation of red giants suggest a possible link between the location of chemical and rotation gradients (Di Mauro et al. 2018; Fellay et al. 2021). In this context, we found that the standard MRI does not have any impact on the core rotation rate of low-mass evolved stars, because both the thermal and chemical stratification strongly inhibit the instability, especially in regions close to the core where chemical gradients can develop (see App. A). Since this is the region from which layers tend to contract, the core does not slow down due to this instability.
## 5 Conclusions
We build upon the work of Spada et al. (2016) by exploring the possible effects of a parametric formulation for the azimuthal magneto-rotational instability on the internal redistribution of AM for stars in different evolutionary phases. We demonstrate that an AM transport process whose efficiency scales with the global degree of internal shear in radiative regions can reproduce simultaneously the evolutionary trends in core rotation rate of low-mass stars in the H-shell burning phase and in the core-helium burning phase as given by asteroseismic constraints (e.g. Mosser et al. 2012; Gehan et al. 2018). Remarkably, in our models the core spins up during the core-helium burning phase, which is particular to this kind of parametrisation, where the cores spin down considerably in the upper part of the RGB; this is nonetheless a possible scenario that remains to be confirmed.
Using the same parametrisation for the AM transport in intermediate-mass stars, the efficiency needs to be enhanced with respect to low-mass stars in order to correctly reproduce the rotational constraints of red giants in the H-shell burning phase. In this direction, we explored a prescription that takes into account the enhancement of the AM transport by the molecular viscosity, considering its maximum value in the radiative regions, and found that in our intermediate-mass models the core rotation rate of both H-shell and core-helium burning stars can be better reproduced.
Since a process that acts more strongly as the core decouples from the envelope can possibly reproduce the core rotation rate of red giants in the hydrogen-shell and core-helium burning phases simultaneously, we suggest this process should be further explored with direct numerical simulations under different rotational configurations. In particular, the critical role of chemical gradients in the inhibition of the AMRI should be studied. If this instability is not strongly inhibited by chemical gradients, it could constitute a missing piece of the AM transport puzzle in both low- and intermediate-mass stars.
###### Acknowledgements.
FDM and PE have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 833925, project STAREX). FS is supported by the German space agency (Deutsches _Zentrum für Luft- und Raumfahrt_) under PLATO data grant 50OO1501. FDM thanks Georges Meynet, Sébastien Salmon, Gaël Buldgen, Thibaut Dumont and Adolfo Simaz-Bunzel for useful discussions at different stages of this work. We thank the anonymous referee for her/his constructive input, which improved the outline of this article.
|
2305.04367 | Natural observables and dynamical approximations in multifield
cosmological models | I give a geometric construction of certain first order natural dynamical
observables in multifield cosmological models with arbitrary target space
topology and discuss a system of related dynamical approximations and regimes
for such models. | Calin Iuliu Lazaroiu | 2023-05-07T20:02:26Z | http://arxiv.org/abs/2305.04367v1 | # Natural observables and dynamical approximations in multifield cosmological models
###### Abstract
I give a geometric construction of certain first order natural dynamical observables in multifield cosmological models with arbitrary target space topology and discuss a system of related dynamical approximations and regimes for such models.
cosmology; differential geometry; dynamical systems.
## 1 Introduction
Cosmological models with more than one scalar field ("inflaton") are natural and preferred in quantum theories of gravity [1] and have acquired increasing importance in recent years, being subject to renewed study from various perspectives [2, 3, 4, 5, 6, 7, 8, 9].
Despite their importance in connecting cosmology to supergravity and string theory, such models are poorly understood when compared to their one field counterparts - in particular because their dynamics can be very involved already before considering perturbations. Moreover, the scalar fields of such models can be valued in an arbitrary connected Riemannian manifold (called _scalar manifold_) which is generally non-compact. Hence a systematic study must consider scalar manifolds of arbitrary connected topology.
When the Hubble parameter is positive, the background dynamics of multifield models can be described mathematically [10] using a geometric dynamical system [11, 12] which is defined on the tangent bundle to the scalar manifold. The general behavior of this system can be extremely non-trivial already in the case of models with two scalar fields [13, 14, 15, 16, 17, 18].
In general, one cannot hope to solve multifield cosmological dynamics exactly or even to approach it numerically in a satisfactory manner when the number of scalar fields (i.e. the dimension of the scalar manifold) is large - and even less so when the topology of the scalar manifold is involved. Hence the best hope for practical progress is to develop approximation schemes which might allow one to make progress - such as the IR and UV expansions [10, 19]. To do this in general,
one needs a geometric description of the cosmological observables which control various approximations and regimes of physical interest while taking into account the topology of the scalar manifold. In this paper, I give such a description for a collection of natural cosmological observables and discuss some aspects of the approximation schemes which they suggest.
The paper is organized as follows. Section 2 recalls the description of general multifield cosmological models and gives a geometric treatment of their scalar and vector observables. Section 3 gives a careful geometric construction of certain first order observables (some of which are obtained by "on-shell" reduction of second order observables) for scalar manifolds of arbitrary topology, discusses some relations between them and describes various regions of interest which they define inside the tangent bundle of the scalar manifold. Section 4 discusses two dynamical approximations which are suggested by the first slow roll conditions. Section 5 discusses the conservative and dissipative approximations, which are controlled by the _conservative function_ of the model - one of the basic natural observables introduced in Section 3. It also gives an overview of other natural approximations and their relations to the conservative and dissipative regimes. Section 6 briefly describes the limits of large and small Planck mass, relating them to the conservative and IR approximations. Section 7 presents our conclusions and a few directions for further research.
**Notations and conventions.** All manifolds considered in the paper are smooth and paracompact. For any manifold \(\mathcal{M}\), we let \(\dot{T}\mathcal{M}\) denote the slit tangent bundle of \(\mathcal{M}\), defined as the complement of the image of the zero section in \(T\mathcal{M}\). We denote by \(\pi:T\mathcal{M}\to\mathcal{M}\) the bundle projection and by \(F:=F\mathcal{M}\stackrel{{\text{def.}}}{{=}}\pi^{*}(T\mathcal{M})\) the _Finsler bundle_[20] of \(\mathcal{M}\), which is a bundle over \(T\mathcal{M}\). A section of \(F\) defined over an open subset of \(T\mathcal{M}\) is called a _Finsler vector field_. We denote by \(\dot{F}\stackrel{{\text{def.}}}{{=}}\pi^{*}(\dot{T}\mathcal{M})\) the slit Finsler bundle of \(T\mathcal{M}\). A global section of \(F\) is the same as a map \(f:T\mathcal{M}\to T\mathcal{M}\) which satisfies \(f(u)\in T_{\pi(u)}\mathcal{M}\) for all \(u\in T\mathcal{M}\), while a global section of \(\dot{F}\) is a map of this type which also satisfies \(f(u)\neq 0\) for all \(u\).
## 2 Multifield cosmological models
In this section we recall the geometric description of multifield cosmological models with arbitrary target space topology (in the absence of cosmological perturbations) and give a geometric treatment of their local observables. We also discuss some natural constructions afforded by the Finsler bundle of the scalar manifold of such models, which will be useful in later sections.
### Basics
Throughout this paper, a _multifield cosmological model_ is a classical cosmological model with \(d\) scalar fields (where \(d\in\mathbb{Z}_{>0}\) is a fixed positive integer) which is derived
from the following action on a spacetime with topology \(\mathbb{R}^{4}\):
\[S[g,\varphi]=\int\mathrm{vol}_{g}\mathcal{L}[g,\varphi]\ \, \tag{1}\]
where:
\[\mathcal{L}[g,\varphi]=\frac{M^{2}}{2}\mathrm{R}(g)-\frac{1}{2}\mathrm{Tr}_{g} \varphi^{*}(\mathcal{G})-\Phi\circ\varphi\ . \tag{2}\]
Here \(M\) is the reduced Planck mass, \(g\) is the spacetime metric on \(\mathbb{R}^{4}\) (taken to be of "mostly plus" signature) while \(\mathrm{vol}_{g}\) and \(\mathrm{R}(g)\) are the volume form and Ricci scalar of \(g\). The scalar fields are described by a smooth map \(\varphi:\mathbb{R}^{4}\to\mathcal{M}\), where \(\mathcal{M}\) is a (generally non-compact) connected, smooth and paracompact manifold of dimension \(d\) which is endowed with a smooth Riemannian metric \(\mathcal{G}\), while \(\Phi:\mathcal{M}\to\mathbb{R}\) is a smooth function which plays the role of potential for the scalar fields. We require that \(\mathcal{G}\) is complete to ensure conservation of energy. For simplicity, we also assume that \(\Phi\) is strictly positive on \(\mathcal{M}\). The ordered system \((\mathcal{M},\mathcal{G},\Phi)\) is called the _scalar triple_ of the model while the Riemannian manifold \((\mathcal{M},\mathcal{G})\) is called the _scalar manifold_. The model is parameterized by the quadruplet:
\[\mathfrak{M}=(M_{0},\mathcal{M},\mathcal{G},\Phi)\ \, \tag{3}\]
where:
\[M_{0}\stackrel{{\mathrm{def.}}}{{=}}M\sqrt{\frac{2}{3}}\]
is the _rescaled Planck mass_. We denote by:
\[\mathrm{Crit}\Phi\stackrel{{\mathrm{def.}}}{{=}}\{m\in \mathcal{M}\ |\ (\mathrm{d}\Phi)(m)=0\}\]
the critical set of \(\Phi\) and by:
\[\mathcal{M}_{0}\stackrel{{\mathrm{def.}}}{{=}}\mathcal{M}\setminus \mathrm{Crit}\Phi\]
the _noncritical set_ of \((\mathcal{M},\mathcal{G},\Phi)\). For any \(E>0\), we denote by:
\[\mathcal{M}(E)\stackrel{{\mathrm{def.}}}{{=}}\{m\in\mathcal{M} \ |\ \Phi(m)<E\} \tag{4}\]
the corresponding open subcritical level set of \(\Phi\) and set:
\[\mathcal{M}_{0}(E)\stackrel{{\mathrm{def.}}}{{=}}\mathcal{M}(E) \setminus\mathrm{Crit}\Phi\ . \tag{5}\]
The multifield cosmological model parameterized by the quadruplet \(\mathfrak{M}\) is obtained by assuming that \(g\) is an FLRW metric with flat spatial section:
\[\mathrm{d}s_{g}^{2}=-\mathrm{d}t^{2}+a(t)^{2}\sum_{i=1}^{3}\mathrm{d}x_{i}^{2} \tag{6}\]
(where \(a\) is a smooth and strictly positive function) and that \(\varphi\) depends only on the _cosmological time_\(t\stackrel{{\mathrm{def.}}}{{=}}x^{0}\). Define the _Hubble parameter_ through:
\[H(t)\stackrel{{\mathrm{def.}}}{{=}}\frac{\dot{a}(t)}{a(t)}\ \,\]
where the dot indicates derivation with respect to \(t\).
### The cosmological equation
When \(H>0\) (which we assume throughout this paper), the variational equations of (1) reduce to the _cosmological equation_:
\[\nabla_{t}\dot{\varphi}(t)+{\cal H}(\dot{\varphi}(t))\dot{\varphi}(t)+({\rm grad}_{\cal G}\Phi)(\varphi(t))=0 \tag{7}\]
together with the condition:
\[H(t)=H_{\varphi}(t)\stackrel{{\rm def.}}{{=}}\frac{1}{3}{\cal H} (\dot{\varphi}(t))\ \, \tag{8}\]
where \({\cal H}:T{\cal M}\to\mathbb{R}_{>0}\) is the _rescaled Hubble function_ of \(({\cal M},{\cal G},\Phi)\), which is defined through:
\[{\cal H}(u)\stackrel{{\rm def.}}{{=}}\frac{1}{M_{0}}\left[||u|| ^{2}+2\Phi(\pi(u))\right]^{1/2}\ \ \forall u\in T{\cal M}\ \.\]
Here \(\nabla_{t}\stackrel{{\rm def.}}{{=}}\nabla_{\dot{\varphi}(t)}\) and \(\pi:T{\cal M}\to{\cal M}\) is the bundle projection. The solutions \(\varphi:I\to{\cal M}\) of (7) (where \(I\) is a non-degenerate interval) are called _cosmological curves_, while their images in \({\cal M}\) are called _cosmological orbits_.
A cosmological curve \(\varphi:I\to{\cal M}\) need not be an immersion. Accordingly, we define the _singular and regular parameter sets_ of \(\varphi\) through:
\[I_{\rm sing}\stackrel{{\rm def.}}{{=}}\{t\in I\ |\ \dot{ \varphi}(t)=0\}\subset I \tag{9}\] \[I_{\rm reg}\stackrel{{\rm def.}}{{=}}I\setminus I_ {\rm sing}=\{t\in I\ |\ \dot{\varphi}(t)\neq 0\}\subset I\ . \tag{10}\]
It can be shown that \(I_{\rm sing}\) is an at most countable set. The sets of _critical and noncritical times_ of \(\varphi\) are defined through:
\[I_{\rm crit}\stackrel{{\rm def.}}{{=}}\{t\in I\ |\ \varphi(t) \in{\rm Crit}\Phi\}\] \[I_{\rm noncrit}\stackrel{{\rm def.}}{{=}}I \setminus I_{\rm crit}\stackrel{{\rm def.}}{{=}}\{t\in I\ |\ \varphi(t)\not\in{\rm Crit}\Phi\}\ . \tag{11}\]
The cosmological curve \(\varphi\) is called _noncritical_ if \(I_{\rm crit}=\emptyset\), i.e. if its orbit is contained in the non-critical set \({\cal M}_{0}\). It is easy to see that a cosmological curve \(\varphi:I\to{\cal M}\) is constant iff its orbit coincides with a critical point of \(\Phi\), which in turn happens iff there exists some \(t\in I_{\rm sing}\) such that \(\varphi(t)\in{\rm Crit}\Phi\). Hence for any non-constant cosmological curve we have:
\[I_{\rm sing}\cap I_{\rm crit}=\emptyset\ \.\]
Given a cosmological curve \(\varphi\), relation (8) determines \(a\) up to a multiplicative constant. The cosmological equation can be reduced to first order[20] by passing to the tangent bundle of \({\cal M}\). This interpretation arises[10] by viewing the second order time derivative \(\ddot{\varphi}(t)\) appearing in the expression:
\[\nabla_{t}\dot{\varphi}^{i}(t)=\ddot{\varphi}^{i}(t)+\Gamma^{i}_{jk}(\varphi(t))\dot{\varphi}^{j}(t)\dot{\varphi}^{k}(t)\]
as an element of the double tangent bundle \(TT{\cal M}\) and the opposite of the remaining terms of (7) as defining a vector field \(S\in{\cal X}(T{\cal M})\) (called the _cosmological semispray_ or _second order vector field_ of \(({\cal M},{\cal G},\Phi)\)) which is defined on \(T{\cal M}\). Then (7) is equivalent with the integral curve equation of \(S\). The flow of this vector field on the total space of \(T{\cal M}\) is called the _cosmological flow_ of the model.
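As an illustration of this first order reduction, the following sketch integrates the cosmological equation on \(T\mathcal{M}\) in the simplest possible situation, namely a flat two-dimensional scalar manifold (so all Christoffel symbols vanish) with an assumed quadratic potential; neither choice is part of the general setup. It also recovers \(\ln a\) from (8) up to an additive constant.

```python
# Sketch: the cosmological flow as a first order system on T(M), assuming a
# flat 2d scalar manifold (G = identity, vanishing Christoffel symbols) and
# a hypothetical strictly positive quadratic potential.
import numpy as np
from scipy.integrate import solve_ivp

M0 = 1.0
Phi = lambda phi: 0.5 * np.dot(phi, phi) + 0.1
grad_Phi = lambda phi: phi

def H_resc(u):
    """Rescaled Hubble function: H(u) = [||v||^2 + 2 Phi(phi)]^(1/2) / M0."""
    phi, v = u[:2], u[2:]
    return np.sqrt(np.dot(v, v) + 2.0 * Phi(phi)) / M0

def semispray(t, u):
    """Integral curve equation of S: (phi, v) -> (v, -H(u) v - grad Phi(phi))."""
    phi, v = u[:2], u[2:]
    return np.concatenate([v, -H_resc(u) * v - grad_Phi(phi)])

sol = solve_ivp(semispray, (0.0, 20.0), np.array([1.0, 0.5, 0.0, 0.0]),
                dense_output=True, rtol=1e-8)

# By (8), H = H_resc / 3, which determines the scale factor up to a constant:
ts = np.linspace(0.0, 20.0, 2000)
Hs = np.array([H_resc(sol.sol(t)) for t in ts]) / 3.0
ln_a = np.concatenate(([0.0], np.cumsum(0.5 * (Hs[1:] + Hs[:-1]) * np.diff(ts))))
```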
### Cosmological observables
For any \(k\in\mathbb{Z}_{>0}\), let \(\pi^{k}:J^{k}(\mathcal{M})\to\mathcal{M}\) denote the bundle of \(k\)-order jets of curves in \(\mathcal{M}\).
An (off-shell) _scalar local cosmological observable_ of order \(k\geq 0\) is a map \(f:J^{k}(\mathcal{M})\to\mathbb{R}\). The observable is called _basic_ if it has order \(k=1\); in this case, \(f\) is a real-valued function defined on \(T\mathcal{M}\).
A simple example of basic cosmological observable is the norm function \(||\cdot||:T\mathcal{M}\to\mathbb{R}_{\geq 0}\) of \((\mathcal{M},\mathcal{G})\), which is continuous everywhere but smooth only on the slit tangent bundle \(\dot{T}\mathcal{M}\). Another example is the rescaled Hubble function \(\mathcal{H}:T\mathcal{M}\to\mathbb{R}_{>0}\).
Given a cosmological observable \(f\) of order \(k\) and a curve \(\varphi:I\to\mathcal{M}\), the _evaluation_ \(f_{\varphi}:I\to\mathbb{R}\) of \(f\) along \(\varphi\) is the function defined through:
\[f_{\varphi}\stackrel{{\rm def.}}{{=}}f\circ j^{k}(\varphi):I\to \mathbb{R}\ \,\]
where \(j^{k}(\varphi):I\to J^{k}(\mathcal{M})\) is the \(k\)-th jet prolongation of \(\varphi\).
Notice that a cosmological observable \(f\) is completely determined by its evaluation \(f_{\varphi}\) on _arbitrary_ curves \(\varphi\) in \(\mathcal{M}\). Accordingly, one can describe such an observable by giving its evaluation on an arbitrary curve rather than using the jet bundle description.
The cosmological equation defines a codimension \(d\) closed submanifold \(\mathfrak{S}\) (called the _cosmological shell_) of the total space of \(J^{2}(\mathcal{M})\), the second order jet bundle of curves of \(\mathcal{M}\) (which is a bundle of rank \(2d\) over \(\mathcal{M}\)). Notice that \(\mathfrak{S}\) is the image of a section \(\mathfrak{s}:J^{1}(\mathcal{M})\to J^{2}(\mathcal{M})\) of the natural surjection \(J^{2}(\mathcal{M})\to J^{1}(\mathcal{M})\) with local coordinate expression:
\[\mathfrak{s}^{i}=-\Gamma^{i}_{jk}\dot{\varphi}^{j}\dot{\varphi}^{k}-\frac{1}{ M_{0}}\left[\mathcal{G}_{kl}(\varphi)\dot{\varphi}^{k}\dot{\varphi}^{l}+2 \Phi(\varphi)\right]^{1/2}\dot{\varphi}^{i}-\mathcal{G}^{ij}(\varphi)(\partial _{j}\Phi)(\varphi)\ \.\]
In the jet bundle interpretation, the cosmological equation (7) reads:
\[\ddot{\varphi}^{i}=\mathfrak{s}^{i}(\varphi^{1},\dots,\varphi^{d},\dot{ \varphi}^{1},\dots,\dot{\varphi}^{d}) \tag{12}\]
and amounts to the defining equations of \(\mathfrak{S}\). We say that \(\mathfrak{s}\) is the _cosmological section_ of \(J^{2}(\mathcal{M})\to J^{1}(\mathcal{M})\). Formal differentiation of (12) with respect to time defines sections \(\mathfrak{s}^{(k)}:J^{k}(\mathcal{M})\to J^{k+1}(\mathcal{M})\) of the projections \(J^{k+1}(\mathcal{M})\to J^{k}(\mathcal{M})\) for all \(k\geq 1\), where \(\mathfrak{s}^{(1)}\stackrel{{\rm def.}}{{=}}\mathfrak{s}\). These determine the higher order prolongations of (12), whose iterative use allows us to express all formal higher time derivatives of \(\varphi\) in terms of \(\varphi\) and \(\dot{\varphi}\). In turn, this gives sections \(\mathfrak{s}_{k}:J^{1}(\mathcal{M})\to J^{k+1}(\mathcal{M})\) of the natural maps \(J^{k+1}(\mathcal{M})\to J^{1}(\mathcal{M})\) for all \(k\geq 1\).
The _dynamical reduction_ (or _on-shell reduction_) of a cosmological observable \(f\) of order \(k\geq 2\) is the basic cosmological observable \(f^{\rm red}:J^{1}(\mathcal{M})=T\mathcal{M}\to\mathbb{R}\) defined through:
\[f^{\rm red}\stackrel{{\rm def.}}{{=}}f\circ\mathfrak{s}_{k-1}\ \.\]
The following statement is immediate:
**Prop 2.1**.: For any cosmological curve \(\varphi\), we have:
\[f_{\varphi}=f_{\varphi}^{\rm red}=f^{\rm red}\circ\dot{\varphi}\ \.\]
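A minimal numerical illustration of dynamical reduction, for a hypothetical order two observable on a flat one-dimensional scalar manifold (so that Eq. (12) takes the simple form used below):

```python
# Sketch: on-shell reduction of a second order observable, assuming a flat
# one-dimensional scalar manifold and an assumed strictly positive potential.
import numpy as np

M0 = 1.0
Phi = lambda phi: 0.5 * phi**2 + 0.1
dPhi = lambda phi: phi

def s(phi, v):
    """Cosmological section (Eq. 12): the on-shell value of ddot(phi)."""
    return -np.sqrt(v**2 + 2.0 * Phi(phi)) / M0 * v - dPhi(phi)

# A hypothetical order two observable and its basic (order one) reduction:
f = lambda phi, v, a: v * a        # equals d(E_kin)/dt along any curve
f_red = lambda phi, v: f(phi, v, s(phi, v))
```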
One can also consider local observables of order \(k\) defined on non-empty open subsets \(U\subset J^{k}(\mathcal{M})\); the considerations above apply to this situation with very minor modifications. Finally, one can define _vector local cosmological observables_ of order \(k\in\mathbb{Z}_{>0}\) as local sections \(\mathbf{f}\) of the _\(k\)-th Finsler bundle_:
\[F^{k}\stackrel{{\rm def.}}{{=}}(\pi^{k})^{*}(T\mathcal{M})\to J ^{k}(\mathcal{M})\]
defined on a non-empty open set \(U\subset J^{k}(\mathcal{M})\) (notice that \(F^{1}=F\) is the ordinary Finsler bundle of \(\mathcal{M}\)). Such observables are called _basic_ if \(k=1\), in which case \(\mathbf{f}\in\Gamma(T\mathcal{M},F)\) can be viewed as a map from an open subset \(U\) of \(T\mathcal{M}\) to \(T\mathcal{M}\) which satisfies \(\mathbf{f}(u)\in T_{\pi(u)}\mathcal{M}\) for all \(u\in U\). The evaluation of a local vector observable \(\mathbf{f}\) of order \(k\) on a curve \(\varphi:I\rightarrow\mathcal{M}\) whose \(k\)-th prolongation is contained in \(U\) produces a section \(\mathbf{f}_{\varphi}\stackrel{{\rm def.}}{{=}}[j^{k}(\varphi)]^{*}(\mathbf{f})\in\Gamma(I,\varphi^{*}(T\mathcal{M}))\), i.e. a map which associates to any \(t\in I\) a vector \(\mathbf{f}_{\varphi}(t)\in T_{\varphi(t)}\mathcal{M}\). The dynamical reduction \(\mathbf{f}^{\rm red}\stackrel{{\rm def.}}{{=}}(\mathfrak{s}_{k-1})^{*}(\mathbf{f})\) is a local section of the Finsler bundle \(F\to T\mathcal{M}\) which satisfies:
\[\mathbf{f}_{\varphi}=(\dot{\varphi})^{*}(\mathbf{f}^{\rm red})\]
when \(\varphi\) is a cosmological curve. We will encounter such vector observables later on.
**The cosmological energy function and rescaled Hubble function.**
**Definition 2.4**.: The _cosmological energy function_ is the basic cosmological observable \(E:T\mathcal{M}\rightarrow\mathbb{R}_{>0}\) defined through:
\[E(u)\stackrel{{\rm def.}}{{=}}\frac{1}{2}||u||^{2}+\Phi(\pi(u)) \ \ \forall u\in T\mathcal{M}\ \.\]
The _cosmological kinetic energy function_\(E_{\rm kin}:T\mathcal{M}\rightarrow\mathbb{R}_{\geq 0}\) and the _cosmological potential energy function_\(E_{\rm pot}:T\mathcal{M}\rightarrow\mathbb{R}_{>0}\) are the basic cosmological observables defined through:
\[E_{\rm kin}(u)\stackrel{{\rm def.}}{{=}}\frac{1}{2}||u||^{2}\ \,\ \ E_{\rm pot}(u) \stackrel{{\rm def.}}{{=}}\Phi(\pi(u))\ \ \forall u\in T\mathcal{M}\ . \tag{13}\]
With these definitions, we have:
\[E=E_{\rm kin}+E_{\rm pot}\ \.\]
Notice that \(E_{\rm pot}\) coincides with the natural lift of \(\Phi\) to \(T\mathcal{M}\):
\[E_{\rm pot}=\pi^{*}(\Phi)=\Phi\circ\pi\ \.\]
Also notice the relation:
\[\mathcal{H}=\frac{1}{M_{0}}\sqrt{2E}\ \.\]
**Prop 2.2**.: The evaluation of the cosmological energy along any cosmological curve \(\varphi:I\to\mathcal{M}\) satisfies the _cosmological dissipation equation_:
\[\frac{\mathrm{d}E_{\varphi}(t)}{\mathrm{d}t}=-\frac{\sqrt{2E_{\varphi}(t)}}{M_{ 0}}||\dot{\varphi}(t)||^{2}\ . \tag{14}\]
**Proof.**: Follows immediately by using the cosmological equation (7).
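Explicitly, differentiating \(E_{\varphi}(t)=\frac{1}{2}||\dot{\varphi}(t)||^{2}+\Phi(\varphi(t))\) and using the cosmological equation (7) in the form \(\nabla_{t}\dot{\varphi}(t)=-\mathcal{H}_{\varphi}(t)\dot{\varphi}(t)-(\mathrm{grad}\Phi)(\varphi(t))\) gives:

\[\frac{\mathrm{d}E_{\varphi}(t)}{\mathrm{d}t}=\mathcal{G}(\dot{\varphi}(t),\nabla_{t}\dot{\varphi}(t))+(\mathrm{d}\Phi)(\varphi(t))(\dot{\varphi}(t))=-\mathcal{H}_{\varphi}(t)||\dot{\varphi}(t)||^{2}\ \,\]

which coincides with (14) upon using \(\mathcal{H}_{\varphi}(t)=\frac{1}{M_{0}}\sqrt{2E_{\varphi}(t)}\).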
Notice that the rescaled Hubble function \(\mathcal{H}:T\mathcal{M}\to\mathbb{R}_{>0}\) is a basic cosmological observable. The dissipation equation is equivalent with:
\[\frac{\mathrm{d}\mathcal{H}_{\varphi}(t)}{\mathrm{d}t}=-||\dot{\varphi}(t)||^ {2} \tag{15}\]
for any cosmological curve \(\varphi\). Integrating (15) from \(t_{1}\) to \(t_{2}\) along \(\varphi\) gives:
\[\mathcal{H}_{\varphi}(t_{1})-\mathcal{H}_{\varphi}(t_{2})=\int_{t_{1}}^{t_{2} }\mathrm{d}t||\dot{\varphi}(t)||^{2}\]
Since the Cauchy–Schwarz inequality bounds the integral in the right hand side from below by the squared length of \(\varphi|_{[t_{1},t_{2}]}\) divided by \(t_{2}-t_{1}\), and since this length is minimized by the shortest geodesic which connects the points \(\varphi(t_{1})\) and \(\varphi(t_{2})\) in \(\mathcal{M}\), we have:

\[\int_{t_{1}}^{t_{2}}\mathrm{d}t||\dot{\varphi}(t)||^{2}\geq\frac{\mathrm{d}(\varphi(t_{1}),\varphi(t_{2}))^{2}}{t_{2}-t_{1}}\ \,\]

where \(\mathrm{d}\) is the distance function of \((\mathcal{M},\mathcal{G})\). This gives the _Hubble inequality_:

\[\mathcal{H}_{\varphi}(t_{1})-\mathcal{H}_{\varphi}(t_{2})\geq\frac{\mathrm{d}(\varphi(t_{1}),\varphi(t_{2}))^{2}}{t_{2}-t_{1}}\ . \tag{16}\]
### Some operations determined by the Finsler bundle of \(\mathcal{M}\)
Recall that \(\pi:T\mathcal{M}\to\mathcal{M}\) denotes the bundle projection. The following two fiber bundles defined on \(T\mathcal{M}\):
\[F\stackrel{{\text{def.}}}{{=}}\pi^{*}(T\mathcal{M})\ \,\ \ \dot{F} \stackrel{{\text{def.}}}{{=}}\pi^{*}(\dot{T}\mathcal{M})\subset F\]
are called respectively the _Finsler_ and _slit Finsler bundle_ of \(T\mathcal{M}\) (see [20]). The \(\pi\)-pullback of \(\mathcal{G}\) (which we denote by the same letter) makes \(F\) into a Euclidean vector bundle, which contains \(\dot{F}\) as an open fiber sub-bundle. The sections of \(F\) defined over a non-empty open subset \(U\subset T\mathcal{M}\) are called _Finsler vector fields_. A Finsler vector field can be viewed as a map \(f:U\to T\mathcal{M}\) which satisfies the condition:
\[\pi\circ f=\pi|_{U}\ \,\]
i.e. which takes a vector tangent to \(\mathcal{M}\) at a point into a vector tangent to \(\mathcal{M}\) _at the same point_. The Finsler bundle allows us to give a global geometric description of various operations on tangent vectors.
**The normalization map.** The _normalization map_ of \((\mathcal{M},\mathcal{G})\) is the nowhere-vanishing Finsler vector field \(T\in\Gamma(\dot{T}\mathcal{M},\dot{F})\) defined on \(\dot{T}\mathcal{M}\) through:
\[T(u)\stackrel{{\text{def.}}}{{=}}\frac{u}{||u||}\in\dot{T} \mathcal{M}\ \ \forall u\in\dot{T}\mathcal{M}\ \.\]
Given a curve \(\varphi:I\rightarrow\mathcal{M}\) and \(t\in I_{\text{reg}}\), the unit tangent vector to \(\varphi\) at time \(t\) is given by:
\[T_{\varphi}(t)=T(\dot{\varphi}(t))\ \.\]
The map \(T_{\varphi}:I_{\text{reg}}\to T\mathcal{M}\) identifies with the pull-back of \(T\) through the restriction of \(\dot{\varphi}\) to \(I_{\text{reg}}\).
**The parallel and normal projection of vector fields.** For any vector field \(X\) defined on \(\mathcal{M}\), let \(X^{\parallel},X^{\perp}\in\Gamma(\dot{T}\mathcal{M},F)\) be the Finsler vector fields defined on \(\dot{T}\mathcal{M}\) through:
\[X^{\parallel}(u)\stackrel{{\text{def.}}}{{=}}\mathcal{G}(T(u),X (\pi(u)))T(u)\in T_{\pi(u)}\mathcal{M}\]
\[X^{\perp}(u)\stackrel{{\text{def.}}}{{=}}X(\pi(u))-X^{\parallel }(u)\in T_{\pi(u)}\mathcal{M}\ \,\]
for all \(u\in\dot{T}\mathcal{M}\). For any \(u\in\dot{T}\mathcal{M}\), we have:
\[\mathcal{G}(X^{\parallel}(u),X^{\perp}(u))=0\ \,\ \ X(\pi(u))=X^{\parallel}(u)+X^{\perp}(u)\ \.\]
The vectors \(X^{\parallel}(u)\) and \(X^{\perp}(u)\) are respectively the orthogonal projections of the vector \(X(\pi(u))\in T_{\pi(u)}\mathcal{M}\) on \(u\) and on the hyperplane \(\Pi(u)\subset T_{\pi(u)}\mathcal{M}\) which is orthogonal to \(u\) in the Euclidean vector space \((T_{\pi(u)}\mathcal{M},\mathcal{G}_{\pi(u)})\).
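In flat local coordinates these projections reduce to elementary linear algebra; the following minimal Python sketch (an illustration of ours, with hypothetical names, not part of the original development) implements \(X^{\parallel}(u)\) and \(X^{\perp}(u)\) for a metric given by a symmetric positive-definite matrix at the base point:

```python
import numpy as np

def finsler_projections(G, u, X):
    """Orthogonal split of X at the point pi(u) relative to u:
    X_par = G(T(u), X) T(u) and X_perp = X - X_par, where T(u) = u/||u||."""
    norm_u = np.sqrt(u @ G @ u)        # ||u|| in the metric G
    T = u / norm_u                     # normalization map T(u)
    X_par = (T @ G @ X) * T            # projection on the line spanned by u
    return X_par, X - X_par

# sanity check of G-orthogonality on sample data
G = np.array([[1.0, 0.2], [0.2, 4.0]])
X_par, X_perp = finsler_projections(G, np.array([1.0, 1.0]), np.array([0.5, -2.0]))
assert abs(X_par @ G @ X_perp) < 1e-12
```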
## 3 Natural basic observables
In this section, we give a geometric description of certain basic cosmological observables which play a special role in the study of multifield cosmological models. Some of these are defined directly on \(T\mathcal{M}\), while others are obtained as the on-shell reduction of second order observables. Since cosmological observables are determined by their evaluation on arbitrary curves (see Section 2), we will use that description instead of the jet bundle formulation.
### The first IR function of a scalar triple
A first natural basic observable is the _first IR function_ considered in [10], which plays a crucial role in the infrared dynamics of multifield cosmological models. This is closely related to the _first slow roll function_ (which is the on-shell reduction of a second order observable).
**Definition 3.1**.: The _first IR function_ of the scalar triple \((\mathcal{M},\mathcal{G},\Phi)\) is the basic cosmological observable \(\kappa:T\mathcal{M}\rightarrow\mathbb{R}_{\geq 0}\) defined through:
\[\kappa(u)\stackrel{{\text{def.}}}{{=}}\frac{E_{\text{kin}}(u)}{E_ {\text{pot}}(u)}=\frac{||u||^{2}}{2\Phi(\pi(u))}\ \ \forall u\in T\mathcal{M}\ . \tag{17}\]
Notice that \(\kappa\) is smooth on \(T{\cal M}\) (while \(||u||=\sqrt{2\Phi(\pi(u))\kappa(u)}\) is smooth only on the slit tangent bundle \(\dot{T}{\cal M}\)). Also notice that \(\kappa(u)\) depends only on \(\pi(u)\in{\cal M}\) and \(||u||\in\mathbb{R}_{\geq 0},\) i.e. for all \(m\in{\cal M}\) we have:
\[\kappa(u)=\kappa_{m}(||u||)\ \ \forall u\in T_{m}{\cal M}\ \,\]
where \(\kappa_{m}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) is the map defined through:
\[\kappa_{m}(x)\stackrel{{\rm def.}}{{=}}\frac{x^{2}}{2\Phi(m)}\ \ \forall x\in\mathbb{R}_{\geq 0}\ \.\]
**The first IR "parameter" of a cosmological curve.**
**Definition 3.2**.: The _first IR "parameter"_ of the smooth curve \(\varphi:I\to{\cal M}\) is the evaluation \(\kappa_{\varphi}:I\to\mathbb{R}_{\geq 0}\) of \(\kappa\) along \(\varphi\):
\[\kappa_{\varphi}(t)\stackrel{{\rm def.}}{{=}}\kappa(\dot{ \varphi}(t))=\frac{||\dot{\varphi}(t)||^{2}}{2\Phi(\varphi(t))}\ \.\]
Notice that \(\kappa_{\varphi}(t)\) vanishes when \(t\in I_{\rm sing},\) so \(\kappa_{\varphi}\) is _not_ a true parameter for \(\varphi\) when \(I_{\rm sing}\neq\emptyset.\) The first IR function allows us to express the norm of any tangent vector \(u\in T{\cal M}\) as:
\[||u||=\sqrt{2E_{\rm pot}(u)\kappa(u)}=\sqrt{2\Phi(\pi(u))\kappa(u)}\ . \tag{18}\]
For any \(u\in\dot{T}{\cal M},\) we have:
\[T(u)=\frac{1}{\sqrt{2\Phi(\pi(u))\kappa(u)}}\,u\ \.\]
Also notice the relation:
\[{\cal H}(u)=\frac{1}{M_{0}}\sqrt{2\Phi(\pi(u))}[1+\kappa(u)]^{1/2}\ . \tag{19}\]
**The first slow roll function of a scalar triple.** It is traditional in cosmology to use a basic observable which encodes the same information as \(\kappa\) but is more convenient for certain purposes. We give a geometric description of this below.
**Definition 3.3**.: The _rescaled first slow roll function_ of \(({\cal M},{\cal G},\Phi)\) is the basic cosmological observable \(\hat{\mathbf{\varepsilon}}:T{\cal M}\to\mathbb{R}_{\geq 0}\) defined through:
\[\hat{\mathbf{\varepsilon}}(u)\stackrel{{\rm def.}}{{=}} \frac{E_{\rm kin}(u)}{E(u)}=\frac{\kappa(u)}{1+\kappa(u)}=\frac{||u||^{2}}{||u ||^{2}+2\Phi(\pi(u))}\ \ \forall u\in T{\cal M}\ \,\]
while the _first slow roll function_ of \(({\cal M},{\cal G},\Phi)\) is the basic cosmological observable \(\mathbf{\varepsilon}:T{\cal M}\to\mathbb{R}_{\geq 0}\) given by:
\[\mathbf{\varepsilon}\stackrel{{\rm def.}}{{=}}3\hat{ \mathbf{\varepsilon}}\ \.\]
We have:
\[\hat{\mathbf{\varepsilon}}=f\circ\kappa\ \,\]
where \(f:\mathbb{R}_{\geq 0}\to[0,1)\) is the function defined through:
\[f(x)\stackrel{{\rm def.}}{{=}}\frac{x}{1+x}\ \.\]
This function has first derivative given by:
\[f^{\prime}(x)=\frac{1}{(1+x)^{2}}>0\]
and hence is strictly increasing. Notice the equivalence:
\[f(x)=A\Longleftrightarrow x=\frac{A}{1-A}\]
and the relation:
\[f(x)\approx x\ \ \mbox{for}\ x\ll 1\ \.\]
We have \(\hat{\mathbf{\varepsilon}}=0\) iff \(\kappa=0\) and \(\mathbf{\varepsilon}=1\) (i.e. \(\hat{\mathbf{\varepsilon}}=1/3\)) iff \(\kappa=1/2\). For \(x>0\), the inequality \(\kappa(u)<x\) is equivalent with \(\hat{\mathbf{\varepsilon}}(u)<\frac{x}{1+x}\).
**The first slow roll "parameter" of a cosmological curve.** The traditional notion used in the cosmology literature is as follows:
**Definition 3.4**.: The _first slow roll "parameter"_ of a curve \(\varphi:I\to\mathcal{M}\) is the function \(\mathbf{\varepsilon}_{\varphi}:I\to\mathbb{R}\) defined through:
\[\mathbf{\varepsilon}_{\varphi}(t)\stackrel{{\rm def.}}{{=}}-\frac{\dot{H}_{\varphi}(t)}{H_{\varphi}(t)^{2}}=-\frac{1}{H_{\varphi}(t)}\frac{\mathrm{d}}{\mathrm{d}t}\log H_{\varphi}(t)=3\hat{\mathbf{\varepsilon}}_{\varphi}(t)\ \,\]
where:
\[\hat{\mathbf{\varepsilon}}_{\varphi}(t)\stackrel{{\rm def.}}{{=}}- \frac{1}{\mathcal{H}_{\varphi}(t)}\frac{\mathrm{d}}{\mathrm{d}t}\log\mathcal{ H}_{\varphi}(t)\]
is the _rescaled first slow roll "parameter"_ of \(\varphi\).
**Prop 3.1**.: Suppose that \(\varphi:I\to\mathcal{M}\) is a cosmological curve. Then the (rescaled) first slow roll parameter of \(\varphi\) coincides with the on-shell reduction of the (rescaled) first slow roll function along \(\varphi\), i.e. we have:
\[\hat{\mathbf{\varepsilon}}_{\varphi}(t)=\hat{\mathbf{\varepsilon}}(\dot{\varphi}(t)) \ \ \mbox{and}\ \ \mathbf{\varepsilon}_{\varphi}(t)=\mathbf{\varepsilon}(\dot{\varphi}(t))\ \ \forall t\in I\ \.\]
Proof.: We have:
\[\frac{\mathrm{d}}{\mathrm{d}t}\log\mathcal{H}_{\varphi}(t)=\frac{\mathcal{G}( \dot{\varphi}(t),\nabla_{t}\dot{\varphi}(t))+(\mathrm{d}\Phi)(\varphi(t))(\dot {\varphi}(t))}{||\dot{\varphi}(t)||^{2}+2\Phi(\varphi(t))}=-\frac{\mathcal{H }_{\varphi}(t)||\dot{\varphi}(t)||^{2}}{||\dot{\varphi}(t)||^{2}+2\Phi(\varphi (t))}=-\frac{\mathcal{H}_{\varphi}(t)\kappa_{\varphi}(t)}{1+\kappa_{\varphi}( t)}\ \,\]
where in the second equality we used the cosmological equation and the relation:
\[\mathcal{G}(\dot{\varphi}(t),(\mathrm{grad}\Phi)(\varphi(t)))=(\mathrm{d} \Phi)(\varphi(t))(\dot{\varphi}(t))\ \.\]
This immediately implies the conclusion.
Since \(\mathcal{H}\) is a first order observable, the first slow roll parameter of \(\varphi\) is the evaluation along \(\varphi\) of a _second order_ observable (the "first slow roll observable"), whose on-shell reduction coincides with the first slow roll function by Proposition 3.1. Hence the first IR parameter encodes the same information as the on-shell reduction of the second order first slow roll observable.
Notice that \(\hat{\boldsymbol{\varepsilon}}_{\varphi}(t)\) vanishes when \(t\in I_{\text{sing}}\), so it is _not_ a true parameter for \(\varphi\) when \(I_{\text{sing}}\neq\emptyset\). The condition for inflation can be formalized as follows (see [15]).
**The inflation region.**
The inflation region of \((\mathcal{M},\mathcal{G},\Phi)\) is the subset of \(T\mathcal{M}\) defined through:
\[\mathcal{R}(\mathcal{M},\mathcal{G},\Phi)\stackrel{{\text{def.} }}{{=}}\left\{u\in T\mathcal{M}\,|\,||u||^{2}<\Phi(\pi(u))\right\}=\left\{u\in T \mathcal{M}\,|\,\kappa(u)<1/2\right\}=\left\{u\in T\mathcal{M}\,|\,\boldsymbol {\varepsilon}(u)<1\right\}\.\]
A cosmological curve \(\varphi:I\to\mathcal{M}\) is called _inflationary at time_\(t\in I\) if \(\dot{\varphi}(t)\in\mathcal{R}(\mathcal{M},\mathcal{G},\Phi)\), which amounts to the usual condition for inflation:
\[\boldsymbol{\varepsilon}_{\varphi}(t)<1\Longleftrightarrow\hat{\boldsymbol{ \varepsilon}}_{\varphi}(t)<1/3\Longleftrightarrow\kappa_{\varphi}(t)<1/2\ \.\]
The cosmological curve is called _inflationary on the non-empty interval_\(J\subset I\) if \(\varphi\) is inflationary for all \(t\in J\).
Notice that \(\mathcal{R}(\mathcal{M},\mathcal{G},\Phi)\) is an open tubular neighborhood of the zero section of \(T\mathcal{M}\).
For any cosmological curve \(\varphi:I\to\mathcal{M}\), the set of its inflationary times:
\[I_{\text{inf}}\stackrel{{\text{def.}}}{{=}}\dot{\varphi}^{-1}(\mathcal{R}(\mathcal{M},\mathcal{G},\Phi))\subset I\]
is a (possibly empty) relatively open subset of \(I\) and hence it is the intersection of \(I\) with an at most countable disjoint union of open intervals of the real axis.
**The first slow roll condition with parameter \(\boldsymbol{\epsilon}\).**
Let \(\epsilon\in(0,1]\). We say that a vector \(u\in T\mathcal{M}\) satisfies the _first slow roll condition_ with parameter \(\epsilon\) if:
\[\kappa(u)<\epsilon\ \,\ \ \text{i.e.}\ \ ||u||^{2}<2\Phi(\pi(u))\,\epsilon\ \.\]
The _first slow roll region_ of \((\mathcal{M},\mathcal{G},\Phi)\) at parameter \(\epsilon\) is the open subset of \(T\mathcal{M}\) defined through:
\[\mathcal{S}^{1}_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\stackrel{{ \text{def.}}}{{=}}\left\{u\in T\mathcal{M}\ |\ \kappa(u)<\epsilon\right\}=\left\{u\in T \mathcal{M}\ |\ ||u||<\sqrt{2\epsilon\Phi(\pi(u))}\right\}\subset T\mathcal{M}\ \.\]
Since \(\Phi>0\), the image of the zero section of \(T\mathcal{M}\) is contained in \(\mathcal{S}^{1}_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\) and hence we have:
\[\pi(\mathcal{S}^{1}_{\epsilon}(\mathcal{M},\mathcal{G},\Phi))=\mathcal{M}\ . \tag{20}\]
**Definition 3.7**.: We say that a cosmological curve \(\varphi:I\to\mathcal{M}\) satisfies the first slow roll condition with parameter \(\epsilon\in(0,1]\) at time \(t\in I\) if \(\dot{\varphi}(t)\in\mathcal{S}^{1}_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\).
Since \(\mathcal{S}^{1}_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\) is an open subset of \(T\mathcal{M}\) and the canonical lift \(\dot{\varphi}:I\to T\mathcal{M}\) of \(\varphi\) is continuous, the set:
\[I^{1}_{\epsilon}\stackrel{{\rm def.}}{{=}}\dot{\varphi}^{-1}( \mathcal{S}^{1}_{\epsilon}(\mathcal{M},\mathcal{G},\Phi))\subset I\]
is an open subset of \(I\). When non-empty, the set \(I^{1}_{\epsilon}\) is an at most countable union of relatively open subintervals of \(I\).
### The conservative function of a scalar triple
A second natural basic observable is the _conservative function_, which controls the _conservative approximation_ discussed in Section 5.2. This is defined as the inverse of the norm of a Finsler vector field which is naturally associated to every scalar triple.
**Definition 3.8**.: The _relative gradient field_ of \((\mathcal{M},\mathcal{G},\Phi)\) is the Finsler vector field \(\mathbf{q}\in\Gamma(\dot{T}\mathcal{M},F)\) defined on \(\dot{T}\mathcal{M}\) through:
\[\mathbf{q}(u)\stackrel{{\rm def.}}{{=}}\frac{({\rm grad}\Phi)( \pi(u))}{\mathcal{H}(u)||u||}=M_{0}\frac{({\rm grad}\Phi)(\pi(u))}{||u||[||u |^{2}+2\Phi(\pi(u))]^{1/2}}\enspace. \tag{21}\]
Notice that \(\mathbf{q}(u)=0\) iff \(\pi(u)\in{\rm Crit}\Phi\). The relative gradient field is a basic _vector_ local observable in the sense of Section 2.
**Definition 3.9**.: The _characteristic one-form_\(\Xi\in\Omega^{1}(\mathcal{M})\) of the model is defined through:
\[\Xi\stackrel{{\rm def.}}{{=}}\frac{M_{0}}{2\Phi}\mathrm{d}\Phi= \frac{M_{0}}{2}\mathrm{d}\log\Phi\enspace. \tag{22}\]
The pointwise norm \(||\Xi||\) vanishes exactly on the critical set \({\rm Crit}\Phi\). For any \(u\in\dot{T}\mathcal{M}_{0}\), we have:
\[\mathbf{q}(u)=\frac{||\Xi(\pi(u))||}{[\kappa(u)(1+\kappa(u))]^{1/2}}n_{\Phi}(\pi(u))\ \,\]
where \(n_{\Phi}\in\mathcal{X}(\mathcal{M}_{0})\) is the _normalized gradient field_ of \(\Phi\):
\[n_{\Phi}(m)\stackrel{{\rm def.}}{{=}}\frac{({\rm grad}\Phi)(m)} {||({\rm d}\Phi)(m)||}\enspace\forall m\in\mathcal{M}_{0}\enspace.\]
**Definition 3.10**.: The _conservative function_ of \((\mathcal{M},\mathcal{G},\Phi)\) is the function \(c:T\mathcal{M}_{0}\to\mathbb{R}_{\geq 0}\) defined through:
\[c(u)\stackrel{{\rm def.}}{{=}}\frac{1}{||\mathbf{q}(u)||}=\frac{1}{M_{0}}\frac{||u||\left[||u||^{2}+2\Phi(\pi(u))\right]^{1/2}}{||({\rm d}\Phi)(\pi(u))||}\enspace\forall u\in T\mathcal{M}_{0}\enspace, \tag{23}\]
i.e.:
\[c(u)=\frac{\left[\kappa(u)\big{(}1+\kappa(u)\big{)}\right]^{1/2}}{||\Xi(\pi(u) )||}\enspace\forall u\in T\mathcal{M}_{0}\enspace. \tag{24}\]
Thus:
\[{\bf q}(u)=\frac{1}{c(u)}n_{\Phi}(\pi(u))\ \ \forall u\in \dot{T}{\cal M}_{0}\ \.\]
Notice that \(c(u)\) depends only on \(m\stackrel{{\rm def.}}{{=}}\pi(u)\in{\cal M}_{0}\) and on \(||u||\), i.e. we have:
\[c(u)=c_{m}(||u||)\ \ \forall u\in T_{m}{\cal M}_{0}\ \,\]
where \(c_{m}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is the map defined through:
\[c_{m}(x)\stackrel{{\rm def.}}{{=}}\frac{1}{M_{0}}\frac{x\left[x^{2}+2\Phi(m)\right]^{1/2}}{||(\mathrm{d}\Phi)(m)||}\ \.\]
**Remark 3.1**.: Let \(\Lambda>0\). The function \(f:\mathbb{R}\rightarrow\mathbb{R}\) defined through \(f(x)=x(1+x)\) has a minimum at \(x=-1/2\) and is strictly increasing when \(x>-1/2\). Thus \(f(\mathbb{R}_{+})=[0,+\infty)\) and a non-negative \(x\) satisfies \(f(x)\leq\Lambda\) iff \(x\leq x_{+}\) where:
\[x_{+}=\frac{1}{2}[-1+\sqrt{1+4\Lambda}]\]
is the non-negative solution of the equation \(f(x)=\Lambda\).
The first IR function \(\kappa\) and the conservative function \(c\) determine each other through relation (24). Using Remark 3.1, this relation can be solved for \(\kappa(u)\) as:
\[\kappa(u)=\frac{1}{2}\left[-1+\sqrt{1+4c(u)^{2}||\Xi(\pi(u))||^{2}}\right]\ . \tag{25}\]
In particular, we have \(\kappa(u)\ll 1\) iff \(c(u)||\Xi(\pi(u))||\ll 1\). When this condition is satisfied, relation (25) becomes \(\kappa(u)\approx c(u)^{2}||\Xi(\pi(u))||^{2}\).
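The inversion (24) \(\leftrightarrow\) (25) is easy to verify numerically; the following is a minimal Python sketch of ours (the function names are illustrative assumptions, not taken from the text):

```python
import numpy as np

def c_from_kappa(kappa, xi_norm):
    # relation (24): c = [kappa(1+kappa)]^{1/2} / ||Xi||
    return np.sqrt(kappa * (1.0 + kappa)) / xi_norm

def kappa_from_c(c, xi_norm):
    # relation (25): kappa = (1/2)[-1 + (1 + 4 c^2 ||Xi||^2)^{1/2}]
    return 0.5 * (-1.0 + np.sqrt(1.0 + 4.0 * (c * xi_norm) ** 2))

rng = np.random.default_rng(0)
kappa = rng.uniform(0.01, 10.0, size=100)
xi = rng.uniform(0.1, 5.0, size=100)
assert np.allclose(kappa_from_c(c_from_kappa(kappa, xi), xi), kappa)
```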
The conservative "parameter" of a non-constant cosmological curve.The conservative "parameter" of a non-constant cosmological curve \(\varphi:I\rightarrow{\cal M}_{0}\) is the function \(c_{\varphi}:I\rightarrow{\cal R}_{\geq 0}\cup\{+\infty\}\) defined through:
\[c_{\varphi}(t)\stackrel{{\rm def.}}{{=}}\frac{{\cal H}(\dot{\varphi}(t))||\dot{\varphi}(t)||}{||(\mathrm{d}\Phi)(\varphi(t))||}=\frac{1}{M_{0}}\frac{||\dot{\varphi}(t)||[||\dot{\varphi}(t)||^{2}+2\Phi(\varphi(t))]^{1/2}}{||(\mathrm{d}\Phi)(\varphi(t))||}=c(\dot{\varphi}(t))\ \ \forall t\in I\ \,\]
which gives the evaluation of \(c\) along \(\varphi\).
Notice that \(c_{\varphi}\) is well-defined since the speed of a non-constant cosmological curve cannot vanish at a critical time. We have:
\[c_{\varphi}(t)=+\infty\Longleftrightarrow t\in I_{\rm crit}\ \.\]
For non-constant curves \(\varphi\), the cosmological equation (7) reads:
\[\nabla_{t}\dot{\varphi}(t)+({\rm grad}_{\cal G}\Phi)(\varphi(t))=-||(\mathrm{d}\Phi)(\varphi(t))||\,c_{\varphi}(t)\,T(\dot{\varphi}(t))\ \.\]
The _conservative approximation_ consists of neglecting the right hand side, which replaces (7) by the _conservative equation_:
\[\nabla_{t}\dot{\varphi}(t)+({\rm grad}_{\cal G}\Phi)(\varphi(t))=0\ \.\]
This approximation is discussed in Subsection 5.2. The conservative approximation is accurate in the _quasi-conservative regime_\(c_{\varphi}(t)\ll 1\). When \(c_{\varphi}(t)\) is large, one can neglect the friction term instead, which amounts to replacing (7) with the _dissipative equation_:
\[\nabla_{t}\dot{\varphi}(t)+\mathcal{H}_{\varphi}(t)\dot{\varphi}(t)=0\ \.\]
This defines the _dissipative approximation_ (studied in Subsection 5.3), which is accurate in the _strongly dissipative regime_\(c_{\varphi}(t)\gg 1\).
For any \(\epsilon\in(0,1]\), the _conservative region_\(C_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\) of \((\mathcal{M},\mathcal{G},\Phi)\) is the open tubular neighborhood of the zero section of \(T\mathcal{M}_{0}\) defined through:
\[C_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\stackrel{{\rm def.}}{ {=}}\{u\in T\mathcal{M}_{0}|c(u)<\epsilon\}\subset T\mathcal{M}_{0}\ \.\]
We say that a vector \(u\in T\mathcal{M}_{0}\) satisfies the _conservative condition_ with parameter \(\epsilon\) if \(u\in C_{\epsilon}\), i.e. if:
\[c(u)<\epsilon\ . \tag{26}\]
We say that a cosmological curve \(\varphi\) satisfies the conservative condition with parameter \(\epsilon\in(0,1]\) at time \(t\in I\) if \(\dot{\varphi}(t)\in C_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\).
By Remark 3.1, the conservative condition (26) is equivalent with:
\[\kappa(u)<\frac{1}{2}\left[-1+\sqrt{1+\epsilon^{2}M_{0}^{2}\frac{||(\mathrm{d}\Phi)(\pi(u))||^{2}}{\Phi(\pi(u))^{2}}}\ \right]\ \, \tag{27}\]
i.e. with:
\[||u||<A_{\epsilon}(\pi(u)) \tag{28}\]
where:
\[A_{\epsilon}\stackrel{{\rm def.}}{{=}}\sqrt{-\Phi+\sqrt{\Phi^{2}+\epsilon^{2}M_{0}^{2}||\mathrm{d}\Phi||^{2}}} \tag{29}\]
is the _conservative bound function_ of \((\mathcal{M},\mathcal{G},\Phi)\) at parameter \(\epsilon\).
The _conservative closure_ of \((\mathcal{M},\mathcal{G},\Phi)\) is the region:
\[\overline{C_{1}}=\{u\in T\mathcal{M}_{0}\ |\ c(u)\leq 1\}\ \.\]
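The equivalence of the conservative condition (26) with the norm bound (28) can be spot-checked at a fixed point; the sketch below is an illustration of ours with sample values \(\Phi(m)=2\), \(||(\mathrm{d}\Phi)(m)||=3\), \(\epsilon=0.4\) and \(M_{0}=1\) (all names hypothetical):

```python
import numpy as np

def A_eps(Phi, dPhi_norm, eps, M0=1.0):
    # conservative bound function (29)
    return np.sqrt(-Phi + np.sqrt(Phi**2 + (eps * M0 * dPhi_norm) ** 2))

def c_of(u_norm, Phi, dPhi_norm, M0=1.0):
    # conservative function (23) at a fixed point, as a function of ||u||
    return u_norm * np.sqrt(u_norm**2 + 2.0 * Phi) / (M0 * dPhi_norm)

Phi, dPhi, eps = 2.0, 3.0, 0.4
for u_norm in np.linspace(0.0, 2.0, 201):
    assert (c_of(u_norm, Phi, dPhi) < eps) == (u_norm < A_eps(Phi, dPhi, eps))
```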
### The tangential acceleration, characteristic angle and turning rate
We next discuss further natural scalar and vector basic observables which are on-shell reductions of corresponding second order observables. For this, we start from a natural vector observable of second order which we describe through its evaluation on arbitrary curves (rather than in jet bundle language). Some of the following
definitions are standard except for certain conventional scale factors which we find convenient to eliminate in order to simplify various formulas.
The _opposite relative acceleration_ of a smooth curve \(\varphi:I\to\mathcal{M}\) is the smooth function \(\boldsymbol{\eta}_{\varphi}:I_{\mathrm{reg}}\to T\mathcal{M}\) defined through:
\[\boldsymbol{\eta}_{\varphi}(t)=-\frac{1}{H_{\varphi}(t)}\frac{\nabla_{t}\dot{ \varphi}(t)}{||\dot{\varphi}(t)||}=3\boldsymbol{\hat{\eta}}_{\varphi}(t)\ \ \forall t\in I_{\mathrm{reg}}\ \,\]
where:
\[\boldsymbol{\hat{\eta}}_{\varphi}(t)\stackrel{{\mathrm{def.}}}{{= }}-\frac{1}{\mathcal{H}_{\varphi}(t)}\frac{\nabla_{t}\dot{\varphi}(t)}{|| \dot{\varphi}(t)||}\ \ (t\in I_{\mathrm{reg}}) \tag{30}\]
is the _rescaled acceleration_ of \(\varphi\).
In reference [13], the opposite relative acceleration \(\boldsymbol{\eta}_{\varphi}(t)\) was called the _vector gradient flow parameter_ of \(\varphi\) at time \(t\), while its norm \(||\boldsymbol{\eta}_{\varphi}(t)||\) was called the _scalar gradient flow parameter_.
The _rescaled tangential acceleration_ (a.k.a. _rescaled second slow roll parameter_) of a smooth curve \(\varphi:I\to\mathcal{M}\) is the function \(\hat{\eta}_{\varphi}^{\parallel}:I_{\mathrm{reg}}\to\mathbb{R}\) obtained by projecting \(\boldsymbol{\hat{\eta}}_{\varphi}(t)\) on the unit tangent vector \(T_{\varphi}(t)\) to \(\varphi\) at time \(t\in I_{\mathrm{reg}}\):
\[\hat{\eta}_{\varphi}^{\parallel}(t)\stackrel{{\mathrm{def.}}}{{= }}\mathcal{G}(\hat{\boldsymbol{\eta}}_{\varphi}(t),T_{\varphi}(t))\ \ \forall t\in I_{\mathrm{reg}}\ \.\]
The _rescaled normal acceleration_ of \(\varphi\) is the function \(\hat{\boldsymbol{\eta}}_{\varphi}^{\perp}:I_{\mathrm{reg}}\to T\mathcal{M}\) obtained by projecting \(\hat{\boldsymbol{\eta}}_{\varphi}(t)\) on the hyperplane \(\Pi_{\varphi}(t)\stackrel{{\mathrm{def.}}}{{=}}T_{\varphi}(t)^{ \perp}\subset T_{\varphi(t)}\mathcal{M}\) normal to \(T_{\varphi}(t)\) inside the tangent space \(T_{\varphi(t)}\mathcal{M}\):
\[\hat{\boldsymbol{\eta}}_{\varphi}^{\perp}(t)\stackrel{{\mathrm{ def.}}}{{=}}\hat{\boldsymbol{\eta}}_{\varphi}(t)-\hat{\eta}_{\varphi}^{ \parallel}(t)T_{\varphi}(t)\ \ \forall t\in I_{\mathrm{reg}}\ . \tag{31}\]
The _characteristic angle_\(\theta_{\varphi}(t)\in[0,\pi]\) of \(\varphi\) at time \(t\in I_{\mathrm{reg}}\cap I_{\mathrm{noncrit}}\) is the angle between \(\dot{\varphi}(t)\) and \((\mathrm{grad}\Phi)(\varphi(t))\). For any \(t\in I_{\mathrm{reg}}\cap I_{\mathrm{noncrit}}\), we have:
\[\hat{\eta}_{\varphi}^{\parallel}(t)=||\hat{\boldsymbol{\eta}}_{\varphi}(t)|| \cos\theta_{\varphi}(t)\ \.\]
Let \(s\) be an increasing arc length parameter on the curve \(\varphi\), thus \(\dot{s}=||\dot{\varphi}||\). We have:
\[[\nabla_{t}\dot{\varphi}(t)]^{\parallel}=\frac{\mathrm{d}}{\mathrm{d}t}||\dot {\varphi}(t)||=\ddot{s}\ \,\ \ [\nabla_{t}\dot{\varphi}(t)]^{\perp}=\nabla_{t}\dot{ \varphi}(t)-\left(\frac{\mathrm{d}}{\mathrm{d}t}\log||\dot{\varphi}(t)|| \right)\dot{\varphi}(t)\ \, \tag{32}\]
where we used the relation:
\[\mathcal{G}(\nabla_{t}\dot{\varphi}(t),\dot{\varphi}(t))=\frac{1}{2}\frac{ \mathrm{d}}{\mathrm{d}t}||\dot{\varphi}(t)||^{2}=||\dot{\varphi}(t)||\ \frac{\mathrm{d}}{\mathrm{d}t}||\dot{\varphi}(t)||\ \,\]
which follows from the fact that \(\mathcal{G}\) is covariantly-constant. In particular, we have:
\[\hat{\eta}_{\varphi}^{\parallel}(t)=-\frac{1}{\mathcal{H}_{\varphi}(t)}\frac{ \mathrm{d}}{\mathrm{d}t}\log||\dot{\varphi}(t)||=-M_{0}\frac{\frac{\mathrm{d}}{ \mathrm{d}t}\log||\dot{\varphi}(t)||}{\sqrt{2\Phi(\varphi(t))}[1+\kappa_{\varphi }(t)]^{1/2}}\ \ \forall t\in I_{\mathrm{reg}}\ \.\]
**Definition 3.16**.: The vector:
\[\mathbf{\Omega}_{\varphi}(t)=-\nabla_{t}T_{\varphi}(t)\in T_{\varphi(t)}\mathcal{M }\ \ (t\in I_{\mathrm{reg}}). \tag{33}\]
is called the _turning rate vector_ of \(\varphi\) at time \(t\).
Notice that \(\mathbf{\Omega}_{\varphi}(t)\perp T_{\varphi}(t)\). Since \(\dot{\varphi}(t)^{\perp}=0\), the defining relation (33) can be written as:
\[\mathbf{\Omega}_{\varphi}(t)=-\frac{[\nabla_{t}\dot{\varphi}(t)]^{\perp}}{|| \dot{\varphi}(t)||}\ \.\]
The Frenet-Serret formulas give:
\[\nabla_{t}T_{\varphi}(t)=\chi_{\varphi}(t)||\dot{\varphi}(t)||n_{\varphi}(t)\ \,\]
where:
\[n_{\varphi}(t)\stackrel{{\mathrm{def.}}}{{=}}\frac{N_{\varphi} (t)}{||N_{\varphi}(t)||}\]
with:
\[N_{\varphi}(t)\stackrel{{\mathrm{def.}}}{{=}}(\nabla_{t}\dot{ \varphi}_{t})^{\perp}\]
and \(\chi_{\varphi}(t)\stackrel{{\mathrm{def.}}}{{=}}\frac{||N_{\varphi}(t)||}{||\dot{\varphi}(t)||^{2}}\in\mathbb{R}_{\geq 0}\) is the principal curvature of \(\varphi\) at time \(t\). Thus:
\[\mathbf{\Omega}_{\varphi}(t)=\Omega_{\varphi}(t)n_{\varphi}(t)\]
where:
\[\Omega_{\varphi}(t)\stackrel{{\mathrm{def.}}}{{=}}-\chi_{ \varphi}(t)||\dot{\varphi}(t)||\]
is the _scalar turning rate_ of \(\varphi\) at time \(t\). In particular, we have:
\[||\mathbf{\Omega}_{\varphi}(t)||=|\Omega_{\varphi}(t)|=|\chi_{\varphi}(t)|\ ||\dot{ \varphi}(t)||\ \.\]
In terms of \(\mathbf{\Omega}_{\varphi}\), relation (31) takes the form:
\[\hat{\mathbf{\eta}}_{\varphi}^{\perp}(t)=\frac{\mathbf{\Omega}_{\varphi}(t)} {\mathcal{H}_{\varphi}(t)}\ \.\]
We have:
\[\hat{\boldsymbol{\eta}}_{\varphi}(t)=\hat{\eta}_{\varphi}^{\parallel}(t)T_{\varphi}(t)+\hat{\boldsymbol{\eta}}_{\varphi}^{\perp}(t)\ \ \forall t\in I_{\mathrm{reg}}\]
and hence:
\[\nabla_{t}\dot{\varphi}(t)=-\mathcal{H}_{\varphi}(t)\left[\hat{\eta}_{\varphi}^{\parallel}(t)\dot{\varphi}(t)+||\dot{\varphi}(t)||\hat{\boldsymbol{\eta}}_{\varphi}^{\perp}(t)\right]\ \ \forall t\in I_{\mathrm{reg}}\ . \tag{34}\]
**The rescaled acceleration field of \((\boldsymbol{\mathcal{M}},\boldsymbol{\mathcal{G}},\boldsymbol{\Phi})\).** We next define a Finsler vector field on \(\dot{T}\mathcal{M}\) which encodes the rescaled acceleration of all _cosmological_ curves. It is the on-shell reduction of the second order rescaled acceleration observable.
The _rescaled acceleration field_ of \((\mathcal{M},\mathcal{G},\Phi)\) is the Finsler vector field \(\hat{\boldsymbol{\eta}}\in\Gamma(\dot{T}\mathcal{M},F)\) defined on \(\dot{T}\mathcal{M}\) through:
\[\hat{\boldsymbol{\eta}}(u)\stackrel{{\rm def.}}{{=}}n(u)+\mathbf{q}(u)=n(u)+\frac{||\Xi(\pi(u))||}{[\kappa(u)(1+\kappa(u))]^{1/2}}n_{\Phi}(\pi(u))=n(u)+\frac{1}{c(u)}n_{\Phi}(\pi(u))\ \, \tag{35}\]
where \(\mathbf{q}\) is the relative gradient field of \((\mathcal{M},\mathcal{G},\Phi)\) (see Definition 3.8).
Notice that the rescaled acceleration field determines the conservative function:
\[c(u)=\frac{1}{||\mathbf{q}(u)||}=\frac{1}{||\hat{\boldsymbol{\eta}}(u)-n(u)|| }\ \ \forall u\in\dot{T}\mathcal{M}_{0}\ \.\]
**The rescaled tangential acceleration function and normal acceleration field of \((\boldsymbol{\mathcal{M}},\boldsymbol{\mathcal{G}},\boldsymbol{\Phi})\).** The _rescaled tangential acceleration function_ (a.k.a. "second slow roll function") \(\hat{\eta}^{\parallel}:\dot{T}\mathcal{M}\rightarrow\mathbb{R}\) is the basic cosmological observable defined through:
\[\hat{\eta}^{\parallel}(u)=\mathcal{G}(\hat{\boldsymbol{\eta}}(u),n(u))=1+ \frac{(\mathrm{d}\Phi)(\pi(u))(u)}{\mathcal{H}(u)||u||^{2}}=1+\frac{\cos\theta (u)}{c(u)}\ . \tag{36}\]
The _rescaled normal acceleration field_ \(\hat{\boldsymbol{\eta}}^{\perp}\in\Gamma(\dot{T}\mathcal{M},F)\) of \((\mathcal{M},\mathcal{G},\Phi)\) is the Finsler vector field defined on \(\dot{T}\mathcal{M}\) through:
\[\hat{\boldsymbol{\eta}}^{\perp}(u)=\mathbf{q}^{\perp}(u)=\frac{(\mathrm{grad} \Phi)^{\perp}(u)}{\mathcal{H}(u)||u||}=\frac{n_{\Phi}^{\perp}(u)}{c(u)}\ . \tag{37}\]
We have:
\[\hat{\boldsymbol{\eta}}(u)=\hat{\eta}^{\parallel}(u)n(u)+\hat{\boldsymbol{\eta}}^{\perp}(u)\ \ \forall u\in\dot{T}\mathcal{M}\ \.\]
Notice that \(\hat{\eta}^{\parallel}(u)=1\) and \(\hat{\boldsymbol{\eta}}^{\perp}(u)=0\) when \(\pi(u)\in\mathrm{Crit}\Phi\).

**The turning rate field of \((\boldsymbol{\mathcal{M}},\boldsymbol{\mathcal{G}},\boldsymbol{\Phi})\).** The _turning rate field_ of \((\mathcal{M},\mathcal{G},\Phi)\) is the Finsler vector field \(\boldsymbol{\Omega}\in\Gamma(\dot{T}\mathcal{M},F)\) given by:
\[\boldsymbol{\Omega}(u)\stackrel{{\rm def.}}{{=}}\frac{(\mathrm{ grad}\Phi)^{\perp}(u)}{||u||}\ \ (u\in\dot{T}\mathcal{M})\ \.\]
Relation (37) reads:
\[\hat{\boldsymbol{\eta}}^{\perp}(u)=\mathbf{q}^{\perp}(u)=\frac{\boldsymbol{ \Omega}(u)}{\mathcal{H}(u)}\ . \tag{38}\]
The following result shows that the basic scalar and vector observables introduced above are the on-shell reductions of the corresponding second order observables.
**Prop 3.2**.: Suppose that \(\varphi:I\to\mathcal{M}\) is a cosmological curve. Then for any \(t\in I_{\rm reg}\) we have:
\[\hat{\boldsymbol{\eta}}_{\varphi}(t)=\hat{\boldsymbol{\eta}}(\dot{\varphi}(t))\ \,\ \ \hat{\eta}_{\varphi}^{\parallel}(t)=\hat{\eta}^{\parallel}(\dot{\varphi}(t))\ \,\ \ \hat{\boldsymbol{\eta}}_{\varphi}^{\perp}(t)=\hat{\boldsymbol{\eta}}^{\perp}(\dot{\varphi}(t))\ \,\ \ \boldsymbol{\Omega}_{\varphi}(t)=\boldsymbol{\Omega}(\dot{\varphi}(t))\ \.\]
Proof.: The rescaled acceleration of \(\varphi\) can be expressed as follows using the cosmological equation:
\[\hat{\boldsymbol{\eta}}_{\varphi}(t)=\frac{\mathcal{H}_{\varphi}(t)\dot{ \varphi}(t)+(\mathrm{grad}\Phi)(\varphi(t))}{\mathcal{H}_{\varphi}(t)||\dot{ \varphi}(t)||}=\hat{\boldsymbol{\eta}}(\dot{\varphi}(t))\ \.\]
Thus:
\[\hat{\eta}_{\varphi}^{\parallel}(t)=1+\frac{(\mathrm{d}\Phi)(\varphi(t))( \dot{\varphi}(t))}{\mathcal{H}_{\varphi}(t)||\dot{\varphi}(t)||^{2}}=1+M_{0} \frac{(\mathrm{d}\Phi)(\varphi(t))(\dot{\varphi}(t))}{(2\Phi(\varphi(t)))^{3/ 2}\kappa_{\varphi}(t)[1+\kappa_{\varphi}(t)]^{1/2}} \tag{39}\]
and:
\[\hat{\boldsymbol{\eta}}_{\varphi}^{\perp}(t)=\frac{(\mathrm{grad}\Phi)^{ \perp}(\varphi(t))}{\mathcal{H}_{\varphi}(t)||\dot{\varphi}(t)||}=M_{0}\frac{ (\mathrm{grad}\Phi)^{\perp}(\varphi(t))}{2\Phi(\varphi(t))[\kappa_{\varphi}(t )(1+\kappa_{\varphi}(t))]^{1/2}}\ \, \tag{40}\]
where we used (18). This immediately gives the conclusion.
Summarizing, we consider four natural basic observables, namely \(\kappa\), \(c\), \(\hat{\eta}^{\parallel}\) and \(\boldsymbol{\Omega}\), the last of which is a vector observable. We also introduce an auxiliary basic observable \(\theta\):
**Definition 3.20**.: The _characteristic angle function_ of \((\mathcal{M},\mathcal{G},\Phi)\) is the map \(\theta:\dot{T}\mathcal{M}_{0}\to[0,\pi]\) defined through:
\[\cos\theta(u)=\mathcal{G}(T(u),n_{\Phi}(\pi(u)))\ \ \forall u\in\dot{T}\mathcal{M}_{0}\ \,\]
where \(n_{\Phi}\in\mathcal{X}(\mathcal{M}_{0})\) is the normalized gradient field of \((\mathcal{M},\mathcal{G},\Phi)\).
The following statement is immediate:
**Prop 3.3**.: For any curve \(\varphi:I\to\mathcal{M}\) and any \(t\in I_{\rm reg}\cap I_{\rm noncrit}\), we have:
\[\theta_{\varphi}(t)=\theta(\dot{\varphi}(t))\ \,\]
where \(\theta_{\varphi}\) is the characteristic angle of \(\varphi\).
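To make the preceding definitions concrete, the following Python sketch (an illustration of ours; the toy potential and all names are assumptions, not taken from the text) evaluates the basic observables \(\kappa\), \(\hat{\boldsymbol{\varepsilon}}\), \(\mathcal{H}\), \(c\), \(\cos\theta\) and \(\hat{\eta}^{\parallel}\) for a flat two-field model:

```python
import numpy as np

M0 = 1.0  # reduced Planck mass, set to one for illustration

def Phi(x):      # toy positive potential on M = R^2 with the flat metric
    return 1.0 + 0.5 * (x[0]**2 + 2.0 * x[1]**2)

def grad_Phi(x):
    return np.array([x[0], 2.0 * x[1]])

def observables(x, u):
    nu = np.linalg.norm(u)                       # ||u||
    dp = np.linalg.norm(grad_Phi(x))             # ||(dPhi)(m)||
    kappa = nu**2 / (2.0 * Phi(x))                        # (17)
    eps_hat = kappa / (1.0 + kappa)                       # rescaled first slow roll
    H = np.sqrt(nu**2 + 2.0 * Phi(x)) / M0                # rescaled Hubble function
    c = nu * np.sqrt(nu**2 + 2.0 * Phi(x)) / (M0 * dp)    # (23)
    cos_theta = (u @ grad_Phi(x)) / (nu * dp)             # characteristic angle
    eta_par = 1.0 + cos_theta / c                         # (36)
    return kappa, eps_hat, H, c, cos_theta, eta_par

print(observables(np.array([1.0, 0.5]), np.array([-0.2, 0.1])))
```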
### Reconstruction of the scalar field metric and potential from \(\kappa\) and \(\hat{\eta}^{\parallel}\)
Knowledge of the first and second slow-roll functions (equivalently, knowledge of the first IR function and of the second slow-roll function) allows one to reconstruct the positive homothety class of the pair \(({\cal G},\Phi)\) when \(M_{0}\) is fixed. Indeed, relation (36) gives:
\[\Xi(\pi(u))(n(u))=[\kappa(u)(1+\kappa(u))]^{1/2}\left[-1+\hat{\eta}^{ \parallel}(u)\right]\ \ \forall u\in\dot{T}{\cal M}\ \, \tag{41}\]
showing that \(\kappa\) and \(\hat{\eta}^{\parallel}\) determine \(\Xi\). Since \({\cal M}\) is connected, this implies that \(\kappa\) and \(\hat{\eta}^{\parallel}\) determine the scalar potential \(\Phi\) up to a multiplicative constant \(A>0\). Now relation (18) determines \({\cal G}\) up to multiplication by \(A\). Thus \(\kappa\) and \(\hat{\eta}^{\parallel}\) determine the positive homothety class of the pair \(({\cal G},\Phi)\). In particular, the rescaled first and second slow-roll functions encode the same information as this homothety class.
### Geometric interpretation of \(\hat{\eta}^{\parallel}\)
Relation (36) can be written as:
\[\cos\theta(u)=c(u)[\hat{\eta}^{\parallel}(u)-1] \tag{42}\]
and requires:
\[\hat{\eta}^{\parallel}(u)\in\left[1-\frac{1}{c(u)},1+\frac{1}{c(u)}\right]\ \.\]
In particular, the interval within which \(\hat{\eta}^{\parallel}(u)\) can take values is centered on \(1\) and constrained by the value of \(c(u)\) and hence by the norm of \(u\). This interval is very large in the quasi-conservative regime \(c(u)\ll 1\) and becomes narrow in the strongly dissipative regime \(c(u)\gg 1\), when \(\hat{\eta}^{\parallel}(u)\) is forced to be close to one (see Figure 1). In particular, the generalized "ultra slow roll" approximation \(\hat{\eta}^{\parallel}\approx 1\) is accurate in the strongly dissipative regime.
Notice that \(|\hat{\eta}^{\parallel}(u)|\ll 1\) iff \(\cos\theta(u)\approx-c(u)\), which in particular requires that \(u\) points towards decreasing values of \(\Phi\). On the other hand, (25) gives:
\[||u||=\sqrt{\Phi(\pi(u))\left[-1+\sqrt{1+4c(u)^{2}||\Xi(\pi(u))||^{2}}\right]} \ . \tag{43}\]
Let \(\pi(u)=m\in{\cal M}_{0}\). When \(\theta(u)=\pi/2\), relation (42) requires \(\hat{\eta}^{\parallel}(u)=1\). In this case, \(u\) is orthogonal to \(n_{\Phi}(m)\) in \(T_{m}{\cal M}\) and its norm determines and is determined by \(c(u)\) through relation (43). This is the so-called "ultra slow roll" case, which generalizes the ultra slow roll regime of one-field models. By the remarks above, the strongly dissipative regime \(c(u)\gg 1\) forces \(\hat{\eta}^{\parallel}(u)\approx 1\) and hence a generalized "ultra slow roll" approximation is accurate in this regime.
Suppose now that \(\theta(u)\neq\pi/2\), i.e. \(\hat{\eta}^{\parallel}(u)\neq 1\). Then eliminating \(c(u)\) from (42) and substituting in (43) gives:
\[||u||=\sqrt{\Phi(\pi(u))\left[-1+\sqrt{1+\frac{4||\Xi(\pi(u))||^{2}}{(1-\hat{\eta}^{\parallel}(u))^{2}}\cos^{2}\theta(u)}\right]}\ \.\]
When \(\hat{\eta}^{\parallel}(u)\) is fixed, this determines \(||u||\) as a function of \(\theta(u)\), where the sign of \(\cos\theta(u)\) must equal that of \(\hat{\eta}^{\parallel}(u)-1\) by (42) since \(c(u)\) is positive. Thus fixing the value of \(\hat{\eta}^{\parallel}\) forces \(u\) to lie within a hypersurface of revolution with axis given by the line determined by \((\mathrm{grad}\Phi)(m)\) inside \(T_{m}\mathcal{M}\). This hypersurface passes through the origin of \(T_{m}\mathcal{M}\) at \(\theta=\pi/2\) and cuts the line determined by \((\mathrm{grad}\Phi)(m)\) again at a point corresponding to the maximal norm of \(u\), which is attained for \(\theta=0\) or \(\theta=\pi\), depending on the sign of \(\hat{\eta}^{\parallel}-1\).

Figure 1: Admissible domain of \(c\) and \(\hat{\eta}^{\parallel}\). The left and right boundaries of the domain are the hyperbolas \(c(\hat{\eta}^{\parallel}-1)=-1\) and \(c(\hat{\eta}^{\parallel}-1)=+1\), which correspond respectively to \(\theta=\pi\) and \(\theta=0\). The vertical red line in the middle has equation \(\hat{\eta}^{\parallel}=1\) and corresponds to \(\theta=\pi/2\). The dotted blue hyperbolas correspond to \(\theta=\pi/3\) and \(\theta=2\pi/3\) while the dotted red hyperbolas correspond to \(\theta=4\pi/9\) and \(\theta=5\pi/9\). The interval within which \(\hat{\eta}^{\parallel}\) can vary for a fixed value of \(c\) is obtained by intersecting the corresponding horizontal line with the domain shown in the figure. This interval is centered on the value \(\hat{\eta}^{\parallel}=1\) and its length decreases as \(c\) increases. Accordingly, the strongly dissipative regime \(c\gg 1\) forces \(\hat{\eta}^{\parallel}\) to be close to one and hence the generalized “ultra slow roll” approximation is accurate in this regime.

This maximal value of \(||u||\) is given by:
\[||u||_{\max}=\sqrt{\Phi(m)\left[-1+\sqrt{1+\frac{4||\Xi(m)||^{2}}{(1-\hat{\eta}^{\parallel}(u))^{2}}}\right]}\enspace.\]
The surface of revolution degenerates to the plane perpendicular to \((\text{grad}\Phi)(m)\) when \(\hat{\eta}^{\parallel}(u)\to 1\). When \(\hat{\eta}^{\parallel}(u)\rightarrow\pm\infty\), it degenerates to the origin of \(T_{m}\mathcal{M}\). These remarks show that the constant roll approximation \(\hat{\eta}^{\parallel}\approx C\) with \(C\) a constant means that the velocity \(\dot{\varphi}(t)\) of the cosmological curve remains close to the surface of revolution determined by \(C\) inside \(T_{\varphi(t)}\mathcal{M}\). In particular, the condition \(\hat{\eta}^{\parallel}(u)=0\) requires \(\cos\theta(u)<0\) i.e. \(\theta\in(\frac{\pi}{2},\pi]\) and determines the surface of revolution with equation:
\[||u||=\sqrt{\Phi(\pi(u))\left[-1+\sqrt{1+4||\Xi(\pi(u))||^{2}\cos^{2}\theta(u )}\right]}\enspace,\]
whose section with a plane inside \(T_{m}\mathcal{M}\) which contains the vector \((\text{grad}\Phi)(m)\) is plotted in Figure 2. The second slow roll condition \(\hat{\eta}^{\parallel}(\dot{\varphi}(t))\approx 0\) requires that \(\dot{\varphi}(t)\) lies close to this surface inside \(T_{\varphi(t)}\mathcal{M}\).
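The section shown in Figure 2 can be regenerated directly from the last equation; the short Python sketch below (ours, with illustrative names) uses the values \(\Phi(m)=1\) and \(||\Xi(m)||=1/2\) quoted in the caption of that figure:

```python
import numpy as np

Phi_m, Xi_m = 1.0, 0.5                        # values used in Figure 2
theta = np.linspace(np.pi / 2, np.pi, 200)    # eta_par = 0 requires cos(theta) < 0
u_norm = np.sqrt(Phi_m * (-1.0 + np.sqrt(1.0 + 4.0 * Xi_m**2 * np.cos(theta)**2)))
u_par = u_norm * np.cos(theta)                # projection on grad(Phi)(m)
u_perp = u_norm * np.sin(theta)               # norm of the orthogonal projection
# the pairs (u_par, u_perp) trace the curve plotted in Figure 2
```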
### The no roll condition
**Definition 3.21**.: The _no roll shell_ of \((\mathcal{M},\mathcal{G},\Phi)\) is the zero level set of the second slow roll function \(\hat{\eta}^{\parallel}\):
\[\mathcal{F}(\mathcal{M},\mathcal{G},\Phi)\stackrel{{\text{def.}} }{{=}}\{u\in\dot{T}\mathcal{M}\ |\ \hat{\eta}^{\parallel}(u)=0\}\enspace,\]
which is a closed codimension one submanifold of \(\dot{T}\mathcal{M}_{0}\).
By the discussion of the previous subsection, the no roll shell is a fiber sub-bundle of \(T\mathcal{M}_{0}\) whose fiber at each \(m\in\mathcal{M}_{0}\) is a connected surface of revolution around the axis determined by the vector \((\text{grad}\Phi)(m)\in T_{m}\mathcal{M}\).
**Definition 3.22**.: We say that a cosmological curve \(\varphi:I\rightarrow\mathcal{M}_{0}\) satisfies the _no roll condition_ at time \(t\in I_{\text{reg}}\) if \(\dot{\varphi}(t)\in\mathcal{F}(\mathcal{M},\mathcal{G},\Phi)\) i.e. if \(\hat{\eta}^{\parallel}(\dot{\varphi}(t))=0\).
Figure 2: Intersection of the surface of revolution determined inside \(T_{m}\mathcal{M}\) by the condition \(\hat{\eta}^{\parallel}(u)=0\) for \(\pi(u)=m\) with a plane which contains the vector \((\text{grad}\Phi)(m)\). In the figure, we took \(\Phi(m)=1\) and \(||\Xi(m)||=1/2\). The horizontal and vertical axes correspond to the projection \(u_{\parallel}\) of \(u\) on \((\text{grad}\Phi)(m)\) and the norm of the projection \(u_{\perp}\) of \(u\) on the plane orthogonal in \(T_{m}\mathcal{M}\) to the vector \((\text{grad}\Phi)(m)\).
**Remark 3.5**.: The no roll condition at time \(t\in I_{\rm reg}\) means that the covariant acceleration \(\nabla_{t}\dot{\varphi}(t)\) is orthogonal to \(\dot{\varphi}(t)\). Since \(\mathcal{G}(\dot{\varphi},\nabla_{t}\dot{\varphi})=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}||\dot{\varphi}||^{2}=||\dot{\varphi}||\frac{\mathrm{d}}{\mathrm{d}t}||\dot{\varphi}||\), this amounts to the condition:
\[\frac{\mathrm{d}}{\mathrm{d}t}||\dot{\varphi}(t)||=0\ \,\]
which means that the proper length parameter \(s\) along the cosmological curve (which satisfies \(\dot{s}=||\dot{\varphi}(t)||\)) has vanishing second derivative at time \(t\):
\[\ddot{s}=0\ \.\]
### The second slow roll condition
**Definition 3.23**.: Let \(\epsilon\in(0,1]\). We say that a vector \(u\in\dot{T}\mathcal{M}\) satisfies the _second slow roll condition_ of \((\mathcal{M},\mathcal{G},\Phi)\) with parameter \(\epsilon>0\) if \(|\hat{\eta}^{\parallel}(u)|<\epsilon\). The _second slow roll region_ of \((\mathcal{M},\mathcal{G},\Phi)\) at parameter \(\epsilon\) is the open subset of \(\dot{T}\mathcal{M}\) defined through:
\[\mathcal{S}_{\epsilon}^{2}(\mathcal{M},\mathcal{G},\Phi)\stackrel{{ \mathrm{def.}}}{{=}}\{u\in\dot{T}\mathcal{M}\ |\ |\hat{\eta}^{\parallel}(u)|<\epsilon\}\subset\dot{T} \mathcal{M}\ \.\]
Notice that \(\mathcal{F}(\mathcal{M},\mathcal{G},\Phi)\subset\mathcal{S}_{\epsilon}^{2}( \mathcal{M},\mathcal{G},\Phi)\) for all \(\epsilon\in(0,1]\).
**Definition 3.24**.: We say that a cosmological curve \(\varphi:I\rightarrow\mathcal{M}\) satisfies the _second slow roll condition_ with parameter \(\epsilon\) at \(t\in I_{\rm reg}\) if \(\dot{\varphi}(t)\in\mathcal{S}_{\epsilon}^{2}(\mathcal{M},\mathcal{G},\Phi)\), i.e. if the vector \(\dot{\varphi}(t)\) satisfies the second slow roll condition of \((\mathcal{M},\mathcal{G},\Phi)\) with parameter \(\epsilon\).
Since \(\mathcal{S}_{\epsilon}^{2}(\mathcal{M},\mathcal{G},\Phi)\) is an open subset of \(\dot{T}\mathcal{M}\) and the canonical lift \(\dot{\varphi}_{\rm reg}:I_{\rm reg}\rightarrow\dot{T}\mathcal{M}\) of \(\varphi_{\rm reg}\stackrel{{\mathrm{def.}}}{{=}}\varphi|_{I_{\rm reg}}\) is continuous, the set:
\[I_{\epsilon}^{2}\stackrel{{\mathrm{def.}}}{{=}}\dot{\varphi}_{\rm reg}^{-1}(\mathcal{S}_{\epsilon}^{2}(\mathcal{M},\mathcal{G},\Phi))\subset I_{\rm reg}\]
is an open subset of \(I_{\rm reg}\).
**Vectors of fixed norm which satisfy the second slow roll condition.** When \(u\in\dot{T}_{m}\mathcal{M}_{0}\), relation (36) gives:
\[\hat{\eta}_{m}^{\parallel}(u)=1+\frac{\Xi_{m}(T(u))}{[\kappa_{m}(||u||)(1+\kappa_{m}(||u||))]^{1/2}}=1+\frac{\cos\theta(u)}{c_{m}(||u||)}\ . \tag{44}\]
For any \(m\in\mathcal{M}\) and any \(N>0\), we denote by \(\mathrm{S}_{m}(N)\) the sphere of radius \(N\) in the Euclidean vector space \((T_{m}\mathcal{M},\mathcal{G}_{m})\). Moreover, we set \(\mathrm{S}_{m}\stackrel{{\mathrm{def.}}}{{=}}\mathrm{S}_{m}(1)\).
**Prop 3.4**.: Suppose that \(m\in\mathcal{M}_{0}\) is not a critical point of \(\Phi\). Then the image of the sphere \(\mathrm{S}_{m}(N)\subset\dot{T}_{m}\mathcal{M}\) of radius \(N>0\) through the map \(\hat{\eta}_{m}^{\parallel}:\dot{T}_{m}\mathcal{M}\rightarrow\mathbb{R}\) is given by:
\[\hat{\eta}_{m}^{\parallel}(\mathrm{S}_{m}(N))=\left[1-\frac{1}{c_{m}(N)},1+\frac{1}{c_{m}(N)}\right]\ . \tag{45}\]
Thus for any \(N>0\) and any \(\epsilon\in\left[1-\frac{1}{c_{m}(N)},1+\frac{1}{c_{m}(N)}\right]\) there exists a vector \(u\in\dot{T}_{m}\mathcal{M}\) which satisfies the conditions:
\[\hat{\eta}^{\parallel}(u)=\epsilon\ \ \text{and}\ \ ||u||=N\ . \tag{46}\]
In particular, for any \(N>0\) which satisfies the inequality:
\[c_{m}(N)\leq 1\ \, \tag{47}\]
there exists a vector \(u\in\dot{T}_{m}\mathcal{M}\) such that:
\[\hat{\eta}^{\parallel}_{m}(u)=0\ \ \text{and}\ \ ||u||=N\ . \tag{48}\]
**Proof.** Relation (44) implies that the image of \(\mathrm{S}_{m}(N)\) through \(\hat{\eta}^{\parallel}_{m}\) is given by (45). The remaining statements follow immediately from (45).
**Remark 3.6**.: Relation (47) is the defining condition of the conservative closure \(\overline{C_{1}}\) and hence is equivalent with (see equation (28)):

\[\kappa_{m}(N)\leq\frac{1}{2}[-1+\sqrt{1+4||\Xi_{m}||^{2}}]\Longleftrightarrow N^{2}\leq-\Phi(m)+\sqrt{\Phi(m)^{2}+M_{0}^{2}||(\mathrm{d}\Phi)(m)||^{2}}\ . \tag{49}\]
Recall that \(A_{1}\) denotes the conservative bound function at parameter \(1\) (see (29)):
\[A_{1}\stackrel{{\text{\tiny def.}}}{{=}}\sqrt{-\Phi+\sqrt{\Phi^ {2}+M_{0}^{2}||(\mathrm{d}\Phi)||^{2}}}\ \.\]
**Corollary 3.1**: _For any non-critical point \(m\in\mathcal{M}_{0}\) and any real number_
\[N\in(0,A_{1}(m)]\ \,\]
_the equation \(\hat{\eta}^{\parallel}_{m}(u)=0\) has a solution \(u\in\dot{T}_{m}\mathcal{M}\) of norm \(||u||=N\). The set \(\mathcal{F}(\mathcal{M},\mathcal{G},\Phi)\cap\mathrm{S}_{m}(N)\) of such solutions is a sphere of dimension \(d-2\) when \(N\in(0,A_{1}(m))\) and is reduced to a point when \(N=A_{1}(m)\). When \(N>A_{1}(m)\), the set \(\mathcal{F}(\mathcal{M},\mathcal{G},\Phi)\cap\mathrm{S}_{m}(N)\) is empty._
**Proof.** Follows immediately from Prop 3.4.
The corollary implies that the no roll shell intersects the conservative region for any parameter \(\epsilon\in(0,1]\):
\[\mathcal{F}(\mathcal{M},\mathcal{G},\Phi)\cap C_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\neq\emptyset\ \ \forall\epsilon\in(0,1]\ \.\]
### The second order slow roll conditions
**Definition 3.25**: Let \(\epsilon_{1},\epsilon_{2}\in(0,1]\). We say that a vector \(u\in\dot{T}\mathcal{M}\) satisfies the _second order slow roll conditions_ with parameters \(\epsilon_{1}\), \(\epsilon_{2}\) if:
\[\kappa(u)<\epsilon_{1}\ \ \text{and}\ \ |\hat{\eta}^{\parallel}(u)|<\epsilon_{2}\ \.\]
The _second order slow roll region_ of \((\mathcal{M},\mathcal{G},\Phi)\) at parameters \(\epsilon_{1}\) and \(\epsilon_{2}\) is the open subset of \(\dot{T}\mathcal{M}\) defined through:
\[\mathcal{S}_{\epsilon_{1},\epsilon_{2}}(\mathcal{M},\mathcal{G},\Phi)\stackrel{{ \mathrm{def.}}}{{=}}\mathcal{S}_{\epsilon_{1}}^{1}(\mathcal{M},\mathcal{G}, \Phi)\cap\mathcal{S}_{\epsilon_{2}}^{2}(\mathcal{M},\mathcal{G},\Phi)\subset \dot{T}\mathcal{M}\ \.\]
Corollary 3.1 and relation (20) imply:
**Prop 3.5**.: For any \(\epsilon_{1},\epsilon_{2}\in(0,1]\), we have:
\[\pi(\mathcal{S}_{\epsilon_{1},\epsilon_{2}}(\mathcal{M},\mathcal{G},\Phi))= \mathcal{M}_{0}\ \ \text{and}\ \ \pi(\mathcal{S}_{\epsilon_{1}}^{1}(\mathcal{M},\mathcal{G},\Phi)\cap \mathcal{F}(\mathcal{M},\mathcal{G},\Phi))=\mathcal{M}_{0}\]
**Definition 3.26**.: We say that a cosmological curve \(\varphi:I\to\mathcal{M}\) satisfies the _second order slow roll condition_ with parameters \(\epsilon_{1},\epsilon_{2}\in(0,1]\) at \(t\in I_{\mathrm{reg}}\) if \(\dot{\varphi}(t)\in\mathcal{S}_{\epsilon_{1},\epsilon_{2}}(\mathcal{M}, \mathcal{G},\Phi)\).
Since \(\mathcal{S}_{\epsilon_{1},\epsilon_{2}}(\mathcal{M},\mathcal{G},\Phi)\) is an open subset of \(\dot{T}\mathcal{M}\) and the canonical lift \(\dot{\varphi}_{\mathrm{reg}}:I_{\mathrm{reg}}\to\dot{T}\mathcal{M}\) of \(\varphi_{\mathrm{reg}}:I_{\mathrm{reg}}\to\mathcal{M}\) is continuous, the set:
\[I_{\epsilon_{1},\epsilon_{2}}\stackrel{{\mathrm{def.}}}{{=}}\dot {\varphi}_{\mathrm{reg}}^{-1}(\mathcal{S}_{\epsilon_{1},\epsilon_{2}}( \mathcal{M},\mathcal{G},\Phi))=I_{\epsilon_{1}}^{1}\cap I_{\epsilon_{2}}^{2} \subset I_{\mathrm{reg}}\]
is an open subset of \(I_{\mathrm{reg}}\).
### The slow roll-fast turn (a.k.a. slow roll conservative) conditions
Since:
\[\hat{\boldsymbol{\eta}}(u)=\hat{\eta}^{\parallel}(u)n(u)+\hat{\boldsymbol{ \eta}}^{\perp}(u)\ \ \forall u\in\dot{T}\mathcal{M}\ \,\]
the conditions:
\[|\hat{\eta}^{\parallel}(u)|\ll 1\ \,\ \ ||\hat{\boldsymbol{\eta}}^{\perp}(u)| |\gg 1 \tag{50}\]
are equivalent with:
\[|\hat{\eta}^{\parallel}(u)|\ll 1\ \ \text{and}\ \ ||\hat{\boldsymbol{\eta}}(u)| |\gg 1\ \.\]
The second of these amounts to \(||\mathbf{q}(u)||\gg 1\) i.e. \(c(u)\ll 1\), where:
\[c(u)\stackrel{{\mathrm{def.}}}{{=}}\frac{1}{||\mathbf{q}(u)||}= \frac{1}{M_{0}}\frac{||u||\ \big{[}||u||^{2}+2\Phi(\pi(u))\big{]}^{1/2}}{||( \mathrm{d}\Phi)(\pi(u))||}\]
is the _conservative function_ of Definition 3.10. Hence conditions (50) are equivalent with:
\[c(u)\ll 1\ \ \text{and}\ \ |\hat{\eta}^{\parallel}(u)|\ll 1\ \.\]
In particular, the defining conditions of the _slow roll-fast turn approximation_:
\[\boldsymbol{\varepsilon}(u)\ll 1\ \,\ \ |\hat{\eta}^{\parallel}(u)|\ll 1\ \,\ \ ||\hat{\boldsymbol{\eta}}^{\perp}(u)||\gg 1\ \.\]
are equivalent with the _slow roll conservative conditions_:
\[c(u)\ll 1\ \,\ \ \kappa(u)\ll 1\ \,\ \ \ |\hat{\eta}^{\parallel}(u)|\ll 1\ \.\]
**Definition 3.27**.: Let \(\epsilon_{1},\epsilon_{2},\epsilon_{3}\in(0,1]\). The _slow roll conservative region_ of \(({\cal M},{\cal G},\Phi)\) is the open subset of \(\dot{T}{\cal M}_{0}\) defined through:
\[C_{\epsilon_{1},\epsilon_{2},\epsilon_{3}}({\cal M},{\cal G},\Phi)\stackrel{{ \rm def.}}{{=}}{\cal S}_{\epsilon_{1},\epsilon_{2}}({\cal M},{\cal G},\Phi) \cap C_{\epsilon_{3}}({\cal M},{\cal G},\Phi)\subset\dot{T}{\cal M}_{0}\ \.\]
Thus \(C_{\epsilon_{1},\epsilon_{2},\epsilon_{3}}({\cal M},{\cal G},\Phi)\) consists of those non-zero tangent vectors \(u\in\dot{T}{\cal M}\) which satisfy the conditions:
\[\kappa(u)<\epsilon_{1}\ \,\ \ |\hat{\eta}^{\parallel}(u)|<\epsilon_{2}\ \,\ \ c(u)<\epsilon_{3}\ \.\]
Equation (24) gives:
\[||\Xi(\pi(u))||=\frac{[\kappa(u)(1+\kappa(u))]^{1/2}}{c(u)}\ . \tag{51}\]
Thus (41) takes the form:
\[\hat{\Xi}(\pi(u))(n(u))=c(u)\left[-1+\hat{\eta}^{\parallel}(u)\right]\ \, \tag{52}\]
where:
\[\hat{\Xi}\stackrel{{\rm def.}}{{=}}\frac{\Xi}{||\Xi||}\]
is the normalization of the one-form (22).
**Prop 3.6**.: For any \(m\in{\cal M}\) which is not a critical point of \(\Phi\) and every \(N>0\) such that:
\[c_{m}(N)\leq 1\Longleftrightarrow N^{2}\leq-\Phi(m)+\sqrt{\Phi(m)^{2}+M_{0}^{2}||(\mathrm{d}\Phi)(m)||^{2}} \tag{53}\]
there exists a non-zero vector \(u\in\dot{T}_{m}{\cal M}\) which satisfies the conditions:
\[\hat{\eta}^{\parallel}_{m}(u)=0\ \ {\rm and}\ \ ||u||=N\ \.\]
In particular, for any \(\epsilon_{1},\epsilon_{3}\in(0,1]\) there exists a non-zero vector \(u\in\dot{T}_{m}{\cal M}\) which satisfies:
\[\kappa(u)<\epsilon_{1}\ \,\ \ \hat{\eta}^{\parallel}_{m}(u)=0\ \,\ \ c(u)<\epsilon_{3}\ \.\]
**Proof.** The first statement follows from Proposition 3.4 upon noticing that (53) coincides with condition (49). The second statement follows from the same proposition upon noticing that the inequalities:
\[\kappa_{m}(N)<\epsilon_{1}\ \,\ \ c_{m}(N)<\epsilon_{3}\]
amount to:
\[N^{2}<\min\left(2\Phi(m)\epsilon_{1},-\Phi(m)+\sqrt{\Phi(m)^{2}+\epsilon_{3}^{2}M_{0}^{2}||(\mathrm{d}\Phi)(m)||^{2}}\right)\ \,\]
where we used the fact that the conservative condition \(c_{m}(N)<\epsilon_{3}\) is equivalent with (27).
**Corollary 3.2**.: _For \(\epsilon_{1},\epsilon_{2},\epsilon_{3}\in(0,1]\), we have:_
\[\pi(C_{\epsilon_{1},\epsilon_{2},\epsilon_{3}}({\cal M},{\cal G},\Phi))={ \cal M}_{0}\ \.\]
## 4 The first and second dynamical slow roll approximations
The first and second slow roll conditions can be used to define _dynamical approximations_ of a cosmological curve by a solution curve of an approximating equation obtained by neglecting the corresponding parameters in the cosmological equation.
### The first dynamical slow roll approximation
Relation (19) allows us to write the cosmological equation (7) as:
\[\nabla_{t}\dot{\varphi}(t)+\frac{1}{M_{0}}\sqrt{2\Phi(\varphi(t))}\left[1+ \kappa_{\varphi}(t)\right]^{1/2}\dot{\varphi}(t)+(\mbox{grad}_{\cal G}\Phi)( \varphi(t))=0\ . \tag{54}\]
The _first dynamical slow roll approximation_ consists of neglecting \(\kappa_{\varphi}\) in this equation, i.e. approximating a cosmological curve \(\varphi\) by the solution \(\varphi_{s}\) of the _slow cosmological equation_:
\[\nabla_{t}\dot{\varphi}_{s}(t)+\frac{1}{M_{0}}\sqrt{2\Phi(\varphi_{s}(t))} \dot{\varphi}_{s}(t)+(\mbox{grad}_{\cal G}\Phi)(\varphi_{s}(t))=0 \tag{55}\]
which satisfies the initial conditions:
\[\varphi_{s}(0)=\varphi(0)\ \ \mbox{and}\ \ \dot{\varphi}_{s}(0)=\dot{\varphi}(0)\ \.\]
The approximation is accurate around \(t=0\) provided that \(\kappa_{\varphi}(0)\ll 1\), which amounts to the condition:
\[\kappa(\dot{\varphi}(0))=\kappa(\dot{\varphi}_{s}(0))=\frac{||\dot{\varphi}(0)||^{2}}{2\Phi(\varphi(0))}\ll 1\ \.\]
A necessary condition for the dynamical first slow roll approximation \(\varphi(t)\approx\varphi_{s}(t)\) to remain accurate at \(t\neq 0\) is that we have:
\[\kappa(\dot{\varphi}_{s}(t))\ll 1\Longleftrightarrow||\dot{\varphi}_{s}(t)||\ll\sqrt{2\Phi(\varphi_{s}(t))}\ \, \tag{56}\]
where:
\[\kappa(\dot{\varphi}_{s}(t))=\frac{||\dot{\varphi}_{s}(t)||^{2}}{2\Phi( \varphi_{s}(t))}\ \.\]
Equation (56) requires that the canonical lift of \(\varphi_{s}\) to \(T{\cal M}\) be contained within a tubular neighborhood of the zero section whose radius is much smaller than \(\sqrt{2\Phi}\). This condition constrains the time interval around zero for which the approximation can remain accurate.
The geometric equation (55) defines the _slow flow_ on the tangent bundle \(T{\cal M}\). Notice that the stationary points of this flow coincide with those of the cosmological flow, being given by the trivial lifts of the critical points of \(\Phi\). The cosmological energy:
\[E_{\varphi_{s}}(t)\stackrel{{\rm def.}}{{=}}E(\dot{\varphi}_{s} (t))=\frac{1}{2}||\dot{\varphi}_{s}(t)||^{2}+\Phi(\varphi_{s}(t))\]
of \(\varphi_{s}\) satisfies:
\[\frac{\mathrm{d}E_{\varphi_{s}}(t)}{\mathrm{d}t}=-\frac{\sqrt{2\Phi(\varphi_{s}(t) )}}{M_{0}}||\dot{\varphi}_{s}(t)||^{2}\]
and attains its minima at the stationary points of the slow flow. It follows that \(E:T\mathcal{M}\rightarrow\mathbb{R}_{>0}\) is a Lyapunov function for the geometric dynamical system defined by (55) on \(T\mathcal{M}\). When \(\mathcal{M}\) is compact, the usual argument shows that the slow flow is future complete and that the \(\omega\)-limit point of any slow solution curve is the trivial lift of a critical point of \(\Phi\).
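As a numerical illustration (a sketch of our own for a one-field toy model with positive potential; the model and all names are assumptions), one can integrate (54) and (55) from identical initial data with \(\kappa_{\varphi}(0)\ll 1\) and compare the resulting trajectories:

```python
import numpy as np
from scipy.integrate import solve_ivp

M0 = 1.0
Phi  = lambda p: 1.0 + 0.5 * p**2     # positive toy potential on M = R
dPhi = lambda p: p

def cosmological(t, y):               # equation (54): full friction coefficient
    p, v = y
    return [v, -np.sqrt(v**2 + 2.0 * Phi(p)) * v / M0 - dPhi(p)]

def slow(t, y):                       # equation (55): kappa neglected in friction
    p, v = y
    return [v, -np.sqrt(2.0 * Phi(p)) * v / M0 - dPhi(p)]

y0 = [2.0, 0.05]                      # kappa(0) = v0^2/(2 Phi) ~ 4e-4 << 1
ts = np.linspace(0.0, 5.0, 200)
full = solve_ivp(cosmological, (0.0, 5.0), y0, t_eval=ts, rtol=1e-9)
approx = solve_ivp(slow, (0.0, 5.0), y0, t_eval=ts, rtol=1e-9)
print(np.max(np.abs(full.y[0] - approx.y[0])))   # small while kappa stays small
```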
### The second dynamical slow roll approximation
Relation (34) allows us to write the cosmological equation as:
\[\mathcal{H}_{\varphi}(t)(1-\hat{\eta}^{\parallel}_{\varphi}(t))\dot{\varphi}(t)-\mathcal{H}_{\varphi}(t)||\dot{\varphi}(t)||\hat{\boldsymbol{\eta}}^{\perp}_{\varphi}(t)+(\mathrm{grad}\Phi)(\varphi(t))=0\ \.\]
The _second dynamical slow roll approximation_ consists of neglecting \(\hat{\eta}^{\parallel}_{\varphi}(t)\) in this equation, i.e. neglecting the parallel component of the covariant acceleration:
\[[\nabla_{t}\dot{\varphi}(t)]^{\parallel}=\mathcal{G}(\nabla_{t}\dot{\varphi} (t),T_{\varphi}(t))=\frac{\mathrm{d}}{\mathrm{d}t}||\dot{\varphi}(t)||\]
in the cosmological equation. This amounts to approximating a cosmological curve \(\varphi\) by the solution \(\varphi_{\sigma}\) of the _no second roll equation_:
\[[\nabla_{t}\dot{\varphi}_{\sigma}(t)]^{\perp}+\mathcal{H}_{\varphi_{\sigma}}( t)\dot{\varphi}_{\sigma}(t)+(\mathrm{grad}\Phi)(\varphi_{\sigma}(t))=0 \tag{57}\]
which satisfies the initial conditions:
\[\varphi_{\sigma}(0)=\varphi(0)\ \ \mathrm{and}\ \ \dot{\varphi}_{\sigma}(0)=\dot{\varphi}(0)\ \.\]
Recall that (see (32)):
\[[\nabla_{t}\dot{\varphi}_{\sigma}(t)]^{\perp}=\nabla_{t}\dot{\varphi}_{\sigma }(t)-\left(\frac{\mathrm{d}}{\mathrm{d}t}\log||\dot{\varphi}_{\sigma}(t)|| \right)\dot{\varphi}_{\sigma}(t)\ . \tag{58}\]
When \(\dot{\varphi}(t)=0\), we define:
\[[\nabla_{t}\dot{\varphi}(t)]^{\perp}\stackrel{{\mathrm{def.}}}{{= }}\nabla_{t}\dot{\varphi}(t)\ \ (t\in I_{\mathrm{sing}})\ \,\]
which agrees with the limit of (58) when \(t\) approaches a singular point of \(\varphi\).
The approximation is accurate at \(t=0\) provided that:
\[|\hat{\eta}^{\parallel}_{\varphi}(0)|=|\hat{\eta}^{\parallel}(\dot{\varphi}(0 ))|=|\hat{\eta}^{\parallel}(\dot{\varphi}_{\sigma}(0))|=\Big{|}1+\frac{\cos \theta(\dot{\varphi}(0))}{c(\dot{\varphi}(0))}\Big{|}\ll 1\ \.\]
A necessary condition that the approximation remains accurate at \(t\neq 0\) is that we have:
\[|\hat{\eta}^{\parallel}(\dot{\varphi}_{\sigma}(t))|=\Big{|}1+\frac{\cos \theta(\dot{\varphi}_{\sigma}(t))}{c(\dot{\varphi}_{\sigma}(t))}\Big{|}\ll 1\ \,\]
a condition which constrains the time interval around the origin on which the approximation can be applied.
Projecting (57) on the direction of \(T_{\varphi_{\sigma}}(t)\) and on the hyperplane inside \(T_{\varphi_{\sigma}(t)}{\cal M}\) which is orthogonal to \(T_{\varphi_{\sigma}}(t)\) gives:
\[{\cal H}_{\varphi_{\sigma}}(t)||\dot{\varphi}_{\sigma}(t)||=-||({\rm d}\Phi)(\varphi_{\sigma}(t))||\cos\theta_{\varphi_{\sigma}}(t)\] \[||\dot{\varphi}_{\sigma}(t)||^{2}\chi_{\varphi_{\sigma}}(t)n_{\varphi_{\sigma}}(t)=-({\rm grad}\Phi)^{\perp}(\varphi_{\sigma}(t))\ \, \tag{59}\]
where we noticed that:
\[[\nabla_{t}\dot{\varphi}_{\sigma}(t)]^{\perp}=||\dot{\varphi}_{\sigma}(t)|| \nabla_{t}T_{\varphi_{\sigma}}(t)=||\dot{\varphi}_{\sigma}(t)||^{2}\chi_{ \varphi_{\sigma}}(t)n_{\varphi_{\sigma}}(t)\ \.\]
The second condition in (59) means that the osculating plane of \(\varphi_{\sigma}\) contains the vector \(({\rm grad}\Phi)(\varphi_{\sigma}(t))\) and that we have:
\[||\dot{\varphi}_{\sigma}(t)||^{2}\chi_{\varphi_{\sigma}}(t)=||({\rm d}\Phi)( \varphi_{\sigma}(t))||\sin\theta_{\varphi_{\sigma}}(t)\ . \tag{60}\]
This relation and the first equation in (59) determine the principal curvature of \(\varphi_{\sigma}\) in terms of its speed as:
\[\chi_{\varphi_{\sigma}}(t)=\frac{||({\rm d}\Phi)(\varphi_{\sigma}(t))||}{||\dot{\varphi}_{\sigma}(t)||^{2}}\left[1-\left(\frac{{\cal H}_{\varphi_{\sigma}}(t)||\dot{\varphi}_{\sigma}(t)||}{||({\rm d}\Phi)(\varphi_{\sigma}(t))||}\right)^{2}\right]^{1/2}\ \,\]
i.e.:
\[\chi_{\varphi_{\sigma}}(t)=\frac{||\Xi(\varphi_{\sigma}(t))||}{M_{0}\kappa( \dot{\varphi}_{\sigma}(t))}\sqrt{1-c(\dot{\varphi}_{\sigma}(t))^{2}}\ . \tag{61}\]
Notice that the first equation in (59) amounts to:
\[c(\dot{\varphi}_{\sigma}(t))=-\cos\theta(\dot{\varphi}_{\sigma}(t))\ \,\]
which shows that the flow defined by (57) on \(T{\cal M}\) preserves the no roll shell \({\cal F}({\cal M},{\cal G},\Phi)\).
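In the one-field case the perpendicular part of (57) vanishes identically, so the no second roll equation reduces to the implicit relation \({\cal H}(\dot{\varphi}_{\sigma})\,\dot{\varphi}_{\sigma}+\Phi^{\prime}(\varphi_{\sigma})=0\) for \(\dot{\varphi}_{\sigma}\). A minimal numerical sketch of this regime (assuming the flat metric on \({\cal M}=\mathbb{R}\) and the purely illustrative potential \(\Phi=m^{2}\varphi^{2}/2\); all parameter values below are hypothetical) compares the approximant with the corresponding cosmological curve:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-field toy model: flat metric on M = R, Phi(phi) = m^2 phi^2 / 2.
M0, m = 1.0, 1.0
Phi, dPhi = lambda p: 0.5 * m**2 * p**2, lambda p: m**2 * p
H = lambda p, v: np.sqrt(v * v + 2.0 * Phi(p)) / M0   # rescaled Hubble parameter

def v_sigma(p):
    # solve the no second roll relation H(p, v)*v = -Phi'(p) for v, i.e.
    # v^2 (v^2 + 2 Phi) = M0^2 Phi'^2 with v opposite in sign to Phi'
    w = -Phi(p) + np.sqrt(Phi(p)**2 + (M0 * dPhi(p))**2)
    return -np.sign(dPhi(p)) * np.sqrt(w)

T = 6.0
full = solve_ivp(lambda t, y: [y[1], -H(*y) * y[1] - dPhi(y[0])],   # cosmological equation
                 (0.0, T), [10.0, v_sigma(10.0)],
                 rtol=1e-10, atol=1e-12, dense_output=True)
slow = solve_ivp(lambda t, y: [v_sigma(y[0])], (0.0, T), [10.0],    # no second roll equation
                 rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, T, 200)
print(np.max(np.abs(full.sol(t)[0] - slow.sol(t)[0])))  # small while |eta^par| << 1
```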
## 5 The conservative and dissipative approximations
In this section we discuss two dynamical approximations controlled by the conservative parameter \(c_{\varphi}\), namely the conservative and dissipative approximations, which are obtained by taking this parameter to be very small or very large. We start by explaining the relation of these approximations to approximations controlled by the norm of \(\mathbf{\eta}_{\varphi}\).
### Dynamical approximations controlled by the norm of the relative acceleration
It is natural to consider the approximants of the cosmological equation which are obtained by requiring that \(||\hat{\mathbf{\eta}}_{\varphi}(t)||\) is very small, very large or close to one.
**The gradient flow approximation.** The _gradient flow approximation_ of [13] consists of replacing the cosmological equation (7) with the _modified gradient flow equation_:
\[{\cal H}_{\varphi}(t)\dot{\varphi}(t)+(\mbox{grad}\Phi)(\varphi(t))=0\ \,\]
whose integral curves are obtained from the gradient flow curves of \(\Phi\) by a certain reparameterization. This approximation (which was further discussed in [15]) is accurate when the _small relative acceleration condition_ \(||\boldsymbol{\eta}_{\varphi}(t)||\ll 1\) holds. Notice that this condition implies the small longitudinal relative acceleration condition \(|\hat{\eta}^{\parallel}_{\varphi}(t)|\ll 1\) and hence the gradient flow approximation implies the second slow roll approximation.
Combining the gradient flow approximation with the slow motion approximation (i.e. further neglecting \(\kappa_{\varphi}(t)\) in the rescaled Hubble parameter \({\cal H}_{\varphi}(t)=\frac{\sqrt{2\Phi(\varphi(t))}}{M_{0}}(1+\kappa_{\varphi}(t))^{1/2}\)) results in the _IR approximation_ of [10]. The latter replaces (7) with the equation:
\[\frac{1}{M_{0}}\sqrt{2\Phi(\varphi(t))}\dot{\varphi}(t)+(\mbox{grad}\Phi)( \varphi(t))=0\ \,\]
which is equivalent with the gradient flow equation of the _classical effective potential_\(V\stackrel{{\rm def.}}{{=}}M_{0}\sqrt{2\Phi}\). The IR approximation is accurate when \(\kappa_{\varphi}(t)\ll 1\) and \(||\boldsymbol{\eta}_{\varphi}(t)||\ll 1\) and is a natural generalization to multifield models of the second order slow roll approximation of one-field cosmological models.
Notice that relation (35) gives:
\[||\boldsymbol{\eta}(u)||=\sqrt{1+\frac{1}{c(u)^{2}}+\frac{2\cos\theta(u)}{c(u) }}\in\left[|1-\frac{1}{c(u)}|,1+\frac{1}{c(u)}\right]\ . \tag{62}\]
Thus:
* The conservative condition \(c(u)\ll 1\) is _equivalent_ with the _large relative acceleration condition_\(||\boldsymbol{\eta}(u)||\gg 1\), namely it forces \(||\boldsymbol{\eta}(u)||\approx\frac{1}{c(u)}\).
* The dissipative condition \(c(u)\gg 1\) is _equivalent_ with the _unit relative acceleration condition_\(||\boldsymbol{\eta}(u)||\approx 1\).
On the other hand, the _small relative acceleration condition_\(||\boldsymbol{\eta}(u)||\ll 1\) (which is used to define the gradient flow approximation of [13]) does not constrain \(c(u)\). Thus one has the following approximations which are controlled by \(||\boldsymbol{\hat{\eta}}||\):
* The gradient flow approximation, which is accurate when \(||\boldsymbol{\hat{\eta}}_{\varphi}(t)||\ll 1\). This approximation was discussed in [13] and [15].
* The conservative approximation, which is accurate when \(||\boldsymbol{\hat{\eta}}_{\varphi}(t)||\gg 1\), i.e. when \(c_{\varphi}(t)\ll 1\).
* The dissipative approximation, which is accurate when \(||\boldsymbol{\hat{\eta}}_{\varphi}(t)||\approx 1\), i.e. when \(c_{\varphi}(t)\gg 1\).
Each of these can be combined with the slow motion approximation, which is accurate when \(\kappa_{\varphi}(t)\ll 1\), i.e. when \(\mathfrak{e}_{\varphi}(t)\ll 1\). We refer the reader to [13] and [15] for further details of the gradient flow approximation and to [10] and [19] for further information on the IR approximation. Below, we discuss the conservative and dissipative approximations.
### The conservative approximation
The _conservative approximation_ consists of considering only non-critical cosmological curves and neglecting the friction term in the cosmological equation. This approximation is accurate for a noncritical cosmological curve \(\varphi:I\to\mathcal{M}_{0}\) when the _kinematic conservative condition_:
\[c_{\varphi}(t)\ll 1\Longleftrightarrow\dot{\varphi}(t)\in C_{\epsilon}( \mathcal{M},\mathcal{G},\Phi)\ \ \mbox{for some positive}\ \epsilon\ll 1 \tag{63}\]
is satisfied. Suppose that \(0\in I\). When (63) holds at \(t=0\), the noncritical cosmological curve \(\varphi\) is well-approximated for small \(|t|\) by the solution \(\varphi_{c}:I_{c}\to\mathcal{M}\) of the _conservative equation_ of the scalar triple \((\mathcal{M},\mathcal{G},\Phi)\):
\[\nabla_{t}\dot{\varphi}_{c}(t)+(\mbox{grad}\Phi)(\varphi_{c}(t))=0 \tag{64}\]
which satisfies the initial conditions:
\[\varphi_{c}(0)=\varphi(0)\ \ \mbox{and}\ \ \dot{\varphi}_{c}(0)=\dot{\varphi}(0)\ . \tag{65}\]
In this case the approximation remains accurate for those cosmological times \(t\) close to zero which satisfy \(\dot{\varphi}(t)\in C_{\epsilon}(\mathcal{M},\mathcal{G},\Phi)\). This condition determines a relatively open subset of \(I\) whose connected component which contains \(0\) is the time interval on which the approximation remains accurate.
**Conservation of energy for the conservative approximant.** Notice that (64) is the equation of motion of a particle of unit mass in the Riemannian manifold \((\mathcal{M},\mathcal{G})\) in the presence of the potential \(\Phi\). This motion is conservative in the sense that the energy:
\[E_{\varphi_{c}}\stackrel{{\rm def.}}{{=}}\frac{1}{2}||\dot{ \varphi}_{c}(t)||^{2}+\Phi(\varphi_{c}(t)) \tag{66}\]
is independent of \(t\) when \(\varphi_{c}\) is a solution of (64). The initial conditions (65) determine the energy as:
\[E_{\varphi_{c}}=\frac{1}{2}||\dot{\varphi}(0)||^{2}+\Phi(\varphi(0))=E_{ \varphi}(0):=E_{0}\ \, \tag{67}\]
where:
\[E_{\varphi}(t)\stackrel{{\rm def.}}{{=}}\frac{1}{2}||\dot{ \varphi}(t)||^{2}+\Phi(\varphi(t))\]
is the cosmological energy of \(\varphi\), which is a strictly-decreasing function of \(t\) when \(\varphi\) is not constant. We have (see eq. (4) for the definition of the set \(\mathcal{M}(E_{0})\)):
\[\Phi(\varphi_{c}(t))\leq E_{0}\ \ \mbox{i.e.}\ \ \varphi_{c}(t)\in\mathcal{M}(E_{0})\]
since \(||\dot{\varphi}_{c}(t)||^{2}\geq 0.\) Equations (66) and (67) give:
\[||\dot{\varphi}_{c}(t)||=\sqrt{2[E_{0}-\Phi(\varphi_{c}(t))]} \tag{68}\]
and the rescaled Hubble parameter of \(\varphi_{c}\):
\[{\cal H}_{\varphi_{c}}(t)={\cal H}(\dot{\varphi}_{c}(t))=\frac{1}{M_{0}}\sqrt{ 2E_{0}}\]
is independent of \(t\). In particular, the e-fold function of \(\varphi_{c}\) is given by:
\[{\cal N}_{\varphi_{c}}(T)\stackrel{{\rm def.}}{{=}}\frac{1}{3} \int_{0}^{T}{\rm d}t{\cal H}_{\varphi_{c}}(t)=\frac{T}{3M_{0}}\sqrt{2E_{0}} \tag{69}\]
and hence is a linear function of \(T\). Moreover, we have:
\[\kappa_{\varphi_{c}}(t)=\frac{||\dot{\varphi}_{c}(t)||^{2}}{2\Phi(\varphi_{c} (t))}=\frac{E_{0}-\Phi(\varphi_{c}(t))}{\Phi(\varphi_{c}(t))}=\frac{E_{0}}{ \Phi(\varphi_{c}(t))}-1 \tag{70}\]
and the slow roll parameter of \(\varphi_{c}\) is given by:
\[\mathfrak{e}_{\varphi_{c}}(t)=\frac{3\kappa_{\varphi_{c}}(t)}{1+\kappa_{ \varphi_{c}}(t)}=3\left[1-\frac{\Phi(\varphi_{c}(t))}{E_{0}}\right]\ . \tag{71}\]
Thus \(\dot{\varphi}_{c}(t)\) lies in the inflation region of \(({\cal M},{\cal G},\Phi)\) iff:
\[\kappa_{\varphi_{c}}(t)<\frac{1}{2}\Longleftrightarrow\Phi(\varphi_{c}(t))> \frac{2E_{0}}{3}\ \,\]
i.e. iff \(\varphi_{c}(t)\) lies in the following superlevel set of \(\Phi\):
\[{\cal M}(2E_{0}/3)\stackrel{{\rm def.}}{{=}}\{m\in{\cal M}\ |\ \Phi(m)>\frac{2E_{0}}{3}\}\subset{\cal M} \tag{72}\]
which we call the _conservative inflation region_ of \({\cal M}\) at energy \(E_{0}\).
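These statements are easy to check numerically in the same one-field toy model (flat metric on \({\cal M}=\mathbb{R}\), illustrative potential \(\Phi=m^{2}\varphi^{2}/2\); the parameter values are hypothetical). The sketch below integrates the conservative equation (64) and verifies conservation of the energy (66) together with the characterization (71)-(72) of the conservative inflation region:

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0
Phi, dPhi = lambda p: 0.5 * m**2 * p**2, lambda p: m**2 * p

# conservative equation (64): phi'' + Phi'(phi) = 0
sol = solve_ivp(lambda t, y: [y[1], -dPhi(y[0])], (0.0, 20.0), [2.0, 0.3],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 20.0, 400)
p, v = sol.sol(t)
E = 0.5 * v**2 + Phi(p)                 # energy (66)
E0 = E[0]
print(np.max(np.abs(E - E0)))           # conserved up to integrator accuracy

eps = 3.0 * (1.0 - Phi(p) / E0)         # slow roll parameter, eq. (71)
print(np.all((eps < 1.0) == (Phi(p) > 2.0 * E0 / 3.0)))   # inflation region, eq. (72)
```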
**The potential conservative condition.** The conservative approximation \(\varphi(t)\approx\varphi_{c}(t)\) implies that the conservative parameter of \(\varphi\) is approximated as \(c_{\varphi}(t)\approx c_{\varphi_{c}}(t)\), where:
\[c_{\varphi_{c}}(t)\stackrel{{\rm def.}}{{=}}c(\dot{\varphi}_{c} (t))=\frac{{\cal H}(\dot{\varphi}_{c}(t))||\dot{\varphi}_{c}(t)||}{||({\rm d} \Phi)(\varphi_{c}(t))||}=\frac{2}{M_{0}}\frac{\left[E_{0}(E_{0}-\Phi(\varphi_ {c}(t)))\right]^{1/2}}{||({\rm d}\Phi)(\varphi_{c}(t))||}=c_{E_{0}}(\varphi_{ c}(t))\ . \tag{73}\]
Here \(c_{E_{0}}:{\cal M}_{0}(E_{0})\rightarrow\mathbb{R}\) is the \(E_{0}\)_-reduced conservative function_ of \(({\cal M},{\cal G},\Phi)\), which is defined through:
\[c_{E_{0}}\stackrel{{\rm def.}}{{=}}\frac{2}{M_{0}}\frac{\left[E_ {0}(E_{0}-\Phi)\right]^{1/2}}{||{\rm d}\Phi||}=\frac{\left[E_{0}(E_{0}-\Phi) \right]^{1/2}}{||\Xi||\Phi}\ \ \forall E_{0}>0\ \.\]
The initial conditions (65) give:
\[c_{\varphi}(0)=c(\dot{\varphi}(0))=c(\dot{\varphi}_{c}(0))=c_{E_{0}}(\varphi( 0))\ \.\]
Consistency with (63) requires that the _potential conservative condition_:
\[c_{\varphi_{c}}(t)\ll 1\Longleftrightarrow c_{E_{0}}(\varphi_{c}(t))\ll 1 \tag{74}\]
is satisfied, i.e. that there exists a positive \(\epsilon^{\prime}\ll 1\) such that
\[\varphi_{c}(t)\in C_{E_{0},\epsilon^{\prime}}({\cal M},{\cal G},\Phi)\stackrel{{ \rm def.}}{{=}}\{m\in{\cal M}_{0}\mid c_{E_{0}}(m)<\epsilon^{\prime}\} \subset{\cal M}_{0}\ . \tag{75}\]
This condition constrains the cosmological times \(t\neq 0\) for which the conservative approximation can remain accurate.
We have:
\[\frac{{\rm d}\Phi(\varphi_{c}(t))}{{\rm d}t}=({\rm d}\Phi)(\dot{\varphi}_{c}(t))\]
and:
\[\frac{{\rm d}^{2}\Phi(\varphi_{c}(t))}{{\rm d}t^{2}}={\rm Hess}(\Phi)(\dot{ \varphi}_{c}(t),\dot{\varphi}_{c}(t))+({\rm d}\Phi)(\nabla_{t}\dot{\varphi}_{c }(t))={\rm Hess}(\Phi)(\dot{\varphi}_{c}(t),\dot{\varphi}_{c}(t))-||{\rm d} \Phi||^{2}\.\]
### The dissipative approximation
The _dissipative approximation_ consists of considering only non-critical cosmological curves and neglecting the gradient term in the cosmological equation. This approximation is accurate for a noncritical cosmological curve \(\varphi:I\to{\cal M}_{0}\) when the _kinematic dissipative condition_:
\[c_{\varphi}(t)\gg 1 \tag{76}\]
is satisfied. Suppose that \(0\in I\). When (76) holds at \(t=0\), the noncritical cosmological curve \(\varphi\) is well-approximated for small \(|t|\) by the solution \(\varphi_{d}:I_{d}\to{\cal M}\) of the _dissipative equation_ of the scalar triple \(({\cal M},{\cal G},\Phi)\):
\[\nabla_{t}\dot{\varphi}_{d}(t)+{\cal H}(\varphi_{d}(t))\dot{\varphi}_{d}(t)=0 \tag{77}\]
which satisfies the initial conditions:
\[\varphi_{d}(0)=\varphi(0)\ \ {\rm and}\ \ \dot{\varphi}_{d}(0)=\dot{\varphi}(0)\ . \tag{78}\]
In this case the approximation remains accurate for those cosmological times \(t\) close to zero which satisfy \(c(\dot{\varphi}(t))\gg 1\). This condition determines a relatively open subset of \(I\) whose connected component which contains \(0\) is the time interval on which the approximation remains accurate.
The dissipative equation (77) is a modified (a.k.a. reparameterized) geodesic equation. Indeed, let \(s=s(t)\) be an increasing parameter along the curve \(\varphi_{d}\), chosen such that \(s(0)=0\). We have:
\[\dot{\varphi}_{d}=\dot{s}\varphi^{\prime}_{d}\ \ {\rm and}\ \ \nabla_{t}\dot{ \varphi}_{d}=\ddot{s}\varphi^{\prime}_{d}+\dot{s}^{2}\nabla_{s}\varphi^{ \prime}_{d}\ \,\]
where the primes denote derivatives with respect to \(s\). Hence (77) is equivalent with:
\[\nabla_{s}\varphi^{\prime}_{d}(s)+\frac{\ddot{s}+\dot{s}{\cal H}(\dot{s}\varphi^{\prime}_{d}(s))}{\dot{s}^{2}}\,\varphi^{\prime}_{d}(s)=0\]
and reduces to the geodesic equation \(\nabla_{s}\varphi^{\prime}_{d}(s)=0\) with affine parameter \(s\) provided that \(s\) satisfies:
\[\frac{\ddot{s}}{\dot{s}^{2}}+\frac{1}{M_{0}}\sqrt{||\varphi^{\prime}_{d}(s)||^{ 2}+2\frac{\Phi(\varphi_{d}(s))}{\dot{s}^{2}}}=0\ . \tag{79}\]
Since:
\[\frac{\ddot{s}}{\dot{s}^{2}}=-\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{1}{ \dot{s}}\right)=-\dot{s}t^{\prime\prime}(s)=-\frac{t^{\prime\prime}(s)}{t^{ \prime}(s)}\ \,\]
condition (79) is equivalent with the following ODE for the inverse function \(t=t(s)\):
\[t^{\prime\prime}(s)-\frac{1}{M_{0}}[||\varphi^{\prime}_{d}(s)||^{2}+2\Phi( \varphi_{d}(s))t^{\prime}(s)^{2}]^{1/2}t^{\prime}(s)=0\ . \tag{80}\]
It follows that \(\varphi_{d}\) is a reparameterized geodesic of \((\mathcal{M},\mathcal{G})\), namely \(\varphi_{d}(s)\stackrel{{\mathrm{def.}}}{{=}}\varphi_{d}(t(s))\) is a normalized geodesic. The initial conditions (78) amount to:
\[\varphi_{d}(0)=\varphi(0)\ \ \mbox{and}\ \ \varphi^{\prime}_{d}(0)=t^{\prime}(0)\,\dot{\varphi}(0)\ \.\]
Since \(||\varphi^{\prime}_{d}(0)||=1\), the second of these conditions implies:
\[t^{\prime}(0)=\frac{1}{||\dot{\varphi}(0)||}\ \,\]
which together with \(t(0)=0\) specifies the initial condition for the desired solution \(t(s)\) of (80). Moreover, the initial conditions for \(\varphi_{d}\) can be written as:
\[\varphi_{d}(0)=\varphi(0)\ \ \mbox{and}\ \ \varphi^{\prime}_{d}(0)=T_{\varphi}(0 )=\frac{\dot{\varphi}(0)}{||\dot{\varphi}(0)||}\ . \tag{81}\]
When \(\varphi\) is a maximal cosmological curve, the curve \(\varphi_{d}(s)\) can be taken to be the unique maximal normalized geodesic of \((\mathcal{M},\mathcal{G})\) which satisfies these conditions.
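The reparameterization can also be verified numerically in the one-field case, where the normalized geodesic through \(\varphi(0)\) with positive initial velocity is simply \(\varphi_{d}(s)=\varphi(0)+s\). The sketch below (again with the illustrative potential \(\Phi=m^{2}\varphi^{2}/2\) and hypothetical parameter values) integrates the dissipative equation (77) directly and compares it with the geodesic reparameterized through the solution \(t(s)\) of (80) with \(t(0)=0\) and \(t^{\prime}(0)=1/||\dot{\varphi}(0)||\):

```python
import numpy as np
from scipy.integrate import solve_ivp

M0, m = 1.0, 1.0
Phi = lambda p: 0.5 * m**2 * p**2
H = lambda p, v: np.sqrt(v * v + 2.0 * Phi(p)) / M0
phi0, v0 = 1.0, 0.5

# direct integration of the dissipative equation (77)
direct = solve_ivp(lambda t, y: [y[1], -H(*y) * y[1]], (0.0, 5.0), [phi0, v0],
                   rtol=1e-10, atol=1e-12, dense_output=True)

# equation (80) for t(s) along the normalized geodesic phi_d(s) = phi0 + s
repar = solve_ivp(lambda s, y: [y[1], np.sqrt(1.0 + 2.0 * Phi(phi0 + s) * y[1]**2) * y[1] / M0],
                  (0.0, 0.4), [0.0, 1.0 / v0],
                  rtol=1e-10, atol=1e-12, dense_output=True)

s = np.linspace(0.0, 0.4, 100)
t_of_s = repar.sol(s)[0]
mask = t_of_s <= direct.t[-1]
# the reparameterized geodesic reproduces the dissipative solution
print(np.max(np.abs(direct.sol(t_of_s[mask])[0] - (phi0 + s[mask]))))
```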
For the approximation of \(\varphi(t)\) by \(\varphi_{d}(t)\) to remain accurate for \(t\neq 0\), it is necessary that the _effective dissipative condition_:
\[c_{\varphi_{d}}(t)\gg 1\]
be satisfied. Since \(\varphi_{d}(s)\) is a normalized geodesic, we have:
\[||\dot{\varphi}_{d}(t)||=\dot{s}=\frac{1}{t^{\prime}(s)}\ \.\]
Hence:
\[\kappa_{\varphi_{d}}(t)=\frac{1}{2t^{\prime}(s)^{2}\Phi(\varphi_{d}(s))}\]
and:
\[c_{\varphi_{d}}(s)=\frac{\sqrt{1+2t^{\prime}(s)^{2}\Phi(\varphi_{d}(s))}}{2t^ {\prime}(s)^{2}\Phi(\varphi_{d}(s))||\Xi(\varphi_{d}(s))||}=\frac{\sqrt{1+2t^ {\prime}(s)^{2}\Phi(\varphi_{d}(s))}}{M_{0}t^{\prime}(s)^{2}||(\mathrm{d} \Phi)(\varphi_{d}(s))||}\ \.\]
Notice that \(c_{\varphi_{d}}(0)=c(\dot{\varphi}_{d}(0))=c(\dot{\varphi}(0))=c_{\varphi}(0)\) since \(\dot{\varphi}_{d}(0)=\dot{\varphi}(0)\).
As explained in Subsection 3.5, the dissipative condition \(c(\dot{\varphi}(t))\gg 1\) forces \(|\hat{\eta}^{\parallel}(\dot{\varphi}(t))|\) to be very close to one. Hence the generalized "ultra slow roll" approximation holds in the strongly dissipative regime, where the dissipative approximation is accurate.
## 6 The limits of large and small rescaled Planck mass
Recall from [10] that universal similarities can be used to absorb all parameters of the model into the rescaled Planck mass \(M_{0}\). Thus it is natural to consider the limits when \(M_{0}\) is very large and very small.
When \(M_{0}\gg 1\), the friction term can be neglected and hence cosmological curves are well-approximated by solutions of the conservative equation (64).
When \(M_{0}\ll 1\), the scale transformation with parameter \(\epsilon=M_{0}\) brings (7) to the form:
\[M_{0}^{2}\nabla_{t}\frac{{\rm d}\varphi_{M_{0}}(t)}{{\rm d}t}+\left[M_{0}^{2} ||\frac{{\rm d}\varphi_{M_{0}}(t)}{{\rm d}t}||^{2}+2\Phi(\varphi_{M_{0}}(t)) \right]^{1/2}\frac{{\rm d}\varphi_{M_{0}}(t)}{{\rm d}t}+({\rm grad}_{\rm g} \Phi)(\varphi_{M_{0}}(t))=0\ \, \tag{82}\]
where \(\varphi_{M_{0}}(t)=\varphi(t/M_{0})\). Hence the limit of very small \(M_{0}\) coincides with the infrared limit of [10] at parameter \(\epsilon=M_{0}\). In this limit, the rescaled equation (82) approximates as:
\[\frac{{\rm d}\varphi_{M_{0}}(t)}{{\rm d}t}+({\rm grad}V_{1})(\varphi_{M_{0}}( t))\approx 0\ \,\]
where:
\[V_{1}\stackrel{{\rm def.}}{{=}}\sqrt{2\Phi}\]
coincides with the classical effective potential of [10] for \(M_{0}=1\). Thus \(\varphi(t)\) is well-approximated by the curve \(\varphi_{0}(t)=\varphi_{1}(M_{0}t)\), where \(\varphi_{1}(t)\) is the solution of the gradient flow equation of \(V_{1}\):
\[\frac{{\rm d}\varphi_{1}(t)}{{\rm d}t}+({\rm grad}V_{1})(\varphi_{1}(t))=0 \tag{83}\]
which satisfies the initial condition:
\[\varphi_{1}(0)=\varphi(0)\ \.\]
Notice that (83) is equivalent with the condition that \(\varphi_{0}\) satisfies the gradient flow equation:
\[\frac{{\rm d}\varphi_{0}(t)}{{\rm d}t}+({\rm grad}V)(\varphi_{0}(t))=0 \tag{84}\]
of the classical effective scalar potential:
\[V=M_{0}V_{1}=M_{0}\sqrt{2\Phi}\]
introduced in [10]. It also satisfies the initial condition:
\[\varphi_{0}(0)=\varphi(0)\ \.\]
The approximation is most accurate for _infrared optimal curves_, which satisfy:
\[\dot{\varphi}(0)=-M_{0}({\rm grad}V_{1})(\varphi(0))\Longleftrightarrow\dot{ \varphi}(0)=-\frac{M_{0}}{\sqrt{2\Phi}}({\rm grad}\Phi)(\varphi(0))\ \.\]
This amounts to the condition that \(\dot{\varphi}(0)\) belongs to the gradient flow shell of \((\mathcal{M},\mathcal{G},V)\):
\[\dot{\varphi}(0)=-(\mathrm{grad}V)(\varphi(0))\ \.\]
The approximation remains accurate for \(t\neq 0\) when one can neglect the acceleration and kinetic energy terms in (7), which requires:
\[\kappa_{\varphi}(t)\stackrel{{\mathrm{def.}}}{{=}}\frac{||\dot{ \varphi}(t)||^{2}}{2\Phi(\varphi(t))}\ll 1\]
and:
\[\tilde{\kappa}_{\varphi}(t)\stackrel{{\mathrm{def.}}}{{=}}\frac{||\nabla_{t}\dot{\varphi}(t)||}{||(\mathrm{d}\Phi)(\varphi(t))||}\ll 1\ \.\]
Notice that \(\tilde{\kappa}_{\varphi}(t)\) coincides with the second IR parameter of [10]. Thus:
_In leading order, the large rescaled Planck mass limit \(M_{0}\gg 1\) reproduces the conservative approximation, while the small rescaled Planck mass limit \(M_{0}\ll 1\) reproduces the IR approximation._
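The second statement can be illustrated explicitly: for the one-field toy potential \(\Phi=m^{2}\varphi^{2}/2\) with \(\varphi>0\) the classical effective potential is \(V=M_{0}\,m\,\varphi\), so the gradient flow (84) is simply \(\varphi_{0}(t)=\varphi(0)-M_{0}\,m\,t\). A minimal sketch (all parameter values hypothetical) compares it with the full cosmological equation for small \(M_{0}\) and an infrared optimal initial velocity:

```python
import numpy as np
from scipy.integrate import solve_ivp

M0, m = 0.05, 1.0
Phi, dPhi = lambda p: 0.5 * m**2 * p**2, lambda p: m**2 * p

# full cosmological equation: phi'' + sqrt(phi'^2 + 2 Phi)/M0 * phi' + Phi' = 0
rhs = lambda t, y: [y[1], -np.sqrt(y[1]**2 + 2.0 * Phi(y[0])) * y[1] / M0 - dPhi(y[0])]

phi0 = 3.0
y0 = [phi0, -M0 * m]                  # infrared optimal: phidot(0) = -(grad V)(phi(0))
sol = solve_ivp(rhs, (0.0, 40.0), y0, method='Radau',
                rtol=1e-9, atol=1e-11, dense_output=True)

t = np.linspace(0.0, 40.0, 200)
grad_flow = phi0 - M0 * m * t         # gradient flow (84) of V = M0*m*phi for phi > 0
print(np.max(np.abs(sol.sol(t)[0] - grad_flow)))   # stays small and shrinks as M0 -> 0
```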
## 7 Conclusions and further directions
We gave a careful geometric construction of natural basic observables for multifield cosmological models with arbitrary scalar manifold topology. Some of these first order observables are obtained by on-shell reduction of second order observables and their definition on the tangent bundle of the scalar manifold is not obvious a priori. We also discussed relations between these observables and the regions of interest which they determine within the tangent bundle of the scalar manifold. Finally, we described a system of dynamical approximations of the cosmological equation which are defined by imposing conditions on these basic observables - some of which were not considered systematically before. This defines a hierarchy of qualitatively distinct dynamical regimes which deserve detailed study.
Our discussion of these approximations only addressed their most basic features and much remains to be done. In particular, each dynamical approximation considered in the paper could be studied by expansion techniques with a view towards extracting asymptotic approximation schemes for cosmological curves in each dynamical regime we have identified. This appears to be quite involved since it requires methods from the asymptotic theory of nonlinear geometric ODEs and the control of various bounds involving the scalar potential and its derivatives as well as the distance function of \((\mathcal{M},\mathcal{G})\). In principle, the dynamical approximations which we identified could also be used to devise numerical approximation methods which could shed light on the corresponding dynamical regimes. Finally, it would be interesting to study these regimes in detail for the class of tame two field models, similar to the study of the IR regime performed in [19]. We hope to report on these and related questions in the future.
## Acknowledgments
This work was supported by national grant PN 19060101/2019-2022 and by a Maria Zambrano Fellowship.
|
2306.12055 | Multiplicity distribution and entropy of produced gluons in deep
inelastic scattering at high energies | In this paper we found the multiplicity distribution of the produced gluons
in deep inelastic scattering at large $z=\ln\left(Q^2_s/Q^2\right)\,\,\gg\,\,1$ where
$Q_s$ is the saturation momentum and $Q^2$ is the photon virtuality. It turns
out that this distribution at large $n > \bar{n}$ almost reproduces the KNO
scaling behaviour with the average number of gluons $\bar{n} \propto \exp\left(z^2/2\kappa\right)$,
where $\kappa = 4.88$ in the leading order of perturbative
QCD. The KNO function $\Psi\left(\frac{n}{\bar{n}}\right) = \exp\left(-\,n/\bar{n}\right)$.
For $n < \bar{n}$ we found that $\sigma_n \propto\Big( z - \sqrt{2
\,\kappa\,\ln (n-1)}\Big)/(n-1)$. Such small $n$ determine the value of entropy
of produced gluons $S_E = 0.3\, z^2/(2\,\kappa)$ at large $z$. The factor $0.3$
stems from the non-perturbative corrections that provide the correct behaviour
of the saturation momentum at large $b$. | Eugene Levin | 2023-06-21T06:56:53Z | http://arxiv.org/abs/2306.12055v1 | Multiplicity distribution and entropy of produced gluons in deep inelastic scattering at high energies
###### Abstract
In this paper we found the multiplicity distribution of the produced gluons in deep inelastic scattering at large \(z=\ln\left(Q_{s}^{2}/Q^{2}\right)\ \gg\ 1\) where \(Q_{s}\) is the saturation momentum and \(Q^{2}\) is the photon virtuality. It turns out that this distribution at large \(n>\bar{n}\) almost reproduces the KNO scaling behaviour with the average number of gluons \(\bar{n}\propto\exp\left(z^{2}/2\kappa\right)\), where \(\kappa=4.88\) in the leading order of perturbative QCD. The KNO function \(\Psi\left(\frac{n}{\bar{n}}\right)=\exp\left(-\,n/\bar{n}\right)\). For \(n<\bar{n}\) we found that \(\sigma_{n}\propto\left(z-\sqrt{2\,\kappa\,\ln(n-1)}\right)/(n-1)\). Such small \(n\) determine the value of entropy of produced gluons \(S_{E}=0.3\,z^{2}/(2\,\kappa)\) at large \(z\). The factor \(0.3\) stems from the non-perturbative corrections that provide the correct behaviour of the saturation momentum at large \(b\).
pacs: 13.60.Hb, 12.38.Cy
###### Contents
* I Introduction
* II Generating functional for multiparticle production processes
* II.1 Generating functional for the scattering amplitude
* II.2 AGK cutting rules and generating functional for the production of gluons
* II.3 The master equation for \(\sigma_{n}\)
* II.4 Solutions for Pomeron calculus in zero transverse dimension
* III The cross sections of gluon production deep in the saturation region
* III.1 The total cross section
* III.2 The diffraction production
* III.3 \(\sigma_{1}\left(Y,r,b\right)\)
* III.4 \(\sigma_{2}\left(Y,r,b\right)\)
* III.5 \(\sigma_{3}\left(Y,r,b\right)\) and \(\sigma_{k}\left(Y,r,b\right)\)
* IV Entropy of produced gluons
* V Conclusions
## I Introduction
Deep inelastic scattering (DIS) processes play a unique role in developing the theoretical understanding of the high energy interaction in QCD. Indeed, the main ideas of the Colour Glass Condensate (CGC)/saturation approach (see Ref.[1] for a review): the saturation of the dipole density and the new dimensional scale (\(Q_{s}\)), which increases with energy, have become the common language for discussing the high energy scattering in QCD. All these ideas are rooted in the theoretical description of DIS[2; 3; 4; 5; 6; 7; 8].
We are going to discuss the multiplicity distribution of the produced gluons in DIS. The multiplicity distributions and especially the entropy of produced gluons have become a hot subject during the past several years [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27] and we hope that this paper will look at these problems from a different angle. However, before discussing the multiplicity distribution it looks reasonable to summarize our theoretical achievements in DIS.
First and most important is that the scattering amplitude of the colourless dipole with the size \(x_{01}\), which determines the DIS cross section, satisfies the Balitsky-Kovchegov (BK) non-linear equation[28]:
\[\frac{\partial N_{01}}{\partial Y}\,=\,\bar{\alpha}_{S}\int\frac{d^{2}\,x_{02} }{2\pi}\,\frac{x_{01}^{2}}{x_{02}^{2}\,x_{12}^{2}}\Big{\{}N_{02}+N_{12}-N_{02} N_{12}-N_{01}\Big{\}} \tag{1}\]
where \(N_{ik}=N\,(Y,\mathbf{x}_{ik},\mathbf{b})\) is the scattering amplitude of the dipoles with size \(x_{ik}\) and with rapidity \(Y\) at the impact parameter \(\mathbf{b}\).
It has been shown that Eq. (1) leads to a new dimensional scale: saturation momentum[5] which has the following \(Y\) dependence[5; 29; 30]:
\[Q_{s}^{2}\,(Y,b)\ =\ Q_{s}^{2}\,(Y=Y_{0},b)\ e^{\bar{\alpha}_{S}\,\kappa\,Y\,-\, \frac{3}{2\,\gamma_{cr}}\ln Y} \tag{2}\]
where \(Y_{0}\) is the initial value of rapidity and \(\kappa\) and \(\gamma_{cr}\) are determined by the following equations1:
Footnote 1: \(\chi(\gamma)\) is the BFKL kernel[2] in anomalous dimension (\(\gamma\)) representation.\(\psi\) is the Euler psi -function (see Ref.[31] formula **8.36**).
\[\kappa\ \equiv\ \frac{\chi\,(\gamma_{cr})}{1-\gamma_{cr}}\ =\ -\frac{d_{\chi}\,(\gamma_{cr})}{d \gamma_{cr}}\ \ \ \ \ \mbox{and}\ \ \ \ \ \chi\,(\gamma)\,=\ 2\,\psi\,(1)\,-\,\psi\,(\gamma)\,-\,\psi\,(1-\gamma) \tag{3}\]
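Eq. (3) is a transcendental equation that is straightforward to solve numerically. A quick sketch (using scipy; the \(\psi\)-functions are scipy.special.digamma and polygamma) gives \(\gamma_{cr}\simeq 0.3725\), i.e. \(1-\gamma_{cr}\simeq 0.6275\), and \(\kappa\simeq 4.88\), the leading order value quoted in the abstract:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma, polygamma

def chi(g):          # chi(gamma) = 2 psi(1) - psi(gamma) - psi(1 - gamma)
    return 2.0 * digamma(1.0) - digamma(g) - digamma(1.0 - g)

def chi_prime(g):    # chi'(gamma) = -psi'(gamma) + psi'(1 - gamma)
    return -polygamma(1, g) + polygamma(1, 1.0 - g)

# Eq. (3): chi(gamma_cr)/(1 - gamma_cr) = -chi'(gamma_cr)
gamma_cr = brentq(lambda g: chi(g) / (1.0 - g) + chi_prime(g), 0.05, 0.49)
kappa = -chi_prime(gamma_cr)
print(gamma_cr, kappa)   # ~ 0.3725 and ~ 4.883
```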
In Ref.[29] (see also Ref.[30]) it is shown that in the vicinity of the saturation scale the scattering amplitude takes the following form:
\[N_{01}\,(z)\ \ =\ \ \mbox{Const}\,\big{(}x_{10}^{2}\,Q_{s}^{2}\,(Y)\big{)}^{ \bar{\gamma}} \tag{4}\]
with \(\bar{\gamma}=1-\gamma_{cr}\). In Eq. (4) we introduce a new variable \(z\), which is equal to:
\[z\ =\ \ln\big{(}x_{01}^{2}\,Q_{s}^{2}\,(Y,b)\big{)}\ \ =\ \ \bar{\alpha}_{S}\,\kappa\,\left(Y\,-\,Y_{A}\right)\ +\ \xi \tag{5}\]
where \(\xi=\ln x_{01}^{2}\). Note that we neglected the term \(\frac{3}{2\,\gamma_{cr}}\ln Y\) in Eq. (2).
It turns out that inside the saturation region: \(x_{01}^{2}\,Q_{s}^{2}\,(Y)\,>\,1\) the scattering amplitude shows the geometric scaling behaviour, being a function of one variable \(x_{01}^{2}\,Q_{s}^{2}\,(Y)\) :
\[N_{01}\,(Y,x_{01},b)=N\,\big{(}x_{01}^{2}\,Q_{s}^{2}\,(Y,b)\big{)} \tag{6}\]
It is instructive to note that this behaviour has been proven on general theoretical grounds[32] and has been seen in the experimental data on DIS[33].
Finally, in Ref.[34] the solution to Eq. (1) was found deep in the saturation region, for \(z\,\gg\,1\), which has the following form:
\[N_{01}\,(z)\ =\ 1\ \ -\ \ \mbox{Const}\,\exp\Big{(}-\,\frac{z^{2}}{2\,\kappa} \Big{)} \tag{7}\]
where Const is a smooth function of \(z\).
We have defined the saturation region as \(x_{10}^{2}\,Q_{s}^{2}\,(Y,b)\,>\,1\). However, in Refs.[35; 36] it has been noted that actually for very large \(x_{10}\) the non-linear corrections become small and we have to solve the linear BFKL equation. This feature can be seen directly from the eigenfunction of this equation. Indeed, the eigenfunction has the following form [3]
\[\phi_{\gamma}\,(\mathbf{r},\mathbf{R},\mathbf{b})\ \ =\ \left(\frac{r^{2}\,R^{2}}{\big{(}\mathbf{b}+ \frac{1}{2}(\mathbf{r}-\mathbf{R})\big{)}^{2}\,\big{(}\mathbf{b}-\frac{1}{2}(\mathbf{r}-\mathbf{R}) \big{)}^{2}}\right)^{\gamma}\ \xrightarrow{b\gg r,rR}\ \left(\frac{R^{2}\,r^{2}}{b^{4}}\right)^{ \gamma}\ \equiv\ e^{\gamma\,\xi}\ \ \mbox{with}\ 0\,<\,Re\gamma\,<\,1 \tag{8}\]
for any kernel which satisfies the conformal symmetry. In Eq. (8) \(R\) is the size of the initial dipole at \(Y=0\) while \(r\equiv x_{10}\) is the size of the dipole with rapidity \(Y\).
One can see that for \(r=x_{10}\,>\,min[R,b]\), \(\phi_{\gamma}\) starts to be small and the non-linear term in the BK equation could be neglected. In this paper we wish to discuss the DIS for \(Q^{2}\geq 1\,GeV^{2}\) and in the region of small \(x\). In other words
we consider \(x_{10}<R\), where \(R\) is the radius of a hadron. Bearing this in mind we can replace Eq. (8) by the following one:
\[\phi_{\gamma}\left(\mathbf{r},\mathbf{R},\mathbf{b}\right)\ =\ \left(\frac{r^{2}\,R^{2}}{\left(\mathbf{b}-\frac{1}{2}\mathbf{R}\right)^{2}\,\left(\mathbf{b}+\frac{1}{2}\mathbf{R}\right)^{2}}\right)^{\gamma}\ =\ \left(r^{2}\,Q^{2}_{s}\left(Y=0,b,R\right)\right)^{\gamma} \tag{9}\]
Therefore, we can absorb all impact parameter dependence in the dependence of the saturation scale on \(b\).
The main goal is to find the multiplicity distributions and the entropy of produced gluons in the kinematical region of Eq. (7). In other words, we wish to decipher the saturated amplitude in terms of produced gluons. Fortunately, the equations for the cross section of production of \(n\) gluons have been derived in the framework of CGC (see Ref.[37]). The next section is a brief review of the results of Ref.[37]. In this section we emphasize the two main ingredients on which the derivation of the equations is based: the AGK cutting rules[38] and the BFKL equation[2], which gives the production of gluons in a particular kinematic region typical for leading log(1/x) approximation of perturbative QCD.
In section 3 we discuss solutions to the equations for the total cross section \(\sigma_{tot}\), for the cross section of diffraction production \(\sigma_{sd}\) and for the cross section of production of \(n\) gluons \(\sigma_{n}\). We show that both the total cross section and the cross section of diffraction production reach the unitarity limits: \(\sigma_{tot}(b)=2\) and \(\sigma_{sd}=1\), while \(\sigma_{n}\) are small and proportional to \(\frac{1}{\bar{n}(z)}\exp\left(-n/\bar{n}(z)\right)\) with \(\bar{n}(z)\ \propto\ \exp\left(\frac{z^{2}}{2\,\kappa}\right)\). In section 4 we estimate the value of the entropy of produced gluons. In the conclusions we summarize our results and discuss related problems.
## II Generating functional for multiparticle production processes
### Generating functional for the scattering amplitude
The scattering amplitude, given by the BK equation (see Eq. (1)), can be viewed as a sum of the "fan" BFKL Pomeron diagrams[5; 6; 39; 40; 41; 42] (see Fig. 1). The value of the triple Pomeron vertex is determined by the non-linear term of the BK equation. However, it has been shown in Ref.[7] that the BK equation has a quite different interpretation, in which the BFKL kernel
\[\frac{\bar{\alpha}_{S}}{2\,\pi}K\left(x_{01}|x_{02},x_{12}\right) = \frac{\bar{\alpha}_{S}}{2\,\pi}\frac{x_{01}^{2}}{x_{02}^{2}x_{12}^{2}} \tag{10}\]
gives the probability for decay of one dipole with size \(x_{10}\) to two dipoles with sizes \(x_{02}\) and \(x_{12}\).
The simplest and the most transparent technique to incorporate this decay is the generating functional which allows us to reduce the calculation of the high energy elastic amplitude to consideration of a Markov process. The generating functional is defined as [7; 43; 44; 45]
\[Z_{0}\left(Y;\{u\}\right)\ \equiv\ \sum_{n=1}\,\int\ P_{n}\left(Y;r_{1},\ldots,r_{ n}\right)\ \prod_{i=1}^{n}\,u(r_{i})d^{2}r_{i} \tag{11}\]
Figure 1: Fig. 1-a: The fan diagrams in the BFKL Pomeron calculus that describe the BK equation (see Eq. (1)). Fig. 1-b: The unitarity constraints for the exchange of the BFKL Pomeron: the structure of the cut Pomeron. These constraints determine the cut Pomerons that are shown in Fig. 2 by the wavy lines which are crossed by the dashed ones.
where \(u(r_{i})\) is an arbitrary function of \(r_{i}\) and \(b_{i}\). \(P_{n}\) is the probability density to find \(n\) dipoles with sizes \(r_{1},\ldots,r_{n}\) at rapidity \(Y\).2 For the functional of Eq. (11) we have two obvious conditions:
Footnote 2: The dipole \((x_{i},y_{i})\) with coordinates \(x_{i}\) for quark and \(y_{i}\) for antiquark can be characterized by the dipole size \(\mathbf{r}_{i}=\mathbf{x}_{i}-\mathbf{y}_{i}\) and \(\mathbf{b}_{i}=\frac{1}{2}(\mathbf{x}_{i}+\mathbf{y}_{i})\). For simplicity we suppress in Eq. (11) and below the coordinate \(b_{i}\). For the scattering with the nuclear target we can consider that impact parameters of all dipoles are the same \(b_{i}=b\) (see Ref. [28]). The alternative notations are \(\mathbf{r}_{i}=\mathbf{r}_{ik}=|\mathbf{x}_{i}-\mathbf{y}_{k=i+1}|\).
\[\text{initial conditions:}\quad\,Y=0\ P_{n}\,=\,0\ \text{for}\,\ n\,>\,1\ \text{and}\ P_{1}=\delta(\mathbf{r}\,-\,\mathbf{r}_{1})\delta(\mathbf{b}-\mathbf{b}_{1});Z_{ 0}\left(Y=0;\{u\}\right)\ =\ u(r)\ ; \tag{12a}\] \[\text{boundary conditions:}\quad\,\ u=1\ Z_{0}\left(Y;\{u\}\right)|_{u=1}\ =\ 1; \tag{12b}\]
Eq. (12b) follows from the physical meaning of \(P_{n}\) and represents the conservation of the total probability, while Eq. (12a) indicates that we are considering the interaction of one dipole with the target.
The Markov process can be described as the following equation for the generating functional:
\[\frac{dZ_{0}\left(Y;\{u\}\right)}{dY}=\,\frac{\bar{\alpha}_{S}}{2\pi}\,\int\ d^{2}r\ d^{2}r^{\prime}\,K\left(r|r^{\prime},|\mathbf{r}-\mathbf{r}^{\prime}|\right)\left\{\,u(r^{\prime})\,u(|\mathbf{r}^{\prime}-\mathbf{r}|)\,-\,u(r)\right\}\,\frac{\delta}{\delta u(r)}\ Z_{0}\left(Y-y;\{u\}\right) \tag{13}\]
The two terms of this equation have a simple meaning: the increase of the probability to find \(n\) dipoles due to the decay of one dipole into two dipoles (the birth term, the first term in the curly bracket of Eq. (13)), and the decrease of the probability since any of the \(n\) dipoles can decay (the death term, the second term in the curly bracket of Eq. (13)).
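This probabilistic picture can be tested by dropping the transverse coordinates altogether: the cascade then becomes a pure birth (Yule) process in which each of the \(n\) dipoles splits into two at rate \(\Delta\). A minimal Monte Carlo sketch (with the illustrative choice \(\Delta=1\)) reproduces the closed-form generating function \(Z_{0}\left(Y,u\right)=ue^{-\Delta Y}/\left(1-u\left(1-e^{-\Delta Y}\right)\right)\), which reappears below as Eq. (24a):

```python
import numpy as np

rng = np.random.default_rng(0)
Delta, Y, u, trials = 1.0, 1.5, 0.3, 200_000

def n_dipoles(rng):
    """Yule process: n dipoles split to n+1 at total rate Delta*n."""
    t, n = 0.0, 1
    while True:
        t += rng.exponential(1.0 / (Delta * n))
        if t > Y:
            return n
        n += 1

mc = np.mean([u ** n_dipoles(rng) for _ in range(trials)])
exact = u * np.exp(-Delta * Y) / (1.0 - u * (1.0 - np.exp(-Delta * Y)))
print(mc, exact)   # agree within the Monte Carlo error
```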
Eq. (13) is general and can be used for an arbitrary initial condition. In the case of Eq. (12a) it can be rewritten as a non-linear equation. Eq. (13), being a linear equation with only a first derivative, has a general solution of the form \(Z_{0}\left(Y-y;\{u\}\right)\equiv Z_{0}\left(\{u(Y-y)\}\right)\). Inserting this solution and using the initial condition of Eq. (12a) we obtain the non-linear equation
\[\frac{dZ_{0}\left(Y;\{u\}\right)}{dY}=\frac{\bar{\alpha}_{S}}{2\pi}\int d^{2}r^{\prime}\,K\left(r|r^{\prime},|\mathbf{r}-\mathbf{r}^{\prime}|\right)\left\{Z_{0}^{2}\left(Y,\{u\}\right)-Z_{0}\left(Y,\{u\}\right)\right\} \tag{14}\]
It is easy to see that Eq. (14) can be re-written as the Balitsky-Kovchegov equation [28] for the scattering amplitude (see for example Ref.[1]).
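Indeed, restoring the dipole arguments of the two factors in the non-linear term and substituting \(Z_{0}=1-N\), Eq. (14) takes the form:

\[-\frac{\partial N\left(Y,r,b\right)}{\partial Y}=\frac{\bar{\alpha}_{S}}{2\pi}\int d^{2}r^{\prime}\,K\left(r|r^{\prime},|\mathbf{r}-\mathbf{r}^{\prime}|\right)\Big{\{}\left(1-N\left(Y,r^{\prime},b\right)\right)\left(1-N\left(Y,|\mathbf{r}-\mathbf{r}^{\prime}|,b\right)\right)-\left(1-N\left(Y,r,b\right)\right)\Big{\}}\]

and expanding the product reproduces Eq. (1) with \(N_{01}=N\left(Y,r,b\right)\), \(N_{02}=N\left(Y,r^{\prime},b\right)\) and \(N_{12}=N\left(Y,|\mathbf{r}-\mathbf{r}^{\prime}|,b\right)\).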
### AGK cutting rules and generating functional for the production of gluons
Based on this probability interpretation of the sum of "fan" diagrams, in Ref.[37] an attempt was made to develop the probability approach for the production of gluons using the AGK cutting rules[38]. It is worthwhile mentioning that this kind of approach was successfully applied to interactions of two BFKL cascades in the simple case with zero transverse dimension [43; 44; 46]. The AGK cutting rules use the fact that the BFKL Pomeron[2] gives the cross section of produced gluons in the specific kinematic region typical for the leading log(1/x) approximation of perturbative QCD. These cutting rules allow us to expand the contribution of the exchange of \(n\) BFKL Pomerons as a definite sum over final states with the average number of produced gluons \(k\,\Delta\,Y\), where \(k\,\leq\,n\) and \(\Delta\,Y\) is the mean multiplicity for one BFKL Pomeron exchange. Actually, the AGK cutting rules introduce three different triple Pomeron vertices (see Fig. 2). The structure of the production processes is determined by the s-channel unitarity constraints for the BFKL Pomeron:
\[2N^{BFKL}\left(Y;r,b\right)\ =\ G_{in}^{BFKL}\left(Y;r,b\right) \tag{15}\]
where \(N^{BFKL}\left(Y;r,b\right)\) is the contribution to the scattering amplitude from the BFKL Pomeron and \(G_{in}^{BFKL}\left(Y;r,b\right)\) is the cross section of produced gluons, which was actually calculated in Ref.[2] and which is shown in Fig. 1-b. In the framework of the AGK cutting rules it is called the cut Pomeron, which is shown in Fig. 2 by the wavy lines crossed by the dashed ones.
In Ref.[37] a generalization of Eq. (11) was introduced:
\[Z\left(Y;\{u\},\{v\}\right)\ \equiv\ \sum_{n=0,m=0}^{\infty}\,\int\ P_{n}^{m}\left(y;r_{1},\ldots,r_{n };r_{1},\ldots,r_{m}\right)\ \prod_{i=1}^{n}\,u(r_{i})\prod_{k=1}^{m}\,v(r_{k})d^{2}r_{i}\,d^{2}r_{k} \tag{16}\]
where \(P_{n}^{m}\) is the probability to find (i) \(n\) dipoles in the wave function of the fast dipole, which interact with the target at time \(\tau=0\); and (ii) \(m\) dipoles which can be detected at \(\tau=\infty\). In other words, \(P_{n}^{m}\) is the probability to have \(n\) dipoles with sizes \(r_{1},\ldots,r_{n}\) at \(\tau=0\) at rapidity \(Y\), which do not survive until \(\tau=\infty\) and cannot be measured, while the dipoles with sizes \(r_{1},\ldots,r_{m}\) reach \(\tau=\infty\) and can be caught by detectors.
The initial conditions of Eq. (12a) have to be replaced by
\[Z\left(Y=0;\{u\},\{v\}\right)\ =\ v(r) \tag{17}\]
and we have two boundary conditions:
\[\left(1\right)\ \ Z\left(Y;\{u\},\{v\}\right)|_{u=1,v=1}\ =\ 1;\ \ \ \ \left(2 \right)\ \ Z\left(Y;\{u\},\{v\}\right)|_{v=2u-1}=2\ Z_{0}\left(Y;\{u\}\right)\ -1; \tag{18}\]
The first one comes from the conservation of probabilities while the second stems from the s-channel unitarity:
\[2N\left(Y,r,b\right)\ =\ \sigma_{sd}\left(Y,r,b\right)\ +\ \sigma_{in}\left(Y,r,b\right) \tag{19}\]
where \(\sigma_{sd}\) and \(\sigma_{in}\) are the single diffraction and inelastic cross sections at fixed impact parameter \(b\).
In Ref.[37] the AGK cutting rules for the "fan" diagrams were reduced to the following linear equation for \(Z\left(Y;\{u\},\{\zeta=2\,u-v\}\right)\):
\[\frac{\partial\tilde{Z}\left(Y;\{u\},\{\zeta\}\right)}{\partial Y }\ =\ \frac{\bar{\alpha}_{S}}{2\pi}\,\int\,d^{2}r_{2}\,K\left(r_{10}|r_{12},r_{02} \right)\times \tag{20}\] \[\left\{\left(u(r_{12})\,u(r_{02})\,-\,u(r_{10})\right)\frac{ \delta\tilde{Z}\left(y;\{u\},\{\zeta\}\right)}{\delta u(r_{10})}\ +\ \left(\zeta(r_{12})\,\zeta(r_{02})\,-\,\zeta(r_{10})\right)\,\frac{ \delta\tilde{Z}\left(y;\{u\},\{\zeta\}\right)}{\delta\zeta(r_{10})}\right\}\]
Each term in Eq. (20) has the same transparent probabilistic interpretation as in Eq. (13).
### The master equation for \(\mathbf{\sigma_{n}}\)
Eq. (20) can be used for an arbitrary initial condition. For DIS scattering the initial condition of Eq. (17) results in the non-linear equation. This equation takes the most elegant form for the generating functional which is defined as
\[M\left(Y-Y_{0};r,b\right)\ =\ 1\ -\ Z\left(Y-Y_{0};\{u\},\{v\}\right) \tag{21}\] \[=\sum_{n=1,m=0}^{\infty}\ \frac{(-1)^{n+m+1}}{n!\,m!}\ \left\{\int\,\prod_{i=1}^{n}\ d^{2}r_{i}\,\int\,\prod_{k=1}^{m}\ d^{2}r_{k}\,\gamma(r_{i})\,\gamma_{in}(r_{k})\ \frac{\delta}{\delta u(r_{i})}\ \frac{\delta}{\delta v(r_{k})}\right\}\,Z\left(Y-Y_{0};\{u\},\{v\}\right)\Big{|}_{u(r)=1,v(r)=1}\]
Figure 2: Three vertices for gluon production according to the AGK cutting rules for the fan diagrams in the BFKL Pomeron calculus. The wavy lines denote the BFKL Pomerons. The wavy lines crossed by the dashed ones show the cut Pomerons.
where \(u(r)=1-\gamma(r)\) and \(v(r)=1-\gamma_{in}(r)\).
For \(M\left(Y;r,b\right)\) the non-linear equation is obtained from Eq. (20) in Ref.[37]:
\[\frac{\partial M(Y;r_{10},b)}{\partial Y} = \frac{\bar{\alpha}_{S}}{2\pi}\,\int\,d^{2}r_{2}\,K\left(r_{10}|r_{ 12},r_{02}\right)\left\{M(Y;r_{12},b)\,+\,M(Y;r_{20},b)\,-\,M(Y;r_{10},b)\right.\] \[\left.+M(Y;r_{12},b)M(Y;r_{20},b)\,-\,2\,M(Y;r_{12},b)N(Y;r_{20},b )-\,2\,N(Y;r_{12},b)M(Y;r_{20},b)\,+\,2N(Y;r_{12},b)N(Y;r_{20},b)\right\}\]
As a first check of this equation, note that \(M(y;r_{10},b)\) at \(\gamma_{in}=0\) gives the cross section of diffraction production: \(M(y;r_{10},b)|_{\gamma_{in}=0}=\sigma_{sd}\). This equation coincides with the equation for diffraction production of Ref.[47].
If we want to find the cross section with \(k\) produced gluons in dipole-nucleus (or dipole-proton) interactions, we need to calculate
\[\sigma_{k}\left(Y,r;b\right) = \frac{1}{k!}\,\prod_{i=1}^{k}\,\gamma_{in}(r_{i})\left(\frac{ \delta}{\delta\gamma_{in}(r_{i})}\,M\left(Y;\{\gamma\},\{\gamma_{in}\}\right) \right)\Big{|}_{\gamma_{in}=0} \tag{23}\]
where \(\gamma(r)\) is the low energy elastic amplitude and \(\gamma_{in}(r)=2\gamma(r)\) at low energy (\(Y_{0}\)).
### Solutions for Pomeron calculus in zero transverse dimension
For the Pomeron calculus in zero transverse dimension Eq. (20) and Eq. (22) were solved in Ref.[46]. The solutions for the generating functions take the forms:
\[Z_{0}\left(Y,u\right) = \frac{ue^{-\Delta\,Y}}{1-u\left(1\,-\,e^{-\Delta\,Y}\right)}; \tag{24a}\] \[Z\left(Y,u,v\right) = \frac{2ue^{-\Delta\,Y}}{1-u\left(1\,-\,e^{-\Delta\,Y}\right)}\,- \,\frac{(2u-v)e^{-\Delta\,Y}}{1-(2u-v)\left(1\,-\,e^{-\Delta\,Y}\right)}; \tag{24b}\]
where \(\Delta\) is the intercept of the BFKL Pomeron. From Eq. (23) we obtain:
\[\sigma_{k} = \frac{e^{\Delta\,Y}\,\left(e^{\Delta\,Y}\,-\,1\right)^{k-1}}{\left(1+2\,\gamma\left(e^{\Delta\,Y}\,-\,1\right)\right)^{k+1}}\,\gamma_{in}^{k} \tag{25}\]
where \(\gamma\) and \(\gamma_{in}\) are the scattering amplitude and the production cross section of two dipoles at low energy. From Eq. (15) \(\gamma_{in}=2\gamma\).
Using Eq. (24a) we can calculate the scattering amplitude, which has the following form:
\[N\left(Y\right) = \frac{\gamma}{1-\left(1-\gamma\right)\left(1\,\,-\,e^{-\Delta\,Y}\right)}\ =\ \frac{\tilde{\gamma}e^{\Delta\,Y}}{1\,+\,\tilde{\gamma}e^{\Delta\,Y}} \tag{26}\]
where \(\tilde{\gamma}=\gamma/(1-\gamma)\).
Considering \(e^{\Delta\,Y}\) as the Pomeron Green's function and using the AGK cutting rules for the scattering amplitude of Eq. (26), we reproduce at large values of \(Y\) the same \(\sigma_{k}\) which follow from Eq. (25). We believe that this observation is a strong argument that our probabilistic treatment of gluon production is correct.
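Since these zero-dimensional formulas anchor the intuition for what follows, a short symbolic cross-check may be useful. The sketch below (using sympy, with the shorthand \(x\equiv e^{\Delta\,Y}\)) builds \(M=1-Z\) from Eq. (24b), applies Eq. (23), recovers Eq. (25) for the first few \(k\), and confirms that with \(\gamma_{in}=2\gamma\) the sum \(\sum_{k}\sigma_{k}=2\gamma e^{\Delta Y}/\left(1+2\gamma\left(e^{\Delta Y}-1\right)\right)\) approaches the unitarity limit \(\sigma_{in}=1\) at large \(Y\):

```python
import sympy as sp

g, gin, x = sp.symbols('gamma gamma_in x', positive=True)   # x = exp(Delta*Y)
T = 1 - 1/x
u, v = 1 - g, 1 - gin

# Eq. (24b) with u = 1 - gamma, v = 1 - gamma_in; M = 1 - Z
Z = 2*u/(x*(1 - u*T)) - (2*u - v)/(x*(1 - (2*u - v)*T))
M = 1 - Z

def sigma(k):
    """Eq. (23) in zero dimension: gamma_in^k/k! * d^k M/d gamma_in^k at gamma_in = 0."""
    return gin**k / sp.factorial(k) * sp.diff(M, gin, k).subs(gin, 0)

for k in range(1, 6):                     # compare with Eq. (25)
    eq25 = x*(x - 1)**(k - 1) / (1 + 2*g*(x - 1))**(k + 1) * gin**k
    assert sp.simplify(sigma(k) - eq25) == 0

gv, xv = 0.1, 50.0                        # numerical unitarity check with gamma_in = 2*gamma
total = sum(xv*(xv - 1)**(k - 1) / (1 + 2*gv*(xv - 1))**(k + 1) * (2*gv)**k
            for k in range(1, 400))
print(total, 2*gv*xv / (1 + 2*gv*(xv - 1)))   # both ~ 0.926; -> 1 as x -> infinity
```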
## III The cross sections of gluon production deep in the saturation region
The main goal of this paper is to find \(\sigma_{k}\) by solving Eq. (22) in the kinematic region where the scattering amplitude has the form of Eq. (7). In other words, we are looking for the solutions for the scattering amplitude of the dipole with size \(r\equiv r_{01}\) and rapidity \(Y\) in the kinematic region where \(r^{2}Q_{s}^{2}(Y,b)\ \gg\ 1\).
### The total cross section
The total cross section can be obtained considering \(M\left(Y,r,b\right)\) for \(\gamma_{in}(r_{i})=2\gamma(r_{i})\). From the unitarity constraints of Eq. (19) we expect that in our kinematic region \(\sigma_{tot}\left(Y,r,b\right)\) approaches the unitarity limit of \(\sigma_{tot}\to 2\). Introducing
\(\sigma_{tot}\left(Y,r,b\right)\ =\ 2\ -\ \Delta_{tot}\left(Y,r,b\right)\) and \(N\left(Y,r,b\right)\ =\ 1\ -\ \Delta_{0}\left(Y,r,b\right)\), we can rewrite Eq. (22) in the following form, neglecting the contributions of \(\Delta_{tot}^{2}\) and/or \(\Delta_{0}^{2}\):
\[\frac{\partial\Delta_{tot}(Y;r_{01},b)}{\partial Y} = \frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c}r_{ 02}\,\gg\,1/Q_{s}\left(Y\right)\\ r_{12}\,\gg\,1/Q_{s}\left(Y\right)\end{subarray}}\!\!d^{2}r_{2}\,K\left(r_{01} |r_{12},r_{02}\right)\left(\Delta_{tot}(Y;r_{02},b)\ +\ \Delta_{tot}(Y;r_{12},b)\ -\ \Delta_{tot}(Y;r_{01},b)\right) \tag{27}\] \[- 2\,\frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c }r_{02}\,\gg\,1/Q_{s}\left(Y\right)\\ r_{12}\,\gg\,1/Q_{s}\left(Y\right)\end{subarray}}\!\!d^{2}r_{2}\,K\left(r_{01 }|r_{12},r_{02}\right)\left(\Delta_{0}(Y;r_{02},b),\,+\,\Delta_{0}(Y;r_{12},b)\right)\]
One can see that Eq. (27) has solution: \(\Delta_{tot}(Y;r_{01},b)\,=\,2\,\Delta_{0}(Y;r_{01},b)\). Indeed, substituting this solution into Eq. (27) we obtain the equation for \(\Delta_{0}(Y;r_{01},b)\):
\[\frac{\partial\Delta_{0}(Y;r_{01},b)}{\partial Y}\ =\ -\left(\frac{\bar{\alpha}_{S}}{2\pi} \int\limits_{\begin{subarray}{c}r_{02}\,\gg\,1/Q_{s}\left(Y\right)\\ r_{12}\,\gg\,1/Q_{s}\left(Y\right)\end{subarray}}\!\!d^{2}r_{2}\,K\left(r_{0 1}|r_{12},r_{02}\right)\right)\,\Delta_{0}(Y;r_{01},b)\ =\ -z\ \Delta_{0}(Y;r_{01},b) \tag{28}\]
which is the equation for \(\Delta_{0}\), that has been discussed in Ref.[34], and from which follows the asymptotic behaviour of Eq. (7).
### The diffraction production
As we have discussed the master equation (see Eq. (22)) coincides with the equation of Ref.[47], which has been derived in a quite different way than it is done here. For the cross section of diffraction production we need to consider \(M(Y;r_{10},b)\) of Eq. (21) at \(\gamma_{in}\left(r\right)=0\). From Eq. (19) we expect that \(\sigma_{in}(Y;r_{10},b)\) approaches 1 at high energies. Bearing this in mind we introduce \(\sigma_{sd}\left(Y,r,b\right)\ =\ 1\ -\ \Delta_{sd}\left(Y,r,b\right)\) and \(N\left(Y,r,b\right)\ =\ 1\ -\ \Delta_{0}\left(Y,r,b\right)\). Plugging these expressions in Eq. (22) we see that
\[\frac{\partial\Delta_{sd}(Y;r_{01},b)}{\partial Y}\ =\ -\ \left(\frac{\bar{ \alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c}r_{02}\,\gg\,1/Q_{s}\left(Y \right)\\ r_{12}\,\gg\,1/Q_{s}\left(Y\right)\end{subarray}}\!\!d^{2}r_{2}\,K\left(r_{0 1}|r_{12},r_{02}\right)\right)\,\Delta_{sd}(Y;r_{01},b)\ =\ -z\ \Delta_{sd}(Y;r_{01},b). \tag{29}\]
In the variable \(z\) of Eq. (5) Eq. (29) takes the form:
\[\kappa\frac{d\,\Delta_{sd}(z)}{d\,z}\ =\ -z\ \Delta_{sd}(z) \tag{30}\]
with the solution:
\[\Delta_{sd}(z)\ =\ C(z)\exp\left(-\frac{z^{2}}{2\,\kappa}\right) \tag{31}\]
where \(\kappa\) is given by Eq. (3) and \(C(z)\) is a smooth function of \(z\). Therefore, \(\sigma_{sd}\left(Y,r,b\right)\) at large values of \(Y\) has the form of Eq. (7) and shows the geometric scaling behaviour as it has been discussed in Ref.[47].
It is instructive to note that \(\sigma_{tot}\left(Y,r,b\right)\ =\ 2\ -\ \Delta_{tot}\left(Y,r,b\right)\) and \(\sigma_{sd}\left(Y,r,b\right)\ =\ 1\ -\ \Delta_{sd}\left(Y,r,b\right)\) are the solutions of the same equation. We believe that this observation is an important check that the main ideas of Ref.[37], which result in deriving Eq. (22), are correct.
### \(\sigma_{1}\left(Y,r,b\right)\)
Using Eq. (23) we can obtain the equations for \(\sigma_{k}\) from the master equation by differentiating this equation. First
taking \(\frac{\delta}{\delta\gamma_{in}(r_{i})}\) of both parts of Eq. (22) and putting \(\gamma_{in}(r_{i})=0\), we obtain the following equation for \(\sigma_{1}\left(Y,r,b\right)\):
\[\frac{\partial\sigma_{1}(Y;r_{01},b)}{\partial Y} = \frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c}r_{ 02}\,\gg\,1/Q_{s}(Y)\\ r_{12}\,\gg\,1/Q_{s}(Y)\end{subarray}}\!\!\!d^{2}r_{2}\,K\left(r_{01}|r_{12},r _{02}\right)\left\{\sigma_{1}\left(Y,r_{02},b\right)\,+\,\sigma_{1}\left(Y,r_{ 12},b\right)\,-\,\sigma_{1}\left(Y,r_{01},b\right)\right.\right. \tag{32}\] \[+ \left.\left.\sigma_{1}\left(Y,r_{02},b\right)\,\sigma_{sd}\left( Y,r_{02},b\right)\,+\,\sigma_{1}\left(Y,r_{12},b\right)\,\sigma_{sd}\left(Y,r_{02},b \right)\right.\right.\] \[- \left.\left.2\,\sigma_{1}\left(Y,r_{02},b\right)\,N\left(Y,r_{02 },b\right)\,-\,2\,\sigma_{1}\left(Y,r_{12},b\right)\,N\left(Y,r_{02},b\right) \right\}\right\}\]
The high energy asymptotic behaviour is obtained from Eq. (32) by substituting \(\sigma_{sd}\left(Y,r_{i,i+1},b\right)\,=\,1-\Delta_{sd}\left(z\right)\) and \(N\left(Y,r,b\right)\,=\,1\,\,-\,\,\Delta_{0}\left(Y,r,b\right)\); then Eq. (32) takes the following form:
\[\frac{\partial\sigma_{1}(Y;r_{01},b)}{\partial Y} = -\,\left(\frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray} {c}r_{02}\,\gg\,1/Q_{s}(Y)\\ r_{12}\,\gg\,1/Q_{s}(Y)\end{subarray}}\!\!\!d^{2}r_{2}\,K\left(r_{01}|r_{12},r _{02}\right)\right)\Bigg{\{}\,\sigma_{1}\left(Y,r_{01},b\right)\,-\,\,\sigma_{1 }\left(Y,r_{02},b\right)\,\Delta_{in}\left(Y,r_{02},b\right) \tag{33}\] \[-\,\sigma_{1}\left(Y,r_{12},b\right)\,\Delta_{in}\left(Y,r_{02},b \right)\Bigg{\}}\,=\,\,-z\,\sigma_{1}\left(Y,r_{01},b\right)\]
where \(\Delta_{in}\left(Y,r_{0i},b\right)=2\Delta_{0}\left(Y,r_{0i},b\right)\,-\, \Delta_{sd}\left(Y,r_{0i},b\right)\). The total inelastic cross section has the form: \(\sigma_{in}=1-\Delta_{in}\left(Y,r_{0i},b\right)\) and the above equation for \(\Delta_{in}\) follows from the unitarity constraints (see Eq. (19)).
Neglecting the second and the third terms in the curly bracket one can see that the geometric scaling solution of Eq. (33) has the same form as Eq. (31) for \(\Delta_{sd}(z)\). Hence
\[\kappa\frac{d\,\sigma_{1}(z)}{d\,z}\,\,=\,\,-z\,\,\sigma_{1}(z)\quad\mbox{ with the solution}\quad\sigma_{1}(z)\,\,=\,\,C(z)\exp\left(-\frac{z^{2}}{2\,\kappa}\right) \tag{34}\]
We will discuss below the influence of the neglected terms on the solution of Eq. (34).
It turns out that it is convenient to discuss the solutions in the momentum representation, where Eq. (34) takes the form of convolution. Adding and subtracting the gluon reggeization term in the momentum representation we obtain the following equation for the geometric scaling solution:
\[\kappa\frac{d\,\sigma_{1}(\tilde{z})}{d\,\tilde{z}}\,\,=\,\,\,\,\int\frac{d^{ 2}k_{T}^{\prime}}{(2\pi)^{2}}K\left(\mathbf{k}_{T},\mathbf{k}_{T}^{\prime}\right)\,\, \sigma_{1}\left(\tilde{z}^{\prime}\right)\,\,-\,\,\tilde{z}\,\sigma_{1}\left( \tilde{z}\right)\,\,+\,\,\Delta_{in}\left(\tilde{z}\right)\sigma_{1}\left( \tilde{z}\right) \tag{35}\]
where \(K\left(\mathbf{k}_{T},\mathbf{k}_{T}^{\prime}\right)\) is the BFKL kernel in momentum representation:
\[K\left(\mathbf{k}_{T},\mathbf{k}_{T}^{\prime}\right)\,\,\sigma_{1}\left(Y,\mathbf{k}_{T}^ {\prime},b\right)\,=\,\frac{1}{\left(\mathbf{k}_{T}-\mathbf{k}_{T}^{\prime}\right)^{2}} \,\,\,\sigma_{1}\left(Y,\mathbf{k}_{T}^{\prime},b\right)\,\,-\,\,\frac{k_{T}^{2}} {\left(\mathbf{k}_{T}-\mathbf{k}_{T}^{\prime}\right)^{2}\,\left(\left(\mathbf{k}_{T}-\mathbf{k} _{T}^{\prime}\right)^{2}\,+\,k_{T}^{\prime 2}\right)}\,\,\sigma_{1}\left(Y,\mathbf{k}_{T},b\right) \tag{36}\]
\(\tilde{z}\) is a new scaling variable:
\[\tilde{z}\,\,=\,\,\kappa\,\bar{\alpha}_{S}\,Y\,\,-\,\,\ln k_{T}^{2}\,\,=\,\, \ln\left(\frac{Q_{s}^{2}\left(Y,b\right)}{k_{T}^{2}}\right) \tag{37}\]
Using the Mellin transform:
\[\sigma_{1}\left(\tilde{z},b\right)\,\,=\,\int\limits_{\epsilon-i\infty}^{ \epsilon+i\infty}\frac{d\gamma}{2\,\pi\,i}e^{\gamma\,\tilde{z}}\,\sigma_{1} \left(\gamma,b\right) \tag{38}\]
we can rewrite Eq. (35) in the form:
\[\left(\kappa\,\gamma\,\,-\,\,\chi\left(\gamma\right)\right)\sigma_{1}\left( \gamma,b\right)=\frac{d\sigma_{1}\left(\gamma,b\right)}{d\,\gamma} \tag{39}\]
with the solution
\[\sigma_{1}\left(\gamma,b\right)\ =\ \sigma_{1}^{init}\left(b\right)\exp\left(\frac{1}{ 2}\kappa\,\gamma^{2}\ +\,\int_{\frac{1}{2}}^{\gamma}\chi\left(\gamma^{\prime}\right)d\gamma^{\prime}\right) \tag{40}\]
where \(\sigma_{1}^{init}\) has to be found from the initial conditions. Plugging Eq. (40) into Eq. (38) we can take the integral over \(\gamma\) using the method of steepest descent. The equation for the saddle point takes the form:
\[\tilde{z}+\kappa\gamma_{SP}\,+\,\chi\left(\gamma_{SP}\right)=0 \tag{41}\]
Since \(\chi\left(\gamma_{SP}\right)\sim\ln\gamma_{SP}\), the saddle point is \(\gamma_{SP}\approx-\tilde{z}/\kappa\). Hence
\[\sigma_{1}\left(\tilde{z}\right)\ =\ C\left(\tilde{z}\right)e^{-\frac{\tilde{z}^{2} }{2\kappa}} \tag{42}\]
with \(C\left(\tilde{z}\right)\) being a smooth function of \(\tilde{z}\). Therefore, at large values of \(Y\) the solutions in coordinate and momentum representations have the same form in \(z\) and \(\tilde{z}\), respectively. The last term in Eq. (35) leads to the replacement \(\tilde{z}\rightarrow\tilde{z}-\int\limits_{0}^{\infty}d\tilde{z}^{\prime}\,\Delta_{in}\left(\tilde{z}^{\prime}\right)\) in Eq. (42). One can see this by solving the simplified equation:
\[\kappa\frac{d\,\sigma_{1}(\tilde{z})}{d\,\tilde{z}}\ =\ \ \ -\ \tilde{z}\,\sigma_{1} \left(\tilde{z}\right)\ +\ \Delta_{in}\left(\tilde{z}\right)\sigma_{1} \left(\tilde{z}\right) \tag{43}\]
with the solution:
\[\sigma_{1}\left(\tilde{z}\right)=\mathrm{C}\exp\left(-\frac{ \tilde{z}^{2}}{2\,\kappa}+\frac{1}{\kappa}\int_{0}^{\tilde{z}}d\tilde{z}^{ \prime}\,\Delta_{in}\left(\tilde{z}^{\prime}\right)\right)\] \[=\mathrm{C}\exp\left(-\frac{\tilde{z}^{2}}{2\,\kappa}+\frac{1}{ \kappa}\int_{0}^{\infty}d\tilde{z}^{\prime}\,\Delta_{in}\left(\tilde{z}^{ \prime}\right)-\underbrace{\frac{1}{\kappa}\int_{\tilde{z}}^{\infty}d\tilde{z}^ {\prime}\,\Delta_{in}\left(\tilde{z}^{\prime}\right)}_{\ll 1}\right)=\mathrm{C}^{ \prime}\exp\left(-\frac{\left(\tilde{z}-\int_{\tilde{z}}^{\infty}d\tilde{z}^ {\prime}\,\Delta_{in}\left(\tilde{z}^{\prime}\right)\right)^{2}}{2\,\kappa}\right) \tag{44}\]
### \(\sigma_{2}\left(Y,r,b\right)\)
Applying \(\frac{1}{2}\frac{\delta}{\delta\gamma_{in}\left(r_{1}\right)}\ \frac{\delta}{\delta\gamma_{in}\left(r_{2}\right)}\) to both parts of the master equation we obtain:
\[\frac{\partial\sigma_{2}(Y;r_{01},b)}{\partial Y} = \frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c}r_{ 02}\,\gg\,1/Q_{s}(Y)\\ r_{12}\,\gg\,1/Q_{s}(Y)\end{subarray}}\!\!\!d^{2}r_{2}\,K\left(r_{01}|r_{12}, r_{02}\right)\left\{\sigma_{2}\left(Y,r_{02},b\right)\,+\,\sigma_{2}\left(Y,r_{12}, b\right)\,-\,\sigma_{2}\left(Y,r_{01},b\right)\right. \tag{45}\] \[+ \sigma_{2}\left(Y,r_{02},b\right)\,\sigma_{sd}\left(Y,r_{02},b \right)\,+\,\sigma_{1}\left(Y,r_{12},b\right)\,\sigma_{1}\left(Y,r_{02},b \right)\,+\,\sigma_{2}\left(Y,r_{12},b\right)\,\sigma_{sd}\left(Y,r_{02},b\right)\] \[- \left.2\,\sigma_{2}\left(Y,r_{02},b\right)\,N\left(Y,r_{02},b \right)\,-\,2\,\sigma_{2}\left(Y,r_{12},b\right)\,N\left(Y,r_{02},b\right) \right\}\]
Replacing at large \(Y\) \(\sigma_{sd}\left(Y,r,b\right)\ =\ 1\ -\ \Delta_{sd}\left(Y,r,b\right)\) and \(N\left(Y,r,b\right)\ =\ 1\ -\ \Delta_{0}\left(Y,r,b\right)\) we reduce Eq. (45) to the form:
\[\frac{\partial\sigma_{2}(Y;r_{01},b)}{\partial Y} = -\left(\frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c }r_{02}\,\gg\,1/Q_{s}(Y)\\ r_{12}\,\gg\,1/Q_{s}(Y)\end{subarray}}\!\!\!d^{2}r_{2}\,K\left(r_{01}|r_{12}, r_{02}\right)\right)\,\sigma_{2}\left(Y,r_{01},b\right)\] \[+ \frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c}r_{ 02}\,\gg\,1/Q_{s}(Y)\\ r_{12}\,\gg\,1/Q_{s}(Y)\end{subarray}}\!\!\!d^{2}r_{2}\,K\left(r_{01}|r_{12}, r_{02}\right)\,\Big{\{}\,\sigma_{1}\left(Y,r_{02},b\right)\,\sigma_{1}\left(Y,r_{12},b\right)\] \[+ \sigma_{2}\left(Y,r_{02},b\right)\left(2\,\Delta_{0}\left(Y,r_{12}, b\right)\,-\,\Delta_{sd}\left(Y,r_{12},b\right)\right)\ +\ \sigma_{2}\left(Y,r_{12},b\right)\left(2\,\Delta_{0}\left(Y,r_{02},b\right)\,-\, \Delta_{sd}\left(Y,r_{02},b\right)\right)\]
In momentum representation in the region with the geometric scaling behaviour Eq. (46) takes the form:
\[\kappa\frac{d\,\sigma_{2}\left(\tilde{z}\right)}{d\,\tilde{z}}\ =\ -\tilde{z}\,\sigma_{2}\left(\tilde{z} \right)\ +\ 2\,\left(\underbrace{2\,\Delta_{0}\left(\tilde{z}\right)\,-\,\Delta_{sd} \left(\tilde{z}\right)}_{\Delta_{in}\left(\tilde{z}\right)}\right)\sigma_{2} \left(\tilde{z}\right)\,+\ \sigma_{1}^{2}\left(\tilde{z}\right) \tag{47}\]
where \(\Delta_{in}\left(\tilde{z}\right)\) is the momentum representation of \(\Delta_{in}\left(z\right)\) with \(\sigma_{in}\left(z\right)=1-\Delta_{in}\left(z\right)\).
The general solution to Eq. (47) is the sum of the solution to the homogeneous part of the equation and a particular solution of the non-homogeneous one. The homogeneous solution has the form:
\[\sigma_{2}\left(\tilde{z}\right)\ =\ \sigma_{1}\left(\tilde{z}\right)\exp\left(-2\int\limits_{\tilde{z}}^{\infty}\Delta_{in}\left(\tilde{z}^{\prime}\right)\frac{d\tilde{z}^{\prime}}{\kappa}\right) \tag{48}\]
where \(\sigma_{1}\left(\tilde{z}\right)\) is given by Eq. (42).
We can find the initial conditions for \(\sigma_{n}\) by assuming that for \(\tilde{z}\,<\,0\) only the BFKL Pomeron exchange contributes to \(\sigma_{1}\), while all \(\sigma_{k}=0\) in this kinematic region. In other words, we assume that we can neglect the "fan" BFKL Pomeron diagrams, dealing only with the BFKL Pomeron exchange for \(\tilde{z}<0\). For these initial conditions the solution to Eq. (47) can be written as:
\[\sigma_{2}\left(\tilde{z}\right)\ =\ \sigma_{1}\left(\tilde{z}\right)\exp\left(-2\int\limits_{\tilde{z}}^{\infty}\Delta_{in}\left(\tilde{z}^{\prime}\right)\frac{d\tilde{z}^{\prime}}{\kappa}\right)\int\limits_{0}^{\tilde{z}}\sigma_{1}\left(\tilde{z}^{\prime}\right)\frac{d\tilde{z}^{\prime}}{\kappa}, \tag{49}\]
if we neglect the contributions of the order of \(\sigma_{1}^{3}\).
### \(\sigma_{3}\left(Y,r,b\right)\) and \(\sigma_{k}\left(Y,r,b\right)\)
Using Eq. (23) we obtain for \(\sigma_{3}(Y;r_{01},b)\):
\[\frac{\partial\sigma_{3}(Y;r_{01},b)}{\partial Y} = \frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c}r_ {02}\geq 1/Q_{x}(Y)\\ r_{12}\geq 1/Q_{x}(Y)\end{subarray}}\!\!\!d^{2}r_{2}\,K\left(r_{01}|r_{12},r_{02} \right)\left\{\sigma_{3}\left(Y,r_{02},b\right)\,+\,\sigma_{3}\left(Y,r_{12}, b\right)\,-\,\sigma_{3}\left(Y,r_{01},b\right)\right. \tag{50}\] \[+ \sigma_{3}\left(Y,r_{02},b\right)\,\sigma_{sd}\left(Y,r_{02},b \right)\ +\ \sigma_{3}\left(Y,r_{12},b\right)\,\sigma_{sd}\left(Y,r_{02},b\right)\] \[+ \sigma_{2}\left(Y,r_{02},b\right)\,\sigma_{1}\left(Y,r_{12},b \right)\ +\ \sigma_{2}\left(Y,r_{12},b\right)\,\sigma_{1}\left(Y,r_{02},b\right)\] \[- \left.2\,\sigma_{3}\left(Y,r_{02},b\right)\,N\left(Y,r_{02},b \right)\,-\,2\,\sigma_{3}\left(Y,r_{12},b\right)\,N\left(Y,r_{02},b\right) \right\}\]
Using the asymptotic expansions for \(\sigma_{sd}\) and \(N\), we obtain for the solutions with geometric scaling behaviour:
\[\kappa\frac{d\,\sigma_{3}\left(\tilde{z}\right)}{d\,\tilde{z}}\ =\ -\tilde{z}\,\sigma_{3}\left(\tilde{z} \right)\ +\ 2\,\Delta_{in}\left(\tilde{z}\right)\sigma_{3}\left(\tilde{z}\right)\,+\ 2\sigma_{2}\left(\tilde{z}\right)\,\sigma_{1}\left(\tilde{z}\right) \tag{51}\]
The solution to Eq. (51) has the form:
\[\sigma_{3}\left(\tilde{z}\right)\ =\ \sigma_{1}\left(\tilde{z}\right)\exp \left(-2\int\limits_{\tilde{z}}^{\infty}\Delta_{in}\left(\tilde{z}^{\prime} \right)d\tilde{z}^{\prime}\right)\left(\int\limits_{0}^{\tilde{z}}\sigma_{1} \left(\tilde{z}^{\prime}\right)\ \frac{d\tilde{z}^{\prime}}{\kappa}\right)^{2} \tag{52}\]
The general equation for \(\sigma_{k}\left(Y,r,b\right)\) reads as follows:
\[\frac{\partial\sigma_{k}(Y;r_{01},b)}{\partial Y} = \frac{\bar{\alpha}_{S}}{2\pi}\int\limits_{\begin{subarray}{c}r_{02}\geq 1/Q_{s}(Y)\\ r_{12}\geq 1/Q_{s}(Y)\end{subarray}}\!\!\!d^{2}r_{2}\,K\left(r_{01}|r_{12},r_{02}\right)\left\{\sigma_{k}\left(Y,r_{02},b\right)\,+\,\sigma_{k}\left(Y,r_{12},b\right)\,-\,\sigma_{k}\left(Y,r_{01},b\right)\right. \tag{53}\] \[+ \sigma_{k}\left(Y,r_{02},b\right)\,\sigma_{sd}\left(Y,r_{12},b\right)\,+\,\sigma_{k}\left(Y,r_{12},b\right)\,\sigma_{sd}\left(Y,r_{02},b\right)+\sum\limits_{j=1}^{k-1}\sigma_{k-j}\left(Y,r_{02},b\right)\,\sigma_{j}\left(Y,r_{12},b\right)\] \[- \left.2\,\sigma_{k}\left(Y,r_{02},b\right)\,N\left(Y,r_{12},b\right)\,-\,2\,\sigma_{k}\left(Y,r_{12},b\right)\,N\left(Y,r_{02},b\right)\,\right\}\]
We can check that the solution to Eq. (53) has the following form:
\[\sigma_{k}\left(\tilde{z}\right)\ =\ \sigma_{1}\left(\tilde{z}\right)\exp\left(-2 \int\limits_{\tilde{z}}^{\infty}\Delta_{in}\left(\tilde{z}^{\prime}\right)d \tilde{z}^{\prime}\right)\left(\,\int\limits_{0}^{\tilde{z}}\sigma_{1}\left( \tilde{z}^{\prime}\right)\ \frac{d\tilde{z}^{\prime}}{\kappa}\right)^{k-1} \tag{54}\]
Note that in the term \(\sum_{j=1}^{k-1}\sigma_{k-j}\left(Y,r_{02},b\right)\,\sigma_{j}\left(Y,r_{12},b\right)\) in Eq. (53) we can replace the factor \(\exp\left(-2\int\limits_{\tilde{z}}^{\infty}\Delta_{in}\left(\tilde{z}^{\prime}\right)d\tilde{z}^{\prime}\right)\) by \(1\), keeping accuracy of the order of \(\sigma_{1}^{k}\).
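Explicitly, in the shorthand \(E\left(\tilde{z}\right)\) and \(I\left(\tilde{z}\right)\) introduced after Eq. (49), Eq. (54) reads \(\sigma_{j}=\sigma_{1}\,E\,I^{\,j-1}\), and the convolution term in the geometric scaling region becomes

\[\sum_{j=1}^{k-1}\sigma_{k-j}\left(\tilde{z}\right)\,\sigma_{j}\left(\tilde{z}\right)\ =\ \left(k-1\right)\sigma_{1}^{2}\left(\tilde{z}\right)\,E^{2}\left(\tilde{z}\right)\,I^{\,k-2}\left(\tilde{z}\right)\]

while differentiating Eq. (54) yields \(\kappa\,d\sigma_{k}/d\tilde{z}=\left(-\tilde{z}+2\Delta_{in}\right)\sigma_{k}+\left(k-1\right)\sigma_{1}^{2}\,E\,I^{\,k-2}\). The two expressions agree exactly after the replacement \(E\to 1\) described above; for \(k=3\) this reduces to the term \(2\,\sigma_{2}\,\sigma_{1}\) in Eq. (51).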
In Eq. (54) we have two smooth functions that have to be determined from the initial conditions: \(C(\tilde{z})\) of Eq. (42) and the function \(D\left(\tilde{z}\right)\) in the expression \(2\Delta_{in}\left(\tilde{z}\right)=D(\tilde{z})\exp\left(-\frac{\tilde{z}^{2}}{2\,\kappa}\right)\). We believe that the only condition we need is
\[\int\limits_{0}^{\infty}C\left(\tilde{z}^{\prime}\right)\,\frac{d\tilde{z}^{ \prime}}{\kappa}\ =\ 1 \tag{55}\]
In this case the unitarity constraints of Eq. (19) and/or Eq. (27) give:
\[\sigma_{in}\left(\tilde{z}\right)\ =\ \sum_{k=1}^{\infty}\sigma_{k}\left( \tilde{z}\right)\ \propto\ \tilde{z} \tag{56}\]
which provides that \(\sigma_{in}\left(z\right)\) in the coordinate representation reaches the unitarity limit \(\sigma_{in}\left(z\right)=1\). The sum itself has the form:
\[\sigma_{in}\left(\tilde{z}\right)\ =\ \frac{e^{-\tilde{z}^{2}/2\kappa}C\left( \tilde{z}\right)\exp\left(-\int\limits_{\tilde{z}}^{\infty}D\left(\tilde{z}^{ \prime}\right)e^{-\tilde{z}^{\prime 2}/2\kappa}\frac{d\tilde{z}^{\prime}}{\kappa} \right)}{\int\limits_{\tilde{z}}^{\infty}C\left(\tilde{z}^{\prime}\right)e^{- \tilde{z}^{\prime 2}/2\kappa}\frac{d\tilde{z}^{\prime}}{\kappa}} \tag{57}\]
while
\[\sigma_{k}\left(\tilde{z}\right)\ =\ e^{-\tilde{z}^{2}/2\kappa}C\left(\tilde{z} \right)\exp\left(-\int\limits_{\tilde{z}}^{\infty}D\left(\tilde{z}^{\prime} \right)e^{-\tilde{z}^{\prime 2}/2\kappa}\frac{d\tilde{z}^{\prime}}{\kappa} \right)\left(1\ -\ \int\limits_{\tilde{z}}^{\infty}C\left(\tilde{z}^{\prime}\right)e^{- \tilde{z}^{\prime 2}/2\kappa}\frac{d\tilde{z}^{\prime}}{\kappa}\right)^{k-1} \tag{58}\]
Neglecting \(\int\limits_{\tilde{z}}^{\infty}D\left(\tilde{z}^{\prime}\right)e^{-\tilde{z} ^{\prime 2}/2\kappa}\frac{d\tilde{z}^{\prime}}{\kappa}\,\sim e^{-\tilde{z}^{ \prime 2}/2\kappa}\,\ll 1\) and denoting \(\int\limits_{\tilde{z}}^{\infty}C\left(\tilde{z}^{\prime}\right)e^{-\tilde{z} ^{\prime 2}/2\kappa}\frac{d\tilde{z}^{\prime}}{\kappa}\) by \(1/\bar{n}\left(\tilde{z}\right)\) we see that
\[\sigma_{k}\left(\tilde{z}\right)\ =\ e^{-\tilde{z}^{2}/2\kappa}C\left(\tilde{z} \right)\left(1\ -\ \frac{1}{\bar{n}\left(\tilde{z}\right)}\right)^{k-1}\,\xrightarrow{\bar{n} \left(\tilde{z}\right)\gg 1}e^{-\tilde{z}^{2}/2\kappa}C\left(\tilde{z}\right)\exp \Big{(}-\frac{k-1}{\bar{n}\left(\tilde{z}\right)}\Big{)} \tag{59}\]
Eq. (59), as well as all equations in this paper, is written at fixed impact parameter \(b\). Hence we need to integrate \(\sigma_{k}\left(\tilde{z}\right)\) over \(b\) to obtain the cross section for producing \(k\) gluons. The \(b\)-dependence enters our formulae through the saturation momentum (see Eq. (5), Eq. (9) and Eq. (37)). In perturbative QCD the variable \(\xi\) in Eq. (5) is determined by the form of the eigenfunction of the BFKL equation (see Eq. (8)). However, it has been shown [49; 50; 51; 52] that the typical values of \(b\) that contribute to the integrals for the cross sections turn out to be of the order of \(re^{\lambda Y}\), and they lead to a violation of the Froissart theorem [53]. Therefore, we need to introduce non-perturbative corrections at large \(b\). Eq. (59) is written in a convenient form to introduce such corrections to the saturation momentum: \(Q_{s}^{2}\left(Y,b\right)=Q_{s}^{2}\left(Y,b=0\right)\exp\left(-\mu\,b\right)\), where \(\mu\) is a new non-perturbative parameter. Since \(\tilde{z}\) depends on \(b\) only through \(\ln Q_{s}^{2}\left(Y,b\right)\), this ansatz amounts to the replacement \(\tilde{z}(b)=\tilde{z}(b=0)-\mu\,b\). The multiplicity \(\bar{n}\left(\tilde{z}\right)\) takes the form
\[\bar{n}\left(\tilde{z}(b)\right)\ =\ \left(\frac{C\left(\infty\right)}{\tilde{z}(b=0)-\mu\,b }\exp\left(-\frac{\left(\tilde{z}(b=0)-\mu b\right)^{2}}{2\kappa}\right) \right)^{-1}\ =\ \frac{1}{\mathcal{A}}\,\exp\left(\frac{\left(\tilde{z}(b=0)-\mu b \right)^{2}}{2\kappa}\right) \tag{60}\]
where \(\mathcal{A}\) is a smooth function of \(\tilde{z}\).
The region of integration over \(b\) is \(0<b<\tilde{z}(b=0)/\mu\), since for larger \(b\) the scattering amplitude is small and cannot be treated in our approach. Introducing a new variable \(t=\tilde{z}(b=0)-\mu\,b\), one can see that the integral has a maximum at \(t_{SP}=\sqrt{2\,\kappa\,\ln\left({\cal A}(k-1)\right)}\) (see Fig. 3). Therefore, for \(t_{SP}\,\leq\,\tilde{z}(b=0)\) the main contribution is determined by the saddle point, which gives
\[\sigma_{k}\ =\ \frac{t_{SP}\,(2\,\pi)^{3/2}\kappa}{e\,\mu^{2}}\,\frac{1}{k-1}\,\frac{1}{t_{SP}}\left(\tilde{z}(b=0)-t_{SP}\right)\ =\ \frac{(2\,\pi)^{3/2}\kappa}{e\,\mu^{2}}\frac{1}{(k-1)}\Bigg{(}\tilde{z}(b=0)-\sqrt{2\,\kappa\,\ln\left({\cal A}(k-1)\right)}\Bigg{)} \tag{61}\]
These \(\sigma_{k}\) depend only mildly on \(\tilde{z}(b=0)\) for all \(k-1\leq\,\bar{n}\left(\tilde{z}(b=0)\right)\). For larger \(k\) the main contributions stem from \(b\to 0\). Estimating the integral at \(b=0\) by the method of steepest descent, we obtain for \(k-1\,>\,\bar{n}\left(\tilde{z}(b=0)\right)\):
\[\sigma_{k}\left(\tilde{z}(b=0),t=0\right)\ =\ \int\limits_{0<b<\tilde{z}(b=0)/\mu}\!\!\!\!d^{2}b\ \,\sigma_{k}\left(\tilde{z}(b)\right) \tag{62}\]

The probabilities of producing \(k\) gluons and the resulting entropy are defined as

\[p_{k}\ =\ \sigma_{k}\Big{/}\sum_{j=1}^{\infty}\sigma_{j}\,;\qquad\qquad S_{E}\ =\ -\,\sum_{k}p_{k}\,\ln p_{k} \tag{63}\]
From Eq. (64) one can see that (i) the largest contribution stems from the first sum in Eq. (65), since the second term is suppressed as \(1/\tilde{z}^{4}(b=0)\) (see Eq. (63)); and (ii) in \(\ln p_{k}\) we can restrict ourselves to the term \(-\ln(k-1)\). All other contributions are smaller. Taking the integral, we obtain for large values of \(\tilde{z}(b=0)\):
\[S_{E} = 0.3\,\frac{\tilde{z}^{2}(b=0)}{2\,\kappa} \tag{66}\]
In Eq. (66) we see two strange results. First, \(S_{E}\propto\frac{\tilde{z}^{2}(b=0)}{2\,\kappa}\). At large \(Y\), \(\frac{\tilde{z}^{2}(b=0)}{2\,\kappa}\) can be written as \(S_{E}\propto\frac{1}{2}\tilde{z}\,\bar{\alpha}_{S}\,Y\), which is two times smaller than the entropy for the BFKL cascade estimated in Ref. [13]. However, it turns out that the extra factor \(\frac{1}{2}\) stems from our assumption of geometric scaling behaviour of the cross sections. In Ref. [13], as well as in Ref. [54], it is assumed that the cross sections are functions of \(Y\) and \(\tilde{z}\), not of \(\tilde{z}\) alone. Coming back to Eq. (33), one can see that for \(\sigma_{1}\left(Y,\tilde{z}\right)\) we obtain the solution \(\sigma_{1}\left(Y,\tilde{z}\right)\propto\exp\left(-\,\bar{\alpha}_{S}Y\tilde{z}\right)\) and \(\bar{n}\left(\tilde{z}(b=0)\right)\propto\,\exp\left(\bar{\alpha}_{S}Y\tilde{z}\right)\). Finally, \(S_{E}\propto\bar{\alpha}_{S}Y\tilde{z}\), in agreement with Ref. [13]. Second, the factor \(0.3\) in Eq. (66) stems from the integration over \(b\). Indeed, if we estimate the entropy at fixed \(b\) using Eq. (59), we obtain \(S_{E}\ =\ \frac{\tilde{z}^{2}(b)}{2\,\kappa}\). Therefore, we can state that the non-perturbative corrections that determine the large-\(b\) behaviour of the saturation momentum lead to a decrease of the entropy, if we believe the estimates of Refs. [13; 54] for the entropy of the BFKL cascade at the moment of interaction with the target.
## V Conclusions
In this paper we found the multiplicity distributions for produced gluons based on the equations derived in Ref. [37]. These equations follow from the generating functional approach to the Color Glass Condensate effective theory and from the AGK cutting rules [38]. The AGK cutting rules allow us to use the principal property of the BFKL Pomeron: it gives the cross section of produced gluons in the particular kinematic region of the leading log(1/x) approximation of perturbative QCD.
We solve the equations in the deep saturation region where the scattering amplitude has the form of Eq. (7). It turns out that the multiplicity distribution for \(k-1\,<\,\bar{n}\propto\exp\left(z^{2}/2\kappa\right)\) is proportional to \((1/(k-1))\Big{(}z-\sqrt{2\,\kappa\,\ln(k-1)}\Big{)}\). In this equation \(\bar{n}\propto\exp\left(z^{2}/2\kappa\right)\) is the average number of produced gluons with \(z=\kappa\,\bar{\alpha}_{S}Y-\ln Q^{2}\) where \(Q^{2}\) is the virtuality of the photon and \(Y=\ln(1/x)\). For \(k-1\,\geq\,\bar{n}\propto\exp\left(z^{2}/2\kappa\right)\) the multiplicity distribution almost reproduces the KNO scaling behaviour with the average number of gluons \(\bar{n}\propto\exp\left(z^{2}/2\kappa\right)\). The value of \(\kappa\) is given by Eq. (3). The KNO function \(\Psi\left(\frac{k}{\bar{n}}\right)=\exp\left(-\,k/\bar{n}\right)\).
We estimated the value of the entropy of produced gluons, which has been a subject of hot recent discussions [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27]. We obtain \(S_{E}=0.3\,\frac{\tilde{z}^{2}}{2\kappa}\). The first factor, \(\frac{\tilde{z}^{2}}{2\kappa}\), is twice smaller than the entropy of the BFKL cascade estimated in Ref. [13] (see also Ref. [54])3.
Footnote 3: In Refs.[47; 54] we cut the divergent entropy of the BFKL cascade by the saturation momentum. We believe that the saturation momentum provides the natural scale for interactions with the target. The result of this paper supports this assumption.
As we have discussed in the previous section, we believe that this difference stems from our assumption that all cross sections have geometric scaling behaviour. Second, the factor \(0.3\) in the equation for the entropy stems from the non-perturbative corrections, which provide the correct behaviour of the saturation momentum at large \(b\). Therefore, we claim that the non-perturbative corrections can decrease the entropy, if we believe the estimates of Refs. [13; 54] for the entropy of the BFKL cascade. We would like to stress that we deliberately calculated the entropy of the produced dipoles, which can be measured and has been measured in DIS [55]: we evaluated the cross sections for \(n\) produced dipoles, estimated the probability \(p_{n}\) to produce \(n\) dipoles, and determined the entropy \(S_{E}=-\sum_{n}p_{n}\ln p_{n}\). It should be noted that the entropy at fixed impact parameter is equal to \(S_{E}\ =\ \ln\left(\bar{n}\left(z\right)\right)=\frac{z(b)^{2}}{2\,\kappa}\) and coincides with the entropy of the parton cascade estimated in Refs. [13; 54]. Our results can be trusted at large values of \(z\), and it is difficult to compare them with the experimental data of Ref. [55], since it turns out that for all \(x\) in that paper \(z\leq 1\) with the saturation momentum from Ref. [56].
It should be stressed that we consider DIS in the region of large \(z\) where the scattering amplitude has the geometric scaling behaviour. It means that we can discuss DIS on a proton at small \(x\), but we have to be very careful with DIS on a nuclear target. Indeed, for a nuclear target the geometric scaling behaviour is valid only in a limited region even at large \(z\) [57; 58; 59]. In a vast part of the saturation region we do not expect geometric scaling behaviour for the cross section of produced gluons, and we have to solve the equation for \(\sigma_{k}\) with the initial conditions:
\[\sigma_{k}\left(\xi,Y=Y_{A}\right)=\frac{1}{k!}\Omega^{k}\left(Y_{A},r,b\right)\exp\left(-\,\Omega\left(Y_{A},r,b\right)\right) \tag{67}\]
where \(\Omega\left(Y_{A},r,b\right)=\sigma_{\rm dipole-proton}\left(Y_{A},r\right)\,T_{A}\left(b\right)\), \(\sigma_{\rm dipole-proton}\left(Y_{A},r\right)=2\int d^{2}b\,N_{0}\left(Y_{A},r,b\right)\), and \(N_{0}\) is the initial condition for the scattering amplitude of the dipole with the proton. \(Y_{A}=\ln A^{1/3}\), and \(A\) is the number of nucleons in the nucleus. \(T_{A}\) is the optical width of the nucleus, which gives the number of nucleons at a given value of the impact parameter \(b\). Finding the multiplicity distribution for DIS on a nucleus is naturally our next problem to solve.
|
2310.20067 | Vignat: Vulnerability identification by learning code semantics via
graph attention networks | Vulnerability identification is crucial to protect software systems from
attacks for cyber-security. However, huge projects have more than millions of
lines of code, and the complex dependencies make it hard to carry out
traditional static and dynamic methods. Furthermore, the semantic structure of
various types of vulnerabilities differs greatly and may occur simultaneously,
making general rule-based methods difficult to extend. In this paper, we
propose \textit{Vignat}, a novel attention-based framework for identifying
vulnerabilities by learning graph-level semantic representations of code. We
represent codes with code property graphs (CPGs) in fine grain and use graph
attention networks (GATs) for vulnerability detection. The results show that
Vignat is able to achieve $57.38\%$ accuracy on reliable datasets derived from
popular C libraries. Furthermore, the interpretability of our GATs provides
valuable insights into vulnerability patterns. | Shuo Liu, Gail Kaiser | 2023-10-30T22:31:38Z | http://arxiv.org/abs/2310.20067v1 | # _Vignat_: Vulnerability Identification by Learning Code Semantics via Graph Attention Networks
###### Abstract
Vulnerability identification is crucial to protect software systems from attacks for cyber-security. However, huge projects have more than millions of lines of code, and the complex dependencies make it hard to carry out traditional static and dynamic methods. Furthermore, the semantic structure of various types of vulnerabilities differs greatly and may occur simultaneously, making general rule-based methods difficult to extend. In this paper, we propose _Vignat_, a novel attention-based framework for identifying vulnerabilities by learning graph-level semantic representations of code. We represent codes with code property graphs (CPGs) in fine grain and use graph attention networks (GATs) for vulnerability detection. The results show that _Vignat_ is able to achieve \(57.38\%\) accuracy on reliable datasets derived from popular C libraries. Furthermore, the interpretability of our GATs provides valuable insights into vulnerability patterns.
Vulnerability identification · Graph attention networks · Code property graph
## 1 Introduction
Vulnerability identification is crucial in ensuring the security, integrity, and resilience of systems, networks, and applications. By proactively identifying and addressing vulnerabilities, organizations can reduce the risk of security breaches, disruption of services and malware propagation. Vulnerability identification has promising applications in fields such as software development, where it can be used to improve code quality and reduce the likelihood of introducing vulnerabilities in the first place, and in security research, where it can be used to develop new defensive technologies and techniques.
Some works identify vulnerabilities by executing test programs and analyzing runtime behaviors, such as [1], [2], and [3]. However, due to the limited code coverage, execution dependencies, and substantial overhead associated with dynamic approaches, researchers have started to focus on more efficient static vulnerability detection methods. There are several challenges that make static vulnerability detection difficult. Since large-scale projects often consist of millions of lines and exhibit complex calling relationships within and between projects, it is hard to effectively featurize and analyze the code. Moreover, real-world vulnerabilities are sparse, which can bias models when they are confronted with new and unseen patterns.
Some rule-based static analyzers have been developed to locate vulnerabilities using defined patterns or signatures that match known vulnerabilities or programming errors, [4], [5], and [6]. While these methods are useful in detecting common types of vulnerabilities, they heavily rely on predefined rules and struggle to handle complex projects, making them unsatisfactory in real-world scenarios. Besides, some transformer-based models have also been applied to this task because of their sequence-processing ability and interpretability, [7], [8], [9], and [10]. Nonetheless, it is difficult for these approaches to obtain a comprehensive understanding of program semantics on their own, because, unlike natural language sequences, source code is more structured, and vulnerabilities are often subtle flaws that require thorough examination from various semantic perspectives. Therefore, analyzing the complex relationships in various code representations appears to be a more promising way to find the patterns of vulnerabilities.
In this paper, we address these challenges by constructing graph embeddings of code functions and analyzing the logical relationships between tokens to predict code vulnerability. Using the self-attention mechanism [11], our models exhibit excellent explainability, enabling us to infer vulnerability patterns instead of relying on experience to detect vulnerabilities. Our main contributions are:
* We obtain a comprehensive representation of code functions by embedding them into CPGs to capture code syntactic structure, control flow, and data dependencies.
* We propose a GAT framework, _Vignat_, for capturing and modeling complex relationships among nodes in CPGs, where higher attention edges can reveal patterns of vulnerabilities.
* We evaluate the effectiveness of _Vignat_ on manually labeled datasets collected from 4 large-scale C projects. _Vignat_ achieves better results on the dataset with various attention-based models, up to 10% accuracy and 5% F-score improvement over baseline methods.
The rest of this paper is organized as follows. Section 2 presents the _Vignat_ framework. Section 3 demonstrates the performance and illustrates the interpretability of our model. Section 4 concludes this paper.
## 2 _Vignat_ Framework
This section provides an overview of our _Vignat_ framework, as shown in Fig. 1. Within the _Vignat_ framework, we first tokenize the source code functions and construct CPGs, a composite representation of code semantics. Following this, we employ various embedding methods to obtain node embeddings. In conjunction with the graph connectivity, the graphs are then fed into a GAT for graph-level predictions. By extracting attention weights from the output of the attention layer, we can identify salient edges, revealing patterns associated with vulnerabilities.
### Graph Embedding of Code
Aside from the semantics conveyed by code tokens and the logic present in natural code sequencing (NCS), highly structured graph representations of code obscure a significant amount of logical information. The abstract syntax tree (AST) is a tree-like representation that encodes how statements and expressions are nested to produce a program. Inner nodes denote operators, leaf nodes denote operands, and edges specify container and content relationships. The control flow graph (CFG) describes the order in which code statements are executed. It also shows the conditions that must be met for a particular execution path. In a CFG, nodes represent statements or predicates, and edges denote the paths the program can traverse. The program dependence graph (PDG) represents dependencies among statements and predicates in a program. Data dependence edges show how a node's outcome impacts a variable in another node, while control dependence edges reveal the effect of predicates on variable values. Integrating elements of ASTs, CFGs, PDGs, a CPG provides a unified representation for program analysis and enables us to simultaneously reason about all perspectives of code properties. A CPG for a vulnerable code snippet is shown in Fig. 2.
In this paper, we make use of the open-source code analysis platform Joern to parse source code functions and construct CPGs in batches. In a CPG, nodes represent program constructs, including variables, methods, control structures, etc. Each node contains several tokens and has attributes according to its type, such as the name of a local variable, the signature of a method, or the type of a control structure. Nodes are connected by directed edges that represent relationships in the program structure. In _Vignat_, considering that nodes may contain different numbers of tokens, we obtain a node's embedding by averaging its token embeddings. To make the graph embedding tractable for graph neural networks (GNNs), we simplify the CPGs by disregarding the heterogeneity of the various relationships and eliminating duplicate edges. Only the connectivity information is passed to the subsequent GAT model.
Figure 1: An overview of _Vignat_ framework.
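To make this embedding step concrete, the following is a minimal sketch of how node features and connectivity could be assembled once Joern has emitted the CPG nodes and edges. The helper `embed_tokens` (any pretrained encoder that returns one 768-dimensional vector per token) and the plain Python containers are illustrative assumptions, not Joern's or _Vignat_'s actual API.

```python
import torch

def node_embedding(token_embeddings: torch.Tensor) -> torch.Tensor:
    """Average the embeddings of all tokens contained in one CPG node."""
    return token_embeddings.mean(dim=0)

def build_graph(nodes, edges, embed_tokens):
    """nodes: list of token lists (one per CPG node); edges: (src, dst) pairs.
    Returns an (N, 768) node-feature matrix and a duplicate-free edge index."""
    x = torch.stack([node_embedding(embed_tokens(tokens)) for tokens in nodes])
    # Disregard edge types and drop duplicates: only connectivity is kept.
    edge_set = {(i, j) for (i, j) in edges}
    edge_index = torch.tensor(sorted(edge_set), dtype=torch.long).t()
    return x, edge_index
```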
### GAT
Graph Neural Networks (GNNs) have emerged as a family of models for learning representations of graph-structured data. A graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) consists of a set of nodes \(\mathcal{V}\) and edges \(\mathcal{E}\), where each edge \((i,j)\) represents a connection between nodes \(i\) and \(j\). The key idea behind GNNs is to iteratively update node representations by aggregating information from neighboring nodes. Let \(\mathbf{X}\) be the initial node feature matrix, where each row corresponds to the feature vector of a node. Mathematically, the propagation of a GNN can be expressed as:
\[\mathbf{h}^{(l)}=\sigma\left(\mathbf{A}\cdot ReLU\left(\mathbf{W}^{(l-1)} \cdot\mathbf{h}^{(l-1)}\right)\right) \tag{1}\]
where \(\mathbf{h}^{(l)}\) is the node representation matrix at layer \(l\), \(\mathbf{W}^{(l)}\) is the learnable weight matrix, \(\mathbf{A}\) is the adjacency matrix capturing graph connectivity, and \(\sigma\) denotes an activation function. This layer-wise aggregation process allows GNNs to capture complex relationships in graph-structured data and learn expressive node representations.
GAT was introduced by [12] as a novel approach for graph representation learning, which can naturally generalize convolutions to irregular graph structures. GAT incorporates self-attention mechanisms, which enables the model to weigh the importance of neighboring nodes for the target one, thus capturing the local code structure of the CPG.
The GAT model consists of a series of graph attention layers. Each layer computes a new set of node embeddings based on the input node features and the learned attention coefficients. The attention coefficients between nodes \(i\) and \(j\) in layer \(l\) can be computed as follows:
\[e_{ij}^{(l)}=LeakyReLU\left(\mathbf{a}^{(l)T}\left[\mathbf{W}^{(l)}\mathbf{h} _{i}^{(l-1)}\parallel\mathbf{W}^{(l)}\mathbf{h}_{j}^{(l-1)}\right]\right), \tag{2}\]
where \(\parallel\) denotes the concatenation operation, and \(\mathbf{a}^{(l)}\) is a learnable vector.
To ensure that the coefficients are normalized across the neighboring nodes, we apply the softmax function:
\[\alpha_{ij}^{(l)}=softmax_{j}\left(e_{ij}^{(l)}\right)=\frac{exp(e_{ij}^{(l)}) }{\sum_{k\in\mathcal{N}(i)}exp(e_{ik}^{(l)})}, \tag{3}\]
where \(\mathcal{N}(i)\) denotes the set of neighboring nodes of node \(i\). The new node embeddings for layer \(l\) are computed as a weighted sum of the input features of the neighboring nodes:
\[\mathbf{h}_{i}^{(l)}=\sigma\left(\sum_{j\in\mathcal{N}(i)}\alpha_{ij}^{(l)} \mathbf{W}^{(l)}\mathbf{h}_{j}^{(l-1)}\right), \tag{4}\]
where \(\sigma\) is an activation function, such as the rectified linear unit (ReLU).
One of the key advantages of GAT is its explainability. In contrast to graph convolutional networks (GCNs) [13], where the aggregation of neighboring node features is often performed uniformly, making it difficult to discern the influence of specific nodes, GAT provides an interpretable measure of the importance of neighboring nodes for a target node. By investigating the attention coefficients \(e_{ij}^{(l)}\), we are able to gain insights into the rationale of our model, which is useful for identifying influential paths and addressing potential biases.
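For reference, the attention layer of Eqs. (2)–(4) can be sketched in a few lines of PyTorch. This dense, single-head version is an illustrative sketch rather than the exact _Vignat_ implementation; it assumes a 0/1 adjacency matrix that includes self-loops, so every row of the softmax in Eq. (3) is well defined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head graph attention layer following Eqs. (2)-(4)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        # The attention vector a is split into its source and target halves.
        self.a_src = nn.Parameter(torch.randn(out_dim) * 0.1)
        self.a_dst = nn.Parameter(torch.randn(out_dim) * 0.1)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) node features; adj: (N, N) 0/1 matrix with self-loops.
        Wh = self.W(h)                                             # (N, out_dim)
        # e[i, j] = LeakyReLU(a^T [W h_i || W h_j]) for all pairs, Eq. (2).
        e = F.leaky_relu((Wh @ self.a_src)[:, None] + (Wh @ self.a_dst)[None, :])
        e = e.masked_fill(adj == 0, float("-inf"))                 # neighbours only
        self.alpha = torch.softmax(e, dim=1)                       # Eq. (3)
        return F.relu(self.alpha @ Wh)                             # Eq. (4)
```

After a forward pass, the stored `self.alpha` matrix can be inspected directly, which is exactly the interpretability use made of the attention coefficients in the Pattern Explanation section.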
Figure 2: A vulnerable code snippet and its corresponding CPG.
## 3 Experiments
We evaluate _Vignat_ with the following research questions.
1. How does _Vignat_ compare to the transformers that are based on the NCS?
2. How does _Vignat_ perform on different kinds of code representations?
3. How do different kinds of word embedding methods affect the performance of _Vignat_?
4. How does attention-mechanism powered _Vignat_ compare to GCNs?
5. How to infer patterns of code vulnerabilities through attention coefficients?
### Setup
We use the _Devign_ dataset to evaluate our models [14]. Compared to datasets with labels generated by static analyzers or artificial ones, it is more reliable, since vulnerabilities in large-scale, real-world C projects were labeled manually, and it maintains the natural sparsity of vulnerabilities. Specifically, we focus on two projects, FFmpeg and QEMU, and select medium-length functions containing fewer than \(1200\) tokens as samples for model training and testing. As DFG edges are labeled with the variables involved, they tremendously complicate the embedded graphs. Therefore, we only extract ASTs and CFGs and combine them into composite graphs as CPGs for our dataset using Joern. We calculate the average embedding of the tokens within a node to obtain its embedding and standardize the embedding size to \(768\) using pretrained models: BERT, DistilBERT, RoBERTa, and CodeBERT. If a CPG has more than \(225\) nodes, we keep only the first \(225\); otherwise, we pad with zeros to keep the size of the input code features the same.
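A minimal sketch of this fixed-size preprocessing (truncation to the first \(225\) nodes, zero-padding otherwise); the tensor layout is an assumption for illustration:

```python
import torch

MAX_NODES, DIM = 225, 768

def fix_size(x: torch.Tensor, adj: torch.Tensor):
    """Truncate a CPG to its first MAX_NODES nodes, or zero-pad up to MAX_NODES,
    so that every graph yields input tensors of identical shape."""
    n = min(x.size(0), MAX_NODES)
    x_out = torch.zeros(MAX_NODES, DIM)
    adj_out = torch.zeros(MAX_NODES, MAX_NODES)
    x_out[:n] = x[:n]
    adj_out[:n, :n] = adj[:n, :n]
    return x_out, adj_out
```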
We configure the GAT models as follows. Models are configured with an input hidden dimension of \(225\) and a learning rate of \(0.0001\). We implement our models using PyTorch 2.0.0 and CUDA 11.8, and train them for \(100\) epochs with batch size \(8\) on a 16GB Tesla V100-SXM2 GPU. It takes \(0.0358\) seconds on average to generate each CPG. Training _Vignat_ with RoBERTa embeddings takes \(116\) minutes on FFmpeg and \(127\) minutes on QEMU.
### Result Analysis
To evaluate the performance of _Vignat_, we compare it with baseline methods, including state-of-the-art transformers based on the NCS and GCNs based on graph representations of code. To avoid gaps caused by data heterogeneity, we provide these baselines with the same training and test sets after shuffling and splitting. We measure the performance of the different methods with four metrics: accuracy, precision, recall, and F1 score. The comparison results are shown in Table 1.
As shown in Table 1, on the FFmpeg dataset, _Vignat_ has the best performance among all models. Compared with the transformer baselines, it improves accuracy by \(4.05\%\) over BERT and RoBERTa, and by \(5.71\%\) over CodeBERT. We find that CPGs provide the most comprehensive representation among the graphs, as accuracy improves by \(6.56\%\) over ASTs and by \(4.92\%\) over CFGs. Besides, the experiments also show that different embedding methods have an impact on model performance: RoBERTa performs well on all datasets. For _Vignat_ using GCNs, though it does not show a substantial enhancement in accuracy over alternative embedding techniques, it has better overall performance with higher precision and recall.
### Pattern Explanation
Fig. 3 illustrates the 5 CPG edges with the largest attention coefficients \(e_{ij}^{(l)}\) for the source code in Fig. 2. In this code example, predicted to be vulnerable, the type and operator of a variable are highlighted with red arrows. These edges reveal the vulnerabilities that could occur in this code snippet. Specifically, the integer may overflow when the variable is declared, the divisor can be zero, and these vulnerabilities will affect the subsequent functions.
## 4 Conclusion
This paper presents _Vignat_, an innovative attention-based framework for vulnerability identification. By leveraging CPGs to comprehensively capture the syntactic structure, control flow, and data dependencies of code functions, and employing GATs to model complex relationships among nodes, _Vignat_ effectively predicts and reasons the vulnerabilities in source codes. Our method demonstrates remarkable performance, achieving up to \(57.38\%\) accuracy on reliable datasets, and
at least a \(4\%\) accuracy improvement over transformers and \(1\%\)-\(5\%\) over other GNNs. Furthermore, by employing the interpretability of GATs in _Vignat_, we are able to gain valuable insights into vulnerability patterns, paving the way for future advances in vulnerability identification and prevention in cybersecurity.
## 5 Acknowledgements
Kaiser is supported in part by DARPA/NIWC-Pacific N66001-21-C-4018 and in part by NSF CNS-2247370 and CCF-2313055.
\begin{table}
\begin{tabular}{c|c|c|c|c c c} \hline \hline \multirow{2}{*}{Class} & \multirow{2}{*}{Embedding} & \multirow{2}{*}{Representation} & \multicolumn{5}{c}{FFPurge} \\ \cline{4-7} & & & Acc & Prec & Rec & F1 \\ \hline \multirow{4}{*}{Transformer} & BERT & & 53.33\% & 55.77\% & 90.63\% & 68.89\% \\ & DistilBERT & & 56.67\% & 54.45\% & 96.88\% & 69.05\% \\ & RoBERTa & & 53.33\% & 53.85\% & 87.50\% & 66.67\% \\ & CodeBERT & & 51.67\% & 52.83\% & 87.50\% & 65.88\% \\ \hline \multirow{4}{*}{GCN} & Word2Vec & AST & 54.09\% & 53.70\% & 90.63\% & 67.44\% \\ & Word2Vec & CFG & 52.46\% & 52.46\% & 100.00\% & 68.82\% \\ & & CFG & 55.74\% & 56.10\% & 71.88\% & 63.10\% \\ \cline{2-7} & \multirow{2}{*}{BERT} & AST & 50.82\% & 52.00\% & 81.25\% & 63.41\% \\ & & CFG & 52.46\% & 52.46\% & 100.00\% & 68.82\% \\ & & CFG & 57.38\% & 56.82\% & 78.13\% & 65.79\% \\ \cline{2-7} & \multirow{2}{*}{DistilBERT} & AST & 50.82\% & 51.92\% & 84.37\% & 64.29\% \\ & & CFG & 54.10\% & 53.57\% & 93.75\% & 68.18\% \\ & & CFG & 50.82\% & 51.92\% & 84.38\% & 64.29\% \\ \cline{2-7} & \multirow{2}{*}{**RoBERTa**} & AST & 50.82\% & 51.92\% & 84.37\% & 64.29\% \\ & & CFG & 52.46\% & 52.46\% & 100.00\% & 68.82\% \\ & & **CPG** & **57.38\%** & **56.52\%** & **81.25\%** & **66.67\%** \\ \cline{2-7} & \multirow{2}{*}{CodeBERT} & AST & 52.46\% & 52.46\% & 56.25\% & 55.38\% \\ & & CFG & 52.46\% & 52.46\% & 100.00\% & 68.82\% \\ & & CFG & 57.38\% & 57.50\% & 71.88\% & 63.89\% \\ \hline \multirow{4}{*}{**GAT**} & Word2Vec & AST & 55.74\% & 56.10\% & 71.86\% & 63.01\% \\ & & CFG & 52.46\% & 52.46\% & 100.00\% & 68.82\% \\ & & **CPG** & **63.93\%** & **63.16\%** & **75.00\%** & **68.58\%** \\ \cline{2-7} & \multirow{2}{*}{BERT} & AST & 59.02\% & 57.14\% & 87.50\% & 69.14\% \\ & & CFG & 52.46\% & 52.46\% & 100.00\% & 68.82\% \\ & & CFG & 60.66\% & 59.09\% & 81.25\% & 68.42\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of different methods. Acc, Prec, Rec, F1 represent accuracy, precision, recall, and F1 score respectively.
Figure 3: 5 edges with highest attention in Fig. 2. |
2304.00228 | Accuracy and Political Bias of News Source Credibility Ratings by Large
Language Models | Search engines increasingly leverage large language models (LLMs) to generate
direct answers, and AI chatbots now access the Internet for fresh data. As
information curators for billions of users, LLMs must assess the accuracy and
reliability of different sources. This paper audits eight widely used LLMs from
three major providers -- OpenAI, Google, and Meta -- to evaluate their ability
to discern credible and high-quality information sources from low-credibility
ones. We find that while LLMs can rate most tested news outlets, larger models
more frequently refuse to provide ratings due to insufficient information,
whereas smaller models are more prone to hallucination in their ratings. For
sources where ratings are provided, LLMs exhibit a high level of agreement
among themselves (average Spearman's $\rho = 0.81$), but their ratings align
only moderately with human expert evaluations (average $\rho = 0.59$).
Analyzing news sources with different political leanings in the US, we observe
a liberal bias in credibility ratings yielded by all LLMs in default
configurations. Additionally, assigning partisan identities to LLMs
consistently results in strong politically congruent bias in the ratings. These
findings have important implications for the use of LLMs in curating news and
political information. | Kai-Cheng Yang, Filippo Menczer | 2023-04-01T05:04:06Z | http://arxiv.org/abs/2304.00228v2 | # Large language models can rate news outlet credibility
###### Abstract
Although large language models (LLMs) have shown exceptional performance in various natural language processing tasks, they are prone to hallucinations. State-of-the-art chatbots, such as the new Bing, attempt to mitigate this issue by gathering information directly from the internet to ground their answers. In this setting, the capacity to distinguish trustworthy sources is critical for providing appropriate accuracy contexts to users. Here we assess whether ChatGPT, a prominent LLM, can evaluate the credibility of news outlets. With appropriate instructions, ChatGPT can provide ratings for a diverse set of news outlets, including those in non-English languages and satirical sources, along with contextual explanations. Our results show that these ratings correlate with those from human experts (Spearman's \(\rho=0.54,p<0.001\)). These findings suggest that LLMs could be an affordable reference for credibility ratings in fact-checking applications. Future LLMs should enhance their alignment with human expert judgments of source credibility to improve information accuracy.
## 1 Introduction
Large language models (LLMs), such as ChatGPT, have demonstrated outstanding capabilities in numerous natural language processing tasks, including text summarization, named entity recognition, and sentiment analysis [1, 2]. They also show great potential in assisting research by performing text annotation [3], simulating survey responses from human samples [4, 5], and aiding in paper writing and data analysis [6], among other tasks.
However, LLMs tend to generate content that lacks factual basis, which is often referred to as hallucination [7]. Given that LLMs can produce convincing statements [8] and even alter people's beliefs [9], hallucinations can dangerously mislead users. This raises the concern that language models could become a new source of misinformation and disinformation, further polluting our communication channels [10, 11, 12].
A major root of hallucination is the lack of ground truth and up-to-date data [7]. Due to the high cost of training and updating LLMs, these models are typically fixed after the training phase. For example, ChatGPT-3.5 was only trained on data collected before September 2021, making it unaware of events that occurred after that date. A solution to address this issue is to let LLMs directly retrieve information from the internet and process
it in real-time [13]. This approach has been implemented by LLMs such as Microsoft's new Bing1 and Google's Bard.2 In this setting, the ability of language models to determine the credibility of information sources is crucial for providing helpful accuracy context to users.
Footnote 1: blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web
Footnote 2: blog.google/technology/ai/bard-google-ai-search-updates
In this study, we assess if ChatGPT can rate the credibility of news outlets, which are major information sources on the internet. We prompt ChatGPT to provide credibility ratings for over 7,000 news domains. The process only takes about two hours and costs $3. The results suggest that, with appropriate instructions, ChatGPT can produce credibility ratings for a wide range of news outlets, including those in non-English languages and satirical sources, along with contextual explanations. These ratings have a moderate correlation with those from human experts (Spearman's \(\rho=0.54,p<0.001\)). Furthermore, ChatGPT achieves acceptable accuracy in a binary classification scenario (AUC=0.89).
Evaluating the credibility of news sources is crucial for studying and curbing the spread of misinformation online [14]. With credibility ratings, researchers can track the diffusion of misinformation and estimate their prevalence online [15, 16, 17, 18]. Social media platforms can reduce user exposure to content from low-credibility sources3 and highlight the credibility of information sources to improve user discernment [19, 20]. However, expert ratings of domain credibility can be expensive and time-consuming [21]. The findings here suggest that LLMs could be an affordable and convenient reference for credibility ratings and serve as media literacy tools to help users investigate news sources [22].
Footnote 3: about.fb.com/news/2019/10/update-on-election-integrity-efforts
## 2 Results
### ChatGPT can rate the credibility of news outlets
We start with a comprehensive list of 11,520 news domains compiled by Lin et al. from multiple sources [23]. We focus on a subset of 7,523 domains indexed in the Tranco list [24], which ranks the top 5.5 million websites worldwide based on their popularity. Websites not included in the list have very limited viewership. More details on data processing can be found in the Methods section.
We ask ChatGPT to rate the credibility of each domain. Take reuters.com as an example; we use the following prompt:
Rate the website's credibility: reuters.com on a scale between 0 and 1, where 0 means very low credibility and 1 means very high credibility. Assistant returns the rating -1 when the assistant has no knowledge of the website, otherwise assistant should provide the best estimation.
Here, "assistant" refers to the ChatGPT model. In the experiment, we add extra instructions regarding the answer format to facilitate downstream analysis. The details can be found in the Methods section. The prompt leads to the following answer:
Reuters is a well-known and respected news organization with a long history of producing accurate and reliable news content, so the rating will be 0.9.
When queried about childrenshealthdefense.org, ChatGPT responds:
The website has been flagged for spreading misinformation and conspiracy theories about vaccines and COVID-19, so the rating will be 0.2.
We can see that ChatGPT is capable of identifying the news sources based on their website domains, assessing their credibility, and providing justifications for their ratings. The experiment is performed in a zero-shot setting since no extra information regarding the domain of interest is provided to the language model. Therefore, ChatGPT has to rely on the knowledge embedded within its architecture to perform the evaluation.
We use the ChatGPT API4 to perform the experiment programmatically. Using five concurrent processes on a single desktop, we evaluated all 7,523 domains in approximately two hours, at a cost of around $3. ChatGPT successfully rated 7,282 of them but reported errors for the remaining 241 due to a lack of information about them. These 241 domains are significantly less popular (\(p<0.001\) according to a Mann-Whitney U test) based on the Tranco ranking (see Methods).
Footnote 4: openai.com/blog/introducing-chatgpt-and-whisper-apis
By analyzing the answers returned by ChatGPT, we find that it can follow the instructions and yield ratings in the range between 0 and 1 with one decimal point of precision. The rating distribution in Figure 1 is bimodal and indicates that ChatGPT rarely assigns scores around 0, 0.5, and 1. These findings suggest that ChatGPT can efficiently and cost-effectively assess the reliability of many news domains.
### ChatGPT ratings correlate with human expert judgments
Let us measure how well the ChatGPT ratings align with those of human experts. We utilize the aggregate human ratings produced by Lin et al. [23], which are derived from multiple sources. Additionally, we take into account the ratings assigned by Media Bias/Fact Check (MBFC)5 and NewsGuard,6 both of which are frequently employed in research projects and industrial products. All ratings are rescaled to the range 0-1 for easy comparison. Further information on these human expert ratings and the associated data processing methods
Figure 1: Distribution of the domain ratings generated by ChatGPT. A higher rating indicates higher credibility for a domain.
can be found in the Methods section. Figure 2 plots the joint distributions of ChatGPT ratings versus different human expert ratings. The moderate positive correlation in all cases suggests that the judgment of ChatGPT is consistent with that of human experts.
In many cases, it is useful to classify domains as either low- or high-credibility [16, 17]. Using the NewsGuard ratings, one can follow the official recommendation and treat domains with scores lower than 60 (out of 100) as low-credibility. Using the MBFC ratings, one may consider domains with "very low" or "low" factual reporting levels as low-credibility. By comparing the ChatGPT ratings with these ground-truth labels, we can gauge the model's efficacy as a classifier for identifying low-credibility domains. Since there are significantly more high-credibility domains, we use AUC (Area Under the Receiver Operating Characteristic Curve) score to quantify the performance of ChatGPT; this measure is robust to class imbalance. The results, shown in Figure 3(a), indicate that ChatGPT achieves an AUC score of 0.89 when compared to both the NewsGuard and MBFC ratings.
What if one hopes to choose a threshold to dichotomize the ChatGPT ratings? To
Figure 3: (a) The receiver operating characteristic (ROC) curves of ChatGPT’s ratings when compared to the binary labels yielded by NewsGuard and MBFC. We also report the AUC (Area Under the Curve) values. (b) The F1 score of ChatGPT’s ratings as a function of the threshold.
Figure 2: Joint distributions of ChatGPT ratings against (a) the aggregate ratings produced by Lin et al. [23], (b) Media Bias/Fact Check (MBFC) ratings, and (c) NewsGuard ratings. For each pair-wise comparison, we report the number \(n\) of sources present in both ratings, the Spearman correlation coefficient \(\rho\), and the corresponding \(p\)-value.
identify an optimal value, we vary the threshold for ChatGPT ratings and calculate the corresponding F1 scores against the labels from NewsGuard and MBFC. The results are shown in Figure 3(b). The F1 score balances the trade-off between precision and recall, so an ideal threshold should maximize it. We observe that the F1 score is highest when the threshold is in the vicinity of 0.5 in both cases. Since ChatGPT ratings exhibit a bimodal distribution (see Figure 1), choosing 0.5 as the threshold is conceptually straightforward and can maximize practical accuracy. With this threshold, ChatGPT achieves F1 scores of 0.73 and 0.63 when compared with the NewsGuard and MBFC labels, respectively.
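For reference, the evaluation above can be reproduced with a few lines of scikit-learn; the variable names are illustrative. Since low ratings indicate low credibility, the rating is inverted before computing the AUC, and the threshold sweep mirrors Figure 3(b).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def evaluate(ratings, labels):
    """ratings: ChatGPT scores in [0, 1]; labels: 1 = low-credibility
    (NewsGuard < 60 or MBFC 'low'/'very low'), 0 = high-credibility."""
    scores = np.asarray(ratings, dtype=float)
    y = np.asarray(labels, dtype=int)
    auc = roc_auc_score(y, 1.0 - scores)   # low rating => low credibility
    thresholds = np.arange(0.1, 1.0, 0.1)
    f1s = [f1_score(y, (scores < t).astype(int)) for t in thresholds]
    return auc, thresholds[int(np.argmax(f1s))], max(f1s)
```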
### Non-English and satirical domains
A significant challenge with current domain credibility ratings is their focus on English-language outlets. Among the sources rated by both ChatGPT and NewsGuard, 84.1% are in English, and the figure is 89.4% for MBFC. More detailed statistics can be found in Table 1. This bias makes it difficult for non-English-speaking users to assess the credibility of information sources.
Given ChatGPT's multilingual capabilities, let us investigate its performance on non-English sources. We categorize the source list based on their languages and measure the correlation between ChatGPT ratings and NewsGuard/MBFC ratings in each sub-group. The results presented in Table 1 suggest that ChatGPT's performance on non-English sources is consistent with its performance on English sources. Italian-language sources are the exception with a lower correlation.
Working with source credibility also involves handling satire websites, such as theonion.com and babylonbee.com, which publish humorous articles containing non-factual and misleading content. Many of these websites are not intended to deceive; they explicitly label their content as satire, making it challenging to handle them in research and practice. Some researchers classify them as low-credibility [15], while others treat them separately in their analysis [16].
To test if ChatGPT can identify satirical sources, we employ the MBFC list of satire websites, 53 of which are in the set evaluated in our experiment. We manually review the answers provided by ChatGPT and find that it recognizes the satirical nature of 41 sources (77.4%). For example, ChatGPT states that "The Onion is a well-known satirical news website" in the response. Among the remaining 12 satire sources, seven are identified as
\begin{table}
\begin{tabular}{l|l r r r} \hline Rating & Language & Sources & \% & \(\rho\) \\ \hline \multirow{6}{*}{NewsGuard} & English & 4,574 & 84.1 & 0.51 \\ & Italian & 306 & 5.6 & 0.38 \\ \cline{1-1} & French & 294 & 5.4 & 0.53 \\ \cline{1-1} & German & 267 & 4.9 & 0.51 \\ \cline{1-1} & Total & 5,441 & 100.0 & 0.51 \\ \hline \multirow{3}{*}{MBFC} & English & 2,984 & 89.4 & 0.62 \\ \cline{1-1} & Non-English & 354 & 10.6 & 0.65 \\ \cline{1-1} & Total & 3,338 & 100.0 & 0.60 \\ \hline \end{tabular}
\end{table}
Table 1: Correlation between ChatGPT ratings and NewsGuard/MBFC ratings on news outlets in different languages. From left to right, we report the outlet language, the number and percentage of sources in that language, and the Spearman correlation coefficient \(\rho\). All coefficients are significant at the \(p<0.001\) level.
posting misleading or fake news by ChatGPT, two are mislabeled as highly credible, and three yield errors due to a lack of information.
## 3 Discussion
In this paper, we show that ChatGPT, a prominent LLM, can rate the credibility of various news sources in a zero-shot setting. The ratings correlate moderately with those from human experts. Furthermore, ChatGPT shows consistent accuracy on sources using different languages and can identify satirical websites.
Our study has some limitations. Due to the flexibility of the prompt provided to ChatGPT, there are different ways to rate source credibility. For instance, one could employ a binary classification approach or ask ChatGPT to compare two sources at a time, which might result in different outcomes. However, we were unable to test all of these approaches. Additionally, our study exclusively focuses on ChatGPT; there are currently many other LLMs available and new ones emerging soon. Our findings may not generalize to all models.
Despite these limitations, our findings suggest that ChatGPT has encoded a set of evaluations aligned with human experts regarding news sources. It is not entirely clear how the language model acquires such a capacity. Based on the justifications provided by ChatGPT alongside the ratings, our guess is that the model summarizes descriptions obtained from various sources and bases its outputs on them. For high-credibility sources, ChatGPT typically states that they are well-established and reputable news websites. On the other hand, ChatGPT typically claims that low-credibility sources are known for posting misleading or unverified content. In a handful of cases, ChatGPT refers to the information from MBFC in the justifications.
Nevertheless, such a capacity may help the model ground its responses. For instance, we asked the new Bing (backed by ChatGPT) to check the veracity of a satirical story claiming that Trump sued his grandchildren for violating NDAs he made them sign as infants.7 The tool identified the source and accurately noted its satirical nature in the response.
Footnote 7: halfwaypost.com/2021/05/05/trump-is-suing-his-grandchildren-for-violating-ndas-he-made-them-sign-as-infants
Alignment is a critical component of LLM training to ensure the models behave as desired [25]. Future models might consider enhancing their alignment with human judgments of source credibility by explicitly integrating such information into both the training and inference phases. Researchers could assign more weight to information from credible sources in the training data. In the inference phase, models could be configured to give greater attention to messages from credible sources when browsing the internet for information.
The results of this study also suggest that LLMs have the potential to serve as an affordable and accessible reference for domain credibility ratings. Such ratings are vital for researchers to study the dissemination and prevalence of misinformation online. The low cost of LLMs makes them an appealing alternative to services with higher costs such as NewsGuard, or those with a narrower focus such as MBFC. With the ability to provide contextual information and actively answer user questions, LLMs might be adopted by the general public as media literacy tools to investigate the credibility of news sources and perform fact-checking. However, our results indicate that the accuracy of LLMs is far from perfect, calling for further analysis among other misinformation intervention methods [22].
Methods
### Querying the ChatGPT API
We use the openai-python package8 to query the API endpoint. Specifically, we choose the gpt-3.5-turbo-0301 model, i.e., the ChatGPT-3.5 model released on March 1, 2023. We set the temperature parameter to 0 to minimize the randomness of the output generation. Below, we provide the complete prompt used in our experiment:
Footnote 8: github.com/openai/openai-python
Rate the website's credibility: {domain} on a scale between 0 and 1, where 0 means very low credibility and 1 means very high credibility. Assistant returns the rating -1 when the assistant has no knowledge of the website, otherwise assistant should provide the best estimation. Assistant should only provide the result in JSON format, do not provide anything else. Here is an example:
{'url': 'example.com', 'rating': 0.5,'reason': 'The example website is known to post credible content.'}
In each query, {domain} is replaced with the domain of interest.
ChatGPT is configured to answer questions in human-readable form by default. So we add extra instructions to force ChatGPT to return the results in JSON (JavaScript Object Notation) format, which can be easily processed by machine. Below we show a typical answer generated by ChatGPT:
{'url': 'foxnews.com', 'rating': 0.6,'reason': 'Fox News has a reputation for having a conservative bias and has been known to publish misleading or inaccurate information in the past. However, they also have a large audience and employ many experienced journalists, so some of their content can be considered credible.'}
Despite our instructions, ChatGPT responds with invalid JSON objects from time to time. We manually code these answers to extract the relevant information.
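For reference, a minimal sketch of the querying loop with the pre-1.0 openai-python interface available at the time of this study; the abbreviated prompt constant, placeholder API key, and the fallback for invalid JSON are illustrative.

```python
import json
import openai  # openai-python < 1.0

openai.api_key = "YOUR_API_KEY"  # placeholder

PROMPT = ("Rate the website's credibility: {domain} on a scale between 0 and 1, "
          "where 0 means very low credibility and 1 means very high credibility. "
          "...")  # abbreviated; the full prompt is given above

def rate_domain(domain: str) -> dict:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        temperature=0,  # minimize randomness of the generated rating
        messages=[{"role": "user", "content": PROMPT.format(domain=domain)}],
    )
    text = response["choices"][0]["message"]["content"]
    try:
        return json.loads(text)  # expected keys: url, rating, reason
    except json.JSONDecodeError:
        return {"url": domain, "rating": None, "raw": text}  # manual coding later
```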
### Human expert ratings
In this study, we adopt human expert ratings from three sources: the aggregate list compiled by Lin et al. [23], NewsGuard, and MBFC.
#### 4.2.1 Aggregate list form Lin et al.
Lin et al. [23] analyze the news domain ratings from six sources, including NewsGuard, Ad Fontes Media,9 Ify index of unreliable sources,10 MBFC, and two lists compiled by professional fact-checkers and researchers [21, 26]. The comparison of these ratings revealed a high level of consistency. Using an ensemble method, they generate an aggregate list that contains credibility ratings for 11,520 domains. This list is publicly accessible on GitHub.11 Our analysis starts with this list.
Footnote 10: iffy.news
Footnote 11: github.com/hauselin/domain-quality-ratings
#### 4.2.2 NewsGuard
NewsGuard is a journalistic organization that routinely assesses the reliability of news websites based on various criteria. They assign news outlets a trust score on a scale of 0 to 100.12 When dichotomizing the scores, we use the threshold 60 following the official recommendation. We rescale the scores to numbers between 0 and 1 in our analysis for easy comparison across different rating systems. NewsGuard updates its database routinely, and for this study, we use the snapshot from January 2022. NewsGuard ratings are available in English, French, German, and Italian. The English outlets include those from the US, Canada, UK, and Australia.
Footnote 12: newsguardtech.com/ratings/rating-process-criteria
#### 4.2.3 Mbfc
Media Bias/Fact Check (MBFC) is an independent organization that reviews and rates the reliability of news sources. More information about MBFC ratings can be found on their methodology page.13 For this study, we obtained the ratings of 4,275 domains from the MBFC website in October 2022. We focus on the factual reporting levels of the domains, which are categorized into "very low," "low," "mixed," "mostly factual," "high," and "very high" labels. We respectively assign numerical values of 0, 0.2, 0.4, 0.6, 0.8, and 1.0 to these categories for our analysis. When dichotomizing the scores, we consider domains with "very low" and "low" factual reporting levels as low-credibility sources.
Footnote 13: mediabiasfactcheck.com/methodology
MBFC only provides the country information for a subset of the domains; the language information is not available. For simplicity, we consider all domains from the US, UK, Canada, and Australia as English outlets. These four countries also have the most domains in the MBFC dataset. The remaining domains are treated as non-English outlets. In addition, MBFC maintains a list of satirical websites, which is used in our analysis.
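A minimal sketch of this mapping and the dichotomization rule (names are illustrative):

```python
FACTUAL_TO_SCORE = {
    "very low": 0.0, "low": 0.2, "mixed": 0.4,
    "mostly factual": 0.6, "high": 0.8, "very high": 1.0,
}

def mbfc_score(factual_level: str) -> float:
    return FACTUAL_TO_SCORE[factual_level.lower()]

def is_low_credibility(factual_level: str) -> bool:
    """MBFC 'very low' and 'low' factual reporting count as low-credibility."""
    return mbfc_score(factual_level) <= 0.2
```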
### Website popularity ranking
We adopt the Tranco list to measure the popularity of websites [24]. This list combines the website ranking information from multiple sources, including Alexa (alexa.com/topsites) and Cisco Umbrella (umbrella-static.s3-us-west-1.amazonaws.com/index.html). The Tranco list is updated on a routine basis, and for this study, we use the snapshot from September 2021. This snapshot contains the ranks of the top 5.5 million websites worldwide. Websites not on the list typically have limited viewership or have been deactivated.
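A Tranco snapshot is distributed as a simple rank,domain CSV; assuming that layout, loading it into a lookup table is straightforward (our sketch).

```python
import csv

def load_tranco(path):
    """Map domain -> rank; domains absent from the snapshot have limited
    viewership or have been deactivated."""
    with open(path, newline="") as f:
        return {domain: int(rank) for rank, domain in csv.reader(f)}
```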
### Data availability
The code and data used in this study are shared in the GitHub repository (github.com/osome-iu/ChatGPT_domain_rating). We are unable to share the NewsGuard data due to their policy. |
2307.06309 | S Equilibrium: A Synthesis of (Behavioral) Game Theory | $S$ equilibrium synthesizes a century of game-theoretic modeling. $S$-beliefs
determine choices as in the refinement literature and level-$k$, without
anchoring on Nash equilibrium or imposing ad hoc belief formation. $S$-choices
allow for mistakes as in QRE, without imposing rational expectations. $S$
equilibrium is explicitly set-valued to avoid the common practice of selecting
the best prediction from an implicitly defined set of unknown, and unaccounted
for, size. $S$-equilibrium sets vary with a complexity parameter, offering a
trade-off between accuracy and precision unlike in $M$ equilibrium. Simple
"areametrics" determine the model's parameter and show that choice sets with a
relative size of 5 percent capture 58 percent of the data.
Goodness-of-fit tests applied to data from a broad array of experimental games
confirm $S$ equilibrium's ability to predict behavior in and out of sample. In
contrast, choice (belief) predictions of level-$k$ and QRE are rejected in most
(all) games. | Jacob K Goeree, Bernardo Garcia-Pola | 2023-07-12T17:13:21Z | http://arxiv.org/abs/2307.06309v1 | # S Equilibrium
###### Abstract
\(S\) equilibrium synthesizes a century of game-theoretic modeling. \(S\)-beliefs determine choices as in the refinement literature and level-\(k\), without anchoring on Nash equilibrium or imposing ad hoc belief formation. \(S\)-choices allow for mistakes as in QRE, without imposing rational expectations. \(S\) equilibrium is explicitly set-valued to avoid the common practice of selecting the best prediction from an implicitly defined set of unknown, and unaccounted for, size. \(S\)-equilibrium sets vary with a complexity parameter, offering a trade-off between accuracy and precision unlike in \(M\) equilibrium. Simple "areametrics" determine the model's parameter and show that choice sets with a relative size of 5% capture 58% of the data. Goodness-of-fit tests applied to data from a broad array of experimental games confirm \(S\) equilibrium's ability to predict behavior in and out of sample. In contrast, choice (belief) predictions of level-\(k\) and QRE are rejected in most (all) games.
Footnote 1: Goeree: AGORA Center for Market Design, UNSW, Sydney, Australia. García-Pola: Department of Economics, Universidad Publica de Navarra, Pamplona, Spain. We gratefully acknowledge funding from the Australian Research Council (DP190103888 and DP220102893). We thank Brett Williams for useful comments. The “S” terminology was inspired by Reinhard Selten’s work on the role of beliefs in refining Nash equilibria.
**Keywords**: _S equilibrium, \(S\) potential, belief sets, choice sets, prediction sets, precision, accuracy, measure of predictive success, preregistration, pre-analyses plan_ |
2306.09313 | Lexical Speaker Error Correction: Leveraging Language Models for Speaker
Diarization Error Correction | Speaker diarization (SD) is typically used with an automatic speech
recognition (ASR) system to ascribe speaker labels to recognized words. The
conventional approach reconciles outputs from independently optimized ASR and
SD systems, where the SD system typically uses only acoustic information to
identify the speakers in the audio stream. This approach can lead to speaker
errors especially around speaker turns and regions of speaker overlap. In this
paper, we propose a novel second-pass speaker error correction system using
lexical information, leveraging the power of modern language models (LMs). Our
experiments across multiple telephony datasets show that our approach is both
effective and robust. Training and tuning only on the Fisher dataset, this
error correction approach leads to relative word-level diarization error rate
(WDER) reductions of 15-30% on three telephony datasets: RT03-CTS, Callhome
American English and held-out portions of Fisher. | Rohit Paturi, Sundararajan Srinivasan, Xiang Li | 2023-06-15T17:47:41Z | http://arxiv.org/abs/2306.09313v1 | Lexical Speaker Error Correction: Leveraging Language Models for Speaker Diarization Error Correction
###### Abstract
Speaker diarization (SD) is typically used with an automatic speech recognition (ASR) system to ascribe speaker labels to recognized words. The conventional approach reconciles outputs from independently optimized ASR and SD systems, where the SD system typically uses only acoustic information to identify the speakers in the audio stream. This approach can lead to speaker errors especially around speaker turns and regions of speaker overlap. In this paper, we propose a novel second-pass speaker error correction system using lexical information, leveraging the power of modern language models (LMs). Our experiments across multiple telephony datasets show that our approach is both effective and robust. Training and tuning only on the Fisher dataset, this error correction approach leads to relative word-level diarization error rate (WDER) reductions of 15-30% on three telephony datasets: RT03-CTS, Callhome American English and held-out portions of Fisher.
**Index Terms**: Speaker Diarization, Large Language Models, Automatic Speech Recognition, Error Correction
Rohit Paturi\({}^{*}\), Sundararajan Srinivasan\({}^{*}\), Xiang Li AWS AI Labs
[email protected], [email protected], [email protected]
## 1 Introduction
Speech transcription systems have advanced significantly in the past decade but even with these remarkable advances, machines have difficulties understanding natural conversations with multiple speakers such as in broadcast interviews, meetings, telephone calls, videos or medical recordings. One of the first steps in understanding natural conversations is to recognize the words spoken and their corresponding speakers. Speaker Diarization (SD) is the process of determining "who spoke when" in a multi-speaker audio signal and is a key component in any speech transcription system. SD is used in conjunction with Automatic Speech Recognition (ASR) to assign a speaker label to each transcribed speaker turn and has widespread applications in generating meeting/interview transcripts, medical notes, automated subtitling and dubbing, downstream speaker analytics, among others (we refer to this combined system as SD-ASR in this paper). This is typically performed in multiple steps that include (1) transcribing the words using an ASR system, (2) predicting "who spoke when" using a speaker diarization (SD) system, and, finally, (3) reconciling the output of those two systems.
Recent advances in SD systems are outlined in [1]. Independently optimized, modular SD systems typically consist of the following main sub-tasks: (a) segment the input audio into speech segments using a Voice Activity Detector (VAD), (b) generate speaker segments from the speech segments either by using a uniform window size [2, 3, 4] or by detecting speaker turns [5, 6, 7], (c) extract speaker embeddings [2, 8, 9, 10] for each of the speaker segments, and (d) cluster the resulting speaker embeddings using clustering algorithms like Spectral Clustering [2] or Agglomerative Hierarchical Clustering [4], among others. In most diarization systems in the literature, these sub-tasks rely only on acoustic information and can thus lead to speaker errors, mainly around speaker turns. This can happen in uniform speaker segmentation, as long segments very likely contain speaker turn boundaries, while short segments carry insufficient speaker information. It has also been shown that detecting speaker turns using only acoustic information is error-prone [7]. In addition to the SD errors, speakers can be attributed to the wrong words in the SD-ASR reconciliation phase due to errors in ASR word timings. Reconciliation errors can also occur in regions of speech overlap, as SD can identify one of the speakers while ASR can identify words corresponding to a different speaker.
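To make the modular pipeline concrete, the following sketch strings the four sub-tasks together (our illustration only; `embed_fn` stands in for a trained speaker-embedding network, and all parameter choices are placeholders).

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def diarize(samples, speech_mask, embed_fn, n_speakers=2, sr=16000, win_s=0.5):
    """(a) VAD gate, (b) uniform segmentation, (c) embedding, (d) clustering."""
    hop = int(win_s * sr)
    segments, embeddings = [], []
    for start in range(0, len(samples) - hop + 1, hop):
        if speech_mask[start:start + hop].mean() < 0.5:  # skip non-speech windows
            continue
        segments.append((start / sr, (start + hop) / sr))
        embeddings.append(embed_fn(samples[start:start + hop]))
    labels = SpectralClustering(n_clusters=n_speakers,
                                affinity="nearest_neighbors").fit_predict(
        np.stack(embeddings))
    return list(zip(segments, labels))
```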
Lexical information can contain complementary cues that are very useful in accurately predicting speaker turns [6, 7]. For instance, analyzing only the written transcript of a conversation such as "how are you i am good" enables us to infer that there is likely a speaker change between the utterances "how are you" and "i am good". There have been a handful of works [6, 7, 11, 12, 13, 14] which leverage the ASR transcripts to infuse lexical information into the SD module. In [7], lexical cues are used to estimate the speaker turns for diarization. [11] made use of turn probabilities from lexical cues in the clustering stage by enhancing the adjacency matrix. Though these approaches showed good SD improvements, such systems can still produce errors around speaker turns due to ASR and diarization errors in overlapped speech, and they remain sensitive to ASR word timings, on which they rely both in the diarization sub-tasks and in the reconciliation phase. [12] modeled SD and ASR jointly but is confined to 2 speakers with specific, distinct roles.
In this paper, we propose a Speaker Error Correction (SEC) module which can correct speaker errors at the word level without modifying the underlying ASR or acoustic SD system. This SEC module makes use of any of the readily available pre-trained LMs [15, 16, 17, 18] to infuse the lexical knowledge needed to correct speaker errors, while also leveraging speaker scores from the SD system to prevent over-corrections. The reliance on LMs also significantly reduces the amount of speaker-labelled text data needed to train the system. Our approach has components which are modular and don't need paired audio-text data to train, while only needing a small amount of paired data for fine-tuning. This approach is also easier to integrate with existing systems than other lexical-based diarization approaches, since the first-pass acoustic SD system can be run independently of the ASR system. Using experiments across three telephony datasets, we demonstrate that the proposed system is both effective and capable of generalization.
## 2 Speaker Error Corrector
The overall pipeline of the proposed two-pass Speaker Error Corrector (SEC) framework is shown in Fig 1a. The conventional Speaker Transcription system consists of an ASR module, a SD module and a reconciliation stage. The SEC follows the reconciliation stage and takes in two streams of inputs: acoustic features from the SD module and lexical features from the ASR module. The ASR and acoustic SD models can continue to run in parallel, making it easier to integrate with existing systems. The core component of the SEC is the Lexical Correction module which takes in the transcribed words from ASR along with the speaker labels from the SD module. These are explained in more detail in the following sub-sections.
### Lexical Diarization Corrector
While lexical features carry information complementary to the acoustic features and can be leveraged to correct some of the errors arising from a naive reconciliation of ASR and SD, lexical features alone cannot accurately predict the speaker labels, especially in realistic conversations. We therefore propose a simple yet efficient way to correct the speakers based on both the decisions of the first-pass diarizer and the ASR transcriptions. Our proposed lexical Speaker Error Corrector consists of two main components: a backbone language model (LM) and a Transformer Encoder front-end that predicts the speaker labels. After reconciling ASR and diarization outputs, we have speaker labels \(\{\mathbf{S_{i}}\}_{i=1}^{N},\mathbf{S_{i}}\in\mathbb{R}^{1\times K}\) for every word \(\{\mathbf{W_{i}}\}_{i=1}^{N}\), where \(N\) is the number of words in the sequence and \(K\) is the number of speakers the SEC is trained to handle. The words \(W_{i}\) are tokenized and passed to the backbone LM to obtain contextual word embeddings \(\{\mathbf{E_{j}}\}_{j=1}^{M},\mathbf{E_{j}}\in\mathbb{R}^{1\times W}\), where \(M\) is the number of tokens in the word sequence and \(W\) is the word embedding dimension. The word-level speaker labels \(S_{i}\) are mapped to the token level by assigning each word's speaker ID to its first token and a special "don't care" label to any subsequent tokens of the word. These token-level embeddings \(E_{j}\) are concatenated with the speaker IDs \(S_{j}\) to form the fused features for the front-end Transformer Encoder, as shown in Figure 1b. The posteriors from the front-end Encoder \(\{\mathbf{L_{ij}}\}_{j=1}^{K}\), \(\mathbf{L_{ij}}\in\mathbb{R}\), are used to optimize the classification loss on the ground-truth speaker labels.
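A compact PyTorch rendering of this architecture is sketched below (ours; the encoder depth and head count are illustrative assumptions, as the text only fixes the 128-dimensional front-end).

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class LexicalSEC(nn.Module):
    """Backbone LM embeddings + first-pass speaker IDs -> per-token speaker logits."""
    def __init__(self, lm_name="roberta-base", n_speakers=2, d_front=128):
        super().__init__()
        self.lm = AutoModel.from_pretrained(lm_name)
        self.proj = nn.Linear(self.lm.config.hidden_size + n_speakers, d_front)
        layer = nn.TransformerEncoderLayer(d_model=d_front, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # assumed depth
        self.head = nn.Linear(d_front, n_speakers)

    def forward(self, input_ids, attention_mask, speaker_onehot):
        emb = self.lm(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        fused = torch.cat([emb, speaker_onehot], dim=-1)  # fuse lexical + SD decisions
        return self.head(self.encoder(self.proj(fused)))  # logits L_ij over K speakers
```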
### Training Methodology
The SEC model can be trained using only speaker-turn transcripts and doesn't require paired audio data; we show that training the lexical corrector on just the transcripts already improves the performance of the baseline. Since the relatively small number of speaker errors produced by the first-pass diarizer limits the training of the error corrector, we train the corrector by simulating speaker errors based on the ground truth as well as by simulating ASR substitution errors.

We define the probability of ASR errors as \(P_{ASR}\) and the probability of speaker errors as \(P_{Spk}\). Setting \(P_{ASR}=1\) implies that all the words in the training transcripts are substituted with random words, and \(P_{ASR}=0\) implies the original ground-truth transcripts. Similarly, \(P_{Spk}=1\) implies all the speaker labels are randomly substituted, whereas \(P_{Spk}=0\) implies the ground-truth speaker labels. We simulate ASR and speaker errors using a curriculum learning paradigm [19] to make sure that we neither under- nor over-correct the speakers, balancing the information flow from the SD labels and the ASR lexical information. We start the curriculum for \(P_{Spk}\) at a low value and increase \(P_{Spk}\) as the training progresses. Conversely, \(P_{ASR}\) starts at a high value in the first epoch and decreases as the training progresses. The intuition for this curriculum, with \(P_{ASR}\) high and \(P_{Spk}\) low in the initial epochs, is to train the model first without any meaningful lexical information, so that it learns to at least copy the first-pass speaker labels. More meaningful lexical information with a smaller \(P_{ASR}\) is used in the later epochs, along with a higher \(P_{Spk}\), to train the model on more complex speaker errors as the training progresses.

In addition to the error-simulated text data, we also use paired audio data to train and fine-tune the model on real data. For this, we generate speaker labels using the baseline first-pass SD system and use the ground-truth speaker labels as the targets. In this work, we train the SEC on the two-speaker case, i.e., \(K=2\).
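The error-simulation recipe can be written down directly from the schedules above (our sketch; the linear interpolation is an assumption, as the text only states the endpoints and that the steps are uniform).

```python
import random

def curriculum(epoch, ramp_epochs=10):
    """P_ASR decays 1.0 -> 0.08 and P_Spk grows 0.0 -> 0.14 over the ramp."""
    t = min(epoch, ramp_epochs) / ramp_epochs
    return 1.0 + t * (0.08 - 1.0), t * 0.14

def simulate_errors(words, speakers, p_asr, p_spk, vocab, n_speakers=2):
    """Corrupt ground-truth transcripts and speaker labels for SEC training."""
    noisy_words = [random.choice(vocab) if random.random() < p_asr else w
                   for w in words]
    noisy_spk = [random.randrange(n_speakers) if random.random() < p_spk else s
                 for s in speakers]
    return noisy_words, noisy_spk
```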
### Inference Setup
During inference, we perform error correction on sliding windows with a fixed number of ASR-transcribed words, as shown in Fig 1c. Though the lexical corrector is trained to correct only two speakers locally, we can still handle use-cases where more than two speakers are detected globally in the audio. We achieve this by only correcting sliding windows comprising two speakers and bypassing the remaining windows, as shown in Figure 1c. The size of the sliding window is a parameter we tune on a validation set.

Figure 1: (a) Speaker Error Correction as a 2nd-pass post-processing step to the traditional SD-ASR system, (b) Lexical SEC: Word embeddings from the LM, Speaker IDs from SD are passed to the Transformer Encoder to get the corrected Speaker IDs, (c) SEC inference performed on sliding windows with 2 hypothesis speakers.
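The bypass logic is simple enough to state in a few lines (our sketch; non-overlapping windows are assumed for brevity, and `corrector` stands for a trained SEC forward pass).

```python
def correct_speakers(words, speakers, corrector, win=30):
    """Correct speaker labels window by window; skip windows with >2 speakers."""
    out = list(speakers)
    for i in range(0, len(words), win):
        w, s = words[i:i + win], speakers[i:i + win]
        if len(set(s)) > 2:        # more than two hypothesized speakers: bypass
            continue
        out[i:i + win] = corrector(w, s)
    return out
```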
## 3 Experiments
### Data and Metrics
In this work, we use the full Fisher dataset [20, 21] to train the speaker corrector system. We split the Fisher data into train, validation and test splits as defined in [22]. We also use the Fisher train set to fine-tune the backbone LM as well as to train and fine-tune the corrector model. We only use the Fisher validation split for tuning our model. For evaluation, in addition to the Fisher test split, we use the standard dev and test splits of CALLHOME American English (CHAE) [23] and RT03-CTS [24], which consist mostly of two-speaker calls. We also evaluate on the two-speaker-only subset of CHAE, the CH-109 dataset [25], both by fixing the number of clusters to 2 and by automatically determining the number of speakers in the first-pass SD system.
In order to evaluate the full ASR, SD system, we use the Word Diarization Error Rate (WDER) proposed in [12] as it aptly captures both ASR and SD errors at the word level. We also account for words transcribed in regions of speech overlap in the WDER metric. This is achieved by using asclite [26] as it can align multiple speaker hypotheses against multiple reference speaker transcriptions and can also efficiently handle words in regions of speaker overlaps.
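In simplified form (our sketch, which ignores asclite's handling of insertions, deletions and overlap regions), WDER reduces to the fraction of aligned words carrying the wrong speaker.

```python
def wder(ref_speakers, hyp_speakers, alignment):
    """alignment: (ref_idx, hyp_idx) pairs for correct + substituted words."""
    wrong = sum(ref_speakers[r] != hyp_speakers[h] for r, h in alignment)
    return wrong / max(len(alignment), 1)
```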
### Baseline System
Our baseline SD system follows the pipeline in [2] and consists of a speaker embedding model followed by Spectral Clustering, where the number of speakers is identified using the maximum eigengap of the Spectral Clustering. The speaker embedding model is based on a ResNet-34 architecture trained with a combination of classification, metric loss [27] and channel loss [28] on about 12k speakers and 4k hours of CTS data. We use a uniform speaker segmentation [3, 4] with a duration of 500ms to extract the speaker embeddings, followed by the clustering phase of the SD system. Our baseline SD system is comparable to state-of-the-art diarization systems across several datasets and achieves a DER of 3.72 and SER of 1.1 on the CHAE test set, which is a stronger baseline than the one reported in [11]. We use a hybrid ASR system [29, 30, 31] with a Conformer acoustic model [32] and an n-gram language model trained on several tens of thousands of hours of audio and text data. For the reconciliation phase, the SD system provides speaker turns with time boundaries, and these labels are mapped to recognized words using the associated word boundaries from the ASR system. When the speaker turn boundary falls in the middle of a word, we assign the word to the speaker with the largest overlap with the word, similar to the baseline system in [12]. We use a neural-network based Speech Activity Detector (SAD) similar to [33] as a front end for both the SD and ASR systems above.
### SEC System
For the SEC model, we use a pre-trained Roberta-base model [16] as the backbone LM and a Transformer Encoder with 128 hidden states for the front-end model. The curriculum for \(P_{ASR}\) starts at 1 in the 1st epoch and decreases to 0.08 in the 10th and subsequent epochs in uniform steps. The curriculum for \(P_{Spk}\) starts at 0 in the 1st epoch and increases to 0.14 in the 10th and subsequent epochs in uniform steps. The model is trained with the Adam optimizer with a batch size of 32 and an average sequence length of 30 words per batch. We use a learning rate of 1e-4 and train the model for 30 epochs on a machine with 8 GPUs.
We use the SEC as a 2nd-pass post-processing step on top of the baseline SD-ASR system of Section 3.2. In order to determine the number of simulated errors needed to effectively train the lexical SEC to correct the speaker errors, we follow the error curricula described in Section 2.2 and pick the checkpoint with the \(P_{ASR}\), \(P_{Spk}\) that achieves the lowest WDER on the Fisher validation set. The values that achieve the best validation WDER are 0.1 for both \(P_{ASR}\) and \(P_{Spk}\). In addition to the training parameters, we also tune an inference parameter, the sliding window size mentioned in Section 2.3, on the Fisher validation set.
### Results
We tune the sliding window size on our Fisher validation subset and plot the WDER against the corresponding values in Figure 2. From the plot, we see that WDER decreases as the window size increases up to 30, due to increased lexical context for the backbone LM as well as the corrector model. The WDER increases again beyond a window size of 30, likely because the corrector model was trained with an average sequence length of 30 and because more sliding windows with more than 2 speakers are bypassed at larger window sizes. We also tried training with larger average sequence lengths, but that did not show any additional gains compared to the sequence length of 30 words. So, we use a sliding window size of 30 words for the remainder of the experiments. We also show some qualitative examples of the correction performance on the Fisher test set, using the SEC model with the best sliding window size, in Figure 3. Figure 3a shows that the correction model is able to effectively correct errors due to overlapping speech, when the SD hypothesizes one of the overlapping speakers and ASR hypothesizes the words of the other speaker. The model is also effective in correcting the lexically implausible errors around speaker turns, which is one of the major error-prone scenarios [34] for SD systems, as seen in Figure 3b.

Figure 2: Window Size tuning on the validation set

Figure 3: Correction Examples: a) Errors due to overlapping speech, b) Errors around speaker turns.
The quantitative WDER improvements of the correction models on the held-out validation and test sets are outlined in Table 1. We call the model trained on ground-truth transcripts with simulated speaker and ASR errors the "SimSEC" model. SimSEC_v2 is the SimSEC model with a Fisher-tuned Roberta backbone, trained with the custom curriculum mentioned in Section 3.3. SimSEC_v1 is similar to SimSEC_v2 but uses the Roberta-base backbone without any further fine-tuning. We evaluate SimSEC_v1 to quantify the gains attributed to fine-tuning the backbone LM on conversational data. The "SimSEC_v2 init + RealSEC Tuning" model is the paired-data-tuned model initialized with SimSEC_v2 and tuned using the first-pass acoustic SD labels instead of the simulated speaker errors. RealSEC is similar to "SimSEC_v2 init + RealSEC Tuning" but is trained by flat-starting the model on real paired data.
From Table 1, we can see that almost all of the corrector models produce considerable WDER gains over the baseline SD-ASR reconciled system across all the datasets, except for SimSEC_v1 on CH-109 with unknown speakers. It can be observed that tuning the backbone Roberta LM in SimSEC_v2 produces moderate WDER gains over the pretrained Roberta-base LM, especially on the CHAE validation set and CH-109 with unknown speakers. The models trained on paired data, either by tuning the SimSEC_v2 model or by flat-start training (RealSEC), produce further gains over the models trained with simulated errors (SimSEC_v1 and SimSEC_v2). The performance improvement of the models on CH-109 without fixing the number of speakers to 2 in the clustering phase is comparatively limited, due to hypothesizing more than 2 speakers on a few of the audio files, leading to smaller average WDER gains. With the best model, "SimSEC_v2 init + RealSEC Tuning", we observe relative WDER gains in the range of 15-30% across all the datasets.
To further analyze the amount of data needed to train or tune the models, we perform an ablation using only a fraction of the Fisher train data, as shown in Table 2. We evaluate the models SimSEC_v2 and "SimSEC_v2 init + RealSEC Tuning" with different fractions of ground-truth text and paired data, respectively. We see that the WDER of the SimSEC_v2 model and the "SimSEC_v2 init + RealSEC Tuning" model improves only moderately and saturates as the amount of text data and paired data, respectively, increases. This shows that the corrector model can be trained purely on small amounts of ground-truth transcripts by simulating speaker and ASR errors, and can also be fine-tuned on a small amount of paired data to achieve significant WDER gains.
## 4 Conclusion and Future Work
In this work, we propose a novel Speaker Error Corrector (SEC) to correct word-level speaker label errors from a conventional audio-only speaker diarization system. We achieve this using a language model over the ASR transcriptions to correct the speaker labels. The proposed lexical SEC can be trained effectively using only text data by simulating speaker errors, without the need for any paired audio-text data. A small amount of paired data can further improve model performance, leading to an overall relative reduction of WDER by over 15% across three telephony datasets. The proposed SEC framework is also lightweight and is easy to integrate as a post-processing module over existing systems.
One limitation of our current work is that it has been applied only to conversations in English. Future work can include training a multi-lingual SEC to make the system language-agnostic. To increase the robustness of this approach, in addition to the first-pass SD labels, we can leverage additional complementary acoustic cues to further improve the performance. Also, the current SEC model can only handle 2 speakers in a sliding window, which we plan to generalize to a larger number of speakers. We will also explore leveraging large generative models to synthesize conversational transcripts across multiple domains using curated prompts [35].
Table 1: WDER of different models on the Fisher test, RT03-CTS and CHAE dev/test sets. CH-109 is evaluated with and without fixing the number of speakers in the 1st pass to 2. SimSEC: SEC model trained using simulated transcript errors; RealSEC: SEC trained/tuned on real paired data.

| Model Type | Fisher Test | RT03-CTS Validation | RT03-CTS Test | CHAE Validation | CHAE Test | CH-109 (Known Spkrs) | CH-109 (Unknown Spkrs) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline (No Correction) | 2.26 | 2.30 | 2.18 | 4.23 | 2.82 | 3.69 | 4.28 |
| SimSEC_v1 (Base Roberta) | 1.72 | 2.18 | 1.98 | 4.16 | 2.68 | 3.41 | 4.29 |
| SimSEC_v2 (Tuned Roberta) | 1.63 | 1.90 | 1.67 | 3.52 | 2.49 | 3.16 | 3.74 |
| RealSEC (flat-start Training) | 1.53 | 1.73 | 1.58 | 3.31 | 2.30 | 2.98 | 3.57 |
| SimSEC_v2 init + RealSEC Tuning | 1.53 | 1.73 | 1.59 | 3.28 | 2.26 | 2.97 | 3.56 |
Table 2: WDER when training with different fractions of the Fisher train data (the final row of the original table is truncated in the source).

| Model Type | Fraction of Train Data | WDER |
| --- | --- | --- |
| Baseline (No Correction) | n/a | 2.26 |
| SimSEC_v2 | 0.2 | 1.73 |
| SimSEC_v2 | 0.4 | 1.68 |
| SimSEC_v2 | 0.8 | 1.65 |
| SimSEC_v2 | 1.0 | 1.63 |
| SimSEC_v2 init + RealSEC Tuned | 0.2 | 1.58 |
| SimSEC_v2 init + RealSEC Tuned | 0.4 | 1.57 |
| SimSEC_v2 init + RealSEC Tuned | 0.8 | 1.54 |
|
2303.03786 | Stability of the personal relationship networks in a longitudinal study
of middle school students | The personal network of relationships is structured in circles of
friendships, that go from the most intense relationships to the least intense
ones. While this is a well established result, little is known about the
stability of those circles and their evolution in time. To shed light on this
issue, we study the temporal evolution of friendships among teenagers during
two consecutive academic years by means of a survey administered on five
occasions. We show that the first two circles, best friends and friends, can be
clearly observed in the survey but also that being in one or the other leads to
more or less stable relationships. We find that being in the same class is one
of the key drivers of friendship evolution. We also observe an almost constant
degree of reciprocity in the relationships, around 60%, a percentage influenced
both by being in the same class and by gender homophily. Not only do our
results confirm the mounting evidence supporting the circle structure of human
social networks, but they also show that these structures persist in time
despite the turnover of individual relationships -- a fact that may prove
particularly useful for understanding the social environment in middle schools. | Diego Escribano, Francisco J. Lapuente, José A. Cuesta, Robin I. M. Dunbar, Angel Sánchez | 2023-03-07T10:46:52Z | http://arxiv.org/abs/2303.03786v3 | # Longitudinal study of friendships and enmities in middle school
###### Abstract
We study the temporal evolution of friendships among teenagers during two consecutive academic years with five waves of a survey. Our results show a large degree of consistency among waves, which is remarkable in view of the fact that the studied period includes a major reorganisation of the school due to the COVID-19 pandemic. We show that the first two circles, best friends and friends, can be clearly observed in the survey but also that being in one or the other leads to more or less stable relationships. We have also studied enmities, finding that they are reported to a much lesser extent and are highly volatile. We find that being in the same class is one of the key drivers of friendship evolution. We also observe an almost constant degree of reciprocity in the relationships, around 60%, a percentage slightly influenced by being in the same class and gender homophily. Not only do our results confirm the mounting evidence supporting the layered structure of human social networks, but they also show that these structures persist in time despite the turnover of individual relationships--a fact that may prove particularly useful for understanding the social environment in high schools.
## Introduction
Human egocentric social networks have very distinct sizes and structures. In adults, they comprise around 150 friends and family members, organised in a hierarchically inclusive series of layers of about 5, 15, 50 and, finally, the 150, with each layer being approximately three times the size of the layer immediately inside it.[1] The sizes of these layers appear to be remarkably stable across time, despite changes in membership.[2] This is largely a consequence of the fact that the time available for socialising (and hence the capacity for bond-building) is strictly limited.[3, 4, 5] The layers are created by differential time investment, with the five members of the innermost layer receiving around 40% of total social effort between them and the additional 10 members of the 15-layer about 20%, with the 135 members of the outer two network layers sharing the remaining 40% in progressively declining quantities.[6] The layers themselves appear as the most likely distribution of time under this overall constraint in a context where the benefits a relationship provides are proportional to the time invested in it.[7, 8]
The primary function of networks is to provide social and economic benefits, with the individual layers offering benefits of different kinds. The innermost layer at 5, for example, consists of a subset of people (the support clique) who, out of a sense of obligation and emotional depth of relationship, are willing to drop everything and come to your assistance when needed; the 15-layer, or sympathy group, constitutes a pool of regular social contacts who provide less costly forms of social exchange.[9, 10] These relationship types require different investments of time and emotional effort to ensure they will provide the benefits they offer, with the time investment being proportional to the benefits on offer. There is evidence that the size of the outer layer at 150 is a point where the flow of information in the network is maximized.[11]
Although egocentric social networks have been studied in considerable depth in adults, we know little about the networks of children and teenagers. While several studies have addressed this issue in the last two decades, looking at peer influence processes,[12] comparing different ways of collecting data,[13] assessing the effect of networks on psychological adjustment to schools[14] or on influenza propagation,[15] none have studied how these networks evolve over time. From a time evolution perspective, it is worth mentioning the recent results by Kucharski _et al._ focused on students in year 7.[16] These authors show that longitudinal approaches yield consistent results, in the sense that the properties of data remain essentially the same across
different waves. As we will show below, our results confirm this consistency. Other longitudinal studies have shown, for instance, that students prefer to gradually reorganise their social networks according to their performance levels [17] or that residential and school mobility has a large impact on the structure of adolescents' friendship networks [18]. In this framework, the research presented here touches upon the effect of the circle structure and on the endogenous drivers of the time evolution of friendships.
Most studies of these age groups have focused on tie quality in a handful of identified close family and friend relationships ('best friends' or, perhaps, the inner core of five best friends). There is some evidence that network size increases across childhood and the teenage years, and that it does so by adding whole layers in step-changes rather than by accreting alters individually [4]. How stable these networks are across time, however, is largely unknown. In addition, almost all studies have focused on affiliative relationships. Although we know that friends become enemies (and occasionally vice versa) over time, there are very few studies that have examined enmities (negative relationships) in the context of egocentric social networks. Indeed, one of the few that has done so focused on relationships in medieval literature rather than contemporary real-world social networks [19].
In this study, we report longitudinal data on the inner layers of friendship networks in young teenagers. We focus on the 5- and 15-layers among a cohort of middle school children (corresponding to years 1 to 4 of mandatory middle school, referred to as ESO after its initials in Spanish), and how these change both over time and in the face of the constraints imposed by lockdown during COVID-19. We study two key aspects of friendship networks: their size and composition and the stability of these relationships as students change classes in successive years and are exposed to new social opportunities. We study both friendships and enmities. The results reported here largely extend those presented in Ref. [8], which were limited to only one age range (ages 12 and 13, ESO 1st year) and two waves, carried out in December 2018 and May 2019 (see section S1 in the Supplementary Material for a breakdown of the distribution of students by different magnitudes). See Methods for more details on the sample, the school organization, and our outlier detection procedure.
## Results
The number of friendships and best friendships the students have is remarkably consistent across the five waves, despite the very different situations in the two academic years under study (cf. Fig. 1), in agreement with Ref. [16]. Even with the drastic changes in waves 4 and 5, all waves' results are very similar. The results are the same even when splitting the data by class, gender, itinerary, or by "repetidor" status (i.e., whether the student is taking the course for the first or the second time; in the latter case they are called a "repetidor") (see plots in section S2 in the Supplementary Material). This is true of both friendships and best friendships. If a regression line is fit through the number of friends in each of the five waves for each student, the mean of the slopes is exactly 0 (see Supplementary Material, Sec. S4).
To delve deeper into the structure of ego-networks, we employed the parameter \(\mu\) as a tool for analysing the circular structure of a given individual, introduced in Ref. [7] and discussed in Ref. [8]. This parameter is obtained for every individual by fitting the analytical expression from Ref. [7] to the reported numbers of friendships in the two circles. When \(\mu\) is greater than 0, the circles have a characteristic structure where the number of friendships grows rapidly as we move away from circle 1. A value of \(\mu\) close to 0.7 is typically seen when the scaling ratio between the sizes of the circles is around 3, as is often observed. Conversely, when \(\mu\) is less than 0, most friendships are concentrated in circle 1 and the additional circles have very few additional people. These negative values of \(\mu\) are observed in situations where the number of possible links is limited (e.g. sailors in a boat, communities of migrants, etc. [7]), or for introverted individuals. The distribution of the \(\mu\) parameters (cf. Ref. [8]) is also constant and, in fact, most individual values of \(\mu\) change very little in the five waves (see plots in section S3 in the Supplementary Information; see also the distribution of the slopes of fits to the evolution of \(\mu\) in Sec. S4) in spite of the organisational changes due to COVID and to the composition of the classes when moving to the next academic year (see Methods).
In order to study the evolution of relationships, we have compared the nature of links (labelled \(+2\) for 'best friend', \(+1\) for 'friend', 0 for 'no link', \(-1\) for 'enemy', and \(-2\) for 'worst enemy') between pairs of subjects in two consecutive waves (\(w_{n}\) and \(w_{n+1}\)), and have computed diverse conditional probabilities. Thus, Fig. 2 represents \(P(x,w_{5}|+2,w_{4})\) (left) and \(P(x,w_{5}|+1,w_{4})\) (right), whereas Fig. 3 represents \(P(x,w_{5}|-2,w_{4})\) (left) and \(P(x,w_{5}|-1,w_{4})\) (right). (Other conditional probabilities are shown in Sec. S5 of the Supplementary Information.)
The first two circles (best friends and friends, respectively) evolve in very different ways: best friends are quite stable, and when they stop being best friends they end up as plain friends most of the time. In contrast, friends are more dynamic and may disappear from the radar or, in some cases, even become best friends. We have also looked at the opposite evolution, namely where students that appear for the first time as best friends were in previous waves (i.e. \(P(x,w_{n-1}|+2,w_{n})\)). (All these conditional probabilities are shown in Sec. S5 of the Supplementary Information.) We have observed that best friends were often already best friends in the previous wave, and new ones come mostly from being friends. Therefore, the first two circles show clear differences in the stability of the relationships they involve, arising from the different intensity of best-friend versus plain-friend relationships.
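These conditional probabilities are straightforward to compute from the wave-by-wave link labels; a minimal sketch (ours, with links stored as dictionaries over ordered student pairs) follows.

```python
from collections import Counter

def transition_probs(links_prev, links_next, source_label=+2):
    """P(x, w_{n+1} | source_label, w_n); absent pairs count as label 0."""
    counts = Counter(links_next.get(pair, 0)
                     for pair, label in links_prev.items()
                     if label == source_label)
    total = sum(counts.values())
    return {x: n / total for x, n in counts.items()}
```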
Enmities are very few and highly volatile. The total number of negative relationships is an order of magnitude smaller than that of positive ones. As can be seen from the plot (Fig. 3, and Sec. S4 in Supplementary Information for the rest of the waves), most relationships that are marked as worst enemies or just enemies in one wave end up not marked in the next wave. Interestingly, worst enemies are retained with higher frequency than plain enemies. These results point to friendships and enmities having a different nature, with friendships being more long-lasting and enmities reflecting, in general, more the heat of a specific conflict--although very bad relationships may last longer.
The results described so far--the stability of the number of relationships across waves and the higher turnover of the outer friendship layer (\(+1\)) compared to the inner one (\(+2\))--suggest picturing individuals as "social atoms". In this metaphor, layers play the role of atomic orbitals, whereas individuals act like electrons. Inner orbitals attract electrons more strongly than outer ones, so there is less turnover. Also, electrons may leave their orbitals for good, leaving a "hole" quickly replaced by a new electron. Likewise, friends who leave the egonetwork get replaced by new friends, so their average number remains constant.
On the other hand, we observe a larger degree of turnover than that reported in Ref. [20]. This may be because the latter study deals with data obtained from phone calls between adults; interestingly, even in that study, participants aged 17-21 showed a larger turnover than those older than 21 (see also Ref. [7]). Nonetheless, even Ref. [20] reported differences between layers similar to what we find here. This points to the role of developmental issues in the evolution of the structure of personal friendship networks. Care has to be taken, though, because the phone data should capture the general structure of people's friendships, whether they are family, workmates, friends, etc. In contrast, here we are restricting the students to tell us only about their relationships within the school. In this respect, what we are seeing is that the Dunbar circle structure reproduces itself in each domain of relationships: a fraction of a person's cognitive capabilities is devoted to school relationships, and from that limited capability the structure follows as in Ref. [7]. The more rapid turnover could be related to the smaller cognitive capacity devoted to the specific niche of school relationships.

Figure 1: Boxplots of the number of friendships (left) and best friendships (right) reported by each student that answered the five waves of the survey.

Figure 2: Percentage of individuals that ended up in a given category in wave 5, when they were marked in wave 4 as ‘best friend’ (conditional probability \(P(x,w_{5}|+2,w_{4})\), left) or just as ‘friend’ (conditional probability \(P(x,w_{5}|+1,w_{4})\), right).
One possible confound that might influence the evolution of relationships is the distribution of students across the different classes. Generally speaking, some 70% of the students' relationships are with other students in their same year and, of those, a majority are with students in their same class. In fact, among all potential relationships within the same class, approximately 50% are actually reported, whereas less than 5% of all potential relationships with students in different classes exist (see Fig. 4, left panel). That this is an important factor can also be seen in the right panel of Fig. 4, where pairs of students are divided into groups according to whether they were in the same class in the two academic years included in our data (2020-2021 and 2021-2022) (hereafter referred to as S-S), in different classes both years (hereafter D-D), in the same class in the first year and in a different class in the second year (hereafter S-D), or vice versa (hereafter D-S). The right panel of Fig. 4 then shows the percentage of relationships in each of these groups that were actually reported in each wave. Importantly, the change of academic year between wave 2 and wave 3 came with a reshuffling of the classes. This is reflected in a decrease of S-D relationships, going from values close to 50% to 25% (orange bars in Fig. 4), i.e., the separation led to the disappearance of half of the existing relationships. On the contrary, the plot shows an increase in the percentage of D-S relationships (blue bars in Fig. 4); in this case, the percentage rises from 15% to almost 45%, comparable to the starting point of the other group. This should be compared to the almost constant percentages of S-S and D-D relationships. It clearly shows that being in the same class is a very relevant driver for relationships to decay or start. In this regard, it is also interesting to note that when students are separated, the number of relationships that still remain in the second year is almost twice as large as the number of D-S relationships in the first year, meaning that it takes longer for relationships to disappear upon separation than to form upon coming together.
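For reference, the S-S/S-D/D-S/D-D grouping used here amounts to a two-character label per pair (our sketch; `class_of` is a hypothetical mapping from student and academic year to class).

```python
def class_trajectory(a, b, class_of):
    """Label a pair by whether they shared a class in each academic year."""
    y1 = "S" if class_of[a, "2020-2021"] == class_of[b, "2020-2021"] else "D"
    y2 = "S" if class_of[a, "2021-2022"] == class_of[b, "2021-2022"] else "D"
    return f"{y1}-{y2}"
```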
Our longitudinal study also allows us to address the issue of reciprocity of relationships. As shown in Fig. 5, the aggregate percentage of reciprocal relationships is remarkably similar in all five waves and close to 60%. On the other hand, as shown in the right panel of Fig. 5, although this is also true for most individuals, there are quite a few for which reciprocity is very low. Figures in Sec. S7 of the Supplementary Information show that the results do not significantly depend on the group, the gender, or the itinerary.
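Reciprocity here is simply the share of positive nominations that are returned; a minimal sketch (ours) over the directed link dictionary reads as follows.

```python
def reciprocity(links):
    """Fraction of positive links (+1/+2) whose reverse link is also positive."""
    positive = {pair for pair, v in links.items() if v > 0}
    mutual = sum((b, a) in positive for (a, b) in positive)
    return mutual / max(len(positive), 1)
```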
As reciprocity is also a property of relationships, it is worth considering its dependence on the personal characteristics of the two people involved. Figure 6 shows the percentage of reciprocal links between individuals of the same gender, and also the percentages of the four types of temporal evolution discussed above. Regarding gender, the plot shows that homophilic links are generally more reciprocal, while mixed-gender links are less reciprocal. Interestingly, when mixed-gender links are not reciprocal, it is not due to a gender bias (cf. Sec. S7 in the Supplementary Information). We can also see that reciprocity is quite
Figure 3: Percentage of individuals that ended up in a given category in wave 5, when they were marked in wave 4 as ‘worst enemy’ (conditional probability \(P(x,w_{5}|-2,w_{4})\), left) or just as ‘enemy’ (conditional probability \(P(x,w_{5}|-1,w_{4})\), right).
high in links between students that remain together the two academic years (S-S), is lowest for students that are always separate (D-D), while S-D and D-S links decrease or increase, respectively, in reciprocity in later waves. Similar results arise when looking at relationships in the same class or itinerary (cf. Sec. S7 in the Supplementary Information). Reciprocated friendships are also more stable, as are triangles formed only by positive relationships (in both cases, they are much more stable than any other combination). These results are shown in Fig. 7.
## Conclusions
In this paper, we have studied the temporal evolution of relationships among 12- to 16-year-old students attending the same high school. The study consisted of five waves of surveys during two consecutive academic years and included both positive and negative relationships. The number of students answering all five waves of the survey was 224. The oldest students had already participated in a preliminary pilot of this study in the academic year 2018-2019 [8]. Contrary to what one might expect, we do not observe any signs of fatigue among the students, and their responses are remarkably similar in every wave, thus confirming the consistency of the data collection reported in Ref. [16]. Indeed, the number of reported friends and best friends is quite constant in the five waves, and it is so irrespective of groups and ages, genders, itineraries, or being a "repetidor". Therefore, we have a very rich longitudinal dataset that allows addressing a number of important issues.
The results from the surveys show clearly that friendships in the inner circle (best friends) are more stable than the rest of the friendships in the second circle (just friends). This observation provides further evidence of the key role of Dunbar's circles in the organisation of relationships, but also supports the idea that the intensity of the relationship in the first circle is higher than in the second one, apparently making them more stable. On the contrary, enmities are few, much less frequent than friendships, and highly volatile, with many simply disappearing from one wave of the survey to the next. Note, however, that this does not mean that learning about enmities is irrelevant, as they have been shown to control the community structure of classes in Ref. [8]. This fact is therefore important for the daily dynamics of the class and as such, it is highly valuable information for the school management.

Figure 4: Left: Percentage of relationships formed between individuals in the same group (S) versus those in different groups (D) relative to the total number of relations that might potentially form in each of the two cases. Right: Percentage of relationships between individuals that are in the same group both years (S-S), in the same group the first year but different the second (S-D), in a different group the first year and the same group the second (D-S) and different group both years (D-D), referred in each case to the total number of possible relationships.

Figure 5: Distributions of individual reciprocal relationships in each wave. The numbers on the horizontal axis correspond to each one of the waves.
Our study also points to the importance of being in the same class for forming and stabilising friendships. As we have discussed above, the strongest friendships arise among students who were in the same class during the two academic years we have studied. The change from being in the same class one year and separate the following one leads to the loss of a sizable fraction of friendships, which are then refocused on new classmates. Friendships among students that never shared class are much rarer in comparison. These observations suggest that we tend to have our relationship structure occupied at all times, as the friends who are lost because of the separation are replaced with new classmates. In addition, it also highlights the importance of frequent interaction in keeping or weakening relationships.
A remark is in order concerning the comparison of these results with the preliminary ones reported in Ref. [8]. In terms of the total number of relations reported as friends and best friends, the numbers in Fig. 1 of Ref. [8] (notice that they refer to individual groups of the same year and not to whole years as reported here) are larger than those found here (cf. Fig. 1 above). That study was done only with students from the 1st year of ESO; in the work reported here, they have answered the surveys again, now as students of the 4th year of ESO, and their aggregate numbers of friendships are indeed different from those reported earlier. In contrast, students in this work who are in the 1st year of ESO report a number of friendships that is also smaller than that in Ref. [8] and similar to the rest of the classes in these surveys. Unfortunately, the students who took part in both studies cannot be connected because of an upgrade of the software, so we cannot make a proper longitudinal study encompassing the sample period that could explain the difference. In any event, the discrepancy is probably due to the presence of outliers in the study of Ref. [8], who have been removed in the present one. A few outliers reported more than 100 relationships, thus biasing the mean to higher values. The much larger number of participants and waves in the present work makes the results much more reliable. In connection with this, we note that the distribution of slopes of fits to the temporal evolution of \(\mu\) (see Sec. S4 in the Supplementary Information) is relatively wide (much more so than the distribution of slopes for the evolution of the total number of friends), but we did not find any systematic correlation between the slopes and the year in which students were at the time of the waves.

Figure 6: Left: Percentage of reciprocal links per gender: M-M (male-male), M-F (male-female), F-F (female-female). Right: percentage of reciprocal links according to whether the pair of individuals are in the same or different class in consecutive years.

Figure 7: Evolution of reciprocal friendship (regardless of its strength) when passing from wave 4 to wave 5. Left: Percentage of links +/+ in wave 4 that end up in \(xy\) in wave 5. Right: Percentage of reciprocal friendship triangles (type +/+/+) in wave 4 that end up in a triangle \(x\)/\(y\)/\(z\) in wave 5. (\(x,y,z=+,-,0\)).
An interesting observation concerns the topic of reciprocal friendships, an important topic in view of its connection to performance [21] and the success of behavioural interventions [22]. Reciprocity is remarkably constant and for a majority of individuals, a percentage between 50% and 70% of their relationships are reciprocal. In general, gender homophilic relationships are slightly more reciprocal and vice-versa, male-female relationships tend to be less reciprocal, with both genders being equally responsible for this effect. Relationship reciprocity also shows the effect of group reshuffling and evolves in a manner similar to the friendships themselves. On the other hand, there are a few individuals whose reciprocity is very low, which could be an indicator of possible socialisation problems for those particular subjects, providing yet another valuable hint for the school management.
We would like to end this discussion by summarising the big picture that can be inferred from this study. As already mentioned, everybody seems to have a predefined structure of their relationships, despite their frequent turnover. The structure is akin to that of an atom with its electrons, with less turnover in the inner layers than in the outer ones--hence the "social atom" metaphor. This suggests the possibility of studying the formation of social networks as a statistical-mechanical system in equilibrium, every relation having an associated "binding energy"--the cost to remove the link. The question of whether one could then map this system onto one of the available statistical-mechanical models of graph formation (e.g., those based on exponential random graphs [23, 24, 25]) remains open. From a more philosophical point of view, one such mapping would imply that it is the "total energy" (the Hamiltonian, in the statistical-mechanical jargon) that describes a social system, rather than a graph. Graphs are volatile and in constant change, whereas the energy uniquely determines the graph ensemble, of which any observed social network would be but a specific instance. One such mapping would open a broad avenue for rethinking social systems from a new perspective.
## Methods
### Data collection
Data were obtained from surveys conducted in the school involved in the study. The study was approved by the Ethics Committee of Universidad Carlos III de Madrid, and the surveys were subsequently carried out in accordance with the approved guidelines. Consent was obtained from the school, which adopted this as a research project of its own and in turn got informed consent from the participants' parents. Students always participated voluntarily and signed informed consent prior to beginning the survey. The surveys were administered through a computer interface and included direct questions about their relationships as well as others related to personality traits. To elicit relationships, students could choose from a list of all the other students in their school at the same level (i.e., the so-called mandatory secondary school, known by its Spanish acronym, ESO, ages 11-15 years old). A preliminary study of the school was presented in full detail in Ref. [8]. For each individual in the high school, we elicit friends, best friends, and light and strong enemies, collected from each student as a list of student IDs and labelled as \(\{+1,+2,-1,-2\}\), respectively. These will constitute the links of our high school networks. Note that each student provides this list, so we are extracting a directed network.
### Data on sample
In what follows, we report the results of five waves of our social relationship survey, and we will refer to them by their number. They were carried out on the following dates: December 2020 (wave 1), May 2021 (wave 2), October 2021 (wave 3), February 2022 (wave 4), and May 2022 (wave 5). This means that waves 1 and 2 correspond to the academic year 2020-2021 and waves 3-5 correspond to the academic year 2021-2022. Academic years in middle schools in Spain begin around mid-September and end around mid-June.
Outliers were removed according to the following criterion: students answering with more than 100 relationships, or with more than a 200% change (upwards or downwards) in the number of reported relationships from one wave to the next, were discarded from the sample. From the 285 students that answered in all five waves, we discarded 64 as outliers: 22 reported more than a hundred relationships, while 42 had too much variation between the number of answers in consecutive waves. This leaves us with a final sample of answers from 221 students for the five waves. Data are taken from students that were in the 2nd, 3rd, and 4th year of middle school (in Spanish, Enseñanza Secundaria Obligatoria, i.e. ESO) during the academic year 2021-2022, and are students who turned 13, 14 or 15 years old during 2021.
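The outlier criterion translates into a short filter (our sketch; we interpret a 200% change as the larger count exceeding three times the smaller one, and skip the ratio test when a count is zero).

```python
def keep_student(counts, max_links=100, max_rel_change=2.0):
    """counts: number of reported relationships per wave, in order."""
    if max(counts) > max_links:
        return False
    for prev, cur in zip(counts, counts[1:]):
        lo, hi = sorted((prev, cur))
        if lo > 0 and hi / lo - 1 > max_rel_change:  # >200% up or down
            return False
    return True
```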
In the academic year 2020-2021, due to COVID prevention measures, the structure of the school was different from the standard one. Students who were in 1st and 2nd year of ESO during the course 2020-2021 were divided into eight classes with some 15 students each. In the same year, students in the 3rd year of ESO were divided into five classes of some 25-30 students each, and within each class, they were split into two subgroups which, because of COVID, attended school physically on alternate days. In the academic year 2021-2022, these students advanced to the 2nd, 3rd and 4th year of ESO as already stated, and the school returned to a pre-COVID structure, i.e. 5, 5 and 4 classes with some 25 students in each year respectively, attending school physically on a daily basis. In addition, there are two teaching pathways in this school, one that is taught mostly in English (except Spanish and Mathematics) and another that is taught mostly in Spanish (except Plastic Expression and Physical Education, which are taught in English). Approximately 40% of the students take the English pathway (see section S1 in the Supplementary Material).
2304.08674 | Prime Hasse principles via Diophantine second moments | We show that almost all primes $p\not\equiv \pm 4 \bmod{9}$ are sums of three cubes, assuming a conjecture due to Hooley, Manin, et al. on cubic fourfolds. This conjecture is approachable under standard statistical hypotheses on geometric families of $L$-functions. | Victor Y. Wang | 2023-04-18T00:53:09Z | http://arxiv.org/abs/2304.08674v2 | # Prime Hasse principles via Diophantine second moments
###### Abstract.
We show that almost all primes \(p\not\equiv\pm 4\bmod 9\) are sums of three cubes, assuming a conjecture due to Hooley, Manin, et al. on cubic fourfolds. This conjecture could be proven under standard number-theoretic hypotheses, following the author's thesis work.
Key words and phrases: Integral points, densities, Manin conjectures, representing primes, Selberg sieve.
2020 Mathematics Subject Classification: Primary 11D85; Secondary 11D25, 11D45, 11N35, 11N36.
## 1. Introduction
Let \(F_{0}(\boldsymbol{y})=F_{0}(y_{1},y_{2},y_{3}):=y_{1}^{3}+y_{2}^{3}+y_{3}^{3}\). The _Hasse-exceptional set_
\[\mathcal{E}:=\{a\in\mathbb{Z}:a\not\equiv\pm 4\bmod 9\}\setminus F_{0}(\mathbb{Z} ^{3}) \tag{1.1}\]
might be empty [10]. The analog of \(\mathcal{E}\) for \(5y_{1}^{3}+12y_{2}^{3}+9y_{3}^{3}\) contains a sparse sequence (see e.g. [1, p. 691, footnote 3]), produced by means that do not apply to \(F_{0}\)[12, p. 1304]. We confine ourselves to a statistical analysis of \(\mathcal{E}\) relative to \(\mathbb{Z}\) and the set of primes. For "critical" equations such as \(F_{0}(\boldsymbol{y})=a\), "subcritical" statistical frameworks (such as those of [13, 14, 15]) break down, and new features come into play [1, 2].
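For orientation, the congruence condition in (1.1) and the notion of representation by \(F_{0}\) are easy to experiment with numerically. The following Python sketch (purely illustrative, and used nowhere in our arguments) tabulates the residues of \(F_{0}\) mod \(9\) and performs a naive search for representations; the search bound \(B\) is a hypothetical parameter, and hard values of \(a\) have smallest representations far beyond any such brute-force range.

```python
# Cubes mod 9 lie in {0, 1, 8}, so x^3 + y^3 + z^3 never hits the
# residues +-4 mod 9 -- the congruence condition in (1.1).
print(sorted({(x**3 + y**3 + z**3) % 9
              for x in range(9) for y in range(9) for z in range(9)}))
# -> [0, 1, 2, 3, 6, 7, 8]

def three_cube_reps(a, B):
    """All (x, y, z) with x <= y <= z, |x|,|y|,|z| <= B, x^3+y^3+z^3 = a."""
    cubes = {t**3: t for t in range(-B, B + 1)}  # cube value -> integer root
    reps = []
    for x in range(-B, B + 1):
        for y in range(x, B + 1):
            t = a - x**3 - y**3
            if t in cubes and cubes[t] >= y:
                reps.append((x, y, cubes[t]))
    return reps

print(three_cube_reps(3, 10))  # [(-5, 4, 4), (1, 1, 1)]
```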
Write \(\boldsymbol{y}=(y_{1},y_{2},y_{3})\), \(\boldsymbol{z}=(z_{1},z_{2},z_{3})\), and let \(\boldsymbol{x}=(x_{1},\ldots,x_{6}):=(y_{1},y_{2},y_{3},z_{1},z_{2},z_{3})\). Let
\[F(\boldsymbol{x})=F(\boldsymbol{y},\boldsymbol{z}):=F_{0}(\boldsymbol{y})-F_ {0}(\boldsymbol{z}). \tag{1.2}\]
Let \(\Upsilon\) denote the set of \(3\)-dimensional vector spaces \(L\subseteq\mathbb{Q}^{6}\) over \(\mathbb{Q}\) such that \(F|_{L}=0\). (The equation \(\boldsymbol{y}=\boldsymbol{z}\) cuts out one such \(L\). All other \(L\in\Upsilon\) can be generated from \(\boldsymbol{y}=\boldsymbol{z}\) by suitable permutations and negations of variables.) Call a tuple \(\boldsymbol{x}\in\mathbb{Z}^{6}\)_special_ if
\[\boldsymbol{x}\in\bigcup_{L\in\Upsilon}L. \tag{1.3}\]
Given integers \(X,d\geq 1\), and a nice (say \(C_{c}^{\infty}\)) function \(w\colon\mathbb{R}^{6}\to\mathbb{R}\), let
\[N_{w}(X;d):=\sum_{\boldsymbol{x}\in\mathbb{Z}^{6}}w(\boldsymbol{x}/X)\cdot \mathbf{1}_{F(\boldsymbol{x})=0}\cdot\mathbf{1}_{d|F_{0}(\boldsymbol{y}),F_{0 }(\boldsymbol{z})} \tag{1.4}\]
(using the indicator notation \(\mathbf{1}_{E}\) defined in §1.1) and
\[E_{w}(X;d):=N_{w}(X;d)-\mathfrak{S}(d)\cdot\sigma_{\infty,w}\cdot X^{3}-\sum_ {\text{special $\boldsymbol{x}\in\mathbb{Z}^{6}$}}w(\boldsymbol{x}/X)\cdot\mathbf{1}_{d|F_ {0}(\boldsymbol{y}),F_{0}(\boldsymbol{z})}, \tag{1.5}\]
where \(\mathfrak{S}(d)\), \(\sigma_{\infty,w}\) are familiar quantities defined in (2.9) and (1.9). Based on [1, Conjecture 2], [10], [20, Appendix], [20], et al., one conjectures
\[\lim_{X\to\infty}X^{-3}E_{w}(X;d)=0. \tag{1.6}\]
Let \(\mathbb{Z}_{\geq 0}:=\{n\in\mathbb{Z}:n\geq 0\}\), and define \(\mathbb{Z}_{\geq 1},\mathbb{R}_{>0},\ldots\) similarly. The classical "second moment method" shows that \(F_{0}(\mathbb{Z}_{\geq 0}^{3})\) has positive lower density, assuming \(E_{w}(X;1)\ll X^{3}\) holds for a suitable \(w\). There are interesting conjectures on the density of \(F_{0}(\mathbb{Z}_{\geq 0}^{3})\)[4],
but we focus on \(F_{0}(\mathbb{Z}^{3})\). Diaconu, building on work of Ghosh-Sarnak published in [12], showed in [11] that (1.1) has density \(0\), assuming a version of (1.6) for \(d=1\) where \(N_{w}(X;1)\) is replaced by a point count over a fairly skew region (see [11, \(R_{N}^{\ast}\) on p. 24]). The main goal of the present paper is to say more about (1.1), assuming that there exists a triple \((\xi,\delta,k)\in\{0,1\}\times\mathbb{R}_{>0}\times\mathbb{Z}_{\geq 1}\) for which the following holds:
\[E_{w}(X;d)\ll\|w\|_{k,\infty}B(w)^{k}X^{3-\delta},\quad\text{uniformly over $d\leq X^{\xi\delta}$ and clean $w\in C_{c}^{\infty}(\mathbb{R}^{6})$.} \tag{1.7}\]
(We define the terms \(\|-\|_{k,\infty}\), \(B(-)\), and "clean" in §1.1.)
**Theorem 1.1**.: _Suppose that (1.6) for \(d=1\) holds for all clean functions \(w\in C_{c}^{\infty}(\mathbb{R}^{6})\). Then (1.1) has density \(0\) in \(\mathbb{Z}\). Now fix \((\delta,k)\in\mathbb{R}_{>0}\times\mathbb{Z}_{\geq 1}\), and assume (1.7) for \(\xi=0\). Then \(|\mathcal{E}\cap[-A,A]|\ll_{\epsilon}A/(\log A)^{1-\epsilon}\) for all integers \(A\geq 2\) (for all \(\epsilon>0\))._
The qualitative density result in Theorem 1.1 was previously worked out by the author (see e.g. [10, Theorem 2.1.8]), following the main ideas of [11, §§2-3]. Our present approach is, however, stronger and more flexible.
**Theorem 1.2**.: _Fix \((\delta,k)\in\mathbb{R}_{>0}\times\mathbb{Z}_{\geq 1}\). Assume (1.7) for \(\xi=1\). Then (1.1) contains at most \(O_{\epsilon}(A/(\log A)^{2-\epsilon})\) primes \(p\leq A\), for any integer \(A\geq 2\) (for any \(\epsilon>0\))._
The hypothesis (1.7) is essentially a version of (1.6) with a specified power saving, uniform over a specified range of integers \(d\) and weights \(w\). When \(d=1\), one can prove (1.6) (and in fact \(E_{w}(X;1)\ll_{w}X^{3-\delta}\) for some small \(\delta\)) for a large class of functions \(w\), assuming what primarily amounts to the Ratios and Square-free Sieve Conjectures. (See [10, Theorem 8.4.3(b)], which builds on [11, 12].) For some \((\delta,k)\in\mathbb{R}_{>0}\times\mathbb{Z}_{\geq 1}\), one could similarly reduce (1.7) to standard number-theoretic conjectures.
Given a compactly supported function \(\nu\colon\mathbb{R}^{3}\to\mathbb{R}\), let
\[N_{a,\nu}(X):=\sum_{\boldsymbol{y}\in\mathbb{Z}^{3}:\,F_{0}(\boldsymbol{y})=a }\nu(\boldsymbol{y}/X). \tag{1.8}\]
To prove Theorem 1.1, we will choose "nice" weights \(\nu\) (cf. [11]) and bound \(N_{a,\nu}(X)\) in "approximate variance" over \(a\ll X^{3}\)--building on [12, 11, 10]. To prove Theorem 1.2, we will also need precise estimates (not just bounds) for such variances, and not just over \(\mathbb{Z}\) but also over arithmetic progressions \(a\equiv 0\bmod d\) with \(d\leq X^{\delta}\).
In §2, we will define and estimate an approximate variance. In §3, we will show that certain truncated singular series are typically sizable. In §4, we will apply our estimates from §§2-3 to prove Theorems 1.1 and 1.2. Here it is important to allow \(\nu\) to deform with \(X\). For fixed \(\nu\), counting prime values of \(F_{0}\) (with or without multiplicity) up to constant factors remains a challenge, even conditionally; see [12, Conjecture A.3] (Bateman-Horn in several variables) and the present §5 for some concrete open questions in this direction. (It might also be interesting to ask analogous questions for sequences other than the primes.)
How many solutions to \(x^{3}+y^{3}+z^{3}=a\) of a given size can our methods produce, for typical \(a\)? By slightly modifying SS4, one could provide several kinds of answers to this question. A comprehensive discussion would be tedious, so we limit ourselves to a murky remark:
_Remark 1.3_.: In Theorems 1.1 and 1.2, there are three levels of hypotheses in total, say H1, H2, H3, from weakest to strongest. H1 implies that if \(h(a)\to\infty\) as \(|a|\to\infty\), then there exists \(f\colon\mathbb{Z}\to\mathbb{R}\), with \(\lim_{|a|\to\infty}f(a)=\infty\), such that for almost all integers \(a\not\equiv\pm 4\bmod 9\), the equation \(x^{3}+y^{3}+z^{3}=a\) has \(\geq f(a)\) solutions with \(\max(|x|,|y|,|z|)\leq h(a)\cdot|a|^{1/3}\).
Under H2 (resp. H3), one can show that if \(g(a)\to 0\) as \(|a|\to\infty\), then \(x^{3}+y^{3}+z^{3}=a\) has \(\geq g(a)\cdot\log(1+|a|)\) solutions for almost all integers (resp. primes) \(a\not\equiv\pm 4\bmod 9\).
We expect that Theorems 1.1 and 1.2 can be generalized from \(F_{0}\) to arbitrary ternary cubic forms \(G_{0}\) with nonzero discriminant, if one modifies (1.11) and (2.27). For (1.11), see [20, Definition 1.4.3]. For (2.27), one needs that for any \(L\) in the (finite) set \(\Upsilon\), if \(I(L)\) denotes the ideal of \(\mathbb{Q}[\boldsymbol{y},\boldsymbol{z}]\) defining \(L\), then \(I(L)\cap\mathbb{Q}[\boldsymbol{y}]\) is a principal ideal of \(\mathbb{Q}[\boldsymbol{y}]\).
The proof of Theorem 1.2 might also adapt to the Markoff cubic \(x^{2}+y^{2}+z^{2}-xyz\) to _unconditionally_ extend [1, Theorem 1.2(ii)] to primes, provided one can handle certain quadratic subtleties uncovered in [1]. For practical reasons, we focus on \(x^{3}+y^{3}+z^{3}\).
### Conventions
For an event \(E\), we let \(\mathbf{1}_{E}:=1\) if \(E\) holds, and \(\mathbf{1}_{E}:=0\) otherwise.
For a vector \(\boldsymbol{u}\in\mathbb{R}^{s}\), we let \(\|\boldsymbol{u}\|:=\max_{i}(|u_{i}|)\) and \(d\boldsymbol{u}:=du_{1}\cdots du_{s}\). We let \(C_{c}^{\infty}(\mathbb{R}^{s})\) denote the set of smooth compactly supported functions \(\mathbb{R}^{s}\to\mathbb{R}\). Given a function \(f\colon\mathbb{R}^{s}\to\mathbb{R}\), we let \(\operatorname{Supp}f\) denote the closure of \(\{\boldsymbol{u}\in\mathbb{R}^{s}:f(\boldsymbol{u})\neq 0\}\) in \(\mathbb{R}^{s}\).
We let \(\partial_{x}:=\partial/\partial x\). We write \(\int_{X}dx\,f(x)\) to mean \(\int_{X}f(x)\,dx\); we write \(\int_{X\times Y}dx\,dy\,f(x,y)\) to mean \(\int_{X}dx\,(\int_{Y}dy\,f(x,y))\). For \(w=w(\boldsymbol{x})\in C_{c}^{\infty}(\mathbb{R}^{6})\), we let
\[\sigma_{\infty,w}:=\lim_{\epsilon\to 0}(2\epsilon)^{-1}\int_{|F(\boldsymbol{x} )|\leq\epsilon}d\boldsymbol{x}\,w(\boldsymbol{x}). \tag{1.9}\]
Given \(f=f(u_{1},\ldots,u_{s})\in C_{c}^{\infty}(\mathbb{R}^{s})\) and \(k\in\mathbb{Z}_{\geq 0}\), we define the Sobolev norm
\[\|f\|_{k,\infty}:=\max_{\begin{subarray}{c}\alpha_{1},\ldots,\alpha_{s}\in \mathbb{Z}_{\geq 0}:\\ \alpha_{1}+\cdots+\alpha_{s}\leq k\end{subarray}}\max_{\mathbb{R}^{s}}| \partial_{u_{1}}^{\alpha_{1}}\cdots\partial_{u_{s}}^{\alpha_{s}}f|. \tag{1.10}\]
Given a compactly supported function \(f\colon\mathbb{R}^{s}\to\mathbb{R}\) with \(\boldsymbol{0}\notin\operatorname{Supp}f\), we let \(B(f)\) denote the smallest integer \(B\geq 1\) such that \(B^{-1}\leq\|\boldsymbol{u}\|\leq B\) for all \(\boldsymbol{u}\in\operatorname{Supp}f\).
We call a function \(f\colon\mathbb{R}^{s}\to\mathbb{R}\)_clean_ if
\[(\operatorname{Supp}f)\cap\{\boldsymbol{u}\in\mathbb{R}^{s}:u_{1}\cdots u_{s} =0\}=\emptyset. \tag{1.11}\]
We let \(e(t):=e^{2\pi it}\), and \(e_{r}(t):=e(t/r)\).
For a finite nonempty set \(S\), we let \(\mathbb{E}_{b\in S}[f(b)]:=|S|^{-1}\sum_{b\in S}f(b)\).
We write \(f\ll_{S}g\), or \(g\gg_{S}f\), to mean \(|f|\leq Cg\) for some \(C=C(S)>0\). We let \(O_{S}(g)\) denote a quantity that is \(\ll_{S}g\). We write \(f\asymp_{S}g\) if \(f\ll_{S}g\ll_{S}f\).
We let \(v_{p}(-)\) denote the usual \(p\)-adic valuation. For integers \(n\geq 1\), we let \(\phi(n)\) denote the totient function, \(\tau(n)\) the divisor function, \(\mu(n)\) the Mobius function, \(\omega(n)\) the number of distinct prime factors of \(n\), and \(\operatorname{rad}(n)\) the radical of \(n\).
## 2. General variance setup and estimation
Given \(\nu\), \(a\), \(X\), we define certain densities \(\sigma_{p,a}\), \(\sigma_{\infty,a,\nu}(X)\) in (2.5), (2.22). For \(a\in\mathbb{Z}\setminus\{0\}\), informally write \(\mathfrak{S}_{a}:=\prod_{p\text{ prime}}\sigma_{p,a}\), and consider the naive _Hardy-Littlewood prediction_ \(\mathfrak{S}_{a}\cdot\sigma_{\infty,a,\nu}(X)\) for \(N_{a,\nu}(X)\). Smaller moduli should have a greater effect in \(\mathfrak{S}_{a}\); furthermore, \(\mathfrak{S}_{a}\) itself--_as is_--can be subtle (see §3). So in (2.4) below, we use a "restricted" version of \(\mathfrak{S}_{a}\cdot\sigma_{\infty,a,\nu}(X)\). For technical quantitative reasons, we use an additive approximation to \(\mathfrak{S}_{a}\), rather than a multiplicative approximation as in [1, 16, 20].
**Definition 2.1**.: Let \(F_{a}:=F_{0}-a\) for \(a\in\mathbb{Z}\). For integers \(n\geq 1\), let
\[T_{a}(n):=\sum_{u\in(\mathbb{Z}/n\mathbb{Z})^{\times}}\sum_{\boldsymbol{y}\in( \mathbb{Z}/n\mathbb{Z})^{3}}e_{n}(uF_{a}(\boldsymbol{y}))=\sum_{\begin{subarray} {c}1\leq u\leq n:\\ \gcd(u,n)=1\end{subarray}}\ \sum_{1\leq y_{1},y_{2},y_{3}\leq n}e_{n}(uF_{a}(y_{1},y_{2},y_{3})) \tag{2.1}\]
(whenever \(a\) lies in \(\mathbb{Z}\) or \(\mathbb{Z}/n\mathbb{Z}\), or maps canonically into \(\mathbb{Z}/n\mathbb{Z}\)). Let
\[s_{a}(K):=\sum_{n\leq K}n^{-3}T_{a}(n) \tag{2.2}\]
(whenever \(a,K\in\mathbb{Z}\) and \(K\geq 1\)). Given \(\nu\colon\mathbb{R}^{3}\to\mathbb{R}\), let
\[\nu^{\otimes 2}(\boldsymbol{x})=\nu^{\otimes 2}(\boldsymbol{y},\boldsymbol{z} ):=\nu(\boldsymbol{y})\nu(\boldsymbol{z}), \tag{2.3}\]
and for integers \(d\geq 1\), let \(d\mathbb{Z}:=\{a\in\mathbb{Z}:d\mid a\}\) and define the \(K\)_-approximate variance_
\[\operatorname{Var}(X,K;d):=\sum_{a\in d\mathbb{Z}}[N_{a,\nu}(X)-s_{a}(K) \sigma_{\infty,a,\nu}(X)]^{2}. \tag{2.4}\]
Note that \(T_{a}(n)\) is always real (since \(-1\in(\mathbb{Z}/n\mathbb{Z})^{\times}\)), and \(\nu\) maps into \(\mathbb{R}\), so (2.4) is a reasonable definition. For a large class of weights \(\nu\), we can rewrite \(\operatorname{Var}(X,K;d)\) after expanding the square; see Theorem 2.13 below. Due to the nonnegativity of squares in (2.4), the Selberg sieve can then be used to (conditionally) bound a variant of (2.4) over primes, after observing a certain multiplicative structure over \(d\); see Theorem 2.14 below. The proofs of Theorems 2.13 and 2.14 rest on a fair amount of local calculation.
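For concreteness, here is a direct (and very slow) Python transcription of (2.1)-(2.2), feasible only for tiny moduli and purely illustrative; it also confirms numerically that \(T_{a}(n)\) is real.

```python
# Brute-force transcription of (2.1)-(2.2); the cost is about phi(n) * n^3
# terms per modulus n, so only tiny moduli are feasible.  Illustrative only.
from math import gcd
import cmath

def T(a, n):
    """The complete exponential sum T_a(n) of (2.1)."""
    total = 0j
    for u in range(1, n + 1):
        if gcd(u, n) != 1:
            continue
        for y1 in range(n):
            for y2 in range(n):
                for y3 in range(n):
                    total += cmath.exp(2j * cmath.pi * u
                                       * (y1**3 + y2**3 + y3**3 - a) / n)
    assert abs(total.imag) < 1e-6  # T_a(n) is real: -1 is a unit mod n
    return total.real

def s(a, K):
    """The truncated singular series s_a(K) of (2.2)."""
    return sum(T(a, n) / n**3 for n in range(1, K + 1))

print(s(2, 6))  # a small truncation of the singular series for a = 2
```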
### Non-archimedean work
We first define certain \(p\)-adic densities. For \(a\in\mathbb{Z}\setminus\{0\}\), let
\[\sigma_{p,a}:=\lim_{l\to\infty}(p^{-2l}\cdot\#\{\boldsymbol{y}\in(\mathbb{Z}/ p^{l}\mathbb{Z})^{3}:F_{a}(\boldsymbol{y})=0\}), \tag{2.5}\]
and for integers \(d\geq 1\), let
\[\sigma_{p}(d):=\lim_{l\to\infty}(p^{-5l}\cdot\#\{\boldsymbol{x}\in(\mathbb{Z} /p^{l}\mathbb{Z})^{6}:F(\boldsymbol{x})=0\text{ and }\gcd(p^{l},d)\mid F_{0}(\boldsymbol{y}),F_{0}( \boldsymbol{z})\}). \tag{2.6}\]
(It is routine to show that both limits exist and are finite.) Next, for integers \(n,d\geq 1\), let
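(As an illustration outside the argument: the limit (2.5) can be probed at finite level \(l\). For \(p\nmid 3a\), every solution mod \(p\) is nonsingular, so by Hensel's lemma the normalized level-\(l\) counts are already constant from \(l=1\); the sketch below exhibits this at \(p=5\), \(a=2\).)

```python
# Finite-level approximation p^{-2l} #{y mod p^l : F_0(y) = a} to (2.5).
def sigma_p_level(a, p, l):
    q = p**l
    count = sum(1 for y1 in range(q) for y2 in range(q) for y3 in range(q)
                if (y1**3 + y2**3 + y3**3 - a) % q == 0)
    return count / p**(2 * l)

for l in (1, 2):
    print(l, sigma_p_level(2, 5, l))  # p = 5, a = 2: both levels agree
```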
\[S_{\boldsymbol{0}}^{+}(n;d):=\sum_{m\geq 1:\,\mathrm{lcm}(m,d)=n}\sum_{u\in( \mathbb{Z}/m\mathbb{Z})^{\times}}\sum_{\boldsymbol{x}\in(\mathbb{Z}/n\mathbb{ Z})^{6}:\,d\mid F_{0}(\boldsymbol{y}),F_{0}(\boldsymbol{z})}e_{m}(uF( \boldsymbol{x})); \tag{2.7}\]
then \(S_{\boldsymbol{0}}^{+}(n;d)=\boldsymbol{1}_{d\mid n}\cdot S_{\boldsymbol{0}} ^{+}(n;d)\), and for every positive integer \(q\in d\mathbb{Z}\) we have
\[q^{-6}\sum_{a\in\mathbb{Z}/q\mathbb{Z}}\sum_{\boldsymbol{x}\in(\mathbb{Z}/q \mathbb{Z})^{6}:\,d\mid F_{0}(\boldsymbol{y}),F_{0}(\boldsymbol{z})}e_{q}(aF( \boldsymbol{x}))=\sum_{n\mid q}n^{-6}S_{\boldsymbol{0}}^{+}(n;d). \tag{2.8}\]
Finally, let
\[\mathfrak{S}(d):=\prod_{p\text{ prime}}\sigma_{p}(d)=\sum_{n\geq 1}n^{-6}S_{ \boldsymbol{0}}^{+}(n;d)=\sum_{n\geq 1:\,d\mid n}n^{-6}S_{\boldsymbol{0}}^{+}(n;d). \tag{2.9}\]
(Using (2.8), Proposition 2.2, and Lemma 2.4, it is routine to show that the infinite product and sums in (2.9) all converge absolutely, and equal one another.)
**Proposition 2.2**.: _Let \(n_{1},n_{2}\geq 1\) be coprime integers. Then \(T_{a}(n_{1}n_{2})=T_{a}(n_{1})T_{a}(n_{2})\). If \(d_{1},d_{2}\geq 1\) are integers with \(\gcd(n_{1}d_{1},n_{2}d_{2})=1\), then \(S_{\boldsymbol{0}}^{+}(n_{1}n_{2};d_{1}d_{2})=S_{\boldsymbol{0}}^{+}(n_{1};d_ {1})S_{\boldsymbol{0}}^{+}(n_{2};d_{2})\)._
Proof.: This is routine (by the Chinese remainder theorem). We omit the details.
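(A quick numerical confirmation of the first identity, reusing the brute-force function \(T(a,n)\) from the sketch following Definition 2.1; illustrative only.)

```python
# Check T_a(n1 n2) = T_a(n1) T_a(n2) for coprime n1, n2 (a CRT consequence).
# Assumes T(a, n) from the earlier brute-force sketch is in scope.
a = 2
for n1, n2 in [(2, 3), (3, 4), (2, 5)]:
    lhs, rhs = T(a, n1 * n2), T(a, n1) * T(a, n2)
    assert abs(lhs - rhs) < 1e-3, (n1, n2, lhs, rhs)
print("multiplicativity verified for the listed coprime pairs")
```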
**Lemma 2.3**.: _Let \(n,d\geq 1\) be integers. Suppose that the inequality \(v_{p}(n)>v_{p}(d)\) holds for all primes \(p\mid n\). Then \(|S_{\mathbf{0}}^{+}(n;d)|\leq|S_{\mathbf{0}}^{+}(n;1)|\)._
Proof.: Here \(\{m\geq 1:\operatorname{lcm}(m,d)=n\}=\{n\}\). So by (2.7), the quantity \(d^{2}\cdot S_{\mathbf{0}}^{+}(n;d)\) equals
\[d^{2}\sum_{u\in(\mathbb{Z}/n\mathbb{Z})^{\times}}\sum_{\boldsymbol {y}_{1},\boldsymbol{y}_{2}\in(\mathbb{Z}/n\mathbb{Z})^{3}:d|F_{0}(\boldsymbol {y}_{1}),F_{0}(\boldsymbol{y}_{2})}e_{n}(uF_{0}(\boldsymbol{y}_{1})-uF_{0}( \boldsymbol{y}_{2}))\] \[=\sum_{u\in(\mathbb{Z}/n\mathbb{Z})^{\times}}\sum_{\boldsymbol{y }_{1},\boldsymbol{y}_{2}\in(\mathbb{Z}/n\mathbb{Z})^{3}}e_{n}(uF_{0}( \boldsymbol{y}_{1})-uF_{0}(\boldsymbol{y}_{2}))\sum_{v_{1},v_{2}\in\mathbb{Z }/d\mathbb{Z}}e_{d}(v_{1}F_{0}(\boldsymbol{y}_{1})+v_{2}F_{0}(\boldsymbol{y} _{2}))\] \[=\sum_{v_{1},v_{2}\in\mathbb{Z}/d\mathbb{Z}}\sum_{u\in(\mathbb{Z} /n\mathbb{Z})^{\times}}\prod_{1\leq i\leq 2}\bigg{(}\sum_{\boldsymbol{y}_{i} \in(\mathbb{Z}/n\mathbb{Z})^{3}}e_{n}((-1)^{i-1}uF_{0}(\boldsymbol{y}_{i}))e_ {d}(v_{i}F_{0}(\boldsymbol{y}_{i}))\bigg{)}.\]
But for any \(\epsilon\in\{-1,1\}\) and \(v\in\mathbb{Z}/d\mathbb{Z}\), we have
\[\sum_{u\in(\mathbb{Z}/n\mathbb{Z})^{\times}}\Big{|}\sum_{\boldsymbol{y}\in( \mathbb{Z}/n\mathbb{Z})^{3}}e_{n}(\epsilon uF_{0}(\boldsymbol{y}))e_{d}(vF_{0} (\boldsymbol{y}))\Big{|}^{2}=S_{\mathbf{0}}^{+}(n;1),\]
since the formula \(u\mapsto\epsilon u+(n/d)v\) defines a bijection on \((\mathbb{Z}/n\mathbb{Z})^{\times}\). So by Cauchy (over \(u\in(\mathbb{Z}/n\mathbb{Z})^{\times}\)), we have \(d^{2}\cdot|S_{\mathbf{0}}^{+}(n;d)|\leq\sum_{v_{1},v_{2}\in\mathbb{Z}/d \mathbb{Z}}S_{\mathbf{0}}^{+}(n;1)=d^{2}\cdot S_{\mathbf{0}}^{+}(n;1)\). This suffices.
**Lemma 2.4**.: _Let \(N,d\geq 1\) be integers. Then the following holds:_
\[\sum_{n\in[N,2N)}n^{-6}\,|S_{\mathbf{0}}^{+}(n;d)|\leq d\sum_{ab\in[N,2N):\,a |d}b^{-6}\,|S_{\mathbf{0}}^{+}(b;1)|\ll_{\epsilon}d^{5/3}N^{-2/3+\epsilon}.\]
Proof.: If \(e\mid d\), then trivially, \(|S_{\mathbf{0}}^{+}(e;e)|\leq\sum_{m|e}\phi(m)e^{6}=e^{7}\). Proposition 2.2 and Lemma 2.3 then imply that for each \(n\geq 1\), we have \(|S_{\mathbf{0}}^{+}(n;d)|\leq\max_{ef=n:\,e|d}(e^{7}\,|S_{\mathbf{0}}^{+}(f;1 )|)\). Therefore,
\[\sum_{n\in[N,2N)}n^{-6}\,|S_{\mathbf{0}}^{+}(n;d)|\leq\sum_{e|d}e\sum_{f\in[N /e,2N/e)}f^{-6}\,|S_{\mathbf{0}}^{+}(f;1)|\leq d\sum_{e|d}\,\sum_{f\in[N/e,2N/ e)}f^{-6}\,|S_{\mathbf{0}}^{+}(f;1)|.\]
By [11, Lemma 3.4.1] (a routine variant of [12, Lemma 4.9]), the inner sum over \(f\) is \(\ll_{\epsilon}(N/e)^{-2/3+\epsilon}\). The lemma then follows from the bound \(\sum_{e|d}e^{2/3-\epsilon}\ll d^{2/3}\).
We now prove three results relating \(T_{a}(-)\), from (2.1), to \(S_{\mathbf{0}}^{+}(-;d)\), from (2.7).
**Lemma 2.5**.: _Let \(d,m\geq 1\) be integers with \(d\mid m\). Then_
\[\sum_{n_{1},n_{2}\geq 1:\,\operatorname{lcm}(n_{1},d)=m,\,\operatorname{lcm}(n_{2},d)=m}\frac{1}{m}\sum_{b\in d\mathbb{Z}/m\mathbb{Z}}\frac{T_{b}(n_{1})T_{b}(n_{2})}{(n_{1}n_{2})^{3}}=\frac{S_{\mathbf{0}}^{+}(m;d)}{m^{6}}, \tag{2.10}\]
\[\sum_{n_{1},n_{2}\geq 1:\,\operatorname{lcm}(n_{1},d)=m,\,\operatorname{lcm}(n_{2},d)=m}\left|\frac{1}{m}\sum_{b\in d\mathbb{Z}/m\mathbb{Z}}\frac{T_{b}(n_{1})T_{b}(n_{2})}{(n_{1}n_{2})^{3}}\right|\leq\tau(m)\sum_{rn=m:\,r|d}\frac{|S_{\mathbf{0}}^{+}(n;1)|}{n^{6}}. \tag{2.11}\]
Proof.: By Proposition 2.2, it suffices to prove the lemma when \(m\) is a prime power. So suppose \((d,m)=(p^{e},p^{f})\), where \(p\) is prime and \(0\leq e\leq f\).
_Case 1: \(e<f\)._ Then \(\{n\geq 1:\operatorname{lcm}(n,d)=m\}=\{m\}\). In particular,
\[S_{\mathbf{0}}^{+}(m;d) =\sum_{u\in(\mathbb{Z}/m\mathbb{Z})^{\times}}\sum_{\boldsymbol{x} \in(\mathbb{Z}/m\mathbb{Z})^{6}:\,d|F_{0}(\boldsymbol{y}),F_{0}(\boldsymbol{z} )}e_{m}(uF(\boldsymbol{x}))\] \[=p^{e}\sum_{u\in(\mathbb{Z}/p^{f-\epsilon}\mathbb{Z})^{\times}} \sum_{\boldsymbol{x}\in(\mathbb{Z}/m\mathbb{Z})^{6}:\,d|F_{0}(\boldsymbol{y}),F_ {0}(\boldsymbol{z})}e_{m}(uF(\boldsymbol{x})) \tag{2.12}\]
(since \(e_{m}(ud)\) depends only on \(u\bmod p^{f-e}\)). And if \(b\in d\mathbb{Z}/m\mathbb{Z}\), then
\[\begin{split} T_{b}(m)=\sum_{u\in(\mathbb{Z}/m\mathbb{Z})^{\times}} \sum_{\boldsymbol{y}\in(\mathbb{Z}/m\mathbb{Z})^{3}}e_{m}(uF_{b}(\boldsymbol{ y}))&=\sum_{u\in(\mathbb{Z}/m\mathbb{Z})^{\times}}\sum_{\boldsymbol{y}\in( \mathbb{Z}/m\mathbb{Z})^{3}:\,d|F_{0}(\boldsymbol{y})}e_{m}(uF_{b}(\boldsymbol {y}))\\ &=p^{e}\sum_{u\in(\mathbb{Z}/p^{f-e}\mathbb{Z})^{\times}}\sum_{ \boldsymbol{y}\in(\mathbb{Z}/m\mathbb{Z})^{3}:\,d|F_{0}(\boldsymbol{y})}e_{m}( uF_{b}(\boldsymbol{y}))\end{split} \tag{2.13}\]
(since \(\sum_{u\in(\mathbb{Z}/p^{f}\mathbb{Z})^{\times}}e_{p^{f}}(uF_{b}(\boldsymbol{ y}))=0\) if \(p^{f-1}\nmid F_{b}(\boldsymbol{y})\)), whence
\[T_{b}(m)^{2}=p^{2e}\sum_{u,v\in(\mathbb{Z}/p^{f-e}\mathbb{Z})^{\times}}\sum_{ \boldsymbol{y},\boldsymbol{z}\in(\mathbb{Z}/m\mathbb{Z})^{3}:\,d|F_{0}( \boldsymbol{y}),F_{0}(\boldsymbol{z})}e_{m}(uF_{b}(\boldsymbol{y})+vF_{b}( \boldsymbol{z})).\]
But \(\sum_{b\in d\mathbb{Z}/m\mathbb{Z}}e_{m}(-ub-vb)=p^{f-e}\cdot\boldsymbol{1}_{ p^{f-e}|u+v}\). So summing the previous display over \(b\in d\mathbb{Z}/m\mathbb{Z}\), and then using (2.12), we get \(\sum_{b\in d\mathbb{Z}/m\mathbb{Z}}T_{b}(m)^{2}=p^{f}S_{\boldsymbol{0}}^{+}(m;d)\). This suffices for both (2.10), (2.11). (For (2.11), note that \(|S_{\boldsymbol{0}}^{+}(m;d)|\leq|S_{\boldsymbol{0}}^{+}(m;1)|\) by Lemma 2.3).
_Case 2: \(e=f\)._ Then \(d=m\) and \(\{n\geq 1:\operatorname{lcm}(n,d)=m\}=\{n\geq 1:n\mid m\}\). So
\[S_{\boldsymbol{0}}^{+}(m;d)=\sum_{n|m}\sum_{u\in(\mathbb{Z}/n\mathbb{Z})^{ \times}}\sum_{\begin{subarray}{c}\boldsymbol{x}\in(\mathbb{Z}/m\mathbb{Z})^{6 }:\\ m|F_{0}(\boldsymbol{y}),F_{0}(\boldsymbol{z})\end{subarray}}e_{n}(uF(\boldsymbol{ x}))=m\sum_{\begin{subarray}{c}\boldsymbol{x}\in(\mathbb{Z}/m\mathbb{Z})^{6 }:\\ m|F_{0}(\boldsymbol{y}),F_{0}(\boldsymbol{z})\end{subarray}}1. \tag{2.14}\]
But \(d\mathbb{Z}/m\mathbb{Z}=0\) (in (2.10), (2.11)), and
\[\begin{split}\sum_{n|m}\frac{T_{0}(n)}{n^{3}}&=\sum _{n|m}\sum_{u\in(\mathbb{Z}/n\mathbb{Z})^{\times}}\mathbb{E}_{\boldsymbol{y} \in(\mathbb{Z}/n\mathbb{Z})^{3}}[e_{n}(uF_{0}(\boldsymbol{y}))]\\ &=\sum_{n|m}\sum_{u\in(\mathbb{Z}/n\mathbb{Z})^{\times}}\mathbb{E} _{\boldsymbol{y}\in(\mathbb{Z}/m\mathbb{Z})^{3}}[e_{n}(uF_{0}(\boldsymbol{y}))] \\ &=\sum_{u\in\mathbb{Z}/m\mathbb{Z}}\mathbb{E}_{\boldsymbol{y} \in(\mathbb{Z}/m\mathbb{Z})^{3}}[e_{m}(uF_{0}(\boldsymbol{y}))]=m^{-2}\sum_{ \boldsymbol{y}\in(\mathbb{Z}/m\mathbb{Z})^{3}:\,m|F_{0}(\boldsymbol{y})}1. \end{split} \tag{2.15}\]
Upon squaring (2.15), dividing by \(m\), and using (2.14), we get (2.10). For (2.11), note that by Cauchy, \(|T_{0}(n)|^{2}\leq\phi(n)\sum_{u\in(\mathbb{Z}/n\mathbb{Z})^{\times}}|\sum_{ \boldsymbol{y}\in(\mathbb{Z}/n\mathbb{Z})^{3}}e_{n}(uF_{0}(\boldsymbol{y}))|^{ 2}=\phi(n)S_{\boldsymbol{0}}^{+}(n;1)\), so
\[\frac{1}{m}\sum_{n_{1},n_{2}|m}\frac{|T_{0}(n_{1})T_{0}(n_{2})|}{(n_{1}n_{2})^ {3}}\leq\frac{\tau(m)}{m}\sum_{n|m}\frac{|T_{0}(n)|^{2}}{n^{6}}\leq\frac{\tau(m )}{m}\sum_{n|m}\frac{\phi(n)S_{\boldsymbol{0}}^{+}(n;1)}{n^{6}}. \tag{2.16}\]
This suffices for (2.11) (since \(\phi(n)\leq n\leq m\), and \((m/n)\mid m=d\)).
**Lemma 2.6**.: _Let \(d,m\geq 1\) be integers with \(d\mid m\). Then_
\[\sum_{n\geq 1:\,\operatorname{lcm}(n,d)=m}\frac{1}{m^{3}}\sum_{\boldsymbol{e}\in(\mathbb{Z}/m\mathbb{Z})^{3}:\,F_{0}(\boldsymbol{e})\equiv 0\bmod d}\frac{T_{F_{0}(\boldsymbol{e})}(n)}{n^{3}}=\frac{S_{\boldsymbol{0}}^{+}(m;d)}{m^{6}}, \tag{2.17}\]
\[\sum_{n\geq 1:\,\operatorname{lcm}(n,d)=m}\biggl{|}\frac{1}{m^{3}}\sum_{\boldsymbol{e}\in(\mathbb{Z}/m\mathbb{Z})^{3}:\,F_{0}(\boldsymbol{e})\equiv 0\bmod d}\frac{T_{F_{0}(\boldsymbol{e})}(n)}{n^{3}}\biggr{|}\leq d\cdot\tau(m)\sum_{rn=m:\,r|d}\frac{|S_{\boldsymbol{0}}^{+}(n;1)|}{n^{6}}. \tag{2.18}\]
Proof.: Again, we may assume that \((d,m)=(p^{e},p^{f})\), where \(p\) is prime and \(0\leq e\leq f\).
_Case 1: \(e<f\)._ Then \(\{n\geq 1:\operatorname{lcm}(n,d)=m\}=\{m\}\). But by (2.13),
\[\sum_{\boldsymbol{e}\in(\mathbb{Z}/m\mathbb{Z})^{3}:\,F_{0}(\boldsymbol{e}) \equiv 0\bmod d}T_{F_{0}(\boldsymbol{e})}(m)=\sum_{u\in(\mathbb{Z}/m\mathbb{Z})^{ \times}}\sum_{\boldsymbol{e},\boldsymbol{y}\in(\mathbb{Z}/m\mathbb{Z})^{3}:\,d|F_ {0}(\boldsymbol{e}),F_{0}(\boldsymbol{y})}e_{m}(u(F_{0}(\boldsymbol{y})-F_{0}( \boldsymbol{e}))),\]
which equals \(S_{\mathbf{0}}^{+}(m;d)\) by (2.12). Both (2.17), (2.18) follow (in the present case).
_Case 2: \(e=f\)._ Then \(d=m\) and \(\{n\geq 1:\operatorname{lcm}(n,d)=m\}=\{n\geq 1:n\mid m\}\). So (2.17) follows from (2.15) and (2.14). Also, (2.18) follows from (2.16) (since \(T_{0}(n)=T_{0}(n)T_{0}(1)\) for all \(n\geq 1\), and we have \(\#\{\boldsymbol{e}\in(\mathbb{Z}/m\mathbb{Z})^{3}:F_{0}(\boldsymbol{e})\equiv 0 \bmod d\}\leq m^{3}=m^{2}d\) trivially).
**Lemma 2.7**.: _Suppose \(n_{1},n_{2},d\geq 1\) are integers with \(\operatorname{lcm}(n_{1},d)\neq\operatorname{lcm}(n_{2},d)\). Then_
\[\sum_{b\in d\mathbb{Z}/n_{1}n_{2}d\mathbb{Z}}T_{b}(n_{1})T_{b}(n_{2})=0. \tag{2.19}\]
Proof.: We may assume \(\operatorname{lcm}(n_{1},d)<\operatorname{lcm}(n_{2},d)\). Let \(r:=\operatorname{lcm}(n_{1},d)\); then \(n_{2}\nmid r\). So \(\sum_{a\in\gcd(n_{2},r)\mathbb{Z}/n_{2}\mathbb{Z}}e_{n_{2}}(-ua)=0\) for all \(u\in(\mathbb{Z}/n_{2}\mathbb{Z})^{\times}\). Thus \(\sum_{b\in\mathbb{Z}/n_{1}n_{2}d\mathbb{Z}:\,b\equiv c\bmod r}T_{b}(n_{2})=0\) for all \(c\in\mathbb{Z}/r\mathbb{Z}\). Multiplying by \(T_{c}(n_{1})\) and summing over \(c\in d\mathbb{Z}/r\mathbb{Z}\), we get (2.19).
**Proposition 2.8**.: _Let \(d,K\geq 1\) be integers. Then_
\[\sum_{n_{1},n_{2}\leq K}\frac{1}{n_{1}n_{2}d}\sum_{b\in d\mathbb{Z}/n_{1}n_{2}d\mathbb{Z}}\frac{T_{b}(n_{1})T_{b}(n_{2})}{(n_{1}n_{2})^{3}}=\mathfrak{S}(d)+O_{\epsilon}(d^{5/3}K^{-2/3+\epsilon}), \tag{2.20}\]
\[\sum_{n\leq K}\frac{1}{(nd)^{3}}\sum_{\boldsymbol{e}\in(\mathbb{Z}/nd\mathbb{Z})^{3}:\,F_{0}(\boldsymbol{e})\equiv 0\bmod d}\frac{T_{F_{0}(\boldsymbol{e})}(n)}{n^{3}}=\mathfrak{S}(d)+O_{\epsilon}(d^{5/3}K^{-2/3+\epsilon}). \tag{2.21}\]
Proof.: If \(n_{1},n_{2}\geq 1\) are integers with \(\operatorname{lcm}(n_{1},d)=\operatorname{lcm}(n_{2},d)=m\), say, then \(d\mid m\) and
\[\frac{1}{n_{1}n_{2}d}\sum_{b\in d\mathbb{Z}/n_{1}n_{2}d\mathbb{Z}}\frac{T_{b}( n_{1})T_{b}(n_{2})}{(n_{1}n_{2})^{3}}=\frac{1}{m}\sum_{b\in d\mathbb{Z}/m \mathbb{Z}}\frac{T_{b}(n_{1})T_{b}(n_{2})}{(n_{1}n_{2})^{3}}.\]
By Lemmas 2.7, 2.5, and 2.4 (and eq. (2.9)), it follows that the left-hand side of (2.20) equals
\[\sum_{m\geq 1:\,d\mid m}\sum_{\begin{subarray}{c}n_{1},n_{2}\leq K :\\ \operatorname{lcm}(n_{1},d)=m,\,\operatorname{lcm}(n_{2},d)=m\end{subarray}} \frac{1}{m}\sum_{b\in d\mathbb{Z}/m\mathbb{Z}}\frac{T_{b}(n_{1})T_{b}(n_{2}) }{(n_{1}n_{2})^{3}}\] \[=\sum_{m\leq K:\,d\mid m}\frac{S_{\mathbf{0}}^{+}(m;d)}{m^{6}}+ \sum_{m>K:\,d\mid m}\tau(m)\sum_{rn=m:\,r\mid d}\frac{O(|S_{\mathbf{0}}^{+}(n;1 )|)}{n^{6}}=\mathfrak{S}(d)+O_{\epsilon}(d^{5/3}K^{-2/3+\epsilon}).\]
The second part, (2.21), is similar, but simpler. If \(n\geq 1\) and \(\operatorname{lcm}(n,d)=m\), say, then
\[\frac{1}{(nd)^{3}}\sum_{\boldsymbol{e}\in(\mathbb{Z}/nd\mathbb{Z})^{3}:\,F_{0 }(\boldsymbol{e})\equiv 0\bmod d}\frac{T_{F_{0}(\boldsymbol{e})}(n)}{n^{3}}= \frac{1}{m^{3}}\sum_{\boldsymbol{e}\in(\mathbb{Z}/m\mathbb{Z})^{3}:\,F_{0}( \boldsymbol{e})\equiv 0\bmod d}\frac{T_{F_{0}(\boldsymbol{e})}(n)}{n^{3}}.\]
By Lemmas 2.6 and 2.4 (and eq. (2.9)), the left-hand side of (2.21) thus equals
\[\sum_{m\geq 1:\,d\mid m}\sum_{\begin{subarray}{c}n\leq K:\\ \operatorname{lcm}(n,d)=m\end{subarray}}\frac{1}{m^{3}}\sum_{\boldsymbol{e}\in( \mathbb{Z}/m\mathbb{Z})^{3}:\,F_{0}(\boldsymbol{e})\equiv 0\bmod d}\frac{T_{F_{0}( \boldsymbol{e})}(n)}{n^{3}}\] \[=\sum_{m\leq K:\,d\mid m}\frac{S_{\mathbf{0}}^{+}(m;d)}{m^{6}}+ \sum_{m>K:\,d\mid m}d\cdot\tau(m)\sum_{rn=m:\,r\mid d}\frac{O(|S_{\mathbf{0}}^{+} (n;1)|)}{n^{6}}=\mathfrak{S}(d)+O_{\epsilon}(d^{5/3}K^{-2/3+\epsilon}).\]
This completes the proof.
### Archimedean work
Let \(\nu\in C_{c}^{\infty}(\mathbb{R}^{3})\), and suppose \(\mathbf{0}\notin\operatorname{Supp}\nu\). Given \(X\in\mathbb{R}_{>0}\) and \((\boldsymbol{y},a)\in\mathbb{R}^{3}\times\mathbb{R}\), write \(\tilde{\boldsymbol{y}}:=\boldsymbol{y}/X\) and \(\tilde{a}:=a/X^{3}\). Also, for all \(a\in\mathbb{R}\), let
\[\sigma_{\infty,a,\nu}(X):=\lim_{\epsilon\to 0}\,(2\epsilon)^{-1}\int_{|F_{a}( \boldsymbol{y})/X^{3}|\leq\epsilon}d(\boldsymbol{y}/X)\,\nu(\boldsymbol{y}/X); \tag{2.22}\]
this is a real analog of the \(p\)-adic density \(\sigma_{p,a}\) from (2.5). Note that by definition,
\[\sigma_{\infty,a,\nu}(X)=\lim_{\epsilon\to 0}\,(2\epsilon)^{-1}\int_{|F_{0}( \tilde{\boldsymbol{y}})-\tilde{a}|\leq\epsilon}d\tilde{\boldsymbol{y}}\,\nu( \tilde{\boldsymbol{y}})=\sigma_{\infty,\tilde{a},\nu}(1). \tag{2.23}\]
For convenience (when working with \(\sigma_{\infty,a,\nu}(X)\)), we now observe the following:
_Observation 2.9_.: Suppose \(|y_{1}|\geq\delta>0\) for all \(\boldsymbol{y}\in\operatorname{Supp}\nu\). Let \((a,X)\in\mathbb{R}\times\mathbb{R}_{>0}\). Then a change of variables in (2.23) from \(\tilde{y}_{1}\) to \(F_{0}:=F_{0}(\tilde{\boldsymbol{y}})\) proves
\[\sigma_{\infty,a,\nu}(X)=\int_{\mathbb{R}^{2}}d\tilde{y}_{2}\,d\tilde{y}_{3}\, \nu(\tilde{\boldsymbol{y}})\cdot|\partial F_{0}/\partial\tilde{y}_{1}|^{-1}, \tag{2.24}\]
where \(F_{0},\tilde{y}_{1}\) are constrained by the equation \(F_{0}=\tilde{a}\) (so that \(\tilde{y}_{1}\) is determined by \(\tilde{y}_{2},\tilde{y}_{3}\)), and where \(\partial F_{0}/\partial\tilde{y}_{1}=3\tilde{y}_{1}^{2}\geq 3\delta^{2}>0\) over the support of the integrand.
Proof.: Cf. [11, proof of Lemma 11].
At least in the absence of better surface coordinates, the earlier "\(\epsilon\)-thickening" still provides greater intuition, while the surface integral allows for effortless rigor.
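To make (2.24) concrete, here is a crude midpoint-rule quadrature in Python for a hypothetical smooth bump weight \(\nu\) supported on \([1,2]^{3}\) (so that \(|y_{1}|\geq\delta=1\) on \(\operatorname{Supp}\nu\), as Observation 2.9 requires). This is an illustration only, not part of any proof.

```python
# Midpoint-rule approximation of the surface integral (2.24) at a given
# value of tilde-a = a / X^3, for a hypothetical bump weight on [1,2]^3.
import math

def bump(t):  # smooth, compactly supported on (1, 2)
    return math.exp(-1.0 / ((t - 1.0) * (2.0 - t))) if 1.0 < t < 2.0 else 0.0

def nu(y1, y2, y3):
    return bump(y1) * bump(y2) * bump(y3)

def sigma_inf(ta, grid=300):
    """Integrate nu / (3 y1^2) over the surface y1^3 + y2^3 + y3^3 = ta."""
    h, total = 1.0 / grid, 0.0
    for i in range(grid):
        for j in range(grid):
            y2 = 1.0 + (i + 0.5) * h
            y3 = 1.0 + (j + 0.5) * h
            c = ta - y2**3 - y3**3
            if c <= 0.0:
                continue
            y1 = c ** (1.0 / 3.0)  # the unique positive cube root
            total += nu(y1, y2, y3) / (3.0 * y1**2) * h * h
    return total

print(sigma_inf(7.0))  # the surface F_0 = 7 meets the support of nu
```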
We now prove three results on real densities. Let \(B(\nu)\) and \(\|\nu\|_{k,\infty}\) be as in §1.1.
**Proposition 2.10**.: _For integers \(k\geq 0\), we have \(\partial_{\tilde{a}}^{k}[\sigma_{\infty,a,\nu}(X)]\ll_{k}\|\nu\|_{k,\infty}B( \nu)^{A_{0}+A_{1}k}\) (for some constants \(A_{0},A_{1}\in[1,10]\)), uniformly over \((a,X)\in\mathbb{R}\times\mathbb{R}_{>0}\)._
Proof.: _Case 1: \(|y_{1}|\geq 1/3B(\nu)\) for all \(\boldsymbol{y}\in\operatorname{Supp}\nu\)._ Consider the surface integral in (2.24). The integrand vanishes unless \(\tilde{a}\ll B(\nu)^{3}\) and \(\tilde{y}_{2},\tilde{y}_{3}\ll B(\nu)\). Fix \(\tilde{y}_{2},\tilde{y}_{3}\ll B(\nu)\), and let \(\tilde{y}_{1}\) vary with \(\tilde{a}\) according to \(F_{0}=\tilde{a}\). Then \(\partial_{\tilde{a}}[\tilde{y}_{1}]=(3\tilde{y}_{1}^{2})^{-1}\ll B(\nu)^{2}\). Now repeatedly apply \(\partial_{\tilde{a}}\) to the integrand (using Leibniz and the chain rule). This gives \(\partial_{\tilde{a}}^{k}[\sigma_{\infty,a,\nu}(X)]\ll_{k}\|\nu\|_{k,\infty}B( \nu)^{4+3k}\).
_Case 2: The general case._ Fix \(v_{0}\in C_{c}^{\infty}(\mathbb{R}^{3})\), supported on \([-1,1]^{3}\), with \(\int_{\mathbb{R}^{3}}d\boldsymbol{t}\,v_{0}(\boldsymbol{t})=1\). Let \(\nu_{\boldsymbol{t}}(\boldsymbol{y}):=v_{0}(\boldsymbol{t}-3B(\nu)\boldsymbol{y})\cdot\nu(\boldsymbol{y})\), so that \(\nu(\boldsymbol{y})=\int_{\boldsymbol{t}\in\mathbb{R}^{3}}d\boldsymbol{t}\,\nu_{\boldsymbol{t}}(\boldsymbol{y})\) for all \(\boldsymbol{y}\in\mathbb{R}^{3}\). For each \(\boldsymbol{t}\in\mathbb{R}^{3}\) and \(\boldsymbol{y}\in\operatorname{Supp}\nu_{\boldsymbol{t}}\), we have \(\boldsymbol{y}\in\operatorname{Supp}\nu\), so \(\|\boldsymbol{t}\|\leq 1+3B(\nu)\|\boldsymbol{y}\|\leq 4B(\nu)^{2}\) and \(\|\boldsymbol{y}\|\geq 1/B(\nu)\). Since \(\operatorname{Supp}\nu_{\boldsymbol{t}}\) has taxicab diameter \(\leq 2/3B(\nu)\), there exists \(j=j_{\boldsymbol{t}}\in\{1,2,3\}\) such that \(|y_{j}|\geq 1/3B(\nu)\) holds for all \(\boldsymbol{y}\in\operatorname{Supp}\nu_{\boldsymbol{t}}\). Thus \(\partial_{\tilde{a}}^{k}[\sigma_{\infty,a,\nu_{\boldsymbol{t}}}(X)]\ll_{k}\|\nu_{\boldsymbol{t}}\|_{k,\infty}B(\nu_{\boldsymbol{t}})^{4+3k}\ll_{k}\|\nu\|_{k,\infty}B(\nu)^{4+4k}\) for all \(\boldsymbol{t}\in\mathbb{R}^{3}\). Integrating over \(\|\boldsymbol{t}\|\leq 4B(\nu)^{2}\), we conclude that \(\partial_{\tilde{a}}^{k}[\sigma_{\infty,a,\nu}(X)]\ll_{k}\|\nu\|_{k,\infty}B(\nu)^{10+4k}\). Thus we may take \(A_{0}=10\), \(A_{1}=4\).
**Proposition 2.11**.: _Let \(X\in\mathbb{R}_{>0}\). The "pure \(L^{2}\) moment" \(\int_{a\in\mathbb{R}}d\tilde{a}\,\sigma_{\infty,a,\nu}(X)^{2}\) and the "mixed \(L^{1}\) moment" \(\int_{\tilde{\boldsymbol{z}}\in\mathbb{R}^{3}}d\tilde{\boldsymbol{z}}\,\nu( \tilde{\boldsymbol{z}})\sigma_{\infty,F_{0}(\boldsymbol{z}),\nu}(X)\) both equal \(\sigma_{\infty,\nu^{\otimes 2}}\) (defined via (1.9))._
Proof when \(|y_{1}|>0\) for all \(\boldsymbol{y}\in\operatorname{Supp}\nu\).: First, \(F_{a}=F_{0}-a\), so \(\int_{a\in\mathbb{R}}d\tilde{a}\,\sigma_{\infty,a,\nu}(X)^{2}\) expands (via (2.24), and a change of variables from \(a\) to \(\tilde{y}_{1}\)) to
\[\int_{\mathbb{R}^{4}}d\tilde{y}_{2}\,d\tilde{y}_{3}\,d\tilde{z}_{2}\,d\tilde{z}_{3}\int_{\tilde{y}_{1}\in\mathbb{R}}dF_{0}(\tilde{\boldsymbol{y}})\,\frac{\nu(\tilde{\boldsymbol{y}})\nu(\tilde{\boldsymbol{z}})}{\partial_{\tilde{y}_{1}}F_{0}|_{\tilde{\boldsymbol{y}}}}(\partial_{\tilde{z}_{1}}F_{0}|_{\tilde{\boldsymbol{z}}})^{-1}\Big{|}_{F_{0}(\tilde{\boldsymbol{y}})=F_{0}(\tilde{\boldsymbol{z}})},\]
which simplifies to \(\int_{\mathbb{R}^{5}}d\tilde{y}_{2}\,d\tilde{y}_{3}\,d\tilde{z}_{2}\,d\tilde{z}_{3 }\,d\tilde{y}_{1}\,\nu^{\otimes 2}(\tilde{\mathbf{x}})\cdot(\partial_{\tilde{z}_{1}}F_{0}|_{ \tilde{\mathbf{z}}})^{-1}|_{F(\tilde{\mathbf{x}})=0}\), which equals \(\sigma_{\infty,\nu^{\otimes 2}}\) by an analog of (2.24). Second, \(F_{F_{0}(\mathbf{z})}=F_{0}(\mathbf{y})-F_{0}(\mathbf{z})\), so by (2.24),
\[\int_{\tilde{\mathbf{z}}\in\mathbb{R}^{3}}d\tilde{\mathbf{z}}\,\nu(\tilde{\mathbf{z}}) \sigma_{\infty,F_{0}(\mathbf{z}),\nu}(X)=\int_{\mathbb{R}^{3}\times\mathbb{R}^{2} }d\tilde{\mathbf{z}}\,d\tilde{y}_{2}\,d\tilde{y}_{3}\,\nu(\tilde{\mathbf{y}})\nu( \tilde{\mathbf{z}})\cdot(\partial_{\tilde{y}_{1}}F_{0}|_{\tilde{\mathbf{y}}})^{-1}|_{F _{0}(\tilde{\mathbf{y}})=F_{0}(\tilde{\mathbf{z}})},\]
which again simplifies to \(\sigma_{\infty,\nu^{\otimes 2}}\).
Proof in general.: Generalize to a "bilinear" statement (involving _two_ weights \(\nu_{1},\nu_{2}\), rather than just one); then reduce the bilinear statement to a surface integral computation (based on a general version of (2.24)), after taking suitable partitions of unity.
**Proposition 2.12** (Cf. [19, proof of Lemma 3.1]).: _Let \(X,N\geq 1\) be integers. Let \(b\in\mathbb{Z}/N\mathbb{Z}\) and \(\mathbf{e}\in(\mathbb{Z}/N\mathbb{Z})^{3}\). Then for integers \(j\geq 4\), we have (for some constants \(A_{2},\ldots,A_{5}\in[1,30]\))_
\[\sum_{a\in\mathbb{Z}:\,a\equiv b\bmod N}\sigma_{\infty,a,\nu}(X)^{2}=\frac{X^{3}\sigma_{\infty,\nu^{\otimes 2}}}{N}+\frac{O_{j}(\|\nu\|_{j,\infty}^{2}B(\nu)^{A_{2}+A_{3}j})}{(X^{3}/N)^{j-1}}, \tag{2.25}\]
\[\sum_{\boldsymbol{z}\in\mathbb{Z}^{3}:\,\boldsymbol{z}\equiv\boldsymbol{e}\bmod N}\nu(\boldsymbol{z}/X)\sigma_{\infty,F_{0}(\boldsymbol{z}),\nu}(X)=\frac{X^{3}\sigma_{\infty,\nu^{\otimes 2}}}{N^{3}}+\frac{O_{j}(\|\nu\|_{j,\infty}^{2}B(\nu)^{A_{4}+A_{5}j})}{(X/N)^{j-3}}. \tag{2.26}\]
Proof sketch.: Poisson summation, together with Proposition 2.11, gives
\[\sum_{a\equiv b\bmod N}\sigma_{\infty,a,\nu}(X)^{2}=\frac{X^{3}\sigma_{\infty,\nu^{\otimes 2}}}{N}+\sum_{c\neq 0}\frac{O(1)}{N}\left|\int_{a\in\mathbb{R}} da\,\sigma_{\infty,a,\nu}(X)^{2}e(-c\cdot a/N)\right|.\]
To bound the \(c\neq 0\) terms, we plug in \(a=X^{3}\tilde{a}\), integrate by parts \(j\geq 2\) times in \(\tilde{a}\), and invoke Proposition 2.10. (Note that \(\sigma_{\infty,a,\nu}(X)=0\) unless \(\tilde{a}\ll B(\nu)^{3}\).) The estimate (2.25) follows, with \(A_{2}=3+2A_{0}\), \(A_{3}=A_{1}\).
The estimate (2.26) is similar. Let \(\boldsymbol{c}\in\mathbb{Z}^{3}\setminus\{\mathbf{0}\}\). Choose \(i\in\{1,2,3\}\) with \(|c_{i}|=\|\boldsymbol{c}\|\). Integrating by parts \(j\geq 4\) times in \(\tilde{z}_{i}=z_{i}/X\) gives (via Proposition 2.10 with \(a=F_{0}(\boldsymbol{z})\))
\[\frac{1}{N^{3}}\left|\int_{\mathbf{z}\in\mathbb{R}^{3}}d\mathbf{z}\,\nu(\mathbf{z}/X)\sigma _{\infty,F_{0}(\mathbf{z}),\nu}(X)e(-\mathbf{c}\cdot\mathbf{z}/N)\right|\ll_{j}\frac{\|\nu \|_{j,\infty}^{2}B(\nu)^{A_{4}+A_{5}j}}{\|\mathbf{c}\|^{j}(X/N)^{j-3}},\]
with \(A_{4}=3+A_{0}\), \(A_{5}=A_{1}+2\) (since a factor of \(\partial_{\tilde{z}_{i}}F_{0}(\tilde{\mathbf{z}})=3\tilde{z}_{i}^{2}\) appears each time we differentiate \(\sigma_{\infty,F_{0}(\mathbf{z}),\nu}(X)\)). Poisson summation and Proposition 2.11 then give (2.26).
### Final calculations
We are finally ready to establish the main results of §2.
**Theorem 2.13**.: _Let \(\nu\in C_{c}^{\infty}(\mathbb{R}^{3})\), and suppose \(\mathbf{0}\notin\operatorname{Supp}\nu\). Let \(X,K,d\geq 1\) be integers with \(Kd\leq X^{9/10}\). Let \(\mathfrak{S}(d)\) be defined as in (2.9) above. Then \(\operatorname{Var}(X,K;d)\) equals_
\[[N_{\nu^{\otimes 2}}(X;d)-\mathfrak{S}(d)\cdot\sigma_{\infty,\nu^{\otimes 2}}X^{3}]+O _{\epsilon}(d^{5/3}K^{-2/3+\epsilon}\cdot\sigma_{\infty,\nu^{\otimes 2}}X^{3})+O (\|\nu\|_{100,\infty}^{2}B(\nu)^{5000}).\]
Proof.: Squaring out (2.4) yields \(\operatorname{Var}(X,K;d)=\Sigma_{1}-2\Sigma_{2}+\Sigma_{3}\), where
\[\Sigma_{1}:=\sum_{a\in d\mathbb{Z}}N_{a,\nu}(X)^{2},\quad\Sigma_{2}:=\sum_{a \in d\mathbb{Z}}N_{a,\nu}(X)s_{a}(K)\sigma_{\infty,a,\nu}(X),\quad\Sigma_{3}:= \sum_{a\in d\mathbb{Z}}[s_{a}(K)\sigma_{\infty,a,\nu}(X)]^{2}.\]
By (1.2), (1.4), (1.8), and (2.3), the \(\ell^{2}\) moment \(\Sigma_{1}\) equals \(N_{\nu^{\otimes 2}}(X;d)\). Next, write
\[\Sigma_{3}=\sum_{n_{1},n_{2}\leq K}\sum_{b\in d\mathbb{Z}/n_{1}n_{2}d\mathbb{Z}} (n_{1}n_{2})^{-3}T_{b}(n_{1})T_{b}(n_{2})\sum_{a\equiv b\bmod n_{1}n_{2}d} \sigma_{\infty,a,\nu}(X)^{2}\]
by plugging in (2.2) and then grouping terms by \(a\bmod n_{1}n_{2}d\); and write
\[\Sigma_{2} =\sum_{\mathbf{z}\in\mathbb{Z}^{3}:\,F_{0}(\mathbf{z})\equiv 0\bmod d}\nu( \mathbf{z}/X)s_{F_{0}(\mathbf{z})}(K)\sigma_{\infty,F_{0}(\mathbf{z}),\nu}(X)\] \[=\sum_{n\leq K}\sum_{\mathbf{e}\in(\mathbb{Z}/nd\mathbb{Z})^{3}:\,F_{ 0}(\mathbf{e})\equiv 0\bmod d}n^{-3}T_{F_{0}(\mathbf{e})}(n)\sum_{\mathbf{z}\equiv\mathbf{e} \bmod nd}\nu(\mathbf{z}/X)\sigma_{\infty,F_{0}(\mathbf{z}),\nu}(X)\]
by expanding \(N_{a,\nu}(X)\), plugging in (2.2), and grouping terms by \(\mathbf{z}\bmod nd\). Then by Proposition 2.12 and the trivial bound \(|T_{b}(n)|\leq n^{4}\), we have (for \(j\geq 4\))
\[\Sigma_{3}-\sum_{n_{1},n_{2}\leq K}\sum_{b\in d\mathbb{Z}/n_{1}n_{2}d\mathbb{ Z}}(n_{1}n_{2})^{-3}T_{b}(n_{1})T_{b}(n_{2})\cdot\frac{X^{3}\sigma_{\infty, \nu^{\otimes 2}}}{n_{1}n_{2}d}\ll K^{6}\cdot\frac{O_{j}(\|\nu\|_{j,\infty}^{2}B( \nu)^{A_{2}+A_{3}j})}{(X^{3}/K^{2}d)^{j-1}}\]
by (2.25) (with \(N=n_{1}n_{2}d\)), and
\[\Sigma_{2}-\sum_{n\leq K}\sum_{\mathbf{e}\in(\mathbb{Z}/nd\mathbb{Z})^{3}:\,F_{0}( \mathbf{e})\equiv 0\bmod d}n^{-3}T_{F_{0}(\mathbf{e})}(n)\cdot\frac{X^{3}\sigma_{ \infty,\nu^{\otimes 2}}}{(nd)^{3}}\ll K^{2}(Kd)^{3}\cdot\frac{O_{j}(\|\nu\|_{j, \infty}^{2}B(\nu)^{A_{4}+A_{5}j})}{(X/Kd)^{j-3}}\]
by (2.26) (with \(N=nd\)), where \(A_{2},\ldots,A_{5}\leq 30\). Since \(X,K,d,B(\nu)\geq 1\) and \(Kd\leq X^{9/10}\) (and thus \(K^{6}\), \(K^{2}d\), \(K^{2}(Kd)^{3}\) are at most \(X^{6}\), \(X^{2}\), \(X^{5}\), respectively), Theorem 2.13 now follows upon taking \(j=100\) above, and plugging in (2.20) and (2.21) from Proposition 2.8.
For Theorem 2.14, let \(\|\nu\|_{L^{2}(\mathbb{R}^{3})}^{2}:=\int_{\mathbf{y}\in\mathbb{R}^{3}}d\mathbf{y}\, \nu(\mathbf{y})^{2}\). Call a function \(\nu\colon\mathbb{R}^{3}\to\mathbb{R}\)_very clean_ if
\[(\operatorname{Supp}\nu)\cap\{\mathbf{y}\in\mathbb{R}^{3}:y_{1}y_{2}y_{3}(y_{1}+y_ {2})(y_{1}+y_{3})(y_{2}+y_{3})=0\}=\emptyset, \tag{2.27}\]
and \(\operatorname{LinAut}(F_{0})\)_-symmetric_ if
\[\nu(y_{1},y_{2},y_{3})=\nu(y_{\sigma(1)},y_{\sigma(2)},y_{\sigma(3)})\quad \text{for all $\sigma\in S_{3}$}. \tag{2.28}\]
**Theorem 2.14**.: _Let \(\xi\in\{0,1\}\). Fix \(\delta\in(0,9/10)\) and an integer \(k\geq 5000\), and assume (1.7) for \(\xi\). Let \(\nu\in C_{c}^{\infty}(\mathbb{R}^{3})\), and assume \(\nu\) satisfies (2.27) and (2.28). Let \(X,K\geq 2\) be integers with \(K\leq X^{9/10-\delta}\). Fix \(\hbar\in(0,9\delta/20]\), and let \(P\) be the product of all primes \(p<X^{\xi\hbar}\). Then_
\[\sum_{a\in\mathbb{Z}:\,\gcd(a,P)=1}[N_{a,\nu}(X)-s_{a}(K)\sigma_{\infty,a,\nu} (X)]^{2}\ll\frac{X^{3}\|\nu\|_{L^{2}(\mathbb{R}^{3})}^{2}}{(\log X)^{\xi}}+ \frac{O(X^{3}\|\nu\|_{k,\infty}^{2}B(\nu)^{k})}{\min(X^{\delta/11},K^{2/3}X^{- 3\delta})}.\]
Proof.: By Propositions 2.11 and 2.10, \(\sigma_{\infty,\nu^{\otimes 2}}\ll B(\nu)^{3}\cdot\|\nu\|_{0,\infty}^{2}B(\nu)^{8}\). Also, \(\|\nu^{\otimes 2}\|_{k,\infty}\leq\|\nu\|_{k,\infty}^{2}\) by (1.10), and \(B(\nu^{\otimes 2})\leq B(\nu)\) since \(\operatorname{Supp}(\nu^{\otimes 2})\subseteq(\operatorname{Supp}\nu)^{2}\). Now let \(d\leq X^{\xi\delta}\) (so that \(Kd\leq X^{9/10}\)). By Theorem 2.13, (1.7), and (1.5), we conclude that (since \(k\geq 5000\))
\[\operatorname{Var}(X,K;d)=\sum_{\text{special}\,\,\mathbf{x}\in\mathbb{Z}^{6}:\,d|F_ {0}(\mathbf{y}),F_{0}(\mathbf{z})}\nu^{\otimes 2}(\mathbf{x}/X)+O(X^{3-\delta}+X^{3+2\delta}K^{-2/3}) \cdot\|\nu\|_{k,\infty}^{2}B(\nu)^{k}.\]
Since \(\nu\) satisfies (2.27) and (2.28), a short calculation using (1.3) shows that
\[\sum_{\text{special}\,\,\mathbf{x}\in\mathbb{Z}^{6}:\,d|F_{0}(\mathbf{y}),F_{0}(\mathbf{z})} \nu^{\otimes 2}(\mathbf{x}/X)=3!\sum_{\mathbf{y}\in\mathbb{Z}^{3}:\,d|F_{0}(\mathbf{y})}\nu(\mathbf{y} /X)^{2}+O(\|\nu\|_{0,\infty}^{2}\cdot[B(\nu)X]^{2}).\]
But \(\nu\in C_{c}^{\infty}(\mathbb{R}^{3})\), so by Poisson summation (cf. the proof of (2.26) in Proposition 2.12),
\[\sum_{\mathbf{y}\in\mathbb{Z}^{3}:\,d|F_{0}(\mathbf{y})}\nu(\mathbf{y}/X)^{2}=\sum_{\mathbf{e} \in(\mathbb{Z}/d\mathbb{Z})^{3}:\,F_{0}(\mathbf{e})\equiv 0\bmod d}\left(\frac{X^{3}\|\nu\|_{L^{2}( \mathbb{R}^{3})}^{2}}{d^{3}}+\frac{O_{k}(\|\nu\|_{k,\infty}^{2}B(\nu)^{3})}{(X/d )^{k-3}}\right),\]
since \(k\geq 4\). For integers \(n\geq 1\), let
\[g(n):=n^{-3}\cdot|\{\boldsymbol{e}\in(\mathbb{Z}/n\mathbb{Z})^{3}:F_{0}( \boldsymbol{e})\equiv 0\bmod n\}|.\]
Then our work above (in the last several displays) implies
\[\operatorname{Var}(X,K;d)=3!g(d)X^{3}\|\nu\|_{L^{2}(\mathbb{R}^{3})}^{2}+r_{d}, \tag{2.29}\]
where \(r_{d}\ll(X^{3-\delta}+X^{3+2\delta}K^{-2/3}+X^{2}+d^{3}/(X/d)^{k-3})\cdot\| \nu\|_{k,\infty}^{2}B(\nu)^{k}\). Since \(d\leq X^{9/10}\) and \(k\geq 100\), we in fact have
\[r_{d}\ll(X^{3-\delta}+X^{3+2\delta}K^{-2/3})\cdot\|\nu\|_{k,\infty}^{2}B(\nu) ^{k}. \tag{2.30}\]
But \(g\) is multiplicative in \(n\), and we have \(g(n)\in(0,1)\) for all \(n\geq 2\) (because, for instance, \(F_{0}(\boldsymbol{0})=0\) and \(F_{0}(1,0,0)=1\)). Directly if \(\xi=0\), and by the Selberg sieve [13, Theorem 6.4 and (6.80)] if \(\xi=1\), we conclude from (2.29) that
\[\sum_{a\in\mathbb{Z}:\,\gcd(a,P)=1}[N_{a,\nu}(X)-s_{a}(K)\sigma_{\infty,a,\nu }(X)]^{2}\leq\frac{3!X^{3}\|\nu\|_{L^{2}(\mathbb{R}^{3})}^{2}}{H^{\xi}}+R(P) \tag{2.31}\]
(upon recalling that \(P=\prod_{p<X^{\xi\hbar}}p\)), where
\[H:=\sum_{d<X^{\hbar}}\mu(d)^{2}\prod_{p|d}\frac{g(p)}{1-g(p)}\geq 1,\]
and where (using (2.30) to bound \(r_{d}\))
\[R(P)\ll_{\epsilon}\sum_{d\leq X^{2\xi\hbar}}d^{\epsilon}|r_{d}|\ll X^{\epsilon}(X^{3-\delta/10}+X^{3+2.9\delta}K^{-2/3})\cdot\|\nu\|_{k,\infty}^{2}B(\nu)^{k}.\]
But the Hasse bound for elliptic curves implies \(g(p)=p^{-1}+O(p^{-3/2})\), so \(\sum_{p\leq x}g(p)\log p=\log x+O(1)\) (for all real \(x\geq 1\), say) and \(\sum_{p\text{ prime}}g(p)^{2}\log p<\infty\). By [13, §6.6, derivation of (6.85) using Theorem 1.1], we have \(H\asymp\log(X^{\hbar})\). Theorem 2.14 follows.
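(As a numerical aside, not needed anywhere: the local density \(g(p)\) from the proof above is easy to tabulate and compare with the main term \(p^{-1}\) from the Hasse bound. For \(p\equiv 2\bmod 3\), cubing is a bijection on \(\mathbb{F}_{p}\), and in fact \(g(p)=p^{-1}\) exactly.)

```python
# Tabulate g(p) = p^{-3} #{(x,y,z) mod p : x^3 + y^3 + z^3 = 0 mod p} and
# compare with 1/p, per the estimate g(p) = 1/p + O(p^{-3/2}) used above.
def g(p):
    count = sum(1 for x in range(p) for y in range(p) for z in range(p)
                if (x**3 + y**3 + z**3) % p == 0)
    return count / p**3

for p in (5, 7, 11, 13, 17):
    print(p, round(g(p), 5), round(1 / p, 5))
```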
## 3. Statistics of truncated singular series
For integers \(K\geq 1\), recall the definition of \(s_{a}(K)\) from (2.2). In this section, we will prove that \(s_{a}(K)\) is typically sizable (see Theorem 3.9 below). The proof makes use of an "approximate inverse" of \(s_{a}(K)=\sum_{n\leq K}n^{-3}T_{a}(n)\). There is some flexibility here; we let
\[M_{a}(K):=\sum_{n\leq K:\,\gcd(n,30)=1}\mu(n)n^{-3}T_{a}(n). \tag{3.1}\]
The strategy for Theorem 3.9 is to show that \(s_{a}(K)M_{a}(K)\) is typically sizable, and that \(M_{a}(K)\) is typically bounded. For analytic convenience, let \(T_{a}^{\natural}(n):=n^{-2}T_{a}(n)\).
For integers \(a\) and primes \(p\nmid 3a\), Propositions 3.2 and 3.3 below imply \(T_{a}^{\natural}(p)\ll 1\) and \(T_{a}(p^{\geq 2})=0\). Indeed, if \(a\neq 0\), then \(s_{a}(K)\) resembles the value of an \(L\)-function at \(1\), and in this sense Theorem 3.9 is related to work such as [11]. But there are also significant issues with this comparison, because \(|T_{a}^{\natural}(n)|\) can be as large as \(n\) or so. (Trivially \(|T_{a}^{\natural}(n)|\leq n^{2}\).) The most serious issues arise when \(a\) has a large cube divisor.
For an integer \(n\geq 1\), let \(\operatorname{sq}(n):=\prod_{p^{2}|n}p^{v_{p}(n)}\) denote the _square-full part_ of \(n\), and \(\operatorname{cub}(n):=\prod_{p^{3}|n}p^{v_{p}(n)}\) the _cube-full part_ of \(n\). Let \(\mathcal{S}(D)\) be the set of nonzero integers \(a\) with
\(\operatorname{sq}(|a|)\leq D\). Let \(\mathcal{C}(D)\) be the set of nonzero integers \(a\) with \(\operatorname{cub}(|a|)\leq D\). We call an integer \(n\geq 1\)_square-full_ if \(n=\operatorname{sq}(n)\), and _cube-full_ if \(n=\operatorname{cub}(n)\).
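(For concreteness, a short trial-division sketch of \(\operatorname{sq}(n)\) and \(\operatorname{cub}(n)\), adequate for small \(n\) and not used in the proofs:)

```python
# Square-full and cube-full parts of n via trial-division factorization.
import math

def factor(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def sq(n):   # product of p^{v_p(n)} over primes p with p^2 | n
    return math.prod(p**e for p, e in factor(n).items() if e >= 2)

def cub(n):  # product of p^{v_p(n)} over primes p with p^3 | n
    return math.prod(p**e for p, e in factor(n).items() if e >= 3)

print(sq(360), cub(360))  # 360 = 2^3 * 3^2 * 5, so sq = 72 and cub = 8
```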
We now collect some basic facts about \(T_{a}(n)\).
**Lemma 3.1**.: _Let \(a,n\in\mathbb{Z}\) with \(n\geq 1\). Let \(n_{3}:=\operatorname{cub}(n)\). Then \(|T_{a}^{\natural}(n)|\leq O(1)^{\omega(n)}(n_{3}n)^{1/2}\)._
Proof.: By Proposition 2.2, it suffices to prove the lemma when \(n=p^{l}\) for some prime \(p\) and integer \(l\geq 1\). But by (2.1) and [20, (4.24) and Lemma 4.7], we have
\[p^{3\lfloor(l-1)/3\rfloor}\cdot p^{-3l}T_{a}(p^{l})\ll p^{l-3/2}\mathbf{1}_{3\mid l-1}+p^{l-3}\mathbf{1}_{3\nmid l-1}.\]
A short calculation then yields \(T_{a}^{\natural}(p^{l})\ll p^{l/2}\cdot\mathbf{1}_{l\in\{1,2\}}+p^{l}\cdot \mathbf{1}_{l\geq 3}\). (There should be more robust proofs, but since \(F_{0}\) is diagonal, it is convenient to call on [20].)
For the proofs of the next two results, let \(N_{a}(m)\) denote the number of solutions \(\boldsymbol{y}\in(\mathbb{Z}/m\mathbb{Z})^{3}\) to \(F_{0}(\boldsymbol{y})\equiv a\bmod m\).
**Proposition 3.2**.: _Let \(a\in\mathbb{Z}\). Let \(p\) be a prime. Then the following hold:_
1. _If_ \(p\nmid 3a\)_, then_ \(|T_{a}^{\natural}(p)|\leq 6+2p^{-1/2}\)_._
2. _If_ \(p\mid 3a\)_, then_ \(|T_{a}^{\natural}(p)|\leq 6p^{1/2}\)_._
3. _If_ \(p\geq 7\)_, then_ \(|T_{a}^{\natural}(p)|<0.99p\)_._
Proof.: By (2.1), we have
\[T_{a}(p)=pN_{a}(p)-p^{3}. \tag{3.2}\]
_Case 1: \(p\nmid 3a\)._ Then the equation \(F_{0}(\boldsymbol{y})=aw^{3}\) cuts out a smooth cubic hypersurface in \(\mathbb{P}^{3}\) over \(\mathbb{F}_{p}\). The "curve at infinity" (cut out by \(w=0\)), namely \(F_{0}(\boldsymbol{y})=0\) in \(\mathbb{P}^{2}\), is also smooth. So by the Weil conjectures, \(N_{a}(p)=(p^{2}+p+1+\lambda_{1})-(p+1-\lambda_{2})\), where \(\lambda_{1}=\lambda_{1}(a;p)\) and \(\lambda_{2}=\lambda_{2}(p)\) are integers with \(|\lambda_{1}|\leq 6p\) and \(|\lambda_{2}|\leq 2\sqrt{p}\). Thus \(|T_{a}(p)|\leq 6p^{2}+2p^{3/2}\), and \(|T_{a}^{\natural}(p)|\leq 6+2p^{-1/2}\). So (1)-(3) hold (with (2) being vacuous).
_Case 2: \(p=3\)._ Then \(|T_{a}^{\natural}(p)|\leq p^{2}\) trivially. (In fact, \(T_{a}(p)=0\), but we do not need this special fact.) So (1)-(3) hold (with (1) and (3) being vacuous).
_Case 3: \(p\mid 3a\) and \(p\neq 3\)._ Then \(p\mid a\). So \(T_{a}(p)=T_{0}(p)\). But \(F_{0}(\boldsymbol{y})=0\) in \(\mathbb{A}^{3}\) is an affine cone over a curve. This curve, \(F_{0}(\boldsymbol{y})=0\) in \(\mathbb{P}^{2}\), is smooth. So by the Weil conjectures, \(N_{0}(p)=1+(p-1)(p+1-\lambda_{2}(p))\), where \(|\lambda_{2}(p)|\leq 2\sqrt{p}\). Thus \(|T_{a}(p)|\leq 2p(p-1)\sqrt{p}\), and \(|T_{a}^{\natural}(p)|\leq 2(p^{1/2}-p^{-1/2})\). So (1)-(3) hold (with (1) being vacuous).
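(A numerical sanity check of the identity (3.2) and of the bounds just proved, with \(N_{a}(p)\) counted by brute force at a few small primes; illustrative only.)

```python
# Verify |T_a^natural(p)| against Proposition 3.2 via (3.2):
# T_a(p) = p N_a(p) - p^3, with N_a(p) counted directly.
def N(a, p):
    return sum(1 for y1 in range(p) for y2 in range(p) for y3 in range(p)
               if (y1**3 + y2**3 + y3**3 - a) % p == 0)

for a, p in [(2, 5), (2, 7), (7, 7), (1, 11)]:
    T_nat = (p * N(a, p) - p**3) / p**2   # T_a^natural(p) = T_a(p) / p^2
    bound = 6 + 2 / p**0.5 if (3 * a) % p != 0 else 6 * p**0.5
    assert abs(T_nat) <= bound + 1e-9, (a, p, T_nat, bound)
print("bounds (1)-(2) of Proposition 3.2 verified at the listed (a, p)")
```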
We now strengthen the vanishing result in [20, Lemma 4.7].
**Proposition 3.3**.: _Let \(p\) be a prime. Let \(a,l\in\mathbb{Z}\) with \(l\geq 2\). If \(p^{l-1}\nmid 3a\), then \(T_{a}(p^{l})=0\)._
Proof when \(p\neq 3\).: By (2.1), we have
\[T_{a}(p^{l})=p^{l}N_{a}(p^{l})-p^{l-1}p^{3}N_{a}(p^{l-1}). \tag{3.3}\]
Let \(N_{a}^{0}(p^{v})\) (for \(v\geq 1\)) denote the number of solutions \(\boldsymbol{y}\in(\mathbb{Z}/p^{v}\mathbb{Z})^{3}\) to \(F_{0}(\boldsymbol{y})\equiv a\bmod p^{v}\) with \(p\mid\nabla F_{0}(\boldsymbol{y})\). (Here \(\nabla F_{0}(\boldsymbol{y})=(3y_{1}^{2},3y_{2}^{2},3y_{3}^{2})\).) Then by (3.3) and Hensel's lemma,
\[T_{a}(p^{l})=p^{l}N_{a}^{0}(p^{l})-p^{l-1}p^{3}N_{a}^{0}(p^{l-1}). \tag{3.4}\]
_Case 1: \(p^{3}\nmid a\)._ Suppose \(p^{l-1}\nmid 3a\). Then \(p^{\min(3,l-1)}\nmid a\), so all solutions \(\boldsymbol{y}\bmod p^{l-1}\) to \(F_{0}(\boldsymbol{y})\equiv a\bmod p^{l-1}\) must be _primitive_ (i.e. satisfy \(p\nmid\boldsymbol{y}\)), and thus satisfy \(p\nmid\nabla F_{0}(\boldsymbol{y})\). Thus \(N_{a}^{0}(p^{v})=0\) for \(v\geq l-1\), whence \(T_{a}(p^{l})=0\) by (3.4).
_Case 2: \(p^{3}\mid a\)._ Suppose \(p^{l-1}\nmid 3a\). Then \(l\geq 5\). But by a short calculation, \(N^{0}_{a}(p^{v})=p^{6}N_{a/p^{3}}(p^{v-3})\) for \(v\geq 4\), whence \(T_{a}(p^{l})=p^{9}T_{a/p^{3}}(p^{l-3})\) by (3.4). Noting that \(l-3\geq 2\) and \(p^{l-4}\nmid 3a/p^{3}\), a recursive argument now yields \(T_{a}(p^{l})=0\).
_Proof when \(p=3\)._ Both (3.3) and (3.4) hold here, but (3.4) is no longer simpler than (3.3). However, let \(N^{1}_{a}(p^{v})\) (for \(v\geq 2\)) denote the number of solutions \(\boldsymbol{y}\in(\mathbb{Z}/p^{v}\mathbb{Z})^{3}\) to \(F_{0}(\boldsymbol{y})\equiv a\bmod p^{v}\) with \(p^{2}\mid\nabla F_{0}(\boldsymbol{y})\). A routine Hensel argument (based on writing \(\boldsymbol{y}=\boldsymbol{y}_{0}+p^{l-2}\boldsymbol{z}\), rather than \(\boldsymbol{y}=\boldsymbol{y}_{0}+p^{l-1}\boldsymbol{z}\)) shows that if \(l\geq 3\), then \(T_{a}(p^{l})=p^{l}N^{1}_{a}(p^{l})-p^{l-1}p^{3}N^{1}_{a}(p^{l-1})\). Just as for \(p\neq 3\), one then finishes by casework on whether \(p^{3}\nmid a\) or \(p^{3}\mid a\).
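(A spot check of the vanishing statement, reusing the brute-force function \(T(a,n)\) from the sketch in §2; the fourfold sum there limits us to tiny prime powers.)

```python
# Proposition 3.3: T_a(p^l) = 0 whenever p^{l-1} does not divide 3a.
# Assumes T(a, n) from the earlier brute-force sketch is in scope.
for a, p, l in [(1, 2, 2), (2, 5, 2), (2, 3, 3)]:
    assert (3 * a) % p**(l - 1) != 0  # hypothesis: p^(l-1) does not divide 3a
    assert abs(T(a, p**l)) < 1e-3, (a, p, l)
print("vanishing verified in the listed cases")
```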
Now recall (2.2) and (3.1). For every \(a\in\mathbb{Z}\), we have
\[s_{a}(K)M_{a}(K)=\sum_{n\leq K}c_{a}(n)+\sum_{\begin{subarray}{c}n_{1},n_{2} \leq K:\\ n_{1}n_{2}>K,\;\gcd(n_{2},30)=1\end{subarray}}n_{1}^{-3}T_{a}(n_{1})\cdot\mu(n _{2})n_{2}^{-3}T_{a}(n_{2}), \tag{3.5}\]
where for integers \(n\geq 1\) we let
\[c_{a}(n):=n^{-3}\sum_{n_{1}n_{2}=n:\gcd(n_{2},30)=1}T_{a}(n_{1})\cdot\mu(n_{2} )T_{a}(n_{2}).\]
By Proposition 2.2, \(c_{a}(n)\) is a Dirichlet convolution of two multiplicative functions, so \(c_{a}(n)\) is multiplicative in \(n\). For any prime \(p\) and integer \(l\geq 1\), we have
\[c_{a}(p^{l})=p^{-3l}[T_{a}(p^{l})-T_{a}(p^{l-1})T_{a}(p)\mathbf{1}_{p\geq 7 }]=p^{-l}[T_{a}^{\natural}(p^{l})-T_{a}^{\natural}(p^{l-1})T_{a}^{\natural}(p) \mathbf{1}_{p\geq 7}]. \tag{3.6}\]
We now bound \(c_{a}(n)\) using our work above (for convenience; [10, Lemma 4.7], a "stratification" result for \(p^{-l}T_{a}^{\natural}(p^{l})\) based on \(\gcd(p^{l},a)\), might also suffice).
**Lemma 3.4**.: _Let \(a,n\in\mathbb{Z}\) with \(a\neq 0\) and \(n\geq 1\). Let \(a_{2}:=\operatorname{sq}(|a|)\) and \(n_{3}:=\operatorname{cub}(n)\). Let \(n_{2}:=\operatorname{sq}(n/n_{3})\) and \(n_{1}:=n/n_{2}n_{3}\). Then \(n_{2}\) is a square, and_
\[|c_{a}(n)|\leq O(1)^{\omega(n)}n_{1}^{-2}\cdot n_{2}^{-1}\gcd(n_{2}^{1/2},3a) \cdot n_{3}^{-1/2}\prod_{p\mid n_{3}:\,v_{p}(n_{3})\leq 2v_{p}(9a_{2})}p^{v_{p}( n_{3})/2}. \tag{3.7}\]
Proof.: Since \(n/n_{3}\) is cube-free, \(n_{2}=\operatorname{rad}(n_{2})^{2}\). As for (3.7), note that both sides of (3.7) are multiplicative in \(n\), so we may assume \(n=p^{l}\), where \(p\) is prime and \(l\geq 1\) is an integer.
_Case 1: \(l=1\)._ By (3.6), \(c_{a}(p)=0\) if \(p\geq 7\), and \(c_{a}(p)\ll p^{-2}\) trivially if \(p\leq 5\). So \(c_{a}(n)\ll n^{-2}\). This proves (3.7), since \((n_{1},n_{2},n_{3})=(n,1,1)\).
_Case 2: \(l=2\)._ Again, use (3.6). If \(p\nmid 3a\), then Propositions 3.3 and 3.2 give \(c_{a}(p^{2})=p^{-2}[0+O(1)]\ll p^{-2}\). If \(p\mid 3a\), then Lemma 3.1 and Proposition 3.2 give \(c_{a}(p^{2})\ll p^{-2}[p+O(p)]\ll p^{-1}\). Either way, \(c_{a}(p^{2})\ll n^{-1}\gcd(n^{1/2},3a)\). This proves (3.7), since \((n_{1},n_{2},n_{3})=(1,n,1)\).
_Case 3: \(l\geq 3\)._ If \(p^{\max(2,l-2)}\nmid 3a\), then (3.6) and Proposition 3.3 (plus Lemma 3.1 if \(l=3\)) give \(c_{a}(p^{l})\ll p^{-l/2}\). In general, (3.6) and Lemma 3.1 give \(c_{a}(p^{l})\ll 1\). Either way,
\[c_{a}(n)\ll n^{-1/2}(p^{v_{p}(n)/2})^{\mathbf{1}_{v_{p}(3a)\geq\max(2,l-2)}} \ll n^{-1/2}(p^{v_{p}(n)/2})^{\mathbf{1}_{v_{p}(9a_{2})\geq l/2}},\]
where in the final step we have used the fact that \(p^{\max(2,l-2)}\) is square-full, \(\operatorname{sq}(3a)\mid\operatorname{sq}(9a_{2})\), and \(\max(2,l-2)\geq l/2\). This proves (3.7), since \((n_{1},n_{2},n_{3})=(1,1,n)\).
In general, the series \(\sum_{n\leq K}c_{a}(n)\) behaves better than \(s_{a}(K)\).
**Proposition 3.5**.: _Let \(a,D,K\in\mathbb{Z}\) with \(a\neq 0\) and \(D,K\geq 1\). Suppose \(a\not\equiv\pm 4\bmod 9\)._
1. _For_ \(p\) _prime,_ \(1+c_{a}(p)+c_{a}(p^{2})+\cdots\) _converges absolutely to a real number_ \(\gamma_{p}(a)>0\)_._
2. _If_ \(p\nmid 3a\)_, then_ \(\gamma_{p}(a)\geq(1-p^{-2})^{O(1)}\)_. If_ \(p\mid 3a\)_, then_ \(\gamma_{p}(a)\geq(1-p^{-1})^{O(1)}\)_._
3. \(\prod_{p\text{ prime}}\gamma_{p}(a)\) _converges absolutely to a real number_ \(\gamma(a)\gg\prod_{p\mid a}(1-p^{-1})^{O(1)}\)_._
4. _If_ \(a\in\mathcal{S}(D)\)_, then_ \(\sum_{n\leq K}c_{a}(n)=\gamma(a)+O_{\epsilon}(|a|^{\epsilon}K^{\epsilon-1/6}D)\)_._
Proof.: By Proposition 3.3, (3.6), and (2.5), the sum \(\sum_{j\geq 0}c_{a}(p^{j})\) converges _absolutely_ to
\[\gamma_{p}(a):=\sigma_{p,a}\cdot(1-p^{-3}T_{a}(p)\mathbf{1}_{p\geq 7}), \tag{3.8}\]
since \(a\neq 0\). Here \(\sigma_{p,a}>0\) by [1, (5.11) and Lemma 5.5], since \(a\not\equiv\pm 4\bmod 9\). And
\[1-p^{-3}T_{a}(p)\mathbf{1}_{p\geq 7}\in(0.01,1.99) \tag{3.9}\]
by Proposition 3.2. So \(\gamma_{p}(a)>0\) by (3.8). This completes the proof of (1).
Now we bound \(\gamma_{p}(a)\) from below. First suppose \(p\leq 5\). The proof of [1, Lemma 5.5] shows that \(\sigma_{p,a}\gg 1\) (uniformly over \(a\)). So \(\gamma_{p}(a)\gg 1\) by (3.8) and (3.9). Now suppose \(p\geq 7\). Let \(N_{a}^{*}(p)\) denote the number of solutions \(\boldsymbol{y}\in\mathbb{F}_{p}^{3}\setminus\{\mathbf{0}\}\) to \(F_{0}(\boldsymbol{y})=a\). By (2.5) and Hensel's lemma, \(\sigma_{p,a}\geq p^{-2}N_{a}^{*}(p)\). But \(N_{a}^{*}(p)\geq N_{a}(p)-1\), and \(N_{a}(p)=p^{2}+p^{-1}T_{a}(p)\) by (3.2), so \(\sigma_{p,a}\geq 1+p^{-3}T_{a}(p)-p^{-2}\). Since \(p\geq 7\), it follows from (3.8), (3.9), and Proposition 3.2 that \(\gamma_{p}(a)\geq 1-p^{-6}T_{a}(p)^{2}-1.99p^{-2}=1-O(p^{-2}\mathbf{1}_{p\mid 3a} +p^{-1}\mathbf{1}_{p\mid 3a})\). Since \(\gamma_{p}(a)>0\) by (1), we conclude that (2) holds for \(p\geq 7\). By enlarging the \(O(1)\)'s in (2) if necessary, we can then ensure that (2) holds for all primes \(p\).
For (3), first note that if \(p\geq 7\) and \(p\nmid 3a\), then \(N_{a}^{*}(p)=N_{a}(p)\), so \(\sigma_{p,a}=p^{-2}N_{a}(p)\) by (2.5) and Hensel's lemma; and hence the arguments in the previous paragraph imply \(\gamma_{p}(a)=1+O(p^{-2})\) (uniformly over \(p\nmid 30a\)). Thus \(\prod_{p\text{ prime}}\gamma_{p}(a)\) converges absolutely to some \(\gamma(a)\in\mathbb{R}_{>0}\). By (2), then, \(\gamma(a)\gg\prod_{p\mid 3a}(1-p^{-1})^{O(1)}\). So (3) holds.
For (4), note that by Lemma 3.4 (and the definition of \(\mathcal{S}(D)\)),
\[\sum_{n>K}|c_{a}(n)|\ll_{\epsilon}\sum_{\begin{subarray}{c}n_{1},m_{2},n_{3} \geq 1:\\ n_{1}m_{2}^{2}n_{3}\geq K,\,n_{3}=\operatorname{cub}(n_{3})\end{subarray}}(n _{1}m_{2}^{2}n_{3})^{\epsilon}n_{1}^{-2}m_{2}^{-2}\gcd(m_{2},3a)n_{3}^{-1/2} (9D). \tag{3.10}\]
By dyadic summation, the right-hand side of (3.10) is at most \(O_{\epsilon}(K^{O(\epsilon)}D)\) times
\[\sup_{\begin{subarray}{c}N_{1},M_{2},N_{3}\in\mathbb{R}_{\geq 1}:\\ (2N_{1})(2M_{2})^{2}(2N_{3})\geq K\end{subarray}}\sum_{d\mid 3a}N_{1}^{1-2}(M_{2}/d)M_{2 }^{-2}dN_{3}^{1/3-1/2}\ll_{\epsilon}|a|^{\epsilon}K^{-1/6}.\]
Thus \(\sum_{n\geq 1}c_{a}(n)\) converges absolutely to \(\gamma(a)\); and furthermore, (4) holds.
To bound \(M_{a}(K)\) on average, and to handle the sum over \(n_{1}n_{2}>K\) in (3.5), we can use Lemma 3.8 below, whose proof requires the following two lemmas.
**Lemma 3.6**.: _Let \(D,n\geq 1\) be integers. Let \(a\in\mathcal{S}(D)\). If \(T_{a}(n)\neq 0\), then \(n\in\mathcal{C}(27D^{3/2})\)._
Proof.: Suppose \(T_{a}(n)\neq 0\). Let \(a_{2}\leq D\) be the square-full part of \(|a|\). Suppose \(p\) is a prime dividing the cube-full part \(n_{3}\) of \(n\). Since \(T_{a}(n)\neq 0\), Proposition 2.2 implies \(T_{a}(p^{v_{p}(n)})\neq 0\). So by Proposition 3.3, \(p^{v_{p}(n)-1}\mid 3a\). Here \(v_{p}(n)\geq 3\); so \(p^{v_{p}(n)-1}\mid\operatorname{sq}(3a)\mid\operatorname{sq}(9a_{2})\). Therefore, \(9a_{2}\) is divisible by \(\prod_{p\mid n_{3}}p^{v_{p}(n)-1}\geq n_{3}^{2/3}\), whence \(n_{3}\leq(9a_{2})^{3/2}\leq 27D^{3/2}\).
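For clarity, the last inequality rests on the elementary fact that \(v-1\geq\tfrac{2}{3}v\) whenever \(v\geq 3\); a one-line verification:

\[\prod_{p\mid n_{3}}p^{v_{p}(n)-1}\geq\prod_{p\mid n_{3}}p^{\frac{2}{3}v_{p}(n)}=\Big{(}\prod_{p\mid n_{3}}p^{v_{p}(n)}\Big{)}^{2/3}=n_{3}^{2/3},\]

since \(v_{p}(n_{3})=v_{p}(n)\) for every prime \(p\mid n_{3}\).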
The next lemma controls the large values of \(|T_{a}^{\natural}(n)|\) on average over \(a\). Let
\[T_{b}^{\natural}(\boldsymbol{m}):=T_{b}^{\natural}(m_{1})\cdots T_{b}^{\natural} (m_{r}),\quad T_{b}^{\natural}(\boldsymbol{n}):=T_{b}^{\natural}(n_{1}) \cdots T_{b}^{\natural}(n_{r}),\]
whenever the variables \(b\), \(r\), \(m_{1},\ldots,m_{r}\), \(n_{1},\ldots,n_{r}\) are clear from context.
**Lemma 3.7**.: _Let \(r,m_{1},n_{1},\ldots,m_{r},n_{r}\geq 1\) be integers. Let \(m_{i,3}\), \(n_{i,3}\) denote the cube-full parts of \(m_{i}\), \(n_{i}\), respectively. Suppose \(m_{1}\cdots m_{r}n_{1}\cdots n_{r}\) is square-full. Then_
\[\mathbb{E}_{b\in\mathbb{Z}/m_{1}\cdots m_{r}n_{1}\cdots n_{r}\mathbb{Z}}[|T_{b} ^{\natural}(\mathbf{m})|\cdot|T_{b}^{\natural}(\mathbf{n})|]\ll_{r,\epsilon}\frac{ \prod_{i=1}^{r}[(m_{i,3}n_{i,3})^{1/2}(m_{i}n_{i})^{1/2+\epsilon}]}{\operatorname {rad}(m_{1}\cdots m_{r}n_{1}\cdots n_{r})}.\]
Proof.: By Proposition 2.2, we may assume that \(m_{1}\cdots m_{r}n_{1}\cdots n_{r}\) is a prime power; or equivalently, that \(\operatorname{rad}(m_{1}\cdots m_{r}n_{1}\cdots n_{r})\) equals some prime \(p\). Applying Lemma 3.1 to residues \(b\) with \(p\mid 3b\), and Propositions 3.2 and 3.3 to residues \(b\) with \(p\nmid 3b\), gives
\[\mathbb{E}_{b\in\mathbb{Z}/m_{1}\cdots m_{r}n_{1}\cdots n_{r}\mathbb{Z}}[|T_{b }^{\natural}(\mathbf{m})|\cdot|T_{b}^{\natural}(\mathbf{n})|]\ll_{r}\frac{\prod_{i=1}^ {r}[(m_{i,3}n_{i,3})^{1/2}(m_{i}n_{i})^{1/2}]}{p/\gcd(3,p)}+1.\]
But \(m_{1}\cdots m_{r}n_{1}\cdots n_{r}\) is square-full, so \(\prod_{i=1}^{r}(m_{i}n_{i})^{1/2}\geq p\), and the lemma follows.
The following result can loosely be thought of as an algebro-geometric analog of [1, Theorem 4], emphasizing a different aspect. It only applies over relatively small moduli.
**Lemma 3.8**.: _Let \(A,K_{1},K_{2},D,j\geq 1\) be integers. Let \(\beta\colon\mathbb{Z}^{2}\to\mathbb{R}\) be a function supported on \([K_{1},2K_{1})\times[K_{2},2K_{2})\). For \(a\in\mathbb{Z}\), let_
\[P_{a}:=\sum_{m,n\geq 1\colon n\in\mathcal{S}(1)}\beta(m,n)\cdot T_{a}^{ \natural}(m)T_{a}^{\natural}(n). \tag{3.11}\]
_Suppose \(A\geq(K_{1}K_{2})^{3j}\). Then_
\[\sum_{a\in[-A,A]\cap\mathcal{S}(D)}|P_{a}|^{2j}\ll_{j,\epsilon}A\cdot\min(K_{ 1},D^{3/2})^{j}\cdot(K_{1}K_{2})^{j+\epsilon}\cdot(\max|\beta|)^{2j}. \tag{3.12}\]
Proof.: Let \(K:=K_{1}K_{2}\) and \(r:=2j\). For \(a\in\mathbb{Z}\), let
\[Q_{a}:=\sum_{m,n\geq 1\colon m\in\mathcal{C}(27D^{3/2}),\;n\in\mathcal{S}(1)} \beta(m,n)\cdot T_{a}^{\natural}(m)T_{a}^{\natural}(n).\]
By (3.11) and Lemma 3.6, we have \(P_{a}=Q_{a}\) for all \(a\in\mathcal{S}(D)\). Thus we may replace each \(P_{a}\) in (3.12) with \(Q_{a}\); and by positivity, we may then extend \([-A,A]\cap\mathcal{S}(D)\) to the full interval \([-A,A]\). It follows that the left-hand side of (3.12) is at most
\[\sum_{a\in[-A,A]}|Q_{a}|^{r}\ll\sum_{\begin{subarray}{c}m_{1},\ldots,m_{r}\in \mathbb{Z}_{\geq 1}\cap\mathcal{C}(27D^{3/2}),\\ n_{1},\ldots,n_{r}\in\mathbb{Z}_{\geq 1}\cap\mathcal{S}(1),\\ b\in\mathbb{Z}/m_{1}\cdots m_{r}n_{1}\cdots n_{r}\mathbb{Z}\end{subarray}}\beta (\mathbf{m},\mathbf{n})T_{b}^{\natural}(\mathbf{m})T_{b}^{\natural}(\mathbf{n})\cdot\frac{A \cdot(1+O_{k}(K^{-k}))}{m_{1}\cdots m_{r}n_{1}\cdots n_{r}}, \tag{3.13}\]
where \(\beta(\mathbf{m},\mathbf{n}):=\beta(m_{1},n_{1})\cdots\beta(m_{r},n_{r})\); the inequality (3.13) follows from Poisson summation after replacing (by positivity again) the sum over \(a\in[-A,A]\) with a smoothed sum over \(a\in\mathbb{Z}\). The total contribution from \(O_{k}(K^{-k})\) to the right-hand side of (3.13) is trivially \(\ll K^{2r}\cdot(\max|\beta|)^{r}\cdot K^{r}\cdot A\cdot O_{k}(K^{-k})\), which is satisfactory if \(k:=3r\), say.
It remains to handle the "main" term in (3.13). Given \(m_{1},n_{1},\ldots,m_{r},n_{r}\geq 1\), we have
\[\sum_{b\in\mathbb{Z}/m_{1}\cdots m_{r}n_{1}\cdots n_{r}\mathbb{Z}}T_{b}^{ \natural}(\mathbf{m})T_{b}^{\natural}(\mathbf{n})=0\quad\text{if $m_{1}\cdots m_{r}n_{1}\cdots n_{r}$ is not square-full.}\]
(This follows from Proposition 2.2 and the fact that \(\sum_{a\in\mathbb{Z}/p\mathbb{Z}}T_{a}(p)=0\) for every prime \(p\). Cf. the proof of Lemma 2.7.) On the other hand, if \(m_{1}\cdots m_{r}n_{1}\cdots n_{r}\) is square-full, then
Lemma 3.7 delivers the bound
\[\mathbb{E}_{b\in\mathbb{Z}/m_{1}\cdots m_{r}n_{1}\cdots n_{r}\mathbb{Z}}[|T_{b}^{ \natural}(\boldsymbol{m})|\cdot|T_{b}^{\natural}(\boldsymbol{n})|]\ll_{r, \epsilon}\frac{\min(K_{1},D^{3/2})^{r/2}\cdot K^{r/2+\epsilon}}{\operatorname{ rad}(m_{1}\cdots m_{r}n_{1}\cdots n_{r})},\]
provided that we have \(m_{1},\ldots,m_{r}\in\mathcal{C}(27D^{3/2})\) and \(n_{1},\ldots,n_{r}\in\mathcal{S}(1)\). It follows that
\[\sum_{\begin{subarray}{c}m_{1},\ldots,m_{r}\in\mathbb{Z}_{\geq 1}\cap \mathcal{C}(27D^{3/2}),\\ n_{1},\ldots,n_{r}\in\mathbb{Z}_{\geq 1}\cap\mathcal{S}(1),\\ b\in\mathbb{Z}/m_{1}\cdots m_{r}n_{1}\cdots n_{r}\mathbb{Z}\end{subarray}} \frac{\beta(\boldsymbol{m},\boldsymbol{n})T_{b}^{\natural}(\boldsymbol{m})T_{ b}^{\natural}(\boldsymbol{n})}{m_{1}\cdots m_{r}n_{1}\cdots n_{r}}\ll \sum_{R\leq(4K)^{r/2}}\sum_{\begin{subarray}{c}m_{1},\ldots,m_{r}|R^{\infty}, \\ n_{1},\ldots,n_{r}|R^{\infty},\end{subarray}}\frac{C(\boldsymbol{m}, \boldsymbol{n})}{R}, \tag{3.14}\]
where \(C(\boldsymbol{m},\boldsymbol{n}):=|\beta(\boldsymbol{m},\boldsymbol{n})|\cdot \min(K_{1},D^{3/2})^{r/2}\cdot K^{r/2+\epsilon}\). (For \(u,v\in\mathbb{Z}\), we write \(u\mid v^{\infty}\) if there exists a positive integer \(k\) with \(u\mid v^{k}\).) But for any integers \(R,N\geq 1\), there are at most \(O_{\epsilon}(R^{\epsilon}N^{\epsilon})\) integers \(n\in[N,2N)\) with \(n\mid R^{\infty}\) (a fact one can prove using Rankin's trick); so
\[\sum_{m_{1},\ldots,m_{r},n_{1},\ldots,n_{r}|R^{\infty}}|\beta(\boldsymbol{m}, \boldsymbol{n})|\ll_{r,\epsilon}(\max|\beta|)^{r}(K_{1}R)^{r\epsilon}(K_{2}R) ^{r\epsilon}. \tag{3.15}\]
By (3.15) and the bound \(\sum_{R\leq(4K)^{r/2}}R^{2r\epsilon-1}\ll_{r,\epsilon}K^{r^{2}\epsilon}\), the right-hand side of (3.14) is \(\ll_{r,\epsilon}(\max|\beta|)^{r}\min(K_{1},D^{3/2})^{r/2}K^{r/2+(1+r+r^{2})\epsilon}\). Plugging this into (3.13) gives (3.12).
For integers \(K\geq 1\) and reals \(\eta>0\), let
\[\mathscr{E}(K;\eta):=\{a\in\mathbb{Z}:a\not\equiv\pm 4\bmod 9\}\cap\{a\in \mathbb{Z}:|s_{a}(K)|\leq\eta\}. \tag{3.16}\]
**Theorem 3.9**.: _Let \(A,K,j\geq 1\) be integers. Let \(\epsilon,\eta>0\) be reals. If \(A\geq K^{6j}\), then_
\[A^{-1}\cdot|\mathscr{E}(K;\eta)\cap[-A,A]|\ll_{j,\epsilon}\eta^{1.8j}+A^{ \epsilon}K^{-1.5j}+\eta^{-0.2j}K^{-0.8j}+K^{-1/24}.\]
Proof.: By Proposition 3.5, there exists a constant \(C=C(\epsilon)>0\) such that every element of \(\mathscr{E}(K;\eta)\cap[-A,A]\cap\mathcal{S}(D)\) lies in one of the following sets:
1. \(\mathscr{E}_{1}:=\{a\in\mathcal{S}(D):|M_{a}(K)|\geq\eta^{-9/10}\}\).
2. \(\mathscr{E}_{2}:=\{a\in\mathcal{S}(D):\prod_{p|a}(1-p^{-1})^{C}\leq C\cdot( \eta^{1/10}+A^{\epsilon}K^{\epsilon-1/6}D)\}\).
3. \(\mathscr{E}_{3}:=\{a\in\mathcal{S}(D):|s_{a}(K)M_{a}(K)-\sum_{n\leq K}c_{a}(n )|\geq\eta^{1/10}\}\).
Indeed, if \(C\) is sufficiently large and \(a\in\mathscr{E}(K;\eta)\cap[-A,A]\cap\mathcal{S}(D)\setminus(\mathscr{E}_{1}\cup\mathscr{E}_{2})\), then \(|s_{a}(K)|\leq\eta\) (since \(a\in\mathscr{E}(K;\eta)\)) and \(|M_{a}(K)|\leq\eta^{-9/10}\) (since \(a\notin\mathscr{E}_{1}\)), and \(\sum_{n\leq K}c_{a}(n)\geq 2\eta^{1/10}\) (by Proposition 3.5, since \(a\notin\mathscr{E}_{2}\)), so \(a\in\mathscr{E}_{3}\).
We now bound \(\mathscr{E}_{1}\), \(\mathscr{E}_{2}\), \(\mathscr{E}_{3}\). By Lemma 3.8 (with \(K_{1}=1\) and \(1\leq K_{2}\leq K\)), we have
\[\eta^{-1.8j}\cdot|\mathscr{E}_{1}\cap[-A,A]|\ll_{j}A, \tag{3.17}\]
provided \(A\geq K^{3j}\). By (3.5) and Lemma 3.8 (with \(1\leq K_{1},K_{2}\leq K\) and \((2K_{1})(2K_{2})\geq K\)),
\[\eta^{0.2j}\cdot|\mathscr{E}_{3}\cap[-A,A]|\ll_{j,\epsilon}AD^{3j/2}K^{-j+ \epsilon}, \tag{3.18}\]
provided \(A\geq K^{6j}\). And \(\prod_{p|a}(1-p^{-1})=\phi(|a|)/|a|\), so [13, p. 61, (2.32)] implies
\[(\eta^{1/10}+A^{\epsilon}K^{\epsilon-1/6}D)^{-18j}\cdot|\mathscr{E}_{2}\cap[-A, A]|\ll_{j,\epsilon,C}A. \tag{3.19}\]
Since \([-A,A]\setminus\mathcal{S}(D)\) has size \(\ll D^{-1/2}A\), we conclude that
\[A^{-1}|\mathscr{E}(K;\eta)\cap[-A,A]|\ll_{j,\epsilon}\eta^{1.8j}+(A^{\epsilon}K ^{\epsilon-1/6}D)^{18j}+\eta^{-0.2j}D^{3j/2}K^{-j+\epsilon}+D^{-1/2},\]
provided \(A\geq K^{6j}\). Taking \(D=\lfloor K^{1/12}\rfloor\) gives Theorem 3.9.
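For the reader's convenience, here is a sketch of why this choice of \(D\) produces the stated exponents. We may assume \(\epsilon\leq 3j/40\), since the claimed bound only weakens as \(\epsilon\) grows; then, using \(K\leq A\),

\[(A^{\epsilon}K^{\epsilon-1/6}D)^{18j}\leq A^{36j\epsilon}K^{-1.5j},\qquad\eta^{-0.2j}D^{3j/2}K^{-j+\epsilon}\leq\eta^{-0.2j}K^{-7j/8+\epsilon}\leq\eta^{-0.2j}K^{-0.8j},\qquad D^{-1/2}\ll K^{-1/24},\]

and renaming \(36j\epsilon\) as \(\epsilon\) gives the bound stated in Theorem 3.9.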
## 4. Applying increasingly cuspidal weights
To prove Theorems 1.1 and 1.2, we will combine Theorems 2.13, 2.14, and 3.9. To apply Theorems 2.13 and 2.14, we need to choose a suitable weight \(\nu\). Fix a function \(w_{0}\in C_{c}^{\infty}(\mathbb{R})\) with \(w_{0}\geq 0\) everywhere, and \(w_{0}\geq 1\) on \([-2,2]\). Fix a function \(w_{2}\in C_{c}^{\infty}(\mathbb{R})\) with \(w_{2}\geq 0\) everywhere, \(w_{2}\geq 1\) on \([1,10]\), and \(\operatorname{Supp}w_{2}\subseteq\mathbb{R}_{>0}\). Given a real \(R\geq 2\), set
\[\nu^{\star}(\boldsymbol{y}):=w_{0}(F_{0}(\boldsymbol{y}))\int_{r\in[1,R]}d^{ \times}r\prod_{1\leq l\leq 3}w_{2}(|y_{l}|/r)\prod_{1\leq i<j\leq 3}w_{2}(|y_{i}+y_ {j}|/r), \tag{4.1}\]
where \(d^{\times}r:=dr/r\). Clearly \(\nu^{\star}\in C_{c}^{\infty}(\mathbb{R}^{3})\), and \(\nu^{\star}\) satisfies (2.27) and (2.28).
Let us consider what happens as we vary \(R\). Since \(w_{0}\), \(w_{2}\) are fixed, we have
\[\nu^{\star}(\boldsymbol{y})\ll\int_{\mathbb{R}_{>0}}d^{\times}r\,w_{2}(|y_{1} |/r)=\|w_{2}\circ\log\|_{L^{1}(\mathbb{R})}\ll 1 \tag{4.2}\]
uniformly over \(\boldsymbol{y}\in\mathbb{R}^{3}\) and \(R\geq 2\). Similarly in general, for integers \(k\geq 0\), we have \(\|\nu^{\star}\|_{k,\infty}\ll_{k}1\), by a short calculation using (1.10), (4.1), and the fact that \(r\geq 1\) for all \(r\in[1,R]\). Also, it is clear from (4.1) that \(B(\nu^{\star})\ll R\). Furthermore,
\[\operatorname{vol}(\operatorname{Supp}\nu^{\star})\ll\int_{r\in[1,R]}d^{ \times}r\int_{y_{1},y_{2}\in\mathbb{R}}dy_{1}\,dy_{2}\,\mathbf{1}_{|y_{1}|,|y _{2}|\in r\cdot\operatorname{Supp}w_{2}}\cdot r^{-2}\ll\log R, \tag{4.3}\]
because for any \(r\in[1,R]\) and \(y_{1},y_{2}\in\pm r\cdot\operatorname{Supp}w_{2}\), the set \(\{y_{3}\in\pm r\cdot\operatorname{Supp}w_{2}:F_{0}(\boldsymbol{y})\in \operatorname{Supp}w_{0}\}\) has measure \(\ll r^{-2}\). We also have the following key lemma.
**Lemma 4.1**.: _Let \(a\in\mathbb{R}\) with \(|a|\leq X^{3}\). Then \(\sigma_{\infty,a,\nu^{\star}}(X)\gg\log R\)._
Proof.: Let \(\tilde{a}:=a/X^{3}\). Given \((y_{2},y_{3})\), let \(y_{1}:=(\tilde{a}-y_{2}^{3}-y_{3}^{3})^{1/3}\). For \(r\geq 1\), let \(D_{r}:=[3.99r,4r]^{2}\). Then for all \((y_{2},y_{3})\in D_{r}\), we have \(F_{0}(\boldsymbol{y})=\tilde{a}\in[-1,1]\) and \(y_{1}\in[-6r,-5r]\), and thus
\[w_{0}(F_{0}(\boldsymbol{y}))\prod_{1\leq l\leq 3}w_{2}(|y_{l}|/r)\prod_{1\leq i<j \leq 3}w_{2}(|y_{i}+y_{j}|/r)\geq 1.\]
By (2.24), (4.1), and the nonnegativity of \(w_{0}\), \(w_{2}\), it follows that
\[\sigma_{\infty,a,\nu^{\star}}(X)=\int_{\mathbb{R}^{2}}dy_{2}\,dy_{3}\,\nu^{ \star}(\boldsymbol{y})\cdot(3y_{1}^{2})^{-1}\gg\int_{r\in[1,R]}d^{\times}r\, \operatorname{vol}(D_{r})\cdot r^{-2}\gg\log R,\]
since \(\operatorname{vol}(D_{r})\gg r^{2}\).
Proof of Theorems 1.1 and 1.2.: We gradually increase our hypotheses. First assume that (1.6) for \(d=1\) holds for all clean functions \(w\in C_{c}^{\infty}(\mathbb{R}^{6})\). Using Theorem 2.13 (with \(d=1\)), one can prove (cf. (2.29) and (2.30)) that for integers \(X,K\geq 1\) with \(K\leq X^{9/10}\), we have
\[X^{-3}\operatorname{Var}(X,K;1)\ll\|\nu^{\star}\|_{L^{2}(\mathbb{R}^{3})}^{2}+o _{\nu^{\star};X\to\infty}(1)+O(K^{-1/2}\cdot\|\nu^{\star}\|_{100,\infty}^{2} B(\nu^{\star})^{5000}), \tag{4.4}\]
where \(o_{\nu^{\star};X\to\infty}(1)\) denotes a quantity that tends to \(0\) as \(X\to\infty\) (for any fixed \(\nu^{\star}\)). Here \(\|\nu^{\star}\|_{L^{2}(\mathbb{R}^{3})}^{2}\ll\log R\), and \(\|\nu^{\star}\|_{100,\infty}^{2}\ll 1\) and \(B(\nu^{\star})\ll R\). On the other hand, by Theorem 3.9 with \(\eta=(\log R)^{-10/j}\) (for an integer \(j\geq 1\)), we have
\[A^{-1}\cdot|\mathscr{E}(K;\eta)\cap[-A,A]|\ll_{j,\epsilon}(\log R)^{-18}+A^{ \epsilon}K^{-1.5j}+(\log R)^{2}K^{-0.8j}+K^{-1/24} \tag{4.5}\]
for integers \(A\geq K^{6j}\). By (2.4), (3.16), and Lemma 4.1, we conclude (by letting \(A,X\to\infty\) with \(A\in[X^{3}/2,X^{3}]\), taking \(K=\lfloor A^{1/6j}\rfloor\), taking \(\epsilon=1/6\), and taking \(R\leq K^{1/20000}\)) that
\[A^{-1}\cdot|\mathscr{E}\cap[-A,A]|\ll_{j}(\log R)^{-18}+(\log R)^{-2+20/j} \cdot[(\log R)+o_{R;A\to\infty}(1)] \tag{4.6}\]
as \(A\to\infty\). Taking \(j=40\) and \(R\to\infty\) proves the first part of Theorem 1.1.
What remains is similar, so we merely sketch the main points. Fix \((\delta,k)\in\mathbb{R}_{>0}\times\mathbb{Z}_{\geq 1}\), and assume (1.7) for \(\xi=0\). Note that (1.7) remains true if we decrease \(\delta\) or increase \(k\). In any case, Theorem 2.14 for \(\xi=0\) now implies (assuming \(X,K\geq 2\) and \(K\leq X^{9/10-\delta}\))
\[X^{-3}\operatorname{Var}(X,K;1)\ll\|\nu^{\star}\|_{L^{2}(\mathbb{R}^{3})}^{2}+ \frac{O(\|\nu^{\star}\|_{k,\infty}^{2}B(\nu^{\star})^{k})}{\min(X^{\delta/11},K^{2/3}X^{-3\delta})}. \tag{4.7}\]
We now apply Theorem 3.9 and Lemma 4.1 as before. Let \(A,X\to\infty\) with \(A\in[X^{3}/2,X^{3}]\); let \((K,\epsilon,R,\eta)=(\lfloor A^{2\delta}\rfloor,\delta,X^{\delta/30k},(\log R )^{-10/j})\). Assuming \(2\delta\leq 1/6j\) (so \(A\geq K^{6j}\)), we get
\[A^{-1}\cdot|\mathscr{E}\cap[-A,A]|\ll_{j}(\log R)^{-18}+(\log R)^{-2+20/j}\cdot(\log R)+X^{-\delta/20} \tag{4.8}\]
as \(A\to\infty\). Taking \(j\to\infty\) proves the second part of Theorem 1.1.
Finally, fix \((\delta,k)\in\mathbb{R}_{>0}\times\mathbb{Z}_{\geq 1}\), and assume (1.7) for \(\xi=1\). Theorem 2.14 for \(\xi=1\) (when combined with Theorem 3.9 and Lemma 4.1) then implies Theorem 1.2.
## 5. Nonnegative cubes
Let \(A\geq 2\). For integers \(a\geq 0\), let \(r_{3}(a):=\#\{(x,y,z)\in\mathbb{Z}_{\geq 0}^{3}:x^{3}+y^{3}+z^{3}=a\}\). The Selberg sieve implies \(\sum_{p\leq A}r_{3}(p)\ll A/\log A\). Assuming something like (1.7) (ideally for arbitrarily small \(\delta>0\)), can one show that \(\sum_{p\leq A}r_{3}(p)\gg A/\log A\)? Something like (1.7) might let one handle certain "Type II" sums. The main difficulty might instead lie in "Type I\({}_{j}\)" estimates (roughly corresponding to counting solutions to \(dn_{1}n_{2}\cdots n_{j}=x^{3}+y^{3}+z^{3}\) for \(j\) large, where \(d\) is fixed). One may be able to handle "Type I\({}_{j}\)" sums for \(j\leq 2\) using the methods of [10], but it would be nice to treat larger \(j\), even conditionally.
Or, assuming precise asymptotic second moments for \(r_{3}(a)\) over \(\{a\leq A:a\equiv 0\bmod d\}\) for \(d\leq A^{\delta}\), can one show that \(\sum_{p\leq A}r_{3}(p)^{2}\ll A/\log A\)? Note the lack of exact multiplicative structure in \(d\) in the expected main term over \(\{a\leq A:a\equiv 0\bmod d\}\). (For \(d=1\), see [10, Conjecture 2].) This may or may not be a serious obstacle.
Finally, in the conditional sense above, can one show that a positive proportion of primes \(p\) have \(r_{3}(p)\neq 0\) (i.e. are sums of three nonnegative cubes)?
## Acknowledgements
I thank Valeriya Kovaleva, Sarah Peluse, and Peter Sarnak for inspiring me to extend my thesis work on \(x^{3}+y^{3}+z^{3}\) from integer values to prime values. I thank Manjul Bhargava, Tim Browning, Valeriya Kovaleva, Peter Sarnak, Katy Woo, and Liyang Yang for some nice conversations on closely related topics. Finally, I thank Simona Diaconu for sharing an early draft of her enlightening senior thesis, [12], several years ago.
|
2301.04848 | $\tau$-quantization and $\tau$-Cohen classes distributions of
Feichtinger operators | We investigate the $\tau$-quantizations and Cohen's class distributions of a
suitable class of trace-class operators, called Feichtinger's operators, and
show that it is a convenient substitute for the class of Schwartz operators.
Many well-known concepts and results for functions in time-frequency analysis
have an operator-analog in our setting, e.g. that Cohen's classes are
convolutions of Wigner functions with distributions or characterization of the
class of Schwartz operators as an intersection of weighted variants of the
class of Feichtinger operators. | Federico Bastianoni, Franz Luef | 2023-01-12T07:21:46Z | http://arxiv.org/abs/2301.04848v1 | # \(\tau\)-quantization and \(\tau\)-Cohen classes distributions of Feichtinger operators
###### Abstract.
We investigate the \(\tau\)-quantizations and Cohen's class distributions of a suitable class of trace-class operators, called Feichtinger's operators, and show that it is a convenient substitute for the class of Schwartz operators. Many well-known concepts and results for functions in time-frequency analysis have an operator-analog in our setting, e.g. that Cohen's classes are convolutions of Wigner functions with distributions or characterization of the class of Schwartz operators as an intersection of weighted variants of the class of Feichtinger operators.
Key words and phrases: Cohen's class, \(\tau\)-quantization, Feichtinger's algebra, Wigner distribution. 2010 Mathematics Subject Classification: 42B35; 46E35; 47G30; 47B10
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 A family of time-frequency representations
* 2.2 Basics of QHA and novel tools
* 2.3 \(\tau\)-quantization of functions
* 3 Feichtinger operators
* 3.1 \(\tau\)-quantization of operators
* 3.2 A convenient environment for QHA
* 3.3 \(\tau\)-Cohen's class of operators
* 4 A characterization of Schwartz operators
## 1. Introduction
There is a vast literature on the boundedness of pseudodifferential operators for certain classes of symbols in various quantization schemes, along the lines of Hormander classes, or alternatively using Sjostrand's class or Shubin's classes.
Let \(S\) be a continuous operator between the Feichtinger algebra \(\mathcal{S}_{0}\) and its continuous dual space \(\mathcal{S}_{0}^{\prime}\). We denote by \(K_{S}\) the kernel of \(S\), which exists by Feichtinger's kernel theorem and is a mild distribution on \(\mathbb{R}^{2d}\).
We define Feichtinger operators, \(\mathbb{S}_{0}\), to be the class of continuous linear operators \(S\colon\mathcal{S}_{0}^{\prime}(\mathbb{R}^{d})\to\mathcal{S}_{0}(\mathbb{R}^{d})\) that map norm bounded w-\(*\) convergent sequences in \(\mathcal{S}_{0}^{\prime}\) into norm convergent sequences in \(\mathcal{S}_{0}\). In [10] it was shown that these are precisely the linear continuous operators from \(\mathcal{S}_{0}^{\prime}\) to \(\mathcal{S}_{0}\) that have a kernel in Feichtinger's algebra, the so-called inner kernel theorem.
One of our main tools is that Feichtinger operators have a nice spectral decomposition. If \(S\) is in \(\mathbb{S}_{0}\), then there exist two (non-unique) sequences \(\{f_{n}\}_{n},\{g_{n}\}_{n}\subseteq\mathcal{S}_{0}(\mathbb{R}^{d})\)
such that
\[S=\sum_{n=1}^{\infty}f_{n}\otimes g_{n},\qquad\sum_{n=1}^{\infty}\left\|f_{n} \right\|_{\mathcal{S}_{0}}\left\|g_{n}\right\|_{\mathcal{S}_{0}}<\infty,\qquad K _{S}=\sum_{n=1}^{\infty}K_{f_{n}\otimes g_{n}}.\]
Hence, Feichtinger operators are trace class operators and we can compute their trace as follows \(\operatorname{tr}(S)=\int_{\mathbb{R}^{d}}K_{S}(x,x)\,dx\). In [7] operators having such a decomposition have been studied and called Feichtinger states in case \(\operatorname{tr}(S)=1\), but there the link between these operators and the work [10] was not established, which is one of our main observations.
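As a quick sanity check of the trace formula, consider a rank-one operator \(f\otimes g\) with \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\); its kernel is \(K_{f\otimes g}(x,y)=f(x)\overline{g(y)}\), so

\[\operatorname{tr}(f\otimes g)=\int_{\mathbb{R}^{d}}f(x)\overline{g(x)}\,dx=\langle f,g\rangle,\]

consistent with the decomposition above.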
Then the \(\tau\)-Wigner distribution of \(S\) is defined in the following way
\[W_{\tau}S(x,\omega)\coloneqq\int_{\mathbb{R}^{d}}e^{-2\pi it\omega}K_{S}(x+ \tau t,x-(1-\tau)t)\,dt. \tag{2}\]
Our key observation is the following identity:
\[\langle a,\!W_{\tau}S\rangle=\operatorname{tr}(\operatorname{Op}_{\tau}(a)S^ {*})=:\langle\operatorname{Op}_{\tau}(a),\!S\rangle,\]
for \(S\) in \(\mathbb{S}_{0}\) or \(\mathcal{J}^{1}\), and \(W_{\tau}S\) is the \(\tau\)-Wigner distribution of \(S\). Consequently, we interpret \(W_{\tau}S\) as the \(\tau\)-quantization of an operator in \(\mathbb{S}_{0}\) or \(\mathcal{J}^{1}\).
Note, that if \(S\) is the rank-one operator \(f\otimes g\) this becomes the aforementioned relation between the \(\tau\)-Wigner distribution and the Shubin \(\tau\)-transform.
Based on this framework we deduce operator analogs of well-known results on \(\tau\)-Wigner distributions and \(\tau\)-Shubin quantization, which indicates that this is a very convenient setting for this type of investigation. In addition, we extend the Cohen class of an operator, introduced in [17], to the \(\tau\)-setting and show that it can be written as the convolution of the Wigner distribution of an operator with a distribution as in the function setting.
We close our discussion with the introduction of weighted versions of \(\mathbb{S}_{0}\) and prove that the intersection of all these is the class of Schwartz operators in [14]. As in the case of functions, we hope that this global description of the Schwartz operators will also turn out to be useful in subsequent studies and it also hints at operator analogs of Gelfand-Shilov classes or other classes of test functions and the corresponding class of ultradistributions.
## 2. Preliminaries
In this paper, the parameter \(\tau\) always belongs to \([0,1]\), even when not specified.
### A family of time-frequency representations
For \(x,\omega\in\mathbb{R}^{d}\) we define the translation and modulation operator by
\[T_{x}f(t)\coloneqq f(t-x),\qquad M_{\omega}f(t)\coloneqq e^{2\pi i\omega t}f( t),\qquad\forall t\in\mathbb{R}^{d},\]
respectively. Their composition is denoted by \(\pi(x,\omega)\coloneqq M_{\omega}T_{x}\).
Given \(\tau\in[0,1]\), the \(\tau\)-time-frequency shift (\(\tau\)-TFS) at \((x,\omega)\in\mathbb{R}^{2d}\) is defined to be
\[\pi^{\tau}(x,\omega)\coloneqq e^{-2\pi i\tau x\omega}M_{\omega}T_{x}=M_{(1-\tau )\omega}T_{x}M_{\tau\omega}. \tag{3}\]
For \(\tau=0\) we recover the usual time-frequency shifts \(\pi^{0}=\pi\). The following relations are consequences of elementary computations, which are left to the reader:
\[\pi^{\tau}(x,\omega)\pi^{\tau}(x^{\prime},\omega^{\prime}) =e^{-2\pi i[(1-\tau)x\omega^{\prime}-\tau x^{\prime}\omega]}\pi^{ \tau}(x+x^{\prime},\omega+\omega^{\prime}),\] \[\pi^{\tau}(x,\omega)\pi^{\tau}(x^{\prime},\omega^{\prime}) =e^{-2\pi i[x\omega^{\prime}-x^{\prime}\omega]}\pi^{\tau}(x^{ \prime},\omega^{\prime})\pi^{\tau}(x,\omega),\] \[\pi^{\tau}(x,\omega)^{*} =\pi^{1-\tau}(-x,-\omega)=e^{-2\pi i(1-\tau)x\omega}\pi(-x,- \omega).\]
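As a sample, here is a minimal verification of the first relation, using (3) and the commutation rule \(T_{x}M_{\omega^{\prime}}=e^{-2\pi ix\omega^{\prime}}M_{\omega^{\prime}}T_{x}\); the other two follow in the same way:

\[\pi^{\tau}(x,\omega)\pi^{\tau}(x^{\prime},\omega^{\prime})=e^{-2\pi i\tau(x\omega+x^{\prime}\omega^{\prime})}e^{-2\pi ix\omega^{\prime}}M_{\omega+\omega^{\prime}}T_{x+x^{\prime}}=e^{-2\pi i[(1-\tau)x\omega^{\prime}-\tau x^{\prime}\omega]}\pi^{\tau}(x+x^{\prime},\omega+\omega^{\prime}),\]

where the last equality collects the phases after writing \(M_{\omega+\omega^{\prime}}T_{x+x^{\prime}}=e^{2\pi i\tau(x+x^{\prime})(\omega+\omega^{\prime})}\pi^{\tau}(x+x^{\prime},\omega+\omega^{\prime})\).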
In the present paper the symbol \(\langle\cdot,\cdot\rangle\) either denotes the inner product in \(L^{2}(\mathbb{R}^{d})\) or a duality pairing between a Banach space \(X\) and its dual space \(X^{\prime}\), which is compatible with the latter, i.e. \(\langle\cdot,\cdot\rangle\) is assumed to be linear in the first argument and conjugate-linear in the second one. In particular, the dual pairs considered in this work are \((L^{2},L^{2})\), \((\mathcal{S}^{\prime}_{0},\mathcal{S}_{0})\), \((\mathbb{S}^{\prime}_{0},\mathbb{S}_{0})\), respectively.
Above, \(\mathcal{S}_{0}\) is the Feichtinger algebra (24), for the definitions of \(\mathbb{S}_{0}\) and \(\mathbb{S}^{\prime}_{0}\) see the equations (25),(26) and (28). We introduce for \(f,g\in L^{2}(\mathbb{R}^{d})\), or for any suitable dual pair, the \(\tau\)-short-time Fourier transform (\(\tau\)-STFT) of \(f\) w.r.t \(g\):
\[V_{g}^{\tau}f(x,\omega)\coloneqq\langle f,\pi^{\tau}(x,\omega)g\rangle,\qquad \forall x,\omega\in\mathbb{R}^{d}. \tag{4}\]
As can be easily verified, the mapping
\[\pi^{\tau}\colon\mathbb{R}^{2d}\to\mathcal{U}(L^{2}(\mathbb{R}^{d})),\]
where \(\mathcal{U}(L^{2}(\mathbb{R}^{d}))\) denotes the unitary operators on \(L^{2}(\mathbb{R}^{d})\), is a projective representation of \(\mathbb{R}^{2d}\) for any \(\tau\). Consequently, \(V^{\tau}\) is the wavelet transform associated to \(\pi^{\tau}\), thus \(V_{g}^{\tau}f\) is a continuous function.
**Remark 2.1**.: _For \(\tau=0\) we obtain the usual STFT \(V_{g}^{0}f=V_{g}f\) and we have_
\[V_{g}^{\tau}f(x,\omega)=e^{2\pi i\tau x\omega}V_{g}f(x,\omega). \tag{5}\]
_By the preceding identity, we have that \(V_{g}^{\frac{1}{2}}f\) is the cross-ambiguity function of \(f\) and \(g\):_
\[V_{g}^{\frac{1}{2}}f(x,\omega)=A(f,g)(x,\omega). \tag{6}\]
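The identities in Remark 2.1 are one-line checks from (3) and (4); for instance, (5) follows from the conjugate-linearity of \(\langle\cdot,\cdot\rangle\) in its second argument:

\[V_{g}^{\tau}f(x,\omega)=\langle f,e^{-2\pi i\tau x\omega}M_{\omega}T_{x}g\rangle=e^{2\pi i\tau x\omega}\langle f,\pi(x,\omega)g\rangle=e^{2\pi i\tau x\omega}V_{g}f(x,\omega).\]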
We recall another frequently used time-frequency representation, the so-called cross-\(\tau\)-Wigner distribution of \(f\) and \(g\) in \(L^{2}(\mathbb{R}^{d})\) defined by
\[W_{\tau}(f,g)(x,\omega)\coloneqq\int_{\mathbb{R}^{d}}e^{-2\pi it\omega}f(x+ \tau t)\overline{g(x-(1-\tau)t)}\,dt. \tag{7}\]
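For orientation, the extreme cases of (7) can be computed directly: \(\tau=1/2\) gives the classical cross-Wigner distribution, while for \(\tau=0\) the substitution \(u=x-t\) yields the Rihaczek-type distribution

\[W_{0}(f,g)(x,\omega)=f(x)\int_{\mathbb{R}^{d}}e^{-2\pi it\omega}\overline{g(x-t)}\,dt=e^{-2\pi ix\omega}f(x)\overline{\hat{g}(\omega)}.\]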
We aim to extend the definition of \(W_{\tau}\) from functions to operators, see (15).
### Basics of QHA and novel tools
In this subsection we introduce the basic definitions of quantum harmonic analysis (QHA) following the seminal work of Werner [23].
For \(z\in\mathbb{R}^{2d}\) and \(A\in B(L^{2}(\mathbb{R}^{d}))\) the translation of the operator \(A\) by \(z\) is
\[\alpha_{z}(A)\coloneqq\pi(z)A\pi(z)^{*}, \tag{8}\]
which satisfies \(\alpha_{z}\alpha_{z^{\prime}}=\alpha_{z+z^{\prime}}\). By the parity operator, we mean
\[Pf(t)\coloneqq\check{f}(t)\coloneqq f(-t), \tag{9}\]
for any \(f\in L^{2}(\mathbb{R}^{d})\), which induces an involution of \(A\in B(L^{2}(\mathbb{R}^{d}))\):
\[\check{A}\coloneqq PAP. \tag{10}\]
We denote by \(\mathcal{J}^{1}\) the space of all trace class operators on \(L^{2}(\mathbb{R}^{d})\). Given \(a\in L^{1}(\mathbb{R}^{2d})\) and \(S\in\mathcal{J}^{1}\), the convolution between \(a\) and \(S\) is the operator
\[a\star S\coloneqq S\star a\coloneqq\int_{\mathbb{R}^{2d}}a(z)\alpha_{z}(S)\,dz, \tag{11}\]
where the integral may be interpreted in the weak sense. For operators \(S,T\in\mathcal{J}^{1}\), their convolution is the function defined for every \(z\in\mathbb{R}^{2d}\) as
\[S\star T(z)\coloneqq\operatorname{tr}\left(S\alpha_{z}(\check{T})\right). \tag{12}\]
In this paper, we reserve the symbol \(\otimes\) for rank-one operators. Namely, given \(f,g\in L^{2}(\mathbb{R}^{d})\):
\[(f\otimes g)\psi\coloneqq\langle\psi,g\rangle f,\qquad\forall\psi\in L^{2}( \mathbb{R}^{d}). \tag{13}\]
The kernel of an operator \(S\) will always be denoted by \(K_{S}\). Evidently, the kernel of the operator \(f\otimes g\) is the tensor product _of functions_\(f(x)\overline{g(y)}\):
\[(f\otimes g)\psi(t)=\langle\psi,g\rangle f(t)=\int_{\mathbb{R}^{d}}f(t) \overline{g(x)}\psi(x)\,dx.\]
In the sequel we denote the tensor product of two functions by \(f(x)g(y)\); accordingly, we adopt the notation
\[K_{f\otimes\overline{g}}(x,y)=f(x)\overline{g}(y). \tag{14}\]
We now interpret (7) as the cross-\(\tau\)-Wigner distribution of the rank-one operator \(f\otimes g\).
Hence, it is natural to define the \(\tau\)_-Wigner distribution_ of an _operator_\(S\) with kernel \(K_{S}\) in the following way:
\[W_{\tau}S(x,\omega)\coloneqq\int_{\mathbb{R}^{d}}e^{-2\pi it\omega}K_{S}(x+ \tau t,x-(1-\tau)t)\,dt. \tag{15}\]
For \(S\in\mathcal{J}^{1}\) and \(\tau\in[0,1]\), we define the Fourier-\(\tau\)-Wigner transform of \(S\) to be:
\[\mathcal{F}_{W_{\tau}}S(z)\coloneqq\operatorname{tr}\left(\pi^{\tau}(z)^{*}S \right),\qquad\forall z\in\mathbb{R}^{2d}. \tag{16}\]
For \(\tau=1/2\) we recover the usual Fourier-Wigner transform [23].
The \(\tau\)-spreading representation of \(S\in B(L^{2})\) is the decomposition
\[S=\int_{\mathbb{R}^{2d}}h(z)\pi^{\tau}(z)\,dz, \tag{17}\]
where the integral is understood in the weak sense. The function \(h\) is called the \(\tau\)-spreading function of \(S\).
In the following, we shall consider the \(\tau\)-spreading representation as a quantization scheme that assigns to a function an operator. Namely, \(h\in L^{1}(\mathbb{R}^{2d})\) gets associated to the operator
\[\operatorname{SR}^{\tau}(h)\coloneqq\int_{\mathbb{R}^{2d}}h(z)\pi^{\tau}(z)\,dz. \tag{18}\]
Let \(\mathcal{F}_{\sigma}\) denote the symplectic Fourier transform. In the following lemma we collect a number of important relations between these notions. The proofs are elementary computations and based on the spectral decomposition of the trace class operators \(S\) and \(T\) ([21]), which we leave to the interested reader.
**Lemma 2.2**.: _Let \(f,g,\in L^{2}(\mathbb{R}^{d})\), \(S,T\in\mathcal{J}^{1}\), \(a\in L^{1}(\mathbb{R}^{2d})\) and \(\tau\in[0,1]\). Then:_
1. \(\mathcal{F}_{\sigma}(W_{\tau}(f\otimes g))=V_{g}^{\tau}f\)_;_
2. \(\mathcal{F}_{W_{\tau}}(f\otimes g)=V_{g}^{\tau}f\)_;_
3. \(W_{\tau}S=\mathcal{F}_{\sigma}\mathcal{F}_{W_{\tau}}S\)_;_
4. \(\mathcal{F}_{W_{\tau}}S(x,\omega)=e^{-2\pi i(1/2-\tau)x\omega}\mathcal{F}_{W_ {1/2}}S(x,\omega)\)_;_
5. \(\mathcal{F}_{\sigma}(S\star T)=\mathcal{F}_{W_{\tau}}S\cdot\mathcal{F}_{W_{1- \tau}}T=\mathcal{F}_{W_{1-\tau}}S\cdot\mathcal{F}_{W_{\tau}}T\)_;_
6. \(\mathcal{F}_{W_{\tau}}(a\star S)=\mathcal{F}_{\sigma}a\cdot\mathcal{F}_{W_{ \tau}}S\)_;_
7. \(\mathcal{F}_{W_{\tau}}S\) _is the_ \(\tau\)_-spreading function of_ \(S\)_, i.e._ \(S=\int_{\mathbb{R}^{2d}}\mathcal{F}_{W_{\tau}}S(z)\pi^{\tau}(z)\,dz\)_._
We notice that if we consider the rank-one operator \(S=f\otimes g\), then the assertions \((iii)\) and \((ii)\) of the previous lemma imply
\[W_{\tau}(f,g)=W_{\tau}(f\otimes g)=\mathcal{F}_{\sigma}V_{g}^{\tau}f. \tag{19}\]
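For instance, item \((ii)\) can be verified directly on rank-one operators by expanding the trace in an orthonormal basis \(\{e_{k}\}_{k}\) of \(L^{2}(\mathbb{R}^{d})\):

\[\mathcal{F}_{W_{\tau}}(f\otimes g)(z)=\sum_{k=1}^{\infty}\langle\pi^{\tau}(z)^{*}(f\otimes g)e_{k},e_{k}\rangle=\sum_{k=1}^{\infty}\langle e_{k},g\rangle\langle\pi^{\tau}(z)^{*}f,e_{k}\rangle=\langle f,\pi^{\tau}(z)g\rangle=V_{g}^{\tau}f(z).\]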
### \(\tau\)-quantization of functions
The \(\tau\)-quantization of a symbol \(a\in\mathcal{S}^{\prime}(\mathbb{R}^{2d})\), the space of tempered distributions, is formally given by
\[\operatorname{Op}_{\tau}(a)f(t)\coloneqq\int_{\mathbb{R}^{2d}}e^{2\pi i(t-y) \xi}a((1-\tau)t+\tau y,\xi)f(y)\,dyd\xi, \tag{20}\]
where \(f\in\mathcal{S}(\mathbb{R}^{d})\). \(\operatorname{Op}_{\tau}(a)\) may be described rigorously in the weak sense:
\[\langle\operatorname{Op}_{\tau}(a)f,g\rangle=\langle a,W_{\tau}(g,f)\rangle,\qquad\forall f,g\in\mathcal{S}(\mathbb{R}^{d}).\]
Given an operator \(S\), we denote by \(\operatorname{a}_{\tau}^{S}\) its \(\tau\)-symbol, i.e. the tempered distribution such that
\[\operatorname{Op}_{\tau}\left(\operatorname{a}_{\tau}^{S}\right)=S.\]
**Remark 2.3**.: _Under suitable assumptions, for example \(a\in L^{1}(\mathbb{R}^{2d})\), straightforward calculations give_
\[\operatorname{Op}_{\tau}(a)=\int_{\mathbb{R}^{2d}}\mathcal{F}_{\sigma}a(z)\pi^{ \tau}(z)\,dz,\]
_and since also \(\mathcal{F}_{W_{\tau}}\operatorname{Op}_{\tau}(a)\) is the \(\tau\)-spreading function of \(\operatorname{Op}_{\tau}(a)\), we have_
\[a=\mathcal{F}_{\sigma}\mathcal{F}_{W_{\tau}}\operatorname{Op}_{\tau}(a). \tag{21}\]
_Hence, for \(S\in\mathcal{J}^{1}\)_
\[\operatorname{a}_{\tau}^{S}=\mathcal{F}_{\sigma}\mathcal{F}_{W_{\tau}}S=W_{ \tau}S. \tag{22}\]
Given \(a\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\) and \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\), we recall the definition of cross-\(\tau\)-Cohen's class representation of \(f\) and \(g\), with kernel \(a\):
\[Q_{a}^{\tau}(f,g)\coloneqq a\ast W_{\tau}(f,g). \tag{23}\]
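Two standard examples may help fix ideas; both use only (23), the identity \(W_{1/2}(\check{g},\check{g})(w)=W_{1/2}(g,g)(-w)\), the covariance \(W_{1/2}(\pi(z)g,\pi(z)g)(z^{\prime})=W_{1/2}(g,g)(z^{\prime}-z)\), and Moyal's identity. Taking \(a=\delta\) gives \(Q_{\delta}^{\tau}(f,g)=W_{\tau}(f,g)\); and for \(\tau=1/2\), the choice \(a=W_{1/2}(\check{g},\check{g})\) with \(\check{g}(t)=g(-t)\) produces the spectrogram:

\[Q_{W_{1/2}(\check{g},\check{g})}^{1/2}(f,f)(z)=\int_{\mathbb{R}^{2d}}W_{1/2}(f,f)(z^{\prime})\,W_{1/2}(\pi(z)g,\pi(z)g)(z^{\prime})\,dz^{\prime}=|\langle f,\pi(z)g\rangle|^{2}=|V_{g}f(z)|^{2}.\]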
## 3. Feichtinger operators
In this section we summarize some important results concerning a class of operators studied in [10]. For such operators, introduced below, we adopt the name "Feichtinger operators" for reasons which will become evident later.
We recall that the Feichtinger algebra over \(\mathbb{R}^{d}\)[9] is the Banach space
\[\mathcal{S}_{0}(\mathbb{R}^{d})\coloneqq\{f\in L^{2}(\mathbb{R}^{d})\,|\,V_{g }f\in L^{1}(\mathbb{R}^{2d})\}, \tag{24}\]
for some \(g\in L^{2}(\mathbb{R}^{d})\smallsetminus\{0\}\), endowed with the norm
\[\left\|f\right\|_{\mathcal{S}_{0}}\coloneqq\left\|V_{g}f\right\|_{L^{1}}=\int_ {\mathbb{R}^{2d}}\left|V_{g}f(x,\omega)\right|\,dxd\omega.\]
We refer the reader to [13] for a detailed survey on \(\mathcal{S}_{0}(\mathbb{R}^{d})\). In this work, \(\mathcal{S}_{0}^{\prime}(\mathbb{R}^{d})\) denotes the conjugate-dual of \(\mathcal{S}_{0}(\mathbb{R}^{d})\).
**Definition 3.1**.: _The set of Feichtinger operators is defined to be_
\[\begin{split}\mathbb{S}_{0}\coloneqq&\{S\colon \mathcal{S}_{0}^{\prime}(\mathbb{R}^{d})\to\mathcal{S}_{0}(\mathbb{R}^{d})\,|\,S \,\text{is linear, continuous and}\\ &\text{maps norm bounded w-$\ast$ convergent sequences in $\mathcal{S}_{0}^{\prime}$}\\ &\text{into norm convergent sequences in $\mathcal{S}_{0}$}\}.\end{split} \tag{25}\]
We adopt the following notation:
\[\mathbb{S}_{0}^{\prime}\coloneqq B(\mathcal{S}_{0}(\mathbb{R}^{d}),\mathcal{S}_ {0}^{\prime}(\mathbb{R}^{d})) \tag{26}\]
and state the so-called Outer Kernel Theorem [10, Theorem 1.1]:
**Theorem 3.2**.: _The Banach space \(\mathbb{S}_{0}^{\prime}\) is isomorphic to \(\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\) via the map \(T\mapsto K_{T}\), where the relation between \(T\) and its kernel \(K_{T}\) is given by_
\[\langle Tf,g\rangle=\langle K_{T},K_{g\otimes f}\rangle,\qquad\forall\,f,g\in\mathcal{S}_{0}(\mathbb{R}^{d}).\]
The following statement goes under the name of Inner Kernel Theorem. We present it in our setting. To this end, we introduce the following notation: given \(\sigma,\nu\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{d})\), we denote by \(\nu\widetilde{\otimes}\overline{\sigma}\) the unique element of \(\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\) such that
\[\langle\nu\widetilde{\otimes}\overline{\sigma},\!K_{\psi\otimes\overline{ \varphi}}\rangle=\langle\nu,\!\psi\rangle\overline{\langle\sigma,\!\overline{ \varphi}\rangle},\qquad\forall\,\psi,\varphi\in\mathcal{S}_{0}(\mathbb{R}^{d}).\]
We refer the reader to [10, Theorem 1.3], Lemma 3.1 and Corollary 3.10, too.
**Theorem 3.3**.: _The space of Feichtinger operators \(\mathbb{S}_{0}\) is a Banach space if endowed with the norm of \(B(\mathcal{S}_{0}^{\prime},\mathcal{S}_{0})\) and it is naturally isomorphic as Banach space to \(\mathcal{S}_{0}(\mathbb{R}^{2d})\) through the map \(T\mapsto K_{T}\), where the relation between \(T\) and its kernel \(K_{T}\) is given by_
\[\langle\nu,\!T\sigma\rangle=\langle\nu\widetilde{\otimes}\overline{\sigma},\!K_{T}\rangle,\qquad\forall\,\sigma,\nu\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{d}).\]
_Moreover, \(\mathbb{S}_{0}\) is Banach algebra under composition. If \(S,T\in\mathbb{S}_{0}\), then_
\[K_{S\circ T}(y,u)=\int_{\mathbb{R}^{d}}K_{T}(y,t)K_{S}(t,u)\,dt. \tag{27}\]
By Theorems 3.2 and 3.3 above, \(\mathbb{S}_{0}^{\prime}\) is the (conjugate) topological dual of \(\mathbb{S}_{0}\) and the duality is given by
\[\langle T,\!S\rangle=\langle K_{T},\!K_{S}\rangle. \tag{28}\]
**Lemma 3.4**.: _Suppose \(S\in\mathbb{S}_{0}\). Then there exist two non-unique sequences \(\{f_{n}\}_{n},\{g_{n}\}_{n}\subseteq\mathcal{S}_{0}(\mathbb{R}^{d})\) such that_
\[S=\sum_{n=1}^{\infty}f_{n}\otimes g_{n},\qquad\sum_{n=1}^{\infty}\left\|f_{n }\right\|_{\mathcal{S}_{0}}\left\|g_{n}\right\|_{\mathcal{S}_{0}}<+\infty, \qquad K_{S}=\sum_{n=1}^{\infty}K_{f_{n}\otimes g_{n}}.\]
_Moreover,_
\[\mathbb{S}_{0}\hookrightarrow\mathcal{J}^{1}\]
_with_
\[\operatorname{tr}(S)=\int_{\mathbb{R}^{d}}K_{S}(x,x)\,dx.\]
Proof.: We just have to prove the continuous inclusion of Feichtinger operators into \(\mathcal{J}^{1}\), all the remaining statements can be found in [10], see in particular Corollary 3.15 and Remark 9. The claim follows from an elementary computation:
\[\left\|S\right\|_{\mathcal{J}^{1}} =\left|\operatorname{tr}(A)\right|\leq\int_{\mathbb{R}^{d}}\sum_ {n=1}^{\infty}\left|f_{n}(x)g_{n}(x)\right|\,dx=\sum_{n=1}^{\infty}\int_{ \mathbb{R}^{d}}\left|f_{n}(x)g_{n}(x)\right|\,dx\] \[\leq\sum_{n=1}^{\infty}\left\|f_{n}\right\|_{L^{2}}\left\|g_{n} \right\|_{L^{2}}\lesssim\sum_{n=1}^{\infty}\left\|f_{n}\right\|_{\mathcal{S} _{0}}\left\|g_{n}\right\|_{\mathcal{S}_{0}}<\infty.\]
Since \(\mathcal{S}_{0}(\mathbb{R}^{2d})=\mathcal{S}_{0}(\mathbb{R}^{d})\hat{\otimes }\mathcal{S}_{0}(\mathbb{R}^{d})\), see e.g. [10, Lemma 2.1], we get
\[\left\|S\right\|_{\mathcal{J}^{1}}\lesssim\left\|K_{S}\right\|_{\mathcal{S}_{0 }}\asymp\left\|S\right\|_{\mathbb{S}_{0}},\]
which gives the desired assertion.
The preceding result and the observations in [10, p. 4] yield
\[\mathbb{S}_{0}\hookrightarrow\mathcal{J}^{1}\hookrightarrow\mathcal{J}^{2} \hookrightarrow B(L^{2}(\mathbb{R}^{d}))\hookrightarrow\mathbb{S}_{0}^{\prime}. \tag{29}\]
The fact that all Feichtinger operators are trace class implies that Lemma 2.2 remains valid for operators in \(\mathbb{S}_{0}\).
### \(\tau\)-quantization of operators
The following remark is the key insight for the subsequent results concerning \(\operatorname{Op}_{\tau}\) and \(W_{\tau}\).
**Remark 3.5**.: _Let us consider \(f,g\in L^{2}(\mathbb{R}^{d})\) such that \(f\neq 0\), \(a\in L^{2}(\mathbb{R}^{2d})\), and let \(\{f_{j}\}_{j}\) be an orthonormal basis for \(L^{2}(\mathbb{R}^{d})\). Then we compute as follows:_
\[\langle\operatorname{Op}_{\tau}(a)f,g\rangle =\langle\operatorname{Op}_{\tau}(a)f,\sum_{j=1}^{\infty}\langle g,f_{j}\rangle f_{j}\rangle=\sum_{j=1}^{\infty}\langle\operatorname{Op}_{\tau} (a)\left(\langle f_{j},g\rangle f\right),f_{j}\rangle\] \[=\sum_{j=1}^{\infty}\langle\operatorname{Op}_{\tau}(a)(f\otimes g )f_{j},f_{j}\rangle=\operatorname{tr}\left(\operatorname{Op}_{\tau}(a)(f \otimes g)\right).\]
_Taking into account the weak definition of \(\operatorname{Op}_{\tau}(a)\) and (15) we can write_
\[\langle\operatorname{Op}_{\tau}(a)f,g\rangle=\langle a,W_{\tau}((f\otimes g)^ {*})\rangle=\operatorname{tr}\left(\operatorname{Op}_{\tau}(a)(f\otimes g) \right)=\langle\operatorname{Op}_{\tau}(a),(f\otimes g)^{*}\rangle_{( \mathcal{J}^{1},\mathcal{J}^{\infty})}. \tag{30}\]
_Computations similar to the ones above, applied to \(S\in\mathcal{J}^{1}\) with spectral decomposition \(\sum_{k=1}^{\infty}\lambda_{k}f_{k}\otimes g_{k}\) (after extending \(\{f_{k}\}_{k}\) to an orthonormal basis of \(L^{2}(\mathbb{R}^{d})\)), yield_
\[\langle a,W_{\tau}S\rangle=\operatorname{tr}\left(\operatorname{Op}_{\tau}(a)S ^{*}\right)=\langle\operatorname{Op}_{\tau}(a),S\rangle_{(\mathcal{J}^{1}, \mathcal{J}^{\infty})}. \tag{31}\]
**Theorem 3.6**.: _For every \(\tau\in[0,1]\) the following mappings are linear and continuous:_
\[\operatorname{Op}_{\tau}\colon L^{2}(\mathbb{R}^{2d})\to\mathcal{J}^{\infty}, \qquad W_{\tau}\colon\mathcal{J}^{1}\to L^{2}(\mathbb{R}^{2d}).\]
_Moreover, \(\operatorname{Op}_{\tau}\) is the Banach space adjoint of \(W_{\tau}\): \(\operatorname{Op}_{\tau}=W_{\tau}^{*}\)._
Proof.: The boundedness of \(\operatorname{Op}_{\tau}\) is evident; the continuity of \(W_{\tau}\) follows by reasoning similar to the proof of the subsequent Theorem 3.7. The last claim is just (31).
**Theorem 3.7**.: _For every \(\tau\in[0,1]\) the following mappings are linear and continuous:_
\[\operatorname{Op}_{\tau}\colon\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\to \mathbb{S}_{0}^{\prime},\qquad W_{\tau}\colon\mathbb{S}_{0}\to\mathcal{S}_{0 }(\mathbb{R}^{2d}).\]
_Moreover, \(\operatorname{Op}_{\tau}\) is the Banach space adjoint of \(W_{\tau}\): \(\operatorname{Op}_{\tau}=W_{\tau}^{*}\), i.e. for every \(a\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\) and \(S\in\mathbb{S}_{0}\)_
\[\langle a,\!W_{\tau}S\rangle=\langle\operatorname{Op}_{\tau}(a),\!S\rangle. \tag{32}\]
Proof.: The boundedness and linearity of \(\operatorname{Op}_{\tau}\) follow from the definitions. By using the formal representation of \(\operatorname{Op}_{\tau}(a)\) we can derive an expression for its kernel:
\[K_{\operatorname{Op}_{\tau}(a)}(t,x)=\int_{\mathbb{R}^{d}}e^{2\pi i(t-x)\omega}a( (1-\tau)t+\tau x,\omega)\,d\omega. \tag{33}\]
Let us consider first \(f,g\in\mathcal{S}_{0}\). Then a standard argument, see e.g. [5, Proposition 1.3.25], gives that
\[W_{\tau}(f\otimes g)=W_{\tau}(f,g)\in\mathcal{S}_{0}(\mathbb{R}^{2d})\qquad \text{with}\qquad\left\|W_{\tau}(f\otimes g)\right\|_{\mathcal{S}_{0}}\lesssim \left\|f\right\|_{\mathcal{S}_{0}}\left\|g\right\|_{\mathcal{S}_{0}}.\]
Since Lemma 2.2 holds for \(\mathbb{S}_{0}\), we write \(W_{\tau}=\mathcal{F}_{\sigma}\mathcal{F}_{W_{\tau}}\) and use the spectral decomposition for \(S\) of the form \(\sum_{n=1}^{\infty}f_{n}\otimes g_{n}\) as shown in Lemma 3.4. Now, we compute:
\[\mathcal{F}_{W_{\tau}}S(z) =\operatorname{tr}(\pi^{\tau}(z)^{*}S)=\operatorname{tr}(\sum_{n =1}^{\infty}\pi^{\tau}(z)^{*}(f_{n}\otimes g_{n}))\] \[=\sum_{n=1}^{\infty}\langle\pi^{\tau}(z)^{*}f_{n},g_{n}\rangle= \sum_{n=1}^{\infty}V_{g_{n}}^{\tau}f_{n}(z). \tag{34}\]
Taking a suitable window for the norm on \(\mathcal{S}_{0}(\mathbb{R}^{2d})\)[13, Theorem 5.3] we have
\[\left\|\mathcal{F}_{W_{\tau}}S\right\|_{\mathcal{S}_{0}}\leq\sum_{n=1}^{\infty }\left\|V_{g_{n}}^{\tau}f_{n}\right\|_{\mathcal{S}_{0}}=\sum_{n=1}^{\infty} \left\|f_{n}\right\|_{\mathcal{S}_{0}}\left\|g_{n}\right\|_{\mathcal{S}_{0}}<+\infty.\]
Consequently,
\[\left\|\mathcal{F}_{W_{\tau}}S\right\|_{\mathcal{S}_{0}} \leq\inf\{\sum_{n=1}^{\infty}\left\|f_{n}\right\|_{\mathcal{S}_{ 0}}\left\|g_{n}\right\|_{\mathcal{S}_{0}},S=\sum_{n=1}^{\infty}f_{n}\otimes g _{n}\}\] \[\leq\inf\{\sum_{n=1}^{\infty}\left\|f_{n}\right\|_{\mathcal{S}_{0 }}\left\|g_{n}\right\|_{\mathcal{S}_{0}},K_{S}=\sum_{n=1}^{\infty}K_{f_{n} \otimes g_{n}}\}\] \[=\left\|K_{S}\right\|_{\mathcal{S}_{0}}\asymp\left\|S\right\|_{ \mathcal{S}_{0}}.\]
We proved the boundedness of \(\mathcal{F}_{W_{\tau}}\colon\mathbb{S}_{0}\to\mathcal{S}_{0}(\mathbb{R}^{2d})\), the continuity of the symplectic Fourier transform \(\mathcal{F}_{\sigma}\colon\mathcal{S}_{0}(\mathbb{R}^{2d})\to\mathcal{S}_{0}( \mathbb{R}^{2d})\) is well-known, and thus the continuity of
\(W_{\tau}\colon\mathbb{S}_{0}\to\mathcal{S}_{0}(\mathbb{R}^{2d})\) follows. Concerning the last claim, we proceed as follows:
\[\langle\text{Op}_{\tau}(a), S\rangle =\langle K_{\text{Op}_{\tau}(a)}, K_{S}\rangle=\langle K_{\text{Op}_{\tau}(a)},\sum_{n=1}^{\infty}K_{f_{ \otimes}g_{n}}\rangle\] \[=\sum_{n=1}^{\infty}\langle K_{\text{Op}_{\tau}(a)}, K_{f_{\otimes}g_{n}}\rangle=\sum_{n=1}^{\infty}\langle\text{Op}_{\tau}(a)g_{n}, f_{n}\rangle\] \[=\sum_{n=1}^{\infty}\langle a, W_{\tau}(f_{n}\otimes g_{n})\rangle=\langle a,\sum_{n=1}^{\infty}W_{\tau}(f_{n} \otimes g_{n})\rangle\] \[=\langle a, W_{\tau}S\rangle,\]
which concludes the proof.
On account of Theorems 3.6 and 3.7, it seems reasonable to interpret \(W_{\tau}S\) as the \(\tau\)-quantization of an operator in \(\mathbb{S}_{0}\) or \(\mathcal{J}^{1}\).
**Corollary 3.8**.:
1. _For every_ \(\tau\in[0,1]\) _the mapping_ \(W_{\tau}\colon\mathbb{S}_{0}\to\mathcal{S}_{0}(\mathbb{R}^{2d})\) _is a topological isomorphism with inverse given by_ \(\text{Op}_{\tau}\colon\mathcal{S}_{0}(\mathbb{R}^{2d})\to\mathbb{S}_{0}\)_;_
2. _A linear and continuous operator_ \(S\colon\mathcal{S}_{0}(\mathbb{R}^{d})\to\mathcal{S}_{0}^{\prime}(\mathbb{R}^ {d})\) _belongs to_ \(\mathbb{S}_{0}\) _if and only if_ \(W_{\tau}S\in\mathcal{S}_{0}(\mathbb{R}^{2d})\) _for some (and hence any)_ \(\tau\in[0,1]\)_._
Proof.: \((i)\) We observed in (22) that \(W_{\tau}S\) is just the \(\tau\)-symbol \(\text{a}_{\tau}^{S}\) of a trace class operator \(S\), in particular this holds for \(S\in\mathbb{S}_{0}\). Therefore,
\[\text{Op}_{\tau}\circ W_{\tau}S=\text{Op}_{\tau}(\text{a}_{\tau}^{S})=S.\]
We now show that if we start with \(a\in\mathcal{S}_{0}(\mathbb{R}^{2d})\), then \(\text{Op}_{\tau}(a)\) belongs to \(\mathbb{S}_{0}\). From (33), we have that the kernel of \(\text{Op}_{\tau}(a)\) can be written as
\[K_{\text{Op}_{\tau}(a)}(t,x)=\int_{\mathbb{R}^{d}}e^{2\pi i(t-x)\omega}a((1- \tau)t+\tau x,\omega)\,d\omega=\Psi_{\tau}\mathcal{F}_{2}^{-1}a(t,x),\]
where \(\mathcal{F}_{2}^{-1}\) is the inverse of the partial Fourier transform with respect to the second variable; \(\Psi_{\tau}\) is the change of variables induced by the matrix
\[\begin{bmatrix}1-\tau&\tau\\ 1&-1\end{bmatrix},\qquad\Psi_{\tau}F(t,x)\coloneqq F((1-\tau)t+\tau x,t-x). \tag{35}\]
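A quick check: the matrix in (35) implements \((t,x)\mapsto((1-\tau)t+\tau x,\,t-x)\), and

\[\det\begin{bmatrix}1-\tau&\tau\\ 1&-1\end{bmatrix}=-(1-\tau)-\tau=-1,\]

so \(\Psi_{\tau}\) is an invertible, measure-preserving linear change of variables for every \(\tau\in[0,1]\); in particular it leaves \(\mathcal{S}_{0}(\mathbb{R}^{2d})\) invariant.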
From the assumption that \(a\) lies in the Feichtinger algebra \(\mathcal{S}_{0}(\mathbb{R}^{2d})\) we have \(\mathcal{F}_{2}^{-1}a\in\mathcal{S}_{0}(\mathbb{R}^{2d})\), thus \(\Psi_{\tau}\mathcal{F}_{2}^{-1}a\) is in \(\mathcal{S}_{0}(\mathbb{R}^{2d})\), i.e. \(\text{Op}_{\tau}(a)\) is an element of \(\mathbb{S}_{0}\). The fact that \(\text{Op}_{\tau}\) is continuous from \(\mathcal{S}_{0}(\mathbb{R}^{2d})\) into \(\mathbb{S}_{0}\) is evident from the applications of \(\mathcal{F}_{2}^{-1}\) and \(\Psi_{\tau}\). Hence we have shown that
\[W_{\tau}\circ\text{Op}_{\tau}(a)=\text{a}_{\tau}^{\text{Op}_{\tau}(a)}=a.\]
\((ii)\) The claim is a straightforward consequence of \((i)\).
**Corollary 3.9**.:
1. _For every_ \(\tau\in[0,1]\)__\(\mathcal{F}_{W_{\tau}}\colon\mathbb{S}_{0}\to\mathcal{S}_{0}(\mathbb{R}^{2d})\) _is a topological isomorphisms with inverse given by the_ \(\tau\)_-spreading representation_ (36) \[\operatorname{SR}^{\tau}\colon\mathcal{S}_{0}(\mathbb{R}^{2d})\to\mathbb{S}_{0} \,,a\mapsto\int_{\mathbb{R}^{2d}}a(z)\pi^{\tau}(z)\,dz;\]
2. _Let us define_ (37) \[\operatorname{SR}^{\tau}\colon\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\to \mathbb{S}_{0}^{\prime}\,a\mapsto\int_{\mathbb{R}^{2d}}a(z)\pi^{\tau}(z)\,dz,\] _where the integral has to be understood weakly as follows:_ \[\langle\operatorname{SR}^{\tau}(a)f,\!g\rangle\coloneqq\langle a,\!V_{f}^{ \tau}g\rangle,\qquad a\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d}),\,f,g\in \mathcal{S}_{0}(\mathbb{R}^{d}).\] _Then_ \(\operatorname{SR}^{\tau}\) _as in (_37_) is well-defined, linear, continuous, extends (_36_) and it is the Banach space adjoint of_ \(\mathcal{F}_{W_{\tau}}\) _in_ \((i)\)_:_ (38) \[\operatorname{SR}^{\tau}=\mathcal{F}_{W_{\tau}}^{*},\] _in the sense that for every_ \(a\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\) _and_ \(S\in\mathbb{S}_{0}\)__ \[\langle a,\!\mathcal{F}_{W_{\tau}}S\rangle=\langle\operatorname{SR}^{\tau}(a), \!S\rangle=\langle K_{\operatorname{SR}^{\tau}(a)},\!K_{S}\rangle;\]
3. _Every function_ \(F\in\mathcal{S}_{0}(\mathbb{R}^{2d})\) _admits an expansion of the following type:_ \[F=\sum_{n=1}^{\infty}V_{g_{n}}^{\tau}f_{n},\] _for some sequences_ \(\{f_{n}\}_{n},\{g_{n}\}_{n}\subseteq\mathcal{S}_{0}(\mathbb{R}^{d})\) _such that_ \(\sum_{n=1}^{\infty}\left\|f_{n}\right\|_{\mathcal{S}_{0}}\left\|g_{n}\right\|_ {\mathcal{S}_{0}}<\infty\)_._
Proof.: \((i)\) First we notice that if we start with \(a\in\mathcal{S}_{0}(\mathbb{R}^{2d})\), then \(\operatorname{SR}^{\tau}(a)\) is the Feichtinger operator with kernel
\[K_{\operatorname{SR}^{\tau}(a)}(y,u)=\int_{\mathbb{R}^{d}}a(y-u,\omega)e^{2 \pi iy\omega}\,d\omega=\mathcal{F}_{2}^{-1}[a(y-u,\cdot)](y).\]
Clearly \(\operatorname{SR}^{\tau}\) is continuous from \(\mathcal{S}_{0}(\mathbb{R}^{2d})\) into \(\mathbb{S}_{0}\).
Since we have \(W_{\tau}=\mathcal{F}_{\sigma}\mathcal{F}_{W_{\tau}}\) and \(\mathcal{F}_{\sigma}\) is an automorphism of \(\mathcal{S}_{0}(\mathbb{R}^{2d})\), we can write \(\mathcal{F}_{W_{\tau}}=\mathcal{F}_{\sigma}W_{\tau}\), which is an isomorphism due to Corollary 3.8. To prove that \(\operatorname{SR}^{\tau}\)
is the inverse of \(\mathcal{F}_{W_{\tau}}\) we use (34), take \(S=\sum_{n=1}^{\infty}f_{n}\otimes g_{n}\in\mathbb{S}_{0}\) and \(\psi,\varphi\in\mathcal{S}_{0}(\mathbb{R}^{d})\):
\[\langle(\text{SR}^{\tau}\circ\mathcal{F}_{W_{\tau}}S)\psi,\!\varphi\rangle =\int_{\mathbb{R}^{2d}}\mathcal{F}_{W_{\tau}}S(z)\langle\pi^{ \tau}(z)\psi,\!\varphi\rangle\,dz\] \[=\sum_{n=1}^{\infty}\int_{\mathbb{R}^{2d}}V_{g_{n}}^{\tau}f_{n}(z )\overline{V_{\psi}^{\tau}\varphi(z)}\,dz\] \[=\sum_{n=1}^{\infty}\langle f_{n},\!\varphi\rangle\overline{ \langle g_{n},\!\psi\rangle}\] \[=\langle\sum_{n=1}^{\infty}\langle\psi,\!g_{n}\rangle f_{n},\! \varphi\rangle\] \[=\langle\sum_{n=1}^{\infty}(f_{n}\otimes g_{n})\psi,\!\varphi\rangle\] \[=\langle S\psi,\!\varphi\rangle,\]
in the third equality we used Moyal's identity. For the composition \(\mathcal{F}_{W_{\tau}}\circ\text{SR}^{\tau}\), notice that this is the identity on \(\mathcal{S}_{0}(\mathbb{R}^{2d})\) due to Lemma 2.2\((vii)\).
\((ii)\) Well-posedness, linearity and continuity of \(\text{SR}^{\tau}\) from \(\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\) into \(\mathbb{S}_{0}^{\prime}\) are standard. Trivially (37) extends (36). To see that \(\text{SR}^{\tau}\) is the Banach space adjoint of \(\mathcal{F}_{W_{\tau}}\) from \(\mathbb{S}_{0}\) into \(\mathcal{S}_{0}(\mathbb{R}^{2d})\), take \(a\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\) and \(S\in\mathbb{S}_{0}\). In the following calculations we use: the prior stated (34), the representation for Feichtinger operators and their kernel given in Lemma 3.4, the Outer and Inner Kernel Theorems:
\[\langle a,\!\mathcal{F}_{W_{\tau}}S\rangle =\sum_{n=1}^{\infty}\langle a,\!V_{g_{n}}^{\tau}f_{n}\rangle= \sum_{n=1}^{\infty}\langle\text{SR}^{\tau}(a)g_{n},\!f_{n}\rangle\] \[=\sum_{n=1}^{\infty}\langle K_{\text{SR}^{\tau}(a)},\!K_{f_{n} \otimes g_{n}}\rangle=\langle K_{\text{SR}^{\tau}(a)},\!K_{S}\rangle\] \[=\langle\text{SR}^{\tau}(a),\!S\rangle.\]
\((iii)\) The last claim is a direct consequence of the computations in (34) and the surjectivity of \(\mathcal{F}_{W_{\tau}}\).
### A convenient environment for QHA
In Section 2 we introduced convolutions between a function and an operator and two operators. Keyl, Kiukas and Werner [14] showed that such convolutions make sense for wider classes of (generalized) functions and operators. We summarize here the main results; in what follows \(\mathfrak{S}\) denotes the set of pseudo-differential operators with Weyl symbol in the Schwartz class \(\mathcal{S}(\mathbb{R}^{2d})\) and \(\mathfrak{S}^{\prime}\) those pseudo-differential operators with Weyl symbol in \(\mathcal{S}^{\prime}(\mathbb{R}^{2d})\). On account of the Schwartz Kernel Theorem we can identify \(\mathfrak{S}^{\prime}\) with the continuous and linear operators from \(\mathcal{S}(\mathbb{R}^{d})\) into \(\mathcal{S}^{\prime}(\mathbb{R}^{d})\).
**Proposition 3.10**.:
1. _Suppose_ \(S,T\in\mathfrak{S}\)_,_ \(A\in\mathfrak{S}^{\prime}\)_,_ \(b\in\mathcal{S}(\mathbb{R}^{2d})\) _and_ \(a\in\mathcal{S}^{\prime}(\mathbb{R}^{2d})\)_. Then the following convolutions are well-defined and they extend the ones defined in Subsection_ 2.2_:_ \[S\star T\in\mathcal{S}(\mathbb{R}^{2d}),\quad S\star A\in\mathcal{S}^{\prime}( \mathbb{R}^{2d}),\quad b\star S\in\mathfrak{S},\quad a\star S,b\star A\in \mathfrak{S}^{\prime};\]
2. _The Fourier-Wigner transform can be extended to a topological isomorphism_ \(\mathcal{F}_{W_{1/2}}\colon\mathfrak{S}^{\prime}\to\mathcal{S}^{\prime}( \mathbb{R}^{2d})\)_;_
3. _We have_ \(\mathcal{F}_{\sigma}(S\star T)=\mathcal{F}_{W_{1/2}}S\cdot\mathcal{F}_{W_{1/2 }}T\) _and_ \(\mathcal{F}_{W_{1/2}}(b\star S)=\mathcal{F}_{\sigma}b\cdot\mathcal{F}_{W_{1/2 }}S\) _whenever_ \(S,T\) _and_ \(b\) _are such that the convolutions are defined as in part_ \((i)\)_;_
4. _The Weyl symbol of_ \(A\in\mathfrak{S}^{\prime}\) _is given by_ \(\mathcal{F}_{\sigma}\mathcal{F}_{W_{1/2}}A\)_._
The authors of [14] proved that the class of so-called Schwartz operators \(\mathfrak{S}\) has the structure of a Frechet space. We propose the Banach space of Feichtinger operators \(\mathbb{S}_{0}\) as an alternative to \(\mathfrak{S}\), providing a much bigger class of "nice" operators. We start with some preliminaries on \(\mathcal{S}_{0}\) and \(\mathbb{S}_{0}\).
**Lemma 3.11**.: _Given \(f\in\mathcal{S}^{\prime}_{0}(\mathbb{R}^{d})\), there exists a sequence \(\{f_{n}\}_{n}\subseteq\mathcal{S}_{0}(\mathbb{R}^{d})\) which w-\(*\) converges to \(f\) and is bounded by \(\left\|f\right\|_{\mathcal{S}^{\prime}_{0}}\), i.e._

\[\lim_{n\to+\infty}\langle f_{n},g\rangle=\langle f,\!g\rangle\qquad\forall\,g\in\mathcal{S}_{0}(\mathbb{R}^{d}),\qquad\sup_{n}\left\|f_{n}\right\|_{\mathcal{S}^{\prime}_{0}}\leq\left\|f\right\|_{\mathcal{S}^{\prime}_{0}}.\]
Proof.: Let us fix \(f\in\mathcal{S}^{\prime}_{0}(\mathbb{R}^{d})\smallsetminus\{0\}\) and set \(R\coloneqq\left\|f\right\|_{\mathcal{S}^{\prime}_{0}}\). By [13, Proposition 6.15], there exists a net \(\{f_{\alpha}\}_{\alpha\in A}\subseteq\mathcal{S}_{0}(\mathbb{R}^{d})\) which converges w-\(*\) to \(f\) in \(\mathcal{S}^{\prime}_{0}\) and such that \(\left\|f_{\alpha}\right\|_{\mathcal{S}^{\prime}_{0}}\leq R\) for every \(\alpha\in A\). Set
\[B_{R}\coloneqq\left\{f\in\mathcal{S}^{\prime}_{0}(\mathbb{R}^{d})\,|\,\left\| f\right\|_{\mathcal{S}^{\prime}_{0}}\leq R\right\}\quad\text{and}\quad E_{R} \coloneqq\mathcal{S}_{0}(\mathbb{R}^{d})\cap B_{R},\]
where \(\mathcal{S}_{0}\) is identified with its natural embedding in \(\mathcal{S}^{\prime}_{0}\), i.e.
\[E_{R}\subseteq B_{R}\subseteq\overline{E_{R}}^{w-*}.\]
We claim that \(\overline{E_{R}}^{w-*}\) is bounded in \(\mathcal{S}^{\prime}_{0}(\mathbb{R}^{d})\). In fact, if \(f_{0}\in\overline{E_{R}}^{w-*}\), then there exists a net \(\{f_{\alpha}\}_{\alpha\in A}\subseteq E_{R}\) which converges w-\(*\) to \(f_{0}\). Hence, we obtain
\[\left\|f_{0}\right\|_{\mathcal{S}^{\prime}_{0}}\leq\liminf_{\alpha\in A}\left\| f_{\alpha}\right\|_{\mathcal{S}^{\prime}_{0}}=\lim_{\alpha\in A}\inf\{\left\|f_{ \beta}\right\|_{\mathcal{S}^{\prime}_{0}}\,|\,\alpha\preceq\beta\}\leq\lim_{ \alpha\in A}R=R.\]
In particular, this shows that \(\overline{E_{R}}^{w-*}\subseteq B_{R}\) and we get
\[\overline{E_{R}}^{w-*}=B_{R}.\]
Since \(\mathcal{S}_{0}\) is separable, the relative w-\(*\) topology on \(B_{R}\) is induced by a metric by [18, Theorem 2.6.23]. Hence the topological w-\(*\) closure of \(E_{R}\) equals its sequential w-\(*\) closure. Consequently, there exists a sequence \(\{f_{n}\}_{n}\subseteq E_{R}\) which converges w-\(*\) to \(f\) in \(\mathcal{S}^{\prime}_{0}(\mathbb{R}^{d})\).
**Remark 3.12**.: _The above lemma holds also for any LCA second countable group \(\mathcal{G}\) replacing \(\mathbb{R}^{d}\), see [8, Theorem 2] for the separability of \(\mathcal{S}_{0}(\mathcal{G})\)._
**Lemma 3.13**.: _For any \(S\in\mathbb{S}_{0}^{\prime}\), there exists a sequence \(\{S_{n}\}_{n}\subseteq\mathbb{S}_{0}\) such that_
1. \(\|S_{n}\|_{\mathbb{S}_{0}^{\prime}}\lesssim\|S\|_{\mathbb{S}_{0}^{\prime}}\)_;_
2. \(\lim_{n\to+\infty}|\langle(S-S_{n})f,\!g\rangle|=0\) _for all_ \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\)_._
Proof.: This is a straightforward application of the Outer and Inner Kernel Theorems and of Lemma 3.11.
Convergence as in item \((ii)\) of the above lemma will be also denoted by
\[S_{n}\xrightarrow[n]{w-*}S\quad\text{in}\quad\mathbb{S}_{0}^{\prime}\qquad\text{or}\qquad S=\text{w-}*\text{-}\lim_{n\to+\infty}S_{n}\quad\text{in}\quad\mathbb{S}_{0}^{\prime}.\]
* _The kernel of the mixed-state localization operator_ \(b\star S\) _is given by_ (42) \[K_{b\star S}(y,u)=\int_{\mathbb{R}^{2d}}b(x,\omega)e^{2\pi i(y-u)\omega}K_{S}(y-x,u-x)\,dxd\omega;\] _for every_ \(z=(x,\omega)\in\mathbb{R}^{2d}\) _the kernel of_ \(S\alpha_{z}\check{T}\) _is_ (43) \[K_{S\alpha_{z}\check{T}}(y,u)=\int_{\mathbb{R}^{d}}e^{2\pi i(y-t)\omega}K_{T}(x-y,x-t)K_{S}(t,u)\,dt.\]
Proof.: \((i)\) We leave the elementary computations to the interested reader, and note that in order to prove \(\alpha_{z}S,\check{S}\in\mathbb{S}_{0}\) the result [10, Corollary 3.3] is useful. A continuous and linear operator \(S\colon\mathcal{S}_{0}\to\mathcal{S}_{0}^{\prime}\) is a Feichtinger operator if and only if
\[\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}|\langle S\pi(z)g_{1},\pi(w)g_{2} \rangle|\ dzdw\]
is finite for any \(g_{1},g_{2}\in\mathcal{S}_{0}(\mathbb{R}^{d})\).
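Note, for instance, that every rank-one operator \(f\otimes g\) with \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\) passes this test: since \(|\langle(f\otimes g)\pi(z)g_{1},\pi(w)g_{2}\rangle|=|V_{g_{1}}g(z)|\,|V_{g_{2}}f(w)|\), the double integral factors as \(\left\|V_{g_{1}}g\right\|_{L^{1}}\left\|V_{g_{2}}f\right\|_{L^{1}}\), which is finite because \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\).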
\((ii)\) We first address the convolution between two Feichtinger operators. By item \((i)\) and the fact that \(\mathbb{S}_{0}\) is a Banach algebra under composition, we have that \(S\alpha_{z}\check{T}\) is in \(\mathbb{S}_{0}\) for any \(z=(x,\omega)\in\mathbb{R}^{2d}\). We have by [10, Corollary 3.15]:
\[S\star T(z) =\operatorname{tr}(S\alpha_{z}\check{T})=\int_{\mathbb{R}^{d}}K_ {S\alpha_{z}\check{T}}(y,y)\,dy=\int_{\mathbb{R}^{2d}}K_{\alpha_{z}\check{T}}( y,t)K_{S}(t,y)\,dtdy\] \[=\int_{\mathbb{R}^{2d}}e^{2\pi i(y-t)\omega}K_{T}(x-y,x-t)K_{S}( t,y)\,dtdy\] \[=\int_{\mathbb{R}^{d}}\left(\int_{\mathbb{R}^{d}}K_{T}(x-y,x-t)K _{S}(t,y)e^{-2\pi it\omega}\,dt\right)e^{2\pi iy\omega}\,dy\] \[=\mathcal{F}_{2}^{-1}\mathcal{F}_{1}\left(\Phi T_{(x,x)}K_{T} \cdot K_{S}\right)(\omega,\omega),\]
where \(\Phi F(t,y)\coloneqq F(-y,-t)\), and \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) are the partial Fourier transforms with respect to the first and second variable, respectively. Consider now \(f,g,h,l\in\mathcal{S}_{0}(\mathbb{R}^{d})\); it is useful to compute the following, where \(P\) is the parity operator:
\[\mathcal{F}_{2}^{-1}\mathcal{F}_{1}\left(\Phi T_{(x,x)}K_{h\otimes l }\cdot K_{f\otimes g}\right) (\omega,\omega)=\int_{\mathbb{R}^{d}}\left(\int_{\mathbb{R}^{d}} h(x-y)\overline{l(x-t)}f(t)\overline{g(y)}e^{-2\pi it\omega}\,dt\right)e^{2 \pi iy\omega}\,dy\] \[=\int_{\mathbb{R}^{d}}f(t)e^{-2\pi it\omega}\overline{l(x-t)}\,dt \cdot\int_{\mathbb{R}^{d}}\overline{g(y)}e^{2\pi iy\omega}h(x-y)\,dy\] \[=V_{Pl}f(-x,\omega)\cdot\overline{V_{Ph}g(-x,\omega)}.\]
Hence \(\mathcal{F}_{2}^{-1}\mathcal{F}_{1}\left(\Phi T_{(x,x)}K_{h\otimes l}\cdot K _{f\otimes g}\right)(\omega,\omega)\) is in \(\mathcal{S}_{0}(\mathbb{R}^{2d})\) as a function of \((x,\omega)\). We consider now two representations \(S=\sum_{n=1}^{\infty}f_{n}\otimes g_{n}\) and \(T=\sum_{n=1}^{\infty}h_{n}\otimes l_{n}\), see
Lemma 3.4, so that
\[K_{S}=\sum_{n=1}^{\infty}K_{f_{n}\otimes g_{n}},\qquad K_{T}=\sum_{n=1}^{\infty}K _{h_{n}\otimes l_{n}}.\]
It follows that we can write
\[S\star T(z) =\mathcal{F}_{2}^{-1}\mathcal{F}_{1}\left(\Phi T_{(x,x)}\sum_{m=1}^{\infty}K_{h_{m}\otimes l_{m}}\cdot\sum_{n=1}^{\infty}K_{f_{n}\otimes g_{n}}\right)(\omega,\omega)\] \[=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\mathcal{F}_{2}^{-1}\mathcal{F}_{1}\left(\Phi T_{(x,x)}K_{h_{m}\otimes l_{m}}\cdot K_{f_{n}\otimes g_{n}}\right)(\omega,\omega)\] \[=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}V_{Pl_{m}}f_{n}(-x,\omega)\cdot\overline{V_{Ph_{m}}g_{n}(-x,\omega)}\in\mathcal{S}_{0}(\mathbb{R}^{2d}),\]
where the convergence is guaranteed by Lemma 3.4.
Concerning \(b\star S\), the following estimate for any \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\) proves that \(b\star S\in\mathbb{S}_{0}^{\prime}\):
\[|\langle(b\star S)f\text{,}g\rangle|\leq\int_{\mathbb{R}^{2d}}|b(z)|\,| \langle S\pi(z)^{*}f\text{,}\pi(z)^{*}g\rangle|\ dz\lesssim\left\|b\right\|_{L ^{1}}\left\|S\right\|_{\mathbb{S}_{0}^{\prime}}\left\|f\right\|_{\mathcal{S}_ {0}}\left\|g\right\|_{\mathcal{S}_{0}}.\]
We exploit [10, Theorem 3.2 (ii)] to show that \(b\star S\) is in \(\mathbb{S}_{0}\). For \(g_{1},g_{2}\in\mathcal{S}_{0}(\mathbb{R}^{d})\) we have
\[\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}|\langle(b\star S)\pi(w)g_{1},\pi(u)g_{2}\rangle|\ dwdu\leq\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}|b(z)|\,|\langle S\pi(w-z)g_{1},\pi(u-z)g_{2}\rangle|\ dzdwdu\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}|\langle S\pi(w^{\prime})g_{1},\pi(u^{\prime})g_{2}\rangle|\ dw^{\prime}du^{\prime}\cdot\int_{\mathbb{R}^{2d}}|b(z)|\ dz<+\infty.\]
\((iii)\) We compute explicitly the kernel of the operator given by the convolution \(b\star S\):
\[\langle(b\star S)f\text{,}g\rangle =\int_{\mathbb{R}^{2d}}b(x,\omega)\int_{\mathbb{R}^{2d}}K_{S}(y,u )\overline{\pi(-z)g(y)}\pi(-z)f(u)\,dydu\,dxd\omega\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}b(x,\omega)e^{2\pi i (y-u)\omega}K_{S}(y,u)\overline{g(y+x)}f(u+x)\,dxd\omega\,dydu,\]
for \(z=(x,\omega)\in\mathbb{R}^{2d}\). The change of variables \(y^{\prime}=y+x,u^{\prime}=u+x\) gives the desired result. The last claim is just a direct application of (40), (41) and the Banach algebra property for \(\mathbb{S}_{0}\)[10, Lemma 3.10].
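In the rank-one case \(S=\varphi\otimes\varphi\), \(\varphi\in\mathcal{S}_{0}(\mathbb{R}^{d})\), formula (42) specializes to the familiar kernel of the time-frequency localization operator \(f\mapsto\int_{\mathbb{R}^{2d}}b(z)V_{\varphi}f(z)\pi(z)\varphi\,dz\), namely

\[K_{b\star(\varphi\otimes\varphi)}(y,u)=\int_{\mathbb{R}^{2d}}b(x,\omega)e^{2\pi i(y-u)\omega}\varphi(y-x)\overline{\varphi(u-x)}\,dxd\omega,\]

since \(K_{\varphi\otimes\varphi}(y,u)=\varphi(y)\overline{\varphi(u)}\).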
**Corollary 3.17**.: _Let \(S,T\in\mathbb{S}_{0}\) with spectral decompositions \(S=\sum_{n=1}^{\infty}f_{n}\otimes g_{n}\) and \(T=\sum_{n=1}^{\infty}h_{n}\otimes l_{n}\), where \(\{f_{n}\}_{n},\{g_{n}\}_{n},\{h_{n}\}_{n},\{l_{n}\}_{n}\subseteq\mathcal{S}_{0}(\mathbb{R}^{d})\) with \(\sum_{n=1}^{\infty}\left\|f_{n}\right\|_{\mathcal{S}_{0}}\left\|g_{n}\right\|_{\mathcal{S}_{0}}<+\infty\) and \(\sum_{n=1}^{\infty}\left\|h_{n}\right\|_{\mathcal{S}_{0}}\left\|l_{n}\right\|_{\mathcal{S}_{0}}<+\infty\). Then, with the notations introduced in the proof of Lemma 3.16, for every \(z=(x,\omega)\in\mathbb{R}^{2d}\):_
\[S\star T(z) =\mathcal{F}_{2}^{-1}\mathcal{F}_{1}\left(\Phi T_{(x,x)}K_{T} \cdot K_{S}\right)(\omega,\omega)\] \[=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}V_{Pl_{m}}f_{n}(-x,\omega) \cdot\overline{V_{Ph_{m}}g_{n}(-x,\omega)}. \tag{44}\]
**Definition 3.18**.: _Let \(A\in\mathbb{S}_{0}^{\prime}\), \(a\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\), \(S\in\mathbb{S}_{0}\) and \(b\in\mathcal{S}_{0}(\mathbb{R}^{2d})\). Consider any sequences \(\{A_{n}\}_{n}\subseteq\mathbb{S}_{0}\) and \(\{a_{n}\}_{n}\subseteq\mathcal{S}_{0}(\mathbb{R}^{2d})\) such that_
\[A_{n}\xrightarrow[n]{w-*}A\quad\text{in}\quad\mathbb{S}_{0}^{\prime}\qquad \text{and}\qquad a_{n}\xrightarrow[n]{w-*}a\quad\text{in}\quad\mathcal{S}_{0 }^{\prime}(\mathbb{R}^{2d}).\]
_Then we define:_
(45) \[S\star A\coloneqq\text{w-}*\text{-}\lim_{n\to+\infty}S\star A_{n}\quad\text{in}\quad\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d});\]
(46) \[a\star S\coloneqq\text{w-}*\text{-}\lim_{n\to+\infty}a_{n}\star S\quad\text{in}\quad\mathbb{S}_{0}^{\prime};\]
(47) \[b\star A\coloneqq\text{w-}*\text{-}\lim_{n\to+\infty}b\star A_{n}\quad\text{in}\quad\mathbb{S}_{0}^{\prime}.\]
3. _Moreover, they are associative. In particular, if_ \(z\in\mathbb{R}^{2d}\)_,_ \(T,Q\in\mathbb{S}_{0}\)_,_ \(\sigma\in\mathcal{S}_{0}(\mathbb{R}^{2d})\) _and_ \(A,a,S,b\) _as in Definition_ 3.18 _then:_ (51) \[(S\star(T\star b))(z) =((S\star T)\ast b)(z);\] (52) \[S\star(T\star Q) =(S\star T)\star Q;\] (53) \[(S\star b)\star\sigma =S\star(b\ast\sigma);\] (54) \[S\star(T\star a) =(S\star T)\ast a;\] (55) \[A\star(T\star b) =(A\star T)\star b;\] (56) \[S\star(T\star A) =(S\star T)\star A;\] _in the above identities_ \(\ast\) _denotes the usual convolution between two functions or a function and a distribution._
Proof.: \((i)\) It suffices to show (48), (49) and (50), since the other assertions in \((i)\) are evident.
We start with(48). Let \(b\in\mathcal{S}_{0}(\mathbb{R}^{2d})\) and \(z=(x,\omega)\in\mathbb{R}^{2d}\), in the subsequent computations we use Lemma 3.14 and 3.16:
\[\langle S\star A,\!b\rangle =\lim_{n\to+\infty}\langle S\star A_{n},\!b\rangle=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}\operatorname{tr}(S\alpha_{z}\check{A}_{n})\overline{b(z)}\,dz\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{d}}K_{S\alpha_{z}\check{A}_{n}}(y,y)\,dy\,\overline{b(z)}\,dz\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}e^{2\pi i(y-t)\omega}K_{A_{n}}(x-y,x-t)K_{S}(t,y)\,dtdy\,\overline{b(z)}\,dz\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}e^{2\pi i(t^{\prime}-y^{\prime})\omega}K_{A_{n}}(y^{\prime},t^{\prime})K_{S}(x-t^{\prime},x-y^{\prime})\,dt^{\prime}dy^{\prime}\,\overline{b(z)}\,dz\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}K_{A_{n}}(y^{\prime},t^{\prime})\overline{\left(\int_{\mathbb{R}^{2d}}\overline{K_{S}(t^{\prime}-x,y^{\prime}-x)}e^{2\pi i(y^{\prime}-t^{\prime})\omega}b(z)\,dz\right)}\,dy^{\prime}dt^{\prime}\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}K_{A_{n}}(y^{\prime},t^{\prime})\overline{\left(\int_{\mathbb{R}^{2d}}K_{\check{S}^{*}}(y^{\prime}-x,t^{\prime}-x)e^{2\pi i(y^{\prime}-t^{\prime})\omega}b(z)\,dz\right)}\,dy^{\prime}dt^{\prime}\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}K_{A_{n}}(y^{\prime},t^{\prime})\overline{K_{b\star\check{S}^{*}}(y^{\prime},t^{\prime})}\,dy^{\prime}dt^{\prime}.\]
About (49), we take \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\) and compute directly keeping in mind Remark 3.19:
\[\langle(a\star S)f, g\rangle =\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}a_{n}(z)\langle\pi(z)S\pi(z )^{*}f, g\rangle\,dz\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}a_{n}(z)\overline{\langle \pi(z)S^{*}\pi(z)^{*}g, f\rangle}\,dz\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}a_{n}(z)\overline{(g \otimes f)\star\bar{S}^{*}(z)}\,dz.\]
Let us address (50):
\[\langle(b\star A)f, g\rangle =\lim_{n\to+\infty}\langle K_{b\star A_{n}},K_{g\otimes f}\rangle\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}\Big{(}\int_{\mathbb{R} ^{2d}}b(x,\omega)e^{2\pi i(y-u)\omega}K_{A_{n}}(y-x,u-x)\,dxd\omega\Big{)}\] \[\times\overline{g(y)}f(u)\,dydu\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}K_{A_{n}}(y^{\prime},u^ {\prime})\overline{\Big{(}\int_{\mathbb{R}^{2d}}\overline{b(x,\omega)}e^{-2 \pi i(y^{\prime}-u^{\prime})\omega}}\] \[\times\overline{g(y^{\prime}+x)\overline{f(u^{\prime}+x)}\,dxd \omega\Big{)}}\,dy^{\prime}du^{\prime}\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}K_{A_{n}}(y^{\prime},u^ {\prime})\overline{\Big{(}\int_{\mathbb{R}^{2d}}b^{*}(x^{\prime},\omega^{ \prime})e^{2\pi i(y^{\prime}-u^{\prime})\omega^{\prime}}}\] \[\times\overline{g(y^{\prime}-x^{\prime})\overline{f(u^{\prime}+x ^{\prime})}\,dx^{\prime}d\omega^{\prime}\Big{)}}\,dy^{\prime}du^{\prime}\] \[=\lim_{n\to+\infty}\int_{\mathbb{R}^{2d}}K_{A_{n}}(y^{\prime},u^ {\prime})\overline{K_{b^{*}\star(g\otimes f)}(y^{\prime},u^{\prime})}\,dy^{ \prime}du^{\prime},\]
where for the sake of brevity we set \(b^{*}(z)\coloneqq\overline{b(-z)}\).
\((ii)\) and \((iii)\) are trivial.
\((iv)\) We prove just (51), (52) and (53). The remaining identities can be derived in a similar manner.
In order to show (51) we compute for \(z\in\mathbb{R}^{2d}\):
\[(S\star(T\star b))(z) =\operatorname{tr}\left(S\circ\alpha_{z}\left(\left(\int_{\mathbb{ R}^{2d}}b(z)\alpha_{w}T\,dw\right)\check{\ }\right)\right)\] \[=\operatorname{tr}\left(S\circ\left(\int_{\mathbb{R}^{2d}}b(w) \alpha_{z}\left((\alpha_{w}T)\check{\ }\right)\,dw\right)\right)\] \[=\operatorname{tr}\left(S\circ\int_{\mathbb{R}^{2d}}b(w)\alpha_{ z}\alpha_{-w}\check{T}\,dw\right)\] \[=\operatorname{tr}\left(S\circ\int_{\mathbb{R}^{2d}}b(-w^{\prime })\alpha_{w^{\prime}}\alpha_{z}\check{T}\,dw^{\prime}\right)\] \[=\int_{\mathbb{R}^{2d}}b(-w^{\prime})\operatorname{tr}\left(S \alpha_{w^{\prime}+z}\check{T}\right)\,dw^{\prime},\]
where the last equality is due, e.g., to [20, Proposition 2.9]. We can then rephrase the last right-hand side term as
\[\int_{\mathbb{R}^{2d}}b(z-w^{\prime\prime})\operatorname{tr} \left(S\alpha_{w^{\prime\prime}}\check{T}\right)\,dw^{\prime\prime} =\int_{\mathbb{R}^{2d}}b(z-w^{\prime\prime})(S\star T)(w^{\prime \prime})\,dw^{\prime\prime}\] \[=((S\star T)\ast b)(z).\]
For the proof of (52), the following property of the trace is useful:
\[\int_{\mathbb{R}^{2d}}\operatorname{tr}(S\alpha_{w}T)\,dw=\operatorname{tr}(S )\operatorname{tr}(T),\]
where \(S,T\in\mathcal{J}^{1}\). Take now \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\):
\[\langle(S\star(T\star Q))f, g\rangle =\int_{\mathbb{R}^{2d}}\operatorname{tr}(T\alpha_{z}\check{Q}) \langle\alpha_{z}Sf, g\rangle\,dz\] \[=\int_{\mathbb{R}^{2d}}\operatorname{tr}(Q\alpha_{z}\check{T}) \operatorname{tr}((\alpha_{z}S)(f\otimes g))\,dz\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}\operatorname{tr}(Q (\alpha_{z}\check{T})\alpha_{w}((\alpha_{z}S)(f\otimes g)))\,dwdz\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}\operatorname{tr}((f \otimes g)(\alpha_{w}Q)\alpha_{z}((\alpha_{w}\check{T})S))\,dzdw\] \[=\int_{\mathbb{R}^{2d}}\operatorname{tr}(S\alpha_{w}\check{T}) \operatorname{tr}((\alpha_{w}Q)(f\otimes g))\,dw\] \[=\langle((S\star T)\star Q)f, g\rangle.\]
Also the last identity (53) may be deduced by a direct computation. For \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\) we have
\[\langle((S\star b)\star\sigma)f,\!g\rangle =\int_{\mathbb{R}^{2d}}\sigma(z)\langle\alpha_{z}(S\star b)f,\!g\rangle\,dz\] \[=\int_{\mathbb{R}^{2d}}\sigma(z)\int_{\mathbb{R}^{2d}}b(w)\langle(\alpha_{w}S)\pi(z)^{*}f,\pi(z)^{*}g\rangle\,dwdz\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}\sigma(z)b(w)\langle(\alpha_{w+z}S)f,\!g\rangle\,dwdz\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}\sigma(z)b(w)\,\mathrm{tr}((\alpha_{w+z}S)(f\otimes g))\,dwdz\] \[=\int_{\mathbb{R}^{2d}}b(w)\int_{\mathbb{R}^{2d}}\sigma(z^{\prime}-w)\,\mathrm{tr}((\alpha_{z^{\prime}}S)(f\otimes g))\,dz^{\prime}dw\] \[=\int_{\mathbb{R}^{2d}}\Big{(}\int_{\mathbb{R}^{2d}}b(w)\sigma(z^{\prime}-w)\,dw\Big{)}\,\mathrm{tr}((\alpha_{z^{\prime}}S)(f\otimes g))\,dz^{\prime}\] \[=\int_{\mathbb{R}^{2d}}b*\sigma(z^{\prime})\langle(\alpha_{z^{\prime}}S)f,\!g\rangle\,dz^{\prime}\] \[=\langle(S\star(b*\sigma))f,\!g\rangle.\]
This concludes the proof.
**Corollary 3.21**.: _The mappings \(\mathcal{F}_{W_{\tau}}\) and \(W_{\tau}\) defined on \(\mathbb{S}_{0}\) can be extended to topological isomorphisms_
\[\mathcal{F}_{W_{\tau}}\colon\mathbb{S}_{0}^{\prime}\to\mathcal{S}_{0}^{\prime }(\mathbb{R}^{2d})\qquad\text{and}\qquad W_{\tau}\colon\mathbb{S}_{0}^{\prime} \to\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\]
_by duality:_
\[\langle\mathcal{F}_{W_{\tau}}S, a\rangle\coloneqq\langle S,\mathrm{SR}^{\tau}a\rangle,\qquad\langle W_{ \tau}S, a\rangle\coloneqq\langle S,\mathrm{Op}_{\tau}\,a\rangle, \tag{57}\]
_where \(S\in\mathbb{S}_{0}^{\prime}\) and \(a\in\mathcal{S}_{0}(\mathbb{R}^{2d})\). The inverses are given by_
\[\mathrm{SR}^{\tau}\colon\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\to\mathbb{S }_{0}^{\prime}\qquad\text{and}\qquad\mathrm{Op}_{\tau}\colon\mathcal{S}_{0}^{ \prime}(\mathbb{R}^{2d})\to\mathbb{S}_{0}^{\prime},\]
_respectively._
Proof.: The definitions in (57) rely on the fact that \(\mathrm{Op}_{\tau}=W_{\tau}^{*}\) and \(\mathrm{SR}^{\tau}=\mathcal{F}_{W_{\tau}}^{*}\), see Theorem 3.7 and Corollary 3.9. It is straightforward to see that if \(S\in\mathbb{S}_{0}^{\prime}\), then \(\mathcal{F}_{W_{\tau}}S\) and \(W_{\tau}S\) defined as in (57) are in \(\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\). Also linearity and boundedness of \(\mathcal{F}_{W_{\tau}}\colon\mathbb{S}_{0}^{\prime}\to\mathcal{S}_{0}^{\prime} (\mathbb{R}^{2d})\) and \(W_{\tau}\colon\mathbb{S}_{0}^{\prime}\to\mathcal{S}_{0}^{\prime}(\mathbb{R}^ {2d})\) are easy to verify as well as the fact that they extend \(\mathcal{F}_{W_{\tau}}\colon\mathbb{S}_{0}\to\mathcal{S}_{0}(\mathbb{R}^{2d})\) and \(W_{\tau}\colon\mathbb{S}_{0}\to\mathcal{S}_{0}(\mathbb{R}^{2d})\).
We show that \(W_{\tau}\) is an isomorphism with inverse \(\text{Op}_{\tau}\); \(\mathcal{F}_{W_{\tau}}\) is treated in the same way. \(W_{\tau}\) is injective because \(\text{Op}_{\tau}\colon\mathcal{S}_{0}(\mathbb{R}^{2d})\to\mathbb{S}_{0}\) is an isomorphism. Fix now \(a\in\mathcal{S}_{0}^{\prime}(\mathbb{R}^{2d})\); there exists a sequence \(\{a_{n}\}_{n}\subseteq\mathcal{S}_{0}(\mathbb{R}^{2d})\) such that \(a_{n}\xrightarrow[n]{w-*}a\) in \(\mathcal{S}^{\prime}_{0}(\mathbb{R}^{2d})\). Since \(W_{\tau}\) is an isomorphism between \(\mathbb{S}_{0}\) and \(\mathcal{S}_{0}(\mathbb{R}^{2d})\), there exists \(\{A_{n}\}_{n}\subseteq\mathbb{S}_{0}\) such that \(a_{n}=W_{\tau}A_{n}\). We see that there is \(A\in\mathbb{S}^{\prime}_{0}\) such that \(A_{n}\xrightarrow[n]{w-*}A\) in \(\mathbb{S}^{\prime}_{0}\); in fact, taking \(b\in\mathcal{S}_{0}(\mathbb{R}^{2d})\),
\[\langle a,\!b\rangle=\lim_{n\to+\infty}\langle W_{\tau}A_{n},\!b\rangle=\lim_ {n\to+\infty}\langle A_{n},\text{Op}_{\tau}\,b\rangle.\]
Hence \(a=W_{\tau}A\), which proves that \(W_{\tau}\) is onto. We show now that \(W_{\tau}\circ\text{Op}_{\tau}\) is the identity on \(\mathcal{S}^{\prime}_{0}(\mathbb{R}^{2d})\): take \(a\in\mathcal{S}^{\prime}_{0}(\mathbb{R}^{2d})\) and \(b\in\mathcal{S}_{0}(\mathbb{R}^{2d})\):
\[\langle W_{\tau}\circ\text{Op}_{\tau}\,a,\!b\rangle=\langle\text{Op}_{\tau} \,a,\text{Op}_{\tau}\,b\rangle=\langle a,\!W_{\tau}\circ\text{Op}_{\tau}\,b \rangle=\langle a,\!b\rangle.\]
The first identity is just (57), the second one is (32) and the last one is \((i)\) of Corollary 3.8. For the other direction, take \(S\in\mathbb{S}^{\prime}_{0}\) and \(T\in\mathbb{S}_{0}\):
\[\langle\text{Op}_{\tau}\circ W_{\tau}S,\!T\rangle=\langle W_{\tau}S,\!W_{ \tau}T\rangle=\langle S,\text{Op}_{\tau}\circ W_{\tau}T\rangle=\langle S,\!T\rangle.\]
The first identity is (32), the second one is (57) and the last one is \((i)\) of Corollary 3.8.
### \(\tau\)-Cohen's class of operators
In the present subsection we define \(Q^{\tau}_{a}(S)\) and recall the definition of \(Q^{\tau}_{S}(f)\) from [17]. We shall see that \(Q^{\tau}_{a}(S)\) relates to well-known objects and observe that it coincides with the \(\tau\)-symbol of the mixed-state localization operator \(a\star S\). We continue with some statements concerning the interplay between the Gabor matrix of an operator \(G^{\varphi}_{T}\), the \(\tau\)-Cohen's class, the trace and the \(\tau\)-Wigner distribution.
**Definition 3.22**.: _For \(a\in\mathcal{S}^{\prime}_{0}(\mathbb{R}^{2d})\) we define the \(\tau\)-Cohen's class distribution, with kernel \(a\), of an operator \(S\in\mathbb{S}_{0}\) as_
\[Q^{\tau}_{a}(S)\coloneqq a*W_{\tau}S. \tag{58}\]
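For instance, the kernel \(a=\delta\in\mathcal{S}^{\prime}_{0}(\mathbb{R}^{2d})\) recovers the \(\tau\)-Wigner distribution of the operator itself:

\[Q^{\tau}_{\delta}(S)=\delta*W_{\tau}S=W_{\tau}S.\]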
Of course, the rank-one case \(f\otimes g\) reduces to the definition given in (23). We recall also the definition given in [17] of Cohen's class distribution of a function \(f\in\mathcal{S}_{0}(\mathbb{R}^{d})\) w.r.t. the operator \(S\in\mathbb{S}^{\prime}_{0}\) by
\[Q_{S}f\coloneqq(f\otimes f)\star\check{S}. \tag{59}\]
It can be easily seen that for every \(z\in\mathbb{R}^{2d}\)
\[Q_{S}f(z)=(f\otimes f)\star\check{S}(z)=\langle(\alpha_{z}S)f,\!f\rangle.\]
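In particular, choosing the rank-one operator \(S=\varphi\otimes\varphi\) with \(\varphi\in\mathcal{S}_{0}(\mathbb{R}^{d})\), one obtains the spectrogram of \(f\) with window \(\varphi\):

\[Q_{\varphi\otimes\varphi}f(z)=\langle(\alpha_{z}(\varphi\otimes\varphi))f,f\rangle=\left|\langle f,\pi(z)\varphi\rangle\right|^{2}=\left|V_{\varphi}f(z)\right|^{2}.\]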
**Remark 3.23**.: _If \(a\in\mathcal{S}^{\prime}_{0}(\mathbb{R}^{2d})\) and \(S\in\mathbb{S}_{0}\), then we see that the \(\tau\)-Cohen's class representation of \(S\) w.r.t. \(a\) is just the \(\tau\)-symbol of the mixed-state localization operator \(a\star S\):_
\[\text{a}^{a\star S}_{\tau}=W_{\tau}(a\star S)=a*W_{\tau}S=Q^{\tau}_{a}(S).\]
**Lemma 3.24**.: _Let \(S\in\mathbb{S}_{0}\) have the spectral decomposition \(\sum_{n=1}^{\infty}f_{n}\otimes g_{n}\), let \(\varphi,\psi\in\mathcal{S}_{0}(\mathbb{R}^{d})\) and \(\{h_{n}\}_{n}\subseteq\mathcal{S}_{0}(\mathbb{R}^{d})\) with_

\[\sum_{n=1}^{\infty}\|h_{n}\|_{\mathcal{S}_{0}}^{2}<+\infty.\]

_Then for every \(z\in\mathbb{R}^{2d}\):_
\[Q_{W_{1-\tau}(\check{\psi},\check{\varphi})}^{\tau}(S)(z)=\sum_{n=1}^{\infty}V_{\varphi}f_{n}(z)\overline{V_{\psi}g_{n}(z)}; \tag{60}\] \[Q_{W_{1-\tau}(\check{\varphi},\check{\varphi})}^{\tau}\Big{(}\sum_{n=1}^{\infty}h_{n}\otimes h_{n}\Big{)}(z)=\sum_{n=1}^{\infty}\left|V_{\varphi}h_{n}(z)\right|^{2}. \tag{61}\]
Proof.: Clearly, it suffices to prove the first identity. We show first that for \(f,g\in\mathcal{S}_{0}(\mathbb{R}^{d})\)
\[Q_{a}^{\tau}(f,g)=(f\otimes g)\star\operatorname{Op}_{\text{1-$\tau$}}(a). \tag{62}\]
In fact, applying \(\mathcal{F}_{\sigma}\) to the right-hand side first we get
\[\mathcal{F}_{\sigma}((f\otimes g)\star\operatorname{Op}_{\text{1-$\tau$}}(a) )=\mathcal{F}_{W_{\tau}}(f\otimes g)\cdot\mathcal{F}_{W_{1-\tau}}\operatorname {Op}_{\text{1-$\tau$}}(a)=V_{g}^{\tau}f\cdot\mathcal{F}_{\sigma}a.\]
We apply \(\mathcal{F}_{\sigma}\) a second time:
\[(f\otimes g)\star\operatorname{Op}_{\text{1-$\tau$}}(a)=\mathcal{F}_{\sigma} V_{g}^{\tau}f\ast\mathcal{F}_{\sigma}\mathcal{F}_{\sigma}a=W_{\tau}(f,g)\ast a.\]
We can now proceed as follows:
\[Q_{W_{1-\tau}(\check{\psi},\check{\varphi})}^{\tau}(S) =W_{1-\tau}(\check{\psi},\check{\varphi})\ast W_{\tau}(\sum_{n=1 }^{\infty}f_{n}\otimes g_{n})=\sum_{n=1}^{\infty}W_{1-\tau}(\check{\psi}, \check{\varphi})\ast W_{\tau}(f_{n},g_{n})\] \[=\sum_{n=1}^{\infty}(f_{n}\otimes g_{n})\star\operatorname{Op}_{ \text{1-$\tau$}}(W_{1-\tau}(\check{\psi},\check{\varphi}))=\sum_{n=1}^{\infty }(f_{n}\otimes g_{n})\star(\check{\psi}\otimes\check{\varphi})\] \[=\sum_{n=1}^{\infty}V_{\varphi}f_{n}(z)\overline{V_{\psi}g_{n}(z)},\]
where the last equality is due to [17].
We call a bounded operator \(T\) on \(L^{2}(\mathbb{R}^{d})\) positive, denoted by \(T\geq 0\), if
\[\langle Tf,f\rangle\geq 0,\qquad\forall\,f\in L^{2}(\mathbb{R}^{d}).\]
An operator \(T\in\mathcal{J}^{1}\) with \(T\geq 0\) is also called a state in quantum mechanics.
Let us take \(T\in\mathbb{S}_{0}^{\prime}\) and \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\); then the Gabor matrix of \(T\) (w.r.t. \(\varphi\)) is defined as
\[G_{T}^{\varphi}(z,w)\coloneqq\langle T\pi(w)\varphi,\pi(z)\varphi\rangle, \qquad z=(x,\omega),w=(u,v)\in\mathbb{R}^{2d}. \tag{63}\]
We notice that the Gabor matrix of an operator does not depend on \(\tau\), in the sense that
\[G_{T}^{\varphi}(z,w)=\langle T\pi(w)\varphi,\pi(z)\varphi\rangle=\langle T\pi^{ \tau}(w)\varphi,\pi^{\tau}(z)\varphi\rangle,\qquad\forall\,\tau\in[0,1].\]
**Remark 3.25**.: _We point out that the diagonal of the Gabor matrix of \(T\), w.r.t. \(\varphi\), is the Cohen's class representation of \(\varphi\) w.r.t. \(T\) up to a reflection:_
\[G_{T}^{\varphi}(-z,-z)=Q_{T}\varphi(z). \tag{64}\]
_In fact_
\[G_{T}^{\varphi}(-z,-z) =\langle T\pi(-z)\varphi,\pi(-z)\varphi\rangle=\langle T\pi(z)^{*}\varphi,\pi(z)^{*}\varphi\rangle\] \[=\langle(\alpha_{z}T)\varphi,\!\varphi\rangle=Q_{T}\varphi(z).\]
Let \(F\) and \(H\) be functions of \((z,w)\in\mathbb{R}^{4d}\) and let \(\Theta\) be a real \(4d\times 4d\) matrix. Then the twisted convolution induced by \(\Theta\) is defined as
\[F\,\natural_{\Theta}\,H(z,w)\coloneqq\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2 d}}F(z^{\prime},w^{\prime})H(z-z^{\prime},w-w^{\prime})e^{2\pi i(z,w)\Theta(z^{ \prime},w^{\prime})}\,dz^{\prime}dw^{\prime}. \tag{65}\]
**Lemma 3.26**.: _Let \(T,S\in\mathcal{J}^{1}\), \(T,S\geq 0\). Then for every \(\tau\in[0,1]\) we have_
\[\operatorname{tr}(TS)=\int_{\mathbb{R}^{2d}}W_{\tau}T(z)\overline{W_{\tau}S(z )}\,dz. \tag{66}\]
Proof.: Since \(T\) and \(S\) are trace-class and positive, they can be described as
\[T=\sum_{n=1}^{\infty}\lambda_{n}f_{n}\otimes f_{n},\qquad S=\sum_{n=1}^{\infty }\mu_{n}g_{n}\otimes g_{n}\]
for some orthonormal sets \(\{f_{n}\}_{n}\) and \(\{g_{n}\}_{n}\) in \(L^{2}\) and \(\lambda_{n},\mu_{n}\geq 0\). Let \(\{e_{n}\}_{n}\) be an o.n.b. for \(L^{2}(\mathbb{R}^{d})\):
\[\operatorname{tr}(TS)=\sum_{n=1}^{\infty}\langle TSe_{n},e_{n}\rangle=\sum_{ i,j}^{\infty}\lambda_{j}\mu_{i}\left|\langle f_{j},g_{i}\rangle\right|^{2}.\]
On the other hand,
\[\int_{\mathbb{R}^{2d}}W_{\tau}T(z)\overline{W_{\tau}S(z)}\,dz=\sum_{i,j}^{ \infty}\lambda_{j}\mu_{i}\int_{\mathbb{R}^{2d}}W_{\tau}f_{j}(z)\overline{W_{ \tau}g_{i}(z)}\,dz=\sum_{i,j}^{\infty}\lambda_{j}\mu_{i}\left|\langle f_{j},g_ {i}\rangle\right|^{2},\]
where the last equality is due to Moyal's identity. This concludes the proof.
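As a consistency check, in the rank-one case \(T=f\otimes f\), \(S=g\otimes g\) with \(f,g\in L^{2}(\mathbb{R}^{d})\), identity (66) reduces precisely to Moyal's identity:

\[\operatorname{tr}\left((f\otimes f)(g\otimes g)\right)=\left|\langle f,g\rangle\right|^{2}=\int_{\mathbb{R}^{2d}}W_{\tau}f(z)\overline{W_{\tau}g(z)}\,dz.\]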
**Remark 3.27**.: _Since we assume \(S\geq 0\), \(S\) is self-adjoint and for \(\tau=1/2\) we have that \(W_{1/2}S\) is real-valued. In fact, using the representation given in the proof of Lemma 3.26:_
\[W_{1/2}S=\sum_{n=1}^{\infty}\mu_{n}W_{1/2}g_{n}\]
_with every \(W_{1/2}g_{n}\) real-valued and \(\mu_{n}\geq 0\). Hence, for \(\tau=1/2\) we recover [12, Lemma 2.7]._
**Lemma 3.28**.: _Let \(T\in\mathcal{J}^{1}\) and consider \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\) such that \(\left\|\varphi\right\|_{L^{2}}=1\). Then_
\[\operatorname{tr}T=\int_{\mathbb{R}^{2d}}\langle(\alpha_{z}T)\varphi,\varphi \rangle\,dz=\int_{\mathbb{R}^{2d}}Q_{T}\varphi(z)\,dz=\int_{\mathbb{R}^{2d}}G_ {T}^{\varphi}(z,z)\,dz. \tag{67}\]
Proof.: The proof follows from a direct computation using the representations presented in the proof of Lemma 3.26 and Moyal's identity involving the function \(\varphi\):
\[\langle f_{j},g_{i}\rangle=\langle V_{\varphi}f_{j},V_{\varphi}g_{i}\rangle,\]
we leave details to the interested reader.
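For the reader's convenience, here is a minimal sketch of the omitted computation, under the singular value decomposition \(T=\sum_{n=1}^{\infty}\lambda_{n}f_{n}\otimes g_{n}\) with \(\{f_{n}\}_{n},\{g_{n}\}_{n}\) orthonormal in \(L^{2}(\mathbb{R}^{d})\) and \(\sum_{n=1}^{\infty}|\lambda_{n}|<+\infty\). Since \(Q_{T}\varphi(z)=\langle T\pi(z)^{*}\varphi,\pi(z)^{*}\varphi\rangle=\sum_{n=1}^{\infty}\lambda_{n}V_{g_{n}}\varphi(z)\overline{V_{f_{n}}\varphi(z)}\), Moyal's identity and \(\left\|\varphi\right\|_{L^{2}}=1\) give

\[\int_{\mathbb{R}^{2d}}Q_{T}\varphi(z)\,dz=\sum_{n=1}^{\infty}\lambda_{n}\langle V_{g_{n}}\varphi,V_{f_{n}}\varphi\rangle_{L^{2}}=\sum_{n=1}^{\infty}\lambda_{n}\langle f_{n},g_{n}\rangle=\operatorname{tr}T.\]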
**Lemma 3.29**.: _Let \(T\in\mathcal{J}^{1}\), \(T\geq 0\) and let \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\) such that \(\left\|\varphi\right\|_{L^{2}}=1\). Then for every \(z\in\mathbb{R}^{2d}\):_
\[Q_{T}\varphi(z)=\int_{\mathbb{R}^{2d}}W_{\tau}T(w)\overline{W_{\tau}\varphi(z +w)}\,dw=W_{\tau}T*(W_{\tau}\varphi)^{*}(z), \tag{68}\]
_where \((W_{\tau}\varphi)^{*}(w)=\overline{W_{\tau}\varphi(-w)}\)._
Proof.: We compute directly
\[Q_{T}\varphi(z) =\langle\pi(z)T\pi(z)^{*}\varphi,\varphi\rangle=\operatorname{tr }(T(\pi(z)^{*}\varphi\otimes\pi(z)^{*}\varphi))\] \[=\int_{\mathbb{R}^{2d}}W_{\tau}T(w)\overline{W_{\tau}(\pi(z)^{*} \varphi\otimes\pi(z)^{*}\varphi)(w)}\,dw,\]
the last equation holds because of Lemma 3.26. An elementary calculation gives
\[W_{\tau}(\pi(z)^{*}\varphi\otimes\pi(z)^{*}\varphi)(w)=W_{\tau}\varphi(z+w),\]
which is also known as covariance property and this concludes the proof.
**Lemma 3.30**.: _Let \(T\in\mathcal{J}^{1}\), \(T\geq 0\) and consider \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\) such that \(\left\|\varphi\right\|_{L^{2}}=1\). Then for every \(z,w\in\mathbb{R}^{2d}\):_
\[\left|G_{T}^{\varphi}(z,w)\right|^{2}\leq Q_{T}\varphi(-z)Q_{T}\varphi(-w).\]
Proof.: The claim follows from the Cauchy-Schwarz inequality for the inner product induced by the positive operator \(T\) and Remark 3.25.
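Explicitly, since \(T\geq 0\), the Cauchy-Schwarz inequality for the semi-inner product \((u,v)\mapsto\langle Tu,v\rangle\), combined with (64), gives for every \(z,w\in\mathbb{R}^{2d}\):

\[\left|G_{T}^{\varphi}(z,w)\right|^{2}=\left|\langle T\pi(w)\varphi,\pi(z)\varphi\rangle\right|^{2}\leq G_{T}^{\varphi}(w,w)\,G_{T}^{\varphi}(z,z)=Q_{T}\varphi(-w)\,Q_{T}\varphi(-z).\]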
**Lemma 3.31**.: _Let \(0_{d}\) and \(I_{d}\) denote the zero and identity \(d\times d\) matrices, respectively. Let us define_
\[\Theta\coloneqq\begin{bmatrix}0_{d}&0_{d}&0_{d}&0_{d}\\ I_{d}&0_{d}&0_{d}&0_{d}\\ 0_{d}&0_{d}&0_{d}&0_{d}\\ 0_{d}&0_{d}&-I_{d}&0_{d}\end{bmatrix}.\]
_Let \(T\in\mathcal{J}^{1}\) and consider \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\) such that \(\left\|\varphi\right\|_{L^{2}}=1\). For \(z=(x,\omega),w=(u,v)\in\mathbb{R}^{2d}\) we have_
\[G_{T}^{\varphi}(z,w) =G_{T}^{\varphi}\,\natural_{\Theta}(G_{\varphi\otimes\varphi}^{ \varphi})^{*}(z,w)\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}G_{T}^{\varphi}(z^{ \prime},w^{\prime})(G_{\varphi\otimes\varphi}^{\varphi})^{*}(z-z^{\prime},w-w ^{\prime})e^{2\pi i(\omega x^{\prime}-u^{\prime}v)}\,dz^{\prime}dw^{\prime}, \tag{69}\]
_where \(z^{\prime}=(x^{\prime},\omega^{\prime}),w^{\prime}=(u^{\prime},v^{\prime})\in \mathbb{R}^{2d}\)._
Proof.: We apply twice Moyal's identity:
\[G_{T}^{\varphi}(z,w) =\int_{\mathbb{R}^{2d}}V_{\varphi}[T\pi(w)\varphi](z^{\prime}) \overline{V_{\varphi}[\pi(z)\varphi](z^{\prime})}\,dz^{\prime}\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}V_{\varphi}[\pi(w) \varphi](w^{\prime})\overline{V_{\varphi}[T^{*}\pi(z^{\prime})\varphi](w^{ \prime})}\langle\pi(z^{\prime})\varphi,\pi(z)\varphi\rangle\,dz^{\prime}dw^{\prime}\] \[=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}G_{T}^{\varphi}(z^{ \prime},w^{\prime})\langle\pi(w)\varphi,\pi(w^{\prime})\varphi\rangle\langle \pi(z^{\prime})\varphi,\pi(z)\varphi\rangle\,dz^{\prime}dw^{\prime}.\]
It is then a direct, although tedious, calculation to show that
\[\langle\pi(z)\varphi,\pi(z^{\prime})\varphi\rangle\langle\pi(w^{\prime}) \varphi,\pi(w)\varphi\rangle=(G_{\varphi\otimes\varphi}^{\varphi})^{*}(z-z^{ \prime},w-w^{\prime})e^{2\pi i(\omega x^{\prime}-u^{\prime}v)}.\]
This concludes the proof.
**Lemma 3.32**.: _Let \(T\in\mathcal{J}^{1}\), \(T\geq 0\) and consider \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\) such that \(\left\|\varphi\right\|_{L^{2}}=1\). Then for any \(\tau\in[0,1]\):_
\[W_{\tau}T(z)=\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}^{2d}}e^{-2\pi i[(\omega x ^{\prime}-\omega^{\prime}x)+(\frac{1}{2}-\frac{3}{4}\tau)x^{\prime}\omega^{ \prime}+x^{\prime}v]}G_{T}^{\varphi}\left(\frac{z^{\prime}}{2}-w,-\frac{z^{ \prime}}{2}-w\right)\,dwdz^{\prime}, \tag{70}\]
_where \(z=(x,\omega),z^{\prime}=(x^{\prime},\omega^{\prime}),w=(u,v)\in\mathbb{R}^{2d}\)._
Proof.: We start rephrasing the \(\tau\)-Wigner distribution of \(T\):
\[W_{\tau}T(z)=\mathcal{F}_{\sigma}\mathcal{F}_{W_{\tau}}T(z)=\int_{\mathbb{R}^{ 2d}}e^{-2\pi i(\omega x^{\prime}-\omega^{\prime}x)}\operatorname{tr}(\pi^{ \tau}(z^{\prime})^{*}T)\,dz^{\prime}.\]
Recalling the properties for \(\pi^{\tau}\), see Section 2, we see that
\[\pi^{\tau}(z^{\prime}/2+z^{\prime}/2) =e^{2\pi i[(1-\tau)\frac{x^{\prime}\omega^{\prime}}{4}-\tau\frac {x^{\prime}\omega^{\prime}}{4}]}\pi^{\tau}(z^{\prime}/2)\pi^{\tau}(z^{\prime}/2)\] \[=e^{\frac{\pi}{2}i(1-2\tau)x^{\prime}\omega^{\prime}}\pi^{\tau}(z ^{\prime}/2)\pi^{\tau}(z^{\prime}/2).\]
Taking the adjoint we get \(\pi^{\tau}(z^{\prime})^{*}=e^{-\frac{\pi}{2}i(1-2\tau)x^{\prime}\omega^{\prime}} \pi^{\tau}(z^{\prime}/2)^{*}\pi^{\tau}(z^{\prime}/2)^{*}\) and we write using Lemma 3.28:
\[\operatorname{tr}(\pi^{\tau}(z^{\prime})^{*}T) =e^{-\frac{\pi}{2}i(1-2\tau)x^{\prime}\omega^{\prime}} \operatorname{tr}(\pi^{\tau}(z^{\prime}/2)^{*}T\pi^{\tau}(z^{\prime}/2)^{*})\] \[=e^{-\frac{\pi}{2}i(1-2\tau)x^{\prime}\omega^{\prime}}\int_{ \mathbb{R}^{2d}}\langle T\pi^{\tau}(z^{\prime}/2)^{*}\pi^{\tau}(w)^{*}\varphi, \pi^{\tau}(z^{\prime}/2)\pi^{\tau}(w)^{*}\varphi\rangle\,dw\] \[=e^{-\frac{\pi}{2}i(1-2\tau)x^{\prime}\omega^{\prime}}e^{-\frac{ \pi}{2}i(1-\tau)x^{\prime}\omega^{\prime}}\] \[\times\int_{\mathbb{R}^{2d}}\langle T\pi^{\tau}(-z^{\prime}/2) \pi^{\tau}(-w)\varphi,\pi^{\tau}(z^{\prime}/2)\pi^{\tau}(-w)\varphi\rangle\,dw\] \[=e^{-\frac{\pi}{2}i(2-3\tau)x^{\prime}\omega^{\prime}}\int_{ \mathbb{R}^{2d}}\langle T\pi(-z^{\prime}/2)\pi(-w)\varphi,\pi(z^{\prime}/2)\pi (-w)\varphi\rangle\,dw\] \[=e^{-\frac{\pi}{2}i(2-3\tau)x^{\prime}\omega^{\prime}}\int_{ \mathbb{R}^{2d}}e^{-2\pi ix^{\prime}v}\langle T\pi(-z^{\prime}/2-w)\varphi, \pi(z^{\prime}/2-w)\varphi\rangle\,dw.\]
This concludes the argument.
## 4. A characterization of Schwartz operators
In this section we introduce weighted versions of \(\mathbb{S}_{0}\) and give an alternative description of the class \(\mathfrak{S}\). We use the polynomial weight
\[v_{s}(z)\coloneqq(1+\left|z\right|^{2})^{\frac{s}{2}},\qquad z\in\mathbb{R}^{2 d}, \tag{71}\]
where \(s\geq 0\). In order to avoid an extremely cumbersome notation, just for the weight functions \(v_{s}\) we shall use the following:
\[v_{s}\otimes v_{s}(z,w)\coloneqq K_{v_{s}\otimes v_{s}}(z,w)=v_{s}(z)v_{s}(w),\qquad\forall z,w\in\mathbb{R}^{2d}.\]
**Definition 4.1**.: _For \(s\geq 0\) we define the weighted class of Feichtinger operators as_
\[\mathbb{M}_{s}^{1}\coloneqq\{S\colon\mathcal{S}_{0}^{\prime}(\mathbb{R}^{d}) \to\mathcal{S}_{0}(\mathbb{R}^{d})\,|\,S\,\text{is linear, continuous with kernel}\,K_{S}\in M_{v_{s}\otimes v_{s}}^{1}(\mathbb{R}^{2d})\}. \tag{72}\]
_For \(S\) in \(\mathbb{M}_{s}^{1}\) we define the mapping_
\[\left\|S\right\|_{\mathbb{M}_{s}^{1}}\coloneqq\left\|K_{S}\right\|_{M_{v_{s} \otimes v_{s}}^{1}}. \tag{73}\]
**Remark 4.2**.:
1. _For_ \(s=0\) _we recover the Feichtinger operators_ \(\mathbb{S}_{0}\)_;_
2. _The mapping defined in_ (73) _is a norm on_ \(\mathbb{M}_{s}^{1}\) _and it is easy to see that_ \((\mathbb{M}_{s}^{1},\left\|\cdot\right\|_{\mathbb{M}_{s}^{1}})\) _is a Banach space and the following continuous inclusion holds true for every_ \(s\geq 0\)_:_ (74) \[\mathbb{M}_{s}^{1}\hookrightarrow\mathbb{S}_{0}.\]
**Lemma 4.3**.: _For any \(S\in\mathbb{M}_{s}^{1}\) there exist \(\{f_{n}\}_{n},\{g_{n}\}_{n}\subseteq M_{v_{s}}^{1}(\mathbb{R}^{d})\) such that_

\[S=\sum_{n=1}^{\infty}f_{n}\otimes g_{n},\qquad\sum_{n=1}^{\infty}\left\|f_{n}\right\|_{M_{v_{s}}^{1}}\left\|g_{n}\right\|_{M_{v_{s}}^{1}}<+\infty,\quad K_{S}=\sum_{n=1}^{\infty}K_{f_{n}\otimes g_{n}}.\]
Proof.: The proof follows from the fact that
\[M^{1}_{v_{s}\otimes v_{s}}(\mathbb{R}^{2d})=M^{1}_{v_{s}}(\mathbb{R}^{d})\hat{ \otimes}M^{1}_{v_{s}}(\mathbb{R}^{d}).\]
See also the proof of Lemma 3.4.
**Theorem 4.4**.: _For every \(\tau\in[0,1]\) the mapping \(W_{\tau}\colon\mathbb{M}^{1}_{s}\to M^{1}_{v_{s}\otimes v_{s}}(\mathbb{R}^{2d})\) is a topological isomorphism with inverse given by \(\operatorname{Op}_{\tau}\colon M^{1}_{v_{s}\otimes v_{s}}(\mathbb{R}^{2d}) \to\mathbb{M}^{1}_{s}\)._
Proof.: The proof follows the same pattern as the ones of Theorem 3.7 and Corollary 3.8.
**Corollary 4.5**.: _An operator \(S\) belongs to \(\mathbb{M}^{1}_{s}\) if and only if for some (hence every) \(\tau\in[0,1]\) we have \(W_{\tau}S\in M^{1}_{v_{s}\otimes v_{s}}(\mathbb{R}^{2d})\)._
**Theorem 4.6**.: _The following is true:_
\[\mathfrak{S}=\bigcap_{s\geq 0}\mathbb{M}^{1}_{s}. \tag{75}\]
Proof.: By Corollary 4.5, \(S\) belongs to the set on the right-hand side if and only if
\[W_{\tau}S\in\bigcap_{s\geq 0}M^{1}_{v_{s}\otimes v_{s}}(\mathbb{R}^{2d})= \mathcal{S}(\mathbb{R}^{2d}).\]
The claim follows since \(W_{1/2}S\) is the Weyl symbol of \(S\), i.e. \(\mathrm{a}^{S}_{1/2}=W_{1/2}S\).
We recall that a function \(F\) on \(\mathbb{R}^{2d}\) is called rapidly decaying if for every multiindex \(\alpha,\beta\in\mathbb{N}^{d}_{0}\) we have
\[\sup_{x,\omega\in\mathbb{R}^{d}}\left|x^{\alpha}\omega^{\beta}F(x,\omega) \right|<+\infty,\]
where, if \(x=(x_{1},\ldots,x_{d})\) and \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\), \(x^{\alpha}\) stands for \(x_{1}^{\alpha_{1}}\cdot\ldots\cdot x_{d}^{\alpha_{d}}\).
In [12, Theorem 1.1] a sufficient condition is given for a positive trace-class operator to be in \(\mathfrak{S}\). Namely, if \(T\in B(L^{2})\), \(T\geq 0\), is such that \(W_{\tau}T\) exists for some \(\tau\in[0,1]\) and it is rapidly decreasing, then \(T\in\mathfrak{S}\) and \(W_{\tau}T\) exists for every \(\tau\in[0,1]\). In this spirit, we provide the following sufficient condition for a generic \(S\in B(L^{2})\). Observe that we do not require \(S\) to be positive.
**Corollary 4.7**.: _Let \(S\in B(L^{2})\) and assume that \(W_{\tau}S\) exists for some \(\tau\in[0,1]\). Suppose also that, w.r.t. some non-zero window in \(L^{2}(\mathbb{R}^{2d})\), the STFT of \(W_{\tau}S\) is rapidly decaying. Then \(W_{\tau}S\) exists for every \(\tau\in[0,1]\) and \(S\) is in \(\mathfrak{S}\)._
Proof.: Let us pick \(G\in L^{2}(\mathbb{R}^{2d})\smallsetminus\{0\}\). If \(V_{G}W_{\tau}S\) is rapidly decaying then \(S\in\mathbb{M}^{1}_{s}\) for every \(s\geq 0\). The claim follows from Theorem 4.6.
## Acknowledgments
The first author would like to thank Eduard Ortega for the financial support to visit Trondheim which led to this work.
|
2306.06020 | There's Plenty of Room in the Middle: The Unsung Revolution of the
Renormalization Group | The remarkable technical contributions of Michael E. Fisher to statistical
physics and the development of the renormalization group are widely known and
deeply influential. But less well-known is his early and profound appreciation
of the way in which renormalization group created a revolution in our
understanding of how physics -- in fact, all science -- is practiced, and the
concomitant adjustment that needs to be made to our conception of the purpose
and philosophy of science. In this essay, I attempt to redress this imbalance,
with examples from Fisher's writings and my own work. It is my hope that this
tribute will help remove some of the confusion that surrounds the scientific
usage of minimal models and renormalization group concepts, as well as their
limitations, in the ongoing effort to understand emergence in complex systems.
This paper will be published in "50 years of the renormalization group",
dedicated to the memory of Michael E. Fisher, edited by Amnon Aharony, Ora
Entin-Wohlman, David Huse and Leo Radzihovsky, World Scientific (in press). | Nigel Goldenfeld | 2023-06-09T16:40:00Z | http://arxiv.org/abs/2306.06020v1 | # There's Plenty of Room in the Middle:
###### Abstract
The remarkable technical contributions of Michael E. Fisher to statistical physics and the development of the renormalization group are widely known and deeply influential. But less well-known is his early and profound appreciation of the way in which renormalization group created a revolution in our understanding of how physics -- in fact, all science -- is practiced, and the concomitant adjustment that needs to be made to our conception of the purpose and philosophy of science. In this essay, I attempt to redress this imbalance, with examples from Fisher's writings and my own work. It is my hope that this tribute will help remove some of the confusion that surrounds the scientific usage of minimal models and renormalization group concepts, as well as their limitations, in the ongoing effort to understand emergence in complex systems.
_This paper will be published in "50 years of the renormalization group", dedicated to the memory of Michael E. Fisher, edited by Amnon Aharony, Ora Entin-Wohlman, David Huse and Leo Radzihovsky, World Scientific (in press)._
## I Introduction
In 1988, Michael E. Fisher authored what must surely be a candidate for his least cited paper. Entitled "Condensed Matter Physics: Does Quantum Mechanics Matter?" [1; 2] and bestowed with all of 9 citations (according to Google Scholar), it poses an outrageous question that in Fisher's hands is developed and answered with his characteristic brilliance, clarity and originality. Fisher was writing at the behest of Herman Feshbach to review the state of condensed matter physics as it stood on the 100th anniversary of the birth of Niels Bohr, with a special remit to comment on the overlap with Bohr's ideas on the fundamentals of quantum mechanics. The timing could not have been more propitious. The previous two years had seen the discovery of the high temperature superconductors LBCO [3] and YBCO [4]; the discovery of quasi-crystals had been made 4 years previously [5]; and the integer [6] and fractional quantum Hall effects [7] had been discovered and largely explained during the previous 8 years [8]. Five major discoveries in condensed matter physics in less than a decade, ultimately garnering 5 Nobel Prizes, and each a splendid manifestation of quantum mechanics.
Nevertheless, Fisher evidently did not regard his role as to provide a self-congratulatory pat on the back to condensed matter physics. Instead, he took it upon himself to explain his thoughts on "... what condensed matter physics does, or should be doing, and what defines condensed matter physics, and thence to approach the question 'Does quantum mechanics matter?'". The viewpoint that he expounded was at the time still rather unorthodox in many areas of physics and science, in particular his perspective that "the question of connecting the models with fundamental principles is _not_ a very relevant issue or central enterprise." This would come as news to those in the physics community engaged in the reductionist program, including to some extent condensed matter physicists but to a far greater extent those working in what at the time was sometimes called elementary particle physics or fundamental physics, and what is now generally called high-energy physics, or in some areas, with less hubris and reflecting the progress in accelerator physics and budget, medium-energy physics. This dichotomy within physics was certainly pervasive at the time, and important enough that Fisher chose to make it the focus of his article.
Fisher undoubtedly hoped to use the opportunity provided by Feshbach to pop the bubble of the reductionist approach to condensed matter physics, and this he did with gusto, and one assumes, with the twinkle in his eye and suave[9] cravat by which I will always remember him.
Over the last 35 years since Fisher's article, physics and its philosophical roots have undergone a significant shift. This shift influences the way we do physics, the way we interpret what we do, and the way we use physics to explore interdisciplinary scientific questions. Accordingly, I have chosen for my title a deliberately ambiguous and evocative phrase. "There is plenty of room in the middle" is of course a parody of Feynman's famous lecture "There is plenty of room at the bottom", which is often credited with foreseeing the nanotechnology revolution [10]. In my case, I am drawing attention away from the microscopic scale, to focus instead on the middle scale or level of description, which depending on the problem might be the scale intermediate between the lattice spacing and the correlation length (in the case of critical phenomena) or the scale larger than the dissipation scale and the scale of energy input (in the case of fluid turbulence). In the first of these, one can make this scale arbitrarily large by tuning the temperature close to the critical temperature; and in the second, by making the Reynolds number large. This middle scale was where many of Fisher's interests lay, understanding the universal, cooperative phenomena that arise there. And of course that is why he chose his provocative theme for his article.
But I have a second motivation for my title. On more than one occasion, I remember Michael saying "You can't
have interdisciplinary research without first having disciplines" (for my personal comments, I will lapse into first name terms). Thus, the "middle" in my title also refers to the space between disciplines, where interdisciplinary research happens and transdisciplinary research begins. I happen to believe that this is where the greatest scientific excitement can often be generated. Fisher himself was one of the shining examples of an interdisciplinary scientist, holding for many years at Cornell University the Horace White Professorship of Chemistry, Physics, and Mathematics, each of which he illuminated with his scholarship and unique contributions. Thus, I propose to use this article to pay tribute to Fisher's deep understanding of the way in which his most important work on renormalization group theory led to a transformation in our understanding of physics and its explanatory purpose, and to extend the scope of his examples and comments, especially with regard to non-equilibrium statistical physics. I will spend very little time on critical phenomena and exponents, because I believe that although this was the problem that Fisher and the statistical mechanics community set out to solve, the impact of the renormalization group is equally important away from criticality. One of the ways that I will argue this is through the connections between universality, renormalization group, levels of description and asymptotics. I also want to argue that the perspective he espoused is especially relevant today to applications outside of physics, where there has arguably been less self-consciousness about the way we choose to construct theories.
## II Condensed matter physics: does quantum mechanics matter?
To appreciate Fisher's subversive intentions, it is worth reminding the reader of the intellectual background to his article. It is no exaggeration to say that physics underwent a remarkable transformation sometime around the middle of the 20th century. During the previous decades, the reductionist program had been profoundly successful, with the discovery of the electron, neutron, proton; the theoretical predictions of the neutrino (by Pauli [11] in 1930) and the meson (by Yukawa [12] in 1934); the detection of the former by Cowan, Reines, Harrison, Kruse and McGuire [13] in 1956, and of the latter in 1936 by Neddermeyer and Anderson [14] -- although they actually discovered what we now call the muon, a lepton whose mass was close to Yukawa's prediction for the nuclear force carrier mass; the actual discovery in 1947 of Yukawa's particle, the pion, in cosmic rays [15]; and accelerating with the discovery of a multitude of mesons during the 1950's and the subsequent decades. Yet, with the birth of solid state physics around 1940 (reflected by the establishment of the American Physical Society Division of Solid State Physics (DSSP) in 1947), the initial focus on one-electron properties of metals and semi-conductors turned in the 1950's to the collective properties of matter through many-body theory and the application of non-relativistic quantum field theory. In due course, the field became known as condensed matter physics, reflecting the increasing focus on all matter, not just solids, and especially on collective phenomena [16] (and marked by the American Physical Society renaming the Division of Solid State Physics as the Division of Condensed Matter Physics in 1978).
It quickly became clear that novel macroscopic phenomena transcended the additive properties of single degrees of freedom, and a renewed focus on collective properties was seen by many as the defining aspect of condensed matter physics, particularly through the field's visionary leadership by P.W. Anderson [17; 18]. Indeed, Fisher is very clear where he stands on this point [1; 2]: "The basic problem which underlies the subject is to understand the many, varied manifestations of ordinary matter in its condensed states and to elucidate the ways in which the properties of the "units" affect the overall, many-variable systems". In other words, condensed matter physics is about interactions primarily.
For present purposes, I would argue that the pivotal developments in the field of condensed matter theory were two iconically quantum mechanical theories: Bogoliubov's 1947 theory of superfluidity for the weakly-interacting Bose gas [19], and the Bardeen-Cooper-Schrieffer 1957 theory of superconductivity [20; 21]. I choose these problems, because they are relevant to Fisher's conception of the task of condensed matter physics. Again, he is very explicit about the questions that animate the task of addressing the basic problem underlying condensed matter physics: "What are the states of matter?... What is their nature?... How do the various states transform into one another?" As we will see, inextricably linked with these questions are two concepts that are subtle and took a long time to be appreciated: the notions of "minimal models" and "levels of description". In fact, there are several features of superfluidity and superconductivity theory to which I especially want to draw attention.
### Minimal models
First, their starting point. Both these works showed the surprising explanatory power of a ludicrously simple model of interacting Bosons or electrons respectively that is clearly a brutal idealization of the real complexities of atomic or electron-phonon interactions. Bogoliubov assumed that the interactions between Bosons were weak in the sense that "corresponds to a neglection of the finiteness of molecular radius, since we do not take into account the intensive increase of [the potential] for small \(r\), which causes the impenetrability of molecules" [19]. At the end of his paper, he argued that the approximation could also be extended to include the finite "molecular" radius by replacing the Fourier transform
of the potential by the amplitude of the binary collision probability in the Born scattering approximation (an observation for which he thanks Landau). This amounts to replacing the real interaction with a contact interaction \(U\delta(\mathbf{r})\). Bogoliubov's explicit recognition of the fact that his approximation was valid at long-wavelengths where the atomic (molecular) size was far smaller than the mean particle separation was present, but it took several decades for this insight to be translated into a renormalization group description [22; 23; 24; 25]. Bardeen, Cooper and Schrieffer (BCS) took the Bardeen-Pines Hamiltonian for electron-phonon interactions including Coulomb effects [26], and replaced the complex interaction with one in which "each state is connected to \(n\) other states by the same matrix element \(-V\)"[20]. They explained that this idealization applied to a subset of states that were paired with equal and opposite spin and momentum, and that this was sufficient to capture the formation of the condensed state, for arbitrary weak matrix element/potential \(V\).
The models that both Bogoliubov and BCS used in their work are examples of what are sometimes known as "minimal models" [27; 28; 29; 30; 31], "model[s] that most economically caricature the essential physics" [28]. However, what this actually means is rather subtle, and sometimes misunderstood; in order to elaborate, we need to discuss the role of asymptotics, and in particular the process by which the minimal models were analyzed.
### Asymptotics and Universality
One reason that both Bogoliubov and BCS were successful where others had failed was that they used non-perturbative methods to extract their physical conclusions. BCS used a variational method, whilst Bogoliubov invented a canonical transformation applied after the naive perturbation theory in the weak potential had been performed. Shortly after the publication of the BCS letter announcing their results [20], Bogoliubov's method was applied to the BCS model [32; 33]. I emphasize the non-perturbative nature of the theories, because in both cases, the results are non-analytic: for example, the depletion of the condensate in the Bose gas problem at zero temperature due to interactions is proportional to \(U^{3/2}\), whilst in the BCS problem, the energy gap depends on the matrix element \(V\) and the density of states at the Fermi level \(N(0)\) through an essential singularity of the form \(\exp(-1/N(0)V)\). Such results are, of course, beyond the scope of simple, regular, finite order perturbation theory and, due to the way they are obtained, are believed to be valid asymptotically as leading order approximations for \(U,V\to 0\).
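To see concretely why finite order perturbation theory cannot capture such results, consider the BCS factor as a function of the coupling (this is a standard calculus observation, not specific to BCS): writing \(f(V)=e^{-1/N(0)V}\), every derivative of \(f\) vanishes as \(V\to 0^{+}\),

\[\frac{d^{n}f}{dV^{n}}\Big{|}_{V\to 0^{+}}=0\quad\text{for all }n\geq 0,\]

so the Taylor series of \(f\) about \(V=0\) is identically zero term by term, even though \(f(V)>0\) for every \(V>0\). No truncation of perturbation theory in \(V\) can generate the gap, and the same remark applies to the fractional power \(U^{3/2}\) in the Bose gas, whose expansion in integer powers of \(U\) does not exist.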
The necessity of using non-perturbative methods to obtain non-analytic results that describe the emergent states of superfluidity and superconductivity is not a result of any "exotic" character of these states. Phase boundaries are by definition the loci where the partition function is non-analytic, and the emergent states are physically not simple perturbations around the normal state, possessing generalized rigidity and elasticity as a result of the symmetry breaking associated with the transition [18; 28]. The non-analytic nature of phase transitions means that any mathematical expansion about the transition point cannot be convergent, and so our results are at best asymptotic in the minimal model interaction parameter. As a result, there will be sub-leading corrections to our predictions, in addition to other regular terms that may arise from the details ignored in constructing the minimal model in the first place. I also want to stress that in both the examples given here, the non-analytic results arise even at mean field level, and proper treatment of the interaction of fluctuations adds another level of non-analyticity.
For example, in the case of the Bogoliubov Bose gas, the minimal model itself is constructed to be an asymptotic long-wavelength limit when the potential range is negligible compared to the mean separation of the particles. Corrections to this approximation arising from including a finite potential range or a non-vanishing particle density will decorate the leading order asymptotic result in a complicated way that needs careful consideration of the various approximations. Nevertheless, the notion of a minimal model as being an economical caricature of the essential physics must be understood in this asymptotic context. The two cannot be separated. Thus, it is not simply that a minimal model is an inaccurate representation of a physical system, a poor approximation that can be improved by embellishment. Instead the point is that the explanatory requirement of a minimal model is to account for the phases and phase transitions of matter, just as Fisher emphasized, and also the mathematical structure of the description of the latter, which necessarily involves asymptotic methods. Gratuitously realistic details of the starting model will only complicate the technical task of extracting the asymptotic structure, without impacting the leading order behavior. Thus, a minimal model represents a universality class of models; whether or not one uses the minimal model or a decoration of it, the asymptotic outcome of a non-perturbative calculation will not be different.
Overall, we are led inexorably to the conclusion that there is a strong connection between asymptotics and universality. Indeed, in my own work, I have shown how renormalization group methods can be used to treat singular mathematical problems, such as those arising in differential equation theory [28; 34] and low Reynolds number fluid dynamics [35]. In these problems, there are no fluctuations at all: the non-analyticity arises at mean field level.
### Minimal models in action: the case of superconductivity
This narrative of the nature of minimal model explanation is not a subjective philosophical interpretation that is open to debate, although it has been challenged [36] and defended [37] in the philosophical literature [38; 29; 31]. It is what physicists actually do. In the case of BCS theory, we have a very important demonstration of the validity and utility of this perspective. The BCS minimal model of superconductivity leaves out a number of embellishments that are desirable to include if one wants to calculate accurately observables such as the transition temperature or energy gap at zero temperature for materials where details of the actual electron-phonon interactions are measurable. Such an elaboration of the BCS theory was initiated by Eliashberg [39], who accounted for the detailed form of the electron-phonon frequency spectrum even in the strong coupling limit, using methods developed by Migdal for the normal state [40], and by McMillan [41] who provided a synthesis that included electron-electron interactions and band structure effects. In Eliashberg's theory, the effects of phonons lead to integral equations accounting for the phonon excitation spectrum and so the theory becomes intrinsically non-local. In fact, the interaction between electrons, mediated by phonons, is inherently a retarded interaction, because of the mass difference between the lattice ions contributing to the phonon modes and the electrons themselves. Distortions of the crystal lattice, described by the phonons, inevitably relax on a much slower time scale than that of the electrons, and basically, it is the fact that electrons experience the local deformations that causes them to interact and be attracted to one another, in terms of their kinematics -- hence the BCS coupling of electrons together in time-reversed momentum states. In McMillan's main result [41] (Eq. (18)), the structure of the BCS formula, with its essential singularity, is modified to \(\exp(-1.04(1+\lambda)/(\lambda-\mu^{*}(1+0.62\lambda)))\) where \(\lambda\) and \(\mu^{*}\) are constants that are respectively calculable from the Eliashberg theory and the effects of Coulomb interactions [42]. For small values of the parameter \(\lambda\) (i.e. the weak coupling limit), the formula reduces to the BCS formula, with \(\lambda-\mu^{*}\) being identified as \(N(0)V\). Thus, we see explicitly how the minimal model can be embellished by including greater levels of realism in order to compare with experimental details, but staying within its universality class and mathematical structure.
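It is instructive to check the weak coupling reduction explicitly. Treating McMillan's empirical prefactor \(1.04\) as approximately unity (an idealization made here purely for illustration), for \(\lambda\ll 1\) one has

\[\frac{1.04\,(1+\lambda)}{\lambda-\mu^{*}(1+0.62\lambda)}\;\approx\;\frac{1}{\lambda-\mu^{*}}\left[1+O(\lambda)\right],\]

so the essential singularity \(\exp(-1/(\lambda-\mu^{*}))\) of the BCS form is recovered, with the identification \(N(0)V\leftrightarrow\lambda-\mu^{*}\).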
### Levels of description and emergence
Bogoliubov's work showed how an interacting system can, within a first approximation, be treated as a collection of non-interacting quasi-particles each of which involves all the original "real" atoms in the system. The quasi-particles were recognized right at the outset as forming an explicitly long-wavelength description of the system, the details of which were later developed in 1957 by Lee and Yang, and others [43; 44; 45; 22; 23; 24; 25]. The many-body ground state and the excitations of the interacting Bose system depend on a single parameter \(U\) characterizing the strength of interactions of the original, "real" Bosons; both the ground state and the excitations are collective properties of the system. In the case of superconductivity, the additional complication is the formation of composite bosons -- Cooper pairs -- which simultaneously undergo a similar ordering as in the Bogoliubov gas. The upshot of these developments is that for many purposes, a superfluid or superconductor has a universal description that effectively hides certain details of the underlying constituents and their interactions. In the case of the weakly interacting Bosons, the interaction parameter \(U\) depends on the s-wave scattering length derived from the collision of two Bosons, and the theory is valid when this length is much shorter than the mean separation of the Bosons. Thus, it can be said that \(U\) relates two different _levels of description_, the microscopic world inhabited by the minimal model of interacting Bosons, and the macroscopic world of the condensate with its superfluid properties and excitation spectrum for the quasi-particles that have emerged. It would be possible in principle to infer the existence of the quasi-particles phenomenologically from thermodynamic measurements, as Landau famously did [46], without knowledge of the properties or even the existence of the atomic level of description.
Although much of the impact of the Bogoliubov and especially the BCS theories derived from their ability to bridge levels of description and compare the predictions of minimal models quantitatively with experiment at the microscopic level of description (e.g. the work of McMillan [41] already mentioned), Fisher is emphatic that this is not their main importance. We already cited his views about connecting "the models with fundamental principles", and elsewhere in the same article he writes "The basic problem which underlies the subject is to understand the many, varied manifestations of ordinary matter in its condensed states... an important role is, and, indeed, should be played by various special 'models'.... One might as a theorist argue that more attention should be paid to the connection between models and the fundamental principles as embodied in quantum mechanics." [1; 2] However, he argues, this is misguided for the following reason: "I stress again, that in any science worth the name, the important point is to gain understanding. The language in which the understanding is best expressed cannot be dictated ahead of time, but must, rather, be determined by the subject as it develops. Accordingly, it really would not be a 'great success' to derive the Ising model from atomic theory or quantum mechanics. Indeed, for which physical system would one be 'deriving' it?" [1; 2] His point is that a minimal model such as the Ising model is a minimal model for a ferromagnet, a liquid-gas system, an alloy, a ferroelectric material and, I would add, many other systems, including examples
in biology and ecology ranging from the neurobiology of retinal ganglion cells [47] to the annual nut distribution of pistachio trees [48]. Thus, although "... one does want to understand what the important variables actually are... in many ways, it is more important to understand how these variables interact together and what their 'cooperative' results will be". [1; 2]
To close this important section on levels of description, I want to comment again, in Fisher's prescient words, of the important role played by asymptotics: "Personally, I find the elucidation of the connections between various disciplinary levels a topic of great interest. Typically, delicate matters of understanding appropriate asymptotic limits, both physical and mathematical, are entailed. The issues involved are often very subtle; nevertheless, one must admit, they seldom add significantly to one's understanding of either the 'fundamental' starting theory or of the target discipline. Sad but true!" [1; 2].
To this I would add a point that is not made frequently enough: technical derivations of higher levels of description from more microscopic ones have a limited regime of validity. For example, it is possible to derive the Navier-Stokes equations from Boltzmann's kinetic equation. This is a long and complicated calculation, which I often show the physics students in my statistical mechanics classes. After I have done that, I show them a phenomenological motivation that of course is very brief and closer in spirit to the derivations usually given in fluid mechanics texts. Most students, when asked, find the former derivation more convincing. They are surprised to learn that I do not share this view, and are somewhat shocked to realize that the seemingly more careful, technical derivation is only valid at very low density, whereas we know that the Navier-Stokes equations are an excellent description of fluid phenomena up to very high densities.
### Universal and asymptotic scaling laws in non-critical matter
Fisher structured his article by first describing his perspective on the nature of condensed matter physics, and secondly by giving multiple examples of forms of matter to illustrate his main thesis. The first form of condensed matter that he chose was polymeric matter, focusing first of all on the excluded volume problem for a single polymer [49; 50], and then the problem of the statistical thermodynamics of dilute polymer solutions [51; 27]. Fisher chose this example and not the obvious example of critical phenomena scaling. Why?
I conjecture that he wanted to make two points. First, that matter can enter asymptotic realms in ways other than just being close to a critical point. Fisher was intimately acquainted with asymptotic methods and understood that an example of scaling which has nothing to do with a critical point would be consciousness-raising for some of his audience. This is elaborated on with numerous "critical phase" examples in the chapter of this volume by Leo Radzihovsky. Second, that he wanted to show the relevance of condensed matter physics to a discipline other than physics. He writes [1; 2] "... other challenging problems remain beyond the size of a single isolated molecule. One might say, 'Well, aren't polymers for chemists?' The answer is that many of their properties have proved too difficult for traditionally trained chemists!"
At this time, Fisher was in fact a Professor of Chemistry -- amongst other things -- and had in the early part of his career at King's College, London, worked extensively on the statistics of self-avoiding walks and Ising models, developing methods to enumerate configurations, extract exponents using series expansions, Pade approximants and so on. In an especially interesting paper from this period [52], he and Gaunt estimated the behavior of self-avoiding walks and Ising models in dimensions \(d>4\), finding that the deviations from mean field theory seemed to become very small above four dimensions, a result that must have been on his mind during the development with Wilson of the \(\epsilon=4-d\) expansion [53]. One of the earliest applications of renormalization group theory, and in particular the \(\epsilon\)-expansion, was to the conformations of polymers.
In 1965, Edwards [50] developed a field theoretic approach to the statistics of a self-avoiding polymer chain, based on Bogoliubov's idea of the pseudo-potential [19], and used a functional integral approach to derive a self-consistent theory that showed how in the limit of an asymptotically long chain, the radius of gyration of a single polymer scaled with length \(L\) as \(L^{3/5}\). This theory, like Bogoliubov's, had a regime of validity associated with a description at long wavelengths compared to the range of interparticle forces, in this case being that \(L\) was much greater than the monomer scale. Edwards' result is, like Bogoliubov's, a mean field theory approximation, of course, and ignores fluctuations which add still another level of non-analyticity to the problem. The much more complicated problem of semi-dilute polymer solutions was treated the following year by the same methods [54], subject of course to the same limitations.
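For orientation, the exponent \(3/5\) can also be recovered from the classic Flory balance argument (a heuristic sketch, not Edwards' functional integral derivation): with \(N\propto L\) monomers of size \(a\) and excluded volume parameter \(v\), one balances the entropic elasticity of the chain against the excluded volume repulsion,

\[F(R)\sim\frac{R^{2}}{Na^{2}}+v\,\frac{N^{2}}{R^{d}},\qquad\frac{\partial F}{\partial R}=0\;\Longrightarrow\;R\sim a\,N^{3/(d+2)},\]

which gives \(R\sim L^{3/5}\) in \(d=3\) dimensions.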
In the early 1980's, Ohta and Oono [51; 27] developed a conformation-space renormalization group method which was not based on de Gennes' analogy to the \(n\)-component magnet [55] and thus was able to account for arbitrary molecular weight distributions of the polymers. Ohta and Oono used their method to go far beyond computing simple scaling laws, not only correcting the Edwards mean field results to include fluctuations, but also calculating the universal scaling functions for the statistical thermodynamics of polymer solutions as a function of concentration. Fisher could not hide his excitement in reporting their results, especially the complete comparison with experiment [56] over several decades for the functional form of the osmotic compressibility and other variables. In particular, Fisher enthused: "Thus we see in polymeric matter new, subtle and universal behavior
which we have succeeded in understanding theoretically. But quantum mechanics has had essentially nothing to say about the problem! Indeed, one feels that if some of the giants of the past, like Boltzmann or Gibbs or Rayleigh, were able to rejoin us today, they would be able to engage in research at the cutting edges of condensed matter physics without taking time off to study quantum mechanics first!" [1; 2]
## III Turbulence: Does Fluid Mechanics Matter?
I want to turn now to a topic that did not play a large role in Fisher's article or his research interests: systems far from equilibrium. At the time of his article, condensed matter physicists were beginning to address problems in non-equilibrium pattern formation and phase transition kinetics, and there is a long story to tell there about asymptotic scaling laws, since this is the field where I started my own career. That story is for a future occasion. Here, I want to mention briefly two vignettes from my own research on turbulent fluids. The first of these I know he enjoyed, because I got to tell him about it during a visit to Cornell. Unfortunately, I do not remember if I had the chance to tell him about the second one.
### A turbulent analogue of Widom scaling
The classic problem of fluid dynamics is that fluids appear to be scale invariant when strongly turbulent [57; 58]. Specifically, in a fluid geometry with characteristic scale \(D\) (which might be the diameter of a pipe), characteristic velocity \(U\) (which might be the average mean flow velocity along a pipe) and kinematic viscosity \(\nu\), we define the Reynolds number as \(Re=UD/\nu\). In addition, there is the energy dissipation rate \(\epsilon\), which is in steady state equal to the energy input rate. The so-called energy spectrum -- the kinetic energy per unit mass of fluid per unit wavenumber \(k\) -- was argued by Kolmogorov [59], in a paper universally known as K41, to vary as \(E(k)=\epsilon^{2/3}k^{-5/3}\) at large \(Re\), and this is broadly in agreement with experimental data going back more than 50 years [60]. The classical measurements and theories only concerned themselves with the lowest order moments of the velocity fluctuation probability density, but today there is strong evidence for multifractal scaling of higher order moments [58]. Kolmogorov further showed that the range in \(k\)-space over which this scaling occurred was intermediate between the scale of forcing \(D\) and the Kolmogorov scale of dissipation \(\eta_{K}=(\nu^{3}/\epsilon)^{1/4}\), in other words \(2\pi/D\ll k\ll 2\pi/\eta_{K}\). As \(Re\to\infty\), this range of scaling increases because the Kolmogorov scale gets smaller and smaller, but can be shown to always be above the mean free path, so that the fluid is always in the hydrodynamic limit. In fact, K41 is not quite correct, even for the second order correlation function, and there is a so-called large-scale intermittency correction, \(\eta\), which changes the scaling result to \(E(k)\sim k^{-5/3+\eta}\)[61; 62].
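The widening of the scaling range with \(Re\) is easy to make quantitative. The following minimal numerical sketch uses the dimensionless choices \(U=D=1\), so that \(\epsilon\sim U^{3}/D=1\) (illustrative values only, not tied to any experiment), to show that the extent of the inertial range \(D/\eta_{K}\) grows as \(Re^{3/4}\):

```python
import math

def inertial_range_extent(Re, D=1.0):
    """Estimate D/eta_K with eta_K = (nu^3/eps)^(1/4) and eps ~ U^3/D."""
    nu = 1.0 / Re              # units with U = D = 1, so Re = U*D/nu
    eps = 1.0                  # dissipation rate ~ U^3/D = 1 in these units
    eta_K = (nu**3 / eps) ** 0.25
    return D / eta_K

for Re in (1e3, 1e6, 1e9):
    print(f"Re = {Re:.0e}:  D/eta_K = {inertial_range_extent(Re):.3g}")
# D/eta_K = Re^(3/4): ~1.8e2, ~3.2e4, ~5.6e6 -- the k^(-5/3) scaling
# window widens without bound as Re -> infinity.
```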
To a statistical mechanician, this scaling behavior is reminiscent of what happens in a magnet near a critical point. In general, correlation functions decay as a power-law within a range of wavenumber that is intermediate between the lattice spacing \(a\) and the correlation length \(\xi\). As the temperature \(T\) approaches its critical value \(T_{c}\), the correlation length diverges to infinity, and at any scale accessible to experiment, the correlations will be in the power-law scaling range, ultimately scaling as \(G(k)\sim k^{-2+\eta}\) at the critical temperature itself. Here \(\eta\) is a correction to the mean field result, sometimes called Fisher's exponent, and now understood to be the anomalous scaling dimension of the magnetization \(M\), which vanishes as a power law with exponent \(\beta\) as \(T\to T_{c}^{-}\).
Statistical mechanicians also know that these results are only valid when there is no external field \(H\). When \(H\neq 0\) there are more complicated scaling laws. For example, in general we expect that the magnetization is proportional to the external field, but this linear response law breaks down exactly at the critical temperature: \(M(H,T_{c})\sim H^{1/\delta}\), where \(\delta\) is another critical exponent that can of course be calculated by the renormalization group. In the critical region \(H\sim 0\) and \(t\to 0\) (where \(t\equiv(T-T_{c})/T_{c}\)), Widom showed [63] that the behavior of the magnetization, ostensibly a function of both \(H\) and \(t\), is actually a function of a combined variable, predicting a data collapse of \(M(H,T)\) when plotted appropriately, similar to the data collapse that we talked about with polymer solution theory. This scaling was independently discovered by Kadanoff, and explained in a famous paper the following year [64], and this discovery by Widom and Kadanoff was instrumental in the development of the renormalization group.
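The content of Widom's hypothesis is easiest to appreciate by performing a data collapse oneself. The sketch below uses synthetic data built from the mean field exponents \(\beta=1/2\), \(\delta=3\); the scaling function \(f\) is invented purely for illustration, chosen only to have the correct limits (\(f(x)\sim x\) for small \(x\), recovering linear response, and \(f(x)\sim x^{1/\delta}\) for large \(x\), recovering the critical isotherm):

```python
import numpy as np

beta, delta = 0.5, 3.0        # mean field exponents

def f(x):
    # invented scaling function with the correct limiting behaviors
    return x / (1.0 + x**2) ** ((delta - 1.0) / (2.0 * delta))

def M(H, t):
    # magnetization constructed to satisfy the Widom scaling form exactly
    return abs(t) ** beta * f(H / abs(t) ** (beta * delta))

H = np.logspace(-4, 0, 50)
for t in (0.01, 0.03, 0.1):
    x = H / t ** (beta * delta)       # rescaled field
    y = M(H, t) / t ** beta           # rescaled magnetization
    assert np.allclose(y, f(x))       # every t falls on the same master curve
```

Real data are of course noisy and the collapse only emerges asymptotically close to the critical point, but the rescaling procedure is exactly the one used below for the turbulence problem.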
Thus, it is a natural question to ask: is there a counterpart to the Widom scaling in turbulence?
I approached this question by asking what are the analogous quantities to \(t\) and \(H\) in turbulence. With Greg Eyink, I had long ago argued that it is natural for \(1/t\) to be analogous to \(Re\), because both control the size of the intermediate regime in \(k\) space where there is power-law scaling as \(1/t\) and \(Re\) respectively go to infinity [65]. But what is the turbulent analogue of \(H\)? My reasoning was that \(H\) is a variable that couples to and induces magnetization. In a pipe, it turns out that the laminar flow is linearly stable, but if it has rough walls, the roughness will excite turbulence. Thus, my guess was that the roughness scale \(s\) could be analogous to \(H\). Using the same sort of scaling arguments that Kadanoff and Widom had used, and assuming that turbulence was in fact a non-equilibrium steady state with its own critical point at \(Re\to\infty\), \(s\to 0\), it was possible to predict the analogue of Widom's scaling law [66].
Fortunately, there were experimental data on this very question! In 1933, Nikuradse had measured the friction experienced by a turbulent fluid as it transited a pipe
with rough walls [67]. Importantly, Nikuradse had systematically varied both \(Re\) and the roughness scale \(s\). His data collapsed satisfactorily when plotted according to the formula [66]. However, I had only used the mean field exponents in my collapse, the ones found in K41. Mehrafarin and Pourtolami extended my calculation to include the intermittency correction [68]. Even though the correction is very small for the second order correlation function, they were able to show convincingly that they could extract it by improving the data collapse fit. Their result was consistent with known estimates for the intermittency exponent, obtained by direct measurements of velocity fluctuations.
This result is extraordinary for the following reason. In phase transition theory, it is known that there is a connection between purely thermodynamic critical exponents and those associated with spatial correlations [28]. It turns out that it is possible to deduce Fisher's exponent for correlations, the anomalous dimension of the order parameter, directly from the thermodynamic exponent \(\delta\) that we mentioned above quantifies the breakdown of linear response theory at the critical point. In my treatment of the turbulent scaling problem, the analogue of the thermodynamic exponent is one that concerns the scaling of the pressure drop along the pipe with \(Re\). So, using the data collapse scaling law, if it had been known in 1933, Nikuradse could have measured indirectly the intermittency exponent of turbulence characterizing the strong spatial fluctuations!! In other words, the fluctuations of turbulence are directly related to the pressure drop along a pipe, i.e. the dissipation experienced. This implies that there are non-equilibrium fluctuation-dissipation relations in turbulence, but at the present time, we do not know how to extract them.
There were other developments that are important to mention. My Illinois colleagues Pinaki Chakraborty and Gustavo Gioia came up with an ingenious heuristic mean field argument that predicted the various exponents that are visible in different regimes of the Nikuradse data [69; 70], at least at mean field level [71]. With Nicholas Guttenberg I worked out the scaling laws and simulated the flow in a rough two-dimensional pipe (i.e. a soap film suspended between two wires) [72], where the point was that in two dimensions a different scaling of the velocity correlations from K41 is possible, because of the flow of angular momentum fluctuations (enstrophy). This fact meant that we could in principle construct flows with different velocity correlation scaling from the K41 one, and observe the effect on the pressure drop, i.e. the friction, and thus test the fluctuation-dissipation relation that had been discovered. Our predictions were fully confirmed in experiments that we performed with Walter Goldburg at Pittsburgh and Hamid Kellay in Bordeaux [73; 74; 75].
Overall, these findings suggest that fully-developed turbulence is controlled by a non-equilibrium critical point, with strong connections to a far from equilibrium statistical mechanics through an unknown fluctuation-dissipation relation.
However, this was not the only surprise in treating turbulence as a problem in statistical mechanics [76].
### Fluids become turbulent through a non-equilibrium phase transition
Up to now, we have talked about the large \(Re\) behavior of turbulence. But how do fluids transition from predictable, smooth, laminar flows to unpredictable, fluctuating turbulent flows? In certain shear flows where the laminar-turbulence transition is sub-critical, including pipe flow, we now have compelling theoretical [77; 78; 79; 80; 81; 82; 83], computational [84; 85] and experimental [86; 87; 88; 89] evidence that this transition is a non-equilibrium phase transition in the universality class of directed percolation [90]. A synthesis is beginning to emerge [91; 76; 92], and most workers in this small field tend to agree that the one-dimensional problem is understood in varying levels of detail, but the problem of the transition to turbulence in two-dimensions is still not understood theoretically, even though the experimental data are very compelling [88].
I believe that Fisher would have been excited by these developments, because it is fascinating to see phase transition physics, which he calls "my own love", emerge unexpectedly in a field such as fluid dynamics, where there is no explicit stochasticity, no partition function, and no obvious connection to directed percolation. This is an intriguing example of universality, to be sure. But I think Fisher would have been equally delighted by the way that this prediction was made. When my students and I set out to tackle this problem, our perspective was that the worst starting point for understanding the transition to turbulence was the Navier-Stokes equations. Instead, our goal was to construct the level of description that corresponded to what would be Landau theory for an equilibrium transition. To this end, we used direct numerical simulations to identify the weak long-wavelength modes that are relevant at a putative continuous laminar-turbulence transition, and once we had identified these, constructed the generic description, which turned out to be a stochastic predator-prey or activator-inhibitor model [81]. In other words, we wanted to identify the variables that defined the universality class of the laminar-turbulence transition. This model led to directed percolation in a straightforward way using known techniques [93], and we showed that it was able to reproduce the universal aspects of the phenomenology at the laminar-turbulence transition [81; 76; 83].
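For readers who wish to see the transition with their own eyes, here is a minimal sketch of \((1+1)\)-dimensional bond directed percolation, the universality class in question; the lattice update rule and the critical value \(p_{c}\simeq 0.6447\) are those of the standard bond model, not of the actual predator-prey description of Ref. [81]:

```python
import numpy as np

rng = np.random.default_rng(0)

def active_density(p, L=2000, T=2000):
    """Density of active sites after T steps of (1+1)D bond directed percolation."""
    s = np.ones(L, dtype=bool)                  # start fully active ("turbulent")
    for _ in range(T):
        left = np.roll(s, 1)                    # neighbor on the tilted lattice
        # each site has two incoming bonds, independently open with probability p
        s = (left & (rng.random(L) < p)) | (s & (rng.random(L) < p))
        if not s.any():
            return 0.0                          # absorbing ("laminar") state reached
    return s.mean()

for p in (0.55, 0.6447, 0.75):                  # below, near, and above p_c
    print(f"p = {p}: active density = {active_density(p):.3f}")
# below p_c the activity decays into the absorbing state; above p_c a finite
# density of active sites survives -- the analogue of sustained turbulence.
```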
We did not set out to derive the directed percolation transition from the Navier-Stokes equations, because we already knew that this sort of derivation from low levels of description cannot be done systematically, let alone rigorously, even for the simplest systems such as equilibrium magnets. We also did not attempt to predict the critical Reynolds number of the laminar-turbulence transition, because we know from the renormalization group in equilibrium statistical mechanics that this is not universal.
In a later paper reviewing the statistical mechanics approach to turbulence [76], we did give a heuristic motivation for the predator-prey description, at least at mean field level, but I believe that it is fair to say that the majority of fluid mechanicians do not yet share Fisher's perspective that it is more important to understand the cooperative interactions of the emergent degrees of freedom at the appropriate level of description than to systematically derive these degrees of freedom. The reason for his perspective, and the reason why it is so counterintuitive, is that our algorithm is to heuristically guess the emergent level of description, and then use the renormalization group picture to identify only the relevant degrees of freedom at the appropriate fixed points, thus obviating the need to calculate quantities that will not affect the final goal of our understanding.
## IV Epilogue
It is an unavoidable temptation to imagine the reaction when Fisher's manuscript arrived on Feshbach's desk. Not only does its very title threaten to undermine the rationale for the whole enterprise, but Fisher doubles down, arguing at one point that "...it is a mistake to view complementarity as merely a two-terminal black box...a fully reductionist philosophy, while tenable purely as philosophy, is the wrong way to practice real science!" For many editors and organizers of a symposium on the legacy of Niels Bohr, Fisher's article might have seemed like an unwarranted provocation. But I suspect that in Feshbach's case, he appreciated the article as a masterly contribution that both recalled the extraordinary collision between physics and philosophy of a previous age, and heralded the dawn of a new era where this confrontation would be renewed, but on fresh ground, whose boundaries had been demarcated by Fisher's warning shot across the bow.
Surprisingly, it would be over a decade before the first skirmishes took place. I would date them to the 2001 publication of a book by R.W. Batterman, entitled "The devil in the details: Asymptotic reasoning in explanation, reduction, and emergence" [29]. Drawing on sources such as the works of Michael Berry on caustics and asymptotics [94; 95], Leo Kadanoff [64], Michael Fisher [96] and Ken Wilson [97] on critical phenomena and renormalization group, and my own work on asymptotics and renormalization group [98; 99], as summarized especially in [28], Batterman weaves together the concepts of minimal models, asymptotics, renormalization group theory and emergence in a thesis that explicitly recognizes how the modern way of doing science is neither Popperian nor Kuhnian, partly because of a more sophisticated notion of falsifiability entailed by concepts such as universality. This work and its extensions [31; 38] have been increasingly influential during the last 20 years, and the battles have been fought mostly by philosophers of science. Most recently, Batterman has used hydrodynamic linear response theory as an example to expound further on these ideas, echoing many of the themes that Fisher advocated and which I have described here [38].
I believe that the philosophical debates about minimal models are especially important for complex systems such as biology and turbulent fluid mechanics, because the choices one makes in modeling are so much more difficult than in physics. I discussed above how the minimal model and renormalization group approach to modeling at the appropriate level of description was successful in understanding quantitatively the laminar-turbulence transition in certain shear flows. The idea of making effective models of turbulence is, however, not new to fluid mechanics. Indeed, there is a long tradition of making approximate models, and these are discussed in the textbooks, especially Pope's excellent monograph on turbulent flow, section 8.3 [57]. These are not minimal models, because the "ultimate objective is to obtain a tractable quantitative theory or model that can be used to calculate quantities of interest and practical relevance." [57] This means using models as approximations for computer simulations or simple analytical calculations whose utility is assessed by factors such as cost and ease of use, range of applicability and accuracy [57]. In general, it is not the goal to devise universal scaling functions, for example, in the way that minimal modeling was used in the semi-dilute polymer problem [27; 51].
That being said, it must not be forgotten that Kolmogorov's program of analyzing turbulence began with two similarity hypotheses about universality, and this is to my knowledge the first example where the renormalization group perspective was articulated clearly [59], and followed up in many seminal works by Barenblatt (Kolmogorov's last student!) [100; 101]. The first of these is that the energy spectrum can be written in the middle or inertial range of scales as \(E(k)=(\nu^{2}/\eta_{K})F(k\eta_{K})\), where the limit \(kL\to\infty\) has been assumed to exist and been taken, and the scaling function \(F(z)\) "must be the same for all cases of locally isotropic turbulence" [59]. This hypothesis does _not_ require the large Reynolds number limit to have been taken, and indeed it has been verified to a good approximation even for transitional turbulent flows [102]. Only when a second similarity hypothesis is taken, that the \(Re\to\infty\) limit exists, does the K41 scaling emerge. In fact, the connection with critical phenomena is very close: the first similarity hypothesis is not quite correct. The \(kL\to\infty\) limit does not exist, and the asymptotics exhibit incomplete similarity [100; 101] leading to the existence of a new exponent, the intermittency exponent that we described earlier.
In the renormalization group-informed perspective presented throughout this article, minimal models are in the universality class of the transition, and thus give a quantitatively accurate account of the transition. They are not approximations in the sense that they have inadequate realism. As we have discussed above, the predictions of minimal models are quantitatively accurate, despite the fact that the minimal models are seemingly lacking in realism. In biology, there are many more levels of description than in physical systems, because of the intertwining of molecular sequence, three-dimensional structure, large-scale molecular motions, elasticity, gene expression, metabolism and energy flow, regulation, signaling, cell division, tissue mechanics, electrical activity,..., all the way up to ecology and evolutionary processes. Choosing the right questions and the right levels of description is critically important, because there is no other way forward. As Fisher wrote: "Schrodinger's equation really does have _almost no_ relevance to what one actually does in vast areas of chemistry." [1; 2] I believe that the analysis of biological modeling from the perspective of philosophy of science would be a very interesting project, one which might bring a broader range of topics into the field, and which might ultimately help biophysicists and biologists in their efforts to understand these complex systems by being aware of the necessity to choose the right level of description. In my own work, there are a number of examples where this has been attempted, in topics ranging from the evolution of the genetic code [103], the open-ended growth of biological complexity [104], to the topological scaling laws that have recently been uncovered in phylogenetic trees [105].
For physicists, it was all over by the early 1990s, as even elementary particle physicists came to the clear realization that their subject was about effective field theory, and not about fundamental physical law. This was a remarkable 180 degree turn for a community which by and large had regarded renormalizability as a necessary but not sufficient condition for fundamental quantum field theories, and which still adhered to the program of first understanding the basic constituents of matter, so that they could then be combined to understand the low energy world to which we have direct access. But within a few years of Fisher's article, this view was superseded by the general acceptance of effective field theory, the fascinating history of which has been beautifully described from a personal perspective by Weinberg [106] and summarized intellectually by Georgi [107]. I want to end by briefly talking about effective field theory, through the lens of Georgi's review, because it is truly remarkable to see the convergence between the modern perspective of condensed matter, as championed by Fisher, and the modern perspective of high energy physics, as embodied in effective field theory.
### High energy physics: does high energy matter?
The starting point of effective field theory is the recognition that the goal has changed. The goal is not to create a theory of everything but, according to Georgi, "to isolate a set of phenomena from all the rest, so that we can describe it without having to understand everything... Fortunately this is often possible. We can divide the parameter space of the world into different regions, in each of which there is a different appropriate description of the important physics." [107] This is the pragmatic rationale for doing effective theory, but historically, this is not how it developed. If we had a theory of everything, it would be awfully unwieldy to calculate something specific, and so for convenience you "use the effective theory... it makes calculations easier, because you are forced to concentrate on the important physics". [107] In practice this means doing an expansion in scale, shrinking to zero size the features of the physics that are smaller than the scale of interest. As Georgi writes [107]: "this gives a useful and simple picture of the important physics. The finite size effects that you have ignored are all small and can be included as perturbations." The attentive reader will have noticed that this passage is uncannily reminiscent of the modeling strategy articulated earlier in this article by Bogoliubov in 1947. [19] In high energy physics, this strategy is equivalent to removing massive particles, but the price paid for this massive oversimplification (sorry, I could not resist!) is that the ultraviolet regularization is now non-trivial, because of the way that coupling constants vary with scale. Integrating out the high energy degrees of freedom results in a non-local theory because of virtual exchange processes with the neglected massive particles. The program of effective theory replaces these non-local interactions with local interactions that are [107] "... constructed to give the same physics at low energies... Thus the domain of utility of an effective theory is necessarily bounded from above in energy scale." This is of course reminiscent of Bogoliubov (prompted by Landau) connecting his minimal model to reality by replacing his point interaction amplitude with the amplitude of the binary collision probability in the Born scattering approximation, and of course to the Eliashberg extension of the BCS model of superconductivity.
I pause to note another important connection to condensed matter physics. The scale-dependence of interactions was discovered in the context of critical phenomena by Kadanoff [64] and Patashinski and Pokrovsky [108] in 1966 but identified earlier during the development of the renormalization group [109; 110], and used by the Landau [111] school in the context of quantum electrodynamics and the famous "Moscow Zero" [112; 113; 114; 115; 116] -- the divergence of the coupling constant that renders the field theory ill-defined at high energies. I refer the interested reader to a thoughtful modern perspective on the Moscow Zero in the context of condensed matter physics [117] and the remarkable experimental observation of the scale-dependence of the coupling constant in graphene [118].
The effective field theory is also bounded below in energy. This is because the effective field theory will generate its own massive particles, and on energy scales smaller than these masses, these particles can again be eliminated to generate a new effective theory at a lower energy scale too. Thus an effective field theory lives in an intermediate or "middle" scale of energy.
Effective field theory is in practice looked at in a different way, because we do not have any information about the high energy theory that we invoked above. So effective field theory is used to describe the physics of an energy scale of interest up to a certain level of accuracy, with a small or finite number of parameters that "parameterize our ignorance in a useful way" [107]. This is not the same as the old-fashioned view of renormalizability in field theory, because it is expected that with increasing energy, the non-renormalizable interactions get replaced with a new effective theory.
Weinberg recounts [106] how his perspective started to change in 1976, when he learned about Wilson's approach to critical phenomena by integrating out the ultraviolet degrees of freedom, and using the renormalization group equation to ensure that physical quantities are cutoff independent. He realised that this entails introducing "every possible interactions, renormalizable or not, to keep physics strictly cutoff independent. From this point of view, it doesn't make much difference whether the underlying theory is renormalizable or not... Non-renormalizable theories, I realized, are just as renormalizable as renormalizable theories" [106].
Georgi describes this change in perspective in a remarkable comment that sounds the death knell for the fundamental viewpoint of elementary particle physics [107]: "How does this process end? It is possible, I suppose, that at some very large energy scale... the theory is simply renormalizable in the old sense. This seems unlikely... It may even be possible that there is no end, simply more and more scales as one goes to higher and higher energy. Who knows? Who cares?"
Michael Fisher would agree.
###### Acknowledgements.
First and foremost I want to acknowledge the many inspirations, insights and perspectives that I learned from Michael's writings and personal discussions in our scientific interactions. One of my own struggles with renormalization group was to figure out how to separate it from the context of statistical and quantum field theory. The way Michael presented his own work, as well as his pedagogical introductions to the field, aided my understanding of the field that he had helped invent. Indeed, Michael cared deeply about the presentation of science, as well as its content; and in a pre-powerpoint world, he was ahead of the curve in bringing his talks to life by writing on his slides in real time, filling in the gaps he had purposefully left. In this way, he managed to combine the immediacy of a blackboard talk with the opportunity to create a skillful layout that aided the presentation. I felt that I had got to know him properly when he shared with me his secret: the slides were written in indelible marker, but the in-lecture annotations were written in water-soluble marker, and were easily washed away after the talk, ready for the next one. To the extent that I am successful as an educator and communicator of science, it is partly because of Michael's example, along with those of Jim Langer and Sir James Lighthill.
I also want to use this opportunity to acknowledge with thanks my friend and collaborator Yoshi Oono, whose brilliant and sometimes iconoclastic perspectives uniquely influenced my understanding of science, even going back to the time when I was a graduate student. It is no accident that Michael had the breadth and intellectual taste to choose to begin the "forms of matter" section of his article with Yoshi's largely overlooked work on the conformational space renormalization group of polymeric matter. I remember that Yoshi was excited that Michael had highlighted his work in his article, and that is how I learned of Michael's article.
I also wish to thank my other collaborators and students, whose work was mentioned here: Gregory Eyink, Hong-Yan Shih, Tsung-Lin Hsieh, Chi Xue, Zhiru Liu, Xueying Wang, Lin-Yuan Chen, Nicholas Guttenberg, Tuan Tran, Alisia Prescott, Pinaki Chakraborty, Hamid Kellay, Gustavo Gioia, Walter Goldburg, Maksim Sipos, Olivier Martin, Fong Liu, Kalin Vetsigian, Carl Woese, John Veysey.
I wish to thank Robert Batterman for many discussions on the philosophy of science, and for his skillful articulation and observations about the way modern statistical physicists approach the task of practicing science.
Lastly I thank Amnon Aharony and Leo Radzihovsky for inviting me to contribute this essay, for their helpful suggestions that improved the manuscript, and for their patience during the unforeseen delays.
This work was partially supported by the Simons Foundation through Targeted Grant "Revisiting the Turbulence Problem Using Statistical Mechanics" (Grant No 662985(N.G.)).
|
2306.10642 | Two and three photon fusion into charmonium in ultra-peripheral nuclear
collisions | In this paper we investigate the production of charmonium states in two and
three photon fusion processes in nucleus -- nucleus collisions at the CERN
Large Hadron Collider (LHC) energies. Our results indicate that the
experimental study of these processes is feasible and can be used to constrain
the theoretical decay widths and give information on the non $c - \bar{c}$
components of these states. | R. Fariello, D. Bhandari, C. A. Bertulani, F. S. Navarra | 2023-06-18T21:46:00Z | http://arxiv.org/abs/2306.10642v1 | # Two and three photon fusion into charmonium in ultra-peripheral nuclear collisions
###### Abstract
In this paper we investigate the production of charmonium states in two and three photon fusion processes in nucleus - nucleus collisions at the CERN Large Hadron Collider (LHC) energies. Our results indicate that the experimental study of these processes is feasible and can be used to constrain the theoretical decay widths and give information on the non \(c-\bar{c}\) components of these states.
Keywords: Quantum Chromodynamics, Exotic Vector Mesons, Photon - photon interactions. PACS: 12.38.-t, 24.85.+p, 25.30.-c
## I Introduction
During the last twenty years dozens of new charmonium states have been observed at the LHC [1; 2; 3; 4; 5; 6; 7]. Some of them are, beyond any doubt, multiquark (or exotic) states, i.e., states in which the minimum quark content is \(c\bar{c}q\bar{q}\). This is the case of all charged exotic states [7]. Among the charge neutral states there are some which are, for several reasons, incompatible with the \(c\bar{c}\) configuration. This is the case of the most famous exotic state, the \(X(3872)\), which is now called \(\chi_{c1}(3872)\). There are other charge neutral states, whose multiquark nature is still under debate, such as the \(\psi(3770)\).
The central discussion in this field is about the internal structure of the multiquark states. The most often studied configurations are the meson molecule and the tetraquark. The main difference between a tetraquark and a meson molecule is that the former is compact and the interaction between the constituents occurs through color exchange forces whereas the latter is an extended object and the interaction between its constituents happens through meson exchange forces [1; 2; 3; 4; 5; 6; 7].
One aspect that is sometimes forgotten is that, being quantum systems, these states can be mixtures. There may be charmonium-tetraquark, charmonium-molecule or tetraquark-molecule mixtures. Here again, different works which consider these multiquark states as mixtures do not reach a consensus. For example, in the case of the well studied \(\chi_{c1}(3872)\), in Ref. [8] the mass and strong decay width were very well reproduced assuming that it has a \(c\bar{c}\) component with a weight of 97 % and a tetraquark component with 3 % weight. On the other hand, in Ref. [9] it was shown that, in the case of production in proton-proton collisions, the best description of the data could be achieved with a charmonium-molecule combination, i.e. \(\chi^{\prime}_{c1}-D\bar{D}^{*}\), in which the \(c\bar{c}\) component is of the order of \(28-44\) %. In spite of the discrepancies, it is remarkable that in both works a large \(c\bar{c}\) component is required to explain the data.
The study of exotic states started in \(B\) factories and then went to hadron colliders. The hadronic production of exotic states became a new way to discriminate between different configurations. The production of \(\chi_{c1}(3872)\) in proton-proton collisions in the pure molecular approach was studied in [10; 11; 12]. In [13], the analysis of recent data from the LHCb with the comover interaction model favored the compact tetraquark configuration. An attempt to use the pure tetraquark model to study \(\chi_{c1}(3872)\) and \(T_{4c}\) (\(X(6900)\)) production in proton-proton collisions was presented in [14]. All these works have improved our understanding of these new states, but there are still important questions to be answered.
The very recent publication by the CMS collaboration [15] reporting the observation of the \(\chi_{c1}(3872)\) in Pb-Pb collisions opened a new era in the study of exotics in heavy ion collisions. The main advantage of using heavy ion projectiles is the very large number of produced \(c-\bar{c}\) pairs. In the case of central collisions, the main disadvantage is that the total number of produced particles is very large and it becomes difficult to search for the multiquark states.
It is also possible to study multiquark states in ultra-peripheral collisions (UPCs). High energy hadrons are an intense source of photons (for a review see Refs. [16; 17; 18; 19; 20; 21]). At large impact parameters (\(b>R_{h_{1}}+R_{h_{2}}\)), the photon - induced interactions become dominant, with the final state being characterized by the multiquark state and the presence of two intact hadrons if the resonance was produced in two- or three-photon interactions. Experimental
results at the LHC [22; 23; 24; 25] have shown that the study of photon - induced interactions in hadronic collisions is feasible and can be used to improve our understanding of the QCD dynamics. The idea of studying exotic meson production in UPCs was pioneered in [26], where the production cross section of several light and heavy well known mesons in nucleus-nucleus collisions was computed. Later, in Refs. [27] and [28], the same formalism was applied to the production of mesons and heavy exotic states in pp, pA and AA collisions.
In this work we will focus on \(c-\bar{c}\) states, giving special attention to states which are presently quoted by the PDG [29] as \(c-\bar{c}\), but whose nature is still under debate and which might still be multiquark states, or at least, might have a multiquark (either tetraquark or molecular) component. We will argue that we can use photon fusion processes in UPCs to confirm (or not) their \(c-\bar{c}\) nature. This is possible because in these processes we only use QED and a well established method to project quark-antiquark pairs into bound states, avoiding some model dependence inherent to hadronic processes. We will revisit and update the calculations performed in [30] and include new states. We will study the most recently observed particles using the quantum number assignments published in the most recent compilation made by the Particle Data Group [29]. We shall consider both two-photon and three-photon processes. As it will be seen, all the ingredients of the calculation are fixed. The formalism developed in [30] applies to fermion-antifermion systems, being thus appropriate to the study of conventional quarkonium states. We will also apply it to the controversial cases, where the multiquark nature of the state is still under debate. A future experimental confirmation of our predictions would establish the quark-antiquark nature of these states.
This paper is organized as follows. In section II we present a short description of the formalism used for particle production in two-photon interactions at hadronic colliders. In section III we discuss meson production in three-photon interactions. In all cases we present the update of the results obtained for the production of charmonium in Pb-Pb collisions, including new states and making predictions for LHC. Finally, in section IV we present a brief summary and discussion of the results.
## II Two photon processes
The theoretical treatment of UPCs in relativistic heavy ion collisions has been extensively discussed in the literature [16; 17; 18; 19; 20; 21]. In what follows we will only review the main formulas needed to make predictions for meson production in two and three photon interactions, which were derived in [30].
The differential cross section for the production of C-even mesons through two-photon fusion is given by [30]:
\[\frac{d\sigma}{dP_{z}}=\frac{16(2J+1)}{\pi^{2}}\frac{Z^{4}\alpha^{2}}{M^{3}} \ \frac{\Gamma_{\gamma\gamma}}{E}\int d{\bf q}_{1t}d{\bf q}_{2t}\ ({\bf q}_{1t}\times{\bf q}_{2t})^{2}\ \frac{\left[F_{1}(q_{1t}^{2})F_{2}(q_{2t}^{2})\right]^{2}}{\left(q_{1t}^{2}+ \omega_{1}^{2}/\gamma^{2}\right)^{2}\left(q_{2t}^{2}+\omega_{2}^{2}/\gamma^{2 }\right)^{2}} \tag{1}\]
where \(P_{z}\), \(E\), \(M\) and \(J\) are the longitudinal momentum, energy, mass and spin of the produced meson, respectively; \(\Gamma_{\gamma\gamma}\) is the two-photon decay width of the meson; \(Z\), \(\alpha\) and \(\gamma\) are the atomic number, the fine structure constant and the Lorentz factor. Finally, \(F_{1}\) and \(F_{2}\) are the projectile and target form factors. Following [30] it is easy to relate the meson variables with the photon energies \(\omega_{1}\) and \(\omega_{2}\):
\[E=\omega_{1}+\omega_{2}\,,\qquad\omega_{1}-\omega_{2}=P_{z}\,,\qquad{\rm and}\qquad\omega_{1}\omega_{2}=M^{2}/4\,.\]
The photon energies \(\omega_{1}\) and \(\omega_{2}\) are related to the mass \(M\) and the rapidity \(Y\) of the outgoing meson by \(\omega_{1}=\frac{M}{2}e^{Y}\) and \(\omega_{2}=\frac{M}{2}e^{-Y}\).
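As a quick numerical illustration of these kinematic relations (the masses used below are simply representative charmonium values taken from Table 1 and the text), one can tabulate the photon energies needed to produce a state of mass \(M\) at rapidity \(Y\):

```python
import math

def photon_energies(M, Y):
    """Photon energies (GeV) producing a state of mass M (GeV) at rapidity Y."""
    w1 = 0.5 * M * math.exp(Y)
    w2 = 0.5 * M * math.exp(-Y)
    return w1, w2

for M, Y in ((2.984, 0.0), (3.415, 2.0), (6.9, 0.0)):  # eta_c, chi_c0, X(6900)
    w1, w2 = photon_energies(M, Y)
    assert abs(w1 * w2 - M**2 / 4.0) < 1e-12           # omega_1*omega_2 = M^2/4
    print(f"M = {M} GeV, Y = {Y}: omega_1 = {w1:.3f} GeV, omega_2 = {w2:.3f} GeV")
```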
As it was already mentioned, \(F(q^{2})\) is the nuclear form factor and it plays a crucial role in this formalism. The precise form of the form factor is the main source of uncertainties in our calculations. The Woods-Saxon distribution, with central density \(\rho_{0}\), size \(R\) and diffuseness \(a\), gives a good description of the densities of the nuclei. Fortunately, this distribution is very well described by the convolution of a hard sphere and a Yukawa function [31]. In this case, the form factors can be calculated analytically:
\[F(q^{2})=\frac{4\,\pi\,\rho_{0}}{A\,q^{3}}\ \left[\sin(qR)-qR\cos(qR)\right]\ \left[\frac{1}{1+q^{2}a^{2}}\right]. \tag{2}\]
For Pb we use \(R=6.63\) fm and \(a=0.549\) fm, with \(\rho_{0}\) normalized so that \(\int d^{3}r\rho(r)=208\) [32]. With the above expressions it is easy to compute the total cross sections with an adequate form factor [31].
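As a sketch of how Eq. (2) can be evaluated in practice for Pb, consider the snippet below. The normalization \(\rho_{0}=3A/4\pi R^{3}\), which enforces \(F(0)=1\) for the hard-sphere part, is our assumption here; a full Woods-Saxon normalization would shift it slightly:

```python
import math

A, R, a = 208, 6.63, 0.549                  # Pb parameters quoted above (fm)
rho0 = 3.0 * A / (4.0 * math.pi * R**3)     # assumed normalization, F(0) = 1

def form_factor(q):
    """Eq. (2): hard sphere convolved with a Yukawa; q in fm^-1."""
    if q < 1e-8:
        return 1.0                          # the bracket ~ (qR)^3/3 as q -> 0
    hard_sphere = (4.0 * math.pi * rho0 / (A * q**3)
                   * (math.sin(q * R) - q * R * math.cos(q * R)))
    return hard_sphere / (1.0 + q**2 * a**2)

for q in (0.0, 0.1, 0.2, 0.5):              # the relevant virtualities are small
    print(f"q = {q:.1f} fm^-1:  F(q^2) = {form_factor(q):+.4f}")
# F falls off rapidly with q: the nuclear size cuts off large transverse
# momenta in the integrand of Eq. (1).
```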
During the derivation of the above formula for the cross section, we had to use a prescription to bind together the produced quark and antiquark into a bound state. We did this using the projection operators [30]
\[\bar{u}\cdots v \longrightarrow \frac{\Psi(0)}{2\sqrt{M}}\ {\rm tr}\left[\cdots(P\!\!\!/+M)i\gamma^{5}\right]\] \[\bar{u}\cdots v \longrightarrow \frac{\Psi(0)}{2\sqrt{M}}\ {\rm tr}\left[\cdots(P\!\!\!/+M)\not{\hat{e}}^{*}\right] \tag{3}\]
where \(\cdots\) denotes any matrix operator. The first equation applies to spin 0 and the second to spin 1 particles, respectively. In these equations \(\Psi({\bf r})\) is the bound state wavefunction calculated at the origin and \(\hat{e}^{*}\) is the polarization vector of the outgoing vector meson. Squaring the corresponding amplitude yields the factor \(|\Psi(0)|^{2}\), which is then related to the decay width \(\Gamma_{\gamma\gamma}\) through the formula derived by Van Royen and Weisskopf in Ref. [33] (see the discussion in [30]) for fermion-antifermion annihilation. Hence, because of the hadronization prescription, the cross section formulas derived in [30] apply to quark-antiquark states. Nevertheless, in order to obtain a first estimate we shall use the Van Royen - Weisskopf formula also for states, which are very likely multiquark states, such as the \(X(6900)\).
In what follows we will compute the production cross sections for conventional \(c-\bar{c}\) states and also for states whose status is still under debate. Therefore our results will serve as a baseline for the experimental search in UPCs. If our predictions are confirmed this will be an argument in favor of the quark-antiquark assignment. If there are large discrepancies between data and our numbers, this will indicate the existence of a molecular or tetraquark component. As mentioned in the introduction, charge neutral states can always be mixtures and in the existing calculations involving mixtures, the \(c-\bar{c}\) component is always large. Hence our calculations will be relevant. Our strategy is conservative. Instead of creating a model for the production of multiquark states, we stick to the well known QED amplitudes complemented by experimental information.
In Table 1 we show the cross sections for the production of \(C\) even mesons in Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV using the formalism described above. Comparing with the results obtained in [30] we observe some discrepancies, which are due to the use of updated values of the measured decay widths. We have also included the results for the \(\eta_{c}(2S)\), which was not so well measured at that time.
In Table 2 we show the update of the results obtained in [28] for the production cross section of the \(J=0\) and \(J=2\) particles. In the PDG compilation, the quantum numbers and the nature of the \(X(3940)\) are still undefined and its two-photon decay width was not measured. We have used the theoretical values calculated in [34; 35]. The states \(\chi_{c0}(3915)\) and \(\chi_{c2}(3930)\) are treated as conventional \(c-\bar{c}\) scalar and tensor states respectively. However, these assignments have been questioned (see, for example, [36]). In Table 2 we included results for the very recently measured \(X(6900)\) state [37]. This state was seen in the \(J/\psi\)-\(J/\psi\) invariant mass spectrum and therefore it could be a \(c\bar{c}c\bar{c}\) state. After the observation there was a series of works discussing its structure and hadronic production. Among them, Ref. [38] is of special relevance to us. The authors have used the equivalent photon approximation (EPA) and the convolution formula included in the Appendix for the sake of discussion. The formalism described there is quite similar to the one described above and the use of the Low formula for the \(\gamma\gamma\to R\) reaction is equivalent to using the Van Royen-Weisskopf formula. Since in Ref. [38] the authors did not know the two-photon decay width of the \(X(6900)\), they could not be very precise in their estimate. Later, this information was extracted from the analysis of light-by-light scattering in Ref. [39]. The main observation was that the fit of the measured \(\gamma\gamma\) invariant mass spectrum becomes much better when one adds a resonance of mass \(\simeq 6900\) MeV. Using different assumptions in the analysis they arrive at the values of \(\Gamma_{\gamma\gamma}\) quoted in Table 2, where I stands for interference and NI for no-interference (for more details see [39]). These values are surprisingly large and, when inserted into our formulas, yield very large production cross sections.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline State & Mass [MeV] & \(\Gamma_{\gamma\gamma}\)[keV] & \(\sigma^{\rm LHC}\)[mb] \\ \hline \hline \(\pi^{0}\), \(0^{-+}\) & 134 & 0.0078 & 38.0 \\ \(\eta\), \(0^{-+}\) & 547 & 0.46 & 17.3 \\ \(\eta^{\prime}\), \(0^{-+}\) & 958 & 4.2 & 21.8 \\ f\({}_{2}\), \(2^{++}\) & 1275 & 2.4 & 22.4 \\ a\({}_{2}\), \(2^{++}\) & 1318 & 1.0 & 8.3 \\ \(\eta_{c}\), \(0^{-+}\) & 2984 & 5.15 & 0.43 \\ \(\chi_{c0}\), \(0^{++}\) & 3415 & 2.2 & 0.11 \\ \(\chi_{c2}\), \(2^{++}\) & 3556 & 0.56 & 0.02 \\ \(\eta_{c}(2S)\), \(0^{-+}\) & 3637 & 2.14 & 0.09 \\ \hline \end{tabular}
\end{table}
Table 1: Cross sections for production of C-even mesons in Pb-Pb ultra-peripheral collisions at \(\sqrt{s_{NN}}=5.02\) TeV. The decay widths are taken from the PDG [29].
## III Three-photon processes
The formalism described in the previous section is analogous to the equivalent photon approach and the cross section could be rewritten as the well known convolution EPA formula (given in the Appendix) for the process \(\gamma\gamma\to R\), where \(R\) is any integer spin resonance. However, this formula can only be used for the production of \(J=0\) or 2 states. For the case of vector meson production we need the three-photon fusion process. In Ref. [30] we derived the expression for the cross section of three-photon fusion into a C-odd meson. In differential form it reads:
\[\frac{d\sigma}{dP_{z}} = 1024\,\pi\,\Big{|}\Psi(0)\Big{|}^{2}(Z\alpha)^{6}\frac{1}{M^{3}E }\ \int\frac{dq_{1t}\ q_{1t}^{3}\ \left[F(q_{1t}^{2})\right]^{2}}{\left(q_{1t}^{2}+ \omega_{2}^{2}/\gamma^{2}\right)^{2}} \tag{4}\] \[\times \int\frac{dq_{2t}\ q_{2t}\ \left[F(q_{2t}^{2})\right]^{2}}{\left[q_{2t }^{2}+(2\omega_{1}-\omega_{2})^{2}/\gamma^{2}\right]^{2}}\ \left[\int\frac{dk_{t}\ k_{t}\ F(k_{t}^{2})}{ \left(k_{t}^{2}+(\omega_{1}-\omega_{2})^{2}/\gamma^{2}\right)}\right]^{2}\]
The definitions of the variables are as in Eq. (1). However, in the present case, the squared wave function \(|\Psi(0)|^{2}\) can no longer be related to the \(\gamma\gamma\) decay width. On the other hand, vector mesons can decay into \(e^{+}e^{-}\) pairs and these decay widths are very well known experimentally. Following a similar derivation as for the \(\gamma\gamma\) decay, the \(e^{+}e^{-}\) decay width of the vector mesons can be shown [33] to be proportional to the wave function squared, i.e. \(\Gamma_{e^{+}e^{-}}\propto|\Psi(0)|^{2}\). Using the relation derived in [33] we arrive at [30]:
\[\sigma=\int d\omega\,96\,\pi\,\frac{\Gamma_{e^{+}e^{-}}}{M^{3}}\,\frac{n( \omega)}{\omega}\ \ H(M,\omega) \tag{5}\]
where \(n(\omega)\) is given by:
\[n(\omega)=\frac{2}{\pi}\ Z^{2}\alpha\ \int\frac{dq\ q^{3}\ \left[F(q^{2})\right]^{2}}{\left(q^{2}+\omega^{2}/\gamma^{2}\right)^{2}}. \tag{6}\]
and
\[H(M,\omega)=Z^{4}\alpha^{3}M^{2}\int\frac{dq_{2t}\ q_{2t}\ \left[F(q_{2t}^{2}) \right]^{2}}{\left[q_{2t}^{2}+(M^{2}/2\omega-\omega)^{2}/\gamma^{2}\right]^{2 }}\ \left[\int\frac{dk_{t}\ k_{t}\ F(k_{t}^{2})}{k_{t}^{2}+(M^{2}/4\omega- \omega)^{2}/\gamma^{2}}\right]^{2} \tag{7}\]
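As a numerical illustration, the photon number \(n(\omega)\) of Eq. (6) can be evaluated by straightforward quadrature. The sketch below is ours (not from [30]); it assumes the form_factor() function of the earlier sketch is in scope, natural units with \(\hbar c=0.1973\) GeV fm, and a per-beam Lorentz factor \(\gamma\simeq 2675\) for Pb beams at \(\sqrt{s_{NN}}=5.02\) TeV (our value, quoted for orientation only).

```python
# A sketch (ours, not from [30]) of the equivalent photon number n(omega) of
# Eq. (6), evaluated by quadrature. Assumes form_factor() from the earlier
# sketch is in scope. Units: hbar*c = 0.1973 GeV fm, q in fm^-1, omega in GeV.
import numpy as np
from scipy.integrate import quad

Z, alpha, hbarc = 82, 1.0 / 137.036, 0.1973     # Pb charge; GeV fm
gamma_L = 2675.0  # per-beam Lorentz factor at sqrt(s_NN) = 5.02 TeV (assumed)

def n_omega(omega):
    """Photon number n(omega) for photon energy omega in GeV."""
    w = omega / (hbarc * gamma_L)               # omega/gamma in fm^-1
    integrand = lambda q: q**3 * form_factor(q) ** 2 / (q**2 + w**2) ** 2
    val, _ = quad(integrand, 1e-6, 10.0, limit=200)  # F(q) negligible beyond ~10 fm^-1
    return (2.0 / np.pi) * Z**2 * alpha * val

print(n_omega(0.1))   # dimensionless photon number at omega = 0.1 GeV
```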
In Table 3 we present the cross sections for vector charmonium production. The first four lines are just an update of the results found in [30]. The other lines present states which may be exotic. A common feature shared by all these \(\psi\) states (with the exception of \(\psi(3770)\)) is that they are all above a \(D\bar{D}\) threshold and yet this decay mode is not a dominant one. This fact (among other things) raises the suspicion that these are not conventional \(c\bar{c}\) states.
The nature of the \(\psi(3770)\) resonance is still a subject of debate. Conventionally, it has been regarded as the lowest-mass D-wave charmonium state above the \(D\bar{D}\) threshold, i.e. a pure \(c\bar{c}\) meson in the quark model. However, in Ref. [41] it was suggested that the \(\psi(3770)\) may contain a considerable tetraquark component. In that work it was also suggested that the tetraquark nature of the state would reveal itself in the decay \(\psi(3770)\to\eta\,J/\psi\) and a prediction of the decay width in this channel was given. Very recently, this decay was observed by the BESIII collaboration [42] and the measured width was close to the prediction made in [41], giving support to the possible tetraquark component of the \(\psi(3770)\). In our formalism, we treat the vector mesons as \(c\bar{c}\) bound states. So our predicted cross section refers to the production of a conventional charmonium or to the charmonium component of the mixed charmonium-tetraquark state.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline State & Mass [MeV] & \(\Gamma_{\gamma\gamma}\)[keV] & \multicolumn{3}{c|}{\(\sigma\)[\(\mu\)b]} \\ \cline{4-6} & & & 2.76 TeV & 5.02 TeV & 39 TeV \\ \hline \hline X(3940), \(0^{++}\) & 3943 & 0.33 [34; 35] & 5.5 & 9.7 & 32.5 \\ X(3940), \(2^{++}\) & 3943 & 0.27 [34; 35] & 22.5 & 39.6 & 133.0 \\ \(\chi_{c2}(3930)\), \(2^{++}\) & 3922 & 0.08 [29] & 7.1 & 12.4 & 41.7 \\ \(\chi_{c0}(3915)\), \(0^{++}\) & 3919 & 0.20 [29] & 3.4 & 6.0 & 20.1 \\ X(6900), (I) & 6900 & 67 [39] & 120.5 & 238 & 912 \\ X(6900), (NI) & 6900 & 45 [39] & 81 & 160 & 612 \\ \hline \end{tabular}
\end{table}
Table 2: Cross sections for production of C-even charmonium states in Pb-Pb ultra-peripheral collisions at different energies. The highest energy might be relevant for collisions at the FCC [40].
The Particle Data Group (PDG) has been updating the parameters of vector charmonium-like states, \(\psi(4160)\) and \(\psi(4230)\), thanks to improved data analysis techniques and the higher statistics of the data. The analysis done by BES reported a higher Breit-Wigner (BW) mass for \(\psi(4160)\): \(M=4191.7\pm 6.5\) MeV [43]. Although the updated mass and width parameters of these two states are closer to each other, they are commonly regarded as different states with the same quantum numbers whose underlying nature remains elusive. \(\psi(4160)\) is regarded as a \(2^{3}D_{1}\)\(c\bar{c}\) state due to its consistency with the predictions of the quark potential model [44]. The \(\psi(4160)\) and \(\psi(4230)\) have the same quantum numbers with a mass difference of about 40 MeV but can hardly be accommodated in the quark model at the same time [44]. Furthermore, while the \(\psi(4160)\) appears in the open charm channels, it is absent in the hidden-charm channels, and the decay channels of \(\psi(4230)\) listed in the PDG table are mostly hidden-charm channels. Clearly, these states deserve further studies. In [45] it has been argued that the \(\psi(4160)\) and \(\psi(4230)\) are in fact the same state. The measurement of the production cross sections of these two states in three-photon fusion may help in elucidating their nature.
## IV Summary
In this work we have studied charmonium production in UPCs at LHC energies due to two- and three-photon fusion processes. These are clean processes in which the initial-state particles remain intact in the final state, and the produced particle can be detected with two rapidity gaps separating it from the projectiles. We have used the QED formulas (derived in [30]) complemented with experimental data on decay widths. We have predicted sizeable values for the cross sections in Pb-Pb collisions. We conclude that the experimental study is worth pursuing: it can be useful to constrain theoretically evaluated decay widths and, ultimately, it can help in determining the structure of the considered states, confirming or not their quark-antiquark nature.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline State & Mass [MeV] & \(\Gamma_{e^{+}e^{-}}\)[keV] & \(\sigma\)[nb] \\ \hline \hline \(\rho^{0}\) & 770 & 6.77 & 2466.9 \\ \(\omega\) & 782 & 0.6 & 215.3 \\ J/\(\psi\) & 3097 & 5.3 & 476.5 \\ \(\psi(2S)\) & 3686 & 2.1 & 161.4 \\ \(\psi(3770)\) & 3770 & 0.26 & 19.5 \\ \(\psi(4040)\) & 4040 & 0.86 & 59.7 \\ \(\psi(4160)\) & 4160 & 0.48 & 32.4 \\ \(\psi(4230)\) & 4230 & 1.53 & 101.5 \\ \(\psi(4415)\) & 4415 & 0.58 & 36.9 \\ \hline \end{tabular}
\end{table}
Table 3: Cross sections for production of C-odd mesons in Pb-Pb ultra-peripheral collisions at \(\sqrt{s_{NN}}=5.02\) TeV. The decay widths are taken from the PDG [29].
## Appendix: Equivalent photon approximation for two-photon processes
In the equivalent photon approximation, the cross section for the production of a generic charmonium state, \(R\), in UPCs between two hadrons, \(h_{1}\) and \(h_{2}\), is given by (see e.g. [16; 20])
\[\sigma\left(h_{1}h_{2}\to h_{1}\otimes R\otimes h_{2};s\right)\ =\ \int\hat{\sigma}\left(\gamma\gamma\to R;W\right)N\left(\omega_{1}, \mathbf{b}_{1}\right)N\left(\omega_{2},\mathbf{b}_{2}\right)S^{2}_{abs}( \mathbf{b})\mathrm{d}^{2}\mathbf{b}_{1}\mathrm{d}^{2}\mathbf{b}_{2}\mathrm{d} \omega_{1}\mathrm{d}\omega_{2}\ \,, \tag{8}\]
where \(\sqrt{s}\) is the center-of-mass energy for the \(h_{1}h_{2}\) collision (\(h_{i}=p,A\)), \(\otimes\) characterizes a rapidity gap in the final state and \(W=\sqrt{4\omega_{1}\omega_{2}}\) is the invariant mass of the \(\gamma\gamma\) system. The quantity \(N(\omega_{i},b_{i})\) is the equivalent photon spectrum generated by hadron (nucleus) \(i\), and \(\hat{\sigma}(\gamma\gamma\to R;W)\) is the cross section for the production of a state \(R\) from two real photons with energies \(\omega_{1}\) and \(\omega_{2}\). Moreover, in Eq. (8), \(\omega_{i}\) is the energy of the photon emitted by the hadron (nucleus) \(h_{i}\) at an impact parameter, or distance, \(b_{i}\) from \(h_{i}\). Finally, in the above formula \(S^{2}_{abs}(\mathbf{b})\) is the survival probability, written as the square of the scattering matrix, introduced here to enforce the absorption at small impact parameters \(b\lesssim R_{h_{1}}+R_{h_{2}}\) [46]. The equivalent photon flux can be expressed as follows
\[N(\omega,b)=\frac{Z^{2}\alpha_{em}}{\pi^{2}}\frac{1}{b^{2}\omega}\left[\int u ^{2}J_{1}(u)F\left(\sqrt{\frac{\left(b\omega/\gamma\right)^{2}+u^{2}}{b^{2}} }\right)\frac{1}{\left(b\omega/\gamma\right)^{2}+u^{2}}\mathrm{d}u\right]^{2}\,, \tag{9}\]
where \(F\) is the nuclear form factor of the equivalent photon source. In order to estimate the \(h_{1}h_{2}\to h_{1}\otimes R\otimes h_{2}\) cross section one needs the \(\gamma\gamma\to R\) interaction cross section as input. Usually one uses the Low formula [47], where the cross section for the production of the \(R\) state due to the two-photon fusion can be written in terms of the two-photon decay width of the corresponding state as
\[\sigma_{\gamma\gamma\to R}(\omega_{1},\omega_{2})=8\pi^{2}(2J+1)\frac{\Gamma_ {R\rightarrow\gamma\gamma}}{M_{R}}\delta(4\omega_{1}\omega_{2}-M_{R}^{2})\,, \tag{10}\]
where the decay width \(\Gamma_{R\rightarrow\gamma\gamma}\) can in some cases be taken from experiment or can be theoretically estimated. In the above formula, \(M_{R}\) and \(J\) are, respectively, the mass and spin of the produced state.
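Inserting Eq. (10) into Eq. (8), integrating the fluxes over impact parameter and setting \(S^{2}_{abs}\to 1\), the delta function removes one integration and the cross section collapses to \(\sigma=8\pi^{2}(2J+1)\,(\Gamma_{\gamma\gamma}/M^{3})\int(d\omega/\omega)\,n(\omega)\,n(M^{2}/4\omega)\). The sketch below is ours, for illustration only: it neglects absorption (so it overestimates the full calculation), reuses n_omega(), gamma_L and hbarc from the previous sketch together with R from the form-factor sketch, and adopts a coherence cut-off \(\omega_{\rm max}\sim\gamma\hbar c/R\) of our own choosing.

```python
# Ours, for illustration: Eq. (10) inserted into Eq. (8) with b-integrated
# fluxes and S^2_abs -> 1 (absorption neglected, hence an overestimate).
# Reuses n_omega(), gamma_L, hbarc and R from the previous sketches.
from scipy.integrate import quad
import numpy as np

GEV2_TO_MB = 0.3894  # conversion 1 GeV^-2 = 0.3894 mb

def sigma_two_photon(M, gamma_gg, J):
    """sigma(AA -> AA R) in mb for a meson of mass M and width gamma_gg in GeV."""
    w_max = gamma_L * hbarc / R            # coherence cut-off ~ gamma/R (our choice)
    w_min = M**2 / (4.0 * w_max)
    integrand = lambda w: n_omega(w) * n_omega(M**2 / (4.0 * w)) / w
    val, _ = quad(integrand, w_min, w_max, limit=100)
    return GEV2_TO_MB * 8.0 * np.pi**2 * (2 * J + 1) * gamma_gg / M**3 * val

# eta_c(1S): M = 2.984 GeV, Gamma_gg = 5.15 keV, J = 0 (Table 1); slow, since
# each n_omega() call is itself a quadrature
print(sigma_two_photon(2.984, 5.15e-6, 0), "mb")
```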
###### Acknowledgements.
This work was partially financed by the Brazilian funding agencies CNPq, FAPESP and also by the INCT-FNA and by the IANN-QCD network.
|
2304.06659 | The role of previous generations of stars in triggering star formation
and driving gas dynamics | We present hydrodynamic and magnetohydrodynamic (MHD) simulations of sub
galactic regions including photoionising and supernova feedback. We aim to
improve the initial conditions of our region extraction models by including an
initial population of stars. We also investigate the reliability of extracting
regions in simulations, and show that with a good choice of region, results are
comparable with using a larger region for the duration of our simulations.
Simulations of star formation on molecular cloud scales typically start with a
turbulent cloud of gas, from which stars form and then undergo feedback. In
reality, a typical cloud or region within a galaxy may already include, or
reside near some population of stars containing massive stars undergoing
feedback. We find the main role of a prior population is triggering star
formation, and contributing to gas dynamics. Early time supernovae from the
initial population are important in triggering new star formation and driving
gas motions on larger scales above 100 pc, whilst the ionising feedback
contribution from the initial population has less impact, since many members of
the initial population have cleared out gas around them in the prior model. In
terms of overall star formation rates though, the initial population has a
relatively small effect, and the feedback does not for example suppress
subsequent star formation. We find that MHD has a relatively larger impact than
initial conditions, reducing the star formation rate by a factor of 3 at later
times. | Nicholas P. Herrington, Clare L. Dobbs, Thomas J. R. Bending | 2023-04-13T16:46:40Z | http://arxiv.org/abs/2304.06659v1 | # The role of previous generations of stars in triggering star formation and driving gas dynamics
###### Abstract
We present hydrodynamic and magnetohydrodynamic (MHD) simulations of sub-galactic regions including photoionising and supernova feedback. We aim to improve the initial conditions of our region extraction models by including an initial population of stars. We also investigate the reliability of extracting regions in simulations, and show that with a good choice of region, results are comparable with using a larger region for the duration of our simulations. Simulations of star formation on molecular cloud scales typically start with a turbulent cloud of gas, from which stars form and then undergo feedback. In reality, a typical cloud or region within a galaxy may already include, or reside near, some population of stars containing massive stars undergoing feedback. We find the main role of a prior population is triggering star formation, and contributing to gas dynamics. Early time supernovae from the initial population are important in triggering new star formation and driving gas motions on larger scales above 100 pc, whilst the ionising feedback contribution from the initial population has less impact, since many members of the initial population have cleared out gas around them in the prior model. In terms of overall star formation rates though, the initial population has a relatively small effect, and the feedback does not for example suppress subsequent star formation. We find that MHD has a relatively larger impact than initial conditions, reducing the star formation rate by a factor of 3 at later times.
keywords: galaxies: star formation - ISM: clouds - methods: numerical - hydrodynamics - radiative transfer - HII regions.
## 1 Introduction
Most if not all star formation is influenced by larger scale processes such as spiral arms, bars, feedback from massive stars and large scale turbulence. Spiral arms induce converging flows (Roberts, 1969), bars drive gas flows towards the centres of galaxies (Krumholz et al., 2017; Seo et al., 2019) and are associated with large shearing motions (Sheth et al., 2000; Khoperskov et al., 2018), whilst star formation is often supposed to be triggered within dense shells from HII regions and supernovae (e.g. Elmegreen & Lada, 1977; Tieftrunk et al., 1997; Elmegreen, 1998; Thompson et al., 2004; Martins et al., 2010; Smith et al., 2010; Deharveng et al., 2012). Star formation may thus be influenced by larger scale dynamics, as well as the past star formation history of a region or a galaxy, whereby previous generations of stars may be driving turbulence, or producing shells and bubbles in the ISM.
Incorporating these processes in numerical studies of star formation is challenging. Many simulations do not attempt to explicitly include larger scale processes, and simply model isolated clouds subject to turbulence driven on larger scales (Bate et al., 2002; Clark & Bonnell, 2006; Bate, 2009; Girichidis et al., 2011; Federrath et al., 2014a), or converging flows which are assumed to be from spiral arms, supernova feedback, or cloud collisions (Heitsch et al., 2006; Hennebelle et al., 2008; Banerjee et al., 2009; Ntormousi et al., 2011; Duarte-Cabral et al., 2011; Clark et al., 2012; Walch et al., 2012; Balfour et al., 2015; Dobbs et al., 2015; Fogerty et al., 2016; Dobbs et al., 2020; Liow & Dobbs, 2020). These not only miss the larger scale dynamics, but also the fact that regions of star formation may occur near young massive stars, with such stars undergoing ionisation, winds and supernovae on different timescales. Observationally, we see evidence for sequential star formation (Fukuda & Hanawa, 2000; Deharveng et al., 2003; Preibisch & Zinnecker, 2007), suggesting that later groups of clusters of stars will be forming in or close to regions subject to feedback and where supernovae have already occurred.
An alternative approach is to simulate regions from larger scale simulations at higher resolution (Rey-Raposo et al., 2015; Dobbs, 2015; Shima et al., 2017; Seifried et al., 2017; Rey-Raposo et al., 2017; Bending et al., 2020; Ali, 2021; Rieder et al., 2020; Bending et al., 2022; Rieder et al., 2022; Dobbs et al., 2022). These simulations have the advantage that they capture the resolution necessary to
follow low mass molecular clouds, star formation within molecular clouds, feedback processes and their effect on star formation, as well as spiral arm dynamics, or galaxy collisions. Recent work on these scales is able to follow cluster evolution, showing that interactions with the surroundings, e.g. incoming gas and other clusters, as well as hierarchical cluster formation whereby multiple clusters are forming, and often merging together, are important for cluster formation and evolution, particularly for more massive clusters and longer term cluster evolution (Rieder et al., 2022; Dobbs et al., 2022). A common problem with these previous works, however, both the isolated cloud and galactic region scale simulations, is the boundary conditions, which are usually assumed to be empty or, for grid codes, periodic; again, the prior star formation history (and particularly whether there are any nearby massive stars) is typically neglected. It is not clear what difference missing this information makes to the validity of these models.
Other works, particularly with moving mesh codes, have instead performed zoom-in simulations where the rest of the galaxy is modelled at lower resolution, and one region is modelled at much higher resolution (Smith et al., 2020; Li et al., 2022). However this technique may be less suitable for smoothed particle hydrodynamics (SPH) since this would involve adjacent particles, or mixing of particles, with very different masses. For all codes, these simulations may also be unnecessarily computationally expensive, or memory intensive, since the whole galaxy is modelled even though it is not of interest. And there is still a difference in resolution that cannot be avoided, so for example feedback which occurs outside the high resolution region will only be modelled with low resolution.
The star formation rate is one specific property that may depend on the boundary conditions and prior stellar population. In a recent paper, Bending et al. (2020), we found the surprising result that photoionising feedback appeared to slightly increase rather than decrease star formation. One important difference in this simulation compared to previous work was that the simulation was somewhat larger scale, so rather than feedback simply propagating into a vacuum, feedback causes new star formation to occur. We also saw that star formation initially increased then decayed compared to having no feedback, and postulated that this behaviour may be due to, or exacerbated by not having any initial population of stars. Rather than feedback acting initially to reduce star formation, stars form and then produce feedback. We supposed that including an initial population of stars might affect the ISM at earlier times before the first burst of star formation occurs, possibly regulating star formation better. As well as feedback, we also expect that magnetic fields, which were not included in Bending et al. (2020) will influence the amount of star formation which occurs in the simulations.
In this paper we examine how sensitive results from high resolution simulations of isolated galaxy regions are to the inclusion of the surrounding interstellar medium, i.e. testing the reliability of extracting a smaller region from a large scale simulation, and whether including prior populations of stars is something we need to do when modelling star formation. We also look at whether feedback from an initial population triggers or reduces star formation. Related to this we also compare how the inclusion of feedback generally, the inclusion of a prior stellar population and the inclusion of magnetic fields affect the star formation rates and efficiency of star formation in our regions.
## 2 Methodology
This work uses the three dimensional smoothed particle magnetohydrodynamics code named sphNG (Benz, 1990; Benz et al., 1990; Bate et al., 1995; Price & Monaghan, 2007). The code includes gravity and ISM chemistry for heating and cooling following Glover & Mac Low (2007). Artificial viscosity is included to capture shocks, using a switch (Morris & Monaghan, 1997) such that the viscosity parameter \(\alpha\) varies between 0.1 and 1. The code includes magnetic fields using a divergence cleaning method (Price & Monaghan, 2004a,b; Price, 2012; Tricco & Price, 2012, 2013; Dobbs et al., 2016).
### 2.1 Magnetohydrodynamics
The code can include non-ideal MHD physics, but for this work we assume ideal MHD. The divergence free constraint is enforced using constrained hyperbolic divergence cleaning (Tricco & Price, 2012), based on the method by Dedner et al. (2002). The magnetic field is coupled to a scalar field, and the divergence error is removed from the magnetic field by propagating \(\nabla\cdot\textbf{B}\) as a damped wave. We adopt the default parameters in SPMHD, and take the cleaning wave speed \(c_{h}\) to be the fast MHD wave speed. We find this is sufficient to keep
Figure 1: Spiral arm galaxy models from Dobbs & Pringle (2013) in the top panel and Dobbs et al. (2017) in the bottom panel. The black box in the top panel shows the region SR1 and the purple box shows SR1e. In the bottom panel the black box shows SR2 and the purple box is SR2e. Explanations of SR1, SR1e, SR2 and SR2e are located in section 2.3.1 and 2.4.
\(h\nabla\cdot\mathbf{B}/|\mathbf{B}|\) typically \(<0.01\). We also include photoionising feedback, modelled in a similar way to Dale et al. (2007) with modifications by Bending et al. (2020). Our models contain a prescription for supernova feedback based on Dobbs et al. (2011) (see Section 2.2).
In our simulations we do not resolve individual stars. We use cluster sink particles which represent clusters (or sub-clusters) and their contained gas. The method is described in detail in Bending et al. (2020); here we provide a summary and highlight some changes. The cluster sink particles form according to the sink criteria detailed by Bate et al. (1995b) and we set the ratio of stars to gas at 0.25. In Bending et al. (2020) we used star to gas ratios of 0.5 and 1 and found that these produced high star formation rates and levels of feedback. Previous observational and theoretical work also suggests star formation efficiencies of \(\sim 0.3\) on scales below our resolution, due for example to jets and outflows (Matzner and McKee, 2000; Alves et al., 2007; Federrath et al., 2014b; Offner and Chaban, 2017; Pezzuto et al., 2021). Once a cluster sink particle is formed it can accrete gas within a given accretion radius. In this work we set this to 0.6 pc.
Stars are assigned to cluster sink particles from a list that is pre-sampled from a Kroupa IMF (Kroupa, 2001). The sample limits are \(0.01M_{\odot}\) to \(100M_{\odot}\), however, we discard all non-massive stars (\(<8M_{\odot}\)) since their photoionising feedback is negligible and they do not undergo SNe. In Bending et al. (2020) and Bending et al. (2022) the limit was \(18M_{\odot}\), however, stars that undergo SNe on timescales longer than 10 Myr now become relevant with the inclusion of an initial population (Section 2.3.2). We continue to only include the photoionising flux from stars with mass greater than \(18M_{\odot}\).
Massive stars are added to the simulation each time the total stellar mass in cluster sink particles increases by \(\Delta M_{i}\) (the massive star injection interval). \(\Delta M_{i}\) is calculated from the pre-sampled list as the total mass of the sample over the number of massive stars in the sample; in this work this gives 305 M\({}_{\odot}\). The properties of each massive star (mass, Lyman flux, lifetime) are given from the pre-sampled list. Each massive star is allocated to the sink with the largest non-stellar mass. This non-stellar mass is calculated as \(M_{\rm sink}\times 0.25-N_{\rm massive}\times\Delta M_{i}\) where \(M_{\rm sink}\) is the mass of the sink, 0.25 is the fraction of mass assumed to be stars (see above) and \(N_{\rm massive}\) is the number of massive stars in the sink (i.e. this represents the total sink mass minus the mass which has already been accounted for as contributing massive stars). This is a slight modification from Bending et al. (2020) in which the sum of the mass of massive stars in each sink was used in the calculation rather than \(N_{\rm massive}\times\Delta M_{i}\), this meant larger sinks contained disproportionately large numbers of ionising stars.
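A schematic of this bookkeeping is sketched below in Python (our reconstruction, not the sphNG source): masses are drawn from a Kroupa (2001) IMF by rejection sampling, \(\Delta M_{i}\) is formed as described, and each new massive star is allocated to the sink with the largest non-stellar mass. The sink masses are invented, and the \(\Delta M_{i}\) of this toy realisation depends on the sampled list and adopted mass cut, so it need not reproduce the 305 M\({}_{\odot}\) of the paper's own pre-sampled list.

```python
# Our reconstruction of the massive-star bookkeeping (not the sphNG source).
import numpy as np

rng = np.random.default_rng(0)

def kroupa_pdf(m):
    """Unnormalised Kroupa (2001) IMF with slopes 0.3 / 1.3 / 2.3."""
    m = np.asarray(m, dtype=float)
    return np.where(m < 0.08, (m / 0.08) ** -0.3,
           np.where(m < 0.5, (m / 0.08) ** -1.3,
                    (0.5 / 0.08) ** -1.3 * (m / 0.5) ** -2.3))

def sample_kroupa(n, m_lo=0.01, m_hi=100.0):
    """Rejection sampling with a log-uniform proposal; bound m*pdf <= 0.08."""
    out = np.empty(0)
    while out.size < n:
        m = np.exp(rng.uniform(np.log(m_lo), np.log(m_hi), size=4 * n))
        keep = rng.uniform(0.0, 0.08, size=m.size) < m * kroupa_pdf(m)
        out = np.concatenate([out, m[keep]])
    return out[:n]

stars = sample_kroupa(200_000)                 # pre-sampled list [Msun]
n_massive = np.count_nonzero(stars > 8.0)      # stars that undergo SNe
dM_i = stars.sum() / n_massive                 # injection interval [Msun]
# the exact value depends on the realisation and mass limits; the paper's
# own pre-sampled list gives 305 Msun
print(f"Delta M_i = {dM_i:.0f} Msun from this toy sample")

def pick_sink(sink_mass, num_massive, dM, star_frac=0.25):
    """Index of the sink with the largest non-stellar mass."""
    return int(np.argmax(star_frac * sink_mass - num_massive * dM))

sink_mass = np.array([4000.0, 2500.0, 900.0])  # invented sink masses [Msun]
num_massive = np.array([2, 0, 0])              # massive stars already hosted
print("next massive star goes to sink", pick_sink(sink_mass, num_massive, dM_i))
```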
### 2.2 Photoionising and supernova feedback
This work includes two forms of stellar feedback, photoionisation and supernovae. The photoionisation method follows a line of sight (LOS) approach described in Bending et al. (2020) and the supernovae are modelled as described in Dobbs et al. (2011) and Bending et al. (2022). Here we will briefly describe the two methods and some improvements.
For the photoionisation a LOS is drawn between every particle-sink pair. The gas particles for which the LOS intersects their volume of compact support are identified. The contribution of each of these particles is calculated as a line integral through its smoothing kernel. These contributions are summed to get the total column density for each line of sight. Once we have computed the column densities we use the on-the-spot approximation to treat direct-to-ground-state recombinations.
We have improved the way in which overlapping HII regions are dealt with since Bending et al. (2020). The issue is that gas particles can receive ionising flux from more than one cluster sink particle leading to ionising flux from a smaller or more distant source being discarded. We have addressed this using a similar approach to Dale and Bonnell (2011). They track the number of sources contributing to the ionisation of each gas particle and then iterate reducing the column density of those particles by a factor of the number of sources until they reach some tolerance. This iterative approach is highly computationally expensive so we instead use the number of sources calculated in the previous timestep to reduce the column densities. Dale et al. (2012) improved the method to consider the fractional contributions of each source to the ionisation of any given gas particle, however, they found this only had a very small effect. We have not implemented this since it has a high memory requirement.
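The geometry of the LOS sum and the non-iterative overlap correction are illustrated by the sketch below. This is a crude stand-in of our own: each particle's kernel column integral is replaced by a Gaussian of width \(h\) (whereas the code integrates the SPH kernel exactly), no neighbour-search acceleration is included, and all inputs are invented.

```python
# Our crude stand-in for the LOS column-density sum and overlap correction.
import numpy as np

def column_density(src, target, pos, mass, h, n_sources_prev):
    """Column density along the segment src -> target (arbitrary units)."""
    d = target - src
    length = np.linalg.norm(d)
    u = d / length
    s = (pos - src) @ u                                     # distance along LOS
    b = np.linalg.norm(pos - src - np.outer(s, u), axis=1)  # impact parameter
    hit = (s > 0.0) & (s < length) & (b < 2.0 * h)          # support is pierced
    sigma = mass[hit] / (np.pi * h[hit] ** 2) * np.exp(-((b[hit] / h[hit]) ** 2))
    # non-iterative overlap correction (cf. Dale & Bonnell 2011): down-weight
    # particles ionised by several sources in the previous timestep
    return np.sum(sigma / n_sources_prev[hit])

rng = np.random.default_rng(3)
pos = rng.uniform(-1.0, 1.0, size=(1000, 3))   # pc (invented)
mass = np.full(1000, 3.68)                     # Msun per particle
h = np.full(1000, 0.05)                        # pc, smoothing lengths
n_prev = np.ones(1000)                         # sources per particle last step
print(column_density(np.zeros(3), np.array([1.0, 0.0, 0.0]), pos, mass, h, n_prev))
```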
Supernovae are modelled using the same method as described in Dobbs et al. (2011) and Bending et al. (2020), by inserting kinetic and thermal energy following the snowplough solution. Stellar lifetimes are computed using the SeBa code (Portegies Zwart and Verbunt, 2012) and once a massive star is inserted into a cluster sink, the sink's current age is checked against the lifetime of the massive star. When the age of the sink exceeds the massive star lifetime, a supernova is initiated and photoionisation ceases.
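A minimal sketch of this lifetime bookkeeping follows; the stellar lifetimes are placeholders (the paper computes them with SeBa), and the energy injection itself, which follows the snowplough solution, is not shown.

```python
# Minimal sketch of the lifetime check: once a cluster sink's age exceeds the
# lifetime of one of its massive stars, that star's ionising flux is removed
# and a supernova is flagged for insertion. Lifetimes here are placeholders.
def check_supernovae(sink_age_myr, stars):
    """stars: list of dicts with 'mass', 'lifetime_myr', 'alive'."""
    events = []
    for s in stars:
        if s["alive"] and sink_age_myr > s["lifetime_myr"]:
            s["alive"] = False          # photoionisation from this star ceases
            events.append(s["mass"])    # SN energy injection happens elsewhere
    return events

stars = [{"mass": 20.0, "lifetime_myr": 8.7, "alive": True},
         {"mass": 60.0, "lifetime_myr": 3.7, "alive": True}]
print(check_supernovae(5.0, stars))     # -> [60.0]
```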
### 2.3 Initial Conditions
In this paper we perform a study of how sensitive our simulations are to the initial conditions. We consider two aspects of the wider galaxy scale physics, to probe their effects on small scale star formation. First we test whether extracting a small region from a large scale simulation is reliable, i.e. do we see the same results in the small region compared with what would be found in a large scale simulation. Next we look at whether prior star formation acting on galaxy scales affects smaller scales by including an initial population of stars. So effectively in the former we are testing the spatial dependence of the initial conditions, whilst in the latter we test the temporal dependence of the initial conditions. These initial condition tests are described in sections 2.3.1 and 2.3.2 respectively.
#### 2.3.1 Region extraction
The initial conditions for our models are taken from two spiral galaxy simulations from Dobbs and Pringle (2013) (GM) and Dobbs et al. (2017) (GMs). These two galaxy models start from the same initial conditions and include self-gravity, a galactic potential, ISM chemistry for heating and cooling and supernova feedback. The only difference is that GMs also includes star particles (see section 2.3.2 for more details on star particles). Once our region is chosen we use a resolution increase method detailed by Bending et al. (2020), in which particles are arranged around a spherical grid centred on the target particle. The radius of the spherical grid is \(2h\), where \(h\) is the smoothing length. The particles are distributed over a spherical grid according to the SPH smoothing kernel. In the parent galactic models the mass per particle is \(312.5\) M\({}_{\odot}\); after splitting the particles using the resolution increase method we achieve 3.68 M\({}_{\odot}\) per particle. This provides us with enough resolution to model ionising feedback effectively, without increasing the particle numbers to the point where the models evolve on unreasonable time scales.
We simulate two different regions, which we call SR1 and SR2, both of which are box-shaped. To test the region extraction, we performed a number of tests, where we extended the SR1 and SR2 in different ways, but focus on two models, SR1e and SR2e here. The region extraction models are run without feedback, since these are faster, and we are simply testing whether the size of the region makes a difference, which does not depend on feedback. The first selected region and its extended version will be referred to as SR1 and SR1e throughout. This region is taken from the GMs model at a radius of 4 kpc from the galactic centre, visible in the top panel of figure 1. SR2 is the same region used in the Bending et al. (2020) models taken from GM (the same galactic model used in their work), residing roughly 2.8 kpc away from the galactic centre, shown in the bottom panel of figure 1 along with the extended model SR2e. We extend both SR1 and SR2 by 0.5 kpc in the (x, -y) and (x, y) plane respectively.
\begin{table}
\begin{tabular}{c c c c c c} \hline Name & Initial & Feedback & MHD & Size of & Number \\ & Population & & & region (kpc\({}^{2}\)) & of Particles \\ \hline Region extraction testing & & & & & \\ \hline SR1 & yes & none & none & 1.20 & \(4.3\times 10^{6}\) \\ SR1e & yes & none & none & 2.55 & \(7.2\times 10^{6}\) \\ SR2 & no & none & none & 0.50 & \(1.1\times 10^{6}\) \\ SR2e & no & none & none & 1.00 & \(2.4\times 10^{6}\) \\ \hline Feedback and MHD testing (using SR1) & & & & & \\ \hline NOFB & yes & none & none & 1.20 & \(4.3\times 10^{6}\) \\ IOFB & yes & photoionising & none & 1.20 & \(4.3\times 10^{6}\) \\ IOSFB & yes & photoionising \& supernovae & none & 1.20 & \(4.3\times 10^{6}\) \\ IOSFBc & no & photoionising \& supernovae & none & 1.20 & \(4.3\times 10^{6}\) \\ MHD5 & yes & none & \(\rm{B}_{ini}=5\,\mu\)G & 1.20 & \(4.3\times 10^{6}\) \\ \hline \end{tabular}
\end{table}
Table 1: Descriptions of the initial conditions of all the models in this study. Each model has a mass resolution of 3.68 \(\rm{M}_{\odot}\) per particle.
Figure 2: Column density snapshots of models SR1 and SR1e on the left window and SR2 along with SR2e on the right window. The snapshots are taken at times t = 0 Myr and 7.7 Myrs. The equivalent sections of gas to SR1 and SR2 are shown by the red dotted lines on the bottom row of SR1e and SR2e. These regions within the red dashed lines are referred to as SR1e(SR1) and SR2e(SR2) in the main text.
#### 2.3.2 The initial population of stars
Our models containing an initial population of stars are extracted from the galaxy model GMs (i.e. SR1 described in the previous section). This model contained star particles which represent sites of star formation. These particles are fixed-mass and non-accreting, with ages between 0 and 40 Myr. The age and mass of each star particle within the region of choice are carried over to the models presented here. To model feedback we convert the star particles from GMs into cluster sink particles as described in Section 2.1. This means that the star particles are assigned a stellar population when creating the initial conditions, and some will contain massive stars which will immediately undergo ionisation. Some will also imminently undergo supernovae. The sink particles can also undergo accretion, which was not the case for the star particles in the GMs simulation.
#### 2.3.3 Magnetic field
For one of our models, we set up a galactic toroidal magnetic field, with an initial magnetic field strength of 5 \(\mu\)G. This field strength is consistent with observations of magnetic field strengths in spiral galaxies and the solar neighbourhood (Sofue et al., 1986; Rand & Kulkarni, 1989; Ferriere, 2001; Beck, 2015; Han, 2017; Pattle et al., 2022). We did not include feedback in this model, rather we use this model to see how much magnetic fields affect star formation compared to the no feedback model, and how the addition of magnetic fields compares to sensitivity to initial conditions.
### 2.4 Model descriptions
We list the models in Table 1. The first set of simulations are the region extraction tests, as described in Section 2.3.1. These all have the same resolution, but cover larger and smaller regions of the galaxy. The next set of models test feedback from an initial population of stars. These models use the same region as SR1, from the galaxy model with star particles, and vary by the feedback mechanisms included. These are split into 4 models containing two control runs. Control run one has an initial population of stars and no feedback to compare to our feedback runs. This will be referred to as NOFB (no feedback). Our second control run has no initial population of stars but includes all forms of feedback to assess the impact of the initial population of stars, which will be named IOSFBc. The remaining two runs have an initial population of stars, comparing ionising feedback only (IOFB) and ionising feedback with supernovae (IOSFB). Our final model (MHD5) is performed without feedback but set up with a galactic toroidal magnetic field, using the same region as SR1.
## 3 Results
### 3.1 Region extraction tests
To test the region extraction method, we compare SR1 with SR1e and SR2 with SR2e. We show SR1, SR2, SR1e and SR2e, all without feedback at a time of 7.7 Myr in Figure 2 (bottom panel). To aid the comparison we have drawn the equivalent region of gas for SR1 and SR2 within the extended models, shown by the dotted red line in figure 2. For SR1 (left panels), we see minimal difference between the isolated region SR1 and the equivalent part of the extended model SR1e. This implies that taking SR1 only is a reasonable representation of how the region would evolve within the host galaxy throughout our simulation time frame.
For SR2 (right hand panels) we see that there are significant differences between the SR2 region and the equivalent area of gas in the extended SR2e model. These are apparent, for example, at the top of the SR2 region, where there is a large(r) cavity or hole and fewer sinks. There is also more gas apparent in the spiral arm, the features are slightly sharper in SR2, and the distribution of sinks is slightly different.
The difference between the two cases arises from how well the gas flow into the region is captured. For both regions, gas is flowing from right to left, entering roughly perpendicular to the spiral arm. For SR1, the arm is situated to the left, there are several 100's parsecs of gas to flow into the arm along all its length, and even at the later time, there is still some gas which has yet to enter the spiral arm. For SR2, the geometry of the region is different, such that at the top of the spiral arm for this region, there is no gas present (i.e. perpendicular to the arm) to flow into the arm. Furthermore at the later time, there is no or little gas still to enter the spiral arm. The only place where there is much gas which can flow into the arm is in the middle, i.e. where gas flowing in extends up to the top right corner. These difficulties are exacerbated at smaller galactic radii, since the difference between the pattern speed of the spiral and the rotation of the galaxy is higher. Taking the velocity of the gas relative to the spiral arm, we estimate that the timescale for gas to travel into the spiral arm in SR1 is similar to the length of the simulation (\(\sim\) 10 Myr), whereas in SR2 the timescale is shorter. Thus the simulation time, coupled with the velocity of the gas relative to the spiral arms, give an indication of the required size of the region which can reasonably be simulated. Another difference, as well as gas replenishment, is that gas at the boundaries will also add additional pressure terms at the edge of the region.
We checked the star formation rates for the models, where we would expect that lack of gas replenishment would mean lower star
Figure 3: The star formation rates of models SR1 and SR2 (solid lines) are compared to models SR1e and SR2e. The dotted lines, referring to SR1e(SR1) and SR2e(SR2), show the star formation rate taken from an equivalent section of gas to SR1 and SR2 within the extended models (see the red dotted regions in figure 2). The star formation rate in the equivalent regions are very similar, whether the extended surroundings are included or not.
formation rates. We calculate the star formation rates for each simulation, selecting only those particles in SR1e and SR2e which are also in SR1 and SR2, shown in figure 3. We found that the difference in star formation rates between the extended models and SR1 and SR2 was surprisingly small. We find the star formation rates over the same particles in SR1e and SR1 are near identical. For the second region, we see a slightly higher star formation rate in SR2e compared to SR2, but overall they are quite similar.
We tested extending SR1 and SR2 in directions both above and below the original regions by 0.5 kpc and found that further extension in the \(y\) direction (i.e. along the spiral arm) made little difference to the results obtained with SR1e and SR2e (i.e. minimal difference in star formation, and similar morphologies of gas in the red outlined region of interest in Figure 2), suggesting that the extension in these directions is less significant than perpendicular to the arm.
### 3.2 Region evolution with initial population
In the previous section we looked at the spatial extent of the initial conditions, here we consider the effect of star formation history by comparing with and without a prior population of stars. In the absence of feedback, an initial population of stars will make minimal difference, so here we are investigating feedback from previously formed stars. The following simulations are performed with the initial conditions for the SR1 model from our previous tests.
Figure 4 shows column densities for the models where we test the impact of an initial population, at 3 different times. At the earlier time of 2.36 Myr, there are minimal differences between the models. At 4.24 Myr we start to see more structure in the IOSFB model, and by 6.93 Myr there are clear if not large differences between the simulations. The largest difference by eye in terms of structure for the non-MHD models is between the NOFB and IOSFB cases, where we see clearer structures, and there are more sink particles in model IOSFB. At 6.93 Myr NOFB has formed 2789 sinks whereas in IOSFB there are 8103 sinks. In all our models, as our regions evolve, the molecular clouds along the spiral arm and inter-arm regions begin to collapse and star formation initiates within the first million years. With feedback, but no initial population (IOSFBc), photoionisation and supernovae will start to shape the gas from the newly formed sinks, but even by 6.93 Myr this has not had a large impact on the gas structure compared to the no feedback model (NOFB). However for the IOFB and IOSFB models, there is both additional feedback from the initial population, and this feedback occurs at an earlier time. We see that IOSFB has the most complex structure, with HII regions and sharp filaments, then IOFB has less, followed by IOSFBc and NOFB. There are also correspondingly fewer sink particles in the models with no initial stellar population and fewer feedback processes.
Thus it appears in our models that increasing the feedback increases the structure in the gas, and at least from the number and
Figure 4: Column density snapshots of all models going left to right NOFB, IOSFBc, IOFB, IOSFB and MHD5 at 2.36, 4.24 and 6.93 Myr. A single column represents a model with each row displaying a snapshot in time. Cluster sink particles are displayed in each snapshot as white scatter points.
distribution of sink particles, also appears to increase star formation. This is most evident in the simulation with an initial population of stars which undergo both ionisation and supernovae, which thus contains both feedback processes operating from the outset of the simulation. From the outset, these are shaping the gas. Dense filaments along the spiral arm are compressed by the expansion of HII regions, whilst supernovae sweep gas through low density regions impacting dense material further aiding collapse.
For the MHD model (MHD5), the effect is the opposite (to increasing feedback): the magnetic fields appear to smooth out the structure compared to the other models, including NOFB. This is due to magnetic pressure preventing the formation of such dense structures.
Figure 5 shows a zoom-in of models NOFB, IOSFBc, IOFB and IOSFB, with column density and temperature along columns 1 and 2 respectively. With no feedback the region is cold, reflecting the temperature of the cold neutral medium and molecular cloud phases of the ISM. Including ionising feedback results in bubbles around sinks that are heated to around \(10^{4}\) K, leading to more warm ionised medium in the ISM. Adding supernova feedback to this means that HII regions are further heated up to roughly \(10^{6}\) K. Looking at the temperature in the second column of figure 5 it is clear that newly forming sinks are distributed around the edges of feedback bubbles. Comparing our control run IOSFBc with IOFB we see that despite IOSFBc having both supernova and ionising feedback the column densities of these two models are relatively similar; IOFB is closer to IOSFBc than to IOSFB. We see some difference in the temperature panels, with hot gas due to two recent supernovae evident. Again if we compare the lower panel, with IOSFB, we see many more supernovae have occurred, which is having a greater impact on the density.
### 3.3 Star formation
We present the star formation efficiency (SFE) and star formation rate (SFR) in figure 6. They are calculated by the relations
\[\mathrm{SFE}=\frac{0.25\times M_{\star}}{0.25\times M_{\star}+M_{\mathrm{gas}}} \tag{1}\]
Figure 5: Feedback models IOSFB and IOFB are shown in comparison with our two control runs NOFB and IOSFBc. These snapshots which feature a zoomed in region in the lower half of the central spiral arm, show column density in the first column and temperature in the second column, at time 6.93 Myr. Cluster sink particles are shown as black points.
Figure 6: Star formation rate is shown in the top panel and star formation efficiency in the bottom panel for models NOFB, IOSFBc, IOFB, IOSFB and MHD5. The details of these calculations are explained in section 3.3.
and
\[\mathrm{SFR}(t_{i})=0.25\times\frac{M_{*}(t_{i})-M_{*}(t_{i-1})}{t_{i}-t_{i-1}} \tag{2}\]
where \(M_{*}=\sum_{n=1}^{N_{\mathrm{sink}}}m_{\mathrm{sink},n}\), \(M_{\mathrm{gas}}=\sum_{n=1}^{N_{\mathrm{gas}}}m_{\mathrm{gas},n}\), 0.25 is the fraction of mass in sinks which is assumed to be converted into stars and \(t_{i}-t_{i-1}\) refers to the difference in simulation time between two neighbouring data dumps. Our models with feedback and an initial population of stars (IOSFB and IOFB) begin to diverge in SFE (lower panel of figure 6) from the no feedback case (NOFB) after 3 Myr. At 7.00 Myr our feedback models, IOSFB and IOFB, reached an SFE of 4.99 % and 3.90 % respectively, whereas our NOFB model reached an SFE of 3.20 %. The control model IOSFBc starts with an SFE of 0 since there is no initial population. This continues for the first million years until star formation occurs. At this point the SFE begins to rise at a rate greater than our NOFB model, reaching 2.91 % by 7.00 Myr. MHD5 shows a flatter slope compared to the other models, reaching an SFE of 2.27 % by 7.00 Myr. This further shows that including both forms of feedback boosts star formation, whilst MHD suppresses star formation.
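For concreteness, a short sketch of how Eqs. (1) and (2) are evaluated between data dumps is given below; the masses and times are invented for illustration.

```python
# Sketch of Eqs. (1) and (2) applied to a time series of total sink mass
# M_star(t) and gas mass M_gas(t) between dumps (values invented).
import numpy as np

eps = 0.25                                        # fraction of sink mass in stars
t = np.array([0.0, 1.0, 2.0, 3.0]) * 1.0e6        # dump times [yr]
M_star = np.array([1.0e4, 3.0e4, 1.2e5, 3.0e5])   # total sink mass [Msun]
M_gas = 4.0e6 - M_star                            # gas reservoir [Msun]

sfe = eps * M_star / (eps * M_star + M_gas)       # Eq. (1)
sfr = eps * np.diff(M_star) / np.diff(t)          # Eq. (2), Msun/yr per interval
print("SFE:", sfe)
print("SFR:", sfr)
```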
Moving to the top panel of figure 6, we see that across all models the rate of star formation remains roughly constant for the first million years, despite the presence of an active population of stars. Before the first stars form, the star formation rate is controlled solely by the accretion of the initial population, which, as we show later in figure 10, is a small contribution. After 1 Myr star formation begins to increase in all cases. This is where the models diverge in SFR, as feedback alters the environment of star formation sites across the spiral arm. With all forms of feedback and an initial population of stars (IOSFB) the star formation rate increases rapidly, peaking at just below 0.2 M\({}_{\odot}\) yr\({}^{-1}\) by 7.00 Myr. Our model with ionising feedback only, IOFB (and an initial population of stars), follows the same trend but does not increase as steeply, reaching a peak of around 0.15 M\({}_{\odot}\) yr\({}^{-1}\) by the same time. This burst of star formation with ionising feedback is similar to models by Bending et al. (2020). When we remove the initial population but include all forms of feedback (IOSFBc) we find a similar shape to the star formation rate but with a stunted peak. This shows the importance of the early supernova activity in boosting the star formation rate.
Finally without any feedback and with an initial population of stars (NOFB) we see the star formation rate increase as gas collapses into stars, and flatten out over time, in the same way as seen by Bending et al. (2020) and Ali et al. (2022). However including magnetic fields suppresses the peak of the star formation rate. Following the MHD5 model the SFR increases in the same way as our other models but plateaus between 3 and 4 million years at SFR values of 0.05 - 0.053 M\({}_{\odot}\) yr\({}^{-1}\), which means star formation suppression by magnetic fields is more influential on the star formation rate than the inclusion of an initial population of stars.
We see that in all cases the star formation rates are higher with feedback, more with an initial population of stars, but still IOSFBc is slightly higher compared to the no feedback model. The increase reflects triggered star formation. Whilst feedback can also inhibit star formation on small scales, this depends somewhat on the sink radius. If the sink radius is smaller, accretion on to the sink is reduced, and for those sinks with feedback, feedback instead heats the surrounding gas so that it doesn't participate in star formation. So for example in Bending et al. (2020), the sink radii were larger and the increase in star formation greater compared to this paper, whilst with sufficiently small sink radii the feedback produces a net decrease in star formation (Dobbs et al., 2022). The impact of feedback from the initial population however will be predominantly to trigger star formation since accretion onto these sinks is minimal (see Section 4.2).
### 3.4 Clouds
Using a friends of friends algorithm we identify cloud structures within our models, similar to Bending et al. (2020). The algorithm groups particles together based on a separation of 1.26 pc to identify members of a cloud. A particle is chosen and grouped with all other particles which lie within 1.26 pc. The particles added to the group are checked in the same way until no more ungrouped particles are within 1.26 pc of any in the group. The process is repeated for ungrouped particles until all particles have been assigned to a group. We do not consider groups of less than 100 particles to be clouds.
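A sketch of this grouping using a KD-tree based breadth-first search is given below (our implementation; the text specifies only the linking criterion). The linking length (1.26 pc) and minimum membership (100 particles) follow the text; the positions are invented, and uniform random points at this density simply percolate into one large group, unlike real clumpy simulation data.

```python
# Our friends-of-friends implementation: KD-tree breadth-first growth with
# linking length 1.26 pc; groups below 100 members are discarded.
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(pos, linking_length=1.26, min_members=100):
    tree = cKDTree(pos)
    group_id = np.full(len(pos), -1, dtype=int)
    next_id = 0
    for seed in range(len(pos)):
        if group_id[seed] != -1:
            continue
        stack = [seed]
        group_id[seed] = next_id
        while stack:                                  # grow group until closed
            i = stack.pop()
            for j in tree.query_ball_point(pos[i], linking_length):
                if group_id[j] == -1:
                    group_id[j] = next_id
                    stack.append(j)
        next_id += 1
    ids, counts = np.unique(group_id, return_counts=True)
    group_id[np.isin(group_id, ids[counts < min_members])] = -1  # too small
    return group_id

pos = np.random.default_rng(0).uniform(0.0, 20.0, size=(5000, 3))  # pc (invented)
groups = friends_of_friends(pos)
print("clouds found:", np.unique(groups[groups >= 0]).size)
```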
We compare the molecular hydrogen content of the clouds formed in four of the models (IOSFB, IOFB, NOFB and MHD5) in figure 7. This figure shows the column density of each region with the clouds overlaid on top. The colour scale of the clouds shows the average H\({}_{2}\) ratio. The H\({}_{2}\) ratios will be lower estimates due to limitations of resolution (Duarte-Cabral et al., 2015; Joshi et al., 2019). The number of clouds found follows the same trend as Bending et al. (2020) and Ali et al. (2022), in which adding feedback leads to cloud fragmentation and so increases the number of cloud structures across the region. We also find that adding MHD to our models reduces the number of clouds significantly. At 6.6 Myr we find 250 clouds with MHD (MHD5), 399 in NOFB, 645 in IOFB and 1021 in IOSFB. Feedback is leading to the fragmentation of clouds, resulting in higher numbers of lower mass clouds across the region. Feedback appears to break up the clouds presumably by increasing density variations, whereas magnetic fields essentially smooth the gas and reduce fragmentation. We also see that in the models where feedback is having more of an effect, i.e. IOSFB and IOFB with the initial population, the molecular gas ratios are higher. As we see in the next paragraph, feedback is acting to compress gas to higher densities than it would reach otherwise, with consequently higher H\({}_{2}\) ratios.
In figure 8 we compute the properties of the clouds in the different simulations. Starting with the density, our feedback models IOSFB, IOSFBc and IOFB show a larger number of clouds with higher densities, indicating that clouds in these models are compressed by the action of feedback. The mass distribution of the clouds is similar for each model, though the number of low mass clouds is suppressed with magnetic fields, perhaps because some low mass structures are smoothed out and no longer meet the criteria for a cloud.
We see that the majority of clouds have larger velocity dispersions in models IOSFB, IOSFBc and IOFB, typically around 1 - 2 km s\({}^{-1}\) with a small number of clouds containing velocities of 4 km s\({}^{-1}\). This is due to momentum from feedback driving the gas motions. Without feedback (NOFB and MHD5), velocity dispersions only reach 2 km s\({}^{-1}\) with most clouds containing velocities of 0.2-0.4 km s\({}^{-1}\).
In the bottom left panel of figure 8 we calculate the virial parameters for the clouds. Most of the clouds within the feedback models are unbound, which is expected as they have higher velocity dispersions from feedback driven motions. NOFB and MHD5 again show similar properties with roughly even numbers of clouds that are bound and unbound. We do see that there is a dip in the number of unbound clouds in the MHD5 model compared to NOFB. This
Figure 8: Properties of clouds are shown across all no feedback and feedback models. The first row shows cloud density, masses and dispersion velocities, spanning left to right. Row 2 shows the virial parameter, 2D aspect ratio and average H\({}_{2}\) ratio for all clouds (again spanning left to right).
Figure 7: Snapshot of models NOFB, MHD5, IOFB and IOSFB are shown at 6.6 Myr. The column density is rendered in grey with the clouds plotted on top coloured by their average ratio of H\({}_{2}\).
may again be due to magnetic pressure smoothing out low density features.
Moving to the middle panel of the bottom row of figure 8, we present the 2D aspect ratio of the clouds. Using this in conjunction with figure 7, the morphology of the clouds can be considered. Models IOSFB and IOFB show clouds with more elongated and filamentary shapes, located at the extent of feedback bubbles. The 2D aspect ratio confirms there are more clouds in the model with the most feedback occurring (IOSFB) with larger aspect ratios. Feedback appears to fragment and shape clouds across the region. The models MHD5 and NOFB in figure 7 show fewer elongated structures, and contain no very high aspect ratio clouds.
The final panel in the bottom right of figure 8 shows the average H\({}_{2}\) ratio within clouds. Again using this with figure 7 we can see that including feedback leads to clouds with higher ratios of molecular hydrogen, up to 70%. This again suggests feedback is pushing clouds to higher densities, promoting the formation of molecular hydrogen.
### Velocity dispersion of gas in our models compared to observations
We compare our models to observed Milky Way cloud complexes with a Larson-type relationship (Larson, 1981) of the form \(\sigma_{v}=10^{-0.2}l^{0.5}\), in a similar way as described in Bending et al. (2022). To reproduce this in our models we calculate the velocity dispersions for gas within randomly placed spheres of increasing radius, from 1 to 100 pc. The velocity dispersion at a given scale is shown in figure 9, and a line of best fit is then drawn for each model. It is clear that models IOSFB, IOSFBc and IOFB better fit the Larson-type relation on the largest scales, meaning that feedback is important in driving gas motions at these scales. Including or excluding an initial population of stars (IOSFB and IOSFBc) affects the fit to the Larson-type relation. With an initial population of stars there are more supernovae occurring throughout the simulation. These supernovae drive turbulent motions at larger scales than the expansion of HII regions with ionising feedback only, and so account for the increased velocity dispersion at larger scales. Our model IOSFBc evolves in a similar way to IOFB, since there are few supernovae throughout the simulation. This is reflected in the Larson-type relation, as they are similar on all scales. Without feedback, the models NOFB and MHD5 do not fit the Larson-type relation as well, and are roughly similar on all scales as MHD does not affect the gas velocities significantly.
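A minimal sketch of this sphere-based procedure, assuming particle positions `pos` and velocities `vel` as `(N, 3)` arrays in pc and km s\({}^{-1}\) (the exact dispersion estimator used in the simulations may differ):

```python
import numpy as np

def larson_fit(pos, vel, rng, radii_pc=np.logspace(0, 2, 12), n_spheres=50):
    """Velocity dispersion of gas inside randomly placed spheres of radius
    1-100 pc, followed by a log-log line of best fit, for comparison with
    the Larson-type relation sigma_v = 10^-0.2 l^0.5."""
    scales, dispersions = [], []
    for r in radii_pc:
        for _ in range(n_spheres):
            centre = pos[rng.integers(len(pos))]   # centre on a random particle
            inside = np.linalg.norm(pos - centre, axis=1) < r
            if inside.sum() > 10:                  # need enough particles
                dv = vel[inside] - vel[inside].mean(axis=0)
                scales.append(r)
                dispersions.append(np.linalg.norm(dv, axis=1).std())
    slope, intercept = np.polyfit(np.log10(scales), np.log10(dispersions), 1)
    return slope, intercept   # compare with (0.5, -0.2)
```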
## 4 Discussion and conclusions
### Impact of the initial region
Our simulations show that, perhaps unsurprisingly, if the gas flowing into the region is included over the timescales of interest, then the properties of the region accurately reflect those that it would have if it were still embedded in the whole galaxy. The simulations in Bending et al. (2020) likely (slightly) underestimate star formation at later times due to missing gas inflow. Choosing a region by tracing gas back (e.g. Dobbs, 2015) is one way to ensure gas flow is modelled, or the required locus of gas can be estimated analytically.
### Impact of the initial population of stars
An initial population of stars could potentially impact our simulations in two ways: by producing warm ionised gas and reducing star formation, or by triggering the formation of new stars. Our simulations clearly find the latter. We find that for the first million years, the feedback has little effect. We expect the reasons for this are twofold: first, HII regions need time to grow, and second, the gas surrounding sinks is already at relatively low density, due to feedback in the original galaxy simulation. Our initial population sinks do not accrete significantly throughout the simulation due to their low density environments, so their main effect is to trigger new star formation, through both ionisation and supernovae. We tested explicitly whether the distribution of the initial population has a significant impact by running a further model with the initial population preferentially placed in denser regions. To do so we take the same initial conditions as IOSFB (and the same feedback during the evolution), but we redistribute the initial population into the denser regions. We locate dense gas by a cutoff of \(3.39\times 10^{-22}\) g cm\({}^{-3}\) and randomly place sinks around gas particles above the cutoff, within a maximum radius of 1 pc (see the sketch below). This relocates sinks in spiral arm regions and dense filaments. We show the star formation rate of this model (named SNKM) compared with IOSFB in the top panel of Figure 10. The star formation rates are very similar, suggesting that the location of the initial population does not make much difference. Looking at the bottom panel of Figure 10, compared to IOSFB there is more accretion at early times and less at later times onto the initial sinks, but the star formation associated with the initial population is minimal compared to the formation of new sinks, including those due to triggering.
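A minimal sketch of the sink relocation used for model SNKM, assuming gas positions in pc and densities in g cm\({}^{-3}\) as arrays (array names and the sphere-sampling scheme are illustrative):

```python
import numpy as np

CUTOFF = 3.39e-22  # g cm^-3, density cutoff defining dense gas

def relocate_sinks(n_sinks, gas_pos, gas_rho, rng, max_radius_pc=1.0):
    """Place each sink within 1 pc of a randomly chosen gas particle whose
    density exceeds the cutoff, i.e. in spiral arms and dense filaments."""
    dense = gas_pos[gas_rho > CUTOFF]
    anchors = dense[rng.integers(len(dense), size=n_sinks)]
    # uniform random offsets inside a sphere of radius max_radius_pc
    offsets = rng.normal(size=(n_sinks, 3))
    offsets *= (max_radius_pc * rng.random(n_sinks) ** (1 / 3)
                / np.linalg.norm(offsets, axis=1))[:, None]
    return anchors + offsets
```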
One aim of introducing an initial population was to potentially improve continuity between the galactic scale simulations and the resimulations. However, if anything the initial population exacerbates this, since the initial population increases triggered star formation, leading to greater disparity between the star formation rate in the global disc simulations and the local sub galactic simulations, and producing greater differences in structure. However, the initial population of stars does drive motions on larger scales from the outset of the simulations, which otherwise is not necessarily captured.
Figure 9: The Larson relation (Larson, 1981), \(\sigma_{v}=10^{-0.2}l^{0.5}\), for velocity dispersion at a given scale, is plotted as the dashed line. This is compared to our Larson-type relation for our no feedback and feedback models. The calculation for the Larson-type relation is explained in section 3.5. For each model the velocity dispersions are shown as points accompanied by a line of best fit.
### Magnetic fields
Magnetic fields have a stronger impact on the rate of star formation than including an initial population of stars. Star formation is suppressed in our MHD model (MHD5) in the inter-arm and spiral arm regions. Small overdensities in the inter-arm regions are smoothed out by the magnetic fields. This decreases the burst in star formation rate we see in the hydrodynamic feedback models after 5 million years by roughly a factor of two. Including MHD produces similar gas dynamics to our hydrodynamic model NOFB (no feedback).
### Summary of results
1. Testing our region extraction by comparing SR1 and SR2 to the equivalent gas within the extended regions SR1e and SR2e shows surprisingly little difference in star formation rate. We note, however, morphological changes in the structure of region SR2, which we attribute to not capturing gas inflow to region SR2 at later times, due to a poor choice of region (lack of gas perpendicular to the arm) and the smaller galactic radius.
2. Including an initial population of stars unexpectedly increases the star formation rate compared to having no initial population. Supernova feedback from the initial population leads to increased triggering of star formation. However, the initial population itself does not accrete very much throughout the simulation, since it resides in low density regions produced by prior feedback within the host galaxy model. This means that, by accretion, its contribution to the rate of star formation is low.
3. Supernovae from the initial population are important in driving gas motions on larger scales. The velocity dispersions on scales > 100 pc show better agreement with the Larson relation (Larson, 1981) than in models without both forms of feedback.
4. Our MHD model shows that magnetic fields influence the rate of star formation more than an initial population of stars does. Magnetic fields suppress star formation across the region due to the extra magnetic pressure within star forming overdensities. Here we only performed one comparison simulation with MHD; we plan to test different field strengths, also with feedback, in future work.
## Acknowledgements
We thank the referee for insightful feedback which has helped improve the paper. All authors in this work are funded by the European Research Council H2020-EU.1.1 under the ICYBOB project, grant number 818940. Our no feedback models were computed using the ISCA High Performance Computing Service located at the University of Exeter. The feedback models were performed with the DiRAC DIAL system, operated by the University of Leicester IT Services, forming part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/K001014/1. DiRAC is part of the National E-Infrastructure.
## Data availability statement
Upon reasonable request to the author, all data in this work will be shared.
|
2307.07017 | The Ionization Profile Monitors in the Recycler Ring | The ionization profile monitors (IPMs) are used to measure the beam size in
synchrotrons. Both the Fermilab Recycler and Main Injector (MI) machines have
IPMs. However, they were not well understood enough to provide confidence in
their measurements. Accurately measuring beam size through the IPMs was crucial
to recognize the loss mechanisms for accelerators and to keep the beam loss to
a minimum. Thus, performing measurements with different parameters using the
IPMs led to a better analysis on how changes in conditions affect the beam
size. The IPM measurements are compared with that of multi-wires in the
upstream transfer line after applying corrections. The results were compared
with other diagnostics and the change in the beam size for different parameters
are presented in this paper. | B. Babacan, R. Ainsworth, K. J. Hazelwood, D. K Morris, P. Snopok | 2023-07-11T22:06:03Z | http://arxiv.org/abs/2307.07017v1 | # The Ionization Profile Monitors in the Recycler Ring +
###### Abstract
The ionization profile monitors (IPMs) are used to measure the beam size in synchrotrons. Both the Fermilab Recycler and Main Injector (MI) machines have IPMs. However, they were not well understood enough to provide confidence in their measurements. Accurately measuring beam size through the IPMs was crucial to recognize the loss mechanisms for accelerators and to keep the beam loss to a minimum. Thus, performing measurements with different parameters using the IPMs led to a better analysis on how changes in conditions affect the beam size. The IPM measurements are compared with that of multi-wires in the upstream transfer line after applying corrections. The results were compared with other diagnostics and the change in the beam size for different parameters are presented in this paper.
## 1 Introduction
The Recycler Ring (RR) is a key part of the Fermilab accelerator complex, used to deliver high intensity beams to the high energy neutrino experiments [1]. In order to understand the loss mechanisms that occur, it is crucial to be able to measure the transverse profile. This paper therefore focuses on the ionization profile monitors (IPMs) [2], which can measure the beam size, with the aim of better understanding how this instrumentation operates and gaining more confidence in its measurements. Unlike the multi-wires (MWs), the IPMs do not cause beam loss, which makes them a better alternative for studying beam sizes in synchrotrons. They also do not share the limitations of the MWs: the IPMs can capture multiple turns in the RR, which is one more reason to study them further and to better understand the measurements they provide.
There are horizontal and vertical IPMs located in the RR and the Main Injector (MI), where the beams reach energies of 8 GeV and 120 GeV, respectively. For the purpose of this study, the IPMs are used to measure the beam size and the emittance of the beam. The results were then compared to those of the MWs located in the RR and in the MI-8 line, the transfer line that connects the upstream machine, the Booster, to the Recycler.
## 2 Ionization Profile Monitors
IPMs are used to measure the beam size and the position of the beam by looking at beam profiles of up to 64,000 individual turns of beam in a cycle. They use micro-channel plates (MCP) to receive data and to collect the excited particles through the electric and magnetic fields. A MCP is made from a resistive material, usually glass, up to 2.0 mm thick, with an array of small tubes, called microchannels, passing from one face of the plate to the other face. The microchannels are typically 5 to 20 microns in diameter, parallel to each other and aligned at an angle of 8 to 13 degrees to the surface of the plate. These IPMs use two stacked MCPs rotated 180 degrees, forming a "chevron" pattern, to increase the gain and reduce the feedback in the MCP. The MCPs are controlled by voltage, and higher voltages can degrade the quality of the MCPs over time.
This also leads to unusable data points: when individual pre-amplifiers for a channel fail, they give disconnected data points within a profile. During data analysis, the channels marked as inoperative were set to the average value of the overall IPM data set to eliminate the poor-MCP issue.
Out of all 64,000 turns, both the horizontal and vertical IPMs store the data locally but only return the first 1000 turns for analysis. This allows the calculation of the sigma \(\sigma\) that represents the beam size.
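As an illustration, a minimal sketch of extracting \(\sigma\) from one turn profile by a Gaussian fit, with inoperative channels replaced by the data-set average as described above (function and array names are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def fit_profile(channel_pos, counts, dead_channels=()):
    """Return the beam size sigma from one IPM turn profile (numpy arrays)."""
    counts = counts.astype(float).copy()
    counts[list(dead_channels)] = counts.mean()  # mask bad MCP channels
    p0 = [counts.max() - counts.min(), channel_pos[np.argmax(counts)],
          (channel_pos[-1] - channel_pos[0]) / 10, counts.min()]
    popt, _ = curve_fit(gaussian, channel_pos, counts, p0=p0)
    return abs(popt[2])  # sigma, in the same units as channel_pos
```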
The IPMs in the MI were used to study the change in beam size as a function of the MCP voltage, since both IPMs there were functioning, whereas the vertical IPM in the RR was not working. The R-squared values of the fits were also calculated to analyze the quality of the fits, and this determined which voltage range gave the best fits. Once an ideal MCP voltage range was determined, the beam size and the emittance were analyzed with the beam intensity as the varied parameter.
Afterwards, the emittance of the beam was calculated by using \(\sigma\) in Eq. (1), where \(\beta\) is a Twiss parameter, \(D\) is the dispersion, and \(\frac{\delta p}{p_{0}}\) is the momentum spread.
\[\frac{\sigma_{x,y}^{2}}{\beta_{x,y}}-\frac{D_{x,y}^{2}}{\beta_{x,y}}\left(\frac{\delta p}{p_{0}}\right)^{2}=\varepsilon_{x,y} \tag{1}\]
The emittance was normalized by using the Lorentz factors \(\beta_{L}\) and \(\gamma\) in Eq. (2).
\[\varepsilon_{N,95\%}\ =\ 6\beta_{L}\gamma\varepsilon_{x,y} \tag{2}\]
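For concreteness, a minimal sketch combining Eqs. (1) and (2); the 8 GeV kinematic factors (\(\beta_{L}\approx 0.994\), \(\gamma\approx 9.5\)) and the example beam size in the call are illustrative only:

```python
def normalized_emittance(sigma, beta_twiss, dispersion, beta_L, gamma_L,
                         dp_over_p=5e-4):
    """Eq. (1): geometric emittance from the measured beam size;
    Eq. (2): 95% normalized emittance via the Lorentz factors."""
    eps = (sigma**2 - (dispersion * dp_over_p)**2) / beta_twiss
    return 6.0 * beta_L * gamma_L * eps

# e.g. horizontal IPM in the RR (Table 1), sigma in metres:
print(normalized_emittance(2e-3, 27.10, -0.32, 0.994, 9.5))
```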
## 3 Multi-Wires
The MW103 and MW104 are located in the Fermilab RR and MWs that start with an '8' are in the MI-8 line. Figure 1 displays the location of those MWs where the horizontal and vertical IPMs are next to MW104 and MW103, respectively.
Although the measurements of the MWs are precise, they also create significant beam loss. The MWs only record a single turn, while the IPMs can look at multiple turns without harming the instrument. Therefore, the MWs in the RR are only operated for one turn, to avoid saturating the scanners and to prevent the beam from damaging the wire planes.
The horizontal and vertical MWs in the MI-8 line measure the number of counts of a proton beam via the wire planes. A Gaussian fit was also applied to the number of counts collected on the wire planes, and the beam size was calculated using the same method as for the IPMs. An example of MW profiles in the RR is shown in Fig. 2. The emittance was also calculated using Eq. (2), with the corresponding Twiss parameters for each MW.
## 4 IPM - MCP Voltage Scan
Different MCP voltage levels were scanned using the IPMs in the MI to see if there is a change in the measured profile. The range varied between 1170 V-1220 V, and three data sets were collected at each MCP voltage level in both cases. This study was done using the IPMs in the MI since both the horizontal and the vertical one work there. The aim was to determine whether the MCP voltage has a significant effect on the measurements and to decide which MCP voltage level is advantageous for the horizontal IPM in the RR when comparing it to the MWs.
The beam size measured by the horizontal IPM showed high uncertainties, and the beam size was observed to be much larger than the average. Therefore, the horizontal IPM is not shown in this paper.
The beam size was observed to increase with voltage in the vertical IPM in the MI, as shown in Fig. 3, and the uncertainty was much higher at low voltage levels. The MCP voltage range of 1170 V-1190 V provided a beam size that varied between 1.5 mm and 2 mm. This voltage range, where the beam size remains almost stable with low uncertainty, was found to be the ideal MCP voltage to use when operating the IPMs in both the RR and the MI.
Although the MCP voltage should not affect the measurements in the IPMs, the opposite was observed. It is also important to note that the vertical IPM could not provide a feasible beam size measurement at MCP voltage levels below 1170 V.
Figure 1: A schematic of the MI-8 line connecting to the Recycler ring. The horizontal IPM is next to MW104 and the vertical IPM is next to MW103.
Figure 2: MW103 and MW104 profiles in the RR.
Figure 3: Beam size change in the vertical MI-IPM.
## 5 Emittance Measurements
Emittance is calculated for comparison because it represents the normalized beam size at any position in the lattice, not at a particular place. For both the IPMs and the MWs, the emittances were calculated using Eq. (2), where each element has different \(\beta\) and \(D\) values, as shown in Table 1. The term \(\delta p/p_{0}\) was set to \(5\times 10^{-4}\) while calculating the emittance.
Some of the horizontal MWs were compared to the IPM in the RR, as listed in Fig. 4. This study involved an intensity scan for both the IPMs and the MWs, and the emittance was shown to increase for both instruments. The horizontal IPM in the RR was operated at 1270 V, as this was within the range of acceptable MCP voltages.
The emittance calculated for the horizontal MWs is shown in Fig. 4. Because the vertical IPM in the RR was not functioning, no data was collected for that IPM and it could not be compared to the vertical MWs. The emittance for the vertical MWs is plotted in Fig. 5. The MW103 in the RR was shown to have lower emittance than the MWs in the MI-8 line.
## 6 Conclusion
The beam size measured by the IPMs was observed to change with the MCP voltage, which sets a constraint on the voltage depending on the beam intensity at which the IPM is operated. At higher beam intensities, a lower MCP voltage level is required to provide a realistic beam profile and better Gaussian fits. It was also observed that operating the IPMs at high MCP voltage with the beam at high intensities could saturate the IPMs enough to damage the MCPs. Therefore, it is important to use the IPMs at suitable MCP voltage levels to prevent damage and poor quality data. This range was determined to be around 1170 V-1190 V.
The condition of the horizontal IPM in the MI should also be taken into consideration. It is older than the vertical IPM, which could affect the number of depleted MCPs and hence could be the reason for the high uncertainty. A comparison between the IPMs in the RR and the MI would be beneficial once all the IPMs are functioning simultaneously.
The horizontal MWs listed in Fig. 4 were shown to agree with each other in the emittance calculation. The emittance grows with beam intensity in the RR and the MI. The emittance of the horizontal IPM in the RR was compared to those of the MWs, and they do not quite agree with each other.
The emittances for the vertical MWs have shown agreement with each other and with the horizontal MWs, with similar values. The emittance became larger as the intensity increased.
A comparison of the vertical IPM to the MWs would be beneficial for future studies once it is functional again. It would be interesting to see how well the horizontal and vertical IPMs compare with each other in the emittance calculations.
## 7 Acknowledgements
Fermi Research Alliance, LLC manages and operates the Fermi National Accelerator Laboratory pursuant to Contract number DE-AC02-07CH11359 with the US DOE.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Element** & \(\beta_{x}\) & \(D_{x}\) & \(\beta_{y}\) & \(D_{y}\) \\ \hline
826 & 48.5 & -0.04 & 9.62 & -0.035 \\
827 & 12.1 & 0.58 & 38.74 & -0.28 \\
829 & 9.66 & 1.37 & 52.84 & -0.22 \\
836 & 46.62 & 3.66 & 8.64 & -0.19 \\
850 & 51.31 & -0.05 & 7.761 & 0.05 \\
851 & 14.75 & 0.10 & 39.13 & 0.20 \\
103 & 10.94 & 0.5 & 54.8 & 0.10 \\
104 & 49.23 & -0.1 & 12.92 & 0.12 \\
IPMH & 27.10 & -0.32 & 23.308 & 0.19 \\
IPMV & 10.89 & 0.58 & 54.84 & 0.095 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Twiss parameters used for the MWs and IPMs
Figure 4: Emittance for Horizontal RR-IPM and MWs.
Figure 5: Emittance for vertical MWs. |
2304.11325 | Deterministic identity testing paradigms for bounded top-fanin depth-4
circuits | Polynomial Identity Testing (PIT) is a fundamental computational problem. The
famous depth-$4$ reduction result by Agrawal and Vinay (FOCS 2008) has made PIT
for depth-$4$ circuits an enticing pursuit. A restricted depth-4 circuit
computing a $n$-variate degree-$d$ polynomial of the form $\sum_{i = 1}^{k}
\prod_{j} g_{ij}$, where $\deg g_{ij} \leq \delta$ is called
$\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}$ circuit. On further restricting $g_{ij}$
to be sum of univariates we obtain $\Sigma^{[k]}\Pi\Sigma\wedge$ circuits. The
largely open, special-cases of $\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}$ for
constant $k$ and $\delta$, and $\Sigma^{[k]}\Pi\Sigma\wedge$ have been a source
of many great ideas in the last two decades. For eg. depth-$3$ ideas of Dvir
and Shpilka (STOC 2005), Kayal and Saxena (CCC 2006), and Saxena and Seshadhri
(FOCS 2010 and STOC 2011). Further, depth-$4$ ideas of Beecken, Mittmann and
Saxena (ICALP 2011), Saha, Saxena and Saptharishi (Comput.Compl. 2013), Forbes
(FOCS 2015), and Kumar and Saraf (CCC 2016). Additionally, geometric
Sylvester-Gallai ideas of Kayal and Saraf (FOCS 2009), Shpilka (STOC 2019), and
Peleg and Shpilka (CCC 2020, STOC 2021). Very recently, a subexponential-time
blackbox PIT algorithm for constant-depth circuits was obtained via lower bound
breakthrough of Limaye, Srinivasan, Tavenas (FOCS 2021). We solve two of the
basic underlying open problems in this work.
We give the first polynomial-time PIT for $\Sigma^{[k]}\Pi\Sigma\wedge$. We
also give the first quasipolynomial time blackbox PIT for both
$\Sigma^{[k]}\Pi\Sigma\wedge$ and $\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}$. A key
technical ingredient in all the three algorithms is how the logarithmic
derivative, and its power-series, modify the top $\Pi$-gate to $\wedge$. | Pranjal Dutta, Prateek Dwivedi, Nitin Saxena | 2023-04-22T05:36:12Z | http://arxiv.org/abs/2304.11325v1 | # Deterministic identity testing paradigms for bounded top-fanin depth-4 circuits+
###### Abstract
Polynomial Identity Testing (PIT) is a fundamental computational problem. The famous depth-4 reduction result by Agrawal and Vinay (FOCS 2008) has made PIT for depth-4 circuits an enticing pursuit. A restricted depth-4 circuit computing a \(n\)-variate degree-\(d\) polynomial of the form \(\sum_{i=1}^{k}\prod_{j}g_{ij}\), where \(\deg g_{ij}\leq\delta\) is called \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) circuit. On further restricting \(g_{ij}\) to be sum of univariates we obtain \(\Sigma^{[k]}\Pi\Sigma\wedge\) circuits. The largely open, special-cases of \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) for constant \(k\) and \(\delta\), and \(\Sigma^{[k]}\Pi\Sigma\wedge\) have been a source of many great ideas in the last two decades. For eg. depth-3 ideas of Dvir and Shpilka (STOC 2005), Kayal and Saxena (CCC 2006), and Saxena and Seshadhri (FOCS 2010 and STOC 2011). Further, depth-4 ideas of Beecken, Mittmann and Saxena (ICALP 2011), Saha, Saxena and Saptharishi (Comput.Compl. 2013), Forbes (FOCS 2015), and Kumar and Saraf (CCC 2016). Additionally, geometric Sylvester-Gallai ideas of Kayal and Saraf (FOCS 2009), Shpilka (STOC 2019), and Peleg and Shpilka (CCC 2020, STOC 2021). Very recently, a subexponential-time _blackbox_ PIT algorithm for constant-depth circuits was obtained via lower bound breakthrough of Limaye, Srinivasan, Tavenas (FOCS 2021). We solve two of the basic underlying open problems in this work.
We give the _first_ polynomial-time PIT for \(\Sigma^{[k]}\Pi\Sigma\wedge\). We also give the _first_ quasipolynomial time _blackbox_ PIT for both \(\Sigma^{[k]}\Pi\Sigma\wedge\) and \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\). A key technical ingredient in all the three algorithms is how the _logarithmic derivative_, and its power-series, modify the top \(\Pi\)-gate to \(\wedge\).
**Keywords** Polynomial identity testing, hitting set, depth-4 circuits.
**2012 ACM Subject Classification** Theory of computation \(\rightarrow\) Algebraic complexity theory.
**Acknowledgement** Pranjal is supported by the project "Foundation of Lattice-based Cryptography", funded by NUS-NCS Joint Laboratory for Cyber Security. Most of the work was carried out when Pranjal was visiting CSE, IIT Kanpur and supported by Google PhD Fellowship. Nitin thanks the funding support from DST (SJF/MSA-01/2013-14), SERB (CRG/2020/000045) and N.Rama Rao Chair.
###### Contents
* 1 Introduction: PIT & beyond
* 1.1 Main results: An analytic detour to three PITs
* 1.2 Prior works on related models
* 1.3 Techniques and motivation
* 1.3.1 The DiDI-technique
* 1.3.2 Jacobian hits again
* 2 Preliminaries
* 2.1 Notations and Definitions
* 2.2 Basics of Algebraic Complexity Theory
* 3 Whitebox PIT for \(\Sigma^{[k]}\Pi\Sigma\wedge\)
* 3.1 Algorithm
* 4 Blackbox PIT for Depth-4 Circuits
* 4.1 PIT for \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\)
* 4.2 PIT for \(\Sigma^{[k]}\Pi\Sigma\wedge\)
* 5 Conclusion
## 1 Introduction: PIT & beyond
Algebraic circuits are natural algebraic analog of boolean circuits, with the logical operations being replaced by \(+\) and \(\times\) operations over the underlying field. The study of algebraic circuits comprise the large study of algebraic complexity, mainly pioneered (and formalized) by Valiant [20]. A central problem in algebraic complexity is an algorithmic design problem, known as Polynomial Identity Testing (PIT): given an algebraic circuit \(\mathcal{C}\) over a field \(\mathbb{F}\) and input variables \(x_{1},\ldots,x_{n}\), determine whether \(\mathcal{C}\) computes the identically zero polynomial. PIT has found numerous applications and connections to other algorithmic problems. Among the examples are algorithms for finding perfect matchings in graphs [13, 14, 15], primality testing [11, 12], polynomial factoring [16, 17], polynomial equivalence [18], reconstruction algorithms [19, 20, 21] and the existence of algebraic natural proofs [23, 24]. Moreover, efficient design of PIT algorithms is intrinsically connected to proving strong lower bounds [15, 1, 16, 17, 18, 19]. Interestingly, PIT also emerges in many fundamental results in complexity theory such as \(\mathsf{IP}=\mathsf{PSPACE}\)[22, 19], the PCP theorem [1, 20], and the overarching Geometric Complexity Theory (GCT) program towards \(\mathsf{P}\neq\mathsf{NP}\)[25, 26, 17].
There are broadly two settings in which the PIT question can be framed. In the _whitebox_ setup, we are allowed to look inside the wirings of the circuit, while in the _blackbox_ setting we
can only evaluate the circuit at some points from the given domain. There is a very simple randomized algorithm for this problem - evaluate the polynomial at a random point from a large enough domain. With very high probability, a nonzero polynomial will have a nonzero evaluation; this is famously known as the Polynomial Identity Lemma [12, 13, 14, 15]. It has been a long standing open question to derandomize this algorithm.
For many years, blackbox identity tests were only known for depth-2 circuits which compute sparse polynomials [1, 10]. In a surprising result, Agrawal and Vinay [1] showed that a complete derandomization of blackbox identity testing for just depth-4 algebraic circuits (\(\Sigma\Pi\Sigma\Pi\)) already implies a near complete derandomization for the general PIT problem. More recent depth reduction results [16, 1], and the bootstrapping phenomenon [1, 13, 14, 15], show that even PIT for very restricted classes of depth-4 circuits (_even_ depth-3) would have very interesting consequences for PIT of general circuits. These results make the identity testing regime for depth-4 circuits a very meaningful pursuit.
_Three PITs in one-shot_. Following the same spirit, here we solve three important (and open) PIT questions. We give the first deterministic polynomial-time whitebox PIT algorithm for the bounded sum of product of sum of univariates circuits [17, Open Prob. 2]. Further, we give a quasipolynomial-time blackbox algorithm for the same class of circuits. These circuits are denoted by \(\Sigma^{[k]}\Pi\Sigma\wedge\) and compute polynomials of the form \(\Sigma_{i\in[k]}\Pi_{j}\left(g_{ij1}(x_{1})+\cdots+g_{ijn}(x_{n})\right)\).
_Whitebox and blackbox PIT for \(\Sigma^{[k]}\Pi\Sigma\wedge\) circuits are in polynomial and quasi-polynomial time, respectively._
A similar technique also gives a quasi-polynomial time blackbox PIT algorithm for the bounded sum of product of bounded degree sparse polynomials circuits. They are denoted by \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) (where \(k\) and \(\delta\) can be up to \(\mathsf{poly}(\log(s))\), where \(s\) is the circuit size).
_Blackbox PIT for the \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) circuits is in quasi-polynomial time._
\(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) circuits compute polynomials which are of the form \(\Sigma_{i\in[k]}\Pi_{j}g_{ij}(x)\), where \(\mathsf{deg}(g_{ij})\leq\delta\). Even \(\delta=2\) was a challenging open problem [10, Open Problem 2]. The model has gained a lot of interest in the past few years and has generated many important results [11, 12, 13].
### Main results: An analytic detour to three PITs
Though some attempts have been made to solve PIT for \(\Sigma^{[k]}\Pi\Sigma\wedge\), an efficient PIT for \(k\geq 3\), _even_ in the whitebox setting, remains open; see [17, Open Prob. 2]. Our first result addresses this problem and designs a polynomial time algorithm (Ref. Algorithm 1). In our pursuit we discover an analytic, non-ideal-based new technique which we refer to as DiDI. Throughout the paper, we will work with \(\mathbb{F}=\mathbb{Q}\), though all the results hold for fields of large characteristic.
**Theorem 1.1** (Whitebox \(\Sigma^{[k]}\Pi\Sigma\wedge\) PIT).: _There is a deterministic, whitebox \(s^{O(k\,7^{k})}\)-time PIT algorithm for \(\Sigma^{[k]}\Pi\Sigma\wedge\) circuits of size \(s\), over \(\mathbb{F}[\mathbf{x}]\)._
**Remark**.:
1. Case \(k\leq 2\) can be solved by invoking [13, Theorem 5.2]; but \(k\geq 3\) was open.
2. Our technique _necessarily_ blows up the exponent exponentially in \(k\). In particular, it would be interesting to design an efficient time algorithm when \(k=\Theta(\log s)\).
3. It is not clear if the current technique gives PIT for \(\Sigma^{[k]}\Pi\Sigma\mathsf{M}_{2}\) circuits, where \(\Sigma\mathsf{M}_{2}\) denotes a sum of bivariate monomials computed and fed into the top product gate.
Next, we go to the blackbox setting and address two models of interest, namely-- \(\Sigma^{[k]}\Pi\Sigma\wedge\) and \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\), where \(k,\delta\) are constants. Our work builds on previous ideas for unbounded top fanin (1) Jacobian [1], (2) the known blackbox PIT for \(\Sigma\wedge\Sigma\wedge\) and \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)[14, 15] while maneuvering with an analytic approach _via_ power-series, which unexpectedly _reduces_ the top \(\Pi\)-gate to a \(\wedge\)-gate.
**Theorem 1.2** (Blackbox depth-4 PIT).:
1. _There is a_ \(s^{O(k\log\log s)}\) _time blackbox PIT algorithm for_ \(\Sigma^{[k]}\Pi\Sigma\wedge\) _circuits of size_ \(s\)_, over_ \(\mathbf{F}[\mathbf{x}]\)_._
2. _There is a_ \(s^{O(\delta^{2}\,k\,\log s)}\) _time blackbox PIT algorithm for_ \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) _circuits of size_ \(s\)_, over_ \(\mathbf{F}[\mathbf{x}]\)_._
**Remark**.:
1. Theorem 1.2 (b) has a _better_ dependence on \(k\), but _worse_ on \(s\), than Theorem 1.1. Our results are quasipoly-time even up to \(k,\delta=\mathsf{poly}(\log s)\).
2. Theorem 1.2 (a) is better than Theorem 1.2 (b), because \(\Sigma\wedge\Sigma\wedge\) has a faster algorithm than \(\Sigma\wedge\Sigma\Pi^{[\delta]}\).
3. Even for \(\Sigma^{[3]}\Pi\Sigma\wedge\) and \(\Sigma^{[3]}\Pi\Sigma\Pi^{[3]}\) models, we leave the _poly_-time blackbox question open.
### Prior works on related models
In the last two decades, there has been a surge of results on identity testing for restricted classes of bounded depth algebraic circuits (e.g. 'locally' bounded independence, bounded read/occur, bounded variables). There have been numerous results on PIT for depth-3 circuits with bounded top fanin (known as \(\Sigma^{[k]}\Pi\Sigma\)-circuits). Dvir and Shpilka [11] gave the first quasipolynomial-time deterministic whitebox algorithm for \(k=O(1)\), using rank based methods, which finally lead Karnin and Shpilka [12] to design algorithm of same complexity in the blackbox setting. Kayal and Saxena [13] gave the first polynomial-time algorithm of the same. Later, a series of works in [13, 13, 14, 15] generalized the model and gave \(n^{O(k)}\)-time algorithm when the algebraic rank of the product polynomials are bounded. Note
that in the whitebox setting, our algorithm gives a \(\mathsf{poly}(s)\) time PIT algorithm for bounded top-fanin depth-3 circuits; the dependence on the top-fanin, however, is exponential. In the blackbox setting, our algorithm solves PIT for bounded top-fanin depth-3 circuits in quasi-poly(\(s\)) time, hence it does not offer any speedup compared to the known polynomial time algorithms. However, our algorithm does give a PIT idea that is different from the known ones.
There has also been some progress on PIT for restricted classes of depth-4 circuits. A quasipolynomial-time blackbox PIT algorithm for _multilinear_\(\Sigma^{[k]}\Pi\Sigma\Pi\)-circuits was designed in [16], which was further improved to a \(n^{O(k^{2})}\)-time deterministic algorithm in [14]. A quasipolynomial blackbox PIT was given in [11, 15] when algebraic rank of the irreducible factors in each multiplication gate as well as the bottom fanin are bounded. Further interesting restrictions like sum of product of fewer variables, and more structural restrictions have been exploited, see [15, 16, 17, 18]. Some progress has also been made for bounded top-fanin and bottom-fanin depth-4 circuits via incidence geometry [12, 13, 14]. In fact, very recently, [14] gave a polynomial-time blackbox PIT for \(\Sigma^{[3]}\Pi\Sigma\Pi^{[2]}\)-circuits.
The authors recently generalised their novel DiDI-technique to solve 'border PIT' of depth-4 circuits [10]. Specifically, they give an \(s^{O(k\cdot 7^{k}\cdot\log\log s)}\) time and an \(s^{O(\delta^{2}\cdot k\cdot 7^{k}\cdot\log s)}\) time blackbox PIT algorithm for \(\overline{\Sigma^{[k]}\Pi\Sigma\wedge}\) and \(\overline{\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}}\) respectively. By definition, border classes capture exact complexity classes, hence border PIT results seemingly subsume the results we present in this paper. However, the whitebox PIT algorithm here is much more efficient than their quasi-poly time blackbox algorithm. Further, the time complexity of our blackbox PIT algorithms has a better dependence on \(k\) and \(\delta\) compared to their exponential dependence.
\begin{table}
\begin{tabular}{|l l l|} \hline
Model & Time & Ref. \\ \hline
\(\Sigma^{[k]}\Pi^{[d]}\Sigma\) & \(\mathsf{poly}(n,d^{k})\) & [14] \\
Multilinear \(\Sigma^{[k]}\Pi\Sigma\Pi\) & \(\mathsf{poly}(n^{O(k^{2})})\) & [14, 14] \\
\(\Sigma\Pi\Sigma\Pi\) of bounded trdeg & \(\mathsf{poly}(s^{\mathsf{trdeg}})\) & [11] \\
\(\Sigma^{(k)}\Pi\Sigma\Pi^{[d]}\) of bounded _local_ trdeg & \(\mathsf{QP}(n)\) & [14] \\
\(\Sigma^{[3]}\Pi\Sigma\Pi^{[2]}\) & \(\mathsf{poly}(n,d)\) & [14] \\
\(\overline{\Sigma^{[k]}\Pi\Sigma\wedge}\) & \(s^{O(k\cdot 7^{k}\cdot\log\log s)}\) & [10] \\
\(\overline{\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}}\) & \(s^{O(\delta^{2}\cdot k\cdot 7^{k}\cdot\log s)}\) & [10] \\
\(\Sigma\Pi\Sigma\Pi\) & \(\mathsf{SUBEXP}(n)\) & [13] \\
Whitebox \(\Sigma^{[k]}\Pi\Sigma\wedge\) & \(s^{O(k\cdot 7^{k})}\) & This work. \\
\(\Sigma^{[k]}\Pi\Sigma\wedge\) & \(s^{O(k\log\log s)}\) & This work. \\
\(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) & \(s^{O(\delta^{2}k\log s)}\) & This work. \\ \hline
\end{tabular}
\end{table}
Table 1: Time complexity comparison of PIT algorithms related to \(\Sigma\Pi\Sigma\Pi\) circuits
Lastly, the proofs in this paper are simpler, as we do not have to deal with an infinitesimally close approximation of polynomials in border complexity classes. Very recently, Dutta and Saxena [1] showed an exponential-gap fanin-hierarchy theorem for bounded depth-3 circuits, which is also based on a _finer_ generalization of the DiDI-technique.
In a breakthrough result by Limaye, Srinivasan and Tavenas [13] the _first_ superpolynomial lower bound for constant depth circuits was obtained. Their lower bound result, together with the 'hardness vs randomness' tradeoff result of [14] gives the _first_ deterministic black-box PIT algorithm for general depth-4 circuits which runs in \(s^{O(n^{\epsilon})}\) for all real \(\epsilon>0\). Their result is the first _sub_exponential time PIT algorithm for depth-4 circuits. Moreover, compared to their algorithm, our quasipoly time blackbox and polynomial time whitebox algorithms are significantly faster.
**Limitations of known techniques.** Depth-4 PIT has been studied only with extra restrictions, mostly due to the limited applicability of the existing techniques, as they were tailor-made for specific models and do not generalize. E.g. the previous methods handle \(\delta=1\) (i.e. linear polynomials at the bottom) or \(k=2\) (via _factoring_, [12]); while moving from \(k=2\) to \(3\), or from \(\delta=1\) to \(2\) (i.e. from 'linear' to 'quadratic'), already demands a qualitatively different approach.
Whitebox \(\Sigma^{[k]}\Pi\Sigma\wedge\) model generalizes the famous bounded top fanin depth-3 circuits \(\Sigma^{[k]}\Pi\Sigma\) of [1]; but their Chinese Remaindering (CR) method, loses applicability and thus fails to solve even a slightly more general model. The blackbox setting involved similar 'certifying path' ideas in [12] which could be thought of as general CR. It comes up with an ideal \(I\) such that \(f\neq 0\bmod I\) and finally preserves it under a constant-variate linear map. The preservation gets harder (for both \(\Sigma^{[k]}\Pi\Sigma\wedge\) and \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\)) due to the increased non-linearity of the ideal \(I\) generators. Intuitively, larger \(\delta\) via ideal-based routes, brings us to the Grobner basis method (which is doubly-exponential-time in \(n\)) [21]. We know that ideals even with 3-generators (analogously \(k=4\)) already capture the whole ideal-membership problem [20].
The algebraic-geometric approach to tackle \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) has been explored in [1, 22, 23, 24]. The families which satisfy a certain Sylvester-Gallai configuration (called SG-circuits) are the harder case, which is conjectured to have constant transcendence degree [25, 26]. Non-SG circuits are the case where the nonzeroness-certifying-path question reduces to radical-ideal non-membership questions [15]. This is really a variety question, where one could use algebraic-geometry tools to design a poly-time blackbox PIT. In fact, very recently, Guo [26] gave an \(s^{\delta^{k}}\)-time PIT by constructing explicit variety evasive subspace families. Unfortunately, this is not the case for ideal non-membership; this scenario makes it much harder to solve \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\). From this viewpoint, radical-ideal-membership explains well why the intuitive \(\Sigma^{[k]}\Pi\Sigma\) methods do not extend to \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\).
Interestingly, Forbes [27] found a quasipolynomial-time PIT for \(\Sigma\wedge\Sigma\Pi^{[\delta]}\) using shifted-partial derivative techniques; but it naively fails when one replaces the \(\wedge\)-gate by \(\Pi\) (because the 'measure' becomes too large). The duality trick of [27] completely solves whitebox PIT for \(\Sigma\wedge\Sigma\wedge\), by transforming it to a read-once oblivious ABP (ROABP); but it is inapplicable to
our models with the top \(\Pi\)-gate (due to large waring rank and ROABP-width). A priori, our models are incomparable to ROABP, and thus the famous PIT algorithms for ROABP [13, 14, 15] are not expected to help either.
Similarly, a naive application of the _Jacobian_ and _certifying path_ technique from [1] fails for our models because it is difficult to come up with a _faithful_ map for constant-variate reduction. Kumar and Saraf [14] crucially used the fact that the computed polynomial has low individual degree (such that [12] can be invoked), while [14] exploits the low algebraic rank of the polynomials computed below the top \(\Pi\)-gate. Neither of these holds in general for our models. Very recently, Peleg and Shpilka [15] gave a poly-time blackbox PIT for \(\Sigma^{[3]}\Pi\Sigma\Pi^{[2]}\), via incidence geometry (e.g. an Edelstein-Kelly theorem involving 'quadratic' polynomials), by solving [11, 12] for \(k=3,\delta=2\). The method seems very strenuous to generalize even to 'cubic' polynomials (\(\delta=3=k\)).
**PIT for other models.** Blackbox PIT algorithms for many restricted models are known, e.g. ROABP-related models [14, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], log-variate circuits [14, 15], and non-commutative models [12, 13].
### Techniques and motivation
Both the proofs are analytic as they use _logarithmic derivative_, and its power-series expansion which greatly transform the respective models. Where the nature of the first proof is inductive, the second is a more direct _one-shot_ proof. In both the cases, we essentially reduce to the well-understood _wedge_ models, that have unbounded top fanin, yet for which PITs are known. This reduction is unforeseeable and quite 'power'ful.
The analytic tool that we use appears in algebra and complexity theory through the _formal power series_ ring \(\mathsf{R}[[x_{1},\ldots,x_{n}]]\) (in short \(\mathsf{R}[[x]]\)), see [16, 17, 18]. The advantages of the ring \(\mathsf{R}[[x]]\) are many, and they usually emerge because of the inverse identity: \((1-x_{1})^{-1}=\sum_{i\geq 0}x_{1}^{i}\), which does not make sense in \(\mathsf{R}[x]\), but is valid in \(\mathsf{R}[[x]]\). Other analytic tools used are inspired from the Wronskian (linear dependence) [16, Theorem 7][10], the Jacobian (algebraic dependence) [1, 12, 13], and the logarithmic derivative operator \(\mathsf{dlog}_{z_{1}}(f)=(\partial_{z_{1}}f)/f\).
We will work with the division operator (e.g. \(\mathsf{R}(z_{1})\), over a certain ring \(\mathsf{R}\)). However, the divisions do not come for free, as they require invertibility with respect to \(z_{1}\) throughout (again landing us in \(\mathsf{R}[[z_{1}]]\)). For circuit classes \(\mathcal{C},\mathcal{D}\) we define the class
\[\mathcal{C}/\mathcal{D}:=\{f/g\mid f\in\mathcal{C},\mathcal{D}\ni g\neq 0\}.\]
Similarly, \(\mathcal{C}\cdot\mathcal{D}\) denotes the class obtained by taking the respective products.
#### 1.3.1 The DiDI-technique
In Theorem 1.1 we introduce a novel technique for designing PIT algorithms, which consists of inductively applying two fundamental operations on the input circuit to reduce it to a more tractable model. Suppose we want to test \(\sum_{i\in[k]}T_{i}\stackrel{{?}}{{=}}0\), where each \(T_{i}\) is computable by \(\Pi\Sigma\wedge\). The idea is to DIvide by \(T_{k}\) to obtain \(1+\sum_{i\in[k-1]}T_{i}/T_{k}\) and then DerIve (differentiate) to reduce the fanin to \(k-1\), obtaining \(\sum_{i\in[k-1]}\mathcal{T}_{i}\). Naturally, these operations push us to work with the fraction field (e.g. \(\mathsf{R}(z_{1})\), over a certain ring \(\mathsf{R}\)); moreover, they distort the model, as the \(\mathcal{T}_{i}\)'s are no longer computable by simple \(\Pi\Sigma\wedge\) circuits. However, with a careful analytic argument we establish that the non-zeroness is preserved in the reduced model. The process is then repeated until we reach \(k=1\), while maintaining the invariants which help us preserve the non-zeroness till the end. We finish the proof by showing that identity testing of the reduced model can be done using known PIT algorithms.
#### 1.3.2 Jacobian hits again
In Theorem 1.2 we exploit the prowess of the Jacobian polynomial, first introduced in [1] and later explored in [1] to unify known PIT algorithms and design new ones. Suppose we want to test \(\sum_{i\in[k]}T_{i}\stackrel{{?}}{{=}}0\), where \(T_{i}\in\Pi\Sigma\Pi^{[\delta]}\) (respectively \(\Pi\Sigma\wedge\)). We associate the Jacobian \(J(T_{1},\ldots,T_{r})\), which captures the algebraic independence of \(T_{1},\ldots,T_{r}\), assuming these form a transcendence basis of the \(T_{i}\)'s. We design a variable-reducing linear map \(\Phi\) which preserves the algebraic independence of \(T_{1},\ldots,T_{r}\), and show that for any circuit \(C\): \(C(T_{1},\ldots,T_{k})=0\iff C(\Phi(T_{1}),\ldots,\Phi(T_{k}))=0\). Such a map is called 'faithful' [1]. The map \(\Phi\) ultimately provides a hitting set for \(T_{1}+\ldots+T_{k}\), as we reduce to a PIT of a polynomial over 'few' (roughly equal to \(k\)) variables, yielding a QP-time algorithm.
## 2 Preliminaries
Before proving the results, we describe some of the assumptions and notations used throughout the paper. \(x\) denotes \((x_{1},\ldots,x_{n})\). \([n]\) denotes \(\{1,\ldots,n\}\).
### Notations and Definitions
* **Logarithmic derivative.** Over a ring \(\mathsf{R}\) and a variable \(y\), the logarithmic derivative \(\mathsf{dlog}_{y}:\mathsf{R}(y)\to\mathsf{R}(y)\) is defined as \(\mathsf{dlog}_{y}(f):=\partial_{y}f/f\); here \(\partial_{y}\) denotes the partial derivative with respect to the variable \(y\). One important property of \(\mathsf{dlog}\) is that it is additive over a product: \[\mathsf{dlog}_{y}(f\cdot g)\;=\;\frac{\partial_{y}(f\cdot g)}{f\cdot g}\;=\; \frac{(f\cdot\partial_{y}g\,+\,g\cdot\partial_{y}f)}{f\cdot g}\;=\;\mathsf{dlog }_{y}(f)+\mathsf{dlog}_{y}(g).\] We refer to this effect as the _linearization_ of the product (a quick symbolic check follows this list).
* **Circuit size.** Sparsity \(\mathsf{sp}(\cdot)\) refers to the number of nonzero monomials. In this paper, it is a parameter of the circuit size. In particular, \(\mathsf{size}(g_{1}\cdots g_{s})=\sum_{i\in[s]}\left(\mathsf{sp}(g_{i})+\deg(g_{ i})\right)\), for \(g_{i}\in\Sigma\wedge\) (respectively \(\Sigma\Pi^{[\delta]}\)). In whitebox settings, we also include the _bit-complexity_ of the circuit (i.e. bit complexity of the constants used in the wires) in the size parameter. Some of the complexity parameters of a circuit are _depth_ (number of layers), _syntactic degree_ (the maximum degree polynomial computed by any node), _fanin_ (maximum number of inputs to a node).
* **Hitting set.** A set of points \(\mathcal{H}\subseteq\mathbb{F}^{n}\) is called a _hitting-set_ for a class \(\mathcal{C}\) of \(n\)-variate polynomials if for any nonzero polynomial \(f\in\mathcal{C}\), there exists a point in \(\mathcal{H}\) where \(f\) evaluates to a nonzero value. A \(T(n)\)-time hitting-set would mean that the hitting-set can be generated in time \(T(n)\), for input size \(n\).
* **Valuation.** Valuation is a map \(\mathsf{val}_{y}:\mathsf{R}[y]\to\mathbb{Z}_{\geq 0}\), over a ring \(\mathsf{R}\), such that \(\mathsf{val}_{y}(\cdot)\) is defined to be the maximum power of \(y\) dividing the element. It can be easily extended to fraction field \(\mathsf{R}(y)\), by defining \(\mathsf{val}_{y}(p/q):=\mathsf{val}_{y}(p)-\mathsf{val}_{y}(q)\); where it can be negative.
* **Field.** We denote the underlying field as \(\mathbb{F}\) and assume that it is of characteristic \(0\). All our results hold for other fields (eg. \(\mathbf{Q}_{p},\mathbb{F}_{p}\)) of _large_ characteristic (see Remarks in Section 3-4).
* **Jacobian.** The Jacobian of a set of polynomials \(\mathbf{f}=\{f_{1},\ldots,f_{m}\}\) in \(\mathbb{F}[\mathbf{x}]\) is defined to be the matrix \(\mathcal{J}_{\mathbf{x}}(\mathbf{f}):=\left(\partial_{x_{j}}(f_{i})\right)_{m \times n}\). Let \(S\subseteq\mathbf{x}=\{x_{1},\ldots,x_{n}\}\) and \(|S|=m\). Then, polynomial \(J_{S}(\mathbf{f})\) denotes the minor (i.e. determinant of the submatrix) of \(\mathcal{J}_{\mathbf{x}}(\mathbf{f})\), formed by the columns corresponding to the variables in \(S\).
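The symbolic check of the linearization property promised in the **Logarithmic derivative** item above; a minimal sympy sketch:

```python
import sympy as sp

y = sp.symbols('y')
f = 1 + y + y**3
g = 2 - y**2

dlog = lambda h: sp.diff(h, y) / h
# dlog is additive over a product: dlog(f*g) = dlog(f) + dlog(g)
assert sp.simplify(dlog(f * g) - (dlog(f) + dlog(g))) == 0
```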
### 2.2 Basics of Algebraic Complexity Theory
For a detailed discussion of the basics of Algebraic Complexity Theory we encourage readers to refer to [11, 12, 13, 14]. Here we formally state a few PIT results and properties of circuits for later reference.
#### Trivial PIT Algorithm
The simplest PIT algorithm for any circuit in general is due to the Polynomial Identity Lemma [15, 16, 17, 18]. When the number of variables is small, say \(O(1)\), this algorithm is very efficient.
**Lemma 2.1** (Trivial PIT).: _For the class of \(n\)-variate polynomials \(f\in\mathbb{F}[\mathbf{x}]\) of individual degree \(<d\), there exists a deterministic PIT algorithm which runs in time \(O(d^{n})\)._
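A minimal sketch of the grid-evaluation algorithm behind Lemma 2.1 (by the Polynomial Identity Lemma, \(O(d^{n})\) evaluations over \(\{0,\ldots,d-1\}^{n}\) suffice):

```python
from itertools import product
import sympy as sp

def trivial_pit(f, variables, d):
    """Return True iff f (individual degree < d) is identically zero,
    by evaluating it on the grid {0, ..., d-1}^n."""
    for point in product(range(d), repeat=len(variables)):
        if f.subs(list(zip(variables, point))) != 0:
            return False  # nonzero witness found
    return True

x1, x2 = sp.symbols('x1 x2')
print(trivial_pit((x1 + x2)**2 - x1**2 - 2*x1*x2 - x2**2, [x1, x2], 3))  # True
```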
#### Sparse Polynomial
Sparse PIT is testing the identity of polynomials with a bounded number of monomials. There has been a lot of work on sparse PIT; interested readers can refer to [1, 10] and the references therein. For the proof of the poly-time hitting set for sparse PIT see [16, Thm. 2.1].
**Theorem 2.2** (Sparse-PIT map [10]).: _Let \(p(\mathbf{x})\in\mathbb{F}[\mathbf{x}]\) with individual degree at most \(d\) and sparsity at most \(m\). Then, there exists \(1\leq r\leq(mn\log d)^{2}\), such that_
\[p(y,y^{d},\ldots,y^{d^{n-1}})\neq 0\,\,\mathrm{mod}\,y^{r}-1.\]
_If \(p\) is computable by a size-\(s\)\(\Sigma\Pi\) circuit, then there is a deterministic algorithm to test its identity which runs in time \(\poly(s,m)\)._
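A minimal sketch of the map in Theorem 2.2 (with a brute-force search over \(r\) for illustration only; the theorem guarantees that a small witness \(r\) exists):

```python
import sympy as sp

def kronecker_pit(p, variables, d, max_r):
    """Substitute x_i -> y^(d^(i-1)) and reduce modulo y^r - 1; a nonzero
    sparse p stays nonzero for some small r."""
    y = sp.Symbol('y')
    q = sp.expand(p.subs([(v, y**(d**i)) for i, v in enumerate(variables)]))
    for r in range(1, max_r + 1):
        # reduce every exponent mod r, i.e. work modulo y^r - 1
        reduced = sum(c * y**(e[0] % r) for e, c in sp.Poly(q, y).terms())
        if sp.expand(reduced) != 0:
            return r  # witness certifying p != 0
    return None
```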
Indeed, if the identity of a sparse polynomial can be tested efficiently, then a product of sparse polynomials can also be tested efficiently. We formalise this in the following:
**Lemma 2.3** ([18] Lemma 2.3).: _For the class of \(n\)-variate, degree-\(d\) polynomials \(f\in\mathbb{F}[\mathbf{x}]\) computable by \(\Pi\Sigma\Pi\) circuits of size \(s\), there is a deterministic PIT algorithm which runs in time \(\poly(s,d)\)._
A set \(\mathcal{H}\subseteq\mathbb{F}^{n}\) is called a Hitting Set for a class of polynomials \(\mathcal{C}\subseteq\mathbb{F}[\mathbf{x}]\) if for all \(g\in\mathcal{C}\)
\[g\neq 0\iff\exists\mathbf{\alpha}\in\mathcal{H}:g(\mathbf{\alpha})\neq 0.\]
In the literature, PIT has a close association with hitting sets, as the two notions are provably equivalent (refer to Lemmas 3.2.9 and 3.2.10 of [12]). Note that the set \(\mathcal{H}\) works for every polynomial of the class. Instead of a PIT algorithm, we will occasionally use such a set.
**Lemma 2.4** (Hitting Set of \(\Pi\Sigma\wedge\)).: _For the class of \(n\)-variate, degree-\(d\) polynomials \(f\in\mathbb{F}[\mathbf{x}]\) computable by \(\Pi\Sigma\wedge\) circuits of size \(s\), there is an explicit Hitting Set of size \(\poly(s,d)\)._
#### Algebraic Branching Program (ABP)
An ABP is a layered directed acyclic graph with \(q+1\) many layers of vertices \(V_{0},\ldots,V_{q}\) with a source \(a\) and a sink \(b\) such that all the edges in the graph only go from \(a\) to \(V_{0}\), \(V_{i-1}\) to \(V_{i}\) for any \(i\in[q]\), and \(V_{q}\) to \(b\). The edges have _univariate_ polynomials as their weights. The ABP is said to compute the polynomial
\[f(\mathbf{x})\ =\ \sum_{p\,\in\,\mathsf{paths}(a,b)}\ \prod_{e\in p}W(e),\]
where \(W(e)\) is the weight of the edge \(e\). The ABP has width-\(w\) if \(|V_{i}|\leq w\), \(\forall i\in\{0,\ldots,q\}\). In an equivalent definition, polynomials computed by ABP are of the form \(A^{T}(\prod_{i\in[q]}D_{i})B\), where \(A,B\in\mathbb{F}^{w\times 1}[\mathbf{x}]\), and \(D_{i}\in\mathbb{F}^{w\times w}[\mathbf{x}]\), where entries are univariate polynomials. We encourage interested readers to refer [10, 11] for more detailed discussion.
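A minimal sketch of evaluating the equivalent form \(A^{T}(\prod_{i\in[q]}D_{i})B\), assuming each matrix entry (a univariate polynomial) is supplied as a callable mapping a point to a field element (names are illustrative):

```python
def mat_apply(M, point):
    """Evaluate a matrix whose entries are univariate polynomials,
    each given as a callable point -> value."""
    return [[entry(point) for entry in row] for row in M]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def evaluate_abp(A, D_layers, B, point):
    """f(point) = A^T (prod_i D_i) B: for width w and q layers this costs
    only poly(w, q) field operations."""
    row = [list(col) for col in zip(*mat_apply(A, point))]  # A^T, 1 x w
    for D in D_layers:
        row = mat_mul(row, mat_apply(D, point))
    return mat_mul(row, mat_apply(B, point))[0][0]
```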
**Definition 2.5** (Read-once oblivious ABP (ROABP)).: _An ABP is called a read-once oblivious
ABP (ROABP) _if the edge weights are univariate polynomials in distinct variables across layers. Formally, there is a permutation \(\pi\) on the set \([q]\) such that the entries in the \(i\)-th matrix \(D_{i}\) are univariate polynomials over the variable \(x_{\pi(i)}\), i.e., they come from the polynomial ring \(\mathbb{F}[x_{\pi(i)}]\)._
A polynomial \(f(x)\) is said to be computed by width-\(w\) ROABPs in _any order_, if for every permutation \(\sigma\) of the variables, there exists a width-\(w\) ROABP in the variable order \(\sigma\) that computes the polynomial \(f(x)\). In whitebox setting, identity testing of any-order ROABP is completely solved.
**Theorem 2.6** (Theorem 2.4[15]).: _For \(n\)-variate polynomials computed by size-s ROABP, a hitting set of size \(O(s^{5}+s\cdot n^{4})\) can be constructed._
There have been quite a few results on blackbox PIT for ROABPs as well [15, 15, 14]. The current best known algorithm works in quasipolynomial time.
**Theorem 2.7** (Theorem 4.9[14]).: _For \(n\)-variate, individual-degree-\(d\) polynomials computed by width-\(w\) ROABPs in any order, a hitting set of size \((ndw)^{O(\log\log w)}\) can be constructed._
#### Depth-4 Circuits
A polynomial \(f(x)\in\mathbb{F}[x]\) is computable by \(\Sigma\wedge\Sigma\Pi^{[\delta]}\) circuits if \(f(x)=\sum_{i\in[s]}f_{i}(x)^{e_{i}}\) where \(\deg f_{i}\leq\delta\). The first nontrivial PIT algorithm for this model was designed in [16].
**Theorem 2.8** (Proposition 4.18[16]).: _There is a \(\mathsf{poly}(n,d,\delta\log s)\)-explicit hitting set of size \((nd)^{O(\delta\log s)}\) for the class of \(n\)-variate, degree-(\(\leq d\)) polynomials \(f(x)\), computed by \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit of size \(s\)._
Similarly, \(\Sigma\wedge\Sigma\wedge\) circuits compute polynomials of the form \(f(x)=\sum_{i\in[s]}f_{i}^{e_{i}}\) where each \(f_{i}\) is a sum of univariate polynomials. Using the duality trick [13] and the PIT results from [15, 14], one can design efficient PIT algorithms for \(\Sigma\wedge\Sigma\wedge\) circuits.
**Lemma 2.9** (PIT for \(\Sigma\wedge\Sigma\wedge\)-circuits).: _Let \(P\in\Sigma\wedge\Sigma\wedge\) of size \(s\). Then, there exists a \(\mathsf{poly}(s)\) (respectively \(s^{O(\log\log s)}\)) time whitebox (respectively blackbox) PIT for the same._
Proof sketch.: We show that any \(g(x)^{e}=(g_{1}(x_{1})+\ldots+g_{n}(x_{n}))^{e}\), where \(\mathsf{deg}(g_{i})\leq s\) can be written as \(\sum_{j}h_{j1}(x_{1})\cdots h_{jn}(x_{n})\), for some \(h_{j\ell}\in\mathbb{F}[x_{\ell}]\) of degree at most \(es\). Define, \(G:=(y+g_{1})\cdots(y+g_{n})-y^{n}\). In its \(e\)-th power, notice that the leading-coefficient is \(\mathsf{coef}_{y^{e(n-1)}}(G^{e})=g^{e}\). So, interpolate on \(e(n-1)+1\) many points (\(y=\beta_{i}\in\mathbb{F}\)) to get
\[\mathsf{coef}_{y^{e(n-1)}}(G^{e})\ =\ \sum_{i=1}^{e(n-1)+1}\alpha_{i}\,G^{e}( \beta_{i})\.\]
Now, expand \(G^{e}(\beta_{i})=((\beta_{i}+g_{1})\cdots(\beta_{i}+g_{n})-\beta_{i}^{n})^{e}\), by binomial expansion (without expanding the inner \(n\)-fold product). The top-fanin can be at most \(s\cdot(e+1)\cdot(e(n-1)+1)=O(se^{2}n)\). The individual degrees of the intermediate univariates can be at most \(es\). Thus, it can be computed by an ROABP (of _any order_) of size at most \(O(s^{2}e^{3}n)\).
Now, if \(f=\sum_{j\in[s]}f_{j}^{e_{j}}\) is computed by a \(\Sigma\wedge\Sigma\wedge\) circuit of size \(s\), then clearly, \(f\) can also be computed by an ROABP (of any order) of size at most \(O(s^{6})\). So, the whitebox PIT follows
from Theorem 2.6, while the blackbox PIT follows from Theorem 2.7.
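A minimal sympy check of the leading-coefficient identity \(\mathsf{coef}_{y^{e(n-1)}}(G^{e})=g^{e}\) used in the proof sketch above, on a small instance:

```python
import sympy as sp

x1, x2, x3, y = sp.symbols('x1 x2 x3 y')
g1, g2, g3 = x1**2, 3*x2, x3**3 - x3   # univariate summands of g
e, n = 2, 3

G = (y + g1) * (y + g2) * (y + g3) - y**n
# extract the coefficient of y^{e(n-1)} in G^e and compare with g^e:
lhs = sp.Poly(sp.expand(G**e), y).coeff_monomial(y**(e * (n - 1)))
assert sp.simplify(lhs - (g1 + g2 + g3)**e) == 0
```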
Further, \(\Sigma\wedge\Sigma\wedge\) can be shown to be closed under multiplication, i.e., the product of two polynomials, each computable by a \(\Sigma\wedge\Sigma\wedge\) circuit, is computable by a single \(\Sigma\wedge\Sigma\wedge\) circuit. To prove this we will need an efficient way of writing a product of a few powers as a sum of powers, using simple interpolation. For an algebraic proof, see [2, Proposition 4.3].
**Lemma 2.10** (Waring Identity for a monomial).: _Let \(M=x_{1}^{b_{1}}\cdots x_{k}^{b_{k}}\), where \(1\leq b_{1}\leq\ldots\leq b_{k}\), and let \(\mathcal{Z}(i):=\{z\in\mathbb{C}:z^{b_{i}+1}=1\}\) be the corresponding sets of roots of unity. Then,_
\[M=\sum_{\varepsilon(i)\in\mathcal{Z}(i):i=2,\cdots,k}\gamma_{\varepsilon(2), \ldots,\varepsilon(k)}\cdot\left(x_{1}+\varepsilon(2)x_{2}+\ldots+\varepsilon (k)x_{k}\right)^{d}\,\]
_where \(d:=\deg(M)=b_{1}+\ldots+b_{k}\), and \(\gamma_{\varepsilon(2),\ldots,\varepsilon(k)}\) are scalars (\(\mathsf{rk}(M):=\prod_{i=2}^{k}\left(b_{i}+1\right)\) many)._
_Remark_.: We actually need not work with \(\mathbb{F}=\mathbb{C}\). We can go to a small extension (of degree at most \(d^{k}\)), for a monomial of degree \(d\), to make sure that the \(\varepsilon(i)\) exist.
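The smallest instance of Lemma 2.10, \(M=x_{1}x_{2}\) (so \(d=2\), \(\mathcal{Z}(2)=\{1,-1\}\) and \(\mathsf{rk}(M)=2\)), checked symbolically:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# x1*x2 = (1/4)(x1 + x2)^2 - (1/4)(x1 - x2)^2, a sum of rk(M) = 2 powers
M = sp.expand(sp.Rational(1, 4) * ((x1 + x2)**2 - (x1 - x2)**2))
assert M == x1 * x2
```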
Using the above lemma we prove the closure result.
**Lemma 2.11**.: _Let \(f_{i}(\mathbf{x},y)\in\mathbb{F}[y][\mathbf{x}]\), of syntactic degree \(\leq d_{i}\), be computed by a \(\Sigma\wedge\Sigma\wedge\) circuit of size \(s_{i}\), for \(i\in[k]\) (wrt \(\mathbf{x}\)). Then, \(f_{1}\cdots f_{k}\) has \(\Sigma\wedge\Sigma\wedge\) circuit of size \(O((d_{2}+1)\cdots(d_{k}+1)\cdot s_{1}\cdots s_{k})\)._
Proof.: Let \(f_{i}=\sum_{j}f_{ij}^{e_{ij}}\); by assumption, \(e_{ij}\leq d_{i}\). Then, using Lemma 2.10, \(f_{1j_{1}}^{e_{1j_{1}}}\cdots f_{kj_{k}}^{e_{kj_{k}}}\) has size at most \((d_{2}+1)\cdots(d_{k}+1)\cdot\left(\sum_{i\in[k]}\mathsf{size}(f_{ij_{i}})\right)\), for any indices \(j_{1},\ldots,j_{k}\). Summing up over all (at most) \(s_{1}\cdots s_{k}\) such products gives the upper bound.
## 3 Whitebox PIT for \(\Sigma^{[k]}\Pi\Sigma\wedge\)
We consider a bloated model of computation which naturally generalizes \(\Sigma\Pi\Sigma\wedge\) circuits and works ideally under the DiDI-technique.
**Definition 3.1**.: _We call a circuit \(\mathcal{C}\in\mathsf{Gen}(k,s)\), over \(\mathsf{R}(\mathbf{x})\), for any ring \(\mathsf{R}\), with parameter \(k\) and size-\(s\), if \(\mathcal{C}\in\Sigma^{[k]}(\Pi\Sigma\wedge/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge)\). It computes \(f\in\mathsf{R}(\mathbf{x})\), if \(f\ =\ \sum_{i=1}^{k}\ T_{i}\), where_
* \(T_{i}\ =:\ (U_{i}/V_{i})\cdot(P_{i}/Q_{i})\)_, for_ \(U_{i},V_{i}\in\Pi\Sigma\wedge\)_, and_ \(P_{i},Q_{i}\in\Sigma\wedge\Sigma\wedge\)_,_
* \(\mathsf{size}(T_{i})=\mathsf{size}(U_{i})+\mathsf{size}(V_{i})+\mathsf{size}( P_{i})+\mathsf{size}(Q_{i})\)_, and_ \(\mathsf{size}(f)=\sum_{i\in[k]}\mathsf{size}(T_{i})\)_._
It is easy to see that all size-\(s\) \(\Sigma^{[k]}\Pi\Sigma\wedge\) circuits are in \(\mathsf{Gen}(k,s)\). We will design the _recursive_ algorithm on \(\mathsf{Gen}(k,s)\).
Proof of Theorem 1.1.: Begin by defining \(T_{i,0}:=T_{i}\) and \(f_{0}:=f\), where \(T_{i,0}\in\Pi\Sigma\wedge\); \(\sum_{i}T_{i,0}=f_{0}\), and \(f_{0}\) has size \(\leq s\). Assume \(\deg(f)<d\leq s\); we keep the parameter \(d\) separate, to help optimize the complexity later. In every recursive call we work with \(\mathsf{Gen}(\cdot,\cdot)\) circuits.
As the input case, define \(U_{i,0}:=T_{i,0}\) and \(V_{i,0}:=P_{i,0}:=Q_{i,0}:=1\). We will use the hitting set of products of sparse polynomials (see Section 2.2) to obtain a point \(\mathbf{\alpha}\) such that \(U_{i,0}|_{\mathbf{x}=\mathbf{\alpha}}\neq 0\), for all \(i\in[k]\). Eventually this evaluation point will help in maintaining the invertibility of \(\Pi\Sigma\wedge\). Consider
\[g\ :=\ \prod_{i\in[k]}T_{i,0}\ =\ \prod_{i\in[k]}U_{i,0}\ =\ \prod_{i\in[\ell]}\sum_{j\in[n]}f_{ij}(x_{j})\,\]
where \(f_{ij}(x_{j})\) are univariate polynomials of degree at most \(d\) and \(\ell\leq k\cdot s\). Note that \(\deg g\leq d\cdot k\cdot s\) and \(g\) is computable by a \(\Pi\Sigma\wedge\) circuit of size \(O(s)\). Invoke Lemma 2.4 to obtain a hitting set \(\mathcal{H}\), then evaluate \(g\) on every point of \(\mathcal{H}\) to find an element \(\mathbf{\alpha}\in\mathcal{H}\) such that \(g(\mathbf{\alpha})\neq 0\). We emphasise that in the whitebox setting all \(U_{i,0}\) are readily available for evaluation. Since the size of the set is \(\mathsf{poly}(s)\) and each evaluation takes \(\mathsf{poly}(s)\) time, this preliminary step adds \(\mathsf{poly}(s)\) time to the overall time complexity. Moreover, we obtain an \(\mathbf{\alpha}\in\mathbb{F}^{n}\) which possesses the required property.
To capture the non-zeroness, consider a 1-1 homomorphism \(\Phi:\mathbb{F}[\mathbf{x}]\longrightarrow\mathbb{F}[\mathbf{x},z]\) such that \(x_{i}\mapsto z\cdot x_{i}+a_{i}\) where \(a_{i}\) is the \(i\)-th coordinate of \(\mathbf{\alpha}\), obtained earlier. Invertibility implies that \(f_{0}=0\iff\Phi(f_{0})=0\). Now we proceed with the recursive algorithm which first reduces the identity testing from top-fanin \(k\) to \(k-1\). Note: \(k=1\) is trivial.
#### First Step: Efficient reduction from \(k\) to \(k-1\)
By assumption, \(\sum_{i=1}^{k}\ T_{i,0}\ =\ f_{0}\) and \(T_{k,0}\neq 0\). Apply \(\Phi\) to both sides, then divide and derive:
\[\sum_{i\in[k]}\ T_{i,0}\ =\ f_{0} \iff\ \sum_{i\in[k]}\ \Phi(T_{i,0})\ =\ \Phi(f_{0})\] \[\iff\ \sum_{i\in[k-1]}\ \frac{\Phi(T_{i,0})}{\Phi(T_{k,0})}\ +\ 1\ =\ \frac{\Phi(f_{0})}{\Phi(T_{k,0})}\] \[\implies\ \sum_{i\in[k-1]}\ \partial_{z}\left(\frac{\Phi(T_{i,0})}{ \Phi(T_{k,0})}\right)\ =\ \partial_{z}\left(\frac{\Phi(f_{0})}{\Phi(T_{k,0})}\right)\] \[\iff\ \sum_{i=1}^{k-1}\ \frac{\Phi(T_{i,0})}{\Phi(T_{k,0})}\cdot \mathsf{dlog}_{z}\left(\frac{\Phi(T_{i,0})}{\Phi(T_{k,0})}\right)\ =\ \partial_{z}\left(\frac{\Phi(f_{0})}{\Phi(T_{k,0})}\right). \tag{3.2}\]
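The last step of this chain rests on the identity \(\partial_{z}(u/v)=(u/v)\cdot\mathsf{dlog}_{z}(u/v)\). The following sympy sketch (on hypothetical toy \(u,v\in\mathbb{F}[z,x]\)) confirms it:

```python
import sympy as sp

# Sanity check of the 'derive' step: d/dz (u/v) = (u/v) * dlog_z(u/v),
# on assumed toy u, v in F[z, x].
z, x = sp.symbols('z x')
u = (1 + z*x)**3 * (2 + z + x)
v = 1 + z + x*z**2
ratio = u / v
assert sp.simplify(sp.diff(ratio, z) - ratio*sp.diff(sp.log(ratio), z)) == 0
```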
Here onwards we write \(\mathsf{dlog}\) to mean \(\mathsf{dlog}_{z}\), unless stated otherwise. Define the following:
* \(\mathsf{R}_{1}\ :=\ \mathbb{F}[z]/\langle z^{d}\rangle\). Note that Equation 3.2 holds over \(\mathsf{R}_{1}(\mathbf{x})\).
* \(\widetilde{T}_{i,1}:=\Phi(T_{i,0})/\Phi(T_{k,0})\cdot\mathsf{dlog}(\Phi(T_{i, 0})/\Phi(T_{k,0}))\), \(\forall\ i\in[k-1]\).
* \(f_{1}:=\partial_{z}(\Phi(f_{0})/\Phi(T_{k,0}))\), over \(\mathsf{R}_{1}(\mathbf{x})\).
**Definability of \(T_{i,1}\) and \(f_{1}\).** It is easy to see that these are well-defined terms. We emphasize that we do not exactly compute/store \(\widetilde{T}_{i,1}\) as a fraction where the degree in \(z\) is \(<d\); instead it is computed as an element of \(\mathbb{F}(z,\mathbf{x})\), where \(z\) is a formal variable. Formally, we compute \(T_{i,1}\in\mathbb{F}(z,\mathbf{x})\), such that \(\widetilde{T}_{i,1}=T_{i,1}\), over \(\mathsf{R}_{1}(\mathbf{x})\). We keep track of the degree of \(z\) in \(T_{i,1}\). Thus, \(\sum_{i\in[k-1]}\,T_{i,1}=f_{1}\), over \(\mathsf{R}_{1}(\mathbf{x})\).
**The 'iff' condition.** To show that one step of DiDI has reduced the problem to identity testing of \(\mathsf{Gen}(k-1,\cdot)\), we need an \(\iff\) condition; so far, the equality in Equation 3.2 over \(\mathsf{R}_{1}(\mathbf{x})\) is _one-sided_. Note that \(f_{1}\neq 0\) implies \(\mathsf{val}_{z}(f_{1})<d=:d_{1}\). By assumption, \(\Phi(T_{k,0})\) is invertible over \(\mathsf{R}_{1}(\mathbf{x})\). Further, \(f_{1}=0\) over \(\mathsf{R}_{1}(\mathbf{x})\) implies:
1. Either, \(\Phi(f_{0})/\Phi(T_{k,0})\) is \(z\)-free. Then \(\Phi(f_{0})/\Phi(T_{k,0})\,\in\mathbb{F}(\mathbf{x})\), which further implies it is in \(\mathbb{F}\), because of the map \(\Phi\) (\(z\)-free implies \(\mathbf{x}\)-free, by substituting \(z=0\)). Also, note that \(f_{0},T_{k,0}\,\neq\,0\) implies \(\Phi(f_{0})/\Phi(T_{k,0})\) is a _nonzero_ element in \(\mathbb{F}\). Thus, it suffices to check whether \(\Phi(f_{0})|_{z=0}\) is non-zero or not.
2. Or, \(\partial_{z}(\Phi(f_{0})/\Phi(T_{k,0}))=z^{d_{1}}\cdot p\) where \(p\in\mathbb{F}(z,\mathbf{x})\) s.t. \(\mathsf{val}_{z}(p)\geq 0\). By simple power series expansion, one can show that \(p\in\mathbb{F}(x)[[z]]\). **Lemma 3.3** (Valuation).: _Consider \(f\in\mathbb{F}(\mathbf{x},y)\) such that \(\mathsf{val}_{y}(f)\geq 0\). Then, \(f\in\mathbb{F}(\mathbf{x})[[y]]\,\bigcap\,\mathbb{F}(\mathbf{x},y)\)._
Proof Sketch.: Let \(f=g/h\), where \(g,h\in\mathbb{F}[\mathbf{x},y]\). Now, \(\mathsf{val}_{y}(f)\geq 0\), implies \(\mathsf{val}_{y}(g)\geq\mathsf{val}_{y}(h)\). Let \(\mathsf{val}_{y}(g)=d_{1}\) and \(\mathsf{val}_{y}(h)=d_{2}\), where \(d_{1}\geq d_{2}\geq 0\). Write \(g=y^{d_{1}}\cdot\tilde{g}\) and \(h=y^{d_{2}}\cdot\tilde{h}\). Write, \(\tilde{h}=h_{0}+h_{1}\,y+h_{2}\,y^{2}+\ldots+h_{d}\,y^{d}\), for some \(d\). Note that \(h_{0}\neq 0\). Thus,
\[f = y^{d_{1}-d_{2}}\cdot\tilde{g}/(h_{0}+h_{1}\,y+\ldots+h_{d}\,y^{d})\] \[= y^{d_{1}-d_{2}}\cdot(\tilde{g}/h_{0})\cdot(1+(h_{1}/h_{0})\,y+ \ldots+(h_{d}/h_{0})\,y^{d})^{-1}\,\in\mathbb{F}(\mathbf{x})[[y]]\.\] The last conclusion follows by the inverse identity in the power-series ring.
Hence, \(\Phi(f_{0})/\Phi(T_{k,0})=z^{d_{1}+1}\cdot q\) where \(q\in\mathbb{F}(\mathbf{x})[[z]]\), i.e.
\[\Phi(f_{0})/\Phi(T_{k,0})\in\langle z^{d_{1}+1}\rangle_{\mathbb{F}(\mathbf{x})[[z ]]}\implies\mathsf{val}_{z}(\Phi(f_{0}))\geq d+1,\] a contradiction.
Conversely, it is obvious that \(f_{0}=0\) implies \(f_{1}=0\). Thus, we have proved the following
\[\sum_{i\in[k]}\,T_{i,0}\,\neq\,0\,\,\,\text{over}\,\,\mathbb{F}[\mathbf{x}]\,\iff\,\sum_{i\in[k-1]}\,T_{i,1}\neq 0\,\,\,\text{over}\,\,\mathsf{R}_{1}(\mathbf{x}),\,\,\,\text{or}\,,\,\,\,0\neq\Phi(f_{0})|_{z=0}\in\mathbb{F}\.\]
Eventually, we show that \(T_{i,1}\in(\Pi\Sigma\,\wedge\,/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge \,/\Sigma\wedge\Sigma\wedge)\), over \(\mathsf{R}_{1}(\mathbf{x})\), with polynomial blowup in size (Claim 3.6). So, the above circuit is in \(\mathsf{Gen}(k-1,\cdot)\), over \(\mathsf{R}_{1}(\mathbf{x})\), which we recurse on to finally give the identity testing. The subsequent steps will be a bit more tricky:
#### Induction step
Assume that we are in the \(j\)-th step (\(j\geq 1\)). Our induction hypothesis is the following:
1. \(\sum_{i\in[k-j]}\,T_{i,j}=f_{j}\), over \(\mathsf{R}_{j}(\mathbf{x})\), where \(\mathsf{R}_{j}:=\mathbb{F}[z]/\langle z^{d_{j}}\rangle\) for \(d_{j}<d\), and \(T_{i,j}\neq 0\).
2. \(\mathsf{val}_{z}(T_{i,j})\geq 0,\forall i\in[k-j]\).
3. Non-zero preserving iff condition \[f\neq 0,\,\text{over}\,\mathbb{F}[\mathbf{x}] \iff f_{j}\neq 0,\,\,\text{over}\,\mathsf{R}_{j}(\mathbf{x}),\] \[\text{or}\,\bigvee_{i=0}^{j-1}((f_{i}/T_{k-i,i})|_{z=0}\neq 0,\, \text{over}\,\mathbb{F}(\mathbf{x}))\]
4. Here, \(T_{i,j}=:(U_{i,j}/V_{i,j})\cdot(P_{i,j}/Q_{i,j})\), where \(U_{i,j},V_{i,j}\in\Pi\Sigma\wedge\), and \(P_{i,j},Q_{i,j}\in\Sigma\wedge\Sigma\wedge\), each in \(\mathsf{R}_{j}[\mathbf{x}]\). Think of them as being computed over \(\mathbb{F}(z,\mathbf{x})\), with the degrees being tracked. Wlog, assume that \(\mathsf{val}_{z}(T_{k-j,j})\) is minimal among all the \(T_{i,j}\)'s.
5. \(U_{i,j}|_{z=0},V_{i,j}|_{z=0}\in\mathbb{F}\backslash\{0\}\).
We follow as before, without applying the homomorphism any further. Note that the 'or condition' in hypothesis (3) is similar to the \(j=0\) case, except that there is no \(\Phi\); this is because \(\Phi(f_{0})|_{z=0}\neq 0\iff\Phi(f_{0}/T_{k,0})|_{z=0}\neq 0\). This condition just separates the derivative from the constant term.
**Efficient reduction from \(k-j\) to \(k-j-1\).** Let \(\mathsf{val}_{z}(T_{i,j})=:v_{i,j}\), for all \(i\in[k-j]\). Note that
\[\min_{i}\mathsf{val}_{z}(T_{i,j})=\min_{i}\mathsf{val}_{z}(P_{i,j}/Q_{i,j})=v_ {k-j,j}\]
since \(\mathsf{val}_{z}(U_{i,j})=\mathsf{val}_{z}(V_{i,j})=0\) (else we reorder). We remark that \(0\leq v_{i,j}<d_{j}\) for all \(i\) in the \(j\)-th step; the upper bound is strict, since otherwise \(T_{i,j}=0\) over \(\mathsf{R}_{j}(\mathbf{x})\).
Similar to the first step, we divide by \(T_{k-j,j}\), which has the minimum valuation, and then derive:
\[\sum_{i\in[k-j]}\,T_{i,j}\ =\ f_{j} \iff\sum_{i\in[k-j-1]}\,T_{i,j}/T_{k-j,j}\,+\,1\ =\ f_{j}/T_{k-j,j}\] \[\implies\sum_{i\in[k-j-1]}\,\partial_{z}(T_{i,j}/T_{k-j,j})\ =\ \partial_{z}(f_{j}/T_{k-j,j})\] \[\iff\sum_{i=1}^{k-j-1}\,T_{i,j}/T_{k-j,j}\cdot\mathsf{dlog}(T_{i,j}/T_{k-j,j})\ =\ \partial_{z}(f_{j}/T_{k-j,j}) \tag{3.4}\]
Define the following:
* \(\mathsf{R}_{j+1}:=\mathbb{F}[z]/\langle z^{d_{j+1}}\rangle\), where \(d_{j+1}:=d_{j}-v_{k-j,j}-1\).
* \(\widetilde{T}_{i,j+1}:=T_{i,j}/T_{k-j,j}\cdot\mathsf{dlog}(T_{i,j}/T_{k-j,j})\), \(\forall\ i\in[k-j-1]\).
* \(f_{j+1}:=\partial_{z}(f_{j}/T_{k-j,j})\), over \(\mathsf{R}_{j+1}(\mathbf{x})\).
We emphasize again that we do not exactly compute \(\widetilde{T}_{i,j+1}\bmod z^{d_{j+1}}\); instead it is computed as a fraction in \(\mathbb{F}(z,\mathbf{x})\), with formal \(z\). Formally, we compute \(T_{i,j+1}\in\mathbb{F}(z,\mathbf{x})\), such that \(\widetilde{T}_{i,j+1}=T_{i,j+1}\), over \(\mathsf{R}_{j+1}(\mathbf{x})\). We keep track of the degree of \(z\) in \(T_{i,j+1}\). Next, we show that all the inductive hypotheses hold at the next step as well.
**Hypothesis (1): Definability of \(T_{i,j+1}\) and \(f_{j+1}\).** By the minimal valuation assumption, it follows that \(\mathsf{val}_{z}(f_{j})\geq v_{k-j,j}\), and thus \(\widetilde{T}_{i,j+1}\) and \(f_{j+1}\) are all well-defined over \(\mathsf{R}_{j+1}(\mathbf{x})\). Note that Equation 3.4 holds over \(\mathsf{R}_{j+1}(\mathbf{x})\) as \(d_{j+1}<d_{j}\) (whatever identity holds true \(\bmod z^{d_{j}}\) must hold \(\bmod z^{d_{j+1}}\) as well). Hence, we must have \(\sum_{i=1}^{k-j-1}\widetilde{T}_{i,j+1}=f_{j+1}\), over \(\mathsf{R}_{j+1}(\mathbf{x})\), thus proving induction hypothesis (1).
**Hypothesis (2): Positivity of valuation.** Since we divide by the term of minimal valuation, by definition we immediately get \(\mathsf{val}_{z}(T_{i,j+1})\geq 0\), proving the hypothesis. Further, we claim that the \(\min\mathsf{val}\) computation in DiDI is easy. For this, recall from the definition of valuation that

\[\min_{i}\mathsf{val}_{z}(P_{i,j}/Q_{i,j})=\min_{i}(\mathsf{val}_{z}(P_{i,j})-\mathsf{val}_{z}(Q_{i,j})).\]
Therefore, for \(\min\mathsf{val}\) we compute \(\mathsf{val}_{z}(P_{i,j})\) and \(\mathsf{val}_{z}(Q_{i,j})\) for all \(i\in[k-j]\).
Here is an important lemma, which shows that the coefficient of \(y^{e}\) in a polynomial \(f(\mathbf{x},y)\in\mathbb{F}[\mathbf{x},y]\), computed by a \(\Sigma\wedge\Sigma\wedge\) circuit, can itself be computed by a small \(\Sigma\wedge\Sigma\wedge\) circuit.
**Lemma 3.5** (Coefficient extraction).: _Let \(f(\mathbf{x},y)\in\mathbb{F}[y][\mathbf{x}]\) be computed by a \(\Sigma\wedge\Sigma\wedge\) circuit of size \(s\) and degree \(d\). Then, \(\mathsf{coef}_{y^{e}}(f)\in\mathbb{F}[\mathbf{x}]\) can be computed by a small \(\Sigma\wedge\Sigma\wedge\) circuit of size \(O(sd)\), over \(\mathbb{F}[\mathbf{x}]\)._
Proof Sketch.: Let, \(f=\sum_{i}\alpha_{i}\cdot g_{i}^{e_{i}}\). Of course, \(e_{i}\leq s\) and \(\mathsf{deg}_{y}(f)\leq d\). Thus, write \(f=\sum_{i=0}^{d}f_{i}\cdot y^{i}\), where \(f_{i}\in\mathbb{F}[\mathbf{x}]\). We can interpolate on \(d+1\)-many distinct points \(y\in\mathbb{F}\) and conclude that \(f_{i}\) has a \(\Sigma\wedge\Sigma\wedge\) circuit of size at most \(O(sd)\).
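A minimal sympy sketch of this interpolation (with a hypothetical toy \(f\)) is:

```python
import sympy as sp

# Coefficient extraction by interpolation (Lemma 3.5): recover
# coef_{y^e}(f) by solving a Vandermonde system from d+1 evaluations.
x, y = sp.symbols('x y')
f = (x + y)**3 + 2*(x*y + 1)**2        # toy f with deg_y(f) = 3
d, e = 3, 2
pts = list(range(d + 1))
V = sp.Matrix([[pt**j for j in range(d + 1)] for pt in pts])
vals = sp.Matrix([f.subs(y, pt) for pt in pts])
coeffs = V.solve(vals)                 # (f_0, ..., f_d) with f = sum f_i y^i
assert sp.expand(coeffs[e] - sp.Poly(f, y).coeff_monomial(y**e)) == 0
```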
Using Lemma 3.5, we know that \(\mathsf{coef}_{z^{e}}(P_{i,j})\) and \(\mathsf{coef}_{z^{e}}(Q_{i,j})\) are in \(\Sigma\wedge\Sigma\wedge\) over \(\mathbb{F}[\mathbf{x}]\). We keep track of the \(z\)-degree and thus interpolate to find the minimum \(e<d_{j}\) such that the computed coefficient is \(\neq 0\), which gives the respective \(\mathsf{val}\).
**Hypothesis (3): The 'iff' condition.** Equation 3.4 above reduces the problem from \(k-j\) summands to \(k-j-1\). But we want an \(\iff\) condition to efficiently reduce the identity testing. If \(f_{j+1}\neq 0\), then \(\mathsf{val}_{z}(f_{j+1})<d_{j+1}\). Further, \(f_{j+1}=0\) over \(\mathsf{R}_{j+1}(\mathbf{x})\) implies:
1. Either, \(f_{j}/T_{k-j,j}\) is \(z\)-free. This implies it is in \(\mathbb{F}(\mathbf{x})\). Now, if indeed \(f_{0}\neq 0\), then the computed \(T_{i,j}\) as well as \(f_{j}\) must be non-zero over \(\mathbb{F}(z,\mathbf{x})\), by induction hypothesis (as they are non-zero over \(\mathsf{R}_{j}(\mathbf{x})\)). However, \[\left(\frac{T_{i,j}}{T_{k-j,j}}\right)\bigg{|}_{z=0}=\left(\frac{U_{i,j} \cdot V_{k-j,j}}{U_{k-j,j}\cdot V_{i,j}}\right)\bigg{|}_{z=0}\cdot\left(\frac{P _{i,j}\cdot Q_{k-j,j}}{P_{k-j,j}\cdot Q_{i,j}}\right)\bigg{|}_{z=0}\]
\[\in\ \mathbb{F}\cdot\left(\frac{\Sigma\wedge\Sigma\wedge}{\Sigma\wedge\Sigma\wedge}\right).\] Thus, \[\frac{f_{j}}{T_{k-j,j}}\ \in\ \sum\ \mathbb{F}\cdot\left(\frac{\Sigma\wedge\Sigma\wedge}{\Sigma\wedge\Sigma\wedge}\right)\ \in\ \left(\frac{\Sigma\wedge\Sigma\wedge}{\Sigma\wedge\Sigma\wedge}\right).\] Here we crucially use that \(\Sigma\wedge\Sigma\wedge\) is closed under multiplication (Lemma 2.11). Thus, this identity testing can be done in poly-time (Lemma 2.9). For detailed time-complexity calculations, see Claim 3.6 and its subsequent paragraph.
2. Or, \(\partial_{z}(f_{j}/T_{k-j,j})=z^{d_{j+1}}\cdot p\), where \(p\in\mathbb{F}(z,\mathbf{x})\) s.t. \(\mathsf{val}_{z}(p)\geq 0\). By a simple power series expansion, \(p\in\mathbb{F}(\mathbf{x})[[z]]\) (Lemma 3.3). Hence, \[\frac{f_{j}}{T_{k-j,j}}\in\left\langle z^{d_{j+1}+1}\right\rangle_{\mathbb{F}(\mathbf{x})[[z]]}\ \implies\mathsf{val}_{z}(f_{j})\geq d_{j},\] i.e. \(f_{j}=0\), over \(\mathsf{R}_{j}(\mathbf{x})\).
Conversely, \(f_{j}=0\), over \(\mathsf{R}_{j}(\mathbf{x})\), implies
\[\mathsf{val}_{z}(f_{j})\geq d_{j} \implies\mathsf{val}_{z}\left(\partial_{z}\left(\frac{f_{j}}{T_{ k-j,j}}\right)\right)\geq d_{j}-v_{k-j,j}-1\] \[\implies f_{j+1}=0,\text{ over }\mathsf{R}_{j+1}(\mathbf{x}).\]
Thus, we have proved that \(\sum_{i\in[k-j]}\ T_{i,j}\neq 0\ \text{ over }\mathsf{R}_{j}(\mathbf{x})\) iff
\[\sum_{i\in[k-j-1]}\ T_{i,j+1}\neq 0\ \text{ over }\mathsf{R}_{j+1}(\mathbf{x})\,\text{ or },\ \ 0\neq\left(\frac{f_{j}}{T_{k-j,j}}\right)\bigg{|}_{z=0}\in\mathbb{F}(\mathbf{x})\.\]
Therefore induction hypothesis (3) holds.
**Hypothesis (4): Size analysis.** We will show that \(T_{i,j+1}\in(\Pi\Sigma\wedge/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge/ \Sigma\wedge\Sigma\wedge)\), over \(\mathsf{R}_{j+1}(\mathbf{x})\), with only polynomial blowup in size. Let \(\mathsf{size}(T_{i,j})\leq s_{j}\), for \(i\in[k-j]\), and \(j\in[k]\). Note that, by assumption, \(s_{0}\leq s\).
**Claim 3.6** (Final size).: \(T_{1,k-1}\in(\Pi\Sigma\wedge/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge)\) _of size \(s^{O(k\cdot 7^{k})}\), over \(\mathsf{R}_{k-1}(\mathbf{x})\)._
Proof.: Steps \(j=0\) and \(j>0\) are slightly different because of the map \(\Phi\). However, the main idea of using power series is the same; it eventually shows that \(\mathsf{dlog}(\Sigma\wedge)\in\Sigma\wedge\Sigma\wedge\).

We first deal with \(j=0\). Let \(A-z\cdot B=\Phi(g)\in\Sigma\wedge\), for some \(A\in\mathbb{F}\) and \(B\in\mathsf{R}_{1}[\mathbf{x}]\). Note that \(A\neq 0\) because of the map \(\Phi\). Further, \(\mathsf{size}(B)\leq O(d\cdot\mathsf{size}(g))\), as a single monomial of the form \(x^{e}\) can produce \(d+1\)-many monomials. Over \(\mathsf{R}_{1}(\mathbf{x})\),
\[\mathsf{dlog}(\Phi(g))=-\frac{\partial_{z}(B\cdot z)}{A(1-\frac{B}{A}\cdot z)} =-\frac{\partial_{z}(B\cdot z)}{A}\cdot\sum_{i=0}^{d_{1}-1}\left(\frac{B}{A} \right)^{i}\cdot z^{i}. \tag{3.7}\]
\(B^{i}\) has a trivial \(\wedge\Sigma\wedge\)-circuit of size \(O(d\cdot\mathsf{size}(g))\). Also, \(\partial_{z}(B\cdot z)\) has a \(\Sigma\wedge\)-circuit of size at most \(O(d\cdot\mathsf{size}(g))\). Using the Waring identity (Lemma 2.10), we get that each \(\partial_{z}(B\cdot z)\cdot(B/A)^{i}\cdot z^{i}\) has size \(O(i\cdot d\cdot\mathsf{size}(g))\), over \(\mathsf{R}_{1}(\mathbf{x})\). Summing over \(0\leq i\leq d_{1}-1\), the overall size is at most \(O(d_{1}^{2}\cdot d\cdot\mathsf{size}(g))=O(d^{3}\cdot\mathsf{size}(g))\), as \(d_{0}=d_{1}=d\).
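The truncated power-series computation in Equation 3.7 can be checked on a toy instance; the following sketch assumes a hypothetical \(g\) and shift \(a=2\):

```python
import sympy as sp

# Toy check of Equation 3.7: dlog_z(Phi(g)) agrees with the truncated
# series  -d/dz(Bz)/A * sum_i (B/A)^i z^i  modulo z^{d1}.
x, z = sp.symbols('x z')
a, d1 = 2, 5
g = 1 + x + 3*x**2
Pg = g.subs(x, z*x + a)                 # Phi : x -> z*x + a
A = Pg.subs(z, 0)                       # nonzero constant, since g(a) != 0
B = sp.cancel((A - Pg) / z)             # so that Phi(g) = A - z*B
lhs = sp.series(sp.diff(Pg, z)/Pg, z, 0, d1).removeO()
rhs = sp.expand(-sp.diff(B*z, z)/A * sum((B/A)**i * z**i for i in range(d1)))
assert sp.simplify(sp.series(lhs - rhs, z, 0, d1).removeO()) == 0
```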
For the \(j\)-th step, we emphasize that the degree could be larger than \(d\). Assume that the syntactic degrees of the denominator and numerator of \(T_{i,j}\) (each in \(\mathbb{F}[\mathbf{x},z]\)) are bounded by \(D_{j}\) (it is _not_ \(d_{j}\) as seen above; this is to save the trouble of mod-computation at each step). Of course, \(D_{0}<d\leq s\).

For \(j>0\), the above summation in Equation 3.7 is over \(\mathsf{R}_{j}(\mathbf{x})\); however, the degrees of the corresponding \(A\) and \(B\) could be as large as \(D_{j}\) (possibly more than \(d_{j}\)). Thus, the overall size after the power-series expansion is \(O(D_{j}^{2}\cdot d\cdot\mathsf{size}(g))\).
Using Lemma 3.8, we can show that \(\mathsf{dlog}(P_{i,j})\in\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\) (similarly for \(Q_{i,j}\)), of size \(O(D_{j}^{2}\cdot s_{j})\). Also, \(\mathsf{dlog}(U_{i,j}\cdot V_{k-j,j})\in\sum\mathsf{dlog}(\Sigma\wedge)\), i.e. a sum of actions of \(\mathsf{dlog}\) on \(\Sigma\wedge\) (since \(\mathsf{dlog}\) linearizes products); and it can be computed by the above formulation. Thus, \(\mathsf{dlog}(T_{i,j}/T_{k-j,j})\) is a sum of 4-many \(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\) of size at most \(O(D_{j}^{2}\,s_{j})\) and 1-many \(\Sigma\wedge\Sigma\wedge\) of size \(O(D_{j}^{2}d_{j}s_{j})\) (from the above power-series computation) [Note: we summed up the \(\Sigma\wedge\Sigma\wedge\)-expressions from \(\mathsf{dlog}(\Sigma\wedge)\) together]. Additionally, the syntactic degree of each denominator and numerator (of the \(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\)) is \(O(D_{j})\). We rewrite the 4 expressions (each a \(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\)) and express them as a single \(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\) using the Waring identity (Lemma 2.11), with a size blowup of \(O(D_{j}^{12}\,s_{j}^{4})\); here the syntactic degree blows up to \(O(D_{j})\). Finally, we add the remaining \(\Sigma\wedge\Sigma\wedge\) circuit (of size \(O(D_{j}^{3}s_{j})\) and degree \(O(dD_{j})\)) to get \(O(s_{j}^{5}D_{j}^{16}d)\). To bound this, we need to understand the degree bound \(D_{j}\).
Finally, we need to multiply by \(T_{i,j}/T_{k-j,j}\in(\Pi\Sigma\wedge/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge)\), where each \(\Sigma\wedge\Sigma\wedge\) is a product of two \(\Sigma\wedge\Sigma\wedge\) expressions of size \(s_{j}\) and syntactic degree \(D_{j}\), clubbed together incurring a blowup of \(O(D_{j}\cdot s_{j}^{2})\). Hence, multiplying it with the \(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\) expression obtained from the \(\mathsf{dlog}\) computation above gives a size blowup of \(s_{j+1}=s_{j}^{7}\cdot D_{j}^{O(1)}\cdot d\).

Computing \(T_{i,j}/T_{k-j,j}\) increases the syntactic degree 'slowly', much more slowly than the size blowup. As mentioned before, the degree blowup in the \(\mathsf{dlog}\)-computation is \(O(dD_{j})\), and in the clearing of the four expressions it is just \(O(D_{j})\). Thus, \(D_{j+1}=O(dD_{j})\implies D_{j}=d^{O(j)}\).

The recursion on the size is \(s_{j+1}=s_{j}^{7}\cdot d^{O(j)}\). Using \(d\leq s\) we deduce \(s_{j}=(sd)^{O(j\cdot 7^{j})}\). In particular \(s_{k-1}\), the size after \(k-1\) steps, is \(s^{O(k\cdot 7^{k})}\). This computation quantitatively establishes induction hypothesis (4).
**Hypothesis (5): Invertibility of \(\Pi\Sigma\wedge\)-circuits.** For invertibility, we emphasise that the \(\mathsf{dlog}\) computation plays a crucial role. In the following lemma we claim that the action \(\mathsf{dlog}(\Sigma\wedge\Sigma\wedge)\in\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\) is of \(\mathsf{poly}\)-size.
**Lemma 3.8** (Differentiation).: _Let \(f(\boldsymbol{x},y)\in\mathbb{F}[y][\boldsymbol{x}]\) be computed by a \(\Sigma\wedge\Sigma\wedge\) circuit of size \(s\) and degree \(d\). Then, \(\partial_{y}(f)\) can be computed by a small \(\Sigma\wedge\Sigma\wedge\) circuit of size \(O(sd^{2})\), over \(\mathbb{F}[y][\boldsymbol{x}]\)._
Proof Sketch.: Lemma 3.5 shows that each \(f_{e}\) has an \(O(sd)\)-size circuit, where \(f=\sum_{e}f_{e}\,y^{e}\). Doing this for each \(e\in[0,d]\) gives a blowup of \(O(sd^{2})\).
Similarly, consider the action on \(\Pi\Sigma\wedge\). We know that \(\mathsf{dlog}\) distributes over a product additively, so it suffices to work with \(\mathsf{dlog}(\Sigma\wedge)\); and earlier in Claim 3.6 we saw that \(\mathsf{dlog}(\Sigma\wedge)\in\Sigma\wedge\Sigma\wedge\) of poly-size. Assuming these, we simplify
\[\frac{T_{i,j}}{T_{k-j,j}}=\frac{U_{i,j}\cdot V_{k-j,j}}{V_{i,j}\cdot U_{k-j,j} }\cdot\frac{P_{i,j}\cdot Q_{k-j,j}}{Q_{i,j}\cdot P_{k-j,j}},\]
and its \(\mathsf{dlog}\). Thus, using Equation 3.4, \(U_{i,(j+1)}\) grows to \(U_{i,j}\cdot V_{k-j,j}\) (and similarly \(V_{i,(j+1)}\)). This also means \(U_{i,(j+1)}|_{z=0}\in\mathbb{F}\setminus\{0\}\), thereby proving the hypothesis.
#### Final time complexity
The above proof actually shows that \(T_{1,k-1}\) is in \(\mathsf{Gen}(1,s^{O(k\cdot 7^{k})})\) over \(\mathsf{R}_{k-1}(\mathbf{x})\); and that the degree bound on \(z\) (over \(\mathbb{F}[z,\mathbf{x}]\), keeping denominator and numerator 'in place') is \(D_{k-1}=d^{O(k)}\). We cannot directly use the identity testing algorithms of the constituent simpler models due to \(\mathsf{R}_{k-1}(\mathbf{x})\). Moreover, using hypothesis (2) and Lemma 3.3 we know that \(T_{1,k-1}\in\mathbb{F}(\mathbf{x})[[z]]\), and it suffices to do identity testing on the first term of the power series: \(T_{1,k-1}|_{z=0}\) over \(\mathbb{F}(\mathbf{x})\). Note that hypothesis (5) guarantees that the \(\Pi\Sigma\wedge\) part remains non-zero on \(z=0\) evaluation; however, \(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\) may be undefined. For this, we keep track of the \(z\)-degrees of the numerator and denominator, which are polynomially bounded as seen in the discussion above. We can easily interpolate and cancel the \(z\) power to make it work. Basically, this shows that to test \(T_{1,k-1}\) we need to test \(z^{e}\cdot\Sigma\wedge\Sigma\wedge\) over \(\mathbb{F}[\mathbf{x}]\), where \(e\geq 0\) due to positive valuation. Whitebox PIT of \(\Sigma\wedge\Sigma\wedge\) is in poly-time using Lemma 2.9, and testing \(z^{e}\) is possible using Lemma 2.1 with an appropriate degree bound. The proof above is constructive: we calculate \(U_{i,j+1}\) (and other terms) from \(U_{i,j}\) explicitly. Gluing everything together, we conclude that this part can be done in \(s^{O(k\cdot 7^{k})}\) time.
What remains is to test the \(z=0\)-part of induction hypothesis (3); it could _short-circuit_ the recursion much before \(j=k-1\). As mentioned before, in this case we need to do a PIT on \(\Sigma\wedge\Sigma\wedge\) only. At the \(j\)-th step, when we substitute \(z=0\), the size of each \(T_{i,j}\) can be at most \(s_{j}\) (by definition). We need to do PIT on a simpler model: \(\sum^{[k-j]}\ \mathbb{F}\cdot(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge)\). We can clear out and express this as a single \(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\) expression, with a size blowup of \(s_{j}^{O(k-j)}\leq(sd)^{O(j(k-j)7^{j})}\). Since this case could short-circuit the recursion, to bound the final time complexity we need to consider the \(j\) which maximizes the exponent.
**Lemma 3.9**.: _Let \(k\in\mathbb{N}\), and \(h(x):=x(k-x)7^{x}\). Then, \(\max_{i\in[k-1]}h(i)=h(k-1)\)._

Proof Sketch.: Differentiate to get \(h^{\prime}(x)=(k-x)7^{x}-x7^{x}+x(k-x)(\log 7)7^{x}=7^{x}\cdot[x^{2}(-\log 7)+x(k\log 7-2)+k]\). It vanishes at

\[x=\left(\frac{k}{2}-\frac{1}{\log 7}\right)+\sqrt{\left(\frac{k}{2}-\frac{1}{\log 7}\right)^{2}+\frac{k}{\log 7}}\,\]

which lies in \((k-1,k)\). Since \(h\) is increasing to the left of this critical point, \(h\) is maximized over the integers in \([k-1]\) at \(x=k-1\).
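This is easy to confirm numerically (a quick sketch):

```python
# Numeric check of Lemma 3.9: over the integers 1..k-1, h(x) = x*(k-x)*7**x
# attains its maximum at x = k-1.
for k in range(2, 25):
    assert max(range(1, k), key=lambda x: x*(k - x)*7**x) == k - 1
```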
Therefore, \(\max_{j\in[k-1]}j(k-j)7^{j}=(k-1)7^{k-1}\). Finally, use Lemma 2.9 for the base-case whitebox PIT. Thus, the final time complexity is \(s^{O(k\cdot 7^{k})}\).

We remark again that under the \(z=0\) substitution, \(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge\) may be undefined; as before, we keep track of the \(z\)-degrees of the numerator and denominator (polynomially bounded, as discussed above) and interpolate to cancel the \(z\) power.
**Bit complexity.** It is routine to show that the bit-complexity is as claimed. Initially, the given circuit has bit-complexity \(s\). The main blowup happens due to the \(\mathsf{dlog}\)-computation, which is a poly-size blowup. We also remark that while using Lemma 2.11 (via Lemma 2.10), we _may_ need to go to a field extension of size at most \(s^{O(k)}\) (because of the \(\varepsilon(i)\) and correspondingly the constants \(\gamma_{\varepsilon(2),\ldots,\varepsilon(k)}\); but these still take \(s^{O(k)}\) bits). Also, the computations in Theorem 2.2 and Lemma 2.9 blow up the bit-complexity polynomially. This concludes the proof.
**Remark**.:
1. The above method does _not_ give whitebox PIT (in poly-time) for \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\), as we do not know poly-time whitebox PIT for \(\Sigma\wedge\Sigma\Pi^{[\delta]}\). However, the above methods do show that whitebox PIT for \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) polynomially _reduces_ to whitebox PIT for \(\Sigma\wedge\Sigma\Pi^{[\delta]}\).
2. DiDI-technique can be used to give whitebox PIT for the general bloated model \(\mathsf{Gen}(k,s)\).
3. The above proof works when the characteristic is \(\geq d\). This is because nonzeroness remains _preserved_ under differentiation wrt \(z\).
### Algorithm
The whitebox PIT for Theorem 1.1 that is discussed in Section 3 appears below as Algorithm 1.

_Words of caution_: Throughout the algorithm there are intermediate expressions to be stored compactly. Think of them as 'special' circuits in \(\mathbf{x}\), but over the _function field_ \(\mathbb{F}(z)\). Keep track of their degrees wrt \(z\), and of the sizes of their fractions represented in 'bloated' circuit form.
## 4 Blackbox PIT for Depth-4 Circuits
We will give the proof of Theorem1.2 in this section. Before the details, we will state a few important definitions and lemmas from [1] to be referenced later.
**Definition 4.1** (Transcendence Degree).: _Polynomials \(T_{1},\ldots,T_{m}\) are called algebraically dependent if there exists a nonzero annihilator \(A\) s.t. \(A(T_{1},\ldots,T_{m})=0\). Transcendence degree is the size of the largest subset \(S\subseteq\{T_{1},\ldots,T_{m}\}\) that is algebraically independent. Then \(S\) is called a transcendence basis._
```
1:Input: \(f=T_{1}+\ldots+T_{k}\in\Sigma^{[k]}\Pi\Sigma\wedge\), a whitebox circuit of size \(s\) over \(\mathbb{F}[\mathbf{x}]\). Output: \(0\) if \(f\equiv 0\), and \(1\) if non-zero.
2:Let \(\Psi:\mathbb{F}[\mathbf{x}]\longrightarrow\mathbb{F}[z]\), be a sparse-PIT map, using [10] (Theorem 2.2). Apply it on \(f\) and check whether \(\Psi(f)\stackrel{{?}}{{=}}0\). If non-zero, output \(1\)
3:Obtain a point \(\mathbf{\alpha}=(a_{1},\ldots,a_{n})\in\mathbb{F}^{n}\) from Hitting Set \(\mathcal{H}\) of \(\Pi\Sigma\wedge\) such that \(T_{i}|_{\mathbf{x}=\mathbf{\alpha}}\neq 0\), for all \(i\in[k]\). And define \(\Phi:x_{i}\mapsto z\cdot x_{i}+a_{i}\). Check \(\sum_{i\in[k-1]}\partial_{z}(\Phi(T_{i})/\Phi(T_{k}))\stackrel{{?} }{{=}}0\) mod \(z^{d_{1}}\) (\(d_{1}:=s\)) as follows:
4:Consider each \(T_{i,1}:=\partial_{z}(\Phi(T_{i})/\Phi(T_{k}))\) over \(R_{1}(\mathbf{x})\), where \(R_{1}:=\mathbb{F}[z]/\langle z^{d_{1}}\rangle\). Use dlog computation (Claim 3.6), to write each \(T_{i,1}\) in a 'bloated' form as \((\Pi\Sigma\wedge/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge)\).
5:for\(j\gets 1\)to\(k-1\)do
6: Reduce the top-fanin at each step using the 'Divide & Derive' technique. Assume that at the \(j\)-th step, we have to check the identity: \(\sum_{i\in[k-j]}T_{i,j}\stackrel{{?}}{{=}}0\) over \(R_{j}(\mathbf{x})\), where \(R_{j}:=\mathbb{F}[z]/\langle z^{d_{j}}\rangle\), each \(T_{i,j}\) has a \((\Pi\Sigma\wedge/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge)\) representation and therein each \(\Pi\Sigma\wedge|_{z=0}\in\mathbb{F}\setminus\{0\}\).
7: Compute \(v_{k-j,j}:=\min_{i}\operatorname{val}_{z}(T_{i,j})\); by reordering it is for \(i=k-j\). To compute \(v_{k-j,j}\), use coefficient extraction (Lemma 3.5) and \(\Sigma\wedge\Sigma\wedge\) -circuit PIT (Lemma 2.9).
8: 'Divide' by \(T_{k-j,j}\) and check whether \(\left.\left(\sum_{i\in[k-j-1]}\left(T_{i,j}/T_{k-j,j}\right)+1\right)\right|_ {z=0}\stackrel{{?}}{{=}}0\). Note: this expression is in \((\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge)\). Use-- (1) \(\Pi\Sigma\wedge|_{z=0}\in\mathbb{F}\), and (2) _closure_ of \(\Sigma\wedge\Sigma\wedge\) under multiplication. Finally, do PIT on this by Lemma 2.9.
9: If it is non-zero, output \(1\), otherwise 'Derive' wrt \(z\) and 'Induct' on \(\left(\sum_{i\in[k-j-1]}\partial_{z}(T_{i,j}/T_{k-j,j})\right)\stackrel{{?} }{{=}}0\), over \(R_{j+1}(\mathbf{x})\) where \(R_{j+1}:=\mathbb{F}[z]/\langle z^{d_{j}-v_{k-j,j}-1}\rangle\).
10: Again using dlog (Claim 3.6), show that \(T_{i,j+1}:=\partial_{z}(T_{i,j}/T_{k-j,j})\) has small \((\Pi\Sigma\wedge/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge/\Sigma\wedge \Sigma\wedge)\)-circuit over \(R_{j+1}(\mathbf{x})\). So call the algorithm on \(\sum_{i\in[k-j-1]}\,T_{i,j+1}\stackrel{{?}}{{=}}0\).
11:\(j\gets j+1\).
12:endfor
13:At the end, \(j=k-1\). Do PIT (Lemma 2.9) on the single \((\Pi\Sigma\wedge/\Pi\Sigma\wedge)\cdot(\Sigma\wedge\Sigma\wedge/\Sigma\wedge\Sigma\wedge)\) circuit, over \(R_{k-1}(\mathbf{x})\). If it is zero, output \(0\), otherwise output \(1\).
```
**Algorithm 1** Whitebox PIT Algorithm for \(\Sigma^{[k]}\Pi\Sigma\wedge\)-circuits
**Definition 4.2** (Faithful homomorphism).: _A homomorphism \(\Phi:\mathbb{F}[\mathbf{x}]\to\mathbb{F}[\mathbf{y}]\) is faithful for \(\mathbf{T}\) if \(\mathsf{trdeg}_{\mathbb{F}}(\mathbf{T})=\mathsf{trdeg}_{\mathbb{F}}(\Phi(\mathbf{T}))\)._
The reason for interest in faithful maps is due to their usefulness in preserving the identity, as shown in the following fact.
**Fact 4.3** (Theorem 2.4[1]).: _For any \(C\in\mathbb{F}[y_{1},\ldots,y_{m}]\), \(C(\mathbf{T})=0\iff C(\Phi(\mathbf{T}))=0\)._
Here is an important criterion about the Jacobian matrix, which basically shows that it _preserves_ algebraic independence.
**Fact 4.4** (Jacobian criterion).: _Let \(\mathbf{f}\subset\mathbb{F}[\mathbf{x}]\) be a finite set of polynomials of degree at most \(d\), and \(\mathsf{trdeg}_{\mathbb{F}}(\mathbf{f})\leq r\). If \(\text{char}(\mathbb{F})=0\), or \(\text{char}(\mathbb{F})>d^{r}\), then \(\mathsf{trdeg}_{\mathbb{F}}(\mathbf{f})=\mathsf{rk}_{\mathbb{F}(x)}\mathcal{J }_{\mathbf{x}}(\mathbf{f})\)._
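For intuition, here is a small sympy illustration of the criterion on toy polynomials (over characteristic 0):

```python
import sympy as sp

# Jacobian criterion on a toy set: x1**2 + x2**2 = (x1+x2)**2 - 2*x1*x2 is
# algebraically dependent on the first two, so trdeg = Jacobian rank = 2.
x1, x2 = sp.symbols('x1 x2')
T = sp.Matrix([x1 + x2, x1*x2, x1**2 + x2**2])
assert T.jacobian([x1, x2]).rank() == 2
```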
The Jacobian criterion, together with faithful maps, gives a recipe to design a map which drastically reduces the number of variables when the \(\mathsf{trdeg}\) is small.
**Lemma 4.5** (Lemma 2.7[1]).: _Let \(\mathbf{T}\in\mathbb{F}[\mathbf{x}]\) be a finite set of polynomials of degree at most \(d\) and \(\mathsf{trdeg}_{\mathbb{F}}(\mathbf{T})\leq r\), and \(\operatorname{char}(\mathbb{F})=0\) or \(>d^{r}\). Let \(\Psi^{\prime}:\mathbb{F}[\mathbf{x}]\longrightarrow\mathbb{F}[z]\) be such that \(\mathsf{rk}_{\mathbb{F}(\mathbf{x})}\mathcal{J}_{\mathbf{x}}(\mathbf{T})=\mathsf{rk}_{\mathbb{F}(z)}\Psi^{\prime}(\mathcal{J}_{\mathbf{x}}(\mathbf{T}))\)._
_Then, the map \(\Phi:\mathbb{F}[\mathbf{x}]\longrightarrow\mathbb{F}[z,t,\mathbf{y}]\), such that \(x_{i}\mapsto(\sum_{j\in[r]}y_{j}t^{ij})+\Psi^{\prime}(x_{i})\), is a faithful homomorphism for \(\mathbf{T}\)._
In the next section we will use these tools to prove Theorem 1.2(b). The proof and calculations for Theorem 1.2(a) are very similar.
### PIT for \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\)
We solve PIT for a model more general than \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\), by solving the following problem.
**Problem 4.6**.: _Let \(\{T_{i}\,|\,i\in[m]\}\) be \(\Pi\Sigma\Pi^{[\delta]}\) circuits of (syntactic) degree at most \(d\) and size \(s\). Let the transcendence degree of \(T_{i}\)'s, \(\mathsf{trdeg}_{\mathbb{F}}(T_{1},\ldots,T_{m})=k\ll s\). Further, \(C(x_{1},\ldots,x_{m})\) be a circuit of \((\mathsf{size}+\deg)<s^{\prime}\). Design a blackbox-PIT algorithm for \(C(T_{1},\ldots,T_{m})\)._
Trivially, \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) is a very special case of the above setting. Let \(\mathbf{T}:=\{T_{1},\ldots,T_{m}\}\). Let \(\mathbf{T}_{k}:=\{T_{1},\ldots,T_{k}\}\) be a transcendence basis. For \(T_{i}=\prod_{j}g_{ij}\), we denote the set \(L(T_{i}):=\{g_{ij}\mid j\}\).
We want to find an explicit homomorphism \(\Psi:\mathbb{F}[\mathbf{x}]\to\mathbb{F}[\mathbf{x},z]\) s.t. \(\Psi(\mathcal{J}_{\mathbf{x}}(\mathbf{T}))\) is of a 'nice' form. In the image we fix \(\mathbf{x}\) suitably, to get a composed map \(\Psi^{\prime}:\mathbb{F}[\mathbf{x}]\longrightarrow\mathbb{F}[z]\) s.t. \(\mathsf{rk}_{\mathbb{F}(\mathbf{x})}\mathcal{J}_{\mathbf{x}}(\mathbf{T})=\mathsf{rk}_{\mathbb{F}(z)}\Psi^{\prime}(\mathcal{J}_{\mathbf{x}}(\mathbf{T}))\). Then, we can extend this map to \(\Phi:\mathbb{F}[\mathbf{x}]\longrightarrow\mathbb{F}[z,\mathbf{y},t]\) s.t. \(x_{i}\mapsto(\sum_{j=1}^{k}y_{j}t^{ij})+\Psi^{\prime}(x_{i})\), which is _faithful_ by Lemma 4.5. We show that the map \(\Phi\) can be efficiently constructed using a scaling-and-shifting map (\(\Psi\)), which is eventually fixed by the hitting set (\(H^{\prime}\), defining \(\Psi^{\prime}\)) of a \(\Sigma\wedge\Sigma\Pi^{[\delta]}\) circuit. Overall, \(\Phi(f)\) is a \((k+2)\)-variate polynomial for which a trivial hitting set exists.
Wlog, \(\mathcal{J}_{\mathbf{x}}(\mathbf{T})\) is full rank with respect to the variable set \(\mathbf{x}_{k}=(x_{1},\ldots,x_{k})\). Thus, by assumption, \(J_{x_{k}}(\mathbf{T}_{k})\neq 0\) (for notation, see section 2). We want to construct a \(\Psi\) s.t. \(\Psi(J_{x_{k}}(\mathbf{T}_{k}))\) has an 'easier' PIT. We have the following identity [1, Eqn. 3.1], from the linearity of the determinant, and the simple observation that \(\partial_{x}(T_{i})\ =\ T_{i}\cdot\left(\sum_{j}\ \partial_{x}(g_{ij})/g_{ij}\right)\), where \(T_{i}=\prod_{j}\ g_{ij}\):
\[J_{x_{k}}(\mathbf{T}_{k})\ =\ \sum_{g_{1}\in L(T_{1}),\ldots,g_{k}\in L(T_{k})}\ \ \left(\frac{T_{1}\ldots T_{k}}{g_{1}\ldots g_{k}}\right)\cdot J_{x_{k}}(g_{1}, \ldots,g_{k}). \tag{4.7}\]
**The homomorphism \(\Psi\).** To ensure the invertibility of all \(g\in\bigcup_{i}\ L(T_{i})\), we proceed as in Section 3. Consider

\[h:=\prod_{i\in[k]}\prod_{g\in L(T_{i})}g=\prod_{i\in[\ell]}g_{i}\,\]

where each \(g_{i}\in\bigcup_{i}\ L(T_{i})\) and \(\ell\leq k\cdot s\). Note that \(\deg h\leq d\cdot k\cdot s\) and \(h\) is computable by a \(\Pi\Sigma\Pi\) circuit of size \(O(s)\). Lemma 2.4 gives the relevant hitting set \(\mathcal{H}\subseteq\mathbb{F}^{n}\), which contains an evaluation point \(\mathbf{\alpha}=(a_{1},\ldots,a_{n})\) such that \(h(\mathbf{\alpha})\neq 0\), implying \(g(\mathbf{\alpha})\neq 0\) for all \(g\in\bigcup_{i}\ L(T_{i})\). We emphasise that, unlike the previous case, here in the blackbox setting we _do not_ have individual access to the \(g\)'s to verify the correct \(\mathbf{\alpha}\). Thus, we try out all \(\mathbf{\alpha}\in\mathcal{H}\); if the input polynomial \(f\) is non-zero, then one such \(\mathbf{\alpha}\) must exist. This search adds a multiplicative blowup of \(\mathsf{poly}(s)\), since the size of \(\mathcal{H}\) is \(\mathsf{poly}(s)\).
Fix an \(\mathbf{\alpha}=(a_{1},\cdots,a_{n})\in\mathcal{H}\) and define \(\Psi:\mathbb{F}[\mathbf{x}]\to\mathbb{F}[\mathbf{x},z]\) as \(x_{i}\mapsto z\cdot x_{i}+a_{i}\). Denote the ring \(\mathsf{R}[\mathbf{x}]\) where \(\mathsf{R}:=\mathbb{F}[z]/\langle z^{D}\rangle\), and \(D:=k\cdot(d-1)+1\). Being 1-1, \(\Psi\) is clearly a non-zero preserving map. Moreover,
**Claim 4.8**.: \(J_{x_{k}}(\mathbf{T}_{k})\ =\ 0\ \iff\ \Psi(J_{x_{k}}(\mathbf{T}_{k}))\ =\ 0\)_, over \(\mathsf{R}[\mathbf{x}]\)._
Proof.: As \(\deg(T_{i})\leq d\), each entry of the matrix can be of degree at most \(d-1\); therefore \(\deg(J_{x_{k}}(\mathbf{T}_{k}))\leq k(d-1)=D-1\). Thus, \(\deg_{z}(\Psi(J_{x_{k}}(\mathbf{T}_{k})))<D\). Hence, the conclusion.
Equation 4.7 implies that
\[\Psi(J_{x_{k}}(\mathbf{T}_{k}))\ =\ \Psi(T_{1}\cdots T_{k})\cdot\sum_{g_{1}\in L (T_{1}),\ldots,g_{k}\in L(T_{k})}\ \ \frac{\Psi(J_{x_{k}}(g_{1},\ldots,g_{k}))}{\Psi(g_{1}\ldots g_{k})}. \tag{4.9}\]
As \(T_{i}\) has product fanin \(s\), the top-fanin in the sum in Equation 4.9 can be at most \(s^{k}\). Then define,
\[\widetilde{F}\ :=\ \sum_{g_{1}\in L(T_{1}),\ldots,g_{k}\in L(T_{k})}\ \ \frac{\Psi(J_{x_{k}}(g_{1},\ldots,g_{k}))}{\Psi(g_{1}\ldots g_{k})}\,\ \ \text{ over}\ \mathsf{R}[\mathbf{x}]. \tag{4.10}\]
**Well-definability of \(\widetilde{F}\).** Note that,
\[\Psi(g_{i})\ \text{mod}\ z\neq 0\ \Longrightarrow\ 1/\Psi(g_{1}\cdots g_{k}) \in\mathbb{F}[[\mathbf{x},z]].\]
Thus, the RHS is an element of \(\mathbb{F}[[\mathbf{x},z]]\) and, taking \(\bmod\ z^{D}\), it is in \(\mathsf{R}[\mathbf{x}]\). We remark that instead of minimally reducing \(\bmod\ z^{D}\), we will work with an \(F\in\mathbb{F}[z,\mathbf{x}]\) such that \(F=\widetilde{F}\) over \(\mathsf{R}[\mathbf{x}]\).
Further, we ensure that the degree of \(z\) is polynomially bounded.
**Claim 4.11**.: _Over \(\mathsf{R}[\mathbf{x}]\), \(\Psi(J_{x_{k}}(\mathbf{T}_{k}))=0\iff F=0\)._
Proof sketch.: This follows from the invertibility of \(\Psi(T_{1}\cdots T_{k})\) in \(\mathsf{R}[\mathbf{x}]\).
**The hitting set \(H^{\prime}\).** By \(J_{x_{k}}(\mathbf{T}_{k})\neq 0\), and Claims 4.8-4.11, we have \(F\neq 0\) over \(\mathsf{R}[\mathbf{x}]\). We want to find \(H^{\prime}\subseteq\mathbb{F}^{n}\), s.t. \(\Psi(J_{x_{k}}(\mathbf{T}_{k}))|_{\mathbf{x}=\mathbf{\alpha}}\neq 0\), for some \(\mathbf{\alpha}\in H^{\prime}\) (which will ensure the rank-preservation). Towards this, we will show (below) that \(F\) has \(s^{O(\delta k)}\)-size \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit over \(\mathsf{R}[\mathbf{x}]\). Next, Theorem 2.8 provides the hitting set \(H^{\prime}\) in time \(s^{O(\delta^{2}k\log s)}\).
**Claim 4.12** (Main size bound).: \(F\in\mathsf{R}[\mathbf{x}]\) _has \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit of size \((s3^{\delta})^{O(k)}\)._
The proof studies the two parts of Equation 4.10:
1. The numerator \(\Psi(J_{x_{k}}(g_{1},\ldots,g_{k}))\) has \(O(3^{\delta}2^{k}k!ks)\)-size \(\Sigma\wedge\Sigma\Pi^{[\delta-1]}\)-circuit (see Lemma 4.15), and
2. \(1/\Psi(g_{1}\cdots g_{k})\), for \(g_{i}\in L(T_{i})\) has \((s3^{\delta})^{O(k)}\)-size \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit; both over \(\mathsf{R}[\mathbf{x}]\) (see Lemma 4.16).
We need the following two claims to prove the numerator size bound.
**Claim 4.13**.: _Let \(g_{i}\in L(T_{i})\), where \(T_{i}\in\Pi\Sigma\Pi^{[\delta]}\) of size at most \(s\); then the polynomial \(J_{x_{k}}(g_{1},\ldots,g_{k})\) is computable by \(\Sigma^{[k]}\Pi^{[k]}\Sigma\Pi^{[\delta-1]}\) of size \(O(k!\,ks)\)._

Proof Sketch.: Each entry of the matrix has degree at most \(\delta-1\). Trivial expansion gives top-fanin \(k!\), where each product (of fanin \(k\)) has size \(\sum_{i}\mathsf{size}(g_{i})\). As \(\mathsf{size}(T_{i})\leq s\), trivially each \(\mathsf{size}(g_{i})\leq s\). Therefore, the total size is \(k!\cdot\sum_{i}\mathsf{size}(g_{i})=O(k!\,ks)\).
**Claim 4.14**.: _Let \(g\in\Sigma\Pi^{\delta}\), then \(\Psi(g)\in\Sigma\Pi^{\delta}\) of size \(3^{\delta}\cdot\mathsf{size}(g)\) (for \(n\gg\delta\))._
Proof Sketch.: Each monomial \(\mathbf{x}^{\mathbf{e}}\) of degree \(\delta\) can produce \(\prod_{i}(e_{i}+1)\leq((\sum_{i}e_{i}+n)/n)^{n}\leq(\delta/n+1)^{n}\)-many monomials, by the AM-GM inequality, as \(\sum_{i}e_{i}\leq\delta\). As \(\delta/n\to 0\), we have \((1+\delta/n)^{n}\to e^{\delta}\). As \(e<3\), the upper bound follows.
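A quick numeric sanity check of this bound (for some assumed sample values of \(n\gg\delta\)):

```python
# Quick numeric check of the bound in Claim 4.14: (1 + delta/n)**n < 3**delta.
for n, delta in [(100, 5), (1000, 8), (10_000, 12)]:
    assert (1 + delta/n)**n < 3**delta
```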
**Lemma 4.15** (Numerator size).: \(\Psi(J_{x_{k}}(g_{1},\ldots,g_{k}))\) _is computable by \(\Sigma\wedge\Sigma\Pi^{[\delta-1]}\) of size \(O(3^{\delta}\,2^{k}k!s)=:s_{2}\)._
Proof.: In Claim 4.13 we showed that \(J_{x_{k}}(g_{1},\ldots,g_{k})\in\Sigma^{[k]}\Pi^{[k]}\Sigma\Pi^{[\delta-1]}\) of size \(O(k!ks)\). Moreover, for a \(g\in\Sigma\Pi^{[\delta-1]}\), we have \(\Psi(g)\in\Sigma\Pi^{[\delta-1]}\) of size at most \(3^{\delta}\cdot\mathsf{size}(g)\), over \(\mathsf{R}[\mathbf{x}]\) (due to Claim 4.14).

Combining these, one concludes that \(\Psi(J_{x_{k}}(g_{1},\ldots,g_{k}))\in\Sigma^{[k]}\Pi^{[k]}\Sigma\Pi^{[\delta-1]}\), of size \(O(3^{\delta}\,k!ks)\). We _convert_ the \(\Pi\)-gate to a \(\wedge\)-gate using the Waring identity (Lemma 2.10), which blows up the size by a multiple of \(2^{k-1}\). Thus, \(\Psi(J_{x_{k}}(g_{1},\ldots,g_{k}))\in\Sigma\wedge\Sigma\Pi^{[\delta-1]}\) of size \(O(3^{\delta}\,2^{k}k!s)\).
In the following lemma, using power series expansion of expressions like \(1/(1-a\cdot z)\), we conclude that \(1/\Psi(g)\) has a small \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit, which would further imply the same for \(1/\Psi(g_{1}\cdots g_{k})\).
**Lemma 4.16** (Denominator size).: _Let \(g_{i}\in L(T_{i})\). Then, \(1/\Psi(g_{1}\cdots g_{k})\) can be computed by a \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit of size \(s_{1}:=(s3^{\delta})^{O(k)}\), over \(\mathsf{R}[\mathbf{x}]\)._
Proof.: Let \(g\in L(T_{i})\) for some \(i\). Assume, \(\Psi(g)=A-z\cdot B\), for some \(A\in\mathbb{F}\) and \(B\in\mathsf{R}[\mathbf{x}]\) of degree \(\delta\), with \(\mathsf{size}(B)\leq 3^{\delta}\cdot s\), from Claim 4.14. Note that, over \(\mathsf{R}[\mathbf{x}]\),
\[\frac{1}{\Psi(g)}\ =\ \frac{1}{A(1-\frac{B}{A}\cdot z)}\ =\ \frac{1}{A}\cdot \sum_{i=0}^{D-1}\left(\frac{B}{A}\right)^{i}\cdot z^{i}. \tag{4.17}\]
As \(B^{i}\) has a trivial \(\wedge\Sigma\Pi^{[\delta]}\)-circuit (over \(\mathsf{R}[\mathbf{x}]\)) of size \(\leq 3^{\delta}\cdot s+i\), summing over \(i\in[D-1]\) the overall size is at most \(D\cdot 3^{\delta}\cdot s+O(D^{2})\). As \(D<k\cdot d\), we conclude that \(1/\Psi(g)\) has a \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit of size \(\mathsf{poly}(s\cdot k\cdot d\cdot 3^{\delta})\), over \(\mathsf{R}[\mathbf{x}]\). Multiplying \(k\)-many such products directly gives an upper bound of \((s\cdot 3^{\delta})^{O(k)}\), using Lemma 2.11 (essentially, the Waring identity).
Proof of Claim 4.12.: Combining Lemmas 4.15-4.16, observe that \(\Psi(J_{x_{k}}(\cdot))/\Psi(\cdot)\) has a \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit of size at most \((s_{1}\cdot s_{2})^{2}=(s\cdot 3^{\delta})^{O(k)}\), over \(\mathsf{R}[\mathbf{x}]\), using Lemma 2.11. Summing up at most \(s^{k}\) many terms (by the definition of \(F\)), the size still remains \((s\cdot 3^{\delta})^{O(k)}\).
**Degree bound.** As the syntactic degrees of the \(T_{i}\) are bounded by \(d\), and \(\Psi\) maintains \(\mathsf{deg}_{\mathbf{x}}=\mathsf{deg}_{z}\), we must have \(\mathsf{deg}_{z}(\Psi(J_{x_{k}}(g_{1},\ldots,g_{k})))=\mathsf{deg}_{\mathbf{x}}(J_{x_{k}}(g_{1},\ldots,g_{k}))\leq D-1\). Note that Lemma 4.15 actually works over \(\mathbb{F}[\mathbf{x},z]\), and thus there is no additional degree blowup (in \(z\)). However, there is some degree blowup in Lemma 4.16, due to Equation 4.17.
Note that Equation 4.17 shows that over \(\mathsf{R}[\mathbf{x}]\),
\[\frac{1}{\Psi(g)}=\left(\frac{1}{A^{D}}\right)\cdot\left(\sum_{i=0}^{D-1}A^{ D-1-i}z^{i}\cdot B^{i}\right)=:\frac{p(\mathbf{x},z)}{q},\]
where \(q=A^{D}\). We think of \(p\in\mathbb{F}[\mathbf{x},z]\) and \(q\in\mathbb{F}\). Note, \(\mathsf{deg}_{z}(\Psi(g))\leq\delta\) implies \(\mathsf{deg}_{z}(p)\leq\mathsf{deg}_{z}((B\,z)^{D-1})\leq\delta\cdot(D-1)\).
Finally, denote \(1/\Psi(g_{1}\cdots g_{k})=:P_{g_{1},\ldots,g_{k}}/Q_{g_{1},\ldots,g_{k}}\), over \(\mathsf{R}[\mathbf{x}]\). This is just multiplying \(k\)-many \((p/q)\)'s, implying a degree blowup by a multiple of \(k\); in particular, \(\mathsf{deg}_{z}(P_{(\cdot)})\leq\delta\cdot k\cdot(D-1)\). Thus, in Equation 4.10, summing up \(s^{k}\)-many terms gives an expression (over \(\mathsf{R}[\mathbf{x}]\)):
\[F\ =\ \sum_{g_{1}\in L(T_{1}),\ldots,g_{k}\in L(T_{k})}\Psi(J_{x_{k}}(g_{1}, \ldots,g_{k}))\cdot\left(\frac{P_{g_{1},\ldots,g_{k}}}{Q_{g_{1},\ldots,g_{k}}} \right)\ =:\ \frac{P(\mathbf{x},z)}{Q}\.\]
Verify that \(Q\in\mathbb{F}\). The degree of \(z\) also remains bounded by
\[\max_{g_{i}\in L(T_{i}),i\in[k]}\mathsf{deg}_{z}(P_{g_{1},\ldots,g_{k}})+ \delta k\leq\mathsf{poly}(s).\]
Using the degree bounds, we finally have \(P\in\mathbb{F}[\mathbf{x},z]\) as a \(\Sigma\wedge\Sigma\Pi^{[\delta]}\)-circuit (over \(\mathbb{F}(z)\)) of size \(n^{O(\delta)}\,(s3^{\delta})^{O(k)}=3^{O(\delta k)}s^{O(k+\delta)}=:s_{3}\).
We want to _construct_ a set \(H^{\prime}\subseteq\mathbb{F}^{n}\) such that the action \(P(H^{\prime},z)\neq 0\). Using [10] (Theorem 2.8), we conclude that it has an \(s^{O(\delta\log s_{3})}=s^{O(\delta^{2}k\log s)}\) size hitting set, which is constructible in a similar time. Hence, the construction of \(\Phi\) follows, making \(\Phi(f)\) a \((k+2)\)-variate polynomial. Finally, by the obvious degree bounds on \(\mathbf{y},z,t\) from the definition of \(\Phi\), we get the blackbox PIT algorithm with time-complexity \(s^{O(\delta^{2}k\log s)}\), finishing Theorem 1.2(b).
We could also give the final hitting set for the general problem.
Solution to Problem 4.6.: We know that
\[C(T_{1},\ldots,T_{m})=0\iff E:=\Phi(C(T_{1},\ldots,T_{m}))=0.\]
Since \(H^{\prime}\) can be constructed in \(s^{O(\delta^{2}\,k\,\log s)}\)-time, it is trivial to find a hitting set for \(E|_{H^{\prime}}\) (which is just a \((k+2)\)-variate polynomial with the aforementioned degree bounds). The final hitting set for \(E\) can be constructed in \(s^{O(k)}\cdot s^{O(\delta^{2}k\,\log s)}\)-time.
**Remark**.:
1. As Jacobian Criterion (Fact 4.4) holds when the characteristic is \(>d^{\mathsf{trdeg}}\), it is easy to conclude that our theorem holds for all fields of \(\operatorname{char}>d^{k}\).
2. The above proof gives an efficient reduction from blackbox PIT for \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) circuits to \(\Sigma\wedge\Sigma\Pi^{[\delta]}\) circuits. In particular, a poly-time hitting set for \(\Sigma\wedge\Sigma\Pi^{[\delta]}\) circuits would put PIT for \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) in \(\mathsf{P}\).
3. Also, the DiDI-technique (of Theorem 1.1) directly gives a blackbox algorithm, but the complexity is _exponentially_ worse (in terms of \(k\) in the exponent) owing to its recursive blowups.
### PIT for \(\Sigma^{[k]}\Pi\Sigma\wedge\)
As we remarked earlier, the proof of Theorem 1.2(a) is similar to the one discussed in Section 4.1. Here we sketch the proof, stating the relevant changes. As with Theorem 1.2(b), we generalize this theorem and prove it for a much bigger class of polynomials.
**Problem 4.18**.: _Let \(\{T_{i}\,|\,i\in[m]\}\) be \(\Pi\Sigma\wedge\) circuits of (syntactic) degree at most \(d\) and size \(s\). Let the transcendence degree of \(T_{i}\)'s, \(\mathsf{trdeg}_{\mathbb{F}}(T_{1},\ldots,T_{m})=:k\ll s\). Further, \(C(x_{1},\ldots,x_{m})\) be a circuit of size \(+\) degree \(<s^{\prime}\). Design a blackbox-PIT algorithm for \(C(T_{1},\ldots,T_{m})\)._
It is trivial to see that \(\Sigma^{[k]}\Pi\Sigma\wedge\) is a very special case of the above setting. We will use the same idea (and notation) as in Theorem 1.2(b), using the Jacobian technique. The main idea is to come up with the map \(\Psi\), and correspondingly the hitting set \(H^{\prime}\). If \(g\in L(T_{i})\), then \(\mathsf{size}(g)\leq O(dn)\). The \(D\) (and hence \(\mathsf{R}[\mathbf{x}]\)) remains as before. Claims 4.8-4.11 hold similarly. We will construct the hitting set \(H^{\prime}\) by showing that \(F\) has a small \(\Sigma\wedge\Sigma\wedge\) circuit over \(\mathsf{R}[\mathbf{x}]\).
Note that Claim 4.13 remains the same for \(\Sigma\wedge\Sigma\wedge\) (implying the same size blowup). However, in Claim 4.14 the size blowup becomes \(O(d\operatorname{size}(g))\), because each monomial \(x^{e}\) can only produce \(d+1\) many monomials. Therefore, similar to Lemma 4.15, one can show that \(\Psi(J_{x_{k}}(g_{1},\dots,g_{k}))\in\Sigma\wedge\Sigma\wedge\), of size \(O(2^{k}k!kds)\). Similarly, the size in Lemma 4.16 can be replaced by \(s^{O(k)}\). Therefore, we get (similar to Claim 4.12):
**Claim 4.19**.: \(F\in\mathsf{R}[\mathbf{x}]\) _has a \(\Sigma\wedge\Sigma\wedge\)-circuit of size \(s^{O(k)}\)._
Next, the degree bound also remains essentially the same: following the same footsteps, it is not hard to see that the degree bound on \(z\) remains \(\mathsf{poly}(ksd)\). Therefore, \(P\in\mathbb{F}[\mathbf{x},z]\) has a \(\Sigma\wedge\Sigma\wedge\)-circuit of size \(s^{O(k)}\).
We want to _construct_ a set \(H^{\prime}\subseteq\mathbb{F}^{n}\) such that the action \(P(H^{\prime},z)\neq 0\). By Lemma 2.9, we conclude that it has an \(s^{O(k\log\log s)}\) size hitting set, which is constructible in a similar time. Hence the construction of the map \(\Phi\), and the theorem, follows (using the \(z\)-degree bound).
_Solution to Problem 4.18_.: We know that
\[C(T_{1},\dots,T_{m})=0\iff E:=\Phi(C(T_{1},\dots,T_{m}))=0.\]
Since \(H^{\prime}\) can be constructed in \(s^{O(k\log\log s)}\) time, it is trivial to find a hitting set for \(E|_{H^{\prime}}\) (which is just a \((k+2)\)-variate polynomial with the aforementioned degree bounds). The final hitting set for \(E\) can be constructed in \(s^{O(k)}\cdot s^{O(k\log\log s)}\) time.
## 5 Conclusion
This work introduces the powerful DiDI-technique and solves three open problems in PIT for depth-4 circuits, namely \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\) (blackbox) and \(\Sigma^{[k]}\Pi\Sigma\wedge\) (both whitebox and blackbox). Here are some immediate questions of interest which require rigorous investigation.
1. Can the exponent in Theorem 1.1 be improved to \(O(k)\)? Currently, it is exponential in \(k\).
2. Can we improve Theorem 1.2(b) to \(s^{O(\log\log s)}\) (like in Theorem 1.2(a))?
3. Can we design a polynomial-time PIT for \(\Sigma^{[k]}\Pi\Sigma\Pi^{[\delta]}\)?
4. Can we design a polynomial-time PIT for \(\Sigma\wedge\Sigma\Pi^{[\delta]}\) circuits (i.e. unbounded top-fanin)?
5. Can we solve PIT for \(\Sigma^{[k]}\Pi\Sigma M_{2}\) circuits efficiently (polynomial/quasipolynomial-time), where \(\Sigma M_{2}\) denotes bivariate polynomials?
6. Can we design an efficient PIT for rational functions of the form \(\Sigma\) (\(1/\Sigma\wedge\Sigma\)) or \(\Sigma\) (\(1/\Sigma\Pi\)) (for unbounded top-fanin)? |
2303.02498 | A stochastic network approach to clustering and visualising single-cell
genomic count data | Important tasks in the study of genomic data include the identification of
groups of similar cells (for example by clustering), and visualisation of data
summaries (for example by dimensional reduction). In this paper, we propose a
novel approach to studying single-cell genomic data, by modelling the observed
genomic data count matrix $\mathbf{X}\in\mathbb{Z}_{\geq0}^{p\times n}$ as a
bipartite network with multi-edges. Utilising this first-principles network
representation of the raw data, we propose clustering single cells in a
suitably identified $d$-dimensional Laplacian Eigenspace (LE) via a Gaussian
mixture model (GMM-LE), and employing UMAP to non-linearly project the LE to
two dimensions for visualisation (UMAP-LE). This LE representation of the data
estimates transformed latent positions (of genes and cells), under a latent
position model of nodes in a bipartite stochastic network. We demonstrate how
these estimated latent positions can enable fine-grained clustering and
visualisation of single-cell genomic data, by application to data from three
recent genomics studies in different biological contexts. In each data
application, clusters of cells independently learned by our proposed
methodology are found to correspond to cells expressing specific marker genes
that were independently defined by domain experts. In this validation setting,
our proposed clustering methodology outperforms the industry-standard for these
data. Furthermore, we validate components of the LE decomposition of the data
by contrasting healthy cells from normal and at-risk groups in a
machine-learning model, thereby identifying an LE cancer biomarker that
significantly predicts long-term patient survival outcome in two independent
validation cohorts with data from 1904 and 1091 individuals. | Thomas E. Bartlett, Swati Chandna, Sandipan Roy | 2023-03-04T20:47:30Z | http://arxiv.org/abs/2303.02498v4 | # Stochastic networks theory to model
###### Abstract
We propose a novel way of representing and analysing single-cell genomic count data, by modelling the observed data count matrix as a network adjacency matrix. This perspective enables theory from stochastic networks modelling to be applied in a principled way to this type of data, providing new ways to view and analyse these data, and giving first-principles theoretical justification to established, successful methods. We show the success of this approach in three cell-biological contexts, from the epiblast/epithelial/neural lineage. New technology has made it possible to gather genomic data from single cells at unprecedented scale, and this brings with it new challenges in dealing with much higher levels of heterogeneity than expected between individual cells. Novel, tailored, computational-statistical methodology is needed to make the most of these new types of data, involving collaboration between mathematical and biomedical scientists.
## 1 Introduction
Network models and methods have become an important subject in modern statistical science, especially in application areas such as cell biology. A network can be represented by an 'adjacency matrix' [1], denoted by \(\mathbf{A}\) (Fig. 1). In the simplest case, a network consists of nodes of only one type (called a 'unipartite network'), and these nodes may be connected (or not) by exactly one unweighted edge; in this case, \(\mathbf{A}\in\{0,1\}^{n\times n}\) (Fig. 1a), where \(n\) is the number of nodes in the network. A stochastic network model [2] is a statistical model of a network in which there is a probability of observing an edge between nodes \(i\) and \(j\) defined according to some model, i.e., \(P(A_{ij}\neq 0)=p_{ij}\), where the probability \(p_{ij}\) is defined according to a law which may depend on some characteristics of the nodes \(i\) and \(j\), such as some observed data-vectors \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\).
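As a toy illustration (a hypothetical latent-position model, not one fitted to any data in this paper), such a stochastic network can be simulated in a few lines:

```python
import numpy as np

# Toy stochastic network: nodes have latent positions, and an edge i~j
# appears with probability p_ij decreasing in the latent distance.
rng = np.random.default_rng(0)
n = 50
pos = rng.normal(size=(n, 2))                          # latent positions
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
P = 1.0 / (1.0 + np.exp(dist))                         # p_ij in (0, 1/2)
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1); A = A + A.T                         # undirected, no loops
```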
Network models are well established in genomics. However, in genomics network models are typically used to represent gene regulation in a gene regulatory network [3], or related notions such as the gene co-expression network. Gene regulation is the phenomenon in which the protein encoded by one gene activates or represses the expression of another gene. When the expression of one gene regulates the expression of another gene in this way, this individual gene regulation is represented by an edge connecting the repressing and activating genes in the gene regulatory network. However, the focus of the work presented in this paper is not gene regulatory networks. Instead, we use a network model as a novel representation of single-cell genomic count data, and we draw on recent advances in stochastic network modelling to improve clustering and visualisation of the single cells in a genomic data set.
Raw genomic sequencing data (RNA-seq, ATAC-seq, etc) may be represented as a matrix \(\mathbf{X}\in\mathbb{Z}_{\geq 0}^{p\times n}\) of non-negative integer counts. These are counts of genomic sequencing reads, such that each entry \(X_{ij}\) represents the number of observed copies of genomic fragment (e.g., mRNA transcript) \(i\in\{1,...,p\}\) in sample (e.g., cell) \(j\in\{1,...,n\}\). Modern single-cell sequencing data use
UMIs (unique molecular identifiers), which avoid double-counting the same transcript (after amplification by PCR, polymerase chain reaction). In these data, the columns of \(\mathbf{X}\) may be modelled as \(n\) independent draws from a multinomial distribution [4]. This contrasts with long-established methods for analysing RNA-seq data which use normalisation methods such as TPM (transcripts per million) and FPKM (fragments per kilobase per million). Normalisation methods such as TPM and FPKM normalise the data by library size (i.e., \(\sum_{i=1}^{p}X_{ij}\)), and this may induce spurious correlations between transcripts [4], as well as making it difficult to write down expressions for the true error distributions of the data after normalisation. In other words, after normalisation, we cannot say exactly what the true distribution of the normalised data is, and so rely on heuristics. Instead, we take a unique view of the data that is inspired by stochastic network theory. Our perspective allows data modelling based on a fundamental understanding of the distribution of the data, and of the summaries that follow from it.
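To make this concrete, the following toy simulation (an illustrative sketch, not part of the original analysis; all parameter choices here are arbitrary) draws cells as independent multinomial counts and compares transcript-transcript correlations before and after library-size normalisation:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 1000  # transcripts, cells

# Each cell is an independent multinomial draw: fixed transcript
# probabilities, with library size (total counts) varying per cell.
probs = rng.dirichlet(np.ones(p))
lib_sizes = rng.integers(1_000, 10_000, size=n)
X = np.column_stack([rng.multinomial(s, probs) for s in lib_sizes])

# TPM/FPKM-style scaling divides each cell by its library size, turning
# counts into proportions; proportions must sum to one, so an excess in
# one transcript forces a deficit in the others (a compositional effect).
X_norm = X / X.sum(axis=0, keepdims=True)

def mean_offdiag_corr(M):
    """Mean correlation between distinct transcripts (matrix rows)."""
    C = np.corrcoef(M)
    return C[~np.eye(p, dtype=bool)].mean()

print("raw counts: ", mean_offdiag_corr(X.astype(float)))
print("normalised: ", mean_offdiag_corr(X_norm))
```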
When the genomic data being analysed are the only available observations of some single cells in a study, an important task when analysing these data is to group cells together of similar types for identification or 'phenotyping', e.g., by clustering. One of the most successful and reliable methods for clustering single cells in genomic data is the so-called 'Louvain clustering' method [5], as implemented in the industry-standard Seurat software package [6]. Louvain clustering works by optimising an estimator called the 'network modularity', which is defined in Section 2.2. Another important task when analysing single-cell data is visualisation of the data by projection into two dimensions, e.g., using non-linear methods such as UMAP (uniform manifold approximation and projection) or \(t\)-SNE (\(t\)-distributed stochastic neighbour embedding). The main aim of these projections is that data points which are close together in the original high dimensional space should remain close together in the two dimensional projection, where distance between points is quantified by a carefully chosen metric (e.g. the \(t\)-statistic). The \(p\) features of data matrix \(\mathbf{X}\in\mathbb{R}^{p\times n}\) usually contain much redundancy, and so to aid computational tractability and reduce noise [7], spectral methods may be used to reduce the dimension of the data to \(\mathbf{Y}\in\mathbb{R}^{d\times n}\) where \(d\ll p\), before using a non-linear method such as \(t\)-SNE or UMAP to generate the two dimensional projection \(\tilde{\mathbf{Y}}\in\mathbb{R}^{2\times n}\) from \(\mathbf{Y}\). It is natural to carry out spectral clustering and non-linear projection using the same spectral decomposition \(\mathbf{Y}\) of the data-matrix \(\mathbf{X}\).
The structure of this paper is as follows. In Section 2, we present our novel first-principles representation of single-cell genomic count data, and our improved method of clustering single cells, that we call 'GMM-LE' clustering, and related visualisations. Then in Section 3, we present the results of applying GMM-LE clustering and related visualisation to example data-sets relevant to human cortical development, to human embryonic development, and to breast cancer initiation, showing an improvement over the state-of-the-art methods that are currently available.
## 2 Methods
### Model specification
The model specification is based on the observation that a single-cell genomic data-matrix \(\mathbf{X}\in\mathbb{Z}_{\geq 0}^{p\times n}\) has equivalent characteristics to the adjacency matrix \(\mathbf{A}\) that represents a bipartite network with multi-edges (Fig.1b). By 'multi-edges' we mean that a pair of nodes may be connected by multiple edges, i.e., \(A_{ij}\in\mathbb{Z}_{\geq 0}\). By 'bipartite' we mean that the network can have nodes of two different types (represented by blue circles and red squares in Fig.1b), and in our setting we restrict this so that only nodes of different types may be connected by edges. Furthermore, both \(\mathbf{A}\) and \(\mathbf{X}\) tend to be very sparse (i.e., contain a large proportion of zeros: typically less than 10% of the matrix elements are non-zero in both cases). Hence, modelling \(\mathbf{X}\) with \(\mathbf{A}\) provides a mathematical justification for using clustering methodology designed for networks, as described in Section 2.2. We note that this adjacency matrix is \(\mathbf{A}\in\mathbb{Z}_{\geq 0}^{p\times n}\), and our model is thus specified as \(\mathbf{A}=\mathbf{X}\).
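A minimal sketch of this specification in code is given below; the count matrix here is randomly generated as a stand-in for real data, which would in practice be loaded from the output of the sequencing pipeline:

```python
import numpy as np
from scipy import sparse

# Stand-in for a single-cell count matrix X (p transcripts x n cells).
rng = np.random.default_rng(1)
p, n = 5_000, 2_000
X = sparse.random(p, n, density=0.05, random_state=1,
                  data_rvs=lambda size: rng.integers(1, 20, size=size))
X = X.tocsr()

# Model specification: the bipartite multi-edge adjacency matrix is the
# count matrix itself.
A = X

# Both A and typical genomic X are very sparse (<10% non-zero entries).
print(f"fraction of non-zero entries: {A.nnz / (p * n):.3f}")

# Bipartite node degrees: row sums (transcripts) and column sums
# (cells, i.e. library sizes).
transcript_degree = np.asarray(A.sum(axis=1)).ravel()
cell_degree = np.asarray(A.sum(axis=0)).ravel()
```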
### Single-cell data clustering
Communities in a network are groups of highly interconnected nodes in that network (Fig.2). Assigning network nodes \(i\in\{1,...,n\}\) to communities is an equivalent problem to clustering the nodes
Figure 1: **Network model of count data.** (a) Adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) represents a symmetric unipartite network. (b) Adjacency matrix \(\mathbf{A}\in\mathbb{Z}_{\geq 0}^{p\times n}\) represents an asymmetric bipartite multi-edge network; adjacency matrix \(\mathbf{A}\) (and the equivalent network it represents) models the data matrix \(\mathbf{X}\) of non-negative integer counts.
[8], and hence when these nodes represent single cells, community detection methodology can be used to cluster single cells. When the data-matrix \(\mathbf{X}\) is high dimensional, as is typical in genomics, this can lead to poor performance of standard clustering methodology such as _K_-means. This is because as the dimensionality of the data increases, the Euclidean distances between similar and dissimilar points converge, making it difficult to cluster similar points together. As a result, dimension reduction is often used to project the \(n\) data-points represented by \(\mathbf{X}\) into a lower dimensional subspace \(\mathbf{Y}\in\mathbb{R}^{d\times n}\), before carrying out the clustering in that \(d<p\) dimensional subspace. When the dimension reduction is carried out by spectral methods such as PCA (principal components analysis) or SVD (singular value decomposition), this procedure is called spectral clustering.
Newman equated spectral clustering of network nodes with maximising an estimator called the network modularity [8], and these concepts are described as follows. We start with a network of \(n\) nodes represented by the adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\), where \(A_{ij}=1\) and \(A_{ij}=0\) correspond (respectively) to the presence or absence of an edge in the network between nodes \(i\) and \(j\). Then, spectral clustering of the nodes proceeds with \(\mathbf{A}\) as the 'input matrix' (i.e., the input to the spectral clustering algorithm), or alternatively with its 'graph-Laplacian' \(\boldsymbol{\mathcal{L}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) as the 'input matrix', where \(\mathbf{D}\) is the diagonal matrix of the vector \((D_{1},D_{2},\ldots,D_{n})^{\top}\), with \(D_{i}=\sum_{j=1}^{n}A_{ij}\). Taking either \(\mathbf{A}\) or \(\boldsymbol{\mathcal{L}}\) as the 'input matrix', spectral clustering first carries out dimension-reduction to give the matrix \(\mathbf{Y}\in\mathbb{R}^{d\times n}\), before carrying out _K_-means or another type of clustering in this
Figure 2: **Communities in a network.** Network nodes in the same community are displayed with the same colour. The density of network edges tends to be much greater within communities, than between communities.
\(d<n\) dimensional subspace. When spectral methods are used to project the graph-Laplacian representation of the \(n\) data-points into a dimensionally-reduced subspace \(\mathbf{Y}\in\mathbb{R}^{d\times n}\), we refer to this subspace as the 'Laplacian eigenspace', or 'Laplacian spectral embedding' (LSE). On the other hand, when spectral methods are used to project the adjacency matrix representation of the data-points into a dimensionally-reduced subspace, we refer to this subspace as the 'adjacency spectral embedding' (ASE) [9].
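A minimal sketch of computing such a Laplacian spectral embedding for the bipartite matrix \(\mathbf{A}=\mathbf{X}\) is given below. The bipartite graph-Laplacian normalised by the row and column degrees, and the scaling of the singular vectors, are our illustrative choices here, and the authors' implementation may differ in such details (note also that the embedding below stores cells as rows, the transpose of the document's \(\mathbf{Y}\in\mathbb{R}^{d\times n}\) convention):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import svds

def laplacian_eigenspace(A, d=30):
    """Embed the n cells (columns of A) in a d-dimensional Laplacian
    eigenspace. A is the sparse p x n count/adjacency matrix, and
    L = D_row^{-1/2} A D_col^{-1/2} is the bipartite graph-Laplacian."""
    d_row = np.asarray(A.sum(axis=1)).ravel()
    d_col = np.asarray(A.sum(axis=0)).ravel()
    # Guard against isolated nodes (zero degree) before inverting.
    inv_sqrt_row = sparse.diags(1.0 / np.sqrt(np.maximum(d_row, 1.0)))
    inv_sqrt_col = sparse.diags(1.0 / np.sqrt(np.maximum(d_col, 1.0)))
    L = inv_sqrt_row @ A @ inv_sqrt_col

    # Top-d singular triplets; svds returns them in ascending order.
    U, s, Vt = svds(L.astype(float), k=d)
    order = np.argsort(s)[::-1]
    # Right singular vectors give the cell embedding, scaled here by
    # the singular values as in a spectral embedding.
    return (Vt[order] * s[order, None]).T  # shape (n, d)

# Y = laplacian_eigenspace(A, d=30)
```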
The network modularity is defined as an aggregated 'observed minus expected' statistic \(Q(\mathbf{C})\), given cluster labels \(\mathbf{C}=\{c_{1},c_{2},...,c_{K}\}\), where \(c_{k}\) is the set of nodes labelled as cluster \(k\in\{1,...,K\}\):
\[Q(\mathbf{C})=\frac{1}{D^{++}}\sum_{i=1}^{n}\sum_{j=1}^{n}\left(A_{ij}-\frac{D _{i}D_{j}}{D^{++}}\right)\cdot\mathbb{I}[\zeta(i)=\zeta(j)],\]
where node degree \(D_{i}=\sum_{j=1}^{n}A_{ij}\), total number of edges \(D^{++}=\sum_{i=1}^{n}D_{i}=\sum_{i=1}^{n}\sum_{j=1}^{n}A_{ij}\), and indicator function \(\mathbb{I}[\zeta(i)=\zeta(j)]=1\) if nodes \(i\) and \(j\) appear together in any set of cluster labels \(c\in\mathbf{C}\), or \(\mathbb{I}[\zeta(i)=\zeta(j)]=0\) otherwise, where \(c=\zeta(i)\) means that node \(i\) is assigned to cluster \(c\). Recent research has shown that clusters in the eigenspace of either the adjacency matrix \(\mathbf{A}\) or its corresponding graph-Laplacian \(\mathcal{L}\) asymptotically follow multivariate Gaussian distributions [10]. Therefore, fitting a Gaussian mixture model (GMM), rather than _K_-means clustering, is better specified for the eigenspace of either the matrix \(\mathbf{A}=\mathbf{X}\) or of its graph-Laplacian \(\mathcal{L}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\). Hence, in the results presented in Section 3, we fit a Gaussian mixture model (GMM) in the Laplacian eigenspace, a procedure we refer to as GMM-LE clustering. We note that inferred clusters obtained using a Gaussian mixture model with covariance matrices constrained to be equal and spherical are effectively the same as the clusters obtained from the _K_-means algorithm.
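A minimal sketch of GMM-LE clustering, assuming a Laplacian-eigenspace embedding \(\mathbf{Y}\) (n cells by d dimensions) has already been computed; the covariance structure and initialisation settings below are illustrative choices rather than the authors' exact configuration:

```python
from sklearn.mixture import GaussianMixture

def gmm_le_cluster(Y, K, seed=0):
    """GMM-LE: fit a K-component Gaussian mixture to the (n x d)
    Laplacian-eigenspace embedding Y; return hard labels per cell."""
    gmm = GaussianMixture(n_components=K, covariance_type="full",
                          n_init=10, random_state=seed)
    return gmm.fit_predict(Y)

# labels = gmm_le_cluster(Y, K=8)
```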
### Single-cell data visualisation
Single-cell data are often visualised in two dimensions by using non-linear methods such as UMAP or _t_-SNE, i.e., projecting the data \(\mathbf{X}\in\mathbb{Z}_{\geq 0}^{p\times n}\) to give \(\widetilde{\mathbf{Y}}\in\mathbb{R}^{2\times n}\). To aid computational tractability and reduce noise, spectral decomposition is often applied to the data matrix to give \(\mathbf{Y}\in\mathbb{R}^{d\times n}\), \(d\ll p\), before carrying out non-linear projection to \(\widetilde{\mathbf{Y}}\in\mathbb{R}^{2\times n}\) for visualisation [7]. Therefore it is natural to carry out spectral clustering and non-linear projection of the data based on the same spectral decomposition of the data matrix \(\mathbf{X}\). We suggest that when clustering the data-points in the Laplacian eigenspace, it is most natural to obtain the two-dimensional projection of these \(n\) data-points by applying non-linear dimension reduction to the representation of these data-points in the Laplacian eigenspace, i.e., the eigendecomposition of \(\mathcal{L}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\). In Section 3, we present visualisations of various single-cell genomic data-sets that are based on applying UMAP to the projection of the data points into the Laplacian eigenspace, a procedure that we refer to as UMAP-LE (uniform manifold approximation and projection from the Laplacian eigenspace).
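A corresponding sketch of UMAP-LE, using the umap-learn package; default UMAP settings are assumed here, as the settings used for the figures are not specified:

```python
import umap  # umap-learn package

def umap_le(Y, seed=0):
    """UMAP-LE: 2-D UMAP projection of the same Laplacian-eigenspace
    embedding Y used for GMM-LE clustering."""
    return umap.UMAP(n_components=2, random_state=seed).fit_transform(Y)

# coords = umap_le(Y)
# e.g. plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=2)
```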
## 3 Results
In this section, we use the methodology described in Section 2, to cluster and visualise single-cell genomic data from three recent studies from different cell-biological contexts:
* _Human cortical development_[11]: data for \(p=15735\) RNA transcripts in \(n=31073\) cells, from the V1 area of the visual cortex in the brains of human embryos at gestation weeks 20-22.
* _Human embryonic development_[12]: data for \(p=20407\) RNA transcripts in \(n=1258\) cells, from the blastocyst stage of the embryo at 5-7 days post fertilisation.
* _Breast cancer initiation_[13]: data for \(p=14835\) RNA transcripts in \(n=17730\) cells, from the epithelial lineage and supporting cells, from human breast tissue from healthy individuals.
Fig.3, Fig.4, and Fig.5 show clustering and visualisation based on the Laplacian eigenspace estimated from, respectively, the human cortical development data-set, the human embryonic development data-set, and the breast cancer initiation data-set. UMAP is used to project the \(n\) data-points into two dimensions for visualisation, based on their projection into the estimated Laplacian eigenspace (UMAP-LE). GMM-LE clustering is also carried out by fitting a Gaussian mixture model in the same estimated Laplacian eigenspace. In the plots in these figures, each dot represents one cell (sample). In the cluster plots, the colours indicate the GMM-LE cluster that the cell is assigned to. In the remaining plots, the colours indicate the average log-expression levels of sets of marker genes defined for specific cell-types that are relevant to each data-set. These sets of marker genes have been used previously by us in other studies relating to human neural development [14], human embryonic development [15], and breast cancer initiation [16].
### Human cortical development dataset: visualisation and clustering inferences
The cell-type validation plots shown in Fig.3 illustrate that the UMAP projection of the data
Figure 3: Projection, clustering, and validation, for the human cortical development data-set. UMAP projections from the Laplacian eigenspace (UMAP-LE) show GMM-LE clusters and mean expression levels of validating marker genes for cell-types of interest.
points in the Laplacian eigenspace (UMAP-LE) makes biological sense. The neural stem-cell types oRG (outer radial glia) and vRG (ventricular radial glia) appear in separate yet close locations, and a trajectory can be discerned from these stem-cell types through IPC (intermediate progenitor cells) to neurons and interneurons. The GMM-LE clusters also closely correspond to the greatest colour intensity in the validation plots for specific cell-types. However, the number of clusters \(K\) is an important parameter to set in any clustering procedure. Because the validation for the clustering in the human cortical development data-set was based on marker genes for 8 different neural cell-types, we started with \(K=8\) clusters for this data-set (Fig.3). However, this did not lead to the neural stem-cell types oRG (outer radial glia) and vRG (ventricular radial glia) being clustered separately, despite their apparent separation in the UMAP projection of the Laplacian eigenspace. Therefore, we tested \(K\in\{5,10,15,20,25\}\), finding that \(K\in\{20,25\}\) provided sufficient granularity to separate the neural stem-cell types oRG and vRG, with little difference in the clustering output between \(K=20\) and \(K=25\). However, at this level of granularity, it becomes clear that the definitions of 'neuron' (i.e., excitatory neurons) and 'interneuron' (i.e., inhibitory neurons) are too broad for our sets of validation markers, and that finer definitions of neuronal subtypes will be required to identify these (possibly novel) subtypes. We also compared these results from GMM-LE clustering against those obtained from the well-known Louvain clustering algorithm [5] implemented in the industry-standard Seurat software package [6]. The Seurat software with default settings chooses \(K=22\), and therefore we used \(K=22\) to compare results between the GMM-LE and Seurat-Louvain clustering methods. Table 1 shows ratios of the mean log expression of the marker genes for one cell-type to the mean log expression of the markers for all the other cell-types. These are based on cell-type identity assigned to each cluster according to the set of markers with the greatest mean expression level in each cluster. For 6 out of 8 cell-types, GMM-LE clustering performs better; however, as noted already, the worse performance for neurons and interneurons is likely to be due to the fact that these definitions are too broad for clustering at this level of granularity, and therefore these general cell-types span several clusters.
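A sketch of the validation statistic reported in Table 1 is given below. The exact aggregation is not fully specified in the text, so this function is one plausible reading: assign each cluster the cell-type whose markers have the greatest mean expression, then take the ratio of that mean to the mean over the other cell-types' markers:

```python
import numpy as np

def marker_ratio(logX, labels, marker_sets, cluster):
    """logX: genes x cells log-expression matrix; labels: cluster label
    per cell; marker_sets: dict mapping cell-type -> row indices of its
    marker genes. Returns the assigned cell-type and validation ratio
    for the given cluster."""
    cells = np.flatnonzero(labels == cluster)
    means = {ct: logX[np.ix_(rows, cells)].mean()
             for ct, rows in marker_sets.items()}
    best = max(means, key=means.get)  # cell-type with highest markers
    other = np.mean([m for ct, m in means.items() if ct != best])
    return best, means[best] / other
```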
### Human embryonic development dataset: visualisation and clustering inferences
Research on human embryos is highly restricted, due to ethical and regulatory considerations. This constrains sample sizes in generated data-sets, restricting inferences that can be obtained from these samples. If a novel clustering method is able to increase the number of cells from an embryo that can be reliably identified as a specific cell-type of interest, such as epiblast cells, then it is potentially very valuable to biological science researchers, because it increases the effective sample size of these precious cells available for further research. Epiblast cells are precursors to the ectoderm layer, one of three primary germ layers of the embryo; epiblast cells go on to form neural and epithelial cells [17]. Using GMM-LE clustering, we were able to increase the number of cells that can be reliably identified as epiblast from a well-known study of human embryos [12], compared to the latest research in the field [18]. To set the number of clusters \(K\) shown in Fig.4, we increased \(K\) until the epiblast cells formed a distinct cluster as shown by the UMAP projection of the Laplacian eigenspace (UMAP-LE), setting \(K=8\). In Fig.4, cluster 5 identifies 68 epiblast cells according to expression of the marker genes NANOG and KLF17, and the absence of expression of GATA3 and SOX17 [19]. These 68 cells include 44 of 45 cells that were previously identified as epiblast, as well as very importantly 24 cells
Table 1: Comparison of GMM-LE clustering and Seurat-Louvain clustering in the human cortical development data-set. Ratios of the mean log expression of marker genes for the cell-type of interest to the other cell-types are shown.

| Method | oRG | vRG | astrocytes | Oligo | Microglia | IPC | neuron | interneuron |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GMM-LE | 1.43 | 1.16 | 1.29 | 1.56 | 1.44 | 1.56 | 1.17 | 1.41 |
| Seurat-Louvain | 1.31 | 1.09 | 1.25 | 1.58 | 1.29 | 1.29 | 1.16 | 1.97 |
newly identified as epiblast cells here. The full list of all 68 epiblast cells is given in Appendix B.
### Breast cancer initiation dataset: visualisation and clustering inferences
The main cell-types of the epithelial lineage in breast tissue are luminal progenitor cells and mature luminal cells (thought to be the cells-of-origin of, respectively, hormone-receptor negative and positive breast cancers) as well as basal cells [20]. As luminal progenitor cells are thought to be the cell-of-origin of highly aggressive triple negative breast cancers (TNBC), it is important to be able to identify these cells as accurately as possible, e.g., for quantifying their numbers in tissue samples from at-risk individuals. Important applications of accurately identifying luminal
Figure 4: **Projection, clustering, and validation, for the human embryonic development dataset.** UMAP projections from the Laplacian eigenspace (UMAP-LE) show GMM-LE clusters and expression levels of validating marker genes for cell-types of interest.
Figure 5: **Projection, clustering, and validation, for the breast cancer initiation data-set.** UMAP projections from the Laplacian eigenspace (UMAP-LE) show GMM-LE clusters and mean expression levels of validating marker genes for cell-types of interest.
progenitor cells include developing novel biomarkers and surrogate end-points for clinical trials of chemopreventative medicines [21]. We have previously shown that GMM-LE clustering improves identification of luminal progenitor cells in single-cell genomic data compared to Seurat-Louvain clustering [16]. In this study we identify the breast epithelial subtypes basal, luminal progenitor, and mature luminal cells in a different recently published data-set [13], again showing an improvement over Seurat-Louvain clustering (Table 2). The breast epithelial subtypes can be well separated in the UMAP projection of the data-points in the Laplacian eigenspace (UMAP-LE), and they can be separated well by GMM-LE clustering, as illustrated by Fig.5. To carry out GMM-LE clustering here, we again increased the number of clusters \(K\) until the breast epithelial subtypes were well separated, choosing \(K=8\).
## 4 Discussion
In this paper we have presented a novel first-principles method of modelling single-cell genomic count data, by modelling these data as a network adjacency matrix. This view of single-cell genomic data opens up a wealth of new modelling approaches for this type of data based on stochastic networks theory. It also gives theoretical validation to well-established and highly successful methods for clustering these data based on community detection, such as Louvain clustering [5]. In particular, this view shows the promise of studying the eigenspace of the graph-Laplacian (the 'Laplacian eigenspace') to identify and represent known and novel cell-types. Based on this, we have proposed a new method of visualising single-cell genomic data based on the UMAP projection of the representation of the data-points in the Laplacian eigenspace (UMAP-LE), and we have proposed GMM-LE clustering, or 'Gaussian mixture modelling in the Laplacian eigenspace'. We show that UMAP-LE visualisations correspond to existing biological knowledge, and that GMM-LE clustering outperforms the industry standard clustering method for these data, i.e., the implementation of Louvain clustering in the Seurat software.
There are several outstanding questions still to address in this research. In particular, choosing the number of clusters for a particular data-set is a challenging problem, which we have addressed heuristically for each data-set here. Developing an automatic method for choosing the number of clusters for GMM-LE clustering is a priority. A straightforward approach to this problem is to set the number of clusters as one greater than the dimensionality of the Laplacian eigenspace that is used [22]. We note that the latent dimensionality of the eigenspace of the graph-Laplacian can be estimated using existing theory based on scree plots [23]. Recent findings from stochastic network modelling provide refined methodology to choose the optimum number of clusters to estimate in the Laplacian eigenspace via the Bayesian Information Criterion (BIC) [9]. Building on existing theory for bootstrapping network data with latent structure [24] will enable estimates of uncertainty to be calculated for estimators of the number of latent clusters, as well as quantifying uncertainty on the estimated Laplacian eigenspace. We also note that there are alternative methods for calculating the graph-Laplacian, such as the random-walk Laplacian [25], which will be interesting to investigate in relation to single-cell genomic data. Finally, we have previously proposed a novel method of estimating a latent space in which different cell-types or clusters can be separated well, based on the Mahalanobis distance [16]. It will be interesting to investigate how this relates to the estimate of the Laplacian eigenspace, and refines it for identifying different cell-types from single-cell genomic data.
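As a concrete illustration of the BIC-based selection mentioned above, a minimal sketch using a standard GMM implementation is given below; the refined network-theoretic criteria of [9] and the scree-plot theory of [23] are more involved than this:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_k_by_bic(Y, k_range=range(2, 31), seed=0):
    """Return the number of mixture components minimising the BIC for
    a GMM fitted in the Laplacian eigenspace, plus the BIC curve."""
    bics = [GaussianMixture(n_components=k, n_init=5, random_state=seed)
            .fit(Y).bic(Y) for k in k_range]
    return list(k_range)[int(np.argmin(bics))], bics

# K_hat, bic_curve = choose_k_by_bic(Y)
```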
Table 2: Comparison of GMM-LE clustering and Seurat-Louvain clustering in the breast cancer initiation data-set. Ratios of the mean log expression of marker genes for the cell-type of interest to the other cell-types are shown.

| Method | Basal | Luminal progenitor | Luminal mature |
| --- | --- | --- | --- |
| GMM-LE | 2.017 | 1.994 | 1.561 |
| Seurat-Louvain | 1.993 | 1.978 | 1.560 |
In summary, our proposed methodological advances in this work provide mathematical technologies that have the potential for large impact in biomedical science. This is achieved by providing theoretical justification for and improvement on existing and widely adopted methodology for identifying important cell-types in single-cell genomic data.
|
2308.07692 | Long-Term X-Ray/UV Variability in ULXs | The focus of NASA's Swift telescope has been transients and
target-of-opportunity observing, resulting in many observations of
ultraluminous X-ray sources (ULXs) over the last ~20 years. For the vast
majority of these observations, simultaneous data has been obtained using both
the X-ray telescope (XRT) and the ultraviolet and optical telescope (UVOT),
providing a unique opportunity to study coupled variability between these
bands. Using a sample of ~40 ULXs with numerous repeat observations, we extract
stacked images to characterise the spatial extent of the UV-Optical emission
and extract long-term light curves to search for first-order linear
correlations between the UV and X-ray emission. We find that a small subset may
show weakly correlated joint variability, while other sources appear to display
non-linear relationships between the bands. We discuss these observations in
the context of several theoretical models: precession, irradiation of the outer
accretion disc and irradiation of the companion star. We conclude that more
complicated analysis or higher quality data may be required to accurately
constrain the nature of the joint X-ray and UV/optical emission in these
sources. | Norman Khan, Matthew. J. Middleton | 2023-08-15T10:44:57Z | http://arxiv.org/abs/2308.07692v1 | # Long-Term X-Ray/UV Variability in ULXs
###### Abstract
The focus of NASA's _Swift_ telescope has been transients and target-of-opportunity observing, resulting in many observations of ultraluminous X-ray sources (ULXs) over the last \(\sim 20\) years. For the vast majority of these observations, simultaneous data has been obtained using both the X-ray telescope (XRT) and the ultraviolet and optical telescope (UVOT), providing a unique opportunity to study coupled variability between these bands. Using a sample of \(\sim\)40 ULXs with numerous repeat observations, we extract stacked images to characterise the spatial extent of the UV-Optical emission and extract long-term light curves to search for first-order linear correlations between the UV and X-ray emission. We find that a small subset may show weakly correlated joint variability, while other sources appear to display non-linear relationships between the bands. We discuss these observations in the context of several theoretical models: precession, irradiation of the outer accretion disc and irradiation of the companion star. We conclude that more complicated analysis or higher quality data may be required to accurately constrain the nature of the joint X-ray and UV/optical emission in these sources.
keywords: stars: black holes - stars: neutron
## 1 Introduction
Ultraluminous X-ray sources (ULXs) are defined as off-nuclear, point-like sources with inferred isotropic luminosities exceeding \(L>10^{39}\)erg s\({}^{-1}\)(see the reviews of Roberts, 2007; Kaaret et al., 2017; King et al., 2023). Pulsations unambiguously indicating the presence of neutron star (NS) accretors have been located in several ULXs (Bachetti et al., 2014; Furst et al., 2016; Israel et al., 2017; Tsygankov et al., 2017; Doroshenko et al., 2018; Carpano et al., 2018; Sathyaprakash et al., 2019; Rodriguez Castillo et al., 2020) whilst the presence of a cyclotron resonant scattering feature in \(\rm{M51}\) ULX-8 (Brightman et al., 2018) and a possible detection in NGC300 ULX1 (Walton et al., 2018) also suggest the presence of neutron stars. Given the luminosities of these sources and the known accretor mass, it is inescapable that accretion in such systems is super-critical, either within the flow or at the surface of the star itself.
Should the Eddington limit be reached in the disc, then standard theory (i.e. Shakura & Sunyaev, 1973; Poutanen et al., 2007) would predict powerful, mass-loaded winds which have been located in several ULXs to-date, both as imprints in the CCD spectra (Middleton et al., 2014, 2015; Walton et al., 2016) and using higher energy-resolution detectors (Pinto et al., 2016, 2017; Kosec et al., 2018, 2021). A corollary of such accretion flows is that, providing the wind is optically thick, there should be some degree of collimation and the assumption of isotropy breaks down (King et al., 2001). The resulting spectrum (and timing properties) of a given ULX then depends on both the accretion rate and inclination of the source (Poutanen et al., 2007; Middleton et al., 2015).
Should the accretion rate be high, we would naturally expect high inclination ULXs to be dim in the X-rays and instead peak at lower frequencies (Poutanen et al., 2007). With emission from the wind photosphere being \(\sim\)Eddington, these may be prime candidates for detection by next-generation deep surveys (e.g. LSST: Middleton et al. in prep). A prime example of such an edge-on ULX is the Galactic source SS433 (\(i=78^{\circ}\)), which, despite having an X-ray luminosity of only \(\sim 10^{36}\) erg s\({}^{-1}\) appears to share many of the same characteristics of ULXs and is inferred to have considerably brighter, face-on X-ray luminosities (Cherepashchuk, 2002; Fabrika, 2004; Khabibullin & Sazonov, 2016; Liu et al., 2015; Middleton et al., 2021) and emit at \(\sim 10^{40}\) erg s\({}^{-1}\) in the UV (Dolan et al., 1997).
A change in the intrinsic X-ray/UV emission from a given ULX may be driven by either a change in mass accretion rate at larger radius or a change in our line-of-sight inclination to the inner-regions. The former may be the result of mass loss at large radii (Middleton et al., 2022), whilst the latter may be a result of disc warping due to irradiation (Pringle, 1996; Pasham & Strohmayer, 2013) or precession of the super-critical disc and wind (Pasham & Strohmayer, 2013; Middleton et al., 2018, 2019).
Whilst the intrinsic emission from high inclination ULXs may peak at low frequencies, in the optical/UV band there will also be emission from the secondary star which can be amplified if effectively irradiated, as well as emission from the outer disc (again, if effectively irradiated). Whilst there has been a great deal of study of ULXs in the optical - specifically to elucidate the nature of the companion star (Heida et al., 2014) - there has been limited exploration in the UV bands. However, it has been observed that in one ULX, the UV emission appears extremely bright (\(>10^{39}\) erg/s: Kaaret et al., 2017) and in the case of the PULX, NGC 7793 P13, an optical/UV super-orbital period is seen, out of phase with the X-ray super-orbital period (Furst et al., 2016). Although not well explored, the correlation
between X-ray and low frequency emission should provide invaluable insight into the geometry and nature of the accretion flow. In this paper, we explore the general shapes of the X-ray/UV correlations we might expect for various scenarios and search for these within the Swift lightcurves of several prominent ULXs.
## 2 Predictions
In this section, we consider the theoretical impact on the coupled X-ray/UV properties of ULXs when ascribing the low frequency emission to various locations in the system. To simplify this picture we make the explicit assumption that ULXs which contain neutron stars all have dipole fields weak enough (or accretion rates high enough) that the classical super-critical picture of disc accretion (Poutanen et al., 2007) holds (i.e. that the magnetospheric radius, \(R_{\rm M}\), is much smaller than the spherisation radius, \(R_{\rm sph}\)). Currently, measurements only exist for a handful of pulsating ULXs (see Walton et al., 2018; Brightman et al., 2018; Middleton et al., 2019; Furst et al., 2023) but suggest field strengths of order \(\sim 10^{11}-10^{13}\) G which according to the canonical picture of PULXs should tend to lead to the condition that \(R_{\rm M}<R_{\rm sph}\)(King & Lasota, 2020).
### Emission from the outer wind photosphere
Following the super-critical model of Poutanen et al. 2007, we assume that the wind photosphere extends out to some radius \(r_{\rm out}\) and reprocesses the flux from the disc below the wind where the inflow starts to become locally super-critical, and some fraction of the radiation produced interior to \(r_{\rm out}\) (the latter with a luminosity greater than or equal to the Eddington luminosity). A full understanding of the shape and intensity of the emergent SED requires full radiative magnetohydrodynamic (RMHD) simulations and extensive post-processing, which has not yet been performed in the case of ULXs (although see the work by Narayan et al. 2017 and Dai et al. 2018). In the absence of numerical studies, simple qualitative predictions are still possible. For a fixed inclination, increasing the mass accretion rate pushes \(r_{\rm out}\) to a larger radius due to increased mass loading of the wind and the larger radial location of \(r_{\rm sph}\)(Poutanen et al., 2007). Should the opening angle of the wind be connected to the accretion rate at large radius (as it would seem to be cf Jiang et al. 2014, 2019), then an increase in mass accretion rate will increasingly collimate the X-ray emission from within. What follows depends on the orientation of the observer. Should the observer be able to see into the wind cone, then the X-ray luminosity at all energies will increase, and the characteristic temperature associated with the spherization radius, \(T_{\rm sph}\) will decrease. The expansion of the wind photosphere to larger radius will lead to a reduction in its temperature according to the formula of Poutanen et al. 2007 (eq 1):
\[T_{\rm sph}\approx 0.8f_{\rm col}\left(\frac{\zeta\beta}{\epsilon_{w}}\right) ^{1/2}m^{-1/4}\dot{m_{0}}^{-3/4}\ {\rm keV} \tag{1}\]
where \(f_{\rm col}\) is a colour temperature correction factor, \(\zeta\), \(\beta\) and \(\epsilon_{w}\) are constants relating to the wind-cone opening angle, outflow velocity and fractional energy content in powering the wind, \(m\) is the accretor mass in \(\rm M_{\odot}\), and \(\dot{m_{0}}\) is the Eddington-scaled accretion rate.
By assuming the wind photosphere radiates as a blackbody at the characteristic temperature \(T_{\rm sph}\), that \(\zeta\beta/\epsilon_{w}\approx 1\) (a simplification that can be made as all of these parameters can be considered of order unity), \(f_{\rm col}\approx 2\) (see Shimura & Takahara, 1995), and accretor masses of \(10\ \rm M_{\odot}\) and \(1.4\ \rm M_{\odot}\), the UV emission in the highest energy band (UVW2) (taking the form of a blackbody, peaking around 3 kT) will increase in brightness until accretion rates in excess of \(\dot{m_{0}}\approx 5000\)\(\times\) Eddington are reached for a \(10\ \rm M_{\odot}\) black hole and \(\dot{m_{0}}\approx 7000\)\(\times\) Eddington for a \(1.4\ \rm M_{\odot}\) neutron star (see figure 2). Such rates are safely above those inferred for known ULXs (Vasilopoulos et al., 2020). Below these limits, and for an orientation permitting a view into the wind-cone, one would expect a positive correlation between the X-rays and UV. It is conceivable that an observer could be oriented such that the closing of the wind cone inhibits their ability
Figure 1: Black body spectrum (bbody in xspec) set at \(kT=0.005\) keV. Coloured are the effective energy ranges of the UVOT bands used for the flux calculation shown in figure 2.
Figure 2: Flux in each of the UVOT bands as a function of the mass accretion rate \(\dot{m_{0}}\). The peak flux is reached at \(\dot{m_{0}}\sim 7000\) for the highest energy band (UVW2), much higher than is expected for ULXs; this means our UV view of high inclination ULXs, where the observed emission is from the wind photosphere, exists entirely within the region of UV emission on the left-hand side of the plot where the flux in a single band increases monotonically with \(\dot{m_{0}}\). The plot was made using the xspec model bbody with a normalized luminosity and a temperature set to \(T_{\rm sph}\) (eq 1), \(m=1.4\ M_{\odot}\), \(\beta=\zeta=1\), \(f_{\rm col}=1.7\) and \(\epsilon_{w}=0.95\).
to see the collimated emission. In this case, there would be a change in the ratio between hard and soft X-ray emission (the soft being relatively more visible) accompanying a drop in \(T_{\rm sph}\). This would result in an anti-correlation between \(T_{\rm sph}\) and the UV brightness and a more complex correlation with spectral hardness.
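The qualitative behaviour of figure 2 can be reproduced with a short numerical sketch: compute \(T_{\rm sph}\) from equation (1) using the parameter values quoted in the figure 2 caption, and integrate a luminosity-normalised blackbody over a UVOT band (the band edges assumed below are approximate; this is an illustration, not the xspec calculation itself):

```python
import numpy as np
from scipy.integrate import quad

# cgs constants
H, C, KB, SIGMA = 6.62607e-27, 2.99792e10, 1.38065e-16, 5.6704e-5
KEV_TO_K = 1.16045e7  # Kelvin per keV

def t_sph_kev(mdot0, m=1.4, f_col=1.7, zeta=1.0, beta=1.0, eps_w=0.95):
    """Wind photosphere temperature of equation (1), in keV."""
    return (0.8 * f_col * (zeta * beta / eps_w) ** 0.5
            * m ** -0.25 * mdot0 ** -0.75)

def planck_nu(nu, T):
    """Blackbody specific intensity B_nu in cgs, with T in Kelvin."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def band_fraction(mdot0, lam_lo_A, lam_hi_A):
    """Fraction of a luminosity-normalised blackbody emitted within a
    wavelength band given in Angstroms (analogous to xspec's bbody
    with fixed normalisation)."""
    T = t_sph_kev(mdot0) * KEV_TO_K
    nu_lo, nu_hi = C / (lam_hi_A * 1e-8), C / (lam_lo_A * 1e-8)
    val, _ = quad(planck_nu, nu_lo, nu_hi, args=(T,))
    return np.pi * val / (SIGMA * T**4)

# UVW2 (central wavelength 1928 A; ~1600-2600 A assumed band edges):
mdots = np.logspace(1, 5, 300)
fluxes = [band_fraction(md, 1600.0, 2600.0) for md in mdots]
print("UVW2 flux peaks near mdot0 ~ %.0f" % mdots[int(np.argmax(fluxes))])
```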
For a fixed accretion rate, a change in the inclination of the disc/wind, driven by precession (e.g. Middleton et al., 2018, 2019) would result in changes to the X-ray spectral colours similar to the case of the wind-cone closing (see Middleton et al., 2015). In short, when the cone precesses to higher inclinations to our line-of-sight, the X-ray emission should diminish, and the low frequency emission should become brighter, leading to an anti-correlation between the X-ray and UV emission (assuming the accretion rate is high enough for the wind photosphere to emit substantially at such low energies).
### Irradiated outer disc
It has been suggested that the outer accretion disc could be irradiated by X-ray emission from the inner regions after scattering by the wind (Sutton et al., 2013). As long as this irradiating SED has sufficient intensity above 2 keV, down-scattering of these photons can produce a UV-shoulder (Gierlinski et al., 2008). Exploring irradiation requires RMHD simulations and radiative transfer calculations to follow the photons from the inner regions to the outer disc. However, such calculations have yet to be performed, and so we instead base our reasoning on a simplified picture (see figure 3).
Should the accretor be a black hole or a low dipole field neutron star (\(\lesssim 10^{9}G\)), then the spectrum from the innermost regions is insensitive to accretion rate (Poutanen et al., 2007) due to mass loss at larger radii. If the ULX contains a high dipole field-strength neutron star (up to \(\sim\)10\({}^{13}\)G), the emergent spectrum above 2 keV is predicted to be dominated by emission from the accretion column, with photons scattering to escape the magnetosphere (Mushtukov et al., 2017). Assuming the magnetospheric radius lies within \(r_{\rm sph}\), then the accretion rate through the magnetosphere is mostly insensitive to the accretion rate at larger radii (see e.g. King & Lasota, 2020 although see also Chashkina et al., 2019). Assuming the intrinsic spectrum from the accretion column above 2 keV (see Brightman et al., 2016; Walton et al., 2018) does not change, we need only consider the changes in the scattering medium between the outer disc and inner regions. As the accretion rate at large radius increases, the wind cone closes and the optical depth of the wind increases (see figure 3). There are then more scatterings within the wind cone, which reduces the energy of photons created in the inner regions; any escaping photons are therefore likely to be at lower energy and less likely to thermalise in the outer disc (Gierlinski et al., 2008). The converse is true for a drop in accretion rate.
For a fixed observer inclination but a varying accretion rate, the presence (or lack of) a correlation between the X-rays and UV resulting from an irradiated outer disc once again depends on whether an observer can see into the wind cone. Should an observer be able to view the innermost regions directly, then an increase in accretion rate will lead to an increase in X-ray flux (and decrease in T\({}_{\rm sph}\)) and a drop in UV flux, as fewer hard X-rays impinge on the outer disc. Should an observer instead view the inner flow at higher inclinations, then a more complex change in spectral hardness will result (as described in Middleton et al., 2015). For a fixed accretion rate, a change in inclination of the inner disc and wind will not change the UV emission (even if the wind were to tilt away from us, it would still irradiate the far side of the disc) and the UV and X-rays will be uncorrelated.
### Irradiated companion star
A third option, distinct from the above two - and proposed to explain the anti-phase optical super-orbital period seen in NGC 7793 P13 (Furst et al., 2021) - is that the X-ray cone sweeps over the companion star and hard X-rays thermalise in the outer layers of the star leading to enhanced low frequency emission (Motch et al., 2014). To explore this scenario, we simulate a cone of X-ray emission irradiating a star at different orbital phases and different orientations relative to each other and the observer. Figure 4 shows a schematic of how the irradiated area (shown in light blue) from a cone of X-ray emission may vary as it sweeps over the star. The irradiated projected area seen by an observer is subject to many variables, including the size and shape of the orbit, star and cone, as well as the position of the observer. For further elaboration on the specific implementations of the irradiated companion model, please refer to Appendix A.
Figure 5 shows how the irradiated fraction, defined as the projected area seen by the observer divided by the maximum possible projected area \(A_{\rm proj}/\pi R_{\star}^{2}\), varies as a function of the (circular) orbital phase.
Table 1: Predictions of the relative X-ray to UV emission originating from the wind photosphere and irradiation of the outer disc. In the original table, blue and red bars provide a visual representation of the relative observed intensity of the emitted radiation in the X-ray and UV bands; these estimates are approximate and quantized to four discrete lengths to serve as a qualitative indicator (the bar graphics are not reproducible in text and are omitted here).

| Prediction | \(\dot{m}_{0}\) | Inclination |
| --- | --- | --- |
| Wind Photosphere | Low | Low (face-on) |
| Wind Photosphere | Low | Intermediate |
| Wind Photosphere | Low | High (Edge-on) |
| Wind Photosphere | High | Low (face-on) |
| Wind Photosphere | High | Intermediate |
| Wind Photosphere | High | High (Edge-on) |
| Irradiated outer disc | Low | Intermediate |
| Irradiated outer disc | Low | High (Edge-on) |
| Irradiated outer disc | High | Low (face-on) |
| Irradiated outer disc | High | Intermediate |
| Irradiated outer disc | High | High (Edge-on) |
Figure 3: Schematic for the geometry of an irradiated outer disc. An increase in the mass accretion rate results in the spherization radius moving outwards and decreasing in temperature. High energy photons produced in the inner parts of the accretion flow are scattered by the large scale-height wind cone.
The transit is simulated for a cone sweeping directly through the centre of the star and is shown for three different observer inclinations. It is worth noting that the inclination parameter alone is not sufficient to fully describe the configuration of the system and the resulting irradiated area, because the entire system can be rotated while the inclination remains unchanged, resulting in different projected areas for the same inclination value. Nevertheless, for illustrative purposes, figure 5 shows how the profile of a transit may be significantly altered by the observer's position.
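A minimal Monte-Carlo sketch of this geometry is given below. It is one possible implementation rather than the model of Appendix A: the cone axis is held fixed in the orbital plane, the star moves on a circular orbit of radius \(a=1\), and the projected irradiated area is estimated by sampling the stellar surface:

```python
import numpy as np

def irradiated_fraction(phase, half_angle_deg, r_over_a, obs_dir,
                        n_pts=50_000, seed=0):
    """Estimate A_proj / (pi R*^2): the irradiated area of the star as
    projected towards the observer. The X-ray source sits at the
    origin with its cone axis fixed along +x in the orbital plane."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_pts, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # surface normals

    phi = 2.0 * np.pi * phase
    centre = np.array([np.cos(phi), np.sin(phi), 0.0])  # a = 1
    pts = centre + r_over_a * u  # points on the stellar surface

    # A surface element counts if it (i) lies inside the emission
    # cone, (ii) faces the X-ray source, and (iii) faces the observer.
    cos_half = np.cos(np.radians(half_angle_deg))
    in_cone = pts[:, 0] / np.linalg.norm(pts, axis=1) > cos_half
    faces_source = np.einsum("ij,ij->i", u, -pts) > 0
    mu_obs = u @ np.asarray(obs_dir, dtype=float)
    lit = in_cone & faces_source & (mu_obs > 0)

    # Each element has area 4*pi*R^2/N and projects with weight
    # mu_obs; dividing by pi*R^2 gives the plotted fraction.
    return 4.0 * mu_obs[lit].sum() / n_pts

# Transit profile for a cone of opening angle 20 deg (half-angle 10
# deg), R*/a = 0.5, and a face-on observer along the orbital axis:
phases = np.linspace(-0.15, 0.15, 61)
profile = [irradiated_fraction(ph, 10.0, 0.5, [0.0, 0.0, 1.0])
           for ph in phases]
```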
If we were to assume that the axis of the wind-cone is perpendicular to the orbital plane, only wind-cone half-opening angles (\(\theta/2\)) which satisfy \(\theta/2>\arccos(R_{\star}/a)\) would be able to irradiate the star (where \(R_{\star}\) is the radius of the companion star and \(a\) is the average semi-major axis of the orbit). For example, a half-opening angle of \(10^{\circ}\) would require \(R_{\star}/a\approx 1\), which poses issues for stable accretion. Allowing for larger opening angles will of course lower the required value of the ratio \(R_{\star}/a\), although, as super-critical accretion demands a half-opening angle of at least \(45^{\circ}\) (where the scale-height of the disc is approximately unity), this still requires \(R_{\star}/a>0.7\). Thus, large amounts of irradiation likely require an offset between the axis of the wind cone and that of the orbit (which may naturally lead to precession of the wind cone itself Middleton et al., 2018, 2019b).
Estimates for the eccentricity of ULX orbits have been obtained for a few pulsating sources, where the pulse arrival time allows for the modelling of the orbital parameters of the system. Based on modelling of the X-ray pulsations in NGC 7793 P13, Furst et al. (2018) find an eccentricity of \(e\leq 0.15\), whilst Bachetti et al. (2014) found that M82 X-2 had a near circular orbit of \(e=0.003\). In the absence of any detailed orbital information, we therefore assume a simple circular orbit in our model.
We have investigated the potential impact of an irradiated companion, illuminated by a cone of X-ray emission, on the observed UV/optical emission in ULXs. Our model is subject to limitations and uncertainties, and it is possible that specific conditions may be required for its geometry to be a reality in these sources. However, despite these challenges, it may be possible in the future to determine the ephemeris based on the observed profiles seen in the UV/optical light curves of these sources. High-resolution spectroscopy can also be used to determine whether significant heating of the star's outer layers is present from specific spectral lines. In the next section, we will describe our approach to obtaining observations and reducing data to create long-term light curves for a sample of ULXs.
## 3 Observations and data reduction
The Neil Gehrels Swift Observatory (_Swift_) (Gehrels et al., 2004) is capable of observing simultaneously in both the UV and X-ray bands via its UVOT (Roming et al., 2005) and XRT (Burrows et al., 2005) cameras, and so provides a unique opportunity to explore whether observations of ULXs are consistent with any of the predictions discussed above. We begin by creating our ULX sample by crossmatching several ULX catalogues (Earnshaw et al., 2019; Kovlakas et al., 2020; Bernadich et al., 2022; Walton et al., 2022) with the _Swift_ Master Catalogue (SWIFTMASTR), accessible via HEASARC1. For the sake of comparison to nearby sources where data quality is high, we also include three extensively studied Galactic sources which reach super-Eddington rates of accretion: Swift J0243.6+6124, SS433 and V404 Cygni. Swift J0243.6+6124 is known to contain a magnetised neutron star and to appear as a ULX (van den Eijnden et al., 2020), SS433 is widely considered to be an edge-on ULX (Fabrika, 2004; Middleton et al., 2021) and V404 Cygni is an LMXB which reached around or just in excess of its Eddington luminosity during its 2015 outburst (Motta et al., 2017).
Footnote 1: [https://heasarc.gsfc.nasa.gov](https://heasarc.gsfc.nasa.gov)
We locate all observations where the source lies within the nominal (23.6') XRT field-of-view. Due to the differences between the XRT and UVOT fields-of-view, there is a mismatch between the number of observations in both bands. We set the requirement that there be at least 20 observations in both bands for a source to appear in our sample. For each source, we manually cross-matched with SIMBAD to obtain distances and positions. The final sample with relevant information is shown in Table 2.
Figure 4: Schematic of irradiation of the secondary star by a tight cone of X-ray emission as the star travels on an elliptical orbit.
Figure 5: How the irradiated fraction, defined as \(A_{\star}/\pi R_{\star}^{2}\), varies as a function of orbital phase transiting through a cone of emission with opening angle \(\theta=20^{\circ}\). The transit is shown for three different observer inclinations, \(i=0\), 22.5 and 45 degrees. A ratio of \(R_{\star}/a=0.5\) is assumed.
### XRT Data Reduction
XRT data was extracted using the standard _Swift_/XRT processing pipeline (Evans et al., 2009), using the SIMBAD coordinates of the source, and the 'simple' centroid method with a positional error of 1" (see Evans et al., 2009). We extract light curves in three bands, full (\(0.3-10.0\) keV), soft (\(0.3-1.5\) keV) and hard (\(1.5-10.0\) keV), with the time binning set at a single point per observation. Using the full band data, we set the minimum source significance - defined as the number of counts in the source region divided by the square root of the counts in the source and background (\(\rm C_{src}/\sqrt{\rm C_{src}+C_{bkg}}\)) - to the default value of 3; observations where the source is not detected at or above this threshold yield upper limits. The pipeline additionally calculates the hardness ratio, defined as the ratio of hard to soft count rates \(HR=\rm C_{hard}/C_{soft}\). 1\(\sigma\) errors are provided for each of the above measurements. The pipeline also includes GOOD and BAD values, which indicate whether the pipeline was able to obtain a centroid in a given snapshot (such that BAD values are potentially unreliable).
### UVOT Data Reduction
UVOT data were processed locally using HEASARC v6.29 and the November 8, 2021 CALDB. We stacked all UVOT and XRT observations, combining them into a single image, and manually inspected the stacked images to identify any clear counterparts.
For each source, a circular extraction region with a radius of 5" was centred on the Swift XRT position, while background regions were manually positioned in a contaminant-free location with a size
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Source Name & RA & DEC & \(\lambda\) & Position reference & \(D\) & \(D\) meth. & \(D\) reference & UV \\ & ”hms” & ”dm:s” & & & Mpc & & & Emis. \\ \hline V404 Cygni & 20 24 03.8254 & +33 52 01.962 & O & Gaia Collaboration (2018) & 0.0023 & parallax & Miller-Jones et al. (2009) & PL \\ Swift J0243.6+6124 & 02 43 40.4252 & +61 26 03.757 & O & Gaia Collaboration (2020) & 0.0055 & parallax & Gaia Collaboration (2020) & PL \\ SS433 & 19 11 49.5647 & +04 58 57.827 & O & Gaia Collaboration (2020) & 0.0055 & Blundell \& Bowler (2004) & PL \\ SMC X-3 & 00 52 05.6251 & -22 60.4228 & O & Gaia Collaboration (2020) & 0.0600 & cepheid & Karachentsev et al. (2017) & PL \\ IC10 X-1 & 00 20 29.095 & +59 16 51.9 & X & Bauer \& Brandt (2004) & 0.7943 & redshift & Lianou et al. (2019) & N \\ M31 ULX-1 & 00 42 53.15 & +41 14 22.9 & X & Kaur et al. (2012) & 0.8200 & T-RDB & Karachentsev et al. (2017) & N \\ M31 ULX-1 & 01 33 50.8965 & +30 39 36.630 & O & Gaia Collaboration (2020) & 0.9300 & T-RDB & Karachentsev et al. (2017) & PL \\ NGC300 ULX-1 & 05 05 04.86 & -37 41 43.7 & O & Barbon et al. (2008) & 0.2230 & redshift & Lianou et al. (2019) & PL \\ NGC55 ULX & 00 15 28.89 & -39 13 18.8 & X & Lin et al. (2012) & 2.1100 & T-RDB & Karachentsev et al. (2017) & Edg \\ IC342 ULX-1 & 03 45 55.612 & +68 04 55.29 & O & Feng \& Kauret (2008) & 3.4356 & redshift & Lianou et al. (2019) & N \\ IC342 ULX-2 & 03 46 15.61 & +68 11 12.8 & X & Heida et al. (2014) & 3.4356 & redshift & Lianou et al. (2019) & N \\ NGC4945 XMM-1 & 13 05 28.99 & +49 27 34.1 & X & Swartz et al. (2004) & 3.4674 & redshift & Lianou et al. (2019) & E \\ Holmberg IX-1 & 08 19 28.99 & +70 42 19.4 & X & Heida et al. (2014) & 3.4674 & redshift & Lianou et al. (2019) & E \\ M81 X-6 & 09 55 32.95 & +69 03 33.6 & X & Heida et al. (2014) & 3.5975 & redshift & Lianou et al. (2019) & E \\ M82 X-2 & 09 55 51.040 & +69 40 45.49 & X & Kauret et al. (2006) & 3.6141 & redshift & Lianou et al. (2019) & E \\ NGC253 X-2 & 00 47 32.97 & -25 17 50.2 & X & Liu \& Bregman (2005) & 3.6983 & redshift & Lianou et al. (2019) & E \\ NGC53 X-9 & 00 47 22.59 & -25 20 50.9 & X & Heida et al. (2014) & 3.6983 & redshift & Lianou et al. (2019) & N \\ NGC247 ULX-1 & 04 07 40.40 & -20 47 45.7 & X & Liu \& Bregman (2005) & 3.7200 & T-RDB & Karachentsev et al. (2017) & N \\ NGC7793 P13 & 23 57 50.90 & -32 37 26.6 & X & Pannuti et al. (2011) & 3.7325 & redshift & Lianou et al. (2019) & PL \\ Holmberg IX X-1 & 09 57 53.290 & +69 03 48.20 & O & Abazajian et al. (2009) & 3.8500 & T-RDB & Karachentsev et al. (2017) & E \\ NGC1513 X-1 & 03 18 20.00 & -66 29 10.9 & X & Heida et al. (2014) & 4.2500 & Tully et al. (2016) & N \\ NGC1313 X-2 & 03 18 22.00 & -66 29 10.3 & X & Liu \& Bregman (2005) & 4.2500 & Tully et al. (2016) & E \\ NGC5204 ULX-1 & 13 29 38.62 & +58 25 05.6 & X & Heida et al. (2014) & 4.5900 & T-RDB & Karachentsev et al. (2017) & E \\ UGC465 ULX & 11 28 03.000 & +78 59 53.41 & O & Vinokurov et al. (2020) & 4.6300 & T-RDB & Karachentsev et al. (2017) & E \\ NGC4395 ULX-1 & 12 16 01.53 & +33 31 30.6 & K & Heida et al. (2014) & 4.7600 & Tully et al. (2016) & E \\ MS1 ULX-1 & 13 07 01.53 & -29 52 07.1 & X & Long et al. (2014) & 4.8978 & redshift & Lianou et al. (2019) & N \\ M83 ULX-2 & 13 37 20.12 & -29 53 47.7 & X & Liu \& Bregman (2005) & 4.8978 & redshift & Lianou et al. (2019) & E \\ NGC408 ULX-1 & 14 03 19.63 & -41 22 58.7 & X & Heida et al. (2014) & 5.3211 & redshift & Lianou et al. 
(2019) & N \\ NGC6946 ULX-1 & 20 35 00.11 & +60 09 08.5 & X & Lin et al. (2012) & 6.7298 & redshift & Lianou et al. (2019) & E \\ NGC6946 ULX-3 & 20 35 00.74 & +60 11 30.6 & X & Swartz et al. (2004) & 6.7298 & redshift & Lianou et al. (2019) & PL \\ M101 ULX-1 & 14 03 23.88 & +54 21 03.0 & X & Heida et al. (2014) & 7.1121 & redshift & Lianou et al. (2019) & N \\ NGC4559 ULX-1 & 12 35 51.71 & +27 56 04.1 & X & Heida et al. (2014) & 7.1450 & redshift & Lianou et al. (2019) & E \\ M51 ULX-7 & 13 30 01.01 & +47 13 43.9 & X & Heida et al. (2014) & 7.6000 & redshift & Cappellari et al. (2011) & E \\ NGC5585 ULX & 14 19 39.39 & +56 41 37.8 & X & Heida et al. (2014) & 7.8300 & Tully et al. (2016) & PL \\ NGC925 ULX-1 & 02 27 27.53 & +33 34 24.9 & X & Heida et al. (2014) & 9.2045 & redshift & Lianou et al. (2019) & E \\ NGC925 ULX-2 & 02 27 21.52 & +33 35 40.
of 15" as recommended by the UVOT data analysis manual2. The distances between the centres of the UVOT and XRT (SIMBAD) source regions as well as the position of the UVOT background regions are provided in the supplementary material.
Footnote 2: [https://swift.gsfc.nasa.gov/analysis/UVOT_swguide_v2_2.pdf](https://swift.gsfc.nasa.gov/analysis/UVOT_swguide_v2_2.pdf)
Level 2 UVOT images were processed locally using uvotimsum to combine all snapshot extensions, then uvotsource with a signal-to-noise threshold of 3 was used to obtain photometric magnitudes in a given observation. We then determined whether the source was detected using the NSIGMA column provided as output from uvotsource. All observations of a given source were combined to produce a long-term light curve in all of the UVOT bands: V (5469 A), B (4392 A), U (3465 A), UVW1 (2600 A), UVM2 (2246 A), UVW2 (1928 A).
## 4 Analysis and Results
### UV Counterparts
The 'UV Emis.' column in Table 2 indicates the spatial nature of the UV emission of the source based upon the stacked images. Sources with point-like UV counterparts and emission regions comparable to _Swift_'s PSF are labelled as 'PL', sources in or near regions of extended UV emission as 'E', and those that do not display strong UV emission in the images are labelled as 'N'; three sources in edge-on galaxies, where point-sources in the UV/optical are difficult to disentangle from the galactic emission, are labelled as 'Edg'.
### Testing for linear X-ray/UV correlations
A cut of \(\pm 5\sigma\) around the mean count rate was applied to both the UVOT and XRT data prior to simulating. This cut resulted in only 5 observations being excluded due to UVOT outliers, which arose where the source was close to the edge of the detector or where the background region was located within the fits image but outside the detector plate. Around \(\sim 30\) observations were excluded due to XRT outliers; some of these are instrumental, however others may be real and due to source flaring. We find that around 0-3 observations are excluded for each light curve, which may be composed of several hundred data points; the removal of these outliers results in marginally weaker detected correlations, meaning that our results constitute a slightly more conservative estimate. To search for linear correlations in the clean X-ray and UVOT light curves, we employ the following method: for each observation data point we sampled
Figure 6: Mean slope \(\hat{m}\) (x-axis) plotted against the inverse coefficient of variation of the Pearson correlation coefficient \(\hat{\gamma}_{r}\) (y-axis) for all simulations.
the \(1\sigma\) error on the count rate in both the XRT and UVOT bands assuming Gaussian distributions; if the observation contained an upper limit, the data point was sampled assuming a uniform distribution between 0 and this value. This sampling allows us to obtain realisations of the light curve with the same time-sampling as the original. We then performed a least-squares fit with a straight line of the form \(y=mx+c\) to the re-sampled data points and calculated the Pearson correlation coefficient \(r\) given by:
\[r=\frac{\sum_{i=1}^{n}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_{ i}-\bar{x})^{2}}\sqrt{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}}} \tag{2}\]
where \(x_{i}\) and \(y_{i}\) are the individual values in the sample, while \(\bar{x}\) and \(\bar{y}\) are the means over all the \(n\) data points. This resampling and fitting process was repeated 10,000 times to obtain posterior distributions for \(r\), \(m\) and \(c\), from which the mean and standard deviation were calculated.
To assess how well constrained the posterior distributions of
Figure 7: Distribution of the correlation coefficient for 10 of the sources investigated. Error bars correspond to \(1\sigma\), calculated from 10,000 Monte Carlo iterations. Colours correspond to the different UVOT filters used (see legend in figures 1 & 2) and are displayed in decreasing energy from top to bottom. The marker on the data point denotes the XRT band with which the correlation was carried out: the full band is denoted with a dot (\(\bullet\)), the hard band by an upwards triangle (\(\blacktriangle\)) and the soft band by a downwards triangle (\(\blacktriangledown\)); the correlations against the hardness ratio are not shown here. The same colour and marker may appear multiple times, as separate simulations were conducted with the inclusion or exclusion of upper limits and/or bad data points.
the fit parameters are, we calculate the inverse coefficient of variation (ICV) (\(\hat{\gamma}_{\rm par}\)) by dividing the mean of the fit parameters (\(\overline{\rm par}\)) by their standard deviation (\(\sigma_{\rm par}\)), i.e. \(\hat{\gamma}_{\rm par}=\overline{\rm par}/\sigma_{\rm par}\). The absolute value of \(\hat{\gamma}_{\rm par}\) may be interpreted as a significance value, with higher values corresponding to more strongly peaked distributions (i.e. the distribution has less scatter).
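A compact sketch of this procedure is given below. It is illustrative rather than the paper's published code; the array names (x, xe, y, ye, y_is_ul) are our stand-ins for the paired count rates, their \(1\sigma\) errors and the upper-limit flags.

```python
# Minimal sketch: Monte Carlo resampling, straight-line fit, Pearson r,
# and the inverse coefficient of variation (ICV) described above.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def correlate(x, xe, y, ye, y_is_ul, n_iter=10_000):
    rs, ms, cs = [], [], []
    for _ in range(n_iter):
        xs = rng.normal(x, xe)                 # Gaussian error sampling
        ys = np.where(y_is_ul,
                      rng.uniform(0.0, y),     # uniform in [0, upper limit]
                      rng.normal(y, ye))
        m, c = np.polyfit(xs, ys, 1)           # least-squares y = m x + c
        rs.append(pearsonr(xs, ys)[0])
        ms.append(m)
        cs.append(c)
    stats = {k: (np.mean(v), np.std(v))
             for k, v in {"r": rs, "m": ms, "c": cs}.items()}
    icv_r = stats["r"][0] / stats["r"][1]      # gamma_r = mean(r) / std(r)
    return stats, icv_r
```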
Simulations were carried out on all possible combinations of the four X-ray bands (full, soft, hard, HR), and the six UVOT filters. Simulations were additionally carried out including and excluding observations labelled as BAD in the XRT pipeline, as well as data points labelled as upper limits in the full XRT band. The simulation grid implies that a single source may have a total of 24 possible simulation combinations for the full XRT band, and 36 for the remaining XRT bands (hard, soft and HR), giving a maximum of 60 correlations per source, assuming that the source has been visited in all UVOT bands and its XRT light curves contain both BAD and upper limit data points. In practice this is not the case and the total number of sets of simulations for each source is almost always lower than 60.
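This counting is easy to reproduce (an illustrative enumeration; the flag names are ours, and upper-limit handling applies only to the full band):

```python
# 6 UVOT filters x {BAD in/out} x {upper limits in/out} for the full band,
# plus 3 remaining XRT bands x 6 filters x {BAD in/out}: 24 + 36 = 60.
from itertools import product

filters = ["V", "B", "U", "UVW1", "UVM2", "UVW2"]
full = list(product(filters, (True, False), (True, False)))
rest = list(product(("hard", "soft", "HR"), filters, (True, False)))
print(len(full), len(rest), len(full) + len(rest))  # 24 36 60
```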
Figure 6 shows the mean value of the slope (\(m\)) and the ICV of the Pearson correlation coefficient (\(\hat{\gamma}_{r}\)) across all of our simulations. Sources appear multiple times on the plot due to the aforementioned grid in simulation space. The diagonal distance from the centre in this parameter space might provide an indication that a correlation exists - the upper right corresponding to positive correlations, the lower
Figure 8: See caption for figure 7.
left to negative. However, as we will demonstrate, figure 6 may not encode sufficient information to locate more physical correlations.
Figures 7, 8, 9 and 10 show the correlation coefficient obtained from our simulations for different X-ray bands and UVOT filters. Each errorbar corresponds to a discrete simulation run, is coloured based on the UVOT filter used in the correlation, and has a marker corresponding to the XRT band it was correlated against (see caption in figure 7). If the errorbars for a single source consistently deviate far from the dotted line (zero) and are well constrained, this indicates a consistent correlation between the UVOT filter and the X-ray bands. The sources V404 Cygni, Swift J0243.6+6124, SS433, SMC X-3, NGC300 ULX-1, NGC 7793 P13 and Holmberg IX X-1 show the clearest cases of such correlation. Some errorbars, such as those seen for the V band in M51 ULX-7, are large due to non-Gaussian distributions for \(r\) over the 10,000 Monte-Carlo simulations.
## 5 Discussion & Conclusions
Although ULXs are defined empirically by their X-ray luminosity, they are well-known to emit over a broad energy range. Indeed, bright optical/UV emission (in excess of \(10^{39}\) erg s\({}^{-1}\)) is observed to originate in both Galactic super-critical accretors (SS433: Dolan et al. 1997) and well studied ULXs (NGC 6946 ULX-3: Kaaret et al. 2010) with a mixture of potential origins. In this paper, we have made predictions about how the UV and X-ray emission might correlate
Figure 9: See caption for figure 7.
(or anti-correlate) depending on the dominant mechanism for the low frequency radiation: the irradiated donor star, irradiated outer disc or wind photosphere.
Based on simple arguments, we predict a lack of any correlation between the UV and X-ray emission where the star is irradiated by a wind-cone, as, with the exception of an orientation where the central regions are eclipsed, the X-ray emission should remain constant (in the absence of precession or mass accretion rate changes). In the case of disc irradiation or precession of the super-critical disc/wind, the exact nature of the correlation depends mostly on the observer inclination and any changes in accretion rate at large radius. Certainly for a fixed inclination angle (again, in the absence of precession), a negative correlation would be expected for disc irradiation as the X-ray emission (assumed here to originate within the wind-cone) increases with increasing \(\dot{m}\), but the optical depth to the outer regions also increases. A negative correlation must also result in the case of precession, but can deviate and even become positive when changes in \(\dot{m}\) are invoked. In the absence of precession, the emission in both bands is a sensitive function of inclination (see Poutanen et al., 2007; Middleton et al., 2015).
NASA's _Swift_ satellite offers an unrivalled opportunity to explore the long timescale changes in both the X-ray and UV bands through simultaneous observing by the XRT and UVOT instruments. For a sample of \(\sim 40\) ULXs, we have extracted the UV and X-ray light curves and searched for correlations, placing constraints on the Pearson coefficient via Monte Carlo simulations.
Our sample contains three Galactic sources: V404 Cygni, Swift J0243.6+6124 and SS433. Strictly speaking, Swift J0243.6+6124 is a ULX under the classical empirical definition; however, the other two sources are indirectly identified as supercritical accretors and share many of the same properties as extragalactic ULXs. V404 Cygni and Swift J0243.6+6124 display among the strongest positive correlations in the sample; the strength of these correlations can be ascribed to large outbursts which subsequently decay over time in both the X-ray and UV/optical bands. The outbursts from both of these sources have been previously studied (see Oates et al., 2019 and Liu et al., 2022); however, to our knowledge this is the first time UVOT data have been used in a study of Swift J0243.6+6124. SMC X-3 is another source that displayed an X-ray outburst in 2016 (Weng et al., 2017; Tsygankov et al., 2017; Townsend et al., 2017); before it dimmed below the _Swift_ detection threshold, many of the observations were taken in windowed timing mode without simultaneous observations in the UV. The combination of the large dynamic range in X-ray count rate and the low number of data points may well bias our inferred correlations for this source.
NGC 300 ULX-1 has a bright counterpart and the system has been identified as a ULX pulsar with a supergiant companion (Carpano et al., 2018; Heida et al., 2019); it is also the closest of the ULXs in our sample to consistently display correlated variability. Initially thought to be a supernova due to its transient nature in the optical (Monard, 2010), the source has been studied using the UVOT by Villar et al. (2016). We observe that the B band appears to display a negative correlation. From the lightcurves (available in the supplementary material), we observe that the source was bright in the B band around 2010, with count rates peaking at \(\sim 10-15\) ct s\({}^{-1}\) and X-ray count rates of \(\sim 0.05\) ct s\({}^{-1}\) (in the full X-ray band). In 2017 the opposite behaviour occurred, with X-ray count rates peaking at \(\sim 0.12\) ct s\({}^{-1}\) and B band count rates of \(\sim 4\) ct s\({}^{-1}\).
There are clearly several objects where the correlation between bands is negative. This can result from precession at fixed accretion rate, a changing accretion rate without precession but at viewer inclinations **not** into the wind-cone, or from irradiation of the outer disc for viewer inclinations into the wind-cone. This appears to be the case for NGC 300 ULX-1 and Holmberg IX X-1 (see bottom left figure 6), both of which are systems where precession has been suggested by previous authors (Weng and Feng, 2018; Vasilopoulos et al., 2019). It is also apparent that in some cases, the nature of the correlation changes for a single source between bands (see Holmberg IX X-1 in figure 8). This likely indicates that each band is affected to a differing degree by one of the different processes mentioned above.
Figure 10: See caption for figure 7.
In our analysis, we have assumed the simplest case of a linear correlation. However, numerous sources show clear patterns of behaviour where an 'L' shape is mapped out (see figure 12), while others show non-linear shapes; a simple linear correlation test (Pearson) is naturally less sensitive to detecting such behaviours. Intriguingly, an 'L' shape naturally results from precession of the wind-cone; using ULXLC (Dauser et al., 2017) we create an example X-ray lightcurve using the parameters \(\theta=5^{\circ}\), \(\Delta i=9^{\circ}\) and \(i=15^{\circ}\), which provides a quasi-sigmoid profile (see the first row of figure 11). We assume that the UV/optical profile is sinusoidal (assuming a quasi-spherical geometry of the outer photosphere of the wind), and plot the subsequent profile of the correlation that would result for different phases of precession. In the case of \(\phi=0\), the two curves produce a triangular shape which results in a Pearson correlation coefficient of \(r=0\), which demonstrates that even though the two curves are correlated, the value of \(r\) does not reveal this. As the phase is changed to \(\pi/2\) the curves produce a characteristic 'L' shape which is similar to some of those seen in our light-curves (such as Holmberg IX X-1). This preliminary result may suggest that more complicated (physical) models may be required to understand and search for correlated variability between bands, and that systems which show such 'L' shaped correlations, may be our clearest indication of precession.
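The phase behaviour of this toy picture is easy to reproduce without ULXLC itself. The sketch below is illustrative only: it swaps the ULXLC profile for a generic periodic quasi-sigmoid (the steepness and threshold are arbitrary choices) and correlates it against a shifted sinusoid, recovering \(r\approx 0\) at zero phase shift even though the two curves are deterministically related.

```python
# Toy model: symmetric quasi-sigmoid "X-ray" pulse vs. sinusoidal "UV"
# profile; the Pearson r depends strongly on the precession phase.
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
xray = 1.0 / (1.0 + np.exp(-20.0 * (np.cos(phi) - 0.5)))  # sharp, even pulse

for shift in (0.0, np.pi / 4, np.pi / 2):
    uv = np.sin(phi + shift)
    r = np.corrcoef(xray, uv)[0, 1]
    print(f"phase shift {shift:.2f}: r = {r:+.3f}")  # r ~ 0 at shift 0
```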
## Acknowledgements
NK acknowledges support via STFC studentship project reference: 2115300. MM is supported by STFC through grant ST/V001000/1. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. The authors thank the anonymous referee for useful suggestions.
## Data Availability
The data underlying this article are available in the supplementary material (online). The data includes the XRT and UVOT lightcurves, Source tables with extraction regions, mean count rates and number of observations, the supplementary material also includes the full set of simulation results in this paper. The source code for this paper may be found at: [https://github.com/nx1/anticorr_data/](https://github.com/nx1/anticorr_data/)
|
2307.00778 | Modelling cargo transport in crowded environments: effect of motor
association to cargos | In intracellular transports, motor proteins transport macromolecules as
cargos to desired locations by moving on biopolymers such as microtubules.
Recent experiments suggest that cargos that can associate motor proteins during
their translocation have larger run-length, association time and can overcome
the motor traffic on microtubule tracks. Here, we model the dynamics of a cargo
that can associate at the most m free motors present on the track as obstacles
to its motion. The proposed models display competing effects of association and
crowding, leading to a peak in the run-length with the free motor density. This
result is consistent with past experimental observations. For m=2 and 3, we
show that this feature is governed by the largest eigenvalue of the transition
matrix describing the cargo dynamics. In all the above cases, free motors are
assumed to be present as stalled obstacles. We finally compare simulation
results for the run-length for general scenarios where the free motors undergo
processive motion in addition to binding and unbinding to or from the
microtubule. | Sutapa Mukherji, Dhruvi K. Patel | 2023-07-03T06:45:03Z | http://arxiv.org/abs/2307.00778v1 | # Modelling cargo transport in crowded environments: effect of motor association to cargos
###### Abstract
In intracellular transports, motor proteins transport macromolecules as cargos to desired locations by moving on biopolymers such as microtubules. Recent experiments suggest that cargos that can associate motor proteins during their translocation have larger run-length, association time and can overcome the motor traffic on microtubule tracks. Here, we model the dynamics of a cargo that can associate at the most \(m\) free motors present on the track as obstacles to its motion. The proposed models display competing effects of association and crowding, leading to a peak in the run-length with the free motor density. This result is consistent with past experimental observations. For \(m=2\) and \(3\), we show that this feature is governed by the largest eigenvalue of the transition matrix describing the cargo dynamics. In all the above cases, free motors are assumed to be present as stalled obstacles. We finally compare simulation results for the run-length for general scenarios where the free motors undergo processive motion in addition to binding and unbinding to or from the microtubule.
## I Introduction
Intracellular transport often involves directional movements of motor proteins on biopolymers such as microtubules or actin filaments [1; 2]. Three major classes of motor proteins known as kinesin, dynein and myosin are responsible for such transports. Using the energy derived from the hydrolysis of adenosine triphosphate (ATP) molecules, motor proteins transport different types of cargos such as cellular organelles, protein complexes, mRNAs etc. to desired locations in the cell. Such cargo movements are essential for various cellular functions such as cell morphogenesis, cell division, cell growth etc. This motion is processive in the sense that motor proteins typically move over several successive steps before detaching from the microtubule. Early studies [3; 4; 5] on intracellular transport revealed the underlying mechanism behind motor transport and how various properties such as the run-length, velocity etc. depend, for example, on the external force or the concentration of ATP molecules. While many of these studies concern transport by a single motor, it is believed that cargos are often transported by multiple motors [6; 7; 8; 9], which help cargos remain bound to the biopolymer for a longer time. Experimental and theoretical studies [10; 7; 11] show that the presence of several motors helps the cargo overcome the viscous drag of the cytoplasm and attain a larger velocity as compared to transport by single motors. The cooperation of several motors also leads to a longer run-length of the cargo before it detaches from the microtubule. Further, in vitro experiments indicate that transport processes by multiple motors can be efficiently regulated by controlling the number of engaging motors [12]. Besides these studies, there have been extensive experimental and theoretical studies attempting to understand the collective nature of transports involving many motors under diverse conditions [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23].
Quite often such transport processes take place in a crowded environment of the intracellular space. This is particularly true for the axon region of the neuron cell, where a dense network of biopolymers, pre-existing organelles and the narrow geometry of the axon together give rise to a crowded environment that can impede cargo movements. However, despite crowding, it is found that the cargo transport happens in a robust manner without significant jamming or cargo dissociation. Experiments elucidating cargo transport in crowded environments indicate that motor proteins can adopt alternative strategies that might help them circumvent the crowding problem [24; 15]. In a recent experimental study aimed at understanding the motion of a cargo in a crowded environment, Conway _et al._ [25; 26] studied the motile properties of quantum dot (Qdot) cargos, which can associate multiple kinesins, on a microtubule crowded with free kinesin motors. While comparing the motile properties of free kinesins and the Qdot cargos in crowded conditions, cargos were found to display longer run-lengths and association times as compared to free kinesins as the motor density increased. This difference prompted the prediction that the property of a cargo to associate multiple motors helps increase its run-length, association time and overcome the motor traffic. It was observed that while translocating, Qdot cargos could associate kinesins from the microtubule pool, dissociate kinesins attached to themselves, or associate kinesins that are already moving along the microtubule and subsequently move together with them.
Motivated by this work, here we propose mathematical and computational models to characterise the motion of a single cargo on a track crowded by free kinesin motors. During its translocation, the cargo can associate free motors, which otherwise impede the motion of the cargo by occupying the forward sites on the microtubule. We assume that upon such association, a kinesin detaches itself from the microtubule, rendering the forward site free. Our aim is to find how the interplay of the kinesin-association property of the cargo and the crowding along the track affects the cargo's motile properties, for example, its run-length and association time. To our knowledge, this is the first modelling study of cargo transport where the cargo has the ability to associate kinesins present on a crowded microtubule track.
To this end, we study cargo transport under different scenarios described below. (1) In the simplest scenario, we assume that the cargo is always bound to the microtubule. Along the path of the cargo, the microtubule binding sites are randomly occupied by free kinesin motors. We assume that the kinesin that gets associated to the cargo during its translocation plays no specific role in facilitating the forward motion of the cargo other than freeing the forward site. This is equivalent to assuming that the cargo removes the kinesin molecule occupying the forward site via the association process. For this model, referred to as "Model 1" below, we find the average velocity of the cargo. (2) In model 2, we assume that the cargo can bind more than one kinesin, say at the most \(m\) kinesins. This is based on the predictions that the cargo may have a finite number of kinesin binding sites [25]. Hence, the cargo can associate a kinesin occupying the forward site provided it has a free binding site available. We consider \(m=2,\ 3,\ \text{and}\ 4\) in the following analysis. Kinesins attached to the cargo can detach from it, and a free kinesin from the intracellular space can attach to the cargo at given rates. Finally, we implement the condition that a cargo can no longer be on the microtubule track if all the kinesin molecules detach from the cargo. A generalised version of the mathematical formulation of model 1 allows us to analyse the cargo motion obeying the above rules for \(m=2\) and \(3\). Finally, run-lengths for \(m=2,\ 3,\ \text{and}\ 4\) are found upon numerically simulating the cargo dynamics. The motion of the cargo under the different dynamical rules is illustrated in figure 1. (3) In models 1 and 2, free kinesins are assumed to be stalled on the microtubule. In model 3, we simulate cargo dynamics in the presence of moving kinesins as well as random processes of kinesin binding and unbinding to or from the microtubule. We compare the run-length of the cargo (with \(m=3\)) in the presence or absence of the various processes mentioned above.
## II Models and Results
### Model 1
The motion of the cargo is modelled considering the following dynamical rules. (a) The cargo transported by a kinesin starts its forward journey from a given point on a one-dimensional track (often referred below as a lattice) representing the microtubule. (b) For all the lattice sites ahead, we assume an initial, random distribution of free kinesins. The average kinesin density on the lattice is represented by \(r_{m}\). These kinesins are assumed to be stalled. (c) The cargo moves to the forward site provided the forward site is not occupied by a free kinesin. (d) If the forward site in front of the cargo is occupied by a free kinesin, the cargo can associate the kinesin with itself at rate \(r_{an}\) rendering the forward site free.
In order to build the mathematical model, we consider possible configurations that two neighbouring sites can have when the first one of them is occupied by the cargo. For \(i\) and \((i+1)\)-th sites, with the cargo being at the \(i\)-th site, the \((i+1)\)-th site can be either empty or occupied by a free kinesin molecule. We denote the probability of finding \((i+1)\)-th site empty with the \(i\)-th site occupied by the cargo at time \(t\) by \(P(i,t)\). Similarly, the probability of finding \((i+1)\)-th site occupied by a free kinesin while the cargo is at the \(i\)-th site at time \(t\) is \(Q(i,t)\). Figure (2) shows these configurations as well as possible transitions from one configuration to the other as the cargo translocates forward. The following equations describe how these two configurations evolve with time [27].
\[\frac{dP(i)}{dt} =r_{an}Q(i)+(1-r_{m})P(i-1)-P(i), \tag{1}\] \[\frac{dQ(i)}{dt} =-r_{an}Q(i)+r_{m}P(i-1). \tag{2}\]
The first term on the RHS of equation (1) indicates a cargo-association process due to which a Q-type configuration transitions to a \(P\)-type configuration. The term with the pre-factor \((1-r_{m})\) indicates the motion of the cargo from \((i-1)\)-th site to \(i\)-th site while the \((i+1)\)-th site is vacant. While the forward motion happens with
Figure 1: (A) Cargo transport on a microtubule in the presence of free kinesins bound to microtubules. The microtubule is represented by a one-dimensional lattice with a site representing a tubulin dimer, the basic subunit of a microtubule. (B) A cartoon of a cargo bound to two kinesins. The cargo has more than one kinesin binding sites. (C) The process of kinesin association to cargo at rate \(r_{an}\). (D) The process of attachment of a kinesin from the intracellular space to the cargo at rate \(\omega_{a}\). (E) The process of kinesin detachment from the cargo at rate \(\omega_{d}\).
unit rate, the factor \((1-r_{m})\) indicates the probability that after the forward motion, \((i-1)\to i\), the cargo lands in a P-type configuration i.e. the \((i+1)\)-th site is unoccupied by a kinesin. The last term in (1) is a loss term which indicates that a cargo has moved from the \(i\)-th site to the \((i+1)\)-th site. In equation (2), the first term on the RHS indicates a loss of a \(Q\)-type configuration due to the association process. The second term is a gain term due to the hopping of the cargo from \((i-1)\to i\) where \((i+1)\)-th site is occupied by a free kinesin.
To find the average properties of the cargo motion, we define generating functions corresponding to the two probabilities as
\[\tilde{P}(\gamma)=\sum_{i=-\infty}^{\infty}\gamma^{i}P(i),\mbox{ and }\tilde{Q}(\gamma)=\sum_{i=-\infty}^{\infty}\gamma^{i}Q(i). \tag{3}\]
In terms of these generating functions, the time evolution equations are
\[\frac{d}{dt}\tilde{P}(\gamma)=r_{an}\tilde{Q}(\gamma)+(1-r_{m})\gamma\tilde{ P}(\gamma)-\tilde{P}(\gamma),\mbox{ and } \tag{4}\]
\[\frac{d}{dt}\tilde{Q}(\gamma)=-r_{an}\tilde{Q}(\gamma)+r_{m}\gamma\tilde{P}( \gamma). \tag{5}\]
The average position of the cargo can be found from the probabilities as
\[\langle i\rangle=\sum_{i=-\infty}^{\infty}i[P(i)+Q(i)]=\frac{d}{d\gamma}[ \tilde{P}(\gamma)+\tilde{Q}(\gamma)]\mid_{\gamma=1}. \tag{6}\]
The average velocity of the cargo is obtained from \(v=\langle i\rangle/t\) where \(t\) is the time taken to travel an average distance \(\langle i\rangle\). Solving equations (4) and (5), the average velocity of the cargo is found as (see Appendix A for details)
\[v=\frac{r_{an}}{r_{an}+r_{m}}. \tag{7}\]
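Equation (7) can be cross-checked numerically. Under the stated rules the mean time to advance one site is \(1+r_{m}/r_{an}\) (a unit-rate hop plus, with probability \(r_{m}\), an association wait of mean \(1/r_{an}\)), whose inverse reproduces (7); the sketch below (illustrative code, not from the paper) confirms this.

```python
# Monte Carlo check of equation (7) under the Model 1 rules: each new
# forward site is occupied with probability r_m; an occupied site is
# cleared by association at rate r_an, and the hop itself has unit rate.
import numpy as np

def mean_velocity(r_m, r_an, n_sites=200_000, seed=1):
    rng = np.random.default_rng(seed)
    t = 0.0
    for _ in range(n_sites):
        if rng.random() < r_m:                # forward site blocked
            t += rng.exponential(1.0 / r_an)  # wait for association
        t += rng.exponential(1.0)             # forward hop at unit rate
    return n_sites / t

print(mean_velocity(0.5, 0.4))   # ~0.444
print(0.4 / (0.4 + 0.5))         # exact value from equation (7)
```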
### Model 2
Here we generalize the mathematical framework, discussed in the previous section, for higher values of \(m\) taking into account the possibilities of detachments of the cargo from the microtubule. In the following, we discuss the mathematical model for \(m=2\) and simulation results for \(m=2,\ 3,\mbox{ and }4\). The cargo dynamics for \(m=3\) is discussed in Appendix B.
For \(m=2\), the cargo has two kinesin binding sites. Hence it can associate at the most two kinesins. The basic rules for cargo transport in this case are listed below. (a) As before, we begin with an initial, random distribution of stalled free kinesins on a one-dimensional lattice. The average density of free kinesins is \(r_{m}\). (b) The cargo attached to a kinesin starts its forward journey from a given point on the lattice. (c) If the forward site is blocked by a free kinesin, the cargo can associate the kinesin with itself at a rate \(r_{an}\) provided the cargo has only one kinesin bound to it. (d) A kinesin bound to the cargo can detach from the cargo at rate \(\omega_{d}\) and a free kinesin from the intracellular space can bind to the cargo at rate \(\omega_{a}\) provided the cargo has only one kinesin attached to it. (e) A cargo is not attached to the microtubule if all the kinesins detach from the cargo. Thus, we are not being specific about how many kinesins are actively transporting the cargo or how many remain bound to the cargo without participating in cargo transport actively.
As before, we begin with two possible configurations of, say, \(\{i,i+1\}\)-th sites where \(i\)-th site is occupied by the cargo. However, here the cargo can be in two possible states - bound to one kinesin or bound to two kinesins. Hence, the probabilities are defined in the following way. \(P_{n}(i,t)\ (n=1,\ 2)\) represents the probability, at time \(t\), of the configuration where the cargo, located at \(i\)-th site, is bound to \(n\) kinesins and the \((i+1)\)-th site is empty. Similarly, \(Q_{n}(i,t)\ (n=1,\ 2)\) represents the probability, at time \(t\), of the configuration where the cargo, located at \(i\)-th site, is bound to \(n\) kinesins and the \((i+1)\)-th site is occupied by a free kinesin.
The probabilities of various configurations change with time as per the following equations,
Figure 2: (A) Two possible configurations of two neighbouring sites with the first site occupied by the motor transporting the cargo. (B) Possible transitions associated with the transport of the cargo to the next site.
\[\frac{d}{dt}P_{2}(i) = (1-r_{m})P_{2}(i-1)-P_{2}(i)+r_{an}Q_{1}(i)-\omega_{d}P_{2}(i)+ \omega_{a}P_{1}(i), \tag{8}\] \[\frac{d}{dt}P_{1}(i) = (1-r_{m})P_{1}(i-1)-P_{1}(i)-\omega_{a}P_{1}(i)+\omega_{d}(P_{2}(i )-P_{1}(i)),\] (9) \[\frac{d}{dt}Q_{2}(i) = r_{m}P_{2}(i-1)+\omega_{a}Q_{1}(i)-\omega_{d}Q_{2}(i),\text{ and}\] (10) \[\frac{d}{dt}Q_{1}(i) = r_{m}P_{1}(i-1)-r_{an}Q_{1}(i)+\omega_{d}(Q_{2}(i)-Q_{1}(i))- \omega_{a}Q_{1}(i). \tag{11}\]
The \(r_{\text{an}}\)-dependent term in equation (8) represents a process of kinesin association by the cargo. Due to this process, a \(Q_{1}\)-type configuration transitions to a \(P_{2}\)-type configuration. \(\omega_{a}(\omega_{d})\) dependent terms represent kinesin attachment(detachment) processes to(from) the cargo. For example, the \(\omega_{d}\) dependent term in equation (8) represents detachment of a kinesin due to which the cargo transitions from \(P_{2}\) state to \(P_{1}\) state. In addition to above equations, we introduce probabilities \(P_{0}(i,t)\) and \(Q_{0}(i,t)\) of having situations where the cargo bound to one kinesin residing at \(i\)-th site at time \(t\) loses its kinesin. These probabilities change with time as per the equations
\[\frac{d}{dt}P_{0}(i)=\omega_{d}P_{1}(i)\text{ and }\frac{d}{dt}Q_{0}(i)= \omega_{d}Q_{1}(i). \tag{12}\]
Defining generating functions as \(\tilde{P}_{n}(\gamma,t)=\sum_{i=-\infty}^{\infty}\gamma^{i}P_{n}(i,t)\), and \(\tilde{Q}_{n}(\gamma,t)=\sum_{i=-\infty}^{\infty}\gamma^{i}Q_{n}(i,t)\) (where \(n=0,\ 1,\ 2\)), we can rewrite equations (8)-(11) as
\[\frac{d}{dt}\mathbf{H}(\gamma,t)=\mathbf{SH}(\gamma,t), \tag{13}\]
where \(\mathbf{H}\) is a column matrix
\[\mathbf{H}(\gamma,t)=\left(\begin{array}{c}\tilde{P}_{2}(\gamma,t)\\ \tilde{P}_{1}(\gamma,t)\\ \tilde{Q}_{2}(\gamma,t)\\ \tilde{Q}_{1}(\gamma,t)\end{array}\right) \tag{14}\]
and \(\mathbf{S}\) is a \(4\times 4\) matrix
\[\mathbf{S}=\left(\begin{array}{cccc}(1-r_{m})\gamma-1-\omega_{d}&\omega_{a} &0&r_{an}\\ \omega_{d}&(1-r_{m})\gamma-1-\omega_{a}-\omega_{d}&0&0\\ r_{m}\gamma&0&-\omega_{d}&\omega_{a}\\ 0&r_{m}\gamma&\omega_{d}&-(\omega_{a}+\omega_{d}+r_{an})\end{array}\right). \tag{15}\]
In order to have an estimate of the association time of the cargo and how it is impacted by various processes, we have studied the quantity \([\tilde{P}_{0}(\gamma,t)+\tilde{Q}_{0}(\gamma,t)]\mid_{\gamma=1}\). This quantity, being identical to \(\sum_{i=-\infty}^{\infty}[P_{0}(i,t)+Q_{0}(i,t)]\), indicates the total probability of the cargo being left with no kinesin bound to it while at any point on the lattice. Plots of this quantity for different parameter values are shown in figures (3) and (4). At large times, this quantity approaches unity, indicating that the cargo loses all its kinesins and detaches from the microtubule. The approach of this quantity to unity is what provides an estimate of the association time of the cargo to the microtubule; a fast approach to unity indicates a short association time. For both figures, we have chosen the same sets of values for the kinesin-association rate, \(r_{an}\). An increase in \(\omega_{a}\) or a decrease in \(\omega_{d}\) is expected to increase the association time of the cargo. The figures show that reducing the kinesin detachment rate, \(\omega_{d}\), from the cargo has a much stronger effect on the association time than increasing the kinesin attachment rate, \(\omega_{a}\).
In figure (5), we have shown how the total probability of cargo detachment at any point on the lattice is influenced by the kinesin density, \(r_{m}\). The figure shows that at a low value of the kinesin-association rate by the cargo, \(r_{\text{an}}\), the extent of crowding influences the cargo-association time to the microtubule only mildly. The situation changes significantly when the kinesin-association rate is high. In this case, the association time of the cargo to the microtubule, in general, increases significantly. Further, for large \(r_{\text{an}}\), the crowding density of free kinesins affects the association time of the cargo significantly with the association time being larger for lower crowding density.
The dependence of the run-length of the cargo on the crowding density can be obtained upon solving equations
(8)-(11) numerically. Figure (6) shows run-length plots for different values of the kinesin-association rate, \(r_{\rm an}\), and kinesin attachment and detachment rates, \(\omega_{a}\) and \(\omega_{d}\), respectively. For small \(r_{\rm an}\), the run-length decreases monotonically. However, for large \(r_{\rm an}\), the run-length increases initially for low crowding. In this case, due to large \(r_{\rm an}\), the cargo benefits from the kinesin-association process at low crowding. As the crowding density increases, due to limited number of binding sites, the cargo no longer benefits from kinesin association and the run-length decreases. This variation of the run-length with the crowding density is consistent with earlier experimental predictions [25]. With the increase in the kinesin detachment rate, \(\omega_{d}\), the run-length of the cargo decreases significantly. However, as found earlier, a decrease in the rate of kinesin attachment, \(\omega_{a}\), to the cargo has mild effect on cargo's run-length.
The fact that the run-length of the cargo for high kinesin-association rates increases with the crowding density initially is consistent with the estimates obtained from the analysis of the largest eigenvalue of the transition matrix \(S\) and the association time. In the limit of large time, the solutions for the probabilities are given by
\[{\bf H}\approx c_{1}e^{\lambda_{l}t}{\bf X}, \tag{16}\]
where \(c_{1}\) is a constant, \(\lambda_{l}\) is the largest of the four eigenvalues of the transition matrix \({\bf S}\) with all of them being negative and \({\bf X}=(x_{1},\ x_{2},\ x_{3},\ x_{4})^{T}\) is the corresponding eigenvector. The average distance travelled by the cargo and its average velocity can be obtained from \(\langle i\rangle=\sum_{i=-\infty}^{\infty}i[P_{1}(i,t)+P_{2}(i,t)+Q_{1}(i,t)+ Q_{2}(i,t)]\)
\(=\gamma\frac{d}{d\gamma}[\tilde{P}_{1}(\gamma,t)+\tilde{P}_{2}(\gamma,t)+\tilde{Q}_{1}(\gamma,t)+\tilde{Q}_{2}(\gamma,t)]\mid_{\gamma=1}\) and \(v=\langle i\rangle/t\), respectively. In the large time limit, the dominant contribution to the velocity is of the form \(v\approx[\gamma c_{1}e^{\lambda_{l}t}\frac{d\lambda_{l}}{d\gamma}\sum_{i=1}^{4}x_{i}]\mid_{\gamma=1}\). Using \(1/|\lambda_{l}|\) as an esti
Figure 4: \(y-\)axis represents the total probability (\(\sum_{i=-\infty}^{\infty}[P_{0}(i,t)+Q_{0}(i,t)]\)) of the cargo losing its last kinesin at time \(t\) while being at any site on the lattice. For this plot, \(r_{m}=0.5\) and \(\omega_{d}=0.01\).
Figure 5: \(y-\)axis represents the total probability (\(\sum_{i=-\infty}^{\infty}[P_{0}(i,t)+Q_{0}(i,t)]\)) of the cargo losing its last kinesin at time \(t\) while being at any site on the lattice. For this plot, \(\omega_{a}=\omega_{d}=0.001\).
Figure 6: Run-length of the cargo plotted with crowding density, \(r_{m}\). The cargo can bind at the most two kinesins (\(m=2\)).
mate of the association time, \(t_{\rm assoc}\), and finding \(\frac{d\lambda_{l}}{d\gamma}\mid_{\gamma=1}\) numerically for given parameter values, we have plotted \(t_{\rm assoc}\frac{d\lambda_{l}}{d\gamma}\mid_{\gamma=1}\) as a function of the crowding density, \(r_{m}\), in figure (7). Plots in figure (7) display similar trends as found in figure (6) for the run-length. Although \(t_{\rm assoc}v\) gives an estimate of the run-length, the variation in the run-length with the crowding density as seen in figure (6) essentially arises from \(t_{\rm assoc}\frac{d\lambda_{l}}{d\gamma}\mid_{\gamma=1}\). It can be verified numerically that the variation in the remaining factors in \(v\) is almost negligible over the entire range of \(r_{m}\), \([0:1]\).
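The computation behind figure (7) is straightforward to reproduce. The sketch below builds \(\mathbf{S}\) from equation (15), estimates \(t_{\rm assoc}=1/|\lambda_{l}|\) at \(\gamma=1\) and approximates \(d\lambda_{l}/d\gamma\) by a central finite difference; the step size and parameter values are illustrative choices.

```python
# Largest eigenvalue of S (equation (15)) and the run-length proxy
# t_assoc * d(lambda_l)/d(gamma) at gamma = 1, for the m = 2 model.
import numpy as np

def S(g, r_m, r_an, w_a, w_d):
    return np.array([
        [(1 - r_m) * g - 1 - w_d, w_a, 0.0, r_an],
        [w_d, (1 - r_m) * g - 1 - w_a - w_d, 0.0, 0.0],
        [r_m * g, 0.0, -w_d, w_a],
        [0.0, r_m * g, w_d, -(w_a + w_d + r_an)],
    ])

def run_length_proxy(r_m, r_an, w_a=1e-3, w_d=1e-3, h=1e-6):
    lam = lambda g: np.linalg.eigvals(S(g, r_m, r_an, w_a, w_d)).real.max()
    t_assoc = 1.0 / abs(lam(1.0))                    # association-time estimate
    dlam = (lam(1.0 + h) - lam(1.0 - h)) / (2.0 * h) # central difference
    return t_assoc * dlam

for r_m in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"r_m = {r_m}: {run_length_proxy(r_m, r_an=0.5):.1f}")
```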
The dynamical equations for a cargo that can bind at the most three kinesins, i.e. \(m=3\), are shown in Appendix B. The variation of \(t_{\rm assoc}\frac{d\lambda_{l}}{d\gamma}\mid_{\gamma=1}\) with the crowding density as obtained from the analysis of the largest eigenvalue is shown in figure (14).
Next, we simulate the cargo dynamics with the cargo having \(m=2,\ 3,\ \text{and}\ 4\) kinesin binding sites. Figure (8) shows the change in the run-length of the cargo with the free-kinesin density, \(r_{m}\), for \(m=2,\ 3,\ \text{and}\ 4\). Simulations show an initial increase in the run-length with the free-kinesin density for \(m=3\) and \(4\), a trend that was shown earlier in figure (6).
### Processive motion of free kinesins for \(m=3\)
In this section, we study the motion of the cargo in the presence of free kinesins which move processively on the microtubule track. In addition, kinesins from the intracellular environment can attach to the microtubule and those walking on the microtubule can leave the microtubule at given rates.
Here we simulate this system using the cellular automaton method. As before, the microtubule is represented by a one-dimensional lattice. We begin with the cargo positioned at one end of the lattice. The lattice sites are randomly occupied by free kinesins with an average density, \(r_{m}\). The cargo moves following the association mechanism mentioned earlier. We assume that the kinesins move unidirectionally in the same direction as the cargo. The motion of the free kinesins follows the rules of the paradigmatic totally asymmetric simple exclusion process [17]. Accordingly, each kinesin can hop to the neighbouring forward site provided the target site is not occupied by another kinesin. The attachment and detachment of kinesins are as per the Langmuir kinetics considered in [19]. A kinesin can attach to a lattice site at rate \(\omega_{a,\rm kin}\) provided the site is empty, and a free kinesin can detach from the lattice at rate \(\omega_{d,\rm kin}\). A kinesin located at the boundary site can exit from the lattice at rate \(\beta\). We follow a random sequential updating scheme with probability \(p\) for the cargo update and \(1-p\) for updating the rest of the sites. Depending on the site chosen, the state of the site (or of the cargo) is updated following the aforementioned rules, as sketched below. The description of various parameters is provided in Appendix C.
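A condensed, illustrative implementation of these update rules follows. It is not the code used for the figures: the handling of simultaneous cargo events and the attachment condition (a kinesin attaches whenever fewer than \(m\) are bound) are our own simplifying assumptions.

```python
# Cellular-automaton sketch: a cargo with at most m kinesin binding sites
# on a TASEP lattice of free kinesins with Langmuir kinetics.
import numpy as np

def run_length(L=2000, r_m=0.3, r_an=0.4, w_a=0.05, w_d=0.05,
               w_a_kin=0.0008, w_d_kin=0.0016, beta=0.6, p=0.1,
               m=3, t_max=10**6, seed=0):
    rng = np.random.default_rng(seed)
    site = (rng.random(L) < r_m).astype(np.int8)  # free-kinesin occupancy
    pos, bound = 0, 1                             # cargo position, bound kinesins
    site[pos] = 0
    for _ in range(t_max):
        if rng.random() < p:                      # cargo update
            if rng.random() < w_d:                # a kinesin detaches from cargo
                bound -= 1
                if bound == 0:
                    return pos                    # cargo leaves the track
            elif bound < m and rng.random() < w_a:
                bound += 1                        # a kinesin attaches to cargo
            if pos + 1 < L:
                if site[pos + 1] == 0:
                    pos += 1                      # unhindered forward hop
                elif bound < m and rng.random() < r_an:
                    site[pos + 1] = 0             # associate the blocking kinesin
                    bound += 1
        else:                                     # bulk (free-kinesin) update
            i = int(rng.integers(L))
            if i == pos:
                continue
            if site[i] == 1:
                if rng.random() < w_d_kin:
                    site[i] = 0                   # unbind from the microtubule
                elif i == L - 1:
                    if rng.random() < beta:
                        site[i] = 0               # exit at the boundary
                elif site[i + 1] == 0 and i + 1 != pos:
                    site[i], site[i + 1] = 0, 1   # TASEP forward hop
            elif rng.random() < w_a_kin:
                site[i] = 1                       # bind from the bulk
    return pos

print(np.mean([run_length(seed=s) for s in range(10)]))  # crude estimate
```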
In figure (9), we have plotted run-lengths for three different scenarios: (i) free kinesins are stalled (static obstacles); (ii) free kinesins are in motion; and (iii) free kinesins are in motion and can attach to (or detach from) the microtubule at rates \(\omega_{a,\rm kin}\) (or \(\omega_{d,\rm kin}\)). The plots indicate that in the case of processive free kinesins (case (ii)), the run-length of the cargo peaks at a higher crowding density, with a higher maximum value, as compared to the stalled case. The run-length reduces significantly in the case of random attachment and detachment of free kinesins (case (iii)). Figure (15) in Appendix C shows that the attachment processes lower the run-length significantly. Although, due to the increased effective crowding density, the
Figure 8: Run-length of the cargo as a function of free-kinesin density along the microtubule track. For this figure, \(\omega_{a}=\omega_{d}=0.05\), \(r_{an}=0.4\). The results are obtained upon averaging over \(500\) samples for \(m=2\) and \(3\), and over \(1500\) samples for \(m=4\).
Figure 7: \(y\)-axis represents the product of two factors that dominate the nature of the run-length. \(t_{\rm assoc}\) is the association time of the cargo and \(\frac{d\lambda_{l}}{d\gamma}\) arises while computing the average velocity of the cargo (see the text). This plot is for \(m=2\) with \(\omega_{a}=\omega_{d}=0.001\).
cargo remains bound to the microtubule for a large span of time by associating free kinesins, the crowding restricts the run-length. As a result of this, the run-length attains its maximum value at a lower value of the crowding density, \(r_{m}\).
Figure (10) shows the variation in the run-length as the association rate \(r_{\rm an}\) is changed. The peaks in the run-lengths are similar to those observed earlier. Figure (11) shows a comparison of how the association time of the cargo to the microtubule depends on \(r_{m}\) for different values of the kinesin-association rate, \(r_{\rm an}\). The association time is expressed as the total number of discrete time steps of simulation till the cargo leaves the microtubule. An increase in \(r_{\rm an}\) helps the cargo stay attached to the microtubule for a longer span of time while, as per figure (12), the velocity of the cargo decreases monotonically with the crowding density, \(r_{m}\). Further, no significant variation in the velocity is seen with \(r_{\rm an}\). The association time and the velocity vary with \(r_{m}\) in such a manner that their product exhibits a peak at a specific value of \(r_{m}\).
Figure 11: Association time of the cargo to the microtubule as a function of crowding density, \(r_{m}\), for different values of the kinesin-association rate (\(r_{an}\)). Free kinesins move processively and can randomly attach to or detach from the microtubule. For this plot \(\omega_{a,{\rm kin}}=0.0008,\ \omega_{d,{\rm kin}}=0.0016\), \(p=0.1\), and \(\beta=0.6\). The total number of lattice sites is 2000. The association time is expressed as the total number of discrete time steps of simulation till the cargo leaves the microtubule. Association times are obtained upon averaging over 4000 samples.
Figure 10: Run-length of the cargo as a function of crowding density, \(r_{m}\), for different values of the kinesin-association rate \(r_{\rm an}\). Free kinesins move processively and can randomly attach to or detach from the microtubule. For this plot \(\omega_{a,{\rm kin}}=0.0008,\ \omega_{d,{\rm kin}}=0.0016\)[15], \(p=0.1\), and \(\beta=0.6\). The total number of lattice sites is 2000. Run-lengths are obtained upon averaging over 4000 samples.
Figure 9: Run-length of the cargo as a function of crowding density, \(r_{m}\), under different conditions. For ”processive and attachment/detachment”, free kinesins move processively along the microtubule and can also randomly attach/detach to/from the microtubule. For the ”processive” case, free kinesins only move processively, without any attachment/detachment dynamics. In the ”stalled” case, free kinesins do not move and do not attach to or detach from the microtubule. For the ”processive and attachment/detachment” plot, \(\omega_{a,{\rm kin}}=\omega_{d,{\rm kin}}=0.01\). For the rest of the cases, \(\omega_{a,{\rm kin}}=\omega_{d,{\rm kin}}=0\). Other parameter values are \(\omega_{a}=\omega_{d}=0.05\), \(r_{\rm an}=0.4\), \(p=0.1\), and \(\beta=0.6\). The total number of lattice sites is 2000. Run-lengths are obtained upon averaging over 4000 samples.
## III Summary
The motion of cargos on biopolymeric tracks crowded due to free or cargo-bound motor proteins is a subject of immense experimental and theoretical investigations. The central goal of these studies is to understand how cargos manage to overcome the motor traffic in order to transport necessary materials in a robust manner. Motivated by some of the experimental observations on translocation of quantum dot cargos in crowded environments, we have modelled mathematically and computationally the motion of a cargo that can bind kinesins present along its trajectory on the microtubule.
In the mathematical modelling, the kinesins on the microtubule track are assumed to be stalled. Besides taking into account the kinesin-association property of the cargo, our model incorporates the following dynamical rules. (i) The cargo has a limited number of kinesin-binding sites as a result of which it can bind at the most a given number of kinesins, (ii) bound kinesins can detach from the cargo and kinesins from the intracellular space can bind to the cargo at certain rates and (iii) the cargo leaves the microtubule if all the kinesins detach from the cargo. Upon finding the cargo velocity for a toy model where the cargo never leaves the microtubule and keeps moving forward by removing obstacles via association, we generalize the mathematical formulation to take into account the aforementioned aspects of the cargo dynamics. We show that the two features, namely, the crowding along the microtubule and the ability of the cargo to associate kinesins have competing effects on the run-length of the cargo. For low crowding density, as the crowding density increases, the cargo benefits due to its ability to associate kinesins. This leads to an increase in the run-length with the crowding density. However, as the crowding density increases further, due to its limited number of kinesin binding sites, the cargo does not benefit anymore through kinesin association. As a consequence, the run-length decreases for large values of the crowding density. This nature of the run-length has been predicted earlier from experimental observations. We show that this property of the run-length is governed by the largest eigenvalue of the transition matrix describing the dynamics of the cargo. The model can be generalized further to incorporate other features such as reversals of the cargo, bidirectional movements of the cargo, pausing of the cargo etc. with the frequencies of such events depending on the crowding density. The present work lays a foundation for such studies. Additionally, this analysis may also lead to testable predictions for cargo's motile properties once appropriate parameter values are available.
Next, we have simulated cargo transport with processive motion of free kinesins as well as binding and unbinding of motors to or from the microtubule. For different values of the rate of kinesin association to the cargo, the run-lengths show prominent peaks as the crowding density is changed. However, overall, the run-length decreases significantly due to binding of motors to the microtubule, a process that increases the effective crowding density. As a consequence of the cargo's ability to associate kinesins, the association time of the cargo to the microtubule increases with the increase in the crowding density. The velocity of the cargo, on the other hand, decreases with the crowding density, and it remains approximately unchanged with the kinesin-association rate of the cargo. Incorporating the processive motion in the mathematical model would add another level of complexity, which can be a subject of future studies.
## Appendix A Model 1
In matrix form, the differential equations (4) and (5) appear as
\[\frac{d}{dt}\mathbf{G}(\gamma,t)=\mathbf{R}\mathbf{G}(\gamma,t), \tag{30}\]
where
\[\mathbf{G}(\gamma,t)=\begin{pmatrix}\tilde{P}(\gamma,t)\\ \tilde{Q}(\gamma,t)\end{pmatrix}\text{ and }\] \[\mathbf{R}=\left(\begin{array}{cc}(1-r_{m})\gamma-1&r_{an}\\ r_{m}\gamma&-r_{an}\end{array}\right). \tag{31}\]
Here \(\mathbf{R}\) is a transition matrix. For \(\gamma=1\), the sum of all the elements in a column is \(0\). One may find the solutions of these equations upon finding the eigenvalues
Figure 12: Velocity of the cargo as a function of crowding density, \(r_{m}\), for different values of the kinesin-association rate (\(r_{an}\)). Free kinesins move processively and can randomly attach to or detach from the microtubule. For this plot \(\omega_{a,\text{kin}}=0.0008,\ \omega_{d,\text{kin}}=0.0016\), \(p=0.1\), and \(\beta=0.6\). The total number of lattice sites is 2000. For a given \(r_{m}\), we have computed the velocity for every sample. The values presented here are obtained upon averaging over 4000 samples.
and eigenvectors of \({\bf R}\). The eigenvectors corresponding to the eigenvalues \(\lambda_{\pm}\) are, respectively,
\[\left(\begin{array}{c}1\\ \frac{\lambda_{+}+1-(1-r_{m})\gamma}{r_{an}}\end{array}\right)\ \mbox{and}\ \ \left( \begin{array}{c}1\\ \frac{\lambda_{-}+1-(1-r_{m})\gamma}{r_{an}}\end{array}\right), \tag{10}\]
where \(\lambda_{+,-}=\frac{1}{2}[-(r_{an}+1-(1-r_{m})\gamma)\pm A]\) with \(A=\sqrt{(r_{an}+1-(1-r_{m})\gamma)^{2}-4r_{an}(1-\gamma)}\). The solutions for the generating functions are
\[\left(\begin{array}{c}\tilde{P}(\gamma,t)\\ \tilde{Q}(\gamma,t)\end{array}\right)=c_{1}e^{\lambda_{+}t}\bigg{(}\begin{array} []{c}1\\ \frac{\lambda_{+}+1-(1-r_{m})\gamma}{r_{an}}\end{array}\bigg{)}+\] \[c_{2}e^{\lambda_{-}t}\bigg{(}\begin{array}{c}1\\ \frac{\lambda_{-}+1-(1-r_{m})\gamma}{r_{an}}\end{array}\bigg{)}, \tag{11}\]
where \(c_{1},\ c_{2}\) are integration constants. We consider the initial conditions \(P(i,t=0)=Q(i,t=0)=1/2\) for \(i=0\). Using these conditions, we find
\[c_{1}=\frac{1}{2A}\left[r_{an}-\lambda_{-}-1+(1-r_{m})\gamma \right]\quad\mbox{and}\] \[c_{2}=\frac{1}{2}-c_{1}=\frac{1}{2A}\left[A-r_{an}+\lambda_{-}+1 -(1-r_{m})\gamma\right]. \tag{12}\]
Since in the large time limit, the solutions are governed by the largest eigenvalue, we have
\[\tilde{P}(\gamma,t)+\tilde{Q}(\gamma,t)\approx c_{1}e^{\lambda_{+}t}\left[1+ \frac{\lambda_{+}+1-(1-r_{m})\gamma}{r_{an}}\right]. \tag{13}\]
Upon taking derivatives of \(\tilde{P}(\gamma,t)+\tilde{Q}(\gamma,t)\) with respect to \(\gamma\), we have
\[\langle i\rangle=\left[\gamma\left(\frac{d\tilde{P}}{d\gamma}+ \frac{d\tilde{Q}}{d\gamma}\right)\right]_{\gamma=1}\] \[=\left\{\gamma\frac{dc_{1}}{d\gamma}e^{\lambda_{+}t}\big{(}1+ \frac{\lambda_{+}+1-(1-r_{m})\gamma}{r_{an}}\big{)}\right\}_{\gamma=1}+\] \[\left\{\gamma c_{1}e^{\lambda_{+}t}\frac{d\lambda_{+}}{d\gamma}t \big{(}1+\frac{\lambda_{+}+1-(1-r_{m})\gamma}{r_{an}}\big{)}\right\}_{\gamma=1}+\] \[\left\{\gamma c_{1}e^{\lambda_{+}t}\frac{1}{r_{an}}\big{(}\frac{ d\lambda_{+}}{d\gamma}-(1-r_{m})\big{)}\right\}_{\gamma=1}. \tag{14}\]
In the large time limit, we finally have
\[\langle i\rangle/t=\frac{d\lambda_{+}}{d\gamma}\mid_{\gamma=1}. \tag{15}\]
Using
\[\frac{d\lambda_{+}}{d\gamma}\bigg{|}_{\gamma=1}=\left[\frac{1-r_{m}}{2}+ \frac{1}{2}\frac{dA}{d\gamma}\right]_{\gamma=1}, \tag{16}\]
where
\[\frac{dA}{d\gamma}\bigg{|}_{\gamma=1}=\frac{r_{an}+r_{an}r_{m}-r_{m}+r_{m}^{2 }}{r_{an}+r_{m}}. \tag{17}\]
Substituting (17) into (16) and using (15) recovers the velocity \(v=r_{an}/(r_{an}+r_{m})\) stated in equation (7).
Figure (13) shows plots of velocity obtained mathematically and through simulations.
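The final step can also be checked symbolically; the short sympy sketch below (illustrative) differentiates \(\lambda_{+}\) and recovers the velocity of equation (7).

```python
# Symbolic check that d(lambda_+)/d(gamma) |_{gamma=1} = r_an / (r_an + r_m).
import sympy as sp

g, ran, rm = sp.symbols("gamma r_an r_m", positive=True)
B = ran + 1 - (1 - rm) * g
A = sp.sqrt(B**2 - 4 * ran * (1 - g))
lam_plus = (-B + A) / 2

v = sp.simplify(sp.diff(lam_plus, g).subs(g, 1))
print(v)  # expected: r_an/(r_an + r_m)
```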
## Appendix B Model 2
### \(m=3\)
A cargo that can bind at the most three kinesins can be in four possible states, namely, bound to one, two or three kinesins, or not bound to any kinesin. Possible configurations of two neighbouring sites can be of \(P\) type or \(Q\) type depending on whether the site in front of the cargo is occupied by a free kinesin or empty. For example, \(P_{n}(i,t)\ (n=1,\ 2,\ \mbox{or}\ 3)\) indicates the probability of a configuration where a cargo, bound to \(n\) kinesins, is present at the \(i\)-th site at time \(t\) while the \((i+1)\)-th site is empty. Similarly, \(Q_{n}(i,t)\ (n=1,\ 2,\ \mbox{or}\ 3)\) represents the probability of a configuration where a cargo, bound to \(n\) kinesins, is present at the \(i\)-th site at time \(t\) while the \((i+1)\)-th site is occupied by a free kinesin. The change in these probabilities with time is described by the equations
Figure 13: Velocity of the cargo as a function of \(r_{an}\) with \(r_{m}=1/2\).
\[\frac{d}{dt}P_{3}(i) =(1-r_{m})P_{3}(i-1)-P_{3}(i)+r_{an}Q_{2}(i)-\omega_{d}P_{3}(i)+ \ \omega_{a}P_{2}(i), \tag{10}\] \[\frac{d}{dt}P_{2}(i) =(1-r_{m})P_{2}(i-1)-P_{2}(i)+r_{an}Q_{1}(i)+\omega_{d}P_{3}(i)+ \omega_{a}P_{1}(i)-(\omega_{a}+\omega_{d})P_{2}(i),\] (11) \[\frac{d}{dt}P_{1}(i) =(1-r_{m})P_{1}(i-1)-P_{1}(i)+\omega_{d}P_{2}(i)-(\omega_{a}+ \omega_{d})P_{1}(i),\] (12) \[\frac{d}{dt}Q_{3}(i) =r_{m}P_{3}(i-1)+\omega_{a}Q_{2}(i)-\omega_{d}Q_{3}(i),\] (13) \[\frac{d}{dt}Q_{2}(i) =r_{m}P_{2}(i-1)+\omega_{a}Q_{1}(i)+\omega_{d}Q_{3}(i)-(\omega_{d }+\omega_{a})Q_{2}(i)-r_{\rm an}Q_{2}(i),\ \ {\rm and}\] (14) \[\frac{d}{dt}Q_{1}(i) =r_{m}P_{1}(i-1)-r_{an}Q_{1}(i)+\omega_{d}Q_{2}(i)-(\omega_{a}+ \omega_{d})Q_{1}(i). \tag{15}\]
Additionally, as in \(m=2\) case, we have
\[\frac{d}{dt}P_{0}(i)=\omega_{d}P_{1}(i)\ {\rm and}\ \frac{d}{dt}Q_{0}(i)= \omega_{d}Q_{1}(i). \tag{16}\]
Defining the generating functions as \(\tilde{P}_{n}(\gamma,t)=\sum_{i=-\infty}^{\infty}\gamma^{i}P_{n}(i,t)\) and \(\tilde{Q}_{n}(\gamma,t)=\sum_{i=-\infty}^{\infty}\gamma^{i}Q_{n}(i,t)\), we have
\[\frac{d}{dt}\mathbf{H}(\gamma,t)=\mathbf{SH}(\gamma,t), \tag{17}\]
where \(\mathbf{H}\) is a column matrix
\[\mathbf{H}(\gamma,t)=\left(\begin{array}{c}\tilde{P}_{3}(\gamma,t)\\ \tilde{P}_{2}(\gamma,t)\\ \tilde{P}_{1}(\gamma,t)\\ \tilde{Q}_{3}(\gamma,t)\\ \tilde{Q}_{2}(\gamma,t)\\ \tilde{Q}_{1}(\gamma,t)\end{array}\right), \tag{18}\]
and \(\mathbf{S}\) is a \(6\times 6\) matrix
\[\mathbf{S}=\left(\begin{array}{cccccc}(1-r_{m})\gamma-\omega_{d}-1&\omega_ {a}&0&0&r_{an}&0\\ \omega_{d}&(1-r_{m})\gamma-\omega_{a}-\omega_{d}-1&\omega_{a}&0&0&r_{\rm an}\\ 0&\omega_{d}&(1-r_{m})\gamma-\omega_{a}-\omega_{d}-1&0&0&0\\ r_{m}\gamma&0&0&-\omega_{d}&\omega_{a}&0\\ 0&r_{m}\gamma&0&\omega_{d}&-\Omega&\omega_{a}\\ 0&0&r_{m}\gamma&0&\omega_{d}&-\Omega\end{array}\right), \tag{19}\]
where \(\Omega=(\omega_{a}+\omega_{d}+r_{\rm an})\).
As in the case of \(m=2\), here again the variation in the run-length is governed by the quantity \(t_{\rm assoc}\frac{d\lambda_{l}}{d\gamma}\mid_{\gamma=1}\), where \(\lambda_{l}\) is the largest eigenvalue of the matrix \(\mathbf{S}\). Figure (14) shows the variation of \(t_{\rm assoc}\frac{d\lambda_{l}}{d\gamma}\mid_{\gamma=1}\) with \(r_{m}\).
## Appendix C Processive movement of free kinesins
Descriptions of parameters used in simulations are provided in table 1.
Figure (15) shows how the run-length varies with the crowding density in three cases: processive movement of free kinesins together with (i) binding of kinesins to the microtubule at rate \(\omega_{a,\rm kin}\), (ii) unbinding of kinesins from
the microtubule at rate \(\omega_{d,\rm kin}\), and (iii) no binding or unbinding of kinesins to or from the microtubule. |
2310.15553 | A general center manifold theorem on fields of Banach spaces | A general local center manifold theorem around stationary trajectories is
proved for nonlinear cocycles acting on measurable fields of Banach spaces. | Mazyar Ghani Varzaneh, Sebastian Riedel | 2023-10-24T06:45:31Z | http://arxiv.org/abs/2310.15553v3 | # A general center manifold theorem on fields of Banach spaces
###### Abstract.
A general local center manifold theorem around stationary trajectories is proved for nonlinear cocycles acting on measurable fields of Banach spaces.
Key words and phrases:center manifolds, Oseledets splitting, fields of Banach spaces, random dynamical systems 2010 Mathematics Subject Classification: 37H15, 37L55, 37B55, 37Lxx
## Introduction
The center manifold is an object that helps to describe the evolution of a dynamical system. A center manifold is invariant under the flow, and trajectories on it neither grow nor decay exponentially around a stationary point. The theory of center manifolds is an essential tool for analyzing the stability of systems and, more importantly, in bifurcation theory. Indeed, center manifolds and the idea of normal forms are two standard approaches for simplifying dynamical systems that can be used to reduce the dimension and to eliminate nonlinearities of the system [1].
In bifurcation theory, the main purpose is to study the changes in the qualitative or topological structure of a family of flows. The systematic procedure is to compute or approximate the center manifold and then to reduce the system to the center manifold by applying some near-identity transformation to simplify it [21, 17]. Due to the importance of this theory in applications, many results are devoted to the existence of center manifolds for a variety of equations and dynamics in both finite and infinite dimensions [14, 15, 16, 17].
Center manifold theory can also be expressed for random dynamical systems (RDS), i.e. systems that are also affected by a random signal. Typical examples that induce random dynamical systems are stochastic ordinary differential equations (SDEs) or stochastic partial differential equations (SPDEs) [1]. In the case of SDEs, center manifolds were studied e.g. in [1, 2, 18, 19, 20, 21]. For SPDEs, the flow takes values in an infinite dimensional space. In this case, random center manifolds are considered e.g. in [1, 1, 19, 20, 21].
The main goal of this work is to provide general conditions under which a given random dynamical system admits a local center manifold around a stationary point. Our main result, cf. Theorem 2.14, is formulated in a generality that allows us to apply it to SDEs, SPDEs, and even more general equations, provided that they generate a random dynamical system. In fact, in the literature, most of the works that deal with the existence of random invariant manifolds for S(P)DEs face two major challenges:
1. Prove that a given equation generates an RDS.
2. Prove that this particular RDS admits a center manifold.
While the methods to prove point 1 often differ significantly from each other (for example, one can use perfection theorems for crude cocycles [1], find random mappings that transform the
equation to a pathwise solvable random ODE / PDE [14] or apply a pathwise stochastic calculus like rough paths theory [15, 16]), the techniques to prove point 2 are often very similar. Therefore, we think that it will be useful to provide an abstract theorem that can be applied in great generality.
Let us discuss a few features of our abstract center manifold theorem and how it is related to other results in the literature.
* Since our goal is to apply our main result to SPDEs, it is necessary that Theorem 2.14 be formulated for RDS defined on infinite dimensional spaces. There are some works that prove center manifold theorems for RDS defined on Hilbert spaces, cf. [14, 15]. However, more modern pathwise solution theories for SPDEs (like rough paths theory [13]) define solutions that take values in a Banach, not a Hilbert space. In fact, Theorem 2.14 can be applied to cocycles defined on separable Banach spaces. With this generality, we can cover, for instance, the results obtained in [16].
* Some stochastic differential equations induce RDS that are defined on random spaces. This can be the case, for instance, when considering the linearization of an SDE evolving on a manifold [1]. Other examples are given by singular stochastic delay differential equations [11, 12]. In this case, the random spaces have a fiber-type structure. In the infinite dimensional case, one cannot expect that the random spaces are isomorphic, and the correct structures to consider here are _measurable fields of Banach spaces_. Therefore, we formulated Theorem 2.14 for RDS acting on a measurable field. To our knowledge, this is the first time that a center manifold theorem has been formulated for an RDS acting on a field of Banach spaces. However, we want to point out that every fixed Banach space forms a (trivial) field of Banach spaces, thus our theorem covers RDS acting on separable Banach spaces as a special case.
* Many works dealing with center manifolds concentrate on the case where the equation has a deterministic fixed point (see e.g. [14, 15, 16]). However, there are many equations that admit _random_ fixed points (or stationary points) only. This holds, for instance, in the case of additive noise (an easy example would be the Langevin equation). Therefore, Theorem 2.14 is formulated for random fixed points. One cornerstone of our result is the Multiplicative Ergodic Theorem (MET) that ensures the existence of Lyapunov exponents for a given compact linear cocycle acting on a field of Banach spaces. This theorem was proved in [11, 12].
## 1. Notations and background
In this section, we collect some notations and present background about the _Multiplicative Ergodic Theorem_ (MET). Let us first fix the notations.
* For a Banach space \((X,\|\cdot\|_{X})\), we will usually drop the subindex \(\|\cdot\|_{X}\) and use the symbol \(\|\cdot\|\) rather. If \(\psi\colon X\to Y\) is a linear map between Banach spaces, \(\|\psi\|\) denotes the operator norm, i.e. \[\|\psi\|=\sup_{x\in X\setminus\{0\}}\frac{\|\psi(x)\|}{\|x\|}.\]
* Let \((X,\|\cdot\|)\) be a Banach space and \(Y\subset X\) a closed linear subspace of \(X\). Assume \(x\in X\). Then by \(d(x,Y)\), we denote the usual distance function, i.e. \[d(x,Y):=\inf_{y\in Y}\|x-y\|.\]
* Let \(X\) be a Banach space and \(E,F\) two closed subspaces of \(X\) such that \(E\cap F=\{0\}\). The linear map \(\Pi_{E||F}\colon E\oplus F\to E\) defined by \((e,f)\to e\) is called _projection_. In this case, the operator norm is given by \[\|\Pi_{E||F}\|=\sup_{e\in E\setminus\{0\},f\in F\setminus\{0\}}\frac{\|e\|}{\|e+f\|}.\]
* Let \((\Omega,\mathcal{F})\) be a measurable space. We call a family of Banach spaces \(\{E_{\omega}\}_{\omega\in\Omega}\) a _measurable field of Banach spaces_ if there is a set of sections \[\Delta\subset\prod_{\omega\in\Omega}E_{\omega}\] with the following properties: 1. \(\Delta\) is a linear subspace of \(\prod_{\omega\in\Omega}E_{\omega}\). 2. There is a countable subset \(\Delta_{0}\subset\Delta\) such that for every \(\omega\in\Omega\), the set \(\{g(\omega)\,:\,g\in\Delta_{0}\}\) is dense in \(E_{\omega}\). 3. For every \(g\in\Delta\), the map \(\omega\mapsto\|g(\omega)\|_{E_{\omega}}\) is measurable. Most of the time, we will omit the subindex \(E_{\omega}\) and just write \(\|\cdot\|\) instead of \(\|\cdot\|_{E_{\omega}}\) when it is clear from the context which norm is meant (a simple example is given after this list).
* Let \((\Omega,\mathcal{F})\) be a measurable space. Assume \(\theta\colon\Omega\to\Omega\), \(\omega\mapsto\theta\omega\) is a measurable map with a measurable inverse \(\theta^{-1}\). In this case, we call \((\Omega,\mathcal{F},\theta)\) a _measurable dynamical system_. We will use \(\theta^{n}\omega\) for \(n\)-times applying \(\theta\) to an element \(\omega\in\Omega\). We also set \(\theta^{0}:=\mathrm{Id}_{\Omega}\) and \(\theta^{-n}:=(\theta^{n})^{-1}\). If \(\mathbb{P}\) is a probability measure on \((\Omega,\mathcal{F})\) that is invariant under \(\theta\), i.e. \(\mathbb{P}(\theta^{-1}A)=\mathbb{P}(A)=\mathbb{P}(\theta A)\) for every \(A\in\mathcal{F}\), we call the tuple \(\big{(}\Omega,\mathcal{F},\mathbb{P},\theta\big{)}\) a _measure-preserving dynamical system_. This system is called _ergodic_ if every \(\theta\)-invariant measurable set has probability \(0\) or \(1\).
* Assume \((\Omega,\mathcal{F},\mathbb{P},\theta)\) is a measure-preserving dynamical system and \((\{E_{\omega}\}_{\omega\in\Omega},\Delta)\) a measurable field of Banach spaces. A _continuous cocycle on \(\{E_{\omega}\}_{\omega\in\Omega}\)_ consists of a family of continuous maps (1.1) \[\varphi_{\omega}\colon E_{\omega}\to E_{\theta\omega}.\] If \(\varphi\) is a continuous cocycle, we set \(\varphi_{\omega}^{n}\colon E_{\omega}\to E_{\theta^{n}\omega}\) where \[\varphi_{\omega}^{n}:=\varphi_{\theta^{n-1}\omega}\circ\cdots\circ\varphi_{ \omega}.\] We also define \(\varphi_{\omega}^{0}:=\mathrm{Id}_{E_{\omega}}\). If the maps \[\omega\mapsto\|\varphi_{\omega}^{n}(g(\omega))\|_{E_{\theta^{n}\omega}},\quad n \in\mathbb{N}\] are measurable for every \(g\in\Delta\), we say that \(\varphi\) _acts on \(\{E_{\omega}\}_{\omega\in\Omega}\)_. In this case, we will speak of a _continuous random dynamical system on a field of Banach spaces_. If the map (1.1) is bounded linear/compact, we call \(\varphi\) a bounded linear/compact cocycle.
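To fix ideas, here are two standard examples, recalled only for illustration. First, every separable Banach space \(E\) induces a trivial measurable field by setting \(E_{\omega}=E\) for all \(\omega\in\Omega\) and taking for \(\Delta\) the constant sections \(g_{x}(\omega)\equiv x\), \(x\in E\); any countable dense subset \(D\subset E\) then yields \(\Delta_{0}=\{g_{x}\,:\,x\in D\}\). Second, let \(\Omega=(\mathbb{R}^{d\times d})^{\mathbb{Z}}\) carry an i.i.d. product measure, let \(\theta\) be the left shift, and set \(\varphi_{\omega}(x)\coloneqq A(\omega)x\) on the trivial field \(E_{\omega}=\mathbb{R}^{d}\), where \(A(\omega)\) is the matrix at position zero of the sequence \(\omega\). Then

\[\varphi_{\omega}^{n}=A(\theta^{n-1}\omega)\cdots A(\omega),\]

i.e. the cocycle describes products of random matrices, the classical setting of the Multiplicative Ergodic Theorem.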
Next, we define stationary points for cocycles acting on measurable fields.
**Definition 1.1**.: Let \(\{E_{\omega}\}_{\omega\in\Omega}\) be a measurable field of Banach spaces and \(\varphi_{\omega}^{n}\) a nonlinear cocycle acting on it. A map \(Y:\Omega\longrightarrow\prod_{\omega\in\Omega}E_{\omega}\) is called _stationary_ for \(\varphi_{\omega}^{n}\) provided
1. \(Y_{\omega}\in E_{\omega}\),
2. \(\varphi_{\omega}^{n}(Y_{\omega})=Y_{\theta^{n}\omega}\) and
3. \(\omega\to\|Y_{\omega}\|\) is measurable.
Note that a stationary point for a random dynamical system can be regarded as a natural generalization of a fixed point in (deterministic) dynamical systems. If \(\varphi_{\omega}^{n}\) is Fréchet differentiable, one can easily check that the derivative around a stationary solution also enjoys the cocycle property, i.e. for \(\psi_{\omega}^{n}(.)=D_{Y_{\omega}}\varphi_{\omega}^{n}(.)\), one has
\[\psi_{\omega}^{n+m}(.)=\psi_{\theta^{m}\omega}^{n}\big{(}\psi_{\omega}^{m}(.) \big{)}.\]
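Indeed, this is the chain rule combined with the cocycle property \(\varphi_{\omega}^{n+m}=\varphi_{\theta^{m}\omega}^{n}\circ\varphi_{\omega}^{m}\) and the stationarity of \(Y\):

\[\psi_{\omega}^{n+m}=D_{Y_{\omega}}\varphi_{\omega}^{n+m}=D_{\varphi_{\omega}^{m}(Y_{\omega})}\varphi_{\theta^{m}\omega}^{n}\circ D_{Y_{\omega}}\varphi_{\omega}^{m}=D_{Y_{\theta^{m}\omega}}\varphi_{\theta^{m}\omega}^{n}\circ\psi_{\omega}^{m}=\psi_{\theta^{m}\omega}^{n}\circ\psi_{\omega}^{m}.\]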
The key ingredient to prove the center manifold theorem is the following version of a _Multiplicative Ergodic Theorem_ which we call the semi-invertible Oseledets theorem on fields of Banach spaces. For a proof, cf. [13, Theorem 4.17] and [13, Theorem 1.21].
**Theorem 1.2**.: _Let \((\Omega,\mathcal{F},\mathbb{P},\theta)\) be an ergodic measure-preserving dynamical system. Assume that \(\psi\) is a compact linear cocycle acting on a measurable field of Banach spaces \(\{E_{\omega}\}_{\omega\in\Omega}\). For \(\mu\in\mathbb{R}\cup\{-\infty\}\) and \(\omega\in\Omega\), set_
\[F_{\mu}(\omega):=\big{\{}x\in E_{\omega}\,:\,\limsup_{n\to\infty}\frac{1}{n} \log\|\psi_{\omega}^{n}(x)\|\leqslant\mu\big{\}}.\]
_Assume that_
\[\log^{+}\|\psi_{\omega}\|\in L^{1}(\Omega)\]
_and that_
\[\omega\mapsto\|\psi_{\theta^{n}\omega}^{k}(\psi_{\omega}^{n}(g(\omega))-\tilde {g}(\theta^{n}\omega))\|_{E_{\theta^{n+k}\omega}}\]
_is measurable for every \(g,\tilde{g}\in\Delta\) and \(n,k\geq 0\)._
_Then there is a measurable \(\theta\)-invariant set \(\tilde{\Omega}\subset\Omega\) of full measure and a decreasing sequence \(\{\mu_{i}\}_{i\geq 1}\), \(\mu_{i}\in[-\infty,\infty)\) (Lyapunov exponents) with the properties that \(\lim_{n\to\infty}\mu_{n}=-\infty\) and either \(\mu_{i}>\mu_{i+1}\) or \(\mu_{i}=\mu_{i+1}=-\infty\) such that for every \(\omega\in\tilde{\Omega}\), \(F_{\mu_{1}}(\omega)=E_{\omega}\) and_
\[x\in F_{\mu_{i}}(\omega)\setminus F_{\mu_{i+1}}(\omega)\quad\text{if and only if}\quad\lim_{n\to\infty}\frac{1}{n}\log\|\psi_{\omega}^{n}(x)\|=\mu_{i}. \tag{1.2}\]
_Moreover, there are numbers \(m_{1},m_{2},\ldots\) such that \(\operatorname{codim}F_{\mu_{j}}(\omega)=m_{1}+\ldots+m_{j-1}\) for every \(\omega\in\tilde{\Omega}\). Furthermore, for every \(i\geq 1\) with \(\mu_{i}>\mu_{i+1}\) and \(\omega\in\tilde{\Omega}\), there is an \(m_{i}\)-dimensional subspace \(H_{\omega}^{i}\) with the following properties:_
1. _(Invariance)_ \(\psi_{\omega}^{k}(H_{\omega}^{i})=H_{\theta^{k}\omega}^{i}\) _for every_ \(k\geq 0\)_._
2. _(Splitting)_ \(H_{\omega}^{i}\oplus F_{\mu_{i+1}}(\omega)=F_{\mu_{i}}(\omega)\)_. In particular,_ \[E_{\omega}=H_{\omega}^{1}\oplus\cdots\oplus H_{\omega}^{i}\oplus F_{\mu_{i+1} }(\omega).\]
3. _('Fast-growing' subspace)_ _For each_ \(h_{\omega}\in H_{\omega}^{i}\setminus\{0\}\)_,_ \[\lim_{n\to\infty}\frac{1}{n}\log\|\psi_{\omega}^{n}(h_{\omega})\|=\mu_{i}\] _and_ \[\lim_{n\to\infty}\frac{1}{n}\log\|(\psi_{\theta^{-n}\omega}^{n})^{-1}(h_{\omega })\|=-\mu_{i}.\]
4. _('Angle vanishing') Let_ \(\tilde{H}^{i}_{\omega}\) _be a subspace of_ \(H^{i}_{\omega}\) _and_ \(h_{\omega}\in H^{i}_{\omega}\setminus\tilde{H}^{i}_{\omega}\)_. Then_ \[\lim_{n\to\infty}\frac{1}{n}\log d(\psi^{n}_{\omega}(h_{\omega}),\psi^{n}_{\omega}(\tilde{H}^{i}_{\omega}))=\mu_{i}\] _and_ \[\lim_{n\to\infty}\frac{1}{n}\log d\big{(}(\psi^{n}_{\theta^{-n}\omega})^{-1}(h_{\omega}),(\psi^{n}_{\theta^{-n}\omega})^{-1}(\tilde{H}^{i}_{\omega})\big{)}=-\mu_{i}.\]
_Remark 1.3_.:
* The angle vanishing property in item (iv) appears in [12, Lemma 1.20].
* Note that our cocycle is not necessarily injective. However, for every \(i\geq 1\) and \(\omega\in\tilde{\Omega}\), due to the invariance property (item (i)), we can define the inverse cocycle on every \(H^{i}_{\omega}\).
## 2. Main part
We aim to prove a general center manifold theorem for a nonlinear cocycle acting on a measurable field of Banach spaces. For the rest of the paper, we will assume that \(\big{(}\Omega,\mathcal{F},\mathbb{P},\theta\big{)}\) is a measure-preserving dynamical system such that \(\theta\) is ergodic and invertible. We also assume that \(\{E_{\omega}\}_{\omega\in\Omega}\) is a measurable field of Banach spaces and \(\varphi^{n}_{\omega}\) is a nonlinear cocycle acting on it. We will assume
**Assumption 2.1**.: \(\varphi^{n}_{\omega}\) _is Fréchet differentiable and \(Y\colon\Omega\longrightarrow\prod_{\omega\in\Omega}E_{\omega}\) is a stationary point for it such that:_
* _For_ \(\psi^{n}_{\omega}(.)\coloneqq D_{Y_{\omega}}\varphi^{n}_{\omega}(.)\)_,_ \[\log^{+}\|\psi_{\omega}\|\in L^{1}(\Omega)\] _where_ \(\log^{+}x\coloneqq\max\{0,\log x\}\)_._
* _We assume that for_ (2.1) \[P_{\omega}\colon E_{\omega} \to E_{\theta\omega}\] \[\xi_{\omega} \mapsto\varphi^{1}_{\omega}(Y_{\omega}+\xi_{\omega})-\varphi^{1} _{\omega}(Y_{\omega})-\psi^{1}_{\omega}(\xi_{\omega}),\] _there exists a random variable_ \(R\colon\Omega\to[0,\infty)\) _with the property that_ \[\liminf_{n\to\infty}\frac{1}{n}\log R(\theta^{n}\omega)\geq 0\] _almost surely and that for_ \(\|\xi_{\omega}\|,\|\tilde{\xi}_{\omega}\|<R(\omega)\)_, one has_ (2.2) \[\|P_{\omega}(\xi_{\omega})-P_{\omega}(\tilde{\xi}_{\omega})\|\leq\|\xi_{ \omega}-\tilde{\xi}_{\omega}\|f(\theta\omega)h(\|\xi_{\omega}\|+\|\tilde{\xi}_ {\omega}\|)\] _almost surely where_ \(f\colon\Omega\to\mathbb{R}^{+}\) _is a measurable function with the property that_ \[\lim_{n\to\infty}\frac{1}{n}\log^{+}f(\theta^{n}\omega)=0\] _almost surely. Furthermore,_ \(h\) _is assumed to be a nonnegative and increasing function such that_ \(h(0)=0\) _and for some_ \(r>0\)_,_ \(\limsup_{x\to 0}\frac{h(x)}{|x|^{r}}<\infty\)_._
Note that these assumptions enable us to apply Theorem 1.2 to the linearized cocycle \(\psi^{n}_{\omega}(.)\) around the stationary point. Let us assume \(\mu_{1}>0\) and that we have a zero Lyapunov exponent. Then for \(\mu^{-}\coloneqq\max\{\mu_{i}\colon\mu_{i}<0\}\), we define
\[S_{\omega}\coloneqq F_{\mu^{-}}(\omega),\quad U_{\omega}\coloneqq\bigoplus_{i :\mu_{i}>0}H^{i}_{\omega},\quad\text{and}\ \ C_{\omega}\coloneqq H^{i_{c}}_{\omega}, \tag{2.3}\]
where \(\mu_{i_{c}}=0\). We also set \(\mu^{+}\coloneqq\min\{\mu_{i}:\mu_{i}>0\}\). Note that from [13, Lemma 1.18], for every \(\omega\in\tilde{\Omega}\):
\[\lim_{n\to\pm\infty}\frac{1}{n}\log\|\Pi_{C_{\theta^{n}\omega}\|S_{\theta^{n}\omega}\oplus U_{\theta^{n}\omega}}\|=\lim_{n\to\pm\infty}\frac{1}{n}\log\|\Pi_{U_{\theta^{n}\omega}\|S_{\theta^{n}\omega}\oplus C_{\theta^{n}\omega}}\|=\lim_{n\to\pm\infty}\frac{1}{n}\log\|\Pi_{S_{\theta^{n}\omega}\|U_{\theta^{n}\omega}\oplus C_{\theta^{n}\omega}}\|=0. \tag{2.4}\]
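To fix ideas, consider a toy deterministic example (ours, added for concreteness): for the constant cocycle \(\psi_{\omega}=A=\operatorname{diag}(2,1,\tfrac{1}{2})\) acting on \(E_{\omega}=\mathbb{R}^{3}\), the Lyapunov exponents are \(\mu_{1}=\log 2\), \(\mu_{2}=0\) and \(\mu_{3}=-\log 2\), and the splitting (2.3) reads \(U_{\omega}=\mathbb{R}e_{1}\), \(C_{\omega}=\mathbb{R}e_{2}\) and \(S_{\omega}=\mathbb{R}e_{3}\) for every \(\omega\), with \(\mu^{+}=\log 2\) and \(\mu^{-}=-\log 2\).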
**Definition 2.2**.: Let \(\delta:\mathbb{R}\to[0,1]\) be a smooth function such that \(\operatorname{supp}(\delta)\subset[-2,2]\), \(\delta|_{[-1,1]}=1\) and \(\sup_{x\in\mathbb{R}}|\delta^{\prime}(x)|\leq 2\). Also, assume \(\rho\) to be a positive random variable. Then we set
\[P_{\omega,\rho}(\xi_{\omega})\coloneqq\delta(\frac{\|\xi_{\omega}\|}{\rho( \theta\omega)})P_{\omega}(\xi_{\omega}).\]
Note that, from (2.2) and our assumptions,
\[\|P_{\omega,\rho}(\xi_{\omega})-P_{\omega,\rho}(\tilde{\xi}_{\omega})\|\leq 5 \|\xi_{\omega}-\tilde{\xi}_{\omega}\|f(\theta\omega)h\big{(}4\rho(\theta \omega)\big{)}. \tag{2.5}\]
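For the reader's convenience, let us sketch where the constant \(5\) comes from (this computation is ours; it implicitly assumes that \(4\rho(\theta\omega)\) is small enough for (2.2) to apply). By symmetry we may assume \(\|\tilde{\xi}_{\omega}\|\leq\|\xi_{\omega}\|\); if \(\|\tilde{\xi}_{\omega}\|>2\rho(\theta\omega)\), both cutoffs vanish and there is nothing to prove. Otherwise, writing \(\delta_{\xi}\coloneqq\delta(\|\xi_{\omega}\|/\rho(\theta\omega))\),

\[P_{\omega,\rho}(\xi_{\omega})-P_{\omega,\rho}(\tilde{\xi}_{\omega})=\delta_{\xi}\big{(}P_{\omega}(\xi_{\omega})-P_{\omega}(\tilde{\xi}_{\omega})\big{)}+(\delta_{\xi}-\delta_{\tilde{\xi}})P_{\omega}(\tilde{\xi}_{\omega}).\]

The first term is bounded by \(\|\xi_{\omega}-\tilde{\xi}_{\omega}\|f(\theta\omega)h(4\rho(\theta\omega))\) by (2.2) and monotonicity of \(h\) (it vanishes unless \(\|\xi_{\omega}\|\leq 2\rho(\theta\omega)\)). For the second term, \(|\delta_{\xi}-\delta_{\tilde{\xi}}|\leq 2\|\xi_{\omega}-\tilde{\xi}_{\omega}\|/\rho(\theta\omega)\) and, since \(P_{\omega}(0)=0\), \(\|P_{\omega}(\tilde{\xi}_{\omega})\|\leq\|\tilde{\xi}_{\omega}\|f(\theta\omega)h(\|\tilde{\xi}_{\omega}\|)\leq 2\rho(\theta\omega)f(\theta\omega)h(4\rho(\theta\omega))\), which contributes the remaining factor \(4\).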
Before proving the main result, we need some auxiliary lemmas.
**Lemma 2.3**.: _Let \(\omega\in\tilde{\Omega}\) and for \(p\leq q\), \(H^{p,q}_{\omega}\coloneqq\bigoplus_{p\leq i\leq q}H^{i}_{\omega}=\langle\xi^{ t}_{\omega}\rangle_{1\leq t\leq k}\). Then for \(m,n\in\mathbb{Z}\), we have_
\[\|\psi^{n}_{\theta^{m}\omega}|_{H^{p,q}_{\omega}}\|\leq\sum_{1\leq t\leq k} \frac{\|\psi^{m+n}_{\omega}(\xi^{t}_{\omega})\|}{d\big{(}\psi^{m}_{\omega}( \xi^{t}_{\omega}),\langle\psi^{m}_{\omega}(\xi^{t^{\prime}}_{\omega})\rangle_ {1\leq t^{\prime}\leq k,t^{\prime}\neq t}\big{)}}. \tag{2.6}\]
Proof.: Recall that for \(n<0\), \(\psi^{n}_{\omega}=[\psi^{-n}_{\theta^{n}\omega}]^{-1}\), defined on the fast-growing subspaces (cf. Remark 1.3). We need to consider several cases by distinguishing the sign of \(m,n,m+n\). Here we take the case \(n<0,m+n>0\). The other cases can be proved similarly and are even easier. Assume \(\xi=\sum_{1\leq t\leq k}c_{t}\frac{\psi^{m}_{\omega}(\xi_{t})}{\|\psi^{m}_{\omega}(\xi_{t})\|}\). Clearly, we have
\[\frac{\|\xi\|\|\psi^{m}_{\omega}(\xi_{t})\|}{|c_{t}|}\geq d\big{(}\psi^{m}_{ \omega}(\xi^{t}_{\omega}),\langle\psi^{m}_{\omega}(\xi^{t^{\prime}}_{\omega}) \rangle_{1\leq t^{\prime}\leq k,t^{\prime}\neq t}\big{)}. \tag{2.7}\]
For \(1\leq t\leq k\), by definition,
\[\begin{split}\psi^{n}_{\theta^{m}\omega}\big{(}\psi^{m}_{\omega}(\xi_{t})\big{)}&=[\psi^{-n}_{\theta^{m+n}\omega}]^{-1}\big{(}\psi^{m}_{\omega}(\xi_{t})\big{)}\\ &=[\psi^{-n}_{\theta^{m+n}\omega}]^{-1}\big{(}\psi^{-n}_{\theta^{m+n}\omega}(\psi^{m+n}_{\omega}(\xi_{t}))\big{)}=\psi^{m+n}_{\omega}(\xi_{t}).\end{split} \tag{2.8}\]
Accordingly from (2.7) and (2.8)
\[\begin{split}\|\psi^{n}_{\theta^{m}\omega}(\xi)\|& \leq\sum_{1\leq t\leq k}|c_{t}|\frac{\|\psi^{m+n}_{\omega}(\xi_{t}) \|}{\|\psi^{m}_{\omega}(\xi_{t})\|}\\ &\leq\sum_{1\leq t\leq k}\frac{\|\psi^{m+n}_{\omega}(\xi^{t}_{ \omega})\|}{d\big{(}\psi^{m}_{\omega}(\xi^{t}_{\omega}),\langle\psi^{m}_{ \omega}(\xi^{t^{\prime}}_{\omega})\rangle_{1\leq t^{\prime}\leq k,t^{\prime} \neq t}\big{)}}\|\xi\|,\end{split} \tag{2.9}\]
which proves our claim.
**Lemma 2.4**.: _For \(\omega\in\tilde{\Omega}\) and \(\epsilon>0\), set_
\[F^{S}_{\epsilon}(\omega):=\sup_{n\geqslant 0}\|\psi^{n}_{\omega}|_{S_{\omega}} \|\exp(-n(\mu^{-}+\epsilon)),\quad F^{U}_{\epsilon}(\omega):=\sup_{n\leq 0}\| \psi^{n}_{\omega}|_{U_{\omega}}\|\exp(n(-\mu^{+}+\epsilon)).\]
_Then_
\[\lim_{m\to\infty}\frac{1}{m}\log^{+}[F^{S}_{\epsilon}(\theta^{m}\omega)]=\lim_{m\to-\infty}\frac{1}{m}\log^{+}[F^{U}_{\epsilon}(\theta^{m}\omega)]=0.\]
_Similarly for_
\[F_{\epsilon}^{C,1}(\omega):=\sup_{n\geqslant 0}\|\psi_{\omega}^{n}|_{C_{\omega}}\| \exp(-n\epsilon),\quad F_{\epsilon}^{C,-1}(\omega):=\sup_{n\leq 0}\|\psi_{ \omega}^{n}|_{C_{\omega}}\|\exp(n\epsilon),\]
_we have_
\[\lim_{m\to\infty}\frac{1}{m}\log^{+}[F_{\epsilon}^{C,1}(\theta^{m}\omega)]=\lim_{m\to-\infty}\frac{1}{m}\log^{+}[F_{\epsilon}^{C,-1}(\theta^{m}\omega)]=0.\]
Proof.: For \(F_{\varepsilon}^{S}\), the statement is shown in [1, Lemma 2.3]. We only prove the claim for \(F_{\varepsilon}^{C,-1}\), the remaining assertions can be shown in the same way. Let \(C_{\omega}\coloneqq\langle\xi_{\omega}^{t}\rangle_{1\leq t\leq k}\). Applying (2.9) yields
\[F_{\epsilon}^{C,-1}(\theta^{m}\omega)=\sup_{n\leq 0}\|\psi_{\theta^{m}\omega}^{n}|_{C_{\theta^{m}\omega}}\|\exp(n\epsilon)\leq\sup_{n\leq 0}\big{[}\sum_{1\leq t\leq k}\frac{\|\psi_{\omega}^{m+n}(\xi_{\omega}^{t})\|}{d\big{(}\psi_{\omega}^{m}(\xi_{\omega}^{t}),\langle\psi_{\omega}^{m}(\xi_{\omega}^{t^{\prime}})\rangle_{1\leq t^{\prime}\leq k,t^{\prime}\neq t}\big{)}}\exp(n\epsilon)\big{]}.\]
Using (iv) in Theorem 1.2 proves the claim.
**Proposition 2.5**.: _For \(\nu>0\), let \(\Sigma_{\omega}\coloneqq\prod_{j\in\mathbb{Z}}E_{\theta^{j}\omega}\) and_
\[\Sigma_{\omega}^{\nu}\coloneqq\bigg{\{}\Gamma\in\Sigma_{\omega}:\ \|\Gamma\|=\sup_{j\in\mathbb{Z}}\big{[}\|\Pi_{\omega}^{j}[\Gamma]\|\exp(-\nu|j|)\big{]}<\infty\bigg{\}},\] _where \(\Pi_{\omega}^{j}\colon\Sigma_{\omega}\to E_{\theta^{j}\omega}\) denotes the canonical projection onto the \(j\)-th component._
_Assume that \(0<\nu<\min\{\mu^{+},-\mu^{-}\}\). Then there is a random variable \(\rho\colon\Omega\to(0,\infty)\) such that the following map is well defined:_
\[I_{\omega}\colon C_{\omega}\times\Sigma_{\omega}^{\nu}\cap B(0,R(\omega))\to\Sigma_{\omega}^{\nu},\] \[\Pi_{\omega}^{n}\big{[}I_{\omega}(v_{\omega},\Gamma)\big{]}=\begin{cases}\psi_{\omega}^{n}(v_{\omega})+C(n,\omega,\Gamma)\\ \quad-\sum_{j\geqslant n+1}\big{[}\psi_{\theta^{j}\omega}^{n-j}\circ\Pi_{U_{\theta^{j}\omega}\|C_{\theta^{j}\omega}\oplus S_{\theta^{j}\omega}}\big{]}P_{\theta^{j-1}\omega,\rho}\big{(}\Pi_{\omega}^{j-1}[\Gamma]\big{)}\\ \quad+\sum_{j\leq n}[\psi_{\theta^{j}\omega}^{n-j}\circ\Pi_{S_{\theta^{j}\omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}]P_{\theta^{j-1}\omega,\rho}(\Pi_{\omega}^{j-1}[\Gamma])\quad\text{ for }n\neq 0,\\ v_{\omega}-\sum_{j\geqslant 1}\big{[}\psi_{\theta^{j}\omega}^{-j}\Pi_{U_{\theta^{j}\omega}\|C_{\theta^{j}\omega}\oplus S_{\theta^{j}\omega}}\big{]}P_{\theta^{j-1}\omega,\rho}(\Pi_{\omega}^{j-1}[\Gamma])\\ \quad+\sum_{j\leq 0}[\psi_{\theta^{j}\omega}^{-j}\circ\Pi_{S_{\theta^{j}\omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}]P_{\theta^{j-1}\omega,\rho}(\Pi_{\omega}^{j-1}[\Gamma])\quad\quad\text{ for }n=0,\end{cases}\]
_where_
\[C(n,\omega,\Gamma)=\begin{cases}\sum_{1\leq j\leq n}\big{[}\psi_{\theta^{j} \omega}^{n-j}\circ\Pi_{C_{\theta^{j}\omega}\|S_{\theta^{j}\omega}\oplus U_{ \theta^{j}\omega}}\big{]}P_{\theta^{j-1}\omega,\rho}\big{(}\Pi_{\omega}^{j-1}[ \Gamma]\big{)}&\text{ for }n>0,\\ -\sum_{n\leq j\leq-1}\big{[}\psi_{\theta^{j}\omega}^{n-j}\circ\Pi_{C_{\theta^{j} \omega}\|S_{\theta^{j}\omega}\oplus U_{\theta^{j}\omega}}\big{]}P_{\theta^{j-1 }\omega,\rho}\big{(}\Pi_{\omega}^{j-1}[\Gamma]\big{)}&\text{ for }n<0,\end{cases}\]
_In addition, for every \(v_{\omega}\in C_{\omega}\), the equation \(I_{\omega}(v_{\omega},\Gamma)=\Gamma\) admits a unique solution._
Proof.: By definition
\[\big{\|}\Pi_{\omega}^{n}[I_{\omega}(v_{\omega},\Gamma)]\big{\|}\leq\|\psi_{\omega}^{n}|_{C_{\omega}}\|\ \|v_{\omega}\|\] \[\qquad+\sum_{\begin{subarray}{c}n\leq j\leq-1\ \text{or}\\ 1\leq j\leq n\end{subarray}}\|\psi_{\theta^{j}\omega}^{n-j}|_{C_{\theta^{j}\omega}}\|\ \|\Pi_{C_{\theta^{j}\omega}\|S_{\theta^{j}\omega}\oplus U_{\theta^{j}\omega}}\|\ \|\Pi_{\omega}^{j-1}[\Gamma]\|f(\theta^{j}\omega)h\big{(}\|\Pi_{\omega}^{j-1}[\Gamma]\|\big{)}\delta\bigg{(}\frac{\|\Pi_{\omega}^{j-1}[\Gamma]\|}{\rho(\theta^{j}\omega)}\bigg{)}\] \[\qquad+\sum_{j\geq n+1}\|\psi_{\theta^{j}\omega}^{n-j}|_{U_{\theta^{j}\omega}}\|\ \|\Pi_{U_{\theta^{j}\omega}\|C_{\theta^{j}\omega}\oplus S_{\theta^{j}\omega}}\|\ \|\Pi_{\omega}^{j-1}[\Gamma]\|f(\theta^{j}\omega)h\big{(}\|\Pi_{\omega}^{j-1}[\Gamma]\|\big{)}\delta\bigg{(}\frac{\|\Pi_{\omega}^{j-1}[\Gamma]\|}{\rho(\theta^{j}\omega)}\bigg{)}\] \[\qquad+\sum_{j\leq n}\|\psi_{\theta^{j}\omega}^{n-j}|_{S_{\theta^{j}\omega}}\|\ \|\Pi_{S_{\theta^{j}\omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}\|\ \|\Pi_{\omega}^{j-1}[\Gamma]\|f(\theta^{j}\omega)h\big{(}\|\Pi_{\omega}^{j-1}[\Gamma]\|\big{)}\delta\bigg{(}\frac{\|\Pi_{\omega}^{j-1}[\Gamma]\|}{\rho(\theta^{j}\omega)}\bigg{)}.\]
Choose \(0<\varepsilon<\nu\). Remember that \(\|\Pi_{\omega}^{j}[\Gamma]\|\leq\exp(\nu|j|)\|\Gamma\|\). Consequently,
\[\exp(-\nu|n|)\big{|}\Pi_{\omega}^{n}[I_{\omega}(v_{\omega},\Gamma) ]\big{|}\leq\exp((\epsilon-\nu)|n|)F_{\epsilon}^{C,sgn(n)}(\omega)\|v_{\omega}\|\] \[\qquad+\|\Gamma\|\sum_{\begin{subarray}{c}n\leq j\leq-1\ \text{or}\\ 1\leq j\leq n\end{subarray}}\exp((\epsilon-\nu)|n-j|+\nu)F_{\epsilon}^{C,sgn(n )}(\theta^{j}\omega)\|\Pi_{C_{\theta^{j}\omega}\|S_{\theta^{j}\omega}\oplus U_ {\theta^{j}\omega}}\|f(\theta^{j}\omega)h\big{(}2\rho(\theta^{j}\omega)\big{)}\] \[\qquad+\|\Gamma\|\sum_{j\geq n+1}\exp((j-n)(-\mu^{+}+\epsilon+ \nu)+\nu)F_{\epsilon}^{U}(\theta^{j}\omega)\|\Pi_{U_{\theta^{j}\omega}\|C_{ \theta^{j}\omega}\oplus S_{\theta^{j}\omega}}\|f(\theta^{j}\omega)h(2\rho( \theta^{j}\omega))\] \[\qquad+\|\Gamma\|\sum_{j\leq n}\exp((n-j)(\mu^{-}+\epsilon+\nu)+ \nu)F_{\epsilon}^{S}(\theta^{j}\omega)\|\|\Pi_{S_{\theta^{j}\omega}\|U_{ \theta^{j}\omega}\oplus C_{\theta^{j}\omega}}\|f(\theta^{j}\omega)h\big{(}2 \rho(\theta^{j}\omega)\big{)}.\]
Similarly, by (2.5),
\[\exp(-\nu|n|)\big{\|}\Pi_{\omega}^{n}[I_{\omega}(v_{\omega},\Gamma )]-\Pi_{\omega}^{n}[I_{\omega}(v_{\omega},\tilde{\Gamma})]\big{\|}\] \[\leq\|\Gamma-\tilde{\Gamma}\|\sum_{\begin{subarray}{c}n\leq j \leq-1\ \text{or}\\ 1\leq j\leq n\end{subarray}}5\exp((\epsilon-\nu)|n-j|+\nu)F_{\epsilon}^{C,sgn(n )}(\theta^{j}\omega)\|\Pi_{C_{\theta^{j}\omega}\|S_{\theta^{j}\omega}\oplus U_ {\theta^{j}\omega}}\|f(\theta^{j}\omega)h(4\rho(\theta^{j}\omega))\] \[\qquad+\|\Gamma-\tilde{\Gamma}\|\sum_{j\geq n+1}5\exp((j-n)(-\mu^ {+}+\epsilon+\nu)+\nu)F_{\epsilon}^{U}(\theta^{j}\omega)\|\Pi_{U_{\theta^{j} \omega}\|C_{\theta^{j}\omega}\oplus S_{\theta^{j}\omega}}\|f(\theta^{j}\omega) h(4\rho(\theta^{j}\omega))\] \[\qquad+\|\Gamma-\tilde{\Gamma}\|\sum_{j\leq n}5\exp((n-j)(\mu^ {-}+\epsilon+\nu)+\nu)F_{\epsilon}^{S}(\theta^{j}\omega)\|\|\Pi_{S_{\theta^{j} \omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}\|f(\theta^{j}\omega)h( 4\rho(\theta^{j}\omega)).\]
For some given \(M_{\epsilon}>0\), let
\[T(\omega)\coloneqq\frac{M_{\epsilon}}{f(\omega)}\min\biggl{\{}\frac{1}{F_{\epsilon}^{C,1}(\omega)\|\Pi_{C_{\omega}\|S_{\omega}\oplus U_{\omega}}\|},\frac{1}{F_{\epsilon}^{C,-1}(\omega)\|\Pi_{C_{\omega}\|S_{\omega}\oplus U_{\omega}}\|}, \tag{2.10}\] \[\frac{1}{F_{\epsilon}^{U}(\omega)\|\Pi_{U_{\omega}\|C_{\omega}\oplus S_{\omega}}\|},\frac{1}{F_{\epsilon}^{S}(\omega)\|\Pi_{S_{\omega}\|U_{\omega}\oplus C_{\omega}}\|}\biggr{\}} \tag{2.11}\]
and
\[\rho(\omega)\coloneqq\frac{1}{4}h^{-1}(T(\omega)). \tag{2.12}\]
Choosing \(\varepsilon>0\) sufficiently small, the inequalities above yield
\[\|I_{\omega}(v_{\omega},\Gamma)\|\leq\max\{F_{\epsilon}^{C,-1}( \omega),F_{\epsilon}^{C,1}(\omega)\}\|v_{\omega}\|\] \[\quad+\|\Gamma\|M_{\epsilon}\exp(\nu)\left(\frac{1}{1-\exp( \epsilon-\nu)}+\frac{\exp(-\mu^{+}+\epsilon+\nu)}{1-\exp(-\mu^{+}+\epsilon+ \nu)}+\frac{1}{1-\exp(\mu^{-}+\epsilon+\nu)}\right)\]
and
\[\|I_{\omega}(v_{\omega},\Gamma)-I_{\omega}(v_{\omega},\tilde{\Gamma})\|\] \[\leq\|\Gamma-\tilde{\Gamma}\|5M_{\epsilon}\exp(\nu)\left(\frac{1}{1-\exp(\epsilon-\nu)}+\frac{\exp(-\mu^{+}+\epsilon+\nu)}{1-\exp(-\mu^{+}+\epsilon+\nu)}+\frac{1}{1-\exp(\mu^{-}+\epsilon+\nu)}\right).\]
Set
\[L_{\epsilon}\coloneqq M_{\epsilon}\exp(\nu)\left(\frac{1}{1-\exp( \epsilon-\nu)}+\frac{\exp(-\mu^{+}+\epsilon+\nu)}{1-\exp(-\mu^{+}+\epsilon+ \nu)}+\frac{1}{1-\exp(\mu^{-}+\epsilon+\nu)}\right).\]
Choosing \(M_{\varepsilon}>0\) sufficiently small, we can ensure that \(5L_{\epsilon}<1\). With this choice, the map \(I_{\omega}\) is indeed well defined and \(I_{\omega}(v_{\omega},\cdot)\) is a contraction. Therefore, for every \(v_{\omega}\in C_{\omega}\), the equation \(I_{\omega}(v_{\omega},\Gamma_{v_{\omega}})=\Gamma_{v_{\omega}}\) has a unique solution with the property that
\[\|\Gamma_{v_{\omega}}\|\leq\frac{\max\{F_{\epsilon}^{C,-1}(\omega),F_{ \epsilon}^{C,1}(\omega)\}\|v_{\omega}\|}{1-L_{\epsilon}}.\]
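Note that this last estimate follows by evaluating the first bound above at the fixed point, \(\|\Gamma_{v_{\omega}}\|=\|I_{\omega}(v_{\omega},\Gamma_{v_{\omega}})\|\leq\max\{F_{\epsilon}^{C,-1}(\omega),F_{\epsilon}^{C,1}(\omega)\}\|v_{\omega}\|+L_{\epsilon}\|\Gamma_{v_{\omega}}\|\), and rearranging.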
We will need some further properties of the fixed points \(\Gamma_{v_{\omega}}=I_{\omega}(v_{\omega},\Gamma_{v_{\omega}})\) which we will state now.
**Lemma 2.6**.: _Let \(I_{\omega}\) be as in Proposition 2.5. Then_
\[\|\Gamma-\tilde{\Gamma}\|\leq 2\max\{F_{\nu}^{C,1}(\omega),F_{\nu}^{C,-1}( \omega)\}\|v_{\omega}-\tilde{v}_{\omega}\|\]
_for every \(v_{\omega},\tilde{v}_{\omega}\in C_{\omega}\) where \(\Gamma=\Gamma_{v_{\omega}}\) and \(\tilde{\Gamma}=\Gamma_{\tilde{v}_{\omega}}\) denote the corresponding fixed points._
Proof.: From the triangle inequality,
\[\|I_{\omega}(v_{\omega},\Gamma)-I_{\omega}(\tilde{v}_{\omega}, \tilde{\Gamma})\|\leq\|I_{\omega}(v_{\omega},\Gamma)-I_{\omega}(\tilde{v}_{ \omega},\Gamma)\|+\|I_{\omega}(\tilde{v}_{\omega},\Gamma)-I_{\omega}(\tilde{v} _{\omega},\tilde{\Gamma})\|.\]
As we have seen in the proof of Proposition 2.5,
\[\|I_{\omega}(\tilde{v}_{\omega},\Gamma)-I_{\omega}(\tilde{v}_{ \omega},\tilde{\Gamma})\|\leq 5L_{\varepsilon}\|\Gamma-\tilde{\Gamma}\|=5L_{ \varepsilon}\|I_{\omega}(v_{\omega},\Gamma)-I_{\omega}(\tilde{v}_{\omega}, \tilde{\Gamma})\|,\]
thus
\[\|I_{\omega}(v_{\omega},\Gamma)-I_{\omega}(\tilde{v}_{\omega}, \tilde{\Gamma})\|\leq\frac{1}{1-5L_{\varepsilon}}\|I_{\omega}(v_{\omega}, \Gamma)-I_{\omega}(\tilde{v}_{\omega},\Gamma)\|.\]
By definition of the map \(I_{\omega}\),
\[\|I_{\omega}(v_{\omega},\Gamma)-I_{\omega}(\tilde{v}_{\omega}, \Gamma)\|\leq\max\{F_{\nu}^{C,1}(\omega),F_{\nu}^{C,-1}(\omega)\}\|v_{\omega}- \tilde{v}_{\omega}\|.\]
Choosing \(5L_{\varepsilon}\leq\frac{1}{2}\) in the proof of Proposition 2.5 yields the claim.
**Lemma 2.7**.: _With the same notation as in Lemma 2.6,_
\[\|\Pi_{\omega}^{0}(\Gamma)-\Pi_{\omega}^{0}(\tilde{\Gamma})\|\geq\|v_{\omega}- \tilde{v}_{\omega}\|-\frac{1}{4\max\{F_{\nu}^{C,1}(\omega),F_{\nu}^{C,-1}( \omega)\}}\|\Gamma-\tilde{\Gamma}\|.\]
_In particular,_
\[\|v_{\omega}-\tilde{v}_{\omega}\|\leq 2\|\Pi_{\omega}^{0}(\Gamma)-\Pi_{\omega}^{0 }(\tilde{\Gamma})\|.\]
Proof.: By definition,
\[\begin{split}\Pi^{0}_{\omega}(\Gamma)-\Pi^{0}_{\omega}(\tilde{\Gamma})&=v_{\omega}-\tilde{v}_{\omega}-\sum_{j\geqslant 1}\big{[}\psi^{-j}_{\theta^{j}\omega}\Pi_{U_{\theta^{j}\omega}\|C_{\theta^{j}\omega}\oplus S_{\theta^{j}\omega}}\big{]}\big{(}P_{\theta^{j-1}\omega,\rho}(\Pi^{j-1}_{\omega}[\Gamma])-P_{\theta^{j-1}\omega,\rho}(\Pi^{j-1}_{\omega}[\tilde{\Gamma}])\big{)}\\ &\quad+\sum_{j\leq 0}[\psi^{-j}_{\theta^{j}\omega}\circ\Pi_{S_{\theta^{j}\omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}]\big{(}P_{\theta^{j-1}\omega,\rho}(\Pi^{j-1}_{\omega}[\Gamma])-P_{\theta^{j-1}\omega,\rho}(\Pi^{j-1}_{\omega}[\tilde{\Gamma}])\big{)}.\end{split}\]
Therefore, choosing \(\varepsilon\) as in the proof of Proposition 2.5,
\[\|\Pi^{0}_{\omega}(\Gamma)-\Pi^{0}_{\omega}(\tilde{\Gamma})\|\] \[\geq \|v_{\omega}-\tilde{v}_{\omega}\|-\|\Gamma-\tilde{\Gamma}\|\sum_ {j\geqslant 1}5\exp((j-1)(-\mu^{+}+\epsilon+\nu)+\nu)F^{U}_{\epsilon}(\theta^{j} \omega)\|\Pi_{U_{\theta^{j}\omega}\|C_{\theta^{j}\omega}\oplus S_{\theta^{j} \omega}}\|f(\theta^{j}\omega)h(4\rho(\theta^{j}\omega))\] \[-\|\Gamma-\tilde{\Gamma}\|\sum_{j\leq 0}5\exp((-j)(\mu^{-}+ \epsilon+\nu)+\nu)F^{S}_{\epsilon}(\theta^{j}\omega)\|\|\Pi_{S_{\theta^{j} \omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}\|f(\theta^{j}\omega )h(4\rho(\theta^{j}\omega)).\]
For given \(\tilde{M}_{\epsilon}>0\), define
\[\tilde{T}(\omega)\coloneqq\frac{\tilde{M}_{\epsilon}}{4f(\omega)\max\{F^{C,1}_ {\nu}(\omega),F^{C,-1}_{\nu}(\omega)\}}\min\left\{\frac{1}{F^{U}_{\epsilon}( \omega)\|\Pi_{U_{\omega}\|C_{\omega}\oplus S_{\omega}}\|},\frac{1}{F^{S}_{ \epsilon}(\omega)\|\Pi_{S_{\omega}\|U_{\omega}\oplus C_{\omega}}\|}\right\}.\]
We now redefine \(\rho\) by setting
\[\rho(\omega)\coloneqq\min\left\{\frac{1}{4}h^{-1}(\tilde{T}(\omega)),\frac{1}{4}h^{-1}(T(\omega))\right\},\]
where \(T(\omega)\) was defined in (2.10). Note that the implications of Proposition 2.5 remain valid for this new \(\rho\). Define
\[\tilde{L}_{\varepsilon}\coloneqq\tilde{M}_{\varepsilon}\exp(\nu)\left\{\frac{ \exp(-\mu^{+}+\epsilon+\nu)}{1-\exp(-\mu^{+}+\epsilon+\nu)}+\frac{1}{1-\exp( \mu^{-}+\epsilon+\nu)}\right\}.\]
Choosing \(\varepsilon>0\) sufficiently small gives
\[\|\Pi^{0}_{\omega}(\Gamma)-\Pi^{0}_{\omega}(\tilde{\Gamma})\|\geq\|v_{\omega} -\tilde{v}_{\omega}\|-\frac{5\tilde{L}_{\epsilon}}{4\max\{F^{C,1}_{\nu}(\omega),F^{C,-1}_{\nu}(\omega)\}}\|\Gamma-\tilde{\Gamma}\|.\]
Choosing \(\tilde{M}_{\varepsilon}\) such that \(5\tilde{L}_{\epsilon}\leq 1\) yields the first claim. The second follows by combining this estimate with Lemma 2.6.
_Remark 2.8_.: Note that, by our assumptions,
\[\liminf_{n\to\pm\infty}\frac{1}{n}\log[\rho(\theta^{n}\omega)]\geq 0.\]
Indeed, this follows from the definition of \(\rho\) and our assumption that \(\limsup_{x\to 0}\frac{h(x)}{|x|^{r}}<\infty\).
**Definition 2.9**.: For \(\omega\in\Omega\) and \(\xi_{\omega}\in E_{\omega}\), set
\[\phi^{1}_{\omega}(Y_{\omega}+\xi_{\omega})\coloneqq Y_{\theta\omega}+\psi^{1}_{\omega}(\xi_{\omega})+\delta\bigg{(}\frac{\|\xi_{\omega}\|}{\rho(\theta\omega)}\bigg{)}\big{(}\varphi^{1}_{\omega}(Y_{\omega}+\xi_{\omega})-Y_{\theta\omega}-\psi^{1}_{\omega}(\xi_{\omega})\big{)}\quad\text{and}\] \[\phi^{n}_{\omega}(Y_{\omega}+\xi_{\omega})\coloneqq\phi^{n-1}_{\theta\omega}\circ\phi^{1}_{\omega}(Y_{\omega}+\xi_{\omega}). \tag{2.13}\]
_Remark 2.10_.: Assume that for every \(0\leq j\leq n\) it holds that \(\|\varphi_{\omega}^{j}(Y_{\omega}+\xi_{\omega})-Y_{\theta^{j}\omega}\|\leq\rho(\theta^{j+1}\omega)\). Then \(\phi_{\omega}^{i}(Y_{\omega}+\xi_{\omega})=\varphi_{\omega}^{i}(Y_{\omega}+\xi_{\omega})\) for every \(0\leq i\leq n+1\). Therefore, \(\phi\) coincides with \(\varphi\) locally up to a certain iteration, and the number of iterations goes to infinity as \(\|\xi_{\omega}\|\to 0\).
**Definition 2.11**.: Set
\[\mathcal{M}_{\omega}^{c,\nu}:=\{\xi_{\omega}+Y_{\omega}\in E_{ \omega}:\exists\Gamma\in\Sigma_{\omega}^{\nu}\text{ with }\Pi_{\omega}^{0}[\Gamma]=\xi_{\omega}\text{ and }\\ \phi_{\theta^{n}\omega}^{m}(\Pi_{\omega}^{n}[\Gamma]+Y_{\theta^{n} \omega})=\Pi_{\omega}^{n+m}[\Gamma]+Y_{\theta^{n+m}\omega}\text{ }\forall(m,n)\in\mathbb{N}\times\mathbb{Z}\}.\]
We call this set the _center manifold of the random map \(\phi\)_.
**Lemma 2.12**.: _With the same setting as in Proposition 2.5, for \(\Gamma\in\Sigma_{\omega}^{\nu}\),_
\[I_{\omega}(v_{\omega},\Gamma)=\Gamma\Longleftrightarrow\phi_{\theta^{n}\omega }^{m}(\Pi_{\omega}^{n}[\Gamma]+Y_{\theta^{n}\omega})=\Pi_{\omega}^{n+m}[ \Gamma]+Y_{\theta^{n+m}\omega},\text{ \ }\forall(m,n)\in\mathbb{N}\times\mathbb{Z} \tag{2.14}\]
_where_
\[v_{\omega}=\Pi_{\omega}^{0}(\Gamma)+\sum_{j\geqslant 1}\big{[} \psi_{\theta^{j}\omega}^{-j}\Pi_{U_{\theta^{j}\omega}\|C_{\theta^{j}\omega} \oplus S_{\theta^{j}\omega}}\big{]}P_{\theta^{j-1}\omega,\rho}(\Pi_{\omega}^{j -1}[\Gamma])-\] \[\sum_{j\leq 0}[\psi_{\theta^{j}\omega}^{-j}\circ\Pi_{S_{\theta^{j} \omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}]P_{\theta^{j-1} \omega,\rho}(\Pi_{\omega}^{j-1}[\Gamma])\in C_{\omega} \tag{2.15}\]
Proof.: The implication from left to right can be shown by an induction argument on \(m\) for every fixed \(n\). For the other direction, first note that
\[\begin{split}&\Pi_{\omega}^{0}(\Gamma)+\sum_{j\geqslant 1}\big{[}\psi_{\theta^{j}\omega}^{-j}\Pi_{U_{\theta^{j}\omega}\|C_{\theta^{j}\omega}\oplus S_{\theta^{j}\omega}}\big{]}\big{(}\phi_{\theta^{j-1}\omega}^{1}(\Pi_{\omega}^{j-1}[\Gamma]+Y_{\theta^{j-1}\omega})-Y_{\theta^{j}\omega}-\psi_{\theta^{j-1}\omega}^{1}(\Pi_{\omega}^{j-1}[\Gamma])\big{)}\\ &\qquad-\sum_{j\leq 0}\big{[}\psi_{\theta^{j}\omega}^{-j}\circ\Pi_{S_{\theta^{j}\omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}\big{]}\big{(}\phi_{\theta^{j-1}\omega}^{1}(\Pi_{\omega}^{j-1}[\Gamma]+Y_{\theta^{j-1}\omega})-Y_{\theta^{j}\omega}-\psi_{\theta^{j-1}\omega}^{1}(\Pi_{\omega}^{j-1}[\Gamma])\big{)}\\ &=\Pi_{\omega}^{0}(\Gamma)+\sum_{j\geqslant 1}\big{[}\psi_{\theta^{j}\omega}^{-j}\Pi_{U_{\theta^{j}\omega}\|C_{\theta^{j}\omega}\oplus S_{\theta^{j}\omega}}\big{]}\big{(}\Pi_{\omega}^{j}[\Gamma]-\psi_{\theta^{j-1}\omega}^{1}(\Pi_{\omega}^{j-1}[\Gamma])\big{)}\\ &\qquad-\sum_{j\leq 0}\big{[}\psi_{\theta^{j}\omega}^{-j}\circ\Pi_{S_{\theta^{j}\omega}\|U_{\theta^{j}\omega}\oplus C_{\theta^{j}\omega}}\big{]}\big{(}\Pi_{\omega}^{j}[\Gamma]-\psi_{\theta^{j-1}\omega}^{1}(\Pi_{\omega}^{j-1}[\Gamma])\big{)}\\ &=\Pi_{\omega}^{0}(\Gamma)-\Pi_{U_{\omega}\|C_{\omega}\oplus S_{\omega}}(\Pi_{\omega}^{0}(\Gamma))-\Pi_{S_{\omega}\|U_{\omega}\oplus C_{\omega}}(\Pi_{\omega}^{0}(\Gamma))=\Pi_{C_{\omega}\|S_{\omega}\oplus U_{\omega}}(\Pi_{\omega}^{0}(\Gamma))\in C_{\omega}.\end{split}\]
The same calculation for every \(n\in\mathbb{Z}\) yields the left side of the claim.
_Remark 2.13_.: Note that we also have
\[\phi_{\omega}^{n}(\Pi_{\omega}^{0}(\Gamma)+Y_{\omega})=Y_{\theta^{n}\omega}+\psi_{\omega}^{n}(\Pi_{\omega}^{0}(\Gamma))+\sum_{0\leq j\leq n-1}\psi_{\theta^{j+1}\omega}^{n-1-j}\big{(}P_{\theta^{j}\omega,\rho}(\phi_{\omega}^{j}(\Pi_{\omega}^{0}(\Gamma)+Y_{\omega})-Y_{\theta^{j}\omega})\big{)}.\]
Finally, we can formulate the main result of this section.
**Theorem 2.14** (Center manifold theorem).: _Assume \((\Omega,\mathcal{F},\mathbb{P},\theta)\) is an ergodic measure-preserving dynamical system. Let \(\varphi\) be a Fréchet-differentiable cocycle acting on a measurable field of Banach spaces \(\{E_{\omega}\}_{\omega\in\Omega}\) and assume that \(\varphi\) admits a stationary point \(Y_{\omega}\). Furthermore, assume that the
linearized cocycle \(\psi\) around \(Y\) is compact and satisfies Assumption 2.1 and the conditions of Theorem 1.2. Suppose that at least one Lyapunov exponent is zero. Let \(\phi\) be defined as in Definition 2.9 for a random variable \(\rho\) provided by Proposition 2.5. For every \(0<\nu<\min\{\mu^{+},-\mu^{-}\}\), let_
\[h_{\omega}^{c}\colon C_{\omega}\to\mathcal{M}_{\omega}^{c,\nu}\]
_be defined as \(h_{\omega}^{c}(v_{\omega})=\Pi_{\omega}^{0}[\Gamma]\) where \(\Gamma\) is the unique fixed point that satisfies \(I_{\omega}(v_{\omega},\Gamma)=\Gamma\). Then the following statements hold:_
* \(h_{\omega}^{c}\) _is a homeomorphism, Lipschitz continuous and differentiable at zero._
* \(\mathcal{M}_{\omega}^{c,\nu}\) _is a topological Banach manifold_1 _modeled on_ \(C_{\omega}\)_. This manifold is differentiable if for every_ \(\omega\in\Omega\) _the random norm_ \(\|\cdot\|_{E_{\omega}}\) _is differentiable._ Footnote 1: That means a \(\mathcal{C}^{0}\) Banach manifold in the sense of [11, 12].
* \(\mathcal{M}_{\omega}^{c,\nu}\) _is_ \(\phi\)_-invariant, i.e. for every_ \(n\in\mathbb{N}_{0}\)_,_ \(\phi_{\omega}^{n}(\mathcal{M}_{\omega}^{c,\nu})\subset\mathcal{M}_{\theta^{n} _{\omega}}^{c,\nu}\)_._
Proof.:
* Lemma 2.12 immediately implies surjectivity of the map \(h_{\omega}^{c}\). Let \(v_{\omega},\tilde{v}_{\omega}\in C_{\omega}\) and assume that \(h_{\omega}^{c}(v_{\omega})=h_{\omega}^{c}(\tilde{v}_{\omega})\). By definition, there exist \(\Gamma,\tilde{\Gamma}\in\Sigma_{\omega}^{\nu}\) such that \(I_{\omega}(v_{\omega},\Gamma)=\Gamma\), \(I_{\omega}(\tilde{v}_{\omega},\tilde{\Gamma})=\tilde{\Gamma}\) and \(\Pi_{\omega}^{0}[\Gamma]=\Pi_{\omega}^{0}[\tilde{\Gamma}]\). Lemma 2.7 implies that \(v_{\omega}=\tilde{v}_{\omega}\), thus \(h_{\omega}^{c}\) is also injective. Lipschitz continuity of \(h_{\omega}^{c}\) is a consequence of Lemma 2.6 and continuity of the inverse follows again from Lemma 2.7.
* By the first item, \(h_{\omega}^{c}\) defines a continuous, Lipschitz chart on \(\mathcal{M}_{\omega}^{c,\nu}\). Note that we modified the original cocycle \(\varphi\) to obtain \(\phi\). If the random norms \(\|\cdot\|_{E_{\omega}}\) are smooth enough (as is the case for Hilbert spaces) and \(\varphi\in C^{m}\), then by the implicit function theorem we conclude that \(\mathcal{M}_{\omega}^{c,\nu}\) is locally \(C^{m-1}\).
* The \(\phi\)-invariance of \(\mathcal{M}_{\omega}^{c,\nu}\) follows from Definition 2.11 and Lemma 2.12. Indeed, assume \(\xi_{\omega}+Y_{\omega}\in\mathcal{M}_{\omega}^{c,\nu}\); then for \(\Gamma\in\Sigma_{\omega}^{\nu}\) and \(v_{\omega}\) defined in (2.15), we have \(I_{\omega}(v_{\omega},\Gamma)=\Gamma\) and \(\Pi_{\omega}^{0}[\Gamma]=\xi_{\omega}\). Let us define \(\tilde{\Gamma}\in\Sigma_{\theta\omega}^{\nu}\) by \(\Pi_{\theta\omega}^{n}[\tilde{\Gamma}]:=\Pi_{\omega}^{n+1}[\Gamma]\) and \(\tilde{v}_{\theta\omega}\in C_{\theta\omega}\) by \(\tilde{v}_{\theta\omega}=\psi_{\omega}^{1}(v_{\omega})\). Note that \(\Pi_{\theta\omega}^{0}[\tilde{\Gamma}]=\phi_{\omega}^{1}(\xi_{\omega}+Y_{\omega})-Y_{\theta\omega}\) and \(I_{\theta\omega}(\tilde{v}_{\theta\omega},\tilde{\Gamma})=\tilde{\Gamma}\). This proves the claim.
|
2305.11583 | RECIPE: How to Integrate ChatGPT into EFL Writing Education | The integration of generative AI in the field of education is actively being
explored. In particular, ChatGPT has garnered significant interest, offering an
opportunity to examine its effectiveness in English as a foreign language (EFL)
education. To address this need, we present a novel learning platform called
RECIPE (Revising an Essay with ChatGPT on an Interactive Platform for EFL
learners). Our platform features two types of prompts that facilitate
conversations between ChatGPT and students: (1) a hidden prompt for ChatGPT to
take an EFL teacher role and (2) an open prompt for students to initiate a
dialogue with a self-written summary of what they have learned. We deployed
this platform for 213 undergraduate and graduate students enrolled in EFL
writing courses and seven instructors. For this study, we collect students'
interaction data from RECIPE, including students' perceptions and usage of the
platform, and user scenarios are examined with the data. We also conduct a
focus group interview with six students and an individual interview with one
EFL instructor to explore design opportunities for leveraging generative AI
models in the field of EFL education. | Jieun Han, Haneul Yoo, Yoonsu Kim, Junho Myung, Minsun Kim, Hyunseung Lim, Juho Kim, Tak Yeon Lee, Hwajung Hong, So-Yeon Ahn, Alice Oh | 2023-05-19T10:45:40Z | http://arxiv.org/abs/2305.11583v1 | # RECIPE: How to Integrate ChatGPT into EFL Writing Education
###### Abstract
The integration of generative AI in the field of education is actively being explored. In particular, ChatGPT has garnered significant interest, offering an opportunity to examine its effectiveness in English as a foreign language (EFL) education. To address this need, we present a novel learning platform called RECIPE (Revising an Essay with ChatGPT on an Interactive Platform for EFL learners). Our platform features two types of prompts that facilitate conversations between ChatGPT and students: (1) a hidden prompt for ChatGPT to take an EFL teacher role and (2) an open prompt for students to initiate a dialogue with a self-written summary of what they have learned. We deployed this platform for 213 undergraduate and graduate students enrolled in EFL writing courses and seven instructors. For this study, we collect students' interaction data from RECIPE, including students' perceptions and usage of the platform, and user scenarios are examined with the data. We also conduct a focus group interview with six students and an individual interview with one EFL instructor to explore design opportunities for leveraging generative AI models in the field of EFL education.
## 1. Introduction
In the context of English as a foreign language (EFL) education, the integration of artificial intelligence (AI) technology has been shown to enhance students' learning experience. The use of AI-based tools, including Grammarly 1 and Quillbot 2, has resulted in significant improvements in EFL learners' writing abilities. Moreover, learners have exhibited positive perceptions towards the use of these tools in their writing classes (Bahdan et al., 2016; Bahdan et al., 2016).
Footnote 1: [https://www.grammarly.com/](https://www.grammarly.com/)
Footnote 2: [https://quillbot.com/](https://quillbot.com/)
ChatGPT 3, a large language model (LLM)-driven chatbot by OpenAI, has made a significant breakthrough in the domain of language learning. While earlier chatbots were unnatural and incapable of engaging language learners (Bahdan et al., 2016), ChatGPT can generate natural and personalized responses, which makes students' learning experience more interactive and engaging (Bahdan et al., 2016; Bahdan et al., 2016). Recent studies have suggested the potential educational benefits of incorporating generative AI into education (Bahdan et al., 2016). Hence, many online educational platforms such as Khan Academy 4 and Duolingo 5 have already started integrating ChatGPT into their functionalities.
Footnote 3: [https://chat.openai.com/](https://chat.openai.com/)
Footnote 4: [https://www.khanacademy.org/](https://www.khanacademy.org/)
Footnote 5: [https://www.duolingo.com/](https://www.duolingo.com/)
Despite such suggestions and attempts to incorporate ChatGPT into language education, only episodic and anecdotal knowledge has been shared, rather than systematic investigation (Bahdan et al., 2016; Bahdan et al., 2016). Therefore, we need to examine the effective use of ChatGPT in higher education and identify its learning effect. In order to effectively integrate ChatGPT into EFL writing courses, it is crucial to investigate the specific design of the integration, considering learners' perceptions and usage of ChatGPT in higher education. In this regard, we introduce an interactive learning platform, RECIPE (Revising an Essay with ChatGPT on an Interactive Platform for EFL learners), aiming to leverage the data collected from our platform to guide learners towards a more effective and engaging learning experience. The main contributions of this work are as follows:
1. We analyze the perception and usage of ChatGPT in the context of EFL among learners and instructors.
2. We introduce RECIPE (Revising an Essay with ChatGPT on an Interactive Platform for EFL learners), a learning platform designed to integrate ChatGPT with two types of underlying prompts for a better student learning experience.
3. We collect students' interaction data on their perception and ChatGPT usage through RECIPE to investigate further development of our platform.
## 2. Preliminary Questionnaire
In order to investigate students' attitudes, usage, and expectations of ChatGPT, especially in the context of EFL, we conducted a preliminary questionnaire with 213 (91 undergraduate and 122 graduate) college students in South Korea. They were enrolled in three types of English writing courses in the spring semester of 2023. Specifically, the 91 undergraduate students were enrolled in either the Intermediate English Reading & Writing (IRW; 46 enrolled) or the Advanced English Writing (AW; 45 enrolled) course, based on their TOEFL writing scores (15-18 for IRW, and 19-21 for AW). The 122 graduate students were enrolled in Scientific Writing (SW).
The questionnaire consists of 5-point Likert scale questions about participants' prior experience and evaluation of ChatGPT, with regard to perceived helpfulness, trustworthiness, credibility, appropriateness of style/tone, performance, overall satisfaction, and referral intention 6 of ChatGPT in the context of college education. Detailed questions are described in Appendix A. Results from this survey provide visions for developing an educational platform with ChatGPT.
Footnote 6: to what extent they would recommend others to adopt ChatGPT
### Perception and Usage of ChatGPT
The majority of students reported a positive user experience with ChatGPT across all seven factors described above. Specifically, 85% of students with ChatGPT exposure used the tool in their academic work, indicating a potential for leveraging and developing the technology for educational purposes. Moreover, less than half of the students used ChatGPT to improve their English writing skills, highlighting an opportunity and a need for research on integrating ChatGPT into EFL education. These students also relied on other widely available AI tools, including Grammarly 1, Turnitin 7, Google Translate 8, Papago 9, Wordtune 10, ExplainPaper 11, and Elicit 12, most often for help with English writing (90.4%), followed by reading (61.6%), grammar (57.1%), speaking (27.8%), and listening (12.6%) skills.
Footnote 10: [https://www.wordtune.com/](https://www.wordtune.com/)
Footnote 11: [https://www.explainpaper.com/](https://www.explainpaper.com/)
Footnote 12: [https://elicit.org/](https://elicit.org/)
This questionnaire also underscores the need for comprehensive and well-guided instructions for effectively and efficiently integrating ChatGPT into EFL writing courses, rather than implementing an LLM-agnostic class policy and allowing students to use ChatGPT at their own discretion.
Students with prior experience and knowledge of LLMs expressed higher levels of satisfaction and expectations for using ChatGPT in the courses they enrolled in. In particular, those who had low LLM understanding and were enrolled in IRW and AW expected that ChatGPT would be useful for asking questions about the lecture and finding sources to support their writing, tasks that run into hallucination, a notable limitation of ChatGPT (Bahdan et al., 2016). In contrast, students with a high level of LLM understanding exhibited greater satisfaction with their previous ChatGPT experience regarding perceived helpfulness, appropriateness of style/tone, performance, overall satisfaction, and encouragement, compared to those with low LLM understanding. However, we did not find a statistically significant difference in terms of trustworthiness and credibility. Students with a high level of LLM understanding and those with ChatGPT exposure also exhibited significantly positive expectations regarding performance, credibility, and overall satisfaction towards the use of ChatGPT in academic courses. Based on these findings, we posit that our proposed platform RECIPE can effectively and efficiently guide students towards obtaining satisfactory responses from ChatGPT, ultimately enhancing their English writing skills.
## 3. Platform Design
This section outlines the design of RECIPE, geared towards the development of English writing skills, utilizing ChatGPT's capabilities in a targeted and effective manner. We deployed RECIPE for 213 students enrolled in EFL courses (IRW, AW, and SW) in the spring semester of 2023. Students' interaction data is gathered throughout the semester from undergraduate and graduate students using our platform. RECIPE comprises three main components: a pre-survey (SS3.1), the writing exercise (SS3.2), and a post-survey (SS3.3). Table 1 displays the data collected at each phase of the platform.
### Pre-survey
We designed a pre-survey to collect (1) students' expectations for the upcoming exercise and (2) their understanding of the topics that they will discuss with ChatGPT during the exercise. The pre-survey aims to compare students' expected and actual assistance and to examine changes in their understanding before and after the exercise.
Students are asked to indicate all applicable objectives that they expect to receive assistance with during the exercise by selecting from a list of nine options provided in Table 6 of Appendix A. Additionally, students are required to rate their understanding of the topics to be addressed during the exercise, using a 5-point Likert scale.
### Writing Exercise
Figure 1 displays a screenshot of our platform, which has a writing exercise interface divided into two sections: the left side is for editing an essay, while the right side is for interacting with ChatGPT. The students' essay from the previous session is available on the left side, enabling them to revise their work with the help of ChatGPT.
On the right side, students can consult ChatGPT about their essay. Unlike the existing ChatGPT interface from OpenAI, our platform provides two initial prompts based on empirical prompt engineering, as shown in Table 2: (a) a hidden prompt for the model to set a persona for ChatGPT, acting as a personalized English writing teacher, and (b) an open prompt for students to start a dialogue efficiently. ChatGPT instructs students step by step to revise their essay based on the content they learned during the class, and students are asked to summarize what they learned during the corresponding week or previous classes as the first dialogue. We advise both ChatGPT and students not to provide or request a revised version of the entire essay without any explanation. These instructions serve two purposes: first, to remind students of the lecture content and enhance their learning, and second, to help students receive a more class-relevant response from ChatGPT. We believe this suggested interface can guide EFL learners to write a more specific opening prompt for ChatGPT.
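To make the mechanics concrete, the following is a minimal sketch of how such a two-prompt setup could be wired to a chat-based LLM. It is our illustration rather than the authors' implementation: the prompt strings are abridged from Table 2 and the description above, and it assumes the 2023-era `openai` Python package with an API key configured in the environment.

```python
import openai  # assumes the 2023-era ChatCompletion interface

# (a) Hidden prompt: sets the EFL-teacher persona; never shown to the student.
# The wording is abridged/paraphrased from Table 2 and the surrounding text,
# not the exact prompt used by RECIPE.
HIDDEN_PROMPT = (
    "Act as an English writing class teacher and instruct a student to revise "
    "their essay step by step, based on what they learned in class. Do not "
    "provide a revised version of the entire essay without any explanation."
)

# (b) Open prompt: the first visible assistant message, asking the student
# to summarize what they learned in class.
OPEN_PROMPT = (
    "Hello! Welcome to the main exercise. Can you please tell me what you "
    "learned in class so that I can help you in a better way?"
)

def start_session() -> list:
    """Initial message history for one writing-exercise session."""
    return [
        {"role": "system", "content": HIDDEN_PROMPT},
        {"role": "assistant", "content": OPEN_PROMPT},
    ]

def send_student_message(history: list, text: str) -> str:
    """Append a student turn, query the model, and record its reply."""
    history.append({"role": "user", "content": text})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

# Example first turn, mirroring the open prompt's instructions:
history = start_session()
print(send_student_message(
    history,
    "Today I learned about comma rules. When there is a list that contains "
    "two or more elements, we should use commas to separate them.",
))
```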
After each turn of the conversation, the edited version of the students' essay will be saved to analyze how they revised it based on ChatGPT's response. In addition, students are asked to rate the helpfulness of the response generated by ChatGPT using a 5-scale Likert scale. Students can continue to converse with ChatGPT and revise their essay.
### Post-survey
First, to gain insights into students' decision-making regarding essay revision, we inquire about their reasons for not making an edit. As the essay editing history is collected during the writing exercise, we can compare the original version of the student's essay with the submitted version after the conversation with ChatGPT. To further analyze prompt engineering, we ask students about the main topic of their conversation using multi-select questions, including whether it was about the current week's lecture, previous lectures, and content not covered in the lecture. To compare students' expectations of the exercise with their actual usage of ChatGPT, we provide the same nine options mentioned in the pre-survey (SS3.1). Additionally, we collect data on students' confidence in their essay and overall satisfaction with ChatGPT's responses to analyze their experiences throughout the semester. Finally, students share how they utilized ChatGPT's responses to modify their essay and their experiences regarding it in a free-text form.
in the platform for SW. Students with a paragraph can revise their essay and converse with ChatGPT, similar to the IRW and AW user flow. Students who do not submit any paragraph to edit can still review the course content and have conversations with ChatGPT, a personalized English writing teacher, to deepen their understanding.
We illustrate actual interaction data collected from RECIPE with students taking IRW, AW, and SW. A selected portion of the interactive dialogue data is described in Tables 8 and 9 of Appendix C. As shown in Table 9, the student followed the user scenario as we intended: the student described the content of the class, which focused on methodology, in the first prompt, and submitted the methodology section of his or her own paper. Furthermore, after receiving ChatGPT's response suggesting three possible improvements to the paper, the student incorporated one of the suggestions into his or her own writing. In addition, a student taking IRW asked, "Is 7 and 8 grammatical error?" after ChatGPT listed a revised version with ten grammatical errors, and continued to question ChatGPT's responses, switching from English to Korean. Neither student simply accepted all the suggestions; both sought clarification on suggested revisions that they did not understand. This suggests that students engage with RECIPE and exercise critical thinking, indicating the potential learning benefit of the platform.
## 4. Interview for Further Development
We conducted two interviews to further scrutinize the needs for future work on RECIPE: (1) a focus group interview (FGI) with six students who had already taken at least one EFL writing course and (2) a 1:1 in-depth interview with an instructor who had taught EFL courses, including IRW and AW, for six years. Three students (S1, S2, S3) had taken undergraduate courses (IRW and AW), and the other three students (S4, S5, S6) had taken a graduate course (SW). Detailed educational backgrounds of the six student interviewees are described in Table 7 of Appendix B. We asked both the students and the instructor to use RECIPE and suggest future directions for ChatGPT-integrated educational platforms. In general, both acknowledged the usefulness of ChatGPT in EFL writing and expressed the need for a specialized platform integrated with ChatGPT in EFL writing courses.
_Recommendation for optimal prompts_. Although we provided instructions on the initial prompt and an example for the writing exercise, S5 and S6 reported difficulties in initiating a dialogue with ChatGPT. S1 mentioned that sharing prompts that had worked effectively in the focus group was helpful. Both S1 and S4 recommended implementing a tool to share the learning history of each student, including successful prompts and dialogue pattern logs. We believe that the assessment data on each student's prompt and ChatGPT's response collected through this platform can be utilized to enhance RECIPE's prompt engineering techniques and build a recommendation model for optimal prompts.
_Personalized persona setup_. The instructor noted the need for personalized responses from ChatGPT based on students' situations, such as their English proficiency level. Additionally, S1 observed that a detailed persona setup with diverse perspectives, such as EFL professors, domain experts, and slightly more advanced EFL learners, made ChatGPT generate insightful critiques. RECIPE's persona, which acts as an instructor of EFL writing courses, can be diversified so that students may choose the persona that best suits their purpose.
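As a design sketch (ours, not from the paper), such a personalized persona setup could be as simple as parameterizing the hidden prompt by persona and proficiency level. The persona names below follow the interviewees' suggestions, but the templates themselves are hypothetical.

```python
# Illustrative sketch: parameterizing RECIPE's hidden prompt by persona and
# learner proficiency. These templates are hypothetical examples, not prompts
# used by the actual platform.
PERSONA_TEMPLATES = {
    "efl_teacher": "Act as an English writing class teacher giving step-by-step revision guidance.",
    "domain_expert": "Act as a domain expert critiquing the technical content of a research paper.",
    "peer_learner": "Act as a slightly more advanced EFL learner giving friendly peer feedback.",
}

def build_hidden_prompt(persona: str, proficiency: str) -> str:
    """Combine a persona template with the student's proficiency level."""
    base = PERSONA_TEMPLATES[persona]
    return (
        f"{base} The student's English proficiency level is {proficiency}; "
        "adjust the difficulty and tone of your feedback accordingly."
    )

print(build_hidden_prompt("peer_learner", "intermediate"))
```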
| | Pre-survey | Writing Exercise | Post-survey |
| --- | --- | --- | --- |
| (a) Essay | Students' original essay | Student's edited essay and its revision history | Reason not to revise essay; confidence in student's essay |
| (b) ChatGPT | | Conversation log data; helpfulness of each ChatGPT response | Topics discussed with ChatGPT; overall satisfaction towards ChatGPT responses |
| (c) Student | Expected help for the session; comprehension level about course topics | Timestamp of each sent message | Actual help through the session; improvements in comprehension level after writing exercise |

Table 1. Data collected during each phase of the platform.
| (a) Hidden prompt for ChatGPT | (b) Open prompt for students |
| --- | --- |
| Act as an English writing class teacher and instruct a student to revise | Hello! Welcome to the main exercise. Can you please tell me what you learned in class so that I can help you in a better way? An example can be: "Today I learned about comma rules. When there is a list that contains two or more elements, we should use commas to separate them." |

Table 2. Initial prompts at the writing exercise.
_Student-initiative platform design_. The instructor remarked that continuous trial and error in interactions with ChatGPT about revising their essays would be essential for students' language learning. The instructor also commented that interactions led by students' own initiative would facilitate their writing skills. In particular, S1 suggested using ChatGPT to generate errorful essays relevant to what students learned as learning materials, and asking students to revise those essays by themselves, step by step, as ChatGPT guides them.
_Automated essay scoring_. The instructor emphasized the need for automatically measured essay scores and explanations of the evaluation based on instructors' own rubrics or grading systems. While some state-of-the-art automated essay scoring (AES) systems for overall scores exist, they are nearly inaccessible to EFL instructors who do not have much background in AI techniques and cannot incorporate and fine-tune them based on their own rubrics. We believe that instructor-friendly platforms based on interactions with ChatGPT may mitigate this issue.
_Refined user interface_. S5 suggested that a better user interface between the two windows of RECIPE's writing exercise would increase engagement among EFL students. Specifically, text visualization or toggles for comparing a student's original essay with ChatGPT's suggestions may be useful.
## 5. Conclusion
In this paper, we investigated students' perception and usage of generative AI, including ChatGPT, in academic courses. Our results indicated that most students reported positive experiences with ChatGPT for general, academic, and essay-writing purposes. However, students with a limited understanding of LLMs faced challenges, such as expecting ChatGPT to find ground sources for their writing; such hallucination is one of the major limitations of generative AI. To address this issue, we introduced RECIPE, a novel language learning platform that leverages ChatGPT to cater to students' needs and enhance their English writing skills with two types of prompts. We aim to collect interaction data throughout the semester on students' perception and usage of ChatGPT in English writing through RECIPE. The interaction data we gather with RECIPE has the potential to serve as a baseline for developing an effective paradigm for integrating ChatGPT in academic settings.
## Acknowledgments
This work was supported by Elice.
|
2305.10845 | TAPIR: Learning Adaptive Revision for Incremental Natural Language
Understanding with a Two-Pass Model | Language is by its very nature incremental in how it is produced and
processed. This property can be exploited by NLP systems to produce fast
responses, which has been shown to be beneficial for real-time interactive
applications. Recent neural network-based approaches for incremental processing
mainly use RNNs or Transformers. RNNs are fast but monotonic (cannot correct
earlier output, which can be necessary in incremental processing).
Transformers, on the other hand, consume whole sequences, and hence are by
nature non-incremental. A restart-incremental interface that repeatedly passes
longer input prefixes can be used to obtain partial outputs, while providing
the ability to revise. However, this method becomes costly as the sentence
grows longer. In this work, we propose the Two-pass model for AdaPtIve Revision
(TAPIR) and introduce a method to obtain an incremental supervision signal for
learning an adaptive revision policy. Experimental results on sequence
labelling show that our model has better incremental performance and faster
inference speed compared to restart-incremental Transformers, while showing
little degradation on full sequences. | Patrick Kahardipraja, Brielen Madureira, David Schlangen | 2023-05-18T09:58:19Z | http://arxiv.org/abs/2305.10845v1 | Tapir: Learning Adaptive Revision for Incremental Natural Language Understanding with a Two-Pass Model
###### Abstract
Language is by its very nature incremental in how it is produced and processed. This property can be exploited by NLP systems to produce fast responses, which has been shown to be beneficial for real-time interactive applications. Recent neural network-based approaches for incremental processing mainly use RNNs or Transformers. RNNs are fast but monotonic (cannot correct earlier output, which can be necessary in incremental processing). Transformers, on the other hand, consume whole sequences, and hence are by nature non-incremental. A _restart-incremental_ interface that repeatedly passes longer input prefixes can be used to obtain partial outputs, while providing the ability to revise. However, this method becomes costly as the sentence grows longer. In this work, we propose the Two-pass model for AdaPtIve Revision (Tapir) and introduce a method to obtain an incremental supervision signal for learning an adaptive revision policy. Experimental results on sequence labelling show that our model has better incremental performance and faster inference speed compared to restart-incremental Transformers, while showing little degradation on full sequences.1
Footnote 1: Our implementation is publicly available at [https://github.com/pkdhpriaja/tapir](https://github.com/pkdhpriaja/tapir).
## 1 Introduction
Incrementality is an inseparable aspect of language use. Human speakers can produce utterances based on an incomplete message formed in their minds while simultaneously continuing to refine its content for subsequent speech production (Kempen and Hoenkamp, 1982, 1987). They also comprehend language on (approximately) a word-by-word basis and do not need to wait until the utterance finishes to grasp its meaning (Kamide, 2008).
As observed by Madureira and Schlangen (2020), a natural option for neural network-based incremental processing would be RNNs (Rumelhart et al., 1986), as they have essential properties required in incremental scenarios: They keep a recurrent state, are sensitive to the notion of order and are able to accept partial input and produce an output at each time step. Ideally, an incremental processor should also be able to revise its previous incorrect hypotheses based on new input prefixes (Schlangen and Skantze, 2009). However, RNNs are unable to do so as their output is monotonic.
The Transformer architecture (Vaswani et al., 2017) has been the _de facto_ standard for many NLP tasks since its inception. Nevertheless, it is not designed for incremental processing as the input sequences are assumed to be complete and processed as a whole. A _restart-incremental_ interface (Beuck et al., 2011; Schlangen and Skantze, 2011) can be applied to adapt Transformers for incremental processing (Madureira and Schlangen, 2020), where available input prefixes are recomputed at each time step to produce partial outputs. Such an interface also provides the capability to revise existing outputs through its non-monotonic nature. Although feasible, this method does not scale well for long sequences since the number of required forward passes grows with the sequence length.2 The revision process is also not efficient, as it occurs at every time step, even when it is unnecessary.

Figure 1: Illustrative example of how a monotonic incremental POS-tagger would not recover from wrong hypotheses. A policy for adaptive revision, here parameterised by a controller, can enable reanalyses to be performed when necessary (here at time steps 3 and 7).
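To make this cost concrete, the following is a minimal sketch of a restart-incremental interface in Python (the `model` callable and its toy stand-in are our own illustrative assumptions, not the released code):

```python
def restart_incremental(model, tokens):
    """Label a growing input by re-running a non-incremental model on the
    whole available prefix at every time step (restart-incrementality)."""
    outputs = []
    for t in range(1, len(tokens) + 1):
        # One full forward pass per step: n passes for n tokens, each over
        # a prefix of length t, i.e. O(n^2) token encodings in total.
        outputs = model(tokens[:t])  # may rewrite all earlier labels
    return outputs

# Toy stand-in for a sequence labeller, just to make the sketch executable.
toy_model = lambda prefix: ["long" if len(w) > 4 else "short" for w in prefix]
print(restart_incremental(toy_model, "the alert citizens saw".split()))
# ['short', 'long', 'long', 'short']
```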
## 2 Related Work

Two-pass approaches are often composed of an encoder-decoder architecture with an additional second-pass decoder to refine the generated target sentence. In dialogue domains, this strategy is also used to improve the contextual coherence and correctness of the response (Li et al., 2019) and to refine the output of retrieval-based dialogue systems (Song et al., 2018; Weston et al., 2018). Furthermore, the two-pass approach is commonly utilised in streaming ASR to improve the initial hypothesis (Sainath et al., 2019; Hu et al., 2020; Wang et al., 2022, _inter alia_).
The aforementioned works shared a common trait, as they used a fixed policy and performed revision either for each incoming input or when the input is already complete. Our approach differs in that our model learns an adaptive policy that results in more timely revisions. Contemporaneous to our work, Kaushal et al. (2023) proposed a cascaded uni- and bidirectional architecture with an additional module to predict when to restart. The module is trained with a supervision signal obtained from comparing the model's prediction against the ground truth. Their approach is effective in reducing the required computational budget.
## 3 Model
To address the weaknesses of RNN- and Transformer-only architectures for incremental processing (§1), we introduce a Two-pass model for AdaPtIve Revision named Tapir, which integrates advantages of both models and is based on the deliberation network (Xia et al., 2017). Its architecture, depicted in Figure 2, consists of four components as follows:
1. **Incremental Processor**: a recurrent model that produces an output at each time step and serves as the first-pass model. In this work, we use a standard LSTM network.
2. **Reviser**: a bidirectional model that can revise via recomputation operations (§3.1), also called the second-pass model. We opt for Transformer-based models following Li et al. (2020), as they allow parallel recomputation. The revision process corresponds to the _forward reanalysis hypothesis_ (Frazier and Rayner, 1982), where a sentence is processed from the beginning whenever the need for reanalysis is detected.
3. **Memory**: the history of inputs and outputs is stored in the memory. Taking inspiration from Grave et al. (2017), we use caches, as they are computationally cheap, offering a considerable speed-up in incremental settings.
4. **Controller**: a neural network that parameterises the revision policy. We choose a recurrent controller following Graves et al. (2014), as its internal memory complements the memory module and is also suitable for incremental scenarios. We use a modified LSTMN (Cheng et al., 2016) for this component.
During incremental inference, Tapir computes a candidate output \(y_{t}\) for the most recent input \(x_{t}\) as the first pass. Then, based on \(x_{t}\) and the memory, it decides whether to take a WRITE (add \(y_{t}\) to an output buffer) or REVISE (perform a second pass to recompute all existing outputs) action. The action is defined by a revision policy \(\pi_{\theta}\), which models the effect of new input on past outputs. At each time \(t\), \(\pi_{\theta}\) makes use of processed inputs \(x_{\leq t}\) and past outputs \(y_{<t}\) to select a suitable action \(a_{t}\).4 It is parameterised by the controller hidden state \(k_{t}\) with a non-linear function \(g\):
Footnote 4: The output \(y_{t}\) is excluded as it is not required to determine if a recomputation should occur in our model.
\[\pi_{\theta}(a_{t}|a_{<t},x_{\leq t},y_{<t})\propto g_{\theta}(k_{t}) \tag{1}\]
### Revision Policy
In restart-incremental models, revisions can occur as a result of recomputations, which are costly since they happen at every time step, even when no revisions occur. Tapir revises by selectively deciding when to recompute, which enables it to revisit previous outputs at different points in time while reducing the number of recomputations.

Figure 2: Tapir computes a candidate output using an RNN at each time step. Then the controller decides whether to WRITE by adding the new output to the output buffer or to take a REVISE action, which can edit the output buffer after observing the effect of the new input on past outputs with the help of the memory.
**Memory Content**. The memory in Tapir contains information pertaining to processed inputs and their corresponding outputs, which is crucial for our approach. This is because it enables our model to perform relational learning between an incoming input and past outputs, using past inputs as an additional cue. Here, we use three caches \(\Gamma\). \(\Gamma^{h}\) stores the hidden state \(h\) of the incremental processor, representing the current input prefix, \(\Gamma^{z}\) stores the projected output vector \(z\) which represents the output, and \(\Gamma^{p}\) stores the input-output representation \(\varphi\), which is computed from \(h\) and \(z\). The \(i\)-th slot of the caches contains \(\gamma_{i}^{h},\gamma_{i}^{z},\gamma_{i}^{p}\), all of them computed at the same time step. The representations \(z\) and \(\varphi\) are computed as follows:
\[z =\tanh(W_{\tilde{y}}\tilde{y}+b_{z}) \tag{2}\] \[\varphi =\tanh(W_{in}h+W_{out}z+b_{\varphi}) \tag{3}\]
where \(\tilde{y}\) is the output logits from the incremental processor. \(W_{\tilde{y}},W_{in},\) and \(W_{out}\) are parameters while \(b_{z}\) and \(b_{\varphi}\) are bias terms. The dimension of \(z\) and \(h\) is the same. We keep the cache size \(N\) small, as we later perform soft attention over \(\Gamma^{p}\). The attention computation for large cache sizes is costly and is not suitable for incremental scenarios. Due to this limitation, the oldest cache element is discarded when the cache is full and new partial input arrives.
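A minimal NumPy sketch of this cache machinery follows; the sizes and parameters are illustrative placeholders (in the actual model, \(W_{\tilde{y}}\), \(W_{in}\), and \(W_{out}\) are trained end-to-end):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
d, n_labels, N = 8, 5, 4   # illustrative hidden size, label count, cache size

# Illustrative (untrained) parameters for Eqs. (2)-(3).
W_y = rng.normal(size=(d, n_labels))
b_z = np.zeros(d)
W_in = rng.normal(size=(d, d))
W_out = rng.normal(size=(d, d))
b_phi = np.zeros(d)

# The three caches; deque(maxlen=N) evicts the oldest entry when full,
# mirroring the eviction described above.
cache_h = deque(maxlen=N)   # Gamma^h: incremental-processor hidden states
cache_z = deque(maxlen=N)   # Gamma^z: projected output vectors
cache_p = deque(maxlen=N)   # Gamma^p: input-output representations

def update_caches(h_t, y_logits):
    z = np.tanh(W_y @ y_logits + b_z)               # Eq. (2)
    phi = np.tanh(W_in @ h_t + W_out @ z + b_phi)   # Eq. (3)
    cache_h.append(h_t)
    cache_z.append(z)
    cache_p.append(phi)
    return z, phi

update_caches(rng.normal(size=d), rng.normal(size=n_labels))
```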
**Modelling Actions**. To model possible changes in past outputs as an effect of a new input, we use an LSTMN controller due to its ability to induce relations among tokens. It computes the relation between \(h_{t}\) and each cache element \(\gamma_{i}^{p}\) via an attention mechanism:
\[U =W_{c}\gamma_{i}^{p}+W_{h}h_{t}+W_{\tilde{k}}\tilde{k}_{t-1}+b_{u} \tag{4}\] \[s_{i}^{t} =\text{softmax}(v^{\top}\text{tanh}(U)) \tag{5}\]
which yields a probability distribution over \(\Gamma^{p}\). \(\tilde{k}_{t-1}\) is the previous summary vector of the controller hidden state. \(W_{c},W_{h},W_{\tilde{k}}\), and \(v\) are parameters and \(b_{u}\) is a bias term. We can then compute adaptive summary vectors \(\tilde{k}_{t}\) and \(\tilde{c}_{t}\) as a weighted sum of the cache \(\Gamma^{p}\) and the controller memory tape \(C_{t-1}\):
\[\begin{bmatrix}\tilde{k}_{t}\\ \tilde{c}_{t}\end{bmatrix}=\sum_{i=1}^{N}s_{i}^{t}\cdot\begin{bmatrix}\gamma_ {i}^{p}\\ c_{i+\max{(0,t-N-1)}}\end{bmatrix} \tag{6}\]
where \(c_{i+\max{(0,t-N-1)}}\) is the controller memory cell for the corresponding cache element \(\gamma_{i}^{p}\). The attention can be partially viewed as local (Luong et al., 2015), since older cache elements are incorporated through \(\tilde{k}_{t-1}\). These summary vectors are used to compute the recurrent update as follows:
\[\begin{bmatrix}i_{t}\\ f_{t}\\ o_{t}\\ \hat{c}_{t}\end{bmatrix} =\begin{bmatrix}\sigma\\ \sigma\\ \sigma\\ \text{tanh}\end{bmatrix}W\cdot[\tilde{k}_{t},x_{t}] \tag{7}\] \[c_{t} =f_{t}\odot\tilde{c}_{t}+i_{t}\odot\hat{c}_{t}\] (8) \[k_{t} =o_{t}\odot\text{tanh}(c_{t}) \tag{9}\]
Lastly, \(k_{t}\) is used by the revision policy to compute the action \(a_{t}\):
\[\pi_{\theta}(a_{t}|a_{<t},x_{\leq t},y_{<t})=\sigma(\theta^{\top}k _{t}+b_{k}) \tag{10}\] \[a_{t}=\begin{cases}\texttt{REVISE},&\text{if }\sigma(\theta^{\top}k _{t}+b_{k})\geq\tau\\ \texttt{WRITE},&\text{otherwise}\end{cases} \tag{11}\]
where \(\theta\) is a parameter vector, \(b_{k}\) is the bias, and \(\tau\in[0,1]\) is a decision threshold. According to equation (11), a REVISE action is selected only if the policy value is greater than or equal to \(\tau\); otherwise, a WRITE action is chosen. This threshold can be adjusted to encourage or discourage the recomputation frequency without the need to train the policy. Our model is equal to an RNN when \(\tau=1\) (never recompute), and becomes a restart-incremental Transformer when \(\tau=0\) (always recompute).
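The decision rule of Eqs. (10)-(11) is simple enough to sketch directly. In this sketch, \(\theta\) and \(b_{k}\) are random placeholders, whereas the actual model learns them during training:

```python
import numpy as np

def select_action(k_t, theta, b_k, tau=0.5):
    """Eqs. (10)-(11): emit REVISE iff the sigmoid policy score reaches tau.
    tau=1 recovers a plain RNN (never recompute); tau=0 recovers a
    restart-incremental Transformer (always recompute)."""
    p_revise = 1.0 / (1.0 + np.exp(-(theta @ k_t + b_k)))
    return ("REVISE" if p_revise >= tau else "WRITE"), p_revise

# Toy call with random placeholder parameters.
rng = np.random.default_rng(1)
action, p = select_action(rng.normal(size=8), rng.normal(size=8), b_k=0.0)
print(action, round(float(p), 3))
```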
### Incremental Inference Mechanism
Using the policy, Tapir predicts when to perform a recomputation. Assume that an input token \(x_{t}\) is fed to the RNN component to obtain \(y_{t}\). The controller then reads \(x_{t}\), \(h_{t}\), and \(\Gamma^{p}\) to compute \(a_{t}\). If a REVISE action is emitted, the input buffer (containing all available inputs so far) will be passed to the reviser to yield the recomputed outputs. When this happens, both \(z\) and \(\varphi\) stored in the caches also need to be updated to reflect the effect of the recomputation. The recomputation of past \(z\) and \(\varphi\) will occur simultaneously with the computation of \(z\) and \(\varphi\) for the current time step to update \(\Gamma^{z}\) and \(\Gamma^{p}\) using the recomputed outputs. If a WRITE action is emitted, we take \(y_{t}\) to be the current output and continue to process the next token. The content of \(\Gamma^{z}\) and \(\Gamma^{p}\) are also updated for the current step. The cache \(\Gamma^{h}\) is always updated regardless of which action the policy takes. See algorithm in the Appendix.
Let us use Figure 1 and \(\tau=0.5\) as a constructed example. At \(t=1\), the incremental processor consumes the token \(the\), updates its hidden state and predicts the POS-tag \(\mathtt{det}\). The controller predicts that the probability for recomputation is _e.g._ 0.3. Since it is lower than \(\tau\), \(\mathtt{det}\) gets written to the output buffer, the memory is updated and the current step is finished. A similar decision happens at \(t=2\) and _alert_ is classified as \(\mathtt{noun}\). At \(t=3\), however, the controller predicts that a \(\mathtt{REVISE}\) action should occur after the input _citizens_. That triggers the reviser, which takes _the alert citizens_ as input and returns \(\mathtt{det}\) \(\mathtt{adj}\) \(\mathtt{noun}\). The output buffer gets overwritten with this new hypothesis and the caches are recomputed to accommodate the new state. This dynamic continues until the end of the sentence.
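The loop can be summarised in a few lines of Python. This is a sketch with the three components stubbed as callables and the cache updates omitted; the toy components below merely reproduce the walk-through up to \(t=3\):

```python
def tapir_inference(rnn_step, policy_score, reviser, tokens, tau=0.5):
    """Sketch of Tapir's incremental loop. The three components are passed
    in as callables; their internals (and the cache updates) are stubbed."""
    outputs, state = [], None
    for t, x in enumerate(tokens):
        y_t, state = rnn_step(x, state)          # first pass: candidate output
        if t > 0 and policy_score(x, state) >= tau:
            outputs = reviser(tokens[: t + 1])   # REVISE: second pass over buffer
        else:
            outputs.append(y_t)                  # WRITE: commit the candidate
    return outputs

# Toy components reproducing the walk-through above (REVISE fires at t=3).
rnn = lambda x, s: ({"the": "det", "alert": "noun"}.get(x, "noun"), s)
score = lambda x, s: 0.9 if x == "citizens" else 0.1
rev = lambda toks: ["det", "adj", "noun"][: len(toks)]
print(tapir_inference(rnn, score, rev, "the alert citizens".split()))
# ['det', 'adj', 'noun']
```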
### Training
Jointly training all components of such a two-pass model from scratch can be unstable (Sainath et al., 2019), so we opt for a two-step training process:
1. Train only the reviser using cross entropy loss.
2. Train the incremental processor and the controller together with a combined loss: \[\mathcal{L}=CE(y^{\text{gold}},y)+BCE(a^{\text{LT}},a)\] (12) where \(y^{\text{gold}}\) is the expected output and \(a^{\text{LT}}\) is the expected action.
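A minimal NumPy sketch of the combined loss in step 2 above (the shapes, the numerical guard, and the toy call are illustrative assumptions rather than the authors' implementation):

```python
import numpy as np

def combined_loss(y_logits, y_gold, a_prob, a_lt):
    """Eq. (12): token-level cross entropy for the incremental processor plus
    binary cross entropy between the controller's REVISE probabilities and
    the LT-derived actions. Shapes: y_logits (T, L); y_gold (T,) int;
    a_prob, a_lt (T,)."""
    log_p = y_logits - np.log(np.exp(y_logits).sum(-1, keepdims=True))
    ce = -log_p[np.arange(len(y_gold)), y_gold].mean()
    eps = 1e-9  # numerical guard for log(0)
    bce = -np.mean(a_lt * np.log(a_prob + eps)
                   + (1 - a_lt) * np.log(1 - a_prob + eps))
    return ce + bce

# Toy call: 3 time steps, 4 labels.
rng = np.random.default_rng(2)
print(combined_loss(rng.normal(size=(3, 4)), np.array([0, 2, 1]),
                    np.array([0.1, 0.8, 0.2]), np.array([0.0, 1.0, 0.0])))
```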
## 4 Supervision Signal for Revision
During incremental sentence comprehension, a revision or reanalysis occurs when disambiguating material rules out the current sentence interpretation. In Figure 1, \(\mathtt{noun}\) is a valid label for _suspect_ at \(t=6\), but _person_ at \(t=7\) rules that analysis out, forcing a reanalysis to \(\mathtt{adj}\) instead.
Training \(\mathtt{Tapir}\)'s controller requires a sequence of \(\mathtt{WRITE}\)/\(\mathtt{REVISE}\) actions expressed as the supervision signal \(a^{\text{LT}}\) in equation (12), capturing when revision happens. This signal then allows us to frame the policy learning as a supervised learning task (as in the work of Zheng et al. (2019)).
If we have the sequence of output prefix hypotheses at each step, as shown in Figure 1, we know that the steps when revisions have occurred are \(t=\{3,7\}\). We can then construct the sequence of actions we need. The first action is always \(\mathtt{WRITE}\) as there is no past output to revise at this step. For \(t>1\), the action can be determined by comparing the partial outputs at time step \(t\) (excluding \(y_{t}\)) against the partial outputs at time step \(t-1\). If no edits occur, then the partial outputs after processing \(x_{t}\) should not change, and a \(\mathtt{WRITE}\) action is appended to the sequence. If any edits occur, we append a \(\mathtt{REVISE}\) action instead.
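This comparison procedure is mechanical and can be sketched directly; the prefix log below is a hypothetical four-step excerpt in the spirit of Figure 1:

```python
def derive_actions(prefix_log):
    """Derive a WRITE/REVISE supervision sequence from the action
    generator's log of partial outputs (one label list per time step)."""
    actions = ["WRITE"]  # the first action is always WRITE
    for prev, curr in zip(prefix_log, prefix_log[1:]):
        # Compare past outputs only (the newest label y_t is excluded).
        actions.append("REVISE" if curr[: len(prev)] != prev else "WRITE")
    return actions

# First four steps of Figure 1: 'alert' is re-tagged adj at t=3.
log = [["det"],
       ["det", "noun"],
       ["det", "adj", "noun"],
       ["det", "adj", "noun", "verb"]]
print(derive_actions(log))  # ['WRITE', 'WRITE', 'REVISE', 'WRITE']
```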
Intermediate human judgements about when to revise are not available, so we need to retrieve them from a model. It is possible to obtain this information from a restart-incremental Transformer, by comparing how the prefix at \(t\) differs from the prefix at \(t-1\). However, as shown by Kahardipraja et al. (2021), the signal captured using this approach may lack incremental quality due to the missing recurrence mechanism. Using a recurrent model is advisable here, as it can capture order and hierarchical structure in sentences, which is apparently hard for Transformers (Tran et al., 2018; Hahn, 2020; Sun and Lu, 2022). But it is difficult to retrieve this signal using vanilla RNNs because their recurrence only allows a unidirectional information flow, which prevents a backward update of past outputs.
Therefore, we opt for the Linear Transformer (LT) (Katharopoulos et al., 2020), which can be viewed both as a Transformer and as an RNN.5 To generate the action sequences, we first train the action generator LT with causal mask to mimic an RNN training. Afterwards, it is deployed under restart-incrementality on the same set used for training with the mask removed. We collect the sequence of partial prefixes for all sentences and use it to derive the action sequences.
Footnote 5: For a detailed proof of why LT is more suitable, see the Appendix.
## 5 Experiments
**Datasets**. We evaluate \(\mathtt{Tapir}\) on four tasks in English, for NLU and task-oriented dialogue, using seven sequence labelling datasets:
* Slot Filling: SNIPS (Coucke et al., 2018); Alarm, Reminder & Weather (Schuster et al., 2019) and MIT Movie (Liu et al., 2013).
* PoS Tagging: CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) and UD-EWT (Silveira et al., 2014).
* Named-Entity Recognition (CoNLL-2003).
* Chunking (CoNLL-2003).
Table 1 shows the distribution of generated actions in the final training set for each task. Further details regarding the datasets and generated action sequences are available in the Appendix.
**Evaluation**. An ideal incremental model deployed in real-time settings should (i) exhibit good incremental behaviour, _i.e._ produce correct and stable partial hypotheses and timely recover from its mistakes; (ii) be efficient for inference by delivering responses without wasting computational resources; and (iii) not come with the cost of a negative impact on the non-incremental performance, _i.e._ produce correct final outputs. Achieving all at the same time may be hard, so trade-offs can be necessary.
We evaluate Tapir on these three relevant dimensions. For (i), we use similarity and diachronic metrics6 proposed by Baumann et al. (2011) and adapted in Madureira and Schlangen (2020): _edit overhead_ (EO, the proportion of unnecessary edits over all edits), _correction time score_ (CT, the average proportion of time steps required for an output increment to settle down), and _relative correctness_ (RC, the proportion of output prefixes that match with the final output). Aspect (ii) is analysed by benchmarking the incremental inference speed. For (iii), we use the F1 score adapted for the IOB sequence labelling scheme, except for PoS tagging, which is evaluated by measuring accuracy.
Footnote 6: This metric can also be used for incremental evaluation involving frame semantics. See Atterer et al. (2009) for details.
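For concreteness, here are simplified sketches of two of these metrics; they treat each changed label as a single edit, whereas the original definitions of Baumann et al. (2011) operate on sequences of add/revoke edits:

```python
def edit_overhead(prefix_log):
    """Simplified EO: each final label must be written once (necessary);
    any change to an already-emitted position counts as an unnecessary edit."""
    necessary = len(prefix_log[-1])
    unnecessary = sum(p != c
                      for prev, curr in zip(prefix_log, prefix_log[1:])
                      for p, c in zip(prev, curr))
    return unnecessary / (necessary + unnecessary)

def relative_correctness(prefix_log):
    """RC: fraction of partial outputs that are prefixes of the final output."""
    final = prefix_log[-1]
    return sum(curr == final[: len(curr)] for curr in prefix_log) / len(prefix_log)

log = [["det"], ["det", "noun"], ["det", "adj", "noun"]]
print(edit_overhead(log), relative_correctness(log))  # 0.25 0.666...
```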
Rather than trying to beat the state-of-the-art results, we focus on analysing the incremental abilities of models whose performances are high enough for our purposes. As a reference model, we use a Transformer encoder applied in a restart-incremental fashion, which implicitly performs revision at every step. We follow Baumann et al. (2011) and Madureira and Schlangen (2020) by evaluating partial outputs with respect to the final output, to separate between incremental and non-incremental performance.
| Tasks | WRITE | REVISE |
| --- | --- | --- |
| SNIPS | 0.777 | 0.223 |
| ARW | 0.811 | 0.189 |
| Movie | 0.765 | 0.235 |
| NER | 0.895 | 0.105 |
| Chunk | 0.687 | 0.313 |
| PoS | 0.769 | 0.231 |
| EWT | 0.712 | 0.288 |

Table 1: Distribution of generated actions (train+val).
Figure 3: Incremental evaluation of the models on test sets. Edit Overhead, Correction Time Score and Relative Correctness \(\in[0,1]\). Lower is better for EO and CT, while higher is better for RC. Tapir is better compared to the reference model for the non-delayed case (output prefixes are often correct and stable). The delay strategy of one lookahead token is beneficial.
**Delay strategy**. To inspect the effect of right context on the model's performance, we use the delay strategy (Baumann et al., 2011) with a lookahead window of size 1 and 2, computing a delayed version of EO and RC (Madureira and Schlangen, 2020). The output for the reference model is delayed only during inference, as in Madureira and Schlangen (2020). For Tapir, the same treatment would not be possible as it contains an RNN that must be able to recognise the output delay. Thus, we follow the approach of Turek et al. (2020): During training and inference, the label for input \(x_{t}\) is expected at time step \(t+d\), where \(d\) is the delay.
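A small sketch of the target shifting used for Tapir's delayed training; the placeholder token and function name are our own, and trailing labels that would require \(d\) extra steps are ignored here:

```python
def delayed_pairs(tokens, labels, d):
    """Delay-strategy sketch: with a lookahead of d, the label for x_t is
    expected at step t+d, so targets are right-shifted by d placeholder
    steps."""
    shifted = ["<WAIT>"] * d + (labels[:-d] if d > 0 else labels)
    return list(zip(tokens, shifted))

print(delayed_pairs("the alert citizens saw".split(),
                    ["det", "adj", "noun", "verb"], d=1))
# [('the', '<WAIT>'), ('alert', 'det'), ('citizens', 'adj'), ('saw', 'noun')]
```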
**Implementation**. For the reviser component, we choose Transformer (Trf) and Linear Transformer (LT) encoders trained with full attention.7 The reference model is trained with cross entropy loss similar to the reviser. All models are trained with the AdamW optimiser (Loshchilov and Hutter, 2019). We use 300-D GloVe embeddings (Pennington et al., 2014), which, for the reference model and the reviser, are passed through an additional linear projection layer. The probability threshold \(\tau\) is set to 0.5. We report results for a single run with the best hyperparameter configuration. See Appendix for details about the set-up and experiments.
Footnote 7: We previously tried to train both revisers with a causal mask to make revision more robust, as it can occur before the input is complete; however, our preliminary results showed that training without the mask yields better results.
## 6 Results and Analysis
**Incremental**. Figure 3 depicts the incremental evaluation results. For the no-delay case, Tapir performs better compared to the reference model. We also observe that the delay strategy helps improve the metrics. It improves the results for Tapir, in general, but a longer delay does not always yield a better incremental performance. We suspect this happens for two possible reasons: First, if we consider the case where the delay is 1, TAPIR has already achieved relatively low EO (\(<0.1\)) and high RC (\(>0.85\)). This, combined with its non-monotonic behaviour, might make it harder to further improve on both incremental metrics, even if a longer delay is allowed. Second, a longer delay means that our model needs to wait longer before producing an output. In the meantime, it still has to process incoming tokens, which might cause some difficulty in learning the relation between the input and its corresponding delayed output. As a consequence, we have mixed results when comparing EO and RC for the delayed version of the reference model and Tapir. Their differences are, however, very small. Tapir achieves low EO and CT score, which indicates that the partial output is stable and settles down quickly. RC is also high, which shows that, most of the time, the partial outputs are correct prefixes of the final, non-incremental output and would be useful for downstream processing.
**Benchmark**. Table 2 shows that Tapir is considerably faster compared to the reference model in incremental settings, as it offers, on average, \(\sim\)4.5\(\times\) speed-up in terms of sequences per second.8
Footnote 8: We use the same models as in Table 3 and Figure 3 for benchmarking, as the policy affects the inference speed.
**Non-Incremental**. The performance of the restart-incremental reference model and our model on full sentences is shown in Table 3. The results of Tapir, in particular with the Transformer reviser (Tapir-Trf), are roughly comparable to the reference model, with only modest differences (0.96% - 4.12%). Tapir-Trf performs slightly better than Tapir-LT. This is possibly due to the approximation of softmax attention in LT, which leads to degradation in the output quality. Furthermore, we see that delay of 1 or 2 tokens for Tapir is generally beneficial.9 Note that we do not force a REVISE action at the final time step to examine the effect of the learned policy on Tapir's performance, although that would be a strategy to achieve the same non-incremental performance as the reference model.
| Tasks | Ref. | Tapir-Trf | Tapir-LT |
| --- | --- | --- | --- |
| SNIPS | 1.103 | 4.958 (4.50\(\times\)) | **8.983 (8.15\(\times\))** |
| ARW | 2.339 | **8.734 (3.73\(\times\))** | 5.959 (2.55\(\times\)) |
| Movie | 0.927 | **3.520 (3.80\(\times\))** | 3.432 (3.70\(\times\)) |
| NER | 0.675 | 4.465 (6.62\(\times\)) | **4.502 (6.67\(\times\))** |
| Chunk | 0.688 | **2.714 (3.95\(\times\))** | 1.912 (2.78\(\times\)) |
| PoS | 0.672 | 4.111 (6.12\(\times\)) | **7.400 (11.01\(\times\))** |
| EWT | 0.819 | **3.659 (4.47\(\times\))** | 3.122 (3.81\(\times\)) |
| Average | 1.032 | 4.594 (4.45\(\times\)) | **5.044 (4.89\(\times\))** |

Table 2: Comparison of incremental inference speed on test sets. Tapir is \(\sim\)4.5\(\times\) faster compared to the reference model. All results are in sentences/sec.
### Detailed Analysis
In the next paragraphs, we assess Tapir-Trf on aspects beyond the basic evaluation metrics.
**Policy Effectiveness**. Figure 4 shows the distributions of actions and states of the output prefixes. Here, a prefix is considered correct if all its labels match the final output, and incorrect otherwise. We start by noticing that most of the actions are WRITE, and among them, very few occur when the prefix is incorrect. Tapir is thus good at recognising states where recomputation is not required, supporting its speed advantage. A good model should avoid revising prefixes that are already correct. We see that, for all datasets, the vast majority of the correct prefixes indeed do not get revised. A utopian model would not make mistakes (and thus never need to revise) or immediately revise incorrect prefixes. In reality, this cannot be achieved, given the incremental nature of language and the long-distance dependencies. As a result, incorrect prefixes are expected to have a mixed distribution between actions, as the model needs to wait for the edit-triggering input, and our results corroborate that. Finally, among the REVISE actions (_i.e._ the lighter bars in the bottom area), there is still a considerable relative number of unnecessary revisions occurring for correct prefixes. We see room for further refinement of the policy in that sense, but, in absolute numbers, the occurrence of recomputations is much lower than in the restart-incrementality paradigm, where all steps require a recomputation.
**Qualitative analysis**. Figure 5 shows two examples of how Tapir behaves in incremental slot filling (more examples in the Appendix), showing that it performs critical revisions that would not be possible with a monotonic model.
At the top, the model must produce labels for unknown tokens, which is harder to perform correctly. The first UNK token is initially interpreted as a city at \(t=6\), which is probably deemed as correct considering the available left context. The controller agrees with this, producing a WRITE action. However, when _heritage_ and the second UNK token have been consumed at \(t=8\), the incremental processor labels them as parts of a geographic point of interest. The controller is able to notice the output inconsistency as I-geographic_poi should be preceded by B-geographic_poi (following the IOB scheme) and emits a REVISE action. As a result, the label B-city is correctly replaced.
In the second example, Tapir produces interesting interpretations.
Figure 4: Distribution of actions and output prefixes by dataset. Most of the actions are WRITE and most of the partial prefixes which are correct do not get unnecessarily revised. Incorrect prefixes cannot always be immediately detected, as expected. Part of the REVISE actions are dispensable, but in a much lower frequency than in the restart-incremental paradigm.
Figure 5: Examples of incremental inference (from SNIPS and Movie) for Tapir-Trf. Edited labels are marked by a diamond symbol, with the immediate past output at the top right corner for right-frontier edits. Red labels are incorrect with respect to the final output.
| Tasks | Ref. | Tapir-Trf (no delay) | Tapir-Trf (D1) | Tapir-Trf (D2) | Tapir-LT (no delay) | Tapir-LT (D1) | Tapir-LT (D2) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SNIPS | **91.05** | 88.57 | 90.47 | 89.45 | 85.95 | 88.07 | 87.28 |
| ARW | **95.63** | 93.35 | 95.17 | 95.15 | 92.84 | 93.65 | 94.50 |
| Movie | **83.98** | 82.85 | 83.26 | 82.95 | 81.40 | 83.16 | 82.21 |
| NER | **78.25** | 74.13 | 76.85 | 78.04 | 73.12 | 73.79 | 75.75 |
| Chunk | **88.35** | 86.85 | 87.48 | 87.52 | 85.03 | 86.43 | 85.79 |
| PoS | **92.28** | 91.32 | 91.35 | 91.49 | 90.90 | 90.83 | 90.65 |
| EWT | **92.14** | 90.84 | 92.00 | 91.95 | 90.20 | 91.34 | 90.93 |

Table 3: Non-incremental performance of the models on test sets (first group is F1, second group is accuracy). \(D=\) delay. The performance of Tapir is roughly comparable to the reference model.
It initially considers _woods_ to be an actor name at \(t=4\). When it reads _have_, the reanalysis triggered by the controller interprets _woods_ as a part of a title, _the woods_. The model revises its hypothesis again at \(t=6\), and decides that the complete title should be _the woods have eyes_. It still makes a mistake at the last time step, opting for a (wrong) revision of O to B-RATING for _rated_ when it should be unnecessary.
**Effect of Threshold**. Figure 6 portrays the effect of the probability threshold \(\tau\) on incremental and non-incremental metrics. As \(\tau\) increases, the incremental performance improves while the non-incremental performance deteriorates. This happens as higher \(\tau\) discourages recomputation and makes the model closer to an RNN. In return, it is harder for the model to revisit its past decisions.
## 7 Conclusion
We proposed Tapir, a two-pass model capable of performing adaptive revision in incremental scenarios _e.g._ for dialogue and interactive systems. We also demonstrated that it is possible to obtain an incremental supervision signal using the Linear Transformer (LT), in the form of WRITE/REVISE action sequences, to guide the policy learning for adaptive revision. Results on sequence labelling tasks showed that Tapir has a better incremental performance than a restart-incremental Transformer, in general, while being roughly comparable to it on full sentences. The delay strategy helps to improve incremental and non-incremental metrics, although a longer delay does not always yield better results.
The ability to revise adaptively provides our model with substantial advantages over using RNNs or restart-incremental Transformers. It can fix incorrect past outputs after observing incoming inputs, which is not possible for RNNs. Looking from the aspect of efficiency, our model is also better compared to restart-incremental Transformers as the recomputation is only performed when the need for it is detected. Tapir is consequently faster in terms of inference speed.
Figure 6: Effect of the probability threshold \(\tau\) on incremental and non-incremental metrics, using Tapir-Trf. Increasing \(\tau\) leads to improvement of incremental metrics at the cost of non-incremental performance.
### Limitations
In this section, we discuss some of the known limitations of our set-up, data and models.
To handle unknown words in the test sets, we replace them with a special UNK token, which is also used to mask some tokens in the training set. The UNK token provides little information regarding the actual input, and Tapir might be unable to fully utilise the token to refine its interpretation of the past output. This has a direct influence on the incremental metrics, as the model can exploit this property by using the UNK token as a cue to emit the REVISE action. This strategy also introduces the extra hyperparameter of what proportion of tokens to mask.
We put effort into achieving a diverse selection of datasets in various tasks, but our analysis is limited to English. We are reporting results on the datasets for which the non-incremental versions of the model could achieve a performance high enough to allow a meaningful evaluation of their incremental performance. Tuning is still required to extend the analysis to other datasets.
Related to these two issues, we decided to use tokens as the incremental unit for processing. We follow the tokenization given by the sequence labelling datasets we use. Extending the analysis for other languages requires thus a good tokenizer, and annotated data, which may not exist. We may also inherit limitations from the datasets that we use. Although we do not include an in-depth analysis of the datasets, as our focus is on the model and not on solving the tasks themselves, they are widely used by the community and details are available in their corresponding publications.
The method we propose to retrieve the action sequences depends on the chosen model, and the grounding of the action sequences in the actual prefix outputs has a direct influence on training the controller. Therefore, the decisions made by Tapir rely on the quality of the underlying generated action sequences. In order to ensure that the internal representations of the action generator LT do not depend on right context, we had to restrict ourselves to a single-layer variant of this model when generating the sequence of actions. It is possible that with more layers its behaviour would be different, but that would invalidate the assumptions needed for an incremental processor.
When it comes to the Tapir architecture, the attention scores for the controller are computed independently of temporal order, and we do not explicitly model relations between cache elements. The limited cache size also means that some past information has to be discarded to accommodate incoming inputs. Although we have made efforts to incorporate older elements through the summary vector, this might not be ideal due to the information bottleneck.
## Ethics Statement
We do not see any immediate ethical issues arising from this work, beyond those inherent to NLP which are under discussion by the community.
## Acknowledgements
We thank the anonymous reviewers for their valuable and insightful comments and suggestions. This work is partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project ID 423217434 (Schlangen).
|
2310.14895 | Astronomical Anomalies: Their Role in the Quest for Extraterrestrial
Life | Astronomers occasionally detect an object having unexpected shape,
unexplainable photometry, or unprecedented spectra that are inconsistent with
our contemporary knowledge of the universe. Upon careful assessment, many of
these anomalies are discarded as mere noise, contamination, or faulty analysis.
But some anomalies survive scrutiny to yield new astronomical objects and
physical processes. Examples of validated anomalies include quasars, pulsars,
and periodic Doppler shifts of Sun-like stars caused by the gravitational pull
of orbiting planets. Other anomalies persist as mysteries, including Fast Radio
Bursts, dark energy, 'Oumuamua as an alien spaceship, and simultaneously
vanishing stars. Advanced technological life may present astronomers with
anomalies that require carefully designed observations from multiple vantage
points simultaneously and with real-time spectroscopy. | Beatriz Villarroel, Geoffrey W. Marcy | 2023-10-02T05:58:26Z | http://arxiv.org/abs/2310.14895v1 | ## Astronomical Anomalies: Their Role in the Quest for Extraterrestrial Life
### Abstract
Astronomers occasionally detect an object having unexpected shape, unexplainable photometry, or unprecedented spectra that are inconsistent with our contemporary knowledge of the universe. Upon careful assessment, many of these anomalies are discarded as mere noise, contamination, or faulty analysis. But some anomalies survive scrutiny to yield new astronomical objects and physical processes. Examples of validated anomalies include quasars, pulsars, and periodic Doppler shifts of Sun-like stars caused by the gravitational pull of orbiting planets. Other anomalies persist as mysteries, including Fast Radio Bursts, dark energy, 'Oumuamua as an alien spaceship, and simultaneously vanishing stars. Advanced technological life may present astronomers with anomalies that require carefully designed observations from multiple vantage points simultaneously and with real-time spectroscopy.
### Great Anomalous Moments in Astronomy
The word "anomaly" is defined by the Oxford English Dictionary as "a thing, situation, etc. that is different from what is normal or expected." In the field of astronomy, an anomaly becomes the subject of detailed observations, studied with images, spectra, and light curves. Sometimes, once the anomaly has been discovered and published in a reputable peer reviewed journal, fellow colleagues pick up on it and decide to perform their own analysis.
When Maarten Schmidt spotted a bright dot of light with a greatly Doppler-shifted spectrum in 1963, the object was considered an anomaly. Within a few years, further observations showed that these objects, now known as quasars, are actually outside our own Galaxy. Schmidt is credited with the discovery of the first quasar, known as 3C 273 (Schmidt, 1963). Twenty years later, astronomers realized that quasars are luminous centers of galaxies powered by hot gas falling onto supermassive black holes. Astronomers have now cataloged millions of them. Observations show that supermassive black holes exist at the centers of nearly all galaxies, all of which were "active galactic nuclei" in the past when accreting gas. Thus, we now have a complete turn-around: A galaxy that does not harbor a supermassive black hole and was never a quasar is now the anomaly.
History shows that an anomaly is at risk of being discarded as the result of a faulty analysis. Other times, the anomaly makes a comeback after having been buried, forgotten, or dismissed due to biases, sociological factors, or the passage of time. One still unresolved anomaly is Halton Arp's famous discovery of _anomalous redshifts_, where pairs of galaxies apparently connected by bridges nonetheless have different redshifts. According to the commonly accepted expanding universe model, the redshifts indicate vastly different distances for the two galaxies, which would conflict with the apparent "bridge" between the two. Arp and others, including anti-Big Bang-proponents Margaret and Geoffrey Burbidge and Fred Hoyle, soon found many more examples of galaxies with anomalous redshifts, claiming that quasars tended to surround themselves with many more background and foreground objects than normal galaxies (Hoyle & Burbidge, 1996, Burbidge et al., 2003). These anomalous bridged but redshift-discrepant quasars challenged the notion that the redshift of a quasar truly indicates its distance from us. Thus, the Universe might not be expanding.
When the Armenian astronomer Victor Ambartsumian hypothesized that galaxies are actually splitting and giving birth to one another, rather than gravitationally colliding, Halton Arp proposed that quasars actually are newly born galaxies ejected from a galaxy core (obtaining a high redshift upon ejection), and thus could be found in alignment with other objects. After a few heated debates, the discussion of anomalous redshifts died off without a final resolution (Lopez-Corredoira, 2010) even as the consensus has been to disregard the
effects as due to poor statistical analysis. So an anomaly might be abandoned before it's even solved, simply because scientists find no meaningful resolution and grow tired of it! Each subtopic of modern astronomical research has its own set of anomalies.
Today, some of the most fascinating anomalies are those that relate to our searches for extraterrestrial life. Some of these anomalies have been thoroughly reported in the popular science media. The question of whether or not we humans are alone as a technological civilization within our Milky Way Galaxy is so intriguing to scientists and the public that even a remote possibility that an anomalous observation is caused by the activity or presence of "little green men" serves as a driving force to find an underlying explanation.
### From Little Green Men to Exoplanets
In 1959, Giuseppe Cocconi and Philip Morrison published a seminal work, "Searching for Interstellar Communications," in the journal Nature where they speculated that just as we humans are unwittingly broadcasting our existence to our interstellar neighbors through radio waves, our interstellar neighbors may similarly be revealing their presence to us. When a young PhD student named Jocelyn Bell, one late summer in 1967 in rainy Cambridge, detected a mysterious repeating radio signal (Hewish et al., 1968) with a new telescope she herself had helped build, it was no surprise that she immediately alerted her advisor, Anthony Hewish. They found the radio waves arrived in pulses, each lasting a fraction of a second, followed by a pause of 1.3 seconds. The mysterious radio source came from a particular coordinate in the sky, which made it obvious that it did not come from our own Earth. What could this mysterious source be?
Soon after, this signal was referred to as "Little Green Men-1" (LGM-1), and astronomers started considering the possibility of alien activity. This anomaly would have remained mysterious if others had not been found, making it clear that pulsating objects are a natural phenomenon and abundant in the universe. Today, pulsars are understood to be dense, compact objects only a few tens of kilometers in size, with strong magnetic fields (up to a trillion Gauss), that spin hundreds of times each second around their axes, emitting double beams of accelerated particles that we observe on and off as the object quickly rotates. These quizzical false-alarm little green men form at the end of the lives of massive stars, when they explode as supernovae only to collapse to a neutron star. Anthony Hewish went on to receive the Nobel Prize for the discovery in 1974, while Jocelyn Bell Burnell had to wait 44 more years to receive the nearly as prestigious Special Breakthrough Prize in Fundamental Physics in 2018.
But that was not the end of the story with pulsars. Their regular pulses constitute accurate clocks and they sometimes come in orbiting pairs bound by gravity. So these "orbiting clocks" provide an excellent test of
Einstein's theory that predicts the existence of gravitational waves. When one such pair was discovered, gravitational waves were demonstrated indirectly through the decrease of the orbital period of the system as the two neutron stars spiral toward each other. The pair of pulsars was losing energy through the emission of gravitational waves, as Einstein's theory predicted. Impressed by the agreement between the predictions and the observations (Weisberg et al., 1981, Taylor & Weisberg, 1982), the Swedish Royal Academy of Sciences decided to award a second Nobel Prize for pulsars, this time going to Russell Hulse and Joseph Taylor Jr.
In 1992, Alexander Wolszczan and Dave Frail announced the discovery of the first exoplanet found outside the Solar System (Wolszczan & Frail, 1992). This was the confirmation that other star systems might form planets as well, not only those that formed around our own Sun. A few years later, the first three exoplanets around normal, hydrogen-burning stars were found by two competing teams, Michel Mayor and Didier Queloz (1995) and Geoffrey Marcy and Paul Butler (1996), accomplished by detecting Doppler shifts that vary periodically.
However, the first periodic Doppler shift found around 51 Pegasi was widely doubted to be due to an actual planet, and for good reasons. The orbital period was anomalously short, only 4.3 days, much shorter than any "normal" planet in our Solar System. Also, the Doppler shift varied as a perfect sine wave, which is consistent with a star that was merely pulsating, breathing in and out. Such pulsations are well known to exist and constituted a less "anomalous" explanation than the existence of a "planet." Moreover, the periodic Doppler shifts could result from a binary star in a nearly "face-on" orbit. However, the discovery of Doppler periodicity in the Sun-like star 70 Virginis was definitively due to a planet. Its period of 117 days placed it at a distance comparable to Mercury's orbit. Even better, the orbit was clearly eccentric, with an oval shape exactly as Newton's Laws predicted, making it unmistakably an orbiting planet rather than the pulsation of a star (Marcy & Butler, 1996). 70 Virginis, along with 47 Ursae Majoris with its two-year orbital period, pointed to a high prevalence of planetary systems in a diversity of orbits, and many opportunities for life.
### The Discovery of Fast Radio Bursts
About 15 years ago, two researchers at the Parkes Observatory were scanning archival data taken some years earlier with the radio telescope. In the data, they saw a short burst, lasting less than 5 milliseconds. The short burst was extremely powerful and came from a source outside our galaxy. A new anomaly in the radio wavelengths had been found. This first Fast Radio Burst (FRB) was discovered by Duncan Lorimer and his student David Narkevic (Lorimer et al., 2007). Today, we know of about 500 FRBs, some repeating themselves irregularly, a few regularly. The spacing between the pulses varies, and some can have periods as long as 18 minutes (see e.g. Hurley-Walker et al., 2022).
Figure 3: A sketch of the structure of a rotating pulsar that was initially anomalous but now serves as an accurate cosmic clock. (courtesy, Bill Saxton/NRAO/AUI/NSF)
The FRBs are emitted from a source with an extremely strong magnetic field. The origin of these bursts is heavily debated; they are thought to be caused either by highly magnetized neutron stars or possibly by the merger of compact objects. It is difficult to build reliable theoretical models of these objects because the bursts are very short and difficult to localize in the radio spectrum. Last year, some significant progress was made when an FRB inside our own Galaxy, FRB 200428, was localized by the Canadian Hydrogen Intensity Mapping Experiment (The CHIME/FRB Collaboration, 2020). The FRB was also emitting at gamma- and x-ray frequencies. The scientists proposed that the burst of light could be produced when the jet from a magnetar (a neutron star with an extremely strong magnetic field) collides with the surrounding interstellar medium and produces a shock wave.
But even here astronomers have not refrained from discussing alternative possibilities such as ET technology (Lingam & Loeb, 2017). The speculations on the possible ET origin of FRBs peaked at a time when a particular subcategory of FRBs was found and dubbed "perytons," radio bursts having pulses that were about 250 milliseconds in duration at the 1.4 GHz radio frequency (Burke-Spolaor et al., 2011). Was ET signaling to us via these very peculiar and sharp bursts? The mystery further deepened as researchers noticed that the bursts only appeared on weekdays and during office hours. How clever--ET must already be familiar with our habits! More clues emerged until it was finally established that the perytons were caused by two microwave ovens at the observatory that emitted small bursts whenever a lunchgoer opened the microwave door prematurely (Petroff et al., 2015)!
### A Megastructure Around Boyajian's Star
Sometimes an anomaly is an error or an instrumental fluke, but in some cases the object actually has an anomalous behavior that might or might not be natural in its origin. An example is Boyajian's Star, named after its discoverer Tabetha Boyajian (Boyajian et al., 2016). The object showed an unusual dimming. The discovery hit the headlines of all major media in 2016. Jason Wright et al. (2016) described their extraordinary theory for the dimmings of the star this way: "KIC 8462852 [is] an object with a bizarre light curve consistent with a 'swarm' of megastructures. We suggest that this is an outstanding SETI target."
The news media exaggerated this claim in hundreds of newspapers, blogs, podcasts, and documentaries, all highlighting megastructures built by alien super-civilizations, with little mention of the mundane explanation: clumps of dust that could block a small fraction of the light from Boyajian's Star. The media seldom report the more conventional interpretations. It has been well known since the 1940s that young Sun-like stars, called T Tauri stars, vary in brightness sporadically and unpredictably due to dust clouds around them that occasionally move in front of the star, blocking some of the star's light (Bertout, 1989). Indeed, stars somewhat older than our Sun also show brightness variations due to dust that hasn't gone away over the millions of years. Even our Sun still has some dust around it; it's called "Zodiacal dust" and it's the remains of collisions of asteroids that fragment into dust. But in the end, thanks to the large interest in the story, extensive measurements were made of its brightness at different wavelengths from blue to red to the infrared, which showed colors consistent with dust along the line of sight--and the anomaly of Boyajian's Star has gone away, for now.
### There is Life in Venus's Atmosphere
Even when the news reporting is more balanced, not every anomaly survives the test of time. One such example was the suggested discovery of life on Venus last year (Greaves et al., 2021). The discovery was intriguing because we are unaware of any organisms that can survive in a climate as hot as that of Venus, which has a surface temperature of 475 °C. The scientists had analyzed radio waves coming from Venus using the James Clerk Maxwell Telescope and the Atacama Large Millimeter/submillimeter Array. They reported absorption at wavelengths where phosphine absorbs radio waves, thus detecting the gas phosphine in the atmosphere of Venus. Theorists in a group headed by Sara Seager developed atmospheric and chemical models related to microbial life in a "Venusian Aerial Biosphere" (Seager et al., 2021), and their theoretical models predicted that phosphine could not possibly occur unless some microbial life form generated it. So
they concluded that the discovery of phosphine constituted strong evidence for life in the atmosphere of Venus.
Doubts were soon raised, however. One was related to the measurement technique and the method used to identify and quantify the phosphine in the spectrum. Another was that even if the signal was real, the absorption could be caused by some other molecule. Indeed, sulfur dioxide absorbs at nearly the same wavelength (Lincowski et al., 2021). Life in the atmosphere of Venus also seems unlikely because the clouds are composed of sulfuric acid, and any descending organism would fall onto the cauldron of the surface. Even if the signal truly came from phosphine, non-biological origins remain possible, including volcanism (Truong & Lunine, 2021; Bains et al., 2022). Three follow-up observations, including at radio and infrared wavelengths, failed to find phosphine (Snellen et al., 2020; Villanueva et al., 2021; Lincowski et al., 2021). In this case, we see that an interested community can help to quickly investigate the most intriguing claims. But even these latest results are subject to change, as some recent indications suggest the presence of ammonia in droplets in Venus's atmosphere, making the planet more habitable than previously thought. It often takes years to reach a final conclusion, and that may well be the case here as well. We must remain open to the possibility that phosphine, or life, might still be detected on Venus.
### An Alien Spaceship Inside the Solar System
The most heated anomaly since Halton Arp's discrepant redshifts may be 'Oumuamua, which was discovered in 2017 as a point of light moving through the night sky. It entered our Solar System from outside, and then exited, never to return. It was the first object ever discovered to enter our Solar System from outside. It passed through unexpectedly and too quickly to allow careful observations, leaving its properties poorly measured (Meech et al., 2017; Ćuk, 2018; Raymond et al., 2018; Jewitt & Luu, 2019; Moro-Martin, 2019). Although first thought to be a comet, it had no cometary tail, and unlike other comets showed no carbon-based molecules or dust (Trilling et al., 2018). Extensive observations showed that its speed was too great to remain bound to the Sun (JPL, 2017).
A debate about its nature took off. Asteroids and comets are commonly "sling-shot" by gravity as they pass near planets, often achieving escape velocity from their home planetary system. As planetary systems are common around stars, our Milky Way Galaxy must contain millions of billions (> 10\({}^{15}\)) of these wandering rocky escapees, some of which will pass through our Solar System by chance. Thus, 'Oumuamua has a natural explanation, albeit with some observational properties that remain unresolved (e.g. Jewitt et al., 2017; Meech et al., 2017; Luu et al., 2019; Moro-Martin, 2019).
But the natural explanation was not shared by all scientists. Shmuel Bialy and Abraham Loeb (2018) proposed that 'Oumuamua is a spacecraft with a light sail constructed by an advanced civilization. The light sail would be thin, less than a millimeter in thickness, and large enough to allow the object to be pushed by the reflection of sunlight. The anomalous extra push on the object's motion could be explained by such a light sail. This suggestion gained international attention. But many scientists dismissed this spacecraft explanation, given the existence of possible natural explanations.
There is certainly a common "bias" against the spacecraft explanation for 'Oumuamua among scientists. The bias has several origins. First, we humans have never detected technological artifacts in the Galaxy, constituting an "absence of evidence" of alien life despite a century of telescopic observations. Second, the spacecraft explanation attracts too much media attention, which makes some scientists uncomfortable. Third, dogmatic ideas are as prevalent in the scientific community as in any other human endeavor (Lopez-Corredoira & Castro Perelman, 2008).
But the spacecraft theory of 'Oumuamua deserves an assessment of our bias against it. Suppose we had prior information about the prevalence of technological civilizations in our Galaxy, such as observations of radio or laser beacons detected from many directions. Such information might change our "prior" assumptions about the possibility of spacecraft and light sails, functional or not. The relative probability that 'Oumuamua is of natural versus technological origin depends on our prior information, motivating caution about our "gut feeling."
In September 2020, another apparent interstellar visitor with properties similar to those of 'Oumuamua was detected, named 2020 SO. This visitor lacked any signs of outgassing. It turned out to be a rocket booster from a 1966 mission to the Moon.
### Searching for "Vanishing Stars" with the VASCO Project
So far we have seen examples of serendipitous discoveries of anomalies. But one can also search for anomalies deliberately. An example of a project that aims to find weird astrophysical transients in the sky is the Vanishing & Appearing Sources during a Century of Observations (VASCO) project. The project proposal made the case for looking for "vanishing stars" and "impossible effects" in an attempt to find (1) signs of massive, evolved stars that fail to emit a bright supernova as they collapse to a black hole, (2) the activity of extraterrestrial intelligence in action through, for example, interstellar beacons, and (3) new, speculative phenomena like wormholes (Villarroel et al., 2016). From its inception, the project took on the goal of comparing historical images of the sky from the 1950s with modern sky imagery from the Pan-STARRS observatory. An example of a transient that is visible in the old Palomar images is the object 1084-0241525 in the USNO-B1.0 catalog. Deep new observations with the 2.5m Nordic Optical Telescope in the Canary Islands eventually revealed a counterpart to the star, suggesting that the object was something that flared up momentarily in the old images. Soon, the VASCO project found about a hundred short-lived transients that were visible point sources in the old Palomar images but absent in Pan-STARRS data. Most of these are thought to be natural transients, such as flaring stars, optical counterparts to gamma-ray bursts, and similar phenomena. A list of such short-lived transients can even be useful in searches for communication lasers (Villarroel et al., 2020) that also would leave bright, short-lived spots in an image.
Last year, an unusual discovery of nine simultaneously appearing and vanishing star-like objects in a small image--a small fraction of a square degree in the sky--was announced (Villarroel, Marcy, Geier et al., 2021). An image taken half an hour earlier, and another image taken six days later, showed no transients at the spots, suggesting that the objects appeared and vanished within the exposure time of the plate. No astrophysical scenario could be reconciled with this finding. A number of tests for instrumental artifacts also came up empty, revealing nothing dubious about the nine spots. Finally, the authors proposed that the "nine simultaneous transients" could be some "unknown" type of photographic plate contamination (defects) that produced star-like shapes of varying intensities. But they also mentioned the possibility of having found a new phenomenon on the sky.
One possibility discussed was reflections from highly reflective and flat objects in orbits around the Earth. To test this idea further, the VASCO team recently proposed how to look for solar reflections from objects in pre-satellite images (Villarroel et al., 2022) by looking for images where multiple transients follow a line. In this case, whether one finds support for the hypothesis or not, the result will be valuable for studies of possible alien artifacts in orbit around the Earth and to estimate an upper limit on the number of such hypothetical objects. But a solution to the simultaneous transients might be neither plate defects nor alien artifacts. Nature tends to surprise us continuously with new phenomena, and maybe also here, the explanation might be rooted in physics presently unknown to us.

Figure 4: Trajectory of 'Oumuamua through the inner parts of the Solar System in 2017, dated weekly. (Credit: Avi Loeb and Nagual Design, Creative Commons Attribution 4.0 International)
### Progress and Possibility
The discovery of anomalies offers progress in astronomy through discrepancies with expectations, curiosity, and debate. The "little green men" gave researchers an extra push needed to spark human imagination and a desire to gather more observations. Many scientists quote the loss of "credibility" of a certain field when big claims of alien life are made to the media. On the other hand, we have shown with several examples how research activity is often stimulated by the possibility of discovering alien life, regardless of the outcome.
The authors wish to thank Martin J. Ward for helpful discussions.
|
2308.10661 | On the super edge-magicness of graphs with a specific degree sequence | A graph $G$ is said to be super edge-magic if there exists a bijective
function $f:V\left(G\right) \cup E\left(G\right)\rightarrow \left\{1, 2, \ldots
, \left\vert V\left( G\right) \right\vert +\left\vert E\left( G\right)
\right\vert \right\}$ such that $f\left(V \left(G\right)\right) =\left\{1, 2,
\ldots , \left\vert V\left( G\right) \right\vert \right\}$ and $f\left(u\right)
+ f\left(v\right) + f\left(uv\right)$ is a constant for each $uv\in E\left(
G\right) $. In this paper, we study the super edge-magicness of graphs of order
$n$ with degree sequence $s:4, 2, 2, \ldots, 2$. We also investigate the super
edge-magic properties of certain families of graphs. This leads us to propose
some open problems. | Rikio Ichishima, Francesc A. Muntaner-Batle | 2023-08-21T11:55:28Z | http://arxiv.org/abs/2308.10661v1 | # On the super edge-magicness of graphs with a specific degree sequence
###### Abstract.
A graph \(G\) is said to be super edge-magic if there exists a bijective function \(f:V\left(G\right)\cup E\left(G\right)\rightarrow\left\{1,2,\ldots,\left|V \left(G\right)\right|+\left|E\left(G\right)\right|\right\}\) such that \(f\left(V\left(G\right)\right)=\left\{1,2,\ldots,\left|V\left(G\right)\right|\right\}\) and \(f\left(u\right)+f\left(v\right)+f\left(uv\right)\) is a constant for each \(uv\in E\left(G\right)\). In this paper, we study the super edge-magicness of graphs of order \(n\) with degree sequence \(s:4,2,2,\ldots,2\). We also investigate the super edge-magic properties of certain families of graphs. This leads us to propose some open problems.
Key words and phrases:(super) edge-magic graph, (super) edge-magic labeling, vertex degree, degree sequence, graph labeling 2020 Mathematics Subject Classification: 05C07, 05C78
## 1. Introduction
Only graphs without loops or multiple edges will be considered in this paper. Undefined graph theoretical notation and terminology can be found in [4] or [17]. The _vertex set_ of a graph \(G\) is denoted by \(V\left(G\right)\), while the _edge set_ of \(G\) is denoted by \(E\left(G\right)\).
Motivated by the notion of magic valuation of graphs introduced by Kotzig and Rosa [15], Enomoto, Llado, Nakamigawa, and Ringel [5] introduced the concept of super edge-magic graphs. A graph \(G\) is said to be _super edge-magic_ if there exists a bijective function \(f:V\left(G\right)\cup E\left(G\right)\rightarrow\left\{1,2,\ldots,\left|V \left(G\right)\right|+\left|E\left(G\right)\right|\right\}\) such that \(f\left(V\left(G\right)\right)=\left\{1,2,\ldots,\left|V\left(G\right)\right|\right\}\) and \(f\left(u\right)+f\left(v\right)+f\left(uv\right)\) is a constant (called the _valence_\(\operatorname{val}\left(f\right)\) of \(f\)) for each \(uv\in E\left(G\right)\). Such a function is called a _super edge-magic labeling_.
The following lemma found in [6] is a very useful characterization of super edge-magic graphs.
**Lemma 1.1**.: _A graph \(G\) is super edge-magic if and only if there exists a bijective function \(f:V\left(G\right)\rightarrow\left\{1,2,\ldots,\left|V\left(G\right)\right|\right\}\) such that the set_
\[S=\left\{f\left(u\right)+f\left(v\right):uv\in E\left(G\right)\right\}\]
_consists of \(\left|E\left(G\right)\right|\) consecutive integers. In such a case, \(f\) extends to a super edge-magic labeling of \(G\) with valence \(k=\left|V\left(G\right)\right|+\left|E\left(G\right)\right|+s\), where \(s=\min\left(S\right)\) and_
\[S=\left\{k-\left(\left|V\left(G\right)\right|+1\right),k-\left(\left|V\left(G \right)\right|+2\right),\ldots,k-\left(\left|V\left(G\right)\right|+\left|E \left(G\right)\right|\right)\right\}.\]
It is worth mentioning that Acharya and Hegde [1] introduced the concept of strongly indexable graph, which turns out to be equivalent to the concept of super edge-magic graph (see [8]). The study of super edge-magic labelings of graphs has proven to be crucial in the last two decades, since many relations with other types
of labelings have been found (see [6]), and relations with other concepts such as Skolem and Langford sequences (see [9]), and dual shuffle primes and Jacobsthal sequences (see [11] and [14]).
For a thorough study of graph labeling problems, see the extensive survey by Gallian [7]. For more information on super edge-magic graphs and related topics, see the books by Baca and Miller [2], Chartrand, Egan, and Zhang [3], Lopez and Muntaner-Batle [10], and Marr and Wallis [16].
One of the main goals of this paper is to study the super edge-magic properties of graphs of order \(n\) with degree sequence \(s:4,2,2,\ldots,2\). In particular, we would like to highlight the family of graphs defined next, which will be a focus of study in the subsequent pages of this paper. For integers \(m\geq 3\) and \(n\geq 3\), let \(C(m,n)\) denote the graph consisting of two cycles \(C_{m}\) and \(C_{n}\) of orders \(m\) and \(n\) that share exactly one vertex. Observe that this family of graphs is the simplest case of the family of graphs for which all blocks are cycles. In fact, another goal of this paper is to stimulate interest in the super edge-magic properties of graphs of this type.
Next, we proceed with the following lemma found in [6] that will be useful to obtain certain results in this paper. For this purpose, the degree of a vertex \(v\) is denoted by \(\deg_{G}(v)\) or simply by \(\deg(v)\) if the graph \(G\) under discussion is clear.
**Lemma 1.2**.: _Let \(G\) be a graph such that \(\deg(v)\) is even for all \(v\in V\left(G\right)\) and \(\left|E\left(G\right)\right|\equiv 2\pmod{4}\). Then \(G\) is not super edge-magic._
## 2. Main results
We are now ready to state and prove our first result.
**Theorem 2.1**.: _Let \(G\) be a graph of even order \(n\geq 6\) with degree sequence \(s:4,2,2,\ldots,2\). Then \(G\) is not super edge-magic._
Proof.: Consider such a graph \(G\) with a super edge-magic labeling \(f\). Then
\[\left|E\left(G\right)\right|=\frac{1}{2}\sum_{v\in V\left(G\right)}\deg(v)=\frac{4+2\left(n-1\right)}{2}=n+1.\]
Thus, the valence \(\operatorname{val}\left(f\right)\) of \(f\) can be computed as follows:
\[\operatorname{val}\left(f\right) =\frac{2\sum_{i=1}^{n}i+\sum_{i=n+1}^{2n+1}i+2\alpha}{n+1}\] \[=\frac{\sum_{i=1}^{n}i+\sum_{i=1}^{2n+1}i+2\alpha}{n+1}=\frac{5 n^{2}+7n+\left(2+4\alpha\right)}{2\left(n+1\right)},\]
where \(\alpha\in[1,n]\). We next examine for which values of \(\alpha\in[1,n]\), \(4\alpha\) is a multiple of \(2\left(n+1\right)\) or, equivalently, \(2\alpha\) is a multiple of \(n+1\). Observe that if \(\alpha\in[1,n/2]\), then we have \(2\alpha\leq n<n+1\). On the other hand, if \(\alpha\in[n/2+1,n]\), then we have
\[n+1<n+2\leq 2\alpha\leq 2n<2n+1.\]
Consequently, in either case, \(2\alpha\) cannot be a multiple of \(n+1\). Now,
\[5n^{2}+7n+\left(2+4\alpha\right)=2\left(n+1\right)\left(\frac{5n}{2}+1\right) +4\alpha\]
so that
\[\operatorname{val}\left(f\right)=\frac{5n^{2}+7n+\left(2+4\alpha\right)}{2 \left(n+1\right)}=\left(\frac{5n}{2}+1\right)+\frac{2\alpha}{n+1}.\]
Note that \(5n/2+1\) is a positive integer, since \(n\) is an even integer \(n\geq 6\). However, since \(2\alpha\) is not a multiple of \(n+1\), it follows from the last equation that \(\operatorname{val}\left(f\right)\) is not a positive integer, which is impossible.
Observe that the preceding result excludes many graphs from being super edge-magic. In particular, we immediately obtain the following corollary.
**Corollary 2.1**.: _For every two integers \(m\geq 3\) and \(n\geq 3\), the graph \(C\left(m,n\right)\) is not super edge-magic when \(m+n\) is odd._
At this point, a question arises naturally, as the next problem indicates.
**Problem 1**.: _What can be said about the super edge-magicness of graphs of odd order \(n\geq 5\) with degree sequence \(s:4,2,2,\ldots,2\)?_
A partial solution to Problem 1 comes directly from Lemma 1.2. It is clear that a graph of odd order \(n\) with degree sequence \(s:4,2,2,\ldots,2\) has even size. If the size is not only even, but also not divisible by \(4\), then it meets the hypothesis of Lemma 1.2, and hence it is not super edge-magic. Then Problem 1 becomes as follows.
**Problem 2**.: _Let \(G\) be a graph of odd order \(n\geq 7\) with \(\left|E\left(G\right)\right|\equiv 0\pmod{4}\) and degree sequence \(s:4,2,2,\ldots,2\). What can be said about the super edge-magicness of \(G\)?_
We know so far that the family of graphs \(C\left(m,n\right)\) cannot be super edge-magic except possibly in the case when \(m+n\equiv 0\pmod{4}\). This leads to propose the following problem.
**Problem 3**. _What can be said about the super edge-magicness of \(C\left(m,n\right)\) when \(m+n\equiv 0\pmod{4}\)?_
An exhaustive computer search shows that \(C\left(3,5\right)\) is not super edge-magic, but is this true in general? It is especially interesting to know whether \(C\left(3,4k-3\right)\) is super edge-magic for integers \(k\geq 3\).
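Such a search is easy to reproduce. The following Python sketch (our own illustration, not part of the original computation) applies the characterization of Lemma 1.1 directly: it enumerates all bijections \(f:V\left(G\right)\rightarrow\left\{1,2,\ldots,\left|V\left(G\right)\right|\right\}\) and checks whether the edge sums form \(\left|E\left(G\right)\right|\) consecutive integers. The encoding of \(C(m,n)\) as two cycles sharing the vertex \(0\) is a convention of this sketch.

```python
from itertools import permutations

def super_edge_magic_labeling(num_vertices, edges):
    """Brute-force search based on Lemma 1.1: return a vertex labeling
    whose edge sums f(u) + f(v) form |E| consecutive integers, or None."""
    for f in permutations(range(1, num_vertices + 1)):
        sums = {f[u] + f[v] for u, v in edges}
        if len(sums) == len(edges) and max(sums) - min(sums) == len(edges) - 1:
            return f
    return None

def c_graph(m, n):
    """C(m, n): cycles C_m and C_n sharing exactly the vertex 0."""
    cm = [(i, (i + 1) % m) for i in range(m)]
    cn = [(0, m)] + [(m + i, m + i + 1) for i in range(n - 2)] + [(m + n - 2, 0)]
    return m + n - 1, cm + cn

# C(3,5) has m + n = 8 = 0 (mod 4), yet no labeling should exist:
print(super_edge_magic_labeling(*c_graph(3, 5)))  # expected: None
```

For \(C(3,5)\) the search space has only \(7!=5040\) bijections, so the check is instantaneous; the factorial growth, however, makes this brute-force approach impractical beyond small orders.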
## 3. Conclusions
We have established that the graphs \(C\left(m,n\right)\) are not super edge-magic when \(m+n\) is odd (see Corollary 2.1), and different problems have emerged out of this study. In particular, we want to highlight the following problem.
**Problem 4**.: _What can be said about the super edge-magicness of graphs that consist of blocks, which are all isomorphic to a cycle?_
We formulate the preceding problem in the most general sense possible. To this end, we recall the following concepts, which were introduced in [12]. Let \(G\) be a graph of order \(p\) and size \(q\), and let \(S_{G}\) denote the set

\[S_{G}=\left\{\frac{\sum_{u\in V\left(G\right)}\deg(u)g(u)+\sum_{i=p+1}^{p+q}i}{q}:\text{the function }g:V\left(G\right)\rightarrow\left[1,p\right]\text{ is bijective}\right\}.\]
If \(\lceil\min S_{G}\rceil\leq\lfloor\max S_{G}\rfloor\), then the _super edge-magic interval_ of \(G\) is the set
\[I_{G}=\left[\lceil\min S_{G}\rceil,\lfloor\max S_{G}\rfloor\right]\cap\mathbb{N},\]
where \(\mathbb{N}\) denotes the set of natural numbers. The _super edge-magic set_ of \(G\) is
\[\sigma_{G}=\left\{k\in I_{G}:\text{ there exists a super edge-magic labeling of }G\text{ with valence }k\text{ }\right\}.\]
Lopez et al. called a graph \(G\)_perfect super edge-magic_ if \(I_{G}=\sigma_{G}\).
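To illustrate these definitions, the following small script (again our own sketch) brute-forces \(S_{G}\), \(I_{G}\), and \(\sigma_{G}\), using the valence formula \(k=\left|V\left(G\right)\right|+\left|E\left(G\right)\right|+\min\left(S\right)\) from Lemma 1.1; it is run here on the triangle \(C_{3}\).

```python
from itertools import permutations
from math import ceil, floor

def interval_and_sigma(num_vertices, edges):
    """Brute-force the super edge-magic interval I_G and set sigma_G."""
    p, q = num_vertices, len(edges)
    deg = [0] * p
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    tail = sum(range(p + 1, p + q + 1))        # sum_{i=p+1}^{p+q} i
    s_vals, sigma = set(), set()
    for g in permutations(range(1, p + 1)):
        s_vals.add((sum(deg[u] * g[u] for u in range(p)) + tail) / q)
        sums = {g[u] + g[v] for u, v in edges}
        if len(sums) == q and max(sums) - min(sums) == q - 1:
            sigma.add(p + q + min(sums))       # valence, by Lemma 1.1
    interval = set(range(ceil(min(s_vals)), floor(max(s_vals)) + 1))
    return interval, sigma

print(interval_and_sigma(3, [(0, 1), (1, 2), (2, 0)]))  # ({9}, {9})
```

Since \(I_{C_{3}}=\sigma_{C_{3}}=\{9\}\), the triangle is perfect super edge-magic; running the same function on \(C(3,5)\) returns an empty \(\sigma_{G}\), consistent with the computer search mentioned in Section 2.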
Hence, we would like to encourage researchers to study which graphs, if any, in this family are perfect super edge-magic. To finish this paper, it is worth mentioning that ideas similar to those behind perfect super edge-magic graphs have already appeared in other works. The interested reader may consult the results in [12] and [13].
Acknowledgment. The authors are greatly indebted to Yukio Takahashi for his continuous encouragement and technical assistance during this work.
|
2310.13122 | Light Meson Spectroscopy with GlueX and Beyond | The GlueX experiment at Jefferson Lab was specifically designed for precision
studies of the light-meson spectrum. For this purpose, a photon beam with
energies up to 12 GeV is directed onto a liquid hydrogen target contained
within a hermetic detector with near-complete neutral and charged particle
coverage. Linear polarization of the photon beam with a maximum around 9 GeV
provides additional information about the production process. In 2018, the
experiment completed its first phase, recording data with a total integrated
luminosity above 400 pb$^{-1}$. We highlight a selection of results from this
world-leading data set with emphasis on the search for light hybrid mesons. In
the meantime, the detector underwent significant upgrades and is currently
recording data with an even higher luminosity. The future plans of the GlueX
experiment to explore the meson spectrum with unprecedented precision are
summarized. | Alexander Austregesilo | 2023-10-19T19:35:09Z | http://arxiv.org/abs/2310.13122v1 | # Light Meson Spectroscopy with GlueX and Beyond
###### Abstract
The GlueX experiment at Jefferson Lab was specifically designed for precision studies of the light-meson spectrum. For this purpose, a photon beam with energies up to \(12\,\mathrm{GeV}\) is directed onto a liquid hydrogen target contained within a hermetic detector with near-complete neutral and charged particle coverage. Linear polarization of the photon beam with a maximum around \(9\,\mathrm{GeV}\) provides additional information about the production process. In 2018, the experiment completed its first phase, recording data with a total integrated luminosity above \(400\,\mathrm{pb}^{-1}\). We highlight a selection of results from this world-leading data set with emphasis on the search for light hybrid mesons. In the meantime, the detector underwent significant upgrades and is currently recording data with an even higher luminosity. The future plans of the GlueX experiment to explore the meson spectrum with unprecedented precision are summarized.
Thomas Jefferson National Accelerator Facility, Newport News, VA, USA
## 1 Introduction
The strong interaction is described by Quantum-Chromodynamics within the Standard Model of particle physics, but Confinement and the large coupling constant at low energies prevent the deduction of the hadron spectrum from first principles. For this reason, the precise experimental determination of the spectrum of hadrons is essential to further the understanding of the dynamics of the strong interaction.
The states in the light meson spectrum are characterized by their total angular momentum \(J\), their parity \(P\) and their charge conjugation \(C\). As they are broad and overlapping in mass, they have to be identified via the angular distribution of their decay products, and interference between resonances can be used to search for very small signals. Only certain \(J^{PC}\) combinations are allowed for the mesons when described as quark-antiquark systems in the constituent quark model. In the recent past, several experiments [1, 2, 3, 4] have reported the observation of exotic states with forbidden quantum numbers, like the \(\pi_{1}(1600)\) meson with \(J^{PC}=1^{-+}\). However, their existence and interpretation is still debated.
## 2 Light Meson Spectroscopy: The GlueX Experiment
The GlueX experiment [5] at the Thomas Jefferson National Accelerator Facility is part of the global effort to study the spectrum of hadrons. A primary electron beam of up to 12 GeV is used to produce a secondary photon beam which impinges on a liquid-hydrogen target. Assuming vector meson dominance in \(t\)-channel production, a wide variety of mesonic states are accessible. A high beam intensity provides a sufficiently large reaction rate to study rare processes. The GlueX detector was specifically designed to map the light-quark meson spectrum up to masses of approximately \(3\,\mathrm{GeV}/c^{2}\) with nearly complete acceptance for all decay modes. A superconducting solenoid magnet with a \(2\,\mathrm{T}\) field houses the target, central and forward drift chambers, and a barrel calorimeter (see Fig. 1). A forward calorimeter completes the forward photon acceptance and a time-of-flight counter provides particle identification capability.
The first phase of the GlueX experiment was completed in 2018, having recorded a total integrated luminosity above \(400\,\mathrm{pb}^{-1}\). From 2019 onwards, a DIRC detector was added to the apparatus to improve the kaon identification. GlueX is scheduled to take data with an even higher luminosity until 2025 which will quadruple the existing data set.
The unique feature of GlueX is the capability to use a polarized photon beam. Polarization of the photons is achieved by coherent Bremsstrahlung of the primary electron beam on a thin diamond radiator. With a collimator suppressing the incoherent Bremsstrahlung spectrum, a linear polarization of up to 40% is achieved in the coherent peak at \(9\,\mathrm{GeV}\). In order to cancel apparatus effects, the polarization plane is rotated in steps of \(45^{\circ}\) around the beam axis into four different orientations during physics data taking. The degree of polarization is measured using the effect of triplet production [6].
### Investigations of the Photoproduction Mechanism
The photon beam polarization poses constraints on the quantum numbers of the produced meson systems. It may be used as a filter to enhance particular resonances or as additional input in multidimensional amplitude analyses. To this end, the photoproduction mechanism has to be understood in great detail, but only very limited data from previous experiments are available at these energies. GlueX has measured the beam asymmetry for the production of several pseudoscalar mesons: \(\gamma p\to\pi^{0}p\)[7], \(\gamma p\to\eta p\) and \(\gamma p\to\eta^{\prime}(958)p\)[8], \(\gamma p\to K^{+}\Sigma^{0}\)[9], and \(\gamma p\to\pi^{-}\Delta^{++}(1232)\)[10]. Many of these were the first measurements in this energy range. Common to all reactions, the natural parity exchange component dominates the production process for a single pseudoscalar meson. Only in charge-exchange reactions at small four-momentum transfer \(t\) do contributions from unnatural exchanges, e.g. pion exchange, become relevant.

Figure 1: Schematic of the GlueX detector highlighting detector subsystems.
As an extension to this program, measurements of spin-density matrix elements (SDMEs) quantify the transfer of the photon beam polarization to a system with spin. With the linearly-polarized beam, 9 real components of the complex-valued spin-density matrix can be accessed. When the production of a vector meson is described through diffractive scattering with \(s\)-channel helicity conservation, all but two of the SDMEs should be zero when measured in the helicity system. Our measurement of the SDMEs for \(\rho(770)\) meson photoproduction [11] confirms this model at low values of 4-momentum transfer \(t\), but observes significant deviations at larger \(t\). In particular, the production process is sensitive to the interplay between Pomeron and \(f_{2}/a_{2}\) exchanges, which can be modeled by Regge theory [12]. We also determined the SDMEs for the process \(\gamma p\to K^{+}\Lambda(1520)\)[13] and are extending this program to the vector mesons \(\phi\), \(\omega\) and even \(J/\psi\)[14]. The results of these measurements will improve the understanding of the photoproduction process. In addition, these studies are used to evaluate the accurate modelling of the GlueX detector acceptance and to study systematic effects.
### Amplitude Analysis of Multi-Meson Final States
Whenever several overlapping meson resonances decay into the same final state, we use the technique of amplitude analysis to disentangle states via the angular distribution of their decay products. Complex-valued amplitudes describe the dynamics of the decay, and an extended maximum-likelihood fit is used to extract the production strengths.
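As a schematic illustration of this procedure, consider a one-dimensional toy model with two interfering waves: a flat "S-wave" and a "D-wave" proportional to the Legendre polynomial \(P_{2}(\cos\theta)\). The sketch below is our own simplification (the actual GlueX analyses use far richer amplitude sets, acceptance corrections, and dedicated fitters); it generates toy events by accept-reject sampling and recovers the relative magnitude and phase of the two amplitudes from an extended maximum-likelihood fit.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def intensity(x, p):
    """|S + D|^2 with a flat S-wave and a D-wave ~ P2(x), x = cos(theta).
    The unobservable overall phase is fixed by taking the S-wave real."""
    s, d = p[0], p[1] + 1j * p[2]
    return np.abs(s + d * 0.5 * (3.0 * x**2 - 1.0)) ** 2

truth = np.array([6.0, 4.0, 3.0])   # |D/S| = 5/6, relative phase ~ 36.9 deg

# toy event generation on x = cos(theta) by accept-reject
grid = np.linspace(-1.0, 1.0, 2001)
cap = intensity(grid, truth).max()
prop = rng.uniform(-1.0, 1.0, 100_000)
events = prop[rng.uniform(0.0, cap, prop.size) < intensity(prop, truth)]

def nll(p):
    """Extended negative log-likelihood: expected yield minus sum of log I."""
    mu = intensity(grid, p).mean() * 2.0   # approximates the yield integral
    return mu - np.log(intensity(events, p) + 1e-300).sum()

start = np.sqrt(events.size / 2.0) * np.array([1.0, 0.6, 0.4])
fit = minimize(nll, start, method="Nelder-Mead", options={"maxiter": 4000})

s, d = fit.x[0], fit.x[1] + 1j * fit.x[2]
# the sign of the phase is undetermined (complex-conjugation symmetry)
print(f"|D/S| = {abs(d / s):.3f} (true {5 / 6:.3f}), "
      f"|phase| = {abs(np.degrees(np.angle(d / s))):.1f} deg (true 36.9)")
```

In the extended likelihood the overall amplitude scale is fixed by the observed yield, so only ratios and relative phases of the production amplitudes are physically meaningful quantities, which is what the fit reports.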
As the strongest evidence for an exotic \(J^{PC}=1^{-+}\) amplitude was reported in \(\eta^{(^{\prime})}\pi\)[4], we focused our initial effort on these final states. The GlueX data reach a statistical precision competitive with previous experiments, employ a complementary production mechanism, and give access to different production modes and multiple decay modes. In order to use the photon beam polarization to separate natural from un-natural parity exchange processes, a new amplitude formalism had to be developed in collaboration with the Joint Physics Analysis Center (JPAC) [15]. This formalism is used to describe all systems with two pseudoscalar mesons, and an extension to vector-pseudoscalar combinations has since been developed [16].
The complexity of this new amplitude formalism has so far impeded a fully mass-independent analysis, but the analysis is stabilized by imposing a Breit-Wigner resonance shape for the dominant \(a_{2}(1320)\) signal. Extracting the cross section for \(a_{2}(1320)\) photoproduction [17] yields good agreement with theory predictions and thus demonstrates the validity of this method. The results are used as a reference for the search for the exotic \(\pi_{1}(1600)\) in this channel. In addition, branching fractions from recent Lattice QCD computations [18] are combined with measurements of the iso-vector \(b_{1}(1285)\pi\) cross section to determine upper limits for \(\pi_{1}\) production. While only a small signal is expected in \(\eta\pi\), the signal could be dominant in \(\eta^{\prime}\pi\), where we place an additional effort.
### GlueX Phase 2 and the JLab Eta Factory
For the remaining beam time in 2024/25, the GlueX forward calorimeter is currently being upgraded with a PbWO\({}_{4}\) insert, which provides higher granularity and improved radiation hardness in the central part. The GlueX program will continue to take data with high luminosity, focusing on final states with kaons to map the strangeness component of the light-meson spectrum.
In parallel, the JLab Eta Factory (JEF) aims to perform precision measurements of various \(\eta\) decays with emphasis on rare neutral modes. Since the production rates of \(\eta\) and \(\eta^{\prime}\) are similar under these experimental conditions, the same data set will also offer sensitive probes for \(\eta^{\prime}\) decays. Compared to all existing and planned \(\eta/\eta^{\prime}\) experiments in the world, the unique feature of the JEF program is a clean data set in the rare neutral decays of \(\eta\) and \(\eta^{\prime}\) with up to two orders of magnitude background reduction.
## 3 Probing the Electromagnetic Structure of Hadrons
The versatile GlueX experimental setup is also used to probe the electromagnetic structure of hadrons. In the Primakoff process, beam photons scatter on the strong electromagnetic field that surrounds the nucleus in a fixed target geometry.
### PrimEx-\(\eta\): Measurement of the \(\eta\to\gamma\gamma\) decay width
This experiment aims for a precision measurement of the \(\eta\to\gamma\gamma\) decay width, which will be extracted from the measured differential cross sections at forward angles on light targets, \({}^{4}\)He and Be, using a tagged photon beam with energies up to 11.5 GeV. The result will not only potentially resolve a long-standing discrepancy between the Primakoff and collider measurements, but is also expected to reduce the experimental uncertainty of the current PDG average [19] on this important quantity by a factor of two. This will result in a direct improvement of all other partial decay widths of the \(\eta\) meson. The high precision measurement will have significant impact on the experimental determination of fundamental parameters, such as the ratios of light quark masses and the \(\eta-\eta^{\prime}\) mixing angle. The experiment was completed in three beam times from 2019 to 2022 and the analysis is ongoing. A precise measurement of the Compton cross section over the full energy range is currently used to evaluate the systematic uncertainties of the experiment.
### Charged and Neutral Pion Polarizability
The polarizability describes the deformation of an object when subjected to an external electric field. For pions, this fundamental property of the strong interaction has precise predictions from chiral perturbation theory and is expected to be extremely small. Only the strong electromagnetic field produced by nuclei is able to deform the hadrons measurably. In our experiment, we scattered a 6 GeV polarized photon beam on a lead target to produce pion pairs, \(\pi^{+}\pi^{-}\) and \(\pi^{0}\pi^{0}\), and determined the cross section for these processes. A new wire chamber was installed downstream of the forward calorimeter to be able to detect muon pairs from Bethe-Heitler production, which is a background process but can be used to normalize the cross section measurement for the charged pion polarizability. The experiment was completed in 2022 and a precision competitive with previous state-of-the-art experiments is expected. The inverted kinematics compared to traditional Primakoff experiments [20] and the photon beam polarization are the unique features of this complementary measurement of the charged pion polarizability. The polarizability of the neutral pion has never been measured before.
## 4 Summary and Outlook
We presented an overview of the light meson spectroscopy program currently pursued at the GlueX experiment at Jefferson Lab. The full data set recorded in the initial phase of the experiment is available and under active analysis, with several exciting results already published and presented at this conference [14, 16, 17, 21]. Several necessary
milestones towards the search for exotic mesons have been reached, and the mapping of the light meson spectrum is well underway. Close collaboration with theory on model development and interpretation is indispensable to reach this ambitious goal.
In parallel, the GlueX detector was equipped with improved particle identification and calorimetry, and the data taking for the second phase is in progress. By the end of 2025, we expect to quadruple the existing data set, which will allow us to study rare processes and explore the strangeness dimension of the light meson spectrum. Besides this program, the versatile GlueX detector is also used for precision measurements of the electromagnetic structure of hadrons.
After the GlueX program is complete, we plan to transform the photon beam line into a tertiary beam of neutral long-lived kaons. This globally unique \(K_{\mathrm{L}}\) beam facility, combined with the multi-purpose GlueX detector, is planned to be ready after 2026. It will open up the possibility of searching for missing hyperon resonances in the baryon spectroscopy sector and allow us to study the \(\kappa\) meson in \(K\pi\) scattering.
Furthermore, we are currently developing a program for hadron spectroscopy opportunities at a possible 22 GeV electron accelerator upgrade at Jefferson Lab. This beam energy would reach far beyond the threshold for charmonium production and would allow us to search for the photoproduction of the recently discovered \(X\), \(Y\), \(Z\) and \(P_{c}\) states at GlueX.
###### Acknowledgements.
Work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177. The author acknowledges the support by the DOE, Office of Science, Office of Nuclear Physics in the Early Career Program.
|
2308.09330 | A non-overlapping Schwarz algorithm for the HDG method | In this paper, we present two non-overlapping Schwarz algorithms for the
hybridizable discontinuous Galerkin (HDG) method. The first algorithm is based
on the Neumann-Neumann method. The second one is an iterative algorithm that uses
both trace and flux interface unknowns on interfaces between subdomains.
Numerical results are provided to verify the validity of our algorithms. | Issei Oikawa | 2023-08-18T06:24:56Z | http://arxiv.org/abs/2308.09330v1 | # A non-overlapping Schwarz algorithm for the HDG method
###### Abstract.
In this paper, we present two non-overlapping Schwarz algorithms for the hybridizable discontinuous Galerkin (HDG) method. The first algorithm is based on the Neumann-Neumann method. The second one is an iterative algorithm that uses both trace and flux interface unknowns on interfaces between subdomains. Numerical results are provided to verify the validity of our algorithms.
## 1. Introduction
Let \(\Omega\subset\mathbb{R}^{d}\) (\(d=2,3\)) be a bounded polygonal or polyhedral domain. We consider the Poisson equation as a model problem:
\[-\Delta u=f\qquad\text{in }\Omega, \tag{1a}\] \[u=0\qquad\text{on }\partial\Omega, \tag{1b}\]
where \(f\) is a given function. Let \(\Omega=\Omega_{1}\cup\Omega_{2}\) and \(\Omega_{1}\cap\Omega_{2}=\emptyset\). Let \(\Gamma_{12}\) denote the interface between the subdomains. The problem (1) can be rewritten as
\[-\Delta u_{i}=f\text{ in }\Omega_{i}\quad(i=1,2), \tag{2a}\] \[u_{i}=0\text{ on }\partial\Omega_{i}\cap\partial\Omega\quad(i=1,2), \tag{2b}\] \[\frac{\partial u_{1}}{\partial n_{1}}+\frac{\partial u_{2}}{\partial n_{2}}=0\text{ on }\Gamma_{12}, \tag{2c}\] \[u_{1}=u_{2}\text{ on }\Gamma_{12}. \tag{2d}\]
Here \(n_{i}\) is the outer unit normal vector to \(\partial\Omega_{i}\). Non-overlapping Schwarz algorithms (cf. [10, 11]) form a family of domain decomposition methods that solve the subproblems separately and are widely used to compute numerical solutions in parallel. The optimized Schwarz method was introduced by Lions [8]; it is based on Robin interface conditions and can be applied in both overlapping and non-overlapping cases. In [6, 4, 5], the optimized Schwarz method for the HDG method is proposed and analyzed. The Neumann-Neumann method [9] and the FETI (or Dirichlet-Dirichlet) method [3] are also well known as non-overlapping algorithms; however, to the best of our knowledge, they have not been applied to the HDG method.
In this paper, we present two non-overlapping Schwarz algorithms intended to apply to the hybridizable discontinuous Galerkin (HDG) method [2]. The first algorithm is a direct application of the Neumann-Neumann algorithm to the HDG method. In the HDG method, the problem is split into many elements, and numerical trace and flux on
inter-element boundaries are introduced. The Neumann-Neumann and the FETI methods introduce an interface unknown on interfaces between subdomains. The interface unknown plays a role similar to that of the numerical trace or flux, reducing the jump of the solution on the interface, so the Neumann-Neumann and the FETI methods are highly compatible with the HDG method. We also propose another non-overlapping algorithm. The key idea is to alternate between trace- and flux-interface unknowns on interfaces between subdomains. The interface unknowns are updated by using the numerical trace or flux of the solutions on the subdomains. In this respect, the second algorithm differs from the other algorithms. We note that the proposed algorithm does not involve any iteration parameter.
The rest of the paper is organized as follows. In Section 2, we describe the Neumann-Neumann algorithm and its application to the HDG method, i.e. the first algorithm, in the two-subdomain case. In Section 3, the second algorithm and its HDG discretization are presented. Numerical results are also provided to verify the validity of the proposed algorithm. All numerical computations were carried out with FreeFEM [7] and Julia [1].
## 2. The Neumann-Neumann algorithm
### The Neumann-Neumann algorithm
We begin by recalling the Neumann-Neumann algorithm.
1. Set the initial value: \(u_{\Gamma}^{0}\)
2. Repeat Steps 3-5 for \(n\geq 0\) until convergence
3. Compute \(u_{i}^{n+1/2}\) (\(i=1,2\)) by solving \[-\Delta u_{i}^{n+1/2} =f\text{ in }\Omega_{i},\] \[u_{i}^{n+1/2} =0\text{ on }\partial\Omega_{i}\cap\partial\Omega,\] \[u_{i}^{n+1/2} =u_{\Gamma}^{n}\text{ on }\Gamma_{12}.\]
Step 4. Compute \(\psi_{i}^{n+1}\) (\(i=1,2\)) by solving
\[-\Delta\psi_{i}^{n+1}=0\text{ in }\Omega_{i},\] \[\psi_{i}^{n+1}=0\text{ on }\partial\Omega_{i}\cap\partial\Omega,\] \[\frac{\partial\psi_{i}^{n+1}}{\partial n_{i}}=\frac{\partial u_{1}^{n+1/2}}{\partial n_{1}}+\frac{\partial u_{2}^{n+1/2}}{\partial n_{2}}\text{ on }\Gamma_{12}.\]
Step 5. Update
\[u_{\Gamma}^{n+1}=u_{\Gamma}^{n}-\theta(\psi_{1}^{n+1}+\psi_{2}^{n+1})\text{ on }\Gamma_{12}.\]
Here \(\theta>0\) is a constant parameter.
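For readers who want to experiment with Steps 1-5, the following self-contained Python sketch applies the algorithm to the one-dimensional model problem \(-u''=\pi^{2}\sin(\pi x)\) on \((0,1)\) with two equal subdomains. The finite-difference discretization and the first-order one-sided fluxes are our own simplifications; the computations reported later use a two-dimensional HDG discretization with FreeFEM and Julia.

```python
import numpy as np

def solve_poisson_1d(f, h, bc_left, bc_right):
    """Finite-difference solve of -u'' = f on one subdomain.
    bc = ('D', value) for Dirichlet, or ('N', flux) for an outward-normal
    Neumann condition imposed by a first-order one-sided difference."""
    n = len(f)
    A, b = np.zeros((n, n)), np.zeros(n)
    for j in range(1, n - 1):
        A[j, j - 1:j + 2] = [-1.0, 2.0, -1.0]
        b[j] = h * h * f[j]
    for idx, nb, (kind, val) in [(0, 1, bc_left), (n - 1, n - 2, bc_right)]:
        if kind == 'D':
            A[idx, idx], b[idx] = 1.0, val
        else:  # (u_boundary - u_neighbour) / h = outward-normal flux
            A[idx, idx], A[idx, nb], b[idx] = 1.0, -1.0, h * val
    return np.linalg.solve(A, b)

# two equal subdomains (0, 1/2) and (1/2, 1); exact solution u = sin(pi x)
m = 65
x1, x2 = np.linspace(0.0, 0.5, m), np.linspace(0.5, 1.0, m)
h = x1[1] - x1[0]
f = lambda x: np.pi**2 * np.sin(np.pi * x)

theta, u_gamma = 0.25, 0.0                 # Step 1: initial interface trace
for it in range(6):                        # Step 2: iterate
    # Step 3: Dirichlet solves with the current interface trace
    u1 = solve_poisson_1d(f(x1), h, ('D', 0.0), ('D', u_gamma))
    u2 = solve_poisson_1d(f(x2), h, ('D', u_gamma), ('D', 0.0))
    # flux residual du1/dn1 + du2/dn2 (one-sided differences)
    r = (u1[-1] - u1[-2]) / h + (u2[0] - u2[1]) / h
    # Step 4: Neumann correction solves driven by the residual
    psi1 = solve_poisson_1d(np.zeros(m), h, ('D', 0.0), ('N', r))
    psi2 = solve_poisson_1d(np.zeros(m), h, ('N', r), ('D', 0.0))
    # Step 5: relax the interface trace
    step = theta * (psi1[-1] + psi2[0])
    u_gamma -= step
    print(f"iter {it}: |update| = {abs(step):.2e}, "
          f"interface error = {abs(u_gamma - 1.0):.2e}")
```

In this 1D model with equal subdomains, the trace iteration contracts with factor \(1-4\theta\), so it converges for \(0<\theta<1/2\) and is fastest near \(\theta=1/4\); the interface error plateaus at the \(O(h)\) level of the one-sided fluxes. This simple picture is consistent with the two-dimensional HDG results reported below.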
### The HDG approximation
Let \(\mathcal{T}_{h}\) be a mesh of \(\Omega\). The set of all edges of \(K\in\mathcal{T}_{h}\) is denoted by \(\mathcal{E}_{h}\). Define \(\mathcal{T}_{ih}=\{K\in\mathcal{T}_{h}:K\subset\Omega_{i}\}\) and \(\mathcal{E}_{ih}=\{e\in\mathcal{E}_{h}:e\subset\Omega_{i}\}\) for \(i=1,2\). Let \(\partial\mathcal{T}_{ih}\) denote the set of all edges of \(\partial K\) for \(K\in\mathcal{T}_{ih}\). We assume that \(\Gamma_{12}=\bigcup_{e\subset\Gamma_{12},e\in\mathcal{E}_{h}}e\). We introduce finite dimensional spaces \(\boldsymbol{V}(K)\), \(W(K)\) and \(M(e)\) to approximate \(\boldsymbol{q}|_{K}\), \(u|_{K}\),
and \(u|_{e}\), respectively, where \(K\in\mathcal{T}_{h}\) and \(e\in\mathcal{E}_{h}\). The global approximate spaces are constructed as
\[\boldsymbol{V}_{ih} =\{\boldsymbol{v}\in L^{2}(\Omega_{i})^{d}:\boldsymbol{v}|_{K}\in \boldsymbol{V}(K)\ \forall K\in\mathcal{T}_{ih}\},\] \[W_{ih} =\{w\in L^{2}(\Omega_{i}):w|_{K}\in W(K)\ \forall K\in\mathcal{T}_{ih}\},\] \[M_{ih} =\{\mu\in L^{2}(\mathcal{E}_{ih}):\mu|_{e}\in M(e)\ \forall e\in \mathcal{E}_{ih},\quad\mu|_{\partial\Omega}=0\}.\]
The inner product on a domain \(D\) or a curve \(F\) is denoted as
\[(u,w)_{D}=\int_{D}uw\,dx,\quad\langle\lambda,\mu\rangle_{F}=\int_{F}\lambda\mu\,ds.\]
The piecewise inner products are denoted as
\[(\boldsymbol{q},\boldsymbol{v})_{\mathcal{T}_{ih}} =\sum_{K\in\mathcal{T}_{ih}}\int_{K}\boldsymbol{q}\cdot \boldsymbol{v}dx,\quad(u,w)_{\mathcal{T}_{ih}}=\sum_{K\in\mathcal{T}_{ih}} \int_{K}uwdx,\] \[\langle\lambda,\mu\rangle_{\partial\mathcal{T}_{ih}} =\sum_{K\in\mathcal{T}_{ih}}\int_{\partial K}\lambda\mu ds.\]
The HDG approximation of the Neumann-Neumann algorithm is as follows.
Step 1. Set the initial value \(\widehat{u}_{\Gamma}^{0}\).
Step 2. Repeat Steps 3-5 for \(n\geq 0\) until convergence.
Step 3. Solve the following equations to get \((\boldsymbol{q}_{i}^{n+1/2},u_{i}^{n+1/2},\widehat{u}_{i}^{n+1/2})\in \boldsymbol{V}_{ih}\times W_{ih}\times M_{ih}\) for \(i=1,2\):
\[\left(\boldsymbol{q}_{i}^{n+1/2},\boldsymbol{v}\right)_{\mathcal{T}_{ih}}-\left(u_{i}^{n+1/2},\nabla\cdot\boldsymbol{v}\right)_{\mathcal{T}_{ih}}+\langle\widehat{u}_{i}^{n+1/2},\boldsymbol{v}\cdot\boldsymbol{n}\rangle_{\partial\mathcal{T}_{ih}}=0\qquad\forall\boldsymbol{v}\in\boldsymbol{V}_{ih},\] \[-\left(\boldsymbol{q}_{i}^{n+1/2},\nabla w\right)_{\mathcal{T}_{ih}}+\langle\widehat{\boldsymbol{q}}_{i}^{n+1/2}\cdot\boldsymbol{n},w\rangle_{\partial\mathcal{T}_{ih}}=(f,w)_{\Omega_{i}}\qquad\forall w\in W_{ih},\] \[\langle\widehat{\boldsymbol{q}}_{i}^{n+1/2}\cdot\boldsymbol{n},\mu\rangle_{\partial\mathcal{T}_{ih}}=0\qquad\forall\mu\in M_{ih},\]
where
\[\widehat{\boldsymbol{q}}_{i}^{n+1/2}\cdot\boldsymbol{n} =\boldsymbol{q}_{i}^{n+1/2}\cdot\boldsymbol{n}+\tau(u_{i}^{n+1/2} -\widehat{u}_{i}^{n+1/2}),\] \[\widehat{u}_{i}^{n+1/2} =\widehat{u}_{\Gamma}^{n}\quad\text{ on }\Gamma_{12}.\]
Step 4. Solve the following equations to get \((\boldsymbol{\sigma}_{i}^{n+1},\xi_{i}^{n+1},\widehat{\xi}_{i}^{n+1})\in \boldsymbol{V}_{ih}\times W_{ih}\times M_{ih}\) for \(i=1,2\):
\[\left(\boldsymbol{\sigma}_{i}^{n+1},\boldsymbol{v}\right)_{\mathcal{T}_{ih}}-\left(\xi_{i}^{n+1},\nabla\cdot\boldsymbol{v}\right)_{\mathcal{T}_{ih}}+\langle\widehat{\xi}_{i}^{n+1},\boldsymbol{v}\cdot\boldsymbol{n}\rangle_{\partial\mathcal{T}_{ih}}=0\qquad\forall\boldsymbol{v}\in\boldsymbol{V}_{ih},\] \[-\left(\boldsymbol{\sigma}_{i}^{n+1},\nabla w\right)_{\mathcal{T}_{ih}}+\langle\widehat{\boldsymbol{\sigma}}_{i}^{n+1}\cdot\boldsymbol{n},w\rangle_{\partial\mathcal{T}_{ih}}=0\qquad\forall w\in W_{ih},\] \[\langle\widehat{\boldsymbol{\sigma}}_{i}^{n+1}\cdot\boldsymbol{n},\mu\rangle_{\partial\mathcal{T}_{ih}}=\langle\widehat{\boldsymbol{q}}_{1}^{n+1/2}\cdot\boldsymbol{n}_{1}+\widehat{\boldsymbol{q}}_{2}^{n+1/2}\cdot\boldsymbol{n}_{2},\mu\rangle_{\Gamma_{12}}\qquad\forall\mu\in M_{ih},\]
where
\[\widehat{\boldsymbol{\sigma}}_{i}^{n+1}\cdot\boldsymbol{n}=\boldsymbol{\sigma} _{i}^{n+1}\cdot\boldsymbol{n}+\tau(\xi_{i}^{n+1}-\widehat{\xi}_{i}^{n+1})\]
and \(\tau>0\) is a stabilization parameter.
Step 5. Update
\[\widehat{u}_{\Gamma}^{n+1}=\widehat{u}_{\Gamma}^{n}+\theta(\widehat{\xi}_{1}^{n+1}+\widehat{\xi}_{2}^{n+1}).\]
### Numerical results
We consider the following test problem:
\[-\Delta u=2\pi^{2}\sin(\pi x)\sin(\pi y)\quad\text{in }\Omega:=(0,1)^{2}, \tag{3a}\] \[u=0\quad\text{on }\partial\Omega. \tag{3b}\]
The domain is decomposed into \(\Omega_{1}=(0,1/2)\times(0,1)\) and \(\Omega_{2}=(1/2,1)\times(0,1)\). We use unstructured meshes for each subdomain, where there is no hanging node on the interface. All approximation spaces are piecewise or edgewise polynomials of degree one. The stabilization parameter of the HDG method is given by \(\tau\equiv 1\).
The initial value is taken as \(\widehat{u}_{\Gamma}^{0}\equiv 0\). The termination criterion is \(\|\widehat{u}_{\Gamma}^{n+1}-\widehat{u}_{\Gamma}^{n}\|_{L^{2}(\Gamma_{12})}<10^{-6}\). We carried out numerical computations for \(\theta=0.05,0.10,\ldots,0.55\), and histories of convergence in \(\boldsymbol{q}\) are displayed in Figures 1 and 2. We do not show the errors of \(u_{i}\) because they are very similar to the results of \(\boldsymbol{q}_{i}\). We observe that our algorithm is convergent if \(0<\theta\leq 0.5\) and that the convergence speed is fastest around \(\theta=0.25\), which is similar to the case of the Neumann-Neumann method.
## 3. Trace-Flux alternating algorithm
### Two-subdomain case
Let \(u_{\Gamma}^{n}\) be a given function defined on the interface \(\Gamma_{12}\). We solve the subproblems with trace-interface condition
\[-\Delta u_{i}^{n+1/2}=f\text{ in }\Omega_{i}, \tag{4a}\] \[u_{i}^{n+1/2}=0\text{ on }\partial\Omega_{i}\cap\partial\Omega, \tag{4b}\] \[u_{i}^{n+1/2}=u_{\Gamma}^{n}\text{ on }\Gamma_{12}. \tag{4c}\]
Then, we define an interface flux by
\[\lambda_{\Gamma}^{n+1/2}=\frac{1}{2}\left(\frac{\partial u_{1}^{n+1/2}}{\partial n_{1}}+\frac{\partial u_{2}^{n+1/2}}{\partial n_{2}}\right).\]
Solving the subproblems with flux-interface condition
\[-\Delta u_{i}^{n+1}=f\text{ in }\Omega_{i}, \tag{5a}\] \[u_{i}^{n+1}=0\text{ on }\partial\Omega_{i}\cap\partial\Omega, \tag{5b}\] \[\frac{\partial u_{i}^{n+1}}{\partial n_{i}}=(-1)^{i}\lambda_{\Gamma}^{n+1/2}\text{ on }\Gamma_{12}, \tag{5c}\]
we get \(u_{i}^{n+1}\). The interface trace is updated by
\[u_{\Gamma}^{n+1}=\frac{1}{2}(u_{1}^{n+1}+u_{2}^{n+1})\text{ on }\Gamma_{12}.\]
Iteratively updating the interface trace by the above procedure, we can obtain the solution of the problem (1) if \(u_{\Gamma}^{n}\) converges.
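The following one-dimensional sketch of this outer loop reuses `solve_poisson_1d` and the grid setup from the Neumann-Neumann sketch in Section 2.1; as there, the equal-subdomain 1D setting and the first-order one-sided fluxes are our own simplifications rather than the discretization used in the experiments below.

```python
# reuses solve_poisson_1d, x1, x2, h, f, m from the sketch in Section 2.1
u_gamma = 0.0
for it in range(5):
    # trace step (4): Dirichlet solves with the shared interface value
    u1 = solve_poisson_1d(f(x1), h, ('D', 0.0), ('D', u_gamma))
    u2 = solve_poisson_1d(f(x2), h, ('D', u_gamma), ('D', 0.0))
    # averaged interface flux, lambda = (du1/dn1 + du2/dn2) / 2
    lam = 0.5 * ((u1[-1] - u1[-2]) / h + (u2[0] - u2[1]) / h)
    # flux step (5): Neumann solves with du_i/dn_i = (-1)^i * lambda
    v1 = solve_poisson_1d(f(x1), h, ('D', 0.0), ('N', -lam))
    v2 = solve_poisson_1d(f(x2), h, ('N', lam), ('D', 0.0))
    # average the two interface traces
    new_trace = 0.5 * (v1[-1] + v2[0])
    print(f"iter {it}: |change| = {abs(new_trace - u_gamma):.2e}, "
          f"interface error = {abs(new_trace - 1.0):.2e}")
    u_gamma = new_trace
```

In this symmetric 1D setting the interface map turns out to be independent of the previous trace, so the iteration settles after a single pass; this mirrors the very fast convergence reported below for equally sized subdomains.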
### HDG approximation
The HDG approximation of the trace-flux alternating algorithm presented in the previous subsection is described as follows.
1. Set \(\widehat{u}_{\Gamma}^{0}\).
2. Repeat Steps 3-6 for \(n\geq 0\) until convergence.
3. Solve the trace-interface subproblems: Find \((\boldsymbol{q}_{i}^{n+1/2},u_{i}^{n+1/2},\widehat{u}_{i}^{n+1/2})\in\boldsymbol{V}_{ih}\times W_{ih}\times M_{ih}\) for \(i=1,2\) such that \[\left(\boldsymbol{q}_{i}^{n+1/2},\boldsymbol{v}\right)_{\mathcal{T}_{ih}}-\left(u_{i}^{n+1/2},\nabla\cdot\boldsymbol{v}\right)_{\mathcal{T}_{ih}}+\langle\widehat{u}_{i}^{n+1/2},\boldsymbol{v}\cdot\boldsymbol{n}\rangle_{\partial\mathcal{T}_{ih}}=0\qquad\forall\boldsymbol{v}\in\boldsymbol{V}_{ih},\] \[-\left(\boldsymbol{q}_{i}^{n+1/2},\nabla w\right)_{\mathcal{T}_{ih}}+\langle\widehat{\boldsymbol{q}}_{i}^{n+1/2}\cdot\boldsymbol{n},w\rangle_{\partial\mathcal{T}_{ih}}=(f,w)_{\Omega_{i}}\qquad\forall w\in W_{ih},\] \[\langle\widehat{\boldsymbol{q}}_{i}^{n+1/2}\cdot\boldsymbol{n},\mu\rangle_{\partial\mathcal{T}_{ih}}=0\qquad\forall\mu\in M_{ih},\]
where
\[\widehat{\boldsymbol{q}}_{i}^{n+1/2}\cdot\boldsymbol{n}=\boldsymbol{q}_{i}^{n+1/2}\cdot\boldsymbol{n}+\tau(u_{i}^{n+1/2}-\widehat{u}_{i}^{n+1/2}),\qquad\widehat{u}_{i}^{n+1/2}=\widehat{u}_{\Gamma}^{n}\ \text{ on }\Gamma_{12}.\]
Figure 1. Convergence history of the HDG solutions by the Neumann-Neumann algorithm. The \(L^{2}\)-errors in \(\boldsymbol{q}\) are plotted for \(\theta=0.05,\ldots,0.25\) (top) and \(\theta=0.30,\ldots,0.55\) (bottom).
Step 4. Define an interface flux by
\[\lambda_{12}^{n+1/2}=\frac{1}{2}(\widehat{\mathbf{q}}_{1}^{n+1/2}\cdot\mathbf{n}_{1}+ \widehat{\mathbf{q}}_{2}^{n+1/2}\cdot\mathbf{n}_{2})\ \text{on}\ \Gamma_{12}.\]
Step 5. Solve the flux-interface subproblems: Find \((\mathbf{q}_{i}^{n+1},u_{i}^{n+1},\widehat{u}_{i}^{n+1})\in\mathbf{V}_{ih}\times W_{ih} \times M_{ih}\) for \(i=1,2\) such that
\[\left(\boldsymbol{q}_{i}^{n+1},\boldsymbol{v}\right)_{\mathcal{T}_{ih}}-\left(u_{i}^{n+1},\nabla\cdot\boldsymbol{v}\right)_{\mathcal{T}_{ih}}+\langle\widehat{u}_{i}^{n+1},\boldsymbol{v}\cdot\boldsymbol{n}\rangle_{\partial\mathcal{T}_{ih}}=0\qquad\forall\boldsymbol{v}\in\boldsymbol{V}_{ih},\] \[-\left(\boldsymbol{q}_{i}^{n+1},\nabla w\right)_{\mathcal{T}_{ih}}+\langle\widehat{\boldsymbol{q}}_{i}^{n+1}\cdot\boldsymbol{n},w\rangle_{\partial\mathcal{T}_{ih}}=(f,w)_{\Omega_{i}}\qquad\forall w\in W_{ih},\] \[\langle\widehat{\boldsymbol{q}}_{i}^{n+1}\cdot\boldsymbol{n},\mu\rangle_{\partial\mathcal{T}_{ih}}=(-1)^{i-1}\langle\lambda_{12}^{n+1/2},\mu\rangle_{\Gamma_{12}}\qquad\forall\mu\in M_{ih}.\]
Step 6. Update the interface trace by
\[\widehat{u}_{12}^{n+1}=\frac{1}{2}(\widehat{u}_{1}^{n+1}+\widehat{u}_{2}^{n+1}) \text{ on }\Gamma_{12}.\]
### For many-subdomain cases
Let \(\Omega\) be a disjoint union of \(\Omega_{1},\dots,\Omega_{N}\) and define \(\Gamma_{ij}=\partial\Omega_{i}\cap\partial\Omega_{j}\). We here assume that \(\overline{\Omega_{i}}\cap\overline{\Omega_{j}}=\emptyset\) if \(|i-j|\geq 2\) and that the length or area of \(\Gamma_{i,i+1}\) is positive for \(1\leq i\leq N-1\); see Figure 3.
We assign to each \(\Gamma_{ij}\) an interface type, which is either trace-interface or flux-interface. We propose the following algorithm.
1. Set the types of \(\Gamma_{12},\Gamma_{34},\dots,\Gamma_{N-1,N}\) to be trace-interfaces and set the others to be flux-interface.
2. Set \(\widehat{u}_{i,i+1}^{0}\) and \(\lambda_{i,i+1}^{0}\) for \(1\leq i\leq N-1\).
3. Repeat Steps 4-8 until convergence.
4. Solve the following subproblems to get \(u_{i}^{n+1/2}\) for \(1\leq i\leq N\): \[-\Delta u_{i}^{n+1/2}=f\text{ in }\Omega_{i},\] \[u_{i}^{n+1/2}=0\text{ on }\partial\Omega_{i}\cap\partial\Omega,\] \[u_{i}^{n+1/2}=\widehat{u}_{ij}^{n}\quad\text{on }\Gamma_{ij}\text{ if }\Gamma_{ij}\text{ is a trace-interface},\] \[\frac{\partial u_{i}^{n+1/2}}{\partial n_{i}}=(-1)^{i}\lambda_{ij}^{n}\quad\text{on }\Gamma_{ij}\text{ if }\Gamma_{ij}\text{ is a flux-interface}.\] Here and in what follows, \(j\in\{i-1,i+1\}\).
5. Update the interface trace and flux by \[\widehat{u}_{ij}^{n+1}=\frac{1}{2}\left(u_{i}^{n+1/2}+u_{j}^{n+1/2}\right)\quad\text{on }\Gamma_{ij}\text{ if }\Gamma_{ij}\text{ is a flux-interface},\] \[\lambda_{ij}^{n+1}=\frac{1}{2}\left(\frac{\partial u_{i}^{n+1/2}}{\partial n_{i}}+\frac{\partial u_{j}^{n+1/2}}{\partial n_{j}}\right)\quad\text{on }\Gamma_{ij}\text{ if }\Gamma_{ij}\text{ is a trace-interface}.\]
6. Flip the types of interfaces. If the type of \(\Gamma_{ij}\) is flux-interface, then set \(\Gamma_{ij}\) to be trace-interface. Else, if the type of \(\Gamma_{ij}\) is trace-interface, then set \(\Gamma_{ij}\) to be flux-interface. See also Figure 4.
Figure 3. Illustration of subdomains and interfaces
Step 7. Solve the following to get \(u_{i}^{n+1}\):
\[-\Delta u_{i}^{n+1}=f\text{ in }\Omega_{i},\] \[u_{i}^{n+1}=0\text{ on }\partial\Omega_{i}\cap\partial\Omega,\] \[u_{i}^{n+1}=\widehat{u}_{ij}^{n+1}\quad\text{on }\Gamma_{ij}\text{ if the type of }\Gamma_{ij}\text{ is trace-interface},\] \[\frac{\partial u_{i}^{n+1}}{\partial n_{i}}=(-1)^{i}\lambda_{ij}^{n+1}\quad\text{on }\Gamma_{ij}\text{ if the type of }\Gamma_{ij}\text{ is flux-interface}.\]
Step 8. Update the trace and flux
\[\widehat{u}_{ij}^{n+1}=\frac{1}{2}\left(u_{i}^{n+1}+u_{j}^{n+1}\right)\quad\text{on }\Gamma_{ij}\text{ if the type of }\Gamma_{ij}\text{ is flux-interface},\] \[\lambda_{ij}^{n+1}=\frac{1}{2}\left(\frac{\partial u_{i}^{n+1}}{\partial n_{i}}+\frac{\partial u_{j}^{n+1}}{\partial n_{j}}\right)\quad\text{on }\Gamma_{ij}\text{ if the type of }\Gamma_{ij}\text{ is trace-interface}.\]
This algorithm can be discretized by the HDG method in the same manner as in the two-subdomain case.
### Numerical results
In this section, we show the numerical results of the trace-flux algorithm for the test problem (3).
#### 3.4.1. Two-subdomain case
We study the dependence of convergence speed on the sizes of subdomains. Let \(\alpha\in(0,0.5)\) and decompose \(\Omega\) into \(\Omega_{1}=(0,\alpha)\times(0,1)\) and \(\Omega_{2}=(\alpha,1)\times(0,1)\). We use unstructured meshes where \(\Omega_{1}\) and \(\Omega_{2}\) are divided into about \(32\alpha\times 32\) and \(32(1-\alpha)\times 32\) triangles, respectively, and piecewise polynomials of degree \(1\).
We computed solutions \((\boldsymbol{q}_{i},u_{i},\widehat{u}_{i})\) \((i=1,2)\) with \(\alpha\) varying from \(0.05\) to \(0.5\) in order to study how the convergence property depends on the sizes of the subdomains. The history of convergence for various \(\alpha\) is displayed in Figure 5. When \(\alpha=0.5\), i.e. the sizes of the subdomains are equal, the iteration is terminated in \(3\) iterations and the convergence is fastest. As the parameter \(\alpha\) tends to zero, it takes more iterations to converge. For \(\alpha=0.1,\ldots,0.5\), the errors are monotonically decreasing and the final errors are similar. In the case of \(\alpha=0.05\), the errors are monotonically increasing and the solution seems to diverge. These results suggest that the convergence gets faster as the subdomains get closer to uniform.

Figure 4. Illustration of the types of interfaces. Letters T and F mean the trace- and flux-interface types, respectively.
#### 3.4.2. Many-subdomain case
By numerical experiments, we demonstrate that our algorithm is valid for many-subdomain cases and examine its convergence property. The domain is equally divided into \(N\) subdomains, and the width of a subdomain is \(W=1/N\). The \(i\)-th subdomain is denoted by \(\Omega_{i}=((i-1)W,iW)\) for \(1\leq i\leq N\). Unstructured meshes whose mesh size is \(1/128\) and piecewise polynomials of degree \(1\) are used. Figure 6 shows the convergence history of the HDG solutions. The solutions converge in \(2\), \(16\), \(64\), \(240\) iterations for \(N=2,4,8,16\), respectively. We see that the convergence gets slower as the number of subdomains \(N\) increases, and the number of iterations grows at a rate of about \(O(N^{2})\).
Figure 5. \(L^{2}\)-errors in \(\boldsymbol{q}\) are plotted in log scale.
Figure 6. \(L^{2}\)-errors in \(\boldsymbol{q}\) are plotted in log scale for various \(N\) |
2304.11872 | Generation-driven Contrastive Self-training for Zero-shot Text
Classification with Instruction-following LLM | The remarkable performance of large language models (LLMs) in zero-shot
language understanding has garnered significant attention. However, employing
LLMs for large-scale inference or domain-specific fine-tuning requires immense
computational resources due to their substantial model size. To overcome these
limitations, we introduce a novel method, namely GenCo, which leverages the
strong generative power of LLMs to assist in training a smaller and more
adaptable language model. In our method, an LLM plays an important role in the
self-training loop of a smaller model in two important ways. Firstly, the LLM
is used to augment each input instance with a variety of possible
continuations, enriching its semantic context for better understanding.
Secondly, it helps crafting additional high-quality training pairs, by
rewriting input texts conditioned on predicted labels. This ensures the
generated texts are highly relevant to the predicted labels, alleviating the
prediction error during pseudo-labeling, while reducing the dependency on large
volumes of unlabeled text. In our experiments, GenCo outperforms previous
state-of-the-art methods when only limited ($<5\%$ of original) in-domain text
data is available. Notably, our approach surpasses the performance of Alpaca-7B
with human prompts, highlighting the potential of leveraging LLM for
self-training. | Ruohong Zhang, Yau-Shian Wang, Yiming Yang | 2023-04-24T07:35:38Z | http://arxiv.org/abs/2304.11872v2 | Generation-driven Contrastive Self-training for Zero-shot Text Classification with Instruction-tuned GPT
###### Abstract
With the success of large GPT-based models, natural language processing (NLP) tasks have received significant performance improvements in recent years. However, using pretrained large GPT models directly for zero-shot text classification has faced difficulties due to their large sizes and computational requirements. Moreover, GPT-based zero-shot classification models tend to make independent predictions over test instances, which can be sub-optimal as the instance correlations and the decision boundaries in the target space are ignored. To address these difficulties and limitations, we propose a new approach to zero-shot text classification, namely GenCo, which leverages the strong generative power of GPT to assist in training a smaller, more adaptable, and efficient sentence encoder classifier with contrastive self-training. Specifically, GenCo applies GPT in two ways: firstly, it generates multiple augmented texts for each input instance to enhance the semantic embedding of the instance and improve the mapping to relevant labels; secondly, it generates augmented texts conditioned on the predicted label during self-training, which makes the generative process tailored to the decision boundaries in the target space. In our experiments, GenCo outperforms previous state-of-the-art methods on multiple benchmark datasets, even when only limited in-domain text data is available.1
Footnote 1: Code is available at [https://github.com/RifleZhang/GenCo](https://github.com/RifleZhang/GenCo)
## 1 Introduction
Zero-shot text classification is a challenging task of predicting the class labels of text instances without requiring labeled instances for supervised training. Effective solutions for zero-shot classification are crucial for many real-world applications as labeled data are often difficult to obtain. With the great success of large pre-trained language models in recent years Brown et al. (2020); Ouyang et al. (2022), how to leverage the generative power of such models in zero-shot text classification problems has become an important question for research.
Recent research on zero-shot text classification can be roughly divided into two categories. The first involves using large GPT models for inference. For instance, the GPT-3 model has demonstrated exceptional zero-shot performance when the input text is transformed into prompts Brown et al. (2020). InstructGPT and Alpaca, other variants of GPT, have shown performance improvements by leveraging human instructions in zero-shot classification. However, those GPT-based models have certain drawbacks due to their sizes and computational requirements, making them less accessible and less efficient to use. Additionally, they tend to predict the labels of test instances **independently**, and thus cannot leverage **correlations over test instances** or **decision boundaries** in the target space. The second category involves fine-tuning smaller models for zero-shot classification. For example, LOTClass Meng et al. (2020) uses BERT to extract keywords that are semantically related to class labels and then uses those keywords to help label additional instances for the fine-tuning of the BERT classifier. Other attempts convert classification tasks to cloze-test tasks and design prompts Schick and Schutze (2020); Gera et al. (2022) to generate training pairs for smaller classifiers. While those smaller models are easier to train and more efficient at inference, they lack, as a drawback, the language modeling power of the large GPT models.
In this paper, we propose a new approach which combines the strengths of large pretrained GPT models and the adaptivity/efficiency of a smaller sentence encoder classifier trained with contrastive self-training. Our framework, namely **Gen**eration-driven **C**ontrastive Self-Training (GenCo), effectively leverages the generative power of GPT in two novel ways to assist in training a smaller sentence encoder classifier. Firstly, it uses the GPT-generated texts to augment each input text, aiming to reduce the gap between the input-text embedding and the embeddings of semantically relevant labels (Section 2.2). Secondly, it uses GPT to generate new training instances conditioned on the system-predicted labels (the pseudo labels) in an iterative self-training loop, which can enhance the training data quality by leveraging contrastive learning with decision-boundary information in the target space (Section 2.3). These strategies yield significant performance improvements in zero-shot classification, as evident in our experiments (Section 3.3).
In summary, our contributions in this paper are the following:
* We demonstrate the effectiveness of contrastive self-learning techniques to improve a sentence-encoder model for zero-shot text classification.
* We propose novel and effective ways to leverage instruction-tuned GPT for generating augmented text during the self-training loop.
* We conduct extensive experiments on several benchmark datasets, where the proposed method improves upon previous state-of-the-art methods in zero-shot text classification.
## 2 Proposed Method
We first introduce the sentence encoder classifier as our basic design choice, and then focus on the novel components in our framework in the follow-up sections.
### Zero-shot Text Classification as Sentence Alignment
The task of zero-shot text classification involves predicting the most relevant labels for a given document without requiring any labeled training data. Given a set of \(N\) unlabeled documents \(X=\{x_{1},x_{2},\cdots,x_{N}\}\) and a set of \(L\) category descriptions \(C=\{c_{1},c_{2},\cdots,c_{L}\}\), the goal is to learn a scoring function \(g(x,c)\) that takes document \(x\) and label description \(c\) as input and produces a similarity score measuring how well the document and the label match each other. In the rest of the paper, we treat each input text as a sentence for convenience; the formulation generalizes easily to a multi-sentence passage or document without jeopardizing the key concepts.
In the absence of labeled training data, the task of assigning labels to text can be formulated as a sentence alignment problem. This involves encoding both the input sentence and the label descriptions using a pre-trained sentence encoder like SimCSE Gao et al. (2021). The alignment scores between the sentence and label embeddings are then used to predict related labels. This approach is particularly suitable for zero-shot classification as it relies on the **semantic matching** between textual instances and label descriptions in the embedding space, instead of relying on the availability of labeled training data.
However, as label descriptions are often just a few words instead of long sentences, they may not provide enough context for a pre-trained encoder to grasp the semantic meaning of the labels. To address this issue, prompt-based approaches Schick and Schutze (2020) convert label names into natural language sentences, namely _label prompts_. For example, the label "sports" can be converted to "This is an article about sports." Following this, we denote by \(p(\cdot)\) as a function that converts label name \(c\) into a prompt by placing the label description into a predefined template. We design \(T\) templates for each dataset and the label prompt embedding for category \(c\) is defined as:
\[\mathbf{e}_{c}=\frac{1}{T}\sum_{i=1}^{T}f_{\theta}\left(p^{i}(c)\right) \tag{1}\]
where \(f_{\theta}(\cdot)\) is the sentence encoder parameterized by \(\theta\). The scoring function can be implemented as:
\[g(x,c)=\operatorname{sim}\left(f_{\theta}(x),\mathbf{e}_{c}\right) \tag{2}\]
where \(\operatorname{sim}(\cdot,\cdot)\) is a similarity function such as dot product or cosine similarity.
Given an input text at inference time, the predicted label is the one with the highest similarity score:
\[\hat{y}=\operatorname*{arg\,max}_{j}g\left(x,c_{j}\right) \tag{3}\]
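To make the alignment-based classifier concrete, the sketch below implements equations 1-3 in Python. The `encode` stub stands in for any frozen sentence encoder (e.g. SimCSE), the two templates mirror the label prompts in Table 2, and all names here are illustrative rather than the paper's released code.

```python
# A minimal sketch of alignment-based zero-shot prediction (Eqs. 1-3).
# `encode` is a placeholder for a frozen sentence encoder such as SimCSE.
import numpy as np

TEMPLATES = ["Category: {}.", "It is about {}."]  # label prompts, cf. Table 2

def encode(texts):
    """Placeholder: returns an (n, d) array of sentence embeddings."""
    raise NotImplementedError

def label_embeddings(label_names):
    # Eq. 1: average the embeddings of T prompt variants per label
    prompts = [t.format(c) for c in label_names for t in TEMPLATES]
    emb = encode(prompts).reshape(len(label_names), len(TEMPLATES), -1)
    return emb.mean(axis=1)                                  # (L, d)

def predict(texts, label_names):
    # Eqs. 2-3: cosine similarity g(x, c), then argmax over labels
    e_c = label_embeddings(label_names)
    e_x = encode(texts)
    e_c /= np.linalg.norm(e_c, axis=1, keepdims=True)
    e_x /= np.linalg.norm(e_x, axis=1, keepdims=True)
    return (e_x @ e_c.T).argmax(axis=1)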
### Input Text Augmentation
In this section, we propose a way to enhance the semantic embedding of the original input text with multiple GPT-generated pieces of texts, as shown in figure 1. When the input text is relatively short, such as consisting of only one or a few sentences,
the alignment-based matching to relevant labels may not be sufficiently effective. A natural remedy is to elaborate the input with a pre-trained GPT model to generate multiple pieces of texts. Specifically, we use a simple human instruction, "Elaborate the text with a few sentences," to guide the instruction-tuned GPT model, such as Alpaca-7B (Taori et al., 2023), in generating probable expansions and continuations of the text. Our system treats the concatenation of the input text and each GPT-generated piece as one augmentation, and then takes the average of the embeddings of multiple augmentations as the merged embedding of the augmented input. Intuitively, such an augmentation should enhance the semantic matching between the input text and relevant labels if the meaning of the input is underrepresented (too short) and if the generative model is highly effective in generating relevant pieces for the given input. The first assumption is often true in realistic textual data, and the second condition is well-met by large pre-trained GPT models.
Formally, the input text \(x\sim P(x)\) can be viewed as randomly sampled from an underlying distribution, and the augmented texts can be viewed as the different variants sampled from the conditional probability distribution induced by the GPT model, denoted as \(x^{\text{aug}}\sim P_{g}(\cdot|x)\). We obtain the augmented text embedding by averaging the embeddings of the multiple versions of the augmented text:
\[\frac{1}{K}\sum_{i=1}^{K}f_{\theta}(x\oplus x_{i}^{\text{aug}}), \tag{4}\]
where \(\oplus\) is the concatenation operator for text and \(x_{i}^{\text{aug}}\) is the \(i\)-th sample from \(P_{g}(\cdot|x)\). Our augmented texts provide different views of the input text, and the mean of the embedding provides an ensemble of induced features. We then use the augmented text embedding for pseudo-label prediction. If GPT is available at test time, we can use this method for inference as well.
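As a rough sketch of equation 4, the snippet below (reusing the `encode` placeholder from the previous sketch) samples K elaborations from an instruction-tuned GPT and averages the embeddings of the concatenated views; `gpt_generate` is a hypothetical wrapper, not an API from the paper.

```python
# Sketch of the augmented-input embedding (Eq. 4). `gpt_generate` is a
# hypothetical wrapper around an instruction-tuned GPT such as Alpaca-7B.
INSTRUCTION = "Elaborate the text with a few sentences."

def gpt_generate(instruction, text, k):
    """Placeholder: returns k sampled continuations for the given text."""
    raise NotImplementedError

def augmented_embedding(x, k=5):
    pieces = gpt_generate(INSTRUCTION, x, k)     # x_i^aug ~ P_g(.|x)
    views = [x + " " + p for p in pieces]        # x (+) x_i^aug
    return encode(views).mean(axis=0)            # average over the K views
```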
### Self-Training with Contrastive Learning
We employ a contrastive self-training process to enhance a pre-trained classifier's generalization capability on iteratively augmented training data. Specifically, it is an iterative process where the pre-trained model is used to classify unlabeled data, and the newly classified data with high confidence is then used to further train the model. When the labels are noisy, previous studies have suggested using soft labeling (Xie et al., 2016; Meng et al., 2020) or label smoothing (Muller et al., 2019) to prevent the model from becoming overly confident. In this work, we propose a loss function with soft labeling that connects contrastive learning and entropy regularization (Grandvalet and Bengio, 2004).
We denote \(f_{\theta}(x)\) as our sentence encoder model. Given an input text \(x\), the distribution over labels is:
\[P(\hat{y}_{i}|x;\theta)=\frac{\exp(\operatorname{sim}(f_{\theta}(x),f_{\theta}(p_{i})))}{\sum_{c\in C}\exp(\operatorname{sim}(f_{\theta}(x),f_{\theta}(p_{c})))} \tag{5}\]
Here, \(p_{i}\) is a shorthand notation for \(p(c_{i})\), a randomly sampled label prompt for label \(c_{i}\).
Figure 1: Input Text Augmentation using GPT Models: The input text and an instruction are fed into the GPT model to generate multiple pieces of elaborated texts, each of which is concatenated to the original input to obtain an augmented text. The embeddings of the augmented texts are then averaged to obtain a merged embedding, which is used for label prediction in the self-training process.
The target distribution is derived as:
\[Q(\hat{y}_{i}|x;\theta)=\frac{\exp(\operatorname{sim}(f_{\theta}(x),f_{\theta}(p_{i}))/\tau)}{\sum_{c\in C}\exp(\operatorname{sim}(f_{\theta}(x),f_{\theta}(p_{c}))/\tau)} \tag{6}\]
where \(\tau\leq 1\) is the temperature. A lower temperature implies a sharper distribution and thus greater confidence in the predicted label. We drop the notation of \(\theta\) for convenience. The contrastive text to label (\(t2l\)) objective function is defined as:
\[\mathcal{L}_{t2l}=-\sum_{i=1}^{N}\sum_{j=1}^{L}Q(\hat{y}_{j}|x_{i})\log P(\hat {y}_{j}|x_{i}) \tag{7}\]
When \(\tau\to 0\), \(Q(\hat{y}|x)\) becomes a categorical distribution and the loss reduces to the supervised contrastive learning loss with pseudo label \(\hat{y}_{c}\) as the target:
\[\mathcal{L}_{t2l}^{\tau\to 0}=-\sum_{i=1}^{N}\log P(\hat{y}_{c}|x_{i}) \tag{8}\]
It encourages the model to predict label \(c_{i}\) given \(x\) with more confidence. On the other hand, when \(\tau=1\), the loss reduces to a minimization of the conditional entropy function \(H\):
\[\mathcal{L}_{t2l}^{\tau=1}=H\left(C\mid X\right) \tag{9}\] \[=-\sum_{i=1}^{N}\sum_{j=1}^{L}P(\hat{y}_{j}|x_{i})\log P(\hat{y}_ {j}|x_{i}) \tag{10}\]
We show a theorem stating that minimizing the loss function in equation 7 achieves effects similar to Entropy Regularization Grandvalet and Bengio (2006, 2004), which is a means to enforce the cluster assumption that the decision boundary should lie in low-density regions to improve generalization performance Chapelle and Zien (2005).
**Theorem 1**.: _Consider a binary classification problem with linearly separable labeled examples. When \(0<\tau<1\), optimizing equation 7 with gradient descent enforces a larger margin between classes and achieves a max-margin classifier under certain constraints._
We place our formal theorems and proofs in Appendix section A. In our experiments, we set \(\tau=0.1\) to balance supervised classification and low-density separation between classes.
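A compact PyTorch sketch of the text-to-label loss (equations 5-7) is given below. Stopping gradients through the sharpened target Q is our assumption (a common self-training choice the paper does not spell out), and averaging rather than summing over the batch is likewise a convention.

```python
# Sketch of the soft-label contrastive loss L_t2l (Eqs. 5-7).
import torch
import torch.nn.functional as F

def t2l_loss(text_emb, prompt_emb, tau=0.1):
    """text_emb: (N, d) input embeddings; prompt_emb: (L, d) label prompts."""
    sim = text_emb @ prompt_emb.T                 # sim(f(x), f(p_c))
    log_p = F.log_softmax(sim, dim=1)             # log P(y|x), Eq. 5
    q = F.softmax(sim / tau, dim=1).detach()      # target Q(y|x), Eq. 6
    return -(q * log_p).sum(dim=1).mean()         # Eq. 7, batch-averaged
```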
While self-training is effective in improving the performance of our model, it is not without limitations. One potential issue is overfitting to pseudo labels, which are prone to error. Additionally, self-training requires a large amount of unlabeled data, which may not always be available. In the following section, we propose conditional augmentation methods with a generative model in the training loop to make self-training more robust.
### Augmentation Conditioned on Prediction
The loss function of equation 7 can effectively enhance the separability of class instances by enforcing the decision boundary to lie in low-density regions of the embedding space. In each self-training iteration, when the sampled instances are labeled with relatively low confidence, i.e. they lie near the decision boundary, the contrastive loss pushes the instances closer to the pseudo label prompt embedding. However, self-training can lead to an undesirable bias in the classifier when instances are mislabeled. To address this issue, we propose a novel approach to the generation of labeled data for self-training. That is, we use GPT to generate labeled training pairs by augmenting each input text conditioned on the system-predicted (pseudo) labels, as shown in figure 2. For example, if a business news article discussing the retirement of Starbucks' president is misclassified with the label "sports", optimizing the model with this mislabeled training instance will make the decision boundary between business and sports articles less separable. To alleviate such an undesirable effect, we use GPT to augment the input text conditioned on the sports category, resulting in a text closer to the typical ones in the sports category instead of the original one, which lies close to the decision boundary between "sports" and "business". In other words, by using GPT to generate augmented texts conditioned on pseudo labels, we aim to enhance the system-produced training pairs with better separation of class labels in the embedding space.
Based on the aforementioned intuition, we propose an approach called _instruction-based conditional generation_ to generate augmented text conditioned on the pseudo label. In this approach, we incorporate the predicted label information into the instructions provided to the model. For instance, we can use the instruction "Discuss the sports aspects of the article" to guide the model in generating text that is more relevant to the sports category.
Additionally, we propose two loss functions to enhance the self-training algorithm with the augmented text as follows.
#### 2.4.1 Contrastive Learning for Conditionally Augmented Text and Label Prompt
To alleviate the problem of erroneous label assignment, we use the conditionally augmented text (\(x^{\text{c-aug}}\sim P_{g}(\cdot|x,\hat{y})\)) and the pseudo label prompt as positive pairs.
\[\mathcal{L}_{g2l}=-\sum_{i=1}^{N}\sum_{j=1}^{L}Q(\hat{y}_{j}|x_{i}^{ \text{c-aug}})\log P(\hat{y}_{j}|x_{i}^{\text{c-aug}}) \tag{11}\]
#### 2.4.2 Contrastive Learning for Observed Text and Augmented Text
For document representation, contrastive pairs are usually created by sampling spans of a document (Izacard et al., 2022). In our case, the generative model naturally creates different views of the data, and we use the contrastive loss between observed text and generated text for optimization:
\[\mathcal{L}_{t2g}=\sum_{i\in I}\frac{-1}{|A(i)|}\sum_{x^{\text{aug}}\in A(i)}\log\frac{\exp(\operatorname{sim}(f_{\theta}(x_{i}),f_{\theta}(x^{\text{aug}})))}{\sum_{j\in I}\exp(\operatorname{sim}(f_{\theta}(x_{i}),f_{\theta}(x_{j})))} \tag{12}\]
where \(I\) is a training batch and \(A(i)\) denotes the set of augmented texts belonging to the same pseudo class as input \(x_{i}\).
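The sketch below gives one possible PyTorch reading of equation 12, where positives are augmented texts sharing the anchor's pseudo class and the denominator runs over the other observed texts in the batch; the masking details are our interpretation of \(A(i)\).

```python
# Sketch of the text-to-generation contrastive loss L_t2g (Eq. 12).
import torch

def t2g_loss(text_emb, aug_emb, pseudo_labels, aug_labels):
    """text_emb: (N, d); aug_emb: (M, d); *_labels: pseudo classes per item."""
    sim_pos = text_emb @ aug_emb.T                         # numerator pairs
    sim_all = text_emb @ text_emb.T                        # denominator pairs
    log_denom = torch.logsumexp(sim_all, dim=1, keepdim=True)
    pos_mask = (pseudo_labels[:, None] == aug_labels[None, :]).float()
    log_frac = sim_pos - log_denom                         # log of Eq. 12 ratio
    per_anchor = -(pos_mask * log_frac).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor.mean()
```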
Figure 2: Augmented Text Generation Conditioned on Pseudo Labels: When a pseudo label is incorrect, it can mislead the training process and decrease classification performance. We generate augmented text conditioned on the pseudo label, aiming to make the generated text closer to the majority members in the category of the pseudo label. This approach aims to improve the quality of the generated instances for self-training.
Algorithm 1 shows the self-training of GenCo with the generative model assisting in the self-training loop. During training, we found that balanced sampling, which keeps the same number (\(S_{t}\) for iteration \(t\)) of training instances for each category, is important for the stability of self-training. Additionally, we use a dictionary to cache the conditionally generated text to avoid repeated generation.
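A high-level sketch of that loop follows. All helper names (`predict_probs`, `balanced_top_confident`, `conditional_generate`, `train_step`) and the linear schedule for \(S_{t}\) are hypothetical stand-ins for details we do not reproduce here.

```python
# Sketch of GenCo's self-training loop with balanced sampling and a
# generation cache; every helper below is a hypothetical stand-in.
cache = {}

def self_train(encoder, texts, label_names, iters=100, s0=8, growth=8):
    for t in range(iters):
        probs = predict_probs(encoder, texts, label_names)  # soft pseudo labels
        pseudo = probs.argmax(axis=1)
        s_t = s0 + growth * t                               # S_t per category
        batch = balanced_top_confident(texts, pseudo, probs, per_class=s_t)
        for x, y in batch:
            if (x, y) not in cache:                         # avoid regeneration
                cache[(x, y)] = conditional_generate(x, label_names[y])
        train_step(encoder, batch, cache)                   # L_t2l + L_g2l + L_t2g
    return encoder
```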
## 3 Experiments
### Datasets and Experimental Settings
We conduct experiments on \(4\) benchmark text classification datasets: AG News, DBpedia, Yahoo Answers and Amazon, with the statistics shown in table 1. In the experiments, we initialize our sentence encoder with the supervised SimCSE Roberta-base model Gao et al. (2021). The designed prompts for enhanced label description are shown in table 2. For the generative model, we use the Alpaca-7B Taori et al. (2023) model, an open-source GPT model fine-tuned with human instructions Touvron et al. (2023). The prompts for instruction-based augmentation (table 2) are the same as those used in Alpaca model fine-tuning. For the generation parameters, we used \(temperature\)=0.8, \(top\_p\)=0.95, and sample \(K\)=5 augmented texts for each instance with \(min\_length=64\) and \(max\_length=128\). For the self-training of the sentence encoder model, we used \(batch\_size\)=\(3*|C|\) (\(|C|\) is the number of categories) and \(lr\)=1e-5; the max length is \(128\) for AG News and DBPedia and \(192\) for Yahoo Answers and Amazon. All the experiments are performed on NVIDIA RTX A6000 GPUs.
### Baseline Methods
* **PET** Schick and Schutze (2020) formulates zero-shot text classification as a cloze test task, where a pretrained BERT Devlin et al. (2018) model predicts the output label(s) by completing a prompt such as "This article is about _", which is concatenated right after an input document.
* **iPET**Schick and Schutze (2020) uses a self-training algorithm to improve from the PET model, where multiple generations of models are trained by gradually increasing the number of training instances labeled by a model trained in the previous generation.
* **LOTClass**Meng et al. (2020) first applies the BERT model to extract keywords related to the label names from unlabeled texts and then assigns pseudo labels for texts based on the extracted keywords. LOTClass also applies a self-training algorithm to further improve the classification performance.
* **Other Baselines** We include prompt-based GPT model, a sentence-encoder based model without any self-training and a self-training baseline without any text augmentation. Additionally, a supervised learning baseline is included for reference.
### Main Results
In table 3, we present a comparison of the test accuracy of our model with other baselines on four benchmark classification datasets. Due to the large number of text instances, it was not feasible to perform augmentation using the entire dataset. Instead, our model was trained on a downsampled dataset, with uniform sampling resulting in less than 2% of the original data used (rows 7-8). Despite the reduced size of the dataset, we observed that our proposed model GenCo still outperforms the other zero-shot baseline methods and is close to the supervised learning settings (rows 1 and 7).
**Compared with SOTA Methods** Both LOTClass and iPET use a self-training algorithm for zero-shot classification, but our adaptation of the GPT model can better enhance self-training performance. Specifically, LOTClass uses a BERT model to extract keywords for each category, and employs lexical matching between input text and the keywords to assign pseudo labels. While the keywords can be considered as an augmentation, this is less expressive than using a GPT model to generate coherent human language as augmentation. Our proposed method uses a sentence encoder with more expressive neural features, making it more effective than using lexical-based features to assign pseudo labels. The iPET model requires training multiple models and ensembling about 15 of them, which is memory intensive. While ensembling can stabilize self-training by reducing variance, it does not introduce new information about the input text. Our approach uses a generative model to augment text data during self-training, leading to improved performance and a more memory-efficient alternative.
**Comparison with GPT**: While GPT (row 6) has demonstrated strong zero-shot performance in various tasks, it underperforms compared to our
sentence-encoder classifier baseline (row 5), which is fine-tuned using contrastive learning on the Natural Language Inference dataset Gao et al. (2021), in the context of text classification. Classification involves comparing instances, such as an article being more likely to belong to the "sports" category when compared to articles in the "business" category. Contrastive learning leverages this comparison and our contrastive self-training further improves it.
In table 4, we present the impact of inference time augmentation (assuming GPT is available at test time) and self-training on the performance metric. To test inference time augmentation, we performed experiments on a downsampling of both training and testing instances.
**Inference Time Augmentation**: Our results show that inference time augmentation (rows with "IA") leads to a performance gain of \(1\)-\(2\%\), with a more substantial improvement observed for AG News and Yahoo Answers. This may be attributed to the fact that AG News has an average text length of only \(38\) words, and the Yahoo Answers dataset includes many answers with only one phrase. Inference time augmentation effectively enhances the quality of shorter text inputs.
**Self-Training**: Our experiments demonstrate that self-training improves the performance on all datasets, even in the absence of augmented data (rows 3-4). The DBpedia dataset exhibits an improvement of over 20%. Theoretically, self-training enhances the separation of text, thereby making the decision boundary lie in the low-density area, which is critical for classification. Our generation-driven approaches, with and without conditioning on the pseudo label, both lead to improved performance. However, the conditional augmentation approach is more effective due to its ability to stabilize self-training.
### Analysis of Input Augmentation
In this evaluation, we investigate the effectiveness of input augmentation for zero-shot inference _without training_. We evaluate the performance of our model on two datasets, namely AG News and Yahoo Answers, using two evaluation metrics: per class F1 metric and ranking-based precision metric according to prediction confidence. The per class F1 metric provides an insight into how well the model performs on each individual class by balancing precision and recall. In the upper part of figure 3, our findings indicate that using GPT augmented data leads to improved performance across all categories for AG News and in eight out of ten classes for Yahoo Answers.
In the lower part of figure 3, we employ a ranking-based precision metric to assess the quality of the most confident cases. Our results demonstrate that using augmented data yields better precision for the most confident cases. Notably, our study on the Yahoo Answers dataset indicates that the predictions are better calibrated with the use of augmented data, implying that highly confident samples exhibit better precision. Conversely, such a trend was not observed in unaugmented data, where the top 30 had higher accuracy than the top 10. Better calibration justifies the sampling from the most confident pools for self-training, making it a more reliable method for improving model performance.
| Dataset | Classification Type | #Classes | #Train | #Test | Avg Length |
|---|---|---|---|---|---|
| AG News | News Topic | 4 | 120,000 | 7,600 | 38 |
| DBPedia | Wikipedia Topic | 14 | 560,000 | 70,000 | 50 |
| Yahoo Answers | Question Answering | 10 | 1,400,000 | 60,000 | 70 |
| Amazon | Product Review Sentiment | 2 | 3,600,000 | 400,000 | 78 |

Table 1: Statistics of datasets for multi-class text classification.
**Label Prompt**

(1) Category: [label].
(2) It is about [label].

**Instruction-based (Conditional) Augmentation**

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Elaborate the text in a few sentences.
(Discuss the [pseudo label] aspects of the article.)

### Input:
{text}

### Response:
```

Table 2: The designed prompts for enhanced label description and conditional augmentation based on the pseudo label.
### Analysis of Conditional Augmentation
In table 5, we present generated texts conditioned on a sample from the AG News dataset and the pseudo labels. Each example is a cherry-picked sample out of five random samples. The generated text expands on a specific aspect of the label while retaining the original meaning of the observed text.
The left figure in figure 4 shows a heatmap of the probability that a conditionally generated text falls into the corresponding pseudo label category. The highest probability occurs along the diagonal, indicating that the conditionally generated data has a closer meaning to the pseudo label. The right figure in figure 4 shows the distribution of the generated text plotted using T-SNE. The embeddings were obtained by our sentence encoder trained at the \(100\)-th iteration. We selected two instances that were misclassified as business and located close to the decision boundary. The augmented text, conditioned on the business category, was found to be closer to the label prompt embedding of the business category. This demonstrates the effectiveness of our method in generating less confusing training pairs away from the decision boundary.
## 4 Related Work
### Knowledge Distillation from GPT
To leverage the language modeling power of large models, previous works Yoo et al. (2021); Ye et al. (2022); Meng et al. (2022) use GPT to generate text and label pairs to train a classifier for downstream tasks. However, generating training data from scratch can lead to low-quality data with unrelated or ambiguous generated text Gao et al. (2022). Our approach also generates text data, but it is grounded in the context of the corpus of interest, which further enhances the quality and semantic diversity of the generated text. This approach provides a practical alternative to purely generation-based methods for zero-shot text classification.
| ID | Self-train | Methods | AG News | DBpedia | Yahoo Answers | Amazon |
|---|---|---|---|---|---|---|
| | | # unlabeled train | 4k (3.4%) | 11.2k (2%) | 15k (< 1%) | 20k (< 1%) |
| | | # unlabeled test | 7.6k | 28k | 20k | 20k |
| 1 | No | Sentence-enc | 75.6 | 73.4 | 55.5 | 89.6 |
| 2 | No | Sentence-enc + IA | 78.2 | 74.7 | 57.4 | 90.2 |
| 3 | Yes | Self-train | 83.3 | 96.3 | 62.5 | 91.1 |
| 4 | Yes | Self-train + IA | 83.9 | 96.8 | 64.3 | 91.3 |
| 5 | Yes | Self-train + TA | 86.9 | 97.0 | 66.1 | 94.4 |
| 6 | Yes | Self-train + TA + IA | 87.1 | 97.1 | 67.2 | 94.6 |
| 7 | Yes | GenCo | 89.2 | 98.4 | 68.6 | 95.3 |
| 8 | Yes | GenCo + IA | 89.7 | 98.5 | 70.2 | 95.4 |

Table 4: Results of the ablation test for our proposed method, showing the effect of specific components on the performance metric. "TA" denotes input augmentation added during training, while "IA" denotes input augmentation added during inference.
| ID | Self-train | Methods | AG News | DBpedia | Yahoo Answers | Amazon |
|---|---|---|---|---|---|---|
| 1 | – | Supervised | 94.2 | 99.3 | 77.3 | 97.1 |
| 2 | No | PET | 79.4 | 75.2 | 56.4 | 87.1 |
| 3 | Yes | iPET | 86.0 | 85.2 | 68.2 | 95.2 |
| 4 | Yes | LOTClass | 86.4 | 91.1 | – | 91.6 |
| 5 | No | Sentence-enc (SimCSE) | 74.5 | 73.8 | 55.6 | 88.8 |
| 6 | No | GPT (Alpaca-7B) | 71.2 | 65.5 | 52.1 | 87.2 |
| 7 | – | Supervised-downsample* | 93.8 | 98.7 | 76.5 | 97.0 |
| 8 | Yes | GenCo-instruction* | **89.2** | **98.3** | **68.7** | **95.4** |

Table 3: Comparison of classification methods on benchmark datasets. The test accuracy of the best performing zero-shot method is highlighted in bold. Rows 7-8 (with *) use a downsampled dataset with 4k (3.4%), 11.2k (2%), 15k (<1%), and 20k (<1%) unlabeled training instances respectively.
### Sentence Encoder and Contrastive Learning
Sentence encoders Gao et al. (2021) model the alignment of sentences by their similarity in the embedding space, which can be adapted to the zero-shot text classification task Hong et al. (2022); Shi et al. (2022). Sentence encoders are typically trained with contrastive learning, which optimizes representations by pulling inputs with similar semantics closer in the embedding space and pushing inputs with different semantics further apart. Recent approaches Izacard et al. (2022) sample positive pairs from spans of the same document and negative pairs from different documents in the training batch. Our model applies GPT to generate training pairs for contrastive learning.
### Self-training Methods
Self-training methods Van Engelen and Hoos (2020) have been proposed as a semi-supervised approach to improve a classifier from unlabeled datasets, where predictions on unlabeled data are used to fine-tune the classifier (Lee et al., 2013).
Figure 3: Per class F1 (upper) and ranking-based precision (lower) for classification performance with input augmentation.
| Label | Text |
|---|---|
| Observed Text | Starbucks Corp’s president and chief executive, Orin Smith, said Tuesday he plans to retire early next year because he wants to slow down and focus on philanthropy, family and sports. |
| Politics | The announcement of Orin Smith’s retirement is likely to have political implications for Starbucks Corporation... His replacement will have big shoes to fill, and the decision could have long-term effects on the company’s policies and direction. |
| Sports | ...Sports have always been a major part of Smith’s life, as he was a college athlete and later went on to become the CEO of Starbucks. It is clear that sports have had a major influence on his life and he wants to make time for them in his retirement. |
| Business | Starbucks Corp has seen great success under the leadership of Orin Smith, with the company’s stock price more than tripling since he became CEO in 2005. This success has allowed him to retire early and... |
| Technology | Orin Smith’s plan to retire early next year is an example of how technology has changed the way we work and live. By utilizing technology, Smith is able to take advantage of the increasingly popular trend of “work-life balance”... |

Table 5: Examples of generated text conditioned on pseudo labels in the left column.
To improve the pseudo label quality, previous work (Gera et al., 2022) uses a small set of instances with the most confident predictions for self-training. LOTClass (Meng et al., 2020) improves the quality of pseudo labels by an expansion of the label vocabulary using BERT, and iPET (Schick and Schutze, 2020) ensembles multiple versions of the model at different stages of training. Our work improves self-training by generating augmented text with GPT in the training loop.
### Authors' Considerations and Limitations
The main goal of our paper is to promote the usage of a pre-trained GPT model (Alpaca-7B) to assist in the training of a smaller model (Roberta-SimCSE) on zero-shot classification tasks. We are aware that there is room for more experiments with self-training algorithms, such as how the temperature of our loss function affects training stability. Currently, we only use the temperature as a theoretical motivation for leveraging decision boundaries between classes; tuning it remains additional work to be done.
Another aspect is data efficiency. We have shown that using GPT-generated data can alleviate the data-hunger issue in deep learning models for text classification, but a more comprehensive study could be done by applying the baseline models to the down-sampled dataset and analyzing the performance in those scenarios.
Finally, we acknowledge that several tricks and engineering designs are employed in our experiments, and we have released our code on GitHub for reference.
## 5 Conclusion
In conclusion, our proposed approach, GenCo, effectively addresses the difficulties and limitations of using pretrained large GPT models directly for zero-shot text classification. By leveraging the generative power of GPT models in a self-training loop of a smaller sentence encoder classifier with contrastive learning, GenCo outperforms state-of-the-art methods on four benchmark datasets. Our approach is particularly effective when limited in-domain text data are available. The success of our approach highlights the potential benefits of incorporating the generative power of GPT models into iterative self-training processes for smaller zero-shot classifiers. We hope that our work will inspire further research in this direction, ultimately leading to more efficient and effective NLP models.
|
2308.16210 | Deep Inductive Logic Programming meets Reinforcement Learning | One approach to explaining the hierarchical levels of understanding within a
machine learning model is the symbolic method of inductive logic programming
(ILP), which is data efficient and capable of learning first-order logic rules
that can entail data behaviour. A differentiable extension to ILP, so-called
differentiable Neural Logic (dNL) networks, are able to learn Boolean functions
as their neural architecture includes symbolic reasoning. We propose an
application of dNL in the field of Relational Reinforcement Learning (RRL) to
address dynamic continuous environments. This represents an extension of
previous work in applying dNL-based ILP in RRL settings, as our proposed model
updates the architecture to enable it to solve problems in continuous RL
environments. The goal of this research is to improve upon current ILP methods
for use in RRL by incorporating non-linear continuous predicates, allowing RRL
agents to reason and make decisions in dynamic and continuous environments. | Andreas Bueff, Vaishak Belle | 2023-08-30T09:08:46Z | http://arxiv.org/abs/2308.16210v1 | # Deep Inductive Logic Programming meets Reinforcement Learning
###### Abstract
One approach to explaining the hierarchical levels of understanding within a machine learning model is the symbolic method of inductive logic programming (ILP), which is data efficient and capable of learning first-order logic rules that can entail data behaviour. A differentiable extension to ILP, so-called differentiable Neural Logic (dNL) networks, are able to learn Boolean functions as their neural architecture includes symbolic reasoning. We propose an application of dNL in the field of Relational Reinforcement Learning (RRL) to address dynamic continuous environments. This represents an extension of previous work in applying dNL-based ILP in RRL settings, as our proposed model updates the architecture to enable it to solve problems in continuous RL environments. The goal of this research is to improve upon current ILP methods for use in RRL by incorporating non-linear continuous predicates, allowing RRL agents to reason and make decisions in dynamic and continuous environments.
## 1 Introduction
One approach to explaining the hierarchical levels of understanding within a machine learning model is the symbolic method of inductive logic programming (ILP) [16], which is data efficient and capable of learning first-order logic rules that can entail data behaviour. Recent contributions to the field have expanded the ILP framework to allow for end-to-end learning, resulting in hybrid models that can be classified as neuro-symbolic [7, 17, 23, 4]. The recent developments in neuro-symbolic ILP have expanded the potential applications of these models to a wider range of learning challenges, including reinforcement learning [18]. The ILP-based neuro-symbolic model implemented in this proposal is a differentiable extension to ILP, so-called differentiable Neural Logic (dNL) networks [17]. dNL networks are able to learn Boolean functions as their neural architecture includes symbolic reasoning. The primary neural layers in the dNL model contain weighted neurons associated with conjunctions as well as weighted activation neurons associated with disjunctions, providing a means of logical reasoning on the input as well as a means of optimisation via gradient descent.
Relational RL (RRL) is concerned with learning policies for decision-making tasks in complex, discrete relational environments [5]. In RRL, the environment is characterised by a set of entities and relationships between them, and the agent's actions can affect the state of these entities and relationships. Due to its focus on relational problems, RRL has leveraged the progress made in ILP but is not limited to one method. Recent efforts have enabled RRL to effectively tackle more complex RL problems by transitioning from purely symbolic reasoning to a more neuro-symbolic approach that leverages neural systems. Earlier research in RRL primarily focused on planning [5], emphasising model-based learning. Later research, however, focused on model-free methods that combined neuro-symbolic models and RRL [24] or on the derivation of interpretable FOL policies [11, 18]. Despite these advancements, there is limited research in applying RRL to more complex learning challenges, such as continuous state dynamics. This gap in knowledge motivates the proposed research.
The work by Payani et al. extended RRL to learn FOL policies using a dNL agent [18]. Taking the concepts of dNL, Payani et al. combined their dNL-ILP framework with RRL [18]. The authors tested on the blocks world game environment and took advantage of the declarative bias with provided background knowledge. The authors expanded RRL to handle complex scene interpretations and used their dNL-ILP differentiable inductive engine to give RRL an end-to-end learning framework, so-called dNL-RRL, where they focused on policy gradients in order to improve interpretability and on expert constraints to improve the speed of convergence.
The following paper is an extension that incorporates both a continuous and non-linear interpretation of [17] and the dNL-RRL agent found in [18] resulting in a dNL-ILP based agent that can both learn in dynamic continuous environments typically seen in RL control problems and extract FOL rules which define the agent policy. Various RL algorithms were explored in evaluating our dNL-ILP agent and ultimately Soft Actor-Critic was found to perform the best when solving problems on continuous state spaces. As the contribution bridges the application of dNL-ILP agents in discrete state spaces with that of continuous, we evaluated the model on a dynamic RL environment where the optimal policy captured rules such as those seen with classical mechanics in physics. In our evaluation, the model was applied to two control problems including the Cart Pole problem and Lunar Lander problem [2]. From our initial results, we were able to obtain agent policies which incorporate continuous predicate functions as well as non-linear continuous predicate functions. We also add that our proposed agent, while primarily focused on solving classical RL problems, is referred to as an RRL agent due to its use of relational language in the derived FOL rules and the ability to incorporate background knowledge through the use of non-linear predicate functions.
## 2 Background
### Reinforcement Learning
Reinforcement learning (RL) aims to find an optimal action sequence for an agent to maximize its expected future reward. This is typically done by modelling the environment as a Markov Decision Process (MDP), which is defined as a tuple \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T},\mathcal{E}\rangle\). \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) is the set of actions that can be taken, \(\mathcal{R}(s_{t},a_{t},s_{t+1})\) is the reward function that takes the current state \(s_{t}\), current action \(a_{t}\), and returns the reward from transitioning to the state \(s_{t+1}\), \(\mathcal{T}(s_{t},a_{t},s_{t+1})\) is the transition probability function which represents the probability of transitioning to state \(s_{t+1}\) from state \(s_{t}\) given action \(a_{t}\) was taken, and \(\mathcal{E}\subset\mathcal{S}\) is the set of terminal states [21, 12]. A discount factor \(\gamma\in[0,1]\) determines the importance of receiving a reward in the current state versus the future state, where \(R_{t}=\sum_{k=0}^{\infty}\gamma^{k}r(a_{t+k},s_{t+k})\) is the total accumulated return from time step \(t\). The value function \(V^{\pi}(s)=\mathbb{E}_{\pi}[R_{t}]\) of a policy is the measure of the expected sum of discounted rewards. The goal is to find the optimal policy \(\pi^{*}=\arg\max_{\pi}\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}r(a_{t},s_{t})|s_{0}=s]\), which maps a history of observations to the next action [20, 21, 22, 24, 25]. In practice, at each time step, the agent selects an action based on its current state and the policy, and receives a reward based on the transition to the next state. In our investigation of RL methods, we found that Soft Actor-Critic (SAC) is the best approach for our dNL-based agent. SAC combines the strengths of both value-based and policy-based methods by alternating between updates to the policy and updates to the value function and Q-function. An entropy regularisation term is added to the policy update step to encourage exploration, and the algorithm uses a temperature parameter, \(\alpha\), to control the trade-off between exploration and exploitation. As we evaluate our agent on environments with discrete actions, we use a discrete-action variant of SAC [3]. The main difference is in the policy update step, where the objective becomes
\(\pi^{*}=\arg\max_{\pi}\sum_{t=0}^{T}\mathbb{E}_{s_{t}\sim D}[\mathbb{E}_{a_{t}\sim\pi}[r(s_{t},a_{t})+\alpha\mathbb{H}(\pi(\cdot|s_{t}))]]\), where \(\mathbb{H}(\pi(a|s))\) is the entropy of the policy, \(\gamma\) is the discount factor, \(D\) is the replay buffer, and \(\alpha\) is the temperature parameter. In the discrete setting, \(\pi\) maps states to a vector of probabilities with \(|A|\) elements. For the actor cost function, we use \(J_{\pi}(\phi)=\mathbb{E}_{s_{t}\sim D}[\mathbb{E}_{a_{t}\sim\pi}[\alpha\log(\pi_{\phi}(a_{t}|s_{t}))-Q_{\theta}(s_{t},a_{t})]]\), where \(\pi_{\phi}\) is the policy parameterized by \(\phi\) and \(Q_{\theta}\) is the Q-function parameterized by \(\theta\). The discrete SAC policy maximises the probability of discrete actions, as opposed to the continuous SAC policy, which optimises the two parameters of a Gaussian distribution.
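As a sketch of the discrete-action actor update, note that with a finite action set the inner expectation can be computed exactly from the policy's probability vector; the single critic below is a simplification (discrete SAC typically takes the minimum of two Q-networks).

```python
# Sketch of the discrete SAC actor loss J_pi(phi); a single critic is used
# here for brevity, a simplification of the usual twin-critic setup.
import torch

def actor_loss(policy, q_net, states, alpha):
    probs = policy(states)                    # (B, |A|) action probabilities
    log_probs = torch.log(probs + 1e-8)
    q_vals = q_net(states)                    # (B, |A|) values Q_theta(s, a)
    # E_{a~pi}[alpha * log pi(a|s) - Q(s, a)], summed exactly over actions
    return (probs * (alpha * log_probs - q_vals)).sum(dim=1).mean()
```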
### Inductive Logic Programming (ILP)
Inductive logic programming (ILP) is a method of symbolic computing which automatically constructs logic programs given a background knowledge base (KB) [15]. An ILP problem is represented as a tuple \((\mathcal{B},\mathcal{P},\mathcal{N})\) of ground atoms, with \(\mathcal{B}\) being the background assumptions, \(\mathcal{P}\) being a set of positive instances which help define the target predicate to be learned, and \(\mathcal{N}\) being the set of negative instances of the target predicate. The goal of ILP is to construct a logic program that explains all provided positive sets and rejects the negative ones. Given an ILP problem \((\mathcal{B},\mathcal{P},\mathcal{N})\), the goal is to identify a set of hypotheses (clauses) \(\mathcal{H}\) such that \(\mathcal{B}\wedge\mathcal{H}\models\gamma\) for all \(\gamma\in\mathcal{P}\) and \(\mathcal{B}\wedge\mathcal{H}\not\models\gamma\) for all \(\gamma\in\mathcal{N}\), where \(\models\) denotes logical entailment. In other words, the conjunction of the background knowledge and hypothesis should entail all positive instances and not entail any negative instances.
### Differentiable Neural Logic (dNL)
The core component of the dNL network is its use of differentiable neural logic layers to learn Boolean functions [16]. The dNL architecture uses membership weights and conjunctive and disjunctive layers to learn a target predicate or Boolean function. Learning a target predicate \(p\) requires the construction of a Boolean function \(\mathcal{F}_{p}\), which passes a Boolean vector \(\mathbf{x}\) of size \(N\) with elements \(x^{(i)}\) into a neural conjunction function \(f_{conj}\) (see equation 1a), which is defined by a conjunction Boolean function \(F_{conj}\) (see equation 1b). A predicate defined by a Boolean function in this manner is extracted by parsing the architecture for membership weights (\(w^{(i)}\)) above a given threshold, where membership weights are converted to Boolean weights via a sigmoid \(m^{(i)}=\sigma(cw^{(i)})\) with constant \(c\geq 1\). Membership weights are paired with continuous lower and upper bound predicate functions (see equation 3), which are eventually interpreted as atoms in the body of the predicate being learned. These same Boolean predicate functions are used to transform non-Boolean data into a Boolean format for the logic layers.
\[f_{conj}(\mathbf{x})=\prod_{i=1}^{N}F_{conj}(x^{(i)},m^{(i)}) \tag{1a}\] \[F_{conj}(x^{(i)},m^{(i)})=\overline{\overline{x^{(i)}}m^{(i)}}=1-m^{(i)}(1-x^{( i)}) \tag{1b}\]
Following induction by the conjunctive layer, outputs are fed into a neural disjunction function \(f_{disj}\) (see equation 2a) which is constructed from disjunctive Boolean functions \(F_{disj}\) (see equation 2b). The disjunctive layer provides multiple definitions for our target predicate if necessary.
\[f_{disj}(\mathbf{x})=1-\prod_{i=1}^{N}(1-F_{disj}(x^{(i)},m^{(i)})) \tag{2a}\] \[F_{disj}(x^{(i)},m^{(i)})=x^{(i)}m^{(i)} \tag{2b}\]
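A direct transcription of equations 1-2 is sketched below; the original dNL code is in TensorFlow, so this PyTorch rendering and the choice \(c=20\) are ours.

```python
# Sketch of the dNL conjunction and disjunction neurons (Eqs. 1-2);
# x is a fuzzy-Boolean vector in [0, 1]^N and w are the trainable weights.
import torch

def dnl_conj(x, w, c=20.0):
    """f_conj(x) = prod_i [1 - m_i (1 - x_i)] with m = sigmoid(c * w)."""
    m = torch.sigmoid(c * w)
    return torch.prod(1.0 - m * (1.0 - x), dim=-1)

def dnl_disj(x, w, c=20.0):
    """f_disj(x) = 1 - prod_i [1 - m_i * x_i]."""
    m = torch.sigmoid(c * w)
    return 1.0 - torch.prod(1.0 - m * x, dim=-1)
```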
As mentioned, a target predicate function \(\mathcal{F}_{p}\) is defined by the cascading architecture of the dNL network, and within the network bounded continuous Boolean predicates associated with a continuous feature are used to define our target predicate. Boolean predicate functions are used to handle continuous input by partitioning the input into a series of lower and upper-bound predicates, referred to as continuous predicates as they capture the interval bounds of continuous features. Similarly, Boolean predicates mapped to discrete features are referred to as discrete predicates. These bounded continuous predicates return either true or false when a continuous value meets the condition. A continuous input \(x\) is associated with \(k\) pairs of upper and lower boundary predicates, each pair corresponding to a bounded range \((l_{xi}<x<u_{xi})\), where \(i\in\{1,2,\cdots,k\}\). A Boolean lower boundary predicate \(gt_{x}^{i}(x,l_{xi})\) states whether _"x is greater than \(l_{xi}\)"_ is true, and a Boolean upper boundary predicate \(lt_{x}^{i}(x,u_{xi})\) states whether _"x is less than \(u_{xi}\)"_ is true (see equation 3).
\[\mathcal{F}_{gt_{x}^{i}}=\sigma(c(x-u_{xi})),\quad\mathcal{F}_{It_{x}^{i}}= \sigma(-c(x-l_{xi})) \tag{3}\]
## 3 Methodology
Developing a neuro-symbolic RL agent that incorporates continuous predicate functions posed several non-trivial challenges. Firstly, the work of Payani et al. [17, 18], which our model extends, did not explore non-linear predicate functions, so we had to develop new functions to capture more complex relationships in the data (see equation 7). Additionally, while Payani et al. did introduce continuous predicates in a supervised setting, they only tested dNL-ILP on two datasets (Wine and Sonar datasets [17, 6]), whereas we made the integration of continuous predicates a focus of our work. 1 Another challenge was the fact that the original dNL-RRL model explicitly dealt with discrete predicates, so we had to adapt the model to incorporate continuous predicates. Finally, while Payani et al. used the REINFORCE RL algorithm [22], we had to test out various other RL algorithms, including SAC [9], to find the most effective implementation for our model.2
Footnote 1: Moreover, from an engineering standpoint, the dNL code was originally written in TensorFlow 1, which had to be converted to TensorFlow 2 for use in the RL environment.
Footnote 2: Extensive coding was required to convert the SAC algorithm, which typically has a discrete action space, to TensorFlow 2, as all available examples online were coded in PyTorch.
In the dNL-RL interpretation, the Boolean target predicate function \(\mathcal{F}_{p}\) is referred to as a _discrete action predicate function_. The actions of the agent are the target predicates and so must remain discrete. For example, an agent that can take an action \(turnLeft()\) has the corresponding _discrete action predicate function_ \(\mathcal{F}_{turnLeft}\).
Figure 1: Given \(k=2\) and a feature instance \(x^{*}\) with value \(0.8\), two distinct bounded Boolean predicates are activated, stating that \(x^{*}\) _is less than \(1.0\) and \(x^{*}\) is greater than \(0.5\)._
Each _discrete action predicate function_ is defined by an input matrix \(\mathbf{I}\), which is composed of continuous lower and upper bound predicate functions as well as discrete Boolean predicate functions, all with associated weights. The set of continuous state features \(cnt_{p}\) and the set of discrete state features \(dsc_{p}\) for the associated RL environment are given Boolean interpretations after being passed through the input matrix, as seen in equation 4. The row dimension of \(\mathbf{I}\) is defined by the batch size \(\mathbf{b}\) and the column dimension is defined by \(N_{e}=(2\times k\times|cnt_{p}|)+(2\times|dsc_{p}|)\) (assuming discrete features are Boolean). The state feature ranges used to define the Boolean predicate functions are determined by equal-width binning. The \(\mathcal{F}_{e}\) function takes in a one-hot encoded Boolean discrete input.
\[\mathbf{I}=\begin{bmatrix}\mathcal{F}_{gt_{e_{1}}^{1}},\mathcal{F}_{lt_{e_{1}}^{1}}&\cdots&\mathcal{F}_{gt_{e_{1}}^{k}},\mathcal{F}_{lt_{e_{1}}^{k}}&\cdots&\mathcal{F}_{gt_{e_{|cnt_{p}|}}^{k}},\mathcal{F}_{lt_{e_{|cnt_{p}|}}^{k}}\\ \vdots&\vdots&\vdots&\vdots\end{bmatrix}\bigoplus\begin{bmatrix}\mathcal{F}_{e_{1}},\mathcal{F}_{\overline{e_{1}}}&\cdots&\mathcal{F}_{e_{|dsc_{p}|}},\mathcal{F}_{\overline{e_{|dsc_{p}|}}}\\ \vdots&\vdots&\vdots\end{bmatrix} \tag{4}\]
In defining the _discrete action predicate function_\(\mathcal{F}_{p}\) for an action \(p\), the function takes as input the input matrix \(\mathbf{I}\) as well as the disjunction layer \(F_{disj}\) and the conjunction layer \(F_{conj}\) where \(N_{e}\) is used to define the number of layers in the conjunction layer and \(N_{p}\) is used to define the number of layers in the disjunction layer, seen in equation 5. As an agent learns rules for multiple actions, we define the set of discrete action predicates as the _predicate action policy_\(\pi_{\mathcal{F}}\) which is the set \(\{\mathcal{F}_{p^{1}},\mathcal{F}_{p^{2}},\cdots,\mathcal{F}_{p^{n}}\}\) for all actions in the action space \(p\in\mathcal{A}\) for the RL environment.
To prevent the gradient from becoming excessively small during training, initial weights \(\mathbf{W}\) are initialised using a negative mean random Gaussian distribution, which ensures that they are close to zero. To keep membership weights \(\mathbf{m}\) between 0 and 1, a constant \(c\geq 1\) is applied, followed by a sigmoid function. In equations 6a and 6b, we apply the conjunction function to the input matrix using the corresponding membership weights. Note that \(\mathbf{W}^{conj}\) is a matrix with dimensions \(N_{p}\) and \(N_{e}\) while \(\mathbf{W}^{disj}\) is a vector of size \(N_{p}\). The disjunction function takes the output of the conjunction function as input as we cascade our neural architecture.
\[\mathcal{F}_{p|\mathbf{I},N_{e},N_{p}}=F_{disj}(N_{p},F_{conj}(N_{e},\mathbf{ I})) \tag{5}\]
\[F_{conj}=\prod_{i=1}^{N_{e}}[1-m_{i}^{conj}(1-\mathbf{I}_{i})]\quad\text{where}\quad\mathbf{m}^{conj}=\sigma(c\mathbf{W}^{conj}),\;\mathbf{W}^{conj}_{N_{p},N_{e}}\sim\mathcal{N} \tag{6a}\]
\[F_{disj}=1-\prod_{i=1}^{N_{p}}[1-m_{i}^{disj}F_{conj}^{i}]\quad\text{where}\quad\mathbf{m}^{disj}=\sigma(c\mathbf{W}^{disj}),\;\mathbf{W}^{disj}_{1,N_{p}}\sim\mathcal{N} \tag{6b}\]
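Putting these pieces together, one discrete action predicate function \(\mathcal{F}_{p}\) can be sketched as a small module: a conjunction layer of \(N_{p}\) rows over the \(N_{e}\) input columns, followed by a disjunction over the \(N_{p}\) conjunctions. The specific negative initial mean and the constant \(c\) are our choices, consistent with the description above.

```python
# Sketch of a discrete action predicate function F_p (Eq. 5) built from the
# weighted layers of Eqs. 6a-6b; weight means are negative so that the
# membership weights sigmoid(c * W) start near zero.
import torch

class ActionPredicate(torch.nn.Module):
    def __init__(self, n_e, n_p, c=20.0, mean=-1.0):
        super().__init__()
        self.c = c
        self.w_conj = torch.nn.Parameter(mean + torch.randn(n_p, n_e))
        self.w_disj = torch.nn.Parameter(mean + torch.randn(n_p))

    def forward(self, I):                     # I: (B, N_e) input matrix
        m_c = torch.sigmoid(self.c * self.w_conj)
        conj = torch.prod(1 - m_c * (1 - I.unsqueeze(1)), dim=-1)   # (B, N_p)
        m_d = torch.sigmoid(self.c * self.w_disj)
        return 1 - torch.prod(1 - m_d * conj, dim=-1)               # (B,)
```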
We also incorporate non-linear transformations of some state features, since in many RL control problems the next state is computed from non-linear transformations of the current state. We define the following \(k\) upper-boundary and lower-boundary functions as non-linear transformation predicates in equation 7.
\[\mathcal{F}_{gt_{f(x)}^{i}}=\sigma(c(f(x)-l_{xi})),\;\mathcal{F}_{lt_{f(x)}^{i}}=\sigma(-c(f(x)-u_{xi}))\,\text{ where }\,f(x)\in\{sqr(x),sin(x),\cdots\} \tag{7}\]
Our dNL based agents can be differentiated by the type of prior information used. The baseline dNL agent uses only continuous and discrete Boolean predicates and is referred to as a dNL RL continuous (dNLRLc) agent. It learns the continuous state features without using any prior information from a
knowledge base \(\mathsf{KB}_{T}\) which contains transformation functions mapped to specific continuous state features. However, when prior information such as non-linear equations or transformations between features is included, the agent becomes a dNL RL non-linear continuous (dNLRLnlc) agent. This involves adding non-linear continuous Boolean predicates to the input matrix as new predicate functions in \(\pi_{\mathcal{F}}\). In the experiment section, we specify which non-linear transformations were used for the dNLRLnlc agent in each RL environment.
In Algorithm 1 we provide a pseudo-code implementation for a dNL-RL agent. The agent takes in a state \(\mathbf{s}_{t}\), at time step \(t\) and returns an action \(\mathbf{a}_{t}\) for that time step. The algorithm iterates through each feature of the state, \(s_{t}^{(i)}\), and separates the elements into two cases: discrete and continuous. If the element is discrete, it adds the mapping of the discrete predicate \(dsc_{p}[i]\) to the state element \(s_{t}^{(i)}\) in a processed state set \(\mathbb{S}_{t}\). If the element is continuous, it checks if a transformation function for the feature exists in the knowledge base \(\mathsf{KB}_{T}[i]\). If it does, it adds the mapping of the continuous predicate \(cnt_{p}[i]\) to the function applied to the state feature \(s_{t}^{(i)}\) in the state set \(\mathbb{S}_{t}\). If the knowledge base does not exist, it adds the mapping of the continuous predicate \(cnt_{p}[i]\) to the state feature \(s_{t}^{(i)}\) instead. After iterating through all state features, the policy is returned by applying the dNLRL policy function to the set \(\mathbb{S}_{t}\), and then sampling an action \(\mathbf{a}_{t}\) from the _predicate action policy_.
```
1: Input: \(\mathbf{s}_{t}\) - state at time step \(t\)
2: Output: \(\mathbf{a}_{t}\) - action at time step \(t\)
3: for \(s_{t}^{(i)}\in\mathbf{s}_{t}\) do
4:   if \(s_{t}^{(i)}\) is discrete then
5:     \(\mathbb{S}_{t}\cup\{dsc_{p}[i]\mapsto s_{t}^{(i)}\}\)
6:   endif
7:   if \(s_{t}^{(i)}\) is continuous then
8:     if \(\mathsf{KB}_{T}[i]\) exists then
9:       \(\mathbb{S}_{t}\cup\{cnt_{p}[i]\mapsto\mathsf{KB}_{T}[i](s_{t}^{(i)})\}\)
10:    else
11:      \(\mathbb{S}_{t}\cup\{cnt_{p}[i]\mapsto s_{t}^{(i)}\}\)
12:    endif
13:  endif
14: endfor
15: \(\pi_{\mathcal{F}}\leftarrow\) dNLRL(\(\mathbb{S}_{t}\))
16: \(\mathbf{a}_{t}\sim\pi_{\mathcal{F}}(\mathcal{F}_{\mathbf{a}}|\mathbb{S}_{t})\)
```
**Algorithm 1** dNL-RL agent
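The feature-routing step of Algorithm 1 (lines 3-14) can be rendered in plain Python as below; the choice of index 2 for the sine-transformed pole angle is only an illustration of a \(\mathsf{KB}_{T}\) entry, not the paper's actual configuration.

```python
# Python rendering of Algorithm 1's feature processing: discrete features
# pass through unchanged; continuous features are routed through KB_T when
# a transformation exists. The KB_T entry below is illustrative only.
import numpy as np

KB_T = {2: np.sin}   # e.g. sine of the pole angle (feature index assumed)

def process_state(s, is_discrete):
    processed = {}
    for i, v in enumerate(s):
        if is_discrete[i]:
            processed[i] = v              # discrete predicate input
        elif i in KB_T:
            processed[i] = KB_T[i](v)     # non-linear continuous predicate
        else:
            processed[i] = v              # plain continuous predicate
    return processed
```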
To derive the predicate action policy \(\pi_{\mathcal{F}}\), we perform Boolean reasoning on the set of processed state features with associated predicates as seen in Algorithm 2. Here we take as input a set of discrete predicates \(dsc_{p}\), a set of continuous predicates \(cnt_{p}\), a set of target action predicates \(\mathbb{P}\), and a set of processed state feature predicates \(\mathbb{S}_{t}\). We loop through each target action predicate in the set \(\mathbb{P}\) and for each predicate, it creates two empty sets used for constructing the input matrix, one for the continuous predicates \(\mathbf{I}_{c}\) and one for the discrete predicates \(\mathbf{I}_{d}\). The dNLRL policy then loops through each continuous predicate in \(cnt_{p}\) and for each predicate, it creates a disjunction of predicates, which are added to the continuous input matrix set \(\mathbf{I}_{c}\). Then it does the same for each discrete predicate in \(dsc_{p}\) but instead, the discrete predicates are added to the discrete input matrix set \(\mathbf{I}_{d}\). We take the union of these two sets to get the final input matrix \(\mathbf{I}\). Finally, it creates an evaluated target action predicate \(\mathcal{X}_{p}\) by reasoning on target predicate function \(\mathcal{F}_{p}\) as defined in equation 5. This target action predicate \(\mathcal{X}_{p}\) is then added to the predicate action policy \(\pi_{\mathcal{F}}\).
```
1: Input: \(dsc_{p}\) - set of discrete predicates, \(cnt_{p}\) - set of continuous predicates, \(\mathbb{P}\) - set of target action predicates, \(\mathbb{S}_{t}\) - set of state feature predicates at time step \(t\)
2: Output: \(\pi_{\mathcal{F}}\) - predicate action policy
3: for \(p\in\mathbb{P}\) do
4:   \(\mathbf{I}_{c}\) = []
5:   \(\mathbf{I}_{d}\) = []
6:   for \(e\in cnt_{p}\) do
7:     \(\mathbf{I}_{c}\cup\bigcup_{i=1}^{k}\left(\mathcal{F}_{gt_{e}^{i}}(\mathbb{S}_{t}[e])\vee\mathcal{F}_{lt_{e}^{i}}(\mathbb{S}_{t}[e])\right)\)
8:   endfor
9:   for \(e\in dsc_{p}\) do
10:    \(\mathbf{I}_{d}\cup\left(\mathcal{F}_{e}(\mathbb{S}_{t}[e])\vee\mathcal{F}_{\overline{e}}(\mathbb{S}_{t}[e])\right)\)
11:  endfor
12:  \(\mathbf{I}\leftarrow\mathbf{I}_{d}\cup\mathbf{I}_{c}\)
13:  \(\mathcal{X}_{p}\leftarrow\mathcal{F}_{p|\mathbf{I},N_{e},N_{p}}\)
14:  \(\pi_{\mathcal{F}}\cup\{\mathcal{X}_{p}\}\)
15: endfor
```
**Algorithm 2** Single Step Forward Chain Model/ dNLRL policy
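As a companion sketch, the loop structure of Algorithm 2 could look as follows in Python. The callables `F_gt`, `F_lt`, `F_eq`, `F_neg`, and `F_p` are assumed stand-ins for the learned continuous inequality, discrete, and target predicate functions, and the fuzzy disjunction is approximated here with `max`; none of these names come from the original implementation.

```python
def build_predicate_action_policy(dsc_p, cnt_p, targets, S_t, bins,
                                  F_gt, F_lt, F_eq, F_neg, F_p):
    """Evaluate one forward-chain step; return {action predicate: X_p}."""
    policy = {}
    for p in targets:
        I_c, I_d = [], []
        for e in cnt_p:
            # disjunction (fuzzy OR via max) of the paired inequality
            # predicates over every bin of the continuous feature e
            I_c.append(max(max(F_gt(e, i, S_t[e]), F_lt(e, i, S_t[e]))
                           for i in range(bins[e])))
        for e in dsc_p:
            # discrete predicate or its negation
            I_d.append(max(F_eq(e, S_t[e]), F_neg(e, S_t[e])))
        I = I_d + I_c                 # final input matrix
        policy[p] = F_p(I)            # evaluated target action predicate X_p
    return policy
```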
## 4 Experiments and Results
### Environments and Tasks
The following RL environments are continuous control problems developed by OpenAI Gym [1]. The Cart Pole problem is a benchmark for evaluating RL algorithms [12]. The goal of the problem is to balance a pole in the upright position by selecting between two discrete actions (move left, move right). The state space is 4-dimensional, all continuous. The features are the following: \(x:\) Cart Position, \(x^{\prime}:\) Cart Velocity, \(\theta:\) Pole Angle, and \(\theta^{\prime}:\) Pole Angular Velocity. In the Lunar Lander environment, an agent learns a policy for throttling the side and main engines to control the descent of a lander such that it lands on a landing pad. The discrete actions include activating the right thruster engine, activating the left thruster engine, activating the main engine, and doing nothing. The state space is 8-dimensional: 6 inputs are continuous and 2 are discrete. The continuous state inputs include \(x:\) Lander Position on the X-axis, \(y:\) Lander Position on the Y-axis, \(v_{x}:\) Horizontal velocity, \(v_{y}:\) Vertical velocity, \(\theta:\) Angular orientation in space, and \(v_{\theta}:\) Angular velocity. The remaining two features are Boolean: Left, which indicates if the left leg is touching the ground, and Right, which indicates if the right leg is in contact with the ground.
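For reference, both benchmarks can be instantiated directly from Gym; the environment IDs below are assumptions about the versions used (Lunar Lander additionally requires the Box2D extra).

```python
import gym  # pip install gym[box2d] for the Lunar Lander environment

cartpole = gym.make("CartPole-v1")
lander = gym.make("LunarLander-v2")

print(cartpole.observation_space)  # Box(4,): x, x', theta, theta'
print(cartpole.action_space)       # Discrete(2): move left / move right
print(lander.observation_space)    # Box(8,): 6 continuous + 2 leg contacts
print(lander.action_space)         # Discrete(4): nothing/left/main/right
```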
#### 4.1.1 Baselines
During our investigation into the development of our continuous logic-based algorithm, we first sought to see whether the dNLRLc agent could in fact learn interpretable policies in continuous environments. As stated, this led to the evaluation of the dNLRLc architecture under various RL algorithms. These include the policy gradient algorithm REINFORCE [21], the model-free algorithm Deep Q-Network (DQN) [14], the on-policy method of Advantage Actor-Critic (A2C) [13], and the Soft Actor-Critic (SAC) [8]. In the case of SAC, we use a variant designed to handle discrete actions [2]. In cases where we inject prior knowledge in the form of non-linear functions added to the transformation knowledge base, we designate the model as SAC-NL.
### Algorithm Performance Comparison
We present algorithmic comparisons of the various RL algorithms combined with the dNLRLc agent, where evaluation is carried out on the Cart Pole problem. Only SAC is used to test the dNLRLnlc agent, as it was the best-performing algorithm with the dNLRLc agent. In the Lunar Lander environment, we evaluate both the dNLRLc and dNLRLnlc agents using SAC exclusively. Each algorithmic framework used the same binning scheme for the continuous features, namely 'equal-width binning', where the bins for each feature were predefined for the learning environment and kept constant between the various RL algorithms.
#### 4.2.1 Task: Cart Pole Problem
For the Cart Pole problem, each algorithmic framework used the same 'equal-width binning' scheme with the number of bins set to \([4,4,4,4]\) for the four features respectively. In Figure 2, we can observe the results comparing the performances of each dNLRLc agent and paired RL architecture in the Cart Pole problem domain. As can be observed, the best performing models are SAC and SAC-NL. Performance of dNL-based agents is noticeably poor in the case of the REINFORCE algorithm, where rewards cap at 22.0. DQN and A2C perform better, but rewards plateau far below what is to be expected. The non-linear addition for the dNLRLnlc agent is a simple transformation of the Pole Angle \(\theta\), as the calculation of the next state \(s_{t+1}\) within the code for the Cart Pole problem is dependent on the _sine_ transformation of \(\theta\). For SAC-NL, we apply a _sine_ transformation, and so policies can contain atoms designated _PoleAngleSine_ with corresponding inequalities. The performance of the dNLRLnlc agent is not significantly better than that of the dNLRLc agent when using SAC. Both agent variants are able to achieve the maximum reward of 300.0 consistently, with periodic fluctuations. In Figure 4, we can observe the moving standard deviation for the dNLRLc and dNLRLnlc agents when using SAC. Although the fluctuations in the standard deviations are not in sync, the variation between the two agents is not significant enough to conclude that one agent is more stable.
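A minimal sketch of this binning setup, assuming illustrative feature ranges (`lows`/`highs` are not the values used in the paper): the interior boundaries of the equal-width bins seed the continuous inequality predicates, and the transformation knowledge base maps the pole-angle index to _sine_.

```python
import numpy as np

# assumed per-feature ranges; the paper predefines these per environment
lows  = np.array([-2.4, -3.0, -0.21, -3.0])
highs = np.array([ 2.4,  3.0,  0.21,  3.0])
n_bins = [4, 4, 4, 4]

# interior boundaries of the equal-width bins for each feature
boundaries = [np.linspace(lo, hi, k + 1)[1:-1]
              for lo, hi, k in zip(lows, highs, n_bins)]

# transformation knowledge base: pole angle (feature index 2) -> sine
KB_T = {2: np.sin}
```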
Figure 2: Episode scores for RL algorithms relying on dNLRLc and dNLRLnlc agents.

Figure 3 presents a comparison of the performance of the various RL algorithms using standard neural networks in solving the Cart Pole problem. The figure shows that, while SAC is able to solve the problem and stabilise, the neural network agents learn faster and are more stable than the dNLRLc agents. Agents using A2C and DQN exhibit inconsistent performance with greatly fluctuating rewards, and both the neural network and dNLRLc agents perform poorly when using REINFORCE. These findings suggest that while dNLRLc models may offer greater interpretability, this may come at the cost of decreased performance in some RL tasks. Given that, it is also evident that SAC is the primary contributor to solving the control problem, as even standard neural-network-based agents failed otherwise. We also note that the black-box neural-network-based SAC agent outperforms the interpretable dNL-RL SAC agent in terms of speed, achieving an average convergence time of 2168.31 seconds compared to 10807.94 seconds for the dNL-RL agent. This highlights a trade-off between interpretability and learning efficiency, which could be addressed in future research by optimising the algorithm for faster convergence.
The main advantage of using dNL agents is that the returned policies are interpretable. These policies provide predicate logic interpretations for the individual discrete actions; in the case of the Cart Pole problem, we have rules for when to move _left()_ and when to move _right()_. The policy in our case is defined as a _predicate action policy_: individual actions are treated as target predicates whose clausal body states which inequalities should be true for a state feature in order for the action to be considered by the agent. We also clarify that the clausal body is defined by a conjunction of atoms, where the associated membership weight extracted from the conjunction neuron is placed in brackets before the atom. If the membership weight is above 0.95, i.e. the model is confident the atom should be in the definition, then the weight is not listed next to the atom. Similarly, for the disjunction neurons, the value in brackets before a rule represents its membership weight.
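As an illustration of this convention, a hypothetical helper for rendering one conjunction as a clause body might hide weights at or above the 0.95 threshold:

```python
def format_clause(atoms, weights, threshold=0.95):
    """Render one conjunction as a clause body, hiding confident weights."""
    parts = [atom if w >= threshold else f"[{w:.2f}]{atom}"
             for atom, w in zip(atoms, weights)]
    return ":- (" + " \u2227 ".join(parts) + ")"

print(format_clause(["CartPos<2.82", "PoleAngle>-0.06", "PoleAngleVeloc>0.08"],
                    [0.98, 0.81, 0.97]))
# :- (CartPos<2.82 ∧ [0.81]PoleAngle>-0.06 ∧ PoleAngleVeloc>0.08)
```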
In Table 1, the predicate action policy for the dNLRLc agent trained using SAC is provided.
\begin{table}
\begin{tabular}{c l} \hline \hline
**CartPole** & Policy rules for dNLRLc \\ \hline
 & **left()** \\
 & \(:-[0.56](\mathit{CartPos}<2.83\wedge[0.66]\mathit{CartVeloc}>-1.19\wedge\mathit{CartVeloc}>0.18\) \\
 & \(\wedge\,\mathit{PoleAngle}>0.62\wedge\mathit{PoleAngleVeloc}>-0.24)\) \\
**mean reward:** & \(:-[0.81](\mathit{CartPos}<2.82\wedge\mathit{PoleAngle}>-0.06\wedge\mathit{PoleAngleVeloc}>0.08)\) \\
290.3 \(\pm\) 32.3 & **right()** \\
 & \(:-(\mathit{CartVeloc}>-1.19\wedge\mathit{PoleAngle}<-0.03\wedge\mathit{PoleAngleVeloc}<0.34\wedge[0.56]\mathit{PoleAngleVeloc}<2.48)\) \\
 & \(:-(\mathit{CartPos}<0.16\wedge[0.52]\mathit{CartVeloc}<2.11\wedge[0.50]\mathit{PoleAngle}>-0.56\) \\
 & \(\wedge\,\mathit{PoleAngle}<0.11\wedge\mathit{PoleAngleVeloc}<0.25)\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: The FOL policy rules for the dNLRLc agent, trained using SAC. Extracted continuous inequality predicates define rules for each action.
In the body of the discrete action predicate _left()_, two definitions are returned, both with their associated disjunction-neuron membership weights listed. The three state features present in the second definition are Cart Position (\(\mathit{CartPos}<2.82\)), Pole Angle (\(\mathit{PoleAngle}>-0.06\)), and Pole Angular Velocity (\(\mathit{PoleAngleVeloc}>0.08\)). Similarly, the body of the discrete action predicate _right()_ contains two predicate definitions. In both of these cases, the agent is confident in either definition, as indicated by the absence of a membership weight for the corresponding disjunction neuron.
In Table 2, the predicate action policy for the dNLRLnlc agent trained using SAC is provided. The definitions in this predicate action policy are comparable to those given by the dNLRLc agent. Note that the discrete action predicate _left()_ is defined by a single predicate rule. For both discrete action predicates, we see the presence of the non-linear continuous predicate in the definitions: \((\mathit{PoleAngleSin}>0.00)\) in the definition of the discrete action predicate \(left()\), \((\mathit{PoleAngleSin}<0.00)\) in the first definition of \(\mathit{right()}\), and \((\mathit{PoleAngleSin}<0.65\wedge\mathit{PoleAngleSin}<0.28)\) in the second definition. In Tables 1 and 2 the mean reward is also provided with the standard deviation; the mean reward in this case refers to the mean of the last 100 episodes.
#### 4.2.2 Task: Lunar Lander
At present, the Lunar Lander stands as the more challenging environment for the dNL-RL agents. As the previous control problem demonstrated that SAC is the best-performing algorithm, only SAC was tested on the Lunar Lander environment. In setting up the experiments, it was found that both the discretisation scheme and the initial high and low values for the state features had a significant impact. In the context of the Lunar Lander, a reward of 200 indicates the lander successfully landed on the platform. The binning scheme deployed is \([3,3,3,3,3,3,-,-]\) for the respective features, where \((-)\) indicates that no binning is performed as the feature is discrete/Boolean. In order to evaluate the consistency and performance of the dNLRLc and dNLRLnlc agents, multiple trials were conducted using different random seeds. We ran each experiment 5 times and analysed the resulting policies. While only one representative policy is presented for each agent, any variations or discrepancies across the trials are discussed. The non-linear change was again a _sine_ transformation on the state feature angular orientation in space \(\theta\), associated with the transformation predicate _AngleSine_.
In Figure 5, we can observe the performance of the dNLRLc agent, where dark purple corresponds to the mean reward. The results indicate the agent does in fact learn; however, it is challenging for the dNL agent to land the lander consistently. The episode rewards show the agent can land the lander, but the mean rewards skew below a true success of 200. We note the mean reward across all trials fluctuates around one hundred, with significant deviations from the mean. In some initial states, the learned policy does result in successful landings, often scoring over 200. The rules in the predicate action policy for the Lunar Lander environment are given in Table 3. As the Lunar Lander
\begin{table}
\begin{tabular}{c l} \hline \hline
**CartPole** & Policy rules for dNLRLnlc \\ \hline
 & **left()** \\
**mean reward:** & \(:-[0.60](\mathit{CartPos}<2.57\wedge\mathit{PoleAngleSin}>0.00\wedge\mathit{PoleAngleVeloc}>-0.38)\) \\
 & **right()** \\
 & \(:-(\mathit{PoleAngleSin}<0.04\wedge\mathit{PoleAngleVeloc}<0.00)\) \\
 & \(:-(\mathit{CartPos}<0.74\wedge\mathit{PoleAngleVeloc}>-1.64\wedge\mathit{CartPos}<-0.11\) \\
 & \(\wedge\,\mathit{PoleAngleSin}<0.65\wedge\mathit{PoleAngleVeloc}>-2.04\wedge\mathit{PoleAngleVeloc}<0.28)\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The FOL policy rules for the dNLRLnlc agent, trained using SAC. Extracted continuous inequality predicates define rules for each action with the inclusion of non-linear continuous inequality predicates.
has four actions, we find that all disjunction layers pass the parameter threshold, and only in a few cases do we have conjunction weights for specific atoms. For example, for the rule _fireLeft()_, the first rule contains a weighted atom \([0.65]\mathit{CoordX}>-1.52\). For the action predicate _fireRight()_, the agent learned four rules, of which only the second contains listed conjunction-layer weights for individual atoms, \([0.60]\mathit{CoordX}>-1.53\) and \([0.94]\mathit{AngularVeloc}>-2.58\). We observe as well that while the bounded atom \(\mathit{CoordX}\) appears in the definitions for each action, each occurrence is associated with a conjunction weight, indicating the agent was less confident about the inclusion of the \(\mathit{CoordX}\) continuous predicate function.
In Figure 6, we can observe the performance of the dNLRLnlc agent. The inclusion of the _sine_ is a very minor addition, but one that does take into account the state transition mechanics. Performance is better than that of dNLRLc in later episodes. The moving average shows mean
\begin{table}
\begin{tabular}{l l} \hline \hline
**LunarLander** & Policy rules for dNLRLc \\ \hline
 & **doNothing()** \\
 & \(:-(\mathit{CoordY}<1.28\wedge\mathit{LinearVelocX}<-0.14)\) \\
 & \(:-([0.68]\mathit{CoordX}>-1.53\wedge[0.76]\mathit{CoordX}<1.70\wedge\mathit{LinearVelocX}<-0.12\) \\
 & \(\wedge\,[0.66]\mathit{Angle}>-1.60)\) \\
 & \(:-([0.82]\mathit{CoordX}<1.70\wedge\mathit{LinearVelocX}<-0.04\wedge\mathit{RightLegContFalse})\) \\
 & **fireLeft()** \\
 & \(:-([0.65]\mathit{CoordX}>-1.52\wedge\mathit{LinearVelocX}<-0.14\wedge[0.73]\mathit{Angle}>-0.09)\) \\
 & \(:-([0.72]\mathit{CoordX}<1.70\wedge[0.68]\mathit{CoordY}>-0.99\wedge\mathit{LinearVelocX}<1.93\wedge\mathit{Angle}>-0.09)\) \\
 & **fireMain()** \\
**mean reward:** & \(:-([0.58]\mathit{CoordY}>-0.91\wedge\mathit{LinearVelocY}>-0.11)\) \\
\(162.1\pm 110.9\) & \(:-([0.75]\mathit{CoordX}<1.70\wedge\mathit{LinearVelocX}<-0.11\wedge\mathit{Angle}>0.01\wedge\mathit{AngularVeloc}>-0.14)\) \\
 & **fireRight()** \\
 & \(:-(\mathit{LinearVelocX}>-0.63\wedge\mathit{Angle}>-1.60\wedge\mathit{Angle}<0.17\wedge\mathit{AngularVeloc}<-0.02)\) \\
 & \(:-([0.60]\mathit{CoordX}>-1.53\wedge\mathit{Angle}<0.17\wedge[0.94]\mathit{AngularVeloc}>-2.58\) \\
 & \(\wedge\,\mathit{LeftLegContTrue})\) \\
 & \(:-(\mathit{CoordY}<1.28\wedge\mathit{LinearVelocX}>-0.05\wedge\mathit{Angle}<0.32\wedge\mathit{AngularVeloc}>-1.98\) \\
 & \(\wedge\,\mathit{RightLegContFalse})\) \\
 & \(:-(\mathit{LinearVelocX}>-0.05\wedge\mathit{Angle}>-1.60\wedge\mathit{Angle}<0.17)\) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: The FOL policy rules for the dNLRLc agent, trained using SAC. Extracted continuous bounded predicates define rules for each action.
Figure 5: Episode scores for SAC algorithms relying on dNLRLc. Dark purple corresponds to the mean reward. Light purple corresponds to the standard deviation and the red line is the moving average for 50 episodes.
rewards rising above 100.0 consistently. The rules in the predicate action policy for the Lunar Lander environment are given in Table 4. For each discrete action, all rules pass the parameter threshold for the disjunction layer, except for a single rule associated with \(fireLeft()\). For the predicate actions _fireLeft()_ and _fireRight()_, which have four rules each, we observe that the majority of definitions include the _sine_-transformed state feature \(\theta\).
#### 4.2.3 Discussion
The performance of the dNLRLc and dNLRLnlc agents is comparable. In the Cart Pole problem (see Figure 2), dNLRLnlc appears more stable in later episodes than dNLRLc, as evidenced by the higher occurrence of total rewards of 300. The Lunar Lander problem, with its larger action and state space, proved more challenging for the dNLRL agents. Neither agent was able to produce mean rewards of 200, although dNLRLnlc achieved higher rewards than dNLRLc in later episodes. While the deviation was high, indicating both dNLRLc and dNLRLnlc landed successfully on occasion, this also signified that the lander would crash periodically. The various hyperparameters were also found to impact the model; specifically, the instantiation of the binning scheme would sway performance. We note that in [8], equal-width binning was not necessarily an optimal approach for the Lunar Lander except in specific regions with respect to the positional axis, and we leave this investigation for future research. Multiple trials of both agents were conducted, with performance across all episodes averaging around 100 for dNLRLc and slightly higher for dNLRLnlc; performance fluctuations resulted in a variety of final policy rules. The position of the lander along the \(X\) and \(Y\) axes, as signified by atoms associated with \(\mathit{CoordX}\) or \(\mathit{CoordY}\), would often have corresponding conjunction weights, indicating uncertainty in these predicates. The fluctuating bounded values during training might have caused confusion for the agent, as the position of the lander is a significant factor in landing success. To address this issue, future investigations could explore alternative schemes for training the bound weights, and consider adding additional non-linear transformation predicates or operation predicates between state features.
## 5 Conclusion
Figure 6: Episode scores for SAC algorithms relying on dNLRLnlc.

We aimed to incorporate both continuous and non-linear interpretations of dNL networks into an RRL framework, creating a dNL-ILP based agent that can learn in dynamic continuous environments. SAC was found to be the best among the RL algorithms evaluated for training the agent. The agent produced policies incorporating continuous and non-linear continuous predicate functions, and was the first to successfully integrate ILP-based reasoning, RRL, and learning in dynamic continuous settings. The Lunar Lander problem was more challenging for the dNLRL agents, resulting in a high deviation from the mean, but our dNLRLc and dNLRLnlc agents still provide a promising starting point for ILP and RL research in continuous domains.
|
2305.01948 | Upper Bounds on the Acyclic Chromatic Index of Degenerate Graphs | An acyclic edge coloring of a graph is a proper edge coloring without any
bichromatic cycles. The acyclic chromatic index of a graph $G$ denoted by
$a'(G)$, is the minimum $k$ such that $G$ has an acyclic edge coloring with $k$
colors. Fiam\v{c}\'{\i}k conjectured that $a'(G) \le \Delta+2$ for any graph
$G$ with maximum degree $\Delta$. A graph $G$ is said to be $k$-degenerate if
every subgraph of $G$ has a vertex of degree at most $k$. Basavaraju and
Chandran proved that the conjecture is true for $2$-degenerate graphs. We prove
that for a $3$-degenerate graph $G$, $a'(G) \le \Delta+5$, thereby bringing the
upper bound closer to the conjectured bound. We also consider $k$-degenerate
graphs with $k \ge 4$ and give an upper bound for the acyclic chromatic index
of the same. | Nevil Anto, Manu Basavaraju, Suresh Manjanath Hegde, Shashanka Kulamarva | 2023-05-03T07:56:13Z | http://arxiv.org/abs/2305.01948v1 | # Upper Bounds on the Acyclic Chromatic Index of Degenerate Graphs
###### Abstract
An acyclic edge coloring of a graph is a proper edge coloring without any bichromatic cycles. The _acyclic chromatic index_ of a graph \(G\), denoted by \(a^{\prime}(G)\), is the minimum \(k\) such that \(G\) has an acyclic edge coloring with \(k\) colors. Fiamčík [10] conjectured that \(a^{\prime}(G)\leq\Delta+2\) for any graph \(G\) with maximum degree \(\Delta\). A graph \(G\) is said to be \(k\)_-degenerate_ if every subgraph of \(G\) has a vertex of degree at most \(k\). Basavaraju and Chandran [4] proved that the conjecture is true for \(2\)-degenerate graphs. We prove that for a \(3\)-degenerate graph \(G\), \(a^{\prime}(G)\leq\Delta+5\), thereby bringing the upper bound closer to the conjectured bound. We also consider \(k\)-degenerate graphs with \(k\geq 4\) and give an upper bound for the acyclic chromatic index of the same.
Keywords: _Acyclic chromatic index; Acyclic edge coloring; \(3\)-degenerate graphs; \(k\)-degenerate graphs_
Mathematics Subject Classification: 05C15
## 1 Introduction
Only finite and simple graphs are considered throughout this paper. Let \(G=(V,E)\) be a graph with the vertex set \(V\) and the edge set \(E\). A _path_ in \(G\) is a sequence of distinct vertices in \(V\) such that there is an edge between every pair of consecutive vertices in the sequence. If we add an edge between the starting vertex and the ending vertex of a path in \(G\), then the resulting structure is called a _cycle_ in \(G\). Let \(C\) be the given set of colors. A _proper edge coloring_ of \(G\) with \(C\) is a function \(f:E\to C\) such that \(f(e_{1})\neq f(e_{2})\) whenever \(e_{1}\) and \(e_{2}\) are adjacent to each other. The minimum number of colors required for a proper edge coloring of a given graph \(G\) is called the _chromatic index_ of \(G\), denoted by \(\chi^{\prime}(G)\). A proper edge coloring of \(G\) is said to be an _acyclic_ edge coloring if there are no bichromatic cycles (cycles colored with exactly \(2\) colors) in \(G\). The _acyclic chromatic index_ (also called _acyclic edge chromatic number_) of a graph \(G\) is the minimum number of colors required for an acyclic edge coloring of \(G\) and is denoted by \(a^{\prime}(G)\). Grünbaum [12] introduced the concept of acyclic coloring. The vertex analog of the acyclic chromatic index can be used to bound other parameters like the oriented chromatic number [15] and the star chromatic number [8] of a graph. Both of these parameters have many practical applications, including wavelength routing in optical networks [2]. By Vizing's theorem [7], we have \(\Delta\leq\chi^{\prime}(G)\leq\Delta+1\), where \(\Delta=\Delta(G)\) is the maximum degree of a vertex in the graph \(G\). Since an acyclic edge coloring is also a proper edge coloring by definition, we have \(a^{\prime}(G)\geq\chi^{\prime}(G)\geq\Delta\).
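These definitions suggest a simple check: a proper edge coloring is acyclic exactly when the union of any two color classes induces a forest. A minimal sketch, assuming a `networkx` graph and a coloring keyed by edge endpoints:

```python
from itertools import combinations
import networkx as nx

def is_acyclic_edge_coloring(G, color):
    """color maps frozenset({u, v}) -> color id for every edge uv of G."""
    for v in G:                                   # properness at each vertex
        seen = [color[frozenset(e)] for e in G.edges(v)]
        if len(seen) != len(set(seen)):
            return False
    for a, b in combinations(set(color.values()), 2):
        # the edges of any two colour classes must induce a forest
        H = nx.Graph([e for e in G.edges()
                      if color[frozenset(e)] in (a, b)])
        if H.number_of_edges() and not nx.is_forest(H):
            return False
    return True
```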
It was conjectured by Fiamčík [10] (and independently by Alon et al. [1]) that for any graph \(G\), \(a^{\prime}(G)\leq\Delta+2\). For an arbitrary graph \(G\), the best-known upper bound for \(a^{\prime}(G)\) to date is \(3.569(\Delta-1)\), given by Fialho et al. [9], who obtained it using probabilistic techniques. That this bound remains far from the conjectured one reflects the difficulty of the problem.
However, the conjecture has been proved for some special classes of graphs. Alon et al. [1] proved that there exists a constant \(k\) such that \(a^{\prime}(G)\leq\Delta+2\) for any graph \(G\) with girth at least \(k\Delta\log\Delta\). The acyclic chromatic index has been determined exactly for some classes of graphs, such as series-parallel graphs when \(\Delta\neq 4\) (Wang and Shu [16]), outerplanar graphs when \(\Delta\neq 4\) (Hou and Wu [13], Hou et al. [14]), cubic graphs (Andersen et al. [3]), planar graphs with \(\Delta\geq 4.2\times 10^{14}\) (Cranston [6]), and planar graphs with girth at least \(5\) and \(\Delta\geq 19\) (Basavaraju et al. [5]). In the case of outerplanar graphs and series-parallel graphs, if \(\Delta\geq 5\), then \(a^{\prime}(G)=\Delta\); when \(\Delta=3\), the authors characterize the graphs that require \(4\) colors for an acyclic edge coloring.
A graph \(G\) is said to be _\(k\)-degenerate_ if every subgraph of \(G\) has a vertex of degree at most \(k\). It is easy to see that the acyclic edge coloring conjecture is true for \(1\)-degenerate graphs, since a \(1\)-degenerate graph can be acyclically edge colored using exactly \(\Delta\) colors. Basavaraju and Chandran [4] proved that the conjecture is true for \(2\)-degenerate graphs by giving a strong upper bound of \(\Delta+1\). In particular, they prove that \(a^{\prime}(G)\leq\Delta+1\) for a \(2\)-degenerate graph \(G\).
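The definition yields a straightforward greedy procedure: repeatedly delete a minimum-degree vertex and record the largest degree seen at deletion time. A minimal sketch using `networkx`:

```python
import networkx as nx

def degeneracy(G):
    """Smallest k such that every subgraph of G has a vertex of degree <= k."""
    H, k = G.copy(), 0
    while H.number_of_nodes():
        v = min(H, key=H.degree)      # peel off a minimum-degree vertex
        k = max(k, H.degree(v))
        H.remove_node(v)
    return k

print(degeneracy(nx.petersen_graph()))  # 3: the Petersen graph is 3-degenerate
```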
Fiedorowicz [11] proved that \(a^{\prime}(G)\leq(t-1)\Delta+p\) for every graph \(G\) which satisfies the condition that \(|E(H)|\leq t|V(H)|-1\) for every subgraph \(H\subseteq G\), where \(t\geq 2\) is a given integer, and the constant \(p=2t^{3}-3t+2\). One can verify that the class of \(k\)-degenerate graphs is a subclass of the class of graphs defined by Fiedorowicz [11]. Therefore, we can obtain an upper bound on the acyclic chromatic index of a \(k\)-degenerate graph \(G\) as \(a^{\prime}(G)\leq(k-1)\Delta+2k^{3}-3k+2\) as in [11]. In this paper, we study \(k\)-degenerate graphs and improve this upper bound for the acyclic chromatic index of \(k\)-degenerate graphs. This upper bound is stated in the form of the following theorem:
**Theorem 1**.: _Let \(G\) be a \(k\)-degenerate graph with \(k\geq 4\) and maximum degree \(\Delta\). Then \(a^{\prime}(G)\leq\lceil(\frac{k+1}{2})\Delta\rceil+1\)._
We also come up with an upper bound for the acyclic chromatic index of \(3\)-degenerate graphs. Even though this does not prove the conjecture, it brings the upper bound close to the conjectured value. In particular, we prove the following theorem:
**Theorem 2**.: _Let \(G\) be a \(3\)-degenerate graph with maximum degree \(\Delta\). Then \(a^{\prime}(G)\leq\Delta+5\)._
## 2 Preliminaries
Let \(G=(V,E)\) be a graph with \(n\) vertices and \(m\) edges. The _degree_ of a vertex in \(G\) is the number of edges that are incident to that vertex in \(G\). The degree of a vertex \(v\) is represented as \(deg_{G}(v)\). The minimum degree and the maximum degree of \(G\) are represented as \(\delta(G)\) and \(\Delta(G)\) respectively. For any vertex \(v\in V\), \(N_{G}(v)\) is the set of all vertices in \(V\) that are adjacent to the vertex \(v\) in \(G\). So, \(N_{G}(v)\) represents the set of all neighbors of the vertex \(v\) in \(G\). Throughout the paper, we ignore \(G\) in the above notations whenever the graph \(G\) is understood from the context.
Let \(X\subseteq E\) and \(Y\subseteq V\). The subgraph of \(G\) obtained from the vertex set \(V\) and the edge set \(E\setminus X\) is denoted as \(G\setminus X\). Similarly, \(G\setminus Y\) is the subgraph of \(G\) obtained from the vertex set \(V\setminus Y\) and the edge set \(E\setminus\{e\in E\mid\exists y\in Y\text{ such that }e\text{ is incident to }y\}\). If either \(X\) or \(Y\) is a singleton set \(\{u\}\), then we just use \(G\setminus u\) instead of \(G\setminus\{u\}\). The subgraph of \(G\) induced by the edges in \(X\) is denoted by \(G[X]\), i.e., \(G[X]=(V_{X},E_{X})\) is a graph where \(V_{X}=\{v\in V\mid\exists e\in X\text{ with }e\text{ incident on }v\}\) and \(E_{X}=X\). Further notations and definitions can be found in [17]. We use the word coloring instead of acyclic edge coloring at some obvious places when there is no ambiguity.
Further, we will mention some definitions and lemmas that are useful for our proof. These were given by Basavaraju and Chandran [4].
**Definition 1** ([4]).: _Let \(H\) be a subgraph of a graph \(G\). An edge coloring \(f\) of \(H\) is called a **partial edge coloring** of \(G\)._
An edge coloring of \(G\) is also a partial edge coloring of \(G\) since \(G\) is also a subgraph of itself. A partial edge coloring \(f\) of \(G\) corresponding to a subgraph \(H\) is said to be proper (and acyclic) if it is proper (and acyclic) in the subgraph \(H\). Note that with respect to a partial coloring \(f\), for an edge \(e\), \(f(e)\) may or may not be defined. So, whenever we use \(f(e)\) for some edge \(e\), we implicitly assume that \(f(e)\) is defined. Let \(f\) be a partial edge coloring of the graph \(G\). For any vertex \(x\in V\), we define \(F_{x}(f)=\{f(xy)\mid y\in N_{G}(x)\}\). For any edge \(uv\in E\), we define \(F_{uv}(f)=F_{v}(f)\setminus\{f(uv)\}\). Whenever the partial coloring \(f\) is understood from the context, we use \(F_{x}\) and \(F_{uv}\) instead of \(F_{x}(f)\) and \(F_{uv}(f)\). One can see that \(F_{uv}\) is different from \(F_{vu}\).
**Definition 2** ([4]).: _An \((\alpha,\beta)\)-maximal bichromatic path with respect to a partial coloring \(f\) of \(G\) is a maximal path in \(G\) consisting of edges that are colored using the colors \(\alpha\) and \(\beta\) alternatingly. An \((\alpha,\beta,u,v)\)-**maximal bichromatic path** is an \((\alpha,\beta)\)-maximal bichromatic path which starts at the vertex \(u\) with an edge colored with \(\alpha\) and ends at the vertex \(v\)._
Now, we mention a lemma that follows from the definition of acyclic edge coloring. We assume this lemma implicitly further down the paper. This lemma was mentioned as a fact in [4].
**Lemma 1** ([4]).: _Given a pair of colors \(\alpha\) and \(\beta\) in a proper coloring \(f\) of \(G\), there is at most one \((\alpha,\beta)\)-maximal bichromatic path containing a particular vertex \(v\) in \(G\), with respect to \(f\)._
**Definition 3** ([4]).: _If the vertices \(u\) and \(v\) are adjacent in the graph \(G\), then an \((\alpha,\beta,u,v)\)-maximal bichromatic path in \(G\), which ends at the vertex \(v\) with an edge colored \(\alpha\), is said to be an \((\alpha,\beta,uv)\)-**critical path** in \(G\)._
**Definition 4** ([4]).: _Let \(f\) be a partial coloring of \(G\). Let \(u,a,b\in V\) and \(ua,ub\in E\). A **color exchange** with respect to the edges \(ua\) and \(ub\) is defined as the process of obtaining a new partial coloring \(g\) from the current partial coloring \(f\) by exchanging the colors of the edges \(ua\) and \(ub\). The color exchange defines \(g\) as follows. \(g(ua)=f(ub)\), \(g(ub)=f(ua)\) and for all other edges \(e\) in \(G\), \(g(e)=f(e)\). The color exchange with respect to the edges \(ua\) and \(ub\) is said to be proper if the coloring obtained after the exchange is proper. The color exchange is said to be **valid** if the coloring obtained after the exchange is acyclic._
A color \(\gamma\) is said to be a _candidate_ color for an edge \(e\) in \(G\) with respect to a partial coloring \(f\) if none of the edges that are incident on \(e\) are colored \(\gamma\). A candidate color \(\gamma\) is said to be _valid_ for an edge \(e\) if assigning the color \(\gamma\) to \(e\) does not result in any new bichromatic cycle in \(G\). Basavaraju and Chandran [4] mentioned the following lemma as a fact since it is obvious.
**Lemma 2** ([4]).: _Let \(f\) be a partial coloring of \(G\). A candidate color \(\gamma\) is not valid for an edge \(e=(u,v)\) if and only if there exists a color \(\eta\in F_{uv}\cap F_{vu}\) such that there is a \((\eta,\gamma,uv)\)-critical path in \(G\) with respect to the coloring \(f\)._
Now, we state and prove a lemma on the availability of a special edge in a \(k\)-degenerate graph. This special edge that we obtain by the lemma is useful in our proof technique.
**Lemma 3**.: _If \(G\) is a \(k\)-degenerate graph, then there exists an edge \(xy\) in \(G\) such that \(deg(x)\leq k\) and at most \(k\) neighbors of \(y\) have their degree strictly greater than \(k\)._
Proof.: Let \(G\) be the given \(k\)-degenerate graph. Since \(G\) is \(k\)-degenerate, there exists an edge \(xy\) in \(G\) such that \(deg(x)\leq k\). By way of contradiction, assume that for every edge \(xy\) in \(G\) with \(deg(x)\leq k\), at least \(k+1\) neighbors of \(y\) have their degree strictly greater than \(k\). Now, obtain a graph \(G^{\prime}\) by deleting all the vertices of degree at most \(k\) from \(G\). Clearly, \(G^{\prime}\) has some edges in it, because the edges between the vertex \(y\) and any of its higher-degree neighbors will still be present in \(G^{\prime}\).

Since \(G^{\prime}\) is a subgraph of \(G\), we know that \(G^{\prime}\) is also a \(k\)-degenerate graph. Hence, there exists an edge \(uv\) in \(G^{\prime}\) such that \(deg_{G^{\prime}}(u)\leq k\). If the degree of \(u\) were at most \(k\) in the graph \(G\), then by the choice of \(G^{\prime}\), the vertex \(u\) would have been deleted while obtaining \(G^{\prime}\) from \(G\). Since \(u\) is present in \(G^{\prime}\), we are sure that the degree of \(u\) was at least \(k+1\) in \(G\). Hence, there should exist a vertex \(w\) which is a neighbor of \(u\) in \(G\) but with \(w\notin V(G^{\prime})\).

Since \(w\notin V(G^{\prime})\), \(w\) was deleted while obtaining \(G^{\prime}\) from \(G\), implying that \(deg_{G}(w)\leq k\). In fact, any neighbor of \(u\) in \(G\) that is not present in \(G^{\prime}\) has degree at most \(k\) in \(G\). Therefore, the number of neighbors of \(u\) that have degree at least \(k+1\) is at most \(deg_{G^{\prime}}(u)\). Since \(deg_{G^{\prime}}(u)\leq k\), we have an edge \(wu\) in \(G\) with \(deg_{G}(w)\leq k\) such that at most \(k\) neighbors of \(u\) have their degree strictly greater than \(k\) in \(G\), a contradiction to our initial assumption. Thus we conclude that the lemma holds.
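Lemma 3 is constructive in spirit; a brute-force sketch that locates such an edge (assuming a `networkx` graph `G`) could look as follows:

```python
def lemma3_edge(G, k):
    """Find an edge xy with deg(x) <= k and at most k neighbours of y of
    degree strictly greater than k; Lemma 3 guarantees one exists when G
    is k-degenerate."""
    for u, v in G.edges():
        for x, y in ((u, v), (v, u)):
            if G.degree(x) <= k:
                heavy = sum(1 for w in G.neighbors(y) if G.degree(w) > k)
                if heavy <= k:
                    return x, y
    return None
```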
## 3 Proof of Theorem 1
Proof.: Let \(G\) be a minimum counterexample to Theorem 1 with respect to the number of edges, that is, a \(k\)-degenerate graph with \(n\) vertices, \(m\) edges, and maximum degree \(\Delta\), where \(k\geq 4\). Let us define the number \(p\) as follows:
\[p=\lceil(\frac{k+1}{2})\Delta\rceil+1\]
Notice that \(p\) is exactly the upper bound in Theorem 1 that we intend to prove. Let \(xy\) be an edge in \(G\) such that \(deg(x)\leq k\). Such an edge \(xy\) exists because \(G\) is a \(k\)-degenerate graph. Let \(G^{\prime}=G\setminus xy\), i.e., a graph formed by deleting the edge \(xy\) from \(G\). Observe that \(G^{\prime}\) is also a \(k\)-degenerate graph and has less than \(m\) edges. Since we did not add any edge or any vertex while obtaining \(G^{\prime}\) from \(G\), we have \(\Delta(G^{\prime})\leq\Delta(G)\). Therefore, since \(G\) is a minimum counterexample, we have an acyclic edge coloring \(g\) of \(G^{\prime}\) with \(p\) colors. Let \(C\) be the set of colors used in the coloring \(g\), i.e., \(C=\{1,2,\ldots,p\}\).
Now, we try to extend \(g\) to an acyclic edge coloring \(f\) of \(G\) by assigning a color to the edge \(xy\) from \(C\), thereby arriving at a contradiction to the fact that \(G\) is a minimum counterexample. Now, we define a set of vertices \(S\) as follows:
\[S=\{u\in N(x)\setminus y\mid\exists v\in N(y)\text{ such that }g(xu)=g(yv)\}\]
Let \(E^{*}\) be the set of all edges in \(G\) which are incident on at least one vertex in \(S\cup\{x,y\}\). Observe that all the edges in \(E^{*}\) except \(xy\) are colored in \(g\). Let \(g(E^{*})\) be the set of all colors seen on the edges in \(E^{*}\) in the coloring \(g\), excluding the repetitions. Now, we make the following claim about the validity of the colors which are not in \(g(E^{*})\).
**Claim 1**.: _Any color that is not in \(g(E^{*})\), is a valid color for the edge \(xy\) in \(G\)._
Proof.: Let \(\alpha\) be a color that is not in \(g(E^{*})\). Then clearly \(\alpha\notin F_{xy}\) and \(\alpha\notin F_{yx}\) by choice of \(E^{*}\) and \(\alpha\). Hence, \(\alpha\) is a candidate color for the edge \(xy\) in \(G\). By way of contradiction, assume that the candidate color \(\alpha\) is not a valid color for the edge \(xy\) in \(G\). This means that there exists a color \(\beta\) such that a \((\beta,\alpha,xy)\)-critical path exists in \(G^{\prime}\). Since the \((\beta,\alpha,xy)\)-critical path should be colored with the colors \(\beta\) and \(\alpha\) only, there should exist three vertices \(x^{\prime}\) and \(x^{\prime\prime}\) and \(y^{\prime}\) in \(G^{\prime}\) distinct from \(x\) and \(y\) such that \(g(xx^{\prime})=\beta\), \(g(x^{\prime}x^{\prime\prime})=\alpha\) and \(g(yy^{\prime})=\beta\). Observe that \(x^{\prime}\in N(x)\). But we also have \(g(xx^{\prime})=g(yy^{\prime})=\beta\) implying that \(x^{\prime}\in S\).
Therefore, \(x^{\prime}x^{\prime\prime}\in E^{*}\). Since \(g(x^{\prime}x^{\prime\prime})=\alpha\), this is a contradiction to our initial assumption that \(\alpha\) was a color that is not in \(g(E^{*})\). Hence, we can conclude that our assumption was wrong and the claim holds, as desired.
Let \(C^{\prime}\) be the set of candidate colors for the edge \(xy\) and let \(C^{\prime\prime}\) be the set of colors in \(F_{xy}\cup F_{yx}\). Observe that \(C=C^{\prime}\cup C^{\prime\prime}\). Note that any color in \(C^{\prime}\) is not valid for the edge \(xy\) since \(G\) is a minimum
counterexample. Hence, together with Claim 1, we have that every color in \(C\) is present in \(g(E^{*})\), i.e., \(|g(E^{*})|=|C^{\prime}\cup C^{\prime\prime}|=p\). This also implies that every color in \(C^{\prime}\) is present at some vertex in \(S\). Let the set of colors in \(C^{\prime}\) which appear only once on the edges which are incident on some vertex in \(N_{G^{\prime}}(x)\) be denoted by \(C^{*}\). Notice that \(|S|=|F_{xy}\cap F_{yx}|\). Now, we claim that the size of the set \(S\) has a lower bound as follows:
**Claim 2**.: \(|S|=|F_{xy}\cap F_{yx}|>\frac{k-3}{2}\)_._
Proof.: By way of contradiction, assume that \(|S|=|F_{xy}\cap F_{yx}|=q\leq\frac{k-3}{2}\). Since \(deg(x)\leq k\) and there are \(q\) colors common in \(F_{xy}\) and \(F_{yx}\), we have:
\[|C^{\prime\prime}|=|F_{xy}\cup F_{yx}|\leq\Delta-1+k-q-1=\Delta+k-q-2\]
The remaining colors in \(g(E^{*})\) are seen at the vertices in \(S\). Since \(|S|=q\), we have \(|C^{\prime}|\leq q(\Delta-1)\). Thus we have the following inequality:
\[|C^{\prime}\cup C^{\prime\prime}| \leq(q(\Delta-1))+(\Delta+k-q-2)\] \[\leq(q+1)\Delta+k-2q-2\]
Since \(k\leq\Delta\), we have \(k-2q-2\leq\Delta-2\). Further, since \(q\leq\frac{k-3}{2}\), we have \(q+1\leq\frac{k-1}{2}\). Therefore, the inequality becomes as follows:
\[|C^{\prime}\cup C^{\prime\prime}| \leq(q+1)\Delta+k-2q-2\] \[\leq(\frac{k-1}{2})\Delta+\Delta-2\] \[\leq(\frac{k+1}{2})\Delta-2\] \[<p\]
Observe that we have obtained an inequality \(|g(E^{*})|=|C^{\prime}\cup C^{\prime\prime}|<p\), which is a contradiction to our assumption that \(|g(E^{*})|=p\). Therefore, the claim holds.
Now, since \(|S|>\frac{k-3}{2}\) by Claim 2, we have \(|N_{G^{\prime}}(x)\setminus S|\leq k-1-(\frac{k-3}{2})=\frac{k+1}{2}\). Hence, we have:
\[|C^{\prime\prime}|=|F_{xy}\cup F_{yx}|\leq\Delta-1+\frac{k+1}{2}=\Delta+\frac {k}{2}-\frac{1}{2}\]
Further, we claim that the cardinality of the set \(C^{*}\) has a lower bound as follows:
Figure 1: Neighborhood of the edge \(xy\) in \(G\)
**Claim 3**.: \(|C^{*}|\geq 2\)_._
Proof.: By way of contradiction, assume that \(|C^{*}|=q\leq 1\). Except for the \(q\) colors in \(C^{*}\), any of the remaining colors in \(C^{\prime}\) appear at least twice at the edges incident on the vertices in \(N_{G^{\prime}}(x)\). Therefore, the number of candidate colors that are not valid for the edge \(xy\) (which is exactly given by \(|C^{\prime}|\)) is at most
\[\frac{(k-1)(\Delta-1)-q}{2}+q\]
Thus we have the following inequality:
\[|C^{\prime}\cup C^{\prime\prime}| \leq(\frac{(k-1)(\Delta-1)-q}{2}+q)+(\Delta+\frac{k}{2}-\frac{1}{ 2})\] \[\leq(\frac{k+1}{2})\Delta+\frac{q}{2}\] \[\leq(\frac{k+1}{2})\Delta+\frac{1}{2}\] \[<p\]
Since \(|g(E^{*})|=|C^{\prime}\cup C^{\prime\prime}|\), we have \(|g(E^{*})|<p\), a contradiction to our initial assumption that \(|g(E^{*})|=p\). Therefore, our assumption that \(|C^{*}|\leq 1\) is not valid and the claim holds.
Now, we make the following claim about the number of vertices in \(S\) whose edges see the colors in \(C^{*}\).
**Claim 4**.: _There exist at least two vertices in \(S\) whose edges see the colors in \(C^{*}\)._
Proof.: Every color in \(C^{*}\) is present on some edge incident to a vertex in \(S\), because \(C^{*}\subseteq C^{\prime}\) and every color in \(C^{\prime}\) is present on some edge incident to a vertex in \(S\). By way of contradiction, assume that every color in \(C^{*}\) is present on the edges incident to a single vertex in \(S\). Let \(x^{\prime}\) be the vertex in \(S\) such that every color in \(C^{*}\) is in \(F_{xx^{\prime}}\) and let \(g(xx^{\prime})=\zeta\). Let \(\alpha\) and \(\beta\) be two colors in \(C^{*}\). By Claim 3, \(\alpha\) and \(\beta\) exist. Since no color in \(C^{*}\) is valid for the edge \(xy\) in \(G\), for each color \(\gamma\in C^{*}\), there exists a \((\zeta,\gamma,xy)\)-critical path in \(G\).
**Subclaim 4.1**.: \(C^{\prime}\setminus F_{xx^{\prime}}\neq\emptyset\)_._
Proof.: By way of contradiction, assume that \(C^{\prime}\setminus F_{xx^{\prime}}=\emptyset\). This means that every candidate color for the edge \(xy\) is in \(F_{xx^{\prime}}\), implying that \(|C^{\prime}|\leq|F_{xx^{\prime}}|\leq\Delta-1\). Further, we also have \(|C^{\prime\prime}|\leq\Delta+\frac{k}{2}-\frac{1}{2}\). Therefore, we have the following inequality:
\[|C^{\prime}\cup C^{\prime\prime}| \leq(\Delta-1)+(\Delta+\frac{k}{2}-\frac{1}{2})\] \[\leq 2\Delta+\frac{k}{2}-\frac{3}{2}\]
Observe that since \(4\leq k\leq\Delta\), we have \(p=\lceil(\frac{k+1}{2})\Delta\rceil+1\geq\lceil 2.5\Delta\rceil+1\). If \(k=4\), then we have:
\[|C^{\prime}\cup C^{\prime\prime}|\leq 2\Delta+\frac{4}{2}-\frac{3}{2}=2\Delta+ 0.5<p\]
Otherwise, if \(k\geq 5\), then we have \(p=\lceil(\frac{k+1}{2})\Delta\rceil+1\geq\lceil 3\Delta\rceil+1\), which in turn implies that:
\[|C^{\prime}\cup C^{\prime\prime}|\leq 2\Delta+\frac{k}{2}-\frac{3}{2}<p\]
Therefore, in any case, for \(k\geq 4\), we have \(|C^{\prime}\cup C^{\prime\prime}|<p\), a contradiction to the fact that \(|C|=|C^{\prime}\cup C^{\prime\prime}|=p\). Hence, our assumption that \(C^{\prime}\setminus F_{xx^{\prime}}=\emptyset\) was wrong and the subclaim holds.
Now, assume that there exists a color \(\gamma\) in \(C^{\prime}\setminus F_{xx^{\prime}}\) that repeats at most twice on the edges incident on \(N_{G^{\prime}}(x)\setminus x^{\prime}\). Since every color in \(C^{*}\) is in \(F_{xx^{\prime}}\), we have \(\gamma\notin C^{*}\), implying that \(\gamma\) repeats exactly twice on the edges incident on \(N_{G^{\prime}}(x)\setminus x^{\prime}\). Let \(x_{1}\) and \(x_{2}\) be vertices in \(N_{G^{\prime}}(x)\setminus x^{\prime}\) such that \(\gamma\in F_{xx_{1}}\) and \(\gamma\in F_{xx_{2}}\). Now, recolor the edges \(xx_{1}\) and \(xx_{2}\) with \(\alpha\) and \(\beta\) respectively. Observe that for any color \(\eta\) in \(C^{*}\), \(\eta\notin F_{xv}\) for every \(v\in N_{G^{\prime}}(x)\setminus x^{\prime}\). Hence, this is particularly true for the colors \(\alpha\) and \(\beta\) in \(C^{*}\). Therefore, the recoloring is proper. Since for every color \(\eta\in C^{*}\), the \((\zeta,\eta)\)-bichromatic path in \(G\) starting from \(x\) ends at \(y\), by Lemma 1, there is no new bichromatic cycle formed by this recoloring. Hence, the recoloring is valid. Now, since \(\gamma\in(C^{\prime}\setminus F_{xx^{\prime}})\), \(\gamma\notin\{\alpha,\beta\}\), which implies that \(\gamma\) is a candidate color for the edge \(xy\). Further, for any vertex \(v\in N_{G^{\prime}}(x)\setminus\{x_{1},x_{2}\}\), we have \(\gamma\notin F_{xv}\). Therefore, \(\gamma\) is also valid for the edge \(xy\), a contradiction since \(G\) is a minimum counterexample.
Hence, we can safely assume that there does not exist a color in \(C^{\prime}\setminus F_{xx^{\prime}}\) that repeats at most twice on the edges incident on \(N_{G^{\prime}}(x)\setminus x^{\prime}\). By Subclaim 4.1, we have \(C^{\prime}\setminus F_{xx^{\prime}}\neq\emptyset\). This means that every color in \(C^{\prime}\setminus F_{xx^{\prime}}\) repeats at least three times on the edges incident on \(N_{G^{\prime}}(x)\setminus x^{\prime}\). Therefore, we can infer the following:
\[|C^{\prime}\setminus F_{xx^{\prime}}| \leq\frac{(k-2)(\Delta-1)}{3}\] \[\leq(\frac{k-2}{3})\Delta-\frac{k}{3}+\frac{2}{3}\]
Recall that we already have \(|C^{\prime\prime}|\leq\Delta+\frac{k}{2}-\frac{1}{2}\). Therefore, collectively we have the following inequality:
\[|C^{\prime}\cup C^{\prime\prime}| \leq|C^{\prime}\setminus F_{xx^{\prime}}|+|F_{xx^{\prime}}|+|C^{ \prime\prime}|\] \[\leq((\frac{k-2}{3})\Delta-\frac{k}{3}+\frac{2}{3})+(\Delta-1)+( \Delta+\frac{k}{2}-\frac{1}{2})\] \[\leq(\frac{k+4}{3})\Delta+\frac{k}{6}-\frac{5}{6}\]
Notice that for some color in \(C^{\prime}\setminus F_{xx^{\prime}}\) to repeat at least three times on the edges incident on \(N_{G^{\prime}}(x)\setminus x^{\prime}\), it is necessary that \(|N_{G^{\prime}}(x)\setminus x^{\prime}|\geq 3\). This implies that \(k\geq 5\). Recall that we have \(p\geq\lceil 3\Delta\rceil+1\) whenever \(k\geq 5\). If \(k=5\), then we have:
\[|C^{\prime}\cup C^{\prime\prime}|\leq(\frac{k+4}{3})\Delta+\frac{k}{6}-\frac {5}{6}=(\frac{5+4}{3})\Delta+\frac{5}{6}-\frac{5}{6}=3\Delta<p\]
But this is a contradiction to the fact that \(|C^{\prime}\cup C^{\prime\prime}|=p\).
Otherwise, let \(k\geq 6\). Now, since \(k\leq\Delta\), we have \(\frac{k}{6}\leq\frac{\Delta}{6}\). Hence, we have the following:
\[|C^{\prime}\cup C^{\prime\prime}| \leq(\frac{k+4}{3})\Delta+\frac{\Delta}{6}-\frac{5}{6}\] \[\leq(\frac{2k+9}{6})\Delta-\frac{5}{6}\]
Notice that if \(k\geq 6\), then \(\frac{2k+9}{6}\leq\frac{k+1}{2}\). Since we already have \(k\geq 6\), we can infer that \(|C^{\prime}\cup C^{\prime\prime}|\leq(\frac{k+1}{2})\Delta-\frac{5}{6}<p\), a contradiction to the fact that \(|C^{\prime}\cup C^{\prime\prime}|=p\).
Therefore, in any case, we arrive at a contradiction. Hence, our assumption that every color in \(C^{*}\) is present on the edges incident to a single vertex in \(S\), was wrong and the claim holds.
Recall that by Claim 3, we have that \(|C^{*}|\geq 2\). Further, by Claim 4, we are sure that there exist at least two vertices in \(S\) whose edges see the colors in \(C^{*}\). Let \(x_{1}\) and \(x_{2}\) be the vertices in \(S\) such that there exist two colors \(\gamma\) and \(\eta\) in \(C^{*}\) satisfying \(\gamma\in F_{xx_{1}}\) and \(\eta\in F_{xx_{2}}\). Let \(g(xx_{1})=\alpha\) and \(g(xx_{2})=\beta\).
Since \(\gamma\) and \(\eta\) are not valid for the edge \(xy\) in \(G\), there exists an \((\alpha,\gamma,xy)\)-critical path in \(G\) and also a \((\beta,\eta,xy)\)-critical path in \(G\). Now, we recolor the edge \(xx_{2}\) to \(\gamma\). This is still a proper coloring since
\(\gamma\in F_{xx_{1}}\) implies that \(\gamma\notin F_{xx_{2}}\) by choice of \(\gamma\). Now, since the \((\alpha,\gamma)\)-bichromatic path starting from \(x\) ends at \(y\), by Lemma 1, there is no new bichromatic cycle created indicating that the recoloring is valid. Observe that \(\eta\) becomes a valid color for the edge \(xy\) because by this recoloring we have eliminated the unique \(xy\)-critical path in \(G\) that involves the color \(\eta\), i.e., the \((\beta,\eta,xy)\)-critical path in \(G\) has been eliminated by this recoloring. Thus we can color the edge \(xy\) with color \(\eta\) and extend the coloring \(g\) to a coloring \(f\) of \(G\) with \(p\) colors. But this is a contradiction to the fact that \(G\) is a minimum counterexample. Therefore, we can conclude that a minimum counterexample to Theorem 1 does not exist which in turn implies the validity of Theorem 1.
## 4 Proof of Theorem 2
Proof.: Let \(G\) be the given \(3\)-degenerate graph with \(n\) vertices, \(m\) edges and maximum degree \(\Delta\). We use induction on the number of edges \(m\) of \(G\) to proceed with the proof. Let \(xy\) be an edge in \(G\) such that \(deg(x)\leq 3\) and at most \(3\) neighbors of \(y\) have their degree strictly greater than \(3\). The existence of such an edge \(xy\) is guaranteed by Lemma 3. Further, we choose \(x\) as the neighbor of \(y\) that has the minimum degree among the vertices in \(N(y)\). Let \(G^{\prime}=G\setminus xy\), i.e., a graph formed by deleting the edge \(xy\) from \(G\). Observe that \(G^{\prime}\) is also \(3\)-degenerate and has less than \(m\) edges. Further, we have \(\Delta(G^{\prime})\leq\Delta(G)\). Hence, by induction, we have an acyclic edge coloring \(g\) of \(G^{\prime}\) with \(\Delta+5\) colors. Let \(N^{\prime}(y)\) be the set of all neighbors of \(y\) in \(G^{\prime}\) having their degree less than or equal to \(3\) and let \(N^{\prime\prime}(y)\) be the set of all neighbors of \(y\) in \(G^{\prime}\) having their degree strictly greater than \(3\). Notice that we have \(|N^{\prime\prime}(y)|\leq 3\). Let \(S\) be the set of colors in \(F_{y}\) excluding those which belong to the set \(\{g(yz)\mid z\in N^{\prime\prime}(y)\}\). Since \(|N^{\prime\prime}(y)|\leq 3\), we have \(|F_{y}\setminus S|\leq 3\).
Now, we try to extend \(g\) to an acyclic edge coloring \(f\) of \(G\) by assigning a color to the edge \(xy\) from the available \(\Delta+5\) colors. If \(deg_{G}(x)=1\), then by assigning the edge \(xy\) any color other than the colors in \(F_{xy}\), we can extend \(g\) to the required coloring \(f\) of \(G\), since \(|F_{xy}|\leq\Delta-1\). Thus we can assume that \(deg(x)\geq 2\). Further, depending on the degree of the vertex \(x\) in \(G\), we have the following cases:
**Case 1.**\(deg(x)=2\).
Let \(x^{\prime}\) be the unique neighbor of \(x\) in \(G^{\prime}\). Let \(g(xx^{\prime})=\alpha\). If \(\alpha\notin F_{y}\), then we can assign any color satisfying the proper coloring to the edge \(xy\) and extend \(g\) to the required coloring \(f\) of \(G\). Thus we can assume that \(\alpha\in F_{y}\). Let \(y^{\prime}\) be the neighbor of \(y\) such that \(g(yy^{\prime})=\alpha\). Observe that the candidate colors which are not valid for the edge \(xy\) are precisely the colors in \(F_{yy^{\prime}}\). Further, since \(\alpha\in F_{xy}\), the colors which are not candidate colors for the edge \(xy\) are the colors in \(F_{xy}\). Therefore, any color that is not in \(F_{xy}\cup F_{yy^{\prime}}\) is valid for the edge \(xy\). Depending on whether \(\alpha\in S\) or not, we have the following cases:
**Case 1.1**.: \(\alpha\in S\).
Since \(\alpha\in S\), we have \(deg(y^{\prime})\leq 3\) which implies that \(|F_{yy^{\prime}}|\leq 2\). Therefore, we have \(|F_{xy}\cup F_{yy^{\prime}}|\leq\Delta+1\). We still have \(4\) colors available for the edge \(xy\) and by using any one of those \(4\) colors, we can extend \(g\) to the required coloring \(f\) of \(G\).
**Case 1.2**.: \(\alpha\notin S\).
Recall that we have \(|F_{xx^{\prime}}|\leq\Delta-1\) and \(|F_{y}\setminus S|\leq 3\). Therefore, we can infer the following:
\[|F_{xx^{\prime}}\cup(F_{y}\setminus S)|\leq\Delta+2\]
Now, we pick a color \(\beta\) that is not present in the set \(F_{xx^{\prime}}\cup(F_{y}\setminus S)\) and recolor the edge \(xx^{\prime}\) from \(\alpha\) to \(\beta\). The recoloring is proper since \(\beta\notin F_{xx^{\prime}}\). The recoloring is valid because the edge \(xy\) is not yet
colored. Observe that if \(\beta\notin F_{y}\), then we can assign any color satisfying the proper coloring to the edge \(xy\) and extend \(g\) to the required coloring \(f\) of \(G\). Thus we can assume that \(\beta\in F_{y}\). Further, since \(\beta\) was picked satisfying \(\beta\notin F_{xx^{\prime}}\cup(F_{y}\setminus S)\), we can conclude that \(\beta\in S\). This boils down to Case 1.1 and hence, we are done.
**Case 2.**\(deg(x)=3\).
Notice that for this case, any neighbor of \(y\) which is not in \(N^{\prime\prime}(y)\) has the degree exactly \(3\) by the choice of the vertex \(x\). Let \(x_{1}\) and \(x_{2}\) be the neighbors of \(x\) in \(G\) other than \(y\). Let \(g(xx_{1})=\alpha\) and let \(g(xx_{2})=\beta\). We define \(R\) to be the set of all colors from the total available \(\Delta+5\) colors that are not in \(F_{xy}\cup F_{yx}\). If any color in \(R\) is valid for the edge \(xy\), then we can extend \(g\) to a coloring \(f\) of \(G\), as desired. Thus we can assume that no color in \(R\) is valid for the edge \(xy\). Depending on whether the colors \(g(xx_{1})\) and \(g(xx_{2})\) belongs to \(F_{xy}\setminus S\), we have the following cases:
**Case 2.1.**\(\{g(xx_{1}),g(xx_{2})\}\cap(F_{xy}\setminus S)=\emptyset\).
Recall that \(g(xx_{1})=\alpha\) and let \(g(xx_{2})=\beta\). For this case, no color in \(\{\alpha,\beta\}\) is in \(F_{xy}\setminus S\), implying that every color in \(\{\alpha,\beta\}\) is either present in \(S\) or not present in \(F_{xy}\).
If no color in \(\{\alpha,\beta\}\) is in \(S\), then it implies that no color in \(\{\alpha,\beta\}\) is in \(F_{xy}\). Hence, we can use any color satisfying the proper coloring for the edge \(xy\) and extend \(g\) to the required coloring \(f\) of \(G\).
Otherwise, let exactly one color in \(\{\alpha,\beta\}\) be in \(S\). Without loss of generality, let \(\alpha\in S\) which implies \(\alpha\in F_{xy}\). Let \(y_{m}\) be the neighbor of \(y\) such that \(g(yy_{m})=\alpha\). Note that \(y_{m}\in N^{\prime}(y)\). Therefore, we have \(|F_{yy_{m}}|\leq 2\). Further, one can see that the set of candidate colors that are not valid for the edge \(xy\) is given by \(F_{yy_{m}}\) for this case. Since \(\alpha\in F_{xy}\), we have \(|F_{xy}\cup F_{yx}|\leq\Delta\), which implies that \(|F_{xy}\cup F_{yx}\cup F_{yy_{m}}|\leq\Delta+2\).
Otherwise, let both the colors in \(\{\alpha,\beta\}\) be in \(S\). Hence, \(\alpha\in S\) and \(\beta\in S\) which implies that \(\alpha\in F_{xy}\) and \(\beta\in F_{xy}\). Let \(y_{m}\) and \(y_{n}\) be the neighbors of \(y\) such that \(g(yy_{m})=\alpha\) and \(g(yy_{n})=\beta\). Note that \(y_{m}\in N^{\prime}(y)\) and \(y_{n}\in N^{\prime}(y)\). Therefore, we have \(|F_{yy_{m}}\cup F_{yy_{n}}|\leq 4\). Further, one can see that the set of candidate colors that are not valid for the edge \(xy\) is given by \(F_{yy_{m}}\cup F_{yy_{n}}\) for this case. Since both \(\alpha\) and \(\beta\) are in \(F_{xy}\), we have \(|F_{xy}\cup F_{yx}|\leq\Delta-1\), which implies the following:
\[|F_{xy}\cup F_{yx}\cup F_{yy_{m}}\cup F_{yy_{n}}| \leq\Delta-1+2+2\] \[=\Delta+3\]
Since we have a total of \(\Delta+5\) colors, we have a valid color \(\gamma\) for the edge \(xy\) in any case, irrespective of the number of common colors in \(\{\alpha,\beta\}\) and \(S\). By assigning \(\gamma\) to \(xy\), we can extend \(g\) to the required coloring \(f\) of \(G\).
Figure 2: Neighborhood of the edge \(xy\) in \(G\) in Case 1.2
**Case 2.2**.: \(\{g(xx_{1}),g(xx_{2})\}\cap(F_{xy}\setminus S)\neq\emptyset\)_._
In this case, at least one color in \(\{\alpha,\beta\}\) is in \(F_{xy}\setminus S\). Now, we define a color in \(S\) to be _freeable_ with respect to the edge \(xy\) as follows.
**Definition 5**.: _For any vertex \(y^{\prime}\) in \(N^{\prime}(y)\), the color \(g(yy^{\prime})\) in \(S\) is said to be freeable if we can recolor \(g(yy^{\prime})\) with a color in \(R\) without forming any new bichromatic cycle._
Observe that after this recoloring, \(g(yy^{\prime})\) becomes a candidate color for the edge \(xy\) in \(G\). Now, we make the following claim regarding the number of freeable colors in \(S\).
**Claim 5**.: _There exists at most 2 colors in \(S\) which are not freeable._
Proof.: By way of contradiction, assume that there exist at least 3 colors in \(S\) which are not freeable. Let those colors be \(\gamma_{1}\), \(\gamma_{2}\) and \(\gamma_{3}\) such that \(g(yy_{1})=\gamma_{1}\), \(g(yy_{2})=\gamma_{2}\) and \(g(yy_{3})=\gamma_{3}\) for \(y_{1},y_{2},y_{3}\in N^{\prime}(y)\). Throughout the proof of the claim, statements involving \(i\) are implicitly meant to hold for every \(i\) with \(1\leq i\leq 3\). Since \(\gamma_{i}\in S\), we have \(y_{i}\notin N^{\prime\prime}(y)\). This implies that \(deg(y_{i})=3\). Let \(y^{\prime}_{i}\) and \(y^{\prime\prime}_{i}\) be the neighbors of \(y_{i}\) other than \(y\) and let \(g(y_{i}y^{\prime}_{i})=\nu_{i}\) and \(g(y_{i}y^{\prime\prime}_{i})=\eta_{i}\).
If exactly one among \(\alpha\) or \(\beta\) is in \(F_{xy}\), then we have \(|F_{xy}\cup F_{yx}|\leq\Delta\), which implies that \(|R|\geq(\Delta+5)-(\Delta)=5\). Otherwise, if both \(\alpha\) and \(\beta\) are in \(F_{xy}\), then we have \(|F_{xy}\cup F_{yx}|\leq\Delta-1\), which implies that \(|R|\geq(\Delta+5)-(\Delta-1)=6\). Since we have assumed that at least one among \(\alpha\) or \(\beta\) belongs to \(F_{xy}\), in any case, we have that \(|R|\geq 5\). Let \(\{\mu_{1},\mu_{2},\mu_{3},\mu_{4},\mu_{5}\}\) be any five colors in \(R\).
Since \(g(yy_{i})=\gamma_{i}\) is not freeable, it means that if we recolor \(g(yy_{i})\) with any color in \(R\), a new bichromatic cycle will be formed. Therefore, we have that for every \(\mu_{j}\in R\), either a \((\nu_{i},\mu_{j},yy_{i})\)-critical path exists or a \((\eta_{i},\mu_{j},yy_{i})\)-critical path exists or both the above critical paths exist in \(G^{\prime}\). Therefore, at least three out of five \(yy_{i}\)-critical paths involve \(\nu_{i}\) or at least three out of five \(yy_{i}\)-critical paths involve \(\eta_{i}\). Recall that the statement is true for any \(i\) with \(1\leq i\leq 3\). Hence, without loss of generality, assume that at least three \(yy_{1}\)-critical paths involve \(\nu_{1}\), at least three \(yy_{2}\)-critical paths involve \(\nu_{2}\) and at least three \(yy_{3}\)-critical paths involve \(\nu_{3}\). Observe that at least three \(yy_{i}\)-critical paths that involve \(\nu_{i}\) should reach \(y\) through a vertex \(z_{i}\in N(y)\) with \(deg(z_{i})\geq 4\) which implies that \(z_{i}\in N^{\prime\prime}(y)\) and \(\nu_{i}\in F_{xy}\setminus S\).
**Subclaim 5.1**.: _The colors \(\nu_{1}\), \(\nu_{2}\) and \(\nu_{3}\) are all distinct._
Proof.: By way of contradiction, without loss of generality assume that \(\nu_{1}=\nu_{2}=\nu\) for some color \(\nu\). Let \(y^{\prime}\in N^{\prime\prime}(y)\) with \(g(yy^{\prime})=\nu\). Then there exist at least three \((\nu,\mu_{j},yy_{1})\)-critical paths and at least three \((\nu,\mu_{k},yy_{2})\)-critical paths for \(\{\mu_{1},\mu_{2},\mu_{3},\mu_{4},\mu_{5}\}\) in \(R\). Notice that we have five colors and at least six critical paths under consideration. Hence, we have a color \(\mu_{j}\) in \(R\) such that there exists a \((\nu,\mu_{j},yy_{1})\)-critical path and a \((\nu,\mu_{j},yy_{2})\)-critical path. This implies that there exists a \((\nu,\mu_{j})\)-maximal bichromatic path starting from the vertex \(y\) ending at the vertex \(y_{1}\) and there exists a \((\nu,\mu_{j})\)-maximal bichromatic path starting from the vertex \(y\) ending at the vertex \(y_{2}\), a contradiction to Lemma 1. Therefore, our assumption that \(\nu_{1}=\nu_{2}=\nu\) is wrong and the subclaim holds.
Since any color \(\mu_{j}\) in \(R\) with \(1\leq j\leq 5\) is not valid for the edge \(xy\), there exists either an \((\alpha,\mu_{j},xy)\)-critical path or a \((\beta,\mu_{j},xy)\)-critical path or both in \(G^{\prime}\). Hence, there exist at least three \((\alpha,\mu_{j},xy)\)-critical paths or at least three \((\beta,\mu_{j},xy)\)-critical paths. Without loss of generality, assume the existence of at least three \((\alpha,\mu_{j},xy)\)-critical paths. Now, recall that we have \(\nu_{i}\in F_{xy}\setminus S\). Since \(|F_{xy}\setminus S|\leq 3\), Subclaim 5.1 implies that \(\alpha\) is a color in \(\{\nu_{1},\nu_{2},\nu_{3}\}\). Without loss of generality, let \(\alpha=\nu_{3}\). Then there exist at least three \((\alpha,\mu_{j},yy_{3})\)-critical paths together with the already assumed at least three \((\alpha,\mu_{j},xy)\)-critical paths for \(\{\mu_{1},\mu_{2},\mu_{3},\mu_{4},\mu_{5}\}\) in \(R\). Notice that we have five colors and at least six critical paths under consideration. Hence, we have a color \(\mu_{j}\) in \(R\) such that there exists an
\((\alpha,\mu_{j},yy_{3})\)-critical path and an \((\alpha,\mu_{j},xy)\)-critical path. This implies that there exists an \((\alpha,\mu_{j})\)-maximal bichromatic path starting from the vertex \(y\) ending at the vertex \(y_{3}\) and there exists an \((\alpha,\mu_{j})\)-maximal bichromatic path starting from the vertex \(y\) ending at the vertex \(x\), a contradiction to Lemma 1. Hence, our assumption that there exist at least \(3\) colors in \(S\) which are not freeable is wrong and the claim holds.
Let \(S^{\prime}\subset S\) be the set of all colors in \(S\) which are not freeable. Now, we define the set \(T\) to be \(T=R\cup(S\setminus S^{\prime})\). Further, we make the following claim regarding the cardinality of the set \(T\).
**Claim 6**.: \(|T|\geq\Delta-1\)_._
Proof.: Observe that the set \(T\) is precisely the set of all colors that are not in \(F_{yx}\cup(F_{xy}\setminus S)\cup S^{\prime}\). By Claim 5, we have that \(|S^{\prime}|\leq 2\). We also have \(|F_{xy}\setminus S|\leq 3\). Since \(deg(x)=3\), we have \(|F_{yx}|=2\). Precisely, \(F_{yx}=\{\alpha,\beta\}\). But we have already assumed that at least one among \(\alpha\) or \(\beta\) belongs to \(F_{xy}\setminus S\). Therefore, there exists at most one color in \(F_{yx}=\{\alpha,\beta\}\) which is not in \(F_{xy}\setminus S\). With all these observations we can infer the following:
\[|T| =\Delta+5-|F_{yx}\cup(F_{xy}\setminus S)\cup S^{\prime}|\] \[\geq\Delta+5-(1+3+2)\] \[=\Delta-1\]
Thus we have the lower bound for the set \(T\), as claimed.
Further, depending on how many colors among \(\{g(xx_{1}),g(xx_{2})\}\) belong to the set \(F_{xy}\), we have the following cases:
**Case 2.2.1**.: _Exactly one color in \(\{g(xx_{1}),g(xx_{2})\}\) belongs to \(F_{xy}\)._
Recall that we have \(g(xx_{1})=\alpha\) and \(g(xx_{2})=\beta\). Without loss of generality, let \(\alpha\in F_{xy}\) and \(\beta\notin F_{xy}\). Let \(y_{1}\) be the neighbor of \(y\) in \(G\) such that \(g(yy_{1})=\alpha\). Since we already have that at least one among \(\alpha\) or \(\beta\) belongs to \(F_{xy}\setminus S\), \(\beta\) not being present in \(F_{xy}\) will imply that \(\alpha\notin S\). Collectively, we can infer that \(\alpha\in(F_{xy}\setminus S)\). Observe that since \(\alpha\in F_{xy}\), we have that \(|F_{xy}\cup F_{yx}|\leq\Delta\) implying that \(|R|\geq 5\). Let \(\{\mu_{1},\mu_{2},\mu_{3},\mu_{4},\mu_{5}\}\subseteq R\). If any color in \(R\) is valid for the edge \(xy\), then we are done. Hence, we can assume that every candidate color for the edge \(xy\) in \(R\) is not valid. This implies that for every color \(\mu_{i}\) in \(R\), there exists an \((\alpha,\mu_{i},xy)\)-critical path in \(G^{\prime}\) with respect to \(g\).
Now, by Claim 6, we have \(|T|\geq\Delta-1\). If there exists a color \(\zeta\in T\) such that there is no \((\alpha,\zeta,xy)\)-critical path in \(G^{\prime}\) with respect to \(g\), then we can free the color \(\zeta\) if necessary and assign \(\zeta\) to the edge \(xy\) and thereby extend \(g\) to the required coloring \(f\) of \(G\). Therefore, we can assume that for every color \(\zeta\in T\), there exists an \((\alpha,\zeta,xy)\)-critical path in \(G^{\prime}\) with respect to \(g\).
Let us assume that \(\beta\in F_{xx_{1}}\). Note that \(|F_{xx_{1}}|\leq\Delta-1\). Since \(\beta\in F_{yx}\), we have \(\beta\notin T\). This together with the assumption that \(\beta\in F_{xx_{1}}\) implies that there exists a color \(\eta\) such that \(\eta\in T\) but \(\eta\notin F_{xx_{1}}\). This implies that there can not be any \((\alpha,\eta,xy)\)-critical path in \(G^{\prime}\) with respect to \(g\). Since \(\eta\in T\), this is a contradiction to our previous assumption that for every color \(\zeta\in T\), there exists an \((\alpha,\zeta,xy)\)-critical path in \(G^{\prime}\) with respect to \(g\). Hence, our assumption that \(\beta\in F_{xx_{1}}\) is not true, and we conclude that \(\beta\notin F_{xx_{1}}\).
Since \(|F_{xx_{1}}\cup\{\beta\}|\leq\Delta\) and \(|F_{xy}\setminus S|\leq 3\), we are sure that there exists a color \(\gamma\) such that \(\gamma\notin F_{xx_{1}}\cup\{\beta\}\cup(F_{xy}\setminus S)\). Now, we recolor the edge \(xx_{1}\) with \(\gamma\). This recoloring is valid since \(\beta\notin F_{xx_{1}}\). If \(\gamma\notin S\), then clearly \(\gamma\notin F_{xy}\) which implies that by assigning any color to the edge \(xy\) which satisfies proper coloring, we can extend \(g\) to the required coloring \(f\) of \(G\). Otherwise, if \(\gamma\in S\), then since \(\beta\notin F_{xy}\), by Case 2.1, we are done.
**Case 2.2.2**.: _Both the colors in \(\{g(xx_{1}),g(xx_{2})\}\) belong to \(F_{xy}\)._
Recall that we have \(g(xx_{1})=\alpha\) and \(g(xx_{2})=\beta\). Let \(y_{1}\) and \(y_{2}\) be the neighbors of \(y\) such that \(g(yy_{1})=\alpha\) and \(g(yy_{2})=\beta\). Since the colors \(\alpha\) and \(\beta\) are seen at both the vertices \(x\) and \(y\) in \(G^{\prime}\), we have \(|F_{xy}\cup F_{yx}|\leq\Delta-1\), implying that \(|R|\geq 6\) for this case. Let \(\{\mu_{1},\mu_{2},\mu_{3},\mu_{4},\mu_{5},\mu_{6}\}\subseteq R\). If any color in \(R\) is valid for the edge \(xy\), then we are done. Hence, we can assume that every candidate color for the edge \(xy\) in \(R\) is not valid. This implies that for every color \(\mu_{i}\) in \(R\), there exists either an \((\alpha,\mu_{i},xy)\)-critical path or a \((\beta,\mu_{i},xy)\)-critical path in \(G^{\prime}\) with respect to \(g\) or both. Further, if there exists a color \(\zeta\in T\) such that there is no \((\alpha,\zeta,xy)\)-critical path and there is no \((\beta,\zeta,xy)\)-critical path in \(G^{\prime}\) with respect to \(g\), then we can free the color \(\zeta\) if necessary and assign \(\zeta\) to the edge \(xy\) and thereby extend \(g\) to the required coloring \(f\) of \(G\). Hence, we can also assume that for every color \(\zeta\in T\), there exists either an \((\alpha,\zeta,xy)\)-critical path or a \((\beta,\zeta,xy)\)-critical path in \(G^{\prime}\) with respect to \(g\) or both.
Since we already have that at least one among \(\alpha\) or \(\beta\) belongs to \(F_{xy}\setminus S\), we are sure that at most one color in the set \(\{\alpha,\beta\}\) is in \(S\). This also implies that at least one color in the set \(\{\alpha,\beta\}\) is not in \(S\). Without loss of generality, assume that \(\beta\notin S\). Then since \(\beta\in F_{xy}\), we can infer that \(\beta\in F_{xy}\setminus S\).
Let us assume that \(\beta\in F_{xx_{1}}\). Note that \(|F_{xx_{1}}|\leq\Delta-1\). Since \(\beta\in F_{yx}\), we have \(\beta\notin T\). This, together with Claim 6 and the assumption that \(\beta\in F_{xx_{1}}\), implies that there exists a color \(\eta\) such that \(\eta\in T\) but \(\eta\notin F_{xx_{1}}\). Therefore, there can not be an \((\alpha,\eta,xy)\)-critical path in \(G^{\prime}\) with respect to \(g\), implying that there exists a \((\beta,\eta,xy)\)-critical path, since \(\eta\in T\). Hence, we can free the color \(\eta\) and recolor the edge \(xx_{1}\) with \(\eta\) without forming any new bichromatic cycles, since the \((\beta,\eta)\)-bichromatic path starting from the vertex \(x\) can not reach \(x_{1}\) because it ends at \(y\). Now, since \(\eta\notin F_{xy}\) and \(\beta\in F_{xy}\), by Case 2.2.1, we are done.
Now, assume that \(\beta\notin F_{xx_{1}}\). Now, we have \(|F_{xx_{1}}\cup\{\beta\}|\leq\Delta\) and \(|F_{xy}\setminus S|\leq 3\). But since \(\beta\in F_{xy}\setminus S\), we have \(|F_{xx_{1}}\cup\{\beta\}\cup(F_{xy}\setminus S)|\leq\Delta+2\). Further, since \(|S^{\prime}|\leq 2\), we have \(|F_{xx_{1}}\cup\{\beta\}\cup(F_{xy}\setminus S)\cup S^{\prime}|\leq\Delta+4\). Therefore, since we have a total of \(\Delta+5\) colors, we are sure that there exists a color \(\gamma\) such that \(\gamma\notin F_{xx_{1}}\cup\{\beta\}\cup(F_{xy}\setminus S)\cup S^{\prime}\). Now, we free the color \(\gamma\) and recolor the edge \(xx_{1}\) with \(\gamma\). This recoloring is valid since \(\beta\notin F_{xx_{1}}\). Since \(\gamma\notin F_{xy}\), and \(\beta\in F_{xy}\), by Case 2.2.1, we are done.
Therefore, in any case, we can extend the coloring \(g\) of \(G^{\prime}\) to a coloring \(f\) of \(G\) with the same number of colors, which in turn confirms the validity of Theorem 2. \(\blacksquare\)
Figure 3: Neighborhood of the edge \(xy\) in \(G\) in Case 2.2.1

## 5 Conclusion

We conclude our discussion on the acyclic chromatic index of degenerate graphs by reiterating Theorem 1 and Theorem 2. For any \(k\)-degenerate graph \(G\), we have \(a^{\prime}(G)\leq\lceil(\frac{k+1}{2})\Delta\rceil+1\). Further, for any \(3\)-degenerate graph \(G\), we have \(a^{\prime}(G)\leq\Delta+5\). The acyclic edge coloring conjecture, however, asserts an upper bound of \(\Delta+2\) for any graph. Hence, one can take up the study of \(3\)-degenerate graphs and try to prove the conjecture for this class. Likewise, improving the existing upper bound on the acyclic chromatic index of a \(k\)-degenerate graph constitutes a nice research problem.
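The two stated bounds are easy to evaluate side by side. Below is a minimal sketch (our own illustration, not part of the paper) comparing them.

```python
# A minimal sketch (ours, not from the paper) of the two upper bounds above:
# ceil(((k+1)/2) * Delta) + 1 for k-degenerate graphs (Theorem 1), and
# Delta + 5 for 3-degenerate graphs (Theorem 2).
from math import ceil

def acyclic_index_upper_bound(k, Delta):
    bound = ceil((k + 1) * Delta / 2) + 1
    if k == 3:
        # Theorem 2 improves on Theorem 1 (which gives 2*Delta + 1) once Delta >= 5.
        bound = min(bound, Delta + 5)
    return bound

print(acyclic_index_upper_bound(3, 10))  # -> 15, from the Delta + 5 bound
```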
|
2303.14775 | On Hempel pairs and Turaev--Viro invariants | Surface bundles arising from periodic mapping classes may sometimes have non-isomorphic, but profinitely isomorphic fundamental groups. Pairs of this kind have been discovered by Hempel. This paper exhibits examples of nontrivial Hempel pairs where the mapping tori can be distinguished by some Turaev--Viro invariants, and also examples where they cannot be distinguished by any Turaev--Viro invariants. | Yi Liu | 2023-03-26T16:47:19Z | http://arxiv.org/abs/2303.14775v2 | # On Hempel pairs and Turaev-Viro invariants
###### Abstract.
Surface bundles arising from periodic mapping classes may sometimes have non-isomorphic, but profinitely isomorphic fundamental groups. Pairs of this kind have been discovered by Hempel. This paper exhibits examples of nontrivial Hempel pairs where the mapping tori can be distinguished by some Turaev-Viro invariants, and also examples where they cannot be distinguished by any Turaev-Viro invariants.
Key words and phrases: profinite completion, mapping class group, Turaev-Viro invariant.

2020 Mathematics Subject Classification: Primary 57K31; Secondary 57K16.

Partially supported by NSFC Grant 11925101, and National Key R&D Program of China 2020YFA0712800.
## 1. Introduction
Let \(S\) be a connected closed orientable surface. Denote by \(\operatorname{Mod}(S)\) the mapping class group of \(S\), whose elements are the isotopy classes of orientation-preserving self-homeomorphisms of \(S\).
A _Hempel pair_, as we call it, refers to a pair of periodic mapping classes \([f_{A}],[f_{B}]\in\operatorname{Mod}(S)\) of identical order \(d\geq 1\), such that \([f_{B}]=[f_{A}^{k}]\) holds for some integer \(k\) coprime to \(d\). Hence \([f_{A}]=[f_{B}^{k^{*}}]\) holds for any congruence inverse \(k^{*}\) of \(k\) modulo \(d\), (that is, \(k^{*}k\equiv 1\bmod d\)). Hempel studied such pairs [1], finding out that the fundamental groups of their mapping tori \(M_{A}\) and \(M_{B}\) always have identical collections of (isomorphism types of) finite quotient groups. This is equivalent to saying that the profinite completions of \(\pi_{1}(M_{A})\) and \(\pi_{1}(M_{B})\) are isomorphic groups. Hempel discovered examples of such pairs with non-isomorphic fundamental groups. This is equivalent to saying that \(M_{A}\) and \(M_{B}\) are not homeomorphic \(3\)-manifolds.
We call \([f_{A}]\) and \([f_{B}]\) a _nontrivial_ Hempel pair, if \(M_{A}\) and \(M_{B}\) are not homeomorphic. There are no nontrivial Hempel pairs when \(S\) is a sphere or a torus, as the condition forces \(d=1\) or \(d\in\{1,2,3,4,6\}\), and hence \(k=\pm 1\). One may obtain a nontrivial Hempel pair of order \(5\) when \(S\) has genus \(2\). Nontrivial Hempel pairs are a source of distinct \(3\)-manifold pairs that cannot be distinguished by their profinite fundamental groups. Among \(3\)-manifolds, the question as to which topological invariants are determined by the profinite fundamental group has stimulated a lot of fruitful study in recent years. See [14] and [15, Section 9] for past surveys on that fast-growing topic.
Turaev-Viro invariants are topological invariants of closed \(3\)-manifolds, originally constructed using quantum \(6\)j-symbols [16]. That they are generally _not_ profinite invariants is evident from their explicit values on all lens spaces [11]. Meanwhile, there are many homeomorphically distinct torus bundles over a circle, whose monodromies are Anosov and mutually conjugate up to inverse in every
congruence quotient of \(\operatorname{Mod}(T^{2})\cong\operatorname{SL}(2,\mathbb{Z})\). These torus bundles have isomorphic profinite fundamental groups. Funar shows that no Turaev-Viro invariants (associated to any spherical fusion categories) distinguish these torus bundles [13, Proposition 1.1].
In this paper, we take up the question as to whether Turaev-Viro invariants distinguish nontrivial Hempel pairs. We shall content ourselves with the \(\operatorname{SU}(2)\) and the \(\operatorname{SO}(3)\) Turaev-Viro invariants, as they are the ones most often discussed. The \(\operatorname{SU}(2)\) series can be fully listed as \(\operatorname{TV}_{r,s}\), for any integer \(r\geq 3\) and any integer \(s\) coprime to \(r\); the \(\operatorname{SO}(3)\) series can be fully listed as \(\operatorname{TV}^{\prime}_{r,s}\), for any odd integer \(r\geq 3\) and any even integer \(s\) coprime to \(r\). More economically, one could focus on \(\operatorname{TV}_{r,1}\) (\(r\) even), and \(\operatorname{TV}^{\prime}_{r,r-1}\) (\(r\) odd), together with \(\operatorname{TV}_{3,1}\) and \(\operatorname{TV}_{3,2}\), which depend only on the \(\mathbb{Z}/2\mathbb{Z}\) cohomology ring. These determine all the other ones. See Section 3 for the notations and more review.
Our conclusion can be summarized as follows.
**Theorem 1.1**.:
1. _For any integer_ \(d\geq 5\) _other than_ \(6\)_, there exists some nontrivial Hempel pair of order_ \(d\)_, such that the mapping tori can be distinguished by some_ \(\operatorname{SU}(2)\) _Turaev-Viro invariant, and if_ \(d\) _is odd, also by some_ \(\operatorname{SO}(3)\) _Turaev-Viro invariant._
2. _For any prime integer_ \(p\geq 5\)_, there exists some nontrivial Hempel pair of order_ \(p\)_, such that the mapping tori cannot be distinguished by any_ \(\operatorname{SU}(2)\) _or_ \(\operatorname{SO}(3)\) _Turaev-Viro invariants._
Theorem 1.1 is proved in Section 5 by exhibiting concrete families of examples. Our simplest distinguishable Hempel pair exists with order \(d\) on genus \(d-2\geq 3\). Our simplest indistinguishable nontrivial Hempel pair exists with prime order \(p\) on genus \((p-1)/2\geq 2\).
The technical heart of this paper is the following calculation.
**Theorem 1.2**.: _Let \(a\geq 3\) be an integer and \(b_{1},\cdots,b_{n}\) be integers coprime to \(a\). Let \(M\) be a Seifert fiber space with orientable orbifold base and orientable Seifert fibration, and with symbol \((g;(a,b_{1}),\cdots,(a,b_{n}))\). Suppose \(g\geq 0\), and \(a>n\geq 0\), and \(b_{1}+\cdots+b_{n}=0\)._
1. _If there exist some integer_ \(b^{*}\) _coprime to_ \(a\) _and_ \(\nu_{1},\cdots,\nu_{n}\in\{\pm 1\}\)_, such that_ \(b^{*}b_{j}\equiv\nu_{j}\bmod a\) _holds for all_ \(j\in\{1,\cdots,n\}\)_, then for any_ \(s\) _coprime to_ \(a\)_,_ \[\operatorname{TV}_{a,s}(M)=\frac{a^{n-1}}{2^{2n+2g-5}}\cdot\frac{\sin^{2}(\pi s /a)}{\sin^{2n+4g-4}(\pi b^{*}s/a)},\] _and moreover, if_ \(a\) _is odd and_ \(s\) _is even,_ \[\operatorname{TV}^{\prime}_{a,s}(M)=\frac{a^{n-1}}{2^{2n+4g-5}}\cdot\frac{\sin ^{2}(\pi s/a)}{\sin^{2n+4g-4}(\pi b^{*}s/a)}.\]
2. _Otherwise, for any integer_ \(r\geq 3\) _divisible by_ \(a\) _and any integer_ \(s\) _coprime to_ \(r\)_,_ \[\operatorname{TV}_{r,s}(M)=0,\] _and moreover, if_ \(r\) _is odd and_ \(s\) _is even,_ \[\operatorname{TV}^{\prime}_{r,s}(M)=0.\]
See Section 2 for review on Seifert fiber spaces and the standard notation for their symbols.
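Deciding which of the two cases in Theorem 1.2 applies to a given symbol is a finite search. The following is a minimal sketch (our own illustration, not from the paper; the function name is ours):

```python
# A minimal sketch (ours) deciding the dichotomy of Theorem 1.2: search for an
# integer b* coprime to a with b* * b_j = +-1 (mod a) for every j.
from math import gcd

def first_case_witness(a, bs):
    """Return some b* witnessing the first case, or None for the second case."""
    for b_star in range(1, a):
        if gcd(b_star, a) == 1 and all((b_star * b) % a in (1, a - 1) for b in bs):
            return b_star
    return None

print(first_case_witness(5, [1, 1, -1, -1]))  # -> 1  (first case)
print(first_case_witness(5, [1, 1, -2]))      # -> None (second case)
```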
Theorem 1.2 is proved in Section 4, by applying a general formula for calculating the Witten-Reshetikhin-Turaev invariant \(\tau_{r}\) of oriented closed Seifert fiber spaces. The exact formula we invoke is due to Hansen [1], while similar calculations, in special cases or by a similar strategy, also appear in many other places; for instance, see [1, 10, 11, 12].
Theorem 1.2 was formulated by first testing samples of Seifert fiber spaces (on a computer), and then observing interesting phenomena. Luckily, we found the assumptions as in Theorem 1.2, which greatly simplify both the situation and the answer.
In the appendix, we prove a splitting formula
\[\operatorname{TV}_{r,s}(M)=\operatorname{TV}_{3,1}(M)\cdot\operatorname{TV} ^{\prime}_{r,r-s}(M)\]
for \(r\) odd and \(s\) odd. This complements a former formula
\[\operatorname{TV}_{r,s}(M)=\operatorname{TV}_{3,2}(M)\cdot\operatorname{TV} ^{\prime}_{r,s}(M)\]
for \(r\) odd and \(s\) even, proved by Detcherry, Kalfagianni, and Yang [1, Theorem 2.9]. Our appendix is mainly to confirm also the non-orientable case, although the well-known orientable case suffices for our application.
This paper is organized as follows. In Section 2, we recall the preliminary description of periodic mapping tori as Seifert fiber spaces with vanishing rational Euler number. In Section 3, we review Turaev-Viro invariants. In Section 4, we prove Theorem 1.2. In Section 5, we prove Theorem 1.1. In Appendix A, we give an elementary proof of the aforementioned splitting formula regarding \(\operatorname{SU}(2)\) Turaev-Viro invariants at odd \(r\) and odd \(s\).
## 2. Periodic mapping tori
Let \(S\) be a connected orientable closed surface. The mapping class group \(\operatorname{Mod}(S)\) consists of all the isotopy classes of orientation-preserving self-homeomorphisms of \(S\). For any mapping class \([f]\in\operatorname{Mod}(S)\), we denote by \(M_{f}\) the mapping torus
\[M_{f}=\frac{S\times\mathbb{R}}{(f(x),r)\sim(x,r+1)},\]
which naturally fibers over the oriented circle \(\mathbb{R}/\mathbb{Z}\) with fiber type \(S\) and (backward) monodromy type \([f]\). The mapping torus \(M_{f}\) is a connected orientable closed \(3\)-manifold, whose homeomorphism type depends only on \([f]\).
A _periodic mapping class_ refers to a mapping class of finite order. In this case, the mapping torus is a Seifert fiber space. The Seifert fibration is orientable over an orientable base orbifold, and has vanishing rational Euler number. Moreover, the Seifert fibration on any periodic mapping torus is unique up to isotopy. Conversely, any closed Seifert fiber space with orientable orbifold base and orientable Seifert fibration of vanishing rational Euler number arises as the mapping torus of some periodic mapping class. Moreover, the genus of the surface and the conjugacy class of the periodic mapping class up to inverse are both unique. This way, periodic mapping classes can be described equivalently by indicating the symbol of its mapping torus, as a Seifert fiber space. (See [1, Chapter 1]).
The most general symbol describes any (connected, compact) Seifert fiber space with all features, allowing possibly non-orientable orbifold base, non-orientable Seifert fibration, and non-empty boundary. For discussing periodic mapping classes,
we only need to consider closed Seifert fiber spaces with orientable orbifold base and orientable Seifert fibration, whose (possibly non-normalized) symbol is denoted as
\[(g;(a_{1},b_{1}),\cdots,(a_{n},b_{n})) \tag{2.1}\]
where \(g\geq 0\) is an integer, and where \(a_{j}\geq 1\) is an integer and \(b_{j}\) is an integer coprime to \(a_{j}\), for all \(j\in\{1,\cdots,n\}\). This symbol presents a Seifert fiber space (with standard orientation) constructed as follows.
Take a product \(3\)-manifold \(\Sigma\times S^{1}\) of a connected closed oriented surface \(\Sigma\) of genus \(g\) and an oriented circle \(S^{1}\); take \(n\) disjoint embedded disks \(D_{1},\cdots,D_{n}\subset\Sigma\); remove the solid tori \(D_{j}\times S^{1}\) from \(\Sigma\times S^{1}\), and refill with solid tori in other ways, such that the slopes \(a_{j}[\partial D_{j}]+b_{j}[S^{1}]\) on \(\partial D_{j}\times S^{1}\) bound disks in the new solid tori.
The base of this Seifert fiber space is a connected, closed, oriented \(2\)-orbifold of genus \(g\) with \(n\) cone points of order \(a_{1},\cdots,a_{n}\), (ordinary and negligible if \(a_{j}=1\)). Its orbifold Euler characteristic equals \(2-2g-\sum_{j}(1-1/a_{j})\). The rational Euler number of the Seifert fibration equals \(-\sum_{j}b_{j}/a_{j}\), where the minus sign comes from our convention on the orientation of \(\partial D_{j}\), (as induced by the orientation of \(D_{j}\), rather than \(\Sigma\setminus(D_{1}\cup\cdots\cup D_{n})\)).
The following operations of the symbol do not change the homeomorphism type of the resulting Seifert fiber space: re-ordering all \((a_{j},b_{j})\); inserting or deleting a term \((1,0)\); replacing one \((a_{j},b_{j})\) with \((a_{j},b_{j}+a_{j})\) and another \((a_{j^{\prime}},b_{j^{\prime}})\) with \((a_{j^{\prime}},b_{j^{\prime}}-a_{j^{\prime}})\) simultaneously; or replacing all \((a_{j},b_{j})\) with \((a_{j},-b_{j})\) simultaneously. Note that only the last operation changes the orientation-preserving homeomorphism type of the resulting Seifert fiber space.
For homeomorphic periodic mapping tori, their symbols are all related by finitely many steps of the above operations. This is a special case of the classification of Seifert fiber spaces (see [10, Theorem 1.10] or [12, Chapter 12]).
The following Lemmas 2.1 and 2.2 actually appear in [12] in equivalent forms. We include quick (and slightly different) proofs here for the reader's reference.
**Lemma 2.1**.: _Let \(M\) be a closed Seifert fiber space with orientable orbifold base and orientable Seifert fibration, and with symbol \((g;(a_{1},b_{1}),\cdots,(a_{n},b_{n}))\). If the Seifert fibration has vanishing rational Euler number, then \(M\) is homeomorphic to the mapping torus \(M_{f}\) of a periodic mapping class \([f]\in\operatorname{Mod}(S)\). Moreover, \([f]\) has order \(d=\operatorname{lcm}(a_{1},\cdots,a_{n})\), and \(S\) has genus \(1+(g-1)d+\sum_{j}(1-1/a_{j})d/2\)._
Proof.: The orbifold fundamental group of the base \(\mathcal{O}\) has a presentation with generators \(x_{1},y_{1},\cdots,x_{g},y_{g},s_{1},\cdots,s_{n}\) and relations \([x_{1},y_{1}]\cdots[x_{g},y_{g}]=s_{1}\cdots s_{n}\), and \(s_{1}^{a_{1}}=\cdots=s_{n}^{a_{n}}=1\). Under the assumption of vanishing rational Euler number, the assignments \(x_{i}\mapsto 0\bmod d\), \(y_{i}\mapsto 0\bmod d\), and \(s_{j}\mapsto-b_{j}d/a_{j}\bmod d\) yield a well-defined, surjective homomorphism \(\pi_{1}(\mathcal{O})\to\mathbb{Z}/d\mathbb{Z}\), where \(d=\operatorname{lcm}(a_{1},\cdots,a_{n})\). The kernel of this homomorphism corresponds to a cyclic orbifold covering \(S\to\mathcal{O}\) of degree \(d\), where \(S\) has no singular cone points. The generator \(1\in\mathbb{Z}/d\mathbb{Z}\) corresponds to a deck transformation, representing a periodic mapping class \([f]\in\operatorname{Mod}(S)\) of order \(d\). It is elementary to check that \(M_{f}\) is homeomorphic to \(M\). The Euler characteristic of the surface \(S\) is equal to \(d\) times the orbifold Euler characteristic of \(\mathcal{O}\), which implies the asserted genus of \(S\).
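As a quick illustration (ours, not from the paper), the order \(d\) and the fiber genus asserted in Lemma 2.1 can be extracted from a symbol mechanically:

```python
# A minimal sketch (ours) of the numerical data in Lemma 2.1. Given a symbol
# (g; (a_1,b_1),...,(a_n,b_n)) with vanishing rational Euler number, return the
# order d = lcm(a_1,...,a_n) and the genus 1 + (g-1)d + (sum_j (1 - 1/a_j)) d/2.
from fractions import Fraction
from math import lcm

def periodic_data(g, pairs):
    assert sum(Fraction(b, a) for a, b in pairs) == 0, "rational Euler number must vanish"
    d = lcm(*(a for a, _ in pairs))
    chi_term = sum(Fraction(a - 1, a) for a, _ in pairs)
    genus = 1 + (g - 1) * d + chi_term * Fraction(d, 2)
    assert genus.denominator == 1
    return d, int(genus)

# Example: (0; (5,1),(5,1),(5,-2)) yields order 5 and fiber genus 2.
print(periodic_data(0, [(5, 1), (5, 1), (5, -2)]))  # -> (5, 2)
```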
**Lemma 2.2**.: _Let \(S\) be a connected, close, orientable surface, and \([f]\in\operatorname{Mod}(S)\) be a periodic mapping class of order \(d\). If the mapping torus \(M_{f}\) has symbol \((g;(a_{1},b_{1}),\cdots,(a_{n},b_{n}))\), then for any integer \(k\) coprime to \(d\), the mapping torus \(M_{f^{k}}\) of the iterate \([f^{k}]\in\operatorname{Mod}(S)\) has symbol \((g;(a_{1},b_{1}k^{*}),\cdots,(a_{n},b_{n}k^{*}))\), where \(k^{*}\) is any integer satisfying \(kk^{*}\equiv 1\bmod d\)._
Proof.: The mapping torus \(M_{f^{k}}\) naturally cyclically covers \(M_{f}\) with degree \(k\). Since \(f\) has order \(d\), we also see that \(M_{f}\) cyclically covers \(M_{f^{k}}\) with degree \(k^{*}\). Consider the covering \(M_{f}\to M_{f^{k}}\). The pull-back of the Seifert fibration of \(M_{f^{k}}\) is a Seifert fibration on \(M_{f}\). If \(M_{f}\) has symbol \((g;(a_{1},b_{1}),\cdots,(a_{n},b_{n}))\), by definition, the preimage of \((\Sigma\setminus(D_{1}\cup\cdots\cup D_{n}))\times S^{1}\subset M_{f^{k}}\) is also a product \((\Sigma\setminus(D_{1}\cup\cdots\cup D_{n}))\times S^{1}\subset M_{f}\), while the ordinary fibers (that is, \(*\times S^{1}\)) in \(M_{f}\) cyclically cover the ordinary fibers in \(M_{f^{k}}\) with degree \(k^{*}\). Since the slope \(a_{j}[\partial D_{j}]+b_{j}[S^{1}]\) on \(\partial D_{j}\times S^{1}\) is homotopically trivial in \(M_{f}\), the slope \(a_{j}[\partial D_{j}]+b_{j}k^{*}[S^{1}]\) on \(\partial D_{j}\times S^{1}\) is homotopically trivial in \(M_{f^{k}}\). Therefore, \((g;(a_{1},b_{1}k^{*}),\cdots,(a_{n},b_{n}k^{*}))\) is a symbol of \(M_{f^{k}}\).
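Lemma 2.2 likewise lends itself to a mechanical sketch (ours, not from the paper):

```python
# A minimal sketch (ours) of Lemma 2.2: a symbol for the mapping torus of the
# iterate f^k, where k is coprime to the order d.
from math import lcm

def iterate_symbol(g, pairs, k):
    d = lcm(*(a for a, _ in pairs))
    k_star = pow(k, -1, d)  # congruence inverse of k modulo d
    # Each b_j k* may be shifted by multiples of a_j, but only in compensating
    # pairs, so that the rational Euler number stays zero.
    return g, [(a, b * k_star) for a, b in pairs]

# Example: (0; (5,1),(5,1),(5,-2)) with k = 2 gives k* = 3, hence the symbol
# (0; (5,3),(5,3),(5,-6)) for the mapping torus of f^2.
print(iterate_symbol(0, [(5, 1), (5, 1), (5, -2)], 2))
```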
## 3. Turaev-Viro invariants
Turaev-Viro invariants are topological invariants of closed \(3\)-manifolds, arising from representation theory of quantum groups at roots of unity. Throughout this paper, we only discuss Turaev-Viro invariants pertaining to the most basic quantum group \(\mathcal{U}_{q}(\mathfrak{sl}_{2})\). In this setting, there are essentially two series, namely, the \(\operatorname{SU}(2)\) Turaev-Viro invariants \(\operatorname{TV}_{r}\) for all integers \(r\geq 3\), and the \(\operatorname{SO}(3)\) Turaev-Viro invariants \(\operatorname{TV}_{r}^{\prime}\) for all odd integers \(r\geq 3\).
Throughout this paper, a _closed_ manifold only means a compact manifold with empty boundary, possibly disconnected, and possibly non-orientable.
### Abstract versions
Let \(q^{1/2}\) be a square root of a root of unity \(q\neq\pm 1\). Denote by \(r\geq 3\) the order of \(q\). Note that \(q^{1/2}\) must have order \(2r\) when \(r\) is even, but may have order \(2r\) or \(r\) when \(r\) is odd. We denote by \(\mathbb{Q}(q^{1/2})\) the abstract cyclotomic field generated by \(q^{1/2}\).
For any closed \(3\)-manifold \(M\), the Turaev-Viro invariant \(\operatorname{TV}(M;q^{1/2})\) can be defined at \(q^{1/2}\); if \(r\) is odd, the refined Turaev-Viro invariant \(\operatorname{TV}^{\prime}(M;q^{1/2})\) can be defined at \(q^{1/2}\) of order \(r\). Both \(\operatorname{TV}(M;q^{1/2})\) and \(\operatorname{TV}^{\prime}(M;q^{1/2})\) are topological invariants of \(M\), with totally real values in \(\mathbb{Q}(q^{1/2})\). We call \(\operatorname{TV}\) and \(\operatorname{TV}^{\prime}\) the _abstract_\(\operatorname{SU}(2)\) and the _abstract_\(\operatorname{SO}(3)\) Turaev-Viro invariants at \(q^{1/2}\) (or at level \(r-2\)), respectively.
Most details about the actual construction of these invariants are not necessary in the sequel, except the appendix. We record them below for the sake of clarity.
Let \(\mathscr{T}=(V,E,F,T)\) be any finite simplicial \(3\)-complex, where the items denote the sets of vertices, edges, faces, and tetrahedra, respectively. Denote \(I_{r}=\{0,1,\cdots,r-2\}\). A triple \((i,j,k)\in I_{r}\times I_{r}\times I_{r}\) is said to be _admissible_ if the numbers \((i+j+k)/2\), \((i+j-k)/2\), \((j+k-i)/2\), \((k+i-j)/2\) all stay in \(I_{r}\). An _admissible coloring_ of \((M,\mathscr{T})\) (of _level_\(r-2\)) refers to a map \(c\colon E\to I_{r}\), such that on any face, the edge colors (that is, values of \(c\)) form an admissible triple. Denote by \(\mathcal{A}_{r}=\mathcal{A}_{r}(M,\mathscr{T})\) the set of all admissible colorings of level \(r-2\). If \(r\) is odd, denote by \(I_{r}^{\prime}=\{0,2,\cdots,r-3\}\) the subset of even elements of \(I_{r}\). Denote by
\(\mathcal{A}_{r}^{\prime}=\mathcal{A}_{r}^{\prime}(M,\mathscr{T})\) the subset of \(\mathcal{A}_{r}\) consisting of admissible colorings with values in \(I_{r}^{\prime}\).
Let \(M\) be a (possibly disconnected, possibly non-orientable) closed \(3\)-manifold. Take a triangulation \(\mathscr{T}=(V,E,F,T)\) of \(M\). Then invariants \(\operatorname{TV}\) and \(\operatorname{TV}^{\prime}\) of \(M\) at \(q^{1/2}\) can be expressed in terms of \(\mathscr{T}\) as follows:
\[\operatorname{TV}\left(M;q^{1/2}\right)=\left(\frac{(q^{1/2}-q^{-1/2})^{2}}{- 2r}\right)^{|V|}\cdot\sum_{c\in\mathcal{A}_{r}}\left|\mathscr{T}\right|_{c}; \tag{3.1}\]
if \(r\) is odd and \(q^{1/2}\) has order \(r\),
\[\operatorname{TV}^{\prime}\left(M;q^{1/2}\right)=\left(\frac{(q^{1/2}-q^{-1/2 })^{2}}{-r}\right)^{|V|}\cdot\sum_{c\in\mathcal{A}_{r}^{\prime}}\left|\mathscr{ T}\right|_{c}, \tag{3.2}\]
where \(|V|\) denotes the number of vertices in \(\mathscr{T}\), and where \(|\mathscr{T}|_{c}\) is explained in Notation 3.1 below. Note that our notations follow the reformulation as in [1, Appendix A], where the factors in \(|e|_{c}\), \(|f|_{c}\), and \(|t|_{c}\) are grouped slightly differently than those in [20], so as to avoid unnecessary square roots.
As it turns out, the values of the above expressions in \(\mathbb{Q}(q^{1/2})\) are independent of the auxiliary choice of the triangulation \(\mathscr{T}\)[20]. Therefore, \(\operatorname{TV}(M;q^{1/2})\) and \(\operatorname{TV}^{\prime}(M;q^{1/2})\) are indeed topological invariants of \(M\).
**Notation 3.1**.:
1. Denote \[[n]!=\begin{cases}[1]\cdot[2]\cdot\cdots\cdot[n]&n=1,2,\cdots,r-1\\ 1&n=0\end{cases}\] where \[[n]=\frac{q^{n/2}-q^{-n/2}}{q^{1/2}-q^{-1/2}}.\] Note that the quantum integers \([1],[2],\cdots,[r-1]\) take totally real nonzero values in \(\mathbb{Q}(q^{1/2})\), as \(q\) is a primitive \(r\)-th root of unity (\(r\geq 3\)).
2. For any \(e\in E\) and \(c\in\mathcal{A}_{r}\), \[|e|_{c}=(-1)^{i}\cdot[i+1]\] where \(i\) is the color of \(e\) under \(c\).
3. For any \(f\in F\) and \(c\in\mathcal{A}_{r}\), \[|f|_{c}=(-1)^{-S}\cdot\frac{[S-i]!\cdot[S-j]!\cdot[S-k]!}{[S+1]!}\] where \(i,j,k\) are the edge colors of \(f\) under \(c\), and where \(S=(i+j+k)/2\).
4. For any \(t\in T\) and \(c\in\mathcal{A}_{r}\), denote \[|t|_{c}=\sum_{z}\frac{(-1)^{z}\,[z+1]!}{\prod_{a}[z-T_{a}]!\cdot\prod_{b}[Q_{ b}-z]!}\] where \((i,j,k),(i,m,n),(j,l,n),(k,l,m)\) are the edge colors of the faces of \(t\) under \(c\), and where \(T_{1}=(i+j+k)/2\), \(T_{2}=(i+m+n)/2\), \(T_{3}=(j+l+n)/2\), \(T_{4}=(k+l+m)/2\), \(Q_{1}=(i+j+l+m)/2\), \(Q_{2}=(i+k+l+n)/2\), \(Q_{3}=(j+k+m+n)/2\). The index \(a\) ranges in \(\{1,2,3,4\}\), and \(b\) in \(\{1,2,3\}\), and \(z\) from \(\max_{a}T_{a}\) to \(\min_{b}Q_{b}\).
5. For any \(c\in\mathcal{A}_{r}\), denote \[|\mathscr{T}|_{c}=\prod_{e\in E}|e|_{c}\cdot\prod_{f\in F}|f|_{c}\cdot\prod_{t\in T }|t|_{c}.\]
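For concreteness, here is a minimal sketch (our own illustration, not from the paper; the function names are ours) of the two most basic ingredients above, specialized at \(q^{1/2}=e^{\sqrt{-1}\cdot\pi s/r}\) as in Notation 3.2 below:

```python
# A minimal sketch (ours) of quantum integers and admissible triples from
# Notation 3.1, at q^{1/2} = exp(i*pi*s/r), where [n] = sin(pi*s*n/r)/sin(pi*s/r).
from math import sin, pi

def quantum_int(n, r, s):
    return sin(pi * s * n / r) / sin(pi * s / r)

def admissible(i, j, k, r):
    """Test whether (i,j,k) is an admissible triple of level r-2."""
    if (i + j + k) % 2 != 0:
        return False
    halves = ((i + j + k) // 2, (i + j - k) // 2, (j + k - i) // 2, (k + i - j) // 2)
    return all(0 <= h <= r - 2 for h in halves)

# At r = 5: (1,1,2) is admissible, while (2,2,4) is not, since (i+j+k)/2 = 4 > r-2.
print(admissible(1, 1, 2, 5), admissible(2, 2, 4, 5))  # True False
```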
### Specialized versions
In the literature, _the_\(\operatorname{SU}(2)\) and _the_\(\operatorname{SO}(3)\) Turaev-Viro invariants often refer to specialization of the abstract versions at customary complex roots of unity. To be precise, these refer to the numerical quantities \(\operatorname{TV}_{r}\) and \(\operatorname{TV}^{\prime}_{r}\) below.
**Notation 3.2**.: Let \(r\geq 3\) be an integer and \(s\) be an integer coprime to \(r\). The following expressions are all evaluated by specializing \(\mathbb{Q}(q^{1/2})\to\mathbb{C}\).
1. Denote \[\operatorname{TV}_{r,s}(M)=\operatorname{TV}\left(M;q^{1/2}=e^{\sqrt{-1}\cdot \pi s/r}\right),\]
2. If \(r\) is odd and \(s\) is even, denote \[\operatorname{TV}^{\prime}_{r,s}(M)=\operatorname{TV}^{\prime}\left(M;q^{1/2} =e^{\sqrt{-1}\cdot\pi s/r}\right).\]
3. In informal discussion, we often write \(\operatorname{TV}_{r}\) and \(\operatorname{TV}^{\prime}_{r}\), assuming \(s\) implicitly fixed.
**Lemma 3.3**.: _Let \(M,N\) be any closed \(3\)-manifolds._
1. \(\operatorname{TV}_{r,s}(M)\in\mathbb{R}\)_._
2. \(\operatorname{TV}_{r,s}(M\sqcup N)=\operatorname{TV}_{r,s}(M)\cdot \operatorname{TV}_{r,s}(N)\)_._
3. \(\operatorname{TV}_{r,s}(S^{2}\times S^{1})=1\)_._
_If \(r\) is odd and \(s\) is even, the same statements hold with \(\operatorname{TV}^{\prime}_{r,s}\) in place of \(\operatorname{TV}_{r,s}\)._
The first statement follows immediately from the observation that \(\operatorname{TV}\) and \(\operatorname{TV}^{\prime}\) can be written as rational functions over \(\mathbb{Q}\) in \([1],\cdots,[r-1]\) and \((q^{1/2}-q^{-1/2})^{2}\); see (3.1), (3.2), and Notation 3.1. Note that \([n]=\sin(\pi sn/r)/\sin(\pi s/r)\) and \((q^{1/2}-q^{-1/2})^{2}=-4\sin^{2}(\pi s/r)\) when evaluated at \(q^{1/2}=e^{\sqrt{-1}\cdot\pi s/r}\). The second statement is also obvious by definition. The last statement appears in Turaev and Viro's original paper [12, Section 8.1.B].
**Lemma 3.4**.: _Let \(M\) be any closed \(3\)-manifold._
1. \(\operatorname{TV}_{r,s}(M)=\operatorname{TV}_{r,-s}(M)=\operatorname{TV}_{r,s +2r}(M)\)_. If_ \(r\) _is odd and_ \(s\) _is even, the same identities hold with_ \(\operatorname{TV}^{\prime}\) _in place of_ \(\operatorname{TV}\)_._
2. _If_ \(s\) _is odd,_ \(\operatorname{TV}_{r,s}(M)\) _is Galois conjugate to_ \(\operatorname{TV}_{r,1}(M)\)_; if_ \(s\) _is even (hence_ \(r\) _odd),_ \(\operatorname{TV}_{r,s}(M)\) _is Galois conjugate to_ \(\operatorname{TV}_{r,r-1}(M)\)_. If_ \(r\) _is odd and_ \(s\) _is even, the same statements hold with_ \(\operatorname{TV}^{\prime}_{r,s}\) _in place of_ \(\operatorname{TV}_{r,s}\)_._
3. _If_ \(r\) _is odd,_ \[\operatorname{TV}_{r,s}(M)=\begin{cases}\operatorname{TV}_{3,2}(M)\cdot \operatorname{TV}^{\prime}_{r,s}(M)&s\text{ even}\\ \operatorname{TV}_{3,1}(M)\cdot\operatorname{TV}^{\prime}_{r,r-s}(M)&s\text{ odd}\end{cases}\]
4. _Denote by_ \(\beta_{i}\) _the dimension of_ \(H_{i}(M;\mathbb{Z}/2\mathbb{Z})\) _over_ \(\mathbb{Z}/2\mathbb{Z}\)_, and by_ \(w_{1}\in H^{1}(M;\mathbb{Z}/2\mathbb{Z})\) _the first Stifel-Whitney class of_ \(M\)_._ \[\operatorname{TV}_{3,2}(M)=2^{-\beta_{0}(M)+\beta_{2}(M)},\] _and_ \[\operatorname{TV}_{3,1}(M)=2^{-\beta_{0}(M)}\cdot\sum_{t}(-1)^{(t^{3}+w_{1}^{2} t,[M])}\]
_where the index \(t\) ranges over \(H^{1}(M;\mathbb{Z}/2\mathbb{Z})\)._
The first and the second statements are again obvious properties of the defining expressions.
The third statement is proved when \(s\) is even by Detcherry, Kalfagianni, and Yang [11, Theorem 2.9], and can be easily derived from well-known facts when \(s\) is odd, assuming \(M\) orientable, (see Lemmas 3.6 and 3.7). We supply a proof without assuming orientability in the appendix Section A.
The formulas for \(\mathrm{TV}_{3,s}\) in the fourth statement are due to Turaev and Viro [12, Section 9.3.A].
### Relation to Witten-Reshetikhin-Turaev invariants
For connected oriented closed \(3\)-manifolds, the Witten-Reshetikhin-Turaev invariants are invariant under orientation-preserving homeomorphisms. These invariants were suggested by Witten [13], and the first mathematically rigorous construction was due to Reshetikhin and Turaev [10].
Following Kirby and Melvin [11], we denote by \(\tau_{r}\) the \(\mathrm{SU}(2)\) Witten-Reshetikhin-Turaev invariants, defined for any integer \(r\geq 3\), and by \(\tau_{r}^{\prime}\) the \(\mathrm{SO}(3)\) Witten-Reshetikhin-Turaev invariants, defined for any odd integer \(r\geq 3\). Note that \(\tau_{r}\) is a slight modification of the original construction by Reshetikhin and Turaev [10], differing by a factor of absolute value \(1\) (depending on the first Betti number), and \(\tau_{r}^{\prime}\) was introduced by Kirby and Melvin [11, Section 8]. The invariant \(\tau_{r}\) corresponds to the \(4r\)-th primitive complex root of unity \(q^{1/4}=e^{\sqrt{-1}\cdot 2\pi/4r}\) by construction, whereas \(\tau_{r}^{\prime}\) is obtained by modifying the defining expression of \(\tau_{r}\) when \(r\) is odd. Upon suitable interpretation, \(\tau_{r}^{\prime}\) appears to correspond to the \(2r\)-th primitive complex root of unity \(q^{1/4}=e^{\sqrt{-1}\cdot\pi(r-1)/4r}\), (compare [1, Theorem B]). Both \(\tau_{r}\) and \(\tau_{r}^{\prime}\) take values in \(\mathbb{C}\).
The following properties characterize the normalization of \(\tau_{r}\) and \(\tau_{r}^{\prime}\).
**Lemma 3.5**.: _Let \(M,N\) be any connected oriented closed \(3\)-manifolds._
1. \(\tau_{r}(M)=\overline{\tau_{r}(-M)}\)_._
2. \(\tau_{r}(M\#N)=\tau_{r}(M)\cdot\tau_{r}(N)\)_._
3. \(\tau_{r}(S^{2}\times S^{1})=\sqrt{r/2}\,/\sin(\pi/r)\)_. Hence,_ \(\tau_{r}(S^{3})=1\)_._
_If \(r\) is odd, the same statements hold with \(\tau_{r}^{\prime}\) in place of \(\tau_{r}\), and \(\sqrt{r/4}\) in place of \(\sqrt{r/2}\)._
See [11, Section 1 and (5.11)] and [1, Proposition 6.10].
**Lemma 3.6**.: _Let \(M\) be any connected oriented closed \(3\)-manifold. If \(r\) is odd,_
\[\tau_{r}(M)=\begin{cases}\tau_{3}(M)\cdot\tau_{r}^{\prime}(M)&r\equiv 3\bmod 4\\ \overline{\tau_{3}(M)\cdot\tau_{r}^{\prime}(M)}&r\equiv 1\bmod 4\end{cases}\]
See [11, Corollary 8.9 and Theorem 8.10]. See also [11, Theorem 6.11] for characterization of \(\tau_{3}(M)\) in terms of classical topological invariants.
The Turaev-Viro invariants \(\mathrm{TV}_{r}\) and \(\mathrm{TV}_{r}^{\prime}\) are essentially the absolute value squares of Witten-Reshetikhin-Turaev invariants, or precisely as follows.
**Lemma 3.7**.: _Let \(M\) be a connected closed \(3\)-manifold. Let \(r\geq 3\) be any integer._
1. _If_ \(M\) _is oriented,_ \[\frac{\mathrm{TV}_{r,1}(M)}{\mathrm{TV}_{r,1}(S^{3})}=|\tau_{r}(M)|^{2},\]
_and if_ \(r\) _is odd,_ \[\frac{\mathrm{TV}^{\prime}_{r,r-1}(M)}{\mathrm{TV}^{\prime}_{r,r-1}(S^{3})}=| \tau^{\prime}_{r}(M)|^{2}.\]
2. _If_ \(M\) _is non-orientable,_ \[\frac{\mathrm{TV}_{r,1}(M)}{\mathrm{TV}_{r,1}(S^{3})}=\tau_{r}(W),\] _and if_ \(r\) _is odd,_ \[\frac{\mathrm{TV}^{\prime}_{r,r-1}(M)}{\mathrm{TV}^{\prime}_{r,r-1}(S^{3})}= \tau^{\prime}_{r}(W),\] _where_ \(W\) _denotes the orientable connected double cover of_ \(M\)_. Note that_ \(W\) _can be canonically constructed, with a canonical orientation and an orientation-reversing deck transformation._
3. \[\mathrm{TV}_{r,1}(S^{3})=\frac{2}{r}\cdot\sin^{2}(\pi/r),\] _and if_ \(r\) _is odd,_ \[\mathrm{TV}^{\prime}_{r,r-1}(S^{3})=\frac{4}{r}\cdot\sin^{2}(\pi/r).\]
See [14, Theorems 4.1.1 and 4.4.1] for systematic proofs of the formulas regarding \(\mathrm{TV}_{r}\); see also [13] for an elegant proof for the oriented case regarding \(\mathrm{TV}_{r}\). The formulas regarding \(\mathrm{TV}^{\prime}_{r}\) can be easily derived from the \(\mathrm{TV}_{r}\) formulas using Lemmas 3.6 and 3.4. The values for \(\mathrm{TV}_{r,1}(S^{3})\) and \(\mathrm{TV}^{\prime}_{r,r-1}(S^{3})\) can be obtained immediately by taking \(M=S^{2}\times S^{1}\) and applying Lemmas 3.3 and 3.5.
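The normalizations above fit together consistently; for instance, combining Lemmas 3.3, 3.5, and 3.7 for \(M=S^{2}\times S^{1}\) gives the following numerical sanity check (ours, not from the paper):

```python
# A minimal numerical check (ours): TV_{r,1}(S^3) * |tau_r(S^2 x S^1)|^2 should
# recover TV_{r,1}(S^2 x S^1) = 1 for every r >= 3, by Lemmas 3.3, 3.5, and 3.7.
from math import sin, pi

def check(r):
    tv_s3 = 2 / r * sin(pi / r) ** 2                 # Lemma 3.7(3)
    tau_s2s1_squared = (r / 2) / sin(pi / r) ** 2    # Lemma 3.5(3)
    return tv_s3 * tau_s2s1_squared                  # should equal 1

print([round(check(r), 12) for r in range(3, 8)])  # -> [1.0, 1.0, 1.0, 1.0, 1.0]
```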
### Perspectives of TQFTs
The Turaev-Viro invariants fit into the general framework of \((2+1)\)-dimensional topological quantum field theories (TQFT) [1]. Our description below focuses on the \(\mathrm{SU}(2)\) Turaev-Viro invariants. The discussion regarding \(\mathrm{SO}(3)\) Turaev-Viro invariants is completely similar.
Turaev and Viro construct a functor \(\mathcal{Z}^{\mathrm{TV}}\) from the \((2+1)\)-dimensional cobordism category to the category of Hermitian vector spaces (and linear homomorphisms) over the abstract cyclotomic field \(\mathbb{K}=\mathbb{Q}(q^{1/2})\), with respect to the involution \(*\) on \(\mathbb{K}\) as provided by the Galois transformation \(q^{1/2}\mapsto q^{-1/2}\). To any oriented closed surface \(S\), there is a finite-dimensional vector space \(\mathcal{Z}^{\mathrm{TV}}(S)\), equipped with a nondegenerate Hermitian pairing
\[\langle\cdot\,,\cdot\rangle\colon\mathcal{Z}^{\mathrm{TV}}(S)\times\mathcal{Z}^{ \mathrm{TV}}(S)\to\mathbb{K}.\]
(Being _Hermitian_ means \(\mathbb{K}\)-sesquilinear and \(*\)-symmetric.) To any cobordism \(M\) from \(S_{0}\) to \(S_{1}\) (that is, an oriented compact \(3\)-manifold \(M\) with bipartite boundary \(\partial M=\partial_{-}M\sqcup\partial_{+}M=(-S_{0})\sqcup S_{1}\), up to boundary-fixing homeomorphisms), there is a linear homomorphism
\[\mathcal{Z}^{\mathrm{TV}}(M)\colon\mathcal{Z}^{\mathrm{TV}}(S_{0})\to \mathcal{Z}^{\mathrm{TV}}(S_{1}).\]
The assignment \(\mathcal{Z}^{\mathrm{TV}}\) is functorial, and satisfies Atiyah's Hermitian TQFT axioms [1, Section 2]: \(\mathcal{Z}^{\mathrm{TV}}(-S)=\mathcal{Z}^{\mathrm{TV}}(S)^{*}\); \(\mathcal{Z}^{\mathrm{TV}}(-M)=\mathcal{Z}^{\mathrm{TV}}(M)^{*}\); \(\mathcal{Z}^{\mathrm{TV}}(S^{\prime}\sqcup S^{\prime\prime})=\mathcal{Z}^{ \mathrm{TV}}(S^{\prime})\otimes_{\mathbb{K}}\mathcal{Z}^{\mathrm{TV}}(S^{ \prime\prime})\); and \(\mathcal{Z}^{\mathrm{TV}}(\emptyset)=\mathbb{K}\), (\(\emptyset\) denoting the empty surface, having a unique orientation by convention).
Turaev and Viro show that \(\mathrm{TV}\) comes from a functor \(\mathcal{Z}^{\mathrm{TV}}\) as above, in the sense that the identity
\[\mathrm{TV}\left(M;q^{1/2}\right)=\mathcal{Z}^{\mathrm{TV}}(M)\]
holds for any closed \(3\)-manifold \(M\). Here, \(M\) is treated as a cobordism between empty surfaces, and \(\mathcal{Z}^{\mathrm{TV}}(\emptyset,\emptyset)=\mathrm{End}_{\mathbb{K}}( \mathbb{K})\) is identified as \(\mathbb{K}\). See [13, Section 2.3].
Being a TQFT functor, \(\mathcal{Z}^{\mathrm{TV}}\) naturally induces a \(\mathbb{K}\)-linear representation
\[\mathrm{Mod}(S)\to\mathrm{GL}\left(\mathcal{Z}^{\mathrm{TV}}(S)\right)\]
of the mapping class group \(\mathrm{Mod}(S)\) for any oriented closed surface \(S\)[13, Section 2.4]. Specializing \(q^{1/2}\) to complex roots of unity as in Notation 3.2, we obtain a complex linear representation, denoted as
\[\rho_{r,s}^{\mathrm{TV}}\colon\mathrm{Mod}(S)\to\mathrm{GL}\left(\mathcal{Z} ^{\mathrm{TV}}_{r,s}(S)\right) \tag{3.3}\]
for each integer \(r\geq 3\) and any integer \(s\) coprime to \(r\). These representations preserve the specialized Hermitian pairings \(\langle\cdot\,,\cdot\rangle_{r,s}\), whose signatures depend on both \(r\) and \(s\); in particular, the representations are not necessarily (Hilbert) unitary. We refer to these as the \(\mathrm{SU}(2)\) _Turaev-Viro TQFT representations_ (at level \(r-2\)) of \(\mathrm{Mod}(S)\).
The following formula is a useful implication of the TQFT axioms [1, Section 2].
**Lemma 3.8**.: _Let \(r\geq 3\) be any integer and \(s\) be any integer coprime to \(r\). For any oriented closed surface \(S\), and any mapping class \([f]\in\mathrm{Mod}(S)\),_
\[\mathrm{TV}_{r,s}(M_{f})=\mathrm{tr}_{\mathbb{C}}\left(\rho_{r,s}^{\mathrm{ TV}}([f])\right),\]
_where \(M_{f}\) denotes the mapping torus of \(f\)._
If \(S\) is connected, the complex dimension of the representation \(\rho_{r,s}^{\mathrm{TV}}\) depends only on \(r\) and the genus \(g\) of \(S\). This fact can be derived from Lemmas 3.8 and 3.4 by considering the mapping torus of \(f=\mathrm{id}_{S}\). Verlinde type formulas for these dimensions can be derived from Lemma 3.7 and known formulas about Witten-Reshetikhin-Turaev invariants [1, Corollary 1.16]. However, Witten-Reshetikhin-Turaev invariants only come from generalized TQFTs, which require extra structures for resolving the framing anomaly [1]. That is why they only naturally lead to projective linear representations of surface mapping class groups.
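As an illustration of Lemma 3.8 (ours, not from the paper), taking \(f=\mathrm{id}_{S}\) expresses \(\dim_{\mathbb{C}}\mathcal{Z}^{\mathrm{TV}}_{r,s}(S)\) as \(\mathrm{TV}_{r,s}(S\times S^{1})\). Assuming the standard \(\mathrm{SU}(2)\) Verlinde formula for the Witten-Reshetikhin-Turaev theory (well known, but not proved in this text), this dimension is the square of the Verlinde dimension:

```python
# A minimal sketch (ours). Assuming the standard SU(2) Verlinde formula
#   dim V_r(Sigma_g) = (r/2)^{g-1} * sum_{j=1}^{r-1} sin(pi*j/r)^{2-2g},
# the Turaev-Viro TQFT space of a genus-g surface has dimension
#   TV_{r,s}(Sigma_g x S^1) = (dim V_r(Sigma_g))^2.
from math import sin, pi

def verlinde_su2(r, g):
    return round((r / 2) ** (g - 1) * sum(sin(pi * j / r) ** (2 - 2 * g) for j in range(1, r)))

def tv_tqft_dim(r, g):
    return verlinde_su2(r, g) ** 2

# Cross-check at r = 3, g = 2: Lemma 3.4(4) gives TV_{3,2}(Sigma_2 x S^1)
# = 2^{-beta_0 + beta_2} = 2^{-1+5} = 16, matching tv_tqft_dim(3, 2).
print(tv_tqft_dim(3, 2))  # -> 16
```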
## 4. Calculation
This section is devoted to the proof of Theorem 1.2.
To restate our task, we consider a closed Seifert fiber space \(M\) with orientable orbifold base and orientable fibration, and with symbol \((g;(a_{1},b_{1}),\cdots,(a_{n},b_{n}))\), such that
\[a_{1}=a_{2}=\cdots=a_{n}=a,\]
and
\[b_{1}+b_{2}+\cdots+b_{n}=0.\]
Moreover, we suppose \(a\geq 3\) and \(a>n\geq 0\). We compute \(\mathrm{TV}_{r}\) and \(\mathrm{TV}^{\prime}_{r}\) of \(M\) for \(r=a\); when these values vanish, we show that they must also vanish for any \(r\) divisible by \(a\).
We invoke the following explicit formula for computing the Witten-Reshetikhin-Turaev invariant \(\tau_{r}\) of Seifert fiber spaces.
**Lemma 4.1**.: _Let \(M\) be a closed Seifert fiber space with orientable orbifold base and orientable fibration, and with symbol \((g;(a_{1},b_{1}),\cdots,(a_{n},b_{n}))\), where \(g,n\geq 0\) are integers, and where \(a_{j}\geq 1\) and \(b_{j}\) are coprime pairs of integers for \(j=1,\cdots,n\). Orient \(M\) by orienting the base and the fibers, such that the rational Euler number of the Seifert fibration is_
\[E=-\sum_{j}b_{j}/a_{j}.\]
_Then,_
\[\tau_{r}(M)=\frac{U_{r}\cdot Z_{r}}{2^{n+g-1}\sqrt{\prod_{j}a_{j}}}\]
_where_
\[Z_{r}=\sum_{(\gamma,\boldsymbol{\mu},\boldsymbol{m})}\left\{\frac{e^{\sqrt{-1 }\cdot\frac{\pi\gamma^{2}E}{2r}}}{\sin^{n+2g-2}(\pi\gamma/r)}\cdot\prod_{j} \mu_{j}e^{\sqrt{-1}\cdot\left(\frac{-\pi(2rm_{j}+\mu_{j})\gamma}{a_{j}r}+ \frac{-2\pi(rm_{j}^{2}+\mu_{j}m_{j})b_{j}^{*}}{a_{j}}\right)}\right\}\]
_and_
\[U_{r}=(-1)^{n}\cdot e^{\sqrt{-1}\cdot\left(\frac{3\pi}{2r}-\frac{3\pi}{4} \right)\cdot\text{\rm sgn}(E)}\cdot e^{\sqrt{-1}\cdot\frac{\pi\left(E+12\cdot \sum_{j}s(b_{j},a_{j})\right)}{2r}}\]
_Here, any \(j\) ranges over \(\{1,\cdots,n\}\), and \((\gamma,\boldsymbol{\mu},\boldsymbol{m})=(\gamma,(\mu_{1},\cdots,\mu_{n}),(m_{ 1},\cdots,m_{n}))\) ranges over \(\{1,2,\cdots,r-1\}\times\{\pm 1\}^{n}\times\mathbb{Z}/a_{1}\mathbb{Z}\times \cdots\times\mathbb{Z}/a_{n}\mathbb{Z}\). The notation \(b_{j}^{*}\) denotes any congruence inverse of \(b_{j}\) modulo \(a_{j}\), namely, \(b_{j}b_{j}^{*}\equiv 1\bmod a_{j}\); \(\text{\rm sgn}(E)\) denotes the sign of \(E\), with value \(\pm 1\) or \(0\); \(\text{\rm s}(b_{j},a_{j})\) denotes the Dedekind sum \((4a_{j})^{-1}\cdot\sum_{l\,\in\{1,2,\cdots,a_{j}-1\}}\cot(\pi l/a_{j})\cot(\pi l b _{j}/a_{j})\)._
See [11, Theorem 8.4] for this formula. Hansen actually obtains the most general formula that applies to any orientable closed Seifert fiber space, including those with non-orientable orbifold base. We have only stated here the case with orientable orbifold base. An equivalent formula for this case was formerly obtained by Rozansky [10].
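To make the formula concrete, the following is a minimal brute-force sketch (our own code, not from the paper; all function names are ours) that evaluates \(\tau_{r}\) numerically from Lemma 4.1, with sanity checks against Theorem 1.2 indicated in the comments:

```python
# A minimal numerical sketch (ours) of Hansen's formula in Lemma 4.1, summing
# over all (gamma, mu, m) by brute force.
from cmath import exp
from math import pi, sin, tan
from itertools import product
from fractions import Fraction

def dedekind_sum(b, a):
    # s(b, a) = (4a)^{-1} * sum_l cot(pi*l/a) * cot(pi*l*b/a)
    return sum(1 / (tan(pi * l / a) * tan(pi * l * b / a)) for l in range(1, a)) / (4 * a)

def tau(r, g, pairs):
    n = len(pairs)
    E = -sum(Fraction(b, a) for a, b in pairs)
    sgn = (E > 0) - (E < 0)
    U = (-1) ** n * exp(1j * (3 * pi / (2 * r) - 3 * pi / 4) * sgn) \
        * exp(1j * pi * (float(E) + 12 * sum(dedekind_sum(b, a) for a, b in pairs)) / (2 * r))
    Z = 0
    for gam in range(1, r):
        base = exp(1j * pi * gam * gam * float(E) / (2 * r)) / sin(pi * gam / r) ** (n + 2 * g - 2)
        for mus in product((1, -1), repeat=n):
            for ms in product(*(range(a) for a, _ in pairs)):
                term = base
                for (a, b), mu, m in zip(pairs, mus, ms):
                    b_star = pow(b, -1, a)  # congruence inverse of b modulo a
                    term *= mu * exp(1j * (-pi * (2 * r * m + mu) * gam / (a * r)
                                           - 2 * pi * (r * m * m + mu * m) * b_star / a))
                Z += term
    prod_a = 1
    for a, _ in pairs:
        prod_a *= a
    return U * Z / (2 ** (n + g - 1) * prod_a ** 0.5)

# Sanity checks against Theorem 1.2 at a = 5, g = 0: the symbol
# (0; (5,1),(5,1),(5,-1),(5,-1)) is in the first case with b* = 1, so
# (2/5) * sin(pi/5)**2 * abs(tau(5, 0, [(5,1),(5,1),(5,-1),(5,-1)]))**2
# should be close to 5**3 / 2**3 / sin(pi/5)**2; and (0; (5,1),(5,1),(5,-2))
# is in the second case, so tau(5, 0, [(5,1),(5,1),(5,-2)]) should vanish
# up to rounding errors.
```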
**Lemma 4.2**.: _Under the assumptions of Theorem 1.2, and assuming \(r\) divisible by \(a\), the term \(Z_{r}=Z_{r}(M)\) as in Lemma 4.1 becomes_
\[Z_{r}=\sum_{(\gamma,\boldsymbol{\mu})}\left\{\frac{e^{\sqrt{-1}\cdot\frac{- \pi\gamma\sum_{j}\mu_{j}}{ar}}\cdot\prod_{j}\mu_{j}}{\sin^{n+2g-2}(\pi\gamma/r )}\cdot\prod_{j}\sum_{m_{j}}e^{\sqrt{-1}\cdot\frac{-2\pi(\gamma+b_{j}^{*}\mu_ {j})m_{j}}{a}}\right\}\]
_where \((\gamma,\boldsymbol{\mu})\) ranges over \(\{1,\cdots,r-1\}\times\{\pm 1\}^{n}\), and each \(m_{j}\) ranges over \(\mathbb{Z}/a\mathbb{Z}\)._
Proof.: In the expression of \(Z_{r}\) in Lemma 4.1, if any \(a_{j}\) divides \(r\), we can ignore the term \(rm_{j}^{2}\) in the exponent of the \(j\)-th factor in the product; if \(E=0\), we can ignore the factor that involves \(\gamma^{2}\) in the exponent. Therefore, under these conditions, the
expression of \(Z_{r}\) can be rearranged into
\[Z_{r}(M) = \sum_{(\gamma,\boldsymbol{\mu})}\left\{\frac{\prod_{j}\mu_{j}e^{ \sqrt{-1}\cdot\frac{-\pi\gamma\mu_{j}}{a_{j}r}}}{\sin^{n+2g-2}(\pi\gamma/r)} \cdot\sum_{\boldsymbol{m}}e^{\sqrt{-1}\cdot\sum_{j}\left(\frac{-2\pi\gamma m_{j }}{a_{j}}+\frac{-2\pi b_{j}^{*}\mu_{j}m_{j}}{a_{j}}\right)}\right\}\] \[= \sum_{(\gamma,\boldsymbol{\mu})}\left\{\frac{e^{\sqrt{-1}\cdot \sum_{j}\frac{-\pi\gamma\mu_{j}}{a_{j}r}}\cdot\prod_{j}\mu_{j}}{\sin^{n+2g-2}( \pi\gamma/r)}\cdot\sum_{\boldsymbol{m}}e^{\sqrt{-1}\cdot\sum_{j}\frac{-2\pi( \gamma+b_{j}^{*}\mu_{j})m_{j}}{a_{j}}}\right\}\] \[= \sum_{(\gamma,\boldsymbol{\mu})}\left\{\frac{e^{\sqrt{-1}\cdot \sum_{j}\frac{-\pi\gamma\mu_{j}}{a_{j}r}}\cdot\prod_{j}\mu_{j}}{\sin^{n+2g-2}( \pi\gamma/r)}\cdot\prod_{j}\sum_{m_{j}}e^{\sqrt{-1}\cdot\frac{-2\pi(\gamma+b_{ j}^{*}\mu_{j})m_{j}}{a_{j}}}\right\}\]
where \((\gamma,\boldsymbol{\mu})\) ranges in \(\{1,2,\cdots,r-1\}\times\{\pm 1\}^{n}\), and \(j\) in \(\{1,\cdots,n\}\), and \(m_{j}\) in \(\mathbb{Z}/a_{j}\mathbb{Z}\). In particular, the simplification applies as we assume \(a_{1}=a_{2}=\cdots=a_{n}=a\), and \(b_{1}+b_{2}+\cdots+b_{n}=0\), and \(r\) divisible by \(a\).
**Lemma 4.3**.: _Under the assumptions of Theorem 1.2, and assuming \(r\) divisible by \(a\), if there does not exist any \(\boldsymbol{\mu}\in\{\pm 1\}^{n}\) that satisfies the congruence equations_
\[b_{1}^{*}\mu_{1}\equiv b_{2}^{*}\mu_{2}\equiv\cdots\equiv b_{n}^{*}\mu_{n}\mod a,\]
_then_
\[Z_{r}=0\]
_where \(Z_{r}\) is the term as in Lemma 4.1._
Proof.: For any fixed \((\gamma,\boldsymbol{\mu})\), we observe
\[\sum_{m_{j}\in\mathbb{Z}/a\mathbb{Z}}e^{\sqrt{-1}\cdot\frac{-2\pi(\gamma+b_{j }^{*}\mu_{j})m_{j}}{a}}=\begin{cases}a&\text{if }\gamma+b_{j}^{*}\mu_{j}\equiv 0\bmod a \\ 0&\text{otherwise}\end{cases}\]
Therefore, in the simplified expression of \(Z_{r}\) as in Lemma 4.2, the summand corresponding to \((\gamma,\boldsymbol{\mu})\) is nonzero if and only if \(\gamma+b_{j}^{*}\mu_{j}\equiv 0\bmod a\) holds for all \(j\in\{1,\cdots,n\}\). For the sum \(Z_{r}\) to be nonzero, there has to be some \(\gamma\in\{1,2,\cdots,r-1\}\) that satisfies the above condition for some \(\boldsymbol{\mu}\in\{\pm 1\}^{n}\); but then \(b_{j}^{*}\mu_{j}\equiv-\gamma\bmod a\) holds for all \(j\), so there has to be some \(\boldsymbol{\mu}\) that satisfies \(b_{1}^{*}\mu_{1}\equiv b_{2}^{*}\mu_{2}\equiv\cdots\equiv b_{n}^{*}\mu_{n}\bmod a\).
**Lemma 4.4**.: _Under the assumptions of Theorem 1.2, if there exists some integer \(b^{*}\) coprime to \(a\), and some \(\boldsymbol{\nu}\in\{\pm 1\}^{n}\), such that \(b^{*}\equiv b_{j}^{*}\nu_{j}\bmod a\) holds for all \(j\in\{1,\cdots,n\}\), then_
\[Z_{a}=e^{\sqrt{-1}\cdot\frac{\pi b^{*}\cdot\sum_{j}\nu_{j}}{a^{2}}}\cdot\frac{ 2\cdot a^{n}\cdot\prod_{j}\nu_{j}}{\sin^{n+2g-2}(\pi b^{*}/a)}\]
_where \(Z_{a}\) is the term as in Lemma 4.1 with \(r=a\)._
Proof.: Possibly after permuting \(\{1,\cdots,n\}\), we may assume \(\nu_{j}=1\) for \(j=1,\cdots,m\) and \(\nu_{j}=-1\) for \(j=m+1,\cdots,n\). We may also assume \(b^{*}\in\{1,2,\cdots,a-1\}\) without changing its residue class modulo \(a\).
For \(r=a\geq 3\), the nonvanishing criterion \(\gamma+b_{j}^{*}\mu_{j}\equiv 0\bmod a\) observed in the proof of Lemma 4.3 leaves only two nonzero summands in \(Z_{r}\), corresponding to

\[(\gamma,\boldsymbol{\mu})=(a-b^{*},(\underbrace{1,\cdots,1}_{m},\underbrace{-1, \cdots,-1}_{n-m}))\]
and
\[(\gamma,\boldsymbol{\mu})=(b^{*},(\underbrace{-1,\cdots,-1}_{m},\underbrace{1, \cdots,1}_{n-m})).\]
Let \(b\) denote a congruence inverse of \(b^{*}\) modulo \(a\), so that \(b_{j}\equiv\nu_{j}b\bmod a\) holds for all \(j\). By our assumption \(\sum_{j}b_{j}=0\) in Theorem 1.2, \((2m-n)b\equiv mb+(n-m)(-b)\equiv\sum_{j}b_{j}\equiv 0\bmod a\), so \(n-2m\) is divisible by \(a\). By our assumption \(a>n\geq 0\) in Theorem 1.2, we must have \(|n-2m|<a\), and hence \(n-2m=0\). So, we observe
\[(-1)^{n-m}=(-1)^{m+\frac{2m-n}{a}},\]
which is useful below.
We compute
\[Z_{a}(M) = \frac{(-1)^{n-m}\cdot e^{\sqrt{-1}\cdot\frac{-\pi(2m-n)(a-b^{*})}{a ^{2}}}}{\sin^{n+2g-2}(\pi(a-b^{*})/a)}\cdot a^{n}+\frac{(-1)^{m}\cdot e^{\sqrt{ -1}\cdot\frac{-\pi(n-2m)b^{*}}{a^{2}}}}{\sin^{n+2g-2}(\pi b^{*}/a)}\cdot a^{n}\] \[= e^{\sqrt{-1}\cdot\frac{\pi(2m-n)b^{*}}{a^{2}}}\cdot\frac{a^{n}}{\sin^{n+2g-2 }(\pi b^{*}/a)}\cdot\left((-1)^{n-m+\frac{2m-n}{a}}+(-1)^{m}\right)\] \[= e^{\sqrt{-1}\cdot\frac{\pi(2m-n)b^{*}}{a^{2}}}\cdot\frac{2\cdot a^{n}\cdot(- 1)^{n-m}}{\sin^{n+2g-2}(\pi b^{*}/a)}\] \[= e^{\sqrt{-1}\cdot\frac{\pi b^{*}\sum_{j}\nu_{j}}{a^{2}}}\cdot\frac{2\cdot a ^{n}\cdot\prod_{j}\nu_{j}}{\sin^{n+2g-2}(\pi b^{*}/a)}\]

Here, the second equality uses \(\sin(\pi(a-b^{*})/a)=\sin(\pi b^{*}/a)\) and \(e^{\sqrt{-1}\cdot(-\pi(2m-n)/a)}=(-1)^{(2m-n)/a}\); the third equality uses \(n-2m=0\), so that \((-1)^{n-m+\frac{2m-n}{a}}=(-1)^{n-m}=(-1)^{m}\); and the last equality uses \(\sum_{j}\nu_{j}=2m-n\) and \(\prod_{j}\nu_{j}=(-1)^{n-m}\).
as desired.
**Lemma 4.5**.: _Let \(M\) be a closed Seifert fiber space with orientable orbifold base and orientable fibration, and with symbol \((g;(a_{1},b_{1}),\cdots,(a_{n},b_{n}))\), where \(g,n\geq 0\) are integers, and where \(a_{j}\geq 1\) and \(b_{j}\) are coprime pairs of integers, for \(j=1,\cdots,n\), and satisfy \(b_{1}/a_{1}+\cdots+b_{n}/a_{n}=0\). If \(a_{1},\cdots,a_{n}\) are all odd,_
\[\mathrm{TV}_{3,1}(M)=\mathrm{TV}_{3,2}(M)=2^{2g}.\]
Proof.: Denote
\[\mathrm{lcm}(a_{1},\cdots,a_{n})=d.\]
If \(a_{1},\cdots,a_{n}\) are all odd, \(d\) is also odd.
The fundamental group of \(M\) has a presentation with generators
\[x_{1},y_{1},\cdots,x_{g},y_{g},s_{1},\cdots,s_{n},f\]
and relations
\[\begin{cases}s_{1}\cdots s_{n}=[x_{1},y_{1}]\cdots[x_{g},y_{g}]\\ x_{i}f=fx_{i}&i=1,\cdots,g\\ y_{i}f=fy_{i}&i=1,\cdots,g\\ s_{j}f=fs_{j}&j=1,\cdots,n\\ s_{j}^{a_{j}}f^{b_{j}}=1&j=1,\cdots,n\end{cases}\]
With \(\mathbb{Z}/2\mathbb{Z}\) coefficients, we can eliminate any \([s_{j}]\) using the relation \(a_{j}[s_{j}]+b_{j}[f]=0\), since \(a_{j}\) is odd. Then the first relation is equivalent to \(-(b_{1}d/a_{1}+\cdots+b_{n}d/a_{n})[f]=0\) over \(\mathbb{Z}/2\mathbb{Z}\), which imposes no relation by our assumption \(b_{1}/a_{1}+\cdots+b_{n}/a_{n}=0\). It follows that \(H_{1}(M;\mathbb{Z}/2\mathbb{Z})\) is freely generated by \([x_{1}],[y_{1}],\cdots,[x_{g}],[y_{g}],[f]\) over \(\mathbb{Z}/2\mathbb{Z}\). Then the \(\mathbb{Z}/2\mathbb{Z}\) Betti numbers of \(M\) are \(\beta_{0}=\beta_{3}=1\) and \(\beta_{2}=\beta_{1}=2g+1\), by the Poincare duality with \(\mathbb{Z}/2\mathbb{Z}\) coefficients. Therefore, we obtain
\[\mathrm{TV}_{3,2}(M)=2^{2g},\]
by Lemma 3.4.
Since \(M\) is homeomorphic to the mapping torus of a periodic surface automorphism of order \(d\), there is a cyclic cover \(\tilde{M}\to M\) of degree \(d\), and \(\tilde{M}\) is a product of a closed orientable surface with a circle. Since \(d\) is odd, the induced homomorphism \(H^{*}(M;\mathbb{Z}/2\mathbb{Z})\to H^{*}(\tilde{M};\mathbb{Z}/2\mathbb{Z})\) is injective, (because of the Poincare duality pairing with \(\mathbb{Z}/2\mathbb{Z}\) coefficients and the isomorphism on the top dimension). However, \(H^{1}(\tilde{M};\mathbb{Z}/2\mathbb{Z})\) contains no element whose cube is nontrivial, (by the Kunneth theorem which determines the cohomology ring of \(\tilde{M}\) over \(\mathbb{Z}/2\mathbb{Z}\)). It follows that \(t^{3}=0\in H^{3}(M;\mathbb{Z}/2\mathbb{Z})\) holds for any \(t\in H^{1}(M;\mathbb{Z}/2\mathbb{Z})\). Moreover, the first Stiefel-Whitney class \(w_{1}\) of \(M\) vanishes, as \(M\) is orientable. Therefore, we obtain
\[\operatorname{TV}_{3,1}(M)=2^{2g},\]
by Lemma 3.4.
Under the assumptions of Theorem 1.2, we compute the Turaev-Viro invariants in Theorem 1.2 as follows.
Suppose that \(b^{*}b_{j}\equiv\nu_{j}\bmod a\) holds for some integer \(b^{*}\) coprime to \(a\) and some \(\nu_{j}\in\{\pm 1\}\), and for all \(j\in\{1,\cdots,n\}\). Then,
\[\operatorname{TV}_{a,1}(M) = \operatorname{TV}_{a,1}(S^{3})\cdot|\tau_{a}(M)|^{2}\] \[= \frac{2}{a}\cdot\sin^{2}(\pi/a)\cdot\left|\frac{1}{2^{n+g-1}a^{n/ 2}}\cdot\frac{2\cdot a^{n}}{\sin^{n+2g-2}(\pi b^{*}/a)}\right|^{2}\] \[= \frac{a^{n-1}}{2^{2n+2g-5}}\cdot\frac{\sin^{2}(\pi/a)}{\sin^{2n+ 4g-4}(\pi b^{*}/a)}\]
by Lemmas 4.1, 4.4, and 3.7, noting that \(|U_{a}|=1\).
When \(a\) is even, we apply Galois conjugacy (transforming \(e^{\sqrt{-1}\cdot\pi/a}\mapsto e^{\sqrt{-1}\cdot\pi s/a}\)) for any \(s\) coprime to \(a\), obtaining
\[\operatorname{TV}_{a,s}(M)=\frac{a^{n-1}}{2^{2n+2g-5}}\cdot\frac{\sin^{2}(\pi s /a)}{\sin^{2n+4g-4}(\pi b^{*}s/a)},\]
by Lemma 3.4.
When \(a\) is odd, we apply Lemmas 3.4 and 4.5, obtaining
\[\operatorname{TV}_{a,a-1}(M)=\operatorname{TV}_{3,2}(M)\cdot\operatorname{TV }^{\prime}_{a,a-1}(M)=\frac{\operatorname{TV}_{3,2}(M)}{\operatorname{TV}_{3,1}(M)}\cdot\operatorname{TV}_{a,1}(M)=\operatorname{TV}_{a,1}(M)\]
and
\[\operatorname{TV}^{\prime}_{a,a-1}(M)=\frac{\operatorname{TV}_{a,1}(M)}{ \operatorname{TV}_{3,1}(M)}=\frac{1}{2^{2g}}\cdot\operatorname{TV}_{a,1}(M).\]
Again, we apply Galois conjugacy (transforming \(e^{\sqrt{-1}\cdot\pi/a}\mapsto e^{\sqrt{-1}\cdot\pi s/a}\) or \(e^{\sqrt{-1}\cdot\pi(a-1)/a}\mapsto e^{\sqrt{-1}\cdot\pi s/a}\) depending on \(s\) odd or even), obtaining for any \(s\) coprime to \(a\)
\[\operatorname{TV}_{a,s}(M)=\frac{a^{n-1}}{2^{2n+2g-5}}\cdot\frac{\sin^{2}(\pi s /a)}{\sin^{2n+4g-4}(\pi b^{*}s/a)},\]
and if \(s\) is even,
\[\operatorname{TV}^{\prime}_{a,s}(M)=\frac{a^{n-1}}{2^{2n+4g-5}}\cdot\frac{\sin ^{2}(\pi s/a)}{\sin^{2n+4g-4}(\pi b^{*}s/a)},\]
by Lemma 3.4. This completes the computation of the nonvanishing values in Theorem 1.2.
Suppose now the remaining case, where no such \(b^{*}\) exists. Then, for all \(r\geq 3\) divisible by \(a\), we obtain
\[\operatorname{TV}_{r,1}(M)=0,\]
by Lemmas 4.1, 4.3, and 3.7. Similarly, we derive \(\operatorname{TV}_{r,r-1}(M)=\operatorname{TV}^{\prime}_{r,r-1}(M)=0\) in this case, using Lemmas 3.4 and 4.5. Finally, by Galois conjugacy (Lemma 3.4), we see that \(\operatorname{TV}_{r,s}(M)=0\) and \(\operatorname{TV}^{\prime}_{r,s}(M)=0\) for any applicable \(s\).
This completes the proof of Theorem 1.2.
## 5. Examples
In this section, we prove Theorem 1.1 by exhibiting nontrivial Hempel pairs that can or cannot be distinguished by Turaev-Viro invariants. See Example 5.2 for our distinguishable ones, and Example 5.3 for our indistinguishable ones.
We need the following lemma for verifying our examples. A rationality statement would be good enough for our application, but the integrality here is also well-known to experts.
**Lemma 5.1**.: _Let \(S\) be a connected orientable closed surface of genus \(g\geq 0\), and \([f]\in\operatorname{Mod}(S)\) be a periodic mapping class of order \(d\geq 1\). Then, for any integer \(r\geq 3\) coprime to \(d\), and integer \(s\) coprime to \(r\),_
\[\operatorname{TV}_{r,s}(M_{f})\in\mathbb{Z},\]
_and if \(r\) is odd and \(s\) is even,_
\[\operatorname{TV}^{\prime}_{r,s}(M_{f})\in\mathbb{Z},\]
_where \(M_{f}\) denotes the mapping torus of \(f\)._
Proof.: By Lemma 3.8, \(\operatorname{TV}_{r,s}(M_{f})\in\mathbb{C}\) is equal to the trace of the Turaev-Viro TQFT representation \(\rho^{\operatorname{TV}}_{r,s}([f])\in\operatorname{GL}(\mathcal{Z}^{\operatorname{TV}}_{r,s}(S))\). If \([f]\in\operatorname{Mod}(S)\) is periodic of order \(d\), the eigenvalues of \(\rho^{\operatorname{TV}}_{r,s}([f])\) are all complex roots of unity of order dividing \(d\). In particular, \(\operatorname{TV}_{r,s}(M_{f})\) is an algebraic integer.
On the other hand, entries of \(\rho^{\operatorname{TV}}_{r,s}([f])\) lie in the cyclotomic subfield \(\mathbb{Q}(e^{\sqrt{-1}\cdot\pi s/r})\) of \(\mathbb{C}\). For any roots of unity \(\zeta_{m},\zeta_{n}\in\mathbb{C}\) of coprime orders \(m,n\geq 1\), respectively, it is an elementary fact that \(\mathbb{Q}(\zeta_{m})\cap\mathbb{Q}(\zeta_{n})\) equals \(\mathbb{Q}\)[20, Chapter 2, Proposition 2.4]. Applying with \(m|d\) and \(n=r\) (taking \(\zeta_{n}=e^{\sqrt{-1}\cdot\pi s/r}\), or if \(r\) is odd and \(s\) is odd, \(\zeta_{n}=e^{\sqrt{-1}\cdot\pi(r-s)/r}\)), we obtain \(\operatorname{TV}_{r,s}(M_{f})\in\mathbb{Q}\). Together with the algebraic integrality, we obtain \(\operatorname{TV}_{r,s}(M_{f})\in\mathbb{Z}\).
With Lemmas 3.4 and 4.5, one may derive \(\operatorname{TV}^{\prime}_{r,s}(M_{f})\in\mathbb{Q}\) from \(\operatorname{TV}_{r,s}(M_{f})\in\mathbb{Z}\). To obtain \(\operatorname{TV}^{\prime}_{r,s}(M_{f})\in\mathbb{Z}\), it is possible to appeal to a lemma similar to Lemma 3.8, with a TQFT functor associated to \(\operatorname{TV}^{\prime}\). In fact, this case has been established by Detcherry and Kalfagianni. We refer to [1, Corollary 6.1] for their proof of this case.
**Example 5.2** (Distinguishable pairs).: Let \(g\geq 0\) and \(d=5\) or \(d\geq 7\) be integers, and \(k\) be an integer coprime to \(d\). Let \(M_{A}\) and \(M_{B}\) be closed Seifert fiber spaces with orientable orbifold base and orientable Seifert fibration. We assign their symbols as
\[M_{A} : (g;(d,1),(d,1),(d,-1),(d,-1))\] \[M_{B} : (g;(d,k^{*}),(d,k^{*}),(d,-k^{*}),(d,-k^{*}))\]
where \(k^{*}\) is an integer satisfying \(k^{*}k\equiv 1\bmod d\). The \(3\)-manifold \(M_{A}\) is homeomorphic to the mapping torus of some periodic mapping class \([f_{A}]\in\operatorname{Mod}(S)\) of order \(d\), where \(S\) is a connected orientable closed surface of genus \(dg+d-1\) (Lemma 2.1). The \(3\)-manifold \(M_{B}\) is homeomorphic to the mapping torus of the iterate mapping class \([f_{B}]=[f_{A}^{k}]\) (Lemma 2.2).
By Theorem 1.2, we compute
\[\operatorname{TV}_{d,1}(M_{A}) = \frac{d^{3}}{2^{2g+3}}\cdot\frac{1}{\sin^{4g+2}(\pi/d)}\] \[\operatorname{TV}_{d,1}(M_{B}) = \frac{d^{3}}{2^{2g+3}}\cdot\frac{\sin^{2}(\pi/d)}{\sin^{4g+4}(\pi k/d)}\]
The values are equal if and only if \(k\equiv\pm 1\bmod d\), namely, \([f_{B}]=[f_{A}]\) or \([f_{B}]=[f_{A}^{-1}]\). For \(k\) other than these values, one may also check that \(\operatorname{TV}_{d,s}(M_{A})\neq\operatorname{TV}_{d,s}(M_{B})\) for any integer \(s\) coprime to \(d\), and if \(d\) is odd, \(\operatorname{TV}^{\prime}_{d,s}(M_{A})\neq\operatorname{TV}^{\prime}_{d,s}(M _{B})\) for any even integer \(s\) coprime to \(d\). Under our assumption on \(d\), such \(k\) does exist, so \(M_{A}\) and \(M_{B}\) form a nontrivial Hempel pair.
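The separation of these values can be confirmed numerically. Below is a minimal sketch (our own illustration; the helper name is hypothetical) evaluating the closed form of Theorem 1.2 at level \(a=d\) with \(n=4\) exceptional fibers, taking \(b^{*}=1\) for \(M_{A}\) and \(b^{*}=k\) for \(M_{B}\):

```python
# Numeric check that M_A and M_B have distinct TV_{d,1} for k != +-1 mod d.
from math import sin, pi

def tv_d1(d, g, b_star, n=4):
    # The closed form from Theorem 1.2, specialized to level a = d.
    return (d ** (n - 1) / 2 ** (2 * n + 2 * g - 5)) \
        * sin(pi / d) ** 2 / sin(pi * b_star / d) ** (2 * n + 4 * g - 4)

d, g = 7, 0
tv_A = tv_d1(d, g, b_star=1)
for k in range(2, d - 1):                  # representatives with k != +-1 mod d
    tv_B = tv_d1(d, g, b_star=k)
    assert abs(tv_A - tv_B) > 1e-9         # the pair is distinguished
    print(f"k = {k}: TV(M_A) = {tv_A:.4f}, TV(M_B) = {tv_B:.4f}")
```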
**Example 5.3** (Indistinguishable pairs).: Let \(g\geq 0\) be any integer, \(p\geq 5\) be a prime integer, and \(k\) be an integer coprime to \(p\). Let \(M_{A}\) and \(M_{B}\) be closed Seifert fiber spaces with orientable orbifold base and orientable Seifert fibration. We assign their symbols as
\[M_{A} : (g;(p,1),(p,1),(p,-2))\] \[M_{B} : (g;(p,k^{*}),(p,k^{*}),(p,-2k^{*}))\]
where \(k^{*}\) is an integer satisfying \(k^{*}k\equiv 1\bmod p\). We obtain the connected orientable closed surface \(S\) of genus \(pg+(p-1)/2\), and the periodic mapping classes \([f_{A}]\) and \([f_{B}]=[f_{A}^{k}]\) of order \(p\), similarly as in the previous example (Lemmas 2.1 and 2.2).
Again, \([f_{A}]\) and \([f_{B}]\) form a nontrivial Hempel pair exactly when \(k\not\equiv\pm 1\bmod p\), and such \(k\) exists under our assumption on \(p\).
If \(r\geq 3\) is divisible by \(p\), applying Theorem 1.2 and Lemma 5.1, we see that \(\operatorname{TV}_{r,s}(M_{A})=\operatorname{TV}_{r,s}(M_{B})=0\) holds for any \(s\) coprime to \(r\), and moreover, if \(r\) is odd and \(s\) is even, \(\operatorname{TV}^{\prime}_{r,s}(M_{A})=\operatorname{TV}^{\prime}_{r,s}(M_{ B})=0\) also holds.
If \(r\geq 3\) is not divisible by \(p\), then it is coprime to \(p\). By Lemma 5.1, \(\operatorname{TV}_{r,s}(M_{A})\) and \(\operatorname{TV}_{r,s}(M_{B})\) are rational. Since \(\operatorname{TV}_{r,s}(M_{A})\) and \(\operatorname{TV}_{r,s}(M_{B})\) are the traces of the Turaev-Viro TQFT representations \(\rho^{\operatorname{TV}}_{r,s}\) of the periodic mapping classes \([f_{A}]\) and \([f_{B}]=[f_{A}^{k}]\), respectively, the eigenvalues of \(\rho^{\operatorname{TV}}_{r,s}([f_{A}])\) are roots of unity of order dividing \(p\), and the eigenvalues of \(\rho^{\operatorname{TV}}_{r,s}([f_{B}])\) are their Galois conjugates under the transformation \(e^{\sqrt{-1}\cdot 2\pi/p}\mapsto e^{\sqrt{-1}\cdot 2\pi k/p}\). Then by the rationality, we obtain \(\operatorname{TV}_{r,s}(M_{A})=\operatorname{TV}_{r,s}(M_{B})\) for any \(s\) coprime to \(r\). Moreover, if \(r\) is odd and \(s\) is even, we apply Lemmas 3.4 and 4.5 to deduce \(\operatorname{TV}^{\prime}_{r,s}(M_{A})=\operatorname{TV}^{\prime}_{r,s}(M_{B})\).
## Appendix A Splitting of Turaev-Viro invariants at odd levels
In this appendix section, we prove the formula in Lemma 3.4 about \(\operatorname{TV}_{r,s}\) when \(r\) and \(s\) are both odd. We restate this part as a separate theorem, and make a couple of remarks regarding former results.
**Theorem A.1**.: _Let \(M\) be any closed \(3\)-manifold. Let \(r\geq 3\) be an odd integer and \(s\) be an integer coprime to \(r\). Adopt Notation 3.2._
_If \(s\) is odd, then the following formula holds._
\[\operatorname{TV}_{r,s}(M)=\operatorname{TV}_{3,1}(M)\cdot\operatorname{TV}^{ \prime}_{r,r-s}(M).\]
**Remark A.2**.:
1. Sokolov obtains a canonical splitting of \(\operatorname{TV}_{r,s}\) into the sum of three refined invariants [10]. When \(r\) is odd and \(s\) is odd, one may identify the three refined invariants (the zeroth, the first, and the second in Sokolov's definition) as \(\operatorname{TV}^{\prime}_{r,r-s}\), and \((\operatorname{TV}_{r,s}-\operatorname{TV}_{r,r-s})/2\), and \((\operatorname{TV}_{r,s}+\operatorname{TV}_{r,r-s})/2-\operatorname{TV}^{ \prime}_{r,r-s}\). In this case, the splitting of \(\operatorname{TV}_{r,s}\) is proportional to the splitting of \(\operatorname{TV}_{3,1}\). Similarly, when \(r\) is odd and \(s\) is even, the splitting of \(\operatorname{TV}_{r,s}\) is proportional to the splitting of \(\operatorname{TV}_{3,2}\). Compare [1, Theorem 1.5].
2. In the same paper, Sokolov quickly points out Lemma A.6 below, under the assumption of orientability. See the formula (1) in [10, Proof of Lemma 2.2].
The rest of this section is devoted to the proof of Theorem A.1.
Our strategy is to derive the needed ingredients from the proof of Detcherry, Kalfagianni, and Yang for the case with \(r\) odd and \(s\) even [1, Appendix A]. We count the sign change from their case for the individual terms in the defining state-sum expression, and verify that, overall, the factors \(\operatorname{TV}_{3,1}\) and \(\operatorname{TV}^{\prime}_{r,r-s}\) arise from a proportional sign change of the corresponding factors in their case.
We denote by \(\operatorname{ev}_{r,s}\colon\mathbb{Q}(q^{1/2})\to\mathbb{C}\) the evaluation which assigns the abstract root of unity \(q^{1/2}\) to be \(e^{\sqrt{-1}\cdot\pi s/r}\).
For any odd integer \(r\geq 3\), recall that \(I_{r}=\{0,1,\cdots,r-2\}\) denotes the set of colors on this level. It contains the subset of even colors \(I^{\prime}_{r}=\{0,2,\cdots,r-3\}\) and also \(I_{3}=\{0,1\}\). Let \(\mathscr{T}=(V,E,F,T)\) be any finite simplicial \(3\)-complex. From any coloring \(c\colon E\to I_{r}\), we obtain a pair of colorings \(c_{3}\colon E\to I_{3}\) and \(c^{\prime}\colon E\to I^{\prime}_{r}\) as follows: for any \(e\in E\), we assign \(c_{3}(e)=0\) and \(c^{\prime}(e)=c(e)\) if \(c(e)\) is even, or \(c_{3}(e)=1\) and \(c^{\prime}(e)=r-c(e)\) if \(c(e)\) is odd. By observation, this operation preserves admissible colorings, and yields a bijective correspondence between \(\mathcal{A}_{r}\) and \(\mathcal{A}_{3}\times\mathcal{A}^{\prime}_{r}\).
**Lemma A.3**.: _Let \(\mathscr{T}=(V,E,F,T)\) be any finite simplicial \(3\)-complex. Let \(r\geq 3\) be an odd integer and \(s\) be an integer coprime to \(r\). Adopt Notation 3.1. Identify \(\mathcal{A}_{r}=\mathcal{A}_{3}\times\mathcal{A}^{\prime}_{r}\). If \(s\) is even, then, for any \(x\in E\sqcup F\sqcup T\) and any \(c=(c_{3},c^{\prime})\in\mathcal{A}_{r}\),_
\[\operatorname{ev}_{r,s}\left(|x|_{c}\right)=\operatorname{ev}_{3,2}\left(|x| _{c_{3}}\right)\cdot\operatorname{ev}_{r,s}\left(|x|_{c^{\prime}}\right),\]
_and hence,_
\[\operatorname{ev}_{r,s}\left(|\mathscr{T}|_{c}\right)=\operatorname{ev}_{3,2 }\left(|\mathscr{T}|_{c_{3}}\right)\cdot\operatorname{ev}_{r,s}\left(|\mathscr{ T}|_{c^{\prime}}\right).\]
The identities in Lemma A.3 are key to the proof of Detcherry, Kalfagianni, and Yang for the \(s\) even case [1, Theorem 2.9]. We refer to [1, Lemma A.4] for the proof. Note that the identity (A.1) therein essentially relies on the parity assumption of \(s\).
For proving the \(s\) odd case, our next few lemmas examine the sign difference between \(\operatorname{ev}_{r,s}(|x|_{c})\) and \(\operatorname{ev}_{r,r-s}(|x|_{c})\), and between \(\operatorname{ev}_{r,s}(|\mathscr{T}|_{c})\) and \(\operatorname{ev}_{r,r-s}(|\mathscr{T}|_{c})\).
**Lemma A.4**.: _Let \(\mathscr{T}=(V,E,F,T)\) be any finite simplicial \(3\)-complex. Let \(r\geq 3\) be an integer and \(s\) be an integer coprime to \(r\). Adopt Notation 3.1. Then, for any \(x\in E\sqcup F\sqcup T\) and any \(c\in\mathcal{A}_{r}\),_
\[\mathrm{ev}_{r,s}\left(|x|_{c}\right)=(-1)^{\delta(x,c)}\cdot\mathrm{ev}_{r,r-s} \left(|x|_{c}\right),\]
_where \(\delta(x,c)\in\mathbb{Z}\) is assigned as follows._
1. _For_ \(x\in E\)_, having color_ \(i\) _under_ \(c\)_,_ \[\delta(x,c)=i.\]
2. _For_ \(x\in F\)_, having edge colors_ \((i,j,k)\) _under_ \(c\)_,_ \[\delta(x,c)=\frac{i^{2}+j^{2}+k^{2}}{2}.\]
3. _For_ \(x\in T\)_, having edge colors_ \((i,j,k),(i,m,n),(j,l,n),(k,l,m)\) _on each face under_ \(c\)_,_ \[\delta(x,c)=i+j+k+l+m+n+\frac{il+jm+kn}{2}.\]
Proof.: We prove the formula for \(x\in T\). The formulas with \(x\in E\) and \(x\in F\) can be proved by similar means, and are simpler. Suppose the tetrahedron \(x\) has edge colors \(i,j,k,l,m,n\) given by \(c\). Note that the value of the quantum integer only changes by a sign determined by its parity
\[\mathrm{ev}_{r,s}([w])=(-1)^{w-1}\cdot\mathrm{ev}_{r,r-s}([w]).\]
For any fixed \(z\), the total sign change of the summand is \(-1\) to the power
\[\frac{(z+1)z}{2}+\sum_{a}\frac{(z-T_{a})(z-T_{a}-1)}{2}+\sum_{b} \frac{(Q_{b}-z)(Q_{b}-z-1)}{2}\] \[= 4z^{2}-2zW+\frac{1}{2}\cdot\left(\sum_{a}T_{a}^{2}+\sum_{b}Q_{b} ^{2}\right)\]
where \(W=i+j+k+l+m+n=\sum_{a}T_{a}=\sum_{b}Q_{b}\) is an integer. The first two terms are even integers, having no effect to the total change of sign, and the last term is independent of \(z\). Therefore, we obtain
\[\mathrm{ev}_{r,s}(|x|_{c})=(-1)^{\delta(x,c)}\cdot\mathrm{ev}_{r,r-s}(|x|_{c}).\]
where
\[\delta(x,c)\equiv\frac{1}{2}\cdot\left(\sum_{a}T_{a}^{2}+\sum_{b}Q_{b}^{2} \right)\mod 2.\]
The expression \(\sum_{a}T_{a}^{2}+\sum_{b}Q_{b}^{2}\) is equal to the sum of all the quadratic monomials in \(i,j,k,l,m,n\), namely, \(i^{2}+ij+\cdots+in+j^{2}+jk+\cdots+mn\). We rearrange
\[\sum_{a}T_{a}^{2}+\sum_{b}Q_{b}^{2}=X^{2}+Y^{2}+Z^{2}+XY+XZ+YZ-il-jm-kn\]
where \(X=i+l\), \(Y=j+m\), and \(Z=k+n\). Note that the parity pattern of \((X,Y,Z)\) may only be \((0,0,0)\) or \((1,1,1)\), up to permutation of the components. Indeed, the only possible patterns of \((i_{3}+l_{3},j_{3}+m_{3},k_{3}+n_{3})\) are \((0,0,0)\), \((1,1,1)\), and \((0,2,2)\), up to permutation of the components, because of admissible coloring. So, the part \(X^{2}+Y^{2}+Z^{2}+XY+XZ+YZ=(X+Y)(X+Z)+Y^{2}+Z^{2}\) is congruent to \(0\) modulo \(4\) if \((X,Y,Z)\equiv(0,0,0)\) mod \(2\), or congruent to \(2\) modulo \(4\) if \((X,Y,Z)\equiv(1,1,1)\) mod \(2\). In both cases, we see that \((X^{2}+Y^{2}+Z^{2}+XY+XZ+YZ)/2\equiv X+Y+Z\) mod \(2\). This yields \(\delta(x,c)\equiv X+Y+Z-(il+jm+kn)/2\equiv i+j+k+l+m+n+(il+jm+kn)/2\) mod \(2\), as desired.
**Lemma A.5**.: _Let \(\mathscr{T}=(V,E,F,T)\) be any finite simplicial \(3\)-complex. Let \(r\geq 3\) be an odd integer and \(s\) be an integer coprime to \(r\). Adopt Notation 3.1. Identify \(\mathcal{A}_{r}=\mathcal{A}_{3}\times\mathcal{A}_{r}^{\prime}\). Let \(\delta\colon(E\sqcup F\sqcup T)\times\mathcal{A}_{r}\to\mathbb{Z}\) be expressed as in Lemma A.4._
1. _For any_ \(x\in E\sqcup F\) _and any_ \(c=(c_{3},c^{\prime})\in\mathcal{A}_{r}\)_,_ \[\delta(x,c)\equiv\delta(x,c_{3})\mod 2,\] _treating_ \(\mathcal{A}_{3}\) _as a subset of_ \(\mathcal{A}_{r}\)_._
2. _For_ \(x\in T\)_, having edge colors_ \((i,j,k),(i,m,n),(j,l,n),(k,l,m)\) _on each face under_ \(c\)_,_ \[\delta(x,c)\equiv\delta(x,c_{3})+\lambda(x,c)\mod 2,\] _where_ \[\lambda(x,c)=\frac{i_{3}l^{\prime}+l_{3}i^{\prime}+j_{3}m^{\prime}+m_{3}j^{\prime}+k_{3}n^{\prime}+n_{3}k^{\prime}}{2}.\]
Proof.: We make use of the relation \(c(e)=c_{3}(e)\cdot(r-c^{\prime}(e))+(1-c_{3}(e))\cdot c^{\prime}(e)=r\cdot c_{3}(e)+c^{\prime}(e)-2\cdot c_{3}(e)\cdot c^{\prime}(e)\), for any \(e\in E\). Since \(r\) is odd and \(i^{\prime},l^{\prime}\) are even, we obtain

\[il = (ri_{3}+i^{\prime}-2i_{3}i^{\prime})\cdot(rl_{3}+l^{\prime}-2l_{3}l^{\prime})\] \[\equiv r^{2}i_{3}l_{3}+r\cdot(i_{3}l^{\prime}+l_{3}i^{\prime})\] \[\equiv i_{3}l_{3}+(i_{3}l^{\prime}+l_{3}i^{\prime})\mod 4\]
and similarly we manipulate \(jm\) and \(kn\). Taking the sum, we obtain
\[il+jm+kn\equiv i_{3}l_{3}+j_{3}m_{3}+k_{3}n_{3}+2\cdot\lambda(x,c)\mod 4.\]
Moreover, we observe
\[i+j+k+l+m+n\equiv i_{3}+j_{3}+k_{3}+l_{3}+m_{3}+n_{3}\mod 2.\]
By Lemma A.4, the above congruence equalities imply \(\delta(x,c)\equiv\delta(x,c_{3})+\lambda(x,c)\mod 2\), as desired.
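The decomposition \(c(e)=r\cdot c_{3}(e)+c^{\prime}(e)-2\cdot c_{3}(e)\cdot c^{\prime}(e)\) and the resulting mod-\(4\) congruence can be brute-forced over small odd levels, as in the following sketch (our own illustration):

```python
# Brute-force check of the color decomposition and the mod-4 congruence.
for r in [3, 5, 7, 9, 11]:
    for c in range(r - 1):                       # colors in I_r
        c3 = c % 2
        cp = c if c % 2 == 0 else r - c
        assert c == r * c3 + cp - 2 * c3 * cp    # the relation for c(e)
    for ci in range(0, r - 1, 2):                # i' even
        for cl in range(0, r - 1, 2):            # l' even
            for i3 in (0, 1):
                for l3 in (0, 1):
                    i = r * i3 + ci - 2 * i3 * ci
                    l = r * l3 + cl - 2 * l3 * cl
                    assert (i * l) % 4 == (i3 * l3 + i3 * cl + l3 * ci) % 4
print("decomposition and mod-4 congruence verified")
```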
For any \(c_{3}\in\mathcal{A}_{3}\), there is a canonical subsurface \(\mathcal{S}(c_{3})\subset M\), in normal position with respect to \(\mathscr{T}\), such that \(c_{3}(e)\) indicates the number of intersection points of any edge \(e\in E\) with \(\mathcal{S}(c_{3})\). The subsurface \(\mathcal{S}(c_{3})\) is formed by taking one normal disk in each tetrahedron that contains an edge of nonzero color, and then taking their union matching the sides. The types of the normal disks (among four triangular types and three quadrilateral types for each tetrahedron) are forced by the admissible coloring \(c_{3}\). The subsurface \(\mathcal{S}(c_{3})\) is closed, as \(M\) does not have boundary.
**Lemma A.6**.: _Let \((M,\mathscr{T})\) be any triangulated closed \(3\)-manifold. Let \(r\geq 3\) be an odd integer and \(s\) be an integer coprime to \(r\). Adopt Notation 3.1. Identify \(\mathcal{A}_{r}=\mathcal{A}_{3}\times\mathcal{A}_{r}^{\prime}\). Then, for any \(c=(c_{3},c^{\prime})\in\mathcal{A}_{r}\),_
\[\operatorname{ev}_{r,s}\left(|\mathscr{T}|_{c}\right)=(-1)^{\chi(\mathcal{S}(c_{3}))}\cdot\operatorname{ev}_{r,r-s}\left(|\mathscr{T}|_{c}\right),\]
_where \(\chi(\mathcal{S}(c_{3}))\) denotes the Euler characteristic of the normal subsurface \(\mathcal{S}(c_{3})\subset M\) determined by \(c_{3}\)._
Proof.: For all \(x\in E\sqcup F\sqcup T\), the nonempty intersections \(x\cap\mathcal{S}(c_{3})\) give rise to a polygonal cell decomposition of \(\mathcal{S}(c_{3})\). Denote by \(\nu_{0},\nu_{1},\nu_{2,\triangle},\nu_{2,\square}\) the numbers of vertices, edges, triangular normal disks, and quadrilateral normal disks in \(\mathcal{S}(c_{3})\), respectively.
Let \(\delta\colon(E\sqcup F\sqcup T)\times\mathcal{A}_{r}\to\mathbb{Z}\) be expressed as in Lemma A.4. For any \(x\in E\sqcup F\sqcup T\), it is straightforward to check that \(\delta(x,c_{3})=1\) if and only if \(x\cap\mathcal{S}(c_{3})\) is nonempty, namely, a vertex, an edge, or a normal disk of \(\mathcal{S}(c_{3})\), and otherwise \(\delta(x,c_{3})=0\), treating \(\mathcal{A}_{3}\) as a subset of \(\mathcal{A}_{r}\). This means \(\nu_{0}=\sum_{x\in E}\delta(x,c_{3})\), \(\nu_{1}=\sum_{x\in F}\delta(x,c_{3})\), and \(\nu_{2,\triangle}+\nu_{2,\square}=\sum_{x\in T}\delta(x,c_{3})\). Moreover, the relation \(3\cdot\nu_{2,\triangle}+4\cdot\nu_{2,\square}=2\cdot\nu_{1}\) implies that \(\nu_{2,\triangle}\) is even, as \(\mathcal{S}(c_{3})\) is closed; in particular, \(\sum_{x\in T}\delta(x,c_{3})\equiv\nu_{2,\square}\bmod 2\). By Lemma A.5, we obtain
\[\chi(\mathcal{S}(c_{3})) = \nu_{0}-\nu_{1}+\nu_{2,\triangle}+\nu_{2,\square}\] \[\equiv \sum_{x\in E\sqcup F\sqcup T}\delta(x,c_{3})\] \[\equiv \sum_{x\in E\sqcup F\sqcup T}\delta(x,c)+\sum_{x\in T}\lambda(x,c)\mod 2.\]
Therefore, to derive the asserted formula \(\mathrm{ev}_{r,s}(|\mathscr{T}|_{c})=(-1)^{\chi(\mathcal{S}(c_{3}))}\cdot \mathrm{ev}_{r,r-s}(|\mathscr{T}|_{c})\) from Lemma A.4, it remains to prove
\[\sum_{x\in T}\lambda(x,c)\equiv 0\mod 2.\]
To this end, we observe that \(\lambda(x,c)\) is the sum of \(c^{\prime}(e)/2\), where \(e\) ranges over the edges of \(x\) whose opposite edge in \(x\) meets \(\mathcal{S}(c_{3})\). For any edge \(e^{*}\in E\), the link of \(e^{*}\) refers to the union of all the opposite edges \(e_{1},\cdots,e_{h}\) in all the tetrahedra \(t_{1},\cdots,t_{h}\in T\) that contain \(e^{*}\), denoted as \(\mathrm{lk}(e^{*})\subset M\). The link \(\mathrm{lk}(e^{*})\) is a contractible loop in \(M\) (bounding a disk transverse to \(e^{*}\)). On the other hand, any edge either misses \(\mathcal{S}(c_{3})\), or meets it at exactly one point. Then the number of edges in \(\mathrm{lk}(e^{*})\) that meet \(\mathcal{S}(c_{3})\) must be even, and the integer \(c^{\prime}(e^{*})/2\) contributes exactly this even number of times to \(\sum_{x\in T}\lambda(x,c)\). Because \(\sum_{x\in T}\lambda(x,c)\) is the total of the contributions from each edge \(e^{*}\in E\), we conclude \(\sum_{x\in T}\lambda(x,c)\equiv 0\mod 2\) as desired, completing the proof.
**Lemma A.7**.: _Let \((M,\mathscr{T})\) be any triangulated closed \(3\)-manifold. Let \(r\geq 3\) be an odd integer and \(s\) be an integer coprime to \(r\). Adopt Notation 3.1. Identify \(\mathcal{A}_{r}=\mathcal{A}_{3}\times\mathcal{A}_{r}^{\prime}\). If \(s\) is odd, then,_
\[\mathrm{ev}_{r,s}\left(|\mathscr{T}|_{c}\right)=\mathrm{ev}_{3,1}\left(| \mathscr{T}|_{c_{3}}\right)\cdot\mathrm{ev}_{r,r-s}\left(|\mathscr{T}|_{c^{ \prime}}\right).\]
Proof.: By Lemma A.6, we obtain \(\mathrm{ev}_{r,s}(|\mathscr{T}|_{c})=(-1)^{\chi(\mathcal{S}(c_{3}))}\cdot\mathrm{ev}_{r,r-s}(|\mathscr{T}|_{c})\) and \(\mathrm{ev}_{3,1}(|\mathscr{T}|_{c_{3}})=(-1)^{\chi(\mathcal{S}(c_{3}))}\cdot\mathrm{ev}_{3,2}(|\mathscr{T}|_{c_{3}})\). Then the asserted identity follows from the \(s\) even case (Lemma A.3).
To complete the proof of Theorem A.1, we observe
(A.1) \[\mathrm{ev}_{r,s}\left(Y_{r}\right)=\begin{cases}\mathrm{ev}_{3,2}\left(Y_{3} \right)\cdot\mathrm{ev}_{r,s}\left(Y_{r}^{\prime}\right)&\text{$s$ even}\\ \mathrm{ev}_{3,1}\left(Y_{3}\right)\cdot\mathrm{ev}_{r,r-s}\left(Y_{r}^{\prime }\right)&\text{$s$ odd}\end{cases}\]
where \(Y_{r}=-(q^{1/2}-q^{-1/2})^{2}/(2r)\) and \(Y_{r}^{\prime}=-(q^{1/2}-q^{-1/2})^{2}/r\) (indeed, \(\mathrm{ev}_{r,s}(q^{1/2}-q^{-1/2})=\sqrt{-1}\cdot 2\sin(\pi s/r)\)).
Let \(M\) be any closed \(3\)-manifold. If \(r\) is odd and \(s\) is odd, we obtain
\[\operatorname{TV}_{r,s}(M) = \operatorname{ev}_{r,s}\left(Y_{r}\cdot\sum_{c\in\mathcal{A}_{r}}|\mathscr{T}|_{c}\right)\] \[= \operatorname{ev}_{r,s}(Y_{r})\cdot\sum_{c\in\mathcal{A}_{r}}\operatorname{ev}_{r,s}(|\mathscr{T}|_{c})\] \[= \operatorname{ev}_{3,1}(Y_{3})\cdot\operatorname{ev}_{r,r-s}(Y_{r}^{\prime})\cdot\sum_{c_{3}\in\mathcal{A}_{3}}\sum_{c^{\prime}\in\mathcal{A}_{r}^{\prime}}\operatorname{ev}_{3,1}(|\mathscr{T}|_{c_{3}})\cdot\operatorname{ev}_{r,r-s}(|\mathscr{T}|_{c^{\prime}})\] \[= \operatorname{ev}_{3,1}\left(Y_{3}\cdot\sum_{c_{3}\in\mathcal{A}_{3}}|\mathscr{T}|_{c_{3}}\right)\cdot\operatorname{ev}_{r,r-s}\left(Y_{r}^{\prime}\cdot\sum_{c^{\prime}\in\mathcal{A}_{r}^{\prime}}|\mathscr{T}|_{c^{\prime}}\right)\] \[= \operatorname{TV}_{3,1}(M)\cdot\operatorname{TV}_{r,r-s}^{\prime}(M)\]
by (3.1), (3.2), (A.1), and Lemma A.7. This completes the proof of Theorem A.1.
|
2309.01445 | Why Change My Design: Explaining Poorly Constructed Visualization
Designs with Explorable Explanations | Although visualization tools are widely available and accessible, not
everyone knows the best practices and guidelines for creating accurate and
honest visual representations of data. Numerous books and articles have been
written to expose the misleading potential of poorly constructed charts and
teach people how to avoid being deceived by them or making their own mistakes.
These readings use various rhetorical devices to explain the concepts to their
readers. In our analysis of a collection of books, online materials, and a
design workshop, we identified six common explanation methods. To assess the
effectiveness of these methods, we conducted two crowdsourced studies (each
with N = 125) to evaluate their ability to teach and persuade people to make
design changes. In addition to these existing methods, we brought in the idea
of Explorable Explanations, which allows readers to experiment with different
chart settings and observe how the changes are reflected in the visualization.
While we did not find significant differences across explanation methods, the
results of our experiments indicate that, following the exposure to the
explanations, the participants showed improved proficiency in identifying
deceptive charts and were more receptive to proposed alterations of the
visualization design. We discovered that participants were willing to accept
more than 60% of the proposed adjustments in the persuasiveness assessment.
Nevertheless, we found no significant differences among different explanation
methods in convincing participants to accept the modifications. | Leo Yu-Ho Lo, Yifan Cao, Leni Yang, Huamin Qu | 2023-09-04T08:53:45Z | http://arxiv.org/abs/2309.01445v1 | Why Change My Design: Explaining Poorly Constructed Visualization Designs with Explorable Explanations
###### Abstract
Although visualization tools are widely available and accessible, not everyone knows the best practices and guidelines for creating accurate and honest visual representations of data. Numerous books and articles have been written to expose the misleading potential of poorly constructed charts and teach people how to avoid being deceived by them or making their own mistakes. These readings use various rhetorical devices to explain the concepts to their readers. In our analysis of a collection of books, online materials, and a design workshop, we identified six common explanation methods. To assess the effectiveness of these methods, we conducted two crowdsourced studies (each with \(N=125\)) to evaluate their ability to teach and persuade people to make design changes. In addition to these existing methods, we brought in the idea of Explorable Explanations, which allows readers to experiment with different chart settings and observe how the changes are reflected in the visualization. While we did not find significant differences across explanation methods, the results of our experiments indicate that, following the exposure to the explanations, the participants showed improved proficiency in identifying deceptive charts and were more receptive to proposed alterations of the visualization design. We discovered that participants were willing to accept more than 60% of the proposed adjustments in the persuasiveness assessment. Nevertheless, we found no significant differences among different explanation methods in convincing participants to accept the modifications.
Information Visualization, Deceptive Visualization, Explorable Explanations
## 1 Introduction
The visualization community has developed guidelines for practitioners, outlining best practices and common pitfalls to avoid. These guidelines, such as starting the Y-axis at zero when drawing a bar chart and selecting sequential color schemes for continuous variables, are frequently emphasized. In addition, numerous books written by authors such as Huff [18], Tufte [36], Cairo [3], and Monmonier [28], as well as many other articles [9, 31, 35], have addressed how poorly constructed visualizations can mislead the audience.
The major takeaway from these readings is that readers are equipped with the skills necessary to (1) identify misleading visualizations, and (2) avoid creating deceptive visualizations themselves. These two abilities encompass the dual roles of being both consumers and producers of data visualizations in everyday life. A common rhetorical strategy in these works involves presenting an example of a misleading visualization and then systematically explaining why it is deceptive. This emphasis on explanation is crucial in educating readers to recognize misleading visualizations others produced and avoid making similar mistakes in their own work.
Explaining data visualization best practices is also crucial to automatic visualization recommendation and correction systems. Early efforts in detecting visualization errors primarily focused on finding the issues within a given visualization, typically supplying an error message as the output [26, 27, 6]. While it is important for these automated systems to detect problems and potentially suggest solutions, the communication aspect is often overlooked.
To effectively integrate these diagnostic systems into existing visualization tools and assist creators, it is necessary to address the communication gap. Presenting users with clear explanations of why their visualization design may be potentially misleading and persuading them to accept the suggested fix is a challenge that must be tackled before these systems can be widely adopted and trusted by the end users.
In this work, we investigate practical methods for delivering explanations to readers. We begin by reviewing the literature and examining examples collected from the internet to summarize how people explain misleading visualizations to their audiences. Subsequently, we organized an explanation formulation workshop to gather ideas on elucidating deceptive visualizations and to apply the explanation techniques derived from our literature review and online sources.
Through this process, we identified six techniques for explaining misleading visualizations, which can serve as a foundation for improving the communication of data visualization best practices and enhancing the effectiveness of automatic visualization recommendation and correction systems.
In addition to the existing explanation techniques, we propose adopting Bret Victor's concept of Explorable Explanations [38] as an interactive method for conveying information to readers. Explorable Explanations enable users to engage with and investigate complex concepts, ideas, and data in an interactive and immersive manner. This approach is well-suited to address the explanatory challenges faced by readers and, potentially, visualization tool users. It also provides an engaging and interactive medium to comprehend and reassess the design choices of potentially deceptive visual representations.
We conducted two between-subject experiments on crowdsourcing platforms to evaluate the learnability and persuasiveness of five explanation techniques for addressing various visualization mistakes. In the learnability experiment, we measured participants' pre- and post-intervention performance in identifying misleading visualizations to assess the effectiveness of the methods. For the persuasiveness experiment, we evaluated the acceptance rate of the suggested visualization corrections to determine the impact of the explanation techniques on users' willingness to adopt the proposed changes.
Through this work, we aim to contribute the following to the visualization community: (1) a comprehensive compilation of existing explanation techniques for addressing misleading visualizations; (2) a prime example of Explorable Explanations focused on clarifying deceptive visualizations; and (3) two evaluation experiments assessing the learnability of identifying misleading visualizations and the persuasiveness of accepting suggested corrections.
## 2 Related Work
Our study builds on the pioneering research on misleading visualizations, also known as deceptive visualizations. Previous studies documented the existence of misleading visualizations and coined the term "lies" to describe them. Recent work focuses on developing algorithms to automatically detect them when users create visualizations using computer programming tools. Yet, there is a significant gap in the research when it comes to assessing the effectiveness of different explanation methods in making these issues clear to users or readers.
### _Misleading Visualizations_
The two most influential books on this topic are Huff's _How to Lie with Statistics_ in 1954 [18] and Tufte's _The Visual Display of Quantitative Information_ in 1983 [36]. Both books addressed the issue of misleading charts, using examples gathered from the news media of their respective times. The deceptive tactics discussed in these books remain prevalent and widely debated. Tufte's introduction of the Lie Factor, a formula for identifying misleading charts, is a unique example of a quantitative approach to the problem. Other explanation techniques featured in these classic books, such as correction and annotation, continue to be widely employed when discussing visualization pitfalls and best practices.
Monmonier authored _How to Lie with Maps_[28] from a cartographer's perspective, specifically addressing the misleading presentation of information and geographic data in maps. Jones's book, _How to Lie with Charts_[19], offers a creator's perspective on avoiding the production of deceptive charts when using spreadsheets and slideshow software. Cairo's _How Charts Lie_[3] provides a modern take on the topic, featuring recent examples of misleading techniques collected from social media and news media. While these works expand upon the issues initially discussed by Huff and Tufte by examining a broader range of charts beyond printed media, the techniques used in their explanations remain similar.
Misinformation is widespread on the internet, with data visualization often serving as a key medium for disseminating such inaccurate information. Several studies have investigated the use of visualizations in the context of misinformation online. Lo _et al._ aimed to broaden the scope of poorly constructed visualizations, addressing a wide variety of deceptive techniques that result in misleading representations and mistakes that render visualizations uninformative [23]. Lee _et al._ analyzed social media to understand the role of visualizations in social media posts and the dissemination of misinformation [20]. Lisnic _et al._ collected and examined instances of visualizations being used to spread misinformation, irrespective of the deceptiveness of the charts [22].
The deceptive impact of misleading visualizations has been consistently demonstrated across various studies. Pandey _et al._ evaluated the effects of message exaggeration/understatement and message reversal through a crowdsourced experiment. Their results revealed that participants who viewed the misleading version of the chart interpreted the data differently [31]. In another study, Correll _et al._ focused on different visual designs of truncated axes, trying to mitigate the misleading effect of a truncated axis [8]. Their findings indicated that, despite attempts to mitigate the issue by hinting at the presence of a truncated axis, truncation consistently led to an exaggerated perception of the differences between bars.
Critical thinking is an important skill in data visualization literacy. Reflecting on elementary school teaching, Chevalier _et al._ stressed the necessity of embedding critical thinking within visualization literacy education [7]. Bergstrom and West, in their university course [1], gathered and deliberated on misleading instances of data visualizations, which were later compiled in their book [2]. In their hands-on teaching classroom, Lo _et al._ guided students to construct charts using visualization software and experience firsthand the inadequacy of the default axis direction in the context of ranking data, in which a smaller value represents a better ranking. This inadvertently led to a downward trend for an ascending ranking. Subsequently, the students were tasked to rectify the chart by adjusting the chart settings [24]. Camba _et al._ investigated the effectiveness of various learning activities, such as in-class discussions, self-learning, and peer challenges, on learning to identify and accurately interpret misleading visualizations. They found that the peer challenge intervention significantly improved performance in the post-intervention test. For this activity, students were asked to experiment with the dataset and different visualization settings to "deceive" their peers and identify deceptive techniques used in their peers' visualizations. This active learning approach substantially benefited students' learning outcomes [4].
### _Automated Detection and Explanation of Misleading Visualizations_
The prevalence and impact of misleading visualizations in modern communication are significant, especially given the widespread availability of visualization tools. Automated detection and explanations serve as important countermeasures against these issues. Such detection tools are often referred to as linters, drawing an analogy to computer program code linters that raise warnings for constructs that may be legitimate but have the potential to cause errors. This analogy is fitting for detecting visualizations that are constructed correctly but may still be misleading.
McNutt and Kindlmann proposed a linter with linting rules derived from the Algebraic Visualization Design (AVD) framework [26]. They implemented a Python-based linter for charts created using the Python charting library matplotlib. In a course project, Zheng and Sherif developed a JavaScript-based linter for the charting library Chart.js [40]. VizLinter, proposed by Chen _et al._, is built with Answer Set Programming (ASP), and it checks the visualization specifications of Vega-Lite [6]. These linters are rule-based, relying on predefined rules derived from visualization best practices. In contrast, McNutt _et al._ applied the AVD framework to perform automatic checking on visualizations without predefined rules [27].
While the primary goal of these tools is to create automated checking programs that detect poorly constructed visualizations, their textual outputs need to be more approachable for end users to understand the issues and accept the needed changes. These automatic detection tools provide only textual warnings, which may notify users but lack clear explanations of the problem and its location within the visualization.
Hopkins _et al._ proposed VisuaLint, a visual interface designed to highlight problematic regions of a chart and notify users that the visualization is poorly constructed [17]. Fan _et al._ suggested a pipeline that performs visualization reverse engineering [32] to extract the visualization specification from its bitmap graphics form, then checks it against the rules derived from best practices [12]. If any violations are detected, a small overlay is created on the chart for comparison with the suggested version. While these tools have limitations, such as applicability to specific chart types, reliance on expert-created rules, and potentially high computational costs, they represent pioneering efforts in exploring the feasibility of creating user interfaces to help users understand issues and correct their visualizations.
### _Explorable Explanations_
Bret Victor proposed the concept of Explorable Explanations as a powerful rhetorical device in writing [38]. Enabled by web technology, interactive reading differs from traditional book reading by actively engaging readers with articles incorporating Explorable Explanations. Instead of using static images and precalculated numbers, writers provide small widgets that allow readers to adjust the parameters and observe how the graphs or numbers change accordingly.
Dragicevic _et al._ introduced the idea of an explorable multiverse analysis report (EMAR) for reporting experimental results [10]. By adopting Explorable Explanations, readers can explore different settings of the experiment report, increasing transparency and mitigating issues related to reporting a single analysis path.
Explorable Explanations has significant potential for explaining various concepts, regardless of their complexity. A collection of Explorable Explanations is showcased on explorables [11], featuring numerous examples that cover complex mathematical and scientific concepts. One particularly inspiring example is Nicky Case's creative work on explaining the intricate concept of game theory in an interactive and engaging way with Explorable Explanations [5]. We aim to incorporate Explorable Explanations into the explanation of visualization concepts, enabling readers to learn and be persuaded to identify misleading visualizations and accept suggested design changes more effectively.
## 3 Existing Forms of Explanations
In order to gain a thorough understanding of the common methods used to explain misleading visualizations, we collected examples from three distinct sources: the internet, books, and a design workshop. These examples illustrate why certain visualizations can be misleading and provide insight into the most effective ways to explain them.
### Examples from the Internet
The internet offers an abundant resource for explanations concerning misleading visualizations. Numerous educational blog posts and social media discussions have been discovered in our research. We utilized the collection compiled by Lo _et al._[23], who assembled a dataset of improperly designed visualization examples sourced from search engines and social media platforms. Within this collection, 1,143 visualization examples are tagged with at least one issue. We leverage this dataset to investigate the methods employed by individuals online when explaining misleading visualizations to others.
We first attempted to access the web pages using the URLs provided in the dataset. As of January 2023, we successfully retrieved 992 documents out of the 1,143 entries. Some documents were irretrievable due to missing web pages or removal from social media platforms. Next, we filtered out duplicate entries with identical URLs and documents lacking explanations. Following the retrieval and filtering procedures, our study incorporated 360 documents containing explanations. Notably, nearly a tenth (\(9.4\%,N=34\)) of these explanations were developed as instructional materials in the form of quizzes (\(4.4\%,N=16\)), slides (\(3.9\%,N=14\)), or course websites (\(1.7\%,N=6\)).
Two authors coded these documents using the grounded theory method (GTM) as described by Muller [29]. GTM is a research approach for exploring an unfamiliar domain--in our case, understanding how to explain misleading visualizations. The coders iteratively analyzed the examples and discussed the explanation methods' definitions. After iterations of coding and refining definitions, each tag achieved a Cohen's \(\kappa>0.7\). The coding results can be found in the supplemental materials.
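For reference, the agreement statistic used here can be computed directly from the two coders' labels; the snippet below is a generic sketch (our own, not the authors' analysis script) of Cohen's \(\kappa\):

```python
# A generic sketch of Cohen's kappa for two coders' labels.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b) > 0
    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_expected = sum(freq_a[x] * freq_b[x] for x in labels) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical tags: 1 if a document uses a given explanation method.
coder_a = [1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 0, 1, 0, 1, 1, 1]
print(f"kappa = {cohens_kappa(coder_a, coder_b):.3f}")
```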
The explanation methods fall into two categories: visual and textual, often complementing each other. Most explanations contain textual content (\(93.9\%,N=338\)). These explanations can be as concise as directly identifying issues, such as "Look at the percentages and the length of columns!" We labeled these explanation methods as **Short Text** (\(48.3\%,N=174\)). Short Text explanations are predominantly found on social media platforms like Twitter, where post length is limited. Conversely, **Long Text** explanations (\(45.6\%,N=164\)) are most commonly encountered in blog posts. They may involve in-depth discussions of a single issue2 or address various visualization pitfalls using multiple examples3.
Footnote 2: [https://twitter.com/1GoldilocksZone/status/856973264526209924](https://twitter.com/1GoldilocksZone/status/856973264526209924)
Footnote 3: [https://andrewsheeler.com/2013/08/28/hanging-rootedgrams-and-viz-differences-in-time-series/](https://andrewsheeler.com/2013/08/28/hanging-rootedgrams-and-viz-differences-in-time-series/)
Footnote 4: [https://flowingdata.com/2012/02/09/how-to-spot-visualization-lies/](https://flowingdata.com/2012/02/09/how-to-spot-visualization-lies/)
Over half of the explanations incorporate visual aids (\(55.3\%,N=199\)), with some even employing a combination of multiple techniques. The most prevalent visual aid contrasts two visualizations: a misleading version and a corrected or redrawn version. We differentiate between Correction and Redraw, as a correction only modifies chart settings, while a redraw changes the entire chart type. For instance, changing the y-axis to an appropriate range to better show the trend in data is a correction (Fig. 1(a)), while converting a pie chart to a bar chart is a redraw (Fig. 1(b)). **Correction**s (\(41.7\%,N=150\)) are more common than **Redraw**s (\(12.8\%,N=46\)).
Another form of visual explanation involves marking the visualization to guide the audience in identifying the misleading elements in the chart. The simpler form is **Highlighting** (\(5.8\%,N=21\)), and the more complex form is **Annotation** (\(15\%,N=54\)). Highlighting captures the audience's attention and emphasizes a specific area on the chart by circling the crucial part that leads to misunderstanding. This can also be achieved using arrows, a highlighter pen effect, or enlarging the region with a magnifying effect. Fig. 1(c) exemplifies circling a data point that creates a false impression of a decreasing trend. Annotations draw the audience's attention and provide additional visual aids. The annotation in Fig. 1(d) simplifies the comparison between two bars for the audience. There are numerous other annotation forms, such as overlaying small icons on the larger icons in pictograms to facilitate area comparisons (Fig. 1(e)), adding reference lines to emphasize the scale inconsistencies in the chart (Fig. 1(f)), or explicitly marking chart elements with in-situ text to expose the inconsistent binning sizes (Fig. 1(g)).
Tab. 1 presents the categorization of explanation methods and their descriptions. In addition to these six identified methods, during the coding process, we also observed instances where evidence from various sources was compiled to form arguments that debunk claims derived from the charts. This evidence could include charts from other news sources, data from different data agencies, or other evidence like a prescription drug receipt to refute an incorrect insulin price chart. We conclude that although the categorization of these six explanation methods is not exhaustive in the design space of explaining misleading visualizations, it encompasses a representative set of explanations found on the internet, providing a foundation for those seeking to develop explanations for misleading visualizations.
### Explanations from Books
The first documented discussion of misleading visualizations appears in Huff's book on statistics, which laid the groundwork for subsequent conversations on the subject. Books provide a valuable corpus for analysis, so we also examined them to learn how authors explain the examples they have collected. Through related book suggestions from Amazon and Goodreads, we gathered six titles: Huff [18], Tufte [36], Monmonier [28], Jones [19], Cairo [3], and Bergstrom and West [2]. These titles contain at least one chapter related to misleading visualizations, while other titles focus on identifying misleading uses of statistics but lack dedicated chapters on visualizations [16, 21, 34, 39].
We examined the chapters related to misleading charts and categorized the explanation methods employed by the authors. We found that contrasting techniques were used most frequently. The author would present a misleading visualization example encountered in daily life, guide readers in identifying the misleading elements in the chart, and then reveal the corrected or redrawn version as a truthful data representation. Annotation is also a common technique, while Highlighting is the least common. These findings align with the results from the internet examples. We also noted Tufte's use of formulas to explain the Lie Factor, which is the only instance we found where a formula was used to explain why a chart is misleading.
### Design Workshop
In addition to examples from the internet and books, we aimed to explore potential new explanation methods. We organized a design workshop with 16 participants, consisting of postgraduate students and researchers in the field of data visualization research.
The workshop comprised three parts. In the first part, participants answered six questions related to various chart issues. The first four questions each included two visualizations: one misleading and one corrected version. Participants were asked to explain why one chart was preferable to the other and to convey their explanations to a layperson from the general public. They were encouraged to use non-textual aids in their explanations. The fifth question was similar to the first four but lacked a corrected version. The final question asked participants to justify their explanations. In this part, we aimed to gather ideas about explanation methods beyond those collected from the internet and books.
The second part introduced techniques found in the internet examples to participants, aiming to provide hints or spark ideas for enhancing their initial answers. In the third part, participants revisited their explanations and attempted to improve them using techniques introduced in the second part. Lastly, they were asked to share their thoughts on the changes. This part aimed to explore the benefits of providing explanation techniques and examples for those forming explanations and rationalizing visualization design choices.
From the design workshop, we identified three new variants of explanation techniques: (1) Analogy, (2) Breakdown, and (3) Animation. Analogy is akin to Huff's chart background explanations for truncated axes [18]. Breakdown shows the decomposition of the largest group in the space next to the original chart, such as the largest group in a histogram. Animation involves rotating or morphing a chart into its correct form. As Analogy and Breakdown are drawn on the original chart to assist explanation, they may be categorized as Annotations. Animation is similar to contrasting techniques like Correction and Redraw but transforms the original chart into the corrected or redrawn version with animation. The workshop materials and coding results can be found in the supplemental materials.
Over three-quarters of the participants (\(75\%,N=12\)) enhanced their explanations in part three. Participants who initially relied on textual explanations were able to enrich their explanations with various visual aids across different questions. Highlighting was the most frequently used technique (\(52.5\%,N=42\)) in part one.
Through the explanations collected from the internet, books, and design workshop, we identified six major forms: (1) Short Text, (2) Long Text, (3) Correction, (4) Redraw, (5) Highlighting, and (6) Annotation. This classification informs the design of explanations that facilitate learning and persuade visualization tool users to make better design choices. All the collected data and materials are available on OSF4.
Footnote 4: [https://osf.io/355pf](https://osf.io/355pf)
## 4 Explanation Designs
For the evaluation study, we selected five issues from the taxonomy proposed by Lo _et al._[23] and five explanation methods from the formative study. Apart from the implementation for evaluation purposes, we also examined the feasibility and challenges of applying different explanation methods.
### _Construction of Misleading Visualizations_
In selecting visualization issues to demonstrate the explanation methods, we aimed to include a diverse range of common design-related visualization pitfalls and different chart types. The design and perception stages are the most relevant from the five-stage taxonomy proposed by Lo _et al._[23]. From these two stages, we selected three design choice issues and two perception issues with the highest occurrences in their corresponding categories. This selection covers the choice of axes, chart types, color schemes, and the design choice between 2D and 3D. We selected issues that comprise: (1) **Truncated Axis** on bar charts, where the differences between bar lengths are exaggerated due to a non-zero starting point on the y-axis. Fig. 2d provides an annotated explanation of this issue. (2) **Inappropriate Axis Range** on line charts, where the lines appear predominantly flat due to the expansive range of the y-axis (Fig. 2e). (3) **Inappropriate Use of Line Chart**, where a categorical variable is erroneously encoded on the y-axis (Fig. 2f). (4) **Ineffective Color Scheme** on choropleth maps, where a rainbow color scheme is used to represent a continuous variable (Fig. 2g). (5) **3D** pie chart, where the pie chart is presented with a perspective distortion (Fig. 2h). These issues span the most common chart types (bar charts, line charts, pie charts, and choropleths) and highlight their most prevalent issues.
In the explanation examples collected in Section 3, some explanations use charts constructed from synthetic data to better illustrate the problem. However, synthetic data may risk influencing experiment results by causing participants to question the data instead of the chart design. To avoid this, we used real data to construct the charts and clearly indicated the data sources next to the chart. We randomly picked datasets from the list of all available charts on Our World In Data [30].
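To illustrate the kind of construction involved (the authors build their charts with pandas and Altair, as noted later in this section; the data and column names here are made up for the sketch), a truncated-axis bar chart and its corrected counterpart can be produced as follows:

```python
# A minimal sketch (hypothetical data) of a misleading truncated-axis
# bar chart and its corrected version, built with pandas and Altair.
import pandas as pd
import altair as alt

df = pd.DataFrame({
    "country": ["Country A", "Country B", "Country C"],
    "value": [92.1, 93.4, 95.0],
})

base = alt.Chart(df).mark_bar(clip=True).encode(x="country:N")

# Misleading: the y-axis starts at 90, exaggerating the differences.
truncated = base.encode(
    y=alt.Y("value:Q", scale=alt.Scale(domain=[90, 96]))
).properties(title="Truncated axis")

# Corrected: the y-axis starts at zero (Altair's default for bar marks).
corrected = base.encode(y="value:Q").properties(title="Zero baseline")

(truncated | corrected).save("truncated_axis_example.html")
```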
Fig. 1: Examples of visual explanations from online websites and social media. (a) Explaining the poor design choice of the y-axis range by providing the corrected version of the original chart. (b) Redrawing a 3D pie chart as a bar chart or correction by removing the 3D effect. (c) Highlighting the inconsistent intervals of the last two x-axis elements in contrast to the other data points. (d) Annotating bar sizes by drawing boxes on top of the bars for easy comparison. (e) Filling the larger area with more small icons than the differences in data values suggest. (f) Drawing reference lines to emphasize the scale inconsistencies in the chart. (g) Adding in-situ text to point out the inconsistencies in bin sizes.
To avoid geographical biases, country names were masked as Country A, B, C, etc., except in choropleth maps. The same practice applied to continents.
We employed Python libraries pandas [25] and Altair [37] within the Jupyter Lab environment to construct the charts, except for 3D pie charts, which we created using Microsoft Excel with VBA scripts due to a lack of available Python libraries for 3D pie charts. We constructed a total of 60 charts, which were then randomly divided into two sets. The division criteria ensured that each of the five issues was represented by three misleading charts and their respective corrected versions, thus resulting in \(5\times 3\times 2=30\) charts in each set. In Experiment One, the pre-intervention test used the first set, while the post-intervention test used the second set. In Experiment Two, only the first set is used. More specifics on the experimental design are provided in Sec. 5.1 and Sec. 5.2. The datasets and scripts to construct these charts are available on OSF4.
Footnote 4: [https://osf.io/355pf](https://osf.io/355pf)
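The stratified division into two sets can be sketched as follows (our own illustration with hypothetical chart identifiers, not the authors' script):

```python
# Sketch of the stratified split: per issue, three misleading/corrected
# pairs go to each set, yielding 5 x 3 x 2 = 30 charts per set.
import random

random.seed(42)
issues = ["truncated_axis", "axis_range", "line_chart", "color_scheme", "3d_pie"]

set_one, set_two = [], []
for issue in issues:
    # Six misleading/corrected pairs were constructed per issue (60 charts).
    pairs = [(f"{issue}_{i}_bad", f"{issue}_{i}_good") for i in range(6)]
    random.shuffle(pairs)
    for bad, good in pairs[:3]:
        set_one += [bad, good]
    for bad, good in pairs[3:]:
        set_two += [bad, good]

assert len(set_one) == len(set_two) == 30
```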
### Explanation Methods
We implemented five designs to explain the above issues: (1) Short Text, (2) Correction or Redraw, (3) Highlighting, (4) Annotation, and (5) Explorable Explanations. **Short Text**, the simplest and most commonly used explanation method, sets the baseline for our evaluation experiment. **Correction and Redraw** both contrast the corrected version and the original misleading version, differing in whether the chart type is changed. Four of the five issues we studied can be corrected without changing the chart type, with the only exception of Inappropriate Use of Line Chart, which requires a redraw of the line chart to a bar chart to avoid misleading encoding of categorical variables. **Highlighting** can be done by circling chart elements related to the issues, such as the starting point of the y-axis or the legend of rainbow colors. Implementing **Annotation** requires more sophisticated thinking to convey and explain the message to readers. We drew on the collected examples from the internet and books that applied the Annotation technique, which include using lines, arrows, short phrases, and overlapping areas to compare sizes. Examples of these designs can be seen in Fig. 2.
**Explorable Explanations** is a concept proposed by Bret Victor [38]. It allows readers not only to understand the content but also to interact with and challenge the writer's claim. It provides transparency, enables exploration, builds confidence and trust in the arguments, and encourages readers to form their own ideas and actively test them against the writer's message. Throughout the readings, there are controllable widgets that allow readers to tweak the parameters and see the effect on the numbers or graphics. We hypothesize that Explorable Explanations is a good match for explaining misleading visualizations. They enable readers not only to accept visualization best practices but also to explore different parameter settings and see how they affect the charts.
We implemented explorable explanations through sliders and radio buttons for the five issues, enabling readers to interact with different parameter settings and see their effects on the charts. For Truncated Axis and Inappropriate Axis Range, readers can move sliders to change the axis start or end points. The same sliders apply to 3D pie charts, allowing readers to change the perspective angle. For choropleths, readers can change the color scheme between sequential, diverging, and rainbow colors. For Inappropriate Use of Line Chart, readers can configure the line chart with different item ordering, orientation, and chart types between line chart and bar chart. Fig. 3 shows examples of implemented Explorable Explanations.
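While the study's widgets are web-based, the interaction pattern can be prototyped in a notebook; the following is an analogous sketch (our own, using ipywidgets and matplotlib rather than the authors' implementation) in which a slider controls the y-axis start point of a bar chart:

```python
# Notebook sketch of an explorable truncated-axis chart: moving the
# slider changes the y-axis start point and redraws the bars.
import matplotlib.pyplot as plt
from ipywidgets import interact

countries = ["Country A", "Country B", "Country C"]
values = [92.1, 93.4, 95.0]

@interact(y_start=(0, 90, 5))
def draw(y_start=0):
    fig, ax = plt.subplots()
    ax.bar(countries, values)
    ax.set_ylim(y_start, 100)      # observe how truncation exaggerates gaps
    ax.set_ylabel("value")
    plt.show()
```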
## 5 Evaluation Experiments
We aim to evaluate the effectiveness of different explanation methods in two aspects: (1) learning to identify visualizations that violate visualization guidelines, and (2) persuading visualization tool users to choose a design that follows the visualization guidelines. We designed two experiments and conducted them on prolific.ac, a crowdsourcing platform focused on research studies, rather than the more general crowdsourcing platform provided by Amazon. The recommended hourly rate on the Prolific platform is GBP 9 (~USD 11).
For both studies, we require participants to be fluent in English, have no color blindness, and maintain an approval rate of 98% or higher. On the Prolific platform, 75,552 workers meet these criteria, accounting for 62.2% of the total 121,407 available workers. The criterion of being free from colorblindness (\(70.4\%,N=85,445\)) is the major factor reducing the suitable worker pool. Despite that, this criterion is necessary to exclude variations that may affect the results related to color scheme design choices. Besides these criteria, we do not impose further restrictions on gender, age, geographic location, or education level.
Both studies begin with a consent form, a background information form, and a subjective literacy assessment form [14]. The subjective graph literacy (SGL) assessment consists of ten questions, each asking participants to rate their level of competence in chart reading and reliance on graphical information on a six-point scale. The assessment is empirically tested to have a strong positive correlation with the objective graph literacy (OGL) [13] score.
### Experiment One: Learning Effects of Explanations
We are interested in testing the effectiveness of explanations in educating readers to identify charts that violate visualization guidelines. In this experiment, we have set up five conditions: (1) Short Text, (2) Short Text + Highlighting, (3) Short Text + Annotation, (4) Short Text + Correction, and (5) Short Text + Correction + Explorable Explanations. The Short Text condition serves as a baseline for comparison with the Highlighting, Annotation, and Correction conditions. Condition 5 requires combining Explorable Explanations with Correction because the corrected version of the chart is used to guide participants to explore the explanation.

Fig. 2: Examples of implemented visual explanation methods. (a) Correcting a rainbow color choropleth to a sequential color scheme. (b) Redrawing a horizontal line chart with categorical variables into a bar chart. (c) Highlighting the improperly configured axis that downplays the data variations. (d) Using lines to annotate the mismatch between bar lengths and data values in the chart. (e) Annotating to emphasize the impact of an improperly configured axis that minimizes data fluctuations. (f) Using arrows to emphasize the irrelevance of sharp inclines in a line chart with categorical variables. (g) Pointing out the confusion caused by distinct values being represented by similar hues in a rainbow color scheme. (h) Overlaying the bottom pie chart slice with the top pie chart slice that has a downplayed area caused by the 3D perspective effect.
#### 5.1.1 Experiment Procedures
The study is conducted using a between-subject design in three phases, including a pre-intervention testing phase, an intervention phase, and a post-intervention testing phase.
After collecting the basic information and completing the SGL assessment, participants will first perform a test to gauge their ability to distinguish between misleading visualizations and those that accurately represent data. The test comprises 30 visualizations, including three misleading charts for each of the five issues and their respective corrected versions, _i.e._, the first set described in Sec. 4.1. The order of the charts is randomized across participants. Participants answer the question with one of the options "misleading," "accurate," or "I am not sure." The screenshots of a sample test are included in the supplemental materials and are available on OSF.
In the intervention phase, participants view a series of five explanations constructed according to their assigned conditions. Participants are asked to rate the helpfulness of the explanation and elaborate on their rating. Fig. 4 shows the screenshot of the participants' interface in Condition 5. The order of the explanations on different issues is randomized across participants. After reading the explanations, participants are required to complete the post-intervention test, which consists of a different set of 30 visualizations with a similar composition to the pre-intervention test, _i.e._, the second set described in Sec. 4.1. Upon completion, participants are thanked and given an opportunity to report any comments or issues they encountered during the experiment.
#### 5.1.2 Hypotheses
In Experiment One, we want to test the effectiveness of explanations on participants' ability to identify charts that violate visualization guidelines. We hypothesize that the intervention, _i.e._, exposure to the explanations, has an effect on participants' ability to distinguish misleading charts. Therefore, our first hypothesis is: **Hypothesis 1.1 The correctness in identifying misleading charts and accurate charts is higher in the post-intervention phase, regardless of the explanation method.**
Secondly, Explorable Explanations has demonstrated its effectiveness in illustrating the linkage between variable parameters and their impact on outcomes. In the context of visualizations, these parameters involve design decisions such as the start and end points of the axis, or the color scheme applied in the chart. We posit that Explorable Explanations can also be used effectively to explain visualization guidelines. Hence, our second hypothesis is: **Hypothesis 1.2 The Explorable Explanations condition has a greater effect on improving the correctness in the post-intervention phase.**
#### 5.1.3 Participants
For each condition, we recruited 25 participants on the Prolific platform according to the criteria described at the beginning of Sec. 5. In total, 125 participants took part in the experiment. Among the participants, 57.6% identified as male, 40.8% as female, and 1.6% as other. The reported age ranged from 19 to 54 (\(age_{median}:24,age_{mean}:26.2,age_{std}:6.06\)). The reported highest completed education levels were Bachelor's degree (41.6%), some college credit (23.2%), high school graduation (21%), Master's degree (12.8%), and Doctorate's degree (1.6%). The most common occupation was student (\(40\%,N=50\)); the rest were mostly white-collar workers, except a cooking assistant, receptionist, warehouse worker, dentist, and military personnel. The participants' nationalities were mainly from Europe (67.2%), with the remainder from Africa (18.4%), North America (9.6%), South America (2.4%), and Asia (2.4%). The SGL assessment (10-item on a six-level scale) had a median of 4.4 (\(SGL_{mean}:4.32,SGL_{std}:0.86,SGL_{min}:2.5,SGL_{max}:5.9\)). The median completion time for Experiment One was 23.78 minutes. The study was advertised to workers as a 25-minute study with a compensation of GBP 3.75 (\(\sim\)USD 4.63), translating to an effective hourly rate of GBP 9 (\(\sim\)USD 11). Two participants suspected of answering randomly (\(15\pm 1\) out of 30 correct in both pre- and post-intervention tests) were identified, and their submissions were removed from the study.
#### 5.1.4 Results
The median number of correctly answered questions on the pre-intervention test was 19 out of 30 (\(63.3\%,correct_{mean}:18.69,correct_{sd}:3.38,correct_{min}:11,correct_{max}:27,N=125\)), while the post-intervention median was 27 (\(90\%,correct_{mean}:26.02,correct_{sd}:3.44,correct_{min}:16,correct_{max}:30,N=125\)). An ANOVA test comparing the two measures rejected the null hypothesis for hypothesis 1.1, indicating a significant positive change in participants' ability to correctly identify misleading and accurate charts in the post-intervention test (\(F(1,248)=288,p<0.0001\)). Fig. 5 shows the confidence interval plots for each condition and the overall results.

Fig. 3: Examples of implemented Explorable Explanations. (a) By adjusting the slider, readers can experiment with different values of the y-axis starting point to see the changes in the bar lengths. (b) Readers can set the y-axis range by adjusting the two-sided slider to observe the changes in the line chart. (c) The radio buttons on the right let readers experiment with different chart settings. (d) Readers can choose different color schemes by clicking the radio buttons. (e) Readers can change the tilt angle of the pie chart to see the changes in the slice area.
Additionally, we observed the median precision, recall, and F1 scores for the pre-intervention phase as 0.71, 0.53, and 0.61, respectively, and the post-intervention phase as 0.92, 0.93, and 0.91. In the pre-intervention test, despite instructions specifying a \(15:15\) ratio between misleading and accurate visualizations, participants were more likely to identify charts as accurate rather than misleading, with a ratio of \(10.8:17.4\), suggesting they only labeled charts as misleading when confident in their decision. This ratio shifted to \(15.1:14.4\) in the post-intervention test.
We conducted post hoc pairwise t-tests with Bonferroni correction between Condition 5 (Text + Correction + Explorable Explanations) and other conditions. There was a significant difference when compared to Condition 3 (Text + Annotation, \(p:0.008\)). However, no significant differences were observed when compared with other conditions. Therefore, the null hypothesis of hypothesis 1.2 is not rejected.
Aside from Condition 5, it is worth noting that Condition 3 (Text + Annotation) showed lower improvement compared to the other conditions with visual explanations: vs. Condition 2 (Text + Highlighting, \(p:0.066\)), vs. Condition 4 (Text + Correction, \(p:0.020\)), and vs. Condition 5 (Text + Correction + Explorable Explanations, \(p:0.008\)). Despite that, no significant difference was found when compared to baseline Condition 1 (Text Only, \(p:1.0\)).
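For reproducibility, the comparisons reported in this subsection could be run with standard Python tooling; the sketch below assumes a hypothetical per-participant table with columns `condition`, `pre`, and `post` (these names and the file path are ours, not part of the released materials).

```python
from itertools import combinations

import pandas as pd
from scipy import stats

# Hypothetical per-participant results; columns: condition, pre, post.
df = pd.read_csv("experiment_one_scores.csv")

# ANOVA comparing pre- vs. post-intervention correctness; with two
# groups this is equivalent to an independent-samples t-test.
f_stat, p_value = stats.f_oneway(df["pre"], df["post"])
print(f"F(1, {2 * len(df) - 2}) = {f_stat:.1f}, p = {p_value:.2g}")

# Post hoc pairwise t-tests on the post-minus-pre improvement per condition,
# Bonferroni-corrected for the number of pairwise comparisons.
gains = {cond: grp["post"] - grp["pre"] for cond, grp in df.groupby("condition")}
pairs = list(combinations(gains, 2))
for a, b in pairs:
    t_stat, p_raw = stats.ttest_ind(gains[a], gains[b])
    print(f"{a} vs {b}: corrected p = {min(1.0, p_raw * len(pairs)):.3f}")
```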
### Experiment Two: Persuasive Effects of Explanations
One of the goals of this study is to inform the future development of visualization tools by suggesting chart fixes that align with visualization guidelines. We designed Experiment Two to simulate a situation where participants are persuaded to abandon their choice of a misleading chart and switch to one that aligns with visualization guidelines. In this experiment, we established five conditions: (1) Correction only, (2) Correction + Short Text, (3) Correction + Short Text + Highlighting, (4) Correction + Short Text + Annotation, and (5) Correction + Short Text + Explorable Explanations. Since we suggest a fix to users, the base condition shows only the corrected version. Condition 2 adds on the base condition with Short Text explanations. Conditions 3, 4, and 5 provide further visual explanations.
#### 5.2.1 Experiment Procedures
Experiment Two employs a between-subject design with only two phases. As in Experiment One, participants begin by completing basic information and an SGL assessment. The first phase includes 15 this-or-that questions, asking participants to choose one of two presented chart designs as the better one. One chart in each pair exhibits one of the five misleading issues, while the other is a corrected version. Each issue has three pairs of charts in the question set. The visualization pairs and their order within the pair are randomized.
Based on answers from the first phase, the chosen misleading charts are presented again in the second phase, accompanied by explanations of why the selected charts are misleading and why the alternative charts are better. Participants are asked to accept or reject the suggestion, or choose neither option. They must also explain their choice and rate their confidence on a five-option Likert scale. Fig. 6 shows the screenshot of the interface displayed to the participants in Condition 5. After completing the experiment, participants are thanked and allowed to report any comments or issues they encountered.

Fig. 4: Screenshot of Experiment One Condition 5 (Short Text + Correction + Explorable Explanation). The other conditions are similar to this interface. Condition 1 has only the text explanation. Conditions 2 and 3 have the text explanations and Highlighting or Annotation but not the correction. Condition 4 has the same interface but not the explorable view.

Fig. 5: Confidence intervals of correct answers in the pre- and post-intervention tests. Participants performed significantly better after reading any of the explanations. However, we did not observe significant differences across conditions.
#### 5.2.2 Hypotheses
Experiment Two aims to test whether participants might change their minds and accept the suggested chart version. We hypothesize that providing an explanation alongside the corrected chart is more convincing than presenting only the corrected chart. Thus, our first hypothesis is: **Hypothesis 2.1 The participants in conditions with more than just the corrected chart will have a higher acceptance rate for the suggested chart.**
Secondly, similar to Experiment One, we believe that people's strong preferences for rainbow colors and 3D pie charts make it difficult to persuade them to choose less visually appealing monotonic sequential colors and 2D pie charts. Our second hypothesis is: **Hypothesis 2.2 The acceptance rates across different issue types will exhibit significant differences regardless of explanation methods.**
Lastly, we have observed the effectiveness of Explorable Explanations in clarifying complex concepts across various subjects, from mathematics to social sciences. We contend that applying Explorable Explanations will help communicate visualization guidelines to participants and persuade them to accept the suggested chart design. Our third hypothesis is: **Hypothesis 2.3 The acceptance rate in the Explorable Explanations condition will be higher than in other conditions across different issue types.**
#### 5.2.3 Participants
For each of the five conditions, we recruited 25 participants through the Prolific platform based on the criteria described at the beginning of Sec. 5. Participants who participated in Experiment One were excluded from the worker pool of Experiment Two. In total, 125 participants took part in the experiment.
Participants in the second experiment were 48.8% male, 50.4% female, and 0.8% other, and their age ranged from 18 to 55 (\(age_{median}:25,age_{mean}:27.2,age_{std}:7.78\)). The reported highest completed education levels included Bachelor's degree (47.2%), some college credit (24%), high school graduation (14.4%), Master's degree (11.2%), and Doctorate's degree (3.2%). The most common occupation was student (31%, N=38), with the remainder mostly comprising white-collar workers, with some exceptions such as homemakers and bus drivers. Participants' nationalities were mainly European (50.4%) and African (28.8%), with the remainder being North American (16.8%), Asian (2.4%), South American (0.8%), and Oceanic (0.8%). The SGL assessment (10-item on a six-level scale) showed a median of 4.5 (\(SGL_{mean}:4.4,SGL_{std}:0.81,SGL_{min}:2.0,SGL_{max}:6.0\)). The median completion time for Experiment Two was 13.57 minutes. The study was advertised as a 15-minute study with a GBP 2.25 (\(\sim\)USD 2.78) compensation, yielding an effective hourly rate of GBP 9 (\(\sim\)USD 11).
#### 5.2.4 Results
The median number of correctly answered questions in the first phase was 11 out of 15 (\(73.3\%,correct_{mean}:10.66,correct_{std}:2.68,correct_{min}:4,correct_{max}:15,N=125\)). This result is higher than the median in Experiment One, which was \(63.3\%\). When presented with two versions of the charts, participants were able to distinguish the misleading chart better and choose the correct one, as shown by the ANOVA test (\(F(1,248)=21.67,p<0.001\)).
Since the number of incorrect answers in phase one varied among participants, the number of visualization pairs presented to each participant also differed. The same applied to issue types. While 542 explanations were presented across all participants, only 17 were related to Inappropriate Axis Range, and 26 were concerned with Inappropriate Use of Line Chart. Due to the limited sample size, we excluded these two issues from the following analysis.
We calculated the acceptance rate as \(\frac{A}{S}\), where \(A\) is the number of accepted suggestions, and \(S\) is the number of suggestions presented to the participant. Participants who received no suggestions, _i.e._, those who answered correctly on all nine questions, were not counted. In total, 17 participants scored full marks: 4 for Condition 1, 3 for Conditions 2 and 5, 2 for Condition 3, and 5 for Condition 4. The overall acceptance rate across participants was 60.4% (\(N=108\)).

Fig. 6: Screenshot of Experiment Two Condition 5 (Correction + Short Text + Explorable Explanations). The other conditions are similar to this interface. Condition 1 has only the correction. Condition 2 has the correction and the text explanations. Conditions 3 and 4 have the same interface, replacing the explorable view with Highlighting or Annotation.
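A short sketch of how the acceptance rates above could be tabulated, assuming a hypothetical decision log with one row per presented suggestion (the file path and column names are illustrative):

```python
import pandas as pd

# Hypothetical phase-two log; columns: participant, condition, issue, accepted.
log = pd.read_csv("experiment_two_decisions.csv")

# Per-participant acceptance rate A/S; participants who received no
# suggestions never appear in the log and are thus excluded automatically.
per_participant = log.groupby("participant")["accepted"].mean()
print("overall acceptance rate:", round(per_participant.mean(), 3))

# Breakdowns by condition and by issue type, as analyzed above.
print(log.groupby("condition")["accepted"].mean())
print(log.groupby("issue")["accepted"].mean())
```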
A post hoc pairwise t-test with Bonferroni correction revealed no significant differences across conditions. The confidence interval plot in Fig. 7 showed a high data variation. It remains to be seen whether adding explanations on top of presenting only the corrected version is more convincing or has no effect. Therefore, we cannot reject the null hypothesis of Hypothesis 2.1. However, the Annotation condition exhibited a high mean acceptance rate, which could be a hint for further investigation.
Across different issues, Truncated Axis had a significantly higher acceptance rate than the other two issues, vs. Inappropriate Color Scheme (\(p:0.007\)) and vs. 3D pie charts (\(p:0.046\)). Since we analyzed only three issues after excluding the two issues of limited sample size, we cannot determine whether participants were more convinced by the explanations on Truncated Axis or were simply reluctant to accept the corrected versions. Nonetheless, there was a difference in the acceptance rate across issue types, leading us to reject the null hypothesis of Hypothesis 2.2.
Since Hypothesis 2.1 is not supported, and Hypothesis 2.3 is a stronger claim than Hypothesis 2.1, Hypothesis 2.3 is also not supported. We cannot conclude that Explorable Explanations has stronger persuasive power in convincing users to change their design choices.
The experiment data and processing scripts can be found on OSF.
## 6 Discussion and Future Work
Our data found little to no correlation between the SGL assessment score and the accuracy of identifying misleading charts, neither in overall accuracy nor accuracies of each individual issue type (\(0<r<0.2\)). The questions in the SGL assessment focus on the ability to interpret accurate charts but do not assess the participants' ability to question the validity of the charts. In the OGL assessment [13], there are questions where the correct answer is "cannot infer from the chart," but this aspect of critical chart reading is neglected in SGL. Recently, Ge _et al._ have compiled a set of 45 assessment questions designed to evaluate critical thinking abilities when confronted with misleading visualizations [15]. We recommend that future studies on this topic consider modifying the SGL assessment by including one or two questions to address the weakness of SGL.
In the second phase of Experiment Two, we collected participants' justifications for why they chose to keep their selection of a misleading chart over the accurate one. We coded the responses and analyzed the results. 58 out of 186 responses rejected the correction on Truncated Axis. Participants found that the lengths of the bars without truncating the axis were too similar and difficult to compare (\(79.3\%,N=46\)). Even though they were aware of the problem of exaggeration, they still preferred the chart with a truncated axis. Ritchie _et al._ also investigated the value of the truncated axis over the one that shows the whole axis. They designed an interaction technique to allow readers to switch between the whole and the truncated axis [33].
61 out of 138 responses rejected the correction on Inappropriate Color Scheme. Participants reported that sequential colors were difficult to read, and rainbow colors offered better distinguishability and chart readability (\(90.2\%,N=55\)). One participant rejected the correction: "It is much easier to differentiate between rainbow colours than different shades of one colour." There were also responses stating that rainbow colors were helpful to colorblind people (\(6.6\%,N=4\)), even though rainbow colors are, in fact, very difficult to read for people with color weaknesses. 63 out of 175 responses rejected the correction on 3D pie charts. Participants preferred 3D pie charts for aesthetic reasons (\(41.3\%,N=26\)) and believed that since the values were labeled, readers could always refer to the labels and not be misled by them (\(34.9\%,N=22\)). One participant wrote, "Your explanation might be true. However, the numbers are clearly marked."
Some justifications are valid and worth considering when constructing explanations to address misunderstandings. The preference for high contrast (between colors or bar length) over accurate interpretation may be a design trade-off from the participants' perspective. Providing better alternatives and explaining the implications to users or readers can help them make more informed design decisions. Although the guidelines for visualization design are generally considered best practices, their application is subject to the context and the specific purpose of the visualization. If deviations from these guidelines can be justified logically, they can benefit the task in question. The active field of research aiming to understand human perception and the interpretation of visualizations is persistently questioning current practices, unveiling their shortcomings, and improving both visualization tools and guidelines.
The main idea of this work was inspired by the Explorable Explanations developed for topics across various fields, and by the conjecture that applying this approach to explain visualization guidelines could be valuable. The work by Hopkins _et al._ implemented and evaluated the Highlighting explanation method [17]. The experiment result is similar to our results in this study, showing no significant improvement when adding visual explanations on top of text explanations. The work by Fan _et al._ implemented the Correction explanation, and their evaluation supported the notion that people with access to explanations perform better at identifying misleading visualizations [12]. However, they did not compare it with the base case of text-only explanations. It remains an open question: Is text explanation alone enough to explain misleading visualizations? We must admit that we did not find evidence to reject this idea. Imagine that if the classic books by Huff [18] and Tufte [36] had used only text explanations, they might not have inspired many other authors to publish titles or blog posts on the topic. Consequently, the public might have become even less aware of the pitfalls of misleading charts.
One of the primary objectives of this study is to inform the future development of visualization recommendation systems and linting systems regarding the design of user interfaces when the system attempts to suggest a better alternative visualization to the user. Although our experiment results did not conclude that Explorable Explanations has greater persuasive power over other explanation methods, it is worth noting that the design space of Explorable Explanations is much larger and highly dependent on the explanation authors' creativity. Further studies may yield more conclusive results.
## 7 Conclusion
This study identified six distinct explanation techniques through an extensive review of online resources, books, and a design workshop. We applied these methods to five chart-related issues, incorporating the concept of Explorable Explanations. Our two crowdsourced evaluation experiments revealed that exposure to explanations enhanced participants' ability to recognize misleading charts. However, no significant differences were observed among the various explanation techniques. We discovered that participants were inclined to accept more than 60% of the proposed adjustments in the persuasiveness assessment.
Nevertheless, we found no significant differences among the explanation methods in convincing participants to accept the modifications. It remains a challenge to develop more effective explanation techniques for educating and convincing the general audience on the application of visualization best practices. At a minimum, audiences need to be made aware of the potential pitfalls and benefits when they decide not to follow these guidelines.
Fig. 7: Confidence intervals of the acceptance rate in phase 2 of Experiment Two, _i.e._, the percentage of accepting the suggested correct version of the chart after viewing the explanation.
2303.00700 | The angular derivative problem for petals of one-parameter semigroups in the unit disk | We study the angular derivative problem for petals of one-parameter semigroups of holomorphic self-maps of the unit disk. For hyperbolic petals we prove a necessary and sufficient condition for the conformality of the petal in terms of the intrinsic hyperbolic geometry of the petal and the backward dynamics of the semigroup. For parabolic petals we characterize conformality of the petal in terms of the asymptotic behaviour of the Koenigs function at the Denjoy-Wolff point. | Pavel Gumenyuk, Maria Kourou, Oliver Roth | 2023-03-01T17:47:56Z | http://arxiv.org/abs/2303.00700v2 | # The angular derivative problem for petals of one-parameter semigroups in the unit disk
###### Abstract.
We study the angular derivative problem for petals of one-parameter semigroups of holomorphic self-maps of the unit disk. For hyperbolic petals we prove a necessary and sufficient condition for the conformality of the petal in terms of the intrinsic hyperbolic geometry of the petal and the backward dynamics of the semigroup. For parabolic petals we characterize conformality of the petal in terms of the asymptotic behaviour of the Koenigs function at the Denjoy - Wolff point.

Key words and phrases: Angular Derivative, One-parameter Semigroup, Petals, Backward Orbits, Repulsive Fixed Points.

2020 Mathematics Subject Classification: Primary 37F44, 30D05, 30D40, 30C35; Secondary 37C10, 37C25, 30C80.

Partially supported by the Alexander von Humboldt Foundation.
Key words and phrases:Angular Derivative, One-parameter Semigroup, petals, Backward orbits, Repulsive Fixed Points 2020 Mathematics Subject Classification: Primary 37F44, 30D05, 30D40, 30C35; Secondary 37C10, 37C25, 30C80 Partially supported by the Alexander von Humboldt Foundation.
## 1. Introduction
One-parameter semigroups of holomorphic self-maps of the unit disk \(\mathbb{D}:=\{z\in\mathbb{C}\,:\,|z|<1\}\) have continuously been studied for more than a century, but many key properties regarding their boundary and asymptotic behaviour have been established only in the last decade. Apart from being interesting in their own right, one-parameter semigroups of the unit disk act as pivotal role models for other complex dynamical systems involving holomorphic functions of one or several complex variables -- in the continuous as well as the discrete setting. A comprehensive overview of one-parameter semigroups of the unit disk spanning from the basic theory to the numerous recent achievements can be found in the monograph [14].
Traditionally, understanding the forward dynamics of the orbits of a dynamical system has been of particular interest. However, in recent years the study of the _backward_ dynamics of one-parameter semigroups, see e.g. [13, 23, 30], as well of discrete complex dynamical systems (in one and several complex variables), see e.g. [2, 4, 11, 33], has become another focal point of research.
One of the most striking results about the forward dynamics of one-parameter semigroups of the unit disk \(\mathbb{D}\) is the continuous version of the celebrated Denjoy - Wolff Theorem. It guarantees that all forward orbits of the semigroup converge to the same point \(\tau\in\mathbb{D}\cup\partial\mathbb{D}\), the Denjoy - Wolff point of the semigroup or _DW-point_, for short. In contrast, the backward flow of a one-parameter semigroup is defined only on a proper subset \(\mathcal{W}\) of \(\mathbb{D}\), the so-called _backward invariant set_ of the semigroup.
We note that the second named author and Zarvalis showed recently [30, Theorem 1.3] that as \(t\to-\infty\),
\[\lambda_{\mathbb{D}}(\phi_{t}(z))\,|\phi_{t}^{\prime}(z)|\nearrow\lambda_{\Delta} (z)\quad\text{locally uniformly in $\Delta$.} \tag{1.2}\]
Hence, Theorem 1.1 relates the rate of convergence in (1.2) with the conformality of \(\Delta\) at \(\sigma\).
_Remark 1.2_.: It is easy to construct examples of hyperbolic petals \(\Delta\) such that \(\partial\Delta\) coincides in a neighbourhood of its \(\alpha\)-point \(\sigma\) with \(\partial\mathbb{D}\), see e.g. [13, Examples 7.4 and 7.8]; clearly, in such a case \(\Delta\) is conformal at \(\sigma\). An example of a _non-conformal_ hyperbolic petal based on certain subtle properties of the hyperbolic distance was given in [13, Sect. 8]. In Remark 2.22 we will describe a simple device which allows a painless construction of numerous examples of conformal as well as non-conformal hyperbolic petals.
The proof of Theorem 1.1 is long and is therefore divided into several steps. We shall require various tools from the general theory of one-parameter semigroups of the unit disk, in particular the Berkson - Porta theory, holomorphic models and pre-models and basic properties of petals. These tools are collected and explained in a preliminary Section 2. This section also thoroughly introduces the angular derivative problem. In Section 3 we state and prove two technical, but crucial auxiliary results: an integral criterion for conformality of domains which are starlike at infinity and a lemma on convergence of conformal mappings on the boundary. The proof of Theorem 1.1 is given in Section 4 and is divided into several steps. In Subsection 4.1 we state a conformality criterion for the Koenigs domain \(\Omega\) of the semigroup (Theorem 4.1), and show how it implies Theorem 1.1. In Subsection 4.2 we prove the if-part of Theorem 4.1 by establishing in Theorem 4.2 a pointwise lower bound, given in euclidean terms, for the quotient of the hyperbolic densities of a domain \(\Omega\) which is starlike at infinity and a maximal strip contained in \(\Omega\). The proof of this lower bound uses a mixture of tools from geometric function theory such as monotonicity of hyperbolic densities, Green's function, harmonic measure and kernel convergence. The only-if part of Theorem 4.1 is proved in Subsection 4.3 by comparing the Koenigs domain \(\Omega\) of \((\phi_{t})\) with carefully chosen slit domains and using potential-theoretic tools. The proof of Theorem 4.1 is finished in Subsection 4.4 where we show that the previously obtained pointwise estimates in fact hold locally uniformly. In Section 5 we state and prove a conformality criterion for the case of a parabolic petal, see Theorem 5.1. In the concluding Section 6 we discuss how the results of this paper are related to several other recent results, in particular the conformality conditions obtained by Betsakos and Karamanlis [10]. In addition, we indicate some potential alternative approaches to the conformality problem for petals, and raise several questions that remain open.
## 2. Preliminaries
### One-parameter semigroups in the unit disk
Here we briefly recall the main definitions and basic facts concerning one-parameter semigroups of holomorphic functions. For more details and proofs of the statements cited in this section we refer interested readers to the monographs [14, 22, 40] and to [1, Chapter 4].
For a domain \(D\subset\mathbb{C}\) and a set \(E\subset\mathbb{C}\) we denote by \(\mathsf{Hol}(D,E)\) the set of all holomorphic functions in \(D\) with values in \(E\). As usual, we endow \(\mathsf{Hol}(D,E)\) with the topology of locally uniform convergence. Then \(\mathsf{Hol}(D,D)\) becomes a topological semigroup w.r.t. the composition operation \((\phi,\psi)\mapsto\phi\circ\psi\).
**Definition 2.1**.: A one-parameter semigroup in the unit disk \(\mathbb{D}\) is a continuous semigroup homomorphism \([0,+\infty)\ni t\mapsto\phi_{t}\in\mathsf{Hol}(\mathbb{D},\mathbb{D})\) from the semigroup \(\big{(}[0,+\infty),+\big{)}\) with the Euclidean topology to the semigroup \(\mathsf{Hol}(\mathbb{D},\mathbb{D})\).
Equivalently, a family \((\phi_{t})_{t\geqslant 0}\subset\mathsf{Hol}(\mathbb{D},\mathbb{D})\) is a one-parameter semigroup if and only if it satisfies the following three conditions: (i) \(\phi_{0}=\mathrm{id}_{\mathbb{D}}\); (ii) \(\phi_{s}\circ\phi_{t}=\phi_{s+t}\) for any \(s,t\geqslant 0\); (iii) \(\phi_{t}\to\mathrm{id}_{\mathbb{D}}\) in \(\mathsf{Hol}(\mathbb{D},\mathbb{D})\) as \(t\to 0^{+}\). Thanks to Montel's normality criterion, see e.g. [24, § II.7, Theorem 1], the continuity condition (iii) is equivalent to the pointwise convergence: \(\phi_{t}(z)\to z\) as \(t\to 0^{+}\) for each \(z\in\mathbb{D}\). At the same time, in the presence of (i) and (ii), condition (iii) is equivalent to a much stronger property: the map \((z,t)\mapsto\phi_{t}(z)\) is jointly real-analytic in \(\mathbb{D}\times[0,+\infty)\). Moreover, every one-parameter semigroup in \(\mathbb{D}\) represents the semiflow of a (uniquely defined) holomorphic vector field \(G:\mathbb{D}\to\mathbb{C}\), known as the infinitesimal generator of \((\phi_{t})\). This means that for each fixed \(z\in\mathbb{D}\), the function \(t\mapsto\phi_{t}(z)\) is the unique solution to the initial value problem
\[\frac{\mathrm{d}}{\mathrm{d}t}\phi_{t}(z)=G\big{(}\phi_{t}(z)\big{)},\quad t \geqslant 0;\quad\phi_{0}(z)=z. \tag{2.1}\]
Using conditions (i) and (ii) one can easily deduce from (2.1) the following PDE:
\[\frac{\partial\phi_{t}}{\partial t}=G(z)\phi_{t}^{\prime}(z),\quad t \geqslant 0,\ z\in\mathbb{D}. \tag{2.2}\]
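Indeed, (2.2) follows in one line from the semigroup property: writing \(\phi_{t+s}=\phi_{t}\circ\phi_{s}\) and differentiating with respect to \(s\) at \(s=0\), using (2.1) and the chain rule, we obtain

\[\frac{\partial\phi_{t}}{\partial t}(z)=\frac{\mathrm{d}}{\mathrm{d}s}\Big|_{s=0}\phi_{t}\big(\phi_{s}(z)\big)=\phi_{t}^{\prime}\big(\phi_{0}(z)\big)\,G\big(\phi_{0}(z)\big)=G(z)\,\phi_{t}^{\prime}(z).\]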
_Remark 2.2_.: It follows from the standard uniqueness results for solutions of ODEs that every element of a one-parameter semigroup is an injective map.
_Remark 2.3_.: The definition of a one-parameter semigroup can be literally extended to an arbitrary domain \(D\subset\mathbb{C}\). However, this yields an interesting class of objects only if \(D\) is conformally equivalent to \(\mathbb{D}\) or to \(\mathbb{D}^{*}:=\mathbb{D}\setminus\{0\}\), with the latter case easily reduced to the former one; see e.g. [14, §8.4]. Clearly, given that \(D\) admits a conformal mapping \(f\) onto \(\mathbb{D}\), a family \((\phi_{t})_{t\geqslant 0}\subset\mathsf{Hol}(D,D)\) is a one-parameter semigroup in \(D\) if and only if the mappings \(f\circ\phi_{t}\circ f^{-1}\) form a one-parameter semigroup in \(\mathbb{D}\).
Another way to modify the definition of a one-parameter semigroup is to allow negative values of the parameter \(t\). In such a case, we have \(\phi_{t}\circ\phi_{-t}=\mathrm{id}_{\mathbb{D}}\) for any \(t\in\mathbb{R}\) and hence we end up with a one-parameter group of automorphisms \((\phi_{t})_{t\in\mathbb{R}}\).
The classical representation formula due to Berkson and Porta [6] characterizes infinitesimal generators in \(\mathbb{D}\) as functions of the form
\[G(z)=(\tau-z)(1-\overline{\tau}z)p(z),\quad z\in\mathbb{D}, \tag{2.3}\]
where \(p\) is a holomorphic function in \(\mathbb{D}\) with \(\operatorname{Re}p\geqslant 0\) and \(\tau\) is a point in the closure of \(\mathbb{D}\). The function \(p\) is uniquely determined by \(G\). The same concerns \(\tau\) unless \(G\equiv 0\).
In order to exclude from consideration certain degenerate cases, we accept the following
**Assumption:** the one-parameter semigroups \((\phi_{t})\) we consider in this paper do not extend to one-parameter groups of automorphisms.
This assumption is equivalent to requiring that there exists no \(t>0\) such that \(\phi_{t}\) is an automorphism.
The distinguished point \(\tau\) in the Berkson - Porta representation formula (2.3) has a very clear dynamic meaning: \(\phi_{t}(z)\to\tau\) locally uniformly in \(\mathbb{D}\) as \(t\to+\infty\). If \(\tau\in\mathbb{D}\), then
\[\phi_{t}(\tau)=\tau,\quad\phi_{t}^{\prime}(\tau)=e^{\lambda t},\;\;\lambda:=G^{\prime}(\tau),\quad\text{for all}\;\;t\geqslant 0, \tag{2.4}\]
with \(\operatorname{Re}\lambda<0\).
If \(\tau\in\partial\mathbb{D}\), then (2.4) holds in the sense of angular limits. Here and in what follows, given a holomorphic function \(f:\mathbb{D}\to\mathbb{C}\) and a point \(\zeta\in\partial\mathbb{D}\), by \(f(\zeta)\) we denote the angular limit \(\angle\lim_{z\to\zeta}f(z)\in\widehat{\mathbb{C}}:=\mathbb{C}\cup\{\infty\}\). Similarly, if \(f(\zeta)\) does exist finitely, then by \(f^{\prime}(\zeta)\) we denote the angular derivative
\[f^{\prime}(\zeta):=\angle\lim_{z\to\zeta}\frac{f(z)-f(\zeta)}{z-\zeta}.\]
_Remark 2.4_.: One special important case, in which the existence of the angular derivative is guaranteed, is when \(f\in\mathsf{Hol}(\mathbb{D},\mathbb{D})\) and \(f(\zeta)\) exists and belongs to \(\partial\mathbb{D}\), see e.g. [35, Proposition 4.13 on p. 82]. In this case, \(\zeta\) is called a contact point for the self-map \(f\); the angular derivative \(f^{\prime}(\zeta)\) at a contact point does not vanish, but it can be infinite.
A boundary fixed point of \(f\in\mathsf{Hol}(\mathbb{D},\mathbb{D})\) is a contact point \(\zeta\) such that \(f(\zeta)=\zeta\). A boundary fixed point (or more generally, a contact point) \(\zeta\) is said to be regular if \(f^{\prime}(\zeta)\neq\infty\). Boundary fixed points which are not regular are also called super-repulsive (or super-repelling) fixed points of \(f\). The angular derivative \(f^{\prime}(\zeta)\) at a boundary regular fixed point \(\zeta\) is a positive real number and further two subcases are distinguished: the boundary fixed point \(\zeta\) is repulsive (or repelling) if \(f^{\prime}(\zeta)>1\), while for \(f^{\prime}(\zeta)\in(0,1]\), it is called attracting.
_Remark 2.5_.: It is worth mentioning that for elements of one-parameter semigroups the angular limit \(\phi_{t}(\zeta)\) exists at _every_ point \(\zeta\in\partial\mathbb{D}\), see [18, 25]. Moreover, the orbit \(t\mapsto\phi_{t}(\zeta)\) is continuous for each \(\zeta\in\partial\mathbb{D}\). At the same time, the extensions of the holomorphic maps \(\phi_{t}(\cdot)\) to \(\partial\mathbb{D}\) by angular limits are not necessarily continuous on \(\partial\mathbb{D}\).
According to the Denjoy - Wolff Theorem, a self-map \(f\in\mathsf{Hol}(\mathbb{D},\mathbb{D})\setminus\{\mathsf{id}_{\mathbb{D}}\}\) either has an attracting fixed point \(\tau\in\partial\mathbb{D}\) and no fixed points in \(\mathbb{D}\), or it has a fixed point \(\tau\in\mathbb{D}\) and no attracting fixed points on \(\partial\mathbb{D}\). In both cases, \(\tau\) is unique and it is called the Denjoy - Wolff point (or DW-point for short) of the self-map \(f\).
From (2.4) it is clear that \(\tau\) in the Berkson - Porta representation formula (2.3) is the DW-point for each \(\phi_{t}\) with \(t>0\). If \(\tau\in\mathbb{D}\), then \((\phi_{t})\) is said to be elliptic. If \(\tau\in\partial\mathbb{D}\), then \(\lambda\leqslant 0\), and depending on whether \(\lambda<0\) or \(\lambda=0\), the one-parameter semigroup \((\phi_{t})\) is said to be hyperbolic or parabolic, respectively. By the continuous version of the Denjoy - Wolff Theorem, \(\phi_{t}(z)\to\tau\) locally uniformly in \(\mathbb{D}\) as \(t\to+\infty\).
Similarly to the DW-point, repulsive (and super-repulsive) fixed points are common for all elements of a one-parameter semigroup. More precisely, \(\sigma\in\partial\mathbb{D}\) is a repulsive (or super-repulsive) fixed point of \(\phi_{t}\) for some \(t>0\) if and only if it is a repulsive (resp., super-repulsive) fixed point of \(\phi_{t}\) for all \(t>0\); see e.g. [18].
Fixed points of a one-parameter semigroup can be characterized in terms of the infinitesimal generator. It is known [19] that \(\sigma\in\partial\mathbb{D}\) is a boundary regular fixed point of \((\phi_{t})\) if and only if \(G(\sigma)=0\) and \(\lambda:=G^{\prime}(\sigma)\) exists finitely. In such a case, \(\lambda\in\mathbb{R}\) and \(\phi_{t}^{\prime}(\sigma)=e^{\lambda t}\) for all \(t\geqslant 0\). Clearly, if \(\lambda>0\), then \(\sigma\) is a repulsive fixed point; otherwise, i.e. if \(\lambda\leqslant 0\), then \(\sigma\) is the DW-point of \((\phi_{t})\).
The following remark contains a useful construction indicating that every elliptic one-parameter semigroup, which is not a group, is associated with a unique non-elliptic one-parameter semigroup.
_Remark 2.6_.: Suppose that \((\phi_{t})\) is an elliptic semigroup with the DW-point \(\tau\in\mathbb{D}\). Then \((\phi_{t})\) can be regarded as a one-parameter semigroup in \(\mathbb{D}\setminus\{\tau\}\). Consider the covering map \(T\circ C:\mathbb{D}\to\mathbb{D}\setminus\{\tau\}\), where \(C(z):=\exp(-\frac{1+z}{1-z})\), \(T(w):=(w+\tau)/(1+\overline{\tau}w)\). It is known, see [25, Sect. 2], that there is a (unique) one-parameter semigroup \((\tilde{\phi}_{t})\) which is a lifting of \((\phi_{t})\) w.r.t. \(T\circ C\), i.e. such that \(\phi_{t}\circ T\circ C=T\circ C\circ\tilde{\phi}_{t}\) for all \(t\geqslant 0\). Further details and application of the above construction follow in the proof of Theorem 1.1.
### Holomorphic models and Koenigs function
It is known, see e.g. [14, Sect. 9.2] or [12], that any one-parameter semigroup admits a holomorphic model \((\Omega_{0},h,L_{t})\). This means that \(\Omega_{0}\subset\mathbb{C}\) is a simply-connected domain, referred to as the base space, \(h:\mathbb{D}\to\Omega_{0}\) is an injective holomorphic map, and \((L_{t})\) is a one-parameter group of holomorphic automorphisms of \(\Omega_{0}\) with the following two properties:
\[h\circ\phi_{t}\ =\ L_{t}\circ h,\qquad\text{for all }t\geqslant 0; \tag{2.5}\] \[\bigcup_{t\leqslant 0}L_{t}(\Omega)\ =\ \Omega_{0},\qquad\text{where } \Omega:=h(\mathbb{D}). \tag{2.6}\]
Up to a naturally defined isomorphism, a holomorphic model for a given one-parameter semigroup is unique.
The theory which shall be presented in the current and the upcoming sections can also be generalized (with appropriate modifications) to the case of an elliptic one-parameter semigroup, which is not an elliptic group. Taking Remark 2.6 into consideration, from this point onward, we can safely consider only non-elliptic one-parameter semigroups (i.e. hyperbolic or parabolic).
Non-elliptic one-parameter semigroups admit holomorphic models for which \(L_{t}(z):=z+t\), \(t\in\mathbb{R}\), and \(\Omega_{0}\) is the whole \(\mathbb{C}\) or a half-plane or a strip with \(\partial\Omega_{0}\) composed of one or two lines parallel to \(\mathbb{R}\). For such holomorphic models, equation (2.5) becomes Abel's functional equation
\[h\big{(}\phi_{t}(z)\big{)}=h(z)+t\quad\text{for all }\ z\in\mathbb{D}\ \text{ and all }\ t\geqslant 0. \tag{2.7}\]
The function \(h\) is called the Koenigs function of \((\phi_{t})\) and it is unique up to an additive constant. The set \(\Omega:=h(\mathbb{D})\subset\Omega_{0}\) is called the Koenigs (or sometimes, planar) domain of \((\phi_{t})\). Abel's
equation implies an important property of \(\Omega\): for any of its points \(w\), the Koenigs domain contains the ray \(\{w+t\colon t\geqslant 0\}\). Such domains are said to be starlike at infinity.
Many dynamical properties of \((\phi_{t})\) are encoded in the geometry of the corresponding Koenigs domain \(\Omega\). Moreover, any starlike-at-infinity domain \(\Omega\) different from the whole plane is the Koenigs domain of a non-elliptic one-parameter semigroup. This is often used to construct examples of one-parameter semigroups with given behaviour, see e.g. Remark 2.22 below.
**Definition 2.7**.: We denote by \(\mathbb{S}\) the "standard" horizontal strip \(\{z\colon|\operatorname{Im}z|<\frac{\pi}{2}\}\), and more generally we denote \(\mathbb{S}(a,b):=\{z\colon a<\operatorname{Im}z<b\}\) for \(a,b\in\mathbb{R}\) with \(a<b\). Let \(\Omega\) be the Koenigs domain of a non-elliptic one-parameter semigroup \((\phi_{t})\). A strip \(\mathbb{S}(a,b)\) contained in \(\Omega\) is said to be a maximal strip for \((\phi_{t})\) if \(\mathbb{S}(a,b)\subset\mathbb{S}(a^{\prime},b^{\prime})\subset\Omega\) holds only for \((a^{\prime},b^{\prime})=(a,b)\).
It is easy to see that the maximal strips defined above are connected components of the interior of \(\bigcap_{t\geqslant 0}(\Omega+t)\).
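For example, for the starlike-at-infinity domain \(\Omega:=\mathbb{S}\cup\{w\colon\operatorname{Re}w>0\}\) one computes

\[\bigcap_{t\geqslant 0}(\Omega+t)=\bigcap_{t\geqslant 0}\big(\mathbb{S}\cup\{w\colon\operatorname{Re}w>t\}\big)=\mathbb{S},\]

since a point outside \(\mathbb{S}\) would have to satisfy \(\operatorname{Re}w>t\) for every \(t\geqslant 0\). Hence \(\mathbb{S}\) is the unique maximal strip in this case.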
_Remark 2.8_.: It is known [17] that there exists a one-to-one correspondence between the repulsive fixed points \(\sigma\in\partial\mathbb{D}\) of \((\phi_{t})\) and the maximal strips in the Koenigs domain of \((\phi_{t})\). If \(S\) is a maximal strip for \((\phi_{t})\) and \(w\in S\), then \(h^{-1}(w+t)\) tends, as \(t\to-\infty\), to the corresponding repulsive fixed point \(\sigma\). Moreover, the width \(\nu(S)\) of the maximal strip \(S\) is related to the angular derivative at \(\sigma\): namely, \(\nu(S)G^{\prime}(\sigma)=\pi\).
### Backward orbits, invariant petals, and pre-models
In this section we follow the terminology from [13]. For the proofs of statements quoted below we refer the reader to the same source. Let us denote by \(\operatorname{d}_{D}\) the hyperbolic distance in a hyperbolic domain \(D\).
**Definition 2.9**.: ([13, Definition 3.1]) A continuous curve \(\gamma:[0,+\infty)\to\mathbb{D}\) is called a backward orbit of a one-parameter semigroup \((\phi_{t})\) if for any \(t>0\) and any \(s\in(0,t)\), we have \(\phi_{s}(\gamma(t))=\gamma(t-s)\). A backward orbit \(\gamma\) is said to be regular if \(\limsup_{t\to+\infty}\operatorname{d}_{\mathbb{D}}\bigl{(}\gamma(t),\gamma(t+1)\bigr{)}<+\infty\).
_Remark 2.10_.: Let \((\phi_{t})\) be a non-elliptic one-parameter semigroup in \(\mathbb{D}\). Fix \(z\in\mathbb{D}\). It is easy to see that the following three conditions are equivalent:
1. there exists a backward orbit \(\gamma\) with \(\gamma(0)=z\);
2. \(z\in\mathcal{W}:=\bigcap_{t\geqslant 0}\phi_{t}(\mathbb{D})\);
3. the line \(\{h(z)+t\colon t\in\mathbb{R}\}\), where \(h\) is the Koenigs function of \((\phi_{t})\), is contained in the Koenigs domain \(\Omega\) of \((\phi_{t})\).
If the above conditions are satisfied, then the backward orbit \(\gamma\) in (i) is unique and it is given by \(\gamma(t):=\phi_{t}^{-1}(z)=h^{-1}\bigl{(}h(z)-t\bigr{)}\) for all \(t\geqslant 0\). Moreover, this backward orbit \(\gamma\) is regular if and only if \(z\in\mathcal{W}^{\circ}\).
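To illustrate, consider a semigroup whose Koenigs domain is the domain \(\Omega=\mathbb{S}\cup\{w\colon\operatorname{Re}w>0\}\) from the example following Definition 2.7. Condition (iii) holds exactly when \(h(z)\in\mathbb{S}\), because the horizontal line through any point of \(\Omega\setminus\mathbb{S}\) leaves \(\Omega\) as \(\operatorname{Re}\) tends to \(-\infty\); consequently

\[\mathcal{W}=\mathcal{W}^{\circ}=h^{-1}(\mathbb{S}),\]

and every \(z\in h^{-1}(\mathbb{S})\) lies on the regular backward orbit \(\gamma(t)=h^{-1}\big(h(z)-t\big)\).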
_Remark 2.11_.: The negative iterates \(\phi_{-t}:=\phi_{t}^{-1}\), \(t>0\), are well-defined and holomorphic in \(\mathcal{W}^{\circ}\). Thus, for \(z\in\mathcal{W}^{\circ}\), the differential equations (2.1) and (2.2) are valid for all \(t<0\).
**Definition 2.12**.: The set \(\mathcal{W}\) in Remark 2.10 is called the backward invariant set of \((\phi_{t})\). Each non-empty connected component of \(\mathcal{W}^{\circ}\) is called a petal.
Every petal \(\Delta\) is a simply connected domain and \(\big{(}\phi_{t}|_{\Delta}\big{)}_{t\in\mathbb{R}}\) is a one-parameter group of automorphisms of \(\Delta\). The boundary of \(\Delta\) contains the DW-point \(\tau\) of \((\phi_{t})\). All regular backward orbits that lie in a petal \(\Delta\) converge to the same boundary fixed point of \((\phi_{t})\) which lies on the boundary of the petal. We call this unique limit point _the \(\alpha\)-point of the petal \(\Delta\)_. The following dichotomy holds:
* either the \(\alpha\)-point of the petal \(\Delta\) coincides with \(\tau\) and it is the only fixed point of \((\phi_{t})\) contained in \(\partial\Delta\);
* or \(\partial\Delta\) contains exactly two fixed points of the semigroup: the DW-point \(\tau\) and a repulsive fixed point \(\sigma\), which is the \(\alpha\)-point of the petal \(\Delta\).
The case (P) arises only if the one-parameter semigroup is parabolic. In this case, the image \(h(\Delta)\) of the petal \(\Delta\) w.r.t. the Koenigs function \(h\) is a half-plane bounded by a line parallel to \(\mathbb{R}\); it is maximal in the sense that there exists no half-plane \(H\neq h(\Delta)\) such that \(h(\Delta)\subset H\subset\Omega\).
In case (H), the petal \(\Delta\) is said to be hyperbolic and \(h(\Delta)\) coincides with the maximal strip corresponding to the repulsive fixed point \(\sigma\) in the sense of Remark 2.8. Moreover, there is a one-to-one correspondence\({}^{2}\) between the repulsive fixed points and the hyperbolic petals, as the pre-image \(h^{-1}(S)\) of any maximal strip \(S\) is a hyperbolic petal. In what follows, the hyperbolic petal corresponding to a given repulsive fixed point \(\sigma\) will be denoted by \(\Delta(\sigma)\).
Footnote 2: This one-to-one correspondence was discovered by Elin, Shoikhet and Zalcman in [23] for elliptic and hyperbolic one-parameter semigroups. A proof covering also the parabolic case can be found in [13, Sect. 4].
The Koenigs function can be regarded as a global change of variables reducing the dynamics of \((\phi_{t})\) to the canonical form \(w\mapsto w+t\). When studying dynamics of the one-parameter semigroup in a petal \(\Delta\), instead of the holomorphic model it is more convenient to work with the so-called pre-model. This notion has been introduced for discrete iteration in [32]. The definition below is a slight modification of that from [13, Definition 3.8] combined with [13, Remark 3.9]. We denote by \(\mathbb{H}\) the right half-plane \(\{z\in\mathbb{C}\,:\,\mathrm{Re}\,z>0\}\).
**Definition 2.13**.: Let \(\sigma\in\partial\mathbb{D}\) be a repulsive fixed point of a one-parameter semigroup \((\phi_{t})\) with associated infinitesimal generator \(G\). The triple \((\mathbb{H},\psi,Q_{t})\) is called a pre-model for \((\phi_{t})\) at \(\sigma\) if the following conditions are met:
* for each \(t\geqslant 0\), \(Q_{t}\) is the automorphism of \(\mathbb{H}\) given by \(Q_{t}(z):=e^{\lambda t}z\), where \(\lambda:=G^{\prime}(\sigma)\);
* the map \(\psi:\mathbb{H}\to\mathbb{D}\) is holomorphic and injective, \(\angle\lim_{w\to 0}\psi(w)=\sigma\), and \(\psi\) is isogonal at \(0\), i.e. \[\angle\lim_{w\to 0}\mathrm{Arg}\,\frac{1-\overline{\sigma}\psi(w)}{w}=0;\] (2.8)
* \(\psi\circ Q_{t}=\phi_{t}\circ\psi\) for all \(t\geqslant 0\).
_Remark 2.14_.: It is known [13, Theorem 3.10] that every one-parameter semigroup, at each repulsive fixed point \(\sigma\), admits a pre-model unique up to the transformation \(\psi(w)\mapsto\psi(cw)\), where \(c\) is an arbitrary positive constant. Moreover, \(\psi(\mathbb{H})\) is the hyperbolic petal \(\Delta(\sigma)\) with
\(\alpha\)-point \(\sigma\). The map \(\psi\) can be expressed via the Koenigs function \(h\) of \((\phi_{t})\). Namely, if the strip \(h(\Delta(\sigma))=\mathbb{S}(a,b)\), then the map \(\psi\) in the pre-model for \((\phi_{t})\) at \(\sigma\) is given by
\[\psi(w):=h^{-1}\big{(}\tfrac{b-a}{\pi}\log w+\tfrac{b+a}{2}i+s\big{)},\quad w\in\mathbb{H},\]
where \(s\) is an arbitrary real constant.
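As a quick consistency check of this formula (a sketch, combining Abel's equation (2.7) with the relation \(\nu(S)G^{\prime}(\sigma)=\pi\) from Remark 2.8, which gives \((b-a)\lambda=\pi\)): for every \(w\in\mathbb{H}\) and \(t\geqslant 0\),

\[\psi\big(Q_{t}(w)\big)=h^{-1}\Big(\tfrac{b-a}{\pi}\big(\log w+\lambda t\big)+\tfrac{b+a}{2}i+s\Big)=h^{-1}\big(h(\psi(w))+t\big)=\phi_{t}\big(\psi(w)\big),\]

which is exactly the intertwining property required in Definition 2.13.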
_Remark 2.15_.: One important consequence of the facts mentioned in Remark 2.14 is that every backward orbit \(\gamma\) starting from a point \(z\) in a hyperbolic petal \(\Delta(\sigma)\) converges to \(\sigma\) non-tangentially and with a definite slope, i.e. there exists the limit
\[\theta(z):=\lim_{t\to+\infty}\operatorname{Arg}\big{(}1-\overline{\sigma} \gamma(t)\big{)},\]
with \(\theta(z)\in(-\pi/2,\pi/2)\).
### Conformality of a domain at a boundary point
The geometry of Koenigs domains of non-elliptic one-parameter semigroups is closely tied to conformality w.r.t. horizontal strips in the sense of Rodin and Warschawski [39].
**Definition 2.16**.: Let \(S:=\mathbb{S}(a,b)\) be a maximal strip contained in a domain \(\Omega\) and let \(g\) be a conformal mapping of \(\Omega\) onto \(\mathbb{S}(a,b)\) such that
\[\operatorname{Re}g(t+iy_{0})\to-\infty\quad\text{as}\;\;t\to-\infty \tag{2.9}\]
for some and hence all \(y_{0}\in(a,b)\). The domain \(\Omega\) is said to have an angular derivative or to be conformal at \(-\infty\) w.r.t. \(S\) if for any \(\varepsilon\in(0,b-a)\) there exists the finite real limit
\[\lim_{D(\varepsilon)\ni z\to\infty}g(z)-z,\quad D(\varepsilon):=\{z\in S \colon\operatorname{Re}z\leqslant 0,\;\operatorname{dist}(z,\partial S) \geqslant\varepsilon/2\}. \tag{2.10}\]
_Remark 2.17_.: Clearly, the map \(g\) above is not uniquely defined. However, it is easy to see that if condition (2.10) holds for one conformal map \(g\) of \(\Omega\) onto \(\mathbb{S}(a,b)\) satisfying (2.9), then (2.10) holds for all such mappings \(g\).
In a different geometric setting, it is natural to consider a similar and closely related notion of conformality w.r.t. the unit disk \(\mathbb{D}\). In this case, we restrict ourselves to subdomains of \(\mathbb{D}\).
**Definition 2.18**.: A simply connected domain \(U\subset\mathbb{D}\) is said to be conformal at a point \(\sigma\in\partial U\cap\partial\mathbb{D}\) w.r.t. \(\mathbb{D}\) if there exists a conformal mapping \(\varphi\) of \(\mathbb{D}\) onto \(U\) such that \(\varphi(1)=\sigma\) in the sense of angular limits and the angular derivative \(\varphi^{\prime}(1)\) is finite.
Note that the condition \(\varphi(1)=\sigma\) in the above definition means that \(\zeta=1\) is a contact point of \(\varphi\); hence, the angular derivative \(\varphi^{\prime}(1)\) exists and does not vanish, but in general, can be infinite; see Remark 2.4. To simplify the terminology in the case when \(U\) is a petal, we make the following definition.
**Definition 2.19**.: Let \(\Delta(\sigma)\) be a hyperbolic petal of a one-parameter semigroup \((\phi_{t})\) with \(\alpha\)-point \(\sigma\). We say that \(\Delta(\sigma)\) is conformal, if \(\Delta(\sigma)\) is conformal at \(\sigma\) w.r.t. \(\mathbb{D}\).
_Remark 2.20_.: Note that, in general, there can exist two conformal mappings \(\varphi_{k}\), \(k=1,2\), of \(\mathbb{D}\) onto the same domain \(U\subset\mathbb{D}\) such that \(\varphi_{k}(1)=\sigma\), \(k=1,2\), for some \(\sigma\in\partial\mathbb{D}\), but with \(\varphi_{1}^{\prime}(1)=\infty\) while \(\varphi_{2}^{\prime}(1)\) is finite. This phenomenon may happen if the geometric point \(\sigma\) corresponds to at least two different accessible boundary points of \(U\). However, this is never the case for petals, see [13, Proposition 4.15]. Therefore, in order to determine whether a petal \(\Delta(\sigma)\) is conformal it is sufficient to construct _just one_ conformal map \(\varphi\) of \(\mathbb{D}\) onto \(\Delta(\sigma)\) with \(\varphi(1)=\sigma\) in the sense of angular limits and check whether the angular derivative \(\varphi^{\prime}(1)\) is finite. According to Remark 2.14, every one-parameter semigroup admits a pre-model \((\mathbb{H},\psi,Q_{t})\) at each repulsive fixed point \(\sigma\). A conformal map of \(\mathbb{D}\) onto \(\Delta(\sigma)\) taking \(1\) to \(\sigma\) is given by \(\varphi(z):=\psi\big{(}\frac{1-z}{1+z}\big{)}\). Therefore, a hyperbolic petal \(\Delta(\sigma)\) is conformal if and only if the pre-model \((\mathbb{H},\psi,Q_{t})\) is regular in the sense that the angular derivative \(\psi^{\prime}(0)\) is finite. Note that this condition is stronger than the isogonality condition (2.8), which is also sometimes called semi-conformality; see e.g. [35, Sect. 4.3]. Condition (2.8) is satisfied in our context by the very definition of a pre-model.
For a non-elliptic one-parameter semigroup \((\phi_{t})\), the two versions of the angular derivative problem introduced above turn out to be equivalent.
**Proposition 2.21**.: _In the above notation, a hyperbolic petal \(\Delta(\sigma)\) is conformal if and only if the Koenigs domain \(\Omega\) is conformal at \(-\infty\) w.r.t. the maximal strip \(\mathbb{S}(\sigma):=h(\Delta(\sigma))\)._
Proof.: Using conformal automorphisms of \(\mathbb{D}\), we may assume that \(\sigma=-1\) and \(\tau=1\). Denote \(S:=\mathbb{S}(\sigma),\ a:=\inf_{z\in S}\operatorname{Im}z,\ b:=\sup_{z\in S} \operatorname{Im}z\). Let
\[q(z):=\frac{e^{L(z)}-1}{e^{L(z)}+1},\quad\text{where}\ \ L(z):=\pi\frac{z-(b+a)i/2}{b-a}.\]
The function \(q\) maps \(S\) conformally onto \(\mathbb{D}\) in such a way that \(q(z)\to\tau=1\) as \(\operatorname{Re}z\to+\infty\) and \(q(z)\to\sigma=-1\) as \(\operatorname{Re}z\to-\infty\).
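To make this explicit, note that \(q\) factors into three standard conformal maps: \(L\) maps \(S=\mathbb{S}(a,b)\) affinely onto the standard strip \(\mathbb{S}\), \(\exp\) maps \(\mathbb{S}\) onto the right half-plane \(\mathbb{H}\), and the Cayley map \(M(w):=(w-1)/(w+1)\) maps \(\mathbb{H}\) onto \(\mathbb{D}\); thus

\[q=M\circ\exp\circ L.\]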
Then \(g:=(h\circ q)^{-1}\) is a conformal mapping of \(\Omega\) onto \(S\). Moreover, by Remarks 2.8 and 2.10, for any \(y_{0}\in(a,b)\), the curve \([0,+\infty)\ni t\mapsto h^{-1}(-t+iy_{0})\) is a backward orbit in \(\Delta(\sigma)\) and hence it converges to \(\sigma\) as \(t\to+\infty\). Therefore, \(g\) satisfies the normalization (2.9). By the very definition, it follows that \(\Omega\) is conformal at \(-\infty\) w.r.t. \(S\) if and only if \(g\) satisfies condition (2.10).
It is elementary to see that (2.10) is in turn equivalent to the existence of finite angular derivative at \(\sigma=-1\) for the holomorphic self-map \(\varphi:\mathbb{D}\to\mathbb{D}\) defined by \(\varphi:=q\circ g\circ q^{-1}\). At the same time we have
\[\varphi=q\circ(h\circ q)^{-1}\circ q^{-1}=h^{-1}\circ q^{-1}.\]
Hence \(\varphi\) maps \(\mathbb{D}\) conformally onto the hyperbolic petal \(\Delta(\sigma)\). Using again Remark 2.8, we see that the radial limit of \(\varphi\) at \(\sigma=-1\) equals \(\sigma\). By Lindelof's Theorem (see e.g. [14, Theorem 1.5.7 on p. 27]) the latter means that \(\varphi(\sigma)=\sigma\) in the sense of angular limits. According to Remark 2.20, the existence of the finite angular derivative of \(z\mapsto\varphi(-z)\) at \(z=1\) implies that \(\Delta(\sigma)=\varphi(\mathbb{D})\) is conformal at \(\sigma\) w.r.t. \(\mathbb{D}\) and hence we obtain the desired result.
_Remark 2.22_ (Examples of non-conformal hyperbolic petals).: With some effort, an example of a non-conformal hyperbolic petal was constructed in [13, Sect. 8]. We briefly indicate how
one can easily obtain many other examples of conformal and non-conformal hyperbolic petals. Lemma 3.2, which we prove in the next section, allows one to construct starlike-at-infinity domains \(\Omega\) containing a maximal strip \(S\) w.r.t. which \(\Omega\) is conformal (or non-conformal) at \(-\infty\). Let \(h\) be a conformal mapping of \(\mathbb{D}\) onto \(\Omega\). Then \(\Delta:=h^{-1}(S)\) is a hyperbolic petal for the non-elliptic semigroup \((\phi_{t})\) given by \(\phi_{t}:=h^{-1}\circ(h+t)\), \(t\geqslant 0\). By Proposition 2.21, the petal \(\Delta\) is conformal (respectively, non-conformal). Note that, using the trick described in Remark 2.6, this technique can be extended to elliptic semigroups as well.
## 3. Auxiliary results
### Strong Markov property for the Green's function
Let \(D\subsetneq\mathbb{C}\) be a simply connected domain. Let \(\mathrm{G}_{D}\) and \(\omega_{D}\) denote the (positive) Green's function and the harmonic measure for the domain \(D\), respectively; see e.g. [37, Chapter 4]. In the course of the proofs, we make use of the following remarkable property of the Green's function \(\mathrm{G}_{D}\), see [36, p. 111].
**Lemma 3.1** (Strong Markov Property for the Green's function).: _Let \(D_{1}\) and \(D_{2}\) be two simply connected domains with \(D_{1}\subset D_{2}\subsetneq\mathbb{C}\). Then for all \(z,w\in D_{1}\), \(z\neq w\),_
\[\mathrm{G}_{D_{2}}(z,w)-\mathrm{G}_{D_{1}}(z,w)=\int_{A}\mathrm{G}_{D_{2}}( \alpha,z)\ \omega_{D_{1}}(w,\,\mathrm{d}\alpha), \tag{3.1}\]
_where \(A:=D_{2}\cap\partial D_{1}\)._
In the proof of the main results, we occasionally replace the hyperbolic density with the conformal radius; the reader may refer to [21, §2.1]. For simply connected domains \(D\subsetneq\mathbb{C}\) the conformal radius \(\mathcal{R}(z_{0},D)\) of \(D\) w.r.t. the point \(z_{0}\in D\) is, up to the factor \(2\), the reciprocal of the hyperbolic density \(\lambda_{D}(z_{0})\); namely \(\mathcal{R}(z_{0},D)=2/\lambda_{D}(z_{0})\).
It is further known that \(\mathrm{G}_{D}(z,w)+\log|w-z|\to\log\mathcal{R}(w,D)\) as \(z\to w\in D\). Therefore, (3.1) implies
\[\log\frac{\mathcal{R}(w,D_{2})}{\mathcal{R}(w,D_{1})}=\int_{A}\mathrm{G}_{D_{ 2}}(\alpha,w)\ \omega_{D_{1}}(w,\,\mathrm{d}\alpha),\quad w\in D_{1}. \tag{3.2}\]
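As a quick sanity check of (3.2) — not needed for the sequel — take \(D_{1}:=r\mathbb{D}\), \(D_{2}:=\mathbb{D}\) with \(0<r<1\) and \(w:=0\). Then \(\mathcal{R}(0,\mathbb{D})=1\) and \(\mathcal{R}(0,r\mathbb{D})=r\), while \(\mathrm{G}_{\mathbb{D}}(\alpha,0)=\log(1/|\alpha|)\) is constant on \(A=\{|\alpha|=r\}\) and \(\omega_{r\mathbb{D}}(0,\cdot)\) is a probability measure, so that
\[\log\frac{\mathcal{R}(0,\mathbb{D})}{\mathcal{R}(0,r\mathbb{D})}=\log\frac{1}{r}=\int_{A}\mathrm{G}_{\mathbb{D}}(\alpha,0)\;\omega_{r\mathbb{D}}(0,\,\mathrm{d}\alpha),\]
in accordance with (3.2).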
### An integral criterion for conformality of domains starlike at infinity
Let \(\Omega\) be a domain which is starlike at infinity and denote by \(S=\mathbb{S}(a,b)=\{x+iy\colon a<y<b\}\) a maximal horizontal strip contained in \(\Omega\).
One of the ingredients of the proof of Theorem 4.1 is a characterization of conformality of \(\Omega\) at the boundary point \(-\infty\) w.r.t. \(S\) in euclidean terms. Such a characterization can easily be deduced from the work of Rodin & Warschawski [39] and Jenkins & Oikawa [27] as follows.
Fix a point \(w_{0}=iy_{0}\in S\), \(y_{0}\in(a,b)\), and denote
\[\delta_{\Omega,1}(t):=a-\inf I(t),\quad\delta_{\Omega,2}(t):=\sup I(t)-b\,,\]
where \(I(t)\) is the connected component of \(\{y\colon t+iy\in\Omega\}\) containing \(y_{0}\). It is clear that \(\delta_{\Omega,j}(t)\geqslant 0\) for all \(t\in\mathbb{R}\), \(j=1,2\), because \(S\subset\Omega\). We denote
\[\delta_{\Omega}(t):=\max\big{\{}\delta_{\Omega,1}(t),\delta_{\Omega,2}(t) \big{\}}\;. \tag{3.3}\]
Note that \(\delta_{\Omega,1}(t)\), \(\delta_{\Omega,2}(t)\) and \(\delta_{\Omega}(t)\) do not depend on the choice of the base point \(w_{0}\). We should also note that there might exist \(t\in\mathbb{R}\) such that \(\delta_{\Omega}(t)=+\infty\). Due to maximality of the strip \(S\) inside \(\Omega\), there always exists a \(t_{0}\in\mathbb{R}\) such that \(\delta_{\Omega}(t)<+\infty\) for all \(t\leqslant t_{0}\). In fact, starlikeness of \(\Omega\) at infinity assures that \(\delta_{\Omega}\) is monotonically non-decreasing in \((-\infty,t_{0}]\) and that \(\delta_{\Omega}(t)\to 0\) as \(t\to-\infty\).
Since the Koenigs function \(h\) is defined modulo an additive constant, by shifting the domain \(\Omega\) along the real axis, in our proofs we may assume without loss of generality that \(t_{0}=1\). The following lemma relates the conformality of the starlike-at-infinity domain \(\Omega\) at \(-\infty\) w.r.t. the strip \(S\) with the integrability of the euclidean quantity \(\delta_{\Omega}\).
**Lemma 3.2**.: _In the above notation, \(\Omega\) is conformal at \(-\infty\) w.r.t. the strip \(S\) if and only if_
\[\int\limits_{-\infty}^{0}\delta_{\Omega}(t)\;\mathrm{d}t\;<\;+\infty. \tag{3.4}\]
Proof.: Suppose that (3.4) holds. Let \((Q_{n})_{n\geqslant 0}\) be a sequence of pairwise disjoint squares contained in \((\Omega\setminus S)\cap\{z\colon\operatorname{Re}z\leqslant 0\}\) and such that one side of each \(Q_{n}\) lies on \(\partial S\). From (3.4), it follows easily that \(\sum_{n\geqslant 0}\operatorname{area}(Q_{n})<+\infty\). Hence, by [39, Theorem 2], \(\Omega\) is conformal at \(-\infty\) w.r.t. \(S\).
Now suppose that \(\Omega\) is conformal at \(-\infty\) w.r.t. \(S\). Since \(\Omega+x\subset\Omega\) for any \(x\geqslant 0\), the functions \(\delta_{j}:=\delta_{\Omega,j}\), \(j=1,2\), are monotonically non-decreasing. Therefore, by the implication (i) \(\Rightarrow\) (iii) of [39, Theorem 2], there is an increasing sequence \(0=u_{0}<u_{1}<\ldots<u_{n}<\ldots\) tending to \(+\infty\) such that
\[\sum_{n=0}^{\infty}(u_{n+1}-u_{n})^{2}<\infty\quad\text{and}\quad\sum_{n=0}^{ \infty}\delta_{j}(-u_{n+1})^{2}<+\infty,\;\;j=1,2.\]
With the help of the Cauchy - Schwarz - Bunyakovsky inequality, it follows that

\[\int\limits_{-\infty}^{0}\delta_{j}(t)\;\mathrm{d}t\;=\;\sum_{n=0}^{\infty}\int\limits_{-u_{n+1}}^{-u_{n}}\delta_{j}(t)\;\mathrm{d}t\;\leqslant\;\sum_{n=0}^{\infty}(u_{n+1}-u_{n})\cdot\delta_{j}(-u_{n})\;\leqslant\;\Big{(}\sum_{n=0}^{\infty}(u_{n+1}-u_{n})^{2}\Big{)}^{1/2}\Big{(}\sum_{n=0}^{\infty}\delta_{j}(-u_{n})^{2}\Big{)}^{1/2}\;<\;+\infty,\]

where the first estimate uses the monotonicity of \(\delta_{j}\), and the last sum is finite because \(\sum_{n=0}^{\infty}\delta_{j}(-u_{n})^{2}\leqslant\delta_{j}(0)^{2}+\sum_{n=0}^{\infty}\delta_{j}(-u_{n+1})^{2}<+\infty\).
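To illustrate the criterion, we include the following construction; it is our own elementary example (cf. Remark 2.22) and is not taken from the references. Given a continuous non-decreasing function \(\eta:(-\infty,0]\to(0,+\infty)\) with \(\eta(t)\to 0\) as \(t\to-\infty\), extended by \(\eta(t):=\eta(0)\) for \(t>0\), the domain
\[\Omega_{\eta}:=\Big{\{}x+iy:\ -\frac{\pi}{2}<y<\frac{\pi}{2}+\eta(x)\Big{\}}\]
is starlike at infinity, \(\mathbb{S}\) is a maximal horizontal strip in \(\Omega_{\eta}\), and \(\delta_{\Omega_{\eta}}(t)=\eta(t)\) for all \(t\leqslant 0\). By Lemma 3.2, the choice \(\eta(t):=e^{t}\) yields a domain conformal at \(-\infty\) w.r.t. \(\mathbb{S}\), whereas \(\eta(t):=(1-t)^{-1}\) yields a non-conformal one, since in the latter case \(\int_{-\infty}^{0}\delta_{\Omega_{\eta}}(t)\,\mathrm{d}t=+\infty\).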
### A lemma on convergence of Riemann mappings on the boundary
Let \((D_{n})\) be a sequence of simply connected domains in \(\mathbb{C}\) and \(\mathscr{B}\) an open subarc of \(\partial\mathbb{D}\) with the following properties: \(\mathbb{D}\subset D_{n}\) and \(\mathscr{B}\subset\partial D_{n}\) for each \(n\in\mathbb{N}\). Denote by \(\sigma_{k}\), \(k=1,2\), the end-points of the arc \(\mathscr{B}\).
**Lemma 3.3**.: _In the above notation, suppose additionally that each \(D_{n}\) is a Jordan domain. For each \(n\in\mathbb{N}\), let \(f_{n}\) denote the conformal map of \(\mathbb{D}\) onto \(D_{n}\) normalized by \(f_{n}(0)=0\), \(f_{n}^{\prime}(0)>0\) and extended by continuity to a homeomorphism between the closures. If \((D_{n})\) converges to \(\mathbb{D}\) w.r.t. \(0\) in the sense of kernel convergence, then \(f_{n}^{-1}(\sigma_{k})\to\sigma_{k}\), \(k=1,2\), as \(n\to+\infty\)._
_Remark 3.4_.: Requiring that each of the domains \(D_{n}\) is a Jordan domain is not really essential in the above lemma, but this condition is satisfied in the setting for which we will apply Lemma 3.3. For more general domains both the statement of the lemma and its proof would become more technical. On the other hand, the assumption that \(\mathbb{D}\subset D_{n}\) for all \(n\in\mathbb{N}\) seems to play an important role. Again, in our setting this assumption holds, but without it the conclusion of Lemma 3.3 may fail as the following example demonstrates. The sequence of Jordan domains
\[D_{n}:=\big{\{}z\in\mathbb{C}:|z|<1-\tfrac{1}{n}\big{\}}\cup\big{\{}z\in \mathbb{D}:|\arg z|<\tfrac{\pi}{2n}\big{\}}\cup\big{\{}z\in\mathbb{D}:\operatorname {Re}z>0,\ |z|>1-\tfrac{1}{2n}\big{\}}\subset\mathbb{D}\]
converges to \(\mathbb{D}\) w.r.t. \(0\) in the sense of kernel convergence. Although the arc \(\mathscr{B}:=\{z\in\partial\mathbb{D}:\operatorname{Re}z>0\}\) lies on the boundary of \(D_{n}\) for all \(n\in\mathbb{N}\) and although the functions \(f_{n}\) defined as above converge locally uniformly in \(\mathbb{D}\) to the identity map, the pre-images \(f_{n}^{-1}(\mathscr{B})\) shrink as \(n\to+\infty\) to the point \(\sigma_{0}=1\) rather than converge to \(\mathscr{B}\).
Proof of Lemma 3.3.: Denote \(g_{n}:=f_{n}^{-1}\big{|}_{\overline{\mathbb{D}}}\), where \(\overline{\mathbb{D}}\) denotes the closure of \(\mathbb{D}\). By Caratheodory's kernel convergence theorem ([35, Theorem 1.8]), \((f_{n})\) and \((g_{n})\) converge locally uniformly in \(\mathbb{D}\) to the identity mapping. Moreover, the restrictions \(g_{n}\big{|}_{\mathbb{D}}\) can be extended by the Schwarz reflection principle to conformal mappings \(g_{n}^{*}\) of \(\mathbb{D}_{\mathscr{B}}:=\{z\in\mathbb{C}:|z|\neq 1\}\cup\mathscr{B}\) into \(\mathbb{C}\). Recall that \(g_{n}(0)=0\) for all \(n\in\mathbb{N}\). Moreover, by the locally uniform convergence of \((g_{n})\) in \(\mathbb{D}\), the sequence \(|g_{n}^{\prime}(0)|\) is bounded. It follows that the extended functions \(g_{n}^{*}\) form a normal family in \(\mathbb{D}_{\mathscr{B}}\) and hence \(g_{n}^{*}\to\operatorname{id}_{\mathbb{D}_{\mathscr{B}}}\) locally uniformly in \(\mathbb{D}_{\mathscr{B}}\). This fact, however, does not imply on its own the conclusion of the lemma, because \(\sigma_{k}\not\in\mathbb{D}_{\mathscr{B}}\). On the other hand, \(g_{n}(w)=g_{n}^{*}(w)\) for all \(w\in\mathscr{B}\) and all \(n\in\mathbb{N}\) and hence we may conclude that \((g_{n})\) converges uniformly on any closed subarc of \(\mathscr{B}\) to the identity mapping.
Consider the sequence \(h_{n}(w):=\sigma_{0}g_{n}(w)/g_{n}(\sigma_{0})\), \(w\in\overline{\mathbb{D}}\), where \(\sigma_{0}\) is the midpoint of the arc \(\mathscr{B}\). Note that \(h_{n}(\mathbb{D})\subset\mathbb{D}\), \(h_{n}(\mathscr{B})\subset\partial\mathbb{D}\), and \(h_{n}(\sigma_{0})=\sigma_{0}\). Therefore, by Loewner's lemma ([35, Proposition 4.15]), \(\mathscr{B}\subset h_{n}(\mathscr{B})\). Since \(g_{n}(\sigma_{0})\to\sigma_{0}\), it is enough to show that \(h_{n}(\sigma_{k})\to\sigma_{k}\) as \(n\to+\infty\), \(k=1,2\). Suppose this is not the case. Then, passing if necessary to a subsequence, we may assume that there exists an open arc \(\mathscr{C}\) on \(\partial\mathbb{D}\) such that \(\mathscr{C}\not\subset\mathscr{B}\), and
\[\mathscr{C}\subset h_{n}(\mathscr{B})\quad\text{ for all }n\in\mathbb{N}. \tag{3.5}\]
In particular, \(h_{n}^{-1}(\mathscr{C})\subset\partial\mathbb{D}\) for any \(n\in\mathbb{N}\). The functions \(h_{n}^{-1}\) are restrictions of the maps \(z\mapsto f_{n}\big{(}zg_{n}(\sigma_{0})/\sigma_{0}\big{)}\) to \(\overline{\mathbb{D}}\). Therefore, arguing as above we see that \(h_{n}^{-1}\to\operatorname{id}_{\mathscr{C}}\) on \(\mathscr{C}\). Since \(\mathscr{C}\not\subset\mathscr{B}\), it follows that \(h_{n}^{-1}(\mathscr{C})\not\subset\mathscr{B}\) for \(n\in\mathbb{N}\) large enough. To complete the proof it remains to notice that the latter conclusion contradicts (3.5).
## 4. Proof of the main result
### Reformulation of the problem
In this section we reduce Theorem 1.1 to showing that if a domain \(\Omega\) is starlike at infinity, then its conformality at \(-\infty\) w.r.t. a maximal strip \(\mathbb{S}(a,b)\subset\Omega\) is equivalent to a certain condition on how fast \(\lambda_{\Omega}\) approaches \(\lambda_{\mathbb{S}(a,b)}\) along a horizontal ray \(\{t+iy_{0}\colon t\leqslant 0\}\subset\mathbb{S}(a,b)\) as \(t\to-\infty\). The precise statement of this result is as follows.
**Theorem 4.1**.: _Let \(\Omega\subset\mathbb{C}\), \(\Omega\neq\mathbb{C}\), be a domain starlike at infinity and let \(S:=\mathbb{S}(a,b)\) be a maximal strip contained in \(\Omega\). Fix a point \(w_{0}\in S\). Then \(\Omega\) is conformal at \(-\infty\) w.r.t. \(S\) if and only if_
\[\int\limits_{-\infty}^{0}\log\frac{\lambda_{S}(t+w_{0})}{\lambda_{\Omega}(t+w_ {0})}\;\mathrm{d}t\;<\;+\infty. \tag{4.1}\]
_In this case, the integral (4.1) converges for every \(w_{0}\in S\), and in fact locally uniformly on \(S\)._
In this subsection, we show that Theorem 4.1 implies Theorem 1.1. This is not difficult for non-elliptic semigroups \((\phi_{t})\) in \(\mathbb{D}\), but less obvious in the elliptic case. The proof of Theorem 4.1 will be given in Subsections 4.2 and 4.3.
Proof of Theorem 1.1.: First we prove Theorem 1.1 for the case of a non-elliptic semigroup \((\phi_{t})\) in \(\mathbb{D}\). Differentiating Abel's equation (2.7) w.r.t. \(z\), we obtain
\[h^{\prime}\big{(}\phi_{t}(z)\big{)}\phi_{t}^{\prime}(z)=h^{\prime}(z) \tag{4.2}\]
for all \(z\in\mathbb{D}\), \(t\geqslant 0\). Moreover, in view of Remark 2.11, (4.2) holds also for all \(t<0\) if \(z\in\mathscr{W}^{\circ}\).
Denote by \(\Omega\) the Koenigs domain of \((\phi_{t})\) and by \(S=\mathbb{S}(a,b)\) the maximal strip in \(\Omega\) associated to the \(\alpha\)-point \(\sigma\) of a hyperbolic petal \(\Delta=\Delta(\sigma)\), see Remark 2.8. Then \(\Omega\) is starlike at infinity and the Koenigs function \(h\) maps \(\mathbb{D}\) conformally onto \(\Omega\) and \(\Delta(\sigma)\) onto \(S\), see [14, Theorem 13.5.5]. Fix \(z_{0}\in\Delta(\sigma)\). Taking into account that the hyperbolic metric is invariant under conformal mappings and using equality (4.2) we see that the integrand in (1.1) can be written as
\[\log\frac{\lambda_{\Delta}(z_{0})}{\lambda_{\mathbb{D}}(\phi_{t}(z_{0}))\,| \phi_{t}^{\prime}(z_{0})|}=\log\frac{|h^{\prime}(z_{0})|\,\lambda_{S}\big{(}h (z_{0})\big{)}}{|h^{\prime}(\phi_{t}(z_{0}))|\,\lambda_{\Omega}\big{(}h(\phi_ {t}(z_{0}))\big{)}\,|\phi_{t}^{\prime}(z_{0})|}=\log\frac{\lambda_{S}\big{(}h (z_{0})\big{)}}{\lambda_{\Omega}\big{(}h(\phi_{t}(z_{0}))\big{)}},\]
which is equal for any \(t\in\mathbb{R}\), according to Abel's equation (2.7) and Remark 2.11, to
\[\log\frac{\lambda_{S}\big{(}h(z_{0})\big{)}}{\lambda_{\Omega}\big{(}h(z_{0})+ t\big{)}}.\]
Furthermore, obviously \(\lambda_{S}(h(z_{0}))=\lambda_{S}(h(z_{0})+t)\) for any \(t\in\mathbb{R}\). Consequently, for \(w_{0}:=h(z_{0})\), the integral in (1.1) is identical to the integral in (4.1). Thus, for non-elliptic semigroups Theorem 1.1 follows from Theorem 4.1 and Proposition 2.21.
Now we show how the elliptic case can be reduced to the non-elliptic case. Consider a semigroup \((\phi_{t})\) in \(\mathbb{D}\) with DW-point \(\tau\in\mathbb{D}\). According to Remark 2.6, there exists a parabolic semigroup \((\varphi_{t})\) in \(\mathbb{D}\) such that for all \(t\geqslant 0\),
\[\phi_{t}\circ F=F\circ\varphi_{t},\quad\text{where}\;\;F:=T\circ C,\;C(z):=\exp\left(-\frac{1+z}{1-z}\right),\;\;T(w):=\frac{w+\tau}{1+\overline{\tau}w}. \tag{4.3}\]
Note that \(F(\zeta_{1})=F(\zeta_{2})\) if and only if \(L^{\circ n}(\zeta_{1})=\zeta_{2}\) for some \(n\in\mathbb{Z}\), where \(L\) is the automorphism of \(\mathbb{D}\) given by \(L(\zeta):=H^{-1}\big{(}H(\zeta)+2\pi i\big{)}\), \(H(z):=(1+z)/(1-z)\), and \(L^{\circ n}\) denotes the \(n\)-th iterate of \(L\). The semigroup \((\varphi_{t})\) satisfies for each \(t\geqslant 0\) the functional equation
\(\varphi_{t}\circ L=L\circ\varphi_{t}\). This is clear from the construction given in [25, Sect. 2]. Therefore, if \(\zeta\in\varphi_{t}(\mathbb{D})\) for some \(t\geqslant 0\), then \(\varphi_{t}(\mathbb{D})\) contains all the points \(\zeta^{\prime}\in\mathbb{D}\) satisfying \(F(\zeta^{\prime})=F(\zeta)\). With the notation
\[\mathcal{W}:=\bigcap_{t\geqslant 0}\phi_{t}(\mathbb{D}),\qquad\mathcal{U}:= \bigcap_{t\geqslant 0}\varphi_{t}(\mathbb{D}),\]
it follows that
\[F(\mathcal{U})=\bigcap_{t\geqslant 0}F(\varphi_{t}(\mathbb{D}))=\bigcap_{t \geqslant 0}\phi_{t}(F(\mathbb{D}))=\bigcap_{t\geqslant 0}\phi_{t}(\mathbb{D} \setminus\{\tau\})=\mathcal{W}\setminus\{\tau\}.\]
Note that \(\Delta(\sigma)\) is a simply connected domain and \(\tau\in\partial\Delta(\sigma)\); see [14, Proposition 13.4.2]. Moreover, by the definition of a petal, \(\Delta(\sigma)\) is a connected component of the interior of \(\mathcal{W}\). Since \(F:\mathbb{D}\to\mathbb{D}\setminus\{\tau\}\) is a covering map, it follows that there exists a connected component \(D\) of the interior of \(\mathcal{U}\) such that \(F\) maps \(D\) conformally (and in particular, injectively) onto \(\Delta(\sigma)\). By the very definition, \(D\) is a petal for \((\varphi_{t})\).
Moreover, by [14, Proposition 13.4.9], \(D\) is a Jordan domain and \(\partial\Delta(\sigma)\) is locally connected. It follows, see e.g. [35, Sect. 2.2], that \(F_{*}:=F|_{D}\) extends continuously to \(\partial D\) and that \(F_{*}(\partial D)=\partial\Delta(\sigma)\). Recall that \(\sigma\in\partial\Delta(\sigma)\). Therefore, there exists a point \(\varsigma\in\partial D\) such that \(F_{*}(\varsigma)=\sigma\). Since \(F\) is continuous with \(|F|<1\) in \(\mathbb{D}\) and since \(\sigma\in\partial\mathbb{D}\), we have \(\varsigma\in\partial\mathbb{D}\). We claim that \(\varsigma\neq 1\). Indeed, if \(1\notin\partial D\), there is nothing to prove; otherwise, let \(\Gamma\) be a Jordan arc in \(D\cup\{1\}\) with one of the end-points at \(1\). Since \(F_{*}\) is continuous in the closure of \(D\), we have \(F(z)\to F_{*}(1)\) as \(\Gamma\ni z\to 1\). Taking into account that the only asymptotic value of \(\mathbb{H}\ni\zeta\mapsto e^{-\zeta}\) at \(\infty\) is \(0\), it follows that \(F_{*}(1)=\tau\neq\sigma\).
Note that \(F\) extends holomorphically to any point of \(\partial\mathbb{D}\setminus\{1\}\). Hence \(F(\varsigma)=\sigma\). As a consequence, using Remark 2.5, it is easy to see that \(\varsigma\) is a repulsive fixed point of \((\varphi_{t})\).
Thus we have constructed a hyperbolic petal \(D\) for \((\varphi_{t})\) with \(\alpha\)-point \(\varsigma\) such that \(F\) maps \(D\) conformally onto \(\Delta(\sigma)\), with \(F(\varsigma)=\sigma\). Since \(F\) is holomorphic at \(\varsigma\) and \(F^{\prime}(\varsigma)\neq 0\), the petal \(D\) is conformal if and only if the petal \(\Delta(\sigma)\) is conformal.
It remains to show that condition (1.1) for \((\phi_{t})\) and a point \(z_{0}\in\Delta(\sigma)\) is equivalent to (1.1) with \((\phi_{t})\), \(\Delta(\sigma)\), and \(z_{0}\) replaced by \((\varphi_{t})\), \(D\), and \(\zeta_{0}:=F_{*}^{-1}(z_{0})\), respectively.
First of all since \(F\) maps \(D\) conformally onto \(\Delta(\sigma)\), with \(F(\zeta_{0})=z_{0}\), we have \(\lambda_{D}(\zeta_{0})=|F^{\prime}(\zeta_{0})|\lambda_{\Delta(\sigma)}(z_{0})\). Furthermore, by (4.3),
\[\phi_{t}(z_{0})=F\big{(}\varphi_{t}(\zeta_{0})\big{)}\quad\text{ and }\quad \varphi_{t}^{\prime}(\zeta_{0})=\phi_{t}^{\prime}(z_{0})\frac{F^{\prime}(\zeta_ {0})}{F^{\prime}\big{(}\varphi_{t}(\zeta_{0})\big{)}}\]
for all \(t\in\mathbb{R}\). Using these relations we obtain
\[\frac{\lambda_{D}(\zeta_{0})}{\lambda_{\mathbb{D}}(\varphi_{t}(\zeta_{0}))\,|\varphi_{t}^{\prime}(\zeta_{0})|}=\frac{|F^{\prime}(\zeta_{0})|\,\lambda_{\Delta(\sigma)}(z_{0})}{\lambda_{\mathbb{D}}\big{(}\varphi_{t}(\zeta_{0})\big{)}}\,\frac{|F^{\prime}\big{(}\varphi_{t}(\zeta_{0})\big{)}|}{|\phi_{t}^{\prime}(z_{0})F^{\prime}(\zeta_{0})|}=\frac{\lambda_{\Delta(\sigma)}(z_{0})}{\lambda_{\mathbb{D}}(\phi_{t}(z_{0}))\,|\phi_{t}^{\prime}(z_{0})|}\,\frac{|F^{\prime}\big{(}\varphi_{t}(\zeta_{0})\big{)}|\,\lambda_{\mathbb{D}}(\phi_{t}(z_{0}))}{\lambda_{\mathbb{D}}\big{(}\varphi_{t}(\zeta_{0})\big{)}}=\frac{\lambda_{\Delta(\sigma)}(z_{0})}{\lambda_{\mathbb{D}}(\phi_{t}(z_{0}))\,|\phi_{t}^{\prime}(z_{0})|}\,\frac{|F^{\prime}\big{(}\varphi_{t}(\zeta_{0})\big{)}|\,\lambda_{\mathbb{D}}\big{(}F(\varphi_{t}(\zeta_{0}))\big{)}}{\lambda_{\mathbb{D}}\big{(}\varphi_{t}(\zeta_{0})\big{)}} \tag{4.4}\]
for all \(t\in\mathbb{R}\). The backward orbit \(\gamma(t):=\varphi_{-t}(\zeta_{0})\) of \((\varphi_{t})\) converges to \(\varsigma\) at an exponential rate, that is,
\[\lim_{t\to-\infty}\frac{1}{t}\log\left(1-\overline{\varsigma}\varphi_{t}( \zeta_{0})\right)=\lambda\in(0,+\infty)\,,\]
see [14, Proposition 13.4.14], and non-tangentially, see Remark 2.15. Moreover, for any \(w\in\mathbb{D}\),
\[0\geqslant\log\frac{|F^{\prime}(w)|\,\lambda_{\mathbb{D}}\big{(}F(w)\big{)}}{ \lambda_{\mathbb{D}}(w)}=\log\frac{|C^{\prime}(w)|\,\lambda_{\mathbb{D}}\big{(} C(w)\big{)}}{\lambda_{\mathbb{D}}(w)}=\log\frac{\mu(w)}{\sinh\mu(w)} \geqslant-\frac{\mu(w)^{2}}{6},\]
where \(\mu(w):=(1-|w|^{2})/(|1-w|^{2})\); here the last inequality follows from \(\sinh\mu\leqslant\mu\,e^{\mu^{2}/6}\), which is immediate from the product expansion \(\sinh\mu=\mu\prod_{k\geqslant 1}\big{(}1+\mu^{2}/(\pi^{2}k^{2})\big{)}\) and \(\sum_{k\geqslant 1}k^{-2}=\pi^{2}/6\). Since \(\varphi_{t}(\zeta_{0})\to\varsigma\neq 1\) non-tangentially as \(t\to-\infty\) while \(1-|\varphi_{t}(\zeta_{0})|\to 0\) at an exponential rate, \(\mu\big{(}\varphi_{t}(\zeta_{0})\big{)}^{2}\) decays exponentially as \(t\to-\infty\), and we conclude that the integral
\[\int\limits_{-\infty}^{0}\log\frac{|F^{\prime}\big{(}\varphi_{t}(\zeta_{0}) \big{)}|\,\lambda_{\mathbb{D}}\big{(}F(\varphi_{t}(\zeta_{0}))\big{)}}{\lambda _{\mathbb{D}}\big{(}\varphi_{t}(\zeta_{0})\big{)}}\,\mathrm{d}t\]
converges, in fact, locally uniformly w.r.t. \(\zeta_{0}\in D\). In view of (4.4) the proof that Theorem 4.1 implies Theorem 1.1 for elliptic semigroups is therefore reduced to the previous case of non-elliptic semigroups.
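For completeness, we indicate the computation behind the identity \(\log\big{(}|C^{\prime}(w)|\,\lambda_{\mathbb{D}}(C(w))/\lambda_{\mathbb{D}}(w)\big{)}=\log\big{(}\mu(w)/\sinh\mu(w)\big{)}\) used in the preceding proof; the term involving \(F\) reduces to the one involving \(C\) because \(T\) is a conformal automorphism of \(\mathbb{D}\), under which the hyperbolic density is invariant. Since \(\operatorname{Re}\frac{1+w}{1-w}=\mu(w)\), we have \(|C(w)|=e^{-\mu(w)}\) and \(|C^{\prime}(w)|=2|C(w)|/|1-w|^{2}\), whence, writing \(\mu:=\mu(w)\),
\[\frac{|C^{\prime}(w)|\,\lambda_{\mathbb{D}}\big{(}C(w)\big{)}}{\lambda_{\mathbb{D}}(w)}=\frac{2e^{-\mu}}{|1-w|^{2}}\cdot\frac{1-|w|^{2}}{1-e^{-2\mu}}=\frac{2\mu\,e^{-\mu}}{1-e^{-2\mu}}=\frac{\mu}{\sinh\mu}.\]
Note that this ratio does not depend on the normalization of \(\lambda_{\mathbb{D}}\).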
### Proof of the if-part of Theorem 4.1: condition (4.1) implies conformality
As a matter of taste, we prefer to work with the conformal radius instead of the density of the hyperbolic metric. Therefore, condition (4.1) in Theorem 4.1 can be rewritten as
\[\int\limits_{-\infty}^{0}\log\frac{\mathcal{R}(t+w_{0},\Omega)}{\mathcal{R}(t +w_{0},S)}\,\mathrm{d}t\;<\;+\infty\,,\quad w_{0}\in S. \tag{4.5}\]
In order to simplify notation, throughout this and the following subsection we restrict ourselves to the case \(\operatorname{Re}w_{0}=0\). The following theorem is the key result of this section. It shows that the integrand in (4.5) can be estimated from below by the euclidean quantity \(\delta_{\Omega}(t)\) introduced in (3.3).
**Theorem 4.2**.: _Let \(\Omega\) be a proper subdomain of \(\mathbb{C}\) which is starlike at infinity and such that the standard strip \(\mathbb{S}=\{z\in\mathbb{C}\,:\,|\operatorname{Im}z|<\frac{\pi}{2}\}\) is a maximal horizontal strip contained in \(\Omega\). Then for any compact set \(K\subset(-\frac{\pi}{2},\frac{\pi}{2})\) there are constants \(c>0\) and \(T\leqslant 0\) such that for any \(y\in K\),_
\[\log\frac{\mathcal{R}(t+iy,\Omega)}{\mathcal{R}(t+iy,\mathbb{S})}\geqslant c \,\delta_{\Omega}(t)\quad\text{ for all }\;t\leqslant T. \tag{4.6}\]
It is clear from our previous considerations that Theorem 4.2 implies the "if-part" of Theorem 4.1. Indeed, if condition (4.1) holds for some \(w_{0}=iy_{0}\in S\), \(y_{0}\in(-\frac{\pi}{2},\frac{\pi}{2})\), then Theorem 4.2 and Lemma 3.2 imply that \(\Omega\) is conformal at \(-\infty\).
The proof of Theorem 4.2 is quite long and will be broken into several steps. The idea of the proof is as follows.
_Remark 4.3_ (Idea of proof of Theorem 4.2).: Recall that \(\mathbb{H}=\{z:\operatorname{Re}z>0\}\) denotes the right half-plane. For \(\delta>0\) we consider the enlarged strip
\[\mathbb{S}_{\delta}:=\left\{z:-\frac{\pi}{2}<\operatorname{Im}z<\frac{\pi}{2}+ \delta\right\}\]
and the "half-widened" standard strip
\[\mathbb{S}_{\delta}^{*}:=\mathbb{S}\cup\left(\mathbb{S}_{\delta}\cap\mathbb{H }\right).\]
Fix a point \(iy_{0}\in\Omega\), \(y_{0}\in(-\frac{\pi}{2},\frac{\pi}{2})\). By definition of \(\delta_{2}(t):=\delta_{\Omega,2}(t)\), see Subsection 3.2, and since \(\Omega\) is starlike at infinity, it follows at once that
\[\mathbb{S}_{\delta_{2}(t)}^{*}+t\subset\Omega\,.\]
Thus, by domain monotonicity of the conformal radius,
\[\log\frac{\mathcal{R}\!\left(t+iy_{0},\Omega\right)}{\mathcal{R}\!\left(t+iy_ {0},\mathbb{S}\right)}\geqslant\log\frac{\mathcal{R}\!\left(t+iy_{0},\mathbb{ S}_{\delta_{2}(t)}^{*}+t\right)}{\mathcal{R}\!\left(t+iy_{0},\mathbb{S}\right)}= \log\frac{\mathcal{R}\!\left(iy_{0},\mathbb{S}_{\delta_{2}(t)}^{*}\right)}{ \mathcal{R}\!\left(iy_{0},\mathbb{S}\right)}\,,\qquad t\leqslant 0\,. \tag{4.7}\]
The crux of the proof is to show that the expression on the r.h.s. of (4.7) is bounded below by
\[\log\frac{\mathcal{R}\!\left(iy_{0},\mathbb{S}_{\delta_{2}(t)}\right)}{ \mathcal{R}\!\left(iy_{0},\mathbb{S}\right)}\,, \tag{4.8}\]
at least up to a positive multiplicative constant which depends only on \(y_{0}\) but in a fairly controllable way. The quantity occurring in (4.8) is explicitly computable in terms of \(\delta_{2}(t)\) (and \(y_{0}\)), and as we shall see, in fact comparable to \(\delta_{2}(t)\) locally uniformly w.r.t. \(y_{0}\). This provides a lower bound also for the l.h.s. in (4.7) in terms of \(\delta_{2}(t)=\delta_{\Omega,2}(t)\) and then, by symmetry, in terms of \(\delta_{\Omega}(t)=\max\{\delta_{\Omega,1}(t),\delta_{\Omega,2}(t)\}\), and (4.6) follows.
**Proposition 4.4**.: _For \(\delta>0\) let \(\Phi_{\delta}:(-\frac{\pi}{2},\frac{\pi}{2})\to(0,+\infty)\) be defined by_
\[\Phi_{\delta}(y):=\log\frac{\mathcal{R}\!\left(iy,\mathbb{S}_{\delta}^{*} \right)}{\mathcal{R}\!\left(iy,\mathbb{S}\right)}\left(\log\frac{\mathcal{R} \!\left(iy,\mathbb{S}_{\delta}\right)}{\mathcal{R}\!\left(iy,\mathbb{S} \right)}\right)^{-1}. \tag{4.9}\]
_Then_
\[\lim_{\delta\to 0^{+}}\Phi_{\delta}=\frac{1}{2} \tag{4.10}\]
_locally uniformly on \((-\frac{\pi}{2},\frac{\pi}{2})\)._
_Remark 4.5_.: The two quantities \(\mathcal{R}\!\left(\cdot,\mathbb{S}\right)\) and \(\mathcal{R}\!\left(\cdot,\mathbb{S}_{\delta}\right)\) occurring in (4.9) do have simple explicit expressions. In principle, the third quantity \(\mathcal{R}\!\left(\cdot,\mathbb{S}_{\delta}^{*}\right)\) does also have an explicit expression, since the conformal map of the unit disk \(\mathbb{D}\) onto \(\mathbb{S}_{\delta}^{*}\) is "explicitly" known, see [29, formula (5.2.15), p. 272]. However, this explicit formula is fairly involved and seems quite unsuitable for obtaining precise information about \(\mathcal{R}\!\left(\cdot,\mathbb{S}_{\delta}^{*}\right)\), which is needed for proving Proposition 4.4. The proof of Proposition 4.4 below circumvents this difficulty by making use of the strong Markov property for the Green's function, see Sect. 3.1.
In order to prove Proposition 4.4 we further need several auxiliary lemmas.
**Lemma 4.6**.: _Fix some \(\beta\in(-\frac{\pi}{2},\frac{\pi}{2})\), some positive \(\alpha_{0}\leqslant\frac{1}{2}\big{(}\frac{\pi}{2}-\beta\big{)}\), and some \(\alpha\in(0,\alpha_{0})\), and let_
\[h(x):=\operatorname{G}_{\mathbb{S}}\big{(}i\beta,x+i\left(\frac{\pi}{2}-\alpha \right)\big{)}\,,\quad x\in\mathbb{R}\,.\]
_Then_
\[h(0)\,\geqslant\,h(x)\,\geqslant\,h(0)\,\frac{1-\cos\alpha_{0}}{\cosh x-\cos \alpha_{0}} \tag{4.11}\]
_for all \(x\in\mathbb{R}\)._
Proof.: Let \(z:=x+i(\frac{\pi}{2}-\alpha)\). Utilizing the formula for the Green's function of \(\mathbb{S}\) (see [37, p.109]), we find
\[h(x)=\operatorname{G}_{\mathbb{S}}\big{(}i\beta,z\big{)}=\log\left|\frac{e^{z }+e^{-i\beta}}{e^{z}-e^{i\beta}}\right|=\tfrac{1}{2}\log q(\cosh x,\alpha), \quad q(u,\alpha):=\frac{u-\sin(\beta-\alpha)}{u-\sin(\beta+\alpha)}.\]
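The middle equality here is a direct computation: \(z\mapsto e^{z}\) maps \(\mathbb{S}\) conformally onto the right half-plane, whose Green's function is \(\log\big{|}(w_{2}+\overline{w_{1}})/(w_{2}-w_{1})\big{|}\), and for \(z=x+i(\frac{\pi}{2}-\alpha)\) one has
\[\big{|}e^{z}+e^{-i\beta}\big{|}^{2}=e^{2x}+1+2e^{x}\sin(\alpha-\beta)=2e^{x}\big{(}\cosh x-\sin(\beta-\alpha)\big{)},\]
\[\big{|}e^{z}-e^{i\beta}\big{|}^{2}=e^{2x}+1-2e^{x}\sin(\alpha+\beta)=2e^{x}\big{(}\cosh x-\sin(\beta+\alpha)\big{)},\]
so that taking the ratio gives exactly \(q(\cosh x,\alpha)\).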
The proof of (4.11) is now elementary. For convenience we provide the main steps. As \(q(1,\alpha)\geqslant q(u,\alpha)>1\) for any \(u\geqslant 1\), we see that the left inequality in (4.11) holds. Since \(u\mapsto\rho(u):=(u-\sin(\beta+\alpha))\log q(u,\alpha)\) is concave with \(\lim_{u\to\infty}\rho^{\prime}(u)=0\), the function \(\rho(u)\) is increasing and we have
\[\log q(u,\alpha)\geqslant\frac{1-\sin(\beta+\alpha)}{u-\sin(\beta +\alpha)}\,\log q(1,\alpha) >\frac{1-\sin(\beta+\alpha_{0})}{u-\sin(\beta+\alpha_{0})}\log q (1,\alpha)\] \[\geqslant\frac{1-\cos\alpha_{0}}{u-\cos\alpha_{0}}\,\log q(1, \alpha).\]
The right inequality in (4.11) follows easily.
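The two elementary facts used in the last chain of inequalities are worth recording: for \(u>1\) the function \(s\mapsto(1-s)/(u-s)\) is strictly decreasing, since
\[\frac{\partial}{\partial s}\,\frac{1-s}{u-s}=\frac{1-u}{(u-s)^{2}}<0,\]
and \(\sin(\beta+\alpha_{0})=\cos\big{(}\tfrac{\pi}{2}-\beta-\alpha_{0}\big{)}\leqslant\cos\alpha_{0}\), because \(\tfrac{\pi}{2}-\beta-\alpha_{0}\geqslant\alpha_{0}\) by the choice of \(\alpha_{0}\).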
**Lemma 4.7**.: _Let \(\mathcal{B}:=\{e^{i\theta}:\alpha<\theta<\beta\}\) and \(\mathcal{B}^{\prime}:=\{e^{i\theta}:\alpha^{\prime}<\theta<\beta^{\prime}\}\). Let \(z,z^{\prime}\in\mathbb{D}\). Then_
\[\big{|}\omega_{\mathbb{D}}(z,\mathcal{B})-\omega_{\mathbb{D}}(z^{\prime}, \mathcal{B}^{\prime})\big{|}\,\,\leqslant\,\,\frac{|\alpha^{\prime}-\alpha|+| \beta^{\prime}-\beta|+2|z^{\prime}-z|}{\pi(1-r)}, \tag{4.12}\]
_where \(r:=\max\{|z|,|z^{\prime}|\}\)._
Proof.: Recall that \(2\pi\omega_{\mathbb{D}}(0,\cdot)\) coincides with one-dimensional Lebesgue measure on the unit circle \(\partial\mathbb{D}\). Hence
\[\big{|}\omega_{\mathbb{D}}(0,\mathcal{B})-\omega_{\mathbb{D}}(0,\mathcal{B}^ {\prime})\big{|}\leqslant\frac{|\alpha^{\prime}-\alpha|+|\beta^{\prime}-\beta |}{2\pi}\,. \tag{4.13}\]
To prove (4.12) in the case \(z^{\prime}=z\neq 0\), it is sufficient to apply (4.13) to the arcs \(T(\mathcal{B},z)\) and \(T(\mathcal{B}^{\prime},z)\), where \(T(\sigma,z):=(\sigma-z)/(1-\sigma\bar{z})\), and take into account that
\[\bigg{|}\frac{\partial T(\sigma,z)}{\partial\sigma}\bigg{|}\leqslant\frac{1+| z|}{1-|z|}<\frac{2}{1-r}\quad\text{for any }\sigma\in\partial\mathbb{D}.\]
Finally, to prove (4.12) in the general case, we notice that for any \(\sigma\in\partial\mathbb{D}\) and \(\theta\in\mathbb{R}\),
\[\bigg{|}\,\frac{\partial T(\sigma,z+te^{i\theta})}{\partial t}\Big{|}_{t=0} \,\bigg{|}\,\,\leqslant\,\,\bigg{|}\,\frac{\partial T(\sigma,z)}{\partial z }\bigg{|}\,+\,\bigg{|}\,\frac{\partial T(\sigma,z)}{\partial\bar{z}}\bigg{|} \,\,\leqslant\,\,\frac{2}{1-r}.\]
Hence, \(\big{|}\arg T(\sigma,z^{\prime})/T(\sigma,z)\big{|}\leqslant 2|z^{\prime}-z|/(1-r)\) for any \(\sigma\in\partial\mathbb{D}\). As a consequence,
\[2\pi\big{|}\omega_{\mathbb{D}}(z^{\prime},\mathcal{B}^{\prime})-\omega_{ \mathbb{D}}(z,\mathcal{B}^{\prime})\big{|}\leqslant 4|z^{\prime}-z|/(1-r)\]
and the general case for the estimate (4.12) follows immediately.
Proof of Proposition 4.4.: Applying formula (3.2) we immediately get
\[\log\frac{\mathcal{R}(iy,\mathbb{S}_{\delta})}{\mathcal{R}(iy,\mathbb{S})}= \int_{A}\mathrm{G}_{\mathbb{S}_{\delta}}(iy,z)\,\omega_{\mathbb{S}}(iy,\, \mathrm{d}z),\qquad\text{ where }A:=\big{\{}z:\mathrm{Im}\,z=\tfrac{\pi}{2}\big{\}}, \tag{4.14}\]
\[\log\frac{\mathcal{R}(iy,\mathbb{S}_{\delta})}{\mathcal{R}(iy, \mathbb{S}_{\delta}^{*})}=\int_{B\cup C_{\delta}}\mathrm{G}_{\mathbb{S}_{ \delta}}(iy,z)\,\omega_{\mathbb{S}_{\delta}^{*}}(iy,\,\mathrm{d}z),\quad \text{ where }B:=\big{\{}z:\mathrm{Im}\,z=\tfrac{\pi}{2},\ \mathrm{Re}\,z<0\big{\}} \tag{4.15}\] \[\text{ and }C_{\delta}:=\big{\{}iy:\tfrac{\pi}{2}\leqslant y< \tfrac{\pi}{2}+\delta\big{\}}.\]
Clearly, we may restrict consideration to small \(\delta>0\); namely, we will suppose that
\[\delta\in(0,\delta_{0}),\quad\text{where }\ \delta_{0}:=\min_{y\in K}\tfrac{1}{2}\big{(}\tfrac{\pi}{2}-|y|\big{)}.\]
Fix temporarily \(y\in K\) and \(\delta\in(0,\delta_{0})\). By Lemma 4.6 applied with
\[\beta:=\Big{(}y-\frac{\delta}{2}\Big{)}\frac{\pi}{\pi+\delta},\quad\alpha_{0}: =\delta_{0},\quad\alpha:=\delta\frac{\pi}{\pi+\delta},\]
we have that \(h(x)=h(x;y,\delta):=\mathrm{G}_{\mathbb{S}_{\delta}}(iy,x+i\tfrac{\pi}{2})\) satisfies inequality (4.11). On the one hand, in combination with the explicit formula
\[\omega_{\mathbb{S}}(iy,\,\mathrm{d}z)=(2\pi)^{-1}\cos y\,(\cosh x-\sin y)^{-1 }\,\mathrm{d}x\,,\quad z=x+i\pi/2\in A\,,\]
this allows us to estimate the r.h.s. of (4.14) from below:
\[\int_{A}\mathrm{G}_{\mathbb{S}_{\delta}}(iy,z)\,\omega_{\mathbb{S }}(iy,\,\mathrm{d}z) \geqslant \frac{h(0;y,\delta)}{2\pi}\int_{\mathbb{R}}\frac{(1-\cos\delta_{ 0})\sin 2\delta_{0}}{(\cosh x-\cos\delta_{0})(\cosh x+1)}\,\mathrm{d}x \tag{4.16}\] \[= m(\delta_{0})h(0;y,\delta)\quad\text{for all }\delta\in(0, \delta_{0})\text{ and all }y\in K,\]
where \(m(\delta_{0})\) is a positive constant depending only on \(\delta_{0}\). On the other hand, in view of (4.11), we have
\[\int_{A}\mathrm{G}_{\mathbb{S}_{\delta}}(iy,z)\,\omega_{\mathbb{S }}(iy,\,\mathrm{d}z) \leqslant h(0;y,\delta)\int_{A}\omega_{\mathbb{S}}(iy,\,\mathrm{d} z)=h(0;y,\delta)\left(\frac{y}{\pi}+\frac{1}{2}\right)\] \[\leqslant h(0;y,\delta)\left(1-\frac{2\delta_{0}}{\pi}\right)\,. \tag{4.17}\]
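The value \(\int_{A}\omega_{\mathbb{S}}(iy,\,\mathrm{d}z)=\frac{y}{\pi}+\frac{1}{2}\) used here follows from the explicit density above together with the definite integral
\[\int_{-\infty}^{+\infty}\frac{\mathrm{d}x}{\cosh x-\sin y}=\frac{\pi+2y}{\cos y},\qquad y\in\big{(}-\tfrac{\pi}{2},\tfrac{\pi}{2}\big{)},\]
which can be evaluated by the substitution \(u=\tanh(x/2)\); indeed, \((2\pi)^{-1}\cos y\cdot\frac{\pi+2y}{\cos y}=\frac{y}{\pi}+\frac{1}{2}\). As a consistency check, at \(y=0\) this gives \(\omega_{\mathbb{S}}(0,A)=1/2\), as it must by symmetry.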
The main ingredient of the proof is the following:
**Claim 1.** As \(\delta\to 0^{+}\),
\[\sup_{y\in K}\,\frac{1}{h(0;y,\delta)}\left|\int_{B}\mathrm{G}_{\mathbb{S}_{\delta}}(iy,z)\,\omega_{\mathbb{S}_{\delta}^{*}}(iy,\,\mathrm{d}z)-\frac{1}{2}\int_{A}\mathrm{G}_{\mathbb{S}_{\delta}}(iy,z)\,\omega_{\mathbb{S}}(iy,\,\mathrm{d}z)\right|\;\to\;0. \tag{4.18}\]
To prove this claim we first notice that, due to symmetry, one can remove the factor \(1/2\) in (4.18) if the set \(A\) in the second integral is replaced by \(B\). Moreover, by the monotonicity property of harmonic measure, see e.g. [37, Corollary 4.3.9 on p. 102],
\[\mu_{y,\delta}\,:=\,\omega_{\mathbb{S}_{\delta}^{*}}(iy,\cdot)\big{|}_{B}\,- \,\omega_{\mathbb{S}}(iy,\cdot)\big{|}_{B}\]
is a non-negative bounded measure on \(B\). Recall also that \(0<h(x;y,\delta)\leqslant h(0;y,\delta)\) for all \(x\in\mathbb{R}\), see (4.11). Therefore, in order to prove (4.18), it is sufficient to show that
\[\sup_{y\in K}\mu_{y,\delta}(B)\to 0\quad\text{as $\delta\to 0^{+}$.} \tag{4.19}\]
To this end, we will take advantage of conformal invariance of harmonic measure. Denote by \(F_{\delta}\), \(\delta\in(0,\delta_{0})\), and \(F\) the conformal mappings of \(\mathbb{S}_{\delta}^{*}\) and \(\mathbb{S}\), respectively, onto \(\mathbb{D}\) with the normalization \(F(0)=F_{\delta}(0)=0\), \(F^{\prime}(0)>0\), \(F^{\prime}_{\delta}(0)>0\). Clearly, \(F(z)=(e^{z}-1)/(e^{z}+1)\) extends holomorphically and injectively to the wider strip \(|\operatorname{Im}z|<\pi\) and hence we can write \(F_{\delta}=f_{\delta}^{-1}\circ F\), where \(f_{\delta}\) is the conformal mapping of \(\mathbb{D}\) onto the Jordan domain \(D_{\delta}:=F(\mathbb{S}_{\delta}^{*})\) with \(f_{\delta}(0)=0\) and \(f^{\prime}_{\delta}(0)>0\).
Denote \(\mathcal{B}:=F(B)=\{z\in\partial\mathbb{D}:\;\pi/2<\arg z<\pi\}\). By conformal invariance of harmonic measure, \(\omega_{\mathbb{S}}\big{(}iy,B\big{)}=\omega_{\mathbb{D}}\big{(}F(iy), \mathcal{B}\big{)}\) and \(\omega_{\mathbb{S}_{\delta}^{*}}\big{(}iy,B\big{)}=\omega_{\mathbb{D}}\big{(} f_{\delta}^{-1}(F(iy)),f_{\delta}^{-1}(\mathcal{B})\big{)}\).
Note that \(\mathbb{D}\subset D_{\delta}\) for any \(\delta\in(0,\delta_{0})\) and that \(D_{\delta}\to\mathbb{D}\) w.r.t. \(0\) in the sense of kernel convergence when \(\delta\to 0^{+}\). Denote \(\,z=z(y):=F(iy)\,\) and \(\,z^{\prime}=z^{\prime}(y,\delta):=F_{\delta}(iy)=f_{\delta}^{-1}(z(y))\). As \(\delta\to 0^{+}\), by Caratheodory's kernel convergence theorem, we have \(z^{\prime}(y,\delta)\to z(y)\) uniformly w.r.t. \(y\in K\). Note also that by the Schwarz lemma, for any \(y\in K\) and any \(\delta\in(0,\delta_{0})\) the estimate \(|z^{\prime}(y,\delta)|\leqslant|z(y)|\leqslant r_{0}:=\max_{y\in K}|F(iy)|<1\) holds. Therefore, (4.19) and hence (4.18) follow from Lemmas 3.3 and 4.7.
Taking into account (4.14) - (4.18), we see that it remains to estimate the part of the integral in (4.15) taken over \(C_{\delta}\). We claim that
**Claim 2.** As \(\delta\to 0^{+}\),
\[\sup_{y\in K}\,\frac{1}{h(0;y,\delta)}\left|\int_{C_{\delta}}\mathrm{G}_{\mathbb{S}_{\delta}}(iy,z)\,\omega_{\mathbb{S}_{\delta}^{*}}(iy,\,\mathrm{d}z)\right|\;\to\;0. \tag{4.20}\]
Fix some \(\delta\in(0,\delta_{0})\) and \(y\in K\). Clearly, \(0<\mathrm{G}_{\mathbb{S}_{\delta}}(iy,z)\leqslant h(0;y,\delta)\) for any \(z\in C_{\delta}\). Therefore, it is sufficient to show that
\[\sup_{y\in K}\omega_{\mathbb{S}_{\delta}^{*}}(iy,C_{\delta})\to 0\quad\text{as $ \delta\to 0^{+}$.} \tag{4.21}\]
Consider again the mappings \(F_{\delta}\) introduced above. As we have shown in the proof of Claim 1, \(|F_{\delta}(iy)|\leqslant r_{0}\), where \(r_{0}\in[0,1)\) depends only on the compact set \(K\). Therefore, to prove (4.21) we only need to check that the linear Lebesgue measure of \(F_{\delta}(C_{\delta})\) tends to zero as \(\delta\to 0^{+}\). Suppose this is not the case. Then there exists a sequence \((\delta_{n})\subset(0,\delta_{0})\) converging to \(0\) and a non-degenerate closed arc \(\mathcal{C}\subset\partial\mathbb{D}\) such that for each \(n\in\mathbb{N}\), \(g_{n}:=F_{\delta_{n}}^{-1}:\mathbb{D}\to\mathbb{S}_{\delta_{n}}^{*}\) extends continuously to \(\mathcal{C}\), with \(g_{n}(\mathcal{C})\subset C_{\delta_{n}}\).
Since \(g_{n}\) is continuous on the compact set \(\{\sigma r:\sigma\in\mathcal{C},\ r\in[0,1]\}\), there exists \(r_{n}\in(0,1)\) with the property that \(g_{n}(\mathcal{C}_{n})\), where \(\mathcal{C}_{n}:=\{\sigma r_{n}:\sigma\in\mathcal{C}\}\), lies in the \(1/n\) - neighbourhood of \(C_{\delta_{n}}\). Thus, \(\operatorname{diam}g_{n}(\mathcal{C}_{n})\to 0\). Note that the functions \(g_{n}\) are "uniformly normal" in \(\mathbb{D}\) in the sense that
\[\sup_{z\in\mathbb{D}}\frac{|g_{n}^{\prime}(z)|(1-|z|^{2})}{1+|g_{n}(z)|^{2}} \leqslant M\quad\text{for all }n\in\mathbb{N} \tag{4.22}\]
and some constant \(M>0\) independent of \(n\). Indeed, \(\mathbb{S}_{\delta}^{*}\subset\mathbb{S}_{\delta_{0}}^{*}\). Therefore, \(g_{n}=F_{\delta_{0}}^{-1}\circ\phi_{n}\), where \(\phi_{n}\) is a holomorphic self-map of \(\mathbb{D}\). Taking into account that \(F_{\delta_{0}}^{-1}\) is univalent in \(\mathbb{D}\) and hence normal (see [34, Lemma 9.3 on p. 262]) the desired conclusion (4.22) follows from the Schwarz - Pick Lemma. Now by the No-Koebe-arcs theorem for sequences of holomorphic functions (see [34, Theorem 9.2 and the subsequent remark on p. 265]) it follows that \((g_{n})\) converges locally uniformly in \(\mathbb{D}\) to a constant, which is impossible because \(g_{n}\to F^{-1}\) by construction. This contradiction proves Claim 2.
Now the conclusion of the proposition follows easily from (4.14) - (4.18) and (4.20).
We are now in a position to prove Theorem 4.2.
Proof of Theorem 4.2.: Let \(\delta_{j}(t):=\delta_{\Omega,j}(t)\) and let \(K\) be a compact subset of the interval \((-\frac{\pi}{2},\frac{\pi}{2})\). It suffices to prove that there exist \(T_{j}<0\) and \(c_{j}>0\), \(j=1,2\), such that
\[\log\frac{\mathcal{R}(t+iy,\Omega)}{\mathcal{R}(iy,\mathbb{S})}\geqslant c_{j }\delta_{j}(t)\quad\text{ for all }t\leqslant T_{j}\text{ and all }y\in K\,. \tag{4.23}\]
Further, it clearly suffices to consider the case \(j=2\). Fix \(t\leqslant 0\). Since \(\mathbb{S}_{\delta_{2}(t)}^{*}+t\subset\Omega\), we have
\[\mathcal{R}(t+iy,\Omega)\geqslant\mathcal{R}\left(t+iy,\mathbb{S}_{\delta_{2} (t)}^{*}+t\right)=\mathcal{R}\left(iy,\mathbb{S}_{\delta_{2}(t)}^{*}\right) \quad\text{ for all }y\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\,.\]
Hence Proposition 4.4 implies that
\[\log\left[\frac{\mathcal{R}(t+iy,\Omega)}{\mathcal{R}(iy,\mathbb{S})}\right] \geqslant\log\left[\frac{\mathcal{R}\left(iy,\mathbb{S}_{\delta_{2}(t)}^{*} \right)}{\mathcal{R}(iy,\mathbb{S})}\right]\geqslant\Phi_{\delta_{2}(t)}(y) \cdot\log\left[\frac{\mathcal{R}\left(iy,\mathbb{S}_{\delta_{2}(t)}\right)}{ \mathcal{R}(iy,\mathbb{S})}\right] \tag{4.24}\]
for any \(t\leqslant 0\) and any \(y\in(-\frac{\pi}{2},\frac{\pi}{2})\), where \(\Phi_{\delta}\) converges to \(1/2\) uniformly on \(K\) as \(\delta\to 0^{+}\). By employing the well-known explicit expression
\[\mathcal{R}\left(iy,\mathbb{S}_{\delta}\right)=\frac{2}{\lambda_{\mathbb{S}_{\delta}}(iy)}=2\,\frac{\pi+\delta}{\pi}\cos\left(\frac{\pi}{2}\cdot\frac{2y-\delta}{\pi+\delta}\right)\]
for the conformal radius of the strip \(\mathbb{S}_{\delta}\) it is easily checked that
\[\lim_{\delta\to 0}\frac{1}{\delta}\log\left[\frac{\mathcal{R}(iy,\mathbb{S}_{ \delta})}{\mathcal{R}(iy,\mathbb{S})}\right]=\frac{1}{\pi}+\frac{(\pi+2y)\tan y }{2\pi}\in(0,+\infty)\]
holds uniformly w.r.t. \(y\in K\). Combining this fact with \(\Phi_{\delta}\to 1/2\) uniformly on \(K\) as \(\delta\to 0^{+}\) and \(\lim_{t\to-\infty}\delta_{2}(t)=0\) shows that inequality (4.24) implies the estimate (4.23) for \(j=2\). As noted above, this concludes the proof of Theorem 4.2.
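For completeness, the computation behind the limit used in the preceding proof runs as follows. Writing \(g(\delta):=\frac{\pi}{2}\cdot\frac{2y-\delta}{\pi+\delta}\), we have
\[\log\frac{\mathcal{R}(iy,\mathbb{S}_{\delta})}{\mathcal{R}(iy,\mathbb{S})}=\log\Big{(}1+\frac{\delta}{\pi}\Big{)}+\log\frac{\cos g(\delta)}{\cos y},\qquad g(0)=y,\quad g^{\prime}(0)=-\frac{\pi+2y}{2\pi},\]
so the derivative of the left-hand side at \(\delta=0\) equals \(\frac{1}{\pi}-\tan(y)\,g^{\prime}(0)=\frac{1}{\pi}+\frac{(\pi+2y)\tan y}{2\pi}\); the convergence is uniform w.r.t. \(y\in K\) since \(g\) and \(g^{\prime}\) depend continuously (in fact, affinely) on \(y\).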
### Necessity of condition (4.1)
In this subsection, we prove the necessity of the condition (4.1). The proof relies on the fact that the integral in (4.5) can be estimated from above by the integral of the function \(\delta_{\Omega}\). This is the content of the following theorem.
**Theorem 4.8**.: _Let \(\Omega\) be a proper subdomain of \(\mathbb{C}\) which is starlike at infinity and such that the standard strip \(\mathbb{S}=\{z\in\mathbb{C}:\,|\operatorname{Im}z|<\frac{\pi}{2}\}\) is a maximal horizontal strip contained in \(\Omega\). Then for any compact set \(K\subset(-\frac{\pi}{2},\frac{\pi}{2})\) there are constants \(c>0\) and \(C>0\) such that for any \(y\in K\),_
\[\int\limits_{-\infty}^{0}\log\frac{\mathcal{R}(t+iy,\Omega)}{\mathcal{R}(t+ iy,\mathbb{S})}\;\mathrm{d}t\leqslant C+2c\int\limits_{-\infty}^{0}\delta_{ \Omega}(t)\;\mathrm{d}t. \tag{4.25}\]
Hence Theorem 4.8 in conjunction with Lemma 3.2 implies that the integral of the ratio of conformal radii in (4.25) converges whenever the domain \(\Omega\) is conformal at \(-\infty\) w.r.t. \(\mathbb{S}\).
In order to prove Theorem 4.8, we construct a family of two-slit domains \(D_{0}(t)\) that all contain the Koenigs domain \(\Omega\). For the sake of brevity, denote
\[\delta:=\delta_{\Omega}(1+t/2),\quad\delta_{j}:=\delta_{\Omega,j}(1+t/2),\ j=1,2, \quad t\leqslant 0.\]
Due to \(\Omega\) being starlike at infinity, it is easy to see that
\[D_{0}(t):=\mathbb{C}\setminus\left\{z\in\mathbb{C}:\,\operatorname{Re}z \leqslant 1+\tfrac{t}{2}\text{ and }\operatorname{Im}z\in\{-\tfrac{\pi}{2}-\delta_{1},\tfrac{\pi}{2}+\delta_{ 2}\}\right\},\quad t\leqslant 0,\]
indeed contains \(\Omega\). Let \(y\in(-\frac{\pi}{2},\frac{\pi}{2})\). Then
\[\log\frac{\mathcal{R}(t+iy,\Omega)}{\mathcal{R}(t+iy,\mathbb{S})}\leqslant \log\frac{\mathcal{R}(t+iy,D_{0}(t))}{\mathcal{R}(t+iy,\mathbb{S})}. \tag{4.26}\]
Furthermore, let \(S(t):=\mathbb{S}\left(-\frac{\pi}{2}-\delta_{1},\frac{\pi}{2}+\delta_{2}\right)\), the maximal strip between the two slits of the domain \(D_{0}(t)\), and let
\[D(t):=D_{0}(t)-\left(1+\tfrac{t}{2}\right)=\mathbb{C}\setminus\left\{z\in \mathbb{C}:\,\operatorname{Re}z\leqslant 0\text{ and }\operatorname{Im}z\in\{-\tfrac{\pi}{2}-\delta_{1},\tfrac{\pi}{2}+\delta_{2} \}\right\}.\]
Bearing in mind that the hyperbolic metric in a strip remains invariant under horizontal translations, it follows that
\[\log\frac{\mathcal{R}(t+iy,\Omega)}{\mathcal{R}(t+iy,\mathbb{S})}\leqslant \log\frac{\mathcal{R}(\tfrac{t}{2}-1+iy,D(t))}{\mathcal{R}(\tfrac{t}{2}-1+iy, S(t))}+\log\frac{\mathcal{R}(iy,S(t))}{\mathcal{R}(iy,\mathbb{S})}. \tag{4.27}\]
The idea for the proof of Theorem 4.8 is to analyze the asymptotic behavior of the summands in (4.27). This task is rather elementary for the second term, while the first term requires a bit more delicate work making use of an estimate of the Green's function, which we establish in the following lemma. Denote by \(D_{*}\) the "standard" two-slit domain \(\mathbb{C}\setminus\{w\in\mathbb{C}:\operatorname{Re}w\leqslant 0,\ |\operatorname{Im}w |=\pi/2\}\).
**Lemma 4.9**.: _For all \(w\in\mathbb{S}\) with \(\operatorname{Re}w<0\) and all \(s\in D_{*}\cap\partial\mathbb{S}\), we have_
\[\operatorname{G}_{D_{*}}(w,s)\leqslant\log\frac{1+e^{\operatorname{Re}w}}{1-e ^{\operatorname{Re}w}}. \tag{4.28}\]
Proof.: Using the explicit formula for the Green's function of \(\mathbb{S}\), we have
\[\operatorname{G}_{\mathbb{S}}(z_{1},z_{2}) = \frac{1}{2}\log\frac{\cosh(x_{2}-x_{1})+\cos(y_{2}+y_{1})}{\cosh(x_{2}-x_{1})-\cos(y_{2}-y_{1})} \tag{4.29}\] \[\leqslant \log\frac{1+e^{-|x_{2}-x_{1}|}}{1-e^{-|x_{2}-x_{1}|}},\quad x_{k}:=\operatorname{Re}z_{k},\ y_{k}:=\operatorname{Im}z_{k},\ k=1,2,\]
for any pair of points \(z_{1},z_{2}\in\mathbb{S}\).
In order to apply the above formula for estimating \(\operatorname{G}_{D_{*}}\), we notice that \(g(z):=z+(1+e^{2z})/2\) maps \(\mathbb{S}\) conformally onto \(D_{*}\). Indeed, \(f(\zeta):=g(\log\zeta)=\log(\zeta)+(1+\zeta^{2})/2\) is the Schwarz - Christoffel map of the right-half plane onto \(D_{*}\) with the normalization \(f(0)=\infty\), \(f(\pm i)=\pm i\pi/2\), as follows from the equality
\[\frac{f^{\prime\prime}(\zeta)}{f^{\prime}(\zeta)}=\frac{1}{\zeta-i}+\frac{1}{ \zeta+i}-\frac{1}{\zeta},\quad\operatorname{Re}\zeta>0.\]
Let \(w\in U:=\{w\in\mathbb{S}:\ \operatorname{Re}w<0\}\). It is easy to see that \(g(U)\supset U\). Therefore, \(z:=g^{-1}(w)\in U\). It follows that
\[\operatorname{Re}g^{-1}(w)=\operatorname{Re}w-(1+\operatorname{Re}e^{2z})/2<\operatorname{Re}w\quad\text{for all }w\in U, \tag{4.30}\]
where the strict inequality holds because \(|e^{2z}|=e^{2\operatorname{Re}z}<1\) and hence \(1+\operatorname{Re}e^{2z}>0\) for \(z\in U\).
Moreover, it is easy to see that \(g\big{(}\{w\in\mathbb{S}:\ \operatorname{Re}w>0\}\big{)}\) contains the set \(A:=D_{*}\cap\partial\mathbb{S}\). Hence,
\[\operatorname{Re}g^{-1}(s)>0\qquad\text{for all }s\in A. \tag{4.31}\]
Combining now (4.29), (4.30) and (4.31), and taking into account that \(u\mapsto\log\frac{1+e^{-u}}{1-e^{-u}}\) is decreasing on \((0,+\infty)\) while \(\operatorname{Re}g^{-1}(s)-\operatorname{Re}g^{-1}(w)>-\operatorname{Re}w>0\), we obtain
\[\operatorname{G}_{D_{*}}(w,s)=\operatorname{G}_{\mathbb{S}}\big{(}g^{-1}(w),g^ {-1}(s)\big{)}\leqslant\log\frac{1+e^{\operatorname{Re}w}}{1-e^{\operatorname {Re}w}}\]
for any \(w\in U\) and any \(s\in A\), as desired.
**Lemma 4.10**.: _For any \(w\in U:=\{w\in\mathbb{S}:\operatorname{Re}w<0\}\), we have_
\[\log\frac{\mathcal{R}(w,D_{*})}{\mathcal{R}(w,\mathbb{S})}\leqslant\log\frac{1 +e^{\operatorname{Re}w}}{1-e^{\operatorname{Re}w}}. \tag{4.32}\]
Proof.: According to (3.2), for any \(w\in\mathbb{S}\), the l.h.s. of (4.32) equals
\[\int_{A}\mathrm{G}_{D_{*}}(w,s)\,\omega_{\mathbb{S}}(w,\mathrm{d}s),\quad A:=D_{* }\cap\partial\mathbb{S},\]
and inequality (4.32) follows immediately from (4.28), taking into account that \(\omega_{\mathbb{S}}(w,\cdot)\) is a probability measure.
Proof of Theorem 4.8.: Using the explicit formula for the conformal radius of a strip, it is easy to see that there exists a constant \(c>0\), depending only on the compact set \(K\subset\big{(}-\frac{\pi}{2},\frac{\pi}{2}\big{)}\), such that
\[\log\frac{\mathcal{R}(iy,S(t))}{\mathcal{R}(iy,\mathbb{S})}\leqslant c\delta= c\,\delta_{\Omega}\big{(}1+t/2\big{)}\quad\text{for all $t\leqslant 0$ and all $y\in K$.} \tag{4.33}\]
Moreover, recall that the function \(\delta_{\Omega}\) is finite and monotonic on \((-\infty,1]\). In particular, it is integrable on \([0,1]\). Therefore, in view of inequality (4.27), it remains to show that
\[\sup_{y\in K}\int\limits_{-\infty}^{0}\log\frac{\mathcal{R}(\frac{t}{2}-1+iy, D(t))}{\mathcal{R}(\frac{t}{2}-1+iy,S(t))}\,\mathrm{d}t\ <\ +\infty. \tag{4.34}\]
The linear function
\[F_{t}(z):=\frac{\pi}{\pi+\delta_{1}+\delta_{2}}\left(z+i\,\tfrac{\delta_{1}- \delta_{2}}{2}\right),\]
maps the strip \(S(t)\) conformally onto the standard strip \(\mathbb{S}\) and the two-slit domain \(D(t)\) onto \(D_{*}\). Therefore, the integrand in (4.34) equals
\[\log\frac{\mathcal{R}(w(t),D_{*})}{\mathcal{R}(w(t),\mathbb{S})},\quad\text{ where }\ w(t):=F_{t}\big{(}\tfrac{t}{2}-1+iy\big{)}.\]
By monotonicity of \(\delta_{\Omega,j}\), \(j=1,2\),
\[\operatorname{Re}w(t)\leqslant\frac{\pi}{\pi+\delta_{\Omega,1}(1)+\delta_{ \Omega,2}(1)}\left(\frac{t}{2}-1\right)\quad\text{for any $t\leqslant 0$,}\]
and we are done: by Lemma 4.10 the integrand in (4.34) is bounded by \(\log\frac{1+e^{\operatorname{Re}w(t)}}{1-e^{\operatorname{Re}w(t)}}\), which, by the last estimate, decays exponentially as \(t\to-\infty\) uniformly w.r.t. \(y\in K\); hence (4.34) follows.
Proof of necessity in Theorem 4.1.: Suppose that \(\Omega\) is conformal at \(-\infty\) w.r.t. \(S\). By Lemma 3.2, \(\delta_{\Omega}\) is integrable over \((-\infty,0]\). Hence, Theorem 4.8 shows that condition (4.1) holds.
### Completion of the proof of Theorem 4.1: locally uniform convergence of (4.1)
We are left to prove that if (4.5) holds for one point \(w_{0}\in S\), then it holds for all \(w_{0}\in S\) and the integral in (4.5) converges locally uniformly w.r.t. \(w_{0}\in S\). Clearly, we may assume \(S=\mathbb{S}\).
Let \(S(t)\) and \(D(t)\) be defined as in Subsection 4.3. For \(t\leqslant 0\) and \(y\in\big{(}-\frac{\pi}{2},\frac{\pi}{2}\big{)}\) we denote
\[F_{1}(t,y):=\log\frac{\mathcal{R}(\tfrac{t}{2}-1+iy,D(t))}{\mathcal{R}(\tfrac {t}{2}-1+iy,S(t))},\ \ F_{2}(t,y):=\log\frac{\mathcal{R}(iy,S(t))}{\mathcal{R}(iy,\mathbb{S})},\ \ J_{k}(y):=\int\limits_{-\infty}^{0}F_{k}(t,y)\,\mathrm{d}t,\ k=1,2.\]
The function \(\delta_{\Omega}\) is monotonic on \((-\infty,1]\) and hence integrable on any compact interval contained in \((-\infty,1]\). Therefore, from Theorem 4.2 it follows that if (4.5) holds for at least one point \(w_{0}\in\mathbb{S}\), then the function \(\delta_{\Omega}\) is integrable on \((-\infty,1]\).
Thanks to inequality (4.33), the integrability of \(\delta_{\Omega}\) on \((-\infty,1]\) implies that the integral \(J_{2}(y)\) converges uniformly w.r.t. \(y\) on any compact interval \([y_{1},y_{2}]\subset\big{(}-\frac{\pi}{2},\frac{\pi}{2}\big{)}\). Moreover, the argument used in the proof of Theorem 4.8 shows that the integral \(J_{1}(y)\) converges uniformly on the whole interval \(\big{(}-\frac{\pi}{2},\frac{\pi}{2}\big{)}\).
Now let \(w_{0}:=x+iy\in\mathbb{S}\). By inequality (4.27), for all \(t\leqslant\min\{-x,0\}\) we have
\[0\leqslant\log\frac{\mathcal{R}(t+w_{0},\Omega)}{\mathcal{R}(t+w_{0},\mathbb{ S})}\leqslant F_{1}(t+x,y)+F_{2}(t+x,y).\]
Thus, for each rectangle \(R:=[x_{1},x_{2}]\times[y_{1},y_{2}]\) contained in \(\mathbb{S}\), the integral in (4.5) converges uniformly w.r.t. \(w_{0}\in R\). This completes the proof.
## 5. The angular derivative problem for parabolic petals
In this section we study conformality of parabolic petals at the Denjoy - Wolff point. An important role is played by the so-called hyperbolic step \(q(z,s):=\lim_{t\to+\infty}\mathrm{d}_{\mathbb{D}}\big{(}\phi_{t+s}(z),\phi_{t} (z)\big{)}\), where \(\mathrm{d}_{\mathbb{D}}\) stands for the hyperbolic distance in \(\mathbb{D}\). If \(q(z,s)=0\) for some \(z\in\mathbb{D}\) and \(s>0\), then \(q\equiv 0\) on \(\mathbb{D}\times(0,+\infty)\) and the one-parameter semigroup \((\phi_{t})\) is said to be of zero hyperbolic step. Otherwise, i.e. if \(q(z,s)>0\) for some (and hence all) \(z\in\mathbb{D}\) and \(s>0\), we say that \((\phi_{t})\) is of positive hyperbolic step.
**Theorem 5.1**.: _Let \(\Delta\) be a parabolic petal of a one-parameter semigroup \((\phi_{t})\) with Denjoy - Wolff point \(\tau\) and associated Koenigs function \(h\). The following statements hold:_
1. _The petal_ \(\Delta\) _is conformal at_ \(\tau\) _w.r.t._ \(\mathbb{D}\) _if and only if the angular limit_ \[L:=\angle\lim_{z\to\tau}(z-\tau)h(z)\] _is finite._
2. _If_ \((\phi_{t})\) _is of positive hyperbolic step, then_ \(\Delta\) _is conformal at_ \(\tau\) _w.r.t._ \(\mathbb{D}\) _(and hence_ \(L\neq\infty\)_)._
_Remark 5.2_.: The angular limit \(L\) in statement (A) above was previously considered by Contreras, Diaz-Madrigal and Pommerenke in a more general context of discrete iteration, see [20, Theorems 4.1 and 6.2]. Moreover, this limit is directly related to the conformality problem considered by Betsakos [7, Theorem 3] and Karamanlis [28, Theorem 2] for parabolic one-parameter semigroups of positive hyperbolic step. As we will see in the proof of Theorem 5.1, the presence of a parabolic petal ensures that the angular limit \(L\) exists and does not vanish, but it can be infinite. In fact, as illustrated by Example 5.4 given after the proof, for semigroups of zero hyperbolic step both cases \(L\in\mathbb{C}\setminus\{0\}\) and \(L=\infty\) are possible.
Proof of Theorem 5.1.: First of all, recall that the existence of a parabolic petal \(\Delta\) implies that the one-parameter semigroup \((\phi_{t})\) is parabolic. Moreover, the image of \(\Delta\) w.r.t. the Koenigs function \(h\) of \((\phi_{t})\) is a half-plane bounded by a line parallel to \(\mathbb{R}\).
If \((\phi_{t})\) is of positive hyperbolic step, then \(\Omega:=h(\mathbb{D})\) is contained in some half-plane \(U\) with \(\partial U\) parallel to \(\mathbb{R}\), see [14, Theorem 9.3.5]. Let \(T\) be a linear-fractional transformation of \(\mathbb{D}\) onto \(U\) with \(T(1)=\infty\). Choose a number \(c\in\mathbb{C}\) such that \(w\mapsto w+c\) maps \(U\) onto \(h(\Delta)\). Then
\[\varphi(z):=h^{-1}\big{(}T(z)+c\big{)},\quad z\in\mathbb{D},\]
is a conformal mapping of \(\mathbb{D}\) onto \(\Delta\) with \(\varphi(1)=\tau\) in the sense of the angular limit. Further, denote \(\psi:=T^{-1}\circ h:\mathbb{D}\to\mathbb{D}\). Since \(\psi\circ\varphi=T^{-1}\circ(T+c)\) has a regular contact point at \(\zeta=1\), by the Chain Rule for angular derivatives, see e.g. [19, Lemma 2] or [26, Lemma 5.1], \(\varphi\) also has a regular contact point at \(\zeta=1\), which means that \(\Delta\) is conformal at \(\tau\) w.r.t. \(\mathbb{D}\). This proves statement (B).
To prove (A), fix some \(w_{0}\in\mathbb{C}\setminus\Omega\) and denote \(g:=1/(h-w_{0})\). Further denote by \(D\) the image of the half-plane \(h(\Delta)\) w.r.t. the map \(w\mapsto 1/(w-w_{0})\). Clearly, \(D\) is a disk contained in \(g(\mathbb{D})\), with \(0\in\partial D\). Let \(f\) be the linear function mapping \(\mathbb{D}\) onto \(D\) and normalized by \(f(\tau)=0\). Then for a suitably chosen circular arc \(\Gamma\subset\mathbb{D}\) with the end-point at \(\tau\), \(g^{-1}\big{(}f(\Gamma)\big{)}\) is the forward orbit of some point \(z_{0}\in\Delta\) w.r.t. the semigroup \((\phi_{t})\). In particular, \(g^{-1}(f(z))\to\tau\) as \(\Gamma\ni z\to\tau\). Therefore, by the Comparison Theorem, see e.g. [34, Theorem 10.6 on p. 307], there exists a finite angular derivative \(g^{\prime}(\tau)\). In turn, this implies that \(L:=\angle\lim_{z\to\tau}(z-\tau)h(z)\) exists, with \(L=1/g^{\prime}(\tau)\in\overline{\mathbb{C}}\setminus\{0\}\).
Note that \(\varphi(\zeta):=g^{-1}\big{(}f(\tau\zeta)\big{)}\) is a conformal mapping of \(\mathbb{D}\) onto \(\Delta\) and that by Lindelof's Theorem, see e.g. [34, Theorem 9.3 on p. 268], \(\angle\lim_{\zeta\to 1}\varphi(\zeta)=\tau\). Recall that since \(\varphi\in\operatorname{Hol}(\mathbb{D},\mathbb{D})\) and since \(\zeta=1\) is a contact point for \(\varphi\), the angular derivative \(\varphi^{\prime}(1):=\angle\lim_{\zeta\to 1}(\varphi(\zeta)-\tau)/(\zeta-1)\) does exist, finite or infinite, and \(\varphi^{\prime}(1)\neq 0\). The following simple calculation
\[\tau f^{\prime}(\tau)=(g\circ\varphi)^{\prime}(1)=\lim_{(0,1)\ni x\to 1} \left(\frac{g\big{(}\varphi(x)\big{)}}{\varphi(x)-\tau}\cdot\frac{\varphi(x)- \tau}{x-1}\right)\]
shows that \(R(z):=g(z)/(z-\tau)\), \(z\in\mathbb{D}\), has at \(z=\tau\) a finite asymptotic value \(\tau f^{\prime}(\tau)/\varphi^{\prime}(1)\). Taking into account that \(g\) is univalent and does not vanish in \(\mathbb{D}\) and arguing as in [34, proof of Theorem 10.5, pp. 305-306], we see that \(g^{\prime}(\tau)=\angle\lim_{z\to\tau}R(z)=\tau f^{\prime}(\tau)/\varphi^{ \prime}(1)\). It follows that \(L=1/g^{\prime}(\tau)\neq\infty\) if and only if \(\varphi^{\prime}(1)\neq\infty\). This completes the proof of (A).
_Remark 5.3_.: Note that the centre of the disk \(D\) considered in the above proof lies on the imaginary axis. It follows that under the hypothesis of Theorem 5.1, if \(L\neq\infty\) then \(\operatorname{Re}(\overline{\tau}L)=0\).
**Example 5.4**.: Let \(h_{1}(z):=z/(1-z)^{2}\) be the classical Koebe function and let
\[h_{2}(z):=w(z)-i\sqrt{w(z)},\quad\text{where}\ \ w(z):=i(1+z)/(1-z),\ z\in \mathbb{D},\]
and \(\sqrt{w}\) stands for the branch of the square root that maps the upper half-plane onto the first quadrant. It is not difficult to see that \(\operatorname{Re}(1-z)^{2}h_{k}^{\prime}(z)>0\) for all \(z\in\mathbb{D}\), \(k=1,2\). Therefore, see e.g. [14, Theorem 9.4.11, p. 257], \(h_{k}\)'s are univalent in \(\mathbb{D}\) and the formula \(\phi_{t}^{k}:=h_{k}^{-1}\circ(h_{k}+t)\), \(t\geqslant 0\), \(k=1,2\), defines two parabolic one-parameter semigroups with the DW-point at \(\tau=1\). Since the image domains \(h_{k}(\mathbb{D})\) are not contained in any half-plane, both semigroups \((\phi_{t}^{k})\)
\(k=1,2\), are of zero hyperbolic step. Clearly, \(\lim_{z\to 1}(z-1)h_{1}(z)=\infty\). The corresponding semigroup \((\phi_{t}^{1})\) has two non-conformal parabolic petals \(\{z\in\mathbb{D}:\pm\operatorname{Im}z>0\}\). At the same time \(\angle\lim_{z\to 1}(z-1)h_{2}(z)=-2i\neq\infty\). Since \(\{\zeta:\operatorname{Im}\zeta>0\}\subset h_{2}(\mathbb{D})\), the semigroup \((\phi_{t}^{2})\) has a parabolic petal. By Theorem 5.1 (A), this parabolic petal is conformal at \(\tau\) w.r.t. \(\mathbb{D}\).
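The two limits invoked in this example are immediate to verify:
\[(z-1)h_{1}(z)=\frac{-z}{1-z}\longrightarrow\infty\quad\text{and}\quad(z-1)h_{2}(z)=-i(1+z)+i(1-z)\sqrt{w(z)}\longrightarrow-2i\]
as \(z\to 1\), since \(|(1-z)\sqrt{w(z)}|=\sqrt{|1-z|\,|1+z|}\to 0\). Note also that \(L=-2i\) satisfies \(\operatorname{Re}(\overline{\tau}L)=0\) with \(\tau=1\), in accordance with Remark 5.3.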
## 6. Concluding remarks and open questions
### Rate of convergence of regular backward orbits
If \(\sigma\in\partial\mathbb{D}\) is a repulsive fixed point of a one-parameter semigroup \((\phi_{t})\), then, as was proved in [13, Proposition 4.20],
\[\lim_{t\to-\infty}\frac{1}{t}\log|\phi_{t}(z)-\sigma|=\lambda:=G^{\prime}( \sigma)\quad\text{for any $z\in\Delta(\sigma)$,}\]
where \(G\) stands for the infinitesimal generator of \((\phi_{t})\). It is therefore natural to ask whether the limit
\[C(\sigma,z):=\lim_{t\to-\infty}e^{-\lambda t}|\phi_{t}(z)-\sigma| \tag{6.1}\]
does exist for \(z\in\Delta(\sigma)\). The answer is immediate if we recall that \((\phi_{t})\) admits at \(\sigma\) a pre-model \((\mathbb{H},\psi,z\mapsto e^{\lambda t}z)\), where \(\psi\) is a conformal mapping of \(\mathbb{H}\) onto \(\Delta(\sigma)\) with \(\psi(0)=\sigma\); see Remark 2.14. We have \(\phi_{t}(z)=\psi\big{(}e^{\lambda t}\psi^{-1}(z)\big{)}\) for any \(z\in\Delta(\sigma)\) and any \(t\in\mathbb{R}\). Hence for all \(z\in\Delta(\sigma)\),
\[C(\sigma,z)=\lim_{t\to-\infty}e^{-\lambda t}|\psi\big{(}e^{\lambda t}\psi^{-1 }(z)\big{)}-\sigma|=|\psi^{-1}(z)\psi^{\prime}(0)|.\]
If the hyperbolic petal \(\Delta(\sigma)\) is conformal, then by Remark 2.20, \(\psi^{\prime}(0)\in\mathbb{C}\setminus\{0\}\) and hence the limit (6.1) exists finitely and does not vanish for all points \(z\) in \(\Delta(\sigma)\). If \(\Delta(\sigma)\) is not conformal, then \(\psi^{\prime}(0)=\infty\) and hence \(C(\sigma,z)=+\infty\) for all \(z\in\Delta(\sigma)\).
Thus, for \(z_{0}\in\Delta(\sigma)\) condition (1.1) in Theorem 1.1 is equivalent to having \(C(\sigma,z_{0})\in(0,+\infty)\). Since the backward orbits in \(\Delta(\sigma)\) converge to \(\sigma\) non-tangentially, see Remark 2.15, \(C(\sigma,z_{0})\in(0,+\infty)\) if and only if \(\frac{1}{2}\log\big{(}e^{-\lambda t}(1-|\phi_{t}(z_{0})|^{2})\big{)}\) tends to a finite limit as \(t\to-\infty\). In turn, since by Remark 2.11, in \(\Delta(\sigma)\) the ODE (2.1) holds for all \(t\in\mathbb{R}\), the latter condition can be restated as the convergence of the integral
\[J(z_{0}):=\int\limits_{-\infty}^{0}\left(\frac{\lambda}{2}+\frac{\operatorname {Re}\big{[}G(\phi_{t}(z_{0}))\overline{\phi_{t}(z_{0})}\big{]}}{1-|\phi_{t}(z _{0})|^{2}}\right)\,\mathrm{d}t. \tag{6.2}\]
At the same time, combining (2.1) and (2.2), we get \(\phi_{t}^{\prime}(z_{0})=G\big{(}\phi_{t}(z_{0})\big{)}/G(z_{0})\). It follows that condition (1.1) in Theorem 1.1 is equivalent to the convergence of
\[I(z_{0}):=\int\limits_{-\infty}^{0}\left(\frac{A(z_{0})}{2}-\frac{\big{|}G( \phi_{t}(z_{0}))\big{|}}{1-|\phi_{t}(z_{0})|^{2}}\right)\,\mathrm{d}t, \tag{6.3}\]
where \(A(z_{0}):=\lim_{t\to-\infty}\big{|}G(\phi_{t}(z_{0}))\big{|}\,\lambda_{ \mathbb{D}}\big{(}\phi_{t}(z_{0})\big{)}=|G(z_{0})|\,\lambda_{\Delta(\sigma)}( z_{0})\).
### Boundary behaviour of the Koenigs function
For a non-elliptic one-parameter semigroup \((\phi_{t})\), the conformality of hyperbolic petals is related to the boundary behaviour of the Koenigs function \(h\). Let \(\sigma\in\partial\mathbb{D}\) be a repulsive fixed point of \((\phi_{t})\). Since \(S:=h\big{(}\Delta(\sigma)\big{)}\) is a maximal strip in \(\Omega:=h(\mathbb{D})\), by a result of Betsakos [8], the angular limit
\[\nu:=\angle\lim_{z\to\sigma}(z-\sigma)h^{\prime}(z)\;\in\;(0,+\infty)\]
exists and equals the width of the strip \(S\) divided by \(\pi\). (Since \(h^{\prime}=1/G\), the existence of the above limit follows also from [19, Theorem 1].)
For a suitable \(b\in\mathbb{C}\), the function \(\psi(\zeta):=h^{-1}\big{(}b+\nu\log\zeta\big{)}\) maps the right half-plane \(\mathbb{H}\) onto \(\Delta(\sigma)\). Hence, the conformality of the hyperbolic petal \(\Delta(\sigma)\) is equivalent to \(\psi^{\prime}(0)\neq\infty\). Thanks to the isogonality property (2.8), one can use the change of variables \(\zeta:=\psi^{-1}(z)\) to obtain
\[\angle\lim_{z\to\sigma}\big{[}h(z)-\nu\log(1-\overline{\sigma}z)\big{]}=b- \nu\log\big{(}-\overline{\sigma}\psi^{\prime}(0)\big{)}. \tag{6.4}\]
The above limit exists, and is finite or infinite, because \(\psi^{\prime}(0)\) exists with \(-\overline{\sigma}\psi^{\prime}(0)\in(0,+\infty)\cup\{+\infty\}\). Thus, \(\Delta(\sigma)\) is conformal if and only if the limit in the l.h.s. of (6.4) is _finite_.
### Semigroups with symmetry w.r.t. the real line
Consider a one-parameter semigroup \((\phi_{t})\) in \(\mathbb{D}\) with a repulsive fixed point at \(\sigma=-1\) and such that \(\phi_{t}\big{(}(-1,1)\big{)}\subset(-1,1)\) for all \(t\geqslant 0\). Fix some \(z_{0}\in\Delta(-1)\cap\mathbb{R}\). In this rather special case, Theorem 1.1 (excluding the part concerning uniformity of convergence) admits a simple proof based on the following elementary lemma.
**Lemma 6.1**.: _Let \(f\) be a function continuous on \([0,+\infty)\) and of class \(C^{2}\) on \((0,+\infty)\). Suppose that_
* \(f(0)=0\) _and_ \(f^{\prime}(x)>0\) _for all_ \(x\in(0,+\infty)\)_;_
* \(g(x):=f(x)/\big{(}xf^{\prime}(x)\big{)}\geqslant 1\) _for all_ \(x\in(0,+\infty)\)_._
_Fix \(\xi_{0}>0\). If \(f^{\prime}(x)\to A\) as \(x\to 0^{+}\) for some \(A\in(0,+\infty)\), then_
\[I:=\int\limits_{0}^{\xi_{0}}\frac{\log g(x)}{x}\;\mathrm{d}x<+\infty. \tag{6.5}\]
_If \(f(x)/x\to+\infty\) as \(x\to 0^{+}\), then \(I=+\infty\)._
Proof.: Suppose that \(f^{\prime}(x)\to A\in(0,+\infty)\) as \(x\to 0^{+}\). Then \(f(x)/x\to A\) and \(g(x)\to 1\) as \(x\to 0^{+}\). Since \(g\geqslant 1\) by (ii), we have \(\log g(x)\leqslant g(x)-1\leqslant 2\big{(}1-1/g(x)\big{)}\) whenever \(g(x)\leqslant 2\), and hence for all sufficiently small \(x>0\). Therefore, we see that the improper integral \(I\) converges because
\[\int\limits_{0}^{\xi_{0}}\left(1-\frac{1}{g(x)}\right)\frac{\mathrm{d}x}{x}= \int\limits_{0}^{\xi_{0}}\left(\frac{1}{x}-\frac{f^{\prime}(x)}{f(x)}\right) \,\mathrm{d}x=\lim_{a\to 0^{+}}\log\frac{x}{f(x)}\bigg{|}_{x=a}^{x=\xi_{0}}\in \;\mathbb{R}.\]
Similarly, if \(f(x)/x\to+\infty\) as \(x\to 0^{+}\), then
\[I\ \geqslant\ \int\limits_{0}^{\xi_{0}}\left(1-\frac{1}{g(x)}\right)\frac{ \mathrm{d}x}{x}=\lim\limits_{a\to 0^{+}}\log\frac{x}{f(x)}\Bigg{|}_{x=a}^{x= \xi_{0}}=\ +\infty.\]
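Two elementary examples illustrate the dichotomy in Lemma 6.1. For \(f(x)=x\) we have \(g\equiv 1\), so \(I=0<+\infty\), in accordance with \(f^{\prime}(x)\to 1\) as \(x\to 0^{+}\). On the other hand, if \(f(x)=\sqrt{x}\) for \(x\in(0,\xi_{0}]\) (extended suitably to \([0,+\infty)\)), then

\[g(x)=\frac{f(x)}{xf^{\prime}(x)}=\frac{\sqrt{x}}{x/(2\sqrt{x})}=2,\qquad I=\int\limits_{0}^{\xi_{0}}\frac{\log 2}{x}\,\mathrm{d}x=+\infty,\]

in accordance with \(f(x)/x=x^{-1/2}\to+\infty\) as \(x\to 0^{+}\).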
Proof of Theorem 1.1 in the symmetric case.: Passing from the unit disk \(\mathbb{D}\) to \(\mathbb{H}\) with the help of the Cayley map \(H(z):=(1+z)/(1-z)\), we get a one-parameter semigroup \((\varphi_{t})\) in \(\mathbb{H}\) with a repulsive fixed point at \(0\) and such that \(\varphi_{t}\big{(}(0,+\infty)\big{)}\subset(0,+\infty)\) for all \(t\geqslant 0\). It follows that the hyperbolic petal \(D\) of \((\varphi_{t})\) with the \(\alpha\)-point at \(0\) is symmetric w.r.t. the real line. Therefore, for all \(t\in\mathbb{R}\), all \(z\in D\), and a suitable \(\lambda>0\), we have
\[\varphi_{t}(z)=f\big{(}e^{\lambda t}f^{-1}(z)\big{)},\]
where \(f\) is a conformal mapping of \(\mathbb{H}\) onto \(D\) with \(f(0)=0\) and \(f\big{(}(0,+\infty)\big{)}\subset(0,+\infty)\). As a result, for \(z_{0}\in\Delta(-1)\cap\mathbb{R}\), the integral in (1.1) equals
\[\int\limits_{-\infty}^{0}\log\frac{f(e^{\lambda t}\xi_{0})}{e^{\lambda t}\xi_ {0}f^{\prime}(e^{\lambda t}\xi_{0})}\ \mathrm{d}t=\frac{1}{\lambda}\int\limits_{0}^{\xi_{0}}\left(\log\frac{f(x)}{xf ^{\prime}(x)}\right)\frac{\mathrm{d}x}{x}, \tag{6.6}\]
where \(\xi_{0}:=f^{-1}\big{(}H(z_{0})\big{)}\).
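Indeed, the change of variables \(x:=e^{\lambda t}\xi_{0}\) behind (6.6) satisfies

\[\mathrm{d}t=\frac{\mathrm{d}x}{\lambda x},\qquad t\in(-\infty,0]\ \longleftrightarrow\ x\in(0,\xi_{0}],\]

which transforms the integral over \(t\) on the left-hand side into the integral over \(x\) on the right-hand side.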
Applying the Schwarz - Pick Lemma, see e.g. [5, Theorem 6.4], to \(f:\mathbb{H}\to\mathbb{H}\), it is easy to see that the restriction of \(f\) to \([0,+\infty)\) satisfies the hypothesis of Lemma 6.1. If \(\Delta(-1)\) is conformal, then \(f^{\prime}(x)\to f^{\prime}(0)\in(0,+\infty)\) as \((0,+\infty)\ni x\to 0\) and hence by Lemma 6.1, the integral (6.6) converges.
Similarly, if \(\Delta(-1)\) is not conformal, then \(f(x)/x\to+\infty\) as \((0,+\infty)\ni x\to 0\). In this case, Lemma 6.1 guarantees that the integral (6.6) diverges.
The above proof of Theorem 1.1 for the symmetric case is based on the observation that \((\phi_{t})_{t\in\mathbb{R}}\) is a one-parameter group of hyperbolic automorphisms of \(\Delta(\sigma)\). In contrast to our proof for the general case, the fact that for \(t\geqslant 0\), \(\phi_{t}\)'s are well-defined holomorphic functions in the whole unit disk is not essential for the proof in the symmetric case.
Attempting to adapt the method used in this section to the general case, instead of the integral (6.5) one would need to consider
\[I_{1}(\theta):=\int\limits_{0}^{\rho_{0}}\left(\log\frac{\mathrm{Re}\,f(\rho e ^{i\theta})}{\big{|}f^{\prime}(\rho e^{i\theta})\big{|}\,\rho\cos\theta} \right)\ \frac{\mathrm{d}\rho}{\rho},\quad\theta\in(-\pi/2,\pi/2), \tag{6.7}\]
where \(\rho_{0}>0\) is fixed. Since \(f\) is isogonal at \(0\), \(\operatorname{Re}f(\rho e^{i\theta})/|f(\rho e^{i\theta})|\to\cos\theta\) as \(\rho\to 0^{+}\). Hence it would be also reasonable to consider
\[I_{2}(\theta):=\int\limits_{0}^{\rho_{0}}\log\left|\frac{f(\rho e^{i\theta})}{\rho f^{\prime}(\rho e^{i\theta})}\right|\,\frac{\mathrm{d}\rho}{\rho}. \tag{6.8}\]
Note that the argument of \(\log|\cdot|\) in (6.8) tends to \(1\) as \(\rho\to 0^{+}\), because by [35, Proposition 4.11 on p. 81], the conformal map \(f\circ H\), \(H(z):=(1-z)/(1+z)\), satisfies at \(\zeta=1\) the Visser - Ostrowski condition.
### Hyperbolic length of backward orbits
For a hyperbolic domain \(D\subset\mathbb{C}\) and a rectifiable curve \(\gamma\colon[0,T]\to D\) we denote by
\[\ell_{D}(\gamma):=\int\limits_{\gamma}\lambda_{D}(z)\,|\,\mathrm{d}z|\]
the hyperbolic length of \(\gamma\). It is easily checked that Theorem 1.1 can be restated as follows: _a hyperbolic petal \(\Delta\) of a one-parameter semigroup \((\phi_{t})\) in \(\mathbb{D}\) is conformal if and only if_
\[\lim_{T\to+\infty}\left[\ell_{\Delta}(\gamma_{z}|_{[0,T]})-\ell_{\mathbb{D}}( \gamma_{z}|_{[0,T]})\right]\;<\;+\infty \tag{6.9}\]
_for any backward orbit \(\gamma_{z}(t):=\phi_{-t}(z)\), \(z\in\Delta\), in the petal \(\Delta\)._ Note that the above limit always exists, finite or infinite, because \(\Delta\subset\mathbb{D}\) and hence \(\lambda_{\Delta}(z)\geqslant\lambda_{\mathbb{D}}(z)\) for all \(z\in\Delta\).
### The angular derivative problem and hyperbolic length
Theorem 1.1 is closely related to a recent conformality condition due to Betsakos and Karamanlis [10]. We briefly discuss this issue. The conformality conditions obtained in [10] are valid for any simply connected domain \(\Omega\subsetneq\mathbb{C}\) containing the real line \(\mathbb{R}\). In order to compare the results of [10] with our Theorem 4.1 we additionally assume that \(\Omega\) contains the standard strip \(\mathbb{S}\). Taking into account Ostrowski's characterization of semi-conformality, see e.g. [10, Theorem A], we see that in this case, [10, Theorem 1] says that \(\Omega\) is conformal at \(-\infty\) w.r.t. \(\mathbb{S}\) if and only if
\[\operatorname{dist}(z,\partial\Omega):=\inf_{w\in\partial\Omega}|z-w|\;\to\;0\quad\text{as}\ \ \operatorname{Re}z\to-\infty,\;z\in\partial\mathbb{S}, \tag{6.10}\]
\[\text{and}\qquad\mathrm{d}_{\mathbb{S}}(iy+a,iy+b)-\mathrm{d}_{\Omega}(iy+a,iy+b)\;\to\;0\quad\text{as}\ \ a,b\to-\infty,\;a,b\in\mathbb{R}, \tag{6.11}\]
for some and hence any \(y\in(-\pi/2,\pi/2)\). In fact, in [10] the conformality condition (6.11) is stated only for \(y=0\), but in our case we additionally have \(\mathbb{S}\subset\Omega\), and the proof in [10] also works for \(y\in(-\pi/2,\pi/2)\).
If \(\Omega\) is the Koenigs domain of a one-parameter semigroup \((\phi_{t})\) and \(\mathbb{S}\) is a _maximal_ strip contained in \(\Omega\), then condition (6.10) is automatically satisfied.
On the other hand, it is easy to see that Theorem 4.1 says that \(\Omega\) is conformal at \(-\infty\) w.r.t. \(\mathbb{S}\) if and only if for some and hence all \(y\in(-\pi/2,\pi/2)\),
\[\int_{a}^{b}\left(\lambda_{\mathbb{S}}(iy+x)-\lambda_{\Omega}(iy+x)\right)\,\mathrm{d}x\to 0\quad\text{ as }\ a,b\to-\infty,\ a,b\in\mathbb{R}\,. \tag{6.12}\]
Let us compare conditions (6.11) and (6.12).
Since \(\int_{a}^{b}\lambda_{\mathbb{S}}(x)\ \mathrm{d}x=b-a=\mathrm{d}_{\mathbb{S}}(a,b)\) and \(\int_{a}^{b}\lambda_{\Omega}(x)\ \mathrm{d}x\geqslant\mathrm{d}_{\Omega}(a,b)\) for any \(a<b\), \(a,b\in\mathbb{R}\), condition (6.11) for \(y=0\) implies condition (6.12) for \(y=0\). Thus, if \(\Omega\) is conformal at \(-\infty\) w.r.t. \(\mathbb{S}\) and if \(\mathbb{S}\subset\Omega\), then regardless of whether \(\Omega\) is starlike at infinity or not, our condition (6.12) holds for \(y=0\).
A similar remark applies to the relation between (6.12) and the following necessary and sufficient condition for conformality established by Bracci [11, (8.2)]:
\[\limsup_{x\to-\infty}\left[\mathrm{d}_{\mathbb{S}}(iy,\,iy+x)-\mathrm{d}_{ \Omega}(iy,\,iy+x)\right]\ <\ +\infty \tag{6.13}\]
for some and hence all \(y\in(-\pi/2,\pi/2)\). Arguing as above, one can see that (6.13) implies (6.12), but apparently, only for \(y=0\). Although [11, Sect. 8] addresses the angular derivative problem in the context of one-parameter semigroups, the proof of (6.13) does not depend on the fact that \(\Omega\) is starlike at infinity; it actually works for any simply connected domain \(\Omega\subsetneq\mathbb{C}\) containing \(\mathbb{S}\).
### Open questions
In conclusion, we state several open questions. Let \((\phi_{t})\) be a non-elliptic one-parameter semigroup with associated infinitesimal generator \(G\), Koenigs function \(h\), and Koenigs domain \(\Omega:=h(\mathbb{D})\). Further let \(\Delta(\sigma)\) be a hyperbolic petal of \((\phi_{t})\) with \(\alpha\)-point \(\sigma\). As above, for simplicity we suppose that \(h(\Delta(\sigma))=\mathbb{S}\).
**Question 1**.: _Similarly to a result of Betsakos and Karamanlis [10, Theorem 1], our condition (1.1), as well as its restatements (4.1) and (6.12), uses hyperbolic geometry. However, in the proof we make use of euclidean quantities related to the Koenigs domain \(\Omega\). Is it possible to prove one (or even both) of the implications in Theorem 1.1 without employing criteria for conformality in terms of euclidean geometry?_
One possible way to answer the above question would be to study in detail the relations between the convergence of the integrals \(I(z_{0})\) and \(J(z_{0})\) introduced in Sect. 6.1. An alternative direction is indicated in the next question.
**Question 2**.: _If \(\Omega\) is starlike at infinity, then the conformality condition (6.11) due to Betsakos and Karamanlis and our condition (6.12) are equivalent, as follows from Theorem 4.1 and [10, Theorem 1]. Is there a more direct way to prove the equivalence of (6.11) and (6.12) for such domains? How are conditions (6.11) and (6.12) related when \(\Omega\) is semi-conformal at \(-\infty\) in the sense of Ostrowski, see e.g. [10, Theorem A], but not necessarily starlike at infinity?_
In the elementary proof of Theorem 1.1 for the symmetric case, given in Sect. 6.3, we consider a generic injective holomorphic self-map isogonal at a contact point. In contrast to the general (non-symmetric) case, the argument does not depend on the fact that the image of the self-map is a hyperbolic petal of a one-parameter semigroup in \(\mathbb{D}\).
On the one hand, the argument in Sect. 6.3 works only for the backward orbits contained in \((-1,\tau)\). This is similar to the situation in Sect. 6.5, where restricting to the symmetry line \(y=0\) apparently becomes necessary at some point.
On the other hand, motivated by the symmetric case, it is natural to ask whether even in the general case, Theorem 1.1 remains valid for a semigroup of hyperbolic automorphisms \((\phi_{t})_{t\geqslant 0}\subset\mathsf{Aut}(\Delta)\) of a simply connected domain \(\Delta\subset\mathbb{D}\), provided that all the backward orbits converge to the point \(\sigma\in\partial\mathbb{D}\) at which \(\Delta\) is isogonal, but we do not assume that \((\phi_{t})\) extends to a one-parameter semigroup of holomorphic self-maps of the whole unit disk \(\mathbb{D}\). This leads to the following problem.
**Question 3**.: _Let \(f:\mathbb{H}\to\mathbb{H}\) be an injective holomorphic self-map with a boundary fixed point at \(\zeta=0\). Suppose that \(f\) is isogonal at \(\zeta=0\). Is there any relation between the convergence of the integrals (6.7) and (6.8) introduced in Sect. 6.3 and the finiteness of the angular derivative \(f^{\prime}(0)\)?_
A closely related question is as follows. Let \(\varphi\) be a conformal mapping of \(\mathbb{D}\) onto the hyperbolic petal \(\Delta(\sigma)\) satisfying \(\varphi(-1)=\sigma\) and \(\varphi(1)=\tau\). According to [23, Theorems 1 and 3], the mapping \(\varphi\) is a solution to the non-linear ODE
\[G^{\prime}(\sigma)(1-z^{2})\varphi^{\prime}(z)=2G(\varphi(z)),\quad z\in \mathbb{D}. \tag{6.14}\]
**Question 4**.: _What kind of non-trivial conclusion about the infinitesimal generator \(G\) can be drawn from the equation (6.14), if we suppose that the petal \(\Delta(\sigma)\) is conformal?_
The next open question concerns the geometry of hyperbolic petals near the Denjoy - Wolff point \(\tau\) of \((\phi_{t})\).
**Question 5**.: _Recall that \(\tau\in\partial\Delta(\sigma)\). Let \(\bar{\varphi}\) be a conformal mapping of \(\mathbb{D}\) onto \(\Delta(\sigma)\) with \(\bar{\varphi}(1)=\tau\). What is the asymptotic behaviour of \(\bar{\varphi}(z)\) as \(z\to 1\) within a Stolz angle? In particular, if \((\phi_{t})\) is hyperbolic, then using results of Contreras and Diaz-Madrigal [17] it is possible to show that \(\partial\Delta(\sigma)\) has a corner of opening \(\alpha\pi\) at \(\tau\), with \(\alpha:=|G^{\prime}(\tau)|/G^{\prime}(\sigma)\). Is it always true (and if not always, then under which conditions) that the function_
\[f(z):=\left(\bar{\varphi}(z)-\tau\right)^{1/\alpha},\quad z\in\mathbb{D},\]
_is conformal at \(z=1\), i.e. \(f\) has angular derivative \(f^{\prime}(1)\in\mathbb{C}^{*}:=\mathbb{C}\setminus\{0\}\)?_
We conclude the paper with a question on parabolic petals. Theorem 5.1, in case of semigroups of zero hyperbolic step, reduces the angular derivative problem for parabolic petals to another problem of similar nature for the Koenigs function \(h\). Although the limit relation (1.2) holds for parabolic petals as well, our Theorem 1.1 does not seem to extend to this case. So it is natural to raise the following question.
**Question 6**.: _Is it possible to characterize conformality of **parabolic** petals in terms of the intrinsic hyperbolic geometry of the petal and the backward (or forward) dynamics of the semigroup, without involving the Koenigs function of the semigroup?_
|
2307.03926 | Enhancing Room Security and Automating Class Attendance Using ID Cards | With the rapid advancements in technology, automation has emerged as the
future of human endeavors. From simple tasks like attendance management to
complex security systems, automation has the potential to revolutionize various
aspects of our lives. This research paper explores the implementation of a
method aimed at enhancing room security in hostels and automating class
attendance using ID cards. In this study, we propose a system that utilizes the
unique identity information stored in ID cards for various security and
check-in tasks. By integrating RFID (Radio-Frequency Identification) reader
technology, GSM modules, Node MCU, and Arduino, we create a comprehensive
solution. The RFID reader scans the ID card, extracting the relevant
information and verifying the user's identity. The data is then transmitted via
the GSM module to a central database, ensuring real-time monitoring and
security measures. Moreover, the system also enables the automation of class
attendance. By utilizing the same ID cards, students can simply tap their cards
on a reader placed in the classroom. This information is recorded
automatically, eliminating the need for manual attendance taking and reducing
errors and time consumption. This research project highlights the practical
implementation of ID card technology to enhance room security in hostels and
automate class attendance processes. By leveraging the power of automation, we
aim to streamline administrative tasks, improve security measures, and optimize
efficiency in educational institutions and other relevant settings. | Shravan Bhat, Nithin R, Pranav S | 2023-07-08T07:51:22Z | http://arxiv.org/abs/2307.03926v1 | # Enhancing Room Security and Automating Class Attendance Using ID Cards
###### Abstract
With the rapid advancements in technology, automation has emerged as the future of human endeavors. From simple tasks like attendance management to complex security systems, automation has the potential to revolutionize various aspects of our lives. This research paper explores the implementation of a method aimed at enhancing room security in hostels and automating class attendance using ID cards. In this study, we propose a system that utilizes the unique identity information stored in ID cards for various security and check-in tasks. By integrating RFID (Radio-Frequency Identification) reader technology, GSM modules, Node MCU, and Arduino, we create a comprehensive solution. The RFID reader scans the ID card, extracting the relevant information and verifying the user's identity. The data is then transmitted via the GSM module to a central database, ensuring real-time monitoring and security measures. Moreover, the system also enables the automation of class attendance. By utilizing the same ID cards, students can simply tap their cards on a reader placed in the classroom. This information is recorded automatically, eliminating the need for manual attendance taking and reducing errors and time consumption. This research project highlights the practical implementation of ID card technology to enhance room security in hostels and automate class attendance processes. By leveraging the power of automation, we aim to streamline administrative tasks, improve security measures, and optimize efficiency in educational institutions and other relevant settings.
ID card, RFID reader, GSM Module, Node MCU, Arduino
## 1 Introduction
Security and privacy are basic needs for any human being. India's population has been increasing exponentially since the 19th century, and hence the student intake of colleges has been increasing every year. Automation would help with routine tasks like taking attendance or making payments within a locality. Privacy and security are also concerns in many colleges: adding layers of security to rooms and safe boxes would prevent petty theft.
The main motivation of this project is to establish an attendance system within our college campus and a cashless payment system, and also to implement safer, key-less room locking systems in our university.
## 2 Literature Survey
### Survey of State of Art
Smart-card-based door lock systems, such as the NFC (Near Field Communication) cards used in hotel rooms, are currently available, but they are expensive and less secure. Deploying them can be very costly, as they require complex hardware.
Automated attendance systems that use a fingerprint as the ID are available, but implementing them on a large scale, such as a college, is difficult and would turn out to be rather expensive.
### Features
* An RFID card and an RFID reader are included in the door lock system. The door unlocks only when an authorized card is scanned and the corresponding PIN is entered using the keypad provided.
* The locking and unlocking of the door latch is implemented using servo motors, stepper motors and gears.
* When a card is scanned, an alert SMS is sent to the registered phone number and an alert notification is also generated in the app. When an authorized card is scanned without the user's consent, the user can shut down the system by sending a message from his phone.
* The same RFID card can be used in classrooms as a check-in attendance system.
## 3 Details of implementation
### Components Used
* SIM900 GSM module
* Arduino Uno
* MFRC522 RFID reader and RFID cards
* Servo motors, stepper motors and gears
* 4*4 keypad
* Buzzer and power adaptor
* Node MCU
* LEDs and resistors
* I2C LCD display
### Working
The smart ID card system is divided into three sub-systems:
1) Security System
2) Payment System
3) Attendance System
* Security System
The RFID reader communicates with the Arduino through the SPI protocol. The I2C LCD communicates with the Arduino through the I2C protocol. The keypad is connected to the Arduino. The 4x4 keypad has 8 connections, but the last column of the keypad is not required, since we only need digits for the password. For powering the SIM900 module, a 5V, 2A power adaptor is used. Once the SIM900 module is powered, the power light will light up, and on pressing the power key, the status LED lights up. Then the phone is paired with the module.
GSM Module:
A GSM module is a mobile communication modem; GSM stands for Global System for Mobile communication. It is the most widely used mobile communication system in the world. GSM is an open and digital cellular technology used for transmitting mobile voice and data services.
A GSM module is used here since it can communicate with a mobile phone, and the data which it receives can be processed and sent to the Arduino.
I2C Protocol:
I2C is a serial, two-wire interface protocol used to connect low-speed devices like microcontrollers, I/O interfaces and other similar peripherals in embedded systems.
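The actual firmware is an Arduino sketch; as a compact illustration of the control flow only, the following Python sketch mirrors the logic described above, with `read_card`, `read_pin`, `send_sms`, `set_servo`, and `sound_buzzer` as hypothetical stand-ins for the RFID, keypad, GSM, and servo/buzzer drivers.

```python
import time

AUTHORIZED = {"A1B2C3D4": "4321"}  # hypothetical card UID -> PIN mapping

def door_lock_cycle(read_card, read_pin, send_sms, set_servo, sound_buzzer):
    """One pass of the door-lock loop: verify the card UID and the keypad PIN."""
    uid = read_card()              # blocks until an RFID card is scanned
    pin = read_pin()               # digits entered on the 4x4 keypad
    if AUTHORIZED.get(uid) == pin:
        send_sms("Door unlocked")  # alert the owner via the GSM module
        set_servo(90)              # rotate the servo to release the latch
        time.sleep(5)              # keep the door unlocked briefly
        set_servo(0)               # lock back
    else:
        sound_buzzer()             # wrong card or PIN: raise the alarm
        send_sms("Attempt to breach the security system")
```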
* Payment System:
The RFID reader communicates with the Node MCU through the SPI protocol. The Node MCU is connected to a web server where the data is stored. When the RFID card is scanned and the PIN is entered, the balance amount is displayed on the screen.
Fig. 1: Security System
Node MCU: This device is used instead of only an Arduino UNO because the Node MCU has a Wi-Fi module which can be connected to the web server. The ESP8266 can be controlled from a local Wi-Fi network or from the internet (after port forwarding). The ESP-01 module has GPIO pins that can be programmed to control a device or execute code through the internet. The module can be programmed using an Arduino through the serial pins (RX, TX).
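As a rough sketch of this round trip (the endpoint URL and field names below are hypothetical, and the firmware issues the equivalent HTTP requests from the Node MCU):

```python
import requests

SERVER = "http://example.com/balance"  # hypothetical web-server endpoint

def checkout(card_uid, amount):
    """Deduct `amount` from the balance stored on the server for `card_uid`."""
    balance = int(requests.get(SERVER, params={"uid": card_uid}).text)
    if balance < amount:
        raise ValueError("insufficient balance")
    new_balance = balance - amount
    # The server records the transaction with the date and time from the internet.
    requests.post(SERVER, data={"uid": card_uid, "balance": new_balance})
    return new_balance
```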
* Attendance system: When the ID is scanned on the RFID reader, the student name that is stored in the RFID card is printed on the serial monitor. It is made sure that the same ID can't be registered twice by comparing it with the already registered IDs. An external app is used to store the output from the serial monitor. The output can be saved onto the computer.
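The duplicate check amounts to a simple set-membership test; a minimal Python sketch with hypothetical card IDs is given below.

```python
def record_attendance(uid, register, present):
    """Return the student's name on the first scan; ignore unknown or repeated IDs."""
    if uid not in register or uid in present:
        return None               # unregistered card, or already checked in
    present.add(uid)
    return register[uid]          # name echoed to the serial monitor / log

register = {"A1B2C3D4": "Alice", "0F9E8D7C": "Bob"}  # hypothetical card IDs
present = set()
print(record_attendance("A1B2C3D4", register, present))  # -> Alice
print(record_attendance("A1B2C3D4", register, present))  # -> None (duplicate)
```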
## 4 Results and discussions
### Security System
The door lock security system was successfully implemented. When an authorized ID card is scanned on the RFID reader and the correct password is entered on the keypad, only then does the door unlock, driven by the servo motor. Consequently, a message is sent to the owner saying that the door is unlocked.
Figure 2: Security System setup
Figure 4: Payment System setup
Figure 3: Payment system
After a few seconds the door locks back, turning the servo motor to the original position.
When the owner is inside the room, he/she can use a switch which is present inside the room to unlock the door. Subsequently, after a few seconds, the door locks back, turning the servo motor back to the original position.
If a wrong ID card is scanned or a wrong password is entered, the whole system locks down and an alarm is sounded using a buzzer. A message is sent to the owner saying that there was an attempt to breach the security system.
The security system fails to detect an intruder when the RFID card's ID is changed to the owner's ID. It will also fail if the owner is negligent, revealing the password to others.

Fig. 5: Google docs for payment system

Fig. 6: Attendance System setup
### Payment System
When an ID is scanned on the RFID reader, the value that is stored in the RFID card is sent to the server via the Wi-Fi module over the internet and stored in the database with the date and time, which is taken from the internet. This stored value can be changed by the vendor or the shopkeeper to the new balance amount. The changed balance amount is then updated on the ID card through the Wi-Fi module ESP8266.
A drawback of this system is that the balance can be changed to a wrong value, giving a wrong balance.
### Attendance system
The attendance system was successfully implemented. When a registered ID card is scanned on the RFID reader, the ID card number is sent to the database through the Wi-Fi module Node MCU. The database stores the student's name and ID number. The presence list can then be retrieved from the database.
As a fail-safe for the above implemented method, the RFID reader reads the ID number of the card and compares it with the student register; if the ID is present, it prints the student's name onto the serial monitor. An external app saves the logs of the serial monitor as text.
This method would fail if some other student scans the card even though the owner is not present in the class. So the scanner must be monitored while the student is scanning the card on the RFID scanner.
## Acknowledgment
With immense pleasure we present "Enhancing Room Security and Automating Class Attendance Using ID Cards" as a part of the curriculum of "Embedded Systems and Design" under the Department of Electronics and Communication Engineering, National Institute of Technology, Karnataka. We wish to thank all the people who gave us unending support. We express our profound thanks to our professor, Dr. Ramesh Kini M., and all those who have indirectly guided and helped us in the preparation of this project.
|
2310.00638 | A primal-dual perspective for distributed TD-learning | The goal of this paper is to investigate distributed temporal difference (TD)
learning for a networked multi-agent Markov decision process. The proposed
approach is based on distributed optimization algorithms, which can be
interpreted as primal-dual Ordinary differential equation (ODE) dynamics
subject to null-space constraints. Based on the exponential convergence
behavior of the primal-dual ODE dynamics subject to null-space constraints, we
examine the behavior of the final iterate in various distributed TD-learning
scenarios, considering both constant and diminishing step-sizes and
incorporating both i.i.d. and Markovian observation models. Unlike existing
methods, the proposed algorithm does not require the assumption that the
underlying communication network structure is characterized by a doubly
stochastic matrix. | Han-Dong Lim, Donghwan Lee | 2023-10-01T10:38:46Z | http://arxiv.org/abs/2310.00638v1 | # A primal-dual perspective for Distributed TD-learning
###### Abstract
The goal of this paper is to investigate distributed temporal difference (TD) learning for a networked multi-agent Markov decision process. The proposed approach is based on distributed optimization algorithms, which can be interpreted as primal-dual Ordinary differential equation (ODE) dynamics subject to null-space constraints. Based on the exponential convergence behavior of the primal-dual ODE dynamics subject to null-space constraints, we examine the behavior of the final iterate in various distributed TD-learning scenarios, considering both constant and diminishing step-sizes and incorporating both i.i.d. and Markovian observation models. Unlike existing methods, the proposed algorithm does not require the assumption that the underlying communication network structure is characterized by a doubly stochastic matrix.
## 1 Introduction
Temporal-difference (TD) learning (Sutton, 1988) aims to solve the policy evaluation problem in Markov decision processes (MDPs), serving as the foundational pillar for many reinforcement learning (RL) algorithms (Mnih et al., 2015). Following the empirical success of RL in various fields (Kober et al., 2013; Li et al., 2019), theoretical exploration of TD learning has become an active area of research. For instance, Tsitsiklis and Van Roy (1996) studied the asymptotic convergence of TD learning, while non-asymptotic analysis has been examined in Bhandari et al. (2018); Srikant and Ying (2019); Lee and Kim (2022).
In contrast to the single-agent case, the theoretical understanding of TD-learning for networked multi-agent Markov decision processes (MAMDPs) has not been fully explored so far. In networked MAMDPs, each agent follows its own policy and receives different local rewards while sharing its local learning parameters through communication networks. Under this scenario, several distributed TD-learning algorithms (Wang et al., 2020; Doan et al., 2019, 2021; Sun et al., 2020; Zeng et al., 2022) have been developed based on distributed optimization frameworks (Nedic and Ozdaglar, 2009; Pu and Nedic, 2021).
The main goal of this paper is to provide a finite-time analysis of a distributed TD-learning algorithm for networked MAMDPs from the perspective of primal-dual algorithms (Wang and Elia, 2011; Mokhtari and Ribeiro, 2016; Yuan et al., 2018). The proposed algorithm is inspired by the control system model for distributed optimization problems in Wang and Elia (2011); Lee (2023), and at the same time, it can also be interpreted as the primal-dual gradient dynamics in (Qu and Li, 2018). In this respect, we first study a finite-time analysis of continuous-time primal-dual gradient dynamics in (Qu and Li, 2018) with special nullity structures on the system matrix. Based on the analysis of the primal-dual gradient dynamics, we further provide a finite-time analysis of the proposed distributed TD-learning under both i.i.d. and Markovian observation models. The main contributions are summarized as follows:
1. A convergence rate for continuous-time primal-dual gradient dynamics (Qu and Li, 2018) with null-space constraints that, under specific conditions, improves upon or matches the state of the art: the results can be applied to general classes of distributed optimization problems
that can be reformulated as saddle-point problems (Wang and Elia, 2011; Mokhtari and Ribeiro, 2016; Yuan et al., 2018);
2. Development of new distributed TD-learning algorithms inspired by (Wang and Elia, 2011; Lee, 2023);
3. New mean-squared error bounds of the distributed TD-learning under our consideration for both i.i.d. and Markovian observation models and under various conditions on the step-sizes: the distributed TD-learning is based on the control system model in (Wang and Elia, 2011; Lee, 2023), which does not require the assumption that the graph Laplacian of the associated network graph is a doubly stochastic matrix. Note that the doubly stochastic assumption is required in other distributed TD-learning algorithms based on the classical distributed optimization algorithms (Nedic and Ozdaglar, 2009; Pu and Nedic, 2021);
4. Empirical demonstrations of both the convergence and the rate of convergence of the algorithm studied in this paper.
**Related Works.** Nedic and Ozdaglar (2009) investigated a distributed optimization algorithm over a communication network whose structure is represented by a doubly stochastic matrix. In this approach, each agent exchanges information with its neighbors, with the exchange being weighted by the corresponding element in the doubly stochastic matrix. Wang and Elia (2011); Notarnicola et al. (2023) provided a control system approach to study distributed optimization problems. Another line of research designs distributed algorithms based on the primal-dual approach (Yuan et al., 2018; Mokhtari and Ribeiro, 2016).
The asymptotic convergence of distributed TD-learning has been studied in Mathkar and Borkar (2016); Stankovic et al. (2023). Doan et al. (2019) provided finite-time analysis of distributed TD-learning based on the distributed optimization algorithm (Nedic and Ozdaglar, 2009) with i.i.d. observation model. Doan et al. (2021) extended the analysis of Doan et al. (2019) to the Markovian observation model. Sun et al. (2020) studied distributed TD-learning based on Nedic and Ozdaglar (2009) with the Markovian observation model using multi-step Lyapunov function (Wang et al., 2019). Wang et al. (2020) studied distributed TD-learning motivated by the gradient tracking method (Pu and Nedic, 2021). Zeng et al. (2022) studied finite-time behavior of distributed stochastic approximation algorithms (Robbins and Monro, 1951) with general mapping including TD-learning and Q-learning, using Lyapunov-Razumikhin function (Zhou and Luo, 2018).
In the context of policy evaluation, Macua et al. (2014); Lee et al. (2018); Wai et al. (2018); Ding et al. (2019); Cassano et al. (2020) studied distributed versions of gradient-TD (Sutton et al., 2009). The gradient-TD method can be reformulated as a saddle-point problem (Macua et al., 2014; Lee et al., 2022), and the aforementioned works can be understood as distributed optimization over a saddle-point problem (Boyd and Vandenberghe, 2004).
## 2 Preliminaries
### Markov decision process
Markov decision process (MDP) consists of a five-tuple \((\mathcal{S},\mathcal{A},\gamma,\mathcal{P},r)\), where \(\mathcal{S}:=\{1,2,\ldots,|\mathcal{S}|\}\) is the collection of states, \(\mathcal{A}\) is the collection of actions, \(\gamma\in(0,1)\) is the discount factor, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the transition kernel, and \(r:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the reward function. If action \(a\in\mathcal{A}\) is chosen at state \(s\in\mathcal{S}\), the transition to state \(s^{\prime}\in\mathcal{S}\) occurs with probability \(\mathcal{P}(s,a,s^{\prime})\), and incurs reward \(r(s,a,s^{\prime})\). Given a stochastic policy \(\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\), the quantity \(\pi(a\mid s)\) denotes the probability of taking action \(a\in\mathcal{A}\) at state \(s\in\mathcal{S}\). We will denote \(\mathcal{P}^{\pi}(s,s^{\prime}):=\sum_{a\in\mathcal{A}}\mathcal{P}(s,a,s^{\prime})\pi(a\mid s)\), and \(\mathcal{R}^{\pi}(s):=\sum_{a\in\mathcal{A}}\sum_{s^{\prime}\in\mathcal{S}}\mathcal{P}(s,a,s^{\prime})\pi(a\mid s)r(s,a,s^{\prime})\), which are the transition probability from state \(s\in\mathcal{S}\) to \(s^{\prime}\in\mathcal{S}\) under policy \(\pi\), and the expected reward at state \(s\in\mathcal{S}\), respectively. \(d:\mathcal{S}\rightarrow[0,1]\) denotes the stationary distribution of the state \(s\in\mathcal{S}\) under policy \(\pi\). The policy evaluation problem aims to estimate the expected sum of discounted rewards following policy \(\pi\), the so-called value function, \(\mathbf{V}^{\pi}(s)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}r(s_{k},a_{k},s_{k+1})|s_{0}=s,\pi\right],\ s\in\mathcal{S}\).
Given a feature function \(\mathbf{\phi}:\mathcal{S}\rightarrow\mathbb{R}^{q}\), our aim is to estimate the value function through a learnable parameter \(\mathbf{\theta}\), i.e., \(\mathbf{V}^{\pi}(s)\approx\mathbf{\phi}(s)^{\top}\mathbf{\theta}\), for \(s\in\mathcal{S}\), which can be achieved through solving the optimization problem \(\min_{\mathbf{\theta}\in\mathbb{R}^{q}}\frac{1}{2}\left\|\mathbf{R}^{\pi}+\gamma\mathbf{P}^{\pi}\mathbf{\Phi}\mathbf{\theta}-\mathbf{\Phi}\mathbf{\theta}\right\|_{\mathbf{D}^{\pi}}^{2}\), where \(\mathbf{D}^{\pi}:=\text{diag}(d(1),d(2),\ldots,d(|\mathcal{S}|))\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\), \(\text{diag}(\cdot)\) is the diagonal matrix whose diagonal elements correspond to the elements of the tuple, \(\mathbf{P}^{\pi}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\) is the matrix whose elements are \(\left[\mathbf{P}^{\pi}\right]_{ij}:=\mathcal{P}^{\pi}(i,j)\) for \(i,j\in\mathcal{S}\), \(\mathbf{R}^{\pi}\in\mathbb{R}^{|\mathcal{S}|}\) is the vector with \(\left[\mathbf{R}^{\pi}\right]_{i}:=\mathbb{E}\left[r(s,a,s^{\prime})|s=i\right]\) for \(i\in\mathcal{S}\), and \(\mathbf{\Phi}:=\left[\mathbf{\phi}(1)\quad\mathbf{\phi}(2)\quad\cdots\quad\mathbf{\phi}(|\mathcal{S}|)\right]^{\top}\in\mathbb{R}^{|\mathcal{S}|\times q}\). The solution of the optimization problem satisfies the so-called projected Bellman equation (Sutton et al., 2009): \(\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{\Phi}\mathbf{\theta}=\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{R}^{\pi}+\gamma\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{P}^{\pi}\mathbf{\Phi}\mathbf{\theta}\).
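For concreteness, the projected Bellman equation can be solved directly once \(\mathbf{\Phi}\), \(\mathbf{D}^{\pi}\), \(\mathbf{P}^{\pi}\), and \(\mathbf{R}^{\pi}\) are formed; the following NumPy sketch, with a randomly generated toy instance (hypothetical sizes), is one way to do so.

```python
import numpy as np

def td_fixed_point(Phi, D, P, R, gamma):
    """Solve Phi^T D Phi theta = Phi^T D R + gamma Phi^T D P Phi theta."""
    A = Phi.T @ D @ (gamma * P @ Phi - Phi)  # gamma Phi^T D P Phi - Phi^T D Phi
    b = Phi.T @ D @ R
    return np.linalg.solve(-A, b)            # theta = (-A)^{-1} b

# Toy instance: 5 states, 2 features, random row-stochastic transition matrix.
rng = np.random.default_rng(0)
P = rng.random((5, 5)); P /= P.sum(axis=1, keepdims=True)
d = np.linalg.matrix_power(P, 200)[0]        # approximate stationary distribution
Phi, R = rng.random((5, 2)), rng.random(5)
theta = td_fixed_point(Phi, np.diag(d), P, R, gamma=0.9)
```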
Throughout the paper, we adopt the common assumption on the feature matrix, which is widely used in the literature (Bhandari et al., 2018; Wang et al., 2020).
**Assumption 2.1**.: \(\left\|\mathbf{\phi}(s)\right\|_{2}\leq 1\) _for all \(s\in\mathcal{S}\) and \(\mathbf{\Phi}\) is a full-column-rank matrix._
### Multi-agent MDP
Multi-agent Markov decision process (MAMDP) considers a set of agents cooperatively computing the value function for a shared environment. Considering \(N\) agents, each agent can be denoted by \(i\in\mathcal{V}:=\{1,2,\ldots,N\}\), and the agents communicate over networks that can be described by a connected and undirected simple graph \(\mathcal{G}:=(\mathcal{V},\mathcal{E})\), where \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) is the set of edges. \(\mathcal{N}_{i}\subset\mathcal{V}\) denotes the set of neighbours of agent \(i\in\mathcal{V}\), i.e., \(j\in\mathcal{N}_{i}\) if and only if \((i,j)\in\mathcal{E}\) for \(i,j\in\mathcal{V}\). Each agent \(i\in\mathcal{V}\) has its local policy \(\pi^{i}:\mathcal{S}\times\mathcal{A}_{i}\rightarrow[0,1]\), where \(\mathcal{A}_{i}\) is the action space of agent \(i\), and receives reward following its local reward function \(r^{i}:\mathcal{S}\times\Pi_{i=1}^{N}\mathcal{A}_{i}\times\mathcal{S}\rightarrow\mathbb{R}\). As in the single-agent MDP, MAMDP consists of a tuple \((\mathcal{S},\{\mathcal{A}_{i}\}_{i=1}^{N},\gamma,\mathcal{P},\{r^{i}\}_{i=1}^{N})\), where \(\mathcal{P}:\mathcal{S}\times\Pi_{i=1}^{N}\mathcal{A}_{i}\times\mathcal{S}\rightarrow[0,1]\) is the Markov transition kernel. The agents share the same state \(s\in\mathcal{S}\), and when action \(\mathbf{a}:=(a_{1},a_{2},\ldots,a_{N})\in\Pi_{i=1}^{N}\mathcal{A}_{i}\) is taken, the state transits to \(s^{\prime}\in\mathcal{S}\) with probability \(\mathcal{P}(s,\mathbf{a},s^{\prime})\), and for \(i\in\mathcal{V}\), agent \(i\) receives \(r^{i}(s,\mathbf{a},s^{\prime})\). The aim of the policy evaluation under MAMDP is to estimate the expected sum of discounted rewards averaged over \(N\) agents, i.e., \(\mathbf{V}^{\pi}(s)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}\frac{1}{N}\sum_{i=1}^{N}r^{i}(s_{k},\mathbf{a}_{k},s_{k+1})\,\middle|\,s_{0}=s\right]\), for \(s\in\mathcal{S}\). While learning, each agent \(i\in\mathcal{V}\) can share its learning parameter over the communication network with its neighboring agents \(j\in\mathcal{N}_{i}\).
Following the spirit of single-agent MDP, using the set of features \(\mathbf{\Phi}\), the aim of each agent is now to compute the solution of the following equation:
\[\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{\Phi}\mathbf{\theta}=\mathbf{\Phi}^{\top}\mathbf{D}^{\pi} \left(\frac{1}{N}\sum_{i=1}^{N}\mathbf{R}_{i}^{\pi}\right)+\gamma\mathbf{\Phi}^{\top} \mathbf{D}^{\pi}\mathbf{P}^{\pi}\mathbf{\Phi}\mathbf{\theta}, \tag{1}\]
where \(\mathbf{R}_{i}^{\pi}\in\mathbb{R}^{|\mathcal{S}|}\) for \(i\in\mathcal{V}\), whose elements are \([\mathbf{R}_{i}^{\pi}]_{j}=\mathbb{E}\left[r^{i}(s,\mathbf{a},s^{\prime})\mid s=j\right]\) for \(j\in\mathcal{S}\). The equation (1) admits a unique solution \(\mathbf{\theta}_{c}\), given by
\[\mathbf{\theta}_{c}=(\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{\Phi}-\gamma\mathbf{\Phi}^{\top} \mathbf{D}^{\pi}\mathbf{P}^{\pi}\mathbf{\Phi})^{-1}\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\left(\frac{ 1}{N}\sum_{i=1}^{N}\mathbf{R}_{i}^{\pi}\right). \tag{2}\]
Note that the solution corresponds to the value function associated with the global reward \(\sum_{k=0}^{\infty}\gamma^{k}\frac{1}{N}\sum_{i=1}^{N}r^{i}\). Moreover, for convenience of the notation, we will denote
\[\mathbf{A}:=\gamma\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{P}^{\pi}\mathbf{\Phi}-\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{\Phi},\quad\mathbf{b}_{i}=\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{R}_{i}^{\pi},\quad 1\leq i\leq N, \tag{3}\]
and \(w:=\lambda_{\min}(\mathbf{\Phi}^{\top}\mathbf{D}^{\pi}\mathbf{\Phi})\). The bound on the reward will be denoted by some positive constant \(R_{\max}\in\mathbb{R}\), i.e., \(|r^{i}(s,\mathbf{a},s^{\prime})|\leq R_{\max},\ 1\leq i\leq N,\forall s,\mathbf{a},s^{ \prime}\in\mathcal{S}\times\Pi_{i=1}^{N}\mathcal{A}_{i}\times\mathcal{S}\).
## 3 Analysis of primal-dual gradient dynamics
The so-called primal-dual gradient dynamics (Arrow et al., 1958) will be the key tool to derive finite-time bounds of the proposed distributed TD-learning. This section establishes the exponential convergence behavior of the primal-dual gradient dynamics by means of the Lyapunov method. To this end, let us consider the following constrained optimization problem:
\[\min_{\mathbf{u}\in\mathbb{R}^{n}}\quad f(\mathbf{u})\quad\text{such that}\quad\mathbf{M}\mathbf{u}=\mathbf{0}_{n}, \tag{4}\]
where \(\mathbf{u}\in\mathbb{R}^{n}\), \(\mathbf{M}\in\mathbb{R}^{n\times n}\) and \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a differentiable, smooth, and strongly convex function (Boyd and Vandenberghe, 2004). One of the popular approaches for solving (4)
is to formulate it as the saddle-point problem (Boyd and Vandenberghe, 2004), \(\min_{\mathbf{u}\in\mathbb{R}^{n}}\max_{\mathbf{v}\in\mathbb{R}^{n}}L(\mathbf{u},\mathbf{v})\), where \(L(\mathbf{u},\mathbf{v}):=f(\mathbf{u})+\mathbf{v}^{\top}\mathbf{M}\mathbf{u}\), whose solution, \(\mathbf{u}^{*},\mathbf{v}^{*}\in\mathbb{R}^{n}\), exists and is unique when \(\mathbf{M}\) has full-column rank (Qu and Li, 2018). When \(\mathbf{M}\) is rank-deficient, i.e., it is not full-column rank, there exist multiple \(\mathbf{v}^{*}\) solving the saddle-point problem (Ozaslan and Jovanovic, 2023). Indeed, whether \(\mathbf{M}\) is rank-deficient or not, \(\mathbf{u}^{*}\) can be shown to be the optimal solution of (4) by the Karush-Kuhn-Tucker conditions (Boyd and Vandenberghe, 2004). It is known that the solution \(\mathbf{u}^{*},\mathbf{v}^{*}\) can be obtained by investigating the solution \(\mathbf{u}_{t},\mathbf{v}_{t}\in\mathbb{R}^{n}\) of the so-called primal-dual gradient dynamics (Qu and Li, 2018), with initial points \(\mathbf{u}_{0},\mathbf{v}_{0}\in\mathbb{R}^{n}\),
\[\dot{\mathbf{u}}_{t}=-\nabla f(\mathbf{u}_{t})-\mathbf{M}^{\top}\mathbf{v}_{t},\quad\dot{\mathbf{ v}}_{t}=\mathbf{M}\mathbf{u}_{t}.\]
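For intuition, the dynamics can be simulated with a simple forward-Euler discretization. The toy sketch below uses the quadratic choice \(\nabla f(\mathbf{u})=\mathbf{U}\mathbf{u}\) and a symmetric rank-deficient \(\mathbf{M}\), matching the particular scenario described below; all numerical values are hypothetical.

```python
import numpy as np

def primal_dual_flow(U, M, u0, v0, dt=1e-3, T=20.0):
    """Forward-Euler simulation of du/dt = -U u - M^T v, dv/dt = M u."""
    u, v = u0.copy(), v0.copy()
    for _ in range(int(T / dt)):
        # Simultaneous update: both right-hand sides use the old (u, v).
        u, v = u + dt * (-U @ u - M.T @ v), v + dt * (M @ u)
    return u, v

n = 4
U = 2.0 * np.eye(n)       # grad f(u) = U u with U + U^T positive definite
M = np.ones((n, n))       # symmetric and rank-deficient (rank one)
u, v = primal_dual_flow(U, M, np.ones(n), np.zeros(n))
print(np.linalg.norm(u))  # u_t decays toward the constrained minimizer u* = 0
```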
Qu and Li (2018) studied exponential stability of the primal-dual gradient dynamics when \(\mathbf{M}\) is full column-rank, using the classical Lyapunov approach (Sontag, 2013). As for the case when \(\mathbf{M}\) is rank-deficient, Ozaslan and Jovanovic (2023); Cisneros-Velarde et al. (2020); Gokhale et al. (2023) proved exponential convergence to a particular solution \(\mathbf{u}^{*},\mathbf{v}^{*}\) using the tools based on singular value decomposition (Horn and Johnson, 2012). In this paper, we will study the behavior of the system under the following particular scenarios:
1. \(\nabla f(\mathbf{u}_{t})=\mathbf{U}\mathbf{u}_{t}\), where \(\mathbf{U}\in\mathbb{R}^{n\times n}\), which is not necessarily symmetric, is a positive definite matrix, i.e., \(\mathbf{U}+\mathbf{U}^{\top}>0\);
2. \(\mathbf{M}\) is symmetric and rank-deficient. Distributed algorithms are typical examples satisfying such condition and will be elaborated in subsequent sections.
We note that Ozaslan and Jovanovic (2023); Cisneros-Velarde et al. (2020); Gokhale et al. (2023) considered general matrix \(\mathbf{M}\), not necessarily a symmetric matrix. Moreover, note that the primal-dual gradient dynamics under such scenarios will appear in the further sections as an O.D.E. model of the proposed distributed TD-learning. The corresponding system can be rewritten as
\[\frac{d}{dt}\begin{bmatrix}\mathbf{u}_{t}\\ \mathbf{v}_{t}\end{bmatrix}=\begin{bmatrix}-\mathbf{U}&-\mathbf{M}^{\top}\\ \mathbf{M}&\mathbf{0}_{n\times n}\end{bmatrix}\begin{bmatrix}\mathbf{u}_{t}\\ \mathbf{v}_{t}\end{bmatrix},\quad\mathbf{u}_{0},\mathbf{v}_{0}\in\mathbb{R}^{n}. \tag{5}\]
To study its exponential stability, let us introduce the Lyapunov function candidate \(V(\mathbf{u},\mathbf{v})=\begin{bmatrix}\mathbf{u}\\ \mathbf{M}\mathbf{M}^{\dagger}\mathbf{v}\end{bmatrix}^{\top}\mathbf{S}\begin{bmatrix}\mathbf{u}\\ \mathbf{M}\mathbf{M}^{\dagger}\mathbf{v}\end{bmatrix}\), where \(\mathbf{S}\in\mathbb{R}^{2n\times 2n}\) is some symmetric positive definite matrix, and \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{n}\). The candidate Lyapunov function considers projection of the iterate \(\mathbf{v}_{t}\) to the range space of \(\mathbf{M}\). As in Ozaslan and Jovanovic (2023); Cisneros-Velarde et al. (2020), the difficulty coming from singularity of \(\mathbf{M}\) can be avoided by considering the range space and null space conditions of \(\mathbf{M}\). In particular, Ozaslan and Jovanovic (2023) employed a Lyapunov function that involves the gradient of the Lagrangian function, and considered the projected iterate \(\mathbf{M}\mathbf{M}^{\dagger}\mathbf{v}_{t}\), where \(\mathbf{M}\mathbf{M}^{\dagger}\) is the projection matrix onto range space of \(\mathbf{M}\). Moreover, Cisneros-Velarde et al. (2020) exploited a quadratic Lyapunov function in (Qu and Li, 2018) for the iterate \(\mathbf{u}_{t}\) and \(\mathbf{V}\mathbf{v}_{t}\), where \(\mathbf{M}:=\mathbf{T}\mathbf{\Sigma}\mathbf{V}^{\top}\), which is the singular value decomposition of \(\mathbf{M}\). Gokhale et al. (2023) considered a positive semi-definite matrix \(\mathbf{S}\) and used semi-contraction theory (De Pasquale et al., 2023) to prove exponential convergence of the primal-dual gradient dynamics.
In this paper, we will adopt the quadratic Lyapunov function in (Qu and Li, 2018) with the projected iterate \(\mathbf{M}\mathbf{M}^{\dagger}\mathbf{v}_{t}\), and leverage the symmetric property of \(\mathbf{M}\) to show improved or comparable to the state of art convergence rate under the particular conditions newly imposed in this paper. In particular, when \(\mathbf{M}\) is symmetric, the fact that the projection onto the column space of \(\mathbf{M}\) and row space of \(\mathbf{M}\) being identical simplifies the overall bounds. We first present the following Lyapunov inequality.
**Lemma 3.1**.: _Let \(\mathbf{S}:=\begin{bmatrix}\beta\mathbf{I}_{n}&\mathbf{M}\\ \mathbf{M}&\beta\mathbf{I}_{n}\end{bmatrix}\) where \(\beta:=\max\left\{\frac{2\lambda_{\max}(\mathbf{M})^{2}+2+\|\mathbf{U}\|_{2}^{2}}{ \lambda_{\min}(\mathbf{U}+\mathbf{U}^{\top})},4\lambda_{\max}(\mathbf{M})\right\}\). Then, \(\frac{\beta}{2}\mathbf{I}_{2n}\prec\mathbf{S}\prec 2\beta\mathbf{I}_{2n}\), and we have, for any \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{n}\),_
\[\begin{bmatrix}\mathbf{u}\\ \mathbf{M}\mathbf{M}^{\dagger}\mathbf{v}\end{bmatrix}^{\top}\mathbf{S}\begin{bmatrix}-\mathbf{U}&- \mathbf{M}\\ \mathbf{M}&\mathbf{0}_{n\times n}\end{bmatrix}\begin{bmatrix}\mathbf{u}\\ \mathbf{M}\mathbf{M}^{\dagger}\mathbf{v}\end{bmatrix}\leq-\min\{1,\lambda_{\min}^{+}(\mathbf{M} )^{2}\}\left\|\begin{bmatrix}\mathbf{u}\\ \mathbf{M}\mathbf{M}^{\dagger}\mathbf{v}\end{bmatrix}\right\|_{2}^{2}.\]
The proof is given in Appendix Section A.3. Using the above Lemma 3.1, we can now prove the exponential stability of the O.D.E. dynamics in (5).
**Theorem 3.2**.: _Let \(V(\mathbf{u},\mathbf{v})=\begin{bmatrix}\mathbf{u}\\ \mathbf{MM}^{\dagger}\mathbf{v}\end{bmatrix}^{\top}\mathbf{S}\begin{bmatrix}\mathbf{u}\\ \mathbf{MM}^{\dagger}\mathbf{v}\end{bmatrix}\). For \(\mathbf{u}_{0},\mathbf{v}_{0}\in\mathbb{R}^{n}\) and \(t\in\mathbb{R}^{+}\), we have_
\[V(\mathbf{u}_{t},\mathbf{v}_{t})\leq\exp\left(-\frac{\min\{1,\lambda_{\min}^{+}(\mathbf{M})^{2}\}}{\max\left\{\frac{2\lambda_{\max}(\mathbf{M})^{2}+2+\|\mathbf{U}\|_{2}^{2}}{\lambda_{\min}(\mathbf{U}+\mathbf{U}^{\top})},4\lambda_{\max}(\mathbf{M})\right\}}t\right)V(\mathbf{u}_{0},\mathbf{v}_{0}).\]
The proof is given in Appendix Section A.4. We show that the above bound enjoys a sharper than, or comparable to, the state-of-the-art convergence rate under particular conditions. With slight modifications, the Lyapunov function becomes identical to that of Gokhale et al. (2023). However, we directly rely on classical Lyapunov theory (Khalil, 2015) rather than the result from semi-contraction theory (De Pasquale et al., 2023) used in Gokhale et al. (2023).1 The classical Lyapunov approach simplifies the proof steps compared to those of semi-contraction theory. The detailed comparative analysis is in Appendix Section A.5. The fact that \(\mathbf{M}\) is symmetric, combined with considering the projected iterate \(\mathbf{M}\mathbf{M}^{\dagger}\mathbf{v}_{t}\), provides the improved or comparable bound.
Footnote 1: Gokhale et al. (2023) appeared on arxiv nearby the submission of this manuscript.
## 4 Distributed TD-learning with linear function approximation
In this section, we propose a new distributed TD-learning algorithm to solve (1) based on the results in Wang and Elia (2011); Lee (2023). In this scenario, each agent keeps its own parameter estimate \(\mathbf{\theta}^{i}\in\mathbb{R}^{q},\ 1\leq i\leq N\), and the goal of each agent is to estimate the value function \(\mathbf{V}^{\pi}(s)\approx\mathbf{\phi}(s)^{\top}\mathbf{\theta}_{c}\) satisfying (1) (i.e., the value function associated with the global reward \(\sum_{k=0}^{\infty}\gamma^{k}\frac{1}{N}\sum_{i=1}^{N}r^{i}\)) under the assumption that each agent has access only to its local reward \(r^{i}\). The parameter of each agent can be shared over the communication network whose structure is represented by the graph \(\mathcal{G}\), i.e., agents can share their parameters only with their neighbors over the network to solve the global problem. The connections among the agents can be represented by the graph Laplacian matrix (Anderson Jr and Morley, 1985), \(\mathbf{L}\in\mathbb{R}^{N\times N}\), which characterizes the graph \(\mathcal{G}\), i.e., \([\mathbf{L}]_{ij}=-1\) if \((i,j)\in\mathcal{E}\) and \([\mathbf{L}]_{ij}=0\) if \((i,j)\notin\mathcal{E}\), and \([\mathbf{L}]_{ii}=|\mathcal{N}_{i}|\) for \(i\in\mathcal{V}\). Note that \(\mathbf{L}\) is a symmetric positive semi-definite matrix, i.e., \(\mathbf{x}^{\top}\mathbf{L}\mathbf{x}\geq 0\) for \(\mathbf{x}\in\mathbb{R}^{N}\), and \(\mathbf{L}\mathbf{1}_{N}=0\). To proceed, let us first introduce a set of matrix notations: \(\bar{\mathbf{L}}:=\mathbf{L}\otimes\mathbf{I}_{q}\in\mathbb{R}^{Nq\times Nq},\quad\bar{\mathbf{D}}^{\pi}:=\mathbf{I}_{N}\otimes\mathbf{D}^{\pi}\in\mathbb{R}^{N|\mathcal{S}|\times N|\mathcal{S}|},\quad\bar{\mathbf{P}}^{\pi}:=\mathbf{I}_{N}\otimes\mathbf{P}^{\pi}\in\mathbb{R}^{N|\mathcal{S}|\times N|\mathcal{S}|},\quad\bar{\mathbf{\Phi}}:=\mathbf{I}_{N}\otimes\mathbf{\Phi}\in\mathbb{R}^{N|\mathcal{S}|\times Nq},\quad\bar{\mathbf{\theta}}=\begin{bmatrix}\mathbf{\theta}^{1^{\top}}&\mathbf{\theta}^{2^{\top}}&\cdots&\mathbf{\theta}^{N^{\top}}\end{bmatrix}^{\top}\in\mathbb{R}^{Nq},\quad\bar{\mathbf{A}}=\mathbf{I}_{N}\otimes\mathbf{A}\in\mathbb{R}^{Nq\times Nq},\quad\bar{\mathbf{b}}=\begin{bmatrix}\mathbf{b}_{1}^{\top}&\mathbf{b}_{2}^{\top}&\cdots&\mathbf{b}_{N}^{\top}\end{bmatrix}^{\top}\in\mathbb{R}^{Nq},\quad\bar{\mathbf{w}}=\begin{bmatrix}\mathbf{w}^{1^{\top}}&\mathbf{w}^{2^{\top}}&\cdots&\mathbf{w}^{N^{\top}}\end{bmatrix}^{\top}\in\mathbb{R}^{Nq},\) where \(\otimes\) denotes the Kronecker product, and \(\bar{\mathbf{w}}\) is another collection of learnable parameters \(\{\mathbf{w}^{i}\in\mathbb{R}^{q}\}_{i=1}^{N}\), where \(\mathbf{w}^{i}\) is assigned to agent \(i\) for \(1\leq i\leq N\).
Meanwhile, Wang and Elia (2011) studied distributed optimization algorithms (Tsitsiklis, 1984) from the control system perspective in the continuous-time domain, which can be represented as a Lagrangian problem (Hestenes, 1969). Compared to other distributed optimization algorithms (Nedic and Ozdaglar, 2009; Pu and Nedic, 2021), the method in Wang and Elia (2011) does not require any specific initialization, diminishing step-sizes, or a doubly stochastic matrix that corresponds to the underlying communication graph. Due to these advantages, this framework has been further studied in Droge and Egerstedt (2014); Shi et al. (2015); Hatanaka et al. (2018); Bin et al. (2022). Inspired by Wang and Elia (2011), Lee (2023) developed a distributed TD-learning algorithm and provided an asymptotic convergence analysis based on the O.D.E. method. The analysis relies on Barbalat's lemma (Khalil, 2015), which makes extension to the non-asymptotic finite-time analysis difficult. Moreover, they mostly focus on the deterministic continuous-time algorithms. The corresponding distributed TD-learning is summarized in Algorithm 1, where each agent updates its local parameter using the local TD-error in (6). The updates in (7) and (8) in Algorithm 1 can be obtained by discretizing the continuous-time O.D.E. introduced in (Wang and Elia, 2011) and implementing it using stochastic samples.
Using the stacked vector representation \(\begin{bmatrix}\bar{\mathbf{\theta}}_{k}\\ \bar{\mathbf{w}}_{k}\end{bmatrix}\in\mathbb{R}^{2Nq}\) for \(k\in\mathbb{N}_{0}\), the updates (7) and (8) in Algorithm 1 can be rewritten in the compact form:
\[\begin{bmatrix}\bar{\mathbf{\theta}}_{k+1}\\ \bar{\mathbf{w}}_{k+1}\end{bmatrix}=\begin{bmatrix}\bar{\mathbf{\theta}}_{k}\\ \bar{\mathbf{w}}_{k}\end{bmatrix}+\alpha_{k}\begin{bmatrix}\bar{\mathbf{A}}-\eta\bar{ \mathbf{L}}&-\eta\bar{\mathbf{L}}\\ \eta\bar{\mathbf{L}}&\mathbf{0}_{Nq\times Nq}\end{bmatrix}\begin{bmatrix}\bar{\mathbf{ \theta}}_{k}\\ \bar{\mathbf{w}}_{k}\end{bmatrix}+\alpha_{k}\begin{bmatrix}\bar{\mathbf{b}}\\ \mathbf{0}_{Nq}\end{bmatrix}+\alpha_{k}\bar{\mathbf{\epsilon}}(o_{k};\bar{\mathbf{\theta}} _{k}), \tag{9}\]
where \(o_{k}:=\{o_{k}^{i}\}_{i=1}^{N}\), and for \(1\leq i\leq N\), \(\mathbf{\epsilon}^{i}(o_{k}^{i};\mathbf{\theta}_{k}^{i}):=\delta(o_{k}^{i};\mathbf{\theta}_{k}^{i})\mathbf{\phi}(s_{k})-\mathbf{A}\mathbf{\theta}_{k}^{i}-\mathbf{b}^{i}\in\mathbb{R}^{q}\), and
\[\bar{\mathbf{\epsilon}}(o_{k};\bar{\mathbf{\theta}}_{k}):=\begin{bmatrix}\mathbf{\epsilon }^{1}(o_{k}^{1};\mathbf{\theta}_{k}^{1})^{\top}&\mathbf{\epsilon}^{2}(o_{k}^{2};\mathbf{ \theta}_{k}^{2})^{\top}&\cdots&\mathbf{\epsilon}^{N}(o_{k}^{N};\mathbf{\theta}_{k}^{N} )^{\top}&\mathbf{0}_{Nq}^{\top}\end{bmatrix}^{\top}\in\mathbb{R}^{2Nq}. \tag{10}\]
Note that the superscript of \(\mathbf{\epsilon}^{i}\) corresponds to the superscript of \(\mathbf{b}^{i}\). Compared to the algorithm in Lee (2023), we introduce an additional positive variable \(\eta>0\) multiplied with the graph Laplacian matrix, which results in the factor \(\eta\) multiplied with the mixing part in Algorithm 1, in order to control the variance of the update. We note that when the number of neighbors of an agent \(i\in\mathcal{V}\) is large, then so is the variance of the corresponding updates of that agent. In this case, the variance can be controlled by choosing \(\eta\) to be small.
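For concreteness, the sketch below implements one stochastic update of (9) in stacked form; the function name and signature are illustrative. The argument `td_term` is assumed to stack the local TD terms \(\delta(o_{k}^{i};\mathbf{\theta}_{k}^{i})\mathbf{\phi}(s_{k})\), which, by the definition of \(\mathbf{\epsilon}^{i}\) above, equal \(\bar{\mathbf{A}}\bar{\mathbf{\theta}}_{k}+\bar{\mathbf{b}}\) plus zero-mean noise on the \(\bar{\mathbf{\theta}}\) block.

```python
def distributed_td_step(theta_bar, w_bar, alpha, eta, L_bar, td_term):
    """One step of Eq. (9); all arguments are NumPy arrays (or anything supporting @).

    td_term stacks delta(o_k^i; theta_k^i) * phi(s_k) over agents, i.e.
    A_bar @ theta_bar + b_bar plus the noise of Eq. (10) on the theta block.
    """
    theta_next = theta_bar + alpha * (td_term - eta * L_bar @ (theta_bar + w_bar))
    w_next = w_bar + alpha * eta * L_bar @ theta_bar    # mixing (dual) update
    return theta_next, w_next
```

Setting \(\eta\) small scales down both \(\bar{\mathbf{L}}\)-terms, which is exactly the variance-control mechanism described above.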
The behavior of a stochastic algorithm is known to be closely related to that of its continuous-time O.D.E. counterpart (Borkar and Meyn, 2000; Srikant and Ying, 2019). In this respect, the corresponding O.D.E. model of (9) is given by
\[\frac{d}{dt}\begin{bmatrix}\bar{\mathbf{\theta}}_{t}\\ \bar{\mathbf{w}}_{t}\end{bmatrix}=\begin{bmatrix}\bar{\mathbf{A}}-\eta\bar{\mathbf{L}}&- \eta\bar{\mathbf{L}}\\ \eta\bar{\mathbf{L}}&\mathbf{0}_{Nq\times Nq}\end{bmatrix}\begin{bmatrix}\bar{\mathbf{ \theta}}_{t}\\ \bar{\mathbf{w}}_{t}\end{bmatrix}+\begin{bmatrix}\bar{\mathbf{b}}\\ \mathbf{0}_{Nq}\end{bmatrix},\quad\bar{\mathbf{\theta}}_{0},\bar{\mathbf{w}}_{0}\in\mathbb{ R}^{Nq}, \tag{11}\]
for \(t\in\mathbb{R}^{+}\). The above linear system is closely related to the primal-dual gradient dynamics in (5). Compared to (5), the difference lies in the fact that the above system corresponds to the dynamics of the distributed TD-learning represented by the matrix \(\bar{\mathbf{A}}\), rather than the gradient of a particular objective function. It is straightforward to check that the equilibrium point of the above system is given by \(\mathbf{1}_{N}\otimes\mathbf{\theta}_{c}\) and \(\frac{1}{\eta}\bar{\mathbf{w}}_{\infty}\), where \(\bar{\mathbf{w}}_{\infty}\) satisfies \(\bar{\mathbf{L}}\bar{\mathbf{w}}_{\infty}=\bar{\mathbf{A}}(\mathbf{1}_{N}\otimes\mathbf{\theta}_{c})+\bar{\mathbf{b}}\).
In what follows, we will analyze the finite-time behavior of (9) based on the Lyapunov inequality in Lemma 4.1. For the analysis, we will follow the spirit of Srikant and Ying (2019), which provided a finite-time bound for the standard single-agent TD-learning based on the Lyapunov method (Sontag, 2013). To proceed further, let us consider the change of coordinates \(\tilde{\mathbf{\theta}}_{k}:=\bar{\mathbf{\theta}}_{k}-\mathbf{1}_{N}\otimes\mathbf{\theta}_{c}\) and \(\tilde{\mathbf{w}}_{k}:=\bar{\mathbf{w}}_{k}-\frac{1}{\eta}\bar{\mathbf{w}}_{\infty}\), with which we can rewrite (9) as
\[\begin{bmatrix}\tilde{\mathbf{\theta}}_{k+1}\\ \tilde{\mathbf{w}}_{k+1}\end{bmatrix}=\begin{bmatrix}\tilde{\mathbf{\theta}}_{k}\\ \tilde{\mathbf{w}}_{k}\end{bmatrix}+\alpha_{k}\begin{bmatrix}\bar{\mathbf{A}}-\eta\bar{ \mathbf{L}}&-\eta\bar{\mathbf{L}}\\ \eta\bar{\mathbf{L}}&\mathbf{0}_{Nq\times Nq}\end{bmatrix}\begin{bmatrix}\tilde{\mathbf{ \theta}}_{k}\\ \tilde{\mathbf{w}}_{k}\end{bmatrix}+\alpha_{k}\bar{\mathbf{\epsilon}}(o_{k};\tilde{\mathbf{ \theta}}_{k}). \tag{12}\]
We will now derive a Lyapunov inequality (Sontag, 2013) for the above system, stated in Lemma 4.1 below. To this end, we rely on the analysis in Qu and Li (2018), which proved exponential convergence of the continuous-time primal-dual gradient dynamics (Arrow et al., 1958) based on the Lyapunov method. However, the newly introduced singularity of \(\bar{\mathbf{L}}\) imposes difficulty in directly applying the results from Qu and Li (2018), which do not allow such singularity. To overcome this difficulty, we multiply the dual update \(\tilde{\mathbf{w}}_{k+1}\) in (12) by \(\bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\), which is the projection onto the range space of \(\bar{\mathbf{L}}\). Moreover, the symmetry assumption on \(\bar{\mathbf{L}}\) helps construct an explicit solution of the Lyapunov inequality in Lemma 4.1. In particular, multiplying \(\begin{bmatrix}\mathbf{I}_{Nq}&\mathbf{0}_{Nq\times Nq}\\ \mathbf{0}_{Nq\times Nq}&\bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\end{bmatrix}\) to (12) leads to
\[\begin{bmatrix}\tilde{\mathbf{\theta}}_{k+1}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}_{k+1}\end{bmatrix}= \begin{bmatrix}\tilde{\mathbf{\theta}}_{k}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}_{k}\end{bmatrix}+\alpha_{k} \begin{bmatrix}\bar{\mathbf{A}}-\eta\bar{\mathbf{L}}&-\eta\bar{\mathbf{L}}\\ \eta\bar{\mathbf{L}}&\mathbf{0}_{Nq\times Nq}\end{bmatrix}\begin{bmatrix}\tilde{\mathbf{\theta}}_{k}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}_{k}\end{bmatrix}+\alpha_{k} \bar{\mathbf{\epsilon}}(o_{k};\tilde{\mathbf{\theta}}_{k}), \tag{13}\]
which can be proved using Lemma A.1 in the Appendix Section A.2. For this modified system, we now derive the following Lyapunov inequality.
**Lemma 4.1**.: _There exists a symmetric positive definite matrix \(\mathbf{G}\in\mathbb{R}^{2Nq\times 2Nq}\) such that \(\frac{8+\eta+4\eta^{2}\lambda_{\max}(\bar{\mathbf{L}})^{2}}{2\eta(1-\gamma)w}\mathbf{I}_{2Nq}\prec\mathbf{G}\prec 2\frac{8+\eta+4\eta^{2}\lambda_{\max}(\bar{\mathbf{L}})^{2}}{\eta(1-\gamma)w}\mathbf{I}_{2Nq}\), and for \(\tilde{\mathbf{\theta}},\tilde{\mathbf{w}}\in\mathbb{R}^{Nq}\),_
\[2\begin{bmatrix}\tilde{\mathbf{\theta}}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}\end{bmatrix}^{\top}\mathbf{G} \begin{bmatrix}\bar{\mathbf{A}}-\eta\bar{\mathbf{L}}&-\eta\bar{\mathbf{L}}\\ \eta\bar{\mathbf{L}}&\mathbf{0}_{Nq\times Nq}\end{bmatrix}\begin{bmatrix}\tilde{\mathbf{ \theta}}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}\end{bmatrix}\leq-\min\{1,\eta \lambda_{\min}^{+}(\bar{\mathbf{L}})^{2}\}\left\|\begin{bmatrix}\tilde{\mathbf{\theta} }\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}\end{bmatrix}\right\|_{2}^{2}.\]
The proof, given in Appendix Section A.6, follows by noting that \(\bar{\mathbf{A}}-\eta\bar{\mathbf{L}}\) is negative semi-definite and \(\bar{\mathbf{L}}\) is rank-deficient, and by applying Lemma 3.1.
### i.i.d. observation case
We are now in a position to provide the first main result, a finite-time analysis of Algorithm 1 under the i.i.d. observation model. We note that the i.i.d. observation model is common in the literature and provides simple and clean theoretical insights.
**Theorem 4.2**.:
1. _Suppose we use constant step-size_ \(\alpha_{0}=\alpha_{1}=\cdots=\alpha_{k}\) _for_ \(k\in\mathbb{N}_{0}\)_, and_ \(\alpha_{0}\leq\bar{\alpha}\) _such that_ \(\bar{\alpha}=\mathcal{O}\left(\frac{\min\{1,\eta\lambda_{\min}^{+}(\bar{\mathbf{L}})^{2}\}}{\lambda_{\max}(\bar{\mathbf{L}})^{2}\left(\frac{8}{\eta}+4\eta\lambda_{\max}(\bar{\mathbf{L}})^{2}\right)}(1-\gamma)w\right)\)_. Then, we have_ \[\frac{1}{N}\mathbb{E}\left[\left\|\begin{bmatrix}\tilde{\mathbf{\theta}}_{k+1}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}_{k+1}\end{bmatrix}\right\|_{2}^{2}\right]=\mathcal{O}\left(\exp\left(-\alpha_{0}k\right)+\alpha_{0}\frac{R_{\max}^{2}}{w^{3}(1-\gamma)^{3}}\frac{2+\eta^{2}\lambda_{\max}(\bar{\mathbf{L}})^{2}}{\eta\min\{1,\eta\lambda_{\min}^{+}(\bar{\mathbf{L}})^{2}\}}\right).\]
2. _Suppose we have_ \(\alpha_{k}=\frac{h_{1}}{k+h_{2}}\) _for_ \(k\in\mathbb{N}_{0}\)_. There exist_ \(\bar{h}_{1}\) _and_ \(\bar{h}_{2}\) _such that letting_ \(h_{1}=\Theta(\bar{h}_{1})\) _and_ \(h_{2}=\Theta(\bar{h}_{2})\) _yields_ \[\frac{1}{N}\mathbb{E}\left[\left\|\begin{bmatrix}\tilde{\mathbf{\theta}}_{k+1}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}_{k+1}\end{bmatrix}\right\|_{2}^{2}\right]= \mathcal{O}\left(\frac{1}{k}\frac{(2+\eta^{2}\lambda_{\max}(\bar{\mathbf{L}})^{2})^{2}}{\eta^{2}\min\{1,\eta\lambda_{\min}^{+}(\bar{\mathbf{L}})^{2}\}^{2}}\frac{R_{\max}^{2}}{w^{4}(1-\gamma)^{4}}\right).\]
The proof and the exact constants can be found in Appendix Section A.8. Using a constant step-size, we can guarantee an exponential convergence rate with a small bias term \(\mathcal{O}\left(\alpha_{0}\frac{R_{\max}^{2}\lambda_{\max}(\bar{\mathbf{L}})}{w^{3}(1-\gamma)^{3}}\right)\) when \(\eta\approx\frac{1}{\lambda_{\max}(\bar{\mathbf{L}})}\) and \(\lambda_{\min}^{+}(\bar{\mathbf{L}})^{2}\geq\lambda_{\max}(\bar{\mathbf{L}})\). An appropriate choice of \(\eta\) allows a wider range of step-sizes, and this will become clear in the experimental results in Section 5. Furthermore, the algorithm's performance is closely tied to the properties of the graph structure. \(\lambda_{\min}^{+}(\bar{\mathbf{L}})\), the smallest non-zero eigenvalue of the graph Laplacian, characterizes the connectivity of the graph (Chung, 1997), and a graph with lower connectivity will yield a larger bias. \(\lambda_{\max}(\bar{\mathbf{L}})\) is the largest eigenvalue of the graph Laplacian, and it can be upper bounded by twice the maximum degree of the graph (Anderson Jr and Morley, 1985); that is, a graph with a higher maximum degree could incur a larger bias. As for the diminishing step-size, we achieve an \(\mathcal{O}\left(\frac{1}{k}\right)\) convergence rate from the second item of Theorem 4.2, and similar observations hold as in the constant step-size case, i.e., the convergence rate depends on the smallest non-zero and the maximum eigenvalue of the graph Laplacian. Lastly, as in Wang et al. (2020), our bound does not explicitly depend on the number of agents \(N\), in contrast to the bounds in Doan et al. (2019) and Sun et al. (2020), where the bias term and convergence rate scale at the order of \(N\).
### Markovian observation case
In this section, we consider the Markovian observation model, where the sequence of observations \(\{(s_{k},s^{\prime}_{k},r_{k})\}_{k=1}^{T}\) follows a Markov chain. Compared to the i.i.d. observation model, the correlation between the observations and the updated iterates imposes difficulty in the analysis. To overcome this issue, an assumption on the Markov chain that ensures a geometric mixing property is helpful. In particular, the so-called ergodic Markov chain can be characterized via the metric called total variation distance (Levin and Peres, 2017), \(d_{\mathrm{TV}}(P,Q)=\frac{1}{2}\sum_{x\in\mathcal{S}}|P(x)-Q(x)|,\) where \(P\) and \(Q\) are probability measures on \(\mathcal{S}\). A Markov chain is said to be ergodic if it is irreducible and aperiodic (Levin and Peres, 2017). An ergodic Markov chain is known to converge to its unique stationary distribution at a geometrically decaying rate (Levin and Peres, 2017), i.e., for \(k\in\mathbb{N}_{0}\), \(\sup_{1\leq i\leq|\mathcal{S}|}d_{\mathrm{TV}}(\mathbf{e}_{i}^{\top}(\mathbf{P}^{\pi})^{k},\mu_{\infty})\leq m\rho^{k},\) where \(\mathbf{e}_{i}\in\mathbb{R}^{|\mathcal{S}|}\) for \(1\leq i\leq|\mathcal{S}|\) is the \(|\mathcal{S}|\)-dimensional vector whose \(i\)-th element is one and whose others are zero, \(\mu_{\infty}\in\mathbb{R}^{|\mathcal{S}|}\) is the stationary distribution of the Markov chain induced by the transition matrix \(\mathbf{P}^{\pi}\), \(m\in\mathbb{R}\) is a positive constant, and \(\rho\in(0,1)\). The assumption on the geometric mixing property of the Markov chain is common in the literature (Srikant and Ying, 2019; Bhandari et al., 2018; Wang et al., 2020). The mixing time is an important quantity of a Markov chain, defined as
\[\tau(\delta):=\min\{k\in\mathbb{N}\mid m\rho^{k}\leq\delta\}. \tag{14}\]
For simplicity, we will use \(\tau:=\tau(\alpha_{T})\), where \(T\in\mathbb{N}_{0}\) denotes the total number of iterations, and \(\alpha_{k},k\in\mathbb{N}_{0}\), is the step-size at the \(k\)-th iteration. If we use the step-size \(\alpha_{k}=\frac{1}{1+k},\ k\in\mathbb{N}\), the mixing time \(\tau\) only contributes a logarithmic factor, \(\log T\), to the finite-time bound (Bhandari et al., 2018). As in the proof of the i.i.d. case, using the Lyapunov argument in Lemma 4.1, we can prove a finite-time bound on the mean-squared error, following the spirit of Srikant and Ying (2019). To simplify the proof, we will investigate the case \(\eta=1\).
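As a small illustration of (14), the sketch below computes \(\tau(\delta)\) for given \((m,\rho)\); estimating \(\rho\) via the second-largest eigenvalue modulus of \(\mathbf{P}^{\pi}\) is an illustrative heuristic and not part of the analysis.

```python
import math
import numpy as np

def mixing_time(m, rho, delta):
    """Smallest k with m * rho**k <= delta, per Eq. (14)."""
    return max(1, math.ceil(math.log(delta / m) / math.log(rho)))

print(mixing_time(m=1.0, rho=0.9, delta=0.01))        # -> 44

# Illustrative: for the uniform 3-state chain of Section 5, the second-largest
# eigenvalue modulus of P_pi is 0, i.e., the chain mixes in a single step.
P_pi = np.full((3, 3), 1.0 / 3.0)
print(np.sort(np.abs(np.linalg.eigvals(P_pi)))[-2])   # -> ~0.0
```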
**Theorem 4.3**.:
1. _Suppose we use constant step-size_ \(\alpha_{0}=\alpha_{1}=\cdots=\alpha_{T}\) _such that_ \(\alpha_{0}\leq\tilde{\alpha}\) _where_ \(\tilde{\alpha}=\mathcal{O}\left(\frac{\min\{1,\lambda_{\min}^{+}(\bar{\mathbf{L}})^{2}\}(1-\gamma)w}{\tau\max\left\{\frac{\sqrt{Nq}R_{\max}}{w(1-\gamma)},q\right\}\lambda_{\max}(\mathbf{L})^{2}}\right).\) _Then, we have, for_ \(\tau\leq k\leq T\)_,_ \[\frac{1}{N}\mathbb{E}\left[\left\|\begin{bmatrix}\tilde{\mathbf{\theta}}_{k+1}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}_{k+1}\end{bmatrix}\right\|_{2}^{2}\right]=\mathcal{O}\left(\exp{(-\alpha_{0}(k-\tau))}+\alpha_{0}\tau\frac{R_{\max}^{2}}{w^{3}(1-\gamma)^{3}}\frac{\lambda_{\max}(\mathbf{L})^{2}}{\min\{1,\lambda_{\min}^{+}(\mathbf{L})^{2}\}}\right).\]
2. _Considering diminishing step-size, with_ \(\alpha_{t}=\frac{h_{1}}{t+h_{2}}\) _for_ \(t\in\mathbb{N}_{0}\)_, there exist_ \(\bar{h}_{1}\) _and_ \(\bar{h}_{2}\) _such that for_ \(h_{1}=\Theta(\bar{h}_{1})\) _and_ \(h_{2}=\Theta(\bar{h}_{2})\)_, we have for_ \(\tau\leq k\leq T\)_,_ \[\frac{1}{N}\mathbb{E}\left[\left\|\begin{bmatrix}\tilde{\mathbf{\theta}}_{k+1}\\ \bar{\mathbf{L}}\bar{\mathbf{L}}^{\dagger}\tilde{\mathbf{w}}_{k+1}\end{bmatrix}\right\|_{2}^{2}\right]=\mathcal{O}\left(\frac{\tau}{k}\frac{qR_{\max}^{2}}{w^{4}(1-\gamma)^{4}}\frac{\lambda_{\max}(\mathbf{L})^{5}}{\min\{1,\lambda_{\min}^{+}(\mathbf{L})^{2}\}^{2}}\right).\]
The proof and the exact values can be found in Appendix Section A.10. For the constant step-size, we can see that the bias term scales at the order of \(\mathcal{O}\left(\tau\alpha_{0}\lambda_{\max}(\mathbf{L})^{2}\right)\), and the bounds have additional mixing-time factors compared to the i.i.d. case. Considering the diminishing step-size, a convergence rate of \(\mathcal{O}\left(\frac{\tau}{k}\right)\) can be verified, incorporating a multiplicative factor of the mixing time \(\tau\). As summarized in Table 1, the proposed distributed TD-learning does not require a doubly stochastic matrix or any specific initialization.
| Work | Method | Observation model | Step-size | Requirement | Doubly stochastic matrix |
| --- | --- | --- | --- | --- | --- |
| Doan et al. (2019) | Nedic and Ozdaglar (2009) | i.i.d. | Constant / \(\frac{1}{t+1}\) | Projection | ✓ |
| Doan et al. (2021) | Nedic and Ozdaglar (2009) | Markovian | Constant / \(\frac{h_{1}}{t+1}\) | ✗ | ✓ |
| Sun et al. (2020) | Nedic and Ozdaglar (2009) | i.i.d. / Markovian | Constant | ✗ | ✓ |
| Zeng et al. (2022) | Nedic and Ozdaglar (2009) | i.i.d. / Markovian | Constant | ✗ | ✓ |
| Wang et al. (2020) | Pu and Nedic (2021) | i.i.d. / Markovian | Constant | Specific initialization | ✓ |
| Ours | Wang and Elia (2011) | i.i.d. / Markovian | Constant / \(\frac{h_{1}}{t+h_{2}}\) | ✗ | ✗ |

Table 1: Comparison with existing works.
## 5 Experiments
This section provides experimental results for Algorithm 1. To begin, we describe the MAMDP setup, where the number of states is three and the dimension of the features is two. An agent can transition to every state with uniform probability. The feature matrix is set as \(\mathbf{\Phi}^{\top}=\begin{bmatrix}1&0&1\\ 0&1&0\end{bmatrix}\), which is a full-column-rank matrix. The rewards are generated uniformly at random in the interval \((0,1)\). The discount factor is set to \(0.8\).
For each experiment with \(N\in\{2^{3},2^{4},2^{5}\}\) agents, we construct a cycle, i.e., a graph \(\mathcal{G}\) consisting of \(\mathcal{V}:=\{1,2,\ldots,N\}\) and \(\mathcal{E}:=\{(i,i+1)\}_{i=1}^{N-1}\cup\{(N,1)\}\) (West, 2020). For a cycle with an even number of vertices, the smallest non-zero eigenvalue of the graph Laplacian decreases as the number of vertices increases, while the maximum eigenvalue remains the same: the smallest non-zero eigenvalue is \(2-2\cos\left(\frac{2\pi}{N}\right)\), and the largest eigenvalue is four (Mohar, 1997). As \(N\) gets larger, the smallest non-zero eigenvalue gets smaller, becoming \(0.59,0.15,0.04\) for \(N=2^{3},2^{4},2^{5}\), respectively. Therefore, as the number of agents increases, the bias in the final error term becomes larger, as expected from Theorem 4.2, and this can be verified in the plot in Figure 1a. The plot shows the results for constant step-sizes \(\alpha_{0}\in\{2^{-3},2^{-4},2^{-5}\}\). To investigate the effect of \(\lambda_{\max}(\bar{\mathbf{L}})\), we construct a star graph, where one vertex has degree \(N-1\) and the others have degree one. The maximum eigenvalue of a star graph is \(N\) and its smallest non-zero eigenvalue is one (Nica, 2016). As \(N\) gets larger, we expect the bias term to be larger from the bound in Theorem 4.2. The result is further discussed in Appendix Section A.11.
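The spectral quantities quoted above are easy to reproduce; the sketch below (illustrative code, not the experiment implementation) builds the cycle and star Laplacians and reads off their smallest non-zero and largest eigenvalues.

```python
import numpy as np

def cycle_laplacian(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def star_laplacian(n):                     # vertex 0 is the hub with degree n - 1
    A = np.zeros((n, n))
    A[0, 1:] = A[1:, 0] = 1.0
    return np.diag(A.sum(axis=1)) - A

for n in (8, 16, 32):
    lam = np.linalg.eigvalsh(cycle_laplacian(n))   # eigenvalues in ascending order
    print(n, round(lam[1], 2), round(lam[-1], 2))
# -> 8 0.59 4.0 | 16 0.15 4.0 | 32 0.04 4.0, matching 2 - 2cos(2*pi/n) with
#    lambda_max = 4; for star_laplacian(n), lam[1] = 1 and lam[-1] = n.
```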
To verify the effect of \(\eta\), we construct a random graph model (Erdos et al., 1960), where among the \(N(N-1)/2\) possible edges, \((N-3)(N-4)/2\) edges are randomly selected. The plot in Figure 1b shows the evolution of the mean squared error for \(N=32\) and step-size \(0.1\) with different \(\eta\) values. When \(\eta=0.5\) or \(\eta=1\), the algorithm diverges. Moreover, the bias gets smaller around \(\eta\approx\frac{\sqrt{2}}{\lambda_{\max}(\bar{\mathbf{L}})}\approx 0.046\). This implies that an appropriate choice of \(\eta\) can control the variance when the number of neighbors is large, but if \(\eta\) is too small or too large, Algorithm 1 may diverge or incur a large bias. This matches the bound in Theorem 4.2. Further results can be found in Appendix Section A.11.
## 6 Conclusion
In this study, we have investigated primal-dual gradient dynamics subject to null-space constraints and their application to distributed TD-learning. We have derived finite-time error bounds for both the gradient dynamics and the distributed TD-learning, and the results have been experimentally demonstrated. Potential future directions include extending the study to finite-time bounds of distributed TD-learning with nonlinear function approximation.
Figure 1: Experiment results of Algorithm 1. |
2310.17344 | Examination of Cybersickness in Virtual Reality: The Role of Individual
Differences, Effects on Cognitive Functions & Motor Skills, and Intensity
Differences During and After Immersion | Background: Given that VR is applied in multiple domains, understanding the
effects of cyber-sickness on human cognition and motor skills and the factors
contributing to cybersickness gains urgency. This study aimed to explore the
predictors of cybersickness and its interplay with cognitive and motor skills.
Methods: 30 participants, 20-45 years old, completed the MSSQ and the CSQ-VR,
and were immersed in VR. During immersion, they were exposed to a roller
coaster ride. Before and after the ride, participants responded to CSQ-VR and
performed VR-based cognitive and psychomotor tasks. Post-VR session,
participants completed the CSQ-VR again. Results: Motion sickness
susceptibility, during adulthood, was the most prominent predictor of
cybersickness. Pupil dilation emerged as a significant predictor of
cybersickness. Experience in videogaming was a significant predictor of both
cybersickness and cognitive/motor functions. Cybersickness negatively affected
visuospatial working memory and psychomotor skills. The intensities of overall
cybersickness, nausea, and vestibular symptoms significantly decreased after
removing the VR headset. Conclusions: In order of importance, motion sickness
susceptibility and gaming experience are significant predictors of
cybersickness. Pupil dilation appears as a cybersickness' biomarker.
Cybersickness negatively affects visuospatial working memory and psychomotor
skills. Cybersickness and its effects on performance should be examined during
and not after immersion. | Panagiotis Kourtesis, Agapi Papadopoulou, Petros Roussos | 2023-10-26T12:23:08Z | http://arxiv.org/abs/2310.17344v1 | Examination of Cybersickness in Virtual Reality: The Role of Individual Differences, Effects on Cognitive Functions & Motor Skills, and Intensity Differences During and After Immersion.
###### Abstract
Background: Given that VR is applied in multiple domains, understanding the effects of cybersickness on human cognition and motor skills, as well as the factors contributing to cybersickness, gains urgency. This study aimed to explore the predictors of cybersickness and its interplay with cognitive and motor skills. Methods: 30 participants, 20-45 years old, completed the MSSQ and the CSQ-VR, and were immersed in VR. During immersion, they were exposed to a roller coaster ride. Before and after the ride, participants responded to the CSQ-VR and performed VR-based cognitive and psychomotor tasks. Post-VR session, participants completed the CSQ-VR again. Results: Motion sickness susceptibility during adulthood was the most prominent predictor of cybersickness. Pupil dilation emerged as a significant predictor of cybersickness. Experience in videogaming was a significant predictor of both cybersickness and cognitive/motor functions. Cybersickness negatively affected visuospatial working memory and psychomotor skills. The intensities of overall cybersickness, nausea, and vestibular symptoms significantly decreased after removing the VR headset. Conclusions: In order of importance, motion sickness susceptibility and gaming experience are significant predictors of cybersickness. Pupil dilation appears to be a biomarker of cybersickness. Cybersickness negatively affects visuospatial working memory and psychomotor skills. Cybersickness and its effects on performance should be examined during, and not after, immersion.
Virtual Reality; Cybersickness; VR Sickness; Motion Sickness; Cognition; Motor Skills; Pupil dilation; Gaming Experience; Immersion; Susceptibility.
## 1 Introduction
Immersive Virtual Reality (VR) represents one of the most remarkable technological advancements of the 21st century. As a digital interface that promises full immersion into an alternate or simulated environment, VR has seen its potential tapped across an array of disciplines. Entertainment industries have been early adopters, providing audiences with experiences that were once relegated to the realms of imagination [1, 2, 3]. Concurrently, the education sector has witnessed a paradigm shift with VR-enhanced pedagogical tools, fostering enriched learning experiences [4, 5, 6].
Furthermore, the domain of professional training has embraced VR to craft realistic scenarios for a myriad of professionals, from manual labourers mastering their craft to surgeons simulating complex procedures [7, 8, 9]. The medical arena is reaping the benefits too. Beyond conventional treatments and therapies, VR is emerging as a powerful adjunctive tool. Pain management, once reliant solely on pharmacological interventions, now explores the pain-distracting potential of VR [10]. Rehabilitation, be it physical or neurological, is experiencing a renaissance with VR-infused therapies [11, 12].
Neuropsychology, in particular, has found a robust partner in VR, aiding in cognitive assessments [13], [14], training [15], and targeted rehabilitation efforts [16; 17; 18].
Such broad applications further extend their arms to vulnerable populations. The elderly, often considered tech-averse, find solace and cognitive rejuvenation in VR experiences tailored to their needs [19], [20]. Individuals with Mild Cognitive Impairment (MCI) and/or a type of dementia [21; 22], or developmental challenges such as Attention-Deficit Hyperactivity Disorder (ADHD) [23; 24; 25] and Autism Spectrum Disorder (ASD) [17; 26; 27], are not mere spectators. VR interventions, designed with sensitivity and precision, are being administered to provide these populations with therapeutic as well as recreational relief. Considering the importance of the aforementioned applications of VR and the fragility of some of the targeted populations, the effective implementation of VR becomes imperative.
### Cybersickness in Virtual Reality
While the promise of VR in transforming various domains is indisputable, VR also harbours an inherent limitation: cybersickness, a condition affecting a segment of its users [28]. Manifesting as a triad of nausea, disorientation, and oculomotor disturbances, cybersickness remains a significant concern. While there's a temptation to draw parallels between cybersickness and simulator sickness, the two display distinct characteristics [29]. Notably, cybersickness presents with heightened general discomfort, particularly intensified by nausea and disorientation [29]. Adding another layer to this complex tapestry, cybersickness also stands apart from motion sickness. The former arises primarily from visual cues in VR, whereas the latter emerges from actual physical movement [30].
Delving into the underlying causes of cybersickness, a comprehensive theoretical understanding is still emerging. However, the sensory conflict theory has gained traction, shedding light on the root of the issue [28; 30; 31]. According to this theory, the unsettling symptoms of cybersickness predominantly result from a sensorial conflict between the visual system and the vestibular mechanisms in our inner ear [28; 31]. At its core, our sense of balance and spatial positioning is an intricate intertwining of visual, vestibular, and proprioceptive feedback. These systems often receive mismatched cues in VR, leading to sensory dissonance. For VR, this conflict can be attributed to vection, the illusory perception of motion [32]. This illusion, particularly when accompanied by movements like linear and angular accelerations, has been pinpointed as a major instigator of cybersickness in VR [33; 34]. As VR continues its ascendancy in the tech world, the quest to understand and alleviate cybersickness remains a pressing concern.
### Mitigation of Cybersickness
Two primary catalysts for cybersickness are VR's hardware and software characteristics [35]. Hardware issues such as latency, which refers to the delay between a user's action and the corresponding change in the virtual environment, can be incredibly disorienting. Similarly, refresh rates that are not synchronized with a user's natural perception can lead to a jarring experience [35]. On the software side, inconsistencies in visual-vestibular integration, where what one sees doesn't match what one feels, can throw off the body's equilibrium [35; 36]. Similarly, increased cognitive workload and confusion may also play a role in modulating cybersickness intensity [35; 36].
However, the industry and the scientific community are not passive in the face of these challenges. With advancements in head-mounted displays (HMDs), many of these hardware-related issues are gradually being addressed [35]. Better display resolutions, faster refresh rates, and improved motion tracking reduce the disconnect users feel. Simultaneously, on the software end, industry or research software developers are now more attuned to creating experiences that align with human physiology. Guidelines tailored for specific scientific fields and targeted populations are emerging, ensuring a more holistic and comfortable VR experience for all [36; 37; 38; 39; 40; 41].
Furthermore, various strategies have been employed to counteract the symptoms of cybersickness, each with its own set of challenges. One such strategy is "acclimatization", which involves frequently exposing a person to elements that induce cybersickness to help them build resilience against it [42]. While this might be an effective solution, it is both time-intensive and costly, demanding significant effort and dedication. Additionally, there are recommendations for using specific medications or natural remedies, such as ginger, to tackle the effects [43], [44]. However, these methods can be obtrusive and might introduce unintended consequences like fatigue or potential allergies [43].
In the realm of Human-Computer Interaction (HCI), innovations such as adjusting the user's perspective [45], introducing dynamic focus shifts [46], ensuring bodily equilibrium [47], and incorporating brief intervals [48] have been put forward. However, these methods can impede certain virtual interactions (e.g., adjusting the user's perspective and maintaining balance) and might detract from the overall immersive experience (e.g., focus shifts and intervals). Similarly, joyful and calming music effectively alleviates cybersickness symptoms in VR [49]. However, again, using music is only suitable for some virtual environments, given that it may confound the purpose of the VR application. An efficient approach, which may be a universal solution, is to detect and prevent cybersickness. However, this universal solution should be adaptive to the user's needs and requirements.
### Questionnaires and Physiological Metrics of Cybersickness
One of the most common approaches to measure cybersickness is the administration of questionnaires, typically before and after exposure to VR [36], [50]. The effectiveness of these tools in evaluating cybersickness in VR settings has been a topic of keen interest. The Simulator Sickness Questionnaire (SSQ) [51] has received criticism from several studies for its inability to adequately measure cybersickness in VR [52], [53], [54]. While the Virtual Reality Sickness Questionnaire (VRSQ) was conceived as an improvement on the SSQ, it too exhibits considerable limitations, especially in terms of its structure and specificity [53], [56].
On the other hand, the Cybersickness in Virtual Reality Questionnaire (CSQ-VR) emerges as a more reliable and comprehensive tool [57; 58; 49; 53]. Given that intense cybersickness can significantly hamper a user's cognitive and motor functions, especially reaction times [59; 60; 49; 53], it's imperative for a tool to detect these declines effectively. The CSQ-VR, with its robust design and metrics, was substantially more efficient than both the SSQ and the VRSQ in detecting cognitive and/or motor skills decline due to cybersickness. Furthermore, the CSQ-VR is offered in a 3D-VR version, allowing for a repeated evaluation of cybersickness while the user is immersed in VR. This is essential considering that the intensity of cybersickness can vary throughout a VR session [53; 61; 32]. Additionally, considering that pupil size was a significant predictor of cybersickness intensity [49; 53], the integration of eye-tracking in the CSQ-VR further augments its cybersickness assessment capabilities.
Beyond questionnaires and the pupil size discussed above, other objective physiological metrics have been used to examine and predict cybersickness symptomatology [30; 62]. In this direction, electroencephalography (EEG), which captures electrical activity in the brain's cortical areas, is an efficient approach to detecting cybersickness [63]. However, using an EEG in combination with a VR HMD is not feasible for widespread utilization, due to the high cost of an EEG and the ergonomic problem of having two cumbersome devices mounted on the user's head. Other physiological metrics encompass the electrocardiogram (ECG), electrogastrogram (EGG), electrooculogram (EOG), photoplethysmogram via pulse oximeter (PPG), breathing rate, and galvanic skin response (GSR), which have been found to predict cybersickness symptomatology and intensity [64; 62]. For example, rises in bradygastric activity, breathing rate [64; 65], heart rate [60], and forehead skin conductance [66; 67] offer reliable indicators of cybersickness. However, beyond PPG and GSR, the other devices are also costly and not ergonomically appropriate for effective use. On the other hand, PPG and GSR can be
embedded in haptic gloves (e.g., see TeslaGloves) in an ergonomically efficient manner [68, 69]. Nevertheless, these haptic devices are costly, so, beyond industrial applications by enterprises, they are not affordable for the general population [68, 69]. Finally, pupil size is not the only eye-tracking metric that may indicate cybersickness. Other eye-tracking metrics, such as fixation duration and the distance between the eye gaze and the targeted object, may assist in predicting cybersickness [70]. Therefore, except for the eye-tracking embedded in many VR HMDs, the use of the remaining physiological metrics is either ergonomically and/or financially problematic.
### Individual Differences and Cybersickness
The experience of cybersickness varies among individuals, with factors like gender playing a potential role [49, 50]. Some studies suggest female users may experience more intense cybersickness than male users, but findings have been inconsistent [49, 50]. For instance, Petri et al.'s study [71] found no significant gender differences in objective metrics like heart rate but did observe a difference in subjective experiences based on the SSQ. Meanwhile, Melo et al. [72] found no such gender differences. Similarly, Stanney et al.'s experiments [73, 74] found gender insignificant in predicting cybersickness in certain conditions. A meta-analysis also supported the absence of significant gender differences when evaluating cybersickness in VR settings using the SSQ [50]. However, it's speculated that gaming and VR experience might play a role.
Few studies have delved into the impact of computing, VR, or gaming experience on cybersickness. While Stanney et al. [73] considered gaming experience, their methodology regarding its measurement was ambiguous, and their reported null effect cannot thus be considered reliable. Kourtesis and collaborators, in a series of studies [75, 13], found no significant effect of gaming or VR experience on cybersickness. However, the VR software implemented in these studies was thoroughly designed and developed to elicit minimal to no cybersickness symptoms. Conversely, Weech et al. [77] found that gaming experience might influence cybersickness, especially when paired with narratives. Likewise, in the recent study of Kourtesis et al. [49], gaming experience was found to affect the experience of cybersickness, with greater gaming experience indicating higher resilience. The same study also showed that gaming experience explained gender differences in cybersickness: participants of the opposite sex but with the same gaming experience did not differ in cybersickness intensity [49].
Moreover, individuals demonstrate diverse levels of susceptibility to experiencing cybersickness [78, 79]. In previous studies, scores on the Motion Sickness Susceptibility Questionnaire (MSSQ), which measures susceptibility to experiencing intense symptoms of motion sickness [80], were found to be associated with personality traits and anxiety levels [81, 82]. Similar to motion sickness susceptibility, visually induced motion sickness (i.e., cybersickness induced by vection) demonstrates similar patterns of susceptibility among individuals [78, 79]. Taken together, given the similarities between motion sickness and cybersickness elicited by vection, MSSQ scores may indicate susceptibility to cybersickness. However, in previous studies, the MSSQ did not predict cybersickness intensity [49, 53]. Nevertheless, the aforementioned studies also used the MSSQ to exclude participants with moderate-to-high motion sickness susceptibility; thus, the MSSQ's utility in predicting cybersickness has not yet been examined appropriately. Overall, while individual differences influence cybersickness, the exact factors and their interplay remain a complex topic of investigation.
### Effects of Cybersickness on Cognitive and Motor Skills
Apart from impacting the quality of the user experience in VR, cybersickness can also detrimentally influence a user's cognitive and motor functions. Considering VR's applications in areas demanding unimpaired cognitive and motor skills, like education, research, clinical settings, and training, cybersickness poses significant challenges to VR's successful
integration. Several systematic reviews [35; 50; 83] have highlighted a considerable, though temporary, decline in cognitive and/or motor functions due to cybersickness in immersive VR. Dahlman et al. [84] theorized that motion sickness notably impairs users' verbal working memory. Similarly, a study by Varmaghani et al. [85] with 47 participants split into VR and control groups found that the VR group did not experience the expected enhancement in visuospatial skills seen in the control group, suggesting cybersickness's potential effect on visuospatial learning.
Mittelstaedt et al.'s research [59] explored cybersickness's influence on various cognitive areas before and after VR exposure. The study revealed that cybersickness resulted in delayed reaction speeds and hindered the anticipated boost in visual processing speed, indicating a negative impact on attention and reaction time. However, spatial and visuospatial memory skills seemed unaffected. Similarly, studies by Nalivaiko et al. [60] and Nesbitt et al. [34] reported slowed reaction times correlated with increasing cybersickness severity, suggesting a link between cybersickness intensity and potential cognitive/motor deterioration. It's important to note that while these studies underscore cybersickness's negative effects on cognitive and/or motor abilities, they assessed cybersickness post-VR exposure, not during the experience itself.
Since users undergo a readjustment to physical space when removing the VR HMD and transitioning from the virtual to the physical environment [86], examining cybersickness, cognition, and motor skills after exposure may confound the observations. In a recent study examining cybersickness, cognitive, and motor skills during immersion, cybersickness had a significant negative effect on verbal working memory and psychomotor skills, but not on visuospatial working memory [49]. However, the order of the tasks was not randomized and counterbalanced, and the researchers, using the MSSQ, excluded participants who showed high susceptibility to motion sickness. Thus, based on the studies mentioned above, while the evidence postulates a negative impact of cybersickness on cognitive and motor skills, there are still discrepancies, especially regarding the size of these effects.
### Current Study Aims
Based on the review of existing literature, it is clear that VR is a cutting-edge tool with extensive applications spanning from entertainment to research and rehabilitation. However, the applications of VR may be hindered by the onset of cybersickness. There are discrepancies about the impact of cybersickness on core cognitive functions, which are required in VR applications in several fields (e.g., education, occupational training, and rehabilitation). Also, the role of individual differences (e.g., IT skills, sex, and susceptibility to cybersickness) in experiencing cybersickness has not been fully understood. Given this background, the research inquiries have been articulated in the following hypotheses:
H1: Pupil size will be a significant predictor of the intensity of cybersickness.
H2: Susceptibility to motion sickness will be a significant predictor of the intensity of cybersickness.
H3: Computer and/or videogame experience will predict the intensity of cybersickness symptomatology.
H4: Cybersickness symptomatology will have a negative effect on verbal working memory, visuospatial working memory, and/or psychomotor skills.
H5: The cybersickness intensity during and after VR exposure will be significantly different.
## 2 Materials and Methods
### Virtual reality Hardware & Software
An HTC Vive Pro Eye was used, which embeds an eye-tracker with a binocular gaze data output frequency of 120 Hz (i.e., refresh rate), a tracking accuracy of 0.5\({}^{\circ}\)-1.1\({}^{\circ}\), a 5-point calibration, and a 110\({}^{\circ}\) trackable field of view. The HTC Vive Pro Eye substantially surpasses the minimum hardware criteria for alleviating and/or avoiding cybersickness [35]. Thus, beyond facilitating the collection of eye-tracking metrics, its utilization further ensures that cybersickness is induced by the linear and angular accelerations in the virtual environment (see description below) and not by hardware inadequacies. Likewise, the software development was performed in line with the guidelines for VR software for research and clinical settings, which have been found efficient in substantially mitigating cybersickness symptomatology [14, 39]. This further ensured that software characteristics had minimal to no effect on the expression or intensity of cybersickness.
### Virtual Environment Development
The virtual environment was based on the one used in our previous studies on cybersickness [49, 53]. The Unity3D game engine was utilized to develop the virtual environment. Also, the SteamVR SDK aided in the creation of interactions. Given the potential influence of gaming experience on task performance [14], the virtual hands/gloves feature of the SteamVR SDK was integrated to enable straightforward interactions. Crucially, these interactions were designed to be intuitive, initiated by touching the object for selection and maintaining the touch for confirmation, eliminating the need for button presses. The virtual gloves offered by SteamVR were neutral, not hinting at any particular gender or race, which helped mitigate any biases tied to these factors [87].
For audio instructions, Amazon Polly was used to generate neutral, realistic voice clips. To ensure clarity and user comprehension, instructions were offered in video, audio, and text formats, promoting smooth task execution. The SteamAudio plugin was employed for spatial audio effects, particularly feedback sounds. Eye-tracking and pupillometry, including monitoring pupil size, fixation counts, fixation duration, and the distance of the object from the eye, were facilitated through the SRanipal SDK. The bmlTUX SDK [88] enabled the randomization of experimental sequences, data exportation into a CSV format, and overall experimental management.
### Roller Coaster Ride: Linear and Angular Accelerations
The design of the roller coaster ride in this research was modelled after the rides used in our prior cybersickness studies [49, 53]. A 12-minute ride was designed, which each participant underwent in order to experience linear and angular accelerations throughout the ride. The trajectory was animated to represent the moving platform the user stood on (see Figure 1). The general direction of movement was forward, with an exception towards the end (refer to the reverse z-axis). Overall, the platform's motions mimicked the dynamics of a roller coaster.
The designed route encompassed a specific sequence of accelerations: (1) linear on the z-axis, (2) angular involving z- and y-axes, (3) angular encompassing z-, x-, and y-axes, (4) angular on the roll axis, (5) heightened linear acceleration on the z-axis, (6) angular on the yaw axis, and (7) intense linear acceleration involving y-axis followed by an inverse on the z-axis. The environment was kept minimalistic, predominantly in shades of black and white (as illustrated in Figure 1). This design was selected to minimize extraneous factors that could influence or elicit cybersickness symptoms. Moreover, the tiled pattern provided visual references, helping users discern changes in direction and altitude.
### Cognitive & Motor Skills Assessment in Virtual Reality
Due to the need for repeated evaluations of cybersickness, cognition, and motor abilities while users are immersed in VR, immersive VR iterations of recognized tasks/tests were used (see Figure 2). These VR cognitive and motor skills tests have been used in our previous studies [49, 53]. In designing these VR-based cognitive and psychomotor tasks, we adhered to the specific design principles and development guidelines for cognitive evaluations in immersive VR as outlined in [14, 39]. Also, given that these tasks necessitate physical actions, their design was aligned with the ISO 9241-400:2007 standards from the International Organization for Standardization, which focus on the ergonomics of human-system interaction [89]. This approach, which includes considerations like personalizing object height and the distance between the shoulder and the object, has been used in a prior study [90].
Figure 1: Examples of Linear and Angular Accelerations During the Ride.
Figure 2: Digit Span Test (Upper Left), Corsi Block Test (Upper Right), and Deary-Liewald Reaction Time Test (Bottom).
#### 2.4.1 Verbal Short/Working Memory: Digit Span Test.
A VR version of the Digit Span Test (DST) [91] was developed and used (see Figure 2). The test encompasses two tasks, the forward and backward recall tasks, which examine verbal short-term memory and verbal working memory, respectively [92]. The order of administration of the tasks is standardized: examinees perform the forward recall task first and the backward recall task last [91], [92]. In the DST, participants were presented with a sequence of numbers to listen to, which they were then required to recall in the same (i.e., forward recall task) or reverse order (i.e., backward recall task). For instance, given the sequence 2, 4, 3, participants had to respond with 2, 4, 3 in the forward recall task, or with 3, 4, 2 in the backward recall task. Post-listening, a virtual keypad materialized for the participants to input their answers. To select a digit, they touched the corresponding white box on the keypad (see Figure 2). This box turned blue upon touch. A continuous touch for more than a second provided confirmation of their selection. Correct selections turned the button green, accompanied by an affirmative sound, while incorrect ones made it turn orange with a disapproving sound. A trial concluded either upon an error or once the sequence was correctly inputted. With every alternate correct trial, the sequence length increased. The task terminated after two consecutive errors at the same sequence length (e.g., three digits) or after completing the maximum sequence length (two trials of 7 digits). The Total Score combined the count of correctly completed trials and the longest digit sequence (i.e., digit span). A video description of the test can be accessed via the following link: [https://www.youtube.com/watch?v=1H8cqci-IFs](https://www.youtube.com/watch?v=1H8cqci-IFs).
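The sketch below gives one plausible reading of this progression and scoring rule (mirroring the CBT description in the next subsection: two trials per length, advancing when at least one is correct); the starting length of three and the `recall_correct` responder interface are assumptions for illustration.

```python
def run_digit_span(recall_correct, start_len=3, max_len=7):
    """recall_correct(length, trial) -> bool stands in for the participant."""
    correct_trials, span, length = 0, 0, start_len
    while length <= max_len:
        results = [recall_correct(length, t) for t in (1, 2)]  # two trials per length
        correct_trials += sum(results)
        if any(results):
            span = length           # longest sequence recalled correctly
        else:
            break                   # two consecutive errors at the same length
        length += 1
    return correct_trials + span    # Total Score = correct trials + digit span
```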
#### 2.4.2 Visuospatial Short and Working Memory: Corsi Block Test.
The VR version of the Corsi Block Test (CBT) [93] was developed and employed (see Figure 2). Comparable to the DST described above, the CBT incorporates the forward recall task and the backward recall task, which evaluate visuospatial short-term memory and visuospatial working memory, respectively [93], [94]. The sequence in which tasks are performed is also standardized: participants first perform the forward recall task and then conclude with the backward recall task [93], [94]. Each task displays 27 white cubes, positioned distinctively across the x, y, and z dimensions. However, nine of these 27 cubes were randomly showcased to participants in each trial (see Figure 2). Each trial began with nine cubes. Based on the current sequence length, a subset of these cubes would illuminate in blue for a second each, accompanied by a bell sound. Once this sequence finished, participants needed to recall and select the cubes in the same (i.e., forward recall task) or reverse order (i.e., backward recall task). A cube was selected by touching it, which turned it blue. Maintaining the touch for a second confirmed the choice: the cube turned green and sounded a positive tone if correct, or orange with a negative tone if wrong. The trial ended upon a mistake or once the full normal (i.e., forward recall task) or reversed (i.e., backward recall task) sequence was recalled correctly. Trials started with sequences of two cubes, increasing by one if at least one of the two attempts was correct. The task concluded after two errors at a specific sequence length or upon reaching the maximum sequence of seven cubes. Perfect performance meant achieving up to seven cubes without errors. The Total Score was the sum of the highest achieved sequence length (i.e., Corsi block span) and the total number of correctly recalled sequences. A video description of the test can be accessed via the following link: [https://www.youtube.com/watch?v=MLivkyMt-g](https://www.youtube.com/watch?v=MLivkyMt-g).
#### 2.4.3 Psychomotor Skills: Deary-Liewald Reaction Time Test.
The VR version of the Deary-Liewald Reaction Time test (DLRT) [95] was developed and used to evaluate psychomotor skills. The DLRT incorporates two tasks: the simple reaction time (SRT) task and the choice reaction time (CRT) task [95]. In the SRT task, participants watched a white box, which they had to quickly touch whenever it turned blue (see Figure 2). This was repeated for 20 trials. For the CRT task, any one of four horizontally aligned boxes would randomly turn blue, prompting participants to touch it as fast as they could (see Figure 2). This occurred over 40 trials. In both tasks, participants were directed to swiftly touch the highlighted boxes using either hand. Before starting, a
practice round ensured that participants grasped the instructions. The SRT score was derived by averaging reaction times across its 20 trials, and similarly for CRT. However, CRT provided three separate scores. Using eye-tracking, we gauged the time to notice the target (i.e., Attentional Time) and the time between noticing and touching the target (i.e., Motor Time). An overall reaction time, from target appearance to touch, was also recorded. Thus, three scores emerged:
1) Reaction Time (RT) reflecting overall psychomotor speed.
2) Attentional Time (AT) showing attention processing speed.
3) Motor Time (MT) representing movement speed.
A video description of the test can be accessed via the following link: [https://www.youtube.com/watch?v=wXdrt](https://www.youtube.com/watch?v=wXdrt)(PjNsk.
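A minimal sketch of how the three CRT scores decompose, assuming per-trial timestamps for stimulus onset, first fixation on the target (from eye-tracking), and touch; all names and the example values are illustrative.

```python
from statistics import mean

def crt_scores(trials):
    """trials: list of (t_onset, t_first_fixation, t_touch) tuples in seconds."""
    return {
        "AT": mean(fix - on for on, fix, _ in trials),    # Attentional Time
        "MT": mean(tch - fix for _, fix, tch in trials),  # Motor Time
        "RT": mean(tch - on for on, _, tch in trials),    # overall Reaction Time
    }

print(crt_scores([(0.0, 0.25, 0.62), (0.0, 0.21, 0.58)]))
# -> AT 0.23, MT 0.37, RT 0.60 (up to floating-point rounding)
```

By construction, each trial's reaction time splits exactly into its attentional and motor components.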
### Cybersickness in Virtual Reality Questionnaire (CSQ-VR)
To assess the symptoms and intensity of cybersickness, the CSQ-VR was used, which is a valid tool for assessing cybersickness and has shown superior psychometric properties to the SSQ and the VRSQ [53]. Also, originating from the VR-Induced Symptoms and Effects section of the VR Neuroscience Questionnaire, the CSQ-VR boasts strong structural and construct validity [36]. Its strengths include a concise format (just six questions) and the generation of comprehensible results [53, 58]. Additionally, it captures various cybersickness subcategories, such as nausea, disorientation, and oculomotor disturbances. Each category has two questions, scaled on a 7-point Likert Scale, with options ranging from "1 - absent feeling" to "7 - extreme feeling"; each option combines text and a number (see Figure 3). From the CSQ-VR, a total score and three subscores (i.e., one for each subcategory: Nausea, Disorientation, and Oculomotor) can be derived. The total score is the summation of the three subscores. The paper-and-pencil version of the CSQ-VR was administered twice, before and after exposure to VR.
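A minimal sketch of this scoring scheme: two 7-point items per symptom category, category subscores as item sums, and the total as the sum of the three subscores (argument names and the example responses are illustrative).

```python
def csq_vr_scores(nausea, disorientation, oculomotor):
    """Each argument is the pair of 1-7 Likert responses for that category."""
    subs = {"Nausea": sum(nausea),
            "Disorientation": sum(disorientation),
            "Oculomotor": sum(oculomotor)}
    return {**subs, "Total": sum(subs.values())}

print(csq_vr_scores(nausea=(2, 1), disorientation=(3, 2), oculomotor=(1, 1)))
# -> {'Nausea': 3, 'Disorientation': 5, 'Oculomotor': 2, 'Total': 10}
```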
Given the study's goal of repeatedly measuring cybersickness during VR immersion, the 3D-VR version of the CSQ-VR was implemented. The question appeared at the top of the designed user interface, with the chosen response (in red) situated centrally. Users could modify their answers using a slider, by selecting a number directly or sliding along the scale (see Figure 3). The VR version of the CSQ-VR also incorporates eye-tracking metrics. Invisible tracking markers were positioned ahead of the text, and their dimensions always matched the visible text per line (see Figure 3). This setup allowed us to gauge the Fixation Duration over the text as an indicator of reading rate. Additionally, continuous pupil measurement occurred while users engaged with the CSQ-VR, allowing us to determine the average pupil size (for both eyes) during interactions, which has previously been seen as a biomarker of cybersickness intensity [49, 53]. A video description of the 3D-VR version of the CSQ-VR can be accessed via the following link: [https://www.youtube.com/watch?v=npW4NKNLXok](https://www.youtube.com/watch?v=npW4NKNLXok).
### Demographics and Motion Sickness Susceptibility Questionnaire (MSSQ)
A custom questionnaire was used to collect demographic information, such as gender, age, education, and computer and videogame skills. Smartphone/computing/gaming experiences were determined by summing up scores from two specific questions, each based on a 6-point Likert scale. The initial question gauged the participant's proficiency or skill level in using smartphones/computers/games, with ratings like '5: highly skilled.' In contrast, the subsequent question focused on how often they engage with these platforms, with responses such as '4: once a week.' This custom questionnaire is the same as that used in previous studies (e.g., [49, 53, 90]). Also, the short form of the Motion Sickness Susceptibility Questionnaire (MSSQ) was used to measure participants' predisposition to experiencing motion sickness [80]. The MSSQ serves as a diagnostic instrument designed to gauge an individual's vulnerability to motion sickness and pinpoint specific triggers associated with the onset of the symptoms. It delineates experiences into two distinct categories:
1. Childhood Experience (prior to the age of 12): Here, respondents specify the frequency with which they encountered sensations of sickness or nausea in different modes of transport or specific entertainment scenarios.
2. Experience over the Last 10 Years: This section requires individuals to recount the number of times they felt symptoms of sickness or nausea under similar circumstances but within the past decade.
Each section receives an independent score, and the cumulative result from both areas offers the raw MSSQ score. This raw score can be translated into a percentile through reference tables or a dedicated polynomial for enhanced interpretability. Thus, three scores were derived from MSSQ: the MSA-Child, the MSB-Adult, and the MSSQ-Total. However, it is pivotal to comprehend that the term "sickness" within the MSSQ's framework encapsulates feelings ranging from mere queasiness to outright nausea or even vomiting. Notably, the MSSQ has clinical significance as it can indicate inherent motion sickness susceptibility, especially valuable for those diagnosed with vestibular diseases, as well as examining susceptibility to visually induced cybersickness in VR [78, 79, 80, 81, 82].
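A simplified sketch of this score composition follows; the item lists are hypothetical, and the published short-form scoring additionally corrects for transport types never experienced and converts raw scores to percentiles via the reference tables or polynomial mentioned above, neither of which is reproduced here.

```python
def mssq_scores(child_items, adult_items):
    """Raw section sums only (see the caveats in the text above)."""
    msa, msb = sum(child_items), sum(adult_items)
    return {"MSA-Child": msa, "MSB-Adult": msb, "MSSQ-Total": msa + msb}

print(mssq_scores(child_items=[0, 1, 2, 0, 1], adult_items=[1, 0, 1, 0, 0]))
# -> {'MSA-Child': 4, 'MSB-Adult': 2, 'MSSQ-Total': 6}
```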
Figure 3: CSQ-VR User Interface & Eye-Tracking (ET) Targets. Note: eye-tracking targets were not visible to the user.
### Participants & Experimental Procedures
Participants were recruited through convenience sampling, utilizing the internal mailing lists of the National and Kapodistrian University of Athens, as well as promotions on social media platforms. The research received approval from the Ethics Committee of the Department of Psychology of the National and Kapodistrian University of Athens. The sample consisted of 30 participants, 17 women and 13 men, aged 20 to 45 years old. All participants had normal or corrected-to-normal vision (contact lenses or glasses). Upon arrival, all participants were informed about the procedure and gave their written informed consent before proceeding with the experimental procedures. The process began with the completion of the demographic questionnaire and the MSSQ.
Then, before using VR, the participants were asked to complete the CSQ-VR questionnaire in paper-and-pencil form. Only then did participants go through an induction to VR technology and how to use and wear the HTC Vive Pro Eye. The induction also described the cognitive and motor tasks they had to perform in VR. After the induction, under the guidance of the experimenter, the participants wore the VR HMD and performed the eye-tracking calibration as offered by the HTC Vive Pro Eye in combination with SteamVR. Once in VR, they stood at a designated spot marked 'X' (see Figure 1). The session began with tutorials. For each task, a video tutorial, along with verbal and written guidelines, was provided.
During immersion, the first task participants had to perform was to provide their responses to the questions of the 3D-VR version of the CSQ-VR. After that, the participants had to perform the VR versions of the cognitive (i.e., DST and CBT) and psychomotor skills (i.e., DLRT) tests (see Figure 2). After going through the tutorial of each task, where participants received audiovisual (i.e., audio, video, and labels) information about the task, they then performed the corresponding task. This stage of going through the tutorials and performing all the tasks served as the baseline assessment, which lasted approximately 25 minutes. Note that the order of the tests was counterbalanced across participants. Given that the tests were DST, CBT, and DLRT, three different orders were used:
1. DST, CBT, and DLRT.
2. CBT, DLRT, and DST.
3. DLRT, DST, and CBT.
Thus, the full circle of orders was completed for every three participants. Across the whole sample, the circle of orders was completed ten times (i.e., 30 participants divided by three orders).
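A minimal R sketch of this counterbalancing scheme, assuming participants are numbered consecutively:

```r
# Cycle participants through the three task orders; every block of three
# participants completes the full circle of orders.
orders <- list(c("DST", "CBT", "DLRT"),
               c("CBT", "DLRT", "DST"),
               c("DLRT", "DST", "CBT"))
assign_order <- function(pid) orders[[(pid - 1) %% 3 + 1]]
assign_order(7)   # participant 7 receives the first order
```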
After this baseline assessment, the roller coaster ride followed, lasting about 12 minutes. After the ride, participants performed the same set of assessments as at baseline (i.e., CSQ-VR, DST, CBT, and DLRT). Note that the order of administration of the tests was identical to the baseline assessment. Participants thus underwent one 12-minute ride and two assessment sessions (i.e., before and after the ride). The entire VR procedure spanned roughly 60 minutes for every participant. Finally, after removing the VR HMD (i.e., post-VR exposure), participants completed the paper-and-pencil version of the CSQ-VR questionnaire. Then, refreshments rich in electrolytes were offered to the participants, and they were given a 10-15-minute rest before departure. Before leaving, participants were advised against driving and operating heavy machinery for the rest of the day.
### Statistical Analyses
All the statistical analyses were conducted using R language [96] in RStudio [97]. Furthermore, the psych (correlational analyses & t-tests) [98], the ggplot2 (plots) [99], and the lme4 (regression analyses) [100] R packages were used to perform the respective analyses. Descriptive statistical analysis was performed to provide an overall review of the sample. The paired samples t-test examined the differences in intensity of cybersickness between during and after exposure to VR. Finally, multiple linear regression analyses
were performed to examine the predictors of cybersickness symptomatology, and multiple mixed linear regression model analyses were performed to examine the predictors of performance on cognitive and psychomotor skills' tasks. As the variables violated the condition of normality, the bestNormalize R package [101] was used to transform (e.g., into logarithms) and center the data (i.e., converting them into z scores). In this way, the data were normally distributed, permitting parametric statistical analyses.
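A minimal sketch of this normalization step is shown below, using a skewed toy vector in place of a raw score; bestNormalize selects a suitable transformation (e.g., logarithm, Box-Cox, or ordered quantile) automatically.

```r
# Minimal sketch of the transformation and centering step described above.
library(bestNormalize)
set.seed(1)
x  <- rexp(30)                     # skewed toy data standing in for a raw score
bn <- bestNormalize(x)             # pick the best normalizing transformation
z  <- as.numeric(scale(bn$x.t))    # center and scale into z scores
shapiro.test(z)                    # check that normality is now plausible
```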
#### Regression Analyses
The assumption of normality was evaluated using the Shapiro-Wilk normality test. The Non-Constant Error Variance test was applied to verify the homoscedasticity assumption for the models. Multicollinearity was assessed by calculating the variance inflation factor for the predictors in each model. Linear regression analyses were utilized to explore the predictors of cybersickness intensity and symptomatology. Model comparisons were made using analyses of variance. The comparison criteria among the models included Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the F statistic and its significance level, and the proportion of variance explained (i.e., R\({}^{2}\)).
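The diagnostics and comparison criteria listed above can be sketched in R as follows. The simulated data frame d and the variable names are placeholders, and ncvTest() and vif() come from the car package, which is our assumption (the paper names the tests, not the package implementing them).

```r
# Minimal sketch of the model diagnostics and comparisons described above.
library(car)   # assumed source of ncvTest() and vif()
set.seed(1)
d <- data.frame(msb_adult = rnorm(30), gaming_xp = rnorm(30))
d$csqvr_total <- 0.6 * d$msb_adult - 0.3 * d$gaming_xp + rnorm(30)
m1 <- lm(csqvr_total ~ msb_adult, data = d)
m2 <- lm(csqvr_total ~ msb_adult + gaming_xp, data = d)
shapiro.test(residuals(m2))        # normality (Shapiro-Wilk)
ncvTest(m2)                        # homoscedasticity (non-constant variance)
vif(m2)                            # multicollinearity (variance inflation)
anova(m1, m2)                      # F test comparing the nested models
c(AIC = AIC(m2), BIC = BIC(m2), R2 = summary(m2)$r.squared)
```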
In the analytical approach adopted, a wide array of variables was considered as potential predictors for the models. Specifically,
For the Linear Multiple Regression analyses, the following variables were considered: Sex, Education, Age, Computing Experience, Smartphone Apps Experience, Gaming Experience, MSA-Child, MSB-Adult, MSSQ Total, Pupil Size while responding to the CSQ-VR, and Pupil Size during the Ride.
For the Mixed Model Regression analyses (i.e., prediction of performance on cognitive and psychomotor skills' tasks) the following variables were considered: CSQ-VR Total Score, CSQ-VR Nausea Score, CSQ-VR Vestibular Score, CSQ-VR Oculomotor Score, Sex, Education, Age, Computing Experience, Smartphone Apps Experience, and Gaming Experience.
With these variables as the foundation, a systematic and incremental model development process was initiated:
Single-Predictor Models: Initially, separate models were developed, each incorporating a single predictor from the list. When the performance of these models was compared, the individual variable that yielded the best preliminary results was identified.
Dyadic Predictor Models: A new set of models was crafted in the subsequent phase, each containing two predictors. The top-performing variable from the first step was consistently used as one of the predictors in these models. The second predictor was drawn sequentially from the remaining list of variables. After the two-predictor models had been designed, their performances were critically evaluated. The top-performing two-predictor model was then juxtaposed with the best model from the first step.
Subsequent Iterations in the Incremental Approach: The methodology was characterized by its iterative nature. In each subsequent phase, the best predictors from the previous step were retained, and an additional predictor from the list was introduced. With each iteration, the models were rendered more complex. This step-by-step comparison continued until a point was reached where the introduction of more variables failed to enhance the model's performance significantly. When a model from an earlier step outperformed a more complex model from a subsequent step, it indicated that the optimal combination of predictors had been identified, and the simpler model from the previous step was chosen as the final best model. This rigorous process ensured that the final model was robust and represented the best combination of the variables initially considered.
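A schematic R implementation of this incremental procedure is given below. It uses AIC as a single stand-in selection criterion, whereas the study also weighed BIC, the F statistic, and R\({}^{2}\); the simulated data frame d and the variable names are placeholders.

```r
# Schematic forward-selection loop mirroring the incremental approach above,
# with AIC as a single stand-in criterion and simulated placeholder data.
set.seed(1)
d <- data.frame(msb_adult = rnorm(30), gaming_xp = rnorm(30),
                smartphone_xp = rnorm(30))
d$csqvr_total <- 0.6 * d$msb_adult + rnorm(30)

forward_select <- function(dv, candidates, d) {
  chosen <- character(0)
  best <- lm(reformulate("1", dv), data = d)     # intercept-only start
  repeat {
    fits <- lapply(candidates,
                   function(v) lm(reformulate(c(chosen, v), dv), data = d))
    i <- which.min(vapply(fits, AIC, numeric(1)))
    if (AIC(fits[[i]]) >= AIC(best)) break       # no further improvement
    best <- fits[[i]]
    chosen <- c(chosen, candidates[i])
    candidates <- candidates[-i]
    if (length(candidates) == 0) break
  }
  best                                           # simplest best-performing model
}
best_model <- forward_select("csqvr_total",
                             c("msb_adult", "gaming_xp", "smartphone_xp"), d)
```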
## 3 Results
Descriptive statistics of the data are presented in Table 1. Demographically, participants were relatively young adults with a wide range of educational backgrounds. Their technology engagement was evident from the experience they had with computing,
smartphones, and gaming. This provides an interesting lens through which to understand the effects of VR, given their wide range of familiarity with digital tools. Regarding motion sickness, participants displayed varied susceptibility. Their scores from childhood to adulthood in motion sickness susceptibility showed a notable shift, suggesting that reactions to motion may evolve with age. Cybersickness, as assessed by the CSQ-VR, intensified post-VR exposure across all its subcategories: nausea, vestibular, and oculomotor symptoms.
### Linear Regression Analyses: Prediction of Cybersickness Intensity
Linear regression analyses were conducted to detect and compare the significant predictors of overall and per-symptom-category cybersickness intensity. Table 2 elucidates how various individual factors predict overall cybersickness. In support of **H1**, pupil size both while responding to the CSQ-VR questionnaire and during the VR ride significantly predicted cybersickness. In agreement with **H2**, the most significant predictor is the
\begin{table}
\begin{tabular}{c c c c} \hline \hline & _Stage_ & _Mean (SD)_ & _Range_ \\ \hline Age & – & 29.17 (4.17) & 22-38 \\ Education & – & 16.27 (2.92) & 12-25 \\ Computing XP & – & 10.30 (1.05) & 8-12 \\ Smartphone XP & – & 10.70 (0.75) & 9-12 \\ Gaming XP & – & 37.23 (14.64) & 22-7 \\ MSA – Child & – & 6.02 (4.38) & 0-14.14 \\ MSB – Adult & – & 3.61 (3.18) & 0-12.60 \\ MSSQ Total & – & 9.63 (6.73) & 0-26.74 \\ CSQ-VR – Total & Baseline & 8.73 (3.06) & 6-19 \\ & Post-Ride & 14.78 (7.84) & 6-34 \\ CSQ-VR – Nausea & Baseline & 2.50 (0.90) & 2-6 \\ & Post-Ride & 5.20 (3.22) & 2-14 \\ CSQ-VR – Vestibular & Baseline & 2.63 (0.89) & 2-5 \\ & Post-Ride & 4.63 (3.18) & 2-14 \\ CSQ-VR – Oculomotor & Baseline & 3.60 (1.70) & 2-8 \\ & Post-Ride & 4.93 (2.87) & 2-12 \\ DST – Forward & Baseline & 16.63 (2.58) & 12-20 \\ & Post-Ride & 16.90 (2.63) & 11-20 \\ CBT – Forward & Baseline & 14.63 (2.72) & 10-20 \\ & Post-Ride & 15.17 (3.05) & 5-20 \\ DST – Backward & Baseline & 14.83 (3.58) & 8-20 \\ & Post-Ride & 15.63 (3.64) & 7-20 \\ CBT – Backward & Baseline & 15.10 (2.00) & 11-19 \\ & Post-Ride & 14.43 (2.89) & 8-20 \\ DLRT – RT & Baseline & 0.57 (0.10) & 0.36-0.81 \\ & Post-Ride & 0.61 (0.16) & 0.38-1.13 \\ DLRT – AT & Baseline & 0.30 (0.09) & 0-0.42 \\ & Post-Ride & 0.34 (1.44) & 0-0.87 \\ DLRT – MT & Baseline & 0.24 (0.10) & 0-0.45 \\ & Post-Ride & 0.27 (0.17) & 0-0.83 \\ \hline \hline \end{tabular}
* XP = Experience; MS = Motion Sickness; MSSQ = Motion Sickness Susceptibility Questionnaire; CSQ-VR = Cybersickness in Virtual Reality Questionnaire; DST = Digit Span Test; CBT = Corsi Block Test; DLRT = Deary-Liewald Reaction Time Test; RT = Reaction Time; AT = Attention Time; MT = Motor Time;
\end{table}
Table 1: Descriptive statistics
Motion Sickness susceptibility score as an adult (MSB-Adult), which accounted for an impressive 39% of the variance (R\({}^{2}\)=0.39). Finally, in line with **H3**, smartphone experience and gaming experience were notable for their significance in predicting cybersickness.
Table 3 provides insights specific to nausea symptoms. Once again, in line with **H1**, pupil size, both during CSQ-VR reading and the VR ride, was a significant predictor of nausea symptomatology. Notably, in agreement with **H2**, MSB-Adult stands out as the most potent predictor. The child motion sickness score (MSA-Child) and the total motion sickness susceptibility (MSSQ Total) were also significant, hinting that one's propensity to motion sickness earlier in life could have repercussions in a virtual environment. Finally, in support of **H3**, experience in using smartphones and playing videogames were significant predictors of nausea symptoms' intensity.
In Table 4, the focus shifts to vestibular symptoms. While some predictors overlap with those for nausea, in disagreement with **H1**, pupil size did not significantly predict vestibular symptoms. However, the MSB-Adult was found to be a robust predictor of vestibular symptoms, further confirming **H2**. Finally, it is intriguing to see that smartphone and gaming experience are significant predictors of vestibular symptomatology, which agrees with **H3**
Table 2: Single Predictor Models for Overall Cybersickness
Table 3: Single Predictor Models for Nausea Symptoms
and implies a potential link between technological experience and experiencing vestibular symptoms in VR.
Table 5 centers on oculomotor symptoms. Unlike the previous findings, and in disagreement with **H1**-**H3**, no single predictor emerges as significant. The table largely communicates that the more conventional metrics (age, education, tech experience) do not have significant associations with oculomotor symptoms in VR. It highlights the complexities of predicting these specific symptoms.
Table 6 aggregates the best models for predicting various aspects of cybersickness. This table reemphasizes the overarching role of MSB-Adult in determining cybersickness, nausea, and vestibular symptoms. This connotes that the susceptibility to experiencing motion sickness as an adult is robustly associated with experiencing visually induced cybersickness symptoms in VR. Interestingly, no predictors were identified for oculomotor symptoms, indicating a potential gap in our understanding or the need for more refined measures.
\begin{table}
\begin{tabular}{c c c c} \hline _Predictor_ & \(\beta\) _coefficient_ & _p-value_ & \(R^{2}\) \\ \hline Sex (Male) & -0.61\({}^{\dagger}\) & .100 & 0.09 \\ Education & 0.24 & .203 & 0.06 \\ Age & 0.00 & .980 & 0.00 \\ Computing XP & -0.22 & .238 & 0.05 \\ Smartphone XP & -0.31 & .047* & 0.13 \\ Gaming XP & -0.40 & .029* & 0.16 \\ MSA-Child & 0.20 & .290 & 0.04 \\ MSB-Adult & 0.53 & .003** & 0.28 \\ MSSQ Total & 0.28 & .137 & 0.08 \\ Pupil – CSQ-VR & -0.25 & .182 & 0.06 \\ Pupil – Ride & -0.24 & .208 & 0.06 \\ \hline \end{tabular} XP = Experience; MS = Motion Sickness; MSSQ = Motion Sickness Susceptibility Questionnaire; Pupil – CSQ-VR = Pupil Size While Reading CSQ-VR Questions; Pupil – Ride = Pupil Size During Ride; * p \(\leq\).05, ** p \(\leq\).01, *** p \(\leq\).001; \({}^{\dagger}\) unstandardized beta
\end{table}
Table 4: Single Predictor Models for Vestibular Symptoms
\begin{table}
\begin{tabular}{c c c c} \hline _Predictor_ & \(\beta\) _coefficient_ & _p-value_ & \(R^{2}\) \\ \hline Sex (Male) & -0.26\({}^{\dagger}\) & .486 & 0.02 \\ Education & -0.14 & .468 & 0.02 \\ Age & -0.23 & .213 & 0.06 \\ Computing XP & -0.25 & .179 & 0.06 \\ Smartphone XP & -0.22 & .249 & 0.05 \\ Gaming XP & -0.18 & .336 & 0.03 \\ MSA-Child & 0.21 & .265 & 0.04 \\ MSB-Adult & 0.27 & .152 & 0.07 \\ MSSQ Total & 0.23 & .221 & 0.05 \\ Pupil – CSQ-VR & -0.11 & .569 & 0.01 \\ Pupil – Ride & -0.28 & .138 & 0.08 \\ \hline \end{tabular} XP = Experience; MS = Motion Sickness; MSSQ = Motion Sickness Susceptibility Questionnaire; Pupil – CSQ-VR = Pupil Size While Reading CSQ-VR Questions; Pupil – Ride = Pupil Size During Ride; * p \(\leq\).05, ** p \(\leq\).01, *** p \(\leq\).001; \({}^{\dagger}\) unstandardized beta
\end{table}
Table 5: Single Predictor Models for Oculomotor Symptoms
In essence, these results underscore the multifaceted nature of cybersickness and its determinants. While some predictors like MSB-Adult consistently emerge as influential, others show symptom-specific associations. Furthermore, they accentuate the importance of considering not only the user's history and demographics but also real-time metrics like pupil size in understanding the VR experience.
### Mixed Model Regression Analyses: Effects on Cognitive & Motor Performance
Mixed linear regression model analyses were carried out to determine and evaluate the significant predictors of performance on cognitive and psychomotor skills' tasks. Tables 7 and 8 delve into the predictors for verbal short-term and working memory, respectively. In disagreement with **H4**, the critical observation here is that none of the variables significantly predicted verbal short-term or working memory. This might suggest that verbal memory is less susceptible to variations in these predictors or that other unmeasured factors have a more substantial impact.
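A minimal sketch of one such single-predictor mixed model is shown below, with a random intercept per participant for the repeated (baseline, post-ride) measurements. lme4 is the package the paper cites; using MuMIn for the fixed-effects versus overall R\({}^{2}\) split is our assumption, since the paper does not name the package behind that statistic, and the data frame is a simulated placeholder.

```r
# Minimal sketch of a single-predictor mixed model for a repeated-measures
# outcome, with simulated long-format data (one row per participant x stage).
library(lme4)
library(MuMIn)    # assumed source of the fixed-effects/overall R^2 split
set.seed(1)
d <- data.frame(participant = factor(rep(1:30, each = 2)),
                csqvr_total = rnorm(60),
                dst_forward = rnorm(60, 16, 2.5))
m <- lmer(dst_forward ~ csqvr_total + (1 | participant), data = d)
summary(m)        # fixed-effect estimate and t value (p values need lmerTest)
r.squaredGLMM(m)  # R2m (fixed effects) and R2c (overall), cf. the tables
```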
\begin{table}
\begin{tabular}{c c c c c} \hline _Predicted_ & _Predictors_ & _\(\beta\) coefficient_ & _p-value(\(\beta\))_ & \(R^{2}\) \\ \hline CSQ-VR – Total & MSB-Adult & 0.66 & \textless{}.001*** & 0.39 \\ CSQ-VR – Nausea & MSB-Adult & 0.63 & \textless{}.001*** & 0.39 \\ CSQ-VR – Vestibular & MSB-Adult & 0.53 & .003** & 0.28 \\ CSQ-VR – Oculomotor & Null Model & – & – & – \\ \hline \end{tabular}
* CSQ-VR = Cybersickness in Virtual reality Questionnaire; MS = Motion Sickness;
* p \(\leq\).05, ** p \(\leq\).01, *** p \(\leq\).001
\end{table}
Table 6: Best Models for Predicting Cybersickness
\begin{table}
\begin{tabular}{c c c c} \hline _Predictor_ & _\(\beta\) coefficient_ & _p-value_ & _R\({}^{2}\)(Fixed Effects / Overall)_ \\ \hline Sex (Male) & 0.27\({}^{\dagger}\) & .404 & 0.02 / 0.54 \\ Education & 0.09 & .592 & 0.01 / 0.54 \\ Age & 0.15 & .351 & 0.02 / 0.54 \\ Computing XP & -0.02 & .911 & 0.01 / 0.54 \\ Smartphone XP & 0.10 & .668 & 0.01 / 0.54 \\ Gaming XP & 0.07 & .679 & 0.01 / 0.54 \\ CSQ-VR – Total & -0.03 & .798 & 0.01 / 0.53 \\ CSQ-VR – Nausea & -0.02 & .862 & 0.01 / 0.52 \\ CSQ-VR – Vestibular & -0.05 & .669 & 0.01 / 0.53 \\ CSQ-VR – Oculomotor & -0.01 & .941 & 0.01 / 0.53 \\ \hline \end{tabular}
* CSQ-VR = Cybersickness in Virtual reality Questionnaire; XP = Experience;
* p \(\leq\).05, ** p \(\leq\).01, *** p \(\leq\).001; \({}^{\dagger}\) unstandardized beta
\end{table}
Table 8: Single Predictor Models for Verbal Working Memory
Table 10 examines predictors for visuospatial working memory. Supporting **H4**, vestibular symptom intensity was found to be a significant predictor of visuospatial working memory, although the other cybersickness metrics did not substantially predict it. Furthermore, sex and gaming experience stand out as significant predictors, indicating that gaming might shape how individuals process and manipulate visuospatial information.
Table 11 on attentional time shows that the CSQ-VR (specifically its oculomotor component) has a notable relationship with attentional time. The positive \(\beta\) coefficient suggests a direct correlation, meaning that as cybersickness symptoms increase, attentional time might also increase.
Table 10: Single Predictor Models for Visuospatial Working Memory
Table 11: Single Predictor Models for Attentional Time
\begin{table}
\begin{tabular}{c c c c} \hline _Predictor_ & \(\beta\) _coefficient_ & _p-value_ & _R\({}^{2}\)(Fixed Effects / Overall)_ \\ \hline Sex (Male) & -0.59\({}^{\dagger}\) & .075 & 0.08 / 0.69 \\ Education & 0.18 & .298 & 0.03 / 0.69 \\ Age & -0.03 & .874 & 0.01 / 0.69 \\ Computing XP & -0.14 & .417 & 0.02 / 0.69 \\ Smartphone XP & -0.14 & .410 & 0.02 / 0.69 \\ Gaming XP & -0.38 & .018* & 0.14 / 0.69 \\ CSQ-VR – Total & 0.18 & .067 & 0.03 / 0.67 \\ CSQ-VR – Nausea & 0.20 & .030* & 0.04 / 0.67 \\ CSQ-VR – Vestibular & 0.14 & .145 & 0.02 / 0.67 \\ CSQ-VR – Oculomotor & 0.09 & .368 & 0.01 / 0.67 \\ \hline \end{tabular} CSQ-VR = Cybersickness in Virtual Reality Questionnaire; XP = Experience; * p \(\leq\).05, ** p \(\leq\).01, *** p \(\leq\).001; \({}^{\dagger}\) unstandardized beta
\end{table}
Table 12: Single Predictor Models for Motor Time
\begin{table}
\begin{tabular}{c c c c} \hline _Predictor_ & \(\beta\) _coefficient_ & _p-value_ & _R\({}^{2}\)(Fixed Effects / Overall)_ \\ \hline Sex (Male) & -0.65\({}^{\dagger}\) & .054 & 0.10 / 0.75 \\ Education & 0.10 & .590 & 0.01 / 0.75 \\ Age & -0.07 & .691 & 0.01 / 0.75 \\ Computing XP & -0.24 & .157 & 0.06 / 0.76 \\ Smartphone XP & -0.20 & .239 & 0.04 / 0.75 \\ Gaming XP & -0.41 & .013* & 0.16 / 0.75 \\ CSQ-VR – Total & 0.25 & .004** & 0.06 / 0.76 \\ CSQ-VR – Nausea & 0.22 & .009** & 0.04 / 0.74 \\ CSQ-VR – Vestibular & 0.19 & .028* & 0.04 / 0.75 \\ CSQ-VR – Oculomotor & 0.22 & .020* & 0.05 / 0.76 \\ \hline \end{tabular} CSQ-VR = Cybersickness in Virtual Reality Questionnaire; XP = Experience; * p \(\leq\).05, ** p \(\leq\).01, *** p \(\leq\).001; \({}^{\dagger}\) unstandardized beta
\end{table}
Table 13: Single Predictor Models for Reaction Time
Lastly, Table 14 consolidates the most impactful predictors for various cognitive and motor skills. A recurring theme here is the influence of gaming experience on cognitive and motor skills, emphasizing its potential cognitive benefits or the development of specific skills associated with gaming. Interestingly, overall cybersickness, as well as its vestibular and oculomotor components, was also included in the best models, postulating the negative effects of cybersickness on cognitive functioning and psychomotor skills.
In summary, these results underscore the potential cognitive and motor influences of digital experiences like gaming. They also hint at the intertwined nature of cybersickness symptoms and cognitive/motor skills, suggesting that our experience in virtual environments can have multifaceted impacts on cognitive functioning.
### Comparison of Cybersickness Between During and After Exposure to Virtual Reality
Paired-samples t-test analyses were performed to examine whether cybersickness symptomatology differed between measurements taken during VR immersion and those taken after exposure to VR (i.e., immediately after the removal of the VR equipment). Figure 4 illustrates the z-scores representing the intensity of overall and various cybersickness symptoms experienced by participants both during their immersion in VR and after their exposure to VR (i.e., after the removal of the VR headset). In line with **H5**, the overall cybersickness intensity was found to have a significant and large decrease after the removal of the VR headset (i.e., the re-adaptation to the physical world), _t(29)=3.59, p<.001, Hedges g = 0.64_. Furthermore, other cybersickness symptoms showed significant decreases after removing the VR headset and transitioning from the virtual to the physical environment, further supporting **H5**.
For vestibular symptoms, after exposure to VR (Post-VR), a noticeable shift towards negative z-scores was identified, indicating that the experience of vestibular symptoms was less intense. This difference between the two stages was confirmed to be statistically significant and large, _t(29)=2.74, p =.010, Hedges g = 0.49_. Regarding nausea, post-VR, a slightly broader spread of z-scores, predominantly on the negative side, was detected. This decrease in nausea symptoms' intensity was deemed significant and large, _t(29)=3.12, p<.001, Hedges g = 0.56_. Finally, for oculomotor symptoms, after exposure to VR, the distribution pattern of the z-scores remained largely unchanged, _t(29)=1.22, p =.230, Hedges g = 0.22_, indicating that the removal of the headset and re-adaptation to the physical world had a non-significant effect on oculomotor symptoms' intensity. To summarize, of the three cybersickness symptom categories examined, the vestibular and nausea symptoms exhibited a statistically significant decrease after the headset removal (i.e., post-VR
\begin{table}
\begin{tabular}{c c c c c} _Predicted_ & _Predictors_ & \(\beta\) _coefficient_ & _p-value(\(\beta\))_ & _R\({}^{2}\)(Fixed Effects / Overall)_ \\ \hline DST - Forward & Null Model & – & – & – \\ DST - Backward & Null Model & – & – & – \\ CBT - Forward & Gaming XP & 0.46 & .003** & 0.19 / 0.56 \\ CBT - Backward & Gaming XP & 0.36 & .034* & 0.20 / 0.68 \\ & CSQ-VR - Vestibular & -0.21 & .025* & \\ DLRT - AT & CSQ-VR - Oculomotor & 0.28 & .026* & 0.08 / 0.36 \\ DLRT - MT & Gaming XP & -0.36 & .023* & 0.18 / 0.69 \\ & CSQ-VR - Nausea & 0.19 & .049* & \\ DLRT - RT & Gaming XP & -0.35 & .028* & 0.21 / 0.77 \\ & CSQ-VR - Total & 0.23 & .008** & \\ \hline \end{tabular}
CSQ-VR = Cybersickness in Virtual Reality Questionnaire; XP = Experience; DST = Digit Span Test; CBT = Corsi Block Test; DLRT = Deary-Liewald Reaction Time Test; RT = Reaction Time; AT = Attention Time; MT = Motor Time; * p \(\leq\).05, ** p \(\leq\).01, *** p \(\leq\).001
\end{table}
Table 14: Best Models for Predicting Cognitive & Motor Skills
exposure). It was determined, however, that oculomotor symptoms' intensity remained relatively stable after the exposure to VR.
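A minimal R sketch of the comparison reported above is given below, assuming paired vectors of scores during immersion and after exposure. Hedges g is computed from the paired differences with the usual small-sample correction, noting that effect-size conventions for paired designs vary.

```r
# Paired t test plus Hedges g from the paired differences (a sketch;
# in_vr and post_vr are placeholder vectors of per-participant scores).
paired_test <- function(in_vr, post_vr) {
  tt  <- t.test(in_vr, post_vr, paired = TRUE)
  dif <- in_vr - post_vr
  n   <- length(dif)
  g   <- mean(dif) / sd(dif) * (1 - 3 / (4 * (n - 1) - 1))  # bias correction
  c(t = unname(tt$statistic), p = tt$p.value, hedges_g = g)
}

set.seed(1)
paired_test(rnorm(30, 1), rnorm(30))   # toy vectors standing in for scores
```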
## 4 Discussion
Considering that VR is implemented in educational, professional, research, and clinical settings, the present study aimed to examine cybersickness symptoms during immersion in VR. The study explored the role of several factors pertinent to individual differences, such as motion sickness susceptibility, experience in playing videogames, experience in using computers, sex, and age, in experiencing cybersickness symptomatology and intensity. For predicting cybersickness, overall and symptoms intensity, eye-tracking metrics, such as pupil dilation, were also considered. Furthermore, the study aimed to determine the effects of cybersickness symptoms on cognitive functions and motor skills. Finally, given that users experience a readjustment to physical space, while removing the VR HMD and transitioning from the virtual to the physical environment, the current study also examined the differences in cybersickness intensity during and after immersion. A comprehensive discussion is offered by integrating findings from the current study with the insights from the reviewed literature.
### Pupil Dilation as a Biomarker of Cybersickness
Pupil dilation has been previously seen as a biomarker of affective state, where a bigger size indicates a positive (e.g., joy) and a smaller size indicates a negative affective state (e.g., fear) [102]. Our results demonstrated a comparable pattern in cybersickness,
Figure 4: Comparison of Cybersickness’ Intensity Between During and After Exposure to Virtual Reality. Note: CSQ-VR = Cybersickness in Virtual reality Questionnaire; In VR = During Immersion; Post-VR = After Exposure to VR; ns = non-significant, * p \(\leq\).05, ** p \(\leq\).01, *** p \(\leq\).001
where more intense cybersickness induced a smaller pupil diameter (i.e., a more negative affective state), and less intense cybersickness a bigger diameter (i.e., a more positive affective state). In VR, pupil size has previously been incorporated into a deep fusion model to predict cybersickness [103]. However, that study did not assess its association, predictive capacity, or role within the model, making it challenging to determine whether pupil size acts as a biomarker for cybersickness. In our previous studies, however, evidence was offered that pupil dilation while reading the questions of the CSQ-VR substantially predicts cybersickness intensity [49, 53]. In line with all the studies mentioned above, pupil dilation was found to be a significant predictor of cybersickness in this study. Here, pupil dilation was measured both while reading the CSQ-VR questions and during the ride. The pupil size during the ride was deemed a significant predictor of overall cybersickness and nausea symptoms, while the pupil size while responding to the CSQ-VR significantly predicted only overall cybersickness. Also, for overall cybersickness intensity, pupil dilation during the ride was a substantially better predictor than pupil dilation during the CSQ-VR. This suggests that pupil size is a more reliable biomarker during the triggering and experience of cybersickness than after exposure to the stimuli eliciting the symptomatology.
### Modulators of Cybersickness: Sex, Smartphone Experience, and Videogame Experience
None of the demographics appeared to be a significant predictor of cybersickness intensity. Notably, age was not a significant predictor of cybersickness. Given the younger demographic of participants, it is worth investigating these effects in a broader age range. Also, experience in using computers failed to predict cybersickness. Finally, in disagreement with previous studies [73, 104], sex (i.e., male, female) did not predict cybersickness intensity and symptoms, postulating that cybersickness is not more frequent or intense in either sex. Interestingly, Stanney et al. [73] proposed that the variations stemmed from the VR headset's InterPupillary Distance (IPD). In their subsequent experiment, no disparities between the two sexes were observed when participants deemed the IPD agreeable. In our research, the HTC Vive Pro Eye was employed, known for its universal comfort. Each participant underwent an eye-tracking calibration to fine-tune the IPD, ensuring its appropriateness and comfort. Also, in our previous study [49], the differences between sexes in terms of cybersickness were eliminated when we controlled for gaming experience (i.e., male and female users with the same experience in playing videogames). In this study, a balanced sample was sought, including both female and male participants with a comparable (e.g., high/low) gaming experience. Therefore, the calibration of the IPD via eye-tracking, as well as the balance in gaming experience among participants, may explain the non-significant effect of sex on experiencing cybersickness. In combination, the findings of this study and our previous study [49], along with the findings of Stanney et al. [73], suggest that the sex/gender of the participant does not modulate the intensity of cybersickness symptomatology.
On the contrary, experience in playing videogames and experience in using smartphone apps for everyday tasks significantly predicted the intensity of perceived cybersickness symptoms. Notably, our study is the first to show that experience with smartphones may be a predictor of cybersickness. Specifically, higher experience with smartphones was associated with a substantially lower intensity of cybersickness symptomatology. Interestingly, the usage of smartphones for performing various tasks (e.g., surfing the internet, sending emails, writing documents, editing photos and videos, etc.), which traditionally were performed on a computer, is nowadays significantly higher than the usage of computers/laptops for performing the same tasks [105]. Note that visually induced cybersickness can be elicited by exposure to any screen, including smartphones [106, 107, 108]. Also, exposure to and experience with using tech mediums with screens is associated with cultivating tolerance towards cybersickness symptomatology by adapting to spatial requirements and motion [32]. Thus, the
finding of this study regarding smartphone experience for developing resilience to cybersickness is aligned with the relevant literature.
In the same direction, the current study showed that higher experience in playing videogames confers higher resilience to cybersickness. This finding corroborates the findings of our previous studies [49, 53], a large-scale study by Weech et al. [77], as well as other studies on cybersickness [109] and simulator sickness [110]. Therefore, a more significant gaming background seems to act as a protective factor against cybersickness, whereas a limited one might heighten vulnerability. Notably, similar to our previous studies on cybersickness [49, 53], in this study, experience in playing videogames considered both proficiency and frequency, which appears to be a refined measure of gaming experience. However, videogames cluster into several diverse genres (e.g., action, first-person shooting, role-playing, etc.), and each one may modulate the physiological and biochemical state [111], as well as cognitive functioning and brain structure [112], in a significantly different way. Thus, further research is required to examine the effects of gaming on cybersickness by considering experience in playing games of each genre as distinct metrics.
### Cybersickness & Susceptibility to Motion Sickness
Susceptibility to motion sickness during adulthood (i.e., the MSB-Adult score of the MSSQ) emerged as the best predictor of cybersickness intensity. Except for oculomotor symptoms, the MSB-Adult was a significant predictor of overall cybersickness, nausea symptoms, and vestibular symptoms. Notably, the single-predictor models with MSB-Adult (i.e., having only MSB-Adult as a predictor) were found to be the best regression models for predicting the respective scores of the CSQ-VR. Considering that motion sickness and cybersickness induced by vection share common characteristics, such as motion cues acting as elicitors of sickness, it comes as no surprise that findings of previous studies postulated that visually induced cybersickness and motion sickness demonstrate similar patterns of susceptibility among individuals [78, 79]. The findings of this study agree with the findings of the aforementioned studies, since the MSB-Adult was the most prominent predictor of overall cybersickness, as well as nausea and vestibular symptoms.
However, in our previous study, MSSQ scores failed to predict cybersickness intensity and symptoms [49]. Nonetheless, in that study, the MSSQ was also used to exclude participants who demonstrated high susceptibility to motion sickness, which may explain why the MSSQ scores were not identified as significant predictors there. Taken together, the MSSQ scores, especially MSB-Adult, may be used to identify individuals who are prone to experience vection-induced cybersickness. However, the Visually Induced MSSQ (VIMSSQ) has recently been developed and validated, which is specific to exposure to screens (e.g., computers, tablets, and VR HMDs) [108, 113]. Also, it should be noted that vection is only one of the causes of cybersickness in VR [31, 35, 36, 53]. Thus, further research is required to examine whether MSSQ and/or VIMSSQ scores may predict cybersickness induced by other factors such as high latency or latency fluctuations, non-ergonomic navigation, and low-quality graphics.
### Cybersickness Effects on Verbal Short-Term Memory and Working Memory
The study of Dahlman et al. [84] posited a direct negative effect of motion sickness on verbal working memory, and significant negative effects of cybersickness on verbal working memory were observed in our previous study [49]. In contrast to these previous studies, cybersickness here significantly affected neither verbal short-term memory nor verbal working memory. Regarding short-term and working memory, there is consensus that they are two different cognitive constructs stemming from the activation of diverse brain regions, where the former requires substantially fewer cognitive resources than the latter [114, 115]. Hence, short-term memory performance may not decrease, owing to the task's low difficulty. Furthermore, the study of Dahlman et al. [84] was on motion sickness. While motion sickness and cybersickness share some similarities, they differ substantially in symptom frequency and intensity [29, 30]. This difference between cybersickness and motion sickness may thus explain the disagreement between the findings of this study and those of Dahlman and collaborators.
Furthermore, in our previous study, only working memory was examined [49]. Also, the order of the tasks was not counterbalanced: the verbal working memory task was always performed first after exposure to linear and angular accelerations. Finally, the effect size of the performance decrease on the verbal working memory task was small [49]. These limitations of our previous study may thus explain the discrepancy between the findings of the two studies. Nonetheless, given that cybersickness effects are transient and of relatively short duration [52], if performing a task immediately after exposure to stimuli inducing cybersickness does indeed have an impact, then this poses a severe methodological consideration that should be further explored in future studies. However, in this study, in line with the design of the original tests (see [91, 93]), short-term memory tasks always preceded working memory tasks. Thus, if the order had a significant impact, cybersickness effects should also have been observed on the short-term memory tasks.
### Cybersickness Effects on Visuospatial Short-Term Memory and Working Memory
Cybersickness was found to have a significant negative effect on visuospatial working memory. This finding aligns with the findings of the study of Mittelstaedt et al. [59], where performance on the visuospatial working memory task was substantially decreased. The two studies hence postulate that cybersickness does have a negative impact on visuospatial working memory. However, again, there was no effect on visuospatial short-term memory. Considering that the CBT tasks, forward (short-term memory) and backward (working memory) recall, should be administered in this order (i.e., forward recall and then backward recall) [93, 94], the absence of an effect on short-term memory alongside a significant effect on working memory further supports that the order of the tasks does not have an impact on observing effects of cybersickness on cognitive performance. Furthermore, similar to verbal short-term and working memory, it is widely agreed that visuospatial short-term memory and working memory are distinct cognitive structures facilitated by different brain structures, with the former requiring fewer cognitive resources than the latter [115, 116]. This difference between the two explains why visuospatial short-term memory was left intact by cybersickness while visuospatial working memory was substantially decreased.
Moreover, in this study, vestibular symptomatology, which postulates a transient dysfunction of the vestibular system, was found to have a significant negative impact on visuospatial working memory. Visuospatial working memory functioning pertains to the processing of visuospatial information [115, 116]. The vestibular system has been suggested to play an important role in visuospatial cognitive functioning [117]. Thus, the decrease of visuospatial working memory by predominantly vestibular symptomatology that was observed in this study further supports the importance of the vestibular system to visuospatial information processing. Nevertheless, the best model for predicting visuospatial working memory also included gaming experience, which revealed a positive effect on working memory. Notably, gaming experience was also included in the best model of visuospatial short-term memory. These findings are in line with the relevant literature, which suggests that gamers have enhanced short-term and working memory abilities [118]. In the investigation of cybersickness, these findings postulate that gaming experience should always be considered (e.g., as a covariate or an additional factor) when examining the effects of cybersickness on cognition.
### Cybersickness Effects on Psychomotor Skills: Reaction, Attention, and Motor Speed
Psychomotor skills, such as attentional speed, motor speed, and overall reaction time, were found to be substantially negatively affected by cybersickness symptomatology and intensity. In this study, the VR version of the DLRT was implemented, which, in contrast with the traditional version that produces a single score (i.e., reaction time), produces three metrics corresponding to attentional speed, motor speed, and overall reaction time. Since previous studies on cybersickness (e.g., [34, 59, 60]) used the traditional version and assessed participants after immersion, the current study may further decipher the effects of cybersickness on psychomotor skills. The observed significant deceleration of overall reaction speed is in line with our [49, 53] and other previous studies [34, 59, 60]. In conjunction, the current and previous findings thus offer robust evidence that cybersickness substantially compromises psychomotor skills. However, experience in playing videogames was also included in the best model, where a significant positive effect on reaction time (i.e., an acceleration of reaction speed) was detected. This finding aligns with previous literature on the effects of gaming experience on psychomotor speed [119, 120, 121, 122]. In the context of cybersickness, this outcome postulates that gaming experience has to be considered (e.g., as a covariate) to effectively examine the effects of cybersickness on psychomotor skills.
While the effects on overall reaction speed are well established by the current and previous findings, the cybersickness effects on the components of psychomotor skills (i.e., attentional and motor speed) still need to be investigated in depth. In this study, the attentional speed was found to be significantly decelerated by the oculomotor symptoms' intensity. This outcome is in agreement with previous studies that revealed a significant negative effect on attentional processing speed [59, 85]. However, these previous studies attributed the deceleration of attentional speed to overall cybersickness. In the current study, while both overall cybersickness intensity and oculomotor symptoms' intensity were deemed significant predictors of attentional speed, only the oculomotor symptomatology was incorporated in the respective best model. This outcome is in line with the established understanding that the oculomotor system is essential for facilitating visual attention functioning, especially for orienting attention [123, 124]. Hence, our findings connote that a transient dysfunction of the oculomotor system (e.g., eye fatigue or strain) substantially compromises attentional speed.
However, the deceleration of the motor speed was found to be predominantly attributed to nausea symptomatology. This finding is aligned with the current understanding of the negative effects of nausea on motor coordination and skills due to a modulation of the activation of sensorimotor brain regions [125, 126]. Nevertheless, the gaming experience was also included in the best predictive model of motor speed, where a significant acceleration of motor speed was observed due to higher gaming experience. This aligns with the relevant literature that postulates that the gaming experience promotes an enhanced motor speed, especially for fine motor functions [127, 128]. Thus, this finding suggests that gaming experience should be considered in the examination of cybersickness effects on motor speed. In summary, gaming experience appears central to enhancing psychomotor speed, while cybersickness substantially decelerates overall reaction speed. Regarding the components of psychomotor skills, oculomotor and nausea symptoms' intensity significantly decelerates attentional and motor speed respectively.
### Cybersickness Symptoms and Intensity During and After Immersion
The intensities of overall cybersickness, nausea symptoms, and vestibular symptoms were substantially decreased after immersion (i.e., immediately after removing the headset). However, oculomotor symptoms did not decrease, which suggests that they may have a more lasting impact. This appears similar to simulator sickness, where oculomotor symptoms like eye strain persist long after exposure to the simulator [129]. In contrast, vestibular and nausea symptoms are the most frequent and predominant in cybersickness
[29, 61, 130]. Thus, these findings draw a stark contrast between the experience of cybersickness during immersion in a virtual environment and after immersion. This agrees with the suggestion that the human body and mind strive to readjust to the physical environment immediately after the removal of the VR headset [86].
Based on our results, after the exposure to stimuli inducing cybersickness, and during this transitory period of readapting to the physical space and body, the cybersickness intensity and symptoms substantially fade away. This also agrees with the current predominant understanding that cybersickness symptoms and effects are transient [52]. However, there is no consensus concerning how long cybersickness symptoms may persist, with some previous reviews suggesting that they may last for up to 12 hours after exposure to VR [31; 131]. However, our results postulate that the period of experiencing a substantial alleviation of cybersickness intensity commences during immersion (i.e., after exposure to stimuli inducing cybersickness, such as linear and angular accelerations) and have attained a significant decrease after immersion, during the readaptation to the physical body and environment. Hence, this natural process counteracts or alleviates some of the dissonances experienced in the virtual environment during immersion.
Furthermore, these findings posit fundamental ramifications for the methodology used in VR research on cybersickness. To the best of our knowledge, except for the current and our previous studies [49, 53], studies on cybersickness (e.g., [28, 45, 46, 47, 48, 56]) or on the effects of cybersickness on human physiology, cognitive functioning, or psychomotor skills (e.g., [34, 59, 60, 64, 65, 66, 85]) measure cybersickness intensity, symptomatology, and/or its effects after immersion. Our findings indicate that this approach may capture only a part of the overall picture, or worse, a substantially distorted picture.
If cybersickness symptoms subside or change in intensity immediately after VR exposure, then solely post-immersion evaluations could lead to unreliable conclusions. For instance, evaluations done post-immersion might underestimate the true intensity of symptoms experienced during VR exposure, as well as the effects of cybersickness on cognition and motor skills. From a research design standpoint, these insights underscore the importance of adopting a more temporally accurate approach. Hence, in studies attempting to attain a holistic view of cybersickness, researchers should endeavor to capture data at multiple points: before, during, and after VR exposure. On the other hand, studies attempting to examine cybersickness aftereffects or the persistence of symptomatology may perform the examination after immersion. Lastly, it is crucial for studies striving to assess cybersickness intensity and symptomatology, as well as its effects on physiology (e.g., autonomic responses such as heart rate and temperature), cognition, and motor skills, to perform their examination during VR immersion.
### Limitations and Future Studies
This study also has some limitations that should be considered. While it allowed the required statistical analyses to be conducted, the sample size was relatively small. Also, the sample consisted predominantly of young adults aged 20-45 years old. A future study should incorporate a larger and/or more age-diverse sample (e.g., considering adolescents and/or older adults). Moreover, this study utilized the MSSQ and did not consider the VIMSSQ, which is specific to vection elicited by screen-based mediums. Using both in a future study may assist in deciphering whether generic or vection-specific motion sickness susceptibility is an indicator of experiencing cybersickness symptomatology and intensity. Also, overall gaming experience was used to investigate its effects on cybersickness, cognitive functioning, and psychomotor skills. Since each genre of videogames offers diverse content that may stimulate distinct physiological and cognitive aspects, future studies should attempt to examine the effects of gaming experience by genre.
Moreover, this study explored the cybersickness intensity and symptoms in VR that were induced by vection. While vection is indeed one of the main causes of cybersickness, several other factors (e.g., high latency or latency fluctuations, non-ergonomic navigation, and low-quality graphics) may induce cybersickness. Future research should explore
cybersickness intensity and symptomatology induced by each factor. The current study incorporated several cognitive and psychomotor tasks. Given that a task's characteristics may modulate cybersickness [132], future studies should either consider a single task or examine each task's effects on cybersickness. Finally, considering that significant differences were found during and post-immersion, future attempts should consider multiple assessment points of cybersickness to effectively scrutinize the cybersickness symptoms and intensity before, during (i.e., with several assessments), and after immersion.
## 5 Conclusions
This study comprehensively explored the modulators, symptomatology, and effects of cybersickness on an array of cognitive, physiological, and psychomotor functions. Notably, it was found that pupil dilation could be a valuable biomarker for cybersickness, reflecting the intensity of the experienced symptoms. Interestingly, demographic factors like sex and age did not significantly predict cybersickness. However, experience with videogames and smartphones emerged as protective factors, suggesting that familiarity with screen-based interactions could mitigate cybersickness symptoms. A particularly significant insight was the relationship between motion sickness susceptibility in adulthood and cybersickness intensity, suggesting an intertwined susceptibility pattern between these two phenomena. Furthermore, when dissecting the effects on cognitive domains, the research identified a selective impact of cybersickness. While visuospatial working memory suffered from cybersickness, verbal short-term and working memory remained largely untouched. This distinction underscores the complex interplay between neural mechanisms and their susceptibility to cybersickness. One of the most crucial takeaways from this study was the marked difference in the experience and intensity of cybersickness during VR immersion compared to the post-immersion phase: cybersickness symptomatology showed substantial changes post-immersion. This temporal difference poses significant methodological implications: if cybersickness evaluations are conducted exclusively post-immersion, they might not offer reliable results and conclusions. Therefore, for a more accurate understanding of cybersickness, future research endeavours should consider assessment points during immersion and/or post-immersion, according to the research aims.
**Author Contributions:** Conceptualization, P.K.; methodology, P.K.; software, P.K.; validation, A.P., P.R., and P.K.; formal analysis, P.K.; investigation, A.P.; resources, P.R. and P.K.; data curation, A.P. and P.K.; writing (original draft preparation), A.P. and P.K.; writing (review and editing), A.P., P.R., and P.K.; visualization, P.K.; supervision, P.K.; project administration, P.R. and P.K.; funding acquisition, A.P. and P.R. All authors have read and agreed to the published version of the manuscript. **Funding:** This research received no external funding. **Institutional Review Board Statement:** The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Department of Psychology of the National and Kapodistrian University of Athens (796-14/07/2023). **Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study. **Data Availability Statement:** The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical approval requirements. **Conflicts of Interest:** The authors declare no conflict of interest.
|
2301.01267 | Optimal convergence rates in stochastic homogenization in a balanced
random environment | We consider random walks in a uniformly elliptic, balanced, i.i.d. random
environment in the integer lattice $Z^d$ for $d\geq 2$ and the corresponding
problem of stochastic homogenization of non-divergence form difference
operators. We first derive a quantitative law of large numbers for the
invariant measure, which is nearly optimal. A mixing property of the field of
the invariant measure is then achieved. We next obtain rates of convergence for
the homogenization of the Dirichlet problem for non-divergence form operators,
which are generically optimal for $d\geq 3$ and nearly optimal when $d=2$.
Furthermore, we establish the existence, stationarity and uniqueness properties
of the corrector problem for all dimensions $d\ge 2$. Afterwards, we quantify
the ergodicity of the environmental process for both the continuous-time and
discrete-time random walks, and as a consequence, we get explicit convergence
rates for the quenched central limit theorem of the balanced random walk. | Xiaoqin Guo, Hung V. Tran | 2023-01-03T18:20:02Z | http://arxiv.org/abs/2301.01267v2 | # Optimal convergence rates in stochastic homogenization in a balanced random environment
###### Abstract
We consider random walks in a uniformly elliptic, balanced, i.i.d. random environment in \(\mathbb{Z}^{d}\) for \(d\geq 2\). We first derive a quantitative law of large numbers for the invariant measure, which is nearly optimal. A mixing property of the field of the invariant measure is then achieved. We next obtain rates of convergence for the homogenization of the Dirichlet problem, which are generically optimal for \(d\geq 5\). Afterwards, we quantify the ergodicity of the environmental process for both the continuous-time and discrete-time random walks, and as a consequence, we get explicit convergence rates for the quenched central limit theorem of the balanced random walk.
###### Contents
* 1 Introduction
* 1.1 Settings
* 1.2 Main assumptions
* 1.3 Earlier results in the literature
* 1.4 Main results
2010 _Mathematics subject classification_. 35J15 35J25 35K10 35K20 60G50 60J65 60K37 74Q20 76M50.

* 2 Large scale \(C^{0,1}\) and \(C^{1,1}\) estimates
* 2.1 Some regularity properties of deterministic functions
* 2.2 Large scale regularity
* 3 Mixing properties of the invariant measure
* 3.1 A sensitivity estimate of the invariant measure
* 3.2 Rate of convergence for the average of the invariant measure: Proof of Theorem 5
* 3.3 Correlation structure of the field of the invariant measure
* 4 Homogenization of the Dirichlet problem
* 4.1 Homogenization of the approximate corrector
* 4.2 Rate of homogenization for the Dirichlet problem (19)
* 5 Quantification of the diffusive behavior
* 5.1 Quantification of the ergodicity of the environmental process: Proof of Theorem 8
* 5.2 A Berry-Esseen estimate for the QCLT: Proof of Corollary 10
* A Appendix
* A.1 Proof of Proposition 16
* A.2 Verification of (68)
* A.3 Verification of (70)
* A.4 Verification of (78)
## 1 Introduction
In this paper, we consider random walks in a uniformly elliptic, balanced, i.i.d. random environment in \(\mathbb{Z}^{d}\) for \(d\geq 2\). Our main goals are two-fold. Firstly, we derive a quantitative law of large numbers for the large-scale average of the invariant measure, which is nearly optimal, in Theorem 5; a mixing property of the field of the invariant measure is also achieved. Secondly, we obtain rates of convergence for the homogenization of the Dirichlet problem in Theorem 7. When \(d\geq 5\), the convergence rate is \(O(R^{-1})\), which is generically optimal. Afterwards, we quantify the ergodicity of the environmental process for both the continuous- and discrete-time random walks in Theorem 8, and as a consequence, we get explicit convergence rates for the quenched central limit theorem (QCLT) of the balanced random walk.
### Settings
Let \(\mathbb{S}_{d\times d}\) denote the set of \(d\times d\) positive-definite diagonal matrices. A map
\[\omega\,:\,\mathbb{Z}^{d}\,\to\mathbb{S}_{d\times d}\]
is called an _environment_. We denote the set of all environments by \(\Omega\). Let \(\mathbb{P}\) be a probability measure on \(\Omega\) so that
\[\left\{\omega(x)=\operatorname{diag}[\omega_{1}(x),\ldots,\omega_{d}(x)],x\in \mathbb{Z}^{d}\right\}\]
are i.i.d. under \(\mathbb{P}\). Expectation with respect to \(\mathbb{P}\) is denoted by \(\mathbb{E}\) or \(E_{\mathbb{P}}\).
Let \(\{e_{1},\ldots,e_{d}\}\) be the canonical basis for \(\mathbb{R}^{d}\). For any function \(u:\mathbb{Z}^{d}\to\mathbb{R}\) and \(\omega\in\Omega\), define the non-divergence form difference operator
\[\operatorname{tr}(\omega(x)\nabla^{2}u)=\sum_{i=1}^{d}\omega_{i}(x)[u(x+e_{i}) +u(x-e_{i})-2u(x)], \tag{1}\]
where \(\nabla^{2}=\operatorname{diag}[\nabla^{2}_{1},\ldots,\nabla^{2}_{d}]\), and \(\nabla^{2}_{i}u(x)=u(x+e_{i})+u(x-e_{i})-2u(x)\).
For \(r>0\), \(y\in\mathbb{R}^{d}\) we let
\[\mathbb{B}_{r}(y)=\left\{x\in\mathbb{R}^{d}\,:\,|x-y|<r\right\},\quad B_{r}( y)=\mathbb{B}_{r}(y)\cap\mathbb{Z}^{d}\]
denote the continuous and discrete balls with center \(y\) and radius \(r\), respectively. When \(y=0\), we also write \(\mathbb{B}_{r}=\mathbb{B}_{r}(0)\) and \(B_{r}=B_{r}(0)\). For any \(B\subset\mathbb{Z}^{d}\), its _discrete boundary_ is defined as
\[\partial B:=\left\{z\in\mathbb{Z}^{d}\,\setminus\,B\,:\,\operatorname{dist}(z, x)=1\text{ for some }x\in B\right\}.\]
Let \(\bar{B}=B\cup\partial B\). By abuse of notations, whenever confusion does not occur, we also use \(\partial A\) and \(\bar{A}\) to denote the usual continuous boundary and closure of \(A\subset\mathbb{R}^{d}\), respectively.
For \(x\in\mathbb{Z}^{d}\), a _spatial shift_\(\theta_{x}\,:\,\Omega\to\Omega\) is defined by
\[(\theta_{x}\omega)(\cdot)=\omega(x+\cdot).\]
In a random environment \(\omega\in\Omega\), we consider the discrete elliptic Dirichlet problem
\[\left\{\begin{array}{ll}\frac{1}{2}\mathrm{tr}(\omega\nabla^{2}u(x))=\frac {1}{R^{2}}f\left(\frac{x}{R}\right)\psi(\theta_{x}\omega)&x\in B_{R},\\ u(x)=g\left(\frac{x}{|x|}\right)&x\in\partial B_{R},\end{array}\right. \tag{2}\]
where \(f\in\mathbb{R}^{\mathbb{B}_{1}},g\in\mathbb{R}^{\partial\mathbb{B}_{1}}\) are functions with good regularity properties and \(\psi\in\mathbb{R}^{\Omega}\) is bounded and satisfies a suitable measurability condition. Stochastic homogenization studies (for \(\mathbb{P}\)-almost all \(\omega\)) the convergence of \(u\) to the solution \(\bar{u}\) of a deterministic _effective equation_
\[\left\{\begin{array}{ll}\frac{1}{2}\mathrm{tr}(\bar{a}D^{2}\bar{u})=f\bar{ \psi}&\text{ in }\mathbb{B}_{1},\\ \bar{u}=g&\text{ on }\partial\mathbb{B}_{1},\end{array}\right. \tag{3}\]
as \(R\to\infty\). Here \(D^{2}\bar{u}\) denotes the Hessian matrix of \(\bar{u}\) and \(\bar{a}=\bar{a}(\mathbb{P})\in\mathbb{S}_{d\times d}\) and \(\bar{\psi}=\bar{\psi}(\mathbb{P},\psi)\in\mathbb{R}\) are _deterministic_ and do not depend on the realization of the random environment (see the statement of Proposition C for formulas for \(\bar{a}\) and \(\bar{\psi}\)).
The difference equation (2) is used to describe random walks in a random environment (RWRE) in \(\mathbb{Z}^{d}\). To be specific, we set
\[\omega(x,x\pm e_{i}):=\frac{\omega_{i}(x)}{2\mathrm{tr}\omega(x)}\quad\text{ for }i=1,\ldots d, \tag{4}\]
and \(\omega(x,y)=0\) if \(|x-y|\neq 1\). Namely, we normalize \(\omega\) to get a transition probability. We remark that the configuration of \(\{\omega(x,y):x,y\in\mathbb{Z}^{d}\}\) is also called a _balanced environment_ in the literature [40, 33, 10].
**Definition 1**.: For each fixed \(\omega\in\Omega\), the random walk \((X_{n})_{n\geq 0}\) in the environment \(\omega\) with \(X_{0}=x\) is a Markov chain in \(\mathbb{Z}^{d}\) with transition probability \(P_{\omega}^{x}\) specified by
\[P_{\omega}^{x}\left(X_{n+1}=z|X_{n}=y\right)=\omega(y,z). \tag{5}\]
The expectation with respect to \(P_{\omega}^{x}\) is written as \(E_{\omega}^{x}\). When the starting point of the random walk is \(0\), we sometimes omit the superscript and simply write \(P_{\omega}^{0},E_{\omega}^{0}\) as \(P_{\omega}\) and \(E_{\omega}\), respectively. Notice that for random walks \((X_{n})\) in an environment \(\omega\),
\[\bar{\omega}^{i}=\theta_{X_{i}}\omega\in\Omega,\quad i\geq 0, \tag{6}\]
is also a Markov chain, called the _environment viewed from the particle_ process. By abuse of notation, we enlarge our probability space so that \(P_{\omega}\) still denotes the joint law of the random walks and \((\bar{\omega}^{i})_{i\geq 0}\).
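The normalization (4) and the transition rule (5) are straightforward to simulate. Below is a hedged sketch (Python with NumPy assumed; all names are ours, and the environment is periodized to a finite box purely for storage) that runs one trajectory of the balanced walk.

```python
import numpy as np

rng = np.random.default_rng(1)
d, L, n_steps = 2, 201, 10_000
omega = rng.uniform(0.5, 1.5, size=(L, L, d))    # omega_i(x) > 0, i.i.d.

def step(x, omega, rng):
    """One step of the balanced walk: move to x +/- e_i with probability
    omega_i(x) / (2 tr omega(x)), cf. (4)-(5)."""
    w = omega[x[0] % L, x[1] % L]                # periodize for finite storage
    p = np.concatenate([w, w]) / (2.0 * w.sum()) # order: +e_1, +e_2, -e_1, -e_2
    k = rng.choice(2 * d, p=p)
    x = x.copy()
    x[k % d] += 1 if k < d else -1
    return x

x = np.zeros(d, dtype=int)
for _ in range(n_steps):
    x = step(x, omega, rng)
print("X_n after", n_steps, "steps:", x)         # diffusive: |X_n| ~ sqrt(n)
```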
We also consider the _continuous-time_ RWRE \((Y_{t})\) on \(\mathbb{Z}^{d}\).
**Definition 2**.: Let \((Y_{t})_{t\geq 0}\) be the Markov process on \(\mathbb{Z}^{d}\) with generator
\[L_{\omega}u(x)=\sum_{y}\omega(x,y)[u(y)-u(x)]=\frac{1}{2\mathrm{tr}\omega(x) }\mathrm{tr}(\omega(x)\nabla^{2}u). \tag{7}\]
By abuse of notation, we also denote by \(P_{\omega}^{x}\) the quenched law of \((Y_{t})\). If there is no ambiguity from the context, we also write, for \(x,y\in\mathbb{Z}^{d}\), \(n\in\mathbb{Z},t\in\mathbb{R}\), the transition kernels of the discrete and continuous time walks as
\[p_{n}^{\omega}(x,y)=P_{\omega}^{x}(X_{n}=y),\quad\text{ and }\quad p_{t}^{ \omega}(x,y)=P_{\omega}^{x}(Y_{t}=y),\]
respectively.
### Main assumptions
Throughout the paper, the following assumptions are always in force.
* (A1) \(\left\{\omega(x),x\in\mathbb{Z}^{d}\right\}\) are i.i.d. under the probability measure \(\mathbb{P}\).
* (A2) \(\frac{\omega}{\mathrm{tr}\omega}\geq 2\kappa\mathrm{I}\) for \(\mathbb{P}\)-almost every \(\omega\) and some constant \(\kappa\in(0,\frac{1}{2d}]\).
* (A3) \(\psi\) is a measurable function of the environment with the property that \(\{\psi(\theta_{x}\omega):x\in\mathbb{Z}^{d}\}\) are i.i.d. under \(\mathbb{P}\).
In the paper, we use \(c,C\) to denote positive constants which may change from line to line but only depend on the dimension \(d\) and the ellipticity constant \(\kappa\) unless otherwise stated. We write \(A\lesssim B\) if \(A\leq CB\), and \(A\asymp B\) if \(A\lesssim B\) and \(A\gtrsim B\). We also use notations \(A\lesssim_{j}B\), \(A\asymp_{j}B\) to indicate that the multiplicative constant depends on the variable \(j\) other than \((d,\kappa)\).
### Earlier results in the literature
We first recall the following quenched central limit theorem (QCLT) proved by Lawler [40], which is a discrete version of Papanicolaou, Varadhan [44].
**Theorem A**.: _Assume (A2) and that law \(\mathbb{P}\) of the environment is ergodic under spatial shifts \(\{\theta_{x}:x\in\mathbb{Z}^{d}\}\). Then_
1. _There exists a probability measure_ \(\mathbb{Q}\approx\mathbb{P}\) _such that_ \(\left(\bar{\omega}^{i}\right)_{i\geq 0}\) _is an ergodic (with respect to time shifts) sequence under law_ \(\mathbb{Q}\times P_{\omega}\)_._
2. _For_ \(\mathbb{P}\)_-almost every_ \(\omega\)_, the rescaled path_ \(X_{n^{2}t}/n\) _converges weakly (under law_ \(P_{\omega}\)_) to a Brownian motion with covariance matrix_ \(\bar{a}=E_{\mathbb{Q}}[\omega/\mathrm{tr}\omega]>0\)_._
QCLT for the balanced RWRE in static environments under weaker ellipticity assumptions can be found at [33, 10]. For dynamic balanced random environments, the QCLT was established in [23], and finer results concerning the local limit theorem and heat kernel estimates were obtained in [22]. When the RWRE is allowed to make long jumps, non-CLT stable limits of the balanced random walk are considered in [20, 21]. We refer to the lecture notes [13, 48, 12, 24, 38] for QCLT results in different models of RWRE.
We are moreover interested in characterizing the invariant measure \(\mathbb{Q}\). Denote the Radon-Nikodym derivative of \(\mathbb{Q}\) with respect to \(\mathbb{P}\) as
\[\rho(\omega)=\mathrm{d}\mathbb{Q}/\mathrm{d}\mathbb{P}. \tag{8}\]
For any \(x\in\mathbb{Z}^{d}\) and finite set \(A\subset\mathbb{Z}^{d}\), we define
\[\rho_{\omega}(x):=\rho(\theta_{x}\omega)\quad\text{ and }\quad\rho_{\omega}(A)= \sum_{x\in A}\rho_{\omega}(x).\]
As an important feature of the non-divergence form model, \(\rho_{\omega}\) does not have deterministic (nonzero) upper and lower bounds. Moreover, the heat kernel \(p_{t}^{\omega}(\cdot,\cdot)\) is not expected to have deterministic Gaussian bounds.
For \(r\geq 0\), \(t>0\), define the function
\[\mathfrak{h}(r,t)=\frac{r^{2}}{r\lor t}+r\log\left(\frac{r}{t}\lor 1\right). \tag{9}\]
The following result was obtained by Guo, Tran [31].
**Theorem B**.: _Assume (A1), (A2), and \(d\geq 2\). Let \(s=s(d,\kappa)=2+\frac{1}{2\kappa}-d\geq 2\). For any \(\varepsilon\in(0,1)\), there exists a random variable \(\mathcal{H}(\omega)=\mathcal{H}(\omega,d,\kappa,\varepsilon)>0\) with \(\mathbb{E}[\exp(c\mathcal{H}^{d-\varepsilon})]<\infty\) such that the following properties hold._
1. _For_ \(\mathbb{P}\)_-almost all_ \(\omega\)_,_ \[c\mathcal{H}^{-s}\leq\rho(\omega)\leq C\mathcal{H}^{d-1}.\]
2. _Recall the function_ \(\mathfrak{h}\) _in (_9_). For any_ \(r\geq 1\) _and_ \(\mathbb{P}\)_-almost all_ \(\omega\)_,_ \[c\mathcal{H}^{-s}\leq\frac{r^{d}\,\rho_{\omega}(0)}{\rho_{\omega}(B_{r})}\leq C \mathcal{H}^{d-1}.\]
3. _For any_ \(x\in\mathbb{Z}^{d},t>0\)_, and_ \(\mathbb{P}\)_-almost all_ \(\omega\)_,_ \[\begin{split}& p_{t}^{\omega}(x,0)\leq C\mathcal{H}^{d-1}(1+t)^ {-d/2}e^{-c\mathfrak{h}(|x|,t)},\\ & p_{t}^{\omega}(x,0)\geq c\mathcal{H}^{-s}(1+t)^{-d/2}e^{-C|x|^{ 2}/t}.\end{split}\]
**Remark 3**.: In the PDE setting, positive and negative algebraic moment bounds and the volume doubling property of \(\rho\) were proved by Bauman [7]. The positive moment bound in (73) with \(q=\frac{d}{d-1}\) was obtained by Lawler [40]. The \(L^{p}\) integrability of the heat kernel moment was proved by Fabes and Stroock [26]. Deterministic heat kernel bounds in terms of \(\rho\) were shown by Escauriaza [25] in the PDE setting, and by Mustapha [42] for discrete-time balanced random walks. In the more general dynamic ergodic balanced environment setting, the bounds
\[\frac{c\,\rho_{\omega}(0)}{\rho_{\omega}(B_{\sqrt{t}})}e^{-C|x|^{2}/t}\leq p_{ t}^{\omega}(x,0)\leq\frac{C\,\rho_{\omega}(0)}{\rho_{\omega}(B_{\sqrt{t}})}e^{ -c\mathfrak{h}(|x|,t)} \tag{10}\]
were proved by Deuschel, Guo [22, Theorem 11]. Recently, Armstrong, Fehrman, Lin [2] obtained an algebraic rate of convergence for the heat kernels.
A function \(\psi\,:\,\Omega\to\mathbb{R}\) is said to be _local_ if it is measurable and depends only on the environment \(\{\omega(x):\,x\in S\}\) in a finite set \(S\subset\mathbb{Z}^{d}\). We now state a quantitative homogenization result in Guo, Peterson, Tran [29, Theorem 1.5], which can be considered as a discrete version of Armstrong, Smart [4, Theorem 1.2].
**Proposition C**.: _Assume (A1), (A2), and that \(\psi\) is a local function. Recall the measure \(\mathbb{Q}\) in Theorem A. Suppose \(g\in C^{\alpha}(\partial\mathbb{B}_{1})\), \(f\in C^{\alpha}(\mathbb{B}_{1})\) for some \(\alpha\in(0,1]\), and \(\psi\) is a measurable function of \(\omega(0)\) with \(\|\psi/\mathrm{tr}\omega\|_{\infty}<\infty\). Let \(\bar{u}\) be the solution of the Dirichlet problem_
\[\left\{\begin{array}{ll}\frac{1}{2}\mathrm{tr}(\bar{a}D^{2}\bar{u})=f\bar{ \psi}&\mbox{ in }\mathbb{B}_{1},\\ \bar{u}=g&\mbox{ on }\partial\mathbb{B}_{1},\end{array}\right.\]
_with \(\bar{a}=E_{\mathbb{Q}}[\omega/\mathrm{tr}\omega]>0\) being a positive-definite matrix and \(\bar{\psi}=E_{\mathbb{Q}}[\psi/\mathrm{tr}\omega]\)._
_For any \(\varepsilon\in(0,1)\), there exists a random variable \(\mathcal{H}(\omega)=\mathcal{H}(\omega,d,\kappa,\varepsilon)>0\) with \(\mathbb{E}[\exp(c\mathcal{H}^{d-\varepsilon})]<\infty\) and a constant \(\beta=\beta(d,\kappa,\varepsilon)\in(0,1)\) such that for any \(y\in B_{3R}\), the solution \(u\) of_
\[\left\{\begin{array}{ll}\frac{1}{2}\mathrm{tr}(\omega\nabla^{2}u(x))=\frac{ 1}{R^{2}}f(\frac{x-y}{R})\psi(\theta_{x-y}\omega)&x\in B_{R}(y),\\ u(x)=g(\frac{x-y}{|x-y|})&x\in\partial B_{R}(y)\end{array}\right. \tag{11}\]
_satisfies, with \(A_{1}=\|f\|_{C^{0,\alpha}(\mathbb{B}_{1})}\|\frac{\psi}{\operatorname{tr}(\omega) }\|_{\infty}+[g]_{C^{0,\alpha}(\partial\mathbb{B}_{1})}\),_
\[\max_{x\in B_{R}(y)}\left|u(x)-\bar{u}(\frac{x-y}{R})\right|\lesssim A_{1}(1+( \frac{\mathcal{H}}{R})^{1-\epsilon/d})R^{-\alpha\beta}. \tag{12}\]
When the balanced environment is allowed to be non-elliptic and genuinely \(d\)-dimensional, (weak) quantitative results and Harnack inequalities for non-divergence form difference operators are obtained by Berger, Cohen, Deuschel, Guo [9], and Berger, Criens [11] for \(\omega\)-harmonic and \(\omega\)-caloric functions, respectively.
Let us also give a brief overview of the quantitative homogenization of non-divergence form operators in the continuous PDE setting. Yurinski derived a second moment estimate of the homogenization error in [47] for the linear elliptic case. Caffarelli, Souganidis [18] proved a logarithmic convergence rate for the fully nonlinear case. Afterwards, Armstrong, Smart [4], and Lin, Smart [41] achieved an algebraic convergence rate for fully nonlinear elliptic equations, and fully nonlinear parabolic equations, respectively. Armstrong, Lin [3] obtained quantitative estimates for the approximate corrector problems.
For \(d\geq 2\) and any finite set \(A\subset\mathbb{Z}^{d}\), denote the exit time from \(A\) by
\[\tau(A)=\tau(A;X)=\inf\{\,n\geq 0\,:\,X_{n}\not\in A\}, \tag{13}\]
and similarly \(\tau(A;Y)=\inf\{\,t\geq 0\,:\,Y_{t}\not\in A\}\) for the continuous-time walk.
**Definition 4**.: For \(R\geq 1\), \(\omega\in\Omega\), \(x\in\mathbb{Z}^{d}\), \(S\subset\mathbb{Z}^{d}\), the _Green function_\(G_{R}(\cdot,\cdot)\) in the ball \(B_{R}\) for the balanced random walk is defined by
\[G_{R}(x,S)=G_{R}^{\omega}(x,S)\,:\,=\,E_{\omega}^{x}\big{[}\int_{0}^{\tau(B_{R};Y)}\mathbbm{1}_{Y_{t}\in S}\,\mathrm{d}t\big{]},\quad x\in\bar{B}_{R}.\]
We also write \(G_{R}(x,y)\,:\,=\,G_{R}^{\omega}(x,\{y\})\) and \(G_{R}(x)\,:\,=\,G_{R}(x,0)\). When \(d\geq 3\), for any finite set \(S\subset\mathbb{Z}^{d}\), the Green function on the whole space can be defined as
\[G^{\omega}(x,S)=\int_{0}^{\infty}p_{t}^{\omega}(x,S)\mathrm{d}t<\infty.\]
When \(d=2\), for any \(x,y\in\mathbb{Z}^{2}\), the _potential kernel_ is defined as
\[A(x,y)=A^{\omega}(x,y)=\int_{0}^{\infty}[p_{t}^{\omega}(y,y)-p_{t}^{\omega}(x,y)]\mathrm{d}t,\quad x\in\mathbb{Z}^{2}. \tag{14}\]
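Although the paper works with the probabilistic definitions above, on a finite ball the Green function can equivalently be computed by solving the linear system \(L_{\omega}G_{R}(\cdot,0)=-\mathbbm{1}_{0}\) in \(B_{R}\) with zero boundary data. The sketch below (Python with NumPy/SciPy assumed; a small illustration of ours, not part of the paper's arguments) does this for \(d=2\).

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(2)
R = 20
pts = [(i, j) for i in range(-R, R + 1) for j in range(-R, R + 1)
       if i * i + j * j < R * R]                  # discrete ball B_R, d = 2
idx = {p: k for k, p in enumerate(pts)}
omega = {p: rng.uniform(0.5, 1.5, size=2) for p in pts}

# G_R(., 0) solves L_omega u = -1_{x = 0} in B_R with u = 0 on the boundary.
A = lil_matrix((len(pts), len(pts)))
b = np.zeros(len(pts))
b[idx[(0, 0)]] = -1.0
for p, k in idx.items():
    w = omega[p]
    A[k, k] = -1.0                                # the -u(x) term of L_omega
    for i, e in enumerate([(1, 0), (0, 1)]):
        for s in (1, -1):
            q = (p[0] + s * e[0], p[1] + s * e[1])
            if q in idx:                          # outside B_R: u = 0, drop
                A[k, idx[q]] += w[i] / (2.0 * w.sum())
G = spsolve(A.tocsr(), b)
print("G_R(0, 0) =", G[idx[(0, 0)]])              # ~ log R when d = 2
```

Increasing \(R\) in this sketch, one can observe numerically the logarithmic growth of \(G_{R}(0,0)\) for \(d=2\) that Theorem D below quantifies.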
The bounds for the Green functions and the potential kernel were proved in Guo, Tran [31], which was based on the idea of Armstrong, Lin [3, Proposition 4.1].
**Theorem D**.: _Assume (A1), (A2). For \(\epsilon>0\), let \(s>0\), \(\mathcal{H}=\mathcal{H}(\omega,d,\kappa,\epsilon)>0\) be as in Theorem B. For \(r>0\), let_
\[U(r)\,:\,=\,\left\{\begin{array}{ll}-\log r&\quad d=2,\\ r^{2-d}&\quad d\geq 3.\end{array}\right. \tag{15}\]
_Then \(\mathbb{P}\)-almost surely, for all \(x\in B_{R}\),_
\[\mathcal{H}^{-s}[U(|x|+1)-U(R+2)]\lesssim G^{\omega}_{R}(x,0)\lesssim\mathcal{H} ^{d-1}[U(|x|+1)-U(R+2)].\]
_As consequences, \(\mathbb{P}\)-almost surely, for all \(x\in\mathbb{Z}^{d}\),_
\[\mathcal{H}^{-s}\log(|x|+1)\lesssim A^{\omega}(x,0)\lesssim \mathcal{H}\log(|x|+1),\,\text{when }d=2;\] \[\mathcal{H}^{-s}(1+|x|)^{2-d}\lesssim G^{\omega}(x,0)\lesssim \mathcal{H}^{d-1}(1+|x|)^{2-d},\,\text{when }d\geq 3.\]
Recall the continuous time RWRE \((Y_{t})_{t\geq 0}\) in Definition 2. Define the semigroup \(P_{t}\), \(t\geq 0\), on \(\mathbb{R}^{\Omega}\) by
\[P_{t}\zeta(\omega)=E^{0}_{\omega}[\zeta(\theta_{Y_{t}}\omega)]= \sum_{z}p_{t}^{\omega}(0,z)\zeta(\theta_{z}\omega). \tag{16}\]
The following theorem from Guo, Tran [31] estimates the optimal speed of decorrelation of the environmental process \(\theta_{Y_{t}}\omega\) from the original environment.
**Theorem E**.: _Assume_ (A1), (A2), _and \(d\geq 3\). For any local measurable function \(\zeta:\Omega\to\mathbb{R}\) with \(\left\|\zeta\right\|_{\infty}\leq 1\) and \(t\geq 0\), we have_
\[\operatorname{Var}_{\mathrm{Q}}(P_{t}\zeta)\leq C(1+t)^{-d/2}; \tag{17}\] \[\left\|P_{t}\zeta\right\|_{1}+\left\|P_{t}\zeta-\mathbb{E}[P_{t} \zeta]\right\|_{p}\leq C_{p}(1+t)^{-d/4}\quad\text{ for all }p\in(0,2). \tag{18}\]
### Main results
The field \(\{\rho_{\omega}(x):x\in\mathbb{Z}^{d}\}\) of the invariant measure, which governs the long term behavior of the diffusion and which determines the effective PDE, plays a central role in the theory of homogenization of non-divergence form equations.
We first obtain a rate of convergence of the average \(\rho_{\omega}(B_{R})/|B_{R}|\) of the invariant measure to \(1\) as \(R\to\infty\).
**Theorem 5**.: _Assume_ (A1), (A2)_. For any \(d\geq 2,p\in(0,\frac{2}{3})\), \(t>0\) and \(R\geq 2\),_
\[\mathbb{P}\left(\left|\tfrac{\rho_{\omega}(B_{R})}{|B_{R}|}-1\right|\geq tR^{-d/2}\log R\right)\leq C_{p}\exp(-ct^{p}).\]
Note that the rate \(R^{-d/2}\log R\) is very close to the size \(R^{-d/2}\) of the diffusive scaling. In other words, to some extent the field \((\rho_{\omega}(x))_{x\in\mathbb{Z}^{d}}\) behaves quite similarly to i.i.d. random variables. Hence, we expect the rate \(R^{-d/2}\log R\) obtained here to be either optimal or nearly optimal. For non-divergence form PDEs, the volume-doubling property for the measure \(\rho_{\omega}(\cdot)\) was proved by Bauman [7]. An algebraic convergence rate \(R^{-\gamma}\) for some \(\gamma\in(0,1)\) was proved recently by Armstrong, Fehrman, Lin [2, Theorem 1.4].
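The invariant density \(\rho_{\omega}\) has no closed form, but it can be visualized in finite volume. A common heuristic (a periodization, used here only as an illustration and not in the proofs) replaces \(\mathbb{Z}^{d}\) by a discrete torus, computes the stationary distribution \(\pi\) of the walk there, and normalizes so that the empirical mean of the resulting field is \(1\), mirroring \(\mathbb{E}[\rho]=1\). A Python/NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
L, d = 32, 2
omega = rng.uniform(0.5, 1.5, size=(L, L, d))

# Transition matrix of the balanced walk on the (L x L)-torus, cf. (4)-(5).
N = L * L
P = np.zeros((N, N))
for i in range(L):
    for j in range(L):
        k, w = i * L + j, omega[i, j]
        tr2 = 2.0 * w.sum()
        P[k, ((i + 1) % L) * L + j] += w[0] / tr2
        P[k, ((i - 1) % L) * L + j] += w[0] / tr2
        P[k, i * L + (j + 1) % L] += w[1] / tr2
        P[k, i * L + (j - 1) % L] += w[1] / tr2

# Stationary distribution pi (left Perron eigenvector), normalized so that
# the torus average of the field is 1, mirroring E[rho] = 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
rho = pi / pi.mean()
print("mean and sd of the rho-field:", rho.mean(), rho.std())
```

Averaging the resulting field over balls of growing radius then exhibits the diffusive-scale concentration that Theorem 5 quantifies.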
In the course of obtaining our homogenization results in this paper, sensitivity estimates together with an Efron-Stein type inequality are used to control fluctuations of a random field around its mean. This method was used in the stochastic homogenization of divergence-form operators, e.g., [43, 28]. To facilitate this strategy, obtaining sensitivity estimates (with respect to the change of the environment) is crucial, and \(C^{1,1}\) estimates for the random equation are necessary, cf. e.g., [28, 3]. To obtain \(C^{1,1}\) regularity for the heterogeneous equation, we follow the idea of Armstrong, Lin [3], who generalized the compactness argument of Avellaneda, Lin [5] to the random non-divergence form setting.
The key observation in the proof of Theorem 5 is explained as follows. Although the invariant measure \(\rho_{\omega}(x)\) does not have an explicit expression, it can be interpreted as the long term frequency of visits to location \(x\). Hence, modifying the local value of the environment is related to the Green function of the RWRE. Guided by this intuition, we will obtain a formula for the sensitivity estimate of the invariant measure in terms of the Green function.
As indicated in Theorem 5, the field \(\{\rho_{\omega}(x)\,:\,x\in\mathbb{Z}^{d}\,\}\) is expected to have weak enough correlation so that the behavior of its mean fluctuation over \(B_{R}\) resembles (up to a logarithmic factor) that of i.i.d. random variables. The following proposition reveals the mixing property of the field of the invariant measure.
**Proposition 6**.: _Assume (A1), (A2). For any \(x,y\in\mathbb{Z}^{d}\) with \(x\neq y\),_
\[\left|\mathrm{Cov}_{\mathbb{P}}(\rho(x),\rho(y))\right|\lesssim\left\{\begin{array}{ll}|x-y|^{-1}(1+\log|x-y|),&d=2\\ |x-y|^{-d/2},&d\geq 3.\end{array}\right.\]
This is perhaps the first characterization of the correlation structure of the field of the invariant measure (with algebraic mixing rates) in a balanced environment.
Next, we derive rates of convergence for the homogenization of the Dirichlet problem.
**Theorem 7**.: _Assume (A1), (A2). Consider the Dirichlet problem_
\[\left\{\begin{array}{ll}L_{\omega}u=\frac{1}{R^{2}}f(\frac{x}{R}),&x\in B_{R },\\ u(x)=g(\frac{x}{R}),&x\in\partial B_{R}.\end{array}\right. \tag{19}\]
_Assume that \(f,g\) are both in \(C^{4}(\mathbb{R}^{d})\). For any \(0<s<\frac{d}{3d+2}\), there exists \(C=C(d,\kappa,s)\) such that, for \(R\geq 2\), there exists a random variable \(\mathcal{Z}=\mathcal{Z}(R,s,\omega)>0\) with \(\mathbb{E}[\exp(\mathcal{Z}^{s})]<C\) and_
\[\max_{x\in B_{R}}|u(x)-\bar{u}(\frac{x}{R})|\lesssim\|\bar{u}\|_{C^{4}(\bar{ \mathbb{B}}_{1})}\tau(R)\mathcal{Z},\]
_where_
\[\tau(R)=\left\{\begin{array}{ll}R^{-2/3},&d=2\\ R^{-6/7},&d=3\\ R^{-1}(\log R)^{1/4},&d=4\\ R^{-1},&d\geq 5,\end{array}\right. \tag{20}\]
_and \(\bar{u}\) is the solution of_
\[\left\{\begin{array}{ll}\frac{1}{2}\mathrm{tr}(\bar{a}D^{2}\bar{u})=f&in \ \mathbb{B}_{1},\\ \bar{u}=g&on\ \partial\mathbb{B}_{1}.\end{array}\right.\]
Thus, for \(d\geq 5\), we obtain the optimal rate of convergence for the homogenization of the Dirichlet problem, which is generically of scale \(R^{-1}\). This is consistent with the generically optimal rate \(R^{-1}\) in the periodic setting (see the classical books [8, 36] for the derivation, and [32, 45, 30] for discussions on the optimality of the rates). It is not clear to us what the optimal rates are when \(2\leq d\leq 4\); this deserves further analysis.
To prove Theorem 7, we apply the classical method of two-scale expansions and the quantitative homogenization of the approximate corrector (Lemma 21). The continuous version of Lemma 21 was proved earlier in the PDE setting by Armstrong, Lin [3].
Note that the effective matrix \(\bar{a}=E_{\mathbb{Q}}[\omega/\mathrm{tr}\omega]\) does not have an explicit expression. Even though, by Birkhoff's ergodic theorem, \(\mathbb{Q}\) can be approximated qualitatively by
\[\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}E_{\omega}[\psi(\theta_{X_{i}}\omega)]=E_{\mathbb{Q}}[\psi]\quad\mathbb{P}\text{-a.s.}\]
for any \(L^{1}\) function \(\psi\) on environments, in order to better understand the effective matrix \(\bar{a}\) it is important to quantify the speed of this convergence. To this end, we set, for \(T\geq 1\),
\[\nu(T)=\left\{\begin{array}{ll}T^{-1/2}&d=2\\ T^{-3/4}&d=3\\ T^{-1}(\log T)^{1/2}&d=4\\ T^{-1}&d\geq 5.\end{array}\right. \tag{21}\]
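Before stating the quantitative result, here is a hedged Monte Carlo sketch (Python/NumPy; the periodized environment and all names are ours) of the Birkhoff approximation above, estimating the diagonal of \(\bar{a}=E_{\mathbb{Q}}[\omega/\mathrm{tr}\omega]\). As a sanity check, the entries of the estimate should sum to \(1\), since \(\mathrm{tr}\,\bar{a}=1\).

```python
import numpy as np

rng = np.random.default_rng(4)
L, d, n = 201, 2, 50_000
omega = rng.uniform(0.5, 1.5, size=(L, L, d))    # periodized i.i.d. environment

x = np.zeros(d, dtype=int)
acc = np.zeros(d)
for _ in range(n):
    w = omega[x[0] % L, x[1] % L]
    acc += w / w.sum()                            # omega(X_i) / tr omega(X_i)
    p = np.concatenate([w, w]) / (2.0 * w.sum())  # transition probabilities (4)
    k = rng.choice(2 * d, p=p)
    x[k % d] += 1 if k < d else -1                # one step of the balanced walk

print("ergodic estimate of diag(bar a):", acc / n)   # entries sum to 1
```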
We will quantify the ergodicity of the environmental process for both the continuous- and discrete-time random walks in a balanced random environment.
**Theorem 8**.: _Assume (A1), (A2), (A3). Let \(\nu\) be as in (21). For any \(0<p<\frac{2d}{3d+2}\), there exists \(C=C(d,\kappa,p)\) such that for \(T\,,n\geq 2\) and any \(t\geq 0\),_
\[\mathbb{P}\left(\left|\tfrac{1}{T}E_{\omega}\big{[}\textstyle\int_{0}^{T}\psi(\theta_{Y_{s}}\omega)\,\mathrm{d}s\big{]}-\bar{\psi}\right|\geq t\nu(T)\|\psi\|_{\infty}\right)\leq C\exp(-\tfrac{t^{p}}{C}),\] \[\mathbb{P}\left(\left|\tfrac{1}{n}E_{\omega}\big{[}\textstyle\sum_{i=0}^{n-1}\psi(\theta_{X_{i}}\omega)\big{]}-\bar{\psi}\right|\geq t\nu(n)\|\psi\|_{\infty}\right)\leq C\exp(-\tfrac{t^{p}}{C}).\]
**Remark 9**.: Recall that Theorem E (from [31]) states that, when \(d\geq 3\), the typical size of \(P_{t}\psi-\bar{\psi}\) is of scale \(t^{-d/4}\). Observe that the typical size \(\nu(T)\) of the ergodic average in Theorem 8 satisfies (for \(T\geq 2\))
\[\nu(T)\lesssim\frac{1}{T}\int_{1}^{T}t^{-d/4}\mathrm{d}t.\]
(The sign \(\lesssim\) can be replaced by \(\asymp\) except for \(d=4\) when \(\nu(T)\) is a \(\sqrt{\log T}\) factor smaller than the right side.) Hence, Theorem 8 can be regarded as the integral version of Theorem E which holds for all \(d\geq 2\) and which has much better exponential integrability.
We remark that an (unknown) algebraic rate for the convergence of the ergodic average was obtained in [29, Theorem 1.2] in the discrete setting and recently in [2, Theorem 1.6] in the PDE setting.
As a consequence of Theorem 8, we obtain explicit convergence rates for the QCLT of the balanced random walk.
**Corollary 10**.: _Assume (A1), (A2). For any \(0<q<\frac{2d}{5(3d+2)}\), \(n\geq 2\), there exists a random variable \(\mathcal{Y}=\mathcal{Y}(\omega,q;n,\kappa,d)\) with \(\mathbb{E}[\exp(\mathcal{Y}^{q})]\leq C\) such that, \(\mathbb{P}\)-almost surely, for any unit vector \(\mathcal{L}\in\mathbb{R}^{d}\),_
\[\sup_{r\in\mathbb{R}}\Big{|}P_{\omega}\left(X_{n}\cdot\mathcal{L}/\sqrt{n}\leq r\sqrt{\mathcal{L}^{T}\bar{a}\mathcal{L}}\right)-\Phi(r)\Big{|}\leq C\nu(n)^{1/5}\mathcal{Y},\]
_where \(\Phi(r)=(2\pi)^{-1/2}\int_{-\infty}^{r}e^{-x^{2}/2}\mathrm{d}x\) for \(r\in\mathbb{R}\)._
An algebraic rate for the QCLT was proved in [29, Theorem 1.3]. We remark that for the model of random walk in random conductances, algebraic rates similar to ours were proved in [1] for dimensions \(d\geq 3\).
## 2 Large scale \(C^{0,1}\) and \(C^{1,1}\) estimates
In this section, we apply the ideas of Avellaneda, Lin [5, 6] in the periodic setting to the discrete random setting. The key idea is quite intuitive and natural: large-scale solutions of \(L_{\omega}u(x)=\psi(\theta_{x}\omega)+f(x)\) are well-approximated by those of the homogenized equation with an algebraic rate, thanks to Proposition C. As the latter are harmonic, they possess rather nice estimates (see Proposition 16 below on the scaling property). Therefore, by iterations, scalings, and the triangle inequality, the better regularity of the homogenized equation is inherited by the heterogeneous equation. It is crucial to note two points here. Firstly, in each iteration step, the scaling is done by using the nice estimates in Proposition 16 of the homogenized limit, and the triangle inequality and Proposition C are used to pass this estimate to the solution \(u\) of the heterogeneous equation. Secondly, in the random setting, one can only go down to radii greater than the homogenization radius in the iterations, which therefore gives us only large scale estimates. The generalization of this idea to the random non-divergence form PDE setting was first done by Armstrong, Lin [3], who made the observation that an algebraic rate is sufficient for such an iteration.
The main result in this section, Theorem 14, can be considered as a discrete version of Armstrong, Lin [3, Theorem 3.1,Corollary 3.4] in terms of the large scale \(C^{0,1}\) and \(C^{1,1}\) regularity. We remark that for \(\omega\)-harmonic functions in a genuinely \(d\)-dimensional balanced environment, a \(C^{0,1-\epsilon}\) regularity was achieved by Berger, Cohen, Deuschel, Guo [9, Corollary 1.4] using coupling arguments.
As can be seen in the following Subsection 2.1, this sort of compactness argument, although it is applied to the random setting here, is deterministic at its core.
### Some regularity properties of deterministic functions
This subsection contains the key tools for the compactness arguments used in our paper. It is completely deterministic and can be read independently of other parts of the paper. The lemmas presented here concern large scale \(C^{k,1}\), \(k\geq 0\), properties of deterministic functions.
For any function \(f\) on a set \(A\) and \(\alpha\in(0,1]\), define
\[\operatorname*{osc}_{A}f\,:=\sup_{x,y\in A}|f(x)-f(y)|,\qquad[f]_{\alpha;A}= \sup_{x,y\in A,x\neq y}\frac{|f(x)-f(y)|}{|x-y|^{\alpha}},\]
and, if \(A\) is a finite set, for \(p\in(0,\infty)\), we define
\[\left\|f\right\|_{p;A}\,:=\left(\frac{1}{\#A}\sum_{x\in A}|f|^{p}\right)^{1/p},\quad\left\|f\right\|_{\infty;A}=\max_{x\in A}|f(x)|.\]
For any \(j\geq 0\), let \(\mathrm{H}_{j}\) denote the set of \(j\)-th order polynomials, with \(\mathrm{H}_{0}=\mathbb{R}\). In fact, in our paper we will only use the cases \(j=0,1,2\).
Define, for function \(f\,:\,\mathbb{R}^{d}\to\mathbb{R}\) and a bounded set \(A\subset\mathbb{R}^{d}\), \(j\geq 1\),
\[\mathcal{D}_{A}^{j}(f)=\inf_{p\in\mathrm{H}_{j-1}}\sup_{A}|f-p|=\frac{1}{2} \inf_{p\in\mathrm{H}_{j-1}}\operatorname*{osc}_{A}(f-p). \tag{22}\]
\(\mathcal{D}_{A}^{j}\) satisfies the triangle inequality. Namely, \(\mathcal{D}_{A}^{j}(f\pm g)\leq\mathcal{D}_{A}^{j}(f)+\mathcal{D}_{A}^{j}(g)\). When \(A=B_{R}\) is the discrete ball, \(R>0\), we simply write
\[\mathcal{D}_{R}^{j}\,:=\mathcal{D}_{B_{R}}^{j}.\]
Note that for \(j\geq 1\), the normalized quantity
\[\mathbb{D}_{R}^{j}(f)\,:=\frac{\mathcal{D}_{R}^{j}(f)}{R^{j}} \tag{23}\]
is a large scale analogue of the \(j\)-th order derivative.
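As a concrete illustration (ours, not from the paper), \(\mathcal{D}_{R}^{2}(f)\) can be estimated numerically: the infimum over affine \(p\in\mathrm{H}_{1}\) in (22) is a Chebyshev fit, which the least-squares affine fit below upper-bounds (Python/NumPy assumed).

```python
import numpy as np

rng = np.random.default_rng(6)
R = 30
pts = np.array([(i, j) for i in range(-R, R + 1) for j in range(-R, R + 1)
                if i * i + j * j < R * R])          # discrete ball B_R, d = 2
f = 0.3 * pts[:, 0] ** 2 + pts[:, 0] - 2.0 * pts[:, 1] \
    + rng.normal(0, 0.1, len(pts))                  # quadratic + affine + noise

# D^2_R(f) = inf over affine p of (1/2) osc_{B_R}(f - p), cf. (22); the
# least-squares affine fit gives an upper bound for the Chebyshev fit.
A = np.c_[np.ones(len(pts)), pts]                   # basis of H_1: 1, x_1, x_2
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
resid = f - A @ coef
D2 = 0.5 * (resid.max() - resid.min())
print("D^2_R(f) ~", D2, "  normalized D^2_R(f)/R^2 ~", D2 / R**2)
```

Here the normalized output approximates \(\mathbb{D}_{R}^{2}(f)\), which for the sample above recovers (up to the noise) the coefficient of the quadratic part of \(f\).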
For any \(r>0\), \(\theta\in(0,\frac{1}{3})\), define a sequence of exponentially increasing radii \((r_{k})_{k\geq 0}\) by
\[r_{k}=r_{k}(r,\theta)\,:=\theta^{-k}r,\quad k\geq 0.\]
The following elementary lemma confirms the intuition that "the integral of the \((j+1)\)-th derivative is the \(j\)-th derivative".
**Lemma 11**.: _For any function \(f\,:\,\mathbb{Z}^{d}\to\mathbb{R}\) and \(r>0,\theta\in(0,\frac{1}{3}),n\in\mathbb{N},j\geq 1\),_
\[\mathbb{D}_{r_{0}}^{j}(f)\leq\mathbb{D}_{r_{n}}^{j}(f)+3\theta^{-j}\sum_{k=0} ^{n}r_{k}\mathbb{D}_{r_{k}}^{j+1}(f).\]
Proof.: For any \(j\)-th order homogeneous polynomials \(p,q\in\mathrm{H}_{j}\), \(R>r>0\), by the triangle inequality,
\[\mathcal{D}_{r}^{j}(p) \leq\mathcal{D}_{r}^{j}(q)+\mathcal{D}_{r}^{j}(p-q)\] \[\leq(\tfrac{r}{R})^{j}\mathcal{D}_{R}^{j}(q)+\mathcal{D}_{r}^{j}( f-p)+\mathcal{D}_{r}^{j}(f-q)\]
where in the second inequality we used the fact that \(\mathcal{D}_{r}^{j}(q)=(\tfrac{r}{R})^{j}\mathcal{D}_{R}^{j}(q)\) for all \(j\)-th order homogeneous polynomial \(q\). Hence, by the inequality above,
\[\mathcal{D}_{r}^{j}(f) \leq\mathcal{D}_{r}^{j}(f-p)+\mathcal{D}_{r}^{j}(p)\] \[\leq 2\mathcal{D}_{r}^{j}(f-p)+\mathcal{D}_{r}^{j}(f-q)+(\tfrac{r }{R})^{j}\mathcal{D}_{R}^{j}(q)\] \[\leq 2\mathcal{D}_{r}^{j}(f-p)+\mathcal{D}_{r}^{j}(f-q)+( \tfrac{r}{R})^{j}[\mathcal{D}_{R}^{j}(f)+\mathcal{D}_{R}^{j}(f-q)]\] \[\leq 2[\mathcal{D}_{r}^{j}(f-p)+\mathcal{D}_{R}^{j}(f-q)]+( \tfrac{r}{R})^{j}\mathcal{D}_{R}^{j}(f).\]
Taking infimum over all \(j\)-th order homogeneous polynomials \(p,q\in\mathrm{H}_{j}\), we get
\[\mathcal{D}_{r}^{j}(f)\leq 2[\mathcal{D}_{r}^{j+1}(f)+\mathcal{D}_{R}^{j+1}( f)]+(\tfrac{r}{R})^{j}\mathcal{D}_{R}^{j}(f).\]
Replacing \(r\), \(R\) by \(r_{k},r_{k+1}\), and using notation (23), the above inequality yields
\[\mathbb{D}_{r_{k}}^{j}(f)-\mathbb{D}_{r_{k+1}}^{j}(f)\leq 2[r_{k}\mathbb{D}_{r_ {k}}^{j+1}(f)+\theta^{-j}r_{k+1}\mathbb{D}_{r_{k+1}}^{j+1}(f)].\]
Summing both sides over \(k=0,\dots,n-1\), the lemma is proved.
The following lemma will be crucially employed later in our derivation of large scale regularity estimates in Subsection 2.2.
**Lemma 12**.: _Let \(j\geq 1\), \(m\in\mathbb{N}\), \(r,\alpha>0\). Let \(A_{r}\geq 0\) be a constant depending on \(r\). If for \(f\,:\,\mathbb{Z}^{d}\to\mathbb{R}\), \(k=0,\dots,m-1\), and all \(\theta\in(0,\tfrac{1}{3})\),_
\[\mathcal{D}_{r_{k}}^{j+1}(f)\lesssim_{j}\theta^{j+1}\mathcal{D}_{r_{k+1}}^{j+1 }(f)+r_{k+1}^{-\alpha}\mathcal{D}_{r_{k+1}}^{j}(f)+r_{k+1}^{j}A_{r_{k+1}}, \tag{24}\]
_then there exists \(\theta=\theta(j),N=N(j,\alpha)\) such that, for \(N\leq r\leq R\leq r_{m}\),_
\[\mathcal{D}_{r}^{j}(f)\leq 13\theta^{-2j}\left(\tfrac{r}{R}\right)^{j} \mathcal{D}_{R}^{j}(f)+\sum_{k\geq 1\,:\,r_{k}\leq R}A_{r_{k}}.\]
Proof.: For the simplicity of notations, we suppress the dependency on \(f\). Let \(n=n(R,\theta)\leq m\) be such that \(r_{n}\leq R<r_{n+1}\). Display (24) is equivalent to
\[r_{k}\mathbb{D}_{r_{k}}^{j+1}\lesssim_{j}\theta r_{k+1}\mathbb{D}_{r_{k+1}}^{j +1}+\theta^{-j}r_{k+1}^{-\alpha}\mathbb{D}_{r_{k+1}}^{j}+\theta^{-j}A_{r_{k+1 }}.\]
Summing this inequality over \(k=0,\dots,n-1\), we have
\[\sum_{k=0}^{n-1}r_{k}\mathbb{D}_{r_{k}}^{j+1}\lesssim_{j}\theta\sum_{k=1}^{n} r_{k}\mathbb{D}_{r_{k}}^{j+1}+\theta^{-j}\sum_{k=1}^{n}r_{k}^{-\alpha}\mathbb{D}_{r_{k}} ^{j}+\theta^{-j}\sum_{k=1}^{n}A_{r_{k}}. \tag{25}\]
Moreover, by Lemma 11, \(\mathbb{D}^{j}_{r_{k}}\leq\mathbb{D}^{j}_{r_{n}}+3\theta^{-j}\sum_{\ell=k}^{n}r_{ \ell}\mathbb{D}^{j+1}_{r_{\ell}}\). Hence
\[\sum_{k=1}^{n}r_{k}^{-\alpha}\mathbb{D}^{j}_{r_{k}} \leq\sum_{k=1}^{n}r_{k}^{-\alpha}\left(\mathbb{D}^{j}_{r_{n}}+3 \theta^{-j}\sum_{\ell=k}^{n}r_{\ell}\mathbb{D}^{j+1}_{r_{\ell}}\right)\] \[\leq C_{\alpha}r^{-\alpha}\mathbb{D}^{j}_{r_{n}}+C_{\alpha}\theta ^{-j}r^{-\alpha}\sum_{\ell^{\prime}=1}^{n}r_{\ell^{\prime}}\mathbb{D}^{j+1}_{ r_{\ell}}, \tag{26}\]
where \(C_{\alpha}=(1-3^{-\alpha})^{-1}\). Choosing \(\theta=\theta(j)\in(0,\frac{1}{3})\) sufficiently small, when \(r\geq N\) for some \(N=N(j,\alpha)\), we get from (25) and (26) that
\[\sum_{k=0}^{n-1}r_{k}\mathbb{D}^{j+1}_{r_{k}}\leq\frac{1}{2}\sum_{k=1}^{n}r_{k}\mathbb{D}^{j+1}_{r_{k}}+\mathbb{D}^{j}_{r_{n}}+C_{j}\theta^{-j}\sum_{k=1}^{n}A_{r_{k}}\]
which implies (Note that \(r_{n}\mathbb{D}^{j+1}_{r_{n}}\leq\mathbb{D}^{j}_{r_{n}}\))
\[\sum_{k=0}^{n}r_{k}\mathbb{D}^{j+1}_{r_{k}}\leq 4\mathbb{D}^{j}_{r_{n}}+C_{j} \theta^{-j}\sum_{k=1}^{n}A_{r_{k}}.\]
This inequality, together with Lemma 11, yields for \(r\geq N\), \(\theta=\theta(j)\in(0,\frac{1}{3})\),
\[\mathbb{D}^{j}_{r_{0}}\leq 13\theta^{-j}\mathbb{D}^{j}_{r_{n}}+C_{j}\theta^{-2j} \sum_{k=1}^{n}A_{r_{k}}\leq 13\theta^{-2j}\mathbb{D}^{j}_{R}+\sum_{k=1}^{n}A _{r_{k}}.\]
The lemma is proved.
**Remark 13**.: In this subsection we considered \(f\) as a function on \(\mathbb{Z}^{d}\) and defined \(\mathrm{H}_{j}\) to be the set of \(j\)-th order polynomials just for our convenience. One may let \(f\) be a function on \(\mathbb{R}^{d}\) and redefine the \(\mathrm{H}_{j}\)'s to be other subspaces of the polynomials (e.g., the set of harmonic polynomials), and Lemmas 11, 12 still hold.
### Large scale regularity
The goal of this section is to apply Lemma 12 to obtain \(C^{0,1}\) and \(C^{1,1}\) regularities for the heterogeneous equations in our random setting.
Define operators \(\nabla=(\nabla_{i})_{1\leq i\leq d}\) and \(\nabla^{*}=(\nabla^{*}_{i})_{1\leq i\leq d}\) by
\[\nabla_{i}u(x)=u(x+e_{i})-u(x),\quad\nabla^{*}_{i}u(x)=u(x-e_{i})-u(x).\]
Note that \(\nabla_{i}\) and \(\nabla^{*}_{i}\) are adjoint linear operators, and \(\nabla^{2}_{i}=-\nabla_{i}\nabla^{*}_{i}\).
**Theorem 14**.: _Assume (A1), (A2), and that \(\psi\) is a local function. Let \(R\geq 1\). There exists \(\alpha=\alpha(d,\kappa)\in(0,\frac{1}{3})\) such that, for any \(u\) with \(L_{\omega}u(x)=\psi(\theta_{x}\omega)+f(x)\) on \(B_{R}\), \(j\in\{1,2\}\), \(\mathcal{H}\leq r<R\),_
\[\frac{1}{r^{j}}\inf_{p\in\mathrm{H}_{j-1}}\operatorname*{osc}_{B_{r}}(u-p) \lesssim\frac{1}{R^{j}}\inf_{p\in\mathrm{H}_{j-1}}\operatorname*{osc}_{B_{R}}( u-p)+A_{j}, \tag{27}\]
_where the terms \(A_{j}=A_{j}(R,r)\) have the following bounds (for any \(\sigma\in(0,1]\))_
\[A_{1} \leq R^{1-\alpha}\|\psi\|_{\infty}+R\|f\|_{\infty}\text{ and }A_{1} \leq R^{1-\alpha}\|\psi+f(0)\|_{\infty}+R^{1+\sigma}[f]_{\sigma;B_{R}},\] \[A_{2} \leq r^{-\alpha}\|\psi\|_{\infty}+\log(\tfrac{R}{r})\|f\|_{\infty} \text{ and }A_{2}\leq r^{-\alpha}\|\psi+f(0)\|_{\infty}+R^{\sigma}[f]_{\sigma;B_{R}}.\]
_In particular, recalling the operator \(\nabla_{i}^{2}\) in (1), for any \(R>1\), \(j=1,2\),_
\[|\nabla^{j}u(0)|\lesssim(\frac{\mathcal{H}}{R})^{j}\left(\|u\|_{1;B_{R}}+R^{2 }\|\psi+f(0)\|_{\infty}+R^{2+\sigma}[f]_{\sigma;B_{R}}\right) \tag{28}\]
As a consequence of (28), any \(\omega\)-harmonic function on \(\mathbb{Z}^{d}\) with sublinear growth is a constant. That is, if \(L_{\omega}u=0\) on \(\mathbb{Z}^{d}\) and \(\max_{B_{R}}|u|=o(R)\) as \(R\to\infty\), then \(u\) is constant. To prove Theorem 14, it suffices to prove the following lemma.
**Lemma 15**.: _There exists \(\gamma=\gamma(d,\kappa)\) such that, for \(R\geq\mathcal{H}\), \(\theta\in(0,\tfrac{1}{3})\), \(1\leq j\leq 3\) and any \(u\) with \(L_{\omega}u(x)=\psi(\theta_{x}\omega)+f(x)\) for \(x\in B_{R}\), we have_
\[\mathcal{D}^{j}_{\theta R}(u)\lesssim R^{-\gamma\beta}\mathcal{D}^{2}_{R}(u)+ \theta^{j}\mathcal{D}^{j}_{R}(u)+R^{2-\gamma\beta}\|\psi\|_{\infty}+R^{2}\|f \|_{d;B_{R}}.\]
The proof of Lemma 15 uses the following fact of deterministic harmonic functions. For completeness, we include its proof in Section A.1 of the Appendix.
**Proposition 16**.: _Recall the notation in (22). Let \(c_{0}\) be a constant. Let \(v\) be a function satisfying \(\operatorname{tr}\bar{a}D^{2}v=c_{0}\) in \(\bar{\mathbb{B}}_{R}\). Then, for \(\theta\in(0,\tfrac{1}{3})\), \(j\in\{1,2,3\}\) and \(R\geq 1\),_
\[\mathcal{D}^{j}_{\bar{\mathbb{B}}_{\theta R}}(v)\leq C\theta^{j}\mathcal{D}^{ j}_{\bar{\mathbb{B}}_{R/2}}(v). \tag{29}\]
_We also have_
\[\mathcal{D}^{j}_{\bar{\mathbb{B}}_{\theta R}}(v)\lesssim\tfrac{\theta^{j}}{R}(\sup_{\partial\mathbb{B}_{2R/3}}|v|+R^{2}|c_{0}|)+\theta^{j}\mathcal{D}^{j}_{\bar{\mathbb{B}}_{2R/3}}(v). \tag{30}\]
**Remark 17**.: Property (29) for deterministic harmonic functions (\(c_{0}=0\)) was used in [3, Lemma 3.3] to obtain regularities of the heterogeneous solution in the PDE setting. Compared to [3, Corollary 3.4], here by allowing \(c_{0}\neq 0\) we will gain a tiny improvement for the coefficient of \(\|\psi+f(0)\|_{\infty}\) in the \(C^{0,1}\) estimate by an \(R^{-\alpha}\) factor (cf. Theorem 14). Note that in the discrete setting, we will need (30) as well because of discretization. It will also become clear later in Section 3 that the \(\log R\) factor in the bound of \(A_{2}\) will help us achieve the \(\log R\) factor in Theorem 5.
Proof of Lemma 15.: By the Hölder estimate of Krylov-Safonov, there exists \(\gamma=\gamma(d,\kappa)>0\) such that, for \(r\in(0,R)\),
\[\operatorname*{osc}_{B_{r}}u\lesssim\left(\tfrac{r}{R}\right)^{\gamma}( \operatorname*{osc}_{B_{R}}u+R^{2}\|\psi+f\|_{d;B_{R}}). \tag{31}\]
Note that this allows us to extend \(u\) to be a function \(\tilde{u}\in C^{\gamma}(\mathbb{R}^{d})\) with \([\tilde{u}]_{\gamma;\mathbb{R}^{d}}=[u]_{\gamma;\bar{B}_{2R/3}}\). Indeed, define the function \(\tilde{u}\) as
\[\tilde{u}(x)=\min_{y\in\bar{B}_{2R/3}}\left\{u(y)+|x-y|^{\gamma}[u]_{\gamma;\bar{B}_{2R/3}}\right\}.\]
It is straightforward to check that \(\tilde{u}=u\) in \(\bar{B}_{2R/3}\) and \([\tilde{u}]_{\gamma;\mathbb{R}^{d}}\leq[u]_{\gamma;\bar{B}_{2R/3}}\). By (31),
\[[\tilde{u}]_{\gamma;\mathbb{R}^{d}}=[u]_{\gamma;\bar{B}_{2R/3}}\lesssim R^{- \gamma}(\max_{B_{R}}|u|+R^{2}\|\psi+f\|_{d;B_{R}}). \tag{32}\]
Let \(\tilde{v}:\bar{\mathbb{B}}_{2/3}\to\mathbb{R}\) be the solution of
\[\left\{\begin{array}{ll}\frac{1}{2}\mathrm{tr}(\bar{a}D^{2}\tilde{v})=R^{2}\bar{\psi}&\text{in }\mathbb{B}_{2/3}\\ \tilde{v}(x)=\tilde{u}(Rx)&\text{for }x\in\partial\mathbb{B}_{2/3}.\end{array}\right.\]
First, write \(A:=R^{2-\gamma\beta}\|\psi\|_{\infty}+R^{2}\|f\|_{d;B_{R}}\). We will show that, for \(R\geq\mathcal{H}\),
\[\max_{x\in B_{2R/3}}|u(x)-\tilde{v}(\frac{x}{R})|\lesssim R^{-\gamma\beta} \max_{B_{R}}|u|+A. \tag{33}\]
To this end, let \(u_{1}:\bar{B}_{2R/3}\to\mathbb{R}\) be the solution of
\[\left\{\begin{array}{ll}L_{\omega}u_{1}=\psi(\theta_{x}\omega)&\text{in }B_{2R/3}\\ u_{1}(x)=\tilde{u}(\frac{2Rx}{3|x|})&x\in\partial B_{2R/3}.\end{array}\right.\]
By Proposition C and (32), when \(R\geq\mathcal{H}\), noting that \([\tilde{u}(R\cdot)]_{\gamma;\mathbb{R}^{d}}\leq R^{\gamma}[\tilde{u}(\cdot)]_ {\gamma;\mathbb{R}^{d}}\),
\[\max_{x\in B_{2R/3}}|u_{1}(x)-\tilde{v}(\frac{x}{R})| \lesssim R^{-\gamma\beta}(\left[\tilde{u}(R\cdot)\right]_{\gamma; \mathbb{R}^{d}}+R^{2}\|\psi\|_{\infty})\] \[\lesssim R^{-\gamma\beta}(\max_{B_{R}}|u|+R^{2}\|\psi\|_{\infty}+R^{2}\|f \|_{d;B_{R}}).\]
Moreover, by the ABP maximum principle,
\[\begin{split}\max_{B_{2R/3}}|u-u_{1}|&\leq\max_{x\in\partial B_{2R/3}}|u(x)-\tilde{u}(\tfrac{2Rx}{3|x|})|+CR^{2}\|f\|_{d;B_{R}}\\ &\lesssim[\tilde{u}]_{\gamma;\mathbb{R}^{d}}+R^{2}\|f\|_{d;B_{R}}\\ &\overset{(32)}{\lesssim}R^{-\gamma}(\max_{B_{R}}|u|+R^{2}\|\psi\|_{\infty})+R^{2}\|f\|_{d;B_{R}}.\end{split}\]
Combining the two inequalities above, display (33) is proved.
By the triangle inequality and Proposition 16, for \(1\leq j\leq 3\),
\[\begin{split}\mathcal{D}^{j}_{\theta R}(u)&\leq\max_{B_{R/2}}|u-\tilde{v}(\tfrac{\cdot}{R})|+\mathcal{D}^{j}_{\theta R}(\tilde{v}(\tfrac{\cdot}{R}))\\ &\leq\max_{B_{R/2}}|u-\tilde{v}(\tfrac{\cdot}{R})|+\frac{C\theta^{j}}{R}\big{(}\sup_{\partial\mathbb{B}_{2/3}}|\tilde{v}|+R^{2}|\bar{\psi}|\big{)}+C\theta^{j}\mathcal{D}^{j}_{2R/3}(\tilde{v}(\tfrac{\cdot}{R}))\\ &\lesssim\max_{B_{2R/3}}|u-\tilde{v}(\tfrac{\cdot}{R})|+\frac{\theta^{j}}{R}\big{(}\sup_{\partial\mathbb{B}_{2/3}}|\tilde{v}|+R^{2}|\bar{\psi}|\big{)}+\theta^{j}\mathcal{D}^{j}_{2R/3}(u).\end{split} \tag{34}\]
Since \(\sup_{\partial\mathbb{B}_{2/3}}|\tilde{v}|=\sup_{\partial\mathbb{B}_{2R/3}}| \tilde{u}|\leq\max_{B_{2R/3}}|u|+[\tilde{u}]_{\gamma;\bar{B}_{2R/3}}\leq\max _{B_{R}}|u|+A\), by (33) and (34), we have, for \(1\leq j\leq 3\),
\[\mathcal{D}^{j}_{\theta R}(u)\lesssim R^{-\gamma\beta}\max_{B_{R}}|u|+A+ \theta^{j}\mathcal{D}^{j}_{R}(u).\]
Finally, note that since every \(p\in\mathrm{H}_{1}\) is \(\omega\)-harmonic, \((u-p)\) still solves \(L_{\omega}(u-p)=\psi(\theta_{x}\omega)+f(x)\) for \(x\in B_{R}\). Therefore, substituting \(u\) by \((u-p)\) in the above inequality and optimizing over \(p\in\mathrm{H}_{1}\), the lemma follows.
Proof of Theorem 14.: By Lemma 15 and Lemma 12, there exists \(\theta=\theta(d,\kappa)\in(0,\frac{1}{3})\) such that (27) holds with the terms \(A_{j}\), \(j\in\{1,2\}\) satisfying
\[A_{j}=\sum_{k\geq 1:r_{k}\leq R}r_{k}^{2-\alpha-j}\|\psi\|_{\infty}+r_{k}^{2-j} \|f\|_{d;B_{r_{k}}}.\]
Note that \(\|f-f(0)\|_{d;B_{r}}\lesssim r^{\sigma}[f]_{\sigma;B_{r}}\) for all \(\sigma\in(0,1]\). The bounds of \(A_{1}\), \(A_{2}\) in the theorem follow immediately.
To prove (28), note that \(|\nabla u(0)|\leq\operatorname{osc}_{\bar{B}_{1}}u\) and that for any \(\ell\in\mathrm{H}_{1}\), \(|\nabla^{2}u(0)|=|\nabla^{2}(u-\ell)(0)|\lesssim\operatorname{osc}_{\bar{B}_{1}}(u-\ell)\). Hence, by (27), we get
\[|\nabla^{j}u(0)|\lesssim(\frac{\mathcal{H}}{R})^{j}\left(\operatorname*{osc}_ {B_{R/2}}u+R^{2}\|\psi+f(0)\|_{\infty}+R^{2+\sigma}[f]_{\sigma;B_{R/2}}\right).\]
By the Harnack inequality and the ABP inequality, we have
\[\operatorname*{osc}_{B_{R/2}}u\lesssim\|u-u_{B_{R}}\|_{1;B_{R}}+R^{2}\|\psi+f \|_{d;B_{R}}. \tag{35}\]
Display (28) follows by using again \(\|f-f(0)\|_{d;B_{R}}\lesssim R^{\sigma}[f]_{\sigma;B_{R}}\) for \(\sigma\in(0,1]\).
## 3 Mixing properties of the invariant measure
The goal of this section is to investigate the mixing properties of the field \(\{\rho_{\omega}(x):x\in\mathbb{Z}^{d}\}\) of the invariant measure. We will obtain a rate of convergence (Theorem 5) of the average of the invariant measure over balls \(B_{R}\). We will also quantify the correlation of the field (Proposition 6).
The Efron-Stein inequality (38) of Boucheron, Bousquet, and Massart [14] will be used in our derivation of quantitative estimates.
Let \(\omega^{\prime}(x),x\in\mathbb{Z}^{d}\), be i.i.d. copies of \(\omega(x),x\in\mathbb{Z}^{d}\). For any \(y\in\mathbb{Z}^{d}\), let \(\omega^{\prime}_{y}\in\Omega\) be the environment such that
\[\omega^{\prime}_{y}(x)=\left\{\begin{array}{rl}&\omega(x)\quad\text{if }x \neq y,\\ &\omega^{\prime}(y)\quad\text{if }x=y.\end{array}\right.\]
That is, \(\omega^{\prime}_{y}\) is a modification of \(\omega\) only at location \(y\). For any measurable function \(Z\) of the environment \(\omega\), we write, for \(y\in\mathbb{Z}^{d}\),
\[Z^{\prime}_{y}=Z(\omega^{\prime}_{y}),\quad\partial^{\prime}_{y}Z(\omega)=Z^{ \prime}_{y}-Z, \tag{36}\]
and set
\[V(Z)=\sum_{y\in\mathbb{Z}^{d}}(\partial^{\prime}_{y}Z)^{2}. \tag{37}\]
With abuse of notations, we enlarge the probability space and still use \(\mathbb{P}\) to denote the distribution of both \(\omega,\omega^{\prime}\).
The \(L^{p}\) version of the Efron-Stein inequality in [14, Theorem 3] states that, for \(q\geq 2\),
\[\mathbb{E}[|Z-\mathbb{E}Z|^{q}]\leq Cq^{q/2}\mathbb{E}[V^{q/2}]. \tag{38}\]
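For intuition, the \(q=2\) case can be checked by simulation: with independently resampled coordinates as in (36)-(37), the classical Efron-Stein inequality reads \(\operatorname{Var}(Z)\leq\frac{1}{2}\mathbb{E}[V(Z)]\). A minimal Python/NumPy sketch (ours, for a toy statistic \(Z\)):

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 50, 4000

def Z(x):
    return x.max()                      # a simple statistic of n coordinates

zs, vs = [], []
for _ in range(trials):
    x = rng.random(n)
    xp = rng.random(n)                  # independent copies omega'(y)
    z = Z(x)
    v = 0.0
    for y in range(n):
        xy = x.copy(); xy[y] = xp[y]    # resample coordinate y, as in (36)
        v += (Z(xy) - z) ** 2           # (partial'_y Z)^2, summed into V(Z)
    zs.append(z); vs.append(v)

zs, vs = np.array(zs), np.array(vs)
print("Var(Z)    ~", zs.var())
print("E[V(Z)]/2 ~", vs.mean() / 2)     # Efron-Stein: Var(Z) <= E[V(Z)]/2
```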
### A sensitivity estimate of the invariant measure
The main contribution of this subsection is a formula for the "vertical" derivative of the invariant measure \(\rho\).
**Definition 18**.: For \(t>0\), we let \(V(t,\omega)=\sum_{x\in\mathbb{Z}^{d}}p_{t}^{\omega}(x,0)\). Let \(\mathbb{Q}_{t}\) be the probability measure on \(\Omega\) defined by
\[\mathbb{Q}_{t}(\mathrm{d}\omega)=V(t,\omega)\mathbb{P}(\mathrm{d}\omega).\]
We remark that by Theorem B,
\[V(t,\omega)\lesssim\mathcal{H}^{d-1}\quad\text{ for }\mathbb{P}\text{-a.e. }\omega \tag{39}\]
and so \(\mathbb{Q}_{t}\) is well-defined. Note that for any bounded measurable function \(\zeta\) on \(\Omega\), we have \(E_{\mathbb{Q}_{t}}[\zeta]=\mathbb{E}[P_{t}\zeta]\). In other words, \(\mathbb{Q}_{t}\) is the distribution of the environment viewed from the particle at time \(t\). It is natural to expect that \(\mathbb{Q}_{t}\to\mathbb{Q}\) as \(t\to\infty\).
For any function \(u\) of the environment, we denote by \(u_{\omega}\) the corresponding function on \(\mathbb{Z}^{d}\) defined by \(u_{\omega}(x)\,:=u(\theta_{x}\omega)\).
**Lemma 19**.: _As \(t\to\infty\), \(\mathbb{Q}_{t}\) converges weakly to \(\mathbb{Q}\)._
Proof.: Since \(\{\mathbb{Q}_{t}\}\) is a sequence of probability measures on the compact space \(\Omega\), it has a weakly convergent subsequence \(\{\mathbb{Q}_{t_{k}}\}\) with weak limit \(\mathbb{Q}_{\infty}\).
To prove that \(\mathbb{Q}_{\infty}\) is an invariant measure for the Markov chain \((\theta_{Y_{t}}\omega)\), it suffices to show that for any bounded measurable function \(f\) on \(\Omega\),
\[E_{\mathbb{Q}_{\infty}}[L_{\omega}f_{\omega}(0)]=0.\]
Indeed, by the translation invariance of the measure \(\mathbb{P}\), for any \(e\) with \(|e|=1\),
\[\mathbb{E}[\omega(0,e)V(t,\omega)f(\theta_{e}\omega)] =\mathbb{E}[\omega(-e,0)V(t,\theta_{-e}\omega)f(\omega)]\] \[=\mathbb{E}[\rho_{\omega}(0)\omega^{*}(0,-e)\bar{V}(t,\theta_{-e }\omega)f(\omega)], \tag{40}\]
where \(\omega^{*}(x,y)\,:=\rho_{\omega}(y)\omega(y,x)/\rho_{\omega}(x)\) denotes the _adjoint_ of \(\omega\), cf. e.g., [22], and \(\bar{V}(t,\omega)\,:=V(t,\omega)/\rho(\omega)\). Noting that \(\sum_{y}\omega^{*}(x,y)=1\), we have
\[E_{\mathbb{Q}_{t}}[L_{\omega}f_{\omega}(0)] =\mathbb{E}[V(t,\omega)\,\sum_{e}\omega(0,e)[f(\theta_{e}\omega)- f(\omega)]]\] \[\overset{(\ref{eq:def})}{=}\mathbb{E}[\rho_{\omega}(0)\,\sum_{e }\omega^{*}(0,e)[\bar{V}(t,\theta_{e}\omega)-\bar{V}(t,\omega)]f(\omega)]\] \[=E_{\mathbb{Q}}[f(\omega)L_{\omega^{*}}\bar{V}_{\omega}(t,0)], \tag{41}\]
where \(\bar{V}_{\omega}(t,x):=\bar{V}(t,\theta_{x}\omega)\), and \(L_{\omega^{*}}\) only acts on the spatial (\(\mathbb{Z}^{d}\)) coordinate of the function \(\bar{V}_{\omega}:\mathbb{R}\times\mathbb{Z}^{d}\to\mathbb{R}\) of space and time. Observe that \(\bar{V}_{\omega}\) solves the parabolic equation
\[(\partial_{t}-L_{\omega^{*}})\bar{V}_{\omega}=0\quad\text{ in }(0,\infty)\times\mathbb{Z}^{d}.\]
By the Hölder estimate [22, Corollary 7] and the Harnack inequality [22, Theorem 6] for the operator \((\partial_{t}-L_{\omega^{*}})\), there exists \(\gamma=\gamma(d,\kappa)>0\) such that, for \(t>1\),
\[\max_{e:\,|e|=1}|\bar{V}_{\omega}(t,e)-\bar{V}_{\omega}(t,0)|\lesssim t^{-\gamma}\sup_{(s,x)\in(0.5t,t)\times B_{\sqrt{t}}}\bar{V}_{\omega}(s,x)\lesssim t^{-\gamma}\bar{V}_{\omega}(2t,0)\overset{(39)}{\lesssim}t^{-\gamma}\rho^{-1}\mathcal{H}^{d-1}. \tag{42}\]
Thus, by (41), \(|E_{\mathbb{Q}_{t}}[L_{\omega}f_{\omega}(0)]|\lesssim t^{-\gamma}\mathbb{E}[\mathcal{H}^{d-1}]\|f\|_{\infty}\lesssim t^{-\gamma}\|f\|_{\infty}\). In particular, for any bounded measurable function \(f\) on \(\Omega\),
\[E_{\mathbb{Q}_{\infty}}[L_{\omega}f_{\omega}(0)]=\lim_{k\to\infty}E_{\mathbb{Q }_{t_{k}}}[L_{\omega}f_{\omega}(0)]=0\]
which implies that \(\mathbb{Q}_{\infty}\) is an invariant measure for the Markov chain \((\theta_{Y_{t}}\omega)\). Moreover, for any bounded measurable function \(f:\Omega\to\mathbb{R}\) and \(p>0\),
\[E_{\mathbb{Q}_{t}}[f]=\mathbb{E}[V_{\omega}(t,0)f(\omega)]\lesssim\mathbb{E} [\mathcal{H}^{d-1}f]\lesssim_{p}\|f\|_{L^{p}(\mathbb{P})},\]
and so \(E_{\mathbb{Q}_{\infty}}[f]\lesssim_{p}\|f\|_{L^{p}(\mathbb{P})}\), which implies \(\mathbb{Q}_{\infty}\ll\mathbb{P}\). Therefore, by the same argument as in [33, (4)], we have \(\mathbb{Q}_{\infty}=\mathbb{Q}\).
Before stating the formula for \(\partial_{y}^{\prime}\rho\) in the following proposition, we remark that although the global Green function \(G^{\omega}(x,y)\) is only defined for \(d\geq 3\), the second order difference \(\nabla^{2}_{i;1}G(x,y)\) can be defined for all dimensions, where \(\nabla^{2}_{i;1}\) is \(\nabla^{2}_{i}\) applied to the first \(\mathbb{Z}^{d}\) coordinate. That is, for any fixed \(y\), \(\nabla^{2}_{i;1}G(\cdot,y):=\nabla^{2}_{i}G(\cdot,y)\). Indeed, recalling \(A(x,y)\) in (14), we can set
\[\nabla^{2}_{i;1}G(x,y):=-\nabla^{2}_{i;1}A(x,y)\qquad\text{ when }d=2.\]
Since \(G(\cdot,\cdot)\) is not defined in Definition 4 for \(d=2\), for the convenience of notations, throughout this section we denote
\[G(x,y):=-A(x,y),\text{ and }G(x,S)=-\sum_{y\in S}A(x,y)\quad\text{ when }d=2. \tag{43}\]
**Proposition 20**.: _For any \(x,y\in\mathbb{Z}^{d}\), \(\mathbb{P}\)-almost surely,_
\[\partial_{y}^{\prime}\rho_{\omega}(x)=\rho_{\omega}(y)\sum_{i=1}^{d}(\partial_{y}^{\prime}\omega)(y,e_{i})\nabla^{2}_{i;1}G^{\omega_{y}^{\prime}}(y,x).\]
We will use the fact that for any measurable functions \(f,g\) on \(\Omega\),
\[\mathbb{E}[(\partial_{y}^{\prime}f)g]=\mathbb{E}[f(\partial_{y}^{ \prime}g)], \tag{44}\] \[\partial_{y}^{\prime}(fg)=(\partial_{y}^{\prime}f)g+f_{y}^{\prime }(\partial_{y}^{\prime}g)=(\partial_{y}^{\prime}f)g_{y}^{\prime}+f(\partial_{y} ^{\prime}g). \tag{45}\]
Proof.: It suffices to consider the case \(x=0\). The formula for general \(x\) will follow from the fact that \(\partial_{y}^{\prime}\rho_{\omega}(x)=\partial_{y-x}^{\prime}\rho_{\theta_{x}\omega}(0)\). We divide the proof into several steps.
**Step 1.** First, we will show a formula for \(\partial_{y}^{\prime}V(t,\omega)\):
\[\partial_{y}^{\prime}V(t,\omega)=\int_{0}^{t}V_{\omega}(t-s,y)\,\sum_{i=1}^{d }(\partial_{y}^{\prime}\omega)(y,e_{i})\nabla_{i}^{2}p_{s}^{\omega_{y}^{ \prime}}(y,0)\mathrm{d}s, \tag{46}\]
where \(V_{\omega}(s,y)=V(s,\theta_{y}\omega)\), and \(V_{y}^{\prime}(s,y)=V_{\omega_{y}^{\prime}}(s,y)\).
Indeed, notice that \(u(x,t)=p_{t}^{\omega}(x,0)\) satisfies \(u(x,0)=\mathbb{1}_{x=0}\) and
\[(\partial_{t}-L_{\omega})u(x,t)=0\quad\text{ for }(x,t)\in\mathbb{Z}^{d}\times(0,\infty). \tag{47}\]
By the equation above and the product rule (45), we have
\[\partial_{t}[\partial_{y}^{\prime}u(x,t)]=\partial_{y}^{\prime}[L_{\omega}u(x,t)]=\sum_{i=1}^{d}(\partial_{y}^{\prime}\omega)(x,e_{i})\nabla_{i}^{2}u_{y}^{\prime}(x,t)+L_{\omega}(\partial_{y}^{\prime}u)(x,t).\]
Hence, for every fixed \(y\in\mathbb{Z}^{d}\), \(\partial_{y}^{\prime}u(x,t)\) solves the heat equation
\[\left\{\begin{array}{ll}(\partial_{t}-L_{\omega})\partial_{y}^{\prime}u(x, t)=\sum_{i=1}^{d}(\partial_{y}^{\prime}\omega)(x,e_{i})\nabla_{i}^{2}u_{y}^{ \prime}(x,t)&\text{ for }(x,t)\in\mathbb{Z}^{d}\times(0,\infty),\\ \partial_{y}^{\prime}u(x,0)=0&\text{ for }x\in\mathbb{Z}^{d}\end{array}\right.\]
whose solution can be represented by Duhamel's formula
\[\partial_{y}^{\prime}u(x,t) =\sum_{z}\sum_{i=1}^{d}\int_{0}^{t}p_{t-s}^{\omega}(x,z)( \partial_{y}^{\prime}\omega)(z,e_{i})\nabla_{i}^{2}u_{y}^{\prime}(z,s)\mathrm{ d}s\] \[=\sum_{i=1}^{d}\int_{0}^{t}p_{t-s}^{\omega}(x,y)(\partial_{y}^{ \prime}\omega)(y,e_{i})\nabla_{i}^{2}u_{y}^{\prime}(y,s)\mathrm{d}s\]
where we used the fact that \(\partial_{y}^{\prime}\omega(z,e)=0\) if \(z\neq y\). Recall that \(u(x,t)=p_{t}^{\omega}(x,0)\). Summing the above equality over all \(x\in\mathbb{Z}^{d}\), we obtain formula (46).
**Step 2.** We claim that the integrand in (46) has the following bound: \(\forall s\in(0,t)\),
\[\left|V_{\omega}(t-s,y)\,\sum_{i=1}^{d}(\partial_{y}^{\prime}\omega)(y,e_{i}) \nabla_{i}^{2}p_{s}^{\omega_{y}^{\prime}}(y,0)\right|\lesssim(\mathcal{H}_{y} \mathcal{H}_{y}^{\prime})^{d-1}(1+s)^{-\gamma-0.5d}. \tag{48}\]
Indeed, by (47) and applying the Harnack inequality (Corollary A.2) for the operator \((\partial_{t}-L_{\omega})\) in a similar manner as in (42), we have
\[\left|V_{\omega}(t-s,y)\sum_{i=1}^{d}(\partial_{y}^{\prime}\omega)(y,e_{i})\nabla_{i}^{2}p_{s}^{\omega_{y}^{\prime}}(y,0)\right|\overset{(39)}{\lesssim}\mathcal{H}_{y}^{d-1}\underset{\bar{B}_{1}(y)}{\operatorname{osc}}p_{s}^{\omega_{y}^{\prime}}(\cdot,0)\lesssim\mathcal{H}_{y}^{d-1}s^{-\gamma}p_{2s}^{\omega_{y}^{\prime}}(y,0)\]
for \(s>1\). Hence (48) follows from Theorem B when \(s>1\). When \(s\leq 1\), (48) is a trivial consequence of (39) since \(|\nabla_{i}^{2}p_{s}^{\omega_{y}^{\prime}}(y,0)|\leq 2\). Display (48) is proved.
**Step 3.** For any bounded measurable function \(f\) on \(\Omega\), by Lemma 19 and (44),
\[\mathbb{E}[(\partial_{y}^{\prime}\rho)f]=\mathbb{E}[\rho(\partial_{y}^{\prime }f)]=\lim_{t\to\infty}\mathbb{E}[V(t,\omega)(\partial_{y}^{\prime}f)]=\lim_{t \to\infty}\mathbb{E}[(\partial_{y}^{\prime}V(t,\omega))f].\]
Furthermore, by (46), (48), and the dominated convergence theorem, we get
\[\begin{split}\mathbb{E}[(\partial_{y}^{\prime}\rho)f]&=\int_{0}^{\infty}\lim_{t\to\infty}\mathbb{E}\left[V_{\omega}(t-s,y)\sum_{i=1}^{d}(\partial_{y}^{\prime}\omega)(y,e_{i})\nabla_{i}^{2}p_{s}^{\omega_{y}^{\prime}}(y,0)\mathbbm{1}_{t>s}f\right]\mathrm{d}s\\ &\overset{\text{Lemma 19}}{=}\int_{0}^{\infty}\mathbb{E}\left[\rho f\,\sum_{i=1}^{d}(\partial_{y}^{\prime}\omega)(y,e_{i})\nabla_{i}^{2}p_{s}^{\omega_{y}^{\prime}}(y,0)\right]\mathrm{d}s\\ &=\mathbb{E}\left[\rho f\,\sum_{i=1}^{d}(\partial_{y}^{\prime}\omega)(y,e_{i})\nabla_{i}^{2}G^{\omega_{y}^{\prime}}(y,0)\right].\end{split}\]
Proposition 20 follows.
### Rate of convergence for the average of the invariant measure: Proof of Theorem 5
Now we will proceed to prove one of the main theorems in this paper, Theorem 5.
It will be clear in the proof that the \(\log R\) term in the \(C^{1,1}\) bound of Theorem 14 is important for us to obtain the logarithmic term in Theorem 5.
Proof of Theorem 5.: We divide the proof into several steps.
**Step 1.** Let \(u\,:\,\mathbb{R}_{+}\to\mathbb{R}_{+}\) be the function
\[u(r)=\left\{\begin{array}{ll}\log(r+1)&\text{ when }d=2\\ (r+1)^{2-d}&\text{ when }d\geq 3.\end{array}\right. \tag{49}\]
We will show that for any \(\varepsilon>0\), \(R\geq 2\), there exists a random variable \(\mathcal{H}^{*}(\omega)=\mathcal{H}^{*}(R,\omega;d,\kappa,\varepsilon)>0\) with \(\mathbb{E}[\exp(c\mathcal{H}^{*d-\varepsilon})]<C\) such that, \(\mathbb{P}\)-a.s.,
\[\underset{B_{(|y|+R)/2}(y)}{\operatorname{osc}}G^{\omega}(\cdot,B_{R}) \lesssim\left\{\begin{array}{ll}\mathcal{H}^{*d-1}u(|y|)R^{d}&\text{ if }|y|>4R\\ \mathcal{H}^{*d-1}R^{2}\log R&\text{ if }|y|\leq 4R.\end{array}\right. \tag{50}\]
Indeed, by Theorem D, for \(z\in\mathbb{Z}^{d}\),
\[|G(z,B_{R})|\lesssim\sum_{x\in B_{R}}\mathcal{H}_{x}^{d-1}u(|z-x|). \tag{51}\]
When \(|y|>4R\), \(u(|z-x|)\asymp u(|y|)\) for all \(x\in B_{R},z\in B_{(|y|+R)/2}(y)\), and so
\[|G(z,B_{R})|\lesssim\sum_{x\in B_{R}}\mathcal{H}_{x}^{d-1}u(|y|)\lesssim \mathcal{H}_{1}^{*d-1}u(|y|)R^{d}, \tag{52}\]
where \(\mathcal{H}_{1}^{*}=(\frac{1}{|B_{R}|}\sum_{x\in B_{R}}\mathcal{H}_{x}^{d-1}) ^{1/(d-1)}\).
When \(|y|\leq 4R\) and \(d=2\), for all \(z\in B_{(|y|+R)/2}(y)\), we have \(u(|z-x|)\lesssim\log R\)\(\forall x\in B_{R}\), and so
\[|G(z,B_{R})|\lesssim\log R\sum_{x\in B_{R}}\mathcal{H}_{x}^{d-1}\lesssim\mathcal{H}_{1}^{*d-1}R^{2}\log R. \tag{53}\]
When \(|y|\leq 4R\) and \(d\geq 3\), for all \(z\in B_{(|y|+R)/2}(y)\), (51) yields
\[|G(z,B_{R})| \lesssim[\mathcal{H}_{2}^{*}+(\log R)^{1/(d-1)}]^{d-1}\sum_{x\in B_{4R}}u(|x|)\lesssim(\mathcal{H}_{2}^{*d-1}+\log R)R^{2}\lesssim\mathcal{H}_{2}^{*d-1}R^{2}\log R, \tag{54}\]
where \(\mathcal{H}_{2}^{*}=[\max_{x\in B_{R}}\mathcal{H}_{x}-(\log R)^{1/(d-1)}]_{+}\). Recall \(\mathcal{H}=\mathcal{H}(\omega,d,\kappa,\epsilon)\) in Theorem B. Note that for \(t>1\) and \(p=d-\varepsilon>d-1\),
\[P(\mathcal{H}_{2}^{*}>t) \leq\sum_{x\in B_{R}}P(\mathcal{H}_{x}>t+(\log R)^{1/(d-1)})\] \[\lesssim R^{d}\exp[-c(t+(\log R)^{1/(d-1)})^{p}]\] \[\lesssim\exp(-ct^{p})\]
where we used Chebyshev's inequality in the second inequality. Hence \(\mathbb{E}[\exp(c\mathcal{H}_{2}^{*d-\epsilon})]\leq C\). Note also that, by Jensen's inequality, \(\mathbb{E}[\exp(c\mathcal{H}_{1}^{*d-\epsilon})]\leq C\).
Setting \(\mathcal{H}^{*}=\mathcal{H}_{1}^{*}+\mathcal{H}_{2}^{*}\), (50) follows from (52), (53), and (54).
**Step 2.** Next, we will show that
\[|\nabla^{2}G^{\omega}(y,B_{R})|\lesssim\left\{\begin{array}{ll}\mathcal{H}_ {y}^{2}\mathcal{H}^{*d-1}|y|^{-2}u(|y|)R^{d}&\text{ if }|y|>4R\\ \mathcal{H}_{y}^{2}\mathcal{H}^{*d-1}\log R&\text{ if }|y|\leq 4R,\end{array}\right. \tag{55}\]
where the operator \(\nabla^{2}\) is only applied to the first \(\mathbb{Z}^{d}\) coordinate of \(G(\cdot,\cdot)\).
When \(|y|>4R\), by Theorem 14 and (50),
\[|\nabla^{2}G(y,B_{R})|\lesssim\frac{\mathcal{H}_{y}^{2}}{|y|^{2}}\underset{B _{|y|/2}(y)}{\operatorname{osc}}G(\cdot,B_{R})\lesssim\mathcal{H}_{y}^{2} \mathcal{H}_{1}^{*d-1}|y|^{-2}u(|y|)R^{d}.\]
When \(|y|\leq 4R\), applying Theorem 14 (with \(\psi=0\), \(f=-1_{B_{R}}\)) again, we get
\[|\nabla^{2}G(y,B_{R})|\lesssim\frac{\mathcal{H}_{y}^{2}}{R^{2}}\underset{B_{R/2} (y)}{\operatorname{osc}}G(\cdot,B_{R})+\mathcal{H}_{y}^{2}\log R\overset{(50 )}{\lesssim}\mathcal{H}_{y}^{2}\mathcal{H}^{*d-1}\log R.\]
**Step 3.** By Proposition 20, Theorem B, and (55),
\[\left|\partial_{y}^{\prime}\frac{\rho_{\omega}(B_{R})}{|B_{R}|}\right|\lesssim R ^{-d}\rho_{\omega}(y)|\nabla^{2}G^{\omega_{y}^{\prime}}(y,B_{R})|\lesssim \mathcal{J}_{y}^{2d}(\omega,\omega^{\prime})w(|y|), \tag{56}\]
where \(\mathcal{J}_{y}(\omega,\omega^{\prime}):=[\mathcal{H}_{y}^{d-1}(\omega) \mathcal{H}_{y}^{2}(\omega_{y}^{\prime})\mathcal{H}^{*d-1}(\omega_{y}^{\prime })]^{1/(2d)}\), and
\[w(r)=\left\{\begin{array}{cc}r^{-2}u(r)&\text{if }r>4R\\ R^{-d}\log R&\text{if }r\leq 4R.\end{array}\right.\]
Note that \(\mathbb{E}[\exp(c\mathcal{J}_{y}^{d-\epsilon})]<C\), and
\[\sum_{x\in\mathbb{Z}^{d}}w^{2}(|x|)\asymp R^{-d}(\log R)^{2}.\]
We let \(W(R)=R^{-d/2}\log R\) so that \(W(R)^{2}\asymp\sum_{x\in\mathbb{Z}^{d}}w^{2}(|x|)\). Set
\[Z(\omega):=\frac{\rho_{\omega}(B_{R})/|B_{R}|}{W(R)}.\]
**Step 4.** By (37), (56), and Jensen's inequality, for any \(q\geq 2\),
\[V(Z)^{q/2}\lesssim\left(\frac{\sum_{y}\mathcal{J}_{y}^{2d}w(|y|)^{2}}{\sum_{z}w(|z|)^{2}}\right)^{q/2}\leq\sum_{y}\frac{w(|y|)^{2}}{\sum_{z}w(|z|)^{2}}\mathcal{J}_{y}^{dq}.\]
Taking expectations on both sides and using translation-invariance of \(\mathbb{P}\), we get
\[\mathbb{E}[V^{q/2}]\lesssim\mathbb{E}[\mathcal{J}_{0}^{dq}].\]
Thus, by (38),
\[\mathbb{E}[|Z-\mathbb{E}Z|^{q}]\leq Cq^{q/2}\mathbb{E}[\mathcal{J}_{0}^{dq}], \quad\forall q\geq 2. \tag{57}\]
As a fact, for any \(\alpha\in[0,1)\), there exists \(c=c(\alpha)>0\) such that, for all \(x>0\),
\[\sum_{n=1}^{\infty}\frac{c^{n}}{n!}x^{n}n^{\alpha n}\leq\exp(x^{1/(1-\alpha)}). \tag{58}\]
Indeed, when \(x>0\), putting \(c=e^{-\alpha}/2\) and using inequality \(\frac{n^{n}}{n!}\leq e^{n}\),
\[\sum_{n=1}^{\infty}\frac{c^{n}}{n!}x^{n}n^{\alpha n}\leq\sum_{n=1}^{\infty}2^ {-n}\frac{x^{n}}{(n!)^{1-\alpha}}=\sum_{n=1}^{\infty}2^{-n}\big{(}\frac{x^{n/( 1-\alpha)}}{n!}\big{)}^{1-\alpha}\leq\exp(x^{1/(1-\alpha)}),\]
where we used \(\frac{y^{n}}{n!}\leq e^{y}\) for \(y\geq 0\) in the last inequality. Thus, recalling \(\mathbb{E}[\exp(c\mathcal{J}_{0}^{d-\epsilon})]<C\) and letting \(p=(\frac{3}{2}+\frac{\epsilon}{d-\epsilon})^{-1}\), displays (57) and (58) yield
\[\mathbb{E}[\exp(c|Z-\mathbb{E}Z|^{p})]\lesssim\mathbb{E}\left[\sum_{n=0}^{ \infty}\frac{c^{n}}{n!}\mathcal{J}_{0}^{dnp}(np)^{np/2}\right]\lesssim\mathbb{ E}[\exp(c\mathcal{J}_{0}^{d-\epsilon})]<C.\]
Note that \(\mathbb{E}Z=\frac{1}{W(R)}\). The theorem follows by Chebyshev's inequality.
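Explicitly, since \(\rho_{\omega}(B_{R})/|B_{R}|-1=W(R)(Z-\mathbb{E}Z)\), for every \(t>0\),
\[\mathbb{P}\Big{(}\Big{|}\frac{\rho_{\omega}(B_{R})}{|B_{R}|}-1\Big{|}\geq t\,W(R)\Big{)}=\mathbb{P}(|Z-\mathbb{E}Z|\geq t)\leq e^{-ct^{p}}\,\mathbb{E}[\exp(c|Z-\mathbb{E}Z|^{p})]\leq Ce^{-ct^{p}}.\]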
### Correlation structure of the field of the invariant measure
In this subsection we will investigate the mixing property of the field by showing the rate of decay of its correlations. Intuitively, since \(\rho_{\omega}(x)\) is determined by the long term frequency of visits of the RWRE to \(x\), the influence of environments at remote locations will be small.
Our proof uses a localization of the invariant measure \(\rho_{\omega}(x)\) to a finite ball. For any \(x\in\mathbb{Z}^{d},r>0\), we introduce the notation
\[\rho_{r}(x)=\rho_{r,\omega}(x)\,:=\mathbb{E}[\rho_{\omega}(x)| \omega(y)\,:\,y\in B_{r}(x)]\]
so that \(\rho_{r}(x)\) is only a function of environments within the ball \(B_{r}(x)\).
Proof of Proposition 6.: As usual, we divide the proof into two steps.
**Step 1.** We will show that, for \(r\geq 2\),
\[\left\|\rho(0)-\rho_{r}(0)\right\|_{L^{2}(\mathbb{P})}\lesssim \left\{\begin{array}{ll}r^{-1}\log r,&d=2\\ r^{-d/2},&d\geq 3.\end{array}\right. \tag{59}\]
Recall the function \(u(r)\) defined in (49). By applying the Efron-Stein inequality to every fixed realization of the environment within \(B_{r}\), we get
\[\left\|\rho-\rho_{r}\right\|_{L^{2}(\mathbb{P})}^{2} \lesssim\mathbb{E}[\sum_{y\not\in B_{r}}(\partial_{y}^{\prime} \rho)^{2}]\] \[\lesssim\mathbb{E}[\sum_{y\not\in B_{r}}\frac{1}{|y|^{4}}( \mathcal{H}_{y}\mathcal{H}(\omega_{y}^{\prime}))^{2d-2}\mathcal{H}_{y}^{4}( \omega_{y}^{\prime})\,u(|y|)^{2}],\]
where in the last inequality we used Theorem B, Theorem D, and Theorem 14. Since for \(r\geq 2\),
\[\sum_{y\not\in B_{r}}|y|^{-4}u(|y|)^{2}\asymp\int_{r}^{\infty}s^ {-4}u(s)^{2}s^{d-1}\mathrm{d}s\lesssim\left\{\begin{array}{ll}r^{-2}(\log r) ^{2},&d=2\\ r^{-d},&d\geq 3,\end{array}\right.\]
inequality (59) follows.
**Step 2.** We will use (59) to estimate \(\mathrm{Cov}_{\mathbb{P}}(\rho(x),\rho(y))\). By the translation invariance of \(\mathbb{P}\), it suffices to consider the case \(y=0\). Then
\[\mathrm{Cov}_{\mathbb{P}}(\rho(0),\rho(x))\] \[=\mathbb{E}\left[\rho(0)(\rho(x)-\rho_{|x|/2}(x))\right]+ \mathbb{E}\left[\rho(0)(\rho_{|x|/2}(x)-1)\right]\] \[=\mathbb{E}\left[\rho(0)(\rho(x)-\rho_{|x|/2}(x))\right]+ \mathbb{E}\left[(\rho(0)-\rho_{|x|/2}(0))(\rho_{|x|/2}(x)-1)\right]\]
where in the last equality we used the fact that \(\rho_{|x|/2}(0)\) and \(\rho_{|x|/2}(x)\) are independent under \(\mathbb{P}\), and that \(\mathbb{E}[\rho_{|x|/2}(x)]=\mathbb{E}[\rho]=1\).
Hence, by Hölder's inequality and the moment bound (cf. Theorem B) of \(\rho\), we have
\[|\mathrm{Cov}_{\mathbb{P}}(\rho(0),\rho(x))|\lesssim\|\rho-\rho_ {|x|/2}\|_{L^{2}(\mathbb{P})}.\]
The proposition is proved by recalling inequality (59).
## 4 Homogenization of the Dirichlet problem
In this section, \(\psi\) is always assumed to be a local function.
### Homogenization of the approximate corrector
We consider the function \(\hat{\phi}\,:\,\mathbb{Z}^{d}\to\mathbb{R}\) defined as
\[\hat{\phi}(x)=\hat{\phi}(x;\psi\,,R,\omega)=-\int_{0}^{\infty}e^{-t/R^{2}}E_{ \omega}^{x}[\psi(\theta_{Y_{t}}\omega)]\mathrm{d}t. \tag{60}\]
where \(R\geq 1\), and \(\psi\) is a measurable function of \(\omega(0)\) with \(E_{\mathbb{Q}}[\psi]=0\). Notice that \(\hat{\phi}\) is stationary, i.e., \(\hat{\phi}(x;\psi\,,R,\omega)=\hat{\phi}(0;\psi\,,R,\theta_{x}\omega)\). Moreover, \(\hat{\phi}\) is a solution of
\[L_{\omega}\hat{\phi}(x)=\tfrac{1}{R^{2}}\hat{\phi}(x)+\psi(\theta_{x}\omega), \quad x\in\mathbb{Z}^{d}. \tag{61}\]
Clearly, by the definition of \(\hat{\phi}\) in (60), for any \(\omega\in\Omega\),
\[\sup_{x\in\mathbb{Z}^{d}}|\hat{\phi}(x)|\leq R^{2}\|\psi\|_{\infty}, \tag{62}\]
and so \(\|\tfrac{1}{R^{2}}\hat{\phi}(x)+\psi(\theta_{x}\omega)\|_{\infty}\leq 2\|\psi\|_{\infty}\). By (62) and the Hölder estimate (31),
\[[\hat{\phi}]_{\gamma;\,B_{R/2}}\lesssim R^{-\gamma}[\max_{B_{R}}|\hat{\phi}|+R^{2}\|R^{-2}\hat{\phi}+\psi\|_{d;\,B_{R}}]\lesssim R^{2-\gamma}\|\psi\|_{\infty}.\]
Hence, for any \(2\leq D\leq R\), applying (28) to \(f=\hat{\phi}/R^{2}\) and \(\sigma=\gamma\) in \(B_{D}\), we get
\[|\nabla\hat{\phi}(0)| \lesssim\mathcal{H}(D\|\psi\|_{\infty}+\tfrac{1}{D}\|\hat{\phi} \|_{1;\,B_{D}}), \tag{63}\] \[|\nabla^{2}\hat{\phi}(0)| \lesssim\mathcal{H}^{2}\|\psi\|_{\infty}. \tag{64}\]
The goal of this subsection is to establish the optimal rate of convergence of the approximate corrector. To this end, set, for \(R\geq 2\),
\[\mu(R)\,:=\left\{\begin{array}{ll}R&d=2\\ R^{1/2}&d=3\\ (\log R)^{1/2}&d=4\\ 1&d\geq 5.\end{array}\right. \tag{65}\]
**Lemma 21**.: _Assume that \(\psi(\omega)=\psi(\omega(0))\) is a bounded function of \(\omega(0)\). For any \(0<p<\tfrac{2d}{3d+2}\), there exists \(C=C(d,\kappa,p)\) such that for \(t\geq 0\), \(R\geq 2\), with \(\mu(R)\) as defined in (65) and \(\hat{\phi}(x)=\hat{\phi}(x;\psi\,,R,\omega)\) as in (60),_
\[\mathbb{P}\left(|\hat{\phi}(0)|\geq t\mu(R)\|\psi\|_{\infty}\right)\leq C\exp( -\tfrac{1}{C}t^{p}).\]
The continuous version of Lemma 21 was proved earlier by Armstrong and Lin [3]. Our result in two dimensions (\(d=2\)) is slightly better than that in [3].
We now obtain Lemma 21 using the concentration inequality (38). To this end, we regard \(\hat{\phi}\) as a function of the environment and write, for \(y\in\mathbb{Z}^{d}\),
\[\hat{\phi}^{\prime}_{y}(x)\,:\,=\hat{\phi}(x;\psi,R,\omega^{\prime}_{y}),\quad \partial^{\prime}_{y}\hat{\phi}=\hat{\phi}^{\prime}_{y}-\hat{\phi}. \tag{66}\]
We will need a bound for \(\partial^{\prime}_{y}\hat{\phi}(0)\). Note that \(w(x)=\partial^{\prime}_{y}\hat{\phi}(x)\) satisfies, for \(x,y\in\mathbb{Z}^{d}\),
\[L_{\omega^{\prime}_{y}}w(x)=R^{-2}w(x)+[R^{-2}\hat{\phi}+\psi(\omega^{\prime}(y))-\operatorname{tr}(\omega^{\prime}\nabla^{2}\hat{\phi})]\mathbb{1}_{y=x},\]
which yields
\[w(x)=-\left[R^{-2}\hat{\phi}(y)+\psi(\omega^{\prime}(y))-\operatorname{tr}(\omega^{\prime}\nabla^{2}\hat{\phi})(y)\right]\int_{0}^{\infty}e^{-t/R^{2}}p^{\omega^{\prime}_{y}}_{t}(x,y)\mathrm{d}t. \tag{67}\]
This equality, together with (64), (62) and Theorem B(c), implies
\[|\partial^{\prime}_{y}\hat{\phi}(0)|=|w(0)| \lesssim\mathcal{H}^{2}_{y}\|\psi\|_{\infty}\int_{0}^{\infty}e^{-t/R^{2}}p^{\omega^{\prime}_{y}}_{t}(0,y)\mathrm{d}t\] \[\lesssim\mathcal{H}^{2}_{y}\mathcal{H}^{\prime\,d-1}_{y}\|\psi\|_{\infty}\int_{0}^{\infty}(1+t)^{-d/2}\exp\left[-\tfrac{t}{R^{2}}-c\mathfrak{h}(|y|,t)\right]\mathrm{d}t\] \[\lesssim\mathcal{H}^{2}_{y}\mathcal{H}^{\prime\,d-1}_{y}\|\psi\|_{\infty}\upsilon(|y|), \tag{68}\]
where \(\mathcal{H}_{y}=\mathcal{H}(\theta_{y}\omega),\mathcal{H}^{\prime}_{y}= \mathcal{H}(\theta_{y}\omega^{\prime}_{y})\) and, with \(c_{2}=c_{2}(\kappa,d)>0\) denoting an appropriate constant,
\[\upsilon(r)=\left\{\begin{array}{ll}e^{-c_{2}r/R}\left[1+\log(\tfrac{R}{(r +1)\wedge R})\right]&d=2\\ e^{-c_{2}r/R}(r+1)^{2-d}&d\geq 3.\end{array}\right. \tag{69}\]
Recall \(\mu(R)\) in (65). Notice that
\[\sum_{y\in\mathbb{Z}^{d}}\upsilon(|y|)^{2}\lesssim\int_{0}^{\infty}\upsilon( r)^{2}r^{d-1}\mathrm{d}r\lesssim\mu(R)^{2}. \tag{70}\]
The verifications of inequalities (68) and (70) are included in the Appendix.
Proof of Lemma 21.: For \(y\in\mathbb{Z}^{d}\), set \(\mathcal{H}_{y}:=\mathcal{H}(\theta_{y}\omega)\), and
\[Z(\omega)\,:=\,\frac{\hat{\phi}(0)}{\|\psi\|_{\infty}\mu(R)}. \tag{71}\]
By (37), (68), (70), and Jensen's inequality, for any \(q\geq 2\),
\[V(Z)^{q/2} \lesssim\left(\frac{\sum_{y}\mathcal{H}^{4}_{y}\mathcal{H}^{\prime\,2(d-1)}_{y}\upsilon(|y|)^{2}}{\sum_{z}\upsilon(|z|)^{2}}\right)^{q/2}\] \[\leq\sum_{y}\frac{\upsilon(|y|)^{2}}{\sum_{z}\upsilon(|z|)^{2}}\mathcal{H}^{2q}_{y}\mathcal{H}^{\prime\,(d-1)q}_{y}.\]
Taking expectations on both sides and using translation-invariance of \(\mathbb{P}\), we get
\[\mathbb{E}[V^{q/2}]\lesssim\mathbb{E}[\mathcal{H}^{2q}\mathcal{H}^{\prime\,q(d-1)}]\lesssim\mathbb{E}[\mathcal{H}^{(1+d)q}],\]
where we used Hölder's inequality in the second inequality. Thus, by (38),
\[\mathbb{E}[|Z-\mathbb{E}Z|^{q}]\leq Cq^{q/2}\mathbb{E}[\mathcal{H}^{(1+d)q}], \quad\forall q\geq 2. \tag{72}\]
Thus, recalling \(\mathbb{E}[\exp(c\mathcal{H}^{d-\epsilon})]<C\) in Theorem B and letting \(p=(\frac{3}{2}+\frac{1+\epsilon}{d-\epsilon})^{-1}\), displays (72) and (58) yield
\[\mathbb{E}[\exp(c|Z-\mathbb{E}Z|^{p})]\lesssim\mathbb{E}\left[\sum_{n=0}^{ \infty}\frac{c^{n}}{n!}\mathcal{H}^{(1+d)np}(np)^{np/2}\right]\lesssim\mathbb{ E}[\exp(c\mathcal{H}^{d-\epsilon})]<C.\]
In particular, \(\mathbb{E}[|Z-\mathbb{E}Z|^{2}]<C\).
To prove Lemma 21, it suffices to show that
\[\mathbb{E}[\exp(c|Z|^{p})]=\mathbb{E}\left[\exp\big{(}c|\frac{\hat{\phi}(0)}{ \mu(R)\|\psi\|_{\infty}}|^{p}\big{)}\right]<C. \tag{73}\]
It suffices to show that \(|\mathbb{E}Z|<C\). Since \(\mathbb{Q}\) is an invariant measure for \((\theta_{Y_{t}}\omega)_{t\geq 0}\), we have \(E_{\mathbb{Q}}E_{\omega}^{0}[\psi(\theta_{Y_{t}}\omega)]=E_{\mathbb{Q}}[\psi]=0\) for all \(t\geq 0\). Hence, by (60), we know
\[E_{\mathbb{Q}}[\hat{\phi}(0)]=0\]
and so \(E_{\mathbb{Q}}[Z]=0\). Further, by Hölder's inequality and Theorem B,
\[|\mathbb{E}Z|=|E_{\mathbb{Q}}[Z-\mathbb{E}Z]|\leq E_{\mathbb{Q}}[|Z-\mathbb{E}Z|]\leq\|\rho\|_{L^{2}(\mathbb{P})}\|Z-\mathbb{E}Z\|_{L^{2}(\mathbb{P})}\leq C.\]
Therefore, we obtain (73). Lemma 21 follows by Chebyshev's inequality.
### Rate of homogenization for the Dirichlet problem (19)
We need the following notations. Let \(r=r(R)\) be a function of \(R\) defined as
\[r=\left\{\begin{array}{ll}R^{2/3}&d=2\\ R^{4/7}&d=3\\ R^{1/2}(\log R)^{1/8}&d=4\\ R^{1/2}&d\geq 5.\end{array}\right. \tag{74}\]
Recall \(\hat{\phi}\) in (60). For \(k=1,\ldots,d\), let
\[\upsilon^{k}(x)=\hat{\phi}(x;\omega_{k}-\bar{a}_{k},r,\omega),\quad x\in \mathbb{Z}^{d}. \tag{75}\]
For \(R\geq 2\) and \(0<p<\frac{2d}{3d+2}\), set \(\Lambda=\{\omega_{k}-\bar{a}_{k}:\,k=1,\ldots,d\}\), and define
\[\mathcal{Y}=\mathcal{Y}(\omega;R)=1+c_{p}\max_{\xi\in\Lambda,|e|\leq 1}|\hat{ \phi}(0;\xi,r,\theta_{e}\omega)|/[\|\xi\|_{\infty}\mu(r)],\]
so that, by Lemma 21, for all \(R\geq 2\), \(\mathbb{E}[\exp(\mathcal{Y}^{p})]\leq C\) and
\[\max_{\xi\in\Lambda,|e|\leq 1}|\hat{\phi}(0;\xi,r,\theta_{e}\omega)|\lesssim_{p}\mathcal{Y}\mu(r). \tag{76}\]
We write \(\mathcal{Y}_{x}=\mathcal{Y}_{x}(\omega;R):=\mathcal{Y}(\theta_{x}\omega;R)\).
In what follows we will apply the classical method of two-scale expansions to quantify the rate of the homogenization for the Dirichlet problem (19). The proof is similar to that in the periodic setting (see [32, 45, 30] for example). The only difference is that we use the approximate corrector \(\upsilon^{k}\) here instead of the actual corrector in the periodic setting.
Proof of Theorem 7.: We can replace the function \(g\) in (19) by \(\bar{u}\), because doing so only introduces an error of size \(CR^{-1}\|\bar{u}\|_{C^{4}}\) to \(|u(x)-\bar{u}(\frac{x}{R})|\).
Consider
\[w(x)=u(x)-\bar{u}(\tfrac{x}{R})+\tfrac{1}{R^{2}}\upsilon^{k}(x)\partial_{kk}\bar{u}(\tfrac{x}{R}),\quad x\in\bar{B}_{R}. \tag{77}\]
Here we follow the convention of summation over repeated indices. Then, \(\forall x\in B_{R}\),
\[|L_{\omega}w(x)|= \Big{|}\tfrac{1}{R^{2}}f(\tfrac{x}{R})-L_{\omega}[\bar{u}(\tfrac{x}{R})]+\tfrac{1}{R^{2}}(\tfrac{\upsilon^{k}}{r^{2}}+\omega_{k}-\bar{a}_{k})\partial_{kk}\bar{u}(\tfrac{x}{R})+\tfrac{1}{R^{2}}\upsilon^{k}L_{\omega}[\partial_{kk}\bar{u}(\tfrac{x}{R})]\] \[+\tfrac{1}{R^{2}}\sum_{y\sim x}\omega(x,y)[\partial_{kk}\bar{u}(\tfrac{y}{R})-\partial_{kk}\bar{u}(\tfrac{x}{R})][\upsilon^{k}(y)-\upsilon^{k}(x)]\Big{|}\] \[= \Big{|}\tfrac{1}{2}R^{-3}\omega_{i}(x)\partial_{ikk}\bar{u}(\tfrac{x}{R})[\upsilon^{k}(x+e_{i})-\upsilon^{k}(x-e_{i})]+(Rr)^{-2}\partial_{kk}\bar{u}(\tfrac{x}{R})\upsilon^{k}(x)\] \[\quad+R^{-4}\|\bar{u}\|_{C^{4}}|\upsilon^{k}(x)|O(1)\Big{|}\] \[\lesssim\|\bar{u}\|_{C^{4}}\left(R^{-3}|\upsilon^{k}(x+e_{i})-\upsilon^{k}(x-e_{i})|+(Rr)^{-2}|\upsilon^{k}|+R^{-4}|\upsilon^{k}|\right). \tag{78}\]
See Appendix A.4 for verification of the first part of (78). Set \(D=\mu(r)^{1/2}\leq r\). By (63) and (76),
\[\operatorname*{osc}_{\bar{B}_{1}(x)}\upsilon^{k}\lesssim\mathcal{H}_{x}\big{(}D+\tfrac{1}{D}\|\upsilon^{k}\|_{1;B_{D}(x)}\big{)}\lesssim\mathcal{H}_{x}\mathcal{Y}_{x}^{*}\sqrt{\mu(r)}, \tag{79}\]
where
\[\mathcal{Y}_{x}^{*}=\frac{1}{\#B_{D}}\sum_{z\in B_{D}(x)}\mathcal{Y}_{z}. \tag{80}\]
By (78), (79) and (64), we get that for \(x\in B_{R}\),
\[|L_{\omega}w| \lesssim\|\bar{u}\|_{C^{4}}[\mathcal{H}_{x}\mathcal{Y}_{x}^{*}R^ {-3}\sqrt{\mu(r)}+(Rr)^{-2}\mu(r)\mathcal{Y}_{x}]\] \[\lesssim\|\bar{u}\|_{C^{4}}R^{-2}\tau(R)(\mathcal{H}_{x}\mathcal{ Y}_{x}^{*}+\mathcal{Y}_{x}).\]
Recall \(r(R),\tau(R)\) in (74), (20). Hence, by (77) and the ABP inequality,
\[\max_{B_{R}}|u(x)-\bar{u}(\tfrac{x}{R})| \lesssim\max_{B_{R}}|w|+R^{-2}\max_{x\in B_{R}}|\upsilon^{k}(x)\partial_{kk}\bar{u}(\tfrac{x}{R})|\] \[\lesssim\|\bar{u}\|_{C^{4}}\left[\tau(R)\big{(}\frac{1}{\#B_{R}}\sum_{x\in B_{R}}(\mathcal{H}_{x}\mathcal{Y}_{x}^{*}+\mathcal{Y}_{x})^{d}\big{)}^{1/d}+R^{-2}\mu(r)\max_{y\in B_{R}}\mathcal{Y}_{y}\right]\] \[\lesssim\|\bar{u}\|_{C^{4}}\left[\tau(R)A_{1}+R^{-2}\mu(r)A_{2}+R^{-2}\mu(r)(\log R)^{1/(2s)}\right]\] \[\lesssim\|\bar{u}\|_{C^{4}}\tau(R)(A_{1}+A_{2}), \tag{81}\]
where
\[A_{1}=\left(\frac{1}{\#B_{R}}\sum_{x\in B_{R}}(\mathcal{H}_{x}\mathcal{Y}_{x }^{*}+\mathcal{Y}_{x})^{d}\right)^{1/d},\quad A_{2}=\left(\max_{y\in\tilde{B} _{R}}\mathcal{Y}_{y}-(2d\,\log R)^{1/(2s)}\right)_{+}.\]
For \(q\geq d\), by the translation-invariance of \(\mathbb{P}\),
\[\mathbb{E}[A_{1}^{q}]\lesssim\mathbb{E}[(\mathcal{H}\mathcal{Y}_{0}^{*}+\mathcal{Y})^{q}]\lesssim\mathbb{E}[\mathcal{H}^{2q}+\mathcal{Y}_{0}^{*2q}+\mathcal{Y}^{q}]\lesssim\mathbb{E}[\mathcal{H}^{2q}+\mathcal{Y}^{2q}],\]
which implies, for \(s\in(0,\tfrac{d}{3d+2})\),
\[\mathbb{E}[\exp(c\,A_{1}^{s})]\lesssim\mathbb{E}[\exp(\mathcal{H}^{2s})+\exp(\mathcal{Y}^{2s})]\leq C. \tag{82}\]
Moreover, for \(t>0\), by a union bound and Chebyshev's inequality,
\[\mathbb{P}(A_{2}\geq t) \lesssim R^{d}\mathbb{P}(\mathcal{Y}-[2d\,\log R]^{1/(2s)}\geq t)\] \[\leq\#\tilde{B}_{R}\mathbb{E}\left[\exp\left(\mathcal{Y}^{2s}- \tfrac{1}{2}t^{2s}-d\,\log R\right)\right]\lesssim e^{-t^{2s}/2}.\]
Thus \(\mathbb{E}[\exp(c\,A_{2}^{s})]<C\). This, together with (82) and (81), yields
\[\max_{B_{R}}|u(x)-\bar{u}(\tfrac{x}{R})|\lesssim\|\bar{u}\|_{C^{4}}\tau(R) \mathcal{Z},\]
with \(\mathcal{Z}:=c(A_{1}+A_{2})\) satisfying \(\mathbb{E}[\exp(\mathcal{Z}^{s})]\leq C\). Our proof is complete.
## 5 Quantification of the diffusive behavior
### Quantification of the ergodicity of the environmental process: Proof of Theorem 8
In this section we will derive the optimal rates of convergence (as \(t\to\infty\)) of the ergodic average \(\frac{1}{t}E_{\omega}[\int_{0}^{t}\psi(\bar{\omega}^{s})\mathrm{d}s]\), where \(\bar{\omega}^{s}\) denotes the process of the environment viewed from the particle:
\[\bar{\omega}^{s}:=\theta_{Y_{s}}\omega.\]
With Lemma 21, it may be tempting to compare the approximate corrector \(\hat{\phi}\) in (60) to the corrector within a finite ball \(B_{R}\), i.e., the solution \(u\) to the Dirichlet
problem \(L_{\omega}u=\psi_{\omega}\) in \(B_{R}\) with \(u=0\) on \(\partial B_{R}\). However, such a comparison involves controlling the boundary error \(\max_{\partial B_{R}}\hat{\phi}\), which would result in an extra \(\log R\) factor. In what follows, we will follow the argument of Kipnis and Varadhan [37] to approximate \(E_{\omega}[\int_{0}^{T}\psi(\theta_{Y_{s}}\omega)\mathrm{d}s]\) with a martingale using the approximate corrector.
Proof of Theorem 8.: Without loss of generality, assume \(\left\|\psi\right\|_{\infty}=1\) and \(\bar{\psi}=0\).
First, we will construct a martingale (for both continuous and discrete time cases) using the approximate corrector.
For any fixed \(T>1\), let \(\phi:\Omega\to\mathbb{R}\) denote the function
\[\phi(\omega)=\phi_{\psi,T}(\omega)\,:\,=\hat{\phi}(0;\psi,\sqrt{T},\omega),\]
where \(\hat{\phi}\) is as in (60). Then, for a.s. \(\omega\in\Omega\), the process \((M_{t})_{t\geq 0}\) defined by
\[M_{t}\,:=\,\phi(\theta_{Y_{t}}\omega)-\phi(\theta_{Y_{0}}\omega)-\int_{0}^{t}L_{\omega}\phi(\theta_{Y_{s}}\omega)\mathrm{d}s\] \[\overset{(61)}{=}\phi(\bar{\omega}^{t})-\phi(\bar{\omega}^{0})-\int_{0}^{t}[\tfrac{1}{T}\phi(\bar{\omega}^{s})+\psi(\bar{\omega}^{s})]\mathrm{d}s \tag{83}\]
is a \(P_{\omega}\)-martingale with respect to the filtration \(\mathcal{F}_{t}=\sigma(Y_{s}:\,s\leq t)\). Similarly, for discrete-time RWRE, we have that
\[N_{n}\,:\,=\phi(\bar{\omega}^{n})-\phi(\bar{\omega}^{0})-\sum_{i=0}^{n-1}[ \frac{1}{T}\phi(\bar{\omega}^{i})+\psi(\bar{\omega}^{i})]\]
is a \(P_{\omega}\)-martingale with respect to the filtration \(\mathcal{F}_{n}=\sigma(X_{i}:\,i\leq n)\).
Next, we will derive exponential moment bounds for \(\int_{0}^{t}P_{s}\psi\mathrm{d}s\) and \(\sum_{i=0}^{n}P_{i}\psi\), where the operator \(P_{s}\) is as in (16). We will only provide a proof for the continuous-time case, because the argument for the discrete-time setting is exactly the same. Since \(E_{\omega}[M_{s}]=E_{\omega}[M_{0}]=0\), taking expectations in (83), we get
\[\int_{0}^{t}P_{s}\psi\mathrm{d}s=P_{t}\phi-\phi-\tfrac{1}{T}\int_{0}^{t}P_{s} \phi\mathrm{d}s. \tag{84}\]
Since the process \((\bar{\omega}^{s})\) is a stationary sequence under the measure \(\mathbb{Q}\times P_{\omega}\), we have, by Jensen's inequality, for any \(t\geq 0\), \(q\geq 1\),
\[\left\|P_{t}\phi\right\|_{L^{q}(\mathbb{Q})}^{q}=E_{\mathbb{Q}}[|E_{\omega} \phi(\bar{\omega}^{t})|^{q}]\leq E_{\mathbb{Q}\times P_{\omega}}[|\phi(\bar{ \omega}^{t})|^{q}]=E_{\mathbb{Q}}[|\phi|^{q}].\]
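In particular, \(\|P_{t}\phi\|_{L^{q}(\mathbb{Q})}\leq\|\phi\|_{L^{q}(\mathbb{Q})}\) for every \(t\geq 0\), and therefore also
\[\Big{\|}\tfrac{1}{T}\int_{0}^{t}P_{s}\phi\,\mathrm{d}s\Big{\|}_{L^{q}(\mathbb{Q})}\leq\tfrac{1}{T}\int_{0}^{t}\|P_{s}\phi\|_{L^{q}(\mathbb{Q})}\,\mathrm{d}s\leq\|\phi\|_{L^{q}(\mathbb{Q})},\qquad 0\leq t\leq T.\]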
Hence, taking the \(L^{q}(\mathbb{Q})\)-norms on both sides of (84), we get
\[\left\|\int_{0}^{T}P_{s}\psi\mathrm{d}s\right\|_{L^{q}(\mathbb{Q})}\leq 3\left\| \phi\right\|_{L^{q}(\mathbb{Q})},\quad\forall q\geq 1\]
which implies
\[E_{\mathbb{Q}}\left[\exp\left(c\Big{|}\int_{0}^{T}P_{s}\psi\,\mathrm{d}s\big{/}\mu(\sqrt{T})\Big{|}^{p}\right)\right] \leq E_{\mathbb{Q}}\left[\exp\left(c\Big{|}3\phi/\mu(\sqrt{T})\Big{|}^{p}\right)\right]\] \[\leq\|\rho\|_{L^{2}(\mathbb{P})}E_{\mathbb{P}}\left[\exp\left(0.5c\Big{|}3\phi/\mu(\sqrt{T})\Big{|}^{p}\right)\right]^{1/2}\overset{\text{Lemma 21}}{\leq}C.\]
The same argument, applied to the discrete-time martingale \(N_{n}\), gives the corresponding bound for \(\sum_{i=0}^{n-1}P_{i}\psi\). This proves Theorem 8.
### A Berry-Esseen estimate for the QCLT: Proof of Corollary 10
To prove Corollary 10 we will apply the Berry-Esseen estimates for martingales by Heyde and Brown [35]. Here we will use the version in [34, Theorem 2] which is also applicable to the continuous-time setting.
Proof of Corollary 10.: For any unit vector \(\mathcal{L}\in\mathbb{R}^{d}\), let \(\psi_{0}(\omega)=\mathcal{L}^{T}\frac{\omega(0)}{\mathrm{tr}\omega(0)} \mathcal{L}\), \(\psi=\psi_{0}-E_{\mathrm{Q}}[\psi_{0}]\). Following the notations in [34], we set
\[N_{n,2}\,: =E_{\omega}\left[\Big{|}\sum_{k=0}^{n-1}E_{\omega}\big{[}\tfrac{1 }{\sqrt{n}}\big{(}(X_{k+1}-X_{k})\cdot\mathcal{L}\big{)}^{2}|\mathcal{F}_{k} \big{]}-\mathcal{L}^{T}\bar{a}\mathcal{L}\Big{|}^{2}\right]\] \[=\frac{1}{n^{2}}E_{\omega}\left[\big{(}\sum_{k=0}^{n-1}\psi(\bar{ \omega}^{k})\big{)}^{2}\right]\,,\] \[L_{n,2}\,: =\sum_{k=0}^{n-1}E_{\omega}[|\tfrac{1}{\sqrt{n}}(X_{k+1}-X_{k}) \cdot\mathcal{L}|^{4}]=\frac{1}{n^{2}}E_{\omega}\left[\sum_{k=0}^{n-1}\psi_{0 }(\bar{\omega}^{k})\right]\,.\]
The term \(N_{n,2}\) can be further written as
\[n^{2}N_{n,2}=2\sum_{i=0}^{n-1}E_{\omega}\left[\psi(\bar{\omega}^{i})\sum_{j=0} ^{n-i-1}\psi(\bar{\omega}^{i+j})\right]=2\sum_{i=0}^{n-1}E_{\omega}\left[\psi( \bar{\omega}^{i})E_{\omega}^{X_{i}}\left[\sum_{j=0}^{n-i-1}\psi(\bar{\omega}^{ j})\right]\right]\,.\]
Hence, for any \(q\geq 1\), using the fact that \((\bar{\omega}^{i})\) is a stationary sequence under \(\mathbb{Q}\times P_{\omega}\), we get (note \(\left\|\psi_{0}\right\|_{\infty}\lesssim 1\))
\[\left\|N_{n,2}\right\|_{L^{q}(\mathbb{Q})}\lesssim\frac{1}{n^{2}}\sum_{i=0}^{ n-1}\left\|\sum_{j=0}^{n-i-1}P_{j}\psi\right\|_{L^{q}(\mathbb{Q}\times P_{ \omega})}\]
which, by Jensen's inequality and the fact \(\frac{1}{n^{2}}\sum_{k=1}^{n}\mu(\sqrt{k})\asymp\nu(n)\), implies that for any \(0<p<\frac{2d}{3d+2}\),
\[E_{\mathbb{Q}}\left[\exp\big{(}c\,|N_{n,2}/\nu(n)|^{p}\big{)}\right]\lesssim\frac{1}{n^{2}\nu(n)}\sum_{k=1}^{n}\mu(\sqrt{k})E_{\mathbb{Q}}\left[\exp\Big{(}c\Big{|}\sum_{j=0}^{k-1}P_{j}\psi\big{/}\mu(\sqrt{k})\Big{|}^{p}\Big{)}\right]\overset{\text{Theorem 8}}{\leq}C.\]
Thus, using the moment bound of \(\rho^{-1}\) in Theorem B, by Hölder's inequality,
\[E_{\mathbb{P}}\left[\exp\big{(}0.5c\,|N_{n,2}/\nu(n)|^{p}\big{)}\right]\leq \left\|\rho^{-1/2}\right\|_{L^{2}(\mathbb{P})}E_{\mathbb{Q}}\left[\exp\big{(} c\,|N_{n,2}/\nu(n)|^{p}\big{)}\right]^{1/2}\leq C.\]
By Theorem 8 we already know that \(E_{\mathbb{P}}[\exp\big{(}c\,|nL_{n,2}|^{p}\big{)}]\leq C\). Therefore, we conclude that there exists a random variable \(\mathcal{Y}_{5}\) with \(E_{\mathbb{P}}[\exp(\mathcal{Y}_{5}^{p})]<\infty\) such that
\[L_{n,2}+N_{n,2}\leq C\nu(n)\mathcal{Y}_{5}.\]
The corollary follows by applying [34, Theorem 2].
## Appendix
Define the parabolic operator \(\mathcal{L}_{\omega}\) as
\[\mathcal{L}_{\omega}u(x,t)=\sum_{y:\,y\sim x}\omega(x,y)[u(y,t)-u(x,t)]-\partial_{ t}u(x,t)\]
for every function \(u:\,\mathbb{Z}^{d}\times\mathbb{R}\to\mathbb{R}\) which is differentiable in \(t\). The following results are used in the paper.
**Theorem A.1**.: ([22, Theorem 17]) Assume \(\frac{\omega}{\operatorname{tr}\omega}>2\kappa I\) for some \(\kappa>0\). Any non-negative function \(u\) with \(\mathcal{L}_{\omega}u=0\) in \(B_{2R}\times(0,4R^{2})\) for \(R>0\) satisfies
\[\sup_{B_{R}\times(R^{2},2R^{2})}u\leq C\inf_{B_{R}\times(3R^{2},4R^{2})}u.\]
As a consequence, we have the following Hölder regularity for \(u\).
**Corollary A.2**.: Assume \(\frac{\omega}{\operatorname{tr}\omega}>2\kappa I\) for some \(\kappa>0\). There exists \(\gamma=\gamma(d,\kappa)\in(0,1)\) such that any non-negative function \(u\) with \(\mathcal{L}_{\omega}u=0\) in \(B_{R}(x_{0})\times(t_{0}-R^{2},t_{0})\), for some \((x_{0},t_{0})\in\mathbb{Z}^{d}\times\mathbb{R}\) and \(R>0\), satisfies
\[|u(\hat{x})-u(\hat{y})|\leq C\left(\frac{r}{R}\right)^{\gamma}\sup_{B_{R}(x_{0 })\times(t_{0}-R^{2},t_{0})}u\]
for all \(\hat{x},\hat{y}\in B_{r}(x_{0})\times(t_{0}-r^{2},t_{0})\) and \(r\in(0,R)\).
### A.1 Proof of Proposition 16
Proof.: Let \(p\in\operatorname{H}_{j}\) be the \(j\)-th order Taylor polynomial (around \(0\)) of \(\upsilon\). Then
\[\sup_{B_{\theta R}}|\upsilon-p|\leq C(\theta R)^{j+1}\sup_{B_{R/3}}|D^{j+1}\upsilon|.\]
This gives \(D^{j+1}_{B_{\theta R}}(\upsilon)\lesssim(\theta R)^{j+1}\sup_{B_{R/3}}|D^{j+1}\upsilon|\). Furthermore, for any \(q\in\operatorname{H}_{j}\), \(j\leq 2\), note that \(D(\upsilon-q)\) is an \(\bar{a}\)-harmonic function. Hence, by [27, Theorem 2.10],
\[\sup_{B_{R/3}}|D^{j+1}\upsilon| =\sup_{B_{R/3}}|D^{j+1}(\upsilon-q)|\] \[\leq\frac{C}{R^{j}}\sup_{B_{5R/12}}|D(\upsilon-q)|\] \[=\frac{C}{R^{j}}\sup_{x\in B_{5R/12}}\Big{|}\fint_{B_{R/12}(x)}D(\upsilon-q)\Big{|}\leq\frac{C}{R^{j+1}}\sup_{B_{R/2}}|\upsilon-q|\]
for \(j\leq 2\). Hence, taking the infimum over \(q\in\operatorname{H}_{j}\), we get \(D^{j+1}_{B_{\theta R}}(\upsilon)\lesssim\theta^{j+1}D^{j+1}_{B_{R/2}}(\upsilon)\) for \(j\leq 2\). The first statement is proved.
To prove the second statement, observe that for any \(x\in\mathbb{B}_{R/2}\), there are \(2^{d}\) points \(y_{i}\in\bar{B}_{R/2}\), \(i\in\Lambda=\{1,\ldots,2^{d}\}\), such that \(|y_{i}-x|\leq 1\) and \(x\) is a convex combination of the \(y_{i}\)'s. That is, \(x=\sum_{i\in\Lambda}\alpha_{i}y_{i}\) for some \(\alpha_{i}\geq 0\) with \(\sum_{i\in\Lambda}\alpha_{i}=1\). Let \(p\in\mathrm{H}_{j}\), \(j\leq 2\), be such that \(\max_{B_{2R/3}}|v-p|\leq 2D_{2R/3}^{j+1}(v)\) and denote the Hessian matrix of \(p\) by \(M_{p}\). Then, for \(x\in\mathbb{B}_{R/2}\), \(j\leq 2\),
\[|v(x)-p(x)| \leq[v]_{1;\mathbb{B}_{R/2+1}}+\sum_{i\in\Lambda}\alpha_{i}|v(y_{i })-p(y_{i})|+\left|p(x)-\sum_{i\in\Lambda}\alpha_{i}p(y_{i})\right|\] \[\leq[v]_{1;\mathbb{B}_{R/2+1}}+\max_{\bar{B}_{R/2}}|v-p|+CR|M_{p}|. \tag{87}\]
Further, using the fact (see [27, Cor.6.3]) that
\[R[v]_{1;\mathbb{B}_{R/2+1}}\lesssim\sup_{\mathbb{B}_{2R/3}}|v|+R^{2}|c_{0}| \lesssim\sup_{\partial\mathbb{B}_{2R/3}}|v|+R^{2}|c_{0}|\]
and (Note that the following bound is not needed for the case \(j=1\) where \(M_{p}\equiv 0\).)
\[R^{2}|M_{p}|\lesssim\max_{y\in B_{R/2}}|p(y)+p(-y)-2p(0)|\lesssim\max_{B_{R/2} }|v-p|+\max_{B_{R/2}}|v|\lesssim D_{2R/3}^{3}(v)+\max_{B_{R/2}}|v|,\]
display (87) implies, for \(j\leq 2\),
\[D_{\mathbb{B}_{R/2}}^{j+1}(v)\lesssim\frac{1}{R}\sup_{\partial\mathbb{B}_{2R/3 }}|v|+R|c_{0}|+D_{2R/3}^{j+1}(v).\]
The second claim follows.
### A.2 Verification of (68)
In this subsection we will verify the inequality
\[\int_{0}^{\infty}(1+t)^{-d/2}\exp\left[-\frac{t}{R^{2}}-c\mathfrak{h}(|y|,t)\right]\mathrm{d}t\lesssim\upsilon(|y|),\quad\forall y\in\mathbb{Z}^{d}.\]
We break the integral on the left side of the above inequality as
\[\int_{0}^{\infty}=\int_{0}^{|y|/2}+\int_{|y|/2}^{|y|^{2}}+\int_{|y|^{2}}^{\infty}=:\mathrm{I}+\mathrm{II}+\mathrm{III}.\]
It suffices to consider the case \(|y|\geq 1\). First, with \(c_{2}>0\) sufficiently small,
\[\mathrm{I}=\int_{0}^{|y|/2}(1+t)^{-d/2}\exp\left(-\frac{t}{R^{2}}-c|y|\log \frac{|y|}{t}\right)\mathrm{d}t\leq|y|e^{-c|y|}\lesssim\upsilon(|y|).\]
Moreover, noting that \(-\frac{t}{2R^{2}}-c\frac{|y|^{2}}{t}\lesssim-\frac{|y|}{R}\),
\[\mathrm{II} =\int_{|y|/2}^{|y|^{2}}(1+t)^{-d/2}\exp\left(-\frac{t}{R^{2}}-c\frac{|y|^{2}}{t}\right)\mathrm{d}t\] \[\lesssim e^{-c|y|/R}\int_{0}^{|y|^{2}}t^{-d/2}e^{-c|y|^{2}/t}\mathrm{d}t\] \[\lesssim e^{-c|y|/R}|y|^{2-d}\int_{1}^{\infty}s^{d/2-2}e^{-cs}\mathrm{d}s\lesssim\upsilon(|y|).\]
Similarly, for \(d=2\),
\[\mathrm{III} \lesssim e^{-c|y|/R}\int_{|y|^{2}}^{\infty}(1+t)^{-d/2}\exp\left(-\frac{t}{2R^{2}}\right)\mathrm{d}t\] \[\lesssim e^{-c|y|/R}\int_{|y|^{2}/R^{2}}^{\infty}s^{-1}e^{-s/2}\mathrm{d}s\] \[\lesssim e^{-c|y|/R}\left[1+\int_{0}^{\infty}s^{-1}\mathbb{1}_{\{|y|^{2}/R^{2}\leq s\leq 1\}}\mathrm{d}s\right]\lesssim\upsilon(|y|).\]
For \(d\geq 3\), we have
\[\mathrm{III}\lesssim e^{-c|y|/R}\int_{|y|^{2}}^{\infty}(1+t)^{-d/2}\mathrm{d}t\lesssim\upsilon(|y|).\]
Therefore, the above bounds on I, II, and III imply inequality (68).
### A.3 Verification of (70)
When \(d=2\),
\[\int_{0}^{\infty}e^{-cr/R}\upsilon(r)^{2}r^{d-1}\mathrm{d}r \lesssim\int_{0}^{\infty}e^{-cr/R}[1+(\log\tfrac{R}{(r+1)\wedge R})^{2}]r\,\mathrm{d}r\] \[\lesssim R^{2}+\int_{1}^{R}e^{-cr/R}\left(\log\frac{R}{r}\right)^{2}r\,\mathrm{d}r\lesssim R^{2}.\]
When \(d=3\),
\[\int_{0}^{\infty}e^{-cr/R}\upsilon(r)^{2}r^{d-1}\mathrm{d}r\lesssim\int_{0}^{\infty}e^{-cr/R}\mathrm{d}r\lesssim R.\]
When \(d=4\),
\[\int_{0}^{\infty}e^{-cr/R}\upsilon(r)^{2}r^{d-1}\mathrm{d}r\lesssim\int_{0}^{R}(1+r)^{-1}\mathrm{d}r+\int_{R}^{\infty}e^{-cr/R}R^{-1}\mathrm{d}r\lesssim\log R.\]
When \(d\geq 5\),
\[\int_{0}^{\infty}e^{-cr/R}\upsilon(r)^{2}r^{d-1}\mathrm{d}r\lesssim\int_{0}^{\infty}(1+r)^{-2}\mathrm{d}r=1.\]
### A.4 Verification of (78)
We verify the first part of (78). For functions \(u,v\) on \(\mathbb{Z}^{d}\),
\[\nabla_{e}^{2}(uv)(x) =u(x+e)v(x+e)+u(x-e)v(x-e)-2u(x)v(x)\] \[=v(x)\nabla_{e}^{2}u(x)+u(x+e)[v(x+e)-v(x)]+u(x-e)[v(x-e)-v(x)].\]
From this we have the expression
\[L_{\omega}(uv)(x)=u(x)L_{\omega}v(x)+v(x)L_{\omega}u(x)+\sum_{y:\,y\sim x}\omega(x,y)[u(y)-u(x)][v(y)-v(x)].\]
In particular, if \(u\) is \(C^{2}\) in \(\mathbb{R}^{d}\), then, by Taylor expansion of \(u\),
\[\nabla^{2}_{e}(uv)(x) =v(x)\nabla^{2}_{e}u(x)+[u(x)+D_{e}u(x)+\tfrac{1}{2}D^{2}_{e}u(y)][v(x+e)-v(x)]\] \[\qquad+[u(x)-D_{e}u(x)+\tfrac{1}{2}D^{2}_{e}u(z)][v(x-e)-v(x)]\] \[=v(x)\nabla^{2}_{e}u(x)+u(x)\nabla^{2}_{e}v(x)+D_{e}u(x)[v(x+e)-v(x-e)]\] \[\qquad+\tfrac{1}{2}[D^{2}_{e}u(y)a_{+}(x)+D^{2}_{e}u(z)a_{-}(x)]\nabla^{2}_{e}v(x)\] \[\leq v(x)\nabla^{2}_{e}u(x)+u(x)\nabla^{2}_{e}v(x)+D_{e}u(x)[v(x+e)-v(x-e)]+\|u\|_{C^{2}}\nabla^{2}_{e}v(x),\]
with \(a_{\pm}(x)=[v(x\pm e)-v(x)]/\nabla^{2}_{e}v(x)\), and \(y,\,z\) points within the line segment \([x-e,x+e]\). Note that \(a_{+}+a_{-}=1\).
2302.07346 | ScatterShot: Interactive In-context Example Curation for Text
Transformation | The in-context learning capabilities of LLMs like GPT-3 allow annotators to
customize an LLM to their specific tasks with a small number of examples.
However, users tend to include only the most obvious patterns when crafting
examples, resulting in underspecified in-context functions that fall short on
unseen cases. Further, it is hard to know when "enough" examples have been
included even for known patterns. In this work, we present ScatterShot, an
interactive system for building high-quality demonstration sets for in-context
learning. ScatterShot iteratively slices unlabeled data into task-specific
patterns, samples informative inputs from underexplored or not-yet-saturated
slices in an active learning manner, and helps users label more efficiently
with the help of an LLM and the current example set. In simulation studies on
two text perturbation scenarios, ScatterShot sampling improves the resulting
few-shot functions by 4-5 percentage points over random sampling, with less
variance as more examples are added. In a user study, ScatterShot greatly helps
users in covering different patterns in the input space and labeling in-context
examples more efficiently, resulting in better in-context learning and less
user effort. | Tongshuang Wu, Hua Shen, Daniel S. Weld, Jeffrey Heer, Marco Tulio Ribeiro | 2023-02-14T21:13:31Z | http://arxiv.org/abs/2302.07346v1 | # ScatterShot: Interactive In-context Example Curation for Text Transformation
###### Abstract.
The in-context learning capabilities of LLMs like GPT-3 allow annotators to customize an LLM to their specific tasks with a small number of examples. However, users tend to include only the most obvious patterns when crafting examples, resulting in underspecified in-context functions that fall short on unseen cases. Further, it is hard to know when "enough" examples have been included even for known patterns. In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. ScatterShot iteratively slices unlabeled data into task-specific patterns, samples informative inputs from under-explored or not-yet-saturated slices in an active learning manner, and helps users label more efficiently with the help of an LLM and the current example set. In simulation studies on two text perturbation scenarios, ScatterShot sampling improves the resulting few-shot functions by 4-5 percentage points over random sampling, with less variance as more examples are added. In a user study, ScatterShot greatly helps users in covering different patterns in the input space and labeling in-context examples more efficiently, resulting in better in-context learning and less user effort.
## 1. Introduction
In-context learning (Tongshuang et al., 2017) is a property of Large Language Models (LLMs), where a user can "write" a transformation function via an (optional) short set of instructions and a few (input, output) examples. For example, writing a function that "translates" a holiday name (e.g. "Christmas") into its calendar date (e.g. "12/25") would previously require a complicated rule-based system capable of mapping various kinds of subtly different inputs (e.g. "Xmas", "Christmas day", etc.) to a lookup table of dates. With LLMs like GPT-3 (Becker et al., 2016), the process is much simpler. A user can achieve the same functionality with a _prompt_ (_i.e._, a natural language instruction) that contains a small number (_e.g._, two) of simple demonstrations, followed by a query (underlined): "Christmas => 12/25; Halloween => 10/31; Independence Day (US) =>". GPT-3 would take the prompt and return the right date "7/04" for this query. More impressively, the LLM will also generalize to semantically related queries, _e.g._, queries with abbreviations ("Xmas => 12/25", "NYE => 12/31"), misspellings ("st patics day => 03/17"), lesser-known name variations ("All Saints' Eve => 10/31"), and holidays that might be overlooked (_e.g._, "Harriet Tubman Day => 3/10"). The much reduced programming effort (compared to, _e.g._, rule-based systems) draws users towards building personalized in-context functions in various use scenarios, including code generation, question answering, creative writing, and others (Shi et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018).
Although in-context learning has great potential, the quality of the learned function for non-trivial tasks depends on which in-context examples are used as demonstrations (Shi et al., 2017; Wang et al., 2018). Techniques for automatic example selection (Shi et al., 2017) depend on existing labeled datasets and tasks that can be evaluated automatically (_e.g._, classification), and thus users "in the wild" rely on their own ingenuity and intuition when coming up with demonstrations (Krishnan et al., 2018). Unfortunately, users tend to focus on the most obvious and memorable patterns for demonstration (Krishnan et al., 2018), leading to systematic omissions (Shi et al., 2017) and _underspecification_ that might go unnoticed. As an example, in Figure 1 we use in-context learning to build a function to extract and normalize temporal information from a sentence (Shi et al., 2017). Most users would provide demonstrations with common date formats (e.g. "Oct. 23, 1999"), and some might remember relative date references (e.g. "today"). However, some patterns are easy to miss, e.g. long-form dates with no capitalization or holidays (e.g. "nineteen ninety-six", "Thanksgiving Day" in Figure 1C), and the LLM may fail to learn them if they are omitted. Even sampling random examples from the unlabeled data might lead to the repetition of common patterns (Figure 1B) at the expense of demonstrating less-common ones. What is worse, users may not know when they have provided enough examples, and whether there are any uncovered patterns in the
unlabeled data. As a result, prior work summarized the two major pain points of prompting to be (1) the effort required to source examples for a prompt, and (2) the difficulty of evaluating whether a prompt is improving (Kolomek et al., 2019).
In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. In a nutshell, ScatterShot helps users find informative input examples in the unlabeled data, annotate them efficiently with the help of the current version of the learned in-context function, and estimate the quality of said function. In each iteration, ScatterShot automatically slices the unlabeled data into clusters based on task-specific _key phrases_(Shi et al., 2017; Shi et al., 2017). For example, given the existing examples in Figure 1A, it finds a cluster based on holiday key phrases ("Christmas", "Thanksgiving", etc.) and a cluster based on exact dates like "Oct. 23, 1999" (among others). ScatterShot keeps a running estimate of the error of each cluster, and thus prioritizes examples from clusters that have not yet been explored or learned effectively. It further uses the _stability_ of the current in-context function with respect to minor changes in the prompt (e.g. ordering of in-context examples), prioritizing unlabeled examples that get different predictions with different prompt variations. Users are then presented with examples of underexplored clusters (_e.g.,_ Figure 1 C1), or hard examples of explored clusters (_e.g.,_ C2, hard because the past tense refers to the Thanksgiving date of the _previous_ year). Instead of having to label demonstrations from scratch, users can either accept correct predictions from the current function (Fig 1 C1) or make edits to fix wrong predictions (Fig 1 C2). These additional labels are used to update the in-context function, such that the user explores the different possible input patterns in an interactive manner, without wasting resources on patterns that have already been learned.
We evaluate ScatterShot both in terms of sampling efficiency and support for human annotators. In simulation experiments, we compare the sampling strategy in ScatterShot to random sampling on two text transformation tasks contemplated in prior work: the data wrangling task illustrated in Figure 1 (Shi et al., 2017), and rewriting question-answer pairs into logically equivalent pairs in order to evaluate model consistency (Shi et al., 2017). In both cases, we find ScatterShot improves performance on corresponding metrics (_e.g.,_ Rouge-L, F1) by 4-5 percentage points, with less variance for various values of \(k\) demonstrations. Further, we conduct a within-subject user study in which 10 participants build in-context functions for the QA-pair rewriting task either (1) manually, (2) with the ScatterShot interface but random sampling, or (3) with the fully-featured ScatterShot. We show that ScatterShot's interface alone is an improvement, by offloading input selection and providing sample outputs. Moreover, the sampling strategy in the fully-featured ScatterShot helps users notice diverse input patterns, leading to improvements in the resulting in-context function. For example, participants who thought their in-context examples were sufficient when using random samples labeled an additional 1.4 times as many examples after switching to the full ScatterShot (as they found new patterns), which further improved the function's test performance. We conclude the paper with insights into challenges and opportunities that arise from our experiments, including _e.g.,_ explaining the sampling rationales, incorporating automated blind-spot detection, and the potential of using a ScatterShot setup to help users iteratively refine their task definition during data collection.
## 2. The Design of ScatterShot
The goal of ScatterShot is to help users iteratively find and label high-quality demonstrative examples to build effective in-context
Figure 1. An overview of how human annotators can use ScatterShot to iteratively collect effective in-context examples for temporal expression extraction and normalization. The function extracts phrases with temporal meaning from sentences (_e.g.,_ “Oct. 23, 1999” in “Slepian was killed on Oct. 23, 1999”), and normalizes them into standard formats (“Oct. 23, 1999 == 1999-10-23”) — the red spans represent information deleted from the input, and the green ones represent information generated in the output. Given an in-context example set that is likely _underspecifying_ the intended functionality (A), ScatterShot applies slice-based sampling to return unlabeled inputs that either have novel patterns or are difficult cases, and uses the existing examples to drive an LLM (_e.g.,_ GPT-3) to suggest (possibly noisy) annotations, such that humans can correct the suggested annotations and possibly expand the in-context example bucket. Compared to random sampling and manual labeling (B), ScatterShot helps humans re-allocate annotation budgets towards informative examples, and increases the in-context function performance.
functions. In order to be effective, a function must be able to handle **common patterns** (_e.g.,_ the temporal normalization function in Figure 1 must be able to handle common temporal expressions such as "today"), **without neglecting less common ones** (_e.g.,_ holidays such as "Christmas"). Further, we want the process to be **cost-effective**, not wasting annotation effort on demonstrations that are redundant with already covered patterns. To achieve these goals, we design ScatterShot to respond to every user interaction by offering assistance in three areas:
* **Help the user discover previously unexplored patterns**. In each iteration, ScatterShot uses _existing_ demonstrations and users' past interactions to cluster the remaining unlabeled data into task-specific slices. Such slices map the input space for users to explore.
* **Help the user prioritize the most informative examples**. ScatterShot uses the current in-context function to estimate the difficulty of slices _and_ examples, prioritizing unexplored slices or slices and examples where the current function is not yet performing well. We call this variant of active learning _slice-based_ sampling.
* **Minimize annotation cost**. Rather than providing a gold output label from scratch for each example, the user is presented with the best guess output from the _current_ in-context function (updated at every step), which they either accept when correct or _edit_ the incorrect parts.
We wrap these functionalities with a lightweight interface, where at each round, users are presented with a batch of unlabeled examples to be (potentially) added to the set of demonstrations. Thus, at each round, users get a "picture" of their current in-context function, and interact with it for improvement. We now detail each one of these components.
### Interactive Interface
We present ScatterShot as an interactive interface, shown in Figure 2. The interface contains a task description (A\({}_{1}\)) and existing in-context examples as demonstrations, presented as input-output pairs (A\({}_{2}\)). These pairs are color-encoded based on the text edit distance, with the spans deleted from the input in red, and the spans added in green. Both the description and demonstrations are editable, and are automatically translated into an LLM prompt (Figure 2B) with the task description, demonstrations in the format [example input] => [example output], and a candidate input for the LLM1 to transform into an output.
Footnote 1: All of our studies and experiments are run on GPT-3 (Carrone et al., 2017), [https://beta.openai.com/](https://beta.openai.com/)
Below the existing examples, ScatterShot proposes a batch of 5 _candidate_ inputs sampled from the unlabeled dataset, with outputs computed with the current version of the in-context function (A\({}_{3}\)), using the prompt in Figure 2B. Users then verify the candidates and provide feedback (A\({}_{3}\)), editing outputs to fix mistakes when needed (_e.g.,_ changing from "Thanksgiving == 2000-11-25" to "Thanksgiving == 1999-11-25", A\({}_{4}\)), and adding or removing examples to the few-shot examples for in-context learning (A\({}_{5}\)). In addition to saving annotation time, LLM-generated outputs help users assess the quality of the current version of the in-context function. For example, if all LLM outputs are correct for a few consecutive batches, it is likely that the existing few-shot examples cover the patterns in the unlabeled data, and thus labeling can stop.
The interface is task-agnostic and can be used whenever users want to learn one-to-one mappings between text inputs and outputs. This format is flexible, encompassing both classification tasks (where the output is just the class name) and generation tasks like summarization, though the color encoding would be most effective for text transformation tasks where the edits from inputs to outputs are worth highlighting. For example, Figure 5 shows
Figure 2. (A) The ScatterShot interface, with (A\({}_{1}\)) task description, (A\({}_{2}\)) existing in-context examples, and (A\({}_{3}\)) candidate examples awaiting human inspection. Through interactions A\({}_{4}\) and A\({}_{5}\). Users can make edits to LLM outputs, sort the candidates into positive demonstrative examples (+), negative ones (-), or just not include the candidate (O). The description and the examples get transformed into raw text prompts. One set of in-context examples produces multiple prompts depending on how the examples are ordered; (B) shows a prompt with one possible ordering.
how the same interface is used for another question-answer pair rewriting task. ScatterShot can be easily invoked in a Jupyter Notebook, and therefore can support users' natural workflows.
### Slice-based Sampling
#### 2.2.1. Identifying patterns with key phrase clustering
To help users explore both _common_ and _less common_ patterns, we need a way to partition the unlabeled input examples. While there are a variety of task-agnostic distance metrics that could be used for clustering (_e.g.,_ cosine similarity of sentence embeddings (Srivastava et al., 2016)), our preliminary exploration indicated that these are typically too coarse when applied to _specific_ tasks. For example, using the embeddings from Reimers and Gurevy (Srivastava et al., 2016), "Took a photo today," is closer to "Saw a photo on Flickr." (similarity = 0.50) than to "Are you going to yoga class today?" (similarity = 0.30). While this may make sense in the abstract, it does not correspond to how we would want to slice examples for the temporal extraction task in Figure 1, where date references "today" are more important than subject matter ("photos" vs "yoga class"). Thus, we propose a _task-specific_ clustering method based on key phrases as explained below.
**Detecting key phrases in demonstrations.** While key phrase extraction in general may require domain knowledge (Beng et al., 2015; Srivastava et al., 2016; Gurevy et al., 2016), for text transformation we can leverage the signal present in the relationships between input and output, _i.e._, in which parts of the input are perturbed or retained. For example, "today" is retained in the output of both "Took a photo today." and "Are you going to yoga class today?" (among many other samples), and thus it is probably a key phrase. Formally, given a labeled, positive example, _i.e.,_ a pair of original and perturbed sentences \(f(x)=>y\), we extract as key phrases either the unmodified parts of \(x\) when most of \(x\) is changed (normalized Levenshtein edit distance \(d(x,y)\geq 0.5\), as is the case with the "today" examples above), or the modified parts when most of \(x\) remains unchanged.
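To make this heuristic concrete, below is a minimal sketch (our own illustration, not the authors' code): it assumes whitespace tokenization and uses difflib's normalized similarity in place of the Levenshtein ratio.

```python
# Sketch of the key-phrase heuristic: if most of x changed, keep the
# unmodified spans; if most of x survived, keep the modified spans.
import difflib

def key_phrases(x: str, y: str) -> list[str]:
    a, b = x.split(), y.split()
    sm = difflib.SequenceMatcher(a=a, b=b)
    mostly_changed = sm.ratio() < 0.5  # ratio is similarity in [0, 1]
    kept, changed = [], []
    for op, i1, i2, _, _ in sm.get_opcodes():
        span = " ".join(a[i1:i2])
        (kept if op == "equal" else changed).append(span)
    return [s for s in (kept if mostly_changed else changed) if s]

# Mostly unchanged pair -> the modified span "today." is the key phrase.
print(key_phrases("Took a photo today.", "Took a photo yesterday."))
```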
**Applying key phrases to unlabeled inputs.** Applying key phrases naively with an exact match would yield low coverage in the unlabeled data (especially for larger phrases). To get more coverage, at each iteration, we generalize key phrases extracted from labeled demonstrations into _templates_ with combinations of tokens, lemmas, and part-of-speech tags (Srivastava et al., 2016; Gurevy et al., 2016), _e.g._,"today" is expanded into today, NOUN, and DATE. We then select representative templates with a greedy weighted set coverage algorithm based on their specificity and the number of inputs they cover (Gurevy et al., 2016). Example templates at various abstraction levels are shown in Figure 3A.
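As an illustration of the template generalization, a sketch with spaCy could look as follows (the model name and the exact abstraction levels are our assumptions; the greedy weighted set-cover selection is omitted):

```python
# Expand a key phrase into templates at several abstraction levels,
# e.g. "today" -> {"today", "NOUN", "DATE"}.
import spacy

nlp = spacy.load("en_core_web_sm")

def templates(phrase: str) -> set[str]:
    doc = nlp(phrase)
    return {
        " ".join(t.text for t in doc),                 # exact tokens
        " ".join(t.lemma_ for t in doc),               # lemmas
        " ".join(t.pos_ for t in doc),                 # part-of-speech tags
        " ".join(t.ent_type_ or t.pos_ for t in doc),  # entity type, if any
    }

print(templates("today"))  # e.g. {'today', 'NOUN', 'DATE'}
```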
**Key phrase clustering.** We define the distance between two inputs as the minimum cosine distance between the sentence embeddings (Srivastava et al., 2016) _of their key phrases_, and use agglomerative clustering (Srivastava et al., 2016) to recursively merge pairs of clusters in the unlabeled data. We set the number of clusters to 20 (chosen empirically in Section 3), and aggregate all clusters with \(<10\) examples into a single "outlier" cluster (Figure 3B1). Note that we recompute clusters in every iteration, and thus the outlier cluster tends to shrink as the user interacts with the system. Figure 3B contains various examples of discovered clusters.
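A sketch of this clustering step is below (our reconstruction: the embedding model name and the scikit-learn parameters are assumptions, `metric="precomputed"` is `affinity="precomputed"` in older scikit-learn versions, and each input is assumed to have at least one key phrase):

```python
# Cluster unlabeled inputs by the min cosine distance between the
# sentence embeddings of their key phrases; merge tiny clusters into
# a single "outlier" slice labeled -1.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

def cluster_inputs(key_phrases_per_input: list[list[str]], n_clusters: int = 20):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embs = [model.encode(ps, normalize_embeddings=True) for ps in key_phrases_per_input]
    n = len(embs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # normalized embeddings: dot product == cosine similarity
            dist[i, j] = dist[j, i] = 1.0 - (embs[i] @ embs[j].T).max()
    labels = AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed", linkage="average"
    ).fit_predict(dist)
    sizes = np.bincount(labels, minlength=n_clusters)
    return np.where(sizes[labels] < 10, -1, labels)  # outlier slice = -1
```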
Note that as a result of the weighted coverage selection, the templates -- and thereby the extracted key phrases -- are dynamically changing, and will eventually become more dominant in the sampling procedure: when the few-shot set contains only a few (e.g., 3) seeding examples, the templates might be biased or even non-existent, most examples will just use the full sentences as key phrases, making it similar to vanilla clustering on full examples.
Figure 3. An overview of ScatterShot’s slice-based sampling. We use the data status from the 1st to the \((i-1)\)-th iteration to perform sampling for the \(i\)-th iteration. As shown in (A), we use the already sampled input-output pairs to extract _templates_ for _task-specific_ key phrases. We use these templates to extract key phrases for each unlabeled input, which are the blue highlights in (B). For example, PROPN helps extract “Christmas” from “@virredom Merry Christmas!”. We run Agglomerative Clustering on the sentence embedding of these key phrases to find task-specific data slices, which contain both not-yet labeled examples (marked with “?”) as well as those that have been sampled (“/” for correctly predicted in previous iterations and “\(X\)” for incorrect predictions). We rank these slices by a reward function \(\mu\) based on slice size, estimated model performance, and sample frequency, and draw samples from the top clusters.
However, as we add more examples, the templates will be more balanced and eventually stabilize, at which point the clustering can rely more on the extracted key phrases.
#### 2.2.2. Selecting slices for exploration
We want to explore the identified slices in an efficient way, avoiding slices already "solved," and making sure the user discovers any unexplored patterns. We take inspiration from the UCB algorithm (Cheng et al., 2017), and use an upper bound estimate of the error of our function in each slice as part of the "reward" for sampling from that slice. Formally, suppose slice \(c\) has \(n\) examples, \(m\) of which are labeled in previous iterations (see the next section for "labeling" details). Further, suppose that out of the \(m\) previously labeled examples, the current function is correct on \(k\).2 The reward of drawing from slice \(c\) at iteration \(i\) is then given by:
Footnote 2: If an example is in the in-context set, we perform cross-validation and predict its output using the remaining examples.
\[\mu_{i,c}=\underbrace{\left(1-\frac{k}{m}\right)}_{\text{Error rate}}\cdot\underbrace{\ln n}_{\text{Size}}+\underbrace{\sqrt{\frac{\ln i}{m}}}_{\text{Sampling parity}}\]
In other words, we prioritize _large slices_ (\(\ln n\)), _low performance_ (\(1-k/m\)), and slices that have not been sampled many times (\(\sqrt{\ln i/m}\), which would give higher weights to clusters with smaller \(m\) as the iteration \(i\) progresses). Thus, we avoid wasting annotation effort on slices that are already "solved", but keep drawing from slices we can't yet deal with and slices we have not yet explored.
Figure 3B shows four data slices in temporal extraction ranked by reward \(\mu\). Slice 1 is the "outlier" cluster, where patterns are not yet apparent. This slice still gets prioritized due to its large size (\(n=449\)), even though it has been sampled \(m=10\) times, which encourages either higher accuracy or further slicing in follow-up iterations. Slice 2 is a slice with holiday-based key phrases. Though the slice is small (\(n=19\)), the LLM failed whenever it was previously sampled (\(k/m=0\)), and thus it currently represents a hard pattern. Slice 3 is a slice with past date references, while slice 4 is a slice with the common temporal pattern represented by the words "today", "yesterday", and "tomorrow". This last slice has low priority despite being common, since the LLM had perfect accuracy whenever a sample from it was drawn. To maximize diversity (similar to batched active learning (Kang et al., 2017; Li et al., 2018; Li et al., 2019)), we rank the slices by reward and select one example from each until the batch is filled (in our case, batch size = 5).
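To make the ranking concrete, here is a minimal Python transcription of the reward; the slice sizes and \(m=10\) come from Figure 3B, but the \(k\) values and the \(m=0\) convention are our own illustrative assumptions:

```python
import math

def slice_reward(n: int, m: int, k: int, i: int) -> float:
    """mu_{i,c} = (1 - k/m) * ln(n) + sqrt(ln(i) / m)."""
    if m == 0:
        # Our convention (not specified in the paper): never-sampled
        # slices get explored first.
        return float("inf")
    return (1 - k / m) * math.log(n) + math.sqrt(math.log(i) / m)

# (n, m, k) per slice: size, #labeled so far, #labeled-and-correct (k is hypothetical).
slices = {"outliers": (449, 10, 7), "holidays": (19, 4, 0)}
i = 6  # current iteration
ranked = sorted(slices, key=lambda c: slice_reward(*slices[c], i=i), reverse=True)
print(ranked)  # draw one example from each top slice until batch size 5 is reached
```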
#### 2.2.3. Saving user effort with implicit labels and pseudo-labeling
As mentioned above, our per-slice performance estimation requires _labeled examples_. Unfortunately, we only have firm labels on user-added in-context examples, which may be quite small, especially if users only add a portion of the sampled data. As a result, in-context examples offer limited power for estimation. Although we can modify the interface to collect additional user labels on output correctness, it requires additional interaction that can be cumbersome. To save user effort, we use _implicit labeling, i.e._, we label the LLM output of an example in a batch as correct if the user does not make any changes to the output, even if they do not add it to the in-context demonstration set. Of course, users might ignore model errors if they are frustrated or distracted, but we verified in pilot experiments that users almost always make corrections in the presence of model mistakes (\(\sim\)87% of the time, and the selection method is robust to this small amount of noise). In comparison to explicit labeling, this method requires the bare minimum user interaction, and is easier to integrate into iteration workflows.
Still, implicit labeling requires users to actually _see and interact_ with a sample. However, after a certain point in the process, the LLM is correct often enough that many interactions would simply be "accepted" (no changes) by the user. While important for estimating slice accuracy, too much of such interaction might also lead users to overestimate the in-context function quality, and stop the process before they explore the remaining slices. Thus, after we reach a threshold of quality (the LLM is correct on 70% of examples in two consecutive rounds), we start leveraging pseudo-labeling with _unanimity voting_, a method inspired by the unanimity principle (Kang et al., 2017) and Query-by-Committee (Li et al., 2019). Following the observation of Lu et al. (Lu et al., 2019) that the order of in-context demonstrations can drastically change LLM performance, we form three different prompts by randomly reordering the examples. When the outputs of the prompts agree (_i.e._, are unanimous), we use that output as a _pseudo-label_, both for estimating slice accuracy and as a filtering method (_i.e._, these examples are not shown to the user). Figure 4 illustrates this process, where "@viereedom Merry Christmas" (A) is pseudo-labeled due to unanimity, and "Atlanta nineteen ninety-six" (B) yields different predictions, and is thus shown to the user for manual inspection.
Figure 4. An illustration of ScatterShot’s two-step correctness estimation. When the in-context function demonstrates reasonable quality in the last two iterations, we first employ _unanimity voting, i.e._, we use three different orderings of in-context examples (noted with the three dots with different grey shades) to get three outputs, and say the function is correct if all the outputs are the same, without showing the input to the human (A). When the outputs are different, we return the one with the highest output probability for _user inspection_ (underlined), in which manual editing would imply that the function is wrong (B).
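A minimal sketch of this voting scheme, assuming an LLM client wrapped as `complete(prompt) -> str`; the wrapper and the prompt format are ours, not the paper's:

```python
import random

def build_prompt(examples, query):
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    return "\n\n".join(lines + [f"Input: {query}\nOutput:"])

def unanimity_vote(examples, query, complete, n_orders=3, seed=0):
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_orders):
        order = examples[:]
        rng.shuffle(order)        # reordering demos can change LLM behavior
        outputs.append(complete(build_prompt(order, query)).strip())
    if len(set(outputs)) == 1:
        return outputs[0], True   # unanimous: pseudo-label, skip user inspection
    # The paper returns the highest-probability output for user inspection;
    # lacking token probabilities here, we return the first as a stand-in.
    return outputs[0], False
```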
## 3. Simulation Experiment: ScatterShot Sampling vs. Random Sampling
In this section, we measure the effectiveness of slice-based sampling, when compared to random sampling on two text transformation tasks. We use datasets for which we have labels on both tasks, so that we can simulate the labeling process with an oracle at scale, and evaluate the learned function on a held-out portion of each dataset.
### Tasks and Datasets
**Temporal expression extraction and normalization**. The _Temporal_ task involves data wrangling (Zhu et al., 2017), where the goal is extracting phrases with temporal expressions from sentences or documents, and normalizing them into a standard format (Wrangling, 2018). As shown in Figure 1, these can include absolute or relative dates, and can have different granularity (_e.g._, exact date vs. year only).
**Data.** We take the data from (Bach et al., 2018), containing temporal expression datasets, including TimeBank (Zhu et al., 2017) (news articles) and TweeTime (Twee et al., 2018) (tweets). We process each dataset into sentences, discarding any date annotations that could not be normalized to the format YYYY-MM-DD (for consistency), and keeping sentences involving absolute dates, dates relative to the document publication date, or no time expressions at all (as the pool for negative examples). This resulted in 491 examples with YYYY-MM-DD outputs, and 369 negative examples with the output N/A. We sample 100 examples randomly from this dataset as a _test set_, and use the remaining examples as our unlabeled pool in the experiment.
**Evaluation.** Following Chang and Manning (Chang and Manning, 2018), we report F1, recall, and accuracy for temporal expression extraction and normalization separately.
**Question-Answer Pair Implication**. For the _QA-pair_ task, we use ScatterShot to replicate transformation functions from prior work. Given a question-answer (QA) pair, Ribeiro et al. (Ribeiro et al., 2019) wrote a rule-based system (over \(1,000\) lines of code3) to generate a new QA pair that is _implied_ by the original pair, to check whether question answering systems are consistent in their reasoning. We replicate their _logical equivalence_ transformation, where the original QA is rewritten to a logically equivalent form, e.g. "Q: What room is this? A: bathroom" is transformed to "Q: Is this a bathroom? A: yes". Despite the heavy engineering, the rule-based system is not able to cover many inputs, and often produces text that does not look fluent or natural. We thus apply in-context learning to this task, and use ScatterShot to select the examples.
Footnote 3: [https://github.com/marcotcr/qa_consistency/](https://github.com/marcotcr/qa_consistency/)
**Data.** We download the input sentences and rule-based implications from Ribeiro et al. (Ribeiro et al., 2019), and manually inspect and label \(1,000\) randomly sampled QA pairs (351 rule-based implications were noisy and had to be relabeled). We randomly sample 100 pairs as a test set, and use the remaining pairs as our unlabeled pool in the experiment.
**Evaluation.** We follow the standard in text generation and report the Rouge-L F scores (Rasmaglia et al., 2019), as well as BLEU-4 (Rasmaglia et al., 2019).
### Procedure and Baseline
We compare _ScatterShot_'s slice-based sampling with a _Random_ sampling baseline, which is the most common sampling method, used especially in complex tasks, e.g., in text translation (Bach et al., 2018). We use GPT-3 as our underlying LLM, with greedy decoding (temperature=0) in both conditions. In each simulation run, we start the process with three random input-output pairs (the same for both conditions). At every iteration, we compare the ground truth label with the candidate label proposed by the current in-context function. When the labels differ, we add the pair (input, oracle output) to the in-context example set, simulating the case where the user corrects a transformation and adds it to the set; otherwise, the oracle user does not perform any action, simulating cases where the user ignores examples where the current in-context function is correct.
The process is repeated until one of the following stopping conditions is met: (1) the in-context example set contains more than 40 data points (exceeding the LLM maximum context size), (2) the oracle user has been presented with 100 examples (_i.e._, the annotation budget is met), (3) the in-context function provided the correct outputs in five consecutive iterations, or (4) the in-context function's estimated accuracy for all slices of data is \(\geq 80\%\).
We run ten simulation rounds with different random seeds, and report the (averaged) final function performance. We further track the function improvement trajectory over iterations on three randomly selected simulation rounds, by evaluating the intermediate in-context functions after every five examples are added.
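For concreteness, the loop below sketches this oracle simulation; `predict`, `oracle`, `sample_batch`, and `slice_accs` are stand-ins for the GPT-3 calls and ScatterShot internals, not the authors' code:

```python
def simulate(seed_pairs, pool, oracle, predict, sample_batch, slice_accs):
    examples = list(seed_pairs)            # three random input-output pairs
    shown, streak, i = 0, 0, 0
    while True:
        i += 1
        all_correct = True
        for x in sample_batch(pool, examples, i, k=5):
            shown += 1
            if predict(examples, x) != oracle(x):
                examples.append((x, oracle(x)))   # oracle "user" fixes and adds
                all_correct = False
        streak = streak + 1 if all_correct else 0
        if (len(examples) > 40        # (1) exceeds the LLM context budget
                or shown >= 100       # (2) annotation budget is met
                or streak >= 5        # (3) five consecutive correct rounds
                or min(slice_accs(examples), default=0.0) >= 0.8):  # (4)
            return examples
```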
### Results
As Table 1 shows, ScatterShot's slice-based sampling outperforms the baseline on both tasks. In _Temporal_, ScatterShot improves the \(F_{1}\) for date span extraction by around 2 points, and the normalization by 4 points. In _QA-pair_, ScatterShot outperforms _Random_ by 6 points on Rouge-L, and even outperforms the heavily engineered rule-based system _used to label most of the evaluation data_, despite needing 40 or fewer in-context examples. Table 2 shows qualitative examples, where ScatterShot outperforms both baselines in terms of coverage, fluency, and correctness. These results point to ScatterShot's potential for saving human effort in creating fine-grained functions, alleviating the need for handcrafting templates.
Figure 6 shows the trajectory of the in-context function quality as the simulated user adds more examples, for three randomly selected runs. ScatterShot dominates the baseline at almost all points in all runs, with the biggest gaps occurring when the number of in-context examples is small. We see particular gains at \(n=5\), i.e. when the first two examples are added to the seed examples. Our hypothesis (based on qualitative observation) is that ScatterShot consistently selects examples that represent _patterns_ not contained in the seed examples, _e.g._, negative examples (where the outcome is N/A) when all seed examples are positive. While ScatterShot helps users explore most patterns in the unlabeled data as they reach higher \(n\), early gains are especially useful in practice when users have low annotation budgets, _e.g._, prior work notes users selecting as few as five or ten examples (Rasmaglia et al., 2019; Rasmaglia et al., 2019).
Finally, we observe that ScatterShot is less prone to variance in quality as more examples are added (e.g., in _QA-pair_-2, baseline performance degrades by almost 15 points between \(n=20\) and \(n=30\)). These results suggest that besides its interface and interactivity benefits, ScatterShot can improve in-context learning just by virtue of its sample selection function. In order to evaluate the benefits to actual humans, we now turn to a user study.
## 4. User Study
ScatterShot sampling is effective in simulation, but does it actually aid humans in articulating their desired functions? We conducted a within-subject user study to evaluate whether human users can sense ScatterShot's support in exploring the data space.
### Study Design
**Task & Participants**. We ran a user study on the _QA-pair_ task using the same dataset as Section 3.1, with a split of 900 unlabeled inputs for participants to access, and 100 test examples for evaluating the in-context functions they built. We recruited ten CS graduate student participants (4 female, 6 male) via our CSE department mailing list. Eight of them had previously used GPT-3 and two had heard about it, but none were familiar with the task or ScatterShot. Each participant spent around 60 minutes in the study.
**Settings & Conditions**. In order to isolate the effect of the different components in ScatterShot, we have two ablation settings in addition to our method: (1) _Manual_, where participants manually craft prompts without any help from ScatterShot, which is the de-facto _status quo_ of practitioners creating their own in-context learning examples. (2) _Random_, where participants use the ScatterShot interface with slice-based sampling disabled, _i.e._, they review randomly selected examples. This condition still has the benefit of an interactive interface, and uses the intermediate in-context functions to suggest outputs and pseudo-label. (3) _ScatterShot_, where participants have access to ScatterShot, fully featured.
Every participant interacted with every setting in sequence and in a cumulative manner, _i.e._, the in-context demonstrations gathered in one setting carry over to the next, and we measured the _additional_ benefit of moving to the next setting. We divided the participants into two groups, such that in one group the sequence is _Manual_\(\rightarrow\)_Random_\(\rightarrow\)_ScatterShot_ (**M-R-S**), while in the other it is _Manual_\(\rightarrow\)_ScatterShot_\(\rightarrow\)_Random_ (**M-S-R**). _M-R-S_ represents a condition where participants are gradually exposed to more features, such that the step-wise gain maps directly to the benefit of the new feature, while _M-S-R_ serves as the counterbalanced condition that combats the learning effect and the natural impact of accumulating examples on function qualities.
**Study Procedure.** We designed our hour-long study to be self-contained in a Jupyter Notebook,4 and one of the authors was present in all studies to ensure that participants understood the task and to answer any questions.
Footnote 4: The full user study instructions, as well as the detailed exit survey, are in [https://github.com/longshuangwu/scattershot](https://github.com/longshuangwu/scattershot)
Participants were first introduced to the basic concepts of LLM (GPT-3), in-context example construction, and the study task. Then, we randomly assigned the participants to one of the two conditions (_M-R-S_ or _M-S-R_), and they completed the task by going through the three conditions in the assigned order. Participants were not instructed on the difference between _ScatterShot_ and _Random_, and were instead told that "these two selection methods are randomly ordered, and one is not necessarily better than another."
In each step (setting), participants were told to inspect the inputs and current function outputs (available in _ScatterShot_ and _Random_), fix the erroneous outputs, and add demonstrations (input-output pairs) to the in-context example bucket if they believed the data would add additional value, _e.g._, instances where the current context function fails, as well as diverse input or output patterns. They were
Figure 6. The in-context function performance trajectory. We evaluate the in-context function on the held-out test set every time we add five more examples to the in-context bucket, until the stop condition is satisfied. ScatterShot tends to outperform _Random_ at most points, and has less performance oscillation. Note that the y-axis is different for _Temporal_ and _QA-pair_.
asked to iterate within the step until they were satisfied with the in-context function at hand, or accumulated 40 examples. To prevent them from stopping too early, we also asked them to run at least three batches (_i.e._, see 10-15 examples).5 Afterwards, participants completed an exit survey and a semi-structured interview, where they rated their perceived experience in each of the two consecutive steps. These questions concerned their perceived input/output pattern diversity, the example difficulty, and their confidence in estimating in-context function quality.
Footnote 5: In _Manual_, this meant looking at three random batches of unlabeled data in the Jupyter notebook.
**Collected Data.** We observed and analyzed three sets of data. First, to quantify the change in **function quality**, we saved participants' in-context examples per step, and applied them to the held-out test set. Here, besides the absolute numbers as in Section 3, we calculated the _difference_ in performance between two consecutive steps to see if adding (or, in the case of _M-S-R_, removing) ScatterShot features impacted the quality of examples participants submitted. Second, to assess participants' **self-perceived experience**, we used a standard five-point Likert scale (Zhou et al., 2017) to collect their perceived step-wise differences. Third, to track participants' **annotation trajectories**, we logged their clickstreams in all the steps. This included the number of examples they examined per step, the edits they made, and the number of examples they added.
### Results
**The ScatterShot interface made it easier to iterate on in-context examples.** As shown in Figure 7, participants found moving from _Manual_ (Step 1) to a ScatterShot interface (Step 2) beneficial, regardless of the sampling setting. In particular, they found that the interface made it easier and more intuitive to construct the few-shot examples (**Easier to use** in Figure 7, \(4.7\pm 0.7\) for _Manual\(\rightarrow\)Random_ and \(4.2\pm 0.4\) for _Manual\(\rightarrow\)ScatterShot_). Users liked the fact that ScatterShot offers sample inputs (rather than having to go through the dataset on their own), and that the interface provides easy access to all the existing in-context examples, allowing for fast back-and-forth iteration. For example, one participant (P7) kept revisiting their examples, and removed some earlier examples that they thought were less useful as they became more familiar with the unlabeled input space.
As part of the interface, LLM-generated outputs helped participants craft examples more efficiently, _e.g._, P6 comments that "_it is less work to make edits than starting from scratch._" Somewhat surprisingly, LLM-generated outputs also improved _output diversity, i.e._, users considered more diverse output patterns. For example, P10 commented that they were "_pleasantly surprised by the LLM's clever output in several cases_," and that they would not have thought about transformations such as "Q: Is there more than 1 boy? A: no" \(\rightarrow\) "Q: Is there no more than 1 boy? A: yes", which they added to their set of in-context examples. The observation is consistent with prior work showing AI-induced creativity gains (Zhou et al., 2017). We note that actual user behavior here differs from our simulation setup, where we assumed human users would _only_ add new examples when the LLM output was wrong.
**Participants' perceptions matched ScatterShot's slice-based sampling design goals: more diverse and more challenging patterns.** As shown in Figure 7, participants in _M-R-S_ clearly noticed the improvement moving from _Random\(\rightarrow\)ScatterShot_ (\(4.2\pm 1.2\) for **more diverse patterns** and \(4.8\pm 0.4\) for **more difficult case**), whereas most users in _M-S-R_ did not report improvements from _ScatterShot\(\rightarrow\)Random_. Qualitative results confirm this, _e.g._, P7 in _M-R-S_ commented: "_Step 2_ (Random) _provided me with some worthy examples, but much less than Step 3_ (ScatterShot). I went through several rounds of pretty similar examples, thinking the function is behaving quite decently, and didn't realize the function needed more diverse and edge cases until I reached Step 3._"**9 in _M-R-S_ was also happy that _ScatterShot_ helped them explore beyond typical patterns. In contrast, P10 in _M-S-R_ reflected that their exploration seemed to have "_quickly saturated in Step 3_" (_Random_).
Despite not being given details, seven participants discerned the goals behind ScatterShot's sampling method by interacting with it. For example, P2 described it as "_sample for additional variation based on the patterns in existing examples, and also sample for examples similar to previous error mistakes to track whether the function has been corrected._" Two participants in _M-S-R_ noticed that _Random_ presented fewer mistakes, but attributed it to the increasing number of in-context examples (P5: "_It's getting more correct, but
Figure 7. Participants’ subjective ratings on their perceived differences between settings as they switch between them. We use rectangles to represent when participants first move from _Manual_ (Step 1) to the ScatterShot interface (either _ScatterShot_ or _Random_, Step 2), and circles to represent switches between ScatterShot interfaces, from one sampling method to the next (Step 2 to 3). Participants strongly preferred the ScatterShot interaction to manual example annotation, and felt they found more diverse patterns and difficult cases in _ScatterShot_ than _Random_ (_Random\(\rightarrow\)ScatterShot_, blue). In contrast, people in the reversed condition did not find _Random_ more useful than _ScatterShot_ (orange).
_I would expect it given that I have annotated more examples_"). After we explained the selection methods at the end, some users noted that understanding the methods would have helped them better calibrate their estimates of the learned function quality over time.
**ScatterShot helped participants explore the input space more holistically, and build better in-context functions.** The perceived data difficulty and diversity encouraged participants to iterate more on their in-context examples. When looking at the number of in-context examples added in each setting, participants added 40% more examples in _ScatterShot_ than _Random_ when _ScatterShot_ came after (_M-R-S_), and 20% _fewer_ examples in _Random_ when _Random_ came after (_M-S-R_), _i.e._, they stopped much earlier when _Random_ came after _ScatterShot_. These additional examples are not only a result of more inspection effort (on average, participants in _ScatterShot_ reviewed 20% more samples), but also that each batch in _ScatterShot_ was more likely to contain a good in-context example -- participants added 81% of the examples they inspected in _ScatterShot_, but only 48% of the examples in _Random_.
We report the _quality_ of the resulting in-context function on the held-out set in Table 3, and note that _Random\(\rightarrow\)ScatterShot_ consistently increases performance, while _ScatterShot\(\rightarrow\)Random_ consistently _decreases_ performance despite adding more in-context examples, which is in line with our simulation results.
**ScatterShot helped participants estimate function quality and "debug" their example set.** As expected, participants estimated their in-context function quality based on the candidate examples they reviewed. For example, P5 (_M-S-R_) tracked the function convergence: _"I made mental notes on the LLM errors and hypothesized what types of examples were missing. For example, I noticed the model was wrong on N/A questions at first, but later got it right."_ Participants in _M-R-S_ seemed slightly more satisfied with their estimation, with \(4.2\pm 0.9\) in _Manual\(\rightarrow\)Random_ and a further \(4.3\pm 0.7\) in _Random\(\rightarrow\)ScatterShot_. P7 commented that _"Step 2 showed me the function is quite smart on patterns it has already seen and has high precision, and Step 3 showed me there are more patterns and it has low recall"_. P2 further reflected that _Random_ sampling _"created a false impression of convergence, when the function still had various blind spots."_ The interactive process also helped participants debug their example sets, _e.g._, P4 saw big performance drops (4/5 to 1/5 accuracy) on two consecutive batches, which led them to remove in-context examples that were hurting performance.
Participants in _M-S-R_ gave slightly lower ratings on their estimates. Qualitatively, the fact that _ScatterShot_ prioritized potential mistakes seemed to discourage users, _e.g._, P3 noted they were driven into _"an endless blackhole of errors,"_ after which a round of repetitive patterns in _Random_ was hard to make sense of. Once again, this could have been mitigated by _explaining_ the sampling strategy to the users, and explicitly displaying the slice accuracy estimates ScatterShot keeps track of.
## 5. Discussion
In this work, we design a human-LLM collaboration mechanism in ScatterShot to help humans craft diverse and informative in-context learning examples. By iteratively identifying data slices, sampling from low-performance or unseen slices, and providing best-guess outputs for the sampled examples, ScatterShot not only helps the collection of informative in-context examples, but also supports users in exploring the input space and assessing the function quality. At its core, ScatterShot is built on three concepts: data slicing and sampling, iterative human-model interaction, and collaborative human-model labeling. We now discuss challenges and potential future work for each of these.
**Slice-based sampling can increase data space coverage.** Our experiments showed that sampling from diverse and difficult data slices improves in-context function performance. Importantly, these slices cannot be surfaced via clustering on task-agnostic embeddings; rather, task-specific features should be considered to group examples, while task-irrelevant noise should be minimized. However, identifying these task-specific features remains a challenge. While effective for our function examples (and many others), key-phrase and template extraction would not generalize to tasks where input and output have little syntactic overlap, _e.g._, English-French translation, summarization, etc. Future work should look into incorporating more general slicing methods, _e.g._, asking practitioners for slicing functions (Han et al., 2017; Wang et al., 2018; Wang et al., 2019), automatically detecting blind spots (Wang et al., 2019; Wang et al., 2019), etc.
In addition to data slicing, the sampling algorithm also plays a crucial role in narrowing down the actual slices to sample from. We adapt the UCB algorithm to prioritize slice size, performance, and sample rarity, but there are other interesting dimensions that could be explored. For example, if there are slices that cannot be learned after several rounds of sampling, UCB may be counterproductive and create a biased in-context example set that performs worse on _other slices_, whereas a strategy that penalized or just "gave up" on those slices might produce a better overall function. Moreover, we might want to explore better methods for example ranking _within a slice_.
**Interacting with the latest function is essential for in-context learning.** In-context learning enables rapid function updates, which
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Condition** & **Step 1** & **Step 2** & **Step 3** \\ \hline \hline _M-R-S_ & / (59.3) & +17.4 (74.7) & **+3.2** (77.8) \\ _M-S-R_ & / (61.8) & +18.1 (75.4) & -0.4 (74.9) \\ \hline \end{tabular}
\end{table}
Table 3. The performance of participants’ in-context functions after each step. +/- represents the average performance change compared to the prior step, whereas the numbers in parentheses are the absolute performances. _M-R-S_ participants were able to keep adding useful examples, whereas _M-S-R_ participants _decreased_ the function performance in Step 3 (_ScatterShot\(\rightarrow\)Random_), indicating that these efforts were wasted.
are not possible in other current interactions with models (_e.g._, finetuning often takes long hours, and is often not suitable for interactivity). Allowing users to interact with the most current version of what is being learned helps them track progress, and backtrack when they introduce cascading errors (Kumar et al., 2017). The setup in ScatterShot is a step in this direction, since users always interact with the latest version of the in-context functions.
While participants _were_ making progress with ScatterShot (more than with baselines), some participants felt frustrated by inspecting mistake after mistake, fearing that they would never be able to produce a good enough function. While this is by design (ScatterShot prioritizes potential errors), it might compromise annotators' estimates of the quality of their function, and their motivation for labeling more examples. Thus, we notice the importance of presenting quality metrics to the user and clearly explaining the sampling function so that the right expectations are set. For example, users may perform better mental calibration if they have access to hints like the number of slices that are considered "solved" (e.g., as a progress bar that allows people to zoom into concrete examples grouped by the slice), cross-validation accuracy on in-context examples, etc. Another alternative would be to let users exercise _more control_ over which slices are explored, _e.g._, allowing them to "drill down" or "give up" on specific slices.
**Human-AI collaborative labeling for building better functions, with respect to both better quality _and better task definition_.** Essentially, ScatterShot enables human-LLM collaboration on data annotation. In our work, we mostly focused on evaluating the quality benefit of such annotation, but we observed additional interesting gains in bringing people inspiration. In Section 4, we noticed that participants can take inspiration from the LLM not only on the input patterns, but also on potential output patterns, even though our _QA-pair_ task is relatively deterministic in its transformations. Thus, we hypothesize that similar systems supporting human-LLM collaborative labeling could play an important role in helping users iteratively refine their task definition and function behavior during data collection. Prior work has shown that annotation requesters refine their labeling instructions when they see noisy (and therefore unusable) crowdsourced labels on ambiguous examples. However, we have yet to examine how LLMs' suggestions (good or bad) might help users better specify their functions. It would be interesting to systematically analyze and measure users' own distribution shift as the example set expands. Recently, Lee et al. (Lee et al., 2019) proposed the "retaining rate" of LLM suggestions (in their case, suggested character names subsequently used in novels) as a metric of the usefulness of LLMs for ideation. An analogue in our case would be measuring the appearance of new patterns (data slices) when users use ScatterShot, compared to when they come up with their own patterns.
## 6. Related Work
### LLMs and In-context Learning
Transformer-based large language models (LLMs) (Kumar et al., 2017) have recently led to large improvements in NLP. Pre-trained on a large amount of unlabeled text data, these models encapsulate rich, general-purpose features of language, both syntactic and semantic. These features can facilitate various downstream applications much more dynamically (Kumar et al., 2017) -- rather than having to train a new model for every custom task, users can just customize the model by feeding it natural language **prompts** at run time, like the holiday example in Section 1. Such ability to recognize the desired task on-the-fly is called _in-context learning_ (Dong et al., 2019).
The flexibility of in-context learning has inspired various work on designing prompts that can effectively invoke the user-desired functionalities (Kumar et al., 2017; Dong et al., 2019; Dong et al., 2019; Dong et al., 2019). To date, the most common patterns for prompting are either _zero-shot_ or _few-shot_ prompts. Zero-shot prompts directly describe what ought to happen in a task. For example, we can enact the holiday date translator in Section 1 with a _task description_ prompt: "Identify the date for a national holiday in the month/date format." Studies on improving zero-shot prompts typically examine the effect of task instructions (Kumar et al., 2017), induce LLM reasoning through task decomposition (Kumar et al., 2017; Dong et al., 2019), etc. Zero-shot prompts do not use demonstrative examples and therefore tend to perform worse (Dong et al., 2019), but writing just the natural language descriptions is lightweight enough that it creates an intuitive natural language interface between humans and the model (Dong et al., 2019).
In contrast, _few-shot_ prompts show the LLM what pattern to follow by feeding it examples of the desired input and output data. As can be seen in Section 1, given examples on "Christmas" and "Halloween", the LLM would produce a reasonable date for "Independence Day". These examples usually follow consistent structures with meaningful prefixes ("Holiday: [name] => Date: [date]"), which helps re-emphasize the desired intent (Kumar et al., 2017). The quality of few-shot prompts heavily relies on the five to thirty in-context examples that demonstrate the intended behavior (Kumar et al., 2017; Dong et al., 2019), and LLMs can only perform in-context learning if they have seen the corresponding distribution or concept (Kumar et al., 2017; Dong et al., 2019; Dong et al., 2019). If developers omit corner cases in the few examples they create, the task quality can easily suffer (Kumar et al., 2017). For example, without a negative example where we denote ineligible inputs with a placeholder output "N/A" ("Holiday: yesterday => Date: N/A"), the LLM would attempt to produce the most plausible "label" even for negative examples -- it may try to normalize "yesterday" to a plausible date even though there is no _holiday_. Our work here tries to help users interactively identify high-quality in-context examples for text transformation. We review the literature on in-context example selection next.
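As an illustration (using the holiday example from Section 1, with demonstration values of our own choosing), a few-shot prompt with such a negative example might be assembled as:

```python
# Toy few-shot prompt in the "Holiday => Date" format, including the
# N/A negative example the paragraph above argues for.
demos = [
    ("Christmas", "12/25"),
    ("Halloween", "10/31"),
    ("yesterday", "N/A"),   # negative example: not a holiday
]
prompt = "\n".join(f"Holiday: {name} => Date: {date}" for name, date in demos)
prompt += "\nHoliday: Independence Day => Date:"
print(prompt)
```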
### Effective Example Selection
Prior work has explored selecting effective demonstrations, and has shown that because pre-trained models possess high-level semantic features, sampling or active learning tends to help identify informative examples (Kumar et al., 2017). In particular, dynamically selecting (retrieving) the most similar demonstrative examples for each given input significantly improves in-context learning performance (Dong et al., 2019; Dong et al., 2019). However, such retrieval methods require fully labeled datasets as the search space. In contrast, our work studies the scenario where humans craft their personalized in-context functions, and therefore focuses on an unlabeled space.
In the unlabeled search space, prior work has explored effective dataset annotation that can support better in-context learning or few-shot finetuning. These studies strive to allocate annotation budgets to diverse and representative examples through clustering (Dong et al., 2019)
or graph-based search (Sutskever et al., 2017). For example, Su et al. (2017) built a similarity graph by computing pairwise distances between input sentences and then iteratively selected and annotated examples based on graph density. They show such selection substantially reduces the annotation cost while achieving high and stable in-context learning performance. Despite being effective, these methods sample examples purely for input diversity. Because our work focuses more on supporting users' interactive function construction, we additionally emphasize _current function quality_ in sampling, which helps users track their progress and prioritize improving the current in-context function. Moreover, these prior studies measure diversity with cosine similarities on input sentence embeddings (Sutskever et al., 2017), which, as we argue in Section 2.2, is not reflective of various tasks (Sutskever et al., 2017). As a workaround, our work measures similarities only on the key phrase embeddings, which leads to more intuitive clusters.
On the interactive example selection side, our work is perhaps more similar to some literature in programming-by-demonstration (PBD). For example, Zhang et al. (2019) explored effectively selecting examples that can help disambiguate and validate synthesized regular expressions. We share the motivation that interactively and iteratively suggesting corner cases helps synthesize the right function, but unlike PBD, where new examples are always _pruning_ the function search space, ScatterShot focuses on _expanding_ the function coverage. Therefore, it is essential to select examples that incentivize people to provide feedback.
**Active Learning**. Our work is also similar to the aforementioned effective annotation work (Sutskever et al., 2017; Sutskever et al., 2017) in the sense that its selection method is akin to sampling approaches in active learning (Sutskever et al., 2017; Sutskever et al., 2017). The key idea behind active learning is that machine learning models can achieve higher performance with fewer training examples, if they are allowed to choose their own, most informative training examples. Given a budget, an active learner iteratively selects examples-to-annotate from an unlabeled pool according to some ranking mechanism. While the previous work is more similar to diversity sampling (Sutskever et al., 2017), ours is closer to uncertainty sampling (Sutskever et al., 2017), where an active learner queries the instances about which it is least certain how to label. Because LLMs are generative in nature and do not have clear probabilistic distributions across all "labels" as in, _e.g._, classification tasks, we estimate uncertainty using the LLM output stability (unanimity voting), which also conveniently serves as a correctness estimation. This voting strategy is also quite relevant to Query-By-Committee (Sutskever et al., 2017), where a list of "committee" models trained on the same labeled set vote on the labelings of query candidates. Other work has also considered directly representing LLM confidence with the average log probability of the entire output (Sutskever et al., 2017; Sutskever et al., 2017), an alternative worth comparing against in the future.
Importantly, while many empirical results suggest that active learning is effective, it does suffer from certain limitations. For example, the labeled examples are not drawn _i.i.d_ from the underlying data distribution (Sutskever et al., 2017), and therefore can sometimes be imbalanced (Sutskever et al., 2017) or less effective than random sampling (Sutskever et al., 2017). Our method will likely share the same limitations, though we leave it to future work to articulate scenarios where ScatterShot is most useful.
### Model-assisted Annotation
ScatterShot can also be seen as offering assistance in data annotation (for in-context learning). The idea of annotating data with both humans and AI models in the loop has been explored broadly. In this setup, AIs can play various roles (Sutskever et al., 2017), _e.g._, they may generate more examples that mimic difficult patterns (Sutskever et al., 2017; Sutskever et al., 2017), select uncertain examples for people to inspect (Sutskever et al., 2017), etc. ScatterShot is closer to work encouraging annotators to find model-fooling examples ("adversarial data collection") (Brandola et al., 2016; Sutskever et al., 2017; Sutskever et al., 2017; Sutskever et al., 2017). In particular, Bartolo et al. (Bartolo et al., 2016) found that in question-answering tasks, models trained on these adversarially collected data can generalize better to more challenging examples. However, because of the overhead of re-training, their analyses were performed _post-hoc_, _i.e._, they only updated the model offline after collecting a large batch of challenging examples. In contrast, we leverage the advantage of in-context learning, and directly study the dynamics of in-context function updates.
The iterative nature also links ScatterShot to earlier work in interactive machine learning (IML) (Brandola et al., 2016; Sutskever et al., 2017). IML is a typical paradigm that facilitates iterative and exploratory model understanding and update -- a system explains to users how the current model makes predictions, and users in turn give feedback back to the model, starting the cycle again. Labeling is one classic type of IML feedback (Sutskever et al., 2017; Sutskever et al., 2017). However, because traditional ML tends to focus much more on surface features (_e.g._, counting trigrams in a training example without regard to their semantic meaning), users find labeling not powerful enough, and prefer richer controls like feature selection (Brandola et al., 2016; Sutskever et al., 2017; Sutskever et al., 2017). Since LLMs have some capability to generalize individual examples more broadly to semantically similar ones, we believe labeling in in-context learning would be more effective, and we use ScatterShot to reactivate labeling-based IML for in-context learning.
## 7. Conclusion
In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. ScatterShot helps users find informative input examples in the unlabeled data, annotate them efficiently with the help of the current version of the learned in-context function, and estimate the quality of said function. Results from both a simulation study and a 10-person evaluation show ScatterShot improves in-context function performance, as well as annotators' awareness and handling of diverse patterns. Our findings highlight the importance of data slicing and sampling, iterative human-model interaction, and collaborative human-model labeling, and point to interesting future directions such as AI-assisted task definition refinement, more concrete quality metrics that convey the in-context function's progress, etc.
###### Acknowledgements.
This material is based upon work supported by NSF awards 1901386 and 2040196, ONR grant N00014-21-1-2707, and a gift from the Allen Institute for Artificial Intelligence (AI2). The authors thank the user study participants for their valuable feedback, and anonymous reviewers for helpful discussions and comments. |
2310.08264 | Classification of solutions of higher order critical Choquard equation | In this paper, we classify the solutions of the following critical Choquard
equation \[ (-\Delta)^{\frac{n}{2}} u(x) = \int_{\mathbb{R}^n}
\frac{e^{\frac{2n- \mu}{2}u(y)}}{|x-y|^{\mu}}dy e^{\frac{2n- \mu}{2}u(x)}, \
\text{in} \ \mathbb{R}^n, \] where $ 0<\mu < n$, $ n\ge 2$. Suppose $ u(x) =
o(|x|^2) \ \text{at} \ \infty $ for $ n \geq 3$ and satisfies \[
\int_{\mathbb{R}^n}e^{\frac{2n- \mu}{2}u(y)} dy < \infty, \
\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{e^{\frac{2n-
\mu}{2}u(y)}}{|x-y|^{\mu}} e^{\frac{2n- \mu}{2}u(x)} dy dx < \infty. \] By
using the method of moving spheres, we show that the solutions have the
following form \[ u(x)= \ln \frac{C_1(\varepsilon)}{|x-x_0|^2 + \varepsilon^2}.
\] | Genggeng Huang, Yating Niu | 2023-10-12T12:11:59Z | http://arxiv.org/abs/2310.08264v1 | # Classification of solutions of higher order critical Choquard equation
###### Abstract
In this paper, we classify the solutions of the following critical Choquard equation
\[(-\Delta)^{\frac{n}{2}}u(x)=\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)} }{|x-y|^{\mu}}dye^{\frac{2n-\mu}{2}u(x)},\ \text{in}\ \mathbb{R}^{n},\]
where \(0<\mu<n\), \(n\geq 2\). Suppose \(u(x)=o(|x|^{2})\) at \(\infty\) for \(n\geq 3\) and satisfies
\[\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy<\infty,\ \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}e^{\frac{2n- \mu}{2}u(x)}dydx<\infty.\] (B)
By using the method of moving spheres, we show that the solutions have the following form
\[u(x)=\ln\frac{C_{1}(\varepsilon)}{|x-x_{0}|^{2}+\varepsilon^{2}}.\]
School of Mathematical Sciences, Fudan University, Shanghai, China
## 1 Introduction
In this paper, we classify the solutions of the following Choquard equation
\[(-\Delta)^{\frac{n}{2}}u(x)=\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y )}}{|x-y|^{\mu}}dye^{\frac{2n-\mu}{2}u(x)},\ \text{in}\ \mathbb{R}^{n}, \tag{1.1}\]
where \(0<\mu<n\) and \(n\geq 2\). When \(n\) is odd, the nonlocal fractional Laplacians \((-\Delta)^{\frac{n}{2}}\) are defined by
\[(-\Delta)^{\frac{n}{2}}u=(-\Delta)^{\frac{1}{2}}\circ(-\Delta)^{\frac{n-1}{2}} u(x).\]
Interested readers may refer to the book of Chen et al. [6]. In order to ensure the problem is well-defined, we assume throughout this paper that for a \(C^{n}\) solution \(u\) of (1.1), the following assumption
\[\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy<\infty,\ \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}e^{\frac{2n- \mu}{2}u(x)}dydx<\infty\] (B)
holds true.
Consider the conformally flat manifolds \((M,g)=(\mathbb{R}^{n},e^{2u}|dx|^{2})\), where \(n\geq 2\) is an even integer. Then the \(Q-\)curvature \(Q_{g}\) satisfies
\[(-\Delta)^{\frac{n}{2}}u=Q_{g}e^{nu},\quad\text{in}\quad\mathbb{R}^{n}. \tag{1.2}\]
For more details of \(Q-\)curvatures, we refer readers to [1, 11]. For (1.1), we can view \(Q_{g}=e^{-\frac{\mu}{2}u(x)}\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y) }}{|x-y|^{\mu}}dy\) as the corresponding \(Q-\)curvature. Since the volume form \(dvol_{g}=e^{nu}dx\), the first condition in (B) is the finite volume condition with weight \(e^{-\frac{\mu}{2}u}\) and the second condition in (B) is the finite total curvature condition.
The Choquard equation
\[-\Delta u+V(x)u=2\left(\frac{1}{|x|}*|u|^{2}\right)u,\ x\in\mathbb{R}^{n} \tag{1.3}\]
has various applications in mathematical physics, such as quantum mechanics (see [23]). Mathematically, Lieb [17] proved the existence and uniqueness of the minimal energy solution of (1.3) by a rearrangement technique, when \(V\) is a positive constant. For \(V\equiv 1\), Ma and Zhao [21] proved that the positive solutions of (1.3) must be radially symmetric and monotone decreasing about some fixed point. For more results on Choquard equations, see [9, 19, 22].
For the critical Choquard equation
\[-\Delta u=\left(\frac{1}{|x|^{\mu}}*u^{2^{*}_{\mu}}\right)u^{2^{*}_{\mu}-1}, \ x\in\mathbb{R}^{n},\ n\geq 3,\]
Du and Yang [8] applied the moving plane method to classify the positive solutions of the nonlocal equation, where \(2^{*}_{\mu}=\frac{2n-\mu}{n-2}\), with \(0<\mu<n\) if \(n=3\) or \(4\), and \(0<\mu\leq 4\) if \(n\geq 5\). Here \(2^{*}_{\mu}\) is the upper critical exponent in the sense of the Hardy-Littlewood-Sobolev inequality. Guo et al. [12] proved the same result, and they also studied the nonlinear Choquard equation. For multi-critical Choquard equations, please refer to [20].
In [26], Yang and Yu consider the following Choquard equation in dimension two
\[-\Delta u(x)=\int_{\mathbb{R}^{2}}\frac{e^{\frac{4-\mu}{2}u(y)}}{|x-y|^{\mu}} dye^{\frac{4-\mu}{2}u(x)}\ \text{in}\ \mathbb{R}^{2}, \tag{1.4}\]
where \(0<\mu<1\). They classified the solution of the above equation (1.4) under the assumptions
\[\int_{\mathbb{R}^{2}}e^{\frac{4-\mu}{2}u(y)}dy<\infty,\quad\int_{\mathbb{R}^{ 2}}\int_{\mathbb{R}^{2}}\frac{e^{\frac{4-\mu}{2}u(y)}e^{\frac{4-\mu}{2}u(x)}} {|x-y|^{\mu}}dydx<\infty.\]
The natural generalization of (1.4) is the higher order critical Choquard equation (1.1). We rewrite (1.1) as the following differential-integral system
\[\begin{cases}(-\Delta)^{\frac{n}{2}}u(x)=v(x)e^{\frac{2n-\mu}{2}u(x)}\quad \text{in}\quad\mathbb{R}^{n},\\ v(x)=\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}dy\quad \text{in}\quad\mathbb{R}^{n}.\end{cases}\]
Now let us consider the following integral equation
\[u(x)=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\left[\ln\left(\frac{|y|+1}{|x-y|} \right)\right]v(y)e^{\frac{2n-\mu}{2}u(y)}dy+C, \tag{1.5}\]
where
\[v(x)=\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}dy\quad \text{and}\quad\beta_{n}=2^{n-1}\left(\frac{n}{2}\right)!\left(\frac{n}{2}-1 \right)!\omega_{n}.\]
\(\omega_{n}\) is the volume of the unit ball in \(\mathbb{R}^{n}\). We establish the equivalence between differential equation (1.1) and integral equation (1.5).
**Theorem 1.1**.: _Suppose \(u\in C^{n}\) is a solution of (1.1) which satisfies (B) and_
\[u(x)=o(|x|^{2})\text{ at }\infty\quad\text{for}\quad n\geq 3. \tag{1.6}\]
_Then \(u\) also satisfies (1.5)._
**Theorem 1.2**.: _Suppose \(u\in C^{n}\) is a solution of (1.1) which satisfies (B) and (1.6). Then we have_
\[u(x)=-\alpha\ln|x|+O(1)\]
_for \(|x|\) large enough, where_
\[\alpha:=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}v(y)e^{\frac{2n-\mu}{2}u(y)} dy>\frac{2n}{2n-\mu}\]
_and_
\[\lim_{|x|\to\infty}|x|^{\mu}v(x)=\beta,\]
_where_
\[\beta=\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy.\]
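For intuition (this heuristic sketch is ours and is not part of the proof), the second limit can be read off by rescaling:

\[|x|^{\mu}v(x)=\int_{\mathbb{R}^{n}}\left(\frac{|x|}{|x-y|}\right)^{\mu}e^{\frac{2n-\mu}{2}u(y)}dy\longrightarrow\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy=\beta\quad\text{as}\quad|x|\to\infty,\]

since \(|x|/|x-y|\to 1\) for each fixed \(y\), and the decay \(u(y)=-\alpha\ln|y|+O(1)\) with \(\alpha>\frac{2n}{2n-\mu}\) supplies an integrable dominating function away from the singularity \(y=x\).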
With the aid of the equivalence of the differential equation (1.1) and the integral equation (1.5), and the asymptotic behavior of \((u,v)\), we deduce the Liouville type theorem for solutions \(u\) of (1.1).
**Theorem 1.3**.: _Assume that \(u\in C^{n}\) is a solution to (1.1) and satisfies the hypothesis (B) and (1.6). Then for \(n\geq 2\) and \(\mu\in(0,n)\), \(u\) has the following form_
\[u(x)=\ln\frac{C_{1}(\varepsilon)}{|x-x_{0}|^{2}+\varepsilon^{2}},\]
_where \(C_{1}\) is a positive constant depending only on \(\varepsilon\) and \(x_{0}\) is a fixed point in \(\mathbb{R}^{n}\)._
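As a quick consistency check (ours, not part of the original statement), the solutions above are compatible with (B) exactly in the stated range of \(\mu\): since

\[u(x)=\ln\frac{C_{1}(\varepsilon)}{|x-x_{0}|^{2}+\varepsilon^{2}}=-2\ln|x|+O(1)\quad\text{at}\quad\infty,\]

one has \(e^{\frac{2n-\mu}{2}u(x)}=O(|x|^{-(2n-\mu)})\), which is integrable at infinity if and only if \(2n-\mu>n\), i.e. \(\mu<n\). Equivalently, \(\alpha=2>\frac{2n}{2n-\mu}\) holds precisely when \(\mu<n\), in agreement with Theorem 1.2.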
**Remark 1**.: _For the case \(n=2\) and \(0<\mu<1\), Yang and Yu [26] proved Theorem 1.3 and gave the decay of \(u\) at \(\infty\) without the condition (1.6). Actually, the same results are also true for \(0<\mu<2\)._
Theorem 1.3 is proved by the moving spheres method, which is a variant of the moving planes method. It is a very powerful tool to study the symmetry of solutions. Many classification results were obtained, see for instance [3, 4, 5, 14, 27].
Theorem 1.3 generalizes the result of [26] to the higher order equation. Firstly, in [26], the authors used the moving spheres method to rule out both slow decay and fast decay, and then obtained the exact decay of the solution \(u\). In this paper, inspired by [25], we use the Pohozaev identity and obtain the decay of the solution \(u\) directly. We show preliminarily that
\[\lim_{|x|\to\infty}\frac{u(x)}{\ln|x|}=-\alpha\quad\text{and}\quad\alpha>\frac {2n}{2n-\mu}.\]
However, the proof of the exact value \(\alpha=2\) is more complicated here, since the corresponding \(v(x)\) in [25] is a constant while ours is not, so we have made some improvements to their methods. Secondly, Yang and Yu [26] classified the solutions of (1.4) in dimension two for \(0<\mu<1\). In this paper, we generalize the dimension from \(2\) to \(n\) and the index \(\mu\) to \(0<\mu<n\). The main difficulty is that the integral definition of \(\nabla v(x)\) is not well-defined for \(n-1\leq\mu<n\). We use techniques such as integration by parts, cut-off functions and integral estimates to deal with this problem; see Lemma 2.6.
If \(\mu=0\), the Choquard equation (1.1) formally reduces to
\[\begin{cases}(-\Delta)^{\frac{n}{2}}u=(n-1)!e^{nu}\quad\text{in}\quad\mathbb{ R}^{n},\\ \int_{\mathbb{R}^{n}}e^{nu(x)}dx<\infty\quad\text{and}\quad u=o(|x|^{2})\text{ at }\infty.\end{cases} \tag{1.7}\]
Zhu [28] obtained the Liouville theorem for (1.7) with \(n=3\). For \(n=4\), Lin [18] classified the fourth order equation of (1.7). In the case that \(n\) is an even integer, Wei and Xu [24] worked out the explicit form of the solutions of (1.7). For all dimensions \(n\geq 3\), Xu [25] gave the equivalent integral form and the classification of the solutions by the moving spheres method. For more literature on higher order equations, please refer to [7, 13, 15].
This paper is organized as follows. In Section 2, we give the integral representation formula for \(u\). And we obtain the asymptotic behavior of \(u\) and \(v\) at \(\infty\). In Section 3, we use the moving spheres method to prove Theorem 1.3.
## 2 Preliminaries
In this section, we first establish the equivalence between the differential equation (1.1) and integral equation (1.5). Furthermore, we study the decay of solution \(u\) at \(\infty\).
**Lemma 2.1**.: _Suppose \(u\in C^{2}(\mathbb{R}^{n})\) such that \((a)\)_
\[\alpha:=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}v(y)e^{\frac{2n-\mu}{2}u(y)} dy<\infty\qquad\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy<\infty,\]
_where_
\[v(x)=\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}dy;\]
\((b)\)_\(u\) satisfies the following equation:_
\[\Delta u(x)+v(x)e^{\frac{4-\mu}{2}u(x)}=0\quad\text{for}\quad n=2,\]
\[\Delta u(x)+(n-2)\int_{\mathbb{R}^{n}}\frac{v(y)e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{2} }dy=0\quad\text{for}\quad n\geq 3. \tag{2.1}\]
_Then there is a constant \(C>0\) such that_
\[u(x)\leq C\quad\text{and}\quad v(x)\leq C\quad\text{in}\quad\mathbb{R}^{n}. \tag{2.2}\]
Before we prove the above Lemma 2.1, we need to recall two results.
**Lemma 2.2**.: _(see Lemma 3.2 of [25]) Suppose \(u\in C^{2}(\mathbb{R}^{n})\) such that \(0\leq-\Delta u(x)\leq C\) in \(\mathbb{R}^{n}\) and \(\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy<\infty\). Then there exists a constant \(C>0\), such that_
\[u(x)\leq C\quad\text{in}\quad\mathbb{R}^{n}.\]
**Lemma 2.3**.: _(see Theorem 1 of [2]) Let \(h(x)\) be a solution of_
\[\begin{cases}-\Delta h(x)=f(x)&\text{in}\quad B_{R}\subset\mathbb{R}^{2}\\ h(x)=0&\text{on}\quad\partial B_{R}\end{cases}\]
_with \(f\in L^{1}(B_{R})\), then for any \(\delta\in(0,4\pi)\), there exists a constant \(C_{\delta}>0\) such that_
\[\int_{B_{R}}\exp\left(\frac{(4\pi-\delta)|h(x)|}{\|f\|_{L^{1}(B_{R})}}\right) dx\leq\frac{4\pi^{2}}{\delta}R^{2}.\]
Proof of Lemma 2.1.: Firstly, we consider \(n\geq 3\). For any \(0<\varepsilon<\min\left\{\frac{2(n-\mu)}{(2n-\mu)n},\frac{2(n-2)}{(2n-\mu)n}\right\}\), there exists an \(R>0\) sufficiently large such that
\[\int_{\mathbb{R}^{n}\setminus B_{R}}v(y)e^{\frac{2n-\mu}{2}u(y)}dy\leq\varepsilon. \tag{2.3}\]
Now \(\forall x_{0}\in\mathbb{R}^{n}\setminus B_{R+8}\), consider the solution \(h\) of the equation
\[\begin{cases}-\Delta h(x)=(n-2)\int_{B_{4}(x_{0})}\frac{v(y)e^{\frac{2n-\mu}{2 }u(y)}}{|x-y|^{2}}dy&\text{in }B_{4}(x_{0})\\ h=0&\text{on }\partial B_{4}(x_{0}).\end{cases} \tag{2.4}\]
From the maximum principle we conclude that
\[h(x)\geq 0,\quad x\in B_{4}(x_{0}).\]
And also consider the function
\[h_{1}(x)=\int_{B_{4}(x_{0})}\ln\left(\frac{16}{|x-y|}\right)v(y)e^{\frac{2n- \mu}{2}u(y)}dy,\quad\forall x\in B_{4}(x_{0}).\]
We conclude that
\[h_{1}(x)\geq 0\quad\text{in}\quad B_{4}(x_{0}).\]
By a simple calculation, we have
\[-\Delta h_{1}(x)=(n-2)\int_{B_{4}(x_{0})}\frac{v(y)e^{\frac{2n-\mu}{2}u(y)}}{| x-y|^{2}}dy\quad\text{in}\quad B_{4}(x_{0}). \tag{2.5}\]
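For completeness, we record the standard computation behind (2.5): for fixed \(y\) and \(x\neq y\) (\(n\geq 3\)),

\[\partial_{x_{i}}\ln|x-y|=\frac{x_{i}-y_{i}}{|x-y|^{2}},\qquad\Delta_{x}\ln|x-y|=\frac{n}{|x-y|^{2}}-\frac{2}{|x-y|^{2}}=\frac{n-2}{|x-y|^{2}},\]

so \(\Delta_{x}\ln\frac{16}{|x-y|}=-\frac{n-2}{|x-y|^{2}}\), and differentiating under the integral sign yields (2.5).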
Combining (2.4) and (2.5) yields that
\[\left\{\begin{aligned} -\Delta(h-h_{1})=0\quad\text{in }B_{4}(x_{0})\\ h-h_{1}\leq 0\qquad\text{on }\partial B_{4}(x_{0}).\end{aligned}\right.\]
The maximum principle allows us to conclude that
\[h(x)\leq h_{1}(x),\quad x\in B_{4}(x_{0}).\]
Now let us denote the measure \(v(y)e^{\frac{2n-\mu}{2}u(y)}dy/\int_{B_{4}(x_{0})}v(y)e^{\frac{2n-\mu}{2}u(y)}dy\) by \(d\mu\). Therefore Jensen's inequality, together with (2.3), implies that
\[\int_{B_{4}(x_{0})}e^{\frac{h(x)}{\varepsilon}}dx \leq\int_{B_{4}(x_{0})}\exp\left(\frac{h_{1}(x)}{\int_{B_{4}(x_{ 0})}v(y)e^{\frac{2n-\mu}{2}u(y)}dy}\right)dx\] \[=\int_{B_{4}(x_{0})}\exp\left(\int_{B_{4}(x_{0})}\ln\left(\frac{ 16}{|x-y|}\right)d\mu\right)dx\] \[\leq\int_{B_{4}(x_{0})}\int_{B_{4}(x_{0})}\frac{16}{|x-y|}d\mu dx\] \[=\int_{B_{4}(x_{0})}\int_{B_{4}(x_{0})}\frac{16}{|x-y|}dxd\mu\] \[\leq C.\]
We consider the function
\[q(x)=u(x)-h(x),\quad x\in B_{3}(x_{0}).\]
We find that
\[\Delta q(x)=\Delta u(x)-\Delta h(x)=-(n-2)\int_{\mathbb{R}^{n}\setminus B_{4} (x_{0})}\frac{v(y)e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{2}}dy.\]
If \(x\in B_{3}(x_{0})\) and \(y\in\mathbb{R}^{n}\setminus B_{4}(x_{0})\), then \(|x-y|\geq 1\). Therefore, one has
\[0\leq-\Delta q(x)\leq(n-2)\beta_{n}\alpha.\]
Hence it follows from Harnack principle (Theorem 8.17 of [10]) that
\[\sup_{B_{2}(x_{0})}q(x)\leq C(\|q^{+}\|_{L^{2}(B_{3}(x_{0}))}+\|\Delta q\|_{L ^{\infty}(B_{3}(x_{0}))}).\]
To estimate the first term, we note that
\[q^{+}(x)\leq u^{+}(x).\]
Thus we get
\[\int_{B_{3}(x_{0})}(q^{+}(x))^{2}dx \leq C\int_{B_{3}(x_{0})}e^{\frac{2n-\mu}{2}q^{+}(x)}dx\] \[\leq C\int_{B_{3}(x_{0})}e^{\frac{2n-\mu}{2}u^{+}(x)}dx\] \[\leq C.\]
Therefore, it follows that
\[u(x)=q(x)+h(x)\leq C+h(x),\quad x\in B_{2}(x_{0}).\]
Therefore we reach the estimate
\[\int_{B_{2}(x_{0})}e^{\frac{u(y)}{\varepsilon}}dy\leq C\int_{B_{2}(x_{0})}e^{ \frac{h(y)}{\varepsilon}}dy\leq\tilde{C}.\]
By the definition of \(v(x)\), we get
\[v(x) =\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}dy\] \[=\int_{\mathbb{R}^{n}\setminus B_{1}(x)}\frac{e^{\frac{2n-\mu}{2 }u(y)}}{|x-y|^{\mu}}dy+\int_{B_{1}(x)}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{ \mu}}dy\] \[\leq C+\left(\int_{B_{1}(x)}\frac{1}{|x-y|^{p\mu}}dy\right)^{ \frac{1}{p}}\left(\int_{B_{1}(x)}e^{\frac{2n-\mu}{2}qu(y)}dy\right)^{\frac{1}{ q}}\] \[\leq C.\]
To obtain the last inequality above, we need \(1<p<\frac{n}{\mu}\), \(\frac{1}{p}+\frac{1}{q}=1\) and \(q>\frac{n}{n-\mu}\). Since \(\frac{1}{\varepsilon}>\frac{n(2n-\mu)}{2(n-\mu)}\), we can choose \(q\) satisfying
\[\frac{1}{\varepsilon}\geq\frac{2n-\mu}{2}q>\frac{n(2n-\mu)}{2(n-\mu)},\]
such that \(v(x)\leq C\). By (2.1), we have for any \(|x_{0}|\) sufficiently large,
\[-\Delta u(x_{0}) =(n-2)\int_{\mathbb{R}^{n}}\frac{v(y)e^{\frac{2n-\mu}{2}u(y)}}{|x _{0}-y|^{2}}dy\] \[=(n-2)\int_{\mathbb{R}^{n}\setminus B_{2}(x_{0})}\frac{v(y)e^{ \frac{2n-\mu}{2}u(y)}}{|x_{0}-y|^{2}}dy+(n-2)\int_{B_{2}(x_{0})}\frac{v(y)e^{ \frac{2n-\mu}{2}u(y)}}{|x_{0}-y|^{2}}dy\] \[\leq C+C\left(\int_{B_{2}(x_{0})}\frac{1}{|x_{0}-y|^{2p_{1}}}dy \right)^{\frac{1}{p_{1}}}\left(\int_{B_{2}(x_{0})}e^{\frac{2n-\mu}{2}q_{1}u(y )}dy\right)^{\frac{1}{q_{1}}}\] \[\leq C,\]
where \(\frac{1}{p_{1}}+\frac{1}{q_{1}}=1\), \(1<p_{1}<\frac{n}{2}\) and \(q_{1}>\frac{n}{n-2}\). Since \(\frac{1}{\varepsilon}>\frac{n(2n-\mu)}{2(n-2)}\), we can choose \(q_{1}\) satisfying
\[\frac{1}{\varepsilon}\geq\frac{2n-\mu}{2}q_{1}>\frac{n(2n-\mu)}{2(n-2)}\]
such that \(-\Delta u(x_{0})\leq C\). Therefore, \(-\Delta u\) is nonnegative and bounded on \(\mathbb{R}^{n}\). According to Lemma 2.2, we directly get
\[u(x)\leq C,\]
for some constant \(C>0\).
Then for the case \(n=2\), by the regularity theory of elliptic equations, we see that
\[\sup_{B_{1}(x_{0})}u\leq C\{\|u^{+}\|_{L^{2}(B_{2}(x_{0}))}+\|ve^{\frac{4-\mu}{2}u}\|_{L^{2}(B_{2}(x_{0}))}\}.\]
Similarly, we consider \(h(x)\) and \(g(x)\) satisfying the following equations
\[\begin{cases}-\Delta h(x)=v(x)e^{\frac{4-\mu}{2}u(x)}\quad\text{in}\quad B_{4} (x_{0}),\\ h(x)=0\qquad\text{on}\quad\partial B_{4}(x_{0})\end{cases} \tag{2.6}\]
and
\[\begin{cases}-\Delta g(x)=0\qquad\text{in}\quad B_{4}(x_{0}),\\ g(x)=u(x)\quad\text{on}\quad\partial B_{4}(x_{0}).\end{cases} \tag{2.7}\]
We know that
\[u(x)=h(x)+g(x)\quad\text{in}\quad B_{4}(x_{0}).\]
Using Lemma 2.3 and the mean value property of harmonic functions, we can prove that
\[u(x)\leq C\quad\text{and}\quad v(x)\leq C\]
for some constant \(C>0\). The proof is similar to the case for \(n\geq 3\) and is omitted.
Define
\[J(x)=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\left[\ln\left(\frac{|y|+1}{|x-y| }\right)\right]v(y)e^{\frac{2n-\mu}{2}u(y)}dy. \tag{2.8}\]
Since \(\int_{\mathbb{R}^{n}}v(y)e^{\frac{2n-\mu}{2}u(y)}dy<\infty\), \(J(x)\) is well-defined.
**Lemma 2.4**.: _Suppose that \(u\) satisfies condition (B). Then there holds_
\[\lim_{|x|\to\infty}\frac{J(x)}{\ln|x|}=-\alpha.\]
Proof.: Firstly, we prove that
\[J(x)\geq-\alpha\ln|x|, \tag{2.9}\]
for \(|x|\) large. Set
\[D_{1}=\left\{y\in\mathbb{R}^{n}||y-x|\leq\frac{|x|}{2}\right\}\text{ and }\ D_{2}=\left\{y\in\mathbb{R}^{n}||y-x|>\frac{|x|}{2}\right\}.\]
For \(y\in D_{1}\), we have \(|x-y|\leq\frac{|x|}{2}\leq|y|<|y|+1\), hence
\[\ln\frac{|y|+1}{|x-y|}>0,\]
which further implies
\[J(x)\geq\frac{1}{\beta_{n}}\int_{D_{2}}\left[\ln\left(\frac{|y|+1}{|x-y|} \right)\right]v(y)e^{\frac{2n-\mu}{2}u(y)}dy.\]
For \(|x|\geq 2\), we find \(|x-y|\leq|x|+|y|\leq|x|(1+|y|)\) and
\[\ln\frac{|y|+1}{|x-y|}\geq\ln\frac{1}{|x|}.\]
For \(|x|\geq 2\), we conclude that
\[J(x) \geq\frac{-\ln|x|}{\beta_{n}}\int_{D_{2}}v(y)e^{\frac{2n-\mu}{2}u( y)}dy\] \[\geq-\alpha\ln|x|.\]
Then, we claim that \(\forall\varepsilon>0\), there exists an \(R_{\varepsilon}>0\) such that
\[J(x)\leq-(\alpha-\varepsilon)\ln|x|,\quad\forall|x|\geq R_{\varepsilon}.\]
Let \(A_{1}=\{y\in\mathbb{R}^{n}||y|\leq R_{0}\}\). Then we can choose \(R_{0}\) large enough such that, for \(|x|\) sufficiently large,
\[\frac{1}{\beta_{n}}\int_{A_{1}}[\ln|x-y|-\ln(|y|+1)]v(y)e^{\frac{2n-\mu}{2}u( y)}dy\geq\Big{(}\alpha-\frac{\varepsilon}{2}\Big{)}\ln|x|. \tag{2.10}\]
Let \(A_{2}=\left\{y\in\mathbb{R}^{n}||y-x|\leq\frac{|x|}{2},|y|\geq R_{0}\right\}\) and \(A_{3}=\left\{y\in\mathbb{R}^{n}||y-x|>\frac{|x|}{2},|y|\geq R_{0}\right\}\). Then
\[\frac{1}{\beta_{n}}\int_{A_{2}}[\ln|x-y|-\ln(|y|+1)]v(y)e^{\frac{2 n-\mu}{2}u(y)}dy \tag{2.11}\] \[\geq \frac{1}{\beta_{n}}\int_{B_{1}(x)}\ln|x-y|v(y)e^{\frac{2n-\mu}{2} u(y)}dy-\frac{1}{\beta_{n}}\int_{A_{2}}\ln(|y|+1)v(y)e^{\frac{2n-\mu}{2}u(y)}dy\] \[\geq -C-\frac{\ln(2|x|)}{\beta_{n}}\int_{A_{2}}v(y)e^{\frac{2n-\mu}{2} u(y)}dy\] \[\geq -C-\frac{\varepsilon}{4}\ln|x|.\]
If \(y\in A_{3}\), then we find \(|y-x|>\frac{|x|}{2}\geq\frac{1}{2}(|y|-|x-y|)\), i.e.,
\[\frac{|x-y|}{|y|}\geq\frac{1}{3}\]
or
\[\frac{|x-y|}{|y|+1}\geq\frac{1}{6}.\]
Hence, it is clear that
\[\frac{1}{\beta_{n}}\int_{A_{3}}[\ln|x-y|-\ln(|y|+1)]v(y)e^{\frac{ 2n-\mu}{2}u(y)}dy \geq\frac{-\ln 6}{\beta_{n}}\int_{A_{3}}v(y)e^{\frac{2n-\mu}{2}u(y)}dy \tag{2.12}\] \[\geq-\frac{\varepsilon}{4}\ln|x|\]
for \(|x|\) large enough. Finally, we infer from (2.10)-(2.12) that
\[-J(x)\geq(\alpha-\varepsilon)\ln|x|\]
for \(|x|\) large enough. This proves the claim.
**Theorem 2.1**.: _Let \(u\in C^{n}\) be a solution of the following equations_
\[\begin{cases}(-\Delta)^{\frac{n}{2}}u(x)=v(x)e^{\frac{2n-\mu}{2}u(x)}\quad\text{ in}\quad\mathbb{R}^{n},\\ v(x)=\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}dy\quad \text{in}\quad\mathbb{R}^{n},\end{cases}\]
_where \(n\geq 2\). If \(u(x)=o(|x|^{2})\) for \(n\geq 3\) and_
\[\alpha:=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}v(y)e^{\frac{2n-\mu}{2}u(y)}dy<\infty\quad\text{and}\quad\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy<\infty, \tag{2.13}\]
_then \(u\) is given by_
\[u(x)=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\left[\ln\left(\frac{|y|+1}{|x-y |}\right)\right]v(y)e^{\frac{2n-\mu}{2}u(y)}dy+C, \tag{2.14}\]
_for some constant \(C\)._
Proof.: By (2.8), \(J(x)\) satisfies
\[(-\Delta)^{\frac{n}{2}}J(x)=v(x)e^{\frac{2n-\mu}{2}u(x)}.\]
Set \(p=[\frac{n+1}{2}]\), the greatest integer less than or equal to \(\frac{n+1}{2}\). We can see that the function \(u-J\) is a poly-harmonic function with \((-\Delta)^{p}(u-J)=0\).
Claim: If \(p\geq 2\), \((-\Delta)^{p-1}(u-J)=0\).
Let \(g=u-J\). Since \((\Delta)^{p-1}g\) is harmonic, by the mean value theorem and the divergence theorem we have, for all \(x_{0}\in\mathbb{R}^{n}\) and all \(r>0\),
\[\begin{split}&[(\Delta)^{p-1}g](x_{0})\\ =&\frac{1}{\omega_{n}r^{n}}\int_{B_{r}(x_{0})}[( \Delta)^{p-1}g](y)dy\\ =&\frac{1}{\omega_{n}r^{n}}\int_{\partial B_{r}(x_{ 0})}\frac{\partial}{\partial r}[(\Delta)^{p-2}g](y)dS\\ =&\frac{n}{r}\frac{1}{n\omega_{n}r^{n-1}}\int_{ \partial B_{r}(x_{0})}\frac{\partial}{\partial r}[(\Delta)^{p-2}g](y)dS\\ =&\frac{n}{r}\frac{\partial}{\partial r}\left(\frac{ 1}{n\omega_{n}r^{n-1}}\int_{\partial B_{r}(x_{0})}[(\Delta)^{p-2}g](y)dS\right),\end{split} \tag{2.15}\]
where \(\frac{\partial}{\partial r}\) is the radial derivative along the sphere.
Multiplying (2.15) by \(\frac{r}{n}\) and integrating on \((0,r)\), we obtain
\[\frac{r^{2}}{2n}[(\Delta)^{p-1}g](x_{0})+[(\Delta)^{p-2}g](x_{0})=\frac{1}{n \omega_{n}r^{n-1}}\int_{\partial B_{r}(x_{0})}[(\Delta)^{p-2}g](y)dS. \tag{2.16}\]
Then, multiplying (2.16) by \(nr^{n-1}\), integrating on \((0,r)\), and dividing the resulting equation by \(r^{n}\), we get
\[\frac{r^{2}}{2(n+2)}[(\Delta)^{p-1}g](x_{0})+[(\Delta)^{p-2}g](x_{0})=\frac{1}{ \omega_{n}r^{n}}\int_{B_{r}(x_{0})}[(\Delta)^{p-2}g](y)dy.\]
Repeating the above argument \(p-1\) times, we get
\[\begin{split} P(r):=& C_{1}(n,p)r^{2(p-1)}[(\Delta)^{p-1 }g](x_{0})+C_{2}(n,p)r^{2(p-2)}[(\Delta)^{p-2}g](x_{0})\\ &+\cdots+C_{p-1}(n,p)r^{2}(\Delta g)(x_{0})\\ =&\frac{1}{n\omega_{n}r^{n-1}}\int_{\partial B_{r}( x_{0})}g(y)dS-g(x_{0}),\end{split} \tag{2.17}\]
where \(C_{i}(n,p)>0\), \(i=1,\cdots,p-1\).
By Jensen's inequality, one gets
\[\begin{split}\exp(\theta P(r))&=e^{-\theta g(x_{0} )}\text{exp}\left[\frac{1}{n\omega_{n}r^{n-1}}\int_{\partial B_{r}(x_{0})} \theta g(y)dS\right]\\ &\leq e^{-\theta g(x_{0})}\frac{1}{n\omega_{n}r^{n-1}}\int_{ \partial B_{r}(x_{0})}e^{\theta g(y)}dS,\end{split} \tag{2.18}\]
where \(\theta\) is a constant.
Using (2.9), we see that
\[g(x)=u(x)-J(x)\leq u(x)+\alpha\ln|x|.\]
Let \(\theta=\frac{2n-\mu}{2}\) in (2.18). Then one gets
\[\begin{split} r^{-\frac{2n-\mu}{2}\alpha}e^{\frac{2n-\mu}{2}P(r) }&\leq r^{-\frac{2n-\mu}{2}\alpha}e^{-\frac{2n-\mu}{2}g(x_{0})} \frac{1}{n\omega_{n}r^{n-1}}\int_{\partial B_{r}(x_{0})}e^{\frac{2n-\mu}{2}(u (y)+\alpha\ln|y|)}dS\\ &\leq C\frac{1}{r^{n-1}}\int_{\partial B_{r}(x_{0})}e^{\frac{2n- \mu}{2}u(y)}\left(\frac{|y|}{r}\right)^{\frac{2n-\mu}{2}\alpha}dS\\ &\leq C\frac{1}{r^{n-1}}\int_{\partial B_{r}(x_{0})}e^{\frac{2n- \mu}{2}u(y)}dS\end{split}\]
for \(r\geq 1\). Thus we have \(r^{-\frac{2n-\mu}{2}\alpha}e^{\frac{2n-\mu}{2}P(r)}\in L^{1}(1,\infty)\). Hence the leading coefficient in \(P(r)\) must be non-positive: otherwise \(P(r)\geq cr^{2(p-1)}\) for large \(r\), and the integrand would grow exponentially. That is,
\[[(\Delta)^{p-1}g](x_{0})=-C_{0}\leq 0.\]
Note that by the definition of \(J(x)\), we know that \(\Delta J(x)\leq 0\) in \(\mathbb{R}^{n}\). By the mean value property for super-harmonic functions, we have, for any \(x_{0}\in\mathbb{R}^{n}\) and \(r>0\),
\[J(x_{0})\geq\frac{1}{n\omega_{n}r^{n-1}}\int_{\partial B_{r}(x_{0})}J(y)dS. \tag{2.19}\]
Then it follows from (2.17) and (2.19) that
\[P(r)\geq\frac{1}{n\omega_{n}r^{n-1}}\int_{\partial B_{r}(x_{0})}u(y)dS-u(x_{0}).\]
Since \(u=o(|x|^{2})\) at \(\infty\), by dividing both sides by \(r^{2}\) and letting \(r\to\infty\), we observe that
\[\varliminf_{r\to\infty}\frac{P(r)}{r^{2}}\geq 0.\]
If \(C_{0}>0\), then we would have
\[\varlimsup_{r\to\infty}\frac{P(r)}{r^{2}}<0\]
or \(\frac{P(r)}{r^{2}}\) tends to \(-\infty\) as \(r\to\infty\). This is impossible. We deduce that
\[[(\Delta)^{p-1}g](x)=0,\quad\forall x\in\mathbb{R}^{n}.\]
Similarly, \((\Delta)^{p-2}g=\cdots=\Delta g=0\), i.e.,
\[\Delta u=\Delta J\leq 0. \tag{2.20}\]
If \(p=1\) and \(n=2\), we obtain (2.20) directly. According to (1.6), (2.9) and Lemma 2.1, we finally conclude that \(g=u-J\) is an entire harmonic function with
\[\liminf_{|x|\to\infty}\frac{J(x)-u(x)}{|x|^{2}}\geq 0,\]
for \(n\geq 2\). Then the classical Liouville theorem implies that
\[u=J(x)+\sum_{i=1}^{n}c_{i}x_{i}+d\]
for some constants \(c_{i}\) and \(d\). Now the integrability condition (2.13) forces \(c_{i}=0\) for \(i=1,2,\cdots,n.\) Hence, we conclude that \(u\) satisfies the integral equation (2.14).
Proof of Theorem 1.2.: By Lemma 2.4 and Theorem 2.1, we deduce that
\[\lim_{|x|\to\infty}\frac{u(x)}{\ln|x|}=-\alpha. \tag{2.21}\]
Since
\[\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy<\infty,\]
from (2.9) and (2.14), we conclude that
\[\alpha>\frac{2n}{2n-\mu}. \tag{2.22}\]
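Indeed, by (2.9) and (2.14), \(u(x)\geq-\alpha\ln|x|+C\) for \(|x|\) large, so

\[\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy\geq C\int_{|y|\geq R}|y|^{-\frac{2n-\mu}{2}\alpha}dy,\]

and the right-hand side is finite only when \(\frac{2n-\mu}{2}\alpha>n\), which is (2.22).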
Then we claim that
\[|x|^{\mu}v(x)-\beta\to 0\ \ \text{as}\ \ |x|\to\infty.\]
We start from the identity
\[|x|^{\mu}v(x)-\beta=\int_{\mathbb{R}^{n}}\frac{|x|^{\mu}-|x-y|^{\mu}}{|x-y|^{ \mu}}e^{\frac{2n-\mu}{2}u(y)}dy.\]
We take \(A_{i}\), \(i=1,2,3\) as in Lemma 2.4. If \(y\in A_{1}\), one has
\[\frac{|x|^{\mu}-|x-y|^{\mu}}{|x-y|^{\mu}}\to 0\]
as \(|x|\to\infty\). Hence,
\[\int_{A_{1}}\frac{|x|^{\mu}-|x-y|^{\mu}}{|x-y|^{\mu}}e^{\frac{2n-\mu}{2}u(y)}dy \to 0\ \ \text{as}\ \ |x|\to\infty.\]
Next, if \(y\in A_{2}\), we have \(\frac{|x|}{2}\leq|y|\leq\frac{3}{2}|x|\). According to (2.21) and (2.22), we deduce that
\[\left|\int_{A_{2}}\frac{|x|^{\mu}-|x-y|^{\mu}}{|x-y|^{\mu}}e^{ \frac{2n-\mu}{2}u(y)}dy\right|\] \[\leq \int_{A_{2}}e^{\frac{2n-\mu}{2}u(y)}dy+|x|^{\mu}\int_{A_{2}}\frac {e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}dy\] \[\leq o(1)+|x|^{\mu-\frac{2n-\mu}{2}(\alpha-o(1))}\int_{A_{2}}\frac{1 }{|x-y|^{\mu}}dy\] \[\leq o(1)+C|x|^{n-\frac{2n-\mu}{2}\alpha+o(1)}\to 0\]
as \(|x|\to\infty\).
Finally, if \(y\in A_{3}\), we have
\[\frac{|x|^{\mu}}{|x-y|^{\mu}}\leq 2^{\mu},\]
which implies
\[\left|\int_{A_{3}}\frac{|x|^{\mu}-|x-y|^{\mu}}{|x-y|^{\mu}}e^{ \frac{2n-\mu}{2}u(y)}dy\right|\leq C\int_{A_{3}}e^{\frac{2n-\mu}{2}u(y)}dy \leq C\varepsilon.\]
In fact, we can give the following more precise decay of \(u(x)\):
\[u(x)=-\alpha\ln|x|+O(1), \tag{2.23}\]
for \(|x|\) large enough. We find
\[u(x)+\alpha\ln|x|=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\left[\ln\left( \frac{|x|(|y|+1)}{|x-y|}\right)\right]v(y)e^{\frac{2n-\mu}{2}u(y)}dy+C.\]
A direct calculation yields that
\[\int_{A_{2}}\left[\ln\left(\frac{|x|(|y|+1)}{|x-y|}\right)\right] v(y)e^{\frac{2n-\mu}{2}u(y)}dy\] \[\leq C\int_{A_{2}}(\ln(2|x|^{2}))v(y)e^{\frac{2n-\mu}{2}u(y)}dy+C|x|^{o(1)-\mu-\frac{2n-\mu}{2}\alpha}\int_{A_{2}}\ln\frac{1}{|x-y|}dy\] \[\leq C|x|^{n+o(1)-\mu-\frac{2n-\mu}{2}\alpha}(\ln(\sqrt{2}|x|)+\ln|x|)\] \[\leq C,\]
for \(|x|\) large enough. For \(y\in A_{1}\cup A_{3}\), it is clear that
\[\int_{A_{1}\cup A_{3}}\left[\ln\left(\frac{|x|(|y|+1)}{|x-y|} \right)\right]v(y)e^{\frac{2n-\mu}{2}u(y)}dy\] \[\leq \int_{A_{1}\cup A_{3}}(\ln 2(|y|+1))v(y)e^{\frac{2n-\mu}{2}u(y)}dy\] \[\leq C\int_{A_{1}}v(y)e^{\frac{2n-\mu}{2}u(y)}dy+C\int_{A_{3}}\frac{ \ln(2(|y|+1))}{|y|^{\mu}}e^{\frac{2n-\mu}{2}u(y)}dy\] \[\leq C.\]
We conclude that
\[u(x)+\alpha\ln|x|\leq C.\]
By (2.9), we obtain (2.23). This completes the proof.
**Lemma 2.5**.: _Suppose that \(u\in C^{n}\) is a solution of (2.14) with \(n\geq 2\), and set_
\[\alpha:=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}v(y)e^{\frac{2n-\mu}{2}u(y)} dy<\infty,\ \int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy<\infty, \tag{2.24}\]
_then the following identity holds:_
\[\alpha\left(\alpha-\frac{4n}{2n-\mu}\right)=\frac{4}{(2n-\mu)\beta_{n}}\int_{ \mathbb{R}^{n}}\langle x,\nabla v(x)\rangle e^{\frac{2n-\mu}{2}u(x)}dx. \tag{2.25}\]
Proof.: Since \(u\in C^{n}\), \(\nabla u\) is continuous. Differentiating equation (2.14) yields
\[\langle x,\nabla u(x)\rangle=-\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\frac{ \langle x,x-y\rangle}{|x-y|^{2}}v(y)e^{\frac{2n-\mu}{2}u(y)}dy. \tag{2.26}\]
Multiplying both sides of (2.26) by \(v(x)e^{\frac{2n-\mu}{2}u(x)}\) and integrating both sides of the resulting equation over \(B_{R}\) for any \(R>0\), one gets
\[\begin{split}&-\frac{1}{\beta_{n}}\int_{B_{R}}v(x)e^{\frac{2n- \mu}{2}u(x)}\int_{\mathbb{R}^{n}}\frac{\langle x,x-y\rangle}{|x-y|^{2}}v(y)e^ {\frac{2n-\mu}{2}u(y)}dydx\\ &=\int_{B_{R}}v(x)e^{\frac{2n-\mu}{2}u(x)}\langle x,\nabla u(x) \rangle dx.\end{split} \tag{2.27}\]
By a simple calculation, we obtain
\[\begin{split}&\int_{B_{R}}v(x)e^{\frac{2n-\mu}{2}u(x)}\langle x,\nabla u(x)\rangle dx\\ &=\frac{2}{2n-\mu}\int_{B_{R}}v(x)\langle x,\nabla e^{\frac{2n- \mu}{2}u(x)}\rangle dx\\ &=-\frac{2}{2n-\mu}\int_{B_{R}}e^{\frac{2n-\mu}{2}u(x)}(\langle x,\nabla v(x)\rangle+nv(x))dx+\frac{2R}{2n-\mu}\int_{\partial B_{R}}v(x)e^{ \frac{2n-\mu}{2}u(x)}dS.\end{split}\]
By the asymptotic behavior of \(u\), \(v\) at \(\infty\), we have
\[\lim_{R\to\infty}R\int_{\partial B_{R}}v(x)e^{\frac{2n-\mu}{2}u(x)}dS=0.\]
We claim that
\[\int_{\mathbb{R}^{n}}\left|v(x)e^{\frac{2n-\mu}{2}u(x)}\langle x,\nabla u(x) \rangle\right|dx<\infty\quad\text{and}\quad\int_{\mathbb{R}^{n}}\left|e^{ \frac{2n-\mu}{2}u(x)}\langle x,\nabla v(x)\rangle\right|dx<\infty.\]
Then, letting \(R\to\infty\), one has
\[\begin{split} RHS&=-\frac{2}{2n-\mu}\int_{\mathbb{R }^{n}}e^{\frac{2n-\mu}{2}u(x)}(\langle x,\nabla v(x)\rangle+nv(x))dx\\ &=-\frac{2}{2n-\mu}\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(x)} \langle x,\nabla v(x)\rangle dx-\frac{2n}{2n-\mu}\beta_{n}\alpha.\end{split} \tag{2.28}\]
In order to prove this claim, it suffices to prove
\[|x||\nabla u(x)|\leq C\quad\text{and}\quad|x||\nabla v(x)|\leq C. \tag{2.29}\]
By (2.26), we know that
\[|x||\nabla u(x)|\leq\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\frac{|x|}{|x-y|}v( y)e^{\frac{2n-\mu}{2}u(y)}dy.\]
For \(|x|\) large, we divide \(\mathbb{R}^{n}\) into two parts:
\[A_{1}=\{y||x-y|\leq\frac{|x|}{2}\},\ A_{2}=\{y||x-y|>\frac{|x|}{2}\}.\]
If \(y\in A_{1}\), by Theorem 1.2, we deduce that
\[\int_{A_{1}}\frac{|x|}{|x-y|}v(y)e^{\frac{2n-\mu}{2}u(y)}dy\] \[\leq C|x|^{1-\mu-\alpha\frac{2n-\mu}{2}}\int_{A_{1}}\frac{1}{|x-y|}dy\] \[\leq C|x|^{n-\mu-\alpha\frac{2n-\mu}{2}}\to 0\]
as \(|x|\to\infty\).
If \(y\in A_{2}\), it is clear that
\[\int_{A_{2}}\frac{|x|}{|x-y|}v(y)e^{\frac{2n-\mu}{2}u(y)}dy\leq 2\int_{A_{2}}v( y)e^{\frac{2n-\mu}{2}u(y)}dy\leq C.\]
Thus, one gets \(|x||\nabla u(x)|\leq C\).
For \(|\nabla v(x)|\), we consider the case \(0<\mu<n-1\). For \(|x|\) large enough, one has
\[|x||\nabla v(x)| \leq\mu\int_{\mathbb{R}^{n}}\frac{|x|e^{\frac{2n-\mu}{2}u(y)}}{| x-y|^{\mu+1}}dy\] \[=\mu\int_{A_{1}}\frac{|x|e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu+1}} dy+\mu\int_{A_{2}}\frac{|x|e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu+1}}dy\] \[\leq C|x|^{1-\frac{2n-\mu}{2}\alpha}\int_{A_{1}}\frac{1}{|x-y|^{ \mu+1}}dy+C|x|^{-\mu}\int_{A_{2}}e^{\frac{2n-\mu}{2}u(y)}dy\] \[\leq C|x|^{n-\mu-\frac{2n-\mu}{2}\alpha}+C|x|^{-\mu}\] \[\leq C.\]
For the case \(n-1\leq\mu<n\), we can write
\[\nabla v(x)=\frac{2n-\mu}{2}\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u( x-y)}\nabla_{x}u(x-y)}{|y|^{\mu}}dy.\]
Then, for \(|x|\) large enough, we divide \(\mathbb{R}^{n}\) into four parts:
\[D_{1}=\{y||y|\geq 2|x|\},\quad D_{2}=\left\{y||y-x|>\frac{|x|}{2},|y|\leq 2|x|\right\}\]
\[D_{3}=\left\{y|K<|y-x|\leq\frac{|x|}{2}\right\},\quad D_{4}=\{y||y-x|\leq K\},\]
where \(K>0\) is large. Theorem 1.2 implies that
\[|x||\nabla v(x)| \leq\frac{2n-\mu}{2}\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2} u(x-y)}|\nabla_{x}u(x-y)||x|}{|y|^{\mu}}dy\] \[\leq\frac{C}{|x|^{\mu-1}}\int_{D_{1}}e^{\frac{2n-\mu}{2}u(x-y)}| \nabla_{x}u(x-y)|dy+\frac{C}{|x|^{\frac{2n-\mu}{2}\alpha}}\int_{D_{2}}\frac{1} {|y|^{\mu}}dy\] \[\quad+\frac{C}{|x|^{\mu-1}}\int_{D_{3}\cup D_{4}}e^{\frac{2n-\mu} {2}u(x-y)}|\nabla_{x}u(x-y)|dy\] \[\leq\frac{C}{|x|^{\mu-1}}\int_{|y-x|\geq|x|}e^{\frac{2n-\mu}{2} u(x-y)}|\nabla_{x}u(x-y)|dy+\frac{C|x|^{n-\mu}}{|x|^{\frac{2n-\mu}{2} \alpha}}+\frac{C}{|x|^{\mu-1}}\] \[\leq\frac{C|x|^{n-\mu}}{|x|^{\frac{2n-\mu}{2}\alpha}}+\frac{C}{| x|^{\mu-1}}.\]
By (2.22), since \(n\geq 2\) and \(\mu\geq 1\), we obtain
\[|x||\nabla v(x)|\leq C.\]
Therefore, we get (2.29).
Writing \(x=\frac{1}{2}((x-y)+(x+y))\), one has the following identity for the left-hand side of (2.27):
\[-\frac{1}{\beta_{n}}\int_{B_{R}}v(x)e^{\frac{2n-\mu}{2}u(x)}\int _{\mathbb{R}^{n}}\frac{\langle x,x-y\rangle}{|x-y|^{2}}v(y)e^{\frac{2n-\mu}{2} u(y)}dydx\] \[=-\frac{1}{2\beta_{n}}\int_{B_{R}}v(x)e^{\frac{2n-\mu}{2}u(x)} \int_{\mathbb{R}^{n}}v(y)e^{\frac{2n-\mu}{2}u(y)}dydx \tag{2.30}\] \[\quad-\frac{1}{2\beta_{n}}\int_{B_{R}}v(x)e^{\frac{2n-\mu}{2}u(x) }\int_{\mathbb{R}^{n}}\frac{\langle x+y,x-y\rangle}{|x-y|^{2}}v(y)e^{\frac{2n -\mu}{2}u(y)}dydx.\]
We claim that
\[\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\left|\frac{\langle x,x-y\rangle}{| x-y|^{2}}v(x)e^{\frac{2n-\mu}{2}u(x)}v(y)e^{\frac{2n-\mu}{2}u(y)}\right|dydx<\infty.\]
Then, it is easy to see that the last term in (2.30) vanishes in the limit \(R\to\infty\), since the integrand is antisymmetric under exchanging the variables \(x\) and \(y\). Thus the left-hand side gives
\[LHS=-\frac{1}{2}\beta_{n}\alpha^{2}. \tag{2.31}\]
A simple calculation yields
\[\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\left|\frac{\langle x,x-y\rangle}{|x-y|^{2}}v(x)e^{\frac{2n-\mu}{2}u(x)}v(y)e^{\frac{2n-\mu}{2}u(y) }\right|dydx\] \[\leq \int_{\mathbb{R}^{n}}\int_{|y-x|\geq\frac{|x|}{2}}\frac{|x|}{|x-y |}v(x)e^{\frac{2n-\mu}{2}u(x)}v(y)e^{\frac{2n-\mu}{2}u(y)}dydx+\int_{B_{R_{0} }}\int_{|y-x|<\frac{|x|}{2}}\frac{v(y)e^{\frac{2n-\mu}{2}u(y)}}{|x-y|}dydx\]
\[\begin{split}&\lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}}(v(x)-v_{ \varepsilon}(x))|\nabla\varphi(x)|dx\\ =&\lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}} \int_{B_{\varepsilon}(x)}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu}}dy| \nabla\varphi(x)|dx\\ \leq& C\lim_{\varepsilon\to 0^{+}}\int_{supp\varphi}\int_{B_{ \varepsilon}(x)}\frac{1}{|x-y|^{\mu}}dydx\\ =& 0.\end{split} \tag{2.32}\]
Take a cutoff function \(\eta_{R}(x)\in C_{c}^{\infty}(B_{2R})\) and \(0\leq\eta_{R}(x)\leq 1\) such that
\[\eta_{R}(x)=\begin{cases}1,&x\in B_{R}\\ 0,&x\in B_{2R}^{c},\end{cases}\]
where \(R\) is large enough. By (2.32), we get
\[\lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}}\langle x,\nabla(v(x)-v_{ \varepsilon}(x))\rangle e^{\frac{2n-\mu}{2}u(x)}\eta_{R}(x)dx=0.\]
For \(|x|\geq R\), by (2.29), a simple calculation yields
\[|\langle x,\nabla(v(x)-v_{\varepsilon}(x))\rangle|= \left|\frac{2n-\mu}{2}\int_{B_{\varepsilon}}\frac{\langle x,\nabla_ {x}u(x-y)\rangle e^{\frac{2n-\mu}{2}u(x-y)}}{|y|^{\mu}}dy\right|\] \[\leq C|x|^{-\alpha\frac{2n-\mu}{2}}\int_{B_{\varepsilon}}\frac{1}{|y|^ {\mu}}dy\] \[\leq C\varepsilon^{n-\mu}|x|^{-\alpha\frac{2n-\mu}{2}}.\]
Then, we deduce that
\[\int_{\mathbb{R}^{n}\setminus B_{R}}\left|\langle x,\nabla(v(x)-v _{\varepsilon}(x))\rangle e^{\frac{2n-\mu}{2}u(x)}(1-\eta_{R}(x))\right|dx\] \[\leq C\varepsilon^{n-\mu}\int_{\mathbb{R}^{n}\setminus B_{R}}|x|^{- \alpha(2n-\mu)}dx\] \[\leq C\varepsilon^{n-\mu}R^{n-\alpha(2n-\mu)}.\]
Thus, one obtains
\[\lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}}\langle x,\nabla(v(x) -v_{\varepsilon}(x))\rangle e^{\frac{2n-\mu}{2}u(x)}dx\] \[= \lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}}\langle x,\nabla(v (x)-v_{\varepsilon}(x))\rangle e^{\frac{2n-\mu}{2}u(x)}(\eta_{R}(x)+(1-\eta_{ R}(x)))dx\] \[= 0.\]
That is
\[\int_{\mathbb{R}^{n}}\langle x,\nabla v(x)\rangle e^{\frac{2n-\mu}{2}u(x)}dx= \lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}}\langle x,\nabla v_{\varepsilon}(x) \rangle e^{\frac{2n-\mu}{2}u(x)}dx. \tag{2.33}\]
Now, we give the representation formula for \(\nabla v_{\varepsilon}(x)\). We have
\[\nabla v_{\varepsilon}(x)= \nabla_{x}\left(\int_{\mathbb{R}^{n}\setminus B_{\varepsilon}} \frac{e^{\frac{2n-\mu}{2}u(x-y)}}{|y|^{\mu}}dy\right)\] \[= \int_{\mathbb{R}^{n}\setminus B_{\varepsilon}}\frac{\nabla_{x} \left(e^{\frac{2n-\mu}{2}u(x-y)}\right)}{|y|^{\mu}}dy\] \[= \int_{\mathbb{R}^{n}\setminus B_{\varepsilon}(x)}\frac{\nabla \left(e^{\frac{2n-\mu}{2}u(y)}\right)}{|x-y|^{\mu}}dy\] \[= \lim_{R\to\infty}\int_{B_{R}(x)\setminus B_{\varepsilon}(x)} \frac{\nabla\left(e^{\frac{2n-\mu}{2}u(y)}\right)}{|x-y|^{\mu}}dy\] \[= \lim_{R\to\infty}\int_{\partial B_{R}(x)}\frac{e^{\frac{2n-\mu}{ 2}u(y)}\nu}{|x-y|^{\mu}}dS-\int_{\partial B_{\varepsilon}(x)}\frac{e^{\frac{2n -\mu}{2}u(y)}\nu}{|x-y|^{\mu}}dS\] \[-\mu\lim_{R\to\infty}\int_{B_{R}(x)\setminus B_{\varepsilon}(x)} \frac{e^{\frac{2n-\mu}{2}u(y)}(x-y)}{|x-y|^{\mu+2}}dy\] \[:= V^{1}-V^{2}_{\varepsilon}-V^{3}_{\varepsilon},\]
where \(\nu=\frac{y-x}{|y-x|}\) is the unit outer normal direction. For the integral \(V^{1}\), there holds
\[\lim_{R\rightarrow\infty}\int_{\partial B_{R}(x)}\left|\frac{e^{\frac{2n-\mu}{2}u(y)}\nu}{|x-y|^{\mu}}\right|dS\leq C\lim_{R\rightarrow\infty}\frac{1}{R^{\mu}}\int_{\partial B_{R}(x)}e^{\frac{2n-\mu}{2}u(y)}dS=0.\]
Thus we have \(\nabla v_{\varepsilon}(x)=-V_{\varepsilon}^{2}-V_{\varepsilon}^{3}\). For the integral \(V_{\varepsilon}^{2}\), there exists a \(\xi(y)\in B_{\varepsilon}(x)\) such that
\[\int_{\partial B_{\varepsilon}(x)}\frac{e^{\frac{2n-\mu}{2}u(y)} \nu}{|x-y|^{\mu}}dS= \int_{\partial B_{\varepsilon}(x)}\frac{e^{\frac{2n-\mu}{2}u(y)} -e^{\frac{2n-\mu}{2}u(x)}}{|x-y|^{\mu}}\cdot\frac{y-x}{|y-x|}dS\] \[= \frac{2n-\mu}{2}\int_{\partial B_{\varepsilon}(x)}\frac{e^{\frac {2n-\mu}{2}u(\xi(y))}\langle\nabla u(\xi(y)),y-x\rangle}{|x-y|^{\mu}}\cdot \frac{y-x}{|y-x|}dS.\]
If \(|x|\leq R_{0}\) and \(R_{0}\) is large, there exists a constant \(C>0\) such that
\[\int_{\partial B_{\varepsilon}(x)}\left|\frac{e^{\frac{2n-\mu}{2} u(\xi(y))}\langle\nabla u(\xi(y)),y-x\rangle}{|x-y|^{\mu}}\cdot\frac{y-x}{|y-x |}\right|dS\] \[\leq C\int_{\partial B_{\varepsilon}(x)}\frac{1}{|x-y|^{\mu-1}}dS\] \[\leq C\varepsilon^{n-\mu}.\]
If \(|x|\geq R_{0}\), we get
\[\int_{\partial B_{\varepsilon}(x)}\left|\frac{e^{\frac{2n-\mu}{2} u(\xi(y))}\langle\nabla u(\xi(y)),y-x\rangle}{|x-y|^{\mu}}\cdot\frac{y-x}{|y-x |}\right|dS\] \[\leq C|x|^{-1-\alpha\frac{2n-\mu}{2}}\int_{\partial B_{\varepsilon}(x )}\frac{1}{|x-y|^{\mu-1}}dS\] \[\leq C\varepsilon^{n-\mu}|x|^{-1-\alpha\frac{2n-\mu}{2}}.\]
By a simple calculation, we find
\[\lim_{\varepsilon\to 0^{+}}\left|\int_{\mathbb{R}^{n}} \langle x,V_{\varepsilon}^{2}\rangle e^{\frac{2n-\mu}{2}u(x)}dx\right|\] \[= \lim_{\varepsilon\to 0^{+}}\left|\int_{\mathbb{R}^{n}} \frac{2n-\mu}{2}\int_{\partial B_{\varepsilon}(x)}\frac{e^{\frac{2n-\mu}{2} u(\xi(y))}\langle\nabla u(\xi(y)),y-x\rangle}{|x-y|^{\mu}}\cdot\frac{\langle x,y-x \rangle}{|y-x|}dSe^{\frac{2n-\mu}{2}u(x)}dx\right|\] \[\leq C\lim_{\varepsilon\to 0^{+}}\int_{B_{R_{0}}} \varepsilon^{n-\mu}|x|e^{\frac{2n-\mu}{2}u(x)}dx+C\lim_{\varepsilon\to 0^{+}} \int_{B_{R_{0}}^{c}}\varepsilon^{n-\mu}|x|^{-\alpha\frac{2n-\mu}{2}}e^{ \frac{2n-\mu}{2}u(x)}dx\] \[= 0. \tag{2.34}\]
Therefore, with (2.33) and (2.34), one has
\[\int_{\mathbb{R}^{n}}\langle x,\nabla v(x)\rangle e^{\frac{2n-\mu}{2 }u(x)}dx\] \[= -\mu\lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}^{n}\setminus B_{\varepsilon}(x)}\frac{\langle x,x-y\rangle e^{ \frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu+2}}dye^{\frac{2n-\mu}{2}u(x)}dx\] \[= -\mu\lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}^{n}\setminus B_{\varepsilon}(x)}\frac{\langle x,x-y\rangle\left(e^ {\frac{2n-\mu}{2}u(y)}-\chi_{B_{\delta}(x)}(y)e^{\frac{2n-\mu}{2}u(x)}\right)}{ |x-y|^{\mu+2}}dye^{\frac{2n-\mu}{2}u(x)}dx\]
for \(\delta>0\) small enough. For any \(\Omega\subset\mathbb{R}^{n}\), we define
\[\chi_{\Omega}(x)=\begin{cases}1,&x\in\Omega\\ 0,&x\in\Omega^{c}.\end{cases}\]
We first prove that the above integrand is absolutely integrable on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\). Indeed, we estimate
\[\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\left|\frac{\langle x, x-y\rangle\left(e^{\frac{2n-\mu}{2}u(y)}-\chi_{B_{\delta}(x)}(y)e^{\frac{2n-\mu}{2} u(x)}\right)}{|x-y|^{\mu+2}}e^{\frac{2n-\mu}{2}u(x)}\right|dydx\] \[\leq \int_{\mathbb{R}^{n}}\int_{B_{\delta}(x)}\frac{|x||\left|e^{ \frac{2n-\mu}{2}u(y)}-e^{\frac{2n-\mu}{2}u(x)}\right|}{|x-y|^{\mu+1}}e^{\frac {2n-\mu}{2}u(x)}dydx+\int_{\mathbb{R}^{n}}\int_{B_{\delta}^{c}(x)}\frac{|x|e^{ \frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu+1}}e^{\frac{2n-\mu}{2}u(x)}dydx\] \[:= I_{1}+I_{2}\]
By (2.22), (2.23) and (2.29), we observe that
\[I_{1}\leq C\int_{\mathbb{R}^{n}}\int_{B_{\delta}(x)}\frac{|x||\nabla u( \xi(y))|e^{\frac{2n-\mu}{2}u(\xi(y))}}{|x-y|^{\mu}}e^{\frac{2n-\mu}{2}u(x)}dydx\] \[\leq C\int_{B_{R_{0}}}\int_{B_{\delta}(x)}\frac{e^{\frac{2n-\mu}{2}u( x)}}{|x-y|^{\mu}}dydx+C\int_{B_{R_{0}}^{c}}\int_{B_{\delta}(x)}\frac{|x|^{- \frac{2n-\mu}{2}\alpha}}{|x-y|^{\mu}}e^{\frac{2n-\mu}{2}u(x)}dydx\] \[\leq C\delta^{n-\mu}\int_{B_{R_{0}}}e^{\frac{2n-\mu}{2}u(x)}dx+C \delta^{n-\mu}\int_{B_{R_{0}}^{c}}|x|^{-(2n-\mu)\alpha}dx\] \[\leq C,\]
where \(\xi(y)\in B_{\delta}(x)\). We divide \(\mathbb{R}^{n}\setminus B_{\delta}(x)\) into two parts:
\[A_{1}=\left\{y|\delta\leq|x-y|\leq\frac{|x|}{2}\right\},\ A_{2}=\left\{y||x -y|>\frac{|x|}{2}\right\}.\]
For \(y\in A_{1}\), we deduce that
\[\begin{split}&\int_{\mathbb{R}^{n}}\int_{A_{1}}\frac{|x|e^{\frac{2n- \mu}{2}u(y)}}{|x-y|^{\mu+1}}e^{\frac{2n-\mu}{2}u(x)}dydx\\ \leq&\int_{B_{R_{0}}}\int_{A_{1}}\frac{C}{|x-y|^{\mu +1}}dy|x|e^{\frac{2n-\mu}{2}u(x)}dx+\int_{B_{R_{0}}^{c}}\int_{A_{1}}\frac{C}{| x-y|^{\mu+1}}dy|x|^{1-\alpha(2n-\mu)}dx\\ \leq& C\int_{B_{R_{0}}}|x|^{n-\mu}e^{\frac{2n-\mu}{2 }u(x)}dx+C\delta^{n-\mu-1}\int_{B_{R_{0}}}|x|e^{\frac{2n-\mu}{2}u(x)}dx+\int_{ B_{R_{0}}^{c}}|x|^{n-\mu-\alpha(2n-\mu)}dx\\ &+C\delta^{n-\mu-1}\int_{B_{R_{0}}^{c}}|x|^{1-\alpha(2n-\mu)}dx \\ \leq& C+C\delta^{n-\mu-1}\\ \leq& C\end{split}\]
for \(R_{0}\) large enough. In order to obtain the above inequality, we used the asymptotic behavior of \(u\) at \(\infty\) and \(\alpha>\frac{2n}{2n-\mu}\). For \(y\in A_{2}\), one gets
\[\begin{split}\int_{\mathbb{R}^{n}}\int_{A_{2}}\frac{|x|e^{\frac{2 n-\mu}{2}u(y)}}{|x-y|^{\mu+1}}e^{\frac{2n-\mu}{2}u(x)}dydx\leq& 2\int_{\mathbb{R}^{n}}\int_{A_{2}}\frac{e^{\frac{2n-\mu}{2}u(y)} }{|x-y|^{\mu}}dye^{\frac{2n-\mu}{2}u(x)}dx\\ \leq& 2\int_{\mathbb{R}^{n}}v(x)e^{\frac{2n-\mu}{2}u(x)} dx\\ \leq& C.\end{split}\]
Hence,
\[I_{1}+I_{2}\leq C.\]
We now compute
\[\begin{split}&\int_{\mathbb{R}^{n}}\langle x,\nabla v(x)\rangle e^{\frac{2n-\mu}{2}u(x)}dx\\ =&-\mu\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\langle x,x-y\rangle\left(e^{\frac{2n-\mu}{2}u(y)}-\chi_{B_{\delta}(x)}(y)e^{\frac{2n-\mu}{2}u(x)}\right)}{|x-y|^{\mu+2}}e^{\frac{2n-\mu}{2}u(x)}dydx\\ =&-\mu\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}-\chi_{B_{\delta}(x)}(y)e^{\frac{2n-\mu}{2}u(x)}}{|x-y|^{\mu}}e^{\frac{2n-\mu}{2}u(x)}dydx\\ &-\mu\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\langle y,x-y\rangle\left(e^{\frac{2n-\mu}{2}u(y)}-\chi_{B_{\delta}(x)}(y)e^{\frac{2n-\mu}{2}u(x)}\right)}{|x-y|^{\mu+2}}e^{\frac{2n-\mu}{2}u(x)}dydx\\ :=&\,II_{1}+II_{2}.\end{split} \tag{2.35}\]
Using Lebesgue's dominated convergence theorem for \(II_{1}\), one easily deduces that
\[\begin{split}\lim_{\delta\to 0^{+}}\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}-\chi_{B_{\delta}(x)}(y)e^{\frac{ 2n-\mu}{2}u(x)}}{|x-y|^{\mu}}e^{\frac{2n-\mu}{2}u(x)}dydx=&\int_{ \mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}u(y)}}{|x-y|^{\mu }}e^{\frac{2n-\mu}{2}u(x)}dydx\\ =&\beta_{n}\alpha\end{split} \tag{2.36}\]
We now check that the dominating function for \(II_{1}\) is integrable; that is,
\[\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\left(\frac{e^{\frac{2n- \mu}{2}u(y)}e^{\frac{2n-\mu}{2}u(x)}}{|x-y|^{\mu}}+\frac{\chi_{B_{1}(x)}(y)e^{(2 n-\mu)u(x)}}{|x-y|^{\mu}}\right)dydx\] \[= \int_{\mathbb{R}^{n}}v(x)e^{\frac{2n-\mu}{2}u(x)}dx+\int_{\mathbb{ R}^{n}}\int_{B_{1}(x)}\frac{e^{(2n-\mu)u(x)}}{|x-y|^{\mu}}dydx\] \[\leq C+C\int_{\mathbb{R}^{n}}e^{(2n-\mu)u(x)}dx\] \[\leq C.\]
By a simple calculation, we obtain
\[II_{2}= \mu\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\langle y,y-x \rangle\left(e^{\frac{2n-\mu}{2}u(y)}-\chi_{B_{\delta}(x)}(y)e^{\frac{2n-\mu} {2}u(x)}\right)}{|x-y|^{\mu+2}}\left(e^{\frac{2n-\mu}{2}u(x)}-\chi_{B_{\delta} (y)}(x)e^{\frac{2n-\mu}{2}u(y)}\right)dydx\] \[-\mu\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\langle y,x-y \rangle\left(e^{\frac{2n-\mu}{2}u(y)}-\chi_{B_{\delta}(x)}(y)e^{\frac{2n-\mu} {2}u(x)}\right)}{|x-y|^{\mu+2}}\chi_{B_{\delta}(y)}(x)e^{\frac{2n-\mu}{2}u(y) }dydx\] \[= -\mu\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\langle y,x-y \rangle\left(e^{\frac{2n-\mu}{2}u(x)}-\chi_{B_{\delta}(y)}(x)e^{\frac{2n-\mu} {2}u(y)}\right)}{|x-y|^{\mu+2}}e^{\frac{2n-\mu}{2}u(y)}dydx\] \[+\mu\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\langle y,x-y \rangle\left(\chi_{B_{\delta}(x)}(y)e^{(2n-\mu)u(x)}-\chi_{B_{\delta}(y)}(x)e ^{(2n-\mu)u(y)}\right)}{|x-y|^{\mu+2}}dydx.\] \[:= -III_{1}+III_{2}.\]
According to (2.35), we know that
\[III_{1}=II_{1}+II_{2}.\]
Similarly, we deduce that the integral \(III_{2}\) is absolutely integrable
\[\mu\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\left|\frac{\langle y,x-y \rangle\chi_{B_{1}(x)}(y)\left(e^{(2n-\mu)u(x)}-e^{(2n-\mu)u(y)}\right)}{|x-y| ^{\mu+2}}\right|dydx<\infty.\]
Using Lebesgue's dominated convergence theorem, we have
\[\lim_{\delta\to 0^{+}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\langle y,x- y\rangle\chi_{B_{\delta}(x)}(y)\left(e^{(2n-\mu)u(x)}-e^{(2n-\mu)u(y)}\right)}{|x-y| ^{\mu+2}}dydx=0. \tag{2.37}\]
Therefore, by (2.35)-(2.37), we obtain
\[\int_{\mathbb{R}^{n}}\langle x,\nabla v(x)\rangle e^{\frac{2n-\mu}{2}u(x)}dx= -\frac{\mu\beta_{n}\alpha}{2}.\]
By Lemma 2.5, we deduce that
\[\alpha\left(\alpha-\frac{4n}{2n-\mu}\right)=-\frac{4}{(2n-\mu)\beta_{n}} \times\frac{\mu\beta_{n}\alpha}{2}.\]
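Since \(\alpha>0\), simplifying the right-hand side and dividing both sides by \(\alpha\) gives

\[\alpha-\frac{4n}{2n-\mu}=-\frac{2\mu}{2n-\mu}.\]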
Thus, one gets
\[\alpha=2.\]
## 3 Moving spheres
In this section, we prove our main results by the method of moving spheres.
It follows from Theorem 2.1 that
\[u(x)=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\left[\ln\left(\frac{|y|+1}{|x-y|} \right)\right]v(y)e^{\frac{2n-\mu}{2}u(y)}dy+C.\]
Clearly \(u\) satisfies the assumptions \((a)\) and \((b)\) of Lemma 2.1. Therefore,
\[u(x)\leq C,\quad x\in\mathbb{R}^{n},\]
for some constant \(C>0\). By (2.23) and the fact that \(\alpha=2\), we obtain the precise decay of \(u\):
\[u(x)=-2\ln|x|+O(1),\quad\text{for}\quad\ |x|>>1. \tag{3.1}\]
With these preparations, we are ready to run the method of moving spheres. We consider the Kelvin transformation of \((u,v)\):
\[\begin{cases}p(x)=u\left(\frac{x}{|x|^{2}}\right)-2\ln|x|,\\ q(x)=\frac{1}{|x|^{\mu}}v\left(\frac{x}{|x|^{2}}\right).\end{cases} \tag{3.2}\]
Then a direct calculation shows that they satisfy the following equations
\[\begin{cases}p(x)=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\left[\ln\left( \frac{|y|+1}{|x-y|}\right)\right]q(y)e^{\frac{2n-\mu}{2}p(y)}dy+C\\ q(x)=\int_{\mathbb{R}^{n}}\frac{e^{\frac{2n-\mu}{2}p(y)}}{|x-y|^{\mu}}dy \end{cases}\text{for }x\neq 0. \tag{3.3}\]
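The direct calculation rests on the classical two-point identity for the inversion \(x\mapsto x/|x|^{2}\),

\[\left|\frac{x}{|x|^{2}}-\frac{y}{|y|^{2}}\right|=\frac{|x-y|}{|x||y|},\qquad x,y\neq 0,\]

applied together with the change of variables \(y\mapsto y/|y|^{2}\), whose Jacobian is \(|y|^{-2n}\).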
We use the moving spheres method on \((p,q)\). We define
\[\begin{cases}p_{\lambda}(x)=p\left(\frac{\lambda^{2}x}{|x|^{2}} \right)+2\ln\frac{\lambda}{|x|}=u\left(\frac{x}{\lambda^{2}}\right)-2\ln\lambda,\\ q_{\lambda}(x)=\frac{\lambda^{\mu}}{|x|^{\mu}}q\left(\frac{\lambda^{2}x}{|x|^{ 2}}\right)=\frac{1}{\lambda^{\mu}}v\left(\frac{x}{\lambda^{2}}\right).\end{cases} \tag{3.4}\]
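The second equalities in (3.4) follow by composing the inversions: writing \(z=\lambda^{2}x/|x|^{2}\), we have \(z/|z|^{2}=x/\lambda^{2}\) and \(|z|=\lambda^{2}/|x|\), so that

\[p_{\lambda}(x)=u\left(\frac{x}{\lambda^{2}}\right)-2\ln\frac{\lambda^{2}}{|x|}+2\ln\frac{\lambda}{|x|}=u\left(\frac{x}{\lambda^{2}}\right)-2\ln\lambda,\]

and similarly for \(q_{\lambda}\).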
Let
\[w_{\lambda}(x)=p(x)-p_{\lambda}(x),\quad z_{\lambda}(x)=q(x)-q_{\lambda}(x).\]
Then we have
\[z_{\lambda}(x)=\int_{B_{\lambda}}\left(\frac{1}{|x-y|^{\mu}}-\frac{1}{|x- \frac{\lambda^{2}y}{|y|^{2}}|^{\mu}}\frac{\lambda^{\mu}}{|y|^{\mu}}\right) \left(e^{\frac{2n-\mu}{2}p(y)}-e^{\frac{2n-\mu}{2}p_{\lambda}(y)}\right)dy. \tag{3.5}\]
Now, we show that the moving spheres method can be started from infinity.
**Proposition 3.1**.: _If \(\lambda>0\) is large enough, we have_
\[w_{\lambda}(x)\geq 0\quad\text{and}\quad z_{\lambda}(x)\geq 0\text{ in }B_{\lambda}(0)\setminus\{0\}.\]
Proof.: We denote
\[A_{1}=\left\{x\in\mathbb{R}^{n}|R_{0}\leq|x|\leq\frac{\lambda}{2}\right\},\quad A _{2}=\left\{x\in\mathbb{R}^{n}|\frac{\lambda}{2}\leq|x|\leq\lambda\right\}\]
and
\[A_{3}=B_{R_{0}}(0)\setminus\{0\}.\]
First, if \(x\in A_{1}\), since \(u\) is continuous at \(0\), we get
\[w_{\lambda}(x)=u\left(\frac{x}{|x|^{2}}\right)-2\ln|x|-u\left(\frac{x}{\lambda ^{2}}\right)+2\ln\lambda\geq 2\ln 2-o(1)\geq 0\]
for \(\lambda\) and \(R_{0}\) large enough. Similarly, for \(\lambda\) large enough,
\[\begin{split} z_{\lambda}(x)&=\frac{1}{|x|^{\mu}}v \left(\frac{x}{|x|^{2}}\right)-\frac{1}{\lambda^{\mu}}v\left(\frac{x}{\lambda ^{2}}\right)\\ &=\left(\frac{1}{|x|^{\mu}}-\frac{1}{\lambda^{\mu}}\right)v\left( \frac{x}{|x|^{2}}\right)+\frac{1}{\lambda^{\mu}}\left[v\left(\frac{x}{|x|^{2} }\right)-v\left(\frac{x}{\lambda^{2}}\right)\right]\\ &\geq\frac{2^{\mu}-1}{\lambda^{\mu}}(v(0)+o(1))-\frac{C}{\lambda ^{\mu}}\left(\frac{1}{R_{0}}-\frac{R_{0}}{\lambda^{2}}\right)\\ &\geq 0,\end{split} \tag{3.6}\]
where \(C=\max\limits_{x\in B_{1}}|\nabla v(x)|\) and \(v(0)>0\).
Next, we consider that \(x\in A_{2}\). We deduce from \(v(0)>0\) that
\[\nabla(|x|^{\frac{\mu}{2}}v(x))\cdot x=\frac{\mu}{2}|x|^{\frac{\mu}{2}}v(x)+| x|^{\frac{\mu}{2}}\nabla v(x)\cdot x>0\]
for \(|x|\) small enough. The function \(|x|^{\frac{\mu}{2}}v(x)\) is increasing in the direction of \(x\). Hence,
\[\left|\frac{x}{|x|^{2}}\right|^{\frac{\mu}{2}}v\left(\frac{x}{|x|^{2}}\right) \geq\left|\frac{x}{\lambda^{2}}\right|^{\frac{\mu}{2}}v\left(\frac{x}{\lambda ^{2}}\right)\]
and
\[q(x)\geq q_{\lambda}(x),\quad x\in A_{2}.\]
For \(\frac{\lambda}{2}\leq|x|\leq\lambda\), a directly calculation shows that
\[\begin{split}-\Delta w_{\lambda}(x)=&-\Delta_{x} u\left(\frac{x}{|x|^{2}}\right)+\frac{2(n-2)}{|x|^{2}}+\Delta_{x}u\left(\frac{x}{ \lambda^{2}}\right)\\ =&\frac{1}{|x|^{2}}\left\{2(n-2)+2(n-2)[y\cdot \nabla u(y)]_{y=x/|x|^{2}}\right\}\\ &+\frac{1}{|x|^{2}}\left\{\frac{|x|^{2}}{\lambda^{4}}(\Delta u) \left(\frac{x}{\lambda^{2}}\right)-\frac{1}{|x|^{2}}(\Delta u)\left(\frac{x}{ |x|^{2}}\right)\right\}\\ \geq& 0\end{split} \tag{3.7}\]
for \(\lambda\) large enough. From the maximum principle we conclude that
\[w_{\lambda}(x)\geq 0\quad\text{in}\quad A_{2}.\]
Now, we consider the case \(x\in A_{3}\). By (3.1), one has
\[w_{\lambda}(x)=u\left(\frac{x}{|x|^{2}}\right)-2\ln|x|-u\left(\frac{x}{\lambda^{ 2}}\right)+2\ln\lambda\geq-C+2\ln\lambda\]
for \(|x|\) small enough. Choosing \(\lambda\) large enough, one gets
\[w_{\lambda}(x)\geq 0\quad\text{in}\quad B_{R_{0}}\setminus\{0\}.\]
Since \(|x-y|\leq\left|x-\frac{\lambda^{2}y}{|y|^{2}}\right|\frac{|y|}{\lambda}\) for \(x,y\in B_{\lambda}\setminus\{0\}\), we have
\[\frac{1}{|x-y|^{\mu}}-\frac{1}{\left|x-\frac{\lambda^{2}y}{|y|^{2}}\right|^{\mu }}\frac{\lambda^{\mu}}{|y|^{\mu}}\geq 0\]
for \(x\in B_{R_{0}}\setminus\{0\}\), \(y\in B_{\lambda}\setminus\{0\}\) and \(w_{\lambda}(x)\geq 0\) in \(B_{\lambda}\setminus\{0\}\). We infer from (3.5) that
\[z_{\lambda}(x)\geq 0\quad\text{in}\quad B_{R_{0}}\setminus\{0\}.\]
This completes the proof of this proposition.
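We remark that the elementary inequality \(|x-y|\leq\left|x-\frac{\lambda^{2}y}{|y|^{2}}\right|\frac{|y|}{\lambda}\) used above follows by expanding both sides:

\[\left|x-\frac{\lambda^{2}y}{|y|^{2}}\right|^{2}\frac{|y|^{2}}{\lambda^{2}}-|x-y|^{2}=\frac{(\lambda^{2}-|x|^{2})(\lambda^{2}-|y|^{2})}{\lambda^{2}}\geq 0\qquad\text{for}\quad x,y\in B_{\lambda}.\]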
For fixed \(b\in\mathbb{R}^{n}\), we define
\[u_{b}(x)=u(x+b),\quad v_{b}(x)=v(x+b)\] \[p_{b}(x)=u_{b}\left(\frac{x}{|x|^{2}}\right)-2\ln|x|,\quad q_{b}( x)=\frac{1}{|x|^{\mu}}v_{b}\left(\frac{x}{|x|^{2}}\right),\] \[p_{\lambda,b}(x)=p_{b}\left(\frac{\lambda^{2}x}{|x|^{2}}\right)+ 2\ln\frac{\lambda}{|x|}=u_{b}\left(\frac{x}{\lambda^{2}}\right)-2\ln\lambda,\] \[q_{\lambda,b}(x)=\frac{\lambda^{\mu}}{|x|^{\mu}}q_{b}\left(\frac {\lambda^{2}x}{|x|^{2}}\right)=\frac{1}{\lambda^{\mu}}v_{b}\left(\frac{x}{ \lambda^{2}}\right),\] \[w_{\lambda,b}(x)=p_{b}(x)-p_{\lambda,b}(x),\quad z_{\lambda,b}(x )=q_{b}(x)-q_{\lambda,b}(x).\]
Next, we give a technical lemma.
**Lemma 3.1**.: _(see Lemma 11.2 of [16] or Lemma 3.3 of [26]) (1) Suppose \(v\in C^{1}(\mathbb{R}^{n})\). If for all \(b\in\mathbb{R}^{n}\) and \(\lambda>0\) the following inequality holds_
\[\frac{1}{|x|^{\mu}}v_{b}\left(\frac{x}{|x|^{2}}\right)-\frac{1}{\lambda^{\mu} }v_{b}\left(\frac{x}{\lambda^{2}}\right)\geq 0,\ \forall x\in B_{\lambda}\setminus\{0\},\]
_then we have \(v(x)\equiv C\)._
_(2) Suppose \(u\in C^{1}(\mathbb{R}^{n})\), if for all \(b\in\mathbb{R}^{n}\) and \(\lambda>0\), the following inequality holds_
\[u_{b}\left(\frac{x}{|x|^{2}}\right)-2\ln|x|-u_{b}\left(\frac{x}{\lambda^{2}} \right)+2\ln\lambda\geq 0,\quad\forall x\in B_{\lambda}\setminus\{0\}\]
_then we have \(u(x)\equiv C\)._
For fixed \(b\in\mathbb{R}^{n}\), we define
\[\lambda_{b}=\inf\{\lambda>0\ |\ w_{\theta,b}(x)\geq 0,\ z_{\theta,b}(x)\geq 0\ \text{in}\ B_{\theta}\setminus\{0\}\ \text{for all}\ \lambda\leq\theta<\infty\}.\]
**Proposition 3.2**.: _There exists a vector \(\bar{b}\in\mathbb{R}^{n}\), such that \(\lambda_{\bar{b}}>0\)._
Proof.: We prove it by contradiction. Suppose, on the contrary, that \(\lambda_{b}=0\) for every \(b\in\mathbb{R}^{n}\). By the definition of \(\lambda_{b}\), we get
\[w_{\lambda,b}(x)\geq 0\quad\text{and}\quad z_{\lambda,b}(x)\geq 0\]
for any \(\lambda>0\) and \(x\in B_{\lambda}\setminus\{0\}\). Then we infer from Lemma 3.1 that \(u(x)\equiv C_{1}\) and \(v(x)\equiv C_{2}\). This contradicts \(\int_{\mathbb{R}^{n}}e^{\frac{2n-\mu}{2}u(y)}dy<\infty\).
**Proposition 3.3**.: _Suppose that \(\lambda_{b}>0\) for some \(b\in\mathbb{R}^{n}\), then we have_
\[w_{\lambda_{b},b}(x)\equiv 0\quad\text{and}\quad z_{\lambda_{b},b}(x)\equiv 0 \quad\forall x\in B_{\lambda_{b}}\setminus\{0\}.\]
Proof.: Without loss of generality, we assume \(b=0\). Then we define
\[w_{\lambda_{0}}=w_{\lambda_{0},0}\quad\text{and}\quad z_{\lambda_{0}}=z_{ \lambda_{0},0}.\]
Suppose on the contrary that \(w_{\lambda_{0}}(x)\not\equiv 0\) or \(z_{\lambda_{0}}(x)\not\equiv 0\). By the definition of \(\lambda_{0}\) we deduce that
\[w_{\lambda_{0}}(x)\geq 0\quad\text{and}\quad z_{\lambda_{0}}(x)\geq 0\quad \text{in}\quad B_{\lambda_{0}}\setminus\{0\}. \tag{3.8}\]
By a direct calculation, one gets
\[p_{\lambda}(x)=\frac{1}{\beta_{n}}\int_{\mathbb{R}^{n}}\left[\ln\left(\frac{|y |+\lambda^{2}}{|x-y|}\right)\right]q_{\lambda}(y)e^{\frac{2n-\mu}{2}p_{ \lambda}(y)}dy+C-2\ln\lambda. \tag{3.9}\]
Moreover, we deduce from (3.3) and (3.9) that
\[-\Delta w_{\lambda}(x)=\frac{n-2}{\beta_{n}}\int_{B_{\lambda}}\left(\frac{1}{ |x-y|^{2}}-\frac{1}{\left|x-\frac{\lambda^{2}y}{|y|^{2}}\right|^{2}}\right)(q( y)e^{\frac{2n-\mu}{2}p(y)}-q_{\lambda}(y)e^{\frac{2n-\mu}{2}p_{\lambda}(y)})dy.\]
By (3.8), we conclude that
\[-\Delta w_{\lambda_{0}}(x)\geq 0\quad\text{in}\quad B_{\lambda_{0}}\setminus\{ 0\}.\]
We infer from the maximum principle that
\[w_{\lambda_{0}}(x)>0\quad\text{in}\quad B_{\lambda_{0}}\setminus\{0\}. \tag{3.10}\]
Substituting the inequality in (3.10) into (3.5), we obtain
\[z_{\lambda_{0}}(x)>0\quad\text{in}\quad B_{\lambda_{0}}\setminus\{0\}.\]
By the definition of \(\lambda_{0}\), there exists a sequence \(\lambda_{k}<\lambda_{0}\), \(\lambda_{k}\to\lambda_{0}\), such that
\[\inf_{x\in B_{\lambda_{k}}\setminus\{0\}}w_{\lambda_{k}}(x)<0.\]
We deduce from (3.10) and the Hopf Lemma that
\[\frac{\partial w_{\lambda_{0}}}{\partial\nu}(x)<0\quad\text{on}\quad\partial B _{\lambda_{0}}, \tag{3.11}\]
where \(\nu\) is the unit outer normal direction.
Claim: there exists a \(\gamma=\gamma(\lambda_{0})>0\), such that
\[w_{\lambda_{k}}(x)\geq\frac{\gamma}{2},\quad\forall x\in B_{\frac{\lambda_{0}}{2 }}\setminus\{0\}.\]
Let
\[\gamma=\min_{\partial B_{\frac{\lambda_{0}}{2}}}w_{\lambda_{0}}(x)>0.\]
We define
\[h(x)=\gamma-\frac{r^{n-2}}{|x|^{n-2}}\gamma\quad\text{in}\quad B_{\frac{ \lambda_{0}}{2}}\setminus B_{r}\]
with \(r\) small. Then, \(k(x)=w_{\lambda_{0}}(x)-h(x)\) satisfies
\[\begin{cases}-\Delta k(x)=-\Delta w_{\lambda_{0}}(x)\geq 0\quad\text{in}\quad B _{\frac{\lambda_{0}}{2}}\setminus B_{r},\\ k(x)=w_{\lambda_{0}}(x)>0\qquad\text{on}\quad\partial B_{r}\\ k(x)>0\qquad\qquad\text{on}\quad\partial B_{\frac{\lambda_{0}}{2}}.\end{cases}\]
Hence, by the maximum principle and letting \(r\to 0\), one gets \(w_{\lambda_{0}}(x)\geq\gamma\), in \(B_{\frac{\lambda_{0}}{2}}\setminus\{0\}\). Then
\[w_{\lambda_{k}}(x)= p(x)-p_{\lambda_{k}}(x)\] \[= w_{\lambda_{0}}(x)+p_{\lambda_{0}}(x)-p_{\lambda_{k}}(x)\] \[\geq \frac{\gamma}{2}\]
provided \(\lambda_{k}\) is close enough to \(\lambda_{0}\). This proves the claim.
On the other hand, since \(\inf_{B_{\lambda_{k}}\setminus\{0\}}w_{\lambda_{k}}(x)<0\), we infer from the claim that there exists an \(x_{k}\in B_{\lambda_{k}}\setminus B_{\frac{\lambda_{0}}{2}}\) such that
\[w_{\lambda_{k}}(x_{k})=\inf_{B_{\lambda_{k}}\setminus\{0\}}w_{\lambda_{k}}(x)<0.\]
In particular, we have \(\nabla w_{\lambda_{k}}(x_{k})=0\). We assume that, up to a subsequence, \(x_{k}\to\bar{x}\), then we obtain \(\nabla w_{\lambda_{0}}(\bar{x})=0\) and \(w_{\lambda_{0}}(\bar{x})=0\). Hence \(\bar{x}\in\partial B_{\lambda_{0}}\). However, this contradicts (3.11).
Therefore, for any sequence \(\lambda_{k}<\lambda_{0}\), \(\lambda_{k}\to\lambda_{0}\), it is clear that
\[w_{\lambda_{k}}(x)\geq 0,\quad\text{in}\quad B_{\lambda_{k}}\setminus\{0\}.\]
It follows from (3.5) that
\[z_{\lambda_{k}}(x)\geq 0,\quad\text{in}\quad B_{\lambda_{k}}\setminus\{0\}.\]
This contradicts the definition of \(\lambda_{0}\).
**Proposition 3.4**.: _For any \(b\in\mathbb{R}^{n}\), we have \(\lambda_{b}>0\)._
Proof.: By Proposition 3.2 and Proposition 3.3, there exists a vector \(\bar{b}\in\mathbb{R}^{n}\), such that \(\lambda_{\bar{b}}>0\),
\[w_{\lambda_{\bar{b}},\bar{b}}(x)\equiv 0\quad\text{and}\quad z_{\lambda_{\bar{b}}, \bar{b}}(x)\equiv 0\quad x\in B_{\lambda_{\bar{b}}}\setminus\{0\}.\]
That is
\[u_{\bar{b}}\left(\frac{x}{|x|^{2}}\right)-2\ln|x|-u_{\bar{b}}\left(\frac{x}{ \lambda_{\bar{b}}^{2}}\right)+2\ln\lambda_{\bar{b}}\equiv 0,\quad x\in B_{ \lambda_{\bar{b}}}\setminus\{0\}.\]
Letting \(|x|\to 0\), we obtain
\[\lim_{|x|\to 0}\left(u_{\bar{b}}\left(\frac{x}{|x|^{2}}\right)-2\ln|x|\right)=u_ {\bar{b}}(0)-2\ln\lambda_{\bar{b}}. \tag{3.12}\]
Suppose on the contrary that there exists a vector \(b\in\mathbb{R}^{n}\) such that \(\lambda_{b}=0\), then we obtain
\[u_{b}\left(\frac{x}{|x|^{2}}\right)-2\ln|x|-u_{b}\left(\frac{x}{\lambda^{2}} \right)+2\ln\lambda\geq 0\]
for all \(\lambda>0\) and \(x\in B_{\lambda}\setminus\{0\}\). Fixing \(\lambda\) and letting \(|x|\to 0\), we infer
\[\liminf_{|x|\to 0}\left(u_{b}\left(\frac{x}{|x|^{2}}\right)-2\ln|x|\right) \geq u_{b}(0)-2\ln\lambda. \tag{3.13}\]
We derive from (3.12) and (3.13) that
\[u_{\bar{b}}(0)-2\ln\lambda_{\bar{b}}\geq u_{b}(0)-2\ln\lambda,\]
which is a contradiction for \(\lambda\) small.
**Proposition 3.5**.: _For all \(b\in\mathbb{R}^{n}\), we have \(\lambda_{b}>0\), \(w_{\lambda_{b},b}\equiv 0\) and \(z_{\lambda_{b},b}\equiv 0\) in \(B_{\lambda_{b}}\setminus\{0\}\)._
Proof.: This is a direct consequence of Proposition 3.3 and Proposition 3.4.
Proof of Theorem 1.3.: Let
\[f(x)=e^{u(x)}.\]
Then it follows from Proposition 3.5 that
\[f(x)=\frac{1}{\lambda_{b}^{2}|x-b|^{2}}f\left(\frac{x-b}{\lambda_{b}^{2}|x-b| ^{2}}+b\right).\]
Let
\[A:=\lim_{|x|\to\infty}|x|^{2}f(x)=\frac{f(b)}{\lambda_{b}^{2}}=\frac{f(0)}{ \lambda_{0}^{2}},\]
where \(A>0\). We first assume that \(A=1\). Since
\[f(x)=\frac{1}{\lambda_{0}^{2}|x|^{2}}f\left(\frac{x}{\lambda_{0}^{2}|x|^{2}} \right)=\frac{1}{\lambda_{b}^{2}|x-b|^{2}}f\left(\frac{x-b}{\lambda_{b}^{2}|x- b|^{2}}+b\right),\]
then one has
\[f(x)=\frac{1}{\lambda_{0}^{2}|x|^{2}}\left[f(0)+\frac{x\cdot\nabla f(0)}{ \lambda_{0}^{2}|x|^{2}}+o\left(\frac{1}{|x|}\right)\right] \tag{3.14}\]
\[f(x)=\frac{1}{\lambda_{b}^{2}|x-b|^{2}}\left[f(b)+\frac{(x-b)\cdot\nabla f(b)}{ \lambda_{b}^{2}|x-b|^{2}}+o\left(\frac{1}{|x-b|}\right)\right] \tag{3.15}\]
as \(|x|\to\infty\). It follows from equations (3.14) and (3.15) that
\[\frac{\partial f(b)}{\partial x_{i}}f(b)^{-2}=\frac{\partial f(0)}{\partial x _{i}}f(0)^{-2}-2b_{i}\]
and
\[(f^{-1})_{i}(b)=2b_{i}+(f^{-1})_{i}(0)=\frac{\partial}{\partial b_{i}}(|b|^{2} +\nabla f^{-1}(0)\cdot b).\]
Therefore,
\[f(b)=\frac{1}{|b-d_{0}|^{2}+l}\]
and
\[u(b)=\ln\left(\frac{1}{|b-d_{0}|^{2}+l}\right),\]
where \(l\) is a constant. Finally, if we do not assume \(A=1\), then
\[u(x)=\ln\frac{C_{1}(\varepsilon)}{|x-x_{0}|^{2}+\varepsilon^{2}}.\]
This completes the proof of Theorem 1.3.
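As a consistency check, the solutions obtained above indeed display the decay established in Section 2 with \(\alpha=2\):

\[u(x)=\ln\frac{C_{1}(\varepsilon)}{|x-x_{0}|^{2}+\varepsilon^{2}}=-2\ln|x|+O(1)\qquad\text{as}\quad|x|\to\infty.\]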
## 4 Acknowledgments.
This work is partially supported by National Natural Science Foundation of China 12141105.
|
2301.11074 | Digital Inheritance in Web3: A Case Study of Soulbound Tokens and the
Social Recovery Pallet within the Polkadot and Kusama Ecosystems | In recent years, discussions centered around digital inheritance have increased among social media users and across blockchain ecosystems. As a result, digital assets such as social media content, cryptocurrencies, and non-fungible tokens have become increasingly valuable and widespread, leading to the need for clear and secure mechanisms for transferring these assets upon the testator's death or incapacitation. This study proposes a framework for digital inheritance using soulbound tokens and the social recovery pallet as a use case in the Polkadot and Kusama blockchain networks. The findings discussed within this study suggest that while soulbound tokens and the social recovery pallet offer a promising solution for creating a digital inheritance plan, the findings also raise important considerations for testators, digital executors, and developers. While further research is needed to fully understand the potential impacts and risks of other technologies such as artificial intelligence and quantum computing, this study provides a primer for users to begin planning a digital inheritance strategy and for developers to develop a more intuitive solution. | Justin Goldston, Tomer Jordi Chaffer, Justyna Osowska, Charles von Goins II | 2023-01-26T13:12:45Z | http://arxiv.org/abs/2301.11074v3 | Digital Inheritance in Web3: A Case Study of Soulbound Tokens and the Social Recovery Pallet within the Polkadot and Kusama Ecosystems
###### Abstract
In recent years, discussions centered around digital inheritance have increased among social media users and across blockchain ecosystems. As a result, digital assets such as social media content, cryptocurrencies, and non-fungible tokens have become increasingly valuable and widespread, leading to the need for clear and secure mechanisms for transferring these assets upon the testator's death or incapacitation. This study proposes a framework for digital inheritance using soulbound tokens and the social recovery pallet as a use case in the Polkadot and Kusama blockchain networks. The findings discussed within this study suggest that while soulbound tokens and the social recovery pallet offer a promising solution for creating a digital inheritance plan, the findings also raise important considerations for testators, digital executors, and developers. While further research is needed to fully understand the potential impacts and risks of other technologies such as artificial intelligence and quantum computing, this study provides a primer for users to begin planning a digital inheritance strategy and for developers to develop a more intuitive solution.
## I Introduction
The notion of digital inheritance has gained increased prominence in recent years due to the rise of digital technology and the ubiquity of digital assets in the lives of individuals and families. In addition, with the growth of the Internet of Things (IoT), more and more devices are now storing personal data and assets, making it increasingly necessary to consider what will happen to these digital assets upon one's death. Digital assets refer to any property that exists in digital form. Examples of digital assets include online accounts (such as email, social media, and banking accounts), digital currency, non-fungible tokens (NFTs), and digital media such as photos, videos, and music [1].

In the past, paper documents and tangible, physical assets have been the primary legal instruments for an inheritance, and digital assets have often been excluded [2]. Additionally, laws highlighting digital assets upon death vary among states, countries, and even service providers such as Facebook, LinkedIn, YouTube, and Google. Conventional digital assets are often stored on third-party servers or in the cloud, making them sometimes difficult to track and transfer. Google, Apple, and Microsoft have all developed tools to allow users to designate a digital executor who can access accounts in the event of one's death [3].

As decentralized digital assets such as cryptocurrencies and non-fungible tokens (NFTs) have begun to emerge in estate planning discussions, policymakers and practitioners are sometimes unclear on how to distribute these digital assets to a testator's heirs. In this study, we propose a digital inheritance plan that testators can use as a framework when working with the executors of their estate to manage digital assets. In the remainder of the section, we will discuss the motivation and contribution of the study, and subsequent sections review key concepts and principles, along with identity schemes developed within the blockchain and Web3 space. Section VI introduces researchers to soulbound tokens (SBTs), while Section VII highlights the developments of social recovery wallets. Next, Section VIII takes a look at the current digital inheritance landscape across a number of blockchain ecosystems. The focus of the study is included in Section IX, where the social recovery pallet within the Polkadot and Kusama ecosystems is introduced. Finally, Section X offers future research considerations based on the findings, and the study concludes with the acknowledgment and closing thoughts in Sections XI and XII.

### _Motivation_

The primary motivation for writing on digital inheritance is to ensure that individuals, organizations, and families are aware of the importance of digital inheritance and that appropriate plans can be put in place for the smooth transfer of digital assets upon death or incapacitation. While the topic of digital inheritance in the area of social media has been studied in legal and technological contexts, little research has been performed in the areas of NFTs and cryptocurrencies [4]. Digital inheritance is a growing concern for many individuals in the digital age as our lives become increasingly intertwined with digital technologies. This has resulted in a plethora of digital assets, such as online accounts, emails, and digital photos, which can be lost after a person's death.

In an often discussed topic regarding estate planning and digital inheritance, banking heir Matthew Mellon saw his US$2 million investment grow to US$1 billion at the peak of the cryptocurrency market [5]. After unexpectedly passing away in 2019, Mellon's heirs would be in line to inherit this fortune under traditional law. The issue arose when it was uncovered that Mellon did not share his private keys with anyone and kept them on devices across the United States [6]. In an uncommon scenario, attorneys discovered that Ripple held a portion of the assets in connection with Mellon's contribution to the project's early days. Although the attorneys could recover the assets, an agreement between Ripple and Mellon prevented the attorney from selling all of the assets at once, leading to the investment losing almost two-thirds of its value [7]. In this case, if Mellon had a plan in place for his digital assets, the loss in the value of the assets may have been prevented.

Additionally, in a 2020 study by the Cremation Institute [8], 89% of cryptocurrency investors worry about dying with their assets, while only 23% have a documented plan. Of that percentage, only 7% have created a will that includes their cryptocurrency holdings.
## II The Blockchain

Nodes - or individuals - across blockchain networks validate and record transactions on the ledger, enabling individuals and businesses to streamline processes, increase transparency, and decrease costs.
### _Cryptocurrencies_
Used as a form of utility across a number of blockchain protocols, cryptocurrencies are digital assets that are created and exchanged using cryptography and distributed ledger technology. These decentralized digital assets have no connection to any central bank or government and rely on peer-to-peer networks for transaction transfer and validation [24]. Cryptocurrencies are designed to be secure, with anonymous and irreversible transactions that eliminate the need for third-party intermediaries such as banks or other financial institutions. Although Central Bank Digital Currencies (CBDCs) in some jurisdictions make use of stablecoins - a class of cryptocurrency - the references to cryptocurrencies in this study will be focused on decentralized cryptocurrencies.
### _Social Recovery_
The act of social recovery on blockchain protocols refers to the ability for users to recover assets from an account in the event that they lose their private keys or forget their password. Through cryptographic protection, social recovery wallets can provide a secure and efficient way to restore users' identities, access, and other important data [25]. Additionally, social recovery in blockchain can allow for the re-establishment of trust between users and their respective networks and provide a secure way for users to store and manage their personal data and assets.
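To make the guardian-based flow concrete, the following minimal Python sketch models an account whose signing key can be rotated once a threshold of pre-designated guardians vouches for the recovery; the class, method names, and 2-of-3 threshold are illustrative assumptions for this discussion, not the interface of any particular wallet or blockchain.

```python
# Minimal sketch of guardian-based social recovery (illustrative only).
# Guardians are modeled as plain string identifiers; a recovery succeeds
# once `threshold` distinct registered guardians have vouched for it.
from dataclasses import dataclass, field

@dataclass
class RecoverableAccount:
    owner_key: str                       # stand-in for the current public key
    guardians: set[str]                  # pre-designated trusted parties
    threshold: int                       # vouches required to recover
    approvals: set[str] = field(default_factory=set)

    def vouch(self, guardian: str) -> None:
        """Record one guardian's approval of the pending recovery."""
        if guardian not in self.guardians:
            raise ValueError(f"{guardian} is not a registered guardian")
        self.approvals.add(guardian)

    def recover(self, new_key: str) -> bool:
        """Rotate the signing key if enough distinct guardians vouched."""
        if len(self.approvals) >= self.threshold:
            self.owner_key = new_key
            self.approvals.clear()
            return True
        return False

# Example: a 2-of-3 configuration.
account = RecoverableAccount("old-key", {"alice", "bob", "carol"}, threshold=2)
account.vouch("alice")
assert not account.recover("new-key")   # one vouch is not enough
account.vouch("carol")
assert account.recover("new-key")       # quorum reached; key rotated
```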
### _Self-Sovereign Identity (SSI)_

SSI is a decentralized authentication system that enables individuals to securely, privately, and autonomously control their personal identity data. SSI is based on the principles of self-ownership, portability, and control, which means that individuals own and manage their information and can move it between services and platforms as they choose.
### _Multi-signature_

A multi-signature (multi-sig) scheme requires more than one private key to sign a transaction, allowing for a more decentralized governance model with fewer points of failure, and is also used in various applications such as shared accounts, escrow services, and smart contracts [28]. The multi-sig standard follows the m-of-n ratio - also known as the quorum quotient. For example, a common multi-sig configuration is 3-of-5, which means three private keys - individuals - would have to sign a transaction before it is executed. If a transaction did not receive the required number of signatures from users holding private keys, the transaction would not execute.
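As an illustration of the m-of-n rule described above, the short Python sketch below counts distinct registered signers against the required threshold; the cryptographic signature verification that a real multi-sig scheme would perform on-chain is deliberately stubbed out, and all identifiers are hypothetical.

```python
# Illustrative m-of-n multi-sig quorum check (no real cryptography).
# In a real scheme each signature would be cryptographically verified;
# here, signers are plain string identifiers and the check is set-based.

def quorum_reached(signers: set[str], key_holders: set[str], m: int) -> bool:
    """Return True if at least m registered key holders have signed."""
    valid_signers = signers & key_holders   # discard unknown signers
    return len(valid_signers) >= m

# Hypothetical 3-of-5 configuration: five key holders, threshold m = 3.
holders = {"k1", "k2", "k3", "k4", "k5"}
print(quorum_reached({"k1", "k2"}, holders, m=3))        # False: only 2 signatures
print(quorum_reached({"k1", "k2", "k4"}, holders, m=3))  # True: quorum of 3 met
```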
### _Seed Phrase_
A seed phrase, also known as a recovery phrase or backup phrase, is a string of words that can be used to regain access to a cryptocurrency wallet. Seed phrases are a method of backing up and securing the private keys that allow access to the wallet's funds, and the order of the words in the phrase must be entered correctly to regain access to the wallet [29]. A seed phrase is typically made up of 12 to 24 words and is generated when the wallet is first created.
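As a brief illustration of why the order of the words matters, the sketch below compares a recovery attempt against the original phrase position by position. It deliberately ignores real-world details such as BIP39 wordlists and checksums; `phrase_matches` is a hypothetical helper written for this example, not a wallet API.

```rust
/// Toy check that a recovery attempt supplies exactly the same words in
/// exactly the same order as the original seed phrase.
fn phrase_matches(original: &[&str], attempt: &[&str]) -> bool {
    original.len() == attempt.len()
        && original.iter().zip(attempt.iter()).all(|(a, b)| a == b)
}

fn main() {
    // A shortened four-word phrase for illustration only.
    let phrase = ["legal", "winner", "thank", "year"];
    assert!(phrase_matches(&phrase, &["legal", "winner", "thank", "year"]));
    // Same words in the wrong order: access to the wallet is not regained.
    assert!(!phrase_matches(&phrase, &["winner", "legal", "thank", "year"]));
}
```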
## III Entities
Several parties are involved in traditional inheritance proceedings. Although many of these parties may remain the same in the digital inheritance planning process, the entities involved in traditional and digital inheritance planning will be reviewed.
### _The Testator_
The testator is the individual working with another party to develop a will. For a will to be legally valid, it must be executed by the testator in accordance with the laws of the jurisdiction where the will is being made [30]. This typically involves the testator signing the will in the presence of witnesses, who must also sign the will to validate it. The testator can also appoint guardians for minor children and designate beneficiaries to receive specific assets or a share of the estate. The testator may also include provisions for charitable gifts, trusts, and other arrangements in the will.
The testator should be very explicit in his or her wishes regarding how they would like digital assets to be transferred. Based on where the testator is located, there are a number of existing legal provisions governing the transfer of digital assets. For example, the General Data Protection Regulation (GDPR) is the primary legal instrument in Europe that provides individuals with the right to access their personal data and the right to have their data erased upon their death [31]. Although digital assets such as online accounts, emails, and digital photos have been identified in these legal instruments, researchers and legal practitioners have identified gaps within the existing frameworks when attempting to understand the legal provisions centered around digital assets such as cryptocurrencies and NFTs [32]. The legal frameworks regarding digital inheritance will be discussed in greater detail in subsequent sections of this study.
### _The Executor_
An executor for an estate is a person responsible for carrying out the wishes of a deceased person as outlined in their will in accordance with the law. This includes gathering the deceased's assets, paying any debts, and distributing the remaining assets to the beneficiaries. The executor must also file any necessary paperwork with the court and handle any disputes that may arise.
For digital assets, this role will remain the same. Although an individual could work with their trustees - or guardians - to have a smart contract executed based on agreed-upon terms, it may be advisable to have a digital executor included in the process [32]. As laws change based on geographical region, legal guidelines give executors the right to access digital assets and require online service providers to provide access to digital assets upon request [4]. They also provide clear guidance on managing digital assets.
### _The Trustee_
The trustee is the individual or entity responsible for managing and administering the assets of a trust. The trustee may be appointed to manage the estate's assets on behalf of the beneficiaries. Working closely with the estate executor, the trustee's role is to act in the beneficiaries' best interests and ensure that the trust assets are managed and distributed according to the trust document's terms. There are several different types of trustees that may be appointed to manage an estate, including individual trustees, corporate trustees, and co-trustees [34]. The type of trustee best suited for a particular situation will depend on the size and complexity of the estate, as well as the needs and preferences of the beneficiaries.
Trustees are sometimes referred to as guardians for solutions that support digital inheritance. The difference between guardians and traditional trustees is that guardians can use a multi-sig approach to approve the release of assets upon the testator's death [35]. Multi-sigs can provide an additional layer of privacy and security to both the testator and the guardians [36]. Although the testator can identify the trustees or guardians during the digital inheritance planning process, per the wishes identified by the testator, guardians do not have to know the identities of the other guardians. Guardians could also cancel a transaction if foul play or a potential hack is of concern.
### _The Beneficiaries_
The beneficiaries of an estate are the individuals who are legally entitled to the testator's assets. Family members are typically included, such as spouses, children, and other close relatives. Friends, charities, and other organizations could also be beneficiaries. The executor of the estate is responsible for ensuring that the assets are distributed in accordance with the deceased's wishes. Beneficiaries may be entitled to a lump sum payment or periodic distributions [37].

In regards to digital assets, researchers have been looking into the importance of SSI principles for testators to further increase the secure transfer of assets to their beneficiaries. SSI principles provide a framework for digital inheritance, allowing individuals to securely and safely transfer their digital assets without relying on third-party intermediaries [38]. SSI principles involve the use of secure identity layers, effective data privacy, and secure protocols built on distributed ledger technology, through which assets can be transferred to the verifiable digital identities of the testator's beneficiaries.

## IV Self-Sovereign Identity Principles

Shuaib et al. [40] outlined SSI principles that users should consider when sharing data on the internet. This framework could also be considered when joining a decentralized Web3 ecosystem and developing a digital inheritance strategy. Shuaib et al.'s framework consists of the principles reviewed below.

### _C. Transparency_

Transparency is essential for the proper functioning of SSI as it enables individuals to make informed decisions about their personal data and identity information. Without transparency, individuals may not be aware of how their data is being used or shared, and may not have the ability to control or protect their personal data. With digital assets stored across a number of Web2 services and platforms, along with cryptocurrencies and NFTs across a number of different blockchains, it could be difficult to aggregate all of this content into one application. As projects have created dashboards for tracking decentralized finance (DeFi) portfolios across a number of different blockchain networks, the potential to create digital inheritance dashboards to increase the transparency of a user's digital assets will be reviewed in further detail within the Discussion section of this study.

### _D. Persistence_

Persistence is a concept based on the idea that an individual's identity is maintained throughout their lifetime, regardless of changes in personal circumstances. Persistence is a critical element of SSI, as it ensures that the individual's identity is consistently linked to their digital assets, even after death. This ensures that the digital assets can be securely transferred to the designated beneficiaries.

Researchers have also mentioned the use of biometric authentication and blockchain technology to ensure that the individual's identity is securely linked to their digital assets [44]. When planning for digital inheritance, an individual could set up biometric authentication for themselves and the executor of their estate to ensure persistence and an additional layer of security for their assets. Given that some studies are theoretical, it is important to consider the legal implications of persistence in the context of digital inheritance and succession planning within one's geographical region.
### _E. Portability_
SSI requires that individuals can securely transfer their digital assets to other individuals or entities, as well as the ability to revoke access to their assets. Portability could become complicated for Web2 digital assets that are stored in different services and platforms such as YouTube, TikTok, and Instagram. For heirs to access these assets, they must be able to access the same services and platforms that the deceased used. Storing one's digital identities and passwords across different services and platforms ensures that heirs will be able to access the digital assets of the deceased.
### _F. Interoperability_
Interoperability for SSI will be essential as the decentralized Web3 space continues to expand. Currently, a small number of blockchain networks provide cross-chain communication and the bridging of assets [45]. Layer zero blockchains such as Polkadot, Kusama, and Cosmos have built blockchain networks with cross-chain communication in mind. This communication and transfer of funds primarily occur across blockchains within the network. Although bridges are being developed within these networks to communicate with blockchains such as the Bitcoin network, the Ethereum network, and others such as Avalanche, solutions to aggregate cross-chain assets may need to be considered for solutions such as those being developed for digital inheritance.
### _Consent_
The General Data Protection Regulation (GDPR) in the European Union is built on the principle that individuals should control how their data is used and who can access it. It requires that individuals give explicit consent to any digital data associated with them and have the right to revoke this consent at any time during their life. Although researchers have noted that the scope and coverage are extremely limited [46], regulations have continued to emerge in many countries as policymakers have seen the increased use and monetization of digital assets. In the United States, the Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA) provides a framework for managing digital accounts in the event of death or disability. The act gives executors and trustees access to emails, chats, and direct messages _only_ if the individual explicitly consented [15], [47]. Additionally, under the RUFADAA, centralized entities such as Google and Meta are referred to as custodians of a user's data and may only provide access to the data reasonably required.
## V Identity Schemes
To expand on Shuaib et al.'s [40] SSI principles, Sun et al. [36] identified the following identity schemes users should be aware of as Web3 and the decentralized web continue to evolve.
### _A. Personality Identity_
Personality Identity is a digital identity model that gives individuals full control over their identity and related information. Personality identity allows users to manage their identity credentials and access their data online without relying on a third-party identity provider, such as a government or corporate entity. This allows users to fully monetize their personality identity, in contrast to the conventional model where corporate entities receive a large percentage of the monetization rewards [55]. An identity management system of this type provides users with a high level of privacy and security by allowing them full control over their data and the services that use it.
### _B. Credential Identity_
Credential identity provides a secure way to prove one's identity to other parties online. It is based on the idea that a digital identity is built on an individual's unique attributes, which are securely stored and managed by the individual and are verifiable by other parties. Credential identity therefore allows individuals to create and store digital credentials that are cryptographically secured and can be used to prove qualifications and access services without having to rely on a third party. For example, services such as POAP and the Ethereum Name Service (ENS) have created solutions that prove attendance or participation at events, or prove that digital assets or credentials are held in the digital wallet that an address - in the case of ENS - is attached to [56], [57].
### _C. Reputational Identity_
Reputational Identity for SSI refers to the digital representation of an individual's identity that is built upon their past interactions, digital and physical, and is used to assess an individual's trustworthiness. Also referred to as proof of authority (PoA), blockchain protocols such as VeChain have developed consensus mechanisms around the PoA model [58]. When using the PoA consensus mechanism, nodes - or individuals - must have a reputational identity established and recognized by the network. This reputational identity serves as a means of validating the node's trustworthiness and is a key factor in determining the authority of the node.
### _D. Data Identity_
Data Identity is a concept in which individuals control their digital identity data, including authentication, access management, and privacy. Projects such as Kilt Protocol provide decentralized identity (DID) tools for users within the Polkadot ecosystem.

## VI Soulbound Tokens

Non-fungible tokens (NFTs) are blockchain-based tokens that represent ownership of real or virtual items. NFTs are unique, meaning each token is distinct and cannot be replaced with another. This makes NFTs different from traditional digital assets such as Bitcoin, Ethereum, and Polkadot tokens, which are interchangeable and divisible. NFTs became popular in 2021 and 2022 due to their ability to enable the monetization of digital items such as art, music, and gaming assets [31]. With the blockchain infrastructure technology underlying it, an NFT guarantees a secure and immutable record of ownership for the asset being represented. NFTs can be viewed as a use case for blockchain that has enabled digital assets to be monetized and traded securely and transparently, paving the way for a new form of digital asset ownership.

As cryptocurrencies, decentralized finance, and NFTs have allowed users to amass wealth due to the ability to transfer digital assets, there may also be a use case for non-transferable assets within the Web3 space. For example, within the Web2 gaming space, namely in World of Warcraft (WoW), soulbound items can be collected in-game but cannot be transferred to another player. Soulbound items are valuable because they give players access to exclusive items and provide additional benefits, such as increased experience or bonuses, when used in certain activities. However, with WoW continuing to bring in over a million daily active users (DAUs) in 2022 [60], how can the concept of soulbound items transfer over to the Web3 world?

In an article called Soulbound, Buterin [61] introduced the concept of SBTs for governance mechanisms across blockchain protocols. In the current state, because users can go onto centralized exchanges (CEXs) or decentralized exchanges (DEXs) to purchase governance tokens, parties that are driven by purposes potentially detrimental to the protocol could concentrate interests away from the ecosystem's community, defeating the purpose of the decentralized nature of blockchain and Web3. Furthermore, with the introduction of SBTs, similar to soulbound items in Web2 gaming, SBTs would be non-transferable. SSI and SBTs have drawn the attention of individuals, organizations, and governments around the world [62]. Similar to how NFTs onboard new users into the blockchain and Web3 spaces, SSI with SBTs could encourage more users to enter various blockchain ecosystems.

### _Additional Use Cases for Soulbound Tokens_

Upon the release of Buterin's article Soulbound, projects across Ethereum and other blockchain protocols began to develop solutions for SBTs. Within the United States, Wyoming recognized decentralized autonomous organizations (DAOs) as separate legal entities through a supplement to the law governing limited liability companies (LLCs) in 2021 [63]. This led to a number of prominent projects registering as DAOs in 2021 and 2022. DAOs such as Reputation DAO leveraged SBTs as reputation systems for utilities such as credit scores, proof of identity in the play-and-earn space, and proof of identity for council members across various ecosystems [36]. In taking this approach with SBTs, blockchain can potentially provide further transparency to the on-chain behavior of users.

In another example of how SBTs can cross over into the Web3 space, RMRK, a project within the Polkadot and Kusama ecosystems, created an SBT called Soulbound 2.0 (SBT2). Developed with the metaverse and GameFi in mind, RMRK is another project that created a reputation-based SBT [64]. As a user plays a blockchain-based game or stays in a metaverse, SBT2s can be viewed as dynamic SBTs. Phala Network, another project within the Polkadot and Kusama ecosystems, leveraged RMRK's SBT2 to create an SBT within their PhalaWorld metaverse. Using a Spirit, any user can obtain an SBT that can only be leveled up by performing activities throughout the Phala ecosystem. By demonstrating behaviors such as participating in on-chain activities, being a community advocate on Web2 social media platforms, contributing to the community from a technical perspective, or staking PHA - the native token for the Phala ecosystem - users level up their SBTs; SBTs can only evolve with user interaction.

Within the Binance ecosystem, the Binance Account Bound (BAB) SBT was created for Binance users that completed their know your customer (KYC) verification [36]. Given that Binance is a CEX that holds a number of assets across a number of wallets, could SBTs provide a use case for know-your-exchange (KYX) verification? With the collapse of FTX in 2022, users could track the suspicious activity of a wallet holding large amounts of FTT tokens - the native token of the FTX exchange. If CEXs such as Binance leverage the BAB SBT they created to provide increased transparency, other CEXs, DAOs, and protocols may have to follow similar processes to develop standards across the Web3 landscape.
With governance tokens providing one use case for SBTs, another use case for non-cryptocurrency users may be for proof of identity. Two projects using a de-facto soulbound framework within the Ethereum ecosystem are BrightID and Proof of Humanity [36]. To prove that the owners of accounts are indeed actual individuals, in an SBT model, the token identifiers are not transferable, given that on-chain assets are soulbound to each verified individual [61]. As previously mentioned, SSI is a foundational tenet of Web3 that allows users to have complete control over their digital identities. By leveraging blockchain technology, SSI can provide access to services and applications in a more efficient manner. As Web3 looks to introduce SSI to the masses, tools such as these and virtual asset service providers (VASPs) may consider an SBT framework.

### _A Decentralized Society with Soulbound Tokens_

In considering use cases for a number of industries, Weyl et al. [22] proposed that SBTs could potentially create the foundation for a decentralized society (DeSoc). Referred to as an extended resume, if users leveraged wallets as SBTs, these tools could provide the credentials and commitments to conduct a variety of activities in a decentralized fashion. Although decentralized finance activities are currently performed across a number of blockchain ecosystems, methods to track a user's credit score or payment history could provide a way to unlock lending opportunities for individuals and organizations that do not have the financial history or assets traditional financial institutions recognize. For example, if a user owns cryptocurrencies or NFTs and holds those assets in an SBT-based wallet, with the user's permission, the wallet could show the payment history and the 'crypto credit score' to the lender.
Tying SBTs back to extended resumes, college degrees, certifications, and transcripts could also be on-chain as SBTs. Given that degrees, certifications, and transcripts are soulbound in the Web2 world, employers and academic institutions could leverage SBTs to confirm that the contents of an applicant's resume are valid. Additionally, if an employer would want to conduct a reference check, the reference's addresses could be included in the SBT, where attestations could be performed on-chain.
As discussed in Section IV's Existence and Minimization areas, SBTs will play a key role in digital inheritance planning. While SBTs can be created for all entities that are a part of the executor's will, ZKPs may also be used in conjunction with SBTs to ensure that no one can manipulate the system [65]. In addition, as SBTs will become another foundational piece in the digital inheritance framework, protocols and networks outside of those previously mentioned may consider SBTs to provide privacy and security to their users as they build out their digital inheritance plans.
## VII Social Recovery Wallets and Accounts
In an effort to work toward SSI in the digital age, intuitive solutions must be created for non-technical users. Many of today's digital assets live in centralized services such as social media accounts, online banking, and cloud storage [53]. However, while dapps can provide anonymity and protection by allowing a user to connect a wallet [43], dapps in blockchain networks such as Ethereum, Solana, Polygon, and Polkadot have yet to gain wide adoption among users. Given that the Web3 future will be built on dapps, if a user loses their password or seed phrase, no intermediary or centralized entity can be contacted to recover the password. Projects have created social recovery wallets to mitigate the risk of lost assets. Social recovery wallets are smart contract wallets where a user's assets will be held within that smart contract [35]. A signing key protects access to the smart contract, and that signing key is used to approve transactions or the transfer of assets. If a user forgets his or her password, trustees - or guardians in this case - can help the user regain access to the account.
Within the Ethereum ecosystem, Argent and Loopring have developed multi-sig social recovery wallet solutions where a user can set up between three to seven guardians they can reach out to in order to recover an account [35]. As new users entering the Web3 space may forget his or her wallet password or seed phrase, this function may be a helpful tool. Additionally, as passwords, pictures, video files, and other Web2-native digital assets could be stored in a social recovery wallet through the use of decentralized storage applications such as Arweave, Filecoin, or The InterPlanetary File System (IPFS), social recovery wallets may provide a more secure solution for assets during one's life. Building upon the utility of social recovery wallets, a similar approach could be taken when performing digital inheritance planning activities.
## VIII The Existing Digital Inheritance Landscape
With more individuals understanding the importance of securing their digital assets, more projects and solutions have entered the market. Organizations have created solutions to help individuals create digital estate plans and transfer digital assets after death. Similar to traditional executors of a will or estate, a digital executor is responsible for an individual's digital assets [33].

### _Vault12_

By using multi-signature technology, Vault12 ensures that assets are securely transferred to a designated beneficiary upon the user's death or incapacitation.
### _SafeHaven_
SafeHaven is a platform that provides individuals with an innovative way to protect and manage their digital assets in the event of death, disability, or other life-changing events. SafeHaven has developed a solution called Inheriti that enables users to securely store, share, and transfer their digital assets to their heirs. Schouppe [69] patented the AES-256 cryptographic key encryption that SafeHaven's solutions are built upon, and the solutions leverage the VeChain blockchain's Proof of Authority consensus mechanism.

For decentralized digital assets such as cryptocurrencies and NFTs, SafeHaven has also added an extra layer of protection for testators by implementing a deadman's switch [54]. A deadman's switch keeps assets in a wallet or account safe by allowing the testator to define a period for the application to alert the user to enter authentication information and confirm liveness periodically. If the user fails to enter the authentication information within a certain period, the assets are automatically transferred to the designated beneficiaries.
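The deadman's switch behavior described above can be modeled as a simple liveness timer. The sketch below is a hypothetical illustration - the `DeadmansSwitch` type and its methods are invented for this example and are not SafeHaven's implementation.

```rust
use std::time::{Duration, Instant};

/// A toy deadman's switch: the owner must periodically "check in" to
/// confirm liveness; otherwise the assets become releasable to the
/// designated beneficiaries.
struct DeadmansSwitch {
    period: Duration,
    last_check_in: Instant,
}

impl DeadmansSwitch {
    fn new(period: Duration) -> Self {
        Self { period, last_check_in: Instant::now() }
    }

    /// Called whenever the owner re-authenticates in the application.
    fn check_in(&mut self) {
        self.last_check_in = Instant::now();
    }

    /// True once the owner has been silent for longer than the period,
    /// i.e. the transfer to the beneficiaries may be triggered.
    fn should_release(&self) -> bool {
        self.last_check_in.elapsed() > self.period
    }
}

fn main() {
    // A 30-day liveness window, evaluated whenever the application runs.
    let mut switch = DeadmansSwitch::new(Duration::from_secs(60 * 60 * 24 * 30));
    switch.check_in(); // the owner confirms liveness and the timer resets
    assert!(!switch.should_release());
}
```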
In expanding outside of the VeChain ecosystem, SafeHaven looks to introduce its solutions to users across Ethereum Virtual Machine (EVM)-compatible blockchains [54]. As projects outside the Ethereum ecosystem, such as Moonbeam within the Polkadot ecosystem, are building EVM solutions for non-native EVM blockchains, this approach may provide more users with digital inheritance solutions [70]. Furthermore, given the ultimate goal of interoperability for various blockchain projects, using EVMs to bridge solutions such as Inheriti into different blockchain ecosystems may increase the awareness of digital inheritance offerings.
## IX Methodology

In order to demonstrate a proposed solution for a digital inheritance framework, a case study methodology was selected. In analyzing different methodologies, one reason for proposing a case study for using the social recovery pallet for digital inheritance is that it allows for exploring the unique characteristics and potential challenges of the specific context in which the pallet will be used. Also, a case study can be useful for generating recommendations and best practices for using the social recovery pallet in other contexts, as developers and users can be interviewed during the data collection and exploration process [71]. Using a case study approach and carefully examining the functionality of the social recovery pallet, the researchers hope that the proposed solution can lead to future implementations within and outside the Polkadot and Kusama ecosystems.
### _Substrate_
In constructing the digital inheritance lego within the Polkadot and Kusama ecosystems, the proposed solution will be built upon the Substrate framework. The Substrate framework is a modular platform that allows developers to build custom blockchain applications, similar to templates in other technological solutions [72]. Pallets are self-contained units of code that provide specific functionality to the blockchain, such as governance, staking, or consensus [72]. Developers can choose from a wide range of pre-built pallets or create their own custom pallets to add specific features to their blockchain [74]. This modular design enables developers to easily customize their blockchains and add or remove features as needed.

In the Polkadot and Kusama ecosystems, pallets are implemented as WebAssembly (Wasm) modules that are compiled and uploaded onto the blockchain [18]. The Substrate pallets are compiled Wasm code, which can be executed by the network runtime.
Pallets can be grouped into three categories:
1. Core pallets: Core pallets are the fundamental building blocks of the network, providing the necessary infrastructure for consensus, networking, and other critical functions.
2. Runtime pallets: Runtime pallets are the pallets that provide the core functionality of the network, such as smart contract execution, governance, and staking.
3. Application pallets: Application pallets are the pallets that enable the development of specific dApps or use cases, such as financial services, supply chain management, or identity systems.
Substrate pallets have a wide range of potential uses and benefits in the context of the Polkadot and Kusama networks. For example, pallets have previously been created to develop decentralized governance models and create decentralized identity systems [76]. In the next section of this case study, we will explore an additional use case for the social recovery pallet to develop a digital inheritance plan for users within the Polkadot and Kusama ecosystems.

### _Digital Inheritance Planning_

The first step in developing a digital inheritance strategy would be to start with a digital executor - who may also be set up as a friend - to identify the number of friends that would be added to the M-of-N tool [78]. The 'M' would be the number of friends whose approvals are required before the recovery can proceed. The testator would also need to identify how long the beneficiaries would have to wait until the recovery could be claimed; this delay period provides an additional layer of security against social engineering attacks, as the testator could identify malicious actors attempting to access his or her account during life. In this use case, given that the testator may be holding large amounts of assets in the account to be recovered, they may work with their digital executor to set a higher delay period.

Once the testator has set up the recovery configuration, they will be required to deposit KSM on the Kusama blockchain or DOT on the Polkadot blockchain to put the data on-chain. Additionally, upon death, the digital executor or a trustee - or friend - would have to issue a deposit to initiate the social recovery process. The deposit to initiate the social recovery process could also be deemed a security measure or a honeypot: if a malicious actor submits a deposit to begin the social recovery process, the testator or the digital executor could identify this attack and call the _closeRecovery_ function to stop the process and receive the deposit.
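To summarize the planning parameters discussed in this subsection, the sketch below models a recovery configuration with friends, an M threshold, a delay period, and a deposit. The types and validation rules are illustrative assumptions and are not the actual storage items or extrinsics of the social recovery pallet.

```rust
/// Simplified model of a recovery configuration. Field names mirror the
/// concepts in the text (friends, the "M" threshold, the delay period),
/// but are invented for illustration.
struct RecoveryConfig {
    friends: Vec<String>, // trustees/guardians, possibly including the digital executor
    threshold: u16,       // the "M" in M-of-N: vouches required to recover
    delay_blocks: u32,    // blocks that must elapse before a claim succeeds
    deposit: u128,        // funds locked to place the configuration on-chain
}

fn create_recovery_config(
    friends: Vec<String>,
    threshold: u16,
    delay_blocks: u32,
    deposit: u128,
) -> Result<RecoveryConfig, &'static str> {
    if friends.is_empty() {
        return Err("at least one friend is required");
    }
    if threshold == 0 || usize::from(threshold) > friends.len() {
        return Err("threshold M must satisfy 1 <= M <= N");
    }
    Ok(RecoveryConfig { friends, threshold, delay_blocks, deposit })
}

fn main() {
    // Five friends, three required vouches, and a deliberately long delay
    // (roughly one week of six-second blocks) for a high-value account.
    let config = create_recovery_config(
        vec!["executor".into(), "f1".into(), "f2".into(), "f3".into(), "f4".into()],
        3,
        100_800,
        10,
    );
    assert!(config.is_ok());
}
```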
### _Executing the Digital Inheritance Process on Polkadot and Kusama_
Rather than recovering an individual's private key, the social recovery pallet creates a new key and makes calls on behalf of one's lost account from a new account so that one's assets can be retrieved [77]. Fig. 1 provides an overview of the social recovery process. The process begins with an individual's existing account, where he or she will set up a _createRecovery_ process that uses an M-of-N recovery tool. M-of-N is a multi-sig process where the user will identify a defined number of trustees - referred to as friends in the case of the social recovery pallet - that need to approve the recovery.

Upon the testator's death, the digital executor could submit a deposit to initiate the social recovery process by calling the _initiateRecovery_ option. The digital executor could then contact the testator's friends, each of whom would sign a _vouchRecovery_ transaction [80]. Once the threshold set up during the M-of-N process has been reached and the delay period has passed, the digital executor would be able to access the digital assets from the old account and distribute the assets per the wishes of the testator through the _claimRecovery_ process.
Once assets have been recovered from the old account, the _closeRecovery_ process can be called, allowing the digital executor to recover the deposit needed to initiate the social recovery process. After all assets have been moved to the new account, the digital executor can call the _removeRecovery_ option to remove the configuration and connection between the new account and the old account and make any further transactions or assets unrecoverable.
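The end-to-end flow - _createRecovery_, _initiateRecovery_, _vouchRecovery_, _claimRecovery_, _closeRecovery_, and _removeRecovery_ - can be summarized as a small state machine. The following sketch is a conceptual model of that sequence under the assumptions described above; it does not reproduce the pallet's real extrinsics or their signatures.

```rust
/// Conceptual states of a recovery attempt, following the sequence in the
/// text. This is a model for exposition, not the pallet's real interface.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum RecoveryState {
    Configured,                 // after createRecovery: friends, M, delay set
    Initiated { vouches: u16 }, // after initiateRecovery: deposit placed
    Claimed,                    // after claimRecovery: new account takes control
    Closed,                     // after closeRecovery: initiation deposit recovered
    Removed,                    // after removeRecovery: accounts unlinked
}

/// A claim only succeeds once enough friends have vouched and the
/// configured delay period has elapsed.
fn try_claim(state: RecoveryState, threshold: u16, delay_elapsed: bool) -> RecoveryState {
    match state {
        RecoveryState::Initiated { vouches } if vouches >= threshold && delay_elapsed => {
            RecoveryState::Claimed
        }
        other => other,
    }
}

fn main() {
    let threshold = 3;
    let mut state = RecoveryState::Initiated { vouches: 0 };
    // Each friend contacted by the digital executor signs a vouch.
    for vouches in 1..=threshold {
        state = RecoveryState::Initiated { vouches };
    }
    // Threshold reached and delay period passed: the claim succeeds.
    state = try_claim(state, threshold, true);
    assert_eq!(state, RecoveryState::Claimed);
}
```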
Although this is the proposed solution for setting up a digital inheritance plan, if a testator did not perform the recovery configuration, the developers of the social recovery pallet included an additional failsafe. In the event that the digital executor does not receive enough signatures on the _vouchRecovery_ transaction from friends of the testator, the digital executor could go through the Polkadot or Kusama council or initiate a public proposal to use the Root origin to access the digital assets of the testator [79]. As this process would introduce additional levels of complexity and the risk of attracting malicious actors, the proposed solution to develop a digital inheritance plan could provide the security and safety needed for a testator's digital assets.

Figure 1: An overview of the pubkey change process
## X Future Research Considerations
Despite the proposed solution to leverage the social recovery pallet within the Polkadot and Kusama ecosystems for digital inheritance planning, effective ways to educate users and legal teams on this use case may require further research. Additionally, with the advancements in other technologies, such as artificial intelligence and quantum computing, the following topics may also require further investigation:
### Inclusion of Additional Digital Assets
While Web3 native assets such as cryptocurrencies and NFTs can be easily stored in decentralized accounts and vaults, there is still a gap in the research as to how Web2 native assets such as social media accounts, content, and passwords can be stored. Phala Network within the Polkadot and Kusama ecosystems has developed external storage services using centralized tools such as Amazon S3 and decentralized tools such as Storj, Arweave, and Filecoin [84]. However, additional research may be required regarding the security concerns and accessibility of storing different types of assets in a digital inheritance account and creating a type of pointer to those assets.
### Interoperability
Interoperability is a common topic of discussion regarding Web3 and blockchain, but how will tools created for active accounts impact digital inheritance accounts? Additional research may be required to assess the feasibility of an account aggregator injecting digital assets from other blockchains once a social recovery process is initiated. For example, if a digital executor initiated the social recovery process, given the potential interoperability properties of the Polkadot and Kusama ecosystems, is it feasible for a social recovery pallet to perform calls to pre-defined wallet addresses on the Bitcoin, Ethereum, Avalanche, and Binance blockchains to transfer assets to the social recovery account set up by the testator?
### Onboarding Attorneys to Establish Legal Frameworks
As technology has grown, so have the legal issues surrounding digital inheritance. As a result, attorneys and digital executors can play a crucial role in helping individuals and families establish a digital inheritance strategy. They can also assist in developing more explicit and more defined legal frameworks for digital assets within their jurisdictions.

### Simplifying Asset Transfers Within the Social Recovery Pallet

Currently, the new account simply performs calls to the old account, and once all assets are removed from the old account, the _removeRecovery_ function is called to remove the link between the two accounts. If a function were created within the social recovery pallet to transfer all assets in one step, it might increase the intuitiveness of the pallet.
### The determination and implementation of the deceased's wishes (the digital will)
Traditional estate planning instruments - such as wills and trusts - are not designed to address the transfer of digital assets. These instruments may provide instructions for the transfer of physical property, but they need to account for the complexities of digital property [81]. To ensure that digital assets are effectively transferred, individuals must create updated estate planning documents that include specific instructions for managing digital property [82]. These instruments should identify the types of digital assets owned by the individual, provide instructions for the transfer of digital assets, and name a digital executor to manage the digital estate [83].
Figure 2: The Social Recovery process on Polkadot and Kusama
### Quantum Computing and Secure Enclaves

With the continued advancement of quantum computing, organizations should be preparing now for the future threat of quantum computers [85]. While the National Institute of Standards and Technology (NIST) is currently developing post-quantum cryptography standards, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has developed a roadmap that leaders of organizations should follow in the near term [86]. However, as there is currently no consensus as to whether quantum computing may be a threat to blockchains soon, further research may need to be performed on the topic.
Additionally, as organizations such as Apple have been using secure enclaves in their mobile solutions, additional research may need to be conducted to understand whether additional layers of security should be introduced. Also referred to as trusted execution environments (TEEs), secure enclaves can be used to store cryptographic keys, protect sensitive data from unauthorized access, and enable secure remote attestation [87]. With biometric authentication providing an additional security layer during one's life, secure enclaves may need to be considered to counter future threats once someone passes away.
### _A Digital Inheritance Lego Framework_
Similar to how DeFi legos were built from existing protocols and dapps to create new protocols and solutions, additional research may be required to integrate a number of tools discussed in this study, such as SBTs, social recovery wallets, vaults, secure enclaves, Zk-proofs, and smart contracts. In addition, given these solutions exist across a number of blockchain ecosystems, composability among these applications may also need to be considered.
## XI Acknowledgement
The authors would like to thank Joshua Waller, Ray Lu, and the other team members from Phala and Bit.Country from the Polkadot and Kusama ecosystems for their thoughts and input into this study. The authors would also like to thank Charlotte McCurdy of Perley-Robertson, Hill & McDougall for providing input from a legal aspect on digital asset inheritance. This work was supported by a research grant from the Web3 Foundation.
## XII Conclusion
The findings from this study have important implications for individuals and those involved in managing digital assets after one's death. First, the findings suggest that individuals are beginning to understand the importance of adding Web2-native digital assets into their inheritance plans, but further education must take place for individuals, digital executors, and policymakers regarding Web3-native digital assets. Second, the findings suggest that although the study focused on the Polkadot and Kusama ecosystems, digital inheritance lego frameworks and SBT structures may be considered in other protocols to enable the composability and interoperability of integrating Web2 and Web3 digital assets using some aggregation approach. Finally, the study highlights the importance of understanding the motivations for digital inheritance. This understanding can inform effective strategies for managing digital assets after a person's death as the decentralized Web3 environment continues to evolve.
|
2301.07616 | Amenable wreath products with non almost finite actions of mean
dimension zero | Almost finiteness was introduced in the seminal work of Kerr as a dynamical
analogue of Z-stability in the Toms-Winter conjecture. In this article, we
provide the first examples of minimal, topologically free actions of amenable
groups that have mean dimension zero but are not almost finite. More precisely,
we prove that there exists an infinite family of amenable wreath products that
admit topologically free, minimal profinite actions on the Cantor space which
fail to be almost finite. Furthermore, these actions have dynamical comparison.
This intriguing new phenomenon shows that Kerr's dynamical analogue of
Toms-Winter conjecture fails for minimal, topologically free actions of
amenable groups.
The notion of allosteric group holds a significant position in our study. A
group is allosteric if it admits a minimal action on a compact space with an
invariant ergodic probability measure that is topologically free but not
essentially free. We study allostery of wreath products and provide the first
examples of allosteric amenable groups. | Matthieu Joseph | 2023-01-18T15:50:57Z | http://arxiv.org/abs/2301.07616v3 | # Amenable wreath products with non almost finite actions of mean dimension zero
###### Abstract
Almost finiteness was introduced in the seminal work of Kerr as a dynamical analogue of \(\mathcal{Z}\)-stability in the Toms-Winter conjecture. In this article, we provide the first examples of minimal, topologically free actions of amenable groups that have mean dimension zero but are not almost finite. More precisely, we prove that there exists an infinite family of amenable wreath products that admit topologically free, minimal profinite actions on the Cantor space which fail to be almost finite. Furthermore, these actions have dynamical comparison. This intriguing new phenomenon shows that Kerr's dynamical analogue of Toms-Winter conjecture fails for minimal, topologically free actions of amenable groups.
The notion of allosteric group holds a significant position in our study. A group is allosteric if it admits a minimal action on a compact space with an invariant ergodic measure that is topologically free but not essentially free. We study allostery of wreath products and provide the first examples of allosteric amenable groups.
**MSC:** 37A15, 37B05, 20E08, 20E26, 20E15, 46L35.
**Keywords:** Almost finiteness, C\({}^{*}\)-algebras, non-free actions, allostery.
###### Contents
* 1 Introduction
* 2 Almost finiteness and dynamical comparison
* 3 A profinite criterion for allostery
* 4 Allostery and wreath products
* 5 Proof of the main theorem
## 1 Introduction
Topological dynamics and C\({}^{*}\)-algebra theory have a long intertwined story with the former offering a wide source of meaningful examples for the latter via the notion of
crossed product. In the last decade, a major breakthrough occurred in the ambitious classification program for simple separable nuclear C\({}^{*}\)-algebras that was launched by Elliott in the 1980s. This program, which proposes to classify a certain class of C\({}^{*}\)-algebras using \(K\)-theoretic and tracial data, has somehow come to a conclusion with the following classification theorem built on the work of many researchers.
**Classification Theorem** ([1, Cor. D]).: _Unital, separable, simple, nuclear, \(\mathcal{Z}\)-stable C\({}^{*}\)-algebras satisfying the Universal Coefficient Theorem are classified by their Elliott invariant._
It has naturally become an essential task to find large classes of C\({}^{*}\)-algebras that are covered by the classification theorem. Crossed products arising from topological dynamics are typical candidates: if \(\Gamma\curvearrowright X\) is an action on a compact metric space, then the crossed product \(C(X)\rtimes\Gamma\) is unital and separable, is simple if and only if \(\Gamma\curvearrowright X\) is topologically free and minimal [1], is nuclear if and only if the action \(\Gamma\curvearrowright X\) is amenable [1]. In the case where \(\Gamma\) is amenable, the action \(\Gamma\curvearrowright X\) is automatically amenable. Moreover, if \(\Gamma\curvearrowright X\) is amenable, then \(C(X)\rtimes\Gamma\) satisfies the Universal Coefficient Theorem [20, Prop. 10.7]. In order to understand whether such crossed products are covered by the classification theorem, it remains to understand their \(\mathcal{Z}\)-stability.
Even though \(\mathcal{Z}\)-stability for an amenable action of a non-amenable group is a stimulating question which has received attention recently (see for instance [10] and [10]), we focus in the present paper on actions of amenable groups. In this context, several techniques with a dynamical flavor have been developed recently to prove \(\mathcal{Z}\)-stability.
Mean dimension (as defined by Gromov) appears to be a meaningful invariant in this context. This invariant, associated to any topological action, measures how the dimension grows asymptotically and is known to be zero when the space is finite dimensional or when the action has zero entropy. For a minimal action of \(\mathbb{Z}\) on a compact metric space \(X\), mean dimension zero implies that \(C(X)\rtimes\mathbb{Z}\) is \(\mathcal{Z}\)-stable, as a combination of [21] when \(X\) is finite dimensional and [14] when \(X\) is infinite dimensional. Conversely, Giol and Kerr exhibit examples of minimal \(\mathbb{Z}\)-actions of mean dimension nonzero whose associated crossed product is not \(\mathcal{Z}\)-stable [10]. More recently, Niu proved that for any free minimal action of \(\mathbb{Z}^{d}\) on a compact metric space \(X\), mean dimension zero implies \(\mathcal{Z}\)-stability of \(C(X)\rtimes\mathbb{Z}^{d}\)[23]. For minimal, topologically free actions of amenable groups, mean dimension zero is conjectured to be the suitable setting in which \(\mathcal{Z}\)-stability holds.
Another fruitful technique for \(\mathcal{Z}\)-stability is the recent dynamical notion of _almost finiteness_ due to Kerr [11]. Almost finiteness is defined for group actions on compact metric spaces as a sort of topological version of the Ornstein-Weiss tower theorem in measurable dynamics. It was first used in [10] to prove that the crossed product associated with any almost finite free minimal action on the Cantor space is \(\mathcal{Z}\)-stable. Kerr was then able to remove the zero-dimensional assumption on the space: he proved that crossed products associated with free minimal almost finite actions of amenable groups are \(\mathcal{Z}\)-stable [11, Thm. 12.4] and therefore are covered by the classification theorem. A large class of group actions are covered by this theorem: by a recent result of Kerr and Naryshkin, any free minimal action of an elementary amenable group on a finite-dimensional compact metric space is
almost finite [10]. The main result of this article is to provide the first examples of minimal, topologically free actions of amenable groups on zero dimensional spaces (and therefore of mean dimension zero) that are not almost finite.
**Theorem 1.1**.: _Let \(\Lambda\) be a finitely generated torsion free nilpotent group and let \(d\geq 1\). Then \(\mathbb{Z}^{d}\wr\Lambda\) admits minimal, topologically free, profinite actions on the Cantor space that are not almost finite._
Here an action \(\Gamma\curvearrowright X\) on a zero-dimensional compact metric space \(X\) is _profinite_ if the orbits of the action of \(\Gamma\) on the set \(\operatorname{Clo}(X)\) of clopen subsets of \(X\) are all finite. We refer to Section 3 for a precise definition of profinite actions.
The wreath products covered by Theorem 1.1 are all elementary amenable, so by contrast, all their free minimal action on finite-dimensional compact metric spaces are almost finite by Kerr and Naryshkin's result [10].
As a consequence, the hope to use almost finiteness in order to prove \(\mathcal{Z}\)-stability of the crossed products associated with allosteric actions of amenable groups is in vain. In general, the \(\mathcal{Z}\)-stability of \(C(X)\rtimes\Gamma\), and therefore its classifiability, where \(\Gamma\curvearrowright(X,\mu)\) is an allosteric action of an amenable group \(\Gamma\), remains an open question.
**Question 1.2**.: Let \(\Gamma\) be an amenable group. Let \(\Gamma\curvearrowright(X,\mu)\) be a minimal ergodic action of mean dimension zero. If \(\Gamma\curvearrowright(X,\mu)\) is allosteric, is the crossed product \(C(X)\rtimes\Gamma\) classifiable by its Elliott invariant?
In his seminal work [11], Kerr developed a dynamical counterpart of the so-called Toms-Winter conjecture for simple separable nuclear \(\mathrm{C}^{*}\)-algebras. With a view toward finding dynamical analogues of finite nuclear dimension, \(\mathcal{Z}\)-stability and strict comparison in the Toms-Winter conjecture, Kerr introduced the following properties for group actions on compact spaces:
1. finite tower dimension,
2. almost finiteness,
3. dynamical comparison,
and proved that for a free minimal action of an amenable group \(\Gamma\) on a finite dimensional compact metric space \(X\), with finitely many invariant ergodic measures, (i)\(\Rightarrow\)(ii)\(\Leftrightarrow\)(iii), see [11, Thm. 9.3]. The actions that we construct in Theorem 1.1 are profinite. Since minimal profinite actions are uniquely ergodic and have comparison (see Lemma 2.4), this shows that Kerr's dynamical version of Toms-Winter conjecture fails for topologically free actions of amenable groups (see Corollary 2.5). Therefore, a dynamical analogue of Toms-Winter conjecture remains to be understood for topologically free minimal actions.
The notion of allostery, which has its roots in subgroup dynamics, will be the main tool in the proof of Theorem 1.1. A _minimal ergodic action_\(\Gamma\curvearrowright(X,\mu)\) is an action by homeomorphisms of \(\Gamma\) on a compact metric space \(X\), which is minimal (every orbit is dense), with an ergodic \(\Gamma\)-invariant Borel probability measure \(\mu\). We say that a minimal ergodic action is:
* _topologically free_ if the set of points with trivial stabilizer is comeager, that is contains a dense countable intersection of open sets.
* _essentially free_ if the set of points with trivial stabilizer has full measure.
It is a classical result that essential freeness implies topological freeness for a minimal ergodic action, see for instance [10, Lem. 2.2]. The study of the converse, which is false in general, will be the main tool in this article and led the author to introduce the following denomination in [10].
**Definition 1.3**.: A minimal ergodic action \(\Gamma\curvearrowright(X,\mu)\) is _allosteric_ if it is topologically free but not essentially free.
A countable group \(\Gamma\) is _allosteric_ if it admits an allosteric action. The existence of allosteric groups, and more precisely groups which admit allosteric profinite actions, was asked by Grigorchuk, Nekrashevich and Sushchanskii in [11, Prob. 7.3.3]. The first examples of allosteric groups were provided by Bergeron and Gaboriau in [1]. They proved that any non-amenable free product of two non-trivial residually finite groups is allosteric. An independent proof of this result for free groups of finite rank was obtained by Abert and Elek in the unpublished paper [1]. In [1], Abert and Elek proved that the free product of four copies of the cyclic group \(C_{2}\) admits an allosteric action whose orbit equivalence relation is measure hyperfinite. In [10], the author proved that the fundamental group of any non-amenable surface is allosteric, providing the first examples of allosteric groups with one end. In [1], [1], [1], and [10], the allosteric actions obtained are all profinite, which answers positively the question [11, Prob. 7.3.3].
As of now, examples of allosteric groups are rare. By contrast, there are plenty of groups that are known to be non-allosteric and we refer to the introduction of [10] for a non-exhaustive list of such groups.
With a view toward proving Theorem 1.1, we study in Section 4 allostery for wreath products. Given two countable groups \(\Gamma,\Lambda\), the _wreath product_\(\Gamma\wr\Lambda\) is the group
\[\Gamma\wr\Lambda\coloneqq\Big{(}\bigoplus_{\Lambda}\Gamma\Big{)}\rtimes\Lambda,\]
where \(\Lambda\) acts on the direct sum \(\bigoplus_{\Lambda}\Gamma\) by shifting the copies of \(\Gamma\). Given a group \(\Gamma\) and a prime number \(p\), we say that \(\Gamma\) is _residually \(p\)-finite_ if for every nontrivial element \(\gamma\in\Gamma\), there exists a normal subgroup \(N\trianglelefteq\Gamma\) such that \(\Gamma/N\) is a finite \(p\)-group and \(\gamma\notin N\).
We prove that wreath products with an abundance of finite \(p\)-quotients are allosteric.
**Theorem 1.4**.: _Let \(\Lambda\) be a countable group and let \(d\in\mathbb{N}^{*}\). Assume that there exist infinitely many prime numbers \(p\) such that \(\Lambda\) is a residually \(p\)-finite group. Then \(\mathbb{Z}^{d}\wr\Lambda\) admits profinite allosteric actions._
To prove this theorem, we develop a profinite criterion that implies allostery. This criterion, which is explained in Section 3, uses a profinite construction and rely on the existence of a sequence of finite index subgroups which mimic allostery at finite stages. In Section 4, we prove Theorem 1.4 by showing that \(\mathbb{Z}^{d}\wr\Lambda\) satisfies this criterion. In particular, the allosteric actions that we obtain are all profinite.
As a corollary of Theorem 1.4, we provide the first examples of _amenable_ allosteric groups, which answers a question of Ortega and Scarparo [10, Rem. 2.6].
**Corollary 1.5**.: _Let \(\Lambda\) be a finitely generated torsion-free nilpotent group and let \(d\in\mathbb{N}^{*}\). Then \(\mathbb{Z}^{d}\wr\Lambda\) is an amenable allosteric group._
Proof of Corollary 1.5.: If \(\Lambda\) is finitely generated torsion-free and nilpotent, then \(\Lambda\) is residually \(p\)-finite for every prime \(p\) by a result of Gruenberg [10]. Therefore \(\mathbb{Z}^{d}\wr\Lambda\) is allosteric and amenable.
Again, the allosteric actions that we obtain in Corollary 1.5 are profinite. The groups under consideration in this corollary all have infinite asymptotic dimension, and we do not know whether allosteric amenable groups with finite asymptotic dimension exist.
The proof of Theorem 1.1 will be provided in Section 5 at the end of this paper. Let us sketch its proof here. The key observation in order to prove this result is that allosteric actions of amenable groups cannot be almost finite, as almost finite actions are always essentially free for any invariant measure (see Lemma 2.2 for a proof of this fact for zero-dimensional compact metric spaces). Since the allosteric actions that we construct in Corollary 1.5 are profinite, this provides minimal, topologically free profinite actions on the Cantor space that are not almost finite.
**Acknowledgments.** I thank David Kerr, Petr Naryshkin and Owen Tanner for a stimulating discussion on allostery and C\({}^{*}\)-algebras, as well as for their comments on a preliminary version of this work. I also thank the organizers of the conference _"Group Actions: Dynamics, Measure, Topology"_ in Munster during which this interaction took place. Finally, I thank Damien Gaboriau for suggesting me a substantial restructuring of the introduction in the first version.
## 2 Almost finiteness and dynamical comparison
In this section we discuss the notions of almost finiteness and dynamical comparison for zero-dimensional compact metric spaces.
A compact metric space \(X\) is _zero-dimensional_ if it admits a basis of clopen sets. Almost finiteness was defined by Kerr for group actions on compact metric spaces [11, Def. 8.2] but we will restrict in this paper to group actions on zero-dimensional compact metric spaces as the definition is easier to state in this case.
**Definition 2.1**.: An action \(\Gamma\curvearrowright X\) on a compact metric zero-dimensional space \(X\) is _almost finite_ if for all finite \(K\Subset\Gamma\) (here and thereafter, \(\Subset\) stands for "is a finite subset of") and \(\varepsilon>0\), there exist clopen sets \(V_{1},\ldots,V_{n}\subseteq X\) and finite sets \(S_{1},\ldots,S_{n}\Subset\Gamma\) such that
* the sets \(sV_{i}\) for \(s\in S_{i}\) and \(i\in\{1,\ldots,n\}\) are pairwise disjoint,
* for all \(i\in\{1,\ldots,n\}\), \(|KS_{i}\triangle S_{i}|<\varepsilon|S_{i}|\),
* \(X=\bigsqcup_{i=1}^{n}S_{i}V_{i}\).
Observe that the notion of almost finiteness only makes sense for actions of _amenable_ groups, as the finite sets \(S_{i}\) are \((K,\varepsilon)\)-Følner sets of \(\Gamma\). If \(\Gamma\curvearrowright X\) is an action on a
compact space, we denote by \(\operatorname{Prob}_{\Gamma}(X)\) the space of \(\Gamma\)-invariant Borel probability measures on \(X\).
**Lemma 2.2**.: _Let \(\Gamma\curvearrowright X\) be an action on a zero-dimensional compact metric space, which is almost finite. Then for any \(\mu\in\operatorname{Prob}_{\Gamma}(X)\), the action \(\Gamma\curvearrowright(X,\mu)\) is essentially free._
Proof.: Fix \(\mu\in\operatorname{Prob}_{\Gamma}(X)\). Let \(\gamma\in\Gamma\setminus\{1\}\), let \(K\coloneqq\{\gamma^{-1}\}\) and fix \(\varepsilon>0\). Let \(V_{1},\ldots,V_{n}\subseteq X\) be clopen sets and \(S_{1},\ldots,S_{n}\Subset\Gamma\) be finite sets that witness almost finiteness for the pair \((K,\varepsilon)\). If \(\{x\in X\colon\gamma x=x\}\) intersects \(sV_{i}\), then \(\gamma sV_{i}\) and \(sV_{i}\) are not disjoint, so \(s\in S_{i}\setminus KS_{i}\). Thus
\[\mu(\{x\in X\colon\gamma x=x\}) \leq\sum_{i=1}^{n}\lvert KS_{i}\triangle S_{i}\rvert\mu(V_{i})\] \[\leq\varepsilon\sum_{i=1}^{n}\lvert S_{i}\rvert\mu(V_{i})=\varepsilon.\]
where the last equality holds because \(\mu\) is \(\Gamma\)-invariant and \(X=\bigsqcup_{i=1}^{n}S_{i}V_{i}\), so that \(\sum_{i=1}^{n}\lvert S_{i}\rvert\mu(V_{i})=\mu(X)=1\). Since \(\varepsilon>0\) was arbitrary, this implies that \(\Gamma\curvearrowright(X,\mu)\) is essentially free.
**Definition 2.3**.: An action \(\Gamma\curvearrowright X\) on a compact metric zero-dimensional space has _comparison_ if for all nonempty clopen sets \(A,B\subseteq X\) satisfying \(\mu(A)<\mu(B)\) for all \(\mu\in\operatorname{Prob}_{\Gamma}(X)\), there exist a partition \(A=\bigsqcup_{i=1}^{n}C_{i}\) into clopen sets and elements \(\gamma_{1},\ldots,\gamma_{n}\in\Gamma\) such that \((\gamma_{i}C_{i})_{1\leq i\leq n}\) are pairwise disjoint subsets of \(B\).
Recall that an action \(\Gamma\curvearrowright X\) on a zero-dimensional compact metric space \(X\) is _profinite_ if the orbits of the \(\Gamma\)-action on the set \(\operatorname{Clo}(X)\) of clopen subsets of \(X\) are all finite. In the sequel, we will need the fact that profinite minimal actions are uniquely ergodic (see Lemma 3.1 for a more precise statement).
**Lemma 2.4**.: _Any profinite minimal action \(\Gamma\curvearrowright X\) has comparison._
Proof.: Let \(\mu\) be the unique \(\Gamma\)-invariant probability measure on \(X\). Fix \(A,B\) clopen such that \(\mu(A)<\mu(B)\). Let \(\mathcal{B}\) be the finite Boolean algebra generated by the \(\Gamma\)-translates of \(A\) and of \(B\). Let \(\mathcal{A}\subseteq\mathcal{B}\) denote the set of atoms of \(\mathcal{B}\). Then \(\mathcal{A}\) is a \(\Gamma\)-invariant set and two atoms are either disjoint or equal. For all \(C\in\mathcal{A}\), we have \(\bigcup_{\gamma\in\Gamma}\gamma C=X\) by minimality. This shows that \(\Gamma\curvearrowright\mathcal{A}\) is transitive and that \(\mathcal{A}\) forms a partition of \(X\) whose pieces all have the same \(\mu\)-measure. Since \(A\) and \(B\) are in \(\mathcal{B}\), they can be expressed uniquely as a disjoint union of atoms. So there exist \(C_{1},\ldots,C_{n},C_{1}^{\prime},\ldots,C_{m}^{\prime}\in\mathcal{A}\) such that \(A=\bigsqcup_{i=1}^{n}C_{i}\) and \(B=\bigsqcup_{j=1}^{m}C_{j}^{\prime}\). Since \(\mu(A)<\mu(B)\), we get \(n<m\), and by transitivity of \(\Gamma\curvearrowright\mathcal{A}\), there exist \(\gamma_{1},\ldots,\gamma_{n}\in\Gamma\) such that \((\gamma_{i}C_{i})_{1\leq i\leq n}\) are pairwise disjoint subsets of \(B\).
We therefore obtain the following result as a Corollary of Theorem 1.4.
**Corollary 2.5**.: _Let \(\Lambda\) be a finitely generated torsion-free nilpotent group and let \(d\in\mathbb{N}^{*}\). Then \(\mathbb{Z}^{d}\wr\Lambda\) admits a minimal, topologically free action on a finite dimensional compact metric space with finitely many invariant ergodic measures, which is not almost finite but has comparison._
By contrast, Kerr proved that for any _free_ minimal action of an amenable group on a finite-dimensional compact metric space with finitely many invariant ergodic measures, almost finiteness is equivalent to comparison [12, Thm. 9.3].
## 3 A profinite criterion for allostery
In this section we provide a profinite criterion which implies allostery. This criterion was used without being made explicit in the proof that the fundamental group of any hyperbolic surface is allosteric [10]. We provide here a precise and explicit criterion. Let us first recall some basic notions on profinite actions. In Section 2, we defined profinite actions as actions \(\Gamma\curvearrowright X\) on zero-dimensional compact metric spaces \(X\) such that the \(\Gamma\)-orbit of every clopen set is finite. We provide here an equivalent definition with inverse limits of actions on finite sets.
Let \((I,\leq)\) be a directed _countable_ poset. For all \(i\in I\), let \(\Gamma\curvearrowright X_{i}\) be an action on a finite set. Assume that for all \(i\leq j\), we have a \(\Gamma\)-equivariant onto map \(f_{ij}:X_{j}\to X_{i}\), such that
* \(f_{ii}\) is the identity on \(X_{i}\),
* \(f_{ik}=f_{ij}\circ f_{jk}\) for all \(i\leq j\leq k\).
The inverse limit of the finite spaces \(X_{i}\) is the space
\[\varprojlim_{i\in I}X_{i}\coloneqq\left\{(x_{i})\in\prod_{i\in I}X_{i}\colon x_{i}=f_{ij}(x_{j})\text{ for all }i\leq j\right\}.\]
This space is closed, thus compact metrizable, and totally disconnected in the product topology. The diagonal action of \(\Gamma\) on \(\prod_{i\in I}X_{i}\) restricts to an action by homeomorphisms of \(\Gamma\) on \(\varprojlim X_{i}\).
If each \(X_{i}\) is endowed with a \(\Gamma\)-invariant probability measure \(\mu_{i}\) such that \((f_{ij})_{*}\mu_{j}=\mu_{i}\) for all \(i\leq j\), we let \(\mu\) be the unique Borel probability measure on \(\varprojlim X_{i}\) that projects for every \(j\in I\) onto \(\mu_{j}\) via the canonical projection \(\pi_{j}:\varprojlim X_{i}\to X_{j}\). The \(\Gamma\)-action on \(\varprojlim X_{i}\) preserves \(\mu\) and is called the _inverse limit_ of the pmp (probability measure preserving) actions \(\Gamma\curvearrowright(X_{i},\mu_{i})\). A pmp action of \(\Gamma\) is _profinite_ if it is measurably isomorphic to an inverse limit of pmp \(\Gamma\)-actions on finite sets. The following lemma is well-known, see [12, Prop. 4.1] for a proof.
**Lemma 3.1**.: _The following are equivalent:_
1. _For every_ \(i\in I\)_,_ \(\Gamma\curvearrowright X_{i}\) _is transitive and_ \(\mu_{i}\) _is the uniform probability measure on_ \(X_{i}\)_._
2. _The action_ \(\Gamma\curvearrowright\varprojlim X_{i}\) _is minimal._
3. _The pmp action_ \(\Gamma\curvearrowright(\varprojlim X_{i},\mu)\) _is ergodic._
4. _The action_ \(\Gamma\curvearrowright\varprojlim X_{i}\) _is uniquely ergodic._
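A standard illustration of Lemma 3.1, which we add here as an example, is the dyadic odometer: take \(I=\mathbb{N}\), \(X_{n}\coloneqq\mathbb{Z}/2^{n}\mathbb{Z}\) endowed with the uniform measure, and \(f_{nm}\) the reduction modulo \(2^{n}\) for \(n\leq m\). Then

\[\varprojlim_{n\in\mathbb{N}}\mathbb{Z}/2^{n}\mathbb{Z}\cong\mathbb{Z}_{2},\]

and the \(\mathbb{Z}\)-action generated by \(x\mapsto x+1\) is the inverse limit of the transitive actions \(\mathbb{Z}\curvearrowright\mathbb{Z}/2^{n}\mathbb{Z}\); by Lemma 3.1 it is minimal and uniquely ergodic, the unique invariant measure being the Haar measure of \(\mathbb{Z}_{2}\).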
We are now ready to state the allosteric criterion.
**Theorem 3.2** (Allosteric criterion).: _Let \(\Gamma\) be a countable group and let \(S\leq\Gamma\) be a nontrivial subgroup. Fix a family \((\varepsilon_{\gamma})_{\gamma\in\Gamma\setminus\{1\}}\) of real numbers in \(]0,1[\) such that \(\prod_{\gamma\in\Gamma\setminus\{1\}}(1-\varepsilon_{\gamma})>0\) and a family \((n_{\gamma})_{\gamma\in\Gamma\setminus\{1\}}\) of pairwise coprime integers. Assume that for all \(\gamma\in\Gamma\setminus\{1\}\), there exists a finite index subgroup \(\Gamma_{\gamma}\leq\Gamma\), whose index is a power of \(n_{\gamma}\), such that_
1. \(\gamma\notin\Gamma_{\gamma}\)_,_
2. \(|\{q\in\Gamma/\Gamma_{\gamma}\colon\forall s\in S,sq=q\}|\geq(1-\varepsilon_{ \gamma})[\Gamma:\Gamma_{\gamma}]\)_._
_Then the profinite action_
\[\Gamma\curvearrowright\varprojlim_{F\Subset\Gamma\setminus\{1\}}\Big{(}\Gamma/ \bigcap_{\gamma\in F}\Gamma_{\gamma},\mu_{F}\Big{)},\]
_where \(\mu_{F}\) is the uniform probability measure on \(\Gamma/\bigcap_{\gamma\in F}\Gamma_{\gamma}\), is allosteric._
Before we give the proof of this criterion, let us make some remarks. First, if \(\Gamma\) satisfies this criterion, then \(\Gamma\) is residually finite. In fact, we don't know any example of a non-residually finite allosteric group. In the criterion, the assumption that the \((n_{\gamma})_{\gamma\in\Gamma\setminus\{1\}}\) are pairwise coprime is used to obtain a minimal ergodic profinite action. Item 1 is used to get topological freeness of the profinite action, whereas Item 2 is used to get non-essential freeness of the profinite action.
Proof.: For all \(F\Subset\Gamma\setminus\{1\}\), we denote by \(X_{F}\) the finite set \(\Gamma/\bigcap_{\gamma\in F}\Gamma_{\gamma}\). We denote by \(X\) the inverse limit \(\varprojlim X_{F}\), by \(\mu\) the probability measure on \(X\) that projects onto \(\mu_{F}\) and by \(\alpha\) the profinite action \(\Gamma\curvearrowright(X,\mu)\). For all \(F\Subset\Gamma\setminus\{1\}\), the action \(\Gamma\curvearrowright X_{F}\) is transitive, therefore we get by Lemma 3.1 that \(\alpha\) is a minimal ergodic action. Let us prove that \(\alpha\) is topologically free but not essentially free.
We start with topological freeness. For all \(F\Subset\Gamma\setminus\{1\}\), let \(y_{F}\coloneqq\bigcap_{\gamma\in F}\Gamma_{\gamma}\in X_{F}\) and let \(y\coloneqq(y_{F})\in X\). By assumption, \(\gamma\notin\Gamma_{\gamma}\) for all \(\gamma\in\Gamma\setminus\{1\}\) and thus \(\mathrm{Stab}_{\alpha}(y)=\{1\}\). Since \(\alpha\) is minimal, this implies that the set of points with trivial stabilizer is dense. Moreover, this set \(\mathrm{Stab}_{\alpha}^{-1}(\{1\})=\bigcap_{\gamma\in\Gamma\setminus\{1\}}\{x\in X\colon\gamma x\neq x\}\) is always \(G_{\delta}\). Therefore, it is comeager and this shows that \(\alpha\) is topologically free.
We now prove that \(\alpha\) is not essentially free. Since the index \([\Gamma:\Gamma_{\gamma}]\) is a power of \(n_{\gamma}\) and the \(n_{\gamma}\) are pairwise coprime, the action \(\Gamma\curvearrowright X_{F}\) is isomorphic to the diagonal action \(\Gamma\curvearrowright\prod_{\gamma\in F}\Gamma/\Gamma_{\gamma}\) of the left coset actions. Therefore for all \(F\Subset\Gamma\setminus\{1\}\),
\[\frac{|\{x\in X_{F}\colon\forall s\in S,sx=x\}|}{|X_{F}|} =\prod_{\gamma\in F}\frac{|\{q\in\Gamma/\Gamma_{\gamma}\colon \forall s\in S,sq=q\}|}{[\Gamma:\Gamma_{\gamma}]}\] \[\geq\prod_{\gamma\in F}(1-\varepsilon_{\gamma}).\]
By definition of \(\mu\), this implies that
\[\mu(\{x\in X\colon S\leq\mathrm{Stab}_{\alpha}(x)\})\geq\prod_{\gamma\in \Gamma\setminus\{1\}}(1-\varepsilon_{\gamma})>0\]
which shows that \(\alpha\) is not essentially free.
## 4 Allostery and wreath products
In this section, we apply the allostery criterion of Theorem 3.2 to prove that some wreath products are allosteric. In the sequel, we will only work with wreath products of the form \(A\wr\Lambda\) with \(A\) abelian and we recall the definitions in this case. Let \(A\) and \(\Lambda\) be two countable groups with \(A\) abelian. The _support_ of a function \(f:\Lambda\to A\) is defined by \(\mathrm{supp}(f)\coloneqq\{\lambda\in\Lambda\colon f(\lambda)\neq 0\}\). Pointwise addition defines a group operation on \(A^{\Lambda}\). The subgroup of \(A^{\Lambda}\) consisting of functions whose support
is finite is denoted by \(\bigoplus_{\lambda\in\Lambda}A\). The _wreath product_ of \(A\) by \(\Lambda\), denoted by \(A\wr\Lambda\), is the semi-direct product
\[A\wr\Lambda\coloneqq\Big{(}\bigoplus_{\lambda\in\Lambda}A\Big{)}\rtimes\Lambda\]
where \(\Lambda\) acts by shifting the copies of \(A\) in \(\bigoplus_{\lambda\in\Lambda}A\). In other words, the group multiplication is given by \((f,\gamma)(f^{\prime},\gamma^{\prime})\coloneqq(f+f^{\prime}(\gamma^{-1}.), \gamma\gamma^{\prime})\).
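To make the multiplication rule concrete, the following is a minimal computational sketch of the group law (our illustration, with ad hoc names), specialized to \(\Lambda=\mathbb{Z}\) and encoding a finitely supported \(f:\Lambda\to\mathbb{Z}^{d}\) as a dictionary:

```python
# Sketch of the group law in Z^d wr Lambda for Lambda = Z (written additively).
# A finitely supported f : Z -> Z^d is stored as {position: tuple of d ints}.

def add(f, g):
    """Pointwise sum of two finitely supported functions."""
    out = dict(f)
    for lam, v in g.items():
        w = tuple(a + b for a, b in zip(out.get(lam, (0,) * len(v)), v))
        if any(w):
            out[lam] = w
        else:
            out.pop(lam, None)  # keep the support reduced
    return out

def shift(f, gamma):
    """(gamma . f)(lam) = f(gamma^{-1} lam); additively, f(lam - gamma)."""
    return {lam + gamma: v for lam, v in f.items()}

def mul(x, y):
    """(f, gamma)(f', gamma') = (f + f'(gamma^{-1} . ), gamma + gamma')."""
    (f, gamma), (fp, gammap) = x, y
    return (add(f, shift(fp, gamma)), gamma + gammap)

x = ({0: (1,)}, 1)   # lamplighter-style generator: mark position 0, then shift
y = ({0: (1,)}, 0)
print(mul(x, y))     # ({0: (1,), 1: (1,)}, 1)
```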
The following theorem uses the allosteric criterion described in Theorem 3.2 to prove that wreath products with an abundance of \(p\)-finite quotients are allosteric.
**Theorem 4.1**.: _Let \(\Lambda\) be a countable group. Assume that there exist infinitely many prime numbers \(p\) such that \(\Lambda\) is a residually \(p\)-finite group. Then for all \(d\geq 1\), the wreath product \(\mathbb{Z}^{d}\wr\Lambda\) is allosteric._
Proof.: Let \(\Gamma\coloneqq\mathbb{Z}^{d}\wr\Lambda\). Fix \((p_{\gamma})_{\gamma\in\Gamma\setminus\{1\}}\) a sequence of pairwise distinct prime numbers such that for every nontrivial element \(\gamma=(g,\delta)\) in \(\Gamma\), the group \(\Lambda\) is a residually \(p_{\gamma}\)-finite group and \(g(\operatorname{supp}(g))\cap(p_{\gamma}\mathbb{Z})^{d}=\varnothing\). Fix also a sequence \((\varepsilon_{\gamma})_{\gamma\in\Gamma\setminus\{1\}}\) of real numbers in \(]0,1[\) such that \(\prod_{\gamma\in\Gamma\setminus\{1\}}(1-\varepsilon_{\gamma})>0\).
In the remainder of the proof, we fix a nontrivial element \(\gamma\coloneqq(g,\delta)\) in \(\Gamma\) and we will construct a finite index subgroup \(\Gamma_{\gamma}\) satisfying the assumptions of the allosteric criterion (Theorem 3.2). Fix \(l\in\mathbb{N}\) such that \(l>|\operatorname{supp}(g)|\). Let us fix a finite index normal subgroup \(\Lambda_{\gamma}\unlhd\Lambda\) by considering the following two cases.
* If \(\delta=1_{\Lambda}\), then fix \(\Lambda_{\gamma}\unlhd\Lambda\) of finite index such that:
* \([\Lambda:\Lambda_{\gamma}]\) is a power of \(p_{\gamma}\) that satisfies \(l<\varepsilon_{\gamma}[\Lambda:\Lambda_{\gamma}]\),
* for all distinct \(\lambda,\lambda^{\prime}\in\operatorname{supp}(g)\), \(\lambda^{-1}\lambda^{\prime}\notin\Lambda_{\gamma}\).
* If \(\delta\neq 1_{\Lambda}\), then fix \(\Lambda_{\gamma}\unlhd\Lambda\) of finite index such that:
* \([\Lambda:\Lambda_{\gamma}]\) is a power of \(p_{\gamma}\) that satisfies \(l<\varepsilon_{\gamma}[\Lambda:\Lambda_{\gamma}]\),
* for all distinct \(\lambda,\lambda^{\prime}\in\operatorname{supp}(g)\), \(\lambda^{-1}\lambda^{\prime}\notin\Lambda_{\gamma}\),
* \(\delta\notin\Lambda_{\gamma}\).
Such a subgroup \(\Lambda_{\gamma}\) exists because \(\Lambda\) is a residually \(p_{\gamma}\)-finite group. Observe that the second condition implies that any two distinct elements in \(\operatorname{supp}(g)\) belong to distinct left cosets of \(\Lambda_{\gamma}\). Fix a subset \(E\subseteq\Lambda/\Lambda_{\gamma}\) of cardinality \(l\) such that all the left cosets of \(\Lambda_{\gamma}\) that intersect \(\operatorname{supp}(g)\) belong to \(E\). This is possible since \(l>|\operatorname{supp}(g)|\). Let \(A_{\gamma}\) be the subgroup of \(\bigoplus_{\lambda\in\Lambda}\mathbb{Z}^{d}\) given by
\[A_{\gamma}\coloneqq\Big{\{}f\in\bigoplus_{\lambda\in\Lambda}\mathbb{Z}^{d} \colon\sum_{\lambda\in q}f(\lambda)\in(p_{\gamma}\mathbb{Z})^{d}\text{ for all }q\in E\Big{\}}.\]
**Claim 1**.: \(A_{\gamma}\) is \(\Lambda_{\gamma}\)-invariant.
Proof of the claim.: Let \(f\in A_{\gamma}\) and \(\lambda\in\Lambda_{\gamma}\). Since \(\Lambda_{\gamma}\) is normal in \(\Lambda\), then for all \(q\in E\), we have \(\lambda^{-1}q=q\). Therefore,
\[\sum_{\lambda^{\prime}\in q}f(\lambda^{-1}\lambda^{\prime})=\sum_{\lambda^{ \prime}\in q}f(\lambda^{\prime})\in(p_{\gamma}\mathbb{Z})^{d}.\]
This shows that \(A_{\gamma}\) is \(\Lambda_{\gamma}\)-invariant.
Let us define \(\Gamma_{\gamma}\coloneqq A_{\gamma}\rtimes\Lambda_{\gamma}\leq\Gamma\), which makes sense since \(A_{\gamma}\) is \(\Lambda_{\gamma}\)-invariant by Claim 1. This is a finite index subgroup of \(\Gamma\).
**Claim 2**.: \([\Gamma:\Gamma_{\gamma}]=[\Lambda:\Lambda_{\gamma}][\bigoplus_{\Lambda}\mathbb{Z}^{ d}:A_{\gamma}]=[\Lambda:\Lambda_{\gamma}](p_{\gamma})^{ld}\), which is a power of \(p_{\gamma}\).
Proof of the claim.: A straightforward computation shows that the map
\[\Gamma/(A_{\gamma}\rtimes\Lambda_{\gamma})\to\big{(}\bigoplus_{\Lambda}\mathbb{Z}^{d}\big{)}/A_{\gamma}\times\Lambda/\Lambda_{\gamma}\]
given by \((f,\lambda)(A_{\gamma}\rtimes\Lambda_{\gamma})\mapsto(f+A_{\gamma},\lambda\Lambda_{\gamma})\) is a well-defined bijection, which proves the claim since \([\bigoplus_{\Lambda}\mathbb{Z}^{d}:A_{\gamma}]=(p_{\gamma})^{ld}\). \(\square_{\mathrm{claim}}\)
**Claim 3**.: The element \(\gamma=(g,\delta)\) doesn't belong to \(\Gamma_{\gamma}\).
Proof of the claim.: If \(\delta\neq 1_{\Lambda}\), then \(\delta\notin\Lambda_{\gamma}\) by construction. Therefore \(\gamma\notin\Gamma_{\gamma}\). If \(\delta=1_{\Lambda}\), then \(\mathrm{supp}(g)\) is not empty since \((g,\delta)\neq 1_{\Gamma}\). Let \(\lambda\in\mathrm{supp}(g)\) and let \(q\in\Lambda/\Lambda_{\gamma}\) be such that \(\lambda\in q\). By construction of \(E\), we have \(q\in E\). Since any two distinct elements in \(\mathrm{supp}(g)\) belong to different left cosets of \(\Lambda_{\gamma}\), we get that
\[\sum_{\lambda^{\prime}\in q}g(\lambda^{\prime})=g(\lambda)\]
which does not belong to \((p_{\gamma}\mathbb{Z})^{d}\) because \(g(\mathrm{supp}(g))\cap(p_{\gamma}\mathbb{Z})^{d}=\varnothing\). Therefore \(\gamma\notin\Gamma_{\gamma}\). \(\square_{\mathrm{claim}}\)
Let \(S\) be the subgroup of \(\mathbb{Z}^{d}\wr\Lambda\) defined by
\[S\coloneqq\big{\{}(f,1_{\Lambda})\in\mathbb{Z}^{d}\wr\Lambda\colon\mathrm{ supp}(f)\subseteq\{1_{\Lambda}\}\big{\}}.\]
**Claim 4**.: \(|\{q\in\Gamma/\Gamma_{\gamma}\colon\forall s\in S,sq=q\}|\geq(1-\varepsilon_ {\gamma})[\Gamma:\Gamma_{\gamma}]\)_._
Proof of the claim.: A coset \(q\in\Gamma/\Gamma_{\gamma}\) is fixed by every element of \(S\) if and only if it is fixed by the elements \(s_{1},\ldots,s_{d}\in S\) where \(s_{i}\) is defined by \(s_{i}\coloneqq(t_{i},1_{\Lambda})\) with \(t_{i}(1_{\Lambda})\) being the \(i\)-th element of the canonical basis of \(\mathbb{Z}^{d}\). So without loss of generality, it is sufficient to prove that the number of \(q\in\Gamma/\Gamma_{\gamma}\) that are fixed by \(s_{1}\) is \(\geq(1-\varepsilon_{\gamma})[\Gamma:\Gamma_{\gamma}]\). Let \(q\in\Gamma/\Gamma_{\gamma}\). Write \(q=(f,\lambda)\Gamma_{\gamma}\) with \((f,\lambda)\in\Gamma\). Then
\[s_{1}q=q \Leftrightarrow(f,\lambda)^{-1}(t_{1},1_{\Lambda})(f,\lambda) \in\Gamma_{\gamma}\] \[\Leftrightarrow(t_{1}(\lambda\cdot),1_{\Lambda})\in\Gamma_{\gamma}\] \[\Leftrightarrow\forall q^{\prime}\in E,\sum_{\lambda^{\prime}\in q ^{\prime}}t_{1}(\lambda\lambda^{\prime})\in(p_{\gamma}\mathbb{Z})^{d}\]
This happens if and only if all these sums are \(0\), which exactly means that for all \(q^{\prime}\in E\), \(\lambda^{-1}\notin q^{\prime}\), i.e. \(\lambda^{-1}\Lambda_{\gamma}\notin E\). There are exactly \([\Lambda:\Lambda_{\gamma}]-l\) such left cosets of \(\Lambda_{\gamma}\) and therefore there are exactly \(([\Lambda:\Lambda_{\gamma}]-l)[\bigoplus_{\Lambda}\mathbb{Z}^{d}:A_{\gamma}]\) left cosets \(q\in\Gamma/\Gamma_{\gamma}\) such that \(s_{1}q=q\). We therefore obtain
\[\frac{|\{q\in\Gamma/\Gamma_{\gamma}\colon s_{1}q=q\}|}{[\Gamma: \Gamma_{\gamma}]} =\frac{([\Lambda:\Lambda_{\gamma}]-l)[\bigoplus_{\Lambda}\mathbb{Z}^ {d}:A_{\gamma}]}{[\Lambda:\Lambda_{\gamma}][\bigoplus_{\Lambda}\mathbb{Z}^{d}:A _{\gamma}]}\] \[>1-\varepsilon_{\gamma},\]
which finishes the proof of the claim. \(\square_{\mathrm{claim}}\)
The sequence of finite-index subgroups \((\Gamma_{\gamma})_{\gamma\in\Gamma\backslash\{1\}}\) satisfies all the assumptions of Theorem 3.2, therefore \(\Gamma=\mathbb{Z}^{d}\wr\Lambda\) is allosteric.
## 5 Proof of the main theorem
We are now ready to prove Theorem 1.1.
Proof of Theorem 1.1.: Let \(\Lambda\) be a finitely generated torsion-free nilpotent group and let \(d\in\mathbb{N}^{*}\). Let \(\Gamma\coloneqq\mathbb{Z}^{d}\wr\Lambda\). By a result of Gruenberg [10], \(\Lambda\) is residually \(p\)-finite for every prime \(p\). By Theorem 1.4, \(\Gamma\) admits profinite allosteric actions. These actions are therefore topologically free, minimal actions on the Cantor space. Since they are not essentially free, they cannot be almost finite by Lemma 2.2. This proves the theorem.
|
2302.06058 | Bi-directional Masks for Efficient N:M Sparse Training | We focus on addressing the dense backward propagation issue for training
efficiency of N:M fine-grained sparsity that preserves at most N out of M
consecutive weights and achieves practical speedups supported by the N:M sparse
tensor core. Therefore, we present a novel method of Bi-directional Masks
(Bi-Mask) with its two central innovations in: 1) Separate sparse masks in the
two directions of forward and backward propagation to obtain training
acceleration. It disentangles the forward and backward weight sparsity and
overcomes the very dense gradient computation. 2) An efficient weight row
permutation method to maintain performance. It picks up the permutation
candidate with the most eligible N:M weight blocks in the backward to minimize
the gradient gap between traditional uni-directional masks and our
bi-directional masks. Compared with existing uni-directional scenario that
applies a transposable mask and enables backward acceleration, our Bi-Mask is
experimentally demonstrated to be more superior in performance. Also, our
Bi-Mask performs on par with or even better than methods that fail to achieve
backward acceleration. Project of this paper is available at
\url{https://github.com/zyxxmu/Bi-Mask}. | Yuxin Zhang, Yiting Luo, Mingbao Lin, Yunshan Zhong, Jingjing Xie, Fei Chao, Rongrong Ji | 2023-02-13T02:32:02Z | http://arxiv.org/abs/2302.06058v1 | # Bi-directional Masks for Efficient N:M Sparse Training
###### Abstract
We focus on addressing the dense backward propagation issue for training efficiency of N:M fine-grained sparsity that preserves at most N out of M consecutive weights and achieves practical speedups supported by the N:M sparse tensor core. Therefore, we present a novel method of Bi-directional Masks (Bi-Mask) with its two central innovations in: 1) Separate sparse masks in the two directions of forward and backward propagation to obtain training acceleration. It disentangles the forward and backward weight sparsity and overcomes the very dense gradient computation. 2) An efficient weight row permutation method to maintain performance. It picks up the permutation candidate with the most eligible N:M weight blocks in the backward to minimize the gradient gap between traditional uni-directional masks and our bi-directional masks. Compared with existing uni-directional scenario that applies a transposable mask and enables backward acceleration, our Bi-Mask is experimentally demonstrated to be more superior in performance. Also, our Bi-Mask performs on par with or even better than methods that fail to achieve backward acceleration. Project of this paper is available at [https://github.com/zyxxmu/Bi-Mask](https://github.com/zyxxmu/Bi-Mask).
## 1 Introduction
The past decade has witnessed thriving deep neural networks (DNNs) in various machine learning applications (He et al., 2016, 2017; Girshick et al., 2014). In large part, the prosperity is driven by increasing parameters and computations, which however, make DNN models too cumbersome to be deployed on resource-constrained edge devices such as cell phones and Internet-of-Things (IoT) devices. Therefore, the research community is sorely in need of technical renovation to compress the DNNs (Hubara et al., 2016; Tan and Le, 2019; Lin et al., 2020).
By removing redundant network weights (LeCun et al., 1989; Han et al., 2015; He et al., 2017), network sparsity has emerged as a piece of modern equipment to obtain a lightweight sparse model. Through removing individual weights at arbitrary positions, fine-grained sparsity is demonstrated to reach a high sparse ratio with performance guarantee
Figure 1: Comparison between vanilla N:M mask and transposable N:M mask (2:4 case). The vanilla N:M mask (Zhou et al., 2021; Nvidia, 2020) generates sparse weights with N:M property in rows, leading to forward acceleration but remaining dense backward propagation as the weight transposition operation impairs N:M blocks. The transposable N:M mask (Hubara et al., 2021) generates sparse weights that have N:M property in both rows and columns, leading to forward & backward acceleration. Both methods consider only one sparse mask.
(Han et al., 2015; Evci et al., 2020). Unfortunately, the resulting unstructured sparse weights hardly produce acceleration on off-the-shelf hardware. Coarse-grained sparsity is more hardware friendly as it typically removes an entire weight block (Ji et al., 2018; Meng et al., 2020) or convolution filter (Liu et al., 2019; Lin et al., 2020). In comparison with fine-grained sparsity, the compressed model gains noticeable speedup, yet suffers more performance degradation. Therefore, it is a challenging yet valuable issue to simultaneously retain the performance of DNN models and achieve hardware acceleration.
Luckily, recent N:M fine-grained sparsity has provided a promising solution. By requiring at most N non-zero elements out of every M contiguous weights, N:M sparsity includes the performance advantage of fine-grained sparsity as well as practical acceleration thanks to the hardware innovation of N:M sparse tensor core (Ronny Krashinsky, 2020; Fang et al., 2022). Nvidia (Nvidia, 2020) has presented the ASP (APEX's Automatic Sparsity) paradigm that achieves 2:4 sparsity within three steps, unfolded as training a dense network, applying 2:4 fine-grained sparsity using magnitude-based pruning (Han et al., 2015), and re-training the sparse network. Despite the satisfying performance, ASP exhibits drawbacks in its tedious training cost as it contains dense network training and N:M sparse retraining. This largely prohibits the application of N:M sparsity when confronted with scarce training resources.
The above issue has been partially addressed by directly training an N:M sparse network from scratch (Zhou et al., 2021). Yet, the sparse tensor core is only utilized to accelerate the forward multiplication during training. As illustrated in Fig. 1a, the weight transposition operation in the backward impairs N:M blocks and thus fails to support acceleration in gradient calculation. To mitigate this, (Hubara et al., 2021) proposed the N:M transposable mask, where a binary mask that indicates whether to remove weights is required to have the N:M property along both the rows and the columns. Therefore, after performing transposition, it still satisfies the N:M format as shown in Fig. 1b. Unfortunately, the transposable requirement is observed to cause more performance degradation, which is presumably caused by the reduced flexibility of the sparsity pattern (Hubara et al., 2021). In Sec. 3.2, we further show severe performance degradation at higher sparse levels such as 1:8 and 1:16. We therefore reflect on this: how can we address the efficiency of N:M sparse training without a compromise on performance?
In this paper, we attempt to answer the above question by introducing a novel method of Bi-directional Masks (Bi-Mask) that performs surprisingly well without any N:M transposable constraint. Fig. 2 illustrates framework of our Bi-Mask. In particular, along the forward and backward directions, two separate binary masks are constructed according to the weight magnitude (Han et al., 2015). As a contrast, we require the forward mask to follow N:M property in its rows while in columns for the backward mask. By this way, we concurrently enable forward & backward acceleration from the N:M sparse tensor core. Also, the bi-directional masks benefit performance from more flexible sparsity pattern. Nevertheless, they also bring about deficiency of gradient gap since the backward mask modifies the gradient of forward loss. Given this issue, an efficient row permutation is further introduced before enforcing the backward mask. In detail, we first change row order of weight matrix and then pick up the permutation with the most eligible N:M weight blocks from a dozen of candidates. By changing column order of output gradient accordingly, we succeed in retaining the same outputs between with/without
Figure 2: Framework of the proposed Bi-direction Masks (Bi-Mask). It separately builds two N:M sparse masks in the forward and backward direction, thus enabling training acceleration in both directions. During backward propagation, Bi-Mask performs an efficient row permutation to make the sparse weights have more eligible N:M weight blocks before generating the backward mask.
row permutation, and at the same time well reducing the gradient gap between uni-directional/bi-directional mask(s).
Our simple design of Bi-Mask turns out to achieve remarkable results. Besides forward & backward training acceleration, Bi-Mask well improves the performance of the transposable mask (T-Mask) across different N:M patterns, benchmarks, and networks. For example, Bi-Mask achieves 71.5% Top-1 accuracy when training 1:16 sparse ResNet-50 on ImageNet, surpassing T-Mask by 5.3%. More surprisingly, our approach achieves comparable or even better results than vanilla N:M methods, where the backward propagation cannot be accelerated. For example, our Bi-Mask exceeds the Top-1 accuracy of SR-STE (Zhou et al., 2021) by 0.4% when training 2:4 sparse ResNet-50 on ImageNet. Table 1 provides an advantage comparison between different mask methods.
## 2 Related Work
### Network Sparsity
Network sparsity has been one of the most effective tools to relieve the complexity of DNNs over the past decades (LeCun et al., 1989; Han et al., 2015). Pioneering works implement network sparsity in a fine-grained granularity where weights at arbitrary positions are removed to obtain a compact network. (Han et al., 2015) presented a classic three-step paradigm including pre-training a full network, removing low-magnitude weights, and fine-tuning the sparse networks. The lottery ticker hypothesis (Frankle and Carbin, 2019) further reveals the existence of randomly-initialized sparse networks that can be trained independently to compete with the performance of the dense model. In principle, the fine-grained network sparsity can maintain the performance of dense networks at an ultra-high sparse ratio like 90.0% (Mostafa and Wang, 2019; Blalock et al., 2020). Nevertheless, it receives very limited speedups since the resulting sparse networks are in unstructured formats, which barely take advantage of general hardware platforms.
Coarse-grained sparsity targets at removing entire weight blocks (Ji et al., 2018; Meng et al., 2020) or convolution filters (Liu et al., 2019; Lin et al., 2020) to make the sparse networks compatible with off-the-shelf hardware. For instance, (Li et al., 2017) removed convolution filters with smaller \(\ell_{1}\) norm, while (Lin et al., 2020) considered the rank of feature maps as the filter importance measurement. Unfortunately, coarse-grained sparsity suffers severe performance drops at sparsity levels higher than 50% due to the flexibility constraint on network sparsity (Renda et al., 2021). Different from the existing sparsity granularity, this paper focuses on N:M fine-grained sparsity (Zhou et al., 2021; Sun et al., 2021; Pool and Yu, 2021), which preserves at most N out of M consecutive weights. In addition to performance maintenance, N:M sparsity is also able to obtain practical acceleration from the hardware innovation of N:M sparse tensor core (Nvidia, 2020; Fang et al., 2022).
### Sparse Training
Sparse training serves as an effective tool to boost the performance of network sparsity (Hoefler et al., 2021; Evci et al., 2020; Sanh et al., 2020). It dynamically prunes and revives weights of the sparse networks during training according to specific criteria. For example, RigL (Evci et al., 2020) removes smaller-magnitude weights and revives weights with larger-magnitude gradients. Besides, sparse momentum (Dettmers and Zettlemoyer, 2019) considers the magnitude of mean weight momentum as a guide to redistribute the sparse weights. In this paper, we focus on training N:M sparse networks. As a study mostly related to ours, the transposable N:M mask (Hubara et al., 2021) requires a single sparse mask with N:M blocks in both rows and columns such that the transposition in the backward also embraces hardware acceleration. In contrast, our method separately builds sparse masks in the forward and backward propagation without additional sparse constraints and gains significantly better performance under the same N:M case. Besides, (Pool and Yu, 2021) proposed to permute the input channels of pre-trained CNNs to maximally preserve the magnitude of N:M sparse networks. Very differently, our Bi-Mask permutes the row dimension of sparse weights that are trained from scratch, with a different objective of obtaining more eligible N:M weight blocks to mitigate the gradient gap in the backward sparsity.
## 3 Methodology
### Revisiting N:M Sparse Training
We first introduce some basic knowledge about the N:M fine-grained sparsity. Let \(\mathbf{W}\in\mathbb{R}^{I\times J}\) be the parameter matrix from a specific network layer. Considering the input tensor \(\mathbf{X}\), the forward propagation represented with form of matrix multiplication can be formulated as:
\[\mathbf{Y}=\mathbf{W}*\mathbf{X}, \tag{1}\]
where \(\mathbf{Y}\) is the output tensor and \(*\) is the matrix multiplication operation. N:M sparsity forces at most N out of M consecutive weights in the rows of \(\mathbf{W}\) to have non-zero
\begin{table}
\begin{tabular}{c c c c} \hline \hline Advantage & Vanilla Mask & T-Mask & Bi-Mask \\ \hline Forward Acceleration & ✓ & ✓ & ✓ \\ Backward Acceleration & ✗ & ✓ & ✓ \\ Performance Maintenance & ✓ & ✗ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Advantage comparison between the vanilla N:M mask (Mask) (Nvidia, 2020; Zhou et al., 2021), the transposable N:M mask (T-Mask) (Hubara et al., 2021) and our proposed Bi-direction Mask (Bi-Mask) for N:M sparse training.
values. The sparsity can be achieved via a binary matrix \(\mathbf{B}\in\{0,1\}^{I\times J}\) where every block of M contiguous elements contains at most N non-zero entries:
\[\left\|\mathbf{B}_{i,j:j+\text{M}}\right\|_{0}\leq\text{N}, \tag{2}\]
in which \(i=1,2,3,...,I\) and \(j=1,\text{M},2\text{M},...,J\). Then, the sparse forward propagation can be formulated as:
\[\mathbf{Y}=(\mathbf{B}\odot\mathbf{W})*\mathbf{X}, \tag{3}\]
where \(\odot\) denotes the element-wise multiplication. Since \(\mathbf{B}\odot\mathbf{W}\) meets N:M requirement, the matrix multiplication with \(\mathbf{X}\) can be efficiently implemented by the N:M sparse tensor core, as illustrated in Fig. 0(a).
N:M sparse training starts from randomly-initialized networks (Zhou et al., 2021; Zhang et al., 2022), thus avoiding heavy burden of pre-training a dense model (Nvidia, 2020). We base our study on the popular SR-STE (Zhou et al., 2021) for N:M sparse training, simply illustrated for ease of understanding in the following. During forward propagation, it adapts the binary mask \(\mathbf{B}\) at each iteration as:
\[\mathbf{B}_{i,j+m}=\left\{\begin{array}{ll}0,&\text{if }|\mathbf{W}_{i,j+m}|< \text{Top}(|\mathbf{W}_{i,j:j+\text{M}}|,\text{N}),\\ 1,&\text{otherwise},\end{array}\right. \tag{4}\]
where \(1\leq m\leq\text{M}\), \(|\cdot|\) represents the absolute function, and \(\text{Top}(|\mathbf{W}_{i,j:j+\text{M}}|,\text{N})\) returns the N-\(th\) largest value within \(|\mathbf{W}_{i,j:j+\text{M}}|\). Therefore, we obtain the forward binary mask according to the weight magnitude in each block. During backward propagation, the gradients of \(\mathbf{B}\odot\mathbf{W}\) are directly passed to \(\mathbf{W}\) according to the straight-through-estimator (STE) (Bengio et al., 2013).
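For concreteness, a minimal PyTorch sketch of the mask generation in Eq. (4) (our illustration, not the authors' released code) can read as follows, assuming the width \(J\) is divisible by M:

```python
import torch

def forward_nm_mask(weight: torch.Tensor, n: int, m: int) -> torch.Tensor:
    """Row-wise N:M mask of Eq. (4): keep the N largest-magnitude weights
    within every M consecutive entries of each row (J divisible by M)."""
    i, j = weight.shape
    blocks = weight.abs().reshape(i, j // m, m)   # group M consecutive weights
    topk = blocks.topk(n, dim=-1).indices         # per-block top-N positions
    mask = torch.zeros_like(blocks)
    mask.scatter_(-1, topk, 1.0)                  # 1 on kept positions
    return mask.reshape(i, j)

w = torch.randn(8, 16)
b = forward_nm_mask(w, n=2, m=4)
assert (b.reshape(8, 4, 4).sum(-1) == 2).all()    # every 2:4 block holds
```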
### Rethinking the Transposable N:M Mask
The above sparse mask is indeed uni-directional towards forward propagation. By forming N:M blocks in rows of the mask, Eq. (3) permits forward acceleration from the N:M sparse tensor core between the weights and inputs. Unfortunately, such a vanilla mask crashes backward acceleration due simply to the transposition operation. To explain, the gradient in the backward propagation is computed as:
\[g(\mathbf{X})=(\mathbf{B}\odot\mathbf{W})^{T}*g(\mathbf{Y}), \tag{5}\]
where \(g(\cdot)\) denotes the gradient with respect to its input. The above equation requires \((\mathbf{B}\odot\mathbf{W})^{T}\) to have N:M blocks in rows for accelerating multiplication with \(g(\mathbf{Y})\), however, it is in columns on account of the transposition operation. Thus, the backward propagation remains dense and fails to be accelerated, as illustrated in Fig. 0(a).
To address this issue, (Hubara et al., 2021) presented transposable N:M mask that is required to satisfy row-wise and column-wise N:M blocks such that the transposition also undertakes an important mission of N:M property in rows. Consequently, the binary mask \(\mathbf{B}\) is constrained as:
\[\left\|\mathbf{B}_{i,j:j+\text{M}}\right\|_{0}\leq\text{N},\quad\left\| \mathbf{B}_{k:k+\text{M},l}\right\|_{0}\leq\text{N}, \tag{6}\]
where \(i=1,2,3,...,I\), \(j=1,\text{M},2\text{M},...,J\), \(k=1,M,2M,...,I\), and \(l=1,2,3,...,J\). Besides, (Hubara et al., 2021) further introduced a \(2\)-approximation algorithm to reduce complexity of finding the transposable mask.
Here we rethink the transposable pursuit for N:M sparse training. Although it enables backward acceleration, the flexibility of sparse networks is greatly restricted, which comes at the cost of performance degradation. We first report the flexibility comparison between vanilla mask and transposable mask under different N:M cases. Fig. 3(a) measures the flexibility using mask diversity that calculates the number of all possible masks under a given N:M mask case (Hubara et al., 2021). We can see a drastic flexibility degradation, in particular in cases of a small N or M. As a consensus (Gale et al., 2019; Nvidia, 2020), more restrictions on sparse patterns incur worse performance of sparse networks. For example, unstructured sparsity (Han et al., 2015) that removes arbitrary weights generally performs much better than structured sparsity (Li et al., 2017) that removes entire filter weights. Consequently, severe performance degradation occurs in the transposable mask in comparison with the vanilla method, as we experimentally verify in Fig. 3(b), with notably poor results at 1:8 and 1:16. The uni-directional masks,
Figure 3: Comparison between vanilla mask and transposable mask including (a) flexibility measured by mask diversity (Hubara et al., 2021) and (b) performance of training sparse ResNet-50 (He et al., 2016) on ImageNet (Deng et al., 2009).
either vanilla or transposable, do not accomplish N:M backward acceleration without a compromise on performance. Therefore, in what follows, we address this issue from the perspective of bi-directional masks.
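Before turning to bi-directional masks, the flexibility gap of Fig. 3(a) can be made concrete with a toy brute-force count over a single 4\(\times\)4 tile (our illustration, following the "at most N" convention of Eq. (2); the mask diversity of (Hubara et al., 2021) is defined over whole masks):

```python
from itertools import product

def row_ok(bits, n=2, m=4):
    """Each length-m group of the sequence keeps at most n non-zeros."""
    return all(sum(bits[i:i + m]) <= n for i in range(0, len(bits), m))

vanilla = transposable = 0
for flat in product((0, 1), repeat=16):            # all 4x4 binary tiles
    rows = [flat[r * 4:(r + 1) * 4] for r in range(4)]
    if all(row_ok(r) for r in rows):
        vanilla += 1
        if all(row_ok(c) for c in zip(*rows)):     # transposable: rows AND columns
            transposable += 1

assert vanilla == 11 ** 4   # 11 admissible patterns per row of a tile
print(vanilla, transposable)  # the transposable count is strictly smaller,
                              # and the gap compounds over a full weight matrix
```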
### Bi-directional N:M Masks
In this section, we formally present our Bi-directional N:M masks (Bi-Mask). As its name suggests, our Bi-Mask disentangles the forward & backward weight sparsity by involving two different masks during N:M sparse training. Concretely speaking, in the forward direction, we count on the vanilla N:M mask \(\mathbf{B}\) from Eq. (2) that calls for N:M blocks in rows to ensure the forward acceleration and results in better flexibility than the transposable N:M mask, as we report in Fig. 3(a). Very differently, we additionally build another mask \(\bar{\mathbf{B}}\in\{0,1\}^{I\times J}\) in the backward direction with the N:M requirement on its columns as:
\[\left\|\bar{\mathbf{B}}_{k:k+\text{M},l}\right\|_{0}\leq\text{N}, \tag{7}\]
in which \(k=1,\text{M},\text{2M},...,I\), and \(l=1,2,3,...,J\). In this fashion, the backward acceleration is supported as well without a compromise on the flexibility of backward mask, and the backward gradient \(g(\mathbf{X})\) in Eq. (5) is represented by the following approximation:
\[\bar{g}(\mathbf{X})=(\bar{\mathbf{B}}\odot\mathbf{W})^{T}*g(\mathbf{Y}). \tag{8}\]
Nevertheless, the backward mask modifies the gradient of the loss defined through the forward mask \(\mathbf{B}\), which yields a gradient gap between the practical bi-directional gradient \(\bar{g}(\mathbf{X})\) and the ideal uni-directional gradient \(g(\mathbf{X})\). To mitigate this issue, we adapt the backward mask \(\bar{\mathbf{B}}\) to the magnitudes of masked weights during sparse training as follows:
\[\bar{\mathbf{B}}_{k+m,l}=\left\{\begin{array}{ll}0,&\text{if }|(\mathbf{B}\odot\mathbf{W})_{k+m,l}|<\text{Top}(|(\mathbf{B}\odot\mathbf{W})_{k:k+\text{M},l}|,\text{N}),\\ \mathbf{B}_{k+m,l},&\text{otherwise},\end{array}\right. \tag{9}\]
where \(k=1,\text{M},\text{2M},...,I\), \(l=1,2,...,J\), and \(1\leq m\leq\text{M}\). For a deeper analysis, it is easy to see that \(\mathbf{B}_{k+m,l}=0\) is a sufficient but not necessary condition for \(\bar{\mathbf{B}}_{k+m,l}=0\). That is, the event \(\mathbf{B}_{k+m,l}=0\) will produce the event \(\bar{\mathbf{B}}_{k+m,l}=0\), but is not the only way for \(\bar{\mathbf{B}}_{k+m,l}=0\) to occur.
The rationale behind Eq. (9) is two-fold: 1) It maximizes the similarity of forward and backward masks by setting \(\bar{\mathbf{B}}_{k+m,l}=\mathbf{B}_{k+m,l}\) if the magnitude of \(\mathbf{W}_{k+m,l}\) is among the top-N largest within its column block. 2) Applying our backward mask does not affect the updating of the weights with zero forward masks, since \(\mathbf{B}_{k+m,l}=0\) always results in \(\bar{\mathbf{B}}_{k+m,l}=0\). Unfortunately, \(\bar{\mathbf{B}}_{k+m,l}=0\) does not necessarily result from \(\mathbf{B}_{k+m,l}=0\), in which case gradients of some non-zero masked weights are mistakenly eliminated, incurring performance degradation.
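Analogously to the forward case, a PyTorch sketch of Eq. (9) (again our illustration, assuming the height \(I\) is divisible by M) reads:

```python
import torch

def backward_nm_mask(weight, fwd_mask, n: int, m: int):
    """Column-wise N:M mask of Eq. (9): within every M consecutive rows of a
    column, keep the N largest magnitudes of the forward-masked weights; a
    weight whose forward mask is zero is never revived."""
    masked = (fwd_mask * weight).abs()
    i, j = masked.shape
    blocks = masked.reshape(i // m, m, j)     # group M consecutive rows
    topk = blocks.topk(n, dim=1).indices      # per-column-block top-N rows
    keep = torch.zeros_like(blocks)
    keep.scatter_(1, topk, 1.0)
    return keep.reshape(i, j) * fwd_mask      # enforces Bbar <= B elementwise
```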
To decrease this possibility, we further apply a row permutation method along the row dimension of \(\mathbf{B}\odot\mathbf{W}\). Our major motivations are two-fold: 1) We can see from Eq. (9) that the resulting mask block \(\bar{\mathbf{B}}_{k:k+\text{M},l}\) would exactly match \(\mathbf{B}_{k:k+\text{M},l}\) if \((\mathbf{B}\odot\mathbf{W})_{k:k+\text{M},l}\) has N:M sparsity, in which case no gradient gap would occur. 2) Performing row permutation of \(\mathbf{B}\odot\mathbf{W}\) increases the number of eligible N:M blocks, as illustrated in Fig. 2. Importantly, it does not violate the gradient computation. Denoting \(P\in\mathbb{N}^{I}\) as a permutation of \(\{1,2,3,...,I\}\), the backward gradient \(\bar{g}(\mathbf{X})\) in Eq. (8) can be equally computed as:
\[\bar{g}(\mathbf{X})=\big{(}\bar{\mathbf{B}}\odot(\mathbf{W}_{P,\cdot})\big{)} ^{T}*\big{(}g(\mathbf{Y})_{:,P}\big{)}, \tag{10}\]
where the backward mask \(\bar{\mathbf{B}}\) is computed based on the permuted \((\mathbf{B}\odot\mathbf{W})_{P,\cdot}\) accordingly:

\[\bar{\mathbf{B}}_{k+m,l}=\left\{\begin{array}{ll}0,&\text{if }\Big{|}\big{(}(\mathbf{B}\odot\mathbf{W})_{P,\cdot}\big{)}_{k+m,l}\Big{|}<\text{Top}\Big{(}\big{|}\big{(}(\mathbf{B}\odot\mathbf{W})_{P,\cdot}\big{)}_{k:k+\text{M},l}\big{|},\text{N}\Big{)},\\ \big{(}\mathbf{B}_{P,\cdot}\big{)}_{k+m,l},&\text{otherwise},\end{array}\right. \tag{11}\]

In practice, we randomly generate a number of permutation candidates every \(\Delta T\) training iterations and pick up the one that leads to the most eligible N:M weight blocks; the negligible overhead of this permutation updating is reported in Sec. 4.3. Our algorithm presented in this paper is outlined in Alg. 1.
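The permutation updating can be sketched as follows (our reading of Alg. 1: the scoring is a paraphrase of "most eligible N:M weight blocks", and the default \(K\) follows Sec. 4.1). The final assertion also checks the equivalence underlying Eq. (10): permuting the rows of \(\mathbf{W}\) together with the matching dimension of \(g(\mathbf{Y})\) leaves the gradient unchanged.

```python
import torch

def count_eligible_blocks(masked_w, n: int, m: int) -> int:
    """Number of M-row column blocks that already satisfy N:M sparsity."""
    i, j = masked_w.shape
    nnz = (masked_w.reshape(i // m, m, j) != 0).sum(dim=1)
    return int((nnz <= n).sum())

def pick_permutation(masked_w, n: int, m: int, k: int = 100):
    """Sample k random row permutations and keep the one with the most
    eligible N:M blocks (random search; see also the discussion in Sec. 5)."""
    best_p, best_score = None, -1
    for _ in range(k):
        p = torch.randperm(masked_w.shape[0])
        score = count_eligible_blocks(masked_w[p], n, m)
        if score > best_score:
            best_p, best_score = p, score
    return best_p

# Sanity check for Eq. (10): with g(Y) stored row-aligned with W here, the
# column permutation of Eq. (10) becomes a row permutation of g(Y).
w, gy = torch.randn(8, 6), torch.randn(8, 5)
p = torch.randperm(8)
assert torch.allclose(w.t() @ gy, w[p].t() @ gy[p], atol=1e-5)
```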
## 4 Experimentation
### Settings
**Datasets and Backbones.** We conduct experiments on representative benchmarks for image classification. For small-scale dataset, we choose the CIFAR-10 dataset (Krizhevsky et al., 2009), which contains 60,000 32\(\times\)32 color images from 10 different classes, with 6,000 images for each class. For large-scale dataset, we choose the challenging ImageNet (Deng et al., 2009), which contains over 1.2 million images for training and 50,000 validation images in 1,000 categories. On CIFAR-10, we train N:M sparse ResNet-32 (He et al., 2016) and MobileNet-V2 (Sandler et al., 2018). On ImageNet, we train N:M sparse ResNet-18/50 (He et al., 2016) and DeiT-small (Touvron et al., 2021). We compare our Bi-Mask with classic N:M sparse training methods including ASP (Nvidia, 2020) and SR-STE (Zhou et al., 2021) that fail backward acceleration, and transposable N:M mask (T-Mask) (Hubara et al., 2021) that has backward acceleration. Top-1 classification accuracy is reported for comparison on both datasets.
**Implementation Details.** Our implementation of Bi-Mask is based on the PyTorch framework (Paszke et al., 2019). All experiments are conducted on the NVIDIA Tesla A100 GPUs. The training iteration interval \(\Delta T\) is set to 100 and the number of permutation candidates \(K\) is set to 100. We use the stochastic gradient descent (SGD) optimizer to perform sparse training. In the first 5 training epochs, the learning rate linearly increases from 0 to 0.1 and then is decayed using the cosine annealing (Loshchilov and Hutter, 2017). The momentum and batch size are respectively set to 0.9 and 256. On CIFAR-10, we train all networks for 300 epochs with a weight decay of 1 \(\times 10^{-3}\). On ImageNet, we follow (Zhou et al., 2021) to train ResNet-18/50 for a total of 120 epochs. For DeiT-small, we follow (Zhang et al., 2022) to train for 300 epochs in total using the timm framework (Wightman, 2019).
### Comparison on CIFAR-10
**ResNet-32.** We first apply our Bi-Mask to train ResNet-32 model. The quantitative results are reported in Table 2. We can see from the table that the proposed Bi-Mask yields significantly better performance than the transposable mask at all N:M cases, and achieves comparable performance with the vanilla N:M mask that fails to achieve backward acceleration. For example, Bi-Mask obtains 94.78% Top-1 accuracy at 2:4 sparse pattern, surpassing SR-STE and T-Mask by 0.10% and 0.26%. Therefore, these accuracy results well demonstrate the efficacy of our Bi-Mask.
**MobileNet-V2.** We further investigate the effectiveness of Bi-Mask for training N:M sparse MobileNet-V2, a prevailing network with a compact design of depth-wise separable convolution. Table 3 again suggests a significantly higher accuracy of Bi-Mask compared with T-Mask under the same backward acceleration. For instance, Bi-Mask maintains the performance of the dense model at the 1:4 pattern, while T-Mask suffers an apparent accuracy drop (94.28%, 94.43%, and 93.81% for Bi-Mask, the dense model, and T-Mask, respectively).
### Comparison on ImageNet
**ResNet-18.** Table 4 shows the performance comparison of different methods for training 2:4 sparse ResNet-18 on ImageNet. Compared with T-Mask (Hubara et al., 2021), the proposed Bi-Mask achieves 1.6% performance gains. Notably, Bi-Mask even surpasses ASP (Nvidia, 2020) by 0.9%, later of which fails backward acceleration. Therefore, the superiority of our proposed Bi-Mask on the large-scale dataset is validated.
**ResNet-50.** We further show the performance of training
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & N:M & Top-1 & Forward & Backward \\ & Pattern & Accuracy (\%) & Acceleration & Acceleration \\ \hline Baseline & - & 94.52 & ✗ & ✗ \\ SR-STE & 2:4 & 94.68 & ✓ & ✗ \\ T-Mask & 2:4 & 94.52 & ✓ & ✓ \\ \hline Bi-Mask & 2:4 & **94.78** & ✓ & ✓ \\ \hline SR-STE & 1:4 & **94.52** & ✓ & ✗ \\ T-Mask & 1:4 & 94.04 & ✓ & ✓ \\ \hline Bi-Mask & 1:4 & 94.43 & ✓ & ✓ \\ \hline SR-STE & 1:16 & **92.92** & ✓ & ✗ \\ T-Mask & 1:16 & 92.02 & ✓ & ✓ \\ \hline Bi-Mask & 1:16 & 92.77 & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between different methods for training the N:M sparse ResNet-32 on CIFAR-10.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & N:M & Top-1 & Forward & Backward \\ & Pattern & Accuracy (\%) & Acceleration & Acceleration \\ \hline Baseline & - & 94.43 & ✗ & ✗ \\ SR-STE & 2:4 & 94.26 & ✓ & ✗ \\ T-Mask & 2:4 & 94.12 & ✓ & ✓ \\ \hline Bi-Mask & 2:4 & **94.46** & ✓ & ✓ \\ \hline SR-STE & 1:4 & **94.48** & ✓ & ✗ \\ T-Mask & 1:4 & 93.81 & ✓ & ✓ \\ \hline Bi-Mask & 1:4 & 94.28 & ✓ & ✓ \\ \hline \hline SR-STE & 1:16 & **93.14** & ✓ & ✗ \\ T-Mask & 1:16 & 90.12 & ✓ & ✓ \\ \hline Bi-Mask & 1:16 & 92.48 & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between different methods for training the N:M sparse MobileNet-V2 on CIFAR-10.
N:M sparse ResNet-50 on ImageNet. As shown in Table 5, Bi-Mask beats all the competitors across all N:M cases with the same or superior acceleration results. In particular, in comparison with SR-STE that gets stuck in dense backward propagation, Bi-Mask results in backward acceleration, and meanwhile shows the best performance. For example, it surpasses SR-STE by 0.3% at 1:4. As for T-Mask that also accelerates the backward propagation, our Bi-Mask shows superior performance, in particular in cases with a smaller N value. As analyzed in Sec. 3.2, a small N or M greatly degrades the mask flexibility of T-Mask, therefore severe performance drops occur.
Next, we compare the efficiency between searching the optimal N:M transposable mask (Hubara et al., 2021) and our permutation for the backward mask. We report the runtime for searching ResNet-50 with different N:M cases on one NVIDIA Tesla A100 GPU. Table 6 suggests the superior efficiency of our Bi-Mask. For example, it takes a negligible 15.0s for our Bi-Mask to find a good permutation at 1:16. As a contrast, T-Mask requires around 278.4s under the same N:M setting. Given the efficiency and accuracy, the efficacy of Bi-Mask is evident.
**DeiT-small.** We further compare the efficacy of Bi-Mask for training the N:M sparse DeiT, an advanced vision transformer model. As manifested in Table 7, the proposed Bi-Mask consistently obtains the best Top-1 accuracy under different N:M cases, with its additional merit in backward acceleration for N:M sparse training. For instance, Bi-Mask improves the Top-1 accuracy of SR-STE by 1.7% at 2:4, and gains a 5.9% Top-1 improvement over the off-the-shelf T-Mask. The above observations well demonstrate the ability of Bi-Mask in compressing and accelerating the recent advanced vision transformer models.
### Performance Study
In this section, we conduct performance studies to explore the effectiveness of each component of Bi-Mask. All experiments are conducted on ImageNet using ResNet-18.
**Permutation Updating.** We first perform ablation studies on the permutation updating. In Fig. 4, we examine the performance of Bi-Mask under different training iteration intervals \(\Delta T\in[1,10,100,1000]\) and permutation candidate numbers \(K\in[10,100,1000]\). As can be observed, the best accuracy is obtained when the permutation updating is performed every 100 training iterations. To analyze, small intervals incur unstable sparse training, as the topology of the computing graph changes at an excessive frequency. In contrast, large intervals outdate the optimal permutation, thus also leading to worse performance. Besides, the performance becomes saturated when the candidate number
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & 2:4 & 4:8 & 1:4 & 2:8 & 1:16 \\ \hline T-Mask & 155.1 & 193.2 & 168.6 & 200.1 & 278.4 \\
**Bi-Mask** & 15.5 & 14.3 & 13.6 & 15.7 & 15.0 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Time comparison (s) between T-Mask and Bi-Mask for searching N:M masks of ResNet-50 at different patterns.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & N:M Pattern & Top-1 Accuracy (\%) & Forward Acceleration & Backward Acceleration \\ \hline Baseline & - & 70.9 & & \\ ASP & 2:4 & 69.9 & ✓ & ✗ \\ SR-STE & 2:4 & 71.2 & ✓ & ✗ \\ T-Mask & 2:4 & 69.2 & ✓ & ✓ \\
**Bi-Mask** & 2:4 & **70.8** & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison between different methods for training the N:M sparse ResNet-18 on ImageNet.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & N:M Pattern & Top-1 Accuracy (\%) & Forward Acceleration & Backward Acceleration \\ \hline Baseline & - & 77.1 & ✗ & ✗ \\ ASP & 2:4 & 76.8 & ✓ & ✗ \\ SR-STE & 2:4 & 77.0 & ✓ & ✗ \\ T-Mask & 2:4 & 76.2 & ✓ & ✓ \\
**Bi-Mask** & 2:4 & **77.4** & ✓ & ✓ \\ \hline Baseline & - & 77.1 & ✗ & ✗ \\ SR-STE & 4:8 & 77.4 & ✓ & ✗ \\ T-Mask & 4:8 & 77.1 & ✓ & ✓ \\
**Bi-Mask** & 4:8 & **77.5** & ✓ & ✓ \\ \hline Baseline & - & 77.1 & ✗ & ✗ \\ SR-STE & 1:4 & 75.3 & ✓ & ✗ \\ T-Mask & 1:4 & 73.8 & ✓ & ✓ \\
**Bi-Mask** & 1:4 & **75.6** & ✓ & ✓ \\ \hline Baseline & - & 77.1 & ✗ & ✗ \\ SR-STE & 2:8 & 76.2 & ✓ & ✗ \\ T-Mask & 2:8 & 73.6 & ✓ & ✓ \\
**Bi-Mask** & 2:8 & **76.3** & ✓ & ✓ \\ \hline Baseline & - & 77.1 & ✗ & ✗ \\ SR-STE & 1:16 & 71.5 & ✓ & ✗ \\ T-Mask & 1:16 & 66.4 & ✓ & ✓ \\
**Bi-Mask** & 1:16 & **71.5** & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison between different methods for training the N:M sparse ResNet-50 on ImageNet.
reaches 100 or more. The result shows that it is unnecessary to construct a time-consuming greedy algorithm to find out the optimal permutation.
**Binarization Criteria.** We further investigate the binarization criteria used to force the transposed weights to be N:M sparse after the permutation. Recall that we adopt the magnitudes of sparse weights in Eq. (11) for a better fit with the forward mask, so as to reduce the gradient gap. For comparison, we consider three variants including 1) preserving weights with the top-N largest magnitudes of their gradients; 2) sampling N weights from the multinomial probability distribution according to their magnitudes; 3) randomly preserving N weights. Table 8 presents the comparison results. We can observe that all three variants reduce the accuracy to some extent. Among them, the random variant incurs the largest performance drop since it severely enlarges the dissimilarity between the forward and backward masks and causes great gradient gaps. As for our weight magnitude criterion, it well complies with the forward mask, therefore a better result is observed.
**Ablation Study.** To further understand the effect of each component in our Bi-Mask, we conduct an ablation study by starting from our training baseline of the vanilla N:M mask (Zhou et al., 2021) (denoted by Baseline), and gradually including the backward mask and permutation updating. Table 9 shows that our backward mask enables backward acceleration with a Top-1 accuracy drop of 0.3%. To analyze, the small degradation mainly results from the fact that our backward mask sometimes mistakenly eliminates gradients of some non-zero masked weights, as discussed in Sec. 3.3. After further adding our proposed permutation updating, the performance of sparse ResNet-18 even increases by 0.3% over the Baseline method. This is because our permutation updating operation results in more eligible N:M blocks and reduces the possibility of incorrect gradient elimination. In conclusion, each part of our proposed Bi-Mask plays a unique role in boosting the performance of N:M sparse training.
## 5 Limitations
In this section, we discuss the limitations of Bi-Mask that remain unexplored in this paper and will be our major future focuses. First, following the compared methods, we only train N:M sparse networks on the image classification task; more effort is needed to verify the effectiveness of Bi-Mask on other tasks such as object detection. Besides, we only explore acceleration of the forward and backward propagation, while accelerating the weight update phase (Chmiel et al., 2022) remains to be explored in the near future. Lastly, our random generation of the permutation does not always guarantee that the number of eligible N:M blocks is maximized. Although this does not damage the overall performance, a better solution for locating the best permutation is worth exploring.
## 6 Conclusion
In this work, we have presented a novel Bi-direction Mask (Bi-Mask) for efficient N:M fine-grained sparse neural network training. Instead of imposing a transposable constraint on the N:M sparse mask, our Bi-Mask independently builds masks in the forward and backward directions. The mask in the backward direction is obtained through an efficient permutation of the weight rows followed by magnitude-based pruning, enabling acceleration on the N:M sparse tensor core. Extensive experiments have demonstrated the superiority of Bi-Mask over several SOTAs. In particular, Bi-Mask surpasses its competitor by a large margin under the same acceleration effects, and performs on par with or even better than off-the-shelf methods that often fail to achieve backward acceleration.
Figure 4: Performance influence of the training iteration interval \(\Delta T\) and candidate number \(K\) in our permutation updating.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Criteria & Pattern & Top-1 Accuracy(\%) \\ \hline ResNet-18 & Gradient Magnitude & 2:4 & 70.56 \\ ResNet-18 & Multinomial Sampling & 2:4 & 70.42 \\ ResNet-18 & Random & 2:4 & 67.76 \\ \hline ResNet-18 & Weight Magnitude & 2:4 & 70.73 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Performance of different binarization criteria for backward mask in Bi-Mask.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Components & Pattern & Top-1 Accuracy (\%) \\ \hline ResNet-18 & Baseline & 2:4 & 70.5 \\ ResNet-18 & + Backward Mask & 2:4 & 70.2 \\ ResNet-18 & + Permutation Updating & 2:4 & 70.8 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Ablation study for the proposed Bi-Mask.
## Acknowledgement
This work is supported by the National Science Fund for Distinguished Young (No.62025603), the National Natural Science Foundation of China (No.62025603, No. U1705262, No. 62072386, No. 62072387, No. 62072389, No. 62002305, No.61772443, No. 61802324 and No. 61702136) and Guangdong Basic and Applied Basic Research Foundation (No.2019B1515120049).
|
2308.13368 | Helioseismic determination of the solar metal mass fraction | Context. The metal mass fraction of the Sun Z is a key constraint in solar
modelling, but its value is still under debate. The standard solar chemical
composition of the late 2000s have the ratio of metals to hydrogen Z/X =
0.0181, with a small increase to 0.0187 in 2021, as inferred from 3D non-LTE
spectroscopy. However, more recent work on a horizontally and temporally
averaged <3D> model claim Z/X = 0.0225, consistent with the high values of
twenty-five years ago based on 1D LTE spectroscopy. Aims. We aim to determine a
precise and robust value of the solar metal mass fraction from helioseismic
inversions, thus providing independent constraints from spectroscopic methods.
Methods. We devise a detailed seismic reconstruction technique of the solar
envelope, combining multiple inversions and equations of state to accurately
and precisely determine the metal mass fraction value. Results. We show that a
low value of the solar metal mass fraction corresponding to Z/X = 0.0187 is
favoured by helioseismic constraints and that a higher metal mass fraction
corresponding to Z/X = 0.0225 is strongly rejected by helioseismic data.
Conclusions. We conclude that direct measurement of the metal mass fraction in
the solar envelope favours a low metallicity, in line with the 3D non-LTE
spectroscopic determination of 2021. A high metal mass fraction as measured
using a <3D> model in 2022 is disfavoured by helioseismology for all modern
equations of state used to model the solar convective envelope. | G. Buldgen, A. Noels, V. A. Baturin, A. V. Oreshina, S. V. Ayukov, R. Scuflaire, A. M. Amarsi, N. Grevesse | 2023-08-25T13:29:27Z | http://arxiv.org/abs/2308.13368v1 | # Helioseismic determination of the solar metal mass fraction
###### Abstract
Context:The metal mass fraction of the Sun \(Z\) is a key constraint in solar modelling, but its value is still under debate. The standard solar chemical composition of the late 2000s has the ratio of metals to hydrogen \(Z/X=0.0181\), with a small increase to 0.0187 in 2021, as inferred from 3D non-LTE spectroscopy. However, more recent work on a horizontally and temporally averaged \(<\)3D\(>\) model claims \(Z/X=0.0225\), consistent with the high values of twenty-five years ago based on 1D LTE spectroscopy.
Aims:We aim to determine a precise and robust value of the solar metal mass fraction from helioseismic inversions, thus providing independent constraints from spectroscopic methods.
Methods:We devise a detailed seismic reconstruction technique of the solar envelope, combining multiple inversions and equations of state to accurately and precisely determine the metal mass fraction value.
Results:We show that a low value of the solar metal mass fraction corresponding to \(Z/X=0.0187\) is favoured by helioseismic constraints and that a higher metal mass fraction corresponding to \(Z/X=0.0225\) is strongly rejected by helioseismic data.
Conclusions:We conclude that direct measurement of the metal mass fraction in the solar envelope favours a low metallicity, in line with the 3D non-LTE spectroscopic determination of 2021. A high metal mass fraction as measured using a \(<\)3D\(>\) model in 2022 is disfavoured by helioseismology for all modern equations of state used to model the solar convective envelope.
## 1 Introduction
The precise value of the solar metallicity has been an issue in solar modelling since the reappraisal of the carbon, nitrogen and oxygen abundances in the 2000s (Allende Prieto et al., 2001, 2002; Asplund et al., 2004, 2006; Melendez & Asplund, 2008). The downward revision of these and other elements reflects an improved understanding of the solar spectrum, thanks to the development of three-dimensional (3D) radiative-hydrodynamic simulations of the solar photosphere, as well as more complex model atoms for taking into account departures from local thermodynamic equilibrium (LTE). These abundance revisions were summarised in Asplund et al. (2009, hereafter AGSS09). Further improvements were made over the years (Scott et al., 2015, 2015, 2015, 2018, 2021), culminating in a new solar abundance table presented in Asplund et al. (2021, hereafter AAG21), resulting from the best 3D non-LTE analyses available for a very large number of elements. The authors find a metal mass fraction of \(Z=0.0139\), or \(Z/X=0.0187\) relative to hydrogen.
These abundance revisions have led to strong disagreements with classical helioseismic constraints such as the position of the base of the convective envelope, the helium abundance in the convective envelope and sound speed inversions, but also with neutrino fluxes (see Bahcall & Serenelli, 2005; Basu & Antia, 2008; Serenelli et al., 2009; Buldgen et al., 2019; Christensen-Dalsgaard, 2021, and refs therein).
In an attempt to resolve these disagreements, several other spectroscopic analyses of the solar abundances have been presented in the literature over the years. In particular, a series of papers summarised in Caffau et al. (2011) determined a higher metallicity value than in AAG21, corresponding to \(Z/X=0.0209\). Many of these differences were later attributed to the measurement of equivalent widths (e.g. Amarsi et al., 2019, 2020), rather than differences in the 3D models. More details, also of other works, can be found in AAG21.
More recently, Magg et al. (2022) carried out an analysis of the solar flux spectrum using a horizontally and temporally averaged 3D model (hereafter \(<\)3D\(>\)). They determined a high solar metallicity corresponding to \(Z/X=0.0225\). This is consistent with the 1D LTE value of Grevesse & Sauval (1998), \(Z/X=0.0231\). The authors claimed to have solved the so-called "solar abundance problem" by bringing back a high metallicity value.
There are several reasons to be sceptical of this claim. On the spectroscopic side, the analysis of Magg et al. (2022) is based on a \(<\)3D\(>\) model. The authors did not validate their model against any solar constraints; previous attempts with other \(<\)3D\(>\) models illustrate that they are vastly inferior to full 3D models (Uitenbroek & Criscuoli, 2011; Pereira et al., 2013). Moreover, their results for their 18 neutral iron lines show a large range and standard deviation of 0.62 dex and 0.13 dex (compared to 0.10 dex and 0.03 dex, respectively, in AAG21), suggesting serious deficiencies in their analysis. It should also be noted that their oxygen abundance of 8.77 dex, based on the blended 630nm line
and the 777nm triplet, which shows strong departures from LTE (Amarsi et al., 2016, 2018), is in disagreement with the values inferred from molecular OH lines (Amarsi et al., 2021). A more recent determination of the solar oxygen abundance by the same group (Pietrow et al., 2023), using the centre-to-limb spectra of one OI line, puts the value at 8.73 dex, in closer agreement with AAG21 as well as with other previous studies (Pereira et al., 2009; Amarsi et al., 2018).
On the solar modelling side, although the agreement with classical helioseismic constraints is improved with a high metallicity value, this is only the case when using classical standard solar models with outdated descriptions of the radiative opacities and macroscopic transport (see Buldgen et al., 2023, for a discussion). Numerous studies (see Christensen-Dalsgaard, 2021, and references therein for a discussion) have pointed out that a degeneracy exists between abundances and opacities. As such, the agreement with most helioseismic constraints can just as well be restored by modifications to radiative opacities. Furthermore, taking into account macroscopic transport of chemicals due to the combined effects of rotation and magnetic instabilities could restore the agreement with the seismic helium value in the convective zone (Eggenberger et al., 2022). Indeed, none of the classical helioseismic constraints provide a direct measurement of the solar metallicity, but rather indirect hints that solar calibrated models using a high metal mass fraction provide a better agreement (see e.g. Buldgen et al., 2023).
To resolve the debate, it may be necessary to measure the solar metallicity in a way that is independent of spectroscopy. This is linked to investigations of the properties of the solar convective envelope and was attempted as early as 2000 (Baturin et al., 2000; Takata and Shibahashi, 2001), by using the properties of the first adiabatic exponent, \(\Gamma_{1}=\left.\frac{\partial\ln P}{\partial\ln\rho}\right|_{S}\), or the adimensional sound speed gradient, \(\frac{r^{2}}{GM_{\odot}}\frac{dc^{2}}{dr}\). Further studies were carried out in the early 2000s (Lin and Dappen, 2005; Antia and Basu, 2006; Lin et al., 2007), with varying results. These measurements are sensitive to the equation of state of the solar plasma, and the most recent studies using modern equations of state have favoured a low metallicity value (Vorontsov et al., 2013, 2014; Buldgen et al., 2017). However, only low precision has been achieved so far, with inferred intervals spanning \([0.008,0.013]\). An independent approach based on seismic calibration of standard solar models inferred a value of \(Z=0.0142\pm 0.0005\) (Houdek and Gough, 2011), in good agreement with Asplund et al. (2021).
In this study, we improve, both in accuracy and precision, the methods used in Buldgen et al. (2017) to infer the chemical composition of the envelope. Our method is based on a combination of seismic reconstruction techniques from Buldgen et al. (2020) and recomputation of the thermodynamic conditions in the solar envelope using both FreeEOS and SAHA-S equations of state. From this detailed analysis, combined with linear inversions of the first adiabatic exponent, we are able to infer precisely the favoured metallicity value in the solar convective envelope. Our results are robust with respect to modifications of the equation of state (EOS) and also indicate that the SAHA-S EOS is favoured over FreeEOS.
## 2 Solar models
### Evolutionary and seismic models
We start from a set of reference models computed with the Liège Stellar Evolution Code (Scuflaire et al., 2008) using the physical ingredients listed in Table 1. The main properties of the models are summarized in Table 2. We also add the properties of Model MB22-Phot, which is the reference SSM provided in Magg et al. (2022). Its envelope chemical composition, which is the recommended one, will be used for comparisons with the inversion results in Sect. 4.2. These evolutionary models are computed using an extended calibration procedure, similar to one model of Buldgen et al. (2023). All models in this work include macroscopic transport at the BCZ to reproduce the lithium depletion observed at the age of the Sun (Wang et al., 2021).
The extended calibration procedure uses 4 free parameters and 4 constraints: the mixing-length parameter, \(\alpha_{\text{MLT}}\), the initial chemical composition (described using the initial hydrogen abundance, \(X_{0}\), and metal mass fraction, \(Z_{0}\)) and an adiabatic overshooting parameter, \(\alpha_{\text{Ov}}\), as free parameters, and the solar radius, surface metallicity, solar luminosity and the helioseismic position of the base of the convective envelope, \((r/R)_{\text{BCZ}}\), as constraints.
This procedure allows us to obtain excellent agreement not only in the radial position of the base of the convective envelope, but also in the mass coordinate at the BCZ and in the \(m_{75}\) constraint (namely the mass coordinate at 0.75 R\({}_{\odot}\)), which Vorontsov et al. (2013) described as a key constraint to calibrate the value of the entropy in the solar convective zone.
The models computed from these evolutionary sequences serve as starting points for the construction of seismic models following Buldgen et al. (2020). These models show a much better agreement in the solar radiative envelope, intrinsically limiting the uncertainties coming from cross-term contributions in the last step of the envelope reconstruction procedure. Moreover, Buldgen et al. (2020) showed that their reconstruction approach significantly improves the agreement of the entropy proxy plateau and of the density profile in the convective envelope with helioseismic constraints, with the \(m_{75}\) parameter lying after reconstruction within \(0.9822\pm 0.0002\), the interval from Vorontsov et al. (2013) (a constraint that is essentially the density profile from a physical point of view, showing that our models would also be of high quality according to their criteria). Therefore, these models provide a robust starting point for detailed investigations of the properties of the solar envelope. The total set also provides adequate conditions for robustness tests on artificial data. More precisely, we used various equations of state to carry out the analysis, as this will be the main source of potential inaccuracies in the models. As we focus only on the solar envelope, the inversion is unaffected by the choice of opacity tables used in the reference models. By construction, the calibrated evolutionary models will have a different chemical composition in the convective envelope as a result of the physical ingredients (e.g. opacities or solar abundances; see e.g. Buldgen et al. 2019 for an illustration). However, through the seismic reconstruction procedure, this effect is erased (as shown in the right panel of Fig. 1), as the thermodynamical coordinates are then uniquely defined from the inversion and used as inputs for the scan in chemical composition.
As mentioned above, our main goal is to determine helioseismic constraints on the solar metal mass fraction and compare them to spectroscopic abundance tables. Therefore, we will compare our results to existing tables using acronyms
defined as follows: GN93 is Grevesse & Noels (1993), GS98 is Grevesse & Sauval (1998), AGSS09 is Asplund et al. (2009), AAG21 is Asplund et al. (2021), and MB22 is Magg et al. (2022). In Table 2, Models M1 and M2 are built using the recent abundances of Magg et al. (2022), whereas Models A1, A2 and A3 are built using the Asplund et al. (2021) abundances.
We use two solar frequency datasets to assess the impact of the data on our results. The primary dataset (Dataset 1), used in Sects. 4.2 and 4.3, is the one used in Buldgen et al. (2020), which is a combination of the "optimal" dataset of Basu et al. (2009) with the updated BiSON frequencies of Davies et al. (2014). The secondary dataset (Dataset 2) is a combination of BiSON data from Davies et al. (2014) and Basu et al. (2009) at low degree, but all modes with \(\ell>3\) are taken from a 360-day asymmetric fitting of MDI data by Larson & Schou (2015). Inversions are computed using the SOLA inversion technique (Pijpers & Thompson 1994), following the prescriptions of Rabello-Soares et al. (1999) for the trade-off parameter calibration, using the InversionKit software in an adapted configuration.
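For readers unfamiliar with SOLA, the quantity minimized over the inversion coefficients \(c_{i}\) has, schematically, the standard form of Pijpers & Thompson (1994); we write it here from the general method, not from this paper's specific implementation:

\[\int_{0}^{R}\left[\mathcal{K}_{\rm avg}(r)-T(r)\right]^{2}{\rm d}r+\beta\int_{0}^{R}\mathcal{C}(r)^{2}\,{\rm d}r+\mu\sum_{i}\left(c_{i}\sigma_{i}\right)^{2},\]

with \(\mathcal{K}_{\rm avg}(r)=\sum_{i}c_{i}K_{i}(r)\) the averaging kernel built from the mode kernels \(K_{i}\), \(\mathcal{C}(r)=\sum_{i}c_{i}K_{i}^{\rm cross}(r)\) the cross-term kernel, \(T(r)\) a target function peaked at the inversion radius, \(\sigma_{i}\) the observational uncertainties, and \(\beta\) and \(\mu\) the trade-off parameters calibrated following Rabello-Soares et al. (1999).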
### Envelope properties
First, we need to investigate the details of the \(\Gamma_{1}\) profile, the entropy proxy profile and the chemical composition in the envelope of the models. We start by plotting in Fig. 1 the \(\Gamma_{1}\) profiles of the evolutionary models (left panel) and their respective entropy proxy profiles (right panel).
As shown in Fig. 1, the differences in the lower parts of the convective envelope are minute and strongly influenced by the EOS. However, a small trend with metallicity still seems to exist: the higher the metallicity in the envelope, the lower the \(\Gamma_{1}\) value, as a result of the stronger ionization signature of the heavy elements at higher metal mass fraction. While of the order of \(10^{-4}\), these differences may remain significant if the solar data are of high enough quality. These differences can be understood from the point of view of linear perturbations, as it can be shown that relative differences in \(\Gamma_{1}\) may be written, assuming a given equation of state, as
\[\frac{\delta\Gamma_{1}}{\Gamma_{1}}=\left(\frac{\partial\ln\Gamma_{1}}{\partial\ln P}\right)_{\rho,Y,Z}\frac{\delta P}{P}+\left(\frac{\partial\ln\Gamma_{1}}{\partial\ln\rho}\right)_{P,Y,Z}\frac{\delta\rho}{\rho}+\left(\frac{\partial\ln\Gamma_{1}}{\partial Y}\right)_{P,\rho,Z}\delta Y+\left(\frac{\partial\ln\Gamma_{1}}{\partial Z}\right)_{P,\rho,Y}\delta Z, \tag{1}\]
where we have separated the relative differences in \(\Gamma_{1}\) into partial derivatives with respect to the various thermodynamical coordinates of the \(\Gamma_{1}\) function. As shown in Buldgen et al. (2017), the term linked with the Z derivative has a broad maximum around 0.75R\({}_{\odot}\), which is consistent with the analysis of Vorontsov et al. (2013, 2014). In practice, the inversion procedure relies on these signatures to determine the solar metallicity from a helioseismic point of view. As already mentioned in previous studies and as can be seen from Fig 1, the inversion suffers from a dependency on the EOS. Therefore the metal mass fraction inversion, just like the helium mass fraction inversion, will be affected by the choice of the reference EOS.
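As an illustration of how the individual terms of Eq. 1 can be evaluated in practice, the short sketch below estimates the state derivatives of \(\Gamma_{1}\) by centred finite differences. Here `gamma1_eos` stands for a hypothetical wrapper around an EOS call (e.g. FreeEOS or SAHA-S); it is not part of either package's actual interface.

```python
import numpy as np

def dlng1_dz(gamma1_eos, P, rho, Y, Z, h=1e-4):
    # Centred finite difference of ln(Gamma_1) with respect to Z at fixed
    # (P, rho, Y): the last term of Eq. (1).
    return (np.log(gamma1_eos(P, rho, Y, Z + h))
            - np.log(gamma1_eos(P, rho, Y, Z - h))) / (2.0 * h)

def dlng1_dlnp(gamma1_eos, P, rho, Y, Z, eps=1e-4):
    # Logarithmic derivative with respect to P at fixed (rho, Y, Z).
    return (np.log(gamma1_eos(P * (1 + eps), rho, Y, Z))
            - np.log(gamma1_eos(P * (1 - eps), rho, Y, Z))) / (2.0 * eps)
```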
In addition to EOS dependencies, it is worth noting that the other terms in Eq. 1 vary depending on the solar model considered. For example, the density and entropy proxy profiles in the envelope of solar evolutionary models strongly vary with the physical ingredients used in the calibration procedure. This is illustrated in Fig 1 for high- and low-metallicity solar models including macroscopic transport of chemicals and reproducing the lithium depletion observed in the Sun. These aspects must therefore be taken into account by determining the actual density and pressure values in the solar convective zone from helioseismic data. Moreover, by using various models with transport prescriptions, we also use different helium mass fractions in the convective envelope in the inversion, thereby providing a robust analysis regarding the third term in Eq. 1.
## 3 Inversions of first adiabatic exponent
As shown in Fig 1, the higher layers of the convective envelope, around 0.95R\({}_{\odot}\), are strongly impacted by the EOS and helium ionization. There, disentangling the effects of the metal mass fraction in the solar envelope is extremely difficult. Therefore, we chose to cut the domain into two zones: above 0.91R\({}_{\odot}\), the \(\Gamma_{1}\) profile will be used to constrain the helium mass fraction, \(Y\), whereas the lower part of the domain, below 0.91R\({}_{\odot}\), should not
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
**Name** & **Abundances** & **EOS** & **Opacity tables** \\ \hline Model M1 & MB22 & FreeEOS & OP \\ Model M2 & MB22 & SAHA-S v7 & OP \\ Model A1 & AAG21 & FreeEOS & OPAL \\ Model A2 & AAG21 & SAHA-S v7 & OPAL \\ Model A3 & AAG21 & SAHA-S v3 & OPAL \\ \hline \end{tabular}
\end{table}
Table 1: Physical ingredients of the solar models
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline
**Name** & \((r/R)_{\rm BCZ}\) & \((m/M)_{\rm CZ}\) & m75 & \(X_{\rm CZ}\) & \(Y_{\rm CZ}\) & \(Z_{\rm CZ}\) & A (_Li_) (_dex_) \\ \hline Model M1 & 0.7133 & 0.9759 & 0.9827 & 0.7319 & 0.2516 & 0.0165 & 0.915 \\ Model M2 & 0.7133 & 0.9762 & 0.9829 & 0.7306 & 0.2530 & 0.0164 & 0.897 \\ Model A1 & 0.7133 & 0.9766 & 0.9832 & 0.7399 & 0.2463 & 0.0138 & 0.904 \\ Model A2 & 0.7133 & 0.9768 & 0.9834 & 0.7387 & 0.2476 & 0.0137 & 0.904 \\ Model A3 & 0.7133 & 0.9769 & 0.9834 & 0.7386 & 0.2477 & 0.0137 & 0.912 \\ \hline MB22-Phot & 0.7123 & / & / & 0.7394 & 0.2439 & 0.0166 & / \\ \hline \end{tabular}
\end{table}
Table 2: Envelope properties of the evolutionary solar models
bear any strong signal of helium ionization and should be more efficient in isolating the effects of the metallicity. We also see that the slopes of the \(\Gamma_{1}\) profiles of models computed with FreeEOS and of those computed with SAHA-S are at odds around 0.87\(R_{\odot}\). This plays an important role in the analysis of the solar data, as the final accepted Z value might be affected by the choice of the EOS. In what follows we devise a robust approach to determine Z while minimizing such effects.
By looking at \(\Gamma_{1}\) inversions in the lower parts of the convective zone, we are trying to determine very small corrections, of the order of a few \(10^{-4}\). They will be influenced by multiple effects: trade-off parameters of the inversions, surface effects, cross-term contributions. Some can be tested, but will depend on the dataset used in the inversion procedure. Therefore, calibrating the overall procedure with artificial data is as important as carrying out the actual inversion using the observations. Moreover, a measure of the quality of the inversion can be made by verifying that the overall reconstruction of the seismic solar profile is consistent for various reference models. Should large model-dependencies remain, the results would not necessarily be robust and the trade-off parameters could be suboptimal.
### The inversion strategy for metallicity determination
The determination of the metallicity is carried out using the dependencies of \(\Gamma_{1}\) on the chemical composition of the envelope. The first adiabatic exponent is a function of four thermodynamic coordinates, namely
\[\Gamma_{1}=\Gamma_{1}(\rho,P,Y,Z). \tag{2}\]
Physically, the most interesting dependencies in the context of this study are those linked with the chemical composition. These physically result from the ionization of helium and of the heavy elements at different temperatures in the solar envelope (see Baturin et al. 2022 for a discussion and illustrations). As shown in the left panel of Fig. 1 and explicitly visible in Eq. 1, a first issue with helioseismic determinations of the solar metallicity is the dependency of the method on the EOS. The only solution to this problem is to test the method for the various equations of state available to solar modellers. In our case, this is done by using FreeEOS (Irwin, 2012) and two versions of the SAHA-S EOS (Gryaznov et al., 2004; Gryaznov et al., 2006, 2013; Baturin et al., 2013) (see also the SAHA-S website1) in the modelling. The configuration used for FreeEOS is the recommended form reproducing the behaviour of the OPAL EOS (Rogers & Nayfonov, 2002) and adopted in the latest generations of Standard Solar Models (Vinyoles et al., 2017; Magg et al., 2022).
Footnote 1: [http://crydee.sai.msu.ru/SAHA-S_EOS](http://crydee.sai.msu.ru/SAHA-S_EOS)
The other uncertainties regarding the thermodynamic properties of the convective envelope are taken into account by using inversions for the density, the entropy proxy and the squared isothermal sound speed, \(u=P/\rho\). These serve to determine the solar conditions as accurately as possible and act as sanity checks for the accuracy and precision of the overall method. When working on actual solar data, to ensure maximal accuracy, we use the seismic models as the reference models for the final entropy, isothermal sound speed and density inversions. The whole procedure for the metallicity inversion is summarized in Fig. 2. The first step is the extended solar calibration (blue box), which provides initial conditions for the second step (orange box): the seismic reconstruction procedure of Buldgen et al. (2020), which damps the cross-term from the Ledoux discriminant in the variational equation and leads to an accurate depiction of the solar
Figure 1: Left panel: \(\Gamma_{1}\) profile between the BCZ and 0.95R\({}_{\odot}\) for the solar models in Table 1. Right panel: Entropy proxy profile, \(P/\rho^{\Gamma_{1}}\), as a function of normalized radius for the solar models of Table 1. The reference evolutionary models (“Ref”) show a significant spread in entropy proxy plateau in the CZ that is corrected in the seismic models (“Seismic”) by the seismic reconstruction procedure.
thermodynamical conditions. The third step consists of inversions of the density (using the (\(\rho\), \(\Gamma_{1}\)) kernels), the squared isothermal sound speed \(u=P/\rho\) (using the (\(u\), \(\Gamma_{1}\)) kernels) and \(\Gamma_{1}\) (using the (A, \(\Gamma_{1}\)) kernels) to determine the final thermodynamical coordinates (red box), which are then provided to the EOS routine.
This leads to the fourth step of the inference procedure (green box), during which a scan in X and Z is carried out to determine the optimal composition of the solar envelope, using the seismic thermodynamical coordinates determined at the previous step. In this study, we considered X values within \([0.70,\ 0.764]\) with a step of 0.001 and Z values within \([0.01,\ 0.0176]\) with a step of 0.0004. Therefore, a grid of 64 by 20 values is considered for X and Z when reconstructing the \(\Gamma_{1}\) profile. The normalized \(\chi^{2}\) is computed using the \(\Gamma_{1}\) values from the last inversion, with N the number of points, M the number of free parameters (here 2, namely X and Z), and \(\sigma\) the 1\(\sigma\) uncertainty on \(\Gamma_{1}\) from the inversion. The domain of the solar convective envelope is separated into two zones. The first one, above \(T\approx 9\times 10^{5}\) K, is dedicated to the determination of the metallicity. The second one, below \(T\approx 5\times 10^{5}\) K, is dedicated to the determination of the hydrogen abundance. Due to the much higher abundance of helium in the solar envelope, the effects of the ionization of helium and hydrogen largely dominate the properties of \(\Gamma_{1}\) at lower temperatures, but not in the lower part of the CZ (see Baturin et al. 2022, for an illustration and a discussion).
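Although the text does not write out the normalized \(\chi^{2}\) explicitly, a form consistent with the quantities listed above (N points, M = 2 free parameters, 1\(\sigma\) uncertainties \(\sigma_{i}\)) would be

\[\chi^{2}=\frac{1}{N-M}\sum_{i=1}^{N}\left(\frac{\Gamma_{1,i}^{\rm inv}-\Gamma_{1,i}^{\rm EOS}(X,Z)}{\sigma_{i}}\right)^{2},\]

where \(\Gamma_{1,i}^{\rm inv}\) are the inverted values and \(\Gamma_{1,i}^{\rm EOS}(X,Z)\) are recomputed from the EOS at the seismically determined \((P,\rho)\) coordinates for each trial composition.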
## 4 Convective envelope composition
### Artificial data
A first test to determine the robustness of the method is to carry out a full analysis on artificial data. This is done by considering one of the seismic models as a reference model in the procedure while using an evolutionary model as the target. We consider three cases.
* **HH1:** M1 plays the role of the target and A2 is the reference, thus both effects of abundances and EOS are considered.
* **HH2:** A1 is the target and A3 is the reference. In this case no metallicity correction should be found and the corrections are purely EOS effects.
* **HH3:** M2 is the target and A2 is the reference, only chemical composition effects are considered as the EOS is the same for both models.
The dataset considered for the tests is exactly the same as the actual solar one, with the same uncertainties on the individual modes. Therefore, the propagation of uncertainties, the calibration of the trade-off parameters, and the effects of trade-off and surface corrections should be as similar as possible to the procedure used for actual solar data. We voluntarily chose high-Z models as targets to simulate the effect of a high-Z Sun, following the work of Magg et al. (2022), to see whether this could be recovered from the data.
Before entering into the details of the analysis of Fig. 3, we provide some additional details on the technicalities of the inversion. The full domain of the inversion spans from \(\approx 0.72R_{\odot}\) up to \(\approx 0.985R_{\odot}\). Below the lower limit, the inversion might be significantly affected by the effects of the boundary of the convective zone; \(0.72R_{\odot}\) can actually already be considered quite low, which explains why a high trade-off parameter for the cross-term integral has been considered. Above \(0.985R_{\odot}\), the inversion starts to be affected by the outer boundary conditions: the averaging kernels, despite showing good localization, sometimes show sharp deviations. Moreover, the inversion is limited by the availability of the other coordinates (namely \(u=\frac{P}{\rho}\) and \(\rho\)), which are more difficult to localize at such high radii with the considered dataset. Robustness with respect to surface effects and to the mass conservation constraint in the model might also play a role, and we chose to be conservative.
The overall domain is then subdivided into two subdomains. The first one, at lower radii and higher temperatures, spans from \(\approx 0.72R_{\odot}\) up to \(\approx 0.85R_{\odot}\) and counts about 17 points with \(\Gamma_{1}\) inversion values. Due to the higher temperatures, all of H and He are ionized, and while both elements contribute a large fraction of the electrons in these regions, the dips in \(\Gamma_{1}\) are mostly affected by the partial ionization of metals. The effect is illustrated in the upper panel of Fig. 5 of Baturin et al. (2022), where one can see that the region of interest is mostly influenced by oxygen and slightly influenced by carbon. Therefore, the Z inversion performed here mostly validates the abundance of oxygen, although the details of this should be confirmed by an analysis using the approach of Baturin et al. (2022). Due to the differences in \(\Gamma_{1}\) between FreeEOS and SAHA-S between \(\approx 0.85R_{\odot}\) and \(\approx 0.91R_{\odot}\), we chose first to neglect this region in the inversion (except in Sect. 4.3). While this might appear to be a strong hypothesis, it is merely a choice of using 17 constraints in a region where all equations of state agree to determine one free parameter of the thermodynamical properties of the envelope, namely Z. The underlying hypothesis of the method is that if FreeEOS and SAHA-S provide the same \(\Gamma_{1}\) values for the same coordinates, then the physics of the EOS must be robust2.
Footnote 2: We mention that this actually does not influence the conclusions of the study on solar data; rather, it makes the procedure more difficult for FreeEOS in the actual solar data analysis, but it does not affect the conclusion that if the Sun were high-Z, we should pick up the signal in the \(\Gamma_{1}\) profile.
In this work, we focus on a global determination of Z. If the hypothesis that Z largely dominates the properties of \(\Gamma_{1}\) in the first subdomain is valid, a scan in X and Z as discussed above should provide an almost horizontal valley of optimal Z values for various values of X in an X-Z \(\chi^{2}\) map. This will be verified below.
The second subdomain is considered above \(\approx 0.91R_{\odot}\) up to \(\approx 0.985R_{\odot}\). It is used to constrain the X value in the convective envelope. This is similar to what has been done in the past literature (e.g. Vorontsov et al. 1991; Basu & Antia 1995; Richard et al. 1998). The choice of going for X instead of Y does not affect the conclusions, as the couple X and Z determines a Y value, and in these low-temperature regions the effect of the metals is almost insignificant with respect to the uncertainties due to the EOS, surface effects and inaccuracies in the thermodynamical coordinates. Again, this hypothesis can be checked when drawing the \(\chi^{2}\) map, as the optimal solution should appear as a vertical valley in an X-Z plane. This will again be verified below.
The results of the \(\Gamma_{1}\) inversion are illustrated in Fig. 3. Comparing the full lines and the inversion results, we can see that the inversion reproduces the actual behaviour of the profile quite nicely. This means that the trade-off parameters have been well adjusted. Looking at all cases, we see that in the left panel the reconstructions have been able to capture most of the features in the fitted areas. In each case where a high-Z model
was a target (green and red symbols), the inversion managed to pick it up quite nicely, while the discrepancies between FreeEOS and SAHA-S between \(\approx 0.85R_{\odot}\) and \(\approx 0.91R_{\odot}\) are clearly seen for HH1. Similarly, when both models had low Z values but still exhibited significant differences due to their differing equations of state, the inversion managed to recover the proper range of metallicity. The same can be seen in the right panel for the helium ionization zones.
The \(\chi^{2}\) maps are illustrated in Fig 4, with the left panel of each figure showing the results for the high-T subdomain used to determine Z and the right panel showing the results for the low-T subdomain used to determine X. We can see that the inversion provides quite accurate estimates of both X and Z in all cases. While the Z valley is not perfectly horizontal, a clear favoured region can be outlined, and using the information on X from the right panel to select a favoured Z in the left panel helps further refine the information on Z. Here, we chose to limit the valley in X to the regions where the \(\chi^{2}\) was below \(1.5\times\chi^{2}_{\text{Min}}\), with \(\chi^{2}_{\text{Min}}\) being the lowest \(\chi^{2}\) found for all values of X and Z used in the scan at low temperatures. This interval is then used to constrain the Z range in the valley, where the criterion is either to have \(\chi^{2}<1\) whenever it is reached in an extended range, or \(\chi^{2}<1.5\times\chi^{2}_{\text{Min}}\) when the former criterion is not satisfied. In all cases this leads to a determined metallicity in agreement with the target value. We can see that this approach makes the technique less dependent on the details of the equation of state at high temperatures. In practice, the change in reduced \(\chi^{2}\) value induced by using the information on X from the lower temperatures (using the value of X with the lowest \(\chi^{2}\) to constrain Z) is minimal and below 1, meaning that the fit remains consistent. In all cases this approach provides X and Z values in good agreement with the actual values of the models. For HH1, the X interval found is [0.724, 0.733] and the Z interval is [0.0160, 0.0170], with the solution being X = 0.732, Z = 0.01647. For HH2, the X interval found is [0.734, 0.738] and the Z interval is [0.0136, 0.0142], with the solution being X = 0.7386, Z = 0.01374. Finally, for HH3, the X interval found is [0.727, 0.729] and the Z interval is [0.0163, 0.0169], with the solution being X = 0.73056, Z = 0.01644. While slight biases can be observed for X, a clear result is that a low-Z Sun cannot be mistaken for a high-Z Sun using current modern equations of state, and the provided interval predicts the correct value. Actual reduced \(\chi^{2}\) differences between high-Z and low-Z models range between a factor of 5 and 8 for the scan in metallicity. The high \(\chi^{2}\) values for the X inversion are found in the cases for which the equation of state differs between the target and the reference models. In these cases, the \(\chi^{2}\) values always remain at about 1000, or a few hundred at best. This is due to large discrepancies between the equations of state, amplified by the very small uncertainties of the SOLA inversion in regions where the method might actually be less robust (for the reasons mentioned above). The color scale in Fig 4 is as follows: for the high-T subdomain used to determine Z, white corresponds to \(\chi^{2}<1\), successive shades of blue to \(\chi^{2}<3\), \(\chi^{2}<5\), \(\chi^{2}<6\), and yellow corresponds to \(\chi^{2}>15\). For the low-T subdomain used to determine X, white corresponds to \(\chi^{2}<1.5\times\chi^{2}_{\text{Min}}\), successive shades of blue to \(\chi^{2}<5\times\chi^{2}_{\text{Min}}\), \(\chi^{2}<6\times\chi^{2}_{\text{Min}}\), \(\chi^{2}<15\times\chi^{2}_{\text{Min}}\), and yellow corresponds to \(\chi^{2}>25\times\chi^{2}_{\text{Min}}\), with \(\chi^{2}_{\text{Min}}=2249\) for HH1, 419 for HH2 and 22 for HH3.
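The interval-selection rules just described can be written compactly. The sketch below is one possible NumPy reading of them, where "reached in an extended range" is interpreted, as an assumption, as \(\chi^{2}<1\) at more than one grid point.

```python
import numpy as np

def xz_intervals(chi2_low, chi2_high, x_grid, z_grid):
    # chi2_low / chi2_high: maps of shape (len(x_grid), len(z_grid)) for
    # the low-T (X-sensitive) and high-T (Z-sensitive) subdomains.
    x_sel = (chi2_low < 1.5 * chi2_low.min()).any(axis=1)   # X valley
    x_int = (x_grid[x_sel].min(), x_grid[x_sel].max())
    sub = chi2_high[x_sel]          # restrict the Z scan to selected X
    thr = 1.0 if (sub < 1.0).sum() > 1 else 1.5 * sub.min()
    z_sel = (sub < thr).any(axis=0)
    z_int = (z_grid[z_sel].min(), z_grid[z_sel].max())
    return x_int, z_int
```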
Figure 2: Details of the fitting approach used to determine the chemical composition of the solar envelope. The four-step process is tailored to eliminate cross-term contributions at each step and maximize the accuracy, while the last step computes a reduced \(\chi^{2}\) value for each subdomain (see text).
Figure 4: \(\chi^{2}\) map of the X and Z scan for the artificial data (HH exercises). The green and red crosses indicate the target and reference model, respectively. The upper panels are for HH1, middle panels for HH2 and lower panels for HH3. The left panels are associated with the high-T subdomain for the Z determination, while the right panels are associated with the low-T subdomain used to determine X.
Figure 3: Inversions of the \(\Gamma_{1}\) profile as a function of normalized radius for the artificial data (HH cases). Each panel illustrates a subdomain of the method: left panel for the Z determination, indicated by the orange vertical lines, right panel for the X determination.
### Solar data
We start by presenting in Fig. 5 the \(\Gamma_{1}\) inversion results for each of the 5 seismic models presented in Sect. 2. It appears that the inversions, plotted using symbols of various colors, are consistent with each other for all models. This gives confidence that the \(\Gamma_{1}\) profile has been accurately determined in a model-independent way.
A quick look at the left panel of Fig. 5 already shows that the Magg et al. (2022) models, denoted by the blue dashed lines, are at odds with the data in the lower part of the convective zone, while this is not the case, or at least less so, for the AAG21 models shown in red. A second observation, on the slope of \(\Gamma_{1}\) between \(0.85R_{\odot}\) and \(0.91R_{\odot}\), indicates that the solar data seem to strongly favour the SAHA-S equation of state. However, a physical explanation for these differences remains to be provided, as they could have multiple origins (Coulomb corrections, abundances of specific individual elements (Trampedach et al., 2006)). This observation confirms that the metallicity can be inferred most robustly between \(0.72\) and \(0.85R_{\odot}\). As mentioned in the previous section, we thus have 17 \(\Gamma_{1}\) inversion values for the first subdomain. Tests with the full profile between \(0.72\) and \(0.91R_{\odot}\) are also performed in Sect. 4.3.
In the left panel of Fig. 5, the best-fit profiles using FreeEOS between \(0.72\) and \(0.85R_{\odot}\) are provided by the orange curves, while those using the SAHA-S EOS are provided by the dark blue curves. These models also reproduce the solar data very well up to \(0.91R_{\odot}\), even though these additional points are not yet included in the fit. For each EOS, the curves obtained through the reconstruction are essentially the same, meaning that the procedure is independent of the reference model and that only the assumed EOS might affect the final result. This effect is, however, mitigated by limiting the first subdomain to between \(0.72\) and \(0.85R_{\odot}\).
In the right panel of Fig. 5, we take a look at the second ionization zone of helium. In this case the situation is reversed: the Magg et al. (2022) models, with their higher helium abundance in the solar convective zone, are in much better agreement than the AAG21 models. This is confirmed by the \(\Gamma_{1}\) reconstruction, which favours very high helium abundances in the solar convective envelope. Again, all inversion points tend to agree with each other, providing confidence in the robustness of the approach. However, the points above \(0.98R_{\odot}\) might be influenced by boundary effects, surface effects or slight inaccuracies in the determination of the \(\rho\) and \(u\) coordinates, as OLA inversions tend to be less accurate at the borders of the domain (Backus and Gilbert, 1968; Pijpers and Thompson, 1994).
Nevertheless, the results can be used to compute a \(\chi^{2}\) map for the two subdomains: between \(0.72\) and \(0.85R_{\odot}\) to constrain Z, and between \(0.91\) and \(0.985R_{\odot}\) to constrain X. Combining the optimal X and Z found in both subdomains, we get an accurate estimate of both parameters, as demonstrated on artificial data in the previous section.
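A bare-bones version of this per-subdomain scan could look as follows; `gamma1_eos(P, rho, X, Z)` is again a hypothetical EOS wrapper (with \(Y=1-X-Z\) implied), and the grids reproduce the steps quoted in Sect. 3.1.

```python
import numpy as np

def chi2_map(g1_inv, sigma, P, rho, gamma1_eos,
             x_grid=np.arange(0.700, 0.7641, 0.001),
             z_grid=np.arange(0.0100, 0.01761, 0.0004)):
    # Reduced chi^2 over the (X, Z) grid for one subdomain, using the
    # seismically determined (P, rho) coordinates at the inversion radii.
    n, m = len(g1_inv), 2            # N points, M = 2 free parameters
    chi2 = np.empty((len(x_grid), len(z_grid)))
    for i, x in enumerate(x_grid):
        for j, z in enumerate(z_grid):
            model = gamma1_eos(P, rho, x, z)
            chi2[i, j] = np.sum(((g1_inv - model) / sigma) ** 2) / (n - m)
    return chi2
```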
The results are illustrated in Figs. 6 and 7, where the \(\chi^{2}\) maps show that the assumption of separating the domain works quite well. Indeed, the optimal solution in the right panels is almost vertical, indicating that there is little dependence on the metallicity and that, as expected, X dominates the solution (due to its direct impact on Y and thus on the reproduction of the properties of the helium ionization regions). Similarly, at high temperatures, a clear region in Z is outlined as the optimal solution. It is also clear that the higher-temperature layers bear little to no information on X, as the material is fully ionized there; thus no clear trace is left in the \(\Gamma_{1}\) profile from which to directly infer X. Therefore, as already seen in the HH exercises, one can use the information on X from lower temperatures to constrain the optimal interval for Z in the observed \(\chi^{2}\) valley in the left panels of Figs. 6 and 7. This provides, independently of the equation of state used (but assuming that either FreeEOS or SAHA-S is equally good at representing the plasma in the solar envelope), a final X interval of [\(0.715\), \(0.730\)] and a final Z interval of [\(0.0132,~{}0.0148\)] for the first dataset studied here.
The color scale in Fig 6, using SAHA-S (v3 or v7) in the analysis, is the following: for the high-T subdomain: white corresponds to \(\chi^{2}<1\), successive shades of blue to \(\chi^{2}<3\), \(\chi^{2}<5\), \(\chi^{2}<6\) and yellow corresponds to \(\chi^{2}>15\); for the low-T subdomain used to determine X: white corresponds to \(\chi^{2}<1.5\times\chi^{2}_{\rm Min}\), successive shades of blue to \(\chi^{2}<4\times\chi^{2}_{\rm Min}\), \(\chi^{2}<6\times\chi^{2}_{\rm Min}\), \(\chi^{2}<15\times\chi^{2}_{\rm Min}\) and yellow corresponds to \(\chi^{2}>25\times\chi^{2}_{\rm Min}\), \(\chi^{2}_{\rm Min}=3115\) for model A2, 3229 for model A3 and 2855 for model M2.
The color scale in Fig 7, where FreeEOS is used in the analysis, is the following: for the high-T subdomain, white corresponds to \(\chi^{2}<1.5\times\chi^{2}_{\rm Min}\), successive shades of blue to \(\chi^{2}<2\times\chi^{2}_{\rm Min}\), \(\chi^{2}<4\times\chi^{2}_{\rm Min}\), \(\chi^{2}<6\times\chi^{2}_{\rm Min}\), and yellow corresponds to \(\chi^{2}>8\times\chi^{2}_{\rm Min}\), with \(\chi^{2}_{\rm Min}=1.8\) starting from model A1 and 3.14 starting from model M1; for the low-T subdomain used to determine X, white corresponds to \(\chi^{2}<1.5\times\chi^{2}_{\rm Min}\), successive shades of blue to \(\chi^{2}<4\times\chi^{2}_{\rm Min}\), \(\chi^{2}<6\times\chi^{2}_{\rm Min}\), \(\chi^{2}<15\times\chi^{2}_{\rm Min}\), and yellow corresponds to \(\chi^{2}>25\times\chi^{2}_{\rm Min}\), with \(\chi^{2}_{\rm Min}=2676\) starting from model A1 and 1236 starting from model M1. The reason why the \(\chi^{2}\) values always remain above 1 in the high-T domain is the use of the FreeEOS equation of state: SAHA-S tends to provide overall better fits to solar data than FreeEOS at higher temperatures. On the contrary, FreeEOS tends to provide a better fit for the low-T domain overall, although none of the \(\chi^{2}\) values match as well as in HH3, suggesting inaccuracies in the EOS in these low-T regimes and perhaps systematics in the inversion.
### Assuming the equation of state known
As mentioned above, the SAHA-S EOS provides a much better agreement with solar data than FreeEOS. Therefore, it is worth investigating what conclusions we can draw using the full set of \(\Gamma_{1}\) points and assuming SAHA-S as the equation of state describing the solar material. We check whether, in these conditions, one may infer individual element abundances, as carried out in Baturin et al. (2022). To do so, we attempt to reconstruct \(\Gamma_{1}\) using either the AGSS09 or the MB22 abundances, with X and Z as free parameters, starting from both model M1 and A2.
The results for the \(\Gamma_{1}\) profiles are illustrated in Fig 8. We can see that a small distinction can be made between the AGSS09 and MB22 individual ratios of elements, but not at a high level of significance. However, a low Z value is still strongly favoured in the \(\chi^{2}\) map illustrated in Fig 9. In both cases, a Z value in line with AAG21 is favoured, while the low-Z valley seems to be a bit wider if the AGSS09 individual ratios of elements are assumed. This implies that while the \(\Gamma_{1}\) profile favours a low Z value, in line with the AAG21 photospheric abundances, it might be difficult to disentangle the contributions of individual elements, although trying to pick out some dominant trends regarding oxygen would still be worthwhile.
The color scale in Fig 9, using SAHA-S in the analysis, is the following: white corresponds to \(\chi^{2}<1.5\times\chi^{2}_{\rm Min}\), successive shades of blue to \(\chi^{2}<2\times\chi^{2}_{\rm Min}\), \(\chi^{2}<3\times\chi^{2}_{\rm Min}\), \(\chi^{2}<4\times\chi^{2}_{\rm Min}\) and yellow corresponds to \(\chi^{2}>8\times\chi^{2}_{\rm Min}\), with \(\chi^{2}_{\rm Min}=0.86\) starting from model A2 and the AAG21 abundances (upper right panel), \(\chi^{2}_{\rm Min}=4.9\) for model A2 with either MB22 (upper left panel) and \(\chi^{2}_{\rm Min}=4.6\) and \(4.2\) starting from model M1 with the MB22 (lower right panel) and AAG21 abundances (lower left panel), respectively.
### Impact of the dataset
A last test has been performed using more recent MDI data from Larson & Schou (2015) to determine whether the trends picked up in the previous sections are spurious or not. For this test, the whole calibration procedure has been carried out again from the start. The results of the \(\Gamma_{1}\) inversions for models M1 and A2 are illustrated in Fig 10. Again, a clear rejection of the high-Z solution of Magg et al. (2022) can be observed in the left panel of Fig 10.
In this case the reconstruction procedure struggled a bit more to reproduce the \(\Gamma_{1}\) values in the lower part of the convective envelope. The SAHA-S EOS is still favoured between \(0.85R_{\odot}\) and \(0.91R_{\odot}\), but even the SAHA-S models struggle to reconstruct the profile. Nevertheless, as shown by the blue and red curves, low-Z models (in red) are strongly favoured over high-Z models (in blue) in the lower part of the CZ (left panel), while a high Y value is still favoured (right panel). These effects are confirmed for both test cases, using Model M1 and Model A2.
The \(\chi^{2}\) maps illustrated in Fig. 11 further confirm these trends, with low Z values strongly favoured over high Z values. However, in this case a slightly lower value of Z seems to be favoured. This might be similar to the trend seen in Vorontsov et al. (2013, 2014). We can, however, see that this time the valley in X is almost perfectly vertical, with a slightly higher Y interval than in the previous sections. The overall fit in Z is somewhat more difficult, given the strong deviations of the first few points in the deep convective zone (around \(0.75R_{\odot}\)). In this case again, a factor between 6 and 10 is found between the optimal solution at low Z and the MB22 Standard Solar Model (SSM). While the AAG21 non-standard model performs better, it is still unable to reach the low X regime favoured by the inversion.
The color scale in Fig 11, using SAHA-S in the analysis, is the following: for the high-T subdomain: white corresponds to \(\chi^{2}<1.5\times\chi^{2}_{\rm Min}\), successive shades of blue to \(\chi^{2}<2\times\chi^{2}_{\rm Min}\), \(\chi^{2}<3\times\chi^{2}_{\rm Min}\) and yellow corresponds to \(\chi^{2}<4\times\chi^{2}_{\rm Min}\), with \(\chi^{2}_{\rm Min}=5.6\) starting from model A2 using SAHA-S v7 and \(\chi^{2}_{\rm Min}=2.2\) starting from model M1 and using FreeEOS; for the low-T subdomain used to determine X: white corresponds to \(\chi^{2}<1.5\times\chi^{2}_{\rm Min}\), successive shades of blue to \(\chi^{2}<4\times\chi^{2}_{\rm Min}\), \(\chi^{2}<6\times\chi^{2}_{\rm Min}\), \(\chi^{2}<15\times\chi^{2}_{\rm Min}\) and yellow corresponds to \(\chi^{2}>25\times\chi^{2}_{\rm Min}\), with \(\chi^{2}_{\rm Min}=620\), starting from model A2 and using SAHA-S v7 and \(\chi^{2}_{\rm Min}=2219\) starting from model M1 and using FreeEOS. The trend observed here is the opposite of what was seen before, with SAHA-S being favoured at low T while FreeEOS is favoured between 0.72 and \(0.85R_{\odot}\).
### Summary
To provide a global view of the inversion results, we combine the information from the \(\chi^{2}\) maps for both datasets and all test cases. This is done in Fig. 12 and in Table 3. In this table, each line represents a full inversion procedure, from the determination of the seismic model to the reconstruction of the \(\Gamma_{1}\)
Figure 5: Inversions of the \(\Gamma_{1}\) profile as a function of normalized radius determined from actual solar data. Each panel illustrates a subdomain of the method: left panel for the \(Z\) determination, indicated by the vertical orange lines, right panel for the X determination. The orange curves are associated with reconstructions using FreeEOS, the purple curves are associated with reconstructions using SAHA-S and the blue and red curves are associated with models M1 and M2 and A1 and A2, respectively.
profile and the determination of the chemical composition. Some cases, in lines 5 and 6, are those for which an equation of state different from that of the reference model was used in the \(\Gamma_{1}\) reconstruction procedure. We chose to use the limits of the white areas in the \(\chi^{2}\) maps of the hydrogen determinations to derive confidence intervals, which are then used to extract the associated metallicity intervals. As shown in Sect. 4.1, this allowed us to accurately recover the actual values in the exercises with artificial data.
It is to be noted that the size of the rectangles depends on other parameters, such as the dataset and the equation of state, as these change the landscape of the \(\chi^{2}\). Our approach shows that the metallicity interval determined from a detailed helioseismic analysis of the properties of the solar envelope strongly favours the Asplund et al. (2021) abundances over the Magg et al. (2022) abundances.
## 5 Conclusion
The conclusion of this study is straightforward: inversions of solar data to determine the \(\Gamma_{1}\) profile in the solar convective envelope do not favour the revision of the abundances by Magg et al. (2022) back to the old Grevesse & Sauval (1998) metallicity value. As seen from Fig. 12, this independent measurement of the solar metallicity from seismic analyses of the solar envelope strongly favours AAG21, as well as a high Y value in the convective envelope. The situation for the Magg et al. (2022) abundances is actually worse, because the strongest rejection is found for a high-Z, high-Y model, which corresponds to the output of an MB22 model reproducing the solar luminosity. The situation worsens further for MB22 models including macroscopic transport to reproduce the lithium depletion at the age of the Sun (Buldgen et al., 2023).
Regarding helium, it also appears that the high Y value favoured here cannot be attained solely by including the effects of macroscopic transport in low-Z models. A revision of nuclear reaction rates or opacities at high T is required to reconcile solar models with the analysis performed here (Ayukov & Baturin, 2017).
The method implemented here has a few caveats. The inversion is quite difficult, as trade-off parameters have to be adjusted for each dataset, and artificial data play a key role in the calibration. Therefore the results cannot be obtained on a large scale with numerous datasets. From supplementary investigations, suboptimal trade-off parameters do not change the conclusions, but rather push towards lower Z values. Testing on more
Figure 6: \(\chi^{2}\) map of the X and Z scan for solar data, using Model A2 (upper panels), A3 (middle panels) and M2 (lower panels) in the procedure, therefore either SAHA-S v7 or SAHA-S v3. The orange and red crosses indicate the positions of the AAG21 model including rotation and magnetic fields and of the MB22 standard solar model, respectively (values from their paper). The left panels are associated with the high-T subdomain for the Z determination, while the right panels are associated with the low-T subdomain used to determine X.
datasets might be done incrementally in the future but seems unlikely to change the conclusions. Future revisions of the EOS, or the availability of new tables, would be interesting to test physical effects in the EOS and further confirm our results3. An additional limitation of the method is the treatment of surface effects, here dealt with using a 6\({}^{th}\)-order
Figure 8: Inversions of the \(\Gamma_{1}\) profile as a function of normalized radius determined from solar data, assuming SAHA-S v7 as the equation of state and changing the ratios of individual elements in the table (from MB22 to AAG21). Two reference models are used in the procedure, models M1 and A2; the results of the reconstructions are plotted in various colors. The red curve overplots the light blue one and the purple curve overplots the green one. Each symbol depicts the \(\Gamma_{1}\) inversion results for the associated model (M1 and A2).
Figure 7: Same as Fig. 6 for models A1 (upper panels) and M1 (lower panels), therefore using FreeEOS in the reconstruction. The left panel is associated with the high-T subdomain for the Z determination, the right panel is associated with the low-T subdomain used to determine X.
polynomial form as in Rabello-Soares et al. (1999). Experiments with both artificial data and changes to the order of the polynomial correction have been conducted to ensure robustness, but pushing towards higher degrees might require more detailed functional forms (Di Mauro et al., 2002), and the robustness and precision of the method would be further improved by an improved modelling of the surface layers (Spada et al., 2018; Jorgensen et al., 2021). Further investigations of the systematics of such inversions are therefore required to further pin down the chemical composition of the solar envelope, as well as detailed comparisons of the available equations of state of the solar material.
Nevertheless, from our detailed helioseismic analysis of the solar envelope using an advanced seismic inversion approach and
Figure 10: Inversions of the \(\Gamma_{1}\) profile as a function of normalized radius in the solar envelope, determined using the Larson & Schou (2015) dataset. Each panel illustrates a subdomain of the method: the left panel for the Z determination, indicated by the orange vertical lines, and the right panel for the X determination. The green curve and symbols are associated with Model M1 (FreeEOS), whereas the purple curve and symbols are associated with Model A2 (SAHA-S v7). The red and blue lines illustrate models A1 and A2, and M1 and M2, respectively.
Figure 9: \(\chi^{2}\) map of the X and Z scan for solar data fitting \(\Gamma_{1}\) between \(0.72R_{\odot}\) and \(0.91R_{\odot}\). The orange and red crosses indicate the positions of the AAG21 model including rotation and magnetic fields and the MB22 standard solar model (MB22-Phot in table 2), respectively (values from their paper). Upper panels use model A2, lower panels use model M1. Left panels use MB22 individual elements, right panels use AAG21 individual elements.
up-to-date equations of state of the latest generations of solar models, we conclude that the solar metallicity in the convective envelope lies in the range \(0.0120-0.0151\) and the solar hydrogen mass fraction in the envelope lies in \(0.715-0.730\), resulting in a \((Z/X)_{\odot}\) value between \(0.0168-0.0205\). Moreover, high metallicity models using the Magg et al. (2022), Grevesse & Sauval (1998) or Grevesse & Noels (1993) abundances are rejected as a steep slope in \(\chi^{2}\) values is observed in all cases, with the \(\chi^{2}\) values of high-metallicity solar envelope models being a factor 6 to 20 higher than those of low-metallicity models, independently of the EOS used and for two different helioseismic datasets. This independent measurement of the solar metallicity therefore strongly supports the AAG21 abundances (Asplund et al., 2021) over the MB22 abundances (Magg et al., 2022), in line with previous studies using modern equations of state (Vorontsov et al., 2013, 2014; Buldgen et al., 2017). Compared to these previous studies, our method provides a much more robust and precise inference, exploiting the properties of existing equations of state of the solar material.
## Acknowledgements
The authors thank the referee for their careful reading of the manuscript. G.B. is funded by the SNF AMBIZIONE grant No 185805 (Seismic inversions and modelling of transport processes in stars). A.M.A. gratefully acknowledges support from the Swedish Research Council (VR 2020-03940). We acknowledge support by the ISSI team "Probing the core of the Sun and the stars" (ID 423) led by Thierry Appourchaux. The authors thank J. Christensen-Dalsgaard and S. Vorontsov for the detailed reading of the manuscript and the numerous suggestions.
|
2303.09042 | Embedding Theory of Reservoir Computing and Reducing Reservoir Network
Using Time Delays | Reservoir computing (RC), a particular form of recurrent neural network, is
under explosive development due to its exceptional efficacy and high
performance in reconstruction or/and prediction of complex physical systems.
However, the mechanism triggering such effective applications of RC is still
unclear, awaiting deep and systematic exploration. Here, combining the delayed
embedding theory with the generalized embedding theory, we rigorously prove
that RC is essentially a high dimensional embedding of the original input
nonlinear dynamical system. Thus, using this embedding property, we unify into
a universal framework the standard RC and the time-delayed RC where we novelly
introduce time delays only into the network's output layer, and we further find
a trade-off relation between the time delays and the number of neurons in RC.
Based on this finding, we significantly reduce the network size of RC for
reconstructing and predicting some representative physical systems, and, more
surprisingly, only using a single neuron reservoir with time delays is
sometimes sufficient for achieving those tasks. | Xing-Yue Duan, Xiong Ying, Si-Yang Leng, Jürgen Kurths, Wei Lin, Huan-Fei Ma | 2023-03-16T02:25:51Z | http://arxiv.org/abs/2303.09042v2 | # Embedding Theory of Reservoir Computing and Reducing Reservoir Network Using Time Delays
###### Abstract
Reservoir computing (RC), a particular form of recurrent neural network, is under explosive development due to its exceptional efficacy and high performance in reconstruction and/or prediction of complex physical systems. However, the mechanism triggering such effective applications of RC is still unclear, awaiting deep and systematic exploration. Here, combining the delayed embedding theory with the generalized embedding theory, we rigorously prove that RC is essentially a high-dimensional embedding of the original input nonlinear dynamical system. Thus, using this embedding property, we unify into a universal framework the standard RC and the time-delayed RC, where we introduce time delays only into the network's _output layer_, and we further find a trade-off relation between the time delays and the number of neurons in RC. Based on these findings, we significantly reduce the RC's network size and enhance its memory capacity for system reconstruction and prediction. More surprisingly, only using a _single-neuron_ reservoir with time delays is sometimes sufficient for achieving reconstruction and prediction tasks, while a standard RC of any size without time delays cannot accomplish them.
The last decades have witnessed the extensive application and development of machine learning technology in data-driven research, as well as in high-technology-oriented industry. As a representative leader among many machine learning techniques, the artificial neural network (ANN) has emerged as a powerful approach that is well suited for coping with supervised learning problems. Among various architectures of ANN, Reservoir Computing (RC), which is a recently developed framework [1], a special variant of a recurrent neural network, and also known as a generalization of the echo-state network (ESN) [2] or liquid state machine (LSM) [3], has been reported to have great efficacy in reconstruction and/or prediction of many complex physical systems based only on observational time-series data [4; 5; 6; 7]. The architecture of RC is quite compact. As shown in Fig. 1(a), only three weight matrices are involved: the input matrix and the reservoir recurrent matrix are randomly generated but fixed, while the output matrix is determined via training. As such, efficient least-squares optimization methods, rather than the resource-consuming back-propagation algorithm, are adopted in the training process [8]. Behind such a compact architecture, two questions arise naturally: "What is the fundamental mechanism resulting in the efficacy of RC?" and "How can the structure be improved using the uncovered mechanism?" These questions have attracted great attention and motivated abundant discussions, ranging from the topology and the complexity of random connections [9; 10] to the spectral radius of random networks and the edge of chaos [11; 12; 13], from the fading memory property [14] to the echo state property [15; 16], and from the choice of activation functions [17] to the training algorithm of the output layer [18]. Yet, the current understanding of RC often relies on heuristic interpretation: it is widely believed that a successful RC should possess high dimensionality, nonlinearity, fading memory, and the separation property [6], but rarely with rigorous mathematical demonstration.
In order to decipher RC's capacity for reconstructing and forecasting nonlinear dynamics, several efforts from the viewpoint of dynamical systems have recently been made. For example, the regression model and the dynamical model decomposition method were used to illustrate the usefulness of RC for forecasting chaotic dynamics [19; 20], and, to demonstrate the approximation capability of RC, an embedding conjecture was studied and could be partially validated for a specific form of RC under the right technical conditions [21; 22]. In the area of photonic neural networks, an architecture of photonic reservoir computing has been developed by using a spatiotemporal analogy to translate a delayed differential equation (DDE) into a virtual single-neuron reservoir network [23; 24; 25]. Still, despite these significant efforts and achievements, some key questions remain unsolved: "How to understand the network dimension of general RC using the theories of nonlinear dynamics and functional analysis?" and "How to design a small-size network in RC while sustaining its efficacy?"
In this Letter, we rigorously study the mechanism of RC from the viewpoint of nonlinear dynamical systems and propose a novel framework of time-delayed RC. By combining the delayed embedding theory with the generalized embedding theory, we first rigorously prove that a general reservoir network is a high-dimensional embedding of the original input nonlinear dynamical system. Then, we further reveal a trade-off relation between the time delays and the number of
neurons by unifying into a universal framework the standard RC without delays and the time-delayed RC where the time delays are introduced into the network's output layer. This therefore allows us to construct a random reservoir network with a significantly reduced physical dimension that achieves the efficacy of the original larger-size RC. Surprisingly, we show that a standard reservoir of a single neuron, without introducing any DDE or time-division multiplexing technique, can sometimes work well for reconstructing and forecasting some representative physical systems. Moreover, we find a flexible memory capacity in the time-delayed RC, which makes it possible to accomplish more challenging tasks of dynamics reconstruction that cannot be easily achieved using a standard RC of the same scale.
We start with a standard RC as sketched in Fig. 1(a). Here, the input data \(\mathbf{x}_{k}\in\mathbb{R}^{n}\) represents the state vector of a dynamical system that is evolving on a compact manifold \(\mathcal{M}\) with the evolution operator \(\varphi\in\mathrm{Diff}^{2}(\mathcal{M}):\mathbf{x}_{k+1}=\varphi(\mathbf{x}_{k})\). The vector \(\mathbf{r}_{k}\in\mathbb{R}^{m}\) represents the state of \(m\) reservoir neurons at time step \(k\); the input layer weight matrix \(W_{\mathrm{in}}\) and the reservoir network matrix \(W_{\mathrm{res}}\) are, respectively, \(m\times n\) and \(m\times m\) random matrices generated according to certain distribution laws. The dynamical evolution of the reservoir neurons is governed by (RN): \(\mathbf{r}_{k}=(1-\alpha)\mathbf{r}_{k-1}+\alpha\phi(W_{\mathrm{res}}\mathbf{r}_{k-1}+W_{\mathrm{in}}\mathbf{x}_{k})\), where \(\alpha\) is the leakage factor, and \(\phi\in C^{2}(\mathbb{R},(-1,1))\) is set to be a sigmoid function (e.g., \(\tanh\)) in this Letter. The output vector \(\mathbf{y}_{k}\in\mathbb{R}^{l}\) is determined by the output weight matrix \(W_{\mathrm{out}}\in\mathbb{R}^{l\times m}\) such that \(\mathbf{y}_{k}=W_{\mathrm{out}}\mathbf{r}_{k}\). In the task of nonlinear system reconstruction, given the time series, denoted by \(\mathbf{x}_{k},k=1,\cdots,N+1\), as training data, the target is to train the output weight matrix \(W_{\mathrm{out}}\) so as to approximate the one-step dynamics prediction, i.e., \(\mathbf{y}_{k}\approx\mathbf{x}_{k+1}\). To achieve this, the output weight matrix \(W_{\mathrm{out}}\) is generally calculated by minimizing the loss function \(\mathcal{L}=\sum_{k=1}^{N}\|\mathbf{x}_{k+1}-W_{\mathrm{out}}\mathbf{r}_{k}\|^{2}+\beta\|W_{\mathrm{out}}\|^{2}\) over the training data set, where \(\beta>0\), the \(L_{2}\)-regularization coefficient, is introduced to make the optimization robust. After training, one can fix the output weight matrix \(W_{\mathrm{out}}\) and redirect the output \(\mathbf{y}_{k}=W_{\mathrm{out}}\mathbf{r}_{k}\), as an approximation of \(\mathbf{x}_{k+1}\), into the input layer of the network, and thus generate the autonomous dynamics for \(\mathbf{x}_{k}\) with \(k>N\).
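As an illustration of this training scheme, the following minimal NumPy sketch implements the update rule (RN) and the ridge-regression solution of the loss \(\mathcal{L}\); the weight scales, spectral-radius rescaling, and all variable names are our illustrative choices rather than settings prescribed by the Letter.

```python
import numpy as np

def train_standard_rc(x, m=200, alpha=1.0, beta=1e-6, rho=0.9, seed=0):
    """Minimal standard RC. x: array of shape (N+1, n) holding x_1..x_{N+1}."""
    rng = np.random.default_rng(seed)
    n = x.shape[1]
    W_in = rng.uniform(-0.1, 0.1, size=(m, n))               # fixed random input weights
    W_res = rng.uniform(-1.0, 1.0, size=(m, m))
    W_res *= rho / np.max(np.abs(np.linalg.eigvals(W_res)))  # set spectral radius
    r = np.zeros(m)
    R = np.empty((len(x) - 1, m))                            # collected states r_1..r_N
    for k in range(len(x) - 1):
        r = (1 - alpha) * r + alpha * np.tanh(W_res @ r + W_in @ x[k])
        R[k] = r
    # Ridge regression: argmin_W sum_k ||x_{k+1} - W r_k||^2 + beta ||W||^2
    W_out = np.linalg.solve(R.T @ R + beta * np.eye(m), R.T @ x[1:]).T
    return W_in, W_res, W_out, r
```

In closed-loop operation one would then feed \(\mathbf{y}_k = W_{\mathrm{out}}\mathbf{r}_k\) back as the next input to generate the autonomous dynamics.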
To rigorously establish an embedding theory for RC, we consider directly the evolution (RN) of the reservoir neurons with the leakage factor \(\alpha=1\) as:
\[\mathbf{r}_{k+1,\mathbf{x}_{0}}^{\mathbf{b}_{0}}=\phi(W_{\mathrm{res}}\mathbf{r}_{k,\mathbf{x}_{0} }^{\mathbf{b}_{0}}+W_{\mathrm{in}}\varphi^{k+1}\mathbf{x}_{0}),\ \ k=0,1,\cdots,\]
and define a map as \(\mathfrak{G}^{k}[\mathbf{r}_{0},W_{\mathrm{res}},W_{\mathrm{in}}](\mathbf{x}_{0})=\mathbf{ r}_{k,\mathbf{x}_{0}}^{\mathbf{b}_{0}}\). Here, \(\mathbf{r}_{0,\mathbf{x}_{0}}^{\mathbf{b}_{0}}=\mathbf{b}_{0}\), \(\mathbf{b}_{0}\in\mathbb{I}^{m}\), and \(\mathbb{I}=(-1,1)\). Thus, we rigorously have the following result.
**Theorem 1**: _Let \(m\geqslant 2\mathrm{dim}(\mathcal{M})+1\) and \([\mathbf{r}_{0},W_{\mathrm{res}},W_{\mathrm{in}}]\in\mathbb{I}^{m}\times\mathbb{R} ^{m\times m}\times\mathbb{R}^{m\times n}\) with \(\mathrm{dim}(\mathcal{M})\) as the box-counting dimension of the manifold \(\mathcal{M}\). Then, there exists a number \(k^{*}>0\), such that \(\mathfrak{G}^{k}[\mathbf{r}_{0},W_{\mathrm{res}},W_{\mathrm{in}}]\in C^{1}(\mathcal{ M},\mathbb{R}^{m})\) is generically an embedding for all \(k>k^{*}\)._
Here, the generic conclusion in Theorem 1 means that, for all \([\mathbf{r}_{0},W_{\mathrm{res}},W_{\mathrm{in}}]\in\mathcal{S}\) where \(\mathcal{S}\subset\mathbb{I}^{m}\times\mathbb{R}^{m\times m}\times\mathbb{R} ^{m\times n}\) is an open and dense set, \(\mathfrak{G}^{k}[\mathbf{r}_{0},W_{\mathrm{res}},W_{\mathrm{in}}]\) is an embedding for any sufficiently large \(k\). The detailed and rigorous proof with respect to the \(C^{1}\)-topology is provided in Supplemental Information (SI) [26]. Moreover, the echo state property, a necessary condition for constructing an RC, requires that, with the general configuration \(\{W_{\mathrm{in}},W_{\mathrm{res}},\phi\}\), the evolutions (RN) of the reservoir neurons, starting from any different initial values \(\mathbf{r}_{0}^{(1)}\) and \(\mathbf{r}_{0}^{(2)}\), converge to the same dynamics, i.e., \(\lim_{k\rightarrow\infty}\|\mathbf{r}_{k}^{(1)}-\mathbf{r}_{k}^{(2)}\|=0\)[15]. Hence, by virtue of Theorem 1, regardless of the choice of the initial value \(\mathbf{r}_{0}\), the dynamics of reservoir neurons is determined by the input dynamics, i.e., _there exists a unique embedding \(\Psi\) such that \(\mathbf{r}_{k}=\Psi(\mathbf{x}_{k})\) after a transient phase, while each component \(r_{ik}=\Psi_{i}(\mathbf{x}_{k})\) implies that the dynamics of each neuron is an observable of the original dynamics_.
In the standard RC investigated above, \(m\), the number of reservoir neurons, also known as the reservoir dimension, is often required to be huge [6; 8]. To design a different RC framework that significantly reduces \(m\), we introduce time delays into the output layer, as sketched in Fig. 1(c). While the whole configuration \(\{W_{\mathrm{in}},W_{\mathrm{res}},\phi\}\) and the input data \(\mathbf{x}\) are set in the same manner, the reservoir network is assumed to include only \(q\) (\(<m\)) neurons. Thus, a new
Figure 1: RCs as different nonlinear dynamical systems and embeddings. (a) A standard RC without time-delay. (b) The generalized embedding \(\Psi\) from the input dynamics to the standard non-delayed reservoir network and the delayed embedding \(F\) from the input dynamics to the delayed reservoir network, which constitute a topological conjugation between the dynamics of the non-delayed reservoir network and the delayed reservoir network. (c) A time-delayed RC with a smaller network size in the reservoir layer.
reservoir vector before the output layer is designated as \(\tilde{\mathbf{r}}_{k}=\left[r_{1,k},r_{1,k-\tau},\cdots,r_{1,k-d_{1}\tau+\tau}, \cdots,r_{q,k},\cdots,r_{q,k-d_{q}\tau+\tau}\right]^{\top},\) and, correspondingly, the output matrix \(W_{\rm out}\) is calculated by minimizing the \(L_{2}\) loss function
\[\tilde{\mathcal{L}}=\sum_{k=1}^{N}\|\mathbf{x}_{k+1}-W_{\rm out}\tilde{\mathbf{r}}_{k }\|^{2}+\beta\|W_{\rm out}\|^{2}\]
with \(W_{\rm out}\in\mathbb{R}^{l\times d}\) and \(d=\sum_{i=1}^{q}d_{i}\). Here, the new reservoir vector \(\tilde{\mathbf{r}}_{k}\) is formed by the lagged dynamics of each neuron, i.e., \(q\) neurons with each neuron \(i\) contributing \(d_{i}\) lagged dynamics \([r_{i,k},r_{i,k-\tau},\ldots,r_{i,k-d_{i}\tau+\tau}]\), where \(\tau\) is a time delay. We assign \(d\) as the output dimension of this delayed RC.
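For concreteness, a minimal sketch of assembling the delayed reservoir vectors \(\tilde{\mathbf{r}}_{k}\) from the collected states of the \(q\) neurons is given below, assuming uniform lag counts \(d_{i}=N_{\rm lag}\) for every neuron; the ordering and variable names are our own, and the output matrix is then obtained from the same ridge regression as before, with targets \(\mathbf{x}_{k+1}\) aligned to \(k\geq(N_{\rm lag}-1)\tau\).

```python
import numpy as np

def delayed_readout(R, n_lag, tau):
    """Build delayed reservoir vectors from R of shape (N, q), whose row k holds
    the states of the q neurons at step k. Row k of the output stacks, neuron by
    neuron, [r_{i,k}, r_{i,k-tau}, ..., r_{i,k-(n_lag-1)*tau}], giving a design
    matrix of shape (N - (n_lag-1)*tau, q*n_lag) for fitting W_out."""
    N, q = R.shape
    lags = np.arange(n_lag) * tau
    start = lags[-1]                     # first step with all lags available
    return np.stack([
        np.concatenate([R[k - lags, i] for i in range(q)])
        for k in range(start, N)
    ])
```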
Now, we are in a position to demonstrate that the time-delayed RC with the above-assigned \(d\) has the same representation and computation ability as the standard RC involving \(m\) neurons without time delay under the same parameter settings, as long as \(d\thickapprox m\). Actually, based on the delayed embedding theory and its applications [27; 28; 29; 30], an appropriate combination of lagged observables can also generically form an embedding, i.e., for smooth observational functions \(\Psi_{1},\cdots,\Psi_{q}\), \(F(\mathbf{x})=\left[\Psi_{1}(\mathbf{x}),\Psi_{1}(\varphi^{-1}(\mathbf{x})),\cdots,\Psi_{1}(\varphi^{-(d_{1}-1)}(\mathbf{x})),\cdots,\Psi_{q}(\mathbf{x}),\right.\)
\(\left.\Psi_{q}(\varphi^{-1}(\mathbf{x})),\cdots,\Psi_{q}(\varphi^{-(d_{q}-1)}(\mathbf{x}))\right]\) is generically an embedding as long as \(\sum_{i=1}^{q}d_{i}>2\text{dim}(\mathcal{M})\). Using the above-obtained conclusion that each neuron is generically an observable, we further conclude that the proposed new reservoir vector \(\tilde{\mathbf{r}}_{k}\) is also an embedding. Thus, the dynamics of the state vector \(\mathbf{r}_{k}\) in the \(m\)-neuron reservoir network without time delay is topologically conjugate to the dynamics of the reservoir vector \(\tilde{\mathbf{r}}_{k}\) of a \(q\)-neuron reservoir network in the sense of embedding as long as \(m=d\) with \(d=\sum_{i=1}^{q}d_{i}\), as sketched in Fig. 1(b). Consequently, we come to the conclusion that _the delayed observables of the RC state, seen as additional nonlinear observables, have the same computational power in the system reconstruction._
To demonstrate the capability of our time-delayed RC, we first consider the benchmark Lorenz system. After a training phase including \(N=6,000\) samples, the autonomously-generated dynamics by the RC are shown in Fig. 2(a). In particular, we use a standard RC, a time-delayed RC containing fewer neurons with uniformly lagged dynamics for each neuron, and a time-delayed RC containing the same number of neurons but with random lags for each neuron. Clearly, the time-delayed RC has almost the same system-reconstruction performance as the non-delayed one, no matter whether the lags are uniformly or randomly generated. Actually, this coincides with the above arguments from the embedding viewpoint: the dynamics of this non-delayed RC is a generalized embedding of the input dynamics with generically \(200\) observables, while the dynamics of the time-delayed RC forms an embedding of dimension \(200\) when the sum of lags equals \(200\), for either uniform or random lags. Such a trade-off relation is further clearly illustrated in Fig. 2(b), where different neuron numbers are combined with different lag numbers per neuron, and, for each combination, a training error is calculated as the mean squared error (MSE) on the training data set, averaged over \(20\) independent runs. As depicted in Fig. 2(b), for a fixed moderate number of neurons, the training error decreases monotonically with the lag number for each neuron, and, for a fixed moderate lag number, the training error also decreases monotonically with the neuron number. Analogous results are also obtained for the other benchmark systems, as presented in [26] (see Fig. S5). All these further reinforce the above conclusion that, whenever \(d\thickapprox m\) with moderate \(N_{\rm lag}\) and \(N_{\rm neuron}\), the time-delayed and the non-delayed RCs generally share the same ability in system representation, and the numbers of neurons and of time lags can be traded off mutually in these frameworks.
Such a trade-off relationship further puts the non-delayed and time-delayed RCs into a unified framework where the output dimension \(d\) becomes the effective reservoir dimension that ultimately determines the system-representation ability. The standard non-delayed RC is actually a degenerate form in this unified framework in which all the neurons have zero lag. More surprisingly, we find that it is even possible
Figure 2: (a) Reconstructed dynamics of the Lorenz system by a non-delayed RC including \(200\) neurons, a delayed RC #1 including \(40\) neurons with uniformly \(5\) lags for each neuron, and a delayed RC #2 including \(40\) neurons with random lags for each neuron. Here, the time unit is expressed in the Lyapunov time, and the random lags are generated by a distribution centered at \(5\) as shown in the inset. (b) System reconstruction test for the Lorenz system with different combinations of \(N_{\rm neuron}\) and \(N_{\rm lag}\), where the training MSE in a log-scale and the contour curves are, respectively, highlighted. Here, \(\tau=5\) and the sampling stepsize is \(\Delta t=0.01\). All the other parameter settings are introduced in [26].
to reduce the number of neurons to one and realize a single-neuron reservoir in the proposed framework. To see this, we consider a gene regulation model with multiple delays, \(\dot{x}(t)=-kx(t)+gf_{1}(x(t-\tau_{1}))f_{2}(x(t-\tau_{2}))\), which describes self-inhibition and self-activation with distinct delays \(\tau_{1}\) and \(\tau_{2}\); with specific parameters, this one-dimensional model exhibits chaotic dynamics [26, 31, 32]. Specifically, a time-delayed RC including only one neuron with \(600\) lags is used to reconstruct the dynamics, and the autonomously generated dynamics after training are shown in Fig. 3(a). The results confirm that the single-neuron, time-delayed RC performs well, achieving the same reconstruction ability as the time-delayed RC with multiple neurons. Admittedly, the single-neuron RC in this numerical illustration is only a special case and is not universally suitable for arbitrary system reconstruction. Due to the multi-scale property, the task of multi-variable system reconstruction using one reservoir network usually requires more than a single neuron. As for the task in Fig. 2(a), in order to obtain a successful prediction for the \(3\) components of the Lorenz system, a single-neuron reservoir is not adequate even with multiple time delays. In addition to the equivalent representation ability in the sense of embedding, we further discover that the time-delayed RC has a more flexible memory capacity, which is an essential measure of RC's reconstruction ability for delayed systems. In the dynamics reconstruction task for the above gene regulation model in Fig. 3(a), the chaotic dynamics cannot be reconstructed by a standard RC, no matter how large the reservoir is, according to the dimension test [26]. However, with the otherwise identical reservoir environment, the time-delayed RC [both RC#1 and RC#2 in Fig. 3(a)] can accomplish the task quite well. To understand this phenomenon, we calculate the memory capacity (MC) for different RC frameworks, using the definition in [8] and with different combinations of \(N_{\text{neuron}}\) and \(N_{\text{lag}}\) satisfying the same output dimension, i.e., \(N_{\text{neuron}}\cdot N_{\text{lag}}=600\). Specifically, the MC of a reservoir refers to its ability to retain information from previous time steps and is defined in [8] as
\[\mathrm{MC}_{k}=\frac{\mathrm{cov}\left(x(t-k),\hat{y}_{k}(t)\right)^{2}}{ \mathrm{var}(x(t))\cdot\mathrm{var}\left(\hat{y}_{k}(t)\right)},\]
where a random sequence of input values \(x(t)\) is presented to the reservoir, the reservoir output \(\hat{y}_{k}(t)\) is trained to predict the previous input value \(x(t-k)\), and \(\mathrm{cov}(\cdot)\) and \(\mathrm{var}(\cdot)\) represent covariance and variance, respectively.
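The MC computation can be sketched as follows; for each delay \(k\), a separate ridge readout is trained to recover \(x(t-k)\), and \(\mathrm{MC}_{k}\) is the squared correlation defined above (in a careful study the readouts would be fit on a training split and evaluated on held-out data, which we omit here for brevity):

```python
import numpy as np

def memory_capacity(x, R, k_max, beta=1e-6):
    """MC_k for k = 1..k_max, given a scalar random input x of shape (N,)
    and the corresponding reservoir states R of shape (N, m)."""
    mc = np.empty(k_max)
    for k in range(1, k_max + 1):
        X, target = R[k:], x[:-k]           # reservoir at time t, input at t-k
        w = np.linalg.solve(X.T @ X + beta * np.eye(X.shape[1]), X.T @ target)
        c = np.cov(target, X @ w)           # 2x2 covariance of (x(t-k), y_hat)
        mc[k - 1] = c[0, 1] ** 2 / (c[0, 0] * c[1, 1])
    return mc                               # the total capacity is mc.sum()
```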
Figure 3(b) clearly shows that, as \(N_{\text{lag}}\) increases, the reservoir computer with different delay settings has stronger memory capacity while still retaining the fading-memory property. This is essential for dynamics reconstruction tasks, particularly for time-delayed physical or biological systems such as the gene regulation model above. Thus, the proposed time-delayed RC framework has a more flexible capability to deal with dynamics reconstruction tasks requiring tunable MC.
Finally, to further validate the efficacy of the time-delayed RC in reconstructing high-dimensional spatial-temporal systems, we consider the ideal storage cellular automaton model (ISCAM), simulating heterocatalytic reaction-diffusion processes at metal surfaces [33, 34]. Considering the extremely high dimension (the \(100\times 100\) grid yields a \(10000\)-dimensional input), reconstructing the chaotic spatial-temporal patterns is a challenging task. As shown in Fig. 4, with the same reservoir output dimension, the time-delayed RC has almost the same reconstruction ability as the non-delayed one.
Our framework uses a few hyper-parameters, such as \(d\), the effective reservoir dimension, and \(\tau\), the time delay, which definitely affect RC's efficacy in system reconstruction. In fact, the existing literature includes some criteria for selecting such parameters in system reconstruction using delayed embedding theory. We thus implement these criteria, the dimension test and the delayed mutual information (DMI), to determine \(d\) and \(\tau\). From the perspective of embedding, \(d\) is only required to be larger than \(2\cdot\text{dim}(\mathcal{M})\), while in practice the box-counting dimension of the manifold \(\mathcal{M}\) is usually very small, e.g., \(\text{dim}(\mathcal{M})\) is between \(2\) and \(3\)[35, 36] for the chaotic Lorenz attractor. However, to design an effective RC, \(d\) is required to be moderately large (see all the examples above). This is probably because, although the generic property in the embedding theory means open and dense in a topological sense, there are still degenerate situations in practice, particularly for randomly-generated networks (see Fig. S1 in [26]). Moreover, to reveal the mechanism from representation to computation, recent efforts used universal approximation theory [21] and the DMD framework [19], which further demonstrate the necessity of a large network size of RC in achieving
Figure 3: (a) Reconstructed dynamics of the chaotic gene regulation model by the standard RC including 600 neurons (see the inset), the single-neuron, time-delayed RC#1 with 600 lags, and the time-delayed RC#2 including 6 neurons and 100 lags for each neuron. (b) MC test with different combinations of \(N_{\text{neuron}}\cdot N_{\text{lag}}\). Here, \(\tau=5\) and the sampling stepsize is \(\Delta t=0.1\). All the other parameter settings are introduced in [26].
good approximations. Thus, the dimension tests are used to seek a suitable \(d\) for each computation. As for the delay \(\tau\), a value that is either too small or too large renders the computation problematic in system reconstruction, which naturally prompts us to introduce a modified DMI test taking into account the intrinsic time-scales of the neuronal dynamics in RC. Finally, it is noted that, for chaotic systems, the lagged observables earlier than the Lyapunov time have diminishing predictive power for the current time step, so we suggest the constraint \(\tau\cdot\Delta t\cdot N_{\mathrm{lag}}<\Lambda_{\mathrm{max}}\) for the choice of \(\tau\) and \(N_{\mathrm{lag}}\) in practice, where \(\Delta t\) is the sampling stepsize. The details of the choice of these hyper-parameters are referred to [26].
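As a small practical aid, this constraint can be turned into an upper bound on the admissible number of lags; the helper below is our own convenience function, with \(\Lambda_{\mathrm{max}}\) expressed in the same time units as \(\tau\cdot\Delta t\).

```python
import math

def max_admissible_lags(tau, dt, lyap_time):
    """Largest integer N_lag satisfying tau * dt * N_lag < lyap_time."""
    return max(0, math.ceil(lyap_time / (tau * dt)) - 1)
```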
In conclusion, we have provided a deep and rigorous insight into the mechanism of RC from the viewpoint of embedding theory and nonlinear dynamical systems. Based on our analytical findings, we have studied the role of time delay in the reservoir network and proposed a new framework of time-delayed RC. This framework can significantly reduce the network size and promote the memory capacity, enabling it to match or even surpass the ability of the standard RC. Considering the computational costs, which crucially depend on the network size in the dynamical evolution of RC, and the hardware costs related to the circuit size in those overwhelmingly-developed physical RCs [6], a smaller-size reservoir is always expected to promote real and extensive applications. Moreover, we notice a recently-published and independent work [37], where a method, different from the perspective of embedding theory and memory capacity presented here, was proposed to concatenate internal states through time in RC and realize model-size reduction. Lastly, any contributions to designing low-resource-consumption RC frameworks are believed to advance the direction of machine learning and thus be of broad applicability in solving data-driven science and engineering problems.
This work is supported by the National Natural Science Foundation of China (Grant nos. 11925103 and 12171350), and by the STCSM (Grant nos. 18DZ1201000 and 2021SHZDZX0103).
|
2310.06249 | l-dyno: framework to learn consistent visual features using robot's
motion | Historically, feature-based approaches have been used extensively for
camera-based robot perception tasks such as localization, mapping, tracking,
and others. Several of these approaches also combine other sensors (inertial
sensing, for example) to perform combined state estimation. Our work rethinks
this approach; we present a representation learning mechanism that identifies
visual features that best correspond to robot motion as estimated by an
external signal. Specifically, we utilize the robot's transformations through
an external signal (inertial sensing, for example) and give attention to image
space that is most consistent with the external signal. We use a pairwise
consistency metric as a representation to keep the visual features consistent
through a sequence with the robot's relative pose transformations. This
approach enables us to incorporate information from the robot's perspective
instead of solely relying on the image attributes. We evaluate our approach on
real-world datasets such as KITTI & EuRoC and compare the refined features with
existing feature descriptors. We also evaluate our method using our real robot
experiment. We notice an average of 49% reduction in the image search space
without compromising the trajectory estimation accuracy. Our method reduces the
execution time of visual odometry by 4.3% and also reduces reprojection errors.
We demonstrate the need to select only the most important features and show the
competitiveness using various feature detection baselines. | Kartikeya Singh, Charuvaran Adhivarahan, Karthik Dantu | 2023-10-10T01:53:43Z | http://arxiv.org/abs/2310.06249v1 | # L-DYNO: Framework to Learn Consistent Visual Features Using Robot's Motion.
###### Abstract
Historically, feature-based approaches have been used extensively for camera-based robot perception tasks such as localization, mapping, tracking, and others. Several of these approaches also combine other sensors (inertial sensing, for example) to perform combined state estimation. Our work rethinks this approach; we present a representation learning mechanism that identifies visual features that best correspond to robot motion as estimated by an external signal. Specifically, we utilize the robot's transformations through an external signal (inertial sensing, for example) and give attention to image space that is most consistent with the external signal. We use a pairwise consistency metric as a representation to keep the visual features consistent through a sequence with the robot's relative pose transformations. This approach enables us to incorporate information from the robot's perspective instead of solely relying on the image attributes. We evaluate our approach on real-world datasets such as KITTI & EuRoC and compare the refined features with existing feature descriptors. We also evaluate our method using our real robot experiment. We notice an average of 49% reduction in the image search space without compromising the trajectory estimation accuracy. Our method reduces the execution time of visual odometry by 4.3% and also reduces reprojection errors. We demonstrate the need to select only the _most important features_ and show the competitiveness using various feature detection baselines.
## I Introduction
Crucial robotics applications including search and rescue, agricultural robotics, industrial automation, and self-driving cars heavily rely on the robot's ability to localize itself in complex environments. Visual sensors such as monocular, stereo, and depth cameras are popular sensing modalities for perception. Prior works have utilized an understanding of sensor physics to detect features in the sensor readings for use in robot perception. An example is the use of image features for visual odometry and mapping. However, a challenge with this methodology is identifying errors from the sensor or environmental factors that affect the perception. This work develops a representation learning framework that selects sensor features (image features, for example) that best correspond with robot motion as sensed through another sensor.
Feature-based methods like RANSAC [1], TEASER++ [2] are sensitive to image features. Unique features produce consistent matches and transformations with such techniques, which lower rotation and translation errors. However, non-unique features are always present in real-world datasets due to various reasons. For example, outdoor datasets have similar-looking objects like trees, sky, cars, and buildings. Ignoring these non-unique features would improve the transformation estimation process in algorithms like bundle adjustment. In this work, we propose a learning-based algorithm that reduces the image features to a subset of the most consistent features, likely most suitable for downstream perception tasks.
Metrics from feature-based methods cannot be used for learning for two reasons: (i) using signals from the same sensor would not reduce errors; ideally, we would like to use a signal from a different sensor; and (ii) these feature-matching functions are non-continuous. Threshold-based feature-matching methods have zero gradients almost everywhere and are undefined for some inputs. Fortunately, robotics offers a plethora of signals that are independent, continuous, and good indicators of consistency. In this work, we use IMU-based transformations as our consistency signal to train our network. Rather than using end-to-end learning like [3, 4] to find feature matches directly, we modify the input images to highlight areas for the best feature matching, so that we can rely on robust and well-tested methods in computer vision and robotics to do the feature matching in the reduced space.
Using features that are indicative of the estimates is just as important as feature density in reducing estimation errors. For instance, consistent features extracted from a distinct portion of the scene, even in smaller numbers, can assist in a better-estimated trajectory. Apart from highlighting the most relevant regions in the scene for feature matching, our approach reduces the dimensionality of the input search space. To handle the drift in the IMU-based transformations, we use a window-based approach [5] to obtain the IMU-based pose transformations, which help in supervising the consistency-based loss shown in figure 1. We evaluate our method on real-world autonomous driving datasets such as KITTI [6] and EuRoC [7] and compare it with popular feature detector baselines. We also evaluate our method using a custom dataset generated from a real robot car based on the F1tenth [8] setup.
The main contributions of this work are as follows:
* We introduce an attention-based deep learning architecture to determine which regions of a scene are important for consistent feature detection.
* We take advantage of the robot's motion with an IMU-based loss function in our learning module. We generate a representation between inertial sensing and consistent image features by utilizing IMU consistency in extracting
the _most important_ image space.
* We benchmark our approach by performing evaluations on various real-world datasets KITTI[6], EuRoC[7], and our dataset recorded from a robot. We use different feature detector baselines (both classical and learning-based) and show the reduced number of outliers being produced using our method.
* We compare our method with the baselines and show an improvement in Average Trajectory Error per baseline shown in table III and a reduction in Average Execution Time by **4.3%** by reducing the image space up to **49%**.
* We further evaluate our image space by calculating re-projection errors of all the baseline feature detectors. We notice a reduction in the average reprojection error, which enables more accurate homography estimation for pose recovery.
## II Related Work
### _Classical and Learning-Based Feature Detectors_
Traditional feature detection and matching methods are still being used to achieve accurate relative pose estimation through tracking visual features. Most VO methods depend on the visual features from images using feature extraction techniques like [9, 10, 11] and perform tracking using geometrically aligned methods such as LK-Tracking [12], brute force matching with KNN [13] and FLANN-based matching [14]. These techniques are highly sensitive to noise, which may result in an inaccurate estimation of a trajectory when influenced by a significant number of outliers. Once features are detected, several geometrical techniques such as relative pose estimation or epipolar geometry-based triangulation of 3D points can be used to estimate an accurate pose trajectory. However, these geometric-based methods lack robustness over large datasets and with time the odometry accumulates drift [15]. Furthermore, various SLAM systems such as ORB-SLAM [16], Lsd-SLAM [17], Svo [18] and Dso [19] perform reasonably well in both detecting and computing features followed by estimating the trajectory in an end-to-end fashion.
With the advancements in the domain of deep learning for SLAM, researchers have been attempting to replace classical methods of detecting and tracking features with learning-based methods. In terms of feature tracking, learning-based VO methods perform well [20, 21, 22, 23, 24], but many of them rely only on image features and do not consider information from the robot's dynamics, which adversely affects transformation estimates if the tracked features are inconsistent. In this work, we make use of the robot's inertial measurements, i.e., the IMU (Inertial Measurement Unit), in the loss function to supervise our training mechanism to track consistent features and improve the generalizability of learning methods [25, 26]. Many VO frameworks use hand-crafted feature detectors [27, 16] and learning-based feature descriptors [28, 29, 30]. However, our method considers information from the robot's dynamics and uses it as a supervision signal to train our learning pipeline. Most of the feature descriptors incorporate outliers, which hinders an accurate estimation of the trajectory. More advanced learning techniques involve attention-based neural networks for feature detection and matching. In contrast, LoFTR [31], COTR [32], E2EMVM [33], and SuperGlue [30] present graph neural networks which perform well in estimating correct feature correspondences with a wide baseline. The receptive fields used in these methods help in determining fine textures in a global context, which sets them apart from standalone CNN-based learning methods.
### _Outlier Rejection Techniques_
Thresholding-based techniques that restrain the flow of outliers in a VO pipeline prevent a large influence of outliers on the system, but the step function used in these techniques hinders the computation of gradients in a learning framework. RANSAC [1] and TEASER++ [2] are two examples of strategies used to deal with outliers. RANSAC works by randomly sampling a subset of detected features and fitting a model to this subset. The model is further evaluated by measuring how well it fits the remaining features. This technique is non-deterministic, as it may select different data points in every iteration. TEASER++ is a recently introduced method that rejects outliers by incorporating additional techniques such as global optimization and local refinement. Our method pursues the same objective of ignoring outliers, but achieves it by considering only specific attentive image regions. Our attention heads output the attentive blocks that determine the image space required to achieve both feature detection and tracking accuracy.
## III Method
The objective of this work is to reduce the number of outlier features that are inconsistent with a motion sensor (IMU). To achieve this, we aim to map the consistent visual features to the IMU-based inertial consistency. For this, we train a deep neural network that reduces the region of interest for feature detection, supervised by the robot-motion signals \([p_{w}^{i},q_{w}^{i}]\) in our loss function \(L\). We introduce a differentiable
Fig. 1: Overview of the Pipeline: FeatureNet takes in the raw image frame pairs and provides feature maps. Feature maps are passed to an Attention Network that assigns weights to certain regions of the image. These learned features go through an LSTM-based PoseNet which provides 6-dof robot pose trained with a consistency-based loss function formulated with IMU transformations.
approach to create a feature selection pipeline that takes supervision signals from an IMU. To handle drift accumulated over time by the IMU, we follow a window-based approach, which takes small bursts of measurements and creates relative transformation proxies that are inherited consistently throughout the trajectory formation. An overview of our method is depicted in figure 1. In this section, we describe the various components of our method. In section IV we show results from our evaluations of our method compared with other feature-based visual odometry methods.
Our training mechanism includes three main modules, which we name _FeatureNet_, _PoseNet_, and _AttentionNet_. Figure 1 depicts the complete pipeline. Since IMU sensors have noise in their gyroscopes and accelerometers, we use a sliding-window-based approach that includes a short sequence of image pairs (\(t\) to \(t+1\), \(\forall t\in W\)) and IMU bursts within the same timestamp-based window (\(W\)), similar to VIO (Visual-Inertial Odometry) methods [5]; this helps reduce the effect of IMU noise, modeled by:
\[a_{m}=C_{q}^{-1}(a_{t}-g)+b_{a}-n_{a} \tag{1}\]
\[w_{m}=w_{t}+b_{w}-n_{w} \tag{2}\]
where \(b_{a}\) and \(n_{a}\) are the bias and noise of the accelerometer due to external factors, \(g\) is the gravity vector acting on the accelerometer, and \(C_{q}\) is the 3x3 rotation matrix representing the IMU orientation with respect to a fixed world coordinate frame. A similar representation holds for the gyroscope, where \(w_{t}\) is the true angular velocity, void of any external disturbance, and \(w_{m}\) is the angular velocity measured by the sensor. We exploit these sensors to create a consistent pose transformation and use them to train the following subnetworks:
**FeatureNet:** The FeatureNet network includes a multi-layered CNN block which takes the full-size input image pairs and provides two feature maps of size \(256\times M\times N\). In figure 2, we can observe the extracted attentive blocks. We use feature detector points from the baselines to overlay on top of only these attentive blocks, instead of the whole image. We show that this reduced search space extracted using our method reduces translation, rotation, and pose errors in section IV. These feature maps correspond to the pairwise embedding vectors per image and store the information about the features extracted. We do not use any pre-trained network as a backbone in any of our training modules.
**Attention Network:** The output features from FeatureNet are passed through a self-attention-based network, which takes shared weights from the embeddings and selects only attentive weights that improve the training stream. These weights are saved and further passed for training in the next batch. As an intermediate step, our AttentionNet provides trained attention heads, which are treated as the image search space and considered as the most important feature space.
**PoseNet:** Finally, an LSTM-based pose network takes the attentive features and weights to estimate a stream of 6-dof poses. This whole training mechanism iterates through the attention maps from the AttentionNet and reduces the consistency-based loss.
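To make the three modules concrete, the following PyTorch sketch shows one plausible instantiation; the paper does not specify layer counts, channel widths, or attention-head counts, so all of those (and the class internals) are our illustrative assumptions, with only the stated interfaces — CNN feature maps of depth 256, self-attention weights, and an LSTM producing a 6-DoF pose — taken from the text.

```python
import torch.nn as nn

class FeatureNet(nn.Module):
    """CNN block mapping an input image to a (B, 256, M, N) feature map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, img):
        return self.conv(img)

class AttentionNet(nn.Module):
    """Self-attention over the flattened feature map; returns attended tokens
    and the attention weights interpreted as the image-space mask."""
    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
    def forward(self, fmap):
        b, c, h, w = fmap.shape
        tokens = fmap.flatten(2).transpose(1, 2)        # (B, H*W, C)
        out, weights = self.attn(tokens, tokens, tokens)
        return out, weights

class PoseNet(nn.Module):
    """LSTM over attended tokens, regressing a 6-DoF pose per image pair."""
    def __init__(self, dim=256, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)
    def forward(self, tokens):
        h, _ = self.lstm(tokens)
        return self.head(h[:, -1])                      # (B, 6)
```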
**Consistency-based Loss:** For loss minimization, we make use of the robot's transformation poses over the specified window size. These transformations are from IMU. The estimated IMU can be concluded from the following equations:
\[\dot{p}_{w}^{i}=v_{w}^{i},\tag{3}\]
\[\dot{v}_{w}^{i}=C_{q_{w}^{i}}(a_{m}-b_{a}+n_{a})-g,\tag{4}\]
\[\dot{q}_{w}^{i}=\frac{1}{2}\Omega(w_{m}-b_{w}+n_{w})q_{w}^{i},\tag{5}\]
\[\dot{b}_{a}=n_{b_{a}},\tag{6}\]
\[\dot{b}_{w}=n_{b_{w}}.\tag{7}\]
Equations (3)-(7) form the kinematics model of the IMU. Here, \(p_{w}^{i}\) is the position of the IMU in the world frame, \(v_{w}^{i}\) is its velocity in the world frame (with \(dt=1s\)), and \(q_{w}^{i}\) is the quaternion representing the rotation from the world frame to the IMU body frame. The terms \(b_{a}\) and \(b_{w}\) are the accelerometer and gyroscope biases generated by external factors. Further, \(C_{q_{w}^{i}}\) represents the rotation matrix transforming a vector from the relative frame to the world frame, \(\Omega(\cdot)\) is the quaternion product matrix, and \(g\) is the gravity vector. The transformed pose of the robot is represented by \([p_{w}^{i},q_{w}^{i}]\). Integrating these differential equations allows us to estimate the pose from IMU measurements. The PCM-based consistency metric calculates the relative poses \([p_{w}^{i},q_{w}^{i}]\) over a window \((W)\) and constructs proxies of the same. Further, we use the MSE (Mean Squared Error) to reduce the spatial gap between the 6-dof pose obtained from our PoseNet module and the 6-dof pose obtained from \([p_{w}^{i},q_{w}^{i}]\).
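A minimal sketch of this loss computation might look as follows: the IMU kinematics (3)-(7) are integrated over a window to produce the pose proxy \([p_{w}^{i},q_{w}^{i}]\), and the MSE to the PoseNet output is returned. We use SciPy rotations for the quaternion algebra, drop the bias/noise terms for brevity, and note that the axis conventions and the 6-DoF parameterization (translation plus rotation vector) are our assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def integrate_imu_window(a_m, w_m, p0, v0, q0, dt, g=np.array([0.0, 0.0, 9.81])):
    """Integrate the IMU kinematics over one window W. a_m, w_m: (T, 3) arrays
    of accelerometer/gyroscope readings; q0: body-to-world scipy Rotation."""
    p, v, q = np.asarray(p0, float), np.asarray(v0, float), q0
    for a, w in zip(a_m, w_m):
        v = v + (q.apply(a) - g) * dt     # \dot v = C_q a_m - g  (bias-free)
        p = p + v * dt                    # \dot p = v
        q = q * R.from_rotvec(w * dt)     # discrete form of \dot q = 0.5 Omega(w) q
    return p, q

def consistency_loss(pose_6dof, p_imu, q_imu):
    """MSE between the PoseNet 6-DoF output [t | rotvec] and the IMU proxy."""
    target = np.concatenate([p_imu, q_imu.as_rotvec()])
    return float(np.mean((pose_6dof - target) ** 2))
```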
## IV Evaluation
**Network Training:** For the experiments, we use KITTI and EuRoC sequences with their raw IMU measurements in the loss function. We train the pipeline end-to-end on an RTX A100 graphics card. We use a standard MSE loss to balance the weights with the loss defined in section III. Figure 4 shows the loss convergence over time with respect to the reduced error between the PoseNet output and the IMU-based transformation parameters. We report the associated convergence of the loss in table I, which shows the
Fig. 2: Feature to Descriptor: We obtain attentive features from our AttentionNet output heads, which are further overlayed by the baseline feature detectors. This overlaying of baseline feature detectors on our attentive image space refines the outliers.
reduced mean error between our IMU-based transformations and the resultant 6-DOF poses from our PoseNet module. We train our models on full-size input images for both KITTI (376x1241) and EuRoC (480x752). We use full-size images to exploit complete image details in our AttentionNet for feature map generation. The resulting feature maps in figure 3 demonstrate the selective attention regions over a sequential set of input images. In this section, we evaluate the performance of our method with attentive regions in terms of VO-based pose errors, translation and rotation errors, distribution of inliers/outliers, reprojection errors, and execution time. We also demonstrate results from a real-world experiment in Section IV-E.
From section III, we obtain the attentive regions shown in figure 3, which associate the feature consistency over a sequence with the masked image space, expressed as the region with _Important Features_. As a qualitative analysis, we notice that the attentive region is consistent over the input sequence shown in figure 3. The heat maps highlight the most attentive regions, and we observe that these lie on areas that are intuitively good indicators of relative transformation.
### _Benchmarking between the Inliers and Outliers_
We show the data association accuracy of the features obtained using our method by comparing them against keypoints generated using current state-of-the-art methods such as ORB [16], SIFT [34], BRISK [35], AKAZE [36], KAZE [37] and SuperPoint [28]. We further compare the distribution of inliers and outliers over the original image space vs. our reduced search space: we overlay all the detected baseline features on both image spaces and obtain the inlier/outlier distribution using RANSAC. Table II shows the mean and standard deviation of inliers and outliers when using a standalone baseline over the original image space vs. using baseline feature detectors over the _Attentive Image Space_ obtained with our method. We obtain a cumulative decrease in the number of outliers with our method compared to most of the baselines. In figure 5, we qualitatively examine the difference in outliers between the two image spaces.
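This benchmarking step can be sketched as follows; the feature count, the Hamming matcher, and the use of a homography-based RANSAC as the inlier test are our illustrative choices (the mask argument restricts detection to the attentive image space and must be an 8-bit single-channel array).

```python
import cv2
import numpy as np

def inlier_outlier_counts(img1, img2, attn_mask=None):
    """Count RANSAC inliers/outliers among ORB matches, optionally restricted
    to a binary attention mask of shape (H, W), dtype uint8."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, attn_mask)
    kp2, des2 = orb.detectAndCompute(img2, attn_mask)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    _, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    inliers = int(mask.sum())
    return inliers, len(matches) - inliers
```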
### _Visual Odometry Estimation_
Since RANSAC is a non-deterministic technique for estimating the essential matrix [39], which helps recover a robot's relative pose, we demonstrate the efficacy of the inliers and outliers obtained using our method by performing trajectory estimation over our feature space. In our method, we reduce the number of candidate features by restricting the image to regions of interest. In a 2D space \(W\times H\), our feature space corresponds to \(M\times N\subset W\times H\) with a consistent reduction parameter of \(1/n\) over certain sequences of images. This search space reduces the overall distribution of features by a margin factor \(\lambda\).
In our search space, given a pair of corresponding points \(\lambda p\) and \(\lambda p^{\prime}\) in normalized camera coordinates (where \(p\) and \(p^{\prime}\) correspond to the feature points obtained using any baseline shown in table II over the complete image area \(W\times H\)), the epipolar constraint on the essential matrix reads \(\lambda(p^{\prime T}Ep)=0\), where \(E\) is the essential matrix from [39]. Before estimating the current pose, we filter out poorly fitted point pairs using RANSAC. We observe that our search space \((M\times N)\) results in a reduced number of outliers for most of the baselines, as shown in table II. Using the current pose as an anchor node, we estimate all successive poses using \(E=[t]_{\times}R\) and
Fig. 4: Training R,t loss from the complete pipeline.
Fig. 3: Attentions(above) obtained from the input sequences (S0 from KITTI dataset). We can notice the consistency maintained throughout the images, which results in an accurate pose estimation explained in section.IV.
decompose all the R (rotations) and t (translations) using SVD (Singular Value Decomposition), \(E=UWV^{T}\), where U and V are unitary matrices
and W is a diagonal matrix. To calculate the Average Trajectory Error (ATE) between the poses obtained using our method and the ground truth, we compute the RMSE (Root Mean Square Error) between the estimated and ground-truth poses. We evaluate ATE using image sequences from the test data, and the results are shown in table III. We observe that our approach reduces the image space without compromising trajectory estimation accuracy.
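For reference, this essential-matrix pose recovery can be sketched with standard OpenCV calls; the RANSAC probability/threshold values and the assumption of a single intrinsics matrix K are our illustrative defaults.

```python
import cv2

def relative_pose(pts1, pts2, K):
    """Recover (R, t) between two frames from matched keypoints that lie in the
    attentive image space. pts1, pts2: (N, 2) float arrays; K: 3x3 intrinsics."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, Rmat, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return Rmat, t, pose_mask
```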
### _Reprojection Error_
We support the inlier selection overlayed on our attention space by calculating the reprojection errors using homography metric between all incremental image sequences. Table III represents the pixels accounting for reprojection errors after triangulation of corresponding feature points.
To record the change in positional uncertainties, we create covariance matrixes as mentioned in [40]. The baseline-based feature detector and descriptor extractor are created. Key points and descriptors are computed for img1 and img2 using the ORB detector and extractor. Baseline descriptors overlayed into the image space inferred from L-DYNO are matched between img1 and img2 using the BFMatcher, and matches are sorted by distance. We calculate a homography matrix H using RANSAC. This matrix estimates the geometric transformation between keypoints in sequential image pairs. Next, we calculate the reprojection error for each key point in image n after applying the estimated homography to
project them into the image n+1 space. This error quantifies the dissimilarity between matched points.
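A sketch of this homography-based reprojection error, using OpenCV's RANSAC homography estimation and perspective transform (the 3-pixel RANSAC threshold is our illustrative choice):

```python
import cv2
import numpy as np

def mean_reprojection_error(pts_n, pts_n1):
    """Mean pixel reprojection error between matched keypoints of frames n and
    n+1. pts_n, pts_n1: (N, 2) float32 arrays of matched point coordinates."""
    H, _ = cv2.findHomography(pts_n, pts_n1, cv2.RANSAC, 3.0)
    proj = cv2.perspectiveTransform(pts_n.reshape(-1, 1, 2), H).reshape(-1, 2)
    return float(np.mean(np.linalg.norm(proj - pts_n1, axis=1)))
```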
### _Execution Time_
Another axis along which our approach yields positive results is the execution time of the VO pipeline. Intuitively, fewer feature candidates from our method should reduce the time taken by the \(E\)-matrix estimation. We perform this evaluation by measuring the execution time spent by the VO pipeline from the PYSLAMv2 [41] module to perform end-to-end pose estimation using our search space area, and compare it to other methods in table III. We observe a noticeable drop, averaging 8.51 seconds, in the time taken by our method to execute the complete pipeline. This validation supports our initial motive to refine the features and select only those that are _most important_.
### _Real World Robot Experiment_
We demonstrate the performance of L-DYNO using a custom odometry dataset. Figure 6 shows the overall setup used in recording RGB (Realsense D455), IMU (Realsense D455), and ground truth data (Realsense t265). Table IV presents the quantitative results based on the evaluation methods mentioned in Sections IV-B, IV-C, and IV-D. We induce inconsistency in the scene by having a person walk across the robot, as shown in Figure 6. These sudden movements produce more outliers in a standard case. The training data contains over 1006 images of size 640x480, with corresponding IMU and t265-based ground truth poses. L-DYNO reduces the image space by **20%** for all 300 completely unseen test images. Additionally, in order to validate the performance of L-DYNO under a high influence of IMU noise, we avoid using the window-based (\(W\)) approach to form the IMU trajectory. Table IV shows the improvements in pose, reprojection errors, and time when we compare baseline detectors over the L-DYNO image space.
## V Conclusion
We present a representation-learning-based approach that learns the region of interest for selecting consistent features in order to execute accurate camera-based perception tasks like localization, mapping, and tracking. We demonstrate how to leverage information from the robot's motion and use it to supervise the training network. By doing this, we investigate the possibility of bridging the computer vision and robotics domains and utilizing the interchangeable signals from each domain through representation learning. We demonstrate a method to accomplish consistent feature selection with fewer outlier candidates. Our method achieves improved trajectory estimation, reduced reprojection errors, and improved execution time even after reducing the image space by **49%**. Our method shows substantial improvements over all the evaluations performed using both real-world datasets like KITTI [6] and EuRoC [7] as well as a dataset recorded from a real robot.
Fig. 5: ORB features on the original space(Left) vs ORB features on our reduced space(Right). Our method removes the outliers by reducing the search space area resulting in a more accurate estimation of homography without compromising the transformation accuracy shown in table III.
Fig. 6: Experiment setup (top left); raw image (top right) with dynamic objects inducing outliers through an orthogonal range of movements; inliers vs. outliers distribution on the raw image space vs. L-DYNO (bottom). |
2305.02299 | Dynamic Sparse Training with Structured Sparsity | Dynamic Sparse Training (DST) methods achieve state-of-the-art results in
sparse neural network training, matching the generalization of dense models
while enabling sparse training and inference. Although the resulting models are
highly sparse and theoretically less computationally expensive, achieving
speedups with unstructured sparsity on real-world hardware is challenging. In
this work, we propose a sparse-to-sparse DST method, Structured RigL (SRigL),
to learn a variant of fine-grained structured N:M sparsity by imposing a
constant fan-in constraint. Using our empirical analysis of existing DST
methods at high sparsity, we additionally employ a neuron ablation method which
enables SRigL to achieve state-of-the-art sparse-to-sparse structured DST
performance on a variety of Neural Network (NN) architectures. Using a 90%
sparse linear layer, we demonstrate a real-world acceleration of 3.4x/2.5x on
CPU for online inference and 1.7x/13.0x on GPU for inference with a batch size
of 256 when compared to equivalent dense/unstructured (CSR) sparse layers,
respectively. | Mike Lasby, Anna Golubeva, Utku Evci, Mihai Nica, Yani Ioannou | 2023-05-03T17:48:55Z | http://arxiv.org/abs/2305.02299v4 | # Dynamic Sparse Training with Structured Sparsity
###### Abstract
Dynamic Sparse Training (DST) methods achieve state-of-the-art results in sparse neural network training, matching the generalization of dense models while enabling sparse training and inference. Although the resulting models are highly sparse and theoretically cheaper to train, achieving speedups with unstructured sparsity on real-world hardware is challenging. In this work, we propose a sparse-to-sparse DST method to learn a variant of _structured_ N:M sparsity by imposing a _constant fan-in_ constraint. We demonstrate with both a theoretical analysis and empirical results: state-of-the-art sparse-to-sparse structured DST performance on a variety of network architectures, a condensed representation with a reduced parameter and memory footprint, and reduced inference time compared to dense models with a naive PyTorch CPU implementation of the condensed representation. Our source code is available here.
## 1 Introduction
Dynamic Sparse Training (DST) methods such as RigL (Evci et al., 2021) are the state-of-the-art in sparse training methods, learning _unstructured_ Sparse Neural Networks (SNNs) with 85-95% fewer weights than dense models, while maintaining similar generalization. Furthermore, sparse training methods employ sparsity _both during training and inference_, unlike pruning and other methods (Zhou et al., 2021) that only exploit sparsity at inference time.
While models trained with DST methods are highly sparse and enable a large reduction in Floating Point Operations (FLOPs) in theory, realizing these speedups on hardware is challenging when the sparsity pattern is unstructured. Even considering recent advances in accelerating unstructured SNNs (Elsen et al., 2020; Gale et al., 2020), structured sparsity realizes much stronger acceleration on real-world hardware. On the other hand, structured sparse pruning often removes salient weights, resulting in worse generalization than comparable unstructured SNNs for the same sparsity level (Fig. 1(a)). Our work presents a best-of-both-worlds approach: we exploit the DST framework to learn _both_ a highly-sparse _and_ structured representation while maintaining the generalization performance of DST and dense baselines. In summary, our work makes the following contributions:
1. We propose a novel sparse-to-sparse DST method, Structured RigL (SRigL), based on RigL (Evci et al., 2021). SRigL learns a SNN with constant fan-in structured sparsity (Fig. 0(a)) while maintaining generalization comparable with RigL up to a high sparsity level (99%) for a variety of network architectures. This structure is a particular case of "N:M sparsity" which requires \(N\) out of \(M\) consecutive weights to be non-zero (Nvidia, 2020).
2. Our empirical analysis shows that at sparsity levels >90% RigL ablates whole neurons. By allowing ablation in SRigL, we match the generalization performance of RigL even in this high-sparsity regime.
3. We motivate our choice of structure with a theoretical analysis of SNN output norm variance -- a property related to training stability -- and find that the constant fan-in constraint does not have a negative effect.
4. We demonstrate that, similar to other N:M sparsity variants (Nvidia, 2020), our constant fan-in sparsity enables a compact representation that is not only parameter- and memory-efficient, but also amenable to real-world acceleration. While hardware support for our specific form of N:M sparsity is not yet available, we observe increased performance with highly sparse networks over dense baseline at inference time even with only a naive PyTorch CPU implementation.
## 2 Related work
Dynamic Sparse TrainingUnlike with pruning, where weights are typically pruned after the dense network was trained (Han et al., 2015, 2016), or at initialization (Wang et al., 2020), DST methods learn the sparse connectivity during training by periodically adding and removing weights based on various saliency criteria. For instance, Sparse Evolutionary Training (SET) (Mocanu et al., 2018) removes weights with the smallest magnitude and adds weights randomly; similarly, RigL (Evci et al., 2021) prunes weights with the smallest magnitude and regrows weights that have large-magnitude gradients. RigL has been shown to learn models with 95% fewer parameters than dense baselines, while maintaining generalization performance. Liu et al. (2021) further improved the original RigL results by increasing the extent of the parameter space explored by modifying the sparse connectivity update schedule and drop rate.
Many recent works have examined the effect of different grow and prune saliency criteria on unstructured DST approaches, including SET, Deep Rewiring (DeepR) (Bellec et al., 2018), Sparse Networks from Scratch (SNFS) (Dettmers and Zettlemoyer, 2019), Dynamic Sparse Reparameterization (DSR) (Mostafa and Wang, 2019), Top-K Always Sparse Training (Top-KAST) (Jayakumar et al., 2020), and Memory-Economic Sparse Training (MEST) (Yuan et al., 2021). In Section 4 we compare SRigL to several of these methods. While the above-noted DST methods are highly effective at finding SNNs that reduce theoretical inference cost, they produce unstructured SNNs that are difficult to accelerate in practice due to various restrictions of common hardware architectures.
Accelerating Unstructured Sparse Neural NetworksElsen et al. (2020) proposed a method for accelerating unstructured SNNs based on one-dimensional tiling of non-zero elements, which demonstrated significant speedups on both Central Processing Unit (CPU) (Elsen et al., 2020) and Graphics Processing Unit (GPU) (Gale et al., 2020). However, like most approaches to accelerating unstructured SNNs, this method relies on imposing structure on an existing sparse weight matrix _after training_. Our method can be considered a way of adding structure to SNNs _during training_, allowing the model to maximally utilize non-zero weights since structure and weights are learned concurrently.
Figure 1: (a) **Constant fan-in pruning**: keeps the most salient weights _per neuron_, while unstructured pruning keeps the most salient weights _per layer_. A constant fan-in weight matrix has the same number of non-zero elements (here 2) per column, allowing a condensed representation. While pruning may remove salient weights, affecting generalization, with SRigL structure and weights are learned concurrently. (b) **Output-norm variance**: theoretical predictions and simulation results demonstrating that sparse layers with constant fan-in have consistently smaller output-norm variance than layers with the same number of non-zero weights but without the constant fan-in constraint.
Learning Block Structured Sparsity From ScratchBlock sparsity is a particular type of structured sparsity in which blocks of non-zero weights are grouped together in arrangements that reduce the memory overhead required to store the indices of the non-zero weights. Blocks can be generated out of contiguous weights in 1D (sometimes called tiles) or 2D or by utilizing a fixed number of non-zero weights per row or column group in the case of block-balanced sparsity (Hoefler et al., 2021). Spurred by the success of DST in learning unstructured sparse models, recent works have attempted to apply DST principles to learn block-structured sparsity. Jiang et al. (2022) introduced a novel block-aware DST algorithm known as Dynamic Shuffled Block (DSB). DSB reshuffles non-zero weights into a block sparsity pattern after sparse connectivity updates, thereby improving memory access efficiency. Wall-clock speed-ups of up to \(4\times\) were reported with this method; however, generalization performance was reduced compared to RigL at comparable sparsities. Dietrich et al. (2022) applied a modified variant of RigL to BERT models (Devlin et al., 2019). The resulting method is capable of learning models with block-structured sparsity. The authors demonstrated improvements in generalization performance at reduced FLOPs compared to a dense network baseline.
Learning N:M Structured Sparsity from ScratchN:M sparsity is a specific form of block-balanced sparsity in which 1D blocks with \(M\) contiguous elements contain exactly \(N\) non-zero elements. N:M sparsity is particularly amenable to acceleration and several attempts have been made to train models with N:M fine-grained structure using DST methods.
Yang et al. (2022) extended the DST method proposed by Liu et al. (2021) to train multiple sparse sub-networks sampled from a single dense super-network. Their proposed method, Alternating Sparse Training (AST), switches the network topology between sparse sub-networks after each mini-batch during training. Yang et al. (2022) demonstrated state-of-the-art performance on several typical sparse training benchmarks. However, the DST methodology used in (Liu et al., 2021) gradually increases sparsity during training to yield the desired sparsity at the end of training. Therefore, the dense model weights and gradients are required throughout the majority of training, greatly increasing the overall compute and storage requirements. While AST demonstrated a tantalizing possibility of training multiple sparse sub-networks within a single training loop, the gradual dense-to-sparse training paradigm used by (Liu et al., 2021) is not directly comparable to RigL or other similar end-to-end sparse DST methods.
Zhou et al. (2021) explored how N:M sparsity can be achieved during training using magnitude-based pruning during the forward pass and a Straight-Through Estimator (STE) (Bengio et al., 2013) on the backward pass. In their method, the dense network weights are projected into a sparse network during each training iteration. The sparse network is obtained by selecting the top-N out of every M contiguous weights, and STE is used to propagate the approximated gradients through the projection function. However, in their initial experiments sparse networks trained with STE exhibited a significant performance drop compared to a dense benchmark. The authors conjectured that the reduced performance could be due to the gradient approximation performed by STE, which results in sparse-connectivity instabilities during training. To counteract this, Zhou et al. (2021) introduced a regularization term applied to the gradients of pruned weights and called their approach Sparse-Refined Straight-Through Estimator (SR-STE). Their results include N:M ratios of 1:4, 2:4, 2:8, 4:8, 1:16, corresponding to model sparsities of 25%, 50%, 75% and 93.75%. Although SR-STE utilizes sparse operations in the forward pass and can find sparse models optimized for inference, it does not reduce the training cost significantly. Specifically, SR-STE training requires (1) storing original parameters in their dense format, and (2) calculating dense gradients during each training iteration. This makes SR-STE training as expensive as the original dense training in terms of memory and compute cost1. On the other hand, DST methods such as RigL, and our proposed method SRigL, were developed for end-to-end sparse training and use sparse parameters and gradients throughout training.
Footnote 1: To be precise, SR-STE can use some sparse operations and reduce training cost up to two thirds of the original dense training. However this is still far from fully sparse acceleration for training.
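As an illustration of the projection step, the sketch below (our own PyTorch example, not the SR-STE reference code) keeps the top-N magnitudes in every contiguous group of M weights; the SR-STE regularizer and the straight-through backward pass are omitted.

```python
import torch

def project_n_m(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the top-n magnitudes in every contiguous group of m weights.

    Illustrative only: the dense weight is reshaped into groups of m along
    the last dimension, which must be divisible by m.
    """
    out_features, in_features = weight.shape
    groups = weight.reshape(out_features, in_features // m, m)
    # Indices of the top-n magnitudes within each group of m.
    topk = groups.abs().topk(n, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, topk, True)
    return (groups * mask).reshape(out_features, in_features)

w = torch.randn(8, 16)
w_24 = project_n_m(w, n=2, m=4)   # 2:4 pattern -> 50% sparsity
assert (w_24.reshape(8, 4, 4) != 0).sum(-1).max() <= 2
```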
Accelerating Fine-grained N:M Structured SparsityNvidia (2020) introduced the Ampere Tensor Core GPU architecture (e.g. A100 GPUs) and proposed the 2:4 fine-grained structured sparsity scheme that enables SNNs to be accelerated on this hardware _at inference time_. This scheme places a constraint on the allowed sparsity pattern: For every contiguous array of four weights, two are pruned, yielding a 50%-sparse net. The resulting regular structure of the weight matrix allows one to compress it efficiently and to reduce memory storage and bandwidth by operating on the nonzero weights only.
Since the focus is on acceleration at inference time, the authors proposed to use the standard method of magnitude-based pruning post training to achieve the 2:4 sparsity. Importantly, this work considered exclusively the 2:4 ratio; other N:M ratios cannot be accelerated on Ampere GPUs.
## 3 Method
Most existing sparse-to-sparse DST methods, including the state-of-the-art RigL, learn an _unstructured_ sparse mask, and yet structured sparsity realizes substantially better acceleration in practice. Our goal in this work is to introduce structural constraints on the sparse mask learned by RigL, in order to make it more amenable to acceleration, at both training and inference time, while not affecting RigL's generalization performance. The constant fan-in constraint represents a special case of N:M sparsity where \(N\) is the number of non-zero weights per neuron and \(M\) is the dense fan-in for each neuron within a given layer. While current acceleration results exist only for 2:4 sparsity and rely on specialized hardware (e.g. Nvidia's Ampere GPU architecture), a constant fan-in constraint can also theoretically take advantage of the efficient memory access and throughput increase that N:M sparsity yields (Mishra et al., 2021).
We start with a theoretical analysis to explore the effect of various sparsity distributions with different degrees of structural constraints on the training dynamics of SNNs, motivating the particular structured sparsity we use.
### Sparsity and Output-Norm Variance
Consider a SNN with ReLU activations, where each neuron has on average \(k\) connections to the previous layer (i.e., fan-in). It has been shown by Evci et al. (2022) that by normalizing the weights on initialization by a factor of \(\sqrt{2/k}\), one achieves the following desirable normalization property for each layer \(\ell\) with output \(z^{\ell}\):
\[\mathbb{E}\left(\frac{\|z^{\ell+1}\|^{2}}{\|z^{\ell}\|^{2}}\right)=1,\]
meaning that, on average, the squared norm of each layer's output is preserved. However, the variance of this ratio is non-trivial: in networks with large depth it can accumulate, leading to exponentially large variance at the final layer (Li et al., 2021). Minimizing this variance on initialization has been shown to have a positive effect on training dynamics in some network models (Littwin et al., 2020), as it stabilizes the gradients. We therefore analyze the output-norm variance as a guiding quantity for sparsity-type selection.
In the following, we consider three different types of sparsity distributions, which respectively correspond to different degrees of sparsity _structure_ in the SNN, and derive analytic expressions for the behaviour of output norm variance in SNNs with the given sparsity type. The derivations for the following results can be found in Appendix A:
* **"Bernoulli sparsity"**: A connection between each neuron in layer \(\ell+1\) and each neuron in layer \(\ell\) appears _independently_ with probability \(p\!=\!\frac{k}{n}\), resulting in each neuron having \(k\) connections _on average_ and each layer having \(nk\) connections _on average_. The variance is: \[\mathbf{Var}_{\text{Bernoulli}}\bigg{(}\frac{||z^{\ell+1}||^{2}}{||z^{\ell}||^ {2}}\bigg{)}\!=\!\frac{5n\!-\!8\!+\!18\frac{k}{n}}{n(n\!+\!2)}.\] (1)
* **"Constant Per-Layer sparsity"**: Exactly \(kn\) connections are distributed at random in the layer connecting the \(n\) neurons in layer \(\ell+1\) and the \(n\) neurons in layer \(\ell\), resulting in each neuron having \(k\) connections _on average_. The variance is: \[\mathbf{Var}_{\text{Const-Per-Layer}}\bigg{(}\frac{||z^{\ell+1}||^{2}}{||z^{ \ell}||^{2}}\bigg{)}\!=\!\frac{(n^{2}\!+\!7n\!-\!8)C_{n,k}\!+\!18\frac{k}{n}\! -\!n^{2}\!-\!2n}{n(n\!+\!2)},\] (2) where \(C_{n,k}\!=\!\frac{n-1/k}{n-1/n}\). Note that when \(n\!\gg\!1\), \(C_{n,k}\!\approx\!1-\frac{n-k}{n^{2}k}\) is close to \(1\), and with \(C_{n,k}\!=\!1\) we recover the formula for Bernoulli sparsity, meaning that this sparsity type and Bernoulli sparsity are very similar.
* **"Constant Fan-In sparsity"**: Each neuron in layer \(\ell+1\) is connected to exactly \(k\) neurons from layer \(\ell\), chosen uniformly at random. In this case, the variance is: \[\mathbf{Var}_{\text{Const-Fin-In}}\bigg{(}\frac{||z^{\ell+1}||^{2}}{||z^{ \ell}||^{2}}\bigg{)}\!=\!\frac{5n\!-\!8\!+\!18\frac{k}{n}}{n(n\!+\!2)}-\frac{3(n \!-\!k)}{kn(n\!+\!2)}.\] (3)
In deriving the above results we assumed that the direction of the layer output vector \(\frac{z^{\ell}}{\|z^{\ell}\|}\) is uniformly distributed on the unit sphere. We compare our theoretical predictions with simulations in Fig. 1(b) and verify their accuracy. Bernoulli and constant-per-layer distributions result in unstructured sparsity, and most current DST approaches, including RigL, operate with constant-per-layer sparsity. In contrast, the constant-fan-in type imposes a strong structural constraint. Therefore we are somewhat surprised to find that, in fact, constant-fan-in sparsity always produces slightly smaller output-norm variance than the other types. The difference is larger when \(k\ll n\), i.e., for very sparse networks. This indicates that, at the very least, the constant fan-in constraint should not impair SNN training dynamics and performance, motivating our method of maintaining the constant fan-in sparsity constraint within a DST approach.
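This comparison is straightforward to reproduce numerically. The following Monte Carlo sketch (with an illustrative width, fan-in, and trial count of our choosing, not the paper's exact settings) estimates the output-norm variance for Bernoulli and constant fan-in masks under the \(\sqrt{2/k}\) scaling used above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 64, 8, 20000            # width, fan-in, Monte Carlo samples

def ratio(mask):
    z = rng.normal(size=n)
    z /= np.linalg.norm(z)             # input direction uniform on the sphere
    w = rng.normal(size=(n, n)) * np.sqrt(2.0 / k) * mask
    out = np.maximum(w @ z, 0.0)       # ReLU
    return np.sum(out**2) / np.sum(z**2)

bern, const = [], []
for _ in range(trials):
    bern.append(ratio(rng.random((n, n)) < k / n))   # Bernoulli(k/n) mask
    m = np.zeros((n, n))
    for row in m:                                    # exactly k inputs per neuron
        row[rng.choice(n, size=k, replace=False)] = 1.0
    const.append(ratio(m))

print("Bernoulli variance:       ", np.var(bern))   # ~ Eq. (1)
print("constant fan-in variance: ", np.var(const))  # ~ Eq. (3), slightly smaller
```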
### Structured RigL
As motivated by Section 3.1, we propose to enforce the constant-fan-in constraint within a sparse-to-sparse DST method to learn structured sparse connectivity from scratch. Specifically, we use RigL by Evci et al. (2021), which can obtain highly sparse networks with generalization performance comparable to their dense baselines.
In brief, the methodology of RigL is to update the SNN connectivity during training by _pruning_ weights with the smallest magnitude and _regrowing_ those with the largest corresponding gradient magnitude in _each layer_. This occurs in periodic, but relatively infrequent, mask update steps throughout most of training. In SRigL, weight saliency must be determined at the _neuron level_ (in convolutional layers, at the level of each filter), since we enforce that every neuron (output channel) has the same number of unmasked incoming weights, thereby satisfying the constant fan-in constraint (Fig. 1(a)).
However, this approach alone significantly lags behind RigL's generalization at very high sparsities (>90%) and with transformer architectures, as shown in Fig. 3(a) and Table 4. This is because the constant fan-in constraint has an important side-effect: under a strict constant fan-in constraint, neurons can never be entirely masked (ablated), as illustrated in Fig. 2. At very high sparsity levels this can lead to many neurons having only 1-2 weights, limiting the capacity to learn complex features and consequently reducing generalization performance. Indeed, at high sparsities we observed empirically that RigL ablates large numbers of neurons (Figs. 3(b), 10 and 11). Effectively, _RigL reduces the width of the model at high sparsities to maintain generalization performance_; we believe we are the first to explicitly identify this behaviour within a DST method.
To resolve this issue in SRigL, we implement a **neuron ablation method**, allowing SRigL to maintain both a constant fan-in constraint _and_ to reduce layer width at high sparsities. We introduce a new hyperparameter, \(\gamma_{sal}\), which defines the required minimum percentage of salient weights per neuron. Given a neuron with constant fan-in of \(k\), if fewer than \(\gamma_{sal}\ast k\) weights are considered salient by either the drop _or_ grow criteria, then the neuron is ablated and its weights redistributed to other neurons within the same layer. The steps below outline our final SRigL method with neuron ablation.
In the following procedure, the first two steps are the same as in RigL, while the other steps are specific to SRigL, containing modifications to include the constant fan-in constraint and neuron ablation. We first set an ablation threshold \(\gamma_{sal}\). Then, for each layer we do the following:
1. Obtain magnitudes of the active weights and gradient magnitudes of the pruned weights; these will serve as prune and growth criteria, respectively.
2. Compute \(K\), the number of weights to be grown and pruned in the current step in this layer. We always grow the same number of connections as we prune.
Figure 2: **Neuron ablation.** At sparsity levels over 90%, RigL learns to completely mask (ablate) a large number of neurons within each layer, effectively reducing layer width. Imposing a constant fan-in constraint requires all neurons to have the same number of (non-pruned) incoming weights and therefore inhibits ablation, which results in worse generalization performance than RigL. Allowing SRigL to ablate neurons restores RigL-level performance.
3. Count the number of salient weights per neuron. A weight is considered _salient_ if it is in the top-\(K\) of either the largest-magnitude weights or the largest-magnitude gradients.
4. Ablate neurons that have fewer salient weights than \(\gamma_{sal}*k\), where \(k\) is the fan-in. Ablation is done by pruning all incoming weights. These pruned weights are redistributed to the remaining neurons in the following steps.
5. Compute the new constant fan-in constraint, \(k^{\prime}\), based on the number of ablated neurons.
6. Prune the \(K\) smallest-magnitude weights in the current layer. Note that this pruning criterion considers all weights within a layer rather than pruning only the smallest weights in each neuron.
7. For each active neuron, regrow as many weights as required, proceeding in order of decreasing gradient magnitude, until the target fan-in, \(k^{\prime}\), is achieved.
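The sketch below condenses these steps into code for a single linear layer. It is our simplified reading of the procedure (the function name, the handling of ties, and the exact redistribution of pruned weights are ours; the real implementation is more careful):

```python
import torch

def srigl_update(w, mask, grad, k_frac=0.3, gamma_sal=0.3):
    """One simplified SRigL mask update for a linear layer.

    w, grad: dense (out_features, in_features) weights / gradients;
    mask: bool, True = active weight. Weight counts are only approximately
    conserved here; the paper handles redistribution exactly.
    """
    out_f, in_f = w.shape
    k = int(mask.sum(dim=1).float().mean().item())       # current fan-in
    K = max(1, int(k_frac * mask.sum().item()))          # prune/grow budget

    # Steps 1-3: layer-wide saliency from weight and gradient magnitudes.
    w_mag = torch.where(mask, w.abs(), torch.full_like(w, -1.0))
    g_mag = torch.where(~mask, grad.abs(), torch.full_like(w, -1.0))
    top_w = torch.zeros(out_f * in_f, dtype=torch.bool)
    top_w[w_mag.view(-1).topk(K).indices] = True
    top_g = torch.zeros(out_f * in_f, dtype=torch.bool)
    top_g[g_mag.view(-1).topk(K).indices] = True
    salient = (top_w | top_g).view(out_f, in_f).sum(dim=1)

    # Step 4: ablate neurons with too few salient weights.
    alive = salient >= gamma_sal * k
    mask[~alive] = False

    # Step 5: new constant fan-in shared by the surviving neurons.
    k_new = int(mask.sum().item()) // max(1, int(alive.sum().item()))

    # Step 6: prune the K smallest-magnitude active weights, layer-wide.
    K = min(K, int(mask.sum().item()))
    w_mag = torch.where(mask, w.abs(), torch.full_like(w, float("inf")))
    mask.view(-1)[w_mag.view(-1).topk(K, largest=False).indices] = False

    # Step 7: regrow per neuron, by gradient magnitude, up to fan-in k_new.
    for i in torch.nonzero(alive).flatten():
        need = k_new - int(mask[i].sum().item())
        if need > 0:
            cand = torch.where(mask[i], torch.full_like(grad[i], -1.0),
                               grad[i].abs())
            mask[i, cand.topk(need).indices] = True
    w.data.mul_(mask)        # pruned weights are kept at zero
    return mask
```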
## 4 Results
We implement SRigL in PyTorch by extending an existing implementation of RigL (McCreary, 2020). We evaluate our method empirically on image classification tasks, training the ResNet-18 (He et al., 2016) and Wide ResNet-22 (Zagoruyko and Komodakis, 2017) models on the CIFAR-10 dataset (Krizhevsky, 2009), and the ResNet-50 He et al. (2016) and Vision Transformer (ViT-B/16) Dosovitskiy et al. (2021) models on the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC-12) dataset (Russakovsky et al., 2015), commonly referred to as ImageNet. See Appendix B for Wide ResNet-22 experiment results.
Unless noted otherwise, we use the same hyperparameter configuration as the original RigL method. A detailed summary of our hyperparameter settings and training details can be found in Appendix C. The modified hyperparameters proposed by Liu et al. (2021) may yield higher generalization performance, but a detailed investigation of the hyperparameters for SRigL is left to future work.
Unless noted otherwise, we set \(\gamma_{sal}\) to 30% for all SRigL experimental results. This value was selected based on a hyperparameter sweep performed by training ResNet-18 and Wide ResNet-22 on the CIFAR-10 dataset, see Appendix D.
### ResNet-18 trained on CIFAR-10
Our training regimen generally follows Evci et al. (2021), see Appendix C.1 for more information. We repeat training with five different random seeds for both methods and report the mean and standard deviation compared to a densely-connected benchmark model in Table 2.
Figure 3: (a) ResNet-50/ImageNet top-1 test accuracy when trained with SRigL for a range of sparsities is comparable to RigL. Extended training durations of \(\times 2\) and \(\times 5\) are also reported for SRigL. Results reported are single runs. (b) **Neuron ablation**: the percentage of active (i.e., not ablated) neurons following RigL/SRigL training. RigL ablates a large number of neurons at high sparsities.
These results confirm that imposing a constant fan-in constraint during sparse training does not significantly degrade the generalization performance of the SNN compared to the RigL method. We inspect the connectivity of ResNet models trained with the RigL method and find, as shown in Fig. 3(b), that at 95% sparsity \(10.9\%\) of neurons are removed completely. Thus, RigL results in fewer, but more densely connected, neurons, whereas the fan-in constraint enforces that all neurons are retained.
In Fig. 10 we plot the number of neurons ablated at ablation thresholds of 0, 30, and 50% to demonstrate how the \(\gamma_{sal}\) hyperparameter can be used to guide the final model width during training.
### ResNet-50 trained on ImageNet
Our training regimen for the ImageNet dataset generally follows Evci et al. (2021); see Appendix C.2 for more details. We investigate the effect of extended training with \(\times 2\) and \(\times 5\) the original number of training epochs. We train each model with a single seed and report the results in Fig. 3(a) and Table 1.
SRigL yields similar generalization performance as RigL across each sparsity and training duration considered. At high sparsities, SRigL with ablation outperforms SRigL without ablation, highlighting the importance of neuron ablation as sparsity increases. Notably, RigL \(\times 5\) results at 99% sparsity in Evci et al. (2021) used a dense first layer, unlike all other results reported in Table 1. Despite this difference, SRigL \(\times 5\) at 99% sparsity is comparable to the RigL \(\times 5\) results. We expect that the 99% sparse models would be improved by using a dense first layer for all SRigL results. Similar to RigL, we observe that SRigL generalization performance improves with increasing training time.
In Table 3 we compare SRigL to a variety of DST algorithms. SRigL performs comparably to other methods, even those which learn unstructured sparsity. Methods whose training is listed as dense require training with the dense network and are therefore not directly comparable to other sparse-to-sparse DST methods. The most directly comparable method to ours is DSB; we note that SRigL outperforms DSB at all sparsity ratios reviewed.
### Vision Transformer trained on ImageNet
We train the vision transformer variant ViT-B/16 on ImageNet generally following the original training recipe per Dosovitskiy et al. (2021) with several modifications, see Appendix C.3 for more information.
Unlike in our convolutional neural network experiments, RigL _does not_ ablate neurons when applied to the ViT-B/16 architecture with sparsities between 80-90%. Instead, we find that RigL learns sparse connectivities with a high variance of fan-in between neurons (see Fig. 11). At 90% sparsity, some neurons are allocated up to \(\times 10\) more active weights than the mean number of active weights in the same layer. We hypothesize that these more densely connected neurons found in our RigL experiments are important for generalization performance; therefore, a high \(\gamma_{sal}\) threshold should improve performance of SRigL by ablating neurons until a sufficient density of sparse fan-in is reached. Indeed, we find that SRigL's generalization performance is sensitive to \(\gamma_{sal}\) and that high \(\gamma_{sal}\) thresholds of 90% to 99% perform best. See Fig. 8 and Appendix D for more details on how \(\gamma_{sal}\) affects the generalization performance of ViT-B/16. For the following results, we used a \(\gamma_{sal}\) of 95%.
We train each model with a single seed and report the results in Table 4. SRigL without ablation is unable to match the generalization performance of RigL. However, with neuron ablation enabled, SRigL's performance greatly improves, outperforming RigL at the 90% sparsity level.
### FLOPs analysis
In Fig. 4, we present an analysis of the FLOPs required during training and inference for SRigL and compare with SR-STE. We calculate FLOPs using the same methodology as Evci et al. (2021) by considering only operations induced by convolutional and linear layers and their activations. FLOPs for add and pooling operations are ignored. For training FLOPs, we also disregard FLOPs required for mask updates, as this step is amortized over \(\Delta T\) steps and is negligible compared to the FLOPs required otherwise for training. The open-source code for counting operations is from the NeurIPS 2019 MicroNet Challenge and is available on GitHub2.
Footnote 2: MicroNet Challenge Github Repository
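As a rough sketch of this counting convention (ours, not the challenge code), the per-layer FLOPs reduce to counting multiply-accumulate operations over non-zero weights only:

```python
def sparse_linear_flops(in_f: int, out_f: int, sparsity: float, batch: int = 1) -> int:
    # 2 FLOPs (multiply + add) per non-zero weight, per sample.
    nnz = int(in_f * out_f * (1.0 - sparsity))
    return 2 * nnz * batch

def sparse_conv2d_flops(c_in: int, c_out: int, kh: int, kw: int,
                        h_out: int, w_out: int, sparsity: float) -> int:
    # Each non-zero kernel weight is applied at every output position.
    nnz = int(c_in * c_out * kh * kw * (1.0 - sparsity))
    return 2 * nnz * h_out * w_out

# e.g. a 90%-sparse 3x3 conv, 256 -> 256 channels, on a 14x14 output map:
print(sparse_conv2d_flops(256, 256, 3, 3, 14, 14, sparsity=0.9))
```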
| sparsity (%) | RigL 1\(\times\) | RigL 5\(\times\)† | SRigL (w/o ablation) 1\(\times\) | SRigL (w/ ablation) 1\(\times\) | SRigL (w/ ablation) 2\(\times\) | SRigL (w/ ablation) 5\(\times\) |
| --- | --- | --- | --- | --- | --- | --- |
| 80 | 74.9 | 77.1 | 74.8 | 75.0 | 76.5 | – |
| 90 | 72.8 | 76.6 | 72.6 | 72.7 | 74.7 | 76.2 |
| 95 | 69.6 | 74.6 | 68.8 | 69.1 | 71.5 | 73.6 |
| 99 | 51.4 | 61.9‡ | 48.7 | 51.5 | 55.3 | 59.0 |
| 0 (dense ResNet-50) | 76.7 | | | | | |

† 5\(\times\) RigL results are from Evci et al. (2021). ‡ Uses a dense first layer, unlike other results.

Table 1: **Top-1 ImageNet test accuracy of ResNet-50 trained with RigL or SRigL at high sparsities and with various training times (as in Evci et al. (2021)), e.g. 5\(\times\) more training epochs than dense ResNet-50.**
| method | training | structured | 50% | 75% | 80% | 90% | 93.75% |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Static* | sparse | no | – | – | 70.6 ± 0.06 | 65.8 ± 0.04 | – |
| SET* | sparse | no | – | – | 72.9 ± 0.39 | 69.6 ± 0.23 | – |
| DeepR | sparse | no | – | – | 71.7 | 70.2 | – |
| DSR | sparse | no | – | – | 73.3 | 71.6 | – |
| Top-KAST‡ | sparse | no | – | – | 74.76 | 70.42 | – |
| MEST† | sparse | no | – | – | **75.39** | 72.58 | – |
| RigL | sparse | no | – | – | 74.98 | **72.81** | – |
| DSB-16 | **sparse** | **yes** | 76.33 | 74.04 | – | – | – |
| SRigL (Ours) | **sparse** | **yes** | **76.60** | **75.55** | 75.01 | 72.71 | 70.56 |
| SNFS (ERK)* | dense | no | – | – | 75.2 ± 0.11 | 73.0 ± 0.04 | – |
| SR-STE | dense | yes | – | 76.2 | – | – | 71.5 |
| dense ResNet-50 | | | | | 76.7 | | |

Table 3: **Top-1 ImageNet test accuracy of ResNet-50 trained with a variety of DST methods, highlighting methods that both are sparse-to-sparse (i.e. sparse training) _and_ learn structured sparsity similar to SRigL — only DSB-16 (2:4 and 1:4 sparsity) is directly comparable in this regard. RigL and SRigL results are from our experiments, other values are obtained from each method’s corresponding paper, unless noted otherwise.**
| sparsity (%) | RigL | SRigL (w/o ablation) | SRigL (w/ ablation) |
| --- | --- | --- | --- |
| 80 | 95.2 ± 0.1 | 95.2 ± 0.1 | 95.2 ± 0.1 |
| 90 | 95.1 ± 0.1 | 95.0 ± 0.1 | 95.1 ± 0.1 |
| 95 | 94.6 ± 0.2 | 94.5 ± 0.3 | 94.7 ± 0.2 |
| 99 | 92.9 ± 0.1 | 91.5 ± 0.3 | 92.8 ± 0.1 |

Table 2: **Test accuracy for ResNet-18 on CIFAR-10 trained with RigL or SRigL with/without neuron ablation at varying sparsities, repeated with five different random seeds.**
Similar to other DST methods, SRigL obtains generalization performance comparable to a dense network benchmark at a fraction of the FLOPs required for both training and inference.
### Acceleration of constant fan-in sparsity
Although commodity hardware can accelerate 2:4 sparsity (Mishra et al., 2021), the specific type of sparsity we propose to learn with SRigL, constant fan-in sparsity, has seen less attention. Theoretical speedups (i.e., FLOP counts) say little about the real-world acceleration potential of a proposed sparse representation; conversely, creating a fully-optimized software or hardware implementation of a novel representation typically requires significant engineering effort outside the scope of this paper.
Here we show that even a straightforward PyTorch implementation of our proposed condensed neural network representation (see Appendix E) can demonstrate this real-world acceleration on dual Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz with 48 threads on 24 cores. In Fig. 5, we present real-world timings comparing the torch.nn.Linear layer and our condensed linear layer, which works as a drop-in replacement. We report mean timings and standard deviations across a minimum of five forward passes for a single condensed layer at sparsities of 99%, 95%, and 90%, and for a standard dense PyTorch Linear layer. For both the condensed and the dense layer, we use a single layer with 10 neurons. We vary the batch size between 1 and 256 and use 65,536 features in our input tensor. The input tensors consist of 32-bit floating-point values and the convolutional weight tensors are in a channel-first layout typical of PyTorch (batch size, channels, height, width). At high sparsity levels and moderate batch sizes, the condensed layer moderately outperforms the dense torch.nn.Linear layer. This result is highly promising, since it is reasonable to expect that a more optimized software implementation and/or explicit hardware support would improve upon these results significantly. For example, when we use the recently published SparseProp library (Nikdan et al., 2023) to accelerate networks trained with SRigL, we obtain accelerations that outperform our naive implementation showcased here. See Appendix F for more details.
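Below is a sketch of such a condensed layer in the spirit of the timing setup above (10 neurons, 65,536 input features, roughly 90% sparsity). It stores an (out_features, k) weight matrix plus an index matrix of the same shape and computes the forward pass with a gather and an einsum. This is our illustration of the representation, not the exact code from Appendix E.

```python
import torch
from torch import nn

class CondensedLinear(nn.Module):
    """Constant fan-in linear layer: only k non-zero weights per neuron.

    Stores a (out_features, k) weight matrix plus an integer index matrix
    of the same shape giving each weight's input feature. Bias handling
    and backward-specific optimizations are omitted.
    """

    def __init__(self, in_features: int, out_features: int, k: int):
        super().__init__()
        # sqrt(2/k)-scaled init, matching the normalization in Section 3.1.
        self.weight = nn.Parameter(torch.randn(out_features, k) * (2.0 / k) ** 0.5)
        idx = torch.stack([torch.randperm(in_features)[:k]
                           for _ in range(out_features)])
        self.register_buffer("idx", idx)            # (out_features, k)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> gathered: (batch, out_features, k)
        gathered = x[:, self.idx]
        return torch.einsum("bok,ok->bo", gathered, self.weight)

layer = CondensedLinear(in_features=65536, out_features=10, k=6554)  # ~90% sparse
y = layer(torch.randn(256, 65536))
print(y.shape)  # torch.Size([256, 10])
```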
## 5 Conclusion
In this work we present SRigL, a novel dynamic sparse training method that learns a variant of structured N:M sparsity. SRigL is capable of sparse-to-sparse training while maintaining generalization performance on par with state-of-the-art unstructured sparse training methods on a wide variety of network architectures. By enforcing a constant fan-in constraint on the learned topology of our networks, the models are amenable to hardware acceleration in a similar manner to N:M sparsity. Our observation that RigL ablates neurons at high sparsities inspires our neuron ablation method, which enables SRigL to match the performance of RigL at high sparsities and on the ViT-B/16 network architecture. We present preliminary timings using a non-optimized implementation of our condensed representation that compare favorably against a dense layer. We hope this work will motivate future work implementing additional fine-grained structured sparsity schemes within the DST framework.
## Acknowledgments and Disclosure of Funding
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), Alberta Innovates, the Digital Research Alliance of Canada, the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI). We are grateful for computational resources made available to us by Google and Denvr Dataworks. We also acknowledge the very helpful feedback of Trevor Gale.
|
2310.19369 | Enhancing Time Series Aggregation For Power System Optimization Models:
Incorporating Network and Ramping Constraints | Power system optimization models are large mathematical models used by
researchers and policymakers that pose tractability issues when representing
real-world systems. Several aggregation techniques have been proposed to
address these computational challenges and it remains a relevant topic in power
systems research. In this paper, we extend a recently developed Basis-Oriented
time series aggregation approach used for power system optimization models that
aggregates time steps within their Simplex basis. This has proven to be an
exact aggregation for simple economic dispatch problems. We extend this
methodology to include network and ramping constraints; for the latter (and to
handle temporal linking), we develop a heuristic algorithm that finds an exact
partition of the input data, which is then aggregated. Our numerical results,
for a simple 3-Bus system, indicate that: with network constraints only, we can
achieve a computational reduction by a factor of 1747 (measured in the number
of variables of the optimization model), and of 12 with ramping constraints.
Moreover, our findings indicate that with temporal linking constraints,
aggregations of variable length must be employed to obtain an exact result (the
same objective function value in the aggregated model) while maintaining the
computational tractability, this implies that the duration of the aggregations
does not necessarily correspond to commonly used lengths like days or weeks.
Finally, our results support previous research concerning the importance of
extreme periods on model results. | David Cardona-Vasquez, Thomas Klatzer, Sonja Wogrin | 2023-10-30T09:21:53Z | http://arxiv.org/abs/2310.19369v1 | Enhancing Time Series Aggregation For Power System Optimization Models: Incorporating Network and Ramping Constraints
###### Abstract
Power system optimization models are large mathematical models used by researchers and policymakers that pose tractability issues when representing real-world systems. Several aggregation techniques have been proposed to address these computational challenges, and it remains a relevant topic in power systems research. In this paper, we extend a recently developed Basis-Oriented time series aggregation approach used for power system optimization models that aggregates time steps within their Simplex basis. This has proven to be an exact aggregation for simple economic dispatch problems. We extend this methodology to include network and ramping constraints; for the latter (and to handle temporal linking), we develop a heuristic algorithm that finds an exact partition of the input data, which is then aggregated. Our numerical results, for a simple 3-Bus system, indicate that: with network constraints only, we can achieve a computational reduction by a factor of 1747 (measured in the number of variables of the optimization model), and of 12 with ramping constraints. Moreover, our findings indicate that with temporal linking constraints, aggregations of variable length must be employed to obtain an exact result (the same objective function value in the aggregated model) while maintaining computational tractability; this implies that the duration of the aggregations does not necessarily correspond to commonly used lengths like days or weeks. Finally, our results support previous research concerning the importance of extreme periods on model results.
power system optimization, mathematical modeling, dimensionality reduction, renewable energy sources, time series aggregation, linear programming.
## I Introduction
Power System Optimization Models (PSOMs) are widely used for planning and policy-making toward sustainable and clean energy systems. However, due to real power systems' spatio-temporal size and technical complexity, PSOMs can result in computationally intractable problems. Computational intractability in PSOMs stems from the multiple dimensions they try to represent: the technical details, the uncertainty concerning the system's demand and the generators' availability, the spatial representation, or the granularity and length of the time horizon. To overcome this, modelers apply aggregation techniques to approximate full PSOMs with PSOMs of reduced size, to derive practical results within reasonable CPU times. One subset of such techniques is time series aggregation (TSA), which aims to replace a full hourly or even sub-hourly PSOM with a smaller model using a simplified time dimension, allowing for faster model runs while maintaining, at least to some degree, the accuracy of the results. There are many methods to achieve this, as reviewed in [1] and [2], and each approach comes with its advantages and shortcomings.
In the literature [1], TSA techniques are subdivided into a-priori and a-posteriori methods. A-priori methods rely on the input space, e.g., demand time series or capacity factors of renewables, as most models cannot be run without first performing an aggregation procedure; a-priori techniques therefore have little regard for the optimization model's performance or structure, and even today they remain the state-of-the-art TSA techniques for PSOMs [3][4][5].
Two of the most common a-priori methods are downsampling and representative periods. Downsampling consists of increasing the coarseness of the time steps used in PSOMs to reduce their size; for example, in SDDP [6], instead of using the available hourly data, weekly averages of demand and inflows are deployed; however, when using coarse time steps
researchers lose sight of short-term dynamics; for example, with weekly time steps, the daily and hourly patterns of a power system are lost. A case study evaluating this situation is found in [7]; this means that a model with weekly or daily time steps will not be able to represent the short-term dynamics of the system.
On the other hand, representative periods aim to partition the complete time horizon (e.g., 8760 individual hours of one year) into a weighted average of smaller ones while keeping the coarseness of the time steps (e.g., 7 representative days with 24 individual hours, \(7\times 24\) different time steps in total). For example, in [8], the authors run 168-hour-long models (one week) and use four of these to approximate a complete year, so in this case we have four representative periods that are used to aggregate the 52 weeks of a year. The main advantage of using representative periods over downsampling is that researchers do not overlook the short-term temporal dynamics of the power system; however, the difficulty lies in determining, first, whether the chosen periods are representative enough of the original data and, second, how well they translate into accurate results when employed in a PSOM [2], i.e., how well they approximate full model results.
However, despite their past usefulness, new trends like the increasing share of variable renewable energy sources (VRES) in power systems pose a challenge to temporal aggregation techniques, as these rely on merging multiple time steps, e.g., daily or weekly averages from hourly data [9][10][11], or on breaking the inter-temporal linking of time [12][13], whereas VRES' technical constraints require highly detailed temporal modeling [14]. The consequences of these aggregation procedures have been researched [15][16][17], and the findings show that they lead to inaccurate results, as they conflict with these fundamental technical constraints and even with those of short-term storage technologies.
Until recently, the inaccuracy of a-priori techniques was not a major concern, but the increasing share of VRES required to achieve net-zero power systems has highlighted the limitations of the temporal aggregation techniques currently used in PSOMs for planning and policy-making. The limitations of a-priori methods are especially highlighted and quantified in [18], where the author illustrates the substantial impact of applying a standard a-priori TSA method (representative periods with k-Means clustering) to approximate a full PSOM. The case study of a simple Economic Dispatch problem shows a 91% error when the objective function value of the k-Means aggregated model is compared with the full model, while other (a-posteriori) alternatives achieve a 0% error; this accentuates the need for new and adequate aggregation techniques.
In contrast to a-priori methods, a-posteriori methods do not exclusively rely on PSOM input data but also include information from the PSOM's structure, e.g., the results from partial model runs or relaxations, in the TSA process. A-posteriori methods can be classified as either non-iterative or iterative. Non-iterative methods aim for a fast solution, with a trade-off between robustness and optimality: they rely on a relaxed version of the complete model, for example by reducing or fixing the number of binary variables, and then cluster the input data and identify extreme periods (e.g., those with a high cost in the relaxed model). Thus, non-iterative methods do not guarantee optimality but provide _good enough_ solutions quickly. Iterative methods aim for an aggregated model as close as possible to the complete one, with less regard for the computational burden; they work by decomposing the problem, for example by splitting design and operation, or by treating each type of constraint independently and then merging the results. In this way, a-posteriori TSA methods have the potential for significant efficiency gains in terms of solution times while simultaneously maintaining the quality of model results close to the full PSOM (without TSA) [18], [2], [19]; this potential has led to increased research interest in a-posteriori aggregation techniques for PSOMs that take the structure of the optimization model into account [20][19][21].
In advancing a-posteriori methods, [18] proposes a new approach to temporal aggregation called Basis-Oriented TSA. The Basis-Oriented approach takes a full hourly solution and splits the input data based on the Simplex basis each time step belongs to; then, for each basis, it computes a centroid of the input data and reruns the model using these centroids. This procedure yields an exact approximation of the complete model while retaining computational tractability. Therefore, this approach allows for an aggregated model akin to representative periods that preserves the objective function value and the solution of the complete model, so it has zero error. Another advantage of the Basis-Oriented approach is that, as we demonstrate later in this paper for the case presented in [18], it finds an aggregation of minimal cardinality, which means that no smaller set of representative periods exists that exactly approximates the model's objective function value. This is supported by the result in [22], which states that every LP model has a row aggregation equivalent to the complete problem.
In this paper, we extend the research on the Basis-Oriented approach and apply it to significantly more complex instances of PSOMs including additional constraints with both spatial and inter-temporal dependence, thereby bringing the approach closer to real-world applications. In particular, we extend the methodology to network constraints and show that we can directly apply the Basis-Oriented approach to find a zero-error aggregation. Moreover, we extend the approach to constraints with temporal linking, i.e., ramping of thermal generators. However, inter-temporal constraints are more challenging than network constraints, as obtaining a basis is not as straightforward. This complexity arises because the basis duration1 or length (always one hour in [18]) will vary depending on the system's state (in particular, on inter-temporal dependencies) and it cannot be determined a-priori.
Footnote 1: The number of consecutive time periods that cannot be separated without losing model accuracy.
To address this challenge, we develop an algorithm that explores the dual space hour by hour, to identify these bases, which allows us to formulate an aggregated PSOM that is still exact (zero error). In summary, the original contributions of this paper are as follows:
* We provide empirical proof that the aggregation found by the Basis-Oriented TSA approach is minimal for the
case analyzed in [18].
* We extend the Basis-Oriented TSA methodology to include network constraints.
* We further extend the approach to models with temporal linking (in particular, ramping constraints) and develop an algorithm to find the time-dependent bases to obtain an exact aggregation under such situations.
The remainder of this paper is organized as follows: in Section II, we briefly introduce the original Basis-Oriented TSA approach and show how the aggregation is minimal and unique; then, we extend it to include network and inter-temporal ramping constraints in Section III; in Section IV, we present a case study to validate the TSA methodology and finally, Section V presents the conclusions and future research.
## II Basis-Oriented Time Series Aggregation
In this section, we first introduce the reader to Basis-Oriented TSA in section II-A by summarizing the main aspects and initial results from [18], as it provides the groundwork for our extension to more realistic models. And then, as a novel contribution, in section II-B we provide empirical proof that the set of representative periods determined by Basis-Oriented TSA not only obtains an exact aggregation but that it corresponds to the set of minimal cardinality to achieve a zero error, and, that this set is unique. Furthermore, we show that all sets that achieve an exact aggregation with a higher number of representative periods are actually a partition of the original Basis-Oriented result.
### _Previous Results: Economic Dispatch_
Consider a stylized PSOM for the Economic Dispatch (ED) problem, where the objective is to minimize the total system operation cost while satisfying demand and scheduling the generation units within their operational bounds. The set of constraints in (1) corresponds to a full model run of the ED problem for every hour \(k\) (\(k=1,2,\ldots,8760\)). Full model results are optimal production decisions \(p_{g,k}\) for each hour \(k\) and each generator \(g\). In contrast, the set of constraints in (2) represents an aggregated PSOM. Instead of \(k\), this formulation uses representative periods \(r\), a subset of \(k\). To avoid the computational complexity or potential intractability of running the full model, the modeler must choose \(r\) such that \(|r|\ll|k|\) while obtaining results similar to those of the full model. In order to determine \(r\), TSA methods are applied. Common results of a TSA method for a specific number of \(r\) are: representative values for the demand \(D_{r}\); a factor \(W_{r}\), which corresponds to the weight (or number of occurrences) of each \(r\), so that the aggregated model represents the ED problem for a full year; and other representative data such as the available capacity \(\overline{P}_{g,r}\).
\[\min\;\sum_{g,k}C_{g}\,p_{g,k}+\sum_{k}C^{nsp}\,nsp_{k}\tag{1a}\]
\[\text{s.t.}\quad\sum_{g}p_{g,k}+nsp_{k}=D_{k}\quad\forall k\tag{1b}\]
\[\underline{P}_{g}\leq p_{g,k}\leq\overline{P}_{g,k}\quad\forall g,k\tag{1c}\]

\[\min\;\sum_{g,r}W_{r}\,C_{g}\,p_{g,r}+\sum_{r}W_{r}\,C^{nsp}\,nsp_{r}\tag{2a}\]
\[\text{s.t.}\quad\sum_{g}p_{g,r}+nsp_{r}=D_{r}\quad\forall r\tag{2b}\]
\[\underline{P}_{g}\leq p_{g,r}\leq\overline{P}_{g,r}\quad\forall g,r\tag{2c}\]
In [18], the author shows that the full hourly ED model, which uses the whole time series of 8760 individual hours and a simple generation mix (one thermal, one wind plant, and the option of non-supplied energy), can be approximated exactly using only three representative hours. Instead of finding these three representative hours by the proximity of the data points (such as by k-means), they are determined by the Basis-Oriented approach - identifying the Simplex basis to which each hour belongs to2. However, [18] does not address the following questions: Are there other aggregations with three clusters that also yield zero error, and if so, what characterizes them? In the following section, we attempt to answer this question and further characterize the results of Basis-Oriented TSA.
Footnote 2: Since there are no inter-temporal constraints in this model, every hour is assessed separately. Hence, we can speak of an hourly basis. In the case study presented, there were only 3 out of 8760 possible different hourly bases. The 3 representative hours were then obtained as the averages of all hours that belonged to the same Simplex basis.
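To make the procedure concrete, the sketch below builds a toy single-bus ED with synthetic data (all numbers are illustrative, not those of [18]), classifies every hour by its optimal active set (which, for this non-degenerate, hour-separable LP, identifies the Simplex basis) and verifies that re-solving on the per-basis centroids reproduces the full objective value.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
C = np.array([30.0, 1.0, 500.0])          # thermal, wind, non-supplied energy
P_T, P_W = 80.0, 100.0                    # installed capacities
D = rng.uniform(40.0, 160.0, size=8760)   # synthetic hourly demand
CF = rng.uniform(0.0, 1.0, size=8760)     # synthetic wind capacity factors

def solve_hour(d, cf):
    # Hourly ED (1): min C'x  s.t.  p_t + p_w + nsp = d, within bounds.
    res = linprog(C, A_eq=[[1.0, 1.0, 1.0]], b_eq=[d],
                  bounds=[(0, P_T), (0, cf * P_W), (0, None)], method="highs")
    return res.x, res.fun

def basis_signature(x, cf, tol=1e-7):
    # Record which variables sit at a bound; for this non-degenerate,
    # hour-separable LP the active set identifies the optimal Simplex basis.
    ub = [P_T, cf * P_W, np.inf]
    return tuple("lo" if xi < tol else ("up" if xi > ubi - tol else "in")
                 for xi, ubi in zip(x, ub))

groups, full_cost = {}, 0.0
for h in range(8760):
    x, f = solve_hour(D[h], CF[h])
    groups.setdefault(basis_signature(x, CF[h]), []).append(h)
    full_cost += f

# One representative hour (the centroid) per basis, weighted by group size:
agg_cost = sum(len(idx) * solve_hour(np.mean(D[idx]), np.mean(CF[idx]))[1]
               for idx in groups.values())
print(len(groups), "bases:", round(full_cost, 2), "==", round(agg_cost, 2))
```

For a fixed basis the optimal cost is linear in the input data, and the set of inputs mapped to a given basis is convex, so each centroid stays inside its region and the weighted aggregated cost equals the full cost; for this generation mix three bases emerge, mirroring the three representative hours of [18].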
### _Minimal Number of Bases and Uniqueness_
As previously mentioned, Basis-Oriented TSA achieves an exact approximation of a full hourly ED using only three representative hours; however, while that work guarantees zero error, it gives no insight into the existence and number of exact aggregations with a different number of clusters. This section therefore assesses whether this is the only exact aggregation with zero error using three clusters, and how other exact aggregations, with a different number of clusters3, relate to the one determined under Basis-Oriented TSA. We do so by exhaustive numerical enumeration.
Footnote 3: It is important to note that we use the word clusters, particularly in this section, in the word’s general meaning and not the one specific to the machine learning community.
For this experiment, and for the sake of simplicity, we limit the total time horizon of the ED problem to 12 hours. Hence, the hourly ED problem in (1) has \(|k|\) = 12 time steps. This yields the exact solution to the ED problem and a reference against which to compare the quality of the approximation under the aggregated model in (2). For the aggregated model, there exists a plethora of combinations to group the original data into 1\(\leq|r|\)\(\leq\)12 clusters. For example, there is only one possible combination to group the data into one cluster. However, according to the Stirling Numbers of the Second Kind [23], there are 2047 combinations of organizing the 12 data points into two clusters, and so on until 12 clusters. Table I presents the relation between the number of clusters and the number of possible combinations; in total, there are 4 213 597 different ways of clustering the 12 hourly data points. We run the aggregated model for every possible combination and study the quality of the solution in terms of the error in the objective function. Table I shows the results of those runs. Therein, the column _Clusterings with No Error_ corresponds to the number of combinations that exactly approximated the full hourly model. For example, of the 2047 possible combinations to separate 12 hours into two clusters, none yielded a 0% error when used in an aggregated model. Of the 8526 ways to organize the original data into 3 clusters, only one yields an exact approximation of the original hourly model, i.e., the one obtained through Basis-Oriented TSA. This proves that Basis-Oriented TSA yielded the best possible aggregation for this data and the economic dispatch problem (as there is no exact aggregation with lower cardinality), and that it is unique for this number of clusters.
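Reusing solve_hour, D, and CF from the sketch in Section II-A, the brute-force search over all 4 213 597 partitions of 12 hours can be written as follows; caching the cost of each candidate cluster (there are only \(2^{12}-1=4095\) distinct ones) keeps the enumeration tractable. With our synthetic data the counts will differ from Table I, but the structure of the experiment is the same.

```python
from functools import lru_cache

def set_partitions(items):
    # Recursively yield every set partition (Bell(12) = 4 213 597 in total).
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

@lru_cache(maxsize=None)
def block_cost(block):
    # Weighted cost of one cluster, evaluated at its centroid.
    idx = list(block)
    return len(idx) * solve_hour(np.mean(D[idx]), np.mean(CF[idx]))[1]

hours = list(range(12))
full12 = sum(solve_hour(D[h], CF[h])[1] for h in hours)
exact_by_size = {}
for part in set_partitions(hours):
    cost = sum(block_cost(tuple(sorted(b))) for b in part)
    if abs(cost - full12) < 1e-6:
        exact_by_size[len(part)] = exact_by_size.get(len(part), 0) + 1
print(exact_by_size)   # zero-error clusterings per number of clusters
```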
Now, let us analyze all the other combinations of aggregations with a cardinality higher than three that have also led to an exact aggregation. In Fig. 0(a) the Basis-Oriented aggregation is presented and in Fig. 0(b) we see a four clusters aggregation which also yields zero error, but clusters one and four in Fig. 0(b) are just a division of cluster two from Fig. 0(a). This situation occurs for all other aggregation with zero error and cardinality higher than three: all of these aggregations correspond to subdivisions of the basis-oriented 3-cluster separation. This result is quite remarkable because it shows that not only does the Basis-Oriented approach yield the best possible approximation but that other exact aggregations come from the Basis-Oriented one.
Based on these results, which underline the potential of Basis-Oriented TSA, we set out to extend the methodology to more realistic models and expand this work by adding time-linking ramping constraints and network constraints, which are commonly found in energy system models and discuss the arising challenges.
## III Extension of Basis-Oriented TSA framework
Considering that PSOMs vary significantly in their formulation depending on the aspects being analyzed, our goal in this section is to frame the varieties of constraints and technical aspects we consider in our extension of the Basis-Oriented approach. Commonplace formulations of PSOMs include network aspects, e.g., the DC Optimal Power Flow (OPF) [24] or the AC OPF and its convexifications [25], which are widely used by operators for day-ahead scheduling and market clearing, or security-constrained unit commitment (SCUC) [26] including inter-temporal constraints. Therefore, this paper focuses on extending the ED formulation and the Basis-Oriented TSA to, first, consider network flow constraints in Section III-A, and then ramping constraints in Section III-B.
### _Basis-Oriented TSA: Network Constraints_
In this section, we extend the Basis-Oriented aggregation to include network flow constraints and show that it still finds an exact aggregation of the input data. The full hourly model for the ED problem with network flows is presented in (3). The additional term in the objective function compared to (1) corresponds to transmission costs, while constraints (3d)-(3f) represent the nodal equations, the import limits, and the export limits, respectively. To find a Basis-Oriented aggregated model, we group the hours that have the same hourly basis and then form the centroid of each of these groups; the only difference this time is the total number of bases under the additional constraints. Because of the proof in [18], and in the absence of temporal linking constraints, this aggregation has zero error compared to the full model.
\[\begin{aligned}
\min\;&\sum_{k,g}C^{g}p_{g,k}+\sum_{k,i}C^{nsp}nsp_{i,k}+\sum_{k,i,j}C^{N}f_{k,j,i} && &&\text{(3a)}\\
\text{s.t.}\;&\underline{P_{w}}\leq p_{w,k}\leq CF_{k}\overline{P_{w}} &&\forall k,w &&\text{(3b)}\\
&\underline{P_{t}}\leq p_{t,k}\leq\overline{P_{t}} &&\forall k,t &&\text{(3c)}\\
&\sum_{j}f_{k,j,i}-\sum_{j}f_{k,i,j}+nsp_{i,k}+\sum_{g\in i}p_{g,k}=D_{k,i} &&\forall k,i &&\text{(3d)}\\
&\sum_{j}f_{k,i,j}\leq\overline{F_{l}} &&\forall k,i &&\text{(3e)}\\
&\sum_{j}f_{k,j,i}\leq\overline{F_{l}} &&\forall k,i &&\text{(3f)}
\end{aligned}\]
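To make the structure of model (3) concrete, the following sketch solves a single-hour instance as a linear program with SciPy. The network topology, costs and limits are illustrative placeholders rather than the case-study data from [28], transmission costs are set to zero, and the dual read-out assumes SciPy's HiGHS interface (version 1.7 or later):

```python
import numpy as np
from scipy.optimize import linprog

buses = [0, 1, 2]
lines = [(0, 1), (1, 2), (0, 2)]        # signed flow f_l > 0 means i -> j
VC    = np.array([3.0, 24.0, 30.0])     # generator variable cost per bus (assumed)
Pmax  = np.array([500.0, 1000.0, 0.0])  # installed capacity per bus (assumed)
Fmax  = np.array([200.0, 300.0, 250.0]) # line flow limits (assumed)
D     = np.array([0.0, 0.0, 500.0])     # demand per bus for this hour (assumed)
C_nsp = 5000.0                          # non-supplied power penalty

n_g = n_n = n_l = 3                     # variable ordering: [p_g | nsp_i | f_l]
c = np.concatenate([VC, C_nsp * np.ones(n_n), np.zeros(n_l)])  # C^N = 0 here

# nodal balance (3d): p_i + nsp_i + inflow_i - outflow_i = D_i
A_eq = np.zeros((n_n, n_g + n_n + n_l))
for i in buses:
    A_eq[i, i] = 1.0                    # local generation
    A_eq[i, n_g + i] = 1.0              # non-supplied power
for l, (i, j) in enumerate(lines):
    A_eq[i, n_g + n_n + l] = -1.0       # flow leaving bus i
    A_eq[j, n_g + n_n + l] = +1.0       # flow arriving at bus j

bounds = ([(0.0, Pmax[g]) for g in buses] +
          [(0.0, None)] * n_n +
          [(-Fmax[l], Fmax[l]) for l in range(n_l)])

res = linprog(c, A_eq=A_eq, b_eq=D, bounds=bounds, method="highs")
print("dispatch:", res.x.round(1), "cost:", round(res.fun, 1))
# duals of the balance rows = nodal marginal costs (sign conventions may
# vary across solver versions)
print("nodal marginal costs:", res.eqlin.marginals.round(1))
```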
### _Basis-Oriented TSA: Ramping Constraints_
Especially in PSOMs, chronology matters, which is why it is so important to account for it in TSA methods. A common mathematical constraint found in PSOMs that links together multiple temporal periods is a ramping constraint, used to represent the technical characteristics of thermal generators. In the case of ramping constraints, the temporal linking arises because of the maximum change in production a thermal generator \(t\) can tolerate between two consecutive periods; for example, if \(p_{t,k}\) corresponds to the power produced by generator \(t\) during hour \(k\), a ramping-up constraint reads \(p_{t,k}-p_{t,k-1}\leq RU\), while a ramping-down constraint reads \(p_{t,k-1}-p_{t,k}\leq RD\), where \(RU\) and \(RD\) are parameters of the generator.
| Clusters \(\lvert r\rvert\) | Possible Clusterings | Clusterings with No Error |
| --- | --- | --- |
| 1 | 1 | 0 |
| 2 | 2 047 | 0 |
| 3 | 86 526 | 1 |
| 4 | 611 501 | 256 |
| 5 | 1 379 400 | 3 280 |
| 6 | 1 323 652 | 10 795 |
| 7 | 627 396 | 14 721 |
| 8 | 159 027 | 9 597 |
| 9 | 22 275 | 3 108 |
| 10 | 1 705 | 498 |
| 11 | 66 | 37 |
| 12 | 1 | 1 |

TABLE I: Combinatorial Aggregation Results
Fig. 1: Two examples of clusterings with no error: (a) with 3 clusters and (b) with 4 clusters
To analyze the impact temporal linking has on the Basis-Oriented approach, we use the model in (4), which corresponds to the model used in [18] with added ramping constraints for the thermal generator \(t\).
\[\begin{aligned}
\min\;&\sum_{g,k}C^{g}p_{g,k}+\sum_{k}C^{nsp}nsp_{k} && &&\text{(4a)}\\
\text{s.t.}\;&\sum_{g}p_{g,k}+nsp_{k}=D_{k} &&\forall k &&\text{(4b)}\\
&p_{t,k}-p_{t,k-1}\leq RU &&\forall k,t &&\text{(4c)}\\
&p_{t,k-1}-p_{t,k}\leq RD &&\forall k,t &&\text{(4d)}\\
&\underline{P_{g}}\leq p_{g,k}\leq\overline{P}_{g,k} &&\forall g,k &&\text{(4e)}
\end{aligned}\]
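As a concrete illustration of how the duals of (4b) expose the hourly marginal costs, the sketch below solves a four-hour instance of model (4) with SciPy. The wind availabilities are our assumption, chosen so that the dispatch mimics the worked example of Section III-C (Table III); the actual case-study inputs are in [28]:

```python
import numpy as np
from scipy.optimize import linprog

D    = np.array([170.2, 176.0, 281.7, 391.0])  # hourly demand
Wcap = np.array([40.0, 5.0, 10.7, 20.0])       # assumed wind availability per hour
Pt, VCw, VCt, Cnsp = 1000.0, 3.0, 24.0, 5000.0
RU = RD = 100.0
K = len(D)

# variable ordering: [p_w(0..K-1) | p_t(0..K-1) | nsp(0..K-1)]
c = np.concatenate([VCw * np.ones(K), VCt * np.ones(K), Cnsp * np.ones(K)])

A_eq = np.zeros((K, 3 * K))                    # balance (4b): p_w + p_t + nsp = D
for k in range(K):
    A_eq[k, k] = A_eq[k, K + k] = A_eq[k, 2 * K + k] = 1.0

rows, b_ub = [], []                            # ramping (4c)-(4d) couples hours
for k in range(1, K):
    up = np.zeros(3 * K)
    up[K + k], up[K + k - 1] = 1.0, -1.0       # p_t[k] - p_t[k-1] <= RU
    rows += [up, -up]
    b_ub += [RU, RD]

bounds = ([(0.0, Wcap[k]) for k in range(K)] +
          [(0.0, Pt)] * K + [(0.0, None)] * K)

res = linprog(c, A_ub=np.vstack(rows), b_ub=b_ub, A_eq=A_eq, b_eq=D,
              bounds=bounds, method="highs")
print("thermal dispatch:", res.x[K:2 * K].round(1))      # ~[130.2, 171, 271, 371]
print("marginal costs:", res.eqlin.marginals.round(1))   # should be ~[24, 24, 45, 66]
```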
Temporal linking constraints pose a challenge to the application of Basis-Oriented TSA, as it is no longer possible to assess the data of different hours separately: with active ramping constraints in the optimization model, the hourly periods are no longer independent. In Basis-Oriented TSA we therefore need to consider that a basis \(b\) might consist of multiple consecutive periods; in other words, some representative periods might be longer than one hour (because of active ramping constraints). Identifying those periods and their corresponding duration a priori is not trivial.
To give an example, if a ramping constraint is active in a given time step, it will bind variables from two different time steps. Another way of thinking about this is to consider the structure of the ramping constraint. Without it, the thermal generation in one time step is independent of the others, i.e., a line with slope zero in the \((p_{t,k},p_{t,k-1})\) plane; with an active ramping constraint, the two productions lie on a line with a slope of 1. This extends to however many consecutive periods the ramping constraint remains active. From this, we obtain a _basis duration_, which corresponds to the set of consecutive periods that depend on each other at the optimum and therefore cannot be split.
This leads to another contribution of this paper - how to identify these joined bases and their corresponding basis durations, which we describe in the following section.
### _Basis-Oriented TSA: Basis Identification and Heuristic Algorithm_
In this section, we discuss how to identify bases for PSOMs with temporal linking constraints, such as ramping, and propose a heuristic algorithm for that purpose. This heuristic algorithm uses the dual solution of the full model to identify bases. We are aware that in real-life examples this dual solution is not available. However, our goal here is to demonstrate that, even with temporal linking and flow limit constraints, the Basis-Oriented approach still finds an aggregation with zero error with respect to the objective function value. Moreover, using dual information such as marginal costs4 is very easy to understand and provides great intuition. In future research, we plan to develop a methodology that allows us to identify these bases using input data only.
Footnote 4: The marginal cost is the dual variable of the balance equation and corresponds to the cost of serving one additional unit of demand in that hour.
To account for temporal linking, we use the marginal cost as a proxy to identify bases. In the simple case of [18], where no temporal interlinking constraints were present, only three different marginal costs could be observed, i.e., the variable costs of the wind generator and the thermal generator, and the cost of non-supplied energy. However, introducing ramping constraints complicates this and causes a plethora of additional marginal costs to appear, which do not correspond to any of the variable costs of the system's generation units. This is because, with active ramping constraints, the marginal cost of one hour is affected by the system operation of preceding or posterior hours.
For example, consider a four-hour dispatch of a single-node system with a wind turbine and a thermal generator with variable costs of 3 €/MWh and 24 €/MWh, respectively. Table II shows the system demand, the cost-optimal productions of the generators, and the arising marginal costs. Note that between hours 3 and 4, the thermal plant has a production increase that exceeds 100 MW. Let us repeat this example but with an additional ramp-up constraint with a limit of 100 MW for the thermal generator. In Table III, we now observe that production and marginal costs have changed. Since the ramping constraint prohibited that large jump from hour 3 to 4, the thermal plant had to increase its production earlier on, replacing the cheaper wind production in hours 2 and 3. The marginal cost of 66 €/MWh arises from the thermal generator having to increase its production in the two preceding hours (indicated in green in Table III), plus the cost of serving the additional demand (in blue), minus the cost of the displaced production from wind (in red), so \(2\cdot 24+1\cdot 24-2\cdot 3=66\).
This example illustrates how temporal interlinking directly affects the marginal cost of the system not only in the period where it occurs but also in the production in periods linked to it. Generalizing the previous case, we find that for our model, the marginal cost of any given hour is a linear combination of the variable costs of the generating units with integer coefficients \(a,b,c\) because of the absence of losses.
\[MC_{k}=a\,VC_{nsp}+b\,VC_{t}+c\,VC_{w},\qquad a,b,c\in\mathbb{Z} \tag{5}\]
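Every coefficient triple listed later in Table VI satisfies \(a+b+c=1\) (a marginal unit of demand is ultimately covered by one net unit of energy), which pins the decomposition (5) down uniquely. A small sketch of this inversion (our own illustration, using the example system's variable costs):

```python
VC_nsp, VC_t, VC_w = 5000.0, 24.0, 3.0   # variable costs of the example system

def decompose(mc, tol=1e-6):
    """Integer coefficients (a, b, c) of eq. (5), using a + b + c = 1
    (the pattern visible in Table VI); tries a = 0, then a = 1."""
    for a in (0, 1):
        b = (mc - a * VC_nsp - (1 - a) * VC_w) / (VC_t - VC_w)
        if abs(b - round(b)) < tol:
            b = int(round(b))
            return a, b, 1 - a - b
    return None

for mc in (3, 24, 45, 66, 87, 108, 129, 5000):
    print(mc, decompose(mc))   # e.g. 66 -> (0, 3, -2)
```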
With this observation about the marginal cost in mind, we develop a heuristic algorithm, i.e., Algorithm 1, detailed in the Appendix. Algorithm 1 partitions the time horizon into groups
TABLE II: Example: No Ramping Constraint
| Period | Demand | Thermal | Wind | Ramp-Up | MC |
| --- | --- | --- | --- | --- | --- |
| 1 | 170.2 | 130.2 | 40.0 | - | 24 |
| 2 | 176.0 | 171.0 (+1) | 5.0 (-1) | 40.8 | 24 |
| 3 | 281.7 | 271.0 (+1) | 10.7 (-1) | 100.0 | 45 |
| 4 | 391.0 (+1) | 371.0 | 20.0 | 100.0 | 66 |

TABLE III: Example: With Ramping (the (+1)/(-1) annotations, colored green/blue/red in the original, indicate production changes and their relation to the marginal cost)
of hours that cannot be separated (because of active ramping constraints), using information from the duals (as demonstrated in the example). The arising partition of the input-data space, based on these groups of hours, will then be used to build an aggregated model. Fig. 2 describes the complete data-aggregation process that we employ with temporal linking constraints.
The **First Step** of the data-aggregation process consists of loading the input data (capacity factors and demand) and running Algorithm 15. From this, we obtain a temporal partition of the inputs and the dual variables into subsets of different lengths6 (measured in numbers of consecutive time periods, i.e., hours). We then identify the bases: for each length (e.g., for all 3-hour partitions), we check which of those partitions have exactly the same dual variables, and how many different _bases_ (i.e., subsets that belong to different partitions according to Algorithm 1 but have the same values in the dual space) there are. For example, in the 8736-hour time horizon, there might be 189 total subsets with a 3-hour duration; however, when comparing their dual variables, there are only 7 different subsets of dual variables. Hence, there are only 7 bases of length 3 hours.
Footnote 5: Algorithm 1 looks for a time period where the system's marginal cost does not correspond to the variable cost of any of the generation resources. Looking at the dual variables of the ramping constraints in the periods after and before, we identify whether the marginal cost comes from an upward or a downward ramping constraint. The length of the ramp is determined by the closest integer multiple of the variable cost of the thermal generator; this is based on the empirical work in [27], where the authors found that, in the case of marginal costs higher than \(VC_{t}\) but lower than \(VC_{nsp}\), the integer multiple of \(VC_{t}\) corresponded to the number of periods in which one of the ramping constraints was binding.
Footnote 6: During our experiments, we found that the more constrained a model is, for example, a low ramping capacity, the higher the number of hours in each element.
After that, in the **Second Step**, we aggregate the data belonging to the same basis. Hence, we group the input data using the unique subsets of dual variables found in the first step and make a centroid for each of them, with a weight corresponding to the number of elements in the subset; each of these centroids is a representative period in our aggregated model.
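A compact sketch of this second step follows (our own illustration; the container layout and rounding tolerance are assumptions). Subsets produced by Algorithm 1 are keyed by their dual signature, and one weighted centroid is formed per basis:

```python
import numpy as np
from collections import defaultdict

def aggregate(subsets, duals, inputs, ndigits=6):
    """subsets: index tuples of consecutive hours from Algorithm 1;
    duals[k] / inputs[k]: dual vector / input vector of hour k.
    Returns one (centroid, weight) pair per basis."""
    groups = defaultdict(list)
    for s in subsets:
        # the rounded duals of all hours in a subset form its basis key;
        # subsets of different lengths automatically get different keys
        key = tuple(np.round(np.concatenate([duals[k] for k in s]), ndigits))
        groups[key].append(s)
    bases = []
    for members in groups.values():
        data = np.stack([np.concatenate([inputs[k] for k in s])
                         for s in members])
        bases.append((data.mean(axis=0), len(members)))  # (centroid, weight)
    return bases
```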
In this subsection, we have presented a data aggregation process that decreases the computational complexity of the model with time-linking constraints and performs a temporal aggregation while maintaining a zero error when compared to the complete model.
### _Basis-Oriented TSA: Network and Ramping Constraints_
In this section, we consider the model in (6), which includes flow-limit and ramping constraints. As previously illustrated, flow limits are easier to handle than ramping because there is no temporal interlinking, and they operate in a way analogous to hourly available capacity constraints. So we can apply Algorithm 1 with a small tweak: we now have to look at the marginal costs of every bus when identifying the periods that must go together; the aggregation procedure itself, however, is the same as in Fig. 2.
\[\begin{aligned}
\min\;&\sum_{k,g}C^{g}p_{g,k}+\sum_{k,i}C^{nsp}nsp_{i,k}+\sum_{k,i,j}C^{N}f_{k,j,i} && &&\text{(6a)}\\
\text{s.t.}\;&\underline{P_{w}}\leq p_{w,k,i}\leq CF_{k}\overline{P_{w}} &&\forall k,w,i &&\text{(6b)}\\
&\underline{P_{t}}\leq p_{t,k,i}\leq\overline{P_{t}} &&\forall k,t,i &&\text{(6c)}\\
&\sum_{j}f_{k,j,i}-\sum_{j}f_{k,i,j}+nsp_{i,k}+\sum_{g\in i}p_{g,k}=D_{k,i} &&\forall k,i &&\text{(6d)}\\
&p_{t,k}-p_{t,k-1}\leq RU &&\forall k,t &&\text{(6e)}\\
&p_{t,k-1}-p_{t,k}\leq RD &&\forall k,t &&\text{(6f)}\\
&\sum_{j}f_{k,i,j}\leq\overline{F_{l}} &&\forall k,i &&\text{(6g)}\\
&\sum_{j}f_{k,j,i}\leq\overline{F_{l}} &&\forall k,i &&\text{(6h)}
\end{aligned}\]
## IV Numerical Results
This section contains experimental results obtained from applying the extended Basis-Oriented approach to the 3-bus system described in Fig. 3. The input data and parametrization can be found in [28]; the system has a maximum demand of 1000 MW, the generation units have the parametrization presented in Table IV, and non-supplied power has a cost of 5000 €/MWh. The data are such that the demand could be supplied entirely by the thermal generator; also, the limits on the lines represent the difficulty of getting renewable energy from the wind turbine into the grid, as is usually the case in real-world systems.
### _Results with Network Constraints_
We applied the Basis-Oriented TSA to the system presented in Fig. 3 whose data and parameter values are in [28]. For
Fig. 2: Data-aggregation process with temporal linking constraints
simplicity, we consider that load is present only in one node of the system, while the other two are net exporters. The model is solved for one year with hourly time steps to obtain a benchmark against which the temporally aggregated model is compared7. After applying the Basis-Oriented procedure to the network-constraints case, we obtained the same results from the temporally aggregated model and the full hourly one; this means that Basis-Oriented TSA yields an exact aggregation, and it achieved a reduction in the number of required hours by a factor of roughly 1747 (8736/5), because only five representative hours are required to exactly represent the whole year, as shown in Table V. This corresponds to a reduction of the number of variables of the aggregated model by three orders of magnitude. The relationship between computation time and the number of hours is not linear; however, it is one of the determining factors in solving such models in a feasible amount of time.
Footnote 7: The full hourly model poses no tractability challenges for small test systems.
To analyze the results obtained from the Basis-Oriented approach, we plot the resulting bases in the input-data space in Fig. 4 (renewable energy availability on the x-axis and demand on the y-axis). Comparing the results with those in [18] (where we only had 3 linearly separable bases), we see that in the three-bus network case the bases are not linearly separable; this stresses the point that _a priori_ approaches to representative-period selection are insufficient in PSOMs. These results also strengthen the argument that partitioning the inputs from a minimum-distance perspective (e.g., k-means or k-medoids clustering) is insufficient to approximate the complete model effectively.
In the network case, we obtain five bases because of the additional8 constraints in the model; here, the additional bases come from the hours when lines one or two are congested, which means that one or two of the constraints in (6g)-(6h) are binding. For this case, it is worth noting that each of the bases corresponds to an operational situation in the power system: _basis 1_ and _basis 5_ are hours where the demand is higher than the available renewable energy; in _basis 1_ no line is congested, while in _basis 5_ L1 is at its limit, so part of the renewable production is routed through L2 and L3. An analogous situation happens with _basis 2_ and _basis 3_, but in this case the demand is lower than the available renewable energy, so (6b) is not binding. Finally, _basis 4_ corresponds to hours where both L1 and L2 are at their limit; in this situation, it makes no difference whether or not (6b) is binding for the wind turbine, which is why _basis 4_ includes hours with demand both higher and lower than the available renewable energy. Of course, depending on the input data, some constraints might never be binding at all (e.g., lines with ample capacity compared to the generators they serve, like L3 and the thermal generator in our test case); that is why, to precisely approximate these models, both the input data and the model's structure must be considered.
Footnote 8: Compared to the case in [18].
### _Results with Ramping Constraints_
Running the full hourly model (4) (the single-node problem with ramping constraints) with the parametrization in [28] results in the marginal costs shown in Table VI, which also includes their integer coefficients (\(a,b,c\)) corresponding to each of the generation units' variable costs and the non-supplied power penalty. Now we want to generate an aggregated model and present two different cases: Case A, where we ignore the fact that there is temporal linking in the TSA process; and Case B, where we account for temporal linking constraints employing the heuristic data-aggregation process presented in Section III-C.
#### Iv-B1 Case A (ignoring temporal linking in TSA)
We apply the original Basis-Oriented TSA approach (which implicitly
| | Aggregated Model | Complete Model | Size Reduction (%) |
| --- | --- | --- | --- |
| Number of Variables | 45 | 78 624 | 99.94% |
| Number of Constraints | 55 | 96 096 | 99.94% |

TABLE V: Three-Bus System
Fig. 4: Bases (basis 1-5) in the input-data space - Three bus system
Fig. 3: Three-Bus system diagram with line limits (MW) and location of generators
| | Wind Turbine | Thermal Generator |
| --- | --- | --- |
| Installed Capacity (MW) | 500 | 1000 |
| Variable Cost (€/MWh) | 3 | 24 |
| Ramp limit (MW/h) | - | 100 |

TABLE IV: Parameters of generating units
assumes hourly bases) from [18] to the single-node system in (4) and obtain 53 unique one-hour-long combinations in the dual space, i.e., 53 different hourly bases. We then partition and aggregate the input data within these 53 hourly bases, which corresponds to an aggregation of 99.39% with respect to the number of variables, and run the aggregated models9 (using cluster centroids and weights), obtaining an objective function value that is 22% lower than that of the full model. A summary of the results is presented in Table VII; this shows that assuming a fixed one-hour duration/length of the basis (as done in [18]) is insufficient to obtain an exact solution under temporal linking constraints.
Footnote 9: As opposed to the TSA procedure, the actual models include ramping constraints.
#### Iv-B2 Case B (accounting for temporal linking in TSA)
Given Case A, in the following we present an extension of the original Basis-Oriented TSA, which is an original contribution of this paper. The general idea is to drop the assumption of a fixed one-hour basis length and allow for a variable number of hours in each basis. The number of hours that must be grouped together comes from the heuristic process presented in Section III-C, where we showed that the system's marginal cost carries information concerning how many, and which, hours should be kept together in an aggregated model. Column \(b\)10 in Table VI indicates how many hours a ramp-up (ramp-down) constraint was active before (after) a given hour. This means that the bases will not all be of the same length, since the length depends on the heuristic aggregation carried out by Algorithm 1. After applying Algorithm 1 (the 1st step of the aggregation process presented in Fig. 2), we obtain a partition of the input data into 489 subsets with lengths ranging from 1 to 53 hours. The results are summarized in Table VIII. For example, in the 8736-hour time horizon, there are 189 subsets with a 3-hour length, 6390 subsets with a 1-hour length, etc. The 1-hour-long subsets represent situations where only the balance constraint is binding (and ramping constraints are inactive). The column _Obj. Func. Avg._ refers to the average objective function value over all subsets of a given length. For example, on average a 3-hour subset yields an objective function value of 22 624 €.
Footnote 10: The integer coefficient of the thermal generator’s variable cost in the decomposition of the marginal cost.
We include the results in Table VIII because they allow for interesting observations. For example, they indicate the impact of extreme periods on the objective function value, as some of the subsets with the highest objective function values have the lowest frequency (e.g., the 43-hour subset, which appears only once). Also, note that the lengths do not correspond to the typically used aggregations of days or weeks (i.e., 24 and 168 hours, respectively), which might indicate that using typical representative days in aggregated models is not the most efficient way of aggregating. Moreover, a greater length (number of hours within a subset) does not necessarily imply a higher objective function cost (e.g., the 43-hour subsets have a higher average cost than the 53-hour subsets).
Now we apply the 2nd step of the aggregation procedure from Fig. 2 where we identify bases from the subsets, which are also indicated in Table VIII. As an example, from the 189 subsets of length 3, we identify that there are only 7 among them with different dual variables. Hence, there are 7 bases of length 3. We aggregate the 3-hour subsets that belong to the same basis, i.e., that have the exact same dual variables. Following the same procedure for each length in Table VIII, we found that there are only 70 bases in total. They have varying lengths ranging from 1 to 53 hours. In total, those 70 bases represent 681 hours out of the full 8736. When the bases are used in an aggregated model of reduced size11, they exactly approximate the complete model. This significantly reduces the computational burden12 of the model.
Footnote 11: This represents a 92.2% reduction in the number of time periods, and therefore variables, in the aggregated model.
| MC Value | Frequency | \(a\) | \(b\) | \(c\) |
| --- | --- | --- | --- | --- |
| 3 | 1868 | 0 | 0 | 1 |
| 24 | 6203 | 0 | 1 | 0 |
| 45 | 247 | 0 | 2 | -1 |
| 66 | 137 | 0 | 3 | -2 |
| 87 | 81 | 0 | 4 | -3 |
| 108 | 24 | 0 | 5 | -4 |
| 129 | 7 | 0 | 6 | -5 |
| 5000 | 99 | 1 | 0 | 0 |

TABLE VI: Possible Marginal Costs, frequency of occurrence and integer coefficients (\(a,b,c\)) of variable costs
| Variable | Aggregated Model | Complete Model |
| --- | --- | --- |
| Objective Function (€) | 61 722 846 | 75 269 408 |
| Thermal Generation (MWh) | 2 403 380 | 2 537 604 |
| Wind Generation (MWh) | 1 347 240 | 1 210 869 |

TABLE VII: Case A: Comparison full versus aggregated model
### _Results with Ramping and Network Constraints_
We carry out the same data-aggregation procedure for the case with ramping and network constraints. The supporting tables are omitted for space reasons but can be found in [28]; most of the subsets lie in the 3- to 6-hour range, and these show the most significant aggregation in the 2nd step. Moreover, the results also show that some extreme periods, even if they occur only once in the partition, have a significant impact on the objective function value. In summary, to completely represent this model, we only require 740 hours, 59 more than the previous 681.
A comparison of the three test cases: Network; Ramping; and, Network & Ramping is presented in Table IX. _Number of Bases_ refers to how many different bases were needed to exactly approximate the full model. _Max. Subset Length_ indicates the maximum basis length (i.e., number of hours) in the aggregated model. _Number Necessary Representative Hours_ corresponds to how many hours, in total out of the 8736, are required by the Basis-Oriented approach to achieve zero error13 in the aggregated model. Finally, _Size Reduction: Variables_ compares the size of each aggregated model with the full one.
Footnote 13: Taking into account both the number of bases and their length.
### _Discussion_
Our results highlight the complexity of temporal interlinking and show how the right-hand-side value in one period, i.e., the ramping limit, affects the optimal solution both forward and backward in time. Our research also shows that fixed-size representative periods might not be the best choice for aggregated models, and researchers must be ready to partition complete models into _chunks_ of variable temporal size, where each might represent a peculiarity of the system being analyzed; the length of each subset also does not necessarily correspond to the commonly used 24- or 168-hour periods. As our results show, wind production already challenges this convention, but a similar situation might arise if batteries, relying on efficient dynamic control algorithms, become widespread in the industrial and/or household sectors, as their consumption-charging cycles will depend not only on traditional variables like the time of day but also on signaling from the environment (e.g., price). Finally, our results also support previous research about the importance of extreme values in an aggregated PSOM, as some of the periods that could not be merged happen only once or twice and have the highest objective-function values (e.g., subsets of length 13, 35 or 43 hours); hence the Basis-Oriented approach goes beyond the common assumptions of extreme days or weeks and tries to find these inseparable periods without imposing ex-ante conditions.
## V Conclusion
In this work, we addressed TSA when applied for optimization models, and in particular, for PSOMs. We extended Basis-Oriented TSA, which considers the structure of the optimization model in the aggregation process without losing accuracy, to include both network and ramping constraints.
One of the contributions was to demonstrate that when the structure of temporal constraints is taken into account in the Basis-Oriented approach, an exact aggregation can still be obtained, and to develop a methodology to achieve this. To illustrate this, we developed a heuristic algorithm that performs an exploration of the dual space of the full PSOM solution. The algorithm identifies which hours should be grouped together, based on the constraints that are binding at any given time. In this manner, the length of the representative periods of the aggregated model is variable and arises naturally in an incremental way from the exploration procedure.
Another conclusion of our work is that if the goal is to perform temporal aggregation while keeping the results as close as possible to the complete solution, researchers must rethink their use of a uniform length for representative periods (e.g., days) and start to consider aggregated models with periods of different lengths, which may or may not correspond to the intuitive time intervals commonly used in PSOMs.
Finally, for future research, we plan to extend the Basis-Oriented approach to other types of constraints (e.g., constraints involving integrality, such as unit commitment), which were not addressed in this paper. Another topic of future research is inferring an aggregation directly from the input data and the optimization model's structure, without already having a (primal and dual) solution of the full model.
## Acknowledgment
This work is part of the NetZero-Opt project, which has been funded via a Starting Grant of the European Research Council (ERC) (Grant agreement No. [101116212]).
## Appendix A Partitioning Algorithm
The algorithm is divided into two main parts, one for each of its loops. In the first loop, the ramping lengths of the model run are identified by associating each marginal cost with the closest integer multiple of the thermal unit's variable cost, \(b\), the coefficient of \(VC_{t}\) in (5). The second loop partitions the model's input data based on each period's marginal cost.
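The following Python-level sketch mirrors that structure. It is reconstructed from the verbal description above and from footnote 5, not from the authors' pseudocode, so the way the ramp direction is read from the duals is our assumption:

```python
def partition_horizon(mc, du_up, du_dn, VC_w=3.0, VC_t=24.0, VC_nsp=5000.0,
                      tol=1e-6):
    """First loop of Algorithm 1 (sketch): group consecutive hours that
    active ramping constraints tie together.
    mc[k]: marginal cost of hour k; du_up[k] / du_dn[k]: duals of the
    ramp-up / ramp-down constraints linking hours k-1 and k."""
    K = len(mc)
    used = [False] * K
    groups = []
    for k in range(K):
        if used[k]:
            continue
        if any(abs(mc[k] - vc) < tol for vc in (VC_w, VC_t, VC_nsp)):
            used[k] = True
            groups.append((k,))              # independent one-hour subset
            continue
        # mc[k] is a combination of variable costs, eq. (5); the integer
        # coefficient b of VC_t gives the number of ramp-linked periods
        b = int(round((mc[k] - VC_w) / (VC_t - VC_w)))
        if du_up[k] > tol:                   # bound by ramping up: look back
            lo, hi = max(0, k - b + 1), k + 1
        elif du_dn[k] > tol:                 # bound by ramping down: look ahead
            lo, hi = k, min(K, k + b)
        else:                                # ambiguous in this sketch
            lo, hi = k, k + 1
        span = [j for j in range(lo, hi) if not used[j]]
        for j in span:
            used[j] = True
        groups.append(tuple(span))
    # the second loop then keys each group by its marginal-cost / dual
    # signature, as in the aggregation sketch of Section III-C
    return groups
```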
|
2310.01526 | Modern code reviews -- Preliminary results of a systematic mapping study | Reviewing source code is a common practice in a modern and collaborative
coding environment. In the past few years, the research on modern code reviews
has gained interest among practitioners and researchers. The objective of our
investigation is to observe the evolution of research related to modern code
reviews, identify research gaps and serve as a basis for future research. We
use a systematic mapping approach to identify and classify 177 research papers.
As preliminary result of our investigation, we present in this paper a
classification scheme of the main contributions of modern code review research
between 2005 and 2018. | Deepika Badampudi, Ricardo Britto, Michael Unterkalmsteiner | 2023-10-02T18:15:26Z | http://arxiv.org/abs/2310.01526v1 | # Modern code reviews - Preliminary results of a systematic mapping study
###### Abstract.
Reviewing source code is a common practice in a modern and collaborative coding environment. In the past few years, the research on modern code reviews has gained interest among practitioners and researchers. The objective of our investigation is to observe the evolution of research related to modern code reviews, identify research gaps and serve as a basis for future research. We use a systematic mapping approach to identify and classify 177 research papers. As preliminary result of our investigation, we present in this paper a classification scheme of the main contributions of modern code review research between 2005 and 2018.
Keywords: Modern code reviews, Source code review, Contemporary code review
* How have the aspects/topics changed over time?
* How many articles cover the different aspects of modern code reviews?
* How were the aspects in R1 investigated?
Note that the preliminary results presented in this paper only address RQ1.
### Search strategy
We employed the following search strategy:
**Databases included**: After defining the review questions, the next step is to select databases to find the relevant papers. The following databases were selected based on their coverage of papers: Scopus, IEEE Xplore, and ACM Digital Library.
**Search string**: In order to search for relevant papers in the three databases, we used the following keywords: Code review, Modern code review, Contemporary code review, Patch accept, Commit review, Pull request, Modern code inspect. We combined the keywords in the search engines with the "OR" operator, and each keyword was suffixed with a wildcard (*). The result of applying the search string on the three databases is presented in Table 1.
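For illustration, the boolean query can be assembled programmatically; the exact wildcard and quoting syntax differs per database, so this merely mirrors the keyword list above:

```python
keywords = ["Code review", "Modern code review", "Contemporary code review",
            "Patch accept", "Commit review", "Pull request",
            "Modern code inspect"]
query = " OR ".join(f'"{k}*"' for k in keywords)
print(query)  # "Code review*" OR "Modern code review*" OR ...
```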
### Inclusion-exclusion criteria
The 873 identified papers were reviewed based on a defined set of inclusion and exclusion criteria. Before we started the review process, we conducted a pilot study1 on 20 papers to ensure that all the authors have the same interpretation of the criteria. After the pilot study, the initial criteria were updated and new criteria were added. We conducted a second pilot study on 20 additional papers using the revised criteria. As a result, we achieved higher consensus in our decision. The final formulation of the criteria is as follows:
Footnote 1: In our first pilot study, we noticed a paper on test case review, so we refined inclusion criterion 1 to cover test code review as well. Some papers discussed approaches to support the modern code review process to make it more efficient, for example, by selecting a relevant reviewer; therefore, we added inclusion criterion 2. We decided to exclude papers that discuss modern code reviews in education. We modified exclusion criterion 1 to emphasize the subject of the investigation; we only include papers where the process of modern code review is under investigation. We also came across papers that discuss solutions that might benefit, among other things, the modern code review process, without discussing the implications of the approach on the code review process itself (e.g., defect prediction). As a result, we excluded such papers and added exclusion criterion 2.

#### Inclusion criteria

1. Papers (peer reviewed and grey literature) related to modern code reviews - benefits, outcomes, challenges, motivations, quality, usefulness and so on.
2. Papers discussing aspects such as reviewer selection, what code to review, etc. that support the modern code review process.
3. Papers discussing the reviewer and/or developer perspective.
4. Papers proposing solutions for modern code review.
#### Exclusion criteria
1. Papers not discussing modern code reviews or the subject of investigation is not the modern code review process.
2. Papers that do not discuss the implications of a solution on modern code review process.
3. Papers that discuss modern code review in education.
4. Papers not in English and those that do not have full text available.
### Keywording of abstracts and meta-data
During the screening process, we looked at the abstracts to find keywords and concepts that represent the contribution of the papers. We collected the following data from the selected papers:
* The contribution: this could be related to the different aspects of modern code review, solutions for improving modern code review, or a discussion of the modern code review process from the reviewer/developer perspective.
* The authors of the papers.
* Conference, journal or book.
* Year of publication.
### Data extraction
In addition to the data extracted during the keywording process, we will extract the research facet based on Wieringa's classification (Wieringa, 2018). We will extract the information needed to evaluate the rigor and relevance of the papers. Moreover, the data extracted through the keywording process will be refined (if necessary) based on full-text reading.
### Validity threats
It is important to address the validity threats relevant to a mapping study which are as follows:
_Researcher bias in inclusion/exclusion_ - All three authors were included in the screening process. We conducted two pilot studies to ensure that all authors had the same interpretation of the inclusion/ exclusion criteria. In case of doubts, we discussed the papers together and revised the criteria to make them more explicit (see Section 3.3).
_Exclusion of relevant papers_ - We adopted an inclusive approach; whenever we were in doubt regarding a paper, we included it for further reading. We marked all excluded papers with the applicable exclusion criteria to ensure transparency and traceability.
| Database | Papers |
| --- | --- |
| Scopus | 866 |
| IEEE Xplore | 335 |
| ACM Digital Library - Title | 99 |
| ACM Digital Library - Abstract | 243 |
| **Total** | 1543 |
| **Total after removing duplicates** | 873 |

Table 1. Search results from each database
## 4. Review Results
In this section, we present our preliminary results, i.e. answers to R1. In total, we have included 177 papers after screening the abstracts2.
Footnote 2: We did not include the 21 papers selected in the pilot study and the 46 tentatively accepted papers in this paper due to space constraints. We will include them in our extended paper.
Table 2 presents the aspects (R1) and the number of papers that cover the different aspects of modern code reviews (R1.2). The most investigated topics are "solutions", "impact/outcome", and "reviewer identification". We elaborate on these topics in Sections 4.1, 4.2 and 4.3 respectively. To investigate how the research topics evolved over time (R1.1), we first split all the papers by the year in which they were published. Figure 1 shows the number of papers published per year (2005-2018). In the last five years, the number of papers has increased drastically compared to previous years.
### Papers proposing solution to improve modern code review process
We have identified 39 solution papers in our mapping study. A few papers proposed solutions for the same purpose: support for collaborative modern code review (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan and Krishnan, 2018; Krishnan and Krishnan, 2018), identifying behavior-modifying changes (Bang et al., 2019; Krishnan and Krishnan, 2018), motivation enhancement (Krishnan et al., 2018; Krishnan and Krishnan, 2018), use of static analysis to reduce modern code review effort (Krishnan and Krishnan, 2018; Krishnan and Krishnan, 2018), and the information needs of modern code review (Krishnan et al., 2018). Table 3 provides a list of solutions addressing different purposes.
### Impact and/or outcome
The impact of modern code reviews is one of the frequently investigated topics. The topics investigated are impact on software
Figure 1. Number of papers published per year
| Aspects | Sub-Aspects | No. |
| --- | --- | --- |
| Modern code review | Benefits of modern code review | 3 |
| | Causes - acceptance/rejection/partial acceptance of code review; acceptance/integration delays of pull requests | 6 |
| | Motivations, expectations, challenges and/or best practices | 4 |
| | Characteristics / principles of modern code review processes | 8 |
| | Effectiveness and/or efficiency of modern code review | 3 |
| | Impact/outcome | 35 |
| Contributor/Reviewer | Perception on modern code review | 3 |
| | Characteristics such as skills, behaviour and/or participation | 15 |
| | Reviewer selection | 23 |
| Solution | Tool support | 39 |
| Source code | Code characteristics | 5 |
| | Identification of code to review | 15 |
| Review comments | Classification of comments | 3 |
| | Assessment of comments | 3 |
| | Usefulness of the comments | 5 |
| Other | | 7 |

Table 2. Studied aspects of modern code reviews
quality (7; 43; 46; 66; 76), human memory (39; 70), the chance of inducing bug fixes (7), if-statements and change requests (59; 83; 84), and the identification of bugs/warnings/vulnerabilities (12; 54).
While the impact of modern code reviews on several factors was investigated as mentioned above, the impact of several factors on modern code reviews has also been investigated: the impact of continuous integration (99), developer reputation (11), geographical location (62), pair programming (51), patch voting (29), and technical and non-technical factors (7) on modern code reviews.
The relationship between specific characteristics of modern code reviews and their respective impact on different aspects of software development has also been investigated (Table 4).
The impact of non-technical factors (different patch size, patch priority, component, reviewer load, reviewer activity, and patch writer experience) was investigated in (8). The outcome of modern code reviews in terms on detection of code smells (48) and defects (13), ability to identify information on design decisions (88) and design rational (71) and ability to discover misunderstandings about object oriented principles (80) have also been investigated.
### Reviewer identification
Papers proposing solutions to recommend modern code reviewers based on different selection criteria are presented in Table 5.
A solution recommending the role (managerial or technical) of the review was proposed in (92). The motivation of invited reviewers was investigated and led to proposing guidelines for inviting reviewers (63). The factors that influence reviewer assignment was investigated in (69). The use of reviewer recommendation was investigated in (40) and (56).
The results indicate that, even though reviewer selection is perceived to be relevant and effort-saving, it rarely adds additional value and creates an unbalanced workload.
## 5. Conclusions and Next Steps
In this paper we presented the preliminary results of a systematic mapping study on code reviews. We have included 177 papers and extracted data from their abstracts to answer our RQ1 research question. We identified a steady upward trend regarding publications related to modern code review starting in 2011. Moreover, the main aspects addressed by existing research are related to: modern code review process, reviewer selection, tool support, identification of code to be reviewed, and analysis of the review comments.
The following are the next steps in our investigation: i) describe all the aspects of modern code review in Table 2 in detail (e.g., the techniques used to propose solutions). This will give an overview on how the solutions have been designed and what are the limitations, gaps, and potential future works; ii) measure the rigor and relevance of the included studies. Such analysis can determine the strength of the results; iii) extract additional data from the papers' full text (e.g., research facet) and verify the classification of the aspects that was extracted from the abstracts; iv) include a detailed discussion of the results.
|
2308.14407 | Identifying topology of leaky photonic lattices with machine learning | We show how machine learning techniques can be applied for the classification
of topological phases in leaky photonic lattices using limited measurement
data. We propose an approach based solely on bulk intensity measurements, thus
exempt from the need for complicated phase retrieval procedures. In particular,
we design a fully connected neural network that accurately determines
topological properties from the output intensity distribution in dimerized
waveguide arrays with leaky channels, after propagation of a spatially
localized initial excitation at a finite distance, in a setting that closely
emulates realistic experimental conditions. | Ekaterina O. Smolina, Lev A. Smirnov, Daniel Leykam, Franco Nori, Daria A. Smirnova | 2023-08-28T08:42:06Z | http://arxiv.org/abs/2308.14407v1 | # Identifying topology of leaky photonic lattices with machine learning
###### Abstract
We show how machine learning techniques can be applied for the classification of topological phases in leaky photonic lattices using limited measurement data. We propose an approach based solely on bulk intensity measurements, thus exempt from the need for complicated phase retrieval procedures. In particular, we design a fully connected neural network that accurately determines topological properties from the output intensity distribution in dimerized waveguide arrays with leaky channels, after propagation of a spatially localized initial excitation at a finite distance, in a setting that closely emulates realistic experimental conditions.
## I Introduction
Machine learning holds great promise for solving a variety of problems in nanophotonics. Rather than attempting to model the system of interest exactly from first principles (e.g., by solving Maxwell's equations), machine learning techniques aim to discover or reproduce key features of a system by optimizing parametrized models using a set of training data [1]. A trained model can often predict the properties of a device faster than conventional simulation techniques [2; 3]. Machine learning can also be used to solve the inverse problems of how to design a nanophotonic structure with desired functionalities, and how to reconstruct the parameters of a device using indirect measurements [4; 5; 6; 7; 8]. The latter is particularly important for nanophotonic devices, since structural parameters may differ substantially from the nominal design due to fabrication imperfections.
Recently developed topological photonic systems provide a useful testbed for better understanding the capabilities and limitations of machine learning approaches in nanophotonics [9; 10]. Topological photonic structures host robust edge states which are protected against certain classes of fabrication imperfections. This robustness is explained by the bulk-boundary correspondence, which relates the existence of localized boundary modes to nonlocal topological invariants expressed as integrals of a connection or curvature of the bulk modes [11]. While the direct measurement of a topological invariant entails the reconstruction of both the intensity and phase profiles of the bulk modes of a structure, machine learning models can perform supervised classification of topological phases using a limited set of observables [9].
In general, the performance of machine learning depends on both the quality and quantity of the data used to train the model. Supervised learning approaches, such as deep neural networks, typically require a huge quantity of labelled training data, which may be hard to come by. This has motivated recent interest in the use of unsupervised learning techniques such as manifold learning, which do not require labelled training data to distinguish topological phases [12; 13; 14; 15; 16]. Broadly speaking, these techniques are sensitive to sharp changes to observables that occur in the vicinity of topological phase transition points, and thus perform best when one has access to measurements from a large set of different model parameters, which is most feasible when the parameter controlling the phase transition is continuously tunable [14].
The above methods also rely on prior knowledge of the characteristics of the physical system (such as its size, its internal structure and the parameters of the initial excitation), and are therefore not in line with a realistic experimental framework. Data quality and feature selection can have a significant impact on the machine learning-based reconstruction of topological phase diagrams [17]. For example, missing data arising from incomplete measurements or local perturbations to the data can act as adversarial attacks that fool neural network-based classifiers of topological phases into making incorrect predictions [18]. The existence of adversarial examples highlights the importance of taking platform-specific uncertainties and disorder into account in the selection and design of machine learning classifiers of topological phases.
The aim of this study is to investigate how common obstacles encountered in the characterization of nanophotonic devices - disorder, imperfect alignment, and access to a limited set of output observables - affect the performance of machine learning-based classification and clustering methods for topological phases. Specifically, we focus on the case of one-dimensional waveguide arrays which have provided a versatile platform for the investigation of topological effects in nanophotonics [19; 20; 21], considering the problem of predicting the existence or absence of edge states based on bulk intensity measurements. First, we show that while curated input data can improve the performance of clustering, ambiguity in the training data (in the form of uncertainty in the alignment of the input waveguide) leads to incorrect cluster assignments, requiring the use of supervised learning techniques. We compare the performance of several
supervised classification models, including a convolutional neural network, demonstrating the ability to predict the existence of different edge state configurations with high accuracy using bulk intensity measurements. Finally, we show the feasibility of transfer learning for sufficiently weak disorder strengths, i.e. maintaining accurate predictions of topological edge states using a model trained on disorder-free data. Our numerical results reveal the feasibility using machine learning techniques to distinguish nanophotonic topological phases using incomplete measurements.
The outline of this article is as follows: Section II reviews the properties of the leaky Su-Schrieffer-Heeger (SSH) tight binding model and introduces the datasets which will be used in our study. Section III presents the results of unsupervised clustering according to the edge state configuration using the t-distributed stochastic neighbor embedding (t-SNE) method. We compare the performance of different supervised learning techniques in Sec. IV. As an example of the feasibility of transfer learning we consider in Sec. V the classification performance for disordered waveguide arrays. We conclude with Sec. VI. The Supplementary Materials contain additional details on the tight binding model parameters, training data, and the employed machine learning models.
## II Model and dataset preparation
We consider light propagation in waveguide arrays governed by the paraxial wave equation,
\[i\frac{\partial\mathcal{E}}{\partial z}+\frac{1}{2k_{0}}\Delta_{\perp} \mathcal{E}+\frac{k_{0}n_{L}(\mathbf{r}_{\perp})}{n_{0}}\mathcal{E}=0\;, \tag{1}\]
where \(\mathcal{E}\) is the envelope of the optical wavepacket propagating along the \(z\) (waveguide) axis, \(\mathbf{r}_{\perp}=(x,y)\) are the transverse coordinates, \(k_{0}=2\pi n_{0}/\lambda\) is the wave number, \(n_{L}(\mathbf{r}_{\perp})\) is a perturbation of the refractive index forming the waveguide lattice, and \(n_{0}\) is the background refractive index of the medium.
Formally, the final state after a propagation distance \(L\) can be obtained by projecting the input (\(z=0\)) state \(\mathcal{E}(0,\mathbf{r}_{\perp})\) onto the propagation-invariant modes of the array \(\varphi_{n}(\mathbf{r}_{\perp})\) with propagation constant \(\beta_{n}\), i.e.
\[\mathcal{E}(L,\mathbf{r}_{\perp})=\sum_{n}A_{n}e^{-i\beta_{n}L}\varphi_{n}(\mathbf{r} _{\perp}), \tag{2}\]
where \(A_{n}=\int d\mathbf{r}_{\perp}\,\varphi_{n}^{*}(\mathbf{r}_{\perp})\mathcal{E}(0,\mathbf{r}_{\perp})\) are the amplitudes of the modes excited at the input (\(z=0\)). The intensity of the final state
\[|\mathcal{E}(L,\mathbf{r}_{\perp})|^{2}=\sum_{mn}A_{n}A_{m}^{*}\varphi_{n}(\mathbf{r} _{\perp})\varphi_{m}^{*}(\mathbf{r}_{\perp})e^{i(\beta_{m}-\beta_{n})L} \tag{3}\]
is sensitive to both the modal excitation amplitudes \(A_{n}\) and the propagation length \(L\), so intensity measurements at a single \(L\) are generally insufficient to uniquely reconstruct the modal profiles, propagation constants, and topological invariants of the system.
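Numerically, Eqs. (2)-(3) amount to one eigendecomposition followed by a phase evolution. A minimal sketch for a Hermitian tight-binding chain (our own illustration; couplings, system size and the distance \(L\) are arbitrary placeholders, and the leaky case would make the effective Hamiltonian non-Hermitian):

```python
import numpy as np

def output_intensity(H, psi0, L):
    beta, phi = np.linalg.eigh(H)       # propagation constants / mode profiles
    A = phi.conj().T @ psi0             # modal amplitudes A_n at z = 0
    psi_L = phi @ (np.exp(-1j * beta * L) * A)   # Eq. (2)
    return np.abs(psi_L) ** 2                    # Eq. (3)

# example: 20-site SSH chain with J1 < J2 (non-trivial dimerization)
N, J1, J2 = 20, 0.5, 1.0
H = np.zeros((N, N))
for m in range(N - 1):
    H[m, m + 1] = H[m + 1, m] = J1 if m % 2 == 0 else J2
psi0 = np.zeros(N)
psi0[N // 2] = 1.0                      # single-waveguide excitation
print(output_intensity(H, psi0, L=10.0).round(3))
```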
Conventional schemes for predicting topological properties of the modes \(\varphi_{n}(\mathbf{r}_{\perp})\) based only on measuring intensity profiles require either the large \(L\) limit [22; 23] or measuring the evolution as a function of \(z\)[24; 25]. On the other hand, machine learning approaches can in principle infer topological properties
Figure 1: (A) Schematic of a dimerized lattice of single-mode dielectric waveguides with tunable radiative losses and a possible experiment: the waveguide indexed by \(i\) is excited at the input as indicated by a yellow circle, the intensity distribution is measured in the central area of \(N_{c}\) elements at the output of the sample (the gray rectangle) to generate a dataset for learning the topological properties. (B) Tight binding model visualization of the photonic lattice in (A). The red and orange circles depict the main array – a one-dimensional dimerised SSH-like array of coupled elements. Gray circles illustrate auxiliary arrays constituting leaky channels attached to the main array. The differing dashing between the elements denote different coupling strengths. (C) Band structures of the main (dashed red lines) and auxiliary (gray solid line) arrays in the designed leaky photonic lattice inscribed in glass. (D) Different configurations of the two edges in a finite lattice. (E) The output intensity distribution (colored) overlaid with the proposed lattice cross-section. (F,G) Intensity distribution, numerically obtained in paraxial modeling at the output facet of the waveguide array for (F) the Hermitian (lossless) lattice and (G) the lattice with leaky channels.
using intensity measurements at a fixed propagation distance [26; 27; 28], at least given access to a sufficient amount of high quality training data.
As a specific example, in the following we consider the leaky Su-Schrieffer-Heeger waveguide lattice shown in Fig. 1(A), a dimerized array composed of \(N\) leaky waveguides with elliptical cross-sections of semi-axes \(a_{x,y}\) induced by the refractive index perturbations of magnitude \(n_{A}\)[23]. With increasing coupling between the structural elements, some supermodes of the lattice become radiative, acquiring a finite lifetime. The radiation losses can be fine-tuned by optimizing the effective potential of the environment and radiation channels. This will allow us to study how changes to the input dataset affect the performance of machine learning-based classification of the different topological phases of this lattice. One possible implementation of the radiation channels is by coupling the main array to auxiliary arrays, each consisting of \(N_{\text{env}}\) equidistantly spaced single mode waveguides with an index contrast \(n_{B}\), as shown in Fig. 1(B,D). Examples of feasible parameters close to those employed in the experimental work Ref. [29] are given in Table 1.
Provided only one band of the main array overlaps with the dispersion curve of the side-coupled leaky channels, an initially localised excitation with a broad transverse wavenumber spectrum undergoes gradual radiation and decay during propagation. Therefore, only the top branch remains populated after a certain propagation distance, making it possible to calculate the topological invariant of the band using the projector of the output field distribution, following the method used in Ref. [23]. However, this recipe generally requires knowing the complex-valued field, and phase retrieval can be a challenging task. We will demonstrate the possibility of unravelling the topology of the sample lattice based solely on the output intensity profile in a roughly center-positioned floating window, with the use of machine and deep learning methods.
To simplify propagation simulations, we constructed the tight binding model (TBM) corresponding to the schematic in Fig. 1(B) and determined parameters of the effective Hamiltonian in compliance with the paraxial modeling,
\[i\frac{\partial\psi_{m}}{\partial z}=\hat{H}_{0}\psi_{m}+\epsilon c_{m1}, \tag{4a}\]
\[i\frac{\partial c_{m1}}{\partial z}=\Delta c_{m1}+\epsilon\psi_{m}+J_{\text{env}}c_{m2}, \tag{4b}\]
\[i\frac{\partial c_{ml}}{\partial z}=\Delta c_{ml}+J_{\text{env}}(c_{ml-1}+c_{ml+1}),\quad l=2,\ldots,N_{\text{env}}, \tag{4c}\]
where \(\psi_{m}\) and \(c_{ml}\) are the amplitudes of the optical field in the main array and in the leaky channels, respectively, \(\hat{H}_{0}\) is the \(N\times N\) Hamiltonian of the main array, made of the alternating nearest-neighbor (NN) coupling coefficients \(J_{1,2}\), \(\epsilon\) is the coupling strength between the main array and the environment, \(J_{\text{env}}\) is the NN hopping coefficient in leaky channels, and \(\Delta\) is a detuning of the propagation constants.
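For concreteness, the coupled-mode equations (4) are straightforward to integrate directly. The following minimal sketch (our own code, not the authors'; the parameter values are illustrative choices drawn from Tables 1-2, and the helper names are ours) assembles the full Hamiltonian, with every main-array site carrying its own leaky chain, and propagates a single-waveguide excitation:

```python
# A minimal sketch of the tight-binding model in Eqs. (4): each of the N
# main-array sites carries its own leaky chain of N_env auxiliary sites.
import numpy as np
from scipy.linalg import expm

def build_hamiltonian(bonds, N_env=14, eps=0.9, J_env=1.85, delta=-3.4):
    """Hamiltonian of Eqs. (4); `bonds` lists the N-1 main-array couplings.
    Site ordering: N main sites first, then the chain of site m occupies
    indices N + m*N_env ... N + (m+1)*N_env - 1."""
    N = len(bonds) + 1
    dim = N * (1 + N_env)
    H = np.zeros((dim, dim))
    for m, J in enumerate(bonds):               # SSH-like main array, H_0
        H[m, m + 1] = H[m + 1, m] = J
    for m in range(N):
        base = N + m * N_env
        H[m, base] = H[base, m] = eps           # main site <-> chain entry
        for l in range(N_env):
            H[base + l, base + l] = delta       # detuning Delta of the chain
            if l < N_env - 1:                   # hopping J_env along the chain
                H[base + l, base + l + 1] = H[base + l + 1, base + l] = J_env
    return H

# non-trivial termination: weak bond J1 = 0.5 first, strong J2 = 1.75 second
bonds = [0.5 if k % 2 == 0 else 1.75 for k in range(21)]    # N = 22 sites
H = build_hamiltonian(bonds)
psi = np.zeros(H.shape[0], dtype=complex)
psi[10] = 1.0                                   # excite waveguide i = 11
out = np.abs(expm(-1j * H * 7.6) @ psi) ** 2    # output intensity at L = 7.6 cm
print(out[:22].round(4))                        # intensity in the main array
```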
The dispersion characteristics of the disconnected (at \(\epsilon=0\)) uniform lattices representing the main
\begin{table}
\begin{tabular}{l|l}
Parameter & Value \\ \hline \hline
\(a_{y}\) & 5.4 \(\mu\)m \\
\(a_{x}\) & 4 \(\mu\)m \\
\(d_{1}\) & 17 \(\mu\)m \\
\(d_{2}\) & 23 \(\mu\)m \\
\(\rho\) & 17 \(\mu\)m \\
\(d_{e}\) & 19 \(\mu\)m \\ \hline
\(n_{0}\) & 1.47 \\
\(n_{A}\) & \(1.2\times 10^{-3}\) \\
\(n_{B}\) & \(1.1\times 10^{-3}\) \\
\(\lambda\) & 1030 nm \\ \hline
\end{tabular}
\end{table}
Table 1: Parameters of the designed leaky photonic lattice: semiaxes of elliptical single-mode waveguides \(a_{x,y}\); center-to-center distances \(d_{1,2}\) between waveguides along the vertical axis; center-to-center distance \(\rho\) between waveguides along the horizontal axis. Arrays of auxiliary waveguides are set aside from the main array at a distance \(d_{e}\). Here, \(\lambda\) is the operating wavelength, \(n_{0}\) is the background refractive index of silica glass, \(n_{A,B}\) are the perturbations of the refractive index inside the waveguides of the main array and arrays of the environment, respectively.
Figure 2: (A,B) Evolution characteristics of the field in the main array of the lattice with fixed parameters, obtained in the TBM of the nontrivial SSH array with (gold curves) and without (green curves) leaky channels. The Zak phase at \(z>4\) cm converges to the quantised \(\pi\) value, provided the leaky channels contain \(N_{\text{env}}=14\) elements. (C,D) Field evolution in the \(N\) elements of the main array assembled in a nontrivial (C) and trivial (D) configuration with fixed lattice parameters. The gray line on the right side marks the area of the \(N_{c}\) central waveguides, whose intensity is fed to the input of the neural network.
(SSH) array and environment (env) are given by
\[\beta_{\text{SSH}}(k)=\pm\sqrt{J_{1}^{2}+J_{2}^{2}+2J_{1}J_{2}\cos kL_{y}}, \tag{5a}\]
\[\beta_{\text{env}}(k)=\Delta+2J_{\text{env}}\cos kL_{y}, \tag{5b}\]
and are plotted in Fig. 1(C). As deliberately ensured by design, the environmental array's dispersion curve fully intersects the lower band of the SSH lattice, meaning that only the lower band becomes lossy. Given the dimerization, the main array is known to be topologically nontrivial for \(J_{1}<J_{2}\) and topologically trivial for \(J_{1}>J_{2}\).
To prepare a dataset, the TBM equations (4) were solved numerically. At the input, we excite a single waveguide, designated as \(i\) in Fig. 1(A). The use of a single-element input is justified by its wide spectrum, which populates both bands of the lattice. By iterating over the parameters of the photonic lattice in the ranges indicated in Table 2, we accumulated data for the analysis of the topology of the main array. We take into account that the lattice ends can differ, so that \(N\) can be odd. We select a sample window composed of a finite number \(N_{c}\) of the central waveguides in the main array. Thereby, we aim to solve the classification problem for a finite lattice sample, i.e., to distinguish between different configurations of the two edges based on the intensity distribution measured at the output of the \(N_{c}\) central waveguides. An edge of the SSH main array can be either trivial (0) or non-trivial (1), depending on whether the lattice terminates with a strong or a weak bond; a non-trivial edge supports a midgap topological edge state. This yields four classes in total: 00, 11, 10, 01. The four possible configurations are visualized in Fig. 1(D): 01 (left trivial, right non-trivial), 11 (left non-trivial, right non-trivial), 10 (left non-trivial, right trivial), 00 (left trivial, right trivial). Note that such a setup of the problem differs from that in Ref. [23], where both edges of the lattice had the same termination and, to calculate the field projector, the field distribution over all elements of the main array was used, that is, \(N_{c}=N\) with \(N\) even.
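A hedged sketch of this data-preparation loop is given below; it reuses `build_hamiltonian` from the previous listing, samples the couplings from the ranges of Table 2, and encodes each class by the bond type at the two terminations (all helper names are ours):

```python
# Sketch of the dataset preparation: sample lattice parameters from Table 2,
# build one of the four edge configurations, excite a single (possibly
# shifted) waveguide, and store the N_c central intensities with the label.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N_c = 16
CLASSES = [(0, 0), (1, 1), (1, 0), (0, 1)]      # (left edge, right edge)

def sample_vector(left, right, L=7.6):
    N = 22 if left == right else 23             # identical edges need even N
    J_strong, J_weak = rng.uniform(1.5, 2.0), rng.uniform(0.4, 0.6)
    # the first bond is weak iff the left edge is non-trivial
    J_a, J_b = (J_weak, J_strong) if left else (J_strong, J_weak)
    bonds = [J_a if k % 2 == 0 else J_b for k in range(N - 1)]
    H = build_hamiltonian(bonds, N_env=14, eps=rng.uniform(0.8, 1.0),
                          J_env=rng.uniform(1.7, 2.0),
                          delta=rng.uniform(-3.5, -3.3))
    psi = np.zeros(H.shape[0], dtype=complex)
    i = int(np.ceil(N / 2 + rng.integers(0, 2)))    # excitation at ceil(N/2+l)
    psi[i - 1] = 1.0
    out = np.abs(expm(-1j * H * L) @ psi)[:N] ** 2
    c = (N - N_c) // 2
    return out[c:c + N_c]                       # N_c central intensities

dataset = [(sample_vector(l, r), (l, r)) for l, r in CLASSES for _ in range(100)]
```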
Our previous work [23] presented a proposal for calculating the topological invariant (Zak phase) for this lattice (of classes 00 or 11) using the field projector of the output distribution. This procedure is summarized in Fig. 2. By analyzing the complex-valued field distribution [note that Fig. 2(C,D) only shows the intensity], we compute the Zak phase, which asymptotically approaches \(\pi\) in the nontrivial configuration [see Fig. 2(A)], provided the leaky channels are introduced. At distances 4 cm \(<z<\) 9 cm the lower band is completely depopulated as a result of leakage. This depopulation is also evident in the total wavepacket norm, which converges towards 1/2. However, when the propagation distance is increased beyond \(z>9\) cm, reflections from the ends of the finite environment arrays and the main lattice cause the total wavepacket norm to grow again [see Fig. 2(B)], rendering the method inapplicable. Thus, accurate reconstruction of the topological invariant requires either a large lattice or a well-controlled propagation length to avoid reflections off the ends.
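For reference, the quantisation of the Zak phase itself can be illustrated with a standard Bloch-space Wilson-loop discretisation; note that this is not the real-space projector method of Ref. [23] described above, which works directly from the output field distribution:

```python
# Zak phase of the lower SSH band via a gauge-invariant Wilson loop over the
# Brillouin zone (a textbook computation, independent of the leaky channels).
import numpy as np

def zak_phase(J1, J2, nk=400):
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    us = []
    for k in ks:
        q = J1 + J2 * np.exp(1j * k)
        # lower-band eigenvector of the Bloch Hamiltonian [[0, q], [q*, 0]]
        us.append(np.array([1.0, -q.conj() / abs(q)]) / np.sqrt(2))
    w = 1.0 + 0j                                 # Wilson loop product
    for j in range(nk):
        w *= np.vdot(us[j], us[(j + 1) % nk])
    return -np.angle(w) % (2 * np.pi)

print(zak_phase(0.5, 1.75))   # ~ pi  (non-trivial, J1 < J2)
print(zak_phase(1.75, 0.5))   # ~ 0   (trivial,     J1 > J2)
```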
## III Unsupervised Learning
To begin, we perform a preliminary analysis of the prepared datasets using the t-SNE (t-distributed Stochastic Neighbor Embedding) method. t-SNE is a nonlinear dimensionality reduction algorithm that learns a low-dimensional embedding of the input data; points that are close to each other in the input data set remain close to each other in the embedded space. Ideally, a vector will be most similar to
\begin{table}
\begin{tabular}{l|l}
Parameter & Range \\ \hline \hline
\(J_{k}\) & \([1.5;2]\) \\
\(J_{p}\) & \([0.4;0.6]\) \\
\(J_{\text{env}}\) & \([1.7;2]\) \\
\(\epsilon\) & \([0.8;1]\) \\
\(\Delta\) & \([-3.5;-3.3]\) \\
\(L\) & \([2.6;10.6]\) \\
\(N\) & \([20;26]\) \\
\(N_{\text{env}}\) & \(14\) \\
\(N_{c}\) & \(16\) \\
\end{tabular}
\end{table}
Table 2: Ranges of parameters used in data set preparation. Average values of the listed TBM parameters correspond to the physical quantities in Table 1, as established in paraxial modeling. \(k=2,\ p=1\) in the nontrivial lattice (\(J_{1}<J_{2}\)), and \(k=1,\ p=2\) in the trivial lattice (\(J_{1}>J_{2}\)). While preparing the datasets, \(J_{1,2}\) were uniformly sampled from within the specified intervals for each vector.
Figure 3: t-SNE maps of the system having 4 topological classes depending on its 2 edges: (A-C) Hermitian lattice, (D-F) lattice with leaky channels. The waveguide excited at the input is indexed by \(i\). (A,B,D,E) correspond to the case of single-waveguide excitation: (A,D) \(i=11\) is odd, (B,E) \(i=12\) is even, (C,F) the excited waveguide is randomly chosen within a dimer. For each point in the two-dimensional parameter space there is a corresponding intensity distribution vector of dimension \(N=22\) (or \(N=23\)), depending on the topological class. The four classes are color-coded: 00 (blue), 11 (red), 10 (green), 01 (black).
others obtained from the same lattice configuration, resulting in visible clustering in the low-dimensional embedding.
In this approach, we work with the intensity distribution within \(N_{c}=N\) elements (\(N=22\) or \(23\), to be specific) and assume that the pumped waveguide can be shifted from the center of the lattice. Figure 3 shows t-SNE maps of the system with fixed \(L=7.6\) cm, \(N=22\,(23)\), and two different positions of the initially excited waveguide. In the Hermitian case (leakage disabled), the different classes become mixed up in the embedded space, whereas in the case of a lattice with leaky channels they do not. This qualitatively agrees with the theory in Ref. [23], specifically that the different phases exhibit distinct intensity distributions in their bulk.
However, as soon as we introduce uncertainty, such as in the position of the initial excitation, the topological classes are no longer clearly separable: in the Hermitian case the different classes become mixed up [Fig. 3(C)], whereas in the leaky lattice too many clusters appear [Fig. 3(F)]. Consequently, unsupervised methods are no longer applicable.
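A minimal sketch of such an embedding with scikit-learn's t-SNE is shown below; it assumes the `dataset` list from the earlier generation sketch and, for simplicity, embeds the fixed-length \(N_{c}=16\) windows rather than the full \(N\)-element vectors used in Fig. 3:

```python
# t-SNE map of the output intensity vectors, color-coded by class as in Fig. 3.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

X = np.array([x for x, _ in dataset])
labels = np.array([f"{l}{r}" for _, (l, r) in dataset])

emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X)
for cls, color in zip(["00", "11", "10", "01"], ["b", "r", "g", "k"]):
    pts = emb[labels == cls]
    plt.scatter(pts[:, 0], pts[:, 1], s=8, c=color, label=cls)
plt.legend(); plt.show()
```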
Figure 4 presents the statistical analysis of the data used for panels (C,F) of Fig. 3. This visualization shows that classes 01 and 00, and 10 and 11, can be grouped pairwise. However, the classes with dissimilar edge topologies (01 and 10) are distinguished from the classes with identical edge topologies (00 and 11) by odd \(N\), owing to the distinct input vector lengths (the 23rd waveguide, present only in the odd case, is shown shaded). This postprocessing also reveals significant overlaps of the intensity bars for the 00 and 11 classes in each waveguide of the Hermitian SSH lattice, while in the leaky lattice the bars overlap less and form shifted dimerized patterns, a feature the neural network can pick up.
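The per-waveguide statistics of Fig. 4 amount to a simple grouping over the classes; a short sketch (again assuming the `dataset` list from the generation sketch) is:

```python
# Mean and standard deviation of the output intensity in each waveguide,
# collected separately for every class, as plotted in Fig. 4.
import numpy as np

for cls in [(0, 0), (1, 1), (0, 1), (1, 0)]:
    X = np.array([x for x, c in dataset if c == cls])
    mean, std = X.mean(axis=0), X.std(axis=0)
    print(cls, "mean:", mean.round(3), "std:", std.round(3))
```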
## IV Supervised Learning
For supervised classification of the four topological classes, we apply classical machine learning methods (K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree) and deep learning methods (Multi-layer Perceptron (MLP), Convolutional Neural Network (CNN)); see details in the Supplementary Materials, Section III. The numerical experiments were carried out with varying parameters: the propagation distance \(L\), the total number of waveguides \(N\), and the number of central waveguides in a sample window, \(N_{c}\). The input waveguide \(i\) can be shifted by 1 from the center of the array, according to the expression \(\text{ceil}(N/2+l)\), where \(l\) is 0 or 1. For each \(L\) we obtain a dataset of 32,000 intensity vectors, and subsets of the whole dataset can be grouped by any of these parameters. Let us examine the classification accuracy as a function of the different parameters. The metric we use for this multi-class problem is the accuracy, defined as the fraction of correct model predictions,
\[\text{Accuracy}=\frac{\sum_{i=1}^{n}\mathbb{1}[p_{i}=y_{i}]}{n}, \tag{6}\]
where \(p_{i}\) and \(y_{i}\) are the predicted and correct labels, respectively, and \(\mathbb{1}\) is an indicator function equal to one if the condition is met and zero otherwise.
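A hedged sketch of this supervised comparison using scikit-learn is given below; `accuracy_score` implements exactly the metric of Eq. (6), while the model hyperparameters and the use of the `dataset` list from the earlier sketch are illustrative choices of ours, not the paper's configuration:

```python
# Train the classical models and the MLP on intensity vectors and score them
# with the accuracy of Eq. (6).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X = np.array([x for x, _ in dataset])
y = np.array([f"{l}{r}" for _, (l, r) in dataset])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {"KNN": KNeighborsClassifier(5), "SVM": SVC(),
          "Tree": DecisionTreeClassifier(),
          "MLP": MLPClassifier((64, 64), max_iter=2000)}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy = {accuracy_score(y_te, pred):.3f}")
```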
Figure 5(A) illustrates how the accuracy of the supervised learning techniques varies with the parameter \(L\). When \(L\) is small, theoretical predictions cannot distinguish between the different topological phases, and all methods show similar accuracy plateaus. As \(L\) increases, the accuracy of the machine learning methods grows, see Fig. 5(A). At the same time, the theoretical curve for the Zak phase in the nontrivial case ceases to converge to the quantized invariant value \(\pi\) for \(L=10.6\) cm [see Fig. 2(A)], while the power in the main array tends to grow and exceeds one half [see Fig. 2(B)]. This is explained by reflection from the boundaries of the leaky channels, as the field returns
Figure 4: Statistical characteristics of intensity distributions in waveguides. The datasets were prepared for the Hermitian (A) and leaky (B) cases assuming two possible positions \(i=11,12\) of the initial excitation at \(L=7.6\) cm. The mean value is indicated by markers in the middle of horizontal lines, while the standard deviation is represented by the borders of the lines. The classes are color-coded: 00 (blue squares), 11 (red circles), 01 (black right-facing triangles), 10 (green left-facing triangles). The total number of waveguides \(N\) is 22 (even) for classes 00 and 11, and 23 (odd) for classes 01 and 10.
back to the main array. In the machine learning approach, the requirement of the method of Ref. [23] to know both the intensity and the phase at the output is replaced by statistical information drawn from the dynamics, using only intensity distributions at fixed \(L\).
Machine learning methods perform better for larger \(L\). This may be because, once the radiation reaches the edges, the trivial case can be distinguished from the non-trivial one using not only bulk properties but also the edges themselves, and machine learning methods can take this effect into account. For instance, the trivial and non-trivial cases are even visually distinguishable in the dynamics shown in Fig. 2(C,D): in the non-trivial case the bulk modes couple poorly to the waveguides at the edges. Note that if we increase the number of auxiliary waveguides \(N_{\text{env}}\), the theoretical power curve will converge to 0.5, but the reflection off the main-array edges will still manifest at larger propagation distances. Thus, neural network methods are applicable in a wider range of cases than the theoretical scheme based on the projector calculation.
Based on the results summarised in Fig. 5(A), we conclude that classical machine learning methods show lower accuracy than neural networks and the SVM. One of the two most promising models, the MLP, was chosen for a more thorough examination in Fig. 6.
As noted above, training was performed using \(N_{c}<N\) central waveguides. Figure 6(A) shows the dependence of the classification accuracy on the number of central waveguides, with all values of \(L\) included in the training batches. In the initially proposed theoretical scheme, we calculated the field projector for \(N_{c}=N\) elements, but we can formally calculate it for any \(N_{c}<N\), as shown in Fig. 6(B). The Zak phase is seen to converge better to the correct quantised value for larger \(N_{c}\), and this condition is also necessary for increasing the accuracy of the machine learning algorithms: in Fig. 6(A) the accuracy increases as the \(N_{c}/N\) ratio increases.
To better understand the performance of the supervised classification approach at distinguishing the different edge types, we compare the topological SSH lattice with an even number of elements to its non-topological counterpart, where dimerization arises from an alternating difference in the propagation constants (\(\Delta_{1}\) and \(\Delta_{2}=-\Delta_{1}\)), whereas the coupling between neighboring elements is uniform and equal to \(J\), as schematically shown in Fig. 7(A). To prepare the corresponding datasets, the parameters of the non-topological lattice (\(\Delta_{1}\) and \(J\)) are chosen such that its band structure coincides with the topological one (see Supplementary Materials, Section I). We introduce trivial edge defects as detunings of the propagation constant in the edge elements: the defect potential for the left end is \(\tilde{\Delta}_{1}=\Delta_{1}(1-q_{1})\), whereas the defect potential for the right end is \(\tilde{\Delta}_{2}=\Delta_{2}(1-q_{2})\). We compare the accuracy of the neural network at three propagation distances [see Fig. 7(B)] for the topological SSH array and the non-topological array with edge defects in distinguishing two classes: both edges either support confined solutions (class 11) or not (class 00).
Figure 5: (A) Accuracy of supervised learning methods as a function of the propagation distance \(L\). (B) Scheme of the convolutional neural network, which takes the intensity distribution at \(z=L\) as the input and determines the topology of the lattice edges; \(N_{c}=16\).
Figure 6: (A) Accuracy of classification by deep learning methods depending on parameters: the total number of waveguides \(N\) and the number of the central waveguides \(N_{c}\) involved in the training. (B) Theoretical dependence of the Zak phase on the propagation distance and \(N_{c}\) in the nontrivial lattice of \(N=22\) elements.
We find that for small defect amplitudes the accuracy in the non-topological lattice is low compared to the topological one, since the defect is not connected to the bulk properties (unlike in the topological case). However, the bulk modes also change when the defect amplitude becomes large, leading to an increase in the model accuracy.
## V Disorder and Transfer Learning
Transfer learning refers to the use of a model trained on one set of data to make accurate predictions on a new task. Here we consider the performance of models trained on ideal data when classifying data generated with different model parameters. If the quality metric falls only slightly, we can conclude that the model has generalization ability. This is particularly important in the context of nanophotonic circuits, where inevitable disorder leads to sample-to-sample variations of device parameters.
First, we note that no generalization ability is observed for the parameter \(L\): the accuracy drops significantly when testing on \(L\) different from the propagation distance used for the training data. On the other hand, we observe generalization over some \(N\), corresponding to attaching dimers to both edges of the main array, since such an addition of elements does not qualitatively change the topology of the lattice (see the cross-validation control map for the parameter \(N\) in the Supplementary Materials, Section IV).
Next, we examine a transfer learning approach that allows for the reuse of models pretrained at a fixed propagation distance of \(L=10.6\) cm [the last point in Fig. 5(A)] on models with disorder. We introduce perturbations of two types into the SSH Hamiltonian coefficients: off-diagonal disorder in the inter-site coupling strengths and on-site disorder in the propagation constants. Incorporating disorder involves adding random variables to the coefficients of the Hamiltonian. For example, the off-diagonal disorder perturbs each coupling coefficient by the random variable \(l\langle d\rangle\,\text{mean}(J_{1},J_{2})\), where \(l\) is uniformly distributed in the range \([-1/2,1/2]\) and \(\langle d\rangle\) is the disorder strength. This is a chiral-type disorder in the sense that the Hamiltonian describing the disordered system respects the chiral symmetry, so its topological edge states remain at zero energy. We train the neural network using a non-disordered array and test it on the disordered lattice, identifying a range of disorder strengths in which the previously trained neural network operates with high confidence.
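A minimal sketch of the two disorder types, with our own helper names, is given below; the returned arrays would be added to the bond and diagonal entries of the main-array Hamiltonian, respectively:

```python
# Chiral off-diagonal disorder shifts every coupling by l*<d>*mean(J1, J2)
# with l uniform in [-1/2, 1/2]; on-site disorder randomly detunes the
# propagation constants of the main-array sites (breaking chiral symmetry).
import numpy as np

rng = np.random.default_rng(1)

def off_diagonal_disorder(bonds, d_mean):
    bonds = np.asarray(bonds, dtype=float)
    scale = d_mean * bonds.mean()               # ~ <d> * mean(J1, J2)
    return bonds + rng.uniform(-0.5, 0.5, size=bonds.size) * scale

def on_site_disorder(N, d_mean, J_mean):
    # random detunings added to the main-array diagonal
    return rng.uniform(-0.5, 0.5, size=N) * d_mean * J_mean
```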
To quantify the impact of the disorder on the data, we compute the similarity between the output intensities. Specifically, we compute the output fields \(\psi_{m}(\langle d\rangle,i)^{1,2}\), where the indices \(1\) and \(2\) correspond to diagonal and off-diagonal disorder, respectively, and \(i\) labels the specific disorder realization. We then introduce the intensity overlap \(\mathcal{O}^{1,2}(\langle d\rangle,i)=\sum_{m}|\psi_{m}(\langle d\rangle,i)^{1,2}|^{2}\cdot|\psi_{m}^{0}|^{2}\), where the summation is taken over the waveguides of the main array and \(\psi_{m}^{0}\) is the output distribution in the disorder-free case. This overlap measures the similarity between the two distributions and quantifies how much the output changes due to disorder. To plot the overlap measure, we calculate \(\mathcal{O}^{1,2}(\langle d\rangle,i)\) over \(4000\) disorder realizations for each value of \(\langle d\rangle\). To standardize the plotted functions, we divide them by the value of \(\mathcal{O}^{1,2}(\langle d\rangle,i)\) at \(\langle d\rangle=0\). This normalization allows us to compare the variability of the overlap measure across different scenarios. The shaded areas in Fig. 8(A) represent the corresponding ranges. Note that we have rescaled the diagonal disorder strength, \(\langle d_{\text{diag}}\rangle=4\langle d_{\text{off-diag}}\rangle\), such
Figure 7: (A) Schematics of the topological (upper row) dimerized array and the non-topological (lower row) dimer lattice with defect potentials \(\tilde{\Delta}_{1,2}\) at the edges. (B) The accuracy of the neural network trained for the non-topological case for different values of the edge defect detuning \(q_{1}\), introduced as \(\tilde{\Delta}_{1,2}=\Delta_{1,2}(1-q_{1})\), and different propagation distances \(L=7.6\) cm (red dots), \(L=8.6\) cm (blue left-facing triangles), \(L=10.6\) cm (black right-facing triangles). For comparison, the colored horizontal lines depict the accuracy in the topological case for the corresponding \(L\). (C) The band structure of the finite non-topological lattice depending on the defect detuning, at the fixed number of elements within the main array \(N=22\). The shading shows bands for all possible coupling coefficients, \(J\), and detunings, \(\Delta_{1}=-\Delta_{2}\), that were utilized to generate the datasets. (D) Profiles of the modes bound to the ends of the non-topological lattice. Colors and shapes of the markers in (C) in the representative spectral positions correspond to the profiles in (D).
that for a given \(\langle d\rangle\) the two forms of disorder have a similar effect on the overlap measure.
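The overlap diagnostic reduces to a weighted inner product of intensity patterns; a short sketch (our own helper names) is:

```python
# O(<d>, i) = sum_m |psi_m(<d>, i)|^2 |psi_m^0|^2 over the main array,
# normalised by its disorder-free value, as done over 4000 realisations
# per disorder strength.
import numpy as np

def overlap(I, I0):
    return float(np.sum(np.asarray(I) * np.asarray(I0)))

def normalised_overlaps(realisations, I0):
    o0 = overlap(I0, I0)                        # reference value at <d> = 0
    return np.array([overlap(I, I0) for I in realisations]) / o0
```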
To demonstrate transfer learning for disordered arrays, we train the neural network using a non-disordered array and test it on the disordered lattice [see Fig. 8(B)], with the parameter ranges as in Table 2. The accuracy curves are similar for both types of disorder, showing a decrease in accuracy as the disorder amplitude increases. As the spread of the overlap measure widens, the output intensity changes significantly, which ultimately leads to a sharp decline in the classification accuracy.
To estimate the confidence of the trained neural network, we study the output of its last layer [see Fig. 5(B)] in detail. The softmax function returns the probabilities of the four classes. Here we fix the class 00 (both ends trivial), but the results are comparable for the other classes as well. If the model assigns a high probability to a particular class, it is more confident in that prediction than if it assigns a lower probability.
We create a set of test vectors for each disorder amplitude and select the vectors that have the highest probability of belonging to class 00. If such a vector indeed belongs to class 00, we label the probability as true; otherwise, it is labeled as false. We then average the false and true answers to plot Fig. 8(C,D). Interestingly, as the accuracy of the neural network decreases, its level of certainty in both accurate and inaccurate responses increases. In other words, the neural network gives the wrong answer more confidently as the disorder strength is increased, indicating that fabrication disorder can act as an adversarial perturbation.
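A hedged sketch of this confidence analysis is shown below; `probs` is assumed to be the \((n_{\text{samples}},4)\) softmax output of the network and `y` the integer labels, with class 00 encoded as 0:

```python
# For each disorder strength, take the test vectors assigned the highest
# softmax probability of class 00 and average that probability separately
# over correct ("true") and incorrect ("false") predictions.
import numpy as np

def confidence_split(probs, y, cls=0, top=100):
    order = np.argsort(probs[:, cls])[::-1][:top]   # most confident in class 00
    p, correct = probs[order, cls], (y[order] == cls)
    # note: one of the means may be NaN if all top picks fall in one group
    return p[correct].mean(), p[~correct].mean()    # (true, false) confidences
```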
## VI Conclusion
We have studied the performance of a variety of machine learning techniques at distinguishing different topological phases of leaky photonic lattices using measurements of the bulk intensity profile after a fixed propagation distance. First, we found that uncertainty in the initial conditions (such as the excited waveguide) reduces the quality of unsupervised clustering, leading to either mixing between different classes or the prediction of too many classes. We then compared the performance of different supervised learning methods, finding that high accuracy can be achieved for sufficiently large propagation distances. The classification accuracy can be further improved by increasing the number of bulk waveguide intensities used. Finally, we studied the transfer learning ability of neural network-based classifiers. While the accuracy drops significantly if the network is trained on data obtained using a different propagation distance, the networks can accurately classify data from systems with sufficiently weak disorder, thus avoiding extensive training on each new system. Our approach for classifying lattices based on incomplete measurements can be further developed to solve the more general problem of reconstructing a lattice Hamiltonian with some a priori knowledge of its symmetries, in various fields including photonics, condensed matter physics, and quantum computing.
## Acknowledgements
The authors acknowledge useful discussions with Clemens Gneiting, Alexey Horkin and Nikita Kulikov. E.S. and L.S. are supported in part by the MSHE under project No. 0729-2021-013. E.S. thanks the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS" (Grant No. 22-1-5-80-1). D.L. acknowledges support from the National Research Foundation, Singapore and A*STAR under its CQT Bridging Grant. F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant Number JPMJMS2061], the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2236-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06. D.S. acknowledges support from the Australian Research
Figure 8: (A) Overlap measure variation induced by the disorder: shaded areas are the ranges of variance due to disorder over an ensemble of 4000 disorder realizations (green for diagonal disorder, gray for off-diagonal disorder); asterisks and dots are mean values. All parameters of the lattice are fixed. (B) Transfer learning for the disordered lattice. We train the neural network in the absence of disorder, \(\langle d\rangle=0\), and test the prediction accuracy for different values of disorder. All parameters of the lattice are varied according to Table 2. (C,D) Probability assigned to false (C) and true (D) answers of the neural network for different values of disorder (green bars for diagonal disorder, gray bars for off-diagonal disorder).
Council (FT230100058) and the Japan Society for the Promotion of Science under the Postdoctoral Fellowship Program for Foreign Researchers.
|
2306.02911 | Catch Me If You Can: Deep Meta-RL for Search-and-Rescue using LoRa UAV
Networks | Long range (LoRa) wireless networks have been widely proposed as efficient
wireless access networks for the battery-constrained Internet of Things (IoT)
devices. In many practical search-and-rescue (SAR) operations, one challenging
problem is finding the location of devices carried by a lost person. However,
using a LoRa-based IoT network for SAR operations will have a limited coverage
caused by high signal attenuation due to the terrestrial blockages especially
in highly remote areas. To overcome this challenge, the use of unmanned aerial
vehicles (UAVs) as a flying LoRa gateway to transfer messages from ground LoRa
nodes to the ground rescue station can be a promising solution. In this paper,
the problem of the flying LoRa (FL) gateway control in the search-and-rescue
system using the UAV-assisted LoRa network is modeled as a partially observable
Markov decision process. Then, a deep meta-RL-based policy is proposed to
control the FL gateway trajectory during SAR operation. For initialization of
proposed deep meta-RL-based policy, first, a deep RL-based policy is designed
to determine the adaptive FL gateway trajectory in a fixed search environment
including a fixed radio geometry. Then, as a general solution, a deep meta-RL
framework is used for SAR in any new and unknown environments to integrate the
prior FL gateway experience with information collected from the other search
environments and rapidly adapt the SAR policy model for SAR operation in a new
environment. The proposed UAV-assisted LoRa network is then experimentally
designed and implemented. Practical evaluation results show that if the deep
meta-RL based control policy is applied instead of the deep RL-based one, the
number of SAR time slots decreases from 141 to 50. | Mehdi Naderi Soorki, Hossein Aghajari, Sajad Ahmadinabi, Hamed Bakhtiari Babadegani, Christina Chaccour, Walid Saad | 2023-06-05T14:15:27Z | http://arxiv.org/abs/2306.02911v2 | # Catch Me If You Can: Deep Meta-RL for Search-and-Rescue using LoRa UAV Networks
###### Abstract
Long range (LoRa) wireless networks have been widely proposed as efficient wireless access networks for battery-constrained Internet of Things (IoT) devices. In many practical search-and-rescue (SAR) operations, one challenging problem is finding the location of devices carried by a lost person. However, using a LoRa-based IoT network for SAR operations will have a limited coverage caused by high signal attenuation due to terrestrial blockages, especially in highly remote areas. To overcome this challenge, the use of unmanned aerial vehicles (UAVs) as flying LoRa gateways that transfer messages from ground LoRa nodes to the ground rescue station can be a promising solution. In this paper, an artificial intelligence-empowered SAR operation framework using a UAV-assisted LoRa network for different unknown search environments is designed and implemented. The problem of flying LoRa (FL) gateway control in the SAR system using the UAV-assisted LoRa network is modeled as a partially observable Markov decision process. Then, a deep meta-RL-based policy is proposed to control the FL gateway trajectory during the SAR operation. For the initialization of the proposed deep meta-RL-based policy, first, a deep RL-based policy is designed to determine the adaptive FL gateway trajectory in a fixed search environment with a fixed radio geometry. Then, as a general solution, a deep meta-RL framework is used for SAR in any new and unknown environment to integrate the prior FL gateway experience with information collected from the other search environments and rapidly adapt the SAR policy model to the new environment. The proposed UAV-assisted LoRa network is then experimentally designed and implemented. To analyze the performance of the proposed framework in real-world scenarios, the proposed SAR system is tested in two different target areas: a wide plain and a slotted canyon at the Mongasht mountain ranges, Iran. Practical evaluation results show that if the deep meta-RL-based control policy is applied instead of the deep RL-based one, the number of SAR time slots decreases from 141 to 50. Moreover, the average distances of the UAV trajectories under the deep meta-RL and deep RL based policies from the UAV trajectory under the optimal policy are \(619\) and \(1930\) meters, respectively, during the SAR operation time.
LoRa technology, Unmanned aerial vehicle, Deep meta-reinforcement learning, Search-and-rescue operation
## I Introduction
Unmanned aerial vehicles (UAVs) are playing an increasingly important role in next-generation wireless networks such as 5G and beyond [1]. For instance, UAVs can guarantee high-speed and ultra-reliable connectivity while also extending cellular network coverage to three-dimensional (3D) space [2]. In particular, UAVs can be temporarily deployed to cover Internet-of-Things (IoT) devices and establish communications without costly conventional network infrastructure. In this regard, UAV-assisted wireless networks can decrease operational expenditures and improve the efficiency of various IoT applications.
With the proliferation of UAV-assisted wireless access networks, our reliance on IoT applications such as smart farming, smart factories, and public safety will be more pronounced [3]. However, to support this IoT trend, a reliable wireless access technology with wide reach and low power consumption is required. In this regard, the so-called long-range (LoRa) communication protocol has been proposed as a promising technology for highly energy-efficient and long-range communication [3, 4]. These two characteristics make LoRa technology an appropriate solution for battery-constrained IoT devices that are often deployed in dispersed rural areas. A typical LoRa-based IoT network begins with a LoRa-enabled embedded sensor node that sends data to the LoRa gateway. Then, data can be sent from the LoRa gateway over a cellular network and routed to application servers located in the network core. One of the key challenges of LoRa-based IoT networks is localization in outdoor environments, which is needed for applications such as navigation and tracking, air traffic control, remote sensing, intelligence, surveillance, and reconnaissance, and search-and-rescue (SAR) operations [5].
Existing localization techniques in wireless LoRa networks are mainly based on the time difference of arrival (TDOA) and the received signal strength index (RSSI) schemes [6]. In the so-called TDOA positioning methods with LoRa networks, the distances between a LoRa node and each LoRa gateway are estimated through the time of arrival in a trilateration approach [6, 7]. Thus, this method requires a precise clock to synchronize all LoRa nodes [8]. This implies additional communication overhead and higher cost and, thus, this solution is not appropriate for low-power and low-cost LoRa devices [8]. In the RSSI trilateration positioning
methods, the end-device location is estimated from the RSSI values measured when it transmits data to the LoRa gateways, without the requirement of clock synchronization. Thus, RSSI-based techniques are employed to develop positioning functions using RSSI in LoRa networks [7].
Several recent works, such as [5] and [8, 9, 10, 11], analyze RSSI-based LoRa localization systems for different scenarios. In [5], the authors proposed six new RSSI-based localization algorithms to reduce the effect of non-Gaussian noise in LoRa networks by either eliminating bad anchor nodes or selecting good anchor nodes during localization. In this work, the performance of all localization algorithms is investigated using a simulation model together with real-data measurements from the developed LoRa localization system. In [9], since noise-like electronic interference and blocking can affect the accuracy of localization, the authors propose a new approach to improve the performance of a LoRa-based localization system in noisy outdoor environments. Specifically, the work in [9] developed two new localization algorithms based on a traditional linear localization model. The first algorithm identifies the noisy measurements using k-means clustering and then recalculates the localization outcome without the node that yields the largest estimated RSSI error. The second algorithm in [9] relies on the assumption that the localization error is low if the estimated RSSI errors from the estimated location of the target node to the other anchor nodes are small; the best solution is then chosen by calculating the estimated RSSI errors for all possible estimated locations. In [10], the authors combine fingerprint-based and model-based RSSI methods to solve the outdoor positioning problem, adopting an interpolation-based approach to build a 3D model with 36 RSSI sampling points and achieve a localization model with higher accuracy. The work in [11] proposed an RSSI-based method to accurately identify the location of a vehicle, equipped with a LoRa node, travelling along a known path which is divided into segments of length equal to or shorter than the desired accuracy. Values of the RSSI measured by the LoRa gateways are collected and used to characterize each segment. In [8], an RSSI-based method is proposed for the localization of cattle collars communicating over LoRa radios. In particular, the authors developed RSSI-based distance estimation with real-time adjustment of the RSSI-distance mapping, taking advantage of the communication between collar nodes and the gateway. However, the works in [5] and [8, 9, 10, 11] are based on the RSSI method and thus need to deploy a large number of anchor points outdoors on a large scale for SAR operations, which is not practical in highly remote areas. Moreover, the works in [5] and [8, 9, 10, 11] do not investigate the potential of UAV-assisted LoRa networks.
Fortunately, employing a UAV as a flying gateway in a localization and tracking system can bring many attractive advantages due to its high probability of line-of-sight (LoS) links, high mobility, on-demand deployment, and low cost [12]. Several recent works, such as [13, 14], and [15], have proposed the use of UAVs in LoRa networks. In [13], the authors implemented a prototype of a LoRa-based air quality sensor on a UAV, together with a web UI that lets the user configure the UAV route and view the sensed data immediately. In [14], a UAV-assisted LoRa architecture is suggested in which UAVs act as relays for the traffic generated between LoRa nodes and a base station (BS); the authors focus on designing a distributed topology control algorithm that periodically updates the UAV topology to adapt to the movement of the ground-based LoRa nodes. The work in [15] proposed measuring the RSSI at a LoRa gateway for indoor, suburban, and urban areas, when the LoRa transmitter is in another indoor location or mounted on a UAV. Their results show that, for a suburban environment, the drone height and antenna orientation have a crucial impact on the RSSI; specifically, a vertical transmitting antenna yields a stronger received signal. In [16], the authors experimentally analyzed and modeled the channel of UAV-to-ground LoRa links in urban environments and discussed the dependencies between transmission power, spreading factor (SF), RSSI, and signal-to-noise ratio (SNR). However, the prior art in [13, 14] and [15] did not apply a practical localization method for UAV-assisted LoRa networks, namely when used for SAR operations in highly remote areas.
The works in [17, 18], and [19] investigated the use of wearable LoRa radios to foster SAR missions in mountain environments. In [17], the authors designed a localization system for SAR operations using LoRa and characterized the path loss of a LoRa channel in mountain scenarios. However, the work in [17] mainly focused on LoRa channel modeling in a specific scenario without considering a flying LoRa gateway and UAVs. Moreover, the authors in [17] did not propose a general adaptive solution for SAR in new unknown environments. In [18] and [19], the authors reported measurements of the excess aerial path loss for modeling ground-to-UAV links in a real mountain canyon, involving a receiving UAV and a transmitting LoRa radio worn by a volunteer lying on the rocks; they also demonstrated that LoRa radio propagation in the canyon is season-independent. Consequently, it is highly essential to practically design and analyze a localization system that leverages a UAV-assisted LoRa network for SAR operations, in particular for highly remote areas. This is due to the fact that the ground-to-UAV LoRa link has less path loss compared to the ground-to-ground LoRa link. Moreover, by using the UAV as an FL gateway, the gateway location can be moved in the sky in a quicker and more flexible way than in ground scenarios.
The main contribution of this paper is the implementation and analysis of a novel artificial intelligence-empowered search-and-rescue operation framework using a UAV-assisted LoRa network that can be applied to different unknown search environments. The proposed approach autonomously adapts the control policy of the UAV trajectory to the spatial geometry of a new search environment, thereby allowing the system to determine the unknown location of a lost person. To solve the problem of FL gateway control in the search-and-rescue system using the UAV-assisted LoRa network, we formulate a stochastic optimization problem whose goal is to maximize an episodic return that includes the received power from the LoRa node carried by the lost person over future time slots. Next, we model the FL gateway control problem as a partially observable Markov decision process (POMDP). Then, a deep reinforcement learning (RL) policy is proposed to adaptively control the FL gateway trajectory during SAR operation in a given environment. To find a near-optimal solution, a parametric functional-form policy is implemented using a deep recurrent neural network (RNN) that can directly search the optimal policies of the FL gateway controllers. Then, to increase the generalizability of our framework, a control policy using deep meta-RL is designed. By applying deep meta-RL, the controller can integrate the prior FL gateway experience with information collected from the other search environments to train a rapidly adaptive policy model for SAR operation in a new environment [20]. To analyze the performance of our proposed framework in the real world, we have experimentally designed and implemented our UAV-assisted LoRa network, including the LoRa end node as well as the FL and ground LoRa (GL) gateways. We have then carried out extensive experiments at the Mongasht mountain ranges, near the areas of Ghaletol city, Khuzestan province, Iran. We have practically tested our SAR system in two different target areas: a wide plain and a slotted canyon. Practical evaluation results show that the FL gateway hovers over the lost person's location after 50 and 141 time slots under the deep meta-RL and deep RL control policies, respectively. Moreover, the average distances of the UAV trajectories under the deep meta-RL and deep RL based policies from the UAV trajectory under the optimal policy are \(619\) and \(1930\) meters, respectively, during the SAR operation time.
The rest of the paper is organized as follows. Section II describes the system model and problem formulation. Section III proposes the deep meta-RL framework for UAV gateway control in different unknown SAR environments. In Section IV, we introduce our experimental setup, including the hardware that we have used to implement our UAV-assisted LoRa network, and our measurement scenarios. Then, in Section V, we numerically evaluate the performance of our SAR system for highly remote areas and of our proposed deep meta-RL-based UAV control policy, which is trained on real data. Finally, conclusions are drawn in Section VI.
## II System model and problem formulation
### _System Model_
Consider a UAV-assisted LoRa network composed of a LoRa node, an FL gateway, and one GL gateway. Here, a lost person is equipped with a LoRa node that periodically transmits a known signal, called a beacon, of duration \(\tau\). The transmission power of the LoRa node is \(P_{Tx}\) in dB. The location of the lost person is an unknown point of interest (POI), \((x_{P},y_{P})\), in the search area of interest (SAI), \(\mathcal{C}\subset\mathbb{R}^{2}\).
The FL gateway is a LoRa gateway mounted on a UAV. The FL gateway is equipped with GPS and LoRa modules. At each time slot \(t\), the FL gateway transmits a message, \(\mathbf{m}_{t}=[\beta_{t},\gamma_{t},x_{t},y_{t}]\), to the GL gateway. This message contains the RSSI \(\beta_{t}\) and SNR \(\gamma_{t}\) of the LoRa beacon signal received from the LoRa node, as well as the FL gateway location, \((x_{t},y_{t})\). For simplicity, we assume the UAV flies at the same height \(z\) with the same speed \(v\) at all times. In our model, the control action of the UAV is \(a_{t}\in\{E,W,N,S,H\}\), where \(E\), \(W\), \(N\), \(S\), and \(H\) represent movement across the four cardinal directions, i.e., east, west, north, and south, as well as hovering at the current location. For example, if \(a_{t}=N\), then at the next time slot \(t+1\) and during \(\tau\), the FL gateway will be at \((x_{t+1},y_{t+1})=(x_{t},y_{t}+v\tau)\). Following the received data \(\beta_{t},\gamma_{t}\) in the message \(\mathbf{m}_{t}\) at the LoRa station, the received signal power \(P_{Rx,t}\) at the FL gateway at time slot \(t\) can be computed as [21]:
\[P_{Rx,t}=\beta_{t}-10\log_{10}(1+10^{-\frac{\gamma_{t}}{10}}). \tag{1}\]
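As a quick sanity check, Eq. (1) is straightforward to evaluate; the numbers below are purely illustrative:

```python
# Recover the received signal power (in the same dB scale as the RSSI) from
# the reported RSSI and SNR of the beacon, per Eq. (1).
import math

def received_power(rssi_dbm: float, snr_db: float) -> float:
    return rssi_dbm - 10 * math.log10(1 + 10 ** (-snr_db / 10))

print(received_power(-95.0, 7.5))   # e.g. RSSI = -95 dBm, SNR = 7.5 dB
```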
The resulting received power \(P_{Rx,t}\) at the FL gateway from the unknown location of the LoRa node is a random variable. This is due to the fact that the LoRa signals transmitted from the LoRa node antenna will often encounter the spatial geometry of the SAI, including random obstacles such as trees and rocks, before reaching a given moving FL gateway receiver. The radiating electromagnetic field is reflected, diffracted, and scattered by these various obstacles, commonly resulting in a random multiplicity of rays impinging on the FL gateway antenna [21]. Thus, the radio geometry of a given SAI is directly affected by its spatial geometry. Generally, the statistically varying received signal power \(P_{Rx,t}\) over the wireless link between the mobile FL gateway and the LoRa node is modeled as follows [21]:
\[P_{Rx,t}=10\log_{10}\nu^{2}+\omega+10\log_{10}g(d_{t})+P_{Tx}+10\log_{10}(G_{Tx}G_{Rx}), \tag{2}\]
where \(10\log_{10}g(d_{t})+P_{Tx}+10\log_{10}(G_{Tx}G_{Rx})\) is the far-field average power \(\bar{P}_{Rx,t}\), with \(G_{Tx}\) and \(G_{Rx}\) the antenna gains of the transmitter and the receiver. \(\omega\) is the shadow-fading random variable due to large obstacles. \(\nu\) is the multipath fading variable, which results from the rate of change of the signal being proportional to the FL gateway velocity. Here, \(d_{t}=\sqrt{(x_{t}-x_{P})^{2}+(y_{t}-y_{P})^{2}+z^{2}}\) represents the distance between the FL gateway carried by the UAV and the unknown location of the lost person at time slot \(t\). Given the random spatial geometry of each SAI, the resulting radio geometry parameters, such as \(\omega\), \(\nu\), and the function \(g\), are unknown. In our model, we consider the worst-case scenario in which no model is available for the radio geometry parameters, because the SAI is generally unknown in SAR operations. However, \(g\) is a decreasing function with respect to \(d_{t}\) in UAV-assisted LoRa networks [21]. Then, we define a view circle centered at the FL gateway with radius \(d_{t}\) as follows:
\[\mathcal{C}_{t}=\{(x,y)\,|\,(x-x_{t})^{2}+(y-y_{t})^{2}+z^{2}\leq d_{t}^{2}\}. \tag{3}\]
Indeed, from the point of view of the FL gateway, the possible locations of the lost person lie on the edge of this view circle \(\mathcal{C}_{t}\) at time slot \(t\). Since the function \(g\) decreases with the distance \(d\), if the FL gateway moves toward the lost person correctly, the radius of this circle decreases.
Fig. 1 illustrates our smart SAR system using the UAV-assisted LoRa network during three consecutive time slots \(t\), \(t+1\), and \(t+2\). During these time slots, the portable rescue station equipped with the GL gateway receives three messages, \(\mathbf{m}_{t}\), \(\mathbf{m}_{t+1}\), and \(\mathbf{m}_{t+2}\). As we can see in Fig. 1, the FL gateway has the view circles \(\mathcal{C}_{t}\), \(\mathcal{C}_{t+1}\), and \(\mathcal{C}_{t+2}\) at time slots \(t\), \(t+1\), and \(t+2\). Following the received messages from the FL gateway, the possible locations of the lost person lie on the edges of these view circles. As shown in Fig. 1, using the FL gateway control algorithm, the FL gateway moves in the direction that increases the received power over the time slots, \(P_{Rx,t}<P_{Rx,t+1}<P_{Rx,t+2}\); thus, the FL gateway moves toward the unknown location of the lost person. Note that, during the SAR operation, the GL gateway at the portable rescue station receives data messages from the FL gateway over LoRa links. Thus, the FL gateway formation algorithm runs at the portable rescue station, and the whole SAR operation is monitored there in a real-time manner. Considering the stochastic changes in the received power at the FL gateway, designing an FL gateway control policy to move the UAV toward the lost person's location is highly challenging, particularly for different SAI scenarios with unknown radio geometry.
### _Problem formulation_
Our goal is to characterize the FL gateway control policy that moves the UAV toward the lost person over a future finite horizon \(\mathcal{T}_{t}=\{t^{\prime}|t^{\prime}=t+1,...,t+T\}\) of length \(T\) time slots. The objective of this policy is to minimize the size of the set of possible lost-person locations, \(|\mathcal{C}_{t}|\), on the 2D ground plane. Following (2) and (3), and since \(g\) is a decreasing function with respect to \(d_{t}\) in UAV-assisted LoRa networks, minimizing \(|\mathcal{C}_{t}|\) is equivalent to increasing the received power, \(P_{Rx,t}\), during the corresponding time slots. The FL gateway control policy at a given slot \(t\) depends on the unknown radio geometry of the SAI, which is a consequence of the stochastic nature of the wireless channel. Formally, we define a policy \(\Pi_{t}=\{a_{t^{\prime}}|\forall t^{\prime}\in\mathcal{T}_{t}\}\) for the controller that assigns the next locations of the FL gateway. Consequently, we formulate the FL gateway control problem in our SAR system as follows:
\[\max_{\{\Pi_{t}\}}\sum_{t^{\prime}=t+1}^{t+T}\delta^{(t^{\prime}-t)}P_{Rx,t^{\prime}}, \tag{4}\]
s.t.
\[a_{t^{\prime}}\in\{E,W,N,S,H\},\forall t^{\prime}\in\mathcal{T}_{t}, \tag{5}\]
where \(\delta\) is a discount factor. Maximizing the objective function in (4) ensures that the received power at the FL gateway increases while the view circle area of the FL gateway decreases. Thus, the FL gateway moves toward the lost person during the considered time slots.
In practice, the solution of (4) faces the following challenges. First, since the location of the lost person is unknown, it is difficult to obtain a closed-form expression of the objective function in (4). Second, the received power is a stochastic variable because the radio geometry of the wireless channel is dynamic and unknown. The complexity of the stochastic optimization problem in (4) becomes more significant due to the unknown probabilities of possible random network changes, such as the fading over LoRa links and the user's location. Thus, the FL gateway control problem in (4) is a stochastic optimization problem that does not admit a closed-form solution and has an exponential complexity [22]. Therefore, we propose a framework based on the principles of deep meta-RL for SAR operation in different unknown SAIs to solve the optimization problem in (4) with low complexity and in an adaptive manner. The proposed deep meta-RL method for FL gateway formation in a UAV-assisted LoRa network takes only the UAV initial position and the RSSI and SNR of the received LoRa beacon signal as input, and outputs the UAV trajectories after several episodes so as to move the UAV toward the location of the lost person.
## III Deep Meta-Reinforcement Learning for SAR Operation
In this section, we present the proposed adaptive control policy based on a deep meta-RL framework to solve the FL gateway control problem in (4). Traditional policy gradient-based RL algorithms can only determine the adaptive FL gateway control policy in a fixed SAI with a fixed radio geometry. However, the meta-RL framework [23] is a novel learning approach that can integrate the prior FL gateway experience with information collected from other SAI radio geometries to train a rapidly adaptive policy model for SAR operation in a new SAI. Therefore, the proposed deep meta-RL can obtain FL gateway control policies that can be quickly updated to adapt to new radio geometry properties using only a few further training steps. Next, we first introduce the deep RL algorithm for an adaptive FL gateway control policy in a given environment. Then, we explain the framework of the deep meta-RL algorithm that trains a rapidly adaptive policy model for a new SAI using the information previously collected from the given environment.
### _Deep RL framework for a given environment_
We model the problem in (4) as a partially observable Markov decision process (POMDP) represented by the tuple \(\{\mathcal{S},\mathcal{A},\mathcal{O},P,R,o_{0}\}\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action
Figure 1: An illustrative example of the system model.
space, \(\mathcal{O}\) is the observation space, \(P\) is the stochastic state transition function, \(P(s^{\prime},s,a)=\Pr(s_{t+1}=s^{\prime}|s_{t}=s,a_{t}=a)\), \(R_{t}(a_{t},s_{t})\) is the immediate reward function, and \(o_{0}\) is the initial observation for the controller of the UAV that moves the FL gateway [24]. Following the POMDP, the required components of our proposed framework for a given SAI \(\mathcal{C}\) are specified as follows:
* _Agent:_ the controller of the UAV that moves the LoRa FL gateway.
* _Actions:_ the control action of the agent at each time slot \(t\) is \(a_{t}\in\mathcal{A}\). The action space \(\mathcal{A}=\{E,W,N,S,H\}\) is the set of all optional actions, including moving east, west, north, or south, and hovering.
* _Observations:_ the observation at time slot \(t\) consists of the RSSI and SNR of the LoRa beacon signal received by the FL gateway, together with the current location of the UAV, which are received with the message \(\mathbf{m}_{t}\). Thus, \(\mathbf{o}_{t}=[\beta_{t},\gamma_{t},x_{t},y_{t}]\), where \((x_{t},y_{t})\in\mathcal{C}\). The observation space \(\mathcal{O}\) is the set of all possible observations.
* _States:_ the state at time slot \(t\) comprises the radio geometry characteristics, including the shadow-fading random variable \(\omega\), the multipath fading random variable \(\nu\), and the unknown decreasing function \(g\) of the LoRa link between the FL gateway and the LoRa node, which are not observable due to the unknown location of the lost person. For the POMDP, we consider the observation history during the \(H\) consecutive previous time slots as the state [22, 25]. Hence, the state at time slot \(t\) is \(\mathcal{H}_{t}=\cup_{h=0}^{H-1}\{\beta_{t-h},\gamma_{t-h},a_{t-h}\}\), containing the RSSI and SNR of the received LoRa beacon signal and the FL gateway control action at time slots \(t-h\). The state space \(\mathcal{S}\) is the set of all possible histories.
* _Immediate reward:_ if the distance between the FL gateway and the LoRa node decreases, the received power at the FL gateway increases. Thus, we define the immediate reward as the power received by the FL gateway at time slot \(t\), \(R_{t}=P_{Rx,t}\), which is given by (1).
* _Episodic return:_ if \(\Lambda_{t}=\cup_{\forall t^{\prime}\in\mathcal{T}_{t}}\{a_{t^{\prime}},\mathbf{o}_{t^{\prime}}\}\) is a trajectory of the POMDP during the future \(T\) consecutive time slots, then the stochastic episodic reward function during these time slots is defined as \(R_{T,t}=\sum_{t^{\prime}=t+1}^{t+T}R_{t^{\prime}}\).
* _Policy:_ for a given state, the policy is defined as the probability of the agent choosing each action. Our framework uses a functional-form policy parameterized by a vector \(\mathbf{\theta}\) to map the input state to the output action. Hence, the policy is expressed as \(\pi_{\mathbf{\theta}}(a_{t},\mathcal{H}_{t})=\Pr(a_{t}|\mathcal{H}_{t})\).
The purpose of deep RL is to find the optimal policy that maximizes the episodic return at the FL gateway of the UAV-assisted LoRa network. Given the policy \(\pi_{\mathbf{\theta}}\) and the stochastic changes over the wireless LoRa link, the unknown probability of a trajectory \(\Lambda_{t}\) during the future \(T\) consecutive time slots is \(\Pr(\Lambda_{t},\mathbf{\theta})=\prod_{\forall t^{\prime}\in\mathcal{T}_{t}}\pi_{\mathbf{\theta}}(a_{t^{\prime}},\mathcal{H}_{t^{\prime}})\Pr\{\mathbf{o}_{(t^{\prime}+1)}|a_{t^{\prime}},\mathbf{o}_{t^{\prime}}\}\). For a given SAI, we define the average episodic return for the parameter vector \(\mathbf{\theta}\) at time slot \(t\) as \(J_{t}(\mathbf{\theta})=\sum_{\forall\Lambda_{t}}\Pr(\Lambda_{t},\mathbf{\theta})R_{T,t}\). Given the parametric functional-form policy \(\pi_{\mathbf{\theta}}\), the goal of the FL gateway controller is to solve the following optimization problem:
\[\max_{\{\mathbf{\theta}\in\mathbb{R}^{N}\}}J_{t}(\mathbf{\theta}), \tag{6}\]
s.t.
\[0\leq\pi_{\mathbf{\theta}}(a_{t^{\prime}},\mathcal{H}_{t^{\prime}})\leq 1,\quad\forall a_{t^{\prime}}\in\mathcal{A},\ \forall t^{\prime}\in\mathcal{T}_{t}, \tag{7}\]
\[\sum_{\forall a_{t^{\prime}}\in\mathcal{A}}\pi_{\mathbf{\theta}}(a_{t^{\prime}},\mathcal{H}_{t^{\prime}})=1,\quad\forall t^{\prime}\in\mathcal{T}_{t}, \tag{8}\]
where \(T\ll N\) and \(N\) is the number of parameters of the parametric functional-form policy \(\pi_{\mathbf{\theta}}\). To solve the optimization problem in (6), the FL gateway controller must have full knowledge of the transition probability \(\Pr(\Lambda_{t},\mathbf{\theta})\) and of all possible values of \(R_{T,t}\) for all trajectories \(\Lambda_{t}\) of the POMDP under the policy \(\pi_{\mathbf{\theta}}\). However, achieving this knowledge is not feasible, especially for a dynamic LoRa wireless channel between a mobile FL gateway and a LoRa node located at an unknown location. To overcome this challenge, we propose to combine a deep neural network (DNN) with the policy gradient-based RL method. Such a combination was shown in [25], where a DNN learns a mapping from the partially observed state to an action without requiring any lookup table of all trajectories of observation values and policies over time. Consequently, we use a deep RL algorithm that includes a DNN to approximate the policy \(\pi_{\mathbf{\theta}}\) for solving (6). Our proposed DNN for the deep RL method is presented in Fig. 2. Here, the parameter vector \(\mathbf{\theta}\in\mathbb{R}^{N}\) includes the weights over all connections of the proposed DNN, where \(N\) is equal to the number of connections [25]. The layers of the proposed deep NN implementing the policy \(\pi_{\mathbf{\theta}}\) are defined as follows:
* _Input layer:_ the input of the proposed deep RL policy at time slot \(t\) is the history of the POMDP during the \(H\)-consecutive previous time slots, \(\mathcal{H}_{t}\). Unlike traditional DNN layers, we use a long short-term memory (LSTM) layer of size \(H\) at the input of our policy DNN. This LSTM layer is an RNN that learns long-term dependencies between time steps in the sequential history of the POMDP trajectories. This is because, in our model, the wireless channel affecting POMDP state transitions continuously depends on the spatiotemporal location of the UAV and the radio geometry of the SAR operation's geographical area. Thus, we need a deep RL policy that aggregates the observations over wireless links during previous time slots of the UAV trajectory and makes a more precise prediction of the next state of the POMDP [26]. Indeed, we use the LSTM layer in the policy function to persist the hidden RSSI and SNR states across the previous FL gateway trajectory for continued adaptation to the radio geometry of a given environment.
* _Hidden layer:_ the hidden layers include fully connected and Sigmoid layers. A Sigmoid layer applies a Sigmoid function to the input such that the output is bounded in the interval (0,1). A fully connected layer multiplies the input by a weight matrix and then adds a bias vector.
* _Output layer:_ the output layers include Softmax and classifier layers. The Softmax layer applies a Softmax function to the input. A classification layer computes the cross-entropy loss for classification and weighted classification tasks with mutually exclusive classes. In our model, the output shows the index of the actions in the action space \(\mathcal{A}\). More precisely, the output \(\mathbf{y}_{t}=[y_{i,t}]\in\mathbb{R}^{T}\) is a vector of \(T\) actions, where each element \(i\) shows the action index of the FL gateway at future time slot \(t+i\) (a minimal sketch of this architecture is given after this list).
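To make the layer description above concrete, here is a minimal PyTorch sketch of such a policy network; the class name and the default values of `obs_dim`, `hidden_dim`, `num_actions`, and `T` are illustrative assumptions, not the configuration used in our experiments.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Sketch of the policy DNN pi_theta in Fig. 2: an LSTM input layer
    over the H-slot history, fully connected + Sigmoid hidden layers, and
    a Softmax output over the action indices of the T future time slots."""

    def __init__(self, obs_dim=3, hidden_dim=64, num_actions=7, T=5):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.hidden = nn.Sequential(nn.Linear(hidden_dim, hidden_dim),
                                    nn.Sigmoid())
        self.head = nn.Linear(hidden_dim, T * num_actions)
        self.T, self.num_actions = T, num_actions

    def forward(self, history):              # history: (batch, H, obs_dim)
        _, (h_n, _) = self.lstm(history)     # final LSTM hidden state
        x = self.hidden(h_n.squeeze(0))      # (batch, hidden_dim)
        logits = self.head(x).view(-1, self.T, self.num_actions)
        # Categorical applies the Softmax; one action index per future slot.
        return torch.distributions.Categorical(logits=logits)
```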
The gradient of the objective function in (6) is \(\nabla_{\mathbf{\theta}}J_{t}(\mathbf{\theta})=\sum_{\forall\Lambda_{t}}\nabla_{\mathbf{\theta}}\Pr(\Lambda_{t},\mathbf{\theta})R_{T,t}\). Since \(\nabla_{\mathbf{\theta}}\log\Pr(\Lambda_{t},\mathbf{\theta})=\frac{\nabla_{\mathbf{\theta}}\Pr(\Lambda_{t},\mathbf{\theta})}{\Pr(\Lambda_{t},\mathbf{\theta})}\), we can write \(\nabla_{\mathbf{\theta}}J_{t}(\mathbf{\theta})=\mathbb{E}_{\Lambda_{t}}\nabla_{\mathbf{\theta}}\log\Pr(\Lambda_{t},\mathbf{\theta})R_{T,t}\). Here, \(\Pr(\Lambda_{t},\mathbf{\theta})=\prod_{t^{\prime}=t+1}^{t+T}\pi_{\mathbf{\theta}}(a_{t^{\prime}},\mathcal{H}_{t^{\prime}})\Pr\{\mathbf{o}_{t^{\prime}+1}|\mathbf{o}_{t^{\prime}},a_{t^{\prime}}\}\) and \(\nabla_{\mathbf{\theta}}\Pr\{\mathbf{o}_{t^{\prime}+1}|\mathbf{o}_{t^{\prime}},a_{t^{\prime}}\}=0\). Thus, \(\nabla_{\mathbf{\theta}}J_{t}(\mathbf{\theta})=\mathbb{E}_{\Lambda_{t}}\sum_{t^{\prime}=t+1}^{t+T}\nabla_{\mathbf{\theta}}\log\pi_{\mathbf{\theta}}(a_{t^{\prime}},\mathcal{H}_{t^{\prime}})R_{T,t}\). Given \(M\) sample trajectories \(\Lambda_{t_{m}}\), one can approximate the expectation with a sample-based estimator for \(\nabla_{\mathbf{\theta}}J_{t}(\mathbf{\theta})\). As a result, we use the gradient-ascent algorithm to train the deep RL policy \(\pi_{\mathbf{\theta}}\) as follows:
\[\nabla_{\mathbf{\theta}}J_{t}(\mathbf{\theta})\approx\frac{1}{M}\sum_{m=1}^{M}\Big{(}\sum_{t^{\prime}=t_{m}+1}^{t_{m}+T}\nabla_{\mathbf{\theta}}\log\pi_{\mathbf{\theta}}(a_{t^{\prime}},\mathcal{H}_{t^{\prime}})R_{T,t^{\prime}}\Big{)},\] \[\mathbf{\theta}\leftarrow\mathbf{\theta}+\alpha_{\text{RL}}\nabla_{\mathbf{\theta}}J_{t}(\mathbf{\theta}), \tag{9}\]
where \(\alpha_{\text{RL}}\) is the reinforcement learning rate. As shown in Fig. 2, the training batch set \(\mathcal{D}_{\text{RL,train}}\) is randomly selected from the experience memory \(\mathcal{M}\). Thus, a batch \(\mathcal{D}_{\text{RL,train}}\) of \(M\) samples is always available to train the DNN of \(\pi_{\mathbf{\theta}}\) over time. Each training sample \(m\) in \(\mathcal{D}_{\text{RL,train}}\) includes the history during the \(H\)-consecutive time slots before time slot \(t_{m}\), \(\mathcal{H}_{t_{m}}\), and the actions in the trajectory of the future \(T\)-consecutive time slots after time slot \(t_{m}\), \(\Lambda_{t_{m}}\). In summary, we implement the parametric functional-form policy \(\pi_{\mathbf{\theta}}\) with the proposed deep NN in Fig. 2. The proposed deep RL algorithm for FL gateway control is summarized in Algorithm 1. During the SAR time, the deep RL policy is trained with probability \(\xi\) based on the gradient-ascent algorithm in (9). In the training phase, the deep RL policy is adaptively trained using a training data set from the experience memory \(\mathcal{M}\). In the update phase, the experience memory is updated with the history and trajectories over the time slots. The FL gateway moves using this deep RL-based algorithm until the reward \(R_{t}\) exceeds the defined target reward \(R_{\text{target}}\), which means that the UAV is close enough to the lost person and the received power is more than \(R_{\text{target}}\).
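A minimal sketch of the sample-based update in (9) follows, assuming `policy` is the `PolicyNet` sketched earlier, `optimizer` is a standard SGD or Adam optimizer over \(\mathbf{\theta}\), and each minibatch from the experience memory unpacks into history, action, and return tensors; the names and shapes are assumptions for illustration.

```python
import torch

def reinforce_update(policy, optimizer, batch):
    """One sample-based gradient-ascent step of Eq. (9) on an M-sample
    minibatch drawn from the experience memory. Each sample carries the
    H-slot history, the T executed actions, and the episodic return."""
    histories, actions, returns = batch      # (M, H, 3), (M, T), (M,)
    dist = policy(histories)                 # Categorical from PolicyNet
    log_probs = dist.log_prob(actions)       # (M, T): log pi_theta(a | H)
    # Minimizing the negative estimator ascends J_t(theta).
    loss = -(log_probs.sum(dim=1) * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                         # theta <- theta + alpha_RL * grad J
```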
The spatial geometry of a given environment affects the channel fading due to the blockage, reflection, and refraction of wireless waves. Thus, the radio geometry of a given environment is directly affected by its spatial geometry. However, unlike ground-to-ground wireless links, a UAV-to-ground LoRa link is more robust to the fading effects resulting from the spatial geometry. Due to this fact, compared to ground-to-ground LoRa links, the radio geometry information resulting from a UAV-to-ground LoRa link is more stable. Moreover, given the radio geometry information resulting from the UAV-to-ground LoRa link, the knowledge gained while learning the FL gateway control policy in a given spatial geometry could be applied when trying to learn a new FL gateway control policy in another spatial geometry. Consequently, next, we use the information of a trained deep RL policy for FL gateway control in a given environment to design a control policy for a SAR operation in a new SAI.
```
1:Input: Initial UAV location; RL learning rate \(\alpha_{\text{RL}}\); training probability \(\xi\); a defined deep NN for policy \(\pi_{\mathbf{\theta}}\); initial value for \(\mathbf{\theta}\); batch size \(M\); target reward \(R_{\text{target}}\).
2:Online deep RL
3:repeat
4:Exploiting phase: apply deep RL control policy \(\pi_{\mathbf{\theta}}\) depicted in Fig. 2;
5:Update phase: update the experience memory set \(\mathcal{M}=\mathcal{M}\cup\{\mathcal{H}_{t}\cup\Lambda_{t}\}\);
6:Training phase: with probability \(\xi\), train deep RL control policy \(\pi_{\mathbf{\theta}}\) as follows:
7: Uniformly select a set of \(M\) minibatch samples from the updated experience memory set \(\mathcal{M}\) as a training set \(\mathcal{D}_{\text{RL, train}}\);
8: Train the deep RL control policy, \(\pi_{\mathbf{\theta}}\), following the gradient-ascent algorithm in (9);
9:until\(R_{t}>R_{\text{target}}\)
10:Output: adaptive control policy \(\pi_{\mathbf{\theta}}\) during SAR operation time.
```
**Algorithm 1** Proposed deep RL-based algorithm for FL gateway control
### _Deep meta-RL framework for new environments_
We introduce the deep meta-RL framework to use the information from a given environment to design a control policy for a new unknown radio geometry. Compared to the deep RL policy, the proposed deep meta-RL policy can integrate the prior experience in one environment with information collected from the FL gateway movement in a new search environment, thus training a rapidly adaptive learning model for FL gateway control. Indeed, the meta-training procedure requires a realization set of states and a near-optimal policy. Here, we use the realization set of states and actions in the successful
Fig. 2: The Deep RL framework for implementing the FL gateway control policy \(\pi_{\mathbf{\theta}}\).
SAR operations in different SAIs to design a policy for a new SAI environment. In this case, we define our tasks as follows:
* _Tasks:_ Given the history of the POMDP, a task \(\mathcal{T}\) is the realization of the FL gateway control policy that solves the optimization problem in (6) at each time slot \(t\) in an environment with a specific radio geometry. Thus, for a given SAI environment, \(\mathcal{T}_{t_{k}}=\{\mathcal{H}_{t_{k}}\cup\Lambda_{t_{k}}\}\) includes the history and trajectory under the control policy at time slot \(t_{k}\) of the SAR operation, which is accessible from the experience memory in Fig. 2.
* _Meta-train dataset:_\(\mathcal{D}_{\text{Meta-train}}=\cup_{k=1}^{K}\mathcal{T}_{t_{k}}\) is defined as \(K\) different tasks from previous successful SAR operations.
* _Meta-test dataset:_\(\mathcal{D}_{\text{Meta-test}}=\cup_{e=1}^{E}\{\mathcal{H}_{t_{e}}\cup a_{t_{e}}\}\) is defined as \(E\) different histories and actions in the new environment under target policy \(\pi_{\mathbf{\psi},\mathbf{\phi}}\).
Here, we use the idea of the most popular policy-gradient meta-RL method, model-agnostic meta-learning (MAML), to design the FL gateway control policy in the new search area of interest using information from previous successful SAR operations [27]. During the meta-training procedure, a meta-train dataset \(\mathcal{D}_{\text{Meta-train}}\) is first sampled from the experience memory of previous successful SAR operations. Then, the meta-RL method collects experience information in a variable \(\mathbf{z}\) from \(\mathcal{D}_{\text{Meta-train}}\) and uses the deep meta-RL policy function \(\pi_{\mathbf{\phi},\mathbf{\psi}}\) to predict actions given the history \(\mathcal{H}_{t}\) of the new environment. Indeed, the deep meta-RL policy function is \(\pi_{\mathbf{\phi},\mathbf{\psi}}=\pi_{\mathbf{\psi}}\pi_{\mathbf{\phi},\mathbf{z}}\), in which \(\pi_{\mathbf{\psi}}=\Pr(\mathbf{z})\) is the probability of the experience information variable \(\mathbf{z}\) and \(\pi_{\mathbf{\phi},\mathbf{z}}=\Pr(a_{t}|\mathcal{H}_{t},\mathbf{z})\) is the probability of the chosen action \(a_{t}\) given the history \(\mathcal{H}_{t}\) of the new environment and the encoded experience information \(\mathbf{z}\) from previous successful SAR operations. More concretely, the objective of the meta-training procedure is as follows:
\[\max_{\{\pi_{\mathbf{\phi},\mathbf{\psi}}\}}J_{t}(\mathbf{\phi},\mathbf{\psi}),\] (10) s.t. \[0\leq\pi_{\mathbf{\phi},\mathbf{\psi}}(a_{t^{\prime}},\mathcal{H}_{t^{\prime}})\leq 1,\forall a_{t^{\prime}}\in\mathcal{A},\forall t^{\prime}\in\mathcal{T}, \tag{11}\] \[\sum_{\forall a_{t^{\prime}}\in\mathcal{A}}\pi_{\mathbf{\phi},\mathbf{\psi}}(a_{t^{\prime}},\mathcal{H}_{t^{\prime}})=1,\forall t^{\prime}\in\mathcal{T}, \tag{12}\]
where the parametric functional-form \(\pi_{\mathbf{\psi}}\) encodes the experience information from tasks in the meta-train dataset \(\mathcal{D}_{\text{Meta-train}}\) to help \(\pi_{\mathbf{\phi},\mathbf{z}}\) find the optimal policy in the new environment. In our deep meta-RL framework, we use a DNN to approximate the policy \(\pi_{\mathbf{\psi}}\), where the parameters \(\mathbf{\psi}\) include the weights over all connections of the DNN.
Given \(M_{1}\) sample trajectories \(\Lambda_{t_{m_{1}}}\) from the dataset \(\mathcal{D}_{\text{Meta-test}}\), drawn from the experience memory in the new SAI, one can approximate the expectation with a sample-based estimator for \(\nabla_{\mathbf{\phi},\mathbf{\psi}}J_{t}(\mathbf{\phi},\mathbf{\psi})\). Here, \(\nabla_{\mathbf{\phi}}\log\pi_{\mathbf{\phi},\mathbf{\psi}}=\nabla_{\mathbf{\phi}}\log\pi_{\mathbf{\psi}}\pi_{\mathbf{\phi},\mathbf{z}}\), which is equal to \(\nabla_{\mathbf{\phi}}\log\pi_{\mathbf{\psi}}+\nabla_{\mathbf{\phi}}\log\pi_{\mathbf{\phi},\mathbf{z}}\). Thus, for a given \(\mathbf{\psi}_{0}\), \(\mathbf{z}_{0}=\pi_{\mathbf{\psi}_{0}}(\mathcal{D}_{\text{Meta-train}})\) and \(\nabla_{\mathbf{\phi}}\log\pi_{\mathbf{\phi},\mathbf{\psi}_{0}}=\nabla_{\mathbf{\phi}}\log\pi_{\mathbf{\phi},\mathbf{z}_{0}}\). In this case, we use the gradient-ascent algorithm to train the deep meta-RL policy \(\pi_{\mathbf{\phi},\mathbf{\psi}}\) with respect to \(\mathbf{\phi}\) as follows:
\[\nabla_{\mathbf{\phi}}J(\mathbf{\phi},\mathbf{\psi_{0}})\approx\]
\[\frac{1}{M_{1}}\sum_{m=1}^{M_{1}}\Big{(}\sum_{t^{\prime}=t_{m_{1} }+1}^{t_{m_{1}}+T}\nabla_{\mathbf{\phi}}\log\pi_{\mathbf{\phi},\mathbf{z}_{0}}(a_{t^{\prime }},\mathcal{H}_{t^{\prime}})R_{T,t^{\prime}}\Big{)},\] \[\mathbf{z}_{0}=\pi_{\mathbf{\psi}_{0}}(\mathcal{D}_{\text{Meta-train}}),\] \[\mathbf{\phi}\leftarrow\mathbf{\phi}+\alpha_{\text{Meta-RL},1}\nabla_{\bm {\phi}}J(\mathbf{\phi},\mathbf{\psi_{0}}). \tag{13}\]
Since the parameters \(\mathbf{\phi}\) are updated based on the collected data in \(\mathcal{D}_{\text{Meta-test}}\) of the new environment, the phase of updating \(\mathbf{\phi}\) is called the adaptation phase. Here, \(\alpha_{\text{Meta-RL},1}\) is the meta-RL learning rate for the adaptation phase.
For a given \(\mathbf{\phi}_{0}\), we have \(\nabla_{\mathbf{\psi}}\log\pi_{\mathbf{\phi}_{0},\mathbf{\psi}}=\nabla_{\mathbf{\psi}}\log\pi_{\mathbf{\psi}}+\nabla_{\mathbf{\psi}}\log\pi_{\mathbf{\phi}_{0},\mathbf{z}}\), which is equal to \(\nabla_{\mathbf{\psi}}\log\pi_{\mathbf{\psi}}\) since \(\pi_{\mathbf{\phi}_{0},\mathbf{z}}\) does not explicitly depend on \(\mathbf{\psi}\).
Given \(M_{2}\) sample trajectories \(\Lambda_{t_{m_{2}}}\) from the dataset \(\mathcal{D}_{\text{Meta-train}}\), drawn from the experience memory of previous successful SAR operations in different environments, we use the gradient-ascent algorithm to train the deep meta-RL policy \(\pi_{\mathbf{\phi_{0}},\mathbf{\psi}}\) with respect to \(\mathbf{\psi}\) as follows:
\[\nabla_{\mathbf{\psi}}J(\mathbf{\phi}_{0},\psi)\approx\] \[\frac{1}{M_{2}}\sum_{m=1}^{M_{2}}\Big{(}\sum_{t^{\prime}=t_{m_{2} }+1}^{t_{m_{2}}+T}\nabla_{\mathbf{\psi}}\log\pi_{\mathbf{\psi}}(a_{t^{\prime}}, \mathcal{H}_{t^{\prime}})R_{T,t_{m_{2}}}\Big{)},\] \[\mathbf{\psi}\leftarrow\mathbf{\psi}+\alpha_{\text{Meta-RL},2}\nabla_{\bm {\psi}}J(\mathbf{\phi}_{0},\psi), \tag{14}\]
where \(\alpha_{\text{Meta-RL},2}\) is the meta-RL learning rate for the meta-train set.
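For illustration, one combined adaptation and meta-training step implementing (13) and (14) might be sketched as follows; here `encoder` plays the role of \(\pi_{\mathbf{\psi}}\) (producing the experience variable \(\mathbf{z}\)) and `policy` the role of \(\pi_{\mathbf{\phi},\mathbf{z}}\), and all interfaces, including the hypothetical `action_log_prob` method, are assumptions rather than part of the proposed framework.

```python
import torch

def meta_rl_step(encoder, policy, opt_phi, opt_psi,
                 meta_train_tasks, meta_test_batch, meta_train_batch):
    """One adaptation step, Eq. (13), then one meta-training step, Eq. (14)."""
    # Adaptation phase, Eq. (13): z0 = pi_psi(D_Meta-train) is held fixed
    # while phi is updated on M1 samples from the new environment.
    with torch.no_grad():
        z0 = encoder(meta_train_tasks)
    hist, acts, rets = meta_test_batch
    log_p = policy(hist, z0).log_prob(acts)         # log pi_{phi, z0}
    loss_phi = -(log_p.sum(dim=1) * rets).mean()
    opt_phi.zero_grad(); loss_phi.backward(); opt_phi.step()

    # Meta-training phase, Eq. (14): psi is updated on M2 samples from
    # previous successful SAR operations, with phi held fixed.
    hist, acts, rets = meta_train_batch
    log_p = encoder.action_log_prob(hist, acts)     # hypothetical interface
    loss_psi = -(log_p.sum(dim=1) * rets).mean()
    opt_psi.zero_grad(); loss_psi.backward(); opt_psi.step()
```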
As shown in Fig. 3, the training set \(\mathcal{D}_{\text{Meta-train}}\) is randomly selected from the experience memory of previous successful SAR operations in different environments. However, each adaptation sample \(m_{1}\) includes the history during the \(H\)-consecutive time slots before time slot \(t_{m_{1}}\), \(\mathcal{H}_{t_{m_{1}}}\), and the actions in the trajectory of the future \(T\)-consecutive time slots after time slot \(t_{m_{1}}\), \(\Lambda_{t_{m_{1}}}\), in the new environment. In summary, we implement the parametric functional-form policy \(\pi_{\mathbf{\phi},\mathbf{\psi}}\) with the proposed deep NN in Fig. 3. The proposed deep meta-RL algorithm for FL gateway control is summarized in Algorithm 2. In the meta-training phase, the deep meta-RL policy is trained using a training data set from the experience memory of successful SAR operations in previous environments, and in the adaptation phase, the experience memory including the history and trajectories of the new environment during the SAR operation is used.
```
1:Input: Initial UAV location; meta-RL learning rates \(\alpha_{\text{Meta-RL},1}\) and \(\alpha_{\text{Meta-RL},2}\); training probability \(\xi\); a defined deep NN for policy \(\pi_{\boldsymbol{\phi},\boldsymbol{\psi}}\); initial values for \(\boldsymbol{\phi}\) and \(\boldsymbol{\psi}\); batch size \(M\); experience memory \(\mathcal{M}_{I}\); target reward \(R_{\text{target}}\)
2:Online deep meta-RL
3:repeat
4:Exploiting phase: apply deep meta-RL control policy \(\pi_{\boldsymbol{\phi},\boldsymbol{\psi}}\) depicted in Fig. 3;
5:Update phase: update the experience memory \(\mathcal{M}=\mathcal{M}\cup\{\mathcal{H}_{t}\cup\Lambda_{t}\}\);
6:Training phase: with probability \(\xi\), train deep meta-RL control policy \(\pi_{\boldsymbol{\phi},\boldsymbol{\psi}}\) as follows:
7: Uniformly select a set of \(M_{1}\) history and action samples from \(\mathcal{M}\) as an adaptation set \(\mathcal{D}_{\text{Meta-test}}\);
8: Uniformly select a set of \(K\) task samples from \(\mathcal{M}_{I}\) as a meta-training set \(\mathcal{D}_{\text{Meta-train}}\);
9: Given meta-RL learning rates \(\alpha_{\text{Meta-RL},1}\) and \(\alpha_{\text{Meta-RL},2}\), train the deep meta-RL control policy, \(\pi_{\boldsymbol{\phi},\boldsymbol{\psi}}\), following the gradient-ascent algorithms in (13) to update \(\boldsymbol{\phi}\) and in (14) to update \(\boldsymbol{\psi}\);
10:until\(R_{T,t}>R_{\text{target}}\)
11:Output: adaptive control policy \(\pi_{\boldsymbol{\phi},\boldsymbol{\psi}}\) during SAR time in the new environment.
```
**Algorithm 2** Proposed deep meta-RL-based algorithm for FL gateway control in new environment
## IV Experimental Setup
Our implementation of the UAV-assisted LoRa network and the environments for our experimental testbed are presented and discussed in this section.
### _Considered Hardware_
Our experimental setup of the UAV-assisted LoRa network includes the LoRa end node and the FL and GL gateways. In our setup, all of the designed nodes and gateways are powered using lithium polymer (Li-Po) batteries with an output voltage of 3.7 V and a capacity of 1100 mAh. We have designed our own printed circuit boards (PCBs) to assemble the LoRa node and FL gateways. Using the ATmega328, a single-chip microcontroller created by Atmel in the megaAVR family, the LoRa node and FL gateways are assembled and programmed in the C language [28]. In our setup, we use the LoRa Ra-02 module from Ai-Thinker, which is equipped with a Semtech LoRa SX1278 core [29], [30].
The designed LoRa node has a LoRa module which is connected to a commercial external folded dipole antenna, and transmits a beacon packet using the LoRa module on the ISM frequency band at 433 MHz. The details of the LoRa node are shown in Fig. 4(a). The FL gateway board is equipped with LoRa Ra-02 and GPS modules, in which the LoRa Ra-02 module is connected to a 433 MHz antenna and the GPS module has a square-shaped GPS receiver antenna. The design of the FL gateway board is shown in Fig. 4(b). The dimensions of the FL gateway board are 295 mm \(\times\) 60 mm \(\times\) 20 mm and its weight is 100 g. Thus, the weight of the designed FL gateway board is suitable for mounting on commercial drones and UAVs. We mount the FL gateway board on a DJI Phantom 4 Pro to set up an FLoRa gateway. To use the maximum antenna radiation, we firmly fix the gateway node under the DJI Phantom 4 Pro, while the LoRa antenna is vertically pointed toward the earth's surface. In our setup, the FL gateway receives the beacon packet using the LoRa module and the GPS coordinates of the UAV using the GPS module. Then, the FL gateway retransmits a data packet containing the RSSI and SNR of the received LoRa beacon signal and the GPS coordinates of the UAV over the UAV-to-ground LoRa link, to be received by the GL gateway.
The GL gateway consists of a LoRa Ra-02 module connected to a 433 MHz antenna and our designed board. The head of the rescue operation can connect to the GL gateway via a USB cable or a Bluetooth wireless link. The GL gateway setup is shown in Fig. 4(c). The GL gateway receives data packets from FL gateways using the LoRa module, and transmits these received packets to a portable computer such as a laptop over a USB cable, or to a mobile phone over a Bluetooth link. In this case, the rescue operation can be monitored and analyzed using the data gathered on the computer or mobile phone in a real-time manner. The real-time received data packets are used to run our proposed deep reinforcement learning algorithms. Following our proposed deep RL algorithms, the control policy suggests the next movement of the UAV, and the pilot moves the UAV in that direction. Using the application, which is written in Python, the trajectory of the UAV is depicted for the rescue operation head, and the UAV moves the FL gateway toward the location of the LoRa end node. Thus, the location of the lost person is found very quickly.
### _Experimental Testbed_
The measurements were taken at the Mongasht mountain ranges, near the areas of Ghaletol city, Khuzestan province, Iran. For the lost person, the LoRa node is held in a student's hand while the student moves to an unknown location in the target area. For the FLoRa gateway, the gateway node is mounted on the UAV. The UAV hovers with the gateway node at 300 meters. This elevation is high enough that the UAV does not hit any obstacle such as mountains or trees on its path. Since we are interested in evaluating our SAR system under different radio geometries including both LoS and NLoS links, we have done our experiments in two different target areas: a wide plain and a slotted canyon. When the target area of the lost person is a wide plain, there is a LoS path between the GLoRa or FLoRa gateway and the LoRa nodes. However, when the lost person is in a slotted canyon, there may not be any LoS path between the GLoRa or FLoRa gateway and the nodes, and the LoRa nodes are in the shadowing area of the mountain walls around the slotted canyon. The experimental scenarios, the plain and canyon scenarios, are shown in Fig. 5.
## V Performance Analysis
In this section, we evaluate the performance of our proposed deep meta-reinforcement learning approach for SAR operations using real measurements of the UAV-assisted LoRa network for the scenarios at the Mongasht mountain ranges. In our measurements, the duration of each time slot is 2 seconds and the UAV speed is \(20\) meters per second. The maximum lifetime of one UAV battery is \(20\) minutes. We compare the deep RL policy
with the optimal and greedy policies as benchmarks. In the optimal policy, we give the unknown location of the lost person to the UAV pilot; then, the UAV moves directly toward the target location. Under the greedy policy, there are two phases: sense and action. The UAV starts the sense phase with probability \(0.1\), in which the UAV sequentially moves north, east, south, and west for 3 time slots to measure the received power. The UAV starts the action phase with probability \(0.9\), in which the UAV moves in the direction with the highest received power in the previous sense phase.
In Fig. 6, we show the received power at the FL gateway versus rescue time slots in the wide plain environment. From Fig. 6, we observe that the received power at the FL gateway increases in fewer time slots under the optimal policy. However, when the greedy algorithm is used, the received power at the FL gateway increases more slowly. The performance of the proposed deep RL policy is between the optimal and greedy policies. As we can see in Fig. 6, when the UAV initial location is \(0.8R\) away from the lost person, the received power at the FL gateway reaches its maximum value around SAR time slots 300, 700, and 800 under the optimal, deep RL, and greedy algorithms, respectively. Moreover, in the plain environment, the greedy and deep RL policies converge and the UAV can eventually find the lost person, because in the plain environment there are mostly LoS links between the lost person and the FL gateway. Under the deep RL policy, when the initial UAV location is at \(0.4\) of the plain environment's radius, the UAV can find the lost person at time slot 400; however, if the initial UAV location is at \(0.8\) of the radius, the lost person is found at time slot 800.
In Fig. 7, we show the received power at the FL gateway versus rescue time slots in the slotted canyon environment. For the deep meta-RL policy, we have used the data of previous experiences from different SAR operations in the plain environment, where the initial locations of the UAV were varied and the SAR data was saved in the memory. This data is then used as \(\mathcal{D}_{\text{Meta-train}}\) in Algorithm 2. From Fig. 7, we observe that the received power at the FL gateway under
Figure 4: Assembled wireless nodes of the UAV-Assisted LoRa Network.
Figure 5: Experimental Scenarios. These satellite images are taken from “www.fatmap.com”.
the deep meta-RL policy reaches its maximum value at SAR time slot 50, while the deep RL policy moves the FL gateway to the location with the maximum received power at time slot 141. Moreover, on average, the received power at the FL gateway under the deep meta-RL policy is \(25\%\) more than under the deep RL policy.
In Fig. 8, we show the FL gateway's horizontal distance from the lost person during SAR time slots in the slotted canyon environment. From Fig. 8, we observe that the deep meta-RL and deep RL based policies finally find the lost person; however, the greedy algorithm does not converge to the lost person's location during the SAR operation. As we can see in Fig. 8, the UAV hovers at a \(300\) m height over the lost person after \(31\) time slots under the optimal policy, while the deep meta-RL and deep RL based policies find the lost person at time slots 51 and 141, respectively. Moreover, the FL gateway's horizontal distance from the lost person under the deep meta-RL-based policy is on average \(26\%\) closer than under the deep RL-based policy.
In Fig. 9, we show the UAV trajectory during the SAR operation at a slotted canyon. From Fig. 9, we observe that the UAV under the deep meta-RL and deep RL-based policies finally hovers over the lost person's location. The greedy algorithm moves the UAV toward the lost person's location, but the UAV is not able to hover over the exact location during the SAR operation time. As we can see in Fig. 9, the average distances between the UAV trajectories under the deep meta-RL and deep RL based policies and the UAV trajectory under the optimal policy are \(619\) and \(1930\) meters, respectively, during the SAR operation time.
Figure 8: UAV horizontal distance from lost person in slotted canyon environment.
Figure 6: Received power at FL gateway v.s. rescue time slots in wide plain environment.
Figure 7: Received power at FL gateway v.s. rescue time slots in slotted canyon environment.
Figure 9: UAV trajectory during SAR operation at a slotted canyon. This satellite image is taken from “Google Earth”. The lost person's location is \(31^{\circ}35^{\prime}18.24^{\prime\prime}\)N and \(50^{\circ}1^{\prime}37.44^{\prime\prime}\)E.
## VI Conclusion
In this paper, we have introduced a smart SAR system based on a UAV-assisted LoRa network for highly remote areas such as mountain ranges. More precisely, we have designed and implemented an artificial intelligence-empowered SAR operation framework for different unknown search environments. We have modeled the problem of the FL gateway control in the SAR system using the UAV-assisted LoRa network as a partially observable Markov decision process. Then, we have proposed a deep meta-RL-based policy to control the FL gateway trajectory during SAR operation. For the initialization of our deep meta-RL-based policy, first, a deep RL-based policy determines the adaptive FL gateway trajectory in a fixed search environment with a fixed radio geometry. Then, as a general solution, our deep meta-RL framework is used for SAR in any new environment. Indeed, the deep meta-RL-based policy integrates the prior FL gateway experience with information collected from the other search environments to rapidly adapt the SAR policy model for SAR operation in a new environment. We have experimentally implemented the UAV-assisted LoRa network and then tested our proposed SAR system in two different real areas: a wide plain and a slotted canyon at the Mongasht mountain ranges, Iran. Practical evaluation results show that if the deep meta-RL policy is applied instead of the deep RL one to control the UAV, the number of SAR time slots decreases from 141 to 50. Moreover, the average distances between the UAV trajectories under the deep meta-RL and deep RL based policies and the UAV trajectory under the optimal policy are \(619\) and \(1930\) meters, respectively, during the SAR operation time.
|
2307.04091 | CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge
Distillation for LIDAR Semantic Segmentation | 2D RGB images and 3D LIDAR point clouds provide complementary knowledge for
the perception system of autonomous vehicles. Several 2D and 3D fusion methods
have been explored for the LIDAR semantic segmentation task, but they suffer
from different problems. 2D-to-3D fusion methods require strictly paired data
during inference, which may not be available in real-world scenarios, while
3D-to-2D fusion methods cannot explicitly make full use of the 2D information.
Therefore, we propose a Bidirectional Fusion Network with Cross-Modality
Knowledge Distillation (CMDFusion) in this work. Our method has two
contributions. First, our bidirectional fusion scheme explicitly and implicitly
enhances the 3D feature via 2D-to-3D fusion and 3D-to-2D fusion, respectively,
which surpasses either one of the single fusion schemes. Second, we distillate
the 2D knowledge from a 2D network (Camera branch) to a 3D network (2D
knowledge branch) so that the 3D network can generate 2D information even for
those points not in the FOV (field of view) of the camera. In this way, RGB
images are not required during inference anymore since the 2D knowledge branch
provides 2D information according to the 3D LIDAR input. We show that our
CMDFusion achieves the best performance among all fusion-based methods on
SemanticKITTI and nuScenes datasets. The code will be released at
https://github.com/Jun-CEN/CMDFusion. | Jun Cen, Shiwei Zhang, Yixuan Pei, Kun Li, Hang Zheng, Maochun Luo, Yingya Zhang, Qifeng Chen | 2023-07-09T04:24:12Z | http://arxiv.org/abs/2307.04091v1 | CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation
###### Abstract
2D RGB images and 3D LIDAR point clouds provide complementary knowledge for the perception system of autonomous vehicles. Several 2D and 3D fusion methods have been explored for the LIDAR semantic segmentation task, but they suffer from different problems. 2D-to-3D fusion methods require strictly paired data during inference, which may not be available in real-world scenarios, while 3D-to-2D fusion methods cannot explicitly make full use of the 2D information. Therefore, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion) in this work. Our method has two contributions. First, our bidirectional fusion scheme explicitly and implicitly enhances the 3D feature via 2D-to-3D fusion and 3D-to-2D fusion, respectively, which surpasses either one of the single fusion schemes. Second, we distill the 2D knowledge from a 2D network (Camera branch) to a 3D network (2D knowledge branch) so that the 3D network can generate 2D information even for those points not in the FOV (field of view) of the camera. In this way, RGB images are not required during inference anymore since the 2D knowledge branch provides 2D information according to the 3D LIDAR input. We show that our CMDFusion achieves the best performance among all fusion-based methods on SemanticKITTI and nuScenes datasets. The code will be released at [https://github.com/Jun-CEN/CMDFusion](https://github.com/Jun-CEN/CMDFusion).
## I Introduction
3D LIDAR is significant for the perception system of autonomous vehicles, and one of the applicable tasks with LIDAR is semantic segmentation. Great efforts have been made for better LIDAR semantic segmentation performance using the single LIDAR modality [3, 4, 5, 6]. Recently, several multi-modality methods have been developed [1, 2] to fuse the features of LIDAR and color cameras since they provide complementary information. LIDAR provides reliable depth information and is robust to light conditions such as dark nights, while the camera offers dense colorful appearance and fine-grained textures. In this work, we also aim to study how to effectively leverage these two modalities for better LIDAR semantic segmentation.
Existing fusion-based methods can be divided into 2D-to-3D fusion methods (PMF [1]) and 3D-to-2D fusion methods (2DPASS [2]), as shown in Fig. 1 (a) and (b). PMF injects 2D knowledge into the LIDAR features, so it needs strictly paired data during training and inference. However, the FOVs of the LIDAR and the camera may not totally overlap with each other, so those points out of the FOV of the camera cannot be tested. For example, SemanticKITTI [7] only provides two front-view images, and points at the side and back cannot be involved in the PMF framework. 2DPASS noticed this problem and proposed injecting 3D features into 2D features during training to implicitly enhance the 3D features. In this way, 2DPASS does not require images during inference. However, 3D features do not explicitly contain 2D information in such a 3D-to-2D scheme.
To solve the mentioned problems of 2D-to-3D and 3D-to-2D fusion methods, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion), as shown in Fig. 1 (c). Specifically, on the one hand, we propose a Bidirectional Fusion Block (BFB) to explicitly
Fig. 1: (a) 2D-to-3D methods (PMF [1]) can only handle points in the FOV of the camera. (b) 3D-to-2D methods (2DPASS [2]) do not explicitly involve the 2D information into 3D features. (c) Our CMDFusion can process the whole point cloud through cross-modality distillation and strengthen the 3D features through bidirectional fusion.
and implicitly enhance the 3D features through 2D-to-3D and 3D-to-2D injection, respectively, which enjoys the benefits of both single fusion schemes. On the other hand, we propose a Cross-Modality Distillation (CMD) module to let a 3D network (2D knowledge branch) memorize the information of the 2D network (camera branch) during training. During inference, the 2D knowledge branch provides the 2D image information based on the 3D LIDAR point cloud input so that we can obtain the 2D knowledge for the whole point cloud, including those points not in the FOV of the camera.
We evaluate our method on two challenging datasets, including SemanticKITTI [7] and NuScenes [8]. Experiments show that our method achieves the best performance among all fusion-based methods. In summary, our contributions include the following:
* We develop a bidirectional fusion method CMDFusion for the LIDAR semantic segmentation task, which surpasses the single directional 2D-to-3D fusion and 3D-to-2D fusion methods.
* We develop a cross-modality distillation module to generate 2D information for those points that are out of the FOV of the camera.
* We experimentally show that our method achieves the best performance among fusion-based methods on SemanticKITTI and NuScenes datasets.
## II Related Work
3D LIDAR semantic segmentation has grown very fast based on well-annotated public datasets, such as SemanticKITTI [7] and NuScenes [8]. Most methods in this area are single-modality, _i.e._, only use LIDAR point cloud to extract information. Specifically, single-modality methods can be categorized into point-based, projection-based, voxel-based, and multi-view fusion methods.
1) Point-based methods [9, 10, 11] adapt PointNet [12] and PointNet++ [13] to the LIDAR domain. These point-based methods do not generalize very well in the LIDAR point cloud scenarios since their sampling and searching algorithms cannot perfectly handle the sparse outdoor point clouds.
2) Voxel-based methods divide the whole point cloud into voxels [14] and apply efficient 3D convolution for semantic segmentation like SparseConv [15]. Cylinder3D [5] proposed a cylindrical partition and asymmetrical 3D convolutional network which follows the geometry structure of the LIDAR point cloud.
3) Projection-based methods first project 3D LIDAR point cloud into 2D range-view images [4] or birds-eye-view (BEV) images [16] and then apply 2D convolution network for semantic segmentation. However, such a projection inevitably loses some of the 3D geometry information.
4) Multi-view fusion methods combine different views of the LIDAR point cloud as inputs. FusionNet [17] and SPVCNN [3] fuse voxel and point level information, while RPVNet [18] fuses the information of voxel, point, and range views.
Recently, multi-modality fusion has become popular in the autonomous driving area. In the 3D object detection task, BEV fusion [19, 20, 21] unifies the LIDAR and image features in the BEV space and achieves the state-of-the-art performance. However, the height information is much more critical in the semantic segmentation task than the object detection task, so the BEV-based method [16] has limited performance on the semantic segmentation task. Instead, PMF [1] projects the LIDAR point cloud into the image space and then conducts 2D-to-3D fusion for better 3D feature representation. 2DPASS [2] finds that the 2D-to-3D fusion method like PMF can only be applied on the points in the overlapping FOVs of the LIDAR and camera, so 2DPASS conducts 3D-to-2D fusion to strengthen the 3D features by supervising the 3D features from the 2D branch.
Compared to PMF and 2DPASS, our bidirectional fusion network enjoys the benefits of both 2D-to-3D and 3D-to-2D fusion schemes. Besides, we propose a cross-modality distillation module so that our network can be applied to the whole LIDAR point cloud, including the points that are out of the FOV of the camera.
## III Methodology
### _Framework Overview_
The simplified and specific overall structure of our proposed CMDFusion is shown in Fig. 1 (c) and Fig. 2 (a), respectively. Our CMDFusion is composed of three branches, including a camera branch (2D network), a 2D knowledge branch (3D network), and a 3D LIDAR branch (3D network).
#### III-A1 Training
During training, the 2D knowledge branch (a 3D network) learns the 2D image information from the camera branch (a 2D network) via Cross-Modality Distillation (CMD). Although the CMD is conducted on those points in the overlapping FOVs of the LIDAR and camera, the 2D knowledge branch can be generalized to the points that are out of the FOV of the camera. In this way, we can obtain the 2D information of the whole point cloud, which is not approachable in PMF [1] or 2DPASS [2]. Then we fuse the features of the 2D knowledge branch and 3D LIDAR branch through Bidirectional Fusion Block (BFB). On the one hand, 2D-to-3D directional fusion explicitly enhances the 3D feature via 2D information injection. On the other hand, 3D-to-2D directional fusion implicitly improves the robustness of the 3D feature since it is required to have the potential to be well adapted to the 2D space. Therefore, our BFB enjoys the benefits of both PMF and 2DPASS.
#### III-A2 Testing
During inference, the camera branch is not needed anymore since its knowledge is already distilled to the 2D knowledge branch. Besides, only 2D-to-3D directional fusion is involved as the final prediction results come from the 3D LIDAR branch. The right-hand side of Fig. 1 (c) shows the parts that are needed during inference.
### _Point-to-pixel Correspondence_
Point-to-pixel correspondence is the prerequisite for Cross-Modality Distillation (CMD). Given a LIDAR point cloud \(P=\{p_{i}\}_{i=1}^{N}\in\mathbb{R}^{N\times 3}\), where \(p_{i}=(x_{i},y_{i},z_{i})\in\mathbb{R}^{3}\) refers
to the XYZ coordinates of a point and \(N\) is the number of points in the point cloud, the projected 2D coordinates of the point \(p_{i}\) is calculated as:
\[[u_{i},v_{i},1]^{T}=\frac{1}{z_{i}}\times K\times T\times[x_{i},y_{i},z_{i},1]^{T}, \tag{1}\]
where \(K\in\mathbb{R}^{3\times 4}\) and \(T\in\mathbb{R}^{4\times 4}\) denote the intrinsic and extrinsic matrices of the camera, respectively. Then we have \(\hat{p}_{i}=(\left\lfloor v_{i}\right\rfloor,\left\lfloor u_{i}\right\rfloor) \in\mathbb{R}^{2}\) as the integer projected 2D coordinates, where \(\left\lfloor\cdot\right\rfloor\) is the floor operation. For the SemanticKITTI dataset, \(K\) and \(T\) are already given. For the NuScenes dataset, the extrinsic matrix \(T\) is calculated as:
\[T=T_{\mathsf{C}\leftarrow\mathrm{ego}_{u_{e}}}\times T_{\mathrm{ego}_{u_{e}} \leftarrow\mathrm{G}}\times T_{\mathrm{G}\leftarrow\mathrm{ego}_{u_{l}}} \times T_{\mathrm{ego}_{u_{l}}\leftarrow\mathrm{L}}, \tag{2}\]
where \(L\), \(C\), and \(G\) refer to the LIDAR, camera, and global coordinate frames, respectively.
Note that CMD is only applied on the points that are in the overlapping FOVs of LIDAR and camera, as shown in the colorized region in the input of the 2D knowledge branch in Fig. 2 (a). Formally, suppose the points set in the overlapping FOVs of LIDAR and camera is \(P^{O}=\{p_{i}\}_{i=1}^{N^{O}}\in\mathbb{R}^{N^{O}\times 3}\), where \(N^{O}\) denotes the number of points in the overlapping FOVs of the LIDAR and camera, then for each point \(p_{i}\) in \(P^{O}\), its corresponding projected coordinates \(\hat{p}_{i}=(\left\lfloor v_{i}\right\rfloor,\left\lfloor u_{i}\right\rfloor)\) should meet:
\[\left\{\begin{array}{l}0\leq\left\lfloor v_{i}\right\rfloor\leq H\\ 0\leq\left\lfloor u_{i}\right\rfloor\leq W,\end{array}\right. \tag{3}\]
where \(H\) and \(W\) refer to the height and width of corresponding images. Note that for feature maps under different scales, we first upsample the feature maps to the original scale and then use the corresponding point-to-pixel corresponding.
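A NumPy sketch of this projection is given below; it assumes `K` is the \(3\times 4\) intrinsic matrix and `T` the \(4\times 4\) extrinsic matrix defined above, and the positive-depth check is an implementation detail we add, not something stated in the paper.

```python
import numpy as np

def point_to_pixel(points, K, T, H, W):
    """Project LIDAR points (N, 3) into the image plane, Eq. (1), and
    return floor-rounded pixel coordinates plus the in-FOV mask, Eq. (3)."""
    pts_h = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    cam = (K @ T @ pts_h.T).T                # (N, 3) = [z*u, z*v, z]
    z = cam[:, 2]
    u = np.floor(cam[:, 0] / z)              # division by z_i as in Eq. (1)
    v = np.floor(cam[:, 1] / z)
    # Eq. (3): keep projections inside the H x W image; we also assume a
    # z > 0 check so points behind the camera are discarded.
    mask = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    return v.astype(int), u.astype(int), mask
```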
### _Cross-Modality Distillation_
Cross-Modality Distillation (CMD) distills the 2D knowledge from the camera branch (a 2D network) into the 2D knowledge branch (a 3D network), so that we can generate the 2D information for those points outside the FOV of the camera and do not need images during inference.
#### III-C1 Camera Branch
Unlike PMF [1] and 2DPASS [2] that train the camera branch with the ground truth projected from the LIDAR point cloud, we use a ResNet101 [22] which is pre-trained on the Cityscapes dataset [23]. Cityscapes is a popular dataset for 2D image semantic segmentation in the autonomous driving scenario. We adopt this strategy for two reasons. First, if we use the ground truth which is projected from the LIDAR point cloud, the camera branch may learn the overlapping knowledge with the 3D LIDAR branch since they share the same ground truth source. In contrast, the pre-trained camera branch using another dataset could provide additional information on top of the LIDAR point cloud. Second, we could freeze the camera branch during training since it is well-trained, so less back-propagation is needed for the whole structure. In this way, the training process consumes less GPU memory and time.
#### III-C2 2D Knowledge Branch
Following 2DPASS [2], we use SPVCNN [3] as the 3D network used in this paper, including the 2D knowledge branch and 3D LIDAR branch. Now let us formulate the process of CMD.
For points in the overlapping FOVs of LIDAR and camera \(p_{i}\in P^{O}\), we feed them into the 2D knowledge branch \(f_{2D}\) to obtain the features \(z_{2D}^{s}\):
\[z_{2D}^{s}=\{f_{2D}^{s}(p_{i})\}_{i=1}^{N^{O}}\in\mathbb{R}^{N^{O}\times d}, \tag{4}\]
where \(s=\{1,2,3,4\}\) and \(d\) refer to the feature map scale
Fig. 2: (a) Framework overview of CMDFusion. The 2D knowledge branch learns the 2D information from the camera branch through Cross-Modality Distillation (CMD). The 3D LIDAR branch and 2D knowledge branch interact with each other through Bidirectional Fusion Block (BFB) at each scale. (b) The structure of BFB. (c) The structure of the single fusion block.
and the dimension of the features, respectively. Then we obtain the corresponding features \(z_{C}^{s}\) of \(P^{O}\) from the camera branch through the point-to-pixel projection described in Sec. III-B. The CMD is realized through this loss \(\mathcal{L}_{CMD}\):
\[\mathcal{L}_{CMD}=\frac{1}{N^{O}}\sum\left\|z_{2D}^{s}-z_{C}^{s}\right\|_{2}, \tag{5}\]
where \(\left\|\cdot\right\|_{2}\) denotes the L2 loss. In this way, the 2D knowledge branch can mimic the function of the camera branch to provide the 2D information based on the 3D LIDAR point cloud. Although \(\mathcal{L}_{CMD}\) is only available for \(P^{O}\) during training, the trained 2D knowledge branch can be generalized to the whole point cloud \(P\) during inference.
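In code, the per-scale distillation term of Eq. (5) reduces to a single line; the sketch below assumes both feature tensors have already been gathered for the \(N^{O}\) in-FOV points at scale \(s\).

```python
import torch

def cmd_loss(z_2d, z_cam):
    """Cross-Modality Distillation loss at one scale s, Eq. (5): mean
    point-wise L2 distance between 2D-knowledge-branch features z_2D^s
    and camera-branch features z_C^s over the N^O in-FOV points."""
    return (z_2d - z_cam).norm(dim=1).mean()
```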
### _Bidirectional Fusion_
Our bidirectional fusion block (BFB) is composed of a 3D-to-2D fusion block and a 2D-to-3D fusion block, as shown in Fig. 2 (b). 2D-to-3D directional fusion explicitly enhances the 3D features via 2D feature injection, while 3D-to-2D implicitly enhances the 3D features via 2D knowledge branch supervision. Note that the 3D-to-2D fusion block and 2D-to-3D fusion block share the same single directional fusion structure, as shown in Fig. 2 (c), and the only difference is the input position. Fig. 2 (c) is the example of the 3D-to-2D single directional fusion block, and we can obtain the 2D-to-3D single directional fusion block by simply changing the positions of two inputs in Fig. 2 (c). Unlike CMD which can only be applied on the \(P^{O}\), BFB is applied on the whole point cloud. So \(z_{2D}^{s}\in\mathbb{R}^{N\times d}\) and \(z_{3D}^{s}\in\mathbb{R}^{N\times d}\) in this section.
#### III-D1 3D-to-2D Fusion
3D-to-2D fusion is illustrated in Fig. 2 (c). Formally, we first have:
\[z_{3D2D}^{s}=\texttt{MLP}_{2}(\texttt{Cat}(\texttt{MLP}_{1}(z_{3D}^{s}),z_{2D} ^{s})), \tag{6}\]
where \(\texttt{MLP}\) is a multilayer perceptron, and \(\texttt{Cat}\) refers to the feature concatenation. \(\texttt{MLP}_{1}\) is used to transfer the 3D feature \(z_{3D}^{s}\) into the 2D feature space. \(\texttt{MLP}_{2}\) is responsible for transferring the concatenated feature into the residual space of \(z_{2D}^{s}\). Then we have:
\[\tilde{z}_{2D}^{s}=z_{2D}^{s}\oplus\sigma(\texttt{MLP}_{3}(\texttt{Cat}( \texttt{GAP}(z_{3D2D}^{s}),z_{3D2D}^{s})))\odot z_{3D2D}^{s}, \tag{7}\]
where \(\oplus\) and \(\odot\) denote the element-wise plus and element-wise multiply, respectively. \(\texttt{GAP}\) means global average pooling, and \(\sigma\) means Sigmoid activation function. \(\texttt{GAP}\) is used to integrate the globable information, and \(\texttt{MLP}_{3}\) is used to transfer the feature into the attention value. \(\tilde{z}_{2D}^{s}\) represents the enhanced 2D features of scale \(s\). Then we concatenate \(\tilde{z}_{2D}^{s}\) and the enhanced features of previous scales \(z_{2DF}^{s-1}\) to obtain \(z_{2DF}^{s}\):
\[z_{2DF}^{s}=\texttt{Cat}(z_{2DF}^{s-1},\tilde{z}_{2D}^{s}), \tag{8}\]
where \(z_{2DF}^{s}\) contains all enhanced 2D features from scale 1 to \(s\). Finally, \(z_{2DF}^{4}\) contains the enhanced 2D features of all 4 scales, and we use a linear classifier \(g_{2D}\) to output the logits. The loss of 2D knowledge branch \(\mathcal{L}_{2D}\) is formulated as:
\[\mathcal{L}_{2D}=-\frac{1}{N}\sum ylog(g_{2D}(z_{2DF}^{4})_{y}), \tag{9}\]
where \(y\) refers to the ground truth, and \(g(z_{2DF}^{4})_{y}\) denotes the \(y^{\text{th}}\) logit of \(g(z_{2DF}^{4})\). Note that single directional fusion does not share MLPs for different scales.
#### III-D2 2D-to-3D Fusion
2D-to-3D fusion shares a symmetric structure with 3D-to-2D fusion. Formally, we have the following:
\[z_{2D3D}^{s}=\texttt{MLP}_{2}(\texttt{Cat}(\texttt{MLP}_{1}(z_{2D}^{s}),z_{3D }^{s})),\]
\[\tilde{z}_{3D}^{s}=z_{3D}^{s}\oplus\sigma(\texttt{MLP}_{3}(\texttt{Cat}( \texttt{GAP}(z_{2D3D}^{s}),z_{2D3D}^{s})))\odot z_{2D3D}^{s},\]
\[z_{3DF}^{s}=\texttt{Cat}(z_{3DF}^{s-1},\tilde{z}_{3D}^{s}). \tag{10}\]
Similarly, \(z_{3DF}^{4}\) is the final enhanced 3D feature, and a linear classifier \(g_{3D}\) is used to output the logits. The loss of 3D knowledge branch \(\mathcal{L}_{3D}\) is formulated as:
\[\mathcal{L}_{3D}=-\frac{1}{N}\sum ylog(g_{3D}(z_{3DF}^{4})_{y}). \tag{11}\]
Note that 2D-to-3D fusion blocks do not share MLPs and classifiers with 3D-to-2D fusion blocks.
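Since both directions instantiate the same single fusion block of Fig. 2 (c), a PyTorch sketch of the 3D-to-2D instance of Eqs. (6)-(7) is given below (the 2D-to-3D instance is obtained by swapping the two inputs); the exact MLP widths are assumptions.

```python
import torch
import torch.nn as nn

class SingleFusionBlock(nn.Module):
    """Single directional fusion block of Fig. 2 (c), written for the
    3D-to-2D direction: the output is the enhanced 2D feature of Eq. (7)."""

    def __init__(self, d):
        super().__init__()
        self.mlp1 = nn.Linear(d, d)       # maps z_3D^s into the 2D space
        self.mlp2 = nn.Linear(2 * d, d)   # Eq. (6): fuse concatenated features
        self.mlp3 = nn.Linear(2 * d, d)   # attention over the fused feature

    def forward(self, z_3d, z_2d):        # point features, both (N, d)
        z_fuse = self.mlp2(torch.cat([self.mlp1(z_3d), z_2d], dim=1))
        # GAP over the N points, broadcast back to every point.
        gap = z_fuse.mean(dim=0, keepdim=True).expand_as(z_fuse)
        attn = torch.sigmoid(self.mlp3(torch.cat([gap, z_fuse], dim=1)))
        return z_2d + attn * z_fuse       # Eq. (7): attentive residual
```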
### _Overall Training and Testing Process_
#### III-E1 Training
The overall loss \(\mathcal{L}_{all}\) for training the model is calculated as:
\[\mathcal{L}_{all}=\mathcal{L}_{CMD}+\mathcal{L}_{2D}+\mathcal{L}_{3D}. \tag{12}\]
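Assuming the `cmd_loss` sketch above and per-point logits from the two classifiers, Eq. (12) could be assembled as follows; the argument names and the in-FOV `fov_mask` are illustrative assumptions.

```python
import torch.nn.functional as F

def total_loss(feats_2d, feats_cam, logits_2d, logits_3d, labels, fov_mask):
    """Overall objective of Eq. (12): per-scale CMD losses on the in-FOV
    points plus the cross-entropy losses of Eqs. (9) and (11)."""
    l_cmd = sum(cmd_loss(z2d[fov_mask], zc)
                for z2d, zc in zip(feats_2d, feats_cam))
    l_2d = F.cross_entropy(logits_2d, labels)   # 2D knowledge branch, Eq. (9)
    l_3d = F.cross_entropy(logits_3d, labels)   # 3D LIDAR branch, Eq. (11)
    return l_cmd + l_2d + l_3d
```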
#### III-E2 Testing
We use the output of the classifier in the 3D LIDAR branch as the final prediction results. Specifically, the prediction result \(\hat{y}\) is:
\[\hat{y}=\operatorname*{arg\,max}_{i=1,2,\ldots,C}\;g_{3D}(z_{3DF}^{4})_{i}, \tag{13}\]
where \(C\) denotes the total number of classes in the dataset.
## IV Experiments
### _Experiment Settings_
#### IV-A1 Datasets
We conduct experiments on three large-scale outdoor datasets: SemanticKITTI [7], SemanticKITTI-O [1], and NuScenes [8]. SemanticKITTI provides dense segmentation labels for sequences 00-10, in which sequence 08 is used for validation and the others are used for training. The ground truth of sequences 11-21 is not available to the public and is used for testing. Two front-view color images are provided with each LIDAR scan in SemanticKITTI. We use the image captured by the left camera in our experiments. NuScenes contains 28130 samples for training, 6019 samples for validation, and 6008 samples for testing. Six images are provided for every LIDAR scan in NuScenes, and we randomly pick one image for training. SemanticKITTI-O is a subset of SemanticKITTI, which contains the points in the overlapping FOVs of the camera and LIDAR. The reason that PMF [1] proposed SemanticKITTI-O is that PMF cannot be applied to the points that are out of the FOV of the camera because of its 2D-to-3D fusion scheme.
#### IV-A2 Evaluation Metrics
We adopt the commonly used mean intersection-over-union (mIoU) of all classes as the evaluation metric. Specifically, mIoU is formulated as:
\[mIoU=\frac{1}{C}\sum_{c=1}^{C}\frac{TP_{c}}{TP_{c}+FP_{c}+FN_{c}}, \tag{14}\]
where \(TP_{c}\), \(FP_{c}\), and \(FN_{c}\) denote the numbers of true positive, false positive, and false negative points of class \(c\), and \(C\) is the number of classes.
In addition, we also report the frequency-weighted IOU (fwIoU) provided by the NuScenes leaderboard. FwIoU is a weighted version of mIoU by the point-level frequency of different classes.
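A minimal sketch of this metric, computed from a confusion matrix accumulated over the dataset (an assumed implementation choice):

```python
import numpy as np

def mean_iou(conf_mat):
    """mIoU from a C x C confusion matrix (rows: ground truth, columns:
    prediction), following Eq. (14)."""
    tp = np.diag(conf_mat).astype(float)
    fp = conf_mat.sum(axis=0) - tp
    fn = conf_mat.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1e-12)  # guard empty classes
    return iou.mean()
```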
#### IV-A3 Network Settings
The camera branch is a ResNet101 [22] network pre-trained using Cityscapes [23] dataset. Following 2DPASS [2], the 2D knowledge branch and 3D LIDAR branch are two modified SPVCNN [3] with the same structure. The feature maps from three branches are firstly reduced to the dimension of 128 and 256 for SemanticKITTI and NuScenes datasets, and then they are upsampled through bilinear interpolation to the original scale and used for CMD and BFB. As shown in Fig. 2 (a), we use feature maps from 4 scales for better performance.
#### IV-A4 Training and Inference Details
Our model is trained in an end-to-end manner with the SGD optimizer. The initial learning rate is set to be 0.24, following 2DPASS [2] and SPVCNN [3]. We train the model for 128 epochs for SemanticKITTI and 80 epochs for NuScenes dataset. We use the commonly used augmentation strategy in the LIDAR semantic segmentation, including global scaling with a random scaling factor sampled from [0.95, 1.05], and global rotation around the Z axis with a random angle. Image augmentation includes horizontal flipping and color jitter. The cropped image size is 1200 \(\times\) 360 (\(W\times H\)) for SemanticKITTI and 400 \(\times\) 240 for NuScenes. The voxel size in the 2D knowledge branch and 3D LIDAR branch is set to 0.1. We train our model with batch size 8 on 2 Nvidia Tesla A100 GPUs with 80G memory.
### _Results on Benchmarks_
#### IV-B1 Results on SemanticKITTI-O
PMF [1] provides the comprehensive benchmark on the SemanticKITTI-O validation set, as shown in Table I. The traditional 2D-to-3D fusion methods like PointPainting [28], RGBAL [29], and PMF conduct both training and inference based on the LIDAR and camera modality data, while our CMDFusion is trained on the LIDAR and camera pairs, but does not require the camera data during inference. We can see that our method significantly surpasses the PMF method by 6.2 mIoU. Note that our CMDFusion can be trained on the whole SemanticKITTI dataset based on our 2D knowledge branch and CMD, while PointPainting, RGBAL, and PMF can be only trained on the training set of SemanticKITTI-O due to their 2D-to-3D fusion scheme.
Fig. 3: Error visualization samples from SemanticKITTI and NuScenes datasets. Errors are in red color. We provide the results of SPVCNN (a 3D network), 2DPASS (a 3D-to-2D fusion network), and our method (a bidirectional fusion network).
#### IV-B2 Results on SemanticKITTI
Similar to 2DPASS [2], our CMDFusion is trained on the LIDAR and camera modalities, while only the LIDAR modality is required during inference, so 2DPASS and our CMDFusion can be tested on the whole LIDAR point cloud. However, our CMDFusion includes both 2D-to-3D and 3D-to-2D fusion while 2DPASS only includes 3D-to-2D fusion, so our method surpasses 2DPASS according to Table II. Note that 2DPASS only released the codebase and a checkpoint trained without the validation set in the training set and without instance-level augmentation, so we retrain their model following the same setting and evaluate it on the test set. We also try their released checkpoint on the test set and find that both achieve a similar mIoU (67.7). We follow the same setting for a fair comparison and our method achieves better performance (68.6 mIoU). We also try the instance-level augmentation from PolarMix [36] on 2DPASS and our method, and our method still surpasses 2DPASS by 0.6 mIoU. Note that since 2DPASS does not release the code to reproduce the performance reported in their paper, we only compare with them under the same training settings, where our method achieves better performance. To avoid the mis-correspondence between images and the LIDAR point cloud brought by the instance-level augmentation, we do not involve the camera branch during finetuning; we use the frozen 2D knowledge branch to provide 2D information and only finetune the 3D LIDAR branch. In general, our method achieves the best performance among all public methods.
#### IV-B3 Results on NuScenes
Table III shows that our method achieves better performance (by 2.0 mIoU) than 2DPASS. As with SemanticKITTI, the reported 2DPASS performance is the higher of our retrained model and their released checkpoint. Unlike the SemanticKITTI dataset, the NuScenes dataset provides 6 images to cover the FOV of the LIDAR, so the 2D-to-3D fusion methods like PMF [1] and 2D3DNet [35] can also be evaluated on the
whole LIDAR point cloud. Among all fusion-based methods, our CMDFusion achieves the best performance.
#### IV-B4 Visualization
We provide two samples from the SemanticKITTI and NuScenes datasets in Fig. 3. The top sample shows that 2DPASS and our method have fewer errors on the building compared to SPVCNN, which illustrates the effectiveness of multi-modality fusion. Besides, our method has better results on the car and truck than 2DPASS, because 2D-to-3D fusion is involved in our method but not in 2DPASS. In addition, we visualize the feature representations of 2DPASS and our method on the NuScenes dataset. As shown in Fig. 4, our method has more discriminative features, _e.g._, the pedestrian class is more separable in our method than in 2DPASS.
### _Runtime Analysis_
Table IV provides the runtime analysis on the NuScenes dataset. PointPainting, RGBAL, and PMF use 2D networks for semantic segmentation since the input is range-view or perspective-view, so they can be accelerated using TensorRT by a large margin (125.0 to 22.3 ms for the PMF method). In contrast, the 3D network in Cylinder3D, 2DPASS, and our method cannot be accelerated by TensorRT. Compared to PMF without TensorRT, our method has a smaller number of FLOPs and parameters during inference, while sharing the same runtime. Compared to 2DPASS, our method achieves better performance since two 3D networks are used during inference (2D Knowledge branch and 3D LIDAR branch), which inevitably consumes more runtime.
### _Ablation Study_
We conduct a careful ablation study to show the effectiveness of the different modules in our method. The comprehensive ablation results are based on the SemanticKITTI-O dataset, since the classical 2D-to-3D fusion without CMD can only be applied to the points in the overlapping FOVs of the LIDAR and camera. The results are in Table V. The baseline refers to a single SPVCNN 3D network. We can see that both 3D-to-2D fusion and 2D-to-3D fusion are helpful, but 2D-to-3D fusion brings more performance gain since the camera information is explicitly injected into the LIDAR branch. After we replace the camera branch (CB) with a frozen CB pre-trained on Cityscapes, the performance is further improved. The reason may be that the pre-trained camera branch provides additional information for the current LIDAR point cloud dataset. Then we introduce cross-modality distillation (CMD) to let a 3D network output the 2D information, so that the model can be trained on the whole dataset rather than only the overlapping FOVs of the camera and LIDAR. As a result, the performance is greatly boosted by the CMD. Similar to 2DPASS, we also apply the voting test-time augmentation (TTA), _i.e._, rotating the input point cloud by 12 angles around the Z axis and averaging the prediction scores as the final outputs. TTA brings better performance by 2.46 mIoU.
## V Conclusion
In this paper, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion) to fuse the information of the camera and LIDAR for better LIDAR semantic segmentation. Compared to the 2D-to-3D fusion-based method PMF [1], our proposed Cross-Modality Distillation (CMD) module solves the problem that the camera branch cannot output the 2D information for those points out of the FOV of the camera. Compared to the 3D-to-2D fusion-based method 2DPASS [2], our proposed Bidirectional Fusion Block (BFB) contains additional 2D-to-3D fusion, which explicitly strengthens the 3D information
Fig. 4: Feature visualization via t-SNE. left: 2DPASS. Right: Ours.
We show the effectiveness of our proposed method through comprehensive experiments on the SemanticKITTI and NuScenes datasets. Overall, we provide an alternative approach to fully utilize the multi-modality information for 3D semantic segmentation, and introduce a new and feasible way to handle the problem that the FOVs of multiple sensors do not fully overlap. We hope this paper can provide inspiration for future work on autonomous vehicles and robots.
## Acknowledgment
This work is supported by Alibaba Group through Alibaba Research Intern Program.
|
2310.01240 | Detection of long-lasting aurora-like radio emission above a sunspot | Auroral radio emissions in planetary magnetospheres typically feature highly
polarized, intense radio bursts, usually attributed to electron cyclotron maser
(ECM) emission from energetic electrons in the planetary polar region that
features a converging magnetic field. Similar bursts have been observed in
magnetically active low-mass stars and brown dwarfs, often prompting analogous
interpretations. Here we report observations of long-lasting solar radio bursts
with high brightness temperature, wide bandwidth, and high circular
polarization fraction akin to these auroral/exo-auroral radio emissions, albeit
two to three orders of magnitude weaker than those on certain low-mass stars.
Spatially, spectrally, and temporally resolved analysis suggests that the
source is located above a sunspot where a strong, converging magnetic field is
present. The source morphology and frequency dispersion are consistent with ECM
emission due to precipitating energetic electrons produced by recurring flares
nearby. Our findings offer new insights into the origin of such intense solar
radio bursts and may provide an alternative explanation for auroral-like radio
emissions on other flare stars with large starspots. | Sijie Yu, Bin Chen, Rohit Sharma, Timothy Bastian, Surajit Mondal, Dale Gary, Yingjie Luo, Marina Battaglia | 2023-10-02T14:27:51Z | http://arxiv.org/abs/2310.01240v1 | # Discovery of long-lasting auroral-like radio emission above a sunspot
###### Abstract
Auroral radio emissions in planetary magnetospheres typically feature highly polarized, intense radio bursts, usually attributed to electron cyclotron maser (ECM) emission from energetic electrons in the planetary polar region that features a converging magnetic field. Similar bursts have been observed in magnetically active low-mass stars and brown dwarfs, often prompting analogous interpretations. Here we report observations of long-lasting solar radio bursts with high brightness temperature, wide bandwidth, and high circular polarization fraction akin to these auroral/exo-auroral radio emissions, albeit two to three orders of magnitude weaker than those on certain low-mass stars. Spatially, spectrally, and temporally resolved analysis suggests that the source is located above a sunspot where a strong, converging magnetic field is present. The source morphology and frequency dispersion are consistent with ECM emission due to precipitating energetic electrons produced by recurring flares nearby. Our findings offer new insights into the origin of such intense solar radio bursts and may provide an alternative explanation for auroral-like radio emissions on other flare stars with large starspots.
## Main text
Stellar magnetism is not only essential for exploring the fundamentals of stellar magnetic activity and internal dynamos, but also for understanding its influence on stellar ecosystems, including potentially habitable exoplanets. Radio observations of coherent emission offer unique diagnostics of physical parameters in stellar atmospheres. Specifically, radio electron cyclotron maser (ECM) emission occurs at low harmonics of the gyrofrequency \(\nu_{\rm B}\), providing direct measurements of magnetic field strengths above the surface of stars. Radio ECM bursts, commonly observed in planetary magnetospheres, are attributed to the ECM instability driven by the precipitation of energetic electrons into the polar regions of the planets [1]. These radio bursts are distinguished by their highly polarized, broadband, and strongly beamed nature. Similar, albeit more intense, radio emissions have been identified in low-mass stars [2]. In particular, recent evidence of periodic broadband radio pulses with high circular polarization from M-dwarfs [2, 3, 4] favors the planetary hypothesis that these stellar radio bursts are radio ECM emission generated in the low-density cavity of a predominantly dipolar field above the polar regions, in synchrony with the stellar rotation [5, 6]. If interpreted accurately, these findings offer a new avenue for probing magnetic fields in the coronae of stellar and sub-stellar objects, a critical parameter for understanding their internal dynamos and examining exo-space weather and habitability [7].
A key challenge in comprehending these stellar radio bursts is characterizing the emission source region, due to the absence of spatially resolved observations except for rare instances of Very Long Baseline Interferometry (VLBI) observations [8, 9]. As the magnetic field topologies of radio-ECM-emitting stars remain largely undetermined, it is debated whether these emissions originate from a global dipole magnetic field driven by auroral-like activities in the magnetosphere [10], or from localized magnetic field structures, such as starspots, driven by flare-like magnetic activities in the coronae [11, 12]. The Sun, owing to its proximity, offers valuable context for studying radio ECM emissions analogous to those in (sub)stellar systems. Analysis of such radio emissions from the Sun could inform our understanding of other stars and carries significant implications for uncovering the origins of intense radio bursts in (sub)stellar systems, particularly those with sizable starspots. However, while transient radio ECM bursts have been sporadically recorded on the Sun [13], persistent radio ECM emissions akin to those documented in the stellar literature have yet to be observed.
## Results
We observed the Sun using the Karl G. Jansky Very Large Array (VLA) on 2016 April 9 in both the L band (1-2 GHz) and the S band (2-4 GHz) using two sub-arrays. In the 1-2 GHz band, we detected a radio emission event from a bipolar sunspot group, dominated by a large leading spot with negative polarity (Fig. 1). The dynamic spectrum of the radio source (Fig. 2a) obtained by the VLA reveals long-duration, repeated bursts across a broad frequency range from 1 to 1.7 GHz, persisting for the entire 4.5-h observation, with distinct bursts occurring at an average rate of \(\sim\)12 h\({}^{-1}\). These bursts are also found in simultaneous 1 GHz solar radio flux data collected by the Nobeyama Radio Polarimeters (NoRP) and extend down to 245 MHz over an entire week when data from the Radio Solar Telescope Network (RSTN) is available (Extended Data Fig. 1&2), indicating their broadband nature (\(\delta\nu/\nu>150\%\)). The radio bursts appear "auroral-like", as their temporal, spectral, and polarization characteristics, including high circular polarization fraction and brightness temperature, wide bandwidth, and extended duration, resemble the documented planetary aurora radio emissions and stellar radio ECM emissions.
The sunspot group, part of NOAA active region (AR) 12529, is \(52^{\circ}\) east and \(10^{\circ}\) north of the solar disk center. The radio source, \(\sim 40^{\prime\prime}\) (30 Mm) from the polarity inversion line (Extended Data Fig. 3), is accompanied by sporadic flare activity observed by the Geostationary Operational Environmental Satellites (GOES) in X-ray and the Atmospheric Imaging Assembly (AIA[14]) onboard the Solar Dynamics Observatory (SDO) in (extreme) ultraviolet ((E)UV). The peak flux density of the radio source in Stokes \(I\) is approximately 20 sfu (1 sfu=\(10^{4}\) Jy) at 1 GHz and 2000 sfu at 245 MHz (Fig. 3(a)), with an equivalent radiation brightness temperature of \(10^{11}\) K at 1 GHz and \(10^{12}\) K at 245 MHz, based on an inferred source size of 1,000 km (Methods). The brightness temperature is several orders of magnitude greater than the thermal temperature of the flaring plasma in the active region, \(T\approx 6\times 10^{6}\) K, constrained by the emission measure analysis based on the GOES X-ray data (Methods). Notably, the temporal variation of the radio luminosity from the sunspot source is poorly correlated with the X-ray luminosity of the flares (see Fig. 2 and Extended Data Fig. 1&2), suggesting that the bulk of the radio emission is not directly generated by the thermal and nonthermal electron populations trapped in the flare sites.
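As a quick sanity check of the quoted brightness temperatures, the Rayleigh-Jeans relation \(T_{b}=Sc^{2}/(2k_{B}\nu^{2}\Omega)\) can be evaluated directly. The snippet below is an illustrative back-of-the-envelope estimate only (a uniform circular source viewed from 1 AU is our assumption), not the analysis code used in this work.

```python
import math

JY = 1.0e-23                   # erg s^-1 cm^-2 Hz^-1
K_B, C = 1.381e-16, 2.998e10   # Boltzmann constant, speed of light (CGS)
AU_KM = 1.496e8                # astronomical unit [km]

def brightness_temp(flux_sfu, freq_hz, source_size_km):
    """Rayleigh-Jeans brightness temperature T_b = S c^2 / (2 k_B nu^2 Omega)
    for a circular source of the given diameter seen from 1 AU."""
    s_cgs = flux_sfu * 1.0e4 * JY          # 1 sfu = 10^4 Jy
    theta = source_size_km / AU_KM         # angular diameter [rad]
    omega = math.pi * theta ** 2 / 4.0     # source solid angle [sr]
    return s_cgs * C ** 2 / (2.0 * K_B * freq_hz ** 2 * omega)

# 20 sfu at 1 GHz from a ~1,000 km source gives T_b ~ 2e11 K,
# consistent with the ~1e11 K order quoted above
print(f"{brightness_temp(20.0, 1.0e9, 1.0e3):.1e}")
```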
The overall radio flux density spectrum of the active region (Fig. 3(a)) consists of two distinct spectral components, exhibiting a minimum near 2 GHz (Methods). We attribute the radio emission above the frequency minimum to incoherent gyrosynchrotron radiation emitted by mildly relativistic electrons energized by the flare process in the AR. The 1-minute VLA snapshot at 3.8 GHz obtained at 21:52 UT shows a compact source near the bright EUV flare arcade (Fig. 1) with a relatively low degree of circular polarization \(\lesssim 20\%\) (Fig. 3(b)). Its source location is consistent with microwave imaging spectroscopy observations of other flares at similar frequencies, which have been interpreted as gyrosynchrotron radiation[15]. Given the proximity of the source to the flare arcade, low polarization degree, and flux density, we conclude that the radio emission at frequencies above 2 GHz is mainly gyrosynchrotron emission.
For frequencies below 2 GHz, the emission is dominated by the highly variable burst component. From 1.7 GHz to 245 MHz, the emission spectrum shows an increase of three orders of magnitude in flux density toward lower frequencies (Fig. 3(a)). The emission is nearly 100% right-hand circularly polarized at frequencies between 1.0 and 1.4 GHz (Fig. 3(b)). Moreover, the flux density of the highly variable burst component does not follow the Güdel-Benz (GB) radio/X-ray relation, which describes a tight relationship between the observed X-ray and radio intensity from solar and stellar coronae[16] (Methods). Specifically, the ratio of the radio to X-ray luminosities for frequencies below 1 GHz violates the GB relation by more than two orders of magnitude (Fig. 3(c)). Using dynamic imaging spectroscopy with the VLA, we have made images at hundreds of frequencies from 1 to 1.5 GHz at numerous selected times. During burst periods, all radio images are dominated by a localized radio source near the sunspot, exhibiting clear dispersion in space at different frequencies. We derive the source centroid of each radio image made at a given frequency. In Fig. 4(a), we show the spatial distribution of the centroid locations by frequency for seven example time intervals, each spanning a duration of 20 seconds. The frequency-dependent source centroid locations of each burst form a close-to-linear dispersion in space, with high-frequency ends oriented towards the sunspot. Despite the overall location of the radio source relative to the sunspot remaining relatively stationary throughout the entire duration of the burst group, the distribution of centroid locations of individual bursts occurring at different times displays a rapid variation in the frequency-spatial dispersion and an evident spread in their position angles with respect to the sunspot (Fig. 4(h)).
The high circular polarization degree and brightness temperature both confirm that the observed emission is coherent in nature. We employ data-constrained coronal magnetic field and plasma density modeling to investigate the physical properties of the emission site (Methods). We find a region above the sunspot with a ratio of the electron plasma frequency \(\nu_{\rm pe}\) to the electron gyrofrequency \(\nu_{\rm ce}\) smaller than unity (\(R=\nu_{\rm pe}/\nu_{\rm ce}<1\); Extended Data Fig. 8(c)).
The distribution of the predicted radio source locations in frequency (Fig. 4(h)) aligns well with the spectro-spatial characteristics of the observed radio sources (Fig. 5a). The near-linear spatial dispersion as a function of frequency is well reproduced by individual unresolved emission sites aligned with the magnetic field lines. The increase in magnetic field strength towards the sunspot accounts for higher-frequency sources (emitted from higher-\(B_{\rm G}\) regions) being closer to the sunspot. The overall spectro-spatial distribution of the source locations of all the bursts with different position angles can be explained by the projection of distinct radio-hosting magnetic field lines onto the plane of the sky (Extended Data Fig. 10 and Methods).
Our analysis suggests most radio-hosting field lines have inclination angles of 30\({}^{\circ}\) to 60\({}^{\circ}\) relative to the line of sight (Extended Data Fig. 8(f)&10), consistent with the possible range for second-harmonic ECM emission to escape through a transparent window at the overlying absorbing layer at the third harmonic (Extended Data Fig. 8(f)). We note that the visibility of the observed radio emission is likely a geometric coincidence of the beaming angle, the source location, and the transparent opacity window, which are subject to change with time due to solar rotation and temporal evolution of the active region. A consequence of the rather stringent visibility requirement is that the radio emissions are likely only visible for a small fraction of the time over the entire course of the presence of the active region on the solar disk. The small filling factor in time may also explain why the sunspot radio ECM emission is observed only when the sunspot is between longitudes of \(-70^{\circ}\) and \(-10^{\circ}\) (east hemisphere) and sharply diminishes afterward, despite an increased flare activity during the sunspot's west-hemisphere transit (Extended Data Fig. 2; Methods).
## Discussion
Our interpretation of the observation is illustrated in Fig. 5(c). The observed coherent radio emission is interpreted as ECM emission induced by persistent energetic electrons trapped in the large-scale magnetic loops anchored at the sunspot. Analogous to the case of planetary radio aurora emissions[17], the converging magnetic field above the sunspot can form a low \(R\) cavity, providing a favorable geometry for radio ECM emission. Energetic electrons in the loops with initial pitch angles exceeding a threshold reflect at the magnetic mirror on the sunspot end of the loops, developing an anisotropic velocity distribution unstable to ECM emission in the cavity. The source energetic electrons are likely associated with the sporadic flare activity occurring in the nearby active region, which is responsible for accelerating electrons to high energies and supplying them into the overarching magnetic loop system. The persistent radio emission suggests a continuous, albeit sporadic, operation of maser radiation, necessitating frequent replenishment of energetic electrons to maintain the electron anisotropy in the radio source. We speculate that weak yet frequent flare activity in the AR without apparent X-ray enhancement may drive electron acceleration[18, 19].
Figure 1: **Auroral-like radio source seen from a sunspot associated with solar flares.****a**, Example of a VLA snapshot image at 1.0 GHz (orange) of the radio emission from the sunspot, with the image of the radio-hosting NOAA 12529 AR observed in EUV wavelengths by the Atmospheric Imaging Assembly (AIA) at 171 Å (blue), 94 Å (cyan) and 1600 Å (red), overlaid on the photospheric image observed by the Helioseismic and Magnetic Imager (HMI) aboard the Solar Dynamics Observatory (SDO). Also shown is a VLA 3.8 GHz image (magenta) of the radio emission from the flare site. **b**, Closer view of the sunspot region (box in **a**). The 1.0 GHz and 3.8 GHz sources are shown as yellow and magenta contours, respectively, at 50%, 70%, and 90% of the maximum.
The absence of spatial coincidence and tight temporal correlation between flare activity and the coherent radio emission implies that the ECM emission is not directly produced by energetic electrons at the flare site. Instead, the flare-accelerated electrons may have entered and become trapped in the overarching magnetic loops before precipitating to the ECM source region. During this process, transport effects such as collision, scattering, and trapping in the large-scale loops can act as an electron "reservoir" decoupling the time correlation between the flare activities and radio ECM bursts above the sunspot. At the same time, the energetic electrons can be scattered into the loss cone and enter the region above the sunspot, driving and modulating the ECM emission. We also note that magnetic field perturbations in the loop system, potentially caused by propagating disturbances originating from the sunspot [20, 21] or the flare site [22] with a period of 3-5 minutes, may affect the strength and direction of the magnetic field within the source region. Since ECM emission heavily relies on the local magnetic field, these perturbations could result in additional modulation in the observed temporal variability of the ECM emission.
The in-band radio power for an isotropic emitter, integrated over the L band (1-2 GHz), increases by up to 3.5 \(\times 10^{18}\) erg s\({}^{-1}\) during peaks against the pre-burst baseline level of \(\sim\)0.5 \(\times 10^{18}\) erg s\({}^{-1}\) (Fig. 2(c)). If the radio emission at frequencies below 1 GHz follows the spectral shape (Fig. 3(a)) recorded on April 12--the most powerful bursts recorded in the solar rotational cycle--the total radio power would be 100 times higher, reaching 10\({}^{20}\) erg s\({}^{-1}\). The flares release radiative energy at a rate on the order of \(10^{26}\) erg s\({}^{-1}\), as determined by X-ray diagnostics (Fig. 2(c)). Observations of solar flares suggest that a significant fraction of flare energy is initially converted into non-thermal electron kinetic energy and subsequently transformed into various energy forms [23]. Previous studies of solar energetic electrons observed at 1 AU suggest that 0.01-1% of the flare-accelerated energetic electrons escape from the energy release sites and enter interplanetary space [24]. Using this fractional value as the lower limit to estimate the total power of the energetic electrons trapped in the corona, we obtain an average power of \(>10^{22}\)-\(10^{24}\) erg s\({}^{-1}\), more than two orders of magnitude greater than that of the radio emission. Therefore, we conclude that the flare-supplied
Figure 2: **The relationship between X-ray and radio emissions.****a**, Dynamic spectra of the right circularly polarized radio emission from the sunspot, as detected by the VLA. **b**, Time profiles of the GOES 1.5–12 keV (black line) and 3–24 keV (gray line) X-ray fluxes, with corresponding GOES X-ray flare classes indicated. **c**, Time profile of the total radiative energy loss and the in-band (1–2 GHz) radio power for an isotropic emitter. Grey shaded bars represent gaps in the radio observations.
energetic electrons should contain adequate energy to power the observed coherent radio emission.
Our findings may have important implications for the interpretation of similar radio emissions observed at other stars and star-planetary systems[25], because they share similar spectral-temporal properties and a similar deviation from the GB radio/X-ray relation[3, 26, 27]. Stellar ECM emissions are generally explained within the paradigm of planetary auroral radio emission from a dipolar magnetosphere, driven by either star-planet interactions[26] or co-rotation breakdown of a plasma disk[28] for cooler dwarf stars, and centrifugal breakout reconnection for rapidly rotating early-type stars[29]. These stars feature magnetic activity[30, 31] and are notable for the presence of starspots[32]. If these stars have a continuously replenished population of energetic electrons due to flaring activity, these electrons would have the opportunity to be trapped above a starspot and facilitate the growth of ECM emission, even without a globally dipolar magnetic field geometry. This scenario may also be viable for interpreting certain radio emissions from apparently non-flaring, quiescent stars[2, 26], which show a similar magnitude of deviation from the tight GB relation in their radio/X-ray correlation: a large number of small-scale energy release events may be at work to supply energetic electrons sufficient for powering a persistent radio source under favorable emission conditions. This scenario is akin to our case, in which many episodes of intense radio bursts are observed during the quiescent period when there is no flare activity. However, we note that the interpretation may not apply to the ECM radio aurorae on the coolest ultracool dwarfs[33, 34, 35, 36, 37], which lack starspots hosting stable loops and flare-driven energetic electron acceleration. We also note that our observed bursts above the sunspot are three orders of magnitude weaker than those reported on certain flare-active M-dwarfs with extremely high brightness temperatures (up to \(10^{14}\) K for those reported by, e.g., [27]; see Methods for further discussion). While such a difference in radiation intensity is an outstanding topic for further investigation, we suggest that the extremely high magnetic field strength and activity level on these M dwarfs could be a contributing factor.
Periodic radio emission occasionally observed from stars, coinciding with stellar rotation, has been interpreted as emanating from their polar regions[38, 39, 40, 2, 5, 3, 10, 11, 30], similar to planetary auroral emission. Given the presence of a persistent starspot lasting
Figure 3: **Synoptic radio spectra of the NOAA 12529 AR.****a**, Stokes \(I\) spectra detected with VLA, RSTN and NoRP measured between April 9–13, 2016. The peak spectra of the highly variable burst component measured by RSTN and NoRP at frequencies below 2 GHz on April 10 and 12 are shown as filled and open blue circles with error bars, respectively. The spectra at frequencies above 2 GHz during April 9–13, dominated by incoherent gyrosynchrotron emission, are shown as gray circles. The error bars represent the \(5\sigma_{rms}\) and \(1\sigma_{rms}\) uncertainties at each frequency for frequencies below and above 2 GHz, respectively. **b**, The polarization degree (Stokes \(V/I\)) of the radio emission from the active region detected with VLA as a function of frequency. **c**, Güdel–Benz (GB) relation between radio luminosity \(L_{R}\) and X-ray luminosity \(L_{X}\) for a wide range of magnetically active stars (gray symbols). Stellar radio emissions at frequencies of a few GHz and hundreds of MHz are represented as yellow squares and circles, respectively. The sunspot radio emission in this study is denoted by the blue circle.
for multiple rotations [32] and recurring flaring activity supplying energetic electrons, we argue that the sharply beamed ECM emission above the starspots can also lead to auroral-like radio bursts modulated at the rotation period of the host star. It is noted that the emission may also be modulated by the short- and long-term variability of the flare activity of the host star, which may be responsible for the observed variations in the burst duty cycle across different timescales detected from some stars [3, 41, 42, 43, 44] (Methods). Previously observed stellar ECM emissions may not be interpreted exclusively as magnetospherically driven auroral emissions from the polar regions, but could also originate from regions above strong starspots. In sum, our results not only clarify the origin of a type of radio burst on the Sun, but also prompt a re-examination of the physical mechanisms behind certain auroral-like radio emissions from other stars. This study also highlights the need for more detailed observations in the future to better understand the intricacies of the ECM emission on the Sun, including the responsible plasma instability, the local driver of the ECM (e.g., a parallel electric field [45]), and the energy efficiency of the ECM emission, which remain largely unexplored.
Figure 4: **3D modeling of NOAA 12529 AR and the associated ECM emission sites.****a**, Detailed view of the sunspot auroral radio source (white box in Fig. 1b) at seven example times. The convergent magnetic field geometry above the sunspot is clearly delineated by the ECM sources of different burst events. The grayscale background is the SDO/HMI continuum image of the photosphere. **b**, 3D magnetic field model of the AR, with \(X\)- and \(Y\)-axes pointing to heliographic longitude and latitude on the solar disk, respectively, and the \(Z\)-axis pointing radially from the Sun’s center. White curves represent selected field lines for the closed fields connected to the sunspot umbra. Similar to Fig. 1, reddish sources on the photospheric surface represent flare loop footpoints observed by AIA at 1600 Å. **c**, Side view of the magnetic field model viewed towards the positive \(Y\)-axis. The green dots above the sunspot denote the sites of the second harmonic ECM emission sources at \(\nu_{\rm GHz}=1.2\) at selected field lines, outlining an iso-gauss dome of \(B_{\rm G}=214\). **d**, Same as (c), but showing the ECM emission sites spanning a magnetic field strength from 179 to 268 G, colored according to field strength \(B_{\rm G}\), which directly maps to ECM frequency \(\nu_{\rm GHz}\) from 1.0 to 1.5 GHz using the relation \(B_{\rm G}\approx 357\nu_{\rm GHz}/s\), where \(s=2\) for the second harmonic ECM emission. **e–g**, Same as (d), but viewed towards the negative \(X\)-axis, the negative \(Z\)-axis, and the LOS, respectively. **h**, Same as (g), but overlaid with the gyro-resonance opacity. Colorbars of magnetic field strength and opacity shown in **(c–h)** are displayed alongside.
## Methods
### GOES X-ray data reduction
The X-ray Sensor (XRS) onboard the Geostationary Operational Environmental Satellite (GOES) continuously monitors the visible solar hemisphere in two energy bands, one in 1-8 Å (or 1.5-12 keV) and another in 0.5-4 Å (or 3-24 keV), both of which are dominated by thermal continuum emission. The GOES/XRS data were reduced using the standard analysis tool available in the SolarSoftWare IDL (SSWIDL) package ([https://www.lmsal.com/solarsoft/ssw_setup.html](https://www.lmsal.com/solarsoft/ssw_setup.html)). Because AR 12529 was the only flaring AR at the time of the radio observation, we attributed the impulsive X-ray emission shown in Extended Data Fig. 1 (bottom) to thermal bremsstrahlung emitted by the coronal plasma heated by the flare activity in this AR. We subtracted a constant background from the XRS time series, and computed the total radiative energy loss rate shown in Fig. 2(c) from the expressions[46] available in the SSWIDL GOES tool. We note that the root-mean-square flux uncertainty of the 1.5-12 keV and 3-24 keV channels is \(5\times 10^{-9}\) and \(2\times 10^{-10}\) W m\({}^{-2}\), respectively, each of which is two orders of magnitude lower than the corresponding background level. Additionally, since the flares exceed the background level by one order of magnitude at their peak, the detailed choice of the background for subtraction does not impact the results.
### Radio data reduction: VLA
The observation was made when the VLA was in C configuration, for which the longest antenna baseline is over 3 km, yielding an angular resolution of \(17^{\prime\prime}.3\times 29^{\prime\prime}.3/\nu_{\rm GHz}^{2}\) at the time of the observation. We observed the Sun using two subarrays in dual-polarization mode at 50 ms cadence, with 512 and 1024 frequency channels of 2 MHz spectral resolution, covering 1-2 GHz (L band; 14 antennas) and 2-4 GHz (S band; 13 antennas), respectively. The flux calibration, bandpass calibration, and complex gain calibration were performed using the standard celestial calibrator 3C48. When observing the Sun, 20 dB attenuators were switched into the signal path of each antenna and each polarization. The delay and amplitude introduced by the attenuators were removed following the technique described in Chen et al. (2013)[47]. Data were reduced using the Common Astronomy Software Applications (CASA Release 5.4; [http://casa.nrao.edu](http://casa.nrao.edu)).
The data contaminated by radio frequency interference, particularly those near the GPS L1 frequency (1.58 GHz), are flagged. The slowly varying background emission contributed by the quiescent Sun is subtracted from the VLA data. The spectrotemporal intensity variation (dynamic spectrum) intrinsic to the sunspot radio source is estimated by averaging the amplitude of the complex visibility data over short baselines (0.2-0.8 km) that are sensitive to the spatial scale of the active region. The radio source is located at an angular distance of \(13^{\prime}.4\) from the antenna pointing at the solar disk center, which is comparable to the size of the primary beam of the VLA antennas \(\theta_{\rm PB}=45^{\prime}/\nu_{\rm GHz}\) in the frequency range of 1-4 GHz, where \(\nu_{\rm GHz}\) is the observing frequency in GHz. To ensure the absolute flux density of the source is correct, we scale the observed flux density of the VLA dynamic spectrum in the L and S bands up to that at the antenna pointing center according to the primary beam response (a Gaussian beam pattern is used).
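For orientation, the correction factor implied by this Gaussian primary beam model is simple to compute; the sketch below is illustrative (the 13'.4 offset and the FWHM formula come from the text, while the function itself is ours, not part of CASA).

```python
import numpy as np

def primary_beam_scale(freq_ghz, offset_arcmin=13.4):
    """Factor by which to scale the observed flux density to correct for
    a Gaussian primary beam with FWHM = 45'/nu_GHz, for a source offset
    13'.4 from the pointing center."""
    fwhm = 45.0 / freq_ghz                                        # arcmin
    response = np.exp(-4.0 * np.log(2.0) * (offset_arcmin / fwhm) ** 2)
    return 1.0 / response

# roughly 1.3x at 1 GHz and 1.7x at 1.5 GHz for the quoted offset
print(primary_beam_scale(1.0), primary_beam_scale(1.5))
```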
Figure 5: **Schematic sketch of the sunspot auroral radio emission.** Physical scenario of electron acceleration by flaring activity, resulting in sunspot radio aurorae. Individual sunspot-rooted magnetic field lines filled with trapped electrons are responsible for individual auroral radio emission events.
An overview of the radio emission from the sunspot in the L band is shown in Extended Data Fig. 4 for the 4.5-hr duration of the observation. Extended Data Fig. 5 shows a detailed view of the L-band dynamic spectrum for one minute of observation after 21:48 UT.
Radio imaging of each pixel in the dynamic spectrum where the bursts are found provides key information on the spatial variation of the radio source as a function of time and frequency. We produce independent radio images at all the spectral channels between 1.0 and 1.5 GHz for every four time pixels in the dynamic spectrum using the CASA task tclean. We note that a periodic pattern is present in the observed visibility phases as a function of time. This is due to the telescope not updating its geometric delays at a rate that matches the time cadence of the data, known as "delay clunking", which introduced an offset with respect to the true phase center at a fixed rate of \(-2^{\prime\prime}.5/\mathrm{min}\) in right ascension. This effect is mitigated by applying a compensating offset of \(2^{\prime\prime}.5/\mathrm{min}\) along the same direction to the sky coordinates of the radio images. It is known that the absolute position of the source can be determined to a small fraction (\(\sim 10\%\)) of the synthesized beam under typical conditions, depending on atmospheric phase stability, the SNR on target, etc. We verified the alignment accuracy of the radio images (without background subtraction) using features in the NOAA 12529 active region seen in both radio and EUV wavelengths. Since the source of the sunspot radio bursts is only marginally resolved by the VLA, we treat the radio source above 50% of the maximum as a point-like source. The uncertainty of the centroid location of the source \(\sigma\) can be determined by [48]:
\[\sigma\approx\theta_{\mathrm{HPBW}}/(\mathrm{SNR}\sqrt{8\ln 2})\, \tag{1}\]
where \(\theta_{\mathrm{HPBW}}\) is the half-power beamwidth of the synthesized beam and SNR is the signal-to-noise ratio of the image. With a typical SNR value for the bursts exceeding 100, we can infer a position accuracy to \(\lesssim 0^{\prime\prime}.2\). However, the propagation of radio waves through the inhomogeneous corona toward the observers is known to be susceptible to scattering effects [49, 50]. Therefore, the estimate of uncertainty given above should only be considered as a lower limit. In fact, by obtaining the centroid locations of all frequency-time pixels on the sunspot radio bursts within a short time period (40 s) and frequency range (40 MHz), we find that they are distributed rather randomly within an area of FWHM size of \(\sim 1^{\prime\prime}.5\times 1^{\prime\prime}.5\) (Extended Data Fig. 10). Hence, we estimate the actual position uncertainty of the centroids as \(\sigma\approx 1^{\prime\prime}.0\).
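For reference, Eq. (1) is straightforward to evaluate numerically; the beam size and SNR below are illustrative values, not outputs of the imaging pipeline.

```python
import math

def centroid_uncertainty_arcsec(hpbw_arcsec, snr):
    """Statistical centroid uncertainty of Eq. (1):
    sigma ~ theta_HPBW / (SNR * sqrt(8 ln 2)).
    Coronal scattering can dominate, so treat this as a lower limit."""
    return hpbw_arcsec / (snr * math.sqrt(8.0 * math.log(2.0)))

# an illustrative ~40" beam at SNR = 100 gives sigma ~ 0".17,
# consistent with the <~0".2 statistical limit quoted above
print(f"{centroid_uncertainty_arcsec(40.0, 100.0):.2f}")
```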
### Radio data reduction: RSTN and NoRP
The USAF Radio Solar Telescope Network (RSTN) monitors the sun-as-a-star radio flux at 8 fixed frequencies -- 0.245, 0.41, 0.61, 1.415, 2.695, 4.995, 8.8 and 15.4 GHz -- at a cadence of one second. The data in use were recorded by the RSTN Learmonth site. The daily radio fluxes at frequencies \(\nu_{\mathrm{GHz}}>1\) show an overall inverted-U shape, which is mainly due to the varying zenith-angle-dependent opacity owing to atmospheric absorption of the radio waves. To correct for this effect, we first fit the day-to-day variation of the local noon radio flux at each frequency using a third-order polynomial function over one solar rotation cycle from 2016 April 6 to 2016 May 2. Next, the daily inverted-U shape of the radio light curves at each frequency due to atmospheric absorption was fit using an eighth-order polynomial function and subsequently removed. Finally, the resulting light curves are scaled to the fitted day-to-day noon flux curve. In addition, the Nobeyama Radio Polarimeters [51] (NoRP) perform full-Stokes polarimetric observations of the Sun at six frequencies -- 1, 2, 3.75, 9.4, 17, 35 GHz -- with a temporal resolution of 0.1 second. Data were reduced using the NoRP package in the SolarSoftWare IDL package ([https://lmsal.com/solarsoft/ssw/radio/norh/](https://lmsal.com/solarsoft/ssw/radio/norh/)). We used a similar data processing method to correct the daily shape present in the NoRP data.
In the absence of flares, the gradually varying background present in the reduced RSTN and NoRP radio data at frequencies above 2 GHz is contributed mainly by the quiescent active region and the solar disk due to thermal free-free and gyroresonance emission. Since the background variations have a time scale that is much longer than that of flares, a high-pass filter was then used to remove the slowly varying components from the RSTN and NoRP data. To determine the flux uncertainty of each frequency channel, we calculated the root mean square (\(\sigma_{rms}\)) of the temporal fluctuations during two quiescent time intervals of 3-4 UT and 6-7 UT on April 8.
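A minimal numpy sketch of the background-removal steps just described (the daily-shape polynomial fit and the quiescent-interval rms) is given below; the array layout and function names are our assumptions for illustration.

```python
import numpy as np

def remove_daily_shape(time_hr, flux, order=8):
    """Fit the slowly varying daily (inverted-U) background with an
    eighth-order polynomial and subtract it, per the RSTN reduction."""
    coeffs = np.polyfit(time_hr, flux, order)
    return flux - np.polyval(coeffs, time_hr)

def quiescent_sigma_rms(residual_flux, quiet_mask):
    """Per-channel flux uncertainty: rms of the temporal fluctuations
    over quiescent intervals (e.g., 3-4 UT and 6-7 UT on April 8)."""
    return np.sqrt(np.mean(residual_flux[quiet_mask] ** 2))
```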
Extended Data Fig. 1 (top) presents the resulting daily solar radio fluxes on April 9-10, 2016. The frequency range between 1 and 2 GHz is dominated by the repetitive radio bursts with a typical cadence of 5-10 minutes, which are not well correlated with the X-ray flux, as depicted in Fig. 2a. Extended Data Fig. 1 (top) and its inset panel show the background-subtracted data at 1 GHz obtained simultaneously by both the VLA and NoRP. The 1 GHz light curves from NoRP show a deviation no greater than 50% from that obtained by the VLA, indicating a high degree of agreement between the two datasets and further verifying the robustness of the background subtraction approach. Despite the VLA observation ending at 22:50 UT on 2016 April 9, the same emission feature observed by the VLA is still evident in the NoRP data at 1 GHz after 22:50 UT. Furthermore, we show the fluxes below 1 GHz obtained by RSTN in Extended Data Fig. 1 (top). The burst component in the 1 GHz flux, denoted by the upward arrows, is also notably present in the fluxes at lower frequencies down to 0.245 GHz. The temporal characteristics of the light curves when the bursts occur are similar, indicating the broadband nature of the bursts that
should extend to well below 1 GHz. This broadband nature aligns with the rising spectrum toward lower frequencies shown in the VLA data of 1-2 GHz (L band). However, it is worth noting that the RSTN data lacks polarization measurements. As a result, in our interpretation, we have heuristically assumed that the radio bursts observed in the non-VLA data at frequencies below the L band are circularly polarized coherent radio emissions. We made this assumption based on two facts: 1) the bursts observed by the VLA, NoRP, and RSTN have a similar temporal profile at distinct frequencies below 2 GHz, and 2) the flux density of the bursts is too high for incoherent radio emission without major solar eruptive events.
Overall, the observations presented in Extended Data Fig. 1 demonstrate a consistent temporal behavior across a broad range of frequencies. The agreement between the VLA and NoRP data at 1 GHz, along with the detection of bursts at lower frequencies by the RSTN, provides compelling evidence for the broadband nature of these bursts. However, further studies incorporating polarization measurements and high spectral resolution over a wider frequency range are necessary to gain a comprehensive understanding of these bursts and their polarization properties at different frequencies.
Extended Data Fig. 2 shows the time history of the solar radio and X-ray emission for an entire solar rotation period. The temporal and spectral variation of the radio emission indicates that the same coherent emission feature persists for over an entire week (or \(\sim\)1/4 of the rotation period; demarcated by the yellow shaded area in Extended Data Fig. 2(b, d-e)). The HMI continuum image series in Extended Data Fig. 2a shows that AR 12529, to which the sunspot belongs, is the only AR on the disk when the sunspot radio emission is present; therefore, we attributed the impulsive X-ray emission shown in Extended Data Fig. 2(e) to the flare activity in this AR. The GOES X-ray flux level displays sporadic flare activity during the time when the sunspot moves across the solar disk, and drops to the background level after the sunspot moves to the back side of the Sun. The
Extended Data Fig. 1: **Radio and X-ray flux of the radio-burst-hosting active region (NOAA 12529 AR) observed by RSTN, NoRP, and GOES on 2016 April 9–10. Top panel**: The background-subtracted solar radio flux at discrete frequencies monitored by RSTN, NoRP and VLA. The horizontal dotted lines in corresponding colors indicate equally spaced zero-lines for each frequency channel, set at 20\(\times\)10\({}^{4}\) Jy (or 20 sfu) intervals. The error bars overlaid on the zero-flux-lines in corresponding colors show the 3\(\sigma_{rms}\) uncertainties of each frequency channel. The upward arrows indicate the most prominent examples of sunspot radio aurorae, while the downward arrows indicate the occurrence of flare events. The inset shows a blowup of the 1 GHz flux simultaneously observed by NoRP and VLA, denoted by the dashed box. No zero-line offset was applied for the inset. **Bottom panel**: Simultaneous time profiles of the GOES 1.5–12 keV (black line) and 3–24 keV (grey line) X-ray fluxes, with corresponding GOES X-ray flare classes indicated. The arrows are the same as the downward arrows in the top panel.
radio emission shows little correlation with the X-ray emission from flare activity in the AR. In fact, the radio emission is only visible for a limited duration during the rotation cycle, specifically when the sunspot is located between longitudes of \(-70^{\circ}\) and \(-10^{\circ}\). The temporal, spectral, and polarization characteristics of the radio emission, including broadband arc-like structures in the time-frequency domain (Extended Data Fig. 2f), closely resemble ECM emissions reported in the literature from (sub)stellar systems [4, 5]. Most notably, the sun-as-a-star light curves throughout a solar rotation, primarily modulated by the coherent sunspot radio bursts (Extended Data Fig. 2(c-d)), display a striking similarity to rotationally modulated periodic radio aurorae from M-dwarfs [6]. The similarity is mainly evident in the temporal patterns of the radio emissions. In both cases, the radio emissions exhibit rotationally modulated behavior, which corresponds to the rotation of the respective celestial bodies. Specifically, in the solar case, the burst duty cycle--the fraction of time spent in bursting--is approximately 20% in the L and P bands. This value is comparable to those of coherent radio bursts observed in M-dwarfs within the same frequency range, as demonstrated in previously reported stellar instances [6, 25].
The radio bursts from the sunspot contribute significantly to the flux variations at frequencies below 2 GHz, while at frequencies above 2 GHz, the temporal fluctuations are mainly caused by incoherent gyrosynchrotron emission generated by energetic electrons resulting from solar flares. To obtain the RSTN and NoRP data points in the radio burst spectra shown in Fig. 3, two different approaches were adopted for frequencies below and above 2 GHz, respectively. The upward arrows in Extended Data Fig. 1(top) denote examples of sunspot radio bursts at 1 GHz that occurred on April 9-10. We easily identified the burst component against the noise levels of the time series, and used the peak-to-trough amplitude of the most prominent burst that
Extended Data Fig. 2: **Flux profiles of the radio-hosting active region (NOAA 12529 AR) over one solar rotation cycle.****a**, SDO/HMI continuum image sequence showing the transiting sunspot at times marked by the vertical arrows in (**b**). **b–e**, Time history of the heliocentric angle \(\theta\) of the sunspot, the radio, and X-ray emission of the Sun over one solar rotation. **f**, Blowup of the synoptic dynamic spectrum of Stokes I obtained by NoRP and RSTN in (c), showing the arcs (red dashed lines) of radio emission. **g–h**, Hierarchical substructures of the emission arcs shown in (f) obtained by VLA.
occurred at 03:30 UT on April 10 as the radio flux density of the corresponding frequencies. Nonetheless, it is important to highlight that this method inherently eliminates any constant flux component of the sunspot radio emission. As such, the flux density at frequencies below 2 GHz should be interpreted as a lower limit only. The resulting spectrum in the 1-2 GHz range deviates no more than \(\pm\)50% from the maximum of the VLA background subtracted spectra obtained between 21:45 and 22:50 UT on April 9. We used the same method to obtain the spectrum of the strongest sunspot radio burst in this solar rotation cycle, which was recorded at 22:58 UT on April 12. The downward arrows in Extended Data Fig. 1 denote the X-ray flare events that occurred on April 9-10. For frequencies above 2 GHz, the radio light curves of the flares are just above or even comparable to the noise levels. Therefore, we took the peak fluxes of individual frequencies in the entire time interval when the sunspot radio emission is present (April 9-13). However, we note that the low signal-to-noise ratio may introduce bias in the flux estimation for frequencies above 2 GHz. Thus, the flux estimated at these frequencies should be considered as upper limits.
Extended Data Fig. 4. Dynamic spectra of the right circularly polarized radio emission detected from the sunspot with VLA on April 9, 2016.
### The radio-X-ray Güdel-Benz relation
The Güdel-Benz (GB) empirical relation connects the radio and X-ray emission for a broad range of magnetically active stars, with the relationship \(L_{R}\approx 10^{-15.5\pm 0.5}\ L_{X}\ {\rm Hz}^{-1}\), where \(L_{R}\) [erg s\({}^{-1}\) Hz\({}^{-1}\)] denotes radio luminosity and \(L_{X}\) [erg s\({}^{-1}\)] represents X-ray luminosity within the soft X-ray energy range[16]. In Fig. 3c, we present the best linear fit (in log-log space) to the data from Benz & Güdel (1994)[16], as given by Berger et al. (2010)[52],
\[\log L_{R}=1.36\,(\log L_{X}-18.97). \tag{2}\]
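For orientation, the fit is trivial to evaluate; the helper below and its example values are illustrative only, not measurements from this work.

```python
def gb_expected_log_lr(log_lx):
    """Güdel-Benz best fit (Eq. 2, log-log space): expected radio
    luminosity log L_R [erg/s/Hz] for a soft X-ray log L_X [erg/s]."""
    return 1.36 * (log_lx - 18.97)

# illustrative only: log L_X = 27 predicts log L_R ~ 10.9, so a source
# with log L_R ~ 13 sits ~2 orders of magnitude above the relation
print(f"{gb_expected_log_lr(27.0):.2f}")
```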
Extended Data Fig. 5. Details of the dynamic spectrum between 21:48 and 21:49 UT on April 9, 2016.
However, as discovered in the past two decades, this correlation may not hold true for certain late-type dwarf stars. The radio/X-ray correlation of stellar ECM radio emission, spanning a broad frequency range between hundreds of MHz and a few GHz as documented in the literature [27, 52, 53], is evidently divergent from the GB relation. This deviation is exemplified by the yellow symbols depicted in Fig. 3c.
The simultaneous radio and X-ray observations allow us to assess the observed sunspot radio ECM bursts in the context of the GB relation. In order to draw a comparison between the sunspot radio luminosity \(L_{R}\), which reached its peak at 245 MHz, and the solar X-ray luminosity \(L_{X}\), we calculate \(L_{X}\) using the following approach. The \(L_{X}\) of the Sun at the time of the observation is dominated by the flare activity in NOAA AR 12529. We used the SSWIDL GOES tool to model the background-subtracted X-ray emission and estimate the total emission measure \(EM\) and a temperature \(T\) with an isothermal model and photospheric elemental abundance. Subsequently, the calculated \(EM\) and \(T\) were fed into the Raymond-Smith plasma model [54] to determine the total \(L_{X}\) in the energy band of 0.2-2.0 keV--a commonly used energy band in the literature [27, 52, 53] (Extended Data Fig. 6). The radio/X-ray correlation of the sunspot radio emission is shown as the blue circle in Fig. 3c, displaying a clear deviation from the GB relation by more than two orders of magnitude, comparable to the observations of stellar ECM radio emission documented in previous research [27, 52, 53].
Extended Data Fig. 6. Temporal evolution of the flare activity in NOAA 12529 AR on April 9th (left) and 12th (right).
Displayed sequentially from top to bottom are the GOES 1.5-12 keV (black line) and 3-24 keV (grey line) X-ray fluxes, the derived plasma temperature \(T\) and total emission measure \(EM\) determined from the background-subtracted X-ray emission, as well as the X-ray luminosity in 0.2-2.0 keV (\(L_{X}\)) computed with the Raymond-Smith plasma model.
### Coronal magnetic field model
The coronal magnetic field above the sunspot is not directly measured using, e.g., the Zeeman splitting technique, because the corona is extremely tenuous and optically thin. To reconstruct the coronal magnetic field in three dimensions, we perform a nonlinear force-free field (NLFFF) extrapolation based on measurements of the vector magnetic field at the photosphere. More details of the assumptions and procedures for the NLFFF technique can be found in other works [55]. In short, in the solar corona, the magnetic pressure dominates over non-magnetic forces. Therefore, under a quasi-static configuration (which is applicable to our case with a quasi-steady active region), the corona can be approximated as being in a force-free state. In this case, reconstructing the coronal magnetic field is well described as a boundary value problem under the force-free field approximation [56]. The vector photospheric magnetic field measurements were recorded by the Helioseismic and Magnetic Imager (HMI) instrument on board SDO [57] at 22:00 UT and processed with the standard pipeline [58]. In addition, a pre-processing procedure was applied to reduce the net force and torque of the photospheric magnetic field [59].
The original photospheric vector magnetogram data has an angular resolution of \(0.03\,\mathrm{\SIUnitSymbolDegree}\) pixel\({}^{-1}\) in heliographic coordinate system[60], which was binned to \(0.06\,\mathrm{\SIUnitSymbolDegree}\) pixel\({}^{-1}\) (equivalent to \(\sim 720\,\mathrm{km}\,\mathrm{pixel}^{-1}\)) for the subsequent extrapolation performed in a Cartesian grid of \(400\times 224\times 224\).
### Coronal density model
The coronal density above the sunspot is modeled under a plane-parallel atmosphere assumption. The coronal electron number density as a function of height \(h\) above the solar surface, \(n_{\mathrm{e}}(h)\), is prescribed using the Baumbach-Allen formula originally derived from white-light observations during solar eclipses[61]:
\[n_{\mathrm{e}}(h)=m\left[0.036(\frac{h+R_{\odot}}{R_{\odot}})^{-1.5}+1.55( \frac{h+R_{\odot}}{R_{\odot}})^{-6}+2.99(\frac{h+R_{\odot}}{R_{\odot}})^{-16} \right]\times 10^{8}\,\mathrm{cm}^{-3}, \tag{3}\]
where \(m\) is a factor to account for the enhanced overall density above the sunspot-hosting active region.
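For reference, Eq. (3) can be evaluated directly; the sketch below (with an assumed solar radius constant) reproduces the sixfold (\(m=6\)) model adopted in the following paragraph.

```python
import math

R_SUN_KM = 6.957e5  # solar radius [km] (assumed constant)

def baumbach_allen_ne(h_km, m=6.0):
    """Electron number density n_e [cm^-3] vs. height h (Eq. 3); m is the
    density enhancement factor above the active region (m = 6 gives the
    'sixfold' Baumbach-Allen model)."""
    r = (h_km + R_SUN_KM) / R_SUN_KM
    return m * (0.036 * r ** -1.5 + 1.55 * r ** -6 + 2.99 * r ** -16) * 1.0e8

# e.g. at h = 30 Mm the sixfold model gives n_e ~ 1.7e9 cm^-3
print(f"{baumbach_allen_ne(3.0e4):.2e}")
```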
To determine an appropriate \(m\) factor for our case, we note that the sunspot first appeared on the east limb two days prior to the time of the events under study. The sunspot could be clearly distinguished as it rotated from the east limb to its position on April 9, 2016, and did not show any significant morphological changes in the two-day period (Extended Data Fig. 2(a)), implying that the coronal magnetic and density structures above the sunspot should not vary significantly within the period. Thus, we further constrain the density model using multi-band EUV images obtained by SDO/AIA when the sunspot is located at the east limb. A differential emission measure (DEM) analysis is performed using the regularized inversion method[62] based on imaging data at six SDO/AIA EUV bands (94 Å, 131 Å, 171 Å, 193 Å, 211 Å, 335 Å). The result of the analysis is a distribution of the line-of-sight-integrated total emission measure \(\xi(x,y)=\int n_{e}^{2}(x,y)dz\approx n_{e}^{2}(x,y)\,D\) in the plane of the sky, where \(D\) is the column depth weighted by the square of the density along the line of sight. In order to derive the density distribution \(n_{e}(x,y)\) from the DEM results, we take a constant column depth of \(D=50\times 10^{3}\,\mathrm{km}\). With the derived limb-view \(n_{e}(x,y)\) map shown in Extended Data Fig. 7(b), we can derive the \(n_{\mathrm{e}}(h)\) distribution above the active region shown in Extended Data Fig. 7(c), in which each data point at height \(h\) above the limb is obtained by averaging the \(n_{\mathrm{e}}\) values from the pixels enclosed in the black contours in Extended Data Fig. 7(b). We find that the sixfold Baumbach-Allen model is an excellent fit to the \(n_{\mathrm{e}}(h)\) inferred from the DEM analysis. We caution, however, that the actual density distribution depends on the selection of the column depth along the line of sight, which is largely unknown. The column depth we adopted should only be considered a rough order-of-magnitude estimate. Therefore, the derived \(n_{\mathrm{e}}(h)\) distribution may vary by a scale factor of a few if we select a different column depth.
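The emission-measure-to-density conversion described above is a one-liner; as a hedged sketch (the 50 Mm column depth is the rough value adopted in the text, and the example emission measure is illustrative):

```python
import math

def density_from_dem(xi_cm5, depth_cm=5.0e9):
    """n_e ~ sqrt(xi / D) from the LOS-integrated emission measure
    xi = int n_e^2 dz, assuming a constant column depth D (= 50 Mm)."""
    return math.sqrt(xi_cm5 / depth_cm)

# e.g. xi = 1.4e28 cm^-5 with D = 50 Mm gives n_e ~ 1.7e9 cm^-3
print(f"{density_from_dem(1.4e28):.2e}")
```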
Extended Data Fig. 7. Electron number density distribution in the corona above the sunspot on 2016 April 7, when the sunspot was located at the east solar limb. **a**, SDO/AIA 171 Å passband image at 08:10 UT. **b**, electron density \(n_{\mathrm{e}}\) derived by EUV diagnostics. **c**, distribution of \(n_{\mathrm{e}}\) and the corresponding electron plasma frequency \(\nu_{\mathrm{pe}}\) with height \(h\), obtained from the regions marked in (b) for a series of equally spaced radial heights above the sunspot. The red line represents the sixfold Baumbach-Allen model. The error bars are obtained with the column depth \(D\) ranging from 15 to 150 Mm.
### Emission mechanism of the sunspot radio bursts
There are two major competing emission mechanisms that can generate coherent radio emissions in the solar corona. Coherent plasma radiation often dominates the radio emission of solar flares over a wide frequency range spanning from \(\sim 1\) GHz all the way down to tens of MHz. It is driven by the non-linear growth of plasma instabilities followed by a wave-wave conversion, radiating at a frequency close to the fundamental or second harmonic of the electron plasma frequency \(\nu_{\rm pe}\approx 8980\sqrt{n_{e}}\) Hz. In contrast, electron cyclotron maser (ECM) emission likely dominates in regions of high magnetic field strength and/or low plasma density, where the electron plasma frequency (\(\nu_{\rm pe}\)) is less than the electron gyrofrequency (\(\nu_{\rm ce}\)) at the source site[63]. The ECM mechanism has been suggested as the primary emission mechanism of auroral radio emission in planetary magnetospheres, such as the terrestrial auroral kilometric radiation[64], Jovian decametric radiation[65], and Saturn's kilometric radiation[66], as well as of strong radio radiation with high circular polarization in a variety of magnetically active stars such as M dwarfs[2, 3, 5]. ECM emission is generated near low harmonics of the electron cyclotron frequency, \(s\nu_{\rm ce}=2.8\times 10^{6}sB\) Hz (where \(s\) is the harmonic number and \(B\) is the magnetic field strength in Gauss), due to non-linear wave growth under a variety of plasma instabilities including loss-cone, ring-shell, or horseshoe instabilities[17]. We recognize that details of the ECM model, including the responsible plasma instability, the local driver of ECM (e.g., a parallel electric field, as suggested by Ergun et al. 2000[45]), and the energy efficiency of the ECM emission, remain largely unexplored. Addressing these details requires further, and preferably _in situ_, measurements to characterize the local magnetic and electric fields, electron momentum and angular distribution, and other plasma parameters at the radio emitter[45].
Although establishing a comprehensive emission model is challenging and beyond the scope of the current study due to a lack of in situ data, we can investigate the emission mechanism using remote-sensing data provided by multi-wavelength
Extended Data Fig. 8: **Physical characteristics in a 2D slice on the \(X\)–\(Z\) plane of the 3D model of NOAA 12529 AR**. The location of the \(X\)–\(Z\) plane is indicated by the semitransparent gray shade in Extended Data Fig. 4(a). **a**, electron plasma frequency \(\nu_{\rm pe}\). **b**, electron cyclotron frequency \(\nu_{\rm ce}\). The red contours from top to bottom show the frequencies of 0.125, 0.25, 0.5, 1.0, 2.0 GHz. **c**, ratio of the electron plasma frequency to the electron cyclotron frequency \(R=\nu_{\rm pe}/\nu_{\rm ce}\). The green contour lines show the ratio \(R\) of 0.25, 0.5, 1.0, 1.5, and 2.0, from bottom to top. The same contours are shown in panel (**b**). **d**, magnetic field scale height \(L_{\rm B}\). **e**, magnetic field angle \(\theta\) relative to the line of sight. **f**, gyro-resonance opacity \(\tau\) of \(O\)-mode emission at \(\nu_{\rm GHz}=1.0\) along the line of sight. The black arrow indicates the direction towards an Earth-based observer. The dotted, solid, and dashed lines indicate the directions with \(\theta\) equal to \(30^{\circ}\), \(45^{\circ}\), and \(60^{\circ}\). The corona that is transparent to the \(s=3\) gyro-resonance absorption layer is outlined as the tilted white shade. The green contour denotes the \(s=2\) ECM source that is inferred from the observed spatial distribution of the source locations of the coherent radio bursts. The yellow cross marks the approximate location of the solar flares.
observations. By employing a data-constrained 3D coronal magnetic field model and electron number density model, we calculate the ratio \(R=\nu_{\rm pe}/\nu_{\rm ce}\) in the 3D volume above the active region to identify regions more favorable for plasma radiation or ECM emission. The method has been previously employed to investigate the emission mechanism of coherent radio bursts in the solar corona[67, 68]. The resulting distributions of \(\nu_{\rm pe}\), \(\nu_{\rm ce}\), and their ratio \(R\) at the \(Y=0\) plane in the model are shown, respectively, in the left column of Extended Data Fig. 8. Although both the electron number density \(n_{e}\) and the magnetic field strength \(B\) decrease with increasing height in the corona, and so do the plasma frequency \(\nu_{\rm pe}\) and cyclotron frequency \(\nu_{\rm ce}\), \(B\) decreases more rapidly in the region above the sunspot. This leads to a low-\(R\) cavity just above the sunspot (Extended Data Fig. 8(c)), in which the electron maser instability is likely to be excited. The ECM emission frequency \(\nu\) in the \(R\lesssim 1\) cavity ranges from above 1.5 GHz down to 250 MHz (Extended Data Fig. 8(b)). This agrees with the observation that the sunspot auroral bursts are present over a wide frequency range from \(\sim 2\) GHz all the way down to the lowest frequency of RSTN at 245 MHz (Extended Data Fig. 1(a)). We acknowledge that \(n_{e}\) and \(\nu_{\rm pe}\) are reliant on the column depth \(D\), which should be regarded as a rough estimate. Nevertheless, it is important to note that the electron plasma frequency varies only with the fourth root of the column depth \(D\), following the relationship \(\nu_{\rm pe}\propto n_{e}^{0.5}\propto 1/D^{0.25}\). Consequently, changes in \(D\) have only a minimal impact on the electron plasma frequency \(\nu_{\rm pe}\). Varying \(D\) from 15 to 150 Mm leads to a 30% difference in \(\nu_{\rm pe}\) with respect to the value derived using \(D=50\) Mm (Extended Data Fig. 7(c)). This variation does not significantly affect the inferred ECM harmonic and the ratio \(R\).
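To make the \(R=\nu_{\rm pe}/\nu_{\rm ce}\) criterion concrete, the illustrative helper below combines the two frequency formulas quoted in this section; the example density and field values are rough numbers consistent with the models above, not inputs extracted from the 3D grid.

```python
import math

def plasma_frequency_hz(ne_cm3):
    """Electron plasma frequency: nu_pe ~ 8980 * sqrt(n_e) Hz."""
    return 8980.0 * math.sqrt(ne_cm3)

def gyrofrequency_hz(b_gauss):
    """Electron gyrofrequency: nu_ce = 2.8e6 * B Hz (B in Gauss)."""
    return 2.8e6 * b_gauss

def ecm_favored(ne_cm3, b_gauss):
    """ECM is favored where R = nu_pe / nu_ce < 1 (the low-R cavity)."""
    return plasma_frequency_hz(ne_cm3) / gyrofrequency_hz(b_gauss) < 1.0

# the s = 2 source at 1.2 GHz maps to B ~ 357 * 1.2 / 2 ~ 214 G, i.e.
# nu_ce ~ 600 MHz; with n_e ~ 2e9 cm^-3 there, R ~ 0.67 < 1
print(ecm_favored(2.0e9, 214.0))
```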
Owing to the relatively high plasma density in the solar corona compared to, e.g., planetary magnetospheres, the detectability of the ECM emission is also affected by gyro-resonant absorption due to the ambient thermal plasma at higher harmonics of the electron gyro-frequency[63]. The opacity of the gyro-resonance layers depends on the magnetic field geometry, the thermal electron number density \(n_{e}\) and the viewing direction[69]:
\[\tau(s,\nu,\theta)=\frac{\pi e^{2}}{2m_{e}c}\frac{n_{e}L_{B}(\theta)}{\nu} \frac{s^{2}}{s!}\left(\frac{s^{2}}{2\mu}\right)^{s-1}F(\theta) \tag{4}\]
where \(\theta\) is the angle between the LOS and the local magnetic field direction, \(L_{B}(\theta)=B/|\nabla B|\) is the magnetic scale height along the LOS direction, \(\mu=m_{e}c^{2}/k_{B}T\) is the square of the ratio between the speed of light and electron thermal speed, and \(F(\theta)\) is a function of angle \(\theta\):
\[F(\theta)=\frac{\sin^{2s-2}\theta\left(\sin^{2}\theta+2s\cos^{2}\theta+\sigma \sqrt{\sin^{4}\theta+4s^{2}\cos^{2}\theta}\right)^{2}}{\sin^{4}\theta+4s^{2} \cos^{2}\theta+\sigma\sin^{2}\theta\sqrt{\sin^{4}\theta+4s^{2}\cos^{2}\theta}} \tag{5}\]
with \(\sigma=1\) for extraordinary mode (\(X\) mode) or \(\sigma=-1\) for ordinary mode (\(O\) mode).
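For reference, Equations 4 and 5 can be transcribed numerically as follows (CGS units). This is an added sketch; the parameter values in the example call are illustrative placeholders rather than quantities taken from the model.

```python
import numpy as np
from math import factorial

def gyro_opacity(s, nu, theta, n_e, L_B, T, sigma=-1):
    """Gyro-resonance opacity of Eqs. 4-5 (CGS units).
    s: harmonic; nu: frequency [Hz]; theta: LOS angle [rad];
    n_e: electron density [cm^-3]; L_B: magnetic scale height [cm];
    T: temperature [K]; sigma = -1 for O mode, +1 for X mode."""
    e, m_e, c, k_B = 4.803e-10, 9.109e-28, 2.998e10, 1.381e-16
    mu = m_e * c**2 / (k_B * T)               # (c / thermal speed)^2
    st, ct = np.sin(theta), np.cos(theta)
    root = np.sqrt(st**4 + 4 * s**2 * ct**2)
    F = (st**(2 * s - 2) * (st**2 + 2 * s * ct**2 + sigma * root)**2
         / (st**4 + 4 * s**2 * ct**2 + sigma * st**2 * root))
    return (np.pi * e**2 / (2 * m_e * c) * n_e * L_B / nu
            * s**2 / factorial(s) * (s**2 / (2 * mu))**(s - 1) * F)

# Illustrative call: s = 3 layer at 1 GHz viewed at 45 degrees.
print(gyro_opacity(s=3, nu=1e9, theta=np.radians(45),
                   n_e=3e8, L_B=1e9, T=2e6))
```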
The nearly 100% right circularly polarized radio emission, closely associated with the sunspot of negative magnetic polarity, suggests that the emission is likely in the ordinary mode, or \(O\) mode. With Equation 4, we evaluate the absorption of ECM radiation produced at a harmonic of \(s_{e}\) by the overlying gyro-resonance layer at \(s_{a}=s_{e}+1\) (the absorption at larger harmonics is much smaller and can be neglected) for a given observing frequency \(\nu\). In Extended Data Fig. 8(d)-(f), we show, respectively, maps of \(L_{B}\), \(\theta\), and the gyro-resonance opacity \(\tau\) for \(O\) mode radiation at a representative frequency of \(\nu_{\rm GHz}=1.0\) along the line of sight at the \(Y=0\) plane. Our model shows that the \(s_{a}=2\) layer is mostly optically thick except for a narrow window at small \(\theta\) values with respect to the line of sight where the ECM emission produced at the fundamental harmonic \(s_{e}=1\) can escape. On the other hand, the model suggests that ECM emission produced at the second harmonic \(s_{e}=2\) is only partially affected by the gyro-resonance absorption layer at \(s_{a}=3\). The latter shows a morphology similar to the \(s_{a}=2\) layer, but has a much wider transparent window covering the sunspot umbra and penumbra regions. Therefore, the second-harmonic ECM emission can escape from the transparent window in the \(s_{a}=3\) layer and be observed, as demonstrated in the schematic picture in Fig. 4(h). We note that while the opacity depends linearly on the electron number density \(n_{e}\), the latter varies inversely with the square root of the column depth \(D\). Changes in \(D\) therefore have only a minimal impact on the opacity structure, following the relationship \(\tau\propto D^{-0.5}\). Varying \(D\) from 15 to 150 Mm results in a scale factor of 0.7 to 2.2 for the opacity value, which is not significant enough to affect the overall opacity structure for the ECM emission and its observability (Extended Data Fig. 9).
We end this section with a brief discussion of the growth of ECM emission in \(X\) and \(O\) mode, which depends heavily on the value of \(R=\nu_{\rm pe}/\nu_{\rm ce}\). Parametric investigations of the ECM instability in the literature suggest that the fundamental \(O\) mode or second harmonic \(X\) mode radiation would be dominant when \(0.3\lesssim R\lesssim 1\), and that the second harmonic \(O\) mode would dominate the radiation when \(1.4\lesssim R\lesssim 2\)[70, 71]. In the regions hosting the radio source at 1-2 GHz, the ratio \(R\) derived from the magnetic field model and coronal density model generally falls in the range of \(0.3\lesssim R\lesssim 1\) (Extended Data Fig. 8(b)), which would favor fundamental \(O\) mode or second harmonic \(X\) mode. However, as we described in the coronal density model section, the coronal density depends strongly on the selection of the column depth. The absolute coronal density can be scaled up by a factor of
a few if we assumed a smaller column depth, so that the ratio \(R\) can satisfy the values required for the second harmonic \(O\) mode to dominate. We caution that such uncertainty in the density model impedes an accurate determination of the ratio \(R\), and hence of the preferred emission wave mode. For emission below 1 GHz, we choose not to discuss the wave mode because imaging observations and polarization measurements are lacking at frequencies below 1 GHz. Nevertheless, Chen et al [72] found the corona may consist of many unresolved overdense loops based on the discrepancy between the density inferred from the frequency of type III radio bursts and that inferred from EUV observations. It is suggested that there are overdense magnetic loops above the sunspot with a density enhancement of up to an order of magnitude against the tenuous background; the ratio \(R\) in such loops can be greater by the square root of the same factor. If the observed coherent radio bursts can be interpreted as ECM emission originating in these overdense loops, the second harmonic \(O\) mode radiation would be more likely to dominate in these loops with a higher plasma density. The overdense-loop scenario offers an alternative avenue for second harmonic \(O\) mode ECM radiation to dominate in a generally low-\(R\) environment (\(R<1\)).
### Spatial distribution of the ECM source
With spatially and temporally resolved imaging spectroscopy, we can derive the centroid location of each observed source at any given time \(t\) and frequency \(\nu\), provided that it has a sufficient signal-to-noise ratio against the quiescent background. The top panels in Extended Data Fig. 10 show examples of the distribution of the frequency-dependent source centroid locations for seven individual 20-second time integrations. Intriguingly, the spatial distribution of each burst forms a close-to-linear pattern, similar to those derived from electron-beam-driven type III radio bursts [18, 72].
As the emission frequency \(\nu\) due to the ECM mechanism depends only on the magnetic field strength \(B\), the observed spatial distribution of the radio source centroid location at a given frequency \(\nu\) in projection \(\vec{r}(\nu)\) is governed by the magnetic field distribution above the sunspot \(\vec{r}(B)\). Our observations have suggested that, for each radio burst, the distribution of the frequency-dependent radio source centroids is likely aligned with a single magnetic loop. Therefore, we can reduce the distribution of the radio sources to a one-dimensional dependence of \(d(\nu)\), where \(d\) denotes the plane-of-the-sky projected distance from a reference location along the respective loop, selected as that of the source centroid at 1 GHz. We construct the \(d(\nu)\) dependence of the seven example times, shown in the bottom panel of Extended Data Fig. 10 as colored circles
(whose sizes are scaled by their peak flux). With the 3D coronal magnetic field model to provide the mapping of \(d(B)\) for each magnetic loop in projection, we can predict the expected \(d(\nu)\) dependence for different viewing angles \(\theta\). In the bottom panel of Extended Data Fig. 10, we show the predicted \(d(\nu)\) dependence along three directions with \(\theta\) equal to 30\({}^{\circ}\), 45\({}^{\circ}\), and 60\({}^{\circ}\) as the dotted, solid, and dashed lines, respectively. The results suggest that the viewing angle \(\theta\) of the radio sources with the observed \(d(\nu)\) dependence should lie in the range of 30\({}^{\circ}\)-60\({}^{\circ}\). This range of viewing angles demarcates a region in the vicinity of the apex of the \(s=2\) gyroresonance dome (denoted as the green contour in Extended Data Fig. 8(e&f)), corresponding to the predicted ECM emission sites that are co-spatial with the transparent window of the \(s_{a}=3\) gyroresonance layer in projection. The range of viewing angles is also consistent with the values of \(\theta\) in the inferred source region (green contour in Extended Data Fig. 8(e)). We note that the beaming angle of 30\({}^{\circ}\)-60\({}^{\circ}\) is generally smaller than observed values (75\({}^{\circ}\)-80\({}^{\circ}\)) in the case of Jovian auroral radio radiation [73], but similar to the beaming angles (\(\gtrsim 40^{\circ}\)) of the auroral kilometric radiation in the terrestrial case [45]. The latter is often attributed to a refraction effect in which the radio waves are initially emitted in the perpendicular direction but are ducted away through refraction at the borders of the density depletion cavity in which the emission is produced [45]. This
does not seem to be the case for the sunspot radio emission. For the ducting to occur, the plasma frequency of the surrounding boundary of the low-density cavity should be significantly higher than the emitting frequency of the radiation. This requires a plasma density of at least a few times \(10^{10}\,\mathrm{cm^{-3}}\), which is much higher than the inferred plasma density near the source region. One possibility is that the small beaming angle is a result of ECM processes coupled with Alfven waves. Wu et al (2012)[74] found that preexisting Alfven waves can modify the classical ECM processes by affecting the velocity distribution of energetic electrons via pitch-angle scattering. Indeed, Alfven waves are expected in the quiet solar corona[75, 76] and in flare regions[77]. The beaming angle of second harmonic \(O\) mode ECM emission, if driven by nonthermal electrons in a horseshoe distribution, can be as low as \(30^{\circ}\). The interpretation of the observed coherent radio bursts as second harmonic \(O\) mode ECM emission is consistent with the small beam opening angles predicted in the analytical study[71]. Nevertheless, we acknowledge that the observed beaming angle is not fully consistent with the values of previously observed planetary ECM emissions. Producing a fully self-consistent model of ECM emission from the sunspot requires not only a better understanding of the coronal context using a data-constrained MHD model, but also fully kinetic simulations to achieve a realistic nonthermal electron distribution, both of which are beyond the scope of this study.
On the other hand, plasma emission radiates at a frequency close to the fundamental or second harmonic of the electron plasma frequency \(\nu_{\mathrm{pe}}\). Plasma radiation at frequencies in the range 1-1.5 GHz implies an ambient plasma density above the sunspot in the range 1-\(3\times 10^{10}\,\mathrm{cm^{-3}}\), which is more than an order of magnitude higher than the coronal density model constrained by the DEM analysis (Extended Data Fig. 7(c)). It is worth asking, however, whether the observed radio bursts are plasma radiation originating from the aforementioned overdense magnetic loops in the corona. As the emission frequency \(\nu\) due to the plasma radiation mechanism is proportional to the square root of the plasma density (\(\nu_{\mathrm{pe}}\propto\sqrt{n_{\mathrm{e}}}\)), the \(d(\nu)\) of the plasma radiation source is determined by the plasma density distribution above the sunspot. A variation of \(\nu\) from 1.0 to 1.5 GHz over \(d(\nu)\) of 3-6 Mm corresponds to the plasma density varying from \(1\times 10^{10}\) to \(3\times 10^{10}\,\mathrm{cm^{-3}}\) over a height of 4-8 Mm, assuming the overdense loops have inclination angles of \(\sim 45^{\circ}\). This is equivalent to a density scale height \(\mathrm{L_{n}}=n_{\mathrm{e}}/(dn_{\mathrm{e}}/dr)\) of 4-8 Mm under hydrostatic equilibrium, at least an order of magnitude smaller than the hydrostatic value at typical coronal temperatures (\(\mathrm{L_{n}}\approx 46T_{\mathrm{MK}}\,\mathrm{Mm}\), or 94 Mm for \(T=2\,\mathrm{MK}\)[78]). Such steep density profiles, in other words, steep pressure gradients, are inconsistent with the expectation for the quasi-static magnetic structures above the sunspot. Therefore, we reaffirm the conclusion that plasma radiation is unlikely to be the mechanism responsible for the observed radio bursts.
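An added back-of-the-envelope check of this argument, using the standard relation \(\nu_{\rm pe}\,{\rm [Hz]}\approx 8980\sqrt{n_{e}\,{\rm [cm^{-3}]}}\):

```python
import numpy as np

# Densities required for fundamental plasma emission at 1.0 and 1.5 GHz.
n_lo = (1.0e9 / 8980.0) ** 2     # ~1.2e10 cm^-3
n_hi = (1.5e9 / 8980.0) ** 2     # ~2.8e10 cm^-3
print(f"n_e(1.0 GHz) ~ {n_lo:.1e} cm^-3, n_e(1.5 GHz) ~ {n_hi:.1e} cm^-3")

# Implied density scale height L_n = dh / ln(n_hi/n_lo) for the quoted
# height interval of 4-8 Mm; compare with the hydrostatic ~94 Mm at 2 MK.
for dh in (4.0, 8.0):
    print(f"dh = {dh:.0f} Mm -> L_n ~ {dh / np.log(n_hi / n_lo):.0f} Mm")
```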
### Observability of the sunspot ECM emission
ECM generates radiation along a thin, conical sheet. Consequently, sunspot ECM emissions are only observable when the narrow radiation beam crosses the line of sight due to the Sun's rotation. Although ECM emissions should also appear when the sunspot is in the symmetric phase on the western side of the solar disk, they are only visible for a specific heliocentric angle range on the eastern side, as shown in Extended Data Fig. 2. In this study, we attribute this phenomenon to the influence of solar flares on the observability of sunspot ECM emissions. During the observed sunspot radio bursts, the emission source region was far from the flare sites. However, as the sunspot rotates to the other side of the solar disk, the potential emission source (the green contour line in Extended Data Fig. 11) would become much closer to the flare sites. The close spatial proximity of source sites to flares can significantly alter the local plasma and magnetic environment, affecting ECM generation and escape. A more detailed discussion of various impacts is provided below:
Figure 11: Similar to Extended Data Fig. 8(f), this figure displays the gyro-resonance opacity \(\tau\) of \(O\) mode emission at \(\nu_{\mathrm{GHz}}=1.0\) along the line-of-sight, considering the AR is located at the opposite longitude with respect to the solar disk center and assuming a 10-fold increase in electron number density \(n_{\mathrm{e}}\) to account for the impact of the nearby flare region. The black arrow indicates the direction towards an Earth-based observer. The corona that is transparent to the \(s=3\) gyro-resonance absorption layer is outlined as the tilted white shade. The green contour represents the potential source location of the \(s=2\) ECM radio emissions. The yellow cross marks the approximate location of the solar flares.
* ECM condition: The ECM mechanism requires a specific condition of \(R\lesssim 1\) at the source site [63]. Given the source site's proximity to flares, the impact of solar flares on coronal density structures must be considered. The thermal plasma density distribution during flares is highly dynamic and complicated, and simulating its temporal evolution would require specialized modeling efforts [79]. While this is beyond the scope of this study, a rough estimate can still be performed. Solar flares can increase the coronal plasma density by a factor of over 10 within tens of seconds [80]. This over 10-fold increase in density near flare regions could raise the \(R\) value in the nearby ECM source region by a factor of a few, as \(R\) is proportional to \(\sqrt{n_{\rm e}}/B\) (see the numerical sketch after this list). In this case, the source region near the flare sites may not meet the ECM condition and cannot produce ECM emission.
* Opacity: The solar corona's highly structured nature means that the opacity of the gyro-resonance absorption layer, which depends on the viewing direction and 3D distributions of the coronal magnetic field and thermal plasma [63], will change over time as the Sun rotates. This change subsequently affects the emissivity and polarization of sunspot ECM emissions. As mentioned earlier, the opacity varies linearly with the electron number density \(n_{\rm e}\). A more than 10-fold increase in \(n_{\rm e}\) near the flare regions could significantly alter the opacity of the nearby gyro-resonance layer in the line of sight (Extended Data Fig. 11), restricting the escape of ECM radiation.
* Magnetic configuration: In our interpretation, the generation of the sunspot radio bursts requires closed magnetic loops trapping energetic electrons. Recurring non-eruptive flares are favored because they can continuously replenish the population of energetic electrons in the overarching radio-hosting magnetic field lines above the energy release site without disrupting the overall magnetic field geometry. In contrast, eruptive flares are expected to open up the magnetic field lines above [81]. The newly accelerated electrons by solar flares can easily escape to interplanetary space along the opened field lines without getting trapped in the low corona. Moreover, the opening of the overarching magnetic field lines results in the destruction of the electron reservoir where energetic electrons accelerated by previous flares are trapped, further interrupting the generation of the sunspot radio bursts. As demonstrated in Extended Data Fig. 2(e), NOAA 12529 AR became more violent on 2016 April 14, producing an M class flare and a few sub-M class flares. In contrast, the AR only produced a few C class flares earlier.
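The density argument in the first item above can be made concrete with a minimal numerical sketch; the pre-flare ratio \(R=0.5\) used here is an illustrative assumption rather than a model value.

```python
# R = nu_pe / nu_ce scales as sqrt(n_e) / B, so a density boost of
# factor f raises R by sqrt(f) at fixed magnetic field strength.
R_quiet = 0.5                      # assumed pre-flare ratio (R < 1)
for boost in (2, 10, 30):          # flare-driven density enhancements
    R_flare = R_quiet * boost**0.5
    status = "satisfied" if R_flare <= 1.0 else "violated"
    print(f"n_e x{boost:>2}: R = {R_flare:.2f} -> ECM condition {status}")
# A >10-fold density increase pushes R above unity, quenching the maser.
```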
One or more of the factors mentioned above may influence the observability of the sunspot ECM emission as the viewing angle varies, in addition to the effect of solar rotation. Understanding the viewing geometry effects requires long-term broadband dynamic imaging spectropolarimetry to study in detail the time-dependent variations of the morphology, location, and polarization of the radio source. The emissivity of sunspot coherent radiation, including but not limited to the emission intensity, polarization and wave mode, depends heavily on the magnetic field distribution and coronal plasma distribution, which are subject to change with time due to the gradual evolution of the radio-hosting active region and abrupt energy release events. To establish the relationships of coherent radio bursts with the structure and dynamics of the corona, including plasma density inhomogeneity, magnetic field topologies, and their variations, broadband dynamic imaging spectroscopy observations in the frequency range of 200 MHz to 2 GHz are required. These observations should be complemented by other multi-wavelength observations, including optical, EUV, and X-ray. Moreover, with the aid of data-constrained coronal 3D magnetic and thermodynamic structure, imaging spectropolarimetry would allow us to characterize and constrain coronal thermal parameters, magnetic field properties, and the energy distribution of the nonthermal electrons that drive ECM emission. Detailed context data at other wavelengths, coupled with radio imaging observations, would enable the examination of the relationship between the rotational modulations of sunspot ECM emission and other significant observables, such as photometric and Doppler spectroscopic measurements [82, 32]. Such observations will be crucial for improving our understanding of the generation and rotational modulation effects of ECM bursts from the Sun and other stars.
An important corollary to the discussion of observability of the sunspot ECM emission is the potential influence of viewing effects on the duty cycle of stellar ECM emissions. If these stellar emissions were associated with the presence of starspots on the surfaces of these (sub)stellar systems, viewing effects may play a role in determining the temporal pattern within a single duty cycle of these emissions. Like the solar case, many previously reported stellar ECM emissions only appear in a certain range of stellar rotational phase [10, 82]. Depending on the relative positions of the starspot and its neighboring flare sites with respect to the line of sight, as well as the inclination of the rotation axis relative to the line of sight as viewed from the Earth, stellar ECM emissions may manifest a second time in symmetric phase with respect to the stellar-centric \(0^{\circ}\) longitude line. Similar cases have also been observed in other stars [6, 44]. Furthermore, as a type of transient radio source, the duty cycle of stellar ECM emissions could also vary as a consequence of the short-term evolution of the starspot and its magnetic connectivity to other parts of the stellar surface. Long-term variations, such as the emergence of a starspot, may lead to the appearance of ECM radio emission on a star that previously lacked such radio bursts in earlier observations, and vice versa. Understanding these viewing effects and their impact on the observability of sunspot and starspot ECM emissions can inform future survey-type observations of stellar ECM emissions.
In concluding this section, we explore the potential applicability of our model to early-type stars (spectral type B or A). Rotationally modulated ECME emissions have been discovered in several early-type stars, exhibiting consistent phases over extended periods [44, 82]. The prevailing hypothesis suggests that global dipole-like magnetic fields can create the magnetic-mirror-like conditions necessary for maser emission [83]. These stars often possess magnetic fields with kG strengths and simple topologies, often well approximated by a dipole [84]. Stellar wind [44] or magnetic reconnection in the magnetosphere [85] may provide the energetic electrons required to drive ECM emissions. These electrons then become trapped in the stellar magnetospheric pole regions, producing magnetic-mirror-like conditions that facilitate ECME emission. However, recent studies reveal that radio ECM emission can emerge from stars with surface magnetic fields significantly deviating from an axisymmetric dipole [86], leaving the role of the dipole magnetic field topology in producing radio ECM emission an open question. Analogous to the magnetospheric pole regions and the electromagnetic engine in the magnetosphere, a localized magnetic structure such as a starspot with nearby flare activity can also produce radio ECM emissions, similar to the solar case. Recent time-series photometry from space missions like Kepler, K2, and TESS challenges the traditional expectation that stars with radiative envelopes show no rotational modulation, suggesting instead that such modulation is probable [87]. The observed light variations can be most simply interpreted as rotational modulation induced by surface brightness inhomogeneities, such as magnetic spots. These localized magnetic fields might be associated with magnetic fields generated in subsurface convection zones [88]. Additionally, it is plausible that differential rotation in A and B stars may be sufficient to create local magnetic fields via dynamo action [89]. Owing to the extreme stability of the magnetic fields in these types of stars [90], the periodic ECME emission can be remarkably stable, unlike the case for the highly dynamic magnetic field of the Sun. If the rotational modulation observed in early-type stars indeed results from persistent magnetic spots, our model could offer an alternative approach to generating a stable burst duty cycle over extended periods in these stars. However, this proposition depends on the presence of persistent magnetic spots on early-type stars. Further investigation is necessary to assess the efficacy of the model in accounting for the long-term radio emission behavior exhibited by these stars.
### Source size and brightness temperature
The broadband radio bursts emitted from the sunspot are detectable across frequencies ranging from 245 MHz to 1.7 GHz. The maximum flux density of these recorded bursts in Stokes \(I\), observed at 245 MHz at approximately 23 UT on April 12, is around 2000 sfu, or 2\(\times 10^{10}\) mJy. Were the observed bursts to be viewed from a stellar distance, the flux density at 245 MHz can be expressed as \(0.5(D_{\rm pc})^{-2}\) mJy, where \(D_{pc}\) is the stellar distance in parsec. If we take the median distance of 20 parsec in the recent LOFAR observations [27], the flux density would be approximately 1 \(\mu\)Jy. Although the total flux density is two or three orders of magnitude lower than the median values reported in the recent LOFAR observations at hundreds of MHz [27], we argue that the brightness temperature of the sunspot radio source can still be very high. This is owing to the likely compact and unresolved nature of many individual bursts that comprise the observed radio source. We consider the magnetic loops anchored at the sunspot where impulsively accelerated electrons are injected and an unstable loss-cone distribution is set up by magnetic mirroring on the sunspot end of the loops. The brightness temperature for a continuously operating maser is given by [63],
\[T_{\rm b}=\frac{m_{e}v_{0}^{2}}{4\pi k_{\rm B}}\frac{c^{2}}{\nu^{2}Lr_{0}} \approx 2\times 10^{12}\left(\frac{E}{1\ {\rm keV}}\right)\left(\frac{\nu}{200\ {\rm MHz}}\right)^{-2}\left(\frac{L}{R_{\odot}}\right)^{-1}\ {\rm K} \tag{6}\]
where \(v_{0}\) is the velocity of the emitting electrons, \(\nu\) is the emission frequency, \(L\) is the length scale of the magnetic trap with respect to the solar radius \(R_{\odot}\), \(r_{0}\) is the classical electron radius, \(m_{e}\) is the electron mass, \(k_{\rm B}\) is the Boltzmann constant, and \(c\) is the speed of light. Using a magnetic loop length scale of approximately 1/10 of the solar radius \(R_{\odot}\) and electron energy \(E\) of 20 keV, the brightness temperature is estimated to be approximately \(10^{14}\) K at 245 MHz and \(10^{13}\) K at 1 GHz for the fundamental emission. For the second harmonic ECM emission, the values are \(10^{12}\) K at 245 MHz and \(10^{11}\) K at 1 GHz, considering the latter's growth rate potentially being two orders of magnitude lower than the fundamental [71].
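A quick numerical evaluation of the scaling form of Equation 6 (added here for illustration) reproduces the quoted estimates; the factor-of-100 suppression applied to the second harmonic follows the growth-rate argument cited above, and the last lines restate the stellar-distance flux scaling from the beginning of this section.

```python
def T_b(E_keV, nu_MHz, L_over_Rsun):
    """Maser brightness temperature [K], scaling form of Eq. 6."""
    return 2e12 * E_keV * (nu_MHz / 200.0) ** -2 / L_over_Rsun

for nu in (245.0, 1000.0):
    tb = T_b(E_keV=20.0, nu_MHz=nu, L_over_Rsun=0.1)
    print(f"{nu:6.0f} MHz: fundamental T_b ~ {tb:.0e} K, "
          f"second harmonic ~ {tb / 100:.0e} K")

# Flux density at stellar distance D_pc, scaled from 2000 sfu at 245 MHz:
D_pc = 20.0
print(f"S(245 MHz) at {D_pc:.0f} pc ~ {0.5 / D_pc**2 * 1e3:.1f} uJy")
```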
High brightness temperatures are not uncommon for solar radio bursts, as evidenced by solar decimetric spikes [91], whose source size scale is suggested to be below 1000 km. For the 20 sfu radio source observed by the VLA at 1 GHz, this brightness temperature corresponds to a source size of 1000\(\times\)1000 km. Since the observed radio source in the 1 GHz image is marginally resolved by the VLA, a single 1000\(\times\)1000 km source cannot account for the asymmetric structure seen in the VLA 1 GHz image. We speculate that the observed radio source may comprise many small, unresolved sources instead of a single 1000\(\times\)1000 km source. To illustrate such a possibility, we employed a hundred (\(N=100\)) randomly distributed \(10^{11}\) K point sources at the second harmonic gyroresonance of \(\nu_{\rm GHz}=1\) on selected field lines. We note that the choice of the source number \(N\) is arbitrary, but it does not affect our subsequent analysis. Each point source has a uniform diameter of \(1000/\sqrt{N}\) km and a flux density of \(20/N\) sfu. The model is shown in Extended Data Fig. 12a. The input model is first convolved with a \(40^{\prime\prime}\) Gaussian beam to account for angular scattering at 1 GHz by coronal turbulence [49]. Subsequently, the model is convolved with the synthesized beam of the VLA at 22 UT on April 9 to create a simulated VLA image. The result, shown in Extended Data Fig. 12b, qualitatively agrees with the actual VLA image at this frequency in terms of source location and size. It is important to note that the argument
for multiple sources is consistent with the inhomogeneous and fragmentary nature of magnetic reconnection, as evidenced by recent VLA solar observations [18]. Under this scenario, semi-relativistic electron beams can be launched onto distinct field lines, even though they are accelerated within an extremely compact region (\(\sim 600\) km) in the low solar corona over a brief time scale (1-2 seconds). Assuming that each ECM source is associated with a different electron beam energized within the same acceleration region, the beams can, in principle, arrive at various field lines above the sunspot, resulting in the scattered distribution of ECM sources portrayed in the model. While our observations show that the radio bursts could originate from numerous distinct magnetic field lines, we note that these field lines are all rooted in the umbra region of the sunspot with a negative polarity. Consequently, at any particular instance, these field lines share similar line-of-sight angles with regard to the observer, which gives rise to the high degree of circular polarization of the bursts over our observational period of several hours.
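A toy version of this multi-source exercise is sketched below. This is an added illustration: the pixel grid, the Gaussian scatter of source positions, and the circular-Gaussian stand-in for the VLA synthesized beam are all assumptions, not the parameters of the actual simulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
N, npix, cell = 100, 256, 2.0             # sources, image size, arcsec/pixel
image = np.zeros((npix, npix))
xy = rng.normal(loc=npix / 2, scale=10, size=(N, 2)).astype(int)
for x, y in xy:
    image[y, x] += 20.0 / N               # each point source carries 20/N sfu

fwhm_to_sigma = 1 / 2.355
sig_scatter = 40.0 * fwhm_to_sigma / cell   # 40" scattering kernel, in pixels
sig_beam = 25.0 * fwhm_to_sigma / cell      # assumed 25" circular beam
observed = gaussian_filter(gaussian_filter(image, sig_scatter), sig_beam)
print("total flux after smearing:", round(observed.sum(), 2), "sfu")
```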
Nevertheless, explaining the observed radio bursts on certain M dwarfs with orders-of-magnitude higher flux density than in our case remains challenging. However, we note that the plasma environment on these stars is much more extreme than that on the Sun. First, the average surface magnetic field on these stars can exceed a kilogauss [92, 93, 94], compared to \(\sim\)1-10 Gauss on the Sun. The magnetic fields in these starspots may exceed those in sunspots by orders of magnitude, potentially boosting the intrinsic brightness of the ECM emission [37]. Second, the level of flare activity on these fully convective stars is much higher than that on the Sun, allowing the possibility of a much larger spatial and temporal filling factor for these bursts over the stellar disk. This speculation is reflected in the radio/X-ray correlation in the context of the GB relation, where the sunspot radio source exhibits a deviation from the radio/X-ray relation similar to that of other stellar radio ECM emissions, despite its relatively low magnetic activity level (X-ray luminosity) among the others. Both factors may potentially contribute to the much brighter bursts observed on these active stars, although further observational, theoretical, and modeling investigations are required.
Extended Data Fig. 12. Simulation of the 20 sfu sunspot radio image at 1 GHz. **a**: The input model of the radio ECM sources at 1 GHz displayed in the context of the sunspot and the corresponding gyroresonance opacity layer, similar to Fig. 4h. The model consists of a hundred randomly distributed point sources with a brightness temperature of \(10^{11}\) K on selected field lines. **b**: similar to Extended Data Fig. 3, but showing the simulated VLA image at \(\nu_{\rm GHz}=1.0\) overlaid on the HMI continuum intensity image.
### Data availability
The data that support the plots and other findings within this paper are available at [https://data.nrao.edu/portal](https://data.nrao.edu/portal) (VLA; Project ID: 16A-377), [https://www.sws.bom.gov.au/World_Data_Centre](https://www.sws.bom.gov.au/World_Data_Centre) (RSTN), [https://solar.nro.nao.ac.jp/norp](https://solar.nro.nao.ac.jp/norp) (NoRP), [https://www.swpc.noaa.gov/products/goes-x-ray-flux](https://www.swpc.noaa.gov/products/goes-x-ray-flux) (GOES), and [http://jsoc.stanford.edu](http://jsoc.stanford.edu) (SDO), or from the corresponding author upon reasonable request.
### Code availability
The magnetic field extrapolation[55] software packages are available through IDL SolarSoft at [https://sohowww.nascom.nasa.gov/solarsoft](https://sohowww.nascom.nasa.gov/solarsoft). The regularized inversion code for differential emission measure (DEM) calculation[62] is available at [https://github.com/ianan/demreg](https://github.com/ianan/demreg). Public software packages utilized in this study include SunCASA [https://github.com/suncasa/suncasa-src](https://github.com/suncasa/suncasa-src), CASA[95] [https://casa.nrao.edu](https://casa.nrao.edu), SunPy[96] [https://sunpy.org](https://sunpy.org), and Astropy[97] [https://www.astropy.org](https://www.astropy.org).
|
2305.05290 | Dialogue Planning via Brownian Bridge Stochastic Process for
Goal-directed Proactive Dialogue | Goal-directed dialogue systems aim to proactively reach a pre-determined
target through multi-turn conversations. The key to achieving this task lies in
planning dialogue paths that smoothly and coherently direct conversations
towards the target. However, this is a challenging and under-explored task. In
this work, we propose a coherent dialogue planning approach that uses a
stochastic process to model the temporal dynamics of dialogue paths. We define
a latent space that captures the coherence of goal-directed behavior using a
Brownian bridge process, which allows us to incorporate user feedback flexibly
in dialogue planning. Based on the derived latent trajectories, we generate
dialogue paths explicitly using pre-trained language models. We finally employ
these paths as natural language prompts to guide dialogue generation. Our
experiments show that our approach generates more coherent utterances and
achieves the goal with a higher success rate. | Jian Wang, Dongding Lin, Wenjie Li | 2023-05-09T09:28:23Z | http://arxiv.org/abs/2305.05290v1 | # Dialogue Planning via Brownian Bridge Stochastic Process for Goal-directed Proactive Dialogue
###### Abstract
Goal-directed dialogue systems aim to proactively reach a pre-determined target through multi-turn conversations. The key to achieving this task lies in planning dialogue paths that smoothly and coherently direct conversations towards the target. However, this is a challenging and under-explored task. In this work, we propose a coherent dialogue planning approach that uses a stochastic process to model the temporal dynamics of dialogue paths. We define a latent space that captures the coherence of goal-directed behavior using a Brownian bridge process, which allows us to incorporate user feedback flexibly in dialogue planning. Based on the derived latent trajectories, we generate dialogue paths explicitly using pre-trained language models. We finally employ these paths as natural language prompts to guide dialogue generation. Our experiments show that our approach generates more coherent utterances and achieves the goal with a higher success rate1.
Footnote 1: Our code and data are available at [https://github.com/iwangjian/Color4Dial](https://github.com/iwangjian/Color4Dial).
## 1 Introduction
Dialogue systems have made significant progress in generating high-quality responses for open-domain chitchat (Zhang et al., 2020; Roller et al., 2021) and assisting users in completing specific tasks (Madotto et al., 2018; Wu et al., 2019). Instead of passively responding to users, dialogue systems can also take a proactive role to direct a conversation towards specific goals, such as introducing new and interesting topics (Wu et al., 2019) or providing sociable recommendations on target items (Wang et al., 2022). Such a proactive target-oriented or goal-directed dialogue system can guide conversations towards topics that the system knows how to discuss, making it promising to build autonomous conversational AI.
For goal-directed dialogue systems, the objective is to proactively direct conversations towards a designated target. Previous work has primarily pre-determined the targets as specific keywords (Tang et al., 2019), topics (Wu et al., 2019; Sevegnani et al., 2021), and dialogue action-topic pairs (Zhang et al., 2021; Wang et al., 2022). To achieve this task, effective dialogue planning is essential, which requires taking reasonable actions and smoothly directing dialogue topics to the designated one. More importantly, the whole process is expected to be coherent and natural. Prior studies attempted to tackle this challenge through next-turn transition prediction (Tang et al., 2019), sub-goal generation (Zhang et al., 2021; Kishinami et al., 2022), and knowledge path reasoning (Gupta et al., 2022) to control dialogue generation. However, there are still open issues worth exploring.
Figure 1: An example from the repurposed DuRecDial 2.0 (Liu et al., 2021) dataset. Given a pre-determined target and current dialogue context, we expect to plan a dialogue path to direct the conversation.
**First**, most existing methods adopted a greedy strategy with a single-turn topic prediction mechanism, which lacks global planning for the dialogue process (Yang et al., 2022). Consequently, these methods are often short-sighted, resulting in sub-coherent topic threads. **Second**, recognizing a user's engagement and willingness to follow the system is crucial for achieving coherent transitions. However, current studies often overlook the importance of modeling such user feedback. Therefore, it is necessary to explore globally planned dialogue strategies while incorporating user feedback to improve the coherence of goal-directed dialogue systems.
In this work, our objective is to globally plan dialogue paths that connect the current context to the target at each turn. As illustrated in Figure 1, this dialogue path should strike a balance between coherence with the ongoing dialogue context and smooth transitions towards the target. Assuming that path trajectories without a target can be represented as Brownian motion (Revuz and Yor, 2013) in latent space, we expect the embeddings of neighboring trajectory points to be similar to each other, while those of distant trajectory points should be dissimilar. Drawing inspiration from Wang et al. (2022), we view goal-directed dialogue behavior as a Brownian bridge (Revuz and Yor, 2013) stochastic process conditioned on fixed start and end points. As such, we can derive latent trajectories that follow coherent temporal dynamics.
Based on the above intuition, we propose a coherent dialogue planning approach via Brownian bridge (**Color**) stochastic process. It involves mapping dialogue path points, such as topics or action-topic pairs, into a Brownian bridge latent space conditioned on the current context and designated target. To ensure goal-directed behavior and incorporate user feedback, we also map the latest user utterance into a real-time user feedback representation using the same latent space. We leverage this feedback to perturb the density and uncertainty of the Brownian bridge, simulating its impact on the dialogue planning process. Our training process uses a contrastive objective, which helps retain global coherence. We then fine-tune pre-trained language models (PLMs) using the derived latent trajectories to plan dialogue paths explicitly. These paths provide step-by-step explanations for reaching the target and serve as natural language prompts for generating system utterances.
In summary, our main contributions are: (1) We propose a novel approach called Color, which effectively models global coherence and incorporates user feedback in goal-directed dialogue planning. Our method utilizes the Brownian bridge stochastic process, and to the best of our knowledge, this is the first work to apply this method to the goal-directed proactive dialogue task. (2) We repurpose existing dialogue datasets by automatically constructing system goals and splitting them into in- and out-of-domain test sets. It facilitates research in the field and allows for more accurate evaluation of models. (3) Extensive experiments demonstrate that our proposed approach outperforms other methods, both in automatic and human evaluations.
## 2 Preliminaries
**Problem Formulation** We consider a corpus of goal-directed dialogues \(\mathcal{D}=\{(\mathcal{K}_{i},\mathcal{P}_{i},\mathcal{C}_{i})\}_{i=1}^{N}\), where \(N\) is the total number of dialogues. The domain knowledge facts relevant to the \(i\)-th dialogue are represented as \(\mathcal{K}_{i}=\{k_{i,j}\}_{j=1}^{N_{K}}\), each in the form of a triple. The dialogue content for the \(i\)-th dialogue is \(\mathcal{C}_{i}=\{\mathcal{C}_{i,t}\}_{t=1}^{N_{T}}\), with a total of \(N_{T}\) turns. The whole dialogue path for the \(i\)-th dialogue is denoted as \(\mathcal{P}_{i}=\{\mathcal{P}_{i,l}\}_{l=1}^{L}\), where each path point is a topic or an action-topic pair. Here, dialogue topics are mainly constructed based on the domain knowledge \(\mathcal{K}_{i}\). In some scenarios, there also exists a user profile \(\mathcal{U}_{i}\), which can be user attributes or certain personal preferences.
Given a target \(\mathcal{T}\) consisting of an action-topic pair or a topic only, a dialogue context \(\mathcal{C}\), and a set of relevant domain knowledge \(\mathcal{K}\) (and a user profile \(\mathcal{U}\), if any), our objective is to generate coherent utterances to reach the target \(\mathcal{T}\) when appropriate. The problem can be decomposed into two sub-tasks: (1) **dialogue planning**, which involves planning suitable actions and topics to lead the dialogue proactively with coherent transitions to the target, and (2) **dialogue generation**, which involves generating an appropriate utterance to achieve the planned action and topic at each turn.
**Brownian Bridge** The standard Wiener process or Brownian motion \(W(t)\) has a normal distribution with mean \(0\) and variance \(t\), i.e., \(W(t)\sim\mathcal{N}(0,t)\). A Brownian bridge (Revuz and Yor, 2013) is a continuous-time stochastic process pinned at fixed start and end points, where its distribution \(B(t)\) is given by:
\[B(t)=W(t)-\frac{t}{T}W(T) \tag{1}\]
where \(t\in[0,T]\), \(T\) denotes the end time. Furthermore, the transition distribution of a Brownian bridge process from an initial point \(z_{0}\) at \(t=0\) to an end point \(z_{T}\) at \(t=T\) is:
\[p(z_{t}|z_{0},z_{T})\sim\mathcal{N}\Big{(}\Big{(}1-\frac{t}{T}\Big{)}z_{0}+ \frac{t}{T}z_{T},\frac{t(T-t)}{T}\Big{)} \tag{2}\]
It implies that a trajectory point \(z_{t}\) follows a noisy linear interpolation between \(z_{0}\) and \(z_{T}\), with \(z_{t}\) closer to \(z_{0}\) at the start and closer to \(z_{T}\) at the end. The uncertainty is higher in the middle of the time interval and lower near the start and end points. The time-controlled nature of the Brownian bridge process has led to its application in various fields, such as trajectory simulation [14] and language modeling [15].
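To make this transition density concrete, a minimal 1-D sampling sketch is given below. This is an illustration added for this presentation, not code from the paper; in the model itself the latents are \(d\)-dimensional vectors.

```python
import numpy as np

def sample_bridge(z0, zT, T, rng=np.random.default_rng(0)):
    """Sample a latent trajectory from the Brownian bridge transition
    density of Eq. (2): z_t ~ N((1 - t/T) z0 + (t/T) zT, t(T - t)/T)."""
    traj = [z0]
    for t in range(1, T):
        mean = (1 - t / T) * z0 + (t / T) * zT
        var = t * (T - t) / T
        traj.append(rng.normal(mean, np.sqrt(var)))
    return traj + [zT]

# Pinned at 0 and 5 with T = 8: the noise peaks mid-trajectory and
# vanishes at both endpoints, as described above.
print(np.round(sample_bridge(z0=0.0, zT=5.0, T=8), 2))
```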
## 3 Method
We propose a coherent dialogue planning approach via Brownian bridge (**Color**) stochastic process to steer goal-directed dialogue generation. The intuition behind Color is to learn a mapping (see §3.1) in the Brownian bridge latent space that captures coherent temporal dynamics for planning dialogue paths. Each dialogue path consists of a sequence of topics or action-topic pairs, starting from the current context and leading to the target. We generate these paths explicitly (see §3.2) based on representations derived from the latent space, and use them to guide the generation of dialogue utterances (see §3.3).
### Stage 1: Brownian Bridge Mapping
A Brownian bridge latent space involves a nonlinear mapping that transforms observations into a low-dimensional latent space, using the Brownian bridge stochastic process. Our objective is to utilize this mapping to train an encoder \(\mathcal{F}\), to convert raw dialogue paths into latent representations that retain global coherence, with the overview depicted in Figure 2. In the following sections, we will introduce two crucial aspects of our approach: user feedback modeling and contrastive training.
**User Feedback Modeling** Suppose we obtain the user feedback representation \(z_{u}\) and an engagement indicator \(\delta_{u}\in(0,1)\) that reflects the user's level of engagement and likelihood of following the system. We then define a new transition distribution of the Brownian bridge process between a start point \(z_{s_{0}}\) at \(t=0\) and an end point \(z_{s_{T}}\) at \(t=T\):
\[p(z_{s_{t}}\mid z_{s_{0}},z_{s_{T}},z_{u})\sim\mathcal{N}\Big{(}\underbrace{\Big{(}1-\frac{t}{T}\Big{)}(z_{s_{0}}+z_{u})+\frac{t}{T}z_{s_{T}}}_{\mu_{s_{t}}},\;\underbrace{\frac{t(T-t)}{T}+\varphi(\delta_{u})}_{\sigma^{2}}\Big{)} \tag{3}\]
where \(0<t<T\) and \(\varphi(\cdot)\) is a decaying function. Here, \(z_{u}\) is used to perturb the density (the mean \(\mu_{s_{t}}\)) of the Brownian bridge process, and \(\delta_{u}\) is used to perturb its uncertainty (the variance \(\sigma^{2}\)), with the perturbation strength decaying over time. This decay means that the impact of the current user feedback on future planning is gradually reduced. In practice, \(\varphi(\cdot)\) can be implemented with linear decay, i.e., \(\varphi(\delta_{u})=\delta_{u}(1-t/T)\), or exponential decay, i.e., \(\varphi(\delta_{u})=\delta_{u}e^{-t/(\lambda T)}\), where \(\lambda\in(0,1)\) is a scaling factor.
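A sketch of sampling under this perturbed transition density follows (again with scalar latents for illustration; the decay choices mirror the linear and exponential forms of \(\varphi(\cdot)\) given above):

```python
import numpy as np

def sample_perturbed_bridge(z0, zT, z_u, delta_u, T, lam=0.5,
                            decay="linear", rng=np.random.default_rng(0)):
    """Sample from the user-feedback-perturbed bridge of Eq. (3):
    z_u shifts the mean, delta_u inflates the variance, and both
    effects fade as t approaches the target time T."""
    traj = []
    for t in range(1, T):
        mean = (1 - t / T) * (z0 + z_u) + (t / T) * zT
        if decay == "linear":
            extra = delta_u * (1 - t / T)             # phi, linear decay
        else:
            extra = delta_u * np.exp(-t / (lam * T))  # phi, exponential decay
        traj.append(rng.normal(mean, np.sqrt(t * (T - t) / T + extra)))
    return traj

print(np.round(sample_perturbed_bridge(0.0, 5.0, z_u=0.8, delta_u=0.3, T=8), 2))
```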
**Contrastive Training** For a tuple of observations \((S_{u},S_{0},S_{t},S_{T})\), our objective is to ensure that their latent representations \((z_{u},z_{s_{0}},z_{s_{t}},z_{s_{T}})\) follow the Brownian bridge transition distribution described in Eq. (3). Here, \(S_{u}\) is the latest user utterance (concatenated with the user profile, if applicable), which may embody real-time user feedback information. \(S_{0}\) consists of the concatenated domain knowledge and dialogue context, revealing the start of the dialogue path. \(S_{T}\) is the designated target, representing the end of the
Figure 2: Overview of Stage 1: Mapping observations to Brownian bridge latents. \(S_{u}\) is the latest user utterance, \(S_{0}\) is the concatenated text of domain knowledge and dialogue context, \(S_{T}\) is the designated target, \(S_{t}\) denotes a sampled path point in the dialogue path with \(0<t<T\). We differentiate data flow by colored arrows.
dialogue path. A _path point_, by default, refers to a topic or action-topic pair specific to the dataset. \(S_{t}\) denotes a sampled path point in the dialogue path, s.t. \(0<t<T\). Here, \(T\) denotes the number of transitions required to reach the target.
As shown in Figure 2, we build our encoder \(\mathcal{F}\) on top of a frozen PLM encoder, which is followed by specific trainable multilayer perceptron (MLP) blocks. All the necessary latents are given by:
\[z_{s_{0}} =f_{P}\Big{(}\text{AvgPool}\big{(}f_{\theta}(S_{0})\big{)}\Big{)}, \tag{4}\] \[z_{s_{t}} =f_{P}\Big{(}\text{AvgPool}\big{(}f_{\theta}(S_{t})\big{)}\Big{)},\] (5) \[z_{s_{T}} =f_{P}\Big{(}\text{AvgPool}\big{(}f_{\theta}(S_{T})\big{)}\Big{)},\] (6) \[z_{u} =f_{P}\bigg{(}f_{C}\Big{(}\text{AvgPool}\big{(}f_{\theta}(S_{u}) \big{)}\Big{)}\bigg{)},\] (7) \[\delta_{u} =\sigma\bigg{(}f_{E}\Big{(}f_{C}\big{(}\text{AvgPool}(f_{\theta}( S_{u}))\big{)}\Big{)}\bigg{)} \tag{8}\]
where \(f_{\theta}\) denotes a frozen PLM encoder such as BERT (Devlin et al., 2019) or BART (Lewis et al., 2020) encoder, AvgPool(\(\cdot\)) denotes the average pooling operation. \(f_{P}\), \(f_{C}\), and \(f_{E}\) are MLP blocks that produce output with a latent dimension of \(d\). \(\sigma\) is the Sigmoid activation function. The intuition behind the training is to ensure that the representation \(z_{s_{t}}\) of a positive path point \(S_{t}\) sampled from the same dialogue is close to the expected embedding \(\mu_{s_{t}}\) (the mean in Eq. (3)). In contrast, the representation \(z^{{}^{\prime}}\) of a negative random path point \(S_{t}^{{}^{\prime}}\) from a different dialogue is far from \(\mu_{s_{t}}\) (see Figure 2) because it does not align with the Brownian bridge pinned by \(z_{s_{0}}\) and \(z_{s_{T}}\). We consider a contrastive objective proposed in Wang et al. (2022) for training. Formally, given input batches \(\mathcal{B}=\{(S_{u},S_{0},S_{t},S_{T})\}\) consisting of randomly sampled positive path points \(S_{t}\) where \(0<t<T\), we optimize our encoder \(\mathcal{F}\) as follows:
\[\mathcal{L}_{CL} =-\log\frac{\exp(\text{d}(S_{t}^{+};\mathcal{F}))}{\sum\limits_{ S_{t}^{-}\in\mathcal{B}}\exp(\text{d}(S_{t}^{-};\mathcal{F}))}, \tag{9}\] \[\text{d}(S_{t};\mathcal{F}) =-\frac{1}{2\sigma^{2}}\bigg{\|}z_{s_{t}}-\mu_{s_{t}}\bigg{\|}_{2}^ {2} \tag{10}\]
where \(S_{t}^{+}\) denotes a positive tuple \((S_{u},S_{0},S_{t},S_{T})\), \(S_{t}^{-}\) denotes a negative tuple \((S_{u},S_{0},S_{t}^{{}^{\prime}},S_{T})\), \(\sigma^{2}\) is the variance in Eq. (3), \(\mu_{s_{t}}\) is the mean in Eq. (3).
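The in-batch version of this objective can be sketched as follows. Note this is a schematic reading that assumes the standard InfoNCE form (with the positive included in the denominator) and linear decay for \(\varphi(\cdot)\); the paper's exact negative sampling may differ.

```python
import torch

def bridge_contrastive_loss(z_t, z0, zT, z_u, delta_u, t, T):
    """InfoNCE-style version of Eqs. (9)-(10). z_t, z0, zT, z_u: (B, d)
    latents; row i of z_t is the positive for bridge i, other rows act
    as in-batch negatives. t, T, delta_u: (B,) per-bridge scalars."""
    w = t / T
    mu = (1 - w).unsqueeze(-1) * (z0 + z_u) + w.unsqueeze(-1) * zT
    var = t * (T - t) / T + delta_u * (1 - w)   # Eq. (3) with linear decay
    sqdist = torch.cdist(z_t, mu) ** 2          # (B, B) pairwise distances
    logits = -sqdist / (2 * var.unsqueeze(0))   # d(S_t; F) per bridge
    labels = torch.arange(z_t.size(0))          # positives on the diagonal
    return torch.nn.functional.cross_entropy(logits, labels)

B, d = 4, 16
z_t, z0, zT, z_u = (torch.randn(B, d) for _ in range(4))
loss = bridge_contrastive_loss(z_t, z0, zT, z_u, delta_u=torch.rand(B),
                               t=torch.tensor([2., 3., 5., 6.]),
                               T=torch.full((B,), 8.0))
print(loss.item())
```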
### Stage 2: Planning Dialogue Paths
The Brownian bridge latent space makes it easy to derive a coherent latent trajectory with temporal dynamics. We feed the start point \(S_{0}\), designated target \(S_{T}\), and observed \(S_{u}\), into the trained encoder \(\mathcal{F}\) respectively, then sample a latent trajectory \(z=(z_{s_{1}},z_{s_{2}},\cdots,z_{s_{T}})\) that follows Eq. (3), where \(z_{s_{t}}\in\mathbb{R}^{d}\), \(t=1,2,\cdots,T\). Here, \(z\) acts like the transition-level latent representation that connects the ongoing dialogue context to the target, i.e., the dialogue path \(\mathcal{P}\) to be planned.
To generate the path \(\mathcal{P}\), we define the required input as \(\mathcal{X}=[\mathcal{C};\mathcal{K};\mathcal{T}]\), which is the concatenated text of the dialogue context \(\mathcal{C}\), domain knowledge \(\mathcal{K}\), and target \(\mathcal{T}\). As shown in Figure 3, we feed \(\mathcal{X}\) into a pre-trained BART (Lewis et al., 2020) model for fine-tuning, with the encoded hidden states being \(h=(h_{1},h_{2},\cdots,h_{m})\). We discuss the generation of \(\mathcal{P}\) by conditioning on \(h\) and \(z\) below.
**First**, sampling the latent trajectory \(z\) requires the value \(T\), i.e., the number of transitions to reach the target. We obtain this value by adding an MLP layer \(f_{T}\) to the BART encoder as a predictor, which outputs the probability of \(T\):
\[p(T)=\text{softmax}(W_{1}f_{T}(\bar{h})+b_{1}) \tag{11}\]
where \(\bar{h}\) is the average pooled representation of \(h\), \(W_{1}\) and \(b_{1}\) are trainable parameters. We optimize the predictor using cross-entropy loss \(\mathcal{L}_{c}\).
**Second**, our BART decoder conditions on \(h\) and the derived latent trajectory \(z\), then generates the dialogue path \(\mathcal{P}\) with encoder-decoder attentions. The output distribution is approximated as follows:
\[p_{\theta}(\hat{y}_{t}) =\text{softmax}(W_{2}h_{t}^{o}+b_{2}), \tag{12}\] \[h_{t}^{o} =\text{Decoder}(y_{t-1},H),\] (13) \[H =[h;W^{\text{T}}z] \tag{14}\]
where \(W_{2}\), \(b_{2}\) are trainable parameters, \(W\) denotes a linear transformation that maps the dimension of \(z\) to be identical to \(h\), and \([;]\) denotes concatenation.
Figure 3: Overview of Stage 2: Planning the dialogue path \(\mathcal{P}\), where \(\mathcal{X}\) is the required input, \(T\) denotes the number of transitions required to reach the target.
The decoder is trained by minimizing the negative log-likelihood below:
\[\mathcal{L}_{g}=-\sum_{i=1}^{N}p(y^{(i)})\log p_{\theta}(\hat{y}^{(i)}) \tag{15}\]
where \(p(y^{(i)})\) is the distribution of the ground-truth dialogue path, while \(p_{\theta}(\hat{y}^{(i)})\) is the distribution of the approximated dialogue path.
In addition, the decoder's hidden states \(h^{o}=(h_{1}^{o},h_{2}^{o},\cdots,h_{n}^{o})\) (see Eq. (13)) and the transformed latent trajectory \(z^{o}=W^{\mathrm{T}}z\) (see Eq. (14)) both represent the dialogue path \(\mathcal{P}\), though at different levels. We therefore minimize the Kullback-Leibler (KL) divergence between \(h^{o}\) and \(z^{o}\):
\[\mathcal{L}_{KL}=\sum_{i=1}^{N}D_{KL}(\bar{h^{o}}^{(i)}||\bar{z^{o}}^{(i)}) \tag{16}\]
where \(\bar{h^{o}}\) and \(\bar{z^{o}}\) denote the average pooled representation of \(h^{o}\) and \(z^{o}\), respectively.
For training, our model is optimized as follows:
\[\mathcal{L}=\alpha\mathcal{L}_{c}+\beta\mathcal{L}_{g}+\gamma\mathcal{L}_{KL} \tag{17}\]
where \(\alpha\), \(\beta\), and \(\gamma\) are hyperparameters. During inference, we obtain the value \(T\) inferred by the predictor \(f_{T}\), then sample a latent trajectory \(z=(z_{s_{1}},\cdots,z_{s_{T}})\) given \(t=1,\cdots,T\). The decoder then generates a dialogue path token by token. Additionally, no transition is needed to reach the target if \(T=0\). In such cases, we directly generate the dialogue path by copying the given target \(\mathcal{T}\).
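On toy tensors, the composition of the three terms in Eq. (17) can be sketched as below. The encoder/decoder outputs here are random stand-ins, and normalizing the pooled vectors with a softmax before the KL term is one plausible reading, since the paper does not spell out how the pooled states are turned into distributions.

```python
import torch
import torch.nn.functional as F

alpha, beta, gamma = 0.1, 1.0, 1.0            # weights from Sec. 4.1

logits_T = torch.randn(2, 10)                 # f_T scores over candidate T
L_c = F.cross_entropy(logits_T, torch.tensor([3, 5]))               # Eq. (11)

lm_logits = torch.randn(2, 12, 100)           # decoder vocab logits
path_ids = torch.randint(0, 100, (2, 12))     # ground-truth path tokens
L_g = F.cross_entropy(lm_logits.flatten(0, 1), path_ids.flatten())  # Eq. (15)

h_o = torch.randn(2, 12, 16).mean(dim=1)      # pooled decoder states
z_o = torch.randn(2, 12, 16).mean(dim=1)      # pooled projected trajectory
L_KL = F.kl_div(F.log_softmax(h_o, -1), F.softmax(z_o, -1),
                reduction="batchmean")        # Eq. (16)

loss = alpha * L_c + beta * L_g + gamma * L_KL    # Eq. (17)
print(loss.item())
```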
### Stage 3: Generating Dialogue Utterances
Motivated by prior work on prompt-based learning for dialogue generation (Zheng and Huang, 2021; Madotto et al., 2021), we regard each dialogue path \(\mathcal{P}\) as a natural language prompt to guide a generative PLM for dialogue generation. Here, \(\mathcal{P}\) serves as a global prompt that outlines the dialogue actions and topics needed to reach the target step by step. With the power of the PLM, \(\mathcal{P}\) helps to distill the necessary knowledge from both the input text and the PLM. To formulate the newly input \(\mathcal{X}^{{}^{\prime}}\), we append \(\mathcal{P}\) to the given dialogue context \(\mathcal{C}\) and domain knowledge \(\mathcal{K}\), and concatenate them as:
\[\mathcal{X}^{{}^{\prime}}=[\mathcal{K};\mathcal{C};\mathcal{P}] \tag{18}\]
where \([;]\) denotes concatenation. We then feed \(\mathcal{X}^{{}^{\prime}}\) into a pre-trained GPT-2 (Radford et al., 2019) or DialoGPT (Zhang et al., 2020) for supervised fine-tuning. We adopt the planned dialogue paths generated by our Color during inference.
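For illustration, the prompt assembly of Eq. (18) can be sketched as below. The \(\texttt{[A]}\)/\(\texttt{[T]}\) separators follow the path format described in Section 4.1, while the \(\texttt{[K]}\)/\(\texttt{[USER]}\)/\(\texttt{[SYS]}\) markers and the example content are invented placeholders, not the authors' exact preprocessing.

```python
def build_stage3_input(knowledge, context, path):
    """Assemble X' = [K; C; P] (Eq. 18) as a flat text prompt."""
    K = " ".join("[K] " + " ".join(triple) for triple in knowledge)
    C = " ".join(context)
    return f"{K} {C} {path}"

knowledge = [("Mad World", "singer", "Adam Lambert")]     # illustrative
context = ["[USER] I feel down today.", "[SYS] A song may cheer you up."]
path = "[A] recommend [T] Mad World"     # planned by Color at inference
print(build_stage3_input(knowledge, context, path))
```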
## 4 Experiments and Results
### Experimental Setup
**Datasets** The task of goal-directed proactive dialogue is under-explored, making it challenging to find feasible benchmarks for evaluation. After careful consideration, we have identified the DuRecDial 2.0 (Liu et al., 2021) and TGConv (Yang et al., 2022) datasets as appropriate for our experiments. DuRecDial 2.0 (Liu et al., 2021) is a crowdsourced dataset of human-to-human dialogues in recommendation-oriented scenarios. We repurpose the dataset by defining the targets as action-topic pairs. We obtain two types of splits for the test set: _in-domain_ (ID) and _out-of-domain_ (OOD), similar to Sevegnani et al. (2021). The OOD split ensures that none of the target topics in the test set are present in the training set, whereas the ID split allows them to appear. The TGConv (Yang et al., 2022) dataset contains high-quality open-domain dialogues on a variety of common-sense topics. Each dialogue is designed to direct the conversation towards a specific keyword or topic through coherent keyword transitions, which are categorized as either easy-to-reach or hard-to-reach based on their difficulty level. Table 1 summarizes the statistics of both datasets. More details are available in Appendix A.
**Baseline Methods** For dialogue generation, our baselines include: **GPT-2**(Radford et al., 2019), **DialoGPT**(Zhang et al., 2020), and **BART**(Lewis et al., 2020). On the repurposed DuRecDial 2.0 dataset, we also compared our method with three competitive methods: **MGCG\_G**(Liu et al., 2020), **KERS**(Zhang et al., 2021), and **TCP-Dial**(Wang et al., 2022). We chose these methods because they are highly relevant to our problem setting, and TCP-Dial is, to the best of our knowledge, the current state-of-the-art model. Given that our method is generalizable to the existing TGConv dataset, we evaluated its effectiveness against four competitive
| Dataset | Split | #Dial. | #Utter. | Max. Turns | Avg. Turns |
| --- | --- | --- | --- | --- | --- |
| DuRecDial 2.0 | Train | 4,256 | 68,781 | 13 | 8.1 |
| DuRecDial 2.0 | Valid | 608 | 9,677 | 14 | 8.0 |
| DuRecDial 2.0 | Test-ID | 770 | 12,299 | 13 | 8.0 |
| DuRecDial 2.0 | Test-OOD | 446 | 7,962 | 12 | 8.9 |
| TGConv | Train | 15,197 | 70,205 | 9 | 3.8 |
| TGConv | Valid | 2,681 | 12,167 | 9 | 3.7 |
| TGConv | Test | 1,000 | 5,132 | 9 | 3.9 |

Table 1: Overview of the datasets.
models specific to that dataset: **MultiGen**(Ji et al., 2020), **DKRN**(Qin et al., 2020), **CKC**(Zhong et al., 2021), and **TopKG**(Yang et al., 2022). More details about the above methods are shown in Appendix B.1. For dialogue planning, we compared our Color with the planning models proposed in the above methods using a planning-enhanced paradigm. We also included **BERT**(Devlin et al., 2019) and **BART**(Lewis et al., 2020) as our baselines. More details about them are described in Appendix B.2.
**Implementation Details** Our proposed Color model is implemented in PyTorch. In both Stage 1 and Stage 2, we adopt the BART-base model (768 dimensions, 6 encoder/decoder layers, and 12 attention heads) released in Huggingface's Transformers (Wolf et al., 2020) library. The latent dimension \(d\) is set to 16. The MLP blocks \(f_{P}\), \(f_{C}\), and \(f_{E}\) are all stacked to 3 layers. The decaying function \(\varphi(\cdot)\) uses linear decay. The hyperparameters \(\alpha\), \(\beta\) and \(\gamma\) are set to 0.1, 1.0 and 1.0, respectively. For training in Stage 2, we construct the dialogue path \(\mathcal{P}\) in the format of \(\texttt{[A]}a_{1}\texttt{[T]}t_{1}\cdots\texttt{[A]}a_{T}\texttt{[T]}t_{T}\) on the DuRecDial 2.0, and of \(\texttt{[T]}t_{1}\cdots\texttt{[T]}t_{T}\) on the TGConv. Here, \(\texttt{[A]}\) is a special token to separate an action \(a_{i}\), and \(\texttt{[T]}\) is a special token to separate a topic \(t_{i}\). During inference, we generate a dialogue path token by token. Further details on training and inference are provided in Appendix C.
### Evaluation of Dialogue Generation
Evaluation MetricsTo evaluate the performance of next-turn system utterance generation, we adopt commonly used local evaluation metrics, including perplexity (**PPL**), distinct (**DIST**) (Li et al., 2016), **BLEU**-\(n\) (Papineni et al., 2002), word-level **F1** and knowledge F1 (**Know. F1**) (Liu et al., 2020). To evaluate models' goal-directed performance, we use the goal success rate (**Succ.**) as the global evaluation metric. In the repurposed DuRecDial 2.0 dataset, Succ. measures the proportion of correct target topic generation within the target turn and the two adjacent turns in the test set, as per Wang et al. (2022). For the TGConv dataset, we perform self-play simulations, following Yang et al. (2022), to simulate multi-turn conversations and compute the success rate of generating the target keyword within 8 turns. Additionally, we adopt coherence (**Coh.**) (Yang et al., 2022) as another global evaluation metric, which measures the average contextual semantic similarity between the last utterance in the context and the generated utterance.
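For concreteness, the sketch below shows a common implementation of DIST-\(n\) (Li et al., 2016) as the ratio of distinct \(n\)-grams to total \(n\)-grams over the generated corpus. Whether counting is corpus-level (assumed here) or averaged per utterance varies across papers, so this is an illustration rather than the exact evaluation script.

```python
# Illustrative corpus-level DIST-n: distinct n-grams / total n-grams.
def dist_n(utterances, n):
    total, distinct = 0, set()
    for utt in utterances:
        tokens = utt.split()
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        distinct.update(ngrams)
    return len(distinct) / total if total > 0 else 0.0

corpus = ["i like the movie secret", "i like jay chou"]
print(round(dist_n(corpus, 1), 3), round(dist_n(corpus, 2), 3))  # 0.778 0.857
```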
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**Split** & **Model** & **PPL (\(\downarrow\))** & **F1 (\%)** & **BLEU-1 / 2** & **DIST-1 / 2** & **Know. F1 (\%)** & **Succ. (\%)** \\ \hline
\multirow{8}{*}{ID} & MGCG\_G (Liu et al., 2020) & 25.32 & 35.13 & 0.316 / 0.211 & 0.016 / 0.053 & 39.53 & 29.49 \\
 & KERS (Zhang et al., 2021) & 20.15 & 31.27 & 0.288 / 0.196 & 0.017 / 0.061 & 41.18 & 33.75 \\
 & GPT-2 (Radford et al., 2019) & 5.33 & 36.86 & 0.314 / 0.222 & 0.024 / 0.081 & 43.62 & 41.80 \\
 & DialoGPT (Zhang et al., 2020) & 5.26 & 38.12 & 0.324 / 0.252 & 0.023 / 0.076 & 44.71 & 46.46 \\
 & BART (Lewis et al., 2020) & 6.46 & 36.11 & 0.279 / 0.181 & **0.030 / 0.096** & 43.33 & 58.40 \\
 & TCP-Dial (Wang et al., 2022) & 5.88 & 34.46 & 0.293 / 0.201 & 0.027 / 0.091 & 45.75 & 60.49 \\ \cline{2-8}
 & Ours (Color w/ GPT-2) & **5.17** & 40.43* & 0.337* / 0.243* & 0.026 / 0.084 & 50.81* & 69.14* \\
 & Ours (Color w/ DialoGPT) & 5.22 & **43.14*** & **0.371* / 0.277*** & 0.024 / 0.073 & **57.89*** & **73.20*** \\ \hline
\multirow{8}{*}{OOD} & MGCG\_G (Liu et al., 2020) & 28.21 & 30.84 & 0.276 / 0.167 & 0.015 / 0.046 & 20.53 & 8.46 \\
 & KERS (Zhang et al., 2021) & 24.35 & 27.91 & 0.259 / 0.160 & 0.016 / 0.058 & 26.88 & 14.15 \\
 & GPT-2 (Radford et al., 2019) & 5.86 & 33.06 & 0.276 / 0.193 & 0.023 / 0.077 & 28.79 & 32.79 \\
 & DialoGPT (Zhang et al., 2020) & 5.37 & 34.27 & 0.283 / 0.176 & 0.021 / 0.068 & 31.75 & 32.47 \\
 & BART (Lewis et al., 2020) & 8.09 & 32.38 & 0.244 / 0.149 & 0.026 / 0.081 & 30.02 & 43.08 \\
 & TCP-Dial (Wang et al., 2022) & 8.24 & 29.24 & 0.255 / 0.165 & **0.027 / 0.089** & 21.36 & 18.40 \\ \cline{2-8}
 & Ours (Color w/ GPT-2) & 5.63 & 34.44* & 0.285* / 0.198* & 0.025 / 0.082 & 34.04* & **57.41*** \\
 & Ours (Color w/ DialoGPT) & **5.30** & **37.97*** & **0.320*** / 0.227* & 0.024 / 0.072 & **41.38*** & 52.36* \\ \hline \hline \end{tabular}
\end{table}
Table 2: Automatic local and global evaluation results of dialogue generation on the DuRecDial 2.0 dataset with different test splits. Significant improvements over backbone models are marked with * (t-test, \(p<0.05\)).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Easy Target**} & \multicolumn{2}{c}{**Hard Target**} \\ & **Succ. (\%)** & **Coh.** & **Succ. (\%)** & **Coh.** \\ \hline GPT-2\({}^{\dagger}\) (G) & 22.3 & 0.23 & 17.3 & 0.21 \\ DialoGPT (D) & 32.3 & 0.30 & 23.8 & 0.25 \\ MultiGen\({}^{\dagger}\) & 26.7 & 0.21 & 19.6 & 0.24 \\ DKRN\({}^{\dagger}\) & 38.6 & 0.33 & 21.7 & 0.31 \\ CKC\({}^{\dagger}\) & 41.9 & 0.35 & 24.8 & 0.33 \\ TopKG\({}^{\dagger}\) & 48.9 & 0.31 & 27.3 & 0.33 \\ \hline Ours (Color w/ G) & 54.2 & 0.34 & 28.8 & 0.33 \\ Ours (Color w/ D) & **66.3** & **0.36** & **30.1** & **0.35** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Automatic global evaluation results of dialogue generation on the TGConv dataset. G and D are short for GPT-2 and DialoGPT, respectively. Models marked with \(\dagger\) are reported from Yang et al. (2022).
Results and DiscussionTable 2 shows evaluation results on the DuRecDial 2.0 dataset. We observe that MGCG_G and KERS achieve comparable performance to PLM-based models on the in-domain (ID) split. One main reason is that they use the predicted dialogue action and topic to guide the model in utterance generation. However, they perform poorly in terms of goal success rate due to a lack of dialogue-level planning. We note that BART and TCP-Dial obtain much better DIST-1/2 scores than others because they seldom generate repeated words, making the generated utterances more diverse. In comparison, our models achieve remarkable improvements over most evaluation metrics. For example, our Color with DialoGPT achieves much better knowledge F1 scores, indicating that our method is more likely to generate utterances with correct knowledge. Regarding the goal success rate, our models obtain a large margin of improvement on both ID and OOD splits. It shows that using prompts with appropriate dialogue paths effectively steers PLMs to generate proper utterances for goal-directed dialogue.
As shown in Table 3, we notice that directing a dialogue to reach the target seems challenging in the context of open-domain chitchat for all models. However, with the guidance of our dialogue planning approach, Color, our models are able to produce more coherent utterances and reach the target at a significantly higher success rate.
### Evaluation of Dialogue Planning
Evaluation MetricsTo evaluate the performance of dialogue planning, we first adopt **F1** to measure the micro-averaged precision and recall of the predicted action or topic. For generation-based models, we extract the action or topic at the evaluated turn from the generated dialogue path for a fair comparison. Due to the nature of dialogue, multiple temporary planning strategies can be reasonable before reaching the target. Following Zhou et al. (2020), we also expand gold labels by considering the system's actions or topics within the previous and subsequent turns. As such, we then compute bigram action F1 (**Bi-act. F1**) and bigram topic F1 (**Bi-top. F1**) for evaluation.
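As an illustration of the expanded-label evaluation (a sketch assuming one predicted and one gold topic per turn; the real evaluation script may differ in detail), bigram topic F1 can be computed as follows.

```python
# Illustrative bigram topic F1: a prediction at turn i counts as correct if
# it matches the gold topic at turn i-1, i, or i+1.  With one prediction and
# one gold label per turn, micro-averaged precision and recall coincide.
def bigram_f1(predictions, gold):
    hits = 0
    for i, pred in enumerate(predictions):
        window = gold[max(0, i - 1):i + 2]  # previous, current, next turn
        if pred in window:
            hits += 1
    p = r = hits / len(predictions)
    return 2 * p * r / (p + r) if hits else 0.0

preds = ["Jay Chou", "Secret", "Piano"]
gold = ["Jay Chou", "Piano", "Secret"]
print(bigram_f1(preds, gold))  # every prediction falls in a +/-1 window -> 1.0
```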
Results and DiscussionTable 4 reports the evaluation results on the DuRecDial 2.0 dataset. We find that predicting or generating dialogue topics is more challenging than dialogue actions. Further analysis reveals that the dialogue actions follow a similar transition pattern in the dialogue paths, making it easier for all models to predict actions with an F1 score of over 80%. On the other hand, the variation in dialogue paths is primarily related to topics, which requires complex reasoning of domain knowledge, dialogue context, and target for accurate prediction. When evaluating on the OOD split, all baselines show lower F1 and Bi-top. F1 scores for topics. However, our proposed Color achieves substantial improvements. We observe similar trends in Table 5 when evaluating on the TGConv dataset. Overall, our Color outperforms the baselines by generating more reasonable actions and appropriate topics, making it a promising approach for planning dialogue paths.
Analysis of Model VariantsWe analyze the following variants of our model: (1) \(\textbf{Color}_{d}\), which varies the value of the latent dimension \(d\) over \(\{8,32,128\}\) (the \(d\) in our Color is set to 16 as described in §4.1); (2) _w/o_ Brownian bridge (**BB**), which removes the operation of conditioning on the derived Brownian bridge latent representation \(z\); (3) _w/o_ user feedback modeling (**UFM**), which removes \(z_{u}\) and \(\varphi(\delta_{u})\) in our Brownian bridge process as defined in Eq. (3); (4) _w/o_ \(\mathcal{L}_{KL}\), which means the model is trained without the loss \(\mathcal{L}_{KL}\).
We report evaluation results on the OOD split of the DuRecDial 2.0 dataset, as shown in Table 6.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{**Split**} & \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Action**} & \multicolumn{2}{c}{**Topic**} \\ & & **F1** & **Bi-act. F1** & **F1** & **Bi-top. F1** \\ \hline \multirow{6}{*}{ID} & MGCG & 90.26 & 92.47 & 74.93 & 79.24 \\ & KERS & 90.33 & 91.54 & 77.85 & 80.35 \\ & BERT & 91.68 & 92.37 & 80.64 & 82.59 \\ & TCP & 92.25 & 93.82 & 85.77 & 87.25 \\ & BART & 95.40 & 96.31 & 90.96 & 92.21 \\ \cline{2-5} & Ours (Color) & **96.86** & **97.68** & **93.30** & **94.26** \\ \hline \multirow{6}{*}{OOD} & MGCG & 82.30 & 87.25 & 36.03 & 42.00 \\ & KERS & 84.21 & 86.39 & 34.20 & 37.85 \\ \cline{1-1} & BERT & 92.23 & 94.19 & 46.55 & 52.12 \\ \cline{1-1} & TCP & 89.93 & 92.09 & 44.49 & 50.71 \\ \cline{1-1} & BART & 92.63 & 93.18 & 58.57 & 62.37 \\ \cline{1-1} \cline{2-5} & Ours (Color) & **93.43** & **93.82** & **79.09** & **83.46** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of dialogue planning on the DuRecDial 2.0 with different test splits.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **F1** & **Bi-top. F1** \\ \hline BERT (Devlin et al., 2019) & 45.90 & 49.17 \\ BART (Lewis et al., 2020) & 43.20 & 47.69 \\ TopKG-Plan (Yang et al., 2022) & 46.06 & 48.04 \\ \hline Ours (Color) & **47.17** & **52.85** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of dialogue planning on the TGConv.
We observe that larger values of \(d\) bring no consistent performance gains. Hence, \(d\) is set to 16 in our Color as a trade-off between effectiveness and efficiency. We note that each module or mechanism of Color contributes to dialogue planning. In particular, the performance of Color drops sharply without the Brownian bridge (BB). This is because the derived Brownian bridge latent trajectory serves as a transition-level latent representation of the dialogue path to be planned. More importantly, it follows coherent temporal dynamics and thus benefits planning the dialogue path.
### Human Evaluation
We recruit three well-educated graduate students as annotators for human evaluation. We ask the annotators to score different models based on turn-level and dialogue-level metrics, following Liu et al. (2020). The turn-level evaluation measures appropriateness (**Appr.**) and informativeness (**Info.**). The dialogue-level evaluation measures proactivity (**Proact.**), coherence (**Coh.**), and goal success (**Succ.**). More details on the metrics and evaluation procedure are described in Appendix D.
Table 7 shows human evaluation results on the DuRecDial 2.0 dataset. The Fleiss's kappa (Fleiss, 1971) scores are mainly distributed between [0.41, 0.60], indicating moderate inter-annotator agreement. We observe that DialoGPT, TCP-Dial, and ours obtain comparable scores in informativeness since they all utilize powerful PLMs. However, our method is able to generate more appropriate utterances in response to dialogue context. For dialogue-level evaluation, our method obtains better results on average compared to all baseline models. Notably, our method achieves the highest coherence score and goal success rate, indicating that our method is more likely to direct the dialogue to reach the target coherently and successfully.
### Case Study
To better analyze goal-directed dialogue generation, we show some cherry-picked cases in Appendix E due to space limitations. We observe that some baseline models can generate fluent and informative utterances. However, they still fail to direct the dialogue to reach the target and are ineffective at maintaining coherence. In comparison, our Color model can plan a dialogue path with reasonable actions and appropriate topics that outlines how to reach the target step by step. With the guidance of the planned dialogue path, our system better knows when and what to talk about to proactively move the dialogue forward. More importantly, our method succeeds in achieving the goal (see Appendix E).
## 5 Related Work
Goal-directed Dialogue GenerationIn the goal-directed or target-oriented setting, existing studies mainly predetermine the targets as specific keywords Tang et al. (2019); Qin et al. (2020); Zhong et al. (2021), topics Wu et al. (2019); Sevegnani et al. (2021); Lei et al. (2022), and dialogue action-topic pairs Zhang et al. (2021); Wang et al. (2022). The key to the task is dialogue planning, which leads the dialogue towards the target smoothly and coherently. Prior work pays attention to next-turn transition strategy Tang et al. (2019), hierarchical policy Xu et al. (2020a,b), and sub-goal generation Zhang et al. (2021); Kishinami et al. (2022). For this knowledge-rich task, recent work Gupta et al. (2022); Yang et al. (2022); Wang et al. (2022) further concerns planning a dialogue path based on grounded knowledge to guide every turn of response generation.
Planning for Language GenerationThere is a line of work Puduppully et al. (2019); Hua and Wang (2019); Moryossef et al. (2019); Su et al. (2021) that separates text generation into content planning and surface realization. Content planning mainly concerns selecting key content (e.g., key entities)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Action**} & \multicolumn{2}{c}{**Topic**} \\ & **F1** & **Bi-act. F1** & **F1** & **Bi-top. F1** \\ \hline Color\({}_{d=8}\) & 93.21 & 93.73 & 79.21 & 83.30 \\ Color\({}_{d=32}\) & 91.24 & 92.82 & 78.03 & 83.34 \\ Color\({}_{d=128}\) & 93.57 & 94.30 & 78.67 & 82.89 \\ \hline Color & 93.43 & 93.82 & 79.09 & 83.46 \\ _w/o_ BB & 93.66 & 93.93 & 62.45 & 64.27 \\ _w/o_ UFM & 92.42 & 92.84 & 77.21 & 80.57 \\ _w/o_ \(\mathcal{L}_{KL}\) & 92.95 & 93.01 & 77.34 & 80.97 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Dialogue planning performance of our Color with different variants.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model** & **Appr.** & **Info.** & **Proact.** & **Coh.** & **Succ.** \\ \hline MGCG\_G & 0.84 & 1.02 & 0.92 & 0.92 & 0.90 \\ DialoGPT & 1.17 & 1.35 & 1.06 & 1.17 & 1.19 \\ TCP-Dial & 1.20 & 1.24 & 1.26 & 1.20 & 1.02 \\ Ours & **1.33** & **1.40** & **1.42** & **1.35** & **1.38** \\ \hline kappa & 0.48 & 0.52 & 0.46 & 0.56 & 0.53 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Human evaluation results. The Fleiss’s kappa measures the agreement among the annotators.
and arranging their order. Several planning frameworks Hua et al. (2021); Hu et al. (2022); Li et al. (2022) have been studied to control complex language generation tasks. Our work is more related to planning for dialogue generation Kishinami et al. (2022); Yang et al. (2022); Cohen et al. (2022). Our proposed Color is a novel dialogue-level planning method that steers dialogue generation.
## 6 Conclusion
In this work, we explore the task of goal-directed proactive dialogue and focus on planning dialogue paths that direct conversations towards the designated target. We propose a novel approach called Color, which models coherent temporal dynamics for dialogue paths in the defined latent space, and considers the impact of user feedback on the dialogue planning process. We employ the planned dialogue paths as prompts to steer dialogue generation. Experiments show that our proposed method outperforms other methods significantly.
### Limitations
Though our proposed method exhibits superior performance, we also recognize its limitations and discuss potential solutions. Our proposed method for goal-directed dialogue generation suffers from error propagation since the three stages perform in a pipeline manner. After analyzing those generated utterances with low human evaluation scores, we find that the performance of dialogue generation is prone to drop when our Color model fails to plan an appropriate dialogue path. We intend to alleviate this issue by introducing some techniques in the cascaded generation, such as noisy channel models Shannon (1948); Liu et al. (2021). In addition, other issues, such as how to make existing goal-directed dialogue systems more engaging and personalized, are worth further exploring.
### Ethical Considerations
Goal-directed dialogue systems can be used for creating non-obtrusive recommendations for specific products and services, introducing interesting new topics and educating users about those topics, and so forth. Developing such systems requires careful consideration since it has a broad impact on applications. The intention of our work is not to force the system to reach the designated target nor force users to accept recommendations. Instead, we aim to build better assistive technologies to improve the proactiveness of dialogue systems. Furthermore, our experimental datasets are publicly available. They have been filtered for sensitive and private information during dataset construction.
We hope to raise awareness of the potential for misuse of such systems with toxic intentions. For example, such systems may be used to pose as humans and actively manipulate users' perceptions on specific issues or political inclinations. To mitigate these risks, we emphasize the importance of improving transparency through regulations. It is essential to inform users that they are conversing with a bot instead of a human, and regulations on target designation are crucial when deploying these systems in specific domains. It is necessary to ensure that setting a target does not violate factual accuracy, user privacy rules, or human laws.
## Acknowledgments
This work was supported by the Research Grants Council of Hong Kong (15207122, 15207920, 15207821, 15204018) and National Natural Science Foundation of China (62076212). It was also supported in part by PolyU internal grants (ZVQ0, ZVVX).
|
2307.15822 | Universally Optimal Periodic Configurations in the Plane | We develop lower bounds for the energy of configurations in $\mathbb{R}^d$
periodic with respect to a lattice. In certain cases, the construction of sharp
bounds can be formulated as a finite dimensional, multivariate polynomial
interpolation problem. We use this framework to show a scaling of the
equitriangular lattice $A_2$ is universally optimal among all configurations of
the form $\omega_4+ A_2$ where $\omega_4$ is a 4-point configuration in
$\mathbb{R}^2$. Likewise, we show a scaling and rotation of $A_2$ is
universally optimal among all configurations of the form $\omega_6+L$ where
$\omega_6$ is a 6-point configuration in $\mathbb{R}^2$ and $L=\mathbb{Z}
\times \sqrt{3} \mathbb{Z}$. | Doug Hardin, Nathaniel Tenpas | 2023-07-28T21:45:46Z | http://arxiv.org/abs/2307.15822v3 | # Bounds for Periodic Energy and the Optimality of Two Periodic Point Configurations in the Plane
###### Abstract
We develop lower bounds for the energy of configurations in \(\mathbb{R}^{d}\) periodic with respect to a lattice. In certain cases, the construction of sharp bounds can be formulated as a finite dimensional, multivariate polynomial interpolation problem. We use this framework to show a scaling of the equitriangular lattice \(A_{2}\) is universally optimal among all configurations of the form \(\omega_{4}+A_{2}\) where \(\omega_{4}\) is a \(4\)-point configuration in \(\mathbb{R}^{2}\). Likewise, we show a scaling and rotation of \(A_{2}\) is universally optimal among all configurations of the form \(\omega_{6}+L\) where \(\omega_{6}\) is a \(6\)-point configuration in \(\mathbb{R}^{2}\) and \(L=\mathbb{Z}\times\sqrt{3}\mathbb{Z}\).
###### Contents
* 1 Introduction and Overview of Results
* 2 Lattices and Linear Programming Bounds for Periodic Energy
* 2.1 Preliminaries: Lattices and Fourier Series
* 2.2 Lattice symmetry, symmetrized basis functions, and polynomial structure
* 2.3 Linear Programming Bounds for Periodic Energy
* 2.4 Moments for certain lattice configurations
* 2.5 Lattice theta functions
* 2.6 Polynomial interpolation and linear programming bounds for lattice configurations
* 3 The Linear Programming Framework for the families \(\omega_{m^{2}},\omega_{2m^{2}},\omega_{3m^{2}},\) and \(\omega_{6m^{2}}\)
* 3.1 The Polynomials \(P_{v}^{L}\) and \(P_{v}^{A_{2}}\)
* 3.2 Interpolation Nodes
* 3.3 Interpolation Problem for \(\omega_{m^{2}}\)
## 1 Introduction and Overview of Results

Let \(\Lambda\) be a lattice in \(\mathbb{R}^{d}\) and let \(F:\mathbb{R}^{d}\to(-\infty,\infty]\) be a lower semi-continuous, \(\Lambda\)-periodic function. We
refer to \(\Lambda\) as the _periodization lattice_ and \(F\) as the _periodization kernel_. For a finite multiset \(\omega_{n}=\{x_{1},...,x_{n}\}\subseteq\mathbb{R}^{d}\) of cardinality \(n\), we consider the \(F\)_-energy_ of \(\omega_{n}\) defined by
\[E_{F}(\omega_{n}):=\sum_{i=1}^{n}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{n}F(x_{i}-x_{j}).\]
Without loss of generality, we may assume that \(\omega_{n}\) lies in some specified fundamental domain \(\Omega_{\Lambda}:=\mathbb{R}^{d}/\Lambda\) since replacing a point \(x\in\omega_{n}\) with any point in \(x+\Lambda\) does not change \(E_{F}(\omega_{n})\).
The _minimal discrete \(n\)-point \(F\)-energy_ is defined as
\[\mathcal{E}_{F}(n):=\inf\{E_{F}(\omega_{n})\mid\omega_{n}\subseteq\mathbb{R}^ {d},\,|\omega_{n}|=n\}, \tag{1}\]
where \(|\omega|\) denotes the cardinality of a multiset \(\omega\). An \(n\)-point configuration \(\omega_{n}\subset\mathbb{R}^{d}\) satisfying \(E_{F}(\omega_{n})=\mathcal{E}_{F}(n)\) is called \(F\)_-optimal_. Note that the lower-semicontinuity of \(F\) and the compactness of \(\Omega_{\Lambda}\) in the torus topology implies the existence of at least one \(F\)-optimal configuration.
More specifically, we consider periodic potentials generated by a function \(f:[0,\infty)\to[0,\infty]\) with \(d\)-rapid decay (i.e. \(f(r^{2})\in\mathcal{O}(r^{-s}),r\to\infty\), for some \(s>d\)) using
\[F_{f,\Lambda}(x):=\sum_{v\in\Lambda}f(|x+v|^{2}). \tag{2}\]
The potential \(F_{f,\Lambda}\) has the following physical interpretation: if \(f(r^{2})\) represents the energy required to place a pair of unit charge particles a distance \(r\) from each other, then \(F_{f,\Lambda}(x)\) is the energy required to place such a particle at the point \(x\) in the presence of existing particles at points of \(\Lambda\). We write the pair interaction in terms of the distance squared since it will be more compatible with the notion of _universal optimality_ described below.
The periodization of Gaussian potentials \(f_{a}(r^{2}):=\exp(-ar^{2})\) for \(a>0\) leads to a type of lattice theta function (cf. [1, Chapter 10]) and plays a central role in our analysis. For convenience, we write \(F_{a,\Lambda}:=F_{f_{a},\Lambda}\) or just \(F_{a}\) when the choice of lattice is unambiguous.
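To make (2) concrete, the following minimal numerical sketch (not from the paper; the generator matrix and cutoff are illustrative choices) approximates \(F_{a,\Lambda}\) by truncating the lattice sum and checks its \(\Lambda\)-periodicity for \(\Lambda=A_{2}\).

```python
# Truncated lattice sum for F_{a,Lambda}(x) = sum_v exp(-a|x+v|^2).
import numpy as np

def periodized_gaussian(x, a, V, cutoff=10):
    d = V.shape[0]
    rng = np.arange(-cutoff, cutoff + 1)
    ks = np.stack(np.meshgrid(*([rng] * d)), axis=-1).reshape(-1, d)
    return sum(np.exp(-a * np.sum((x + V @ k) ** 2)) for k in ks)

V = np.array([[1.0, 0.5], [0.0, np.sqrt(3) / 2]])  # generator of A_2
x = np.array([0.3, 0.1])
# Shifting x by a lattice vector leaves the (truncated) value unchanged.
print(np.isclose(periodized_gaussian(x, 1.0, V),
                 periodized_gaussian(x + V @ np.array([2, -1]), 1.0, V)))
```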
**Definition 1.1**.: Let \(\Lambda\) be a lattice in \(\mathbb{R}^{d}\).
* We say that an \(n\)-point configuration \(\omega_{n}\subset\mathbb{R}^{d}\) is \(\Lambda\)_-universally optimal_ if it is \(F_{a,\Lambda}\)-optimal for all \(a>0\) (cf., [10]).
* We say \(\Lambda\) is _universally optimal_ if for any sublattice \(\Phi\subseteq\Lambda\) of index \(n\), the \(n\)-point configuration \(\Lambda\cap\Omega_{\Phi}\) is \(\Phi\)-universally optimal.
If \(\omega_{n}\) is \(\Lambda\)-universally optimal, then it follows from a theorem of Bernstein [1] (see [10],[1]) that \(\omega_{n}\) is \(F_{f,\Lambda}\)-optimal for any \(f\) with \(d\)-rapid decay that is completely monotone on \((0,\infty)\).1
As we discuss in the Appendix, it follows from classical results of [10] that a lattice is universally optimal in the sense of Definition 1.1 if and only if it is universally optimal in the more general sense of [11] or [12], which we describe at the end of the section.
We further show that to establish the universal optimality of \(\Lambda\), it is sufficient to prove that there is some sublattice \(\Phi\subseteq\Lambda\) such that \(\Lambda\cap\Omega_{m\Phi}\) is \(m\Phi\)-universally optimal for infinitely many \(m\in\mathbb{N}\). Observing that the notion of lattice universal optimality in Definition 1.1 is invariant under linear similitudes, we find it convenient to consider the \(\Phi\)-universal optimality of configurations of the form
\[\omega(\Phi,\Lambda,m)=\left(\frac{1}{m}\Lambda\right)\cap\Omega_{\Phi} \tag{3}\]
for a sublattice \(\Phi\) of a lattice \(\Lambda\).
It was shown in [11] that \(\mathbb{Z}\) is universally optimal in \(\mathbb{R}\) (see [13] for a simple, alternate proof). Recently it was shown in [12] that the \(E_{8}\) and Leech lattices are universally optimal in dimensions \(8\) and \(24\), respectively. It was further conjectured that the hexagonal \(A_{2}\) lattice
\[A_{2}:=\begin{bmatrix}1&1/2\\ 0&\sqrt{3}/2\end{bmatrix}\mathbb{Z}^{2},\]
is universally optimal in \(\mathbb{R}^{2}\). Though \(A_{2}\) has long been known to be optimal for circle packing (see [12]) and was proved to be universally optimal among lattices in [14], surprisingly, its conjectured universal optimality among all infinite configurations (of fixed density) remains open.
The proofs of universal optimality for \(\mathbb{Z}\), \(E_{8}\), and the Leech lattice given in [11] and [12] are based on so-called "linear programming bounds" originally developed in the context of coding theory for point configurations on the \(d\)-dimensional sphere (e.g., see [1], [15], [16]) and extended to bounds for the energy and sphere-packing density of point configurations in \(\mathbb{R}^{d}\) in (e.g., see [1], [11], and [12]). In fact, these linear programming bounds for point configurations in \(\mathbb{R}^{d}\) are first proved for general periodic configurations, which we adapt for a fixed periodization lattice \(\Lambda\) in Proposition 7.
In Section 2, we formulate linear programming bounds for lattice periodic configurations in \(\mathbb{R}^{d}\) and find sufficient conditions on the periodization lattice to permit a certain polynomial structure (see Proposition 6). Furthermore, we develop conditions for the \(F\)-optimality of configurations of the form (3) in terms of polynomial interpolation.
We apply this framework to the following four families of configurations (arising from scalings of \(A_{2}\)) using the notation of (3).
* \(\Phi=A_{2}\) and \[\omega_{m^{2}}^{*}:=\omega(\Phi,A_{2},m),\] conjectured to be \(A_{2}\)-universally optimal.
* \(\Phi=L\) and \[\omega_{2m^{2}}^{*}:=\omega(\Phi,A_{2},m),\] conjectured to be \(L\)-universally optimal.
3. \(\Phi=\sqrt{3}R_{\pi/6}A_{2}\) (where \(R_{\pi/6}\) denotes the \(\pi/6\) rotation matrix) and \[\omega_{3m^{2}}^{*}:=\frac{1}{\sqrt{3}}R_{-\pi/6}\omega(\Phi,A_{2},m),\] conjectured to be \(A_{2}\)-universally optimal (note \(\frac{1}{\sqrt{3}}R_{-\pi/6}\Phi=A_{2}\)).
4. \(\Phi=\sqrt{3}R_{\pi/6}L\) and \[\omega_{6m^{2}}^{*}:=\frac{1}{\sqrt{3}}R_{-\pi/6}\omega(\Phi,A_{2},m),\] conjectured to be \(L\)-universally optimal (note \(\frac{1}{\sqrt{3}}R_{-\pi/6}\Phi=L\)).
The universal optimality of the configurations \(\omega_{\kappa m^{2}}^{*}\) for \(\kappa\in\{1,2,3,6\}\) is an immediate consequence of the conjectured universal optimality of \(A_{2}\) should this be true. Conversely, as discussed above, the universal optimality of \(A_{2}\) would follow if analogous results are established for any of the four families \(\omega_{m^{2}}^{*},\omega_{2m^{2}}^{*},\omega_{3m^{2}}^{*},\omega_{6m^{2}}^{*}\) for infinitely many \(m\in\mathbb{N}\).
The proofs of universal optimality of two of the base cases, \(\omega_{2}^{*}\) and \(\omega_{3}^{*}\), follow immediately from results on theta functions, some classical and some from [1] (cf. [15] or [11] for proofs in the context of periodic energy). Our main results are proofs, utilizing the linear programming bounds, of the universal optimality of the next two base cases, \(\omega_{4}^{*}\) and \(\omega_{6}^{*}\):
**Theorem 1**.: _The configurations \(\omega_{4}^{*}\) and \(\omega_{6}^{*}\) are \(A_{2}\) and \(L\)-universally optimal, respectively._
We can rephrase Theorem 1 in terms of the energies of infinite configurations, for which we follow the notation of [12]. Let \(B(x,r)\) be the ball of radius \(r>0\) centered at \(x\). If \(C\) is an infinite multiset in \(\mathbb{R}^{d}\) such that every ball intersects finitely many points, we call it an _infinite configuration_. Define \(C_{r}:=C\cap B(0,r)\) and the _density of \(C\)_ as
\[\lim_{r\to\infty}\frac{|C_{r}|}{\operatorname{Vol}(B(0,r))},\]
assuming the limit exists and is finite.

Figure 1: Points of \(A_{2}/2\) are shown, with the 4 larger points yielding the \(A_{2}\)-universally optimal configuration which we denote \(\omega_{4}^{*}\).

Similarly to above, for a lower semi-continuous map \(f:[0,\infty)\to[0,\infty]\) of \(d\)-rapid decay, we define the \(f\)-energy of an \(n\)-point configuration \(\omega_{n}=\{x_{1},\ldots,x_{n}\}\subseteq\mathbb{R}^{d}\) as
\[E_{f}(\omega_{n}):=\sum_{\begin{subarray}{c}1\leq i,j\leq n\\ i\neq j\end{subarray}}f(|x_{i}-x_{j}|^{2}).\]
and the infimum over all \(n\)-point configurations contained in some set \(X\) as \(\mathcal{E}_{f}(n,X)\), where we extend the definition to \(n\not\in\mathbb{N}\) via linear interpolation. Then for a configuration \(C\) of density \(\rho\), the _lower \(f\)-energy of \(C\)_ is
\[E_{f}^{l}(C):=\liminf_{r\to\infty}\frac{E_{f}(C_{r})}{|C_{r}|}.\]
If the limit exists, we'll write it as \(E_{f}(C)\) and call it the \(f\)_-energy of \(C\)_. A configuration \(C^{\prime}\) of density \(\rho\) is _universally optimal_ if its \(f\)-energy exists and satisfies
\[E_{f}(C^{\prime})\leq E_{f}^{l}(C)\]
for every other infinite configuration \(C\) of density \(\rho\) and every Gaussian \(f(x^{2})=e^{-ax^{2}}\), \(a>0\). Similarly, a configuration \(C^{\prime}\) is universally optimal among \(S\) if we further restrict \(C\) to elements of \(S\). We'll also say an infinite configuration \(C\) is an \(N\)_-point \(\Lambda\)-periodic configuration_ if
\[C=\cup_{i=1}^{N}x_{i}+\Lambda\]
for some set of _representatives_ \(\omega_{N}^{C}=\{x_{1},\ldots,x_{N}\}\). Then we have the following connection between the \(f\)-energy of \(C\) and the \(F_{f,\Lambda}\)-energy of \(\omega_{N}^{C}\) (cf. [1, Lemma 9.1] or [1, Chapter 10]).
**Proposition 2**.: _Let \(C\) be an \(N\)-point \(\Lambda\)-periodic configuration with \(\omega_{N}^{C}\) a set of representatives, and let \(f(x^{2})=e^{-ax^{2}}\) for some \(a>0\). Recall that \(F_{a,\Lambda}\) is the \(\Lambda\)-periodization of \(f\). Then \(E_{f}(C)\) exists and_
\[E_{f}(C)=\frac{1}{N}\left(E_{F_{a,\Lambda}}(\omega_{N}^{C})+N\sum_{0\neq v\in \Lambda}f(|v|^{2})\right).\]
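Proposition 2 is easy to check numerically. The sketch below (illustrative; the representatives, truncation radii, and \(a\) are ad hoc choices) compares the energy per point of a large finite chunk of a \(2\)-point \(A_{2}\)-periodic configuration with the right-hand side above.

```python
import numpy as np

a = 2.0
V = np.array([[1.0, 0.5], [0.0, np.sqrt(3) / 2]])       # Lambda = A_2
reps = np.array([[0.0, 0.0], [0.5, 0.5]])               # omega_N^C with N = 2
rng = np.arange(-12, 13)
K = np.stack(np.meshgrid(rng, rng), axis=-1).reshape(-1, 2) @ V.T  # lattice pts

f = lambda r2: np.exp(-a * r2)
F = lambda x: np.sum(f(np.sum((x + K) ** 2, axis=1)))   # truncated F_{a,Lambda}

# Right-hand side of Proposition 2.
E_per = F(reps[0] - reps[1]) + F(reps[1] - reps[0])
lattice_sum = np.sum(f(np.sum(K ** 2, axis=1))) - 1.0   # drop the v = 0 term
rhs = (E_per + 2 * lattice_sum) / 2

# Left-hand side: energy per point of a finite chunk of C.
pts = np.concatenate([K + reps[0], K + reps[1]])
r2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
lhs = np.sum(f(r2[r2 > 1e-12])) / len(pts)
print(lhs, rhs)  # lhs approaches rhs as the chunk grows (boundary effects shrink)
```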
Thus, Theorem 1 can be restated as follows: \(A_{2}/2\) is universally optimal among all \(4\)-point \(A_{2}\)-periodic configurations, and a rotation and scaling of \(A_{2}\) is universally optimal among all \(6\)-point \(L\)-periodic configurations.
## 2 Lattices and Linear Programming Bounds for Periodic Energy
### Preliminaries: Lattices and Fourier Series
We first gather some basic definitions and properties of lattices in \(\mathbb{R}^{d}\).
**Definition 2.1**.: Let \(\Lambda\subset\mathbb{R}^{d}\).
* \(\Lambda\) is a _lattice in_\(\mathbb{R}^{d}\) if \(\Lambda:=V\mathbb{Z}^{d}=\left\{\sum_{i=1}^{d}a_{i}v_{i}\mid a_{1},a_{2},\ldots,a _{d}\in\mathbb{Z}\right\}\) for some nonsingular \(d\times d\) matrix \(V\) with columns \(v_{1},\ldots v_{d}\). We refer to \(V\) as a _generator_ for \(\Lambda\).
* Once a choice of generator \(V\) is specified, we let \(\Omega_{\Lambda}:=V[0,1)^{d}\) denote the parallelepiped _fundamental domain_ for \(\Lambda\). The _co-volume of \(\Lambda\)_ defined by \(|\Lambda|:=|\det V|\) is the volume of \(\Omega_{\Lambda}\) which is, in fact, the same for any Lebesgue measurable fundamental domain2 for \(\mathbb{R}^{d}/\Lambda\) where \(\Lambda\) acts on \(\mathbb{R}^{d}\) by translation. Footnote 2: A _fundamental domain_ for a group \(G\) acting on a set \(X\) is a subset of \(X\) consisting of exactly one point from each \(G\)-orbit. Note that \(X/G\) will be used to denote both a fundamental domain and the set of \(G\) orbits in \(X\).
* The _dual lattice_\(\Lambda^{*}\) of a lattice \(\Lambda\) with generator \(V\) is the lattice generated by \(V^{-T}=(V^{T})^{-1}\) or, equivalently, \(\Lambda^{*}:=\{v\in\mathbb{R}^{d}\mid w\cdot v\in\mathbb{Z}\text{ for all }w\in\Lambda\}\).
* We denote by \(S_{\Lambda}\) the _symmetry group of \(\Lambda\)_ consisting of isometries on \(\mathbb{R}^{d}\) fixing \(\Lambda\) and denote by \(G_{\Lambda}\) the subgroup of \(S_{\Lambda}\) fixing the origin (and thus can be considered as elements of the orthogonal group \(O(d)\)). Note that \(G_{\Lambda}=S_{\Lambda}/\Lambda\) where we identify \(v\in\Lambda\) with the translation \(\cdot+v\). Further, note that \(G_{\Lambda^{*}}=G_{\Lambda}\) since elements of \(O(d)\) preserve inner products.
Let \(\Lambda\) be a lattice in \(\mathbb{R}^{d}\) with generator \(V\) and fundamental domain \(\Omega_{\Lambda}\). We let \(L^{2}(\Omega_{\Lambda})\) denote the Hilbert space of complex-valued \(\Lambda\)-periodic functions on \(\mathbb{R}^{d}\) with inner product \(\langle f,g\rangle=\int_{\Omega_{\Lambda}}f(x)\overline{g(x)}\,dx\). Then \(\{e^{2\pi iv\cdot x}\mid v\in\Lambda^{*}\}\) forms an orthogonal basis of \(L^{2}(\Omega_{\Lambda})\) yielding the Fourier expansion of a function \(g\in L^{2}(\Omega_{\Lambda})\):
\[g(x)=\sum_{v\in\Lambda^{*}}\hat{g}_{v}e^{2\pi iv\cdot x} \tag{4}\]
with Fourier coefficients \(\hat{g}_{v}:=\frac{1}{|\Lambda|}\int_{\Omega_{\Lambda}}g(x)e^{-2\pi iv\cdot x}\,dx\) for \(v\in\Lambda^{*}\) where equality (and the implied unconditional limit on the right hand side) holds in \(L^{2}(\Omega_{\Lambda})\). Of course, elements of \(L^{2}(\Omega_{\Lambda})\) are actually equivalence classes of functions. If \(g\in L^{2}(\Omega_{\Lambda})\) contains an element of \(C(\mathbb{R}^{d})\), then we identify \(g\) with its continuous representative and write \(g\in L^{2}(\Omega_{\Lambda})\cap C(\mathbb{R}^{d})\). As will be the case in our applications, if \(g\in L^{2}(\Omega_{\Lambda})\) is such that \(\sum_{v\in\Lambda^{*}}|\hat{g}_{v}|<\infty\), then the right-hand side of (4) converges uniformly and unconditionally to \(g\) and so \(g\in L^{2}(\Omega_{\Lambda})\cap C(\mathbb{R}^{d})\) and (4) holds pointwise for every \(x\in\mathbb{R}^{d}\).
### Lattice symmetry, symmetrized basis functions, and polynomial structure
Let \(\Lambda\) be a lattice in \(\mathbb{R}^{d}\), \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) have rapid \(d\)-decay, and \(\sigma\in G_{\Lambda}\). Since \(\sigma^{-1}\in G_{\Lambda}\) and \(\sigma\) is an isometry, we have
\[F_{f,\Lambda}(\sigma x)=\sum_{v\in\sigma^{-1}\Lambda}f(|\sigma x+\sigma v|^{2} )=\sum_{v\in\Lambda}f(|x+v|^{2})=F_{f,\Lambda}(x).\]
Then \(F_{f,\Lambda}\) is also \(\Lambda\)-periodic and we obtain:
**Proposition 3**.: _Suppose \(f:[0,\infty)\to[0,\infty]\) has \(d\)-rapid decay and \(\Lambda\) is a lattice in \(\mathbb{R}^{d}\). Then for all \(\sigma\in G_{\Lambda}\), \(v\in\Lambda\) and \(x\in\mathbb{R}^{d}\), we have \(F_{f,\Lambda}(\sigma x+v)=F_{f,\Lambda}(x)\) showing that \(F_{f,\Lambda}\) is \(S_{\Lambda}\)-invariant._
We next recall that \(g\in L^{2}(\Omega_{\Lambda})\) is \(\sigma\)-invariant for \(\sigma\in G_{\Lambda}\) if and only if the Fourier coefficients of \(g\) are \(\sigma\)-invariant, as described in the next proposition.
**Proposition 4**.: _Suppose \(g\in L^{2}(\Omega_{\Lambda})\) and \(\sigma\in G_{\Lambda}\). Then \(g(\sigma x)=g(x)\) for a.e. \(x\in\mathbb{R}^{d}\) if and only if \(\hat{g}_{\sigma v}=\hat{g}_{v}\) for all \(v\in\Lambda^{*}\)._
Proof.: Since \(\sigma^{-1}\in G_{\Lambda^{*}}=G_{\Lambda}\), we have
\[g(\sigma x)=\sum_{v\in\Lambda^{*}}\hat{g}_{v}e^{2\pi iv\cdot(\sigma x)}=\sum_{ v\in\sigma^{-1}\Lambda^{*}}\hat{g}_{v}e^{2\pi i(\sigma v)\cdot(\sigma x)}= \sum_{v\in\Lambda^{*}}\hat{g}_{\sigma v}e^{2\pi iv\cdot x}.\]
The proposition then follows from uniqueness properties of the Fourier expansion.
Let \(\Gamma\) be a subgroup of \(G_{\Lambda}\). For \(v\in\Lambda^{*}\), let \(C_{v}^{\Gamma}\) be the \(\Lambda\)-periodic function defined by
\[C_{v}^{\Gamma}(x):=\frac{1}{|\Gamma|}\sum_{\sigma\in\Gamma}e^{2\pi i(\sigma v) \cdot x}=\frac{1}{|\Gamma(v)|}\sum_{v^{\prime}\in\Gamma(v)}e^{2\pi iv^{\prime }\cdot x},\quad x\in\mathbb{R}^{d}. \tag{5}\]
where \(\Gamma(v)\) denotes the orbit \(\Gamma(v)=\{\sigma v\mid\sigma\in\Gamma\}.\) We write \(C_{v}\) for \(C_{v}^{\Gamma}\) when \(\Gamma\) is unambiguous. If \(g\in L^{2}(\Omega_{\Lambda})\) and \(g\) is \(G_{\Lambda}\)-invariant (i.e., if \(g(\sigma\cdot)=g\) for all \(\sigma\in G_{\Lambda}\)), then we may rewrite (4) as
\[g(x)=\sum_{v\in\Lambda^{*}/\Gamma}|\Gamma(v)|\,\hat{g}_{v}\,C_{v}^{\Gamma}(x). \tag{6}\]
We next consider the case of a _rectangular lattice_ by which we mean a lattice of the form \(\Lambda_{R}=(a_{1}\mathbb{Z})\times\cdots(a_{d}\mathbb{Z})\) with \(a_{1},\ldots,a_{d}>0\). The symmetry group of a rectangular lattice in \(\mathbb{R}^{d}\) contains the subgroup \(H\) of order \(2^{d}\) generated by the coordinate reflections
\[R_{j}(x_{1},\ldots,x_{j},\ldots,x_{d})=(x_{1},\ldots,-x_{j},\ldots,x_{d}), \qquad j=1,2,\ldots,d. \tag{7}\]
Let \(v\in\Lambda^{*}=(1/a_{1})\mathbb{Z}\times\cdots(1/a_{d})\mathbb{Z}\), and note that \(v=(k_{1}/a_{1},k_{2}/a_{2},\ldots,k_{d}/a_{d})\) for some \(k_{1},\ldots,k_{d}\in\mathbb{Z}\). A straightforward induction on \(d\) gives
\[C_{v}^{H}(x)=\prod_{i=1}^{d}\cos(2\pi k_{i}x_{i}/a_{i}))=\prod_{i=1}^{d}T_{|k _{i}|}(\cos(2\pi x_{i}/a_{i})). \tag{8}\]
Recall the \(\ell\)th Chebyshev polynomial of the first kind defined by \(\cos(\ell t)=T_{\ell}(\cos t)\) for \(\ell=0,1,2,\ldots\). We then have the following proposition.
**Proposition 5**.: _Let \(\Lambda_{R}=(a_{1}\mathbb{Z})\times\cdots(a_{d}\mathbb{Z})\) with \(a_{1},\ldots,a_{d}>0\). If \(v\in\Lambda_{R}^{*}\), then \(v=(k_{1}/a_{1},k_{2}/a_{2},\ldots,k_{d}/a_{d})\) for some \(k_{1},\ldots,k_{d}\in\mathbb{Z}\) and_
\[C_{v}^{H}(x)=\prod_{i=1}^{d}T_{|k_{i}|}(t_{i}), \tag{9}\]
_where \(t_{i}=\cos(2\pi x_{i}/a_{i})\in[-1,1]\) for \(i=1,2,\ldots,d\)._
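Proposition 5 is easy to test numerically; the sketch below (an illustration with arbitrary parameters) compares the \(H\)-symmetrized exponential sum against the product of Chebyshev polynomials for a single dual vector.

```python
import itertools
import numpy as np

a1, a2 = 1.0, np.sqrt(3)            # rectangular lattice Z x sqrt(3)Z
k1, k2 = 3, 2                       # v = (k1/a1, k2/a2) in the dual lattice
x = np.array([0.27, 0.61])

# Left side: average of exp(2*pi*i (sigma v).x) over the 2^d = 4 sign flips.
v = np.array([k1 / a1, k2 / a2])
lhs = np.mean([np.exp(2j * np.pi * np.dot(np.array(s) * v, x))
               for s in itertools.product([1, -1], repeat=2)])

# Right side: T_{|k1|}(cos(2*pi*x1/a1)) * T_{|k2|}(cos(2*pi*x2/a2)).
t = np.cos(2 * np.pi * x / np.array([a1, a2]))
T = np.polynomial.chebyshev.Chebyshev.basis
rhs = T(k1)(t[0]) * T(k2)(t[1])
print(np.isclose(lhs.real, rhs), abs(lhs.imag) < 1e-12)
```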
We next deduce a polynomial structure for \(C_{v}^{\Lambda}\) for lattices \(\Lambda\) that are invariant under the coordinate reflections \(R_{j}\); i.e., such that \(H\subseteq G_{\Lambda}\).
**Proposition 6**.: _Let \(\Lambda\subseteq\mathbb{R}^{d}\) be a lattice such that \(H\subseteq G_{\Lambda}\). Then \(\Lambda\) contains a rectangular lattice \(\Lambda_{R}=(a_{1}\mathbb{Z})\times\cdots\times(a_{d}\mathbb{Z})\) and the function \(C_{v}^{G_{\Lambda}}(x)\) is a polynomial in the variables \(t_{j}=\cos(2\pi x_{j}/a_{j})\) for \(j=1,2,\ldots,d\) and any \(v\in\Lambda^{*}\)._
Proof.: We first show that \(\Lambda\) must contain some rectangular sublattice (i.e., of the form \(\Lambda_{R}=(a_{1}\mathbb{Z})\times\cdots\times(a_{d}\mathbb{Z})\)). Since \(\Lambda\) is full-rank, for each \(j=1,2,\ldots,d\), there is some \(w^{j}\in\Lambda\) such that \(a_{j}:=2w^{j}\cdot e^{j}\neq 0\) where \(e^{j}\) denotes the \(j\)-th coordinate unit vector. Then \(a_{j}e^{j}=w^{j}-R_{j}w^{j}\in\Lambda\), and so the rectangular lattice \((a_{1}\mathbb{Z})\times\cdots\times(a_{d}\mathbb{Z})\) is a sublattice of \(\Lambda\).
Let \(v\in\Lambda^{*}\). Since \(\Lambda_{R}\subseteq\Lambda\), \(\Lambda^{*}\subseteq\Lambda_{R}^{*}\), so \(v\in\Lambda_{R}^{*}\). Let \(C=\{\sigma_{1},\ldots,\sigma_{[G_{\Lambda}:H]}\}\) be a set of right coset representatives of \(H\) in \(G_{\Lambda}\), so that \(|C|\,|H|=|G_{\Lambda}|\). Then we have
\[C_{v}^{G_{\Lambda}} =\frac{1}{|G_{\Lambda}|}\sum_{g\in G_{\Lambda}}e^{2\pi igv\cdot x} \tag{10}\] \[=\frac{1}{|C|}\sum_{\sigma\in C}\frac{1}{|H|}\sum_{h\in H}e^{2 \pi ih\sigma v\cdot x}\] \[=\frac{1}{|C|}\sum_{\sigma\in C}C_{\sigma v}^{H}.\]
Proposition 5 implies \(C_{\sigma v}^{H}\) is polynomial in the variables \(t_{j}=\cos(2\pi x_{j}/a_{j})\) and thus so is \(C_{v}^{G_{\Lambda}}\).
Proposition 6 motivates the change of variables
\[t_{i}:=\cos(2\pi x_{i}/a_{i}),\qquad i=1,...,d. \tag{11}\]
We then let \(T_{a_{1},\ldots,a_{d}}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be defined by
\[T_{a_{1},\ldots,a_{d}}(x_{1},...,x_{d}):=(t_{1},\ldots,t_{d}). \tag{12}\]
For any function \(h\) defined on a subset \(X\subseteq[0,a_{1}/2]\times\cdots\times[0,a_{d}/2]\), \(\tilde{h}\) will refer to the function defined on \(\tilde{X}\subseteq[-1,1]^{d}\) by the formula

\[\tilde{h}(t)=h\left(\frac{a_{1}\arccos t_{1}}{2\pi},\ldots,\frac{a_{d}\arccos t_{d}}{2\pi}\right),\]
which ensures \(\tilde{h}(t)=h(x)\).
It follows by Proposition 6 that the maps
\[P_{v}^{\Phi}:=\tilde{C}_{v}^{G_{\Phi}},\qquad v\in\Phi^{*}, \tag{13}\]
are polynomials in the variables \(t_{1},\ldots,t_{d}\). We shall also write \(P_{v}\) when the choice of \(\Phi\) is clear. Similarly, the \(T_{a_{1},\ldots,a_{d}}\) image of any subset \(D\subseteq[0,1/2]\times[0,\sqrt{3}/2]\) will be denoted \(\tilde{D}\). In any case where we do so, the choice of rectangular lattice (and hence the choice of the \(a_{i}\)'s) will be clear.
We also remark that the collection of polynomials \(\{P_{v}^{G_{\Phi}}\mid v\in\Phi^{*}/G_{\Phi}\}\) is orthogonal with respect to the measure \((1-t_{1}^{2})^{-1/2}\cdots(1-t_{d}^{2})^{-1/2}dt_{1}\,\cdots\,dt_{d}\) on \([-1,1]^{d}\).
### Linear Programming Bounds for Periodic Energy
We say that \(g\in L^{2}(\Omega_{\Lambda})\) is _positive semi-definite (PSD)_ if the Fourier coefficients \(\hat{g}_{v}\geq 0\) for all \(v\in\Lambda^{*}\setminus\{0\}\) and \(\sum_{v\in\Lambda^{*}}\hat{g}_{v}<\infty\). (This is a slightly nonstandard choice of notation since we are not requiring \(\hat{g}_{0}\geq 0\).) If \(g\in L^{2}(\Omega_{\Lambda})\) is PSD, then its Fourier series converges uniformly and so, without loss of generality, we may assume that a PSD \(g\in L^{2}(\Omega_{\Lambda})\) is continuous and equal to its Fourier series at every \(x\in\mathbb{R}^{d}\).
If \(g\in L^{2}(\Omega_{\Lambda})\) is PSD and \(\omega_{n}\) is an arbitrary \(n\)-point configuration in \(\mathbb{R}^{d}\), then the following fundamental lower bound holds:
\[\begin{split} E_{g}(\omega_{n})&=\sum_{x\neq y\in \omega_{n}}g(x-y)=-ng(0)+\sum_{x,y\in\omega_{n}}g(x-y)\\ &=-ng(0)+\sum_{v\in\Lambda^{*}}\hat{g}_{v}\sum_{x,y\in\omega_{n}} e^{2\pi iv\cdot x}e^{-2\pi iv\cdot y}\\ &=-ng(0)+\sum_{v\in\Lambda^{*}}\hat{g}_{v}\left|\sum_{x\in\omega_{ n}}e^{2\pi iv\cdot x}\right|^{2}\\ &\geq n^{2}\hat{g}_{0}-ng(0).\end{split} \tag{14}\]
For \(v\in\mathbb{R}^{d}\) we refer to
\[M_{v}(\omega_{n}):=\sum_{x\in\omega_{n}}e^{2\pi iv\cdot x},\]
as the _\(v\)-moment of \(\omega_{n}\)_. Note that equality holds in (14) if and only if
\[\hat{g}_{v}M_{v}(\omega_{n})=0,\qquad v\in\Lambda^{*}\setminus\{0\}. \tag{15}\]
The next proposition follows immediately from (14) and the condition (15) for equality in (14). This proposition is essentially contained in the proof of the linear programming bounds of [13, Proposition 9.3] and is closely related to Delsarte-Yudin energy bounds for spherical codes (cf. [1, Chapters 5.5 and 10.4]).
**Proposition 7**.: _Let \(F:\mathbb{R}^{d}\to[0,\infty]\) be \(\Lambda\)-periodic, and suppose \(g\in L^{2}(\Omega_{\Lambda})\) is PSD such that \(g\leq F\). Then for any \(n\)-point configuration \(\omega_{n}\), we have_
\[E_{F}(\omega_{n})\geq E_{g}(\omega_{n})\geq n^{2}\hat{g}_{0}-ng(0) \tag{16}\]
_with equality holding throughout (16) if and only if the following two conditions hold:_
* \(g(x-y)=F(x-y)\) _for all_ \(x\neq y\in\omega_{n}\)_,_
* \(\hat{g}_{v}M_{v}(\omega_{n})=0\)_, for all_ \(v\in\Lambda^{*}\setminus\{0\}\)_._
_If (a) and (b) hold, then \(E_{F}(\omega_{n})=\mathcal{E}_{F}(n)\)._
_Remark_.: If \(F:\mathbb{R}^{d}\to[0,\infty]\) is \(\Lambda\)-periodic and \(G_{\Lambda}\)-invariant and \(g\in L^{2}(\Omega_{\Lambda})\) is PSD such that \(g\leq F\), then the \(S_{\Lambda}\)-invariant function
\[g^{\text{sym}}(x):=\frac{1}{|G_{\Lambda}|}\sum_{\sigma\in G_{\Lambda}}g(\sigma x),\qquad x\in\mathbb{R}^{d}\]
is also PSD and satisfies \(g^{\text{sym}}\leq F\). Thus, we may restrict our search for functions \(g\) to use in Proposition 7 to those of the form given in (6) in which case we only need to verify the condition that \(g\leq F\) on the fundamental domain of the action of \(S_{\Lambda}\) on \(\mathbb{R}^{d}\). In particular, when \(\Lambda=A_{2}\), we have the representative set
\[\Delta_{A_{2}}:=\{(x_{1},x_{2})\mid 0\leq x_{1}\leq\frac{1}{2},0\leq x_{2}\leq x _{1}/\sqrt{3}\}\]
and when \(\Lambda=L\), we'll consider the representative set \([0,1/2]\times[0,\sqrt{3}/2]\).
### Moments for certain lattice configurations
We consider moments of configurations obtained by restricting scalings of a lattice \(\Lambda\) to the fundamental domain of a sublattice \(\Phi\).
**Theorem 8**.: _Suppose \(\Phi\) is a sublattice of a lattice \(\Lambda\) in \(\mathbb{R}^{d}\). Let \(\omega_{\Phi}:=\Lambda\cap\Omega_{\Phi}\) and \(\kappa:=|\omega_{\Phi}|\) denote the index of \(\Phi\) in \(\Lambda\). For \(m\in\mathbb{N}\), let_
\[\mu_{\kappa m^{d}}:=(\frac{1}{m}\Lambda)\cap\Omega_{\Phi}. \tag{17}\]
_Then for \(v\in\Phi^{*}\), we have_
\[M_{v}(\mu_{\kappa m^{d}})=\begin{cases}\kappa m^{d},&v\in m\Lambda^{*},\\ 0,&\text{otherwise.}\end{cases} \tag{18}\]
_Furthermore, if \(G_{\Phi}\subset G_{\Lambda}\), then for any \(v\in\Phi^{*}\) and \(\sigma\in G_{\Phi}\), we have \(M_{\sigma v}(\mu_{\kappa m^{d}})=M_{v}(\mu_{\kappa m^{d}})\)._
Proof.: Let \(\Lambda=V\mathbb{Z}^{d}\); i.e., \(V\) is a generator for \(\Lambda\). Since \(\Phi\) is a sublattice of \(\Lambda\), there is some integer \(d\times d\) matrix \(W\) such that \(VW\) is a generator for \(\Phi\). Then \(W\) is an integer matrix that can be written in Smith Normal Form as \(W=SDT\) where \(S\) and \(T\) are integer matrices with determinant \(\pm 1\) (equivalently, their inverses are also integer matrices) and \(D\) is a diagonal matrix with positive integer diagonal entries \(\lambda_{1},\dots,\lambda_{d}\). Then \(\widetilde{V}=VS\) is a generator for \(\Lambda\) and \(U=\widetilde{V}D\) is a generator for \(\Phi\). Choosing the fundamental domains \(\Omega_{\Lambda}=\widetilde{V}[0,1)^{d}\) and \(\Omega_{\Phi}=U[0,1)^{d}\) we may write
\[\mu_{\kappa m^{d}}=\{\frac{1}{m}\widetilde{V}j\mid j\in I_{m\lambda_{1}} \times\dots\times I_{m\lambda_{d}}\},\]
where \(I_{p}:=\{0,1,2,\dots,p-1\}\). Let \(v\in\Phi^{*}\) so that \(v=U^{-T}k=\widetilde{V}^{-T}D^{-1}k\) for some \(k=(k_{1},k_{2},\dots,k_{d})\in\mathbb{Z}^{d}.\) Then \(v\cdot(\frac{1}{m}\widetilde{V}j)=\frac{1}{m}j\cdot(D^{-1}k)\) and so
\[M_{v}(\mu_{\kappa m^{d}}) =\sum_{j\in I_{m\lambda_{1}}\times\dots\times I_{m\lambda_{d}}}e ^{2\pi i\frac{1}{m}j\cdot D^{-1}k}=\prod_{\ell=1}^{d}\left(\sum_{j_{\ell}=0}^{ m\lambda_{\ell}-1}e^{2\pi i\frac{j_{\ell}k_{\ell}}{m\lambda_{\ell}}}\right)\] \[=\begin{cases}m^{d}\lambda_{1}\cdots\lambda_{d},&k\in mD\mathbb{ Z}^{d},\\ 0,&\text{otherwise},\end{cases}\]
where we used the finite geometric sum formula in the last equality. Noting that \(\kappa=\lambda_{1}\cdots\lambda_{d}\) and that \(v\in m\Lambda^{*}\) if and only if \(k\in mD\mathbb{Z}^{d}\) establishes (18).
Finally, if \(\sigma\in G_{\Phi}\) and \(G_{\Phi}\subset G_{\Lambda}\), then \(\sigma v\in m\Lambda^{*}\) if and only if \(v\in m\Lambda^{*}\) which completes the proof.
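The moment formula (18) can also be verified directly in small cases. The sketch below (illustrative; the test vectors are ad hoc) builds \(\mu_{2m^{2}}=(\frac{1}{m}A_{2})\cap\Omega_{L}\) in exact integer coordinates and checks that \(M_{v}\) is \(\kappa m^{d}\) or \(0\) according to whether \(v\in mA_{2}^{*}\).

```python
# Points of (1/m)A_2 mod L have exact coordinates x = (2 j1 + j2)/(2m),
# y = sqrt(3) j2/(2m); reducing mod L amounts to reducing the integers mod 2m.
import numpy as np

m = 3
res = {((2 * j1 + j2) % (2 * m), j2 % (2 * m))
       for j1 in range(2 * m) for j2 in range(2 * m)}
pts = np.array([[u / (2 * m), np.sqrt(3) * w / (2 * m)] for u, w in sorted(res)])
assert len(pts) == 2 * m ** 2                    # kappa * m^d with kappa = 2

V = np.array([[1.0, 0.5], [0.0, np.sqrt(3) / 2]])  # A_2 generator
dual = np.linalg.inv(V).T                          # generator of A_2*

for k1, k2 in [(1, 0), (2, 0), (3, 0), (3, 3)]:
    v = np.array([k1, k2 / np.sqrt(3)])            # an element of L* = Phi*
    M = np.sum(np.exp(2j * np.pi * (pts @ v)))
    j = np.linalg.solve(m * dual, v)               # v in m*A_2* iff j is integral
    print((k1, k2), round(abs(M), 6), np.allclose(j, np.round(j)))
```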
We define the _index_ of a configuration \(\omega_{n}\subset\mathbb{R}^{d}\) with respect to a lattice \(\Phi\subset\mathbb{R}^{d}\) by
\[\mathcal{I}_{\Phi}(\omega_{n})=\{v\in\Phi^{*}\mid M_{v}(\omega_{n})=0\}. \tag{19}\]
It then follows from Theorem 8 that \(\mathcal{I}_{\Phi}(\mu_{\kappa m^{d}})=\Phi^{*}\setminus(m\Lambda^{*})\).
### Lattice theta functions
For \(c>0\), the classical Jacobi theta function of the third type, is defined by
\[\theta(c;x):=\sum_{k=-\infty}^{\infty}e^{-\pi k^{2}c}e^{2\pi ikx},\qquad x\in \mathbb{R}. \tag{20}\]
Via Poisson Summation on the integers, we have
\[\theta(c;x)=c^{-1/2}\sum_{k=-\infty}^{\infty}e^{-\frac{\pi(k+x)^{2}}{c}}, \tag{21}\]
and so

\[F_{a,\mathbb{Z}}(x)=\sqrt{\frac{\pi}{a}}\,\theta\!\left(\frac{\pi}{a};x\right).\]

We note that \(\theta(c;x)\) is \(\mathbb{Z}\)-periodic and \(\theta(c;x)=\theta(c;-x)\) for any \(x\).
\[\tilde{\theta}(c;t):=\theta\left(c,\frac{\arccos t}{2\pi}\right),\quad t\in[-1,1].\]
It follows from the symmetries of \(\theta(c,x)\) that for all \(x\in\mathbb{R}\),
\[\tilde{\theta}(c;\cos 2\pi x)=\theta(c;x),\]
and moreover, as shown below, \(\tilde{\theta}\) is absolutely monotone on \([-1,1]\). First, we recall the Jacobi triple product formula.
**Theorem 9** (Jacobi Triple Product Formula).: _Let \(z,q\in\mathbb{C}\) with \(|q|<1\) and \(z\neq 0\). Then_
\[\prod_{r=1}^{\infty}(1-q^{2r})(1+q^{2r-1}z^{2})(1+q^{2r-1}z^{-2})=\sum_{k=- \infty}^{\infty}q^{k^{2}}z^{2k}.\]
**Proposition 10**.: _For any \(c>0\), \(\tilde{\theta}(c;\cdot):[-1,1]\to(0,\infty)\) is strictly absolutely monotone on \([-1,1]\)._
Proof.: Applying the Jacobi triple product with \(q=e^{-\pi c}\) and \(z=e^{\pi ix}\), we observe
\[\theta(c;x)=\prod_{r=1}^{\infty}(1-e^{-2\pi rc})(1+e^{-(2r-1)\pi c}e^{2\pi ix})(1+e^{-(2r-1)\pi c}e^{-2\pi ix}) \tag{22}\] \[=\prod_{r=1}^{\infty}(1-e^{-2\pi rc})(1+2e^{-(2r-1)\pi c}\cos(2\pi x)+e^{-2(2r-1)\pi c}). \tag{23}\]
So
\[\tilde{\theta}(c;t)=\prod_{r=1}^{\infty}(1-e^{-2\pi rc})(1+2e^{-(2r-1)\pi c}t+e^{-2(2r-1)\pi c}).\]
It is elementary to verify that the product above defines an entire function of \(t\), and repeated applications of the product rule show the strict absolute monotonicity of \(\tilde{\theta}(c;\cdot)\) on \([-1,1]\).
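The triple-product expression for \(\tilde{\theta}\) can be compared numerically with the Fourier series (20); the sketch below (illustrative parameters) does so at a single point.

```python
import numpy as np

def theta_series(c, x, K=60):
    k = np.arange(-K, K + 1)
    return np.sum(np.exp(-np.pi * k ** 2 * c + 2j * np.pi * k * x)).real

def theta_tilde_product(c, t, R=60):
    r = np.arange(1, R + 1)
    q_even = np.exp(-2 * np.pi * r * c)        # q^{2r} with q = e^{-pi c}
    q_odd = np.exp(-(2 * r - 1) * np.pi * c)   # q^{2r-1}
    return np.prod((1 - q_even) * (1 + 2 * q_odd * t + q_odd ** 2))

c, x = 0.5, 0.23
print(theta_series(c, x), theta_tilde_product(c, np.cos(2 * np.pi * x)))
```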
We also have
**Proposition 11**.: _For all \(n\in\mathbb{N}\cup\{0\}\) and \(t\in[-1,1]\), \(\tilde{\theta}\) satisfies_
\[(-1)^{n}\left(\frac{\tilde{\theta}^{\prime}}{\tilde{\theta}}\right)^{(n)}(t) >0.\]
_In other words, \(\frac{\tilde{\theta}^{\prime}}{\tilde{\theta}}\) is strictly completely monotone._
Proof.: Our proposition is equivalent to showing that \(h^{\prime}\) is strictly completely monotone, where \(h:=\log\tilde{\theta}\). Recall that \(\tilde{\theta}\) can be expressed via the Jacobi triple product as
\[\tilde{\theta}(c;t)=\prod_{r=1}^{\infty}(1-e^{-2\pi rc})(1+2e^{-(2r-1)\pi c}t+e^{-2(2r-1)\pi c}),\]
so
\[h=\sum_{r=1}^{\infty}\log\left[(1-e^{-2\pi rc})(1+2e^{-(2r-1)\pi c}t+e^{-2(2r-1)\pi c})\right].\]
Let \(h_{r}\) be the \(r\)th term in this sum. It suffices to show that each \(h_{r}^{\prime}\) is strictly completely monotone. Indeed, we have
\[h_{r}^{\prime}=\frac{2e^{-(2r-1)\pi c}}{1+2e^{-(2r-1)\pi c}t+e^{-2(2r-1)\pi c}}\]
so
\[[h_{r}^{\prime}]^{(n)}=\frac{(-1)^{n}\,n!\,[2e^{-(2r-1)\pi c}]^{n+1}}{(1+2e^{-(2r-1)\pi c}t+e^{-2(2r-1)\pi c})^{n+1}}\]
from which the claim follows since \(1+2e^{-(2r-1)\pi c}t+e^{-2(2r-1)\pi c}\geq(1-e^{-(2r-1)\pi c})^{2}>0\) for all \(r\in\mathbb{N}\), \(c>0\), and \(t\geq-1\).
If \(\Lambda_{R}=(a_{1}\mathbb{Z})\times\cdots\times(a_{d}\mathbb{Z})\) is a rectangular lattice, then \(F_{a,\Lambda_{R}}(x)\) is a tensor product of such functions:
\[\begin{split} F_{a,\Lambda_{R}}(x)&=\sum_{v\in\Lambda_{R}}e^{-a\|x+v\|^{2}}=\sum_{k\in\mathbb{Z}^{d}}\prod_{i=1}^{d}e^{-aa_{i}^{2}(x_{i}/a_{i}+k_{i})^{2}}\\ &=\prod_{i=1}^{d}\left(\sum_{k_{i}\in\mathbb{Z}}e^{-aa_{i}^{2}(\frac{x_{i}}{a_{i}}+k_{i})^{2}}\right)=\prod_{i=1}^{d}F_{aa_{i}^{2},\mathbb{Z}}(x_{i}/a_{i}).\end{split} \tag{24}\]
If \(\Lambda\) contains a rectangular sublattice \(\Lambda_{R}\), then we may write \(F_{a,\Lambda}\) as a sum of such tensor products.
**Proposition 12**.: _Suppose \(\Lambda\) is a lattice in \(\mathbb{R}^{d}\) that contains a rectangular sublattice \(\Lambda_{R}=(a_{1}\mathbb{Z})\times\cdots\times(a_{d}\mathbb{Z})\) and let \(\omega_{\Lambda_{R}}=\Lambda\cap\Omega_{\Lambda_{R}}\). Then_
\[F_{a,\Lambda}(x)=\sum_{y\in\omega_{\Lambda_{R}}}F_{a,\Lambda_{R}}(x+y). \tag{25}\]
Proof.: The formula follows immediately from \(\Lambda=\omega_{\Lambda_{R}}+\Lambda_{R}\).
### Polynomial interpolation and linear programming bounds for lattice configurations
Combining the previous results in this section, we obtain the following general polynomial interpolation framework for linear programming bounds. For convenience, we shall write
\[W_{\Phi}:=\Phi^{*}/G_{\Phi}\text{ and }\Delta_{\Phi}:=\mathbb{R}^{d}/S_{\Phi}, \tag{26}\]
to denote some choice of the respective fundamental domains for a lattice \(\Phi\subset\mathbb{R}^{d}\).
**Theorem 13**.: _Let \(\Phi\subset\mathbb{R}^{d}\) be such that \(H\subseteq G_{\Phi}\) where \(H\) is the coordinate symmetry group (see Sec. 2.2) and suppose \(F:\mathbb{R}^{d}\rightarrow(-\infty,\infty]\) is \(S_{\Phi}\) invariant. By Proposition 6, \(\Phi\) contains a rectangular sublattice_
\[a_{1}\mathbb{Z}\times\cdots\times a_{d}\mathbb{Z},\]
_which induces the change of variables \(T:=T_{a_{1},\ldots,a_{d}}\) defined in (12) and associated polynomials \(P_{v}^{G_{\Phi}}\) defined in (13). Suppose \(a>0\) and \((c_{v})_{v\in W_{\Phi}}\) is such that (a) \(c_{v}\geq 0\), for all nonzero \(v\in W_{\Phi}\), (b) \(\sum_{v\in W_{\Phi}}c_{v}<\infty\), and (c) the continuous function_
\[\tilde{g}_{a}:=c_{0}+\sum_{0\neq v\in W_{\Phi}}c_{v}P_{v}^{G_{\Phi}}\]
_satisfies \(\tilde{g}_{a}\leq\tilde{F}\) on \(\tilde{\Delta}_{\Phi}\)._
_Then for any \(n\)-point configuration \(\omega_{n}=\{x_{1},\ldots,x_{n}\}\subset\mathbb{R}^{d}\), we have_
\[E_{F_{a,\Phi}}(\omega_{n})\geq E_{g_{a}}(\omega_{n})\geq n^{2}c_{0}-n\tilde{g}_{a}(1,\ldots,1), \tag{27}\]

_where equality holds throughout (27) if and only if_
1. \(\tilde{g}_{a}(t)=\tilde{F}_{a,\Phi}(t)\) _for all_ \(t\in T(\{x_{i}-x_{j}\mid i\neq j\in\{1,\ldots,n\}\})\) _and_
2. \(c_{v}M_{\sigma v}(\omega_{n})=0\) _for all_ \(v\in W_{\Phi}\) _and_ \(\sigma\in G_{\Phi}\)_._
We now consider sufficient conditions for the \(F\)-optimality of configurations of the form \(\mu_{\kappa m^{d}}=\frac{1}{m}\Lambda\cap\Omega_{\Phi}\) as in (17). For such a configuration we define
\[\tau_{\kappa m^{d}}:=\left(\frac{1}{m}\Lambda\right)\cap\Delta_{\Phi}, \tag{28}\]
which equals \(\mu_{\kappa m^{d}}\cap\Delta_{\Phi}\) if we choose \(\Delta_{\Phi}\subset\Omega_{\Phi}\).
**Corollary 14**.: _Suppose \(\Phi\), \(T:=T_{a_{1},\ldots,a_{d}}\), \(\tilde{g}_{a}\), and \(F\) are as in Theorem 13 and that \(\Phi\subseteq\Lambda\), and \(G_{\Phi}\subseteq G_{\Lambda}\) for some lattice \(\Lambda\subset\mathbb{R}^{d}\). Then the configuration \(\mu_{\kappa m^{d}}=\frac{1}{m}\Lambda\cap\Omega_{\Phi}\) defined in Theorem 8 is \(F\)-optimal if_
1. \(c_{v}=0\) _for all_ \(v\in(m\Lambda^{*})\cap W_{\Phi}\)_, and_
2. \(\tilde{g}_{a}(t)=\tilde{F}_{a,\Phi}(t)\) _for all_ \(t\in\tilde{\tau}_{\kappa m^{d}}\setminus\{\mathbf{1}\}\) _where_ \(\mathbf{1}=(1,1,\ldots,1)\in\mathbb{R}^{d}\)_._
If such a \(\tilde{g}_{a}\) exists, we refer to it as a 'magic' interpolant.
## 3 The Linear Programming Framework for the families \(\omega_{m^{2}},\omega_{2m^{2}},\omega_{3m^{2}}\), and \(\omega_{6m^{2}}\)
We explicitly apply the linear programming framework of Corollary 14 to the four families of periodic problems described in the introduction to obtain bivariate polynomial interpolation problems whose solutions would verify the \(A_{2}\)-universal optimality of \(\omega_{m^{2}}\) and \(\omega_{3m^{2}}\), and the \(L\)-universal optimality of \(\omega_{2m^{2}}\) and \(\omega_{6m^{2}}\). Observe that our four families of point configurations are of the form \(\mu_{\kappa m^{2}}\) with the following choices of \(\Phi\subseteq\Lambda\) (writing \(A_{2}^{\pi/6}:=R_{\pi/6}A_{2}\)):
1. \(\omega_{m^{2}}^{*}\): \(\Phi=A_{2},\Lambda=A_{2}\)
2. \(\omega_{2m^{2}}^{*}\): \(\Phi=L,\Lambda=A_{2}\)
3. \(\omega_{3m^{2}}^{*}\): \(\Phi=A_{2},\Lambda=\frac{1}{\sqrt{3}}A_{2}^{\pi/6}\)
4. \(\omega_{6m^{2}}^{*}\): \(\Phi=L,\Lambda=\frac{1}{\sqrt{3}}A_{2}^{\pi/6}\).
It is straightforward to check that in all cases, \(\Phi\) and \(\Lambda\) satisfy the conditions of Theorem 13. Since both choices of \(\Phi\), \(A_{2}\) and \(L\), contain \(L\) as a rectangular sublattice, we will work with the following change of variables to induce our polynomial structure, as described in Proposition 6:
\[(t_{1},t_{2}):=\left(\cos(2\pi x_{1}),\cos\left(\frac{2\pi x_{2}}{\sqrt{3}} \right)\right). \tag{29}\]
We will use \(T\) to denote the change of variables \(T(x)=(t_{1}(x_{1}),t_{2}(x_{2}))\).
Importantly, the maps \(F_{a,\Phi}\) are also well-behaved under this change of variables, as seen through decomposing \(F_{a,\Phi}\) into \(\theta\) functions as described in Proposition 25.
When \(\Phi=L\), we apply (24) together with the identity \(F_{a,\mathbb{Z}}=\sqrt{\pi/a}\,\theta(\pi/a;\cdot)\) to obtain
\[F_{a,L}(x)=\frac{\pi}{\sqrt{3}a}\theta(\frac{\pi}{a};x_{1})\theta(\frac{\pi}{3 a};\frac{x_{2}}{\sqrt{3}}). \tag{30}\]
As a result,
\[\tilde{F}_{a,L}(t_{1},t_{2}) :=F(\frac{\arccos(t_{1})}{2\pi},\frac{\sqrt{3}\arccos(t_{2})}{2\pi })\] \[=\frac{\pi}{\sqrt{3}a}\theta(\frac{\pi}{a};\frac{\arccos(t_{1})}{ 2\pi})\theta(\frac{\pi}{3a};\frac{\arccos(t_{2})}{2\pi})\] \[=\frac{\pi}{\sqrt{3}a}\tilde{\theta}(\frac{\pi}{a};t_{1})\tilde{ \theta}(\frac{\pi}{3a};t_{2})\]
for \(t_{1},t_{2}\in[-1,1]\). Thus, for fixed \(t_{1}\in[-1,1]\), \(\tilde{F}_{a,L}\) is strictly absolutely monotone as a function of \(t_{2}\) and vice versa. We will use the absolute monotonicity in \(t_{1}\) and \(t_{2}\) repeatedly, and the symmetry of this formula for \(\tilde{F}\) is one of the main motivations for considering the sublattice \(L\).
On the other hand, when \(\Phi=A_{2}\), we arrive at the following formula, a consequence of Proposition 25, which also appears in [1] and [14]:
\[F(x):=F_{a,A_{2}}(x)=\frac{\pi}{\sqrt{3}a}\left(\theta(\frac{\pi}{a};x_{1}) \theta(\frac{\pi}{3a};\frac{x_{2}}{\sqrt{3}})+\theta(\frac{\pi}{3a};\frac{x_{ 2}}{\sqrt{3}}+\frac{1}{2})\theta(\frac{\pi}{a};x_{1}+\frac{1}{2})\right), \tag{31}\]
which gives
\[\begin{split}\tilde{F}(t)&:=F(\frac{\arccos t_{1}} {2\pi},\frac{\sqrt{3}\arccos t_{2}}{2\pi})\\ &=\frac{\pi}{\sqrt{3}a}(\tilde{\theta}(\frac{\pi}{a};t_{1}) \tilde{\theta}(\frac{\pi}{3a};t_{2})+\tilde{\theta}(\frac{\pi}{a};-t_{1}) \tilde{\theta}(\frac{\pi}{3a};-t_{2})).\end{split} \tag{32}\]
The next corollary follows immediately from the absolute monotonicity of \(\tilde{\theta}\) (see Proposition 10).
**Corollary 15**.: _For any nonnegative integers \(l_{1}\) and \(l_{2}\) satisfying \(l_{1}+l_{2}\equiv 0\pmod{2}\),_
\[\frac{\partial^{l_{1}+l_{2}}\tilde{F}}{\partial^{l_{1}}t_{1}\partial^{l_{2}}t _{2}}>0,\quad\forall t_{1},t_{2}\in[-1,1].\]
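To make the sign count behind Corollary 15 explicit, one can differentiate (32) term by term; each \(t_{1}\)- or \(t_{2}\)-derivative of the reflected summand contributes a factor of \(-1\) (here \(\tilde{\theta}^{(l)}\) denotes the \(l\)-th derivative in the second argument):

\[\frac{\partial^{l_{1}+l_{2}}\tilde{F}}{\partial^{l_{1}}t_{1}\partial^{l_{2}}t_{2}}=\frac{\pi}{\sqrt{3}a}\Big(\tilde{\theta}^{(l_{1})}(\tfrac{\pi}{a};t_{1})\,\tilde{\theta}^{(l_{2})}(\tfrac{\pi}{3a};t_{2})+(-1)^{l_{1}+l_{2}}\,\tilde{\theta}^{(l_{1})}(\tfrac{\pi}{a};-t_{1})\,\tilde{\theta}^{(l_{2})}(\tfrac{\pi}{3a};-t_{2})\Big),\]

so when \(l_{1}+l_{2}\) is even both summands are positive by the absolute monotonicity of \(\tilde{\theta}\).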
Finally, we have the following lemma from [1] (also see [12]).
**Lemma 16**.: \[\frac{\partial\tilde{F}}{\partial t_{1}}(t_{1},t_{2})>0\quad\forall t _{1}\in[-1,1],\forall t_{2}\in[-1,1]\] (33) \[\frac{\partial\tilde{F}}{\partial t_{2}}(t_{1},t_{2})\geq 0\quad \forall t_{1}\in[-1,1],\forall t_{2}\in[-1,1]\] (34)
_where the equality holds if and only if \(t_{1}=-1\), \(t_{2}=\frac{1}{2}\). In particular, these inequalities hold in the entire \(t_{1},t_{2}\) region._
Proof.: Since even partial derivatives of \(\tilde{F}\) are positive and every point \((t_{1},t_{2})\in\tilde{\Delta}_{A_{2}}\) satisfies \(t_{1}\geq-1\) and \(t_{2}\geq\frac{1}{2}\), it suffices to verify the inequalities
\[\frac{\partial\tilde{F}}{\partial t_{1}}(t_{1},t_{2})>0,\frac{\partial\tilde{ F}}{\partial t_{2}}(t_{1},t_{2})\geq 0\]
at \((-1,\frac{1}{2})\). See [1] or [12].
As observed in [12] (also see [1, Chapter 10] and [13]), Lemma 16 suffices to prove the \(A_{2}\)-universal optimality of the \(2\)- and \(3\)-point configurations discussed in the introduction (see Section 3.5 for more detail).
We will also make the following choices of fundamental domains \(\mathbb{R}^{2}/S_{\Phi}\) and \(\Phi^{*}/G_{\Phi}\). When \(\Phi=L\), we take as a choice for \(\Phi^{*}/G_{L}\) the set
\[W_{L}:=\left\{\genfrac{[}{]}{0.0pt}{}{k_{1}}{k_{2}/\sqrt{3}}\mid k_{1},k_{2} \in\mathbb{Z}\text{ and }k_{1},k_{2}\geq 0\right\}\]
and \([0,1/2]\times[0,\sqrt{3}/2]\) for \(\mathbb{R}^{2}/S_{L}\). Likewise for \(A_{2}\), we take the sets
\[W_{A_{2}}:=\left\{\genfrac{[}{]}{0.0pt}{}{k_{1}}{k_{2}/\sqrt{3}}\in L^{*}\mid 0 \leq k_{2}\leq k_{1}\text{ and }k_{1}\equiv k_{2}\pmod{2}\right\}\]
and \(\Delta_{A_{2}}\) (see Sec. 2.3) for \(\Phi^{*}/G_{A_{2}}\) and \(\mathbb{R}^{2}/S_{A_{2}}\), respectively.
Finally, the following characterizations of the dual lattices will be useful for determining which degree polynomials are available to us for interpolation. First, \(L^{*}=\{[k_{1},k_{2}/\sqrt{3}]^{T}\mid k_{1},k_{2}\in\mathbb{Z}\}\). Then \(A_{2}^{*}=\{v\in L^{*}\mid v\cdot e_{1}\equiv v\cdot\sqrt{3}e_{2}\pmod{2}\}\), and \(\left(\frac{1}{\sqrt{3}}A_{2}^{\pi/6}\right)^{*}=\{v\in A_{2}^{*}\mid v\cdot\sqrt{3}e_{2}\equiv 0\pmod{3}\}\). Thus, the set of all \(v\) for which \(c_{v}\) may be non-zero in the construction of an interpolant \(\tilde{g}_{a}\) is expressed by the index sets
\[\begin{split}\mathcal{I}_{m^{2}}:=\mathcal{I}_{A_{2}}(\omega_{m^{2}}^{*})\cap W_{A_{2}}&=W_{A_{2}}\setminus(mA_{2}^{*})\\ &=\{[k_{1},k_{2}/\sqrt{3}]^{T}\mid k_{1},k_{2}\geq 0,\ k_{1}\equiv k_{2}\pmod{2},\ [k_{1},k_{2}]\neq m[j_{1},j_{2}]\ \text{for any}\ j_{1}\equiv j_{2}\pmod{2}\}\\ \mathcal{I}_{2m^{2}}:=\mathcal{I}_{L}(\omega_{2m^{2}}^{*})\cap W_{L}&=W_{L}\setminus(mA_{2}^{*})\\ &=\{[k_{1},k_{2}/\sqrt{3}]^{T}\mid k_{1},k_{2}\geq 0,\ [k_{1},k_{2}]\neq m[j_{1},j_{2}]\ \text{for any}\ j_{1}\equiv j_{2}\pmod{2}\}\\ \mathcal{I}_{3m^{2}}:=\mathcal{I}_{A_{2}}(\omega_{3m^{2}}^{*})\cap W_{A_{2}}&=W_{A_{2}}\setminus m\big(\tfrac{1}{\sqrt{3}}A_{2}^{\pi/6}\big)^{*}\\ &=\{[k_{1},k_{2}/\sqrt{3}]^{T}\mid k_{1},k_{2}\geq 0,\ k_{1}\equiv k_{2}\pmod{2},\ [k_{1},k_{2}]\neq m[j_{1},j_{2}]\ \text{for any}\ j_{1}\equiv j_{2}\pmod{2},\ j_{2}\equiv 0\pmod{3}\}\\ \mathcal{I}_{6m^{2}}:=\mathcal{I}_{L}(\omega_{6m^{2}}^{*})\cap W_{L}&=W_{L}\setminus m\big(\tfrac{1}{\sqrt{3}}A_{2}^{\pi/6}\big)^{*}\\ &=\{[k_{1},k_{2}/\sqrt{3}]^{T}\mid k_{1},k_{2}\geq 0,\ [k_{1},k_{2}]\neq m[j_{1},j_{2}]\ \text{for any}\ j_{1}\equiv j_{2}\pmod{2},\ j_{2}\equiv 0\pmod{3}\}.\end{split}\]
### The Polynomials \(P_{v}^{L}\) and \(P_{v}^{A_{2}}\)
When \(\Phi=L\), we have already shown that the functions \(P_{v}^{L}\) are tensor products of Chebyshev polynomials
\[P_{v}^{L}=T_{k_{1}}(t_{1})T_{k_{2}}(t_{2})\]
where \(v=[k_{1},k_{2}/\sqrt{3}]^{T}\), \(k_{1},k_{2}\geq 0\), is an arbitrary element of \(W_{L}\) (see Proposition 5).
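For instance, taking \(v=[2,1/\sqrt{3}]^{T}\) (so \(k_{1}=2\), \(k_{2}=1\); an example of our choosing), the formula above gives

\[P_{v}^{L}=T_{2}(t_{1})T_{1}(t_{2})=(2t_{1}^{2}-1)\,t_{2}.\]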
What can be said in the case when \(\Phi=A_{2}\)? These polynomials have been studied extensively (see [10], [11], and references therein). Of particular importance to \(P_{v}^{A_{2}}\) are the polynomials \(P_{v^{\prime}}\) and \(P_{v^{\prime\prime}}\), where \(v^{\prime}=[1,1/\sqrt{3}]^{T}\) is the shortest non-zero vector in \(W_{A_{2}}\) and we'll let \(v^{\prime\prime}=[2,0]^{T}\) be the next shortest vector. We have
\[P_{v^{\prime}} =\frac{1}{3}(-1+2t_{2}(t_{1}+t_{2})), \tag{43}\] \[P_{v^{\prime\prime}} =\frac{1}{3}(-1+2t_{1}(t_{1}-3t_{2}+4t_{2}^{3})). \tag{44}\]
It turns out that every other \(P_{v}\) can be expressed as a bivariate polynomial in \(P_{v^{\prime}}\) and \(P_{v^{\prime\prime}}\), i.e. for any \(v\in A_{2}^{*}\), there exist coefficients \(c_{i,j}\) (with only finitely many nonzero) such that
\[P_{v}=\sum_{i,j\geq 0}c_{i,j}(-1+2t_{2}(t_{1}+t_{2}))^{i}(-1+2t_{1}(t_{1}-3t_{2 }+4t_{2}^{3}))^{j}\]
First, note that since \(P_{v^{\prime}}\) and \(P_{v^{\prime\prime}}\) contain only monomials of even total degree, the same is true of an arbitrary \(P_{v}\). To further understand these bivariate polynomials, we set \(\alpha=P_{v^{\prime}}\), \(\beta=P_{v^{\prime\prime}}\) and use a notion of degree on monomials introduced in [10]:
**Definition 3.1**.: The \(A_{2}\)-degree of a bivariate monomial \(\alpha^{k_{0}}\beta^{k_{1}}\) is \(2k_{0}+3k_{1}\).
If \(v\in W_{A_{2}}\), then \(v=k_{0}v^{\prime}+k_{1}v^{\prime\prime}\) for some unique \(k_{0},k_{1}\geq 0\), and so we can likewise define the \(A_{2}\)-degree of \(v\in W_{A_{2}}\) as \(2k_{0}+3k_{1}\). We will denote the degree function as \(\mathcal{D}\) for both monomials and elements of \(W_{A_{2}}\). Now we can introduce an ordering on monomials by \(A_{2}\)-degree and break ties via the power of \(\alpha\). Then the leading term (by \(A_{2}\)-degree) of \(P_{v}\) is \(\alpha^{k_{0}}\beta^{k_{1}}\). Certainly, this is true for our first polynomials, \(P_{0}=1\), \(P_{v^{\prime}}=\alpha\), and \(P_{v^{\prime\prime}}=\beta\), and then an examination of the recursion generating the polynomials shows that the claim holds inductively (cf. [10]).
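As a quick illustration of the definition, take \(v=2v^{\prime}+v^{\prime\prime}=[4,2/\sqrt{3}]^{T}\) (an example of our choosing), so \(k_{0}=2\) and \(k_{1}=1\); then

\[\mathcal{D}(v)=2\cdot 2+3\cdot 1=7,\]

and the leading term of \(P_{v}\) in the ordering above is \(\alpha^{2}\beta\).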
### Interpolation Nodes
Our final step is to calculate for each family the nodes at which a hypothetical interpolant \(\tilde{g}_{a}\) must interpolate \(\tilde{F}_{a,\Phi}\) to meet the conditions of our linear programming bounds (including \(0\) to simplify formulas). We end up with
\[\tau_{\kappa m^{2}} :=\omega_{\kappa m^{2}}\cap\Delta_{A_{2}},\hskip 28.452756pt \kappa=1,3 \tag{45}\] \[\tau_{\kappa m^{2}} :=\omega_{\kappa m^{2}}\cap([0,1/2]\times[0,\sqrt{3}/2]),\hskip 28.452756pt \kappa=2,6. \tag{46}\]
Going straight from the definitions of \(\omega_{\kappa m^{2}}\) and our \(t_{1},t_{2}\) change of variables, we then compute
\[\tilde{\tau}_{m^{2}} =\left\{\left(\cos(\frac{\pi k_{1}}{m}),\cos(\frac{\pi k_{2}}{m}) \right)\mid 0\leq 3k_{2}\leq k_{1}\leq m,k_{1}\equiv k_{2}(\text{mod}2)\right\} \tag{47}\] \[\tilde{\tau}_{2m^{2}} =\left\{\left(\cos(\frac{\pi k_{1}}{m}),\cos(\frac{\pi k_{2}}{m}) \right)\mid 0\leq k_{1},k_{2}\leq m,k_{1}\equiv k_{2}(\text{mod}2)\right\}\] (48) \[\tilde{\tau}_{3m^{2}} =\left\{\left(\cos(\frac{\pi k_{1}}{m}),\cos(\frac{\pi k_{2}}{3m} )\right)\mid 0\leq k_{2}\leq k_{1}\leq m,k_{1}\equiv k_{2}(\text{mod}2)\right\}\] (49) \[\tilde{\tau}_{6m^{2}} =\left\{\left(\cos(\frac{\pi k_{1}}{m}),\cos(\frac{\pi k_{2}}{3m} )\right)\mid 0\leq 3k_{1},k_{2}\leq 3m,k_{1}\equiv k_{2}(\text{mod}2)\right\}. \tag{50}\]
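As a sanity check on (50), the \(m=1\) instance runs over \(k_{1}\in\{0,1\}\) and \(k_{2}\in\{0,1,2,3\}\) with \(k_{1}\equiv k_{2}\pmod{2}\), giving

\[\tilde{\tau}_{6}=\left\{(1,1),\ (1,-\tfrac{1}{2}),\ (-1,\tfrac{1}{2}),\ (-1,-1)\right\},\]

so that \(\tilde{\tau}_{6}\setminus\{\mathbf{1}\}\) consists of exactly the three interpolation nodes used in Section 5.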
### Interpolation Problem for \(\omega_{m^{2}}^{*}\)
With all the machinery now set up, we address the family \(\omega_{m^{2}}^{*}\) and its base case, \(\omega_{4}^{*}\). Recall \(v^{\prime}:=[1,1/\sqrt{3}]\) is the shortest vector in \(W_{A_{2}}\) and \(P_{v^{\prime}}=\frac{1}{3}(-1+2t_{2}(t_{1}+t_{2}))\). In Section 4, we prove the \(A_{2}\)-universal optimality of the base case \(\omega_{4}^{*}\) by constructing for each \(a>0\) a polynomial of the form \(g_{a}(t_{1},t_{2}):=c_{0}+c_{1}P_{v^{\prime}}(t_{1},t_{2})\) with \(c_{1}\geq 0\) such that \(g_{a}\leq F_{a,A_{2}}\).
For general \(m\), recalling the background on \(G_{2}\) polynomials in Sec. 3.1, we note that
\[\{v\in W_{A_{2}}\mid\mathcal{D}(v)<2m\}\subset\mathcal{I}_{m^{2}}.\]
The containment holds because if \(v=k_{0}v^{\prime}+k_{1}v^{\prime\prime}\) and \({\cal D}(v)<2m\), then \(k_{0}<m\), and so \(v\not\in mA_{2}^{*}\) (i.e. \(v\in{\cal I}_{m^{2}}\)). For the \(m=2\) base case already discussed, our interpolant \(\tilde{g}_{a}\) satisfies
\[\tilde{g}_{a}\in\mbox{span}\{P_{v}:v\in W_{A_{2}},\,{\cal D}(v)<2m\}.\]
### Interpolation Problem for \(\omega_{2m^{2}}^{*}\)
Now to the case of \(\omega_{2m^{2}}^{*}\) with base case \(\omega_{2}^{*}\).
The universal optimality of \(\omega_{2}^{*}\) follows from
\[F_{a,L}(x)=\frac{\pi}{\sqrt{3}a}\theta(\frac{\pi}{a};x_{1})\theta(\frac{\pi}{3 a};\frac{x_{2}}{\sqrt{3}}),\]
which takes its minimum at \((1/2,\sqrt{3}/2)\) for all \(a\), as \(\theta(c;x)\) takes its minimum at \(x=1/2\) for all \(c>0\) (see Proposition 10). Since the \(F_{a,L}\) energy of a two-point configuration is determined only by the difference of the two points in the configuration, the universal optimality of \(\omega_{2}^{*}\) immediately follows. The same argument shows for any rectangular lattice (cf. [10]) that a point at the origin and a point at the centroid of a rectangular fundamental domain yield a 2-point universally optimal configuration.
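Written out (with \(E_{F}\) summing over ordered pairs, as in the analogous \(3\)-point computation in Section 3.5), the argument is one line: for any two-point configuration \(\omega_{2}=\{x,y\}\),

\[E_{F}(\omega_{2})=2F_{a,L}(x-y)\geq 2F_{a,L}(1/2,\sqrt{3}/2)=E_{F}(\omega_{2}^{*}),\]

using that \(F_{a,L}\) is even, so the two ordered differences contribute equally.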
For the general case,
\[\{[k_{1},k_{2}/\sqrt{3}]^{T}:0\leq k_{1},k_{2}\mbox{ and }k_{1}+k_{2}<2m\}\subseteq{ \cal I}_{2m^{2}},\]
and then
\[\mbox{span}\{P_{v}\mid v=[k_{1},k_{2}/\sqrt{3}]^{T}:0\leq k_{1},k_{2}\mbox{ and }k_{1}+k_{2}<2m\}={\cal P}_{2m-1}(t_{1},t_{2}),\]
where \({\cal P}_{n}(t_{1},t_{2})\) is the set of bivariate polynomials of total degree at most \(n\). For the first non-trivial case, \(\omega_{8}^{*}\), we have numerical evidence that for each \(a>0\), an interpolant \(\tilde{g}_{a}\) exists in \({\cal P}_{3}\) and satisfies the conditions of Corollary 14.
Figure 5: \(\omega_{2}^{*}\), pictured above, is \(L\)-universally optimal, and analogous results hold for any rectangular lattice.
### Interpolation Problems for \(\omega_{3m^{2}}^{*}\)
Now to the case of \(\omega_{3m^{2}}^{*}\) with base case \(\omega_{3}^{*}\).
The universal optimality of \(\omega_{3}^{*}\) (cf. [15] and [16]) follows from Lemma 16, which is used to show a global minimum of \(F_{a,A_{2}}\) occurs at \((1/2,\sqrt{3}/6)\) for all \(a\). This point, \((1/2,\sqrt{3}/6)\), is also the only difference \(x-y\) (up to \(S_{A_{2}}\) action) for distinct \(x,y\in\omega_{3}^{*}\). Thus for an arbitrary \(3\)-point configuration \(\omega_{3}\), we have

\[E_{F}(\omega_{3})\geq 6F(1/2,\sqrt{3}/6)=E_{F}(\omega_{3}^{*})\]

and so \(\omega_{3}^{*}\) is \(A_{2}\)-universally optimal. This same line of argument is also used in [15] to show that the \(2\)-point honeycomb configurations pictured below are \(A_{2}\)-universally optimal.
More generally, we suggest invoking \(A_{2}\)-degree as in the \(m^{2}\) case to find a nice subset of \(\mathcal{I}_{3m^{2}}\). We have the containment
\[\{v\in W_{A_{2}}\mid\mathcal{D}(v)<3m\}\subset\mathcal{I}_{3m^{2}}.\]
The containment holds because if \(v=k_{0}v^{\prime}+k_{1}v^{\prime\prime}\) and \(\mathcal{D}(v)<3m\), then either \(v\not\in mA_{2}^{*}\) or \(v=mv^{\prime}\), and \(mv^{\prime}\not\in m(\frac{1}{\sqrt{3}}A_{2}^{\pi/6})^{*}\) since \(v^{\prime}\not\in(\frac{1}{\sqrt{3}}A_{2}^{\pi/6})^{*}\).
### Interpolation Problems for \(\omega_{6m^{2}}^{*}\)
It remains to consider the family \(\omega_{6m^{2}}^{*}\) and its base case \(\omega_{6}^{*}\), whose universal optimality involves our most complex application of the linear programming bounds. In Section 5, we prove the \(L\)-universal optimality of \(\omega_{6}^{*}\) by constructing for each \(a>0\) an interpolant of the form
\[g_{a}(t_{1},t_{2})=b_{0,0}+b_{1,0}t_{1}+b_{0,1}t_{2}+b_{1,1}t_{1}t_{2}+b_{0,2} t_{2}^{2}\]
where \(b_{i,j}\geq 0\) for \((i,j)\neq 0\). In that section, we will explain in greater detail why such a \(\tilde{g}_{a}\) satisfies the conditions of Corollary 14.
Finally, for arbitrary \(m\), we propose a few nice subsets of \(\mathcal{I}_{6m^{2}}\). First, we have the set
\[\{[k_{1},k_{2}/\sqrt{3}]^{T}:0\leq k_{1}<2m,0\leq k_{2}<3m\}\subseteq\mathcal{ I}_{6m^{2}},\]
and
\[\text{span}\{P_{v}\mid v=[k_{1},k_{2}/\sqrt{3}]^{T}:0\leq k_{1}<2m,\ 0\leq k_{2}<3m\}=\mathcal{P}_{2m-1}(t_{1})\times\mathcal{P}_{3m-1}(t_{2}).\]
Working with such a tensor space of polynomials is natural due to the tensor product nature of
\[\tilde{F}_{a,L}(t_{1},t_{2})=\frac{\pi}{\sqrt{3}a}\tilde{\theta}(\frac{\pi}{a};t_{1})\tilde{\theta}(\frac{\pi}{3a};t_{2}).\]
Notably, our interpolant, \(\tilde{g}_{a}\), for \(\omega_{6}^{*}\) satisfies \(\tilde{g}_{a}\in\mathcal{P}_{1}(t_{1})\times\mathcal{P}_{2}(t_{2})\).
## 4 Universal Optimality of \(\omega_{4}^{*}\)
To prove \(\omega_{4}^{*}\) is \(A_{2}\)-universally optimal, it remains to show for each \(a>0\) that there are \(c_{0},c_{1}\in\mathbb{R}\) with \(c_{1}\geq 0\) such that the resulting interpolant \(\tilde{g}_{a}(t_{1},t_{2}):=c_{0}+c_{1}P_{v^{\prime}}=c_{0}+\frac{c_{1}}{3}(-1+2t_{2}(t_{1}+t_{2}))\) satisfies \(\tilde{g}_{a}\leq\tilde{F}_{a}\) on \(\tilde{\Delta}_{A_{2}}\) with equality at \((-1,1)\) or, equivalently, finding such an interpolant of the form
\[\tilde{g}_{a}(t_{1},t_{2}):=\tilde{F}_{a}(-1,1)+b_{1}t_{2}(t_{1}+t_{2}) \tag{51}\]
for \(b_{1}\geq 0\).
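For later reference, differentiating (51) directly gives

\[\frac{\partial\tilde{g}_{a}}{\partial t_{1}}=b_{1}t_{2},\qquad\frac{\partial\tilde{g}_{a}}{\partial t_{2}}=b_{1}(t_{1}+2t_{2}),\qquad\frac{\partial^{2}\tilde{g}_{a}}{\partial t_{1}\partial t_{2}}=b_{1},\qquad\frac{\partial^{2}\tilde{g}_{a}}{\partial t_{1}^{2}}=0;\]

in particular \(\frac{\partial\tilde{g}_{a}}{\partial t_{1}}\big{|}_{-1,1/2}=b_{1}/2\) and \(\frac{\partial\tilde{g}_{a}}{\partial t_{2}}\big{|}_{-1,1}=b_{1}\), the values used repeatedly below.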
Our formulas for \(\tilde{g}_{a}\) are defined piecewise3 in \(a\). We set
Footnote 3: We suspect that \(b_{1}\) need not be defined piecewise. In fact the choice \(b_{1}=\frac{\partial\tilde{F}}{\partial t_{2}}\big{|}_{-1,1}\) numerically appears to lead to \(\tilde{g}_{a}\leq\tilde{F}_{a}\) for all \(a>0\), but our simplest proofs come from this piecewise definition of \(b_{1}\).
\[b_{1}=\begin{cases}2\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,1/2}& \text{if }0<a<21\\ \frac{\partial\tilde{F}}{\partial t_{2}}\bigg{|}_{-1,1}&\text{if }a\geq 21.\end{cases} \tag{52}\]
Due to the convergence properties of the different formulas for \(\theta\) (see (20) and (21)), we find it convenient to rescale \(\tilde{F}\) by a factor of \(\frac{\sqrt{3}a}{\pi}\) in the small \(a\) case. We define

\[\tilde{f}_{1}(t_{1}):=\begin{cases}\tilde{\theta}(\frac{\pi}{a};t_{1}),&0<a\leq\pi^{2}\\ \sqrt{\frac{\pi}{a}}\,\tilde{\theta}(\frac{\pi}{a};t_{1}),&a>\pi^{2},\end{cases}\qquad\tilde{f}_{2}(t_{2}):=\begin{cases}\tilde{\theta}(\frac{\pi}{3a};t_{2}),&0<a\leq\pi^{2}\\ \sqrt{\frac{\pi}{3a}}\,\tilde{\theta}(\frac{\pi}{3a};t_{2}),&a>\pi^{2}.\end{cases} \tag{53}\]
With this rescaling convention, it follows from (32) that
\[\tilde{F}(t_{1},t_{2})=\tilde{f}_{1}(t_{1})\tilde{f}_{2}(t_{2})+\tilde{f}_{1} (-t_{1})\tilde{f}_{2}(-t_{2}). \tag{54}\]
### Constructing magic \(\tilde{g}_{a}\)
For all \(a>0\), we will establish
**Lemma 17**.: _For all points \((t_{1}^{\prime},t_{2}^{\prime})\in\tilde{\Delta}_{A_{2}}\), \(\left.\frac{\partial^{3}\tilde{F}}{\partial t_{1}\partial t_{2}^{2}}\right|_{ t_{1}^{\prime},t_{2}^{\prime}}\geq 0\)._
Proof.: Since even partial derivatives of \(\tilde{F}\) are positive, it suffices to check the inequality at the minimal \(t_{1}\) and \(t_{2}\) values, when \(t_{1}^{\prime}=-1\) and \(t_{2}^{\prime}=\frac{1}{2}\). This check is handled in the appendix, with the large \(a\) and small \(a\) cases handled separately.
Likewise, we have
**Lemma 18**.: _Let \(\tilde{h}\) be of the form \(\tilde{F}(-1,1)+c_{1}t_{2}(t_{1}+t_{2})\) such that \(\tilde{h}(-1,1/2)\leq\tilde{F}(-1,1/2)\) and \(\left.\frac{\partial(\tilde{F}-\tilde{h})}{\partial t_{2}}\right|_{-1,1}\leq 0\). Then for all \(t_{2}\in[1/2,1]\), \(\tilde{F}(-1,t_{2})\geq\tilde{h}(-1,t_{2})\)._
Proof.: We abuse notation here and use \(\tilde{F},\tilde{h}\) to refer to the one-variable functions in \(t_{2}\) obtained by fixing \(t_{1}=-1\). By the assumption on the form of \(\tilde{h}\), Lemma 16, and the two assumed inequalities, we have \(\tilde{F}(1/2)\geq\tilde{h}(1/2)\), \(\tilde{F}^{\prime}(1/2)=\tilde{h}^{\prime}(1/2)=0\), \(\tilde{F}(1)=\tilde{h}(1)\), and \(\tilde{F}^{\prime}(1)\leq\tilde{h}^{\prime}(1)\). It follows that there exists some point in \([1/2,1]\) at which \(\tilde{F}^{\prime\prime}\leq\tilde{h}^{\prime\prime}\). Let \(t_{2}^{\prime}\leq t_{2}^{\prime\prime}\) be such that \(t_{2}^{\prime}\) and \(t_{2}^{\prime\prime}\) respectively are the minimal and maximal points in \([1/2,1]\) at which \(\tilde{F}^{\prime\prime}\leq\tilde{h}^{\prime\prime}\). For \(t_{2}\geq t_{2}^{\prime\prime}\) we have \(\tilde{F}^{\prime\prime}\geq\tilde{h}^{\prime\prime}\) and so we get \(\tilde{F}\geq\tilde{h}\) by bounding \(\tilde{F}-\tilde{h}\) below with a tangent line of \(\tilde{F}-\tilde{h}\) at \(1\). Similarly, for \(t_{2}\leq t_{2}^{\prime}\), we get the desired inequality with the tangent approximation from \(\frac{1}{2}\). For \(t_{2}\in[t_{2}^{\prime},t_{2}^{\prime\prime}]\), we note that \(\tilde{F}^{\prime\prime}=\tilde{h}^{\prime\prime}\) at the endpoints of the interval and \((\tilde{F}-\tilde{h})^{\prime\prime}\) is convex (since \(\tilde{F}^{(4)}>0\)), so \((\tilde{F}-\tilde{h})^{\prime\prime}\leq 0\) on the whole interval. Thus, \(\tilde{F}(t_{2})\geq\tilde{h}(t_{2})\) by bounding the difference below with its secant line (since we've already established \(\tilde{F}\geq\tilde{h}\) at the endpoints).
#### 4.1.1 Small \(a\)
Let \(0<a\leq 21\). We will refer to \(\tilde{g}_{a}\) as simply \(\tilde{g}\).
Next, we claim that for this range of \(a\), the following inequalities hold:
\[\frac{\partial\tilde{F}}{\partial t_{2}}\bigg{|}_{-1,1},\ 4(\tilde{F}(-1,1)-\tilde{F}(-1,1/2))\ \leq\ 2\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,1/2}\ \leq\ \frac{\partial^{2}(\tilde{F}-\tilde{g})}{\partial t_{1}\partial t_{2}}\bigg{|}_{-1,1/2}.\]
Moreover, we claim that these \(3\) inequalities4 suffice to show that \(\tilde{g}\) is a magic interpolant for this range of \(a\).
Footnote 4: Though in the small \(a\) case, we have set \(b_{1}=2\frac{\partial\tilde{F}}{\partial t_{1}}\big{|}_{-1,1/2}\) for simplicity, in fact, we could set \(b_{1}\) to be any element of the (non-empty) interval \(\left[\max\left\{\frac{\partial\tilde{F}}{\partial t_{2}}\big{|}_{-1,1},\,4(\tilde{F}(-1,1)-\tilde{F}(-1,1/2))\right\},\ 2\frac{\partial\tilde{F}}{\partial t_{1}}\big{|}_{-1,1/2}\right]\) and the exact same proof would work.
Let's first address why these \(3\) inequalities are sufficient. Assume they hold. Then, first off, we certainly have \(b_{1}\geq 0\) since \(b_{1}=2\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,1/2}\geq\frac{\partial\tilde{F}}{\partial t_{2}}\bigg{|}_{-1,1}\geq 0\), where the first inequality holds by assumption and the next by Lemma 16. Next, we have
\[(\tilde{F}-\tilde{g})(-1,1/2)=\tilde{F}(-1,1/2)-\tilde{F}(-1,1)+\frac{1}{4}b_{1}\geq 0\]
and likewise
\[\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{2}}\right|_{-1,1}=\frac {\partial\tilde{F}}{\partial t_{2}}\bigg{|}_{-1,1}-b_{1}\leq 0.\]
Applying Lemma 18 and the previous two inequalities to \(\tilde{g}\), we obtain \(\tilde{F}\geq\tilde{g}\) for all points \((-1,t_{2})\) with \(t_{2}\in[1/2,1]\), and since \(\tilde{F}-\tilde{g}\) is convex in \(t_{1}\), it remains to show that \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\geq 0\) for all points of the form \((-1,t_{2})\), \(t_{2}\in[1/2,1]\) (recall the picture of \(\tilde{\Delta}_{A_{2}}\), Figure 4). By Lemma 17, \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\) is convex in \(t_{2}\) in \(\tilde{\Delta}_{A_{2}}\), so we just need to show
\[\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{-1,1/2}\geq 0, \frac{\partial^{2}(\tilde{F}-\tilde{g})}{\partial t_{1}\partial t_{2}}\bigg{|} _{-1,1/2}\geq 0.\]
But these follow directly from our assumptions on \(b_{1}\). Indeed,
\[\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{-1,1/2} =\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,1/2}-\frac{b_ {1}}{2}=0 \tag{55}\] \[\frac{\partial^{2}(\tilde{F}-\tilde{g})}{\partial t_{1}\partial t _{2}}\bigg{|}_{-1,1/2} =\frac{\partial^{2}\tilde{F}}{\partial t_{1}\partial t_{2}}\bigg{|} _{-1,1/2}-b_{1}\geq 0. \tag{56}\]
Thus, it remains to establish the following5 in the appendix. The proof for \(\pi^{2}<a\leq 21\) is the same as for \(0<a<\pi^{2}\), but we use estimates on \(\tilde{\theta}\) derived from the 'large \(a\)' formula for \(\tilde{\theta}\); so for this range it suffices to verify the inequalities of Lemma 19 for \(9.6\leq a\leq 21\).
Footnote 5: The reason we don't use this approach for all \(a>9.6\) is that Lemma 19 fails at roughly \(a=22\). Namely, the terms \(\frac{\partial\tilde{F}}{\partial t_{2}}\big{|}_{-1,1}\) and \(4(\tilde{F}(-1,1)-\tilde{F}(-1,1/2))\) both have leading exponential terms on the order of \(e^{-a/4}\), while \(\frac{\partial\tilde{F}}{\partial t_{1}}\big{|}_{-1,1/2}\) is on the order of \(e^{-a/3}\).
**Lemma 19**.: _For \(0<a\leq 21\), we have_
\[\frac{\partial\tilde{F}}{\partial t_{2}}\bigg{|}_{-1,1},\ 4(\tilde{F}(-1,1)-\tilde{F}(-1,1/2))\ \leq\ 2\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,1/2}\ \leq\ \frac{\partial^{2}(\tilde{F}-\tilde{g})}{\partial t_{1}\partial t_{2}}\bigg{|}_{-1,1/2}.\]
We handle the proof piecewise, splitting into \(2\) cases, \(0<a\leq\pi^{2}\) and \(\pi^{2}<a\leq 21\), depending on which formulas we use for \(\tilde{f}_{1}\) and \(\tilde{f}_{2}\).
#### 4.1.2 Large \(a\)
Throughout, we assume \(a\geq 21\) and refer to \(\tilde{g}_{a}\) as \(\tilde{g}\).
First, we show
**Lemma 20**.: \[\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{(-1,1)}\geq\frac{\partial \tilde{F}}{\partial t_{2}}\bigg{|}_{(-1,1)}\]
The proof is contained in the appendix, using bounds also contained there.
Now recall for this range of \(a\) that
\[b_{1}=\frac{\partial\tilde{F}}{\partial t_{2}}\bigg{|}_{-1,1}.\]
We first show that this choice of \(b_{1}\) ensures \((\tilde{F}-\tilde{g})(-1,1/2)\geq 0\):
**Lemma 21**.: _For \(a\geq 21\), \((\tilde{F}-\tilde{g})(-1,1/2)\geq 0\)._
The proof is contained in the appendix using our bounds on \(\theta\). Along with our selection of \(b_{1}\), we can then apply Lemma 18 to conclude \(\tilde{F}\geq\tilde{g}\) whenever \(t_{1}=-1\) in \(\tilde{\Delta}_{A_{2}}\). Similarly, the inequality \(\tilde{F}\geq\tilde{g}\) for all points \((t_{1},1)\) with \(t_{1}\in[-1,1]\) is immediate from Lemma 20, our choice of \(b_{1}\), and the convexity of \(\tilde{F}-\tilde{g}\) in \(t_{1}\).
Next, we show that if \(t_{1}\leq\cos(2\pi\frac{\sqrt{3}}{4})\), we have \(\tilde{F}\geq\tilde{g}\) in \(\tilde{\Delta}_{A_{2}}\), and in fact we'll show the stronger claim that \(\tilde{F}\geq\tilde{g}\) in the region \([-1,\cos(2\pi\frac{\sqrt{3}}{4})]\times[1/2,1]\) (corresponding to rectangle A in Figure 11).
To do so, we'll use that the log derivative of \(\tilde{\theta}\) is strictly completely monotone (cf. Proposition 11). One consequence of this property is that if \(H_{\tilde{F}}\) denotes the Hessian matrix of \(\tilde{F}\), we have for all \(t_{1},t_{2}\in[-1,1]\) that
\[\det(H_{\tilde{F}})<0.\]
Indeed, recalling the formulas,
\[f_{1}(x_{1})=\sqrt{\frac{\pi}{a}}\,\theta(\frac{\pi}{a};x_{1})=\sum_{k=-\infty}^{\infty}e^{-a(k+x_{1})^{2}} \tag{57}\] \[f_{2}(x_{2})=\sqrt{\frac{\pi}{3a}}\,\theta(\frac{\pi}{3a};x_{2})=\sum_{k=-\infty}^{\infty}e^{-3a(k+x_{2})^{2}}, \tag{58}\]
we obtain in the \(A_{2}\) case,
\[\tilde{F}(t_{1},t_{2})=\tilde{f}_{1}(t_{1})\tilde{f}_{2}(t_{2})+\tilde{f}_{1 }(-t_{1})\tilde{f}_{2}(-t_{2}).\]
By Proposition 11, for \(i\in\{1,2\}\),
\[0>\left(\frac{\tilde{f}_{i}^{\prime}}{\tilde{f}_{i}}\right)^{\prime}=\frac{ \tilde{f}_{i}^{\prime\prime}\tilde{f}_{i}-(\tilde{f}_{i}^{\prime})^{2}}{ \tilde{f}_{i}^{2}}\]
and so
\[\tilde{f}_{i}^{\prime\prime}\tilde{f}_{i}<(\tilde{f}_{i}^{\prime})^{2}\]
Thus, we compute
\[\begin{split}\det(H_{\tilde{F}})&=\tilde{f}_{1}^{\prime\prime}(t_{1})\tilde{f}_{2}(t_{2})\tilde{f}_{2}^{\prime\prime}(t_{2})\tilde{f}_{1}(t_{1})-(\tilde{f}_{1}^{\prime}(t_{1})\tilde{f}_{2}^{\prime}(t_{2}))^{2}\\ &\quad+\tilde{f}_{1}^{\prime\prime}(-t_{1})\tilde{f}_{2}(-t_{2})\tilde{f}_{2}^{\prime\prime}(-t_{2})\tilde{f}_{1}(-t_{1})-(\tilde{f}_{1}^{\prime}(-t_{1})\tilde{f}_{2}^{\prime}(-t_{2}))^{2}\\ &=(\tilde{f}_{1}^{\prime\prime}(t_{1})\tilde{f}_{1}(t_{1}))(\tilde{f}_{2}^{\prime\prime}(t_{2})\tilde{f}_{2}(t_{2}))-\tilde{f}_{1}^{\prime}(t_{1})^{2}\tilde{f}_{2}^{\prime}(t_{2})^{2}\\ &\quad+(\tilde{f}_{1}^{\prime\prime}(-t_{1})\tilde{f}_{1}(-t_{1}))(\tilde{f}_{2}^{\prime\prime}(-t_{2})\tilde{f}_{2}(-t_{2}))-\tilde{f}_{1}^{\prime}(-t_{1})^{2}\tilde{f}_{2}^{\prime}(-t_{2})^{2}\\ &<0\end{split}\]
as desired.
**Definition 4.1**.: Fix \(b,c\in[-1,1]\). Then we define
\[\tilde{g}_{b,c}:=\tilde{g}-b_{1}t_{1}t_{2}+b_{1}(bt_{2}+ct_{1}-bc).\]
Note that if \(t_{1}\geq b\), \(t_{2}\leq c\), then
\[(t_{1}-b)(t_{2}-c)\leq 0\]
and so \(t_{1}t_{2}\leq bt_{2}+ct_{1}-bc\). Thus, if \(t_{1}\geq b\), \(t_{2}\leq c\), then \(\tilde{g}\leq\tilde{g}_{b,c}\) with equality if \(t_{1}=b\) or \(t_{2}=c\).
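Equivalently, the comparison above is just the factorization

\[\tilde{g}-\tilde{g}_{b,c}=b_{1}\big(t_{1}t_{2}-(bt_{2}+ct_{1}-bc)\big)=b_{1}(t_{1}-b)(t_{2}-c),\]

which is \(\leq 0\) precisely where \((t_{1}-b)(t_{2}-c)\leq 0\) (recall \(b_{1}\geq 0\)).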
The useful thing about \(\tilde{g}_{b,c}\) for any choice of \(b,c\) is that it satisfies
\[\det(H_{\tilde{F}-\tilde{g}_{b,c}})=\det(H_{\tilde{F}})-2b_{1}\big(\tilde{f}_{1}^{\prime\prime}(t_{1})\tilde{f}_{2}(t_{2})+\tilde{f}_{1}^{\prime\prime}(-t_{1})\tilde{f}_{2}(-t_{2})\big)<\det(H_{\tilde{F}})<0.\]
Thus, to check \(\tilde{F}\geq\tilde{g}_{b,c}\) on a rectangular region, it suffices to consider the boundary by the second derivative test. If that rectangular region is chosen such that \(\tilde{g}_{b,c}\geq\tilde{g}\), then we have a means of showing \(\tilde{F}\geq\tilde{g}\) without needing to consider a full 2-dimensional region.
**Lemma 22**.: _For \((t_{1},t_{2})\in[-1,\cos(2\pi\frac{\sqrt{3}}{4})]\times[\frac{1}{2},1]\), \(\tilde{F}\geq\tilde{g}\)._
Proof.: First, we show the inequality for the rectangle \([-1,\cos(2\pi\frac{\sqrt{3}}{4})]\times[.7,1]\). As explained above, it suffices to prove that \(\tilde{F}\geq\tilde{g}_{-1,1}\) on the boundary of the rectangle. Since \(\tilde{g}_{-1,1}=\tilde{g}\leq\tilde{F}\) when \(t_{1}=-1\) or \(t_{2}=1\), it suffices to prove the inequality for the other two sides.
There, we use our approximations of \(\theta\) and \(b_{1}\) to provide a simple upper bound, call it \(\tilde{g}^{\prime}_{-1,1}\) for \(\tilde{g}_{-1,1}\). The key feature of this upper bound is that \(e^{a/4}\tilde{g}^{\prime}_{-1,1}\) is linear in \(a\). Meanwhile, as a lower bound for \(\tilde{F}\), we use the function
\[\tilde{F}_{T}:=(e^{-ax^{2}}+e^{-a(x-1)^{2}})e^{-3au^{2}}+e^{-a((\frac{1}{2}-x) ^{2}+3(\frac{1}{2}-u)^{2})}\]
where
\[x=\frac{\arccos(t_{1})}{2\pi},\ \ \ \ u=\frac{\arccos(t_{2})}{2\pi}.\]
We know \(\tilde{F}_{T}\leq\tilde{F}\) because we have simply truncated positive terms from the formulas for \(f_{1}\) and \(f_{2}\) to obtain \(\tilde{F}_{T}\). It's also straightforward to see that at any point, \(e^{a/4}\tilde{F}_{T}\) is convex in \(a\).
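Indeed, this convexity can be seen term by term: each summand of \(\tilde{F}_{T}\) is of the form \(e^{-aq}\) for some constant \(q=q(x,u)\geq 0\) depending on the point, and

\[\frac{d^{2}}{da^{2}}\left(e^{a/4}e^{-aq}\right)=\left(\tfrac{1}{4}-q\right)^{2}e^{(\frac{1}{4}-q)a}\geq 0,\]

so \(e^{a/4}\tilde{F}_{T}\), a finite sum of such terms, is convex in \(a\).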
Thus, the difference \(e^{a/4}(\tilde{F}_{T}-\tilde{g}^{\prime}_{-1,1})\) is convex in \(a\), and so to show that \(\tilde{F}_{T}\geq\tilde{g}_{-1,1}\) at some \(t^{\prime}_{1},t^{\prime}_{2}\) for all \(a\geq 21\), it suffices to show \(\tilde{F}_{T}(t^{\prime}_{1},t^{\prime}_{2})\geq\tilde{g}^{\prime}_{-1,1}(t^ {\prime}_{1},t^{\prime}_{2})\) when \(a=21\) and also that
\[\frac{\partial\left[e^{a/4}(\tilde{F}_{T}-\tilde{g}^{\prime}_{-1,1})\right]}{ \partial a}\bigg{|}_{a=21,t_{1}=t^{\prime}_{1},t_{2}=t^{\prime}_{2}}\geq 0.\]
These computations are carried out in the appendix.
For the remaining region \([-1,\cos(2\pi\frac{\sqrt{3}}{4})]\times[.5,.7]\), we carry out the same procedure with \(\tilde{g}_{-1,.7}\) functioning as our upper bound for \(\tilde{g}\) when \(t_{2}\in[.6,.7]\) and \(\tilde{g}_{-1,.6}\) functioning as our upper bound when \(t_{2}\in[.5,.6]\). For the former case, when \(t_{2}=.7\) or \(t_{1}=-1\), \(\tilde{g}_{-1,.7}=\tilde{g}\), which we have just shown stays below \(\tilde{F}\) on those segments. So it remains to show that the inequality holds on the sides where \(t_{1}=\cos(2\pi\frac{\sqrt{3}}{4})\) and \(t_{2}=.6\). Likewise, we only need to show the inequality \(\tilde{g}_{-1,.6}\leq\tilde{F}_{T}\) when \(t_{1}=\cos(2\pi\frac{\sqrt{3}}{4})\), \(t_{2}\in[1/2,.6]\) and when \(t_{2}=1/2\), \(t_{1}\in[-1,\cos(2\pi\frac{\sqrt{3}}{4})]\). These computations are carried out as in the previous case in the appendix.
Finally, we show that \(\tilde{F}-\tilde{g}\) increases in \(t_{1}\) for every point in \(\tilde{\Delta}_{A_{2}}\) with \(t_{1}\geq\cos(2\pi\frac{\sqrt{3}}{4})\).
**Lemma 23**.: _For all \(a\geq 21\) and every \(p=(t_{1},t_{2})\in\tilde{\Delta}_{A_{2}}\) with \(t_{1}\geq\cos(2\pi\frac{\sqrt{3}}{4})\), \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{p}\geq 0\)._
Proof.: Because of the convexity of the difference in \(t_{1}\) and Lemma 17, it suffices to show that at \(P=(\cos(2\pi\frac{\sqrt{3}}{4}),\cos(2\pi\frac{\sqrt{3}}{12})):=(t^{\prime}_{ 1},t^{\prime}_{2})\),
\[\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{P}\geq 0,\ \ \ \ \ \ \ \frac{\partial^{2}(\tilde{F}-\tilde{g})}{\partial t_{1}\partial t_{2}} \bigg{|}_{P}\geq 0\]
which is handled in the appendix.
## 5 Universal Optimality of \(\omega_{6}^{*}\)
We consider the \(m=1\) case of the interpolation problem from Section 3.6. In this case we have interpolation conditions at the nodes \(\tilde{\tau}_{6}=\{(-1,-1),(1,-\frac{1}{2}),(-1,\frac{1}{2})\}\). Using the same rescaling convention as in the previous section, we have
\[\tilde{F}(t_{1},t_{2})=\tilde{f}_{1}(t_{1})\tilde{f}_{2}(t_{2}),\]
where \(\tilde{f}_{1},\tilde{f}_{2}\) are as in (53). For \(m=1\), we may choose an interpolant \(\tilde{g}=\tilde{g}_{a}\in\mathcal{P}_{1}(t_{1})\times\mathcal{P}_{2}(t_{2})\); i.e., \(\tilde{g}\) of the form
\[\tilde{g}(t_{1},t_{2})=\sum_{i=0}^{1}\sum_{j=0}^{2}b_{i,j}t_{1}^{i}t_{2}^{j}.\]
We require that \(\tilde{g}\) and \(\tilde{F}\) agree at the three points in \(\tilde{\tau}_{6}\) and remark that the condition \(\tilde{g}\leq\tilde{F}\) further requires \(\partial\tilde{g}/\partial t_{2}=\partial\tilde{F}/\partial t_{2}\) at the points \((-1,1/2)\) and \((1,-1/2)\) giving a total of \(5\) linearly independent conditions on \(\mathcal{P}_{1}(t_{1})\times\mathcal{P}_{2}(t_{2})\).
Noting that \(q(t_{1},t_{2})=(1+t_{1})(t_{2}+1/2)^{2}\) vanishes on \(\tilde{\tau}_{6}\) and that \(\frac{\partial q}{\partial t_{2}}\) vanishes on \(\{(-1,1/2),(1,-1/2)\}\) shows that \(\tilde{g}\) can be written as
\[\tilde{g}(t_{1},t_{2})=\frac{(1-t_{1})}{2}\tilde{f}_{1}(-1)H_{\{-1,\frac{1}{2 },\frac{1}{2}\}}(\tilde{f}_{2})(t_{2})+\frac{(1+t_{1})}{2}\tilde{f}_{1}(1)H_{\{ -\frac{1}{2},-\frac{1}{2}\}}(\tilde{f}_{2})(t_{2})+cq(t_{1},t_{2}), \tag{64}\]
where \(H_{T}(f)\) is the Hermite interpolant to \(f\) on the node set \(T\) which can be expressed in terms of divided differences (see Appendix A). In particular,
\[H_{\{-1,\frac{1}{2},\frac{1}{2}\}}(\tilde{f}_{2})(t_{2})=\tilde{f}_{2}(-1)+ \tilde{f}_{2}[-1,\frac{1}{2}](t_{2}+1)+\tilde{f}_{2}[-1,\frac{1}{2},\frac{1}{ 2}](t_{2}+1)(t_{2}-\frac{1}{2}),\]
and \(H_{\{-\frac{1}{2},-\frac{1}{2}\}}(\tilde{f}_{2})(t_{2})=\tilde{f}_{2}(-\frac{1}{2})+\tilde{f}_{2}^{\,\prime}(-\frac{1}{2})(t_{2}+\frac{1}{2})\). Since \(T_{1}(t)=t\) and \(T_{2}(t)=2t^{2}-1\), it easily follows that \(\tilde{g}\) is PSD if and only if \(b_{i,j}\geq 0\) for \((i,j)\neq 0\). From (64), it follows that \(b_{1,2}=-\frac{1}{2}\tilde{f}_{1}(-1)\tilde{f}_{2}[-1,\frac{1}{2},\frac{1}{2}]+c\). Observing that \(q\geq 0\) on \([-1,1]^{2}\), we choose \(c=\frac{1}{2}\tilde{f}_{1}(-1)\tilde{f}_{2}[-1,\frac{1}{2},\frac{1}{2}]\), as small as possible, in which case \(b_{1,2}=0\).
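For reference, the divided differences appearing above are, with the standard convention \(\tilde{f}_{2}[\frac{1}{2},\frac{1}{2}]=\tilde{f}_{2}^{\,\prime}(\frac{1}{2})\) for a repeated node,

\[\tilde{f}_{2}[-1,\tfrac{1}{2}]=\frac{\tilde{f}_{2}(\tfrac{1}{2})-\tilde{f}_{2}(-1)}{3/2},\qquad\tilde{f}_{2}[-1,\tfrac{1}{2},\tfrac{1}{2}]=\frac{\tilde{f}_{2}^{\,\prime}(\tfrac{1}{2})-\tilde{f}_{2}[-1,\tfrac{1}{2}]}{3/2}.\]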
In addition, the following derivative equality,
\[\tilde{f}_{1}(-1)\tilde{f}_{2}^{\,\prime}(-1/2)=\tilde{f}_{1}(1)\tilde{f}_{2}^{\,\prime}(1/2), \tag{65}\]
proved in [1] (also see [11]) implies that \(b_{1,1}=b_{0,2}\). Hence we may express \(\tilde{g}\) in the form
\[\tilde{g}(t_{1},t_{2})=a_{0,0}+a_{1,0}t_{1}+a_{0,1}t_{2}+a_{0,2}(t_{1}t_{2}+t_ {2}^{2}+1/4), \tag{66}\]
where \(a_{0,0}=b_{0,0}-b_{0,2}/4\) and \(a_{i,j}=b_{i,j}\) otherwise.
Figure 12: \(\tilde{g}\) has 5 necessary equality interpolation conditions in order to provide a sharp bound, the 3 value conditions from \(\tilde{\tau}_{6}\), plus two derivative conditions.
From (64), we then compute
\[a_{0,1} =\tilde{f}_{1}(-1)\tilde{f}_{2}^{\prime}(-1/2) \tag{67}\] \[a_{0,0} =\frac{\tilde{f}_{1}(1)\tilde{f}_{2}(-1/2)+\tilde{f}_{1}(-1)\tilde{ f}_{2}(1/2)}{2}\] (68) \[a_{1,0} =\frac{\tilde{f}_{1}(1)\tilde{f}_{2}(-1/2)-\tilde{f}_{1}(-1) \tilde{f}_{2}(1/2)}{2}+\frac{a_{0,1}}{2}\] (69) \[a_{0,2} =\tilde{f}_{1}(-1)\tilde{f}_{2}[-1,\frac{1}{2},\frac{1}{2}]=\frac {4}{9}(\tilde{f}_{1}(-1)\tilde{f}_{2}(-1)+a_{0,1}+a_{1,0}-a_{0,0}). \tag{70}\]
### Verifying properties of \(\tilde{g}\)
It remains to verify that the \(a_{i,j}\)'s are nonnegative and \(\tilde{g}\leq\tilde{F}\).
**Lemma 24**.: \(\tilde{F}(-1,t_{2})\geq\tilde{g}(-1,t_{2})\) _for all \(t_{2}\in[-1,1]\)._
Proof.: Note that \(\tilde{F}(-1,t_{2})=\tilde{f}_{1}(-1)\tilde{f}_{2}(t_{2})\) is strictly absolutely monotone as a function of \(t_{2}\) on [-1,1] and so the inequality follows from the error formula (82) with strict inequality if \(t_{2}\neq-1,1/2\).
**Proposition 25**.: \(a_{0,0},a_{1,0},a_{0,1},a_{0,2}\geq 0\)_._
Proof.: That \(a_{0,1}\), \(a_{0,0}\), \(a_{0,2}\) are nonnegative follows immediately from the absolute monotonicity of \(\tilde{f}_{1}\) and \(\tilde{f}_{2}\).
Finally, by Lemma 24, \(\tilde{g}(-1,-1/2)<\tilde{F}(-1,-1/2)\). Moreover, by definition, \(\tilde{g}(1,-1/2)=\tilde{F}(1,-1/2)\). Since \((\tilde{F}-\tilde{g})(t_{1},-1/2)\) is convex in \(t_{1}\), we must have
\[a_{1,0}-\frac{1}{2}a_{0,2}=\left.\frac{\partial\tilde{g}}{\partial t_{1}} \right|_{-1,-1/2}\geq\left.\frac{\partial\tilde{F}}{\partial t_{1}}\right|_{ -1,-1/2}>0.\]
So \(a_{1,0}-\frac{1}{2}a_{0,2}>0\) which implies \(a_{1,0}>0\) since we have already established \(a_{0,2}\geq 0\).
Thus, \(\tilde{g}\) has a nonnegative Chebyshev expansion as desired.
### Necessary Conditions Fulfilled and \(\tilde{F}\geq\tilde{g}\) when \(|t_{2}|\in[1/2,1]\)
Finally, we must show \(\tilde{g}\leq\tilde{F}\) on \([-1,1]^{2}\), which is by far our most technical task. Our equality conditions on \(\tilde{g}\) yield necessary inequality conditions (for \(\tilde{g}\leq\tilde{F}\)) on the derivatives of \(\tilde{g}\). First, we have the necessary conditions
\[\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_ {-1,-1} \geq 0 \tag{71}\] \[\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_ {-1,\frac{1}{2}} \geq 0 \tag{72}\]
Likewise, we have
\[\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{1,-\frac{1}{2}}\leq 0 \tag{73}\]
each of which we show is satisfied in the appendix. These necessary conditions are actually sufficient to obtain \(\tilde{F}\geq\tilde{g}\) when \(|t_{2}|\in[1/2,1]\) without any more calculations.
**Proposition 26**.: _If \(\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{(-1,t_{2}^ {\prime})}\geq 0\) and \(\tilde{f}_{1}(-1)\tilde{f}_{2}(t_{2}^{\prime})\geq\tilde{g}(-1,t_{2}^{\prime})\) for some \(t_{2}^{\prime}\in[-1,1]\), then_
\[\tilde{f}_{1}(t_{1})\tilde{f}_{2}(t_{2}^{\prime})\geq\tilde{g}(t_{1},t_{2}^{ \prime})\]
_for all \(t_{1}\in[-1,1]\) with strict inequality if \(t_{1}\neq-1\). The same inequality holds if \(\left.\frac{\partial(\tilde{f}_{1}\tilde{f}_{2}-\tilde{g})}{\partial t_{1}} \right|_{(1,t_{2})}\leq 0\) and \(\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2})\geq\tilde{g}(1,t_{2})\), with strict inequality if \(t_{1}\neq 1\)._
Proof.: Note that for fixed \(t_{2}\), \(\tilde{g}(t_{1},t_{2})\) is linear in \(t_{1}\). Thus by the absolute monotonicity of \(\tilde{f}_{1}\), \(\tilde{F}-\tilde{g}\) is convex in \(t_{1}\). It follows that if \(\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{(-1,t_{2}^ {\prime})}\geq 0\) and \(\tilde{f}_{1}(-1)\tilde{f}_{2}(t_{2}^{\prime})=\tilde{g}(-1,t_{2}^{\prime})\), then
\[\tilde{f}_{1}(t_{1})\tilde{f}_{2}(t_{2}^{\prime})\geq\tilde{g}(t_{1},t_{2}^{ \prime})\]
for all \(t_{1}\in[-1,1]\) with strict inequality for \(t_{1}>-1\). Again invoking the convexity of \(\tilde{F}-\tilde{g}\) in \(t_{1}\), we obtain the same conclusion if \(\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{(1,t_{2}^{\prime})}\leq 0\) and \(\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2}^{\prime})\geq\tilde{g}(1,t_{2}^{\prime})\), with strict inequality if \(t_{1}\neq 1\).
**Lemma 27**.: \(\tilde{F}(t_{1},t_{2})\geq\tilde{g}(t_{1},t_{2})\) _in \(\tilde{\square}_{L}\) whenever \(t_{2}\geq 1/2\), with equality only at \((-1,1/2)\)._
Proof.: From our necessary conditions, we have
\[\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{-1,1/2}\geq 0,\hskip 14.226378pt\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}} \right|_{1,-1/2}\leq 0.\]
Since \(\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{t_{1},-1/2}\) is increasing in \(t_{1}\), \(\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{1,-1/2}\leq 0\) implies \(\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{-1,-1/2}\leq 0\). We also know that \(\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{-1,t_{2}}\) is convex in \(t_{2}\) by the absolute monotonicity of \(\tilde{f}_{1}\) and \(\tilde{f}_{2}\), from which we obtain \(\left.\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\right|_{-1,t_{2}}\geq 0\) for all \(t_{2}\geq 1/2\). Indeed, fixing \(t_{1}=-1\), we can bound \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\) below for \(t_{2}\geq 1/2\) by its tangent approximation at \(t_{2}=1/2\), which is positive at \(t_{2}=1/2\) and has positive slope since the average slope from \(t_{2}=-1/2\) to \(t_{2}=1/2\) is positive. Finally, Lemma 24 yields \(\tilde{F}(-1,t_{2})\geq\tilde{g}(-1,t_{2})\) for all \(t_{2}\in[1/2,1]\). Combining these two facts, we invoke Proposition 26 to assert \(\tilde{F}-\tilde{g}\geq 0\) whenever \(t_{2}\geq 1/2\).
Next, we have \(\tilde{F}\geq\tilde{g}\) when \(t_{1}=1\).
**Proposition 28**.: \(\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2})\geq\tilde{g}(1,t_{2})\) _for all \(t_{2}\in[-1,1]\) with equality only at \(t_{2}=-1/2\)._
Proof.: Suppose by way of contradiction that there exists \(t_{2}^{\prime}\in[-1,1]\) such that \(t_{2}^{\prime}\neq-\frac{1}{2}\) and \(\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2}^{\prime})\leq\tilde{g}(1,t_{2}^{\prime})\). Then there must be some point \(-1/2\neq p\in[-1,1]\) such that \(\tilde{f}_{1}(1)\tilde{f}_{2}(p)=\tilde{g}(1,p)\). Indeed, either \(t_{2}^{\prime}\) is such a point, or \(\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2}^{\prime})<\tilde{g}(1,t_{2}^{\prime})\). We have from our necessary derivative conditions and Lemma 27 that \(\tilde{f}_{1}(1)\tilde{f}_{2}(\pm 1)>\tilde{g}(1,\pm 1)\), which yields two cases for \(t_{2}^{\prime}\). If \(t_{2}^{\prime}<-1/2\), then there exists \(p\in(-1,t_{2}^{\prime})\) such that \(\tilde{f}_{1}(1)\tilde{f}_{2}(p)=\tilde{g}(1,p)\) by the intermediate value theorem. If \(t_{2}^{\prime}>-1/2\), instead apply the intermediate value theorem to \(t_{2}=t_{2}^{\prime},1\) to get our desired \(p\in(t_{2}^{\prime},1)\).
Similarly to the proof that \(\tilde{F}\geq\tilde{g}\) for \(t_{1}=-1\), \(\tilde{g}(1,t_{2})\) is a polynomial of degree at most \(2\), and interpolates \(\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2})\) for \(T=\{p,-1/2,-1/2\}\). By the uniqueness of Hermite interpolants, \(\tilde{g}\) satisfies the error formula, that is,
\[\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2})-\tilde{g}(1,t_{2})=\frac{\tilde{f}_{1}(1)\tilde{f}_{2}^{\,(3)}(\xi)}{3!}(t_{2}-p)(t_{2}+1/2)^{2}\]
for some \(\xi\in[-1,1]\). By absolute monotonicity of \(\tilde{f}_{2}\), \(\tilde{f}_{1}(1)\tilde{f}_{2}^{\,(3)}(\xi)\geq 0\). But for \(t_{2}=-1\), \((t_{2}-p)(t_{2}+1/2)^{2}<0\), implying \(\tilde{f}_{1}(1)\tilde{f}_{2}(-1)<\tilde{g}(1,-1)\), a contradiction. We conclude \(\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2})\geq\tilde{g}(1,t_{2})\) for all \(t_{2}\in[-1,1]\) as desired.
**Proposition 29**.: _For all \((t_{1},t_{2})\in[-1,1]\times[-1,-1/2]\), \(\tilde{f}_{1}(t_{1})\tilde{f}_{2}(t_{2})\geq\tilde{g}(t_{1},t_{2})\)._
Proof.: From Lemma 24, we have \(\tilde{F}\geq\tilde{g}\) for \(t_{1}=-1\). By the necessary derivative conditions and Proposition 26, we have the same inequality when \(t_{2}=-1/2,-1\). Finally, by Proposition 28, we have the inequality for \(t_{1}=1\). All of these inequalities are strict in \([-1,1]\times[-1,-1/2]\) except for at \((-1,-1)\) and \((1,-1/2)\). Thus, it remains to show the inequality on the interior. For an arbitrary point \(p=(p_{1},p_{2})\) on the interior, the line \(l\) through \(p\) and \((1,-1/2)\) has positive slope. Let its direction vector in the positive \(t_{1}\) direction be \(u_{l}\).
Let \(\tilde{F}^{l}(r)\) and \(\tilde{g}^{l}\) be the one-variable functions obtained by parametrizing \(\tilde{F}\) and \(\tilde{g}\) along \(l\) with constant speed so that \(\tilde{F}^{l}(1)=\tilde{F}(1,-1/2)\) and \(\tilde{F}^{l}(0)=\tilde{F}(p^{\prime})\) for some \(p^{\prime}\) on the boundary of \([-1,1]\times[-1,-1/2]\) where \(t_{1}=-1\) or \(t_{2}=-1\). Since \(l\) has positive slope, and \(\tilde{f}_{1}\), \(\tilde{f}_{2}\) are both strictly absolutely monotone, \(\tilde{F}^{l}\) is also strictly absolutely monotone on \([0,1]\). Likewise, \(\tilde{g}^{l}\) is a polynomial of total degree at most \(2\) since \(\tilde{g}\) is a polynomial of total degree \(2\).
We know for all \(\epsilon>0\) sufficiently small that \((\tilde{F}^{l}-\tilde{g}^{l})(\epsilon)>0\). Indeed, if \(l\) does not go through \((-1,-1)\), then \((\tilde{F}^{l}-\tilde{g}^{l})(0)>0\) and the result follows by continuity, and if \(l\) does go through \((-1,-1)\), then we can use the necessary derivative conditions to see
\[\nabla(\tilde{F}-\tilde{g})\bigg{|}_{-1,-1}\cdot u_{l}>0,\]
from which the inequality holds for all desired \(\epsilon\). Similarly, the necessary derivative inequality and equality conditions at \((1,-1/2)\) imply \(\nabla(\tilde{F}-\tilde{g})\bigg{|}_{1,-1/2}\cdot u_{l}<0\). Together with the fact that \((\tilde{F}^{l}-\tilde{g}^{l})(1)=0\), we get \((\tilde{F}^{l}-\tilde{g}^{l})(1-\epsilon)>0\) for all \(\epsilon\) sufficiently small.
Now supposing for a contradiction that \((\tilde{F}^{l}-\tilde{g}^{l})(r^{\prime})<0\) for some \(r^{\prime}\in[0,1]\), we apply the intermediate value theorem twice to get some points \(0<r_{1}<r^{\prime}<r_{2}<1\) such that \((\tilde{F}^{l}-\tilde{g}^{l})(r_{1})=(\tilde{F}^{l}-\tilde{g}^{l})(r_{2})=0\). But now as with the previous lemmas, \(\tilde{g}^{l}\) is a polynomial of degree at most \(2\) which interpolates \(\tilde{F}^{l}\) for \(T=\{r_{1},r_{2},1\}\), leading to a contradiction with the error formula applied at \(0\) to \(\tilde{F}^{l}-\tilde{g}^{l}\). We conclude that \(\tilde{F}^{l}\geq\tilde{g}^{l}\) on \([0,1]\) as desired, and in fact, a similar argument shows we have strict inequality on \((0,1)\).
Thus, we have proved \(\tilde{F}\geq\tilde{g}\) on \([-1,1]^{2}\) whenever \(t_{2}\geq 1/2\) or \(t_{2}\leq-1/2\). Our proof of the inequality when \(-1/2\leq t_{2}\leq 1/2\), our so-called 'critical region,' requires much more calculation.
### The critical region for small \(a\) (\(a<\pi^{2}\))
For \(a<\pi^{2}\), we take a linear approximation approach. Recall that for fixed \(t_{2}^{\prime}\), \((\tilde{F}-\tilde{g})(t_{1},t_{2}^{\prime})\) is convex in \(t_{1}\). Thus, for \(-1\leq t_{1}^{\prime}\leq 0\),

\[(\tilde{F}-\tilde{g})(t_{1}^{\prime},t_{2}^{\prime})\geq(\tilde{F}-\tilde{g})(-1,t_{2}^{\prime})+(t_{1}^{\prime}+1)\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{-1,t_{2}^{\prime}}=\tilde{F}(-1,t_{2}^{\prime})+(t_{1}^{\prime}+1)\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,t_{2}^{\prime}}-\tilde{g}(t_{1}^{\prime},t_{2}^{\prime})\]
since \(\tilde{g}\) is linear in \(t_{1}\). Since we already have \((\tilde{F}-\tilde{g})(-1,t_{2}^{\prime})\geq 0\), if the right-hand side above is nonnegative for \(t_{1}^{\prime}=0\), then it is nonnegative for all \(t_{1}^{\prime}\leq 0\). Thus, to show \((\tilde{F}-\tilde{g})(t_{1}^{\prime},t_{2}^{\prime})\geq 0\) for all \(t_{1}^{\prime}\leq 0\), \(\frac{-1}{2}\leq t_{2}^{\prime}\leq\frac{1}{2}\), we need only show for all \(t_{2}^{\prime}\in[\frac{-1}{2},\frac{1}{2}]\) that

\[(\tilde{F}-\tilde{g})(-1,t_{2}^{\prime})+\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{-1,t_{2}^{\prime}}=\tilde{F}(-1,t_{2}^{\prime})+\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,t_{2}^{\prime}}-\tilde{g}(0,t_{2}^{\prime})\geq 0.\]
Likewise, when \(0\leq t_{1}^{\prime}\leq 1\),

\[(\tilde{F}-\tilde{g})(t_{1}^{\prime},t_{2}^{\prime})\geq(\tilde{F}-\tilde{g})(1,t_{2}^{\prime})+(t_{1}^{\prime}-1)\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{1,t_{2}^{\prime}}=\tilde{F}(1,t_{2}^{\prime})+(t_{1}^{\prime}-1)\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{1,t_{2}^{\prime}}-\tilde{g}(t_{1}^{\prime},t_{2}^{\prime}).\]

Since we already have \((\tilde{F}-\tilde{g})(1,t_{2}^{\prime})\geq 0\), if the right-hand side above is nonnegative for \(t_{1}^{\prime}=0\), then it is nonnegative for all \(t_{1}^{\prime}\geq 0\). Thus, to show \((\tilde{F}-\tilde{g})(t_{1}^{\prime},t_{2}^{\prime})\geq 0\) for all \(t_{1}^{\prime}\geq 0\), \(\frac{-1}{2}\leq t_{2}^{\prime}\leq\frac{1}{2}\), we need only show for all \(t_{2}^{\prime}\in[\frac{-1}{2},\frac{1}{2}]\) that

\[(\tilde{F}-\tilde{g})(1,t_{2}^{\prime})-\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{1,t_{2}^{\prime}}=\tilde{F}(1,t_{2}^{\prime})-\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{1,t_{2}^{\prime}}-\tilde{g}(0,t_{2}^{\prime})\geq 0.\]
Now we claim for all \(t_{2}^{\prime}\) that

\[\tilde{F}(-1,t_{2}^{\prime})+\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,t_{2}^{\prime}}\geq\tilde{F}(1,t_{2}^{\prime})-\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{1,t_{2}^{\prime}},\]

which would imply that showing \(\tilde{F}(1,t_{2}^{\prime})-\frac{\partial\tilde{F}}{\partial t_{1}}\big{|}_{1,t_{2}^{\prime}}-\tilde{g}(0,t_{2}^{\prime})\geq 0\) suffices to prove \(\tilde{F}\geq\tilde{g}\) in the entire critical region. The claimed inequality holds because \(\tilde{F}(-1,t_{2}^{\prime})+\frac{\partial\tilde{F}}{\partial t_{1}}\big{|}_{-1,t_{2}^{\prime}}\) and \(\tilde{F}(1,t_{2}^{\prime})-\frac{\partial\tilde{F}}{\partial t_{1}}\big{|}_{1,t_{2}^{\prime}}\) are both tangent approximations of \(\tilde{F}(0,t_{2}^{\prime})\) from \(t_{1}=-1,1\) respectively. Using the Lagrange remainder of the Taylor polynomial, we have

\[\tilde{F}(-1,t_{2}^{\prime})+\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{-1,t_{2}^{\prime}}=\tilde{F}(0,t_{2}^{\prime})-\frac{\tilde{f}_{1}^{\,\prime\prime}(\xi_{1})\tilde{f}_{2}(t_{2}^{\prime})}{2!}\geq\tilde{F}(0,t_{2}^{\prime})-\frac{\tilde{f}_{1}^{\,\prime\prime}(\xi_{2})\tilde{f}_{2}(t_{2}^{\prime})}{2!}=\tilde{F}(1,t_{2}^{\prime})-\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|}_{1,t_{2}^{\prime}}\]
where \(\xi_{1}\in[-1,0]\) and \(\xi_{2}\in[0,1]\), using the absolute monotonicity of \(\tilde{f}_{1}\).
#### 5.3.1 Proving the Linear Approximation Bound
So it suffices to show \(\phi(t_{2}):=\tilde{F}(1,t_{2})-\frac{\partial\tilde{F}}{\partial t_{1}}\bigg{|} _{1,t_{2}}-\tilde{g}(0,t_{2})\geq 0\) for all \(t_{2}\in[-1/2,1/2]\). We can express \(\phi\) as
\[\phi(t_{2})=(\tilde{f}_{1}(1)-\tilde{f}_{1}^{\ \prime}(1))\tilde{f}_{2}(t_{2}) -b_{0,0}-b_{0,1}t_{2}-b_{0,2}t_{2}^{2}.\]
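Unwinding this expression: since \(\tilde{F}=\tilde{f}_{1}\tilde{f}_{2}\) in this section, \(\tilde{F}(1,t_{2})=\tilde{f}_{1}(1)\tilde{f}_{2}(t_{2})\) and \(\frac{\partial\tilde{F}}{\partial t_{1}}\big{|}_{1,t_{2}}=\tilde{f}_{1}^{\,\prime}(1)\tilde{f}_{2}(t_{2})\), while every monomial of \(\tilde{g}\) containing \(t_{1}\) vanishes at \(t_{1}=0\), so that

\[\tilde{g}(0,t_{2})=b_{0,0}+b_{0,1}t_{2}+b_{0,2}t_{2}^{2}.\]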
Using our technical bounds on \(\tilde{\theta}\), we show in the appendix that for \(a<\pi^{2}\), we have:
**Lemma 30**.: \(\tilde{f}_{1}(1)-\tilde{f}_{1}^{\ \prime}(1)\geq 0\)_._
Thus, \(\phi^{(3)}\geq 0\), and so the 2nd degree Taylor polynomial at \(t_{2}=-\frac{1}{2}\) yields a lower bound on \(\phi\) for \(t_{2}\geq-1/2\):
\[\phi(t_{2})\geq A+B(t_{2}+1/2)+\frac{C}{2}(t_{2}+1/2)^{2}\]
where
\[A=\phi(-1/2) =\left(\tilde{f}_{1}(1)-\tilde{f}_{1}^{\ \prime}(1)\right)\tilde{f}_{2}(- 1/2)-b_{0,0}+\frac{b_{0,1}}{2}-\frac{b_{0,2}}{4} \tag{74}\] \[=\left(\tilde{f}_{1}(1)-\tilde{f}_{1}^{\ \prime}(1)\right)\tilde{f}_{2}(- 1/2)-a_{0,0}+\frac{b_{0,1}}{2}-\frac{b_{0,2}}{2}\] (75) \[=\left(\frac{1}{2}\tilde{f}_{1}(1)-\tilde{f}_{1}^{\ \prime}(1)\right)\tilde{f}_{2}(- 1/2)+\frac{1}{6}\tilde{f}_{1}(-1)\tilde{f}_{2}^{\ \prime}(1/2)-\tilde{f}_{1}(-1)\left(\frac{2}{9}\tilde{f}_{2}(-1)+\frac{5}{18} \tilde{f}_{2}(1/2)\right)\] (76) \[B=\phi^{\prime}(-1/2) =(\tilde{f}_{1}(1)-\tilde{f}_{1}^{\ \prime}(1))\tilde{f}_{2}^{\ \prime}(-1/2)-b_{0,1}+b_{0,2}\] (77) \[=b_{0,2}-\tilde{f}_{1}^{\ \prime}(1)\tilde{f}_{2}^{\ \prime}(-1/2)\] (78) \[=\frac{4}{9}\tilde{f}_{1}(-1)\left(\tilde{f}_{2}(-1)-\tilde{f}_{ 2}(1/2)+\frac{3}{2}\tilde{f}_{2}^{\ \prime}(1/2)\right)-\tilde{f}_{1}^{\ \prime}(1)\tilde{f}_{2}^{\ \prime}(1/2)\] (79) \[C=\phi^{\prime\prime}(-1/2) =\left(\tilde{f}_{1}(1)-\tilde{f}_{1}^{\ \prime}(1)\right)\tilde{f}_{2}^{\ \prime\prime}(-1/2)-2b_{0,2}\] (80) \[=\left(\tilde{f}_{1}(1)-\tilde{f}_{1}^{\ \prime}(1)\right)\tilde{f}_{2}^{\ \prime\prime}(-1/2)-\frac{8}{9}\tilde{f}_{1}(-1)\left(\tilde{f}_{2}(-1)- \tilde{f}_{2}(1/2)+\frac{3}{2}\tilde{f}_{2}^{\ \prime}(1/2)\right). \tag{81}\]
With these representations, we'll produce lower bounds on \(A\) and \(C\) that show \(A,C>0\). It then suffices to show that the quadratic has no zeroes, i.e. that \(B^{2}-2AC<0\). Moreover, if \(B>0\), there's nothing to show (as \(\phi\) would then clearly be positive), so to bound \(B^{2}\) above, we can bound \(B\) below and then square that bound. Thus, an upper bound on \(B^{2}-2AC\) is obtained by using our lower bound for \(B\) and lower bounds for \(A,C\), and we show this upper bound for \(B^{2}-2AC\) is negative, completing the proof when \(a<\pi^{2}\).
### The critical region for \(a\geq 9.6\)
The inequality \(\tilde{F}\geq\tilde{g}\) for Regions A, B, C, D, and E from Figure 13 is handled in Lemmas 31, 32, 33, 34, and 35, respectively. We begin by showing \(\tilde{F}\geq\tilde{g}\) using the same linearization approach as in the large \(a\) case of the 4-point problem. As before, we define for \(b,c\in[-1,1]\)
\[\tilde{g}_{b,c}:=\tilde{g}-b_{0,2}t_{1}t_{2}+b_{0,2}(bt_{2}+ct_{1}-bc),\]
with the important resulting property that \(\det(H_{\tilde{F}-\tilde{g}_{b,c}})<0\) everywhere.
**Lemma 31**.: _For all \((t_{1},t_{2})\) such that \(-1\leq t_{1}\leq-\sqrt{2}/2\), \(0\leq t_{2}\leq\frac{1}{2}\), \(\tilde{F}\geq\tilde{g}\)._
Proof.: First, we show the inequality for \(t_{1}\leq-\sqrt{2}/2\), \(\frac{1}{4}\leq t_{2}\leq\frac{1}{2}\). In this region, we have \(\tilde{g}_{-1,.5}\geq\tilde{g}\) with equality when \(t_{2}=1/2\) or \(t_{1}=-1\). As in the 4-point case, it suffices to show that \(\tilde{g}_{-1,.5}\leq\tilde{F}\) on the region, and it further suffices to show this inequality holds on the boundary.
For the sections of the boundary where \(t_{1}=-1\), or \(t_{2}=1/2\), we have \(\tilde{g}_{-1,1/2}=\tilde{g}\) and so \(\tilde{g}\leq\tilde{F}\) by lemmas 24 and 26. For the other two sections of the boundary, \(t_{1}=-\sqrt{2}/2\) and \(t_{2}=\frac{1}{4}\), we proceed analogously to the 4-point case. Using approximations of the \(a_{i,j}\) coefficients (listed in the appendix) yields a relatively simple upper bound, again call it \(\tilde{g}^{\prime}_{-1,1/2}\) for \(\tilde{g}_{-1,1/2}\), which is linear in \(a\), up to a factor of \(e^{-a/3}\). As a lower bound for \(\tilde{F}\), we use the similar function
\[\tilde{F}_{T}:=(e^{-ax^{2}}+e^{-a(x-1)^{2}})e^{-3au^{2}}\]
Figure 13: The figure depicts our proof strategy for showing \(\tilde{F}\geq\tilde{g}\) on the critical region when \(a\) is large. The points \(p_{1}\) and \(p_{2}\) are located at \((-\sqrt{2}/2,0),(0,-1/5)\), respectively.
where as usual
\[x=\frac{\arccos(t_{1})}{2\pi},\ \ \ \ u=\frac{\arccos(t_{2})}{2\pi}.\]
and as before, \(e^{a/3}\tilde{F}_{T}\) is convex in \(a\), making the difference \(e^{a/3}(\tilde{F}_{T}-\tilde{g}^{\prime}_{-1,1/2})\) convex in \(a\). So it suffices to show \(\tilde{F}_{T}(t^{\prime}_{1},t^{\prime}_{2})\geq\tilde{g}^{\prime}_{-1,1/2}(t^{\prime}_{1},t^{\prime}_{2})\) when \(a=9.6\) and also that
\[\frac{\partial\left[e^{a/3}(\tilde{F}_{T}-\tilde{g}^{\prime}_{-1,1/2})\right]} {\partial a}\bigg{|}_{a=9.6,t_{1}=t^{\prime}_{1},t_{2}=t^{\prime}_{2}}\geq 0.\]
In the appendix, we carry out these two calculations in a computer assisted manner for the boundary portions \(\{(-\sqrt{2}/2,t_{2})\mid\frac{1}{4}\leq t_{2}\leq\frac{1}{2}\}\) and \(\{(t_{1},\frac{1}{4})\mid-1\leq t_{1}\leq-\sqrt{2}/2\}\), which complete the proof that \(\tilde{F}\geq\tilde{g}\) for \((t_{1},t_{2})\in[-1,-\sqrt{2}/2]\times[\frac{1}{4},\frac{1}{2}]\).
To obtain \(\tilde{F}\geq\tilde{g}\) for \((t_{1},t_{2})\in[-1,-\sqrt{2}/2]\times[0,\frac{1}{4}]\), we conduct the same procedure with \(\tilde{g}_{-1,\frac{1}{4}}\) instead of \(\tilde{g}_{-1,\frac{1}{2}}\). Two sections of the boundary are immediate: when \(t_{1}=-1\) or \(t_{2}=\frac{1}{4}\), \(\tilde{g}=\tilde{g}_{-1,\frac{1}{4}}\), and we have just established \(\tilde{F}\geq\tilde{g}\) on those segments. The remaining two segments of the boundary are handled exactly as in the first section of the proof in the appendix.
**Lemma 32**.: _For all \((t_{1},t_{2})\) such that \(-\sqrt{2}/2\leq t_{1}\leq 1\) and \(0\leq t_{2}\leq\frac{1}{2}\), \(\tilde{F}\geq\tilde{g}\)._
Proof.: By the convexity of \(\tilde{F}-\tilde{g}\) in \(t_{1}\), it suffices to show the following:
1. \(\tilde{F}(-\sqrt{2}/2,t_{2})\geq\tilde{g}(-\sqrt{2}/2,t_{2})\) for all \(t_{2}\in[0,1/2]\)
2. \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{-\sqrt{2}/2,t_ {2}}\geq 0\) for \(t_{2}\in[0,1/2]\).
The first of these follows from Lemma 31. To prove the second, it actually suffices to show that \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{-\sqrt{2}/2,0}\geq 0\), which is handled in the appendix. This sufficiency is because \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\) is convex in \(t_{2}\) and satisfies \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{-\sqrt{2}/2,-1/2}<0\) (due to the necessary condition \(\frac{\partial(\tilde{F}-\tilde{g})}{\partial t_{1}}\bigg{|}_{1,-1/2}<0\)).
**Lemma 33**.: _For all \((t_{1},t_{2})\) such that \(t_{1}\geq 0\) and \(t_{2}\leq 0\), \(\tilde{F}\geq\tilde{g}\)._
Proof.: We claim that for this portion of the critical region, it suffices to show at each point that
\[L_{1}(t_{1},t_{2}):=\frac{\tilde{f}_{2}^{\prime}\tilde{f}_{1}}{\tilde{f}_{1}^ {\prime}\tilde{f}_{2}}-\frac{\frac{\partial\tilde{g}}{\partial t_{2}}}{\frac {\partial\tilde{g}}{\partial t_{1}}}>0.\]
Indeed, this would imply that \(\tilde{g}\) increases along the level curves of \(\tilde{F}\) as \(t_{1}\) increases. First, we note that if we take a point \((t^{\prime}_{1},t^{\prime}_{2})\) on the boundary of the region where \(t_{2}=-\frac{1}{2}\) or
\(t_{1}=1\), then by the implicit function theorem, the equation \(\tilde{F}=\tilde{f}_{1}(t_{1}^{\prime})\tilde{f}_{2}(t_{2}^{\prime})\) implicitly defines a function \(t_{2}=p(t_{1})\) with negative slope from \((t_{1}^{\prime},t_{2}^{\prime})\) to a point on the portion of the boundary with \(t_{1}=0\) or \(t_{2}=0\). At any point on the curve, the vector \([\frac{\tilde{f}_{2}^{\prime}}{\tilde{f}_{2}},-\frac{\tilde{f}_{1}^{\prime}}{\tilde{f}_{1}}]^{T}\) is tangent to the curve and points in the positive \(t_{1}\) direction, so \(\tilde{g}\) increases along \(p\) if
\[\frac{\tilde{f}_{2}^{{}^{\prime}}}{\tilde{f}_{2}}\frac{\partial\tilde{g}}{ \partial t_{1}}-\frac{\partial\tilde{g}}{\partial t_{2}}\frac{\tilde{f}_{1}^{{} ^{\prime}}}{\tilde{f}_{1}}>0\]
as needed. Thus, if the previous inequality holds, then \(\tilde{F}-\tilde{g}\) is minimized along the right and bottom boundaries of the region, where we have already shown it to be nonnegative in the previous section. We show \(L_{1}(t_{1},t_{2})>0\) in the appendix.
**Lemma 34**.: _For all \((t_{1},t_{2})\) such that \(-\sqrt{2}/2\leq t_{1}\leq 0\), \(-\frac{1}{5}\leq t_{2}\leq 0\), \(\tilde{F}\geq\tilde{g}\)._
Proof.: The proof proceeds very similarly to that of Lemma 31. We use \(\tilde{F}_{T}\) in the same fashion, and when \(-\sqrt{2}/2\leq t_{1}\leq 0\) and \(-.1\leq t_{2}\leq 0\), we use \(\tilde{g}_{-\sqrt{2}/2,0}\). For \(-\sqrt{2}/2\leq t_{1}\leq 0\) and \(-.2\leq t_{2}\leq-.1\), we use \(\tilde{g}_{-\sqrt{2}/2,-.1}\). The precise calculations are carried out in the appendix.
**Lemma 35**.: _For all \((t_{1},t_{2})\) such that \(t_{1}\leq 0\) and \(t_{2}\leq-.2\), \(\tilde{F}\geq\tilde{g}\). Similarly, for all \((t_{1},t_{2})\) such that \(t_{1}\leq-\sqrt{2}/2\) and \(t_{2}\leq 0\), \(\tilde{F}\geq\tilde{g}\)._
Proof.: Let \(S\) be the set described in the statement of the lemma. Fix \(-.5\leq t_{2}^{\prime}\leq 0\). We need to show \(\tilde{F}\geq\tilde{g}\) on \(S_{t_{2}^{\prime}}:=\{t_{1}\mid(t_{1},t_{2}^{\prime})\in S\text{ and }\tilde{g}(t_{1},t_{2}^{\prime})\geq\tilde{F}(-1,t_{2}^{\prime})\}\). Since \(\tilde{g}\) is increasing in \(t_{1}\) for \(t_{2}\geq-1/2\), we have that \(S_{t_{2}^{\prime}}\) is a closed interval. Either \(S_{t_{2}^{\prime}}\) is empty and there is nothing to show, or we have \(\tilde{F}-\tilde{g}\geq 0\) at the right endpoint from our previous lemmas, namely 33 and 34. Equivalently, at this right endpoint,
\[0\leq\log(\frac{\tilde{F}}{\tilde{g}})=\log(\tilde{f}_{1})+\log(\tilde{f}_{2} )-\log(\tilde{g}).\]
Thus, to prove \(\tilde{F}\geq\tilde{g}\) for all \(t_{1}\in S_{t_{2}^{\prime}}\), it suffices to show
\[\frac{\partial\log(\frac{\tilde{F}}{\tilde{g}})}{\partial t_{1}}=\frac{\tilde {f}_{1}^{{}^{\prime}}}{\tilde{f}_{1}}-\frac{\frac{\partial\tilde{g}}{\partial t _{1}}}{\tilde{g}}\leq 0\]
throughout \(S\). Now \(\frac{\tilde{f}_{1}^{{}^{\prime}}}{\tilde{f}_{1}}-\frac{\frac{\partial\tilde{g }}{\partial t_{1}}}{\tilde{g}}\leq 0\) is equivalent to
\[\frac{\tilde{f}_{1}}{\tilde{f}_{1}^{{}^{\prime}}}-\frac{\tilde{g}}{\frac{\partial\tilde{g}}{\partial t_{1}}}=\frac{\tilde{f}_{1}}{\tilde{f}_{1}^{{}^{\prime}}}-t_{1}-\frac{\tilde{g}(0,t_{2})}{b_{1,0}+b_{0,2}t_{2}}\geq 0,\]
since each of \(\tilde{f}_{1},\tilde{f}_{1}^{{}^{\prime}},\tilde{g},\frac{\partial\tilde{g}}{\partial t_{1}}\geq 0\). Notably, \(\frac{\tilde{f}_{1}}{\tilde{f}_{1}^{{}^{\prime}}}-t_{1}\) is a function of \(t_{1}\) only, while \(\frac{\tilde{g}(0,t_{2})}{b_{1,0}+b_{0,2}t_{2}}\) depends only on \(t_{2}\). Let
\[L_{2}(t_{1},t_{2}):=\frac{\tilde{f}_{1}}{\tilde{f}_{1}^{{}^{\prime}}}-t_{1}-\frac{\tilde{g}(0,t_{2})}{b_{1,0}+b_{0,2}t_{2}}.\]
If we can show that \(L_{2}\) is decreasing in \(t_{1}\) and \(t_{2}\), then to show \(L_{2}\geq 0\) on all of \(S\), we only need to show \(L_{2}(-\sqrt{2}/2,0),L_{2}(0,-.2)\geq 0\), which we handle in the appendix.
So it remains to show \(L_{2}\) is decreasing in both \(t_{1}\) and \(t_{2}\).
\[\frac{\partial L_{2}}{\partial t_{1}}=\left(\frac{\tilde{f}_{1}}{\tilde{f}_{1}^{\prime}}\right)^{\prime}-1=\frac{(\tilde{f}_{1}^{\prime})^{2}-\tilde{f}_{1}^{\prime\prime}\tilde{f}_{1}}{(\tilde{f}_{1}^{\prime})^{2}}-1=-\frac{\tilde{f}_{1}^{\prime\prime}\tilde{f}_{1}}{(\tilde{f}_{1}^{\prime})^{2}}<0\]
as needed. Similarly,
\[\frac{\partial L_{2}}{\partial t_{2}}=\frac{b_{0,0}b_{0,2}-b_{0,1}b_{1,0}-b_{0,2}t_{2}(2b_{1,0}+b_{0,2}t_{2})}{(b_{1,0}+b_{0,2}t_{2})^{2}}\]
whose sign depends only on \(N_{2}(t_{2}):=b_{0,0}b_{0,2}-b_{0,1}b_{1,0}-b_{0,2}t_{2}(2b_{1,0}+b_{0,2}t_{2})\). Now
\[N_{2}^{\prime}(t_{2})=-2b_{0,2}(b_{1,0}+b_{0,2}t_{2})=-2b_{0,2}\left(\frac{ \partial\tilde{g}}{\partial t_{1}}\right)\leq 0\]
for \(t_{2}\geq-\frac{1}{2}\). So the negativity of \(N_{2}(t_{2})\) (and thus of \(\frac{\partial L_{2}}{\partial t_{2}}\)) follows from \(N_{2}(-1/2)<0\), which we check in the appendix using our bounds on the \(b_{i,j}\)'s.
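As a small illustration of the reduction just used, the following Python sketch replays it with purely hypothetical values of the \(b_{i,j}\) (the genuine bounds are the ones established in the appendix): whenever \(\frac{\partial\tilde{g}}{\partial t_{1}}=b_{1,0}+b_{0,2}t_{2}\geq 0\) on \([-1/2,0]\), \(N_{2}\) is decreasing there, so \(N_{2}<0\) on the whole interval follows from the single corner check \(N_{2}(-1/2)<0\).

```python
# Illustration only: hypothetical placeholder coefficients, not the actual
# bounds on the b_{i,j} (those are established in the appendix).
b00, b01, b10, b02 = 0.2, 1.0, 0.9, 0.8

def N2(t2):
    # N2(t2) = b00*b02 - b01*b10 - b02*t2*(2*b10 + b02*t2), as in the text
    return b00 * b02 - b01 * b10 - b02 * t2 * (2 * b10 + b02 * t2)

assert b10 + b02 * (-0.5) >= 0                 # dg/dt1 >= 0 at t2 = -1/2
assert N2(-0.5) < 0                            # the single corner check
ts = [-0.5 + 0.01 * i for i in range(51)]      # grid on [-1/2, 0]
assert all(N2(t) <= N2(s) + 1e-12 for s, t in zip(ts, ts[1:]))  # decreasing
print("max of N2 on [-1/2, 0]:", max(N2(t) for t in ts))        # negative
```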
**Acknowledgements.** The authors thank Henry Cohn, Denali Relles, Larry Rolen, Ed Saff, and Yujian Su for helpful discussions at various stages of this project.
|
2301.07170 | Improved Sobolev inequalities on CR sphere | We establish improved CR Sobolev inequalities on the CR sphere under the
vanishing of higher order moments of the volume element. As a direct
application, we give a simpler proof of the existence and the classification of
minimizers of the CR invariant Sobolev inequalities which avoids complicated
computation in Frank and Lieb's proof. Our argument relies on nice commutator
identities involving the CR intertwining operators on CR sphere and handles
both the fractional and integral cases. In the same spirit, we derive the
classical sharp Sobolev inequalities using commutator identities on the sphere. | Zetian Yan | 2023-01-17T20:32:06Z | http://arxiv.org/abs/2301.07170v1 | # Improved Sobolev inequalities on CR sphere
###### Abstract.
We establish improved CR Sobolev inequalities on \(S^{2n+1}\) under the vanishing of higher order moments of the volume element. As a direct application, we give a simpler proof of the existence and the classification of minimizers of the CR invariant Sobolev inequalities which avoids complicated computation in Frank and Lieb's proof. Our argument relies on nice commutator identities involving the CR intertwining operators on \(S^{2n+1}\) and handles both the fractional and integral cases. In the same spirit, we derive the classical sharp Sobolev inequalities using commutator identities on \(S^{n}\).
Key words and phrases: CR Yamabe problem; CR GJMS operators; sharp Sobolev inequalities. 2020 Mathematics Subject Classification: Primary 39B62; Secondary 32V20, 32V40, 35B38
## 1. Introduction
This paper continues the program initiated in the work by the author [10] to simplify Frank and Lieb's proof [12] of the sharp Sobolev inequalities for the embeddings \(S^{\gamma,2}(\mathbb{H}^{n})\hookrightarrow L^{\frac{2n+2}{n+1-\gamma}}(\mathbb{H}^{n})\), and it handles both the fractional and integral cases. Our first objective is to establish improved CR Sobolev inequalities on \(S^{2n+1}\) under the vanishing of higher order moments of the volume element; see Theorem 1.1 for a precise statement. As a direct application, we give a new proof of the existence of minimizers of CR Sobolev inequalities. Our second objective is to derive commutator identities involving intertwining operators on \(S^{2n+1}\); see Theorem 1.3 for a precise statement. By combining this with the Frank-Lieb argument developed in [11], we give a simpler and direct classification of optimizers. In particular, our proof avoids complicated computations, fully addressing an open problem of Frank and Lieb [12].
Our derivation of Theorem 1.1 is motivated by recent work in [1] and [13], where Aubin's Moser-Trudinger-Onofri inequality and Sobolev inequalities on \(S^{n}\) are improved under the vanishing of higher order moments of the volume element, respectively. The key observation in [13] is that, if the improved inequality fails, there exists a sequence of functions \(\{F_{i}\}\) in \(W^{1,p}(S^{n})\) such
that
\[\|\nabla F_{i}\|_{p}^{p}\leqslant\frac{1}{\alpha},\quad\|F_{i}\|_{p^{*}}=1,\]
and \(\{F_{i}\}\) converges to zero weakly in \(L^{p}(S^{n})\), where \(p^{*}=\frac{np}{n-p}\) and \(\alpha\) is the leading coefficient of the improved Sobolev inequality [13]. Combining this with the concentration compactness principle [14] yields a contradiction with the choice of \(\alpha\). We will apply this idea to prove Theorem 1.1 in Section 3. The crucial point of the proof is the concentration compactness principle, Lemma 3.1, on the CR sphere.
Let \(\mathcal{A}_{2\gamma}^{\theta_{c}}\), \(0<\gamma<n+1\), be the CR intertwining operators associated with the canonical contact form \(\theta_{c}\) on \(\mathbb{H}^{n}\). Folland [12] proved that for all \(f\in S^{\gamma,2}(\mathbb{H}^{n})\), \(p=\frac{2Q}{Q-2\gamma}\), there exists a positive constant \(C\) such that
\[\left(\int_{\mathbb{H}^{n}}|f|^{p}du\right)^{\frac{2}{p}}\leqslant C\int_{ \mathbb{H}^{n}}\bar{f}\mathcal{A}_{2\gamma}^{\theta_{c}}(f)du, \tag{1.1}\]
where \(S^{\gamma,2}(\mathbb{H}^{n})\) is the Folland-Stein space introduced in [11], and \(du=\theta_{c}\wedge(d\theta_{c})^{n}\) is the volume form associated with \(\theta_{c}\) on \(\mathbb{H}^{n}\). Let \(C_{n,2\gamma}\) be the smallest real number so that (1.1) is true for all \(f\in S^{\gamma,2}(\mathbb{H}^{n})\). Equivalently, via the Cayley transform \(\mathcal{C}:\mathbb{H}^{n}\to S^{2n+1}\backslash(0,\cdots,-1)\) given by
\[\mathcal{C}(z,t)=\left(\frac{2z}{1+|z|^{2}-it},\frac{1-|z|^{2}+it}{1+|z|^{2}- it}\right),\]
we have for all \(F\in S^{\gamma,2}(S^{2n+1})\),
\[\left(\int_{S^{2n+1}}|F|^{p}d\xi\right)^{\frac{2}{p}}\leqslant C_{n,2\gamma} \int_{S^{2n+1}}\bar{F}\mathcal{A}_{2\gamma}^{\theta_{0}}(F)d\xi, \tag{1.2}\]
where \(d\xi=\theta_{0}\wedge(d\theta_{0})^{n}\) is the volume form associated with the canonical contact form \(\theta_{0}\) on \(S^{2n+1}\).
The explicit value of \(C_{n,2\gamma}\) is computed by Frank and Lieb [10] by classifying the extremal functions of (1.2); i.e., equality holds in (1.2) if and only if
\[F(\eta)=\frac{C}{|1-\xi\cdot\bar{\eta}|^{\frac{Q-2\gamma}{2}}},\quad\eta\in S^ {2n+1},\]
for some \(C\in\mathbb{C},\xi\in\mathbb{C}^{n+1},|\xi|<1\) and
\[C_{n,2\gamma}=\left(\frac{1}{4\pi}\right)^{\gamma}\frac{\Gamma^{2}(\frac{n+1-\gamma}{2})}{\Gamma^{2}(\frac{n+1+\gamma}{2})}. \tag{1.3}\]
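For reference, (1.3) is straightforward to evaluate; the following minimal Python sketch (a numerical aid only, used nowhere in the arguments) computes \(C_{n,2\gamma}\) directly from the formula.

```python
# Direct numerical evaluation of the sharp constant (1.3).
from math import gamma, pi

def sharp_constant(n, g):
    """C_{n,2*gamma} from (1.3); requires 0 < gamma < n + 1."""
    return (1 / (4 * pi))**g * gamma((n + 1 - g) / 2)**2 / gamma((n + 1 + g) / 2)**2

for n in (1, 2, 3):
    for g in (0.5, 1.0, 1.5):
        print(f"n = {n}, gamma = {g}: C = {sharp_constant(n, g):.6g}")
```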
Among the results of this article is a new computation of the value of \(C_{n,2\gamma}\) and the classification of its optimizers; see Theorem 1.4 for more details.
Recall that a homogeneous polynomial on \(\mathbb{C}^{n+1}\) with bidegree \((j,k)\) is a polynomial \(g\) such that for any \(\lambda\in\mathbb{C}\), \(z\in\mathbb{C}^{n+1}\),
\[g(\lambda z)=\lambda^{j}\bar{\lambda}^{k}g(z).\]
For a pair of nonnegative integers \((j,k)\), we denote
\[\mathcal{P}_{j,k}:=\{\text{all homogeneous polynomials on $\mathbb{C}^{n+1}$ with bidegree $(j,k)$}\},\]
\[\widetilde{\mathcal{P}}_{j,k}:=\bigcup_{(\tilde{j},\tilde{k})}\mathcal{P}_{ \tilde{j},\tilde{k}},\quad\text{for all $\tilde{j}\leqslant j$, $\tilde{k}\leqslant k$},\]
\[\overline{\mathcal{P}}_{j,k}:=\Big{\{}g\in\widetilde{\mathcal{P}}_{j,k}; \int_{S^{2n+1}}gd\xi=0\Big{\}}.\]
For \(j,k\in\mathbb{N}_{0}\) and \(0\leqslant\theta\leqslant 1\), as in [10] we define
\[\mathcal{M}^{c}_{j,k}(S^{2n+1}):=\{\nu;\,\nu\text{ is a probability measure on $S^{2n+1}$ supported on}\] \[\qquad\text{countably many points such that $\int_{S^{2n+1}}gd\nu=0$, for all $g\in\overline{\mathcal{P}}_{j,k}$}\}\]
and
\[\Theta(j,k;\theta,2n+1)\] \[:=\inf\left\{\sum_{i}\nu_{i}^{\theta};\nu\in\mathcal{M}^{c}_{j,k} (S^{2n+1}),\text{supp}(\nu)=\{x_{i}\}\subset S^{2n+1},\nu_{i}=\nu(x_{i})\right\}.\]
Our first result gives improved CR Sobolev inequalities under the assumption that higher moments vanish.
**Theorem 1.1**.: _Denote \(Q=2n+2\), \(p=\frac{2Q}{Q-2\gamma}\), \(\gamma\in(0,n+1)\). Then for any \(\epsilon>0\), and \(F\in S^{\gamma,2}(S^{2n+1})\) with_
\[\int_{S^{2n+1}}g|F|^{p}d\xi=0 \tag{1.4}\]
_for all \(g\in\overline{\mathcal{P}}_{j,k}\), we have_
\[\left(\int_{S^{2n+1}}|F|^{p}d\xi\right)^{\frac{2}{p}}\leqslant \left(\frac{C_{n,2\gamma}}{\Theta(j,k;\frac{Q-2\gamma}{Q},2n+1)}+ \epsilon\right)\int_{S^{2n+1}}\bar{F}\mathcal{A}^{\theta_{0}}_{2\gamma}(F)d\xi\] \[+C(\epsilon)\int_{S^{2n+1}}|F|^{2}d\xi, \tag{1.5}\]
_where \(C(\epsilon)\) is a constant depending on \(\epsilon\)._
When \(j+k=1\), the condition \(\int_{S^{2n+1}}gd\nu=0\) for all \(g\in\overline{\mathcal{P}}_{j,k}\) is equivalent to the balanced conditions
\[\int_{S^{2n+1}}\xi_{i}d\nu=0,\quad i=1,\cdots,2n+2, \tag{1.6}\]
where \(\{\xi_{i}\}_{i=1}^{2n+2}\) are coordinate functions on \(\mathbb{R}^{2n+2}\). Therefore, the exact value of \(\Theta(j,k;\theta,2n+1)\) is \(2^{1-\theta}\), as shown in [10]. We have the following corollary for balanced functions on \(S^{2n+1}\).
**Corollary 1.2**.: _Denote \(Q=2n+2\), \(p=\frac{2Q}{Q-2\gamma}\), \(\gamma\in(0,n+1)\). Then for any \(\epsilon>0\), and \(F\in S^{\gamma,2}(S^{2n+1})\) with_
\[\int_{S^{2n+1}}\xi_{i}|F|^{p}d\xi=0,\quad i=1,\cdots,2n+2, \tag{1.7}\]
_we have_
\[\left(\int_{S^{2n+1}}|F|^{p}d\xi\right)^{\frac{2}{p}}\leqslant\left(\frac{C_{n,2\gamma}}{2^{\frac{\gamma}{n+1}}}+\epsilon\right)\int_{S^{2n+1}}\bar{F} \mathcal{A}_{2\gamma}^{\theta_{0}}(F)d\xi+C(\epsilon)\int_{S^{2n+1}}|F|^{2}d\xi, \tag{1.8}\]
_where \(C(\epsilon)\) is a constant depending on \(\epsilon\)._
For \(0<\gamma<n+1\), let \((w,w^{\prime})\in\mathbb{R}^{2}\) such that \(w+w^{\prime}+n+1=\gamma\) and \(w-w^{\prime}\in\mathbb{Z}\). Throughout this article, without further comment, we will assume implicitly that \(w-w^{\prime}\in\mathbb{Z}\). On \((S^{2n+1},\theta_{0})\), \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) is the unique self-adjoint intertwining operator of order \(2\gamma\) in the sense of (2.9); see Proposition A.1 for more details.
**Theorem 1.3**.: _Let \((S^{2n+1},T^{1,0}S^{2n+1},\theta_{0})\) be the CR unit sphere. For \(0<\gamma<n+1\), let \((w,w^{\prime})\in\mathbb{R}^{2}\) such that \(w+w^{\prime}+n+1=\gamma\). \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) are self-adjoint intertwining operators characterized in Proposition A.1. Then_
\[\sum_{j=1}^{n+1}\bar{z}_{j}\left[\mathcal{A}_{w,w^{\prime}}^{\theta_{0}},z_{j }\right]=\gamma(\gamma-1-w^{\prime})\mathcal{A}_{w-1,w^{\prime}}^{\theta_{0}}. \tag{1.9}\]
Theorem 1.3 handles both fractional and integral cases and covers the result in [11]. The proof of Theorem 1.3 relies on direct computation of the spectrum with respect to spherical harmonics, which is different from that in [11]. In the case \(w=w^{\prime}\), combining (1.9) with Corollary 1.2, we give a new proof of sharp CR Sobolev inequalities on \(S^{2n+1}\).
**Theorem 1.4**.: _Let \((S^{2n+1},T^{1,0}S^{2n+1},\theta_{0})\) be the CR unit sphere with the volume element \(d\xi=\theta_{0}\wedge(d\theta_{0})^{n}\). Denote \(Q=2n+2\) and \(\gamma\in(0,n+1)\). Then \(u\) is a positive minimizer of (1.2) if and only if_
\[u(\eta)=\frac{C}{|1-\xi\cdot\bar{\eta}|^{\frac{Q-2\gamma}{2}}},\quad\eta\in S ^{2n+1}\]
_for some \(C\in\mathbb{C},\xi\in\mathbb{C}^{n+1},|\xi|<1\)._
The key ideas in the proof of Theorem 1.4 are the following. After using the CR automorphism group \(\mathcal{Aut}(S^{2n+1})\), we may assume that any minimizing sequence \(\{F_{i}\}\) in \(S^{\gamma,2}(S^{2n+1})\) is balanced [12]. By combining
Corollary 1.2 with a trick due to Lieb [10], we show that any balanced minimizing sequence \(\{F_{i}\}\) has a subsequence which converges strongly to an optimizer \(F\in S^{\gamma,2}(S^{2n+1})\) of (1.2). Next, we follow the Frank-Lieb argument developed in [11] to classify the optimizers. The Frank-Lieb argument involves three elements. First, the assumption of a local minimizer implies, via the second variation, a nice spectral estimate; see Proposition 4.3 for a precise statement. Second, CR covariance implies that one can assume that the optimizer \(F\) satisfies the balanced condition (1.6), and in particular use first spherical harmonics as test functions in the previous spectral estimate. Third, by Theorem 1.3, one deduces that a balanced positive local minimizer is constant.
Hang and Wang have announced a new proof of sharp CR Sobolev inequalities using a scheme of subcritical approximation [13] which is quite different from the approach in this article.
Our methods also give a new proof of Beckner's sharp Sobolev inequalities on the sphere [1] which is more direct than the argument of Frank and Lieb [12].
**Theorem 1.5**.: _Let \(\Delta_{S^{n}}\) be the Laplace-Beltrami operator on the standard sphere \((S^{n},g_{c})\). For \(0<\gamma<\frac{n}{2}\), classical intertwining operators \(P^{g_{c}}_{2\gamma}\) on \(S^{n}\) are defined by_
\[P^{g_{c}}_{2\gamma}=\frac{\Gamma(B+\frac{1}{2}+\gamma)}{\Gamma(B+\frac{1}{2}- \gamma)},\qquad B=\sqrt{-\Delta_{S^{n}}+\left(\frac{n-1}{2}\right)^{2}}. \tag{1.10}\]
_Then for all \(F\in W^{\gamma,2}(S^{n})\),_
\[\frac{\Gamma(\frac{n+2\gamma}{2})}{\Gamma(\frac{n-2\gamma}{2})}\omega_{n}^{ \frac{2\gamma}{n}}\left(\int_{S^{n}}|F|^{\frac{2n}{n-2\gamma}}d\sigma\right)^ {\frac{n-2\gamma}{n}}\leqslant\int_{S^{n}}FP^{g_{c}}_{2\gamma}(F)d\sigma, \tag{1.11}\]
_where \(\omega_{n}\) is the volume of \(S^{n}\) and \(d\sigma\) is the Lebesgue measure on \(S^{n}\). Equality holds if and only if_
\[F(\eta)=C|1-\langle\eta,\xi\rangle|^{\frac{2\gamma-n}{2}},\quad\eta\in S^{n} \tag{1.12}\]
_for some \(C\in\mathbb{R}\), \(\xi\in\mathbb{R}^{n+1}\), \(|\xi|<1\)._
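Since \(-\Delta_{S^{n}}\) acts on degree-\(h\) spherical harmonics by \(h(h+n-1)\), the operator \(B\) in (1.10) acts by \(h+\frac{n-1}{2}\), and hence \(P^{g_{c}}_{2\gamma}\) acts by \(\Gamma(h+\frac{n}{2}+\gamma)/\Gamma(h+\frac{n}{2}-\gamma)\). The following Python sketch is only a numerical sanity check of this closed form.

```python
# Check that (1.10) yields the expected Gamma-quotient eigenvalues on H_h.
from math import gamma, sqrt, isclose

def eigenvalue_from_B(h, n, g):
    B = sqrt(h * (h + n - 1) + ((n - 1) / 2)**2)   # -Delta acts by h(h+n-1)
    return gamma(B + 0.5 + g) / gamma(B + 0.5 - g)

for n in (2, 3, 5):
    for g in (0.3, 0.7):                            # 0 < gamma < n/2
        for h in range(6):
            closed = gamma(h + n / 2 + g) / gamma(h + n / 2 - g)
            assert isclose(eigenvalue_from_B(h, n, g), closed, rel_tol=1e-9)
print("P_{2gamma} acts on H_h by Gamma(h+n/2+gamma)/Gamma(h+n/2-gamma)")
```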
The main ingredient in our new proof of Theorem 1.5 is a commutator identity for the intertwining operators on \(S^{n}\) which is already known when \(\gamma\in\mathbb{N}\)[11].
**Theorem 1.6**.: _Let \((S^{n},g_{c})\) be the standard sphere with the canonical metric \(g_{c}\) and \(P^{g_{c}}_{2\gamma}\) be the intertwining operator of order \(2\gamma\). Then_
\[\sum_{j=1}^{n+1}x_{j}\left[P^{g_{c}}_{2\gamma},x_{j}\right]=\gamma(n+2\gamma- 2)P^{g_{c}}_{2(\gamma-1)}, \tag{1.13}\]
_where \(x_{1},\cdots,x_{n+1}\) are the standard coordinates on \(\mathbb{R}^{n+1}\) and \(P^{g_{c}}_{2(\gamma-1)}\) should be regarded as \(\left(P^{g_{c}}_{2(1-\gamma)}\right)^{-1}\) for \(0<\gamma<1\)._
This article is organized as follows.
In Section 2, we review some basic concepts on CR sphere, spherical harmonics and CR intertwining operators. In Section 3, we prove Theorem 1.1 following the idea in [10]. Section 4 deals with the commutator identity and spectral estimate needed for the Frank-Lieb argument. As a direct application, we give a new proof of sharp CR Sobolev inequalities on \(S^{2n+1}\). In Section 5, we provide the proof of Theorem 1.6 and a new proof of classical sharp Sobolev inequalities (1.11).
**Acknowledgements.** I would like to express my deep gratitude to my advisor, Prof. Jeffrey S. Case, for the patient guidance and useful critiques of this work. I also would like to thank Yu Gao, Nan Wu and Xingyu Zhu for useful discussion.
## 2. Preliminaries
### CR sphere and spherical harmonics
Let \(S^{2n+1}\subset\mathbb{C}^{n+1}\) be the unit sphere centered at the origin with the canonical holomorphic tangent space \(T^{1,0}S^{2n+1}\) and the contact form \(\theta_{0}\) given by
\[T^{1,0}S^{2n+1}=T^{1,0}\mathbb{C}^{n+1}\cap\mathbb{C}TS^{2n+1},\] \[\theta_{0}=\frac{i}{2}\sum_{i=1}^{n+1}\left(z^{i}d\bar{z}^{i}- \bar{z}^{i}dz^{i}\right)\big{|}_{S^{2n+1}}.\]
On \((S^{2n+1},T^{1,0}S^{2n+1},\theta_{0})\), the sublaplacian is defined as
\[\mathcal{L}=-\frac{1}{2}\sum_{j=1}^{n+1}\left(T_{j}\overline{T}_{j}+\overline{ T}_{j}T_{j}\right),\]
where
\[T_{j}=\frac{\partial}{\partial z_{j}}-\bar{z}_{j}\sum_{k=1}^{n+1}z_{k}\frac{ \partial}{\partial z_{k}},\qquad\overline{T}_{j}=\frac{\partial}{\partial\bar {z}_{j}}-z_{j}\sum_{k=1}^{n+1}\bar{z}_{k}\frac{\partial}{\partial\bar{z}_{k}},\]
and the conformal sublaplacian is defined as
\[\mathcal{D}=\mathcal{L}+\frac{n^{2}}{4}.\]
Let \(\mathcal{P}\) denote the space of complex-valued polynomials on \(\mathbb{C}^{n+1}\) and \(\mathcal{H}\) be the subspace of harmonic polynomials. For all \(h\in\mathbb{N}_{0}\), let \(\mathcal{P}_{h}\subset\mathcal{P}\) denote
the space of homogeneous polynomials of degree \(h\) and set \(\mathcal{H}_{h}=\mathcal{P}_{h}\cap\mathcal{H}\). Clearly, we have the following decomposition
\[\mathcal{P}=\bigoplus_{h\in\mathbb{N}}\mathcal{P}_{h},\qquad\mathcal{H}= \bigoplus_{h\in\mathbb{N}}\mathcal{H}_{h}.\]
It is well-known that
\[\mathcal{P}_{h}=\mathcal{H}_{h}\oplus|z|^{2}\mathcal{P}_{h-2}\quad\text{ for all }h\in\mathbb{N}_{0}, \tag{2.1}\]
where \(z=(z_{1},\cdots,z_{n+1})\in\mathbb{C}^{n+1}\) and \(|z|^{2}=z\cdot\bar{z}\). Given \(j,k\in\mathbb{N}\), we may then define \(\mathcal{P}_{j,k}\) to be the space of homogeneous polynomials on \(\mathbb{C}^{n+1}\) with bidegree \((j,k)\), i.e., for \(g\in\mathcal{P}_{j,k}\), we have \(Zg=jg\) and \(\overline{Z}g=kg\), where \(Z\) and \(\overline{Z}\) are the holomorphic and antiholomorphic Euler fields given by
\[Z=\sum_{l=1}^{n+1}z_{l}\frac{\partial}{\partial z_{l}},\qquad\overline{Z}= \sum_{l=1}^{n+1}\bar{z}_{l}\frac{\partial}{\partial\bar{z}_{l}}, \tag{2.2}\]
respectively. Obviously, we have
\[\mathcal{P}_{h}=\bigoplus_{j+k=h}\mathcal{P}_{j,k},\qquad\mathcal{H}_{h}= \bigoplus_{j+k=h}\mathcal{H}_{j,k},\]
where \(\mathcal{H}_{j,k}=\mathcal{H}_{h}\cap\mathcal{P}_{j,k}\).
By the Stone-Weierstrass theorem, we know that the space \(L^{2}(S^{2n+1})\) endowed with the inner product
\[(F,G)=\int_{S^{2n+1}}F\overline{G}d\xi,\qquad d\xi=\theta_{0}\wedge(d\theta_{0 })^{n},\]
can be decomposed as
\[L^{2}(S^{2n+1})=\overline{\bigoplus_{j,k\in\mathbb{N}_{0}}\mathcal{H}_{j,k}^{ \mathbb{S}}},\]
where \(\mathcal{H}_{j,k}^{\mathbb{S}}\) is the restriction of \(\mathcal{H}_{j,k}\) to the sphere. Throughout this paper, the superscript \(\mathbb{S}\) will be used to denote the sets of restrictions to \(S^{2n+1}\). The dimension of \(\mathcal{H}_{j,k}^{\mathbb{S}}\) [1] is
\[\dim(\mathcal{H}_{j,k}^{\mathbb{S}})=m_{j,k}:=\frac{(j+n-1)!(k+n-1)!(j+k+n)}{n!(n-1)!j!k!},\]
and if \(\{Y_{j,k}^{l}\}_{l=1}^{m_{j,k}}\) is an orthonormal basis of \(\mathcal{H}_{j,k}^{\mathbb{S}}\), then the zonal harmonics are defined as
\[\Phi_{j,k}(\zeta,\eta)=\sum_{l=1}^{m_{j,k}}Y_{j,k}^{l}(\zeta)\overline{Y_{j,k }^{l}(\eta)}.\]
It is known from [1] that
\[\Phi_{j,k}(\zeta,\eta)=\Phi_{j,k}(\bar{\zeta}\cdot\eta):=\frac{(k+n-1)!(j+k+n)}{\omega_{2n+1}n!k!}\left(\bar{\zeta}\cdot\eta\right)^{k-j}P_{j}^{(n-1,k-j)}(2|\bar{\zeta}\cdot\eta|^{2}-1) \tag{2.3}\]
if \(j\leqslant k\), and \(\Phi_{j,k}(\zeta,\eta):=\overline{\Phi_{k,j}(\zeta,\eta)}\) if \(k\leqslant j\), where \(P_{n}^{(\alpha,\beta)}\) are the Jacobi polynomials.
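As a quick numerical consistency check of the dimension formula above (again illustrative only), one can verify that summing \(m_{j,k}\) over \(j+k=h\) recovers the classical dimension of the space of degree-\(h\) spherical harmonics on \(S^{2n+1}\subset\mathbb{R}^{2n+2}\).

```python
# Cross-check: sum over bidegrees (j,k) with j+k = h of dim H_{j,k} equals
# the dimension of degree-h harmonics on the sphere in d = 2n+2 variables,
# namely C(d+h-1, h) - C(d+h-3, h-2).
from math import comb, factorial

def m(j, k, n):
    return (factorial(j + n - 1) * factorial(k + n - 1) * (j + k + n)
            // (factorial(n) * factorial(n - 1) * factorial(j) * factorial(k)))

def dim_harmonics(h, d):
    return comb(d + h - 1, h) - (comb(d + h - 3, h - 2) if h >= 2 else 0)

for n in (1, 2, 3):
    for h in range(8):
        assert sum(m(j, h - j, n) for j in range(h + 1)) == dim_harmonics(h, 2 * n + 2)
print("bidegree dimensions sum correctly")
```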
### Folland-Stein spaces and intertwining operators on CR sphere
The Folland-Stein spaces on \(S^{2n+1}\) can be defined in terms of the powers of the conformal sublaplacian; see [1, 10, 11] for more details. We summarize the main properties below.
It is well-known that for \(Y_{j,k}\in\mathcal{H}^{\mathbb{S}}_{j,k}\),
\[\mathcal{D}Y_{j,k}=\lambda_{j}\lambda_{k}Y_{j,k},\quad\lambda_{j}=j+\frac{n}{2}. \tag{2.4}\]
For \(F\in L^{2}(S^{2n+1})\), we can write
\[F=\sum_{j,k\in\mathbb{N}_{0}}\sum_{l=1}^{m_{j,k}}c_{j,k}^{l}(F)Y_{j,k}^{l}, \quad c_{j,k}^{l}(F)=\int_{S^{2n+1}}F\overline{Y}_{j,k}^{l}d\xi; \tag{2.5}\]
in particular, for \(F\in C^{\infty}(S^{2n+1})\) and any \(\gamma\in\mathbb{R}_{+}\), (2.4) implies that
\[\sum_{j,k\in\mathbb{N}_{0}}\sum_{l=1}^{m_{j,k}}\left(\lambda_{j}\lambda_{k} \right)^{\gamma}\left|c_{j,k}^{l}(F)\right|^{2}<\infty. \tag{2.6}\]
For \(F\in C^{\infty}(S^{2n+1})\) and any \(\gamma\in\mathbb{R}_{+}\), we define
\[\mathcal{D}^{\frac{\gamma}{2}}F=\sum_{j,k\in\mathbb{N}_{0}}\sum_{l=1}^{m_{j,k }}\left(\lambda_{j}\lambda_{k}\right)^{\frac{\gamma}{2}}c_{j,k}^{l}(F)Y_{j,k} ^{l}, \tag{2.7}\]
so that \(\mathcal{D}^{\frac{\gamma}{2}}\) extends naturally to the space of distributions on the sphere. For \(\gamma>0\), \(p\geqslant 1\), we let
\[S^{\gamma,p}=\left\{F\in L^{p}:\mathcal{D}^{\frac{\gamma}{2}}F\in L^{p}\right\},\]
endowed with norm
\[\|F\|_{S^{\gamma,p}}=\|\mathcal{D}^{\frac{\gamma}{2}}F\|_{p};\]
the space \(S^{\gamma,p}\) is the completion of \(C^{\infty}(S^{2n+1})\) with respect to this norm.
\(S^{\gamma,2}\) is the space of \(F\) in \(L^{2}\) so that (2.6) holds. It is a Hilbert space with inner product and norm
\[(F,G)_{S^{\gamma,2}}=\int_{S^{2n+1}}\mathcal{D}^{\frac{\gamma}{2}}F\overline{ \mathcal{D}^{\frac{\gamma}{2}}G}d\xi,\quad\|F\|_{S^{\gamma,2}}=(F,F)_{S^{ \gamma,2}}^{\frac{1}{2}}.\]
The group \(SU(n+1,1)\) acts as a group of CR automorphisms on \(S^{2n+1}\), and therefore on \(\mathbb{H}^{n}\) by means of Cayley transform. Recall that a CR automorphism is a diffeomorphism \(\tau:S^{2n+1}\to S^{2n+1}\) that preserves the contact form; i.e., \(\tau^{*}\theta_{0}=|J_{\tau}|^{2}\theta_{0}\), where the function \(J_{\tau}\) is defined by
\[\tau^{*}\left(dz_{1}\wedge\cdots\wedge dz_{n+1}\right)=J_{\tau}^{n+2}dz_{1} \wedge\cdots\wedge dz_{n+1}\]
such that \(|J_{\tau}|^{2n+2}\) is the Jacobian determinant of \(\tau\). Denote the CR automorphism group of \(S^{2n+1}\) by \(\mathcal{A}\mathbf{ut}(S^{2n+1})\). The functions \(|J_{\tau}|\) with \(\tau\in\mathcal{A}\mathbf{ut}(S^{2n+1})\) can be parametrized as
\[|J_{\tau}(\zeta)|=\frac{C}{|1-\xi\cdot\zeta|},\quad\zeta\in S^{2n+1},C\in \mathbb{C},\xi\in\mathbb{C}^{n+1},|\xi|<1. \tag{2.8}\]
The conformal sublaplacian \(\mathcal{D}\) is intertwining in the sense that for any \(F\in C^{\infty}(S^{2n+1})\), \(\tau\in\mathcal{A}\mathbf{ut}(S^{2n+1})\),
\[|J_{\tau}|^{\frac{Q+2}{2}}\left(\mathcal{D}F\right)\circ\tau=\mathcal{D}\left( |J_{\tau}|^{\frac{Q-2}{2}}\left(F\circ\tau\right)\right),\quad Q=2n+2.\]
For \(0<\gamma<\frac{Q}{2}\), let \((w,w^{\prime})\in\mathbb{R}^{2}\) such that \(w+w^{\prime}+n+1=\gamma\). The general intertwining operator \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) of order \(2\gamma\) is defined by the following property: for any \(F\in C^{\infty}(S^{2n+1})\), \(\tau\in\mathcal{A}\mathbf{ut}(S^{2n+1})\),
\[J_{\tau}^{\gamma-w}\bar{J}_{\tau}^{\gamma-w^{\prime}}\left(\mathcal{A}_{w,w^{ \prime}}^{\theta_{0}}F\right)\circ\tau=\mathcal{A}_{w,w^{\prime}}^{\theta_{0} }\left(J_{\tau}^{-w}\bar{J}_{\tau}^{-w^{\prime}}\left(F\circ\tau\right)\right) \tag{2.9}\]
In other words, the pullback of \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) by a CR automorphism \(\tau\) satisfies
\[\tau^{*}\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\left(\tau^{-1}\right)^{*}=J_{ \tau}^{-\gamma+w}\bar{J}_{\tau}^{-\gamma+w^{\prime}}\mathcal{A}_{w,w^{\prime }}^{\theta_{0}}J_{\tau}^{-w}\bar{J}_{\tau}^{-w^{\prime}},\quad\tau^{*}F=F\circ\tau.\]
In the same spirit as [1], for the reader's sake in Appendix A we offer a self-contained proof of the spectral characterization of intertwining operators. It is known from Proposition A.1 that a self-adjoint operator satisfying (2.9) is diagonal with respect to the spherical harmonics, and its spectrum is completely determined up to a multiplicative constant by the functions
\[\lambda_{j}(w)=\frac{\Gamma(j+\gamma-w)}{\Gamma(j-w)} \tag{2.10}\]
in the sense that, up to a multiplicative constant, the spectrum is precisely \(\{\lambda_{j}(w)\lambda_{k}(w^{\prime})\}\). From now on we choose this constant to be \(1\); i.e., \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) will be the operator on \(C^{\infty}(S^{2n+1})\) such that
\[\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}Y_{j,k}=\lambda_{j}(w)\lambda_{k}(w^{ \prime})Y_{j,k},\quad Y_{j,k}\in\mathcal{H}_{j,k}^{\mathbb{S}}. \tag{2.11}\]
Observe that when \(w=w^{\prime}\), \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}=\mathcal{A}_{2\gamma}^{\theta_{0}}\) are intertwining operators characterized in [1, Proposition A.1]. In particular, in the case \(\gamma=1\), we have \(\lambda_{j}(-\frac{n}{2})=j+\frac{n}{2}\), and we recover the conformal sublaplacian; i.e., \(\mathcal{A}_{2}^{\theta_{0}}=\mathcal{D}\).
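The normalization is easy to test numerically; the following sketch (illustrative only) confirms that \(\lambda_{j}(-\frac{n}{2})=j+\frac{n}{2}\), so the eigenvalues \(\lambda_{j}\lambda_{k}\) of \(\mathcal{A}_{2}^{\theta_{0}}\) reproduce the spectrum (2.4) of \(\mathcal{D}\).

```python
# Sanity check of the gamma = 1 case: A_2 = D on the CR sphere.
from math import gamma, isclose

def lam(j, g, w):
    """lambda_j(w) = Gamma(j + gamma - w) / Gamma(j - w), as in (2.10)."""
    return gamma(j + g - w) / gamma(j - w)

for n in (1, 2, 4):
    w = -n / 2                      # gamma = 1 forces w = w' = -n/2
    for j in range(6):
        assert isclose(lam(j, 1.0, w), j + n / 2, rel_tol=1e-12)
print("lambda_j(-n/2) = j + n/2, matching (2.4)")
```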
## 3. Improved CR Sobolev inequalities
In this section, we improve the CR Sobolev inequalities on \(S^{2n+1}\) under the vanishing of higher order moments of the volume element. We start from the concentration compactness principle on the CR unit sphere.
**Lemma 3.1**.: _Let \((S^{2n+1},T^{1,0}S^{2n+1},\theta_{0})\) be the CR unit sphere with the volume element \(d\xi=\theta_{0}\wedge(d\theta_{0})^{n}\). We assume that \(F_{i}\) is a sequence of complex-valued functions bounded in \(S^{\gamma,2}(S^{2n+1})\), \(\gamma\in(0,n+1)\) and \(p=\frac{2Q}{Q-2\gamma}\), such that \(F_{i}\rightharpoonup F\) weakly in \(S^{\gamma,2}(S^{2n+1})\). Moreover, we assume as measures,_
\[|F_{i}|^{p}d\xi\to|F|^{p}d\xi+\nu,\quad\bar{F}_{i}\mathcal{A}_{2\gamma}^{\theta _{0}}(F_{i})d\xi\to\bar{F}\mathcal{A}_{2\gamma}^{\theta_{0}}(F)d\xi+\sigma. \tag{3.1}\]
_Then we can find countably many points \(\{x_{i}\}\subset S^{2n+1}\) such that_
\[\nu=\sum_{i}\nu_{i}\delta_{x_{i}},\quad\nu_{i}^{\frac{2}{p}}\leqslant C_{n,2 \gamma}\sigma_{i}\]
_where \(\nu_{i}=\nu(x_{i})\) and \(\sigma_{i}=\sigma(x_{i})\)._
Proof.: The proof is entirely analogous to that of Lemma 3.1 in [10]. We leave it to the reader.
**Remark 3.2**.: The dual version of Lemma 3.1 on \(\mathbb{H}^{n}\) was mentioned in [14].
Now we can prove Theorem 1.1 with the help of Lemma 3.1.
Proof of Theorem 1.1.: Let
\[\alpha=\frac{C_{n,2\gamma}}{\Theta(j,k;\frac{Q-2\gamma}{Q},2n+1)}+\epsilon.\]
If (1.5) is not true, then for any \(l\in\mathbb{N}\), we can find an \(F_{l}\in S^{\gamma,2}(S^{2n+1})\) such that
\[\int_{S^{2n+1}}g|F_{l}|^{p}d\xi=0 \tag{3.2}\]
for all \(g\in\overline{\mathcal{P}}_{j,k}\), and
\[\left(\int_{S^{2n+1}}|F_{l}|^{p}d\xi\right)^{\frac{2}{p}}>\alpha\int_{S^{2n+1} }\bar{F}_{l}\mathcal{A}_{2\gamma}^{\theta_{0}}(F_{l})d\xi+l\int_{S^{2n+1}}|F_{ l}|^{2}d\xi. \tag{3.3}\]
We may assume
\[\left(\int_{S^{2n+1}}|F_{l}|^{p}d\xi\right)^{\frac{2}{p}}=1.\]
Then
\[\int_{S^{2n+1}}\bar{F}_{l}\mathcal{A}_{2\gamma}^{\theta_{0}}(F_{l})d\xi<\frac {1}{\alpha},\quad\int_{S^{2n+1}}|F_{l}|^{2}d\xi<\frac{1}{l}. \tag{3.4}\]
It follows that \(F_{l}\rightharpoonup 0\) weakly in \(S^{\gamma,2}(S^{2n+1})\). After passing to a subsequence we have
\[\bar{F}_{l}\mathcal{A}_{2\gamma}^{\theta_{0}}(F_{l})d\xi\to\sigma,\quad|F_{l}|^{ p}d\xi\to\nu. \tag{3.5}\]
By Lemma 3.1 we can find countably many points \(\{x_{i}\}\in S^{2n+1}\) such that
\[\nu=\sum_{i}\nu_{i}\delta_{x_{i}},\quad\nu_{i}^{\frac{2}{p}}\leqslant C_{n,2 \gamma}\sigma_{i}, \tag{3.6}\]
where \(\nu_{i}=\nu(x_{i})\), \(\sigma_{i}=\sigma(x_{i})\). Then
\[\nu(S^{2n+1})=1,\quad\sigma(S^{2n+1})<\frac{1}{\alpha}. \tag{3.7}\]
It follows from (3.2) and (3.5) that \(\int_{S^{2n+1}}gd\nu=0\) for all \(g\in\overline{\mathcal{P}}_{j,k}\), hence \(\nu\in\mathcal{M}_{j,k}^{c}(S^{2n+1})\). By definition of \(\Theta(j,k;\theta,2n+1)\), (3.6) and (3.7) we have
\[\Theta(j,k;\frac{Q-2\gamma}{Q},2n+1)\leqslant\sum_{i}\nu_{i}^{\frac{2}{p}} \leqslant\sum_{i}C_{n,2\gamma}\sigma_{i}=C_{n,2\gamma}\sigma(S^{2n+1}) \leqslant\frac{C_{n,2\gamma}}{\alpha}.\]
Hence
\[\alpha\leqslant\frac{C_{n,2\gamma}}{\Theta(j,k;\frac{Q-2\gamma}{Q},2n+1)}.\]
This contradicts the choice of \(\alpha\).
## 4. Sharp CR Sobolev inequalities
The main result in this section is the classification of optimizers of the sharp Sobolev inequalities (1.2). First, by improved Sobolev inequalities Corollary 1.2, we prove the existence of an optimizer.
**Proposition 4.1**.: _Denote \(Q=2n+2\), \(p=\frac{2Q}{Q-2\gamma}\). Then the sharp constant in (1.2) is attained. Moreover, for any minimizing sequence \(\{F_{i}\}\) there is a subsequence \(\{F_{i_{m}}\}\) and a sequence \(\{\Phi_{i_{m}}\}\) in the CR automorphism group \(\mathcal{A}\mathbf{ut}(S^{2n+1})\) of \(S^{2n+1}\) such that_
\[F_{i_{m}}^{\Phi}=|J_{\Phi_{i_{m}}}|^{\frac{1}{p}}\Phi_{i_{m}}^{*}F_{i_{m}} \tag{4.1}\]
_converges strongly in \(S^{\gamma,2}(S^{2n+1})\), where \(|J_{\Phi_{i_{m}}}|\) is the determinant of the Jacobian of \(\Phi_{i_{m}}\)._
Proof.: By Lemma B.1 in [12], for each \(F_{i}\), there exists an element \(\Phi_{i}\) of \(\mathcal{A}\mathbf{ut}(S^{2n+1})\) such that \(F_{i}^{\Phi}\) defined in (4.1) satisfies the balanced conditions (1.6). By CR invariance, we may replace \(F_{i}\) by \(F_{i}^{\Phi}\) in (1.2). Thus, \(\{F_{i}^{\Phi}\}\) is also a minimizing sequence. Passing to a subsequence \(\{F_{i_{m}}\}\), we may assume that \(F_{i_{m}}^{\Phi}\rightharpoonup F\) weakly in \(S^{\gamma,2}(S^{2n+1})\) and that \(F_{i_{m}}^{\Phi}\to F\) strongly in \(L^{2}(S^{2n+1})\).
Without loss of generality, we may assume \(\int_{S^{2n+1}}|F_{i_{m}}^{\Phi}|^{p}d\xi=1\) for all \(i_{m}\). By Corollary 1.2, we can choose a sufficiently small \(\epsilon\) such that as \(i_{m}\to\infty\),
\[1=\int_{S^{2n+1}}|F_{i_{m}}^{\Phi}|^{p}d\xi \leqslant\left(\frac{C_{n,2\gamma}}{2^{\frac{\gamma}{n+1}}}+ \epsilon\right)\int_{S^{2n+1}}\bar{F}_{i_{m}}^{\Phi}\mathcal{A}_{2\gamma}^{ \theta_{0}}(F_{i_{m}}^{\Phi})d\xi+C(\epsilon)\int_{S^{2n+1}}|F_{i_{m}}^{\Phi} |^{2}d\xi,\] \[\leqslant(C_{n,2\gamma}-\epsilon)\int_{S^{2n+1}}\bar{F}_{i_{m}}^{ \Phi}\mathcal{A}_{2\gamma}^{\theta_{0}}(F_{i_{m}}^{\Phi})d\xi+C(\epsilon)\int _{S^{2n+1}}|F_{i_{m}}^{\Phi}|^{2}d\xi,\] \[\to\left(1-\frac{\epsilon}{C_{n,2\gamma}}\right)+C(\epsilon)\int _{S^{2n+1}}|F|^{2}d\xi, \tag{4.2}\]
which implies \(F\neq 0\).
By [16, Lemma 2.6], we have
\[1=\|F_{i_{m}}^{\Phi}\|_{p}^{p}=\|F\|_{p}^{p}+\|F_{i_{m}}^{\Phi}-F\|_{p}^{p}+o( 1), \tag{4.3}\]
Since for \(a,b,c\geqslant 0\) and \(p>2\), \((a^{p}+b^{p}+c^{p})^{\frac{2}{p}}\leqslant a^{2}+b^{2}+c^{2}\), we have
\[\left(\int_{S^{2n+1}}|F_{i_{m}}^{\Phi}|^{p}d\xi\right)^{\frac{2}{p }} -\left(\int_{S^{2n+1}}|F|^{p}d\xi\right)^{\frac{2}{p}}\] \[\leqslant\left(\int_{S^{2n+1}}|F_{i_{m}}^{\Phi}-F|^{p}d\xi\right) ^{\frac{2}{p}}+o(1)\] \[\leqslant C_{n,2\gamma}\int_{S^{2n+1}}\left(\bar{F}_{i_{m}}^{ \Phi}-\bar{F}\right)\mathcal{A}_{2\gamma}^{\theta_{0}}\left(F_{i_{m}}^{\Phi}-F \right)d\xi+o(1)\] \[\leqslant C_{n,2\gamma}\int_{S^{2n+1}}\left(\bar{F}_{i_{m}}^{ \Phi}\mathcal{A}_{2\gamma}^{\theta_{0}}(F_{i_{m}}^{\Phi})-\bar{F}\mathcal{A}_ {2\gamma}^{\theta_{0}}(F)\right)d\xi+o(1). \tag{4.4}\]
Since \(\{F_{i_{m}}^{\Phi}\}\) is a minimizing sequence, as \(i_{m}\to\infty\), we conclude that
\[1-\left(\int_{S^{2n+1}}|F|^{p}d\xi\right)^{\frac{2}{p}}\leqslant 1-C_{n,2 \gamma}\int_{S^{2n+1}}\bar{F}\mathcal{A}_{2\gamma}^{\theta_{0}}(F)d\xi. \tag{4.5}\]
This implies that \(F\) is a minimizer because \(F\neq 0\).
In order to see that the convergence of \(\{F_{i_{m}}^{\Phi}\}\) in \(L^{p}(S^{2n+1})\) is strong, we need to show that \(\|F\|_{p}^{p}=1\). By the weak convergence and (4.3), we may assume that \(\|F\|_{p}^{p}=a\in(0,1]\) and \(\lim\|F_{i_{m}}^{\Phi}-F\|_{p}^{p}=1-a\). The fact that \(F\) is a minimizer implies equality throughout (4.4) in the limit; i.e.,
\[1-a^{\frac{2}{p}}=(1-a)^{\frac{2}{p}}.\]
This gives the conclusion because \(1<a^{\frac{2}{p}}+(1-a)^{\frac{2}{p}}\) for \(a\in(0,1)\).
Moreover, by (4.5), the strong convergence in \(L^{p}(S^{2n+1})\) implies that \(\int_{S^{2n+1}}\bar{F}\mathcal{A}_{2\gamma}^{\theta_{0}}(F)d\xi=C_{n,2\gamma}^{-1}\). Combining with the fact that
\[\begin{split}&\lim_{i_{m}\to\infty}\int_{S^{2n+1}}\left(\bar{F}_{i_{m} }^{\Phi}-\bar{F}\right)\mathcal{A}_{2\gamma}^{\theta_{0}}\left(F_{i_{m}}^{\Phi }-F\right)d\xi+\int_{S^{2n+1}}\bar{F}\mathcal{A}_{2\gamma}^{\theta_{0}}(F)d\xi \\ =&\lim_{i_{m}\to\infty}\int_{S^{2n+1}}\bar{F}_{i_{m} }^{\Phi}\mathcal{A}_{2\gamma}^{\theta_{0}}(F_{i_{m}}^{\Phi})d\xi=C_{n,2\gamma} ^{-1},\end{split} \tag{4.6}\]
we conclude that
\[\lim_{i_{m}\to\infty}\int_{S^{2n+1}}\left(\bar{F}_{i_{m}}^{\Phi}-\bar{F}\right) \mathcal{A}_{2\gamma}^{\theta_{0}}\left(F_{i_{m}}^{\Phi}-F\right)d\xi=0. \tag{4.7}\]
By the definition of the Folland-Stein space, (4.7) is equivalent to \(\|F_{i_{m}}^{\Phi}-F\|_{S^{\gamma,2}}\to 0\) as \(i_{m}\to\infty\), which gives the strong convergence of \(\{F_{i_{m}}^{\Phi}\}\) in \(S^{\gamma,2}(S^{2n+1})\).
**Corollary 4.2**.: _Let \((S^{2n+1},T^{1,0}S^{2n+1},\theta_{0})\) be the CR unit sphere with the volume element \(d\xi\). Suppose that \(u\) is a positive local minimizer of_
\[Y_{\gamma}(S^{2n+1})=\inf\left\{A_{\gamma}^{\theta_{0}}(F);F\in S^{\gamma,2}( S^{2n+1}),B_{\gamma}^{\theta_{0}}(F)=1\right\},\quad 0<\gamma<n+1,\]
_where \(A_{\gamma}^{\theta_{0}}(F)\) and \(B_{\gamma}^{\theta_{0}}(F)\) are defined by_
\[A_{\gamma}^{\theta_{0}}(F) :=\int_{S^{2n+1}}\bar{F}\mathcal{A}_{2\gamma}^{\theta_{0}}(F)d \xi,\quad F\in S^{\gamma,2}(S^{2n+1}),\] \[B_{\gamma}^{\theta_{0}}(F) :=\int_{S^{2n+1}}|F|^{p}d\xi,\quad p=\frac{2Q}{Q-2\gamma}.\]
_Then there is an element \(\Phi\) of the CR automorphism group \(\mathcal{A}\mathbf{ut}(S^{2n+1})\) of \(S^{2n+1}\) such that_
\[u^{\Phi}=|J_{\Phi}|^{\frac{1}{p}}\Phi^{*}u \tag{4.8}\]
_is a positive minimizer of \(Y_{\gamma}(S^{2n+1})\) which satisfies (1.6), where \(|J_{\Phi}|\) is the determinant of the Jacobian of \(\Phi\)._
Proof.: Since \(A_{\gamma}^{\theta_{0}}\) and \(B_{\gamma}^{\theta_{0}}\) are CR covariant, \(u\) is a local minimizer of \(Y_{\gamma}(S^{2n+1})\) if and only if \(u^{\Phi}\) is a local minimizer of \(Y_{\gamma}(S^{2n+1})\) for each \(\Phi\in\mathcal{A}\mathbf{ut}(S^{2n+1})\). Since \(u\) is positive, \(B_{\gamma}^{\theta_{0}}(u)\neq 0\). It follows from [11, Lemma B.1] that there is a \(\Phi\in\mathcal{A}\mathbf{ut}(S^{2n+1})\) such that \(u^{\Phi}\) satisfies (1.6).
Next, we classify optimizers of the sharp Sobolev inequalities (1.2) following the Frank-Lieb argument developed in [10]. As explained in the introduction, the proof of Theorem 1.4 consists of three ingredients. The desired spectral estimate is given in the following result.
**Proposition 4.3**.: _Let \((S^{2n+1},T^{1,0}S^{2n+1},\theta_{0})\) be the CR unit sphere with the volume element \(d\xi\). Suppose that \(u\) is a positive local minimizer of \(Y_{\gamma}(S^{2n+1})\). Suppose additionally that_
\[\int_{S^{2n+1}}z_{l}u^{p}d\xi=0,\quad l=1,\cdots,n+1, \tag{4.9}\]
_where \(z_{1},\cdots,z_{n+1}\) are coordinates on \(\mathbb{C}^{n+1}\) and we regard \(S^{2n+1}\subset\mathbb{C}^{n+1}\) as the unit sphere. Then_
\[\sum_{l=1}^{n+1}\int_{S^{2n+1}}\bar{z}_{l}u[\mathcal{A}_{2\gamma}^{\theta_{0}},z_{l}](u)d\xi\geqslant(p-2)\int_{S^{2n+1}}u\mathcal{A}_{2\gamma}^{\theta_{0} }(u)d\xi, \tag{4.10}\]
_where_
\[[\mathcal{A}_{2\gamma}^{\theta_{0}},z_{l}](u):=\mathcal{A}_{2\gamma}^{\theta_ {0}}(z_{l}u)-z_{l}\mathcal{A}_{2\gamma}^{\theta_{0}}(u).\]
We first prove the commutator identity involving the intertwining operators needed to execute the Frank-Lieb argument.
Proof of Theorem 1.3.: For \(Y_{j,k}\in\mathcal{H}_{j,k}^{\mathbb{S}}\), we denote \(\mathcal{Y}_{j,k}\in\mathcal{H}_{j,k}\) the homogeneous extension of \(Y_{j,k}\) in \(\mathbb{C}^{n+1}\).
It follows from the algorithm in [1] that for each coordinate function \(z_{l}\) on \(\mathbb{C}^{n+1}\), we have the decomposition
\[z_{l}\mathcal{Y}_{j,k}=z_{l}\mathcal{Y}_{j,k}-\frac{|z|^{2}}{n+j+k}\frac{ \partial}{\partial\bar{z}_{l}}\mathcal{Y}_{j,k}+\frac{|z|^{2}}{n+j+k}\frac{ \partial}{\partial\bar{z}_{l}}\mathcal{Y}_{j,k}, \tag{4.11}\]
where
\[z_{l}\mathcal{Y}_{j,k}-\frac{|z|^{2}}{n+j+k}\frac{\partial}{\partial\bar{z}_{l }}\mathcal{Y}_{j,k}\in\mathcal{H}_{j+1,k},\qquad\frac{\partial}{\partial\bar{ z}_{l}}\mathcal{Y}_{j,k}\in\mathcal{H}_{j,k-1}.\]
Combining this with (2.11) yields
\[\begin{split}\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\left(z_{l}Y_ {j,k}\right)&=\lambda_{j+1}(w)\lambda_{k}(w^{\prime})\left(z_{l} \mathcal{Y}_{j,k}-\frac{|z|^{2}}{n+j+k}\frac{\partial}{\partial\bar{z}_{l}} \mathcal{Y}_{j,k}\right)\bigg{|}_{S^{2n+1}}\\ &+\frac{\lambda_{j}(w)\lambda_{k-1}(w^{\prime})}{n+j+k}\left(|z|^ {2}\frac{\partial}{\partial\bar{z}_{l}}\mathcal{Y}_{j,k}\right)\bigg{|}_{S^{2n +1}}.\end{split} \tag{4.12}\]
Applying (4.12) to each \(z_{l}\), \(l=1,\cdots,n+1\), multiplying by \(\bar{z}_{l}\) and taking the sum, we obtain
\[\begin{split}&\left(\sum_{l=1}^{n+1}\bar{z}_{l}\left[\mathcal{A} _{w,w^{\prime}}^{\theta_{0}},z_{l}\right]\right)Y_{j,k}=\left(\lambda_{j+1}(w )\lambda_{k}(w^{\prime})-\lambda_{j}(w)\lambda_{k}(w^{\prime})\right)Y_{j,k} \\ &+\frac{\lambda_{j}(w)\lambda_{k-1}(w^{\prime})-\lambda_{j+1}(w) \lambda_{k}(w^{\prime})}{n+j+k}\left(\sum_{l=1}^{n+1}\bar{z}_{l}\frac{\partial }{\partial\bar{z}_{l}}\mathcal{Y}_{j,k}\right)\bigg{|}_{S^{2n+1}}.\end{split} \tag{4.13}\]
By the explicit value of spectrum (2.10), we have
\[\begin{split}\lambda_{j+1}(w)\lambda_{k}(w^{\prime})&= \frac{j+\gamma-w}{j-w}\lambda_{j}(w)\lambda_{k}(w^{\prime}),\\ \lambda_{j}(w)\lambda_{k-1}(w^{\prime})&=\frac{k-1-w ^{\prime}}{k-1+\gamma-w^{\prime}}\lambda_{j}(w)\lambda_{k}(w^{\prime}).\end{split} \tag{4.14}\]
Moreover, notice that
\[\left(\sum_{l=1}^{n+1}\bar{z}_{l}\frac{\partial}{\partial\bar{z}_{l}}\mathcal{ Y}_{j,k}\right)\bigg{|}_{S^{2n+1}}=\overline{Z}\mathcal{Y}_{j,k}\bigg{|}_{S^{2n+1}}= kY_{j,k}. \tag{4.15}\]
Plugging (4.14) and (4.15) into (4.13) yields
\[\left(\sum_{l=1}^{n+1}\bar{z}_{l}\left[\mathcal{A}_{w,w^{\prime}} ^{\theta_{0}},z_{l}\right]\right)Y_{j,k} =\frac{\gamma(\gamma-1-w^{\prime})}{\left(j-w\right)\left(k-1+ \gamma-w^{\prime}\right)}\lambda_{j}(w)\lambda_{k}(w^{\prime})Y_{j,k}\] \[=\gamma(\gamma-1-w^{\prime})\cdot\frac{\lambda_{j}(w)}{j-w}\cdot \frac{\lambda_{k}(w^{\prime})}{k-1+\gamma-w^{\prime}}Y_{j,k}\] \[=\gamma(\gamma-1-w^{\prime})\lambda_{j}(w-1)\lambda_{k}(w^{\prime })Y_{j,k}\] \[=\gamma(\gamma-1-w^{\prime})\mathcal{A}_{w-1,w^{\prime}}^{\theta_ {0}}Y_{j,k}. \tag{4.16}\]
By Proposition A.1, the desired result follows from (4.16).
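Although the proof is purely algebraic, the identity behind (4.16) can also be tested numerically. The sketch below (a sanity check with sample parameters, not a substitute for the computation) evaluates both sides after inserting (4.14) and (4.15) into (4.13).

```python
# Numerical test of the coefficient identity underlying Theorem 1.3:
# (lam_{j+1}(w) - lam_j(w)) lam_k(w')
#   + k/(n+j+k) * (lam_j(w) lam_{k-1}(w') - lam_{j+1}(w) lam_k(w'))
#   = gamma (gamma - 1 - w') lam_j^{(gamma-1)}(w-1) lam_k^{(gamma-1)}(w').
from math import gamma as G, isclose

def lam(j, g, w):
    return G(j + g - w) / G(j - w)

n, g, w = 2, 1.3, -0.4
wp = g - n - 1 - w                  # enforce w + w' + n + 1 = gamma

for j in range(4):
    for k in range(1, 5):
        lhs = ((lam(j + 1, g, w) - lam(j, g, w)) * lam(k, g, wp)
               + k / (n + j + k)
               * (lam(j, g, w) * lam(k - 1, g, wp)
                  - lam(j + 1, g, w) * lam(k, g, wp)))
        rhs = g * (g - 1 - wp) * lam(j, g - 1, w - 1) * lam(k, g - 1, wp)
        assert isclose(lhs, rhs, rel_tol=1e-10)
print("the commutator identity (1.9) holds on each H_{j,k} tested")
```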
We now turn to the desired spectral estimate. The fact that \(\mathcal{A}_{2\gamma}^{\theta_{0}}\) is formally self-adjoint implies that if \(u_{t}\) is a one-parameter family of functions in \(S^{\gamma,2}(S^{2n+1};\mathbb{R})\) with \(u_{0}=u\), then
\[\frac{d}{dt}\bigg{|}_{t=0}A_{\gamma}^{\theta_{0}}(u_{t}) =2\int_{S^{2n+1}}\dot{u}\mathcal{A}_{2\gamma}^{\theta_{0}}(u)d\xi, \tag{4.17}\] \[\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}A_{\gamma}^{\theta_{0}}(u_{t}) =2\int_{S^{2n+1}}\dot{u}\mathcal{A}_{2\gamma}^{\theta_{0}}(\dot{u})d\xi+2\int_{S^{2n+1}}\ddot{u}\mathcal{A}_{2\gamma}^{\theta_{0}}(u)d\xi. \tag{4.18}\]
for \(\dot{u}:=\frac{\partial}{\partial t}\big{|}_{t=0}u_{t}\) and \(\ddot{u}:=\frac{\partial^{2}}{\partial t^{2}}\big{|}_{t=0}u_{t}\).
Proof of Proposition 4.3.: Let \(v\in C^{\infty}(S^{2n+1};\mathbb{R})\) be such that
\[\int_{S^{2n+1}}vu^{p-1}d\xi=0. \tag{4.19}\]
Then \(u_{t}=\|u+tv\|_{p}^{-1}(u+tv)\) defines a smooth curve with \(B_{\gamma}^{\theta_{0}}(u_{t})=1\) and \(u_{0}=u\), \(\dot{u}_{t}\big{|}_{t=0}=v\). Since \(u\) is a critical point of \(A_{\gamma}^{\theta_{0}}\), it follows from (4.17) that
\[\mathcal{A}_{2\gamma}^{\theta_{0}}(u)=A_{\gamma}^{\theta_{0}}(u)u^{p-1}.\]
Since \(u\) is a local minimizer, \(\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}A_{\gamma}^{\theta_{0}}(u_{t})\geqslant 0\). Expanding this using (4.18) and the above display yields
\[\int_{S^{2n+1}}v\mathcal{A}_{2\gamma}^{\theta_{0}}(v)d\xi\geqslant(p-1)A_{ \gamma}^{\theta_{0}}(u)\int_{S^{2n+1}}u^{p-2}v^{2}d\xi. \tag{4.20}\]
The assumption (4.9) implies that for each coordinate function \(z_{l}=x_{l}+iy_{l}\), \(1\leqslant l\leqslant n+1\), the functions \(x_{l}u\) and \(y_{l}u\) satisfy (4.19). Since \(\mathcal{A}_{2\gamma}^{\theta_{0}}\) is self-adjoint, it follows from (4.20) that
\[\sum_{l=1}^{n+1}\int_{S^{2n+1}}\bar{z}_{l}u\mathcal{A}_{2\gamma}^ {\theta_{0}}(z_{l}u)d\xi =\sum_{l=1}^{n+1}\int_{S^{2n+1}}\left(x_{l}u\mathcal{A}_{2\gamma} ^{\theta_{0}}(x_{l}u)+y_{l}u\mathcal{A}_{2\gamma}^{\theta_{0}}(y_{l}u)\right)d\xi\] \[\geqslant(p-1)A_{\gamma}^{\theta_{0}}(u)\int_{S^{2n+1}}\sum_{l=1 }^{n+1}(x_{l}^{2}+y_{l}^{2})u^{p}d\xi\] \[=(p-1)A_{\gamma}^{\theta_{0}}(u).\]
The final conclusion follows from the definition of \([\mathcal{A}_{2\gamma}^{\theta_{0}},z_{l}]\).
Proposition 4.3 and Corollary 4.2 reduce the problem of classifying positive local minimizers of \(A_{\gamma}^{\theta_{0}}\) to the problem of showing that the only functions which satisfy (4.10) are the constants. This can be done by using the commutator identity in Theorem 1.3.
Proof of Theorem 1.4.: All computations in this proof are carried out with respect to the CR unit sphere \((S^{2n+1},T^{1,0}S^{2n+1},\theta_{0})\) with the volume element \(d\xi\). As discussed in [12, Theorem 1.3], we may assume that the optimizer \(u\) of (1.2) is a positive real function; i.e. \(u\) is a positive real minimizer of
\[A_{\gamma}^{\theta_{0}}(u)=\int_{S^{2n+1}}u\mathcal{A}_{2\gamma}^{\theta_{0}}(u)d\xi\]
with \(B_{\gamma}^{\theta_{0}}(u)=1\). By Corollary 4.2 we may assume that \(u\) satisfies (4.9). We conclude from Proposition 4.3 that
\[\sum_{l=1}^{n+1}\int_{S^{2n+1}}\bar{z}_{l}u[\mathcal{A}_{2\gamma}^{\theta_{0}},z_{l}](u)d\xi\geqslant(p-2)\int_{S^{2n+1}}u\mathcal{A}_{2\gamma}^{\theta_{0}}(u)d\xi. \tag{4.21}\]
Combining this with Theorem 1.3 yields
\[0\geqslant\int_{S^{2n+1}}u\left((p-2)\mathcal{A}_{2\gamma}^{\theta_{0}}-\gamma (\gamma-1-w)\mathcal{A}_{w-1,w}^{\theta_{0}}\right)(u)d\xi,\quad w=\frac{ \gamma-n-1}{2}. \tag{4.22}\]
Direct computation shows that for \(Y_{j,k}\in\mathcal{H}^{\mathbb{S}}_{j,k}\),
\[\int_{S^{2n+1}}\overline{Y}_{j,k}\left((p-2)\mathcal{A}^{\theta_{0}}_{2\gamma}-\gamma(\gamma-1-w)\mathcal{A}^{\theta_{0}}_{w-1,w}\right)(Y_{j,k})\,d\xi\] \[=\left(\frac{2\gamma}{n+1-\gamma}-\frac{\gamma(n+\gamma-1)}{2(j+\frac{n+1-\gamma}{2})(k+\frac{n+\gamma-1}{2})}\right)\lambda_{j}(w)\lambda_{k}(w)\] \[\geqslant\left(\frac{2\gamma}{n+1-\gamma}-\frac{\gamma(n+\gamma-1)}{2(\frac{n+1-\gamma}{2})(\frac{n+\gamma-1}{2})}\right)\lambda_{j}(w)\lambda_{k}(w)=0,\]
where equality holds if and only if \((j,k)=(0,0)\). We conclude that the operator in (4.22) is nonnegative with kernel exactly equal to the constant functions. Combining this with (4.21) yields that \(u\) is constant. The final conclusion follows from the fact that if \(\Phi\in\mathcal{A}\mathbf{ut}(S^{2n+1})\), then
\[|J_{\Phi}|^{Q}(\eta)=\frac{C}{|1-\xi\cdot\bar{\eta}|^{Q}},\quad\eta\in S^{2n+1}\]
for some \(C>0,\xi\in\mathbb{C}^{n+1},|\xi|<1\).
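The positivity used in the last display can be double-checked numerically; the following sketch (illustrative only, for sample \(n\) and \(\gamma\)) verifies that the multiplier vanishes at \((j,k)=(0,0)\) and is strictly positive otherwise.

```python
# The multiplier of lambda_j(w) lambda_k(w) in the last display of the proof.
n, g = 3, 1.7                        # sample values with 0 < gamma < n + 1

def coef(j, k):
    return (2 * g / (n + 1 - g)
            - g * (n + g - 1)
            / (2 * (j + (n + 1 - g) / 2) * (k + (n + g - 1) / 2)))

assert abs(coef(0, 0)) < 1e-12       # vanishes exactly at (j,k) = (0,0)
assert all(coef(j, k) > 0 for j in range(10) for k in range(10) if (j, k) != (0, 0))
print("kernel of the operator in (4.22) = constants")
```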
## 5. Classical sharp Sobolev inequalities
The main result in this section is the proof of Theorem 1.6 and the classification of optimizers of the classical sharp Sobolev inequalities (1.11).
Proof of Theorem 1.6.: For \(Y_{h}\in\mathcal{H}^{\mathbb{S}}_{h}\), we denote \(\mathcal{Y}_{h}\in\mathcal{H}_{h}\) the homogeneous extension of \(Y_{h}\) in \(\mathbb{R}^{n+1}\).
It follows from the algorithm in [1] that for each coordinate function \(x_{j}\) on \(\mathbb{R}^{n+1}\), we have the decomposition
\[x_{j}\mathcal{Y}_{h}=x_{j}\mathcal{Y}_{h}+\frac{|x|^{2}}{1-n-2h}\frac{\partial }{\partial x_{j}}\mathcal{Y}_{h}-\frac{|x|^{2}}{1-n-2h}\frac{\partial}{ \partial x_{j}}\mathcal{Y}_{h}, \tag{5.1}\]
where
\[x_{j}\mathcal{Y}_{h}+\frac{|x|^{2}}{1-n-2h}\frac{\partial}{\partial x_{j}} \mathcal{Y}_{h}\in\mathcal{H}_{h+1},\qquad\frac{\partial}{\partial x_{j}} \mathcal{Y}_{h}\in\mathcal{H}_{h-1}.\]
Combining this with the fact that for \(Y_{h}\in\mathcal{H}^{\mathbb{S}}_{h}\),
\[P^{g_{c}}_{2\gamma}Y_{h}=\mu_{h}(2\gamma)Y_{h},\qquad\mu_{h}(2\gamma)=\frac{ \Gamma(h+\frac{n}{2}+\gamma)}{\Gamma(h+\frac{n}{2}-\gamma)}, \tag{5.2}\]
we obtain
\[\begin{split} P_{2\gamma}^{g_{c}}\left(x_{j}Y_{h}\right)=& \mu_{h+1}(2\gamma)\left(x_{j}\mathcal{Y}_{h}+\frac{|x|^{2}}{1-n-2h}\frac{ \partial}{\partial x_{j}}\mathcal{Y}_{h}\right)\bigg{|}_{S^{n}}\\ &-\mu_{h-1}(2\gamma)\left(\frac{|x|^{2}}{1-n-2h}\frac{\partial}{ \partial x_{j}}\mathcal{Y}_{h}\right)\bigg{|}_{S^{n}}.\end{split} \tag{5.3}\]
Applying (5.3) to each \(x_{j}\), \(j=1,\cdots,n+1\), multiplying by \(x_{j}\) and taking the sum, we have
\[\left(\sum_{j=1}^{n+1}x_{j}\left[P_{2\gamma}^{g_{c}},x_{j}\right] \right)Y_{h} =\left(\mu_{h+1}(2\gamma)-\mu_{h}(2\gamma)\right)Y_{h}\] \[+\frac{\mu_{h+1}(2\gamma)-\mu_{h-1}(2\gamma)}{1-n-2h}\left(|x|^{ 2}\sum_{j=1}^{n+1}\frac{\partial}{\partial x_{j}}\mathcal{Y}_{h}\right)\bigg{|} _{S^{n}}. \tag{5.4}\]
By the explicit value (5.2) of \(\mu_{h}(2\gamma)\), we have
\[\begin{split}\mu_{h-1}(2\gamma)&=\left(h+\frac{n}{ 2}-\gamma\right)\left(h+\frac{n}{2}-\gamma-1\right)\mu_{h}(2(\gamma-1)),\\ \mu_{h}(2\gamma)&=\left(h+\frac{n}{2}-\gamma\right) \left(h+\frac{n}{2}+\gamma-1\right)\mu_{h}(2(\gamma-1)),\\ \mu_{h+1}(2\gamma)&=\left(h+\frac{n}{2}+\gamma\right) \left(h+\frac{n}{2}+\gamma-1\right)\mu_{h}(2(\gamma-1)).\end{split} \tag{5.5}\]
Moreover, notice that
\[\left(\sum_{j=1}^{n+1}x_{j}\frac{\partial}{\partial x_{j}}\mathcal{Y}_{h} \right)\bigg{|}_{S^{n}}=hY_{h}. \tag{5.6}\]
Plugging (5.5) and (5.6) into (5.4) yields
\[\left(\sum_{j=1}^{n+1}x_{j}\left[P_{2\gamma}^{g_{c}},x_{j}\right]\right)Y_{h} =\gamma(n+2\gamma-2)P_{2(\gamma-1)}^{g_{c}}Y_{h}. \tag{5.7}\]
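The eigenvalue identity just established is easy to confirm numerically; the following Python sketch (sample parameters, purely a check) evaluates both sides of the coefficient computation leading to (5.7) on each \(\mathcal{H}_{h}^{\mathbb{S}}\).

```python
# Numerical test of (5.7) on H_h:
#   mu_{h+1}(2g) - mu_h(2g) + h (mu_{h+1}(2g) - mu_{h-1}(2g)) / (1 - n - 2h)
#     = g (n + 2g - 2) mu_h(2(g-1)).
from math import gamma as G, isclose

def mu(h, n, g):
    return G(h + n / 2 + g) / G(h + n / 2 - g)

for n in (4, 6):
    for g in (0.6, 1.1):             # 0 < gamma < n/2
        for h in range(1, 7):
            lhs = (mu(h + 1, n, g) - mu(h, n, g)
                   + h * (mu(h + 1, n, g) - mu(h - 1, n, g)) / (1 - n - 2 * h))
            rhs = g * (n + 2 * g - 2) * mu(h, n, g - 1)
            assert isclose(lhs, rhs, rel_tol=1e-10)
print("(5.7) verified on each H_h tested")
```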
Let
\[R_{2\gamma}^{g_{c}}:=\sum_{j=1}^{n+1}x_{j}\left[P_{2\gamma}^{g_{c}},x_{j} \right]-\gamma(n+2\gamma-2)P_{2(\gamma-1)}^{g_{c}}.\]
(5.7) implies that \(R_{2\gamma}^{g_{c}}=0\) on \(\operatorname{Span}\left\{\bigoplus_{h\in\mathbb{N}}\mathcal{H}_{h}^{\mathbb{ S}}\right\}\). Under the standard inner product \((\cdot,\cdot)\) in \(L^{2}(S^{n})\), we have that for any \(H\in\operatorname{Span}\left\{\bigoplus_{h\in\mathbb{N}}\mathcal{H}_{h}^{ \mathbb{S}}\right\}\), \(F\in C^{\infty}(S^{n})\),
\[\left(R_{2\gamma}^{g_{c}}(F),H\right)=\left(F,R_{2\gamma}^{g_{c}}(H)\right)=0,\]
where we use the self-adjointness of \(R^{g_{c}}_{2\gamma}\). By the continuity of the inner product and the fact that \(L^{2}(S^{n})=\overline{\operatorname{Span}\left\{\bigoplus_{h\in\mathbb{N}} \mathcal{H}^{\mathbb{S}}_{h}\right\}}\), we obtain that
\[\left(R^{g_{c}}_{2\gamma}(F),G\right)=0,\quad\forall F\in C^{\infty}(S^{n}),G \in L^{2}(S^{n}),\]
which implies that \(R^{g_{c}}_{2\gamma}=0\) on \(C^{\infty}(S^{n})\).
The following proposition is a direct consequence of Lemma 4.2 in [10].
**Proposition 5.1**.: _Let \((S^{n},g_{c})\) be the standard sphere with the canonical metric \(g_{c}\) and \(P^{g_{c}}_{2\gamma}\) be the intertwining operator of order \(2\gamma\). Suppose that \(u\) is a positive local minimizer of_
\[Y_{\gamma}(S^{n})=\inf\left\{A^{g_{c}}_{\gamma}(F);F\in W^{\gamma,2}(S^{n}),B^ {g_{c}}_{\gamma}(F)=1\right\},\quad 0<\gamma<\frac{n}{2},\]
_where \(A^{g_{c}}_{\gamma}(F)\) and \(B^{g_{c}}_{\gamma}(F)\) are defined by_
\[A^{g_{c}}_{\gamma}(F) :=\int_{S^{n}}FP^{g_{c}}_{2\gamma}(F)d\sigma,\quad F\in W^{ \gamma,2}(S^{n}),\] \[B^{g_{c}}_{\gamma}(F) :=\int_{S^{n}}|F|^{p}d\sigma,\quad p=\frac{2n}{n-2\gamma}.\]
_Then there is an element \(\Phi\) of the conformal group \(\operatorname{Conf}(S^{n})\) of \(S^{n}\) such that_
\[u^{\Phi}=|J_{\Phi}|^{\frac{1}{p}}\Phi^{*}u \tag{5.8}\]
_is a positive minimizer of \(Y_{\gamma}(S^{n})\) which satisfies_
\[\int_{S^{n}}x_{l}|F|^{p}d\sigma=0,\quad l=1,\cdots,n+1, \tag{5.9}\]
_where \(|J_{\Phi}|\) is the determinant of the Jacobian of \(\Phi\)._
By an argument similar to that of Proposition 4.3, we obtain the desired spectral estimate as follows.
**Proposition 5.2**.: _Let \((S^{n},g_{c})\) be the standard sphere with the canonical metric \(g_{c}\) and \(P^{g_{c}}_{2\gamma}\) be the intertwining operator of order \(2\gamma\). Suppose that \(u\) is a positive local minimizer of \(Y_{\gamma}(S^{n})\). Suppose additionally that_
\[\int_{S^{n}}x_{l}u^{p}d\sigma=0,\quad l=1,\cdots,n+1, \tag{5.10}\]
_where \(x_{1},\cdots,x_{n+1}\) are coordinates on \(\mathbb{R}^{n+1}\). Then_
\[\sum_{l=1}^{n+1}\int_{S^{n}}x_{l}u[P^{g_{c}}_{2\gamma},x_{l}](u)d\sigma\geqslant (p-2)\int_{S^{n}}uP^{g_{c}}_{2\gamma}(u)d\sigma, \tag{5.11}\]
_where_
\[[P^{g_{c}}_{2\gamma},x_{l}](u):=P^{g_{c}}_{2\gamma}(x_{l}u)-x_{l}P^{g_{c}}_{2 \gamma}(u).\]
Now, we can give a new proof of classical sharp Sobolev inequalities (1.11).
#### A new proof of Theorem 1.5
All computations in this proof are carried out with respect to the standard unit sphere \((S^{n},g_{c})\) with the volume element \(d\sigma\). As discussed in [11, Theorem 3.1], we may assume that the optimizer \(u\) of (1.11) is a positive real function; i.e. \(u\) is a positive real minimizer of
\[A_{\gamma}^{g_{c}}(u)=\int_{S^{n}}uP_{2\gamma}^{g_{c}}(u)d\sigma\]
with \(B_{\gamma}^{g_{c}}(u)=1\). By Proposition 5.1 we may assume that \(u\) satisfies (5.9). We conclude from Proposition 5.2 that
\[\sum_{l=1}^{n+1}\int_{S^{n}}x_{l}u[P_{2\gamma}^{g_{c}},x_{l}](u)d\sigma\geqslant (p-2)\int_{S^{n}}uP_{2\gamma}^{g_{c}}(u)d\sigma. \tag{5.12}\]
Combining this with Theorem 1.6 yields
\[0\geqslant\int_{S^{n}}u\left((p-2)P_{2\gamma}^{g_{c}}-\gamma\left(n+2\gamma- 2\right)P_{2(\gamma-1)}^{g_{c}}\right)(u)d\sigma. \tag{5.13}\]
It follows from the explicit form (1.10) of \(P_{2\gamma}^{g_{c}}\) that
\[P_{2\gamma}^{g_{c}}=\left(-\Delta_{S^{n}}+\frac{(n-2\gamma)(n+2\gamma-2)}{4} \right)P_{2(\gamma-1)}^{g_{c}}. \tag{5.14}\]
Hence, by simple calculation, we find that
\[(p-2)P_{2\gamma}^{g_{c}}-\gamma\left(n+2\gamma-2\right)P_{2(\gamma-1)}^{g_{c }}=\frac{4\gamma}{n-2\gamma}\left(-\Delta_{S^{n}}\right)P_{2(\gamma-1)}^{g_{c }}. \tag{5.15}\]
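Since \(-\Delta_{S^{n}}\) acts on \(\mathcal{H}_{h}^{\mathbb{S}}\) by \(h(h+n-1)\), the identity (5.15) can be confirmed eigenvalue by eigenvalue; the sketch below (a numerical check with sample parameters) does exactly that.

```python
# Eigenvalue check of (5.15): on H_h both sides act by the same number.
from math import gamma as G, isclose

def mu(h, n, g):
    return G(h + n / 2 + g) / G(h + n / 2 - g)

n, g = 5, 1.2                        # sample values with 0 < gamma < n/2
p = 2 * n / (n - 2 * g)
for h in range(8):
    lhs = (p - 2) * mu(h, n, g) - g * (n + 2 * g - 2) * mu(h, n, g - 1)
    rhs = 4 * g / (n - 2 * g) * h * (h + n - 1) * mu(h, n, g - 1)
    assert isclose(lhs, rhs, rel_tol=1e-10, abs_tol=1e-10)
print("(5.15) holds on each H_h; kernel of -Delta = constants (h = 0)")
```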
We conclude that the operator in (5.15) is nonnegative with kernel exactly equal to the constant functions. Combining this with (5.12) yields that \(u\) is constant. The final conclusion follows from the fact that if \(\Phi\in\operatorname{Conf}(S^{n})\), then the Jacobian determinant of \(\Phi\) is of the form
\[\frac{C}{|1-\langle\eta,\xi\rangle|^{n}},\quad\eta\in S^{n}\]
for some \(C>0,\xi\in\mathbb{R}^{n+1},|\xi|<1\).
## Appendix A Intertwining operators on \(S^{2n+1}\)
Following the idea in [1], in this appendix we give an explicit calculation of the spectrum of the intertwining operators \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) as defined by (2.9); a consequence of this calculation will be formula (2.10) up to a constant.
**Proposition A.1**.: _For \(0<\gamma<\frac{Q}{2}\), let \((w,w^{\prime})\in\mathbb{R}^{2}\) such that \(w+w^{\prime}+n+1=\gamma\). Suppose that the operator \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) is formally self-adjoint in \(S^{\gamma,2}\) and is intertwining, i.e., for any \(F\in C^{\infty}(S^{2n+1})\), \(\tau\in\mathcal{A}\boldsymbol{ut}(S^{2n+1})\),_
(A.1) \[J_{\tau}^{\gamma-w}\bar{J}_{\tau}^{\gamma-w^{\prime}}\left(\mathcal{A}_{w,w^{ \prime}}^{\theta_{0}}F\right)\circ\tau=\mathcal{A}_{w,w^{\prime}}^{\theta_{0}} \left(J_{\tau}^{-w}\bar{J}_{\tau}^{-w^{\prime}}\left(F\circ\tau\right)\right).\]
_Then \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) is diagonal with respect to the spherical harmonics, and for every \(Y_{j,k}\in\mathcal{H}_{j,k}^{\mathbb{S}}\),_
\[\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}Y_{j,k}=c\lambda_{j}(w)\lambda_{k}(w^{ \prime})Y_{j,k}\]
_for some constant \(c\in\mathbb{R}\), with_
\[\lambda_{j}(w)=\frac{\Gamma(j+\gamma-w)}{\Gamma(j-w)},\quad\lambda_{k}(w^{ \prime})=\frac{\Gamma(k+\gamma-w^{\prime})}{\Gamma(k-w^{\prime})}.\]
_Conversely, a self-adjoint operator \(\mathcal{Q}_{w,w^{\prime}}^{\theta_{0}}\) with eigenvalues \(\lambda_{j}(w)\lambda_{k}(w^{\prime})\) is intertwining._
Proof.: The fact that \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) is diagonal follows from Schur's lemma and the irreducibility of the spaces \(\mathcal{H}_{j,k}^{\mathbb{S}}\). Suppose that \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\Phi_{j,k}=\lambda_{j,k}\Phi_{j,k}\). From now on, in the formulation (2.3) of \(\Phi_{j,k}\), we choose \(\eta\) to be the north pole of \(S^{2n+1}\) and denote
\[\Psi_{j,k}(z)=\Phi_{j,k}(\zeta,\eta)=\bar{z}^{k-j}P_{j}^{(n-1,k-j)}(2|z|^{2}-1 ),\quad z=\zeta_{n+1},\quad j\leqslant k\]
such that we have \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\Psi_{j,k}=\lambda_{j,k}\Psi_{j,k}\).
Consider the family of dilations on \(\mathbb{H}^{n}\), which on \(S^{2n+1}\) take the form
(A.2) \[\tau_{\delta}(\zeta)=\tau_{\delta}(\zeta^{\prime},\zeta_{n+1})=\left(\frac{2\delta\zeta^{\prime}}{1+\zeta_{n+1}+\delta^{2}(1-\zeta_{n+1})},\frac{1+\zeta_{n+1}-\delta^{2}(1-\zeta_{n+1})}{1+\zeta_{n+1}+\delta^{2}(1-\zeta_{n+1})}\right).\]
The conformal factor \(J_{\tau_{\delta}}\) of \(\tau_{\delta}\) is given by
\[J_{\tau_{\delta}}(\zeta)=\frac{2\delta}{1+\zeta_{n+1}+\delta^{2}(1-\zeta_{n+1} )}.\]
Direct computation shows that
\[\frac{d}{d\delta}\bigg{|}_{\delta=1}J_{\tau_{\delta}}^{-w}\bar{J }_{\tau_{\delta}}^{-w^{\prime}}=-wz-w^{\prime}\bar{z},\] \[\frac{d}{d\delta}\bigg{|}_{\delta=1}\tau_{\delta}(\zeta)\cdot\eta =z^{2}-1.\]
Therefore, we have
(A.3) \[\left.\frac{d}{d\delta}\right|_{\delta=1}J_{\tau_{\delta}}^{-w}\bar{J }_{\tau_{\delta}}^{-w^{\prime}} \left(\Psi_{j,k}\circ\tau_{\delta}\right)=\left(-wz-w^{\prime}\bar{ z}\right)\bar{z}^{k-j}P_{j}^{(n-1,k-j)}(2|z|^{2}-1)\] \[+(k-j)(\bar{z}^{2}-1)\bar{z}^{k-j-1}P_{j}^{(n-1,k-j)}(2|z|^{2}-1)\] \[+2(z+\bar{z})(|z|^{2}-1)\bar{z}^{k-j}\frac{d}{dx}P_{j}^{(n-1,k-j)} (2|z|^{2}-1).\]
The above quantity is a polynomial in \(z\) and \(\bar{z}\) with highest order monomials that are multiples of \(z^{j}\bar{z}^{k+1}\) and \(z^{j+1}\bar{z}^{k}\). The projection of (A.3) on \(\mathcal{H}_{j+1,k}^{\mathbb{S}}\bigoplus\mathcal{H}_{j,k+1}^{\mathbb{S}}\) gives that for fixed \(0\leqslant j<k\),
(A.4) \[\left.\frac{d}{d\delta}\right|_{\delta=1}J_{\tau_{\delta}}^{-w} \bar{J}_{\tau_{\delta}}^{-w^{\prime}}\left(\Psi_{j,k}\circ\tau_{\delta} \right)\right|_{\mathcal{H}_{j+1,k}^{\mathbb{S}}\bigoplus\mathcal{H}_{j,k+1}^ {\mathbb{S}}} =A\bar{z}^{k-j-1}P_{j+1}^{(n-1,k-j-1)}(2|z|^{2}-1)\] \[+B\bar{z}^{k-j+1}P_{j}^{(n-1,k-j+1)}(2|z|^{2}-1),\]
and for \(j=k\),
(A.5) \[\left.\frac{d}{d\delta}\right|_{\delta=1}J_{\tau_{\delta}}^{-w}\bar{J}_{\tau_{\delta}}^{-w^{\prime}}\left(\Psi_{j,k}\circ\tau_{\delta}\right)\right|_{\mathcal{H}_{j+1,j}^{\mathbb{S}}\bigoplus\mathcal{H}_{j,j+1}^{\mathbb{S}}} =AzP_{j}^{(n-1,1)}(2|z|^{2}-1)\] \[+B\bar{z}P_{j}^{(n-1,1)}(2|z|^{2}-1).\]
Our main goal is to determine \(A\) and \(B\). In order to do this, we consider the cases of \(z\) real and \(z\) purely imaginary, and we compare the coefficients of the leading terms in (A.3) and (A.4)-(A.5). What we need here is the leading coefficient of a Jacobi polynomial of degree \(j\), which is given by
\[\frac{1}{j!}\frac{d^{j}}{dx^{j}}P_{j}^{(\alpha,\beta)}(x)=\frac{1}{2^{j}j!} \frac{\Gamma(2j+\alpha+\beta+1)}{\Gamma(j+\alpha+\beta+1)}.\]
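This leading-coefficient formula can be checked against a standard implementation; the following sketch (illustrative only; it relies on scipy.special.jacobi) compares the two.

```python
# Compare the stated leading coefficient of P_j^{(alpha,beta)} with scipy's.
from math import gamma, factorial, isclose
from scipy.special import jacobi

for j in range(1, 7):
    for a, b in [(0.0, 1.0), (2.0, 3.0), (0.5, 1.5)]:
        lead = jacobi(j, a, b).coeffs[0]             # coefficient of x^j
        closed = gamma(2 * j + a + b + 1) / (2**j * factorial(j) * gamma(j + a + b + 1))
        assert isclose(lead, closed, rel_tol=1e-8)
print("Jacobi leading coefficients match the closed form")
```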
On one hand, if \(z\) is real, a comparison of the coefficients of \(z^{k+j+1}\) yields
(A.6) \[(-w-w^{\prime}+k+j)\frac{\Gamma(k+j+n)}{j!\Gamma(k+n)}=A\frac{\Gamma(k+j+n+1)} {(j+1)!\Gamma(k+n)}+B\frac{\Gamma(k+j+n+1)}{j!\Gamma(k+n+1)},\]
i.e.,
(A.7) \[(-w-w^{\prime}+k+j)=A\frac{k+j+n}{j+1}+B\frac{k+j+n}{k+n}.\]
On the other hand, if \(z\) is purely imaginary, the same comparison yields
(A.8) \[(-i)^{k-j+1}(k-j+w-w^{\prime})\frac{\Gamma(k+j+n)}{j!\Gamma(k+n)} =(-i)^{k-j-1}A\frac{\Gamma(k+j+n+1)}{(j+1)!\Gamma(k+n)}\] \[+(-i)^{k-j+1}B\frac{\Gamma(k+j+n+1)}{j!\Gamma(k+n+1)},\]
i.e.,
(A.9) \[k-j+w-w^{\prime}=-A\frac{k+j+n}{j+1}+B\frac{k+j+n}{k+n}.\]
Solving (A.7) and (A.9) for \(A\) and \(B\), we obtain that
(A.10) \[A=(j-w)\frac{j+1}{k+j+n},\quad B=(k-w^{\prime})\frac{k+n}{k+j+n},\]
which means that for \(0\leqslant j\leqslant k\),
(A.11) \[\begin{split}&\frac{d}{d\delta}\bigg{|}_{\delta=1}J_{\tau_{\delta}}^{-w}\bar{J}_{\tau_{\delta}}^{-w^{\prime}}\left(\Psi_{j,k}\circ\tau_{\delta}\right)\bigg{|}_{\mathcal{H}_{j+1,k}^{\mathbb{S}}\bigoplus\mathcal{H}_{j,k+1}^{\mathbb{S}}}=\\ &(j-w)\frac{j+1}{k+j+n}\Psi_{j+1,k}+(k-w^{\prime})\frac{k+n}{k+j+n}\Psi_{j,k+1}\,.\end{split}\]
The intertwining property of \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) means that
(A.12) \[\lambda_{j,k}J_{\tau_{\delta}}^{\gamma-w}\bar{J}_{\tau_{\delta}}^{\gamma-w^{\prime}}\Psi_{j,k}\circ\tau_{\delta}=\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\left(J_{\tau_{\delta}}^{-w}\bar{J}_{\tau_{\delta}}^{-w^{\prime}}\left(\Psi_{j,k}\circ\tau_{\delta}\right)\right)\,.\]
Differentiating in \(\delta\) and using (A.11) yield
(A.13) \[\begin{split}&\lambda_{j,k}(j+\gamma-w)\frac{j+1}{k+j+n}\Psi_{j+ 1,k}+\lambda_{j,k}(k+\gamma-w^{\prime})\frac{k+n}{k+j+n}\Psi_{j,k+1}\\ &=\lambda_{j+1,k}(j-w)\frac{j+1}{k+j+n}\Psi_{j+1,k}+\lambda_{j,k +1}(k-w^{\prime})\frac{k+n}{k+j+n}\Psi_{j,k+1}.\end{split}\]
This implies that
(A.14) \[\lambda_{j+1,k}=\lambda_{j,k}\frac{j+\gamma-w}{j-w},\quad\lambda_{j,k+1}= \lambda_{j,k}\frac{k+\gamma-w^{\prime}}{k-w^{\prime}},\quad 0\leqslant j\leqslant k.\]
We conclude that for \(0\leqslant j\leqslant k\),
(A.15) \[\lambda_{j,k}=\lambda_{0,k}\frac{\Gamma(j+\gamma-w)}{\Gamma(j-w)}=\lambda_{0, 0}\frac{\Gamma(j+\gamma-w)}{\Gamma(j-w)}\frac{\Gamma(k+\gamma-w^{\prime})}{ \Gamma(k-w^{\prime})}.\]
Since \(\Phi_{j,k}(\zeta,\eta)=\overline{\Phi_{k,j}(\zeta,\eta)}\) for \(k\leqslant j\), and since the spectrum of \(\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}\) is real, taking the complex conjugate in (A.13) yields that for all \(j,k\in\mathbb{N}_{0}\),
\[\lambda_{j,k}=\lambda_{0,0}\frac{\Gamma(j+\gamma-w)}{\Gamma(j-w)}\frac{\Gamma (k+\gamma-w^{\prime})}{\Gamma(k-w^{\prime})}.\]
Let \(\lambda_{0,0}=1\) and \(R_{w,w^{\prime}}^{\theta_{0}}=\mathcal{A}_{w,w^{\prime}}^{\theta_{0}}- \mathcal{Q}_{w,w^{\prime}}^{\theta_{0}}\). The assumption implies that \(R_{w,w^{\prime}}^{\theta_{0}}=0\) on \(\operatorname{Span}\left\{\bigoplus_{j,k\in\mathbb{N}_{0}}\mathcal{H}_{j,k}^{ \mathbb{S}}\right\}\). Under the standard inner product \((\cdot,\cdot)\) in \(L^{2}(S^{2n+1})\), we have that for any \(H\in\operatorname{Span}\left\{\bigoplus_{j,k\in\mathbb{N}_{0}}\mathcal{H}_{j, k}^{\mathbb{S}}\right\}\), \(F\in C^{\infty}(S^{2n+1})\),
\[\left(R_{w,w^{\prime}}^{\theta_{0}}(F),H\right)=\left(F,R_{w,w^{\prime}}^{ \theta_{0}}(H)\right)=0,\]
where we use the self-adjointness of \(R^{\theta_{0}}_{w,w^{\prime}}\). By the continuity of the inner product and the fact that \(L^{2}(S^{2n+1})=\overline{\operatorname{Span}\left\{\bigoplus_{j,k\in\mathbb{N}_{0}}\mathcal{H}^{\mathbb{S}}_{j,k}\right\}}\), we obtain that
\[\left(R^{\theta_{0}}_{w,w^{\prime}}(F),G\right)=0,\quad\forall F\in C^{\infty} (S^{2n+1}),G\in L^{2}(S^{2n+1}),\]
which implies that \(R^{\theta_{0}}_{w,w^{\prime}}=0\) on \(C^{\infty}(S^{2n+1})\).
|
2307.03098 | Lepton-pair scattering with an off-shell and an on-shell photon at two
loops in massless QED | We compute the two-loop QED helicity amplitudes for the scattering of a
lepton pair with an off-shell and an on-shell photon,
$0\to\ell\bar\ell\gamma\gamma^*$, using the approximation of massless leptons.
We express all master integrals relevant for the scattering of four massless
particles with a single external off-shell leg up to two loops in a basis of
algebraically independent multiple polylogarithms, which guarantees an
efficient numerical evaluation and compact analytic representations of the
amplitudes. Analytic forms of the amplitudes are reconstructed from numerical
evaluations over finite fields. Our results complete the amplitude-level
ingredients contributing to the N$^3$LO predictions of electron-muon scattering
$e\mu\to e\mu$, which are required to meet the precision goal of the future
MUonE experiment. | Simon Badger, Jakub Kryś, Ryan Moodie, Simone Zoia | 2023-07-06T16:14:18Z | http://arxiv.org/abs/2307.03098v3 | # Lepton-pair scattering with an off-shell and an on-shell photon at two loops in massless QED
###### Abstract
We compute the two-loop QED helicity amplitudes for the scattering of a lepton pair with an off-shell and an on-shell photon, \(0\to\ell\bar{\ell}\gamma\gamma^{*}\), using the approximation of massless leptons. We express all master integrals relevant for the scattering of four massless particles with a single external off-shell leg up to two loops in a basis of algebraically independent multiple polylogarithms, which guarantees an efficient numerical evaluation and compact analytic representations of the amplitudes. Analytic forms of the amplitudes are reconstructed from numerical evaluations over finite fields. Our results complete the amplitude-level ingredients contributing to the N\({}^{3}\)LO predictions of electron-muon scattering \(e\mu\to e\mu\), which are required to meet the precision goal of the future MUonE experiment.
## 1 Introduction
The MUonE experiment [1; 2; 3; 4] will measure the hadronic running of the electromagnetic coupling \(\alpha\) using low-energy elastic electron-muon scattering, \(e\mu\to e\mu\). This will enable a new and precise determination of the hadronic vacuum polarisation (HVP) contribution \(a_{\mu}^{\rm HVP}\)[5; 6] to the muon anomalous magnetic moment \(a_{\mu}\). This is required in light of the recent tensions between experimental [7], Standard Model (SM) data-driven [8], and lattice quantum chromodynamics (QCD) [9] results for \(a_{\mu}\). Increasing the precision of the theoretical predictions for \(e\mu\to e\mu\) scattering is a high priority for the planned MUonE experiment [10; 11]. The recent completion of full next-to-next-to-leading order (NNLO) quantum electrodynamics (QED) corrections [12] indicates that next-to-next-to-next-to-leading order (N\({}^{3}\)LO) corrections in differential distributions are required to meet MUonE's precision goal of 10 parts per million. Electron-line corrections, meaning corrections to the subprocess with the muon line stripped off (\(e\to e\gamma^{*}\)), are the dominant corrections [12], and a collaborative project was started to perform their fixed-order calculation at N\({}^{3}\)LO [13]. With the triple-virtual corrections now available [14; 15; 16], the main missing ingredient is the
real-double-virtual (RVV) matrix element (\(e\to e\gamma\gamma^{*}\)) at two loops. While these contributions could be extracted from amplitudes in the literature [17; 18; 19], our direct computation provides the massless RVV contribution in a complete and compact form.
Another application of the \(0\to\ell\bar{\ell}\gamma\gamma^{*}\) amplitudes is in electron-positron annihilation experiments [20]. It is required for initial-state corrections in predictions of the ratio of hadron-to-muon production in \(e^{+}e^{-}\) collisions, which is an important input for existing SM predictions of \(a_{\mu}^{\rm HVP}\)[21]. The two-loop amplitudes contribute to RVV corrections to \(e^{+}e^{-}\to\gamma^{*}\) in direct scan measurements, while radiative return measurements concern corrections to \(e^{+}e^{-}\to\gamma\gamma^{*}\)[8]. In the latter configuration, the \(e^{+}e^{-}\) beam has a fixed centre-of-mass energy of a few GeV and the on-shell photon originates from initial state radiation (ISR). The energy lost to the ISR photon is used to effectively scan over the energies of the decay of the off-shell photon. A differential cross section of, for example, \(\gamma^{*}\to\) hadrons with respect to the centre-of-mass energy of the decay, \({\rm d}\sigma/{\rm d}s\), can be extracted from measurements of the differential cross section with respect to the energy of the ISR photon, \({\rm d}\sigma/{\rm d}E_{\gamma}\). State-of-the-art predictions for these measurements are currently at next-to-leading order (NLO) [21]. We provide the two-loop \(e^{+}e^{-}\to\gamma\gamma^{*}\) amplitudes required for the double-virtual (VV) corrections at NNLO, although the bottleneck remains in the hadronic decay.
Our amplitudes are calculated in the approximation of massless leptons. In the NNLO massive \(e\mu\to e\mu\) cross section calculation [12], the authors obtain photonic corrections (those with no closed fermion loops), using a small-mass expansion [22; 23; 24] applied to the two-loop amplitudes with massless electrons for the VV corrections. This approximation relies on the electron mass being much smaller than any other scale, which is valid in the bulk of phase space. Further splitting the photonic corrections, they take the subset of electron-line corrections and find that the relative difference to the true massive NNLO differential cross section is generally around \(10^{-3}\alpha^{2}\), where \(\alpha\) is the fine-structure constant, which is negligible compared to the \(10^{-5}\) precision goal. The approximation breaks down in soft and collinear regions, where they treat the amplitudes using infrared (IR) factorisation [25; 26; 27], and is not used for contributions including closed fermion loops [24; 28]. Our amplitudes can be used analogously for the RVV corrections at N\({}^{3}\)LO.
Our computation uses the modern technology developed for QCD amplitudes with many scales. The high-multiplicity amplitude frontier in massless QCD lies with two-loop five-particle processes, with leading-colour [29; 30; 31; 32; 33; 34] and full-colour [35; 36; 37; 38] results in a form ready for phenomenological application becoming available over the past few years. Recently, the first single-external-mass calculations are also appearing [39; 40; 41; 42]. These computations have made extensive use of finite-field arithmetic to sidestep large intermediate expressions. This technology has had a considerable impact for solutions of systems of integration-by-parts (IBP) identities [43; 44; 45] but also applies more widely to scattering amplitude computations [46; 47]. Motivated by the improved algorithms, we choose to implement a complete finite-field based reduction for the \(2\to 2\) processes with an off-shell leg. Since the kinematics are relatively simple in comparison with other high-multiplicity configurations, this technology is not essential. It does, however, provide an opportunity to review the new techniques for readers who are not familiar with them.
A key ingredient for computing the scattering amplitudes are analytic expressions for the required Feynman integrals. Complete analytic results up to two loops are already available in the literature [48; 49; 50]. Expansions of these integrals up to higher orders in the dimensional regularisation parameter \(\epsilon\) have also been reconsidered recently [51], in view of their usage for N\({}^{3}\)LO corrections to \(2\to 2\) processes in QCD [52; 53]. The state of the art for integrals with this kinematic configuration has reached three loops [54; 55; 56; 57]. We revisit the computation of the one- and two-loop integrals following the approach of refs. [40; 58; 59; 60; 61] based on the construction of a _basis_ of independent special functions, which gives a unique and uniform representation of all the required Feynman integrals up to transcendental weight four. This enables a more efficient computation of the amplitudes using the modern workflow based on finite-field arithmetic, and leads to more compact expressions. We give explicit expressions for the basis functions in terms of multiple polylogarithms (MPLs) which can be evaluated in an efficient and stable way throughout the physical phase space. We compute all crossings of all massless one- and two-loop four-particle Feynman integrals with an external off-shell leg, so that our results for the integrals may be of use for any scattering process with these kinematics.
Our paper is organised as follows. In section 2, we describe our decomposition of the helicity amplitudes and detail how we express the off-shell currents. In section 3, we discuss our computation of analytic amplitudes by numerical evaluations over finite fields. In section 4, we present the computation of the Feynman integrals in terms of a basis of special functions. We draw our conclusions in section 5. We provide useful technical details in appendices. We define the relevant families of Feynman integrals in appendix A. In appendix B, we discuss in detail how we handle permutations of the integral families in the IBP reduction. In appendix C, we describe our rational parametrisation of the kinematics. Appendix D is devoted to the ultraviolet (UV) renormalisation and IR factorisation which determine the pole structure of the amplitudes. In appendix E we discuss the analytic continuation of the special functions to the physical kinematic regions.
## 2 Structure of the amplitude
We calculate the one- and two-loop QED corrections to the process
\[0 \to\ell(p_{1},h_{1})+\bar{\ell}(p_{2},h_{2})+\gamma(p_{3},h_{3}) +\gamma^{*}(p_{4})\,, \tag{1}\]
which we call \(0\to\ell\bar{\ell}\gamma\gamma^{*}\) for short. Here, \(\ell\) denotes an on-shell massless lepton and \(\gamma\) (\(\gamma^{*}\)) an on-shell (off-shell) photon, while \(h_{i}\) and \(p_{i}\) are the helicity and momentum of the \(i^{\text{th}}\) particle. We take the external momenta \(p_{i}\) to be all outgoing. They satisfy the following momentum-conservation and on-shell conditions:
\[\sum_{i=1}^{4}p_{i}^{\mu} =0\,, p_{i}^{2} =0 \forall\,i=1,2,3\,. \tag{2}\]
The single-off-shell four-particle phase space is described by three independent scalar invariants, which we choose as
\[\vec{s} \coloneqq\{s_{12},s_{23},s_{4}\}\,, \tag{3}\]
where \(s_{i..j}\coloneqq(p_{i}+\ldots+p_{j})^{2}\). We use dimensional regularisation in the 't Hooft-Veltman scheme [62], with \(D=4-2\epsilon\) spacetime dimensions (where \(\epsilon\) is the dimensional regulator) and four-dimensional external momenta.
Because of the off-shell photon in the process, the helicity amplitudes \(\mathcal{A}^{\mu}(1_{\ell},2_{\bar{\ell}},3_{\gamma},4_{\gamma^{*}})\) are actually off-shell currents carrying a free Lorentz index. We consider the perturbative QED expansion of the helicity amplitudes,
\[\mathcal{A}^{\mu}(1_{\ell},2_{\bar{\ell}},3_{\gamma},4_{\gamma^{*}})=g_{e}^{2} \sum_{L\geq 0}\left(n_{\epsilon}\frac{\alpha}{4\pi}\right)^{L}\mathcal{A}^{(L) \mu}(1_{\ell},2_{\bar{\ell}},3_{\gamma},4_{\gamma^{*}})\,, \tag{4}\]
with prefactor \(n_{\epsilon}=\mathrm{i}(4\pi)^{\epsilon}\mathrm{e}^{-\epsilon\gamma_{E}}\), electromagnetic coupling \(g_{e}\), and \(\alpha=g_{e}^{2}/(4\pi)\). We truncate the expansion at \(L=2\) loops. We set the renormalisation scale \(\mu_{R}\) to one throughout the computation and restore the dependence on it in the final analytic result by dimensional analysis. For the bare amplitudes we have that
\[\mathcal{A}^{(L)\mu}(\mu_{R})=\mu_{R}^{2\epsilon L}\mathcal{A}^{(L)\mu}(\mu_{ R}=1)\,. \tag{5}\]
There are two independent helicity configurations \((h_{1},h_{2},h_{3})\), which we take as
\[\{-+-\,,\ \ -++\}\,. \tag{6}\]
We derive the analytic expressions for these helicity amplitudes. We obtain the remaining helicity configurations, \(\{+-+\,,\ \ +--\}\), through parity transformation (see appendix C of ref. [37]).
We decompose the loop-level helicity amplitudes \(\mathcal{A}^{(L)\mu}\) into gauge-invariant subamplitudes \(\mathcal{A}^{(L)\mu}_{i,j}\), where the subscript \(i\) counts the number of closed massless fermion loops and \(j\) the number of external photons attached to closed fermion loops. The non-zero contributions are
\[\mathcal{A}^{(1)\mu} =\mathcal{A}^{(1)\mu}_{0,0}+n_{l}\,\mathcal{A}^{(1)\mu}_{1,1}\,, \tag{7a}\] \[\mathcal{A}^{(2)\mu} =\mathcal{A}^{(2)\mu}_{0,0}+n_{l}\left(\mathcal{A}^{(2)\mu}_{1,0} +\mathcal{A}^{(2)\mu}_{1,1}+\mathcal{A}^{(2)\mu}_{1,2}\right)+n_{l}^{2} \mathcal{A}^{(2)\mu}_{2,1}\,, \tag{7b}\]
where \(n_{l}\) denotes the number of charged lepton flavours running in the loops. Representative Feynman diagrams contributing to these subamplitudes are illustrated in figure 1. Amplitudes with a closed fermion loop attached to an odd number of photons vanish by Furry's theorem.
We decompose the amplitude and subamplitude currents as
\[\mathcal{A}^{(L)\mu}=\sum_{k=1}^{4}a_{k}^{(L)}\,q_{k}^{\mu}\,,\qquad\qquad \mathcal{A}^{(L)\mu}_{i,j}=\sum_{k=1}^{4}a_{i,j;k}^{(L)}\,q_{k}^{\mu}\,, \tag{8}\]
using the following basis written with the spinor-helicity formalism:
\[q_{k}^{\mu}=p_{k}^{\mu}\quad\forall\,k=1,2,3\,,\qquad\qquad q_{4}^{\mu}=\frac {\langle 2|p_{3}p_{1}\sigma^{\mu}|2]-\langle 1|p_{3}p_{2}\sigma^{\mu}|1]}{2s_{12}}\,. \tag{9}\]
Readers not familiar with the spinor-helicity formalism may like to consult one of the many good reviews on the subject [63; 64; 65]. Note that \(q_{4}\) is orthogonal to the momenta \(p_{i}\) by construction; one can in fact show that \(q_{4}^{\mu}\propto\varepsilon^{\mu\nu\rho\sigma}q_{1\nu}q_{2\rho}q_{3\sigma}\), with \(\varepsilon^{\mu\nu\rho\sigma}\) the Levi-Civita tensor. The subamplitude coefficients \(a_{i,j;k}^{(L)}\) can be related to the amplitude ones \(a_{k}^{(L)}\) through eq. (7).
The scattering amplitudes \(\mathcal{M}^{(L)}\) for fully on-shell processes (for instance, for \(0\to e^{-}e^{+}\gamma\mu^{-}\mu^{+}\)) are obtained by contracting the amplitude currents \({\cal A}^{(L)}{}^{\mu}\) (for \(0\to e^{-}e^{+}\gamma\gamma^{*}\)) with a suitable decay current \(\mathcal{V}_{\mu}\) (in this example, \(\gamma^{*}\to\mu^{-}\mu^{+}\)), as
\[\mathcal{M}^{(L)}\coloneqq\mathcal{A}^{(L)}\cdot\mathcal{V}=\sum_{k=1}^{4}a_{ k}^{(L)}\ (q_{k}\cdot\mathcal{V}). \tag{10}\]
In this manner, the on-shell amplitudes \(\mathcal{M}^{(L)}\) are given by the scalar product between the vector of coefficients \((a_{1}^{(L)},\dots,a_{4}^{(L)})\), and that of decay-vector contractions \((q_{1}\cdot\mathcal{V},\dots,q_{4}\cdot\mathcal{V})\). The coefficients \(a_{k}^{(L)}\) depend on the helicities of the three on-shell particles in eq. (1), while the decay vector \(\mathcal{V}_{\mu}\) depends on the helicities of the particles the off-shell photon decays to.
Figure 1: Representative Feynman diagrams for the subamplitudes defined in eq. (7). The off-shell external leg is indicated by a bold line.
The helicity-summed interference between the \(L_{1}\)-loop and the \(L_{2}\)-loop matrix elements is then given by
\[\mathcal{M}^{(L_{1},L_{2})}=\frac{1}{4}\sum_{\vec{h}}{\mathcal{M}^{ (L_{1})}_{\vec{h}}}^{*}{\mathcal{M}^{(L_{2})}_{\vec{h}}}\,, \tag{11}\]
where the subscript \(\vec{h}\) indicates the helicities of all on-shell particles -- that is, including the decay products of the off-shell photon -- and the overall constant factor averages over the helicities of the incoming particles.
The output of the computation described in section 3 is the set of four projections \(\mathcal{A}^{(L)}_{i,j}\cdot q_{k}\) for each helicity configuration listed in eq. (6). From these, we determine the subamplitude coefficients \(a^{(L)}_{i,j;k}\) by inverting eq. (8), as
\[a^{(L)}_{i,j;k}=\sum_{m=1}^{4}\left(\mathrm{G}^{-1}\right)_{km} \left(\mathcal{A}^{(L)}_{i,j}\cdot q_{m}\right)\,, \tag{12}\]
where \(\mathrm{G}\) is the Gram matrix of the vectors \(q_{i}\), that is, the matrix of entries \(\mathrm{G}_{ij}\coloneqq q_{i}\cdot q_{j}\) for \(i,j=1,\ldots,4\). At loop level, we write the subamplitude coefficients as
\[a^{(L)}_{i,j;k}=\sum_{w=-2L}^{4-2L}\sum_{r}\,c_{r,w}\,\mathrm{ mon}_{r}(F)\,\epsilon^{w}, \tag{13}\]
where \(\mathrm{mon}_{r}(F)\) are monomials of special functions \(F\) (see section 4), and the coefficients \(c_{r,w}\) are rational functions of the kinematics. We drop the dependence on \(i\), \(j\), \(k\), and \(L\) on the right-hand side of eq. (13) for compactness. We truncate the Laurent expansion around \(\epsilon=0\) to the orders required for computing NNLO predictions. We express the coefficients \(c_{r,w}\) as \(\mathbb{Q}\)-linear combinations of a smaller set of linearly-independent coefficients (see section 3). The analytic expressions of the latter are given explicitly in terms of momentum twistor variables (see appendix C). We simplify these expressions through a multivariate partial fraction decomposition using MultivariateApart[66], and by collecting the common factors.
In the ancillary files [67], the directory amplitudes/ contains Mathematica files describing the bare helicity subamplitude currents \({\mathcal{A}^{(L)}_{i,j}}^{\mu}\) by their coefficients \(a^{(L)}_{i,j;k}\) in the form of eq. (13). The Mathematica script current.m is a reference implementation of the numerical evaluation of the bare amplitude coefficients \(a^{(L)}_{k}\) in eq. (13), including summation of subamplitudes in eq. (7), treatment of dependent helicities, and renormalisation scale restoration in eq. (5). The Mathematica script evaluation.wl demonstrates the construction of the five-particle on-shell amplitudes in eq. (10) for the process \(0\to e^{-}e^{+}\gamma\mu^{-}\mu^{+}\), and their helicity-summation to obtain the squared matrix elements in eq. (11). The results of the script are checked against a reference point included in reference_point.json.
We perform the following checks of our amplitudes.
**Ward identity**: We verify the gauge invariance of the subamplitudes \({\mathcal{A}^{(L)}_{i,j}}^{\mu}\) by checking that they vanish on replacing the on-shell photon's polarisation vector with its momentum.
**One-loop crosscheck**: We successfully crosscheck our one-loop \(n_{l}=0\) helicity-summed matrix element contracted with the decay \(\gamma^{*}\to\mu^{-}\mu^{+}\) against the QED NLO electron-line corrections for \(e\mu\to e\mu\gamma\) obtained with McMule[68, 69].

**Finite remainder**: We verify that the \(\epsilon\)-poles of the bare amplitudes have the structure predicted by UV renormalisation and IR factorisation [70, 71, 72, 73, 74]. We then subtract the expected poles and define finite remainders at one and two loops as
\[{\cal F}^{(1)}{}^{\mu}=\left[{\cal A}^{(1)}{}^{\mu}-\frac{3}{2}\frac{\beta_{0}}{\epsilon}{\cal A}^{(0)}{}^{\mu}\right]-Z^{(1)}{\cal A}^{(0)}{}^{\mu}\,,\] (14a)
\[{\cal F}^{(2)}{}^{\mu}=\left[{\cal A}^{(2)}{}^{\mu}-\frac{5}{2}\frac{\beta_{0}}{\epsilon}{\cal A}^{(1)}{}^{\mu}-\left(-\frac{15}{8}\frac{\beta_{0}^{2}}{\epsilon^{2}}+\frac{3}{4}\frac{\beta_{1}}{\epsilon}\right){\cal A}^{(0)}{}^{\mu}\right]-Z^{(2)}{\cal A}^{(0)}{}^{\mu}-Z^{(1)}{\cal F}^{(1)}{}^{\mu}\,,\] (14b)
where the square brackets separate renormalisation of UV poles from subtraction of IR poles. We present the derivation of these formulae in appendix D.
## 3 Setup of the calculation
In this section, we outline the workflow we used to calculate our amplitudes. Firstly, we generate all Feynman diagrams contributing to eq. (1) using QGRAF[75]. Each diagram is then replaced with the corresponding Feynman rules for vertices, propagators, and external states, leading to a collection of \(D\)-dimensional Feynman integrals. Next, we filter the integrals according to eqs. (4) and (7) using a collection of Mathematica and FORM scripts [76, 77]. Within each subamplitude \({\cal A}^{(L)}_{i,j}{}^{\mu}\), we then collect the integrals according to their topology, by which we mean a unique set of denominators. For example, the diagrams in figures 1(d) and 1(e) belong to different topologies, but those in figures 1(c) and 1(f) belong to the same topology (under the assumption of massless lepton propagators). At this point, the subamplitudes are sums of Feynman integrals over distinct integral topologies, with the numerators given by linear combinations of monomials that depend on the loop as well as the external momenta. To work with the projected helicity subamplitudes \({\cal A}^{(L)}_{i,j}\cdot q_{k}\), we specify the polarisations of external particles according to eq. (6), as well as the projector \(q_{k}^{\mu}\) of the off-shell photon from eq. (9). It is natural to express helicity-dependent objects using the spinor-helicity formalism. Then, the monomials of loop momenta contain the following scalar products and spinor strings:
\[\{k_{i}\cdot k_{j},\,k_{i}\cdot p_{j},\,\langle ij\rangle,\,[ij],\,\langle i| k_{i}|j],\,\langle i|p_{4}|j],\,\langle i|k_{i}p_{4}|j\rangle,\,[i|k_{i}p_{4}|j]\}. \tag{15}\]
Their coefficients, on the other hand, are composed of the same type of objects, but do not contain any dependence on loop momenta \(k_{i}\). We express these coefficients using the rational parametrisation of the kinematics discussed in appendix C.
This marks the start of our finite-field sampling procedure [46]. The goal of this approach is to sidestep the algebraic complexity which typically plagues the intermediate stages of symbolic computations by evaluating numerically all rational coefficients. Using integers modulo some large prime number -- which constitute a finite field -- for the numerical evaluation allows us to avoid the loss of accuracy inherent to floating-point numbers, as
well as the expensive arbitrary-precision arithmetic required by rational numbers. Manipulations needed to further process the rational coefficients are a completely separate problem from the calculation of the integrals or special functions that these coefficients multiply. In fact, they can be implemented as a series of rational transformations over finite fields. We stress that this is the methodology we follow at each step of the computation described below. In particular, we use the package FiniteFlow[47], which is conveniently interfaced to Mathematica. The analytic form of the coefficients is not known at any intermediate step. It is reconstructed from the finite-field samples only at the very end of the workflow.
Firstly, we note that not all integral topologies are independent: some of them can be written as subtopologies of others. For this reason, we define the set of _maximal_ topologies, i.e. topologies with the maximum number of propagators allowed for \(L\)-loop, \(n\)-particle diagrams. In figure 2, we present the maximal topologies for the process under consideration in an arbitrary ordering of the external momenta (we give their explicit definitions in appendix A). Several orderings of the external momenta are relevant for the amplitudes, and we treat them as distinct families. Next, we map all topologies present so far onto one of these maximal topologies. The loop-momentum-dependent objects of eq. (15) are then expressed through the nine inverse propagators and irreducible scalar products (ISPs) associated with the chosen maximal topology. In this way, each subamplitude is now a sum
Figure 2: The two-loop amplitudes include six ordered integral families: three pentatriangles, a double-box, and two crossed double-boxes. All are planar except for the crossed double-boxes. The off-shell external leg is indicated by a bold line. External legs have outgoing momenta.
of integrals compatible with IBP reduction [78; 79], while their coefficients depend purely on external kinematics. We generate the required IBP relations using \(\mathtt{LiteRed}\)[80]. The resulting IBP system is then solved using the Laporta algorithm [81] with \(\mathtt{FiniteFlow}\)'s linear solver to yield the reduction of all the integrals present within our maximal topologies onto a much smaller subset of master integrals (MIs). We choose the MIs such that they satisfy differential equations (DEs) in canonical form [82] (see section 4.1). We stress that the IBP reduction is also done numerically over finite fields, since the coefficients of the IBP relations are rational functions of external kinematics and the dimensional regulator \(\epsilon\). This is an important simplification, since analytic IBP reduction often proves to be the bottleneck of amplitude computations. For many amplitude applications, multiple permutations of the ordered topologies can appear. We outline a strategy to optimise the reduction in such situations in appendix B.
At this point, each projected helicity subamplitude \(\mathcal{A}_{i,j}^{(L)}\cdot q_{k}\) is written as a linear combination of MIs multiplied by rational coefficients of \(\epsilon\) and the kinematic variables. We now write the MIs in terms of a basis of special functions up to the required order in \(\epsilon\) (see section 4). Finally, we Laurent expand the amplitude around \(\epsilon=0\), the deepest pole being \(1/\epsilon^{2L}\) at \(L\) loops. The only task left is to reconstruct the rational coefficients of the special-function monomials from their samples over finite fields. In general, this might be a daunting challenge and its complexity stems from two separate factors. The workflow described so far is a series of rational operations chained together within a so-called dataflow graph [47]. As such, we essentially have a black-box algorithm which takes numerical values of the kinematic variables as input, and returns the corresponding numerical values of the rational coefficients of the special-function monomials. The first factor is that several sample points are necessary to infer the analytic expression of these coefficients from their values in the finite fields. The required number is correlated with the polynomial degrees of the rational functions viewed as ratios of polynomials: the higher the degree, the more sample points are required. The second factor affecting the reconstruction complexity is the time it takes to obtain the values of the coefficients at each sample point. The more complicated the dataflow graph is, i.e. the more operations are chained together and the more difficult each operation is, the longer it will take to run the black-box algorithm. The most expensive operation in this regard is the evaluation of the solution to the IBP system. The total reconstruction time can thus be estimated as:
\[\text{reconstruction time}\approx\left(\text{number of sample points}\right)\times\left(\text{evaluation time per point}\right). \tag{16}\]
We emphasise that the sample evaluations can be run in parallel. For a detailed discussion of various strategies to improve the reconstruction time, see section 4 of ref. [36] and section 3.5 of ref. [42]. Here, we give a brief overview of the tools that proved sufficient for this work.
First, we look for \(\mathbb{Q}\)-linear relations among the rational coefficients of each helicity subamplitude. This typically requires few sample points with respect to the full reconstruction. We then solve these linear relations to express all coefficients in terms of a minimal subset of independent ones. Only the latter need to be reconstructed. Choosing them so that they have the lowest degrees often leads to a decrease in the complexity of the reconstruction.
The second strategy we employ is to match the rational coefficients with factorised ansatze informed by the singularity structure of Feynman integrals. The singularities of Feynman integrals can in fact be read off from the DEs they satisfy. For each coefficient we then write an ansatz made of the following factors:
\[\{\langle ij\rangle,\,[ij],\,\langle i|p_{4}|j],\,s_{ij}-s_{k4},\,s_{i4}-s_{4},\,s_{4}\}\, \tag{17}\]
for all \(i,j,k=1,2,3\) such that \(i\neq j\neq k\). This list includes denominator factors of the DEs satisfied by the MIs (listed in eq. (4.2)), as well as spinor structures aimed at capturing the phase information of helicity amplitudes. We then determine the exponents of the ansätze by comparing them to the coefficients reconstructed on a univariate slice of the kinematic variables [29], which are very cheap to obtain. We find that with this ansatz it is possible to determine all denominator factors -- which indeed are linked to the singularity structure of the amplitude -- and sometimes also some numerator factors. As a result, the undetermined functions yet to be reconstructed are of lower degree and require fewer sample points. We reconstruct the analytic form of the remaining rational functions using FiniteFlow's built-in multivariate functional reconstruction algorithm.
Finally, we note that, for more computationally demanding processes, further ansatz-based techniques -- for instance based on partial fraction decompositions or informed by the singularity structure of the amplitudes -- may be used to optimise the functional reconstruction; see, for example, refs. [36; 37; 38; 41; 83; 84].
## 4 Computation of the master integrals
The MIs for the relevant integral families were computed analytically in refs. [48; 49; 51] (see also ref. [50] for a thorough discussion of the analytic continuation). We revisit this computation to obtain expressions for the MIs which are better suited for the amplitude-computation workflow discussed in section 3. To this end, we compute the MIs for _all permutations_ of the external legs in terms of a _basis_ of special functions, following the approach of refs. [58; 59; 60; 61; 40]. In other words, we express all the Feynman integrals contributing to the amplitudes in terms of a set of special functions which are (algebraically) independent. Having such a unified and unique representation for all permutations of the integral families allows for simplifications and cancellations among different permutations of the Feynman integrals. This leads to a simpler expression of the amplitudes and to a more efficient functional reconstruction in the finite-field setup presented in section 3. We emphasise that our results cover all MIs required for computing _any_ two-loop four-particle amplitude with a single external off-shell leg, and not just the ones required for the amplitudes presented in this work.
We discuss the construction of the basis in section 4.1, and how we express it in terms of multiple polylogarithms (MPLs) in section 4.2. Finally, we give some details about the numerical evaluation and the checks we performed in section 4.3.
### 4.1 Construction of the special function basis
We follow the strategy presented in ref. [61]. The starting point are the DEs satisfied by the MIs for each family [85; 86; 87; 88; 89]. Let \(\tau\) label an integral family, e.g. the double-box in figure 1(b) for an arbitrary permutation of the external massless momenta. We choose a basis of _pure_ MIs \(\vec{g}_{\tau}\), that is, a basis which satisfies DEs in the canonical form [82]
\[\mathrm{d}\,\vec{g}_{\tau}(\vec{s};\epsilon)=\epsilon\,\left(\sum_{i=1}^{7} \mathrm{A}_{i}^{(\tau)}\,\mathrm{d}\log W_{i}(\vec{s})\right)\cdot\vec{g}_{ \tau}(\vec{s};\epsilon)\,. \tag{4.1}\]
Here, \(\mathrm{d}\) is the total differential, \(\mathrm{d}f\coloneqq\mathrm{d}s_{12}\,\partial_{s_{12}}f+\mathrm{d}s_{23}\, \partial_{s_{23}}f+\mathrm{d}s_{4}\,\partial_{s_{4}}f\), \(\mathrm{A}_{i}^{(\tau)}\) are constant \(n_{\tau}\times n_{\tau}\) matrices, with \(n_{\tau}\) the number of MIs of the family \(\tau\), and
\[\begin{array}{llll}W_{1}=s_{12}\,,&W_{2}=s_{23}\,,&W_{3}=s_{12}+s_{23}\,,&W_ {4}=s_{12}-s_{4}\,,\\ W_{5}=s_{23}-s_{4}\,,&W_{6}=s_{12}+s_{23}-s_{4}\,,&W_{7}=s_{4}\end{array} \tag{4.2}\]
are called _letters_. We find such canonical bases by a mixture of methods: the package DlogBasis[90], the analysis of results in the literature for related integral families (massless two-loop five-point planar integrals [91] and two-loop four-point integrals with two massive external legs [92; 93]), and a set of heuristic rules (see e.g. ref. [94]). We normalise the MIs such that their expansion around \(\epsilon=0\) starts from order \(\epsilon^{0}\),
\[\vec{g}_{\tau}(\vec{s};\epsilon)=\sum_{w\geq 0}\epsilon^{w}\,\vec{g}_{\tau}^{(w )}(\vec{s})\,. \tag{4.3}\]
For the purpose of computing two-loop scattering amplitudes up to their finite part (i.e., up to order \(\epsilon^{0}\)), it suffices to restrict our attention to \(w\leq 4\). Since the MIs satisfy canonical DEs (4.1), the \(\epsilon\)-order of the MI coefficients \(\vec{g}_{\tau}^{(w)}(\vec{s})\) equals their _transcendental weight_[82]. We compute the derivatives of the MIs using FiniteFlow[47] and LiteRed[80]. We do so only for the integral families with the ordering of the external momenta shown in figure 2, and obtain those for all other orderings of the external massless legs by permutation. We provide the definition of the pure MIs and the corresponding DEs for all one- and two-loop four-point one-mass families in figure 2 in the folder pure_mi_bases/ of our ancillary files [67].
In order to solve the DEs (4.1) we need boundary values, i.e., values of all MIs up to order \(\epsilon^{4}\) at a phase-space point. Due to the simplicity -- by today's standards -- of the integrals under consideration, an arbitrary (non-singular) phase-space point would do. Nonetheless, we make a more refined choice following some of the criteria of refs. [59; 60]. We choose the following point in the \(s_{12}\) channel (see appendix E),
\[\vec{s}_{0}=\left(2,\,-\frac{1}{2},\,1\right)\,, \tag{4.4}\]
motivated by two principles: that it is symmetric under the permutations which preserve the \(s_{12}\) channel (i.e., swapping \(p_{1}\leftrightarrow p_{2}\)), and that it contains few distinct prime factors. The first condition reduces the number of permuted integral families we need to evaluate
in order to obtain the boundary values. The second condition reduces the number of independent transcendental constants appearing in the boundary values, which simplifies the construction of the basis of special functions. The order-\(\epsilon^{0}\) boundary values \(\vec{g}_{\tau}^{(0)}\) are rational constants. We obtain them up to their overall normalisation by solving the 'first-entry conditions' [95], i.e., by requiring the absence of unphysical branch cuts in the solutions. We fix the overall normalisation and the higher-order boundary values \(\vec{g}_{\tau}^{(w)}(\vec{s}_{0})\) (for \(1\leq w\leq 4\)) by evaluating all MIs with AMFlow[96] (interfaced to FiniteFlow[47] and LiteRed[80]) at \(\vec{s}_{0}\) with at least 60-digit precision. We anticipate from section 4.2 that, although we use floating-point boundary values, our results in terms of MPLs are fully analytic.
The canonical DEs (4.1) and the boundary values for all integral families are the input for the algorithm of ref. [61] for constructing a basis of special functions. We refer to the original work for a thorough discussion. Out of all MI coefficients up to transcendental weight 4, the algorithm selects a subset, denoted \(F\coloneqq\{F_{i}^{(w)}(\vec{s})\}\), which satisfy two constraints. First, they are _algebraically independent_, that is, there are no polynomial functional relations among them. Second, the MI coefficients of all families (including all permutations of the external massless legs) up to transcendental weight 4 are expressed as polynomials in the \(\{F_{i}^{(w)}(\vec{s})\}\) and the zeta values \(\zeta_{2}=\pi^{2}/6\) and \(\zeta_{3}\). For example, an arbitrary weight-2 MI coefficient \(g^{(2)}(\vec{s})\) has the general form
\[g^{(2)}(\vec{s})=\sum_{i=1}^{3}c_{i}\,F_{i}^{(2)}(\vec{s})+\sum_{1\leq i\leq j\leq 4}d_{ij}\,F_{i}^{(1)}(\vec{s})\,F_{j}^{(1)}(\vec{s})+e\,\zeta_{2}\,, \tag{4.5}\]
with \(c_{i},d_{ij},e\in\mathbb{Q}\). This special subset of MI coefficients, \(\{F_{i}^{(w)}(\vec{s})\}\), constitutes our special function basis. We give the number of functions in the basis in table 1. Note that there is freedom in the choice of which MI coefficients make up the basis. We make use of this freedom to choose as many basis-elements as possible from the one-loop family, then complement them with coefficients from the planar two-loop families, and finally complete them with coefficients from the non-planar two-loop families. In this way no two-loop MI coefficients appear in the one-loop amplitudes, and no non-planar two-loop MI coefficients appear in those amplitudes where only planar diagrams contribute (as is often the case in the leading colour approximation of QCD).
The folder mi2func/ of our ancillary files [67] contains the expression of all MI coefficients (for all one- and two-loop integral families in all permutations of the external massless legs) up to weight 4 in terms of our special function basis. This result enables the efficient amplitude-computation strategy based on finite-field arithmetic discussed in section 3.
\begin{table}
\begin{tabular}{c c} \hline Weight & Number of basis functions \\ \hline
1 & 4 \\
2 & 3 \\
3 & 20 \\
4 & 67 \\ \hline \end{tabular}
\end{table}
Table 1: Number of functions \(\{F_{i}^{(w)}\}\) in the basis weight by weight.
However, at this stage the basis functions \(\{F_{i}^{(w)}\}\) are expressed in terms of Chen iterated integrals [97] and numerical boundary values \(\vec{g}^{(w)}(\vec{s}_{0})\). This representation is excellent for investigating the analytic properties of Feynman integrals and amplitudes, but it is not readily suitable for an efficient numerical evaluation. In the next section we discuss how we construct a representation of the function basis in terms of MPLs and zeta values, which is well suited for an efficient and stable numerical evaluation.
### 4.2 Expression in terms of multiple polylogarithms
In this section we construct a representation of our function basis in terms of MPLs. The weight-\(n\) MPL of indices \(\{a_{1},\ldots,a_{n}\}\) and argument \(x\) is defined recursively as
\[G(a_{1},a_{2},\ldots,a_{n};x)\coloneqq\int_{0}^{x}\frac{\mathrm{d}t}{t-a_{1}} \,G(a_{2},\ldots,a_{n};t)\,,\qquad a_{n}\neq 0\,, \tag{4.6}\]
starting with \(G(;x)=1\). Trailing zeros, i.e., zeros in the right-most indices, are allowed through the definition
\[G(\underbrace{0,\ldots,0}_{k};x)\coloneqq\frac{1}{k!}\log^{k}(x)\,. \tag{4.7}\]
We refer to ref. [98] for a thorough discussion.
Since the letters in eq. (4.2) are rational and linear in all variables, we can solve the canonical DEs in eq. (4.1) algorithmically in terms of MPLs. Order by order in \(\epsilon\), the solution is given by
\[\vec{g}^{(w)}_{\tau}(\vec{s})=\sum_{i=1}^{7}\mathrm{A}_{i}^{(\tau)}\cdot\int_{ \gamma}\mathrm{d}\log\bigl{(}W_{i}(\vec{s}=\gamma)\bigr{)}\,\vec{g}^{(w-1)}_{ \tau}\bigl{(}\vec{s}=\gamma\bigr{)}+\vec{b}^{(w)}_{\tau}\,, \tag{4.8}\]
starting from the constant weight-\(0\) boundary values \(\vec{g}^{(0)}_{\tau}\) determined in the previous subsection. Here, \(\gamma\) is a path connecting an arbitrary base-point \(\vec{s}_{\mathrm{base}}\) to the end-point \(\vec{s}\). The weight-\(w\) constants \(\vec{b}^{(w)}_{\tau}\) are given by the values of the integrals at the base-point, \(\vec{b}^{(w)}_{\tau}=\vec{g}^{(w)}_{\tau}(\vec{s}_{\mathrm{base}})\). For \(\vec{s}_{\mathrm{base}}\) we may use the boundary point \(\vec{s}_{0}\) in eq. (4.4), so that the constants \(\vec{b}^{(w)}_{\tau}\) coincide with the boundary values determined numerically in the previous section. We follow a different approach, which allows us to trade all numerical constants in the expressions for zeta values.
We find it convenient to change variables from \((s_{12},s_{23},s_{4})\) to \((z_{1},z_{2},s_{4})\), with
\[z_{1}=\frac{s_{12}}{s_{4}}\,,\qquad\qquad z_{2}=\frac{s_{23}}{s_{4}}\,. \tag{4.9}\]
This way, there is only one dimensionful variable, \(s_{4}\), the dependence on which is fixed as an overall factor by dimensional analysis. We then integrate the canonical DEs as in eq. (4.8) along the following piece-wise path in the \((z_{1},z_{2},s_{4})\) space:
\[(0,0,0)\stackrel{{\gamma_{1}}}{{\longrightarrow}}(z_{1},0,0) \stackrel{{\gamma_{2}}}{{\longrightarrow}}(z_{1},z_{2},0) \stackrel{{\gamma_{3}}}{{\longrightarrow}}(z_{1},z_{2},s_{4})\,. \tag{4.10}\]
Since the Feynman integrals are divergent at the chosen base-point, the latter is understood in a _regularised_ sense (we refer to section 4 of ref. [99] for a thorough discussion). Choosing
\((0,0,0)\) as base-point has the important benefit of removing spurious transcendental numbers that would pollute the solution were we to choose a base-point where the integrals are finite. As we will see below, only zeta values appear. Roughly speaking, we define regularised, finite values \(\vec{b}_{\tau}^{(w)}\coloneqq\text{Reg}\,\vec{g}_{\tau}^{(w)}(\vec{s}_{\text{ base}})\) by introducing a regulator and formally setting to 0 the (divergent) logarithms of the regulator. Since the integrals are finite at a generic end-point \(\vec{s}\), the divergences at the base-point must cancel out with divergences arising in the integration. We can thus drop all these divergences. Provided that we do it consistently between the integration and the base-point values \(\vec{b}_{\tau}^{(w)}\), this leads to a finite and unique result. In practice, we fix the finite base-point values \(\vec{b}_{\tau}^{(w)}\) by matching the solution \(\vec{g}_{\tau}^{(w)}(\vec{s})\) evaluated at the boundary point \(\vec{s}_{0}\) against the boundary values discussed in the previous subsection.
We therefore keep the \(\vec{b}_{\tau}^{(w)}\) as symbols and integrate the canonical DEs as in eq. (4.8) along the path in eq. (4.10) up to weight 4. We parameterise each piece of the path in eq. (4.10) linearly. For instance, \(\gamma_{2}(t)=(z_{1},t,0)\), with \(t\in[0,z_{2}]\).
* The \(\gamma_{1}\) integration leads to MPLs with indices in \(\{0,1\}\) and argument \(z_{1}\).
* The \(\gamma_{2}\) integration leads to MPLs with indices in \(\{0,1,1-z_{1},-z_{1}\}\) and argument \(z_{2}\) (see the sketch after this list).
* The \(\gamma_{3}\) integration leads to powers of \(\log(-s_{4})\), fixed by dimensional analysis.
Once we have obtained expressions for all MIs in terms of MPLs and symbolic constants \(\vec{b}_{\tau}^{(w)}\), we equate them to the numerical boundary values at \(\vec{s}_{0}\), and solve for the \(\vec{b}_{\tau}^{(w)}\). We use GiNaC[100; 98] to evaluate the MPLs numerically. Finally, we use the PSLQ algorithm [101] to express the ensuing values of \(\vec{b}_{\tau}^{(w)}\) in terms of \(\zeta_{2}\) and \(\zeta_{3}\). As a result, we obtain a fully analytic representation of all MIs -- and thus of our special function basis \(\{F_{i}^{(w)}\}\) -- in terms of MPLs and zeta values, up to weight 4.
Contrary to the functions in the basis \(\{F_{i}^{(w)}\}\), the MPLs in their representation satisfy functional relations. We make use of this freedom to optimise our expressions in view of their numerical evaluation by reducing the number of distinct MPLs that need to be evaluated. First, we use the shuffle algebra of MPLs to push all trailing zeros into logarithms through eq. (4.7) [98]. Next, we employ the scaling relation
\[G(a_{1},\ldots,a_{n};x)=G\left(\frac{a_{1}}{x},\ldots,\frac{a_{n}}{x};1\right)\,, \tag{4.11}\]
which holds for \(x,a_{n}\neq 0\). As a result, all MPLs have argument 1 and indices
\[l_{0}=0\,,\qquad l_{1}=\frac{s_{4}}{s_{12}}\,,\qquad l_{2}=\frac{s_{4}}{s_{23}}\,,\qquad l_{3}=\frac{s_{4}-s_{12}}{s_{23}}\,,\qquad l_{4}=-\frac{s_{12}}{s_{23}}\,. \tag{4.12}\]
Finally, we decompose the MPLs to _Lyndon words_[102] using PolyLogTools[103]; we refer to the latter work for a thorough explanation, and give here only a simple example. This procedure requires that we choose a symbolic ordering of the MPL indices. We choose \(l_{0}\prec l_{1}\prec l_{2}\prec l_{3}\prec l_{4}\), meaning that \(l_{1}\) is greater than \(l_{0}\), and so on. Consider the MPL \(G(l_{1},l_{0};1)\), whose indices are not sorted according to the ordering above, since \(l_{1}\succ l_{0}\). We
can use the shuffle algebra of MPLs to rewrite it in terms of MPLs whose indices are sorted according to the chosen ordering, as
\[G(l_{1},l_{0};1)=G(l_{0};1)\,G(l_{1};1)-G(l_{0},l_{1};1)\,. \tag{4.13}\]
Doing this consistently throughout all expressions reduces the number of higher-weight MPLs in favour of products of lower-weight ones, which are cheaper to evaluate numerically. To maximise the impact in this sense, we tested all possible orderings of the indices and selected the one -- given above -- which minimises the number of weight-4 MPLs. The resulting representation of the function basis contains 4 weight-1, 6 weight-2, 19 weight-3, and 25 weight-4 MPLs, as well as 3 logarithms:
\[\log(s_{12}/s_{4})\,,\qquad\quad\log(s_{23}/s_{4})\,,\qquad\quad\log(-s_{4})\,. \tag{4.14}\]
We write the latter in terms of logarithms rather than MPLs as they play an important role in the factorisation of the IR divergences in the scattering amplitudes (see appendix D for the IR structure of the amplitudes we compute here). We stress that \(\log(-s_{4})\) is the only function of a dimensionful argument in our representation of the function basis.
We provide in the folder mi2func/ of our ancillary files [67] the expression of the basis functions \(\{F_{i}^{(w)}\}\) in terms of MPLs, logarithms, \(\zeta_{2}\) and \(\zeta_{3}\).
It is important to stress that the MPLs are multi-valued functions. For unit argument, there is a pole on the integration contour whenever one of the indices lies between 0 and 1. In this case the contour must be deformed in the complex plane, either above or below the pole, leading to different branches. Our MPLs are thus well-defined only in the kinematic region where all MPL indices in eq. (4.12) are either less than 0 or greater than 1, and \(s_{4}<0\) for the argument of all logarithms in eq. (4.14) to be positive. We discuss how to analytically continue the MPLs and the logarithms in eq. (4.14) to the kinematic regions of interest in appendix E.
### 4.3 Performance and validation
We validated our results for the MIs of all families by crosschecking them against values obtained with AMFlow[96] at several random points in all the physical kinematic regions discussed in appendix E. Furthermore, we find agreement with the results of ref. [51]. We employ GiNaC[98; 100] to evaluate the MPLs.
Our results allow for an efficient and stable evaluation of the MIs, and are thus ready for immediate deployment in phenomenology. Indeed, the amplitudes we computed in this work have already been implemented in McMule[68; 69] to provide the real-double-virtual electron-line corrections to \(e\mu\to e\mu\) scattering. The evaluation is efficient, running at \(\approx 130\) events per second in the bulk of the phase space [104] using handyG[105] for the evaluation of the MPLs.
## 5 Conclusions
In this article, we calculated analytically the two-loop QED helicity amplitudes for the process \(0\to\ell\bar{\ell}\gamma\gamma^{*}\) in terms of a basis of multiple polylogarithms that are suitable for fast
and stable numerical evaluation. We employed modern finite-field evaluation techniques to reconstruct the amplitudes directly in terms of the special function basis, sidestepping the symbolic computation in all intermediate stages. As a by-product we have recomputed all two-loop master integrals for four-point functions with an off-shell leg up to transcendental weight four, and provide all the necessary ingredients needed to use them in amplitude computations with the same kinematics.
We hope these new results will now open the path to N\({}^{3}\)LO predictions that can be used for the future MUonE experiment.
## Acknowledgements

We thank Yannick Ulrich for providing the one-loop crosschecks, correspondence on the McMule implementation of these amplitudes, and other useful discussions. We further thank Heribertus Bayu Hartanto and Tiziano Peraro for collaboration on the codebase. SZ wishes to thank Dmitry Chicherin and Vasily Sotnikov for many useful discussions. This project received funding from the European Union's Horizon 2020 research and innovation programme _High precision multi-jet dynamics at the LHC_ (grant agreement number 772099).
## Appendix A Definition of the Feynman integral families
For each two-loop integral family \(\tau\) corresponding to one of the maximal topologies shown in figure 2, the Feynman integrals have the form
\[j^{\tau}(a_{1},\ldots,a_{9})=\mathrm{e}^{2\epsilon\gamma_{E}}\int\frac{\mathrm{d}^{4-2\epsilon}k_{1}}{\mathrm{i}\pi^{2-\epsilon}}\frac{\mathrm{d}^{4-2\epsilon}k_{2}}{\mathrm{i}\pi^{2-\epsilon}}\frac{1}{D_{\tau,1}^{a_{1}}\ldots D_{\tau,9}^{a_{9}}}\,. \tag{A.1}\]
The sets \(\{D_{\tau,1},\ldots,D_{\tau,9}\}\) contain seven (inverse) propagators and two ISPs (\(a_{8},a_{9}\leq 0\)). For the maximal topologies under consideration, they are given by:\({}^{1}\)
Footnote 1: We use a naming convention analogous to that of ref. [106].
* penta-triangle, \(\mathbf{mzz}\) configuration: \[\begin{split}\left\{k_{1}^{2},(k_{1}+p_{1}+p_{2}+p_{3})^{2},(k_{1}+p_{2}+p_{3})^{2},(k_{1}+p_{3})^{2},k_{2}^{2},(k_{2}-p_{3})^{2},\\ (k_{1}+k_{2})^{2},(k_{2}-p_{1}-p_{2}-p_{3})^{2},(k_{2}-p_{2}-p_{3})^{2}\right\},\end{split}\] (A.2)
* penta-triangle, \(\mathbf{zmz}\) configuration: \[\begin{split}\left\{k_{1}^{2},(k_{1}-p_{1})^{2},(k_{1}+p_{2}+p_{3})^{2},(k_{1}+p_{3})^{2},k_{2}^{2},(k_{2}-p_{3})^{2},(k_{1}+k_{2})^{2},\\ (k_{2}+p_{1})^{2},(k_{2}-p_{2}-p_{3})^{2}\right\},\end{split}\] (A.3)
* penta-triangle, \(\mathbf{zzz}\) configuration: \[\begin{split}\left\{k_{1}^{2},(k_{1}-p_{1})^{2},(k_{1}-p_{1}-p_{2})^{2},(k_{1}-p_{1}-p_{2}-p_{3})^{2},k_{2}^{2},(k_{2}+p_{1}+p_{2}+p_{3})^{2},\\ (k_{1}+k_{2})^{2},(k_{2}+p_{1})^{2},(k_{2}+p_{1}+p_{2})^{2}\right\},\end{split}\] (A.4)
* planar double-box: \[\begin{split}\big{\{}k_{1}^{2},(k_{1}-p_{1})^{2},(k_{1}-p_{1}-p_{2})^{2},k_{2}^{2},(k_{2}+p_{1}+p_{2}+p_{3})^{2},(k_{2}+p_{1}+p_{2})^{2},\\ (k_{1}+k_{2})^{2},(k_{1}-p_{1}-p_{2}-p_{3})^{2},(k_{2}+p_{1})^{2}\big{\}}\,,\end{split}\] (A.5)
* crossed double-box, \(\mathbf{mz}\) configuration: \[\begin{split}\big{\{}k_{1}^{2},(k_{1}+p_{1}+p_{2}+p_{3})^{2},(k_{1}+p_{2}+p_{3})^{2},k_{2}^{2},(k_{2}-p_{2})^{2},(k_{1}+k_{2})^{2},\\ (k_{1}+k_{2}+p_{3})^{2},(k_{1}+p_{3})^{2},(k_{2}-p_{1}-p_{2}-p_{3})^{2}\big{\}}\,,\end{split}\] (A.6)
* crossed double-box, \(\mathbf{zz}\) configuration: \[\begin{split}\big{\{}k_{1}^{2},(k_{1}-p_{1})^{2},(k_{1}-p_{1}-p_{2})^{2},k_{2}^{2},(k_{2}-p_{3})^{2},(k_{1}+k_{2})^{2},\\ (k_{1}+k_{2}-p_{1}-p_{2}-p_{3})^{2},(k_{1}-p_{1}-p_{2}-p_{3})^{2},(k_{2}+p_{1})^{2}\big{\}}\,.\end{split}\] (A.7)
We also use the one-loop (one-mass) box family, made of the following integrals:
\[j^{\text{box}}(a_{1},a_{2},a_{3},a_{4})=\text{e}^{\epsilon\gamma_{E}}\int\frac{\text{d}^{4-2\epsilon}k}{\text{i}\pi^{2-\epsilon}}\frac{1}{D_{\text{box},1}^{a_{1}}D_{\text{box},2}^{a_{2}}D_{\text{box},3}^{a_{3}}D_{\text{box},4}^{a_{4}}}\,, \tag{A.8}\]
with the four inverse propagators \(D_{\text{box},i}\)
\[\big{\{}k_{1}^{2},(k_{1}-p_{1})^{2},(k_{1}-p_{1}-p_{2})^{2},(k_{1}-p_{1}-p_{2}-p_{3})^{2}\big{\}}\,. \tag{A.9}\]
Feynman's prescription for the imaginary parts of all propagators is implicit.
These family definitions (strictly with the ordering of inverse propagators and ISPs shown above) correspond to the integrals j[family,\(a_{1}\),...] that build the canonical MI bases provided in the pure_mi_bases/ directory of our ancillary files [67]. In this notation, each j[...] represents a Feynman integral within a given integral family, while the numbers \(a_{i}\) refer to the powers of its propagators and ISPs.
## Appendix B Optimised IBP reduction procedure for amplitudes with many permuted integral families
An amplitude will in general have contributions from permutations of the _ordered_ integral families shown in figure 2. To reduce the tensor integrals in the amplitude, IBP identities must be generated for all the permutations of these ordered families. This can lead to a very large IBP system. The performance of the reduction setup is extremely sensitive to the number of IBP identities required so, to minimise the memory consumption, we choose to generate IBP identities only for the ordered families. Next, we obtain the reduction for any permutation of these families by permuting the 'ordered' reduction numerically over finite fields. The result is then given in terms of MIs of each family permutation, but it is missing the symmetry relations that can be found between subsectors of different families. To express the final result in terms of a minimal set of MIs, we find such relations from a separate computation. One may account for integral symmetries using automated
tools such as LiteRed[80]. Since we use a pure basis of MIs, the symmetry relations amongst them will have rational numbers as coefficients. This is because the presence of any kinematic invariant would spoil the purity of the canonical DEs (see section 4), and would mean that such a symmetry relation in fact involves non-pure integrals. Therefore, the computation of the missing symmetry relations can be performed with all kinematic invariants set to numeric values, which significantly lowers the complexity of this task. Finally, we note that even if symmetries amongst the MIs were missed, a representation of the integrals in terms of a basis of special functions -- as we construct in section 4 -- would automatically incorporate the extra simplifications and so the same final result would be obtained. Nonetheless, in practice we do find it useful to include these symmetry relations, as they reduce the number of independent coefficients that have to be processed further.
The procedure can be summarised as follows:
1. Generate (analytic) IBPs for the six ordered families.
2. Compute the mappings between permutations of the MIs of the system above.
3. Take the tensor integrals in the amplitudes for each permutation of these families and solve the linear system over finite fields.
4. Apply the symmetry mappings between the MIs of each family permutation to find the minimal set for the full system.
To clarify the remaining terminology, let us consider a concrete example. At one loop, a four-point process with a single off-shell leg can be described by a single independent integral family, which is simply the box topology (see appendix A for its explicit definition). Following the Laporta reduction algorithm leads to a basis of four MIs,
\[\mathrm{MI}^{\mathrm{box}}=\left\{j^{\mathrm{box}}(1,1,1,1),\,j^{\mathrm{ box}}(1,0,1,0),\,j^{\mathrm{box}}(0,1,0,1),\,j^{\mathrm{box}}(1,0,0,1) \right\}, \tag{11}\]
which are the scalar box and scalar bubble integrals in channels \(s_{12},s_{23}\) and \(s_{4}\) respectively. An amplitude will, in general, be written in terms of three permutations of this family. Let us denote these permutations as \(j^{\mathrm{box},1234}\), \(j^{\mathrm{box},2314}\), and \(j^{\mathrm{box},3124}\), where \(j^{\mathrm{box},1234}=j^{\mathrm{box}}\) as above and the additional superscript indices refer to the order of the external legs. Following our procedure we would load one set of IBP relations generated for \(j^{\mathrm{box}}\). These identities can then be permuted numerically, for example as FiniteFlow graphs, to reduce tensor integrals in each of the three permuted families. The result is now in terms of twelve MIs: three boxes and nine bubbles. While the amplitude is already in a minimal basis of box integrals, there is clearly an over-complete set of bubbles. The independent bubbles are in the channels \(s_{12}\), \(s_{23}\), \(s_{13}\), and \(s_{4}\), so the five additional symmetry mappings are
\[j^{\mathrm{box},2314}(1,0,1,0)=j^{\mathrm{box},1234}(0,1,0,1)\,, \quad j^{\mathrm{box},3124}(1,0,1,0)=j^{\mathrm{box},2314}(0,1,0,1)\,,\] \[j^{\mathrm{box},3124}(0,1,0,1)=j^{\mathrm{box},1234}(1,0,1,0)\,, \quad j^{\mathrm{box},2314}(1,0,0,1)=j^{\mathrm{box},1234}(1,0,0,1)\,, \tag{12}\] \[j^{\mathrm{box},3124}(1,0,0,1)=j^{\mathrm{box},1234}(1,0,0,1)\,.\]
After applying these identities we arrive at the final result with seven MIs which cover all permutations of the integral families. This approach would not lead to any significant performance enhancements in this simple example of course, but it can be particularly important when considering high-multiplicity examples where the number of permutations is high.
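To make the bookkeeping explicit, the following minimal Python sketch (with our own illustrative data structures, not the actual FiniteFlow graphs) applies the five symmetry relations of eq. (12) to the twelve MIs of the three permuted box families and recovers the minimal set of seven:

```python
# An MI is identified by its family permutation and its propagator powers.
MIs = [(perm, powers)
       for perm in ("1234", "2314", "3124")
       for powers in ((1, 1, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1), (1, 0, 0, 1))]

# The five symmetry relations of eq. (12), mapping permuted subsector
# integrals onto their canonical representatives.
symmetry_map = {
    ("2314", (1, 0, 1, 0)): ("1234", (0, 1, 0, 1)),
    ("3124", (1, 0, 1, 0)): ("2314", (0, 1, 0, 1)),
    ("3124", (0, 1, 0, 1)): ("1234", (1, 0, 1, 0)),
    ("2314", (1, 0, 0, 1)): ("1234", (1, 0, 0, 1)),
    ("3124", (1, 0, 0, 1)): ("1234", (1, 0, 0, 1)),
}

minimal_basis = {symmetry_map.get(mi, mi) for mi in MIs}
print(len(minimal_basis))  # 7: three boxes plus four independent bubbles
```

Exactly the same mapping step is applied at two loops, where generating IBPs only for the six ordered families brings a much larger saving.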
## Appendix C Rational parametrisation of the kinematics
Since we are applying finite-field techniques to helicity amplitudes, we employ a rational parametrisation of the external kinematics using Hodges's momentum twistor formalism [107]. While this is not essential to combat the algebraic complexity for the kinematics considered here, it does provide a convenient parametrisation of the spinor products.
The single-off-shell four-particle phase space \(p\) is obtained from a massless five-particle parametrisation \(q\) (defined in appendix A of ref. [36] with \(\{x_{2}\leftrightarrow x_{4},x_{3}\leftrightarrow x_{5}\}\)) under
\[p_{i}=q_{i}\qquad\forall i=1,2,3, p_{4}=q_{4}+q_{5}\,. \tag{106}\]
The momentum twistor variables \(x_{i}\) for \(p\) are then related to the scalar invariants \(\vec{s}\) through
\[s_{12}=x_{1}\,, s_{23}=x_{1}x_{2}\,, s_{4}=x_{1}x_{3}\,. \tag{107}\]
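As a quick consistency check of this parametrisation, eq. (107) and its inverse can be verified symbolically; here is a minimal sympy sketch (the variable names are ours):

```python
import sympy as sp

x1, x2, x3, s12, s23, s4 = sp.symbols("x1 x2 x3 s12 s23 s4")

twistor_to_inv = {s12: x1, s23: x1 * x2, s4: x1 * x3}    # eq. (107)
inv_to_twistor = {x1: s12, x2: s23 / s12, x3: s4 / s12}  # its inverse

# Round trip: every invariant maps back to itself.
for inv, expr in twistor_to_inv.items():
    assert sp.simplify(expr.subs(inv_to_twistor) - inv) == 0
```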
Momentum twistors allow us to express any spinor expression as a rational function in the variables \(x_{i}\). In this representation the helicity scaling is however obscured, as we have fixed the spinor phases in order to achieve a parameterisation in terms of the minimal number of variables (see e.g. ref. [108]). Therefore, we need to manually restore the phase information at the end of the computation. This can be achieved by multiplying the momentum twistor expression by an arbitrary factor \(\Phi\) with the same helicity scaling as the helicity amplitude under consideration, divided by that factor written in terms of momentum twistor variables. For example, for the helicity configurations of eq. (6), we can use the phase factors
\[\Phi(-++)=\frac{\langle 12\rangle}{\langle 23\rangle^{2}}\,, \Phi(-+-)=\frac{[12]}{[13]^{2}}\,, \tag{108}\]
which in our momentum twistor parameterisation are given by
\[\Phi(-++)=x_{1}^{2}\,, \Phi(-+-)=-\frac{1}{x_{1}(1+x_{2}-x_{3})^{2}}\,. \tag{109}\]
We refer to appendix C of ref. [37] for a thorough discussion of how to restore the phase information in a momentum twistor parameterisation.
## Appendix D Renormalisation and infrared structure
We renormalise the coupling constant by trading the bare coupling \(\alpha_{\rm bare}\) for the renormalised one \(\alpha_{\rm R}\) through
\[\alpha_{\rm bare}=\alpha_{\rm R}(\mu_{R})\,Z_{\alpha}\big{(}\alpha_{\rm R}(\mu _{R})\big{)}\,\mu_{R}^{2\epsilon}\,S_{\epsilon}\,, \tag{110}\]
with \(S_{\epsilon}=(4\pi)^{-\epsilon}\text{e}^{\epsilon\gamma_{E}}\). The renormalisation factor \(Z_{\alpha}\) in the \(\overline{\rm MS}\) scheme is [109; 110]
\[Z_{\alpha}(\alpha)=1-\frac{\alpha}{4\pi}\frac{\beta_{0}}{ \epsilon}-\left(\frac{\alpha}{4\pi}\right)^{2}\left(-\frac{\beta_{0}^{2}}{ \epsilon^{2}}+\frac{1}{2}\frac{\beta_{1}}{\epsilon}\right)+\mathcal{O}\!\left( \alpha^{3}\right). \tag{122}\]
The \(\beta\)-function is defined from the renormalised coupling as
\[\frac{\mathrm{d}\alpha_{\mathrm{R}}(\mu_{R})}{\mathrm{d}\ln\mu_ {R}}=\left[-2\,\epsilon+\beta\!\left(\alpha_{\mathrm{R}}(\mu_{R})\right) \right]\alpha_{\mathrm{R}}(\mu_{R})\,, \tag{123}\]
and expanded as
\[\beta(\alpha)=-2\,\frac{\alpha}{4\pi}\sum_{k\geq 0} \beta_{k}\left(\frac{\alpha}{4\pi}\right)^{k}\,, \tag{124}\]
with
\[\beta_{0}=-\frac{4}{3}n_{l}\,,\qquad\qquad\beta_{1}=-4\,n_{l}\,. \tag{125}\]
The photon wavefunction renormalisation factor is \(Z_{A}=Z_{\alpha}\), which we include due to the external off-shell photon. The complete renormalisation procedure then is
\[\mathcal{A}_{\mathrm{renorm}}^{\mu}(\alpha_{R})=Z_{A}^{\frac{1}{2 }}(\alpha_{R})\,\mathcal{A}_{\mathrm{bare}}^{\mu}(\alpha_{\mathrm{bare}})\,, \tag{126}\]
where \(\alpha_{\mathrm{bare}}\) is expressed in terms of \(\alpha_{R}\) through eq. (121).
The IR poles of the renormalised amplitude factorise as [70; 71; 72; 73; 74]
\[\mathcal{A}_{\mathrm{renorm}}^{\mu}(\alpha_{R})=Z(\alpha_{R})\, \mathcal{F}^{\mu}(\alpha_{R})\,, \tag{127}\]
so that \(Z(\alpha_{R})\) captures all IR poles and \(\mathcal{F}^{\mu}\) is a finite remainder. We obtain the explicit two-loop expression of the IR factor \(Z(\alpha_{R})\) by choosing QED parameters (\(C_{A}=0\), \(C_{F}=1\), and \(T_{F}=1\)) in the non-abelian gauge-theory expressions of ref. [74]. We expand it as
\[Z(\alpha)=\sum_{L\geq 0}Z^{(L)}\left(\frac{\alpha}{4\pi}\right)^{L}\,. \tag{128}\]
The coefficients \(Z^{(L)}\) are expressed in terms of the anomalous dimension
\[\Gamma=\gamma^{\mathrm{cusp}}\ln\left(\frac{-s_{12}}{\mu^{2}} \right)+2\gamma^{l}+\gamma^{A}\,, \tag{129}\]
and its derivative
\[\Gamma^{\prime}\coloneqq\frac{\partial\Gamma}{\partial\ln\mu}=- 2\gamma^{\mathrm{cusp}}\,. \tag{130}\]
Here, \(\gamma^{\mathrm{cusp}}\) is the cusp anomalous dimension, while \(\gamma^{l}\) and \(\gamma^{A}\) are the lepton's and the photon's collinear anomalous dimensions, respectively. We expand all anomalous dimensions \(y\in\{\Gamma,\gamma^{i}\}\) as
\[y=\frac{\alpha}{4\pi}\sum_{k\geq 0}y_{k}\left(\frac{ \alpha}{4\pi}\right)^{k}\,, \tag{131}\]
with coefficients
\[\gamma_{0}^{l} =-3\,, \gamma_{1}^{l} =-\frac{3}{2}+2\pi^{2}-24\,\zeta_{3}+n_{l}\left(\frac{130}{27}+ \frac{2}{3}\pi^{2}\right)\,, \tag{143a}\] \[\gamma_{0}^{A} =-\beta_{0}\,, \gamma_{1}^{A} =-\beta_{1}\,,\] (143b) \[\gamma_{0}^{\rm cusp} =4\,, \gamma_{1}^{\rm cusp} =-\frac{80}{9}n_{l}\,. \tag{143c}\]
Finally, the coefficients of the IR factor \(Z\) up to two loops are given by
\[Z^{(0)}=1\,,\hskip 14.226378ptZ^{(1)}=\frac{\Gamma_{0}^{\prime}}{4 \epsilon^{2}}+\frac{\Gamma_{0}}{2\epsilon}\,,\hskip 14.226378ptZ^{(2)}=\frac{{Z^{( 1)}}^{2}}{2}-\frac{3\beta_{0}\Gamma_{0}^{\prime}}{16\epsilon^{3}}+\frac{ \Gamma_{1}^{\prime}-4\beta_{0}\Gamma_{0}}{16\epsilon^{2}}+\frac{\Gamma_{1}}{ 4\epsilon}\,. \tag{144}\]
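These nested pole structures are easy to transcribe incorrectly, so a small symbolic check is worthwhile. The sketch below (symbol names are ours) implements eq. (144) in sympy and verifies, for instance, that the deepest \(1/\epsilon^{4}\) pole of \(Z^{(2)}\) is \((\Gamma_{0}^{\prime})^{2}/32\):

```python
import sympy as sp

eps, b0, G0, G1, G0p, G1p = sp.symbols("eps beta0 Gamma0 Gamma1 Gamma0p Gamma1p")

Z1 = G0p / (4 * eps**2) + G0 / (2 * eps)
Z2 = (Z1**2 / 2
      - 3 * b0 * G0p / (16 * eps**3)
      + (G1p - 4 * b0 * G0) / (16 * eps**2)
      + G1 / (4 * eps))

# The coefficient of 1/eps^4 in Z^(2) comes entirely from Z^(1)^2 / 2.
assert sp.simplify(sp.expand(Z2 * eps**4).subs(eps, 0) - G0p**2 / 32) == 0
```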
Putting together the subtraction of UV and IR poles, and expanding the resulting finite remainder \(\mathcal{F}^{\mu}(\alpha_{R})\) in \(\alpha_{R}\) leads to the definitions in eq. (14).
## Appendix E Analytic continuation
We analytically continue the MPLs by adding a small positive (or negative) imaginary part to the MPL indices \(l_{i}\) in eq. (4.12) whenever they fall between \(0\) and \(1\). The imaginary part of each index prescribes how to deform the integration contour around the pole associated with it. We do similarly for the logarithms in eq. (4.14). To this end, following ref. [50], we change variables from \((s_{12},s_{23},s_{4})\) to \((s_{12},s_{23},s_{13})\), with \(s_{4}=s_{12}+s_{23}+s_{13}\). We then add a small positive imaginary part to the latter variables, as
\[s_{12}\longrightarrow s_{12}+{\rm i}\,c_{1}\,\delta\,,\hskip 14.226378pts_{23} \longrightarrow s_{23}+{\rm i}\,c_{2}\,\delta\,,\hskip 14.226378pts_{13} \longrightarrow s_{13}+{\rm i}\,c_{3}\,\delta\,, \tag{145}\]
where \(c_{1}\), \(c_{2}\) and \(c_{3}\) are arbitrary positive constants, and \(\delta\) is a positive infinitesimal. Finally, we check whether this substitution gives a positive or negative imaginary part to each MPL index \(l_{i}\). This depends on the domain of the kinematic variables. We focus on three kinematic regions which are of phenomenological interest. The analytic continuation for any other region may be obtained similarly.
Electron-line corrections to \(e^{-}\mu^{-}\to e^{-}\mu^{-}\gamma\). To define the domain of the kinematic variables relevant for this application, we embed the four-particle off-shell process of eq. (1) in the five-particle process \(e^{-}\mu^{-}\to e^{-}\mu^{-}\gamma\). We then determine the kinematic constraints for the five-particle process (see e.g. appendix A of ref. [60]), and from them derive the constraints on the four-point off-shell kinematics. The result is
\[\mathcal{P}_{e\mu\to e\mu\gamma}\coloneqq\{\vec{s}\colon s_{12}<0\, \wedge\,s_{23}<0\,\wedge\,0<s_{13}<-s_{12}-s_{23}\}\,. \tag{146}\]
The MPL index \(l_{4}=-s_{12}/s_{23}\) is always negative in \(\mathcal{P}_{e\mu\to e\mu\gamma}\), hence no analytic continuation is required. The other three indices may instead fall between \(0\) and \(1\). Let us study \(l_{1}\). Changing variables from \(s_{4}\) to \(s_{13}\) and adding imaginary parts as in eq. (145) gives
\[l_{1}=\frac{s_{12}+s_{13}+s_{23}}{s_{12}}+\frac{{\rm i}\delta}{s_{12}^{2}} \left[(c_{2}+c_{3})s_{12}-c_{1}(s_{13}+s_{23})\right]+\mathcal{O}\left(\delta ^{2}\right)\,. \tag{147}\]
The imaginary part of \(l_{1}\) may be either negative or positive in \(\mathcal{P}_{e\mu\to e\mu\gamma}\). However, it is strictly negative in the subregion of \(\mathcal{P}_{e\mu\to e\mu\gamma}\) where \(0<l_{1}<1\). We therefore assign a negative imaginary part to \(l_{1}\) whenever \(0<l_{1}<1\) in \(\mathcal{P}_{e\mu\to e\mu\gamma}\). The analysis of the other indices follows similarly, and is summarised in table 2. The arguments of the three logarithms in eq. (4.14) are positive in \(\mathcal{P}_{e\mu\to e\mu\gamma}\).
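This sign analysis is simple to verify numerically. The sketch below (the sampling code is ours) draws points from the region of eq. (146), evaluates the \(\mathcal{O}(\delta)\) imaginary part of eq. (147), and confirms that it is strictly negative wherever \(0<l_{1}<1\):

```python
import random

def im_l1_coeff(s12, s23, s13, c=(1.0, 1.0, 1.0)):
    """Coefficient of i*delta in l1, read off from eq. (147)."""
    c1, c2, c3 = c
    return ((c2 + c3) * s12 - c1 * (s13 + s23)) / s12**2

random.seed(0)
for _ in range(100_000):
    s12, s23 = -random.uniform(0, 5), -random.uniform(0, 5)
    s13 = random.uniform(0, -s12 - s23)       # the region of eq. (146)
    l1 = (s12 + s13 + s23) / s12
    if 0 < l1 < 1:                            # continuation needed only here
        assert im_l1_coeff(s12, s23, s13) < 0
```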
Corrections to \(e^{-}e^{+}\to\gamma\gamma^{*}\). The relevant domain of the kinematic variables in this case can be derived directly for the four-point kinematics, and is typically named the \(s_{12}\) channel. It is given by
\[\mathcal{P}_{e\bar{e}\to\gamma\gamma^{*}}\coloneqq\{\vec{s}\colon s _{23}<0\,\wedge\,s_{13}<0\,\wedge\,s_{12}>-s_{23}-s_{13}\}\,.\] (E.4)
The MPL indices \(l_{2}\), \(l_{3}\) and \(l_{4}\) can never fall between \(0\) and \(1\) in \(\mathcal{P}_{e\bar{e}\to\gamma\gamma^{*}}\), and hence require no analytic continuation. We instead need to add a positive imaginary part to \(l_{1}\). In this region also the logarithms in eq. (4.14) need to be analytically continued. The argument of \(\log(s_{12}/s_{4})\) is positive in \(\mathcal{P}_{e\bar{e}\to\gamma\gamma^{*}}\). By adding imaginary parts to the arguments of the other logarithms and studying them where the arguments are negative in \(\mathcal{P}_{e\bar{e}\to\gamma\gamma^{*}}\), we determine that the analytic continuation is achieved through the following replacements:
\[\log(s_{23}/s_{4})\longrightarrow\log(-s_{23}/s_{4})+\mathrm{i} \pi\,,\qquad\log(-s_{4})\longrightarrow\log(s_{4})-\mathrm{i}\pi\,.\] (E.5)
Corrections to the decay \(\gamma^{*}\to e^{-}e^{+}\gamma\). The relevant domain of the kinematic variables is
\[\mathcal{P}_{\gamma^{*}\to e\bar{e}\gamma}\coloneqq\{\vec{s}\colon s _{12}>0\,\wedge\,s_{23}>0\,\wedge\,s_{13}>0\}\,.\] (E.6)
All MPL indices \(l_{i}\) in eq. (4.12) are either \(l_{i}<0\) or \(l_{i}>1\), hence no analytic continuation is required. The same holds for the first two logarithms in eq. (4.14), whose arguments are positive. The only function which needs to be analytically continued is \(\log(-s_{4})\). We achieve this by replacing
\[\log(-s_{4})\longrightarrow\log(s_{4})-\mathrm{i}\pi\,.\] (E.7)
The information about the imaginary parts of the MPL indices can be fed into the publicly available libraries for evaluating these functions numerically, such as FastGPL [111], GiNaC [98, 100], and handyG [105]. This typically leads to longer evaluation times with respect to MPLs which do not need analytic continuation. We find that this is not an issue for the planned applications of our results (see section 4.3). Nonetheless, we note that a more performant evaluation may be achieved by tailoring the representation to the kinematic region of interest in such a way that no MPLs require analytic continuation. We refer to refs. [50, 51] for a detailed discussion.

\begin{table}
\begin{tabular}{c c c c} \hline Index & \(\mathcal{P}_{e\mu\to e\mu\gamma}\) & \(\mathcal{P}_{e\bar{e}\to\gamma\gamma^{*}}\) & \(\mathcal{P}_{\gamma^{*}\to e\bar{e}\gamma}\) \\ \hline \(l_{1}\) & \(-\) & \(+\) & \(0\) \\ \(l_{2}\) & \(-\) & \(0\) & \(0\) \\ \(l_{3}\) & \(-\) & \(0\) & \(0\) \\ \(l_{4}\) & \(0\) & \(0\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 2: Imaginary parts of the MPL indices defined by eq. (4.12) in the three kinematic regions discussed in appendix E. The symbol \(+\) (\(-\)) denotes a positive (negative) imaginary part, while \(0\) means no analytic continuation is needed.
|
2305.15083 | Eliciting the Translation Ability of Large Language Models via
Multilingual Finetuning with Translation Instructions | Large-scale Pretrained Language Models (LLMs), such as ChatGPT and GPT4, have
shown strong abilities in multilingual translations, without being explicitly
trained on parallel corpora. It is interesting how the LLMs obtain their
ability to carry out translation instructions for different languages. In this
paper, we present a detailed analysis by finetuning a multilingual pretrained
language model, XGLM-7B, to perform multilingual translation following given
instructions. Firstly, we show that multilingual LLMs have stronger translation
abilities than previously demonstrated. For a certain language, the performance
depends on its similarity to English and the amount of data used in the
pretraining phase. Secondly, we find that LLMs' ability to carry out
translation instructions relies on the understanding of translation
instructions and the alignment among different languages. With multilingual
finetuning, LLMs could learn to perform the translation task well even for
those language pairs unseen during the instruction tuning phase. | Jiahuan Li, Hao Zhou, Shujian Huang, Shanbo Cheng, Jiajun Chen | 2023-05-24T12:00:24Z | http://arxiv.org/abs/2305.15083v4 | # Eliciting the Translation Ability of Large Language Models via
###### Abstract
Large-scale Pretrained Language Models (LLMs), such as ChatGPT and GPT4, have shown strong abilities in multilingual translations, without being explicitly trained on parallel corpora. It is interesting how the LLMs obtain their ability to carry out translation instructions for different languages. In this paper, we present a detailed analysis by finetuning a multilingual pretrained language model, XGLM-7B, to perform multilingual translation following given instructions. Firstly, we show that multilingual LLMs have stronger translation abilities than previously demonstrated. For a certain language, the performance depends on its similarity to English and the amount of data used in the pretraining phase. Secondly, we find that LLMs' ability to carry out translation instructions relies on the understanding of translation instructions and the alignment among different languages. With multilingual finetuning, LLMs could learn to perform the translation task well even for those language pairs unseen during the instruction tuning phase.
† Equal contribution.

‡ Corresponding author.
## 1 Introduction
The emergence of Large Pretrained Language Models (LLMs) Brown et al. (2020); OpenAI (2023) has revolutionized the research of machine translation Hendy et al. (2023); Garcia et al. (2023). These models have demonstrated remarkable multilingual translation capabilities, without requiring explicit training on parallel corpora. For instance, XGLM, a medium-sized multilingual language model, outperforms supervised models using only several examples as demonstrations Lin et al. (2022); the cutting-edge LLM GPT4 has been shown to perform comparably to commercial translation systems on multiple language pairs Jiao et al. (2023).
Most existing research on LLMs for machine translation focuses on in-context learning (ICL), i.e. taking several parallel sentences as the demonstration to guide LLMs to perform translation Vilar et al. (2022); Agrawal et al. (2022); Hendy et al. (2023); Zhu et al. (2023). However, these methods rely heavily on the in-context learning ability of LLMs. For smaller models, e.g. models with only 1B or 7B parameters, the relatively weak ICL ability may result in an underestimation of their potential translation ability.
Instead of relying on the ICL abilities, we propose to investigate the ability of LLMs by directly training them to follow translation instructions. Inspired by the recent success of instruction tuning Wei et al. (2022); Chung et al. (2022), we organize multilingual translation tasks as different instances of the translation instruction, with each instance corresponds to a specific language pair. By training the LLMs to follow these instructions, i.e. with **m**ultilingual **F**inetuning with **T**ranslation **I**nstructions (mFTI), it is possible to better elicit translation ability inside LLMs.
Our results show that by training on a mixed dataset of 1,000 sentences per language pair, mFTI outperforms the 8-shot in-context learning by near 3 BLEU on average, showing a greater potential of LLMs' translation ability than previously demonstrated Lin et al. (2022). In addition, we also discuss how mFTI improves the LLM and which factors influence the performance.
To better understand why LLMs could follow these instructions, we design a mFTI setting where only a subset of the translation instructions, i.e. language pairs, are used for training. Thus LLMs need to generalize their instruction following abilities for those language pairs unseen during mFTI. Surprisingly, mFTI elicits the translation ability not only for trained language pairs, but also for those unseen during instruction training. With further experiments and analyses, we find that LLMs could
learn the translation behavior in general by being trained to translate even irrelevant language pairs. It is also interesting that with mFTI, LLMs learn to directly align languages through the use of pivot languages, which also enhances the instruction-following ability for unseen language pairs.
## 2 Multilingual Finetuning with Translation Instructions
### Overall Framework
Given a corpus of multilingual parallel sentences and their languages \(\mathcal{M}=\{(l_{s}^{i},l_{t}^{i},\mathbf{x}^{i},\mathbf{y}^{i})\}\), where \(l_{s}^{i}\) and \(l_{t}^{i}\) are the names of the source and target language of the \(i\)-th parallel sentence pair \((\mathbf{x}^{i},\mathbf{y}^{i})\), respectively, mFTI leverages an instruction template \(\mathcal{T}\) to organize the corpus \(\mathcal{M}\) into a language modeling dataset \(\mathcal{D}\). Each sentence \(d^{i}\) in \(\mathcal{D}\) is an instantiation of the translation instruction with a specific sentence pair: \(d^{i}=\mathcal{T}(l_{s}^{i},l_{t}^{i},\mathbf{x}^{i},\mathbf{y}^{i})\). The parameters of the LLM are then optimized using a standard next-token-prediction objective on \(\mathcal{D}\):
\[\operatorname*{argmax}_{\theta}\sum_{i=1}^{|\mathcal{D}|}\sum_{j=1}^{|d^{i}|} \text{log}\ p_{\theta}(d^{i}_{j}|d^{i}_{<j}),\]
where \(\theta\) are parameters of LLMs.
The instruction template adopted in this paper is
\[\text{Translation: }[l_{s}]\text{: }\mathbf{x}\ [l_{t}]\text{: }\mathbf{y}\]
where the prefix "Translation:" is used to indicate the translation task; the pattern "\([\cdot]\):" is used to identify the name of the specific language. We present the study of the impact of instruction templates in Appendix A.1.
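To make the construction concrete, the following sketch (our own minimal PyTorch transcription, not the authors' released code) instantiates the template and computes the next-token-prediction loss; `model` is assumed to return raw logits:

```python
import torch.nn.functional as F

TEMPLATE = "Translation: [{src}]: {x} [{tgt}]: {y}"

def build_examples(parallel_corpus, tokenize):
    """Instantiate the instruction template for every (l_s, l_t, x, y) tuple."""
    return [tokenize(TEMPLATE.format(src=ls, tgt=lt, x=x, y=y))
            for ls, lt, x, y in parallel_corpus]

def mfti_loss(model, ids):
    """Next-token prediction over the full instruction string, ids: [B, T]."""
    logits = model(ids[:, :-1])  # assumed to return [B, T-1, V] logits
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           ids[:, 1:].reshape(-1))
```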
### Experiment Setup
**Backbone Language Model.** We consider XGLM-7B [10] as our backbone language model. XGLM-7B is a massive multilingual language model trained in an auto-regressive fashion on a corpus of 500 billion tokens comprising 30 diverse languages. Low-resource languages have been up-sampled during training, making it an ideal backbone model for multilingual translation research.
**Languages.** Following Lin et al. (2022), our evaluation involves 13 languages that are covered in the pretraining corpus of XGLM, i.e. English (En), German (De), French (Fr), Catalan (Ca), Hindi (Hi), Finnish (Fi), Russian (Ru), Bulgarian (Bg), Chinese (Zh), Korean (Ko), Arabic (Ar), Swahili (Sw) and Tamil (Ta). Among these languages, En, De, Fr, Ru and Zh are high-resource languages (with ratios in the XGLM pretraining data greater than 4%); Ko, Fi, Ar, Bg are medium-resource languages (with ratios between 0.5% and 4%); Ca, Hi, Ta, Sw are low-resource languages (with ratios under 0.5%).
**Evaluation Datasets.** Following previous works [10], we evaluate translation models on the FLORES-101 dataset [1], which provides manual translations of 1012 sentences in 101 languages.
**Finetuning Datasets.** Our finetuning dataset primarily comes from WikiMatrix [13]. WikiMatrix provides a parallel corpus for 1620 different language pairs, including many non-English language pairs, which enables a systematic investigation for the translation of languages other than English. We also leverage the MultiCCAligned [1] corpus for language pairs that are not contained in WikiMatrix, including Hi-Sw, Ko-Sw, Ta-Sw, Sw-Hi, Sw-Ko, Sw-Ta.
**Optimization Details.** We finetune all models using the Adam [16] optimizer with the learning rate fixed at \(5e-6\). We use a fixed batch size of 80 sentences and finetune models for 1 epoch or 2000 steps (depending on the size of the training corpus) for all experiments.
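A minimal training loop matching these settings might look as follows, reusing `mfti_loss` from the sketch in Section 2.1; the toy model and random token ids are stand-ins for XGLM-7B and the real tokenized batches:

```python
import torch

torch.manual_seed(0)
vocab, dim = 32000, 64
model = torch.nn.Sequential(                 # stand-in for XGLM-7B
    torch.nn.Embedding(vocab, dim),
    torch.nn.Linear(dim, vocab))

opt = torch.optim.Adam(model.parameters(), lr=5e-6)  # lr fixed at 5e-6
for step in range(2000):                             # 1 epoch or 2000 steps
    ids = torch.randint(0, vocab, (80, 128))         # batch of 80 "sentences"
    opt.zero_grad()
    loss = mfti_loss(model, ids)
    loss.backward()
    opt.step()
```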
## 3 Understanding the Potential Translation Ability of LLMs
In this section, we first assess the overall translation performance of mFTI by comparing it to few-shot in-context learning. We then present a detailed analysis of how the corpus for mFTI influences the translation quality.
Footnote 1: We randomly select 8 examples from the FLORES-101 dev split as the demonstration for ICL. A random selection strategy is shown to be good enough in many previous works [10, 11, 12]. The template we use for ICL is <src_text> = <tgt_text>, which shows good performance according to Zhu et al. (2023).
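For concreteness, the 8-shot prompt described in the footnote can be assembled as follows (a minimal sketch; the function name is ours):

```python
def icl_prompt(demos, src_sentence):
    """Build the `<src_text> = <tgt_text>` few-shot prompt used for ICL."""
    lines = [f"{s} = {t}" for s, t in demos]  # 8 (source, target) demo pairs
    lines.append(f"{src_sentence} =")          # the model completes the target
    return "\n".join(lines)

print(icl_prompt([("Bonjour.", "Hello.")], "Merci beaucoup."))
```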
### Translation Ability of LLMs
We finetune XGLM on 156 language pairs spanning all 13 languages. Since our goal is to elicit the translation ability of LLMs using a small number of examples, we limit the number of parallel sentences to 1000 per language pair.
**mFTI Better Elicits Translation Ability than Few-shot ICL.** Figure 1 shows the average BLEU for translation into and out of language X, respectively. BLEU scores for individual language pairs are shown in Appendix A.2. It is clear that mFTI leads to better translation performance than 8-shot ICL in almost all language pairs (3 BLEU on average). For some languages, the gap is up to 8 BLEU (e.g. translating into Catalan). This demonstrates the effectiveness of mFTI in eliciting LLMs' translation ability. It also shows that LLMs have a greater potential for multilingual translation than was seen with ICL (Lin et al., 2022).
Even for translating into and out of English, mFTI still outperforms 8-shot ICL, but with a much smaller gap. This indicates that LLMs with ICL are better at performing tasks that involve English rather than other languages, but they still have the potential to perform even better.
**XGLM is still an English-centric Model.** The translation performance for each language varies greatly. Considering that the number of sentences used in mFTI is the same for each language, one may suspect that the translation performance of each language largely depends on the amount of its pretraining data. For this reason, the languages in Figure 1 are listed in descending order of their data amount in the XGLM pretraining. However, there are clear fluctuations. For example, Russian and Chinese are the two languages with the largest portion of pretraining data other than English, but their translation performance is much worse than that of some other languages such as French.
We calculate the Spearman correlation between the translation performance and possible influence factors, namely the data amount in pretraining and the similarity to English. For data amount, we use the size of the pretraining corpus reported in Lin et al. (2022). For similarity to English, we adopt lang2vec, a toolkit for querying the URIEL typological database, to get each language's feature vector from different perspectives including _geography_, _phylogeny_, _syntax_, _phonology_ and _inventory_.
Footnote 2: [https://github.com/antonisa/lang2vec](https://github.com/antonisa/lang2vec)
Footnote 3: [http://www.cs.cmu.edu/~dmortens/projects/7_project](http://www.cs.cmu.edu/~dmortens/projects/7_project)
Footnote 4: We refer readers to Littell et al. (2017) for details on how the feature vector is obtained.
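A sketch of this correlation analysis is given below; the ISO 639-3 codes, the choice of the `syntax_knn` feature set, the use of cosine similarity, and the placeholder BLEU scores are our assumptions, as the paper does not pin these details down:

```python
import numpy as np
from scipy.stats import spearmanr
import lang2vec.lang2vec as l2v

# ISO 639-3 codes for the 12 non-English languages (codes are assumptions).
langs = ["deu", "fra", "cat", "hin", "fin", "rus",
         "bul", "cmn", "kor", "arb", "swh", "tam"]
feats = l2v.get_features(["eng"] + langs, "syntax_knn")  # kNN-imputed vectors

def cos(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

similarity = [cos(feats[l], feats["eng"]) for l in langs]
avg_bleu_to_x = list(range(len(langs)))  # replace with measured average BLEU
rho, p = spearmanr(similarity, avg_bleu_to_x)
print(rho, p)
```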
As shown in Table 1, the translation performance indeed has a positive correlation with data amount in pretraining (0.39/0.36). But the similarity between the specific language and English plays a more important role in determining the performance. All considered features demonstrate a higher correlation coefficient than the data amount in pretraining. This indicates XGLM is still a predominantly English-centric model. Based on these observations, we suggest taking the relation between different languages into consideration when collecting and sampling data for pretraining multilingual LLMs.
### mFTI Enhances Direct Language Alignment
A distinct difference between ICL and mFTI is that mFTI could learn from more parallel sentences and update the model if needed. It is interesting to see what changes after the update.
Figure 1: Translation performance of 8-shot ICL and mFTI using 1000 sentences per language pair. Languages are ordered by the data amount in the pretraining corpus.
Many previous works (Zhang et al., 2023; Jiao et al., 2023b) have shown that translating by pivoting through English significantly improves ICL's translation performance. We compare performance gains of pivot translation using ICL and mFTI, respectively.
Figure 2 presents the result. Each value in the grid is the BLEU difference before and after pivoting through English. We can first observe that pivoting through English indeed improves translation performance for ICL, by up to 10 BLEU on some language pairs. However, after mFTI, the gap is significantly reduced. Considering the fact that mFTI achieves an average of 3 BLEU more than ICL, the reduced benefit of pivoting through English compared to direct translation indicates a better direct alignment between languages.
### Factors that Affect mFTI
**Quality of Finetuning Corpus is Crucial.** During mFTI, it is important to consider the impact of the quality of the finetuning corpus on translation performance. Therefore, we conduct a comparative experiment by selecting high- and low-quality finetuning corpora and comparing their translation performance. The criterion for selecting high- and low-quality data is the score attached to each sample in WikiMatrix, i.e. the cosine similarity of the source and target sentence representations produced by LASER. From the filtered corpus, where each parallel sentence pair has a LASER similarity score greater than 1.04, we take the 100,000 parallel sentence pairs with the highest and lowest scores as the high-quality and low-quality pools, respectively. We then construct the corresponding high-quality and low-quality corpora by randomly selecting 1000 sentences from the pools. According to the results in Table 2, finetuning with high-quality parallel sentences improves the BLEU score by around 2 points compared to finetuning with low-quality parallel sentences, emphasizing the importance of corpus quality.
Footnote 5: [https://github.com/facebookresearch/LASER](https://github.com/facebookresearch/LASER)
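The selection procedure can be sketched as follows; the `scored_pairs` placeholder stands in for the score-annotated pairs shipped with WikiMatrix:

```python
import random

# (margin_score, src, tgt) triples; placeholder data for illustration.
scored_pairs = [(1.04 + i * 1e-6, "src", "tgt") for i in range(300_000)]

filtered = [p for p in scored_pairs if p[0] > 1.04]       # quality threshold
filtered.sort(key=lambda p: p[0])
low_pool, high_pool = filtered[:100_000], filtered[-100_000:]

random.seed(0)
low_quality = random.sample(low_pool, 1000)
high_quality = random.sample(high_pool, 1000)
```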
**The Effectiveness of mFTI Scales up with Model Size and Training Examples.** Figure 3 shows the translation performance when varying the number of training examples per language pair (1k, 2k, 4k, 8k, 16k, 32k) and the number of model parameters (564M, 1.7B, 2.9B, 4.5B, 7.5B). As we can see, it follows a standard log-linear scaling law in terms of both the number of training examples and the model size, which is consistent with findings in previous works (Kaplan et al., 2020; Caballero et al., 2023).
\begin{table}
\begin{tabular}{c c c} \hline \hline & **To X** & **From X** \\ \hline
**Data Amount in Pretraining** & 0.39 & 0.36 \\ \hline
**Similarity To English** & & \\ _Geography_ & 0.93 & 0.87 \\ _Syntax_ & 0.85 & 0.80 \\ _Phylogeny_ & 0.71 & 0.75 \\ _Phonology_ & 0.50 & 0.49 \\ _Inventory_ & 0.51 & 0.41 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Spearman correlation between average translation performance (in BLEU) and possible influence factors (data amount in pretraining, similarity to English). The performance of translating to and from language X is calculated separately.
Figure 2: Changes of BLEU score after pivoting through English for 8-shot ICL and mFTI.
## 4 Understanding the Ability to Carry Out Translation Instructions
In this section, we present a comprehensive analysis on how mFTI improves the model's ability to carry out translation instructions.
We begin by presenting an overarching experiment where we intentionally withhold certain language pairs during the mFTI process, which allows us to study models' ability to carry out translation instructions under different conditions.
Furthermore, we delve deeper into our analysis by exploring how mFTI enhances LLMs' ability to carry out translation instructions from the following perspectives: better understanding of translation instructions (4.3 and 4.4) and better alignment between languages to execute translation instructions (4.5).
### Manipulating Conditions
In Section 3, we have presented results in a fully supervised setting, where all testing language pairs are seen during instruction tuning. To provide further insights into LLMs' generalization ability across language pairs, we simulate a more realistic scenario where there may be a lack of source and/or target language sentences during the instruction tuning process.
More specifically, from the 13 selected languages, we hold out 6 languages as unseen languages. We further partition the remaining 7 languages into three groups: Only-Source (languages only appearing on the source side), Only-Target (languages only appearing on the target side) and Source-Target (languages appearing on both the source and target side). We then form language pairs from these partitions following their requirements. This allows us to assess mFTI's performance under the following conditions:
* **Seen Both Sides**. Both the source and the target language appear in the finetuning corpus. This can be further divided into:
* **Same Direction**. The same translation direction is trained during mFTI.
* **Reversed Direction**. The same translation direction does not appear when training, but the reversed direction does.
* **Unseen Direction**. The translation pair (neither the same nor the reverse) does not appear when training.
* **Unseen Src**. Only the target language sentences appear when training.
* **Unseen Tgt**. Only the source language sentences appear when training.
* **Unseen Both Sides**. Neither source language nor target language sentences appear in the finetuning corpus.
### mFTI Learns to Follow Translation Instructions across Conditions
We finetune XGLM on the corpus described in the previous section, and present the results in Table 3. Since there are 16 language directions in the training corpus, we denote the finetuned model as mFTI-16.
**mFTI-16 brings improvements in most settings, yet much less than mFTI-all.** Firstly, we can see that mFTI-16 brings improvements in most settings except Reversed Direction, demonstrating its effectiveness. However, the improvements are much smaller compared to mFTI-all, even for the Same Direction partition.
\begin{table}
\begin{tabular}{l c} \hline \hline & **BLEU** \\ \hline Low quality & 15.0 \\ High quality & 16.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The translation performance of finetuned XGLM as the quality of the finetuning corpus varies.
Figure 3: The translation performance of finetuned XGLM as the number of model parameters and training examples scales.
This can be attributed to the smaller number of language pairs used in finetuning, which we will discuss in Section 4.3.
**Language position shift between training and testing has negative effects on translation performance.** The translation performance of mFTI-16 on Reversed Direction degrades by 0.8 BLEU compared to 8-shot ICL. By inspecting the translation results, we find that mFTI-16 suffers from severe off-target problems, i.e. generating translations in the wrong target language. We hypothesize that this can be attributed to the shift in the relative positions of the source and target languages between training and testing.
**Seeing target languages during finetuning matters more than seeing source languages.** When there are unseen languages in the language direction, the improvement on Unseen Src is much larger compared to Unseen Tgt, which indicates that understanding the specified target language may be more important than understanding the source language.
**Unseen Both Sides also benefits from mFTI training.** The most surprising phenomenon is that language pairs from the Unseen Both Sides partition also benefit from mFTI, with an improvement of 0.7 BLEU compared to 8-shot ICL. Since mFTI-16 does not see any sentences of the source and target languages, the improvements indicate a better understanding of the translation instruction, which we will discuss in Section 4.4.
### Instruction Tuning with More Language Pairs Leads to Better Translation Performance
Previous instruction-tuning works show that scaling the number of tasks significantly benefits the unseen tasks Chung et al. (2022). Observing the performance gap of Same Direction between mFTI-16 and mFTI-all,
we gradually add more language pairs to mFTI-16, and plot the translation performance on each partition in Figure 4. In order to isolate the possible effect of additional monolingual sentences, we only add language pairs that do not involve the 13 studied languages.
It can be seen that as the number of language pairs grows, the translation performances of all partitions generally increase, validating the importance of more language pairs. Notably, the performance of the Reversed Direction partition are significantly boosted, outperforming 8-shot ICL by a large margin when increasing the number of language pairs from 16 to 30. This emphasizes the importance of more language pairs for mFTI.
Surprisingly, the performance of the Unseen Both Sides partition improves the most.
Figure 4: Translation performance on different partitions as the number of language pairs grows. Left: partitions where sentences of both source and target language are seen when training. Right: partitions where sentences of at least one side are unseen when training.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{3}{c}{**Seen Both Sides**} & \multicolumn{1}{c}{**Unseen Src**} & \multicolumn{1}{c}{**Unseen Tgt**} & \multicolumn{1}{c}{**Unseen Both Sides**} \\ & **Same Direction** & **Reversed Direction** & **Unseen Direction** & **Unseen Src** & **Unseen Tgt** & **Unseen Both Sides** \\ \hline
**8-shot ICL** & 14.5 & 14.5 & 11.2 & 13.5 & 13.6 & 14.6 \\
**mFTI-16** & 15.7(+1.2) & 13.7(-0.8) & 12.6(+1.4) & 14.9(+1.4) & 14.5(+0.9) & 15.3(+0.7) \\ \hline
**mFTI-all** & 16.7 & 16.8 & 14.6 & 17.6 & 17.0 & 18.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Translation performances on different partitions. mFTI-16: XGLM multilingual finetuned with translation instructions on a mixture of 16 language pairs described in 4.1.
Since no sentences in the languages of the Unseen-both pairs are added, this indicates that the instruction-following ability on these language pairs has been significantly enhanced, which we will discuss in the next section.
### mFTI Generalizes the Understanding of Translation Instruction to Unseen Directions
In this section, we aim to understand how mFTI facilitates the understanding of instructions from a more fine-grained view, i.e. specific _language directions_ and _instruction-following errors_.
For the language directions, we select Ru\(\rightarrow\)Fr (high resource), Bg\(\rightarrow\)Ar (medium resource), Ca\(\rightarrow\)Ta (low resource) from the Unseen-Both Sides partition to study mFTI's effectiveness under different resource settings.
For instruction errors, we identify the following four major problems in the translations:
* _Source Copy_ (**SC**): This error occurs when the model simply copies the source sentence as the translation without making any meaningful changes. We identify this error by calculating the sentence-level BLEU score between the translations and the source sentences. If the BLEU score is above 80, it indicates that the translation is nearly identical to the source.
* _Off-target translation_ (**OT**): In this case, the model fails to produce sentences in the target language. We detect this error by using a language identification tool, such as _fasttext_, to determine the language of the generated translations.
* _Over/under translation_ (**OU**): This error refers to situations where the model produces translations that are significantly longer or shorter than references. We consider translations with a length ratio above 2 or below 0.5 as over- or under-translations, respectively.
* _Oscillatory hallucination_ (**OH**): This error occurs when the model gets stuck in a specific translation state and generates repeated n-grams until reaching the maximum length. We define translations with n-grams that consecutively repeat at least three times as oscillatory hallucinations. A sketch of all four detectors is given after this list.
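One plausible implementation of the four detectors, using the thresholds stated above, is sketched below; `lid.176.bin` is fasttext's public language-ID model (downloaded separately), and everything else is our own transcription rather than the authors' code:

```python
import fasttext
import sacrebleu

lid = fasttext.load_model("lid.176.bin")  # fasttext language-ID model

def source_copy(hyp, src):
    return sacrebleu.sentence_bleu(hyp, [src]).score > 80

def off_target(hyp, tgt_code):  # tgt_code: e.g. "fr"
    label = lid.predict(hyp.replace("\n", " "))[0][0]
    return label != f"__label__{tgt_code}"

def over_under(hyp, ref):
    ratio = len(hyp.split()) / max(len(ref.split()), 1)
    return ratio > 2 or ratio < 0.5

def oscillatory(hyp, max_n=4):
    toks = hyp.split()
    for n in range(1, max_n + 1):  # any n-gram repeated 3+ times in a row
        for i in range(len(toks) - 3 * n + 1):
            if toks[i:i+n] == toks[i+n:i+2*n] == toks[i+2*n:i+3*n]:
                return True
    return False
```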
Figure 5: Trends of translation and instruction-following performance on 3 Unseen-both language pairs when scaling up the number of language pairs during mFTI. The left 2 figures show the BLEU score and overall instruction-following error ratios, respectively. The rest 4 figures show the ratios of 4 specific error types, respectively, i.e. source copy, off-target, over/under translation, and oscillatory hallucination. X-axis denotes the number of training language pairs. Y-axis denotes the percentage of translations with specific error types.
#### 4.4.1 Adding Irrelevant Language Pairs Reduces SC, OT and OU Ratios
In Section 4.3, we show that additional language pairs in mFTI lead to improved BLEU scores even for the Unseen Both Sides partition. We provide an in-depth analysis here from the aforementioned fine-grained views. We plot the trends of translation and instruction-following performance, and the ratios of 4 specific instruction-following errors as the number of additional language pairs grows. The results are in Figure 5.
**More language pairs reduce instruction-following errors and improve translation performance.** Firstly, we can see that as more language pairs are added to the training corpus, instruction-following errors on Unseen-both language pairs are gradually reduced, leading to improvements in BLEU scores. Comparing different language pairs, we can see that high- and medium-resource language pairs generally perform better than low-resource language pairs on all four types of errors. Since all these language directions are unseen during instruction finetuning, this highlights the importance of language skills acquired during the pretraining phase.
**SC: Solved.** It can be observed that after adding about 30-60 language pairs, the model learns to avoid the SC problem, indicating that this is a relatively easy problem to solve.
**OU: Decreased to the level of mFTI-all.** We can further see that adding more language pairs is also effective for reducing OU errors, as the error ratios significantly decrease as the number of language pairs grows. Notably, after scaling the number of language pairs to 150, the OU ratios of the three language pairs are comparable to those of mFTI-all. This demonstrates the effectiveness of mFTI.
**OT: Decreased, but not to a satisfactory level.** Turning to the OT ratio, we observe that it also decreases as the number of language pairs grows. However, even after scaling the number of language pairs to 150, the OT ratio still cannot be decreased to the level of mFTI-all.
**OH: No effect.** Finally, we can see that with the increment in the number of language pairs, the OH ratio does not show a clear decreasing trend, which we will further discuss in the next section.
#### 4.4.2 Joint Training with Monolingual Generation Instructions Helps Reduce OH and OT Problems More Efficiently
In the previous section, we found that the off-target (OT) and oscillatory hallucination (OH) problems on some language pairs cannot be fully solved to the level of mFTI-all by adding more irrelevant language pairs. We note that both problems are only related to the target language: the OT problem can be attributed to the model's inability to relate target language names to the corresponding scripts, and the OH problem might be caused by poor modeling of the target languages. We hypothesize that finetuning models on monolingual generation instructions, i.e. given a language name, generating fluent sentences in that language, should help ease these problems.
To this end, we organize the monolingual sentences of the held-out languages into monolingual generation instructions. The template we adopt is "\([l_{i}]:\)". We then finetune XGLM on the translation instruction dataset as well as these monolingual generation instructions.
We report the BLEU score, OT ratio and OH ratio in Table 4.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Ru\(\rightarrow\)Fr**} & \multicolumn{3}{c}{**Bg\(\rightarrow\)Ar**} & \multicolumn{3}{c}{**Ca\(\rightarrow\)Ta**} \\ \hline & **OT** & **OH** & **BLEU** & **OT** & **OH** & **BLEU** & **OT** & **OH** & **BLEU** \\ mFTI-16 & 1.2 & 0.2 & 25.1 & 10.4 & 1.8 & 8.1 & 13.9 & 11.8 & 5.2 \\ + _unseen-mono_ & 0.9 & 0.1 & 25.4 & 2.6 & 0.9 & 10.4 & 4.4 & 6.3 & 6.3 \\ \hline mFTI-150 & 0.8 & 0.3 & 27.0 & 3.4 & 2.3 & 10.8 & 1.8 & 14.8 & 8.7 \\ + _unseen-mono_ & 0.7 & 0.2 & 27.4 & 0.5 & 1.5 & 12.0 & 1.2 & 5.1 & 9.3 \\ \hline mFTI-all & 0.7 & 0.2 & 28.0 & 0.1 & 1.5 & 12.7 & 1.0 & 8.7 & 9.6 \\ \hline \hline \end{tabular}
\end{table}
Table 4: BLEU score, off-target ratio and oscillatory hallucination ratio before and after adding monolingual sentences to the finetuning corpus. Scores where adding monolingual sentences leads to improved quality are with green background.
Firstly, we can see that adding monolingual generation instructions for unseen-both language pairs helps mitigate the OT and OH problems in most scenarios, leading to better translation performance. Notably, by combining more irrelevant language pairs and monolingual sentences, the gap between mFTI-150 with monolingual sentences and mFTI-all has significantly diminished, even though the model has never seen parallel sentences of the tested languages before.
### mFTI Improves Language Alignment via Pivot Languages
Besides the understanding of translation instruction, another crucial knowledge that models must grasp to carry out the instruction is the alignment between source and target languages. However, in scenarios where direct parallel sentences are not available, models have limited access to alignment information. This situation resembles the zero-shot setting commonly studied in multilingual translation research Gu et al. (2019); Zhang et al. (2020); Arivazhagan et al. (2019); Liu et al. (2021). In this section, we aim to investigate the ability of mFTI to establish meaningful alignments through pivot languages in this scenario.
Specifically, for the three Unseen Both Sides language pairs X\(\rightarrow\)Y studied in the previous section, i.e. Ru\(\rightarrow\)Fr, Bg\(\rightarrow\)Ar and Ca\(\rightarrow\)Ta, we start from the mFTI-150 setting, and add parallel sentences of X\(\rightarrow\)En and En\(\rightarrow\)Y to the training corpus. We then perform mFTI using these augmented corpora and evaluate the model's performance on test sentences that do not contain instruction-following errors. As knowledge of language alignments is the last requirement for carrying out translation instructions once the model has learned to execute correct translation behavior, the performance on these sentences serves as a reliable indicator of the model's proficiency in language alignment.
The results are in Table 5. First, we can see that mFTI-150 and 8-shot ICL perform comparably, both significantly worse than mFTI-all. Since the three tested language pairs are unseen in mFTI-150, this indicates that, similar to mFTI-150, the main role of ICL is to enhance the model's understanding of the translation behavior rather than source-target alignment knowledge.
However, after adding pivot parallel sentences, the model's performance (+_pivot_) is significantly boosted. This demonstrates the potential of mFTI to leverage pivot languages to boost direct alignment between languages and improve translation performances.
## 5 Related Works
### LLM for MT
Machine translation researchers have recognized the potential of utilizing LLMs for MT, as these models acquire advanced language understanding skills during pretraining. The prevailing paradigm for leveraging LLMs for MT is in-context learning (ICL). For instance, Lin et al. (2022) demonstrated that providing 32 in-context examples can make XGLM outperform GPT-3 and a supervised multilingual translation model. Other studies, such as Vilar et al. (2022), Agrawal et al. (2022), and Zhu et al. (2023), have investigated different factors that affect ICL's performance, including example quality, example selection strategy, and template sensitivity. Moreover, works such as Hendy et al. (2023) and Jiao et al. (2023) have studied the translation quality of various GPT models and found their performances to be comparable to commercial translation systems on high-resource language pairs. In contrast to these works, our research focuses on exploring existing LLMs' translation ability via mFTI.
The most similar work to ours is Jiao et al. (2023), which finetunes the open-source LLM LLaMA (Touvron et al., 2023) on a mix of translation data and the _alpaca_ instruction dataset (Taori et al., 2023) to make it a better translator. However, they mainly focus on the bilingual translation setting, while our work investigates the multilingual generalization when finetuning LLMs to carry out translation instructions.
### Generalization On Unseen Language Pairs
Our work also has a close relation to zero-shot translation in the multilingual translation setting.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & **Ru\(\rightarrow\)Fr** & **Bg\(\rightarrow\)Ar** & **Ca\(\rightarrow\)Ta** \\ \hline
8-shot ICL & 27.0 & 11.6 & 9.7 \\ mFTI-all & **28.2** & **13.2** & 10.6 \\ \hline mFTI-150 & 27.5 & 11.6 & 9.2 \\ + _pivot_ & 27.9 & 13.0 & **10.8** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Translation performance on test sentences without instruction-following errors. Best performances are in bold. The second-best performances are underlined.
In this setting, there are no direct parallel sentences between the source and target language. There are two major problems in zero-shot translation: generating the correct target language and learning universal language representations.
For the first problem, Gu et al. (2019); Zhang et al. (2020) leverage back-translation to add more target-language-related training data. Arivazhagan et al. (2019); Liu et al. (2021) impose regularization on the encoder/decoder to make the model more aware of the target language. Unlike their works, we discuss the off-target problem in the context of LLMs, and find adding both irrelevant language pairs and additional monolingual sentences can ease the problem to a great extent.
For the second problem, previous works focus on learning language-agnostic representations through additional regularization of model representations (Arivazhagan et al., 2019; Pan et al., 2021) and consistency between semantically equivalent sentences (Al-Shedivat and Parikh, 2019; Yang et al., 2021). Instead, our work mainly aims to reveal the helpfulness of multilingual finetuning of LLMs for unseen language pairs by internalizing pivot language information.
Furthermore, our discussion encompasses a more stringent version of zero-shot translation, where neither source nor target language sentences are present in the finetuning corpus. This demands a stronger generalization ability, as the model must effectively utilize the language knowledge acquired during pretraining and the translation task knowledge acquired during finetuning to generate high-quality translations.
### Instruction Finetuning
Our work focuses on finetuning a pretrained model with instructions to improve zero-shot translation performance. Several previous works have demonstrated that, without few-shot examples, it is harder for LLMs to perform well in zero-shot settings, and that finetuning language models on a variety of tasks can significantly improve zero-shot performance. For instance, Wei et al. (2022) aim to improve generalization to unseen tasks by performing instruction tuning. Muennighoff et al. (2023) further extend instruction tuning from English data to multilingual data and find that multilingual finetuning leads to better performance on unseen tasks and unseen languages. Chung et al. (2022) explore instruction tuning from the perspective of the number of finetuning tasks and the LLM size, and find that scaling these factors dramatically improves zero-shot performance.
In our work, we primarily focus on the translation performance of LLMs. We consider the factors mentioned above, including the scale of the finetuning corpus, the model size, and the language selection within the finetuning corpus, for a comprehensive analysis of the translation performance of the LLM. Additionally, we conduct a detailed analysis of the model's ability to understand and execute translation tasks after instruction finetuning. We systematically track and analyze translation errors made by the LLM and show that they can be solved or alleviated by adding both translation and monolingual generation instructions.
## 6 Conclusion
In this paper, we explore Multilingual Finetuning with Translation Instructions (mFTI) to better unleash the translation ability of multilingual LLMs. Through extensive experiments, we demonstrate that by training on a mixture of 1000 sentences per language pair, mFTI achieves better performance than 8-shot ICL, indicating a translation ability in LLMs left untapped by previous works.
Moreover, we systematically discuss the working mechanism of mFTI by analyzing it from the view of instruction completion. Our experiments demonstrate that mFTI helps the model better follow the instruction by introducing more language pairs and monolingual sentences, and enhances the direct language alignment by learning from pivot language pairs.
Our paper also unveils remaining translation issues when adopting LLMs for zero-shot machine translation, i.e. over/under translation, oscillatory hallucination, and mistranslation caused by incorrect alignments. Future works should focus on acquiring more language knowledge from the pretraining phase, and designing better regularization terms to solve these problems.
|
2304.13903 | On Propagation Characteristics of Reconfigurable Surface Wave Platform:
Simulation and Experimental Verification | Reconfigurable intelligent surface (RIS) as a smart reflector is
revolutionizing research for next-generation wireless communications.
Complementing this is a concept of using RIS as an efficient propagation medium
for potentially superior path loss characteristics. Motivated by a recent
porous surface architecture that facilitates reconfigurable pathways with
cavities filled with fluid metal, this paper studies the propagation
characteristics of different pathway configurations in different lossy
materials on the reconfigurable surface wave platform by using a commercial
full electromagnetic simulation software and S-parameters experiments. This
paper also looks into the best scheme to switch between a straight pathway and
a $90^\circ$-bend and attempts to quantify the additional path loss when making
a turn. Our experimental results verify the simulation results, showing the
effectiveness of the proposed reconfigurable surface wave platform for a
wide-band, low path loss and highly programmable communications. | Z. Chu, K. F. Tong, K. K. Wong, C. B. Chae, C. H. Chan | 2023-04-27T01:05:31Z | http://arxiv.org/abs/2304.13903v2 | On Propagation Characteristics of Reconfigurable Surface-Wave Platform: Simulation and Experimental Verification
###### Abstract
Reconfigurable intelligent surface (RIS) as a smart reflector is revolutionizing research for next-generation wireless communications. Complementing this is a concept of using RIS as an efficient propagation medium for potentially superior path loss characteristics. Motivated by a recent porous surface architecture that facilitates reconfigurable pathways with cavities filled with fluid metal, this paper studies the propagation characteristics of different pathway configurations and evaluates the reconfigurable surface-wave platform by using a commercial full electromagnetic simulation software and experiments. This paper also looks into the best scheme to switch between a straight pathway and a \(90^{\circ}\)-turned pathway and attempts to quantify the additional path loss when making a turned pathway. Our experimental results verify the simulation results, showing the effectiveness of the proposed reconfigurable surface-wave platform for a wide-band, low path loss and highly controllable communication.
Fluid metal, reconfigurable intelligent surface, surface wave communications, switchable pathway.
## I Introduction
With research blossoming for the sixth-generation (6G) mobile communications, a few white papers have already identified reconfigurable intelligent surface (RIS) as one of the key enabling technologies, e.g., [1, 2]. RIS is enthusiastically motivated by its relatively low-cost deployment and great coverage performance [3]. Recently, there has been a twist: extending RIS to serve partly as a programmable propagation medium for less path loss and more controlled communication, and partly as an intelligent reflect-beamformer [4, 5].
Compared to the propagation of traditional space waves in free space, surface waves propagate on a 2-dimensional (2D) dielectric-coated planar conductor plate with a path loss that is proportional to the propagation distance \(d\), rather than the less desirable square of \(d\) in the case of space waves [6, 7]. This indicates that the use of surface waves may provide better propagation conditions, e.g., stronger desired signals and less interference, for mobile communications [4, 5]. In recent years, it is encouraging to witness that surface waves have also found applications in networks-on-chip systems [8, 9, 10], wearable devices for on-body networks [11, 12] and cable replacement to reduce through-life costs in industrial environments [13].
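Ignoring material losses and antenna gains, the geometric spreading terms alone already illustrate the advantage; a back-of-the-envelope comparison in Python:

```python
import numpy as np

d = np.logspace(0, 2, 5)              # 1 m to 100 m
space_wave = -20 * np.log10(d)        # power ~ 1/d^2: -20 dB per decade
surface_wave = -10 * np.log10(d)      # power ~ 1/d:   -10 dB per decade
for di, sp_db, sw_db in zip(d, space_wave, surface_wave):
    print(f"d = {di:6.1f} m: space {sp_db:6.1f} dB, surface {sw_db:6.1f} dB")
```

From spreading alone, the surface wave thus retains 20 dB more power at 100 m, before any dielectric or conductor losses are accounted for.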
The propagation of surface waves is typically characterized by a non-directional wavefront emanating from the transmitting terminal and then spreading to the entire dielectric plate. Previous studies investigated how to manipulate surface waves using techniques such as leaky-wave antenna [14], holographic wide-band beam-scanning [15] and printed planar structures in specific geometries [16]. Isotropic and anisotropic surface wave cloaking techniques were further proposed in [17], and anisotropic materials were used to reduce scattering loss when surface waves propagated past a sharp angle at \(6\mathrm{GHz}\) in [18]. Transformation optics [19] and stacked Eaton lenses [20] were also found to be useful in directing surface waves in intended directions via refraction in the terahertz band.
Unfortunately, thus far, there has not been much progress in developing reconfigurable hardware architectures that could dynamically control the direction of surface waves. Motivated by this, the objective of this paper is to devise a reconfigurable surface-wave platform that can create directed pathways freely on demand, and to evaluate such a design using computer simulations and experiments. Our design utilizes fluid metal, such as Galinstan, to isolate and guide surface waves into specific pathways. The selection of fluid metal is highly motivated by its high conductivity and low adhesion [21]; fluid metal has been applied in many fields such as microfluidics [22] and liquid metal antennas [23]. Our proposed reconfigurable surface is based on a porous surface model where small-sized cylindrical cavities are evenly distributed over a 2D platform.
As shown in Fig. 1, conductive fluid metal can be fed to fill the selected cavities by using micro-pumps under the metal ground plane to create a surface wave propagation pathway. Low adhesion and high fluidity fluid metal can be pumped in or out of the cavities from the microtubes attached at the bottom [24]. In so doing, the propagation of surface waves can be predictably confined in the dedicated pathway created by the fluid metal columns and can be altered by changing the pathway formation.
The proposed porosity-based reconfigurable surface-wave platform was first considered in [25]. However, the emphasis
of [25] was to investigate the performance of different porosity densities and patterns, and the study was carried out using computer simulations only. Different from [25], the contributions of this paper are twofold. First of all, we report experimental results of a 3-dimensional (3D) printed surface prototype and compare them with full 3D electromagnetic simulation results. Secondly, in this paper, we investigate how a turned pathway can be designed and quantify the additional path loss. More specifically, we will examine the following characteristics:
* The trend of surface wave path loss in a straight pathway in the millimeter-wave band.
* The path loss and isolation of different guided pathways formed by single-layer or multi-layer metal walls.
* The frequency dependency of the path width.
* The level of reconfigurability between a straight pathway and a \(90^{\circ}\)-turn pathway.
* The power attenuation of different corner configurations in a \(90^{\circ}\)-turn pathway.
In summary, our results contribute to understanding the propagation characteristics of surface waves in physical environments and offer experimental evidence on the feasibility of the porosity-based reconfigurable surface-wave platform.
## II Surface Wave on Porosity-Based Reconfigurable Surface
### _Theory_
As shown in Fig. 1, we consider a porous reconfigurable surface geometry consisting of a top porous dielectric layer sitting on a conductive metal ground. The surface wave is excited by Transducer 1, in the form of a common rectangular waveguide, and then propagates along the interface of the surface and the air in an open environment to Transducer 2 at the right end, as shown in Fig. 2\((a)\), where \(+z\) is the propagation direction. As shown in Fig. 2\((c)\), most of the wave is bound to the dielectric-metal surface as a surface wave. To maximize the surface wave excitation efficiency (i.e., suppressing the excited space wave), an optimized launcher can be used [26]. Surface waves can be considered as a specific solution of the cylindrical wave in the Transverse Magnetic (TM) mode, given by [27]
\[H_{y}=\frac{A}{\sqrt{d}}e^{-\gamma_{z}z}e^{-\gamma_{x}x}e^{jwt}, \tag{1}\]
\[E_{x}=\frac{A\gamma_{z}}{jw\varepsilon_{0}\sqrt{d}}e^{-\gamma_{z}z}e^{-\gamma_{x}x}e^{jwt}, \tag{2}\]
\[E_{z}=-\frac{A\gamma_{x}}{jw\varepsilon_{0}\sqrt{d}}e^{-\gamma_{z}z}e^{-\gamma_{x}x}e^{jwt}, \tag{3}\]
where \(A\) is the amplitude, \(d\) denotes the surface wave propagation distance along the surface, \(w\) is the angular frequency, \(\varepsilon_{0}=8.854\times 10^{-12}\mathrm{F}/\mathrm{m}\) is the vacuum permittivity, \(\gamma_{x}\) and \(\gamma_{z}\) are the propagation coefficients vertically away from and horizontally along the surface in the \(+x\) and \(+z\)-direction, respectively, which can be described as in [28]:
\[\gamma_{x}=w^{2}\mu_{0}\varepsilon_{0}\left[\frac{(\varepsilon_{r}-1)}{\varepsilon_{r}}h+\frac{\Delta}{2}-j\frac{\Delta}{2}\right], \tag{4}\]
\[\gamma_{z}=\sqrt{-w^{2}\mu_{0}\varepsilon_{0}-\gamma_{x}^{2}}, \tag{5}\]
Fig. 2: The electric field (E-field) contour of \((a)\) a non-guided surface model in front view, showing that surface wave adheres to the dielectric surface while the space wave is caused by the fact that the transducer (WR-28) considered is not optimized for the surface, \((b)\) in 3D view, illustrating that the surface wave propagates over the entire surface, and \((c)\) a guided model where the surface wave is guided along a straight pathway by two columns of fluid metal walls in the reconfigurable surface geometry in Fig. 1.
Fig. 1: A reconfigurable surface with evenly distributed cavities that can be filled with conductive fluid metal.
where \(\mu_{0}=4\pi\times 10^{-7}\mathrm{H}/\mathrm{m}\) denotes the vacuum permeability, \(\varepsilon_{r}\) represents the relative permittivity of the dielectric layer with thickness \(h\), and
\[\Delta=\sqrt{\frac{2}{w\mu_{0}\sigma_{m}}} \tag{6}\]
is the skin depth of the metal ground, where \(\sigma_{m}\) is the electric conductivity of the metal ground.
On the other hand, the surface impedance \(Z_{s}\) for the air-dielectric interface can be obtained by
\[Z_{s}=R_{s}+jX_{s}=w\mu_{0}\frac{\Delta}{2}+jw\mu_{0}\left[\frac{(\varepsilon_{r}-1)}{\varepsilon_{r}}h+\frac{\Delta}{2}\right], \tag{7}\]
where \(R_{s}\) denotes the surface resistance and \(X_{s}\) is the surface reactance. Note that an inductive reactance with a sufficiently large value can bind the surface wave closer to the air-dielectric interface, and, if possible, \(Z_{s}\) should be set to an optimum value to achieve the maximum efficiency in surface wave excitation from the transducers [26].
For the proposed porous reconfigurable surface in which the cylindrical cavities are distributed evenly inside the dielectric layer, as depicted in Fig. 1, the relative permittivity \(\varepsilon_{r}\) of a solid dielectric layer becomes the effective relative permittivity \(\varepsilon_{r}^{\text{eff}}\) which is dependent on the surface porosity
\[\rho=\frac{S_{\text{cavity}}}{S_{\text{surface}}}, \tag{8}\]
where \(S_{\text{cavity}}\) is the circular surface area of the cavities and \(S_{\text{surface}}\) is the total area of the top dielectric layer surface. In [25], it has been illustrated that a surface with high porosity performs similarly to a homogeneous solid surface. Specifically, \(\varepsilon_{r}^{\text{eff}}\) can be obtained by [29]
\[\varepsilon_{r}^{\text{eff}}=\frac{\varepsilon_{r}\left[1+3\varepsilon_{r}+3\rho(1-\varepsilon_{r})\right]}{1+3\varepsilon_{r}+\rho(\varepsilon_{r}-1)}. \tag{9}\]
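As a quick numerical illustration, Eqs. (6)-(9) can be evaluated directly. The sketch below uses the prototype material constants reported later in Section III (\(\varepsilon_{r}=2.796\), \(\sigma_{s}=3.15\times 10^{6}\mathrm{S/m}\)); the dielectric thickness \(h\) and the porosity \(\rho\) are placeholder assumptions, not the values of TABLE I.

```java
// Illustrative evaluation of Eqs. (6)-(9) at 26 GHz. h and rho are ASSUMED
// placeholders; epsR and sigma are the prototype values quoted in Sec. III.
public class SurfaceParams {
    public static void main(String[] args) {
        double w = 2 * Math.PI * 26e9;           // angular frequency [rad/s]
        double mu0 = 4 * Math.PI * 1e-7;         // vacuum permeability [H/m]
        double epsR = 2.796;                     // dielectric resin
        double sigma = 3.15e6;                   // silver-ink ground [S/m]
        double h = 1e-3;                         // ASSUMED dielectric thickness [m]
        double rho = 0.5;                        // ASSUMED surface porosity

        double delta = Math.sqrt(2.0 / (w * mu0 * sigma));            // Eq. (6)
        double rS = w * mu0 * delta / 2.0;                            // Eq. (7), real part
        double xS = w * mu0 * ((epsR - 1) / epsR * h + delta / 2.0);  // Eq. (7), imaginary part
        double epsEff = epsR * (1 + 3 * epsR + 3 * rho * (1 - epsR))
                      / (1 + 3 * epsR + rho * (epsR - 1));            // Eq. (9)

        System.out.printf("skin depth = %.2f um%n", 1e6 * delta);     // ~1.76 um
        System.out.printf("Zs = %.3f + j%.1f ohm%n", rS, xS);         // ~0.18 + j132 ohm
        System.out.printf("eps_eff = %.2f%n", epsEff);                // ~1.82
    }
}
```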
### _Simulation results_
The full 3D electromagnetic simulation results in Fig. 2\((c)\) demonstrate that the surface wave can be guided within a straight pathway by two columns of fluid metal walls on the proposed reconfigurable surface. In contrast, Fig. 2\((b)\) shows that the surface wave spreads over the entire surface in the non-guided model where the cavities remain empty. To quantify the improvement, we normalize the E-field power density at the reference point, which is \(5\mathrm{mm}\) in front of the aperture of Transducer \(1\), as \(0\mathrm{dB}\). The results indicate that the surface wave decays to \(-2.2\mathrm{dB}\) at the aperture of Transducer \(2\) on the straight pathway with an \(80\mathrm{mm}\) propagation distance, while it reduces to \(-15.1\mathrm{dB}\) in the non-guided case. It is evident that the E-field is significantly concentrated within the straight pathway formed by the fluid-metal-filled cavities. Moreover, the power outside the pathway is kept as low as \(-37.1\mathrm{dB}\), while it is \(-6.2\mathrm{dB}\) at the same position in the non-guided scenario, indicating that the proposed reconfigurable surface can form an isolated pathway for efficient surface wave communications.
## III Experimental Results
### _Setup_
In the proposed porosity-based surface, conductive fluid metal is fed to fill selected cavities by micro-pumps under the ground plane to form a dedicated surface wave pathway, as shown in Fig. 3\((a)\). The fluid metal can be pumped in or out of the cavities from the microtubes attached at the bottom to create or withdraw the pathway on demand, as depicted in Fig. 3\((b)\) and \((c)\). A prototype surface pumped by Galinstan, a conductive fluid alloy, is shown in Fig. 3\((d)\). It is also possible to create different surface wave propagation pathways by arranging fluid metal pins in columns to assemble a 'fluid metal wall'.
Fig. 4\((a)\) illustrates the experimental setup of the reconfigurable surface prototype connected with a vector network analyzer. This prototype was 3D-printed with the dielectric resin (\(\varepsilon_{r}=2.796\) and \(\tan\delta=0.0155@26\mathrm{GHz}\)) and silver ink (\(\sigma_{s}=3.15\times 10^{6}\mathrm{S/m}\)) [30]. To demonstrate the concept and simplify the peripherals, the columnar silver ink is used as the fluid metal, see Fig. 4\((b)\)-\((d)\), and it is printed into the dielectric layer and connected to the ground silver layer. The conductivity of the silver ink is about \(8.9\%\) lower than that of Galinstan, but it has been verified that this difference in conductivity does not significantly affect the reported results. The center-to-center separation between two silver columns is defined as
Fig. 3: Illustration of the working principle of the proposed reconfigurable surface wave platform: \((a)\) The fluid metal is pumped in or out of the cavities from the bottom adhesive microtubes to \((b)\) create or \((c)\) withdraw the surface wave pathway. Then \((d)\) shows the surface prototype pumped by Galinstan.
the pathway width \(w_{c}\) of the straight pathway. Two WR-28 rectangular waveguides with a thickness of \(1\mathrm{mm}\) were selected as the transducers, located at either end of the pathway. The transducers were securely mounted on the 3-axis optical linear stages and can be shifted precisely to change the propagation distance \(d\). The \(\mathrm{S}_{21}\) parameter was measured by the vector network analyzer connected to the transducers by coaxial cables. Note that there is a gap of approximately \(0.5\mathrm{mm}\) between the bottom of the transducer and the surface, which slightly reduces the excitation efficiency of the surface wave. To ensure consistency, this gap is also added in the simulation model built in the CST Studio Suite 2020 full-wave 3D electromagnetic software. The full physical dimensions of the reconfigurable surface are presented in TABLE I.
### _Path loss study_
As depicted in Fig. 4, the location of Transducer 2 can be adjusted by shifting the optical linear stage to measure the received power at different distances from Transducer 1. Fig. 5 presents both the measured and simulated \(\mathrm{S}_{21}\) results for the straight pathway, with a pathway width \(w_{c}=10\mathrm{mm}\), created by single-layer silver 'walls' (see footnote 1), over a propagation distance \(d\) ranging from \(50\mathrm{mm}\) to \(150\mathrm{mm}\) with a sampling interval of \(20\mathrm{mm}\). The \(\mathrm{S}_{21}\) 3dB half-power frequency bandwidth spans about \(3.5\mathrm{GHz}\), ranging from around \(23.7\) to \(27.2\mathrm{GHz}\), with the optimal frequency occurring at \(25\mathrm{GHz}\), which corresponds to the peak of the \(\mathrm{S}_{21}\) curve, as seen clearly in Fig. 5.
Footnote 1: In this paper, we abuse the word ‘wall’ to mean a row of cavities filled with fluid metal even though it is not a structure standing above the surface.
The results show that \(\mathrm{S}_{21}\) decays with increasing frequency, as the surface impedance, described in (7), deviates from its optimal value for the best surface wave excitation efficiency [26]. The dielectric tangent loss also increases with frequency, leading to higher loss at higher frequencies. In addition, we can observe that the measurement and simulation results are generally consistent. However, the experimental \(\mathrm{S}_{21}\) values are lower than those in simulations below around \(25\mathrm{GHz}\). This is possibly due to the cut-off frequency of the two coaxial-to-waveguide adapters that are connected to the transducers in the measurements [31], while in the simulations, the waveguide ports were directly connected to the transducers. Furthermore, we can see that this set of \(\mathrm{S}_{21}\) curves gradually drops as the distance increases across the bandwidth.
At \(26\mathrm{GHz}\), the \(\mathrm{S}_{21}\) value steadily decreases from \(-13.7\mathrm{dB}\) at a propagation distance of \(50\mathrm{mm}\) to \(-16.5\mathrm{dB}\) at \(150\mathrm{mm}\), with a total attenuation of \(2.8\mathrm{dB}\) over a distance of \(100\mathrm{mm}\). The normalized results (assuming the initial reference point at \(d=50\mathrm{mm}\)) are plotted in Fig. 6, where we compare the measured results, CST simulation, and the mathematical model developed in [32]. We also include the results at \(28\mathrm{GHz}\) and \(30\mathrm{GHz}\) for reference. Again, the measurement and simulation results agree very well with negligible discrepancy.
Moreover, it can be observed that the E-field power intensity along the pathway decreases linearly with the distance, with attenuation rates of \(28.56\mathrm{dB}/\mathrm{m}\) at \(26\mathrm{GHz}\), \(37.24\mathrm{dB}/\mathrm{m}\) at \(28\mathrm{GHz}\), and \(46.90\mathrm{dB}/\mathrm{m}\) at \(30\mathrm{GHz}\). This linear relationship is influenced by several factors, such as the metal wall material, the surface impedance, and the tangent loss of the dielectric material. To demonstrate the feasibility of surface wave technology for longer propagation distances, we use the
Fig. 4: The measurement setup for \((a)\) a 3D-printed reconfigurable surface prototype where the straight surface wave pathway is formed by \((b)\) single-layer silver walls or \((c)\) double-layer silver walls with a pathway width of \(10\mathrm{mm}\) or \((d)\) single-layer silver walls with a pathway width of \(12\mathrm{mm}\).
ray-tracing mathematical model developed in [32] to produce the results in Fig. 7. The straight pathways created by three different metal walls, namely perfect electric conductor (PEC) (\(\sigma_{\rm pec}=\infty\)), copper (\(\sigma_{e}=59.6\times 10^{6}\rm{S/m}\)), and Galinstan (\(\sigma_{g}=3.46\times 10^{6}\rm{S/m}\)), in the commercial low-loss Polytetrafluoroethylene (PTFE) (\(\tan\delta=0.00005@26\rm{GHz}\)) surface are considered here. The results indicate that at a propagation distance of \(50\rm{m}\), the path loss is less than \(40\rm{dB}\), with an attenuation rate of \(0.8\rm{dB/m}\), inside the straight pathway formed by the Galinstan walls. We can significantly improve the E-field power intensity by selecting an existing commercial low-loss
Fig. 5: \(\rm{S}_{21}\) results for measurement and simulation inside the straight pathway for various propagation distance \(d\) from \(50\rm{mm}\) to \(150\rm{mm}\).
Fig. 6: The E-field power intensity \((\rm{dBV/m^{2}})\) in a straight pathway against the propagation distance for different frequencies.
Fig. 7: The E-field power intensity by the analytical results in [32] assuming a straight pathway formed by the perfect electric conductor (PEC), copper and Galinstan walls in a low tangential loss PTFE surface with \(\tan\delta=0.00005@26\rm{GHz}\). The results for space wave, coaxial cable (Sucoflex-103) and surface wave in the non-guided model are also included for comparison.
dielectric material, i.e., PTFE with \(\tan\delta=0.00005@26\mathrm{GHz}\) [33], compared to the 3D-printed dielectric resin with \(\tan\delta=0.0155@26\mathrm{GHz}\). The results suggest that the reconfigurable surface pathway incurs much lower path loss compared to the space wave and even the coaxial cable, demonstrating the potential of surface waves for communication.
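The order-of-magnitude gap can be reproduced with elementary arithmetic: the guided surface wave attenuates linearly at the \(0.8\mathrm{dB/m}\) rate quoted above, while the textbook free-space path loss formula is used below as a simple stand-in for the space-wave comparison (an assumption for illustration; [32] uses a ray-tracing model).

```java
// Back-of-the-envelope comparison at 26 GHz over 50 m: linear surface-wave
// attenuation (0.8 dB/m, from the text) versus standard free-space path loss.
public class PathLossCompare {
    public static void main(String[] args) {
        double d = 50.0;                                         // distance [m]
        double lambda = 3e8 / 26e9;                              // ~11.5 mm
        double surface = 0.8 * d;                                // dB, linear in d
        double fspl = 20 * Math.log10(4 * Math.PI * d / lambda); // dB, d^2 power law
        System.out.printf("surface wave: %.1f dB, free space: %.1f dB%n",
                          surface, fspl);                        // 40.0 dB vs ~94.7 dB
    }
}
```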
### _Multi-layer fluid metal walls_
Here, we investigate the impact of using more layers of metal walls to form the pathway. We first consider the double-layer metal wall setup as depicted in Fig. 4\((c)\), and provide both experimental and simulation results of power intensity at several distances for different frequencies in Fig. 8. It can be observed that the general trend of \(\mathrm{S}_{21}\) for the double-layer and single-layer cases is very similar, except for a slight increase of approximately \(0.1\mathrm{dB}\) in \(\mathrm{S}_{21}\) for the double-layer case. This indicates that the double-layer performs slightly better than the single-layer in confining the signal within the pathway, as the double-layer enhances the ability of the propagation boundary to block signal leakage out of the pathway. Fig. 9 illustrates the E-field power intensity decay against the distance, where the simulation results of three- and four-layer metal walls are also included. It can be seen that the signal improvement in the case of three- and four-layer metal walls is less significant, at less than \(0.05\mathrm{dB}\). Thus, double-layer metal walls appear to be sufficient to create a highly isolated pathway and guide the surface wave when necessary. In general, the single-layer metal wall structure is adequate to create the reconfigurable surface pathway, as demonstrated in the results above.
### _Pathway width and operating frequency_
As depicted in Fig. 4\((d)\), the pathway width \(w_{c}\) is increased from \(10\mathrm{mm}\) to \(12\mathrm{mm}\), as defined by the two single-layer metal walls. Fig. 10 illustrates the measured and simulated \(\mathrm{S}_{21}\), and the simulated results for \(w_{c}=14\mathrm{mm}\) and \(16\mathrm{mm}\) with a fixed propagation distance of \(d=50\mathrm{mm}\). The results indicate that the \(\mathrm{S}_{21}\) curves shift to a lower frequency as the pathway width increases, and the optimal operating frequency of the surface changes from \(25\mathrm{GHz}\) at \(10\mathrm{mm}\) to \(24.2\mathrm{GHz}\) at \(12\mathrm{mm}\), then \(23.7\mathrm{GHz}\) at \(14\mathrm{mm}\) and \(23\mathrm{GHz}\) at \(16\mathrm{mm}\). Therefore, the pathway width and the wavelength of the signal should be positively correlated. We can also see that the pathway width determines the frequency-selectivity properties of the surface wave, and it is possible to filter and split the signal at different carrier frequencies by changing the pathway width on this reconfigurable surface. Additionally, we can observe that \(\mathrm{S}_{21}\) decreases slightly with the increase of pathway width, which may be caused by signal leakage at the transducer-to-surface interface. This means that the width of the pathway should preferably be close to the width of the transducer, which is
Fig. 8: The \(\mathrm{S}_{21}\) results in a straight pathway formed by single-layer and double-layer silver walls in the measurements and simulations with a propagation distance \(d=50\mathrm{mm}\), \(110\mathrm{mm}\) and \(150\mathrm{mm}\).
Fig. 9: The E-field power intensity in a straight pathway against the distance after normalization for the \((a)\) measurements and simulations containing single-layer, double-layer and \((b)\) multi-layer walls results at \(26\mathrm{GHz}\).
\(w_{a}=7.1\mathrm{mm}\) here, to minimize leakage.
### _Reconfigurable pathway_
Fig. 11\((a)\) depicts a T-junction reconfigurable surface featuring an adjustable junction in which the positions of metal pins can be altered to switch the surface wave propagation direction between a straight and a \(90^{\circ}\)-turn pathway. In the implementation, the switchable pathway can be achieved by the flow of fluid metal in the cavities, as described in Section II. To demonstrate the concept, we use copper pins with the same radius as the circular cavities to evaluate the surface performance conveniently. In the simulation models, E-field sampling probes, marked in green in Fig. 11\((a)\), with an interval of \(10\mathrm{mm}\), have been added along the center line of the straight and \(90^{\circ}\)-turn pathways. When copper pins are inserted into the cavities marked in black in Fig. 11\((a)\) of the T-junction, the surface wave propagates from Transducer 1 to 2 along a straight pathway and is blocked from propagating to Transducer 3, as shown in Fig. 11\((b)\). On the other hand, when the pins are re-positioned along a \(45^{\circ}\) line (marked in red in Fig. 11\((a)\)) to direct the signal at the T-junction, the surface wave can be guided from Transducer \(1\) to \(3\) through a \(90^{\circ}\) turn, as shown in Fig. 11\((c)\). From the E-field curve, we can observe that there is a difference of more than \(30\mathrm{dB}\) between the desired and undesired pathways. The results indicate that the reconfigurable surface can effectively direct most of the surface wave to the desired receiver along an adaptable pathway and reduce the power in other undesirable directions through dynamic pathway selection. Note that the measurement results are sampled by shifting the stages holding the transducer to the probe locations. This T-junction is considered a proof-of-concept, and the fluid metal pins on
Fig. 11: Illustration of \((a)\) a T-junction reconfigurable surface structure for guiding the surface wave propagation direction along a \((b)\) straight pathway from Transducer 1 to 2 or a \((c)\)\(90^{\circ}\)-turn pathway from Transducer 1 to 3 by adjusting the shape of the junction in top view.
Fig. 10: Measurements and simulation results in different pathway widths of \(10\mathrm{mm}\), \(12\mathrm{mm}\), \(14\mathrm{mm}\) and \(16\mathrm{mm}\) with a propagation distance \(d=50\mathrm{mm}\).
the surface can create multiple junctions to change the surface wave propagation directions or bypass obstacles.
Furthermore, we can directly compare the \(\mathrm{S}_{21}\) results of the straight and \(90^{\circ}\)-turn pathways using the results in Fig. 12. The results reveal that an additional loss of almost \(3.2\mathrm{dB}\), caused by a single turn, can be observed along the \(90^{\circ}\)-turn pathway, which should result from signal reflection at the turn. This reflection not only forms a standing wave that increases signal fluctuations but also weakens the signal in the pathway. Additionally, the loss at the corners can be moderately reduced by adjusting the distribution of metal pins to change the corner shape, which will be discussed in the next subsection.
### _Corner optimization_
To study the losses associated with different corner configurations, we replace the T-junction in Fig. 11\((a)\) with corners of varying shapes in a \(90^{\circ}\)-turn pathway for measurement, as illustrated in Fig. 13\((a)\). Corner 0 is used as a standard right-angle turn reference, consisting of inner and outer metal walls both at \(90^{\circ}\). Corner 1 is similar to Corner 0 but with the outer vertex pin removed, so the pathway shapes of Corner 0 and Corner 1 are the same; see the \(\mathrm{S}_{21}\) results in Fig. 13\((b)\). Here, we have defined the distance from the inner vertex pin \(\mathrm{O}\) to the center point \(\mathrm{A}\) of the outer wall \(\mathrm{BC}\) as the corner width \(w_{t}\). The corner width decreases as the outer wall \(\mathrm{BC}\) gradually moves towards the inner wall from Corner 1 to 8.
The \(\mathrm{S}_{21}\) results for these corners are shown in Fig. 13\((b)\). The best-performing shape is Corner 4 (\(w_{t}=8.5\mathrm{mm}\)) with \(\mathrm{S}_{21}\) of \(-16.2\mathrm{dB}\) at \(25\mathrm{GHz}\), which can be considered the optimal operating frequency. This is followed by Corner 3 (\(w_{t}=9.9\mathrm{mm}\)) with \(\mathrm{S}_{21}\) of \(-17.4\mathrm{dB}\) and Corner 5 (\(w_{t}=7.1\mathrm{mm}\)) with \(\mathrm{S}_{21}\) of \(-18.6\mathrm{dB}\). All their corner widths \(w_{t}\) are close to the pathway width \(w_{c}=10\mathrm{mm}\). We also observe that the optimal operating frequency is \(24.5\mathrm{GHz}\) in Corner 2 (\(w_{t}=11.3\mathrm{mm}\)) and \(25.6\mathrm{GHz}\) in Corner 6 (\(w_{t}=5.7\mathrm{mm}\)), respectively. This discrepancy further illustrates the frequency-selection characteristics that can be achieved by controlling the difference in pathway width. Therefore, signal separation based on frequency may be possible in different propagation directions at a junction or corner, which can be viewed as a signal filter. For Corner 7 (\(w_{t}=4.2\mathrm{mm}\)) with \(\mathrm{S}_{21}\) of \(-30\mathrm{dB}\) and Corner 8 (\(w_{t}=2.8\mathrm{mm}\)) with \(\mathrm{S}_{21}\) of \(-33\mathrm{dB}\), the attenuation is much more significant. This is because the shrinking \(45^{\circ}\) outer wall results in more signal reflection and blockage, as the corner width is too small. Note that in Corners 7 and 8, the E-field power intensity is too small and similar to the level outside the pathway, leading to a larger discrepancy between measurement and simulation.
The optimal frequencies of corners with different corner widths are listed in TABLE II. Fig. 14 shows the relationship between the corner width and the average \(\mathrm{S}_{21}\) value in the half-power bandwidth based on their respective optimal frequencies. We can see that the \(\mathrm{S}_{21}\) value of Corner 4 with \(w_{t}=8.5\mathrm{mm}\) is higher than that of Corner 1 with \(w_{t}=12.7\mathrm{mm}\) by over \(15\mathrm{dB}\), indicating that the shape of the corner plays a significant role in guiding the surface wave around a \(90^{\circ}\) turn. Moreover, Corner 4 performs slightly better than Corner 3 with \(w_{t}=9.9\mathrm{mm}\), which is closer to the pathway width \(w_{c}=10\mathrm{mm}\). This suggests that moderately decreasing the corner width, as in Corner 4, results in less loss, provided that \(w_{t}\) remains close to \(w_{c}\) in practice.
## IV Conclusion
This paper presented a novel porous reconfigurable surface-wave platform that takes advantage of fluid metal to dynamically create a customized pathway for guiding surface wave propagation. The concept is driven by the emerging programmable fluid metal microfluidics technology. We analyzed the effect of different propagation distances, pathway widths, and multi-layer metal wall geometry structures on surface wave propagation using a 3D-printed surface. We also provided numerical results showing that the proposed reconfigurable surface, carrying surface waves, could outperform traditional space waves and coaxial cables over long propagation distances if the dielectric tangent loss of the surface is small enough. Moreover, we presented a switchable pathway process for this reconfigurable surface via a T-junction by rearranging the composition of metal pins. We demonstrated that switching surface wave propagation from a straight to a \(90^{\circ}\)-turn pathway is feasible. Additionally, the results illustrated that properly designing the corner width could effectively reduce loss when the surface wave passes through a \(90^{\circ}\) turn. In summary, our work showed the potential of surface wave propagation on a low-loss reconfigurable surface for future wireless systems.
Fig. 12: The comparison of the E-field power intensity decay along the straight and \(90^{\circ}\)-turn pathway in measurement and simulation results. |
2306.05918 | Depth-dependent resolution quantification in 3D fluorescence microscopy | A method is presented to quantify resolution as a function of depth in
features of morphologically complex 3D samples. Applying the method to the
brain of Drosophila, resolution is measured at increasing depth throughout the
central brain region. The results quantify improvements in image quality when
using two-photon microscopy compared to confocal. It is also demonstrated how
resolution improvements through tuning a single parameter, laser power, can be
measured objectively. Since the metric is interpretable as the average
resolution within a feature, it is suitable for comparing results across
optical systems, and can be used to inform the design of biological experiments
requiring resolution of structures at a specific scale. | Neil Wright, Christopher J. Rowlands | 2023-06-09T14:22:04Z | http://arxiv.org/abs/2306.05918v1 | # Depth-dependent resolution quantification in 3D fluorescence microscopy
###### Abstract
A method is presented to quantify resolution as a function of depth in features of morphologically complex 3D samples. Applying the method to the brain of Drosophila, resolution is measured at increasing depth throughout the central brain region. The results quantify improvements in image quality when using two-photon microscopy compared to confocal. It is also demonstrated how resolution improvements through tuning a single parameter, laser power, can be measured objectively. Since the metric is interpretable as the average resolution within a feature, it is suitable for comparing results across optical systems, and can be used to inform the design of biological experiments requiring resolution of structures at a specific scale.
## Introduction
Quantifying image quality is important for both experiment design and the development of optical instruments. In biology, it may be necessary to resolve structures at a certain level of detail; users may not appreciate that features which can be readily observed in one part of the sample may be unresolvable elsewhere. Similarly, having an objective metric of quality in microscopy allows comparison and improvement of instruments under various adverse imaging conditions, as opposed to the favourable conditions that occur near a tissue surface. Since the final quality is a function of both the sample and optical system as a whole, this should be reflected in any metric.
In 3D samples, image quality is also highly dependent on tissue depth, with quality degrading due to light attenuation and distortion caused by scattering. Previous approaches to quantify this effect have sometimes relied on signal intensity [1, 2] as a metric. However, intensity lacks an obvious practical interpretation, and may not always correlate with image quality. Another approach is to use a score based on arbitrary units [3], though this also has the same issue of interpretability. In theory, using resolution as a metric directly can solve these issues.
Resolution refers to the minimum distance at which two separate objects are distinguishable. Mathematically, this can be defined based on either spatial frequency contrast or distance. The Modulation Transfer Function (MTF) can be used to characterise the former, while the Rayleigh Criterion is an example of the latter. This states that two Airy discs are resolvable if the centre of one disc lies within or outside the first minimum of the diffraction pattern of the other [4].
While specially manufactured test samples can be used to assess the performance of an optical system using either method, natural images are unlikely to contain patterns of known contrast or distance that can be used directly to measure resolution. Estimation methods must therefore be used instead. This can be done by either estimating the MTF [5] or using an approach based on Fourier Ring Correlation (FRC) [6]. The latter technique originates in electron microscopy and involves finding the highest spatial frequency in an image distinguishable from noise [7].
However, a problem arises when applying a single measurement to more complex samples where resolution may be non-uniform across the image. This is often the case in 3D biological specimens where variations in factors such as tissue depth, type, or fluorophore concentration may create large differences in quality within the same optical plane.
Previously, it has been shown that FRC can be used to analyse local resolution by splitting the image into tiles [8], an approach recently used to analyse fine features in super resolution images [9]. Here, by applying this approach to 3D fluorescence microscopy, we show how it can be used to isolate a particular feature within a 3D sample and quantify resolution within that feature as a function of depth.
The method is demonstrated by analysing the central region of the dissected brain of _Drosophila Melanogaster_, a model organism widely used in neurobiology. Due to its 3D morphology, optical sections of the fly brain can contain regions of heterogeneous resolution, with particular differences between the central brain and optic lobes caused by differing levels of light scattering. By calculating the average FRC value within the central brain, we show how to characterise resolution within any arbitrarily shaped region of an image, and by extension quantify the degradation of resolution in a feature in 3D.
Results comparing confocal and two-photon imaging are presented. These align with previous findings on the highly scattering nature of the fly brain compared to mammalian brain, even when two-photon microscopy is used [10]. Additionally, we show how to quantify improvements in resolution when a single parameter, laser power, is increased.
The method presented is generally applicable for analysing resolution in features of 3D samples where image quality is non-uniform. In addition, as a metric of resolution, it has a practical interpretation which can be used to inform experimental considerations and compare results across different optical instruments.
## Methods
### Sample preparation
Green Fluorescent Protein (GFP) was expressed pan-neuronally in _Drosophila_ by crossing nSyb-GAL4 (Bloomington Drosophila Stock Center (BDSC) #68222) with 10XUAS-IVS-mCD8::GFP (BDSC #32187). Adult male and female flies were anaesthetised by low temperature and brains dissected in phosphate-buffered saline (PBS) using fine forceps (Dumont #55 and #5SF) [11]. Dissected brains were transferred to a PBS-filled glass-bottom 35 mm confocal dish (VWR) coated with poly-D-lysine (approximately 100 µg/mL; Sigma), and oriented dorsal side down.
### Confocal and two-photon microscopy
Confocal and two-photon (2p) images were both taken using the same commercial Leica SP5 inverted microscope at the Facility for Imaging by Light Microscopy (FILM) at Imperial College London. Each sample was used to capture a stack for all modalities. Because initial results found photobleaching to be minimal except for higher power 2p stacks, the order was not randomised and 2p stacks were always captured last.
A 20x 0.7 NA dry objective (Leica) was used for imaging and a photomultiplier tube (PMT) was used to detect fluorescence. Bit depth was set to 16 bits and pixel size set to 94.6 nm (8192\(\times\)8192 format) with a line scan rate of 400 Hz.
Frames were captured with a step size of 10 µm. Confocal imaging used an Argon laser at 488 nm and a pinhole size of 1 Airy Unit. Two-photon imaging used a Mai Tai DeepSee laser (Newport Spectra-Physics) tuned to 920 nm and the pinhole fully opened.
Power was measured using a Thorlabs S170C power sensor for the Argon laser, and S425C for the DeepSee laser.
### Image analysis
#### Mean resolution algorithm
The algorithm to compute the Mean FRC (mFRC) in a region-of-interest (ROI) of arbitrary shape is illustrated in Figure 1. First, ROIs were drawn around the central brain region for each image in the Z-stack and used to generate masks. The image was then split into small (64\(\times\)64 pixel) tiles. For each tile in the mask, FRC was calculated. The mFRC value for each depth was taken to be the mean of all tiles in the mask for that depth. Finally, mFRC was plotted as a function of depth.
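A minimal sketch of this tiling step (in Java, the implementation language used here) is shown below. The per-tile FRC computation is passed in as a function, sketched after Equation 1 below; the helper routines are illustrative rather than the exact code used in this work.

```java
import java.util.function.ToDoubleFunction;

// Sketch of the mFRC tiling step: average the per-tile FRC values over all
// tiles lying fully inside the ROI mask for one image of the Z-stack.
public class MeanFrc {
    public static double meanFrc(double[][] img, boolean[][] mask, int t,
                                 ToDoubleFunction<double[][]> frcOfTile) {
        double sum = 0;
        int n = 0;
        for (int y = 0; y + t <= img.length; y += t)
            for (int x = 0; x + t <= img[0].length; x += t)
                if (inside(mask, y, x, t)) {          // keep tiles fully in the ROI
                    sum += frcOfTile.applyAsDouble(crop(img, y, x, t));
                    n++;
                }
        return n > 0 ? sum / n : Double.NaN;          // mFRC value at this depth
    }

    static boolean inside(boolean[][] m, int y, int x, int t) {
        for (int i = y; i < y + t; i++)
            for (int j = x; j < x + t; j++)
                if (!m[i][j]) return false;
        return true;
    }

    static double[][] crop(double[][] img, int y, int x, int t) {
        double[][] out = new double[t][t];
        for (int i = 0; i < t; i++)
            System.arraycopy(img[y + i], x, out[i], 0, t);
        return out;
    }
}
```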
In general, calculating FRC requires two independent images. Here, 'single image' FRC was used, whereby each full image is first split into sub-images as described in [12]. FRC was calculated according to the standard formula of computing the Pearson correlation coefficient of rings of increasing radius in the Fourier transforms of the two images, as given in Equation 1:
\[\text{FRC}(r)=\frac{\sum\limits_{i\in R}F_{1}(r_{i})\cdot F_{2}(r_{i})^{*}}{\sqrt{\sum\limits_{i\in R}|F_{1}(r_{i})|^{2}\cdot\sum\limits_{i\in R}|F_{2}(r_{i})|^{2}}} \tag{1}\]
where \(F_{1}\) is the Fourier transform of the first image and \(F_{2}^{*}\) is the complex conjugate of the Fourier transform of the second image; the numerator is a real number [13]. The inverse of the frequency at which the correlation first drops below a fixed 1/7 cut-off is the FRC value [14]. All code was written in Java.
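A compact sketch of the ring correlation and the cut-off search is given below. It takes the centred 2-D Fourier transforms of the two (sub-)images as separate real and imaginary arrays (the FFT itself is assumed to be computed elsewhere) and is illustrative rather than the exact implementation used in this work.

```java
// Ring correlation of Eq. (1) with the fixed 1/7 cut-off. Inputs are the
// centred n-by-n Fourier transforms of two images; returns the minimum
// resolvable distance in the same units as pixelSize.
public class Frc {
    public static double frcResolution(double[][] re1, double[][] im1,
                                       double[][] re2, double[][] im2,
                                       double pixelSize) {
        int n = re1.length, half = n / 2;
        double[] num = new double[half], p1 = new double[half], p2 = new double[half];
        for (int y = 0; y < n; y++)
            for (int x = 0; x < n; x++) {
                int r = (int) Math.round(Math.hypot(y - half, x - half));
                if (r < 1 || r >= half) continue;
                num[r] += re1[y][x] * re2[y][x] + im1[y][x] * im2[y][x]; // Re{F1.F2*}
                p1[r] += re1[y][x] * re1[y][x] + im1[y][x] * im1[y][x];
                p2[r] += re2[y][x] * re2[y][x] + im2[y][x] * im2[y][x];
            }
        for (int r = 1; r < half; r++)
            if (num[r] / Math.sqrt(p1[r] * p2[r]) < 1.0 / 7.0)  // first crossing
                return pixelSize * n / (double) r;  // distance = 1 / spatial frequency
        return Double.NaN;                          // correlated up to Nyquist
    }
}
```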
#### FRC Colourmaps
High-detail FRC colourmaps [9] were generated to highlight precise variations in resolution across images (Figure 2). The approach to generate colourmaps was similar to the method described above, except that, instead of dividing the image into tiles, a 64\(\times\)64 pixel block centred on each pixel was scanned across the ROI. The FRC value for each block was converted to a colour value and used to draw a single pixel at the corresponding coordinates in the colourmap. Artifacts were smoothed by applying a Gaussian blur (Radius = 10) using Fiji [15]. It was found that the rolling FRC method produced comparable results to the tiling method in terms of mean value within the ROI, but at the cost of greatly increased computation. Therefore, the tiling method was used when comparing results from different imaging modalities. As a simple example, assuming a tile size of 64 \(\times\) 64 pixels, a 1024 \(\times\) 1024 pixel square ROI requires 16 \(\times\) 16 = 256 FRC calculations for the tiling method, but 1,048,576 FRC calculations for the rolling block method.
Figure 1: (A) Schematic of the method to calculate Mean FRC (mFRC). First, a region-of-interest (ROI) is drawn over the feature to be analysed in each image in the Z-stack. The image is then divided into small tiles, and the Fourier Ring Correlation (FRC) value for each tile within the ROI is calculated by selecting the highest spatial frequency whose correlation coefficient is greater than the cut-off value. Finally, the mean value for each ROI is computed and plotted as a function of depth. Since FRC represents the minimum resolvable distance, lower values correspond to better image quality. (B) Drosophila brain with the dorsal side facing down. The black line represents an optical plane. Variable tissue depths (blue arrows) result in differing levels of light scattering, contributing to resolution heterogeneity within the image. Drosophila brain based on graphic from virtualflybrain.org [16].
Figure 2: (A) Example images of an nSyb>GFP brain at various depths for each imaging modality. Confocal power levels were 9 μW (low), 100 μW (mid), and 224 μW (high). Two-photon power was 22 mW. Each image has been individually optimised for brightness and contrast to highlight image quality. Scale bar = 100 μm. (B) High-detail rolling FRC colourmaps of the central brain ROI, generated from the same two-photon stack shown in A. The scale represents minimum resolvable distance (FRC) from 0 to 7+ μm.
Figure 3: Resolution as a function of depth in the central region of the _Drosophila_ brain for 4 different specimens. Values are plotted for depths where \(\geq\)95% of tiles contained correlating spatial frequencies for at least half the specimens. Three different confocal power levels (9 μW, 100 μW, and 224 μW) were compared with two-photon imaging (22 mW) using the mFRC metric.
## Results and Discussion
### Theoretical considerations for imaging parameters
To determine the pixel size required to capture all information within the image, Abbe's diffraction formula was used to estimate the minimum theoretical distance resolvable by the system [4]:
\[d=\lambda/(2\text{NA}) \tag{2}\]
Here, d=349 nm for confocal and 657 nm for 2p imaging. Shannon-Nyquist sampling requires using half these values [17]. Therefore, a pixel size of 189 nm (equivalent to a full image size of 4096\(\times\)4096 pixels) was chosen, close to the value required.
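The arithmetic is easy to verify (Java, the language used for the analysis code in this work):

```java
// Quick check of Eq. (2) and the Nyquist sampling requirement.
public class Sampling {
    public static void main(String[] args) {
        double na = 0.7;
        double dConf = 488e-9 / (2 * na);   // ~349 nm (confocal)
        double d2p = 920e-9 / (2 * na);     // ~657 nm (two-photon)
        System.out.printf("Nyquist pixel, confocal: %.0f nm%n", 1e9 * dConf / 2); // ~175
        System.out.printf("Nyquist pixel, 2p: %.0f nm%n", 1e9 * d2p / 2);         // ~329
    }
}
```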
Since calculating FRC requires two independent images, initial attempts captured two separate stacks for each modality. However, it was found that occasionally the z-galvo position shifted slightly between stacks, compromising results. To ensure alignment of the two images, 'single image' FRC was used instead [12], whereby a single image is split into sub-images. To achieve equivalent sampling, stacks were captured with a pixel size of 94.6 nm, equivalent to a pixel size of 189 nm for each sub-image.
### Resolution heterogeneity within the same image
Initial attempts found that using a single FRC calculation for each full image in the stack led to inconsistent and non-monotonic results. FRC colourmaps [9] revealed that this was due to non-uniformity of resolution within images, particularly the contrast between the central brain and optic lobes, which begin at deeper optical planes (Figure 1B) and thus appear as higher resolution areas due to decreased light scattering. Apart from the optic lobes, the effect of differential light scattering on resolution is also apparent at the edges of the central brain, where tissue is thinnest and image quality better, and in the middle of the brain, where tissue is thickest and image quality worst (Figure 2). While the central brain region was analysed here as a whole, depending on requirements it would also be possible to take a more fine-grained approach and analyse only specific structures of interest. Alternatively, the entire brain could be characterised solely as a function of depth using a depthmap-based method. Aside from the effect of scattering, the colourmaps indicated that the outlines of major brain structures, such as the antennal lobes, mushroom body, and central complex, also appeared as high-resolution features, likely due to the presence of sharp 'edges' resulting from differing GFP concentrations (Figure 2).
To isolate a particular feature of interest for analysis, simply cropping the image has the disadvantage that only rectangular areas can be analysed, rather than the arbitrary shapes which are common in natural images. An alternative approach of masking out other features risks creating artificially sharp edges in the image, which may distort measurements of resolution. To overcome these issues, the mFRC method was developed based on splitting the image into small tiles. This allows characterisation of resolution in image features of any arbitrary shape, providing results which remain relatively consistent across samples. As noted in the Methods, a 'rolling block' approach can be applied similarly if more fine-grained information is required, though this comes at the cost of increased computation.
It is noted that one limitation of this method is that while use of small tiles provides highly localised information, tile size also constrains the lowest spatial frequency which can be correlated. This results in a trade-off between minimising the area to which resolution information is localised and maximising the range of resolutions which can be detected. A tile size of 64\(\times\)64 pixels was found to provide a reasonable compromise in this regard.
### Confocal and two-photon resolution in Drosophila brain
Two-photon (2p) imaging is widely used in neurobiology, including in _Drosophila_ [18]. Use of higher-order excitation suppresses out-of-focus fluorescence, while longer wavelengths of light are less prone to scattering [19]. Here, the benefits of 2p (22 mW) were apparent for depths of approximately 35 µm and greater, with the difference increasing with depth (Figure 3). For depths less than this, sufficiently-powered confocal appeared to provide slightly sharper images, in line with the theoretical resolution benefits of shorter wavelengths [4]. Despite the benefits of 2p, however, imaging depth was still limited in comparison to mammalian brain, where
imaging up to 1.6 mm has been reported [20]. This aligns with a recent study on the optical properties of the fly brain, which attributed its highly scattering nature to extensive light scattering at the air-tissue interface in tracheae [10]. At a practical level, though deeper brain structures such as the mushroom body calyx and proto-cerebral bridge could be imaged using 2-photon imaging (Figure 2), higher resolution imaging of these regions may require 3-photon microscopy [10]. A further consideration is that 2p causes considerably more photobleaching than confocal, which may be a factor in certain experiments, such as those involving long-term imaging. Additionally, while the results here were based on dissected brains, resolution may vary in fixed samples or _in vivo_.
Confocal laser power was used as an example to demonstrate how improvements in image quality through tuning a single parameter can be quantified using the method described (Figure 3). In the results, each increase in power led to a measurable improvement in resolution, with the improvement becoming more pronounced with increasing depth. This can be explained by the general principle that, as the number of photons increases, signal increases linearly while noise increases by the square root of the number of photons, leading to a higher signal-to-noise ratio (SNR) [21]. Quantifying resolution in this way thus allows system tuning until requirements for a given experiment are met.
## Conclusion
The method described showed how any region of arbitrary shape within an image can be characterised in terms of mean resolution using a metric based on Fourier Ring Correlation. This enabled quantification of resolution as a function of depth in 3D features in a way that remains relatively consistent across samples. Using the method applied to the brain of Drosophila, resolution at each depth level of the central brain was measured and the benefits of 2-photon imaging over confocal quantified objectively. Measurement of image quality improvements through tuning a single parameter, laser power, was also demonstrated. It is suggested that this method may be generally useful for other samples in 3D microscopy where resolution is heterogeneous and quantifying image quality at different depths is important.
## Acknowledgements
NW is grateful for support from the Medical Research Council (MRC). CJR is grateful to the following bodies for support: Engineering and Physical Sciences Research Council (EP/S016538/1, EP/X017842/1, EP/W024969/1); Biotechnology and Biological Sciences Research Council (BB/T011947/1); Wellcome Trust (212490/Z/18/Z); Cancer Research UK (29694, EDDPMA-May22\(\backslash\)100059); Royal Society (RGS\(\backslash\)R2\(\backslash\)212305); Chan Zuckerberg Initiative (2020-225443, 2020-225707); Imperial College Excellence Fund for Frontier Research. The Facility for Imaging by Light Microscopy (FILM) at Imperial College London is part-supported by funding from the Wellcome Trust (grant 104931/Z/14/Z) and BBSRC (grant BB/L015129/1). Stocks obtained from the Bloomington Drosophila Stock Center (NIH P40OD018537) were used in this study.
## Disclosures
The authors declare no conflicts of interest.
## Data Availability Statement
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2302.11242 | A Unified Cloud-Enabled Discrete Event Parallel and Distributed
Simulation Architecture | Cloud simulation environments today are largely employed to model and
simulate complex systems for remote accessibility and variable capacity
requirements. In this regard, scalability issues in Modeling and Simulation
(M\&S) computational requirements can be tackled through the elasticity of
on-demand Cloud deployment. However, implementing a high performance cloud M\&S
framework following these elastic principles is not a trivial task as
parallelizing and distributing existing architectures is challenging. Indeed,
both the parallel and distributed M\&S developments have evolved following
separate ways. Parallel solutions has always been focused on ad-hoc solutions,
while distributed approaches, on the other hand, have led to the definition of
standard distributed frameworks like the High Level Architecture (HLA) or
influenced the use of distributed technologies like the Message Passing
Interface (MPI). Only a few developments have been able to evolve with the
current resilience of computing hardware resources deployment, largely focused
on the implementation of Simulation as a Service (SaaS), albeit independently
of the parallel ad-hoc methods branch. In this paper, we present a unified
parallel and distributed M\&S architecture with enough flexibility to deploy
parallel and distributed simulations in the Cloud with a low effort, without
modifying the underlying model source code, and reaching important speedups
against the sequential simulation, especially in the parallel implementation.
Our framework is based on the Discrete Event System Specification (DEVS)
formalism. The performance of the parallel and distributed framework is tested
using the xDEVS M\&S tool, Application Programming Interface (API) and the
DEVStone benchmark with up to eight computing nodes, obtaining maximum speedups
of $15.95\times$ and $1.84\times$, respectively. | José L. Risco-Martín, Kevin Henares, Saurabh Mittal, Luis F. Almendras, Katzalin Olcoz | 2023-02-22T09:47:09Z | http://arxiv.org/abs/2302.11242v1 | # A Unified Cloud-Enabled Discrete Event Parallel and Distributed Simulation Architecture
###### Abstract
Cloud infrastructure provides rapid resource provision for on-demand computational requirements. Cloud simulation environments today are largely employed to model and simulate complex systems for remote accessibility and variable capacity requirements. In this regard, scalability issues in Modeling and Simulation (M&S) computational requirements can be tackled through the elasticity of on-demand Cloud deployment. However, implementing a high performance cloud M&S framework following these elastic principles is not a trivial task as parallelizing and distributing existing architectures is challenging. Indeed, both the parallel and distributed M&S developments have evolved following separate ways. Parallel solutions has always been focused on ad-hoc solutions, while distributed approaches, on the other hand, have led to the definition of standard distributed frameworks like the High Level Architecture (HLA) or influenced the use of distributed technologies like the Message Passing Interface (MPI). Only a few developments have been able to evolve with the current resilience of computing hardware resources deployment, largely focused on the implementation of Simulation as a Service (SaaS), albeit independently of the parallel ad-hoc methods branch. In this paper, we present a unified parallel and distributed M&S
architecture with enough flexibility to deploy parallel and distributed simulations in the Cloud with a low effort, without modifying the underlying model source code, and reaching important speedups against the sequential simulation, especially in the parallel implementation. Our framework is based on the Discrete Event System Specification (DEVS) formalism. The performance of the parallel and distributed framework is tested using the xDEVS M&S tool, Application Programming Interface (API) and the DEVStone benchmark with up to eight computing nodes, obtaining maximum speedups of \(15.95\times\) and \(1.84\times\), respectively.
keywords: Discrete-Event Simulation, Parallel Simulation, Distributed Simulation, High Performance Computing, Cloud Computing +
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
Parallel and distributed simulation are two distinct fields that emerged in the 1970s and 1980s, respectively, from two different research communities [1]. The Parallel Simulation community was focused on accelerating simulations through the exploitation of high-performance computing (HPC) resources. Accordingly, parallel simulation is defined as the parallelization of a simulation across different computing nodes. When there is a significant geographical separation between the computing nodes, a parallel simulation turns into a distributed simulation. While the parallel computing solution is implicitly distributed, the converse is not always true. The Distributed Simulation community (independent of the parallel simulation aspect) has largely focused on interconnecting partial simulations through local or wide area networks. Currently, these two communities continue to keep the same driving force: parallel simulation works mainly over tightly coupled hardware entities, while distributed simulation still works on loosely coupled components communicating over standards-based wide area networks (e.g., Distributed Interactive Simulation [DIS], High Level Architecture [HLA], etc.).
The desire to bring parallel and distributed M&S together faces new challenges, due to the complexity of new applications and the evolution of the underlying hardware [2]. From an application point of view, simulating systems of ever-increasing complexity, such as those in the Internet of Things, requires huge computational power [3].
On the other hand, from the hardware point of view, new paradigms such as Cloud Computing enable the provision of the large computational power of Google or Amazon infrastructures to a single researcher, who can exploit these computing resources for simulation execution [4]. However, the technologies for Cloud computing require specific handling, and M&S applications need to evolve to adapt to cloud-enabled architectures [5].
Parallel and distributed simulation in the cloud is an emerging research area driven by the cost advantages of scaling simulations with available on-demand computing resources, without incurring the expense of purchasing and operating high-performance computing platforms, an issue that has prevented the adoption of parallel and distributed simulation technology in the past [6]. According to the U.S. National Institute of Standards and Technology (NIST), _Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction_[7]. Given this heterogeneity, implementing the appropriate simulation computational infrastructure is a very sophisticated task.
Previous M&S engines have been designed mainly to tackle global challenges like transparency, simulation as a service, cost, and performance [8]. The solution presented in this paper exploits the categorical separation of the modeling and simulation aspects in any M&S architecture and focuses on executing the same model through parallel or distributed simulation, i.e., given a standard model, it must be simulated in sequential, parallel or distributed contexts without changing a single line of the model source code. To achieve this goal:
1. The model must be defined following standard specifications,
2. Simulation engine and model must be decoupled, and
3. The simulation technology must be resilient enough to easily address the computing diversification offered by the Cloud: virtualization, containerization, etc.
There exist several M&S formalisms that help to deal with #1 above. Among them, we have selected the Discrete Event System Specification (DEVS) [9], since it provides not only a global framework to define models, but also standard mechanisms to develop the simulation engine, which is categorically decoupled from the model, addressing also point #2.
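For reference, a DEVS atomic model is characterised by four functions. The skeleton below sketches them in Java; the names follow common DEVS conventions and are illustrative only, not a claim about the exact xDEVS class hierarchy.

```java
// Generic DEVS atomic-model skeleton: output function, internal/external
// transitions and time advance. Names are illustrative, not the xDEVS API.
public abstract class AtomicModel {
    protected double sigma = Double.POSITIVE_INFINITY; // time to next internal event

    public abstract void lambda();          // output function, fired before deltint()
    public abstract void deltint();         // internal transition function
    public abstract void deltext(double e); // external transition after elapsed time e
    public double ta() { return sigma; }    // time-advance function
}
```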
Although the parallel solution is always easy to deploy, the distributed approaches present many technical difficulties when deploying a distributed simulation across a cluster, a set of virtual machines or containers, etc. To facilitate a better distribution mechanism, our solution makes use of a straightforward distributed architecture: a client/server pattern using standard sockets. We present a unified event-driven parallel and distributed simulation architecture, where sequential simulations can be scaled up to a parallel and distributed simulation execution with an extremely easy deployment mechanism, even in a cloud-enabled environment.
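As an illustration of this client/server pattern, the sketch below shows a coordinator exchanging line-based messages with one remote simulation node over a plain TCP socket. The port and the message format are invented for the example and do not reflect the actual xDEVS protocol.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal coordinator: accept one remote node and drive a simulation cycle.
public class Coordinator {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000);      // illustrative port
             Socket node = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(node.getInputStream()));
             PrintWriter out = new PrintWriter(node.getOutputStream(), true)) {
            out.println("LAMBDA");               // ask the node to run its output phase
            System.out.println(in.readLine());   // serialized output events come back
            out.println("DELTFCN 10.5");         // then the transition phase at t=10.5
        }
    }
}
```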
The main contributions of our research can be summarized as follows:
* We propose a unified parallel and distributed simulation architecture using the DEVS formalism as implemented through xDEVS Tool and API [10]. Once the model has been implemented, it can be simulated in sequential, parallel or distributed platforms, without modifying a single line of the model's source code.
* A standard simulation deployment scheme is designed: using XML files supported by schema definitions, the simulation can be deployed on parallel or distributed platforms.
* The multi-modal deployment is highly resilient, i.e., it can be done over several centralized or cloud-based resources, including physical (real) or virtual machines, or containers, using the proposed method and structure.
* The standard DEVStone benchmark is revisited to consider (for the first time) the best ways of evaluating parallel and distributed DEVS-based simulations.
* The evaluation of the proposed architecture is done considering not only the traditional performance metric, but also resource distribution and cost.
This paper is organized as follows. Related work is described in Section 2. Section 3 introduces the foundational technologies behind the work developed through this research. In Section 4, a detailed view of the parallel and distributed architecture implemented in xDEVS is presented. Section 5 shows the parallel and distributed deployment options. Both parallel and distributed approaches are configured and evaluated in Section 6. Finally, we present conclusions and future work in Section 7.
## 2 Related work
A plethora of parallel and distributed simulation architectures can be found in the literature. The parallel simulation paradigm has usually brought ad-hoc solutions using multi-threaded programming technologies [11][12], although more specific solutions can be found in the last decade using Graphic Processing Units (GPUs) [13] or even Field Programmable Gate Arrays (FPGAs) [14]. On the other hand, the distributed simulation paradigm has driven the development of not only distributed technologies like the Message Passing Interface (MPI) [15], but also general and robust distributed simulation standards such as the DIS [16] or the HLA [17].
With respect to the DEVS M&S formalism used in this work, many simulation engines have been developed and published during the last twenty years.
Some of them have been specially designed to handle parallel or distributed simulations.
Regarding parallel DEVS implementations, we may find the works of Liu [18], Nutaro [19], or Lanuza [20], among others. These developments are based on optimistic simulators following the concept of Logical Processes, the Time Warp algorithm, or others. However, these approaches do not provide standard interfaces to facilitate performance evaluation and comparison through parallel or distributed benchmarks. The development of the xDEVS M&S engine dates back to our first text on the DEVS Unified Process [21] and over the years it has been extended to bring various domain specific languages (DSLs) to the DEVS world [22]. Our earlier approaches [23] parallelize the standard DEVS simulation loops that call transition and output functions, maintaining the original DEVS specification and all its properties. The codebase is currently maintained at [10].
With respect to the distributed DEVS implementations, some frameworks like DEVS/SOA [24] or CD++ [25], currently deprecated, were based on the concept of Simulation as a Service (SaaS), while others like PyPDEVS [26], although more flexible, require the user to be aware of many intricacies to distribute the simulation.
## 3 Foundational technologies
Our framework must be able to execute simulations and optimization studies in distributed and decentralized environments. To this end, we have selected a container-based distributed architecture based on microservices, due to its potential and configuration simplicity [27]. In this section, we describe the technologies involved in performing distributed simulations: microservices, containerization paradigms and the DEVS formalism.
### Microservices and containerization paradigms
Traditionally, systems have been developed following monolith architectures, where the entire system's function is based on a single program. This monolith
model often results in tightly-coupled systems, with highly interconnected and interdependent components.
In contrast, microservices architectures have been gaining traction and popularity over the last few years. In these architectures, the different features of a system are decomposed into separate application units, which communicate with each other primarily through asynchronous event-driven mechanisms. A standard communication protocol and a set of well-defined APIs independent of any vendor, product, or technology are used for inter-microservice communications. As Mittal and Martin [28] point out, any microservices-based architecture has to address two fundamental issues: distributed data management (to store the state of the microservice locally) and shared event processing (to facilitate the information exchange between stateless microservices). The information from the local data and the event processing is kept inside the microservices and is used together to execute their inherent business logic. This alternative methodology results in (i) the development of more resilient systems, as the system continues its operation even if specific components go down, (ii) better use of the resources, as it allows scaling specific components based on demand, and (iii) clear independence of the system's components, which can be developed and tested separately.
To implement and deploy microservices-based systems, it is customary to use a containerized architecture. A container is a lightweight, efficient, and standard way for applications to move between environments and run independently. It wraps a piece of software in a complete file system that contains everything needed to run (except for the shared operating system on the server). This approach favors the portability of systems, as they can be easily deployed in a multitude of operating systems and hardware architectures, and allows accelerating development, test, and production cycles. Containers also present less overhead than traditional virtual machine environments, as they do not include operating system images. As a result, in many cases, the traditional virtualization present in the early days of Cloud Computing is transitioning towards container-based architectures. Fig. 1 illustrates the differences between these two approaches. In particular, Fig. 1.a shows how virtual machines store the whole Operating System (OS), libraries, binaries, and applications, requiring a huge memory space in the host machine. Fig. 1.b shows how a container is composed of libraries, required binaries, and applications, and how all the containers share the same OS kernel.
When managing large container-based systems, container orchestration becomes essential. This orchestration is in charge of automating the deployment, management, scaling, networking, and availability of the containers. As these practices became established, different tools emerged that encapsulate them and allow them to be applied in different container engines. Some popular examples of these container orchestration tools are Kubernetes and Docker Swarm. Moreover, many cloud services offer Infrastructure as a Service (IaaS) platforms based on these tools allowing developers to deploy complex container-based scenarios. Among them are Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).
### Discrete Event System (DEVS) specifications
DEVS is a general formalism for discrete event systems modeling based on mathematical Set theory [9]. We can distinguish between Classic DEVS and Parallel DEVS. Parallel DEVS was introduced as a revision of Classic DEVS.
Figure 1: Graphical representation of (a) virtual machine and (b) container architectures
Moving forward, any mention of DEVS implies Parallel DEVS. The notion of parallelism in the DEVS formalism exists at both the modeling and simulation layers. Parallelism here refers to the confluence of events that happen concurrently at a given instant, how the DEVS formalism handles this confluence in its model specification, and how it is eventually implemented in the simulation coordinators while preserving this confluence. The DEVS formalism does not address the performance aspect of parallel computing for speedup, etc. The execution of the DEVS coordinator and component simulators in a multi-core architecture is one of the topics explored in this paper and is described ahead.
The DEVS formalism includes two model types: atomic and coupled models. Both models have an interface consisting of input (\(X\)) and output (\(Y\)) ports to communicate with others. In atomic models, every state (\(Q\)) in the model is associated with the time advance function \(ta\), which determines the duration during which the state remains unchanged. Once the time assigned to the state has passed, an internal transition function (\(\delta_{\text{int}}:Q\to Q\)) is fired and an internal transition is triggered, producing a local state change (\(\delta_{\text{int}}(s)=s^{\prime}\)). At that moment, the model execution results are spread through the model's output ports by activating an output function (\(\lambda\)). Input external events (events received from other models) are collected in the input ports. An external transition function (\(\delta_{\text{ext}}:Q\times X\to Q\)) specifies how to react to those inputs, using the current state (\(S\)), the elapsed time since the last event (\(e\)) and the input value (\(X\)) (\(\delta_{\text{ext}}((s,e),x)=s^{\prime}\)). Parallel DEVS introduces a confluent function (\(\delta_{\text{con}}((s,ta(s)),x)=s^{\prime}\)), which decides the next state in cases of collision between external and internal events.
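To make these functions concrete, the following is a minimal Java sketch of an atomic model written against the xDEVS API, as far as it is shown in this paper (Figure 2 and the listings below); the exact method signatures in the repository [10] may differ slightly:

```
import xdevs.core.modeling.Atomic;
import xdevs.core.modeling.Port;

// A processor that holds each incoming job for a fixed processing time
// and then emits it through its output port.
public class JobProcessor extends Atomic {
    protected Port<String> in = new Port<>("in");
    protected Port<String> out = new Port<>("out");
    protected String currentJob = null;
    protected double processingTime;

    public JobProcessor(String name, double processingTime) {
        super(name);
        this.processingTime = processingTime;
        addInPort(in);
        addOutPort(out);
    }

    @Override
    public void initialize() {
        passivate();                        // sigma = infinity: wait for input
    }

    @Override
    public void deltext(double e) {         // external transition: a job arrives
        if (currentJob == null) {
            currentJob = in.getSingleValue();
            holdIn("busy", processingTime); // ta(s) = processingTime
        }
    }

    @Override
    public void lambda() {                  // output function, fired right before deltint
        out.addValue(currentJob);
    }

    @Override
    public void deltint() {                 // internal transition: the job is done
        currentJob = null;
        passivate();
    }

    @Override
    public void exit() { }
}
```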
A coupled model has four additional sets: the children components \(C\), the external input (\(EIC\)), external output (\(EOC\)), and internal coupling (\(IC\)) relations. Coupled models represent the aggregation/composition of two or more atomic and coupled models connected by explicit couplings, making DEVS closed under coupling. Closure under coupling allows using networks of systems as components in larger coupled systems, leading to hierarchical, modular construction. Overall, this formalism provides a framework for information modeling that has
several advantages to analyze and design complex systems: completeness, verifiability, extensibility and maintainability.
## 4 Parallel and distributed architecture
Once a system is described according to DEVS theory, it can be easily implemented using one of the many DEVS M&S engines. They all offer a programmer-friendly API to define new models using a high-level language, but only a few provide a user-friendly API for parallel and distributed simulation execution. Among them, xDEVS [10; 29; 28] has recently incorporated a good alternative to parallelize and distribute simulations in the Cloud, following the microservices architecture and containerization paradigms mentioned in the previous section. This section provides a brief introduction to xDEVS, followed by both the parallel and distributed architectures.
### xDEVS
xDEVS is a cross-platform discrete event system simulator that provides a universal DEVS Application Programming Interface (API) both at the modeling and the simulation levels. The API is realized in three widely used object-oriented programming languages: C++, Java, and Python. The repository is made available through an API project at [10], where the project has three principal branches (named c++, java, and python). This framework allows the specification and execution of DEVS models. Based on the DEVS formalism, it has a clear separation between the modeling and simulation layers. A class diagram showing the relationship between these modeling and simulation layers is shown in Figure 2.
DEVS models in xDEVS are created using two main components. Atomic components define the behavior of the system. Coupled components contain other Atomic and Coupled components, creating a model hierarchy. Both of them have Ports, which represent input/output information points. To link two components of the model, a Coupling can be created, selecting the source and destination Ports.
Figure 2: Class diagram of the xDEVS architecture.
The information of Couplings is contained in the Coupled elements that wrap the ports to be linked.
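As an illustration, a coupled model that chains two of the processors sketched above could be written as follows (a hedged sketch; addComponent and addCoupling follow the class diagram of Figure 2, but the exact signatures may differ):

```
import xdevs.core.modeling.Coupled;

// A two-stage pipeline: p1.out -> p2.in. The Coupled element stores the
// coupling information, wrapping the ports to be linked.
public class Pipeline extends Coupled {
    protected JobProcessor p1 = new JobProcessor("p1", 1.0);
    protected JobProcessor p2 = new JobProcessor("p2", 2.0);

    public Pipeline(String name) {
        super(name);
        addComponent(p1);
        addComponent(p2);
        addCoupling(p1.out, p2.in);   // internal coupling (IC)
    }
}
```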
The simulation layer is based on the concept of the Abstract Simulator. Following this concept, we divide the simulation entities into Simulators and Coordinators. Each Simulator is related to an Atomic component. Each Coordinator is attached to a specific coupled model and synchronizes its child Simulators and Coordinators. This results in a hierarchy equivalent to the one described for the modeling layer.
Accordingly, the Coordinator API deals with executing a DEVS coupled model over time. In this paper, we present both the parallel and distributed coordinators, named CoordinatorParallel and CoordinatorDistributed, recently designed to allow simulations in centralized or distributed parallel computing environments.
### Parallel architecture
The xDEVS parallel coordinator executes the sequential coordinator using multiple concurrent threads and is apt for multi-core machines with a shared memory subsystem. An xDEVS parallel coordinator is formed by several thread pools. Each coordinator child, generally a simulator (see Footnote 1), is attached to one of the thread pools.
Footnote 1: By default, the root coupled model is flattened in parallel and distributed simulations. A flattened DEVS model is a model that is reduced to a single-level coupled model containing only atomic models, as the result of a flattening algorithm that preserves the coupling relationships. As a consequence, the root coordinator only manages simulators (for atomic components) and no hierarchical coordinators. This behavior can be changed by the modeler.
Listing 1 shows a code excerpt of a parallel coordinator with a single thread pool. As can be seen when building the hierarchy, a couple of tasks, instances of TaskDeltFcn and TaskLambda, are created for each simulator: one task to run the transition function and another one to run the output function, respectively.
```
protected LinkedList<TaskLambda> lambdaTasks = new LinkedList<>();
protected LinkedList<TaskDeltFcn> deltfcnTasks = new LinkedList<>();
protected ExecutorService executor;

public CoordinatorParallel(SimulationClock clock, Coupled model, int numberOfThreads) {
    super(clock, model, true);
    this.numberOfThreads = numberOfThreads;
    executor = Executors.newFixedThreadPool(numberOfThreads);
}

public void buildHierarchy() {
    super.buildHierarchy();
    simulators.forEach((simulator) -> {
        lambdaTasks.add(new TaskLambda(simulator));
    });
    simulators.forEach((simulator) -> {
        deltfcnTasks.add(new TaskDeltFcn(simulator));
    });
}

public void lambda() {
    executor.invokeAll(lambdaTasks);
    propagateOutput();
}

public void deltfcn() {
    propagateInput();
    executor.invokeAll(deltfcnTasks);
    tL = clock.getTime();
    tN = tL + ta();
}
```
Listing 1: xDEVS parallel coordinator
The DEVS simulation loop basically consists of executing in all the simulators the following:
1. the time advance function,
2. the output function, and
3. the transition function.
The time advance function invokes each simulator for the next time event, so it is not parallelized because of its low complexity. The output and transition functions, on the contrary, can require more CPU time. Thus, these two tasks are fully parallelized in the thread pool. As Listing 1 shows, both the output and transition functions run the corresponding child functions in parallel (through the invokeAll call), with a number of threads defined by the user (in the attribute numberOfThreads).
The modeler can add more thread pools by creating a new parallel coordinator with two or more ExecutorService thread pools. Then, both the lambda and transition functions must be modified following this schema for \(N\) pools (Listing 2):
```
// ...
public void lambda() {
    executor1.invokeAll(lambdaTasks1);
    executor2.invokeAll(lambdaTasks2);
    // ...
    executorN.invokeAll(lambdaTasksN);
}
```
Listing 2: Schema for N pools
It is worthwhile to mention that different pools are executed sequentially, although each one internally is run in parallel. However, having a big thread pool (with many threads) for complex models and a small pool (with a few threads) for lighter models can be interesting in some cases, subject to further investigation.
For the purposes of this research paper, we have created a simple and specific class that loads an XML file, which defines the allocation pool for each atomic
model. Thus, the class creates as many different thread pools as those defined in the XML file, along with the number of threads for each pool.
Note that this parallelization is completely DEVS compliant, since it follows the DEVS simulation algorithm defined in [9]. Thus, we can assure that the results of the parallel simulation will be equivalent, and indeed identical, to those obtained with the sequential simulation. Actually, the same Coupled model can be simulated with the sequential Coordinator class and the parallel CoordinatorParallel class, without changing a single line in the model source code.
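The following sketch illustrates this property by driving the same (hypothetical) Pipeline model from the previous section first sequentially and then in parallel; the constructor signatures follow Listing 1, while package names and the simulate call are indicative:

```
import xdevs.core.modeling.Coupled;
import xdevs.core.simulation.Coordinator;
import xdevs.core.simulation.SimulationClock;
import xdevs.core.simulation.parallel.CoordinatorParallel;

public class RunSequentialAndParallel {
    public static void main(String[] args) {
        // Sequential execution
        Coupled model = new Pipeline("pipeline");
        Coordinator sequential = new Coordinator(new SimulationClock(), model, true);
        sequential.initialize();
        sequential.simulate(Long.MAX_VALUE);  // run until all models passivate

        // Parallel execution of the very same, unmodified model
        Coupled model2 = new Pipeline("pipeline");
        CoordinatorParallel parallel =
                new CoordinatorParallel(new SimulationClock(), model2, 8); // 8 threads
        parallel.initialize();
        parallel.simulate(Long.MAX_VALUE);
    }
}
```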
### Distributed architecture
In the following we provide the details of the design and implementation of the xDEVS distributed simulation engine. Its novelty and strength reside in simplifying the approaches developed during the last decade, to ease the deployment of DEVS-based distributed simulations, agnostic of the heterogeneity of the Cloud solution in use.
#### 4.3.1 Overview
The microservices-based xDEVS distributed simulation execution is explained with the help of the classic Experimental Frame - Processor (EF-P) model [28]. This model, represented in Fig. 3a, contains two components: the Experimental Frame (EF) coupled model and the Processor (P) atomic model. As mentioned above, coordinators and simulators are used to specify the structure of a simulation. Each model (or atomic component) is associated with a component simulator; coupled models are associated with component coordinators. In order to simulate it in the Cloud, this hierarchical model is automatically flattened by xDEVS (see Footnote 1), removing all the intermediate coupled models, in order to obtain a single-level coupled model comprising 3 atomic
models: Generator - Processor - Transducer (GPT). The equivalent model is depicted in Fig. 3b.
Using a configuration file (see Footnote 3), the distributed simulation can be started by typing in the simulation entities (see Footnote 4) anything equivalent to the following calls (Listing 3):
Footnote 3: The configuration file enumerates the atomic models and the IP and port where each model is listening, with an equivalent structure to the parallel configuration file
Footnote 4: By entity we refer to a computer, virtual machine, container, etc.: any virtual or physical device able to simulate an xDEVS model
```
$ # Simulation Entity 2, generator
$ java -cp <classpath> xdevs.core.simulation.SimulatorDistributed gpt.xml generator
$ # Simulation Entity 3, processor
$ java -cp <classpath> xdevs.core.simulation.SimulatorDistributed gpt.xml processor
$ # Simulation Entity 4, transducer
$ java -cp <classpath> xdevs.core.simulation.SimulatorDistributed gpt.xml transducer
$ # Simulation Entity 1, run the Coordinator
$ java -cp <classpath> xdevs.core.simulation.CoordinatorDistributed gpt.xml
```
Listing 3: Execution of a distributed simulation
Agnostic of the cloud deployment, a distributed simulation can be seen as a set of independent processes interconnected through the execution of microservices (wrapping DEVS atomic models) that are requested through socket commands. Figure 4 illustrates the process.

Figure 3: DEVS coupled and equivalent flattened model.
Once the coordinator has been launched, it invokes a command via sockets that executes the output function as a microservice. Each component simulator listens to this command and runs the output function (lambda) of its respective atomic model. Second, the coordinator invokes the command for the propagation of the output, sent and executed by all the component simulators. To avoid further overheads derived from the network communication, value propagation is performed directly between component simulators, without the coordinator acting as a relay between them. After the output propagation, the execution of the transition function is requested, and each component simulator evaluates whether the transition function must be the external, internal, or confluent function, depending on the current simulation time, the state and the external input message at the input ports.
Figure 4: Sequence diagram of the xDEVS distributed simulation based on DEVS abstract simulation protocol.
Finally, the next time event (\(tN\) in Figure 4) is requested to start the DEVS simulation loop again. This is executed until the number of DEVS iterations is reached, or all the models enter into a _passive_ state (i.e., \(\sigma=\infty\)).
As can be seen, the distributed simulation algorithm is based on the fundamental DEVS abstract simulation protocol provided in [9]. The model is always the same in the sequential, parallel and distributed executions, and consequently, the current xDEVS architecture unifies the parallel and distributed simulation of the Parallel DEVS formalism within the xDEVS implementation.
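As a summary, one iteration of the loop in Figure 4 can be sketched as follows; the command names and the remote-call interface are illustrative stand-ins for the actual socket-based Message exchange in xDEVS:

```
import java.util.List;

// Schematic of one iteration of the distributed DEVS simulation loop.
interface RemoteSimulator {
    void send(String command);     // fire-and-forget socket command
    double query(String command);  // command that returns a value (e.g. tN)
}

class DistributedLoopSketch {
    static double step(List<RemoteSimulator> simulators) {
        for (RemoteSimulator s : simulators) s.send("LAMBDA");           // run output functions
        for (RemoteSimulator s : simulators) s.send("PROPAGATE_OUTPUT"); // peer-to-peer value propagation
        for (RemoteSimulator s : simulators) s.send("DELTFCN");          // internal/external/confluent transition
        double tN = Double.POSITIVE_INFINITY;
        for (RemoteSimulator s : simulators)
            tN = Math.min(tN, s.query("TA"));                            // next event time
        return tN; // the coordinator advances the clock to tN and loops,
                   // until the iteration limit is reached or all models passivate
    }
}
```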
#### 4.3.2 Software architecture
The design (Figure 5) is based on a traditional distributed architecture in which each client/server is able to listen, answer and process messages independently and concurrently. The distributed implementation follows the DEVS specification. The coupled model is represented through the CoupledDistributed class, which is the coupled model but with host and port labels attached to each component.
Figure 5: Class diagram to support distributed simulation in xDEVS.
Simulator and coordinator are implemented with the SimulatorDistributed and CoordinatorDistributed classes, respectively. The Message class is implemented to handle the commands sent between coordinator and simulators (see Figure 4) and the content is propagated through the ports (via sockets). Finally, the DistributedTask class has been designed to perform all the coordinator tasks in parallel.
The distributed simulation engine does not need additional libraries or frameworks and its deployment can be easily automated, as described in the next section.
## 5 Deployment
Both the parallel and distributed simulations can be deployed in many computational environments; the parallel simulation requires shared memory, while the distributed one only needs network connectivity. In the case of distributed simulations, each atomic model is executed inside its corresponding component simulator as an isolated process, while a coordinator process marks the beginning and end of the simulation, as described in Figure 4. The communication between these simulators is performed using network sockets. As a result, any distributed architecture is possible. Figure 6 shows some examples. For instance, Figure 6a illustrates a simple GPT deployment using only Virtual Machines, a more traditional approach. Figure 6b, on the other hand, illustrates the same distribution but with containers inside the Virtual Machines. Finally, the example shown in Figure 6c is the one used in this work, where the set of containers is managed by a Kubernetes cluster. These architectures are feasible to deploy in the cloud using services from well-known infrastructure providers (Google, Amazon, and Microsoft Azure, among others), whose offerings are similar or at least rely on standard virtualization tools such as Docker and Kubernetes. For the purposes of this research, we have selected the Google Cloud Platform services; in particular, we have used a single virtual machine for the parallel simulations, and for the distributed simulations a cluster of containers automatically deployed through the Google Kubernetes Engine
(GKE).
Figure 7 shows the steps that must be followed to execute a parallel or distributed simulation. This process is derived from our earlier DEVS/SOA deployment mechanisms [24].
In the first phase, an XML description of a flattened version of the original model is generated with xDEVS. This text file contains all the atomic models and coupling relations obtained after rearranging the connections of the coupled models, which are removed by default to facilitate the deployment [23] and reduce simulation overheads. This text file, in addition to the traditional DEVS attributes (component names, port names, connections, etc.), also contains a host address that identifies a simulation entity (to be deployed in a computational execution environment) and a communication endpoint (also named port) for the case of distributed simulation deployment, and a thread pool name for the parallel simulation deployment. Since the generation of the text file is automated, it initially assigns a single host name and endpoint, or a single thread pool, to all components.
Figure 6: Possible architectures for the deployment of distributed simulations.
Figure 7: Cloud deployment scheme
However, this file can be edited to change the default behavior.
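The exact XML schema is not reproduced here; a purely hypothetical excerpt, consistent with the attributes just described (component and port names, couplings, host address, communication endpoint and thread pool name), could look like this:

```
<!-- Hypothetical deployment excerpt for the flattened GPT model.
     host/mainPort drive the distributed deployment; threadPool drives
     the parallel one. Attribute names are illustrative. -->
<coupled name="gpt">
  <atomic name="generator"  host="10.0.0.2" mainPort="5000" threadPool="L2"/>
  <atomic name="processor"  host="10.0.0.3" mainPort="5000" threadPool="L1"/>
  <atomic name="transducer" host="10.0.0.4" mainPort="5000" threadPool="L2"/>
  <connection componentFrom="generator" portFrom="out"
              componentTo="processor"   portTo="in"/>
  <connection componentFrom="processor" portFrom="out"
              componentTo="transducer"  portTo="in"/>
</coupled>
```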
Although all the atomic models can be allocated to a single container or a single thread pool, this option is rarely practical, since a DEVS model can contain hundreds of atomic models with a huge variety of computational weight in terms of CPU cycles. Thus, editing this initial text file, as Figure 7 shows, allows us to group several atomic models per container set (distributed deployment) or thread pool (parallel deployment). Figure 7 shows the 2-level allocation policy used in this paper (see the sketch after the list below). This allocation distributes the atomic models (\(A_{i}^{j}\in\mathbf{A}\)) over two container sets or two thread pools. Those with high computational demands are placed at level 1 (\(L_{1}\)), from \(r_{1}\) to \(r_{n}\). The remaining atomic models are distributed over level 2 (\(L_{2}\)), from \(r_{n+1}\) to \(r_{n+m}\). Here \(r_{i}\) represents a computational resource: a container in the case of the distributed simulation following the scheme of Figure 6c, or a single thread in the case of a parallel simulation. In any case, one or more atomic models can be allocated in each resource. In general, \(n>m\) in order to follow a coarse-grain allocation policy that:
1. exploits the modeler knowledge about which atomic models consume more CPU, and
2. avoids a computing-intensive profiling phase.
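A minimal Java sketch of this coarse-grain policy, assuming per-model CPU-time estimates are available (all names and the 25% threshold are illustrative), could be:

```
import java.util.*;

// Split the atomic models into two levels: the heaviest ones (by
// estimated CPU seconds) go to L1, the rest to L2.
class AllocationSketch {
    static Map<String, List<String>> allocate(Map<String, Double> cpuSeconds,
                                              double heavyFraction) {
        List<String> byCost = new ArrayList<>(cpuSeconds.keySet());
        byCost.sort((a, b) -> Double.compare(cpuSeconds.get(b), cpuSeconds.get(a)));
        int nHeavy = (int) Math.ceil(heavyFraction * byCost.size());
        Map<String, List<String>> levels = new LinkedHashMap<>();
        levels.put("L1", byCost.subList(0, nHeavy));             // slowest models
        levels.put("L2", byCost.subList(nHeavy, byCost.size())); // remaining models
        return levels;
    }
}
```

For the experiments of Section 6, `allocate(estimates, 0.25)` would reproduce the 25%/75% split used there.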
The second phase, after the allocation policy is completed, depends on the simulation type. In the parallel case, the model is simply simulated with the CoordinatorParallel class (see step 2a in Figure 7), creating the specified thread pools, and the simulation results are obtained. In the distributed case, a parser reads the XML file and generates an architecture-specific script as a YAML deployment file (see step 2b in Figure 7). This file describes the distributed simulation deployment structure, including the pod configurations, their inner containers, and the ports opened in these containers to communicate the different models over the network. In this case, we specify single-container pods. Note that the design of this parser is straightforward (it simply consists of reading an XML file and generating a YAML file) and can be adapted to other service providers.
In the third (distributed) phase, the pods specified in the YAML file are created in the selected cloud platform, and the model is deployed as described by the allocation policy. In this step, the atomic models are distributed over the containers, instantiating the suitable simulator processes per the DEVS simulation protocol. Therefore, each container executes one or more distributed xDEVS simulators, each one with its corresponding atomic model. Besides, one particular container runs the distributed xDEVS root coordinator. Typically, once the simulation ends, the results are stored in a distributed way, as each atomic model can have a different mechanism to save its data. A recommended approach for unifying these data is to have different Transducer atomic models [9], collecting the relevant information and storing it in the suitable repositories.
The main difference between the parallel and distributed simulations is that in the case of parallel simulations, two or more thread pools are executed sequentially, one after the other, although each thread pool is parallel of course. In the case of the distributed simulation, each simulator is an independent full process, which demands a lot of dedicated memory, but the simulation is intrinsically parallel, independent of the number of containers used.
## 6 Evaluation
In this section, we evaluate both the parallel and distributed coordinators of the xDEVS simulation engine. This is performed through the DEVStone benchmark. To the best of our knowledge, this is the first time the DEVStone benchmark is used with a delay in the transition functions to measure the performance of discrete event simulation engines. Including the delay aspect is essential to evaluate the impact of the model's execution on CPU load. We first describe the DEVStone benchmark and how the delay is introduced. Next, we perform an analysis of the synthetic delay distribution, selecting a DEVStone model class to assign different delay weights to the set of atomic models. Once the
delays are assigned, we proceed with the analysis of the parallel and distributed simulations, and provide the comparison results.
### The DEVStone benchmark
DEVStone [30] is a synthetic benchmark devoted to automating the evaluation of DEVS-based simulation approaches. It allows the generation of different types of models, each of them specialized in measuring specific aspects of the simulation. This benchmark has become popular over the years, and has been used extensively in the literature to evaluate and compare the performance of different DEVS simulators [29; 31].
DEVStone describes several synthetic models that can be configured to vary their size and complexity. With this aim, a recursive structure with configurable depth, where all the levels contain equivalent components and interconnections, is presented. The customization of the models is done through the use of four parameters: (i) _width_, which determines the number of components (1 coupled and \(width-1\) atomic models) per layer, (ii) _depth_, which specifies the number of nested coupled models, (iii) _internal transition delay_, and (iv) _external transition delay_. According to the DEVStone specifications, these two delay times are spent executing Dhrystones [32] to keep the CPU busy. It is worthwhile to mention that in this work we compute this delay as **CPU time**, i.e., the Dhrystone benchmark loop executes iterations as long as the CPU time consumed (not the wall-clock time) is less than \(\Delta_{\text{int}}\) in the internal transition function, or \(\Delta_{\text{ext}}\) in the external transition function. It is important to measure CPU time because otherwise the CPU could run hundreds of simultaneous transition functions consuming the corresponding \(\Delta_{\text{int}}\) and \(\Delta_{\text{ext}}\) wall-clock delays, without being forced to keep each transition function on the CPU for the specified time.
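The following is a minimal sketch of such a CPU-time-bounded delay, where a synthetic workload stands in for the Dhrystone loop (the workload itself is illustrative):

```
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Keeps the current thread on the CPU until it has actually consumed
// deltaSeconds of CPU time (not wall-clock time).
class CpuTimeDelay {
    private static final ThreadMXBean BEAN = ManagementFactory.getThreadMXBean();

    static void consume(double deltaSeconds) {
        long target = BEAN.getCurrentThreadCpuTime()     // nanoseconds of CPU time
                    + (long) (deltaSeconds * 1e9);
        long sink = 0;
        while (BEAN.getCurrentThreadCpuTime() < target) {
            sink = sink * 31 + 17;                       // synthetic busy work
        }
        if (sink == 42) System.out.println();            // defeat dead-code elimination
    }
}
```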
The behavior of a DEVStone model is conducted by the distribution of its DEVStone atomic models. The DEVS specification of a DEVStone atomic model is shown in Algorithm 1.
DEVStone describes four types of models (depicted in Figure 8):
```
Require: NUM_DELT_INTS, NUM_DELT_EXTS and NUM_OF_EVENTS are global variables that store the total number of internal transition functions, external transition functions and events triggered inside the whole model. Δint and Δext are the delays introduced in the internal and external transition functions, respectively.

function [list, phase, σ] = init()
    list = []          {list is part of the state, and stores all the events received by this atomic model}
    σ = ∞

function [list, phase, σ] = δint(list, phase, σ)
    NUM_DELT_INTS = NUM_DELT_INTS + 1
    Dhrystone(Δint)
    list = []
    σ = ∞

function [list, phase, σ] = δext(list, phase, σ, e, X^b)
    NUM_DELT_EXTS = NUM_DELT_EXTS + 1
    Dhrystone(Δext)
    values = X^b(in)   {X^b(in) is a list containing all the events waiting in the "in" input port}
    NUM_OF_EVENTS = NUM_OF_EVENTS + values.size()
    list = [list; values]   {concatenate both lists}
    phase = "active"
    σ = 0

function [list, phase, σ] = δcon(list, phase, σ, ta(s), X^b)
    δext(δint(list, phase, σ), 0, X^b)

function λ()
    send("out", list)  {sends the whole list through the "out" output port}

function σ = ta(list, phase, σ)
    σ = σ
```
**Algorithm 1** DEVStone atomic model
* **LI** (Low level of Interconnections) models are the simplest models, with a low level of coupling relations in their coupled models (Figure 8a).
* **HI** (High Input couplings) models are similar to LI models, but increase the number of internal couplings (Figure 8b).
* **HO** (HI model with numerous Outputs) models are a variation of the HI models where all the atomic components in each coupled module are connected to the coupled output port. It is worth noting that these models present unconnected ports that may serve to detect malfunctioning in the simulators when cleaning the values of ports without couplings (Figure 8c).
* **HOmod** models reproduce an exponential growth in couplings and outputs (Figure 8d).
Analyzing the publications that study the performance of DEVS simulation engines through DEVStone, we may find that the HO set offers a good balance between CPU and memory usage [29, 33]. As a result, we use the HO set of DEVStone models to evaluate the performance of our DEVS parallel and distributed simulation engines. In HO, the deepest coupled model is formed by one single atomic model. As Figure 8c shows, the remaining coupled models are constituted by 1 coupled model, a chain of \(w-1\) atomic models, and a set of \(k=1\ldots w-1\) chains formed by \(\sum_{i=1}^{k}i\) atomic models. The second external input port is connected to the whole first row and only to the first atomic component in the remaining rows. Additionally, all the atomic models in the second row are connected to the first row, which in turn sends the whole output directly to the coupled component. Finally, each remaining atomic component is connected to its upper component. The computation of the total number of atomic models, couplings, executions of transition functions and number of events propagated is quite straightforward [29]:
\[\#\text{Atomic} = 1+(d-1)\cdot(w-1) \tag{1}\]
Figure 8: DEVStone models internal structure.
\[\#\text{EIC} = 1+(d-1)\cdot(w+1) \tag{2}\]
\[\#\text{IC} = (d-1)\cdot(w-2) \tag{3}\]
\[\#\text{EOC} = 1+(d-1)\cdot w \tag{4}\]
\[\#\delta_{\text{int}} = 1+(d-1)\cdot\sum_{i=1}^{w-1}i \tag{5}\]
\[= 1+(d-1)\cdot\frac{w^{2}-w}{2} \tag{6}\]
\[\#\delta_{\text{ext}} = 1+(d-1)\cdot\frac{w^{2}-w}{2} \tag{7}\]
\[\#\text{Events} = 1+(d-1)\cdot\frac{w^{2}-w}{2} \tag{8}\]
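For instance, for the squared HO model used below (\(w=d=15\)), these formulas give:

\[\#\text{Atomic} = 1+14\cdot 14 = 197,\qquad \#\delta_{\text{int}} = \#\delta_{\text{ext}} = 1+14\cdot\frac{15^{2}-15}{2} = 1471.\]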
### Profiling of the benchmarks
First, HO model size and transition delays must be fixed to reach a good trade-off between the number of atomic models and the simulation time.
To find these values, we have performed the profiling of different HO model parameters. This section shows the results obtained and proceeds with the selection of one HO model. All the profiling was performed on a virtual machine with 4 Intel(R) Xeon(R) vCPUs @ 2.8 GHz and 32 GiB RAM, based on the N2 Google Cloud configuration series, running the Debian GNU/Linux 10 operating system and OpenJDK 11. This is the minimum node able to run distributed simulations, so it is fixed as the base machine for the sequential, parallel and distributed experiments.
For the sake of clarity, we have first considered _squared_ models, i.e., HO width equal to HO depth (\(w=d\)). We have tested six different sizes \(w=10,11,\ldots,15\), which lead to 82, 101, 122, 145, 170, and 197 atomic models respectively, following (1). Next, we have defined the external transition delay equal to the internal transition delay in each atomic model (\(\Delta_{\text{int}}^{i\in\mathbf{A}}=\Delta_{\text{ext}}^{i\in\mathbf{A}}=\Delta_{i\in\mathbf{A}}\)). Then, each \(\Delta_{i}\) has been defined using the following seven configurations (see the sampling sketch after this list):
* A constant value for all the atomic models: \(\Delta_{i\in\mathbf{A}}=k\) seconds. We have performed three tests with \(k=1,2,3\).
* A random value using a uniform real distribution: \(\Delta_{i\in\mathbf{A}}=N(0,k)\) seconds. Again, we have performed three tests with \(k=1,2,3\).
* A random value using a chi square distribution: \(\Delta_{i\in\mathbf{A}}=\chi^{2}(f)\) seconds, with \(f=2\).
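As a reference for how these delays could be drawn with a fixed seed, the following is a hedged sketch (the seed value and method names are illustrative; the \(\chi^{2}(2)\) variate is obtained as the sum of two squared standard normals):

```
import java.util.Random;

// Samples the three delay configurations used in the profiling.
class DelaySampler {
    private final Random rng = new Random(12345L); // fixed seed: repeatable runs

    double constant(double k) { return k; }                    // Delta_i = k
    double uniform(double k)  { return k * rng.nextDouble(); } // Delta_i = N(0,k) in the paper's notation
    double chiSquare2() {                                      // Delta_i = chi^2(2)
        double a = rng.nextGaussian(), b = rng.nextGaussian();
        return a * a + b * b;
    }
}
```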
Figure 9 depicts the simulation times of the six different HO model sizes for the seven different configurations of the transition delays. In order to allow the repetition of the parameter configuration in the parallel and distributed experiments, we have fixed the random seed. Regardless of the HO model size, the slowest simulations corresponded to \(\Delta_{i}=3\), followed by \(\Delta_{i}=\chi^{2}(2)\) or \(\Delta_{i}=2\), and then \(\Delta_{i}=N(0,3)\), \(\Delta_{i}=N(0,2)\), \(\Delta_{i}=1\), and \(\Delta_{i}=N(0,1)\).
HO models with \(\Delta_{i}=3\) and \(w=15\) were simulated in 8826.06 seconds. On the other hand, HO models with \(\Delta_{i}=N(0,1)\) and \(w=10\) were simulated in 424.58 seconds. As a result, HO models with width equal to 15 seem to be an appropriate model size (197 atomic models) to perform different parallel and distributed simulations with a significant number of thread pools and containers, respectively. To select one of the seven distributions for the delays, we have checked the simulation times consumed by each atomic model. To this end, we have examined \(\Delta_{i}=2\), \(\Delta_{i}=N(0,3)\), and \(\Delta_{i}=\chi^{2}(2)\), since they offer equivalent simulation times (range 4000-6000 seconds) and represent the three distribution classes (constant, uniform and chi square).
Figure 9: HO simulation times.
Figure 10 illustrates the results. For the sake of clarity, we have not labeled the atomic models, but have ordered them from higher to lower simulation time consumed by each one. Table 1 shows the most representative atomic models of each distribution. Following the numbering scheme in Figure 8, an atomic component is labeled as \(A_{i}^{j}\), where \(j\) is the coupled model to which the atomic model belongs (with 1 the root HO coupled model and \(d=w=15\) the last one), and \(i\in\{1,\ldots,w-1=14\}\) is the position of the atomic component in the \(j\)-th coupled component's chain of models.
The constant distribution (\(\Delta_{i}=2\)) gives a well-known behavior. Since each atomic model \(A_{i}\) receives \(i\) events in cascade, a total of \(i\) external and \(i\) internal transition functions are executed (\(2\times i\) transitions). Thus, the simulation time consumed by the transition functions of \(A_{i}\) is approximately \(2\times i\times\Delta_{i}=4\times i\). There are then 14 atomic models \(A_{14}^{*}\) consuming 56 seconds, 14 atomic models \(A_{13}^{*}\) consuming 52 seconds, and so forth.
Figure 10: Simulation times consumed by the atomic models using three different distributions and constant size (\(w=15\)).
The uniform distribution \(N(0,3)\) gives a minimum value of 0, a maximum value of 3, and a mean value of 1.5, equally distributed. Thus, we can expect a maximum consumption of \(2\times 14\times 3=84\) seconds, close to \(t(A_{14}^{5})=81.58\) seconds in Table 1, and then a smooth linear drop, equivalent to the one observed in the constant distribution.
The chi square distribution gives a minimum value of 0, statistically a maximum value of 10.60, and a mean value of 2. There are a few models with high \(\Delta_{i}\) values, since the distribution is slightly unbalanced. We can expect a maximum consumption of \(2\times 14\times 10.60=296.80\) seconds, again close to the \(t(A_{14}^{5})=282.20\) seconds in Table 1, and then a brief abrupt drop, followed by a smooth descent.
Since the chi square distribution shows more variability, it covers well the spectrum of simulations we would like to analyze. With such a variety of simulation times, we can better analyze the impact of the number of threads or containers deployed for the distributed simulation, as well as the number of distributed atomic models allocated in them.
### Parallel simulation
In the following we show the results obtained by the parallel simulations. To this end, we have configured several experiments following the deployment illustrated in Figure 7 with two thread pools, and additionally a parallel execution with a single thread pool. The hardware resource management for each thread pool has been left to the operating system.
We have used the distributed simulation as a reference to set up the baseline virtual machine. As a consequence, we have tested the parallel simulations using a 4 Intel(R) Xeon(R) CPU @ 2.8 GHz and 32 GB RAM, which is a minimum node able to support a distributed simulation, and a 32 Intel(R) Xeon(R) CPU @ 2.8 GHz and 256 GB RAM, since we have accumulated up to eight nodes in the distributed simulation, both with the Debian GNU/Linux 10 operating system and OpenJDK 11.
| **Name** | \(t_{i\in\mathbf{A}}\) = time(\(\delta_{\text{ext}}\)) + time(\(\delta_{\text{int}}\)) | **% of \(\sum t_{i}\)** |
| --- | --- | --- |
| \(\Delta_{i}=2\) | | |
| \(A_{14}^{7}\) | 56.00 | 1 |
| \(A_{14}^{1}\) | 56.00 | 1 |
| \(A_{14}^{13}\) | 56.00 | 1 |
| \(A_{14}^{12}\) | 56.00 | 1 |
| … | … | … |
| \(A_{1}^{10}\) | 4.00 | 0 |
| \(A_{1}^{5}\) | 4.00 | 0 |
| \(A_{1}^{7}\) | 4.00 | 0 |
| \(\Delta_{i}=N(0,3)\) | | |
| \(A_{14}^{5}\) | 81.58 | 2 |
| \(A_{12}^{6}\) | 64.38 | 1 |
| \(A_{14}^{13}\) | 59.63 | 1 |
| \(A_{1}^{8}\) | 58.24 | 1 |
| … | … | … |
| \(A_{1}^{9}\) | 1.71 | 0 |
| \(A_{1}^{10}\) | 1.65 | 0 |
| \(A_{1}^{14}\) | 0.89 | 0 |
| \(\Delta_{i}=\chi^{2}(2)\) | | |
| \(A_{14}^{5}\) | 282.20 | 5 |
| \(A_{14}^{3}\) | 158.42 | 3 |
| \(A_{13}^{8}\) | 119.40 | 2 |
| \(A_{12}^{6}\) | 110.19 | 2 |
| … | … | … |
| \(A_{1}^{1}\) | 1.42 | 0 |
| \(A_{2}^{5}\) | 1.37 | 0 |
| \(A_{1}^{14}\) | 0.65 | 0 |

Table 1: Simulation times consumed by the most representative atomic models using three different distributions and constant size (\(w=15\)).
In a first set of experiments, we have used two thread pools. The first pool was defined to run the 25% slowest atomic models (49 in total), whereas the other pool was used to allocate the rest of them (149 in total, including the generator of the initial trigger event). The idea is to prove that giving resources to the slowest models (more threads, i.e., \(n>m\) in Figure 7) yields a better improvement in performance. In a second set of experiments we used a single thread pool, where the computational load was balanced by allocating the heaviest models to different threads. To compute the speedup, we used the sequential simulation as the reference execution time, i.e., 5896.54 seconds. We varied the number of threads in each pool to analyze the effects of resource allocation.
On the one hand, Figure 11 illustrates the results obtained for the 4 vCPU virtual machine. Bar labels have the form \(i\times j\), where \(i\) represents the number of threads in the high-priority pool (\(L_{1}\) as in Figure 7), and \(j\) represents the number of threads in the low-priority pool (\(L_{2}\) as in Figure 7). Figure 11a shows the speedup when the number of threads managed by the \(L_{1}\) pool is increased. As can be seen, the maximum speedup (1.78\(\times\)) is obtained when the \(L_{1}\) pool uses a number of threads equal to the number of CPUs. Figure 11b shows the same effect but varying the number of threads managed by the \(L_{2}\) pool. However, the maximum performance in this case (1.43\(\times\)) is reached when the number of threads in the \(L_{2}\) pool is equal to the number of fast atomic models (149). This is because these models have a low computational weight, and 4 CPUs are enough to handle the transition delays without difficulties. Comparing Figures 11a and 11b, we can observe that the speedup reached when more resources (threads) are given to the slower models is 24.48% greater. These two figures confirm our \(n>m\) hypothesis, since more threads for the \(L_{1}\) pool produce a higher speedup improvement. After that, we have looked for a sub-optimal configuration, fixing the optimal number of threads of the \(L_{1}\) pool (4) and varying the number of threads in the \(L_{2}\) pool. As Figure 11c shows, the speedup is significantly higher (3.88\(\times\) vs. the previous 1.78\(\times\)). Finally, we ran simulations using a single thread pool, varying the number of threads. As Figure 11d shows, the speedup obtained here is the best one (\(3.91\times\)). This is because when we used two thread pools, each one was executed in parallel but one pool after the other, in sequence. With one single thread pool, all the transition functions are executed in parallel, and the linear improvement of the speedup is only limited by the number of CPUs and Input/Output operations, if any. Note that the speedup peak is always reached when the number of threads is equal to the number of CPUs. Although the fourth case, with a single thread pool level, reaches the best speedup, we think, based on our experience, that in some real-world simulations giving resources to the slowest models can be interesting, especially when the difference between the simulation times of the slowest and fastest models is too high.
As mentioned above, the distributed simulation used up to eight 4 vCPU 32 GiB nodes. Therefore, we have repeated the previous parallel experiments on \(8\times 4=32\) Intel(R) Xeon(R) CPUs @ 2.8 GHz and \(8\times 32=256\) GiB RAM. Figure 12 depicts the results, which are qualitatively the same.
Figure 11: Resources (threads) distribution and speedups for the 4 vCPU parallel simulation.
When augmenting the number of threads in the \(L_{1}\) pool (Figure 12a), the maximum speedup, \(2.19\times\), was obtained when the number of threads was equal to the number of slowest models: 49, which means that 32 CPUs were able to handle all these models. The same happened when the resources went to the fast pool, i.e., the maximum speedup, \(1.63\times\), was reached with a number of threads equal to the number of fast models: 149 (Figure 12b). As can be derived from the two previous figures, the larger the \(n>m\) imbalance, the higher the speedup. The sub-optimal approach (Figure 12c) obtained an extraordinary improvement compared to the 4 vCPU virtual machine, reaching \(14.33\times\). Additionally, as Figure 12d illustrates, the balanced speedup in this case is much better than in the previous ones, \(15.94\times\). However, there is a loss of efficiency from 4 vCPU to 32 vCPU, since \(15.94<8\cdot 3.91\).
Figure 12: Resources (threads) distribution and speedups for the 32 vCPU parallel simulation.
### Distributed simulation
In this section we analyze the computational cost of the distributed simulations based on the containers distribution policy and architecture described earlier in Section 5.
To this end, we have followed an incremental container strategy, similar to the one used in the parallel approach, using a two-level queue (instead of thread pools) for the allocation of containers (instead of threads), labeled \(L_{1}\) and \(L_{2}\) in Figure 7. It is worth recalling that any allocation policy can be used by editing the XML file describing the flattened model structure and the containers where each atomic model is placed. As aforementioned, \(L_{1}\) has been reserved for atomic models with high computational demands (i.e., slower atomic models), whereas level \(L_{2}\) is used to allocate the remaining models (i.e., faster atomic models). In the first set of experiments we increased the number of containers in \(L_{1}\), allocating one single container in \(L_{2}\), giving more resources to the models with higher computational cost. Once we found the optimal number of containers in \(L_{1}\) (i.e., where there is no more margin for performance improvement), we then increased the number of containers in \(L_{2}\), as we did in the parallel approach, to find the 2-level sub-optimal configuration. In the second set of experiments, we simply use one single level to allocate all the containers, balancing the distribution of atomic models among them. For executing the distributed simulations, we have used a GKE cluster with 8 n2-highmem-4 nodes. These nodes have 4 Intel(R) Xeon(R) vCPUs @ 2.8 GHz and 32 GiB of RAM.
Figure 13 depicts the results of this analysis in terms of speedup. As in the parallel simulations, bars are labeled as \(i\times j\), where \(i\) represents the number of containers (pods) created in \(L_{1}\) and \(j\) the pods created in \(L_{2}\). As we can see in the blue bars of Figure 13, the speedup increases as the number of pods created for slower models in \(L_{1}\) increases, reaching a maximum value of \(0.71\times\) at \(7\times 1\). However, the results suddenly become worse from the \(8\times 1\) distribution onward. This is because of the number of nodes present in the cluster: while the \(7\times 1\) scenario distributes exactly one pod per node, the following scenarios present nodes with multiple pods. The speedup is less than 1 in this case. This is because the distributed simulation differs from the parallel one mainly in that all the 198 simulators are executed as independent Java Virtual Machine (JVM) processes, independently of the number of pods, i.e., the memory resources needed to run the distributed simulation are significantly higher than in the parallel solution. As in the \(8\times 1\) configuration there are more containers than nodes, there are also fewer resources for the execution of the 149 fastest models, abruptly increasing the execution time and decreasing the speedup in consequence. As a result, the optimal number of containers in \(L_{1}\) with a single container in \(L_{2}\) is reached when there are 7 containers in \(L_{1}\). Beyond this number, there is no benefit in increasing the number of containers in \(L_{1}\) without increasing the number of containers in \(L_{2}\).
After that, the number of containers in \(L_{1}\) is fixed to the optimal value and the \(L_{2}\) size is increased, looking for a sub-optimal configuration as in the parallel case. This can be seen in the green bars of Figure 13, which show that increasing containers in \(L_{2}\) also improves the performance notably, with a maximum speedup value of \(1.84\times\) at \(7\times 15\). This fact does not reinforce our \(n>m\) hypothesis, because the 149 Java Virtual Machine (JVM) instances consume significant amounts of memory and become a bottleneck, favoring a higher value for \(m\), which also explains the poor speedup value.
Finally, we have used a single container level, i.e., one single level to allocate all the atomic models, balanced according to their delays. The yellow bars in Figure 13 show that this configuration can give up to \(1.45\times\).
Figure 13: Distributed simulations speed-ups depending on the number of pods (\(L_{1}\times L_{2}\)).
Again, the speedup increases with the number of pods, until those are approximately equal to the number of nodes. In the distributed version, the benefits of dividing the resources into levels are not as clear as in the parallel version, since, as stated above, (a) all the atomic models are executed concurrently, and (b) one of the levels can act as a bottleneck when the resources (mainly memory) reserved for that level are insufficient. This inefficiency might be attributed to propagation issues, as is often the case in distributed simulations. However, we confirmed that the bottleneck is not the communication between nodes, because setting the delays equal to 0 seconds (\(\Delta_{i}=0\)), the speedups obtained by the 32 CPU parallel machine and the distributed version were equivalent (21\(\times\) vs. 19\(\times\)). Future work includes the study of mechanisms to alleviate the weight of the JVM processes.
### Parallel vs. Distributed
In order to compare our parallel and distributed architectures, four metrics must be considered: performance, cost, cost/performance, and underlying hardware. Table 2 shows the best speedup obtained by the sub-optimal and balanced configurations and the monthly cost of the nodes used for the parallel and distributed simulations.
As Table 2 shows, the best performance is obtained with the Parallel 32 vCPU balanced configuration, nearly 16 times faster than the sequential simulation.
| | **Sub-optimal** | **Balanced** | **$/month** |
| --- | --- | --- | --- |
| Parallel 4 vCPU | 3.88 | 3.91 | 168.38 |
| Parallel 32 vCPU | 14.33 | 15.94 | 1413.63 |
| Distributed 8 × 4 vCPU | 1.84 | 1.45 | 1347.01 |

Table 2: Maximum speedup and monthly cost of the parallel and distributed simulations.

With respect to cost, the cheapest solution is of course the 4 vCPU parallel approach, with a monthly cost of $168, as can be seen in Table 2. It is followed by the Kubernetes cluster, with $1347/month. Finally, the 32 vCPU virtual machine is the most expensive solution, at $1414/month. The cost/performance of a single speedup point (taking the best speedup of each configuration) is $43, $89 and $732 for the 4 vCPU VM, the 32 vCPU VM and the distributed solution, respectively. Clearly, the distributed infrastructure is completely saturated and should not be pursued from a cost-benefit standpoint.
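These figures follow directly from Table 2 by dividing the monthly cost by the best speedup of each configuration:

\[\frac{168.38}{3.91}\approx 43,\qquad \frac{1413.63}{15.94}\approx 89,\qquad \frac{1347.01}{1.84}\approx 732\quad(\$/\text{speedup point}).\]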
Finally, regarding the underlying hardware, the distributed solution is more flexible, since it supports heterogeneous architectures: the simulation is based on a socket distributed application, compatible with any hardware distribution. The parallel solution is only valid for systems with shared memory and a homogeneous architecture.
There is still much work to do in the field of distributed simulations. Obviously, memory management by independent distributed processes is a huge bottleneck that must be alleviated. In any case, regarding the possible difficulties around the distributed setup, the M&S framework presented in this paper provides a unified sequential, parallel, and distributed solution that facilitates the deployment of any configuration, being completely focused on model immutability and automated deployment.
## 7 Conclusions and future work
Simulation is an activity of running a simulator in a computational environment. This computational environment has evolved over time. Today it consists of varied options such as local desktop, distributed network, multi-core, virtualized infrastructure, HPC infrastructure and cloud-enabled containerized environment. An extensive and scalable simulation architecture must be able to execute in any computational environment in a seamless manner. Unfortunately, most simulation architectures are not designed to be extensible and scalable, especially when a large number of simulation runs is needed from a model that was designed for local desktop execution and is unable to run in other high-performance environments. It is a well-known fact that sequential programs designed for a single CPU receive no benefit from execution on a multi-core CPU. The same is true for simulation architectures.
The problem is more compounded when the model formalism is tightly coupled with the simulation architecture and both need to be rewritten for execution in a different computational execution environment than the original.
The DEVS formalism categorically separates the modeling and simulation layers so that the simulation architecture is transparent to the model architecture and both can evolve horizontally. Over the past 15 years, the work by Mittal and Martin has demonstrated this aspect of executing DEVS models and various Domain Specific Models (DSMs), with their DEVS mappings, over transparent simulation architectures. This paper has provided evidence that advances their earlier work, with the xDEVS M&S simulation engine capable of deploying the simulator in a parallel multi-core architecture and in a distributed networked architecture in a seamless manner. We have described a unifying architecture incorporating two DEVS coordinators that run the same DEVS model in both parallel and distributed architectures. While this basic concept of having different coordinators for different deployment platforms was introduced in Zeigler's text [9], it needed some improvements for its usage in cloud-enabled platforms. These two coordinators were further deployed in a cloud-enabled containerized environment, making the simulation infrastructure truly transparent to the model. Both parallel and distributed implementations are DEVS-compliant. This assures that the sequential, parallel and distributed simulations provide exactly the same results.
We described the performance evaluation of the Parallel simulation coordinator and the Distributed simulation coordinator using the DEVStone benchmark, obtaining a 16-fold speedup with the Parallel coordinator and a 1.84-fold speedup with the Distributed coordinator for a given hardware configuration. For parallel simulation, we achieved the following:
1. Confirmed our hypothesis that assigning more threads to the thread pool that contains CPU-intensive models produces a higher speedup.
2. The speedup peak is achieved when the number of threads in a thread pool is equal to the number of CPUs.
3. The cost-benefit factor is much higher as compared to the distributed simulation performance use case.
These results demonstrate that the parallel execution of any DEVS model should be preferred over any distributed execution. This conclusion extends to distributed simulation in general, which cannot match the results obtained by parallel architectures.
Distributed computing architectures predate multi-core parallel computing architectures. The motivation for distributed computing (before the ubiquitous Internet), which was to connect geographically distributed entities to solve a complex problem, has now given way to parallel computing, wherein the resources are made available either in HPC or cloud environments and are transparently available for use. Accordingly, M&S architectures (both legacy and upcoming) must evolve to benefit from cloud-enabled parallel computing architectures. Adhering to formalisms such as DEVS (and the associated xDEVS implementations), which provide a sound basis for composable M&S architectures, is the preferred way forward. The various algorithms, features and APIs developed in the xDEVS framework provide ease of use, extensibility and scalability to any DEVS simulation. xDEVS has been reported as the most efficient DEVS simulator to date [29], and this work extends its capability to cloud-enabled parallel and distributed simulation.
While parallel simulation architectures provide the speedup needed to run optimizations and analyses on a model in a high-performance environment, distributed simulation architectures will continue to find their niche in the training, testing and evaluation of interoperability in Live, Virtual and Constructive environments, and in the integration of new systems for human-in-the-loop experimentation.
### Future Work
We established the case for the increased usage of parallel architectures as compared to distributed architectures. However, there is value in bringing these two architectures together. Future work includes the exploitation of such hybrid deployments, where distributed nodes can perform parallel simulations. This would require modifying the xDEVS modeling layer and demands a major development effort. We are also considering a comparative study with other DEVS simulation engines, once they incorporate a parallel interface and a supporting unifying architecture. Finally, though we have artificially added CPU stress to focus on computing performance, data exchange and network latency analysis is also of great interest.
## Acknowledgments
This project has been partially supported by the Education and Research Council of the Community of Madrid (Spain), under research grant S2018/TCS-4423, and by the Google Cloud Research Credits program with the award GCP19980904.
## Disclaimer
The author's affiliation with The MITRE Corporation is provided for identification purposes only, and is not intended to convey or imply MITRE's concurrence with, or support for, the positions, opinions or viewpoints expressed by the author(s). (c)2021 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for Public Release. Distribution Unlimited. Case Number 21-02817-1.
|
2308.04172 | Predicting Drug-Drug Interactions Using Knowledge Graphs | In the last decades, people have been consuming and combining more drugs than
before, increasing the number of Drug-Drug Interactions (DDIs). To predict
unknown DDIs, recently, studies started incorporating Knowledge Graphs (KGs)
since they are able to capture the relationships among entities providing
better drug representations than using a single drug property. In this paper,
we propose the medicX end-to-end framework that integrates several drug
features from public drug repositories into a KG and embeds the nodes in the
graph using various translation, factorisation and Neural Network (NN) based KG
Embedding (KGE) methods. Ultimately, we use a Machine Learning (ML) algorithm
that predicts unknown DDIs. Among the different translation and
factorisation-based KGE models, we found that the best performing combination
was the ComplEx embedding method with a Long Short-Term Memory (LSTM) network,
which obtained an F1-score of 95.19% on a dataset based on the DDIs found in
DrugBank version 5.1.8. This score is 5.61% better than the state-of-the-art
model DeepDDI. Additionally, we also developed a graph auto-encoder model that
uses a Graph Neural Network (GNN), which achieved an F1-score of 91.94%.
Consequently, GNNs have demonstrated a stronger ability to mine the underlying
semantics of the KG than the ComplEx model, and thus using higher dimension
embeddings within the GNN can lead to state-of-the-art performance. | Lizzy Farrugia, Lilian M. Azzopardi, Jeremy Debattista, Charlie Abela | 2023-08-08T10:07:22Z | http://arxiv.org/abs/2308.04172v2 | # Predicting Drug-Drug Interactions Using Knowledge Graphs
###### Abstract
In the last decades, people have been consuming and combining more drugs than before, increasing the number of Drug-Drug Interactions (DDIs). To predict unknown DDIs, recently, studies started incorporating Knowledge Graphs (KGs) since they are able to capture the relationships among entities providing better drug representations than using a single drug property. In this paper, we propose the medicX end-to-end framework that integrates several drug features from public drug repositories into a KG and embeds the nodes in the graph using various translation, factorisation and Neural Network (NN) based KG Embedding (KGE) methods. Ultimately, we use a Machine Learning (ML) algorithm that predicts unknown DDIs. Among the different translation and factorisation-based KGE models, we found that the best performing combination was the ComplEx embedding method with a Long Short-Term Memory (LSTM) network, which obtained an \(F_{1}\)-score of 95.19% on a dataset based on the DDIs found in DrugBank version 5.1.8. This score is 5.61% better than the state-of-the-art model Deep-DDI [1]. Additionally, we also developed a graph auto-encoder model that uses a Graph Neural Network (GNN), which achieved an \(F_{1}\)-score of 91.94%. Consequently, GNNs have demonstrated a stronger ability to mine the underlying semantics of the KG than the ComplEx model, and thus using higher dimension embeddings within the GNN can lead to state-of-the-art performance.
## 1 Introduction
Drug-Drug Interactions (DDIs) occur when two or more medications are co-administered simultaneously and cause an Adverse Drug Reaction (ADR). An estimated 44% of men and 57% of women older than 65 in the US take five or more medications, which is bound to worsen, given the population's rapid ageing and the trend of increasing medication use [2]. Moreover, the risk of an ADR increases by 7 to 10% with each medication [3].
Currently, drug developers rely on clinical trials to detect unknown DDIs, whilst pharmacists and doctors depend on a textbook, such as the British National Formulary (BNF), when they are unsure of a known DDI. Amran et al. |
2310.08502 | Strengthening interacting agegraphic dark energy DGP constraints with
local measurements and multimessenger forecastings | An explanation of the nature of dark energy has been treated in extra
dimensions within the scheme of string theory. One of the most successful
models is inspired by the Dvali-Gabadadze-Porrati (DGP) model, in which the
universe is a 4-dimensional brane embedded in a 5-dimensional Minkowski
space-time. In this landscape, the study of the evolution of the normal branch
has led us to different kinds of dark energy, where the most simple case is the
cosmological constant $\Lambda$. Moreover, other viable cosmological solutions
are related to agegraphic dark energy, which allows a late cosmic acceleration
within an interacting mechanism. To explore the viability of these solutions
and possible gravitational leakage, in this paper, we present constraints on
such models using recent standard sirens forecasting in addition to local
observables such as Pantheon (SNIa), $H(z)$ measurements, baryonic acoustic
oscillations (BAO). Our results show that the value associated with the species
of quantum fields $n$ in these models is strongly restricted for supernovae
observations to $n=20$, and for GW standard sirens mock data prefers a value of
$n=1$. | Maribel Hernández, Celia Escamilla-Rivera | 2023-10-12T16:59:52Z | http://arxiv.org/abs/2310.08502v2 | Strengthening interacting agegraphic dark energy DGP constraints with supernovae and multimessenger forecastings
###### Abstract
An explanation of the nature of dark energy has been treated in extra dimensions within the scheme of string theory. One of the most successful models is inspired by the Dvali-Gabadadze-Porrati (DGP) model, in which the universe is a 4-dimensional brane embedded in a 5-dimensional Minkowski space-time. In this landscape, the study of the evolution of the normal branch has led us to different kinds of dark energy, where the simplest case is the cosmological constant \(\Lambda\). Moreover, other viable cosmological solutions are related to agegraphic dark energy, which allows a late cosmic acceleration within an interacting mechanism. To explore the viability of these solutions and a possible gravitational leakage, in this paper we present constraints on such models using recent standard sirens forecasting in addition to local observables such as Supernovae Type Ia. Our results show that the value associated with the species of quantum fields \(n\) in these models is strongly restricted: supernova observations prefer \(n=20\), while GW standard sirens mock data prefer \(n=1\).
## I Introduction
Since the discovery of the late-time cosmic acceleration with measurements of Supernovae Type Ia (SNIa) [1; 2], its explanation has been one of the most intriguing issues in Cosmology. This phenomenon has been attributed to some kind of exotic component, the so-called _dark energy_, which is characterized by negative pressure and on average constitutes about 70% of the content of the Universe.
On this line of thought, the Cosmological Constant Cold Dark Matter (\(\Lambda\)CDM) model assumes that the cosmic acceleration is due to the existence of a Cosmological Constant \(\Lambda\) whose Equation-of-State (EoS) is \(w_{\Lambda}=-1\). Although this model has been very successful and is well constrained by local [1; 3] and early [4] observables, it has several issues, e.g. the coincidence problem and the small value of \(\Lambda\) [5] and, recently, cosmological tensions [6]. In particular, the \(H_{0}\) tension [7] supports the idea that dark energy could be dynamical, with an EoS \(w_{\rm DE}<-1\) [8; 9; 10]. Among other proposals, dark energy can be associated with terms derived from alternative theories of gravity [11; 12; 13] and extended theories of gravity [14; 15], showing a late cosmic acceleration.
Furthermore, different approaches have arisen to explain dark energy within the framework of fundamental theories such as quantum gravity [16] or string theory [17]. Although there is still no complete theory of quantum gravity, some attempts have been made to uncover the nature of dark energy using certain considerations: e.g., through the holographic principle [18] one can propose _Holographic Dark Energy_ (HDE) [19], and employing the Karolyhazy relation [21] one derives _Agegraphic Dark Energy_ (ADE) [21]. For the HDE model, cosmological constraints have been obtained with local observables [22; 23] and with Gravitational Waves (GW) from Einstein Telescope mock data [24]. However, to obtain a cosmic acceleration solution, the length scale considered in this model is the event horizon [25].
In the case of the ADE model, the time scale is the age of the universe. Originally, this scale was pointed out as an advantage when compared to the HDE model [21]. However, it can be shown that the ADE solution never dominates the dynamics [26]. To solve this drawback, two different paths were proposed: _(i)_ considering an interaction between the ADE model and dark matter, the so-called interacting ADE model, which modifies the evolution of ADE in such a way that there is a dark-energy-dominated phase [25]; _(ii)_ the second solution is the _New Agegraphic Dark Energy_ (NADE) [26], where the conformal time of the Friedmann-Robertson-Walker (FRW) universe is chosen as the time scale instead of the age of the universe. An interaction scheme with dark matter has also been considered for this model. The advantage of its description is that it is a single-parameter model, unlike the two-parameter ADE model. Furthermore, the NADE model has been constrained with SNIa, Cosmic Microwave Background radiation (CMB), and Large Scale Structure (LSS) data, finding better systematics in comparison to the \(\Lambda\)CDM model [27].
In this line of thought, and to obtain general solutions that behave as dark energy, the existence of extra dimensions has been proposed within the framework of string theory; one such proposal is the so-called Dvali-Gabadadze-Porrati (DGP) model. In this scheme, the universe is a 4-dimensional brane embedded in a 5-dimensional Minkowski space-time [28] where gravity is modified at cosmological distances. Depending on how the brane is embedded into the space-time bulk, there are two kinds of cosmological solutions. In the well-known self-accelerating branch, a matter or radiation epoch is followed by a late phase of accelerated expansion, while in the normal branch there is no accelerated phase and it is necessary to add some kind of dark energy to obtain it [28]. Nevertheless, the self-accelerating branch is disfavoured by SNIa and BAO [29], and this has led attention to focus on the study of the evolution of the normal branch considering several cases, e.g. a \(\Lambda\) [30], the HDE model [31], the ADE and NADE models [32], and the quintessence model [33]. Specifically, in [34] the evolution of the ADE model was studied within the framework of DGP brane-world cosmology considering an interaction between the ADE and dark matter. Using a dynamical system analysis, it is possible to find two critical points: an unstable point related to a matter-dominated era, and a second point related to an ADE-dominated era that is stable if the coupling parameter that determines the strength of the interaction satisfies \(\beta<1-2/(3n)\). This means that the interaction could lead to a stable universe in the future, contrary to what happens in a DGP brane with non-interacting ADE, which does not have any stable critical point [32]. Also, it was shown that there is no big-rip singularity in this model, in opposition to what happens in a DGP brane with HDE, which suffers from the big-rip singularity [35]. In this analysis, the cosmological parameters were constrained using independent measurements such as SNIa and Baryon Acoustic Oscillations (BAO) peaks, and it was found that these observations prefer a pure holographic dark energy or a pure DGP model [35].
Furthermore, the ADE and NADE models have been studied within the DGP braneworld scheme without considering an interaction with dark matter, and their cosmological parameters have been constrained with \(H(z)\) measurements, CMB, and BAO [32]. In both scenarios, it was found that the universe undergoes an acceleration stage without entering the phantom regime, while in the matter-dominated era the effective dark energy vanishes. However, as far as we know, these models have not been studied with interaction terms that could lead to gravitational leakage effects. Therefore, in this paper we study these models by constraining them using current SNIa measurements and standard sirens mock data [36; 37] based on the Laser Interferometer Space Antenna (LISA), by forecasting multimessenger measurements of massive black hole binary (MBHB) mergers.
The paper is organized as follows: In Sec. II we derive the evolution equations for the ADE and NADE models in a DGP braneworld with interactions, and we also compute the deceleration parameter for each model. In Sec. III we describe the observables used to constrain these models, which include SN Pantheon measurements and, additionally, a GW mock catalog based on standard sirens. Since we are dealing with extra-dimensional cosmological models, we need a new description of the luminosity distance \(d_{L}\); in Appendix A we derive this function for a \(D\)-dimensional space-time to obtain the gravitational-wave luminosity distance, and we also describe the GW data scattered for our DGP scenarios. In Sec. IV we discuss the statistical analysis, and finally in Sec. V we present our conclusions.
## II Agegraphic models in DGP
* **Agegraphic dark energy model (ADE).** According to the DGP model, our universe is a brane embedded in a 5-dimensional Minkowski space-time. A consequence of this is a crossover scale \(r_{0}\) that controls the transition from 4-dimensional to 5-dimensional behaviour, i.e. the gravitational potential of a source of mass \(m\) is the well-known 4-dimensional potential \(V\approx-Gm/r\) for \(r\ll r_{0}\), and \(V\approx-G_{\rm bulk}m/r^{2}\) when \(r\gg r_{0}\) [38]. According to [28], the Friedmann equation in the DGP model is \[H^{2}=\left(\sqrt{\frac{\rho}{3M_{p}^{2}}+\frac{1}{4r_{0}^{2}}}+\epsilon\frac{1}{2r_{0}}\right)^{2}, \tag{1}\] where \(H=\dot{a}/a\) is the Hubble parameter and \(\epsilon=\pm 1\), the two values of \(\epsilon\) corresponding to two different embeddings of the brane into the bulk. If \(\epsilon=1\), a matter or radiation epoch is followed by a late phase of accelerated expansion; this solution is known as the self-accelerating branch. If \(\epsilon=-1\), there is no accelerated phase and it is necessary to add some kind of dark energy to obtain an accelerated expansion; this solution is known as the normal branch. In this work, we consider the normal branch and assume that the total energy density on the brane is \(\rho=\rho_{\rm ADE}+\rho_{\rm DM}\), which contains the ADE density \(\rho_{\rm ADE}\) and the dark matter density \(\rho_{\rm DM}\). The Friedmann equation for this latter case is given by \[H=\sqrt{\frac{\rho_{\rm ADE}}{3M_{p}^{2}}+\frac{\rho_{\rm DM}}{3M_{p}^{2}}+\frac{1}{4r_{0}^{2}}}-\frac{1}{2r_{0}}. \tag{2}\]
Based on the Karolyhazy relation [40], which states that the time \(t\) cannot be known with better accuracy than \(\delta t=\gamma t_{p}^{2/3}t^{1/3}\), where \(\gamma\) is a numerical factor of order unity [41], we can consider that, over a length scale \(t\), space-time consists of cells of size \(\delta t^{3}\approx t_{p}^{2}t\). Therefore, a cell is the minimal detectable unit of space-time over \(t\). Then, since this cell has a finite lifetime \(t\), the time-energy uncertainty relation implies that the energy of a minimal cell \(\delta t^{3}\) cannot be smaller than \(E_{\delta t^{3}}\gtrsim t^{-1}\). The energy density of the quantum fluctuations in Minkowski space-time is then \(\rho_{\rm ADE}\sim E_{\delta t^{3}}/\delta t^{3}\sim 1/t_{p}^{2}t^{2}\). If \(t\) is the age of the universe, the ADE density is given by
\[\rho_{\rm ADE} = \frac{3n^{2}M_{p}^{2}}{T^{2}},\qquad{\rm where}\qquad T=\int_{0}^ {a}\frac{da}{Ha}, \tag{3}\]
where \(n^{2}\) is a numerical factor introduced to parameterize, for example, the species of quantum fields or the effect of curved space-time [39]. Differentiating Eq.(3) with respect to cosmic time \(t\) we obtain
\[\dot{\rho}_{\rm ADE}=-2H\rho_{\rm ADE}\frac{\sqrt{\Omega_{\rm ADE}}}{n}. \tag{4}\]
For this case, we consider an interaction between dark matter and ADE. Since there is no fundamental theory that determines the form of \(Q\), previous works introduced specific forms. In our analysis, we study a specific interaction that is preferred by interacting HDE [42] and which has already been studied within the framework of DGP theory in [43]. Then
\[\dot{\rho}_{\rm DM}+3H\rho_{\rm DM} = Q, \tag{5}\] \[\dot{\rho}_{\rm ADE}+3H(1+w_{\rm ADE})\ \rho_{\rm ADE} = -Q, \tag{6}\]
where \(Q\) is given by
\[Q=3\beta H\frac{\rho_{\rm ADE}\ \rho_{\rm DM}}{\rho_{\rm ADE}+\rho_{\rm DM}}. \tag{7}\]
\(\beta\) is a positive constant; since \(Q>0\), energy flows from the ADE component to dark matter. As is standard, we define the critical density parameters as
\[\Omega_{\rm DM}=\frac{\rho_{\rm DM}}{3H^{2}M_{p}^{2}},\ \ \ \ \Omega_{\rm ADE}=\frac{\rho_{\rm ADE}}{3H^{2}M_{p}^{2}}=\frac{n^{2}}{H^{2}T^{2 }},\ \ \ \ \Omega_{r}=\frac{1}{4r_{0}^{2}H^{2}},\ \ \ \ {\rm and}\ \ \ \Omega_{k}=-\frac{k}{H^{2}a^{2}}, \tag{8}\]
and their corresponding current values, denoted by the subscript \(0\), are given by
\[\Omega_{\rm 0DM}=\frac{\rho_{\rm 0DM}}{3H_{0}^{2}M_{p}^{2}},\ \ \ \ \Omega_{\rm 0ADE}=\frac{\rho_{\rm 0 ADE}}{3H_{0}^{2}M_{p}^{2}}=\frac{n^{2}}{H_{0}^{2}T_{0}^{2}},\ \ \ \ \Omega_{0r}=\frac{1}{4r_{0}^{2}H_{0}^{2}},\ \ \ \ {\rm and}\ \ \ \Omega_{0k}=-\frac{k}{H_{0}^{2}a_{0}^{2}}. \tag{9}\]
Using Eq.(8), we can rewrite Eq.(2) as
\[1=\Omega_{\rm DM}+\Omega_{\rm ADE}-2\sqrt{\Omega_{r}}, \tag{10}\]
and \(\Omega_{\rm DE}=\Omega_{\rm ADE}-2\sqrt{\Omega_{r}}\) can be interpreted as an effective dark energy term. Defining \(\Omega_{\rm DGP}:=-2\sqrt{\Omega_{r}}\), we have
\[\Omega_{\rm DE}=\Omega_{\rm ADE}-2\sqrt{\Omega_{r}}=\frac{\rho_{\rm ADE}}{3M_ {p}^{2}H^{2}}-\frac{1}{r_{0}H}=\Omega_{\rm ADE}+\Omega_{\rm DGP}. \tag{11}\]
Combining Eqs.(4) and (6), the ADE EoS parameter is given by
\[w_{\rm ADE}=-1+\frac{2}{3n}\sqrt{\Omega_{\rm ADE}}-\frac{Q}{3H\rho_{\rm ADE}} =-1+\frac{2}{3n}\sqrt{\Omega_{\rm ADE}}-\frac{\beta\ \Omega_{\rm DM}}{\Omega_{\rm ADE}+\Omega_{\rm DM}}. \tag{12}\]
The evolution of \(\rho_{\rm ADE}\) and \(\rho_{\rm DM}\) can be found by solving the following set of differential equations:
\[\dot{\rho}_{\rm ADE} = -2H\rho_{\rm ADE}\frac{\sqrt{\Omega_{\rm ADE}}}{n}, \tag{13}\] \[\dot{\rho}_{\rm DM} = -Q-3H\rho_{\rm DM}. \tag{14}\]
In this work, we are going to constrain the model parameter set \(\Theta_{\rm ADE}:\{H_{0},n,\Omega_{0r},\Omega_{0{\rm ADE}},\beta\}\) with observations. Furthermore, when there is no interaction between the components of the dark sector, \(w_{\rm DM}^{\rm eff}=w_{\rm DM}=0\), \(\rho_{\rm DM}\propto(1+z)^{3}\), and \(\rho_{\rm DM}/\rho_{0c}=\Omega_{0{\rm DM}}(1+z)^{3}\), where the current critical density is \(\rho_{0c}=3M_{p}^{2}H_{0}^{2}\). Otherwise, when there is an interaction, \(w_{\rm DM}^{\rm eff}=-Q/(3H\rho_{\rm DM})\), and the previous expressions for \(\rho_{\rm DM}\) and \(\rho_{\rm DM}/\rho_{0c}\) are no longer satisfied. In order to fulfill \(\rho_{\rm DM}(z=0)=\rho_{0{\rm DM}}\), we assume that \[\frac{\rho_{\rm DM}}{3M_{p}^{2}H_{0}^{2}}\equiv\Omega_{0{\rm DM}}\ f_{A}(z),\qquad{\rm and}\qquad\frac{\rho_{\rm ADE}}{3M_{p}^{2}H_{0}^{2}}=\frac{T_{0}^{2}}{T^{2}}\Omega_{0{\rm ADE}}=\Omega_{0{\rm ADE}}\ g_{A}(z), \tag{15}\] where \(f_{A}(z=0)=1=g_{A}(z=0)\) and \(g_{A}(z):=T_{0}^{2}/T^{2}\). Replacing Eq.(15) in Eqs.(13)-(14) and Eq.(2), we obtain \[\frac{df_{A}}{dz} = 3\left(\frac{f_{A}}{1+z}\right)-\frac{\beta\ \Omega_{0{\rm ADE}}\ g_{A}f_{A}}{(1+z)(\Omega_{0{\rm ADE}}\ g_{A}+\Omega_{0{\rm DM}}\ f_{A})}, \tag{16}\] \[\frac{dg_{A}}{dz} = \frac{2\sqrt{\Omega_{0{\rm ADE}}}\ g_{A}^{3/2}}{n(1+z)H}H_{0}, \tag{17}\] \[H = H_{0}\left(\sqrt{\Omega_{0{\rm DM}}\ f_{A}(z)+\Omega_{0{\rm ADE}}\ g_{A}(z)+\Omega_{0r}}-\sqrt{\Omega_{0r}}\right). \tag{18}\] We can solve the latter equations numerically using the initial condition \(f_{A}(z=0)=g_{A}(z=0)=1\), and find the evolution of \(\Omega_{\rm DM}\), \(\Omega_{\rm ADE}\), \(\Omega_{r}\) from the following equations \[\Omega_{\rm DM}=\Omega_{0{\rm DM}}\ f_{A}(z)\left(\frac{H_{0}}{H}\right)^{2},\qquad\quad\Omega_{\rm ADE}=\Omega_{0{\rm ADE}}\ g_{A}(z)\left(\frac{H_{0}}{H}\right)^{2},\qquad\quad{\rm and}\qquad\Omega_{r}=\Omega_{0r}\left(\frac{H_{0}}{H}\right)^{2}, \tag{19}\] and using Eqs.(18)-(19) we can compute the deceleration parameter \(q=-1-(\dot{H}/H^{2})\) as \[q=-1+\frac{H}{H+H_{0}\sqrt{\Omega_{0r}}}\left(\frac{3}{2}-\frac{3}{2}\Omega_{\rm ADE}+\frac{\Omega_{\rm ADE}^{3/2}}{n}+3\frac{H_{0}}{H}\sqrt{\Omega_{0r}}+\frac{Q}{6M_{p}^{2}H^{3}}\right). \tag{20}\] A numerical sketch of this integration scheme is given right after the NADE case below.
* **New Agegraphic dark energy model (NADE).** In this model, the length scale is chosen to be the conformal time \(\eta=\int_{0}^{a}da/Ha^{2}\) [26]; the energy density for the NADE model is then \[\rho_{\rm NADE}=\frac{3n^{2}M_{p}^{2}}{\eta^{2}},\qquad{\rm and}\qquad\Omega_{\rm NADE}=\frac{n^{2}}{H^{2}\eta^{2}}. \tag{21}\] Differentiating Eq.(21) with respect to \(t\) we obtain \[\dot{\rho}_{\rm NADE}=-2\rho_{\rm NADE}\ H\frac{\sqrt{\Omega_{\rm NADE}}}{na}, \tag{22}\] and considering an interaction between the NADE model and dark matter we have \[\dot{\rho}_{\rm DM}+3H\rho_{\rm DM} = Q, \tag{23}\] \[\dot{\rho}_{\rm NADE}+3H(1+w_{\rm NADE})\ \rho_{\rm NADE} = -Q, \tag{24}\] where \(Q\) is given by Eq.(7). Substituting Eq.(22) in Eq.(24) we can find the EoS for the NADE model \[w_{\rm NADE}=-1+\frac{2}{3na}\sqrt{\Omega_{\rm NADE}}-\frac{Q}{3H\rho_{\rm NADE}}=-1+\frac{2}{3na}\sqrt{\Omega_{\rm NADE}}-\frac{\beta\ \Omega_{\rm DM}}{\Omega_{\rm NADE}+\Omega_{\rm DM}}. \tag{25}\] Finally, we have the following differential equations for \(\rho_{\rm NADE}\) and \(\rho_{\rm DM}\), \[\dot{\rho}_{\rm NADE} = -2H\rho_{\rm NADE}\frac{\sqrt{\Omega_{\rm NADE}}}{na}, \tag{26}\] \[\dot{\rho}_{\rm DM} = -Q-3H\rho_{\rm DM}, \tag{27}\] \[H = \sqrt{\frac{\rho_{\rm NADE}}{3M_{p}^{2}}+\frac{\rho_{\rm DM}}{3M_{p}^{2}}+\frac{1}{4r_{0}^{2}}}-\frac{1}{2r_{0}}. \tag{28}\]
We rewrite the latter equations in terms of the model parameters as: \[\frac{\rho_{\rm DM}}{3M_{p}^{2}H_{0}^{2}}:=\Omega_{\rm 0DM}\ f_{N}(z)\qquad\mbox{and}\qquad\frac{\rho_{\rm NADE}}{3M_{p}^{2}H_{0}^{2}}=\Omega_{\rm 0NADE}\ g_{N}(z), \tag{29}\] where \(f_{N}(z=0)=1\), \(g_{N}(z=0)=1\). We find \[\frac{df_{N}}{dz} = 3\frac{f_{N}}{1+z}-\frac{\beta\ \Omega_{\rm 0NADE}\ g_{N}f_{N}}{(1+z)(\Omega_{\rm 0NADE}\ g_{N}+\Omega_{\rm 0DM}\ f_{N})}, \tag{30}\] \[\frac{dg_{N}}{dz} = \frac{2\sqrt{\Omega_{\rm 0NADE}}\ g_{N}^{3/2}}{nH}H_{0}, \tag{31}\] \[H = H_{0}\left(\sqrt{\Omega_{\rm 0DM}f_{N}(z)+\Omega_{\rm 0NADE}\ g_{N}(z)+\Omega_{\rm 0r}}-\sqrt{\Omega_{\rm 0r}}\right). \tag{32}\] As in the ADE model, we can solve the latter equations numerically using the initial condition \(f_{N}(z=0)=g_{N}(z=0)=1\), and find the evolution of \(\Omega_{\rm DM}\), \(\Omega_{\rm NADE}\), \(\Omega_{r}\) from the following equations \[\Omega_{\rm DM}=\Omega_{\rm 0DM}\ f_{N}(z)\left(\frac{H_{0}}{H}\right)^{2},\qquad\Omega_{\rm NADE}=\Omega_{\rm 0NADE}\ g_{N}(z)\left(\frac{H_{0}}{H}\right)^{2},\qquad\quad\mbox{and}\qquad\Omega_{r}=\Omega_{\rm 0r}\left(\frac{H_{0}}{H}\right)^{2}, \tag{33}\] and \(q\) is given by \[q=-1+\frac{H}{H+H_{0}\sqrt{\Omega_{\rm 0r}}}\left(\frac{3}{2}-\frac{3}{2}\Omega_{\rm NADE}+\frac{(1+z)}{n}\ \Omega_{\rm NADE}^{3/2}+3\frac{H_{0}}{H}\sqrt{\Omega_{\rm 0r}}+\frac{Q}{6M_{p}^{2}H^{3}}\right). \tag{34}\]
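To make the integration scheme concrete, the following is a minimal sketch (ours, not the authors' code) of how the systems of Eqs.(16)-(18) and (30)-(32) can be integrated with SciPy. The parameter values are illustrative, loosely inspired by the best fits reported below, and \(\Omega_{\rm 0DM}\) follows from the flatness condition Eq.(10) evaluated at \(z=0\).

```python
# Sketch (ours): integrating the interacting ADE/NADE background systems,
# Eqs. (16)-(18) and (30)-(32). All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

H0, n, beta = 71.5, 13.7, 0.44               # H0 in km/s/Mpc
O_de0, O_r0 = 0.763, 2.5e-3                  # Omega_{0ADE} (or Omega_{0NADE}), Omega_{0r}
O_dm0 = 1.0 - O_de0 + 2.0 * np.sqrt(O_r0)    # flatness condition, Eq. (10) at z = 0

def E(f, g):
    """Normalized Hubble rate H/H0 on the normal branch, Eqs. (18)/(32)."""
    return np.sqrt(O_dm0 * f + O_de0 * g + O_r0) - np.sqrt(O_r0)

def rhs(z, y, conformal):
    f, g = y
    df = 3.0 * f / (1.0 + z) \
         - beta * O_de0 * g * f / ((1.0 + z) * (O_de0 * g + O_dm0 * f))
    # Eq. (17) (ADE) carries an extra 1/(1+z) that Eq. (31) (NADE) does not,
    # because the conformal time replaces the cosmic age as the time scale.
    dg = 2.0 * np.sqrt(O_de0) * g**1.5 / (n * E(f, g))
    if not conformal:
        dg /= 1.0 + z
    return [df, dg]

z = np.linspace(0.0, 2.3, 200)
ade = solve_ivp(rhs, (0.0, 2.3), [1.0, 1.0], t_eval=z, args=(False,), rtol=1e-8)
nade = solve_ivp(rhs, (0.0, 2.3), [1.0, 1.0], t_eval=z, args=(True,), rtol=1e-8)
Hz = H0 * E(*ade.y)   # H(z) for the ADE run, reused for the distances below
```

Setting `beta = 0` recovers the non-interacting cases, and \(q(z)\) from Eqs.(20)/(34) can be evaluated directly from the stored solution.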
## III Observational and forecasting baselines
In this work, we use two different observational catalogs to constrain the cosmological parameters of the models described: the SN Pantheon catalog and a mock catalog of standard sirens used in [44]. Below we describe them briefly.
* **Pantheon supernovae sample (SNIa)**. This catalog contains the apparent magnitude in the B band, \(m_{\rm B}\), of 1048 supernova events in the redshift range \(z=(0.01,2.26)\). The apparent magnitude \(m_{\rm B}\) is related to the observed distance modulus through \[\mu_{\rm obs}=m_{\rm B}-M. \tag{35}\] For our analysis, we calibrate the absolute magnitude to obtain \(M=-19.1\). The theoretical distance modulus \(\mu\) is related to the luminosity distance, \(d_{L}\), as follows: \[\mu(z)=5\log\left[\frac{d_{L}(z)}{1{\rm Mpc}}\right]+25, \tag{36}\] where \(d_{L}=a_{0}c(1+z)\int_{0}^{z}dz/H\) and \(d/dz\left(d_{L}/(1+z)\right)=c/H\), with \(a_{0}=1\). Notice that \(d_{L}\) is obtained considering that light can only travel along our 4-dimensional space-time, and assuming a spatially flat and expanding universe. This luminosity distance is also named \(d_{L}^{\rm EM}\) because it is inferred from electromagnetic observations. If light could travel in extra dimensions, the luminosity distance would be modified by their presence according to Eq.(A14). To perform the statistical analysis, the best-fit parameters for a specific model, using the SN Pantheon catalog, can be calculated by maximizing the logarithm of the likelihood function given by \[\ln\mathcal{L}_{\rm SN}(\mu_{\rm obs}(z_{i})|z_{i},\sigma_{i},\Theta)=-\frac{1}{2}\left(\chi_{\rm SN}^{2}+\sum_{i=1}^{N}\ln(2\pi\sigma_{i}^{2})\right), \tag{37}\] where \(\Theta\) denotes the vector of free parameters of the model and \[\chi_{\rm SN}^{2}=\sum_{i=1}^{N}\frac{(\mu_{\rm obs}(z_{i})-\mu(z_{i};\Theta))^{2}}{\sigma_{i}^{2}}, \tag{38}\]
where \(\sigma_{i}^{2}\) is the variance of each measurement and \(N\) is the number of SNIa in the total sample. Using Bayes' theorem, we can obtain the joint posterior probability distribution, which is related to the likelihood through
\[p_{\rm SN}(\Theta|\mu_{\rm obs},z,\sigma)=\frac{\pi(\Theta)\mathcal{L}_{\rm SN}( \mu_{\rm obs}(z_{i})|z_{i},\sigma_{i},\Theta)}{\varepsilon}, \tag{39}\]
where \(\pi(\Theta)\) is the prior probability distribution and \(\varepsilon\) is the evidence given by
\[\varepsilon=\int d\theta\pi(\Theta)\mathcal{L}_{\rm SN}(\mu_{\rm obs}(z_{i})| z_{i},\sigma_{i},\Theta). \tag{40}\]
This is a normalisation constant that does not need to be computed to obtain the best-fit parameter values. As we can notice, to compute the posterior probability we need to know the prior and compute the likelihood given by Eq.(37); a code sketch of this computation is given after this list.
* **Gravitational Waves mock data.** We use a mock catalog of standard sirens used previously in [44], which consists of standard sirens mock data [36; 37] based on the Laser Interferometer Space Antenna (LISA), forecasting multimessenger measurements of massive black hole binary (MBHB) mergers. These data were simulated assuming the \(\Lambda\)CDM model, with the best fits given by the Pantheon catalog: \(H_{0}=72.8\) [km/s/Mpc] and \(\Omega_{m}=0.285\). The mock redshifts were generated with the normalised intrinsic merger rate \(\dot{n}(z)\), for the range \(z\in[0,2.3]\). The siren distance \(D_{S}(z)=d_{L}^{\rm GW}\), i.e. the GW luminosity distance of the source, is given by \[D_{S}(z)=D_{S_{\Lambda\rm CDM}}(z)+\mathcal{N}(0,\sigma), \tag{41}\] where \(\mathcal{N}(0,\sigma)\) is the normal Gaussian probability distribution with zero mean and standard deviation \(\sigma\). The forecasting of this catalog includes 1000 simulated GW \(d_{L}\) events with their respective redshifts and uncertainties. In order to employ the full database together with the SNIa sample, we compute the best-fit parameters of the model \(\Theta\) by maximizing the logarithm of the likelihood function given by \[\ln\mathcal{L}_{\rm GW}(d_{L_{m}}^{\rm GW}(z_{i})|z_{i},\sigma_{i},\Theta)=-\frac{1}{2}\left(\chi_{\rm GW}^{2}+\sum_{i=1}^{1000}\ln(2\pi\sigma_{i}^{2})\right), \tag{42}\] where \[\chi_{\rm GW}^{2}=\sum_{i=1}^{1000}\frac{(d_{L}^{\rm GW}(z_{i},\Theta)-d_{L_{m}}^{\rm GW}(z_{i}))^{2}}{\sigma_{i_{m}}^{2}}. \tag{43}\] \(d_{L}^{\rm GW}\) is the theoretical GW luminosity distance and \(d_{L_{m}}^{\rm GW}(z_{i})\) is the GW luminosity distance obtained from the forecasting at redshift \(z_{i}\). The posterior is given by \[p(\Theta|d_{L_{m}}^{\rm GW},z,\sigma)\propto\pi(\Theta)\mathcal{L}_{\rm GW}(d_{L_{m}}^{\rm GW}(z_{i})|z_{i},\sigma_{i},\Theta). \tag{44}\] Since, within the DGP framework, our 4-dimensional brane is embedded in a 5-dimensional Minkowski space-time and gravity propagates through this extra dimension, the gravitational-wave distance, i.e. the distance measured from gravitational events such as binary BH coalescences, differs from the inferred electromagnetic luminosity distance \(d_{L}^{\rm EM}\). These quantities can be related using Eq.(A19). Therefore, \(d_{L}^{\rm GW}\) can be rewritten in terms of the DGP model parameters as \[d_{L}^{\rm GW}=d_{L}^{\rm EM}\left[1+\left(2H_{0}\sqrt{\Omega_{0r}}\frac{d_{L}^{\rm EM}}{c(1+z)}\right)^{m}\right]^{\frac{1}{2m}}. \tag{45}\] Finally, the combination of the two catalogs described gives the logarithm of the total likelihood function \[\ln\mathcal{L}_{\rm SN+GW}=\ln\mathcal{L}_{\rm SN}+\ln\mathcal{L}_{\rm GW}. \tag{46}\] This is the function we employ to constrain the ADE and NADE models with and without interactions.
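A compact sketch (ours) of how the two likelihood pieces, Eqs.(37)-(38) and (42)-(43), and the GW-distance correction of Eq.(45) translate into code is given below. The arrays `z_sn`, `mu_obs`, `sigma_mu` and `z_gw`, `d_obs`, `sigma_d` are hypothetical placeholders for the Pantheon and mock-siren data, while `z_grid` and `Hz` stand for the background solution of Sec. II.

```python
# Sketch (ours) of Eqs. (36)-(38), (42)-(43) and (45); data arrays are
# assumed to be loaded elsewhere.
import numpy as np
from scipy.integrate import cumulative_trapezoid

C = 299792.458  # speed of light in km/s

def dl_em(z_eval, z_grid, Hz):
    """EM luminosity distance in Mpc: d_L = c (1+z) int_0^z dz'/H(z')."""
    dc = cumulative_trapezoid(C / Hz, z_grid, initial=0.0)
    return (1.0 + z_eval) * np.interp(z_eval, z_grid, dc)

def loglike_sn(z_grid, Hz, z_sn, mu_obs, sigma_mu):
    mu_th = 5.0 * np.log10(dl_em(z_sn, z_grid, Hz)) + 25.0      # Eq. (36)
    chi2 = np.sum(((mu_obs - mu_th) / sigma_mu) ** 2)           # Eq. (38)
    return -0.5 * (chi2 + np.sum(np.log(2.0 * np.pi * sigma_mu**2)))

def dl_gw(z_eval, dl, m, H0, O_r0):
    """Eq. (45); note that 2 H0 sqrt(Omega_0r) = 1/r0."""
    x = 2.0 * H0 * np.sqrt(O_r0) * dl / (C * (1.0 + z_eval))
    return dl * (1.0 + x**m) ** (1.0 / (2.0 * m))

def loglike_gw(z_grid, Hz, z_gw, d_obs, sigma_d, m, H0, O_r0):
    d_th = dl_gw(z_gw, dl_em(z_gw, z_grid, Hz), m, H0, O_r0)
    chi2 = np.sum(((d_obs - d_th) / sigma_d) ** 2)              # Eq. (43)
    return -0.5 * (chi2 + np.sum(np.log(2.0 * np.pi * sigma_d**2)))
```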
## IV Cosmological constraint analysis
For our statistical analysis of the ADE and NADE models in a DGP braneworld cosmology, we use the SN Pantheon sample, the mock catalog computed for standard sirens, and the combination of these catalogs to estimate the best-fit values of the parameters involved and their corresponding posterior distributions.
In the case of SN observations, to obtain \(\ln\mathcal{L}_{\rm SN}\) we need to compute the theoretical \(\mu(z)\). First, we obtain the luminosity distance \(d_{L}\): in the case of the ADE model, we solve numerically the set of Eqs.(16)-(17) together with Eq.(18); for the NADE model, we solve numerically Eqs.(30)-(31). The theoretical \(\mu(z)\) is then obtained for each redshift \(z\) of the sample using Eq.(36). Finally, we compute \(\ln\mathcal{L}_{\rm SN}\) through Eq.(37) for each model.
Similarly, we compute the theoretical GW luminosity distance \(d_{L}^{\rm GW}\) for each \(z_{i}\) of the mock catalog using Eq.(45) and derive \(\ln\mathcal{L}_{\rm GW}\) using Eq.(42).
### Results
In this section, we discuss the constraints for the ADE and NADE models with and without interaction terms. The statistical analysis is performed using the observational samples described above, along with the mock data processed with the GW configurations.
All statistical confidence levels (C.L.) discussed below correspond to \(1\sigma\) and \(2\sigma\). We compute the posteriors of the different models by performing a Markov chain Monte Carlo analysis using the emcee1 code, and we combine the marginalized distributions for each fractional density of the models using the ChainConsumer2 package.
Footnote 1: emcee.readthedocs.io
Footnote 2: samreay.github.io/ChainConsumer
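As an illustration of the sampling step, the following sketch (ours) wires the two likelihood pieces into emcee for the interacting-ADE parameter set \(\Theta=(\Omega_{\rm 0ADE},\Omega_{0r},n,H_{0},\beta,m)\) with the flat priors of Table 2; `background` is a hypothetical wrapper around the ODE integration of Sec. II, and the data arrays are assumed to be loaded beforehand.

```python
# Sketch (ours) of the MCMC step, Eqs. (39) and (46).
import numpy as np
import emcee

bounds = [(0, 1), (0, 5e-3), (0, 20), (66, 74), (0, 1), (0, 2)]

def log_prob(theta):
    if any(not lo < v < hi for v, (lo, hi) in zip(theta, bounds)):
        return -np.inf                               # flat priors
    z_grid, Hz = background(theta)                   # hypothetical ODE wrapper
    return (loglike_sn(z_grid, Hz, z_sn, mu_obs, sigma_mu)
            + loglike_gw(z_grid, Hz, z_gw, d_obs, sigma_d,
                         theta[5], theta[3], theta[1]))   # Eq. (46)

ndim, nwalkers = 6, 32
p0 = np.array([0.75, 1e-3, 10.0, 71.0, 0.4, 1.0]) \
     + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000, progress=True)
chain = sampler.get_chain(discard=1000, flat=True)   # feed to ChainConsumer
```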
* **ADE without interaction.** In this model \(\beta=0\) in Eq.(16), and the parameters to constrain are \(\Theta_{\rm ADE}=\{\Omega_{\rm 0ADE},\Omega_{0r},n,H_{0}\}\) for SN observations and \(\Theta_{\rm ADE}=\{\Omega_{\rm 0ADE},\Omega_{0r},n,H_{0},m\}\) for GW observations. To obtain the priors, we consider that the current effective dark energy is \(\Omega_{\rm 0DE}=\Omega_{\rm 0ADE}-2\sqrt{\Omega_{0r}}\), which satisfies \(\Omega_{\rm 0DM}+\Omega_{\rm 0DE}=1\). If \(0<\Omega_{\rm 0DE}<1\), then \(0<\Omega_{\rm 0ADE}-2\sqrt{\Omega_{0r}}<1\), and \(0<2\sqrt{\Omega_{0r}}<\Omega_{\rm 0ADE}<1+2\sqrt{\Omega_{0r}}\); if in addition \(r_{0}>H_{0}^{-1}\), then \[0<\Omega_{0r}=\frac{1}{4r_{0}^{2}H_{0}^{2}}\leq 0.25\quad\text{ and }\quad 0<\Omega_{\rm 0ADE}<2. \tag{47}\] Furthermore, \(\Omega_{\rm 0ADE}=n^{2}/H_{0}^{2}T_{0}^{2}\), where \(T_{0}=\int_{0}^{1}da/(Ha)\) denotes the age of the universe. Following [45], the age of the Universe lies between \(11.2\) Gyr \(<T_{0}<21\) Gyr, with a best-fit age \(T_{0}=13.14\) Gyr; therefore, if \(66<H_{0}\,[\text{km/s/Mpc}]<74\), then \(n\sim 1\). However, considering the resulting ranges \(0<\Omega_{\rm 0ADE}<2\) and \(0<\Omega_{0r}<0.25\), the chains do not converge, since the chain length \(N\) does not sufficiently exceed the auto-correlation time \(\tau\). If we instead reduce the prior ranges to \(0<\Omega_{\rm 0ADE}<1\) and \(0<\Omega_{0r}<0.001\), we obtain Gaussian posterior distributions for \(\Omega_{\rm 0ADE}\) and \(H_{0}\) with the SN, GW and combined SN+GW samples. The mean values obtained for the posteriors of these parameters are shown at the bottom of Table 1. From this analysis, we obtain a mean current effective dark energy that is higher in comparison to the observations. On the other hand, according to [32], \(n>2\); if we then consider the prior range \(0<n<20\) and the priors shown at the top of Table 1, we get Gaussian C.L. for \(\Omega_{\rm 0ADE}\) and \(H_{0}\); see Figure 1 (top left). We found that, according to the SN Pantheon sample, the most likely value of \(n\) tends to its upper limit. This cutoff sets a lower current effective dark energy value, indicating an older cosmic age. This can be seen in Figure 1, where the most probable value is \(n\sim 20\). For the GW forecast, the most likely value is \(n\sim 1.34\). Although the effective dark energy agrees with observations, computing \(T_{0}=n/(H_{0}\sqrt{\Omega_{\rm 0ADE}})\) with the mean values reported at the top of Table 1 (a quick numerical check is given after this list), we find, using SN, GW, and SN+GW data, an older cosmic age of the order of hundreds of gigayears (Gyr). Notice that none of these values match the estimated age of the Universe [45]. The crossover scale is of the order of hundreds of gigaparsecs (Gpc); therefore, the effect of the extra dimension appears at very large distances.
* **ADE with interaction.** We set a flat prior \(0<\beta<1\) for this model. This selection is due to the existence of a stable solution for \(\beta<1-2/(3n)\), with \(n\geq 1\) [34]. Furthermore, if \(\beta>1\), the evolution of the density parameters of dark matter and dark energy deviates from the \(\Lambda\)CDM model at \(2\sigma\). The priors considered for the other parameters are shown in Table 2 and were chosen in such a way that the posterior distributions of \(\Omega_{\rm 0ADE}\) and \(H_{0}\) are Gaussian. We found that GW prefers a greater value of the interaction term \(\beta\) than SN observations,
and from their corresponding posterior distributions it can be seen that the most likely value of \(\beta\) is close to zero for SN data, while for GW data it is close to 1. However, the combined data indicate that the most likely value of \(\beta\) is close to zero. From Tables 1-2 it can be seen that the current effective dark energy \(\Omega_{\rm 0DE}\) is lower than in the non-interacting case. As in the previous case, the posterior for \(\Omega_{0r}\) is flat for SN and GW observations. In Table 2, it can be found that the crossover scale \(r_{0}\) is of the order of hundreds of Gpc.
* **NADE without interaction.** Unlike the ADE model, in the NADE model the parameter \(n\) is not related to the age of the universe, so we consider the flat prior \(0<n<20\). We found that, to obtain Gaussian posterior distributions for \(\Omega_{\rm 0NADE}\) and \(H_{0}\), we have to consider the flat prior \(0<\Omega_{\rm 0NADE}<0.9\) and the ones shown in Table 3. For this model, it can be seen that for GW mock data the most likely value of \(n\) is close to one, while its mean value is \(n=7.4\); GW also prefers a smaller value of \(H_{0}\) than SN. Again, the crossover scale is of the order of hundreds of Gpc, but this scale is larger than the values found at the top of Table 1 and in Table 2.
* **NADE with interaction.** In this case, to obtain Gaussian posteriors for \(H_{0}\) and \(\Omega_{\rm 0NADE}\), we set the flat priors shown in Table 4. The posteriors of the parameters are shown in Figure 1 (bottom right). This model prefers a lower value of \(\beta\) than the one reported for the ADE model. The effective dark energy obtained is lower, and the crossover scale is of the order of hundreds of Gpc. Notice that this behaviour is almost the same as when there is no interaction. The best fits for \(n\) and \(H_{0}\) using SN+GW data are the same as for the non-interacting NADE model. Moreover, with GW data this model prefers a lower \(n\) value in comparison to the constraints obtained separately from SN and SN+GW.
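As a quick arithmetic check (ours) of the quoted cosmic ages, evaluating \(T_{0}=n/(H_{0}\sqrt{\Omega_{\rm 0ADE}})\) at the SN best fit of Table 1 indeed reproduces the \(\sim 208\) Gyr quoted there:

```python
# Check (ours) of T0 = n / (H0 sqrt(Omega_0ADE)) for the SN best fit of
# Table 1; 1/H0 in Gyr is 977.79/H0 when H0 is given in km/s/Mpc.
import numpy as np

H0, n, O_ade0 = 71.54, 13.5, 0.785
T0 = n / np.sqrt(O_ade0) * (977.79 / H0)
print(f"T0 = {T0:.0f} Gyr")   # -> T0 = 208 Gyr
```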
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Parameters** & **Priors** & **SN Pantheon** & **GW** & **SN Pantheon+GW** \\ \hline \(\Omega_{\rm 0ADE}\) & (0,1) & \(0.785\pm 0.027\) & \(0.799\pm 0.095\) & \(0.751\pm 0.025\) \\ \hline \(\Omega_{\rm or}\) & \((0,0.0025)\) & \((12.6\pm 8.5)\times 10^{-4}\) & \((12.6\pm 8.5)\times 10^{-4}\) & \((10.3\pm 8.2)\times 10^{-4}\) \\ \hline \(n\) & (0,20) & \(13.5\pm 4.7\) & \(4.2\pm 3.0\) & \(14.6\pm 4.0\) \\ \hline \(H_{0}\)[km/s/Mpc] & (66,74) & \(71.54\pm 0.23\) & \(68.9\pm 1.2\) & \(71.33\pm 0.20\) \\ \hline \(m\) & (0.1,2) & - & \(1.35\pm 0.43\) & \(1.71\pm 0.22\) \\ \hline \(\Omega_{\rm 0DE}\) & - & \(0.714^{+0.006}_{+0.003}\) & \(0.728^{+0.074}_{-0.064}\) & \(0.687^{+0.010}_{+0.003}\) \\ \hline \(T_{0}\)[Gyr] & - & \(208.384^{+66.954}_{-69.703}\) & \(66.722^{+39.559}_{-46.053}\) & \(231.087^{+57.720}_{-59.967}\) \\ \hline \(r_{0}\) [Gpc] & - & \(590.686^{+448.153}_{-135.690}\) & \(613.319^{+480.915}_{-147.484}\) & \(655.240^{+799.80}_{-167.692}\) \\ \hline \hline \(\Omega_{\rm 0ADE}\) & \((0,1)\) & \(0.913\pm 0.031\) & \(0.828\pm 0.077\) & \(0.812\pm 0.020\) \\ \hline \(\Omega_{0r}\) & (0,0.001) & \((4.4\pm 3.4)\times 10^{-4}\) & \((5\pm 3.4)\times 10^{-4}\) & \((2.6\pm 2.3)\times 10^{-4}\) \\ \hline \(n\) & (0,2) & \(1.903\pm 0.077\) & \(1.42\pm 0.28\) & \(1.947\pm 0.042\) \\ \hline \(H_{0}\)[km/s/Mpc] & (66,74) & \(70.82\pm 0.21\) & \(68.32\pm 0.84\) & \(70.34\pm 0.18\) \\ \hline \(m\) & (0.1,2) & - & \(1.31\pm 0.46\) & \(1.67\pm 0.24\) \\ \hline \(\Omega_{\rm 0DE}\) & - & \(0.871^{+0.017}_{-0.009}\) & \(0.783^{+0.015}_{+0.014}\) & \(0.795^{+0.35}_{-0.067}\) \\ \hline \(T_{0}\)[Gyr] & - & \(27.514^{+0.556}_{-0.573}\) & \(22.348^{+2.932}_{-3.274}\) & \(30.053^{+0.199}_{-0.202}\) \\ \hline \(r_{0}\)[Gpc] & - & \(1009.738^{+1114.606}_{-253.598}\) & \(981.879^{+775.461}_{-233.543}\) & \(1322.519^{+2580.862}_{-361.613}\) \\ \hline \end{tabular}
\end{table}
Table 1: Constraints for the ADE model without interaction. The first column denotes the free parameters of the model, the second column describes the priors considered, and the third-fourth-fifth columns include the constraints for each parameter using SN Pantheon, GW mock data, and the total sample SN+GW, respectively. In the sub Table below we include a second option of priors for \(\Omega_{0r}\) and \(n\). Notice that \(\Omega_{\rm 0DE}\) is a derivable parameter and \(m\) is not a variable for SN data.
## V Conclusions
In this paper, we explore the observational constraints of the interacting and non-interacting Agegraphic Dark Energy (ADE) and New Agegraphic Dark Energy (NADE) models in the normal branch of a DGP brane. The analysis with NADE allows us to study the dynamics in the conformal-time scheme instead of using the age of the universe, as in the ADE versions. A stable solution within these models is the one related to the evolution of the normal branch, which led us to different possibilities for dark energy. It is possible to recover the standard \(\Lambda\)CDM model in the case of a constant EoS, as we can see from Eqs.(12) and (25) for each model, respectively. Furthermore, viable cosmological solutions can be obtained from the ADE model, which allows a late cosmic acceleration within the interacting mechanism with dark matter.
In the ADE model the value of \(n\) is related to the age of the universe: if \(T_{0}=13.4\) Gyr and \(66<H_{0}<74\), then \(n\approx 1\). However, when we considered in our analysis a prior within \(0<n<2\), we found that neither the age of the universe nor the effective dark energy agrees with current observations (see Table 1). Furthermore, when we extended
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Parameters** & **Prior** & **SN Pantheon** & **GW** & **SN Pantheon + GW** \\ \hline \(\Omega_{0\text{ADE}}\) & (0,1) & \(0.763\pm 0.049\) & \(0.720\pm 0.093\) & \(0.721\pm 0.047\) \\ \hline \(\Omega_{0r}\) & \((0,0.005)\) & \((2.5\pm 1.7)\times 10^{-3}\) & \((2.6\pm 1.7)\times 10^{-3}\) & \((1.7\pm 1.4)\times 10^{-3}\) \\ \hline \(n\) & (0,20) & \(13.7\pm 4.6\) & \(5.1\pm 3.9\) & \(14.8\pm 3.9\) \\ \hline \(H_{0}[\text{km/s/Mpc}]\) & (66,74) & \(71.49\pm 0.23\) & \(68.9\pm 1.2\) & \(71.25\pm 0.20\) \\ \hline \(\beta\) & (0,1) & \(0.44\pm 0.33\) & \(0.52\pm 0.33\) & \(0.38\pm 0.30\) \\ \hline \(m\) & (0,2) & - & \(1.38\pm 0.41\) & \(1.74\pm 0.20\) \\ \hline \(\Omega_{0\text{DE}}\) & - & \(0.663^{+0.019}_{-0.005}\) & \(0.618^{+.063}_{-0.051}\) & \(0.638^{+0.018}_{+0.001}\) \\ \hline \(T_{0}[\text{Gyr}]\) & - & \(214.649^{+62.932}_{-66.785}\) & \(85.349^{+53.964}_{-63.447}\) & \(239.345^{+52.850}_{-56.515}\) \\ \hline \(r_{0}[\text{Gpc}]\) & - & \(419.639^{+324.579}_{-96.918}\) & \(426.958^{+311.594}_{-100.641}\) & \(510.601^{+708.294}_{-133.543}\) \\ \hline \end{tabular}
\end{table}
Table 2: Constraints for the ADE model with interaction. The first column denotes the free parameters of the model, the second column describes the priors considered, and the third-fourth-fifth columns include the constraints for each parameter using SN Pantheon, GW mock data, and the total sample SN+GW, respectively. Notice that \(\Omega_{0\text{DE}}\) is a derivable parameter and \(m\) is not a variable for SN data.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Parameters** & **Prior** & **SN Pantheon** & **GW** & **SN Pantheon+ GW** \\ \hline \(\Omega_{0\text{NADE}}\) & \((0,1)\) & \(0.784\pm 0.026\) & \(0.750\pm 0.061\) & \(0.751\pm 0.023\) \\ \hline \(\Omega_{0r}\) & \((0,0.002)\) & \((10.1\pm 6.8)\times 10^{-4}\) & \((10.1\pm 6.8)\times 10^{-4}\) & \((8.4\pm 6.6)\times 10^{-4}\) \\ \hline \(n\) & \((0,20)\) & \(13.2\pm 4.9\) & \(7.4\pm 5.2\) & \(14.4\pm 4.1\) \\ \hline \(H_{0}[\text{km/s/Mpc}]\) & \((66,74)\) & \(71.56\pm 0.23\) & \(69.84\pm 0.82\) & \(71.34\pm 0.20\) \\ \hline \(m\) & \((0,2)\) & - & \(1.36\pm 0.41\) & \(1.7\pm 0.22\) \\ \hline \(\Omega_{0\text{DE}}\) & - & \(0.720^{+0.007}_{-0.001}\) & \(0.686^{+0.042}_{-0.033}\) & \(0.693^{+0.008}_{+0.003}\) \\ \hline \(r_{0}[\text{Gpc}]\) & - & \(659.569^{+498.040}_{-151.311}\) & \(675.812^{+520.540}_{-159.426}\) & \(725.468^{+846.128}_{-184.095}\) \\ \hline \end{tabular}
\end{table}
Table 3: Constraints for the NADE model without interaction. The first column denotes the free parameters of the model, the second column describes the priors considered, and the third-fourth-fifth columns include the constraints for each parameter using SN Pantheon, GW mock data, and the total sample SN+GW, respectively. Notice that \(\Omega_{0\text{DE}}\) is a derivable parameter and \(m\) is not a variable for SN data.
the prior to \(n<20\), we found that the effective dark energy for the non-interacting and interacting ADE models agrees with observations, and the age of the universe is reduced when there are interactions. Conversely, the NADE model does not have the age drawback because \(n\) does not depend on this cosmic age. Furthermore, using the best-fitted values obtained for SN+GW (see Tables 1, 2 and 4), we obtain the evolution of the density parameters of the effective dark energy and dark matter for all models (see Figure 2). We found that the evolution of dark matter in these models deviates from the \(\Lambda\)CDM model at \(z<5\). Nevertheless, in the interacting cases, there is more dark matter at present times. Also, we compute their respective deceleration parameters (see Figure 3), from which we notice that the universe starts to accelerate at \(z=0.655\) for both the non-interacting ADE and NADE models. The interacting ADE model starts to accelerate at \(z\sim 0.37\), and the interacting NADE model at \(z\sim 0.61\); the reason for this behaviour is the larger value of \(\beta\). According to the constraints obtained using SN+GW data for all models, the best-fit values are \(n\sim 14\) and \(H_{0}\sim 71.3\) km/s/Mpc. It is interesting to notice that the evolution of the density parameters in the non-interacting ADE and NADE models is approximately equal, and that the evolution of their respective deceleration parameters is practically the same.
Figure 1: \(2\sigma\) C.L. constraints using standard sirens mock data GW (blue), SN Pantheon (red), and the total sample SN Pantheon + GW (green) for the models discussed in Sec. IV: _Top left_: ADE without interaction. _Top right_: ADE with interaction. _Bottom left_: NADE without interaction. _Bottom right_: NADE with interaction.
For all the models, the value of \(\Omega_{0r}\) is of the order of \(10^{-4}\). In such a case, \(r_{0}\) cannot be constrained. Furthermore, the value of \(n\) is strongly restricted to the interval \((0,20)\), where it was found that, in the four models, SN observations prefer \(n=20\) and GW mock data prefer a value of \(n=1\). This result implies a younger universe in the
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Parameters** & **Prior** & **SN Pantheon** & **GW** & **SN Pantheon+GW** \\ \hline \(\Omega_{\text{0NADE}}\) & \((0,0.9)\) & \(0.778\pm 0.027\) & \(0.744\pm 0.062\) & \(0.743\pm 0.024\) \\ \hline \(\Omega_{0r}\) & \((0,0.002)\) & \((10.1\pm 6.8)\times 10^{-4}\) & \((10.1\pm 6.8)\times 10^{-4}\) & \((8.4\pm 6.6)\times 10^{-4}\) \\ \hline \(n\) & \((0,20)\) & \(13.3\pm 4.8\) & \(7.3\pm 5.1\) & \(14.4\pm 4.1\) \\ \hline \(H_{0}[\text{km/s/Mpc}]\) & \((66,74)\) & \(71.55\pm 0.23\) & \(69.80\pm 0.82\) & \(71.33\pm 0.20\) \\ \hline \(\beta\) & \((0,0.125)\) & \(0.067\pm 0.039\) & \(0.064\pm 0.042\) & \(0.061\pm 0.042\) \\ \hline \(m\) & \((0,2)\) & - & \(1.36\pm 0.41\) & \(1.70\pm 0.22\) \\ \hline \(\Omega_{\text{0DE}}\) & - & \(0.714^{+0.008}_{-0.000}\) & \(0.680^{+0.043}_{-0.035}\) & \(0.685^{+0.007}_{+0.004}\) \\ \hline \(r_{0}[\text{Gpc}]\) & - & \(659.661^{+498.110}_{-151.332}\) & \(676.199^{+520.846}_{-159.521}\) & \(725.570^{+846.247}_{-184.121}\) \\ \hline \end{tabular}
\end{table}
Table 4: Constraints for the NADE model with interaction. The first column denotes the free parameters of the model, the second column describes the priors considered, and the third-fourth-fifth columns include the constraints for each parameter using SN Pantheon, GW mock data, and the total sample SN+GW, respectively. Notice that \(\Omega_{\text{0DE}}\) is a derivable parameter and \(m\) is not a variable for SN data.
Figure 2: Comparison of the evolution of the density parameters with the \(\Lambda\)CDM model (dotted blue line), using the best-fit values of the parameters obtained from the SN+GW sample. For \(z>5\) the evolution of the density parameters is the same as in the \(\Lambda\)CDM model. Each line denotes the models with and without interactions: ADE without interaction (dotted red line), ADE with interaction (dashed red line), NADE without interaction (dotted green line), and NADE with interaction (dashed green line).
ADE model, with or without interaction, when constrained with GW rather than SN data.
For all models, GW data prefer a lower value of \(H_{0}\) than SN data. The value of the current effective dark energy is less than in the case without interaction. In this analysis, we notice that the mean value of the crossover scale is larger; therefore, the effects of the extra dimension appear at Gpc scales. At this scale, the dynamics of dark matter do not affect the cosmological evolution in the ADE and NADE models.
###### Acknowledgements.
MH acknowledges financial support from SEP-CONACYT postgraduate grants program. CE-R acknowledges funding from PAPIIT UNAM Project TA100122 and the Royal Astronomical Society as FRAS 10147. This work is part of the Cosmostatistics National Group project. This article is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology).
## Appendix A Luminosity distance for gravitational waves in higher dimensions models
In a Euclidean space, the energy per unit area, or flux \(F\), observed by a detector at a certain distance \(d_{L}\) from a source of intrinsic luminosity \(L\) is \(F=L/4\pi d_{L}^{2}\). Taking into account the cosmic expansion in a FRW universe, the observed luminosity is lowered by a factor \(1/(1+z)^{2}\) and the flux is given by
\[F=\frac{L}{4\pi a_{0}^{2}r^{2}(1+z)^{2}}, \tag{10}\]
where \(a_{0}\) is the current value of the scale factor and \(r\) is the comoving radial distance. From this relationship we establish that the luminosity distance in a FRW universe without curvature is
\[d_{L}=(1+z)a_{0}r. \tag{11}\]
Considering a \(D=N+1\)-dimensional space-time with a metric given by
\[ds^{2}=-dt^{2}+a(t)^{2}(dr_{N}^{2}+r_{N}^{2}d\Omega_{N-1}^{2}), \tag{12}\]
where \(d\Omega_{N-1}^{2}=d\theta_{1}^{2}+\sin^{2}\theta_{1}\,d\theta_{2}^{2}+\sin^{2}\theta_{1}\sin^{2}\theta_{2}\,d\theta_{3}^{2}+...+\sin^{2}\theta_{1}...\sin^{2}\theta_{N-2}\,d\theta_{N-1}^{2}\), the Electromagnetic (EM) flux propagates isotropically over a hypersphere embedded in \(N+1\) dimensions, implying that the following relation now defines the luminosity
Figure 3: Deceleration parameter, Eqs.(20) and (34), for the ADE and NADE models using the best-fit values obtained from the combined sample SN+GW. Each line denotes the models with and without interactions: ADE without interaction (dotted red line), ADE with interaction (dashed red line), NADE without interaction (dotted green line), and NADE with interaction (dashed green line).
distance in a D-dimensional space
\[F=\frac{L}{S_{N}}, \tag{10}\]
where \(S_{N}\) is the area of an \(N\)-sphere of radius \(r_{N}\), which can be obtained by integrating the line element at a fixed time and fixed radius \(r_{N}\) as
\[S_{N} = \int d\theta_{1}...d\theta_{N-1}\sqrt{-g_{N-1}} \tag{11}\] \[= \int d\theta_{1}...d\theta_{N-1}r_{N}^{N-1}a(t)^{N-1}\times\sin^ {N-2}\theta_{1}...\sin^{2}\theta_{N-2}\sin\theta_{N-1}\] \[= b_{N-1}r_{N}^{N-1}a(t)^{N-1},\]
where \(b_{N-1}=2\pi^{N/2}/\Gamma(N/2).\) Therefore
\[F=\frac{L}{b_{N-1}r_{N}^{N-1}a(t)^{N-1}}. \tag{12}\]
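As a quick consistency check (our remark, not in the original): for \(N=3\) the prefactor reduces to \(b_{2}=2\pi^{3/2}/\Gamma(3/2)=2\pi^{3/2}/(\sqrt{\pi}/2)=4\pi\), so \(S_{3}=4\pi\,a(t)^{2}r_{3}^{2}\) and the flux reduces to the familiar inverse-square law used at the beginning of this appendix.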
Due to the cosmic expansion, the energy of each photon is redshifted by a factor \(1/(1+z)\), and the photon arrival rate is diluted by another factor \(1/(1+z)\); this implies that the flux observed at Earth is lowered by a factor \(1/(1+z)^{2}\), so Eq.(12) becomes
\[F=\frac{L}{b_{N-1}r_{N}^{N-1}a(t)^{N-1}(1+z)^{2}}=\frac{L}{b_{N-1}(d_{L}^{D})^ {N-1}}, \tag{13}\]
where \(d_{L}^{D}\) denotes the luminosity distance for a \(D\)-dimensional space-time with the metric (12). From the previous equation
\[d_{L}^{D}=a(t)r_{N}(1+z)^{2/(N-1)}=a(t)r_{N}(1+z)^{2/(D-2)}. \tag{14}\]
Furthermore, if we consider in (12) \(ds^{2}=0\) and radial geodesics, then
\[c^{2}dt^{2}=a(t)^{2}dr_{N}^{2}, \tag{15}\]
and from this follows that
\[r_{N}=c\int_{0}^{z}\frac{dz}{H}, \tag{16}\]
then
\[d_{L}^{(D)}=a_{0}c(1+z)^{\frac{2}{D-2}}\int_{0}^{z}\frac{dz}{H}. \tag{17}\]
The lowest-order waveform of a binary system emitting GWs at cosmological distances in a 4-dimensional universe is
\[h_{\times}=\frac{4}{d_{L}^{4}}(G\mathcal{M}_{cz})^{5/3}(\pi f_{0})^{2/3}\cos\theta\sin\Phi(t_{0}), \tag{18}\]
where \(h_{\times}\) is the \(\times\)(cross)-polarization of the GW, \(t_{0}\) and \(f_{0}\) the time and frequency at the observer, \(\mathcal{M}_{cz}\) is the redshifted chirp mass, \(\theta\) the inclination angle, \(\Phi\) the GW phase and
\[d_{L}^{4}=a_{0}(1+z)r_{3}, \tag{19}\]
is the standard 4-dimensional luminosity distance [46]. In the case of extra spatial dimensions, i.e. if we consider the metric given by (12), the waveform of a binary system emitting GWs at cosmological distances in a \(D\)-dimensional space-time is, at the observer, [37]
\[h_{\times}\propto\frac{4}{(1+z)(a_{0}r_{N})^{(D-2)/2}}(G\mathcal{M}_{cz})^{5/3 }(\pi f_{0})^{2/3}\cos\theta\sin\Phi(t_{0}), \tag{20}\]
and we can rewrite this waveform as
\[h_{\times}\propto\frac{4}{d_{L}^{\rm GW}}(G\mathcal{M}_{cz})^{5/3}(\pi f_{0})^{2 /3}\cos\theta\sin\Phi(t_{0}). \tag{100}\]
We defined in the latter equation:
\[d_{L}^{\rm GW}\propto(1+z)(a_{0}r_{N})^{(D-2)/2}, \tag{101}\]
with \(d_{L}^{GW}\propto(d_{L}^{D})^{\frac{D-2}{2}}\). In the case of \(r_{N}=r_{3}\), we can rewrite the above equation as
\[d_{L}^{D}=a_{0}r_{3}(1+z)^{2/(D-2)}=d_{L}^{4}(1+z)^{(4-D)/(D-2)}, \tag{102}\]
where \(d_{L}^{4}\) is the luminosity distance that we can infer from observations, denoted by \(d_{L}^{\rm EM}\) since it is the luminosity distance associated with the electromagnetic signal emitted by a source. Combining the latter equations we obtain
\[d_{L}^{\rm GW}\propto d_{L}^{\rm EM}\left(\frac{d_{L}^{\rm EM}}{1+z}\right)^{\frac{D-4}{2}}. \tag{24}\]
In the DGP theory, we have a crossover scale \(r_{0}\) such that if \(r\ll r_{0}\), we recover 4D gravity, and if \(r\gg r_{0}\) the effect of extra-dimensions appears as we can notice for example from the Friedmann equation. Then, to obtain a valid relation at all scales and recover 4-D gravity for \(r\ll r_{0}\) we write
\[d_{L}^{\rm GW}=d_{L}^{\rm EM}\left[1+\left(\frac{d_{L}^{\rm EM}}{cr_{0}(1+z)}\right)^{m}\right]^{\frac{D-4}{2m}}, \tag{25}\]
here \(m\) determines the steepness of the transition from the small-scale to the large-scale behaviour [37]. The value of \(m\) is a free parameter that has to be determined by observations, and it has to be different from zero.
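As a rough illustration of how this screening relation behaves, the sketch below maps an EM luminosity distance to its GW counterpart; the values of \(D\), \(r_{0}\), \(m\), and the distances are placeholders, not best-fit values.

```python
# Sketch of the GW/EM distance relation with DGP-like screening; numbers are
# illustrative placeholders only.
def d_L_gw(d_L_em, z, D=5, r0=1000.0, m=1.0, c=1.0):
    """GW luminosity distance from the EM one (distances and c*r0 in Mpc)."""
    return d_L_em * (1.0 + (d_L_em / (c * r0 * (1.0 + z))) ** m) ** ((D - 4) / (2.0 * m))

print(d_L_gw(100.0, z=0.02))  # d_L^EM << c*r0*(1+z): 4-D gravity recovered
print(d_L_gw(5000.0, z=1.0))  # large distances: d_L^GW exceeds d_L^EM
```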
Notice that the further away the sources, the more pronounced the modification from GR (i.e., the larger the number of dimensions), the lower the screening scale, and the steeper the transition, the larger the discrepancy between the GW and EM luminosity distance, and hence the larger the factor by which the error on \(d_{L}^{\rm GW}\) is increased. To understand this, Fig. 4 shows simulated GW data scattered around their 'true' GW distance for both \(\sigma\) stated above for the less 'extreme' \(\theta^{D=5}:(m=1,H_{0})\) and most extreme \(\theta^{D=7}:(m=20,H_{0})\) cosmological scenarios.
Figure 4: Examples of GW mock data scattered around their true \(d_{L}^{\rm GW}\) assuming very small (orange) and slightly larger (blue) error bars. _Left_: True cosmological model is characterised by \(\theta^{D=5}:(m=1,H_{0})\). _Right_: True cosmological model is characterised by \(\theta^{D=7}:(m=20,H_{0})\). |
2305.12934 | Position Control of Single Link Flexible Manipulator: A Functional
Observer Based Sliding Mode Approach | This paper proposes a functional observer-based sliding mode control
technique for position control of a single-link flexible manipulator. The
proposed method considers the unmodelled system dynamics as uncertainty and
aims to achieve accurate position control. The functional observer is used to
directly estimate the sliding mode control design components and a sliding mode
controller to generate the control signal, which guarantees the system's
robustness and stability. The proposed control scheme is validated using
numerical simulations. | Atul Sharma, S. Janardhanan | 2023-05-22T11:29:44Z | http://arxiv.org/abs/2305.12934v1 | Position Control of Single Link Flexible Manipulator: A Functional Observer Based Sliding Mode Approach
###### Abstract
This paper proposes a functional observer-based sliding mode control technique for position control of a single-link flexible manipulator. The proposed method considers the unmodelled system dynamics as uncertainty and aims to achieve accurate position control. The functional observer is used to directly estimate the sliding mode control design components and a sliding mode controller to generate the control signal, which guarantees the system's robustness and stability. The proposed control scheme is validated using numerical simulations.
Flexible link manipulator, Sliding Mode Control, Functional Observer, Assumed Mode Method.
## I Introduction
In recent years, robotic manipulators have been explored for a wide range of applications, including industrial production [1], hostile environments (nuclear sites, deep sea, etc.) [2], space exploration [3], health care equipment [4], and building construction [5]. Robotic manipulators are required to provide fast, cost-effective and accurate operation [6, 7]. Rigid robot manipulators are made up of rigid links, which makes them bulky [8]. Industries need an upgrade to the existing classical robots in order to reduce construction costs, minimize the energy consumption brought on by large actuator sizes, and increase production.
So, in applications where the weight-to-volume ratio of a manipulator is required to be low, the manipulator inevitably tends to be flexible. There are applications where large and lighter manipulators are required [9], and, as a consequence of the larger and lighter arms, flexibility comes into the picture. Also, as the payload-to-weight ratio increases, the tendency of flexible modes to get excited increases. The flexibility in the manipulator is modelled as the link deformation [10]. Hence, the analysis of such systems cannot be performed as for rigid manipulators. If the flexibility is considered, the resulting system is of infinite dimension, i.e., the dynamic model of the flexible link robot manipulator is described as a distributed parameter system. This makes the dynamics of a flexible link robot manipulator depend on both space and time. Hence, the mathematical analysis of a flexible link manipulator involves partial differential equations (PDEs) rather than ordinary differential equations (ODEs). From a control viewpoint, finding a direct analytical solution to PDEs may not always be possible, and the solutions obtained may not always be realizable. So, we need to approximate the PDE-based mathematical model of the flexible link manipulator with an ODE-based model. There are various approximation methods available in the literature [11], including the finite element method (FEM) [12, 13], the assumed mode method (AMM) [14, 15, 16, 17], etc. In this paper, the assumed mode method is chosen over the finite element method [18]. This is because the finite element method works on a discretization approach; hence, for a lower-order dynamic model, it may not capture the effect of all the potential flexible modes of vibration.
The study of a single-link flexible robot manipulator was the starting point for flexible robot research. There are various methods of modelling available in the literature, Lumped parameter approach [19], Euler-Bernoulli beam theory [20], Hamilton's principle [21], Lagrangian dynamics [22], Newton-Euler-FEM method [23, 24], Finite Element Method (FEM) [12, 13], assumed-modes method [14, 15, 16, 17] etc. The most popular approach for building the mathematical model of flexible manipulators is Lagrangian dynamics because the equations of motion are formulated using kinetic and potential energies, which are scalars. Therefore, the equations of motion are derived from a single scalar known as Lagrangian.
Due to the flexibility of the link, the tip position of a flexible link robot manipulator depends on both the joint angle and the link deformation variable. Even a small link deformation has a very significant impact on the tip position. Therefore, to perform the specific operation using the flexible link manipulator, a control input must be designed to drive the tip to the desired trajectory. However, due to the deformation in the link, the existing control algorithms are insufficient to efficiently control the flexible link manipulator [7, 25].
The most desirable characteristics of a control system are a simple design, fast response, and robustness to uncertainties and disturbances. The dynamic model of a flexible link manipulator has inherent, unmodelled uncertainties. Therefore, a robust control design is preferable for such a system. Sliding mode control (SMC) is one of the most used control schemes for uncertain nonlinear systems in order to provide robustness and faster system response [26, 27]. The SMC scheme is a model-based feedback control technique. This paper uses the state feedback sliding mode control design because of its simplistic design. Therefore, it is required that the system states needed in the control input be available for feedback control design. But in a flexible link manipulator, all the
states can never be available for the feedback design. Hence, traditional SMC can not fit such a system well. To overcome this issue, an observer is to be designed that estimates the unmeasurable system state by utilising the knowledge of input and output. However, typically, a linear feedback control law needs estimation of some linear function of states of the form \(Kx(t)\). The estimation of the linear function of the state vector can be done using a minimal-order observer. Therefore, a functional observer is designed in this paper to estimate the linear function of the state vector required in the SMC design.
A functional observer is a type of model-based control system that uses system outputs to estimate the function of the linear combination of states required in the control input. Previous works have demonstrated several techniques to design functional observers for linear time-invariant (LTI) systems [28, 29, 30, 31].
A functional observer estimates the linear functions of states, which are then used in the sliding mode controller. This composite control strategy is the functional observer-based sliding mode control (FO-SMC) scheme. It has been demonstrated that FO-SMC works well for controlling the position of flexible link manipulators.
In this paper, we proposed FO-SMC to control the position of a single-link flexible manipulator. Numerical simulations are utilized to verify the proposed control scheme. The results illustrate that the proposed FO-SMC technique can precisely and successfully control the position of the single-link flexible manipulator.
These are some of the main contributions of this paper:
* To control the position of the single-link flexible manipulator, a novel composite control approach based on FO-SMC has been proposed.
* Numerical simulation is used to validate the proposed control scheme.
* The demonstration of the efficacy of the proposed FO-SMC scheme in controlling the position of a single-link flexible manipulator.
The rest of the paper is structured as follows: In Section II, the dynamic model of the single-link flexible manipulator is presented. The FO-SMC approach for position control is proposed in Section III. In Section IV, a discussion on simulation results is presented. Finally, the conclusion is presented in section V.
## II Modelling of Flexible Link Manipulator
Considering a single-link flexible manipulator as shown in figure 1 with length \(l(m)\), mass is uniformly distributed across the length with linear mass density \(\rho(kg/m)\). The following assumptions are considered for modelling the single-link flexible manipulator:
**Assumptions:**
1. Mass is uniformly distributed across the length of the link.
2. Link undergoes only small deformation of pure bending (No torsion and Compression).
3. Bending forces due to gravity and nonlinear deformations are also negligible.
The flexible link under consideration is modelled as an Euler-Bernoulli beam with \(E\) Young's modulus and an \(I\) cross-sectional moment of inertia. The electrical motor is connected at the base with inertia \(J_{0}(kg-m^{2})\) provides torque (\(\tau\) N-m) to the manipulator, and the payload carried by the manipulator has mass \(m_{p}(kg)\) and inertia \(J_{p}(kg-m^{2})\).
Using Hamilton's principle and the calculus of variation, it is shown that angle \(\theta(t)\) and deformation \(w(x,t)\) satisfy the following partial differential equations [32, 33].
\[EI\frac{\partial^{4}w(x,t)}{\partial x^{4}}+\rho\frac{\partial^{ 2}w(x,t)}{\partial t^{2}}+\rho x\ddot{\theta} =0 \tag{1a}\] \[\tau(t)-J\ddot{\theta} =0 \tag{1b}\]
where \(\theta(t)\) is the angle between the x-axis and the axis connecting the origin and the center of mass position (in \(rad\)), and \(J=\left(J_{0}+\rho\frac{l^{3}}{3}+J_{p}+m_{p}l^{2}\right)\) is the total inertia of the flexible link. The partial differential equation (1a) satisfies the boundary conditions given in (2):
\[w(0,t) =0 \tag{2a}\] \[EI\frac{\partial^{2}w(0,t)}{\partial x^{2}} =J_{0}\left[\ddot{\theta}+\frac{\partial^{2}}{\partial t^{2}} \left(\frac{\partial w(0,t)}{\partial x}\right)\right]-\tau(t)\] (2b) \[EI\frac{\partial^{2}w(l,t)}{\partial x^{2}} =-J_{p}\left[\ddot{\theta}+\frac{\partial^{2}}{\partial t^{2}} \left(\frac{\partial w(l,t)}{\partial x}\right)\right]\] (2c) \[EI\frac{\partial^{3}w(l,t)}{\partial x^{3}} =m_{p}\left[l\ddot{\theta}+\frac{\partial^{2}w(l,t)}{\partial t^ {2}}\right] \tag{2d}\]
Using separation of variables, deformation \(w(x,t)\) can be written in a manner that helps in decoupling the space and time variables as given in (3).
\[w(x,t)=\sum_{i=1}^{\infty}\phi_{i}(x)p_{i}(t) \tag{3}\]
where \(\phi(x)\) is a function of the spatial coordinate, \(p(t)\) represents the vibratory motion as a function of time, and the number of mode shapes is denoted by \(n(1,2,...,\infty)\). There will be an infinite number of assumed modes for any flexible link, with one natural frequency associated with each assumed mode. But it is impossible to consider all the vibration modes in the system modelling; therefore, only a finite number of vibration modes are considered in the system modelling that
Fig. 1: Single-Link Flexible Manipulator
best describes the system's response. As a result, equation(3) can be rewritten using a finite number of modes, say \(n\):
\[w(x,t)=\sum_{i=1}^{n}\phi_{i}(x)p_{i}(t) \tag{4}\]
As the dynamic model is designed with only \(n\) flexible modes in consideration, the dynamic model of a single-link flexible manipulator will always contain unmodeled uncertainties.
Replacing \(w(x,t)\) from equation (4) in equation (1) in free evolution (\(\tau=0\implies\ddot{\theta}=0\)) and solving the PDE by the variable separable method, we have a set of ODEs:
\[\frac{d^{4}\phi_{i}(x)}{dx^{4}}-{\beta_{i}}^{4}\phi_{i}(x)=0 \tag{5}\] \[\frac{d^{2}p_{i}(t)}{dt^{2}}+{\omega_{i}}^{2}p_{i}(t)=0 \tag{6}\]
where \(\omega_{i}\) is the natural frequency of vibration of modes (eigenvalue) and \(\phi(x)\) is the eigenfunction. \(\beta_{i}\) is the spatial vibration frequency of assumed modes.
\[\omega_{i}^{2}=\beta_{i}^{4}\frac{EI}{\rho} \tag{7}\]
\(\beta_{1}\), \(\beta_{2}\), \(\cdots\), \(\beta_{n}\) are the first \(n\) roots of the characteristics equation in (8).
\[(c\;sh-s\;ch)-\frac{2m_{p}}{\rho}\beta s\;sh-\frac{2J_{p}}{\rho} \beta^{3}c\;ch-\frac{J_{0}}{\rho}\beta^{3}(1+c\;ch)\] \[-\frac{m_{p}}{\rho^{2}}\beta^{4}(J_{0}+J_{p})(c\;sh-s\;ch)+\frac {J_{0}J_{p}}{\rho^{2}}\beta^{6}(c\;sh+s\;ch)\] \[-\frac{J_{0}J_{p}m_{p}}{\rho^{3}}\beta^{7}(1-c\;ch)=0 \tag{8}\]
where \(c=\cos(\beta l)\), \(ch=\cosh(\beta l)\), \(s=\sin(\beta l)\) and \(sh=\sinh(\beta l)\).
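In practice, the first \(n\) roots \(\beta_{i}\) of Eq.(8) are found numerically. The sketch below scans for sign changes and refines each root with Brent's method; the physical parameter values are placeholders, not those of Table I.

```python
# Sketch: first n roots beta_i of the characteristic equation (8); parameter
# values below are illustrative placeholders.
import numpy as np
from scipy.optimize import brentq

rho, l = 0.2, 1.0                 # linear density [kg/m], link length [m]
m_p, J_p, J_0 = 0.1, 1e-3, 1e-2   # payload mass/inertia, hub inertia

def char_eq(beta):
    c, ch = np.cos(beta * l), np.cosh(beta * l)
    s, sh = np.sin(beta * l), np.sinh(beta * l)
    return ((c * sh - s * ch)
            - 2 * m_p / rho * beta * s * sh
            - 2 * J_p / rho * beta ** 3 * c * ch
            - J_0 / rho * beta ** 3 * (1 + c * ch)
            - m_p / rho ** 2 * beta ** 4 * (J_0 + J_p) * (c * sh - s * ch)
            + J_0 * J_p / rho ** 2 * beta ** 6 * (c * sh + s * ch)
            - J_0 * J_p * m_p / rho ** 3 * beta ** 7 * (1 - c * ch))

grid = np.linspace(0.1, 20.0, 4000)
vals = char_eq(grid)
roots = [brentq(char_eq, a, b)
         for a, b, va, vb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if va * vb < 0]
print(roots)  # the natural frequencies then follow from Eq.(7)
```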
The infinite-dimensional model in (1) is approximated with a finite-dimensional model using the modal analysis discussed above. We have considered only a finite number of flexible modes of vibration for study, as shown in (4). By using equations (2) and (4), we get a generalized finite-dimensional dynamic model of a single-link flexible manipulator, as given by (9).
\[M\ddot{q}(t)+D\dot{q}(t)+Kq(t)=\bar{B}\tau(t) \tag{9}\]
where, \(q(t)=(\theta,p_{1},....p_{n})^{T}\in\mathbb{R}^{n+1}\), \(M\in\mathbb{R}^{(n+1)\times(n+1)}\), \(D\in\mathbb{R}^{(n+1)\times(n+1)}\), \(K\in\mathbb{R}^{(n+1)\times(n+1)}\), \(\bar{B}\in\mathbb{R}^{(n+1)\times 1}\), and \(\tau(t)\in\mathbb{R}\).
\(\Omega=\text{diag}\{\omega_{1},.....,\omega_{n}\}\), \(\phi^{\prime}(0)=(\phi^{\prime}_{1}(0),...,\phi^{\prime}_{n}(0))^{T}\), where \(\phi^{\prime}_{i}(0)=\frac{\partial\phi_{i}(x)}{\partial x}\) at \(x=0\), \(i=1,2,....,n\) denotes the assumed modes, and \(\zeta\) is the damping coefficient.
The measured output of the single-link flexible manipulator are the clamped joint angle \(\theta_{c}(t)\) (in \(rad\)) and tip angle \(\theta_{t}(t)\) (in \(rad\)) which can be expressed using the states of the dynamic model as:
\[\theta_{c}(t)=\theta(t)+\sum_{i=1}^{n}\phi^{\prime}_{i}(0)p_{i}(t) \tag{10}\] \[\theta_{t}(t)=\theta(t)+\sum_{i=1}^{n}\frac{\phi_{i}(l)}{l}p_{i}(t) \tag{11}\]
Equation (9) can be transformed to the state space model as:
\[\dot{x}(t)=Ax(t)+Bu(t) \tag{12}\] \[y(t)=Cx(t) \tag{13}\]
where, \(x(t)\in\mathbb{R}^{(2n+2)}\), \(A\in\mathbb{R}^{(2n+2)\times(2n+2)}\), \(B\in\mathbb{R}^{(2n+2)\times 1}\), \(C\in\mathbb{R}^{2\times(2n+2)}\), \(y(t)\in\mathbb{R}^{2}\) denotes the output of the system, and \(u(t)\in\mathbb{R}\) represents the input to the system.
\[x(t)=\begin{bmatrix}\vartheta(t)\\ \dot{\vartheta}(t)\end{bmatrix},\;\;\;\;\;y(t)=\begin{bmatrix}\theta_{c}(t)\\ \theta_{t}(t)\end{bmatrix}\]
Where, \(\vartheta(t)=[\theta(t),p_{1}(t),p_{2}(t),\dots,p_{n}(t)]^{T}\).
\[A=\begin{bmatrix}0&I\\ -M^{-1}K&-M^{-1}D\end{bmatrix},\;\;\;\;B=\begin{bmatrix}0\\ M^{-1}\bar{B}\end{bmatrix} \tag{14}\] \[C=\begin{bmatrix}1&\phi^{\prime}_{1}(0)&\phi^{\prime}_{2}(0)&\dots& \phi^{\prime}_{n}(0)&0&0&\dots\\ 1&\frac{\phi_{1}(l)}{l}&\frac{\phi_{2}(l)}{l}&\dots&\frac{\phi_{n}(l)}{l}&0&0 &\dots\end{bmatrix}\] (15) \[u(t)=\tau(t)\]
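For illustration, the state-space matrices of Eq.(14) can be assembled from the modal quantities as in the sketch below, which assumes \(M\), \(D\), \(K\), and \(\bar{B}\) are already available as NumPy arrays.

```python
# Sketch of Eq.(14): building A and B from M, D, K, Bbar.
import numpy as np

def state_space(M, Dmat, K, Bbar):
    n1 = M.shape[0]                    # n+1 generalized coordinates
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n1, n1)), np.eye(n1)],
                  [-Minv @ K,          -Minv @ Dmat]])
    B = np.concatenate([np.zeros(n1), Minv @ Bbar])  # Bbar as a 1-D array
    return A, B
```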
## III Composite Control Design
### _Sliding Mode Control Design_
In this section, a sliding mode control law is designed to control the position of the single-link flexible manipulator. The sliding function is chosen as given in equation (16).
\[\sigma(t)=\Gamma\left[x(t)-x_{d}(t)\right] \tag{16}\]
where \(\Gamma>0\) (\(\in\mathbb{R}^{1\times(2n+2)}\)) is a constant vector, which is to be designed such that the system becomes stable when confined to \(\sigma(t)=0\), and \(x_{d}(t)\) is the desired position of the states.
Taking the time derivative of \(\sigma(t)\) in (16) and using (12):
\[\dot{\sigma}(t) =\Gamma\left[\dot{x}(t)-\dot{x_{d}}(t)\right] \tag{17}\] \[=\Gamma\left[Ax(t)+Bu(t)-\dot{x_{d}}(t)\right]\]
The proposed control law \(u(t)\) has two components, nominal control \(u_{nom}\) and discontinuous control \(u_{disc}\). The expression for \(u(t)\) is given in equation (18).
\[u(t)=\underbrace{(\Gamma B)^{-1}\left[-\Gamma Ax(t)+\Gamma\dot{x}_{d}(t)\right]}_{u_{nom}}\\ -\underbrace{(\Gamma B)^{-1}\left[k_{1}\sigma(t)+k_{2}\text{sgn}(\sigma(t))\right]}_{u_{disc}} \tag{18}\]
Where \(k_{1}\) and \(k_{2}>0\) (\(\in\mathbb{R}\)) are constants to be designed.
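For illustration, the control law of Eq.(18) can be evaluated as in the sketch below, which assumes \(\Gamma\) and \(B\) are stored as 1-D arrays and the input is scalar, as in the single-link case.

```python
# Sketch of the SMC law, Eq.(18), for the single-input case.
import numpy as np

def smc_input(x, x_d, xdot_d, A, B, Gamma, k1, k2):
    sigma = Gamma @ (x - x_d)            # sliding function, Eq.(16)
    gb_inv = 1.0 / (Gamma @ B)           # (Gamma B)^{-1}, a scalar here
    u_nom = gb_inv * (-Gamma @ (A @ x) + Gamma @ xdot_d)
    u_disc = -gb_inv * (k1 * sigma + k2 * np.sign(sigma))
    return u_nom + u_disc
```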
**Lemma III.1** (Finite-time lemma [34]).: _Considering a continuous time system \(\dot{\Psi}=f(\Psi)\), \(\Psi\in\mathbb{R}^{n}\) with zero as the equilibrium point. Let us choose a positive definite Lyapunov candidate function \(\mathcal{V}(\Psi):\mathbb{R}^{n}\rightarrow\mathbb{R}\), with \(\alpha_{2}>0\), \(\chi\in(0,1)\)
_and an open vicinity of origin \(\Delta_{0}\subseteq\mathbb{R}^{n}\), such that the inequality in (19) is satisfied._
\[\dot{\mathcal{V}}(\Psi)\leq-\alpha_{1}\mathcal{V}(\Psi)-\alpha_{2}\mathcal{V}^{\chi}(\Psi);\quad\Psi\in\Delta_{0}\setminus\{0\}, \tag{19}\]
_then we can say that the equilibrium point is finite-time stable. Further, if \(\Delta_{0}=\mathbb{R}^{n}\) then the global finite-time stability of the equilibrium point is guaranteed._
**Theorem III.2**.: _Consider the state space model in (12) and the sliding function in (16). Under the influence of the presented controller, (18), the sliding phase will be attained in finite time (i.e., \(\sigma(t)=0\)), and the system states will converge asymptotically to the desired position._
Proof.: Let us define a Lyapunov function \(V_{1}\) as:
\[V_{1}(t)=\frac{1}{2}\sigma^{2}(t). \tag{20}\]
The time derivative of \(V_{1}(t)\) gives
\[\dot{V}_{1}(t)=\sigma(t)\dot{\sigma}(t). \tag{21}\]
From equation (17) put \(\dot{\sigma}(t)\) in (21) and using (12):
\[\dot{V}_{1}(t)=\sigma(t)\Gamma\left(Ax(t)+Bu(t)-\dot{x}_{d}(t)\right). \tag{22}\]
Putting \(u(t)\) from (18) into (22):
\[\dot{V}_{1}(t) =\sigma(t)\left(-k_{1}\sigma(t)-k_{2}\text{sgn}(\sigma(t))\right)\] \[=-k_{1}\sigma^{2}(t)-k_{2}|\sigma(t)|\] \[=-2k_{1}\frac{\sigma^{2}(t)}{2}-\sqrt{2}k_{2}\frac{|\sigma(t)|}{\sqrt{2}}\] \[\dot{V}_{1}(t) =-\alpha_{1}V_{1}(t)-\alpha_{2}V_{1}^{\frac{1}{2}}(t) \tag{23}\]
where \(\alpha_{1}=2k_{1}\), \(\alpha_{2}=\sqrt{2}k_{2}\) and \(\chi=1/2\). From equation (23) it is clearly visible that it satisfies the finite-time inequality of Lemma III.1. Thus, it can be inferred that the sliding variable in equation (16) converges to zero in finite time, thereby guaranteeing the convergence of the system state \(x(t)\) to the desired position \(x_{d}(t)\).
The control input in equation (18) can be equivalently written as:
\[u(t)=-(\Gamma B)^{-1}\left[\Gamma A+k_{1}\Gamma\right]x(t)\\ -(\Gamma B)^{-1}\left[k_{2}\text{sgn}(\Gamma x(t)-\Gamma x_{d}(t))\right] \\ +(\Gamma B)^{-1}\left[\Gamma\dot{x}_{d}(t)+k_{1}\Gamma x_{d}(t)\right] \tag{24}\]
The sliding mode control law in (24) needs the system states for closed-loop design. But the system under consideration does not have all the required states available for the measurement. Therefore, an observer is to be designed to estimate the states required to make a closed-loop control law implementable. As the control input in (24) needs estimation of some linear function of states, therefore instead of designing a state observer, a linear state function observer is proposed such that the output of the functional observer can be directly used in the controller.
### _Functional Observer_
This section introduces the functional observer, a linear state function observer that estimates the linear combination of states required by the control input function.
It is required to make an estimate of the linear combination of state \(Fx(t)\), which is expressed using \(g(t)(=Fx(t))\). Now, define \(F\) using the control input given in (24) as:
\[F_{1} =-(\Gamma B)^{-1}\left[\Gamma A+k_{1}\Gamma\right]\] \[F_{2} =\Gamma\]
Where \(F\in\mathbb{R}^{2\times(2n+2)}\) and \(g(t)\in\mathbb{R}^{2}\). Hence, \(g(t)\) can be expressed as:
\[g(t)=\begin{bmatrix}F_{1}\\ F_{2}\end{bmatrix}x(t) \tag{25}\]
In order to achieve this linear state function estimation, an observer of the form (26) needs to be designed.
\[\dot{\hat{\eta}}(t) =N\hat{\eta}(t)+Ly(t)+Hu(t) \tag{26a}\] \[\hat{g}(t) =Gy(t)+D\hat{\eta}(t) \tag{26b}\]
where, \(\hat{\eta}(t)\in\mathbb{R}^{v}\) is a state vector. \(\hat{g}(t)\in\mathbb{R}^{2}\) is the desired estimate of functional. \(N\in\mathbb{R}^{v\times v}\), \(L\in\mathbb{R}^{v\times 2}\), \(H\in\mathbb{R}^{v}\), \(G\in\mathbb{R}^{2\times 2}\), and \(D\in\mathbb{R}^{2\times v}\) are unknown matrices.
The output \(\hat{g}(t)\) of (26b) is said to estimate \(Fx(t)\) in an asymptotic manner if
\[\lim_{t\rightarrow\infty}[\hat{g}(t)-Fx(t)]=0 \tag{27}\]
Now suppose that \(\hat{\eta}(t)\) estimates the linear function of \(x(t)\) given by \(\eta(t)=Tx(t)\) (where \(T\in\mathbb{R}^{v\times(2n+2)}\)); then \(\hat{g}(t)\) estimates \(Fx(t)\), for which we have Theorem III.3.
**Theorem III.3**.: _The completely observable \(v^{th}\) order observer will estimate \(g(t)=Fx(t)\) if and only if the following conditions are satisfied:_
1. \(N\) _must be a Hurwitz matrix_
2. \(TA-NT-LC=0\)__
3. \(H=TB\)__
4. \(F=GC+DT\)__
5. \(v\geq rank(F-GC)\)__
_where \(F\in\mathbb{R}^{2\times(2n+2)}\) is the linear state function gain matrix and \(T\in\mathbb{R}^{v\times(2n+2)}\) is the unknown matrix which is to be determined._
Proof.: [30]
### _Proposed Functional Observer-based Sliding Mode Control_
This section proposes a composite control law using the sliding mode design and functional observer output. The error between the linear function estimates is expressed as e(t), as given in (28).
\[e(t) =\eta(t)-\hat{\eta}(t)\] \[e(t) =Tx(t)-\hat{\eta}(t) \tag{28}\]
Using equations (12), (26a) in the derivative of \(e(t)\) in (28), we get:
\[\dot{e}(t) =T\dot{x}(t)-\dot{\hat{\eta}}(t)\] \[\dot{e}(t) =T(Ax(t)+Bu(t))-(N\hat{\eta}(t)+Ly(t)+Hu(t)) \tag{29}\]
On simplifying equation (29) we get:
\[\dot{e}(t)=(TA-NT-LC)x(t)+Ne(t) \tag{30}\]
By substituting \(TA-NT-LC=0\) from theorem III.3 in (30), we get:
\[\dot{e}(t)=Ne(t) \tag{31}\]
Using the results in theorem III.3 control input \(u(t)\) can be rewritten as:
\[u(t)=[1\;\;0]\,Fx(t)-[1\;\;0]\,De(t)\\ -\underbrace{(\Gamma B)^{-1}\left[k_{2}\text{sgn}(\Gamma x(t)-\Gamma x_{d}(t))\right]}_{u_{\text{switching}}}\\ +\underbrace{(\Gamma B)^{-1}\left[\Gamma\dot{x}_{d}(t)+k_{1}\Gamma x_{d}(t)\right]}_{u_{\text{constant}}} \tag{32}\]
Now equation (12) is rewritten using (32).
\[\dot{x}(t)=Ax(t)+B\,[1\;\;0]\,Fx(t)-B\,[1\;\;0]\,De(t)\\ +B\underbrace{\left[u_{\text{constant}}-u_{\text{switching}}\right]}_{u_{bounded}} \tag{33}\]
A composite system is formed using (31) and (33).
\[\begin{bmatrix}\dot{x}(t)\\ \dot{e}(t)\end{bmatrix}=\underbrace{\begin{bmatrix}A+B\,[1\,\,\,0]\,F&-B\,[1 \,\,\,0]\,D\\ 0&N\end{bmatrix}}_{A_{C}}\begin{bmatrix}x(t)\\ e(t)\end{bmatrix}\\ +\underbrace{\begin{bmatrix}B\\ 0\end{bmatrix}}_{B_{C}}u_{bounded} \tag{34}\]
Where, \(A_{C}\in\mathbb{R}^{(2n+2+v)\times(2n+2+v)}\), and \(B_{C}\in\mathbb{R}^{(2n+2+v)}\).
If the observer matrix \(N\) and the system matrix \(A\) have distinct eigenvalues, then \(TA-NT-LC=0\) will have a solution for \(T\). Also, if the composite system matrix \(A_{C}\) has all its eigenvalues in the left half of the plane, the system will be uniformly ultimately bounded. Hence, the observer matrix \(N\) is chosen so that the composite system matrix has stable eigenvalues. By using Theorem III.3 and the condition of stable eigenvalues for the composite system matrix \(A_{C}\) in (34), the observer matrices can be obtained. Hence, the control input \(u(t)\) can be further rewritten using the observer output obtained in (26b).
\[u(t)=\begin{bmatrix}1&0\end{bmatrix}\hat{g}(t)- (\Gamma B)^{-1}\left[k_{2}\text{sgn}(\begin{bmatrix}0&1\end{bmatrix} \hat{g}(t)-\Gamma x_{d}(t))\right]\] \[+(\Gamma B)^{-1}\left[\Gamma\dot{x}_{d}(t)+k_{1}\Gamma x_{d}(t)\right] \tag{35}\]
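A minimal sketch of how the observer dynamics (26) and the observed control law (35) could be evaluated in a simulation loop is given below; the explicit Euler step and all variable names are illustrative choices, not the paper's implementation.

```python
# Sketch: one integration step of the functional observer (26) and the
# observed control input (35); matrices N, L, H, G, D are the designed ones.
import numpy as np

def observer_step(eta_hat, y, u, N, L, H, dt):
    """Euler step of eta_hat' = N eta_hat + L y + H u."""
    return eta_hat + dt * (N @ eta_hat + L @ y + H * u)

def observed_control(eta_hat, y, x_d, xdot_d, G, D, Gamma, B, k1, k2):
    g_hat = G @ y + D @ eta_hat          # Eq.(26b): estimate of F x(t)
    gb_inv = 1.0 / (Gamma @ B)
    u_sw = -gb_inv * k2 * np.sign(g_hat[1] - Gamma @ x_d)
    u_const = gb_inv * (Gamma @ xdot_d + k1 * (Gamma @ x_d))
    return g_hat[0] + u_sw + u_const     # Eq.(35)
```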
The state space model in (12) is of \((2n+2)\) order, designing the control for a large value of \(n\) results in a complex and difficult-to-implement control law. Therefore, in this paper, the proposed control in (35) is designed by considering only the first two assumed modes \((n=2)\), and hence the system order considered for designing the proposed control is of _sixth order_.
The observer-based control law in (35), designed using the system with two assumed modes, is tested on the system with a larger value of \(n\), i.e., considering the dynamic model with a larger number of modes.
## IV Simulation and Results
This section includes the numerical simulations and results that demonstrate the effectiveness of the presented control approach for the single-link flexible manipulator. This paper simulates the developed control law for the first five vibrational modes. The state space representation for the first five assumed modes, obtained by considering \(n=5\) in (12), is given in (36), where \(X(t)\in\mathbb{R}^{12}\), \(\tilde{A}\in\mathbb{R}^{12\times 12}\), \(\tilde{B}\in\mathbb{R}^{12\times 1}\), \(\tilde{C}\in\mathbb{R}^{2\times 12}\), the system output is denoted by \(y(t)\in\mathbb{R}^{2\times 1}\), and \(u(t)\in\mathbb{R}\) represents the control input to the system.
\[\dot{X}(t) =\tilde{A}X(t)+\tilde{B}u(t) \tag{36}\] \[y(t) =\tilde{C}X(t) \tag{37}\]
\[X(t) =[\theta(t),p_{1}(t),\dots,p_{5}(t),\dot{\theta}(t),\dot{p}_{1}(t ),\dots,\dot{p}_{5}(t)]^{T}\] \[y(t) =[\theta_{c}(t),\theta_{t}(t)]^{T}\]
The proposed control law in (35) designed for \(n=2\), is applied to the system in (36). The physical parameter specifications of the single-link flexible manipulator are given in table I.
The observer designed for the state space model in (12) and (13) by considering \(n=2\) has order \(2\). The observer matrices for a \(2^{nd}\) order functional observer are chosen as:
\[G =\begin{bmatrix}216.8704&-10.7626\\ 0.1858&0.0924\end{bmatrix},\,N=\begin{bmatrix}-0.5&2\\ -2&-0.5\end{bmatrix},\] \[L =\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\,D=\begin{bmatrix}-984.9503&159.5181\\ 9.6574&-1.0438\end{bmatrix},\,H=\begin{bmatrix}0.5678\\ -0.7321\end{bmatrix}\]
For the chosen observer matrices the composite matrix \(A_{C}\) has all its eigenvalues in the left-half plane, which guarantees the stability of the composite system.
The composite control design parameters are presented in table II.
Initial state trajectory conditions are denoted as \(X(0)\).
The simulation is performed for the system in (36), and the proposed control input in (35) for \(n=2\) is applied to it. Simulations are performed for both regulation and tracking problems.
### _Regulation Problem_
The reference values for the angle are chosen as:
\[\theta_{d}=\frac{\pi}{4}\ \ \text{rad}.\]
Figure 2 shows the convergence of tip position \(\theta_{t}(t)\) to the desired position \(\theta_{d}\) with vibrations suppressed. The figure also shows the plots for clamped joint angle \(\theta_{c}(t)\) and center of mass position \(\theta(t)\).
The plot of the sliding variable versus time is shown in figure 3. The figure indicates that the sliding variable converges to zero in finite time.
Figure 4 shows the actuator torque applied to the manipulator. The applied torque is well within the bound of \(\pm 0.5\,N\!\cdot\!m\), i.e., the applied control input is bounded.
### _Tracking Problem_
The desired trajectory for the position of a manipulator is chosen as:
\[\theta_{d}(t)=e^{-0.5t}\sin(t)+(1-e^{-0.5t})\ \ \text{rad}.\]
Figure 5 shows the convergence of tip position \(\theta_{t}(t)\) to the desired trajectory \(\theta_{d}(t)\) with vibrations being suppressed. The figure also shows the trajectories for clamped joint angle \(\theta_{c}(t)\) and center of mass position \(\theta(t)\).
The plot of the sliding variable vs time is shown in figure 6. It is evident from the figure that the sliding variable converges to zero in a finite amount of time.
The plot of the actuator torque applied to the manipulator with respect to time is shown in figure 7. The figure shows that the actuator torque stays within a lower limit of \(-0.5\,N\!\cdot\!m\) and an upper limit of \(+0.5\,N\!\cdot\!m\).
## V Conclusion

In this paper, a functional observer-based sliding mode control scheme for position control of a single-link flexible manipulator was proposed and tested using numerical simulation for the dynamic model with the first five vibration modes considered in the modelling. The simulation results indicate that the presented control technique efficiently controls the position of the single-link flexible manipulator.
|
2308.02503 | MyVoice: Arabic Speech Resource Collaboration Platform | We introduce MyVoice, a crowdsourcing platform designed to collect Arabic
speech to enhance dialectal speech technologies. This platform offers an
opportunity to design large dialectal speech datasets; and makes them publicly
available. MyVoice allows contributors to select city/country-level
fine-grained dialect and record the displayed utterances. Users can switch
roles between contributors and annotators. The platform incorporates a quality
assurance system that filters out low-quality and spurious recordings before
sending them for validation. During the validation phase, contributors can
assess the quality of recordings, annotate them, and provide feedback which is
then reviewed by administrators. Furthermore, the platform offers flexibility
to admin roles to add new data or tasks beyond dialectal speech and word
collection, which are displayed to contributors. Thus, enabling collaborative
efforts in gathering diverse and large Arabic speech data. | Yousseif Elshahawy, Yassine El Kheir, Shammur Absar Chowdhury, Ahmed Ali | 2023-07-23T07:13:30Z | http://arxiv.org/abs/2308.02503v1 | # MyVoice: Arabic Speech Resource Collaboration Platform
###### Abstract
We introduce MyVoice, a crowdsourcing platform designed to collect Arabic speech to enhance dialectal speech technologies. This platform offers an opportunity to design large dialectal speech datasets; and makes them publicly available. MyVoice allows contributors to select city/country-level fine-grained dialect and record the displayed utterances. Users can switch roles between contributors and annotators. The platform incorporates a quality assurance system that filters out low-quality and spurious recordings before sending them for validation. During the validation phase, contributors can assess the quality of recordings, annotate them, and provide feedback which is then reviewed by administrators. Furthermore, the platform offers flexibility to admin roles to add new data or tasks beyond dialectal speech and word collection, which are displayed to contributors. Thus, enabling collaborative efforts in gathering diverse and large Arabic speech data.
Yousseif Elshahawy, Yassine El Kheir, Shammur Absar Chowdhury, Ahmed Ali Qatar Computing Research Institute, HBKU, Doha, Qatar [email protected]
**Index Terms**: data collection, multi-dialect Arabic, speech recognition
## 1 Introduction
The field of speech and language processing has been transformed by the accessibility of large datasets, empowering the creation of advanced models that exhibit outstanding performance. However, the data preparation process can be costly, time-consuming, and, most importantly, encounter issues due to the under-representation of the language. These challenges can also impede progress and result in a centralized advancement, limiting the accessibility and utilization of these resources.
MyVoice1 aims to foster a collaborative community by building valuable resources that can further accelerate speech and language technology advancements while promoting open access to diverse and large datasets for everyone. The platform is designed to collect Arabic data to improve dialectal speech technologies and bridge the gap within the Arab world [1], with data being accessible to everyone. MyVoice enables admins to host multiple tasks while allowing contributors to select and contribute to the tasks of their choice. For each task, a collaborator can record the displayed utterances and can also validate other contributors' recordings. The platform integrates state-of-the-art voice activity detection and a dialectal speech recognition system for quality assurance, and provides different statistics to contributors and administrators.
Footnote 1: [https://myvoice.arabicspeech.org/](https://myvoice.arabicspeech.org/)
## 2 MyVoice
**Platform Architecture:** The platform consists of two main components: a front-end and a back-end. The front-end is responsible for providing an intuitive and user-friendly interface for recording and submitting audio segments, while the back-end handles the processing, storage, and management of the submitted audio data.
The front-end is built using the following: (_i_) **Nuxt 3**2: a progressive framework for building web applications. It provides a powerful development experience with features such as automatic code splitting, server-side rendering, and static site generation; and (_ii_) **TailwindCSS**3: a styling framework that provides a set of pre-defined styles and components for building responsive and modern web interfaces.
Footnote 2: [https://nuxt.com/](https://nuxt.com/)
Footnote 3: [https://tailwindcss.com/](https://tailwindcss.com/)
The back-end of the MyVoice platform is built using the following technologies: (_i_) **FastAPI**4: FastAPI is a modern, fast (high-performance) web framework for building APIs. The framework natively supports asynchronous programming, making it well-suited for high-traffic and data-intensive applications; (_ii_) **Uvicorn**5: Uvicorn is a fast ASGI server implementation that is built on top of the asyncio library. It is mainly used to deploy the FastAPI server in production; (_iii_) **Supabase**6: Supabase is an open-source tool that provides a suite of back-end services, including database management, authentication, and storage. In the MyVoice platform, Supabase is used primarily for authentication and user metadata storage, allowing users to securely log in to the platform and store their
Figure 1: **MyVoice Tasks Page**
submission history and progress; (_iv_) **PM2**7: PM2 is a general process manager that is used to handle, monitor, and deploy the FastAPI server that powers the back-end of the MyVoice platform. PM2 provides features such as automatic process restarting, log management, and load balancing, which help to ensure that the server is always running smoothly and reliably. It also allows for easy deployment of updates and new features to the server, making it a key component in the development and maintenance of the MyVoice platform.
Footnote 7: [https://pm2.keymetrics.io/](https://pm2.keymetrics.io/)
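For illustration, a hypothetical minimal recording-submission endpoint in this stack could look like the sketch below; the route, field names, and behaviour are our assumptions, not MyVoice's actual API.

```python
# Hypothetical FastAPI endpoint sketch (requires python-multipart for forms);
# served in production with, e.g.: uvicorn myvoice_sketch:app --port 8000
from fastapi import FastAPI, File, Form, UploadFile

app = FastAPI(title="MyVoice (sketch)")

@app.post("/submissions")
async def submit_recording(
    dialect: str = Form(...),        # e.g. a city/country-level dialect tag
    utterance_id: str = Form(...),   # id of the displayed prompt
    audio: UploadFile = File(...),   # the contributor's recording
):
    data = await audio.read()
    # A real deployment would store the file and queue it for the quality
    # assurance step (VAD + ASR confidence) before validation.
    return {"utterance_id": utterance_id, "dialect": dialect, "bytes": len(data)}
```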
Recording Tasks Interface: The **Tasks** page is designed to offer contributors a range of options for recording tasks and submitting their voice data as shown in Figure 1. MyVoice provides the contributors with the flexibility to choose a specific dialect based on their experience and begin recording their voices given a displayed text.
Validation Interface: The **Validation** page is a powerful tool that allows contributors to view all of their recorded audio files in one place. This page also enables contributors to assess the quality of their recordings and make decisions on whether to submit them or redo them for better quality. By providing contributors with the ability to review their recordings and assess their quality, the **Validation** page ensures that only the highest quality recordings are submitted to the project. This additional layer of validation increases the overall quality of the voice data collected and enhances the accuracy of any research conducted using this data.
Admin Interface: The admin page is a crucial component of the platform; it provides insights into the collected datasets and allows access to the submissions over a customizable timeline, as shown in Figure 3. From here, the admin can upload a new set of utterances to be recorded. The admin also has the ability to give admin privileges to any contributor.
Submissions Interface: The admin can also inspect individual submissions, as shown in Figure 4. Each submission is accompanied by important information, such as the task being performed and a confidence score, calculated using a state-of-the-art multi-dialect Arabic speech recognition system [2], that reflects the quality of the recording. With this information at their fingertips, administrators are able to take action based on the quality of each submission. For example, if the confidence score indicates a poor-quality or outlier recording, administrators can delete it. This ensures that all data collected by MyVoice is accurate and reliable and that contributors are held to high standards of quality.
Users Interface: The admin can view all the current users on the page shown in Figure 5 and, from there, inspect the details of a specific user. The admin can use the details of a specific user, as shown in Figure 5, to decide whether the user is malicious, in which case they can block them from making further submissions as well as delete all their previous submissions.
## 3 Conclusion
MyVoice is a crowdsourcing platform introduced for collecting dialectal Arabic speech to enhance dialectal speech technologies. It allows contributors to record and validate large dialectal speech datasets. The platform includes an integrated quality assurance system to aid the validator and admin to assess the recordings before making them publicly available, thus ensuring the quality of the data. It also offers statistical insights into the data and promotes collaborative efforts in gathering diverse and large dialectal Arabic speech data for further advancing speech technologies.
|
2301.01501 | Towards Edge-Cloud Architectures for Personal Protective Equipment
Detection | Detecting Personal Protective Equipment in images and video streams is a
relevant problem in ensuring the safety of construction workers. In this
contribution, an architecture enabling live image recognition of such equipment
is proposed. The solution is deployable in two settings -- edge-cloud and
edge-only. The system was tested on an active construction site, as a part of a
larger scenario, within the scope of the ASSIST-IoT H2020 project. To determine
the feasibility of the edge-only variant, a model for counting people wearing
safety helmets was developed using the YOLOX method. It was found that an
edge-only deployment is possible for this use case, given the hardware
infrastructure available on site. In the preliminary evaluation, several
important observations were made, that are crucial to the further development
and deployment of the system. Future work will include an in-depth
investigation of performance aspects of the two architecture variants. | Jaroslaw Legierski, Kajetan Rachwal, Piotr Sowinski, Wojciech Niewolski, Przemyslaw Ratuszek, Zbigniew Kopertowski, Marcin Paprzycki, Maria Ganzha | 2023-01-04T09:17:34Z | http://arxiv.org/abs/2301.01501v1 | # Towards Edge-Cloud Architectures for Personal Protective Equipment Detection
###### Abstract
Detecting Personal Protective Equipment in images and video streams is a relevant problem in ensuring the safety of construction workers. In this contribution, an architecture enabling live image recognition of such equipment is proposed. The solution is deployable in two settings - edge-cloud and edge-only. The system was tested on an active construction site, as a part of a larger scenario, within the scope of the ASSIST-IoT H2020 project. To determine the feasibility of the edge-only variant, a model for counting people wearing safety helmets was developed using the YOLOX method. It was found that an edge-only deployment is possible for this use case, given the hardware infrastructure available on site. In the preliminary evaluation, several important observations were made, that are crucial to the further development and deployment of the system. Future work will include an in-depth investigation of performance aspects of the two architecture variants.
edge-cloud continuum architectures, PPE detection, image recognition, worker safety

## 1 Introduction
Low latency and data privacy considerations favour an edge deployment. On the other hand, given the limited hardware resources available on the edge, and the extremely harsh conditions of the construction site, a cloud deployment seems attractive.
Given the possible benefits of both solutions, an edge-cloud continuum video analytics architecture is proposed in this contribution. The architecture can be deployed in two variants (edge-only and edge-cloud), described in the _Architecture_ section. Moreover, to determine the viability of the solution, an initial experimental study was performed. Here, an IR model was developed and integrated with the edge-only variant of the architecture. Next, it was tasked with detecting when personnel wearing PPE entered and exited the work site.
## 2 Background
To provide a context for this study, the state of the art of (1) IR system architectures and (2) machine learning models for PPE detection is summarized.
System architectures. The most obvious benefit of deploying IR systems on the edge is the decreased latency. This was demonstrated in [20], where facial recognition models were deployed on the edge. The authors found that deploying the models on the edge resulted in significantly better response speeds, as compared to a cloud deployment. In other studies [8; 9], the viability of deploying deep convolutional neural networks (CNNs) in the edge-only scenario was investigated. CNNs are characterized by high resource utilization, and thus are typically deployed in the cloud. The studies found that deploying CNNs is viable on mobile devices, when parts of the computation can be offloaded to other edge devices. Edge deployment made it possible to achieve a consistently low latency of 2.24 ms while using CNNs to perform real-time object tracking in augmented reality [8]. Both studies showed that an edge deployment distributing the workload increased the inference capabilities of the system, as the models could not be run on the disconnected mobile devices alone.
One study [5] investigated an edge-cloud architecture, where data preprocessing servers were deployed close to the data source. The preprocessed data was then sent to a cloud-based deep-learning platform. This resulted in decreasing network latency and traffic. It also increased the security and privacy of the raw data.
On the other hand, edge deployments are more limited in terms of the available hardware. Low computational resources naturally limit the size of models and inference speed. A study compared different implementations (based on TensorFlow, TensorRT, and TFLite) of the same video processing model [6], and found them to differ in their resource utilization. The choice of implementation influenced the energy consumption of the model, as well as its inference speed. Interestingly, the slowest implementation (TFLite) was the most energy efficient. It was also found that TFLite managed to remain on par with the other implementations in terms of speed, when processing low-resolution video. In the case of high-resolution video, more resource-intensive models were needed to maintain the speed, suggesting that a cloud deployment could be more beneficial in low-resource settings. Nevertheless, some resource-intensive models can be deployed on the edge, if resources available there are sufficient. The deployment proposed in a different study required all nodes to be equipped with a GPU [9]. This allowed the authors to use CNNs on the edge. A similar result was reported in [13], were IR models deployed on a Raspberry Pi 4B, equipped with a camera, and an Intel Neural Compute Stick 2 (a USB device for deep learning inference on the edge) were studied. These devices were chosen for their low power consumption and good computing capabilities. Overall, a model tasked with detecting PPE in the form of helmets and safety vests achieved precision on the order of 99.5%.
Models for PPE detection. Due to their health and safety importance, effective video analytics-based methods for detecting protective helmets worn by workers are currently a hot research topic.
The usage of existing, unmodified machine learning models for detecting protective head covers does not provide sufficient detection accuracy, as proven in a recent study [19]. In said article, several versions of the popular YOLO algorithm [1] were compared. It was shown that the most effective version of YOLO for helmet detection is the v4. After improving the loss function, it achieved more than 93% accuracy during tests. A similar study [10] focused on improving the YOLOv5 algorithm. The system achieved results close to 97% accuracy, thanks to the improvement of the structure of the neural network. Another study [18], also investigated improving YOLOv5. However, instead of the algorithm itself, work was focused on processing of input data by applying filters on the input image. This allowed to improve the accuracy to above 95%. Yet another study [15] presented an approach for improving the detection speed and accuracy by designing a multi-level pyramidal feature fusion network based on the ConCaNet attention mechanism. Here, YOLOv3 was applied and a dataset with 6000 images was used. The results demonstrate the effectiveness of this approach, which managed to reduce the number of necessary parameters.
Helmet detection can also be done using the SSD-MobileNet algorithm [4], which is based on yet another variant of CNN. An analysis of this method, reported in [7], tested its effectiveness and managed to reach 80% accuracy during tests. In a wider comparison of algorithm types [11], the authors proposed a helmet detection method based on a dynamically changing neural network - SHDDM (Safety Helmet Detection Dynamic Model). The developed model analyzes the human posture and defines the area where the helmet should be located, to eliminate the detection of the helmet outside the head area and thus reduce the false positive rate. There are
also other approaches to helmet detection, such as methods based on color and shape, used to locate the face and verify the proper wearing of a helmet [16]. Another solution used low-resolution images, captured from a video stream, using the Local Binary Pattern (LBP) and gray-level co-occurrence matrix (GLCM) methods along with a back-propagation neural network [14].
Another study [2] investigated the usefulness of artificially created images in the training of CNNs for PPE detection. The paper presented the results achieved with YOLOv3, trained on artificial images generated by the Rockstar Advanced Game Engine (RAGE) from the Grand Theft Auto V video game. This approach achieved a mean average precision (mAP) of only 55.11% on a test dataset consisting of real-world images. The mAP for synthetic images was much higher at 87.24%. It should be noted that the poor results for the real-world images are most likely caused by the RAGE engine being unable to generate a sufficient amount of head, welding mask, ear protection, and chest object variations.
As can be seen, there are many possibilities for detecting protective helmets. Here, the SHDDM is particularly noteworthy, as it has an important feature of checking whether the helmet is worn properly, and not only detecting its presence. This, in turn, is particularly relevant in real-world applications.
## 3 Proposed Architecture
The proposed video analytics system can be deployed in two architecture variants: edge-cloud (Fig. 1) and edge-only (Fig. 2). As outlined above, there are reasons to believe that both variants may be appropriate for the considered scenario. Both architectures share a common core deployed on the edge, consisting of: a camera, the Image Processor (IP) component, and the OSH (Occupational Safety and Health) manager's mobile device.
The camera (in the reported experiments, the Dahua IPC-HFW5449T-ASE-LED was used) provides a live RTSP video stream, which is directed to the Image Processor. The IP is a service written in Python, which can optionally perform preliminary image analysis. Using configurable methods such as motion detection and brightness thresholding, the IP is able to discard image frames that do not contain moving people, reducing network traffic to the components involved in actual image analysis. It is also responsible for communicating with the rest of the system, which is designed in accordance with the ASSIST-IoT reference architecture [3]. The IP communicates with the rest of the system by publishing alerts to an MQTT topic. This design allows other components and devices in the ASSIST-IoT deployment to be notified in a streaming manner of any OSH violations, such as workers not wearing protective helmets.
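As an illustration of the IP's role, the sketch below combines simple frame differencing with MQTT alert publishing; the threshold, topic name, payload, and stream URL are assumptions, and the actual IP's configurable methods may differ.

```python
# Sketch of IP-style frame filtering and MQTT alerting (OpenCV + paho-mqtt).
import cv2
import paho.mqtt.client as mqtt

MOTION_THRESHOLD = 5000                    # changed pixels counted as motion
client = mqtt.Client()                     # paho-mqtt 1.x style constructor
client.connect("localhost", 1883)

cap = cv2.VideoCapture("rtsp://camera/stream")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    prev_gray = gray
    mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(mask) < MOTION_THRESHOLD:
        continue                           # discard static frames early
    # ...forward the frame to the analysis back-end here; if it reports an
    # OSH violation, publish a streaming alert:
    client.publish("osh/alerts", '{"violation": "no_helmet"}')
```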
In the first version of the architecture - the edge-cloud deployment - the IP is configured to use the cloud-based AWS Rekognition platform, with its PPE detection service.
In the edge-only variant, the video analysis is performed by the Orange AI&ML Platform, which is deployed on a server on the construction site. This edge deployment allows for maintaining lower network latency, and ensures the privacy of worker data. The AI&ML Platform's services are written as Python runnable modules that provide their own APIs and GUIs. The services can reuse the APIs and GUIs provided by the platform, or build them from scratch. A service collects frames from a video source, processes them in an ML pipeline specific to the service, and adapts or interprets the results. The inference results from the Platform are forwarded to external services, with the use of provided connectors. As the Orange AI&ML Platform operates on the edge, all video
Figure 1: Edge-cloud deployment
Figure 2: Edge deployment
processing takes place on the client's site, ensuring full security of customer data (video) and compliance with appropriate regulations, such as GDPR.
## 4 Methodology
As part of this study, a preliminary version of the edge-only variant of the architecture was deployed on an active construction site. Using the Orange AI&ML Platform, a model was trained to count people wearing helmets entering and exiting a specific area. The system counts people in helmets in defined recognition areas (bounding boxes), crossing the yellow and green lines visible in Figs. 3 and 4. People entering the construction site are counted after crossing the green line, while people leaving are counted after crossing the yellow line. The machine learning pipeline consists of a YOLOX object detection model, trained for detecting heads in helmets, and a DeepSORT [12] multi-object tracking algorithm. The YOLOX model was trained using a dataset provided by the Northeastern University of China ([https://public.roboflow.com/object-detection/hard-hat-workers](https://public.roboflow.com/object-detection/hard-hat-workers)).
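The line-crossing counting can be sketched as follows, assuming the tracker provides per-frame track centroids; the side-of-line test and data structures are illustrative, not the deployed implementation.

```python
# Sketch of line-crossing counting over tracked centroids.
def side(p, a, b):
    """Sign of point p relative to the directed line a->b (0: on the line)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def update_counts(tracks, prev_pos, line_in, line_out, counts):
    """tracks: {track_id: (x, y)} centroids of helmeted heads in this frame."""
    for tid, pos in tracks.items():
        if tid in prev_pos:
            prev = prev_pos[tid]
            if side(prev, *line_in) > 0 >= side(pos, *line_in):
                counts["in"] += 1          # crossed the (green) entry line
            if side(prev, *line_out) > 0 >= side(pos, *line_out):
                counts["out"] += 1         # crossed the (yellow) exit line
        prev_pos[tid] = pos
```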
The system's results were compared to those obtained from an algorithm built into the Dahua camera. It should be noted that the camera counted _all_ people entering and leaving, including those without protective helmets. However, this should not impact the results much, as the safety regulations on this particular site forbid entering it without a helmet and the rule is strictly enforced before workers reach the counting location.
The measurements were performed in two series - each using a different bounding box definition. A single series spanned the length of one workday on the construction site. The number of entering and leaving people was counted in hourly intervals (between 5 AM and 7 PM).
## 5 Results
Tables 1 and 2 present the results of the performed experiments. Table 1 contains measurements made on 22nd November 2022, with the bounding box set as presented in Fig. 3. The average difference between the number of people entering, as measured by the camera and by the model, was equal to \(-6.21\) with a standard deviation of \(\sigma=5.08\), whereas for people exiting it was \(1.35\) with \(\sigma=2.73\). The correlation between entrances detected by the camera and the model deployed on the AI&ML platform, expressed by the Pearson coefficient, is \(0.988\), whereas for exits it is \(0.995\). The correlations were found to be statistically significant (\(p\leq 0.05\)). Table 2 contains measurements from 24th November 2022 (for modified detection areas, depicted in Fig. 4). On that day, the average difference for entering was \(-4.93\) with \(\sigma=4.25\), and for exiting \(3.92\) with \(\sigma=4.92\). For these measurements the Pearson coefficient for people entering is equal to \(0.993\) and for exiting \(0.989\). The correlations were found to be statistically significant (\(p\leq 0.05\)).
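The reported statistics (mean difference, standard deviation, Pearson coefficient and its p-value) can be reproduced from the hourly counts with a few lines of SciPy; the arrays below are placeholders, not the measured values.

```python
import numpy as np
from scipy import stats

# Placeholder hourly counts, not the measured values.
camera_entries = np.array([12, 30, 25, 18, 9])
model_entries = np.array([10, 27, 24, 17, 8])

diff = model_entries - camera_entries
r, p = stats.pearsonr(camera_entries, model_entries)
print(f"mean diff = {diff.mean():.2f}, sigma = {diff.std(ddof=1):.2f}")
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```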
The tables also present differences in the number of people detected by the camera and the AI&ML platform and the sum of these differences calculated for both movement directions: entries and exits.
During the experiments, several unexpected events took place, which had a significant impact on the reported results. Workers were observed acting in an unexpected manner - lingering or walking around the detection area (Fig. 5). It was also noticed that sometimes the workers put on their helmets after having passed the detection area (Fig. 6). These behaviors present a challenge to the future system, as they significantly affect its accuracy.
Figure 3: Bounding boxes location on November 22 (before modification)

Figure 4: Bounding boxes location on November 24 (after modification)
## 6 Concluding remarks
The tested model demonstrated relatively good performance in the investigated scenario. Its accuracy when tasked with counting people wearing protective helmets was found to be sufficient and was validated against a different system. A number of discrepancies between the counts of the model and the camera can be attributed to unexpected situations (Figs. 5 and 6) and the fact that the Dahua camera did not differentiate between people wearing and not wearing helmets. The high correlation coefficient between the camera and the Orange AI&ML Platform's model allows us to conclude that the two solutions perform comparably well.
It should be noted that the correlation changed between the days of the experiments. These differences are explained by the changes to the bounding box, which is one of the parameters that has to be investigated further.
Both variants of the proposed architecture can be used in the investigated scenario of PPE detection on a construction site. The feasibility of using an edge deployment was confirmed: the server's computational capabilities were sufficient to maintain satisfactory inference accuracy. Therefore, it can be concluded that the construction site is equipped with sufficient hardware to warrant further experiments with the deployment.
In the future, the two proposed architecture variants will be compared in terms of network latencies, resource utilization, and their accuracy. The presented model will also be tested further, which will include manually annotating the videos to obtain a ground truth for comparison. This will allow for determining the actual accuracy of the developed model. Further optimization of bounding box locations is also planned.
## Acknowledgments
Work supported by ASSIST-IoT project funded from the European Union's H2020 RIA program under grant 957258.
|
2306.00860 | Differentiable Allpass Filters for Phase Response Estimation and
Automatic Signal Alignment | Virtual analog (VA) audio effects are increasingly based on neural networks
and deep learning frameworks. Due to the underlying black-box methodology, a
successful model will learn to approximate the data it is presented, including
potential errors such as latency and audio dropouts as well as non-linear
characteristics and frequency-dependent phase shifts produced by the hardware.
The latter is of particular interest as the learned phase-response might cause
unwanted audible artifacts when the effect is used for creative processing
techniques such as dry-wet mixing or parallel compression. To overcome these
artifacts we propose differentiable signal processing tools and deep
optimization structures for automatically tuning all-pass filters to predict
the phase response of different VA simulations, and align processed signals
that are out of phase. The approaches are assessed using objective metrics
while listening tests evaluate their ability to enhance the quality of parallel
path processing techniques. Ultimately, an over-parameterized, BiasNet-based,
all-pass model is proposed for the optimization problem under consideration,
resulting in models that can estimate all-pass filter coefficients to align a
dry signal with its affected, wet, equivalent. | Anders R. Bargum, Stefania Serafin, Cumhur Erkut, Julian D. Parker | 2023-06-01T16:19:18Z | http://arxiv.org/abs/2306.00860v2 | # Differentiable All-pass Filters for Phase Response Estimation and Automatic Signal Alignment
###### Abstract
Virtual analog (VA) audio effects are increasingly based on neural networks and deep learning frameworks. Due to the underlying black-box methodology, a successful model will learn to approximate the data it is presented, including potential errors such as latency and audio dropouts as well as non-linear characteristics and frequency-dependent phase shifts produced by the hardware. The latter is of particular interest as the learned phase-response might cause unwanted audible artifacts when the effect is used for creative processing techniques such as dry-wet mixing or parallel compression. To overcome these artifacts we propose differentiable signal processing tools and deep optimization structures for automatically tuning all-pass filters to predict the phase response of different VA simulations, and align processed signals that are out of phase. The approaches are assessed using objective metrics while listening tests evaluate their ability to enhance the quality of parallel path processing techniques. Ultimately, an over-parameterized, BiasNet-based, all-pass model is proposed for the optimization problem under consideration, resulting in models that can estimate all-pass filter coefficients to align a dry signal with its affected, wet, equivalent.
## 1 Introduction
Digital simulations of analog audio equipment like tape machines, pre-amplifiers and distortion pedals remain in demand due to the hardware's rich history and unique sonic characteristics. With the increase in computational power, the deep learning approach to machine learning has proven useful for simulating virtual analog (VA) black-box models and has in several publications been applied as the main technique for approximating the output response of analog audio systems [1, 2, 3, 4, 5]. In [2] and [3], for example, a WaveNet-based model is adapted to predict the current non-linear output sample value, given a certain number of past input samples and the current input. In both works, the number of past input samples, also called the receptive field, is dynamically selected based on the measured impulse response length of the circuits under consideration. In [4] a recurrent neural network (RNN), such as the gated recurrent unit (GRU), is proposed for simulating the non-linear behaviour of distortion circuits due to their stateful nature. Contrary to this, the authors of [5] present the state trajectory network (STN), comprising a standard multilayer perceptron (MLP) with a skip-layer connection bypassing the densely connected layers of the network. The STN differs from related work as the input data is concatenated with measured values from the states of the circuit in order to model its behaviour. Since all aforementioned models are built upon the black-box paradigm, the results are significantly exposed to errors in the data collection process and any flaws in the hardware. Thus both the sonic characteristics and the phase response of the system are learned, introducing arbitrary and non-linear phase shifts to the incoming signal. This becomes a problem where parallel-path processing is desired, for instance when dry-wet mixing with the given simulations. All-pass filters (APF), which have a unitary magnitude response and a frequency-dependent phase response, would traditionally be the approach to account for the phase shifts; however,
manual coefficient adjustments would be both time-consuming and, for specific problems, impossible. An automatic solution to the problem is therefore highly desired. With inspiration from the differentiable digital signal processing (DDSP) methodology [6], we propose a model that tunes the coefficients of a cascaded APF system. The phase response of different black-box effects is thus automatically approximated, and the adjusted APFs are used to align a dry input signal with the processed, phase-shifted output.
The remainder of this paper is structured as follows: the all-pass optimization problem and related work are introduced in section 2. The construction of the differentiable APFs and their formulas are reviewed in section 3. Our approach and different deep optimization architectures are discussed in section 4. Finally, network evaluations, results, allusions and conclusions are presented in sections 5 and 6.
## 2 Background
DDSP stems from the motivation of generating audio by using a deep learning workflow to predict and extract synthesis parameters for vocoders and subtractive synthesis [6]. However, directly integrating classic signal processing elements into deep learning methods has shown promising results for the control and adjustment of other DSP blocks, including convolution, filters and one-period wave-tables [7]. Specifically, the authors of [8] have demonstrated the use of DDSP in the context of IIR filters, training different filter topologies in a recursive manner to match target frequency responses. Several other projects have investigated deep learning for IIR filter design, but similar to [8], the work has focused solely on learning coefficients for magnitude rather than phase responses. In [9], a neural network is applied to carry out parametric equalizer matching using differentiable biquads, whereas the authors of [10] approximate shelving filter coefficients directly in the difference equation. All of these works carry out an optimization problem in the frequency domain by minimizing the mean squared error (MSE) between the ground truth and the derived magnitude responses.
The approach to frequency response matching is different in [11]. Here a BiasNet is applied to determine the IIR equalizer parameters. The BiasNet is a simple feedforward neural network that takes advantage of the learnable bias terms, denoted \(\mathbf{b}_{0}\), in the input layer. This architecture is called a "deep optimization" algorithm, owing its name to the use of the neural network as a non-convex optimization algorithm that tunes or derives external parameters. An advantage of the BiasNet is its independence of input features, which according to [11] makes it more likely to provide a solution to many optimization problems. Furthermore, the network does not rely on the input size and content; hence only a target frequency response needs to be given to the loss function. Similar to the IIR system in [11], adjusting cascaded APFs might be a highly non-linear process. In this paper we therefore utilise the over-parameterised nature of the BiasNet to overcome the, potentially, non-convex phase response matching problem and extend the work of [8] to be applicable in the domain of APFs and phase response approximation.
### Problem Formulation
We represent the monophonic signals we want to phase compensate as input vectors \(\mathbf{x}\in\mathbb{R}^{T}\), where \(T\) is the signal length. The task is to process these signals with an APF function \(f\), such that the signal is phase shifted to match a target signal while introducing the least amount of destructive interference. The function \(f\) takes as arguments the input and the filter coefficients \(\mathbf{c}\), whose number matches the filter order \(N\) of the given sub-system. This yields the output \(y^{T}=f(\mathbf{c}^{n},\mathbf{x}^{T})\). For a system of cascaded APFs, we define the function composition of size \(D\), where each function receives the output of the previous one, as:
\[y^{T}=f(\mathbf{c}^{n},\mathbf{x}^{T})_{1}\circ f(\mathbf{c}^{n})_{2}\circ... \circ f(\mathbf{c}^{n})_{D-1}\circ f(\mathbf{c}^{n})_{D}, \tag{1}\]
where the order \(N=nD\) if each sub-system is an \(n\)th-order filter, i.e. \(N=2D\) when each sub-system is a 2nd-order filter; the system can be more general than that depending on the value of \(n\). Depending on the deep learning techniques used, each function \(f\) can be arbitrarily complex and represented either directly as filter coefficients, as done in [8], or as parameterised sub-networks, such as the BiasNet applied for the deep filter optimization procedure in this paper.
## 3 Differentiable All-pass Filters
Before outlining the model architecture of the proposed APF filter tuning process, we present the differentiable APF structures used to adjust the coefficients in the deep learning pipeline. Following the transposed direct form-II (TDF-II) structure, a 2nd order IIR APF is given by the traditional biquad transfer function [12]:
\[A_{2}(z)=\frac{c+dz^{-1}+z^{-2}}{1+dz^{-1}+cz^{-2}} \tag{2}\]
In practice, this transfer function can be implemented using the following recurrent, and stateful, difference equation:
\[\begin{split} y[n]&=cx[n]+v_{1}[n]\\ v_{1}[n]&=dx[n]+v_{2}[n]-dy[n]\\ v_{2}[n]&=x[n]-cy[n],\end{split} \tag{3}\]
where the coefficients \(c\) and \(d\) control the steepness and the break frequency of the APF's phase response, respectively. The coefficients are functions of the pole radius \(R\) and the cutoff frequency \(f_{c}\), and have the ranges \(-1<c<1\) and \(-2<d<2\). We introduce a stability constraint to the tuning process and estimate the filter parameters rather than the coefficients themselves. The filter coefficients can then be calculated for each forward call by [13]:
\[c=R^{2}\qquad d=-2R\cos(2\pi f_{c}/f_{s}), \tag{4}\]
with \(f_{s}\) being the sampling rate of the signal. As the filter coefficients, and thus the steepness of the phase response, depend on the pole radius, the phase response might have significantly limited resolution at low frequencies. When trying to match frequencies below 100 Hz, the parameters for a system with a high sampling rate (192 kHz) exist in very small ranges, with \(0.9<R<0.999\) and the resulting coefficient \(d\) lying in \(-1.999<d<-1.97\), depending on the cutoff frequency. It is hypothesised that the prediction of values in such small ranges might introduce numerical overflow and coefficient quantization errors while being difficult to generalise. We therefore propose a differentiable warped all-pass structure to increase the frequency resolution in low-frequency ranges, emphasising the importance of low-frequency content in the learning process. A warped APF is designed and realized on a warped frequency scale. This is achieved by replacing the unit delays of a traditional APF with auxiliary 1st-order APFs, whose phase response is used to skew the frequency axis [14]. A warped version of the APF in equation (3) is given by the difference equation:
\[\begin{split} y[n]&=\frac{x[n](c+a^{2}+ad)+v_{1}[n] }{1+a^{2}c+ad}\\ v_{1}[n]&=x[n](2a+d+ac)+y[n]\big{(}-2ac-d-a\big{)}-a^ {3}(x[n]-cy[n])-v_{2}[n](a^{2}+1)\\ v_{2}[n]&=x[n]-cy[n]-\big{(}a^{2}(x[n]-cy[n])+av_{2}[ n]\big{)},\end{split} \tag{5}\]
with \(a\) being the warping factor i.e. the coefficient of the inserted auxiliary 1st-order APFs. For stability reasons the warping factor for all inserted APFs is identical and thus gathered into a global, but learnable, variable \(a\).
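A minimal NumPy sketch of the plain (non-warped) building block is given below: Eq. (4) maps the stable filter parameters \((R,f_{c})\) to biquad coefficients, Eq. (3) applies the TDF-II all-pass difference equation, and the cascade follows the composition of Eq. (1). The warped variant of Eq. (5) follows the same pattern with the additional factor \(a\).

```python
import numpy as np

def apf_coeffs(R, fc, fs):
    # Eq. (4): stable filter parameters (pole radius R, break
    # frequency fc) mapped to the biquad coefficients c and d.
    c = R ** 2
    d = -2.0 * R * np.cos(2.0 * np.pi * fc / fs)
    return c, d

def apf_tdf2(x, c, d):
    # Eq. (3): transposed direct form-II all-pass difference equation.
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    v1 = v2 = 0.0
    for n, xn in enumerate(x):
        y[n] = c * xn + v1
        v1 = d * xn + v2 - d * y[n]  # uses the previous v2
        v2 = xn - c * y[n]
    return y

def cascade(x, params, fs):
    # Function composition of Eq. (1): each stage filters the
    # output of the previous one.
    for R, fc in params:
        x = apf_tdf2(x, *apf_coeffs(R, fc, fs))
    return x
```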
## 4 Proposed Method
By utilizing over-parameterisation we propose two BiasNet-based models and a phase alignment procedure to extend the differentiable IIR filter design techniques towards APFs. We call these models _sequential_ and _connected_, as illustrated in figure 1. Both models contain cascaded differentiable warped APFs matching a desired filter order \(N\). The neural network is excited by a learnable bias input layer, whereas its output corresponds to the filter parameters of the closed-form equations used to calculate the final APF coefficients. Three values, \(R\), \(f_{c}\) and \(a\), are thus fed from the output of the model to every single filter. The primary deep neural network is an MLP with periodic sinusoidal activations for the hidden layers and _tanh_ activations for the output layer. The _sine_ activation function has been included as it avoids local minima during network optimization, is robust towards vanishing gradients and is thus suitable for non-convex problems such as the cascaded APF pipeline [15]. We additionally de-normalise the network outputs, taking account of the range in which \(f_{c}\) exists. We use a constrained de-normalization technique similar to the one proposed in [11] to de-normalise the _tanh_ output layers, scaling them between 20 Hz and 20 kHz:
\[fc=\frac{fc_{max}-fc_{min}}{2}p+\frac{fc_{max}+fc_{min}}{2}, \tag{6}\]
where \(p\) denotes the value to de-normalise. The DNN is updated such that its output layer produces filter coefficients that create the needed phase alignment. We create two different BiasNet models to investigate the importance of over-parameterisation and its impact on the non-convex problem as well as the general learning process. More specifically, the cascade in the _sequential_ structure is achieved by chaining several BiasNets together, each representing a respective filter, to create the desired order. Individual DNNs are thus used to derive the coefficients in parallel for each individual APF. The BiasNet is initialized as a densely connected bottleneck with hidden layers of 1024, 512, 256, 128 units respectively. It contains approximately 692.5k learnable parameters, which accumulate to 2.7 million parameters for a cascaded filter of 7th order. Due to its large number of learnable parameters, a benefit of this architecture is the possibility of a complex and detailed parameter estimation process, however, it suffers from longer calculation times
and a lack of interaction between the DNNs of each individual block. For the _connected_ structure, all filter parameters are coming directly from one large BiasNet. This introduces only 692k learnable parameters in total, independent of the cascaded filter order. The size of the output layer in the _connected_ architecture thus equals 11 for a warped APF of 7th order. The _connected_ architecture allows for interaction between the cascaded filters since the same network derives all parameters, however, it might suffer from a smaller and therefore less complex parameter space.
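A condensed PyTorch sketch of the _connected_ variant is shown below, assuming the layer sizes given above; initialization details and the exact parameter grouping per filter are simplified.

```python
import torch
import torch.nn as nn

class BiasNet(nn.Module):
    """Learnable-bias MLP; the input is the trainable vector b0."""

    def __init__(self, sizes=(1024, 512, 256, 128), n_out=11):
        super().__init__()
        self.b0 = nn.Parameter(torch.randn(sizes[0]))  # learnable input
        self.hidden = nn.ModuleList(
            nn.Linear(d_in, d_out) for d_in, d_out in zip(sizes, sizes[1:])
        )
        self.out = nn.Linear(sizes[-1], n_out)

    def forward(self):
        h = self.b0
        for layer in self.hidden:
            h = torch.sin(layer(h))      # periodic hidden activations
        return torch.tanh(self.out(h))   # normalised parameters in (-1, 1)

def denormalise(p, lo=20.0, hi=20_000.0):
    # Eq. (6): map a tanh output to the cutoff range [20 Hz, 20 kHz].
    return (hi - lo) / 2.0 * p + (hi + lo) / 2.0
```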
### Loss Function
Following the majority of related work, the loss function of the network that is optimized during training operates in the frequency domain. Since the frequency response of an APF is by nature unity gain, a change in magnitude will not be detected, and the direct spectrogram comparison used in [6] and [9] is thus not sufficient for the problem at hand. Rather, we calculate the difference between the sum of the individual frequency responses of the target \(y\) and the prediction \(\hat{y}\), and the frequency response of the signals summed before the transform. The loss function is given by:
\[\varepsilon_{STFT}(y_{i},\hat{y}_{i})=\frac{1}{n}\sum_{i=1}^{n}((S(y_{i})+S( \hat{y}_{i}))-S(y_{i}+\hat{y}_{i}))^{2}, \tag{7}\]
where the function \(S\) denotes the spectrogram or the squared magnitude of the STFT, simply given as:
\[S(y_{i})=|STFT(y_{i})|^{2} \tag{8}\]
Figure 1: _High level overview of the deep-optimization models_
By doing this, the magnitudes of the target and the prediction are forced to be similar, leaving phase as the only changeable factor. Since the magnitude spectrogram \(S\) does not include phase information, it highlights frequency areas where the summation of the input and the target introduce destructive interference. Optimization can thus exclusively be achieved by attaining coefficients whose phase response shifts the input such that the magnitude of the signal summation matches the magnitude summation of each individual STFT. To avoid the frequency-dependent trade-off of the STFT and to improve the robustness of the loss function, we extend equation (7) by the multi-resolution STFT (M-STFT) loss [16]:
\[\varepsilon_{M\text{-}STFT}(y_{i},\hat{y}_{i})=\frac{1}{M}\sum_{m=1}^{M}\varepsilon_{STFT}^{(m)}(y_{i},\hat{y}_{i}), \tag{9}\]
with \(M\) being the number of different analysis resolutions. The final loss function is thus given by an average of the STFT loss in Eq. (7) over the different resolutions. By utilizing multiple FFT lengths and summing the information across the different resolutions, we capture a more realistic representation of the training signals [16]. The different resolutions are selected according to the STFT parameters presented in [17], listed in Table 1.

| FFT-Size | Hop Length | Window Size |
| --- | --- | --- |
| 512 | 50 | 240 |
| 1024 | 120 | 600 |
| 2048 | 240 | 1200 |

Table 1: _Details of the parameters for the different STFT resolutions_
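A minimal PyTorch sketch of the loss in Eqs. (7)-(9) is given below, using `torch.stft` and the resolutions of Table 1; the Hann windowing is an assumption.

```python
import torch

def spectrogram(x, n_fft, hop, win):
    # Eq. (8): S(y) = |STFT(y)|^2.
    window = torch.hann_window(win, device=x.device)
    X = torch.stft(x, n_fft=n_fft, hop_length=hop, win_length=win,
                   window=window, return_complex=True)
    return X.abs() ** 2

def stft_loss(y, y_hat, n_fft, hop, win):
    # Eq. (7): gap between the sum of the individual spectrograms
    # and the spectrogram of the summed signals.
    s_sum = spectrogram(y, n_fft, hop, win) + spectrogram(y_hat, n_fft, hop, win)
    s_mix = spectrogram(y + y_hat, n_fft, hop, win)
    return ((s_sum - s_mix) ** 2).mean()

def m_stft_loss(y, y_hat, resolutions=((512, 50, 240),
                                       (1024, 120, 600),
                                       (2048, 240, 1200))):
    # Eq. (9): average over the resolutions of Table 1.
    losses = [stft_loss(y, y_hat, n, h, w) for n, h, w in resolutions]
    return torch.stack(losses).mean()
```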
### Proof of concept
By a simple proof of concept we show that the over-parameterisation proposed above is crucial for the deep APF optimization problems at hand. To inspect the possibilities of differentiable APFs we first create an example following the work in [8]. We thus start with a naive DDSP approach and derive the coefficient values for the filters directly from the difference equation in order to provide a baseline. To do this we attempt to align the input and output of a simulated 1st-order RC filter, which due to its natural low-pass behaviour creates a phase shift in the higher frequency register. A 1st order RC filter is given by the difference equation [18]:
\[V_{out}^{n}=\frac{\rho(V_{in}^{n}+V_{in}^{n-1})+(1-\rho)V_{out}^{n-1}}{1+\rho}, \tag{10}\]
where \(V_{in}^{n}\) and \(V_{in}^{n-1}\) are the current and past input samples, \(V_{out}^{n}\) and \(V_{out}^{n-1}\) are the current and past output samples, and \(\rho\) is given by \(1/(2f_{s}RC)\), with \(R\) and \(C\) being the resistance and capacitance of the circuit components; in our case \(R=120\,\Omega\) and \(C=68\,nF\), respectively. As the RC filter will shift incoming frequencies by at most \(90^{\circ}\), we train a 1st-order APF the naive way, using the same hyperparameters and loss functions presented in section 5. The results of the trainings are depicted in figure 2.

Figure 2: _Loss curve, phase compensation and phase response of RC Filter trained with M-STFT Loss (upper row) and the MSE Loss (lower row). The middle plot shows the alignment at 0, 2, 7 and 9 seconds into the training signal._
As seen above, both the M-STFT and the MSE loss converge, with the latter being faster but noisier. Both cases additionally manage to compensate for the phase shifts introduced by the RC filter, with the M-STFT training being more precise. However, when applying the above naive approach to more complex problems, such as the VA black-box effects presented in section 5, it was quickly realized that the training loss for a system of cascaded APFs diverged and in many cases exploded. When tuning cascaded APFs we are handling a highly non-linear problem, where the individual minimum of each APF affects the remaining cascade, while the minimum of the loss function is most often based on the frequency areas where alignment gives less destructive interference. The function that estimates the full phase response may thus be non-convex, as it has multiple local minima, which was found to be too complex for the naive and traditional DDSP approach. We argue that over-parameterised networks and deep optimization frameworks solve this problem.
## 5 Evaluation
We examine the proposed models through objective metrics and use the proposals with the best results for final listening tests. The training data consists of a logarithmic sine sweep from 20 Hz to 20 kHz over 10 seconds at a sample rate of 192 kHz. The sweep is fed through three different VA black-box simulations shown to introduce significant and complex deviations from linear phase behaviour: 1) Electronic Audio Experiments Surveyor Pre-amp, 2) 15IPS Tape Saturation, and 3) LEM 808R DLX Mixer. We train and evaluate all models in an _in-to-out_ fashion, meaning that our models learn the coefficients needed to shift the non-affected input in order to match the VA processed and phase-shifted output. Once training is done, the coefficient values can be exported and inserted into a traditional APF pipeline for the desired real-time adjustments. All models are initialized as a 7th-order APF structure with a cascade of three 2nd-order filters and one 1st-order APF. The output signals of the three systems are sampled and divided into sequences of 2048 samples, which at 20 Hz corresponds to roughly a quarter of a sinusoidal period at a sample rate of 192 kHz. We heuristically found this sequence length to be a good compromise between phase information and training time. The sub-sequences are additionally organized into batches. We train the models using the earlier mentioned M-STFT loss as well as the traditional MSE loss function, which is used to further validate the phase-compensated simulations/reconstructions. All training sessions are carried out using 1 NVIDIA Tesla T4 GPU for 400 epochs or until the training loss plateaus (approx. 5 hours). We train the models with a learning rate of 1e-5, a batch size of 512 and the ADAM optimizer. The final M-STFT and MSE values for the trained models are seen in Table 2 below:

| Model | Loss Type | Params | Effect | Final Loss |
| --- | --- | --- | --- | --- |
| Sequential | MSE | 2.7M | Surveyor | 1.375e-1 |
| Sequential | MSE | 2.7M | 15 IPS | 4.235e-3 |
| Sequential | MSE | 2.7M | LEM | 4.708e-2 |
| Sequential | M-STFT | 2.7M | Surveyor | 1.857e-2 |
| Sequential | M-STFT | 2.7M | 15 IPS | 8.078e-3 |
| Sequential | M-STFT | 2.7M | LEM | 9.998e-3 |
| Connected | MSE | 692.5K | Surveyor | 1.437e-1 |
| Connected | MSE | 692.5K | 15 IPS | 4.271e-3 |
| Connected | MSE | 692.5K | LEM | 4.707e-2 |
| Connected | M-STFT | 692.5K | Surveyor | 2.991e-1 |
| Connected | M-STFT | 692.5K | 15 IPS | 4.871e-1 |
| Connected | M-STFT | 692.5K | LEM | 9.887e-3 |

Table 2: _Model and training specifications_
### Performance Assessment
The performance of the trained models is quantitatively evaluated on unseen test audio. The test audio is chosen such that it exposes the model to signals of various frequencies and timbres. It consists of a concatenation of different loops: an acoustic breakbeat drum loop, an electric bass-guitar loop, a guitar loop and a synthesized acid bassline (duration of approx. 2 minutes). As signal displacement is given in the time domain, the objective metrics chosen for this study compare the similarity between the automatically shifted input (prediction) and the VA processed output (ground truth) in the sample/phase space. We evaluate the performance by measuring the similarity using the traditional MSE as well as the mean absolute error (MAE), defined as:
\[\epsilon_{MAE}(\hat{y},y)=\frac{1}{n-1}\sum_{i=0}^{n-1}\lvert\hat{y_{i}}-y_{i}\rvert \tag{11}\]
Additionally, we include the error-to-signal ratio (ESR), which can be regarded as an extension of the MSE with the inclusion of target energy normalization to penalise the errors more equally when the input signal is lower in absolute amplitude. The ESR is given by:
\[\epsilon_{ESR}(\hat{y},y)=\frac{\sum_{i=0}^{N-1}\lvert\hat{y_{i}}-y_{i}\rvert^ {2}}{\sum_{i=0}^{N-1}\lvert y_{i}\rvert^{2}} \tag{12}\]
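Both metrics are a few lines of NumPy; the sketch below mirrors the normalisations of Eqs. (11) and (12).

```python
import numpy as np

def mae(y_hat, y):
    # Eq. (11), with the paper's 1/(n-1) normalisation.
    return np.abs(y_hat - y).sum() / (len(y) - 1)

def esr(y_hat, y):
    # Eq. (12): squared error normalised by the target energy.
    return np.sum((y_hat - y) ** 2) / np.sum(y ** 2)
```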
The final values for each objective metric are summarized in Table 3, with the values for the non-shifted signals included as a static reference. We see that the _sequential_ architecture performs best for all objective metrics on both the _surveyor_ and the _15IPS_ effects (given in bold). The _sequential_ model is slightly surpassed by the _connected_ architecture in the case of the _LEM_ effect, although only by a combined distance of 0.003 for the MSE-trained version and 0.001 for the M-STFT-trained version. It can thus be concluded that the _sequential_, over-parameterised BiasNet approach quantitatively provides a closer match to the ground-truth phase response of the trained systems. Figure 3 presents a few phase-matching results on different frequency content of the test audio. Examples of all trained VA simulations are shown. It is clearly seen that the phase-response estimation done by the models compensates the input to match the saturated and phase-shifted output.

| Model | Loss | Effect | MAE | MSE | ESR |
| --- | --- | --- | --- | --- | --- |
| Reference | None | Surveyor | 0.161 | 0.066 | 3.942 |
| Reference | None | 15 IPS | 0.164 | 0.069 | 4.105 |
| Reference | None | LEM | 0.020 | 0.001 | 0.068 |
| Sequential | MSE | Surveyor | **0.033** | **0.003** | **0.177** |
| Sequential | MSE | 15 IPS | **0.007** | **0.0005** | **0.007** |
| Sequential | MSE | LEM | 0.017 | 0.0007 | 0.045 |
| Sequential | M-STFT | Surveyor | 0.037 | 0.004 | 0.235 |
| Sequential | M-STFT | 15 IPS | 0.015 | 0.001 | 0.058 |
| Sequential | M-STFT | LEM | 0.017 | 0.0007 | 0.045 |
| Connected | MSE | Surveyor | 0.079 | 0.017 | 1.022 |
| Connected | MSE | 15 IPS | 0.010 | 0.0002 | 0.017 |
| Connected | MSE | LEM | **0.016** | **0.0007** | **0.042** |
| Connected | M-STFT | Surveyor | 0.074 | 0.015 | 0.888 |
| Connected | M-STFT | 15 IPS | 0.089 | 0.021 | 1.236 |
| Connected | M-STFT | LEM | 0.017 | 0.0007 | 0.044 |

Table 3: _Overview of the performance results for the individual models across effects and loss functions_
### Listening Test
Due to the inadequacy of the objective metrics in evaluating the perceived quality of the phase alignment in real-life use cases, such as parallel path processing scenarios, a subjective listening test is carried out. By the use of an 'audio perceptual evaluation' (APE) listening test, we examine the difference between the clean summation and the compensated dry-wet mixing of musical content, using the proposed _sequential_ architecture. The APE-style test extends the well-known MUSHRA test by rating different versions of the same reference on a single scale using sliders [19]. Compared to the MUSHRA test, the APE is useful for evaluating the perceived quality of dry-wet mixing as there exists no known reference. Since the audibility of dry-wet mixing highly differs relative to the use case, the participants are presented with three different musical scenarios for each compensated audio effect: a low relative mix with 75% dry signal and 25% wet signal, a middle relative mix with 50% dry signal and 50% wet signal, and a high relative mix with 25% dry signal and 75% wet signal. Each relative mix is normalized; however, no loudness compensation has been applied, as volume differences in different frequency areas are natural artifacts of phase misalignment and thus represent the baseline of the listening test. We present the participants with two different audio mixes matching a real-world music mixing and mastering scenario, where the black-box effect would be applied to give the final mix a saturating "warmth". The participants are informed that they are listening to different versions of effect models and thereafter instructed to 'blindly' compare the clean and compensated versions based on their perceived level of audio quality. Sound examples can be heard on the accompanying webpage 2.

Figure 3: _Examples of the normalized phase alignment results for each individual black-box effect training (blue = input, orange = output, green = prediction)_
Footnote 2: [https://abargum.github.io/](https://abargum.github.io/)
Fifteen convenience-sampled participants without any reported hearing impairments and with three or more years of musical experience took part in an online listening test. Individual boxplots for the evaluation of the clean and compensated audio mixes are shown in figure 4. The answers for each audio mix are summed and averaged for each participant, giving a final comparable score for the individual black-box effects across the different mix configurations.
As seen in figure 4, the difference between the clean and compensated versions for the 'middle' scenario with 50% dry-wet mixing is highly audible. This is evident both for the _Surveyor_ and the _15IPS_ effects. The 'low' mix scenario additionally performs better for the compensated version for both the surveyor and the 15IPS. In the case of the LEM effect, all dry/wet mix cases were rated to sound equally good. As seen in figure 3, this is most likely caused by the lack of phase shifts in the audible frequency ranges. Lastly, the scores for all the 'high' cases barely differed, which is possibly due to the dynamics of the saturated output masking the actual interference.
It is thus clear that the trained models manage to align the input to its respective target signal in the presented examples. This is quantitatively evident in the objective metrics in Table 3, where the 'sequential' MSE model performs better on all metrics compared with its static counterpart. The time-domain representation in figure 3 furthermore supports the alignment of the musical signals, where it can clearly be seen that the temporal envelope of the prediction matches the target. Lastly, a perceptual listening test shows that especially the audio quality of the surveyor and the 15IPS models is improved in the dry-wet mixes provided.
## 6 Conclusions
To address the challenges of the learned phase responses in VA black-box effects, this paper has presented, discussed and evaluated deep-learning techniques for automatic signal alignment. By utilizing the 'deep optimization' methodology, we propose a BiasNet-inspired architecture that approximates the filter parameters used for coefficient calculations in a system of cascaded differentiable warped APFs. We thus extend the naive approach to approximating DDSP IIR filters with over-parameterized neural networks and use them to exhibit successful models for aligning the dry and wet paths of virtual analog effects. Ultimately, three black-box effects were chosen for the final training procedure. By evaluating the models on different objective metrics, we demonstrate that what we call a 'sequential' architecture efficiently tunes all-pass filter coefficients for approximating a system's phase response. It is thus demonstrated that over-parameterisation is suitable when estimating filter coefficients in more complex and non-convex scenarios. The results are supported by subjective listening tests, in which 15 expert listeners rated the dry-wet mixing of VA effects to be significantly improved by the deep all-pass models, proving that the approach is also useful in real-life use cases.
## 7 Acknowledgments
Many thanks to the great number of anonymous reviewers and the whole ML/DSP research team at Native Instruments Berlin in 2022 for their help, guidance and support.
|
2301.02008 | Expressive Speech-driven Facial Animation with controllable emotions | It is in high demand to generate facial animation with high realism, but it
remains a challenging task. Existing approaches of speech-driven facial
animation can produce satisfactory mouth movement and lip synchronization, but
show weakness in dramatic emotional expressions and flexibility in emotion
control. This paper presents a novel deep learning-based approach for
expressive facial animation generation from speech that can exhibit
wide-spectrum facial expressions with controllable emotion type and intensity.
We propose an emotion controller module to learn the relationship between the
emotion variations (e.g., types and intensity) and the corresponding facial
expression parameters. It enables emotion-controllable facial animation, where
the target expression can be continuously adjusted as desired. The qualitative
and quantitative evaluations show that the animation generated by our method is
rich in facial emotional expressiveness while retaining accurate lip movement,
outperforming other state-of-the-art methods. | Yutong Chen, Junhong Zhao, Wei-Qiang Zhang | 2023-01-05T11:17:19Z | http://arxiv.org/abs/2301.02008v2 | # Expressive Speech-Driven Facial Animation With Controllable Emotions
###### Abstract
It is in high demand to generate facial animation with high realism, but it remains a challenging task. Existing approaches of speech-driven facial animation can produce satisfactory mouth movement and lip synchronization, but show weakness in dramatic emotional expressions and flexibility in emotion control. This paper presents a novel deep learning-based approach for expressive facial animation generation from speech that can exhibit wide-spectrum facial expressions with controllable emotion type and intensity. We propose an emotion controller module to learn the relationship between the emotion variations (e.g., types and intensity) and the corresponding facial expression parameters. It enables emotion-controllable facial animation, where the target expression can be continuously adjusted as desired. The qualitative and quantitative evaluations show that the animation generated by our method is rich in facial emotional expressiveness while retaining accurate lip movement, outperforming other state-of-the-art methods.
Yutong Chen\({}^{\star}\), Junhong Zhao\({}^{\dagger}\) and Wei-Qiang Zhang\({}^{\ddagger}\)

\({}^{\star}\)Tsinghua University, [email protected]
\({}^{\dagger}\)Victoria University of Wellington, New Zealand, [email protected]
\({}^{\ddagger}\)Tsinghua University, [email protected]
3D facial animation, 3D avatar, talking head, emotion, expressive facial animation
## 1 Introduction
_Facial animation_ is a growing research topic that has been widely adopted in many applications, such as education, Virtual Reality (VR), and digital entertainment. Commercial products often require high plausibility and expressiveness in the animated characters to meet users' needs for immersive engagement. Creating such realistic facial animation is a great challenge. Many recent works have resorted to deep learning to automate face animating based on easy-acquire data. Speech-driven facial animation is one critical component in this field and has drawn much attention. It is to emulate life-like facial motion based on the information carried by a vocal audio track. Our work focuses on speech-driven facial animation in 3D. Compared with animating 2D images, modulating a 3D model is directly applicable to most 3D applications like 3D games and visual aftereffects. Attempts have been made to explore the dependencies between audio and 3D face movement, most of which focus on the lower face and lip movements and their synchronization with speech [1, 2, 3, 4, 5, 6]. Although they can produce plausible basic facial motions, they are far from satisfactory in synthesizing facial expressions, especially in presenting emotions.
To realize emotional animation based on only an audio track is a challenging task. Although the dependency between sound production and lip movement for the same person is deterministic, its dependencies with the expressions of different emotion categories and intensities are highly ambiguous. There are many different personalized conveying ways of one emotion for different users. Such inherent ambiguity makes neural networks hard to handle emotion variations based on the audio input, even given long-contextual information. The works by Karras et al. and Pham et al. [7, 8] tried to extract emotion features from speech and implicitly embed them into their neural network to realize emotion synthesis. However, their solution inevitably suffers from over-smoothed regression due to the limited training data, and often results in limited expressiveness. In practice, we found that even given the best expressive speech, the animations generated by existing methods still fail to achieve drastic emotion dynamics.
We present a novel approach to realize emotion-controllable facial animation. Instead of recovering lip movement from audio, our method enriches the emotion expressivity and enables the adjustment of the intensity of emotion effects to satisfy the animators' needs. We propose an emotion controller module, which includes an emotion predictor followed by an emotion augment network, to explicitly model the relationship between emotion variations and corresponding facial expression parameters. Image-based emotion recognition was used to generate emotion information as priors to guide the training process. During inference, the specified emotion condition will apply to the speech-driven facial animation to realize emotion enhancement and customization. By explicitly modelling emotion, our animator-friendly system enables emotion control with a given emotion type and an intensity value. Our method shows promising results in synthesizing controllable emotional facial animation while retaining high-accuracy lip synchronization, outperforming the
state-of-the-arts. _The implementation code will be publicly available online._
## 2 Related Work
Despite much work focusing on facial animation from image or video [9, 10, 11, 12] or generate speech-driven 2D talking head [13, 14, 15, 16, 17], we concentrate our efforts on speech-driven 3D facial animation, mainly targeting to improve the emotional expression synthesis. We review the most relevant deep learning-based approaches.
**Speech-driven 3D models.** Earlier speech-driven 3D facial animation methods are based on phonetic annotation and viseme-model blending [1, 18]. VisemeNet [1] leveraged LSTMs for phoneme grouping and facial landmark prediction, and used their results to regress viseme parameters for lip animation. VOCA [5] tried to encode the identity-dependent information into animation and synthesized various speaking styles. MeshTalk [6] proposed a method to disentangle audio-correlated and audio-uncorrelated information to generate more plausible dynamics on the upper face while attaining accurate lip motion. FaceFormer [2] proposed a transformer-based autoregressive model to encode long-term audio context and synthesize improved lip motions. Taylor et al. [3] proposed a sliding-window regression method to predict the active appearance model (AAM) parameters of a reference lower face based on phoneme labels. Liu et al. [19] considered the influence of geometry representation and utilized it to produce generalized speaker-independent facial animation. Although considerable progress has been achieved in the field, most of these prior works focused on lip motion accuracy without considering expressiveness, which limits the realism of their results.
**Conditional emotion synthesis.** Some recent facial animation methods [15, 7, 8, 20] tackled some issues in emotion synthesis to make more vivid facial animations. Pham et al. used LSTMs in [8] (improved in [21] with CNN-RNN) to model the long-contextual relationship between acoustic features and facial expressions to realize emotion awareness in the generated animation. The method proposed by Karras et al. [7] extracts a latent emotion representation from the audio without identifying emotion categories. In both methods, emotions are implicitly represented and thus lack meaningful guidance to emotion intensity control. Ji et al. [15] decomposed speech into emotion and content components to generate emotion-controlled 2D talking heads. Chun et al. [20] introduced an emotion-guided method where emotion-expressive blendshapes are enhanced by emotion recognition guidance and then fused with mouth-expressive blendshapes. Nevertheless, the technique needs manual effort to prepare each emotion template, and the generated expressions often lack upper-face dynamics.
## 3 Method
### Network Architecture Design
Figure 1 illustrates our pipeline. Our goal is to enrich emotion expressivity in speech-driven facial animation and enable users' control over emotion variations. We first design a neural network to estimate the facial movement represented by FLAME parameters [22] (see supplementary) from the input audio, using both local and long-contextual information. The problem can be formulated as follows:
\[(\vec{\phi_{t}},\vec{\theta_{t}})=F(S(\vec{x}_{t\pm\Delta t})\rightarrow\vec{ \psi_{t}};W(\vec{x}_{t\pm\Delta t})\rightarrow\vec{\omega_{t}}) \tag{1}\]
Given the audio segment \(x_{t}\) at time \(t\) and its neighboring frames, the local content features (content vector \(\vec{\omega_{t}}\)) and global style features (style vector \(\vec{\psi_{t}}\)) learned by wav2vec2.0 \(W(\cdot)\) and a transformer encoder \(S(\cdot)\) are first extracted separately. Then they are concatenated together to predict FLAME parameters. The mapping between the audio and the FLAME parameters is learned by an Audio2FLAME module \(F(\cdot)\), which is a multi-layer CNN. The predicted FLAME parameters, including expression parameters \(\vec{\phi_{t}}\) and pose parameters \(\vec{\theta_{t}}\), combined with the shape parameters of the given identity, are converted to 3D mesh as the output. The lip synchronization will be primarily focused on and preciseness in mouth motion will be ensured at this phase.
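As an illustration of the content branch \(W(\cdot)\), a hedged sketch using the Hugging Face transformers implementation of wav2vec 2.0 is given below; the checkpoint name is illustrative and the paper's exact configuration may differ.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

CKPT = "facebook/wav2vec2-base-960h"  # illustrative checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
model = Wav2Vec2Model.from_pretrained(CKPT)

def content_features(waveform_16k):
    # waveform_16k: 1-D float array sampled at 16 kHz.
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        hidden = model(inputs.input_values).last_hidden_state
    return hidden  # frame-level content vectors (one per ~20 ms)
```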
We introduce an emotion control module that includes a bi-LSTM-based emotion predictor followed by an embedding layer to generate emotion-related latent features (emotion feature vectors), and a CNN-based emotion-augment network to enhance the expressivity of FLAME parameters based on emotion features. The emotion augmentation process can be represented as:
\[E(\vec{\psi_{t\pm\Delta t}},\vec{\omega_{t\pm\Delta t}},\gamma_{u,t}):(\vec{ \phi_{t}},\vec{\theta_{t}})\rightarrow(\vec{\phi_{t}}^{\prime},\vec{\theta_{t}}^{\prime}) \tag{2}\]
where \(E(\cdot)\) denotes the emotion control module. With the audio features and the emotion conditions \(\gamma_{u,t}\) customized by the user, it maps \((\vec{\phi_{t}},\vec{\theta_{t}})\) predicted by the Audio2FLAME model to the emotion-enhanced facial parameters (\(\phi_{t}^{\prime}\), \(\theta_{t}^{\prime}\)). We incorporate the emotion-augment network in a residual manner in our pipeline, which allows dedicated optimization on emotion-related expressions while retaining content-related expressions. In this way, the user can explicitly specify the emotion intensities and categories at the frame level and regulate the animation output.
### Emotion Control Module
The core challenge to realizing full control of emotion simulation is to make the model adaptive to not only emotional state changes, but also emotion strength variations to allow straightforward intensity adjustment. Our proposed training and inference pipeline is illustrated in Fig. 2.
**Emotion prediction and control.** In the training phase, to make the network see emotion variations in the input, we leverage an image-based emotion recognition model to obtain frame-level emotion information as emotion priors to facilitate model training. The DAN model from Weng et al. [23] was used in our experiments; it could be substituted by other similar models. We assume emotion features decoded from 2D visual images are more reliable and informative than those from audio, which is beneficial for model learning.
However, the emotion classification probabilities from DAN cannot indicate the magnitude of emotions. Instead, we found that the emotion logits before the final softmax layer of the emotion recognition network, a seven-dimensional vector covering seven emotions (happiness, anger, etc.), agree well with the perceived emotion intensity. Therefore, we use them as emotion priors for model training and couple them with users' emotion control. See the supplementary material for our simplified conceptual proof of the linear relation between emotion logits and emotion intensities. In our experiment, we found that the emotion logits are effective in emotion synthesis and work well with the adjustment of both emotion categories and intensities. The emotion augment network is resilient to the prediction error of the emotion priors from the DAN module, and can produce general emotional expressions from them.
In the inference phase, emotion priors are extracted from audio through a bi-LSTM network, and altered by the customized emotion category and intensity. The bi-LSTM network was trained by maximizing the mutual information between audio-based and video-based emotion priors, using video-based emotion priors as a pseudo ground-truth. The emotion conditions given by the user will be transformed to a one-hot vector with values ranging from 0 to 1 (\(\gamma_{u,t}\)) and added to the decentralized audio-based emotion priors (segment-level mean normalization (\(\overline{\gamma_{a}}\)) ), bringing in the final emotion priors \(\gamma_{t}=\gamma_{u,t}+(\gamma_{a,t}-\overline{\gamma_{a}})\).
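A small sketch of this prior-combination step is given below; tensor shapes and names are illustrative.

```python
import torch

def combine_priors(gamma_audio, emotion_index, intensity):
    # gamma_audio: (T, 7) emotion logits predicted by the bi-LSTM.
    gamma_user = torch.zeros(7)
    gamma_user[emotion_index] = intensity  # user condition in [0, 1]
    # Segment-level mean normalisation of the audio-based priors.
    centred = gamma_audio - gamma_audio.mean(dim=0, keepdim=True)
    return gamma_user + centred  # broadcast over the T frames
```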
**Emotion augment network.** Before being input into the emotion augment network, the emotion priors are transformed into emotion feature vectors through an embedding layer, which is a learnable 2D matrix optimized during training. The 7-dimensional prior vector is transformed into a 128-dimensional emotion feature vector by multiplying it with the learned embedding matrix, and then input into the emotion augment network built with CNN blocks to realize emotion-guided facial expression enhancement. To attain a proper balance between lip synchronization and emotion expressivity, we add the emotion augment network on a residual basis (see Eq. 3). The raw facial parameters extracted from Audio2FLAME are expected to have preferable lip synchronization. The gap between the raw facial parameters and the ground truth (\(\vec{\phi_{gt,t}};\vec{\theta_{gt,t}}\)) is mainly caused by their different emotional expressions, making the augment network \(A(\cdot)\) more effective to learn.
\[A(\vec{\phi_{t}};\vec{\theta_{t}})+(\vec{\phi_{t}};\vec{\theta_{t}})\cong( \vec{\phi_{gt,t}};\vec{\theta_{gt,t}}) \tag{3}\]
### Loss Design
We train the model using the loss function:
\[L=w_{1}L_{vx}+w_{2}L_{lm} \tag{4}\]
where \(L_{vx}\) is vertex position loss, \(L_{lm}\) is mouth shape loss, and \(w\) is the weight.
**Vertex position loss:** A 3D mesh will be transformed from the predicted FLAME parameters, and the L1 difference between the converted vertices and the ground-truth vertices will be calculated as \(L_{vx}\). A vertex mask is applied to the mesh to cover the front face area and exclude ears and eyes.

Figure 1: An overview of our pipeline.

Figure 2: Training/inference pipeline of our proposed emotion control module.
**Mouth shape loss:** To ensure lip synchronization, we select the positions of the top (\(v_{t}\)), bottom (\(v_{b}\)), leftmost (\(v_{l}\)), and rightmost (\(v_{r}\)) vertex and calculate the height of the mouth, \(V=|v_{t}-v_{b}|\), and the width of the mouth, \(H=|v_{l}-v_{r}|\), as the shape description. The L1 distance of the height and width between ground truth and the predicted mouth shape is used as the mouth shape loss.
\[L_{lm}=d_{1}*||H_{p}-H_{g}||_{1}+d_{2}*||V_{p}-V_{g}||_{1}. \tag{5}\]
We set \(d_{1}=1/0.0476\) and \(d_{2}=1/0.017\) in our experiment.
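A PyTorch sketch of Eq. (5) is shown below; the four lip key-point indices are placeholders for the actual FLAME lip landmarks.

```python
import torch

D1, D2 = 1 / 0.0476, 1 / 0.017
TOP, BOTTOM, LEFT, RIGHT = 0, 1, 2, 3  # placeholder lip vertex indices

def mouth_shape_loss(verts_pred, verts_gt):
    def dims(v):
        # v: (..., num_vertices, 3) mesh vertex positions.
        H = torch.norm(v[..., LEFT, :] - v[..., RIGHT, :], dim=-1)  # width
        V = torch.norm(v[..., TOP, :] - v[..., BOTTOM, :], dim=-1)  # height
        return H, V
    Hp, Vp = dims(verts_pred)
    Hg, Vg = dims(verts_gt)
    return D1 * torch.abs(Hp - Hg).mean() + D2 * torch.abs(Vp - Vg).mean()
```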
### Implementation details
We train the model end to end with the Adam optimizer, and the learning rate decays from \(1e^{-4}\) to \(1e^{-5}\) for every ten epochs. A low-pass filtering step is applied to the output animation sequence to restrain high-frequency noise. We also refine the predicted FLAME parameters to ensure the bilateral symmetry of the animated face. Please refer to our supplementary for more implementation details.
## 4 Experiments
**Data setups:** Manually-labelled 3D datasets with rich emotion diversity are limited. Our model is trained using a mixture of 3D and 2D datasets that contain emotion-rich speeches, videos, or 3D model pairs, including VOCASET [5] (3D) and CREMA-D [25] (2D). For the 2D CREMA-D dataset, the EMOCA [24] method was used to reconstruct 3D facial models from 2D images (see Figure 4 for the results) and extract FLAME parameters as the ground truth.
### Emotion animation control
Conditioning on users' control to realize 3D emotional facial animation is one of the key contributions of our method. Figure 3 shows controllable facial animation results on happiness and fear emotions with intensities of 0.5, 0.75, and 1.0, respectively. All the examples are driven by the same audio input. The results demonstrate that driven by the same audio input, our model can generate diverse emotional facial animation effects that reveal users' emotion customization. The generated facial expressions were influenced by both users' alteration and audio input.
Specifically, with the same audio input, the mouth shows a lift in fear rather than a drop in happy emotion. A continuous emotion magnitude change can also lead to satisfying expression variations. For example, in the fear emotion animation, the mouth opens wider, and the mouth corners move downwards with emotion intensity increasing from 0.5 to 1.0. Similarly for happiness, the mouth corners and cheeks raise harder for the intensity of 1.0 while dropping to neutrally closed for the intensity of 0.5. With our emotion-controllable approach, users can edit the animation in any keyframe by specifying desired intensity values without preparing any other dependencies, which is more straightforward and efficient than traditional methods like [20].

Figure 3: Results of emotion animation from the same speech with various emotion classes and customized intensities. The texture is derived from the ground truth video for visualization. Left: animation frames of happiness; Right: animation frames of fear. From top to bottom, the intensities are 1.0, 0.75, and 0.5.

Figure 4: Results of EMOCA [24] face geometry reconstruction.
### Comparisons
We compare our method with state-of-the-art speech-driven facial animation methods to show how it performs in both emotion synthesis and lip synchronization. We chose the works by Pham et al. [21] and Chun et al. [20] for the emotion synthesis comparison, and VOCA [5] and FaceFormer [2] for the lip synchronization comparison (denoted in the following as [5], [10], [11] and [12], respectively).
**Emotion synthesis.** Figure 5 shows an example of facial animation with angry emotion. We observed that [11] and [12] show accurate lip movement but neutral expressions, while our method can add an extra layer of emotion variations without sacrificing lip synchronization and comprehensibility of the spoken content. In our results, eyebrows drop, cheek contraction appears, and the degree of mouth opening/closing varies according to mood peaks and troughs. [5] can bring in some emotional expressions, but the lip movement is not as accurate as ours, and the dynamics in the upper face are quite limited (nearly still). [10] provided emotional expressions with precise lip synchronization, but since the emotion template was given manually and applied uniformly to all frames, the generated animation lacks emotion dynamics alongside the speaking process, especially in the upper face region (see the beginning and end frames). In contrast, ours has more emotional swings along the animation that look as natural as real human expressions. In addition, note that our method is capable of transferring emotions from one identity to another, benefiting from the FLAME method's disentanglement of shape and expression, which [10] and [5] cannot do.
**Lip synchronization.** It is essential for speech-driven animation to have accurate lip movements in conveying content information. For a fair comparison in terms of lip synchronization, we used the FLAME parameters from Audio2FLAME to compare with other prior works quantitatively. The testing data is the same as that used by FaceFormer [2], which includes two subjects' data from VOCASET, each containing 20 sentence samples.
To avoid introducing additional alignment errors among the different meshes output by different methods, we perform a linear 3D transformation before metric calculation to make them comparable. After alignment, we uniformly selected 24 key-point vertices around the lips and calculated their distance to the ground truth as the measurement of lip synchronization. The lip movement error of a key-point vertex \(k\) in frame \(j\) of sequence \(i\) is calculated by:
\[D_{i,j,k}\!=\!\sqrt{(x_{k}\!-\!\hat{x}_{k})^{2}\!+\!(y_{k}\!-\!\hat{y}_{k})^{2}\!+\!(z_{k}\!-\!\hat{z}_{k})^{2}} \tag{6}\]
The overall mean and maximal distance are calculated as two metrics.
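For concreteness, the evaluation can be sketched as follows; this is a minimal Python sketch in which the single least-squares affine fit used for alignment, the array shapes, and the lip-vertex indices are our assumptions rather than the paper's exact protocol:

```python
import numpy as np

def lip_sync_error(pred, gt, lip_idx):
    """Mean/max lip vertex distance per Eq. (6).

    pred, gt: (frames, vertices, 3) mesh vertex positions in mm.
    lip_idx : indices of the 24 key-point vertices around the lips.
    """
    # Linear 3D alignment: fit one affine map from predicted to ground-truth
    # vertices over the whole sequence (least squares).
    X = pred.reshape(-1, 3)
    Y = gt.reshape(-1, 3)
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
    A, *_ = np.linalg.lstsq(Xh, Y, rcond=None)  # 4x3 affine map
    aligned = (Xh @ A).reshape(pred.shape)

    # Per-frame Euclidean distance of each lip key point to the ground truth
    d = np.linalg.norm(aligned[:, lip_idx] - gt[:, lip_idx], axis=-1)
    return d.mean(), d.max()
```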
Table 1 shows that our method achieved the best results compared with all the other methods, demonstrating that our proposed pipeline is capable of generating decent lip synchronization for emotion enhancement. The RandInit model created from randomized initialization without training steps has the lowest accuracy, as expected. It serves as a reference to show the improvement brought by different training methods.
### Ablation Study
We ablate the loss items, network components, and training datasets to see their contributions. Table 2 summarizes the effects of our proposed model learned without the vertex position loss, without the mouth shape loss, and without the style vector extracted from the transformer encoder. We observed that both the vertex position loss and the mouth shape loss contribute to lip movement accuracy; removing either of them causes an increase in the error metrics. The style vector also helps the network capture long-context features from the audio input and improves lip synchronization.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & RandInit & [11] & [12] & Ours \\ \hline Mean\(\downarrow\) & 2.31 & 1.94 & 1.97 & 1.92 \\ Max\(\downarrow\) & 3.87 & 3.41 & 3.33 & 3.24 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons on lip movement error (mm).
Figure 5: Qualitative state-of-the-art comparisons on angry animation. The sentence "Dogs are sitting by the door" does not appear in our training dataset. Different animation frames were selected from the sentence's beginning (top), intermediate (middle), and end (bottom) phases. **See the submitted video for more visualizations.**
We also tried different training datasets to investigate the benefits of proper training data. Using only the 3D VOCASET dataset reduces lip movement accuracy. We also investigated the LRS2 [26] dataset, selecting 3000 portrait video clips from BBC television that contain various subjects, background noise, and environments for training. Although the result shows no improvement in lip movement accuracy on the experimental dataset, we observe that it improves our model's generalization on in-the-wild testing.
We also compared the network with and without the emotion control module qualitatively. Figure 5 shows the results based on the FLAME parameters from Audio2FLAME alone (denoted as Ours (w/o ECM)) and with the emotion control module (denoted as Ours). We can see that the emotion control module improves the expressivity of the animation and realizes more drastic emotional expressions, compared with the neutral animation generated without it.
## 5 Conclusion and Future Work
We presented a novel deep learning-based approach to generate controllable speech-driven emotional facial animation. An emotion controller module is proposed to enrich emotion expressivity and enable animator customization of emotion intensities and classes. Image-based emotion recognition was used to generate emotion priors to facilitate explicit emotion learning. Future work could consider pushing the boundary of extreme emotion generation with accurate lip synchronization and improving animation generation by upgrading the temporal performance of video-based emotion recognition.
|
2308.08990 | Semantic Information for Object Detection | In this paper, we demonstrate that the concept of Semantic Consistency and
the ensuing method of Knowledge-Aware Re-Optimization can be adapted for the
problem of object detection in intricate traffic scenes. Furthermore, we
introduce a novel method for extracting a knowledge graph from a dataset of
images provided with instance-level annotations, and integrate this new
knowledge graph with the existing semantic consistency model. Combining both
this novel hybrid knowledge graph and the preexisting methods of frequency
analysis and external knowledge graph as sources for semantic information, we
investigate the effectiveness of knowledge-aware re-optimization on the
Faster-RCNN and DETR object detection models. We find that limited but
consistent improvements in precision and or recall can be achieved using this
method for all combinations of model and method studied. | Jean-Francois Nies | 2023-08-17T13:53:29Z | http://arxiv.org/abs/2308.08990v1 | # Semantic Information for Object Detection
###### Abstract
In this paper, we demonstrate that the concept of _Semantic Consistency_ and the ensuing method of _Knowledge-Aware Re-Optimization_ can be adapted for the problem of object detection in intricate traffic scenes. Furthermore, we introduce a novel method for extracting a knowledge graph from a dataset of images provided with instance-level annotations, and integrate this new knowledge graph with the existing semantic consistency model.
Combining both this novel 'hybrid' knowledge graph and the pre-existing methods of frequency analysis and external knowledge graph as sources for semantic information, we investigate the effectiveness of knowledge-aware re-optimization on the Faster-RCNN and DETR object detection models. We find that limited but consistent improvements in precision and/or recall can be achieved using this method for all combinations of model and method studied.
## 1 Introduction
The problem of _Object Detection_[20] may be expressed informally as the question _"What objects are where?"_. More formally, it incorporates both the definition of bounding boxes encompassing objects of interest as closely as possible, and the ability to assign correct labels to each such box. In the last years, research in this domain, chiefly using convolutional neural networks (CNN), has already led to object recognition systems capable of surpassing human performance for selected tasks [1]. Such systems typically employ deep CNNs and are widely deployed in a range of practical applications in areas such as autonomous vehicles and facial recognition systems.
However, unlike a human observer, a model such as a convolutional neural network has no ability to reason about the scene it is observing, nor to dismiss or correct erroneous identifications based on context [1]. Other objects present, the time and location, and the position of objects in relation to each other may all provide vital cues to an onlooker, but the integration of this context into a machine learning model is a complex problem: contextual information must be made available in a machine-readable form, frequently resulting in large amounts of additional data, and must be processed alongside conventional inputs. Nonetheless, this approach has received a great deal of attention. Aside from immediate gains in performance, such _Informed Machine Learning_ may also provide 'soft' benefits in terms of explainability and accountability. [16]
While it is possible to design models which integrate external knowledge from the ground up, there may also be situations in which the necessary training data is not available, or where a proprietary model must be treated as a 'black box'. In either case, the ability to perform knowledge-based re-optimization of the outputs of arbitrary models may be desirable.
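To make this concrete, the sketch below re-scores the class probabilities of a black-box detector using a semantic consistency matrix. It is a deliberately simplified stand-in rather than the exact re-optimization algorithm of Fang et al.; the blending rule, the `alpha` parameter, and the fixed-point iteration are our assumptions for illustration:

```python
import numpy as np

def reoptimize_scores(probs, S, alpha=0.5, iters=5):
    """Knowledge-aware re-scoring of detections (illustrative only).

    probs: (n_boxes, n_classes) class probabilities from a detector.
    S    : (n_classes, n_classes) consistency matrix; S[i, j] is large
           when classes i and j plausibly co-occur in one image.
    alpha: weight of contextual evidence relative to detector evidence.
    """
    p = probs.copy()
    for _ in range(iters):
        q = p.max(axis=0)                     # image-level evidence per class
        context = S @ q                       # contextual support per class
        context /= context.sum() + 1e-12
        p = (1 - alpha) * probs + alpha * p * context[None, :]
        p /= p.sum(axis=1, keepdims=True)     # renormalize per box
    return p
```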
In this report, we investigate one variant of this approach, using Semantic Consistency Matrices. Basing our work on an implementation by Lemens et al. [1] of the algorithm described by Fang et al. [1], we evaluate the effectiveness of this method on the Cityscapes dataset, and introduce an additional method for generating semantic consistency matrices which acts as a compromise between the already existing methods. Furthermore, we study the effectiveness of the different knowledge-aware re-optimization methods on the performance of the more recent DEtection TRansformer (DETR) architecture.
2308.06765 | Strongly primeness of skew Hurwitz polynomial rings | For a ring R and an endomorphism {\alpha} of R, we characterize the left and
right strongly primeness of skew Hurwitz polynomial ring (hR, {\alpha}). | Ali Shahidikia | 2023-08-13T13:11:38Z | http://arxiv.org/abs/2308.06765v1 | # Strongly primeness of skew Hurwitz polynomial rings
###### Abstract.
For a ring \(R\) and an endomorphism \(\alpha\) of \(R\), we characterize the left and right strongly primeness of skew Hurwitz polynomial ring \((hR,\alpha)\).
Key words and phrases: strongly prime ring, skew Hurwitz polynomial ring, insulator. 2020 Mathematics Subject Classification: 16D25, 16D80.
The study of formal power series rings has garnered noteworthy attention and has been found to be essential in a multitude of fields, particularly in differential algebra. In [6], Keigher introduced a variant of the formal power series ring and studied its categorical properties in detail. This ring was later named the Hurwitz series ring. There are many interesting applications of the Hurwitz series ring in differential algebra, as Keigher showed in [7, 8].
Throughout this article, \(R\) denotes an associative ring with unity, and \(\alpha:R\to R\) is an endomorphism. We denote by \((H(R),\alpha)\), or simply \((HR,\alpha)\), the skew Hurwitz series ring over \(R\), whose elements are the functions \(f:\mathbb{N}\to R\), where \(\mathbb{N}\) is the set of all natural numbers; the addition is defined as usual and the multiplication is given by
\[(fg)(n)=\sum_{k=0}^{n}\binom{n}{k}f(k)\alpha^{k}(g(n-k))\text{ for all }n\in\mathbb{N},\]
where \(\binom{n}{k}\) is the binomial coefficient.
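For illustration, the multiplication can be carried out directly on truncated coefficient sequences; in the following Python sketch (our own, purely expository) an arbitrary callable stands in for the endomorphism \(\alpha\):

```python
from math import comb

def skew_hurwitz_product(f, g, alpha, n_max):
    """(fg)(n) = sum_k C(n,k) f(k) alpha^k(g(n-k)), truncated at n_max.
    f, g are coefficient lists [h(0), h(1), ...]; alpha is an endomorphism."""
    def alpha_iter(x, k):          # apply alpha k times
        for _ in range(k):
            x = alpha(x)
        return x
    return [sum(comb(n, k) * f[k] * alpha_iter(g[n - k], k)
                for k in range(n + 1))
            for n in range(n_max + 1)]

# Sanity check over R = Z with alpha = id: h_1 = (1, 0, 0, ...) is the unity.
h1 = [1, 0, 0, 0]
f = [2, 5, -1, 7]
assert skew_hurwitz_product(h1, f, lambda x: x, 3) == f
```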
Define the mappings \(h_{n}:\mathbb{N}\to R\), \(n\geq 1\) via \(h_{n}(n-1)=1\) and \(h_{n}(m)=0\) for each \(m\in\mathbb{N}\setminus\{n-1\}\) and \(h_{r}^{\prime}:\mathbb{N}\to R\), \(r\in R\) via \(h_{r}^{\prime}(0)=r\) and \(h_{r}^{\prime}(n)=0\) for each \(n\in\mathbb{N}\setminus\{0\}\). It can be easily shown that \(h_{1}\) is the unity of \((HR,\alpha)\).
For \(f\in(HR,\alpha)\), the support of \(f\), denoted by \(supp(f)\), is the set \(\{i\in\mathbb{N}\mid f(i)\neq 0\}\). The minimal element in \(supp(f)\) is denoted by \(\Pi(f)\), and \(\Delta(f)\) denotes the greatest element in \(supp(f)\) if it exists. Define \(R^{\prime}=\{h_{r}^{\prime}\mid r\in R\}\). \(R^{\prime}\) is a subring of \((HR,\alpha)\) which is isomorphic to \(R\). For any nonempty subset \(A\) of \(R\) define \(A^{\prime}=\{h_{r}^{\prime}\mid r\in A\}.\) If \(A\) is an ideal of \(R\), then \(A^{\prime}\) is an ideal of \(R^{\prime}\).
The ring \((hR,\alpha)\) of skew Hurwitz polynomials over a ring \(R\) is the subring of \((HR,\alpha)\) that consists of elements \(f\in(HR,\alpha)\) with \(\Delta(f)<\infty\) (see [3]).
The notion of strongly prime rings was introduced by Handelman and Lawrence in [5]. Since then strongly prime rings have been extensively studied. F. Cedo in [4] presented an example of a ring \(R\) such that \(R\) is not strongly prime but the
power series ring \(R[[x]]\) is strongly prime on both sides. It is well known that the Hurwitz polynomial ring \(hR\) is right strongly prime if and only if the coefficient ring \(R\) is right strongly prime. The same argument can be applied to show that an analogous statement holds for skew Hurwitz polynomial rings of automorphism type. The aim of this note is to characterize the left and right strongly primeness of skew Hurwitz polynomial rings of endomorphism type. It is not a surprise that the obtained characterization is not left-right symmetric. This, in turn, enables the construction of easy examples of rings which are strongly prime only on one side. The examples known earlier were much more complicated (see [5, 9]). Recall from [5] that a subset \(F\subseteq R\) is a left (right) _insulator_ if \(F\) is finite and \(\ell_{R}(F)=0\) (\(r_{R}(F)=0\)).
We need the following lemma in the sequel.
**Lemma 1**.: _[_5_]_ _For the ring \(R\) the following conditions are equivalent:_
1. \(R\) _is left (right) strongly prime._
2. _Every nonzero ideal of_ \(R\) _contains left (right) insulator._
3. _Every nonzero left (right) ideal of_ \(R\) _contains left (right) insulator._
4. _Every nonzero principal left (right) ideal of_ \(R\) _contains left (right) insulator._
Notice that if \((hR,\alpha)\) is left (right) strongly prime, then \(\alpha\) has to be a monomorphism, as otherwise \((hR,\alpha)\) would not be prime. A left ideal \(I\) of \(R\) is called a left \(\alpha\)_-ideal_ if \(\alpha(I)\subseteq I\). We say that \(R\) is left \(\alpha\)_-strongly prime_ if any nonzero left \(\alpha\)-ideal \(I\) of \(R\) contains a finite subset \(F\) such that \(\ell_{R}(\alpha^{k}(F))=0\) for any \(k\geq 0\).
_Remark 2_.:
1. When \(\alpha=id_{R}\), then \(R\) is left \(\alpha\)-strongly prime if and only if it is left strongly prime.
2. When \(\alpha\) is an automorphism, then \(k\) in the above definition can be replace by \(0\).
3. If \(R\) is left \(\alpha\)-strongly prime, then \(\alpha\) has to be a monomorphism, since \(ker(\alpha)\) is an \(\alpha\)-ideal of \(R\).
**Theorem 3**.: _The following conditions are equivalent:_
1. \((hR,\alpha)\) _is a left strongly prime ring._
2. \(R\) _is a left_ \(\alpha\)_-strongly prime ring._
Proof.: (1)\(\Rightarrow\)(2) Let \(I\) be a nonzero left \(\alpha\)-ideal of \(R\) and let \((hI,\alpha)\) denote the set of all Hurwitz polynomials from \((hR,\alpha)\) with all coefficients from \(I\). Then \((hI,\alpha)\) is a nonzero left ideal of \((hR,\alpha)\). Thus, by assumption, it contains a left insulator \(\widehat{F}\). Then \(h_{k+1}\widehat{F}\) is also a left insulator for any \(k\geq 0\). Let \(F\) denote the set of all coefficients of polynomials from \(\widehat{F}\). Then, for any \(k\geq 0\), \(\alpha^{k}(F)\) is the set of all coefficients of polynomials from \(h_{k+1}\widehat{F}\). Therefore \(\ell_{R}(\alpha^{k}(F))=0\) for any \(k\geq 0\). This shows that \(R\) is left \(\alpha\)-strongly prime.
(2)\(\Rightarrow\)(1) Let \(J\) be a nonzero left ideal of \((hR,\alpha)\) and \(I\) denote the set of leading coefficients of all polynomials from \(J\). Then \(I\) is an \(\alpha\)-invariant left ideal of \(R\). Hence, by assumption, we can find a finite set \(F\subseteq I\) such that \(\ell_{R}(\alpha^{k}(F))=0\)
for any \(k\geq 0\). Let \(\widehat{F}=\{f_{a}\mid a\in F\}\), where \(f_{a}\in J\) denotes a polynomial having the leading coefficient \(a\), for \(a\in F\). Then it is standard to check that \(\widehat{F}\) is a left insulator contained in \(J\), thus \((hR,\alpha)\) is left strongly prime.
The following theorem describes right strongly primeness of \((hR,\alpha)\).
**Theorem 4**.: _For \((hR,\alpha)\) the following conditions are equivalent:_
1. \((hR,\alpha)\) _is a right strongly prime ring._
2. 1. \(\alpha\) _is a monomorphism;_ 2. _for any_ \(0\neq a\in R\) _and_ \(m\geq 0\) _there exist_ \(k\geq 0\) _and a finite set_ \(F\subseteq a\alpha^{m}(R)+\alpha(a)\alpha^{m+1}(R)+\cdots+\alpha^{k}(a)\alpha^ {m+k}(R)\) _such that_ \(r_{R}(F)\cap\alpha^{n}(R)=0\) _for some_ \(n\geq 0\)_._
Proof.: (1)\(\Rightarrow\)(2) Clearly \(\alpha\) is a monomorphism. Let \(0\neq a\in R\) and \(m\geq 0\). Then \(J=a\alpha^{m}(R)h_{m+1}(hR,\alpha)\) is a nonzero right ideal of \((hR,\alpha)\). Thus, by assumption, \(J\) contains a right insulator \(\widehat{F}=\{a_{i,m+k}h_{m+k+1}+\cdots+a_{i,m}h_{m+1}\in J\mid 1\leq i\leq s\}\). Define \(F=\{a_{i,m+k},\alpha(a_{i,m+k-1}),\ldots,\alpha^{k}(a_{i,m})\mid 1\leq i\leq s\}\) and set \(n=m+k\). Then \(F\subseteq a\alpha^{m}(R)+\alpha(a)\alpha^{m+1}(R)+\cdots+\alpha^{k}(a)\alpha^{n}(R)\). Let \(r\in R\) be such that \(F\alpha^{n}(r)=0\). Then, since \(\alpha\) is a monomorphism, \(a_{i,n}\alpha^{n}(r)=0,a_{i,n-1}\alpha^{n-1}(r)=0,\ldots,a_{i,m}\alpha^{m}(r)=0\) for any \(1\leq i\leq s\). This means that \(r\in r_{(hR,\alpha)}(\widehat{F})\), so \(r=0\) and \(\alpha^{n}(r)=0\). Thus, \(r_{R}(F)\cap\alpha^{n}(R)=0\) follows.
(2)\(\Rightarrow\)(1) Let \(J\) be a nonzero ideal of \((hR,\alpha)\) and \(f=h_{m+1}a+\cdots+h_{1}a_{0}\in J\), where \(a\neq 0\). By assumption, there exist \(k\geq 0\) and a finite subset \(F\subseteq a\alpha^{m}(R)+\cdots+\alpha^{k}(a)\alpha^{m+k}(R)\) such that \(r_{R}(F)\cap\alpha^{n}(R)=0\) for some \(n\). Without loss of generality, we may assume that \(F\subseteq a\alpha^{m}(R)\cup\cdots\cup\alpha^{k}(a)\alpha^{m+k}(R)\) and \(0\not\in F\). Let \(u=max\{n,m+k\}\). Since \(J\) is a two-sided ideal, for any \(b\in F\) we can pick a polynomial \(f_{b}\in J\) of degree \(u\) having \(b\) as the leading coefficient. Let \(\widehat{F}=\{f_{b}\mid b\in F\}\). Since \(u\geq n\), \(r_{R}(F)\cap\alpha^{n}(R)=0\). This yields easily that \(r_{(hR,\alpha)}(\widehat{F})=0\) and shows that \(\widehat{F}\) is a right insulator of \((hR,\alpha)\). Thus, \((hR,\alpha)\) is right strongly prime.
Notice that when \(\alpha\) is an automorphism of \(R\) then the property \((b)\) from the above theorem boils down to:
"every nonzero right \(\alpha\)-ideal of \(R\) contains right insulator". Thus characterizations obtained in Theorems 3, 4 are symmetric in this case. The above mentioned theorems enable to construct easy examples of rings which are strongly prime only on one side.
**Example 5**.: Let \(K\) be a field and \(R=K\langle x_{0},x_{1},\ldots\mid x_{k}x_{\ell}=0\) for all \(k\geq\ell\rangle\). Let \(\alpha\) denote the \(K\)-endomorphism of \(R\) given by \(\alpha(x_{k})=x_{k+1}\) for all \(k\geq 0\). Then:
1. \((hR,\alpha)\) is not left strongly prime.
2. \((hR,\alpha)\) is right strongly prime.
Proof.: (1) It is easy to see that for every finite subset \(F\) of \(R\), \(\ell_{R}(F)\neq 0\). Thus, by Theorem 3, \((hR,\alpha)\) is not left strongly prime.
(2) Let \(0\neq a\in R\) and \(n=1+max\{\ell\mid x_{\ell}\) appears in some monomial from \(a\}\). One can easily check that \(r_{R}(a)\cap\alpha^{n}(R)=0\). Then Theorem 4 yields that \((hR,\alpha)\) is right strongly prime. |
2305.06941 | Dendritic Computation through Exploiting Resistive Memory as both Delays
and Weights | Biological neurons can detect complex spatio-temporal features in spiking
patterns via their synapses spread across their dendritic branches. This
is achieved by modulating the efficacy of the individual synapses, and by
exploiting the temporal delays of their response to input spikes, depending on
their position on the dendrite. Inspired by this mechanism, we propose a
neuromorphic hardware architecture equipped with multiscale dendrites, each of
which has synapses with tunable weight and delay elements. Weights and delays
are both implemented using Resistive Random Access Memory (RRAM). We exploit
the variability in the high resistance state of RRAM to implement a
distribution of delays in the millisecond range for enabling spatio-temporal
detection of sensory signals. We demonstrate the validity of the approach
followed with a RRAM-aware simulation of a heartbeat anomaly detection task. In
particular we show that, by incorporating delays directly into the network, the
network's power and memory footprint can be reduced by up to 100x compared to
equivalent state-of-the-art spiking recurrent networks with no delays. | Melika Payvand, Simone D'Agostino, Filippo Moro, Yigit Demirag, Giacomo Indiveri, Elisa Vianello | 2023-05-11T16:20:50Z | http://arxiv.org/abs/2305.06941v2 | # Dendritic Computation through Exploiting Resistive Memory as both Delays and Weights
###### Abstract.
Biological neurons can detect complex spatio-temporal features in spiking patterns via their synapses spread across their dendritic branches. This is achieved by modulating the efficacy of the individual synapses, and by exploiting the temporal delays of their response to input spikes, depending on their position on the dendrite. Inspired by this mechanism, we propose a neuromorphic hardware architecture equipped with multiscale dendrites, each of which has synapses with tunable weight and delay elements. Weights and delays are both implemented using Resistive Random Access Memory (RRAM). We exploit the variability in the high resistance state of RRAM to implement a distribution of delays in the millisecond range for enabling spatio-temporal detection of sensory signals. We demonstrate the validity of the approach followed with a RRAM-aware simulation of a heartbeat anomaly detection task. In particular we show that, by incorporating delays directly into the network, the network's power and memory footprint can be reduced by up to 100x compared to equivalent state-of-the-art spiking recurrent networks with no delays.
Dendritic computation, RRAM delays, Coincidence detection, temporal computation
dendrites, each of which has synapses with tunable weight and delay elements, implemented using RRAM (Fig. 2). The delay is implemented using an RRAM coupled with a capacitor (the RRAM-C element), while the weight is represented by one RRAM device. A dendritic circuit is then constituted by a RRAM-C element, activated by input spikes applied to an access transistor, and by an output section featuring the weight RRAM, outputting a weighted current pulse.
Dendritic circuits can be arranged into arrays, as shown in Fig. 2. Each row constitutes a dendritic branch, with synapses that have both delay and weight elements. The synaptic delays of each dendritic branch follow a distribution whose mean differs from that of the other branches. The green columns receive the spatio-temporal inputs, and each column receives the input from a different channel. The input spikes from these channels go through delays, get weighted, and are then filtered with a different time constant (\(\tau_{i}\)). The delayed, weighted and integrated current contributions are then summed at the neuron's soma on the right end, which is modeled as a leaky integrate-and-fire (LIF) neuron. To learn to classify spatio-temporal signals in this architecture, each dendritic branch needs to detect signal features at its integration time scale, through coincidence detection (CD). In other words, the delay and weight parameters should be configured to perform CD in the presence of an input feature. This makes relevant spikes available to the output neuron with temporal coincidence and leads the output neuron to produce spikes in turn.
To enable real-time processing, the delay elements should be in the range of the time constant of the sensed real-world signals, e.g., on the order of 10s-100s of milliseconds. Thus, to implement such delays on-chip while reducing the capacitor size, we exploit the HRS of RRAMs. Since the conductive filament responsible for resistive switching is very weak in the HRS, controlling the precise value of the resistance of RRAMs in the HRS is difficult. This can be seen in the HRS measurements performed on HfO\({}_{2}\)-based RRAM [Leti paper with more detail on technology] shown in Fig. 3, with large variability in the HRS following a log-normal distribution. The mean of this distribution is a function of the reset voltage with which the device is switched to the HRS [APL Thomas]. Due to this variability, resetting the delay devices using the same voltage results in samples from the corresponding log-normal distribution. Each dendritic branch then features a variable delay with a certain mean, proportional to the mean HRS of the RRAM multiplied by the capacitance C. The network objective is then to learn the correct weights corresponding to the delay samples from this log-normal distribution, such that the neuron performs coincidence detection, reacting to the temporal features of the signal.
## 3. RRAM-Aware Training
The HRS of delay RRAMs cannot be precisely controlled. Therefore, prior to training, we initialized the resistance values of the delay RRAMs by sampling from the HRS distribution and kept them fixed. This substantial variability enables the dendritic architecture to take advantage of a range of delay values.
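As a sketch of how this initialization can be modeled in software, one may draw the HRS of each synapse from the log-normal distribution set by the branch's reset voltage and convert it to a delay via the simple \(\tau=RC\) scaling; the \(\sigma\) value and the strict proportionality are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_branch_delays(median_hrs, sigma, n_syn=64, C=100e-15):
    """Per-synapse delays for one dendritic branch.

    median_hrs: median HRS resistance targeted by the reset voltage (Ohm)
    sigma     : log-normal shape parameter (device-to-device variability)
    C         : capacitance of the RRAM-C element (F)
    """
    hrs = rng.lognormal(mean=np.log(median_hrs), sigma=sigma, size=n_syn)
    return hrs * C   # delays in seconds

# e.g. a branch reset towards ~500 GOhm with C = 100 fF gives delays of tens of ms
delays = sample_branch_delays(median_hrs=5e11, sigma=0.5)
```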
The dendritic architecture poses some constraints in the offline training procedure, which have to be accounted for in order to extract its full potential. In the current configuration of the architecture, weight-RRAMs only express positive weights with limited precision (approximately 3 bits (Becker et al., 2015)), contrary to the 32 bit floating point precision available on CPU/GPU. Also, the resistance is limited to a certain interval that delimits the Low-Resistive-State (LRS), which in our case spans from 7k\(\Omega\) to 50k\(\Omega\). The HRS can also be utilized in the weight-RRAMs when the algorithm selects low weights, although the LRS is preferable for weight-RRAMs as it is more controllable. Moreover, the weight value in such devices is not deterministic (Becker et al., 2015), i.e. the resistance value in LRS after the programming operation can be modeled as sampling from a Gaussian distribution whose mean is determined by the programming operation, and its standard deviation is due to the device non-idealities and cannot be controlled.
Due to the variability of RRAMs, offline training of the dendritic architecture has to be tailored to the RRAM characteristics. In this work, a simple weight-clipping is used after the weight update to ensure all weights remain positive and within the permitted range of resistance.
The limited precision is accounted for using a mixed-precision approach (Becker et al., 2015; Becker et al., 2015). Gradients calculated with the backpropagation algorithm are accumulated on high-precision variables on an external computer. At the end of each epoch, this variable is checked and - if the change passes the quantization step - the related RRAM device is reprogrammed. In such cases, the weight is updated by sampling its value from the new corresponding Gaussian distribution.
More precisely, the set of resistive levels assumed by the RRAM is defined by \(\mu_{n}\), where \(n\) goes from 1 to 8 (3 bits), each representing a resistance in the LRS. The high-precision variable (32-bit), also called the hidden weight \(W_{ij}\), triggers a reprogramming operation when it approaches a new resistive level \(\mu_{j}\), starting from a different value.
\[\mu:|W_{ij}-\mu|=\min_{k=0}^{n}\left\{|\mu_{k}-W_{ij}|\right\} \tag{1}\]
where \(n\) is the number of available resistive levels on the RRAM devices.
Figure 1. Biological neurons include synapses distributed spatially in their dendritic arbor, which gives rise to delayed inputs. The coincidence of the delayed spikes is detected as the input features in each dendrite. \(d_{1}\), \(d_{2}\) and \(d_{3}\) show the average delay of each dendritic compartment depending on its spatial arrangement with respect to the neuron's cell body (soma).
The RRAM-aware training procedure is summarized below:
* \(n_{pre}\) epochs of pre-training on the 32-bit weights only, obtaining the pre-trained parameters \(W_{pre}\);
* converting the hidden weights \(W_{pre}\) to RRAM values after updating the scaling factor \(s_{w}\) relating the resistance of the RRAM to the hidden weight;
* \(n_{training}\) epochs on the 3-bit precision RRAM weights, i.e. the values sampled from the LRS levels, scaled by \(s_{w}\).
with \(s_{w}\) obtained as \(\max\,LRS/\max\,W_{trained}\).
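A minimal sketch of one such mixed-precision step is given below. For brevity it omits the scaling factor \(s_{w}\), treats the hidden weights directly on the resistance scale, and re-samples the programming noise at every call, whereas a practical implementation would track the previously programmed level and reprogram only when the nearest level changes:

```python
import numpy as np

def rram_aware_update(W_hidden, grad, levels, level_std, lr=1e-2, rng=None):
    """One mixed-precision step with nearest-level quantization, Eq. (1).

    W_hidden : float32 hidden weights (any shape)
    levels   : sorted 1-D array of the n available resistive levels mu_k
    level_std: programming-noise std of each level (same shape as levels)
    Returns updated hidden weights and the quantized, noisy RRAM weights.
    """
    rng = rng or np.random.default_rng()
    W_hidden = W_hidden - lr * grad                      # high-precision update
    W_hidden = np.clip(W_hidden, levels[0], levels[-1])  # stay within LRS range

    idx = np.abs(W_hidden[..., None] - levels).argmin(axis=-1)  # Eq. (1)
    W_rram = rng.normal(levels[idx], level_std[idx])     # programming = noisy sample
    return W_hidden, W_rram
```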
Importantly, the resistive levels \(\mu_{i}\) and the standard deviation values related to the RRAM resistance are obtained from a 4kbit RRAM array operated with the smart programming procedure, as in (Brandt et al., 2017).
## 4. Results
To showcase the computational power of dendrites, we benchmark the architecture of Fig. 2 on a real-time sensory processing task, namely heartbeat anomaly detection, using electrocardiogram (ECG) data. We choose the MIT-BIH dataset (Miller et al., 2017) and focus on the data of patient 208, which presents a balanced amount of normal and abnormal heartbeats. The raw data, consisting of the voltage traces recorded from different electrodes, is delta-modulated to obtain spike trains that are fed to the dendritic architecture (Kirk et al., 2017) (Fig. 4). The spiking activity of the output neurons signals the presence of arrhythmia in the heartbeat, performing binary classification. Importantly, the accuracy in solving this task depends on how well the temporal features in a heartbeat signal are interpreted to identify anomalies. In our particular model, the dendritic architecture, this means that the delay values have to match the temporal features of the input signal. The average heartbeat duration is on the order of 700 ms, so the relevant temporal features should be a fraction of that period. These temporal features are detected through the delays. To find the average value of the delays required to detect the ECG features, we sweep the mean of the delay RRAM distribution, while fixing the capacitance to 100 \(fF\). Figure 5 shows the accuracy as a function of the mean value of the log-normal distribution related to the delay RRAM, with the equivalent delay shown on top of the figure. As can be seen, the task is solved (i.e. accuracy \(>\) 95%) with a mean delay of 40 \(ms\). This delay corresponds to an HRS of 500 G\(\Omega\), which is difficult to achieve with HfO\({}_{2}\)-based devices. However, their pristine state can be used to achieve this resistance. Alternatively, Ferroelectric Tunnel Junction devices are promising candidates for such large resistance levels (Kirk et al., 2017).
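A simple software model of the delta-modulation front end might look as follows; the threshold value and the UP/DOWN two-channel convention are assumptions made for illustration:

```python
import numpy as np

def delta_modulate(v, threshold):
    """Convert a voltage trace into UP/DOWN spike trains.

    Emits an UP (DOWN) spike whenever the signal rises (falls) by more than
    `threshold` relative to the reference level of the last emitted spike.
    """
    up = np.zeros(len(v), dtype=bool)
    down = np.zeros(len(v), dtype=bool)
    ref = v[0]
    for i, x in enumerate(v):
        while x - ref > threshold:
            up[i] = True
            ref += threshold
        while ref - x > threshold:
            down[i] = True
            ref -= threshold
    return up, down
```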
Figure 3. Variability of RRAMs in their HRS follows a wide log-normal distribution. The shift in the distribution is a result of different reset voltages.
Figure 2. Dendritic architecture using complex synapses containing RRAM delays and RRAM weights. Each channel (shown in green) is applied to a parallel set of synapses (in dashed blue box) in each row, which constitutes a distribution of delays, of which a sample is taken through learning the weight values. Each branch/row can integrate the delayed and weighted input channels with a different time constant \(\tau_{i}\).
Using the mean delay of 40 ms, a single output neuron, with two dendritic branches of 64 synapses each, can achieve up to 95% accuracy on the real-time ECG anomaly detection task. This is compared to more than 100 units required in Spiking Recurrent Neural Networks (SRNNs) from previous works, giving rise to a 100\(\times\) reduction in power consumption for the aforementioned task (Kumar et al., 2017; Wang et al., 2020). Table 1 compares the estimated power consumption and memory footprint of the dendritic architecture against other state-of-the-art methods.
## 5. Conclusions
We have introduced an RRAM-aware dendritic architecture, which is empowered by delays and, as a result, can introduce temporal richness to a feedforward network that can classify a sensory processing task with up to 100x less power consumption and memory footprint compared to recurrent networks. The power benefits are thanks to the delays, which keep the temporal information of the data in a passive fashion, without the need for active storage of data through recurrence.
###### Acknowledgements.
This work is supported by H2020 MeM-Scales project (871371), SNSF Starting Grant Project UNITE (TMSGI2-211461), and European Research Council consolidator grant DIVERSE (101043854).
|
2301.05767 | Theoretical modeling of capillary surfer interactions on a vibrating
fluid bath | We present and analyze a theoretical model for the dynamics and interactions
of "capillary surfers," which are millimetric objects that self-propel while
floating at the interface of a vibrating fluid bath. In our companion paper
[1], we reported the results of an experimental investigation of the surfer
system, which showed that surfer pairs may lock into one of seven bound states,
and that larger collectives of surfers self-organize into coherent flocking
states. Our theoretical model for the surfers' positional and orientational
dynamics approximates a surfer as a pair of vertically oscillating point
sources of weakly viscous gravity-capillary waves. We derive an analytical
solution for the associated interfacial deformation and thus the hydrodynamic
force exerted by one surfer on another. Our model recovers the bound states
found in experiments and exhibits good quantitative agreement with experimental
data. Moreover, a linear stability analysis shows that the bound states are
quantized on the capillary wavelength, with stable branches of equilibria
separated by unstable ones. Generally, our work shows that self-propelling
objects coupled by interfacial flows constitute a promising platform for
studying active matter systems in which both inertial and viscous effects are
relevant. | Anand U. Oza, Giuseppe Pucci, Ian Ho, Daniel M. Harris | 2023-01-13T21:45:00Z | http://arxiv.org/abs/2301.05767v1 | # Theoretical modeling of capillary surfer interactions on a vibrating fluid bath
###### Abstract
We present and analyze a theoretical model for the dynamics and interactions of "capillary surfers," which are millimetric objects that self-propel while floating at the interface of a vibrating fluid bath. In our companion paper [1], we reported the results of an experimental investigation of the surfer system, which showed that surfer pairs may lock into one of seven bound states, and that larger collectives of surfers self-organize into coherent flocking states. Our theoretical model for the surfers' positional and orientational dynamics approximates a surfer as a pair of vertically oscillating point sources of weakly viscous gravity-capillary waves. We derive an analytical solution for the associated interfacial deformation and thus the hydrodynamic force exerted by one surfer on another. Our model recovers the bound states found in experiments and exhibits good quantitative agreement with experimental data. Moreover, a linear stability analysis shows that the bound states are quantized on the capillary wavelength, with stable branches of equilibria separated by unstable ones. Generally, our work shows that self-propelling objects coupled by interfacial flows constitute a promising platform for studying active matter systems in which both inertial and viscous effects are relevant.
capillary waves, collective motion, active matter
## I Introduction
Over the last several decades, there has been significant interest in understanding the physics of so-called "wet" active matter systems, in which constituents consume energy in order to move through a fluid medium [2; 3; 4]. Such systems are ubiquitous in biology and span the Reynolds-number spectrum. On one end, organisms at the microscale interact through low-Reynolds number (viscous or Stokesian) hydrodynamic interactions [5; 6; 7]. On the other end, schools of fish and flocks of birds generate relatively high-Reynolds number flows in which inertial effects are dominant [8; 9; 10]. Interfacial active systems consist of objects or organisms that self-propel at a liquid-gas interface, and typically exist in an intermediate regime in which both inertial and viscous forces are relevant [11]. Examples include water-walking insects [12; 13; 14; 15], bio-inspired self-propellers [16] and self-assembled magnetic swimmers [17; 18; 19]. Prior work has shown that floating solid bodies can self-propel due to the net flow generated by AC electrowetting [20], and that floating water droplets [21; 22; 23] and bouncing oil droplets [24; 25] may self-propel across a vibrating fluid bath due to interfacial Faraday waves. Moreover, camphor boats self-propel due to gradients in surface tension [26; 27] and thus exhibit rich collective behavior [28; 29; 30; 31].
In a companion paper [1], we report the discovery of a new interfacial active system named "capillary surfers" [Fig. 1(a)]. A surfer consists of a millimetric hydrophobic body [Fig. 1(b)] that floats on the surface of a vertically vibrating fluid bath of water-glycerol mixture [Fig. 1(c)]. All experiments are performed below the Faraday instability threshold, above which subharmonic standing waves spontaneously form at the free surface [32]. A surfer is front-back asymmetric and thus tilts slightly backwards in equilibrium, with the contact line remaining pinned to the surfer's base perimeter. The vibration of the bath results in the vertical oscillation of the surfer, and the subsequent generation of a radiated, propagating wavefield. The surfer thus moves along its long axis in the direction of its thinner half [Fig. 1(a,c)], the velocity being constant in the absence of external perturbations and other surfers. In the following we refer to the front and back of the surfer as the "bow" and "stern," respectively.
For a given surfer geometry, the surfer speed increases with the forcing acceleration and decreases with the forcing frequency [Fig. S1 in [1]]. Moreover, surfers interact through the wavefields that they generate and thus exhibit novel collective behavior. Specifically, experiments have demonstrated that when pairs of surfers are set into motion towards each other, they may spontaneously arrange into a variety of different bound states [Fig. 2 in [1]]. The system also exhibits multistability: multiple bound states may coexist for the same experimental parameters, and these states are quantized on the capillary length [Fig. 3 in [1]]. Collections of more surfers may self-organize due to their mutual |
2310.02991 | Diagnostic Tomography of Applied Holography | The single-particle behavior in $d\geq 1$-dimensional Fermi gases with a
large number $N$ of species and strong short-range $s$-wave scattering is
discussed in the $2d$ 'tomographic' framework of a (pseudo)holographic
correspondence with a certain $3d$ gravity of the $AdS_3$ type. However, due to
the intrinsically topological nature of such a bulk theory its dynamics reduces
to a purely boundary one and so, akin to its $SYK/AdS_2$ counterpart, this
formal correspondence neither represents a genuine case of, nor endorses the
hypothetical generalized holographic duality. | D. V. Khveshchenko | 2023-10-04T17:28:23Z | http://arxiv.org/abs/2310.02991v2 | # Diagnostic Tomography of Applied Holography
###### Abstract
The single-particle behavior in \(d\geq 1\)-dimensional Fermi gases with a large number \(N\) of species and strong short-range \(s\)-wave scattering is discussed in the \(2d\) 'tomographic' framework of a (pseudo)holographic correspondence with a certain \(3d\) gravity of the \(AdS_{3}\) type. However, due to the intrinsically topological nature of such a bulk theory its dynamics reduces to a purely boundary one and so, akin to its \(SYK/AdS_{2}\) counterpart, this formal correspondence neither represents a genuine case of, nor endorses the hypothetical generalized holographic duality.
_Holography forever (yes/no?)_
After having been abundantly present in the information space of strongly correlated systems for over 15 years, the so-called 'applied holographic' - a.k.a. 'bottom-up' \(AdS/CMT\) (where CMT stands for Condensed Matter Theory but, in reality, often means \(non-AdS/non-CFT\)) - approach [1] has finally been disappearing under the radar as of late. Once presented as spectacular (including quantitative, despite typically operating well outside the regime of applicability of any semiclassical approximation in the bulk gravity theory) successes in 'explaining' such popular condensed matter systems as the superconducting cuprates in their normal state, the bold claims of the early holographic studies are no longer being elaborated upon even by the enthusiastic proponents, nor any longer scrutinized by the skeptics.
At its inception, though, this novel approach offered a seemingly straightforward and tempting-to-follow - 'no questions asked' - practical recipe which strove to provide a universal tool for studying a broad variety of strongly-correlated quantum many-body systems [1]. And under its self-proclamation of being uniquely suited for those situations where the conventional condensed matter techniques were claimed to fail, it has been opportunistically entertained in great many ways, thus creating a rather peculiar culture of prioritizing technical convenience over (self)critical judgment. Most frustratingly, the answer to the central question: 'If generalized (\(non-AdS/non-CFT\)) holography were indeed valid, then \(why\) would that be?' has neither been seriously demanded, nor pursued with any determination.
Although somewhat delayed, the eventual demise of such a shortcut to bypassing the usual burden of proof, of course, should have been just a matter of time. However, it has also been suggested that, rather ironically, some of the general holographic claims might turn out 'to be right, albeit for the wrong reason' [3].
For one, much of the late holographic discourse has been limited to the far more humble (and, incidentally, much less prone to criticism) topic as linear hydrodynamics, one of the central issues being a possible (non)existence of fundamental bounds for the various kinetic coefficients and (non)universal relations between them [4]. However, this activity, too, has been gradually withering out, also due to the realization that such relations readily arise within the standard theory of quantum transport and with no reference to any holographic conjecture whatsoever [5].
That said, it is quite conceivable that the truly systematic - as opposed to any heuristic - approach to generic many-body quantum systems could indeed be developed in the form of a path integral over some Wigner function-type collective field variable acting in the corresponding phase-space. A subsequent reduction of this formally exact description to the first few moments of the Wigner's function, alongside their (non-linear) (hydro)dynamics, can then be thought of as establishing a holography-like relationship between the \(2d+1\)-dimensional 'bulk' and its boundary defined in some lesser (\(d+1\), as the lowest) number of dimensions [6].
In practice, such a (pseudo)holographic picture is likely to have much in common with the so-called 'geometric bosonization' technique that was discussed three decades ago [7] and was recently resurrected (either with [8] or without [9] any reference to Ref.[7]). This prospective direction is still awaiting for its further development, though.
Another holographic spin-off topic is signified by the Sachdev-Ye-Kitaev (SYK) model [10] and related (random or not [11]) \(0+1\)-dimensional models which are often portrayed as the water-proof example of holographic duality. Curiously enough, this characterization would typically be made despite readily acknowledging that in \(1+1\) dimensions the pertinent Jackiw-Teitelboim (JT) bulk gravity appears to be non-dynamical and is fully described by the boundary degrees of freedom, thereby revealing its intrinsically topological and effectively \(0+1\)-dimensional nature.
Apart from some subtle high-energy features which require an introduction of extra bulk non-gravitational (matter) fields [12], the \(0+1\)-dimensional boundary dynamics determines all the long-distance bulk correlations, low-temperature thermodynamics, etc. Moreover, this correspondence turns out to be very non-specific, as the dual \(AdS_{2}\) geometry emerges universally in the near-extremal regime of a generic \(d+1\)-dimensional bulk gravity [1].
Thus, although in the above cases certain many-body properties can indeed look like being ostensibly holographic, such correspondence appears to represent a mere equivalence between the systems of, effectively, same dimensionality. And while still being potentially useful, such topological - akin to that in the Quantum Hall Effect - duality does not quite rise to the same level as genuine, 'non-topological', one where the two dual systems do, in fact, belong in different dimensions. In that regard, the former may be viewed merely as the result of using a redundant, non-minimal, description, while the latter would indeed be radically new and requiring a major paradigm shift.
The present note aims to add to the list of pseudo-holographic correspondences by suggesting that a certain class of Fermi systems with short-range, yet potentially strong, interactions (e.g., neutral Fermi gases) could be formally mapped onto the \(2+1\)-dimensional gravity of the \(AdS_{3}\) variety. However, the latter happens to be topological as well [13], and, therefore, is neither specific enough to positively identify the boundary system in question, nor practically useful to allow for its properties to be accessible only via the bulk.
_Tomographic fermions_
The physics of weakly dispersing bands, where correlations are strong and interactions can possibly dominate over kinetic energy, has been the topic of many theoretical and experimental studies, starting with the Hubbard model and the Fractional Quantum Hall effect, and culminating, in recent years, with the discovery of flat bands in twisted bi/tri-layer graphene and other systems.
It appears, however, that this fundamental problem finds its cleaner implementation in cold neutral Fermi gases. Unlike the former where the relevant interactions are often described by fixed intermediate (rather than truly strong) couplings under the conditions of broken translational and rotational invariances, the latter can be precisely tuned into the extreme (unitary) regime while maintaining translation and rotational invariance.
By allowing for the long-ranged Coulomb forces this model can be further applied to semiconductor heterostructures. However, even in its most minimalistic formulation the full solution to this general problem still remains to be constructed. Arguably, it should have been the first target for applying any technique pretending to provide a new insight into such more complex scenarios as, e.g., the'strange-metallic' normal state of the superconducting cuprates.
Lately, much attention has been paid to the random matrix-type models whose hallmark feature is a dominant role of the so-called 'melonic' diagrams [10; 11; 12]. In the spaceless SYK-type models, a justification of the melonic approximation requires a large number \(N\) of fermionic flavors, both in the random (original SYK) as well as the non-random (Witten-Gurau) (non)colored tensor models.
Restoring spatial correlations and generalizing to the case of arbitrary \(q\geq 4\)-fermion couplings, the pertinent \(d+1\)-dimensional melonic theory is described by the Luttinger-Ward functional
\[F=\sum_{a=1}^{N}\ln\det(\partial_{\tau}-\epsilon_{a}(\nabla_{\bf x}))-N\int_{\tau_{1},{\bf x}_{1}}\int_{\tau_{2},{\bf x}_{2}}\Big(\Sigma(\tau_{12},{\bf x}_{12})G(\tau_{12},{\bf x}_{12})-\frac{1}{q}U(\tau_{12},{\bf x}_{12})G^{q}(\tau_{12},{\bf x}_{12})\Big) \tag{1}\]
formulated in terms of the bi-local field variables \(G\) and \(\Sigma\) corresponding to the single-particle Green function and its self-energy. The dispersion functions \(\epsilon_{a}({\bf k})=v_{a}(k-k_{Fa})\) vanishing at the corresponding (Luttinger) Fermi surfaces (FS) endow the fermions with spatial dynamics while the coupling function \(U(\tau,{\bf r})\) replaces the square of the SYK entangling amplitude averaged over a random ensemble (if any).
While the fine details of this function would be instrumental for determining (non-universal) fermion structures at the lattice scale, the long-distance (potentially universal) properties can be fully characterized by the overall coupling strength and power-laws of its spatio-temporal decay.
The Schwinger-Dyson equations derived from Eq.(1) read
\[(\partial_{\tau_{1}}-\epsilon_{a}(-i\nabla_{{\bf x}_{1}}))G(\tau_{12},{\bf x}_{12})+\int_{\tau_{3},{\bf x}_{3}}\Sigma(\tau_{13},{\bf x}_{13})G(\tau_{32},{\bf x}_{32})=\delta({\bf x}_{12})\delta(\tau_{12}) \tag{2}\]
and
\[\Sigma(\tau,{\bf r})=U(\tau,{\bf r})G^{q/2}(\tau,r)G^{q/2-1}(-\tau,-r) \tag{3}\]
which expression for the self-energy is formally similar to that of the \(2^{nd}\) order Born approximation in the Fermi gas with short-range (yet, potentially strong) scattering.
In the limit of totally flat (degenerate) dispersion relations \(\epsilon_{a}({\bf k})=const\) and a strongly localized interaction, \(U(\tau,{\bf r})=J^{2}\delta({\bf r})\), the solution of Eq.(2) localizes in space
\[G_{syk-r}(\tau,{\bf r})\sim\frac{sgn(\tau)}{(J\tau)^{\Delta}}\delta({\bf r}) \tag{4}\]
with \(\Delta=2/q\), as in the standard SYK model [10]. Correspondingly, its Fourier transform, \(G_{syk-r}(\omega,{\bf k})\sim sgn(\omega)/J^{2/q}|\omega|^{1-2/q}\), becomes independent of momentum.
Introducing bare dispersions \(\epsilon_{a}({\bf k})\neq const\) and/or a non-zero spatial range of interactions favors non-local solutions. For \(q=4\) one such solution was proposed in Ref.[14] where the use of Eqs.(2,3) was justified by introducing \(N\gg 1\) radially nested FS, while the only allowed processes of fermion scattering would be resonant, the energies of the incoming and outgoing scattered
particles obeying the resonant condition \(\sum_{a=1}^{q/2}\epsilon_{a}({\bf k}_{a})=\sum_{b=q/2+1}^{q}\epsilon_{b}({\bf k}_{b})\).
A related version of this model was studied in Ref.[15] by exploiting the 'tomographic' expansion of the propagator
\[G(\tau,{\bf r})\approx\frac{k_{F}^{(d-1)/2}}{\Omega_{r}^{1/2}}\sum_{\pm}e^{\pm i (k_{F}r-\pi(d-1)/4)}g(\tau,\pm r) \tag{5}\]
where \(\Omega_{r}=S_{d-1}r^{(d-1)}\) is the surface area of the \(d-1\)-dimensional sphere and \(S_{d-1}=2\pi^{d/2}/\Gamma(d/2)\).
Expanding the \(G\) factors in Eqs.(2,3) and discarding all but the \(s\)-wave amplitudes (consistent with the assumed short-range nature of interactions), one arrives at the equation for the \(1+1\)-dimensional propagator \(g(\tau,r)\). Although the radial variable \(r\) takes values on the positive semi-axis, the convergent and divergent spherical waves can be combined into one chiral (uni-directional) mode defined in the entire unbounded domain \(-\infty<r<\infty\), by analogy with the standard analysis of the Kondo effect. In contrast, for \(d=1\) the spatial coordinate is initially unbounded and the fermion system is non-chiral.
In the tomographic representation Eq.(2) then reduces to its 'radial' counterpart
\[(\partial_{\tau}+v\partial_{r})g(\tau,r)+\int_{\tau^{\prime},r^{\prime}}\sigma(\tau-\tau^{\prime},r-r^{\prime})g(\tau^{\prime},r^{\prime})=\delta(\tau)\delta(r) \tag{6}\]
where the effectively \(1+1\)-dimensional self-energy reads
\[\sigma(\tau,r)=U(\tau,r)\sum_{\xi_{a},\zeta_{a}}e^{i\sum_{a}k_{a}r}\prod_{a=2}^{q}g(\xi_{a}\tau,\zeta_{a}r)\,\Omega_{r}^{(4-q)/2} \tag{7}\]
where \(\xi_{a},\zeta_{a}=\pm 1\) are two-valued sign factors. Among the different terms in the sum (7) the most important are the non-oscillating ones with \(\sum_{a}k_{a}=0\). Under such a selection rule only the terms with zero total \(1d\) momentum should be kept. Moreover, in the case of a common Fermi velocity the resonant scattering condition gets automatically fulfilled, too.
For the sake of generality it would be instructive to consider the interaction function that decays algebraically in, both, space and time, \(U(\tau,r)=J^{2}/\tau^{2\alpha}r^{2\beta}\) with some positive \(\alpha\) and \(\beta\). In Eq.(7) it appears to be multiplied with the extra power \((4-q)(d-1)/2\) of \(r\) which stems from the tomographic expansion (5).
For comparison, in Ref.[15] the 'defect' power law governing the spatial dependence of correlations was found to be \((2-q)(d-1)/2\), which mishap can be traced back to missing another factor of \(\Omega_{r}\) which is due to the conversion of the \(d\)-dimensional spatial into the \(1d\) radial \(\delta\)-function in the r.h.s. of Eq.(6).
Neglecting the derivative terms in Eq.(6) gives rise to the generalized SYK-like integral equation where the power-law decay of the interaction function prompts the use of factorized solution
\[g(\tau,r)\sim\frac{{\rm sgn}\,\tau}{\tau^{2\Delta_{\tau}}}\,\frac{1}{r^{2\Delta_{r}}} \tag{8}\]
which is dominated by the self-energy (7) and governed by the exponents
\[\Delta_{\tau}=\frac{1-\alpha}{q},\qquad\Delta_{r}=\frac{1-\beta+(d-1)(1-q/4)}{q} \tag{9}\]
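These exponents follow from straightforward power counting: the convolution in Eq.(6) requires \(\sigma(\tau,r)\sim\tau^{2\Delta_{\tau}-2}r^{2\Delta_{r}-2}\), while inserting the ansatz (8) into Eq.(7) yields

\[\sigma(\tau,r)\sim\frac{J^{2}}{\tau^{2\alpha}r^{2\beta}}\,\frac{1}{\tau^{2(q-1)\Delta_{\tau}}r^{2(q-1)\Delta_{r}}}\,r^{(4-q)(d-1)/2}\]

so that matching the powers of \(\tau\) and \(r\) gives \(2q\Delta_{\tau}=2-2\alpha\) and \(2q\Delta_{r}=2-2\beta+(4-q)(d-1)/2\), i.e. Eq.(9).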
Such a solution was first proposed in the context of multi-dimensional SYK generalizations [16] and later reproduced in Ref.[15] (with no reference to Ref.[16] and, in fact, incorrectly, see above). Also, the factorized solution was utilized in a somewhat different - but mathematically similar - context of the replica-(a)symmetric SYK configurations [17].
In the Fourier transform of the effectively \(1d\) propagator
\[g(\omega,k)=\frac{1}{i\omega-\epsilon(k)-\sigma(\omega,k)} \tag{10}\]
the bare (derivative) terms are combined with the Fourier-transformed self-energy
\[\sigma(\omega,k)=\lambda\omega^{1-2\Delta_{\tau}}k^{1-2\Delta_{r}} \tag{11}\]
Balancing the bare against the self-energy terms allows one to estimate the effective dynamical critical index.
Namely, in the complementary regimes \(\tau\gg r\) and \(\tau\ll r\) one reads off the naive estimates \(z_{>}=(1-2\Delta_{r})/2\Delta_{\tau}\) and \(z_{<}=2\Delta_{r}/(1-2\Delta_{\tau})\) which, in general, would be inconsistent, as \(z_{>}\geq 1\) while \(z_{<}\leq 1\). In fact, for any \(d\), \(q\), and \(\alpha,\beta>0\) the minimal attainable value of \(z_{>,min}=(1+\beta)/(1-\alpha)\) is always greater than the maximal \(z_{<,max}=(1-\beta)/(1+\alpha)\).
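Explicitly, balancing either of the two bare terms in (10) against the self-energy (11),

\[i\omega\sim\lambda\,\omega^{1-2\Delta_{\tau}}k^{1-2\Delta_{r}}\;\Rightarrow\;z_{>}=\frac{1-2\Delta_{r}}{2\Delta_{\tau}},\qquad\epsilon(k)\sim\lambda\,\omega^{1-2\Delta_{\tau}}k^{1-2\Delta_{r}}\;\Rightarrow\;z_{<}=\frac{2\Delta_{r}}{1-2\Delta_{\tau}},\]

in the regimes dominated by the frequency and the momentum dependence, respectively.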
However, for \(\alpha=\beta=0\) and \(q=4\) (but arbitrary \(d\)) both estimates converge to the same value of \(z=1\). With this unique choice of parameters one identifies the regime where \(\Delta_{\tau}=\Delta_{r}=1/4\) and the self-energy \(\sigma(\omega,k)=\lambda(\omega k)^{1/2}\) governed by a dimensionless amplitude \(\lambda\sim(J/\mu)^{2}\) with \(\mu=v_{F}k_{F}\) appears to be marginally on par with the bare terms in (10).
These findings are consistent with the general expectation that, in contrast to the \(0+1\)-dimensional SYK/tensor models where all the \(q\)-particle couplings are strongly relevant in the infrared, in their (effectively) \(1+1\)-dimensional generalizations the only marginal could be the interactions with the lowest non-trivial value of \(q=4\).
Incidentally, in this special case there exists an elegant exact solution to the integral equation (6) [18]
\[g(\tau,r)=\frac{1}{\sqrt{(iv_{+}\tau-r)(iv_{-}\tau-r)}} \tag{12}\]
where \(v_{\pm}=v(1\pm\lambda)\). Eq.(12) is uniquely tailored to the case of \(q=4\) and does not allow for any immediate generalization.
Also, in the non-chiral case of \(d=1\) the arguments presented in Ref.[19] suggest that the 'charge' mode tends to develop a gap due to the generic back-scattering processes. Therefore, the following discussion will be limited to the
dimensions \(d>1\) where the FS is continuous and the chiral tomographic fermions remain gapless.
The Fourier transform of (12)
\[g(\omega,k)=\frac{1}{\sqrt{(i\omega-v_{+}k)(i\omega-v_{-}k)}} \tag{13}\]
features a branch-cut singularity while exhibiting the intact dynamical critical exponent \(z=1\). Instead of an anomalous power-law decay, though, Eq.(13) demonstrates behavior akin to spin-charge separation. Accordingly, the corresponding spectral function \({\rm Im}\,g(\omega,k)\) is manifestly non-Fermi-liquid (NFL)-like, showing a double-peak structure with square-root singularities at \(\omega=v_{\pm}k\).
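The double-peak structure is easy to verify numerically; the quick sketch below uses the standard continuation \(i\omega\rightarrow\omega+i\eta\), with arbitrary illustrative velocities:

```python
import numpy as np

def spectral_function(omega, k, v_plus, v_minus, eta=1e-4):
    """A(omega,k) = -(1/pi) Im g_R, from Eq. (13) via i*omega -> omega + i*eta."""
    w = omega + 1j * eta
    g_r = 1.0 / np.sqrt((w - v_plus * k) * (w - v_minus * k))
    return -g_r.imag / np.pi

# The spectral weight is confined between square-root peaks at omega = v_-k, v_+k
k = 1.0
omega = np.linspace(0.0, 2.0, 2001)
A = spectral_function(omega, k, v_plus=1.3, v_minus=0.7)
```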
The propagator given by Eqs.(5,12) differs markedly from the approximate (semi-numerical) solution [14] obtained with the use of the mixed \(\tau-k\) representation in the momentum-space variant of the SYK model
\[G_{syk-p}(\tau,{\bf k})\sim\frac{sgn\tau}{(J\tau)^{1/2}}e^{-O(T/J)\epsilon(k)\tau} \tag{14}\]
At low temperatures (14) is strongly localized in the coordinate space, regardless of its dimension, which behavior is suggestive of \(z=\infty\) and the near-extremal \(AdS_{2}\times R^{d}\) geometry of the corresponding bulk dual [1].
Likewise, Eq.(12) differs from the semi-local holographic propagator
\[G_{semi-loc}(\omega,{\bf k})=\frac{1}{(i\omega)^{2\delta_{k}}-\epsilon(k)} \tag{15}\]
where the momentum-dependent exponent \(\delta_{k}=(a+b{\bf k}^{2})^{1/2}\) varies continuously with the scaling dimension of the bulk fermion [20]. It has been used extensively in constructing the various holographic phenomenologies which, however, encounter significant problems with restoring the proper FS.
_Boundary-bound bulk dual_
Both the \(2d\)- and \(3d\)-dimensional gravities are intrinsically topological and, therefore, all their gauge-invariant bulk degrees of freedom are effectively determined by the state of the boundary [21]. Among other things, this implies that there are no gravitational waves and the list of the classical background solutions is limited to the maximally symmetric spaces of constant curvature: Minkowski, de-Sitter, or anti-de-Sitter.
It was conjectured in Ref.[6] that the pertinent class of quantum \(1+1\)-dimensional systems can be related to a formally \(2+1\)-dimensional phase-space geometry, the role of the third ('holographic') coordinate then being played by the conjugate momentum \(p\).
Somewhat surprisingly, there have been no consistent attempts to incorporate the \(1d\) phenomenon of spin-charge separation into the holographic framework. For one thing, the putative gravity dual corresponding to the \(1+1\)-dimensional chiral SYK was found in Ref.[18] to attain the maximal \(0+1\) SYK-like many-body quantum chaos quantified by the Lyapunov exponent \(\lambda_{L,max}=2\pi T\) right where the model loses its chiral nature (\(v_{-}\to 0\)).
Therefore, it is conceivable that a viable background geometry could be rid of the ubiquitous near-extremal black hole which exhibits the universal \(AdS_{2}\times R^{d}\) metric as in its presence the boundary state would likely remain maximally chaotic for \(T\to 0\) and arbitrary values of the entangling couplings [10; 12].
As far as the task of identifying the pertinent geometry is concerned, there exists a well-known connection [13] between the \(2+1\)-dimensional gravity with a negative cosmological constant \(\Lambda=-1/l^{2}\)
\[S=\frac{l}{16\pi\kappa}\int d\tau drdp\sqrt{g}(R-\Lambda) \tag{16}\]
where \(l\) and \(\kappa\) are the AdS radius and Newton's constant, respectively, and the (double-sided) Chern-Simons model with the action
\[S=\frac{l}{16\pi\kappa}Tr\int d\tau dxdp\epsilon^{\mu\nu\lambda}(\hat{A}_{ \mu}^{\pm}\partial_{\nu}\hat{A}_{\lambda}^{\pm}\!+\!\hat{A}_{\mu}^{\pm}\hat{A }_{\nu}^{\pm}\hat{A}_{\lambda}^{\pm}) \tag{17}\]
Under the identification of the \(3d\) metric
\[g_{\mu\nu}=\frac{l^{2}}{4}(A_{\mu}^{+}-A_{\mu}^{-})(A_{\nu}^{+}-A_{\nu}^{-}) \tag{18}\]
the equation of motion derived from (17) imposes the null curvature condition
\[\partial_{\mu}\hat{A}_{\nu}^{\pm}+\hat{A}_{\mu}^{\pm}\hat{A}_{\nu}^{\pm}-(\mu \leftrightarrow\nu)=0 \tag{19}\]
which then becomes the Einstein equation, \(R_{\mu\nu}-2g_{\mu\nu}\Lambda=0\) complemented by that of zero torsion.
The chiral connections \(\hat{A}_{\mu}^{\pm}\) can be expanded in the basis of generators \(\hat{L}_{0,\pm 1}^{\pm}\) of the algebra \(SL(2,R)\times SL(2,R)=SO(2,2)\) which obey the commutation relations \([\hat{L}_{n}^{\pm},\hat{L}_{m}^{\pm}]=(n-m)\hat{L}_{n+m}^{\pm}\) and are normalized as \(Tr\hat{L}_{n}^{\pm}\hat{L}_{m}^{\pm}=\frac{1}{2}\delta_{n,0}\delta_{m,0}-\delta_{n,1}\delta_{m,-1}\). Furthermore, the asymptotic symmetry algebra is, in fact, enhanced to the product of two chiral Virasoro algebras.
Upon parameterizing the solutions of (19) in terms of an arbitrary group element \(\hat{\chi}_{\pm}\), functions \(p_{\pm}(x_{\pm})\), and the conjugate chemical potentials \(\mu_{\pm}=\frac{\delta H^{\pm}}{\delta p_{\pm}}\)
\[\hat{A}^{\pm}(x,p,\tau)=\hat{\chi}_{\pm}^{-1}(p)\hat{L}_{0}(\mu_{\pm}d\tau\pm p _{\pm}dx)\hat{\chi}_{\pm}(p) \tag{20}\]
the null curvature condition reduces to the continuity equation \(\dot{p}_{\pm}\mp\mu_{\pm}^{\prime}=0\). By choosing the appropriate form of the Hamiltonian \(H^{\pm}=\sum_{l}a_{l}H_{l}^{\pm}\), where all the \(H_{l}^{\pm}=\sum_{k=1}^{l}b_{k}p_{\pm}^{k}\mu_{\pm}^{l-k}\) are in involution (\([H_{k},H_{l}]=0\)), one can then reproduce certain families of \(1+1\)-dimensional integrable (KdV, mKdV, Gardner, Benjamin-Ono, and other) equations [21].
On the gravity side, different saddle points of the coherent states path integral governed by (16) can be identified as globally distinguishable (but locally \(AdS_{3}\)) classical solutions, the two competing minima being the thermal \(AdS_{3}\) and Banados-Teitelboim-Zanelli (BTZ) black hole [22].
In particular, the non-chiral spinless Luttinger liquid (LL) with \(k=1\) can be reproduced by introducing the original Brown-Henneaux boundary conditions with constant \(\mu_{\pm}\sim p_{\pm}\), the outer/inner BTZ horizons being located at \(p_{>/<}=(p_{+}\pm p_{-})/2\). The dual metric
\[ds^{2}=\frac{p^{2}\,dp^{2}}{(p^{2}-p_{+}^{2})(p^{2}-p_{-}^{2})}+\frac{(p^{2}-p_{+}^{2})(p^{2}-p_{-}^{2})}{p^{2}}\,d\tau^{2}+p^{2}\Big{(}dr-\frac{p_{+}p_{-}}{p^{2}}d\tau\Big{)}^{2} \tag{21}\]
describes a rotating BTZ black hole with the event horizon but no curvature singularity. In particular, the non-rotating solution with \(p_{<}=0\) can then be used to recover the left-right symmetric LL (cf. the above discussion of the case \(d=1\)).
For static, yet non-constant, \(p_{\pm}(x)\) the corresponding boundary solutions possess non-trivial global charges given by the chiral integrals of motion \(H_{k}^{\pm}\) while their bulk counterparts can be regarded as black holes with multi-graviton excitations ('soft hair') [21]. By introducing higher order terms \(H_{k}^{\pm}\) with \(k\geq 3\) one can generate new black hole configurations.
The most general solution can be obtained by acting on the ground state (e.g., BTZ black hole) with elements of the asymptotic symmetry group commuting with the Hamiltonian. In this way, one can construct various constant curvature (locally \(AdS_{3}\)) space-times with anisotropic Lifshitz scaling and \(z=2k-1\). This opens up the possibility of studying the systems with \(z>1\) without the need of bulk geometries which are asymptotically Lifshitz space-times. Namely, in the case of a higher-spin symmetry \(SL(M,R)\) with \(M>2\) the list of attainable gravitational backgrounds includes the asymptotically Lobachevsky, Schroedinger, warped \(AdS\), etc. space-times [23].
Despite a whole range of possibilities it would be rather unlikely for the list of the (potentially) relevant gravity theories to include the Einstein-Maxwell-dilaton theory or for the relevant metrics to be even nearly as exotic as, e.g., Bianchi VII that has been repeatedly invoked in the holographic studies [1].
In the context of the \(1+1\)-dimensional SYK model, however, the one-sided chirality with the causality cone falling within the range of velocities bounded by \(v_{\pm}\) and the lack of conformal symmetry make it more difficult to identify the proper geometries.
To that end, it was speculated in Ref.[24] that the properties of this model may be somewhat similar to the non-chiral CFTs with large central charges. In particular, it was conjectured that their gravity dual, if any, could be represented by a space-time rotating faster than the speed of light at its boundary. Also, the discrete (un)folding of the radial dimension performed in the process of constructing the 'tomographic' representation might be suggestive of some orbifold-like underlying geometry.
Some additional insight can be gained by comparing the boundary propagator (12) to the two-point function (in the Euclidean signature) of a bulk fermion of dimension \(\Delta\). In the semiclassical regime it would be governed by the exponential of the action computed on the \(3d\) geodesic anchored at the boundary end points
\[g(\tau,r)\sim\exp(-\Delta\int\sqrt{g_{pp}dp^{2}+g_{\tau\tau}d\tau^{2}+g_{rr} dr^{2}}|_{p\to 0}) \tag{22}\]
Fourier transforming this expression to the space-time domain one then obtains
\[g(\omega,k)\sim\exp\Big{(}-\int dp\sqrt{g_{pp}\Big{(}\frac{\Delta^{2}}{l^{2}}+\frac{\omega^{2}}{g_{\tau\tau}}+\frac{k^{2}}{g_{rr}}\Big{)}}\Big{)} \tag{23}\]
Leaving a systematic identification of the pertinent metric to the future discussion, by a direct inspection one finds that the propagator (12,13) can be reproduced with the use of the metric
\[ds^{2}=\frac{dp^{2}}{p^{2}}+p^{2}(dr_{-}^{2}+\delta v^{2}d\tau^{2}) \tag{24}\]
where \(r_{-}=r-i\tau\) and the bulk fermion dimension \(\Delta=1\). The logarithmic divergence in the momentum integral in (22,23) gets cut off at the low momenta \(p\sim 1/\sqrt{r_{-}^{2}+\delta v^{2}\tau^{2}}\), thereby resulting in Eqs.(12,13).
In the limit of vanishing interaction ( \(\lambda=\delta v/2v\to 0\)) the chiral boundary propagator and its Fourier transform return to their free counterparts \(g_{0}(\tau,r)=1/(r-i\tau)\) and \(g_{0}(\omega,k)=1/(i\omega-k)\), respectively.
_Strange theories of strange (or not so?) metals_
Since its inception, one of the most recurrent themes in applied holography was the problem of 'strange metals', in reference to such electronic materials as the cuprates, pnictides, heavy fermion compounds, twisted bi/tri-layer graphene, etc.
Thermodynamics of Eq.(1) is governed by the free energy \(F\sim T^{1+1/z}\) in any dimension \(d\geq 1\), thus producing a linear (possibly, _log_-enhanced) specific heat \(C=-T\partial^{2}F/\partial T^{2}\sim T\) for \(z=1\), albeit with a prefactor different from that in the ordinary Fermi gas.
For comparison, with the use of the solution (13) the longitudinal optical conductivity can be readily evaluated in the same 'no vertex correction' approximation
\[\sigma(\Omega)=\frac{\pi\mu}{2T}\int\frac{d\epsilon_{k}\,d\omega}{\cosh^{2}(\omega/T)}\,\mathrm{Im}\,G(\omega+\Omega,{\bf k})\,\mathrm{Im}\,G(\omega,{\bf k})\sim\frac{\lambda\mu T}{(\lambda T)^{2}+\Omega^{2}} \tag{25}\]
from which one recovers the Drude peak, \(\sigma(\Omega)\sim\mu\delta(\Omega)\), in the vanishing coupling limit.
Albeit not immediately obvious, the source of momentum relaxation is provided by the intrinsic randomness built into the action (1). And while the customary substitution \(\Omega\to T\) may not be generally justifiable (as it is
in the 'no momentum drag' regime), the d.c. resistivity shows a linear behavior with increasing temperature, in agreement with the results of Ref.[14].
Furthermore, in Ref.[14] the slope of the linear in \(T\) rate of momentum relaxation was found to be universal, a prediction that appears to be consistent with the data on a host of materials [25] (see, however, [26]). Despite this intriguing observation, in the later work on the subject the remnant FS model was abandoned in favor of the more conventional coordinate-space generalization of the SYK model with some additional single-particle randomness that, alongside the random entangling correlations, yields the resistivity \(\rho(T)=\rho_{0}+\alpha T\)[27].
The above results can be contrasted against the earlier attempts to accommodate the general SYK scenario into the physics of cuprates and other 'strange metals' by exploiting the ultra-local behavior with \(z=\infty\). This universality class was argued to be characterized by such hallmarks as a universal ('Planckian') linear-\(T\) equilibration/thermalization rate, maximal chaos, and saturated bounds for the kinetic coefficients [25].
Yet another thrust towards the multi-dimensional generalizations of the SYK model delivered the resistivity exponent \(4/q\), thus suggesting some 'accidentally linear' temperature dependence for \(q=4\)[28]. However, the general possibility of other values of this exponent should caution one against invoking this observation to explain the robust linear-\(T\) dependence observed in the broad range of strongly correlated systems.
Meanwhile, in the still other 'bottom up' holographic scenaria the thermodynamic and transport properties would be derived from such a popular workhorse as the Einstein-Maxwell-dilaton model which also paves the way for the Lifshitz and hyperscaling-violating (HV) background geometries which are parametrized by the dynamical index \(z\) and the HV exponent \(\theta\). Besides, yet another exponent would often be introduced for the effective charge renormalization [29].
The title of the aforementioned Ref.[2] states that the popular (Gubser-Rocha [30]) holographic model where the dynamical index \(z\) and HV exponent \(\theta\) are both infinite while their ratio is finite, \(\theta/z=-1\), does \(not\) explain the cuprates (presumably, suggesting that some alternate schemes might still do). Specifically, in Ref.[2] it was proposed to resolve the dichotomy between the \(T\)-dependences of resistivity and Hall angle observed in the cuprates by postulating a \(T\)-dependent carrier density \(n\sim T\), alongside the generic relaxation time \(\tau\sim 1/T^{2}\).
Interestingly, though, despite including the earlier work of Ref.[31] in their list of references (item [36] in Ref.[2]) the authors of Ref.[2] did not seem to realize that in Ref.[31] the same scenario had been proposed (incidentally, the actual journal reference [36] in Ref.[2] was cited incorrectly).
Nor did the authors of Ref.[2] seem to appreciate that, apart from the conductivity \(\sigma\sim 1/T\) and the Hall angle \(\tan\theta_{H}\sim 1/T^{2}\), the scheme proposed in Ref.[31] renders the linear (possibly, log-enhanced) specific heat and reproduces the experimentally observed magnetoresistivity, \(\frac{\Delta\rho}{\rho}\sim\theta_{H}^{2}\sim 1/T^{4}\), spin susceptibility (both at the momentum \(Q=(\pi,\pi)\) and momentum-integrated), as well as the Hall Lorentz, Seebeck, and Nernst coefficients, the latter satisfying the relations \(S\sim(T/e\sigma)d\sigma/d\mu\) and \(\nu_{N}\sim(T/eB)d\theta_{H}/d\mu\) where \(\mu\) is a chemical potential. Altogether, the above predictions appear to be in a generally better agreement with the data than the competing holographic phenomenologies [32] even without introducing an additional charge exponent.
Also, the proposal of Ref.[31] differs from such historic scenaria as that of two distinct scattering times: \(\tau_{tr}\sim 1/T\) and \(\tau_{H}\sim 1/T^{2}\) pertaining to the relaxation of longitudinal and transverse [33] or charge-symmetric vs anti-symmetric [34] currents, respectively, or that based on the marginal Fermi liquid phenomenology [35].
Although the scenaria of \(T\)-dependent carrier density were indeed discussed at the early stages of the high-\(T_{c}\) saga [36], the peculiar linear in \(T\) behavior of the carrier density can be seen as problematic enough to view the above agreement with experiment as no more than fortuitous.
Conceivably, however, in light of the gradually emerging consensus about the by and large mundane (hence, non-NFL) behavior in most of the cuprates phase diagram [37] some of the seemingly unresolved issues might eventually turn out to be moot. On the other hand, such a conciliatory resolution could still be challenged farther down the road [38].
_The bottom line (up front)_
Applied holography purportedly offers a unique insight into the largely uncharted territory of 'super-strongly coupled' systems with a nearly or even completely flat dispersion. The majority of the recent and ongoing exercises on this topic exploits the various variants of the 'ultra-local' scenario represented by the SYK model and its (pseudo)holographic dual which has been identified as the JT gravity. In contrast, the present note focuses on the holographic aspects (or lack thereof) of the intrinsically 'linear-radial' tomographic representation models.
On the practical side, a valid instance of holographic correspondence could then allow one to replace the conventional (usually impossible) exact or diagrammatic calculations in the strongly-interacting quantum boundary theory with the technically much simpler task of solving some classical equations _a la_ Einstein in the higher-dimensional bulk.
However, if the nature of the holography in question is topological, one can hardly gain any new knowledge by exploiting the (in this case, trivial) bulk gravity, thus largely negating any potential benefits of the alternate bulk description. In fact, all the essential properties of the SYK models can be understood without much input from the dual \(AdS_{2}\) gravity, thus making their correspondence secondary to the (asymptotic) exact solubility of either model. The only 'holographic' aspect that appears
to be essential for solving both models is the very existence of the 'conformal' solution (4), reminiscent of the first such example found in the theory of phase transitions in helium [39].
To summarize, in the present note it is argued that a broad family of \(d\geq 1\)-dimensional 'tomographic' models allows for the unifying effective \(1+1\)-dimensional description which, in turn, can be reproduced in the context of some bulk gravity of the \(AdS_{3}\) type. Once again, the intrinsically topological nature of the latter theory does not provide any significant insight that could have not been gathered in the framework of the boundary model itself.
On a more general note, the intrinsic topological aspect of any example of the \(AdS/CMT\) holographic correspondence could be traced back to the popular interpretation of the extra holographic coordinate as that of the Wilson's renormalization group (RG) scale [1]. Despite being intuitively appealing, this identification may run into a potential problem with double counting of the effects of the boundary degrees of freedom which are supposed to generate that RG flow without any reference to the bulk fields. Indeed, any coarse-graining RG procedure (either in the Wilson's or in tensor network sense) can be executed solely within the boundary theory. Introducing an extra holographic coordinate would then be redundant and, therefore, subject to a gauge condition and/or an associated constraint.
The above discussion implies that any purported discovery of a novel \(AdS/CMT\) holographic duality should be calling for an inquiry into whether it really involves the systems in different dimensions ('It from Qubit', as per the popular motto [1]) or is it merely topological ('All from Hall', as per the above slogan's obscure rival [40]). Contrary to the former, the 'HALLographic' correspondence would be relating pairs of systems of the (de facto) same dimensionality and, therefore, would be unlikely to serve as a preferred means of gaining some exclusive insight into the properties of one party ('boundary') by instead studying its counterpart ('bulk').
The hospitality at the Aspen Center for Physics funded by National Science Foundation through the grant PHY-2210452 and at the International Institute of Physics in Natal, Brazil supported by the Simons Foundation are gratefully acknowledged.
|
2305.17694 | Geometric considerations on planetary surface temperatures | We propose a formula for computing the average planetary surface temperatures
based solely on the solar irradiance and the bond albedo. The formula is
empirically derived from data on Earth, Venus and Titan, and a model is
proposed to justify it. We introduce the concept of planetary inner albedo, as
a complement to the usual bond albedo. A geometric proof is given for the main
finding of the paper, which can be summarized as follows: the ratio of the
inner to outer albedo is a constant, related to the universal parabolic
constant. Furthermore, we extend the surface temperature formula to gas giants,
giving the temperature at which condensates (e.g., of ammonia) start forming
within their atmosphere, particularly for Jupiter, Saturn and Uranus. Based on
model complexity, applicability and accuracy, the heating mechanism via
atmospheric reflectivity (a mirror effect) performs much better than the
alternatives. Responses to reviewers are included at the end. | Sabin Roman | 2023-05-28T11:20:06Z | http://arxiv.org/abs/2305.17694v2 | # Geometric considerations on planetary surface temperatures
###### Abstract
We propose a formula for computing the average planetary surface temperatures based solely on the solar irradiance and the bond albedo. The formula is empirically derived from data on Earth, Venus and Titan, and a model is proposed to justify it. We introduce the concept of planetary inner albedo, as a complement to the usual bond albedo. A geometric proof is given for the main finding of the paper, which can be summarized as follows: the ratio of the inner to outer albedo is a constant, related to the universal parabolic constant. Furthermore, we extend the surface temperature formula to gas giants, giving the temperature at which condensates (e.g., of ammonia) start forming within their atmosphere, particularly for Jupiter and Saturn.
## I Introduction
Determining the average surface temperature of Earthlike planets (rocky, with a significant atmosphere of pressure greater than 10 kPa) is a fundamental question in planetary science, and has important implications in particular for Earth's biosphere [1; 2]. Black body estimates markedly underestimate the temperature, not accounting for the full effect of the atmosphere. Calculations of surface temperatures based on the ideal gas law or the lapse rate can give accurate estimates, but they do not account for the energy balance of incoming and outgoing radiation [3]. Given the complexity of the atmospheric system, finding universal regularities can seem unlikely. However, finding such regularities is essential to our understanding. Certain well-established, simple patterns can emerge, such as the linear decrease of temperature with altitude (in the troposphere) [4].
In this work we highlight a linear relationship with universal applicability. Specifically, as we show below, we plot the ratio of net solar irradiance to emitted radiation for Titan, Earth and Venus versus their bond albedo. Surprisingly, a linear fit matches the data very well, see Fig. 1. This implies that the slope is a constant independent of the features of the celestial bodies, leading us to propose a geometric rationale for it, which we explore in Section 2. This provides a way to account for incoming solar radiation and atmospheric features in determining the surface temperature. It is striking that the albedo provides sufficient information in this regard. In Section 3, we discuss how applying the formula to gas giants seems to determine the temperature at which condensates form in their atmosphere. We have included Jupiter and Saturn in Fig. 1 and the linear regression uses the data from Table 1.
In Fig. 1 we calculate the ratio of net solar irradiance to surface radiant exitance for rocky bodies in the solar system with a substantial atmosphere, and plot it against the albedo. We can interpret the result as follows. Consider a planet with albedo \(\alpha\) and incoming radiation \(I\). The amount of radiation passing through (the net irradiance) is \(I(1-\alpha)\). If the planet radiates at \(I_{p}\) with an inner albedo \(\beta\) (the fraction of radiation reflected back to the surface), then the amount of radiation getting through is \(I_{p}(1-\beta)\). Conservation of energy requires \(I_{p}(1-\beta)=I(1-\alpha)\), and hence:
\[\frac{I(1-\alpha)}{I_{p}}=1-r\alpha \tag{1}\]
where we define \(r=\beta/\alpha\) as the slope of the linear fit in Fig. 1. We find that \(r\simeq 1.29\).
The surface radiant exitance is calculated using the Stefan-Boltzmann law, namely \(I_{p}=\sigma T_{p}^{4}\) where \(T_{p}\) is the surface temperature. Thus, equation (1) can be used to estimate the surface temperature of a given celestial body, provided it is rocky with an appreciable atmosphere. As we will discuss in the third section, the number of planets and moons in the solar system for which equation (1) is directly applicable is limited to the examples plotted in Fig. 1.
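As a concrete illustration, the inversion of equation (1) for \(T_{p}\) can be carried out numerically. The following is a minimal sketch (the Stefan-Boltzmann constant, the average slope \(r=1.294\), and the Table 1 inputs are the only ingredients; note that Venus lies close to the pole at \(\alpha=1/r\), so its estimate is very sensitive to the precise value of \(r\)):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
R_SLOPE = 1.294   # average slope r from Table 1

def surface_temperature(irradiance, albedo, r=R_SLOPE):
    """Invert equation (1): sigma * T_p^4 = I (1 - alpha) / (1 - r * alpha)."""
    exitance = irradiance * (1.0 - albedo) / (1.0 - r * albedo)
    return (exitance / SIGMA) ** 0.25

# (solar irradiance in W/m^2, bond albedo, observed T_p in K) from Table 1
bodies = {"Venus": (650.3, 0.760, 737.0),
          "Earth": (340.2, 0.306, 288.0),
          "Titan": (3.7, 0.220, 92.3)}

for name, (irr, alb, t_obs) in bodies.items():
    print(f"{name}: predicted {surface_temperature(irr, alb):.0f} K, "
          f"observed {t_obs:.0f} K")
```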
Given the large differences in atmospheric composition, surface features and other characteristics between Venus, Earth and Titan, the fact that (1) holds empirically so well suggests that \(r\) is geometric in nature. In the next section we propose a model to account for equation (1), which can justify the universal nature of \(r\).
Figure 1: The ratio of incoming to outgoing surface radiation versus the albedo for Titan, Earth and Venus. Gas giants are included with their “surface” corresponding to where condensates start forming.
## II Model of inner albedo
Consider a reflective sphere \(S\) with a rough surface such that light scatters in the outward tangent half-space at every point. We make two main simplifying assumptions, which will be discussed at the end of the section. The first assumption is that we restrict our analysis to a two-dimensional cross-section made by a plane through the centre of \(S\), see Fig. 2. The cross-sectional circle is surrounded by a set \(M\) of circular segments that are transparent to incoming light but reflective on their inner surface. Furthermore, the mirrors \(M\) are at a close distance from the surface relative to the radius of the sphere \(S\). The incoming light has irradiance \(I_{0}\).
We are interested in how the light reflects from the inner surface of \(M\), and we look at a segment \(AB\) of \(M\) in Fig. 3. The light reflects off \(M\) at an intensity \(I_{1}\) and then reflects off the surface \(S\) at an intensity \(I_{2}\). The second key assumption is that when the light reflects off \(S\) it does so with a wavefront well-approximated by the parabola \(P\) with focus \(F\) from Fig. 3.
Let \(l_{1}\) be the length of the circle segment \(AB\) and \(L_{1}\) be the length of the parabolic segment \(EG\). The following equality holds between the incoming light \(I_{0}+I_{1}\) (the radiation passing through \(M\) plus the reflected radiation) and the light reflected off the surface with intensity \(I_{2}\):
\[l_{1}(I_{1}+I_{0})=L_{1}I_{2} \tag{2}\]
This expresses conservation of energy. In addition, we expect \(I_{0}=I_{2}\) because \(S\) cannot emit more than it receives. Hence:
\[I_{1}=\left(\frac{L_{1}}{l_{1}}-1\right)I_{0} \tag{3}\]
From the geometry in Fig. 3 we can deduce the equality \(FG\simeq AB\simeq l_{1}\), see the Appendix. The ratio of the parabolic segment \(EG\) of length \(L_{1}\) to the linear segment \(FG\simeq l_{1}\) is given by the universal parabolic constant \(p\simeq 2.295\). Then, the ratio between the reflected light from \(M\) and the outgoing light from \(S\) is:
\[\frac{I_{1}\sum_{i}l_{i}}{I_{0}C}=(p-1)\alpha \tag{4}\]
where \(\sum_{i}l_{i}\) is the summed lengths of all the segments of \(M\), which make up a fraction \(\alpha\) of the circumference \(C\) of the circle. Thus, we can consider the inner albedo of \(M\) to be \(\beta=r\alpha\) where \(r=p-1\simeq 1.295\). This theoretical value is consistent with the estimates in Table 1.
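For reference, the universal parabolic constant has the closed form \(p=\sqrt{2}+\ln(1+\sqrt{2})\) (the arc length of the parabolic segment cut off by the latus rectum, divided by the semi-latus rectum), which can be checked numerically in a short sketch:

```python
import math

# Universal parabolic constant and the predicted slope r = p - 1.
p = math.sqrt(2) + math.log(1 + math.sqrt(2))
print(f"p = {p:.5f}, r = p - 1 = {p - 1:.5f}")  # p = 2.29559, r = 1.29559
```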
The above analysis made two key simplifying assumptions: (1) a restriction to a two-dimensional setting, and (2) the wavefront of the light emitted from the surface \(S\) is parabolic. Regarding assumption (2), we see that the parabola \(P\) approximates a circular wavefront, which is represented by a grey curve in Fig. 3. Thus, Huygens' principle is not violated. In particular, the grey circle's radius is \(5/2\) times the focal length of the parabola.
Regarding assumption (1), we can generalize the argument to three dimensions. A first intuition would be to consider radially symmetric generalizations, such as the light having a paraboloidal wavefront. However, this is not justified because the reflective segments \(M\) are irregular (e.g., the shape of clouds) and not necessarily circular disks. The two-dimensional restriction in Fig. 2 does qualitatively capture a cross section of a planet with an atmosphere. This can be generalized by considering cross-sectional thin slabs wherein the wavefront consists of short parabolic cylinders. This implies a translational symmetry over short distances and keeps the proportions and above calculations unchanged. Huygens' principle is again obeyed by virtue of the fact that a short parabolic cylinder approximates a thin cross-section of a sphere.
The argument we have presented is geometric in nature and applies to rocky celestial bodies such as Venus, Earth and Titan. In the next section we discuss extensions to gas giants.
\begin{table}
\begin{tabular}{l c c c c} Planet & Solar irr. (W/m\({}^{2}\)) & Albedo & \(T_{p}\)(K) & \(r\) \\ Venus & 650.3 & 0.760 [6] & 737 & 1.303 \\ Earth & 340.2 & 0.306 & 288 & 1.290 \\ Titan & 3.7 & 0.220 [7] & 92.3 [8; 9] & 1.316 \\ Jupiter & 12.6 & 0.503 [10] & 132.79 [11] & 1.284 \\ Saturn & 3.7 & 0.342 [12] & 93.5 [13] & 1.278 \\ \end{tabular}
\end{table}
Table 1: Planetary features employed in the calculations, along with the estimates of \(r\) from equation (1). References in table, otherwise data is from [5]. Average \(r=1.294\).
Figure 2: Incoming light of irradiance \(I_{0}\), passing through the transparent segments \(M\), reflecting off the rough surface \(S\) and reflecting again off the inner surface of \(M\).
## III Applications and discussion
The formation of condensates in the gas giants has long been a matter of debate in the scientific community [14; 15; 16]. Specifically, in the case of Jupiter, explaining the lack of spectral evidence of ammonia posed a challenge [17; 18]. However, a number of mechanisms have been put forth [15; 16; 18; 19; 20] which have advanced our understanding. Nevertheless, the atmospheric structure and dynamics of the giant planets retain many open problems [21].
While we do not provide detailed causal mechanisms, we do highlight the fact that equation (1) gives temperature values for the gas giants that seem to correspond to the layer in the atmosphere where condensates start to form. Employing equation (1) to determine equivalent "surface" temperatures for gas giants yields a temperature of 133.35K for Jupiter and 93.7K for Saturn. The estimate for Jupiter is consistent with the temperature of 132.79K at 0.5 bar [11, Table 7], above which NH\({}_{3}\) saturation would occur, leading to ammonia clouds. Similarly, for Saturn there is a drop in the lapse rate between 0.5 and 0.3 bar which indicates a transition to a lower convective haze layer [13]. At 0.275 bar, the temperature is 93.5K [13, Table 1], again consistent with our result for Saturn. The data is also included in Table 1.
We computed the surface temperature for Titan in Table 1 by taking the average of the estimates of 90.6 and 94K from the literature [8; 9]. It is interesting to note that the same temperature is also found at higher altitudes, where a condensate haze forms at a pressure of 0.03 bar [22]. While it is not obvious why this should be the case, it is nevertheless consistent with our findings for the gas giants. Application of equation (1) to Uranus and Neptune does not yield temperatures significantly different from the black body estimate, hence we have not included the results here.
Other bodies in the solar system, such as Mercury, the Moon, Mars and Triton, do not have a substantial atmosphere and their surface temperature fluctuates significantly [23; 24; 25; 26]. This makes defining an average temperature less meaningful and unsuitable for applying equation (1).
## IV Conclusion
We have presented the empirical equation (1) that allows us to determine surface temperatures of rocky celestial bodies with substantial atmospheres. Remarkably, the solar irradiance and bond albedo alone are sufficient to determine the surface temperature. We have provided a geometric justification for this fact, in which the universal parabolic constant plays a key role in determining the ratio between the inner and outer bond albedo. In addition, we have extended the applicability of equation (1) to gas giants, finding the temperature where condensates start to form. We hope the work inspires future studies into these matters.
###### Acknowledgements.
I would like to thank Francesco Bertolotti for his feedback on the article.
## Appendix
The focus \(F\) has \(x_{F}=0\). For coordinate \(x_{G}\) we have:
\[\begin{split} x_{G}&=\frac{R}{R+h}x_{B}+\frac{y_{B} }{R+h}\sqrt{x_{B}^{2}+y_{B}^{2}-R^{2}}\\ &\simeq 2x_{B}\end{split} \tag{5}\]
where \(R\) is the radius of \(S\) and \(h\) is the distance between \(S\) and \(M\). The coordinate \(x_{A}=-x_{B}\), so \(AB=2x_{B}\). We assume \(h\ll AB\ll R\); then the length of the circular segment is \(l_{1}\simeq AB\simeq FG\).
Figure 3: The geometry around a segment \(AB\) of \(M\). Lines \(AC\) and \(BD\) go through the origin, which is the center of \(S\). Tangents at \(C\) and \(D\) intersect the circle encompassing \(AB\) at points \(E\) and \(G\). The midpoint of \(EG\) is the focus \(F\) of the parabola \(P\). Light comes in with irradiance \(I_{0}\), reflects off \(M\) with exitance \(I_{1}\) and again off \(S\) with exitance \(I_{2}\). |
2302.04129 | Hyperspectral Image Compression Using Implicit Neural Representation | Hyperspectral images, which record the electromagnetic spectrum for a pixel
in the image of a scene, often store hundreds of channels per pixel and contain
an order of magnitude more information than a typical similarly-sized color
image. Consequently, concomitant with the decreasing cost of capturing these
images, there is a need to develop efficient techniques for storing,
transmitting, and analyzing hyperspectral images. This paper develops a method
for hyperspectral image compression using implicit neural representations where
a multilayer perceptron network $\Phi_\theta$ with sinusoidal activation
functions ``learns'' to map pixel locations to pixel intensities for a given
hyperspectral image $I$. $\Phi_\theta$ thus acts as a compressed encoding of
this image. The original image is reconstructed by evaluating $\Phi_\theta$ at
each pixel location. We have evaluated our method on four benchmarks -- Indian
Pines, Cuprite, Pavia University, and Jasper Ridge -- and we show the proposed
method achieves better compression than JPEG, JPEG2000, and PCA-DCT at low
bitrates. | Shima Rezasoltani, Faisal Z. Qureshi | 2023-02-08T15:27:00Z | http://arxiv.org/abs/2302.04129v2 | # Hyperspectral Image Compression Using Implicit Neural Representation
###### Abstract
Hyperspectral images, which record the electromagnetic spectrum for a pixel in the image of a scene, often store hundreds of channels per pixel and contain an order of magnitude more information than a typical similarly-sized color image. Consequently, concomitant with the decreasing cost of capturing these images, there is a need to develop efficient techniques for storing, transmitting, and analyzing hyperspectral images. This paper develops a method for hyperspectral image compression using implicit neural representations where a multilayer perceptron network \(\Phi_{\theta}\) with sinusoidal activation functions "learns" to map pixel locations to pixel intensities for a given hyperspectral image \(\mathrm{I}\). \(\Phi_{\theta}\) thus acts as a compressed encoding of this image. The original image is reconstructed by evaluating \(\Phi_{\theta}\) at each pixel location. We have evaluated our method on four benchmarks--Indian Pines, Cuprite, Pavia University, and Jasper Ridge--and we show the proposed method achieves better compression than JPEG, JPEG2000, and PCA-DCT at low bitrates.
## 1 Introduction
Hyperspectral images capture the electromagnetic spectrum for a pixel in the image of a scene [31, 21]. These images offer many possibilities for object detection, material identification, and scene analysis. The costs associated with capturing high-resolution temporal, spatial, and spectral data continue to decrease, and it is no surprise that hyperspectral images have found widespread use in areas such as remote sensing, biotechnology, crop analysis, environmental monitoring, food production, medical diagnosis, pharmaceutical industry, mining, and oil & gas exploration [20, 24, 2, 19, 33, 9, 3, 30, 48, 41, 17, 25, 18, 12]. Unlike color images that record red, green, and blue channels per pixel, hyperspectral images record 100s of channels per pixel, representing pixels' spectra. This suggests that hyperspectral images require two orders of magnitude more space than what is needed to store similarly-sized color images. Consequently, there is a need to develop efficient schemes for capturing, storing, transmitting, and analyzing hyperspectral images. This work studies hyperspectral image compression, which plays an important role in the storage and transmission of these images.
Recently, there has been a surge in interest in learning-based compression schemes. For example, autoencoders [1] and rate-distortion autoencoders [4, 5] have been used to learn compact representations of the input signals. Here network weights together with the signal signature--latent representation in the case of autoencoders--serve as the compressed representation of the input signal. Other concurrent works are exploring the use of Implicit Neural Representations (INRs) for signal compression. INRs are well-suited to represent data that lives on an underlying grid, and these offer a new paradigm for signal representation. The goal here is to learn a mapping between a location, say an \((\mathrm{x},\mathrm{y})\) pixel location for a 2D-image \(\mathrm{I}\), and the signal value at that location \(\mathrm{I}[\mathrm{x},\mathrm{y}]\). This mapping is subsequently used to recreate the original signal. It is as simple as evaluating the INR at various locations. In the case of INRs, network weights serve as the learned representation of the input signal.
We investigate the use of INRs for hyperspectral image compression and show that it is possible to achieve high rates of compression while maintaining acceptable reconstruction quality. We evaluate the proposed approach on four benchmarks--Jasper Ridge, Indian Pines, Pavia University, and Cuprite--against three approaches--JPEG [23, 44], PCA-DCT [38], and JPEG2000 [14]--that others have used to compress hyperspectral data. The results confirm that our method achieves a better Peak Signal-to-Noise Ratio (PSNR) at low bitrates than those obtained by the other methods.
The rest of the paper is organized as follows. We discuss related works in the next section. Section 3 describes image compression using implicit neural representations and the pipeline of our proposed method. Next, we present the experimental setup and discuss the results. Section 5 concludes the paper with a summary and possible directions for future work.
## 2 Related work
Hyperspectral images exhibit both spatial and spectral redundancies that can be exploited to achieve compression. Lossless compression schemes--e.g., those that use quantization or rely upon entropy-coding or statistics-based schemes--where it is possible to recover the original signal precisely, often do not yield large savings [35, 40]. Lossy compression schemes, on the other hand, promise large savings while maintaining acceptable reconstruction quality. Inter-band compression techniques aim to eliminate spectral redundancy [10], while intra-band compression techniques aim to exploit spatial correlations. Intra-band compression techniques often follow the ideas developed for typical color image compression. [56] exploits the fact that groups of pixels around the same location in two adjacent bands are strongly correlated and proposes schemes that perform both inter-band and intra-band compression. Principal Component Analysis (PCA) based techniques are frequently used for hyperspectral image compression. PCA offers strong spectral decorrelation and is used to reduce the number of channels in a hyperspectral image. The remaining channels are subsequently compressed using the Joint Photographic Experts Group (JPEG) or JPEG2000 standard [7, 15, 53, 43]. Along similar lines, tensor decomposition methods have also been applied to the problem of hyperspectral image compression [55]. Tensor decomposition achieves dimensionality reduction while maintaining the spatial structure. Transform coding schemes that achieve image compression by reducing spatial correlation have also been used to compress hyperspectral data. The Discrete Cosine Transform (DCT) has been used to perform intra-band compression; however, it ignores inter-band (or spectral) redundancy. A 3D-DCT that divides a 3-dimensional hyperspectral image into \(8\times 8\times 8\) datacubes has been proposed to achieve both inter-band and intra-band compression [45]. Similarly to JPEG, which uses \(8\times 8\) blocks, 3D-DCT exhibits blocking effects in reconstructed hyperspectral images. The blocking effects can be removed to some extent by using the wavelet transform instead [46, 22]. Video coding methods coupled with inter-band spectral prediction modeling have also been proposed to compress hyperspectral images.
## 3 Method
### Background
In the 3D vision field, interest in using neural networks to represent data has recently increased [39, 42, 11], after the original proposal made by [50]. High-resolution signals can be compactly encoded via implicit representations, as suggested by studies [36, 49, 51].
Commonly, learned image compression techniques use hierarchical variational autoencoders [6, 32, 37], where the latent variables are discretized for entropy coding purposes. Paper [16] proposes a different approach for RGB image compression. Their method stores the weights of a neural network overfitted to the image. Similar to previous research on latent variable models [27, 28, 29, 34], numerous studies [8, 26, 54] make an effort to close the amortization gap [13] by combining the usage of amortized inference networks with iterative gradient-based optimization procedures. Using inference-time per-instance optimization, [54] also identifies and makes an effort to bridge the discretization gap caused by quantizing the latent variables. The concept of per-instance model optimization is expanded upon in research [52], which fine-tunes the decoder for each instance and transmits updates to the quantized decoder's parameters together with the latent code to provide better rate-distortion performance.
### Image Compression using INRs
INRs represent data as a continuous function from coordinates to values, storing coordinate-based data like photos, videos, and 3D shapes. For instance, a hyperspectral image maps the horizontal and vertical coordinates \((p_{x},p_{y})\) to a vector in channel space:
\[I:(p_{x},p_{y})\rightarrow(ch_{0},ch_{1},...,ch_{n}), \tag{1}\]
in which \(ch_{i}\) refers to the \(i\)-th channel value and \(n+1\) equals the number of channels for the image. A neural network \(f_{\Theta}\), often a Multi-Layer Perceptron (MLP) with parameters \(\Theta\), can be used to approximate this mapping such that \(I(p_{x},p_{y})\approx f_{\Theta}(p_{x},p_{y})\). INRs can be evaluated on any coordinates inside the normalized range \([-1,1]\) since they are continuous functions.
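A minimal sketch of such a network in PyTorch is shown below; it follows the sine-activated SIREN design [49] adopted in our pipeline, but the layer width, depth, and frequency factor \(\omega_{0}=30\) are illustrative choices rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, with SIREN-style init."""
    def __init__(self, in_dim, out_dim, w0=30.0, first=False):
        super().__init__()
        self.linear, self.w0 = nn.Linear(in_dim, out_dim), w0
        bound = 1.0 / in_dim if first else (6.0 / in_dim) ** 0.5 / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class INR(nn.Module):
    """Maps coordinates (p_x, p_y) in [-1, 1]^2 to an n-channel spectrum."""
    def __init__(self, n_channels, width=128, depth=4):
        super().__init__()
        hidden = [SineLayer(width, width) for _ in range(depth - 1)]
        self.net = nn.Sequential(SineLayer(2, width, first=True),
                                 *hidden, nn.Linear(width, n_channels))

    def forward(self, coords):
        return self.net(coords)
```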
INRs implicitly store all data in the network weights. The coordinate, which is used as the input to the INR, contains no information.
Figure 1: Overview of INR-based compression pipeline comprising overfitting, model compression using architecture search, and model compression using weight quantization.
The INR is trained through the encoding procedure. Decoding amounts to loading a set of weights into the network and evaluating it on a coordinate grid. This can be summed up as:
\[\vartheta^{*}=\arg\min_{\vartheta}\mathrm{L}(\mathrm{x},\mathrm{f}_{\vartheta}(\mathrm{p}))\;\xrightarrow{\text{transmit }\vartheta^{*}}\;\widehat{\mathrm{X}}=\mathrm{f}_{\vartheta^{*}}(\mathrm{p}). \tag{2}\]
In order to recreate a distorted replica of the original image X, we just need to store \(\vartheta^{*}\). With our strategy, we are able to simultaneously accomplish compact storage and efficient reconstruction.
### Compression Pipeline for INRs
The pipeline for our INR-based compression is described in this section. Figure 1 presents the entire pipeline and Figure 2 shows a higher-level overview.
**Stage 1: Overfitting.** We train an MLP to map pixel locations to pixel values in this work. In fact, at test time, we overfit the INR f to a data sample. This step is similar to calling the encoder in other learned techniques. To underline that the INR is trained only to represent one image, we refer to this stage as overfitting.
We save the weights of a neural network overfitted to the image rather than the pixel values for each pixel of a hyperspectral image. A general illustration of this work is shown in Figure 2. To encode an image, we apply an MLP, which maps pixel locations to pixel values. We evaluate the MLP at each pixel position to decode the image. While the high-frequency content of some hyperspectral images makes overfitting such multi-layer perceptrons, also known as implicit neural representations, challenging [47, 51], recent studies have demonstrated that this can be mitigated by employing sinusoidal encodings and activations [51, 36, 49]. As a result, in this study, we use MLPs with sine activations, often known as SIRENs [49].
As a result, we first overfit an MLP to the hyperspectral image, then quantize and transmit its weights. To reconstruct the hyperspectral image, the transmitted MLP is then evaluated at all pixel positions. Let I represent the hyperspectral image we want to encode, and let \(\mathrm{I[x,y]}\) give the pixel values at the pixel position \(\mathrm{(x,y)}\). We build a function \(\mathrm{f_{\Theta}}\) with parameters \(\Theta\) mapping pixel positions to pixel band values. The hyperspectral image can then be encoded by overfitting \(\mathrm{f_{\Theta}}\) to it under some distortion control. Given an image x and a coordinate grid p, we minimize the objective:
\[\arg\min_{\vartheta}\mathrm{L_{MSE}(x,f_{\vartheta}(p))}. \tag{3}\]
The parameterization of \(\mathrm{f_{\Theta}}\) must be carefully chosen. Even when utilizing quite a few parameters, parameterizing \(\mathrm{f_{\Theta}}\) using an MLP with standard activation functions leads to underfitting [51, 49]. We choose the sine activation function for this work, following [49].
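The overfitting stage then reduces to a plain gradient-descent loop. The sketch below assumes `coords` (an \((HW)\times 2\) grid in \([-1,1]^{2}\)) and `image` (the flattened \((HW)\times n\) hyperspectral cube, normalized to \([0,1]\)) have been prepared; the learning rate of 2e-4 is the one reported in our experimental details, while the step budget is an illustrative placeholder. As noted later, we keep the snapshot with the best PSNR because the optimization can be noisy.

```python
model = INR(n_channels=image.shape[-1])
opt = torch.optim.Adam(model.parameters(), lr=2e-4)

num_steps = 50_000  # illustrative optimization budget
best_psnr, best_state = -float("inf"), None
for step in range(num_steps):
    mse = ((model(coords) - image) ** 2).mean()
    psnr = -10.0 * torch.log10(mse.detach())  # valid for data scaled to [0, 1]
    if psnr > best_psnr:  # keep the best snapshot, as the optimization is noisy
        best_psnr = psnr
        best_state = {k: v.detach().clone()
                      for k, v in model.state_dict().items()}
    opt.zero_grad()
    mse.backward()
    opt.step()
```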
**Stage 2: Model Compression using Architecture Search.** We run a hyperparameter tuning over the number of layers and layer size of the MLP and also quantize the weights from 32-bit to 16-bit precision to decrease the model size. As a result, we explore architecture search and weight quantization strategies to decrease model size.
**Stage 3: Model Compression using Weight Quantization.** Because the MLP parameters are stored as a compressed representation of the image, limiting the number of weights improves compression. The purpose is to fit \(\mathrm{f_{\Theta}}\) to I with the fewest parameters possible. The hyperspectral images are then compressed using model compression.
Given the stored quantized weights, decoding simply amounts to evaluating the function \(\mathrm{f_{\Theta}}\) at each pixel position to rebuild the image. This decoding method provides several advantages, such as increased flexibility: we may decode the
Figure 3: JPL's AVIRIS hyperspectral data cube acquired from a NASA ER-2 plane over Moffett Field
Figure 2: Compression with implicit neural representations. A neural network that maps pixel positions (x, y) to channel values is used to overfit an image. The weights of this neural network are then quantized to a smaller bit-width and transmitted.
image in stages, for instance, by decoding portions of it or a low-resolution version initially, just by evaluating \(\mathrm{f_{\Theta}}\) at different pixel positions. With autoencoder-based approaches, partially decoding images in this manner is challenging, demonstrating yet another benefit of this method.
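Stage 3 and the decoding step can be summarized in a few lines. The sketch below simply casts the overfitted weights from the loop above to 16-bit floats before storage and then evaluates the reloaded MLP on the coordinate grid; `H`, `W`, and the file name are placeholders.

```python
# Stage 3: quantize the overfitted weights to 16-bit precision and store them;
# the resulting file is the compressed representation of the image.
torch.save({k: v.half() for k, v in best_state.items()}, "inr_fp16.pt")

# Decoding: reload the weights, cast back to float32, and evaluate the MLP
# on the full (or any partial) coordinate grid. H and W are placeholders
# for the spatial dimensions of the image being reconstructed.
state = {k: v.float() for k, v in torch.load("inr_fp16.pt").items()}
model.load_state_dict(state)
with torch.no_grad():
    reconstruction = model(coords).reshape(H, W, -1)
```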
### Datasets
We perform experiments on the Indian Pines, Cuprite, Jasper Ridge, and Pavia University datasets. The AVIRIS instrument gathered the Indian Pines, Cuprite, and Jasper Ridge datasets. With the spatial dimensions in the face of the cube and the spectral dimension in its depth, hyperspectral imaging sensors produce a three-dimensional data structure known as a data cube. Figure 3 shows a data cube captured by the Jet Propulsion Laboratory's (JPL) AVIRIS instrument above Moffett Field in California. Figure 3's false-color image on top of the cube depicts a complex structure in the water and evaporation ponds to the right. On the cube's top, you can also see the Moffett Field airport.
The AVIRIS sensor gathers information from geometrically coherent spectroradiometric data that can be used to characterize the Earth's surface and atmosphere. The studies of oceanography, environmental science, snow hydrology, geology, volcanology, soil and land management, atmospheric and aerosol studies, agriculture, and limnology can all benefit from the use of these data. Applications for assessing and monitoring environmental risks such as toxic waste, oil spills, and air, land, and water pollution are currently being developed. The observations can be transformed to ground reflectance data, which can subsequently be utilized for quantitative assessment of surface features, with the required calibration and adjustment for atmospheric factors.
#### 3.4.1 Indian Pines
The Indian Pines dataset, which has 145 by 145 pixels and 16 ground-truth classes, was collected by the AVIRIS instrument in 1992. It was obtained in NW Indiana over a mixed agricultural and wooded area. There are 220 channels in this image. The Indian Pines scene is made up primarily of agriculture, with the remaining third being forest or other types of perennial vegetation. Along with some low-density housing and minor roads, there are two major dual-lane highways, a rail line, and other built structures. Figure 4(a) shows an image of this dataset.
#### 3.4.2 Jasper Ridge
One well-known hyperspectral image is Jasper Ridge. The Jet Propulsion Laboratory (JPL) captured it using the AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) sensor. Its dimensions are 512 by 614 pixels. A total of 224 electromagnetic bands between 380 nm and 2,500 nm are recorded for each pixel. Figure 4(b) shows an image of this dataset.
#### 3.4.3 Pavia University
The Pavia University dataset was captured by the ROSIS-03 aerial instrument above the Pavia University in Italy. The flight above Pavia, Italy, was conducted by the German Aerospace Center (DLR) as part of the HySens project, which is run and funded by the European Union. For Pavia Centre and Pavia University, there are 102 and 103 spectral bands, respectively. Pavia Centre is a 1096 by 1096 pixel image, whereas Pavia University is 610 by 610 pixels. The Pavia University data set includes several classes, such as _trees, asphalt, bitumen, gravel, metal sheet, shadow, bricks, meadow,_ and _dirt_, and it belongs to
Figure 4: From left to right, datasets: Indian Pines, Jasper Ridge, Pavia University, and Cuprite.
the Engineering School at the University of Pavia. Figure 4(c) shows an image of this dataset.
#### 3.4.4 Cuprite
The Cuprite dataset covers the Cuprite mining district in Nevada, United States. There are 224 channels with wavelengths between 370 and 2480 nm. Figure 4(d) shows an image of this dataset.
## 4 Experiments
We compare our model against several benchmark encoders: JPEG [23, 44], PCA-DCT [38], and JPEG2000 [14]. JPEG 2000 treats each band separately, encoding each band independently with the JPEG 2000 image compression standard and coding method. JPEG handles each band the same way, but uses the JPEG standard instead. PCA-DCT employs an orthogonal transformation to project a hyperspectral image into a lower-dimensional image. The first few PCs are typically retained, while the other components are usually discarded after compression. The PCA-DCT-based method alters the HS image's physical composition by choosing fewer principal components. As a result, PCA-DCT is unable to achieve a higher PSNR. However, as judged by the human visual system, PCA can still produce images with a respectable level of quality. It has been extensively utilized for lossy HS image compression throughout various application fields. We choose these encoders for comparison primarily because they are all well-known and reliable benchmarks.
We first identify valid depth and width configurations for the MLPs representing an image to select the optimum model designs for a particular parameter budget (measured in bits per pixel per band, or bpppb). The parameter bpppb
Figure 5: Rate distortion plots on different datasets.
is calculated as follows:
\[\text{bpppb}=\frac{\text{\#parameters}\times\text{(bits per parameter)}}{\text{\#pixels}\times\text{\#bands}}. \tag{4}\]
The optimum architecture is then chosen by performing a hyperparameter search over all viable designs and learning rates on a single image. The final model is then trained separately on each image in the dataset and downscaled to 16-bit precision (for model compression purposes). We observe that, after training, reducing the precision of the weights from 32 to 16 bits causes essentially no increase in distortion, but reducing them further to 8 bits causes a large rise in distortion, outweighing the advantage of reducing the bpppb.
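For completeness, equation (4) translates into a one-line helper. The sketch below counts the parameters of the INR defined earlier, assuming 16 bits per parameter after quantization:

```python
def bpppb(model, height, width, n_bands, bits_per_parameter=16):
    """Bits per pixel per band of a stored INR, following Eq. (4)."""
    n_params = sum(p.numel() for p in model.parameters())
    return n_params * bits_per_parameter / (height * width * n_bands)
```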
Peak signal-to-noise ratio (PSNR) metrics are used in the experiments to compare the accuracy of our model to JPEG, JPEG2000, and PCA-DCT. A key component of compression methods is image quality, which we quantify by the PSNR, expressed in decibels: it measures the ratio between the peak signal power and the reconstruction error, and is used to contrast the quality of an original image with that of its compressed counterpart. As the PSNR rises, the quality of the compressed or reconstructed image also rises. While PSNR is a measure of the peak error, MSE is the cumulative squared error between the original and compressed image; the smaller the MSE, the smaller the error. Using the following equation, we first determine the mean-squared error:
\[\text{MSE}=\frac{1}{M\times N}\sum_{m=1}^{M}\sum_{n=1}^{N}\big{(}I_{1}(m,n)-I_{2}(m,n)\big{)}^{2}, \tag{5}\]
where M and N in the previous equation stand for the numbers of rows and columns of the input images, respectively. The PSNR is then calculated using the following equation:
\[\text{PSNR}=10\log_{10}\left(\frac{R^{2}}{\text{MSE}}\right), \tag{6}\]
where R is the maximum possible pixel value of the input image. For instance, R is 1 if the input image is double-precision floating-point data in [0, 1], and R is 255 if the data is an 8-bit unsigned integer.
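Equations (5) and (6) correspond to the following helper (a minimal sketch; the peak value R defaults to 1 for data normalized to [0, 1] and should be set to 255 for 8-bit inputs):

```python
import numpy as np

def psnr(original, reconstructed, peak=1.0):
    """PSNR in dB between two images, following Eqs. (5)-(6)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```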
Figure 5(a) illustrates the outcomes of this technique on the Indian Pines dataset for various bpppb values. As can be
Figure 6: Compression comparison plot for different datasets.
observed, raising the bpppb level raises the compression quality, as measured by PSNR. We repeated these experiments for the other datasets, Jasper Ridge in Figure 5(b), Cuprite in Figure 5(c), and Pavia University in Figure 5(d), and, as can be seen, the results follow the same trend.
### Comparison with State-of-the-Art
As mentioned before, we compare our method against several benchmarks. Figure 6(a) shows this comparison for the Jasper Ridge dataset. We report the PSNRs achieved at different bpppbs for our method in two ways: the results at full precision (without using entropy coding), labeled HSICS, and the results converted to half-precision (using entropy coding), labeled HP_HSICS. We then compare these results with the JPEG, JPEG2000, and PCA-DCT methods. As can be seen, our method outperforms JPEG, JPEG2000, and PCA-DCT at every bpppb that we experiment with. We also conducted another experiment on the Indian Pines dataset and compared its results with JPEG2000, JPEG, and PCA-DCT. As can be seen in Figure 6(b), our method in full precision outperforms all other methods in this comparison at every bitrate, but in half-precision, it only outperforms the other methods at low bitrates (lower than 0.4 bpppb). Figures 6(c) and 6(d) show this comparison for the Pavia University and Cuprite datasets, respectively. As can be seen, our method outperforms JPEG,
Figure 7: Model training on different datasets
JPEG2000, and PCA-DCT for the experiments on the Pavia University dataset in Figure 6(c). Figure 6(d) shows that our method outperforms JPEG, JPEG2000, and PCA-DCT, but when comparing the results of the half-precision method with the benchmarks, we can see that it outperforms JPEG and JPEG2000 at every bitrate and PCA-DCT at bitrates lower than 0.02 bpppb.
Our approach does not need a decoder during testing, in contrast to the majority of previous neural data compression algorithms. In fact, in such systems the decoder model is enormous even though the latent code representing the compressed image is small. As a result, the decoding device also needs a lot of memory. In our situation, the decoder-side memory requirements are orders of magnitude smaller because we only need the weights of a very small MLP. We show an example of the overfitting procedure in Figures 7(a), 7(b), 7(c), and 7(d). This experiment has been done on Jasper Ridge at 0.15 bpppb, Pavia University at 0.025 bpppb, Cuprite at 0.02 bpppb, and Indian Pines at 0.2 bpppb. We compare our model with the JPEG, JPEG2000, and PCA-DCT methods. As can be seen, our method outperforms JPEG, JPEG2000, and PCA-DCT after some iterations and continues improving beyond that. Since the optimization can be noisy, we simply save the model with the best PSNR.
### Experimental Details
In Figures 7(a), 7(b), 7(c), and 7(d), we show the performance of various valid architectures at 0.2 bpppb for the Indian Pines dataset, 0.01 bpppb for the Cuprite dataset, 0.1 bpppb for the Jasper Ridge dataset, and 0.03 bpppb for the Pavia University dataset, respectively. As can be observed, the design choice affects the compression quality, with different optimal architectures for various bpppb values. In the following, we explain the experimental details of this work.
Adam was used to train all models. We employed MLPs with two input dimensions (corresponding to the (x, y) coordinates) and output dimensions equal to the number of spectral bands of the dataset image. Except for the final layer, we used sine non-linearities and the initialization method given in [49]. A learning rate of 2e-4 was employed. In Figures 9, 10, 11, and 12, we include qualitative results comparing the compression artifacts from our proposed method with the original image on the Cuprite, Indian Pines, Jasper Ridge, and Pavia University datasets, respectively.
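For illustration, a sine-MLP of this kind can be sketched as follows (PyTorch; the hidden width and depth are placeholders since the optimal architecture varies with the bpppb target, and we assume [49] refers to the SIREN-style initialization with a first-layer frequency of 30):

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine non-linearity, SIREN-style init."""
    def __init__(self, in_dim, out_dim, omega0=30.0, first=False):
        super().__init__()
        self.omega0 = omega0
        self.linear = nn.Linear(in_dim, out_dim)
        # First layer: U(-1/in, 1/in); hidden layers: U(-sqrt(6/in)/w0, ...).
        bound = 1.0 / in_dim if first else math.sqrt(6.0 / in_dim) / omega0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

def make_inr(num_bands, hidden=64, depth=3):
    """MLP mapping (x, y) -> band values; the final layer is linear."""
    layers = [SineLayer(2, hidden, first=True)]
    layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
    layers += [nn.Linear(hidden, num_bands)]
    return nn.Sequential(*layers)

model = make_inr(num_bands=224)  # e.g., an AVIRIS-like band count
```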
## 5 Conclusion
In this work, we employ implicit neural representation to compress hyperspectral images by using neural networks to map pixel locations to band values and storing the weights of the resulting networks. We then evaluate the MLP at each pixel position to decode the image. Through experiments, we demonstrated that this method can outperform JPEG, JPEG2000, and PCA-DCT at low bit rates. For future work, we want to use meta-learned base networks, a recent method for neural image compression, and then add PCA to that method to improve the results.

Figure 8: The performance of various valid architectures for different datasets.
|
2308.11219 | Controlling the 2D magnetism of CrBr$_3$ by van der Waals stacking
engineering | The manipulation of two-dimensional (2D) magnetic order is of significant
importance to facilitate future 2D magnets for low-power and high-speed
spintronic devices. Van der Waals stacking engineering makes promises for
controllable magnetism via interlayer magnetic coupling. However, directly
examining the stacking order changes accompanying magnetic order transitions at
the atomic scale and preparing device-ready 2D magnets with controllable
magnetic orders remain elusive. Here, we demonstrate effective control of
interlayer stacking in exfoliated CrBr$_3$ via thermally assisted strain
engineering. The stable interlayer ferromagnetic (FM), antiferromagnetic (AFM),
and FM-AFM coexistent ground states confirmed by the magnetic circular
dichroism measurements are realized. Combined with the first-principles
calculations, the atomically-resolved imaging technique reveals the correlation
between magnetic order and interlayer stacking order in the CrBr$_3$ flakes
unambiguously. A tunable exchange bias effect is obtained in the mixed phase of
FM and AFM states. This work will introduce new magnetic properties by
controlling the stacking order and sequence of 2D magnets, providing ample
opportunities for their application in spintronic devices. | Shiqi Yang, Xiaolong Xu, Bo Han, Pingfan Gu, Roger Guzman, Yiwen Song, Zhongchong Lin, Peng Gao, Wu Zhou, Jinbo Yang, Zuxin Chen, Yu Ye | 2023-08-22T06:09:58Z | http://arxiv.org/abs/2308.11219v1 | # Controlling the 2D magnetism of CrBr\({}_{3}\) by van der Waals stacking engineering
###### Abstract
**The manipulation of two-dimensional (2D) magnetic order is of significant importance to facilitate future 2D magnets for low-power and high-speed spintronic devices. Van der Waals stacking engineering makes promises for controllable magnetism via interlayer magnetic coupling. However, directly examining the stacking order changes accompanying magnetic order transitions at the atomic scale and preparing device-ready 2D magnets with controllable magnetic orders remain elusive. Here, we demonstrate effective control of interlayer stacking in exfoliated CrBr\({}_{3}\) via thermally assisted strain engineering. The stable interlayer ferromagnetic (FM), antiferromagnetic (AFM), and FM-AFM coexistent ground states confirmed by the magnetic circular dichroism measurements are realized. Combined with the first-principles calculations, the atomically-resolved imaging technique reveals the correlation between magnetic order and interlayer stacking order in the CrBr\({}_{3}\) flakes unambiguously. A tunable exchange bias effect is obtained in the mixed phase of FM and AFM states. This work will introduce new magnetic properties by controlling the stacking order, and sequence of 2D magnets, providing ample opportunities for their application in spintronic devices.**
Van der Waals materials with intrinsic magnetism have attracted tremendous attention due to their atomic-scale thickness that can be used to realize integrable and flexible magnetic devices[1; 2; 3; 4; 5]. Effective control of their magnetic orders is essential for fabricating all kinds of spintronic devices. In particular, when two-dimensional (2D) magnetism was first demonstrated[1], it was surprising to find that exfoliated thin CrI\({}_{3}\) behaves as an A-type antiferromagnetic (AFM) semiconductor with intralayer ferromagnetic (FM) coupling and interlayer AFM coupling, while the bulk CrI\({}_{3}\) crystal behaves as a ferromagnet at low temperatures. A large number of subsequent experiments and theories confirmed that this intriguing difference comes from the change of the interlayer magnetic coupling caused by the different stacking orders in CrI\({}_{3}\) and CrCl\({}_{3}\)[6; 7; 8; 9; 10; 11; 12; 13]. Specifically, the CrI\({}_{3}\) crystal undergoes a transition from the monoclinic (M) phase to the rhombohedral (R) phase below room temperature[14; 15; 16; 17] and exhibits interlayer FM coupling at low temperature, while exfoliated thin layers of CrI\({}_{3}\) maintain the monoclinic phase at low temperatures, which manifests as interlayer AFM coupling. Based on these understandings, strategies such as hydrostatic pressure have been proposed to effectively control the interlayer stacking order and thus the magnetic properties[6; 7; 8; 9; 18; 19; 20; 21], and have recently promoted the extensive development of moiré magnetism[22; 23; 24; 25]. Nevertheless, the correlation between crystal structure and magnetic order has mainly been characterized by optical means such as Raman spectroscopy and nonlinear optics[6; 7; 8; 10], and verification at atomic resolution is still lacking. Surprisingly, as an isostructural material, both bulk and exfoliated CrBr\({}_{3}\) flakes have been demonstrated and widely used as perfect ferromagnetic semiconductors[26; 27; 28; 29; 30]. The remaining questions are (1) what is the essential difference of CrBr\({}_{3}\) compared with CrI\({}_{3}\) and CrCl\({}_{3}\), and (2) is there an effective method to control the magnetic order of CrBr\({}_{3}\)?
Unlike CrI\({}_{3}\) and CrCl\({}_{3}\) crystals that transition from M phase to R phase at low temperatures[14; 15; 16; 17], earlier work predicted that CrBr\({}_{3}\) would undergo this structural phase transition above room temperature[31], but it has not been experimentally confirmed. Additionally, previous pioneering work verified the correlation between the
interlayer magnetic coupling and the stacking order in molecular beam epitaxy-synthesized CrBr\({}_{3}\) bilayers by in situ spin-polarized scanning tunneling microscopy[32]. Based on the better stability of CrBr\({}_{3}\) and its expected structural phase transition above room temperature[30; 31; 32; 33], CrBr\({}_{3}\) provides a unique platform for effectively controlling the magnetic order through van der Waals stacking engineering, which is crucial for the realization of related spintronic devices. Here, effective interlayer stacking control is achieved in exfoliated CrBr\({}_{3}\) flakes by thermally assisted strain engineering, and the correlation between the van der Waals stacking order and the magnetic order is verified unambiguously at the atomic scale.
We start with the crystallographic structure of bulk CrBr\({}_{3}\) crystals. The single-crystal X-ray diffraction (SC-XRD) spectrum of the as-grown CrBr\({}_{3}\) crystal collected at 273 K (Supplementary Table 1) gives an R structure (\(R\bar{3}\), \(a=6.306\) Å, \(c=18.372\) Å), different from the M phase of CrI\({}_{3}\) and CrCl\({}_{3}\) crystals at room temperature[14; 15; 16]. In a single-layer CrBr\({}_{3}\), Cr atoms form a honeycomb structure surrounded by six octahedrally coordinated Br atoms (Fig. 1a). Sliding along the high-symmetry [1\(\bar{1}\)0] direction and stacking these monolayers yields the R phase of CrBr\({}_{3}\) (left panel of Fig. 1b). Magnetic measurements of the CrBr\({}_{3}\) crystal reveal long-range ferromagnetism with a Curie temperature (\(T_{\rm c}\)) of about 33 K and an out-of-plane magnetic anisotropy with a relatively low spin-flip field (\(\sim 0.25\) T) (Supplementary Fig. S1), indicating that the room-temperature (RT) CrBr\({}_{3}\) R phase corresponds to interlayer FM coupling at temperatures below its \(T_{\rm c}\).
To fully understand the evolution of the crystal structure with temperature, especially to identify a possible structural phase transition, powder XRD measurements were performed on CrBr\({}_{3}\) crystals between 300 K and 548 K. Figure 1c shows the temperature-dependent XRD measurements during one warming process. Near 378 K, the diffraction peak clearly splits into two peaks at 59.70\({}^{\circ}\) and 59.77\({}^{\circ}\), which are almost linearly red-shifted with increasing temperature on each side of 378 K, indicative of a temperature-induced structural phase transition. The significant thermal hysteresis during cooling and warming processes demonstrates the first-order nature of the transition (Supplementary Fig. 2). A differential scanning calorimetry measurement further confirmed the phase transition with an endothermic peak at an exact temperature of 373.7 K (Supplementary Fig. 2). Is the HT phase the M phase (right panel of Fig. 1b; CrBr\({}_{3}\) monolayers slide along the [100] direction and stack, \(C2/m\)) as we expected, and what is its magnetic order?

**Fig.** 1: **Structure phase transition of bulk CrBr\({}_{3}\) crystals.** **a.** Atomic structure of single-layer CrX\({}_{3}\) (X=Cl, Br, I). Cr atoms (blue spheres) form a honeycomb lattice structure surrounded by six X atoms (gray spheres), which are octahedrally coordinated. **b.** Crystal structures of rhombohedral (R, left) and monoclinic (M, right) CrBr\({}_{3}\). **c.** Temperature-dependent (004) diffraction peak of CrBr\({}_{3}\) crystal measured by XRD during one warming process. The enlarged spectrum at 378 K shows a distinct two-peak behavior. **d.** Schematic illustration of thermally assisted strain engineering of exfoliated CrBr\({}_{3}\) flakes.

However, once cooled down, the HT phase transforms back to the RT R phase, which hinders us from revealing the crystal structure and magnetic properties of the HT phase.
Given the knowledge from extensive research on chromium trihalides[6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], we propose a thermally assisted strain engineering approach to stabilize the CrBr\({}_{3}\) HT phase (Fig. 1d). In an argon-filled glove box, the CrBr\({}_{3}\) flakes were exfoliated onto a polydimethylsiloxane (PDMS) substrate, and then the PDMS was carefully stretched several times on a hot plate set at 423 K. The stretched flakes subsequently transferred onto Si/SiO\({}_{2}\) substrates display the same magnetic properties as those on PDMS substrates (Supplementary Fig. 3). The magnetic properties of few-layer CrBr\({}_{3}\) flakes were probed using reflective magnetic circular dichroism (RMCD) microscopy (see Methods for details). CrBr\({}_{3}\) flakes exfoliated and stretched at room temperature were first fabricated for comparison (Supplementary Fig. 4).
**Fig.** 2: **Stacking-related magnetism in exfoliated CrBr\({}_{3}\) flakes.****a-c.** RMCD signal \(versus\) magnetic field sweeping curves of the FM CrBr\({}_{3}\) sample (**a**) prepared at room temperature, and the AFM (**b**) and mixed-phase (**c**) CrBr\({}_{3}\) samples prepared by thermal-assisted strain engineering process. The spin-flip fields are expressed as \(H_{\rm c}\) for FM CrBr\({}_{3}\), \(H_{1}\) and \(H_{2}\) for AFM CrBr\({}_{3}\). **d-e.** For S1 (**d**) and S2 (**e**) CrBr\({}_{3}\) samples, the RMCD signal plot in the parameter space of temperature and \(\mu_{0}H\) under descending field sweep. **f.** Schematic diagram of the magnetic ground state of 2L R and M phase CrBr\({}_{3}\) obtained by calculation. **g.** Calculated energy difference between AFM and FM states during 2L CrBr\({}_{3}\) shift from R phase to M phase.
The RMCD signal _versus_ magnetic field of a \(\sim\)10 nm flake (Fig. 2a) shows a distinct hysteresis loop with a spin-flip field of 0.04 T at 2 K, confirming its FM nature with a \(T_{\rm c}\) of about 33 K (Supplementary Fig. 5). When the thickness increases, the rectangular hysteresis vanishes and multiple magnetic transitions occur, in accordance with previous reports[26; 30].
In the thermally assisted stretched CrBr\({}_{3}\) flakes, we observe drastically different behaviors from the FM flakes, which we can classify into two categories. First, in the S1-class samples (Fig. 2b), a plateau with zero RMCD signal appears within \(\pm\)0.10 T, indicating typical AFM behavior. Then, the forced FM state is reached through a two-step spin-flip transition around 0.10 T and 0.40 T (\(H_{1}\) and \(H_{2}\), marked by the dashed lines in Fig. 2b), which resembles the even-layer A-type AFM CrI\({}_{3}\) but with weaker interlayer exchange coupling[22]. We speculate that the pure AFM state of CrBr\({}_{3}\) also arises from the M phase, which will be verified later by scanning transmission electron microscopy (STEM) characterizations. As the temperature increases, the spin-flip fields decrease continuously and vanish around 27 K, where the flake becomes paramagnetic (Fig. 2d and Supplementary Fig. 6). The critical temperature of the AFM CrBr\({}_{3}\) flake (Néel temperature, \(T_{\rm N}\)) is slightly lower than that of the FM flake (\(T_{\rm c}\) of 33 K). Second, apart from the pure AFM samples, the S2-class samples show an obvious residual magnetic moment near zero magnetic field (indicating the presence of an FM component), and then undergo another spin-flip (\(H_{2}\)) to a forced FM state under high fields (Fig. 2c). The magnetic configuration of the S2 sample is different from that of the uncompensated odd-layer A-type AFM sample but exhibits the coexistence of FM and AFM components because (1) all S2-class samples with different thicknesses exhibit distinct FM signals around 0 T; (2) the spin-flip fields of \(H_{\rm c}\) and \(H_{2}\), and the resulting changes in the RMCD signals, do not show obvious thickness dependence (Supplementary Fig. 7); and (3) \(H_{\rm c}\) and \(H_{2}\) are distributed in wide ranges of 0-0.10 T and 0.27-0.42 T, respectively (Supplementary Fig. 7), indicating that the FM and AFM components are coupled to each other.
**Fig.** 3: **STEM characterizations and corresponding simulations of FM, AFM, and mixed-phase CrBr\({}_{3}\).** **a-b.** HAADF-STEM images of FM (**a**) and AFM (**b**) CrBr\({}_{3}\) reveal the rhombohedral (**a**) and monoclinic (**b**) structures, in good agreement with the corresponding simulated images in the insets. The stripe contrast in the monoclinic structure is due to the arrangement of Cr-Br and Br hollow columns. **c.** HAADF-STEM image of a mixed-phase CrBr\({}_{3}\), revealing a new periodic structure whose unit cell is marked by the white dashed lines. The right panel shows the simulation results of the vertically stacked 7L R phase and 7L M phase. **d.** The atomic arrangement of Cr and Br atoms in mixed-phase CrBr\({}_{3}\) utilized for HAADF-STEM simulations. **e.** Schematic of the atomic model illustrating the vertical stacking of R and M phases.
Furthermore, (4) the AFM features seem to disappear at a lower temperature than the FM features (Fig. 2e), consistent with the fact that \(T_{\rm N}<T_{\rm c}\) for the pure AFM and FM samples. Based on the above experimental observations, the coexistence of FM and AFM components in the S2 sample might come from an incomplete phase transition, resulting in the coexistence of R and M phases (mixed phase).
First-principles calculations in 2L CrBr\({}_{3}\) link crystal stacking and magnetic order. The transition from the R to the M phase can be regarded as the sliding of the 2\({}^{nd}\) layer of CrBr\({}_{3}\) in the R phase. Density functional theory (DFT) calculations reveal the energy difference between the AFM state and the FM state during the shift from the R phase to the M phase (Fig. 2f). In the R phase, the FM state is more stable, with an energy 1.7 meV/Cr lower than the AFM state. As it approaches the M phase, the energy difference decreases, eventually crosses zero, and becomes negative, indicating that the M phase has an AFM ground state (Fig. 2g). This is consistent with our RMCD measurements of the CrBr\({}_{3}\) RT FM phase and HT AFM phase.
To experimentally confirm the atomic structures of the AFM and mixed-phase CrBr\({}_{3}\) samples and verify the correlation between magnetism and stacking order, we transferred representative FM, AFM, and mixed-phase samples after RMCD measurements onto copper microgrids for STEM characterizations. The high-angle annular dark-field (HAADF)-STEM image of the FM sample is highly consistent with the corresponding simulation results (Fig. 3a) and shows a uniform rhombohedral structure on a large scale (Supplementary Fig. 8), which agrees with the SC-XRD measurement results. For the AFM sample, the HAADF-STEM image shows an obvious anisotropic crystal structure (Fig. 3b). Comparison with the simulated monoclinic structure confirms that the stripe contrast is caused by the arrangement of Cr-Br (bright stripes) and Br hollow (dark stripes) atomic columns in the \(ab\) plane. RMCD measurements and DFT calculations, combined with STEM characterizations, collectively confirm that our thermally assisted strain process can engineer the van der Waals stacking in CrBr\({}_{3}\) and thus effectively control its 2D magnetic properties.
Furthermore, HAADF-STEM images of mixed-phase CrBr\({}_{3}\) samples exhibit a series of complex new periodic structures (Supplementary Fig. 8). Taking the zoomed-in image of Fig. 3c as an example, a unit cell (marked by the white dashed lines) contains dark spots on the four corners and irregularly distributed bright and less bright spots inside. The simulation result of the 7L R phase stacked vertically on the 7L M phase (right panel in Fig. 3c) agrees well with the experimental result. Figs. 3d-e show schematic diagrams of the atomic model in top and side views, illustrating the vertical stacking of the coexisting R (FM state) and M (AFM state) phases in mixed-phase CrBr\({}_{3}\). Small discrepancies between simulation and experiment suggest that the exact vertical stacking in this structure may be more than a simple superposition of R and M phases, but rather a complex combination of the two components.
The coexistence of FM and AFM orders in the mixed-phase CrBr\({}_{3}\) sample produces abundant AFM/FM interfaces (vertical and/or in-plane), which provide a platform for studying the exchange interactions established at the interfaces. When sweeping over a large magnetic field range of \(\pm 0.60\) T, the FM component encounters a time-reversal-symmetric AFM environment when flipping from up to down and from down to up (Fig. 4a). The FM component within the minor hysteresis loop of \(\pm 0.10\) T experiences a constant AFM environment, which generates an unprecedented exchange bias (EB) effect due to the coupling between the FM and AFM components. After being historically polarized by a positive saturation magnetic field of 0.60 T, the \(H_{\rm c+}\) of the minor FM hysteresis loop shifts to the right, showing a positive EB with an exchange field of \(+12\) mT (Fig. 4b).
**Fig.** 4: **Exchange bias effects in the mixed-phase CrBr\({}_{3}\) at 2 K.****a.** RMCD signal _versus_\(\mu_{0}H\) for a mixed-phase CrBr\({}_{3}\) sample (S3) at 2 K. Solid circles define the magnetic field sweep range in the exchange bias study. **b-c.** The RMCD signals _versus_\(\mu_{0}H\) swept within \(\pm 0.10\) T after applying a saturation field of 0.60 T (**b**) or \(-0.60\) T (**c**), yielding opposite exchange bias fields of \(+12\) mT (**b**) and \(-12\) mT (**c**).
In contrast, after being polarized by a negative saturation magnetic field of \(-0.60\) T, the \(H_{\rm c-}\) of the minor FM hysteresis loop shifts to the left, exhibiting a negative EB with an exchange field of \(-12\) mT. The direction of this EB can be tuned by the historical polarization field, and the EB is rather stable under multiple back-and-forth magnetic field sweeps (Supplementary Fig. 9), suggesting that the mixed-phase CrBr\({}_{3}\) is an ideal platform for exploring interface physics and developing novel van der Waals magnet-based spintronic devices.
In summary, we experimentally realized effective control of interlayer magnetic coupling in exfoliated CrBr\({}_{3}\) flakes by a thermally assisted strain engineering approach and comprehensively demonstrated the correlation between the atomically resolved stacking order and magnetism. In addition to inducing pure FM and AFM magnetic ground states in CrBr\({}_{3}\) flakes, we also reported mixed-phase CrBr\({}_{3}\) composed of FM and AFM components, leading to a tunable exchange bias effect. The precise interfaces in the vertical stacking and in-plane connection should be further examined to better understand the mechanism and application of the exchange bias effect, which may require cryo-focused ion beam (cryo-FIB) milling and in-situ STEM characterization. Our work broadens 2D magnetic material systems for studying and manipulating magnetic couplings and related physical properties[27, 28, 29], making them promising candidates for next-generation spintronic devices.
## Methods
**Crystal synthesis and characterization.** CrBr\({}_{3}\) single crystals were prepared by the chemical vapor transport (CVT) method[34]. High-purity Cr (28.8 mg, Alfa, 99.996 %) and TeBr\({}_{4}\) (371.2 mg, Alfa, 99.999 %) were mixed (molar ratio of 1:1.5) and then sealed in a silica tube under vacuum. Thereafter, the evacuated silica tube was placed in a two-zone tubular furnace. Crystal growth was carried out for 5 days under a temperature gradient from 750 \({}^{\circ}\)C to 450 \({}^{\circ}\)C, using a heating/cooling rate of 1 \({}^{\circ}\)C/min. Temperature-dependent powder XRD measurements were performed using Cu-K\(\alpha\) radiation (\(\lambda=1.5418\) Å). The magnetic properties of CrBr\({}_{3}\) crystals were measured using a physical properties measurement system (PPMS) produced by Quantum Design.
**Magneto-optical measurements.** RMCD measurements were performed based on the Attocube closed-cycle cryostat (attoDRY2100) with a temperature down to 1.6 K and an out-of-plane magnetic field up to 9 T. The detailed setup and measurement have been described in our previous work.
**STEM measurements.** Atomic-resolution HAADF-STEM images were recorded using an aberration-corrected Titan Themis G2 microscope operating at 300 kV. The convergence semi-angle is 30 mrad, and the collection angle is 39-200 mrad.
**Density functional theory calculations.** Our density functional theory (DFT) calculations were carried out using the generalized gradient approximation and projector augmented wave methods, as implemented in the Vienna ab initio simulation package (VASP). A uniform k mesh of 13\(\times\)13\(\times\)1 was adopted for integration over the Brillouin zone (BZ). A plane-wave cutoff energy of 450 eV and a vacuum region larger than 15 Å were used during the structural relaxations, and the residual force per atom in the optimized structures was less than 1 meV/Å. We used the optB86b functional for structure-related calculations and the PBEsol functional for energy comparisons among different magnetic configurations. The on-site Coulomb interaction for the Cr d orbitals had \(U\) and \(J\) values of 3.9 eV and 1.1 eV, respectively, as revealed by a linear response method and comparison with the experimental results.
|
2304.07976 | Collaborative Multi-BS Power Management for Dense Radio Access Network
using Deep Reinforcement Learning | Network energy efficiency is a main pillar in the design and operation of
wireless communication systems. In this paper, we investigate a dense radio
access network (dense-RAN) capable of radiated power management at the base
station (BS). Aiming to improve the long-term network energy efficiency, an
optimization problem is formulated by collaboratively managing multi-BSs
radiated power levels with constraints on the users traffic volume and
achievable rate. Considering stochastic traffic arrivals at the users and
time-varying network interference, we first formulate the problem as a Markov
decision process (MDP) and then develop a novel deep reinforcement learning
(DRL) framework based on the cloud-RAN operation scheme. To tackle the
trade-off between complexity and performance, the overall optimization of
multi-BSs energy efficiency with the multiplicative complexity constraint is
modeled to achieve nearoptimal performance by using a deep Q-network (DQN). In
DQN,each BS first maximizes its individual energy efficiency, and then
cooperates with other BSs to maximize the overall multiBSs energy efficiency.
Simulation results demonstrate that the proposed algorithm can converge faster
and enjoy a network energy efficiency improvement by 5% and 10% compared with
the benchmarks of the Q-learning and sleep schemes, respectively. | Yuchao Chang, Wen Chen, Jun Li, Jianpo Liu, Haoran Wei, Zhendong Wang, Naofal Al-Dhahir | 2023-04-17T03:40:15Z | http://arxiv.org/abs/2304.07976v1 | Collaborative Multi-BS Power Management for Dense Radio Access Network using Deep Reinforcement Learning
###### Abstract
Network energy efficiency is a main pillar in the design and operation of wireless communication systems. In this paper, we investigate a dense radio access network (dense-RAN) capable of radiated power management at the base station (BS). Aiming to improve the long-term network energy efficiency, an optimization problem is formulated by collaboratively managing multi-BSs' radiated power levels with constraints on the users' traffic volume and achievable rate. Considering stochastic traffic arrivals at the users and time-varying network interference, we first formulate the problem as a Markov decision process (MDP) and then develop a novel deep reinforcement learning (DRL) framework based on the cloud-RAN operation scheme. To tackle the trade-off between complexity and performance, the overall optimization of multi-BSs' energy efficiency with the multiplicative complexity constraint is modeled to achieve near-optimal performance by using a deep Q-network (DQN). In DQN, each BS first maximizes its individual energy efficiency, and then cooperates with other BSs to maximize the overall multi-BSs' energy efficiency. Simulation results demonstrate that the proposed algorithm can converge faster and enjoy a network energy efficiency improvement by 5% and 10% compared with the benchmarks of the Q-learning and sleep schemes, respectively.
Network energy efficiency, multi-BS power management, deep reinforcement learning, deep Q-Network.
## I Introduction
Wireless communications technology has progressed from the first generation (1G) to the latest fifth generation (5G) and beyond to meet increasing and diversified traffic requirements [1, 2]. The anticipated more-than-1000-fold increase in wireless traffic and the global recognition of green communications pose significant challenges for 5G and beyond wireless communication design [3]. There are three basic means to improve wireless communication performance: (1) network densification, (2) more spectrum, and (3) increasing the spectral efficiency [4]. Therefore, future wireless communication standards are likely to deploy access points with higher density, to use new spectral bands, and to maximize the spectral efficiency [5, 6]. Since the power consumption of a single 5G base station (BS) is more than 1.5 times that of a 4G BS, and the number of required 5G BSs is more than 4 times that of 4G networks per coverage area [7], the power consumption of 5G networks per coverage area is at least 6 times that of 4G networks. Therefore, reducing the power consumption of 5G networks is a critical task, which implies that green communication with high energy efficiency is a key design goal [8, 9, 10]. Meanwhile, different from conventional cell-centric wireless communications, 5G and beyond networks are more likely to be user-centric systems due to diversified user service demands, implying that future wireless communications need to be more intelligent for effective management [11]. Hence, 5G and beyond wireless communications embrace two key themes: _green_ and _intelligence_.
Currently, global carbon emissions have continued to rise for years, resulting in harsher global weather and severe air pollution. Meanwhile, unlike the third generation (3G) and the fourth generation (4G) wireless communications, 5G and beyond wireless communications have to cope with huge amounts of mobile traffic generated by tens of billions of mobile terminals [2, 12]. Mao _et al_ in [7] pointed out that the information and communications technology (ICT) industry accounts for nearly 20% of the total electricity consumption and keeps an annual growth rate of more than 6%. That paper also presented the main considerations for green communications and research on artificial intelligence-based (AI-based) green communications. In 2021, ICT's share of global greenhouse gas (GHG) emissions was estimated to be 1.8-2.8% [13]. Many researchers have shown the possibility of improving energy efficiency from different perspectives. The authors in [14] conducted an extensive survey of energy efficiency and discussed many tradeoff techniques in green communications. The research in [15] analyzed four basic network tradeoffs: spectrum efficiency versus energy efficiency, deployment efficiency versus energy efficiency, delay versus power, and bandwidth versus power. The survey on green mobile networking in [16] discussed detailed modeling methods for power consumption and energy efficiency. In [17], Li _et al_
clearly pointed out that over 80% of the power consumption takes place in the radio access networks (RAN), implying that the sleeping scheme is a very useful strategy for improving network energy efficiency [18, 19, 20, 21]. To minimize the network power consumption, Dai _et al_ developed an energy-efficient design of downlink C-RAN based on the low-power sleeping mode while considering user rate constraints [18]. A novel sleeping mechanism for heterogeneous networks (HetNets) was proposed to decrease the energy consumption of multiple BSs by considering traffic dynamics [19]. Furthermore, some resource allocation schemes have been proposed to maximize network energy efficiency by exploring the BS switching operations. Qin _et al._ in [20] proposed a resource-on-demand energy scheduling strategy to make effective use of harvested energy. A multi-objective auction-based switching-off scheme in heterogeneous networks was developed to improve energy and cost savings [22]. In [23], the energy efficiency of HetNets was enhanced by switching off a portion of the small cells. The works in [24, 25] utilized a neural network (NN) to optimize the transmit power, aiming to minimize the network energy consumption. Multiple access technologies, for example rate splitting multiple access (RSMA) [26] and non-orthogonal multiple access (NOMA) [27], also play an important role in enhancing power management and have achieved positive results in RAN. Although the above contributions have improved the energy efficiency of wireless communication, they fail to consider accurate downlink power management.
In 5G and beyond communication systems, the network scale and complexity have increased substantially, which makes traditional network optimization strategies inefficient and ineffective. Thanks to the ability of AI technology to tackle large-scale and complex tasks, it can significantly improve the efficiency of network optimization [28, 29, 30, 31]. Hence, similar to _green_, _intelligence_ is also a critical characteristic of 5G and beyond wireless communications. As a key approach in machine learning, reinforcement learning (RL) is an environment-based strategy realized through continuous interaction, in discrete time steps, between the agent and the environment. To maximize the cumulative reward, RL continuously learns how to take actions automatically by interacting with the environment [32]. The basic elements of RL are a finite set of states, a finite set of actions, and the reward function. In each episode, the agent identifies the environment state and then takes an action based on the optimal policy. At the end of the episode, the agent evaluates the action based on the reward function. RL has been applied to a wide range of engineering applications such as healthcare and autonomous control [33, 34, 35, 36]. For example, the actor-critic technique has been adopted to conserve network energy in [17, 37]. Wang _et al._ adopted the deep Q-network (DQN) to identify the optimal policy in channel access, where the dynamic channel access problem was modeled as a partially observable Markov decision process (POMDP) [38]. To explore a representative user's computation offloading to multi-BSs, Chen _et al._ adopted DQN to determine the optimal offloading strategy [39]. In a MIMO-based UAV network, the authors in [40] investigated DQN to maximize the coverage efficiency calculated based on the received signal strength. As a well-established artificial neural network (ANN) architecture, the back propagation neural network (BPNN) is well suited as a DQN core and is a commonly used learning algorithm for complex function approximation [41]. For example, the authors in [42] proposed a novel adaptive learning hybrid model for precise solar intensity forecasting by using BPNN. A general BPNN consists of one input layer, multiple hidden layers, and one output layer. Considering the complexity of future wireless communication networks, reinforcement learning techniques have great potential to improve communication performance by combining continuous environmental interaction with the NN's function approximation.
In [43], energy efficiency (EE) was proposed to evaluate the energy efficiency of a wireless communication link, defined as the ratio of the data rate to the consumed power. Hence, to improve the network energy efficiency, this paper proposes a novel precise downlink power management model that uses deep reinforcement learning (DRL) based on BPNN to control power intelligently. The main contributions of this paper are summarized as follows:
* In dense-RAN, we propose a novel adaptive multiple base stations (multi-BSs) downlink power management model to improve network energy efficiency. In the studied model, based on the unpredictable differentiated traffic demands of users, the central processor simultaneously optimizes the downlink power of multi-BSs while guaranteeing the throughput requirement.
* To tackle the power management model, we design a DRL framework, where the central processor serves as an agent. The downlink power levels and network energy efficiency are the action space and reward function, respectively.
* Considering the trade-off between complexity and performance, the network energy efficiency optimization of the DRL framework is modeled to achieve near-optimal precise power management, where the cumulative energy efficiency is maximized by using a DQN with multiplicative complexity. Moreover, BPNN is used to tune the considered DQN parameter.
* Simulation results show that our proposed algorithm enjoys a network energy efficiency improvement of 5% and 10% compared with that of the Q-learning and sleep schemes, respectively.
This paper is structured as follows: Section II describes the system model, presenting the radio communication model and the problem formulation. Section III presents the deep reinforcement learning framework, in which we model the precise downlink power management problem as an MDP and adopt a DQN based on BPNN to identify the optimal policy for precisely managing downlink power. Section IV demonstrates the superiority of the proposed algorithm in terms of both communication performance and computational complexity. Section V concludes our work. Moreover, for ease of reference, **Table** I lists the main acronyms.
## II System Model
The studied system model is analyzed in a dense-RAN urban scenario, where downlink power management for all BSs is explored. Meanwhile, to reduce the back-lobe interference of the BS sector antenna, each BS is designed to include three sectors. In particular, each sector has its own antenna, serves different users, and is laid out independently within its BS [44]. The studied dense-RAN architecture considers a geographical area, where the set of BSs, denoted by \(\mathcal{B}=\{1,\ldots,b,\ldots,B\}\), are pre-deployed by the operator and a group of randomly deployed terrestrial users, denoted by \(\mathcal{U}=\{1,\ldots,u,\ldots,U\}\), generate unpredictable differentiated service demands over time. The dense-RAN scenario is illustrated in **Fig.** 1. Specifically, the system model includes a cloud server for optimization control and the network scenario for deploying BSs and users. The cloud server communicates with the BSs through optical fiber, while each BS communicates with its users through the radio channel. For example, the coordinates of BS \(b\) and user \(u\) in time-step \(t\) are given by
\[\begin{cases}l_{b}=\left(x_{b},y_{b},o_{b}\right),&\forall b\in\mathcal{B};\\ l_{[u,t]}=\left(x_{[u,t]},y_{[u,t]},o_{[u,t]}\right),&\forall u\in\mathcal{U};\end{cases} \tag{1}\]
where \(x_{b}\), \(y_{b}\), \(x_{[u,t]}\), and \(y_{[u,t]}\) are plane coordinates, while \(o_{b}\) and \(o_{[u,t]}\) are the heights. Moreover, the distance between BS \(b\) and user \(u\) in time-step \(t\) is calculated using the relation
\[\begin{split}&\mathrm{D}\big{(}l_{b},l_{[u,t]}\big{)}=\\ &\sqrt{\big{(}x_{b}-x_{[u,t]}\big{)}^{2}+\big{(}y_{b}-y_{[u,t]} \big{)}^{2}+\big{(}o_{b}-o_{[u,t]}\big{)}^{2}},\end{split} \tag{2}\]
where \(\mathrm{D}\left(l_{b},l_{[u,t]}\right)\) is a smooth function related to location change of user \(u\) over time.
In long-term energy-efficient wireless networks, user traffic arrives randomly and network interference fluctuates over time. Hence, we adopt a time-step structure in the dense-RAN system, where the operating period is discretized into the time-step space \(\mathcal{T}=\{0,1,\ldots,T-1\}\) and every time-step lasts 1 msec. In orthogonal frequency division multiple access (OFDMA), the different frequency sub-carriers are orthogonal to each other; we therefore optimize the transmit power allocations between BSs on the same frequency sub-carriers, since power management improves network energy efficiency through interference coordination on the shared sub-carriers. Accordingly, we configure an OFDMA-based network scenario for the dense-RAN.
Fig. 1: The DRL-RAN architecture deploying BSs and users.
Considering that network scheduling optimization is out of our research scope, the associations between BSs and users are known in advance based on maximal Reference Signal Received Power (RSRP) association. That is, when a user has multiple candidate BSs, it chooses the BS with the maximal RSRP as its serving BS. For ease of reference, **Table** II lists the main notations of the system model.
### _Radio Communication Model_
Considering our research focus on power management, we simplify the channel propagation model and reduce the estimation overhead by using widebeams [45]. Thus, we adopt the link budget calculation to explore the downlink power management optimization problem in the studied dense-RAN [46]. The link budget is a summary of the transmit power along with all of the link's gains and losses, which enables the strength of the received signal at the user to be calculated. The strength of the received signal is measured by the RSRP [47]. It is possible to estimate whether the RSRP at the user is sufficient, too high, or too low, which allows the transmit power at the BS to be corrected to ensure satisfactory system operation. When the RSRP is larger than the user requires, the transmit power at the BS can be reduced to minimize the power cost while still meeting the communication demand of the associated user.
Assume that BS \(b\) is serving user \(u\) and any other BS \(b^{{}^{\prime}}(b^{{}^{\prime}}\in\mathcal{B},b^{{}^{\prime}}\neq b)\) is the interference BS. Then, the received signal of user \(u\) in time-step \(t\), including the serving RSRP and interference RSRP, can be expressed as follows
\[\begin{split} Y_{[u,t]}&=\underbrace{h_{[b,t]}s_{[u,t]}}_{\text{desired signal}}+\underbrace{\sum_{b^{\prime}\in\mathcal{B}\setminus b}h_{[b^{\prime},t]}s_{[u^{\prime},t]}}_{\text{inter-user interference signal}}+\underbrace{n_{0}}_{\text{noise signal}}\\ &=\left(P_{[b,t]}H_{[bu,t]}\right)s_{[u,t]}+\sum_{b^{\prime}\in\mathcal{B}\setminus b}\left(P_{[b^{\prime},t]}H_{[b^{\prime}u,t]}\right)s_{[u^{\prime},t]}+n_{0}, \end{split} \tag{3}\]
where \(h_{[b,t]}=P_{[b,t]}H_{[bu,t]}\) and \(h_{[b^{\prime},t]}=P_{[b^{\prime},t]}H_{[b^{\prime}u,t]}\). \(P_{[b,t]}\) and \(P_{[b^{\prime},t]}\)\(\left(P_{[b,t]},P_{[b^{\prime},t]}\in\mathcal{P}\right)\) are, respectively, the downlink power levels of BSs \(b\) and \(b^{\prime}\), and \(\mathcal{P}=\{P_{0},\ldots,P_{\kappa-1}\}\) is the set of available downlink power levels for all BSs. In addition, \(s_{[u,t]}\) and \(s_{[u^{\prime},t]}\) are the signals intended for users \(u\) and \(u^{\prime}\), respectively, and \(n_{0}\) is the Gaussian white noise signal at the user.
Moreover, \(H_{[bu,t]}\) is the gain that accounts for the total transmitter gain, path loss, and total receiver gain from BS \(b\) to user \(u\) in time-step \(t\), while \(H_{[b^{{}^{\prime}}u,t]}\) is defined similar to \(H_{[bu,t]}\) and is the channel gain from BS \(b^{{}^{\prime}}\) to user \(u\). In particular, based on the UMa (Urban Macro) propagation model in the 3GPP TR 38.901 standard [48], the channel gain from BS \(b\) to user \(u\) in time-step \(t\) is given by [49]
\[\begin{split} H_{[bu,t]}&=H_{TX}\times PL_{[bu,t]} \times H_{RX}\\ &=H_{TX}\times\left(\frac{c}{4\pi\times f_{c}\times\text{D}\left( l_{b},l_{[u,t]}\right)}\right)\times H_{RX},\end{split} \tag{4}\]
where \(H_{TX}\) and \(H_{RX}\) are, respectively, the signal gain constants at the transmitter and receiver. \(PL_{[bu,t]}\) is the path loss from BS \(b\) to user \(u\), c is the speed of light, and \(f_{c}\) is the center frequency.
Accordingly, based on the definition in (3), the signal-to-interference-plus-noise-ratio (SINR) for user \(u\) in time-step \(t\) is given by
\[\begin{split}\gamma_{[bu,t]}&=\frac{h_{[bu,t]}}{ \sum\limits_{b^{{}^{\prime}}\in\mathcal{B}\setminus b}\left(\varphi_{[b^{{}^{ \prime}},t]}h_{[b^{{}^{\prime}}u,t]}\right)+\sigma^{2}}\\ &=\frac{P_{[b,t]}H_{[bu,t]}}{\sum\limits_{b^{{}^{\prime}}\in \mathcal{B}\setminus b}\left(\varphi_{[b^{{}^{\prime}},t]}P_{[b^{{}^{\prime}},t ]}H_{[b^{{}^{\prime}}u,t]}\right)+\sigma^{2}},\end{split} \tag{5}\]
where \(\sigma^{2}\) is the noise power. In time-step \(t\), \(\varphi_{[b^{\prime},t]}=1\) when BS \(b^{\prime}\) is active and serving a user, and \(\varphi_{[b^{\prime},t]}=0\) when BS \(b^{\prime}\) is asleep and serves no user. Then, the downlink data rate from BS \(b\) to user \(u\) in time-step \(t\) is given by
\[C_{[bu,t]}=W\mathrm{log}_{2}\left(1+\gamma_{[bu,t]}\right), \tag{6}\]
where \(W\) is the channel bandwidth. We assume that the downlink data rate of user \(u\) is \(C_{bu}^{\max}\) when BS \(b\) transmits data with the standard downlink power level \(P_{\text{max}}\). The power decreases of BS \(b\) and the downlink data rate variation in time-step \(t\) are, respectively, expressed as follows
\[\Delta P_{[b,t]}=\varphi_{[b,t]}\left(P_{\max}-P_{[b,t]}\right), \tag{7}\] \[\Delta C_{[bu,t]}=\varphi_{[b,t]}\left(C_{bu}^{\max}-C_{[bu,t]}\right). \tag{8}\]
where \(\varphi_{[b,t]}\) is defined similarly to \(\varphi_{[b^{\prime},t]}\), but for BS \(b\).
By dividing the downlink data rate in (6) by the downlink power, the network energy efficiency is given in (9) in units of megabits per second per dBW (Mbps/dBW) [43].
\[R_{t}=\frac{1}{\sum\limits_{b=1}^{B}\varphi_{[b,t]}}\sum\limits_{b=1}^{B}R_{[ bu,t]}, \tag{9}\]
where
\[R_{[bu,t]}=\frac{C_{[bu,t]}}{P_{[b,t]}}, \tag{10}\]
and the power unit translation between "dBW" and "W" is denoted as
\[P\left[\mathrm{in}\ \mathrm{dBW}\right]=10\cdot\mathrm{log}_{10}\left(\frac{P \left[\mathrm{in}\ \mathrm{W}\right]}{1\left[\mathrm{in}\ \mathrm{W}\right]}\right). \tag{11}\]
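As a worked example of Eqs. (5), (6), and (9)-(11), the following Python sketch (assuming NumPy; all gains and power values are placeholders rather than the simulation settings of Section IV) computes the SINR, downlink rate, and energy efficiency for one user:

```python
import numpy as np

def sinr(p_dl, gains, serving, active, noise):
    """Eq. (5): SINR of the user served by BS `serving`.

    p_dl: downlink power of each BS in W; gains: channel gain from each
    BS to the user; active: 0/1 indicators (the varphi terms).
    """
    desired = p_dl[serving] * gains[serving]
    mask = active.copy()
    mask[serving] = 0  # exclude the serving BS from the interference sum
    interference = np.sum(mask * p_dl * gains)
    return desired / (interference + noise)

def energy_efficiency(rate_bps, p_watt):
    """Eqs. (10)-(11): rate in Mbps divided by power converted to dBW."""
    p_dbw = 10.0 * np.log10(p_watt / 1.0)  # valid for p_watt > 1 W
    return (rate_bps / 1e6) / p_dbw

W = 10e6                            # channel bandwidth (Hz)
p = np.array([20.0, 10.0, 15.0])    # BS powers in W (placeholders)
g = np.array([1e-9, 2e-10, 1e-10])  # channel gains (placeholders)
act = np.array([1, 1, 1])
gamma = sinr(p, g, serving=0, active=act, noise=1e-12)
rate = W * np.log2(1.0 + gamma)     # Eq. (6)
print(energy_efficiency(rate, p[0]))
```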
Besides, when user \(u\) requests a traffic volume \(V_{[bu,t]}\) from its associated BS \(b\) in time-step \(t\), the number of required time-steps starting from time-step \(t\), denoted by \(T_{[bu,t]}\), satisfies the following constraint
\[V_{[bu,t]}=\sum\limits_{k=0}^{T_{[bu,t]}-1}C_{[bu,t+k]}, \tag{12}\]
where \(T_{[bu,t]}\leq T\); a similar relation holds between any other BS \(b^{\prime}\) and its corresponding associated user.
### _Problem Formulation_
In dense-RAN, based on constraints of the traffic volume and downlink data rate variation, the long-term energy efficiency maximization problem is formulated as follows
\[\begin{split}&\mathrm{maximize}\sum\limits_{t}R_{t};\\ & s.t.\quad\sum\limits_{k=0}^{T_{[bu,t]}-1}C_{[bu,t+k]}=V_{[bu,t]} ;\\ &\quad\sum\limits_{b=1}^{B}\Delta C_{[bu,t]}\geq 0;\\ &\quad P_{[b,t]}\in\mathcal{P};\\ &\quad b\in\mathcal{B},\ u\in\mathcal{U},\ t\in\mathcal{T}.\end{split} \tag{13}\]
where the power level \(P_{[b,t]}\) of any BS \(b\ (b\in\mathcal{B})\) in time-step \(t\) is the optimization variable, the network energy efficiency \(\sum\limits_{t}R_{t}\) is the optimization objective, and the other parameters, such as the association between BS and user \(\varphi_{[b^{\prime},t]}\) and the power set \(\mathcal{P}\), are the scenario settings. The first constraint is the traffic volume constraint, determined by the traffic volume demand of any active user \(u\ (\forall u\in\mathcal{U})\) and its serving BS \(b\). The second constraint gives the network throughput limit, which is calculated by accumulating the downlink data rate variations of all BSs in any time-step \(t\ (\forall t\in\mathcal{T})\). The third constraint indicates that the available downlink power levels are identical for all BSs. By combining formulas (5) and (6), the downlink data rate in the first constraint of the optimization problem (13) is computed as follows
\[C_{[bu,t+k]}=W\log_{2}\left(1+\frac{P_{[b,t+k]}H_{[bu,t+k]}}{\sum\limits_{b^{\prime}\in\mathcal{B}\setminus b}\left(\varphi_{[b^{\prime},t+k]}P_{[b^{\prime},t+k]}H_{[b^{\prime}u,t+k]}\right)+\sigma^{2}}\right). \tag{14}\]
## III Deep Reinforcement Learning Framework
Wireless random traffic has pushed large-scale and complex network optimization techniques from rule-based to AI-based approaches. The RL learning process is realized through discrete time steps, which makes RL a natural fit for the dense-RAN scenario. As a representative RL model, DQN has been introduced to predict the Q-value, tackling continuous and large state and action spaces by using deep neural networks. DQN was first proposed by Google DeepMind in [50] to train an agent to learn a policy from its observations based on an ANN, which overcomes the limitation of the traditional look-up table approach in Q-learning. The practical solution of this optimization problem is to obtain near-optimal collaborative multi-BS power management while meeting the network performance requirements [51]. RL is a continuous learning algorithm that optimizes control through continuous observations, so it can deal with the changing state of complex systems in real time and realize an optimized control scheme. Moreover, we learn from (12) that the traffic volume possesses Markov-process characteristics. Hence, it is feasible to formulate the optimization problem in (13) as an MDP problem. Our goal is to maximize the long-term network energy efficiency subject to the users' traffic volume demands while guaranteeing no network throughput loss.
### _Reinforcement Learning in a Nutshell_
The system state is designed around the residual traffic volume to be transmitted and the users' RSRP. In particular, the state space is defined as follows
\[\mathcal{S}=\left\{S_{t}\,\middle|\,S_{t}=\left\{V_{[bu,t]},Y_{[u,t]}\right\},\ b\in\mathcal{B},\ u\in\mathcal{U},\ t\in\mathcal{T}\right\}. \tag{15}\]
Each BS chooses one available downlink power level from the downlink power space \(\mathcal{P}\) to serve its associated user. Overall, the action space based on \(\mathcal{P}\) is denoted by
\[\mathcal{A}=\left\{A_{t}\,\middle|\,A_{t}=\left\{A_{[bu,t]}\right\},\ A_{[bu,t]}\in\mathcal{P};\ b\in\mathcal{B},\ u\in\mathcal{U},\ t\in\mathcal{T}\right\}. \tag{16}\]
Power management not only reduces the downlink power but also causes an uncertain downlink data rate variation. The energy efficiency serves as the immediate reward in the reinforcement learning optimization process. In addition to maximizing the immediate reward at the current time-step, we also optimize the long-term reward in terms of cumulative energy efficiency. To this end, the long-term reward function [32] is defined as the total discounted reward from time-step \(t\) and is computed as follows
\[\begin{split} G_{[bu,t]}&=\sum\limits_{k=0}^{\infty} \lambda^{k}R_{[bu,t+k+1]}\\ &=R_{[bu,t+1]}+\lambda R_{[bu,t+2]}+\lambda^{2}R_{[bu,t+3]}+ \cdots\\ &=R_{[bu,t+1]}+\lambda\left(R_{[bu,t+2]}+\lambda R_{[bu,t+3]}+ \cdots\right)\\ &=R_{[bu,t+1]}+\lambda G_{[bu,t+1]}.\end{split} \tag{17}\]
where \(\lambda\left(0<\lambda\leq 1\right)\) is the discount factor.
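The recursion in (17) is conveniently evaluated backwards over an episode; a minimal sketch follows (plain Python, with `rewards[t]` playing the role of \(R_{[bu,t+1]}\)):

```python
def discounted_returns(rewards, lam=0.9):
    """Compute G_t = R_{t+1} + lam * G_{t+1} for every t, using the
    backward recursion in Eq. (17)."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + lam * g
        returns[t] = g
    return returns

# G_0 = 1.0 + 0.9 * (0.5 + 0.9 * 2.0) = 3.07
print(discounted_returns([1.0, 0.5, 2.0]))
```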
Furthermore, the long-term reward is used to build the state-action-value function, which estimates how good it is to be at state \(s\) with action \(a\) according to policy \(\pi\). Hence, the state-action-value function is given by
\[\begin{split} Q_{[bu,t]}^{\pi}\left(s,a\right)&\doteq\mathbb{E}_{\pi}\left[G_{[bu,t]}\,\middle|\,S_{[bu,t]}=s,A_{[bu,t]}=a\right]\\ &=\mathbb{E}_{\pi}\left[R_{[bu,t+1]}+\lambda G_{[bu,t+1]}\,\middle|\,S_{[bu,t]}=s,A_{[bu,t]}=a\right]\\ &=\sum_{s^{\prime}}P\left(s^{\prime}\,\middle|\,s,a\right)\left[R_{[bu,t+1]}+\lambda\max_{a^{\prime}}Q_{[bu,t+1]}^{\pi}\left(s^{\prime},a^{\prime}\right)\right]. \end{split} \tag{18}\]
The expression in (18) is the so-called Bellman equation for state \(s\), and it shows the relationship between the value of state \(s\) and the values of its successor states.
The transition probability \(P\left(s^{\prime}|s,a\right)\) depends on the optimization policy \(\pi\) and the historical action selections in the replay memory \(\mathcal{D}\) defined in DQN. Therefore, \(\pi\left(s,a\right)\), representing the probability of taking action \(a\) at state \(s\) under policy \(\pi\), is calculated as follows
\[\pi\left(s,a\right)=\begin{cases}\left(1-\varepsilon\right)\cdot\dfrac{|\mathcal{D}^{a}|}{|\mathcal{D}|},&1-\varepsilon;\\ \varepsilon\cdot\dfrac{1}{|\mathcal{P}|},&\varepsilon;\end{cases} \tag{19}\]
where \(|\mathcal{D}|\) is the size of the replay memory \(\mathcal{D}\), \(\mathcal{D}^{a}\) is the subset of samples in \(\mathcal{D}\) associated with action \(a\) and has size \(|\mathcal{D}^{a}|\), and \(|\mathcal{P}|\) is the size of the downlink power space \(\mathcal{P}\).
In an MDP, the policy \(\pi\) specifies the action for each state. To achieve a good trade-off between exploration and exploitation, we adopt the following \(\varepsilon\)-greedy policy
\[A_{[bu,t]}=\begin{cases}\operatorname*{arg\,max}_{a^{\prime}}Q\left(S_{[bu,t]},a^{\prime}\right),&1-\varepsilon;\\ \mathrm{random}(\mathcal{A}),&\varepsilon.\end{cases} \tag{20}\]
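A minimal sketch of this \(\varepsilon\)-greedy rule is given below (assuming NumPy; `q_values` holds one Q-value per candidate downlink power level, and the function name is illustrative):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon=0.1, rng=None):
    """Eq. (20): exploit argmax Q with prob. 1 - eps, else explore."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # uniform random action
    return int(np.argmax(q_values))

q = np.array([0.2, 1.3, 0.7])  # Q-values for each downlink power level
action = epsilon_greedy(q, epsilon=0.1)
```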
Hence, the long-term energy efficiency maximization in (13) is equivalent to finding the optimal policy \(\pi^{*}\) such that the energy efficiency over time is maximized based on multiple constraints. Mathematically, the optimization problem in (13) can be transformed into the following deep reinforcement learning problem
\[\begin{split}\pi^{*}&:=\operatorname*{arg\,max}_{\pi}\sum_{b}Q_{[bu,t]}^{\pi}\left(s,a\right);\\ s.t.&\quad\sum_{k=0}^{T_{[bu,t]}-1}C_{[bu,t+k]}=V_{[bu,t]};\\ &\quad\sum_{b=1}^{B}\Delta C_{[bu,t]}\geq 0;\\ &\quad P_{[b,t]}\in\mathcal{P};\\ &\quad b\in\mathcal{B},\ u\in\mathcal{U},\ t\in\mathcal{T}.\end{split} \tag{21}\]
where the optimization objective, expressed by the cumulative energy efficiency in (13), is replaced by finding the optimal policy based on the long-term reward in (18).
### _Power Management Based on Deep-Q-Network_
DQN improves Q-learning with deep neural networks, enabling reinforcement learning to solve complex and nonlinear optimization problems. The typical DQN framework is composed of the replay memory, mini-batch sampling, the target DQN, the predicted DQN, weight updating, and the loss function. Refining **Fig.** 1 with the RL framework, the workflow diagram of precise downlink power management based on DQN is illustrated in **Fig.** 2. In DQN, the experience in each time-step is stored in the replay memory \(\mathcal{D}\) to be accessed for updating the DQN parameters. For example, after BS \(b\) executes action \(A_{[bu,t]}\) at state \(S_{[bu,t]}\), it receives reward \(R_{[bu,t+1]}\) and transitions to the next state \(S_{[bu,t+1]}\). Hence, the tuple \(\left(S_{[bu,t]},A_{[bu,t]},R_{[bu,t+1]},S_{[bu,t+1]}\right)\) is stored in the replay memory \(\mathcal{D}\) as follows
\[\mathcal{D}\leftarrow\mathcal{D}\cup\left\{\left(S_{[bu,t]},A_{[ bu,t]},R_{[bu,t+1]},S_{[bu,t+1]}\right)\right\}. \tag{22}\]
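A minimal replay-memory sketch implementing (22) follows (plain Python; the class name is illustrative, while the default capacity and mini-batch size reuse the \(|\mathcal{D}|=5000\) and \(|\mathcal{D}_{m}|=1000\) settings of Section IV):

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-size replay memory D of Eq. (22) with mini-batch sampling."""
    def __init__(self, capacity=5000):
        self.buffer = deque(maxlen=capacity)  # oldest tuples are evicted

    def push(self, state, action, reward, next_state):
        """Store one transition tuple, as in Eq. (22)."""
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=1000):
        """Draw the mini-batch D_m uniformly at random from D."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```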
At each update step, the DQN parameters are updated based on the mini-batch sample memory \(\mathcal{D}_{m}\) drawn from \(\mathcal{D}\). The predicted network takes the current state-action pair \(\left(S_{[bu,t]},A_{[bu,t]}\right)\) as input and outputs the predicted value, i.e., \(Q\left(S_{[bu,t]},A_{[bu,t]};\mathbf{W}_{\text{p}}\right)\). The target network takes the next state \(s^{\prime}\) as input and outputs the maximum Q-value over the next state-action pairs. Therefore, the target value of \(\left(S_{[bu,t]},A_{[bu,t]}\right)\) is given by
\[Q\left(S_{[bu,t]},A_{[bu,t]};\mathbf{W}_{\text{o}}\right)=R_{[ bu,t+1]}+\lambda\max_{a^{{}^{\prime}}}Q\left(s^{{}^{\prime}},a^{{}^{\prime}}; \mathbf{W}_{\text{o}}\right). \tag{23}\]
where \(a^{{}^{\prime}}(a^{{}^{\prime}}\in\mathcal{P})\) is the candidate action for the next state.
The loss function is used to measure whether the DQN parameters are optimized: the parameters are considered optimized when the loss function is stable and tends to 0, and not optimized otherwise. Hence, based on the mini-batch sample set \(\mathcal{D}_{m}\), the loss function measures how good the predicted DQN is, as follows
\[\begin{split}&\mathrm{L}\left(\mathbf{W}_{\text{p}},\mathbf{W}_{ \text{o}}\right)\\ &=\frac{1}{2\left|\mathcal{D}_{m}\right|}\sum_{k=1}^{\left| \mathcal{D}_{m}\right|}[Q\left(s_{k},a_{k};\mathbf{W}_{\text{p}}\right)-Q \left(s_{k},a_{k};\mathbf{W}_{\text{o}}\right)]^{2},\end{split} \tag{24}\]
where \(\left|\mathcal{D}_{m}\right|\) is the size of the mini-batch sample memory.
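One gradient step combining the target value (23) and the loss (24) can be sketched as follows (assuming PyTorch; `pred_net` and `target_net` stand for the predicted and target DQNs with parameters \(\mathbf{W}_{\text{p}}\) and \(\mathbf{W}_{\text{o}}\), and the tensor layout of the batch is our assumption):

```python
import torch
import torch.nn.functional as F

def dqn_update(pred_net, target_net, optimizer, batch, lam=0.9):
    """One gradient step on the loss in Eq. (24).

    batch: list of (state, action, reward, next_state) tuples whose
    states are already torch tensors.
    """
    states = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch])
    rewards = torch.tensor([b[2] for b in batch])
    next_states = torch.stack([b[3] for b in batch])

    # Predicted value Q(s, a; W_p) for the actions actually taken.
    q_pred = pred_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Target value of Eq. (23), computed with the frozen target network.
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
        q_target = rewards + lam * q_next

    loss = 0.5 * F.mse_loss(q_pred, q_target)  # Eq. (24)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # gradient descent on W_p; W_o is synced periodically
    return loss.item()
```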
Fig. 2: Workflow diagram of precise downlink power management based on DQN.
The structure of precise downlink power management based on DQN is described in **Algorithm 1**, which includes two modules, namely downlink power management optimization and the DQN training process. In the downlink power management optimization module, the optimal policy for each BS is learned to achieve a satisfactory reward considering the downlink data rate variations, as described in lines 3 to 22. At the start of each optimization iteration, the vectors storing the selected actions and their respective state-action-values of the BSs are initialized. Each BS then selects an action based on the \(\varepsilon\)-greedy policy and stores it in the action vector \(\mathcal{A}_{n}\), after which a series of computations associated with the reward and state transition for each BS are performed. The satisfactory conditions for the training iteration, defined in terms of the cumulative downlink data rate variation, are then checked, and based on the state-action-value the optimal policy is identified to obtain its corresponding tuple. The second module is the DQN training process, wherein the DQN is trained with the transition pairs stored in the replay memory \(\mathcal{D}\). First, the replay memory \(\mathcal{D}\) and the records \(n_{t}^{*}\) are updated based on the optimal tuple. Next, the mini-batch samples \(\mathcal{D}_{m}\) are randomly drawn from \(\mathcal{D}\) to calculate the predicted and target values, and the loss function is calculated to update \(\mathbf{W}_{\text{p}}\) based on (24), where \(Q\left(s_{k},a_{k};\mathbf{W}_{\text{p}}\right)\) and \(Q\left(s_{k},a_{k};\mathbf{W}_{\text{o}}\right)\) are, respectively, the predicted and target values of the \(k^{th}\) mini-batch sample from \(\mathcal{D}_{m}\). The gradient descent method is adopted to update \(\mathbf{W}_{\text{p}}\) of the predicted network, and \(\mathbf{W}_{\text{o}}\) is updated after a fixed interval.
## IV Numerical Results
In this section, simulation results are presented to evaluate the performance of the proposed algorithm. We adopt the sleep scheme in [19] as the benchmark and evaluate the performance improvement of the precise power management strategies, namely the proposed algorithm and Q-learning. In particular, we first show the superiority of the proposed algorithm over Q-learning and the sleep scheme in terms of network energy efficiency. Next, the downlink average throughput and power of all BSs are analyzed based on the network energy efficiency definition in (9). Furthermore, we present the effect of precise power management on the RSRP and interference reduction, which determines the downlink network throughput variation of all BSs. Finally, the computational complexities of the proposed algorithm and Q-learning are verified through the optimization success ratio and the number of iterations.
The basic simulation parameters are classified into network topology parameters, BS equipment parameters, and algorithm setting parameters. The network topology parameters for the dense-RAN are the number of BSs \(B=19\) and the maximum number of users scheduled at a time-step \(U=57\); meanwhile, hundreds to thousands of users are served by the BSs over multiple time-steps. The BS equipment parameters are \(P_{\text{max}}=\)15.2dBW, \(w_{0}=\)-125dBW, and \(W=\)10MHz; and the algorithm setting parameters are \(T=20000\), \(T_{m}=500\), the reduced downlink power \(\Delta P_{\max}=\)2dBW or 5dBW, \(\lambda=0.9\), \(\varepsilon=0.1\), \(N=\)100 or 200, \(|\mathcal{D}|=5000\), and \(|\mathcal{D}_{m}|=1000\).
### _Energy Efficiency Analysis_
In this subsection, we analyze the reward statistics of all algorithms based on overall and cumulative average network energy efficiency, which are given by
\[\overline{R}_{T}^{\Sigma}=\frac{1}{T}\sum_{t=0}^{T-1}\Bigg{(}\frac{1}{B}\sum_ {b=1}^{B}R_{[bu,t]}\Bigg{)}, \tag{25}\]
\[\overline{R}_{t}^{\Sigma}=\frac{1}{t+1}\sum_{t^{\prime}=0}^{t}\Bigg{(}\frac{1 }{B}\sum_{b=1}^{B}R_{[bu,t^{\prime}]}\Bigg{)}. \tag{26}\]
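For reproducibility, the two statistics can be computed directly from the per-BS reward matrix; the sketch below assumes a NumPy array `R` of shape \((T,B)\) holding \(R_{[bu,t]}\).

```python
import numpy as np

def energy_efficiency_stats(R):
    """R: array of shape (T, B) with rewards R_[bu,t].

    Returns the overall average of (25) and the running averages of (26)."""
    per_step = R.mean(axis=1)                 # (1/B) * sum_b R_[bu,t]
    overall = per_step.mean()                 # Eq. (25)
    cumulative = np.cumsum(per_step) / np.arange(1, R.shape[0] + 1)  # Eq. (26)
    return overall, cumulative
```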
The overall average network energy efficiency for the proposed algorithm, Q-learning, and the sleep scheme is depicted in **Fig.** 3-(1). First, the proposed algorithm achieves the largest overall average energy efficiency among all the algorithms for different values of the tuple \((N,\Delta P_{\max})\). Second, as \(\Delta P_{\max}\) increases, the overall average energy efficiency of the proposed algorithm improves noticeably, while that of Q-learning improves only slightly. Third, the number of iterations has little effect on the overall average energy efficiency of both the proposed algorithm and Q-learning. Based on this analysis, we set \((N,\Delta P_{\max})=\)(100, 2dBW) and \((N,\Delta P_{\max})=\)(100, 5dBW) to show the cumulative average energy efficiency of all the algorithms in **Fig.** 3-(2). It can be seen that the proposed algorithm outperforms Q-learning in terms of cumulative average energy efficiency, while Q-learning remains superior to the sleep scheme. Moreover, we enlarge the curves in the small sub-image of **Fig.** 3-(2), which shows that the energy efficiency fluctuates slightly over time due to the variable arrival rate. In particular, the fluctuating curve of the sleep scheme demonstrates that the variable request arrival rate leads to diversified BS sleeping across time-steps. Next, we further analyze how downlink power management affects the energy efficiency by controlling the downlink power and data rate of BSs in Section IV-B.
### _Effect of Downlink Power Management_
Because the downlink data rate and power of BSs are, respectively, the numerator and denominator of the energy efficiency, it is important to explore how the downlink power and data rate of BSs vary with the increasing number of episodes during the adaptive downlink power management process. Similar to \(\overline{R}_{t}^{\Sigma}\), the cumulative average throughput of BSs is expressed as follows
\[\overline{C}_{t}^{\Sigma}=\frac{1}{t+1}\sum_{t^{\prime}=0}^{t}\left(\frac{1}{B}\sum_{b=1}^{B}C_{[bu,t^{\prime}]}\right). \tag{27}\]
The average downlink power of BSs in time-step \(t\) is given by
\[\overline{P}_{t}=10\cdot\log_{10}\left(\frac{1}{B}\sum_{b=1}^{B}10^{\left(P_{[b,t]}/10\right)}\right). \tag{28}\]
Hence, the cumulative average power of BSs is given by
\[\overline{P}_{t}^{\Sigma}=\frac{1}{t+1}\sum_{t^{\prime}=0}^{t}\overline{P}_{t ^{\prime}}. \tag{29}\]
**Fig.** 4 depicts the variations of the cumulative average downlink transmit power and data rate of BSs with an increasing number of episodes for the settings \((N,\Delta P_{\max})=\)(100, 2dBW) and \((N,\Delta P_{\max})=\)(100, 5dBW). **Fig.** 4-(1) and **Fig.** 4-(3) imply that, compared with the sleep scheme, the optimized downlink power of BSs has only a slight effect on the average network throughput for the proposed algorithm and Q-learning, suggesting that it is feasible to optimize the downlink power without network throughput loss. **Fig.** 4-(2) and **Fig.** 4-(4) show the average downlink power of BSs for the proposed algorithm, Q-learning, and the sleep scheme with an increasing number of episodes. Our proposed algorithm achieves better downlink power efficiency than Q-learning and the sleep scheme, indicating that downlink power reduction is the main contributor to the energy efficiency improvement. Meanwhile, we also enlarge the fluctuating sleep scheme curve in the sub-images of **Fig.** 4-(2) and -(4), which again indicates that the variable request arrival rate leads to diversified BS sleeping across time-steps.
The average downlink power of BSs declines while the average throughput hardly changes, which motivates us to explore how downlink power management affects the network interference among BSs. In time-step \(t\), the average RSRP and interference declines of BSs in dBW are, respectively, expressed in (30). Moreover, the cumulative average RSRP and interference declines of BSs are, respectively, given by
\[\begin{cases}\Delta\overline{P}_{t}^{S,\Sigma}=\frac{1}{t+1}\sum_{t^{\prime}=0}^ {t}\Delta\overline{P}_{t^{\prime}}^{S},\\ \Delta\overline{P}_{t}^{I,\Sigma}=\frac{1}{t+1}\sum_{t^{\prime}=0}^{t}\Delta \overline{P}_{t^{\prime}}^{I}.\end{cases} \tag{31}\]
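Note that (28) averages powers in the linear (watt) domain before converting back to dBW; a short sketch of (27)-(29) and (31), assuming NumPy arrays of per-BS values, is given below.

```python
import numpy as np

def cumulative_average(x):
    """Running mean over time, as used in (27), (29), and (31)."""
    return np.cumsum(x) / np.arange(1, len(x) + 1)

def average_power_dbw(P_dbw):
    """Eq. (28): P_dbw has shape (T, B) in dBW. Powers only add linearly
    in watts, so convert, average over BSs, and convert back."""
    watts = 10.0 ** (P_dbw / 10.0)
    return 10.0 * np.log10(watts.mean(axis=1))

# Example: cumulative average power of (29) from per-step dBW powers.
# P_bar_cum = cumulative_average(average_power_dbw(P_dbw))
```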
Fig. 3: Cumulative average energy efficiency statistics and overall average energy efficiency statistics.
**Fig. 5** depicts the declining variations of RSRP and interference for all BSs with an increasing number of episodes for the settings \((N,\Delta P_{\max})=\)(100, 2dBW) and \((N,\Delta P_{\max})=\)(100, 5dBW). **Fig. 5**-(1) and **Fig. 5**-(4) show that the average RSRP decline of all BSs under the proposed algorithm, relative to the sleep scheme, is much larger than that of Q-learning, and its trend matches that of the average downlink power in **Fig. 4**-(2) and **Fig. 4**-(4), respectively. From **Fig. 5**-(2) and -(5), it can be seen that the average interference decline of all BSs under the proposed algorithm, relative to the sleep scheme, is also much larger than that of Q-learning. The enlarged curves in the sub-images of **Fig. 5**-(1), -(2), -(4) and -(5) show that the optimization process converges in fewer than 200 iterations, implying that the optimization is fast enough to cope with the necessary power adjustments. Moreover, the gaps between the average interference decline and the average
Fig. 4: Statistics for the network throughput and downlink power with diverse \(\Delta P_{\max}\).
Fig. 5: Statistics for the average RSRP and interference decline.
RSRP decline for the settings \((N,\Delta P_{\max})=\)(100, 2dBW) and \((N,\Delta P_{\max})=\)(100, 5dBW) are, respectively, shown in **Fig.** 5-(3) and **Fig.** 5-(6). These figures indicate that better downlink power management of BSs makes the average interference decline larger than the average RSRP decline, which helps improve the average downlink network throughput. However, the improvement of the average downlink network throughput is less prominent because the gap is much smaller.
For example, we randomly select one episode to analyze the RSRP decline, interference decline, and downlink data rate decline of active BSs for the setting \((N,\Delta P_{\max})=\)(100, 2dBW), as illustrated in **Fig.** 6. For any active BS, the downlink data rate increases when the RSRP decline is smaller than the interference decline, and it declines when the RSRP decline is larger than the interference decline. However, by (6), the difference between the RSRP decline and the interference decline is nonlinearly related to the downlink data rate decline. Based on (30), the average RSRP decline and interference decline of all BSs are calculated as 1.43dBW and 1.04dBW, respectively. The average downlink data rate of all BSs, denoted by \(\frac{1}{B}\sum\limits_{b=1}^{B}\Delta C_{[bu,t]}\), decreases by 11.1Kb/s.
**Fig.** 5 and 6 show that power management of multiple BSs has different effects on the RSRP and interference declines of the various BSs. The RSRP decline at one BS simultaneously induces interference declines at its interference-related BSs, while that BS in turn enjoys a cumulative interference reduction from the RSRP declines of its interference-related BSs. Moreover, **Fig.** 3 and 4 demonstrate that the optimization algorithm converges quickly no matter how much a BS reduces its downlink power, and that the proposed algorithm has clear advantages. Collaborative multi-BS power management for dense-RAN thus amounts to searching for the optimal power reduction combination of multiple BSs while accounting for the downlink data rate, and DRL is well suited to handling the massive state space caused by multiple BSs through continuous learning. Therefore, it represents the joint optimization of performance and power for artificial intelligence (COPPAI) in dense-RAN.
The above analysis implies that adaptive downlink power management achieves the optimal downlink power for each BS. The optimal downlink power generates a satisfactory RSRP decline and a corresponding interference decline at the users, which limits the loss of average downlink network throughput. Hence, optimal downlink power management improves the energy efficiency while guaranteeing the average downlink network throughput. Moreover, the proposed algorithm achieves better communication performance than Q-learning and the sleep scheme because it can learn the dense-RAN state and thus manage the downlink power of BSs more precisely.
### _Discussion on Optimization Complexity_
In the proposed algorithm, the main time-consuming step is the search for the optimal power value combination of all BSs, which requires \(O(2^{B})\) floating point operations (flops). To quantify the complexity of the near-optimal solution, **Fig.** 3-(1) shows that the two numbers of iterations, \(N=100\) and \(N=200\), have comparable effects on the energy efficiency improvement, which motivates us to further explore the computational complexity of both the proposed algorithm and Q-learning. In this subsection, the computational complexities of the proposed algorithm and Q-learning are analyzed in terms of the optimization success ratio and the number of iterations. First, the overall and cumulative average optimization success ratios are, respectively, defined as follows
\[\left[\begin{array}{cc}\overline{Z}_{T}^{\Sigma}&\overline{Z}_{t}^{\Sigma}\end{array}\right]=\left[\begin{array}{cc}\frac{1}{T}\sum\limits_{t=0}^{T-1}\zeta_{t}&\frac{1}{t+1}\sum\limits_{t^{\prime}=0}^{t}\zeta_{t^{\prime}}\end{array}\right], \tag{32}\]
where \(\zeta_{t}=1\) represents optimization success and \(\zeta_{t}=0\) represents optimization failure. Then the overall and cumulative average numbers of iterations are, respectively, defined as follows
\[\left[\begin{array}{cc}\overline{N}_{T}^{\kappa,\Sigma}&\overline{N}_{t}^{\kappa,\Sigma}\end{array}\right]=\left[\begin{array}{cc}\frac{1}{T}\sum\limits_{t=0}^{T-1}n_{t}^{*}&\frac{1}{t+1}\sum\limits_{t^{\prime}=0}^{t}n_{t^{\prime}}^{*}\end{array}\right], \tag{33}\]
where \(n_{t}^{*}\) is the optimal number of iterations in time-step \(t\).
In **Fig.** 7-(1) and **Fig.** 7-(2), the overall and cumulative average optimization success ratios are, respectively, depicted with an increasing number of episodes. In particular, **Fig.** 7-(1) demonstrates that the overall average optimization success ratio of the proposed algorithm approaches 1 and is much better than that of Q-learning. **Fig.** 7-(2) shows that the cumulative average optimization success ratio of the proposed algorithm converges rapidly compared with that of Q-learning. Meanwhile, the cumulative average optimization success ratio of the proposed algorithm remains stable with an increasing number of episodes, while that of Q-learning drops slightly; this is because the Q-table of Q-learning fails to fully learn the dense-RAN state. Further, the curves in **Fig.** 7-(2) are enlarged in the sub-image to demonstrate fast convergence for the necessary power adjustments.
In **Fig.** 7-(3) and **Fig.** 7-(4), the overall and cumulative average numbers of iterations are, respectively, depicted with an increasing number of episodes. The overall and cumulative average numbers of iterations of the proposed algorithm are much smaller than those of Q-learning. Hence, as with the communication performance, the computational complexity of the proposed algorithm is lower than that of Q-learning. The Q-table of Q-learning is insufficient to learn the dense-RAN state, which results in more exploration for Q-learning in comparison with the proposed algorithm.
Fig. 6: Decline of RSRP, interference, and data rate for active BSs.
## V Conclusion
In this paper, we propose a novel multi-BS downlink power optimization algorithm to improve the long-term energy efficiency in dense-RAN, defined as the ratio of the downlink data rate to the downlink power. In the studied model, the transmitted traffic volume, generated by many users with unpredictable service demands, is integrated with the network throughput to formulate the constraints. Based on the cloud-RAN operation scheme, we design a deep reinforcement learning framework to tackle the optimization problem. Considering the Markov characteristics of the traffic volume, we transform the maximization of the long-term energy efficiency into the Bellman equation and adopt a DQN to optimize multi-BS downlink power management. In the studied DQN algorithm, a BPNN is used to tune the DQN parameters and find an approximation of the mapping from the dense-RAN state to the cumulative reward. Simulation results show that the proposed algorithm is superior to Q-learning and the sleep scheme in both communication performance and computational complexity. Moreover, our numerical results demonstrate that precise downlink power management lowers the downlink power while guaranteeing the network throughput, which is key to improving the energy efficiency.
## Acknowledgments
The work of N. Al-Dhahir was supported by the Erik Jonsson Distinguished Professorship at UT-Dallas.
|
2310.05026 | Low-Resolution Self-Attention for Semantic Segmentation | Semantic segmentation tasks naturally require high-resolution information for
pixel-wise segmentation and global context information for class prediction.
While existing vision transformers demonstrate promising performance, they
often utilize high resolution context modeling, resulting in a computational
bottleneck. In this work, we challenge conventional wisdom and introduce the
Low-Resolution Self-Attention (LRSA) mechanism to capture global context at a
significantly reduced computational cost. Our approach involves computing
self-attention in a fixed low-resolution space regardless of the input image's
resolution, with additional 3x3 depth-wise convolutions to capture fine details
in the high-resolution space. We demonstrate the effectiveness of our LRSA
approach by building the LRFormer, a vision transformer with an encoder-decoder
structure. Extensive experiments on the ADE20K, COCO-Stuff, and Cityscapes
datasets demonstrate that LRFormer outperforms state-of-the-art models. The
code will be made available at https://github.com/yuhuan-wu/LRFormer. | Yu-Huan Wu, Shi-Chen Zhang, Yun Liu, Le Zhang, Xin Zhan, Daquan Zhou, Jiashi Feng, Ming-Ming Cheng, Liangli Zhen | 2023-10-08T06:10:09Z | http://arxiv.org/abs/2310.05026v1 | # Low-Resolution Self-Attention for Semantic Segmentation
###### Abstract
Semantic segmentation tasks naturally require high-resolution information for pixel-wise segmentation and global context information for class prediction. While existing vision transformers demonstrate promising performance, they often utilize high-resolution context modeling, resulting in a computational bottleneck. In this work, we challenge conventional wisdom and introduce the Low-Resolution Self-Attention (LRSA) mechanism to capture global context at a significantly reduced computational cost. Our approach involves computing self-attention in a fixed low-resolution space regardless of the input image's resolution, with additional \(3\times 3\) depth-wise convolutions to capture fine details in the high-resolution space. We demonstrate the effectiveness of our LRSA approach by building the LRFormer, a vision transformer with an encoder-decoder structure. Extensive experiments on the ADE20K, COCO-Stuff, and Cityscapes datasets demonstrate that LRFormer outperforms state-of-the-art models. The code will be made publicly available.
Low-Resolution Self-Attention, Semantic Segmentation, Vision Transformer
## 1 Introduction
As a fundamental computer vision problem, semantic segmentation [1, 2, 3] aims to assign a semantic label to each image pixel. Semantic segmentation models [4, 5] usually rely on pretrained backbone networks [6, 7] for feature extraction, which is then followed by specific designs for pixel-wise predictions. In the last decade, the progress in feature extraction via various backbone networks has consistently pushed forward state-of-the-art semantic segmentation [8, 9, 10]. This paper improves the feature extraction for semantic segmentation from a distinct perspective.
It is commonly believed that semantic segmentation, as a dense prediction task, requires high-resolution features to ensure accuracy. In contrast, image classification typically infers predictions from a very small feature map, such as \(1/32\) of the input resolution. Semantic segmentation models with convolutional neural networks (CNNs) usually decrease the strides of backbone networks to increase the feature resolution [11, 12, 13, 14], _e.g._, to \(1/8\) of the input resolution. This attribute is also well preserved in transformer-based semantic segmentation, suggesting that high resolution is still widely considered necessary for semantic segmentation.
High-resolution features are powerful for capturing the local details, while context information pertains to the broader understanding of the scene. Contextual features discern the interrelations between various scene components [21], mitigating the ambiguity inherent in local features. Thus, considerable research efforts [1, 22] have been devoted to extending the receptive field of CNNs. Conversely, vision transformers inherently facilitate the computation of global relationships by introducing self-attention with a global receptive field. Nonetheless, this comes at a significant computational cost, as vanilla attention mechanisms exhibit quadratic complexity to input length. Intriguingly, seminal studies [9, 20, 23] made a remarkable effort by judiciously downsampling some of the features (_i.e._, key and value) during the self-attention computation for reduced computational complexities.
Fig. 1: **Comparison of existing and our proposed paradigms for the self-attention calculation in the vision transformer. Representatives include (a) ViT [15], DeiT [16]; (b) Swin [17], CSwin [18]; (c) PVT [19], SegFormer [9], P2T [20]; and (d) our LRFormer.**
Nevertheless, we observe that the computational overhead of self-attention remains a non-negligible bottleneck for existing vision transformers, as evidenced by Tab. III. Consequently, we aim to delve deeper into downsampling within the core component of the transformer, _i.e._, self-attention. Diverging from prior works that only downsample the key and value features [20, 9, 23], we propose to downsample all constituents: the query, key, and value features. In this way, the output of self-attention is itself low-resolution, so that the main stream of the transformer stays in the low-resolution space. Furthermore, we adopt a fixed downsampling size rather than a downsampling ratio to attain a very low computational complexity for self-attention. The proposed method is called **Low-Resolution Self-Attention (LRSA)**.
Fig. 1 depicts the differences between existing self-attention approaches and our LRSA. Vanilla self-attention [15] (Fig. 1(a)) directly computes the global feature relations at the original resolution, which is quite expensive. Window-based methods [17, 18, 24, 25] (Fig. 1(b)) divide the features into small windows and perform local self-attention within each window. Downsampling-based methods [26, 27, 9, 19, 20] (Fig. 1(c)) keep the size of the query unchanged and downsample the key and value features with a fixed pooling ratio, so the lengths of the key and value features still increase linearly with the input resolution. In contrast, our LRSA (Fig. 1(d)) downsamples all of the query, key, and value to a small fixed size, leading to very low complexity regardless of the input resolution. More analysis of the computational complexity is given in §3.1.
While LRSA significantly boosts efficiency in capturing global context, we recognize that maintaining fine-grained details is another critical aspect for optimal performance in semantic segmentation. To address this duality, we employ LRSA to capture global context information purely in the low-resolution domain, while simultaneously integrating small-kernel (3\(\times\)3) depth-wise convolutions to capture local details in the high-resolution space. Based on these foundational principles, we build a new backbone network for feature extraction and a simple decoder to aggregate the extracted multi-level features for semantic segmentation. This new model is dubbed the **Low-Resolution Transformer (LRFormer)**. We evaluate LRFormer on popular benchmarks, including ADE20K [28], COCO-Stuff [29], and Cityscapes [30]. Experimental results (_e.g._, Fig. 2) demonstrate the superiority of LRFormer over state-of-the-art models. Besides, LRFormer also achieves competitive performance for image classification on the ImageNet dataset [31], compared with recent strong baselines.
## 2 Related Work
### _Semantic Segmentation_
Semantic segmentation is a fundamental task in computer vision. It is challenging due to the numerous variations in object sizes, textures, and lighting conditions in practical scenarios. FCN [32], the pioneering work in this area, proposed adapting CNNs for semantic segmentation in an end-to-end manner. Since then, numerous studies have built upon FCN [32], with major efforts focused on enriching multi-scale representations [33, 4, 5], enhancing boundary perception [34, 35, 36, 37], strengthening contextual representations [38, 21], and introducing visual attention [11, 14, 2, 8, 3]. These studies deeply explored the design of semantic heads upon FCN [32] and achieved great progress. Among them, many approaches [11, 12, 13, 14, 15, 1, 16, 17, 18, 19, 20] benefit greatly from high-resolution features, performing prediction at 1/8 of the input resolution to ensure high accuracy.
More recently, many works [26, 27, 28, 29, 30] showed that vision transformers [15] can largely improve the performance of semantic segmentation. This is mainly attributed to the strong global modeling capability of vision transformers, which happens to be a crucial property for semantic segmentation. For example, SETR [39] first adapted ViT as an encoder followed by multi-level feature aggregation. SegFormer [9] introduced a novel pyramid vision transformer encoder with an MLP mask decoder. MaskFormer [40] revolutionized mask decoders with transformer-based mask classification. More discussion on vision transformers can be found in §2.3.
### _Convolutional Neural Networks_
Given that CNN-based semantic segmentation models rely on CNN backbones for feature extraction, we discuss some notable CNN architectures. Since the emergence of AlexNet [42], many techniques have been developed to strengthen CNN representations and have achieved great success. For example, VGG [43], GoogleNet [44], ResNets [6], and DenseNets [45] developed increasingly deep CNNs to learn more powerful representations. ResNeXts [7], Res2Nets [46], and ResNeSts [47] explored the cardinality design in ResNets [6]. SENet [48] and SKNet [49] introduced different attention architectures for selective feature learning. Very recently, CNNs with large kernels have proven powerful in several works [50, 51, 52]. To ensure high-resolution feature maps for accurate semantic segmentation, segmentation models usually decrease the strides of these CNNs and use dilated convolutions [22] to maintain a large receptive field. Motivated by this, HRNet [53] was proposed to directly learn high-resolution CNN features. Despite these successes, CNNs are limited in capturing global and long-range relationships, which are of vital importance for semantic segmentation.
Fig. 2: **Experimental comparisons on ADE20K [28] dataset.**
### _Vision Transformers_
Transformers were initially proposed in natural language processing (NLP) [54]. Through multi-head self-attention (MHSA), transformers are capable of modeling global relationships. Thanks to this characteristic, transformers may also be powerful for computer vision tasks that require global information for a whole understanding of visual scenarios. To bridge this gap, ViT [15] transformed an image into tokens via a 16\(\times\)16 patch splitting and adopted the transformer to process these tokens, achieving better performance than CNNs in image recognition. After that, vision transformers developed rapidly by leveraging knowledge distillation [55], overlapping patch embedding [56], or convolutions [57, 58]. Recently, pyramid vision transformers [10, 17, 19, 20, 23, 26, 59, 60] have proven powerful for image recognition tasks like semantic segmentation. For example, PVT [19] and MViT [26] proposed to build a pyramid vision transformer pipeline by performing downsampling on the key and value features. Liu _et al._ [17] created a window-based vision transformer with shifted windows. Yuan _et al._ [10] presented HRFormer to learn high-resolution features for dense prediction using the vision transformer. Xia _et al._ [61] proposed DAT with deformable attention, conducting deformable sampling on the key and value features.
Despite their reported effectiveness, it is still commonly believed that high-resolution features are crucial for self-attention to effectively capture contextual information in semantic segmentation. Window-based vision transformers [17, 18, 24, 25] calculate self-attention within local windows to reduce the computational complexity so that they can keep high-resolution feature maps. Downsampling-based vision transformers [9, 19, 20, 23, 26, 27] keep the size of the query while downsampling the key and value features with a fixed pooling ratio. Such a strategy greatly reduces the complexity compared with vanilla attention while keeping high-resolution features, but the attention remains computationally non-negligible, especially for high-resolution inputs (Tab. XI). In contrast, we question the necessity of keeping high resolution for capturing context information via self-attention. We study this question by proposing LRFormer with LRSA. The good performance on several public benchmarks suggests the superiority of our LRFormer for semantic segmentation.
## 3 Methodology
In this section, we first introduce the Low-Resolution Self-Attention (LRSA) mechanism in §3.1. Then, we build the Low-Resolution Transformer (LRFormer) using LRSA for semantic segmentation in §3.2. The decoder of LRFormer is presented in §3.3. Finally, we provide the implementation details in §3.4.
### _Low-Resolution Self-Attention_
Unlike existing vision transformers that aim to maintain high-resolution feature maps during self-attention, our proposed LRSA computes self-attention in a low-resolution space, significantly reducing computational costs. Before delving into our proposed LRSA, let us first revisit the vision transformer architecture.
**Revisiting self-attention in transformers.** The vision transformer [15] has been demonstrated to be very powerful for computer vision [10, 17, 18, 23, 24, 25, 26]. It consists of two main parts: the multi-head self-attention (MHSA) and the feed-forward network (FFN). We continue by elaborating on MHSA. Given the input feature \(F_{in}\), the query \(Q\), key \(K\) and value \(V\) are obtained with a linear transformation from \(F_{in}\). Then, we can calculate MHSA as
\[\text{Attention}(F_{in})=\text{Softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})V, \tag{1}\]
where \(d_{k}\) is the number of channels of \(F_{in}\). We omit the multi-head operation for simplicity. The overall computational cost of vanilla self-attention is \(O(N^{2}C+C^{2}N)\), where \(N\) and \(C\) are the number of tokens and the number of channels of \(F_{in}\in\mathbb{R}^{N\times C}\), respectively. As the number of tokens of natural images is usually very large, the computational cost of vanilla self-attention is very high.
**Previous solutions.** To alleviate the computational cost while keeping the high-resolution of feature maps, downsampling-based vision transformers [9, 20, 23, 26, 27] change the self-attention computation to
\[\text{Attention}(F_{in})=\text{Softmax}(\frac{QK^{T}_{s}}{\sqrt{d_{k}}})V_{s}, \tag{2}\]
in which \(K_{s}\) and \(V_{s}\) are the key \(K\) and value \(V\) downsampled with a fixed ratio \(s_{r}\), respectively. The \(1D\leftrightarrow 2D\) feature reshaping is omitted for convenience. The length of \(K_{s}\) and \(V_{s}\) is \(1/s_{r}^{2}\) of that of the original \(K\) and \(V\). If the original length of \(K\) and \(V\) is too large, \(K_{s}\) and \(V_{s}\) will still be long sequences, introducing considerable computational cost in self-attention. Here, we only introduce downsampling-based transformers because they are most relevant to our method.
**Our solution.** Instead, we tackle the heavy computation of vanilla self-attention from a new perspective: we do not keep the high-resolution of feature maps but process the features in a very low-resolution space. Specifically, the proposed LRSA downsamples the input feature \(F_{in}\) to a fixed size \(m\). Then, multi-head self-attention is applied:
\[\text{Attention}(F_{in})=\text{Softmax}(\frac{Q_{p}K^{T}_{p}}{\sqrt{d_{k}}})V _{p}, \tag{3}\]
where \(Q_{p}\), \(K_{p}\) and \(V_{p}\) are obtained by a linear transformation of the downsampled \(F_{in}\). \(Q_{p}\), \(K_{p}\) and \(V_{p}\) have a fixed size \(m\), regardless of the resolution of the input \(F_{in}\). Compared with vanilla self-attention and previous
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Scheme & Global & Spatial Corr. & Complexity \\ \hline Window-based [17] & ✘ & ✔ & \(O(NC^{2})\) \\ Factorized [62] & ✔ & ✘ & \(O(NC^{2})\) \\ Downsampling-based [9] & ✔ & ✔ & \(O(N^{2}C+NC^{2})\) \\ LRSA (Ours) & ✔ & ✔ & \(O(NC+C^{2})\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: **Comparison of various self-attention schemes. \(N\) is the length of the flattened features and \(C\) is the number of feature channels. “Spatial Corr.” denotes the spatial correlation. We omit constant factors for simplicity, like the window size in window-based methods and the downsampled size of LRSA.**
solutions, our LRSA has a much lower computational cost. LRSA can also facilitate attention optimization due to the much shorter token length. To fit the size of the original \(F_{in}\), we then perform a bilinear interpolation after the self-attention calculation.
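A minimal PyTorch sketch of LRSA is given below. It follows (3) by pooling the input to a fixed size before computing the query, key, and value, and bilinearly upsampling the result; the paper additionally applies pyramid pooling when forming the key and value (§3.4), which this simplified sketch omits, and the module interface is our assumption rather than the released implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class LRSA(nn.Module):
    """Low-Resolution Self-Attention (Eq. 3), simplified sketch."""
    def __init__(self, dim, num_heads=8, pooled_size=16):
        super().__init__()
        self.pooled_size = pooled_size  # fixed m = pooled_size**2 tokens
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                                     # x: (B, C, H, W)
        B, C, H, W = x.shape
        p = F.adaptive_avg_pool2d(x, self.pooled_size)        # pool to fixed size
        tokens = p.flatten(2).transpose(1, 2)                 # (B, m, C)
        out, _ = self.attn(tokens, tokens, tokens)            # fixed-length MHSA
        out = out.transpose(1, 2).reshape(B, C, p.shape[2], p.shape[3])
        return F.interpolate(out, size=(H, W),                # back to input size
                             mode="bilinear", align_corners=False)
```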
**Complexity and characteristics.** The computational complexity of LRSA is much lower than existing self-attention mechanisms for vision transformers. We summarize the main characteristics and computational complexity of recent popular self-attention mechanisms and our LRSA in Tab. I. Spatial correlation means that self-attention is carried out in the spatial dimension, and some factorized transformers [62] compute self-attention in the channel dimension for reducing complexity. As can be observed from Tab. I, other methods often face trade-offs among complexity, global receptive field, and spatial correlation. In contrast, our LRSA offers advantages in all these aspects.
Let us continue by analyzing the computational complexity of LRSA. For convenience, we do not include the 1D\(\leftrightarrow\)2D feature reshaping. LRSA first downsamples the input features \(F_{in}\in\mathbb{R}^{N\times C}\) to a fixed size \(m\times C\) with a 2D pooling operation, whose computational cost is \(O(NC)\). Then, LRSA performs linear transformations and self-attention on the pooled features, which costs \(O(mC^{2})\). The computation of self-attention costs \(O(m^{2}C)\). The final upsampling operation has the same computational cost as downsampling. Overall, the computational complexity of LRSA is \(O(NC+mC^{2}+m^{2}C)\). As \(m\) is a constant number (_e.g._, \(16^{2}\)) regardless of the value of \(N\), we can simplify the complexity of LRSA to \(O(NC+C^{2})\), which is much smaller than existing methods.
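The leading-order terms in Tab. I can be checked numerically; the sketch below drops constant factors (window size, downsampling ratio) as the table does, so the counts are only indicative.

```python
def attention_cost(N, C, m=16 ** 2):
    """Leading-order costs from Tab. I for a length-N, C-channel input."""
    return {
        "window":       N * C * C,                     # local attention
        "factorized":   N * C * C,                     # channel-wise attention
        "downsampling": N * N * C + N * C * C,         # K/V pooled by a fixed ratio
        "lrsa":         N * C + m * C * C + m * m * C  # pool all of Q, K, V to m
    }

# Example: a 1024x1024 image at stride 4 yields N = 256 * 256 tokens.
print(attention_cost(N=256 * 256, C=64))
```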
### _Low-Resolution Transformer_
In this part, we build the LRFormer for semantic segmentation by incorporating the proposed LRSA. The overall architecture of LRFormer is illustrated in Fig. 3, with an encoder-decoder architecture.
**Encoder-decoder.** Taking a natural image as input, the encoder first downsamples it by a factor of \(1/4\), following prevailing literature in this field [17, 19, 20, 23, 26]. The encoder consists of four stages with a pyramid structure, each comprising multiple stacked basic blocks. Between every two stages, we include a patch embedding operation to reduce the feature size by half. This results in the extraction of multi-level features \(F_{1},F_{2},F_{3},F_{4}\) with strides of 4, 8, 16, and 32, respectively. We resize \(F_{3}\) and \(F_{4}\) to the same size as \(F_{2}\) before concatenating the three and squeezing them to fewer channels. The resulting features are then fed into our decoder head, which performs further semantic reasoning and outputs the final segmentation map via a \(1\times 1\) convolution layer. The details of our decoder head are presented in §3.3.
**Basic block.** The basic block is illustrated in Fig. 4. Like previous transformer blocks [17, 19], the basic block of our LRFormer is composed of a self-attention module and an FFN. The FFN is an MLP composed of two linear layers with GELU [63] activation in between. Differently, we renovate the self-attention module with our proposed LRSA, which is computed in a very low-resolution space and thus attains low complexity regardless of the input resolution. However, the low-resolution space may lose the spatial locality of the input features. Inspired by recent works [20, 58], we further introduce depth-wise convolution (DWConv) in both the positional encoding and the FFN, assisting feature extraction by capturing spatially local details. That is, we insert a \(3\times 3\) DWConv layer with a short connection before our LRSA, providing conditional positional encoding [58]. This strategy is also applied between the two linear layers of the FFN. Therefore, our basic block can be simply formulated as below:
\[\begin{split} F_{in}^{\prime}&=F_{in}+\text{DWConv}(F_ {in}),\\ F_{att}&=F_{in}^{\prime}+\text{LRSA}(\text{LayerNorm}(F_ {in}^{\prime})),\\ F_{out}&=F_{att}+\text{FFN}(\text{LayerNorm}(F_{att })),\end{split} \tag{4}\]
Fig. 4: **Illustration of a basic block of our LRFormer.** The symbol “\(\oplus\)” denotes the element-wise addition. We add a 3\(\times\)3 depth-wise convolution (DWConv) with a residual connection before LRSA, which is also applied between the two linear layers of the FFN.
Fig. 3: **Pipeline of the proposed LRFormer.**\(F_{2}\), \(F_{3}\) and \(F_{4}\) are fed into the decoder head for semantic segmentation.
where \(F_{in}\), \(F_{att}\) and \(F_{out}\) represent the input, output of LRSA, and output of the basic block, respectively.
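Combining these pieces, the basic block of (4) can be sketched as follows, reusing the `LRSA` module from §3.1's sketch; using `GroupNorm(1, dim)` as a channel-wise stand-in for LayerNorm on 2D maps and the exact layer ordering inside the FFN are our assumptions, not the paper's specification.

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """LRFormer basic block (Eq. 4): DWConv -> LRSA -> FFN, all residual."""
    def __init__(self, dim, expansion=4, pooled_size=16):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.norm1 = nn.GroupNorm(1, dim)   # LayerNorm stand-in (assumption)
        self.lrsa = LRSA(dim, pooled_size=pooled_size)
        self.norm2 = nn.GroupNorm(1, dim)
        hidden = dim * expansion
        self.ffn = nn.Sequential(           # FFN with DWConv between the linears
            nn.Conv2d(dim, hidden, 1),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),
            nn.GELU(),
            nn.Conv2d(hidden, dim, 1),
        )

    def forward(self, x):
        x = x + self.dwconv(x)              # F'_in: conditional positional encoding
        x = x + self.lrsa(self.norm1(x))    # F_att
        return x + self.ffn(self.norm2(x))  # F_out
```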
**Architecture setting.** To fit different computational budgets, we design four variants of LRFormer, namely LRFormer-T/S/B/L, stacking different numbers of basic blocks in each encoder stage. We summarize the detailed settings of their encoders in Tab. II. In terms of ImageNet pretraining [31], the computational cost of LRFormer-T/S/B/L is similar to that of ResNet-18 [6] and Swin-T/S/B [17], respectively.
### _Decoder Head_
In semantic segmentation, it is suboptimal to predict the results based solely on the final output of the encoder, as multi-level information is useful in perceiving objects with various scales and aspect ratios [64, 9]. Thus, we design a simple decoder for LRFormer to aggregate multi-level features efficiently and effectively. To this end, we note that an MLP aggregation can achieve good performance in the state-of-the-art work SegFormer [9]. However, it does not consider the spatial correlation between the features from different levels. Therefore, we encapsulate our LRSA into our decoder for feature refinement, strengthening the semantic reasoning of LRFormer.
As mentioned above, \(F_{3}\) and \(F_{4}\) are resized to the same size as \(F_{2}\), and the three are concatenated together. We apply a \(1\times 1\) convolution on the concatenated feature to squeeze the number of channels. Then, a basic block (LRSA + FFN) is adopted to refine the squeezed feature. The feature from the top of the encoder, _i.e.,_ \(F_{4}\), is the most semantically meaningful. To avoid the loss of semantic information when aggregating the high-level (\(F_{4}\)) and low-level (\(F_{2},F_{3}\)) features, we concatenate the refined feature with \(F_{4}\) to enhance the semantics. After that, another basic block is applied for further feature refinement. Finally, we infer the segmentation prediction from the refined feature with a simple \(1\times 1\) convolution. The experiments demonstrate that our simple decoder with LRSA outperforms previous state-of-the-art decoder heads for semantic segmentation, as shown in Tab. IX.
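The decoder described above can be sketched as follows, reusing `BasicBlock` from §3.2; the channel arithmetic around the second concatenation is our assumption, since the paper does not spell it out.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LRFormerDecoder(nn.Module):
    """Sketch of the decoder head: fuse F2, F3, F4, refine, re-inject F4."""
    def __init__(self, dims, dim=384, num_classes=150):
        super().__init__()
        self.squeeze = nn.Conv2d(sum(dims), dim, 1)     # 1x1 conv after concat
        self.block1 = BasicBlock(dim)
        self.fuse = nn.Conv2d(dim + dims[-1], dim, 1)   # assumed channel reduction
        self.block2 = BasicBlock(dim)
        self.classify = nn.Conv2d(dim, num_classes, 1)  # final 1x1 prediction

    def forward(self, f2, f3, f4):
        up = lambda f: F.interpolate(f, size=f2.shape[2:], mode="bilinear",
                                     align_corners=False)
        x = self.block1(self.squeeze(torch.cat([f2, up(f3), up(f4)], dim=1)))
        x = self.block2(self.fuse(torch.cat([x, up(f4)], dim=1)))
        return self.classify(x)
```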
### _Implementation Details_
In LRFormer, we apply the overlapped patch embedding, _i.e.,_ a \(3\times 3\) convolution with a stride of 2, to downsample the features by half between stages. To strengthen the multi-scale learning of LRSA with negligible cost, we use pyramid pooling [20] to extract multi-scale features when computing the key and value features in LRSA. The desired fixed downsampling size \(m\) for generating the query, key, and value is \(16^{2}\) for semantic segmentation. This size is changed to \(7^{2}\) for ImageNet pretraining because \(m=16^{2}\) is too large for image classification. For the number of channels in the decoder, we set it to 256/384/512/640 for LRFormer-T/S/B/L, respectively.
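For completeness, the overlapped patch embedding between stages amounts to a strided convolution; a sketch follows, where the normalization choice is an assumption.

```python
import torch.nn as nn

class OverlapPatchEmbed(nn.Module):
    """3x3 stride-2 convolution that halves the spatial resolution."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=2, padding=1)
        self.norm = nn.GroupNorm(1, out_dim)  # normalization choice is assumed

    def forward(self, x):
        return self.norm(self.proj(x))
```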
## 4 Experiments
### _Experimental Setup_
**Datasets.** We perform experiments on three well-established datasets. ADE20K [28] is a very challenging scene parsing dataset that covers 150 semantic classes with diverse foregrounds and backgrounds, consisting of 20K, 2K, and 3.3K images for training, validation, and testing, respectively. COCO-Stuff [29] labels both things and stuff with a total of 171 fine-grained semantic labels, with 164K, 5K, 20K, and 20K images for training, validation, test-dev, and test-challenge. Cityscapes [30] is a high-quality dataset for street scene parsing that contains 3K, 0.5K, and 1.5K driving images for training, validation, and testing. These datasets cover a wide range of semantic categories and pose different challenges for semantic segmentation models.
**ImageNet pretraining.** We adopt the popular _timm_ package to implement our network. Following common practice, we first pretrain the backbone encoder of LRFormer on the ImageNet-1K dataset, which has 1.3M training and 50K validation images over 1K object categories. During ImageNet pretraining, the decoder head of LRFormer is omitted. To regularize the training process, we follow the standard data augmentation techniques and optimization strategy used in previous works [9, 17, 55]. We use AdamW [65] as the default optimizer with a learning rate of 0.001, weight decay of 0.05, a _cosine_ learning rate schedule, and a batch size of 1024. No model EMA is applied. The backbone encoder is pretrained for 300 epochs, and we apply layer scale [66] to alleviate the overfitting of large networks, as suggested by recent works [50, 66]. For LRFormer-L, following [17, 50], we additionally pretrain the network on the full ImageNet-22K dataset for 90 epochs and then finetune it on the ImageNet-1K dataset for 30 epochs. In the finetuning, the learning rate is set to 5e-5, and each mini-batch has 512 images.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Stage & Output Size & LRFormer-T & LRFormer-S & LRFormer-B & LRFormer-L \\ \hline \multirow{2}{*}{1} & \multirow{2}{*}{\(F_{1}:\frac{H}{4}\times\frac{W}{4}\)} & \(C=48\), \(E=8\) & \(C=64\), \(E=8\) & \(C=80\), \(E=8\) & \(C=96\), \(E=8\) \\ & & \(C_{h}=24\), \(n_{1}=2\) & \(C_{h}=32\), \(n_{1}=3\) & \(C_{h}=40\), \(n_{1}=4\) & \(C_{h}=48\), \(n_{1}=4\) \\ \hline \multirow{2}{*}{2} & \multirow{2}{*}{\(F_{2}:\frac{H}{8}\times\frac{W}{8}\)} & \(C=96\), \(E=8\) & \(C=128\), \(E=8\) & \(C=160\), \(E=8\) & \(C=192\), \(E=8\) \\ & & & & & \\ & & & & & \\ \hline \multirow{2}{*}{3} & \multirow{2}{*}{\(F_{3}:\frac{H}{16}\times\frac{W}{16}\)} & \(C=240\), \(E=4\) & \(C=320\), \(E=4\) & \(C=400\), \(E=4\) & \(C=480\), \(E=4\) \\ & & \(C_{h}=24\), \(n_{3}=6\) & \(C_{h}=32\), \(n_{3}=12\) & \(C_{h}=32\), \(n_{3}=15\) & \(C_{h}=32\), \(n_{3}=18\) \\ \hline \multirow{2}{*}{4} & \multirow{2}{*}{\(F_{4}:\frac{H}{32}\times\frac{W}{32}\)} & \(C=384\), \(E=4\) & \(C=512\), \(E=4\) & \(C=512\), \(E=4\) & \(C=640\), \(E=4\) \\ & & \(C_{h}=24\), \(n_{4}=3\) & \(C_{h}=32\), \(n_{4}=3\) & \(C_{h}=40\), \(n_{4}=8\) & \(C_{h}=48\), \(n_{4}=8\) \\ \hline \end{tabular}
\end{table} TABLE II: **Detailed settings of the encoders for different LRFormer variants.**\(C\), \(C_{h}\), \(E\), and \(n_{i}\) denote the number of feature channels, channels of each attention head, expansion ratio of FFN, and the number of basic blocks for the \(i\)-th stage, respectively.
**Training for semantic segmentation.** We use the _mmsegmentation_ framework to train our network for semantic segmentation. AdamW [65] is adopted as the default optimizer, with a learning rate of 0.00006, weight decay of 0.01, and a _poly_ learning rate schedule with factor 1.0. Following [9, 17], the weight decay of LayerNorm [69] layers is set to 0. For data augmentation, we use the same strategy as in [9, 17]: a pipeline of image resizing (\(0.5\sim 2\times\)), random horizontal flipping, and random cropping of size 512\(\times\)512, 512\(\times\)512, and 1024\(\times\)1024 for the ADE20K, COCO-Stuff, and Cityscapes datasets, respectively. Note that for our largest model, LRFormer-L, on ADE20K, the crop size is 640\(\times\)640, consistent with recent works. The mini-batch size is set to 16, 16, and 8 images for the ADE20K, COCO-Stuff, and Cityscapes datasets, respectively. We train our network for 160K, 80K, and 160K iterations on the ADE20K, COCO-Stuff, and Cityscapes datasets, respectively. We only use the cross-entropy loss for training and do not employ any extra losses like the auxiliary loss [4] or OHEM [70].
**Testing for semantic segmentation.** During testing, we maintain the original aspect ratio of the input image and resize it to a shorter side of 512 and a longer side not exceeding 2048 for the ADE20K and COCO-Stuff datasets. Following the suggestion of [9], for LRFormer-L on ADE20K we resize the input to a shorter side of 640 and a longer side not exceeding 2560. On the Cityscapes dataset, we apply a crop size of 1024\(\times\)1024 with the sliding window testing strategy, following [9].
divided by FLOPs of approximately 2G, 4.5G, 9G, and 16G, respectively. The last group includes the results pretrained on the ImageNet-22K dataset. The backbone encoder of our LRFormer outperforms recent state-of-the-art CNN-based methods such as ConvNeXt [50] and RepLKNet [51], as well as transformer-based methods like CSwin [18] and P2T [20].
### _Visualization Analysis_
To visually illustrate the effectiveness of our method, we compare against SegFormer [9] on the ADE20K val set and the Cityscapes val set, as shown in Fig. 5 and Fig. 6, respectively. The results indicate that LRFormer generates more precise segmentation maps, particularly in the areas highlighted by the red boxes. We find that LRFormer offers significant advantages in maintaining the integrity of object segmentation and capturing intricate details.
### _Ablation Study_
In the following, we conduct several ablation studies to analyze our LRFormer. Unless otherwise specified, we use the following settings. LRFormer-S is set as the baseline and trained using 8 GPUs for both classification and semantic segmentation. For classification, the network is trained for 100 epochs on the ImageNet-1K [31] dataset. For semantic segmentation, the network is trained for 80K iterations on the ADE20K [28] dataset. Other settings are kept the same as the setup in §4.1.
**Locality capturing.** Our LRSA only computes attention in the low-resolution space, so introducing spatial locality via \(3\times 3\) depth-wise convolutions is beneficial for obtaining fine-grained semantic maps. In Tab. VII, we analyze the effect of the two depth-wise convolutions, one before LRSA and one in the FFN. We observe that the ADE20K performance of LRFormer is improved by 0.5% and 1.4% when adding the depth-wise convolution before LRSA and in the FFN, respectively, with 5% and 24% training memory overhead. Therefore, we add both of them in LRFormer.
**Fixed pooled size.** The results are reported in Tab. VIII. For each basic block, the pooling operation is omitted if the feature map is smaller than the desired pooled size. The default fixed pooled size \(m\) is \(16^{2}\) for semantic segmentation. The results show that larger pooled sizes (\(m\geq 16^{2}\)) achieve saturated performance. The default setting only introduces 5% training memory overhead and FLOPs compared with a pooled size of \(8^{2}\) for semantic segmentation. When
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Model & FLOPs \(\downarrow\) & \#Params \(\downarrow\) & Size & Top-1 Acc. \(\uparrow\) \\ \hline ResNet-18 [6] & 1.8G & 12M & \(224^{2}\) & 68.5\% \\ PVTv2-B1 [23] & 2.1G & 13M & \(224^{2}\) & 78.7\% \\ P2T-Tiny [20] & 1.8G & 12M & \(224^{2}\) & 79.8\% \\ LRFormer-T & 1.8G & 13M & \(224^{2}\) & **80.8\%** \\ \hline ResNet-50 [6] & 4.1G & 26M & \(224^{2}\) & 78.5\% \\ Swin-T [17] & 4.5G & 28M & \(224^{2}\) & 81.5\% \\ ConvNeXt-T [50] & 4.5G & 29M & \(224^{2}\) & 82.1\% \\ MViTv2-T [27] & 4.7G & 24M & \(224^{2}\) & 82.3\% \\ CSwin-T [18] & 4.3G & 23M & \(224^{2}\) & 82.7\% \\ LRFormer-S & 4.7G & 30M & \(224^{2}\) & **83.5\%** \\ \hline Swin-S [17] & 8.7G & 50M & \(224^{2}\) & 83.0\% \\ ConvNeXt-S [50] & 8.7G & 50M & \(224^{2}\) & 83.1\% \\ DAT-S [61] & 9.0G & 50M & \(224^{2}\) & 83.7\% \\ P2T-Large [20] & 9.8G & 55M & \(224^{2}\) & 83.9\% \\ LRFormer-B & 9.3G & 62M & \(224^{2}\) & **84.5\%** \\ \hline DeiT-B [16] & 17.5G & 86M & \(224^{2}\) & 81.8\% \\ RegNetY-16G [72] & 16.0G & 84M & \(224^{2}\) & 82.9\% \\ RepLKNet-31B [51] & 15.3G & 79M & \(224^{2}\) & 83.5\% \\ Swin-B [17] & 15.4G & 88M & \(224^{2}\) & 83.5\% \\ ConvNeXt-B [50] & 15.4G & 89M & \(224^{2}\) & 83.8\% \\ FocalNet-B [73] & 15.4G & 89M & \(224^{2}\) & 83.9\% \\ DAT-B [61] & 15.8G & 88M & \(224^{2}\) & 84.0\% \\ CSwin-B [18] & 15.0G & 78M & \(224^{2}\) & 84.2\% \\ LRFormer-L & 15.7G & 101M & \(224^{2}\) & **85.0\%** \\ \hline Swin-B\({}^{\dagger}\) [17] & 15.4G & 88M & \(224^{2}\) & 85.2\% \\ ConvNeXt-B\({}^{\dagger}\) [50] & 15.4G & 89M & \(224^{2}\) & 85.8\% \\ LRFormer-L\({}^{\dagger}\) & 15.7G & 101M & \(224^{2}\) & **86.4\%** \\ ConvNeXt-B\({}^{\dagger}\) [50] & 45.1G & 89M & \(384^{2}\) & 86.8\% \\ Swin-B\({}^{\dagger}\) [17] & 47.0G & 88M & \(384^{2}\) & 86.4\% \\ LRFormer-L\({}^{\dagger}\) & 46.3G & 101M & \(384^{2}\) & **87.2\%** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: **Classification results on ImageNet-1K [31] dataset.** Results of our method are marked in **bold**. Results marked with “\({}^{\dagger}\)” are pretrained on the ImageNet-22K dataset.
Fig. 5: **Qualitative Visualization on ADE20K val set.** The figures from left to right are input images, ground truth, segmentation maps of SegFormer [9], segmentation maps of our LRFormer. Significant improvements are indicated by red boxes on segmentation maps.
increasing the pooled size to \(32^{2},48^{2},64^{2}\), we obtain only a minor improvement or even decreased performance on ADE20K semantic segmentation. We also observe that the FLOPs and training memory overhead become much more significant (26% \(\sim\) 170%) when the pooled size is larger than \(16^{2}\). Considering the performance, FLOPs, and training memory, a low-resolution setting in LRFormer is much more appropriate.
**Memory and FLOPs.** Our LRSA has a very low computational complexity of only \(O(C^{2}+CN)\). We numerically analyze the efficiency of LRFormer for different input sizes, along with comparisons to the representative method SegFormer [9]. The results on FLOPs, attention FLOPs, and training memory are shown in Tab. XI. Our LRFormer-S costs much less memory and FLOPs than SegFormer-B2. Given an input size of \(1024\times 1024\), the FLOPs of the MHSA operations in our LRFormer are dramatically lower (0.4G _vs._ 62G) than those of the self-attention in SegFormer. The training memory of SegFormer-B5 approaches 32GB, close to the memory limit of a 32GB V100 GPU. In contrast, our LRFormer-L only costs 12.8GB memory.
## 5 Conclusion
In this paper, we presented a novel approach to semantic segmentation by introducing low-resolution self-attention. LRSA computes self-attention in a fixed low-resolution space, regardless of the size of the input image, making the self-attention highly efficient. Extensive experiments on the ADE20K [28], COCO-Stuff [29], and Cityscapes [30] datasets show that LRFormer outperforms state-of-the-art models, suggesting that LRSA is sufficient to maintain a global receptive field at negligible computational cost. This study provides evidence for the effectiveness of LRSA and opens a new direction for future research.
**Acknowledgements.** This work is funded by NSFC (NO. 62225604, 62176130), and the Fundamental Research Funds for the Central Universities (Nankai University, 070-63233089). Computation is supported by the Supercomputing Center of Nankai University.
|
2301.13105 | Generalization on the Unseen, Logic Reasoning and Degree Curriculum | This paper considers the learning of logical (Boolean) functions with focus
on the generalization on the unseen (GOTU) setting, a strong case of
out-of-distribution generalization. This is motivated by the fact that the rich
combinatorial nature of data in certain reasoning tasks (e.g.,
arithmetic/logic) makes representative data sampling challenging, and learning
successfully under GOTU gives a first vignette of an 'extrapolating' or
'reasoning' learner. We then study how different network architectures trained
by (S)GD perform under GOTU and provide both theoretical and experimental
evidence that for a class of network models including instances of
Transformers, random features models, and diagonal linear networks, a
min-degree-interpolator is learned on the unseen. We also provide evidence that
other instances with larger learning rates or mean-field networks reach leaky
min-degree solutions. These findings lead to two implications: (1) we provide
an explanation to the length generalization problem (e.g., Anil et al. 2022);
(2) we introduce a curriculum learning algorithm called Degree-Curriculum that
learns monomials more efficiently by incrementing supports. | Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Kevin Rizk | 2023-01-30T17:44:05Z | http://arxiv.org/abs/2301.13105v2 | # Generalization on the Unseen, Logic Reasoning and Degree Curriculum
###### Abstract
This paper considers the learning of logical (Boolean) functions with focus on the _generalization on the unseen (GOTU)_ setting, a strong case of out-of-distribution generalization. This is motivated by the fact that the rich combinatorial nature of data in certain reasoning tasks (e.g., arithmetic/logic) makes representative data sampling challenging, and learning successfully under GOTU gives a first vignette of an 'extrapolating' or 'reasoning' learner. We then study how different network architectures trained by (S)GD perform under GOTU and provide both theoretical and experimental evidence that for a class of network models including instances of Transformers, random features models, and diagonal linear networks, a _min-degree-interpolator (MDI)_ is learned on the unseen. We also provide evidence that other instances with larger learning rates or mean-field networks reach leaky MDIs. These findings lead to two implications: (1) we provide an explanation to the _length generalization problem_ (e.g., Anil et al. 2022); (2) we introduce a curriculum learning algorithm called _Degree-Curriculum_ that learns monomials more efficiently by incrementing supports.
## 1 Introduction
Neural networks trained by stochastic gradient descent (SGD) have proved to be a powerful learning paradigm when there is enough representative data about the distribution to be learned, specifically in applications involving images or text where there is also a good understanding of the relevant architectures.
There is now an increasing interest in tackling tasks involving more 'reasoning' components, which depart from classical perception tasks of images and texts. While such tasks remain vaguely defined, a list that we consider here under this class is given by: (1) arithmetic and algebra (Saxton et al., 2019; Lewkowycz et al., 2022), (2) synthetic tasks such as PVR (Zhang et al., 2021) and LEGO (Zhang et al., 2022), (3) visual reasoning such as CLEVR (Johnson et al., 2017), (4) physical reasoning such as Phyre (Bakhtin et al., 2019), (5) algorithmic data such as CLRS (Velickovic et al., 2022) and reasoning on graphs (Mahdavi et al., 2022).
One common trademark of these tasks is that the input space is usually of a discrete/combinatorial nature, and consequently, the data may not necessarily lie on a low-dimensional manifold that is well sampled. In various cases, the input space may even have a variable length. This combinatorial nature is already present for text, but it is further amplified in, say, arithmetic, since most symbol combinations could a priori represent a valid input (in contrast to text). Further, the target function in such tasks may rely on a large composition of logical steps or mathematical operations that must be jointly learned. Therefore, in such reasoning tasks, the setting with abundant
representative data seems less prominent. This motivates us to focus on a strong out-of-distribution (OOD) generalization setting.
For instance, when learning arithmetic or logic functions on a training set with a bounded length or bounded number of truth assignments, how would the neural network generalize on more general input assignments (this is a case of length generalization)? When training a neural network to learn a Boolean formula, such as a voting scheme on data from a polarized cohort of voters, how does the network generalize to an unpolarized cohort?
We thus consider the problem of learning functions with a holdout domain where part of the distribution support is barely/never seen at training, and with target functions that are Boolean to capture the discrete and combinatorial nature of various reasoning tasks (e.g., arithmetic, decision trees, logical circuits). Learning successfully under holdout gives a first vignette that the learner is operating with a certain amount of 'reasoning' or 'extrapolation' since memorization is voided on the unseen domain.
### Our main contributions
1. We lay down some basic principles of stronger generalization requirements that rely on the 'generalization on the unseen (GOTU)' performance metric, defined as a strong case of OOD generalization (Section 2), setting a benchmark for 'extrapolating' or 'reasoning' solutions on the considered tasks;
2. We study how standard neural network architectures trained by (S)GD perform on the GOTU metric, in particular, which solutions are learned on the unseen domain for such architectures: (i) We prove two theoretical results showing that for a class of network models including the random features model (Theorem 1) and deep diagonal linear networks (Theorem 2), a _min-degree-interpolator (MDI)_ is learned on the unseen; (ii) We show experimental results (Section 4) supporting that Transformers also tend to have the _min-degree bias_ towards MDIs. The MDI is defined as the interpolator of minimal _degree-profile_, i.e., the Boolean function interpolating the training data and having a Fourier-Walsh transform whose energy concentrates on basis elements of lowest possible degree. Connections to algebraic geometry are given in Appendix C in order to characterize how MDIs can be constructed from the 'vanishing ideal' of the seen data. We also point out that very large learning rates or other architectures (such as mean-field networks) can produce leaky MDIs (i.e., assigning larger mass to higher-degree monomials); see Appendix B.2.
3. Using these, we obtain two additional results: (i) we provide a formal explanation (Theorem 3) to the 'length generalization problem' discussed in (Anil et al., 2022) (for the case of bounded weight vectors, also related to (Zhang et al., 2022)); (ii) we turn the min-degree bias into an asset to accelerate learning by introducing a curriculum learning algorithm called 'Degree-Curriculum' (Algorithm 1), which successively increases the input complexity with respect to Hamming weights in order to incrementally learn the monomials support (see Section 5.2).
## 2 Generalization on the unseen
The classical setting of statistical learning theory requires the control of three error pillars for the generalization of a learning model: (1) the approximation error (depending on the properties/richness of the model class), (2) the estimation error (depending on the properties/richness of the training set), (3) the optimization error (depending on the properties/richness of the training algorithm).
In some of the recent deep learning applications for computer vision and natural language processing, the richness of the training set, the size of the model and its alignment with the data, as well as the computational power, make the three pillars well controlled. The recent success of large language models (LLMs) and scaling laws are perfect examples of this phenomenon (Alabdulmohsin et al., 2022).
As mentioned in the introduction, the type of data occurring in reasoning tasks is slightly different due to the richness and combinatorial nature of the data. To better cope with this challenge, we propose in this paper to depart from the classical generalization objectives described with the three pillars. We focus instead upfront on distribution shift and, more specifically, a strong case of OOD generalization where part of the distribution domain is almost/completely unseen at training but used at testing (in particular, prohibiting any memorization scheme).
Of course, on the unseen domain, all bets are off for generalization: one cannot hope for an algorithm trained on a given data domain to perform well on a larger data domain without any incentive to do so. Yet various algorithms will have various implicit biases on the unseen and thus produce various solutions on the unseen. Understanding this 'bias on the unseen' for different network architectures and Boolean target functions is the objective of this paper.
We start by redefining the generalization error when the train and test distribution are not necessarily the same.
**Definition 1**.: _Let \(X_{1},\ldots,X_{m}\) be samples drawn i.i.d. under \(\mu_{1}\) and labeled by a target function \(f\), and let \(\tilde{f}\) be the function learned by a learning algorithm. The algorithm has \((\mu_{1},\mu_{2},m,\epsilon)\)-generalization (for loss \(\ell\)) if \(\mathbb{E}_{X^{m}\sim\mu_{1}^{\otimes m},X_{m+1}\sim\mu_{2}}[\ell(\tilde{f}_{ X^{m}}(X_{m+1}),f(X_{m+1}))]\leq\epsilon\). In other words, the algorithm is trained under distribution \(\mu_{1}\) and tested under distribution \(\mu_{2}\), producing \(\epsilon\)-test-loss with sample complexity \(m\)._
Now we focus on a special case of interest, a strong case where we essentially see all the data on some part of the domain but miss another part. Naturally, we will next study a 'soft version' of this metric, where both in-distribution and out-of-distribution generalization are considered, but this strong case is already rich and insightful.
**Definition 2** (Generalization on the Unseen).: _Consider a given sample space \(\Omega\). During training, part of \(\Omega\) is not sampled, and we call this the unseen domain (or the holdout set) \(\mathcal{U}\). At testing, however, we sample from the full set \(\Omega\). This represents a special case of the previous definition where \(\mu_{1}=\mu|_{\Omega\setminus\mathcal{U}}\) and \(\mu_{2}=\mu|_{\mathcal{U}}\) for some \(\mu\)._
_We now further specify the setting: we assume that the training error is 0 on the training set \(\Omega\setminus\mathcal{U}\), e.g., seeing all the samples in \(\Omega\setminus\mathcal{U}\), and define the generalization on the unseen (GOTU) for an algorithm \(\tilde{f}\) and target function \(f\) as_
\[GOTU(f,\tilde{f},\mathcal{U})=\mathbb{E}_{X\sim\mathcal{U}}[\ell(\tilde{f}_{ \Omega\setminus\mathcal{U}}(X),f(X))]. \tag{1}\]
_Note that at testing we sample only on \(\mathcal{U}\), and uniformly so, because we assumed zero training error on the seen domain._
A few remarks are in order:
* GOTU is a special case of OOD and distribution shift setting that is extremal in the sense that it completely gives access to part of the distribution domain and completely omits the complement. Since we consider rich enough models to interpolate the data, the 'statistical' and 'approximation' pillars of the learning problem are removed (there may still be randomness used by the learning algorithm, thus statistical analysis may still be relevant). The problem thus turns into a pure optimization problem where the central object of study is the implicit bias of the learning algorithm on the unseen. Note that this is not exactly the same implicit bias as studied in the setting of overparametrized models (Soudry et al., 2017; Gunasekar et al., 2017, 2018; Arora et al., 2019; Razin and Cohen, 2020; Chizat and Bach, 2020; Moroshko et al., 2020) as here we have the distribution shift and investigate the behavior of the equivalence class of interpolators on the unseen \(\mathcal{U}\).
* In some experiments, we replace the 'perfect' training data on the seen domain with a 'large' sampling on the seen domain. We defined the GOTU in the extreme case to simplify the number of parameters to track and to allow for cleaner theorem statements, but there could also be a sampling rate on \(\Omega\setminus\mathcal{U}\); this is left for future research. Also, we assume here a uniform prior because this is a natural first case for arithmetic/logic tasks, but this could also be relaxed.
* We will consider different subsets \(\mathcal{U}\) in the applications. We are sometimes interested in \(\mathcal{U}\)'s for which the data invariances or equivariances could give hope to learn. This is further specified with the next definition.
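To make Definition 2 concrete, the GOTU metric of Eq. (1) under the squared loss is simply an average error over the unseen domain. A minimal sketch (ours; the toy target previews the examples of Section 4):

```
import itertools
import numpy as np

def gotu(f, f_hat, U):
    # GOTU of Eq. (1) with the squared loss: average error over the unseen domain U.
    return np.mean([(f_hat(x) - f(x)) ** 2 for x in U])

# Toy check: target x0*x1 with unseen domain (x0, x1) = (-1, -1).
# The interpolator x0 + x1 - 1 pays ((-3) - 1)^2 = 16 on every unseen point.
d = 4
U = [x for x in itertools.product([1, -1], repeat=d) if (x[0], x[1]) == (-1, -1)]
print(gotu(lambda x: x[0] * x[1], lambda x: x[0] + x[1] - 1, U))  # 16.0
```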
**Definition 3**.: _A function \(f:\Omega\to\mathbb{R}\) is (1) \(G\)-invariant or invariant under the group action \(G\) on \(\Omega\) if \(f(gx)=f(x)\) for all \(g\in G\), \(x\in\Omega\); (2) \(G_{i,o}\)-equivariant or equivariant under the action \(G_{i,o}\) if \(f(g_{i}(x))=g_{o}(f(x))\) for all \((g_{i},g_{o})\in G_{i,o}\) and \(x\in\Omega\)._
As stated earlier, we cannot expect algorithms to generalize on the unseen domain by themselves. However, we can hope that certain training algorithms will catch invariances/equivariances and thus extrapolate. For example, consider the parity function on \(d\) bits defined as \(f(x_{1},\dots,x_{d})=x_{1}x_{2}\cdots x_{d}\). This function is permutation-invariant (group \(G=S_{d}\)). In particular, if one uses a model favoring permutation symmetries, one may not have to see all inputs that are permutation equivalent. There has been a series of works designing layers/architectures that are equivariant under a prespecified family of actions (e.g., all permutations) (Ravanbakhsh et al., 2017; Zaheer et al., 2017; Hartford et al., 2018). More recently, (Zhou et al., 2020) proposes a method to learn invariances in a multi-task setting using meta-learning. An example of an equivariant Boolean function would be the majority function on \(\{+1,-1\}^{d}\), \(d\) odd, with the action of global bit flipping on the input and the output (since the majority is reversed if all the bits are flipped). Thus a holdout on vectors of dual Hamming weight (i.e., the bit-flipped counterparts) could again be handled by a model having such an equivariance. Note that we are also interested in cases where these equi/in-variances are not present in the target, to understand what solutions neural nets favor on the unseen.
## 3 Results
We assume \(f:\Omega\to\mathbb{R}\) with \(\Omega=\{\pm 1\}^{d}\). We introduce some preliminary material on Boolean functions in the next section and then state our results.
### Preliminaries
Fourier-Walsh transform. Any Boolean function \(f:\{\pm 1\}^{d}\to\mathbb{R}\) can be expressed as \(f(x)=\sum_{T\subseteq[d]}\hat{f}(T)\chi_{T}(x)\), where \(\chi_{T}(x)=\prod_{i\in T}x_{i}\) are the monomials and \(\hat{f}(T)=\mathbb{E}_{X\sim U(\{\pm 1\}^{d})}[\chi_{T}(X)f(X)]\) are the coefficients.
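For small \(d\), these coefficients can be computed by exhaustive enumeration over the cube. A minimal sketch (ours):

```
import itertools
import numpy as np

def fourier_coefficient(f, T, d):
    # \hat{f}(T) = E_{X ~ U({+-1}^d)}[chi_T(X) f(X)], computed by full enumeration.
    total = 0.0
    for x in itertools.product([1, -1], repeat=d):
        chi = np.prod([x[i] for i in T]) if T else 1
        total += chi * f(x)
    return total / 2 ** d

f = lambda x: x[0] * x[1]                   # f(x) = x0 * x1
print(fourier_coefficient(f, (0, 1), d=4))  # 1.0
print(fourier_coefficient(f, (0,), d=4))    # 0.0
```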
Unseen domain and vanishing ideals. We now introduce the unseen domain \(\mathcal{U}\). First, consider the canonical holdout, when a bit is frozen during training, e.g., \(x_{i}=1\) and \(\mathcal{U}=\{x\in\{\pm 1\}^{d}:x_{i}=-1\}\). In this case, one can see that any function of the form \(f(x)+(1-x_{i})\Delta(x)\) is an equivalent interpolator on the training data. This can be extended to more general sets. For instance, consider the case that \((x_{i},x_{j})\neq(-1,-1)\) during training. This is equivalent to the condition \((x_{i}-1)(x_{j}-1)=0\) for the training samples, which gives \(f(x)+\Delta(x)(x_{i}-1)(x_{j}-1)\) as the equivalence class of interpolators. For a general unseen domain \(\mathcal{U}\subseteq\Omega=\{\pm 1\}^{d}\), there exist polynomials \(v_{1}(x),\ldots,v_{k}(x)\) such that \(x\in\Omega\setminus\mathcal{U}\iff v_{1}(x)=v_{2}(x)=\ldots=v_{k}(x)=0\) (see Appendix C). Consequently, all solutions of the form \(f(x)+\Delta_{1}(x)v_{1}(x)+\cdots+\Delta_{k}(x)v_{k}(x)\) are equivalent during training. This is the quotient space of \(f\) under the vanishing ideal defined by \(\Omega\setminus\mathcal{U}\). We refer to Appendix C for more details on this relation to algebraic geometry.
We now define measures of complexity relevant to us.
**Definition 4** (Degree).: _For a function \(f:\{\pm 1\}^{d}\to\mathbb{R}\), the degree \(\deg(f)\) refers to the maximum degree of the monomials present in the Fourier-Walsh transform of \(f\)._
**Definition 5** (Degree profile).: _For a function \(f:\{\pm 1\}^{d}\to\mathbb{R}\), we define the degree-profile of \(f\), \(\deg_{\mathrm{pro}}(f)\in\mathbb{R}^{d+1}\), such that \(\deg_{\mathrm{pro}}(f)_{i}=\sum_{T\subseteq[d],|T|=d+1-i}\hat{f}(T)^{2}\) for \(1\leq i\leq d+1\). Furthermore, we consider the lexicographic ordering on these vectors, i.e., \(\deg_{\mathrm{pro}}(f)<\deg_{\mathrm{pro}}(g)\) iff there exists \(i\) such that \(\deg_{\mathrm{pro}}(f)_{i}<\deg_{\mathrm{pro}}(g)_{i}\) and \(\deg_{\mathrm{pro}}(f)_{j}=\deg_{\mathrm{pro}}(g)_{j}\) for all \(1\leq j<i\)._
Note that the degree-profile is a stronger notion than the degree, i.e., \(\deg(f)<\deg(g)\implies\deg_{\mathrm{pro}}(f)<\deg_{\mathrm{pro}}(g)\).
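The indexing convention of Definition 5 (position 1 carries the top degree, so the lexicographic comparison penalizes high-degree mass first) can be made concrete in a few lines; names and indexing are ours:

```
import numpy as np

def degree_profile(coeffs, d):
    # coeffs: dict mapping frozenset T -> Fourier coefficient \hat{f}(T).
    # prof[i-1] collects the squared mass at degree d + 1 - i, so the first
    # entry is the top degree, matching the ordering of Definition 5.
    prof = np.zeros(d + 1)
    for T, c in coeffs.items():
        prof[d - len(T)] += c ** 2
    return tuple(prof)  # tuples compare lexicographically in Python

d = 4
f = {frozenset({0, 1}): 1.0}                                       # f = x0*x1
g = {frozenset({0}): 1.0, frozenset({1}): 1.0, frozenset(): -1.0}  # g = x0+x1-1
print(degree_profile(f, d))                         # mass 1.0 at degree 2
print(degree_profile(g, d))                         # mass 2.0 at degree 1, 1.0 at degree 0
print(degree_profile(g, d) < degree_profile(f, d))  # True: g is lexicographically smaller
```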
**Definition 6** (Min-degree interpolators (MDIs)).: _Consider a target function \(f\) and unseen domain \(\mathcal{U}\). The set of interpolators is defined as \(\mathcal{F}_{\mathrm{int}}(f,\mathcal{U})=\{g:\{\pm 1\}^{d}\to\mathbb{R}\mid g(x)=f(x ),\forall x\in\mathcal{U}^{c}\},\) where \(\mathcal{U}^{c}\coloneqq\Omega\setminus\mathcal{U}\) is the seen domain. We call an interpolator a min-degree interpolator (MDI) of \((f,\mathcal{U})\) (or of \(\{x,f(x)\}_{x\in\mathcal{U}^{c}}\)) if it is an element of \(\mathcal{F}_{\mathrm{int}}(f,\mathcal{U})\) that minimizes the degree-profile with respect to the lexicographic order. This means that no part of the Fourier-Walsh expansion of the interpolator could be replaced with a lower-degree alternative and still interpolate._
For example, consider the case of 'canonical holdout' where we always have \(x_{1}=1\) at training, i.e., \(\mathcal{U}=\{x\in\{\pm 1\}^{d}:x_{1}=-1\}\), and target function \(x_{1}x_{2}+x_{1}x_{3}x_{4}\). Here, both \(x_{1}x_{2}+x_{3}x_{4}\) and \(x_{2}+x_{3}x_{4}\) are of degree \(2\), but only \(x_{2}+x_{3}x_{4}\) is an MDI because \(x_{1}x_{2}\) in the first function is replaceable with the lower-degree alternative \(x_{2}\). Furthermore, note that there may be multiple interpolators having minimal max-degree rather than minimal degree-profile. For example, consider the unseen domain induced by the training constraint \(x_{i}=x_{j}\) (i.e., \(\mathcal{U}=\{x:x_{i}\neq x_{j}\}\)) and target function \(f(x)=x_{i}+x_{j}\). Then \(2x_{i}\) and \(x_{i}+x_{j}\) are both interpolators with minimal max-degree, but only \(x_{i}+x_{j}\) is an MDI with a minimal degree-profile.
In fact, the MDI is always unique: if \(f_{1}\) and \(f_{2}\) are interpolators with the same degree-profile, then \(\frac{f_{1}+f_{2}}{2}\) is an interpolator with a strictly smaller degree-profile unless \(f_{1}=f_{2}\).
### Main theoretical results
We show that certain network models have a min-degree implicit bias on the unseen. Our first result is on learning sparse Boolean functions with random features model.
**Definition 7**.: _We consider a \(P\)-dimensional latent function \(h:\{\pm 1\}^{P}\to\mathbb{R}\) embedded in ambient dimension \(d\). More precisely, we consider learning \(f:\{\pm 1\}^{d}\to\mathbb{R}\) such that \(f(x)=h(x_{i_{1}},\ldots,x_{i_{P}})\). We further denote \(I=\{i_{1},\ldots,i_{P}\}\) and \(x_{I}=(x_{i_{1}},\ldots,x_{i_{P}})\) (same as (Abbe et al., 2022b)). We also assume that some specific combinations of \(x_{I}\) are not present in the training samples, i.e., \(x_{I}\notin\mathcal{U}^{*}\), and define the unseen domain as \(\mathcal{U}=\{x\in\{\pm 1\}^{d}\mid x_{I}\in\mathcal{U}^{*}\}\)._
Note that considering sparse functions enables us to define the unseen domain properly and to differentiate between the unseen domain (where there is minimal structure) and unseen data (for example, when there is uniform sampling).
Our first result is for the random features (RF) model (Rahimi and Recht, 2007). The RF model was initially introduced to approximate kernels and improve the time complexity of kernel methods (Rahimi and Recht, 2007). RF models can also be viewed as approximations of neural networks in the NTK regime (Jacot et al., 2018; Ghorbani et al., 2019; Mei and Montanari, 2022). In this paper, we take the latter view on the RF model as well, with the following formulation.
**Definition 8** (Random features model).: _Consider \(x\in\mathbb{R}^{d}\) as the input; we define random features model with \(N\) random features as_
\[f_{\mathrm{RF}}(x;a,w,b)=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}a_{i}\sigma(\langle w _{i},x\rangle+b_{i}), \tag{2}\]
_where \(a_{i}\in\mathbb{R}\) are the trainable parameters, \(\sigma\) is the activation function, and \(w_{i},b_{i}\sim\mathcal{N}(0,\frac{1}{d})^{\otimes d}\otimes\mathcal{N}(0, \frac{1}{d})\) are the random weights and biases. We use \(\phi_{w_{i}}(x)\coloneqq\sigma(\langle w_{i},x\rangle+b_{i})\) as a shorthand notation for the \(i\)-th feature._
The following activation property is used as a strengthening of the condition in (Abbe et al., 2022c).
**Definition 9**.: _(Strongly expressive) We call a continuous activation function \(\sigma:\mathbb{R}\to\mathbb{R}\) strongly expressive up to \(P\) if (A1) \(\sigma\) satisfies the upper bound \(\mathbb{E}_{g\sim\mathcal{N}(0,2)}[\sigma(g)^{4}]<\infty\); and (A2) for all \(T\subseteq[d]\) with \(|T|\leq P\), \(\mathbb{E}_{w,b}[\hat{\phi}_{w,b}(T)^{2}]=\Omega_{d}(d^{-|T|})\), where \(\hat{\phi}_{w,b}(T)\coloneqq\mathbb{E}_{x}[\sigma(\langle w,x\rangle+b)\chi_{T}(x)]\) is the Fourier coefficient of \(T\) in the random feature created by \(w,b\)._
As will be proven in Lemma 2, property (A1) implies \(\mathbb{E}[\hat{\phi}_{w,b}(T)^{2}]=O(d^{-|T|})\) for \(|T|=O_{d}(1)\). Therefore, the second condition (A2) ensures that the model is able to strongly express monomials of degree \(k\leq P\).
We note that \(\hat{\phi}_{w,b}(T)^{2}\) has been studied in (Abbe et al., 2022c) as the initial alignment (INAL) between monomial \(\chi_{T}(x)\) and \(\phi_{w,b}(x)\). Indeed, based on Lemma A.2. of (Abbe et al., 2022c), the following conditions give us a family of strongly-expressive activation functions.
**Lemma 1**.: _Any continuous polynomially-bounded function \(\sigma\) such that its first \(P\) coefficients in the Hermite expansion are non-zero is strongly expressive up to \(P\)._
For example, polynomial activation functions such as \((1+x)^{k}\) are strongly expressive up to \(k\).
**Theorem 1**.: _Let \(f:\{\pm 1\}^{d}\to\mathbb{R}\) be a \(P=O_{d}(1)\)-sparse function to be learned in the GOTU setting (Definition 7) by a random features model with parameters \((N,\sigma,a,b,w)\) (Definition 8) with a strongly expressive activation function. As \(N\) diverges, the random features model can interpolate the training data with high probability. Furthermore, defining \(f^{d,N}_{\mathrm{RF}}(\mathcal{U})\) to be the interpolating solution minimizing \(\|a\|_{2}\) (i.e., the solution reached by gradient descent/flow starting from \(a=0\) under \(\ell_{2}\) loss), we have w.h.p._
\[f^{d,N}_{\mathrm{RF}}(\mathcal{U})\overset{N\to\infty}{\to}\mathrm{MinDegInterp }(f,\mathcal{U})+\epsilon_{d} \tag{3}\]
_where \(\mathrm{MinDegInterp}(f,\mathcal{U})\) is the min-degree interpolator (MDI) on the training data \(\{x,f(x)\}_{x\in\mathcal{U}^{c}}\) and \(\epsilon_{d}\) is a function on \(P\) variables that tends pointwise to 0 as \(d\) diverges. (We refer to the above as a'min-degree bias' or 'MDI bias'.)_
Proof Sketch.: In Lemma 2, we show that random features generated by a strongly expressive \(\sigma\) have in general a decaying degree-profile with \(\mathbb{E}_{w,b}[\hat{\phi}_{w,b}(T)^{2}]=\Theta(d^{-|T|})\) for \(|T|\leq P\). We then investigate the interpolators in the Fourier-Walsh basis and show that the minimality condition on \(\|a\|_{2}\) is equivalent to learning the minimal degree-profile interpolator, since high-degree monomials are less expressed in the features and consequently larger \(\|a\|\)'s are required to capture them. The full proof relies on concentration results and Boolean Fourier analysis and is given in Appendix A.
**Remark 1** (Other activation functions).: _Note that Theorem 1 does not hold for an arbitrary activation function. For example, if the activation function is \(\sigma(z)=z^{2}\), one can easily see that \(\mathbb{E}_{w,b}[\hat{\phi}_{w,b}(\{i\})^{2}],\mathbb{E}_{w,b}[\hat{\phi}_{w,b}(\{i,j\})^{2}]\in\Theta_{d}(d^{-2})\), and hence degree 1 monomials have no priority over degree 2 monomials. An important case is the ReLU activation. Results of (Abbe et al., 2022c) show that for the ReLU activation and \(|T|\leq P\), we have_
\[\mathbb{E}_{w,b}[\hat{\phi}_{w,b}(T)^{2}]=\begin{cases}\Omega(d^{-|T|})&|T| \;\mathrm{even}\;\mathrm{or}\;|T|=1\\ \Omega(d^{-|T|-1})&\mathrm{otherwise}\end{cases}. \tag{4}\]
_Consequently, the min-degree bias still exists, but in a weaker form. For further discussion and experiments on ReLU activation refer to Appendix A._
In the experiments, we show that having the sparsity assumption may not be necessary in some cases, and the min-degree bias can be observed for small values of \(d\) and \(N\) as well. Furthermore, we show that the min-degree bias goes beyond the random features and NTK models; see Section 4.
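As a small-scale illustration of Theorem 1, the following NumPy sketch (ours; hyperparameters are illustrative) fits a random features model with activation \((1+z)^{3}\), strongly expressive up to 3 by Lemma 1, on the target \(x_{0}x_{1}\) with \((x_{0},x_{1})=(-1,-1)\) held out. The min-\(\|a\|_{2}\) interpolator is obtained from `numpy.linalg.lstsq`, and its Fourier coefficients should approach the MDI \(x_{0}+x_{1}-1\), up to the finite-\(d\) leakage mentioned above:

```
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, N = 8, 4000
act = lambda z: (1 + z) ** 3               # strongly expressive up to 3 (Lemma 1)
W = rng.normal(0, 1 / np.sqrt(d), (N, d))  # w_i ~ N(0, 1/d), i.i.d. over coordinates
b = rng.normal(0, 1 / np.sqrt(d), N)       # b_i ~ N(0, 1/d)

cube = np.array(list(itertools.product([1.0, -1.0], repeat=d)))
seen = cube[~((cube[:, 0] == -1) & (cube[:, 1] == -1))]  # U^c: (x0,x1) != (-1,-1)
y = seen[:, 0] * seen[:, 1]                              # target f(x) = x0*x1

feats = act(seen @ W.T + b) / np.sqrt(N)
a, *_ = np.linalg.lstsq(feats, y, rcond=None)  # min-norm interpolator (GD from a=0)

f_nn = act(cube @ W.T + b) / np.sqrt(N) @ a    # learned function on the full cube
coef = lambda T: np.mean(cube[:, T].prod(axis=1) * f_nn)
print("x0*x1:", coef([0, 1]))                  # small (leakage shrinks as d grows)
print("x0:", coef([0]), "x1:", coef([1]), "const:", f_nn.mean())  # near 1, 1, -1
```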
We next move to a theorem on deep diagonal linear neural networks, where we will be able to analyze non-linear dynamics for gradient flow. Note that in the case of linear functions, replacing a degree-1 variable \(x_{k}\) with the degree-0 variable 1 is the only case of lower-degree bias. In other words, we consider the case where the unseen domain is \(\mathcal{U}=\{x\mid x_{k}=-1\}\) (referred to as the canonical holdout in (Abbe et al., 2022a)). We show that diagonal linear neural networks learn the min-degree interpolator with a leakage factor that vanishes as their initialization scale becomes small enough or as their depth grows large enough. We now define diagonal linear neural networks with bias.
**Definition 10** (Diagonal linear neural network with bias).: _We define a diagonal linear neural network with bias as an extension of diagonal linear networks, where there is a single bias parameter at the last layer. More precisely,_
\[\theta=(b,w_{1}^{(1)},\ldots,w_{d}^{(1)},\ldots,w_{1}^{(L)},\ldots,w_ {d}^{(L)}),\] \[f_{\mathrm{NN}}(x_{1},\ldots,x_{d};\theta)=b+\sum_{i=1}^{d}\left( \prod_{l=1}^{L}w_{i}^{(l)}\right)x_{i},\]
_where \(\theta\), \(d\), and \(L\) represent the model's parameters, input dimension, and depth, respectively._
**Theorem 2**.: _Let \(f:\{\pm 1\}^{d}\rightarrow\mathbb{R}\) be a linear function, i.e., \(f(x_{1},\cdots,x_{d})=\hat{f}(\emptyset)+\sum_{i=1}^{d}\hat{f}(\{i\})x_{i}\). Consider learning this function using gradient flow on a diagonal linear neural network (of depth \(L\geq 2\)) while the \(k\)-th component is frozen at training (the canonical holdout setting with \(\mathcal{U}=\{x\in\{\pm 1\}^{d}\mid x_{k}=-1\}\)). For any \(\epsilon>0\), there exists an \(\alpha_{\max}\) (increasing with \(L\)) such that if all the model's parameters are initialized i.i.d. under the uniform distribution \(U(-\alpha,\alpha)\) for any \(0<\alpha\leq\min\{\alpha_{\max},\frac{1}{2}\}\), then, with probability 1, the training loss converges to 0, and the coefficient of the learned function \(f_{\mathrm{NN}}\) on the high-degree monomial \(x_{k}\) is less than \(\epsilon\), i.e., \(\hat{f}_{\mathrm{NN}}(\{k\})\leq\epsilon\)._
_Proof Sketch._ We prove this theorem by analyzing the trajectory of gradient flow on the parameters. First, we show the convergence of the model. Note that \(\hat{f}_{\mathrm{NN}}(\{k\})\leq\epsilon\) is equivalent to \(x_{k}\) being ignored by the neural network, i.e., the frozen variable \(x_{k}\) not contributing to the bias learned by the neural network. We pursue the proof in two steps. As the first step, we show there exists a time \(T_{\epsilon}\) such that the bias is almost learned by the bias parameter while the rest of the parameters, and hence the contribution of \(x_{k}=1\), are still small (note that this point is close to a saddle). For the second step, we show that the contribution of \(x_{k}=1\) to the bias does not change much throughout the rest of the training process.
**Remark 2**.: _Note that with the assumptions of Theorem 2, the generalization error of the model becomes1_
Footnote 1: The factor 4 is removed if we consider the half-quadratic loss and GOTU on the full space.
\[GOTU(f,f_{\mathrm{diag}},\mathcal{U}=\{x:x_{k}=-1\})=4\mathrm{Inf}_{k}(f)+O( \epsilon),\]
_where \(\mathrm{Inf}_{k}(f)=\hat{f}(\{k\})^{2}\) is the Boolean influence of the \(k\)-th bit (O'Donnell, 2014). This confirms the empirical observations of (Abbe et al., 2022a) on fully connected linear neural networks. Indeed, we expect our proof to generalize to fully connected linear neural networks. Assuming a small enough initialization, one can show that the bias parameter of the last layer learns the bias of the target function while the rest of the parameters do not move much, which is the first step of the proof. The second step, showing that the contribution to the bias remains the same after this point, requires a more precise analysis since the learning of the network's weights and biases is closely coupled._
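A gradient-descent simulation of Theorem 2's setting can be written in a few lines (ours; plain GD stands in for gradient flow, and the learning rate and step budget are heuristic). With \(x_{0}\) frozen to \(+1\) at training and a small uniform initialization, the learned coefficient on \(x_{0}\) should stay near zero while the bias parameter absorbs its contribution:

```
import itertools
import numpy as np

rng = np.random.default_rng(1)
d, L, lr, alpha, steps = 6, 3, 0.05, 0.01, 100000
beta = np.array([1.0, 1.0, 0.5, 0.0, 0.0, 0.0])  # target f(x) = x0 + x1 + 0.5*x2

# Canonical holdout: only samples with x0 = +1 are seen at training.
X = np.array([x for x in itertools.product([1.0, -1.0], repeat=d) if x[0] == 1.0])
y = X @ beta

W = rng.uniform(-alpha, alpha, (L, d))  # w_i^{(l)} i.i.d. ~ U(-alpha, alpha)
b = rng.uniform(-alpha, alpha)          # bias parameter of the last layer

for _ in range(steps):
    c = W.prod(axis=0)                  # effective linear coefficients
    r = b + X @ c - y                   # residuals on the training set
    g_c = X.T @ r / len(X)              # gradient of the half-MSE w.r.t. c
    grads = [g_c * np.prod(np.delete(W, l, axis=0), axis=0) for l in range(L)]
    for l in range(L):                  # chain rule through the layer product
        W[l] -= lr * grads[l]
    b -= lr * r.mean()

c = W.prod(axis=0)
print("coef on frozen x0:", c[0])                      # near 0: the MDI drops x0
print("bias:", b, "coef x1:", c[1], "coef x2:", c[2])  # near 1, 1, 0.5
```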
## 4 Experiments
In this section, we present our experimental results on the min-degree bias of neural networks.2 We have used four architectures for our experiments: a multi-layer perceptron (MLP) with 4 hidden layers, the random features model (Definition 8), Transformers (Vaswani et al., 2017), and a 2-layer neural network with mean-field parametrization (Mei et al., 2018). In doing so, we consider a spectrum of models covering lazy regimes, active/feature-learning regimes, and models of practical interest. For the Transformer, \(\pm 1\) bits are first encoded using an encoding layer and then passed to the Transformer; for the rest of the architectures, binary vectors are used directly as the input.
For each experiment, we generate all binary sequences in \(\mathcal{U}^{c}=\{\pm 1\}^{d}\setminus\mathcal{U}\) for training.3 We then train models under the \(\ell_{2}\) loss. We employ the Adam (Kingma and Ba, 2014) optimizer for the Transformer model and mini-batch SGD for the rest of the architectures. We also use moderate learning rates, as the learning rate can affect the results (refer to Appendix B.2). During training, we evaluate the coefficients of the function learned by the neural network using \(\hat{f}_{\text{NN}}(T)=\mathbb{E}_{x\sim U(\{\pm 1\}^{d})}[\chi_{T}(x)f_{\text{NN}}(x)]\) to understand which interpolating solution has been learned by the model. Moreover, each experiment is repeated 10 times and averaged results are reported. For more information on the setup of experiments, hyperparameter sensitivity analysis, and additional experiments, refer to Appendix B.
Footnote 3: In practice, one can generate a large enough number of samples so that the function is learned well on the training distribution.
Here, we consider the following 3 functions and unseen domains on input dimension 15. Dimension 15 is used as a large dimension where the training data can be generated explicitly but has otherwise no specific meaning (Appendix B provides other instances). The first function is an example of degree-2 where the unseen domain induces a degree-1 MDI. The second example is the classic degree-2 parity or XOR function. The third example is such that the function is symmetric under cyclic permutations while its MDI is not, in order to test whether certain models would favor symmetric interpolators. We also consider other examples such as the majority function in Appendix B. Let:
1. \(f_{1}(x)=x_{0}x_{1}-1.25x_{1}x_{2}+1.5x_{2}x_{0}\) and \(\mathcal{U}_{1}=\{x_{0}x_{1}x_{2}=-1\}\). In this case we have \(x_{0}x_{1}=x_{2}\), \(x_{1}x_{2}=x_{0}\), and \(x_{2}x_{0}=x_{1}\) for the training samples, hence the min-degree interpolator is \(\tilde{f}_{1}(x)=x_{2}-1.25x_{0}+1.5x_{1}\).
2. \(f_{2}(x)=x_{0}x_{1}\) and \(\mathcal{U}_{2}=\{(x_{0},x_{1})=(-1,-1)\}\). Note that the min-degree interpolator is \(\tilde{f}_{2}(x)=x_{1}+x_{0}-1\) on the seen domain.
3. \(f_{3}(x)=x_{0}x_{1}x_{2}+x_{1}x_{2}x_{3}+\cdots+x_{13}x_{14}x_{0}+x_{14}x_{0}x _{1}\) and \(\mathcal{U}_{3}=\{(x_{0},x_{1},x_{2})=(-1,-1,-1)\}\). In this case the min-degree interpolator is given by \(\tilde{f}_{3}(x)=(x_{0}x_{1}+x_{1}x_{2}+x_{2}x_{0}-x_{0}-x_{1}-x_{2}+1)+x_{1}x _{2}x_{3}+\cdots+x_{13}x_{14}x_{0}+x_{14}x_{0}x_{1}\).
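The three interpolation identities above can be verified by direct enumeration over the seen domains; a quick check (ours):

```
import itertools

f1   = lambda x: x[0]*x[1] - 1.25*x[1]*x[2] + 1.5*x[2]*x[0]
mdi1 = lambda x: x[2] - 1.25*x[0] + 1.5*x[1]

for x in itertools.product([1, -1], repeat=3):
    if x[0]*x[1]*x[2] == 1:                  # seen domain for U_1
        assert f1(x) == mdi1(x)
    if (x[0], x[1]) != (-1, -1):             # seen domain for U_2
        assert x[0]*x[1] == x[0] + x[1] - 1
    if x != (-1, -1, -1):                    # seen triples for U_3
        assert x[0]*x[1]*x[2] == (x[0]*x[1] + x[1]*x[2] + x[2]*x[0]
                                  - x[0] - x[1] - x[2] + 1)
print("all three identities hold on the seen domains")
```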
We generally obtain that the Transformer exhibits a strong MDI bias. The solutions learned by the Transformer for \(f_{1}\), \(f_{2}\), \(f_{3}\) are shown in Figure 1. It can be seen that these are very close to the MDI in all cases. However, other models also display a 'leaky' MDI bias where higher degree monomials are still captured along with the lower degree ones. In particular, Figure 2 shows a mean-field model and an MLP having such leaky MDI. Note that the RF model in Figure 2 has a small leakage as well, simply caused by the ambient dimension being \(d=15\) and not diverging as in Theorem 1. In Appendix B.2, we also discuss the effect that large learning rates may increase the leakage.
## 5 Further implications
We now discuss some of the consequences of the min-degree bias. First, we explain why the min-degree bias makes some length generalization problems difficult. Second, we show how to turn the min-degree bias into a strategy for curriculum learning and enable an improved sample complexity.
### Length generalization
Several recent works on the reasoning of neural networks evaluate whether neural networks are able to generalize when the length of the problem is increased, and it is often found that neural networks struggle with length generalization (Zhang et al., 2022; Anil et al., 2022). For example, consider learning the parity problem \(\text{parity}(x_{1},\dots,x_{d})=x_{1}x_{2}\cdots x_{d}\) on \(x_{i}=\pm 1\). Two variants of this task can be considered: (1) the number of bits, \(d\), is increased during test, and (2) \(d\) is the same during training and test; however, during training, only samples with a bounded number of \(-1\)'s are observed, i.e., the radius \(r\) Hamming ball \(B_{r}\coloneqq\{x\in\{\pm 1\}^{d}\mid\#_{-1}(x)\leq r\}\) (note that \(+1\) is the identity element in this setting). (Anil et al., 2022) show that both of these variants capture the notion and difficulty of length generalization.4 Here, we focus on the latter variant which falls under our GOTU setting.
Footnote 4: We train our model directly on the parity function, whereas (Anil et al., 2022) uses large language models and fine-tunes them on parity tasks. In this sense, our approach is closer to (Zhang et al., 2022), which also trains models directly on their synthetic task.
Figure 1: Target functions \(f_{1}\), \(f_{2}\), and \(f_{3}\) learned by the Transformer (model details in Appendix B). Note that in all of the cases the Transformer model learns a solution very close to the min-degree interpolator. More precisely, the coefficients of \(x_{0}x_{1},x_{1}x_{2},x_{2}x_{0}\) in the left plot (\(f_{1}\)), the coefficient of \(x_{0}x_{1}\) in the middle plot (\(f_{2}\)), and the coefficient of \(x_{0}x_{1}x_{2}\) in the right plot (\(f_{3}\)) are close to zero.
Figure 2: \(f_{4}(x_{0},\dots,x_{14})=x_{0}x_{1}\) learned by the RF, MLP, and mean-field models while samples satisfying \((x_{0},x_{1})=(-1,-1)\) are withheld during training. Consequently, \(x_{0}x_{1}\) (solid orange line) is replaceable by \(x_{0}+x_{1}-1\) (dashed lines). The MLP and mean-field models learn a leaky min-degree interpolator with the \(x_{0}x_{1}\) coefficient bounded away from \(0\). The RF model learns the min-degree interpolator with a small leakage since the ambient dimension is \(d=15\); this leakage disappears as \(d\) increases as stated in Theorem 1.
**Theorem 3**.: _Consider a Boolean function \(f:\{\pm 1\}^{d}\to\mathbb{R}\). Then (i) there exists a unique function \(f_{r}:\{\pm 1\}^{d}\to\mathbb{R}\) such that \(\forall x\in B_{r},f_{r}(x)=f(x)\) and \(\deg(f_{r})\leq r\); (ii) when \(f\) is a parity function (monomial) of degree \(k\leq d\), the \(\ell_{2}\)-test-loss of the MDI is larger than \(\binom{k-1}{r}^{2}\)._
We defer the proof to Appendix A. Now consider learning the parity function \(x_{1}x_{2}\cdots x_{d}\) where training samples have \(r\) or fewer \(-1\) coordinates, i.e., training samples belong to \(B_{r}\). By the previous theorem, there is a degree \(r\) alternative to \(x_{1}x_{2}\cdots x_{d}\). Note that when such a low-degree alternative exists, assuming the min-degree bias, the model will learn this alternative instead of the full function of degree \(d\). This explains why in this case neural networks cannot generalize when the length is increased. We conduct an experiment to evaluate this, where we learn the full parity function on 15 bits using the MLP model trained on different lengths. Figure 3 shows that we learn more of the lower-degree terms and less of the full parity term as we train on shorter lengths.
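Part (i) of Theorem 3 can also be checked numerically. For \(d=8\) and \(r=3\), the number of monomials of degree at most \(r\) equals \(|B_{r}|=\sum_{k\leq 3}\binom{8}{k}=93\), so the interpolation constraints form a square linear system whose solution is the degree-3 function agreeing with the full parity on \(B_{r}\). A sketch (ours):

```
import itertools
import numpy as np

d, r = 8, 3
ball = np.array([x for x in itertools.product([1.0, -1.0], repeat=d)
                 if x.count(-1.0) <= r])            # Hamming ball B_r
y = ball.prod(axis=1)                               # full parity on B_r

monos = [T for k in range(r + 1) for T in itertools.combinations(range(d), k)]
Phi = np.array([[x[list(T)].prod() if T else 1.0 for T in monos] for x in ball])

coefs, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # solve the 93 x 93 system
print(Phi.shape)                                    # (93, 93)
print(np.abs(Phi @ coefs - y).max())                # ~0: exact degree-3 fit on B_3
```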
### Curriculum learning
The bias of neural networks towards min-degree solutions can also be utilized to boost learning via a curriculum learning (Bengio et al., 2009) algorithm. We propose to train models by increasing the 'complexity' of training samples with respect to the input Hamming weight, i.e., \(B_{r_{1}}\subseteq B_{r_{2}}\subseteq\ldots\subseteq B_{r_{k}}\) where \(B_{r}\) is the Hamming ball of radius \(r\). Training a model on samples included in \(B_{r}\) with \(r<d\) produces biased inputs compared to the uniform distribution. It has been shown that learning parities with GD on biased inputs is easier for various architectures (Malach et al., 2021; Daniely and Malach, 2020). In particular, the biasedness of the input distribution can be viewed as converting a monomial on non-centered inputs to a staircase on centered inputs, as discussed in (Abbe et al., 2021). Moreover, (Abbe et al., 2022b) shows that the sample complexity for learning staircases is significantly reduced compared to that of monomials of matching degree. In particular, a layer-wise analysis shows that the hidden neurons in the first layer detect the support of a parity function under biased inputs, allowing for the fitting of the target function with the second layer if enough neuron diversity is available. One can thus attempt to bootstrap this approach and progressively climb the support (and degree) of the target function by training the network successively on increasing balls.
Figure 3: Learning full parity function in dimension \(d=15\) in the length generalization setting with inputs in \(B_{6},B_{7},B_{8},B_{9},B_{10}\) and \(B_{15}\) (full space) respectively, with an MLP (model details in Appendix B). X-axis: degree-profile component, Y-axis: degree-profile value, i.e., \(\sum_{T:|T|=x}\hat{f}_{\text{NN}}(T)^{2}\). As the length of training samples is decreased, the coefficient of the full parity gets smaller and the coefficients of low-degree monomials get larger.
We now develop this approach into a general curriculum algorithm.
```
Input: Training samples S = {(x_i, y_i)}_{i=1}^m;
       Curriculum B_{r_1} ⊂ B_{r_2} ⊂ ... ⊂ B_{r_k} = B_d;
       Loss threshold ε
for i = 1 to k do
    S_{r_i} := {(x, y) ∈ S : x ∈ B_{r_i}}    (samples in B_{r_i})
    initialize train loss = 1 + ε
    while train loss > ε do
        train model with SGD on S_{r_i}
        update train loss
    end while
end for
```
**Algorithm 1** Degree-Curriculum algorithm
Note that at the \(i\)-th step of Algorithm 1 the training samples all belong to \(B_{r_{i}}\). Therefore, for models obeying the min-degree bias on the unseen, the model learns a min-degree interpolator of degree at most \(r_{i}\) (e.g., Theorems 1, 2). Furthermore, if the sampling set \(S\) is such that \(B_{r_{i}}\cap S\) contains enough degree \(r_{i}\) elements, the min-degree interpolator is of degree \(r_{i}\) (see Theorem 3). If one then takes \(r_{i}=r_{i-1}+1\), the min-degree interpolator at step \(i-1\) has monomials that are contained in those at step \(i\), as in the learning of a merged staircase function (Abbe et al., 2022b) (and a low-leap function more generally, if one takes a leap in the curriculum degrees). Thus, for a parity function, the proposed curriculum learning algorithm learns the support sets incrementally, as for the implicit staircase function.
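A minimal PyTorch rendition of Algorithm 1 for the full parity target may look as follows (ours; the architecture, learning rate, and step budget are heuristic, and full-batch SGD stands in for mini-batch SGD):

```
import itertools
import torch
import torch.nn as nn

torch.manual_seed(0)
d, eps = 10, 1e-3
X_all = torch.tensor(list(itertools.product([1.0, -1.0], repeat=d)))
y_all = X_all.prod(dim=1)                       # full parity target

model = nn.Sequential(nn.Linear(d, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for r in range(1, d + 1):                       # curriculum B_1 ⊂ B_2 ⊂ ... ⊂ B_d
    mask = (X_all == -1).sum(dim=1) <= r        # training samples inside B_r
    X, y = X_all[mask], y_all[mask]
    for _ in range(100000):                     # train until the loss threshold
        opt.zero_grad()
        loss = ((model(X).squeeze(-1) - y) ** 2).mean()
        loss.backward()
        opt.step()
        if loss.item() <= eps:
            break
    print(f"B_{r}: train loss {loss.item():.5f}")
```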
We evaluate the Degree-Curriculum algorithm on learning the full parity function \(x_{0}x_{1}\cdots x_{d-1}\) in dimensions \(d=16\) and \(d=30\) with an MLP. More precisely, for the same training set and hyperparameters, we train the MLP once with standard SGD and once with the proposed Degree-Curriculum algorithm. We choose the curriculum \(B_{4},B_{8},B_{12},B_{16}\) (curriculum-leap of 4) for dimension 16 and the curriculum \(B_{1},B_{2},\ldots,B_{30}\) (curriculum-leap of 1) for dimension 30. We also set the loss threshold to \(\epsilon=0.001\). The results are depicted in Figure 4.
Figure 4: Test loss on the full parity function in dimensions 16 (left) and 30 (right) for different sample complexities with and without the Degree-Curriculum Algorithm. We note that the MLP model trained without curriculum was not able to learn the full parity function in dimension 30 (right) for the given sample sizes (and even up to \(10^{5}\) samples), in contrast to the same model trained with Degree-Curriculum.
In Algorithm 1, it is assumed that the training set is given in the random access model. We can also consider a variant with the query access model, where at step \(i\), training samples are queried directly from \(B_{r_{i}}\) (or some distribution on it). In the former case, the probability of a sample belonging to \(B_{r}\) is small for small values of \(r\) (e.g., \(r=o_{d}(d)\)). The Degree-Curriculum algorithm with the query access model can, on the other hand, sample more heavily from the low-radius balls, which may be beneficial in certain cases.
Finally, we can naturally extend the Degree-Curriculum algorithm to other settings (also non-Boolean) using the same principle as above:
_Build curriculum sets \(\{\tilde{B}_{i}\}\) of 'increased complexity' in order to have a path of learned functions on support sets \(\{\mathcal{S}^{(i)}\}\) that are as tightly nested as possible (e.g., staircases or low-leap functions (Abbe et al., 2022b)), ending with the target function_.
## 6 Conclusions and future directions
In this paper, we put forward the concept of generalization on the unseen (GOTU) and considered the learning of Boolean functions. We showed that various network architectures have a bias toward a min-degree interpolator (MDI), with theoretical results for random features models and diagonal linear networks, and experimental results for Transformers. We also found empirically that for large learning rates or for other models such as mean-field networks, a leaky version of the MDI takes place.
We showed that the min-degree bias can be utilized in a more efficient curriculum learning algorithm where the training takes place on sets of increasing complexity. We also demonstrated that the min-degree bias can impede the learning of symmetric solutions and make length generalization difficult.
The min-degree bias is a form of Occam's razor chosen by GD-trained neural nets, where the 'simplicity' is measured by the 'degree-profile'. However, this might not be a desirable form of razor for various reasoning tasks. We believe that other forms promoting symmetries, compositionality or more generally minimum description length (MDL) may often be more suitable. The next natural steps are thus to correct this min-degree bias. We propose here some natural directions to pursue: (1) architecture design promoting symmetries or compositionality, (2) hyperparameter tuning (e.g., learning rates, scale), (3) data augmentation and multitasking, (4) MDL-like regularization at training.