# CompletionFormer: Depth Completion with Convolutions and Vision Transformers

*Zhang Youmin, Guo Xianda, Poggi Matteo, Zhu Zheng, Huang Guan, Mattoccia Stefano · 2023-04-25 · [arXiv:2304.13030](http://arxiv.org/abs/2304.13030v1)*
###### Abstract
Given sparse depths and the corresponding RGB images, depth completion aims at spatially propagating the sparse measurements throughout the whole image to get a dense depth prediction. Despite the tremendous progress of deep-learning-based depth completion methods, the locality of the convolutional layer or graph model makes it hard for the network to model the long-range relationship between pixels. While recent fully Transformer-based architecture has reported encouraging results with the global receptive field, the performance and efficiency gaps to the well-developed CNN models still exist because of its deteriorative local feature details. This paper proposes a Joint Convolutional Attention and Transformer block (JCAT), which deeply couples the convolutional attention layer and Vision Transformer into one block, as the basic unit to construct our depth completion model in a pyramidal structure. This hybrid architecture naturally benefits both the local connectivity of convolutions and the global context of the Transformer in one single model. As a result, our CompletionFormer outperforms state-of-the-art CNNs-based methods on the outdoor KITTI Depth Completion benchmark and indoor NYUv2 dataset, achieving significantly higher efficiency (nearly 1/3 FLOPs) compared to pure Transformer-based methods. Code is available at [https://github.com/youmi-zym/CompletionFormer](https://github.com/youmi-zym/CompletionFormer).
## 1 Introduction
Active depth sensing has achieved significant gains in performance and demonstrated its utility in numerous applications, such as autonomous driving and augmented reality. Although depth maps captured by existing commercial depth sensors (_e.g_., Microsoft Kinect [23], Intel RealSense [11]) or depth points within the same scanning line of LiDAR sensors are dense, the distance between valid/correct depth points can still be large owing to sensor noise, challenging conditions such as transparent, shiny, and dark surfaces, or the limited number of scanning lines of LiDAR sensors. To address these issues, depth completion [2, 16, 31, 26], which aims at completing and reconstructing the whole depth map from sparse depth measurements and a corresponding RGB image (_i.e_., RGB-D), has gained much attention in recent years.
For depth completion, one key point is to get the depth affinity among neighboring pixels so that reliable depth labels can be propagated to the surroundings [2, 3, 8, 16, 26]. Since the given depth can be highly sparse due to noise, or even empty when no measurement is returned from the depth sensor, depth completion methods need to be capable of 1) detecting depth outliers by measuring the spatial relationship between pixels from both local and global perspectives; 2) fusing valid depth values from close or even extremely distant points. All these properties require the network to capture both local and global correlations between pixels. Current depth completion networks collect context information with the widely used convolutional neural networks (CNNs) [2, 3, 8, 16, 26, 29, 37, 51] or graph neural networks [42, 49]. However, both the convolutional layer and graph models can only aggregate within a local region, a square \(3\times 3\) kernel for convolution and a kNN-based neighborhood for graph models [42, 49], making it hard to model global long-range relationships, in particular within the shallowest layers of the architecture. Recently, GuideFormer [31] resorted to a fully Transformer-based architecture to enable global reasoning. Unfortunately, since Vision Transformers project image patches into vectors in a single step, local details are lost, which penalizes dense prediction tasks [28, 43]. For depth completion, the limitations affecting pure CNN- or Transformer-based networks also manifest, as shown in Fig. 1. Although reliable depth points may be distributed at _any_ distance, an elegant integration of these two distinct paradigms, CNNs and Transformer, has not been studied for depth completion yet.
In this work, we propose CompletionFormer, a pyramidal architecture coupling CNN-based local features with Transformer-based global representations for enhanced depth completion. Generally, we face two gaps: 1) the content gap between the RGB and depth inputs; 2) the semantic gap between convolution and Transformer. As for the multimodal input, we propose embedding the RGB and depth information at an early network stage. Thus our CompletionFormer can be implemented as an efficient single-branch architecture, as shown in Fig. 2, and multimodal information can be aggregated throughout the whole network. Concerning the integration of convolution and Transformer, previous work has explored several different perspectives [6, 12, 25, 28, 43] on image classification and object detection. Although state-of-the-art performance has been achieved on those tasks, high computation cost [12] or inferior performance [6, 12] is observed when these networks are directly adapted to the depth completion task. To keep the combination of self-attention and convolution both efficient and effective, we couple convolutional attention and a Transformer layer into one block and use it as the basic unit to construct our network in a multi-scale style. Specifically, the Transformer layer is inspired by the Pyramid Vision Transformer [39], which adopts spatial-reduction attention to make the Transformer layer much more lightweight. As for the convolution-related part, the common option is to use plain convolutions such as the Inverted Residual Block [32]. However, the huge semantic gap between convolution and the Transformer, and the local details lost by the Transformer, require the convolutional layers to increase their capacity to compensate. Following this rationale, we further introduce spatial and channel attention [40] to enhance the convolutions. As a result, without any extra module to bridge the content and semantic gaps [12, 28, 31], every convolution and Transformer layer in the proposed block can access both local and global features. Hence, information exchange and fusion happen effectively at every block of our network.
To summarize, our main contributions are as follows:
* We propose integrating Vision Transformer with convolutional attention layers into one block for depth completion, enabling the network to possess both local and global receptive fields for multi-modal information interaction and fusion. In particular, spatial and channel attention are introduced to increase the capacity of convolutional layers.
* Taking the proposed Joint Convolutional Attention and Transformer (JCAT) block as the basic unit, we introduce a single-branch network structure, _i.e_. CompletionFormer. This elegant design leads to a comparable computation cost to current CNN-based methods while presenting significantly higher efficiency when compared with pure Transformer based methods.
* Our CompletionFormer yields substantial improvements to depth completion compared to state-of-the-art methods, especially when the provided depth is _very_ sparse, as often occurs in practical applications.
## 2 Related Work
**Depth Completion.** Scene depth completion has become a fundamental task in computer vision with the emergence of active depth sensors. Recently, following the advance of deep learning, fully-convolutional networks have been the prototype architecture for the current state of the art in depth completion. Ma _et al_. [21, 22] utilize a ResNet [7] based encoder-decoder architecture, _i.e_. U-Net, within either a supervised or self-supervised framework to predict the dense output. To preserve the accurate measurements in the given sparse depth and also perform refinement over the final depth map, CSPN [3] appends a convolutional spatial propagation network (SPN [18]) at the end of the U-Net to refine its coarse prediction. Based on CSPN, learnable convolutional kernel sizes and numbers of iterations are proposed to improve efficiency [2], and the performance can be further improved by using unfixed local neighbors [26, 44] and an independent affinity matrix for each iteration [16]. For all these SPN-based methods, while a larger context is observed through recurrent processing, the performance is limited by the capacity of the convolutional U-Net backbone.
Figure 1: **Comparison of attention maps of pure CNNs, a Vision Transformer, and the proposed CompletionFormer with its joint CNN and Transformer structure.** The pixel highlighted with a yellow cross in the RGB image (a) is the one whose prediction we inspect. The pure CNN architecture (b) activates discriminative local regions (_i.e_., the region on the fire extinguisher), whereas pure Transformer-based models (c) activate globally yet fail on local details. In contrast, our full CompletionFormer (d) retains both the local details and the global context.
Accordingly, we strengthen the expressivity of the U-Net backbone with local and global coherent context information, proving effective in improving performance.
Rather than depending on a single branch, multi-branch networks [8, 17, 24, 29, 35, 37, 46] have also been adopted to perform multi-modal fusion. The common way to fuse the multi-modal information is a simple concatenation or element-wise summation. More sophisticated strategies like image-guided spatially-variant convolution [35, 45], channel-wise canonical correlation analysis [50], a neighbor attention mechanism [47], and attention-based graph propagation [42, 49] have also been proposed to enhance local information interaction and fusion. Instead of pixel-wise operations or local fusion, GuideFormer [31] recently proposed a dual-branch, fully Transformer-based network to embed the RGB and depth inputs separately, with an extra module designed to capture inter-modal dependencies. The independent design for each input source leads to a huge computation cost (nearly 2T FLOPs with \(352\times 1216\) input). In contrast, our single-branch CompletionFormer brings significant efficiency (559.5G FLOPs), and the included convolutional attention layer compensates for the Transformer's weakness in local details.
**Vision Transformer.** Transformers were first introduced in natural language processing [38] and have since shown great potential in image classification [4], object detection [12, 19, 43], and semantic segmentation [41]. Tasks related to 3D vision have also benefited from the enriched modeling capability of Transformers, such as stereo matching [13, 15], supervised [14, 30] and unsupervised monocular depth estimation [48], optical flow [10, 34], and depth completion [31]. Instead of relying on a pure Vision Transformer [31], in this paper we explore the combination of Transformer and convolution in one block for depth completion. Compared to general backbone networks (ResNet [7] with a fully CNN-based design, Swin Transformer [19] and PVT [39] based on pure Transformers, MPViT [12] and CMT [6] using both convolutions and Vision Transformers), our proposed joint convolutional attention and Transformer block achieves much higher efficiency and performance on public benchmarks [33, 36].
## 3 Method
In practical applications, depth maps captured by sensors present various levels of sparsity and noise. Our goal is to introduce both the local features and global context information into the depth completion task so that it can gather reliable depth hints from any distance. The overall diagram of our CompletionFormer is shown in Fig. 2. After obtaining depth and RGB image embedding, a backbone constructed by our JCAT block is used for feature extraction at multiple scales and the decoder provides full-resolution features for initial depth prediction. Finally, for the purpose of preserving accurate depth from the sparse input, we refine the initial estimation with a spatial propagation network.
### RGB and Depth Embedding
For depth completion, multimodal information fusion at an early stage has several advantages: 1) it makes the feature vector of each pixel possess both RGB and depth information, so that pixels with invalid depth still have a chance to be corrected by reliable depth measurements according to appearance similarity; 2) only one branch is required for the following network, which enables a much more efficient implementation. Therefore, we first use two separate convolutions to encode the input sparse depth map \(S\) and RGB image \(I\). The outputs are concatenated and further processed by another convolution layer to get the raw feature containing contents from both sources.
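For concreteness, the early-fusion step can be sketched in a few lines of PyTorch; the channel width and activation choices below are our assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class RGBDEmbedding(nn.Module):
    """Early fusion of RGB and sparse depth (Sec. 3.1): two separate
    convolutions encode each modality; the concatenated features are mixed
    by a third convolution. Channel widths here are assumptions."""
    def __init__(self, channels: int = 48):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1),
                                     nn.ReLU(inplace=True))
        self.dep_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1),
                                     nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)

    def forward(self, rgb: torch.Tensor, sparse: torch.Tensor) -> torch.Tensor:
        feat = torch.cat([self.rgb_enc(rgb), self.dep_enc(sparse)], dim=1)
        return self.fuse(feat)  # raw feature carrying both modalities
```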
### Joint Convolutional Attention and Transformer Encoder
It has been extensively studied how to build connections between pixels to implement depth propagation from reliable pixels while avoiding incorrect ones. Recently, the convolution layer [2, 3, 8, 16, 29, 51, 37] or attention-based
Figure 2: **CompletionFormer Architecture.** Given the sparse depth and the corresponding RGB image, a U-Net backbone built on JCAT blocks performs depth and image information interaction at multiple scales. Features from different stages are fused at full resolution and fed to the initial prediction. Finally, a spatial propagation network (SPN) is exploited for the final refinement.
graph propagation [42, 49] has been the dominant operation for this purpose. Although a fully Transformer-based network [31] has also been adopted, it shows worse results and a much higher computational cost compared to pure CNN-based methods. Considering the complementary properties of these two styles of operations, an elegant integration of the two paradigms is highly desirable for the depth completion task. On the other hand, for classification and object detection tasks, MPViT [12] and CMT [6] are two representative state-of-the-art networks combining self-attention and convolution, as shown in Fig. 3 (a) and (b) respectively. Generally, the integration can be implemented in a parallel or cascaded manner. Inspired by their design, within CompletionFormer we propose a joint design as shown in Fig. 3 (c) and (d). To decrease computation overhead while still achieving highly accurate depth completion results, our CompletionFormer contains a single rather than multiple time-consuming Transformer-based paths, as in MPViT [12]. Furthermore, the representation power of the convolution-based path is enhanced with spatial and channel attention.
Specifically, our encoder has five stages, allowing feature representations at different scales to communicate with each other effectively. In the first stage, to decrease the computation cost and memory overhead introduced by the Transformer layer, we use a series of BasicBlocks from ResNet34 [7] to process the input and obtain a downsampled feature map \(F_{1}\) at half resolution. For the next four stages, we introduce our proposed JCAT block as the basic unit of the framework design.
Basically, each stage \(i\in\{2,3,4,5\}\) consists of a patch embedding module and \(L_{i}\) repeated JCAT blocks. The patch embedding module first divides the feature map \(F_{i-1}\) from the previous stage into patches of size \(2\times 2\). We implement it with a \(3\times 3\) convolution layer with stride 2 as in [39], which halves the resolution of \(F_{i-1}\) and thus yields a feature pyramid \(\{F_{2},F_{3},F_{4},F_{5}\}\), whose resolutions are \(\{1/4,1/8,1/16,1/32\}\) with respect to the input image. Furthermore, position embedding is included in the embedded patches and passed through the JCAT blocks.
**Joint Convolutional Attention and Transformer Block.** Overall, our JCAT block can be organized in a parallel or cascaded manner, as shown in Fig. 3 (c) and (d) respectively. The Transformer layer is implemented efficiently as in the Pyramid Vision Transformer [39], containing a spatial-reduction attention (SRA) layer with a multi-head mechanism and a feed-forward network (FFN).
\begin{table}
\begin{tabular}{c|c c c} \hline \hline CompletionFormer & \#Layers & Params (M) & FLOPs (G) \\ \hline Tiny & [2, 2, 2, 2] & 41.5 & 191.7 \\ Small & [3, 3, 6, 3] & 78.3 & 231.8 \\ Base & [3, 3, 18, 3] & 142.4 & 301.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **CompletionFormer Configurations.** #Layers denotes the number of our JCAT blocks in each stage. For all model variants, the channels of the 4 stages are 64, 128, 320, 512, respectively. FLOPs are measured using a \(480\times 640\) input image.
Figure 3: **Example of architecture with convolutions and Vision Transformer.** (a) Multi-Path Transformer Block of MPViT [12]. (b) CMT Block of CMT-S [6]. (c) Our proposed JCAT block which contains two parallel streams, _i.e_. convolutional attention and Transformer layer respectively. (d) The variant of our proposed block with cascaded connection.
Given input features \(F\in\mathbb{R}^{H_{i}\times W_{i}\times C}\) from the patch embedding module or the last joint block (with \(H_{i}\) and \(W_{i}\) the height and width of features at stage \(i\), and \(C\) the number of channels), we first normalize it with layer normalization [1] (LN) and then flatten it into vector tokens \(X\in\mathbb{R}^{N\times C}\), where \(N\) is the number of tokens and equals \(H_{i}\times W_{i}\), _i.e_. the number of pixels in \(F\). Using learned linear transformations \(W^{Q}\), \(W^{K}\), and \(W^{V}\in\mathbb{R}^{C\times C}\), the tokens \(X\) are projected into corresponding query \(Q\), key \(K\), and value vectors \(V\in\mathbb{R}^{N\times C}\). Here, the spatial scale of \(K\) and \(V\) is further reduced to decrease memory consumption, and then self-attention is performed as:
\[\text{Attention}(Q,K,V)=\text{Softmax}(\frac{QK^{T}}{\sqrt{C_{head}}})V, \tag{1}\]
with \(C_{head}\) the channel dimension of each attention head in SRA. According to Eq. (1), each token of the entire input space \(F\) is matched against all tokens, including itself. Our depth completion network benefits from the self-attention mechanism in two ways: 1) it extends the receptive field of our network to the full image in each Transformer layer; 2) as we have embedded each token with both depth and RGB image information, the self-attention mechanism explicitly compares the similarity of pixels not only by appearance but also by depth via the dot-product operation. Thus, reliable depth information can be broadcast to the whole image, enabling the correction of erroneous pixels.
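A minimal PyTorch sketch of such a spatial-reduction attention layer (following the PVT design the text refers to) is given below; the head count, the reduction ratio, and the assumption that \(H\) and \(W\) are divisible by the reduction ratio are ours:

```python
import torch
import torch.nn as nn

class SRA(nn.Module):
    """Spatial-reduction attention (Eq. (1)), following PVT [39]: keys and
    values are computed from a spatially reduced token map, cutting the cost
    of attention. Head count and reduction ratio here are assumptions."""
    def __init__(self, dim: int, num_heads: int = 2, reduction: int = 4):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.sr = nn.Conv2d(dim, dim, kernel_size=reduction, stride=reduction)
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, H: int, W: int) -> torch.Tensor:
        # x: (B, N, C) tokens with N = H * W; H, W divisible by the reduction
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        x_ = x.transpose(1, 2).reshape(B, C, H, W)
        x_ = self.sr(x_).reshape(B, C, -1).transpose(1, 2)  # reduced tokens
        k, v = self.kv(self.norm(x_)).chunk(2, dim=-1)
        k = k.reshape(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.reshape(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5  # Eq. (1)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```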
We boost the representation power of the convolutional path with channel and spatial attention [40]. On the one hand, this helps to model locally accurate attention and reduce noise. On the other hand, due to the semantic gap between convolution and Transformer, the increased modeling capacity provided by the attention mechanism enables this path to focus on the important features provided by the Transformer layer while suppressing unnecessary ones. Finally, concatenating the reshaped feature from the Transformer-based path, we fuse the two paths with a \(3\times 3\) convolution and send the result to the next block or stage.
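Putting the two paths together, a sketch of the parallel JCAT variant (Fig. 3 (c)) could look as follows, reusing the `SRA` module sketched above; the FFN width and the CBAM-style attention details are assumptions:

```python
import torch
import torch.nn as nn

class JCATBlockParallel(nn.Module):
    """Sketch of the parallel JCAT variant (Fig. 3 (c)): a Transformer path
    (LN + SRA + FFN, with residuals) runs next to a convolutional path
    enhanced with channel and spatial attention (CBAM-style [40]); the two
    streams are fused by a 3x3 convolution. Widths are assumptions."""
    def __init__(self, dim: int):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = SRA(dim)  # the spatial-reduction attention sketched above
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.conv = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1),
                                  nn.BatchNorm2d(dim), nn.ReLU(inplace=True))
        self.channel_attn = nn.Sequential(  # squeeze-and-excite style
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, dim // 4, 1),
            nn.ReLU(inplace=True), nn.Conv2d(dim // 4, dim, 1), nn.Sigmoid())
        self.spatial_attn = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3),
                                          nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * dim, dim, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, H, W = x.shape
        # Transformer path on flattened tokens
        t = x.flatten(2).transpose(1, 2)                    # (B, N, C)
        t = t + self.attn(self.ln1(t), H, W)
        t = t + self.ffn(self.ln2(t))
        t = t.transpose(1, 2).reshape(B, C, H, W)
        # convolutional attention path
        c = self.conv(x)
        c = c * self.channel_attn(c)
        stats = torch.cat([c.mean(1, keepdim=True),
                           c.max(1, keepdim=True).values], dim=1)
        c = c * self.spatial_attn(stats)
        # fuse both streams with a 3x3 convolution
        return self.fuse(torch.cat([t, c], dim=1))
```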
Taking the proposed JCAT block as the basic unit, we build stages 2-5 with repeated configurations. As reported in Tab. 1, we scale up the 4 stages of CompletionFormer from tiny and small to base scale. Our results demonstrate the superiority of the JCAT design compared to recent Vision Transformers [12, 19, 39] on the depth completion task.
### Decoder
In the decoder, outputs from each encoding layer are concatenated and further processed by the corresponding decoding layers via skip connections. To better accommodate features at diverse scales, the features from the previous decoder layer are upsampled to the current scale with a deconvolution layer, and the convolutional attention mechanism [40] is also exploited to strengthen feature fusion along the channel and spatial dimensions. Finally, the fused result from the decoder is concatenated with features from stage one and fed to the first convolution layer of the prediction head. Its output is concatenated with the raw feature from the RGB and depth embedding module (Sec. 3.1) and sent to another convolution, which is in charge of the initial depth prediction \(D^{0}\).
### SPN Refinement and Loss Function
Considering that the accurate depth values from the sparse input may not be well preserved after going through the U-Net [3, 8], a spatial propagation network [18] has become a standard operation for final refinement. Recent work [2, 3, 8, 26] mainly focuses on extending the spatial propagation network from fixed-local to non-local propagation. In our experiments (Tab. 2), however, we observe that with our enhanced U-Net backbone the network provides good depth affinity and thus obtains almost the same accuracy with fixed-local [2, 3] or non-local [26] neighbors for spatial propagation. Since CSPN++ [2] consumes more computation, we adopt the non-local spatial propagation network (NLSPN) [26] for further refinement. Specifically, let \(D^{t}=(d^{t}_{u,v})\in\mathbb{R}^{H\times W}\) denote the 2D depth map updated by spatial propagation at step \(t\), where \(d^{t}_{u,v}\) denotes the depth value at pixel \((u,v)\), and \(H,W\) denote the height and width of \(D^{t}\), respectively. The propagation of \(d^{t}_{u,v}\) at step \(t\) with its non-local neighbors \(N^{NL}_{u,v}\) is defined as follows:
\[d^{t}_{u,v}=w_{u,v}(0,0)d^{t-1}_{u,v}+\sum_{(i,j)\in N^{NL}_{u,v},i\neq 0,j \neq 0}w_{u,v}(i,j)d^{t-1}_{i,j}, \tag{2}\]
where \(w_{u,v}(i,j)\in(-1,1)\) describes the affinity weight between the reference pixel at \((u,v)\) and its neighbor pixel at \((i,j)\), and \(w_{u,v}(0,0)=1-\sum_{(i,j)\in N_{u,v},i\neq 0,j\neq 0}w_{u,v}(i,j)\) determines how much of the original depth \(d^{t-1}_{u,v}\) is preserved. Moreover, the affinity matrix \(w\) is also produced by the decoder and modulated by a predicted confidence map, preventing less confident pixels from propagating into their neighbors regardless of how large the affinity is. After \(K\) steps of spatial propagation, we get the final refined depth map \(D^{K}\).
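As an illustration, one propagation step of Eq. (2) for a single reference pixel can be written as below; the toy neighbor set and weights are hypothetical:

```python
import numpy as np

def spn_step(d_prev: float, neighbors: np.ndarray, affinity: np.ndarray,
             confidence: np.ndarray) -> float:
    """One propagation step of Eq. (2) for a single reference pixel.
    `neighbors` holds the depths d^{t-1}_{i,j} of the non-local neighbor set,
    `affinity` the weights w_{u,v}(i,j), modulated here by the per-neighbor
    `confidence` as described in the text."""
    w = affinity * confidence                # suppress low-confidence neighbors
    w_self = 1.0 - w.sum()                   # w_{u,v}(0,0): weight of d^{t-1}_{u,v}
    return w_self * d_prev + float(np.dot(w, neighbors))

# toy usage: one pixel with three hypothetical non-local neighbors
d_new = spn_step(1.80, np.array([1.75, 1.90, 2.05]),
                 np.array([0.20, 0.15, 0.10]), np.array([1.0, 0.8, 0.5]))
```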
Finally, following [26], a combined \(L_{1}\) and \(L_{2}\) loss is employed to supervise the network training as follows:
\[L(\hat{D},D^{gt})=\frac{1}{|V|}\sum_{v\in V}\left(|\hat{D}_{v}-D^{gt}_{v}|+| \hat{D}_{v}-D^{gt}_{v}|^{2}\right), \tag{3}\]
where \(\hat{D}=D^{K}\), and \(V\) is the set of pixels with valid depth in ground truth \(D^{gt}\), and \(|V|\) denotes the size of set \(V\).
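A minimal sketch of this masked loss, assuming invalid ground-truth pixels are stored as zeros:

```python
import torch

def completion_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Combined L1 + L2 loss of Eq. (3), averaged over the set V of pixels
    with valid ground truth (zeros are assumed to mark invalid depth)."""
    valid = gt > 0
    diff = pred[valid] - gt[valid]
    return (diff.abs() + diff.pow(2)).mean()
```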
## 4 Experiments
### Datasets
**NYUv2 Dataset [33]:** it consists of RGB and depth images captured by Microsoft Kinect [23] in 464 indoor scenes. Following the settings of previous depth completion methods [26, 51], our method is trained on 50,000 images uniformly sampled from the training set and tested on the 654 images of the official labeled test set. For both training and test sets, the original frames of size \(640\times 480\) are downsampled to half size with bilinear interpolation and then center-cropped to \(304\times 228\). The sparse input depth is generated by random sampling from the dense ground truth.
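The random sampling protocol can be sketched as follows, assuming the dense map is a 2D tensor with zeros marking invalid pixels:

```python
import torch

def sample_sparse_depth(dense: torch.Tensor, num_samples: int = 500) -> torch.Tensor:
    """Generate the sparse input by randomly sampling valid pixels from the
    dense ground truth; `dense` is assumed to be an (H, W) tensor with zeros
    marking invalid pixels."""
    sparse = torch.zeros_like(dense)
    valid = torch.nonzero(dense > 0)                      # (K, 2) pixel indices
    pick = valid[torch.randperm(valid.shape[0])[:num_samples]]
    sparse[pick[:, 0], pick[:, 1]] = dense[pick[:, 0], pick[:, 1]]
    return sparse
```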
**KITTI Depth Completion (DC) Dataset [36]:** it contains 86,898 frames for training, 1,000 selected frames for validation, and 1,000 frames for testing without ground truth. The original depth map obtained by the Velodyne HDL-64e is sparse, covering about 5.9% of the pixels. The dense ground truth is generated by accumulating LiDAR scans from 11 consecutive temporal frames, producing nearly 30% annotated pixels. These registered points are verified against the stereo image pairs to eliminate noisy values. Since there is no LiDAR return at the top of the image, following [26], input images are bottom center-cropped to \(240\times 1216\) for the training, validation, and testing phases.
### Implementation Details
We implement our model in PyTorch [27] on 4 NVIDIA RTX 3090 GPUs, using AdamW [20] as the optimizer with an initial learning rate of 0.001, \(\beta_{1}=0.9,~{}\beta_{2}=0.999\), and weight decay of 0.01. The batch size per GPU is set to 3 and 12 on the KITTI DC and NYUv2 datasets, respectively. On the NYUv2 dataset, we train the model for 72 epochs and decay the learning rate by a factor of 0.5 at epochs 36, 48, 60, and 72. For the KITTI DC dataset, the model is trained for 100 epochs, and we halve the learning rate at epochs 50, 60, 70, 80, and 90. The supplementary material outlines more details about the network parameters.
### Evaluation Metrics
Following the KITTI benchmark and existing depth completion methods [26, 51], given the prediction \(\hat{D}\) and ground truth \(D^{gt}\), we use the standard metrics for evaluation: (1) root mean square error (RMSE); (2) mean absolute error (MAE); (3) root mean squared error of the inverse depth (iRMSE); (4) mean absolute error of the inverse depth (iMAE); (5) mean absolute relative error (REL).
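For reference, these metrics can be computed as below (a sketch assuming positive predicted depths and depths expressed in consistent units):

```python
import torch

def depth_metrics(pred: torch.Tensor, gt: torch.Tensor) -> dict:
    """Standard depth completion metrics over valid ground-truth pixels;
    assumes positive predicted depths and consistent units (e.g. mm)."""
    v = gt > 0
    err = pred[v] - gt[v]
    inv_err = 1.0 / pred[v] - 1.0 / gt[v]
    return {
        "RMSE": err.pow(2).mean().sqrt().item(),
        "MAE": err.abs().mean().item(),
        "iRMSE": inv_err.pow(2).mean().sqrt().item(),
        "iMAE": inv_err.abs().mean().item(),
        "REL": (err.abs() / gt[v]).mean().item(),
    }
```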
### Ablation Studies and Analysis
We assess the impact of the main components of our CompletionFormer on the NYUv2 dataset [33]. Following previous methods [16, 26], we randomly sample 500 depth pixels from the ground-truth depth map and input them along with the corresponding RGB image for network training. Results are reported in Tab. 2.
**Cascaded vs Parallel Connection.** We notice that the cascaded design (A) shows inferior performance compared to the parallel style (B). This conclusion is also confirmed by (I) and (J), as CMT-Base [6] gets an even worse RMSE (92.0) than MPViT-Base [12] (91.0). This indicates that the parallel connection is more suitable for information interaction between streams with different contents and semantics on the depth completion task. Thus, we adopt the parallel connection as our final scheme.
**Spatial and Channel Attention in JCAT block.** Previous methods combine the Transformer with plain convolutions [6, 12, 25, 28]. Here, we also ablate the case in which we disable the spatial and channel attention in the convolutional path of our proposed JCAT block (C). The drop in accuracy (RMSE increases from 90.0 to 91.1) confirms that increasing the capacity of the convolutions is vital when combining them with a Vision Transformer, while it adds only a negligible amount of computation (0.2G FLOPs).
**Single- or Dual-branch Encoder.** Similar to previous methods [31, 35], we test a dual-branch architecture, which encodes the RGB and depth information separately (D). For feature communication between the two branches, we include the spatial and channel attention mechanism [40] at the end of each encoder stage. However, the worse results compared to our single-branch design (B) demonstrate that embedding the multimodal information at the early stage is much more effective and efficient.
\begin{table}
\begin{tabular}{c l c c c c}
\hline \hline
 & CompletionFormer & RMSE (mm) & MAE (mm) & Params (M) & FLOPs (G) \\ \hline
(A) & w/ cascaded connection & 91.5 & 35.7 & 82.6 & 429.6 \\
(B) & w/ parallel connection & **90.0** & **35.0** & 82.6 & 429.6 \\
(C) & w/o spatial and channel attention & 91.1 & 35.5 & **82.5** & **429.4** \\ \hline
(D) & w/ dual-branch encoders & 94.0 & 36.4 & 161.0 & 661.4 \\
\hline \hline
\end{tabular}
\begin{tabular}{c l c c c c c c}
\hline \hline
 & Backbone & Decoder attention & SPN & RMSE (mm) & MAE (mm) & Params (M) & FLOPs (G) \\ \hline
(E) & ResNet34 [7] & ✗ & NLSPN, 18 iter. & 92.3 & 36.1 & **26.4** & 542.2 \\
(F) & ResNet34 [7] & ✓ & NLSPN, 18 iter. & 91.4 & 35.5 & 28.1 & 582.1 \\
(G) & Swin-Tiny [19] & ✓ & NLSPN, 18 iter. & 92.6 & 36.4 & 38.1 & 638.0 \\
(H) & PVT-Large [39] & ✓ & NLSPN, 18 iter. & 91.4 & 35.6 & 68.3 & 419.8 \\
(I) & MPViT-Base [12] & ✓ & NLSPN, 18 iter. & 91.0 & 35.5 & 83.1 & 1259.3 \\
(J) & CMT-Base [6] & ✓ & NLSPN, 18 iter. & 92.0 & 35.9 & 47.6 & **388.7** \\
(K) & Ours-Small & ✓ & NLSPN, 18 iter. & 90.1 & 35.2 & 82.6 & 499.1 \\
(L) & Ours-Small & ✓ & NLSPN, 6 iter. & **90.0** & 35.0 & 82.6 & 429.6 \\
(M) & Ours-Small & ✓ & CSPN++ & 90.3 & **34.9** & 82.7 & 446.4 \\
(N) & Ours-Tiny & ✓ & NLSPN, 6 iter. & 90.9 & 35.3 & 45.8 & 398.4 \\
(O) & Ours-Base & ✓ & NLSPN, 6 iter. & 90.1 & 35.1 & 146.7 & 499.6 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: **Ablation study on NYU Depth v2 [33].** We ablate the settings of our network in the following aspects: the backbone type, the convolutional attention mechanism in the decoder, and the number of iterations of the NLSPN refinement module. FLOPs are measured with input resolution \(480\times 640\).
\begin{table}
\begin{tabular}{l|c c c c c c}
\hline \hline
SPN & ResNet34 & Swin-Tiny & PVT-Large & MPViT-Base & CMT-Base & Ours-Small \\ \hline
✗ & 106.5 & 106.5 & 106.4 & 106.2 & 106.4 & **92.2** \\
✓ & 91.4 & 92.6 & 91.4 & 91.0 & 92.0 & **90.0** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: **Ablation study without SPN module.** We report the RMSE (mm) of different backbones with (✓) and without (✗) SPN refinement on NYUv2.
**Comparisons with General Feature Backbones.** In terms of RMSE, our CompletionFormer at small scale (K) outperforms the pure CNN-based method (F) and the pure Transformer-based variants (G, H) that count comparable FLOPs with respect to our model. In particular, compared to the recent MPViT-Base [12] (I) and CMT-Base [6] (J), which also integrate CNNs and Transformers for feature extraction, our network achieves higher accuracy and a much lower computational overhead (429.6G FLOPs) than MPViT-Base (1259.3G FLOPs).
**Decoder.** Compared to our baseline (E), _i.e_., NLSPN [26], introducing spatial and channel attention for multiscale feature fusion in the decoder (F) further improves the results, RMSE drops from 92.3 to 91.4.
**SPN Refinement Iterations.** Our network (L) requires as few as 6 iterations of SPN refinement to converge to the best result. Compared to the baseline (E), which requires 18 iterations, our enhanced U-Net has already collected information from the whole image and thus does not need many iterations to propagate it over long distances. Even when the non-local refinement of NLSPN is replaced with the fixed-local neighbors of CSPN++ [2] (M), the accuracy remains almost the same. This indicates that our CompletionFormer can learn good affinity both locally and globally, thus helping to soften the problem raised by the limited and fixed aggregation range of CSPN++ [2].
**Model Scales.** Our models at various scales (L, N, O), benefiting from local and global cues, achieve significant improvements compared to the pure CNN-based baseline (E), while counting fewer FLOPs. To trade off accuracy and efficiency, we select our model at small scale (L) as the final architecture for the remaining experiments.
**With/without SPN refinement.** To conclude, in Tab. 3 we show the results achieved by different backbones with and without SPN refinement. Ours yields the most accurate results in both cases.
(Ours-ViT) exhibits better results in all metrics as the LiDAR points get sparser. However, solely using Transformer layers makes it difficult to distinguish the objects from the background, as shown in Fig. 4. By coupling the local features and global representations, our complete model (Ours) significantly decreases the errors in all metrics.
**Indoor Scene.** On the NYUv2 dataset [33], we randomly sample 0, 50, 200, and 500 points from the ground-truth depth map to mimic different sparsity levels, while keeping the ground-truth depth used for supervision unchanged. Both our model and NLSPN, with publicly available code, are retrained for a fair comparison, while the results of GuideNet [35] and PackNet-SAN [5] are taken from the original papers. In Tab. 5, CompletionFormer with both CNNs and Transformers consistently outperforms all other methods in all cases. Qualitative results are provided in the supplementary material.
### Comparison with SOTA Methods
This section comprehensively assesses the performance of state-of-the-art (SOTA) methods. On the **indoor** NYUv2 dataset [33], CompletionFormer achieves the best results, as reported in Tab. 6. Moving to the **outdoor** KITTI depth completion (DC) dataset [36], MAE, iMAE, RMSE, and iRMSE are adopted for the benchmark. Empirically, our model trained with only the \(L_{1}\) loss achieves the best results on two of the four metrics (MAE and iMAE). By jointly minimizing the \(L_{1}\) and \(L_{2}\) losses, CompletionFormer ranks first on the RMSE metric among published methods. Qualitative results on the KITTI DC test set are provided in Fig. 5. By integrating convolutions and Transformers, our model performs better near areas with missing depth (_e.g_. the zoom-in visualization in the second and fourth rows), on textureless objects (_e.g_. the cars in the first row), and on small objects (_e.g_. the pillars and tree stems far in the distance, in the second and fourth rows).
## 5 Conclusion and Limitations
This paper proposed a single-branch depth completion network, CompletionFormer, seamlessly integrating convolutional attention and Transformers into one block. Extensive ablation studies demonstrate the effectiveness and efficiency of our model in depth completion when the input is sparse. This novel design yields state-of-the-art results on indoor and outdoor datasets. Currently, CompletionFormer runs at about 10 FPS: decreasing its runtime further to meet real-time requirements will be our future work.
Figure 4: **Qualitative results on KITTI DC selected validation dataset with 4 and 16 LiDAR scanning lines. We attach the subsampled LiDAR lines to the corresponding RGB image for better visualization. Ours-ViT denotes that only the Transformer layer is enabled in our proposed block. A colder color in depth and error maps denotes a lower value.**
Figure 5: **Qualitative results on the KITTI depth completion test set. Comparisons of our method against state-of-the-art methods including RigNet [45], NLSPN [26], DySPN [16] are presented. We provide RGB images, dense predictions, zoom-in views of challenging areas and corresponding error maps for better visualization.**
## Appendix A Appendix
### Qualitative Results on NYUv2 Dataset
Qualitative results on the NYUv2 dataset [33] are provided in Fig. 6. In both visualized cases, we can notice the improved results yielded by our CompletionFormer compared to NLSPN [26]. In particular, for the transparent regions near the windows in both cases, with the local details of convolution and the global cues of the Transformer, our complete model (Ours) predicts clear object boundaries, while NLSPN and Ours-ViT give blurry estimations.
### Model Architecture Details
To better understand our architecture and to ease reproducibility, we present the network parameters of our CompletionFormer in Tab. 7.

---

# Fluctuations and correlations of baryonic chiral partners

*Volker Koch, Michał Marczenko, Krzysztof Redlich, Chihiro Sasaki · 2023-08-30 · [arXiv:2308.15794](http://arxiv.org/abs/2308.15794v2)*
Fluctuations and correlations of the net-baryon number play an important role in exploring critical phenomena in phase transitions of strongly interacting matter governed by Quantum chromodynamics (QCD). In this work, we use the parity doublet model to investigate the fluctuations of the net-baryon number density in hot and dense hadronic matter. The model accounts for chiral criticality within the mean-field approximation. We focus on the qualitative properties and systematics of the first- and second-order susceptibility of the net-baryon number density, and their ratios for nucleons of positive and negative parity, as well as their correlator. We show that the fluctuations of the positive-parity nucleon do not necessarily reflect the fluctuations of the total net-baryon number density at the phase boundary of the chiral phase transition. We also investigate the non-trivial structure of the correlator. Furthermore, we discuss and quantify the differences between the fluctuations of the net-baryon number density in the vicinity of the chiral and liquid-gas phase transition in nuclear matter. We indicate a possible relevance of our results with the interpretation of the experimental data on net-proton number fluctuations in heavy-ion collisions.
## I Introduction
One of the prominent tasks within high-energy physics is to unveil the phase diagram of Quantum Chromodynamics (QCD), the theory of strong interactions. Due to great activity in the field, significant progress has been made from both the theoretical and experimental sides. From ab initio lattice QCD (LQCD) calculations, it is now known that, at vanishing baryon density, strongly interacting matter undergoes a smooth chiral symmetry restoration transition from hadronic matter to quark-gluon plasma (QGP) at \(T_{c}\approx 155\) MeV [1; 2; 3; 4; 5]. However, the applicability of LQCD methods ceases at high baryon densities due to the well-known sign problem. Effective models, such as the linear sigma [6; 7] or Nambu-Jona-Lasinio (NJL) [8; 9] models, predict a first-order phase transition at low temperature. Its existence would imply the presence of a putative critical endpoint (CP) in the QCD phase diagram. Throughout recent years, experimental attempts have been made to locate it. Despite enormous experimental effort within the beam energy scan (BES) programs at the Relativistic Heavy Ion Collider (RHIC) at BNL [10] and the Super Proton Synchrotron (SPS) at CERN [11], this pressing issue remains unresolved (for a recent review see [12]).
One of the tools used in the experimental searches of the critical point are fluctuations and correlations of conserved charges. They are known to be propitious theoretical observables in search of critical behavior at the QCD phase boundary [13; 14; 15; 16] and chemical freeze-out in the heavy-ion collisions (HIC) [17; 18; 19; 20; 21; 22]. In particular, fluctuations of conserved charges have been proposed to probe the QCD critical point, as well as the remnants of the \(O(4)\) criticality at vanishing and finite net-baryon densities [16; 23; 24; 25; 22].
Non-monotonic behavior is also expected for various ratios of the cumulants of the net-baryon number. Recently, results from BES-I, which covered \(\sqrt{s_{\rm NN}}=7.7-200\) GeV, have shown indications of non-monotonic behavior of the fourth-to-second cumulant ratio of the net-proton multiplicity distributions in central Au+Au collisions [26]. However, more data and higher statistics at low collision energies are needed to draw firm conclusions.
One of the consequences of the restoration of chiral symmetry is the emergence of parity doubling around the chiral crossover. This has been recently observed in LQCD calculations in the spectrum of low-lying baryons around the chiral crossover [27; 28; 29]. The masses of the positive-parity baryonic ground states are found to be rather weakly temperature-dependent, while the masses of negative-parity states drop substantially when approaching the chiral crossover temperature. The parity doublet states become almost degenerate with a finite mass in the vicinity of the chiral crossover. Such properties of the chiral partners can be described in the framework of the parity doublet model [30; 31; 32]. The model has been applied to the vacuum phenomenology of QCD, hot
and dense hadronic matter, as well as neutron stars [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60].
In this paper, we apply the parity doublet model to calculate the cumulants and susceptibilities of the net-baryon number distribution. Specifically, we focus on the fluctuations of individual parity channels and correlations among them. Their qualitative behavior is examined near the chiral, as well as the nuclear liquid-gas phase transitions.
The qualitative critical behavior of the opposite-parity states was shown to differ non-trivially; e.g., the terms linked to the positive- and negative-parity states contribute to the overall fluctuations with opposite signs [61]. The decomposition performed in that study, however, cannot be interpreted in terms of cumulants of the baryon number. In this work, we extend this analysis by explicitly evaluating the fluctuations in the individual parity channels, as well as the correlation between them.
This work is organized as follows. In Sec. II, we introduce the hadronic parity doublet model. In Sec. III, we introduce the cumulants and susceptibilities of the net-baryon number. In Sec. IV, we present our results. Finally, Sec. VI is devoted to the summary of our findings.
## II Parity doublet model
The hadronic parity doublet model for the chiral symmetry restoration [30; 31; 32] is composed of the baryonic parity doublet and mesons as in the Walecka model [62]. The spontaneous chiral symmetry breaking yields the mass splitting between the two fermionic parity partners. In this work, we consider a system with \(N_{f}=2\); hence, relevant for this study are the positive-parity nucleons and their negative-parity partners. The fermionic degrees of freedom are coupled to the chiral fields (\(\sigma\), \(\mathbf{\pi}\)) and the isosinglet vector field (\(\omega_{\mu}\)).
To investigate the properties of strongly interacting matter, we adopt a mean-field approximation. Rotational invariance requires that the spatial component of the \(\omega_{\mu}\) field vanishes, namely, \(\langle\mathbf{\omega}\rangle=0\)#1. Parity conservation on the other hand dictates \(\langle\mathbf{\pi}\rangle=0\). The mean-field thermodynamic potential of the parity doublet model reads [61]#2
Footnote #1: Since \(\omega_{0}\) is the only non-zero component in the mean-field approximation, we simply denote it by \(\omega_{0}\equiv\omega\).
Footnote #2: Assuming isospin symmetric system.
\[\Omega=\Omega_{+}+\Omega_{-}+V_{\sigma}+V_{\omega}, \tag{1}\]
with
\[\Omega_{\pm}=\gamma_{\pm}\int\frac{\mathrm{d}^{3}p}{(2\pi)^{3}}\;T\left[\ln \left(1-f_{\pm}\right)+\ln\left(1-\bar{f}_{\pm}\right)\right], \tag{2}\]
where \(\gamma_{\pm}=2\times 2\) denotes the spin-isospin degeneracy factor for both parity partners, and \(f_{\pm}\) (\(\bar{f}_{\pm}\)) is the particle (antiparticle) Fermi-Dirac distribution function,
\[\begin{split} f_{\pm}&=\frac{1}{1+e^{(E_{\pm}-\mu_ {N})/T}},\\ \bar{f}_{\pm}&=\frac{1}{1+e^{(E_{\pm}+\mu_{N})/T}}, \end{split} \tag{3}\]
where \(T\) is the temperature, the dispersion relation \(E_{\pm}=\sqrt{\mathbf{p}^{2}+m_{\pm}^{2}}\), and the effective baryon chemical potential \(\mu_{N}=\mu_{B}-g_{\omega}\omega\). The mean-field potentials read
\[V_{\sigma} =-\frac{\lambda_{2}}{2}\Sigma+\frac{\lambda_{4}}{4}\Sigma^{2}- \frac{\lambda_{6}}{6}\Sigma^{3}-\epsilon\sigma, \tag{4a}\] \[V_{\omega} =-\frac{m_{\omega}^{2}}{2}\omega^{2}. \tag{4b}\]
where \(\Sigma=\sigma^{2}+\mathbf{\pi}^{2}\), \(\lambda_{2}=\lambda_{4}f_{\pi}^{2}-\lambda_{6}f_{\pi}^{4}-m_{\pi}^{2}\), and \(\epsilon=m_{\pi}^{2}f_{\pi}\). \(m_{\pi}\) and \(m_{\omega}\) are the \(\pi\) and \(\omega\) meson masses, respectively, and \(f_{\pi}\) is the pion decay constant.
The masses of the positive- and negative-parity baryonic chiral partners, \(N_{\pm}\), are given by
\[m_{\pm}=\frac{1}{2}\left(\sqrt{a^{2}\sigma^{2}+4m_{0}^{2}}\mp b\sigma\right), \tag{5}\]
where \(a\), \(b\) are combinations of Yukawa coupling constants [61], and \(m_{0}\) is the chirally invariant mass parameter. We note that in the parity doublet model, the chiral symmetry breaking yields the mass splitting between the chiral partners. Therefore, the order parameter for the chiral symmetry breaking is the mass difference, \(m_{-}-m_{+}=b\sigma\).
In-medium profiles of the mean fields are obtained by extremizing the thermodynamic potential, Eq. (1), leading to the following gap equations:
\[\begin{split} 0&=\frac{\partial\Omega}{\partial\sigma} =\frac{\partial V_{\sigma}}{\partial\sigma}+s_{+}\frac{\partial m_{+}}{ \partial\sigma}+s_{-}\frac{\partial m_{-}}{\partial\sigma},\\ 0&=\frac{\partial\Omega}{\partial\omega}=\frac{ \partial V_{\omega}}{\partial\omega}+g_{\omega}\left(n_{+}+n_{-}\right),\end{split} \tag{6}\]
where the scalar and vector densities are
\[s_{\pm}=\gamma_{\pm}\int\frac{\mathrm{d}^{3}p}{\left(2\pi\right)^{3}}\frac{m_{ \pm}}{E_{\pm}}\left(f_{\pm}+\bar{f}_{\pm}\right) \tag{7}\]
and
\[n_{\pm}=\gamma_{\pm}\int\frac{\mathrm{d}^{3}p}{\left(2\pi\right)^{3}}\left(f_{ \pm}-\bar{f}_{\pm}\right), \tag{8}\]
respectively.
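For illustration, the free-gas integrals in Eqs. (7) and (8) can be evaluated by numerical quadrature as sketched below (units in GeV; the example temperature and chemical potential are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def densities(m: float, T: float, mu: float, gamma: float = 4.0):
    """Vector density n (Eq. (8)) and scalar density s (Eq. (7)) of a single
    parity state by numerical quadrature; all quantities in GeV, gamma is the
    spin-isospin degeneracy. A free-gas sketch of the integrals only."""
    def fermi(p, sign):
        E = np.sqrt(p * p + m * m)
        x = np.clip((E - sign * mu) / T, -700.0, 700.0)  # avoid exp overflow
        return 1.0 / (1.0 + np.exp(x))
    pref = gamma / (2.0 * np.pi ** 2)  # angular integration done analytically
    n = pref * quad(lambda p: p * p * (fermi(p, +1) - fermi(p, -1)), 0.0, np.inf)[0]
    s = pref * quad(lambda p: p * p * m / np.sqrt(p * p + m * m)
                    * (fermi(p, +1) + fermi(p, -1)), 0.0, np.inf)[0]
    return n, s

# toy check: positive-parity nucleons at T = 50 MeV and mu_N = 950 MeV
print(densities(0.939, 0.050, 0.950))
```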
In the grand canonical ensemble, the net-baryon number density can be calculated as follows:
\[n_{B}=-\frac{\mathrm{d}\Omega}{\mathrm{d}\mu_{B}}\Bigg{|}_{T}=n_{+}+n_{-}, \tag{9}\]
where \(n_{\pm}\) are the vector densities of the baryonic chiral partners.
The positive-parity state, \(N_{+}\), corresponds to the nucleon \(N(938)\). Its negative parity partner, \(N_{-}\), is identified with \(N(1535)\)[63]. Their vacuum masses are shown in Table 1. The value of the parameter \(m_{0}\) has to be chosen so that a chiral crossover is realized at finite temperature and vanishing chemical potential. The model predicts the chiral symmetry restoration to be a crossover for \(m_{0}\gtrsim 700\) MeV. Following the previous studies of the parity-doublet-based models [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 61], as well as recent lattice QCD results [27; 28; 29], we choose a rather large value, \(m_{0}=750\) MeV. We note, however, that the results presented in this work qualitatively do not depend on the choice of \(m_{0}\), as long as the chiral crossover appears at \(\mu_{B}=0\). The parameters \(a\) and \(b\) are determined by the aforementioned vacuum nucleon masses and the chirally invariant mass \(m_{0}\) via Eq. (5). The remaining parameters: \(g_{\omega}\), \(\lambda_{4}\) and \(\lambda_{6}\), are fixed by the properties of the nuclear ground state at zero temperature, i.e., the saturation density, binding energy, and compressibility parameter at \(\mu_{B}=923\) MeV. The constraints are as follows:
\[n_{B} =0.16~{}{\rm fm}^{-3}, \tag{10a}\] \[E/A-m_{+} =-16~{}{\rm MeV},\] (10b) \[K =9n_{B}^{2}\frac{\partial^{2}\left(E/A\right)}{\partial n_{B}^{2}} =240~{}{\rm MeV}. \tag{10c}\]
We note that the six-point scalar interaction term in Eq. (4a) is essential to reproduce the empirical value of the compressibility in Eq. (10c) [58].
The compilation of the parameters used in this paper is found in Table 1. For this set of parameters, we obtain the pseudo-critical temperature of the chiral crossover at vanishing baryon chemical potential, \(T_{c}=209\) MeV. In Fig. 1 we show the temperature dependence of the masses of the chiral partners. At low temperatures, chiral symmetry is broken and they have different masses. As chiral symmetry gets restored, their masses converge towards the chirally invariant mass \(m_{0}\). The mass of the \(N_{-}\) monotonically decreases towards \(m_{0}\). On the other hand, the mass of \(N_{+}\) develops a shallow minimum close to the chiral restoration and converges to \(m_{0}\) from below. The derivatives of \(m_{\pm}\) can be readily calculated from Eq. (5), namely
\[\frac{\partial m_{\pm}}{\partial\sigma}=\frac{1}{2}\left(\frac{a^{2}\sigma}{ \sqrt{a^{2}\sigma^{2}+4m_{0}^{2}}}\mp b\right). \tag{11}\]
Note that for the positive-parity state, a minimum value of the mass, \(m_{+}^{\rm min}\), exists at
\[\sigma_{\rm min}=\frac{2bm_{0}}{a\sqrt{a^{2}-b^{2}}}, \tag{12}\]
while the mass of the negative-parity state monotonically decreases with \(\sigma\) as the chiral symmetry gets restored. We also note that \(\sigma_{\rm min}>0\); thus, the positive-parity state attains a minimum mass for any choice of \(m_{0}>0\) [61].
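These relations are easy to verify numerically with the parameters of Table 1; the short sketch below reproduces the vacuum masses from Eq. (5) and the location of the \(m_{+}\) minimum from Eq. (12):

```python
import numpy as np

a, b, m0, f_pi = 20.68, 6.03, 0.750, 0.093   # Table 1 (m0, f_pi in GeV)

def masses(sigma: float):
    """Chiral-partner masses m_+ and m_- of Eq. (5)."""
    root = np.sqrt(a ** 2 * sigma ** 2 + 4.0 * m0 ** 2)
    return 0.5 * (root - b * sigma), 0.5 * (root + b * sigma)

print(masses(f_pi))          # vacuum (sigma = f_pi): ~ (0.939, 1.500) GeV

sigma_min = 2.0 * b * m0 / (a * np.sqrt(a ** 2 - b ** 2))   # Eq. (12)
print(sigma_min, masses(sigma_min)[0])   # m_+ attains its minimum, below m0
```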
At low temperatures, the model predicts sequential first-order nuclear liquid-gas and chiral phase transitions with critical points located at \(T_{\rm lg}=16\) MeV, \(\mu_{B}=909\) MeV (\(n_{B}=0.053~{\rm fm}^{-3}=0.33n_{0}\)) and \(T_{\rm ch}=7\) MeV, \(\mu_{B}=1526\) MeV (\(n_{B}=1.25~{\rm fm}^{-3}=7.82n_{0}\)), respectively. In Fig. 2, we show the phase diagram of the parity doublet model. At low temperature, the nuclear liquid-gas and chiral phase transitions are sequential. As the temperature increases, they merge into a single crossover transition at vanishing baryon chemical potential. We note that the exact location of the chiral phase transition at low temperature depends on, e.g., the mass of the negative-parity state [38]. At zero temperature, it is expected to occur roughly at \(\mu_{B}\sim m_{-}\).
We note that the minimum of \(m_{+}\) is obtained for any trajectory from chirally broken to chirally symmetric phase. Remarkably, \(\sigma_{\rm min}\) is reached at \(T\) and \(\mu_{B}\) which are close to the chiral phase boundary (see Fig. 2). We emphasize that the properties discussed in this work are expected to appear independently of the position of the chiral critical point on the phase diagram. Although the dependence of \(m_{+}\) on \(\sigma\) is not universal and model dependent, we stress that the calculations with the functional renormalization group techniques preserve the same in-medium behavior [64]. At present, only the first-principle LQCD calculations can provide a reliable answer.
In the next section, we discuss the general structure of the second-order susceptibilities of the net-baryon number density for positive- and negative-parity chiral partners to quantify their roles near the second-order phase transition at finite density.
## III Cumulants and susceptibilities of the net-baryon number
For a system consisting of \(N_{B}=N_{+}+N_{-}\) baryons with \(N_{\pm}\) being the net number of positive/negative-parity baryons, the mean can be calculated as
\[\langle N_{B}\rangle\equiv\kappa_{1}^{B}=\kappa_{1}^{+}+\kappa_{1}^{-}, \tag{13}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(m_{0}\) [GeV] & \(m_{+}\) [GeV] & \(m_{-}\) [GeV] & \(m_{\pi}\) [GeV] & \(f_{\pi}\) [GeV] & \(m_{\omega}\) [GeV] & \(\lambda_{4}\) & \(\lambda_{6}f_{\pi}^{2}\) & \(g_{\omega}\) & \(a\) & \(b\) \\ \hline \hline
0.750 & 0.939 & 1.500 & 0.140 & 0.093 & 0.783 & 28.43 & 11.10 & 6.45 & 20.68 & 6.03 \\ \hline \end{tabular}
\end{table}
Table 1: Physical inputs in matter-free space and the model parameters used in this work. See Sec. II for details.
and the variance,
\[\langle\delta N_{B}\delta N_{B}\rangle\equiv\kappa_{2}^{B}=\kappa_{2}^{++}+\kappa_ {2}^{--}+2\kappa_{2}^{+-}, \tag{14}\]
where
\[\begin{split}\kappa_{1}^{\alpha}&=\langle N_{\alpha} \rangle,\\ \kappa_{2}^{\alpha\beta}&=\langle\delta N_{\alpha} \delta N_{\beta}\rangle.\end{split} \tag{15}\]
Notably, \(\kappa_{1}^{\pm}\), \(\kappa_{2}^{++}\) and \(\kappa_{2}^{--}\) are the cumulants of the \(N_{+}\) and \(N_{-}\) distributions; \(\kappa_{2}^{+-}\) is the correlation between \(N_{+}\) and \(N_{-}\).
In general, the cumulants of the baryon number are defined as
\[\kappa_{n}^{B}\equiv T^{n}\frac{\mathrm{d}^{n}\log\mathcal{Z}}{\mathrm{d}\mu_ {B}^{n}}\Bigg{|}_{T}, \tag{16}\]
where \(\mathcal{Z}\) is the partition function. Because the thermodynamic potential \(\Omega\) is related to the grand-canonical partition function through \(\Omega=-T\log\mathcal{Z}/V\), one may relate the cumulants with the susceptibilities of the net-baryon number in the following way
\[\kappa_{n}^{B}=VT^{3}\chi_{n}^{B}, \tag{17}\]
where \(V\) is the volume of the system and
\[\chi_{n}^{B}\equiv-\frac{\mathrm{d}^{n}\hat{\Omega}}{\mathrm{d}\hat{\mu}_{B}^ {n}}\Bigg{|}_{T}, \tag{18}\]
with \(\hat{\Omega}=\Omega/T^{4}\) and \(\hat{\mu}_{B}=\mu_{B}/T\). For example, \(\kappa_{1}^{B}=VT^{3}\chi_{1}^{B}=Vn_{B}=\langle N_{B}\rangle\) is the mean of the baryon number. We note that \(\langle N_{B}\rangle=\langle N_{+}\rangle+\langle N_{-}\rangle\) is the sum of the means of the net numbers of particles with a given parity; thus \(\kappa_{1}^{B}=\kappa_{1}^{+}+\kappa_{1}^{-}\), where \(\kappa_{1}^{\alpha}=\langle N_{\alpha}\rangle\).
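To illustrate the derivative structure of Eq. (18), the sketch below evaluates \(\chi_{2}^{B}\) by finite differences for an ideal gas of the two parity partners with their masses frozen at the vacuum values, i.e., dropping the gap equations (6); the full mean-field result requires solving them at each \((T,\mu_{B})\):

```python
import numpy as np
from scipy.integrate import quad

def omega_hat(mu_B: float, T: float, masses=(0.939, 1.500), gamma: float = 4.0):
    """Omega/T^4 of an ideal gas of the two parity partners with masses frozen
    at their vacuum values, i.e. Eq. (2) without the mean fields; used only to
    illustrate the derivative structure of Eq. (18). Units in GeV."""
    total = 0.0
    for m in masses:
        def integrand(p, m=m):
            E = np.sqrt(p * p + m * m)
            # ln(1 + e^x) evaluated stably via logaddexp
            return p * p * (np.logaddexp(0.0, -(E - mu_B) / T)
                            + np.logaddexp(0.0, -(E + mu_B) / T))
        total += -gamma * T / (2.0 * np.pi ** 2) * quad(integrand, 0.0, np.inf)[0]
    return total / T ** 4

def chi2_B(mu_B: float, T: float, h: float = 1e-3) -> float:
    """chi_2^B = -d^2 (Omega/T^4) / d(mu_B/T)^2, Eq. (18), by central differences."""
    f = lambda mu_hat: omega_hat(mu_hat * T, T)
    mu_hat = mu_B / T
    return -(f(mu_hat + h) - 2.0 * f(mu_hat) + f(mu_hat - h)) / h ** 2

print(chi2_B(0.0, 0.150))   # e.g. at T = 150 MeV and mu_B = 0
```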
To be able to connect the individual cumulants \(\kappa_{n}^{\alpha\beta}\) to susceptibilities, we need to rewrite the mean-field thermodynamic potential in terms of newly defined chemical potentials, \(\mu_{\pm}\) for positive- and negative-parity states as follows:
\[\begin{split}\Omega&=\Omega_{+}\left(\mu_{+},T, \sigma\left(\mu_{+},\mu_{-}\right),\omega\left(\mu_{+},\mu_{-}\right)\right) \\ &+\Omega_{-}\left(\mu_{-},T,\sigma\left(\mu_{+},\mu_{-}\right), \omega\left(\mu_{+},\mu_{-}\right)\right)\\ &+V_{\sigma}(\sigma\left(\mu_{+},\mu_{-}\right))+V_{\omega}( \omega\left(\mu_{+},\mu_{-}\right)).\end{split} \tag{19}\]
Such a separation into separate chemical potentials is possible in the mean field approximation which is a single particle theory (see detailed discussion in [65]). To be thermodynamically consistent, one needs to set \(\mu_{\pm}=\mu_{N}=\mu_{B}-g_{\omega}\omega\) at the end of the calculations and before numerical evaluation. We note that \(\mu_{\pm}\) are independent variables. The net-baryon density is then given as
\[n_{B}=n_{+}+n_{-}, \tag{20}\]
where \(n_{\pm}\) are the net densities given by
\[\begin{split} n_{\pm}&=-\frac{\mathrm{d}\Omega}{\mathrm{d}\mu_{\pm}}\Bigg{|}_{T,\mu_{\pm}=\mu_{N}}=\\ &-\frac{\partial\Omega}{\partial\mu_{\pm}}-\frac{\partial\Omega}{\partial\sigma}\frac{\partial\sigma}{\partial\mu_{\pm}}-\frac{\partial\Omega}{\partial\omega}\frac{\partial\omega}{\partial\mu_{\pm}}=-\frac{\partial\Omega}{\partial\mu_{\pm}}.\end{split} \tag{21}\]
The last equality holds due to the stationary conditions. We stress that the derivative should be taken not only at constant temperature but also at \(\mu_{+}=\mu_{-}=\mu_{N}\).
Given that \(\mu_{\pm}\) are independent, one recognizes that Eq. (21) agrees with the definition in Eq. (9). Likewise, the second-order susceptibility can be expressed as follows
\[\chi_{2}^{B}=\chi_{2}^{++}+\chi_{2}^{--}+2\chi_{2}^{+-}, \tag{22}\]
Figure 2: Phase diagram obtained in the parity doublet model. Shown are the liquid-gas (red, solid/dotted line) and chiral (black, solid/dashed-dotted line) phase transition/crossover lines. Circles indicate critical points below which the transitions are of the first order. The lines are obtained from the minima of \(\partial\sigma/\partial\mu_{\pm}\) (see text for details). The blue, dashed line shows the line where the mass of the positive-parity state has a minimum (see text for details).
Figure 1: Masses of the baryonic chiral partners at finite temperature and vanishing baryon chemical potential. The temperature is normalized to the chiral crossover temperature, \(T_{c}\), at \(\mu_{B}=0\). The dotted, blue line shows the chirally invariant mass, \(m_{0}\). The vertical line marks the chiral crossover transition.
where \(\chi_{2}^{++}\) (\(\chi_{2}^{--}\)) are the susceptibilities of the positive- (negative-) parity states and \(\chi_{2}^{+-}\) gives the correlations between them, i.e., correlations between vector densities. The individual terms in the above equation are given as follows
\[\chi_{2}^{\alpha\beta}=\frac{1}{VT^{3}}\kappa_{2}^{\alpha\beta}=-\frac{\mathrm{ d}^{2}\hat{\Omega}}{\mathrm{d}\hat{\mu}_{\alpha}\mathrm{d}\hat{\mu}_{\beta}} \bigg{|}_{T,\mu_{\alpha}=\mu_{\beta}=\mu_{N}}, \tag{23}\]
where \(\hat{\mu}_{x}=\mu_{x}/T\) for \(x=\alpha,\beta\), with \(\alpha,\beta\) labeling the particle species and \(\mu_{\alpha}\) denoting the corresponding effective chemical potentials \(\mu_{\pm}\). We notice that, under the mean-field approximation, \(\chi_{2}^{\alpha\beta}=\chi_{2}^{\beta\alpha}\), thus \(\chi_{2}^{+-}=\chi_{2}^{-+}\). Furthermore, we assume isospin symmetry, thus \(\chi_{2}^{++}\) is the net-nucleon number susceptibility. Consequently, the susceptibility of the net-proton number density is \(\chi_{2}^{pp}\approx 1/2\,\chi_{2}^{++}\). This is a fair assumption since isospin correlations are expected to be small [66].
Event-by-event cumulants and correlations are extensive quantities. They depend on the volume of the system and its fluctuations, which are unknown in heavy-ion collisions. The volume dependence, however, can be canceled out by taking the ratio of cumulants. Therefore, it is useful to define ratios of the cumulants of the baryon number, which may also be expressed through susceptibilities,
\[R_{n,m}^{B}\equiv\frac{\kappa_{n}^{B}}{\kappa_{m}^{B}}=\frac{\chi_{n}^{B}}{\chi _{m}^{B}}. \tag{24}\]
In the following, we focus on the ratios of the second and first-order cumulants of different parity distributions. Therefore, it is useful to define
\[R_{2,1}^{\alpha\beta}\equiv\frac{\kappa_{2}^{\alpha\beta}}{\sqrt{\kappa_{1}^{ \alpha}\kappa_{1}^{\beta}}}=\frac{\chi_{2}^{\alpha\beta}}{\sqrt{\chi_{1}^{ \alpha}\chi_{1}^{\beta}}}. \tag{25}\]
We note that in general the ratios, \(R_{n,m}^{\alpha\beta}\), are not additive, e.g., \(R_{2,1}^{++}+R_{2,1}^{--}+R_{2,1}^{+-}\neq R_{2,1}^{B}\).
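Continuing the Boltzmann-gas sketch after Eq. (18), one can verify that the volume factor of Eq. (17) indeed drops out of the ratio in Eq. (24), leaving the familiar \(\coth\hat{\mu}_{B}\) baseline:

```python
V, T = sp.symbols('V T', positive=True)

kappa1 = V * T**3 * chi[0]            # Eq. (17)
kappa2 = V * T**3 * chi[1]
print(sp.simplify(kappa2 / kappa1))   # equals coth(muhat): volume-independent, Eq. (24)
```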
In the following, we will also compare our results with the hadron resonance gas (HRG) model formulation of the thermodynamics of the confined phase of QCD. The model is widely used for the description of matter under extreme conditions, e.g., in the context of heavy-ion collision phenomenology [67; 68; 69; 70; 71; 72]. Commonly used implementations of the HRG employ vacuum hadron masses in the hadronic phase and hence do not include possible in-medium effects. Several extensions of the HRG model have been proposed to quantify the LQCD EoS and various fluctuation observables. They account for consistent implementation of hadronic interactions within the S-matrix approach [73], a more complete implementation of a continuously growing exponential mass spectrum and/or possible repulsive interactions among constituents [74; 75; 76; 77; 71; 70]. Nevertheless, it is challenging to identify the role of different in-medium effects and hadronic interactions on the properties of higher-order fluctuations of conserved charges.
The thermodynamic potential of the HRG model is given as a sum of uncorrelated ideal-gas particles:
\[\Omega^{\text{HRG}}=\sum_{x=\pm}\Omega_{x}, \tag{26}\]
with \(\Omega_{x}\) given by Eq. (2). The masses of \(N_{\pm}\) are taken to be the vacuum masses (see Table 1) and \(\mu_{N}=\mu_{B}\). The net-baryon density and its susceptibility are obtained through Eqs. (9) and (18), respectively. Thus, in the HRG model one has,
\[\chi_{2}^{B,\text{HRG}}=\chi_{2}^{++}+\chi_{2}^{--}. \tag{27}\]
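For a numerical impression of Eq. (27), the following sketch evaluates \(\chi_{2}^{B,\text{HRG}}\) by finite differences. Since Eq. (2) is not reproduced here, the standard ideal-fermion one-particle form of \(\Omega_{x}\) and a nucleon degeneracy factor of 4 (spin times isospin) are our assumptions, as are the function names:

```python
import numpy as np
from scipy.integrate import quad

def omega_hat(T, mu, m, gamma=4.0):
    """Assumed Omega_x/(V T^4) for an ideal fermion gas of particles and
    antiparticles; gamma = spin x isospin degeneracy (valid for mu < m)."""
    def integrand(p):
        E = np.hypot(p, m)
        return p**2 * (np.log1p(np.exp(-(E - mu)/T))
                       + np.log1p(np.exp(-(E + mu)/T)))
    I, _ = quad(integrand, 0.0, 20.0)   # momenta in GeV; integrand decays fast
    return -gamma / (2*np.pi**2) * I / T**3

def chi2_hrg(T, muB, m_plus=0.939, m_minus=1.500, h=1e-2):
    """chi_2^{B,HRG} = chi_2^{++} + chi_2^{--}, Eq. (27), with vacuum masses."""
    def omh(mu):
        return omega_hat(T, mu, m_plus) + omega_hat(T, mu, m_minus)
    # chi_2 = -d^2 Omega_hat/d muhat^2 with muhat = mu/T, so d mu = T * d muhat
    return -(omh(muB + h*T) - 2*omh(muB) + omh(muB - h*T)) / h**2

print(chi2_hrg(T=0.150, muB=0.0))
```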
The susceptibilities introduced in Eq. (23) can be evaluated analytically by differentiating Eq. (19). Explicit calculations yield
\[\begin{split}\chi_{2}^{\alpha\beta}=&-\frac{\partial\sigma}{\partial\hat{\mu}_{\beta}}\left(\frac{\partial^{2}\hat{\Omega}}{\partial\sigma^{2}}\frac{\partial\sigma}{\partial\hat{\mu}_{\alpha}}+\frac{\partial^{2}\hat{\Omega}}{\partial\sigma\partial\omega}\frac{\partial\omega}{\partial\hat{\mu}_{\alpha}}-\frac{\partial\hat{n}_{\alpha}}{\partial\sigma}\right)\\ &-\frac{\partial\omega}{\partial\hat{\mu}_{\beta}}\left(\frac{\partial^{2}\hat{\Omega}}{\partial\omega^{2}}\frac{\partial\omega}{\partial\hat{\mu}_{\alpha}}+\frac{\partial^{2}\hat{\Omega}}{\partial\sigma\partial\omega}\frac{\partial\sigma}{\partial\hat{\mu}_{\alpha}}-\frac{\partial\hat{n}_{\alpha}}{\partial\omega}\right)\\ &+\frac{\partial\sigma}{\partial\hat{\mu}_{\alpha}}\frac{\partial\hat{n}_{\beta}}{\partial\sigma}+\frac{\partial\omega}{\partial\hat{\mu}_{\alpha}}\frac{\partial\hat{n}_{\beta}}{\partial\omega}+\frac{\partial\hat{n}_{\alpha}}{\partial\hat{\mu}_{\beta}},\end{split} \tag{28}\]
where \(\hat{n}_{\alpha/\beta}=n_{\alpha/\beta}/T^{3}\), and \(n_{\alpha/\beta}\) are the net densities defined in Eq. (21). We note that the last term, \(\partial\hat{n}_{\alpha}/\partial\hat{\mu}_{\beta}=0\) for \(\alpha\neq\beta\).
To evaluate Eq. (28), one needs the derivatives of the mean fields w.r.t. the chemical potentials \(\mu_{\pm}\). They can be obtained by differentiating the gap equations, namely
\[\begin{split}&\left.\frac{\mathrm{d}}{\mathrm{d}\hat{\mu}_{ \alpha}}\left(\frac{\partial\hat{\Omega}}{\partial\sigma}\right)\right|_{T,\hat {\mu}_{\alpha}=\hat{\mu}_{N}}&=0,\\ &\left.\frac{\mathrm{d}}{\mathrm{d}\hat{\mu}_{\alpha}}\left(\frac{ \partial\hat{\Omega}}{\partial\omega}\right)\right|_{T,\hat{\mu}_{\alpha}=\hat{ \mu}_{N}}&=0.\end{split} \tag{29}\]
Figure 3: Susceptibilities, \(\chi_{2}^{\alpha\beta}\), at vanishing baryon chemical potential. Shown are also the net-baryon number susceptibility \(\chi_{2}^{B}\) and the corresponding result, \(\chi_{2}^{B,\text{HRG}}\) obtained in the HRG model. We note that the correlator, \(\chi_{2}^{+-}\), is shown with the negative sign. The vertical, dotted line marks the chiral phase transition.
Writing them explicitly and isolating \(\partial\sigma/\partial\hat{\mu}_{\alpha}\) and \(\partial\omega/\partial\hat{\mu}_{\alpha}\) yields
\[\frac{\partial\sigma}{\partial\hat{\mu}_{\alpha}}= \left(\frac{\frac{\partial^{2}\hat{\Omega}}{\partial\sigma\partial\omega}}{\frac{\partial^{2}\hat{\Omega}}{\partial\omega^{2}}}\frac{\partial\hat{n}_{\alpha}}{\partial\omega}-\frac{\partial\hat{n}_{\alpha}}{\partial\sigma}\right)\Bigg{/}\left(\frac{\partial^{2}\hat{\Omega}}{\partial\sigma^{2}}-\frac{\left(\frac{\partial^{2}\hat{\Omega}}{\partial\sigma\partial\omega}\right)^{2}}{\frac{\partial^{2}\hat{\Omega}}{\partial\omega^{2}}}\right), \tag{30}\] \[\frac{\partial\omega}{\partial\hat{\mu}_{\alpha}}= -\left(\frac{\partial\hat{n}_{\alpha}}{\partial\omega}+\frac{\partial^{2}\hat{\Omega}}{\partial\sigma\partial\omega}\frac{\partial\sigma}{\partial\hat{\mu}_{\alpha}}\right)\Bigg{/}\frac{\partial^{2}\hat{\Omega}}{\partial\omega^{2}}.\]
We note that corresponding derivatives of the mean fields w.r.t. \(\hat{\mu}_{\beta}\) can be found similarly upon replacing \(\alpha\rightarrow\beta\). The above derivatives can be plugged into Eq. (28). Now, calculating Eq. (28) amounts to providing the values of the mean fields and evaluating them numerically.
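Schematically, the procedure amounts to one linear solve per chemical potential once the gap equations are solved. The following minimal sketch illustrates this with a toy potential standing in for the full expression of Eq. (19); the potential, its couplings, and the function names are our assumptions, purely for illustration:

```python
import numpy as np
from scipy.optimize import root

def Omega_hat(fields, mu_p, mu_m):
    """Toy stand-in for the mean-field potential of Eq. (19) (illustrative only)."""
    s, w = fields
    return (0.25*(s**2 - 1)**2 + 0.5*w**2 + 0.1*s*w
            - 0.3*(mu_p + mu_m)*s - 0.2*(mu_p - mu_m)*w)

def gap(fields, mu_p, mu_m, h=1e-6):
    """Gradient w.r.t. (sigma, omega) by central differences -> gap equations."""
    s, w = fields
    ds = (Omega_hat((s + h, w), mu_p, mu_m) - Omega_hat((s - h, w), mu_p, mu_m)) / (2*h)
    dw = (Omega_hat((s, w + h), mu_p, mu_m) - Omega_hat((s, w - h), mu_p, mu_m)) / (2*h)
    return np.array([ds, dw])

mu_p = mu_m = 0.3
sol = root(gap, x0=[1.0, 0.0], args=(mu_p, mu_m))

# Eq. (29): d/dmu_alpha of the gap equations vanishes, i.e. H @ (dsigma, domega) = -b,
# with H the field Hessian and b the mixed field/chemical-potential derivatives (Eq. (30)).
h = 1e-5
H = np.empty((2, 2))
for i in range(2):
    e = np.zeros(2); e[i] = h
    H[:, i] = (gap(sol.x + e, mu_p, mu_m) - gap(sol.x - e, mu_p, mu_m)) / (2*h)
b = (gap(sol.x, mu_p + h, mu_m) - gap(sol.x, mu_p - h, mu_m)) / (2*h)
print(np.linalg.solve(H, -b))   # (dsigma/dmu_+, domega/dmu_+)
```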
## IV Results
Using Eq. (28), we evaluate the susceptibilities of the net number densities for the positive- and negative-parity chiral partners, as well as the correlations among them within the parity doublet model. The results for vanishing baryon chemical potential are shown in Fig. 3. The net-baryon susceptibility obtained in the HRG model increases monotonically and does not resemble any critical behavior. This is expected because the partition function of the HRG model is just a sum of ideal, uncorrelated particles [cf. Eq. (26)] with vacuum hadron masses. The net-baryon susceptibility obtained in the parity doublet model clearly deviates from the HRG result. The increase around \(T_{c}\) and the saturation above it is a bulk consequence of the interplay of critical chiral dynamics with in-medium hadron masses and repulsive interactions [79]. Around \(T_{c}\), the susceptibilities \(\chi_{2}^{++}\) and \(\chi_{2}^{--}\) develop a swift increase due to chiral symmetry restoration and the resulting change of their effective masses.
Figure 4: Susceptibilities, \(\chi_{2}^{\alpha\beta}\), at different temperatures. Also shown is the net-baryon number susceptibility, \(\chi_{2}^{B}\). We note that the correlator, \(\chi_{2}^{+-}\), is shown with the negative sign. The dashed and dotted vertical lines mark baryon chemical potentials for the liquid-gas and chiral crossover transitions, respectively. The inset figures in the top panel show \(\chi_{2}^{B}\) in the vicinity of the chiral crossover transition.
They continue to grow at higher temperatures. Up to \(T_{c}\), the correlation \(\chi_{2}^{+-}\) is almost negligible. The reason is that the \(N_{-}\) resonance is thermally suppressed at low temperatures due to its high mass. The correlation only becomes relevant in the vicinity of the chiral crossover, where the negative-parity state becomes swiftly populated. The full net-baryon number susceptibility saturates and gradually decreases to zero at high temperatures due to the non-vanishing correlation between the baryonic chiral partners. We note that \(\chi_{2}^{+-}\) is negative at vanishing \(\mu_{B}\).
Next, we turn to finite baryon chemical potential. In Fig. 4, we show the susceptibilities \(\chi_{2}^{\alpha\beta}\) for different temperatures. At \(T=30\) MeV, the net-baryon number susceptibility develops a peak at \(\mu_{B}<1\) GeV, which is a remnant of the liquid-gas phase transition. At higher chemical potentials, it develops a plateau with a small peak around \(\mu_{B}=1.4\) GeV, which is a remnant of the chiral phase transition. The net-nucleon susceptibility, \(\chi_{2}^{++}\), overlaps with \(\chi_{2}^{B}\) at small \(\mu_{B}\), which is expected due to thermal suppression of the negative-parity state. On the other hand both \(\chi_{2}^{++}\) and \(\chi_{2}^{--}\) develop strong peaks around \(\mu_{B}\sim 1.4\) GeV. Interestingly, the correlator becomes negative, and \(\chi_{2}^{+-}\) features a minimum, which is of similar magnitude as the peaks in \(\chi_{2}^{++}\) and \(\chi_{2}^{--}\). Therefore, the negative correlation between the baryonic chiral partners causes the suppression of the net-baryon susceptibility around the chiral crossover [cf. Eq. (22)]. The structure is similar at \(T=50\) MeV.
At low temperature, the liquid-gas and chiral phase transitions are well separated. Higher temperature gives rise to a more complicated structure; the two crossover lines become closer and finally merge (see Fig. 2). This is seen in the bottom panels of Fig. 4. The \(\chi_{2}^{B}\) features a peak around the chemical potential where the transitions happen. This is not reflected in the individual parity fluctuations; \(\chi_{2}^{--}\) swiftly increase at the chiral crossover, while the correlator \(\chi_{2}^{+-}\) starts to decrease.
In Fig. 5, we plot the ratios \(R_{2,1}^{\alpha\beta}\) for different temperatures.
Figure 5: Scaled variances, \(R_{2,1}^{\alpha\beta}\) for different temperatures. Also shown is the ratio \(R_{2,1}^{B}\), for the net-baryon number susceptibility. We note that the ratio, \(R_{2,1}^{+-}\), is shown with the negative sign. The dashed and dotted vertical lines mark baryon chemical potentials for the liquid-gas and chiral crossover transitions, respectively. In the top panel, the inset figures show \(R_{2,1}^{B}\) in the vicinity of the chiral crossover transition.
At low temperatures, the ratio \(R_{2,1}^{++}\) is sensitive to both the liquid-gas and chiral crossovers, while \(R_{2,1}^{--}\) is sensitive only to the latter transition. Notably, at the chiral crossover, the peak in \(R_{2,1}^{--}\) is much stronger than in \(R_{2,1}^{++}\). On the other hand, similarly to \(\chi_{2}^{B}\), the ratio \(R_{2,1}^{B}\) is sensitive to the liquid-gas phase transition; however, it becomes suppressed as compared to \(R_{2,1}^{++}\) and \(R_{2,1}^{--}\), and the enhancement due to criticality is essentially invisible at the chiral phase boundary. We note that in the close vicinity of the chiral critical endpoint, the \(R_{2,1}^{B}\) ratio indeed shows critical behavior; however, this happens at much lower temperatures. At small \(\mu_{B}\), the ratio \(R_{2,1}^{+-}\) stays close to zero and deviates from it only when the negative-parity chiral partner becomes populated, i.e., when \(R_{2,1}^{--}\) deviates from unity. Its minimum value is obtained in the vicinity of the chiral crossover. This signals the sensitivity of the correlation between the baryonic chiral partners to the onset of chiral symmetry restoration. Interestingly, \(R_{2,1}^{--}\) features a well-pronounced peak at high temperatures in the vicinity of the chiral transition, while the other quantities do not.
To quantify the differences of fluctuations in the vicinity of the liquid-gas and chiral phase transitions, we calculate the fluctuations as functions of temperature along the trajectories obtained by tracing the remnants of these two transitions, i.e., the corresponding minima of \(\partial\sigma/\partial\mu_{+}\) and \(\partial\sigma/\partial\mu_{-}\) (see the phase diagram in Fig. 2). The temperature dependence of \(R_{2,1}^{\alpha\beta}\) along the remnant of the liquid-gas phase transition is shown in the left panel of Fig. 6. The ratio \(R_{2,1}^{++}\) increases toward the critical point of the liquid-gas phase transition, located at \(T\simeq 16\) MeV. On the other hand, \(R_{2,1}^{--}\) stays close to unity, due to thermal suppression of the negative-parity nucleon. As a result, \(R_{2,1}^{+-}\) vanishes. Therefore, as the critical point of the liquid-gas phase transition is approached, the system is dominated by the positive-parity state and the fluctuations are entirely due to its contribution. In the right panel of Fig. 6, we show the same quantities along the chiral crossover line. All quantities diverge at the chiral critical point, which is located at \(T\simeq 7\) MeV. In this case, the contribution from the negative-parity state is not negligible close to the critical point. Its presence increases the strength of the correlation between the chiral partners, which becomes large and negatively divergent. In turn, the ratio \(R_{2,1}^{B}\) decreases and starts diverging only in the close vicinity of the chiral critical point. Our results indicate that the net-proton fluctuations do not necessarily reflect the net-baryon fluctuations at the chiral phase boundary.
As we have observed, the susceptibility of the negative-parity state becomes dominant in the vicinity of the chiral critical region. This is even more readily seen in the ratio of the second- to first-order susceptibility. Our finding suggests that the fluctuations of the negative-parity state provide a good signal to identify the chiral critical point. We remark, however, on the simplified nature of the present model calculation. In the current model, the negative-parity state, \(N_{-}(1535)\), is treated as a stable particle with no width, whereas its width is known to be of the order of \(\Gamma\approx 150\) MeV [63]. It would be vital to explore the finite width and decay properties, and to understand their influence on the fluctuation observables.
## V Effect of repulsion
The repulsive interactions have little to no effect on the chiral crossover transition at small baryon chemical potentials. This is expected due to the vanishing of the \(\omega\) mean field at \(\mu_{B}=0\). In Fig. 7, we show the susceptibilities for different values of the repulsive coupling \(g_{\omega}\), with the other parameters kept fixed, at \(\mu_{B}=0\). As expected, for vanishing coupling, fluctuations are the largest, and the correlator \(\chi_{2}^{+-}\) vanishes. As the value of \(g_{\omega}\) increases,
the fluctuations of the positive- and negative-parity state become suppressed. At the same time, finite \(g_{\omega}\) implies finite correlations, which otherwise vanish at \(\mu_{B}=0\). With increasing the coupling, the correlations become more negative, further suppressing the total net-baryon number fluctuations. Thus, it is the correlation between the baryonic chiral partners that non-trivially modifies the net-baryon number fluctuations.
While in-medium effects due to chiral symmetry restoration may spoil agreement between the HRG model and LQCD results on the second-order susceptibilities, it can be potentially restored by tuning the strength of repulsive interactions. This can be deduced from Fig. 8, where we compare susceptibilities of the net-baryon number density \(\chi_{2}^{B}\) for different values of the repulsive coupling constant. For vanishing repulsive coupling, the susceptibility swiftly increases and overestimates the HRG result in the vicinity of the chiral crossover. In general, as the repulsive coupling increases, the fluctuations tend to decrease [80]. For twice the value of the original coupling, the susceptibility already underestimates the HRG result. Therefore, by choosing a value somewhere in between, the in-medium effects would cancel out and the agreement with HRG fluctuations would be restored.
To see the effect of the repulsion on the phase structure, in Fig. 9, we plot the phase diagram of the model in the \(T-\mu_{B}\) plane for different values of the repulsive coupling \(g_{\omega}\). In general, a smaller repulsive coupling makes the region where \(\chi_{2}^{+-}>0\) more tilted to the left. Nevertheless, the qualitative structure remains the same, regardless of the presence of the repulsive forces. We note that in Fig. 9, we do not show results for \(T<20\) MeV, where the liquid-gas and chiral transitions become of first order and additional effects, such as non-equilibrium spinodal decomposition, have to be addressed. These interesting effects have already been explored in the context of the Nambu-Jona-Lasinio model [81; 82]. This is, however, beyond the scope of the current work and we plan to elaborate on this elsewhere.
Now, we focus on the properties of the correlator, in particular on the change of its sign at finite chemical potential. Because the qualitative behavior of the correlator does not depend on the repulsive interactions, we consider \(g_{\omega}=0\) and neglect the vector channel. Then, the correlator in Eq. (23) simplifies to the following
\[\chi_{2}^{\alpha\beta}=\frac{1}{\frac{\partial^{2}\Omega}{\partial\sigma^{2}}}\frac{\partial\hat{n}_{\alpha}}{\partial\sigma}\frac{\partial\hat{n}_{\beta}}{\partial\sigma}=\frac{1}{\frac{\partial^{2}\Omega}{\partial\sigma^{2}}}\frac{\partial\hat{n}_{\alpha}}{\partial m_{\alpha}}\frac{\partial\hat{n}_{\beta}}{\partial m_{\beta}}\frac{\partial m_{\alpha}}{\partial\sigma}\frac{\partial m_{\beta}}{\partial\sigma}. \tag{31}\]
Since the curvature \(\frac{\partial^{2}\Omega}{\partial\sigma^{2}}\) is positive, the sign change in the correlator at finite baryon chemical potential is related to the change of the sign of \(\partial m_{\pm}/\partial\sigma\). From Eq. (11), one sees that at \(\sigma_{\rm min}\), the correlator
Figure 8: Susceptibility of the net-baryon number density at \(\mu_{B}=0\) as a function of temperature for different values of the repulsive coupling constant \(g_{\omega}\).
changes sign, while \(\chi_{2}^{++}\) and \(\chi_{2}^{--}\) stay positive. Indeed, we have confirmed this numerically for vanishing repulsive interactions. Nevertheless, in a more realistic scenario with repulsive interactions, they provide additional sources of negative correlations. This is seen in Fig. 9, where the vanishing \(\chi_{2}^{+-}\) lines lie at \(\mu_{B}<1\) GeV, where \(\sigma>\sigma_{\rm min}\) (compare with Fig. 2). Therefore, the overall behavior of the correlator is given by a non-trivial interplay between chiral symmetry restoration and repulsive interactions.
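This sign-change argument can be checked directly. Assuming the standard parity-doublet mass formula \(m_{\pm}=\frac{1}{2}\big(\sqrt{(g_{1}+g_{2})^{2}\sigma^{2}+4m_{0}^{2}}\mp(g_{1}-g_{2})\sigma\big)\) (our reading of Eq. (11), which is not reproduced here) and purely illustrative couplings, \(\partial m_{+}/\partial\sigma\) flips sign at the minimum of \(m_{+}\) while \(\partial m_{-}/\partial\sigma\) stays positive:

```python
import sympy as sp

sigma = sp.symbols('sigma', positive=True)
g1, g2, m0 = 9.0, 3.0, 0.75              # illustrative couplings (assumptions)

root_ = sp.sqrt((g1 + g2)**2 * sigma**2 + 4*m0**2)
m_plus = (root_ - (g1 - g2)*sigma) / 2   # assumed form of Eq. (11)
m_minus = (root_ + (g1 - g2)*sigma) / 2

dmp = sp.diff(m_plus, sigma)
sigma_min = sp.nsolve(dmp, sigma, 0.05)  # minimum of m_+
for s in (0.5*sigma_min, 2*sigma_min):   # dm_+/dsigma flips sign, dm_-/dsigma > 0
    print(float(dmp.subs(sigma, s)),
          float(sp.diff(m_minus, sigma).subs(sigma, s)))
```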
## VI Conclusions
We have investigated the net-baryon number density fluctuations and discussed the qualitative role of chiral criticality of hadronic matter at finite temperature and baryon chemical potential. In particular, we have studied for the first time the susceptibilities of the positive- and negative-parity chiral partners, as well as their correlations. To this end, we have used the parity doublet model in the mean-field approximation. We have analyzed the thermodynamic properties and the susceptibility of the net-baryon number.
We have confirmed that in the vicinity of the liquid-gas phase transition, the net baryon number density is dominated by the contribution of the positive-parity state. In contrast, this does not need to be the case at the boundary of the chiral crossover. We find that there, the fluctuations of the net-baryon number density are suppressed, compared to the positive-parity state fluctuations (i.e. net-nucleon). This qualitative difference is not only due to the presence of the negative-parity state but largely due to the non-trivial correlation between the chiral partners.
The qualitative differences in the net-nucleon and net-baryon fluctuations can also be useful in searching for possible critical points in the QCD phase diagram. In particular, our results reveal significant and nontrivial differences in the critical behavior of the net-nucleon fluctuations in the vicinity of the liquid-gas and chiral phase transitions. This strongly suggests that, in order to fully interpret the critical properties of the matter created in heavy-ion collisions, especially in the forthcoming large-scale nuclear experiments FAIR at GSI and NICA in Dubna, it is essential to consistently incorporate and understand the chiral in-medium effects carried by the baryonic parity partners and their correlations.
To reach further theoretical insights and understanding of the QCD phase diagram, it is important to determine correlations between baryonic chiral partners of opposite parity in lattice QCD calculations. Furthermore, to elaborate on the relationship between net-nucleon and net-baryon fluctuations, it is desirable to perform more refined calculations of the higher-order susceptibilities and their ratios. It is also useful to understand the role of finite width and decay properties of the negative parity states on the fluctuation observables. Work in these directions is in progress and will be reported elsewhere.
## Acknowledgements
M.M. and K.R. acknowledge fruitful discussions and helpful suggestions from Bengt Friman and Nu Xu. This work is supported partly by the Polish National Science Centre (NCN) under OPUS Grant No. 2022/45/B/ST2/01527 (K.R. and C.S.), Preludium Grant No. 2017/27/N/ST2/01973 (M.M.), and the program Excellence Initiative-Research University of the University of Wroclaw of the Ministry of Education and Science (M.M.). The work of C.S. was supported in part by the World Premier International Research Center Initiative (WPI) through MEXT, Japan. K.R. also acknowledges the support of the Polish Ministry of Science and Higher Education. V.K. acknowledges the support of the University of Wroclaw within the IDUB visiting professor program. V.K also would like to thank GSI and the Institute for Nuclear Theory at the University of Washington for their kind hospitality and stimulating research environment. V.K. has been supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract number DE-AC02-05CH11231, by the INT's U.S. Department of Energy grant No. DE-FG02-00ER41132, and by the ExtreMe Matter Institute EMMI at the GSI Helmholtzzentrum fur Schwerionenforschung, Darmstadt, Germany.
|
2308.04841 | Discovery Prospects for Electron and Neutron Electric Dipole Moments in
the General Two Higgs Doublet Model | Baryon asymmetry of the Universe offers one of the strongest hints for
physics Beyond the Standard Model (BSM). Remarkably, in the general two Higgs
Doublet Model (g2HDM) that possesses a second set of Yukawa matrices, one can
have electroweak baryogenesis (EWBG) while the electron electric dipole moment
(eEDM) is evaded by a natural flavor tuning that echoes SM. We show that eEDM
may first emerge around $10^{-30}\,e$ cm or so, followed by neutron EDM (nEDM)
down to $10^{-27}\,e$ cm. We illustrate a cancellation mechanism for nEDM
itself, which in turn can be probed when a facility capable of pushing down to
$10^{-28}\,e$ cm becomes available. | Wei-Shu Hou, Girish Kumar, Sven Teunissen | 2023-08-09T10:03:45Z | http://arxiv.org/abs/2308.04841v1 | # Discovery Prospects for Electron and Neutron Electric Dipole Moments
###### Abstract
Baryon asymmetry of the Universe offers one of the strongest hints for physics Beyond the Standard Model (BSM). Remarkably, in the _general_ two Higgs Doublet Model (\(g\)2HDM) that possesses a second set of Yukawa matrices, one can have electroweak baryogenesis (EWBG) while the electron electric dipole moment (eEDM) is evaded by a _natural_ flavor tuning that echoes SM. We show that eEDM may first emerge around \(10^{-30}\,e\,\mathrm{cm}\) or so, followed by neutron EDM (nEDM) down to \(10^{-27}\,e\,\mathrm{cm}\). We illustrate a cancellation mechanism for nEDM itself, which in turn can be probed when a facility capable of pushing down to \(10^{-28}\,e\,\mathrm{cm}\) becomes available.
_Introduction.--_ With no BSM physics emerging at the Large Hadron Collider (LHC), particle physics is in a state of exasperation. It is not clear whether one can address lofty issues such as the Baryon Asymmetry of the Universe, arguably one of the strongest hints for BSM physics that calls for the existence of _large_\(CP\) violating (CPV) phase(s) beyond the Kobayashi-Maskawa phase [1] of SM. The current frontier is the experimental race to measure electron EDM, where the bound held by the ACME experiment [2] has recently been surpassed at JILA [3], giving \(d_{e}<0.41\times 10^{-29}\,e\,\mathrm{cm}\) at 90% C.L. This is several orders of magnitude stronger than the current nEDM bound of \(d_{n}<1.8\times 10^{-26}\,e\,\mathrm{cm}\) by the nEDM experiment at PSI [4]. However, by using ultra cold neutrons (UCN), nEDM measurement is poised to improve by two orders of magnitude within two decades [5], with many experiments joining the fray.
In fact, the EDM experiments, much smaller than the behemoth LHC and its associated experiments, pose a _general_ challenge: since BAU demands extremely large BSM CPV, can one survive the EDM bounds, especially eEDM? We explore this theme and promote the _general_ two Higgs doublet model (\(g\)2HDM), where dropping the usual \(Z_{2}\) symmetry one can have enough CPV for BAU, but the observed _flavor_ (fermion mass and mixing) _hierarchies_ -- a mystery in itself -- allows for an exquisite _natural flavor cancellation_ mechanism to work for eEDM. We project that eEDM and nEDM could well emerge in the next decade or two, and extend the parameter range beyond previous considerations.
With _one_ Higgs doublet observed, the two Higgs doublet model [6] should be a no-brainer. A \(Z_{2}\) symmetry is usually imposed to enforce the natural flavor conservation (NFC) condition posited by Glashow and Weinberg [7] to forbid extra Yukawa matrices of charged fermions. But as first illustrated by Cheng and Sher [8], the flavor hierarchies may help alleviate Glashow's worries about flavor changing neutral couplings (FCNCs). It was pointed out [9], even before the top discovery, that the process to watch, then, is \(t\to ch\). The bound at the LHC, however, has reached the stringent \(\mathcal{B}(t\,\to\,ch)<0.00073\) [10]. But as stressed in 2013 [11] after the observation of \(h(125)\), as the \(\rho_{tc}\) coupling is associated more with the exotic \(H\) and \(A\) bosons, the \(tch\) coupling should be \(\rho_{tc}c_{\gamma}\), where \(c_{\gamma}\equiv\cos\gamma\) is the \(h\)-\(H\) mixing angle between the two \(CP\)-even scalars. Who would have guessed that _Nature_ would throw in, circa 2015, the _alignment_ (small \(c_{\gamma}\)) phenomenon from the purely Higgs sector, to protect \(t\to ch\) decay.
Having introduced the \(\rho_{tc}\) element of the up-type extra Yukawa matrix, it was subsequently shown [12] that \(\lambda_{t}\,\mathrm{Im}\,\rho_{tt}\) can robustly drive EWBG [13], with top Yukawa \(\lambda_{t}\cong 1\) recently measured [1], and with first order phase transition arising from \(O(1)\)[14] Higgs quartic couplings, where there are a total of 7 in absence of \(Z_{2}\). It was further inferred with emergent alignment that the exotic scalars are likely sub-TeV [15] in mass and populate 300-600 GeV, opening up a search program at the LHC [16; 17; 18; 19], where Ref. [19] is from ATLAS.
The large \(\mathrm{Im}\,\rho_{tt}\) at \(O(\lambda_{t})\sim 1\) that drives EWBG brings up our theme of how to survive eEDM. A typical two-loop Barr-Zee diagram [20] for eEDM is given in Fig. 1. To cancel the leading effect due to \(\rho_{tt}\) _and_ \(\rho_{ee}\), specifically the \(\phi\gamma\gamma^{*}\) insertion, one finds [21]
\[|\rho_{ee}/\rho_{tt}|=r|\lambda_{e}/\lambda_{t}|,\ \ \ \ \arg(\rho_{ee}\rho_{tt})=0, \tag{1}\]
with \(r\simeq 0.7\), where the first relation follows from a phase-lock between \(\rho_{ee}\) and \(\rho_{tt}\) for \(\phi=A\). Eq. (1) is remarkable in that the \(\rho\) matrices seem to "know" the quark mass and mixing hierarchies in SM.
Figure 1: A two-loop Barr-Zee diagram for electron EDM with extra Yukawa coupling \(\rho_{ee}\) on electron line, and top (hence \(\rho_{tt}\)) and \(W\) run in the gray blob for neutral scalar \(\phi=h,H,A\). Neutron EDM has many more contributions, including \(u\)- and \(d\)-quark chromo-moments and the Weinberg operator.
The purpose of this Letter is to show that the combined eEDM and nEDM effort provides the cutting-edge probe of \(\rho_{tt}\)-driven EWBG in \(g\)2HDM: as the experimental competition heats up, we may first observe eEDM in the \(10^{-30}\)-\(10^{-31}\,e\,\)cm range, followed by confirmation by n2EDM at PSI for \(d_{n}\sim 10^{-26}\)-\(10^{-27}\,e\,\)cm in about a decade. But as we will illustrate a general cancellation mechanism for nEDM itself, a more advanced nEDM experiment may confirm down to \(10^{-28}\,e\,\)cm in two decades. To unravel the underlying dynamics, the "decadal mission" [24] with direct exotic scalar search at the LHC, flavor physics explorations with LHCb and Belle II, plus \(\mu\) and \(\tau\) studies, would be needed.
_g2HDM and EDMs.--_ For simplicity, we assume a \(CP\)-conserving Higgs potential [15; 23] of \(g\)2HDM, removing it as a CPV source without discussing it any further here, so CPV is relegated to the extra Yukawa couplings. As already stated, \(O(1)\) Higgs quartics supply [12; 14] the prerequisite first-order EW phase transition for BAU, which is a bonus in \(g\)2HDM.
To clarify the flavor and EWBG discussion in the Introduction, without any \(Z_{2}\) symmetry, there are extra Yukawa matrices \(\rho^{f}\) for charged fermions \(f=u\), \(d\), \(\ell\)[18; 23], which are complex and nondiagonal,
\[\mathcal{L} = -\frac{1}{\sqrt{2}}\sum_{f=u,d,\ell}\bar{f}_{i}\Big{[}\big{(}- \lambda_{i}^{f}\delta_{ij}s_{\gamma}+\rho_{ij}^{f}c_{\gamma}\big{)}h \tag{2}\] \[\quad+\big{(}\lambda_{i}^{f}\delta_{ij}c_{\gamma}+\rho_{ij}^{f}s _{\gamma}\big{)}H-i\,\text{sgn}(Q_{f})\rho_{ij}^{f}A\Big{]}R\,f_{j}\] \[-\bar{u}_{i}\left[(V\rho^{d})_{ij}R-(\rho^{u\dagger}V)_{ij}L \right]d_{j}H^{+}\] \[-\bar{\nu}_{i}\rho_{ij}^{L}R\,\ell_{j}H^{+}+\text{h.c.},\]
with generation indices \(i\), \(j\) summed over, \(L,R=1\mp\gamma_{5}\), and \(s_{\gamma}\equiv\sin\gamma\). The \(A\), \(H^{+}\) couplings are \(c_{\gamma}\)-independent, while in the alignment limit (\(c_{\gamma}\to 0\), \(s_{\gamma}\to-1\)), \(h\) couples diagonally and \(H\) couples via extra Yukawa couplings \(-\rho_{ij}^{f}\), which can drive BAU. Thus, besides mass-mixing hierarchy protection [9] of FCNCs, alignment provides [15] further safeguard, such as for \(t\to ch\), without the need of NFC. Furthermore, the \(\mu_{12}^{2}\Phi^{\dagger}\Phi^{\prime}\) term in the Higgs potential is eliminated after symmetry breaking by minimization, leaving a unique \(h\)-\(H\) mixing parameter, \(\eta_{6}\), which can be \(O(1)\)[15] for small \(c_{\gamma}\), with \(H\), \(A\), \(H^{+}\) likely in the 300-600 GeV mass range.
Considering how effectively \(g\)2HDM _evades_ stringent flavor constraints, and to address the question "What makes \(g\)2HDM so well hidden so far?", we guessed a "rule of thumb" [22] for flavor control:
\[\rho_{ii}\lesssim\mathcal{O}(\lambda_{i}),\ \ \rho_{1i}\lesssim\mathcal{O}( \lambda_{1}),\ \ \rho_{3j}\lesssim\mathcal{O}(\lambda_{3}), \tag{3}\]
with \(j\neq 1\). This allows \(\rho_{tt}=\mathcal{O}(1)\) but \(\rho_{bb}\simeq 0.02\). However, flavor constraints suggest that the \(\rho_{ij}^{d}\) elements are an order of magnitude weaker still.
With complications of transport equations for EWBG [12], the simplified case with \(H\), \(A\), \(H^{+}\) degenerate at 500 GeV was studied. The ACME experiment [2] taught us the lesson to keep the weakest \(\rho_{ee}\) coupling in the Barr-Zee diagrams of Fig. 1, where the exquisite cancellation mechanism of Eq. (1) was uncovered [21]. The prowess of ACME, however, led one to illustrate with the timid \(|\rho_{tt}|\simeq 0.1\), which we seek to extend here.
_Results: Interplay of eEDM and nEDM.--_ In our numerical illustration, we shall keep the degeneracy at 500 GeV, but explore a broader range of
\[\text{Re}\,\rho_{tt}=\text{Im}\,\rho_{tt}=-0.1,-0.2,-0.3, \tag{4}\]
and follow the numeric ansatz [21] for \(f=u,c;d,s,b\),
\[\text{Re}\,\rho_{ff}=-r\frac{\lambda_{f}}{\lambda_{t}}\text{Re}\,\rho_{tt}, \quad\text{Im}\,\rho_{ff}=+r\frac{\lambda_{f}}{\lambda_{t}}\text{Im}\,\rho_{ tt}, \tag{5}\]
where \(r\simeq 0.71\) is a combination of loop functions that is insensitive [21] to exotic Higgs spectrum.
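A small helper makes the scan inputs of Eqs. (4)-(5) explicit. The fermion masses and the vev below are assumed PDG-like inputs of our own choosing, not values quoted in this Letter:

```python
import numpy as np

v = 246.0                                  # Higgs vev in GeV (assumption)
masses = {'u': 2.2e-3, 'c': 1.27, 'd': 4.7e-3, 's': 0.093, 'b': 4.18, 't': 172.7}
lam = {f: np.sqrt(2)*m/v for f, m in masses.items()}   # SM Yukawas

def rho_ansatz(rho_tt, r=0.71):
    """Eq. (5): Re rho_ff = -r*(lam_f/lam_t)*Re rho_tt,
                Im rho_ff = +r*(lam_f/lam_t)*Im rho_tt."""
    out = {'t': rho_tt}
    for f in ('u', 'c', 'd', 's', 'b'):
        scale = r * lam[f] / lam['t']
        out[f] = complex(-scale * rho_tt.real, +scale * rho_tt.imag)
    return out

for x in (0.1, 0.2, 0.3):                  # Eq. (4): Re rho_tt = Im rho_tt = -x
    rho = rho_ansatz(complex(-x, -x))
    print(x, rho['u'], rho['b'])
```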
In Fig. 2 we illustrate the _natural_ "flavor tuning" [21] of Eq. (1), for \(\rho_{tt}\) values in Eq. (4) and numeric ansatz of Eq. (5), where both bounds of ACME [2] and JILA [3] are shown. We take some liberty in the visual effect of the light purple band, with left side taken from the red-dashed \(d_{e}^{\phi\gamma}\) curve [21], and right side from the red-solid \(d_{e}\) curve. This is in part because, though the cancellation point (black-solid curve sitting in the middle, with
final shift from \(C_{S}\) effect [21]) is insensitive to the spectrum [21], there should be some spread in exotic scalar masses, which we refrain from exploring.
From left to right in Fig. 2, as \(\rho_{tt}\) strength rises, the "funnel" is raised, but at \(10^{-30}\,e\,\mathrm{cm}\), the opening of the funnel is still decent, suggesting a still robust discovery likelihood, although by \(10^{-31}\,e\,\mathrm{cm}\), it approaches a pinpoint and may no longer seem plausible. In any case, these plots are for numeric illustration.
Turning to nEDM, besides effects of \(\rho_{uu}\) and \(\rho_{dd}\) through Barr-Zee type diagrams, there are also chromo-moments and the Weinberg operator, with progressively larger theory uncertainties. While the classic review of Pospelov and Ritz [25] continues to be widely cited, it is a bit dated. We use the more recent formula [26],
\[d_{n}=-0.20\,d_{u}+0.78\,d_{d} +e\,(0.29\,\tilde{d}_{u}+0.59\tilde{d}_{d})\] \[+e\,23\,\mathrm{MeV}\,C_{W}, \tag{6}\]
where we evaluate chromo-moments \(\tilde{d}_{u,d}\) and the Weinberg operator \(C_{W}\) term [27] by following Refs. [28] and [29], respectively. A recent discussion on uncertainties can be found in Ref. [27].
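Eq. (6) itself is a simple linear combination. A hedged sketch follows, with all terms assumed pre-converted to units of \(e\,\)cm and purely hypothetical input magnitudes (not the paper's scan output):

```python
def neutron_edm(d_u, d_d, dtil_u, dtil_d, w_term):
    """Eq. (6): d_n = -0.20 d_u + 0.78 d_d + e(0.29 dtil_u + 0.59 dtil_d) + e*(23 MeV)*C_W.
    Inputs are assumed pre-converted so each term is in e*cm; w_term stands
    for the full e*(23 MeV)*C_W Weinberg-operator contribution."""
    return -0.20*d_u + 0.78*d_d + 0.29*dtil_u + 0.59*dtil_d + w_term

# purely illustrative magnitudes (hypothetical):
print(neutron_edm(1e-27, -2e-27, 5e-28, 3e-28, 1e-28))
```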
We give in Fig. 3 the scan plot for \(r\in[0.6,0.8]\) for same range of \(\rho_{tt}\) and exotic Higgs masses as in Fig. 2, showing both the JILA bound [3] on eEDM, and PSI bound [4] on nEDM. One survives the PSI bound even for \(|\rho_{tt}|\simeq 0.3\sqrt{2}\), while \(r\simeq 0.7\) nicely illustrates the _natural_ flavor cancellation of eEDM. The follow-up experiment to nEDM at PSI, i.e. n2EDM [30], plans to reach down to \(10^{-27}\,e\,\mathrm{cm}\) sensitivity within a decade, and should be able to cover the range illustrated in Fig. 3.
But we should admit that Eq. (5) is nothing but an ansatz [21] for the sake of numeric illustration. The fact is, we have little knowledge of the actual strength of extra Yukawa couplings such as \(\rho_{uu}\). Our "rule of thumb" of Eq. (3) is our guess of the "flavor protection" in \(g\)2HDM, which echoes the remarkable cancellation mechanism of Eq. (1) for eEDM. Taking Eq. (3) literally, it states that \(|\rho_{uu}|=O(\lambda_{u})\), with phase unknown. Thus, taking the usual sense of "an order of magnitude", we vary
\[|\rho_{uu}|\in[0.3\lambda_{u},3\lambda_{u}],\quad\arg\rho_{uu}\in[-\pi,\pi], \tag{7}\]
while keeping other \(\rho_{ff}\)s according to Eq. (5). This explores the impact of \(\rho_{uu}\) strength and phase on nEDM. Since \(\rho_{tt}\) is in the 3rd quadrant in Eq. (4), in the convention of Eq. (7), \(\arg\rho_{tt}=-3\pi/4\).
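The scan range of Eq. (7) can be set up as follows; random sampling is our own choice here, since the Letter does not specify the sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_u = np.sqrt(2) * 2.2e-3 / 246.0        # assumed up-quark Yukawa

def sample_rho_uu(n):
    """Random points in the range of Eq. (7):
    |rho_uu| in [0.3*lam_u, 3*lam_u], arg(rho_uu) in [-pi, pi]."""
    mag = rng.uniform(0.3*lam_u, 3*lam_u, n)
    phase = rng.uniform(-np.pi, np.pi, n)
    return mag * np.exp(1j*phase)

print(sample_rho_uu(5))
```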
A scan plot of the variation of Eq. (7) is given in Fig. 4 for illustration. For negative \(\arg\rho_{uu}\), nEDM is closer to the PSI bound (red and yellow scan points), and for the largest \(|\rho_{tt}|=0.3\sqrt{2}\) (right plot), the bound cuts a little bit into the scan space. But interestingly, for positive \(\arg\rho_{uu}\), i.e. _opposite_ the sign of \(\arg\rho_{tt}\), the blue scan points extend below \(10^{-27}\,e\,\mathrm{cm}\), which can evade n2EDM of PSI. Therefore, the scan in Fig. 4 illustrates a _general_ cancellation mechanism that may well be operative in _Nature_ for neutron EDM. It can be probed, however, at more advanced nEDM facilities, such as the nEDM experiment under construction at the Spallation Neutron Source [31] at Oak Ridge National Lab (ORNL), which utilizes UCN and can probe down to \(10^{-28}\,e\,\mathrm{cm}\). Although this may go beyond the next decade, the possibility appears to be covered fully, as the blue scan points tend to run out by \(10^{-28}\,e\,\mathrm{cm}\).
Thus, if \(g\)2HDM is the source of EWBG, the combined effort of eEDM and nEDM experiments seem poised for major discoveries in the coming decade or two.
_Discussion and Summary.--_ This work was actually stimulated by the ability at the LHC to probe top CPV, i.e. top chromo-moments [32]. As this is a new beginning, top chromo-moment bounds are still rather weak. We realized instead that prospects for electron and neutron EDMs are rather good in \(g\)2HDM.
We have kept \(H\), \(A\) and \(H^{+}\) degenerate at 500 GeV and have not revisited EWBG, but we have checked that features at 300 GeV are quite similar, where baryogenesis should be more efficient. The actual parameter space should therefore be considerably larger. For example, breaking the degeneracy, one would need to face precision electroweak constraints [1], where one either keeps \(m_{A}=m_{H^{+}}\) (custodial symmetry), or takes the twisted-custodial [33] case of \(m_{H}=m_{H^{+}}\).
We have emphasized as our theme that it is nontrivial that \(g\)2HDM can provide electroweak baryogenesis while surviving the eEDM constraint, a remarkable feat rooted in the _flavor_ structure as revealed by the SM sector. With exotic \(H\), \(A\) and \(H^{+}\) bosons sub-TeV in mass, search programs at the LHC [19] have started, while there are also some good flavor probes [22]. Any BSM theory of EWBG would need to face the litmus test of surviving the eEDM bound [3].
We may sound optimistic in the discovery prospect for eEDM at \(10^{-30}\,e\,\mathrm{cm}\). Note that both the JILA
Figure 3: Combined scan result for \(r\in[0.6,0.8]\) for electron and neutron EDM for same range of \(\rho_{tt}\) and exotic Higgs masses as in Fig. 2, with \(\rho_{ff}\) fixed according to Eq. (5).
and ACME bounds are still consistent even with \(O(10^{-29})\,e\,\)cm. Considering possible fluctuations in data, discovery not far below the existing bound is quite plausible, especially if _Nature_ has already marked \(g\)2HDM up for baryogenesis. A known example is the ARGUS discovery [34] of \(B^{0}\)-\(\bar{B}^{0}\) mixing, which practically sits right on top of the previous CLEO [35] bound.
In summary, \(g\)2HDM without \(Z_{2}\) symmetry achieves baryogenesis but can evade the eEDM bound by _natural_ flavor tuning. Electron EDM may herald a new era, echoed not long after by neutron EDM; while this would not prove \(g\)2HDM is behind EWBG, it would likely become a frontrunner. With exotic Higgs search at the LHC, ongoing efforts at Belle II and other flavor fronts, and with excellent prospects for electron and neutron EDM measurements, the future looks bright for unveiling what may actually lie behind baryogenesis.
**Acknowledgments** We thank the support of grants NSTC 112-2639-M-002-006-ASP, and NTU 112L104019 and 112L893601.
|
2307.06623 | Bootstrap percolation in strong products of graphs | Given a graph $G$ and assuming that some vertices of $G$ are infected, the
$r$-neighbor bootstrap percolation rule makes an uninfected vertex $v$ infected
if $v$ has at least $r$ infected neighbors. The $r$-percolation number,
$m(G,r)$, of $G$ is the minimum cardinality of a set of initially infected
vertices in $G$ such that after continuously performing the $r$-neighbor
bootstrap percolation rule each vertex of $G$ eventually becomes infected. In
this paper, we consider percolation numbers of strong products of graphs. If
$G$ is the strong product $G_1\boxtimes \cdots \boxtimes G_k$ of $k$ connected
graphs, we prove that $m(G,r)=r$ as soon as $r\le 2^{k-1}$ and $|V(G)|\ge r$.
As a dichotomy, we present a family of strong products of $k$ connected graphs
with the $(2^{k-1}+1)$-percolation number arbitrarily large. We refine these
results for strong products of graphs in which at least two factors have at
least three vertices. In addition, when all factors $G_i$ have at least three
vertices we prove that $m(G_1 \boxtimes \dots \boxtimes G_k,r)\leq 3^{k-1} -k$
for all $r\leq 2^k-1$, and we again get a dichotomy, since there exist families
of strong products of $k$ graphs such that their $2^{k}$-percolation numbers
are arbitrarily large. While $m(G\boxtimes H,3)=3$ if both $G$ and $H$ have at
least three vertices, we also characterize the strong prisms $G\boxtimes K_2$
for which this equality holds. Some of the results naturally extend to infinite
graphs, and we briefly consider percolation numbers of strong products of
two-way infinite paths. | BoΕ‘tjan BreΕ‘ar, Jaka HedΕΎet | 2023-07-13T08:40:46Z | http://arxiv.org/abs/2307.06623v2 | # Bootstrap percolation in strong products of graphs
###### Abstract
Given a graph \(G\) and assuming that some vertices of \(G\) are infected, the \(r\)-neighbor bootstrap percolation rule makes an uninfected vertex \(v\) infected if \(v\) has at least \(r\) infected neighbors. The \(r\)-percolation number, \(m(G,r)\), of \(G\) is the minimum cardinality of a set of initially infected vertices in \(G\) such that after continuously performing the \(r\)-neighbor bootstrap percolation rule each vertex of \(G\) eventually becomes infected. In this paper, we consider percolation numbers of strong products of graphs. If \(G\) is the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\) of \(k\) connected graphs, we prove that \(m(G,r)=r\) as soon as \(r\leq 2^{k-1}\) and \(|V(G)|\geq r\). As a dichotomy, we present a family of strong products of \(k\) connected graphs with the \((2^{k-1}+1)\)-percolation number arbitrarily large. We refine these results for strong products of graphs in which at least two factors have at least three vertices. In addition, when all factors \(G_{i}\) have at least three vertices we prove that \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)\leq 3^{k-1}-k\) for all \(r\leq 2^{k}-1\), and we again get a dichotomy, since there exist families of strong products of \(k\) graphs such that their \(2^{k}\)-percolation numbers are arbitrarily large. While \(m(G\boxtimes H,3)=3\) if both \(G\) and \(H\) have at least three vertices, we also characterize the strong prisms \(G\boxtimes K_{2}\) for which this equality holds. Some of the results naturally extend to infinite graphs, and we briefly consider percolation numbers of strong products of two-way infinite paths.
\({}^{a}\) Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia
\({}^{b}\) Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia
**Keywords:** bootstrap percolation, strong product of graphs, infinite path.
**AMS Subj. Class.:** 05C35, 05C76, 60K35
## 1 Introduction
Given a graph \(G\) and an integer \(r\geq 2\), the \(r\)_-neighbor bootstrap percolation_ is an update rule for the states of vertices in \(G\). At any given time the state of a vertex is either _infected_ or _uninfected_. From an initial set of infected vertices further updates occur simultaneously and in discrete intervals: any uninfected vertex with at least \(r\) infected neighbors becomes infected, while infected vertices never change their state. Given a graph \(G\), the smallest cardinality of a set of initially infected vertices, which results in all vertices of \(G\) being infected after the \(r\)-neighbor bootstrap percolation process is finished, is the \(r\)_-percolation number_, \(m(G,r)\), of \(G\).
The origins of bootstrap percolation come from physics of ferromagnetism and go back to 1979 [10], while in 1998 the concept was considered in the context of spreading an infection in square grid networks [3]. Balogh and Bollobas considered the random bootstrap percolation in hypercubes, where the main challenge is to find the tresholds for probabilities of vertices being initially set as infected in order to get all vertices of the graph infected with probability \(1\) or \(0\), respectively; see also a related study considering square grids [2]. Recently, Przykucki and Shelton considered \(m(G,r)\) where \(G\) is the \(d\)-dimensional square grid [19], while Bidgoli et al. [5] considered bootstrap percolation of specific Hamming graphs, namely the Cartesian powers of complete graphs. The common feature of the above mentioned investigations of bootstrap percolation is that they all involve various types of Cartesian products of graphs. Beside the Cartesian product operation, the \(r\)-neighbor bootstrap percolation was considered with respect to the minimum degree of a graph [15], and from the complexity point of view concerning the time (i.e., number of percolation steps) that takes to infect the entire graph [17]. In this context, it is natural to consider also other graph products; in particular the strong product of graphs, which is the densest among all standard graph products, is a natural candidate for studying bootstrap percolation.
A special attention was given to the case \(r=2\). For instance, Dairyko et al. [13] presented Ore-type and Chvatal-type conditions related to degrees of a graph \(G\) that enforce \(m(G,2)=2\). Morris in [18] provided some bounds on the minimal bootstrap percolation sets in rectangular grids, where a set is minimal if it yields an infection of the whole graph while none of its proper subsets do it. As it turns out, the \(2\)-neighbor bootstrap percolation coincides with the concept from graphs convexity; notably, for the so-called \(P_{3}\)-convexity, as introduced by Centeno et al. [8], the \(P_{3}\)-hull number of a graph \(G\) is exactly \(m(G,2)\). The \(P_{3}\)-convexity and the corresponding hull number were studied in comparison with other convexity parameters [9, 12], and were also considered in specific graph classes such as Kneser graphs [14] and Hamming graphs [7]. Coelho et al. [12] performed a systematic study of the \(P_{3}\)-hull number in graph products. While the Cartesian product seems to be the most challenging one for bootstrap percolation, for the strong product \(G\boxtimes H\) of any non-trivial connected graphs \(G\) and \(H\) they proved that \(m(G\boxtimes H,2)=2\). In this paper, we widely extend the study from [12] by investigating the \(r\)-percolation number in strong products of graphs.
### Formal definitions and notation
All graphs considered in this paper are simple and connected. Given a graph \(G\), a vertex \(x\in V(G)\) is a _cut-vertex_ if \(G-x\) is disconnected. The _neighborhood_, \(N_{G}(v)\), of a vertex \(v\in V(G)\) is the set of all vertices in \(G\) adjacent to \(v\), and the _closed neighborhood_ of \(v\) is defined as \(N_{G}[v]=N_{G}(v)\cup\{v\}\). Vertices \(u\) and \(v\) in a graph \(G\) are _(closed) twins_ if \(N_{G}[u]=N_{G}[v]\). That is, \(u\) and \(v\) are adjacent and have the same neighborhoods.
We follow with a formal definition of the \(r\)_-neighbor bootstrap percolation_. Let \(A_{0}\subseteq V(G)\) be an initial set of infected vertices, and, for every \(t\geq 1\), let
\[A_{t}=A_{t-1}\cup\{v\in V(G):\,|N(v)\cap A_{t-1}|\geq r\}.\]
The set \(A_{t}\setminus A_{t-1}\) is referred to as vertices infected at time \(t\). A vertex \(v\) is infected before \(u\) if \(v\in A_{t}\), for some \(t\geq 0\), while \(u\notin A_{t}\). We say that \(A_{0}\)_percolates_ (or is a _percolating set_) if \(\bigcup\limits_{t\geq 0}A_{t}=V(G)\).
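The update rule translates directly into code; a minimal sketch (adjacency dicts, not tied to any graph library; names are ours):

```python
def percolates(adj, infected, r):
    """r-neighbor bootstrap percolation on a graph given as {v: set_of_neighbors}.
    Returns True iff the initially infected set eventually infects every vertex."""
    infected = set(infected)
    while True:
        newly = {v for v in adj
                 if v not in infected and len(adj[v] & infected) >= r}
        if not newly:
            return len(infected) == len(adj)
        infected |= newly

# example: on the 4-cycle with r = 2, two antipodal vertices percolate,
# while two adjacent ones do not
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(percolates(C4, {0, 2}, 2), percolates(C4, {0, 1}, 2))   # True False
```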
A natural extremal problem is to find a smallest percolating set \(S=A_{0}\). For any graph \(G\) and \(r\geq 2\), let
\[m(G,r)=\min\Big{\{}|A_{0}|:\,A_{0}\subseteq V(G),\,\bigcup\limits_{t=0}^{ \infty}A_{t}=V(G)\Big{\}}.\]
Any percolating set \(S\) satisfying \(m(G,r)=|S|\) is thus a _minimum percolating set_, and \(m(G,r)\) is the _\(r\)-percolation number_ of \(G\). Clearly, \(m(G,r)\geq r\) for all \(r\leq|V(G)|\).
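A brute-force computation of \(m(G,r)\), reusing `percolates` from the sketch above, is then straightforward (exponential, so only for very small graphs):

```python
from itertools import combinations

def percolation_number(adj, r):
    """Smallest percolating set for the r-neighbor rule (brute force)."""
    verts = list(adj)
    for k in range(min(r, len(verts)), len(verts) + 1):
        for S in combinations(verts, k):
            if percolates(adj, S, r):
                return k

print(percolation_number(C4, 2))   # -> 2
```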
The _strong product_ of graphs \(G\) and \(H\) is the graph \(G\boxtimes H\), whose vertex set is \(V(G)\times V(H)\), and two vertices \((g,h)\) and \((g^{\prime},h^{\prime})\) are adjacent precisely if one of the following is true:
* \(g=g^{\prime}\) and \(hh^{\prime}\in E(H)\), or
* \(h=h^{\prime}\) and \(gg^{\prime}\in E(G)\), or
* \(gg^{\prime}\in E(G)\) and \(hh^{\prime}\in E(H)\).
By \(G^{h}=\{(g,h)\) : \(g\in V(G)\}\) we denote the subset of \(V(G\boxtimes H)\) called the _\(G\)-layer_ on vertex \(h\), and, by abuse of language, \(G^{h}\) also denotes the subgraph of \(G\boxtimes H\) induced by the vertices of the \(G\)-layer on \(h\). Clearly, \(G^{h}\) is isomorphic to \(G\) for every \(h\in V(H)\). Similarly, for \(g\in V(G)\), the _\(H\)-layer_ on vertex \(g\) is \({}^{g}\!H=\{(g,h)\) : \(h\in V(H)\}\).
Note that strong product operation is associative and commutative, and \(G_{1}\boxtimes\cdots\boxtimes G_{k}\) has \(V(G_{1}\boxtimes\cdots\boxtimes G_{k})=V(G_{1})\times\cdots\times V(G_{k})\), and \((x_{1},\ldots,x_{k})(y_{1},\ldots,y_{k})\in E(G_{1}\boxtimes\cdots\boxtimes G _{k})\) if and only if \(x_{i}=y_{i}\) or \(x_{i}y_{i}\in E(G_{i})\) for all \(i\in[k]\). If for a factor \(G_{i}\) in the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\) we have \(|V(G_{i})|=2\), we say that \(G_{i}\) is an _edge-factor_ of the strong product. Graph \(K_{1}\) is said to be _trivial_, and if \(G_{i}\) is a factor of a strong product with \(|V(G_{i})|=1\), \(G_{i}\) is a _trivial factor_. In this paper, we will only consider strong products in which all factors are non-trivial.
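The adjacency rule for \(k\) factors can be implemented verbatim; a small sketch with adjacency dicts (our own illustration):

```python
from itertools import product

def strong_product(*factors):
    """Strong product of adjacency-dict graphs: distinct tuples x, y are
    adjacent iff for every i, x_i = y_i or x_i y_i is an edge of G_i."""
    verts = list(product(*factors))
    adj = {x: set() for x in verts}
    for x in verts:
        for y in verts:
            if x != y and all(a == b or b in g[a]
                              for a, b, g in zip(x, y, factors)):
                adj[x].add(y)
    return adj

P3 = {0: {1}, 1: {0, 2}, 2: {1}}
K2 = {0: {1}, 1: {0}}
G = strong_product(P3, K2)
print(len(G), max(len(nbrs) for nbrs in G.values()))   # 6 vertices, max degree 5
```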
### Main results and organization of the paper
In this paper, we consider percolation numbers of strong products of non-trivial graphs. More precisely, we study \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)\) depending on the number of factors \(k\) and the threshold \(r\). In Section 2, we study general upper bounds on the percolation numbers of strong products of graphs, which in many cases lead to exact values. In particular, it is often the case that the best possible value, \(m(G,r)=r\), is obtained when \(G\) is the strong product of \(k\) factors and the threshold \(r\) is bounded by a function of \(k\). The results depend also on the number of factors in the strong product that have at least three vertices, and are illustrated in the following table:
\begin{tabular}{|l|c|c|} \hline \(r\) & \(m(G,r)\leq\) & \# non-edge factors \\ \hline \(\leq\mathbf{2^{k-1}}\) & \(r\) & \(1\) \\ \hline \(\leq\mathbf{3\cdot 2^{k-2}}\) & \(r\) & \(2\) \\ \hline \(\leq 7\cdot 2^{k-3}\) & \(7\cdot 2^{k-3}\) & \(3\) \\ \hline \(\leq 2^{k}-1\) & \(3^{k-1}-k\) & \(k\) \\ \hline \end{tabular}

The table gives the bounds on \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)\), depending on the number of non-edge factors. Bounds in the first two lines are bold, to indicate that in all these cases \(m(G,r)=r\). In particular, the first line is given by Corollary 2.2 and states that \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)=r\) whenever \(r\leq 2^{k-1}\) and \(k\geq 2\). As a dichotomy, we present an example showing that \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)\) is not only greater than \(r\), but can even be arbitrarily large as soon as \(r=2^{k-1}+1\). Next, we prove in Theorem 2.4 that \(m(G,r)=r\) if \(r\leq 3\cdot 2^{k-2}\) and \(G\) is the strong product of \(k\) factors at least two of which are not \(K_{2}\), and a similar dichotomy is proved also in this case. If there are at least three non-edge factors, we can further increase the threshold as shown in the third line (see Theorem 2.6), while the last line presents an upper bound when all factors have at least three vertices (see Theorem 2.7).
To see that the bound \(r\leq 2^{k}-1\) in the last line of the above table is best possible, consider \(G=G_{1}\boxtimes\cdots\boxtimes G_{k}\), where \(k\geq 2\) and \(G_{i}\) are connected graphs such that \(\delta(G_{i})=1\) for every \(i\in[k]\). From the definition of the strong product it follows that \(\delta(G)=2^{k}-1\). Therefore, whenever \(r\geq 2^{k}\), every vertex of degree \(\delta(G)\) must be included in the set of initially infected vertices. For instance, if \(G_{i}\) is isomorphic to the star \(K_{1,n}\) for every \(i\in[k]\), then \(G\) has \(n^{k}\) vertices of degree \(2^{k}-1\) and therefore \(m(G,r)\geq n^{k}\) for every \(r\geq 2^{k}\). Noting that \(n\in\mathbb{N}\) can be arbitrarily large, we derive the following
**Observation 1.1**: _If \(r\geq 2^{k}\), then for every integer \(M\) there exist graphs \(G_{1},\ldots,G_{k}\) such that \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)>M\)._
In Section 3, we consider percolation numbers of strong products of graphs with only two factors. When \(r=3\) and both \(G\) and \(H\) have order at least \(3\), Theorem 2.4 implies that \(m(G\boxtimes H,3)=3\). Thus we consider the only remaining case for \(m(G\boxtimes H,3)\), which is when one of the factors is \(K_{2}\), and we prove a characterization of the graphs \(G\) such that \(m(G\boxtimes K_{2},3)=3\). Furthermore, if \(G\) and \(H\) have the property that \(m(G,2)=2\) and \(m(H,2)=2\), then \(m(G\boxtimes H,4)\leq 5\), and if both \(G\) and \(H\) are not \(K_{2}\), then \(m(G\boxtimes H,5)\) can also be bounded from above (see Theorem 3.6).
In Section 4, we consider a natural extension of percolation to infinite graphs. Note that the original definition works also in the case \(G\) is an infinite graph, where the only (silent) modification is that the initial set of infected vertices may need to be infinite in order to percolate, in which case we set the \(r\)-percolation number to be infinite. In this vein, Theorem 2.7 can also be applied to strong products of infinite graphs. In particular, we infer that \(m(\mathbb{Z}^{\boxtimes,n},2^{n}-1)\leq 3^{n-1}-n\), where \(\mathbb{Z}^{\boxtimes,n}\) is the strong product of \(n\) two-way infinite paths \(\mathbb{Z}\). It is natural to consider the _finiteness percolation threshold_
of a graph \(G\), which is the supremum of the set of thresholds \(r\) for which \(m(G,r)<\infty\); we denote this number by \(\mbox{fpt}(G)\). It is easy to prove that \(2^{n}-1\leq\mbox{fpt}(\mathbb{Z}^{\boxtimes,n})\leq 3^{n-1}\), and we establish that for \(n\in\{2,3\}\) the upper bound is actually the exact value. In Section 5, we pose some open problems that arise from this study.
## 2 Strong products of graphs with \(k\) factors
In this section, we consider upper bounds and exact results for the percolation number of the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\), where for the threshold \(r\) we have \(r<2^{k}\). (As mentioned in Section 1.2, there is no general upper bound for \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)\) when \(r\geq 2^{k}\).) The results are divided into several subsections depending on the number of non-edge factors (that is, the number of factors with at least three vertices). The case when all factors of the strong product of graphs are \(K_{2}\) is trivial, hence we first consider the most general case when there is at least one non-edge factor.
### At least one non-edge factor
We start by considering the \(r\)-neighbor bootstrap percolation in strong products of \(k\) graphs, where the threshold \(r\) is at most \(2^{k-1}\). Clearly, \(m(G,r)\geq r\) for any graph \(G\) with \(|V(G)|\geq r\geq 2\). As we will see in the next results, if \(r\leq 2^{k-1}\), then \(m(G,r)=r\), where \(G\) is a strong product of graphs with \(k\) factors.
**Theorem 2.1**: _If \(2\leq k\leq r\leq 2^{k-1}\) and \(G\) is the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\), where \(G_{i}\) are connected graphs so that \(|V(G)|\geq r\), then \(m(G,r)=r\)._
**Proof.** Let \(k\geq 2\) and \(r\in\{k,\ldots,2^{k-1}\}\) (note that \(k\geq 2\) implies \(k\leq 2^{k-1}\)). Consider the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\), where the factors are connected and the order of the product is at least \(r\). For all \(i\in[k]\), let \(|V(G_{i})|=n_{i}\). Since \(G_{i}\) is connected, it contains a BFS tree. Let us denote the vertices of \(G_{i}\) by \(v_{1}^{i},v_{2}^{i},\ldots,v_{n_{i}}^{i}\) such that \(v_{1}^{i}\) is the root of the BFS tree, and for each \(j\), \(2\leq j\leq n_{i}\), let \(p(v_{j}^{i})=v_{\ell}^{i}\) be the parent of \(v_{j}^{i}\), where \(\ell<j\). In particular, the parent (and a neighbor) of \(v_{2}^{i}\) is \(v_{1}^{i}\), while \(v_{3}^{i}\) has \(v_{2}^{i}\) or \(v_{1}^{i}\) as the parent (and a neighbor).
Since \(m(G,r)\geq r\) is clear, it suffices to find a set \(S\subset V(G)\) of size \(r\) that percolates. Let \(S\) be any subset of the set \(\{(v_{i_{1}}^{1},v_{i_{2}}^{2},\ldots,v_{i_{k}}^{k}):\,i_{1},\ldots,i_{k}\in\{1,2\}\}\) such that \(|S|=r\). Such a set \(S\) always exists because \(r\leq 2^{k-1}<2^{k}\).
First note that every vertex \(x=(x^{1},\ldots,x^{k})\), where \(x^{i}\in\{v_{1}^{i},v_{2}^{i}\}\) for all \(i\in[k]\), gets infected, since \(x\) is in \(S\) or is a neighbor of all vertices in \(S\). We will use induction to prove that eventually every vertex of \(G\) gets infected. Let \(t_{i}\in[n_{i}]\) for all \(i\in[k]\). We claim that every vertex \(x=(x^{1},\ldots,x^{k})\in V(G)\), where \(x^{i}\in\{v_{j}^{i}:\,j\leq t_{i}\}\) for all \(i\in[k]\), gets infected. The induction is on \(\sum_{i=1}^{k}t_{i}\), where for the base case we can take \(\sum_{i=1}^{k}t_{i}=2k\); that is, \(t_{i}=2\) for all \(i\in[k]\). Thus the base of the induction is that all vertices \(x=(x^{1},\ldots,x^{k})\) whose coordinates \(x^{i}\) are in \(\{v_{j}^{i}:\,j\leq 2\}\) are infected, which has already been proved.
In the inductive step we assume that every vertex \(x=(x^{1},\ldots,x^{k})\in V(G)\), where \(x^{i}\in\{v^{i}_{j}:\,j\leq t_{i}\}\) for all \(i\in[k]\) and some \(t_{i}\leq n_{i}\), is infected. In addition, we may assume there exists an index \(s\in[k]\) such that \(t_{s}<n_{s}\), and without loss of generality, let \(s=1\). Consider a vertex \(x=(x^{1},\ldots,x^{k})\), where \(x^{1}=v^{1}_{t_{1}+1}\) and \(x^{j}\in\{v^{j}_{1},v^{j}_{2},\ldots,v^{j}_{t_{j}}\}\) for all \(j\neq 1\). Note that \(x\) is adjacent to the vertices \((p(v^{1}_{t_{1}+1}),y^{2},\ldots,y^{k})\), where \(y^{i}\in\{x^{i},p(x^{i})\}\) for \(2\leq i\leq k\) (if \(x^{i}=v^{i}_{1}\), take \(y^{i}\in\{v^{i}_{1},v^{i}_{2}\}\) instead, which is an adjacent pair). Since these \(2^{k-1}\) vertices are all infected, \(x\) has at least \(2^{k-1}\geq r\) infected neighbors, therefore it gets infected. We have thus proved that all the vertices \(x=(v^{1}_{\ell},x^{2},\ldots,x^{k})\in V(G)\), where \(\ell\in[t_{1}+1]\) and \(x^{i}\in\{v^{i}_{j}:\,j\in[t_{i}]\}\) for all \(i>1\), get infected, which concludes the proof of the inductive step. \(\Box\)
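To make the construction concrete, the following minimal sketch is our addition (not part of the original text); it assumes the `networkx` package, and the helper names `percolate` and `perc_number` are ours. It checks Theorem 2.1 by brute force on the small product \(P_{4}\boxtimes C_{5}\) with \(k=2\) and \(r=2\).

```python
# A hedged sketch: brute-force r-neighbor bootstrap percolation, used to
# check Theorem 2.1 on a small strong product.  Assumes networkx.
import itertools
import networkx as nx

def percolate(G, seed, r):
    """Return the final infected set of the r-neighbor process started from seed."""
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

def perc_number(G, r):
    """Brute-force r-percolation number m(G, r); feasible only on tiny graphs."""
    nodes = list(G)
    for size in range(1, len(nodes) + 1):
        for S in itertools.combinations(nodes, size):
            if len(percolate(G, S, r)) == len(nodes):
                return size

# k = 2 factors and r = 2 = 2^{k-1}: Theorem 2.1 predicts m(G1 x G2, 2) = 2.
G = nx.strong_product(nx.path_graph(4), nx.cycle_graph(5))
assert perc_number(G, 2) == 2
# The percolating set from the proof: coordinates among the first two
# vertices of a BFS ordering of each factor (here 0 and 1 in both factors).
assert len(percolate(G, [(0, 0), (1, 1)], 2)) == G.number_of_nodes()
print("Theorem 2.1 confirmed on P4 x C5")
```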
When considering the graph \(G=G_{1}\boxtimes\cdots\boxtimes G_{k}\) in the \(r\)-neighbor bootstrap percolation when \(r<k\), we can write \(G=G_{1}\boxtimes\cdots\boxtimes G_{r-1}\boxtimes(G_{r}\boxtimes\cdots \boxtimes G_{k})\), and consider \(G_{r}\boxtimes\cdots\boxtimes G_{k}\) as a sole factor. Hence, applying Theorem 2.1, we infer the following
**Corollary 2.2**: _If \(k\geq 2\) and \(G=G_{1}\boxtimes\cdots\boxtimes G_{k}\) for non-trivial connected graphs \(G_{i}\), then \(m(G,r)=r\) for all \(r\leq 2^{k-1}\)._
By letting \(k=2\) in Theorem 2.1, and noting that the \(2\)-neighbor bootstrap percolation coincides with \(P_{3}\)-hull convexity, we get the result of Coelho et al. [12, Theorem 3.1] regarding the \(P_{3}\)-hull number of the strong product of two graphs.
Since the case \(r\leq 2^{k-1}\) is completely resolved, we continue by investigating strong products of \(k\) graphs and the \(r\)-neighbor bootstrap percolation, where \(r>2^{k-1}\). The following result yields a dichotomy to the corollary above by showing that as soon as \(r>2^{k-1}\), the \(r\)-percolation number of the strong product of \(k\) factors can be arbitrarily large.
**Theorem 2.3**: _If \(n\geq 3\), then \(m(C_{n}\boxtimes K_{2}\boxtimes\cdots\boxtimes K_{2},2^{k-1}+1)=2^{k-1}-1+ \lceil\frac{n}{2}\rceil\), where \(K_{2}\) appears as a factor \((k-1)\)-times._
**Proof.** Note that \(K_{2}\boxtimes\cdots\boxtimes K_{2}\) is isomorphic to the complete graph \(K_{2^{k-1}}\) and let \(G=C_{n}\boxtimes K_{2^{k-1}}\). Denote \(V(C_{n})=\{v_{1},\ldots,v_{n}\}\) and \(V(K_{2^{k-1}})=\{1,2,\ldots,2^{k-1}\}\). We start by proving the lower bound \(m(G,2^{k-1}+1)\geq 2^{k-1}-1+\lceil\frac{n}{2}\rceil\).
Let \(S\) be a minimum percolating set of \(G\). For all \(i\in[n]\), let \(H_{i}=G[\{(v_{i},p),(v_{i+1},p):p\in[2^{k-1}]\}]\) be the subgraph of \(G\), where \(i\) is taken with respect to modulo \(n\). Clearly, \(H_{i}\) is the union of two \(H\)-layers \({}^{v_{i}}\!K_{2^{k-1}}\) and \({}^{v_{i+1}}\!K_{2^{k-1}}\) and is isomorphic to the complete graph on \(2^{k}\) vertices. In addition, every vertex in \(H_{i}\) has exactly \(2^{k-1}\) neighbors in \(G-V(H_{i})\). Since \(r=2^{k-1}+1\), we derive that
\[|S\cap V(H_{i})|\geq 1,\mbox{ for all }i\in[n]\,. \tag{1}\]
Without loss of generality, renaming the vertices of \(G\) if necessary, we may assume that \((v_{2},1)\notin S\) is a vertex infected at step 1 of the percolation process. Hence, there are at least \(2^{k-1}+1\) initially infected vertices within \({}^{v_{1}}\!K_{2^{k-1}}\cup\,{}^{v_{2}}\!K_{2^{k-1}}\cup\,{}^{v_{3}}\!K_{2^{k -1}}=H_{1}\cup H_{2}\).
When \(n\in\{3,4\}\), we have \(|S|\geq 2^{k-1}-1+\lceil\frac{n}{2}\rceil\), which proves the desired lower bound. Let \(n\geq 5\). Note that \(|\{H_{4},\ldots,H_{n-1}\}|\geq 1\), and using (1), we get
\[|S|=|S\cap(H_{1}\cup H_{2})|+|S\cap(H_{4}\cup\cdots\cup H_{n-1})|\geq 2^{k-1}+1+ \left\lfloor\frac{n-3}{2}\right\rfloor.\]
If \(n\) is even, then \(\lfloor\frac{n-3}{2}\rfloor=\frac{n}{2}-2\), and if \(n\) is odd, then \(\lfloor\frac{n-3}{2}\rfloor=\frac{n+1}{2}-2\). In both cases, we get \(m(G,2^{k-1}+1)\geq 2^{k-1}-1+\lceil\frac{n}{2}\rceil\), as desired.
To prove the upper bound, \(m(G,2^{k-1}+1)\leq 2^{k-1}-1+\lceil\frac{n}{2}\rceil\), we consider two possibilities for a percolating set \(S\) with respect to the parity of \(n\). If \(n\) is even, let
\[S=\,^{v_{1}}\!K_{2^{k-1}}\cup\{(v_{3},1),(v_{5},1),\ldots,(v_{n-1},1)\},\]
while if \(n\) is odd, let
\[S=\,^{v_{1}}\!K_{2^{k-1}}\cup\{(v_{3},1),(v_{5},1),\ldots,(v_{n},1)\}.\]
In either case, \(|S|=2^{k-1}-1+\lceil\frac{n}{2}\rceil\). It is easy to see that \(S\) percolates, and so the proof is complete. \(\Box\)
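The exact value can be checked by brute force on small cases. The sketch below is our addition (assuming `networkx`, with our own helpers) and verifies the case \(k=2\) of the theorem, i.e. \(m(C_{n}\boxtimes K_{2},3)=\lceil\frac{n}{2}\rceil+1\), for \(n=3,\ldots,7\); this is the same statement as Corollary 3.5 in Section 3.

```python
# A hedged numerical check of Theorem 2.3 for k = 2:
# m(C_n x K_2, 3) = 2^{k-1} - 1 + ceil(n/2) = ceil(n/2) + 1.
import itertools
import math
import networkx as nx

def percolate(G, seed, r):
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

def perc_number(G, r):
    nodes = list(G)
    # No set with fewer than r vertices can ever percolate, so start the search at r.
    for size in range(r, len(nodes) + 1):
        for S in itertools.combinations(nodes, size):
            if len(percolate(G, S, r)) == len(nodes):
                return size

for n in range(3, 8):
    G = nx.strong_product(nx.cycle_graph(n), nx.complete_graph(2))
    assert perc_number(G, 3) == math.ceil(n / 2) + 1, n
print("Theorem 2.3 verified for k = 2 and n = 3, ..., 7")
```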
### At least two non-edge factors
Theorem 2.3 shows that if the strong product has only one non-edge factor, Theorem 2.1 is best possible. If there are at least two non-edge factors, we can improve Theorem 2.1 as follows.
**Theorem 2.4**: _Let \(G\) be the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\) of connected graphs \(G_{i}\), \(i\in[k]\). If \(|V(G)|\geq r\) and at least two of the factors have order at least \(3\), then \(m(G,r)=r\) for all \(2\leq r\leq 3\cdot 2^{k-2}\)._
**Proof.** The result for \(r\leq 2^{k-1}\) follows from Theorem 2.1. We start with the proof of the statement, when \(r=3\cdot 2^{k-2}\). (The cases when \(r\in\{2^{k-1}+1,\ldots,3\cdot 2^{k-2}-1\}\) will be dealt with in the final paragraph of this proof.)
Let \(n_{1}\geq n_{2}\geq 3\), and \(n_{i}\geq 2\) for all \(i\in\{3,\ldots,k\}\). Denote \(V(G_{i})=\{v_{1}^{(i)},v_{2}^{(i)},\ldots,v_{n_{i}}^{(i)}\}\) for all \(i\in[k]\). Since \(G_{1}\) is connected of order at least \(3\), it contains a path on three vertices (not necessarily induced). Assume without loss of generality, renaming vertices if necessary, that \(P:v_{1}^{(1)}v_{2}^{(1)}v_{3}^{(1)}\) is a path in \(G_{1}\). For each \(i\in\{3,\ldots,k\}\), let \(F_{i}=\{v_{1}^{(i)},v_{2}^{(i)}\}\) consist of two adjacent vertices. Let
\[S=V(P)\times\{v_{1}^{(2)}\}\times\prod_{i=3}^{k}F_{i}. \tag{2}\]
Clearly, \(|S|=3\cdot 2^{k-2}\). We claim that \(S\) percolates.
Firstly, assuming that vertices in \(S\) are infected, we show that vertices in \(V(P)\times V(G_{2})\times\prod_{i=3}^{k}F_{i}\) become infected. Since \(G_{2}\) is connected, it suffices to show that
the set \(V(P)\times\{v_{s}^{(2)}\}\times\prod_{i=3}^{k}F_{i}\) being infected implies that the set \(V(P)\times\{v_{t}^{(2)}\}\times\prod_{i=3}^{k}F_{i}\) becomes infected, where \(v_{t}^{(2)}\) is adjacent to \(v_{s}^{(2)}\) in \(G_{2}\). Indeed, each vertex \((v_{2}^{(1)},v_{t}^{(2)},v_{i_{3}}^{(3)},\ldots,v_{i_{k}}^{(k)})\), where \(i_{p}\in[2]\) for all \(p\in\{3,\ldots,k\}\), is adjacent to all vertices in \(V(P)\times\{v_{s}^{(2)}\}\times\prod_{i=3}^{k}F_{i}\). Since \(|V(P)\times\{v_{s}^{(2)}\}\times\prod_{i=3}^{k}F_{i}|=3\cdot 2^{k-2}\), we infer that vertices \((v_{2}^{(1)},v_{t}^{(2)},v_{i_{3}}^{(3)},\ldots,v_{i_{k}}^{(k)})\), where \(i_{p}\in[2]\) for all \(p\in\{3,\ldots,k\}\), become infected. Now, consider any vertex \((v_{1}^{(1)},v_{t}^{(2)},v_{i_{3}}^{(3)},\ldots,v_{i_{k}}^{(k)})\), where \(i_{p}\in[2]\) for all \(p\in\{3,\ldots,k\}\), and note that it is adjacent to all vertices \((v_{i_{1}}^{(1)},v_{s}^{(2)},v_{i_{3}}^{(3)},\ldots,v_{i_{k}}^{(k)})\), where \(i_{p}\in[2]\) for all \(p\in[k]\setminus\{2\}\), as well as to all (newly infected) vertices \((v_{2}^{(1)},v_{t}^{(2)},v_{i_{3}}^{(3)},\ldots,v_{i_{k}}^{(k)})\), where \(i_{p}\in[2]\) for all \(p\in\{3,\ldots,k\}\). Thus, altogether, \((v_{1}^{(1)},v_{t}^{(2)},v_{i_{3}}^{(3)},\ldots,v_{i_{k}}^{(k)})\) is adjacent to \(2^{k-1}+2^{k-2}=3\cdot 2^{k-2}\) infected neighbors, and so it gets infected. By symmetry, we infer the same fact about vertices \((v_{3}^{(1)},v_{t}^{(2)},v_{i_{3}}^{(3)},\ldots,v_{i_{k}}^{(k)})\), where \(i_{p}\in[2]\) for all \(p\in\{3,\ldots,k\}\). Hence, vertices in \(V(P)\times V(G_{2})\times\prod_{i=3}^{k}F_{i}\) become infected, as claimed.
Secondly, we prove that all vertices in \(V(P)\times\prod_{i=2}^{k}V(G_{i})\) become infected. For this purpose, we claim that for any \(i\in\{2,\ldots,k-1\}\), vertices in \(V(P)\times\prod_{j=2}^{i+1}V(G_{j})\times\prod_{j=i+2}^{k}F_{j}\) get infected assuming that vertices in \(V(P)\times\prod_{j=2}^{i}V(G_{j})\times\prod_{j=i+1}^{k}F_{j}\) are infected. (Since vertices in \(V(P)\times V(G_{2})\times\prod_{i=3}^{k}F_{i}\) became infected as proved in the previous paragraph, the truth of this claim implies the statement that vertices in \(V(P)\times\prod_{i=2}^{k}V(G_{i})\) become infected.) By using the assumption, note that all vertices in \(V(P)\times\prod_{j=2}^{i}V(G_{j})\times\{v_{1}^{(i+1)}\}\times\prod_{j=i+2}^{k }F_{j}\) are infected. Now, consider arbitrary adjacent pairs of vertices \(v_{j_{1}}^{(j)}\) and \(v_{j_{2}}^{(j)}\) in \(G_{j}\), where \(j\in\{2,\ldots,i\}\), and let \(W_{j}=\{v_{j_{1}}^{(j)},v_{j_{2}}^{(j)}\}\). Since \(G_{i+1}\) is connected, it suffices to show that the set \(V(P)\times\prod_{j=2}^{i}W_{j}\times\{v_{s}^{(i+1)}\}\times\prod_{j=i+2}^{k}F_ {j}\) being infected implies that the set \(V(P)\times\prod_{j=2}^{i}W_{j}\times\{v_{t}^{(i+1)}\}\times\prod_{j=i+2}^{k}F_ {j}\), where \(v_{t}^{(i+1)}\) is adjacent to \(v_{s}^{(i+1)}\) in \(G_{i+1}\), becomes infected. To see this, one can use analogous arguments as in the proof in the previous paragraph. Since pairs of vertices in \(W_{j}\) were chosen arbitrarily, we infer that vertices in \(V(P)\times\prod_{i=2}^{k}V(G_{i})\) become infected, as claimed.
Thirdly, let \(P^{\prime}:v_{i_{1}}^{(2)}v_{i_{2}}^{(2)}v_{i_{3}}^{(2)}\) be an arbitrary path in \(G_{2}\), and note that any vertex of \(G_{2}\) lies on such a path, since \(G_{2}\) is connected and \(|V(G_{2})|\geq 3\). Further, let \(W_{j}=\{v_{j_{1}}^{(j)},v_{j_{2}}^{(j)}\}\) consist of arbitrary adjacent vertices in \(G_{j}\), for all \(j\in\{3,\ldots,k\}\). Note that all vertices in \(\{v_{1}^{(1)}\}\times V(P^{\prime})\times\prod_{j=3}^{k}W_{j}\) are already infected. Again, by using symmetric arguments as earlier, one can prove that vertices in \(\{v_{t}^{(1)}\}\times V(P^{\prime})\times\prod_{j=3}^{k}W_{j}\) become infected assuming that vertices in \(\{v_{s}^{(1)}\}\times V(P^{\prime})\times\prod_{j=3}^{k}W_{j}\) are infected, where \(v_{s}^{(1)}v_{t}^{(1)}\in E(G_{1})\). Since, \(G_{1}\) is connected, we deduce that \(V(G_{1})\times V(P^{\prime})\times\prod_{j=3}^{k}W_{j}\) gets infected. Noting that vertices in \(P^{\prime}\) and \(W_{j}\), where \(j\in\{3,\ldots,k\}\), were arbitrarily chosen, we get that \(V(G)\) becomes infected, and \(S\) is indeed a percolating set.
Finally, let \(r\in\{2^{k-1}+1,\ldots,3\cdot 2^{k-2}-1\}\). Note that it suffices to find a set \(S^{\prime}\) in \(G\) of size \(r\) such that \(S\), as defined in (2), becomes infected assuming that \(S^{\prime}\) is infected. Now, let \(S^{\prime}=S\setminus U\), where \(U\) consists of any \(3\cdot 2^{k-2}-r\) vertices in \(\{v_{2}^{(1)}\}\times\{v_{1}^{(2)}\}\times\prod_{i=3}^{k}F_{i}\). (Since \(r\in\{2^{k-1}+1,\ldots,3\cdot 2^{k-2}-1\}\), we have \(|U|\leq 3\cdot 2^{k-2}-(2^{k-1}+1)<2^{k-2}\), hence \(U\) is well defined.) Since the subgraph induced by \(S\) is isomorphic to \(P_{3}\boxtimes K_{2^{k-2}}\), vertices in \(\{v_{2}^{(1)}\}\times\{v_{1}^{(2)}\}\times\prod_{i=3}^{k}F_{i}\) are adjacent to all other vertices in \(S\). Hence, if \(S^{\prime}\) is initially infected, \(S\) becomes infected, and so \(S^{\prime}\) is a percolating set. \(\Box\)
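As an illustration, the sketch below is our addition (assuming `networkx` and the `percolate` helper introduced earlier). It feeds the seed from (2) in the case \(k=2\) and \(r=3\), namely \(S=V(P)\times\{v_{1}^{(2)}\}\), into the simulator and confirms that it percolates for several pairs of factors.

```python
# A hedged check of the seed S = V(P) x {v1} from the proof (k = 2, r = 3).
import networkx as nx

def percolate(G, seed, r):
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

examples = [(nx.path_graph(5), nx.cycle_graph(4)),   # P: 0-1-2
            (nx.star_graph(3), nx.path_graph(3)),    # P: 1-0-2 in K_{1,3}
            (nx.cycle_graph(6), nx.cycle_graph(5))]  # P: 0-1-2
for G1, G2 in examples:
    G = nx.strong_product(G1, G2)
    seed = [(a, 0) for a in (0, 1, 2)]  # V(P) x {v1}; |seed| = 3 = 3 * 2^{k-2}
    assert len(percolate(G, seed, 3)) == G.number_of_nodes(), (G1, G2)
print("The seed of Theorem 2.4 percolates with r = 3 in all three examples")
```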
We have shown that \(r\leq 3\cdot 2^{k-2}\) implies \(m(G,r)=r\), whenever \(G\) is a strong product of \(k\) graphs at least two of which have order at least \(3\). Now, the natural question is whether \(m(G,r)\) is bounded from above also if \(r>3\cdot 2^{k-2}\), and the next result shows this is not always true. On the contrary, \(m(G,3\cdot 2^{k-2}+1)\) can be arbitrarily large.
**Theorem 2.5**: _If \(n\geq 3\), then \(m(C_{n}\boxtimes C_{n}\boxtimes K_{2}\boxtimes\cdots\boxtimes K_{2},3\cdot 2 ^{k-2}+1)\geq\lceil\frac{n}{2}\rceil\), where \(K_{2}\) appears as a factor \((k-2)\) times._
**Proof.** Note that \(K_{2}\boxtimes\cdots\boxtimes K_{2}\) is isomorphic to the complete graph \(K_{2^{k-2}}\) and let \(G=C_{n}\boxtimes C_{n}\boxtimes K_{2^{k-2}}\). Denote \(V(C_{n})=\{v_{1},\ldots,v_{n}\}\) and \(V(K_{2^{k-2}})=\{1,\ldots,2^{k-2}\}\). For all \(i\in[n]\) let \(H_{i}=G[\{(v_{i},v_{j},p),(v_{i+1},v_{j},p):\,j\in[n],p\in[2^{k-2}]\}]\), where \(i\) is taken with respect to modulo \(n\). Finally let \(S\) be a minimum percolating set of \(G\).
Note that every vertex \((v_{i},v_{j},p)\in V(H_{i})\) has exactly \(3\cdot 2^{k-2}\) neighbors in \(G-V(H_{i})\). More precisely, \((v_{i},v_{j},p)\) is adjacent to vertices \((v_{i-1},v_{j^{\prime}},p^{\prime})\), where \(j^{\prime}\in\{j-1,j,j+1\}\) and \(p^{\prime}\in[2^{k-2}]\). We infer that \(|S\cap H_{i}|\geq 1\), for all \(i\in[n]\), which in turn implies that \(|S|\geq\lceil\frac{n}{2}\rceil\). \(\Box\)
### At least three non-edge factors
We can further improve Theorem 2.4 if there are at least three non-edge factors. However, in this case we only obtain an upper bound.
**Theorem 2.6**: _Let \(G\) be the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\) of connected graphs \(G_{i}\), \(i\in[k]\). If at least three of the factors have order at least \(3\), then \(m(G,r)\leq 7\cdot 2^{k-3}\) for all \(r\leq 7\cdot 2^{k-3}\)._
**Proof.** Since \(m(G,r)\leq c\) for some \(r\) and \(c\) implies \(m(G,r^{\prime})\leq c\) for all \(r^{\prime}\leq r\), it suffices to show that \(m(G,7\cdot 2^{k-3})=7\cdot 2^{k-3}\).
Let \(G=G_{1}\boxtimes\cdots\boxtimes G_{k}\) and denote \(V(G_{i})=\{v_{1}^{(i)},v_{2}^{(i)},\ldots,v_{n_{i}}^{(i)}\}\) for all \(i\in[k]\), and \(n_{1}\geq n_{2}\geq n_{3}\geq 3\). Consider the paths \(P^{(i)}:v_{1}^{(i)}v_{2}^{(i)}v_{3}^{(i)}\) in \(G_{i}\) for all \(i\in\{1,2\}\). Since \(G_{1}\) and \(G_{2}\) are connected of order at least \(3\), such paths exist (note that \(P^{(i)}\) is not necessarily induced). First, let \(k=3\), and
\[S^{\prime}=V(P^{(1)})\times V(P^{(2)})\times\{v_{1}^{(3)}\}\setminus\{(v_{2}^{ (1)},v_{2}^{(2)},v_{1}^{(3)}),(v_{1}^{(1)},v_{2}^{(2)},v_{1}^{(3)})\}. \tag{3}\]
Clearly, \(|S^{\prime}|=7\). Thus consider the spreading of infection in the \(7\)-bootstrap percolation. The first few infection steps are presented in the following table. (In the table, we use a simplified notation for the vertices. Notably, vertex \((v_{i}^{(1)},v_{j}^{(2)},v_{\ell}^{(3)})\) is written as \((i,j,\ell)\).)
\begin{tabular}{|l|l|l|} \hline Infection step & Newly infected vertices & Infected by neighbors \\ \hline Step 1 & \((2,2,1),(2,2,2)\) & \(S^{\prime}\) \\ \hline Step 2 & \((3,2,2)\) & Step 1 and \((2,1,1),(3,1,1),(3,2,1),(3,3,1),(2,3,1)\) \\ \hline Step 3 & \((2,1,2)\) and \((2,3,2)\) & Steps 1, 2, and \((1,1,1),(2,1,1),(3,1,1),(3,2,1)\) and \((1,3,1),(2,3,1),(3,3,1),(3,2,1)\), respectively. \\ \hline Step 4.1 & \((1,2,1),(1,2,2)\) & Steps 1, 3, and \((1,1,1),(2,1,1),(1,3,1),(2,3,1)\) \\ \hline Step 4.2 & \((3,1,2),(3,3,2)\) & Steps 1, 2, and \((2,1,1),(3,1,1),(3,2,1),(2,1,2)\) and \((2,3,1),(3,3,1),(2,3,2),(3,2,1)\), respectively. \\ \hline Step 5 & \((1,1,2)\) and \((1,3,2)\) & Steps 1, 4.1, and \((2,1,2),(1,1,1),(2,1,1)\) and \((2,3,2),(1,3,1),(2,3,1)\), respectively. \\ \hline \end{tabular}
After these infection steps, the vertices in \(V(P^{(1)})\times V(P^{(2)})\times\{v_{1}^{(3)},v_{2}^{(3)}\}\) are all infected. Next, by replacing \(v_{1}^{(3)}\) and \(v_{2}^{(3)}\) in the previous steps with arbitrary adjacent vertices \(v_{s}^{(3)}\) and \(v_{t}^{(3)}\) in \(G_{3}\) we deduce the following claim: \(V(P^{(1)})\times V(P^{(2)})\times\{v_{s}^{(3)}\}\) being infected implies that vertices in \(V(P^{(1)})\times V(P^{(2)})\times\{v_{t}^{(3)}\}\) also become infected. Since \(G_{3}\) is connected this proves that \(S^{\prime}\) eventually infects all vertices in \(V(P^{(1)})\times V(P^{(2)})\times V(G_{3})\).
Let \(P^{\prime}\) be an arbitrary path on three vertices in \(G_{3}\) (\(P^{\prime}\) is well defined because \(|V(G_{3})|\geq 3\)). By using analogous arguments as in the previous paragraph (where the roles of the first and the third coordinate are reversed) we derive that infected vertices \(\{v_{1}^{(1)}\}\times V(P^{(2)})\times V(P^{\prime})\) also infect \(V(G_{1})\times V(P^{(2)})\times V(P^{\prime})\). Since \(P^{\prime}\) is arbitrary and every vertex in \(V(G_{3})\) lies on some path \(P_{3}\), this implies that \(V(G_{1})\times V(P^{(2)})\times V(G_{3})\) becomes infected.
Finally let \(P^{\prime\prime}\) be an arbitrary path on three vertices in \(G_{1}\). Then infected vertices \(V(P^{\prime\prime})\times\{v_{1}^{(2)}\}\times V(P^{\prime})\) eventually infect \(V(P^{\prime\prime})\times V(G_{2})\times V(P^{\prime})\). Once again since \(P^{\prime}\) and \(P^{\prime\prime}\) can be chosen arbitrarily, this implies that \(V(G_{1})\times V(G_{2})\times V(G_{3})=V(G)\) becomes infected.
Now let \(k\geq 4\). For each \(i\in\{4,\ldots,k\}\), let \(F_{i}=\{v_{1}^{(i)},v_{2}^{(i)}\}\) consist of two adjacent vertices. Let
\[S=S^{\prime}\times\prod_{i=4}^{k}F_{i},\]
where \(S^{\prime}\subset V(G_{1})\times V(G_{2})\times V(G_{3})\) as defined in (3).
Let \((x_{1},x_{2},x_{3})\in V(G_{1}\boxtimes G_{2}\boxtimes G_{3})\) be a vertex infected in Step 1 from the above table. Then, any vertex \((x_{1},x_{2},x_{3},u_{4},\ldots,u_{k})\), where \(u_{i}\in F_{i}\) for all \(i\geq 4\), is adjacent to all vertices in \(S\), therefore it has \(7\cdot 2^{k-3}\) infected neighbors, and becomes infected. By using analogous constructions for Steps 2-5, we deduce that eventually every vertex in \(V(P^{(1)})\times V(P^{(2)})\times\{v_{1}^{(3)},v_{2}^{(3)}\}\times\prod_{i=4}^ {k}F_{i}\) becomes infected. This in turn implies that vertices in \(V(P^{(1)})\times V(P^{(2)})\times V(G_{3})\times\prod_{i=4}^{k}F_{i}\) become infected, and in a similar
way as above, we then infer that vertices in \(V(G_{1})\times V(G_{2})\times V(G_{3})\times\prod_{i=4}^{k}F_{i}\) also become infected.
By continuing this process (choosing paths on three vertices in two coordinates among the first three coordinates, and exchanging the coordinate in which the infection is spread), we finally derive (in a similar way as in the proof of Theorem 2.4) that all vertices of \(G\) eventually become infected, and thus \(S\) percolates. \(\Box\)
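For \(k=3\) the construction can be validated directly. The sketch below is our addition (assuming `networkx`; note that nodes of the iterated product are nested pairs \(((a,b),c)\)). It runs the \(7\)-neighbor process from the seed \(S^{\prime}\) of (3), with \(G_{1}=G_{2}=P_{3}\) and several choices of \(G_{3}\).

```python
# A hedged check of the size-7 seed S' from (3) in the case k = 3, r = 7.
import networkx as nx

def percolate(G, seed, r):
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

P3 = nx.path_graph(3)  # vertices 0, 1, 2 play the roles of v1, v2, v3
for G3 in [nx.path_graph(3), nx.path_graph(4), nx.cycle_graph(5)]:
    G = nx.strong_product(nx.strong_product(P3, P3), G3)  # nodes ((a, b), c)
    seed = {((a, b), 0) for a in range(3) for b in range(3)}
    seed -= {((1, 1), 0), ((0, 1), 0)}  # drop (v2, v2, v1) and (v1, v2, v1)
    assert len(seed) == 7
    assert len(percolate(G, seed, 7)) == G.number_of_nodes()
print("The seed S' of Theorem 2.6 percolates for every choice of G3 tested")
```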
### No \(K_{2}\) factors
In this subsection we deal with the last line of the table in Section 1.2.
**Theorem 2.7**: _Let \(k\geq 4\) and \(G\) be the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\) of connected graphs \(G_{i}\), \(i\in[k]\). If \(|V(G_{i})|\geq 3\) for all \(i\in[k]\), then \(m(G,r)\leq 3^{k-1}-k\) for all \(r\leq 2^{k}-1\)._
**Proof.** Note that it suffices to show that \(m(G,2^{k}-1)\leq 3^{k-1}-k\). Indeed, the truth of this inequality directly implies that \(m(G,r)\leq 3^{k-1}-k\) for all \(r\leq 2^{k}-1\).
Let \(V(G_{i})=\{v_{1}^{(i)},v_{2}^{(i)},\ldots,v_{n_{i}}^{(i)}\}\) for all \(i\in[k]\), and by the assumption \(n_{i}\geq 3\) for all \(i\in[k]\). Let \(P^{(i)}:v_{1}^{(i)}v_{2}^{(i)}v_{3}^{(i)}\) be paths in \(G_{i}\) for all \(i\in[k]\). (Since for each \(i\in[k]\) graph \(G_{i}\) is connected of order at least \(3\), such a path exists.) Let \(v=(v_{2}^{(1)},\ldots,v_{2}^{(k-1)},v_{1}^{(k)})\),
\[U=\{(x_{1},\ldots,x_{k-1},v_{1}^{(k)}):\,\exists i\in[k-1]\mbox{ with }x_{i}=v_{3}^{(i)},\mbox{ and }x_{j}=v_{2}^{(j)}\,\forall j\neq i\}\bigcup\{v\},\]
and let
\[S=\left(\prod_{i=1}^{k-1}V(P^{(i)})\times\{v_{1}^{(k)}\}\right)\setminus U. \tag{4}\]
Clearly, \(|S|=3^{k-1}-k\). Since \(k\geq 4\), it follows that \(3^{k-1}-k\geq 2^{k}-1\). We claim that \(S\) percolates. First note that \(v\) is adjacent to every vertex in \(S\), so it gets infected. Let \(u\in U\setminus\{v\}\) be an arbitrary vertex. Without loss of generality let \(u=(v_{3}^{(1)},v_{2}^{(2)},\ldots,v_{2}^{(k-1)},v_{1}^{(k)})\). Then \(u\) is adjacent to \(v\) and also to every vertex \((y_{1},\ldots,y_{k-1},v_{1}^{(k)})\), where \(y_{1}\in\{v_{2}^{(1)},v_{3}^{(1)}\}\) and \(y_{i}\in V(P^{(i)})\) for all \(i\in\{2,\ldots,k-1\}\), except for the vertices in \(U\setminus\{v\}\). Therefore it has \(2\cdot 3^{k-2}-(k-1)\) infected neighbors, and since \(k\geq 4\), we have \(2\cdot 3^{k-2}-(k-1)\geq 2^{k}-1\), as desired. With this we have proved that vertices in \(U\) get infected. Therefore, all vertices in \(\prod_{i=1}^{k-1}V(P^{(i)})\times\{v_{1}^{(k)}\}\) are now infected.
Next, we will prove that vertices in \(\prod_{i=1}^{k-1}V(P^{(i)})\times V(G_{k})\) become infected. Since \(G_{k}\) is connected, it suffices to show that if vertices in \(\prod_{i=1}^{k-1}V(P^{(i)})\times\{v_{s}^{(k)}\}\) are infected, then vertices in \(\prod_{i=1}^{k-1}V(P^{(i)})\times\{v_{t}^{(k)}\}\) become infected, where \(v_{s}^{(k)}\) and \(v_{t}^{(k)}\) are any adjacent vertices in \(G_{k}\). Assume now that \(v_{s}^{(k)}\) and \(v_{t}^{(k)}\) are adjacent in \(G_{k}\) and that vertices of \(\prod_{i=1}^{k-1}V(P^{(i)})\times\{v_{s}^{(k)}\}\) are infected. Note that all these vertices are adjacent to the vertex \((v_{2}^{(1)},\ldots,v_{2}^{(k-1)},v_{t}^{(k)})\), which gets infected, since it has \(3^{k-1}\) infected neighbors. Let \(j\in\{0,1,\ldots,k-2\}\) and suppose that all vertices in \(\prod_{i=1}^{k-1}V(P^{(i)})\times\{v_{t}^{(k)}\}\) that have at most \(j\) coordinates \(i\in[k-1]\) with entry not equal to \(v_{2}^{(i)}\) are already infected. Consider any vertex \(x\in\prod_{i=1}^{k-1}V(P^{(i)})\times\{v_{t}^{(k)}\}\) that has exactly \(j+1\) coordinates \(i\in[k-1]\) with entry not equal to \(v_{2}^{(i)}\). Without loss of generality let \(x=(v_{3}^{(1)},\ldots,v_{3}^{(j+1)},v_{2}^{(j+2)},\ldots,v_{2}^{(k-1)},v_{t}^{(k)}).\) Then \(x\) has \(2^{j+1}\cdot 3^{k-j-2}\) infected neighbors of the form \(y=(y_{1},\ldots,y_{k-1},v_{s}^{(k)})\), where \(y_{i}\in\{v_{2}^{(i)},v_{3}^{(i)}\}\) for all \(i\in[j+1]\) and \(y_{i}\in\{v_{1}^{(i)},v_{2}^{(i)},v_{3}^{(i)}\}\) for all \(i\in\{j+2,\ldots,k-1\}.\) Note that \(x\) also has \(2^{j+1}-1\) infected neighbors of the form \(y=(y_{1},\ldots,y_{j+1},v_{2}^{(j+2)},\ldots,v_{2}^{(k-1)},v_{t}^{(k)})\), where \(y_{i}\in\{v_{2}^{(i)},v_{3}^{(i)}\}\) for all \(i\in[j+1]\), except when \(y_{i}=v_{3}^{(i)}\) for every \(i\in[j+1]\) (which is \(x\) itself). Thus \(x\) has \(2^{j+1}\cdot 3^{k-j-2}+2^{j+1}-1\geq 2^{k}-1\) infected neighbors, therefore it gets infected. This proves that eventually every vertex in \(\prod_{i=1}^{k-1}V(P^{(i)})\times V(G_{k})\) gets infected.
By continuing this process (choosing paths on three vertices in all but one coordinate, and spreading the infection throughout the remaining coordinate), we finally derive that all vertices of \(G\) eventually become infected, and thus \(S\) percolates. \(\Box\)
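The sketch below is our addition (the `strong_product` helper producing flat coordinate tuples is ours, and all four factors are taken to be \(P_{3}\), which is an assumption of the example). It verifies that the seed of size \(3^{k-1}-k=23\) from (4) percolates for \(k=4\) and \(r=2^{k}-1=15\).

```python
# A hedged check of the seed S from (4) for k = 4 and r = 15.
import itertools
import networkx as nx

def strong_product(factors):
    """Strong product with flat coordinate tuples as vertices."""
    nodes = list(itertools.product(*(list(F) for F in factors)))
    G = nx.Graph()
    G.add_nodes_from(nodes)
    for u, v in itertools.combinations(nodes, 2):
        if all(a == b or F.has_edge(a, b) for a, b, F in zip(u, v, factors)):
            G.add_edge(u, v)
    return G

def percolate(G, seed, r):
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

k = 4
G = strong_product([nx.path_graph(3)] * k)  # vertices 0, 1, 2 stand for v1, v2, v3
# U: the k - 1 vertices with one coordinate v3 and the rest v2, plus the vertex v.
U = {tuple(2 if j == i else 1 for j in range(k - 1)) + (0,) for i in range(k - 1)}
U.add((1,) * (k - 1) + (0,))
seed = {x + (0,) for x in itertools.product(range(3), repeat=k - 1)} - U
assert len(seed) == 3 ** (k - 1) - k  # = 23
assert len(percolate(G, seed, 2 ** k - 1)) == G.number_of_nodes()
print("The seed of Theorem 2.7 percolates for k = 4 with r = 15")
```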
Note that we only proved an upper bound for \(m(G,r)\), where \(G\) has \(k\) factors, all of which have at least three vertices, and the exact values are still open. Nevertheless, Observation 1.1 yields a dichotomy to the above result by presenting strong products of \(k\) graphs in which the \(r\)-percolation number, where \(r\geq 2^{k}\), can be arbitrarily large.
## 3 Percolation numbers of \(G\boxtimes H\)
In this section, we consider \(m(G\boxtimes H,r)\), where \(G\) and \(H\) are non-trivial connected graphs. When \(r=3\) and both \(G\) and \(H\) have order at least \(3\), we immediately get the following result from Theorem 2.4.
**Corollary 3.1**: _If \(G\) and \(H\) are connected graphs each with at least \(3\) vertices, then \(m(G\boxtimes H,3)=3\)._
In the next subsection, we consider the only remaining case for \(m(G\boxtimes H,3)\), which is when one of the factors is \(K_{2}\), and prove a characterization of the graphs \(G\) such that \(m(G\boxtimes K_{2},3)=3\). In Section 3.2 we follow with some bounds on \(m(G\boxtimes H,4)\) and \(m(G\boxtimes H,5)\).
### Strong prisms and \(r=3\)
By Corollary 3.1, the only remaining case for the \(3\)-neighbor bootstrap percolation of the strong product of two factors is when one of the factors is \(K_{2}\), that is, \(G\boxtimes K_{2}\), or the so-called strong prism of a graph \(G\). We will denote the vertices of a strong prism as follows: letting \(V(K_{2})=[2]\), we will write \(V(G\boxtimes K_{2})=\{v_{i}:\,v_{i}=(v,i),\mbox{ where }v\in V(G),i\in[2]\}\).
Note that if \(x\) and \(y\) are twins in a graph \(G\), and among the two only \(x\) belongs to a percolating set \(S\) of \(G\), then \(S^{\prime}=(S\setminus\{x\})\cup\{y\}\) is also a percolating set of \(G\), and has the same cardinality as \(S\). This observation will be helpful in the proof of the following auxiliary result.
**Lemma 3.2**: _If \(G\) is a graph of order at least \(3\) and \(m(G\boxtimes K_{2},3)=3\), then there exists a minimum percolating set of \(G\boxtimes K_{2}\) all vertices of which are in the same \(G\)-layer._
**Proof.** Let \(S\) be a minimum percolating set of \(G\boxtimes K_{2}\). Note that for any \(v\in V(G)\), vertices \(v_{1}\) and \(v_{2}\) are (closed) twins. Hence, if \(S\) contains three vertices no two of which are in the same \(K_{2}\)-layer, then by the observation preceding the lemma, we infer that \(S\) can be modified in such a way that all of its vertices belong to the same \(G\)-layer.
Now, assume that \(S=\{w_{1},w_{2},u_{1}\}\) and let \(x\in V(G\boxtimes K_{2})\) be a common neighbor of vertices \(w_{1},w_{2},u_{1}\). Consider the following two cases.
* If \(x=u_{2}\), then \(u_{1}\) and \(w_{1}\) are neighbors. Since \(|V(G)|\geq 3\), there exists a vertex \(z_{1}\), \(z\notin\{u,w\}\), adjacent to \(u_{1}\) or \(w_{1}\) (hence, \(u,z\) and \(w\) form a path in \(G\) whose central vertex is \(u\) or \(w\)). Note that \(S^{\prime}=\{u_{1},z_{1},w_{1}\}\) is a percolating set of \(G\boxtimes K_{2}\), since after at most two steps \(w_{2}\) gets infected as well.
* If \(x\neq u_{2}\), then \(x\in\{z_{1},z_{2}\}\) for a vertex \(z\in V(G)\setminus\{u,w\}\). Either way, \(z_{1}\) is adjacent to both \(u_{1}\) and \(w_{1}\), and so \(uzw\) is a path in \(G\). Again, \(S^{\prime}=\{u_{1},z_{1},w_{1}\}\) is a percolating set of \(G\boxtimes K_{2}\), since \(z_{2}\) gets infected after the first step, and then \(w_{2}\) is infected after the second step.
In both cases, we found a minimum percolating set of \(G\boxtimes K_{2}\) that lies in one \(G\)-layer, as desired. \(\square\)
Next we present a complete characterization of strong prisms whose \(3\)-percolation number equals \(3\).
**Theorem 3.3**: _If \(G\) is a connected graph, then \(m(G\boxtimes K_{2},3)=3\) if and only if either \(m(G,2)=2\) or \(m(G,2)=3\) with a percolating set \(S\) such that vertices from \(S\) lie in a subgraph of \(G\) isomorphic to \(P_{3}\) or \(K_{1,3}\)._
**Proof.** Firstly, let \(m(G,2)=2\) and let \(\{u,v\}\) be a percolating set in the \(2\)-neighbor bootstrap percolation in \(G\). If \(G\cong K_{2}\), then \(S=\{u_{1},v_{1},u_{2}\}\) is a percolating set in the \(3\)-neighbor bootstrap percolation in \(G\boxtimes K_{2}\), so we may assume that \(|V(G)|\geq 3\). Hence, there is a vertex \(w\) in \(G\) adjacent to both \(u\) and \(v\). Let \(S=\{u_{1},v_{1},w_{1}\}\), and consider the \(3\)-neighbor bootstrap percolation in \(G\boxtimes K_{2}\). Note that \(w_{2}\) gets infected directly from \(S\), and in the second step also \(u_{2}\) and \(v_{2}\) get infected. From this point forward, the \(3\)-neighbor bootstrap percolation process in \(G\boxtimes K_{2}\) follows analogous lines as the \(2\)-neighbor bootstrap percolation process in \(G\). Notably, if \(z\in V(G)\) gets infected by \(x\) and \(y\) in \(G\), then \(z_{1}\in V(G)\times[2]\) and \(z_{2}\in V(G)\times[2]\) get infected by \(\{x_{1},x_{2},y_{1},y_{2}\}\) in \(G\boxtimes K_{2}\). Thus \(S\) percolates, and \(m(G\boxtimes K_{2},3)=3\).
Secondly, let \(m(G,2)=3\) and let \(S=\{u,v,w\}\) be a percolating set of \(G\) such that \(uvw\) is a path in \(G\). Let \(S^{\prime}=\{u_{1},v_{1},w_{1}\}\), and consider the \(3\)-neighbor bootstrap percolation in \(G\boxtimes K_{2}\). In the same way as in the previous paragraph we note that also \(u_{2},v_{2}\) and \(w_{2}\) get infected. In addition, we note that from this point forward, the \(3\)-neighbor bootstrap percolation process in \(G\boxtimes K_{2}\) follows analogous lines as the \(3\)-neighbor bootstrap percolation process in \(G\). Notably, if \(w\in V(G)\) gets infected by \(x,y\) and \(z\) in
\(G\), then \(w_{1}\in V(G)\times[2]\) and \(w_{2}\in V(G)\times[2]\) get infected by \(\{x_{1},x_{2},y_{1},y_{2},z_{1},z_{2}\}\) in \(G\boxtimes K_{2}\) (in fact, one could use only three vertices among the six to get the same result). Finally, let \(S=\{u,v,w\}\) be a percolating set of \(G\) such that \(u,v,w\) are leaves in a subgraph of \(G\) isomorphic to \(K_{1,3}\) whose central vertex is denoted by \(a\). Note that \(S^{\prime}=\{u_{1},v_{1},w_{1}\}\) immediately infects \(a_{1}\). Since vertices of the path \(u_{1}a_{1}v_{1}\) in \(G\boxtimes K_{2}\) are infected, we infer by the same reasoning as earlier that \(S^{\prime}\) percolates in \(G\boxtimes K_{2}\).
For the reverse implication, let \(m(G\boxtimes K_{2},3)=3\). If \(|V(G)|\leq 3\) one can readily check that \(m(G,2)=2\), so let \(G\) be of order at least 4. By Lemma 3.2, one can choose a percolating set \(S\) such that all its vertices lie in the same \(G\)-layer. Thus we may assume that \(S=\{u_{1},v_{1},w_{1}\}\) is a percolating set of \(G\boxtimes K_{2}\), where \(u,v,w\in V(G)\). Hence, there exists a common neighbor \(x\in V(G\boxtimes K_{2})\) of \(u_{1},v_{1},w_{1}\). Consider the following cases:
* \(x\in\{u_{2},v_{2},w_{2}\}\). Then \(u,v,w\) lie on a path \(P_{3}\) in \(G\).
* \(x\notin\{u_{2},v_{2},w_{2}\}\). Then \(x\in\{z_{1},z_{2}\}\) for some vertex \(z\in V(G)\setminus\{u,v,w\}\). Note that both \(z_{1}\) and \(z_{2}\) are adjacent to \(u_{1},v_{1},w_{1}\). Hence, \(u,v\) and \(w\) are leaves of a subgraph of \(G\) isomorphic to \(K_{1,3}\) and \(z\) is its central vertex.
In both cases, either immediately or after the first step, one finds an infected path isomorphic to \(P_{3}\) in the first layer, and as noted earlier, the corresponding vertices in the second layer also get infected. This property is maintained throughout the process; namely, whenever a vertex \(u_{1}\) gets infected, its twin \(u_{2}\) has the same set of (infected) neighbors and so it also gets infected at the same time (and vice versa). Since \(u_{1}\) had at least three infected neighbors, at least two of them were in \(G^{1}\). We infer that the set \(\{u,v,w\}\) percolates in the 2-neighbor bootstrap percolation in \(G\), therefore \(m(G,2)\leq 3\). In addition, if \(m(G,2)=3\), then by the above, \(\{u,v,w\}\) is a percolating set, where \(u,v,w\) are in a subgraph isomorphic to \(P_{3}\) or \(K_{1,3}\). The proof is complete. \(\Box\)
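Since both sides of the characterization are finite to check on any fixed graph, the theorem can be tested exhaustively on small instances. The sketch below is our addition; it assumes `networkx` (whose graph atlas lists all graphs on up to seven vertices), and the helper names are ours.

```python
# A hedged exhaustive test of Theorem 3.3 on all connected graphs of order <= 5.
import itertools
import networkx as nx

def percolate(G, seed, r):
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

def condition(G):
    """m(G,2)=2, or some percolating 3-set lies in a P_3 or a K_{1,3} of G."""
    nodes = list(G)
    if any(len(percolate(G, S, 2)) == len(nodes)
           for S in itertools.combinations(nodes, 2)):
        return True
    for S in itertools.combinations(nodes, 3):
        if len(percolate(G, S, 2)) == len(nodes):
            in_p3 = any(G.has_edge(x, y) and G.has_edge(x, z)
                        for x, y, z in itertools.permutations(S))
            in_k13 = any(all(G.has_edge(z, s) for s in S)
                         for z in nodes if z not in S)
            if in_p3 or in_k13:
                return True
    return False

for G in nx.graph_atlas_g():
    if 2 <= len(G) <= 5 and nx.is_connected(G):
        P = nx.strong_product(G, nx.complete_graph(2))
        prism3 = any(len(percolate(P, S, 3)) == len(P)
                     for S in itertools.combinations(list(P), 3))
        assert prism3 == condition(G)
print("Theorem 3.3 verified on all connected graphs with at most 5 vertices")
```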
Unfortunately, we do not know of a structural characterization of the class of graphs with \(m(G,2)=2\) or \(m(G,2)=3\). One might wonder if the latter class of graphs always contains a 2-neighbor bootstrap percolating set \(S\) of the kind that appears in the formulation of the theorem, that is, with vertices from \(S\) lying in a subgraph of \(G\) isomorphic to \(P_{3}\) or \(K_{1,3}\). However, the following example shows this is not the case. Let \(G^{\prime}\) be the graph obtained from \(C_{4}\) by adding two leaves to a vertex; see Fig. 1. Note that \(m(G^{\prime},2)=3\) with the unique percolating set \(S\) depicted in the figure (the leaves must be included in \(S\), and one more vertex is needed), which does not satisfy the condition of Theorem 3.3, therefore \(m(G^{\prime}\boxtimes K_{2},3)>3\).
To gain a better understanding of the class of graphs characterized in Theorem 3.3, we give some structural properties of the graphs \(G\) with \(m(G\boxtimes K_{2},3)=3\) related to cut-vertices in \(G\).
**Proposition 3.4**: _If \(m(G\boxtimes K_{2},3)=3\), then \(G\) has at most one cut-vertex \(x\), and if \(x\) is a cut-vertex of \(G\) then \(G-x\) has at most three components._
**Proof.** Let \(G\) be a graph that either has two distinct cut-vertices, or has one cut-vertex \(x\) such that \(G-x\) has more than three components (both cases imply \(|V(G)|\geq 4\)). Suppose for a contradiction that \(m(G\boxtimes K_{2},3)=3\). Then, according to Theorem 3.3, either \(m(G,2)=2\) or \(m(G,2)=3\) with a percolating set \(S\) such that vertices in \(S\) lie on a subgraph of \(G\) isomorphic to \(P_{3}\) or \(K_{1,3}\).
Firstly, let \(x\) be an arbitrary cut-vertex of \(G\) and let \(S\) be a percolating set of \(G\) of size either 2 or 3 under the 2-neighbor bootstrap percolation. Let \(K\) and \(L\) be any two connected components of \(G-x\). Since there are no edges between vertices in \(K\) and vertices in \(L\), we infer that \(S\) contains at least one vertex from each of \(K\) and \(L\). Thus, if \(G-x\) has more than three connected components, \(S\) would contain at least 4 vertices, a contradiction.
Now, suppose that \(G\) contains an additional cut-vertex \(y\). If \(G-\{x,y\}\) has three connected components, then for the same reason as above \(S\) contains a vertex from each of these components. However, such a set \(S\) does not satisfy the condition that its vertices lie on a subgraph isomorphic to \(P_{3}\) or \(K_{1,3}\).
Finally, if \(G-\{x,y\}\) has only two connected components, then \(x\) and \(y\) are adjacent. Once again, \(S\) contains a vertex from each connected component of \(G-\{x,y\}\). Now, if \(|S|=2\), then the two vertices of \(S\) have no common neighbor, and \(S\) cannot percolate. If \(|S|=3\), then the vertices in \(S\) cannot lie on a subgraph isomorphic to \(P_{3}\) or \(K_{1,3}\). \(\Box\)
Let \(G=K_{1,3}\) and consider \(m(G\boxtimes K_{2},3)\). Note that \(G\) has a cut-vertex \(x\) whose removal separates \(G\) into three components. Also note that the set \(S\) consisting of the three leaves of \(K_{1,3}\) percolates \(G\) in the \(2\)-neighbor bootstrap percolation, therefore, according to Theorem 3.3, \(m(G\boxtimes K_{2},3)=3\). Hence, having a cut-vertex that yields exactly three components is compatible with \(m(G\boxtimes K_{2},3)=3\).
To see that the converse of Proposition 3.4 does not hold, take the graph \(G^{\prime}\) from Fig. 1. Note that \(G^{\prime}\) has just one cut-vertex \(x\) and \(G^{\prime}-x\) has three components. However, \(m(G^{\prime}\boxtimes K_{2},3)>3\), because \(m(G^{\prime},2)=3\) with a unique percolating set \(S\), which does not satisfy the condition from Theorem 3.3. Another example is the cycle \(C_{5}\). It has no cut-vertices, and it is easy to see that no vertex set of size two percolates in the 2-neighbor bootstrap percolation process. This means that \(m(C_{5},2)=3\), since a set containing one vertex and both of its diametral vertices is a percolating set of size 3. This is in fact the only way to form a minimum percolating set, since the only other possible subset of three vertices (up to symmetry) is a path \(P_{3}\), which does not infect any new vertices. Since the condition of Theorem 3.3 is not satisfied, \(m(C_{5}\boxtimes K_{2},3)>3\).
Figure 1: Graph \(G^{\prime}\) with a unique percolating set \(S\)
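The three examples above are easy to confirm by brute force. The sketch below is our addition (assuming `networkx` and our helpers from earlier); its last assertion anticipates Corollary 3.5 below.

```python
# A hedged brute-force check of K_{1,3}, the graph G' of Fig. 1, and C_5.
import itertools
import networkx as nx

def percolate(G, seed, r):
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

def perc_number(G, r):
    nodes = list(G)
    for size in range(1, len(nodes) + 1):
        for S in itertools.combinations(nodes, size):
            if len(percolate(G, S, r)) == len(nodes):
                return size

K2 = nx.complete_graph(2)
# K_{1,3}: the three leaves percolate and lie in a K_{1,3}, so m = 3.
assert perc_number(nx.strong_product(nx.star_graph(3), K2), 3) == 3
# G': C_4 on vertices 0..3 with two leaves 4 and 5 attached to vertex 0.
Gp = nx.cycle_graph(4)
Gp.add_edges_from([(0, 4), (0, 5)])
assert perc_number(Gp, 2) == 3
assert perc_number(nx.strong_product(Gp, K2), 3) > 3
# C_5: m(C_5, 2) = 3, but no percolating set lies in a P_3 or a K_{1,3}.
assert perc_number(nx.cycle_graph(5), 2) == 3
assert perc_number(nx.strong_product(nx.cycle_graph(5), K2), 3) == 4  # cf. Cor. 3.5
print("All three examples behave exactly as claimed")
```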
By Theorem 2.3, we infer the following result, showing that \(m(G\boxtimes K_{2},3)\) can be arbitrarily large.
**Corollary 3.5**: _Let \(n\geq 3\). Then \(m(C_{n}\boxtimes K_{2},3)=\lceil\frac{n}{2}\rceil+1\)._
### Two factors and \(r=4\) or \(r=5\)
Next, we consider the \(4\)- and the \(5\)-neighbor bootstrap percolation in strong products of two factors. The following result is of similar flavor as Theorem 3.3 in the sense that we use the \(2\)-percolation numbers of factor graphs.
**Theorem 3.6**: _Let \(G\) and \(H\) be connected graphs such that \(m(G,2)=2\) and \(m(H,2)=2\). Then, \(m(G\boxtimes H,4)\leq 5\). In addition, if there exists a percolating set of \(G\) (or \(H\)) consisting of two adjacent vertices, then \(m(G\boxtimes H,4)=4\)._
**Proof.** Denote \(V(G)=\{g_{1},\ldots,g_{n}\}\) and \(V(H)=\{h_{1},\ldots,h_{m}\}\) in such a way that \(\{g_{1},g_{2}\}\) and \(\{h_{1},h_{2}\}\) are percolating sets of \(G\) and \(H\), respectively. We also assume, renaming the vertices of \(G\) and \(H\) if necessary, that every vertex \(g_{i}\), respectively \(h_{j}\), is infected in \(G\), respectively \(H\), using the \(2\)-neighbor bootstrap percolation rule by some pair \(g_{i_{1}},g_{i_{2}}\), respectively \(h_{j_{1}},h_{j_{2}}\), where \(i_{1}<i_{2}<i\) and \(j_{1}<j_{2}<j\).
First let \(V(G)=\{g_{1},g_{2}\}\), in which case \(g_{1}g_{2}\in E(G)\). If \(V(H)=\{h_{1},h_{2}\}\), then \(m(G\boxtimes H,4)=4\). Hence let \(|V(H)|\geq 3\) and let \(S=\{(g_{1},h_{1}),(g_{1},h_{2}),(g_{2},h_{1}),(g_{2},h_{2})\}\). Let \(h_{i}\) be a vertex adjacent to \(h_{i_{1}},h_{i_{2}}\) in \(H\) and assume that all vertices \((g_{1},h_{j}),(g_{2},h_{j})\) for all \(j<i\) are already infected. Then \((g_{1},h_{i}),(g_{2},h_{i})\) are both adjacent to vertices \((g_{1},h_{i_{1}}),(g_{2},h_{i_{1}}),(g_{1},h_{i_{2}}),(g_{2},h_{i_{2}})\). By induction, the whole \(G\boxtimes H\) gets infected and \(m(G\boxtimes H,4)=4\).
Now let us assume that \(|V(G)|\geq 3\) and \(|V(H)|\geq 3\). Denote by \(S_{3}\) the subgraph induced by \(\{(g_{i},h_{j}):\,i,j\in[3]\}\). Consider two cases for a percolating set \(S\) in the \(4\)-neighbor bootstrap percolation.
**Case 1:**\(g_{1}g_{2}\notin E(G)\) and \(h_{1}h_{2}\notin E(H)\). Let \(S=\{(g_{1},h_{1}),(g_{1},h_{2}),(g_{2},h_{1}),(g_{2},h_{2}),\)\((g_{3},h_{1})\}\) be a set of size \(5\) and consider the following \(4\)-neighbor bootstrap percolation process. Immediately \((g_{3},h_{3})\) is infected by (all) vertices in \(S\). After that, the vertex \((g_{2},h_{3})\) is infected by \((g_{3},h_{3}),(g_{2},h_{1}),(g_{2},h_{2}),(g_{3},h_{1})\). Finally, vertices \((g_{3},h_{2})\) and \((g_{1},h_{3})\) are adjacent to \((g_{1},h_{2}),(g_{1},h_{3}),(g_{2},h_{2}),(g_{2},h_{3}),(g_{3},h_{3})\) and \((g_{1},h_{1}),(g_{1},h_{2}),\)\((g_{3},h_{1}),(g_{3},h_{2}),(g_{3},h_{3})\) respectively. This infects the whole \(S_{3}\).
**Case 2:**\(g_{1}g_{2}\in E(G)\). Let \(S=\{(g_{1},h_{1}),(g_{1},h_{2}),(g_{2},h_{1}),(g_{2},h_{2})\}\) and consider the following \(4\)-neighbor bootstrap percolation process. Vertex \((g_{3},h_{3})\) is adjacent to every vertex in \(S\), so it gets infected. Next, the vertex \((g_{2},h_{3})\) is infected by \((g_{1},h_{1}),(g_{1},h_{2}),(g_{2},h_{1}),(g_{2},h_{2})\), and then the vertex \((g_{3},h_{1})\) is infected by \((g_{1},h_{1}),(g_{2},h_{1}),(g_{2},h_{3}),(g_{3},h_{3})\). Since we have now obtained a superset of the set of infected vertices from Case 1, this set infects \(S_{3}\). (The case \(h_{1}h_{2}\in E(H)\) is symmetric.)
Now, let us assume that \(S_{3}\) is already infected and consider the \(4\)-neighbor bootstrap percolation process. Let \(g_{i}\) be a vertex infected in the \(2\)-neighbor bootstrap percolation process in \(G\) by vertices \(g_{i_{1}},g_{i_{2}}\) (where \(i_{1},i_{2}<i\)) and suppose that vertices \((g_{k},h_{1}),\ldots,(g_{k},h_{p})\), for all \(k<i\) and some \(p\leq m\) are already infected in \(G\boxtimes H\). Then \((g_{i},h_{p})\) gets infected by vertices \((g_{i_{1}},h_{p}),(g_{i_{2}},h_{p}),(g_{i_{1}},h_{p_{1}}),(g_{i_{2}},h_{p_{1}})\)
\((g_{i_{1}},h_{p_{2}}),(g_{i_{2}},h_{p_{2}})\), where \(h_{p}\) gets infected by \(h_{p_{1}}\) and \(h_{p_{2}}\) in the \(2\)-neighbor bootstrap percolation process in \(H\). By the same argument we can show that every vertex \((g_{i},h_{j})\) where \(3\leq j\leq p-1\) has \(6\) infected neighbors and thus becomes infected. Finally \((g_{i},h_{2})\) and \((g_{i},h_{1})\) are adjacent to \((g_{i_{1}},h_{2}),(g_{i_{2}},h_{2}),(g_{i},h_{3}),(g_{i_{1}},h_{3}),(g_{i_{2}}, h_{3})\) and \((g_{i_{1}},h_{1}),(g_{i_{2}},h_{1}),(g_{i},h_{3}),(g_{i_{1}},h_{3}),(g_{i_{2}}, h_{3})\) respectively. Thus all vertices in \(\{g_{1},\ldots,g_{i}\}\times\{h_{1},\ldots,h_{p}\}\) are now infected.
By reversing the roles of \(G\) and \(H\), we can deduce that all vertices in \(\{g_{1},\ldots,g_{i}\}\times\{h_{1},\ldots,h_{p+1}\}\) become infected. By using induction, we infer that all vertices in \(G\boxtimes H\) become infected. Therefore \(S\) percolates and \(m(G\boxtimes H,4)\leq 5\), concluding the proof of the theorem. \(\Box\)
Notice that throughout the infection process in the above proof, the only vertex which was infected by fewer than \(5\) infected neighbors was the vertex \((g_{2},h_{3})\). Therefore it is not difficult to see that one can modify the proof of Theorem 3.6 by adding the vertex \((g_{2},h_{3})\) to the set of initially infected vertices \(S\), and obtain the following corollary.
**Corollary 3.7**: _Let \(G\) and \(H\) be connected graphs such that \(m(G,2)=2\) and \(m(H,2)=2\). If \(|V(G)|\geq 3,|V(H)|\geq 3\), then \(m(G\boxtimes H,5)\leq 6\). In addition, if there exists a percolating set of \(G\) (or \(H\)) consisting of two adjacent vertices, then \(m(G\boxtimes H,5)=5\)._
Consider \(m(P_{3}\boxtimes P_{3},4)\), and denote \(V(P_{3})=\{1,2,3\}\). Suppose that \(S\) is a percolating set of \(P_{3}\boxtimes P_{3}\) with \(|S|=4\). Since vertices \((1,1),(1,3),(3,1)\) and \((3,3)\) are of degree \(3\), they must all be in \(S\). They are all adjacent to vertex \((2,2)\), which gets infected. After that, every remaining vertex, namely \((1,2),(2,1),(2,3),(3,2)\), has exactly \(3\) infected neighbors, therefore \(S\) does not percolate. Hence, \(m(P_{3}\boxtimes P_{3},4)=5\), where the upper bound follows from Theorem 3.6, since \(m(P_{3},2)=2\). Now, consider \(m(P_{3}\boxtimes P_{3},5)\) and suppose that \(S\) is a percolating set with \(|S|=5\). Then \((1,1),(1,3),(3,1)\) and \((3,3)\) are all in \(S\). It is not difficult to check that by adding to \(S\) any of the remaining vertices, \(S\) does not percolate; together with Corollary 3.7, this gives \(m(P_{3}\boxtimes P_{3},5)=6\). These examples show that the upper bounds in Theorem 3.6 and Corollary 3.7 are sharp.
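These small cases can be confirmed exhaustively. The sketch below is our addition (assuming `networkx`; the concrete test instances \(C_{4}\boxtimes C_{4}\) and \(K_{3}\boxtimes C_{4}\) are our choices) and checks the bounds of Theorem 3.6 and Corollary 3.7 together with the sharpness examples.

```python
# A hedged exhaustive check of Theorem 3.6, Corollary 3.7, and their sharpness.
import itertools
import networkx as nx

def percolate(G, seed, r):
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

def perc_number(G, r):
    nodes = list(G)
    for size in range(1, len(nodes) + 1):
        for S in itertools.combinations(nodes, size):
            if len(percolate(G, S, r)) == len(nodes):
                return size

P3, C4, K3 = nx.path_graph(3), nx.cycle_graph(4), nx.complete_graph(3)
G = nx.strong_product(P3, P3)
assert perc_number(G, 4) == 5 and perc_number(G, 5) == 6  # sharpness examples
# m(C4, 2) = 2 (two antipodal vertices), so Theorem 3.6 gives m <= 5 for r = 4.
assert perc_number(nx.strong_product(C4, C4), 4) <= 5
# K3 has a percolating pair of adjacent vertices, so the bounds are attained:
assert perc_number(nx.strong_product(K3, C4), 4) == 4
assert perc_number(nx.strong_product(K3, C4), 5) == 5
print("Theorem 3.6 and Corollary 3.7 confirmed on the small examples")
```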
Let \(H=H_{1}\boxtimes H_{2}\) for arbitrary non-trivial connected graphs \(H_{1},H_{2}\). Since Theorem 2.1 states that \(m(H,2)=2\), where a percolating set consists of two adjacent vertices, the following corollary of Theorem 3.6 is immediate.
**Corollary 3.8**: _If \(m(G,2)=2\) and \(|V(G)|\geq 3\), then \(m(G\boxtimes H_{1}\boxtimes H_{2},5)=5\)._
## 4 Strong products of infinite paths
In this section, we extend the concept of the \(r\)-neighbor bootstrap percolation to infinite graphs. In particular, we consider strong products of two-way infinite paths. In what follows, we extend the consideration of the \(r\)-neighbor bootstrap percolation also to the trivial case when \(r=1\). Clearly, if \(G\) is connected, then \(m(G,1)=1\).
Given an infinite graph \(G\) and a positive integer \(r\), we let \(m(G,r)=\ell<\infty\) if \(\ell\) is the minimum size of a set \(S\) of vertices in \(G\) that are initially set as infected such that every vertex of \(G\) becomes infected by the \(r\)-neighbor bootstrap percolation process in a finite number of steps. Otherwise, if there is no such finite set \(S\), we let \(m(G,r)=\infty\). By
\[\mbox{fpt}(G)=\sup\{r:\,m(G,r)<\infty\}\]
we define the _finiteness percolation threshold_ of a graph \(G\).
Clearly, if \(G\) is the complete (infinite) graph, then \(\mbox{fpt}(G)=\infty\). In addition, \(\mbox{fpt}(G)=\infty\) is true for any finite graph \(G\), since \(m(G,r)\leq|V(G)|\) holds for any finite graph and any positive integer \(r\). On the other hand, it is easy to see that \(\mbox{fpt}(\mathbb{Z})=1\), where \(\mathbb{Z}\) is the two-way infinite path. As usual, let \(\mathbb{Z}^{\boxtimes,n}=\mathbb{Z}\boxtimes\cdots\boxtimes\mathbb{Z}\) stand for the strong product of \(n\) two-way infinite paths. We simplify the notation \(\mathbb{Z}^{\boxtimes,n}\) to \(\mathbb{Z}^{n}\). We wish to determine \(\mbox{fpt}(\mathbb{Z}^{n})\) for every \(n\in\mathbb{N}\), and we can use some results from previous sections to bound this value. The following result follows immediately from Theorem 2.7 (for \(n\in\{2,3\}\), one can invoke Theorems 2.4 and 2.6 instead).
**Corollary 4.1**: _For every \(n\in\mathbb{N}\), \(\mbox{fpt}(\mathbb{Z}^{n})\geq 2^{n}-1\)._
The following upper bound for the threshold of \(\mathbb{Z}^{n}\) comes with an easy proof.
**Proposition 4.2**: _For every \(n\in\mathbb{N}\), \(\mbox{fpt}(\mathbb{Z}^{n})\leq 3^{n-1}\)._
**Proof.** Let \(G=\mathbb{Z}^{n}\) for some \(n\in\mathbb{N}\) and assume that \(m(G,3^{n-1}+1)<\infty\). Let \(S\) be a finite percolating set of \(G\). Without loss of generality, possibly by using a linear translation of \(S\), we may assume that \(S\subseteq[k]^{n}\) for some positive integer \(k\). Let \(x\in V(G)\setminus[k]^{n}\). Then \(x\) has at most \(3^{n-1}\) neighbors in \([k]^{n}\), therefore the infection cannot spread out of the box \([k]^{n}\), a contradiction. Thus, \(\mbox{fpt}(\mathbb{Z}^{n})\leq 3^{n-1}\). \(\Box\)
Combining Corollary 4.1 with Proposition 4.2 we get the exact value of the finiteness percolation threshold of the strong grid \(\mathbb{Z}^{2}\).
**Corollary 4.3**: \(\mbox{fpt}(\mathbb{Z}^{2})=3\)_._
The following result gives the finiteness percolation threshold of \(\mathbb{Z}^{3}\), which needs more effort.
**Theorem 4.4**: \(\mbox{fpt}(\mathbb{Z}^{3})=9\)_._
**Proof.** From Proposition 4.2 we get that \(\mbox{fpt}(\mathbb{Z}^{3})\leq 9\). To obtain the desired equality we need to show that \(m(\mathbb{Z}^{3},9)<\infty\). Let \(S=[5]^{3}\) and let \(G=\mathbb{Z}^{3}\). We will prove that \(S\) percolates \(G\) in the 9-neighbor bootstrap percolation process. For this purpose we will first show that vertices in \([5]\times[5]\times[6]\) get infected.
Firstly, note that vertices in \(\{2,3,4\}\times\{2,3,4\}\times\{6\}\) have 9 neighbors in \(S\). Namely, any vertex \((x_{1},x_{2},6)\) where \(x_{i}\in\{2,3,4\}\) for \(i\in[2]\) is adjacent to \((y_{1},y_{2},5)\), where \(y_{i}\in\{x_{i}-1,x_{i},x_{i}+1\}\) for \(i\in[2]\). Secondly, vertex \(x=(1,3,6)\) has 6 neighbors in \(S\), namely \((y_{1},y_{2},5)\) where \(y_{1}\in\{1,2\}\) and \(y_{2}\in\{2,3,4\}\). Vertex \(x\) is also adjacent to vertices \((2,y_{2},6)\) where \(y_{2}\in\{2,3,4\}\), which were infected in the first step. Thus, \(x\) has 9 infected neighbors and gets infected. By using symmetric arguments, vertices
\((3,1,6),(5,3,6)\) and \((3,5,6)\) also get infected. Thirdly, vertex \(x=(1,2,6)\) is adjacent to \((y_{1},y_{2},5)\), where \(y_{1}\in\{1,2\}\) and \(y_{2}\in\{1,2,3\}\), which are all in \(S\). Vertex \(x\) is also adjacent to vertices \((2,y_{2},6)\) for \(y_{2}\in\{2,3\}\) and also to \((1,3,6)\), all of which were infected in previous steps. Since \(x\) has \(9\) infected neighbors, it gets infected. By symmetry, vertices \((1,4,6),(5,2,6),(5,4,6),(2,1,6)\), \((2,5,6),(4,1,6)\) and \((4,5,6)\) also get infected.
So far, with the exception of corner vertices, that is, vertices \((1,1,6),(5,1,6),(1,5,6)\) and \((5,5,6)\), all other vertices in \([5]\times[5]\times\{6\}\) have been infected. Again by using symmetry we infer that with the exception of corner vertices all other vertices of \([5]\times[5]\times\{0,6\}\), \([5]\times\{0,6\}\times[5]\) and \(\{0,6\}\times[5]\times[5]\) become infected. More precisely, the corner vertices that have not yet been infected are of the form \(x=(x_{1},x_{2},x_{3})\), where \(x_{i}\in\{0,6\}\) for an \(i\in[3]\) and \(x_{j}\in\{1,5\}\) for all \(j\neq i\). Now, without loss of generality, consider the corner vertex \(x=(5,1,6)\). Note that \(x\) is adjacent to vertices \((4,1,6),(4,2,6),(5,2,6),(4,1,5),(4,2,5),(5,1,5)\) and \((5,2,5)\) as well as to \((4,0,5)\) and \((6,2,5)\), all of which have been infected. By symmetric arguments we infer that the other three corner vertices of \([5]\times[5]\times[6]\) become infected, by which all vertices of \([5]\times[5]\times[6]\) are infected.
Now, by repeating this infection process, we can eventually infect any vertex in \([5]\times[5]\times\{-k,-k+1,\ldots,k-1,k\}\) for any \(k\in\mathbb{Z}\). Finally, similarly as in the proof of Theorem 2.7, reversing the roles of factors, we deduce that eventually every vertex in \(\{-k,-k+1,\ldots,k-1,k\}^{3}\) becomes infected, therefore \(S\) percolates. \(\Box\)
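While no finite computation can establish percolation of the infinite graph \(\mathbb{Z}^{3}\), the spreading step in the above proof can be observed on a finite window. The sketch below is our addition; the bounded \(15\times 15\times 15\) window, the recentred seed box, and the size of the checked core are assumptions of this experiment, and boundary vertices of the window behave differently from \(\mathbb{Z}^{3}\).

```python
# A hedged finite-window illustration of the proof of Theorem 4.4 (r = 9).
import itertools
import networkx as nx

def percolate(G, seed, r):
    infected = set(seed)
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= r}
        if not new:
            return infected
        infected |= new

N = 15
nodes = list(itertools.product(range(N), repeat=3))
G = nx.Graph()
G.add_nodes_from(nodes)
for u in nodes:  # strong-product ("king move") adjacency inside the window
    for d in itertools.product((-1, 0, 1), repeat=3):
        v = tuple(a + b for a, b in zip(u, d))
        if v != u and all(0 <= c < N for c in v):
            G.add_edge(u, v)

seed = set(itertools.product(range(5, 10), repeat=3))  # the box [5]^3, recentred
infected = percolate(G, seed, 9)
core = set(itertools.product(range(3, 12), repeat=3))  # the seed box grown twice
assert core <= infected
print(f"{len(infected)} of {N ** 3} window vertices infected; the core is full")
```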
## 5 Concluding remarks
In this paper, we considered the \(r\)-neighbor bootstrap percolation of a strong product of \(k\) graphs obtaining or bounding the values of \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)\). The results are divided into several cases, in which \(r\) can be expressed as a function of \(k\). In the basic case, where \(k\geq 2\) and \(r\leq 2^{k-1}\), we show \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)=r\), by which we generalize the result [12, Theorem 3.1] due to Coelho et al. This case is improved when there are at least two non-edge factors in the strong product. More precisely, when there are two non-edge factors and \(r\leq 3\cdot 2^{k-2}\), we get \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)=r\), while for three non-edge factors and \(r\leq 7\cdot 2^{k-3}\), we get the upper bound \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)\leq 7\cdot 2^{k-3}\). The following question is thus natural.
**Question 1**: _Let \(G\) be the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\) of connected graphs \(G_{1},\ldots,G_{k}\), among which there are \(\ell\) non-edge factors. For which \(\ell\in\{3,\ldots,k\}\) does it hold that_
\[m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)\leq(2^{\ell}-1)\cdot 2^{k-\ell},\]
_where \(r\leq(2^{\ell}-1)\cdot 2^{k-\ell}\)?_
It is likely that in the answer to the question above, \(\ell\) is expressed as a function of \(k\). Clearly, when \(\ell=3\) the answer is positive by Theorem 2.6. In fact, when \(\ell=3\) we suspect that the upper bound can be improved so that the equality \(m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)=r\) holds for all \(r\leq 7\cdot 2^{k-3}\). We extend this into the following question.
**Question 2**: _Let \(G\) be the strong product \(G_{1}\boxtimes\cdots\boxtimes G_{k}\) of connected graphs \(G_{1},\ldots,G_{k}\), among which there are \(\ell\) non-edge factors. For which \(\ell\in\{3,\ldots,k\}\) does it hold that_
\[m(G_{1}\boxtimes\cdots\boxtimes G_{k},r)=r,\]
_where \(r\leq(2^{\ell}-1)\cdot 2^{k-\ell}\)?_
Note that the positive answer to Question 2 when \(\ell=k\) would mean a considerable improvement of Theorem 2.7, which seems unlikely.
In the case of two factors (that is, \(k=2\)) the situation has been resolved for \(r\in\{2,3\}\), and in part also for \(r\in\{4,5\}\). It would be interesting to find a generalization of Theorem 3.6 in which \(m(G\boxtimes H,r)\) would depend on the values of \(m(G,r_{1})\) and \(m(H,r_{2})\) for some \(r_{1}\) and \(r_{2}\), which are smaller than \(r\).
The main open problem arising in Section 4 is to determine \(\operatorname{fpt}(\mathbb{Z}^{n})\) for all \(n\geq 4\). In particular, the following question is also open:
**Question 3**: _For which \(n\geq 2\) do we have \(\operatorname{fpt}(\mathbb{Z}^{n})=3^{n-1}\)?_
Clearly, the above question has a positive answer for \(n\in\{2,3\}\). We suspect that \(\operatorname{fpt}(\mathbb{Z}^{n})<3^{n-1}\) for all sufficiently large \(n\).
It would also be interesting to consider the \(r\)-neighbor bootstrap percolation in other classes of infinite graphs.
## Acknowledgement
B.B. was supported by the Slovenian Research Agency (ARRS) under the grants P1-0297, J1-2452, J1-3002, and J1-4008.
|
2302.03185 | Regulating Oligopolistic Competition | We consider the problem of how to regulate an oligopoly when firms have
private information about their costs. In the environment, consumers make
discrete choices over goods, and minimal structure is placed on the manner in
which firms compete. In the optimal regulatory policy, the regulator need only
solicit prices from firms, and based on those prices, charge them taxes or give
them subsidies, and impose on each firm a ``yardstick'' price cap that depends
on the posted prices of competing firms. | Kai Hao Yang, Alexander K. Zentefis | 2023-02-07T01:34:05Z | http://arxiv.org/abs/2302.03185v3 | # Regulating Oligopolistic Competition+
###### Abstract
We consider the problem of how to regulate an oligopoly when firms have private information about their costs. In the environment, consumers make discrete choices over goods, and minimal structure is placed on the manner in which firms compete. In the optimal regulatory policy, the regulator need only solicit prices from firms, and based on those prices, charge them taxes or give them subsidies, and impose on each firm a "yardstick" price cap that depends on the posted prices of competing firms.
**JEL classification:** D40, D82, L5
**Keywords:** regulation, price caps, mechanism design
## 1 Introduction
In a seminal paper, Baron and Myerson (1982) derive the optimal regulation of a monopolist whose costs are unknown to the regulator. In their model, the monopolist faces a commonly known inverse demand function, and the regulator's problem is to determine whether the monopolist is allowed to produce at all, and if so, how the monopolist's price and transfer should be determined, as functions of the production cost the monopolist reports. Baron and Myerson show that the optimal regulatory policy characterizes a price schedule for all monopolist types permitted to operate, where production is permitted only for types for which consumer surplus under the optimal policy exceeds the fixed cost of production.1
Footnote 1: See Amador and Bagwell (2021) and Guo and Shmaya (2019) for other recent papers related to Baron and Myerson (1982).
In this paper, we generalize Baron and Myerson (1982) to an oligopoly setting. Firms have independent, one-dimensional private information about their costs, and people choose one among many differentiated goods to consume. These two changes to the Baron-Myerson model substantially enrich the regulatory problem. Instead of a one-dimensional demand quantity as in Baron and Myerson (1982), the relevant allocation is the entire distribution of matches between consumers and firms. And with multiple firms, the regulator must now take into account the strategic manner in which firms compete (i.e., the model of market conduct).
To derive the optimal regulatory policy, we follow the Baron-Myerson approach of searching for an incentive-efficient allocation that maximizes a linear social welfare function of consumer surplus and firms' profits. But we define the class of candidate indirect mechanisms broadly enough to include a wide range of strategies that firms might employ when competing. This expansive treatment accommodates the fact that styles of competition can significantly differ between markets. We demonstrate that the class of indirect mechanisms we search over nests competing on price a la Bertrand (1883), competing on quantity a la Cournot (1838), competing over differentiated goods (Perloff and Salop 1985), and consumer search (Varian 1980; Narasimhan 1988; Armstrong and Vickers 2019). Searching across this large class, we prove that every constrained efficient indirect mechanism is equivalent to price competition, but with lump-sum transfers and a certain form of price controls--namely, firm
specific price ceilings that depend on the prices of competitors (i.e., "yardstick" price caps).2
Footnote 2: Wang (2000) also studies the problem of regulating an oligopoly with unknown costs. The setting there is somewhat less general, presuming homogeneous goods, two firm-cost types, no fixed costs, and a one-dimensional demand curve.
The intuition behind our main characterization is as follows: Because we presume independent private types across firms, we can adopt the Myersonian approach to keep track of revenues as functions of an allocation of goods to consumers. Doing so implies that the efficient allocation must allocate goods to consumers who have the highest ex-post _virtual surplus_ on the intensive margin (i.e., the difference between a consumer's value for a good and the virtual marginal cost of the firm supplying the good). At the same time, the efficient allocation must also select the set of firms that generate the highest ex-ante virtual surplus on the extensive margin when taking fixed costs into account. Price competition implements the efficient allocation on the intensive margin. By properly designing lump-sum transfers between consumers and firms as functions of firms' prices, one can incentivize firms to post prices that exactly reflect their virtual marginal cost. With virtual marginal costs (and, hence, firms' private types) being reflected in prices, one can then select the most efficient firms based solely on the information collected from posted prices on the extensive margin. This selection criterion leads to the firm-specific price ceilings.
The rest of this paper is organized as follows: In Section2, we introduce the model, define our indirect mechanisms, and specify the welfare criterion for efficiency. Section3 states the main result, and Section4 has the proof. Section5 provides an example of an efficient regulatory policy. Section6 concludes.
## 2 Model
### Primitives
A number \(N\geq 1\) of firms produce \(N\) heterogeneous goods. Each firm \(i\) has cost function \(C_{i}(q)=\theta_{i}(q+\kappa_{i})\), where \(q\) is quantity and \(\kappa_{i}\geq 0\) is commonly known. Meanwhile, \(\theta_{i}\geq 0\) represents the firm's cost efficiency and is private information. A lower \(\theta_{i}\) implies that a firm is more cost-efficient. We assume that the components of \(\theta=(\theta_{i})_{i=1}^{N}\in\mathbb{R}^{N}\) are independent and that each \(\theta_{i}\) follows a distribution \(G_{i}\) with support \(\Theta_{i}:=[\underline{\theta}_{i},\overline{\theta}_{i}]\), where \(0\leq\underline{\theta}_{i}\leq\overline{\theta}_{i}<\infty\).
A unit mass of consumers stands ready to purchase. Each consumer has unit demand and heterogeneous values \(\mathbf{v}\in V\subseteq\mathbb{R}_{+}^{N}\), so that a consumer with value vector \(\mathbf{v}=(v_{1},\ldots,v_{N})\) has value \(v_{i}\) for firm \(i\)'s good. The consumers' values are distributed according to a measure \(F\in\Delta(V)\).
### Indirect Mechanisms
An indirect mechanism \(\mathcal{M}\) is a tuple \(\mathcal{M}=(S_{i},r_{i},\boldsymbol{\mu}_{i},t_{i})_{i=1}^{N}\) that assigns firms' strategies to (i) market entry probabilities, (ii) an allocation of goods to consumers, and (iii) firm revenues; where, for all \(i\), \(S_{i}\) is an arbitrary (measurable) set, \(r_{i}\) is a mapping from \(S:=\prod_{i=1}^{N}S_{i}\) to \([0,1]\), \(t_{i}\) is a mapping from \(S\) to \(\mathbb{R}\), and \(\boldsymbol{\mu}_{i}\) is a mapping from \(V\times S\) to \([0,1]\), such that \(\sum_{i=1}^{N}\boldsymbol{\mu}_{i}(\mathbf{v}|s)\leq 1\) for all \((\mathbf{v},s)\in V\times S\).
For any indirect mechanism, \(S_{i}\) describes firm \(i\)'s available strategies (e.g., chosen price, chosen quantity, or entry decision). Given any strategy profile \(s\in S=\prod_{i=1}^{N}S_{i}\), \(r_{i}(s)\in[0,1]\) denotes the probability that firm \(i\) enters the market; \(\boldsymbol{\mu}_{i}(\mathbf{v}|s)\in[0,1]\) represents the share of consumers with value \(\mathbf{v}\) who receive firm \(i\)'s good, conditional on firm \(i\) being in the market; and \(t_{i}(s)\) is the revenue of firm \(i\). We normalize the firms' outside options to zero, and we require that any indirect mechanism must allow an opt-out option \(s_{0}\in S_{i}\) such that \(t_{i}(s_{0},s_{-i})=\boldsymbol{\mu}_{i}(\mathbf{v}|s_{0},s_{-i})=r_{i}(s_{0},s_{-i})=0\) for all \(i\), for all \(s_{-i}\in S_{-i}\), and for all \(\mathbf{v}\in V\).
Given \(\mathcal{M}\), the timing of events is as follows: (1) types \(\{\theta_{i}\}_{i=1}^{N}\) are drawn independently from \(\{G_{i}\}_{i=1}^{N}\), and each firm privately observes its own type; (2) firms simultaneously choose \(s_{i}\) from \(S_{i}\); and (3) each firm \(i\) receives ex-post payoff
\[\pi_{i}(s,\theta_{i}|\mathcal{M}):=t_{i}(s)-r_{i}(s)\theta_{i}\left(\int_{V} \boldsymbol{\mu}_{i}(\mathbf{v}|s)F(\mathrm{d}\mathbf{v})+\kappa_{i}\right).\]
Notice that \(\mathcal{M}\) defines a Bayesian game where each firm \(i\) has private type \(\theta_{i}\in\Theta_{i}\), strategy space \(S_{i}\), and payoff function \(\pi_{i}(s,\theta_{i}|\mathcal{M})\).
### Example Indirect Mechanisms
Here we provide three example indirect mechanisms: price competition, quantity competition, and consumer search. Other examples are in Online Appendix B.1.
_Example 1_ (**Price Competition**).: The following mechanism describes a price competition model. Under this mechanism, all \(N\) firms operate in the market and compete on the price margin (i.e., each firm \(i\) sets price \(s_{i}\geq 0\)). After seeing firms' prices \(s=(s_{1},\ldots,s_{N})\), a consumer buys from the firm providing the highest surplus.
Specifically, each firm \(i\) has strategy space \(S_{i}=\mathbb{R}_{+}\). Under any strategy profile \(s\in S\), firm \(i\)'s entry probability is \(r_{i}(s)=1\) and revenue is \(t_{i}(s)=s_{i}\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|s)F(\mathrm{d}\mathbf{v})\), where \(\boldsymbol{\mu}_{i}\) is given by
\[\boldsymbol{\mu}_{i}(\mathbf{v}|s)=\left\{\begin{array}{cl}\frac{1}{|\mathbb{M}(\mathbf{v},s)|},&\mbox{if $v_{i}-s_{i}=\max_{j}\{v_{j}-s_{j}\}$ and $v_{i}\geq s_{i}$}\\ 0,&\mbox{otherwise}\end{array}\right.,\]
for all \(i\in\{1,\ldots,N\}\) and for all \(s\in S\), with \(\mathbb{M}(\mathbf{v},s):=\mathrm{argmax}_{i}\{v_{i}-s_{i}\}\).3
Footnote 3: Notice that with different specifications of the value distribution \(F\), this mechanism corresponds to various canonical competition models. In particular, by assuming that \(\mathbf{v}\) is perfectly correlated (i.e., \(v_{1}=\ldots=v_{N}=v\) with \(F\)-probability 1), we have the classical Bertrand competition model (Bertrand, 1883), but with private marginal costs; by assuming that \(\mathbf{v}\) is independent, we have the model à la Perloff and Salop (1985); by assuming that \(N=2\) and that \(\mathbf{v}\) is perfectly negatively correlated (i.e., \(v_{1}+v_{2}=1\) with \(F\)-probability 1), we have the Hotelling location model (Hotelling, 1929).
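To make the allocation rule in Example 1 concrete, the following Python sketch (our own illustration, not from the paper) computes \(\boldsymbol{\mu}(\mathbf{v}|s)\) for a single consumer; the function name and the tie tolerance are our assumptions.

```python
import numpy as np

def price_competition_shares(v, s, tol=1e-12):
    """Allocation shares mu_i(v|s) under Example 1 (price competition).

    v : array of the consumer's values (v_1, ..., v_N)
    s : array of posted prices (s_1, ..., s_N)
    Returns an array mu with sum(mu) <= 1; ties split uniformly
    among the argmax set M(v, s) = argmax_i {v_i - s_i}.
    """
    v, s = np.asarray(v, float), np.asarray(s, float)
    surplus = v - s
    best = surplus.max()
    mu = np.zeros_like(v)
    if best < 0:                              # consumer takes the outside option
        return mu
    winners = np.abs(surplus - best) <= tol   # the set M(v, s)
    mu[winners] = 1.0 / winners.sum()
    # a winner automatically satisfies v_i >= s_i since best >= 0
    return mu

# e.g. price_competition_shares([0.9, 0.7], [0.5, 0.3]) -> [0.5, 0.5]
```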
_Example 2_ (**Quantity Competition**).: Suppose that \(F\) is atomless. Then there exists an indirect mechanism that describes quantity competition of which the classical Cournot model (Cournot, 1838) is a special case. Under this mechanism, each firm \(i\) chooses quantity \(s_{i}\in[0,1]\) it wishes to sell. Market prices (and, hence, the allocation of goods) are determined through a system of inverse demand functions \(\{\boldsymbol{p}_{i}\}_{i=1}^{N}\). For any \(i\), \(\boldsymbol{\mu}_{i}\) is defined so that firm \(i\) sells \(s_{i}\) units at price \(\boldsymbol{p}_{i}(s)\) if \(\sum_{j}s_{j}\leq 1\), and sells \(\frac{s_{i}}{\sum_{j}s_{j}}\) units at price \(0\) if \(\sum_{j}s_{j}>1\). This arrangement is strategically equivalent to a quantity competition game with inverse demand functions \(\{\boldsymbol{p}_{i}\}_{i=1}^{N}\).4
Footnote 4: See more details in Online Appendix B.1.
_Example 3_ (**Consumer Search and Promotional Sales**).: Consider the price competition mechanism given in Example 1, except that \(\boldsymbol{\mu}_{i}\) becomes
\[\boldsymbol{\mu}_{i}(\mathbf{v}|s)=\left\{\begin{array}{cl}\gamma_{i}+\left(1-\sum_{j=1}^{N}\gamma_{j}\right)\frac{\mathbf{1}\{i\in\mathbb{M}(\mathbf{v},s)\}}{|\mathbb{M}(\mathbf{v},s)|},&\mbox{if $v_{i}\geq s_{i}$}\\ 0,&\mbox{if $v_{i}<s_{i}$}\end{array}\right..\]
This indirect mechanism then describes a model with "captive consumers" and "shoppers," where each firm \(i\) has \(\gamma_{i}\in[0,1]\) share of captive consumers who can only see its price, while
the remaining consumers can visit all firms and see all firms' prices.5
Footnote 5: Notice that if \(\mathbf{v}\) is perfectly correlated so that with \(F\)-probability 1, \(v_{1}=v_{2}=\cdots=v_{N}\), this mechanism describes the promotional sales model of Armstrong and Vickers (2019), which in turn nests the consumer search model of Varian (1980) and Narasimhan (1988).
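The captive-and-shoppers allocation in Example 3 differs from Example 1 only in how demand is split. A minimal sketch, with hypothetical names (our own, not the paper's):

```python
import numpy as np

def captive_shopper_shares(v, s, gamma, tol=1e-12):
    """mu_i(v|s) for Example 3: firm i keeps its captive share gamma_i
    whenever v_i >= s_i, while the shoppers (mass 1 - sum(gamma)) buy
    from the firm(s) offering the highest nonnegative surplus."""
    v, s, gamma = (np.asarray(x, float) for x in (v, s, gamma))
    mu = np.where(v >= s, gamma, 0.0)          # captive consumers
    surplus = v - s
    best = surplus.max()
    if best >= 0:
        winners = np.abs(surplus - best) <= tol
        mu[winners] += (1.0 - gamma.sum()) / winners.sum()
    return mu
```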
Discussion. As hinted by the examples above, _any_ Bayesian game that models competition among \(K\leq N\) firms is included in the class of indirect mechanisms we consider. As such, our analysis of mechanisms applies to all possible static models of competition with fixed preferences and technology, regardless of a model's assumptions about firm conduct, market power, or price determination. Any dynamic model that can be represented in strategic form is also eligible, as are markets in which prices are determined via bilateral bargaining.
Of course, not all competitive games among \(K\leq N\) firms have an equilibrium. Furthermore, even if an equilibrium exists, some equilibria might be extremely difficult to characterize. A great benefit of our framework is that, as explained below, it bypasses explicit characterizations of equilibria and only focuses on the outcomes. Across this broad range, our main interest is to characterize the efficient indirect mechanisms and explore ways to implement them. To this end, we first formally define our notion of efficiency.
### Defining Efficiency
For any indirect mechanism \(\mathcal{M}=(S_{i},r_{i},\boldsymbol{\mu}_{i},t_{i})_{i=1}^{N}\), and for any Bayes-Nash equilibrium \(\sigma=\prod_{i=1}^{N}\sigma_{i}\) of the induced Bayesian game, where \(\sigma_{i}:\Theta_{i}\to\Delta(S_{i})\) is firm \(i\)'s equilibrium strategy, let
\[\Pi_{i}(\theta_{i}|\mathcal{M},\sigma):=\mathbb{E}_{\theta_{-i}}\left[\int_{S }\pi_{i}(s,\theta_{i}|\mathcal{M})\sigma(\mathrm{d}s|\theta)\right]\]
denote firm \(i\)'s interim profit, and let
\[\Sigma(\mathcal{M},\sigma):=\mathbb{E}_{\theta}\left[\int_{V\times S}\sum_{i=1}^{N}v_{i}\boldsymbol{\mu}_{i}(\mathbf{v}|s)\sigma(\mathrm{d}s|\theta)F(\mathrm{d}\mathbf{v})-\sum_{i=1}^{N}\int_{S}t_{i}(s)\sigma(\mathrm{d}s|\theta)\right]\]
denote the expected consumer surplus. With this notation, we have the following definition of Pareto dominance:
**Definition 1**.: An indirect mechanism \((S,r,\boldsymbol{\mu},t)\) and a Bayes-Nash equilibrium \(\sigma\) _dominate_ another indirect mechanism \((S^{\prime},r^{\prime},\mathbf{\mu}^{\prime},t^{\prime})\) and Bayes-Nash equilibrium \(\sigma^{\prime}\) if
\[\Sigma(S,r,\mathbf{\mu},t;\sigma)\geq\Sigma(S^{\prime},r^{\prime},\mathbf{\mu}^{\prime},t^{\prime};\sigma^{\prime})\]
and
\[\Pi_{i}(\theta_{i}|S,r,\mathbf{\mu},t;\sigma)\geq\Pi_{i}(\theta_{i}|S^{\prime},r^ {\prime},\mathbf{\mu}^{\prime},t^{\prime};\sigma^{\prime})\]
for all \(i\) and for all \(\theta_{i}\in\Theta_{i}\), with at least one inequality being strict.
We can then define (constrained) efficiency in the usual sense:
**Definition 2**.: An indirect mechanism \((S,r,\mathbf{\mu},t)\) is (constrained) _efficient_ if there exists a Bayes-Nash equilibrium \(\sigma\) in the Bayesian game induced by \((S,r,\mathbf{\mu},t)\) such that no other indirect mechanisms and Bayes-Nash equilibria dominate \((S,r,\mathbf{\mu},t)\) and \(\sigma\).
Just as in Baron and Myerson (1982), consumers are not explicitly included as agents in an indirect mechanism. Rather, they are implicitly embedded into the allocation rules \(\mathbf{\mu}\) (just as they are embedded into the market demand in Baron and Myerson 1982). A consequence of this formulation is that consumers do not have participation constraints. Thus, there might be indirect mechanisms that leave consumers with negative surplus while granting firms unbounded revenue via taxation and subsidy. To rule out these trivial cases, we focus hereafter on mechanisms that leave consumers with non-negative surplus. One way to incorporate this constraint is to represent efficiency with a social planner maximizing a weighted sum of consumer surplus and firm profits, and, on average, assigning a relatively higher weight to consumers than to firms. We summarize this observation in the following lemma:
**Lemma 1**.: _An indirect mechanism \((S,r,\mathbf{\mu},t)\) is (constrained) efficient with consumers obtaining non-negative surplus if and only if there exist a Bayes-Nash equilibrium \(\sigma\) of the induced game and a collection of nondecreasing, right-continuous functions \(\{\Lambda_{i}\}_{i=1}^{N}\) on \(\Theta_{i}\), with \(0\leq\Lambda_{i}(\theta_{i})\leq G_{i}(\theta_{i})\), such that, for every indirect mechanism \(\mathcal{M}^{\prime}\) and every Bayes-Nash equilibrium \(\sigma^{\prime}\) of the game it induces,_

\[\Sigma(\mathcal{M},\sigma)+\sum_{i=1}^{N}\int_{\Theta_{i}}\Pi(\theta_{i}|\mathcal{M},\sigma)\Lambda_{i}(\mathrm{d}\theta_{i})\geq\Sigma(\mathcal{M}^{\prime},\sigma^{\prime})+\sum_{i=1}^{N}\int_{\Theta_{i}}\Pi(\theta_{i}|\mathcal{M}^{\prime};\sigma^{\prime})\Lambda_{i}(\mathrm{d}\theta_{i}). \tag{1}\]
In essence, Lemma 1 uses the familiar method that represents the Pareto frontier with solutions of a planner's problem, where the planner maximizes a weighted sum of consumer surplus and firm profits. The Pareto weights for consumers are normalized to 1, whereas the weights for firm \(i\) are given by \(\{\Lambda_{i}(\theta_{i})\}_{\theta_{i}\in\Theta_{i}}\).6 The restriction that \(\Lambda_{i}(\theta_{i})\leq G_{i}(\theta_{i})\) ensures that firms on average receive lower weights than consumers do. In the special case where \(N=1\) and \(\Lambda_{i}(\theta_{i})=(1-\alpha)G_{i}(\theta_{i})\), (1) exactly matches the regulator's objective in Baron and Myerson (1982). In other words, the restriction that \(\Lambda_{i}(\theta_{i})\leq G_{i}(\theta_{i})\) can be regarded as a multi-firm and interim analogue of the parameterization with \(\alpha\in[0,1]\) used in Baron and Myerson (1982).
Footnote 6: This is because the dominance criterion is applied in the interim stage for each realization of types \(\theta_{i}\).
## 3 Efficiency of PRYCE CAP Mechanisms
In what follows, we present our main result. As noted in the previous section, our central interest is in characterizing the efficient mechanisms and exploring practical ways to implement them. Although there are infinitely many possible mechanisms and some of them can be extremely complex, we show that the efficient ones are "simple," in the sense that any efficient indirect mechanism is equivalent to one that belongs to a natural class. This class of mechanisms involves price competition with lump-sum transfers and firm-specific price caps that depend on the chosen prices of competitors. We call these price ceilings _yardstick price caps_, and we refer to this class as PRYCE CAP mechanisms, which we define next.7
Footnote 7: In naming the price caps, we use the word "yardstick" in a way similar to Shleifer (1985)'s use of the word, in that a firm's regulation depends on characteristics of other firms.
**Definition 3**.: \(\mathcal{M}=(S_{i},r_{i},\boldsymbol{\mu}_{i},t_{i})_{i=1}^{N}\) is a price competition mechanism with lump-sum transfers and yardstick price caps (PRYCE CAP) if, for any \(i\),
1. \(S_{i}=\mathbb{R}_{+}\).
2. For any \(s\in S\), \(r_{i}(s)=\mathbf{1}\{s_{i}\leq\bar{p}_{i}(s_{-i})\}\), for some \(\bar{p}_{i}:S_{-i}\rightarrow\mathbb{R}_{+}\cup\{\infty\}\).
3. For any \(\mathbf{v}\in V\), and for any \(s\in S\), \[\boldsymbol{\mu}_{i}(\mathbf{v}|s)=\left\{\begin{array}{ll}1,&\mbox{if }v_{i}-s_{i}> \max_{\{j|r_{j}(s)=1,\,j\neq i\}}(v_{j}-s_{j})^{+}\mbox{ and }r_{i}(s)=1\\ 0,&\mbox{if }v_{i}-s_{i}<\max_{\{j|r_{j}(s)=1,\,j\neq i\}}(v_{j}-s_{j})^{+} \mbox{ or }r_{i}(s)=0\end{array}\right..\]
4. For any \(s\in S\), \(t_{i}(s)=s_{i}\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|s)F(\mathrm{d}\mathbf{v})- \tau_{i}(s_{i})\), for some \(\tau_{i}:S_{i}\to\mathbb{R}\).
Under a PRYCE CAP mechanism, each firm \(i\) simultaneously announces a price \(s_{i}\geq 0\). Given the announced prices \(s=(s_{1},\ldots,s_{N})\), a firm is first selected into the market based on whether its announced price \(s_{i}\) is below its price cap \(\bar{p}_{i}(s_{-i})\). The price caps and the rules for market entry are thus intimately linked. When choosing a price to announce, a firm accounts for both its own price cap and the effect that its choice will have on the price caps of other firms. Among the firms that enter the market, consumers then see the announced prices and decide which firm to buy from. Finally, each firm is compensated or taxed via lump-sum transfers from consumers. This transfer amount \(\tau_{i}(s_{i})\) depends only on a firm's own price.
Notice that if \(\bar{p}_{i}(s_{-i})=\infty\) and \(\tau_{i}(s_{i})=0\) for all \(i\) and for all \(s\), a PRYCE CAP mechanism reduces to a pure price competition model (see Example 1 above). From this perspective, PRYCE CAP mechanisms can be regarded as generalizations of pure price competition models that are commonly assumed, with the differences being re-distributional transfers \(\{\tau_{i}\}_{i=1}^{N}\) and yardstick price caps \(\{\bar{p}_{i}\}_{i=1}^{N}\).
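Mechanically, one round of a PRYCE CAP mechanism is easy to express in code. The following Python sketch is our own schematic of Definition 3 (names such as `pryce_cap_round` are ours, and `p_bar` and `tau` are placeholders for the cap and transfer functions, which in general must be derived from the primitives as in Section 4):

```python
import numpy as np

def pryce_cap_round(s, p_bar, tau, consumers):
    """One round of a PRYCE CAP mechanism (Definition 3), schematically.

    s         : posted prices (s_1, ..., s_N)
    p_bar     : callable (i, s_minus_i) -> yardstick price cap for firm i
    tau       : callable (i, s_i) -> lump-sum transfer charged to firm i
    consumers : (num_consumers, N) array of sampled value vectors
    Returns (entry flags r, demand shares q, revenues t).
    """
    s = np.asarray(s, float)
    N = len(s)
    r = np.array([s[i] <= p_bar(i, np.delete(s, i)) for i in range(N)])
    q = np.zeros(N)
    for v in consumers:                 # each consumer picks the best active firm
        surplus = np.where(r, v - s, -np.inf)
        i = int(np.argmax(surplus))     # ties broken by index, for simplicity
        if surplus[i] >= 0:
            q[i] += 1.0 / len(consumers)
    t = s * q - np.array([tau(i, s[i]) for i in range(N)])
    return r, q, t
```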
With the formal definition of PRYCE CAP mechanisms presented, we now state our main result.
**Theorem 1**.: _Any efficient indirect mechanism is equivalent to a PRYCE CAP mechanism._
The significance of Theorem 1 is that, among infinitely many indirect mechanisms, the efficient ones are equivalent to PRYCE CAP mechanisms. This means that price competition, together with interventions solely in the form of lump-sum transfers and price ceilings, is enough to achieve constrained Pareto efficiency. PRYCE CAP mechanisms emerge as efficient out of an expansive set of market environments in which firms and consumers partake, with each environment potentially experiencing enormously complicated forms of firm conduct, barriers to entry, and regulatory policies.
Furthermore, Theorem 1 implies that omniscient knowledge about the market setting is not required to implement an efficient regulation. More precisely, implementing a PRYCE CAP mechanism does not require knowledge about each individual consumer's value vector \(\mathbf{v}\). With proper lump-sum transfers and yardstick price caps, firms would post correct prices and consumers would sort themselves into the efficient allocation.
The fact that any efficient regulation is equivalent to a PRYCE CAP mechanism sheds light on which regulatory policies are necessary and which are not. After all, Theorem 1 implies that if firms compete on price, any regulatory policy other than lump-sum transfers and price ceilings is unwarranted for reaching efficiency in an environment like ours. In other words, lump-sum transfers and price ceilings can be regarded as _minimal_ regulatory policies.
## 4 Proof of Theorem 1
This section provides the proof of Theorem 1. First, notice that by the revelation principle (Myerson, 1979), it is without loss to restrict attention to incentive compatible and individually rational direct mechanisms. A direct mechanism is a tuple \((S,r,\boldsymbol{\mu},t)\) where \(S_{i}=\Theta_{i}\) for all \(i\). For simplicity, we refer to a direct mechanism as a mechanism, and we denote it by \((r,\boldsymbol{\mu},t)\) hereafter when there is no confusion.
A mechanism is said to be incentive compatible if, for all \(i\) and for all \(\theta_{i},\theta_{i}^{\prime}\in\Theta_{i}\),
\[\mathbb{E}_{\theta_{-i}}\left[t_{i}(\theta_{i},\theta_{-i})-r_{i }(\theta_{i},\theta_{-i})\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}({\bf v} |\theta_{i},\theta_{-i})F({\rm d}{\bf v})+\kappa_{i}\right)\right]\] \[\qquad\geq\mathbb{E}_{\theta_{-i}}\left[t_{i}(\theta_{i}^{\prime},\theta_{-i})-r_{i}(\theta_{i}^{\prime},\theta_{-i})\theta_{i}\left(\int_{V} \boldsymbol{\mu}_{i}({\bf v}|\theta_{i}^{\prime},\theta_{-i})F({\rm d}{\bf v} )+\kappa_{i}\right)\right],\] (IC)
and it is said to be individually rational if, for all \(\theta_{i}\in\Theta_{i}\),
\[\mathbb{E}_{\theta_{-i}}\left[t_{i}(\theta_{i},\theta_{-i})-r_{i }(\theta_{i},\theta_{-i})\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}({\bf v} |\theta_{i},\theta_{-i})F({\rm d}{\bf v})+\kappa_{i}\right)\right]\geq 0.\] (IR)
Under any incentive compatible and individually rational mechanism \((r,\boldsymbol{\mu},t)\), firm \(i\)'s interim expected profit is
\[\Pi_{i}(\theta_{i}|r,\boldsymbol{\mu},t)=\mathbb{E}_{\theta_{-i}}\left[t_{i}(\theta)-r_{i}(\theta)\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}({\bf v}|\theta)F({\rm d}{\bf v})+\kappa_{i}\right)\right],\]
while the expected consumer surplus is
\[\Sigma(r,\boldsymbol{\mu},t):=\mathbb{E}_{\theta}\left[\sum_{i=1}^{N}r_{i}(\theta)\int_{V}\boldsymbol{\mu}_{i}({\bf v}|\theta)v_{i}F({\rm d}{\bf v})-\sum_{i=1}^{N}t_{i}(\theta)\right].\]
As a result, an incentive compatible and individually rational mechanism is efficient if and only if it is the solution to the following problem:
\[\sup_{(r,\boldsymbol{\mu},t)}\bigg{[}\sum_{i=1}^{N}\int_{\Theta_{i}}\Pi_{i}( \theta_{i}|r,\boldsymbol{\mu},t)\Lambda_{i}(\mathrm{d}\theta_{i})+\Sigma(r, \boldsymbol{\mu},t)\bigg{]}, \tag{2}\]
subject to (IC) and (IR), for some collection of nondecreasing and right-continuous functions \(\{\Lambda_{i}\}\) with \(0\leq\Lambda_{i}(\theta_{i})\leq G_{i}(\theta_{i})\) for all \(\theta_{i}\in\Theta_{i}\).
Meanwhile, using the standard envelope arguments, we can characterize incentive compatibility by a revenue equivalence formula and a monotonicity condition, as summarized by the following lemma.
**Lemma 2**.: _A mechanism \((r,\boldsymbol{\mu},t)\) is incentive compatible if and only if, for all \(i\), there exists a constant \(\bar{t}_{i}\in\mathbb{R}\) such that_
1. _For any_ \(i\) _and for any_ \(\theta_{i}\in\Theta_{i}\)_,_ \[\mathbb{E}_{\theta_{-i}}[t_{i}(\theta_{i},\theta_{-i})]\] \[= \bar{t}_{i}+\mathbb{E}_{\theta_{-i}}\bigg{[}r_{i}(\theta)\theta_ {i}\left(\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v} )+\kappa_{i}\right)+\int_{\theta_{i}}^{\overline{\theta}_{i}}r_{i}(x,\theta_{ -i})\bigg{(}\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|x,\theta_{-i})F(\mathrm{d }\mathbf{v})+\kappa_{i}\bigg{)}\,\mathrm{d}x\bigg{]}.\]
2. _For any_ \(i\)_, the function_ \[\theta_{i}\mapsto\mathbb{E}_{\theta_{-i}}\left[r_{i}(\theta_{i},\theta_{-i}) \left(\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|\theta_{i},\theta_{-i})F( \mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]\] _is nonincreasing._
From Lemma 2, for any incentive compatible mechanism \((r,\boldsymbol{\mu},t)\), and for all \(i\),
\[\mathbb{E}_{\theta}[t_{i}(\theta)]= \int_{\Theta_{i}}\mathbb{E}_{\theta_{-i}}[t_{i}(\theta_{i},\theta_{-i})]G_{i}(\mathrm{d}\theta_{i})\] \[= \bar{t}_{i}+\int_{\Theta_{i}}\theta_{i}\mathbb{E}_{\theta_{-i}}\left[r_{i}(\theta_{i},\theta_{-i})\left(\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]G_{i}(\mathrm{d}\theta_{i})\] \[+\int_{\Theta_{i}}G_{i}(\theta_{i})\mathbb{E}_{\theta_{-i}}\left[r_{i}(\theta_{i},\theta_{-i})\left(\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|\theta_{i},\theta_{-i})F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]\mathrm{d}\theta_{i}.\]
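For readers tracing the derivation, the third term comes from interchanging the order of integration. Writing \(A_{i}(x):=\mathbb{E}_{\theta_{-i}}\left[r_{i}(x,\theta_{-i})\left(\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|x,\theta_{-i})F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]\), Fubini's theorem gives

\[\int_{\Theta_{i}}\int_{\theta_{i}}^{\overline{\theta}_{i}}A_{i}(x)\,\mathrm{d}x\,G_{i}(\mathrm{d}\theta_{i})=\int_{\Theta_{i}}A_{i}(x)\left(\int_{\{\theta_{i}\leq x\}}G_{i}(\mathrm{d}\theta_{i})\right)\mathrm{d}x=\int_{\Theta_{i}}G_{i}(x)A_{i}(x)\,\mathrm{d}x,\]

which is exactly the last integral in the display above.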
Thus, expected consumer surplus can be written as
\[\Sigma(r,\boldsymbol{\mu},t)= \mathbb{E}_{\theta}\left[\sum_{i=1}^{N}r_{i}(\theta_{i},\theta_{-i})\int_{V}v_{i}\boldsymbol{\mu}_{i}(\mathbf{v}|\theta_{i},\theta_{-i})F(\mathrm{d}\mathbf{v})\right]\] \[-\sum_{i=1}^{N}\int_{\Theta_{i}}\theta_{i}\mathbb{E}_{\theta_{-i}}\left[r_{i}(\theta_{i},\theta_{-i})\left(\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]G_{i}(\mathrm{d}\theta_{i})\] \[-\sum_{i=1}^{N}\int_{\Theta_{i}}G_{i}(\theta_{i})\mathbb{E}_{\theta_{-i}}\left[r_{i}(\theta_{i},\theta_{-i})\left(\int_{V}\boldsymbol{\mu}_{i}(\mathbf{v}|\theta_{i},\theta_{-i})F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]\mathrm{d}\theta_{i}-\sum_{i=1}^{N}\bar{t}_{i}.\]
Meanwhile, for each firm \(i\),
\[\int_{\Theta_{i}}\Pi_{i}(\theta_{i}|r,\boldsymbol{\mu},t)\Lambda_{i}(\mathrm{ d}\theta_{i})=\int_{\Theta_{i}}\Lambda_{i}(\theta_{i})\mathbb{E}_{\theta_{-i} }\left[r_{i}(\theta_{i},\theta_{-i})\left(\int_{V}\boldsymbol{\mu}_{i}( \mathbf{v}|\theta_{i},\theta_{-i})F(\mathrm{d}\mathbf{v})+\kappa_{i}\right) \right]\mathrm{d}\theta_{i}+\Lambda_{i}(\overline{\theta}_{i})\bar{t}_{i}.\]
With the above expressions, we now consider a relaxed problem of (2). To this end, we first introduce the following lemma summarizing the virtual cost functions \(\{\phi_{i}^{\Lambda_{i}}\}\).
**Lemma 3**.: _For any \(i\) and for any nondecreasing, right-continuous function \(\Lambda_{i}\) with \(0\leq\Lambda_{i}(\theta_{i})\leq G_{i}(\theta_{i})\), there exists a nondecreasing function \(\phi_{i}^{\Lambda_{i}}:\Theta_{i}\to\mathbb{R}_{+}\) such that_
\[\int_{\Theta_{i}}\theta_{i}Q_{i}(\theta_{i})G_{i}(\mathrm{d}\theta_{i})+\int_{\Theta_{i}}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))Q_{i}(\theta_{i})\,\mathrm{d}\theta_{i}\geq\int_{\Theta_{i}}\phi_{i}^{\Lambda_{i}}(\theta_{i})Q_{i}(\theta_{i})G_{i}(\mathrm{d}\theta_{i})\]
_for any nonincreasing function \(Q_{i}:\Theta_{i}\to\mathbb{R}_{+}\), and the equality holds whenever \(Q_{i}\) is measurable with respect to the \(\sigma\)-algebra generated by \(\phi_{i}^{\Lambda_{i}}\)._
Proof.: See Appendix A.1.
This lemma is essentially the "ironing" technique a la Myerson (1981), except that (i) the type distribution does not necessarily have a density, and (ii) the function being "ironed" is the Pareto-weight-adjusted virtual cost, rather than the virtual value. The proof of the lemma follows from Monteiro and Svaiter (2010), who provide an extension (to even more general settings than ours) of the Myersonian ironing technique that can accommodate these two differences.
Combining Lemma 2 and Lemma 3, one can observe that the value of (2) is bounded
from above by the solution of
\[\sup_{r,\mathbf{\mu}}\left\{\mathbb{E}_{\theta}\left[\sum_{i=1}^{N}r_{i}(\theta) \left(\int_{V}\left(v_{i}-\phi_{i}^{\Lambda_{i}}(\theta_{i})\right)\mathbf{\mu}_{i}( \mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})-\phi_{i}^{\Lambda_{i}}(\theta_{i}) \kappa_{i}\right)\right]-\sum_{i=1}^{N}(1-\Lambda_{i}(\overline{\theta}_{i})) \bar{t}_{i}\right\}, \tag{3}\]
subject to
\[\theta_{i}\mapsto\mathbb{E}_{\theta_{-i}}\left[r_{i}(\theta_{i},\theta_{-i}) \left(\int_{V}\mathbf{\mu}_{i}(\mathbf{v}|\theta_{i},\theta_{-i})F(\mathrm{d} \mathbf{v})+\kappa_{i}\right)\right]\text{ is nonincreasing.} \tag{4}\]
Moreover, by Lemma 2, any individually rational mechanism must have \(\bar{t}_{i}\geq 0\) for all \(i\). Thus, it is without loss to set \(\bar{t}_{i}=0\) for all \(i\).
In what follows, we characterize the solution of (2) by finding a solution to (3) first and then verifying that the objective of (2) equals the objective of (3) under this solution. To this end, define \((r^{*},\mathbf{\mu}^{*},t^{*})\) as follows: For any \(\theta\in\Theta\), let \(\mathcal{E}^{*}(\theta)\) be a solution of
\[\max_{\mathcal{E}\subseteq\{1,\ldots,N\}}\left(\int_{V}\max_{i\in\mathcal{E}}(v_{i}-\phi_{i}^{\Lambda_{i}}(\theta_{i}))^{+}F(\mathrm{d}\mathbf{v})-\sum_{i\in\mathcal{E}}\phi_{i}^{\Lambda_{i}}(\theta_{i})\kappa_{i}\right).\]
Then, let
\[\boldsymbol{\mu}_{i}^{*}(\mathbf{v}|\theta):=\left\{\begin{array}{cc}\frac{1}{|\mathbb{M}^{*}(\mathbf{v},\theta)|},&\text{if $v_{i}\geq\phi_{i}^{\Lambda_{i}}(\theta_{i})$ and $i\in\mathbb{M}^{*}(\mathbf{v},\theta)$}\\ 0,&\text{otherwise}\end{array}\right.,\]
where \(\mathbb{M}^{*}(\mathbf{v},\theta):=\operatorname*{argmax}_{j\in\mathcal{E}^{* }(\theta)}\{v_{j}-\phi_{j}^{\Lambda_{j}}(\theta_{j})\}\), for all \(i\), for all \(\mathbf{v}\in V\), and for all \(\theta\in\Theta\); and
\[r_{i}^{*}(\theta)=\mathbf{1}\{i\in\mathcal{E}^{*}(\theta)\}\]
for all \(i\) and for all \(\theta\in\Theta\); and
\[t_{i}^{*}(\theta)=T_{i}^{*}(\theta_{i})\] \[:= \mathbb{E}_{\theta_{-i}}\bigg{[}r_{i}^{*}(\theta)\theta_{i}\left( \int_{V}\mathbf{\mu}_{i}^{*}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})+\kappa_{i} \right)-\int_{\theta_{i}}^{\overline{\theta}_{i}}r_{i}^{*}(x,\theta_{-i}) \bigg{(}\int_{V}\mathbf{\mu}_{i}^{*}(\mathbf{v}|x,\theta_{-i})F(\mathrm{d}\mathbf{ v})+\kappa_{i}\bigg{)}\,\mathrm{d}x\bigg{]},\]
for all \(i\) and for all \(\theta\in\Theta\).
**Lemma 4**.: _The mechanism \((r^{*},\mathbf{\mu}^{*},t^{*})\) solves (3). Furthermore,_
\[\sum_{i=1}^{N}\int_{\Theta_{i}}\Pi(\theta_{i}|r^{*},\mathbf{\mu}^{*},t^ {*})\Lambda_{i}(\mathrm{d}\theta_{i})+\Sigma(r^{*},\mathbf{\mu}^{*},t^{*})\] \[= \mathbb{E}_{\theta}\left[\sum_{i=1}^{N}r_{i}^{*}(\theta)\left(\int _{V}(v_{i}-\phi_{i}^{\Lambda_{i}}(\theta_{i}))\mathbf{\mu}_{i}^{*}(\mathbf{v}| \theta)F(\mathrm{d}\mathbf{v})-\phi_{i}^{\Lambda_{i}}(\theta_{i})\kappa_{i} \right)\right].\]
Proof.: See Appendix A.2.
Lemma 4 implies that the mechanism \((r^{*},\mathbf{\mu}^{*},t^{*})\) is a solution to (2). Furthermore, Lemma 2 and Lemma 3 imply that any other solution of (2) must be outcome-equivalent to \((r^{*},\mathbf{\mu}^{*},t^{*})\) with probability 1, save for the tie breaking rules that do not affect efficiency.
Now consider any efficient mechanism. As noted above, it is without loss to assume that this mechanism is \((r^{*},\mathbf{\mu}^{*},t^{*})\). To see that \((r^{*},\mathbf{\mu}^{*},t^{*})\) is equivalent to a PRYCE CAP mechanism, consider the mechanism \((S,r^{\mathcal{P}},\mathbf{\mu}_{i}^{\mathcal{P}},t^{\mathcal{P}})\) as follows: \(S_{i}:=\mathbb{R}_{+}\) for all \(i\);
\[r_{i}^{\mathcal{P}}(s):=\mathbf{1}\{i\in\mathcal{E}^{\mathcal{P}}(s)\},\]
for all \(s\in S\), where \(\mathcal{E}^{\mathcal{P}}(s)\) is a solution of
\[\max_{\mathcal{E}\subseteq\{1,\ldots,N\}}\left(\int_{V}\max_{i\in\mathcal{E}}( v_{i}-s_{i})^{+}F(\mathrm{d}\mathbf{v})-\sum_{i\in\mathcal{E}}s_{i}\kappa_{i} \right),\]
for all \(s\in S\);
\[\boldsymbol{\mu}_{i}^{\mathcal{P}}(\mathbf{v}|s)=\left\{\begin{array}{cc}\frac{1}{|\mathbb{M}(\mathbf{v},s)|},&\mbox{if $v_{i}\geq s_{i}$ and $i\in\mathbb{M}(\mathbf{v},s)$}\\ 0,&\mbox{otherwise}\end{array}\right.,\]
where \(\mathbb{M}(\mathbf{v},s):=\operatorname*{argmax}_{j\in\mathcal{E}^{\mathcal{P} }(s)}\{v_{j}-s_{j}\}\); and
\[t_{i}^{\mathcal{P}}(s):=s_{i}\mathbb{E}_{\theta_{-i}}\left[r_{i}^{\mathcal{P} }(s_{i},\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))\int_{V}\mathbf{\mu}_{i}^{\mathcal{ P}}(\mathbf{v}|s_{i},\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))F(\mathrm{d} \mathbf{v})\right]-\tau_{i}^{*}((\phi_{i}^{\Lambda_{i}})^{-1}(s_{i})),\]
where
\[\tau_{i}^{*}(\theta_{i}):=\phi_{i}^{\Lambda_{i}}(\theta_{i})\mathbb{E}_{\theta _{-i}}\left[r_{i}^{*}(\theta)\int_{V}\mathbf{\mu}_{i}^{*}(\mathbf{v}|\theta)F( \mathrm{d}\mathbf{v})\right]-T_{i}^{*}(\theta_{i}),\]
and \((\phi_{i}^{\Lambda_{i}})^{-1}(s_{i}):=\inf\{\theta_{i}\in\Theta_{i}|\phi_{i}^ {\Lambda_{i}}(\theta_{i})\geq s_{i}\}\), for all \(i\) and for all \(s_{i}\in S_{i}\).
Notice that for any \(i\), any \(s_{-i}\in S_{-i}\), and any \(s_{i},s^{\prime}_{i}\in S_{i}\), if \(s_{i}>s^{\prime}_{i}\) and \(i\in\mathcal{E}^{\mathcal{P}}(s_{i},s_{-i})\), it must be that \(i\in\mathcal{E}^{\mathcal{P}}(s^{\prime}_{i},s_{-i})\). Thus, for any \(i\) and for any \(s_{-i}\in S_{-i}\), there exists \(\bar{p}_{i}(s_{-i})\in\mathbb{R}_{+}\cup\{\infty\}\) such that \(i\in\mathcal{E}^{\mathcal{P}}(s_{i},s_{-i})\) if and only if \(s_{i}\leq\bar{p}_{i}(s_{-i})\). For this reason, \((S,r^{\mathcal{P}},\boldsymbol{\mu}^{\mathcal{P}},t^{\mathcal{P}})\) is indeed a PRYCE CAP mechanism.
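As an implementation aside (not part of the proof), both \(\mathcal{E}^{\mathcal{P}}(s)\) and the implied caps \(\bar{p}_{i}\) can be computed by brute force when \(N\) is small. The Python sketch below is our own: it estimates the expected surplus of each candidate entry set by Monte Carlo over sampled value vectors, and recovers \(\bar{p}_{i}(s_{-i})\) by bisection, which is valid precisely because of the monotonicity just noted.

```python
import itertools
import numpy as np

def best_entry_set(s, kappa, V):
    """argmax over subsets E of E_v[max_{i in E}(v_i - s_i)^+] - sum_{i in E} s_i*kappa_i.
    V : (num_samples, N) array of sampled consumer value vectors."""
    s, kappa = np.asarray(s, float), np.asarray(kappa, float)
    N = len(s)
    best_val, best_E = 0.0, ()            # the empty set yields value 0
    for k in range(1, N + 1):
        for E in itertools.combinations(range(N), k):
            idx = list(E)
            surplus = np.clip(V[:, idx] - s[idx], 0.0, None).max(axis=1).mean()
            val = surplus - (s[idx] * kappa[idx]).sum()
            if val > best_val:
                best_val, best_E = val, E
    return set(best_E)

def yardstick_cap(i, s_minus_i, kappa, V, hi=10.0, iters=40):
    """Largest s_i with i in E^P(s_i, s_{-i}), found by bisection
    (entry is monotone in a firm's own price)."""
    lo_, hi_ = 0.0, hi
    for _ in range(iters):
        mid = 0.5 * (lo_ + hi_)
        s = np.insert(np.asarray(s_minus_i, float), i, mid)
        if i in best_entry_set(s, kappa, V):
            lo_ = mid
        else:
            hi_ = mid
    return lo_
```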
The following lemma completes the proof.
**Lemma 5**.: _The PRYCE CAP mechanism \((S,r^{\mathcal{P}},\boldsymbol{\mu}^{\mathcal{P}}_{i},t^{\mathcal{P}})\) has a pure-strategy Bayes-Nash equilibrium \(\sigma^{\mathcal{P}}\) that induces the same outcome as \((r^{*},\boldsymbol{\mu}^{*},t^{*})\)._
Proof.: See Appendix A.3.
## 5 PRYCE CAP Example and Properties
### PRYCE CAP Example
Suppose that the number of potentially active firms \(N=2\) and that consumer values and firm types \(v_{1},\theta_{1},v_{2},\theta_{2}\in[0,1]\) are independently drawn from a uniform distribution. Suppose further that the commonly known fixed cost parameters \(\kappa_{1}=\kappa_{2}=1\) and that the Pareto weight functions \(\Lambda_{1}(x)=\Lambda_{2}(x)=x\) for all \(x\in[0,1]\). Then, the price cap functions \(\bar{p}_{1}\) and \(\bar{p}_{2}\) and the set \(\mathcal{E}^{\mathcal{P}}\) can be depicted by Figure I.
Figure I illustrates the tight link between the yardstick price caps and the sets of firms optimally granted market entry. In the figure, the set of (undominated) prices \([0,1]^{2}\) is partitioned into four regions, where each region of \((s_{1},s_{2})\) is mapped into different values of \(\mathcal{E}^{\mathcal{P}}(s_{1},s_{2})\). As a result, the boundaries of the regions define the yardstick price caps. The gold-orange curve represents firm 2's price cap \(\bar{p}_{2}\) as a function of \(s_{1}\), and the blue-green curve represents firm 1's price cap \(\bar{p}_{1}\) as a function of \(s_{2}\). Given firm 1's published price \(s_{1}\), firm 2 is excluded from the market if it posts a price \(s_{2}>\bar{p}_{2}(s_{1})\). Similarly, given firm 2's published price \(s_{2}\), firm 1 is excluded from the market if it posts a price \(s_{1}>\bar{p}_{1}(s_{2})\). Notice that both firms operate in the market if both publish relatively low prices, and both are restricted from entering if both publish relatively high prices. If one firm posts too high a price relative to the second, the first firm is excluded, whereas the second can enter.
Focusing on the behavior of the price caps in the figure, one can observe the two caps initially increasing in the other firm's price. As the competing firm publishes a higher price,
the restriction on the other firm's price loosens, consistent with the yardstick nature of the price ceiling. Once the competing firm's price exceeds a certain value, though, the other firm's price cap flattens, becoming independent of the competing firm's choice. This change in pattern is from the competing firm no longer operating in the market precisely because its high price denied it entry. At that point, the price cap of the other firm remains fixed and its authorization for business depends only on its own published price.
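As a sanity check on Figure I, the routines sketched after the proof of Theorem 1 reproduce its qualitative features under the stated primitives (\(N=2\), i.i.d. uniform values on \([0,1]\), \(\kappa_{1}=\kappa_{2}=1\)). In particular, if we have computed correctly, once the competitor is priced out a lone firm should be admitted exactly when \(\frac{1}{2}(1-s)^{2}\geq s\), i.e., when \(s\leq 2-\sqrt{3}\approx 0.27\), which is where the flat portion of each cap should sit. A minimal numerical check, reusing `yardstick_cap` from the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.uniform(size=(200_000, 2))        # i.i.d. uniform consumer values
kappa = np.array([1.0, 1.0])

# flat part of firm 1's cap: the competitor is priced out at s_2 = 0.9
cap = yardstick_cap(0, [0.9], kappa, V, hi=1.0)
print(cap, 2 - np.sqrt(3))                # both should be close to 0.268
```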
### Yardstick Price Cap Properties
The properties of the yardstick price cap just described are not special to the assumptions of two firms or uniformly distributed consumer values. Under the broader assumption that consumers' values \(\{v_{i}\}_{i=1}^{N}\) are i.i.d., the next proposition explains that a firm's price ceiling rises when competing firms submit higher prices. Moreover, a firm can guarantee itself entry if it submits a price _below_ a certain threshold; and it can guarantee itself no entry if it submits a price _above_ another threshold.
**Proposition 1**.: _Suppose that \(\{v_{i}\}_{i=1}^{N}\) are i.i.d. and that \(\kappa_{i}=\kappa\) for all \(i\). Consider any
efficient PRYCE CAP mechanism and let \(\bar{p}_{i}:S_{-i}\rightarrow\mathbb{R}_{+}\cup\{\infty\}\) denote the yardstick price cap for firm \(i\). Then,_
1. _For any price vector_ \(s\in\mathbb{R}_{+}^{N}\)_,_ \(\bar{p}_{i}(s_{-i})\leq\bar{p}_{j}(s_{-j})\) _if and only if_ \(s_{i}\geq s_{j}\)_, for all_ \(i,j\in\{1,\ldots,N\}\)_._
2. _For any_ \(i\in\{1,\ldots,N\}\) _and for any_ \(s_{-i}\in\mathbb{R}_{+}^{N-1}\)_,_ \(\bar{p}_{i}(s_{-i})\in[\underline{s},\bar{s}]\) _for some_ \(0\leq\underline{s}\leq\bar{s}<\infty\)_._
Proof.: See Appendix B.1.
An immediate consequence of Proposition 1 is that the firm publishing the lowest price faces the highest price ceiling. This relation implies that the price a firm publishes has two effects on its eligibility to operate in an efficient PRYCE CAP mechanism. The first is a _direct effect_: A lower submitted price is more likely to be below the firm's price ceiling and grant the firm the right to sell. The second is a _yardstick effect_: A lower submitted price, other things equal, means the firm will face a higher price ceiling compared to its competitors, which can be more easily met.
## 6 Conclusion
We study optimal regulation of oligopolistic competition where firms have private information about their costs and consumers make discrete choices over goods. We search over a broad class of mechanisms, covering a variety of ways in which firms compete, that implement constrained Pareto efficient allocations. The socially efficient mechanisms are equivalent to price competition, but with lump-sum transfers and yardstick price caps. We refer to these mechanisms as PRYCE CAP mechanisms, and they can be implemented without knowledge of individual consumer preferences, realized firm costs, or firm conduct.
To implement a PRYCE CAP mechanism, a planner, we presume, has power to verify and enforce competition exclusively on price, regardless of the kinds of complicated competitive conduct that might already prevail in the market. But upholding price competition is not the unique way to achieve efficiency. For certain markets, a clever selection of lump-sum transfers alone might convert an existing inefficient market setting into an efficient one. But
administering such creative transfers would likely be intractable, and the planner would need unearthly knowledge of the competitive game in which firms engage. A significant contribution of PRYCE CAP mechanisms is that they require no such awareness, and they apply to a broad range of potential market environments.
Hence, if the planner can verify and enforce price competition, the search for efficiency ends with PRYCE CAP mechanisms. In practice, though, a planner might lack such powers entirely or wield them imperfectly. A natural implication of our result is that social efficiency is more easily achieved in markets where a planner can plausibly maintain price competition. Or rather, more realistically, markets where posting prices _already_ drives the nature of competition are better candidates for reaching efficiency.
## Appendix A Omitted Proofs for Section 4
### Proof of Lemma 3
Let \(\nu_{i}\) be a signed measure on \(\Theta_{i}:=[\underline{\theta}_{i},\overline{\theta}_{i}]\) defined by
\[\nu_{i}(A):=\int_{A}\theta_{i}G_{i}(\mathrm{d}\theta_{i})+\int_{A}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))\,\mathrm{d}\theta_{i},\]
for all (Borel) subsets \(A\) of \(\Theta_{i}\), and let \(N_{i}(\theta_{i}):=\nu_{i}([\underline{\theta}_{i},\theta_{i}])\) for all \(\theta_{i}\in\Theta_{i}\) be its CDF. By definition 3 and theorem 2 of Monteiro and Svaiter (2010), there exists a nondecreasing function \(\phi_{i}^{\Lambda_{i}}\) such that for any nonincreasing function \(Q_{i}:\Theta_{i}\to\mathbb{R}_{+}\),
\[\int_{\Theta_{i}}\theta_{i}(-Q_{i}(\theta_{i}))G_{i}(\mathrm{d}\theta_{i})+\int_{\Theta_{i}}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))(-Q_{i}(\theta_{i}))\,\mathrm{d}\theta_{i}\] \[= \int_{\Theta_{i}}(-Q_{i}(\theta_{i}))\nu_{i}(\mathrm{d}\theta_{i})\] \[\leq \int_{\Theta_{i}}(-Q_{i}(\theta_{i}))\phi_{i}^{\Lambda_{i}}(\theta_{i})G_{i}(\mathrm{d}\theta_{i}),\]
and hence
\[\int_{\Theta_{i}}\theta_{i}Q_{i}(\theta_{i})G_{i}(\mathrm{d}\theta_{i})+\int_{\Theta_{i}}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))Q_{i}(\theta_{i})\,\mathrm{d}\theta_{i}\geq\int_{\Theta_{i}}Q_{i}(\theta_{i})\phi_{i}^{\Lambda_{i}}(\theta_{i})G_{i}(\mathrm{d}\theta_{i})\]
for any nonincreasing function \(Q_{i}\). Meanwhile, theorem 3 of Monteiro and Svaiter (2010) implies that the inequality is binding whenever \(Q_{i}\) is measurable with respect to the \(\sigma\)-algebra generated by \(\phi_{i}^{\Lambda_{i}}\). This completes the proof.
To better understand why the lemma follows from Monteiro and Svaiter (2010), assume that \(G_{i}\) has a density \(g_{i}\) and that \(g_{i}(\theta_{i})>0\) for all \(\theta_{i}\in\Theta_{i}\). Moreover, let \(\psi_{i}(\theta_{i}):=\theta_{i}+(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))/ g_{i}(\theta_{i})\) for all \(\theta_{i}\in\Theta_{i}\). Then, for any (measurable) function \(Q_{i}:\Theta_{i}\to\mathbb{R}_{+}\),
\[\int_{\Theta_{i}}\theta_{i}Q_{i}(\theta_{i})G_{i}(\mathrm{d}\theta_{i})+\int_{\Theta_{i}}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))Q_{i}(\theta_{i})\,\mathrm{d}\theta_{i}\] \[= \int_{\Theta_{i}}Q_{i}(\theta_{i})\left(\theta_{i}+\frac{G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i})}{g_{i}(\theta_{i})}\right)G_{i}(\mathrm{d}\theta_{i})\] \[= \int_{\Theta_{i}}Q_{i}(\theta_{i})\psi_{i}(\theta_{i})G_{i}(\mathrm{d}\theta_{i}).\]
The function \(\psi_{i}\) is the usual virtual cost function, but since the Pareto weight \(\Lambda_{i}\) is arbitrary, \(\psi_{i}\) is not necessarily monotone, and hence ironing is generally needed. To this end, we can define the ironed virtual cost as \(\phi_{i}^{\Lambda_{i}}\) through the following procedure, which is essentially due to Myerson (1981) and Baron and Myerson (1982).
First, note that since \(g_{i}>0\) on its support, \(G_{i}\) is strictly increasing and hence \(G_{i}^{-1}\) is
well-defined. Now, let \(h:[0,1]\to\mathbb{R}_{+}\) be defined as
\[h(q):=\psi_{i}(G_{i}^{-1}(q))=G_{i}^{-1}(q)+\frac{q-\Lambda_{i}(G_{i}^{-1}(q))}{g _{i}(G_{i}^{-1}(q))},\]
and let \(H(q):=\int_{0}^{q}h(z)\,\mathrm{d}z\) and \(K\) be the convex closure of \(H\) (i.e., the largest convex function below \(H\)). Lastly, let \(k(q):=K^{\prime}(q)\) and define \(\phi_{i}^{\Lambda_{i}}\) as
\[\phi_{i}^{\Lambda_{i}}(\theta_{i}):=k(G_{i}(\theta_{i})),\,\forall\theta_{i} \in\Theta_{i}.\]
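This construction is mechanical enough to implement directly. Below is our own grid-based Python sketch (not from the paper): it builds \(H\), takes the convex closure \(K\) as the lower convex hull of the graph of \(H\), and differentiates to obtain \(k\); `psi` and `G_inv` are user-supplied vectorized callables.

```python
import numpy as np

def ironed_virtual_cost(psi, G_inv, n=2001):
    """Grid implementation of the ironing step: returns (q, k) with
    k = K' and K the convex closure of H(q) = int_0^q psi(G^{-1}(z)) dz,
    so that phi(theta) = k(G(theta))."""
    q = np.linspace(0.0, 1.0, n)
    h = psi(G_inv(q))                          # h(q) = psi(G^{-1}(q))
    H = np.concatenate([[0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(q))])
    # convex closure = lower convex hull of the points (q_j, H_j)
    hull = [0]
    for j in range(1, n):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 if it lies on or above the chord from i0 to j
            if (H[i1] - H[i0]) * (q[j] - q[i0]) >= (H[j] - H[i0]) * (q[i1] - q[i0]):
                hull.pop()
            else:
                break
        hull.append(j)
    K = np.interp(q, q[hull], H[hull])
    k = np.gradient(K, q)                      # nondecreasing by convexity of K
    return q, k

# e.g. uniform G on [0,1] with Lambda = 0: psi(t) = 2t is already
# nondecreasing, and the routine returns k(q) ~ 2q (no ironing needed).
```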
Note that for any nonincreasing function \(Q_{i}\), using integration by parts, as well as the fact that \(K(0)=H(0)\) and \(K(1)=H(1)\),
\[\int_{\Theta_{i}}Q_{i}(\theta_{i})(\phi_{i}^{\Lambda_{i}}(\theta_{i})-\psi_{i }(\theta_{i}))G_{i}(\mathrm{d}\theta_{i})=-\int_{\Theta_{i}}(K(G_{i}(\theta_{ i}))-H(G_{i}(\theta_{i})))\mu^{Q_{i}}(\mathrm{d}\theta_{i})\leq 0,\]
where \(\mu^{Q_{i}}\) denotes the measure associated with the CDF given by the right-limit of \(1-Q_{i}\), and the inequality follows from \(K\) being below \(H\) pointwise. Furthermore, if \(Q_{i}\) is measurable with respect to \(\phi_{i}^{\Lambda_{i}}\), then \(\mu^{Q_{i}}\) assigns zero probability to an interval whenever \(\phi_{i}^{\Lambda_{i}}\) is constant on that interval, while, by the definition of convex closure, \(K(G_{i}(\theta_{i}))<H(G_{i}(\theta_{i}))\) on an interval if and only if \(\phi_{i}^{\Lambda_{i}}\) is constant on that interval. Together, it must be that
\[\int_{\Theta_{i}}(K(G_{i}(\theta_{i}))-H(G_{i}(\theta_{i})))\mu^{Q_{i}}( \mathrm{d}\theta_{i})=0\]
for any \(Q_{i}\) that is measurable with respect to \(\phi_{i}^{\Lambda_{i}}\).
To be more specific about how Lemma 3 follows from results of Monteiro and Svaiter (2010), let \(\nu_{i}\) be a signed measure on \(\Theta_{i}:=[\underline{\theta}_{i},\overline{\theta}_{i}]\) defined by
\[\nu_{i}(A):=\int_{A}\theta_{i}G_{i}(\mathrm{d}\theta_{i})+\int_{A}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))\,\mathrm{d}\theta_{i},\]
for all (Borel) subsets \(A\) of \(\Theta_{i}\), and let \(N_{i}(\theta_{i}):=\nu_{i}([\underline{\theta}_{i},\theta_{i}])\) for all \(\theta_{i}\in\Theta_{i}\) be its CDF. Then we may apply their definition 3 (with their \(F\) being replaced by \(G_{i}\) and their \(H\) replaced by \(N_{i}\)) and obtain a nondecreasing function \(\phi_{i}^{\Lambda_{i}}\) (denoted by \(l\) in their paper). By the definition of \(\nu_{i}\), theorem 2 of Monteiro and Svaiter (2010) implies that, for any nonincreasing function \(Q_{i}\),
\[\int_{\Theta_{i}}\theta_{i}(-Q_{i}(\theta_{i}))G_{i}(\mathrm{d}\theta_{i})+\int_{\Theta_{i}}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))(-Q_{i}(\theta_{i}))\,\mathrm{d}\theta_{i}\] \[= \int_{\Theta_{i}}(-Q_{i}(\theta_{i}))\nu_{i}(\mathrm{d}\theta_{i})\] \[\leq \int_{\Theta_{i}}(-Q_{i}(\theta_{i}))\phi_{i}^{\Lambda_{i}}(\theta_{i})G_{i}(\mathrm{d}\theta_{i}),\]
and hence
\[\int_{\Theta_{i}}\theta_{i}Q_{i}(\theta_{i})G_{i}(\mathrm{d}\theta_{i})+\int_{\Theta_{i}}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))Q_{i}(\theta_{i})\,\mathrm{d}\theta_{i}\geq\int_{\Theta_{i}}Q_{i}(\theta_{i})\phi_{i}^{\Lambda_{i}}(\theta_{i})G_{i}(\mathrm{d}\theta_{i})\]
for any nonincreasing function \(Q_{i}\). Meanwhile, Theorem 3 of the same paper implies that the inequality is binding whenever \(Q_{i}\) is measurable with respect to the \(\sigma\)-algebra generated by \(\phi_{i}^{\Lambda_{i}}\).
### Proof of Lemma 4
Proof.: We first show that, for all \(i\),
\[\theta_{i}\mapsto\mathbb{E}_{\theta_{-i}}\left[r_{i}^{*}(\theta_{i},\theta_{- i})\left(\int_{V}\mathbf{\mu}_{i}^{*}(\mathbf{v}|\theta_{i},\theta_{-i})F( \mathrm{d}\mathbf{v})+\kappa_{i}\right)\right] \tag{5}\]
is nonincreasing. To see this, notice that for any \(i\) and for any \(\theta\in\Theta\),
\[\int_{V}\mathbf{\mu}_{i}^{*}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})=\int_{V} \mathbf{1}\{\phi_{i}^{\Lambda_{i}}(\theta_{i})\leq\phi_{i}^{\Lambda_{i}}( \theta_{j})+v_{i}-v_{j},\,\forall j\in\mathcal{E}^{*}(\theta),\,j\neq i\}F( \mathrm{d}\mathbf{v}). \tag{6}\]
Moreover, notice that for any \(i\), for any \(\theta_{-i}\in\Theta_{-i}\), and for any \(\theta_{i},\theta_{i}^{\prime}\in\Theta_{i}\) with \(\theta_{i}^{\prime}<\theta_{i}\), \(i\in\mathcal{E}^{*}(\theta_{i},\theta_{-i})\) implies \(i\in\mathcal{E}^{*}(\theta_{i}^{\prime},\theta_{-i})\). Together with the fact that \(\phi_{i}^{\Lambda_{i}}\) is nondecreasing, it then follows that both (6) and \(r_{i}^{*}\) are nonincreasing functions of \(\theta_{i}\) for all \(\theta_{-i}\in\Theta_{-i}\). Therefore, (5) is indeed nonincreasing.
Furthermore, by definition of \((r^{*},\mathbf{\mu}^{*})\), for any \((r,\mathbf{\mu})\) such that the function
\[\theta_{i}\mapsto\mathbb{E}_{\theta_{-i}}\left[r_{i}(\theta_{i},\theta_{-i}) \left(\int_{V}\mathbf{\mu}_{i}(\mathbf{v}|\theta_{i},\theta_{-i})F(\mathrm{d} \mathbf{v})+\kappa_{i}\right)\right]\]
is nonincreasing, it must be that
\[\mathbb{E}_{\theta}\left[\sum_{i=1}^{N}r_{i}(\theta)\left(\int_{ V}(v_{i}-\phi_{i}^{\Lambda_{i}}(\theta_{i}))\mathbf{\mu}_{i}(\mathbf{v}|\theta)F( \mathrm{d}\mathbf{v})-\phi_{i}^{\Lambda_{i}}(\theta_{i})\kappa_{i}\right)\right]\] \[\leq \mathbb{E}_{\theta}\left[\sum_{i=1}^{N}r_{i}^{*}(\theta)\left( \int_{V}(v_{i}-\phi_{i}^{\Lambda_{i}}(\theta_{i}))\mathbf{\mu}_{i}^{*}(\mathbf{v}| \theta)F(\mathrm{d}\mathbf{v})-\phi_{i}^{\Lambda_{i}}(\theta_{i})\kappa_{i} \right)\right].\]
Thus, \((r^{*},\mathbf{\mu}^{*})\) is a solution to (3).
Lastly, by Lemma 3, since (5) is nonincreasing and is measurable with respect to \(\phi_{i}^{\Lambda_{i}}\) for
all \(i\), we have
\[\int_{\Theta_{i}}\phi_{i}^{\Lambda_{i}}(\theta_{i})\mathbb{E}_{ \theta_{-i}}\left[r_{i}^{*}(\theta)\left(\int_{V}\mathbf{\mu}_{i}^{*}(\mathbf{v}| \theta)F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]G_{i}(\mathrm{d}\theta_{i})\] \[= \int_{\Theta_{i}}\theta_{i}\mathbb{E}_{\theta_{-i}}\left[r_{i}^{* }(\theta)\left(\int_{V}\mathbf{\mu}_{i}^{*}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{ v})+\kappa_{i}\right)\right]G_{i}(\mathrm{d}\theta_{i})\] \[+\int_{\Theta_{i}}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i})) \mathbb{E}_{\theta_{-i}}\left[r_{i}^{*}(\theta)\left(\int_{V}\mathbf{\mu}_{i}^{*}( \mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]\mathrm{d} \theta_{i}\]
Therefore,
\[\mathbb{E}_{\theta}\left[\sum_{i=1}^{N}r_{i}^{*}(\theta)\left(\int_{V}(v_{i}-\phi_{i}^{\Lambda_{i}}(\theta_{i}))\boldsymbol{\mu}_{i}^{*}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})-\phi_{i}^{\Lambda_{i}}(\theta_{i})\kappa_{i}\right)\right]\] \[= \mathbb{E}_{\theta}\left[\sum_{i=1}^{N}r_{i}^{*}(\theta)\int_{V}v_{i}\boldsymbol{\mu}_{i}^{*}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})\right]-\sum_{i=1}^{N}\int_{\Theta_{i}}\theta_{i}\mathbb{E}_{\theta_{-i}}\left[r_{i}^{*}(\theta)\left(\int_{V}\boldsymbol{\mu}_{i}^{*}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]G_{i}(\mathrm{d}\theta_{i})\] \[-\sum_{i=1}^{N}\int_{\Theta_{i}}(G_{i}(\theta_{i})-\Lambda_{i}(\theta_{i}))\mathbb{E}_{\theta_{-i}}\left[r_{i}^{*}(\theta)\left(\int_{V}\boldsymbol{\mu}_{i}^{*}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\right]\mathrm{d}\theta_{i}\] \[= \Sigma(r^{*},\boldsymbol{\mu}^{*},t^{*})+\sum_{i=1}^{N}\int_{\Theta_{i}}\Pi(\theta_{i}|r^{*},\boldsymbol{\mu}^{*},t^{*})\Lambda_{i}(\mathrm{d}\theta_{i}),\]
as desired.
### Proof of Lemma 5
Proof.: Consider the mechanism \((S,r^{\mathcal{P}},\mathbf{\mu}_{i}^{\mathcal{P}},t^{\mathcal{P}})\). First, notice that by definition of \(\mathbf{\mu}_{i}^{\mathcal{P}}\), for all \(\theta\in\Theta\) and for all \(i\),
\[\mathbf{\mu}_{i}^{\mathcal{P}}(\mathbf{v}|\phi_{1}^{\Lambda_{1}}(\theta_{1}),\dots,\phi_{N}^{\Lambda_{N}}(\theta_{N}))=\mathbf{\mu}_{i}^{*}(\mathbf{v}|\theta_{1}, \dots,\theta_{N}),\]
for all \(\mathbf{v}\in V\). Moreover, by Lemma 2, for each \(i\) and for any interval \([\theta_{i}^{1},\theta_{i}^{2}]\) on which \(\phi_{i}^{\Lambda_{i}}\) is constant, \(T_{i}^{*}\) is also constant. Therefore, for any \(i\) and for any \(\theta_{i}\in\Theta_{i}\), if \(\theta_{i}\) belongs to an interval \([\theta_{i}^{1},\theta_{i}^{2}]\) on which \(\phi_{i}^{\Lambda_{i}}\) is constant, then \((\phi_{i}^{\Lambda_{i}})^{-1}(\phi_{i}^{\Lambda_{i}}(\theta_{i}))\) also belongs to \([\theta_{i}^{1},\theta_{i}^{2}]\), so that \(\tau_{i}^{*}((\phi_{i}^{\Lambda_{i}})^{-1}(\phi_{i}^{\Lambda_{i}}(\theta_{i})))=\tau_{i}^{*}(\theta_{i})\). Thus, for any \(i\) and for any \(\theta\in\Theta\),
\[t_{i}^{\mathcal{P}}(\phi_{1}^{\Lambda_{1}}(\theta_{1}),\dots,\phi_{N}^{\Lambda_{N}}(\theta_{N}))= \mathbb{E}_{\theta_{-i}}\left[\phi_{i}^{\Lambda_{i}}(\theta_{i})r_{i}^{\mathcal{P}}(\phi_{i}^{\Lambda_{i}}(\theta_{i}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))\int_{V}\boldsymbol{\mu}_{i}^{\mathcal{P}}(\mathbf{v}|\phi_{i}^{\Lambda_{i}}(\theta_{i}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))F(\mathrm{d}\mathbf{v})\right]-\tau_{i}^{*}(\theta_{i})\] \[= \mathbb{E}_{\theta_{-i}}\left[\phi_{i}^{\Lambda_{i}}(\theta_{i})r_{i}^{*}(\theta)\int_{V}\boldsymbol{\mu}_{i}^{*}(\mathbf{v}|\theta)F(\mathrm{d}\mathbf{v})\right]-\tau_{i}^{*}(\theta_{i})\] \[= T_{i}^{*}(\theta_{i})\] \[= t_{i}^{*}(\theta_{1},\dots,\theta_{N}),\]
where \(\phi_{-i}^{\Lambda_{-i}}:=(\phi_{1}^{\Lambda_{1}},\ldots,\phi_{i-1}^{\Lambda_{i-1}},\phi_{i+1}^{\Lambda_{i+1}},\ldots,\phi_{N}^{\Lambda_{N}})\). Furthermore, by the definitions of \(\mathcal{E}^{\mathcal{P}}\) and \(\mathcal{E}^{*}\) given \(\mu^{\mathcal{P}}\) and \(t^{\mathcal{P}}\), when each firm \(i\) with type \(\theta_{i}\) chooses \(\phi_{i}^{\Lambda_{i}}(\theta_{i})\), the induced welfare outcomes (i.e., the weighted sum of consumer surplus and firms' interim expected revenue) under \((r^{*},\mu^{*},t^{*})\) are the same as those under \((S,r^{\mathcal{P}},\mu^{\mathcal{P}},t^{\mathcal{P}})\).
It then remains to show that the strategy profile where each firm \(i\) with type \(\theta_{i}\) chooses \(\phi_{i}^{\Lambda_{i}}(\theta_{i})\) is a Bayes-Nash equilibrium in the game induced by \((S,r^{\mathcal{P}},\mathbf{\mu}_{i}^{\mathcal{P}},t^{\mathcal{P}})\). Indeed, for any firm \(i\), any type \(\theta_{i}\in\Theta_{i}\), and for any \(s_{i}\in\phi_{i}^{\Lambda_{i}}(\Theta_{i})\), given that all other firms follow the strategy \(\phi_{-i}^{\Lambda_{-i}}=(\phi_{1}^{\Lambda_{1}},\ldots,\phi_{i-1}^{\Lambda_{ i-1}},\phi_{i+1}^{\Lambda_{i+1}},\ldots,\phi_{N}^{\Lambda_{N}})\), let \(\theta_{i}^{\prime}\in\Theta_{i}\) be such that \(\phi_{i}^{\Lambda_{i}}(\theta_{i}^{\prime})=s_{i}\). We then have
\[\mathbb{E}_{\theta_{-i}}\bigg{[}t_{i}^{\mathcal{P}}(\phi_{i}^{\Lambda_{i}}(\theta_{i}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))-r_{i}^{\mathcal{P}}(\phi_{i}^{\Lambda_{i}}(\theta_{i}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}^{\mathcal{P}}(\mathbf{v}|\phi_{i}^{\Lambda_{i}}(\theta_{i}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\bigg{]}\] \[= \mathbb{E}_{\theta_{-i}}\bigg{[}t_{i}^{*}(\theta_{i},\theta_{-i})-r_{i}^{*}(\theta_{i},\theta_{-i})\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}^{*}(\mathbf{v}|\theta_{i},\theta_{-i})F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\bigg{]}\] \[\geq \mathbb{E}_{\theta_{-i}}\bigg{[}t_{i}^{*}(\theta_{i}^{\prime},\theta_{-i})-r_{i}^{*}(\theta_{i}^{\prime},\theta_{-i})\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}^{*}(\mathbf{v}|\theta_{i}^{\prime},\theta_{-i})F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\bigg{]}\] \[= \mathbb{E}_{\theta_{-i}}\bigg{[}t_{i}^{\mathcal{P}}(\phi_{i}^{\Lambda_{i}}(\theta_{i}^{\prime}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))-r_{i}^{\mathcal{P}}(\phi_{i}^{\Lambda_{i}}(\theta_{i}^{\prime}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}^{\mathcal{P}}(\mathbf{v}|\phi_{i}^{\Lambda_{i}}(\theta_{i}^{\prime}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\bigg{]}\] \[= \mathbb{E}_{\theta_{-i}}\bigg{[}t_{i}^{\mathcal{P}}(s_{i},\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))-r_{i}^{\mathcal{P}}(s_{i},\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}^{\mathcal{P}}(\mathbf{v}|s_{i},\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\bigg{]},\]
where the inequality follows from the fact that \((r^{*},\mathbf{\mu}^{*},t^{*})\) is incentive compatible. Meanwhile, it is easy to verify that for any firm \(i\), any type \(\theta_{i}\in\Theta_{i}\), and for any \(s_{i}\notin\phi_{i}^{\Lambda_{i}}(\Theta_{i})\), given that all other firms follow the strategy \(\phi_{-i}^{\Lambda_{-i}}\),
\[\mathbb{E}_{\theta_{-i}}\bigg{[}t_{i}^{\mathcal{P}}(\phi_{i}^{\Lambda_{i}}(\theta_{i}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))-r_{i}^{\mathcal{P}}(\phi_{i}^{\Lambda_{i}}(\theta_{i}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}^{\mathcal{P}}(\mathbf{v}|\phi_{i}^{\Lambda_{i}}(\theta_{i}),\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\bigg{]}\] \[\geq \mathbb{E}_{\theta_{-i}}\bigg{[}t_{i}^{\mathcal{P}}(s_{i},\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))-r_{i}^{\mathcal{P}}(s_{i},\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))\theta_{i}\left(\int_{V}\boldsymbol{\mu}_{i}^{\mathcal{P}}(\mathbf{v}|s_{i},\phi_{-i}^{\Lambda_{-i}}(\theta_{-i}))F(\mathrm{d}\mathbf{v})+\kappa_{i}\right)\bigg{]}.\]
Together, it then follows that \((\phi_{1}^{\Lambda_{1}},\ldots,\phi_{N}^{\Lambda_{N}})\) is indeed a Bayes-Nash equilibrium in the game induced by \((S,r^{\mathcal{P}},\mathbf{\mu}_{i}^{\mathcal{P}},t^{\mathcal{P}})\). This completes the proof.
## Appendix B Omitted Proof for Section 5
### Proof of Proposition 1
Proof.: From the proof of Theorem 1, for each \(i\in\{1,\ldots,N\}\) and for any \(s\in\mathbb{R}_{+}^{N}\), firm \(i\in\mathcal{E}^{\mathcal{P}}(s_{i},s_{-i})\) if and only if \(s_{i}\leq\bar{p}_{i}(s_{-i})\), where \(\mathcal{E}^{\mathcal{P}}\) is a solution of
\[\max_{\mathcal{E}\subseteq\{1,\ldots,N\}}\left(\int_{0}^{\infty}\cdots\int_{0}^{\infty}\max_{i\in\mathcal{E}}(v_{i}-s_{i})^{+}F(\mathrm{d}v_{1})\cdots F(\mathrm{d}v_{N})-\sum_{i\in\mathcal{E}}s_{i}\kappa_{i}\right).\]
As a result, there must exist \(\bar{p}:\mathbb{R}_{+}^{N-1}\to\mathbb{R}_{+}\cup\{\infty\}\) such that \(\bar{p}_{i}(s_{-i})=\bar{p}(s_{-i})\) for all \(i\) and for all \(s\in\mathbb{R}_{+}^{N}\). We claim that \(\bar{p}\) is nondecreasing in each argument. Indeed, for any \(i\) and for any \(s,s^{\prime}\in\mathbb{R}_{+}^{N}\) such that \(s_{i}=s^{\prime}_{i}\) and \(s_{j}\leq s^{\prime}_{j}\) for some \(j\neq i\), if \(i\in\mathcal{E}^{\mathcal{P}}(s)\), then it must be that \(i\in\mathcal{E}^{\mathcal{P}}(s^{\prime})\) as well. Therefore, it must be that \(\bar{p}(s_{-i})\leq\bar{p}(s^{\prime}_{-i})\), as desired. Since \(\bar{p}\) is nondecreasing in every component, for any \(i,j\in\{1,\ldots,N\}\) with \(i\neq j\), and for any \(s\in\mathbb{R}_{+}^{N}\) with \(s_{i}\geq s_{j}\), it must be that \(\bar{p}(s_{-j})\geq\bar{p}(s_{-i})\), as desired.
Meanwhile, notice that for any \(i\in\{1,\ldots,N\}\) and for any \(s_{-i}\in\mathbb{R}_{+}^{N-1}\), if \(s_{i}=0\), then it must be that \(i\in\mathcal{E}^{\mathcal{P}}(s_{i},s_{-i})\). In contrast, since for any \(\mathcal{E}\subseteq\{1,\ldots,N\}\) such that \(i\notin\mathcal{E}\),
\[\lim_{s_{i}\to\infty}\sup_{s_{-i}\in\mathbb{R}_{+}^{N-1}}\left[\int_{0}^{\infty}\cdots\int_{0}^{\infty}\Big[\max_{j\in\mathcal{E}\cup\{i\}}(v_{j}-s_{j})^{+}-\max_{j\in\mathcal{E}}(v_{j}-s_{j})^{+}\Big]F(\mathrm{d}v_{1})\cdots F(\mathrm{d}v_{N})-s_{i}\kappa_{i}\right]<0,\]
there must exist \(\bar{s}\) such that \(i\notin\mathcal{E}^{\mathcal{P}}(s)\) whenever \(s_{i}\geq\bar{s}\), for all \(s\in\mathbb{R}_{+}^{N}\). This completes the proof. |
2306.16719 | Radar Enhanced Multi-Armed Bandit for Rapid Beam Selection in Millimeter
Wave Communications | Multi-arm bandit (MAB) algorithms have been used to learn optimal beams for
millimeter wave communication systems. Here, the complexity of learning the
optimal beam linearly scales with the number of beams, leading to high latency
when there are a large number of beams. In this work, we propose to integrate
radar with communication to enhance the MAB learning performance by searching
only those beams where the radar detects a scatterer. Further, we use radar to
distinguish the beams that show mobile targets from those which indicate the
presence of static clutter, thereby reducing the number of beams to scan.
Simulations show that our proposed radar-enhanced MAB reduces the exploration
time by searching only the beams with distinct radar mobile targets resulting
in improved throughput. | Akanksha Sneh, Sumit Darak, Shobha Sundar Ram, Manjesh Hanawal | 2023-06-29T06:36:47Z | http://arxiv.org/abs/2306.16719v1 | # Radar Enhanced Multi-Armed Bandit for Rapid Beam Selection in Millimeter Wave Communications
###### Abstract
Multi-arm bandit (MAB) algorithms have been used to learn optimal beams for millimeter wave communication systems. Here, the complexity of learning the optimal beam linearly scales with the number of beams, leading to high latency when there are a large number of beams. In this work, we propose to integrate radar with communication to enhance the MAB learning performance by searching only those beams where the radar detects a scatterer. Further, we use radar to distinguish the beams that show mobile targets from those which indicate the presence of static clutter, thereby reducing the number of beams to scan. Simulations show that our proposed radar-enhanced MAB reduces the exploration time by searching only the beams with distinct radar mobile targets resulting in improved throughput.
multi-armed bandit, joint radar communication, upper confidence bound, analog beamforming
## I Introduction
Millimeter wave (mmW) unlicensed spectrum has been identified as a viable solution for realizing high data rate communications between connected vehicles [1, 2, 3, 4, 5]. The communication links are, however, characterized by high atmospheric absorption and hence can be operational only in short-range line-of-sight scenarios with highly directional beams realized through analog or digital beamforming at the transmitter/receiver. Digital beamforming allows for multiple simultaneous beams but is costly and complicated to implement since multiple phase and time-synchronized RF/mmW chains are required [6]. Analog beamforming is less costly since it involves a single beam at a time. But there is considerable overhead expended by the communication protocol and a long search/exploration time to scan the entire field of view and select the best beams for each mobile user (MU). This results in high latency and shorter service/exploitation time available for communication causing low throughput.
There have been several recent works that have applied multi-armed bandit (MAB) algorithms for reducing the exploration time of the best beams in order to increase the exploitation time for subsequent mmW communications [7, 8, 9, 10]. MAB algorithms are a class of algorithms within the reinforcement learning framework which provides a basis for making decisions under uncertainty. MAB-based beam selection works, such as [8, 9], have relied on a strategy where the base station (BS) waits for the feedback (reward) over the uplink for the beam selection in subsequent time slots. In such time-slotted communication, the transmitter can switch the beam only once in a slot, and the duration of each slot depends on the time taken by the MU to process the downlink signal and share the feedback over the uplink.
The use of radar signals for detecting the presence of targets can potentially speed up the beam search. However, there are certain challenges in integrating radar with the communication physical layer. An auxiliary radar sensor for detecting the MU cannot be considered, as it would increase the cost and complexity of the system. Further, the radar and communication functionalities would have to be synchronized as well as managed for interference. Instead, we propose that an integrated sensing and communication system be utilized for mmW communication such as those proposed by [11, 12]. Here, a common waveform on a common spectrum is used for joint radar sensing and communications. Hence, no separate hardware/spectrum/synchronization or interference management is required to support both functionalities. Note that joint radar communication (JRC) systems have been explored over the last several decades to tackle spectral congestion issues [13]. While some works have studied how to manage the mutual interference that arises from the coexistence of both systems on a common spectrum [14], others have exploited the communication signal as an opportunistic illumination for passive radar receivers [15]. We identify our work to belong to the third category of research that explores the collaborative design of JRC systems to improve the performance of each functionality [16, 17].
In this work, we propose incorporating a radar sensing mechanism into the MAB framework at the BS to overcome the limitations listed above. In the proposed framework, the radar at the BS is used to detect the presence of MU in the candidate beams based on the strength of the scattered signal (_amplitude gated radar enhanced MAB_) and the Doppler frequency shift (_Doppler gated radar enhanced MAB_) introduced to the radar signal. Only those beams that indicate the presence of a mobile radar target will be further scanned for the presence of a MU. Radar detection-based decision-making will be much faster than communication metric-based decision-making due to multiple reasons. First, the feedback for a radar signal is nearly instantaneous since it is based on the electromagnetic scattering of the signal by mobile targets. Second, the exploration time is substantially reduced by restricting the number of candidate beams that have to be scanned. Due to these factors, the overall exploration time will be reduced resulting in rapid beam alignment and improved overall communication throughput.
_Notation:_ In our paper, scalar variables, vectors, and matrices are denoted with regular, and boldface lower and upper case characters respectively. Vector superscript \(T\) and symbol \(\otimes\) denote transpose and convolution operations.
## II Rad-Com Signal Model
We first present the signal models for the JRC transmitter and receiver based on the IEEE 802.11ad protocol where the Golay sequences in the communication frame are exploited for radar sensing [11, 12, 17]. The digital waveform \(\mathbf{x}_{q}[m]\) corresponds to the Golay sequence in the \(q^{th}\) packet transmitted at a pulse repetition interval of \(T_{P}\) with \(m=1,2,\ldots,M\) samples. These digital packets are then converted into analog signals \(\mathbf{x}_{q}(t)\) at the BS as follows:
\[\mathbf{x}_{q}(t)=\sum_{m=1}^{M}x_{q}[mT_{s}]\delta\left(t-mT_{s}-(q-1)T_{P}\right), \tag{1}\]
where \(T_{s}\) is the sampling time. The signal is then amplified with energy \(E_{s}\), convolved with a transmit shaping filter, \(\mathbf{g}_{T}\), and then passed through analog upconversion to the mmW carrier frequency \(f_{c}\) as
\[\mathbf{x}_{q_{ue}}(t)=\sqrt{E_{s}}\left(\mathbf{x}_{q}(t)\otimes\mathbf{g}_{T} (t)\right)e^{+j2\pi f_{c}t}. \tag{2}\]
The upconverted signal is then transmitted via analog beamforming through a uniform linear array (ULA) of \(P_{BS}\) elements after applying the complex antenna weight vector at the BS transmitter (BS-TX), \(\mathbf{w}_{BS_{\theta}}\in\mathcal{C}^{P_{BS}\times 1}\), for a given angle \(\theta\), resulting in
\[\mathbf{X}_{q_{ue}}(t)=\mathbf{w}_{BS_{\theta}}\mathbf{x}_{q_{ue}}^{T}(t). \tag{3}\]
Here, \(\mathbf{w}_{BS_{\theta}}=[1\ e^{-jk_{c}d_{BS}\sin\theta}\ \cdots\ e^{-jk_{c}d_{BS}(P_{BS}-1)\sin\theta}]\) where \(k_{c}\) is the propagation constant and \(d_{BS}\) is the uniform element spacing. _It is important to note that the problem of searching for a new beam only arises for a MU that has changed its position and not a static user._
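As a concrete illustration of Eq. (3), the following minimal Python sketch forms the ULA steering weights and applies them to a placeholder waveform; the array size, spacing, steering angle, and waveform here are illustrative assumptions rather than the exact simulation settings.

```python
import numpy as np

def ula_weights(theta, n_elem, d, wavelength):
    """Analog beamforming weights w_p = exp(-j*k_c*d*p*sin(theta)) for a ULA,
    matching the form of w_BS_theta above (p = 0, ..., P-1)."""
    k_c = 2.0 * np.pi / wavelength          # propagation constant
    p = np.arange(n_elem)                   # element indices
    return np.exp(-1j * k_c * d * p * np.sin(theta))

# Illustrative 60 GHz setup: 32 elements at half-wavelength spacing.
lam = 3e8 / 60e9
w = ula_weights(np.deg2rad(20.0), n_elem=32, d=lam / 2.0, wavelength=lam)

# Eq. (3): outer product of the weight vector with the waveform samples.
x = np.ones(64, dtype=complex)              # stand-in for the samples of x_q
X = np.outer(w, x)                          # shape (P_BS, M)
print(X.shape)
```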
**Radar received signal**: Along a pre-determined beam angle \(\theta\), we assume that there are \(B\) radar targets present in the channel including MU and other discrete clutter scatterers. Then the received signal at the \(P_{BS}\)-element ULA at the BS receiver (BS-RX) after being reflected from the targets, is
\[\mathbf{\hat{x}}_{q}(t)=\sum_{b=1}^{B}\sigma_{b}\mathbf{w}_{BS_{\theta}} \mathbf{u}_{\theta}\mathbf{H}_{\mathbf{r}}{}^{2}\mathbf{u}_{\theta}^{T}\left[ \mathbf{X}_{q_{ue}}(t-2\tau_{b})\right]+\rho(t), \tag{4}\]
where \(\tau_{b}\) denotes the time delay caused by one-way propagation and \(\sigma_{b}\) is the strength of the reflection from each \(b^{th}\) point target obtained from the Friis radar range equation. \(\mathbf{H}_{\mathbf{r}}{}^{2}\) is the \(P_{BS}\times P_{BS}\) channel matrix that includes the direct path and multipath due to static clutter scatterers (SCS) present in the environment, modeled in a manner described in [18] but for two-way propagation. The steering vector from the ULA corresponding to \(\theta\) is given by \(\mathbf{u}_{\theta}^{T}=[1\ e^{jk_{c}d_{BS}\sin\theta}\ \cdots\ e^{jk_{c}d_{BS}(P_{BS}-1)\sin\theta}]\) while \(\rho\) is the additive circular symmetric white Gaussian noise at the BS-RX. We assume that each radar target is moving with a constant radial velocity \(v_{b}\), such that the Doppler shift is \(f_{b}=2v_{b}/\lambda\) where \(\lambda\) is the wavelength. After down-conversion and digitization, the received radar signal is
\[\mathbf{\hat{x}}_{q}[m]=\sum_{b=1}^{B}\sigma_{b}\mathbf{w}_{BS}\mathbf{u}_{b} ^{T}\mathbf{X}_{q_{ue}}\left[m-m_{b}\right]e^{-j2\pi f_{b}qT_{P}}+\rho, \tag{5}\]
where, \(m_{b}\) is the sample index corresponding to \(\tau_{b}\).
**Radar signal processing:** The radar received signal at the BS-RX gathered over \(Q\) packets is first arranged into a radar data matrix of dimension \([M\times Q]\), which is then passed to the radar signal processing block to obtain the range and Doppler of the corresponding target. The range estimation output, \(\chi_{\mathbf{q}}\), is obtained through matched filtering for each \(q^{th}\) packet, \(\chi_{\mathbf{q}}=\mathbf{\hat{x}}_{q}\otimes\mathbf{x}_{q}\,.\) The output is processed through the ordered-statistics constant false alarm rate (OS-CFAR) detector to estimate each \(a^{th}\) peak with amplitude, \(\hat{\sigma}_{a}\), at the range \(\hat{r}_{a}\). This \(\hat{\sigma}_{a}\) information is used subsequently for the _amplitude gated radar-enhanced MAB algorithm_ discussed in the next section. Next, Doppler estimation is carried out through one-dimensional multiple signal classification (MUSIC) for each \(a^{th}\) peak across the \(Q\) packets to estimate the corresponding \(\hat{f}_{a}\). This \(\hat{f}_{a}\) information is used for the _Doppler gated radar-enhanced MAB algorithm_ discussed in the next section.
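To make the range-processing chain concrete, the sketch below shows matched filtering followed by a simplified ordered-statistics CFAR detector; the window sizes, rank, and threshold scale are illustrative assumptions, and the MUSIC-based Doppler stage is omitted.

```python
import numpy as np

def matched_filter(rx, code):
    """Range profile chi_q: correlate the received samples with the Golay code."""
    return np.correlate(rx, code, mode="full")[len(code) - 1:]

def os_cfar(profile, guard=2, train=8, rank=6, scale=4.0):
    """Simplified OS-CFAR: a cell is declared a detection when it exceeds the
    rank-th ordered training-cell magnitude times a scale factor. All window
    and threshold parameters here are illustrative assumptions."""
    mag = np.abs(profile)
    peaks = []
    for i in range(guard + train, len(mag) - guard - train):
        left = mag[i - guard - train:i - guard]
        right = mag[i + guard + 1:i + guard + 1 + train]
        noise = np.sort(np.concatenate((left, right)))[rank]
        if mag[i] > scale * noise:
            peaks.append((i, mag[i]))   # (range bin r_hat_a, amplitude sigma_hat_a)
    return peaks
```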
**Communication received signal and processing:** The one-way propagated communication signal, \(\mathbf{\hat{x}}(t)\), is received at the \(P_{MU}\) element ULA at the MU receiver (MU-RX) as shown in
\[\mathbf{\hat{x}}(t)=\mathbf{w}_{MU_{\phi}}\mathbf{u}_{\phi}\mathbf{H}_{\mathbf{ c}}\mathbf{u}_{\theta}^{T}\left[\mathbf{X}_{q_{ue}}(t-\tau_{b})\right]+\delta(t). \tag{6}\]
Here, \(\mathbf{w}_{MU_{\phi}}\) represents the weights applied at the MU-RX and \(\mathbf{u}_{\phi}=[1\ e^{jk_{c}d_{MU}\sin\phi}\ \cdots]\) is the steering vector for the BS at \(\phi\) for \(d_{MU}\) antenna element spacing. \(H_{c}\) is the \(P_{BS}\times P_{MU}\) one-way propagation channel matrix [18] and \(\delta\) is the additive circular-symmetric white Gaussian noise at the MU-RX. The signal \(\mathbf{\hat{x}}(t)\) is received and processed by the MU, and the corresponding signal-to-noise ratio (SNR) is sent back to the BS as uplink feedback. Note that the processing time at the MU results in a greater delay for the uplink signal to return to the BS compared to the nearly instantaneous radar-scattered signal. Second, the uplink signal \(\mathbf{\hat{x}}\) is distinguished from the radar echo \(\mathbf{\hat{x}}_{q}\) at the BS-RX through cross-correlation with \(\mathbf{x}\). Due to the nature of the Golay sequence, the peak-to-sidelobe ratio after cross-correlation for the uplink signal is very high compared to that of the radar echo.
## III Proposed MAB Framework for JRC
In this section, we set up the beam-selection problem between the BS and MU as a MAB and develop the algorithms that speed up the beam selection. The standard stochastic MAB consists of a set of \(\mathcal{K}\) arms (predetermined beams) and a single player (the BS-TX/RX) as shown in Fig. 1a. In each time slot, the BS-TX/RX selects a single beam \(k\) and receives the reward - the SNR of the communication link, \(S_{k}\), obtained from uplink feedback along the beam. For each arm, the reward is assumed to be drawn independently across time from distributions that are stationary and independent across arms. The performance metric is the difference between the SNR of the optimal beam and the SNR over the selected beam. We define this as the regret, which is given as
\[R=TS_{k^{\prime}}-\mathbb{E}\left[\sum_{k\in\mathcal{K}}S_{k}N_{k}\right] \tag{7}\]
where \(T\) is the total number of time slots, \(N_{k}\) is the number of times the beam \(k\) is selected by BS and \(k^{\prime}=\underset{k\in\mathcal{K}}{\mathrm{argmax}}\ S_{k}\). The expectation here is with respect to the random number of pulls of the arms \((N_{k})\). Thus, the regret can be minimized by selecting the optimal beam, \(k^{\prime}\), i.e., the beam with the highest SNR as many times as possible in a given horizon of size \(T\). In this paper, we limit our discussion to the upper confidence bound-based (UCB) MAB algorithm [19] and provide regret bounds. The proposed idea can be easily extended to other MAB algorithms such as UCB variants and Thompson Sampling.
### _SNR Based Beam Selection using UCB_
We first present the conventional approach for beam selection using the UCB algorithm given in Algorithm 1: \(\mathbf{UCB}_{SNR}\). The communication is assumed to be time-slotted. At each \(t^{th}\) time slot, the BS-TX transmits over the beam selected by the UCB algorithm. The algorithm selects each of the \(\mathcal{K}\) beams once at the beginning (lines 4-6), and thereafter, the beam selection is based on the UCB index (lines 7-8). Here, \(T_{RSP}\) denotes the time slots required for radar signal processing (RSP), and it is set to 0 for \(\mathbf{UCB}_{SNR}\). We denote the index of the selected beam and the corresponding reward, i.e., the instantaneous normalized SNR, as \(I_{t}\) and \(W_{t}\), respectively. At the end of each time slot, the parameters are updated (line 12). The UCB index is calculated for each beam separately and it is given as
\[UCB_{k}(t)=\frac{\hat{S}_{k}}{N_{k}}+\sqrt{\frac{2\log(t)}{N_{k}}}, \tag{8}\]
where \(\hat{S}_{k}\) denotes the cumulative reward of the \(k^{th}\) arm using samples obtained till time \(t\), so that \(\hat{S}_{k}/N_{k}\) is its empirical mean. The expected regret of \(\mathbf{UCB}_{SNR}\) scales as \(\mathcal{O}(\sum_{k\in K\setminus k^{\prime}}\frac{\log T}{\Delta_{k}})\)[20] where \(\Delta_{k}=S_{k^{\prime}}-S_{k}\) for all \(k\neq k^{\prime}\). Furthermore, it suffers from a high exploration time, especially when \(\mathcal{K}\) is large. Both these drawbacks limit the usefulness of \(\mathbf{UCB}_{SNR}\) for mmW communication with a large number of narrow directional beams.
```
1:Input:\(\mathcal{K},T,T_{RSP}\)
2:Initialize:\(N_{k}\gets 0\) and \(S_{k}\gets 0\) for all \(k\)
3:for\(t=T_{RSP}+1,\ldots,T\)do
4:if\(t\leq\mathcal{K}\)then
5: Select beam, \(I_{t}=t\).
6:else
7:\(\forall k\in[\mathcal{K}]\) : compute \(UCB_{k}(t)\) as given in Eq. (8)
8: Select beam, \(I_{t}=\arg\max_{k\in[K]}UCB_{k}(t)\)
9:endif
10: BS-TX transmits a data frame over \(I_{t}\)
11: MU observes instantaneous normalized SNR, \(W_{t}\) and communicate to BS-RX over the uplink.
12:\(N_{I_{t}}\gets N_{I_{t}}+1\) and \(S_{I_{t}}\gets S_{I_{t}}+W_{t}\).
13:endfor
```
**Algorithm 1**\(\mathbf{UCB}_{SNR}\): SNR Based Beam Selection
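For illustration, a minimal Python sketch of Algorithm 1 is given below. The reward oracle, beam count, and SNR model are assumptions for the example; in the system above, the reward would be the normalized SNR fed back by the MU over the uplink.

```python
import numpy as np

def ucb_snr(n_beams, horizon, reward_fn, t_rsp=0):
    """UCB beam selection in the spirit of Algorithm 1. reward_fn(k) returns
    the normalized SNR observed on beam k (here, any stochastic oracle)."""
    N = np.zeros(n_beams)          # pull counts N_k
    S = np.zeros(n_beams)          # cumulative rewards S_k
    for t in range(t_rsp + 1, horizon + 1):
        if t - t_rsp <= n_beams:                 # play each beam once first
            k = (t - t_rsp) - 1
        else:                                    # UCB index of Eq. (8)
            k = int(np.argmax(S / N + np.sqrt(2.0 * np.log(t) / N)))
        w = reward_fn(k)                         # observe W_t
        N[k] += 1
        S[k] += w
    return N, S

# Toy usage: 41 beams whose mean SNRs are fixed but unknown to the learner.
rng = np.random.default_rng(0)
means = rng.uniform(0.1, 0.9, size=41)
N, S = ucb_snr(41, 2000, lambda k: float(np.clip(rng.normal(means[k], 0.1), 0, 1)))
print("most-played beam:", N.argmax(), "| best beam:", means.argmax())
```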
### \(\mathbf{UCB}_{SNR\_AG}\)_: Amplitude Gated Radar-Enhanced MAB_
In this section, we augment the \(\mathbf{UCB}_{SNR}\) with the proposed radar-based target detection as shown in Fig. 1b and described in Algorithm 2: \(\mathbf{UCB}_{SNR\_AG}\). Here, a radar target is detected in a beam when the _amplitude/strength_ of the scattered signal in any one or more of the range bins within the beam is above a pre-set threshold determined by CFAR. The number of beams where potential targets are detected is \(\tilde{\mathcal{K}}\) (dark-colored beams in the figure) where \(\mathcal{K}\geq\tilde{\mathcal{K}}\). Compared to Algorithm 1, the number of available beams is updated based on radar target detection during the first time slot (line 3 of Algorithm 2). Compared to \(\mathbf{UCB}_{SNR}\), \(\mathbf{UCB}_{SNR\_AG}\) potentially offers lower regret for the following reasons: 1) Faster target detection: The identification of the presence of targets using radar is significantly faster, since returns of the scattered signals from the short-range targets are nearly instantaneous with a short round-trip delay of the order of a few \(ns\). For 5G, one slot is at least 4 \(ms\) assuming a downlink sub-frame (1 \(ms\)), an uplink sub-frame for reward feedback (1 \(ms\)), downlink data processing (1 \(ms\)), and uplink data processing (1 \(ms\)). On average, the RSP time \(T_{RSP}\) for 10 radar packets is 36 \(ms\), i.e., 9 slots are sufficient to find \(\tilde{\mathcal{K}}\)[21]; 2) The proposed algorithm focuses on a subset of the total beams in which a mobile target may be present, which in turn reduces the exploration time. To quantify this gain, let us fix a bandit instance. The set of beams detected by the radar, \(\tilde{\mathcal{K}}\), is a random variable depending on the distribution of scatterers. We can assume \(\tilde{\mathcal{K}}\) includes the optimal arm in each realization, as it has the maximum signal strength and the radar is unlikely to miss the MU. Hence the optimal arm is the same in any realized set \(\tilde{\mathcal{K}}\). The expected regret over the set \(\tilde{\mathcal{K}}\) is \(\mathcal{O}(\sum_{k\in\tilde{\mathcal{K}}\setminus k^{\prime}}\frac{\log T}{\Delta_{k}})\). Clearly this bound is smaller than \(\mathcal{O}(\sum_{k\in\mathcal{K}\setminus k^{\prime}}\frac{\log T}{\Delta_{k}})\) obtained for the previous case. Taking expectation over the random realizations \(\tilde{\mathcal{K}}\), we obtain the expected regret of \(\mathbf{UCB}_{SNR\_AG}\) as \(\mathbb{E}\left[\mathcal{O}(\sum_{k\in\tilde{\mathcal{K}}\setminus k^{\prime}}\frac{\log T}{\Delta_{k}})\right]\leq\mathcal{O}(\sum_{k\in\mathcal{K}\setminus k^{\prime}}\frac{\log T}{\Delta_{k}})\). Thus the regret of \(\mathbf{UCB}_{SNR\_AG}\) is lower than that of \(\mathbf{UCB}_{SNR}\), resulting in an improvement in the performance.
```
1:Input:\(\mathcal{K}\)
2:Output:\(\tilde{\mathcal{K}}\)
3:for\(\theta=1,\ldots,\mathcal{K}\)do
4:\(\chi_{\theta}\): Matched filtering across fast time samples
5: CFAR detection:
6:if\(\chi_{\theta}\geq\gamma\)then
7: Include beam, \(\theta\) in subset \(\tilde{\mathcal{K}}\)
8:endif
9:endfor
```
**Subroutine 1 AG**: \(\tilde{\mathcal{K}}\) Beams Selection Based on Amplitude Detection
### \(\mathbf{UCB}_{SNR\_DG}\)_: Doppler Gated Radar-Enhanced MAB_
In the \(\mathbf{UCB}_{SNR\_AG}\) algorithm, all the beams where radar targets are present are selected. However, some of these beams correspond to SCS as shown in Fig. 1c. The SCS are of two types: some give rise to direct scattering at the radar (termed \(SCS_{1}\)) with zero Doppler, while others give rise to Doppler-shifted returns at the radar through multipath with respect to the MU (termed \(SCS_{2}\)). The proposed Doppler-enhanced MAB algorithm is described in Algorithm 3: \(\mathbf{UCB}_{SNR\_DG}\). Since direct path returns from SCS do not give any information regarding the MU, they can be excluded from the list of candidate beams based on the Doppler shift estimated from the radar signal processing described earlier. Hence the total number of beams where potential MUs are detected, denoted \(\tilde{\mathcal{K}}_{DG}\), satisfies \(\mathcal{K}\geq\tilde{\mathcal{K}}\geq\tilde{\mathcal{K}}_{DG}\). Thus, fewer candidate beams (dark-colored beams in the figure) result in lower exploration time. The theoretical explanation for the reduction in beams follows the same logic provided for the previous algorithm and hence is not repeated here. Note
Fig. 1: System model showing (a) Standard non-radar based MAB beam selection, (b) Amplitude Gated Radar-Enhanced MAB, (c) Doppler Gated Radar-Enhanced MAB. Beams with red and green boxes indicate zero and non-zero Doppler targets respectively.
that static communication targets are not likely to have first triggered the necessity for the selection of a new beam by the BS and hence, static targets can be interpreted safely as SCS.
```
1:Input:\(\mathcal{K}\)
2:Output:\(\tilde{\mathcal{K}}\)
3:for\(\theta=1,\ldots,\mathcal{K}\)do
4: 1D-MUSIC for \(\hat{r}_{a}\) across Q packets
5:if\(\hat{f}_{a}\neq 0\)then
6: Include beam, \(\theta\), in subset \(\tilde{\mathcal{K}}\)
7:endif
8:endfor
```
**Subroutine 2 DG**: \(\tilde{\mathcal{K}}\) Beams Selection Based on Doppler Estimation
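The two gating subroutines reduce, in essence, to set construction over per-beam detections. The sketch below illustrates this logic, assuming each beam carries (range bin, amplitude, Doppler) triples from the matched-filter/OS-CFAR and MUSIC stages sketched earlier; the data layout and the Doppler tolerance are assumptions for the example.

```python
def amplitude_gate(detections_per_beam):
    """AG subroutine: keep beams with at least one CFAR detection."""
    return {beam for beam, dets in detections_per_beam.items() if len(dets) > 0}

def doppler_gate(detections_per_beam, f_tol=1.0):
    """DG subroutine: keep beams with at least one detection whose estimated
    Doppler exceeds a small tolerance (static clutter has f_hat ~ 0)."""
    return {beam for beam, dets in detections_per_beam.items()
            if any(abs(f_hat) > f_tol for (_, _, f_hat) in dets)}

# dets are (range_bin, amplitude, doppler_hz) triples per beam (illustrative).
beams = {0: [], 1: [(12, 5.1, 0.0)], 2: [(30, 4.2, 240.0)]}
print(amplitude_gate(beams))   # {1, 2}: any target retained
print(doppler_gate(beams))     # {2}: only the nonzero-Doppler beam retained
```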
## IV Performance Analysis
We consider a three-dimensional (3D) Cartesian coordinate space with the ground plane defined by the \(x\) and \(y\) axes and the height axis along \(z\). The BS is located at \([0,0,0]\) m with a \(y\)-aligned uniform linear array (ULA) of 32 antennas with an antenna spacing of \(\lambda/2\), where \(\lambda\) is the wavelength corresponding to the center frequency \(f_{c}\) of 60 GHz. We adopt a 16-QAM modulation and coding scheme with 512 OFDM subcarriers and a signal bandwidth of 1.76 GHz. We assume that the channel consists of a single MU and multiple SCS as shown in Fig. 1. The radar scattered returns of each of these are confined to a single beam. We assume that the MU is also equipped with a 32-element ULA and is initially located at \([50,20,0]\) m, subsequently moving with a constant velocity of \(v\) m/s along the \(x\) axis. Both MU and SCS are modeled as isotropic point scatterers, and the SCS are distributed randomly across the 3D Cartesian space.
The throughput, \(\Upsilon\), is calculated as \(\left(1-\frac{\sum_{i=1}^{N_{t}}BER_{i}}{N_{t}}\right)\frac{D}{T_{d}}\), where \(BER_{i}\) corresponds to the bit error rate of the \(i^{th}\) time slot, and \(D\) and \(T_{d}\) correspond to the total number of bits and the time duration of each slot, respectively. We benchmark the \(\Upsilon\) performance of the proposed algorithms, \(\mathbf{UCB}_{SNR\_AG}\) and \(\mathbf{UCB}_{SNR\_DG}\), against the conventional \(\mathbf{UCB}_{SNR}\), the lower upper confidence bound (LUCB) described in [22], digital beamforming (DBF), and a trivial random beam selection approach. We present the effects of the number of targets, the number of beams, the Doppler velocity resolution, and the radar receiver SNR on \(\Upsilon\). Each result presented in this section is obtained after averaging over 15 independent experiments, and each experiment's duration/horizon is 2000 time slots.
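For clarity, the throughput metric above amounts to the following computation (the values shown are illustrative, not the simulation's):

```python
import numpy as np

def throughput(ber, bits_per_slot, slot_duration):
    """Upsilon = (1 - sum(BER_i)/N_t) * D / T_d, with ber a length-N_t array."""
    return (1.0 - np.mean(ber)) * bits_per_slot / slot_duration

# e.g. three slots with small bit error rates, D = 2048 bits, T_d = 4 ms.
print(throughput(np.array([0.0, 0.01, 0.02]), bits_per_slot=2048, slot_duration=4e-3))
```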
**Effect of Number of Radar Scatterers:** In Fig. 2(a), we compare \(\Upsilon\) of all the algorithms at different instants of the time horizon. Here, we assume one MU and vary the number of SCS. The Doppler velocity of the MU is fixed to 3 m/s, the angular resolution is \(4^{\circ}\) resulting in a total of 41 candidate beams spanning from \(-80^{\circ}\) to \(80^{\circ}\), and the Doppler velocity resolution is 1 m/s. It can be observed in the figure that the proposed algorithms offer higher \(\Upsilon\) than the benchmarked approaches (except for DBF) due to faster identification of the optimal beam. DBF provides the best-case results since all the beams are tested simultaneously. However, this approach is not pursued since the implementation of multiple synchronized receiver chains is costly and complex. In Fig. 2(b), we compare \(\Upsilon\) of all algorithms at the end of the horizon as the number of SCS increases. We observe that the performance of all MAB-based approaches is significantly better than the random selection approach, validating the need for a learning algorithm.
time slots for a high SNR (10 dB). We observe in Fig. 5(b) that the regret improves for \(\mathbf{UCB}_{SNR\_AG}\) and \(\mathbf{UCB}_{SNR\_DG}\) with increasing SNR. At lower SNR, the poor prediction of the target presence amidst noise results in higher regret. Note that since we consider the SNR with respect to the radar receiver, a change in SNR does not have any impact on the random beam selection or \(\mathbf{UCB}_{SNR}\) algorithms.
**Impact of Doppler Processing:** Further, we discuss the scenario where the number of \(SCS_{1}\) is greater than \(SCS_{2}\), such that the candidate beams selected by \(\mathbf{UCB}_{SNR\_DG}\) are significantly fewer than those selected by \(\mathbf{UCB}_{SNR\_AG}\). In Fig. 6(a), we consider one MU, one \(SCS_{2}\) and two \(SCS_{1}\). The angular resolution is \(4^{\circ}\) and the velocity resolution is 1 m/s.
Since \(SCS_{2}\) gives rise to Doppler shift at the BS through multipath from the MU, the corresponding beam is retained for further scanning while the beams corresponding to the two \(SCS_{1}\) that give rise to zero-Doppler shift are excluded in \(\mathbf{UCB}_{SNR\_DG}\). This results in better performance of the algorithm compared to \(\mathbf{UCB}_{SNR\_AG}\). Further, the improvement is greater as the number of \(SCS_{1}\) increases for a fixed number of \(SCS_{2}\) as seen in Fig. 6(b).
## V Conclusion
In this work, we demonstrate how radar-enhanced MAB within a JRC BS can substantially reduce the exploration time by selecting only those candidate beams that detect the presence of radar targets of which the MU may be one. Further reduction in the exploration time is realized by distinguishing SCS from MU through radar-based Doppler estimation. Simulation results demonstrate an overall improvement in the communication link metrics with the reduction in the exploration time through radar-enhanced MAB when compared with conventional MAB algorithms.
|
2305.17157 | The Orbital Eccentricity Distribution of Planets Orbiting M dwarfs | We investigate the underlying distribution of orbital eccentricities for
planets around early-to-mid M dwarf host stars. We employ a sample of 163
planets around early- to mid-M dwarfs across 101 systems detected by NASA's
Kepler Mission. We constrain the orbital eccentricity for each planet by
leveraging the Kepler lightcurve together with a stellar density prior,
constructed using metallicity from spectroscopy, Ks magnitude from 2MASS, and
stellar parallax from Gaia. Within a Bayesian hierarchical framework, we
extract the underlying eccentricity distribution, assuming alternately
Rayleigh, half-Gaussian, and Beta functions for both single- and multi-transit
systems. We describe the eccentricity distribution for apparently
single-transiting planetary systems with a Rayleigh distribution with sigma =
0.19 (+0.04, -0.03), and for multi-transit systems with sigma = 0.03 (+0.02,
-0.01). The data suggest the possibility of distinct dynamically warmer and
cooler sub-populations within the single-transit distribution: The
single-transit data prefer a mixture model composed of two distinct Rayleigh
distributions with sigma_1 = 0.02 (+0.11, -0.00) and sigma_2 = 0.24 (+0.20,
-0.03) over a single Rayleigh distribution, with 7:1 odds. We contextualize our
findings within a planet formation framework, by comparing them to analogous
results in the literature for planets orbiting FGK stars. By combining our
derived eccentricity distribution with other M dwarf demographic constraints,
we estimate the underlying eccentricity distribution for the population of
early- to mid-M dwarf planets in the local neighborhood. | Sheila Sagear, Sarah Ballard | 2023-05-26T18:00:00Z | http://arxiv.org/abs/2305.17157v1 | # The Orbital Eccentricity Distribution of Planets Orbiting M dwarfs
###### Abstract
We investigate the underlying distribution of orbital eccentricities for planets around early-to-mid M dwarf host stars. We employ a sample of 163 planets around early- to mid-M dwarfs across 101 systems detected by NASA's _Kepler_ Mission. We constrain the orbital eccentricity for each planet by leveraging the _Kepler_ lightcurve together with a stellar density prior, constructed using metallicity from spectroscopy, \(K_{s}\) magnitude from 2MASS, and stellar parallax from Gaia. Within a Bayesian hierarchical framework, we extract the underlying eccentricity distribution, assuming alternately Rayleigh, half-Gaussian, and Beta functions for both single- and multi-transit systems. We describe the eccentricity distribution for apparently single-transiting planetary systems with a Rayleigh distribution with \(\sigma=0.19^{+0.04}_{-0.03}\), and for multi-transit systems with \(\sigma=0.03^{+0.02}_{-0.01}\). The data suggest the possibility of distinct dynamically warmer and cooler sub-populations within the single-transit distribution: The single-transit data prefer a mixture model composed of two distinct Rayleigh distributions with \(\sigma_{1}=0.02^{+0.11}_{-0.00}\) and \(\sigma_{2}=0.24^{+0.20}_{-0.03}\) over a single Rayleigh distribution, with 7:1 odds. We contextualize our findings within a planet formation framework, by comparing them to analogous results in the literature for planets orbiting FGK stars. By combining our derived eccentricity distribution with other M dwarf demographic constraints, we estimate the underlying eccentricity distribution for the population of early- to mid-M dwarf planets in the local neighborhood.
exoplanets | M dwarf | orbital eccentricities | transit | planetary dynamics
Orbital eccentricity is a fundamental property of exoplanets. In addition to quantifying the current dynamical state, eccentricity also encodes information about planetary formation. Because eccentricity is associated with both variable insolation and tidal heating, it is also relevant to planetary habitability [(1, 2, 3)]. For planets orbiting M dwarfs in particular, the relative proximity of the habitable zone to the star [(4)] means that even modest eccentricities can render planets inhospitable to life. For example, an Earth-like planet orbiting a 0.25 \(M_{\odot}\) star with an eccentricity \(e>0.2\) could experience a "tidal Venus" catastrophe, with tidal heating sufficient to evaporate a water ocean [(5)].
Investigating orbital eccentricity for planets orbiting M dwarfs is also pressing from a population standpoint: M dwarfs host planets smaller than Neptunes at a rate 3.5 times higher than Sunlike stars [(6)]. They are themselves the most common type of star in our galaxy (e.g. [(7)]; for review see [(8)]), and the planet-to-star size ratio renders them especially appealing for both detection and follow-up efforts [(9)]. All of these factors conspire to make them very likely targets for follow-up surveys to search for life [(10)]. For these reasons, the eccentricities of planets around M dwarfs is of particular interest. The high eccentricity of the very first transiting planet discovered to orbit an M dwarf, GJ 436b [(11, 12)], is still mysterious fifteen years after its detection: while the Neptune-sized planet ought to have circularized over the age of its star, its high eccentricity persists without an as-yet detected perturber [(13)].
Much of our demographic information about planetary eccentricity, however, is obtained through radial velocity measurements and is focused upon Sunlike stars. Eccentricity is observationally correlated with planet multiplicity, with single-planet systems exhibiting higher eccentricities on average [(14)]. This relationship is moderated by both stellar metallicity and stellar binarity, however, with higher metallicity stars and stars in binary systems tending to host more eccentric planets [(15, 16, 17)]. Studies on the eccentricities of M dwarf planets tend to have smaller sample sizes, with patterns still newly emergent. Similarly to Sunlike stars [(17)], stellar binarity appears to be a good predictor of increased eccentricity among M dwarf planets [(18)]. In addition, older M dwarfs host modestly eccentric planets as well as their younger counterparts [(19)].
A larger-scale study of the eccentricities of M dwarf planets would be helpful toward understanding their demographics. Yet, the gold standard for measuring orbital eccentricity, the radial velocity phase curve, is both time- and resource-intensive. It is also possible to measure eccentricities in closely-packed transiting planets in (near)resonant configurations that exhibit transit-timing variations (TTVs) [(20)], but only about a hundred are known. There exists another opportunity for transiting planets, even without radial velocity data or TTVs, to roughly constrain orbital eccentricity: the "photoeccentric effect". First described by [(21)] and [(22)] and applied by [(23)], it relies on the relationship between the orbital speed (variable for an eccentric planet) and the transit duration.
With strong constraints on stellar density, deviation from the standard circular orbital speed is encoded in a measurably short or long transit duration [(23)]. While first deployed for Hot Jupiters, the effect can be leveraged to constrain eccentricities for large samples of smaller planets, provided there are sufficiently informative density priors [(24, 25, 26)]. Previously, these samples have included stellar hosts with spectroscopically-constrained densities [(27, 28, 15)] or even asteroseismically constrained densities [(30, 29)]. While previous studies have mostly focused upon FGK dwarfs, [(18)] focused upon a sample of 8 M dwarfs. These stars possessed parallax measurements, enabling strong enough constraints on their densities to enable the eccentricity characterization of their planets.
In this manuscript, we apply the photoeccentric method to the sample of M dwarf planetary hosts identified by NASA's _Kepler_ Mission, with the goal of estimating their underlying eccentricity distribution. In this work, we favor using Kepler data over TESS data because of its longer observation baseline and higher photometric precision. With the combination of spectroscopy and distance measurements from ESA's Gaia spacecraft, we demonstrate the ability to extract eccentricity constraints from the lightcurves of 163 planets. Within a hierarchical Bayesian framework, we go on to model the underlying eccentricity distribution for this sample. With demographic constraints for the mixture of M dwarf planetary systems in the Milky Way, we estimate the representative eccentricity distribution for the population of early- to mid-M dwarf planets, weighted by occurrence rate, in the local neighborhood. We note that references to "single-transit systems" in this manuscript refer to systems with a single known transiting planet, which may have other non-transiting or undetectable planets.
## Methods
To extract eccentricity measurements for the sample of _Kepler_ M dwarf planets, we take advantage of the so-called "photoeccentric effect", the phenomenon by which an eccentric planet's transit duration (the time for a transiting planet to cross its stellar disk) differs from an analogous circular planet. The total transit duration \(T_{14}\) and total transit duration from second to third contact \(T_{23}\) are directly measurable from transit light curves, as well as the orbital period \(P\), planet-to-star radius ratio \(R_{p}/R_{s}\) (see [(31)] for additional description). [(23)] derived an expression involving \(T_{14}\), \(T_{23}\) and the density of the host star \(\rho_{\star}\) as follows:
\[\rho_{\star}=g^{-3}\bigg{(}\frac{2\delta^{1/4}}{\sqrt{T_{14}^{2}-T_{23}^{2}}} \bigg{)}^{3}\frac{3P}{G\pi^{2}}, \tag{1}\]
where \(g\) is defined to be
\[g(e,\omega)=\frac{1+e\sin\omega}{\sqrt{1-e^{2}}} \tag{2}\]
In this way, \(\rho_{\star}\) is related to two quantities: a quantity dependent entirely on transit observables (\(T_{14}\), \(T_{23}\), \(\delta\), and \(P\)), and a quantity \(g\) that encodes eccentricity information. With prior information about \(\rho_{\star}\), \(g\) is in principle extractable.
Using stellar metallicities and errors from the literature (compiled by [(32)]), stellar parallaxes from Gaia [(33, 34)], and \(K_{S}\) magnitudes from 2MASS [(35)], we calculate the stellar mass using the empirical, semi-model-independent \(M_{K_{S}}\)-\(M_{\star}\)-\([Fe/H]\) relation from [(36)] and the stellar radius using the \(M_{K_{S}}\)-\([Fe/H]\)-\(R_{\star}\) relation from [(37)]. We combine the mass and radius to constrain \(\rho_{\star}\). We use this independent constraint to construct a prior on \(\rho_{\star}\) and fit for transit observables, including the period, transit depth, impact parameter, eccentricity and longitude of periastron, for each planet in our sample. A detailed description of our methodology can be found in the SI Appendix.
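For reference, the sketch below transcribes Eqs. (1)-(2) directly into Python; the function names and the assumption of SI units are ours, and the snippet is illustrative rather than the fitting code used in this work.

```python
import numpy as np

def g_factor(e, omega):
    """g(e, omega) = (1 + e sin(omega)) / sqrt(1 - e^2), as in Eq. (2)."""
    return (1.0 + e * np.sin(omega)) / np.sqrt(1.0 - e**2)

def rho_star_photoecc(P, depth, T14, T23, e=0.0, omega=0.0):
    """Stellar density implied by the transit observables, Eq. (1).
    SI units are assumed (seconds for P, T14, T23; kg/m^3 returned)."""
    G = 6.674e-11
    rho_circ = (2.0 * depth**0.25 / np.sqrt(T14**2 - T23**2))**3 * 3.0 * P / (G * np.pi**2)
    return rho_circ / g_factor(e, omega)**3
```

Comparing this transit-derived density against the independent Gaia/2MASS density prior is what constrains \(g\), and hence \(\{e,\omega\}\).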
## Analysis and Results
The individual \(e\) posteriors for our sample are shown in Figure 1. Corner plots of the \(R_{p}/R_{\star}\), \(e\), \(b\), and \(\rho_{\star}\) posteriors for each fit are shown in the SI Appendix. The modes of each \(e\) posterior and a combined, normalized histogram containing 100 points drawn from each \(e\) posterior are shown in Figure 2. The fit planet parameters are listed in Table 1. Because the \(e\) posterior is highly correlated with other free parameters, we report the mode of each \(e\) posterior along with the \(16^{th}\) and \(84^{th}\) percentiles. With the photoeccentric effect, it is not always possible to precisely constrain \(e\) for a single planet, especially for systems with few observed transits and long-cadence data. We note that even if the mode of a given \(e\) posterior is greater than zero, it is often impossible to rule out \(e=0\).
With eccentricity posteriors in hand, we now investigate the likeliest parent distribution from which these eccentricities are drawn. This requires a careful accounting for detection bias. The higher likelihood of eccentric planets to transit was noted by [(21)] and [(22)], given the boost in transit probability possessed by the planet at periapse. After describing our method of accounting for bias, we then turn to the extraction of the underlying eccentricity distribution. In these sections, we consider several individual functions for modeling the distribution. We then consider the evidence for a mixture model of two distributions (we confirm our ability to accurately recover a known underlying eccentricity distribution of a synthetic sample of transiting planets in the SI Appendix).
### Inference of Parent Eccentricity Distribution.
Following the method of [(30)], we calculate the likelihood of observing our set of \(e\) posteriors, given a parent distribution model with parameters \(\theta\):
\[p(obs|\theta)=\frac{1}{N}\prod_{k=1}^{K}\sum_{n=1}^{N}\frac{p(e_{k}^{n}| \theta)}{p(e_{k}^{n}|\alpha)}, \tag{3}\]
where the number of samples from each posterior \(N=100\), the number of planets \(K=163\), and the prior with parameters \(\alpha\) is defined by \(p(e_{k}^{n}|\alpha)\)[(39)]. We assume that all planets in the population were detected, and the \(e\) posteriors of all planets are independent. Eccentric planets are geometrically likelier to transit overall, due to the enhanced transit probability near periapse (\(\omega=90^{\circ}\)) [(40, 41, 21)]. We use the notation \(\hat{t}\) to indicate a planet that fully transits its star (having an impact parameter \(b<1\)). We now take the non-uniform transit probability into account with the joint (\(e\), \(\omega\)) prior given a transiting planet \(\hat{t}\)
\[p(e,\omega|\hat{t})=\frac{1+e\sin\omega}{1-e^{2}}. \tag{4}\]
Our joint \(\{e,\omega\}\) posteriors, without accounting for this effect, will be biased toward higher eccentricities than the
Figure 1: Individual \(e\) posteriors for each planet, denoted by KOI. Bin widths are arbitrarily set to 0.1. The solid line represents a kernel density estimate of each distribution.
Table 1: Fitted transit parameters for each KOI: LC days, SC days, \(R_{p}/R_{s}\), \(b\), \(\rho_{s}\), \(e\sin\omega\), \(e\cos\omega\), \(e\), and \(\omega\) (deg).
true parent distribution. We account for this non-uniform transit probability by multiplying each eccentricity posterior distribution \(p(e_{k}^{n}|\theta)\) by the reciprocal of the prior \(p(e,\omega|\hat{t})\).
We evaluate Equation 3, replacing the likelihood \(p(e_{k}^{n}|\theta)\) with the appropriate likelihood function for each of the Rayleigh, half-Gaussian, and Beta distributions. We employ a uniform prior for all distribution parameters. According to dynamical simulation models, a Rayleigh distribution appropriately describes the eccentricity distribution of small planets [(42, 43)]. We include the half-Gaussian and Beta distributions to enable a one-to-one comparison with [(30)]. We include the complete likelihood functions in the SI Appendix.
We use a Markov Chain Monte Carlo (MCMC) analysis with the Python package emcee [(44)]. The chains were run with 32 walkers for 10,000 steps each, and we discarded a burn-in phase of 1,000 steps. To demonstrate convergence, we calculate the autocorrelation time as a function of step number for each fit. The final autocorrelation time is smaller than the total number of steps divided by 50. We include the full analysis of autocorrelation times in the Data Supplements.
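As an illustration of how Equation 3 can be evaluated in practice, the sketch below implements the importance-sampled log-likelihood for a Rayleigh parent model, folding the transit-probability correction of Equation 4 into a single weight (a uniform interim prior on \(e\) is assumed for the individual fits). The function names and array layout are ours, and draws with \(e=0\) would need to be handled separately.

```python
import numpy as np

def log_rayleigh(e, sigma):
    """Log-density of a Rayleigh distribution (one candidate parent model)."""
    return np.log(e) - 2.0 * np.log(sigma) - e**2 / (2.0 * sigma**2)

def log_hier_like(e_draws, w_draws, sigma):
    """Eq. (3): average the parent density over each planet's posterior draws,
    dividing out the geometric transit prior of Eq. (4). e_draws and w_draws
    are (K, N) arrays of {e, omega} posterior samples per planet."""
    transit_prior = (1.0 + e_draws * np.sin(w_draws)) / (1.0 - e_draws**2)
    weights = np.exp(log_rayleigh(e_draws, sigma)) / transit_prior
    return np.sum(np.log(np.mean(weights, axis=1)))   # sum over K planets

# This log-likelihood over sigma can be handed directly to an emcee sampler.
```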
Here, we take the step of dividing the sample into singly- and multiply-transiting systems. Per [(30)], [(27)], and [(15)] there is a strong possibility that the single transiting planet systems and multiple transiting planet systems will be drawn from distinct parent distributions. We are doubly motivated by the appearance by eye in Figure 2 of distinct distributions, when comparing the modes of the eccentricity posterior distributions for singles and multis.
#### Single Function Model
We first model the eccentricities with a single Rayleigh, single Beta, and then single half-Gaussian distribution, alternatively with the sample of singly-transiting planets, and then with the sample of multiply-transiting planets. For all trial functions, there is little overlap between the credible intervals for the best-fit model parameters for the populations of single and multiple transit systems. For example, the best-fit Rayleigh distribution for the single transiting planets is characterized by the modest eccentricity \(\sigma=0.19^{+0.04}_{-0.03}\), while the best-fit for the multiple transiting planets is much closer to circular, with \(\sigma=0.03^{+0.02}_{-0.01}\). In Table 2, we show the best-fit parameters for each distribution. Figure 3 shows the best-fit Rayleigh, half-Gaussian, and Beta distributions and errors.
We quantify the conclusion that the single-transiting and multi-transiting samples are best fit by two distinct distributions, rather than one distribution. We evaluate the probability ratio
\[R=\frac{P(D_{1}D_{2}|H_{1})}{P(D_{1}|H_{0})P(D_{2}|H_{0})} \tag{5}\]
[(45, 46, 47)] where \(D_{1}\) and \(D_{2}\) represent two subsets of data. The hypothesis \(H_{1}\) states that the data is best described by a joint fit, and the hypothesis \(H_{0}\) states that the data prefer separate models in different parts of parameter space. \(P(D_{1}D_{2}|H_{1})\) represents a joint fit to the entire dataset, while \(P(D_{1}|H_{0})\) and \(P(D_{2}|H_{0})\) represent individual fits to two data subsets. Where \(P(D_{1}|H_{0})\) and \(P(D_{2}|H_{0})\) are the likelihoods of the best-fit models for the singles and multis subsets, respectively, we find that \(\ln(R)\) is negative for the Rayleigh, half-Gaussian, and Beta distribution models. For the Rayleigh fits, \(\ln(R)=-7.09\); for the half-Gaussian fits, \(\ln(R)=-6.72\); and for the Beta distribution fits, \(\ln(R)=-7.78\). This shows strong evidence for \(H_{0}\), in favor of two distinct models to describe the data [(47)].
#### Mixture Model
The appearance of a distinct eccentricity parent distribution for singly- and multiply-transiting systems hints at a range of underlying dynamical temperatures among the exoplanetary systems [(48)]. In this scenario, planetary systems presenting multiple transits possess lower mutual inclinations on average; by extension due to equipartition, these planets also possess lower eccentricities (e.g. [(49)]). [(50)] and [(51)] quantified the contribution of each of these hypothetical underlying populations to the _Kepler_ planetary yield. Some fraction of the time, _bona fide_ dynamically cool systems of multiple planets will present only a single transiting planet due to geometric probability. In this sense, the sample of single-transit systems can be understood to be "contaminated", in that it contains more than just true dynamically warmer single-planet systems. Rather, it also includes a contribution from dynamically cooler multi-planet systems for which only one transit is observed. Indeed, dynamically cool but geometrically unlucky systems contribute to the population of singly-transiting systems at the level of \(\sim\)50% [(50)]. We hypothesize that the single-planet eccentricity distribution can be modeled by one _or more_ Rayleigh distributions, and pose a test for this hypothesis as a model comparison between a one- and two-Rayleigh distribution model. Both model functions are described by the following expression, which assigns a weight \(f\) to one of the Rayleigh distributions and correspondingly \(1-f\) to the second:
\[M(x)=f\times R_{\sigma_{1}}(x)+(1-f)\times R_{\sigma_{2}}(x) \tag{6}\]
where \(M(x)\) is the mixture model likelihood at a point \(x\), and \(R_{\sigma_{n}}(x)\) is the likelihood of a Rayleigh distribution with \(\sigma_{n}\) at a point \(x\). The mixture model parameters now correspond to two independently varying Rayleigh \(\sigma\)s (\(\sigma_{1}\) and \(\sigma_{2}\)), and the relative weight of \(\sigma_{1}\) to \(\sigma_{2}\) (\(f\)). We observe that the one-Rayleigh model is simply a special case of Equation 6; if \(f=1\) or \(f=0\), \(M(x)\) expresses a single Rayleigh distribution. We now evaluate the likelihood \(p(obs|\theta)\), where \(\theta\) now refers to \(f\), \(\sigma_{1}\), and \(\sigma_{2}\). As before, we apply a uniform prior to all parameters. For the sake of removing redundancy, we require \(\sigma_{1}\leq\sigma_{2}\). Table 2 shows the best-fit mixture model parameters for the single-transit eccentricity distribution. In Figure 4, we show the corner plot for the single-transit model fit and the best-fit mixture distributions. For single-transit systems, we find that the best two-Rayleigh model is characterized by \(\sigma_{1}=0.02^{+0.11}_{-0.00}\), \(\sigma_{2}=0.24^{+0.20}_{-0.03}\), and \(f=0.86^{+0.03}_{-0.41}\). The best single-Rayleigh model is the same as the one inferred in the preceding single-Rayleigh analysis: \(\sigma=0.19^{+0.04}_{-0.03}\) and \(f=1.0\).
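A minimal numerical transcription of Equation 6 follows; it also makes explicit the nesting used below, namely that setting \(f=1\) recovers a single Rayleigh distribution. The helper names are ours.

```python
import numpy as np

def rayleigh_pdf(e, sigma):
    """Rayleigh density R_sigma(e)."""
    return e / sigma**2 * np.exp(-e**2 / (2.0 * sigma**2))

def mixture_pdf(e, f, sigma1, sigma2):
    """Two-component Rayleigh mixture of Eq. (6); f weights the first component."""
    return f * rayleigh_pdf(e, sigma1) + (1.0 - f) * rayleigh_pdf(e, sigma2)

# The one-Rayleigh model is the special case f = 1 (or f = 0): the second
# component then drops out regardless of sigma2.
e = np.linspace(1e-3, 0.9, 200)
print(np.allclose(mixture_pdf(e, 1.0, 0.19, 0.3), rayleigh_pdf(e, 0.19)))  # True
```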
We can now calculate the Bayesian Information Criterion (BIC):
\[\text{BIC}=k\ln n-2\ln\mathcal{L}, \tag{7}\]
where \(k\) corresponds to the number of parameters in our model, \(n\) to the number of observations contained in \(x_{n}\), and \(\mathcal{L}\) to the peak likelihood of the model, \(p(obs|\theta)\). A comparison between the BIC for the single-Rayleigh model and the BIC for the two-Rayleigh model will illuminate the preference of the data for one model over the over, and allow us to judge the merit of the two-Rayleigh model as a descriptor for our eccentricity distribution of singly-transiting planets.
We employ here a ready simplification for nested mixture models such as ours, in which one model is simply a special case of the generalized model. Both models were fit with the same parameters \(\theta\), and therefore \(k\) is the same; since we are employing the same data set to test both the one- and two-Rayleigh distributions, \(n\) is also the same. We therefore find that the difference \(D\) between the two BIC values is described by
\[D=2\text{ln}\bigg{(}\frac{\text{Peak likelihood of two-Rayleigh model}}{\text{ Peak likelihood of one-Rayleigh model}}\bigg{)} \tag{8}\]
We find \(D=7.17\), with the two-Rayleigh model corresponding to the lower BIC. When analyzing the eccentricity distribution for single-transit systems, the \(\Delta BIC\) heuristic favors our model with two distinct distributions (where one closely matches the distribution of multi-transit eccentricities and one peaks at a higher \(e\) of \(0.24\)) over our model with one component. However, \(\Delta BIC\) is biased to favor the more complex model, as it does not account for the value of the prior probabilities or the differing volume of the peaks in the posteriors for the two models. A proper Bayesian model comparison is beyond the scope of this paper.
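Under the nested-model simplification above, the BIC comparison reduces to a one-line computation, sketched here with illustrative log-likelihood values (the paper's peak likelihoods are not reproduced):

```python
def delta_bic_nested(lnL_two, lnL_one):
    """For nested models fit with the same k and n, the BIC difference of
    Eq. (7) collapses to D = 2 ln(L_two / L_one), Eq. (8); D > 0 with the
    two-Rayleigh likelihood larger favors the two-Rayleigh model."""
    return 2.0 * (lnL_two - lnL_one)

# Illustrative peak log-likelihoods: a gap of ~3.6 reproduces D ~ 7.2.
print(delta_bic_nested(-100.0, -103.585))
```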
We hypothesize the following physical interpretation: that the latter Rayleigh distribution may be considered as the underlying \(e\) distribution for "true" dynamically hotter single-planet systems. While this hypothesis has not been explicitly tested in this work, the dynamical heat of a system is in part determined by its eccentricity, and planetary systems with high eccentricities may be associated with higher mutual inclinations (52). This hypothesis will be explicitly investigated in a future work.
We apply the same methodology to the multi-transit systems and find a \(D\) value of \(-3\times 10^{-6}\), indicating no strong preference between the one- and two-Rayleigh models. We conclude that the eccentricities of multi-transit systems are drawn from a single distribution with no contamination, as expected. Similarly to previous analyses, we demonstrate the validity of our methodology by correctly recovering the parameters of a known mixture model in the SI Appendix.
## Discussion
We now consider our findings from the preceding section, within a context of other exoplanet eccentricity studies. While we cannot presently compare our findings to other larger-scale studies of the underlying M dwarf planet eccentricity distribution, we can compare to inferred eccentricity distributions of planets orbiting larger stars. [(29)] and [(30)] constrained the eccentricities of 66 multi-transit systems and 53 single-transit systems (respectively) around G-type stars, using the framework we have employed here. They found the best-fit half-Gaussian distribution for single- and multi-planet systems to be \(\{\sigma_{single},\sigma_{multi}\}=\{0.32^{+0.06}_{-0.06},0.083^{+0.015}_{-0.02}\}\). They found the best-fit Rayleigh distribution for single- and multi-planet systems to be \(\{\sigma_{single},\sigma_{multi}\}=\{0.24^{+0.04}_{-0.04},0.061^{+0.01}_{-0.012}\}\). Notably, [(30)] found a significant difference between the underlying \(e\) distributions for singly-transiting and multi-transiting systems. We note that "singly-transiting" systems in [(30)] refer to systems which have one known transiting planet, but may contain other non-transiting planets. There is significant overlap between the best-fit model parameters for our underlying eccentricity distributions and those of [(30)]. The consistency of our results suggests that the underlying eccentricity distribution of small planets may depend only weakly on stellar
\begin{table}
\begin{tabular}{c c c} \hline Distribution & Parameters & Best-Fit Values \\ \hline Rayleigh & \((\sigma_{s},\sigma_{m})\) & \(0.19^{+0.04}_{-0.03}\)\(0.03^{+0.02}_{-0.01}\) \\ Half-Gaussian & \((\sigma_{s},\sigma_{m})\) & \(0.25^{+0.06}_{-0.05}\)\(0.04^{+0.03}_{-0.02}\) \\ Beta & \((a_{s},b_{s}),(a_{m},b_{m})\) & \((1.18^{+1.95}_{-0.61},6.34^{+5.94}_{-2.77})\); \((2.95^{+2.34}_{-1.61},75.46^{+17.68}_{-2.74})\) \\ Mixture & \((\sigma_{1s},\sigma_{2s},f_{s});(\sigma_{1m},\sigma_{2m},f_{m})\) & \((0.02^{+0.11}_{-0.00},0.24^{+0.20}_{-0.03},0.86^{+0.03}_{-0.41})\); \((0.03^{+0.02}_{-0.01},0.06^{+0.36}_{-0.02},0.99^{+0.01}_{-0.59})\) \\ \end{tabular}
\end{table}
Table 2: Best-fit \(e\) distribution parameters. For Rayleigh, half-Gaussian, and beta distribution models, the median\({}^{+84^{th}}_{-16^{th}}\) percentiles are shown. For the mixture model, the mode\({}^{+84^{th}}_{-16^{th}}\) percentiles are shown. For multi-transit systems, the best-fit \(f\) is consistent with \(1\), so the best-fit model is likely comprised entirely of a single Rayleigh distribution with \(\sigma=\sigma_{1m}\). Therefore, \(\sigma_{2m}\) is likely not meaningful.
Figure 3: _Left:_ Best-fit Rayleigh distributions for single-planet systems (green) and multi-planet systems (blue). _Center:_ Best-fit half-Gaussian distributions for single-planet systems (green) and multi-planet systems (blue). _Right:_ Best-fit Beta distributions for single-planet systems (green) and multi-planet systems (blue). The shaded regions represent the distributions corresponding to the \(16^{th}\) and \(84^{th}\) percentiles of the marginal eccentricity posteriors. The dotted lines represent best-fit distributions from a similar analysis by [(30)] for FGK-type planet hosts.
spectral type, if at all.
### Consideration of Dynamical Mixture
We find from our investigation that M dwarf exoplanets exhibit evidence for two different underlying parent distributions in eccentricity: one dynamically cooler (associated with multiple transits), and one dynamically warmer (associated with single transits). This extends the similar findings for FGK dwarfs [(15, 27, 30)] to later spectral types. There is also modest evidence to support the claim that the eccentricity distribution of singly-transiting planets is better described by contributions from both dynamically cold and warm populations. We understand this finding in light of the geometric transit probability: some single transits are attributable to _bona fide_ dynamically warmer planets, while the remaining fraction are drawn from dynamically cool systems in which only one member transits. The degree to which the singly-transiting population is mixed (that is, the value of \(f\)) is of interest. While it peaks at an \(\sim\)86% contribution from _bona fide_ dynamically cool systems, a 50/50 mixture is allowable at the \(1\sigma\) level. This shows broad consistency with the \(\sim\)50% predicted contribution of dynamically cool systems to the population of single transit hosts from [(50)] and [(51)].
Whether the mixture of dynamical temperature is best modeled as a continuous or bimodal function, the mixture encodes information about the formation and subsequent evolution of planetary systems. It may be the case that the mixture is baked in during the earliest stages of formation. As [(53)] demonstrated, the continuous range in \(\{e,i\}\) space that correctly replicates many observables of the _Kepler_ planet yield is predictable: planetary systems are clustered at the "angular momentum deficit" stability limit as a natural outcome of pebble accretion. The interaction between larger planetesimals with one another during the first 10 Myr of formation may also generate larger eccentricities and mutual inclinations, though this effect varies with the surface density distribution of planetesimals in the disk [(55)] and the presence of ambient gas [(51)]. [(56)] demonstrated the plausibility of self-excitation of planetary systems, which are born only metastable at formation. The timescale for this excitation is 10s to 100s of Myr after formation [(56)].
Planetary systems can also possess dynamically distinct component parts. [(57)] showed that migration trapping can produce systems with dynamically cold inner planets, decoupled
Figure 4: _Left:_ Corner plot of the mixture model fit to the single-transit \(e\) distribution. \(\sigma_{1}\) and \(\sigma_{2}\) are the model parameters for the two Rayleigh distributions that make up the mixture model. \(f\) is the fractional contribution of \(\sigma_{1}\) (if \(f=1\), the model is entirely defined by \(\sigma_{1}\)). The statistical mode and \((16^{th},84^{th})\) percentiles are shown as titles. The shaded regions represent the \(1\sigma\), \(2\sigma\) and \(3\sigma\) posterior regions. The single-transit eccentricity distribution is best described by a mixed Rayleigh distribution with \(\sigma_{1}=0.02\), \(\sigma_{2}=0.24\), and \(f=0.86\). _Right:_ The best-fit parameters of the mixture model fit drawn as individual Rayleigh distributions: the light blue, dashed line represents the \(\sigma_{1}\) single-transit model, and the green, dotted line represents the \(\sigma_{2}\) single-transit model. We include the mixture model fit to the multi-transit eccentricity posteriors (blue solid line), which prefers a single Rayleigh distribution (with \(f\equiv 1\)). The shaded regions represent the distributions corresponding to the \(16^{th}\) and \(84^{th}\) quantiles of the marginal eccentricity posteriors. The \(\sigma_{1}\) single-transit model is consistent with the multi-transit model, and the \(\sigma_{2}\) single-transit model peaks at higher eccentricity. This suggests that the single-transit sample is drawn from two distinct \(e\) distributions: one of true multi-planet systems, and one of true single-planet systems with higher \(e\).
from dynamically warmer planets further from the star. Other studies have investigated the population of ultra-short-period (USP) exoplanets, which can be substantially misaligned with other planets orbiting the same star. In this case, it is the inner planets that often exhibit evidence for inclination excitation. (58) showed that innermost planets exhibit typical mutual inclinations of \(7^{\circ}\) with planets further from the star. This is significant, as the further planets themselves show mutual inclinations of typically \(2^{\circ}\). The high mutual inclination of USP planets, compared with the other planets orbiting the same star, may also be due to tidal evolution (59) or interaction with the quadrupole moment of the host star (60). The relative fraction determined in this work of singly-transiting systems that are eccentric, versus those that are closer to circular, may be a useful distinguishing diagnostic between these scenarios.
The fact that M dwarf systems tend to lack external giant planetary perturbers, as compared to FGK-type systems, is a suggestive hint (61). (30) considered the possibility that single-transit FGK dwarf systems may be more likely to be eccentric due to perturbations from outer giant planets. This effect was quantified in the simulations of (62): the original flat configuration of a multi-planet system may be disturbed by a massive outer planet, exciting one or more planets out of the transit geometry. Compact multi-planet systems may be more resistant to this effect (63, 64). But because single- and multi-transit systems appear to have qualitatively similar eccentricity distributions for both M and FGK dwarfs, our finding suggests the possibility of a dynamical mechanism that does not depend on the presence of giant perturbers. However, there exists significant uncertainty in the rate of giant planet occurrence around both FGK and M dwarfs, and the lack of a spectral-type dependence in eccentricity is not enough to confirm this suggestion. We caution that though the \(e\) distributions for M and FGK dwarfs appear similar, the sample size in this work may not be large enough to establish statistically significant differences between the distributions. For _giant_ planets orbiting FGK dwarfs, there is strong evidence that the higher eccentricities associated with higher metallicity are driven by the presence of another giant planet (16). But the relationship between eccentricity and metallicity for small planets may require an interpretation that does not invoke the formation of (and subsequent agitation by) giant planets, given the similarity of the eccentricity distribution of small planets across spectral type. (15) found evidence that _Kepler_ planets with high eccentricities preferentially occur around metal-rich ([Fe/H] \(>0\)) stars. Indeed, with the same sample of _Kepler_ M dwarfs as we employ in this study, (65) showed some evidence that multiple-transiting planet systems are metal poor compared to single-transiting planet systems, and even more so compared to field stars: this establishes a common relationship across spectral type, that dynamically cool systems are likelier around metal-poor hosts. (66) provided a non-giant-planet interpretation of the metallicity trend with planet occurrence. They argued that the relationship between M dwarf metallicity and raw planet occurrence (higher for metal-rich stars) is evidence for _planetesimal_ accretion rather than pebble accretion. The extraction of the relationship between orbital eccentricity and stellar metallicity may be hindered for now by a relatively small lever arm in metallicity (95% of the sample span a range of only \(-0.5\) to 0.5 dex, per (67)).
(30) also consider the possibility of self-excitation as the cause of the difference in single- and multi-transiting eccentricity distributions, as formation conditions that cause high eccentricities also cause widely spaced orbits, larger mutual inclinations, and therefore low transit multiplicities for eccentric planets (51, 55). In the case where self-excitation is significant, the resulting eccentricities would depend strongly on the solid surface density and the radial distribution of disk solids, which differ among stellar types (30, 55). In comparison to (30), we find no significant dependence of the eccentricity distribution on stellar type, so we suggest that self-excitation may not be the most important process in exciting eccentricities. However, knowing the mutual inclination distribution of M dwarf planets is critical in quantitatively evaluating the importance of self-excitation. This concept will be investigated in future work.
**Full Underlying \(e\) Distribution.** We now infer the eccentricity distribution for the general population of M dwarf systems, combining our findings with planet occurrence rates for single- and multi-planet systems in the literature. We caution that our findings extend to planets \(\geq 1.5R_{\oplus}\) with orbital periods \(<200\) days, where _Kepler_'s completeness is highest for M dwarfs (68), and that our results are most applicable for the early-to-mid spectral types targeted by _Kepler_. (69) found that \(21^{+7}_{-5}\)% of mid-M dwarf systems host compact multiple systems, a number consistent with the fraction among early M dwarfs from (50).
To estimate the eccentricities of planets among a volume-complete sample of M dwarfs, we first assume that all M dwarfs host a planetary system of some kind, consistent with (68). The stellar sample in (68) is skewed towards earlier-type M dwarfs relative to our sample and is potentially contaminated by late K dwarfs. Planet occurrence rate may therefore vary significantly across spectral types within M stars. However, occurrence rates broadly tend to increase with later spectral types (6, 61, 70), so the assumption that each star in our sample hosts at least one planet appears to be valid. Among these, \(21^{+7}_{-5}\)% of stars in the sample are designated hosts to compact multiple systems (69), with the remainder hosting the dynamically warmer systems. For this analysis we assume that all compact multiple systems host exactly 5 planets, so we assume the number of single-planet hosts out of 100 M dwarfs is \(100-21^{+7}_{-5}\). The assumed planet count must be higher to explain systems like TRAPPIST-1 (71), and indeed five lies at the low end of the posterior for the number of planets per dynamically cool host, so in this sense our distribution is a lower limit at low eccentricities. For 100 iterations, we draw the fraction of M dwarfs hosting compact multiples from an asymmetric Gaussian with \(\mu=0.21\), \(\sigma_{u}=0.07\) and \(\sigma_{l}=0.05\). We calculate the number of compact multiple and single-planet hosts out of 1000 planets for each draw. We take the eccentricity distribution for compact multiple systems to be a Rayleigh distribution with \(\sigma_{m}=0.03\), and for single-planet systems a Rayleigh distribution with \(\sigma_{s}=0.24\), according to the mixture model fits in the Results & Analysis section. We draw eccentricities from the single- and multi-planet distributions corresponding to the fraction of single- and multi-planet systems in each iteration. The resulting group of 10,000 eccentricities reflects the complete underlying eccentricity distribution for M dwarf planets in a volume-complete sample (Figure 5). In this calculation, we have made simplifying assumptions: we have employed a single fiducial template for the number of planets in dynamically
cool and warm systems, and have not folded the error in our measurements of \(\sigma_{m}\) and \(\sigma_{s}\) into our analysis; these quantities contribute much less to the uncertainty on the resulting \(e\) distribution, however, than the uncertainty on the compact multiple rate.
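A minimal sketch of this population draw, assuming one simple two-sided implementation of the asymmetric Gaussian and scaling to 100 host stars per iteration for brevity (function names and the seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_multi_fraction(mu=0.21, sigma_u=0.07, sigma_l=0.05):
    # One draw from a two-sided ("asymmetric") Gaussian: pick a side,
    # then add a half-normal offset with that side's width.
    if rng.random() < 0.5:
        return mu - abs(rng.normal(0.0, sigma_l))
    return mu + abs(rng.normal(0.0, sigma_u))

all_e = []
for _ in range(100):  # 100 iterations, as in the text
    f_multi = float(np.clip(draw_multi_fraction(), 0.0, 1.0))
    n_multi = int(round(100 * f_multi))   # compact-multiple hosts per 100 stars
    n_single = 100 - n_multi              # single-planet hosts
    # Each compact multiple is assumed to host exactly 5 planets.
    all_e.append(rng.rayleigh(scale=0.03, size=5 * n_multi))  # sigma_m = 0.03
    all_e.append(rng.rayleigh(scale=0.24, size=n_single))     # sigma_s = 0.24

all_e = np.concatenate(all_e)
frac_circular = np.mean(all_e < 0.1)  # fraction of planets with e < 0.1
```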
From this distribution, it is clear that low orbital eccentricities (\(e<0.1\)) are not the norm among a typical sample of planets with periods \(P<200\) days. While compact multiple planetary systems are the minority of M dwarf planetary systems as a whole, the larger number of planets per star among those systems does boost the occurrence of planets with nearly-circular orbits. Between 21% and 36% of planets (the \(1\sigma\) confidence interval) possess eccentricities \(<0.1\) within the period and radius ranges where _Kepler_ is most complete.
## Conclusions
We constrained the eccentricities of 163 planets orbiting 101 M dwarfs. We performed our measurements via the 'photoeccentric method', combining stellar densities for the 101 stars (derived from a combination of spectroscopy, _Gaia_ parallaxes, and 2MASS magnitudes) with transit durations from _Kepler_ light curves. We employ the resulting \(e\) posteriors within a hierarchical Bayesian framework to infer the underlying \(e\) distribution for planets orbiting early-to-mid M dwarfs, considering a variety of functional forms including mixture models. We summarize our findings as follows:
* The eccentricities of single-transit and multi-transit systems are likely drawn from distinct underlying parent distributions. The eccentricity distribution for single-transit systems is best described by models that peak at higher \(e\) than for multi-transit systems.
* We find modest evidence that the single-transit population is best described with a dynamical mixture model, with dynamically warmer and dynamically cooler populations. We conclude that the sample as a whole is best modeled as a mixture of Rayleigh distributions: one peaking at \(\sigma_{2}=0.21^{+0.28}_{-0.01}\), and the other at \(\sigma_{1}=0.04^{+0.02}_{-0.02}\). The data for the single-transiting systems favor the dynamical mixture model over the single-population Rayleigh model with 7:1 odds.
* The inferred parent distributions in orbital eccentricity for single- and multi-transit M dwarf systems are similar to analogous distributions for FGK dwarfs from the literature. Because M dwarfs tend to lack external giant planets when compared to larger stars, our findings favor an interpretation for dynamical excitation that does not require the presence of giant perturbers. In this sense, the eccentricity-metallicity relation for small planets (by which metal-poor stars tend to host lower eccentricity planets) may reflect a relationship other than metallicity's impact upon pebble accretion or planetesimal accretion early on, or self-excitation by neighboring small planets later in the system's lifetime.
* We present an estimate of the underlying intrinsic \(e\) distribution for the population of early- to mid-M dwarf planets in the local neighborhood with radii \(>1.5R_{\oplus}\) and with periods \(<200\) days, by combining our findings with other M dwarf planetary demographic constraints. Assuming the _Kepler_ sample is representative of typical early-to-mid M dwarfs in the galaxy, this distribution may typify eccentricities for planets orbiting small stars in the Milky Way.
The underlying eccentricity distributions presented here may be applicable for transit fit priors for small transiting planets in future studies. While our per-planet eccentricity constraint is quite coarse, these individual eccentricity posteriors may be useful toward target selection for follow-up observations. Furthermore, because our per-planet eccentricities are not well constrained, it is challenging to comment on the effects on habitability for individual planets. However, we contend that "compact multiple" systems may be the best place to search for habitable planets, as they appear more likely to have near-circular orbits.
Expanding this analysis using data from other surveys is promising. Applying our methods to measure the eccentricities of M dwarf planets observed by the Transiting Exoplanet Survey Satellite (TESS) [(72)] is also feasible, though this may prove to be more challenging with TESS due to the shorter observation baseline. Data from the PLAnetary Transits and Oscillations of stars (PLATO) Mission [(73)], expected to launch in 2026, could prove to be useful for expanding this work due to its planned high precision and long observation baseline.
## Data Availability
All codes and data used in this manuscript are publicly available on Zenodo with the DOI 10.5281/zenodo.7731019 and on GitHub at [https://github.com/ssagear/photoeccentric](https://github.com/ssagear/photoeccentric).
We are grateful to Andrew Mann for helpful guidance on the transit fitting process. We thank Christopher Lam for thoughtful feedback on this manuscript. We thank the anonymous referees for carefully reviewing this manuscript and offering suggestions that improve the quality of this work. This paper includes data collected by the Kepler mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the Kepler mission is provided by the NASA Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)),
Figure 5: Fraction of all M dwarf planets with each eccentricity from simulations based on planet occurrence rates for single- and multi-planet systems from [(69)]. We show the mean and \(1\sigma\) region from 100 simulations. Bins have widths of 0.05.
processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work made use of the gaia-kepler.fun crossmatch database created by Megan Bedell. This work made use of the following facilities: Kepler, Gaia, 2MASS, IRSA, Exoplanet Archive, ExoFOP.
## Supporting Information for
The Orbital Eccentricity Distribution of Planets Orbiting M dwarfs
Sheila Sagear and Sarah Ballard
Sheila Sagear.
Email: [email protected]
This PDF file includes:
Supporting text
Figs. S1 to S4
Tables S1 to S3
SI References
## Methods
In this section, we detail our methodology for extracting eccentricity measurements from the sample of known _Kepler_ M dwarf planets. We first describe this sample, the lightcurve preparation process, and how we obtain our stellar density priors from this sample. We summarize the photoeccentric formalism and our fitting pipeline (we quantify our sensitivity as a function of \(\{e,\omega\}\) parameter space, by injecting and recovering these parameters from synthetic lightcurves, later in the SI Appendix).
**Stellar and Planetary Sample.** We employ the stellar properties compiled in Kepler DR25 (32), which are listed in NASA's Exoplanet Archive Cumulative KOI table (74). We take the effective temperature (\(T_{\rm eff}\)) and stellar metallicity ([Fe/H]) from Kepler DR25. The effective temperatures and metallicities were compiled from the literature by (32) (see Table 3 of (32) for a list of provenances for each star). These stellar properties are not calculated by interpolating stellar isochrones.
To select our sample, we first remove all KICs with stellar host effective temperatures greater than 4000 K and listed disposition scores below 0.5 (75). From this subset, we exclude KOIs with an Exoplanet Archive Disposition of "False Positive", according to the tests performed by (76).
We cross-match these KICs with Gaia IDs using the 1\({}^{\prime\prime}\) radius gaia-kepler.fun crossmatch database. For KICs without 1\({}^{\prime\prime}\) database entries, we use the cross-match from the analogous 4\({}^{\prime\prime}\) radius database if there is only one entry. For confirmed planets with no entries in either database, or with no entries in the 1\({}^{\prime\prime}\) database but multiple entries in the 4\({}^{\prime\prime}\) database, we take the Gaia ID listed in the Exoplanet Archive's Planetary Systems table (77). KOIs 1681, 2626, 3010, and 6276 were further excluded because they have Gaia IDs from one of these sources, but no parallax information. KOI 2862 was excluded because we found no matching Gaia ID from any of these sources. The parallax for KOI 1422 was taken from Gaia DR3, while all other parallaxes were taken from Gaia DR2.
We then cross-match these KICs with the 2MASS All-Sky Point Source Catalog using a 2\({}^{\prime\prime}\) cone search on the NASA/IPAC Infrared Science Archive (35). KOIs 1201 and 1725 were further excluded because the 2MASS data for these targets were incomplete. We exclude KOIs 7408 and 8007 because their impact parameters listed in the Exoplanet Archive were greater than 1.2 (74). We remove KOIs 605, 3497 and 7791 because our stellar mass calculations do not yield valid results for these stars, as we describe in the Calculation of Stellar Densities section. Finally, we exclude five KOIs (961.02, 4419.01, 6863.01, 7793.01, and 8037.01) because the computational needs of their transit fits significantly exceed the needs of the rest of the sample. We do not expect that excluding these KOIs significantly affects our results, given the size of our sample.
The final sample includes a total of 163 KOIs: 67 single-transit systems and 96 planets or candidates across 34 multi-transit systems. We include 16 two-planet systems, 11 three-planet systems, 3 four-planet systems, and 4 five-planet systems. We note that for one three-planet system (the KOI 961 system), we only include two out of three confirmed planets. The sample includes both confirmed planets and planet candidates. There are a total of 25 planet candidates in the sample with 16 candidates in single-transit systems, 8 candidates in two-planet systems, and one candidate in a five-planet system. Given the proportion of planet candidates to confirmed planets and our sample vetting process, we expect that this is a high-fidelity sample which is unlikely to contain significant numbers of false positives.
**Lightcurve Preparation.** We downloaded all available long- and short-cadence Kepler light curves for our sample from the Mikulski Archive for Space Telescopes (MAST). Where both long- and short-cadence data are available for a single quarter, we take the short-cadence data. For each target, we normalize the light curve data to 1 and stitch quarters together, preserving the original time stamps. We estimate the transit midpoints at the observed times using the transit start time and orbital period obtained from (74).
To prepare the lightcurves for modeling, we first remove out-of-transit data. We find the closest flux point to each estimated transit midpoint and isolate a window of 4 hours plus one half of the transit duration published in (74) before and after the transit midpoint. For KOIs 902.01, 2418.01, 2992.01, and 3263.01, we increased the length of the added baseline window to 8 hours before and after the transit (instead of 4 hours) due to their comparatively long transit durations. We discard the out-of-transit data. We fit a cubic model to the outer 2.5 hours of each transit segment, then subtract the cubic model from the entire transit segment.
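A schematic of this per-transit detrending step (function and variable names are ours; the window and fit widths follow the text, with times in days):

```python
import numpy as np

def detrend_segment(time, flux, t_mid, half_window, edge_width=2.5 / 24.0):
    # Keep only data within +/- half_window of the transit midpoint.
    mask = np.abs(time - t_mid) < half_window
    t_seg, f_seg = time[mask], flux[mask]
    # Fit a cubic to the outer edge_width (2.5 hr) on each side of the segment.
    edge = (t_seg < t_seg.min() + edge_width) | (t_seg > t_seg.max() - edge_width)
    coeffs = np.polyfit(t_seg[edge], f_seg[edge], deg=3)
    # Subtract the cubic baseline from the entire transit segment.
    return t_seg, f_seg - np.polyval(coeffs, t_seg)
```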
**Calculation of Stellar Densities.** We summarize here our calculation of the stellar density, which we ultimately apply as a prior during the lightcurve fit. The fundamental properties of M dwarfs are difficult to accurately extract from spectroscopy alone (78), so we use the empirical method of (18). For each star, we take stellar metallicities and errors from the literature (compiled by (32)), stellar parallaxes from Gaia (33, 34), and \(K_{S}\) magnitudes from 2MASS (35). We first calculate the absolute magnitude \(M_{K_{S}}\) using the 2MASS \(K_{S}\) magnitudes and parallaxes from Gaia. We then calculate the radius \(R_{*}\) for each sample star with the \(M_{K_{S}}\), [Fe/H], and \(R_{*}\) relation from (37). (37) investigated the significance of correlations between \(M_{*}\) and \(R_{*}\), and demonstrated that these relations properly reproduce the covariance between mass and radius.
Next, we calculate stellar masses using the empirical, semi-model-independent \(M_{K_{S}}\)-\(M_{*}\)-[Fe/H] relation from (36) using the M_M_K Python software. The empirical relation was derived from a sample of nearby M dwarfs with \(4<M_{K_{s}}<11\) and metallicities of \(-0.6<\mathrm{[Fe/H]}<0.4\). We have excluded any stars from the sample which fall outside this range of \(M_{K_{s}}\). There are six stars in our sample which have \(4<M_{K_{s}}<11\) but [Fe/H] outside this range; we include these stars in our analysis because the role of metallicity in the empirical relation is much less significant than that of the \(M_{K_{s}}\) magnitude (36). The empirical relation is best calibrated in the region between \(-0.4<\mathrm{[Fe/H]}<0.3\), and 90% of our sample falls within this region. The empirical relation is best calibrated between \(4.5<M_{K_{s}}<10.5\), and 88% of our sample have \(M_{K_{s}}\) greater than 4.5. Because \(M_{K_{s}}\) has the strongest influence on the mass calculation, we urge caution in interpreting the calculated masses and resulting eccentricities of systems with \(M_{K_{s}}<4.5\) (Table 4).
We combine \(R_{*}\) and \(M_{*}\) to calculate the stellar density for each star. The literature stellar parameters we used in these calculations are listed in Table 3. The calculated stellar parameters are listed in Table 4. The (36) \(M_{K_{s}}\), [Fe/H], and \(R_{*}\) relation should be restricted to main-sequence stars with \(4<M_{K}<11\). At this stage, we exclude KOI systems 605, 3497 and 7791 because \(M_{K}<4\) for these stars. We note that where our sample includes KOIs around binary systems, we assume that the planet transits the primary star.
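Once \(M_{*}\) and \(R_{*}\) are in hand, the density calculation reduces to two short steps; a sketch, checked against the KOI 156 rows of Tables 3 and 4 (function names are ours):

```python
import numpy as np

def absolute_k_mag(ks, parallax_arcsec):
    # M_Ks = m_Ks + 5 log10(parallax in arcsec) + 5.
    return ks + 5.0 * np.log10(parallax_arcsec) + 5.0

def stellar_density(m_star, r_star):
    # Mean density in solar units: rho/rho_sun = (M/M_sun) / (R/R_sun)^3.
    return m_star / r_star**3

m_ks = absolute_k_mag(11.37, 0.003824)   # ~4.28, matching Table 4 for KOI 156
rho = stellar_density(0.7065, 0.7557)    # ~1.64 rho_sun, matching Table 4
```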
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline KIC & KOI & Kepler Name & Gaia ID & 2MASS ID & T\({}_{eff}\) (K) & [Fe/H] & \(\pi(^{\prime\prime})\) & \(K_{s}\) \\ \hline
10925104 & 156 & Kepler-114 & 2128939873302216320 & 19362914+4820582 & 3980\(\pm\)79 & -0.2\(\pm\)0.15 & 0.003824+1.9e-05 & 11.37\(\pm\)0.02 \\
11852982 & 247 & Kepler-1712 & 2132326747070439296 & 18595966+5008484 & 3732\(\pm\)79 & 0.02\(\pm\)0.15 & 0.006371\(\pm\)1.9e-05 & 11.12\(\pm\)0.02 \\
5364071 & 248 & Kepler-49 & 2053523271244105216 & 19291070+4035304 & 3834\(\pm\)81 & -0.02\(\pm\)0.15 & 0.003185\(\pm\)2.6e-05 & 12.38\(\pm\)0.02 \\
9390653 & 249 & Kepler-504 & 2017186134525176960 & 18594123+4558206 & 3547\(\pm\)75 & -0.14\(\pm\)0.15 & 0.010065\(\pm\)3.1e-05 & 11.16\(\pm\)0.03 \\
9757613 & 250 & Kepler-28 & 21071375865730868 & 18594583463595 & 3879\(\pm\)81 & -0.12\(\pm\)0.15 & 0.002954\(\pm\)2.7e-05 & 12.63\(\pm\)0.03 \\
10489206 & 251 & Kepler-125 & 208643948828437358 & 19539149+4736178 & 380\(\pm\)80 & -0.06\(\pm\)0.15 & 0.005452\(\pm\)2.9e-05 & 11.68\(\pm\)0.02 \\
11187837 & 252 & Kepler-1663 & 212957828696530040 & 19213636+489213 & 3744\(\pm\)79 & 0.06\(\pm\)0.15 & 0.002931\(\pm\)3e-05 & 12.55\(\pm\)0.03 \\
11752906 & 253 & & 1231244056342471168 & 19021784+4957441 & 3757\(\pm\)78 & 0.48\(\pm\)0.1 & 0.003091\(\pm\)3.5e-05 & 12.29\(\pm\)0.04 \\
5794240 & 254 & Kepler-45 & 2053562475706063744 & 19312949+4103513 & 3793\(\pm\)80 & 0.32\(\pm\)0.15 & 0.00259\(\pm\)4.3e-05 & 12.89\(\pm\)0.03 \\
7021681 & 255 & Kepler-505 & 2102511874378223360 & 19112594+4232334 & 3780\(\pm\)80 & -0.02\(\pm\)0.15 & 0.0035\(\pm\)2.2e-05 & 12.08\(\pm\)0.02 \\ \hline \end{tabular}
\end{table}
Table 3: Literature stellar parameters. Gaia IDs are crossmatched with KICs using the gaia-kepler.fun crossmatch database. Effective temperatures and stellar metallicities are taken from (32). Parallaxes are taken from Gaia crossmatched data. \(K_{s}\) magnitudes are taken from 2MASS crossmatched data. Only a portion of the table is shown here to demonstrate its form and function. The full table is available in machine-readable form as "Dataset S2" in the Data Supplements.
\begin{table}
\begin{tabular}{c c c c c} KOI & \(M_{*}\) (\(M_{\odot}\)) & \(R_{*}\) (\(R_{\odot}\)) & \(M_{K_{s}}\) & \(\rho_{*}\) (\(\rho_{\odot}\)) \\ \hline
156 & 0.7065 \(\pm\) 0.0195 & 0.7557 \(\pm\) 0.0072 & 4.2787 \(\pm\) 0.0246 & 1.6377 \(\pm\) 0.0652 \\
247 & 0.5803 \(\pm\) 0.0146 & 0.5827 \(\pm\) 0.0057 & 5.1449 \(\pm\) 0.0227 & 2.9341 \(\pm\) 0.1123 \\
248 & 0.6166 \(\pm\) 0.016 & 0.6285 \(\pm\) 0.0068 & 4.8982 \(\pm\) 0.029 & 2.4855 \(\pm\) 0.1035 \\
249 & 0.4164 \(\pm\) 0.0108 & 0.4218 \(\pm\) 0.0049 & 6.1691 \(\pm\) 0.028 & 5.5554 \(\pm\) 0.2411 \\
250 & 0.6046 \(\pm\) 0.0156 & 0.6158 \(\pm\) 0.0073 & 4.9819 \(\pm\) 0.0327 & 2.591 \(\pm\) 0.1141 \\
251 & 0.5484 \(\pm\) 0.0137 & 0.5484 \(\pm\) 0.0052 & 5.354 \(\pm\) 0.0222 & 3.3274 \(\pm\) 0.126 \\
252 & 0.6182 \(\pm\) 0.0161 & 0.6285 \(\pm\) 0.0082 & 4.8869 \(\pm\) 0.038 & 2.4932 \(\pm\) 0.1179 \\
253 & 0.638 \(\pm\) 0.0187 & 0.6437 \(\pm\) 0.0089 & 4.7404 \(\pm\) 0.0449 & 2.3946 \(\pm\) 0.1209 \\
254 & 0.6068 \(\pm\) 0.0173 & 0.6079 \(\pm\) 0.0095 & 4.9588 \(\pm\) 0.0467 & 2.7051 \(\pm\) 0.1475 \\
255 & 0.6306 \(\pm\) 0.0159 & 0.6469 \(\pm\) 0.0061 & 4.8002 \(\pm\) 0.0227 & 2.3304 \(\pm\) 0.0879 \\ \end{tabular}
\end{table}
Table 4: Calculated stellar parameters. Stellar masses are calculated using the \(M_{K_{S}}\)-\(M_{*}\) relation from (79) with 2MASS \(K_{S}\) magnitudes and Gaia parallaxes. Stellar radii are calculated using the \(M_{K_{S}}\), [Fe/H], and \(R_{*}\) relation from (37) with 2MASS \(K_{S}\) magnitudes and Gaia parallaxes. Only a portion of this table is shown here to demonstrate its form and content. The full table is available in machine-readable form as "Dataset S3" in the Data Supplements.
**Photoeccentric Effect Pipeline.** We summarize here a formalism detailed fully in (23), and describe how we employ it in our lightcurve modeling procedure. With a fortuitous rearrangement of Newton's version of Kepler's third law (80), we can express a directly measured quantity from the transit lightcurve (the ratio of the semimajor axis \(a\) to the stellar radius \(R_{*}\)) in terms of the planetary period \(P\) and the mass and radius of the star \(M_{*}\) and \(R_{*}\). For illustrative purposes, we manipulate \(M_{*}\) and \(R_{*}\) to substitute \(\rho_{*}\) and find
\[a/R_{*}=\sqrt[3]{\frac{GM_{*}P^{2}}{4\pi^{2}{R_{*}}^{3}}}=\sqrt[3]{\frac{GP^{2}\rho_{*}}{3\pi}}. \tag{9}\]
In the simplified case of a circular orbit with an impact parameter \(b=0\), the transit duration is given by the time to sweep across \(2R_{*}\). The planet's speed is constant in this case, \(2\pi a/P\), so the duration can be very roughly approximated as \(T\sim P/[\pi(a/R_{*})]\). However, since the effect of a non-zero impact parameter may influence the transit duration in a similar way as a non-zero eccentricity, we must incorporate these variables into our expression for transit duration. Including the additional complexities of an eccentric orbit characterized by an eccentricity \(e\), longitude of periapse \(\omega\), inclination \(i\) and transit depth \(\delta\), the full expression for transit duration is given by Equation 10:
\[T_{14/23}=\frac{P}{\pi}\frac{(1-e^{2})^{3/2}}{(1+e\sin\omega)^{2}}\arcsin\left(\frac{\sqrt{(1\pm\delta^{1/2})^{2}-(a/R_{*})^{2}\left(\frac{1-e^{2}}{1+e\sin\omega}\right)^{2}\cos^{2}i}}{(a/R_{*})\frac{1-e^{2}}{1+e\sin\omega}\sin i}\right), \tag{10}\]
where \(T_{14}\) is the duration between first and fourth contact and \(T_{23}\) is the duration between second and third contact (see (31) for additional description). \(T_{14}\) is calculated with \((1+\delta^{1/2})\), and \(T_{23}\) is calculated with \((1-\delta^{1/2})\).
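As a quick numerical check of Equation 9 and the crude circular-orbit duration (the input values are chosen to be typical of this sample, not taken from any specific KOI):

```python
import numpy as np
from scipy.constants import G  # SI units

RHO_SUN = 1408.0  # mean solar density, kg m^-3 (approximate)

def a_over_rstar(period_days, rho_star_solar):
    # Equation 9: a/R* = (G P^2 rho* / (3 pi))^(1/3).
    P = period_days * 86400.0
    return (G * P**2 * rho_star_solar * RHO_SUN / (3.0 * np.pi)) ** (1.0 / 3.0)

aRs = a_over_rstar(27.5, 2.5)   # ~52 for P = 27.5 d, rho* = 2.5 rho_sun
T_days = 27.5 / (np.pi * aRs)   # crude b = 0, e = 0 duration: ~0.17 d (~4 hr)
```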
We include this analytic expression (taken from (23)) with the understanding that it reflects an approximation without accounting for limb darkening. We wish to convey the formalism of the relationship between transit duration and eccentricity. When the effects of limb darkening are significant, this approximation is still useful, though it would underpredict the covariances and uncertainties of the fit parameters (81). Therefore, in our analysis, we fit for the limb darkening along with the other transit parameters, including the eccentricity directly.
(23) derived an expression involving \(T_{14}\), \(T_{23}\) and Equation 9 as follows:
\[\rho_{*}=g^{-3}\Bigg{(}\frac{2\delta^{1/4}}{\sqrt{T_{14}^{2}-T_{23}^{2}}}\Bigg{)}^{3}\frac{3P}{G\pi^{2}}, \tag{11}\]
where \(g\) is defined to be
\[g(e,\omega)=\frac{1+e\sin\omega}{\sqrt{1-e^{2}}} \tag{12}\]
In this way, \(\rho_{*}\) is related to two quantities: a quantity dependent entirely on observables from the transit (\(T_{14}\), \(T_{23}\), \(\delta\), and \(P\)), and a quantity \(g\) that separately encodes eccentricity information. With prior information about \(\rho_{*}\) from some other means (asteroseismology or spectroscopy, for instance), \(g\) is in principle extractable.
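A short sketch of the two pieces of Equation 11, with rho_circ denoting the purely observable factor (our notation; all inputs in SI units):

```python
import numpy as np

def g_factor(e, omega):
    # Equation 12: g(e, omega) = (1 + e sin omega) / sqrt(1 - e^2).
    return (1.0 + e * np.sin(omega)) / np.sqrt(1.0 - e**2)

def rho_circ(T14, T23, depth, period, G=6.674e-11):
    # The observable part of Equation 11: the stellar density implied
    # if the orbit were circular (g = 1).
    return (2.0 * depth**0.25 / np.sqrt(T14**2 - T23**2)) ** 3 * 3.0 * period / (G * np.pi**2)

# Equation 11 then reads rho_star = rho_circ / g^3, so an independent
# rho_star constrains g, and hence the allowed (e, omega) combinations.
```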
We continue to adopt the symbolism and formalism of (23) to describe the Bayesian statistical framework of our analysis. We describe a model lightcurve parameterized by \(e\), \(\omega\), \(\rho_{*}\), and \(X\). \(X\) represents all other parameters of the model light curve (such as orbital period, transit epoch, radius ratio, limb-darkening parameters, and impact parameter). We take the variable \(D\) to represent the light curve data. We intend to determine the probability of various values of \(e\) and \(\omega\) given the data. According to Bayes' theorem,
\[P(e,\omega,\rho_{*},X|D)\propto P(D|e,\omega,\rho_{*},X)P(e,\omega,\rho_{*},X) \tag{13}\]
where the last term \(P(e,\omega,\rho_{*},X)\) represents the prior knowledge. We impose a non-uniform prior only on \(\rho_{*}\), based on the stellar densities and uncertainties we calculated above from independently measured stellar parameters. We rewrite the probability as
\[P(e,\omega,\rho_{*},X|D)\propto P(D|e,\omega,\rho_{*},X)P(\rho_{*}) \tag{14}\]
To obtain a two-dimensional joint posterior distribution for \(e\) and \(\omega\), we marginalize over \(X\) and \(\rho_{*}\) and obtain
\[P(e,\omega|D)\propto\int\int P(D|e,\omega,\rho_{*},X)P(\rho_{*})dXd\rho_{*} \tag{15}\]
Finally, we marginalize over \(\omega\) to obtain
\[P(e|D)\propto\int\int\int P(D|e,\omega,\rho_{*},X)P(\rho_{*})dXd\rho_{*}d\omega \tag{16}\]
We demonstrate that, when incorporating a Bayesian sampling method to explore parameter space, we are able to translate a prior on stellar density and uniform priors on \(e\) and \(\omega\) into a constraint on a planet's eccentricity.
We perform the lightcurve modeling using gradient-based methods with the exoplanet (82) and pymc3 (83) packages. The free parameters are the orbital period \(P\), transit epoch \(t_{0}\), planet-star radius ratio \(R_{p}/R_{*}\), impact parameter \(b\), quadratic limb-darkening parameters \(u_{1}\) and \(u_{2}\) (sampled uniformly across \(q_{1}\) and \(q_{2}\) using the triangular limb darkening parameterization of (84)), eccentricity parameters \(\sqrt{e}\sin\omega\) and \(\sqrt{e}\cos\omega\), and the stellar density \(\rho_{*}\). With exoplanet, the stellar density itself may be taken as a free parameter, and the transit light curve is modeled based on the combination of each sampled \(\rho_{*}\) and \(P\). Therefore, it is not necessary to, e.g., manually convert a prior on \(\rho_{*}\) to a prior on \(a/R_{*}\). Using each combination of \(e\), \(\omega\), and \(b\), we calculate \(\rho_{*}\). We apply a normal prior on the free parameter \(\rho\) centered at the calculated \(\rho_{*}\) with \(\sigma=\sigma_{\rho_{*}}\), where \(\sigma_{\rho_{*}}\) is the calculated uncertainty for each star.
The priors for each parameter are listed in Table 5. We first calculated the maximum a posteriori (MAP) model solution, and used the MAP solution to initialize the sampler. We sampled the model parameters using No-U-Turn Sampling (NUTS) (85) with two chains of 20,000 tuning steps and 20,000 posterior draws each. The sample acceptance rate is greater than 90% for all fits. We calculate the Gelman-Rubin \(\hat{R}\) statistic for each transit fit to check for convergence, and we find that \(\hat{R}<1.05\) for each parameter in each transit fit. The full posteriors, convergence statistics, corner plots and trace plots are available in the Data Supplement.
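A schematic of this model setup is sketched below. It assumes an exoplanet/pymc3 version in which these names exist (roughly exoplanet v0.4), substitutes the package's QuadLimbDark convenience prior for the exact Normal limb-darkening priors of Table 5, and adds an explicit bound to keep \(e<1\); it illustrates the approach rather than reproducing our production pipeline:

```python
# Schematic only; API names follow exoplanet ~v0.4 / pymc3 and may differ by version.
import numpy as np
import pymc3 as pm
import theano.tensor as tt
import exoplanet as xo

def fit_transit(t, flux, flux_err, P0, T0, rho_calc, rho_err):
    with pm.Model() as model:
        period = pm.Uniform("period", lower=P0 - 0.1, upper=P0 + 0.1)
        t0 = pm.Uniform("t0", lower=T0 - 0.1, upper=T0 + 0.1)
        r = pm.Uniform("r", lower=0.0, upper=0.2)      # Rp/R*
        b = pm.Uniform("b", lower=-1.2, upper=1.2)
        u = xo.distributions.QuadLimbDark("u")          # quadratic limb darkening
        rho_star = pm.Normal("rho_star", mu=rho_calc, sigma=rho_err)
        sesinw = pm.Uniform("sesinw", lower=-1.0, upper=1.0)  # sqrt(e) sin(w)
        secosw = pm.Uniform("secosw", lower=-1.0, upper=1.0)  # sqrt(e) cos(w)
        ecc = pm.Deterministic("ecc", sesinw**2 + secosw**2)
        omega = pm.Deterministic("omega", tt.arctan2(sesinw, secosw))
        pm.Potential("ecc_bound", tt.switch(ecc < 1.0, 0.0, -np.inf))

        orbit = xo.orbits.KeplerianOrbit(
            period=period, t0=t0, b=b, ecc=ecc, omega=omega, rho_star=rho_star)
        lc = xo.LimbDarkLightCurve(u).get_light_curve(orbit=orbit, r=r, t=t)
        pm.Normal("obs", mu=1.0 + tt.sum(lc, axis=-1), sigma=flux_err, observed=flux)

        map_soln = xo.optimize(start=model.test_point)  # MAP initialization
        trace = pm.sample(tune=20000, draws=20000, chains=2, start=map_soln)
    return trace
```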
### Application of Pipeline to _Kepler_ Sample.
With the proof-of-concept described in the Appendix in hand, we apply the photoeccentric pipeline to the sample of 163 _Kepler_ planets. For multi-planet systems, we fit each planet individually. We discard any simultaneous transits. We create stitched light curves containing only the transits of a single planet in each system following the method in the Injection and Recovery Demonstration section of the SI Appendix. While fitting planets in multi-planet systems individually does not force common stellar density and limb darkening posteriors, this method simplifies the process of removing planets or false positives in multi-planet systems from our sample and significantly reduces the required computational resources. As a test, we performed a joint planet fit on the three-planet system Kepler-445 (KOI 2704) and compared the posteriors for the individual and system fits. We found the differences in the posteriors, including those for stellar density and limb darkening, to be marginal. All fit parameters were consistent well within \(1\sigma\) between the joint and individual fits. (18) performed a similar test with the Kepler-42 (KOI 961) system and also found the differences in posteriors to be marginal.
If a system exhibits TTVs, fitting a transit light curve without correcting for TTVs may cause a "smeared" model fit that misrepresents the impact parameter (87). An inflated impact parameter may be compensated by an inflated transit duration, which may incorrectly suggest that the planet is eccentric (29). For systems with TTVs, we fit the Kepler light curve with exoplanet simultaneously with each individual transit time. We use the transit times published by (75) to set normal priors around each transit time \(t_{n}\), with \(\sigma_{t_{n}}=0.05\) days (1.2 hours). Consequently, the free parameters for these systems do not include \(P\) or \(t_{0}\). According to (67), KOIs 248.01, 248.02, 250.01, 250.02, 314.01, 314.02, 314.03, 886.01, 886.02, 989.01, and 952.02 show evidence for transit timing variations (TTVs). Additionally, (88) reported evidence for TTVs in KOI 902.01, which was not included in the sample of (67). We fit transit times for KOI 902.01 as well. (75) did not publish transit times and TTVs for KOI 898.01. In the interest of consistency, we do not fit TTVs for KOI 898.01. We do not see evidence of "smearing" or an inflated transit duration in the fit for KOI 898.01, so we conclude that a periodic transit model is appropriate for this planet and dataset.
We caution that low-amplitude TTVs may be undetected, or that the known TTVs are not sufficiently accounted for, affecting the eccentricity posteriors reported in this work. We compare the eccentricity distribution of planets with known TTVs and without TTVs, and we find no considerable differences between the two distributions. We also calculate \(g\) (Equation 12) for each planet, and we find that values of \(g\) greater than 1 and less than 1 are roughly equally common in our sample, suggesting that TTVs are sufficiently accounted for. The sub-sample of planets with known TTVs is small (11 planets), and all conclusions in this paper would be upheld if planets with known TTVs were removed from our sample.
For KOIs 255.02, 676.01, 676.02, 898.03, 936.01, 952.04, 961.03, 1427.01, 1427.02, 2704.03, 2715.02, 2715.03, 2842.02, 2842.03, 2926.03, and 2926.05, we do not fit the orbital period or epoch, to reduce the computational needs of their fits. We instead fix the period and epoch to the values published in (74). Because the orbital period posteriors for our sample tend to have uncertainties less than \(10^{-4}\) days, we do not expect fixing the period to significantly affect the resulting transit fit posteriors.
As a test, we compared transit fit posteriors using long cadence and short cadence data for several planets in our sample. We find that though the long-cadence fits constrain parameters more loosely than the short-cadence fits, the eccentricity posteriors are consistent with one another. We contend that for KOIs where only long-cadence data are available, the eccentricity posteriors may be poorly constrained, but are not significantly biased.
### Likelihood Functions for Underlying Eccentricity Models
The complete likelihood functions we used for the underlying eccentricity models are as follows:
\[p(\mathrm{obs}|\theta)=\prod_{k=1}^{K}\frac{1}{N}\sum_{n=1}^{N}\frac{e_{k}^{n}}{\sigma^{2}}\exp\left(\frac{-(e_{k}^{n})^{2}}{2\sigma^{2}}\right)\left(\frac{1-(e_{k}^{n})^{2}}{1+e_{k}^{n}\sin\omega_{k}^{n}}\right) \tag{17}\]
for the Rayleigh distribution with parameter \(\sigma\);
\[p(\mathrm{obs}|\theta)=\prod_{k=1}^{K}\frac{1}{N}\sum_{n=1}^{N}\frac{1}{\sigma}\sqrt{\frac{2}{\pi}}\exp\left(\frac{-(e_{k}^{n})^{2}}{2\sigma^{2}}\right)\left(\frac{1-(e_{k}^{n})^{2}}{1+e_{k}^{n}\sin\omega_{k}^{n}}\right) \tag{18}\]
for the half-Gaussian distribution with parameter \(\sigma\); and
\[p(\mathrm{obs}|\theta)=\prod_{k=1}^{K}\frac{1}{N}\sum_{n=1}^{N}\frac{\Gamma(a+b)(e_{k}^{n})^{a-1}(1-e_{k}^{n})^{b-1}}{\Gamma(a)\Gamma(b)}\left(\frac{1-(e_{k}^{n})^{2}}{1+e_{k}^{n}\sin\omega_{k}^{n}}\right) \tag{19}\]
for the Beta distribution with parameters \(a\) and \(b\). Here \(e_{k}^{n}\) and \(\omega_{k}^{n}\) denote the \(n^{th}\) posterior draw for the \(k^{th}\) planet, and the final factor reweights each draw by the inverse of the eccentricity-dependent transit probability.
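A sketch of the single-Rayleigh hierarchical likelihood (Equation 17), written for use with emcee; array shapes and function names are our assumptions (e_samps and w_samps hold \(N\) posterior draws for each of \(K\) planets):

```python
import numpy as np
import emcee

def log_likelihood_rayleigh(sigma, e_samps, w_samps):
    # e_samps, w_samps: arrays of shape (K planets, N posterior draws).
    pdf = (e_samps / sigma**2) * np.exp(-e_samps**2 / (2.0 * sigma**2))
    weight = (1.0 - e_samps**2) / (1.0 + e_samps * np.sin(w_samps))
    per_planet = np.mean(pdf * weight, axis=1)  # Monte Carlo average over draws
    if np.any(per_planet <= 0):
        return -np.inf
    return np.sum(np.log(per_planet))

def log_prob(theta, e_samps, w_samps):
    (sigma,) = theta
    if not 0.0 < sigma < 1.0:  # uniform prior on sigma
        return -np.inf
    return log_likelihood_rayleigh(sigma, e_samps, w_samps)

# sampler = emcee.EnsembleSampler(32, 1, log_prob, args=(e_samps, w_samps))
# sampler.run_mcmc(p0, 2000)
```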
\begin{table}
\begin{tabular}{c c c} \hline \hline Free Parameter & Prior Distribution & Values \\ \hline \(P\) & Uniform & \([Period-0.1,Period+0.1]\) (days) \\ \(t_{0}\) & Uniform & \([Epoch-0.1,Epoch+0.1]\) (days) \\ \(R_{p}/R_{*}\) & Uniform & \([0.0,0.2]\) \\ \(b\) & Uniform & \([-1.2,1.2]\) \\ \(u_{1}\) & Normal & \([u_{1},0.05]\) \\ \(u_{2}\) & Normal & \([u_{2},0.05]\) \\ \(\sqrt{e}\sin\omega\) & Uniform & \((-1,1)\) \\ \(\sqrt{e}\cos\omega\) & Uniform & \((-1,1)\) \\ \(\rho\) & Normal & \([\rho_{*},\sigma_{\rho_{*}}]\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Transit fit free parameter prior distributions and values. \(Period\) and \(Epoch\) in the Values column represent the published values for the associated parameters in the NASA Exoplanet Archive (74). \(u_{1}\) and \(u_{2}\) represent the limb darkening coefficients published by (86) for the respective host star. The limb darkening coefficients are sampled using the triangular limb darkening parameterization of (84).
## Discussion of Stellar Metallicity Provenances
We take stellar metallicities from Kepler DR25 as compiled by (32) to calculate densities for each star. (32) compiled stellar parameters for Kepler stars from several different sources. For the vast majority of our sample (124 stars), we take metallicities derived spectroscopically from (89). The next largest fraction (19 stars) have metallicities derived spectroscopically from (90). The remainder of our sample has spectroscopically derived metallicities from the following sources: 2 stars from (78, 91), 1 star from (92), and 6 stars from (69). Finally, we include photometrically derived metallicities from the following sources: 8 stars from (93), 2 stars from (94), and 2 stars from (95).
For the two largest metallicity provenances ((89) and (90)), we calculate the normalized transit duration (\(T_{14}/P^{1/3}\)) for planets in each subsample. We compare the total transit duration distributions in these two subsamples using a CDF plot (Figure 1) to ensure that the medians of both distributions appear similar. Because the vast majority of our sample has metallicities taken from one source, and over 93% of our sample has metallicities from spectroscopy, we contend that taking stellar parameters from different sources has not significantly biased our results, especially considering we ultimately take these values from the Kepler DR25 catalog, designed to be used uniformly to support the final Kepler transit detection run (32).
## Injection and Recovery Demonstration
To measure orbital eccentricities, we fit each planet's Kepler light curves using the calculated stellar density prior. We perform an injection and recovery test simulating this procedure to ensure our pipeline accurately recovers known planetary properties, and to investigate any variability in detection sensitivity across eccentricity (\(e\)) and longitude of periastron (\(\omega\)) space. We simulate a suite of light curves of various signal-to-noise ratios (SNR). We define the SNR as
\[\mathrm{SNR}=\frac{A}{\sigma}\sqrt{N\,N_{t}} \tag{20}\]
where \(A\) is the normalized transit depth, \(\sigma\) is the individual flux error, \(N\) is the number of observations in each transit, and \(N_{t}\) is the number of transits in each light curve. We calculate \(N\) by dividing the full transit duration by the appropriate flux cadence, and rounding to the nearest integer.
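In practice, Equation 20 is inverted to set the per-point flux error for a target SNR; a one-function sketch (names are ours):

```python
import numpy as np

def flux_error_for_snr(depth, n_in_transit, n_transits, snr):
    # Invert Equation 20: sigma = A sqrt(N * N_t) / SNR.
    return depth * np.sqrt(n_in_transit * n_transits) / snr

# e.g. a 2000 ppm transit sampled 250 times over 12 transits at SNR = 50:
sigma = flux_error_for_snr(0.002, 250, 12, 50.0)  # ~0.0022
```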
To investigate detection sensitivity across \(e\) and \(\omega\) space, we drew random combinations of \(e\) and \(\omega\) for several values of SNR. We drew between 200 and 500 combinations each for impact parameters of 0, 0.3, 0.6, and 0.8. \(e\) and \(\omega\) were drawn from uniform distributions with bounds \(e=[0.0,0.95]\), \(\omega=[0.0,360]\) degrees. We do not allow the injected or fit \(e\) to be larger than 0.95. We draw a set of combinations for light curve SNRs of 10, 50, and 100, using Equation 20 to calculate the corresponding flux error for individual points. We choose these values of SNR and \(b\) to reflect the properties of our sample, the majority of which have transit SNRs between 10 and 70 according to (74).
We calculate \(N\) by first calculating the total transit duration (\(T_{14}\)) using Equation 10, based on the injected values of \(e\), \(\omega\), and impact parameter, and dividing \(T_{14}\) by the appropriate flux cadence (one minute for short cadence and 30 minutes for long cadence). Therefore, the magnitude of the flux error bars for each injected light curve is slightly different, based on the transit duration as determined by the injected \(e\), \(\omega\), and impact parameter, preserving the SNRs (and not necessarily the magnitude of the light curve uncertainties).
For each combination of \(e\), \(\omega\), \(b\), and SNR, we create a light curve using these properties based on the transit properties of KOI 255.01. We chose KOI 255.01 to model the synthetic light curves because its transit properties are typical of our sample. The simulated light curves all have the same orbital period \(P=27.5\) days, planet/star radius ratio \(R_{p}/R_{*}=0.044\), and quadratic limb darkening parameters \(u_{1}=0.42\) and \(u_{2}=0.30\). Each synthetic light curve necessarily has \(N_{t}=12\) transits per light curve. We obtain the planet parameters from the NASA Exoplanet Archive (74).
The simulated light curves all have the same semimajor axis/stellar radius ratio \(a/R_{*}\), which we calculate with Equation 9, where \(M_{*}\) and \(R_{*}\) are the stellar mass and radius calculated using the method in the Calculation of Stellar Densities section of the SI Appendix. We calculate \(a/R_{*}\) rather than using the values published by (74) to ensure the system is consistent with our calculated mass and radius. Using each combination of \(e\), \(\omega\), and \(b\), we calculate \(\rho_{*}\) for each simulated system. We create simulated light curves using the transit modeling Python package batman (96). This process is repeated twice for simulated short-cadence and long-cadence data. For long-cadence light curves, each flux point is integrated over an exposure time of 30 minutes. We process the simulated light curves according to the procedure described in the Lightcurve Preparation section of the SI Appendix. To fit the transits, we apply a normal prior on the free parameter \(\rho\) centered at the calculated \(\rho_{*}\) with \(\sigma=1\,\rho_{\odot}\). A prior with \(\sigma=1\,\rho_{\odot}\) reflects a typical width of the stellar density priors in our sample. We sampled the model parameters using No-U-Turn Sampling (NUTS) (85) with chains of at least 1,000 draws each. We use 1,000 tuning steps for each fit.
We analyze the accuracy of the recovered eccentricity and longitude of periastron parameters in (\(\sqrt{e}\sin\omega\), \(\sqrt{e}\cos\omega\)) space. For each injected and recovered transit, we calculate the sensitivity metric \(N_{\sigma}\), where
\[N_{\sigma}=\left\langle\frac{\left|\sqrt{e}\sin\omega_{\rm inj}-\sqrt{e}\sin\omega_{\rm rec}\right|}{\sigma_{\sqrt{e}\sin\omega}},\frac{\left|\sqrt{e}\cos\omega_{\rm inj}-\sqrt{e}\cos\omega_{\rm rec}\right|}{\sigma_{\sqrt{e}\cos\omega}}\right\rangle \tag{21}\]
where \({\rm inj}\) refers to the injected parameter, \({\rm rec}\) refers to the recovered parameter, and \(\sigma\) is the standard deviation of the \(\sqrt{e}\sin\omega\) or \(\sqrt{e}\cos\omega\) posterior. We take the point estimates \(\sqrt{e}\sin\omega_{\rm rec}\) and \(\sqrt{e}\cos\omega_{\rm rec}\) to be the mean of the respective posterior. This metric represents the number of posterior standard deviations a recovered value falls from the injected value, averaged over the two parameters.
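A compact sketch of this metric (Equation 21), assuming the recovered posterior is passed as an array of draws over (\(\sqrt{e}\sin\omega\), \(\sqrt{e}\cos\omega\)):

```python
import numpy as np

def n_sigma(injected, rec_samples):
    # injected: length-2 array (sqrt(e) sin w, sqrt(e) cos w).
    # rec_samples: posterior draws, shape (n_draws, 2).
    mean = rec_samples.mean(axis=0)  # point estimates
    std = rec_samples.std(axis=0)    # posterior standard deviations
    return np.mean(np.abs(np.asarray(injected) - mean) / std)
```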
Figure 7 shows the sensitivity in parameter space for long cadence and short cadence data in (\(e\cos\omega\), \(e\sin\omega\)) space for injected SNRs of 10, 50, and 100. The color bar represents the median value of \(N_{\sigma}\) in each bin. The sensitivity is approximately uniform in parameter space, and all bins have a mean error less than \(2.5\sigma\). We demonstrate that the transit fitting machinery accurately recovers eccentricities using the light curve transit duration and stellar density for both long- and short-cadence simulated Kepler data, with little dependence on transit signal-to-noise.
## Inference of Simulated Parent Distribution
We employ an injection and recovery technique to verify our hierarchical Bayesian inference technique to draw out the underlying eccentricity distribution. We designate _a priori_ a functional form for this distribution, and then draw from it to assign eccentricities to a synthetic planetary sample. We ought ideally then to recover this distribution, if we properly account for selection bias.
We performed six injection and recovery simulations, drawing from three functional forms for eccentricity. We employ the same set of functions as those tested by (30) for the exoplanet eccentricity distribution. These include Rayleigh distributions with \(\sigma=0.2\) and \(0.5\), Beta distributions with \(a=0.8\), \(b=3.0\) and \(a=2\), \(b=10\), and half-Gaussian distributions with \(\sigma=0.2\) and \(0.5\). For each simulation, we repeat the steps outlined in the Injection and Recovery section of the SI Appendix using only short cadence data with an SNR of 100. We again model all simulated light curves based on the properties of KOI 255.01, with \(P=27.5\) days, planet/star radius ratio \(R_{p}/R_{*}=0.044\), and quadratic limb darkening parameters \(u_{1}=0.42\) and \(u_{2}=0.30\).
Instead of assigning one of three impact parameters to each simulated planet, as in the Injection and Recovery section of the SI Appendix, we randomly draw a point on a unit sphere for the orbital inclination. If the inclination produces an orbital path where a planet may not fully transit (\(b>0.9\)), we discard the simulation. We reject draws with \(b>0.9\) because allowing the sampler to explore \(b\) approaching 1 for grazing transits greatly increases the computational resources needed to perform this simulation. We discarded draws with \(b>0.9\) and only allowed the sampler to explore \(b\leq 0.9\) to mitigate this effect. Likewise, if we draw a combination of \(e\) and \(\omega\) that is not physical (e.g., the planet at periapse is less than 1 \(R_{*}\) away from the star), we discard the simulation. The discarded draws were not replaced with another draw, but we continue drawing until we reach 50 acceptable draws for each underlying distribution. We draw and simulate light curves for 50 eccentricities in each underlying distribution.
We aim to recover the parameters of the single Rayleigh, Beta, and half-Gaussian distributions that we used to prescribe \(e\) values for our synthetic sample. We randomly select 1000 points from the mock-up \(e\) posterior distribution for each planet. We employ a uniform prior for all distribution parameters. We use a Markov Chain Monte Carlo (MCMC) analysis with the Python package emcee (44). The chains were run with 32 walkers for 2000 steps each, and we discarded a burn-in phase of 500 steps. In all cases, we recover the injected distribution parameters within \(1\sigma\). We show one injected and recovered distribution for each distribution type in Figure 8.
We repeat this analysis for mixture models, specifically for a mixture model of Rayleigh distributions. We inject and recover two mixture model parameter sets: \(\sigma_{1}=0.1,\sigma_{2}=0.4,f=0.8\) and \(\sigma_{1}=0.25,\sigma_{2}=0.05,f=0.5\). In both cases, we recover the injected parameters within \(1\sigma\). In Figure 8, we show the injected and recovered parameters for one of these simulations.
## Appendix F Transit Fit Corner Plots
This section contains Figure 9.
|
2306.01265 | Calibrating Multimodal Learning | Multimodal machine learning has achieved remarkable progress in a wide range
of scenarios. However, the reliability of multimodal learning remains largely
unexplored. In this paper, through extensive empirical studies, we identify
current multimodal classification methods suffer from unreliable predictive
confidence that tend to rely on partial modalities when estimating confidence.
Specifically, we find that the confidence estimated by current models could
even increase when some modalities are corrupted. To address the issue, we
introduce an intuitive principle for multimodal learning, i.e., the confidence
should not increase when one modality is removed. Accordingly, we propose a
novel regularization technique, i.e., Calibrating Multimodal Learning (CML)
regularization, to calibrate the predictive confidence of previous methods.
This technique could be flexibly equipped by existing models and improve the
performance in terms of confidence calibration, classification accuracy, and
model robustness. | Huan Ma. Qingyang Zhang, Changqing Zhang, Bingzhe Wu, Huazhu Fu, Joey Tianyi Zhou, Qinghua Hu | 2023-06-02T04:29:57Z | http://arxiv.org/abs/2306.01265v1 | # Calibrating Multimodal Learning
###### Abstract
Multimodal machine learning has achieved remarkable progress in a wide range of scenarios. However, the reliability of multimodal learning remains largely unexplored. In this paper, through extensive empirical studies, we identify that current multimodal classification methods suffer from unreliable predictive confidence and tend to rely on partial modalities when estimating confidence. Specifically, we find that the confidence estimated by current models could even increase when some modalities are corrupted. To address the issue, we introduce an intuitive principle for multimodal learning, i.e., the confidence should not increase when one modality is removed. Accordingly, we propose a novel regularization technique, i.e., Calibrating Multimodal Learning (CML) regularization, to calibrate the predictive confidence of previous methods. This technique can be flexibly combined with existing models and improves performance in terms of confidence calibration, classification accuracy, and model robustness.
## 1 Introduction
Multimodal data widely exist in real-world applications such as medical analysis (Perrin et al., 2009), social media (Wang et al., 2019), and autonomous driving (Khodayari et al., 2010). To fully explore the potential value of each modality, multimodal learning emerges as a promising way to train a machine learning (ML) model by integrating all available multimodal cues for further data analysis tasks. Numerous approaches have been proposed to build multimodal learning paradigms for various tasks (Wang et al., 2019; Antol et al., 2015; Bagher Zadeh et al., 2018; Kishi et al., 2019). Despite above progresses, the reliability of current multimodal learning methods remains largely unexplored. In the setting of classification, one key aspect of the reliability is to build a high-quality confidence estimator (Moon et al., 2020; Corbiere et al., 2019; Guo et al., 2017), which can quantitatively characterize the probability that predictions will be correct. With such an estimator, further processing can be taken to improve the performance of the system (e.g., human assistance) when the predictive uncertainty is high. This is especially useful in high-stake scenarios (Hafner et al., 2019; Qaddoum and Hines, 2012).
In the setting of multimodal learning, in addition to the exact overall prediction confidence, the relationship between the confidence and the number of modalities must also be considered. Intuitively, the confidence of an ideal multimodal classifier should not increase when one modality is removed (for brevity, we state the question in terms of removing one modality; the same phenomenon is observed when removing more than one). An illustrative example of an ideal confidence estimator is shown in Fig. 1, where the confidence gradually decreases as the observed information becomes less comprehensive. However, we conduct extensive empirical studies on current methods and observe that when one modality is removed, the overall confidence estimated by them can even increase. This observation contradicts the common assumption of multimodal learning, since modalities are assumed to be predictive of the target for most multimodal learning tasks (Wu et al., 2022), as well as the principle "_the essence of information is to eliminate uncertainty (Shannon)_" in informatics (Soni and Goodman, 2017; Burgin, 2002). Intuitively, this implies that such models are more inclined to believe in a single modality and are prone to be dominated by it, as has also been shown in prior works (Wu et al., 2022; Wang et al., 2020). This further impairs the robustness of the learned models, i.e., the models are easily influenced when some modalities are corrupted, since they cannot make decisions according to a trustworthy confidence (probability) estimator.
A natural idea to address the above issue is to employ recent uncertainty calibration methods such as temperature scaling (Guo et al., 2017) or Bayesian learning (Cobb and Jalaian, 2021; Karaletsos and Bui, 2020; Foong et al., 2020), which can build more accurate confidence estimation than the traditional training/inference manner. However, these approaches do not explicitly consider the relationship between different modalities (i.e., they can only calibrate the overall confidence but cannot calibrate the confidence across different numbers of modalities) and thus still fail to achieve satisfactory performance in the multimodal learning setting. To address this issue, we propose a novel regularization technique called **C**alibrating **M**ultimodal **L**earning (CML), which enforces consistency between prediction confidence and the number of modalities. The motivation of CML is based on a natural intuition, i.e., the prediction confidence should decrease (or at least not increase) when one modality is removed, which could intrinsically improve the confidence calibration. Specifically, we propose a simple regularization term that enforces a model to learn this intuitive ranking relationship by adding a penalty for the samples whose predictive confidence increases when one modality is removed. The main contributions of this paper are summarized as follows:
* We conduct extensive empirical studies to show that most existing multimodal learning paradigms tend to be over-confident on partial modalities (different samples are over-confident on different modalities, rather than all samples being over-confident on the same modalities), which implies that they fail to achieve trustworthy confidence estimation.
* We introduce a measure to evaluate the reliability of the confidence estimation from the confidence ranking perspective, which can characterize whether a multimodal learning method can treat all modalities fairly.
* We propose a regularization strategy to calibrate the confidence of various multimodal learning methods, and then conduct extensive experiments to show the superiority of our method in terms of the confidence calibration (Table 1), classification accuracy (Table 2) and model robustness (Table 3).
## 2 Related Work
**Uncertainty estimation** provides a way for trustworthy prediction (Abdar et al., 2021; Chau et al., 2021; Slack et al., 2021; Singh et al., 2021; Ning et al., 2021; Zhang et al., 2021). Uncertainty can be used as an indicator of whether the predictions given by models are prone to be wrong (Ritter et al., 2021; Wang and Zou, 2021; Zaidi et al., 2021; Stadler et al., 2021; Bai et al., 2021; Rahaman and thiery, 2021; Galil and El-Yaniv, 2021; Upadhyay et al., 2021). Many uncertainty-based models have been proposed in the past decades, such as Bayesian neural networks (Neal, 2012;
Figure 1: Motivation of calibrating multimodal learning. The confidence of an ideal multimodal classifier should decrease, or at least not increase, when one modality is removed (even when the removed modality is noisy; otherwise, the model takes noise as semantics and is not trustworthy).
MacKay, 1992; Denker and LeCun, 1990; Kendall and Gal, 2017), Dropout (Molchanov et al., 2017), Deep ensembles (Lakshminarayanan et al., 2017; Havasi et al., 2020), and DUQ (van Amersfoort et al., 2020) built upon RBF networks. **Prediction confidence** (Sahoo et al., 2021; Wald et al., 2021; Pan et al., 2021; Luo et al., 2021; Xu et al., 2021; Chung et al., 2021; Xiong et al., 2021) is always referred to in classification models, which expects the predicted class probability to be consistent with the empirical accuracy (Qin et al., 2021; Minderer et al., 2021; Zhao et al., 2021; Tian et al., 2021; Karandikar et al., 2021; Jeong et al., 2021). Many methods focus on smoothing the prediction probability distribution, such as Label smoothing (Muller et al., 2019), focal loss (Mukhoti et al., 2020), TCP (Corbiere et al., 2019), and Temperature scaling (TS) (Guo et al., 2017). For more related research, please refer to Appendix G.
**Multimodal learning** emerges as a promising way to exploit complementary information from different modalities. How to benefit from multimodal data has been a popular research direction, and researchers usually focus on improving the architectural designs of multimodal models (Pérez-Rúa et al., 2019; Sun et al., 2021). In the setting of multimodal classification, MMTM (Joze et al., 2020) achieves state-of-the-art performance by connecting corresponding convolutional layers of different uni-modal branches. Since the proposed method calibrates confidence across different numbers of modalities, multimodal classifiers that can deal with incomplete data are natural candidates for validating our motivation. There is a wide range of research interest in handling missing modalities for multimodal learning, including imputation-independent methods (Zhang et al., 2019) and imputation-dependent methods (Mattei and Frellsen, 2019; Wu and Goodman, 2018). Imputation-independent methods do not need to reconstruct the missing modalities before classification. Imputation-dependent methods usually conduct classification in two stages: reconstructing the missing modalities, then classifying according to the reconstructed modalities. In this paper, we employ CPM-Nets (Zhang et al., 2019), MIWAE (Mattei and Frellsen, 2019), and MMTM (Joze et al., 2020) to validate our motivation due to their representativeness in multimodal learning.
## 3 Method
In this section, we first introduce some basic notations in Section 3.1. We present the basic assumption of our method and its empirical motivation in Section 3.2, based on the principle "the essence of information is to eliminate uncertainty", and then evaluate the confidence estimation performance of current multimodal methods in Section 3.3, finding that they violate the principle. Finally, we propose a simple yet effective regularization technique to improve the confidence estimation of multimodal models and elaborate the technical details in Section 3.4.
### Notation
We define the training data as \(\mathcal{D}=\left\{\{x_{i}^{m}\}_{m=1}^{M},y_{i}\right\}_{i=1}^{N}\), where \(x_{i}^{m}\) is the \(m\)-th modality of the \(i\)-th sample, and \(y_{i}\in\{1,\cdots,K\}\) is the corresponding class label. To distinguish one modality or a set of modalities, we use \(x^{m}\) and \(x^{(\mathbb{S})}\) to represent the \(m\)-th modality and multiple modalities respectively, where \(\mathbb{S}\) is a set of modalities' indexes (e.g., if we have \(\mathbb{S}=\{1,2\}\), then \(x^{(\mathbb{S})}\) indicates a feature set consisting of \(x^{1}\) and \(x^{2}\), and \(x^{(\mathbb{M})}=\{x^{1},\cdots,x^{M}\}\) indicates the complete \(M\) modalities). The goal is to learn a function parameterized by \(\theta\): \(f(x^{(\mathbb{M})},\theta)\to z\), where the output \(z\) of the network is a vector of \(K\) values called logits. Then the logits vector is transformed by a softmax layer: \(\hat{p}_{k}=e^{z_{k}}/\sum_{k^{\prime}}e^{z_{k^{\prime}}}\), where the probability distribution of a sample \(x\) is defined as \(\operatorname{P}(y\mid\theta,x^{(\mathbb{M})})=\{\hat{p}_{k}\}_{k=1}^{K}\). The predicted class label is \(\hat{y}=\operatorname*{arg\,max}_{y}\operatorname{P}(y\mid\theta,x^{(\mathbb{M})})\), and the confidence is defined as \(\text{Conf}(x^{(\mathbb{M})})=\max_{y}\operatorname{P}(y\mid\theta,x^{(\mathbb{M})})\).
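For concreteness, the confidence defined above can be computed from a logits vector in a few lines; the following numpy sketch is a minimal illustration of these definitions and is not tied to any particular classifier \(f\):

```python
import numpy as np

def confidence(logits):
    """Conf(x) = max_k softmax(z)_k for a single logits vector z = f(x, theta)."""
    z = logits - logits.max()           # subtract the max for numerical stability
    p = np.exp(z) / np.exp(z).sum()     # softmax probabilities p_k
    return p.max(), int(p.argmax())     # (confidence, predicted class)

conf, pred = confidence(np.array([2.0, 0.5, -1.0]))  # e.g., K = 3 classes
```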
### Basic Assumption
In real-world applications, the quality of multimodal data is usually unstable (e.g., some modalities may be corrupted), so the quality of the multimodal input should be reflected in some quantitative manner (i.e., predictive confidence), which is especially important when multimodal models are deployed for high-stake tasks. However, it is difficult to exactly define the "quality" of each sample, and we cannot define the exact functional relationship between quality and confidence since the confidence from different models is basically different for the same sample. This issue results in the lack of supervision for confidence estimation. Fortunately, according to the principle "_the essence of information is to eliminate uncertainty (Shannon)_" in informatics (Soni and Goodman, 2017; Burgin, 2002) (i.e., more information, less uncertainty), we can approximate this relationship with a ranking-based form as follows:
**Proposition 3.1**.: _Given two versions of a sample \(x^{(\mathbb{M})}\), i.e., \(x^{(\mathbb{T})}\) and \(x^{(\mathbb{S})}\), if we can assure \(\mathbb{T}\subset\mathbb{S}\subseteq\mathbb{M}\), then, for a trustworthy multimodal classifier \(f(\cdot)\), it should hold \(\text{Conf}(f(x^{(\mathbb{T})}))\leq\text{Conf}(f(x^{(\mathbb{S})}))\)._
For most multimodal learning tasks, all modalities are assumed to be predictive for the target (Wu et al., 2022), and the proposed method is also based on this assumption. For a trustworthy classifier, the predictive confidence should not increase when one modality is removed. We further define the prediction **C**onfidence **I**ncrement (CI) under an informativeness increment for a sample as:
\[\begin{split}\mathrm{CI}(x^{(\mathbb{T})},x^{(\mathbb{S})})=\mathrm{ Conf}(f(x^{(\mathbb{S})}))-\mathrm{Conf}(f(x^{(\mathbb{T})}))\\ \text{s.t. }\mathbb{T}\subset\mathbb{S}\subseteq\mathbb{M},\end{split} \tag{1}\]
where \(\mathbb{T}\) and \(\mathbb{S}\) are sets of modalities' indexes. Specifically, a negative value indicates poor confidence estimation, where the predictive confidence increases when one modality is removed. To quantify the extent to which a learned model violates Proposition 3.1, we introduce a novel measure, the **V**iolating **R**anking **R**ate (VRR), as the proportion of test samples whose predictive confidence will increase when removing one modality:
\[\begin{split}\mathrm{VRR}=\mathbb{E}_{(\mathbb{T},\ \mathbb{S})}\left[ \mathbb{1}\left(\mathrm{CI}(x^{(\mathbb{T})},x^{(\mathbb{S})})<0\right) \right]\\ \text{s.t. }\mathbb{T}\subset\mathbb{S}\subseteq\mathbb{M}.\end{split} \tag{2}\]
Inspired by prior methods (Moon et al., 2020; Toneva et al., 2018), we initialize \(\mathbb{S}\) as the complete set of modalities and obtain \(\mathbb{T}\) by randomly removing a modality from \(\mathbb{S}\). Then \(\mathbb{T}\) is regarded as \(\mathbb{S}\) for another confidence ranking pair, and we repeat this process until only one modality remains in \(\mathbb{T}\) (please refer to Appendix A for details). A natural question then arises: how good is the confidence estimation of current methods when one modality is removed?
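Before turning to this question, the sampling procedure just described can be sketched in a few lines of Python; here `model.confidence(x, subset)` is a hypothetical interface returning \(\mathrm{Conf}(f(x^{(\mathbb{S})}))\) for the modality-index set `subset`, standing in for whatever masking or imputation mechanism a concrete classifier provides:

```python
import random

def estimate_vrr(model, dataset, M):
    """Monte Carlo estimate of VRR (Eq. 2) via random modality-removal chains."""
    violations, pairs = 0, 0
    for x in dataset:
        S = set(range(M))                                  # complete modalities
        while len(S) > 1:
            T = set(random.sample(sorted(S), len(S) - 1))  # remove one modality
            ci = model.confidence(x, S) - model.confidence(x, T)  # CI, Eq. 1
            violations += int(ci < 0)                      # ranking violated
            pairs += 1
            S = T                                          # T becomes the new S
    return violations / pairs
```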
### Confidence Estimation Performance of Current Multimodal Methods
To evaluate the quality of confidence estimation of existing multimodal classifiers, we compute the VRR score of CPM-Nets (Zhang et al., 2019) and MIWAE (Mattei and Frellsen, 2019), which are two typical methods for handling incomplete multimodal data. In addition to classifiers for incomplete multimodal data, we also evaluate MMTM (Joze et al., 2020), which is a state-of-the-art multimodal classification method. As shown in Table 1, the VRR scores of previous methods are quite high, which indicates that the prediction confidence of a large portion of samples violates Proposition 3.1. The visualization is shown in Fig. 2, where the red color indicates the proportion of test samples whose predictive confidence estimated by the model decreases while providing more modalities.
A naive strategy is to re-balance the contribution of every modality (i.e., allocating a smaller weight during fusion to the modality on which samples are over-confident). As shown in Fig. 2, however, we find that different samples are over-confident on different modalities rather than all samples being over-confident on the same modality. This indicates that the problem cannot be solved by re-weighting the overall contribution of different modalities, since that would make the confidence estimation of some samples worse. Instead, our method characterizes the relationship between the modalities in a sample-wise manner, which inherently calibrates the contribution for all samples. Intuitively, a model that often increases its prediction confidence when one modality is removed is risky, since this implies that the confidence of a sample and its informativeness are mismatched. As a result, such models cannot be deployed in risk-sensitive applications such as medical diagnosis. As a comparison, our method can significantly decrease the VRR score (see more details in Table 1), implying a more trustworthy confidence estimation.
### Calibrating Multimodal Classification Model
As shown in Section 3.3, current multimodal methods usually increase the prediction confidence when one modality is removed, which potentially harms both trustworthiness and performance. To address this issue, a direct strategy
Figure 2: Current methods (Joze et al., 2020; Zhang et al., 2019; Mattei and Frellsen, 2019) violate Proposition 3.1 (the red color indicates the proportion of test samples whose predictive confidence given by the model decreases while providing more modalities; "CI" is defined in Eq. 1). We estimate the performance on two-modality datasets, and the pie charts show that different samples over-rely on different modalities rather than all samples over-relying on the same modality (e.g., "\(53\%\) Mod1" indicates that, among the samples that violate Proposition 3.1, \(53\) percent increase in confidence when Mod2 is removed, while the remaining samples increase in confidence when Mod1 is removed).
is to minimize the following confidence difference:
\[\mathcal{L}^{(\mathbb{T},\ \mathbb{S})}=\mathrm{Conf}(x^{(\mathbb{T})})-\mathrm{ Conf}(x^{(\mathbb{S})}). \tag{3}\]
However, in practice models can sometimes still make an accurate prediction confidently when one modality is removed. Eq. 3 forces models to produce relatively small confidence whenever one modality is removed, which results in extremely small confidence for each modality (please refer to Appendix B.6 for details). For this issue, we relax this regularization by only penalizing the situation where the estimated confidence increases when one modality is removed. For any pair of multimodal inputs which satisfies \(\mathbb{T}\subset\mathbb{S}\subseteq\mathbb{M}\), the regularization can be written as:
\[\mathcal{L}^{(\mathbb{T},\ \mathbb{S})}=\max\left(0,\mathrm{Conf}(x^{(\mathbb{T})})- \mathrm{Conf}(x^{(\mathbb{S})})\right). \tag{4}\]
For each sample, the total regularization loss is integrated over all pairs of inputs with different numbers of modalities, which is formalized as:
\[\mathcal{L}^{\text{CML}}=\sum_{(\mathbb{T},\ \mathbb{S})}\mathcal{L}^{( \mathbb{T},\ \mathbb{S})},\quad\{\forall(\mathbb{T},\ \mathbb{S})|\mathbb{T}\subset\mathbb{S}\subseteq\mathbb{M}\}. \tag{5}\]
The exact computation of the above loss requires enumerating all modality set pairs \((\mathbb{T},\mathbb{S})\), which is typically computationally expensive. Therefore, we propose to approximate this loss by sampling, which works well in practice. Specifically, we sample pairs in the same way as when computing VRR defined in Eq. 2.
The proposed regularization is general and thus can be adopted by current multimodal classifiers as an additional loss term to calibrate their confidence estimation. We provide examples of utilizing the proposed technique in an imputation-independent method (i.e., CPM-Nets (Zhang et al., 2019)), an imputation-dependent method (i.e., MIWAE (Mattei & Frellsen, 2019)), and a recent multimodal classification method (i.e., MMTM (Joze et al., 2020)). The proposed regularization can be flexibly deployed in current multimodal methods, and accordingly the objective function becomes:
\[\mathcal{L}=\mathcal{L}^{\text{CL}}+\lambda\mathcal{L}^{\text{ CML}}, \tag{6}\]
where \(\mathcal{L}^{\text{CL}}\) is the classification loss criterion (e.g., cross-entropy loss), and \(\lambda\) is a hyperparameter controlling the strength of the CML regularization. The process of calibrating a multimodal classifier is shown in Algorithm 1.
```
Given dataset \(\mathcal{D}=\left\{\{x_{i}^{m}\}_{m=1}^{M},y_{i}\right\}_{i=1}^{N}\), initialized classifier \(f\),
classification loss criterion \(\mathcal{L}^{\text{CL}}\), hyperparameter \(\lambda\), and number of
training epochs \(train\_epochs\).
for \(e=1,\dots,train\_epochs\) do
    \(\mathbb{S}\leftarrow\mathbb{M}\); \(\mathcal{L}^{\text{CL}}\leftarrow\mathcal{L}^{\text{CL}}(x^{(\mathbb{S})})\); \(\mathcal{L}^{\text{CML}}\gets 0\)
    for \(m=1,\dots,M-1\) do
        Randomly remove a modality of \(\mathbb{S}\) and set it as \(\mathbb{T}\)
        Compute the classification loss: \(\mathcal{L}^{\text{CL}}\leftarrow\mathcal{L}^{\text{CL}}+\mathcal{L}^{\text{CL}}(x^{(\mathbb{T})})\)
        Compute the regularization loss: \(\mathcal{L}^{\text{CML}}\leftarrow\mathcal{L}^{\text{CML}}+\max\left(0,\mathrm{Conf}(x^{(\mathbb{T})})-\mathrm{Conf}(x^{(\mathbb{S})})\right)\)
        \(\mathbb{S}\leftarrow\mathbb{T}\)
    end for
    Total loss: \(\mathcal{L}=\frac{1}{M}\mathcal{L}^{\text{CL}}+\lambda\mathcal{L}^{\text{CML}}\)
    Update the parameters of the classifier \(f\) with \(\mathcal{L}\)
end for
return the classifier \(f\)
```
**Algorithm 1** Calibrating Multimodal Classifier
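For illustration, a PyTorch sketch of the loss computation in Algorithm 1 is given below; `model(x_list, subset)` is a hypothetical interface returning logits computed from the modalities indexed by `subset` (e.g., via masking or imputation) and is not part of any specific baseline:

```python
import random
import torch
import torch.nn.functional as F

def cml_loss(model, x_list, y, lam=0.1):
    """Total loss of Algorithm 1 (Eq. 6) for one mini-batch.

    x_list holds the M modality tensors, y the class labels,
    and lam is the CML strength (lambda in Eq. 6).
    """
    M = len(x_list)
    S = list(range(M))
    logits = model(x_list, S)
    cls = F.cross_entropy(logits, y)
    conf_S = torch.softmax(logits, dim=-1).max(dim=-1).values
    reg = logits.new_zeros(())
    for _ in range(M - 1):
        T = S.copy()
        T.pop(random.randrange(len(T)))                # randomly drop one modality
        logits_T = model(x_list, T)
        cls = cls + F.cross_entropy(logits_T, y)
        conf_T = torch.softmax(logits_T, dim=-1).max(dim=-1).values
        reg = reg + torch.clamp(conf_T - conf_S, min=0).mean()  # hinge of Eq. 4
        S, conf_S = T, conf_T                          # T becomes the new S
    return cls / M + lam * reg
```

Note that the hinge in Eq. 4 only activates on pairs that violate the ranking, so well-calibrated samples are left untouched by the regularizer.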
### Discussion and Analyses
\(\circ\)**Why should a model meet the ranking relationship regardless of class labels?** For multimodal learning, all modalities are assumed to be predictive of the target (Wu et al., 2022), which can be expressed as \(I(y,x^{m})\geq 0\), where \(I(\cdot)\) denotes mutual information (Blum & Mitchell, 1998) and \(x^{m}\) indicates the \(m\)-th modality.
**Lemma 3.2**.: _Suppose we have two versions of a sample \(x^{(\mathbb{M})}\), i.e., \(x^{(\mathbb{T})}\) and \(x^{(\mathbb{S})}\), if we can assure \(\mathbb{T}\subset\mathbb{S}\subseteq\mathbb{M}\), then, for any class label \(y\), we have \(I(y,x^{(\mathbb{T})})\leq I(y,x^{(\mathbb{S})})\)._
In other words, \(x^{(\mathbb{S})}\) is more predictive for the target than \(x^{(\mathbb{T})}\) regardless of the label. For a trustworthy multimodal classification model, the confidence of \(x^{(\mathbb{T})}\) should not be larger than that of \(x^{(\mathbb{S})}\).
\(\circ\)**Why can CML regularization calibrate a model?** CML regularization can guarantee a smaller confidence for \(x^{(\mathbb{T})}\) when the model makes a wrong prediction on \(x^{(\mathbb{S})}\), which means that CML can alleviate over-confidence.
**Lemma 3.3**.: _Suppose the CML regularization can achieve a lower \(\mathrm{VRR}\), i.e., \(\mathrm{VRR}_{CML}<\mathrm{VRR}_{ORIG}\), then for the samples that meet \(\mathbb{E}\left(\mathrm{Conf}_{CML}(x^{(\mathbb{S})})\right)=\mathbb{E}\left( \mathrm{Conf}_{ORIG}(x^{(\mathbb{S})})\right)\), we have \(\mathbb{E}\left(\mathrm{Conf}_{CML}(x^{(\mathbb{T})})\right)\leq\mathbb{E} \left(\mathrm{Conf}_{ORIG}(x^{(\mathbb{T})})\right)\)._
From the empirical results, we find that \(\mathrm{Conf}_{CML}(x^{(\mathbb{S})})\) and \(\mathrm{Conf}_{ORIG}(x^{(\mathbb{S})})\) are very similar for most samples, where \(\mathrm{Conf}_{ORIG}(\cdot)\) and \(\mathrm{Conf}_{CML}(\cdot)\) indicate the confidence estimated by the original (ORIG) model and the model improved by CML regularization, respectively. For the proof of Lemma 3.3 and the empirical results, please refer to Appendix B.5.
\(\circ\)**Why not just penalize the difference in confidence (i.e., minimizing \(\text{Conf}(x^{(\mathbb{T})})-\text{Conf}(x^{(\mathbb{S})})\))?** Forcing the confidence for \(x^{(\mathbb{T})}\) to be smaller than the confidence for \(x^{(\mathbb{S})}\) regardless of whether the samples violate Prop. 3.1 leads to very small confidence for \(x^{(\mathbb{T})}\), and adding such a penalty to samples that already meet Prop. 3.1 leads to a trivial solution (i.e., extremely small confidence when any modality is removed; the experiments are shown in Appendix B.6). Moreover, the model can sometimes still make correct predictions confidently when one modality is removed. A flexible ranking regularization (Eq. 4) is therefore more appropriate for real situations.
## 4 Experiments
### Setup
We deploy the proposed regularization strategy into different types of multimodal classifiers, including an imputation-independent method (Type I), an imputation-dependent method (Type II), and a recent state-of-the-art method (Type III). CPM-Nets (Zhang et al., 2019) is a typical imputation-independent algorithm, which can adapt to arbitrary missing patterns without reconstructing the missing modalities. MIWAE (Mattei & Frellsen, 2019) is an imputation-dependent algorithm. The above two methods are well-established models in incomplete multimodal learning. In addition to incomplete multimodal learning methods, we also deploy the regularization into an advanced multimodal classification method, the Multimodal Transfer Module (MMTM) (Joze et al., 2020). We approximate the modality removal by feature corruption (e.g., adding strong noise) because MMTM cannot make a prediction when one modality is explicitly removed. For a fair comparison, the only difference is whether the model is equipped with CML regularization or not. Please refer to Appendix B.2 for more detailed settings.
**Datasets:** We evaluate the proposed method on diverse multimodal datasets, such as YaleB (Georghiades et al., 2002), Handwritten (Perkins & Theiler, 2003), CUB (Wah et al., 2011), Animal (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015) (a class-imbalanced dataset), TUANDROMD (Borah et al., 2020), NYUD2 (Qi et al., 2017), and SUN-RGBD (Song et al., 2015). It should be pointed out that we also evaluate the proposed method on the class-imbalanced dataset. We find that CML can improve the performance when the training data is class-imbalanced, since CML calibrates the model regardless of the label, while the vanilla model always tends to be under-confident on the minority classes compared with the majority classes. For a more detailed analysis, please refer to Appendix B.1.
### Questions to be Verified
We conduct diverse experiments to comprehensively investigate the underlying assumption and the proposed method, including:
\(\circ\)**Can CML regularization improve the confidence estimation of multimodal classifiers?** To validate whether the proposed method improves multimodal classifiers' confidence estimation, we evaluate the confidence estimation of current multimodal classifiers without and with CML regularization, respectively. We conduct experiments of each type of method on seven datasets and evaluate their trustworthiness in terms of VRR (defined in Eq. 2).
\(\circ\)**Can CML regularization improve robustness?** CML regularization can improve multimodal classifiers' confidence estimation, so a natural question arises: does better confidence estimation imply better robustness? To verify this, we evaluate the robustness on both complete multimodal data and noisy multimodal data (adding Gaussian noise to some modalities, i.e., zero mean with varying variance \(\epsilon\)).
\(\circ\)**Is CML easy to deploy and insensitive to hyperparameters?** In order to investigate the key factor that drives the improvement of the proposed method, we evaluate the performance in terms of classification accuracy under different strengths of CML regularization. We conduct experiments on both the original and noised data (i.e., adding noise to one of the modalities during testing). More details are shown in Appendix B.2.
### Results
#### 4.3.1 Confidence Estimation
We evaluate the confidence estimation of current multimodal learning models from a ranking perspective. It is observed that for a large portion of samples the confidence will increase when one modality is removed, while the confidence estimation of the classification models equipped with our proposed CML regularization is significantly improved. We intuitively demonstrate the confidence change in Fig. 3, and the quantitative results are shown in Tab. 1. In Fig. 3, we show the confidence estimation of CPM-Nets, where "Original" and "CML" indicate the model without and with the proposed CML regularization, respectively. According to Fig. 3, it is observed that the confidence without CML regularization may increase when one modality
Figure 3: Confidence estimation when one modality is removed, where "CI" is defined in Eq. 1.
is removed, which indicates that the model fails to take all modalities into account fairly when making predictions. This leads to poor robustness and generalization, as further verified in Sec. 4.3.2.
#### 4.3.2 CML Regularization Improves Robustness
In this subsection, we evaluate the performance on complete multimodal data, where the training/test data is divided as in previous work (Zhang et al., 2019). From Tab. 2, the classification models equipped with CML regularization consistently outperform their counterparts (i.e., the original classification models), validating the rationality of the CML principle. It is worth noting that Type III exhibits a significant improvement, while the improvement in Type I and Type II is relatively minor compared to the standard deviation. The high variance can be attributed to the baseline models themselves. To avoid the influence of empirical contingency, we report means and standard deviations over 5 or 10 runs in our paper. Furthermore, we distinguish the marks in the table based on the significance of the improvement, with a lighter color indicating a relatively minor improvement compared to the standard deviation. Results on more datasets are shown in Appendix B.4.
Significantly improving the accuracy on real-world data without additional techniques or more advanced architectures can be challenging, as the benchmark datasets already exhibit good performance in terms of accuracy. However, we observe that the models equipped with CML regularization are more robust to noise, particularly when the noise is heavy. Specifically, we find that CML regularization can improve the robustness to imperfect data, such as noisy inputs. We evaluate the models in terms of test accuracy under Gaussian noise (i.e., zero mean and varying variance \(\epsilon\)), where "Noise On" indicates which modality is noised (e.g., \(\{1\}\) indicates the first modality is noised). We report the performance on the challenging datasets (CUB and Animal) in the main text (Tab. 3), and more results are given in Appendix B.3.
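The "Noise On" protocol is straightforward to reproduce; the sketch below assumes a generic fused classifier `model(x_list)` (a placeholder interface, not a specific baseline) and corrupts one modality with zero-mean Gaussian noise of variance \(\epsilon\):

```python
import torch

@torch.no_grad()
def noisy_accuracy(model, loader, noisy_idx, eps):
    """Test accuracy when modality `noisy_idx` is corrupted by N(0, eps)."""
    correct = total = 0
    for x_list, y in loader:
        x_list = list(x_list)
        x = x_list[noisy_idx]
        x_list[noisy_idx] = x + (eps ** 0.5) * torch.randn_like(x)  # variance eps
        pred = model(x_list).argmax(dim=-1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```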
#### 4.3.3 Performance under Different Strengths of CML Regularization
In this subsection, we report the accuracy under different strengths of regularization (where "\(\lambda=0\)" indicates the model is not equipped with the proposed CML regularization). We also add Gaussian noise (i.e., zero mean and varying variance \(\epsilon\)) to one of the modalities on CUB, and it is clear that the model with CML regularization is more robust to the potential noise.
As shown in Fig. 4, it is observed that CML regularization improves accuracy on noisy data. The potential reason is that CML regularization enforces reasonable confidence estimation and thus prevents the model from being over-confident on a low-quality modality, which usually tends to result in a wrong decision. Moreover, according to Fig. 4, the proposed regularization is not sensitive to the hyperparameter \(\lambda\), and promising performance can be expected with a mild regularization strength. In other words, CML is easy to deploy into a wide spectrum of multimodal models.
## 5 Conclusion
In this work, we reveal a novel issue widely existing in multimodal learning through extensive empirical studies.
\begin{table}
\begin{tabular}{c|c c c c c c} \hline \hline Method & CML & TUANDROMD & YaleB & Handwritten & CUB & Animal \\ \hline \multirow{3}{*}{Type I} & ✗ & \(23.38\pm 1.39\) & \(39.15\pm 4.97\) & \(17.64\pm 2.31\) & \(2.83\pm 1.55\) & \(44.39\pm 7.55\) \\ & ✓ & \(12.58\pm 2.84\) & \(15.05\pm 1.12\) & \(3.18\pm 0.80\) & \(2.17\pm 1.13\) & \(29.02\pm 5.43\) \\ & Improve & \(\bigtriangleup 10.80\) & \(\bigtriangleup 24.10\) & \(\bigtriangleup 14.46\) & \(\bigtriangleup 0.66\) & \(\bigtriangleup 15.37\) \\ \hline \multirow{3}{*}{Type II} & ✗ & \(39.17\pm 2.32\) & \(20.54\pm 4.26\) & \(33.82\pm 5.16\) & \(23.17\pm 4.87\) & \(12.51\pm 1.50\) \\ & ✓ & \(8.38\pm 1.31\) & \(14.46\pm 2.17\) & \(29.99\pm 2.30\) & \(20.17\pm 3.05\) & \(8.64\pm 0.32\) \\ \cline{1-1} & Improve & \(\bigtriangleup 30.79\) & \(\bigtriangleup 6.08\) & \(\bigtriangleup 3.83\) & \(\bigtriangleup 3.00\) & \(\bigtriangleup 3.87\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: VRR (\(\%\)) of test samples (a lower value indicates better confidence estimation). "✗" indicates the model is not equipped with the proposed regularization (\(\lambda=0\)), and "✓" indicates it is. For the performance of Type III, please refer to Appendix B.6.
Figure 4: Accuracy estimation where one of the modalities is corrupted with noise.
We observe that the confidence estimations of current multimodal learning algorithms are typically unreliable and tend to rely on partial modalities. This further makes the learned models non-robust against modality corruption. Concretely, existing multimodal classifiers tend to be overconfident based on some modalities, and ignore the valuable evidence from other modalities, even though it might be critical for the decision. To solve this problem, we introduce a novel regularization technique that forces the model to estimate a calibrated predictive confidence. This technique can be naturally deployed into existing multimodal learning methods without modifying the main training process. We conduct comprehensive experiments which demonstrate the superiority of our method in classification in terms of both accuracy and calibration. The proposed method is the first attempt to calibrate the relationship between confidence and the number of modalities used in multimodal learning. We believe this research direction could benefit the multimodal learning community. In the current implementation, we employ sampling to construct the constraints. Although sampling is widely used and effective in machine learning, we will focus on more principled approximation strategies in the future.
## Acknowledgments
This work is jointly supported by the National Natural Science Foundation of China (Grant No. 61976151), the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045), and the A*STAR Central Research Fund. We gratefully acknowledge the support of the CAAI-Huawei MindSpore Open Fund1. The project was finished during an internship at Tencent AI Lab.
Footnote 1: [https://www.mindspore.cn/](https://www.mindspore.cn/)
|
2309.00926 | Time-bin entanglement at telecom wavelengths from a hybrid photonic
integrated circuit | Mass-deployable implementations for quantum communication require compact,
reliable, and low-cost hardware solutions for photon generation, control and
analysis. We present a fiber-pigtailed hybrid photonic circuit comprising
nonlinear waveguides for photon-pair generation and a polymer interposer
reaching 68dB of pump suppression and photon separation with >25dB polarization
extinction ratio. The optical stability of the hybrid assembly enhances the
quality of the entanglement, and the efficient background suppression and
photon routing further reduce accidental coincidences. We thus achieve a
96(-8,+3)% concurrence and a 96(-5,+2)% fidelity to a Bell state. The generated
telecom-wavelength, time-bin entangled photon pairs are ideally suited for
distributing Bell pairs over fiber networks with low dispersion. | Hannah Thiel, Lennart Jehle, Robert J. Chapman, Stefan Frick, Hauke Conradi, Moritz Kleinert, Holger Suchomel, Martin Kamp, Sven HΓΆfling, Christian Schneider, Norbert Keil, Gregor Weihs | 2023-09-02T12:34:24Z | http://arxiv.org/abs/2309.00926v1 | # Time-bin entanglement at telecom wavelengths from a hybrid photonic integrated circuit
###### Abstract
Mass-deployable implementations for quantum communication require compact, reliable, and low-cost hardware solutions for photon generation, control and analysis. We present a fiber-pigtailed hybrid photonic circuit comprising nonlinear waveguides for photon-pair generation and a polymer interposer reaching \(68\,\mathrm{dB}\) of pump suppression and photon separation with \(>25\,\mathrm{dB}\) polarization extinction ratio. The optical stability of the hybrid assembly enhances the quality of the entanglement, and the efficient background suppression and photon routing further reduce accidental coincidences. We thus achieve a \(\left(96^{+3}_{-8}\right)\) % concurrence and a \(\left(96^{+2}_{-5}\right)\) % fidelity to a Bell state. The generated telecom-wavelength, time-bin entangled photon pairs are ideally suited for distributing Bell pairs over fiber networks with low dispersion.
1Institut fur Experimentalphysik, Universitat Innsbruck, 6020 Innsbruck, Austria
2Faculty of Physics & Vienna Doctoral School in Physics & Vienna Center for Quantum Science and Technology, University of Vienna, 1090 Vienna, Austria
3Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut, 10587 Berlin, Germany
4Optical Nanomaterial Group, Institute for Quantum Electronics, Department of Physics, ETH Zurich, 8093 Zurich, Switzerland
5Technische Physik, Universitat Wurzburg, 97074 Wurzburg, Germany
6Institute of Physics, University of Oldenburg, 26129 Oldenburg, Germany
*[email protected]
## 1 Introduction
As data traffic continues to grow, the cryptography community is increasingly aware of the importance of methods and devices ensuring efficient and secure data transmission. For the required security, the conventional public key infrastructure has been shown to be unsuitable in the long term [1]. Quantum communication, in contrast, provides information-theoretical security when implemented correctly [2]. A multitude of implementations have been demonstrated in field trials using metro networks [3, 4, 5]. Among those, the majority do not rely on entanglement, and the experimental setups have the size of a computer rack or larger. For mass-deployment and practical implementation, however, quantum communication systems must become more compact, cost-effective and scalable. This can be achieved via quantum system-on-chip modules [6, 7, 8]. Hybrid photonic integrated circuits (PICs), which additionally allow for individual optimization of dissimilar components, have recently received much attention in quantum photonics [9, 10], where challenges ranging from single-photon generation to reconfigurable photon routing and high-efficiency detection have particularly demanding requirements that cannot be fulfilled by a single photonic platform.
In addition to scaling up quantum communication systems, one must strive for more than conditional security. It will be difficult to certify all quantum communication source and receiver modules and to ensure their long-term integrity. Therefore, entanglement-based quantum key distribution (QKD) schemes are promising, especially when used in future device-independent schemes that rely on the principle of non-locality and can generate secure keys even for untrusted devices [11, 12, 13, 14].
For practical QKD, the transmitted qubits must be compatible with the existing telecom infrastructure and also preserve the entanglement en route. To this end, time-bin entanglement is especially well suited as it does not suffer from decoherence due to polarization mode dispersion [15, 16]. In this scheme, a photon pair is created in a coherent superposition of two time bins, and the communicating parties each receive one of the photons, allowing them to test the quality of the entanglement and generate bits of a shared secret key. A number of experiments have demonstrated this form of entanglement as a proof-of-principle for entanglement-based QKD [17], using integration-ready sources [18, 19, 20], generating on-demand time-bin qubits [21], or achieving a distance record [22]. However, few telecom time-bin entanglement sources have been realized on-chip or in optical fiber [23, 24, 25].
We present in this article an on-chip, partially fiber-pigtailed source of time-bin entangled photon pairs in the telecom wavelength range working at room temperature. The photon pairs are generated in a nonlinear crystal made of aluminum gallium arsenide, called a Bragg-reflection waveguide (BRW) [26, 27, 28]. This source is integrated with a polymer chip, the PolyBoard, which hosts all passive optical components including a long-pass filter (LP) showing \(68\,\mathrm{dB}\) of pump suppression, a polarizing beam splitter achieving \(>25\,\mathrm{dB}\) polarization extinction ratio (_PER_), and specially designed grooves for fiber pigtailing [29, 30, 31]. We achieve a coincidence rate of \(460\,\mathrm{Hz}\) per mW continuous-wave (CW) external pump power between the signal and idler photons without correcting for fiber loss or detector efficiency. In the time-bin entanglement scheme, this results in photon pair rates of \(1.4\,\mathrm{Hz}\) per mW of external pump power, a concurrence of \(\left(96^{+3}_{-8}\right)\%\) and a fidelity of \(\left(96^{+2}_{-5}\right)\%\) to the \(\left|\Phi^{+}\right\rangle\) Bell state.
The article is structured as follows: After a brief explanation of the time-bin scheme, both the BRW and the PolyBoard are introduced in more detail, followed by a section on their hybrid integration and assembly process. We then perform a classical characterization of the PIC and finally present the time-bin measurements, including state tomography using maximum likelihood estimation [32, 33].
## 2 Materials and methods
We implement the time-bin entanglement as illustrated in Fig. 1. A coherent superposition of time bins is prepared by passing a pulsed Ti:Sapphire laser with \(76\,\mathrm{MHz}\) repetition rate and \(0.8\,\mathrm{nm}\) bandwidth emitting at \(767\,\mathrm{nm}\) through an asymmetric free-space Michelson interferometer. This splits each pulse into an early and a late time bin separated by a \(3\,\mathrm{ns}\) delay and the pulse pair then travels to the hybrid PIC. One photon pair is produced with probability \(p\ll 1\) by either the early or late pump pulse, separated by polarization and routed to two optical fibers on the hybrid PIC. The photons are sent to two parties, Alice and Bob, who analyze the entanglement via interferometers with the same delay as the pump interferometer. In our setup, all three interferometers are folded into the same physical interferometer. Finally, the photons are measured by superconducting-nanowire-single-photon detectors (SNSPDs) with \(40\,\mathrm{ps}\) timing jitter and \(>60\,\%\) detection efficiency. A triple coincidence between the pump pulse and the photons detected by Alice and Bob is computed via a time tagger with \(10\,\mathrm{ps}\) rms jitter and \(2\,\mathrm{ns}\) dead time.
The hybrid PIC comprises a nonlinear BRW with a high \(\chi^{(2)}\) nonlinear coefficient enabling efficient parametric down-conversion (PDC) [34, 35], and the PolyBoard, a passive optical interposer. The assembly process and final chip are shown in Fig. 2.
To provide waveguiding and modal phase-matching, the BRW is made up of layers with different aluminum concentrations and etched into a ridge structure. By carefully engineering the layer thicknesses and aluminum concentrations [36], and by reducing the waveguide ridge sidewall roughness [37], we achieve high photon-pair rates of up to \((8.9\pm 0.5)\cdot 10^{4}\) Hz per mW of external pump power, which corresponds to about \(4\cdot 10^{5}\,\mathrm{Hz}\) per mW of internal pump power [38]. The photon pairs generated in the telecom wavelength range benefit from minimal signal attenuation
in the existing fiber infrastructure. Because of their broad-band (\(\sim\)100 nm) emission, BRWs can also be considered for the distribution of entanglement in multiple telecom channels. In addition to being correlated in their time of creation, which is used for time-bin entanglement, the two photons of a pair are anti-correlated in wavelength and polarization, opening up the possibility for other forms of entanglement or even hyperentanglement. Achievements realized with BRWs thus far include the generation of polarization entanglement [39, 40, 41], energy-time entanglement [42], and free-space time-bin entanglement [20], as well as the integration of an internal pump laser [43, 44] and with it the demonstration of difference-frequency generation [45]. The photons generated by PDC in the BRW are orthogonally polarized and collinear with the pump light. It is therefore necessary to spectrally filter the pump and convenient to separate the photons with a polarizing beamsplitter. The required components are technologically challenging to realize and therefore we employ a hybrid integration with polymer waveguide circuits [29].
Polymer-based PICs feature lower production and material cost than standard semiconductor platforms [47, 48], a large transparency window, and an effective index that closely matches silica fibers allowing low-loss pigtailing. The presented interposer features two custom-made, dielectric thin-film elements, a LP to reject the pump light and a polarizing beam splitter (PBS) redirecting orthogonal polarizations to separate waveguides. The input waveguide facet is diced for end-facet coupling to the BRW, whereas the output waveguides are directly pigtailed with standard polarization maintaining (PM) fibers in a U-groove arrangement [29], which improves the mechanical stability.
Hybrid integration, as employed between the BRW and the PolyBoard in this work, combines the strengths of both material platforms and can also introduce new features missing in the monolithic counterparts. The PolyBoard has proven its versatility by implementing on-chip free space sections [30], thermal phase shifters or switches [31], tunable distributed Bragg-reflector lasers [49, 50], on-chip isolators and circulators [51, 52], and various integrated circuits for quantum photonics [46].
When interfacing dissimilar platforms, the mode field overlap is crucial for the coupling loss. The complicated layer structure of the BRW gives rise to a non-rotationally symmetric mode with a shape resembling two stacked cigars. Thus, the current design yields a mode field overlap of \(\sim\)55 % with the near-Gaussian mode of the PolyBoard (for more details see Appendix A and B), but mode-engineering via taper structures can boost the overlap significantly. The assembly of
Figure 1: **Time-bin entanglement scheme.** Our setup includes the preparation of pump pulse pairs, the PIC, where the telecom photon pairs are created, filtered, separated and coupled into fiber, as well as the stations of Alice and Bob, where the entanglement is analyzed. These consist of interferometers with variable phase shift (PS) and single photon detectors.
the hybrid PIC is a multi-step process with active alignment using the telecom laser transmission signal and is sketched in Fig. 2 (a)-(e).
## 3 Results
We perform a series of classical characterization measurements to evaluate the performance of the individual components of the PIC. The results provide insights in addition to the coincidence measurements at the few-photon level and, furthermore, are less sensitive to noise.
To this end, we couple a CW laser into the diced facet of the PolyBoard and measure the transmission of both output fibers for transversal-electrically (TE) and transversal-magnetically (TM) polarized input light while scanning the laser wavelength (see Fig. 3). For both outputs, we find a flat transmission curve for the favored polarization with an average loss of (\(6.54\pm 0.08\)) dB for the TE and (\(9.1\pm 0.1\)) dB for the TM path. Note that this measurement also includes the input coupling loss of \(0.5\,\mathrm{dB}\) to \(1\,\mathrm{dB}\). The lower transmission of the TM fiber is ascribed to a slight out-of-plane deflection caused by a non-optimal angle of the inserted PBS. Further, we compute the polarization extinction ratios from the transmission in the orthogonal polarization, yielding \(\mathit{PER}>30\,\mathrm{dB}\) for the reflection and \(\mathit{PER}>25\,\mathrm{dB}\) for the transmission port of the PBS.
The suppression of pump light by the LP cannot be measured directly at the PolyBoard but is inferred from a separate test structure. Using a white light source, we find a suppression exceeding \(40\,\mathrm{dB}\) for the range of \(700\,\mathrm{nm}\) to \(850\,\mathrm{nm}\), limited only by the noise floor of our detector. Employing a laser diode emitting at \(785\,\mathrm{nm}\), we verify a suppression of (\(68\pm 1\)) dB, while the loss of the LP at telecom wavelengths amounts to \(\sim 0.9\,\mathrm{dB}\) (for more details see Appendix A).
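Both the PER and the pump suppression reduce to a power ratio expressed in decibels; the following sketch illustrates the conversion, with hypothetical placeholder powers rather than measured values:

```python
import numpy as np

def extinction_db(p_pass, p_block):
    """Extinction in dB between a transmitted and a suppressed optical power."""
    return 10.0 * np.log10(p_pass / p_block)

per = extinction_db(2.2e-1, 2.2e-4)         # PER of one PBS port: 30 dB
suppression = extinction_db(1.0, 1.6e-7)    # LP pump rejection:  ~68 dB
```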
We conclude that the PolyBoard not only reduces size and cost of the implementation drastically but also provides high-performance polarization splitting and long-pass filtering that easily matches or even outperforms the characteristics of bulk elements.
Moving on to characterizing the quantum performance of our PIC, we pump PDC by coupling \(767\,\mathrm{nm}\) CW light into the BRW input facet. We measure a coincidence rate of \(460\,\mathrm{Hz}\) per mW external pump power between the signal and idler photons without correcting for fiber loss or detector efficiency. The coincidence rate is consistent with this BRW's stand-alone performance considering the losses expected from hybrid integration on the PIC described above.
Figure 2: **Assembly process and photograph of the hybrid PIC.** First, the PolyBoard is prepared by inserting the thin-film long-pass filter (LP) and polarizing beam splitter (PBS) in their pre-etched slots (a), installing the output fibers (b), optimizing all elements for transmission and securing them with UV-curing, index-matched adhesive (c). Next, using active alignment, the BRW is end-facet coupled to the PolyBoard and the interface secured with adhesive once the transmission is optimized (d). Finally, the newly formed hybrid PIC is mechanically stabilized by a common silicon mount with a \(7\times 10\,\mathrm{mm}\) footprint (e). A photograph (reprinted with permission from [46]) of the final assembly used in this work (f).
For the time-bin entanglement measurement and state tomography, we follow the methods described by James et al. [32] and Takesue et al. [33]. We reference the arrival times of photons at Alice's and Bob's detectors to a trigger given by a photodiode installed in the path of the pulsed pump laser. Due to the limited transmission of the free-space interferometers of \(5\,\%\) to \(7\,\%\), we measure a total coincidence rate of about \(1.4\,\mathrm{Hz}\) per mW external pump power. Correcting for the loss in the two telecom interferometers, we obtain a coincidence rate of \(290\,\mathrm{Hz}\) to \(560\,\mathrm{Hz}\). By rotating the phase plate in one of the interferometers, we reveal the interference in the central time bin with a \((91\pm 5)\,\%\) visibility.
The triple coincidences between the trigger and Alice's and Bob's detectors yield a 2D histogram, an example of which is shown in Appendix C. We perform a measurement for each of the four states \(\ket{++}\), \(\ket{+L}\), \(\ket{L+}\), and \(\ket{LL}\), where \(\ket{+}=1/\sqrt{2}\left(\ket{1}+\ket{2}\right)\) and \(\ket{L}=1/\sqrt{2}\left(\ket{1}+i\ket{2}\right)\), by rotating the phase plates in Alice's and Bob's interferometers. From these, we obtain the coincidence counts (without correcting for accidentals) for projections onto 16 different two-photon states serving as input for the state tomography. As the linear reconstruction of the density matrix leads to negative eigenvalues and therefore an unphysical state, we employ a maximum likelihood estimation to recover the density matrix shown in Fig. 4 (values can be found in Appendix D). We obtain a concurrence of \(\left(96^{+3}_{-8}\right)\,\%\), a \(\left(96^{+2}_{-5}\right)\,\%\) fidelity to the \(\ket{\Phi^{+}}\) Bell state, and a Bell S-parameter of \(2.70^{+0.09}_{-0.33}\). The uncertainties are derived using a Monte Carlo simulation where we create \(10^{4}\) sets of coincidence counts with Poissonian distribution around the actually measured counts and perform the maximum likelihood estimation for each. The results demonstrate both strong entanglement and violation of the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality. The resulting nonlocal correlations are a useful resource for quantum communication tasks.
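For reference, the fidelity and the Wootters concurrence quoted above can be computed from a reconstructed density matrix with a few lines of numpy; the sketch below implements the standard formulas (not our full tomography pipeline), and the Monte Carlo error bars then follow from re-running the maximum likelihood fit on Poisson-resampled counts:

```python
import numpy as np

def fidelity_phi_plus(rho):
    """Fidelity <Phi+|rho|Phi+> with |Phi+> = (|11> + |22>)/sqrt(2)."""
    phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    return float(np.real(phi.conj() @ rho @ phi))

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)                      # sigma_y (x) sigma_y
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ flip @ rho.conj() @ flip)))
    lam = np.sort(lam)[::-1]                    # descending order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])
```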
Figure 3: **Transmission measurements of the PolyBoard interposer.** The input polarization is set to transversal-electric (TE) and transversal-magnetic (TM) and the transmission is measured at both output paths while the laser wavelength is scanned. The polarizing beam splitter (PBS) predominantly transmits TE-polarized light and reflects TM-polarized light. For both the TE path (a) and the TM path (b), the polarization extinction ratio is calculated as the difference between TE and TM transmission. The spectral dependence of the suppressed polarization is ascribed to chromatic effects in the PBS thin-film layer stack.
## 4 Discussion and conclusion
The mass-deployment of entanglement-based QKD transceivers requires a high level of integration, while components must comply with the challenging operation in a real-life environment based on noisy, dispersive fiber networks. The pair emission rate of our PIC is consistent with previous experiments using BRWs in our group [20] and comparable to those of others [28]. Further enhancement of the coincidence rates involves optimizing the design of the hybrid PIC. First, engineering the BRW's and the PolyBoard's mode field at the intersection using tapered waveguides reduces loss due to mode mismatch. Second, employing the latest generation of BRWs (featuring photon-pair generation rates >60 times higher than the sample used here [38]) significantly relaxes the requirements for low loss down the line. In the current implementation, we identify the free-space interferometers as the dominating source of loss and therefore as the bottleneck on our way to efficiently produce entangled photon pairs. Actively stabilizing the interferometers can improve the spatial overlap of beams as well as the temporal overlap of pulses and counteract some of the degradation in the classical visibilities [53]. However, the free-space interferometers are not at the core of this work. Exchanging them for chip- or fiber-based counterparts may not only improve the efficiency but also promote miniaturization further.
In contrast to the modest coincidence rates, the demonstrated entanglement is very strong with a concurrence of \(\left(96^{+3}_{-8}\right)\%\) and a fidelity of \(\left(96^{+2}_{-5}\right)\%\) to a Bell state. Lower uncertainties can be achieved with higher count rates once the interferometers have been replaced. Already now, our PIC compares well with other telecom time-bin entanglement demonstrations, including the \(\left(88.9\pm 1.8\right)\%\) concurrence and \(\left(94.2\pm 0.9\right)\%\) fidelity measured for a bare BRW [20] in free space, the \(\left(74.1\pm 4.8\right)\%\) coincidence fringe visibility found for a fiber-based approach [23], and the \(\left(91.0\pm 0.7\right)\%\) fidelity quoted for an all on-chip implementation [25].
We attribute the increased purity of the entanglement to the hybrid integration of BRW and PolyBoard, as the PIC offers optical stability and the end-facet coupling reduces the amount of unwanted photoluminescence picked up from the BRW [54]. Moreover, the _PER_ of \(>25\,\mathrm{dB}\) reduces the rate of the accidental coincidences and the strong suppression of the LP of \(\left(68\pm 1\right)\mathrm{dB}\) obviates the need for additional bandwidth filtering or background suppression. By adding thin-film elements for band-pass filtering or chromatic pre-compensation, we expect to reduce effects of dispersion and thereby improve the temporal overlap of the pulses. Finally, balancing
Figure 4: **Density matrix reconstructed via maximum likelihood estimation.** The real and imaginary parts of the density matrix demonstrate the high degree of entanglement and fidelity to the \(\left|\Phi^{+}\right\rangle\) Bell state.
the loss of both polarization modes will enhance the entanglement further.
To conclude, we demonstrated the hybrid integration of a BRW with the PolyBoard interposer to produce high-quality time-bin entangled photon pairs in the telecom wavelength range. Our results testify to the adequacy of the BRW-PolyBoard PIC for miniaturized quantum communication. We identify the main causes of photon loss and outline a feasible route towards a second generation of significantly enhanced hybrid PICs. Here, the most notable upgrades include transitioning to fiber- or chip-based interferometers, engineering the mode field overlap at the chip interface and employing the already available and greatly improved BRW structures.
## Appendix A PolyBoard
The PolyBoard is fabricated from two different polymers which are iteratively applied via spin-coating on a 4-inch silicon wafer and further processed using photo-lithography and dry-etching. The waveguides have a square cross-section of \(3.2\,\mu\mathrm{m}\times 3.2\,\mu\mathrm{m}\) with an index contrast of \(\Delta n=0.03\), resulting in a simulated mode field that is rotationally symmetric with a \(1/e^{2}\) diameter of \(3.9\,\mu\mathrm{m}\). The effective index for the transversal-electric (TE) mode is \(n_{\mathrm{eff}}^{\mathrm{TE}}=1.463\), whereas the transversal-magnetic (TM) mode has \(n_{\mathrm{eff}}^{\mathrm{TM}}=1.462\), resulting in a birefringence of \(\sim 1\cdot 10^{-3}\). Using cut-back measurements, we evaluate a propagation loss of \(\sim 0.9\,\mathrm{dB/cm}\) for this wafer in separate test structures.
In the presented PolyBoard, thin-film elements (TFE) are used to realize wavelength filtering and polarization splitting, which are challenging to implement monolithically as they require a large footprint or exhibit high losses and low extinction ratios. During fabrication, slots of a few-micrometer thickness are etched into the PolyBoard and are later equipped with the matching TFE [30]. Because of the small index contrast, the optical loss caused by the unguided propagation through the etched slot is limited and further minimized by appropriate waveguide tapering on each side of the slot. After inserting the TFEs, they are secured with an index-matched and UV-curable adhesive.
To assess the performance of the long-pass (LP) filter, we insert it into a test structure depicted in the inset of Fig. 5 (a). By comparing the transmission of a straight reference waveguide and a waveguide passing the LP, we evaluate the suppression. We perform the measurement for three different light sources: a supercontinuum white light laser (_NKT Photonics, SuperK_), a laser diode emitting at \(785.05\,\mathrm{nm}\) with high spectral power density (_Integrated optics, Matchbox 785nm SLM_), and a tunable laser (_Agilent, 8164B with 81635A and 81689B_) covering the telecom C-band. In this way, we analyze the transmission for telecom wavelengths, the maximum suppression close to the wavelength used to pump the parametric down-conversion process, and the bandwidth of the filter. We find a suppression of more than \(40\,\mathrm{dB}\) for \(700\,\mathrm{nm}\) to \(850\,\mathrm{nm}\) and a maximum suppression of \((68\pm 1)\,\mathrm{dB}\) at \(785.05\,\mathrm{nm}\).
## Appendix B Modes and coupling loss
The telecom wavelength optical modes of the Bragg-reflection waveguide (BRW) and the PolyBoard have very different shapes, as shown in Fig. 6. Upon assembly of the photonic integrated circuit, any displacement of the facets with respect to each other leads to a considerable coupling loss, as shown in Fig. 7. For perfect alignment we expect \(\sim 2.6\,\mathrm{dB}\) of coupling loss, with a horizontal (vertical) tolerance of \(\sim 1.1\,\mu\mathrm{m}\) (\(\sim 0.7\,\mu\mathrm{m}\)) causing an additional \(1\,\mathrm{dB}\) of loss, or \(\sim 1.8\,\mu\mathrm{m}\) (\(\sim 1.3\,\mu\mathrm{m}\)) causing \(3\,\mathrm{dB}\).
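To build intuition for these tolerances, the displacement dependence can be approximated with the textbook overlap of two aligned Gaussian modes. The Python sketch below is only illustrative: the BRW mode is double-cigar shaped rather than Gaussian, so the effective BRW waist used here is a hypothetical stand-in (not a value from this work) and the Gaussian picture underestimates the minimum loss of the real, structured mode.

```python
import numpy as np

def gaussian_coupling_loss_db(w1_um, w2_um, offset_um):
    """Coupling loss (dB) between two aligned Gaussian modes with 1/e^2
    waist radii w1, w2 and a lateral offset (all in micrometers).
    Standard mode-overlap result for matched wavefronts."""
    eta = (2.0 * w1_um * w2_um / (w1_um**2 + w2_um**2))**2 \
        * np.exp(-2.0 * offset_um**2 / (w1_um**2 + w2_um**2))
    return -10.0 * np.log10(eta)

# PolyBoard waist radius from the 3.9 um 1/e^2 mode-field diameter.
w_poly = 3.9 / 2.0
# Hypothetical effective BRW waist -- chosen only to illustrate how a
# mode-size mismatch plus displacement raises the loss.
w_brw = 2.6

for d_um in (0.0, 1.1, 1.8):
    loss = gaussian_coupling_loss_db(w_poly, w_brw, d_um)
    print(f"offset {d_um:.1f} um -> {loss:.2f} dB")
```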
Figure 5: **Transmission measurements for long-pass (LP) filter at test structure.** Filter transmission over a broad spectrum using a white light source and a tunable C-Band laser (a) and a laser diode emitting at 785.05 nm with high spectral power density (SPD) (b). The filter performance is extracted from the transmission of a straight reference waveguide (WG) and a waveguide passing the filter slot with the LP. The layout of the test structure is shown in the inset of (a).
Figure 6: **Simulations of the waveguide modes of BRW and PolyBoard.** The electric field absolute value of the 1550 nm TE-polarized double cigar shaped mode of the BRW (a) and the rotationally symmetric mode of the PolyBoard waveguide (b). The TM-polarized modes are not shown here as they look very similar.
## Appendix C 2D histogram and its interpretation
One sample of the measurements recorded for the time-bin tomography is shown in Fig. 8 (a). Its interpretation is illustrated in Fig. 8 (b). A detection time \(t_{-1}\) (\(t_{+1}\)) represents the single photon state \(\ket{1}\) (\(\ket{2}\)), where a photon measured at either detector took the short (long) path through both the pump and the analysis interferometers. Photons detected at \(t_{0}\) took the short path once and the long path once. Integrating over the antidiagonals of the histogram gives insight into the two-photon states in five consecutive time-bins. These are illustrated as peaks in Fig. 8 (b):
\[\begin{array}{ll}\text{Peak \#1}&\ket{11}\\ \text{Peak \#2}&\ket{1+},\,\ket{1L},\,\ket{+1},\,\ket{L1}\\ \text{Peak \#3}&\ket{++},\,\ket{+L},\,\ket{L+},\,\ket{LL}\\ \text{Peak \#4}&\ket{2+},\,\ket{2L},\,\ket{+2},\,\ket{L2}\\ \text{Peak \#5}&\ket{22}\end{array}\]
Here \(\ket{+}=1/\sqrt{2}\left(\ket{1}+\ket{2}\right)\) and \(\ket{L}=1/\sqrt{2}\left(\ket{1}+i\ket{2}\right)\). The states \(\ket{12}\) and \(\ket{21}\) are not measured if a photon pair is created in either the early or the late pump pulse, as is the case here.
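As a minimal illustration of this bookkeeping, the following Python sketch collapses a toy \(3\times 3\) coincidence histogram (detection-time bins \(t_{-1},t_{0},t_{+1}\) for each detector; the counts are invented for illustration only) into the five peaks by summing its antidiagonals.

```python
import numpy as np

# Toy coincidence histogram: rows/columns index the detection-time bins
# (t_-1, t_0, t_+1) of the two detectors; entries are invented counts.
H = np.array([[120,  40,   2],
              [ 45, 310,  38],
              [  3,  42, 115]])

# Flipping the rows turns antidiagonals (constant i + j) into ordinary
# diagonals, so each peak is a plain diagonal sum.
peaks = [H[::-1, :].diagonal(k).sum() for k in range(-2, 3)]
for n, counts in enumerate(peaks, start=1):
    print(f"Peak #{n}: {counts} counts")
```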
## Appendix D Density matrix and its eigenvalues
The density matrix \(\rho\) reconstructed via maximum likelihood estimation has positive eigenvalues, demonstrating that it represents a physical state.
Figure 7: **Coupling loss between BRW and PolyBoard waveguides.** The minimum achievable coupling loss is \(\sim\)2.6 dB. Due to the layer structure of the BRW, even a small displacement in the vertical direction leads to a significant increase in coupling loss.
\[\rho=\begin{pmatrix}0.4961+0.j&0.1235+0.0004j&0.0211+0.0480j&0.4765-0.0703j\\ 0.1235-0.0004j&0.0307+0.j&0.0053+0.0119j&0.1185-0.0179j\\ 0.0211-0.0480j&0.0053-0.0119j&0.0055+0.j&0.0135-0.0491j\\ 0.4765+0.0703j&0.1185+0.0179j&0.0135+0.0491j&0.4676+0.j\end{pmatrix}\]
Eigenvalues = (9.99999910e-01+4.48772161e-18j, 9.02487908e-08+9.08286447e-18j, 1.11913599e-10-7.45628785e-18j, 1.22826754e-13+2.73128400e-18j)
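These statements can be checked directly from the printed matrix. The Python sketch below verifies the unit trace and the (numerically) non-negative spectrum, and also evaluates the fidelity \(\left\langle\Phi^{+}\right|\rho\left|\Phi^{+}\right\rangle\); the resulting number is a by-product of the matrix as printed at four-digit precision, not a value quoted from the text.

```python
import numpy as np

# Density matrix reconstructed via maximum likelihood (as printed above).
rho = np.array([
    [0.4961+0.0000j, 0.1235+0.0004j, 0.0211+0.0480j, 0.4765-0.0703j],
    [0.1235-0.0004j, 0.0307+0.0000j, 0.0053+0.0119j, 0.1185-0.0179j],
    [0.0211-0.0480j, 0.0053-0.0119j, 0.0055+0.0000j, 0.0135-0.0491j],
    [0.4765+0.0703j, 0.1185+0.0179j, 0.0135+0.0491j, 0.4676+0.0000j]])

# A physical state is Hermitian, has unit trace and eigenvalues >= 0
# (up to the rounding noise of the four printed digits).
print("trace      :", np.trace(rho).real)
print("eigenvalues:", np.linalg.eigvalsh(rho))

# Fidelity to |Phi+> = (|11> + |22>)/sqrt(2).
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
print("fidelity   :", np.real(phi_plus.conj() @ rho @ phi_plus))
```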
Funding.The authors acknowledge funding by the Uniqorn project (Horizon 2020 grant agreement no. 820474), the Marie Sklodowska-Curie grant agreement No 956071 (AppQInfo) and the BeyondC project (FWF project no. F7114).
Author contributions.Conceptualization, H.T., L.J., H.C., M.Kl., R.C., S.F., N.K., G.W.; Formal analysis, H.T., L.J., R.C., S.F.; Methodology, H.T., L.J., H.C., M.Kl., R.C., S.F., N.K., G.W; Investigation, H.T., L.J., R.C., S.F.; Resources, H.S., M.Ka., S.H., C.S.; Supervision, M.Kl., R.C., S.F., C.S., N.K., G.W.; Writing - original draft, H.T., L.J.; Writing - review & editing, All Authors; Funding acquisition, C.S., N.K., G.W.
Disclosures.The authors have nothing to disclose.
Data availability.Data underlying the results presented in this paper are available at 10.5281/zenodo.8059483.
|
2310.12299 | Instantaneous Frequency Estimation in Unbalanced Systems Using Affine
Differential Geometry | The paper discusses the relationships between electrical and affine
differential geometry quantities, establishing a link between frequency and
time derivatives of voltage, through the utilization of affine geometric
invariants. Based on this link, a new instantaneous frequency estimation
formula is proposed, which is particularly suited for unbalanced and
single-phase systems. Several examples as well as measurements based on two
real-world events illustrate the findings of the paper. | Ali Alshawabkeh, Georgios Tzounas, Angel Molina-Garcia, Federico Milano | 2023-10-18T20:05:06Z | http://arxiv.org/abs/2310.12299v2 | # Instantaneous Frequency Estimation in Unbalanced Systems Using Affine Differential Geometry
###### Abstract
The paper discusses the relationships between electrical quantities, namely voltages and frequency, and affine differential geometry ones, namely affine arc length and curvature. Moreover, it establishes a link between frequency and time derivatives of voltage, through the utilization of affine differential geometry invariants. Based on this link, a new instantaneous frequency estimation formula is proposed, which is particularly suited for unbalanced systems. An application of the proposed formula to single-phase systems is also provided. Several numerical examples based on balanced, unbalanced, as well as single-phase systems illustrate the findings of the paper.
Frequency estimation, affine differential geometry, instantaneous frequency, unbalanced systems, curvature, phase-locked loop (PLL).
## I Introduction
### _Motivation_
On-line frequency estimation in electric power systems is known to be a complex problem, especially in conditions characterized by the presence of harmonics and imbalances of amplitude or phase angle within the measured signals. Aiming in particular to tackle the complexities posed by unbalanced systems, this paper introduces a technique for online frequency estimation, based on the principles of affine differential geometry.
### _Literature Review_
The problem of frequency estimation has been studied for many years and several solution approaches have been reported in the literature, e.g. see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. These approaches rely on a variety of methods, including phase-locked loops (PLLs) [1, 2], discrete Fourier transform [3, 4], Kalman filtering [5, 6], least-squares [7, 8], adaptive notch filters [9, 10], etc.
For grid synchronization and control applications in particular, PLLs are a popular solution due to their performance characteristics, straightforward structure, and practical implementation. Three-phase PLLs, for example, are widely utilized to provide on-line phase and frequency estimations in grid-connected power converters. With this regard, a conventional PLL configuration in three-phase system applications is the synchronous reference frame (SRF) PLL, which relies on transforming input voltages to the \(\mathrm{dq}\) synchronous reference frame and on regulating the frame's angular position so that either the \(\mathrm{d}\)- or \(\mathrm{q}\)-axis component is zero. The analogue of the SRF-PLL for single-phase systems is the quadrature signal generation (QSG)-based PLL. Given a single-phase voltage signal, a QSG-PLL defines a second dimension through a fictitious quadrature signal, required to enable the application of the Park transform (and thus the formulation of \(\mathrm{dq}\)-axis voltage components). The simplest approach to do that is using a _transport delay_ of \(T/4\), where \(T\) is the period of the fundamental frequency, e.g. see [11].
Other approaches are based on the inverse Park transform [12], the Hilbert transform [13], and on second-order generalized integrators [14]. Although they provide robust frequency estimations under balanced conditions, they are also known to perform poorly for unbalanced systems, wherein they often result in estimations with sinusoidal ripple errors [15, 16, 17, 18, 19]. Reducing the bandwidth helps mitigate this issue and refine accuracy, but also compromises dynamic performance [20]. In this regard, careful tuning of PLL parameters is essential to achieve good trade-offs between dynamic performance and estimation accuracy. Efforts to improve the performance of PLLs under unbalanced conditions include, among other studies, [17, 21, 1, 22].
PLLs belong to the broad family of time-domain methods. In this paper we also focus on time-domain methods but approach the problem of frequency estimation from an unconventional perspective, that is based on the theory of differential geometry. The starting assumption is that any voltage vector can be perceived as the velocity of a point on a space curve and, as such, be analyzed using differential geometrical invariants. In our recent work on the topic, we described the definition of these curves in a Euclidean space and, by applying the Frenet-Serret formulas, we derived a correspondence between curvature and instantaneous electrical frequency [23, 24, 25, 26]. Despite providing accurate frequency estimations for balanced systems, the curvature obtained in these works is time-varying in stationary unbalanced operating conditions, a result that clearly does not align well with the notion of angular frequency of stationary ac signals.
In this paper, we aim at solving this issue through an alternative theory of differential geometry of curves, namely through _affine differential geometry_. This theory has found applications in various areas, such as control of mechanical systems [27], computer vision [28], and motion identification [29]. But there has been, to the best of our knowledge, no application to power system analysis or frequency estimation.
### _Contributions_
The specific contributions of the paper are as follows.
* A derivation of the expressions for the affine arc length and affine curvature in terms of the voltage of an ac system.
* An approximated yet accurate formula of the instantaneous angular frequency of a three-phase voltage as a function of affine geometrical invariants.
* A demonstration of the effectiveness of the proposed formula to serve as an instantaneous frequency estimation technique for unbalanced three-phase systems, as well as for single-phase systems.
The last two points are fully supported through a variety of examples, which are provided in the case study section. The examples show, in particular, that the proposed expression yields a more precise estimation of the instantaneous frequency in unbalanced systems, compared to PLLs and the Frenet-frame based method from [23].
### _Paper Organization_
The remainder of the paper is organized as follows. Section II provides an overview of basic concepts from affine geometry. These concepts are essential for the derivation of the theoretical results of the paper presented in Section III. Section IV tests the proposed approach through analytical examples, as well as through a case study based on a fully-fledged EMT model of the IEEE 39-bus system. Finally, Section V draws relevant conclusions.
## II Outlines of Affine Differential Geometry
This section provides a brief overview of affine quantities that are relevant for the derivation of the instantaneous frequency formula for ac systems presented in Section III. The interested reader can find a comprehensive presentation of the theory of affine differential geometry in [30].
Affine geometry can be defined as a Euclidean geometry without measuring distances or angles. In other words, it is a Euclidean geometry whose metric structure has been removed [29]. For the affine plane, a pair of non-collinear vectors determines a parallelogram whose area is given by the determinant of the two-by-two matrix formed by these vectors.
Let us consider a smooth parametric curve in the plane:
\[\boldsymbol{x}(t)=x_{1}(t)\,\boldsymbol{e}_{1}+x_{2}(t)\,\boldsymbol{e}_{2}\,, \tag{1}\]
where \(x_{1}(t),x_{2}(t):\mathbb{R}\mapsto\mathbb{R}\) are smooth and \(\boldsymbol{e}_{1}\) and \(\boldsymbol{e}_{2}\) form an orthogonal basis of the plane. Let us also assume that the curve \(\boldsymbol{x}\) does not have inflection points, that is, the magnitude of the bracket operator
\[[\dot{\boldsymbol{x}}(t),\ddot{\boldsymbol{x}}(t)]\neq 0,\quad\forall t\,, \tag{2}\]
never vanishes. In (2), \(\dot{\boldsymbol{x}}=d\boldsymbol{x}/dt\) and \(\ddot{\boldsymbol{x}}=d^{2}\boldsymbol{x}/dt^{2}\), and the bracket operator of two vectors, say \([\boldsymbol{a},\boldsymbol{b}]\), with \(\boldsymbol{a},\boldsymbol{b}\in\mathbb{R}^{2}\), is defined as:
\[[\boldsymbol{a},\boldsymbol{b}]=\det\begin{bmatrix}a_{1}&b_{1}\\ a_{2}&b_{2}\end{bmatrix}=a_{1}b_{2}-b_{1}a_{2}\,, \tag{3}\]
The _affine arc length_ or _equi-affine arc length_, indicated with \(\sigma\), is defined as:
\[\sigma(t)=\int_{t_{0}}^{t}[\dot{\boldsymbol{x}}(t),\ddot{\boldsymbol{x}}(t)] ^{1/3}dt\,, \tag{4}\]
or, equivalently:
\[\dot{\sigma}(t)=\frac{d\sigma(t)}{dt}=[\dot{\boldsymbol{x}}(t),\ddot{ \boldsymbol{x}}(t)]^{1/3}\,. \tag{5}\]
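For numerical work, the bracket (3) and the arc-length rate (5) translate directly into code. A minimal Python sketch, assuming the curve is sampled away from inflection points so that the bracket stays positive:

```python
import numpy as np

def bracket(a, b):
    """[a, b] = a1*b2 - b1*a2, the area form of eq. (3)."""
    return a[0] * b[1] - b[0] * a[1]

def sigma_dot(x_dot, x_ddot):
    """d(sigma)/dt = [dx/dt, d2x/dt2]^(1/3), eq. (5)."""
    return np.cbrt(bracket(x_dot, x_ddot))

# Sanity check on the unit circle x = (cos t, sin t): the bracket of
# velocity and acceleration is identically 1, hence sigma_dot = 1.
t = 0.3
x_dot = np.array([-np.sin(t), np.cos(t)])
x_ddot = np.array([-np.cos(t), -np.sin(t)])
print(sigma_dot(x_dot, x_ddot))  # -> 1.0
```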
A curve \(\boldsymbol{x}\) is said to be parameterized with \(\sigma\) if, for all \(\sigma\), it satisfies the condition:
\[[\boldsymbol{x}^{\prime}(\sigma),\boldsymbol{x}^{\prime\prime}(\sigma)]=1\,, \tag{6}\]
where \(\boldsymbol{x}^{\prime}=d\boldsymbol{x}/d\sigma\) is the _affine tangent_ and \(\boldsymbol{x}^{\prime\prime}=d^{2}\boldsymbol{x}/d\sigma^{2}\) is the _affine normal_. Applying the chain rule, \(\boldsymbol{x}^{\prime}\) becomes:
\[\boldsymbol{x}^{\prime}(\sigma(t))=\frac{d\boldsymbol{x}}{d\sigma}=\frac{d \boldsymbol{x}}{dt}\frac{dt}{d\sigma}=\frac{\dot{\boldsymbol{x}}(t)}{[\dot{ \boldsymbol{x}}(t),\ddot{\boldsymbol{x}}(t)]^{1/3}}\,, \tag{7}\]
and, differentiating (6) with respect to \(\sigma\), one obtains:
\[[\boldsymbol{x}^{\prime}(\sigma),\boldsymbol{x}^{\prime\prime\prime}(\sigma) ]=0\,. \tag{8}\]
This result implies that \(\boldsymbol{x}^{\prime}\) and \(\boldsymbol{x}^{\prime\prime\prime}\) are linearly dependent, leading to the relationship:
\[\boldsymbol{x}^{\prime\prime\prime}(\sigma)=-\kappa_{a}(\sigma)\, \boldsymbol{x}^{\prime}(\sigma)\,, \tag{9}\]
where \(\kappa_{a}\) is the _affine curvature_ or _equi-affine curvature_ of the curve \(\boldsymbol{x}\) and is defined as:
\[\kappa_{a}(\sigma)=[\boldsymbol{x}^{\prime\prime}(\sigma),\boldsymbol{x}^{ \prime\prime\prime}(\sigma)]\,. \tag{10}\]
The affine curvature is represented by the area of the parallelogram formed by the vectors \(\boldsymbol{x}^{\prime\prime}\) and \(\boldsymbol{x}^{\prime\prime\prime}\).
It is relevant for the following discussion on the estimation of the instantaneous frequency to note that for non-singular conic sections, \(\kappa_{a}\) is constant, as follows [30, 31]:
* for \(\kappa_{a}=0\), the curve is a parabola;
* for \(\kappa_{a}>0\), the curve is an ellipse;
* for \(\kappa_{a}<0\), the curve is a hyperbola.
In the next section, we consider the specific case of the ellipse, that is \(\kappa_{a}>0\).
## III Voltage in the Affine Plane
We adopt the assumption made in [25], that is, the magnetic flux \(\boldsymbol{\varphi}\) is the _position_ of a point on a space curve in generalized coordinates and, from Faraday's law, the _speed_ of such a point is the voltage, as follows:
\[\boldsymbol{\varphi}(t)\equiv-\boldsymbol{x}(t)\quad\Rightarrow\quad \boldsymbol{v}(t)=-\dot{\boldsymbol{\varphi}}(t)\equiv\dot{\boldsymbol{x}}(t)\,. \tag{11}\]
Reference [23] shows that one can express electrical quantities such as voltage and current in terms of the coordinates of the Frenet frame and of geometric invariants such as arc length, curvature and torsion. In the same vein, but using the definitions of coordinates, arc length and curvature given by affine differential geometry, this section derives a new formula for the instantaneous frequency of electrical quantities. In the remainder of this section, we discuss exclusively voltages, but the same procedure and results can be obtained using currents. We consider two scenarios, namely unbalanced three-phase systems and single-phase systems.
### _Three-Phase Unbalanced Voltages_
Let us assume that the phases \(\mathrm{abc}\) of a three-phase voltage \(\mathbf{v}(t)\) constitute a set of orthogonal coordinates:
\[\mathbf{v}(t)=v_{\mathrm{a}}(t)\,\mathbf{e}_{\mathrm{a}}+v_{\mathrm{b}}(t)\,\mathbf{e}_{ \mathrm{b}}+v_{\mathrm{c}}(t)\,\mathbf{e}_{\mathrm{c}}\,. \tag{12}\]
In order to employ the theory described in the previous section, which applies to curves in two dimensions, we first need to transform the voltage \(\mathbf{v}(t)\) into the following shape:
\[\mathbf{v}(t)=v_{1}(t)\,\mathbf{e}_{1}+v_{2}(t)\,\mathbf{e}_{2}\,. \tag{13}\]
This is conveniently achieved by applying the Clarke transform to (12) and taking the \(\alpha\) and \(\beta\) components, as follows:
\[\begin{bmatrix}v_{\alpha}(t)\\ v_{\beta}(t)\end{bmatrix}=\frac{2}{3}\begin{bmatrix}1&-\frac{1}{2}&-\frac{1}{ 2}\\ 0&\frac{\sqrt{3}}{2}&-\frac{\sqrt{3}}{2}\end{bmatrix}\begin{bmatrix}v_{\mathrm{a }}(t)\\ v_{\mathrm{b}}(t)\\ v_{\mathrm{c}}(t)\end{bmatrix}. \tag{14}\]
Thus, the components of the voltage in (13) are:
\[v_{1}(t)=v_{\alpha}(t)\,,\qquad v_{2}(t)=v_{\beta}(t)\,. \tag{15}\]
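In code, the transform (14) is a fixed \(2\times 3\) matrix applied sample-by-sample. A minimal Python sketch:

```python
import numpy as np

# Amplitude-invariant Clarke transform of eq. (14).
CLARKE = (2.0 / 3.0) * np.array(
    [[1.0, -0.5,               -0.5],
     [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0]])

def clarke(v_abc):
    """Map abc phase voltages, shape (3,) or (3, N), to (alpha, beta)."""
    return CLARKE @ np.asarray(v_abc)
```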
#### III-A1 Stationary Sinusoidal Voltages
We discuss in this section a "base case" scenario for which the theory of affine differential geometry allows obtaining the exact value of the frequency of the voltage. This is the case of an unbalanced stationary sinusoidal (i.e., without harmonics) three-phase voltage. As discussed above, since we apply the Clarke transform, the components of the voltage vector in (13) are:
\[v_{1}(t)=V_{1}\cos\theta(t)\,,\qquad v_{2}(t)=V_{2}\sin\theta(t)\,, \tag{16}\]
where \(V_{1}\) and \(V_{2}\) are constant with \(V_{1}\neq V_{2}\), and:
\[\theta(t)=\omega_{o}t+\theta_{o}\,, \tag{17}\]
where \(\omega_{o}\) is the fundamental synchronous reference frequency of the system and \(\theta_{o}\) is constant and depends on the chosen phase angle reference of the system.
With the equivalence given in (11), the time derivative of the affine arc length \(\dot{\sigma}\) in (5) can be written as:

\[\dot{\sigma}=[\mathbf{v}(t),\dot{\mathbf{v}}(t)]^{1/3}=(\omega_{o}V_{1}V_{2})^{1/3}\,. \tag{18}\]

Note that while \(\mathbf{v}\) and \(\dot{\mathbf{v}}\) depend on time, \(\dot{\sigma}\) does not. Then, imposing that the components of the voltage are as those given in (16), one obtains:

\[\mathbf{x}^{\prime}(t)=\frac{\mathbf{v}(t)}{\dot{\sigma}}\,,\quad\mathbf{x}^{\prime\prime}(t)=\frac{\dot{\mathbf{v}}(t)}{\dot{\sigma}^{2}}\,,\quad\mathbf{x}^{\prime\prime\prime}(t)=\frac{\ddot{\mathbf{v}}(t)}{\dot{\sigma}^{3}}\,, \tag{19}\]
where
\[\mathbf{v}(t) =V_{1}\cos\theta(t)\,\mathbf{e}_{1}+V_{2}\sin\theta(t)\,\mathbf{e}_{2}\,, \tag{20}\] \[\dot{\mathbf{v}}(t) =-\omega_{o}V_{1}\sin\theta(t)\,\mathbf{e}_{1}+\omega_{o}V_{2}\cos \theta(t)\,\mathbf{e}_{2}\,,\] \[\ddot{\mathbf{v}}(t) =-\omega_{o}^{2}V_{1}\cos\theta(t)\,\mathbf{e}_{1}-\omega_{o}^{2}V_{ 2}\sin\theta(t)\,\mathbf{e}_{2}\,.\]
Then, using (10), (18) and (19), the expression of the affine curvature \(\kappa_{a}\) becomes:
\[\kappa_{a}=\frac{1}{\dot{\sigma}^{5}}[\dot{\mathbf{v}}(t),\ddot{\mathbf{v}}(t)]=\frac{\omega_{o}^{3}V_{1}V_{2}}{\dot{\sigma}^{5}}\,, \tag{21}\]
where \(\kappa_{a}\) is constant, which is as expected since (16) describes an ellipse in the plane \((v_{1},v_{2})\). Merging (18) and (21) we obtain:
\[\omega_{o}=\sqrt{\kappa_{a}}\,\dot{\sigma}=\sqrt{\frac{[\dot{\mathbf{v}}(t),\ddot{\mathbf{v}}(t)]}{[\mathbf{v}(t),\dot{\mathbf{v}}(t)]}}\,. \tag{22}\]
Equation (22) indicates that, in order to calculate the angular frequency of the voltage in unbalanced conditions, it suffices to measure \(\mathbf{v}\) and estimate its first and second time derivatives.
#### III-A2 Transient Voltages
Section III-A1 considers an ideal scenario for which the magnitude and the angular frequency of a three-phase voltage are constant. This scenario leads to a compact and elegant analytical result. However, such a scenario is hardly found in practice, where the presence of noise, harmonics, and transient conditions prevents obtaining a general explicit expression for the instantaneous frequency.
Under certain conditions, however, it is still possible to utilize the results of Section III-A1 for a voltage of time-varying angular frequency and/or magnitudes of \(v_{1}\) and \(v_{2}\). Consider a time-varying voltage vector:
\[\mathbf{v}(t)=V_{1}(t)\cos\vartheta(t)\,\mathbf{e}_{1}+V_{2}(t)\sin\vartheta(t)\,\mathbf{e}_ {2}\,, \tag{23}\]
where \(\vartheta(t)=\omega_{o}t+\phi(t)\). The conditions so that (22) holds for a voltage \(\mathbf{v}(t)\) in the form of (23) are:
\[\frac{d^{h}}{dt^{h}}\phi(t) \ll\omega_{o}^{h}\,,\quad h=1,2\,, \tag{24}\] \[\frac{d^{h}}{dt^{h}}\frac{V_{i}(t)}{\langle V_{i}\rangle} \ll\omega_{o}^{h}\,,\quad i,h=1,2\,, \tag{25}\]
where \(\langle\cdot\rangle\) denotes the average value. Condition (24) for \(h=1\) indicates that the instantaneous frequency of the voltage remains close to the synchronous reference angular frequency of the system; for \(h=2\), (24) imposes a boundary on the rate of change of frequency (RoCoF); and conditions (25) impose that the variations of the _radial frequency_ (see definition in [25]) are small compared to the fundamental frequency of the grid. All these assumptions are generally well satisfied in power systems.
Conditions (24) and (25) are sufficient for (22) to hold at least as a first order approximation. In fact, the first time derivative of the voltage vector in (23) is:
\[\dot{\mathbf{v}}=\left(\dot{V}_{1}\cos\vartheta-\dot{\vartheta}V_{1}\sin\vartheta\right)\mathbf{e}_{1}+\left(\dot{V}_{2}\sin\vartheta+\dot{\vartheta}V_{2}\cos\vartheta\right)\mathbf{e}_{2}\,,\]
and the second time derivative is:
\[\ddot{\mathbf{v}}= \left(\ddot{V}_{1}\cos\vartheta-2\dot{\vartheta}\dot{V}_{1}\sin\vartheta-\ddot{\vartheta}V_{1}\sin\vartheta-\dot{\vartheta}^{2}V_{1}\cos\vartheta\right)\mathbf{e}_{1}\] \[+\left(\ddot{V}_{2}\sin\vartheta+2\dot{\vartheta}\dot{V}_{2}\cos\vartheta+\ddot{\vartheta}V_{2}\cos\vartheta-\dot{\vartheta}^{2}V_{2}\sin\vartheta\right)\mathbf{e}_{2}\,,\]
where the dependency on time has been omitted for economy of notation.
It is straightforward to show that by applying (24) and (25), the voltage derivatives can be approximated with the second and third equations of (20) and, hence, the instantaneous frequency can be approximated using (22). In summary, (24) and (25) lead to the following approximated expression of the instantaneous frequency of a time-varying unbalanced voltage:
\[\dot{\vartheta}(t)\approx\boxed{\omega_{a}(t)=\sqrt{\frac{[\dot{\mathbf{v}}(t),\ddot{\mathbf{v}}(t)]}{[\mathbf{v}(t),\dot{\mathbf{v}}(t)]}}} \tag{26}\]
The expression of \(\omega_{a}\) in (26) is the main result of this work. We test the accuracy of (26) through a variety of examples and a case study in Section IV.
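In discrete time, (26) requires only the sampled \(\alpha\beta\) components and two numerical differentiations. A minimal Python sketch using second-order central differences; in practice the derivatives would be low-pass filtered, as discussed in Section IV:

```python
import numpy as np

def estimate_omega_a(v1, v2, dt):
    """Sampled version of eq. (26): omega_a = sqrt([v', v''] / [v, v']),
    with the bracket of eq. (3).  v1, v2 are the alpha/beta components
    sampled with time step dt.  Endpoint samples are less accurate
    because np.gradient falls back to one-sided differences there."""
    dv1, dv2 = np.gradient(v1, dt), np.gradient(v2, dt)
    ddv1, ddv2 = np.gradient(dv1, dt), np.gradient(dv2, dt)
    num = dv1 * ddv2 - ddv1 * dv2   # [v', v'']
    den = v1 * dv2 - dv1 * v2       # [v, v']
    return np.sqrt(num / den)
```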
### _Single-Phase Voltages_
In this section we consider a single-phase voltage with instantaneous value \(v(t)\). To apply the theory described in Section II, we first need to transform \(v(t)\) into the shape of (13). To this aim, we construct the second dimension by employing the voltage derivative. That is:
\[v_{1}(t)=v(t)\,,\qquad v_{2}(t)=\dot{v}(t)\,. \tag{27}\]
Since the time derivative of sinusoidal signals gives a \(90^{\circ}\) rotation, using the time derivative is equivalent to defining a quadrature axis (see also discussion on quadrature signal generation in Section I).
#### III-B1 Stationary Sinusoidal Voltages
The result obtained in the previous section can be easily extended to a stationary sinusoidal single-phase voltage using (27). Let the voltage be:
\[v(t)=V\cos\theta(t)\,, \tag{28}\]
where \(V\) is constant and \(\theta\) is defined in (17). Then, from (27), the components of the voltage vector are:
\[v_{1}(t)=V\cos\theta(t)\,,\qquad v_{2}(t)=-\omega_{o}V\sin\theta(t)\,. \tag{29}\]
Substituting \(V_{1}=V\) and \(V_{2}=\omega_{o}V\) in (18) and (21), one obtains:
\[\dot{\sigma}=(\omega_{o}V)^{2/3}\,,\quad\kappa_{a}=\frac{\omega_{o}^{4}V^{2}} {\dot{\sigma}^{5}}\,. \tag{30}\]
Apart from the fact that the calculation of \(\ddot{\mathbf{v}}(t)\) in this case requires the additional step of computing the third derivative of \(v(t)\), equation (22) holds and allows estimating the angular frequency also for a single-phase voltage.
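The same discrete recipe applies here, with the quadrature component built from the sampled derivative as in (27). A minimal Python sketch (the amplitude, frequency and sampling step are illustrative); note that the chain of numerical differentiations indeed reaches the third derivative of \(v(t)\), so a few samples at each end are discarded:

```python
import numpy as np

w0, dt = 100.0 * np.pi, 1e-5
t = np.arange(0.0, 0.1, dt)
v = 12.0 * np.cos(w0 * t)

v1 = v
v2 = np.gradient(v, dt)  # quadrature component, eq. (27)
dv1, dv2 = np.gradient(v1, dt), np.gradient(v2, dt)
ddv1, ddv2 = np.gradient(dv1, dt), np.gradient(dv2, dt)  # uses v'''

omega = np.sqrt((dv1 * ddv2 - ddv1 * dv2) / (v1 * dv2 - dv1 * v2))
print(np.allclose(omega[10:-10] / w0, 1.0, atol=1e-3))  # -> True
```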
#### III-B2 Transient Voltages
Consider a time-varying single-phase voltage:
\[v(t)=V(t)\cos\vartheta(t)\,, \tag{31}\]
where \(\vartheta(t)=\omega_{o}t+\phi(t)\). The voltage vector is defined as:
\[\mathbf{v}(t) =V(t)\cos\vartheta(t)\,\mathbf{e}_{1} \tag{32}\] \[+\left[\dot{V}(t)\cos\vartheta(t)-V(t)\dot{\vartheta}(t)\sin \vartheta(t)\right]\mathbf{e}_{2}\,.\]
If one assumes:
\[\frac{d^{h}}{dt^{h}}\phi(t) \ll\omega_{o}^{h}\,,\quad h=1,2,3\,, \tag{33}\] \[\frac{d^{h}}{dt^{h}}\frac{V(t)}{\left\langle V\right\rangle} \ll\omega_{o}^{h}\,,\quad h=1,2,3\,, \tag{34}\]
then (26) is also a good approximation of the instantaneous frequency of the time-varying single-phase voltage in (31).
## IV Case Studies
The examples discussed in this section are aimed at illustrating the accuracy of (26) for both unbalanced three- and single-phase voltages in various non-sinusoidal and non-stationary conditions. The proposed approach is compared with a conventional technique, namely the SRF-PLL, as well as with the Frenet frame-based method proposed in [23]. The first section of the study focuses on three-phase systems under various conditions. Both balanced and unbalanced cases are discussed. The IEEE 39-bus system is also utilized to illustrate the accuracy of the proposed method under transient unbalanced conditions. Finally, an example discussing the frequency estimation for a non-stationary single-phase voltage is presented.
In all figures shown in this section, \(\omega_{a}\) represents the estimated angular frequency derived from the proposed approach, while \(\omega_{\kappa}\) represents the estimated angular frequency obtained using the Frenet frame-based method from [23], as follows:
\[\omega_{\kappa}(t)=\frac{[\mathbf{v}(t),\dot{\mathbf{v}}(t)]}{|\mathbf{v}(t)|^{2}}\,. \tag{35}\]
The three-phase voltage trajectories are given in per unit (pu) with respect to a base of 12 kV, and \(\omega_{a}\) and \(\omega_{\kappa}\) are in pu with respect to \(\omega_{o}=100\pi\) rad/s. Finally, note that in all examples, (26) is calculated using a sampling of the voltage signal, transforming it through the Clarke transform and then evaluating numerically the time derivatives of the \(\alpha\) and \(\beta\) components.
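An end-to-end Python sketch of this pipeline for a stationary unbalanced signal of the type (16) is given below; the amplitudes and sampling step are illustrative and do not correspond to the examples that follow. Away from the trimmed boundary samples, \(\omega_{a}\) stays flat at \(\omega_{o}\), as predicted by (22).

```python
import numpy as np

w0, dt = 100.0 * np.pi, 1e-5
t = np.arange(0.0, 0.1, dt)
va = 12.0 * np.sin(w0 * t)
vb = 8.0 * np.sin(w0 * t - 2.0 * np.pi / 3.0)   # unbalanced magnitude
vc = 12.0 * np.sin(w0 * t + 2.0 * np.pi / 3.0)

# Clarke transform, eq. (14).
v1 = (2.0 / 3.0) * (va - vb / 2.0 - vc / 2.0)
v2 = (vb - vc) / np.sqrt(3.0)

dv1, dv2 = np.gradient(v1, dt), np.gradient(v2, dt)
ddv1, ddv2 = np.gradient(dv1, dt), np.gradient(dv2, dt)
omega_a = np.sqrt((dv1 * ddv2 - ddv1 * dv2) / (v1 * dv2 - dv1 * v2))
print(np.allclose(omega_a[5:-5] / w0, 1.0, atol=1e-3))  # -> True
```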
### _Three-Phase Voltage_
Let us consider the three-phase voltage vector given in (12), which we repeat here for convenience:
\[\mathbf{v}(t)=v_{\rm a}(t)\,\mathbf{e}_{\rm a}+v_{\rm b}(t)\,\mathbf{e}_{\rm b}+v_{\rm c }(t)\,\mathbf{e}_{\rm c}\,, \tag{36}\]
with components:
\[v_{\rm a}(t) =V_{\rm a}\sin(\omega_{o}t+\phi_{\rm a}(t))\,, \tag{37}\] \[v_{\rm b}(t) =V_{\rm b}\sin(\omega_{o}t+\phi_{\rm b}(t)-\zeta_{\rm b})\,,\] \[v_{\rm c}(t) =V_{\rm c}\sin(\omega_{o}t+\phi_{\rm c}(t)+\zeta_{\rm c})\,.\]
Recall that the proposed approach operates in two dimensions, and that we use Clarke transform to convert (12) to the two-dimensional \((\alpha,\beta)\) plane.
#### IV-A1 Balanced Three-Phase Voltage
We discuss two examples: the first example involves a stationary condition, whereas the second example considers a signal with time-varying voltage magnitude. In both examples, the angular frequency is constant and equal to \(\omega_{o}\). The parameters used are:
* E1: \(V_{i}=12\) kV, with \(\omega_{o}=100\pi\) rad/s, \(\phi_{i}=0\) and \(\zeta_{\rm b}=\zeta_{\rm c}=\frac{2\pi}{3}\) rad.
* E2: \(V_{i}=12+3\sin(\pi t)\) kV, with \(\omega_{o}=100\pi\) rad/s, \(\phi_{i}=0\) and \(\zeta_{\rm b}=\zeta_{\rm c}=\frac{2\pi}{3}\) rad.
Figure 1 shows the phase voltages, the geometric frequencies \(\omega_{a}\) and \(\omega_{\kappa}\), as well as the instantaneous frequency \(\omega_{\rm PLL}\) obtained with a conventional SRF-PLL for E1 and E2. As expected, since the voltage is balanced and the curve in the plane \((\alpha,\beta)\) is a circle, there is a perfect match between the estimations obtained with the two differential geometry-based methods. Note, however, that the two approaches return the right results for two different reasons: the Frenet frame-based formula returns a constant \(\omega_{\kappa}\) because the circle has a constant curvature; whereas the proposed affine differential geometry approach returns a constant \(\omega_{a}\) because the circle is a special case of an ellipse. We note that the conventional PLL also works well in this balanced-voltage case.
#### IV-A2 Unbalanced Three-Phase Voltage
Three examples of unbalanced voltages with constant angular frequency \(\omega_{o}\) are considered in this section. The first example involves unequal constant voltage magnitudes; the second example examines a system with unequal and time-varying voltage magnitudes; and the third example examines a system with unequal phase displacements. The following three cases are considered:
* E3: \(V_{\mathrm{a}}=V_{\mathrm{c}}=12\) kV, \(V_{\mathrm{b}}=8\) kV, with \(\omega_{o}=100\pi\) rad/s, \(\phi_{i}=0\) and \(\zeta_{\mathrm{b}}=\zeta_{\mathrm{c}}=\frac{2\pi}{3}\) rad.
* E4: \(V_{\mathrm{a}}=V_{\mathrm{c}}=12+3\sin(\pi t)\) kV, \(V_{\mathrm{b}}=8+2\sin(2\pi t)\) kV, with \(\omega_{o}=100\pi\) rad/s, \(\phi_{i}=0\) and \(\zeta_{\mathrm{b}}=\zeta_{\mathrm{c}}=\frac{2\pi}{3}\) rad.
* E5: \(V_{\mathrm{a}}=V_{\mathrm{b}}=V_{\mathrm{c}}=12\) kV, with \(\omega_{o}=100\pi\) rad/s, \(\phi_{i}=0\) and \(\zeta_{\mathrm{b}}=\frac{-2\pi}{3},\zeta_{\mathrm{c}}=\frac{1.5\pi}{3}\) rad.
Figure 2 shows the voltage components and estimated geometric and PLL frequencies for examples E3-E5. In all these examples, the curves in the \((\alpha,\beta)\) plane are ellipses. This means that the curvature obtained using the Frenet frame is time-varying and periodic, thus leading to a time-varying and periodic \(\omega_{\kappa}\). Moreover, the conventional PLL also outputs a time-varying frequency in the form of a significant ripple around the frequency \(\omega_{o}\). On the other hand, the proposed affine geometry formula returns a constant \(\omega_{a}\) equal to \(\omega_{o}\) (in pu), which is consistent with the result expected in this case.
### _Three-Phase Voltage with Time-Varying Frequency_
Two examples of three-phase system with varying angular frequency are considered in this section. The first example considers a voltage with angular frequency that varies periodically around its average value. This example is used to resemble the transient behavior of the voltage following a contingency in a power system, where voltage phase angle oscillations arising due to electro-mechanical swings of synchronous machines are poorly damped and thus sustain for a relatively long time. The second example is an extreme and uncommon situation in power systems, where the components of the three-phase voltage are time-varying and have unequal angular frequencies. The following parameters are used:
* E6: \(V_{i}=12\) kV, with \(\omega_{o}=100\pi\) rad/s and \(\zeta_{\mathrm{b}}=\zeta_{\mathrm{c}}=\frac{2\pi}{3}\) rad and \(\phi_{i}(t)=\pi\sin(0.4\pi t)\) rad.
* E7: \(V_{i}=12\) kV, with \(\omega_{o}=100\pi\) rad/s and \(\zeta_{\mathrm{b}}=\zeta_{\mathrm{c}}=\frac{2\pi}{3}\) rad and \(\phi_{\mathrm{a}}(t)=\phi_{\mathrm{b}}(t)=\pi\sin(0.4\pi t),\phi_{\mathrm{c}}(t)=1.1\pi\sin(0.4\pi t)\) rad.
Figure 3(a) shows the estimated frequency with the proposed formula, the frequency estimated with the PLL, and the geometrical frequency \(\omega_{\kappa}\), for E6. Despite the approximations imposed by assuming (24) and (25), we note that (26) is able to precisely track the exact instantaneous frequency (IF). In this example, \(\omega_{\kappa}\) and \(\omega_{\mathrm{PLL}}\) also track the IF well. On the other hand, for E7, while \(\omega_{a}\) and \(\omega_{\mathrm{PLL}}\) still track the exact frequency well, \(\omega_{\kappa}\) shows significant fluctuations. This behavior is illustrated in Fig. 3(b).
### _Stationary Three-Phase Voltage with Harmonics_
As a last example on three-phase voltages, we discuss the effect of harmonics on the estimation of the frequency based on (26). A fundamental condition for the affine differential geometry approach to work properly is that (2) is satisfied at all times. Harmonics, however, introduce inflection points, that is, points for which \([\dot{\mathbf{x}},\ddot{\mathbf{x}}]\leq 0\). Moreover, since the term \([\dot{\mathbf{x}},\ddot{\mathbf{x}}]\) appears in the denominator of (26), this leads to numerical issues. Figure 4 shows the performance of (26) as well as of the PLL and Frenet-based estimated frequencies for a stationary balanced three-phase voltage (example E8). For the fundamental frequency, \(V_{i}=12\) kV, with \(\omega_{o}=100\pi\) rad/s and \(\zeta_{\text{b}}=\zeta_{\text{c}}=\frac{2\pi}{3}\) rad and \(\phi_{i}(t)=0\) are assumed. Then, 7-th and 11-th harmonics are added with magnitudes \(0.02V_{i}\) and \(0.01V_{i}\), respectively. To overcome numerical issues, we have set \(\omega_{a}=0\) if \([\dot{\mathbf{x}},\ddot{\mathbf{x}}]\leq 0\). As expected, in this scenario, \(\omega_{a}\) shows a poor performance. The best estimation is obtained with the PLL.

Fig. 1: Balanced three-phase voltages and estimated frequency.

Fig. 2: Unbalanced three-phase voltages and estimated frequencies.
### _IEEE 39-Bus System in Unbalanced Conditions_
In this section, the accuracy of (26) is tested using a fully-fledged EMT model of the IEEE 39-bus system. The system setup is the same as the unbalanced scenario described in [24], where it was utilized to show the performance of the Frenet frame-based frequency estimation. In particular, the power consumption of all 19 loads of the system is unbalanced, with imbalances ranging from 5 to 10% on one of the phases. A three-phase fault is simulated at bus 4. The fault occurs at \(t=0.2\) s and is cleared at \(t=0.3\) s. The behavior of the three-phase voltage at bus 26 following the contingency is illustrated in Fig. 5.
Figure 6 shows the results of the frequency estimation for the unbalanced voltage at bus 26. The estimate \(\omega_{a}\) is more accurate than \(\omega_{\text{PLL}}\). Note that (26) has been evaluated using numerical differentiation of the time series of the three-phase voltage components, which have a constant sampling period of \(10\) ms. The numerical derivatives are filtered using a second-order Butterworth digital filter and an IIR filter. Note that, in this scenario, \(\omega_{\kappa}\) shows a larger ripple than \(\omega_{\text{PLL}}\) and is thus omitted in Fig. 6 for clarity. The interested reader can find a comprehensive comparison between PLL frequency estimations and \(\omega_{\kappa}\) in [24].
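A sketch of this post-processing step in Python with scipy is shown below; the cutoff frequency is an illustrative placeholder (it must lie below the Nyquist frequency of the data), not the value used for the reported results.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_derivative(x, dt, cutoff_hz):
    """Numerical time derivative followed by zero-phase low-pass
    filtering with a second-order Butterworth (IIR) filter.
    cutoff_hz must be below the Nyquist frequency 0.5 / dt."""
    dx = np.gradient(x, dt)
    b, a = butter(2, cutoff_hz * 2.0 * dt)  # normalized to Nyquist
    return filtfilt(b, a, dx)
```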
### _Single-Phase Voltage_
This last example illustrates the performance of the proposed formula (26) when applied to a single-phase voltage with time-varying angular frequency \(\omega t+\phi(t)\) and constant amplitude \(V\):
\[v(t)=V\sin(\omega_{o}t+\phi(t))\,. \tag{38}\]
The parameters considered for this example are: \(V=12\) kV, \(\omega_{o}=100\pi\) rad/s and \(\phi(t)=0.05\omega_{o}e^{-t}(1-\cos(\pi t))\) rad.
As discussed in Section III-B, we construct the second dimension by using the derivative of the original signal, as in (27). Figure 7 illustrates the accuracy of (26) in matching the actual analytical value of the instantaneous frequency, that is, \(\mathrm{IF}=\omega_{o}+\dot{\phi}\).
Fig. 4: E8: Three-phase voltage with harmonics and estimated angular frequency.
Fig. 5: Voltage at bus 26 of IEEE 39-bus system following a three-phase fault at bus 4.
Fig. 3: Estimated angular frequency.
Fig. 6: Estimated frequency for IEEE 39-bus system for unbalanced time-varying voltage.
Fig. 7: Estimated frequency, analytical instantaneous frequency (\(\mathrm{IF}\)) and frequency estimated using a phase shift for the single-phase voltage.
Figure 7 also shows the frequency estimated using a conventional PLL, where the quadrature signal is obtained using \(v(t-\tau)\) with transport delay \(\tau=0.25\,T=0.5\,\pi/\omega_{o}\). Despite the approximations resulting from the assumptions (33) and (34), the proposed approach also shows very good accuracy in this case, whereas the PLL shows some ripple due to the fact that the quadrature signal is not exact when the frequency is time-varying.
## V Conclusions
This paper presents an approach based on affine differential geometry to estimate the angular frequency of unbalanced three-phase voltages as well as of single-phase voltages. The main contribution of this work is the approximated formula (26), which estimates the angular frequency of the voltage from measurements of the voltage and the calculation of its first and second time derivatives.
Approximations based on the nature of typical power system transients are assumed in order to achieve a compact explicit expression of the proposed angular frequency estimation formula. Then, a variety of examples are provided to demonstrate the adequacy of such approximations and the performance of the proposed formula. When compared to PLLs as well as to the Frenet frame-based estimation from [23], the proposed formula proves to be accurate and robust in balanced and unbalanced conditions, as well as for voltages of time-varying magnitude and frequency.
Future work will focus on testing the proposed formula with real-world measurements and on extending its formulation to process signals with harmonic content, as well as to multi-phase systems with more than three phases.
|
2308.01911 | Synthesis of a quantum tree Weyl matrix | A method for successive synthesis of a Weyl matrix (or Dirichlet-to-Neumann
map) of an arbitrary quantum tree is proposed. It allows one, starting from one
boundary edge, to compute the Weyl matrix of a whole quantum graph by adding on
new edges and solving elementary systems of linear algebraic equations in each
step. | Sergei A. Avdonin, Kira V. Khmelnytskaya, Vladislav V. Kravchenko | 2023-06-15T01:38:29Z | http://arxiv.org/abs/2308.01911v1 | # Synthesis of a quantum tree Weyl matrix
###### Abstract
A method for successive synthesis of a Weyl matrix (or Dirichlet-to-Neumann map) of an arbitrary quantum tree is proposed. It allows one, starting from one boundary edge, to compute the Weyl matrix of a whole quantum graph by adding on new edges and solving elementary systems of linear algebraic equations in each step.
## 1 Introduction
Quantum graphs, or differential equation networks, have wide applications in science and engineering and give rise to challenging problems involving many areas of modern mathematics, from combinatorics to partial differential equations and spectral theory. A number of surveys and collections of papers on quantum graphs have appeared in recent years, including the first books on this topic by Berkolaiko and Kuchment [8] and Mugnolo [14]. In the present work we consider tree graphs, that is, finite connected compact graphs without cycles. The Weyl or Titchmarsh-Weyl matrix of a quantum graph is one of the key mathematical objects: it naturally appears in direct and inverse spectral theory, in the control theory of quantum graphs and in numerous applications. Its importance lies in the fact that the Weyl matrix (or, more precisely, its transpose) is the Dirichlet-to-Neumann map of the quantum graph. For a fixed value of the spectral parameter, a vector of arbitrary Dirichlet-type boundary values of a solution multiplied by the transposed Weyl matrix gives the vector of the corresponding Neumann-type boundary values, provided that the value of the spectral parameter is not a Dirichlet eigenvalue of the quantum graph. Moreover, the singularities of the Weyl matrix, considered as a function of the spectral parameter, determine the Dirichlet spectrum of the quantum graph. Many papers on inverse problems for quantum graphs exploit the Weyl matrix or equivalent spectral data, see, e.g., [6], [9], [15], [7], [10], [4].
Direct construction of the Weyl matrix for a sufficiently large quantum tree is quite a challenging problem. It requires solving large systems of equations involving solutions (and
their derivatives) of differential equations on all edges of the tree. The main result of the present work is a simple procedure for a progressive synthesis of the Weyl matrix of an arbitrary quantum tree. Starting from just one leaf edge and adding successively new edges, it allows one to compute the Weyl matrices for ever larger quantum trees from the Weyl matrices for the smaller ones. We call this procedure the synthesis of the Weyl matrix. In a sense, it is inverse with respect to the leaf peeling method, which was developed in [4], see also [5], [2]. The proposed synthesis of the Weyl matrix is based on revealed relations between Weyl solutions of a larger quantum tree graph \(\Omega\) with those of a smaller one \(\widetilde{\Omega}\), obtained from \(\Omega\) by cutting out all the leaf edges of an internal vertex.
In Section 2 we recall necessary definitions and express the Weyl solutions in terms of fundamental systems of solutions of the Sturm-Liouville equation on each edge. Section 3 presents the main result of the work, the procedure of the synthesis of a Weyl matrix of an arbitrary quantum tree graph. Finally, Section 4 contains some concluding remarks.
## 2 Preliminaries
Let \(\Omega\) be a finite connected compact graph without cycles (a tree graph) consisting of \(P\) edges, \(e_{1},\ldots,e_{P}\), and \(P+1\) vertices, \(V=\left\{v_{1},...,v_{P+1}\right\}.\) The notation \(e_{j}\sim v\) means that the edge \(e_{j}\) is incident to the vertex \(v\). Every edge \(e_{j}\) is identified with an interval \((0,L_{j})\) of the real line. The boundary \(\Gamma=\left\{\gamma_{1},\ldots,\gamma_{m}\right\}\) of \(\Omega\) is the set of all leaves of the graph (the external vertices). The edge adjacent to some \(\gamma_{j}\) is called a leaf or boundary edge.
A continuous function \(u\) defined on the graph \(\Omega\) is a \(P\)-tuple of functions \(u_{j}\in C\left[0,L_{j}\right]\) satisfying the continuity condition at the internal vertices \(v\): \(u_{i}(v)=u_{j}(v)\) for all \(e_{i},e_{j}\sim v\). Then \(u\in C(\Omega)\).
Let \(q\in\mathcal{L}_{1}(\Omega)\) be real valued, and \(\lambda\) a complex number. Consider the Sturm-Liouville equation on \(\Omega\):
\[-u^{\prime\prime}(x)+q(x)u(x)=\lambda u(x). \tag{2.1}\]
A function \(u\) defined on \(\Omega\) is said to be a solution of (2.1) if besides (2.1) we have that
\[u\in C(\Omega), \tag{2.2}\]
and for every internal vertex \(v\) the Kirchhoff-Neumann condition is fulfilled
\[\sum_{e_{j}\sim v}\partial u_{j}(v)=0,\quad\mbox{for all $v\in V\setminus \Gamma$.} \tag{2.3}\]
Here \(u_{j}\) is a restriction of \(u\) onto \(e_{j}\), \(\partial u_{j}(v)\) stands for the derivative of \(u\) at the vertex \(v\) taken along the edge \(e_{j}\) in the direction outward the vertex, and the sum is taken over all the edges incident to the internal vertex \(v\).
For simplicity we assume that the considered graph does not have vertices of degree two, because every such vertex can be regarded as an internal point of an edge which is a sum of two edges incident at such a vertex, and the continuity condition (2.2) together with the Kirchhoff-Neumann condition (2.3) guarantee that any solution of (2.1) on the incident edges keeps satisfying (2.1) also on the union of them.
**Definition 2.1**: _A solution \(w_{i}\) of (2.1) on \(\Omega\) is called the **Weyl solution** associated with the leaf \(\gamma_{i}\) if it satisfies the boundary conditions_
\[w_{i}(\gamma_{i})=1\quad\mbox{and}\quad w_{i}(\gamma_{j})=0\mbox{ for all }j\neq i. \tag{2.4}\]
If \(\lambda\) in (2.1) is not a Dirichlet eigenvalue of the quantum tree \(\Omega\), the Weyl solution \(w_{i}\) exists and is unique for any \(i=1,\ldots,m\). In particular, since the potential \(q\) is real valued, the Dirichlet spectrum is real and thus the boundary value problem (2.1), (2.4) is uniquely solvable for all \(\lambda\notin\mathbb{R}\).
**Definition 2.2**: _The \(m\times m\) matrix-function \({\bf M}(\lambda)\), \(\lambda\notin\mathbb{R}\), consisting of the elements \({\bf M}_{ij}(\lambda)=\partial w_{i}(\gamma_{j})\), \(i,j=1,\ldots,m\) is called the **Weyl matrix**._
For a fixed value of \(\lambda\), the transposed Weyl matrix represents a Dirichlet-to-Neumann map of the quantum graph defined by \(\Omega\) and \(q\in{\cal L}_{1}(\Omega)\). Indeed, if \(u\) is a solution of (2.1) satisfying the Dirichlet condition at the boundary vertices \(u(\lambda,\gamma)=f(\lambda)\), then \(\partial u(\lambda,\gamma)={\bf M}^{T}(\lambda)f(\lambda)\), \(\lambda\notin\mathbb{R}\).
It is clear that the direct computation of the Weyl matrix, which involves finding Weyl solutions and computing their derivatives at leaves may be a difficult task, especially, when \(\Omega\) consists of a large number of edges. The corresponding systems of equations which need to be solved in this case may be too large, because they should combine information on the solutions and their derivatives on each edge and at all internal vertices.
The main result of the present work is a simple procedure which allows one to synthesize the Weyl matrix progressively, by adding edges to smaller graphs and computing the Weyl matrices for the obtained larger graphs from the Weyl matrices for the smaller ones. We call this procedure synthesis of the Weyl matrix. It allows one to compute the Weyl matrix of any quantum tree starting from one leaf edge and adding successively new edges.
By \(\varphi_{i}(\rho,x)\) and \(S_{i}(\rho,x)\) we denote the so-called fundamental solutions of the Sturm-Liouville equation on the edge \(e_{i}:\)
\[-y^{\prime\prime}(x)+q_{i}(x)y(x)=\rho^{2}y(x),\quad x\in(0,L_{i}), \tag{2.5}\]
satisfying the initial conditions
\[\varphi_{i}(\rho,0)=1,\quad\varphi_{i}^{\prime}(\rho,0)=0,\]
\[S_{i}(\rho,0)=0,\quad S_{i}^{\prime}(\rho,0)=1.\]
Here \(q_{i}(x)\) is the component of the potential \(q(x)\) on the edge \(e_{i}\), and \(\rho=\sqrt{\lambda}\), \(\mbox{Im}\,\rho\geq 0\). For a leaf edge \(e_{i}\) it is convenient to identify its leaf \(\gamma_{i}\) with the left endpoint \(x=0.\) Then the Weyl solution \(w_{i}(\rho,x)\) has the form
\[w_{ii}(\rho,x)=\varphi_{i}(\rho,x)+{\bf M}_{i,i}(\rho^{2})S_{i}(\rho,x)\quad \mbox{on the adjacent leaf edge $e_{i}$}\]
and
\[w_{ij}(\rho,x)={\bf M}_{i,j}(\rho^{2})S_{j}(\rho,x)\quad\mbox{on every other leaf edge $e_{j}$},\quad j\neq i.\]
Hereafter, the notation \(w_{ij}(\rho,x)\) means that we consider \(j\)-th component of a solution \(w_{i}(\rho,x)\), that is, the solution \(w_{i}(\rho,x)\) on the edge \(e_{j}\).
On internal edges \(e_{j}\) we have
\[w_{ij}(\rho,x)=a_{ij}(\rho)\varphi_{j}(\rho,x)+b_{ij}(\rho)S_{j}(\rho,x),\]
where the choice of which vertex is identified with zero is arbitrary, and in general the factors \(a_{ij}(\rho)\), \(b_{ij}(\rho)\) are unknown.
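For a fixed, moderate value of \(\rho\), the fundamental solutions can also be obtained by direct numerical integration of (2.5). A minimal Python sketch with a generic ODE solver (real \(\rho\) for the sanity check; complex \(\rho\) would require complex-valued initial data), offered as an alternative to the series representations discussed next:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fundamental_solutions(q, L, rho, n=201):
    """Integrate -y'' + q(x) y = rho^2 y on [0, L] for the two initial
    conditions phi(0)=1, phi'(0)=0 and S(0)=0, S'(0)=1.
    Returns the grid x and the rows (phi, phi', S, S')."""
    def rhs(x, y):  # y = [phi, phi', S, S']
        return [y[1], (q(x) - rho**2) * y[0],
                y[3], (q(x) - rho**2) * y[2]]
    x = np.linspace(0.0, L, n)
    sol = solve_ivp(rhs, (0.0, L), [1.0, 0.0, 0.0, 1.0],
                    t_eval=x, rtol=1e-10, atol=1e-12)
    return x, sol.y

# Sanity check: for q = 0, phi = cos(rho x) and S = sin(rho x)/rho.
x, (phi, dphi, S, dS) = fundamental_solutions(lambda x: 0.0, 1.0, 2.0)
print(np.allclose(phi, np.cos(2.0 * x), atol=1e-6),
      np.allclose(S, np.sin(2.0 * x) / 2.0, atol=1e-6))
```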
Since in direct and inverse spectral problems involving the Weyl matrix one deals with solutions on large ranges of the parameter \(\rho\), it is convenient to use the Neumann series of Bessel functions representations for \(\varphi_{j}(\rho,x)\) and \(S_{j}(\rho,x)\), introduced in [13] and applied in a number of direct and inverse problems (see, e.g., [1], [2], [3], [11], [12]). One of the features of these representations is the existence of estimates for the remainders of the series independent of \(\mathop{\rm Re}\rho\).
## 3 Synthesis of Weyl matrix
Consider a quantum tree graph \(\widetilde{\Omega}\) whose leaves are \(\gamma_{0},\gamma_{1},\ldots,\gamma_{m}\). Assume its Weyl matrix \(\widetilde{\bf M}(\rho^{2})\) to be known for some value of \(\rho\). Attach a number of edges to the leaf \(\gamma_{0}\) (see Fig. 1), so that \(\gamma_{0}\) becomes an internal vertex of a new larger graph \(\Omega\), and \(\gamma_{1},\ldots,\gamma_{m},\gamma_{m+1},\ldots,\gamma_{m+m_{1}}\) the leaves of \(\Omega\). Here \(m_{1}\) is the number of the new attached edges. Thus, \(\Omega\) is a tree graph obtained from \(\widetilde{\Omega}\) by attaching \(m_{1}\) edges to \(\gamma_{0}\). For simplicity we call these new edges \(e_{m+1},\ldots,e_{m+m_{1}}\) and denote their respective lengths as \(L_{m+1},\ldots,L_{m+m_{1}}\). We assume that a corresponding potential \(q_{j}\in{\cal L}_{1}(0,L_{j})\), \(j=m+1,\ldots,m+m_{1}\), is given on each new edge, and the leaf \(\gamma_{j}\) is identified with \(x=0\).
Our task is to find the Weyl matrix \({\bf M}(\rho^{2})\) of the quantum graph \(\Omega\). The idea is to construct the Weyl solutions \(w_{i}(\rho,x)\) of \(\Omega\) in terms of the Weyl solutions \(\widetilde{w}_{j}(\rho,x)\) of \(\widetilde{\Omega}\). We start with
Figure 1: Tree graph \(\Omega\) is obtained from a subgraph \(\widetilde{\Omega}\) (its edges are presented by solid lines) by attaching to the vertex \(\gamma_{0}\) a number of new edges (dashed lines).
the Weyl solutions \(w_{m+j}(\rho,x)\) associated with the new leaves \(\gamma_{m+j}\), \(j=1,\ldots,m_{1}\).
Let us look for \(w_{m+j}(\rho,x)\) in the form
\[w_{m+j}(\rho,x)=c_{j}(\rho)\widetilde{w}_{0}(\rho,x)\quad\mbox{on $\widetilde{ \Omega}$}. \tag{3.1}\]
That is, on the subgraph \(\widetilde{\Omega}\) the Weyl solution \(w_{m+j}(\rho,x)\) coincides with \(\widetilde{w}_{0}(\rho,x)\) up to a multiplicative constant \(c_{j}(\rho)\). In this case, it automatically satisfies the homogeneous Dirichlet condition at \(\gamma_{1},\ldots,\gamma_{m}\), and we still need to satisfy the conditions \(w_{m+j}(\rho,\gamma_{m+j})=1\) and \(w_{m+j}(\rho,\gamma_{m+i})=0\) for \(i\neq j\).
From (3.1) we have
\[\varphi_{m+j}(\rho,L_{m+j})+{\bf M}_{m+j,m+j}(\rho^{2})S_{m+j}(\rho,L_{m+j})= c_{j}(\rho) \tag{3.2}\]
and
\[\varphi^{\prime}_{m+j}(\rho,L_{m+j})+\sum_{k=1}^{m_{1}}{\bf M}_{m+j,m+k}(\rho ^{2})S^{\prime}_{m+k}(\rho,L_{m+k})=c_{j}(\rho)\widetilde{\bf M}_{0,0}(\rho^{2 }), \tag{3.3}\]
where \(\widetilde{\bf M}_{0,0}(\rho^{2})\) is an element of the Weyl matrix \(\widetilde{\bf M}(\rho^{2})\): \(\widetilde{\bf M}_{0,0}(\rho^{2})=\partial\widetilde{w}_{0}(\gamma_{0})\).
Additionally, the continuity condition gives us the equalities
\[{\bf M}_{m+j,m+k}(\rho^{2})S_{m+k}(\rho,L_{m+k})=c_{j}(\rho),\quad k=1,\ldots, m_{1}\mbox{ and }k\neq j. \tag{3.4}\]
Thus, for each \(j\), from (3.2), (3.3) and (3.4) we have \(m_{1}+1\) equations for the \(m_{1}+1\) unknowns
\[\left\{{\bf M}_{m+j,m+k}(\rho^{2}),\,k=1,\ldots,m_{1};\,\,c_{j}(\rho)\right\}.\]
Note that in the linear algebraic system (3.2), (3.3), (3.4) the magnitudes \(\varphi_{m+j}(\rho,L_{m+j})\), \(S_{m+k}(\rho,L_{m+k})\), \(\varphi^{\prime}_{m+j}(\rho,L_{m+j})\), \(S^{\prime}_{m+k}(\rho,L_{m+k})\) are known, since all the potentials \(q_{m+k}(x)\), \(k=1,\ldots,m_{1}\) are known.
Thus, from (3.2), (3.3) and (3.4) we find \({\bf M}_{m+j,m+k}(\rho^{2})\), \(k=1,\ldots,m_{1}\) and \(c_{j}(\rho)\).
From (3.1) we obtain additionally,
\[{\bf M}_{m+j,i}(\rho^{2})=c_{j}(\rho)\widetilde{\bf M}_{0,i}(\rho^{2})\quad \mbox{for $i=1,\ldots,m$}, \tag{3.5}\]
and thus we have already completed the rows \(m+1,\ldots,m+m_{1}\) of the Weyl matrix \({\bf M}(\rho^{2})\).
Now, choose an \(i\in\{1,\ldots,m\}\). We look for \(w_{i}(\rho,x)\) such that the following equality holds on \(\widetilde{\Omega}\):
\[w_{i}(\rho,x)=\widetilde{w}_{i}(\rho,x)+\alpha_{i}(\rho)\widetilde{w}_{0}(\rho,x),\]
where \(\alpha_{i}(\rho)\) is a constant. This is a natural choice, because
\[w_{i}(\rho,\gamma_{i})=1\quad\mbox{and}\quad w_{i}(\rho,\gamma_{j})=0\quad \mbox{for $j=1,\ldots,m$ and $j\neq i$}.\]
Moreover, we have
\[w_{i}(\rho,\gamma_{0})=\alpha_{i}(\rho)\]
and
\[\partial w_{i}(\rho,\gamma_{0})=\widetilde{\bf M}_{i,0}(\rho^{2})+\alpha_{i}( \rho)\widetilde{\bf M}_{0,0}(\rho^{2}).\]
Thus, for all \(j=m+1,\ldots,m+m_{1}\) we have
\[w_{ij}(\rho,\gamma_{0})=\alpha_{i}(\rho) \tag{3.6}\]
and
\[\sum_{j=m+1}^{m+m_{1}}\partial w_{ij}(\rho,\gamma_{0})=-\widetilde{\mathbf{M}}_ {i,0}(\rho^{2})-\alpha_{i}(\rho)\widetilde{\mathbf{M}}_{0,0}(\rho^{2}). \tag{3.7}\]
Equality (3.6) can be written as
\[\mathbf{M}_{i,j}(\rho^{2})S_{j}(\rho,L_{j})=\alpha_{i}(\rho),\quad j=m+1, \ldots,m+m_{1}, \tag{3.8}\]
while (3.7) takes the form
\[\sum_{j=m+1}^{m+m_{1}}\mathbf{M}_{i,j}(\rho^{2})S_{j}^{\prime}(\rho,L_{j})= \widetilde{\mathbf{M}}_{i,0}(\rho^{2})+\alpha_{i}(\rho)\widetilde{\mathbf{M}} _{0,0}(\rho^{2}). \tag{3.9}\]
For every \(i\in\{1,\ldots,m\}\), equations (3.8) and (3.9) give us \(m_{1}+1\) equations for the \(m_{1}+1\) unknowns
\[\left\{\mathbf{M}_{i,j}(\rho^{2}),\,j=m+1,\ldots,m+m_{1};\ \alpha_{i}(\rho) \right\}.\]
The magnitudes \(S_{j}(\rho,L_{j})\), \(S_{j}^{\prime}(\rho,L_{j})\), \(\widetilde{\mathbf{M}}_{i,0}(\rho^{2})\), \(\widetilde{\mathbf{M}}_{0,0}(\rho^{2})\) in (3.8) and (3.9) are already known.
Finally, to obtain \(\mathbf{M}_{i,j}(\rho^{2})\) for \(i,j=1,\ldots,m\) we observe that
\[\partial w_{i}(\rho,\gamma_{j})=\partial\widetilde{w}_{i}(\rho,\gamma_{j})+ \alpha_{i}(\rho)\partial\widetilde{w}_{0}(\rho,\gamma_{j})\]
and thus
\[\mathbf{M}_{i,j}(\rho^{2})=\widetilde{\mathbf{M}}_{i,j}(\rho^{2})+\alpha_{i}( \rho)\widetilde{\mathbf{M}}_{0,j}(\rho^{2})\quad\mbox{for $i,j=1,\ldots,m$.} \tag{3.10}\]
Let us summarize the procedure of construction of the Weyl matrix \(\mathbf{M}(\rho^{2})\) of the quantum tree \(\Omega\) from the Weyl matrix \(\widetilde{\mathbf{M}}(\rho^{2})\) of the quantum subtree \(\widetilde{\Omega}\).
1) For each \(j=1,\ldots,m_{1}\) solve the \((m_{1}+1)\times(m_{1}+1)\)-system of linear algebraic equations (3.2)-(3.4) to find the constant \(c_{j}(\rho)\) and the Weyl matrix entries \(\mathbf{M}_{m+j,m+k}(\rho^{2})\), \(k=1,\ldots,m_{1}\). Compute the entries \(\mathbf{M}_{m+j,i}(\rho^{2})\), \(i=1,\ldots,m\) from (3.5). Thus, the rows from \(m+1\) to \(m+m_{1}\) of the Weyl matrix \(\mathbf{M}(\rho^{2})\) are computed.
2) For each \(i=1,\ldots,m\) solve the \((m_{1}+1)\times(m_{1}+1)\)-system of linear algebraic equations (3.8), (3.9) to find the constant \(\alpha_{i}(\rho)\) and the Weyl matrix entries \(\mathbf{M}_{i,j}(\rho^{2})\), \(j=m+1,\ldots,m+m_{1}\). Compute the entries \(\mathbf{M}_{i,j}(\rho^{2})\), \(j=1,\ldots,m\) from (3.10). This completes the construction of the Weyl matrix \(\mathbf{M}(\rho^{2})\).
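Once the endpoint values \(\varphi_{j}(\rho,L_{j})\), \(\varphi^{\prime}_{j}(\rho,L_{j})\), \(S_{j}(\rho,L_{j})\), \(S^{\prime}_{j}(\rho,L_{j})\) of the new edges are available (e.g., from numerical integration as sketched in Section 2), both steps reduce to small linear solves. The following Python sketch assembles and solves the systems (3.2)-(3.4) and (3.8)-(3.9) for one synthesis step; the storage convention (index 0 of \(\widetilde{\mathbf{M}}\) corresponding to \(\gamma_{0}\)) is an implementation choice, not notation from the text.

```python
import numpy as np

def extend_weyl(M_tilde, phiL, dphiL, SL, dSL):
    """One synthesis step: attach m1 new edges at the leaf gamma_0 of a
    subtree with known Weyl matrix M_tilde (row/column 0 <-> gamma_0,
    1..m <-> gamma_1..gamma_m).  phiL, dphiL, SL, dSL are arrays with
    phi_j(L_j), phi'_j(L_j), S_j(L_j), S'_j(L_j) for the new edges.
    Assumes rho^2 avoids the relevant Dirichlet spectra, so all the
    small systems below are non-singular.  Returns the (m+m1)x(m+m1)
    Weyl matrix, leaves ordered (gamma_1..gamma_m, new leaves)."""
    m, m1 = M_tilde.shape[0] - 1, len(SL)
    M = np.zeros((m + m1, m + m1), dtype=complex)
    M00 = M_tilde[0, 0]

    # Step 1: rows for the new leaves, eqs. (3.2)-(3.5).
    for j in range(m1):
        A = np.zeros((m1 + 1, m1 + 1), dtype=complex)
        b = np.zeros(m1 + 1, dtype=complex)
        A[0, j], A[0, m1], b[0] = SL[j], -1.0, -phiL[j]          # (3.2)
        A[1, :m1], A[1, m1], b[1] = dSL, -M00, -dphiL[j]         # (3.3)
        row = 2
        for k in range(m1):
            if k != j:
                A[row, k], A[row, m1] = SL[k], -1.0              # (3.4)
                row += 1
        u = np.linalg.solve(A, b)                 # unknowns: M-row, c_j
        M[m + j, m:] = u[:m1]
        M[m + j, :m] = u[m1] * M_tilde[0, 1:]                    # (3.5)

    # Step 2: rows for the old leaves, eqs. (3.8)-(3.10).
    for i in range(m):
        A = np.zeros((m1 + 1, m1 + 1), dtype=complex)
        b = np.zeros(m1 + 1, dtype=complex)
        for j in range(m1):
            A[j, j], A[j, m1] = SL[j], -1.0                      # (3.8)
        A[m1, :m1], A[m1, m1], b[m1] = dSL, -M00, M_tilde[i + 1, 0]  # (3.9)
        u = np.linalg.solve(A, b)             # unknowns: M-row, alpha_i
        M[i, m:] = u[:m1]
        M[i, :m] = M_tilde[i + 1, 1:] + u[m1] * M_tilde[0, 1:]   # (3.10)
    return M
```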
Finally, let us consider the situation when \(\widetilde{\Omega}\) is just a single segment with the vertices \(\gamma_{0}\) and \(\gamma_{1}\). Since the potential on this segment is supposed to be given, we may assume that both corresponding Weyl solutions on such \(\widetilde{\Omega}\) are known: \(\widetilde{w}_{0}(\rho,x)\) and \(\widetilde{w}_{1}(\rho,x)\), that satisfy the boundary conditions
\[\widetilde{w}_{0}(\rho,\gamma_{0})=1,\quad\widetilde{w}_{0}(\rho,\gamma_{1})=0\]
and
\[\widetilde{w}_{1}(\rho,\gamma_{0})=0,\quad\widetilde{w}_{1}(\rho,\gamma_{1})=1.\]
Then the entries of the \(2\times 2\) - Weyl matrix \(\widetilde{\mathbf{M}}(\rho^{2})\) have the form
\[\widetilde{\mathbf{M}}_{0,0}(\rho^{2})=\partial\widetilde{w}_{0}(\rho,\gamma_{0 }),\quad\widetilde{\mathbf{M}}_{0,1}(\rho^{2})=\partial\widetilde{w}_{0}(\rho, \gamma_{1}),\]
\[\widetilde{\mathbf{M}}_{1,0}(\rho^{2})=\partial\widetilde{w}_{1}(\rho,\gamma_{0 }),\quad\widetilde{\mathbf{M}}_{1,1}(\rho^{2})=\partial\widetilde{w}_{1}(\rho, \gamma_{1}).\]
Now, as a first step of the synthesis of the Weyl matrix of a quantum tree, we attach a number of edges to the vertex \(\gamma_{0}\), which gives us a quantum star graph. The procedure described above gives us its Weyl matrix. Subsequently, attaching new edges to the leaves and applying the above procedure leads to the computation of the Weyl matrix of an ever larger quantum tree. Thus, starting with one edge, we synthesize the Weyl matrix of the whole quantum tree.
**Remark 3.1**: _The requirement on the potential \(q\) to be real valued is not essential. The proposed procedure for the synthesis of the Weyl matrix is applicable to complex valued potentials without modifications. The only constraint that must be checked at each step is that \(\lambda=\rho^{2}\) does not belong to the Dirichlet spectrum of either graph \(\widetilde{\Omega}\) or \(\Omega\)._
## 4 Conclusions
A procedure for the synthesis of the Weyl matrix of an arbitrary quantum tree graph is developed. It allows one to compute the Weyl matrix for a large quantum tree successively, by adding new edges and computing the Weyl matrices for ever larger subgraphs. At each step, relatively small systems of linear algebraic equations are solved. Due to the fact that the transposed Weyl matrix is the Dirichlet-to-Neumann map of the quantum tree graph, the synthesis procedure will find applications in solving a variety of boundary value and control problems on quantum tree graphs.
**Funding** The research of Sergei Avdonin was supported in part by the National Science Foundation, grant DMS 1909869, and by Moscow Center for Fundamental and Applied Mathematics. The research of Vladislav Kravchenko was supported by CONACYT, Mexico, via the project 284470.
**Data availability** The data that support the findings of this study are available upon reasonable request.
**Declarations**
**Conflict of interest** The authors declare no competing interests.
|
2301.02913 | Observational constraints of diffusive dark-fluid cosmology | In this work, we consider an interacting dark-fluid cosmological model in
which energy exchange between dark matter and dark energy occurs through
diffusion. After solving the background expansion history for a late-time
universe, we attempt to constrain the cosmological parameters by comparing
simulated values of the model against Supernovae Type 1A data. We consider four
different cases and compare them against the LCDM model as the "true model".
Our results show that the diffusive model in which dark energy flows to dark
matter is the most likely alternative to LCDM model. This model is not only in
line with Planck 2018 observational results but can also give a potential
explanation to the so-called Hubble tension. | Remudin Reshid Mekuria, Amare Abebe | 2023-01-07T18:26:26Z | http://arxiv.org/abs/2301.02913v1 | # Observational constraints of diffusive dark-fluid cosmology
###### Abstract
In this work, we consider an interacting dark-fluid cosmological model in which energy exchange between dark matter and dark energy occurs through diffusion. After solving the background expansion history for a late-time universe, we attempt to constrain the cosmological parameters by comparing simulated values of the model against Supernovae Type 1A data. We consider four different cases and compare them against the \(\Lambda\)CDM model as the "true model". Our results show that the diffusive model in which dark energy flows to dark matter is the most likely alternative to the \(\Lambda\)CDM model. This model is not only in line with Planck 2018 observational results but can also give a potential explanation of the so-called Hubble tension.
**PACS numbers:** 04.50.Kd, 98.80.Jk, 98.80.-k, 95.36.+x, 98.80.Cq
## I Introduction
A lot has already been reported about the discrepancy between observational findings [1; 2; 3; 4; 5; 6] and theoretical predictions of the expansion history of the universe in standard cosmology. The missing matter and energy in the universe, dubbed dark matter (DM) and dark energy (DE), respectively, account for a whopping 95% of the total content of the universe. The nature of these dark components of the universe is not properly understood, but there are several candidates in the literature, including unified dark-fluid models, proposed to describe them and their effect on astrophysics and cosmology. On the DM side, most commonly studied candidates include Weakly Interacting Massive Particles (WIMPS) [7; 8; 9; 10] or some astrophysical modification of gravity such as the Modified Newtonian Dynamics (MOND)[11] among many others, whereas on the DE side, the cosmological constant \(\Lambda\)[12] is perhaps the simplest addition to the standard cosmological model needed to explain most of the observed data. There are some serious issues associated with the cosmological constant, however, such as the eponymous _cosmological constant problem_[13] and the coincidence problem [14; 15] which make the choice less attractive. That is why there are currently a plethora of other alternatives to explain current cosmological observations, such as modifications to the gravitational theory itself (see, for example, [16; 17; 18; 19]), an evolving \(\Lambda\)[20; 21; 22], deviations from the standard homogeneous (see [23; 24] and references therein) and isotropic universe (such as the various Bianchi cosmological models) assumption, or some form of combination of these, among others.
Another aspect to consider, and one gaining much traction recently, is the interaction of dark matter and dark energy [3; 25; 26; 27; 28]. Such an approach is interesting because it has the potential to explain the cosmological and coincidence problems, the Hubble tension and/or the \(\sigma_{8}\) discrepancy [29; 30; 31].
Our current work pursues the last aspect, and studies the cosmological viability of a model [3] of the dark-fluid interaction using Supernovae Type 1A data. We organise the rest of the manuscript as follows: in Sec. II we give a covariant thermodynamics description of, and derive the field equations for, the background universe involving the diffusive dark-fluid system. In Sec. III we give an observational-constraint analysis using MCMC simulations of Supernovae Type 1A and Planck 2018 data and give some predictions on the values of the defining parameters of our model. Finally in Sec. IV we discuss the results and give conclusions.
## II Background Thermodynamics
The standard \(\Lambda\)CDM cosmology is a solution of the Einstein field equations (EFEs) derived from the action (From here onwards, we will work with units in which the speed of light \(c=1\)):
\[S=\frac{c^{4}}{16\pi G}\int d^{4}x\sqrt{-g}\left[R+2\left(L_{m}-\Lambda\right) \right]\,, \tag{1}\]
where \(R\), \(L_{m}\) and \(\Lambda\) are the Ricci scalar, the matter Lagrangian density and the cosmological constant, respectively. The corresponding EFEs read:
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=8\pi GT_{\mu\nu}\;, \tag{2}\]
with the first (geometric) term represented by the Einstein tensor, and the RHS of the equation representing the total energy-momentum tensor (EMT) of matter fluid forms. Both \(G_{\mu\nu}\) and \(T_{\mu\nu}\) are covariantly conserved quantities. The EMT for perfect-fluid models is given by
\[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}\;, \tag{3}\]
where \(\rho\) and \(p\) are the energy density and isotropic pressure of matter, respectively, often related by the barotropic equation of state (EoS) \(p=w\rho\) for a constant EoS parameter \(w\). The normalised vector \(u_{\alpha}\) represents the four-velocity of fundamental observers comoving with the fluid. The divergence-free EMT \({T^{\mu\nu}}_{;\mu}=0\) leads to the fluid conservation equation
\[\dot{\rho}+3\frac{\dot{a}}{a}(1+w)\rho=0\;, \tag{4}\]
where \(a(t)\) is the cosmological scale factor whose evolution is given by the Friedmann equation
\[\frac{\dot{a}^{2}}{a^{2}}=\frac{8\pi G}{3}\rho+\frac{\Lambda}{3}-\frac{k}{a^{2}} \tag{5}\]
where \(k\) is the normalised spatial curvature parameter, taking the values \(-1,0,1\) for an open, flat or closed spatial geometry, respectively. In a multi-component fluid system, it is usually assumed that the energy density of each perfect-fluid component evolves independently of the other fluids of the system:
\[\dot{\rho_{i}}+3\frac{\dot{a}}{a}(1+w_{i})\rho_{i}=0\;, \tag{6}\]
and in this case the EMT in Eq. (3) is the algebraic sum of the EMTs of each fluid; likewise, the total energy density and total pressure terms of Eq. (5) are the algebraic sums over the individual components.
However, if we relax this assumption due to the presence of diffusion between the constituent components of the fluid, the individual components no longer obey the matter conservation equation, although the total fluid still does. For the \(i\)th component fluid, the new conservation equation reads:
\[T_{i}^{\mu\nu}{}_{;\mu}=N_{i}^{\nu}\;, \tag{7}\]
where \(N_{i}^{\nu}\) corresponds to the current of diffusion term for that fluid. One can then write the non-conservation equation for the fluid as:
\[\dot{\rho_{i}}+3\frac{\dot{a}}{a}(1+w_{i})\rho_{i}=\frac{\gamma_{i}}{a^{3}}\;, \tag{8}\]
where \(\gamma_{i}\) is a constant for that fluid such that \(\sum_{i}\gamma_{i}=0\). Integrating this equation gives
\[\rho_{i}=a^{-3(1+w_{i})}\left[\rho_{i0}+\gamma_{i}\int_{t_{0}}^{t}a^{3w_{i}}dt ^{\prime}\right]\;, \tag{9}\]
with \(\rho_{i0}\) representing the present-day (\(t=t_{0}\)) value of the energy density of the \(i\)th fluid. (Indeed, \(\frac{d}{dt}\big[\rho_{i}a^{3(1+w_{i})}\big]=a^{3(1+w_{i})}\big[\dot{\rho}_{i}+3\frac{\dot{a}}{a}(1+w_{i})\rho_{i}\big]=\gamma_{i}a^{3w_{i}}\), so Eq. (9) follows from Eq. (8) by direct integration, using the normalisation \(a_{0}=1\) adopted below.) Using a late-time expansion \(t-t_{0}\ll t_{0}\) and expressing \(a(t)=a_{0}\left[1-(t_{0}-t)H_{0}+\dots\right]\), we can evaluate the integral in Eq. (9) as
\[\int_{t_{0}}^{t}a^{3w_{i}}dt^{\prime} =\int_{t_{0}}^{t}\left[1-(t_{0}-t^{\prime})H_{0}+\dots\right]^{3w_{i}}dt^{\prime}\] \[\approx\frac{1}{(1+3w_{i})H_{0}}\left[1-\left(1+(t_{0}-t)H_{0}\right)^{1+3w_{i}}\right]\] \[=\frac{1}{(1+3w_{i})H_{0}}\left[1-(2-a)^{1+3w_{i}}\right]\;, \tag{10}\]
where in the last step we have normalised the scale factor to unity today: \(a_{0}=1\). Thus the energy density of each diffusive fluid component is given by the relation:
\[\rho_{i}=a^{-3(1+w_{i})}\left[\rho_{i0}+\frac{\gamma_{i}}{(1+3w_{i})H_{0}} \left[1-(2-a)^{1+3w_{i}}\right]\right]\;. \tag{11}\]
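Equation (11) is straightforward to code for numerical work. The Python sketch below is our own illustration, assuming \(a_{0}=1\) and arbitrary density units; the special value \(w_{i}=-1/3\), for which \(1+3w_{i}=0\), is not handled.

```python
def rho_diffusive(a, rho_i0, gamma_i, w_i, H0):
    # Eq. (11): energy density of a diffusive fluid with EoS parameter w_i.
    n = 1.0 + 3.0 * w_i
    bracket = rho_i0 + gamma_i / (n * H0) * (1.0 - (2.0 - a) ** n)
    return a ** (-3.0 * (1.0 + w_i)) * bracket
```

Setting \(w_{i}=1/3\), \(0\) and \(-1\) reproduces the radiation, matter and vacuum-energy expressions listed below.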
Assuming the well-known components of radiation, dust-like matter (baryons and dark matter) and vacuum energy, the above diffusive solution leads to:
\[\rho_{\rm r} =a^{-4}\left[\rho_{\rm r0}+\frac{\gamma_{\rm r}}{2H_{0}}\left[1- (2-a)^{2}\right]\right]\] \[\rho_{\rm m} =a^{-3}\left[\rho_{\rm m0}+\frac{\gamma_{\rm m}}{H_{0}}\left[1-(2 -a)\right]\right]\] \[\rho_{\Lambda} =\rho_{\Lambda 0}-\frac{\gamma_{\Lambda}}{2H_{0}}\left[1-(2-a)^{-2}\right]\]
Let us now consider the Friedmann equation for this diffusive generalisation of the \(\Lambda\)CDM model, assuming \(k=0\), which can be given as:
\[\frac{\dot{a}^{2}}{a^{2}}=\frac{8\pi G}{3}\left[\rho_{\rm r0}a^{-4}+\rho_{\rm m0 }+\frac{\gamma_{\rm m}}{H_{0}}\left[1-(2-a)\right]a^{-3}+\rho_{\Lambda 0}- \frac{\gamma_{\Lambda}}{2H_{0}}\left[1-(2-a)^{-2}\right]\right]\;. \tag{13}\]
For this work we assume the diffusive interaction is restricted to dark matter and dark energy, and hence \(\gamma_{\rm r}=0\) in the above equation.
Let us now introduce the following dimensionless quantities:
\[\Omega_{\rm i}\equiv\frac{8\pi G}{3H_{0}^{2}}\rho_{\rm i}\;,\qquad\Delta_{\rm m }\equiv\frac{8\pi G}{3H_{0}^{3}}\gamma_{\rm m}\;,\qquad\Delta_{\Lambda}\equiv \frac{8\pi G}{3H_{0}^{3}}\gamma_{\Lambda}\;,\qquad 1+z\equiv a^{-1}\;, \qquad h\equiv\frac{H}{H_{0}}\;. \tag{14}\]
We can then show that the Friedmann equation can be recast as
\[h^{2}=\Omega_{\rm r0}(1+z)^{4}+\Omega_{\rm m0}(1+z)^{3}+\Omega_{\Lambda 0}- \Delta_{\rm m}z(1+z)^{2}-\Delta_{\Lambda}\left[\frac{1}{2}-\frac{1}{2}\left( \frac{1+2z}{1+z}\right)^{-2}\right]\;. \tag{15}\]
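A minimal numerical sketch of Eq. (15) follows; the default parameter values are the Case I best fits quoted in Sec. III and are purely illustrative, and fixing \(\Omega_{\Lambda 0}\) by flatness at \(z=0\) (where the diffusive terms vanish) is our simplifying choice rather than a constraint imposed in the fits.

```python
def h_squared(z, Om0=0.2678, Or0=0.0005, Dm=0.00252, DL=-0.00251):
    # Eq. (15): dimensionless expansion rate squared, h^2 = (H/H0)^2.
    OL0 = 1.0 - Om0 - Or0                 # flatness assumption (k = 0)
    ratio = (1.0 + 2.0 * z) / (1.0 + z)   # equals (2 - a) with a = 1/(1+z)
    return (Or0 * (1 + z) ** 4 + Om0 * (1 + z) ** 3 + OL0
            - Dm * z * (1 + z) ** 2
            - DL * (0.5 - 0.5 * ratio ** (-2.0)))
```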
Moreover, the deceleration parameter can be shown to be
\[q \equiv-\frac{\ddot{a}a}{\dot{a}^{2}}=\frac{4\pi G}{3H^{2}}\sum_{i }\rho_{\rm i}(1+3w_{\rm i})\] \[=\frac{1}{2}\left\{\frac{2\Omega_{\rm r0}(1+z)^{4}+\Omega_{\rm m 0}(1+z)^{3}-2\Omega_{\Lambda 0}-\Delta_{\rm m}z(1+z)^{2}+\Delta_{\Lambda} \left[1-\left(\frac{1+2z}{1+z}\right)^{-2}\right]}{\Omega_{\rm r0}(1+z)^{4}+ \Omega_{\rm m0}(1+z)^{3}+\Omega_{\Lambda 0}-\Delta_{\rm m}z(1+z)^{2}-\Delta_{ \Lambda}\left[\frac{1}{2}-\frac{1}{2}\left(\frac{1+2z}{1+z}\right)^{-2} \right]}\right\} \tag{16}\]
These equations reduce to their respective \(\Lambda\)CDM limits when \(\Delta_{\rm m}\) and \(\Delta_{\Lambda}\) both vanish.
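A matching, self-contained sketch of Eq. (16), with the same illustrative parameter choices:

```python
def deceleration(z, Om0=0.2678, Or0=0.0005, Dm=0.00252, DL=-0.00251):
    # Eq. (16): q = (1/2) [sum_i (1 + 3 w_i) Omega_i] / h^2.
    OL0 = 1.0 - Om0 - Or0
    ratio = (1.0 + 2.0 * z) / (1.0 + z)
    hsq = (Or0 * (1 + z) ** 4 + Om0 * (1 + z) ** 3 + OL0
           - Dm * z * (1 + z) ** 2 - DL * (0.5 - 0.5 * ratio ** (-2.0)))
    num = (2 * Or0 * (1 + z) ** 4 + Om0 * (1 + z) ** 3 - 2 * OL0
           - Dm * z * (1 + z) ** 2 + DL * (1.0 - ratio ** (-2.0)))
    return 0.5 * num / hsq
```

With \(\Delta_{\rm m}=\Delta_{\Lambda}=0\) this returns the familiar \(\Lambda\)CDM value \(q_{0}\approx-0.6\) at \(z=0\) for the parameters above.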
## III Observational constraints
In the following we provide the results of the observational constraints for the diffusive dark-fluid models introduced in this work. To fit against the supernovae data in our MCMC simulation we have used the distance modulus equation, which can be obtained by combining the different cosmological distance definitions, as presented in the work of [32].
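To make the fitting function concrete, the sketch below computes the distance modulus for a flat universe, reusing `h_squared` from the Sec. II sketch above. Interpreting the flat-FLRW luminosity distance \(d_{L}=(1+z)\,(c/H_{0})\int_{0}^{z}dz^{\prime}/h(z^{\prime})\) as what "combining the different cosmological distance definitions" yields, and reading \(h\) as \(H_{0}/(100\ {\rm km\,s^{-1}Mpc^{-1}})\), are our assumptions; the exact expression used in the paper is that of [32].

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def distance_modulus(z, h=0.6966, **pars):
    H0 = 100.0 * h  # km/s/Mpc
    integral, _ = quad(lambda zp: 1.0 / np.sqrt(h_squared(zp, **pars)), 0.0, z)
    d_L = (1.0 + z) * (C_KM_S / H0) * integral      # in Mpc
    return 5.0 * np.log10(d_L) + 25.0               # mu = 5 log10(d_L / 10 pc)
```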
As shown in Fig. 1, we run an MCMC simulation for the diffusive model by combining Eqs. 14 and 15, and find, on average, the best-fitting value of each free parameter to be \(h=0.6966\) for the Hubble uncertainty parameter, \(\Omega_{m0}=0.2678\) for the matter density parameter, and \(\Omega_{r0}=0.00050\) for the radiation density parameter, along with the newly introduced parameters \(\Delta_{\rm m}=0.00252\) and \(\Delta_{\Lambda}=-0.00251\). We shall henceforth refer to this diffusive model case as Case I.
Among our optimal results we have also obtained, as Case II, the situation where the \(\Omega_{m0}\) result is much closer to the observational result of Planck 2018 (\(=0.315^{+0.555}_{-0.111}\)), as shown below. These values are likewise obtained with an MCMC simulation for the diffusive model, combining Eqs. 14 and 15; we find, on average, the best-fitting value of each free parameter to be \(h=0.6955\) for the Hubble uncertainty parameter, \(\Omega_{m0}=0.3134\) for the matter density parameter, and \(\Omega_{r0}=0.00050\) for the radiation density parameter, along with the newly introduced parameters \(\Delta_{\rm m}=0.1246\) and \(\Delta_{\Lambda}=-0.1244\).
Figs. 3 and 4 show the two diffusive cosmological cases discussed above, which clearly fit the data extremely well; even the corresponding \(1\sigma\) deviations do not noticeably affect the full range predicted by them. Additionally, Figs. 5 and 6 display the residuals obtained in the above two cases. It can clearly be seen that at no point do the models over- or under-estimate the resulting distance modulus for each supernova. We also note that the average offset of the model compared to the data is \(\bar{x}_{res}=-0.0374\) Mpc in both cases, with standard deviations of \(\sigma_{res}=0.2148\) and \(\sigma_{res}=0.2152\), respectively. These results show that there are very strong correlations between the models and the data points.
Figs. 7 and 8 show the evolution of the Hubble parameter with redshift for the two diffusive model cases discussed above.
Figure 3: The diffusive model's Eq. 15 with the best-fitting free parameters for the Supernovae Type 1A data, with cosmological parameter values \(h=0.6966^{+0.0047}_{-0.0047}\), \(\Omega_{m0}=0.2678^{+0.0248}_{-0.0237}\), \(\Omega_{r0}=0.0005^{+0.0003}_{-0.0003}\), \(\Delta_{m}=0.0025^{+0.0169}_{-0.0173}\) and \(\Delta_{\Lambda}=-0.0025^{+0.0172}_{-0.0169}\), from the MCMC simulation result shown in Fig. 1.
Figure 6: The residuals, in Mpc, between the predicted model values and the data points for the diffusive model's Eq. 15 with the best-fitting free parameters shown in Fig. 4.
Figure 4: The diffusive model's Eq. 15 with the best-fitting free parameters for the Supernovae Type 1A data, with cosmological parameter values \(h=0.6955^{+0.0047}_{-0.0047}\), \(\Omega_{m0}=0.3134^{+0.0531}_{-0.0478}\), \(\Omega_{r0}=0.0005^{+0.0003}_{-0.0003}\), \(\Delta_{m}=0.1246^{+0.1098}_{-0.0887}\) and \(\Delta_{\Lambda}=-0.1244^{+0.0877}_{-0.1096}\), from the MCMC simulation result shown in Fig. 2.
The blue curves represent the results obtained by considering the diffusive fluid and employing MCMC simulations, with the 1-\(\sigma\) deviation results displayed in the yellowish shaded regions. The red curves represent the \(\Lambda\)CDM cosmology result using the average values obtained from the MCMC simulations, whereas the green curves represent those obtained directly by using the Planck 2018 data. In Fig. 7 a complete overlap is observed between the two curves obtained by using the MCMC simulation data in the \(\Lambda\)CDM model and the average values of the MCMC simulation data in the diffusive model. In contrast, in the results of Fig. 8 we begin to notice a deviation between the two curves from \(z\sim 0.75\) onward. Even though a complete overlap is not expected between the \(\Lambda\)CDM model using Planck 2018 values inserted directly into the cosmological equations (green curves) and the MCMC results (blue curves and yellowish shaded regions), the difference between them is observed to be more prominent in the case of Fig. 8 than in that of Fig. 7.
Figs. 9 and 10 show the evolution of the deceleration parameter with redshift for the two diffusive cosmological model cases discussed above.
Figure 8: The Hubble parameter vs redshift for the model displayed in Fig. 2. The blue curve represents the result obtained by considering the diffusive fluid and employing the MCMC simulation, with the 1-\(\sigma\) deviation result displayed in the yellowish shaded region. The red curve represents the \(\Lambda\)CDM cosmology result using the MCMC simulation, whereas the green curve represents the one obtained directly by using the Planck 2018 data for the purpose of comparison.
Figure 7: The Hubble parameter vs redshift for the model displayed in Fig. 1. The blue curve represents the result obtained by considering the diffusive fluid and employing the MCMC simulation, with the 1-\(\sigma\) deviation result displayed in the yellowish shaded region. The red curve represents the \(\Lambda\)CDM cosmology result using the MCMC simulation, whereas the green curve represents the one obtained directly by using the Planck 2018 data for the purpose of comparison.
The blue curves represent the results obtained by considering the diffusive fluid and employing MCMC simulations, with the 1-\(\sigma\) deviation results displayed in the yellowish shaded regions. The red curves represent the \(\Lambda\)CDM cosmological results obtained by using the MCMC simulation data, whereas the green curves represent those obtained directly by substituting the Planck 2018 data values. In Fig. 9 we observe a complete overlap between the two curves obtained by using the MCMC simulation data in the \(\Lambda\)CDM equation and the average values of the MCMC simulation data of the diffusive model, as is also observed in the Hubble parameter vs redshift plot of Fig. 7. Moreover, the 1-\(\sigma\) deviation result, indicated by the yellowish shaded region, is observed to encompass all the curves up to \(z\sim 0.5\) for the deceleration parameter values given in Fig. 10. The diffusive model in this case has slightly larger values of the deceleration parameter in the present universe (\(z\sim 0\)) compared to what is observed in the case of Fig. 9.
The above two cases (Case I and Case II) were obtained with a positive \(\Delta_{\rm m}\) and a negative \(\Delta_{\Lambda}\). In what follows, we provide the results corresponding to the cases of negative \(\Delta_{m}\), which can be interpreted as the situation in which energy flows from the dark matter sector to the dark energy sector. As shown in Fig. 11, we run an MCMC simulation for the diffusive model by combining Eqs. 14 and 15, and find, on average, the best-fitting value of each free parameter to be \(h=0.6967\) for the Hubble uncertainty parameter, \(\Omega_{m0}=0.2655\) for the matter density parameter, and \(\Omega_{r0}=0.00050\) for the radiation density parameter, along with the newly introduced parameters \(\Delta_{\rm m}=-0.00251\) and \(\Delta_{\Lambda}=0.00246\). We will hereafter call this diffusive model case Case III.
We also provide one further interesting case, in which the best-fitting values of the free parameters are \(h=0.6976\) for the Hubble uncertainty parameter, \(\Omega_{m0}=0.2283\) for the matter density parameter, and \(\Omega_{r0}=0.00050\) for the radiation density parameter, along with the newly introduced parameters \(\Delta_{\rm m}=-0.10747\) and \(\Delta_{\Lambda}=0.10426\). We will refer to this diffusive model case as Case IV.
Figs. 13 and 14 show the two diffusive cosmological model cases discussed above, which clearly fit the data extremely well; even the corresponding \(1\sigma\) deviations do not noticeably affect the full range predicted by them.
Additionally, Figs. 15 and 16 display the residuals obtained in the above two cases. It can clearly be seen that at no point do the models over- or under-estimate the resulting distance modulus for each supernova. We also note that the average offsets of the models compared to the data are \(\bar{x}_{res}=-0.0374\) Mpc and \(\bar{x}_{res}=-0.0386\) Mpc, with standard deviations of \(\sigma_{res}=0.2147\) and \(\sigma_{res}=0.2145\), respectively. These results show that there are very strong correlations between the models and the data points.
Figure 16: The residuals, in Mpc, between the predicted model values and the data points for the diffusive model's Eq. 15 with the best-fitting free parameters shown in Fig. 14.
Figure 14: The diffusive model's Eq. 15 with the best-fitting free parameters for the Supernovae Type 1A data, with cosmological parameter values \(h=0.6976^{+0.0047}_{-0.0047}\), \(\Omega_{m0}=0.2283^{+0.0387}_{-0.0374}\), \(\Omega_{r0}=0.0005^{+0.0003}_{-0.0003}\), \(\Delta_{m}=-0.1074^{+0.0716}_{-0.0641}\) and \(\Delta_{\Lambda}=0.1042^{+0.0660}_{-0.0697}\), from the MCMC simulation result shown in Fig. 12.
Figure 13: The diffusive model's Eq. 15 with the best-fitting free parameters for the Supernovae Type 1A data, with cosmological parameter values \(h=0.6967^{+0.0047}_{-0.0047}\), \(\Omega_{m0}=0.2655^{+0.0248}_{-0.0237}\), \(\Omega_{r0}=0.0005^{+0.0003}_{-0.0003}\), \(\Delta_{m}=-0.0025^{+0.0169}_{-0.0170}\) and \(\Delta_{\Lambda}=0.0024^{+0.0172}_{-0.0167}\), from the MCMC simulation result shown in Fig. 11.
Figure 15: The residuals, in Mpc, between the predicted model values and the data points for the diffusive model's Eq. 15 with the best-fitting free parameters shown in Fig. 13.
Figs. 17 and 18 show the evolution of the Hubble parameter with redshift for the two cosmological model cases discussed above. The blue curves represent the results obtained by considering the diffusive fluid and employing MCMC simulations, with the 1-\(\sigma\) deviation results displayed in the yellowish shaded regions. The red curves represent the \(\Lambda\)CDM cosmology results using the MCMC simulations, whereas the green curves represent those obtained directly by using the Planck 2018 data for the purpose of comparison.
A complete overlap is observed in Fig. 17 between the two curves obtained by using the MCMC simulation data in the \(\Lambda\)CDM model and the average values of the MCMC simulation data of the diffusive model. In contrast, a noticeable deviation is observed between the Hubble parameter values of the diffusive model result (the blue curve) and those of the \(\Lambda\)CDM result based on MCMC simulation data (the red curve) in Fig. 18 from \(z\sim 0.75\) onward. Even though a complete overlap is not expected between the \(\Lambda\)CDM model using Planck 2018 values inserted directly into the cosmological equations (green curves) and the MCMC results (blue curves and yellowish shaded regions), the difference between them is observed to be more prominent in the
Figure 17: The Hubble parameter vs redshift for the model displayed in Fig. 11. The blue curve represents the result obtained by considering the diffusive fluid and employing the MCMC simulation, with the 1-\(\sigma\) deviation result displayed in the yellowish shaded region. The red curve represents the \(\Lambda\)CDM cosmology result using the MCMC simulation, whereas the green curve represents the one obtained directly by using the Planck 2018 data for the purpose of comparison.
Figure 18: The Hubble parameter vs redshift for the model displayed in Fig. 12. The blue curve represents the result obtained by considering the diffusive fluid and employing the MCMC simulation, with the 1-\(\sigma\) deviation result displayed in the yellowish shaded region. The red curve represents the \(\Lambda\)CDM cosmology result using the MCMC simulation, whereas the green curve represents the one obtained directly by using the Planck 2018 data for the purpose of comparison.
case of Fig. 17 than in that of Fig. 18.
Figs. 19 and 20 show the evolution of the deceleration parameter with redshift for the two cosmological model cases discussed above. The blue curves represent the results obtained by considering the diffusive fluid and employing MCMC simulations, with the 1-\(\sigma\) deviation results displayed in the yellowish shaded regions. The red curves represent the \(\Lambda\)CDM cosmology results using the MCMC simulations, whereas the green curves represent those obtained directly by using the Planck 2018 data. In Fig. 19 we observe a complete overlap when using the MCMC simulation data in the \(\Lambda\)CDM model and the average values of the MCMC simulation data of the diffusive model, as is also observed in the Hubble parameter vs redshift plot of Fig. 17.
As redshift increases, the deceleration parameter values of the diffusive model result (the blue curve) and those of the \(\Lambda\)CDM result based on MCMC simulation data (the red curve) begin to acquire similar values, as can be seen in Fig. 20. The 1-\(\sigma\) deviation results, indicated by the yellow shaded region, are observed to encompass both the diffusive (blue curve) and non-diffusive (red curve) cases of the deceleration parameter values given in Fig. 20. In the current universe, the diffusive model has slightly lower values of the deceleration parameter compared to what is observed in the case of Fig. 19.
In Table 1 we provide some statistical results which allow us to determine the best diffusive model case in comparison to the \(\Lambda\)CDM model. The statistical tests that we have used are the Akaike information criterion (AIC) and the Bayesian/Schwarz information criterion (BIC), which were used in a similar work in [32]. These information criteria evaluate the plausibility of an alternative model explaining the data compared to an "accepted/true model". In our case the \(\Lambda\)CDM model will
be considered as the "true model". Following the suggestion made in [32], since the calculated values of the AIC and BIC on their own can be rather arbitrary, we also use the differences in AIC (i.e., \(\Delta\)AIC) and BIC (i.e., \(\Delta\)BIC) of each model compared to the "true model"'s AIC and BIC values, and we use the Jeffreys scale in order to draw conclusions about the viability of the various diffusive model cases. Moreover, the reduced \(\chi^{2}\) values are used as an indication of the goodness of fit of each model to the supernovae data. It is observed that the first two diffusive model cases (shown in Figs. 1 and 2) obtained better likelihood function values than the \(\Lambda\)CDM model based on a Gaussian probability distribution, with Case II obtaining the larger likelihood function value. However, for the reduced \(\chi^{2}\) values, in which the number of parameters is taken into account when determining the goodness of fit, the \(\Lambda\)CDM model has the best value, with the diffusive model Case I (shown in Fig. 1) having the closest value to this accuracy. In order to find the better-fitting model among these two cases, we use the AIC test, according to which the diffusive model Cases I and II have obtained more observational support and less observational support, respectively. Case I just misses the substantial-observational-support category, but its value is still close to that boundary rather than to the boundary for less observational support. Therefore, it can be concluded that Case I has some observational support according to the AIC criterion, while Case II has less observational support. In terms of the BIC criterion, we did not find any model in an observational-support category, but Case I
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**Models** & \(\Delta_{\rm m}\) & \(\Delta_{\Lambda}\) & \(\mathbf{L}(\hat{\theta}|\)data) & \(\chi^{2}\) & Red.\(\chi^{2}\) & \(AIC\) & \(|\Delta AIC|\) & \(BIC\) & \(|\Delta BIC|\) \\ \hline Diffusive Case II & +ve & -ve & -121.1677 & 242.3355 & 0.6845 & 252.3355 & 4.9405 & 271.7521 & 12.7072 \\ \hline Diffusive Case I & +ve & -ve & -120.7059 & 241.4118 & 0.6819 & 251.4118 & 4.0168 & 270.8285 & 11.7835 \\ \hline \(\Lambda\)CDM & 0 & 0 & -120.6975 & 241.3950 & 0.6780 & 247.3950 & 0 & 259.0449 & 0 \\ \hline Diffusive Case III & -ve & +ve & -120.6890 & 241.3781 & 0.6818 & 251.3781 & 3.9831 & 270.7947 & 11.7497 \\ \hline Diffusive Case IV & -ve & +ve & -120.3936 & 240.7872 & 0.6801 & 250.7872 & 3.3922 & 270.2039 & 11.1589 \\ \hline \end{tabular}
\end{table}
Table 1: The best-fit results for each tested model, including the \(\Lambda\)CDM model. The models are listed in order of increasing likelihood function value \(\mathbf{L}(\hat{\theta}|\)data). The reduced \(\chi^{2}\) values are given as an indication of the goodness of fit for a particular model. The AIC and BIC values are shown, as well as the \(\Delta\)AIC and \(\Delta\)BIC for each information criterion. The \(\Lambda\)CDM model is chosen as the "true model".
comes closest to being in one. Therefore, statistically, based on the likelihood, the goodness of fit, and the AIC and BIC criteria, Case I is the most likely alternative model to the \(\Lambda\)CDM model; Case II is not ruled out, but will have to be tested on other datasets before being accepted or rejected.
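For reference, the AIC and BIC entries of Table 1 follow from the standard definitions \(\mathrm{AIC}=2k-2\ln L\) and \(\mathrm{BIC}=k\ln N-2\ln L\), as the short check below illustrates. The parameter counts \(k=3\) (\(\Lambda\)CDM) and \(k=5\) (diffusive cases) and the supernova sample size \(N\approx 359\) are inferred from the tabulated values rather than quoted in the text.

```python
import numpy as np

def information_criteria(lnL, k, N):
    return 2.0 * k - 2.0 * lnL, k * np.log(N) - 2.0 * lnL

# LCDM row of Table 1: lnL = -120.6975, k = 3, N = 359 reproduces
# AIC = 247.395 and BIC ~ 259.04; the diffusive rows follow with k = 5.
aic, bic = information_criteria(-120.6975, 3, 359)
```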
## IV Conclusions
In this manuscript we considered diffusive cosmological models in which dark matter and dark energy interact by exchanging energy. The background cosmological parameters, in particular the thermodynamic parameters, have been studied and compared against supernova cosmological data for the different diffusive model cases, using the MCMC simulation results presented in the previous section.
For the two new parameters which arise in our diffusive cosmological model, namely \(\Delta_{m}\) and \(\Delta_{\Lambda}\), we have examined the Hubble and deceleration parameter results of Figs. 7 to 10 and of Figs. 17 to 20. Recalling the requirement that the sum of these two parameters be zero, magnitudes of \(\Delta_{m}\) and \(\Delta_{\Lambda}\) of \(\approx 0.0025\) fit the parameter space very well. Following this, we investigated the models more deeply based on the statistical analysis of the previous section, given in Table 1. From our analysis we observed that the cases with positive values of \(\Delta_{\rm m}\) showed the largest values of the likelihood function. Based on the analysis of the likelihood, the goodness of fit, and the AIC and BIC criteria, one can conclude that overall Case I is the most likely alternative to the \(\Lambda\)CDM model.
As highlighted in the discussion, the aim of our current work is to provide a viability test of the different cases considered; to reject or accept any of them, more data and more rigorous testing methods are needed. Moreover, our initial results, such as those shown in Figs. 7 and 17, suggest that one can look for a potential explanation of the Hubble tension in such models.
## Acknowledgements
AA acknowledges that this work is based on the research supported in part by the National Research Foundation (NRF) of South Africa (grant number 112131). This work was part of the research programme "New Insights into Astrophysics and Cosmology with Theoretical Models confronting Observational Data" of the National Institute for Theoretical and Computational Sciences of South Africa.
|
2302.12088 | Robust suppression of noise propagation in GKP error-correction | Straightforward logical operations contrasting with complex state preparation
are the hallmarks of the bosonic encoding proposed by Gottesman, Kitaev and
Preskill (GKP). The recently reported generation and error-correction of GKP
qubits in trapped ions and superconducting circuits thus holds great promise
for the future of quantum computing architectures based on such encoded qubits.
However, these experiments rely on error-syndrome detection via an auxiliary
physical qubit, whose noise may propagate and corrupt the encoded GKP qubit. We
propose a simple module composed of two oscillators and a physical qubit,
operated with two experimentally accessible quantum gates and elementary
feedback controls to implement an error-corrected GKP qubit protected from such
propagating errors. In the idealized setting of periodic GKP states, we develop
efficient numerical methods to optimize our protocol parameters and show that
errors of the encoded qubit stemming from flips of the physical qubit and
diffusion of the oscillators state in phase-space may be exponentially
suppressed as the noise strength over individual operations is decreased. Our
approach circumvents the main roadblock towards fault-tolerant quantum
computation with GKP qubits. | Christian Siegele, Philippe Campagne-Ibarcq | 2023-02-23T15:21:50Z | http://arxiv.org/abs/2302.12088v3 | # Robust suppression of noise propagation in GKP error-correction
###### Abstract
Straightforward logical operations contrasting with complex state preparation are the hallmarks of the bosonic encoding proposed by Gottesman, Kitaev and Preskill (GKP). The recently reported generation and error-correction of GKP qubits in trapped ions and superconducting circuits thus holds great promise for the future of quantum computing architectures based on such encoded qubits. However, these experiments rely on error-syndrome detection via an ancillary two-level system (TLS), whose noise may propagate and corrupt the encoded qubit. We propose a simple module composed of two oscillators and a TLS, operated with two experimentally accessible quantum gates and elementary feedback controls to implement an error-corrected GKP qubit protected from such propagating errors. In the idealized setting of periodic GKP states, we develop efficient numerical methods to optimize our protocol parameters and show that errors of the encoded qubit stemming from flips of the TLS and diffusion of the oscillators state in phase-space may be exponentially suppressed as the noise strength over individual operations is decreased. Our approach circumvents the main roadblock towards fault-tolerant quantum computation with GKP qubits.
## I Introduction
In their seminal paper [1; 2], Gottesman, Kitaev, and Preskill proposed to encode, within the vast Hilbert space of a harmonic oscillator, a qubit robust against position and momentum shifts of the embedding oscillator. Clifford operations on encoded GKP qubits are straightforward to implement and do not amplify small shift errors. Therefore, concatenation of the GKP code into the surface code recently attracted interest [3; 4] as, beyond the potentially enhanced coherence of GKP qubits compared to faulty physical qubits, analog information from the GKP error-correction layer may be decoded to improve the surface code threshold [5; 6; 7; 8; 9]. Crucially, these desirable features rely on the assumption that noise-induced shifts of the embedding oscillators are short and can be detected before they accumulate. This hypothesis is not valid in current experimental implementations with superconducting circuits [10; 11]. In order to comprehend this serious limitation, one needs to delve into the code structure and error-correction techniques employed in these experiments.
In reduced phase-space coordinates \((q_{a},p_{a})\)[13], the basis states of the square GKP code are superpositions of periodically spaced position eigenstates
\[|+Z\rangle=\sum_{n\in\mathbb{Z}}|q_{a}=n\alpha\rangle\qquad|-Z\rangle=\mathbf{ D}_{\frac{\alpha}{2}}|+Z\rangle, \tag{1}\]
where \(\alpha=2\sqrt{\pi}\) and \(\mathbf{D}_{\delta_{q}+i\delta_{p}}=e^{-i\delta_{q}\mathbf{p}+i\delta_{p} \mathbf{q}}\) displaces the oscillator state respectively by \(\delta_{q}\) and \(\delta_{p}\) along \(q_{a}\) and \(p_{a}\). The logical states \(|\pm X\rangle\) are obtained by a \(\pi/2\) rotation in phase-space of \(|\pm Z\rangle\). Note that infinitely delocalized states are unrealistic, but the essential properties and control techniques considered in our work apply to states normalized by a broad Gaussian envelope in phase-space [1; 14; 10; 15]. One may measure the GKP qubit in the \(|\pm Z\rangle\) or \(|\pm X\rangle\) basis by detecting the _modular logical operators_\(\tilde{\mathbf{q}}_{a}^{L}=\mathbf{q}_{a}\) mod \(\alpha\) and \(\tilde{\mathbf{p}}_{a}^{L}=\mathbf{p}_{a}\) mod \(\alpha\). Crucially, a code state \(|\Psi\rangle\) shifted in position and momentum can still be correctly decoded as long as the shifts are shorter than \(\alpha/4\). Moreover, these shifts can be detected without revealing the GKP qubit state by measuring the two commuting _modular stabilizers_\(\tilde{\mathbf{q}}_{a}^{S}=\mathbf{q}_{a}\) mod \(\alpha/2\) and \(\tilde{\mathbf{p}}_{a}^{S}=\mathbf{p}_{a}\) mod \(\alpha/2\), of which \(\mathbf{D}_{\delta_{q}+i\delta_{p}}|\Psi\rangle\) is a joint eigenvector with respective eigenvalues \(\delta_{q}\) and \(\delta_{p}\).
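As a minimal illustration of these modular decoding rules (our own sketch, not part of the paper), the functions below decode a position sample of the square code and compute the correctable-shift estimate:

```python
import numpy as np

ALPHA = 2.0 * np.sqrt(np.pi)  # square-code lattice length

def decode_Z(q_a):
    # |+Z> lives near q = n*ALPHA and |-Z> near q = (n + 1/2)*ALPHA, so the
    # decoding is correct for shifts shorter than ALPHA/4.
    q_log = np.mod(q_a, ALPHA)
    return +1 if min(q_log, ALPHA - q_log) < ALPHA / 4 else -1

def stabilizer_shift(q_a):
    # Centered value of q mod ALPHA/2: the shift a correction would undo.
    return np.mod(q_a + ALPHA / 4, ALPHA / 2) - ALPHA / 4
```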
Measuring the modular stabilizers without extracting logical information is the main challenge in GKP error-correction [16; 17; 18; 19; 20; 21; 22]. It was only recently achieved experimentally with trapped ions [23; 24; 15] and superconducting circuits [10; 11]. In these experiments, the target oscillator is coupled to an ancillary TLS via a controllable Rabi-like interaction static in the interaction picture \(-\chi\mathbf{r}_{a}\mathbf{\sigma}_{z}\) (where \(\mathbf{r}_{a}=\mathbf{q}_{a}\) or \(\mathbf{r}_{a}=-\mathbf{p}_{a}\), \(\mathbf{\sigma}_{z}\) is a TLS Pauli operator), in order to implement a conditional displacement gate \(\mathbf{U}_{r_{a}}^{CD}=e^{i\frac{\alpha}{2}\mathbf{r}_{a}\mathbf{\sigma}_{z}}\) that rotates the TLS phase by \(\alpha\tilde{\mathbf{r}}_{a}^{S}\) (see Fig. 1a). The gate is named after its backaction on the oscillator, which is displaced by \(\pm\frac{\alpha}{2}\) along the \(\pi/2\)-rotated quadrature \(r_{a}^{\perp}\) conditioned on the TLS state. This evolution deterministically shifts the logical operator \(\tilde{\mathbf{r}}_{a}^{\perp L}\) by \(\alpha/2\)--accounted for in software--but otherwise leaves all modular operators unchanged. However, if a TLS bit-flip occurs during the evolution, the displacement takes a value uniformly sampled in \([-\frac{\alpha}{2},\frac{\alpha}{2}]\) depending on the unknown instant of the flip [12] (dashed wavefunction in Fig. 1a). This randomizes the value of \(\tilde{\mathbf{r}}_{a}^{\perp L}\) and the error propagates at the logical level with probability \(1/2\). These propagating errors, which become more frequent as the error-correction clock rate increases,
are a serious bottleneck towards fault-tolerant quantum computation with GKP qubits. Various strategies were proposed to mitigate this adverse effect [25; 26; 27; 28; 29], but unleashing the full potential of GKP qubits will require suppressing propagating errors at a level beyond the reach of state-of-the-art hardware [30; 31; 32; 33].
This roadblock is not present in the so-called Steane-type error-correction scheme [1, 34], where the target mode is probed via a quadrature interaction \(-\chi^{\prime}\mathbf{r}_{a}\mathbf{q}_{b}\) with an ancillary oscillator \(b\) to implement a quadrature gate \(\mathbf{U}_{r_{a}}^{\mathrm{quad}}=e^{i\frac{\gamma}{\beta}\mathbf{r}_{a} \cdot\mathbf{q}_{b}}\). The ancilla is itself prepared in a rectangular GKP state
\[\ket{\phi}=\sum_{n\in\mathbb{Z}}\ket{n\beta}_{q_{b}} \tag{2}\]
prior to the interaction. Since this state is employed as a displacement sensor [35] and does not encode logical information, we define only modular stabilizers \(\tilde{\mathbf{q}}_{b}=\mathbf{q}_{b}\) mod \(\beta\) and \(\tilde{\mathbf{p}}_{b}=\mathbf{p}_{b}\) mod \(2\pi/\beta\), of whom \(\ket{\phi}\) is the single joint eigenstate with eigenvalue \(0\)[1]. The quadrature gate displaces the ancilla along \(p_{b}\) conditioned on the value of \(\mathbf{r}_{a}\) while, reciprocally, the target oscillator is shifted along \(r_{a}^{\perp}\) conditioned on the value of \(\mathbf{q}_{b}\) (see Fig. 1b). We summarize the gate effect on modular operators as
\[\frac{\tilde{\mathbf{p}}_{b}}{2\pi/\beta}\rightarrow\frac{\tilde{\mathbf{p}}_ {b}}{2\pi/\beta}+\frac{\tilde{\mathbf{r}}_{a}^{S}}{\alpha/2}\hskip 28.452756pt \frac{\tilde{\mathbf{r}}_{a}^{\perp L}}{\alpha}\rightarrow\frac{\tilde{ \mathbf{r}}_{a}^{\perp L}}{\alpha}+\frac{\tilde{\mathbf{q}}_{b}}{\beta} \tag{3}\]
The crucial difference with TLS-based error-correction lies in the ancilla noise model, assumed to only generate short shifts of its state. A shift by \(\delta_{b}\) along \(\mathbf{q}_{b}\), occurring before or during the gate, propagates to the target oscillator as a shift shorter than \(\alpha\delta_{b}/\beta\) (see Fig. 1b), correctable if \(\delta_{b}\ll\beta\). However, if the ancilla is prepared through a series of TLS-based measurements of its stabilizers, bit-flips of the TLS may induce shifts along \(q_{b}\) covering the whole \([-\frac{\beta}{2},\frac{\beta}{2}]\) interval, propagating as shifts of the target oscillator covering \([-\frac{\alpha}{2},\frac{\alpha}{2}]\) irrespective of the value of \(\beta\). In GKP-surface code architectures, these _structureless_ shifts cancel the benefits of GKP qubits with respect to physical qubits. Therefore, a central question for the viability of Steane-type error-correction is: how can we ensure a supply of ancillary \(\ket{\phi}\) states whose errors do not propagate as long shifts of the target oscillator?
## II Asymmetric ancilla preparation
We consider the module depicted in Fig. 2a where the target oscillator interacts with an ancillary oscillator, itself coupled to a TLS. The target oscillator is corrected by repeated Steane-type correction _cycles_ denoted \(\mathcal{C}_{r_{a}}\), alternating \(r_{a}=q_{a}\) and \(r_{a}=p_{a}\) (Fig. 2d). Each cycle starts with the ancilla prepared in \(\ket{\phi}\) (Fig. 2c), possibly shifted due to preparation errors. A quadrature gate \(\mathbf{U}_{r_{a}}^{\mathrm{quad}}\) maps the value of \(\tilde{\mathbf{r}}_{a}^{S}\) onto the ancilla stabilizer \(\tilde{\mathbf{p}}_{b}\). The ancilla is then measured and re-prepared through a sequence of preparation _rounds_ labeled \(\mathcal{R}_{r_{b}}\) (for \(r_{b}=q_{b}\) or \(r_{b}=p_{b}\)). Each round is built around a conditional displacement gate \(\mathbf{U}_{r_{b}}^{CD}\) mapping the value of \(\tilde{\mathbf{r}}_{b}\) onto the phase of the TLS, prepared beforehand in an eigenstate of \(\mathbf{\sigma}_{x}\) and subsequently measured along \(\mathbf{\sigma}_{y}\) (Fig. 2b). Each TLS measurement controls a proportional feedback displacement by \(\pm\epsilon\) along \(r_{b}\). As detailed below, repeated \(\mathcal{R}_{r_{b}}\) rounds corral the ancilla state toward \(\tilde{r}_{b}=0\)[10]. We further store the measurement record outputted by the \(\mathcal{R}_{p_{b}}\) rounds as it encodes the value of \(\tilde{\mathbf{p}}_{b}\) following the quadrature gate, i.e. the target error-syndrome [12]. After straightforward decoding, a corrective feedback displacement is applied to the target oscillator, concluding the correction cycle.
Our proposal to suppress error-propagation is based on two observations. First, if the oscillators only interact via the \(\mathbf{q}_{b}\) quadrature operator of the ancilla (see Fig. 2), only shifts along this quadrature directly propagate to the target oscillator (second term in Eq. (3)). As a consequence, the ancilla may be asymmetrically prepared, with a focus on preparing a wrapped probability distribution \(Q_{b}\) sharply peaked near \(0\) for the stabilizer \(\tilde{\mathbf{q}}_{b}\), at the expense of a broader wrapped distribution \(P_{b}\) for \(\tilde{\mathbf{p}}_{b}\). Admittedly, an ancilla with a broad
distribution yields blurred error-syndromes (first term in Eq. (3)), but these errors are mitigated by cycle repetition. Second, during TLS-based preparation of the ancilla, TLS flips only trigger long shifts along \(q_{b}\) if they occur during \(\mathcal{R}_{p_{b}}\) rounds. Based on these two observations, we propose to prepare the ancilla with a large number \(N_{p}\) of \(\mathcal{R}_{p_{b}}\) rounds _followed_ by a large number \(N_{q}\) of \(\mathcal{R}_{q_{b}}\) rounds (see Fig. 2c), allowing the latter to correct long shifts induced by the former.
The detailed analysis of this preparation sequence is facilitated by the periodicity of the ancilla state along both quadratures, preserved by the applied controls and by our noise model. This model combines bit and phase flips of the TLS, with respective small probabilities \(p^{BF}\) and \(p^{PF}\) during each round, and quadrature noise of the oscillators at rate \(\kappa\) [12] (equivalent to photon loss and gain at equal rate \(\kappa\)), inducing uniform state diffusion in phase-space. The ancilla may then be modeled as a classical particle living on a torus, whose state is fully characterized by separable wrapped distributions \(Q_{b}\) and \(P_{b}\) [12]. In this picture, repeated \(\mathcal{R}_{r_{b}}\) rounds induce a classical random walk of the particle along \(\tilde{r}_{b}\), whose steps by \(\pm\epsilon\) are biased toward \(\tilde{r}_{b}=0\). In the limit of short steps, the corresponding \(R_{b}\) distribution evolves with a position-dependent drift velocity \(v(\tilde{r}_{b})=-\frac{\epsilon p^{NF}}{T_{\rm round}}\sin(2\pi\frac{\tilde{r}_{b}}{r_{0}})\) and a uniform diffusion constant \(D=\frac{\epsilon^{2}}{T_{\rm round}^{2}}+\kappa\), where \(T_{\rm round}\) is the round duration and \(p^{NF}=1-p^{BF}-2p^{PF}\) is close to 1 [12]. The steady-state of this dynamics approaches a wrapped normal distribution whose variance depends on \(\epsilon\) and reaches a minimum \(V_{min}=(\kappa T_{\rm round})^{1/2}r_{0}/(2\pi p^{NF})\) for \(\epsilon_{min}=(\kappa T_{\rm round})^{1/2}\), where \(r_{0}=\beta\) when \(\tilde{r}_{b}=\tilde{q}_{b}\) and \(r_{0}=2\pi/\beta\) when \(\tilde{r}_{b}=\tilde{p}_{b}\). However, the vanishing drift velocity in the vicinity of \(\tilde{r}_{b}=r_{0}/2\) and the small diffusion constant for \(\epsilon=\epsilon_{min}\) (\(\kappa T_{\rm round}<10^{-4}\) is considered in this work) result in a long convergence time and persisting tails of the \(R_{b}\) distribution at this optimal value. We mitigate this adverse effect by varying the feedback displacement length \(\epsilon_{j}\) as a function of the round index \(j\), starting with \(\epsilon_{j}\sim r_{0}/2\) to suppress the tails of \(R_{b}\) and ending with \(\epsilon_{j}\sim\epsilon_{min}\) to limit its central peak width. We exactly compute the evolution of the distributions throughout this preparation, compactly encoded in the form of \((2n_{F}+1)\)-Fourier coefficient vectors (\(n_{F}\sim 30-60\) throughout this work). In Fig. 3 (bottom panel), we represent the \(Q_{b}\) distribution obtained after a given number \(N_{q}\) of \(\mathcal{R}_{q_{b}}\) rounds. Its tails are exponentially suppressed as \(N_{q}\) increases while its central peak has a constant variance \(V_{min}\), ensuring robust suppression of error-propagation to the target mode.
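To make the Fourier-space bookkeeping concrete, here is a minimal forward-Euler sketch (our own, not the exact propagator of [12], which also includes the discrete feedback steps) for the continuous drift-diffusion limit described above, acting on the truncated Fourier coefficients of a wrapped distribution:

```python
import numpy as np

def evolve_wrapped(c, v0, D, r0, dt, steps):
    # dR/dt = -d/dr[v(r) R] + D d^2R/dr^2 with v(r) = -v0 sin(2*pi*r/r0),
    # for R(r) = sum_k c_k exp(2i*pi*k*r/r0), c indexed by k = -nF..nF.
    nF = (len(c) - 1) // 2
    k = np.arange(-nF, nF + 1)
    for _ in range(steps):
        c_m = np.roll(c, 1)    # coefficients c_{k-1}
        c_p = np.roll(c, -1)   # coefficients c_{k+1}
        c_m[0] = 0.0           # discard wrap-around at the truncation edges
        c_p[-1] = 0.0
        dc = (np.pi * v0 * k / r0) * (c_m - c_p) \
             - D * (2.0 * np.pi * k / r0) ** 2 * c
        c = c + dt * dc
    return c
```

The \(k=0\) coefficient is left invariant, as required by normalization, and the sinusoidal drift couples only neighbouring Fourier modes.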
Figure 2: **a)** In our proposed architecture, the target oscillator \(a\) couples to an ancillary oscillator \(b\) via a controlled quadrature interaction. The ancilla is prepared and measured via a TLS. **b)** An ancilla preparation round is composed of a conditional displacement gate flanked with \(\frac{\pi}{2}\) rotations of the TLS. The final TLS measurement along \(\mathbf{\sigma}_{z}\) controls a proportional feedback displacement by \(\mp\epsilon\), a conditional flip of the TLS to reset it in \(|g\rangle\), and is stored for further processing (double black lines represent classical communication channels). **c)** A \(\mathcal{C}_{q_{a}}\) correction cycle starts with the ancilla prepared in \(|\phi\rangle\). The quadrature gate maps the value of a target stabilizer (here \(\mathbf{\bar{q}}_{a}^{S}\)) to the ancilla stabilizer \(\mathbf{\bar{p}}_{b}\). The ancilla is then measured and prepared for the next cycle by a series of \(\mathcal{R}_{p_{b}}\) rounds _followed_ by a series of \(\mathcal{R}_{q_{b}}\) rounds, ensuring robust suppression of propagating errors. The measurement record from \(\mathcal{R}_{p_{b}}\) rounds is simply summed to estimate the value of \(\mathbf{\bar{p}}_{b}\) following the quadrature gate [12]. The result \(m\) controls a displacement by \(-f(m)\) on the target oscillator. **d)** Alternating \(\mathcal{C}_{q_{a}}\) and \(\mathcal{C}_{p_{a}}\) correction cycles protect the GKP qubit.
Figure 3: Wrapped probability distributions of a square ancilla state (\(\beta=(2\pi)^{1/2}\)) prepared from a uniform distribution by \(N_{p}+N_{q}\) preparation rounds (\(N_{p}=50\), varying \(N_{q}\) encoded in color), in presence of quadrature noise at rate \(\kappa=(10^{5}\ T_{\rm round})^{-1}\) and TLS flips with probabilities \(p^{BF}=2p^{PF}=0.002\) per round. The length of feedback displacements \(\epsilon_{j}\) is varied throughout preparation to suppress the tails of the distributions while maintaining a minimal variance for the central peak (see text; the black dashed line is a Gaussian with variance \(V_{min}\)), ensuring robust suppression of error propagation to the target mode. As \(N_{q}\) increases, bit-flips of the TLS entail more frequent shifts along \(p_{b}\), which elevate the tails of \(P_{b}\), and the distribution's central peak deflates under the action of quadrature noise.
As the \(R_{b}\) distribution is being sculpted by repeated \(\mathcal{R}_{r_{b}}\) rounds, long shifts triggered by bit-flips of the TLS uniformize the distribution along the conjugate quadrature, and quadrature noise deflates its central peak. In our asymmetric preparation scheme, the \(Q_{b}\) distribution is sculpted last and its final value is not impacted by these errors. On the other hand, they have a dramatic effect on \(P_{b}\), which becomes nearly uniform for large \(N_{q}\) (Fig. 3, top panel) as the probability \((1-p_{BF})^{N_{q}}\) that no bit-flip occurred during the \(\mathcal{R}_{q_{b}}\) rounds approaches 0. Thus, \(N_{q}\) cannot be arbitrarily large for the ancilla to remain a resource for Steane-type error-correction, even in the limit of weak intrinsic noise of the oscillators.
## III Target Mode Error-Correction
We now consider the evolution of the target oscillator over alternating \(\mathcal{C}_{q_{a}}\) and \(\mathcal{C}_{p_{a}}\) error-correction cycles. As for the ancilla during preparation, the target oscillator state remains periodic [12]. In order to estimate the decay rate of the \(z\)-component of the GKP qubit Bloch vector, \(\kappa_{log}\) (the \(x\)-component decays at the same rate and the \(y\)-component twice as fast in the square code), we consider the evolution of the wrapped distribution of the logical operator \(\tilde{q}_{a}^{L}\) only, denoted \(Q_{a}\). We compactly represent it as a \((2n_{F}+1)\)-Fourier coefficient vector and encode the system evolution over a pair of \(\mathcal{C}_{q_{a}}\) and \(\mathcal{C}_{p_{a}}\) cycles in a \((2n_{F}+1)\times(2n_{F}+1)\) evolution matrix, which accounts for realistic ancilla preparation and \(\tilde{\mathbf{p}}_{b}\) detection (see Fig. 2 and [12]). The only approximation made in this formalism is to model noise as effective quantum channels applied in-between perfect gates, with negligible impact on the estimate of error-correction performance [12]. Choosing, as an initial guess, a simple _sine_ function for the feedback law \(f\) controlling displacements applied to the target oscillator (see Fig. 2c), we observe that the \(Q_{a}\) distribution converges over a few cycles from an arbitrary initial state to a meta-stable state with two peaks centered at \(\tilde{q}_{a}^{L}=0\) and \(\tilde{q}_{a}^{L}=\alpha/2\) (shown in Supplemental materials), as expected for a state close to the GKP code manifold. A slow dynamic then comes into play, following which the respective amplitudes of the two peaks equilibrate as the GKP qubit relaxes to the fully mixed logical state.
For given numbers of preparation rounds \(N_{p}\) and \(N_{q}\) and noise values \(p^{BF},\,p^{PF},\,\kappa\), we efficiently extract \(\kappa_{log}\) by spectral analysis of the evolution matrix [12]. Moreover, we adjust the cycle feedback parameters (ancilla displacements \(\epsilon_{j}\) and Fourier coefficients \(f_{k}\) of a general feedback law \(f\) on the target) by gradient ascent in order to minimize \(\kappa_{log}\). Finally, we select the preparation round number yielding the smallest error rate, assuming a quadrature gate time \(T_{\text{quad}}=5T_{\text{round}}\) (a longer gate time does not significantly impact the performance as long as it does not dominate the overall cycle duration). In Fig. 4, we report the rate \(\kappa_{log}\) obtained after this optimization. Strikingly, \(\kappa_{log}\) decreases exponentially as the system noise strength, or equivalently the gate durations, decreases. Quantitatively, the GKP qubit coherence time surpasses that of the embedding hardware by a factor 40 for \(\kappa T_{\text{round}}=4\times 10^{-5}\) and \(p^{BF}=2p^{PF}=2\times 10^{-3}\). These particular values are within reach of state-of-the-art superconducting circuit experiments, assuming a preparation round duration of \(T_{\text{round}}=1\)\(\mu\)s [36; 37; 38].
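A schematic of the spectral extraction just mentioned: treating the evolution matrix \(E\) over a pair of cycles as a linear map on the Fourier vector of \(Q_{a}\), the fully mixed steady state corresponds to the leading eigenvalue \(\simeq 1\), and the slow equilibration of the two logical peaks to a subdominant eigenvalue \(\lambda_{1}\). Reading \(\kappa_{log}=-\ln|\lambda_{1}|/T_{\rm pair}\) off the second-largest modulus is our simplifying assumption for this sketch.

```python
import numpy as np

def logical_decay_rate(E, T_pair):
    # T_pair: duration of one C_{q_a} + C_{p_a} pair of correction cycles.
    mods = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]
    return -np.log(mods[1]) / T_pair
```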
## IV Conclusion and Outlook
In this letter, we proposed a simple architecture controlled with two elementary gates to robustly protect an encoded GKP qubit. The conditional displacement gate is now routinely employed in superconducting circuit experiments, while the quadrature gate may be decomposed as a sequence of previously demonstrated operations [34; 39; 20] or directly activated parametrically [40]. Numerical simulations assuming a simplified noise model indicate that the lifetime of the GKP qubit protected by our protocol is exponentially enhanced as the noise strength during each gate decreases. Extending this result to more realistic noise models and normalized GKP code states will be the subject of future work. While state-of-the-art superconducting circuits would only approach the regime of strong suppression of logical errors, a substantial margin for improvement exists if one considers a larger class of feedback controls (more refined decoding of classical signals), more extensive hardware (multiple ancillas and TLS's to multiplex error-syndrome detection), and more diverse quantum gates (allowing conditional displacements by multiples of the GKP lattice length). Given that Clifford operations in the GKP code rely on the same controls considered in this letter, our work opens a clear path toward fault-tolerant quantum computation with GKP qubits.
Figure 4: Decay rate of the \(z\)-component of the GKP qubit Bloch vector \(\kappa_{log}\) as a function of the oscillators quadrature noise rate \(\kappa\) and TLS flip probabilities \(p^{BF}=2p^{PF}\), linearly varied. For each noise value, the number of preparation rounds \(N_{q}\) and \(N_{p}\) are swept together (allowing different values did not significantly improve error-correction performances) and the cycle feedback parameters optimized by gradient ascent. We report the minimum error rate along with the corresponding round number, encoded in color.
###### Acknowledgements.
The authors warmly thank M. Mirrahimi and P. Rouchon for carefully reviewing the manuscript, and M. H. Devoret, A. Eickbusch and S. Touzard for discussions that motivated this work. This work was supported by the Agence Nationale de la Recherche (ANR, project SYNCAMIL), the Paris Ile-de-France Region in the framework of DIM SIRTEQ, the European Research Council (ERC, project DANCINGFOOL, grant agreement No. 101042304), the Plan France 2030 through the project ANR-22-PETQ-0006 and by the Army Research Office (ARO) under grant No. W911NF- 18-1-0212.
|
2302.04832 | Bridging the Sim2Real gap with CARE: Supervised Detection Adaptation
with Conditional Alignment and Reweighting | Sim2Real domain adaptation (DA) research focuses on the constrained setting
of adapting from a labeled synthetic source domain to an unlabeled or sparsely
labeled real target domain. However, for high-stakes applications (e.g.
autonomous driving), it is common to have a modest amount of human-labeled real
data in addition to plentiful auto-labeled source data (e.g. from a driving
simulator). We study this setting of supervised sim2real DA applied to 2D
object detection. We propose Domain Translation via Conditional Alignment and
Reweighting (CARE) a novel algorithm that systematically exploits target labels
to explicitly close the sim2real appearance and content gaps. We present an
analytical justification of our algorithm and demonstrate strong gains over
competing methods on standard benchmarks. | Viraj Prabhu, David Acuna, Andrew Liao, Rafid Mahmood, Marc T. Law, Judy Hoffman, Sanja Fidler, James Lucas | 2023-02-09T18:39:28Z | http://arxiv.org/abs/2302.04832v1 | # Bridging the Sim2Real gap with CARE:
###### Abstract
Sim2Real domain adaptation (DA) research focuses on the constrained setting of adapting from a labeled synthetic source domain to an unlabeled or sparsely labeled real target domain. However, for high-stakes applications (_e.g._ autonomous driving), it is common to have a modest amount of human-labeled real data in addition to plentiful auto-labeled source data (_e.g._ from a driving simulator). We study this setting of _supervised_ sim2real DA applied to 2D object detection. We propose Domain Translation via Conditional Alignment and Reweighting (CARE) a novel algorithm that systematically exploits target labels to explicitly close the sim2real appearance and content gaps. We present an analytical justification of our algorithm and demonstrate strong gains over competing methods on standard benchmarks.
## 1 Introduction
semi-supervised adaptation (_few target labels_, Donahue et al. (2013); Wang et al. (2019); Saito et al. (2019); Wang et al. (2020)). Although such methods could be extended to the supervised setting, _e.g._ by adding a supervised target loss to an off-the-shelf unsupervised DA method, we find this to be suboptimal in practice (see Table 1), since these straightforward extensions do not exploit large-scale target labels and their statistics for domain alignment. Similarly, few-shot and semi-supervised adaptation methods assume access to limited target labels (_e.g._ 8 labeled images per class for object detection, Wang et al. (2019)) that are insufficient for reliably estimating target statistics. Facing this research gap, industry practitioners may resort to naively combining labeled source and target data via mixing (Kishore et al., 2021) (_i.e._ training on combined source and target data) or sequential fine-tuning (Tremblay et al., 2018; Prakash et al., 2019, 2021) (_i.e._ training on source data followed by fine-tuning on target data). However, these simple heuristics do not address the domain gap between simulation and reality.
This paper addresses the research-practice gap to show that _systematically_ combining the two labeled data sets can significantly improve performance over competing methods (see Table 1). We propose a general framework called _Domain Translation via Conditional Alignment and Reweighting_ (CARE) for supervised Sim2Real DA. CARE builds on commonly-used baselines and off-the-shelf adaptation methods but explicitly leverages existing labels in the target domain to minimize both appearance gaps (pixel- and instance-level visual disparity) and content gaps (disparities in task label distributions and scene layout). Specifically, we overcome the appearance gap by explicitly using ground-truth labels to conditionally align intermediate instance representations. To overcome the content gap, we conditionally reweight the importance of samples using estimated spatial, size, and categorical distributions. We formalize our setting using the joint risk minimization framework, and provide theoretical insights for our design choices. Finally, we apply our framework to the challenging task of 2D object detection. We make the following contributions:
(1) We present a detailed study of supervised Sim2Real object detection adaptation and show that existing methods yield suboptimal performance by not adequately exploiting target labels. (2) We propose CARE, a general framework for supervised Sim2Real domain adaptation, and apply it to 2D object detection. On three standard Sim2Real benchmarks for detection adaptation, CARE strongly outperforms competing methods (_e.g._ boosting mAP@50 by as much as \(\sim\)25% on Synscapes\(\rightarrow\)Cityscapes). (3) We formalize our setting using the joint risk minimization framework and provide theoretical insights into our design choices.
## 2 Related work
To our knowledge, supervised domain adaptation (SDA) for object detection has not seen recent work in computer vision. Early DA works (Saenko et al., 2010; Kulis et al., 2011; Hoffman et al., 2013; Tsai et al., 2016) have studied the SDA setting applied to image classification, proposing contrastive-style approaches based on metric learning with cross-domain pairwise constraints. However, these works predate deep learning and do not study complex tasks like object detection. Below, we summarize lines of work in the related areas of unsupervised and few-shot adaptation.
**Unsupervised domain adaptation (UDA)**. The DA literature primarily focuses on _unsupervised_ adaptation from a labeled source setting to an unlabeled target domain (Saenko et al., 2010; Ganin and Lempitsky, 2015; Hoffman et al., 2018). Successful UDA approaches have employed different strategies ranging from domain adversarial learning (Long et al., 2015; Acuna et al., 2021) to domain discrepancy minimization (Long et al., 2018), image translation (Hoffman et al., 2018), and self-training (Prabhu et al., 2021; Li et al., 2022). Cross-domain object detection has also seen recent work, based on multi-level domain adversarial learning (Chen et al., 2018), strong-weak distribution alignment of local and global features (Saito et al., 2019), and domain adversarial learning weighted by region discriminativeness (Zhu et al., 2019). Alternatively, RoyChowdhury et al. (2019); Li et al. (2022) self-train with refined pseudolabels, and Kim et al. (2019) use background regularization.
Importantly, due to the absence of target labels, UDA methods resort to approximations based on marginal alignment or pseudolabels. In this paper, we instead consider _supervised_ Sim2Real adaptation where ground-truth labels are provided for the target dataset during training. To compare against our approach, we benchmark supervised extensions of existing UDA methods as baselines in our paper.
**Few-shot (FDA) and Semi-supervised Domain Adaptation (SSDA).** Closer to our setting are Few-shot DA (FDA, Wang et al. (2019); Gao et al. (2022); Zhong et al. (2022); Ramamonjison et al. (2021)) and Semi-supervised DA (SSDA, Donahue et al. (2013); Yao et al. (2015); Saito et al. (2019)), which differ in important ways. FDA assumes a very small amount of labeled target data is available (_e.g._ 8 images per class for detection in Wang et al. (2019)). Such methods employ source feature-regularized images with instance-level adversarial learning (Wang et al., 2019), point-wise distribution alignment (Zhong et al., 2022), and multi-level domain-aware data augmentation (Gao et al., 2022). SSDA also assumes limited target labels (_e.g._ 1 to 3 images per category for image classification (Saito et al., 2019)), but additionally leverages a large set of _unlabeled_ target data, making use of min-max entropy optimization (Saito et al., 2019) or student-teacher learning
frameworks (Li et al., 2022). Instead, we operate in a _supervised_ DA setting with access to a substantial amount of labeled target data in addition to a large (in theory, possibly infinite) amount of labeled simulated data. As a result, SDA uniquely permits _reliable_ estimates of target statistics. Our algorithm leverages these statistics and target labels to systematically close the Sim2Real domain gap.
## 3 Approach
In this section, we first introduce the supervised Sim2Real detection adaptation problem (Section 3.1). We characterize two primary aspects of the Sim2Real domain gap: an appearance gap and a content gap (Section 3.2). Finally, we introduce our method CARE, which leverages a labeled target dataset to close this domain gap (Section 3.3), and provide an analytical justification of the algorithm (Section 3.4).
### Problem Formulation
Let \(\mathcal{X}\) and \(\mathcal{Y}\) denote input and output spaces. In object detection, \(x\in\mathcal{X}\) are images (\(\mathcal{X}\subseteq\mathbb{R}^{H\times W\times 3}\)) and \(y:=(B,C)\in\mathcal{Y}\) are \(K\)-class labels with \(C\in\{1,..,K\}\) and bounding boxes \(B\subseteq\{(\mathsf{w},\mathsf{h},\mathsf{x},\mathsf{y})\in\mathbb{R}^{4}\}\) (comprising the width \(\mathsf{w}\), height \(\mathsf{h}\), and centre coordinates \((\mathsf{x},\mathsf{y})\), respectively). Let \(h(x):=h_{\theta}(g_{\phi}(x))\) be an object detector composed of a feature extractor \(g(x)\) and a classifier \(h(g(x))\) that are parameterized by \(\phi\) and \(\theta\). Matching prior object detection work (Khindkar et al., 2022; Wang et al., 2021), we design \(h(g(x))\) via Faster RCNN (Ren et al., 2015), which uses a region proposal network that receives features generated by a backbone network and passes them through an ROI align layer to obtain ROI features; these are then passed through a final box predictor. We let \(\hat{B},\hat{C}=\arg\max h(g(x))\) be bounding box coordinates and object class predicted by the model for input image \(x\). In sim2real SDA, we are given two labeled data sets representing a (simulated) source distribution \(P_{S}\) and a (real) target distribution \(P_{T}\). Our goal is to minimize the expected risk of a detection loss consisting of a classification loss \(\ell_{cls}\) and bounding box regression loss \(\ell_{box}\):
\[\ell_{det}(h(g(x)),B,C):=\ell_{box}(\hat{B},B)+\ell_{cls}(\hat{C},C) \tag{1}\]
over a target domain \(r_{T}:=\mathbb{E}_{x,B,C\sim P_{T}}[\ell_{det}(h(x),B,C)]\).
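As a concrete illustration, here is a minimal PyTorch sketch of Eq. (1); the specific choices of smooth-L1 for \(\ell_{box}\) and cross-entropy for \(\ell_{cls}\) follow common Faster R-CNN practice and are our assumptions, as the text does not pin them down.

```python
import torch
import torch.nn.functional as F

def detection_loss(box_pred, box_gt, cls_logits, cls_gt):
    """Sketch of Eq. (1): l_det = l_box(B_hat, B) + l_cls(C_hat, C)."""
    l_box = F.smooth_l1_loss(box_pred, box_gt)   # bounding-box regression
    l_cls = F.cross_entropy(cls_logits, cls_gt)  # K-way classification
    return l_box + l_cls

# Example: 8 proposals, 4 box coordinates (w, h, x, y), 8 classes.
box_pred, box_gt = torch.randn(8, 4), torch.randn(8, 4)
cls_logits = torch.randn(8, 8)
cls_gt = torch.randint(0, 8, (8,))
print(detection_loss(box_pred, box_gt, cls_logits, cls_gt))
```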
### Characterizing the Sim2Real Domain Gap
Leveraging the source distribution to improve performance on the target is challenging due to the _domain gap_, which exists in both the image and label distributions. We partition this gap into two categories, an appearance gap and a content gap (Kar et al., 2019), and characterize these in detail, using the Synscapes (Wrenninge and Unger, 2018)\(\rightarrow\) Cityscapes (Cordts et al., 2016) shift for object detection adaptation as an example.
The **appearance gap** consists of visual disparities between images from the two domains (see Fig 2, _left_). For example, a pixel-level appearance gap may be due to differences in lighting between real and simulated images (Chattopadhyay et al., 2022), while an instance-level gap may be due to differences in the appearance of synthesized versus real objects. We characterize the appearance gap as the dissimilarity \(D(\cdot,\cdot)\) in the probabilities between source and target distributions when conditioned on the label (e.g. \(D(P_{S}(x|B,C),P_{T}(x|B,C))\)).
The **content gap** can be decomposed into scene-level changes in the layout of objects (e.g., size and spatial distribution) as well as shifts in the task label distributions and the frequencies of classes (see Fig 2, _right_). We characterize the scene-level changes as the dissimilarity in the probabilities of object bounding boxes when conditioned on
Figure 2: The domain gap between a simulated source and real target domain consists of an appearance and content gap. The appearance gap corresponds to pixel-level differences (_e.g._ texture and lighting) and instance-level differences (_e.g._ vehicle design). The content gap consists of differences in label distributions due to different class frequencies and bounding box sizes and locations. **Right.** Column 1: Task label histograms. Column 2: Empirical distribution of βcarβ box _sizes_. Column 3: Empirical distribution of βcarβ box _locations_.
the class \(D(P_{S}(B|C),P_{T}(B|C))\) and the task-level class frequency gap as the dissimilarity in class probabilities \(D(P_{S}(C),P_{T}(C))\).
### Bridging the domain gap with CARE
To close the sim2real gap, _Conditional Alignment and Reweighting_ (CARE) minimizes the effect of both the appearance and the content gap via feature alignment and importance reweighting. Let \(w_{S}(C):=1/P_{S}(C),w_{T}(C):=1/P_{T}(C)\) be the inverse class frequencies for each domain and let \(v(B|C):=P_{T}(B|C)/P_{S}(B|C)\) be the inverse ratio of the scene-level bounding box frequency gap. These reweighting factors ensure that, during training, the learned detector effectively sees the source and target datasets as if they followed the same label distribution. In CARE, we minimize the following _domain translation_ loss:
\[\begin{split}\min_{\theta,\phi}\;&\mathbb{E}_{x,B,C\sim P_{S}}\bigg[w_{S}(C)v(B|C)\ell_{det}(h(g(x)),B,C)\bigg]\\ &+\mathbb{E}_{x^{\prime},B^{\prime},C^{\prime}\sim P_{T}}\Big[w_{T}(C^{\prime})\ell_{det}(h(g(x^{\prime})),B^{\prime},C^{\prime})\Big]\\ &+\lambda\,\mathbb{E}_{x^{\prime},B^{\prime},C^{\prime}\sim P_{T}\atop x,B,C\sim P_{S}}\Big[\ell_{align}(g(x),g(x^{\prime}))\,\Big|\,C=C^{\prime}\Big].\end{split} \tag{2}\]
where \(\ell_{align}\) is defined in Eq. (3), and \(\lambda\geq 0\) is a regularization parameter. The above loss minimizes three terms, where the first term is a reweighted detection loss over the source dataset and the second loss is a class-balanced detection loss over the target dataset. The third term aligns the encoded features \(g(x)\) and \(g(x^{\prime})\) of similar cross-domain instance embeddings belonging the same class. We now elaborate upon each term.
#### 3.3.1 Bridging appearance gap with cross-domain cycle consistency
To minimize the appearance gap, \(\ell_{align}\) performs a class-and-box conditional feature alignment strategy by optimizing a cross-domain cycle consistency objective. Specifically, we extract ROI features corresponding to the ground truth bounding box coordinates of both source and target images and match similar cross-domain instance features belonging to the same class. Fig. 4 visualizes the intuition.
For a given class, suppose we are given \(k\) ground truth bounding boxes from the source and target domains each. For each instance, our encoder extracts \(d\)-dimensional ROI features \(\mathbf{f}^{i}_{\omega}\in\mathbb{R}^{d}\), where \(i\in\{1,\dots,k\}\) and \(\omega\in\{S,T\}\) denote the \(i\)-th feature and the domain, respectively. We first measure the (negative of the) squared Euclidean distance between these same-class cross-domain features:
\[s_{i,j}:=-\|\mathbf{f}^{i}_{S}-\mathbf{f}^{j}_{T}\|_{2}^{2}.\]
For each target \(j\), we compute soft-matching features
\[\hat{\mathbf{f}}^{j}_{T}:=\sum_{j^{\prime}=1}^{k}\alpha_{j,j^{\prime}}\mathbf{ f}^{j^{\prime}}_{T},\;\text{where}\;\;\alpha_{j,j^{\prime}}:=\frac{e^{s_{j,j^{ \prime}}}}{\sum_{m=1}^{k}e^{s_{j,m}}}\]
Finally, we assemble a similarity score between each source instance \(i\) and target instance \(j\) as the negative squared Euclidean distance between the source and the soft-matching target feature vectors
\[\hat{s}_{i,j}:=-\|\mathbf{f}^{i}_{S}-\hat{\mathbf{f}}^{j}_{T}\|_{2}^{2}.\]
Let \(\hat{\mathbf{s}}^{j}:=[\hat{s}_{1,j},\dots,\hat{s}_{k,j}]\) be the vector of similarity scores for the \(j\)-th target. Our cycle matching alignment loss minimizes the cross entropy between features as follows:
\[\ell_{align}(\mathbf{f}_{S},\hat{\mathbf{f}}^{j}_{T}):=-\frac{1}{k}\sum_{i=1 }^{k}\mathbbm{1}_{i=j}\left(\log\big{(}\text{softmax}(\hat{\mathbf{s}}^{i})_{ j}\big{)}\right). \tag{3}\]
The above approach is a modification of a temporal cycle confusion objective proposed for robust object detection (Wang et al., 2021). However, we differ in three ways. First, we align cross-domain instance features between source and target domains, whereas the original approach aligns instance features across time given video data. Second, we leverage target labels to align ROI features corresponding to _ground truth_ rather than predicted bounding
Figure 4: Visualization of cross-domain cycle consistency matching with CARE on Sim10K\(\rightarrow\)Cityscapes. CARE embeds similar-looking cars closer to minimize the appearance gap.
Figure 3: Conditional Alignment and Reweighting (CARE) exploits target labels to estimate and bridge cross-domain appearance gaps (via a cycle consistency-based conditional feature alignment objective) and content gaps (via importance reweighting).
box coordinates. Finally, our alignment objective uses cycle _consistency_ rather than cycle confusion. Intuitively, we encourage _similar-looking_ instances to be close together (by taking the negative Euclidean distance), whereas the original aligns dissimilar instances. Our alignment loss reduces to the classification of the soft nearest neighbors and therefore tends to be robust to label noise (Dwibedi et al., 2019).
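Putting the equations above together, a minimal PyTorch sketch of the per-class alignment loss could look as follows; the tensor shapes, the equal count \(k\) of source and target boxes, and the use of `torch.cdist` are our assumptions.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_align(f_s, f_t):
    """Sketch of Sec. 3.3.1 for one class: f_s, f_t are (k, d) source/target
    ROI features pooled at ground-truth boxes of that class."""
    s = -torch.cdist(f_s, f_t) ** 2          # s_ij = -||f_s_i - f_t_j||^2
    alpha = F.softmax(s, dim=1)              # soft-matching weights per row
    f_hat_t = alpha @ f_t                    # soft nearest neighbors, (k, d)
    s_hat = -torch.cdist(f_s, f_hat_t) ** 2  # s_hat_ij
    # Cycle consistency: the i-th row should score highest at column i,
    # which is a cross-entropy with diagonal targets, as in Eq. (3).
    return F.cross_entropy(s_hat, torch.arange(f_s.size(0)))

print(cycle_consistency_align(torch.randn(5, 256), torch.randn(5, 256)))
```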
#### 3.3.2 Bridging content gap with importance reweighting
To close the task label distribution content gap, we apply inverse frequency reweighting to simulate a balanced label distribution in the source and target domains. For each domain \(\omega\in\{S,T\}\), we reweigh instances of class \(C\) via multiplicative class weights \(w_{\omega}(C)\propto 1/N_{\omega}(C)\), where \(N_{\omega}(C)\) is the number of training examples of class \(C\) in domain \(\omega\).
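A minimal sketch of these weights, with made-up class counts and an assumed mean-one normalization (only the relative weights matter):

```python
import numpy as np

def class_weights(counts):
    """w(C) proportional to 1 / N(C), normalized to mean 1 (a convention
    we assume here)."""
    w = 1.0 / np.asarray(counts, dtype=float)
    return w * len(w) / w.sum()

# Made-up per-class training counts for one domain, e.g. (car, person, bike).
print(class_weights([50000, 4000, 1200]))
```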
We approximate the class-conditional box ratios as follows
\[\frac{P_{T}(B|C)}{P_{S}(B|C)}\approx\frac{P_{T}(\mathsf{w},\mathsf{h}|C)}{P_{S }(\mathsf{w},\mathsf{h}|C)}\frac{P_{T}(\mathsf{x},\mathsf{y}|C)}{P_{S}( \mathsf{x},\mathsf{y}|C)}\eqqcolon v(B|C) \tag{4}\]
Intuitively, this ratio upweighs boxes of a class whose size and location are relatively more represented in the target than in the source. Note that the approximate equality \(\approx\) follows from assuming independence between \((\mathsf{w},\mathsf{h})\) and \((\mathsf{x},\mathsf{y})\), which simplifies computations. We estimate each probability component via class-conditional Gaussian kernel density estimation (KDE) (Scott, 2015) fitted to the ground truth bounding box sizes and locations, respectively. In Appendix A.2, we include details of this estimation, including appropriate smoothing and thresholding to handle regions with low target support.
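A sketch of this estimation using SciPy's Gaussian KDE is given below; the smoothing constant `eps` and the clipping range stand in for the appendix's smoothing and thresholding scheme and are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def box_ratio_weights(src_wh, src_xy, tgt_wh, tgt_xy,
                      eps=1e-6, clip=(0.1, 10.0)):
    """Sketch of Eq. (4) for one class: fit class-conditional KDEs over box
    sizes (w, h) and locations (x, y), then evaluate the target/source
    ratio. Inputs are (n, 2) arrays of normalized box statistics."""
    p_s_wh, p_t_wh = gaussian_kde(src_wh.T), gaussian_kde(tgt_wh.T)
    p_s_xy, p_t_xy = gaussian_kde(src_xy.T), gaussian_kde(tgt_xy.T)

    def v(wh, xy):
        ratio = ((p_t_wh(wh.T) + eps) / (p_s_wh(wh.T) + eps)
                 * (p_t_xy(xy.T) + eps) / (p_s_xy(xy.T) + eps))
        return np.clip(ratio, *clip)  # threshold low-support regions

    return v

rng = np.random.default_rng(0)
v = box_ratio_weights(rng.random((500, 2)), rng.random((500, 2)),
                      rng.random((200, 2)), rng.random((200, 2)))
print(v(rng.random((5, 2)), rng.random((5, 2))))
```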
### Analytical justification
We now analyze our loss function in Eq. (2) to develop a theoretical intuition for its effectiveness. Let us rewrite the first term in the loss as follows:
\[\mathbb{E}_{P_{S}}\bigg{[}w_{S}(C)v(B|C)\ell_{det}(h(g(x)),B,C) \bigg{]} \tag{5}\] \[= \mathbb{E}_{P_{T}}\bigg{[}\frac{P_{S}(x,B,C)}{P_{T}(x,B,C)}w_{S}( C)v(B|C)\ell_{det}(h(g(x)),B,C)\bigg{]}\] \[= \mathbb{E}_{P_{T}}\bigg{[}\frac{P_{S}(C)}{P_{T}(C)}\times\frac{P _{S}(B|C)}{P_{T}(B|C)}\times\frac{P_{S}(x|B,C)}{P_{T}(x|B,C)}\] \[\qquad\qquad\times w_{S}(C)v(B|C)\ell_{det}(h(g(x)),B,C)\bigg{]}.\]
Above, the second line follows from importance reweighting, and the third line follows from Bayes rule. Next, recall that \(w_{S}(C)=1/P_{S}(C)\) and \(v(B|C)\approx P_{T}(B|C)/P_{S}(B|C)\). Substituting these two, we obtain
\[\text{Eq.~(5)}=\mathbb{E}_{P_{T}}\bigg[w_{T}(C)\times\frac{P_{S}(x|B,C)}{P_{T}(x|B,C)}\times\ell_{det}(h(g(x)),B,C)\bigg].\]

That is, up to the conditional appearance ratio \(P_{S}(x|B,C)/P_{T}(x|B,C)\), the reweighted source risk coincides with a class-balanced target risk. The alignment term in Eq. (2) drives \(P_{S}(g(x)|B,C)\approx P_{T}(g(x)|B,C)\), so that in the learned feature space this ratio approaches one and all terms of Eq. (2) optimize a consistent, class-balanced objective on the target domain.
## 4 Experiments

We evaluate adaptation from three synthetic source datasets. Sim10K (Johnson-Roberson et al., 2017) contains 10,000 images rendered by the GTA V game engine, and Synscapes (Wrenninge and Unger, 2018) is a photorealistic dataset of 25,000 driving scenes of 1440\(\times\)720 resolution. Finally, DriveSim is a private synthetic dataset of 48,000 photorealistic driving scenes. Synscapes and DriveSim exhibit a long-tailed category distribution (see Fig. 2). For each source, we train an object detector to adapt to our target, Cityscapes (Cordts et al., 2016), a dataset of 2500 real driving images. For all evaluations, we fix the target dataset size to 25% to model the realistic scenario where real data is available but an order of magnitude scarcer than synthetic data (see appendix for details). For **Sim10K\(\rightarrow\)Cityscapes**, we focus on object detection for a single class (_i.e._ car) to better compare against prior Sim2Real domain adaptation methods (Khindkar et al., 2022). For **Synscapes\(\rightarrow\)Cityscapes** and **DriveSim\(\rightarrow\)Cityscapes**, we evaluate object detection for eight and three classes, respectively. To evaluate all models, we match prior work (Chen et al., 2018; Khindkar et al., 2022; Wang et al., 2021) and report per-category Average Precision (AP) and its mean across classes at an IoU threshold of 50% (mAP@50) over the target test set.
### Implementation details
We use a Faster-RCNN (Ren et al., 2015) architecture with a ResNet-50 (He et al., 2016) backbone. We run 10k iterations of SGD with a learning rate of 0.01, momentum of 0.9, weight decay of \(10^{-4}\), and learning rate warmup matching (Wang et al., 2021). We set \(\lambda=0.1\) in Eq. (2). We use 8 NVIDIA V100 GPUs with a per-GPU batch size of 4, and maintain a 1:1 within-batch source to target ratio across experiments.
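The stated recipe maps onto a few lines of PyTorch, sketched below; the warmup factor and length are assumptions, since the text only specifies warmup matching Wang et al. (2021), and the linear layer is a stand-in for the detector.

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the Faster R-CNN detector
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
# Linear learning-rate warmup over the first iterations; the start factor
# and length here are illustrative assumptions, not taken from the text.
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.001,
                                           total_iters=500)
```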
### Baselines
We compare against: (1) **Source only**: Supervised learning using only the labeled source dataset. (2) **Target only**: Supervised learning using only the labeled target dataset. (3) **Mixing**(Kishore et al., 2021): Supervised learning on the combined source and target data sets, while maintaining a 1:1 ratio within batches (we ablate this mixing ratio in the appendix). (4) **Sequential Finetuning**(Tremblay et al., 2018): Supervised learning on the source dataset followed by finetuning all layers of the model with the target dataset. (5) **Unsupervised DA (UDA) with ILLUME**(Khindkar et al., 2022): For completeness, we copy results on Sim10K\(\rightarrow\)Cityscapes of a state-of-the-art UDA method that uses labeled source and unlabeled target data.
We also propose and benchmark supervised extensions of two popular UDA strategies: (6) **S-MMD**: A class and box-conditional _supervised_ version of Maximum Mean Discrepancy (Long et al., 2015). S-MMD minimizes the MMD loss between cross-domain box features corresponding to the same class, using a linear kernel. (7) **S-DANN**: A class and box-conditional _supervised_ version of DANN (Ganin and Lempitsky, 2015). S-DANN minimizes the domain adversarial loss between cross-domain box features corresponding to the same class, similar to Chen et al. (2018). (8) **Few-shot DA (FDA) with TFA** (Wang et al., 2020): This is a two-stage finetuning algorithm proposed for few-shot object detection that updates all parameters on source (base) data followed by finetuning only the final layer (box regressor and classifier) on a balanced dataset of source and target data. However, we observe low performance with finetuning only the last layer (despite using a lower learning rate as recommended, both with and without weight re-initialization). Instead, we report results _without_ freezing weights in the second phase.
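For reference, with a linear kernel the squared MMD between two feature sets reduces to the squared distance between their means, so the core of S-MMD can be sketched as follows (a standard identity; batching per class and box conditioning are omitted).

```python
import torch

def linear_mmd2(f_s, f_t):
    """Squared MMD with a linear kernel between same-class box features;
    with k(u, v) = u.v this equals ||mean(f_s) - mean(f_t)||^2."""
    return (f_s.mean(dim=0) - f_t.mean(dim=0)).pow(2).sum()

print(linear_mmd2(torch.randn(6, 128), torch.randn(4, 128)))
```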
### Main Results
Table 2 summarizes our results. We find:
\(\triangleright\)**Simulated data and labeled real data are both needed.** We first confirm that supervised learning using only the target data outperforms both training on source data alone and unsupervised domain adaptation with unlabeled
Table 2: Results for supervised sim2real object detection adaptation on target. We compare CARE to source and target only training, a state-of-the-art unsupervised DA method (ILLUME (Khindkar et al., 2022)), naive sim+real combinations (mixing (Kishore et al., 2021) and sequential finetuning (Tremblay et al., 2018)), supervised extensions of popular UDA methods (DANN (Ganin and Lempitsky, 2015) and MMD (Long et al., 2015)), and a recently proposed few-shot detection strategy (Wang et al., 2020).
target data. Moreover, across all three shifts, even baselines that naively combine simulated and real data (_i.e._ mixing and sequential finetuning) outperform training using only the target data. This shows that additional simulated data is helpful. Further, sequential finetuning outperforms mixing on two of the three shifts. Finally, we find that mixing with additional conditional feature alignment (S-MMD, S-DANN) consistently outperforms naive mixing. Additional results are in Appendix A.1.
\(\triangleright\)**CARE outperforms all competing methods.** First, note that across each shift, CARE outperforms mixing **(+3.3**, **+9.5**, **+4.4** mAP@50) and sequential finetuning **(+1.7**, **+8.7**, **+8.3** mAP@50). This suggests that the Sim2Real domain gap is a barrier to effective mixing, and systematically mitigating it using target labels is beneficial. Most importantly, we outperform each benchmarked supervised extension of UDA on all shifts. This result underscores the research-practice gap by showing that UDA cannot be easily extended to the practical setting of labeled target data, thereby necessitating CARE in supervised domain adaptation.
### Ablation study
In Table 4, we ablate the various components of CARE.
\(\triangleright\)**Class-and-box conditional feature alignment is necessary (Rows 2-4 vs. 1).** Regardless of the specific feature alignment strategy (_i.e._ S-MMD, S-DANN, and our proposed cross-domain Cycle Consistency), additional feature alignment improves performance.
We also remark that during model design, we tested variations of Cycle Consistency-based alignment on Sim10K\(\rightarrow\)Cityscapes: i) conditioning on _predicted_ rather than ground truth class and box coordinates, and ii) conditioning on predicted box coordinates while ignoring class predictions. These two settings yielded 66.1 mAP@50 (**-1.1** versus Row 4) and 64.9 mAP@50 (**-2.3** versus Row 4, roughly on par with mixing), respectively. Finally, we also tested a dissimilarity variant of our approach (_i.e._ similar to Wang et al. (2021)) instead of consistency matching for feature alignment. This variant performs on par with Row 4 (67.3 mAP@50 on Sim10K\(\rightarrow\)Cityscapes), and we consequently opted to keep cycle consistency throughout.
\(\triangleright\) \(P(C)\) **reweighting is highly effective (Row 5 vs. 1).** Particularly on multi-class source settings (_e.g._ Synscapes and DriveSim), \(P(C)\) reweighting considerably boosts performance. Further, Table 5 (a) shows that class balancing naturally improves the baselines as well, since mAP weighs all classes equally.
\(\triangleright\) \(P(B|C)\) **reweighting is helpful (Row 7 vs. 1).** Finally, we show that additional class-conditional box reweighting consistently improves performance across all shifts.
| # | \(P(g(x)\mid B,C)\) alignment | \(P(C)\) rewt. | \(P(B\mid C)\) rewt. | Sim10k | Synscapes | DriveSim |
|:-:|:--|:-:|:-:|:-:|:-:|:-:|
| 1 | None (Mixing baseline) | | | 64.8 | 39.0 | 49.3 |
| 2 | S-MMD | | | 65.8 | 40.0 | 50.6 |
| 3 | S-DANN | | | 65.3 | 40.8 | 49.8 |
| 4 | Cycle Consistency | | | 67.2 | 41.8 | 50.8 |
| 5 | None (Mixing baseline) | ✓ | | 64.8 | 46.1 | 51.8 |
| 6 | Cycle Consistency | ✓ | | 67.2 | 46.6 | 52.5 |
| 7 | Cycle Consistency | ✓ | ✓ | **68.1** (+3.3) | **48.5** (+9.5) | **53.7** (+4.4) |

Table 4: Ablating our proposed method on all three shifts (mAP@50, \(\uparrow\)). Our full method is Row 7; improvements versus the mixing baseline are shown in parentheses.
Figure 5: Per-class performance comparison of CARE to baselines on Synscapes\(\rightarrow\)Cityscapes.
| **Method** | **w/o CB** | **w/ CB** |
|:--|:-:|:-:|
| Source | 19.2 | 20.0 |
| Target | 34.2 | 40.0 |
| Mixing | 39.0 | 46.1 |
| Seq. FT | 39.8 | 44.9 |

Table 5: Ablating our proposed conditional reweighting strategies on Synscapes\(\rightarrow\)Cityscapes. (a) mAP@50 without and with class balancing (CB).
Table 5 (b) presents results for different formulations of \(P(B|C)\). It validates our reweighting scheme, which decomposes box size via \(P(\mathsf{w},\mathsf{h}|C)\) and location via \(P(\mathsf{x},\mathsf{y}|C)\): capturing both components is better than using only one or neither.
### CARE: Fine-grained performance analysis

Using Synscapes\(\rightarrow\)Cityscapes, we analyze content-specific metrics to demonstrate that CARE consistently outperforms baselines across all settings, not just in aggregate.
\(\triangleright\)**CARE improves over baselines on all classes.** Fig. 5 studies per-class performance improvements with our proposed method against baselines. Our method outperforms each baseline for every class.
\(\triangleright\)**CARE improves per-class performance across box sizes.** Fig. 6 (_top_) visualizes bounding box frequency ratio weights \(v(\mathsf{w},\mathsf{h}|C)\) for the "car" class estimated via the first term of Eq. (4). Matching our intuition (see Fig. 2, _right_), these ratios upweigh target cars of sizes that are relatively less frequent in the source domain. Fig. 6 (_bottom_) illustrates the change in mAP as a result of our reweighting for three categories over boxes of different sizes. Here, reweighting consistently improves mAP and can yield up to \(+10\) mAP improvement for large objects such as buses. We remark that these trends also hold for the remaining categories.
\(\triangleright\)**Fine-grained error analysis.** We use the TIDE (Bolya et al., 2020) toolbox to evaluate specific error types of our mixing baseline and CARE models (lower is better). Fig. 7 shows that CARE reduces classification, localization, and duplicate errors, while slightly worsening joint classification+localization errors.
\(\triangleright\)**Visualizing matching with cycle consistency.** Fig. 4 provides a qualitative visualization of the matching behavior of our proposed cycle consistency approach, for two pairs of source and target images. For each example, we estimate the Euclidean distance in feature space between all cross-domain instance pairs in the aligned feature space of our CARE model and visualize the closest pair of car instances for each example. As expected, we find that our method embeds similar-looking cars closer in feature space.
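The pairing shown in Fig. 4 can be reproduced with a nearest-pair search in the aligned feature space; the sketch below is our reading of that visualization, with random placeholder features, not the authors' code.

```python
import torch

# Find the closest cross-domain instance pair (e.g. two "car" boxes) in the
# aligned ROI-feature space; the feature tensors here are random stand-ins.
f_s, f_t = torch.randn(12, 256), torch.randn(9, 256)
dist = torch.cdist(f_s, f_t)                    # (12, 9) Euclidean distances
i, j = divmod(int(dist.argmin()), dist.size(1))
print(f"closest pair: source instance {i} <-> target instance {j}")
```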
## 5 Discussion
We study supervised Sim2Real adaptation applied to object detection, and propose a strategy that exploits target labels to explicitly estimate and bridge the sim2real appearance and content gaps. Our method possesses a clear theoretical intuition and our empirical analyses validate our improvements in every setting that we tested, for example by boosting mAP@50 by as much as \(\sim\)25%. Most importantly, this paper tackles a large research-practice gap by bridging the literature on unsupervised and few-shot domain adaptation with an industry-standard practice of combining labeled data from both simulated and real domains. With this, we envision a renewed future methodological interest in SDA.
**Limitations.** Our method requires sufficient labeled data in source and target domains to reliably estimate dataset-level statistics. Further, our formulation assumes conditional independence of box sizes and locations as well as an equivalence between pixel-level and feature-level distributions. We also rely on successful cross-domain alignment. These assumptions may be violated to varying degrees in practice.
Figure 6: Visualizing \(P(\mathsf{w},\mathsf{h}|C)\) reweighting on Synscapes\(\rightarrow\)Cityscapes. **(top)** Visualizing \(v(\mathsf{w},\mathsf{h}|C=\text{car})\). **(bottom)** Visualizing change in mAP after \(P(\mathsf{w},\mathsf{h}|C)\) reweighting for three categories (car, bus, bike).
Figure 7: Visualizing change in dAP (lower is better) (Bolya et al., 2020) for errors of different types using CARE, over a mixing baseline.
We focus on object detection; the applicability of our method to other tasks, while plausible, is not established. Finally, we do not consider an unlabeled portion of the target domain and leave that exploration to future work.
|
2304.09040 | Edge-selective extremal damping from topological heritage of dissipative
Chern insulators | One of the most important practical hallmarks of topological matter is the
presence of topologically protected, exponentially localised edge states at
interfaces of regions characterised by unequal topological invariants. Here, we
show that even when driven far from their equilibrium ground state, Chern
insulators can inherit topological edge features from their parent Hamiltonian.
In particular, we show that the asymptotic long-time approach of the
non-equilibrium steady state, governed by a Lindblad Master equation, can
exhibit edge-selective extremal damping. This phenomenon derives from edge
states of non-Hermitian extensions of the parent Chern insulator Hamiltonian.
The combination of (non-Hermitian) topology and dissipation hence allows one to
design topologically robust, spatially localised damping patterns. | Suraj S. Hegde, Toni Ehmcke, Tobias Meng | 2023-04-18T14:58:03Z | http://arxiv.org/abs/2304.09040v3 | # Edge-selective extremal damping from topological heritage of dissipative Chern insulators
###### Abstract
One of the most important practical hallmarks of topological matter is the presence of topologically protected, exponentially localized edge states at interfaces of regions characterized by unequal topological invariants. Here, we show that even when driven far from their equilibrium ground state, Chern insulators can inherit topological edge features from their parent Hamiltonian. In particular, we show that the asymptotic long-time approach of the non-equilibrium steady state, governed by a Lindblad Master equation, can exhibit edge-selective extremal damping. This phenomenon derives from edge states of non-Hermitian extensions of the parent Chern insulator Hamiltonian. The combination of (non-Hermitian) topology and dissipation hence allows one to design topologically robust, spatially localized damping patterns.
_Introduction._ The notion of topological phases of matter has recently been extended to systems with dissipation and driving described by quantum Liouvillian evolution [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], and to non-Hermitian models. Liouvillian dynamics has for example been utilized for the purely dissipative preparation of topological non-equilibrium steady states (NESSs) [3; 4; 5; 6; 7; 8; 18]. Non-Hermitian Hamiltonians even possess unique topological features such as spectral topology, modified bulk-boundary correspondence, skin effect, and have been realised in multiple experimental settings [19; 20; 21; 22; 23]. Uniting these two branches of research, recent studies have begun to analyze how non-Hermitian topology can emerge in Liouvillian dynamics [9; 12; 15; 16; 24; 25; 26]. There, topology appears in two important ways [2; 11; 25; 12]: one is in the dynamics of approaching the NESS, the other is in the characteristics of the NESS itself. Using a dissipative Chern insulator as a prototype, we now unveil how the combination of dissipation with the topology of the parent Hamiltonian and its non-Hermitian extensions can be used to create topologically protected, spatially localized features in the approach towards the NESS. In particular, we discuss how exponentially edge-localized extremal damping can be created. This enables topologically robust local control of damping in dissipative electronic systems.
_The model._ Our starting point is a paradigmatic two-band model for a Chern insulator on a two-dimensional square lattice [27]. With periodic boundary conditions (PBC) along \(x\) and \(y\), the model Hamiltonian reads
\[H=\sum_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}\,\left(\mathbf{d}\left(\mathbf{k}\right)\cdot \mathbf{\sigma}-\mu\,\mathds{1}\right)\,\Psi_{\mathbf{k}}, \tag{1}\]
where \(d_{x}\left(\mathbf{k}\right)=m-\alpha\,\cos\left(k_{x}\right)-\alpha\,\cos\left(k_ {y}\right)\), \(d_{y}\left(\mathbf{k}\right)=\beta\,\sin\left(k_{x}\right)\), and \(d_{z}\left(\mathbf{k}\right)=\beta\,\sin\left(k_{y}\right)\), while \(\Psi_{\mathbf{k}}^{T}=\left(c_{\mathbf{k}\uparrow},c_{\mathbf{k}\downarrow}\right)\) is the spinor of annihilation operators for electrons with momentum \(\mathbf{k}\) and spin \(s=\uparrow,\downarrow\). The vector of Pauli matrices is \(\mathbf{\sigma}\), the chemical potential is \(\mu\), \(m\) denotes an effective mass, and \(\alpha,\beta\) quantify hopping amplitudes in the underlying tight-binding model. We furthermore use units such that \(e=\hbar=a=1\), where \(e>0\) is the elementary charge, and \(a\) the lattice spacing. Without coupling to an environment, \(H\) features topological transitions at \(m=-2\), \(m=0\), and \(m=2\) with topological phases characterized by Chern numbers \(C=-\text{sgn}(m)\) for \(|m|<2\)[27].
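As a quick numerical sanity check of these statements (our own sketch, not the authors' code), the Chern number of such a two-band model can be computed from the winding of the unit vector \(\hat{\mathbf{d}}=\mathbf{d}/|\mathbf{d}|\), \(C=\frac{1}{4\pi}\int d^{2}k\;\hat{\mathbf{d}}\cdot(\partial_{k_{x}}\hat{\mathbf{d}}\times\partial_{k_{y}}\hat{\mathbf{d}})\). The crude finite-difference version below returns values of magnitude one in the topological phases and zero otherwise; the overall sign depends on the orientation convention.

```python
import numpy as np

def d_hat(kx, ky, m, alpha=1.0, beta=1.0):
    d = np.array([m - alpha * np.cos(kx) - alpha * np.cos(ky),
                  beta * np.sin(kx),
                  beta * np.sin(ky)])
    return d / np.linalg.norm(d)

def chern_number(m, n=100):
    """Discretized winding integral over the Brillouin zone using forward
    differences. Only meaningful for gapped m (away from m = -2, 0, 2)."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    dk = 2 * np.pi / n
    total = 0.0
    for kx in ks:
        for ky in ks:
            d0 = d_hat(kx, ky, m)
            ddx = (d_hat(kx + dk, ky, m) - d0) / dk
            ddy = (d_hat(kx, ky + dk, m) - d0) / dk
            total += d0 @ np.cross(ddx, ddy) * dk * dk
    return total / (4 * np.pi)

# Topological vs trivial: magnitude 1 vs 0 (sign depends on convention).
print(round(chern_number(1.0)), round(chern_number(3.0)))
```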
As sketched in Fig. 1, we weakly couple the Chern insulator to a Markovian bath, such that its dynamics is described by a Lindblad Master equation for the density matrix \(\rho\)[28],
\[\dot{\rho}=-i\left[H,\rho\right]+\sum_{j}\,\left(L_{j}\,\rho\,L_{j}^{\dagger }-\frac{1}{2}\,\left\{L_{j}^{\dagger}L_{j},\rho\right\}\right). \tag{2}\]
Figure 1: A Chern insulator coupled to a bath inducing loss or gain on every site. The topology of the Chern insulator can stabilize edge-selective extremal damping, i.e. maximal damping on one edge and minimal damping on the opposite edge, as indicated by the size of the lattice site symbols.

The jump operators \(L_{j}\) encode loss and gain for \(j\)-electrons, where \(j\) comprises the spin \(s\) as well as the lattice site or momentum. We focus on a system preserving translation invariance by using the same jump operators at each site. With PBC, we can transform to momentum space, where the jump operators read
\[L_{\mathbf{k}s}=\begin{cases}\sqrt{2\,\Gamma_{s}}\,c_{\mathbf{k}s}&\text{ with }\Gamma_{s}>0\text{ for loss},\\ \sqrt{-2\,\Gamma_{s}}\,c_{\mathbf{k}s}^{\dagger}&\text{with }\Gamma_{s}<0\text{ for gain}.\end{cases} \tag{3}\]
In the remainder, it is convenient to define the net loss or gain as \(\Gamma_{0}=(\Gamma_{\uparrow}+\Gamma_{\downarrow})/2\), and the relative loss or gain as \(\Gamma_{z}=(\Gamma_{\uparrow}-\Gamma_{\downarrow})/2\).
_The non-equilibrium steady state and its relation to band topology._ In the closed limit \(L_{j}=0\) for all \(j\), the dynamics is described by Schrödinger's equation. Since every product state of single-particle eigenstates is then a steady state, a closed Chern insulator has infinitely many steady states. If in contrast any finite amount of gain or loss is present, we find that the non-equilibrium steady state (NESS) is unique, as is often the case in dissipative systems [28].
For \(|\Gamma_{0}|>|\Gamma_{z}|\), both spin species experience loss or gain. Since the empty and fully occupied states are also many-particle eigenstates of \(H\), the unique NESS is either an entirely empty or a fully occupied pure state. If instead \(|\Gamma_{z}|>|\Gamma_{0}|\), there is one lossy and one gainy spin species. We then find that the NESS is not a simple dark state, but rather a mixed state determined by the competition of Hamiltonian dynamics and quantum jumps. Only for asymptotically large loss or gain does the NESS approach a pure state determined by the jump operators. For \(\Gamma_{z}\gg|\Gamma_{0}|\), this state for example corresponds to a fully spin-\(\downarrow\)-polarized half-filled system. The non-trivial mixed state arising for smaller \(|\Gamma_{z}|\) is most conveniently discussed for PBC, when the NESS density matrix is a tensor product of momentum-resolved density matrices \(\rho_{\mathbf{k}}^{\text{NESS}}\). We write the latter as linear combinations of \(\ket{ab}\bra{cd}\), where \(a,b,c,d\in\{0,1\}\) label occupancies of electronic states \(|n_{\mathbf{k}\uparrow}n_{\mathbf{k}\downarrow}\rangle\) with spin \(\uparrow,\downarrow\) at the given momentum \(\mathbf{k}\). The Liouvillian can be expressed as a matrix acting on the vector combining the 16 independent real prefactors of \(\ket{ab}\bra{cd}\). Importantly, the kernel of this Liouvillian matrix can be determined analytically [29], which allows us to deduce closed-form expressions for all steady-state properties of our dissipative Chern insulator with PBC. Fig. 2 for example illustrates the density matrix for a generic momentum, and shows that the NESS is generically strongly mixed.
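To make the structure of \(\rho_{\mathbf{k}}^{\text{NESS}}\) concrete, the sketch below (our own illustration, not the authors' derivation) builds the momentum-resolved Lindbladian as a \(16\times 16\) matrix acting on vectorized two-mode density matrices and extracts its kernel numerically; the Fock-basis ordering and Jordan-Wigner signs are our own conventions.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Two fermionic modes at fixed k, Fock basis |n_up, n_dn> with
# Jordan-Wigner signs; sm = |0><1| annihilates a single mode.
sm = np.array([[0, 1], [0, 0]], dtype=complex)
c_up = np.kron(sm, np.eye(2))
c_dn = np.kron(sz, sm)

def lindblad_superop(H, Ls):
    """Column-stacking vectorization: vec(A rho B) = kron(B.T, A) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    S = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for L in Ls:
        LdL = L.conj().T @ L
        S += np.kron(L.conj(), L) - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I))
    return S

def ness_at_k(d_vec, mu, gam_up, gam_dn):
    h = d_vec[0] * sx + d_vec[1] * sy + d_vec[2] * sz - mu * np.eye(2)
    cs = [c_up, c_dn]
    H = sum(h[i, j] * cs[i].conj().T @ cs[j]
            for i in range(2) for j in range(2))
    Ls = [np.sqrt(2 * g) * c if g > 0 else np.sqrt(-2 * g) * c.conj().T
          for c, g in ((c_up, gam_up), (c_dn, gam_dn))]
    vals, vecs = np.linalg.eig(lindblad_superop(H, Ls))
    rho = vecs[:, np.argmin(np.abs(vals))].reshape(4, 4, order="F")
    rho = (rho + rho.conj().T) / 2          # enforce Hermiticity numerically
    return rho / np.trace(rho)

rho = ness_at_k(np.array([0.5, 0.3, -0.2]), mu=0.0, gam_up=1.2, gam_dn=-0.8)
print(np.real(np.trace(rho @ rho)))         # purity of the NESS at this k
```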
Given that a fully empty system, a fully occupied system, and (slightly less obviously) a half-filled, spin-polarized system all have zero net current, a finite NESS current density \(\langle\mathbf{j}\rangle_{\text{NESS}}\) provides a convenient experimental proxy of the NESS's mixed state character resulting from the competition between Hamiltonian dynamics and quantum jumps (\(\langle\cdot\rangle_{\text{NESS}}\) denotes steady state expectation values). We define the current from the system-internal [1] electron flow \(\mathbf{j}_{\mathbf{k}}=\Psi_{\mathbf{k}}^{\dagger}\)\([\nabla_{\mathbf{k}}\left(\mathbf{d}(\mathbf{k})\cdot\mathbf{\sigma}\right)]\,\Psi_{\mathbf{k}}\) as \(\langle\mathbf{j}\rangle_{\text{NESS}}=-\int d^{2}k\,\langle\mathbf{j}_{\mathbf{k}} \rangle_{\text{NESS}}/(2\pi)^{2}\). As shown in Fig. 3, the largest steady state currents appear when \(|\Gamma_{0}|\lesssim|\Gamma_{z}|\), i.e. when the system has just transited from having only loss (or gain) to a regime in which one mode still exhibits loss (or gain), while the other mode has entered its gainy (lossy) regime. This regime maximizes \(\langle\mathbf{j}\rangle_{\text{NESS}}\) because it is associated with a strongly mixed density matrix.
The topological character of a (closed) Chern insulator is formally defined by the Chern number of its bands, and their occupation in the ground state. This notion of topology clearly cannot carry over unchanged to the dissipative limit. Considering a system in which the coupling to the environment is turned on at a time \(t_{0}\), some of the usual signatures of topology may survive for a limited amount of time. Technically, this can be phrased in the language of a non-Hermitian Hamiltonian \(H_{\text{nH}}^{\text{initial}}=H_{0}-i\sum_{j}L_{j}^{\dagger}L_{j}/2\) that describes the time evolution without quantum jumps at short initial times [30; 31]. It has for example been shown that the spectrum of \(H_{\text{nH}}^{\text{initial}}\) can still exhibit edge states [32; 33; 34]. The Hall conductance, on the contrary, immediately loses its quantization [35; 36; 37; 38]. As time grows, one could suspect all remaining topological signatures of the original Hamiltonian to be washed out because the NESS is a complicated mixed state that depends strongly on the jump operators. We for example find that the NESS current does not feature obvious signatures of topological phase transitions [29].
Figure 3: The steady state current resulting from the mixed state character. Panel (a): \(\langle j_{y}\rangle_{\text{NESS}}\) as a function of \(\Gamma_{0}\) and \(\Gamma_{z}\) in units of its maximal amplitude \(j_{y}^{\text{max}}\). Panel (b): cuts along fixed \(\Gamma_{0}\) as indicated in panel (a). We use \(m=3\), \(\alpha=\beta=1\), and \(\mu=0\), as well as PBC along \(x\) and \(y\).
Nevertheless, some signatures of the original topology survive at all times. Consider for example topological transitions of the original Hamiltonian. Those are associated with bulk gap closings at specific momenta \(\mathbf{k}_{i}\), for which the Bloch Hamiltonian is a trivial unit matrix. The Hamiltonian therefore plays no part in determining the NESS at these momenta. The density matrix \(\rho_{\mathbf{k}_{i}}^{\rm NESS}\) is then exclusively determined by the jump operators, and the NESS becomes the specific dark state chosen by the jump operators. In the particular model considered here, we even find that the momentum-resolved purity is a perfect marker for the band topology of the closed Chern insulator: a NESS with unit purity in general arises when a unique pure dark state (annihilated by all jump operators) happens to be an eigenstate of the Hamiltonian. For \(|\Gamma_{z}|>|\Gamma_{0}|\), that state is a half-filled, spin-polarized state, which is an eigenstate of \(H\) provided \(d_{x}(\mathbf{k})=d_{y}(\mathbf{k})=0\) for some \(\mathbf{k}\). This can be satisfied for all \(|m|\leq 2\,\alpha\). As shown in Fig. 4, topological phases of the original Chern Hamiltonian are thus heralded by the existence and location of unit purity points in the momentum-resolved steady-state density matrix. In general, however, the purity will only peak at topological transitions. Similarly, the jump operators may not have a pure dark state, in which case the purity will only reach a correspondingly reduced value at the momenta associated with gap closings.
_Extremal edge-selective damping._ Having characterized the NESS, we now turn to the main focus of the present study: signatures of the topology of \(H\) in the dynamics of approaching the NESS. As our central result, we find that dissipative Chern insulators can feature edge-selective extremal damping as a consequence of topological edge states associated with dynamically-emergent non-Hermitian topology.
To simplify our calculations for finite-size systems, we now switch to the method of "third quantisation" [39; 40]. First, every fermionic operator is converted to a pair of Majorana operators via \(c_{j}=(w_{2j-1}-i\,w_{2j})/2\), where \(j\) indicates both the lattice site/momentum and the spin. Dropping constant energy offsets, this allows us to express the Hamiltonian and jump operators as \(H=\sum_{p,q}w_{p}\,\mathcal{H}_{pq}\,w_{q}\) and \(L_{j}=\sum_{p}l_{j,p}\,w_{p}\), respectively. Second, adjoint creation (annihilation) operators \(\hat{c}_{p}^{\dagger}\) (\(\hat{c}_{p}\)) describing the effect of \(w_{p}\) in the Hilbert space of operators are introduced. The Liouvillian can then be cast into the form
\[\hat{\mathcal{L}}=\frac{1}{2}\,\sum_{ij}\left(\hat{\mathbf{c}}^{\dagger}\;\;\hat{ \mathbf{c}}\;\right)\begin{pmatrix}-X^{\dagger}&iY\\ 0&X\end{pmatrix}\begin{pmatrix}\hat{\mathbf{c}}^{\dagger}\\ \hat{\mathbf{c}}\end{pmatrix}-\frac{1}{2}\operatorname{Tr}(X). \tag{4}\]
Here, \(\hat{\mathbf{c}}^{\dagger}\) (\(\hat{\mathbf{c}}\;\)) is the vector of adjoint creation (annihilation) operators, while \(X=-4i\,\mathcal{H}+\mathcal{M}+\mathcal{M}^{T}\) and \(Y=-2i\,(\mathcal{M}-\mathcal{M}^{T})\). The components \(\mathcal{H}_{pq}\) of \(\mathcal{H}\) are defined by the Hamiltonian in Majorana form, while \(\mathcal{M}\) follows from the jump operators as \(\mathcal{M}_{pq}=\sum_{j}l_{j,p}l_{j,q}^{\star}\).
The so-called damping matrix \(X\) plays a key role for the long-time dynamics approaching the NESS, and allows one to define an effective non-Hermitian damping Hamiltonian \(H_{\rm nH}^{\rm damping}=i\,X\neq H_{\rm nH}^{\rm initial}\) [11]. For PBC, the non-Hermitian damping Hamiltonian can be block-diagonalized via a unitary \(U\) yielding [29] \(U^{\dagger}\,H_{\rm nH}^{\rm damping}(\mathbf{k})\,U={\rm diag}(H_{\rm nH}^{(+)}(\mathbf{k}),H_{\rm nH}^{(-)}(\mathbf{k}))\) with
\[H_{\rm nH}^{(\pm)}(\mathbf{k})= \mathbf{d}\left(\mathbf{k}\right)\cdot\mathbf{\sigma}\pm i\,\xi\,\Gamma_{<} \,\sigma_{z}+(\mp\mu+i\,|\Gamma_{>}|)\,\mathds{1}, \tag{5}\]
where \(\Gamma_{<}=\Theta(|\Gamma_{0}|-|\Gamma_{z}|)\,\Gamma_{z}+\Theta(|\Gamma_{z}|-| \Gamma_{0}|)\,\Gamma_{0},\,\Gamma_{>}=\Theta(|\Gamma_{0}|-|\Gamma_{z}|)\, \Gamma_{0}+\Theta(|\Gamma_{z}|-|\Gamma_{0}|)\,\Gamma_{z}\), and \(\xi={\rm sgn}\,(\Gamma_{>})\).
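The extremal edge states discussed below can be checked directly by putting Eq. (5) on a finite chain. The sketch below is our own reading of Eq. (5), not the authors' code: the \(\cos(k_{x})\) and \(\sin(k_{x})\) terms become nearest-neighbour hoppings along \(x\), and since \(H_{\rm nH}^{\rm damping}=i\,X\), eigenstates with extremal imaginary parts of \(H_{\rm nH}^{(+)}\) correspond to eigenvalues of \(X\) with extremal real parts.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_nh_plus(ky, n_x=15, m=1.0, alpha=1.0, beta=1.0, mu=0.0,
              gam0=0.08, gamz=1.2):
    """Real-space H_nH^(+)(ky) of Eq. (5) with OBC along x (a sketch)."""
    if abs(gam0) > abs(gamz):
        g_less, g_more = gamz, gam0
    else:
        g_less, g_more = gam0, gamz
    xi = np.sign(g_more)
    onsite = ((m - alpha * np.cos(ky)) * sx + beta * np.sin(ky) * sz
              + 1j * xi * g_less * sz + (-mu + 1j * abs(g_more)) * np.eye(2))
    hop = -(alpha / 2) * sx - 1j * (beta / 2) * sy   # |j+1><j| block
    up = np.diag(np.ones(n_x - 1), k=1)
    return (np.kron(np.eye(n_x), onsite)
            + np.kron(up, hop) + np.kron(up.T, hop.conj().T))

evals, evecs = np.linalg.eig(h_nh_plus(ky=1.0))
i_ext = np.argmax(evals.imag)                # extremal damping eigenvalue
site_weight = (np.abs(evecs[:, i_ext]) ** 2).reshape(-1, 2).sum(axis=1)
print(site_weight)                           # exponentially edge-peaked
```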
In Hermitian topological systems, an important experimental consequence of topological bulk phases are gapless edge states. In non-Hermitian systems, this celebrated bulk-boundary correspondence breaks down in its usual form. Nevertheless, topological edge states can also exist in non-Hermitian topological systems. For \(H_{\rm nH}^{(\pm)}\), the results of Ref. [33] for example imply that topological edge states exist in a large window of parameters, both in bulk gapped phases directly associated with topologically non-trivial regimes of the closed Chern insulator, as well as in inherently non-Hermitian phases with exceptional points [33; 29].
Figure 5: Eigenvalues of the damping matrix \(\mathrm{X}\) for 15 sites and OBC along \(x\), combining 50 equidistant values of \(k_{y}\). The dot color indicates the localization of the corresponding eigenstates: bulk states are black, states at the left (right) edge red (blue). We set \(m=0\) in panel (a), \(m=1\) in panel (b), and use \(\Gamma_{0}=0.08\), \(\Gamma_{z}=1.2\), \(\alpha=\beta=1\), and \(\mu=0\).
For open boundary conditions (OBC) along \(x\) and PBC along \(y\), the damping matrix \(X\) features topological edge states for \(|m|<2\,|\alpha|\), i.e. precisely in the topological regime of the parent Hamiltonian \(H\). Since \(H^{(\pm)}_{\rm{nH}}=i\,X\) is a non-Hermitian extension of the original Hamiltonian, these topological edge states can be identified as being inherited from the non-trivial topology of \(H\) (their wavefunctions are independent of \(\Gamma_{0}\) and \(\Gamma_{z}\)) [33]. The edge states are spin-polarized \(\downarrow(\uparrow)\) on the left (right) edge, and have eigenvalues \(i\,\xi_{1}\,\sin(k_{y})+\xi_{2}\,\Gamma_{<}+|\Gamma_{>}|\) with \(\xi_{1,2}=\pm 1\). Crucially, this means that the edge states have the largest and smallest real parts of all eigenvalues. This is shown in Fig. 5 both for \(m=0\), a phase in which \(X\) has bulk exceptional points when subjected to PBC, and \(m=1\), a phase with a bulk line gap under PBC [29]. We hence dub these states extremal edge states.
The most important experimental implication of these extremal edge states is extremal edge-selective damping. Formally, this follows from the time-evolution of the covariance matrix \(\mathcal{C}(t)\) with components \(\mathcal{C}_{pq}=\delta_{pq}-\mathrm{tr}(\rho(t)\,w_{p}w_{q})\). The covariance matrix approaches its steady state value \(\mathcal{C}_{\rm{NESS}}\) according to \(d\Delta\mathcal{C}(t)/dt=-\Delta\mathcal{C}(t)\,X-X^{\dagger}\,\Delta\mathcal{ C}(t)\), where \(\Delta\mathcal{C}(t)=\mathcal{C}(t)-\mathcal{C}_{\rm{NESS}}\). This means that \(\Delta\mathcal{C}(t)\) can be expressed in terms of the left eigenvectors \(\mathbf{l}_{j}\) of \(X\) and the corresponding eigenvalues \(x_{j}\) as
\[\Delta\mathcal{C}(t)=\sum_{j>k}C_{jk}\,e^{-(x_{j}+x_{k})t}\,\left(\mathbf{l}_{j} \otimes\mathbf{l}_{k}-\mathbf{l}_{k}\otimes\mathbf{l}_{j}\right), \tag{6}\]
where \(C_{jk}\) are coefficients that depend on the initial density matrix [26, 9, 24]. We find that parameter regimes with extremal edge states are characterized by edge-selective extremal damping: the damping described by \(\Delta\mathcal{C}(t)\) is much stronger at one edge than in the bulk, and it is analogously suppressed at the opposite edge. An experimentally observable consequence is for example the decay of the local on-site densities \(n_{js}=c_{js}^{\dagger}c_{js}\) towards their steady state values as measured by \(\Delta n_{js}(t)=\langle n_{js}\rangle(t)-\langle n_{js}\rangle_{\text{NESS}}\). As shown in Fig. 6 for \(m=0\) and a generic momentum \(k_{y}\) associated with extremal edge states, edge-selective extremal damping can also be observed for inherently non-Hermitian phases in which \(X\) has exceptional points for PBC and is gapless along imaginary energy [29, 33].
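Numerically, the approach to the NESS is convenient to evaluate because the equation of motion for \(\Delta\mathcal{C}(t)\) has a closed-form solution, \(\Delta\mathcal{C}(t)=e^{-X^{\dagger}t}\,\Delta\mathcal{C}(0)\,e^{-Xt}\), as one verifies by differentiation. A two-line sketch:

```python
import numpy as np
from scipy.linalg import expm

def delta_cov(t, X, dC0):
    """dC(t) = exp(-X^dag t) dC(0) exp(-X t); differentiating reproduces
    d(dC)/dt = -dC X - X^dag dC."""
    return expm(-X.conj().T * t) @ dC0 @ expm(-X * t)
```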
_Conclusions._ In this work, we have analyzed the topological heritage of Chern insulators coupled to an environment. Even when driven far from their equilibrium ground states, open Chern insulators preserve remnants of the band topology associated with their Hamiltonians. In particular, the approach to the steady state is governed by an effective non-Hermitian damping Hamiltonian generalizing the Hermitian parent Hamiltonian via the damping matrix.
As a key result, we find that topological edge states in the effective non-Hermitian damping Hamiltonian can be used to engineer damping properties of open Chern insulators also beyond the Liouvillian skin effect. In particular, we find that extremal edge states, i.e. edge states of the damping matrix with extremal real parts, make it possible to focus damping on one edge, and to similarly suppress it on the opposite edge. This paves the way for using non-Hermitian topology to design topologically protected damping landscapes in electronic systems. Practical implementations will benefit from the spin polarization of the extremal edge states. If for example spin-\(\uparrow\) electrons leak much more strongly to the environment than spin-\(\downarrow\) electrons, \(\Gamma_{\uparrow}\gg\Gamma_{\downarrow}\), tuning the system to a regime with extremal edge states will result in a non-equilibrium state that after a short initial phase is essentially empty except for one edge, where an exponentially localized stripe of spin-\(\downarrow\) electrons remains up to a time that diverges as \(\Gamma_{\downarrow}\to 0\)[29]. Finally, given the impressive theoretical and experimental progress in implementing non-Hermitian Hamiltonians in dissipative photonic systems, these platforms might provide alternative avenues for the physics discussed here [41, 42, 43].
_Note:_ While preparing this manuscript we became aware of a related work [44] that implements closely related physics in lossy classical waveguides.
All authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG) via the Emmy Noether Programme (Quantum Design grant, ME4844/1, project-id 327807255), project A04 of the Collaborative Research Center SFB 1143 (project-id 247310070), and the Cluster of Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC 2147, project-id 390858490).
S.H. and T.E. contributed equally to this work.
Figure 6: Deviation of the electronic density from its steady state value as function of time \(t\) and site number \(j\) for spin \(s\), \(\Delta n_{j\downarrow}(t)\), in a system with 15 sites, OBC along \(x\), and PBC along \(y\) after initialization in a homogeneous half-filled state at \(t=0\). Panel (a) shows \(s=\uparrow\), panel (b) corresponds to \(s=\downarrow\). We set all parameters as in Fig. 5 (a), and fix \(k_{y}=1.629\). The reference density \(n_{\rm{ref}}(t)\) is the geometric mean of the largest and smallest density at time \(t\) over all sites and spins. |
2305.09369 | Dynamics of niche construction in adaptable populations evolving in
diverse environments | In both natural and artificial studies, evolution is often seen as synonymous
with natural selection. Individuals evolve under pressures set by environments
that are either reset or do not carry over significant changes from previous
generations. Thus, niche construction (NC), the reciprocal process to natural
selection where individuals incur inheritable changes to their environment, is
ignored. Arguably due to this lack of study, the dynamics of NC are today
little understood, especially in real-world settings. In this work, we study NC
in simulation environments that consist of multiple, diverse niches and
populations that evolve their plasticity, evolvability and niche-constructing
behaviors. Our empirical analysis reveals many interesting dynamics, with
populations experiencing mass extinctions, arms races and oscillations. To
understand these behaviors, we analyze the interaction between NC and
adaptability and the effect of NC on the population's genomic diversity and
dispersal, observing that NC diversifies niches. Our study suggests that
complexifying the simulation environments studying NC, by considering multiple
and diverse niches, is necessary for understanding its dynamics and can lend
testable hypotheses to future studies of both natural and artificial systems. | Eleni Nisioti, Clément Moulin-Frier | 2023-05-16T11:52:14Z | http://arxiv.org/abs/2305.09369v1 | # Dynamics of niche construction
###### Abstract
In both natural and artificial studies, evolution is often seen as synonymous with natural selection. Individuals evolve under pressures set by environments that are either reset or do not carry over significant changes from previous generations. Thus, niche construction (NC), the reciprocal process to natural selection where individuals incur inheritable changes to their environment, is ignored. Arguably due to this lack of study, the dynamics of NC are today little understood, especially in real-world settings. In this work, we study NC in simulation environments that consist of multiple, diverse niches and populations that evolve their plasticity, evolvability and niche-constructing behaviors. Our empirical analysis reveals many interesting dynamics, with populations experiencing mass extinctions, arms races and oscillations 1. To understand these behaviors, we analyze the interaction between NC and adaptability and the effect of NC on the population's genomic diversity and dispersal, observing that NC diversifies niches. Our study suggests that complexifying the simulation environments studying NC, by considering multiple and diverse niches, is necessary for understanding its dynamics and can lend testable hypotheses to future studies of both natural and artificial systems.
Footnote 1: We provide an online repo for reproducing our simulations at [https://github.com/eleninisioti/NicheConstructionModel/tree/main](https://github.com/eleninisioti/NicheConstructionModel/tree/main)
## Introduction
Biological organisms and their environments share a reciprocal relationship: organisms survive and reproduce under selection pressures present in their habitats and environments are modified by their inhabitants, with changes being inherited by the next generation and accumulating with evolutionary time [1]. The first process, natural selection, became the cornerstone of evolutionary theory in the early 20th century [13] and, in both natural and artificial life studies, is often seen as synonymous with evolution. The second, niche construction (NC), was characterized as the neglected process in evolution in the early 2000s [1], as evolutionary theory assumed that NC's effect on selection pressures is negligible. Since then, studies of natural populations have shown that NC often affects selection pressures by helping species protect themselves from environmental uncertainty and accelerates evolution by complexifying environments [1, 1, 1]. By now, NC is, at best, the elephant in the room of evolutionary synthesis: despite evidence for its existence, our understanding of it is too limited to enable its study in settings capturing the complexity of the real world.
Recent hypotheses studying major events in evolution point to environmental complexity [14, 1, 2]. For example, the birth of our own lineage in East Africa occurred amidst large climatic instability that fragmented the landscape into patches of land differing in resource availability and separated by large lakes [14]. Other hypotheses further emphasize the ability of natural landmarks such as deserts and oceans to form barriers that isolate populations [15]. A similar story is unfolding in artificial life: under Quality-Diversity optimization [23], the search space is divided into behavioral niches to evolve diverse solutions, a paradigm that challenged the dominant approach of training a single agent, as it proved more robust in real-world settings [12].
When in a heterogeneous environment, organisms can survive by: a) specializing in a niche to out-compete others, paying the cost that they are out-competed in other niches [16, 17] b) becoming _phenotypically plastic_: a plastic individual adapts its phenotype to its environment without genetic change and can survive in diverse niches within its lifetime. On the downside, plasticity comes with fitness costs, so that plastic individuals are out-competed by specialists in their preferred niche [16, 15] c) becoming highly evolvable: a high mutation rate enables quick adaptation at an evolutionary scale but comes at the cost of increased deleterious mutations [13, 15]. As both plasticity and evolvability enable adaptation we jointly refer to them as adaptability.
Although limited, our understanding of NC shows that it is influenced both by the adaptability of populations and the
diversity of environments. For example, processes such as the invention of agriculture (Zohari, 1986) and cultural innovation (Mesoudi and Thornton, 2022) are both influenced by geography and our impressive social learning abilities (Diamond, 1998; Migliano et al., 2020; Migliano and Vinicius, 2022). Explanations for this rely on the interplay between a population's connectivity and its ability to accumulate solutions: spatial heterogeneity enforces isolation and, therefore, diversity within a population, while adaptability ensures that environmental barriers can be crossed and solutions spread, causing an intensification of NC (Cantor and Whitehead, 2013; Derex and Boyd, 2016; Nisioti et al., 2022).
In this work, we study the evolution of NC in environments divided in diverse niches where populations evolve their adaptability and niche-constructing behavior. Each niche is characterized by its environmental state, which determines how many agents it can fit, and consists of two components: the intrinsic state (which can for example model climatic variations uncontrollable by the agents) and the niche-constructed state, which is the product of niche-constructing individuals inhabiting the niche. The niche-constructed state is also inherited from the previous generation with some decay applied to capture the fact that ecologically-inherited artifacts cannot persist indefinitely. Modeling NC as a process that changes the capacity of the environment is common (Laland et al., 1999; Krakauer et al., 2009) and finds inspiration in natural behaviors, such as nest-constructing species. Genomes consist of four genes: preferred environmental state (the state at which their fitness is highest), plasticity (the variance in environmental state they can tolerate without a large impact on fitness), evolvability (the mutation rate) and niche construction (the amount by which they change the environmental state of a niche they inhabit). We consider two different mechanisms for selecting which agents will reproduce: a) under _global competition_ we select them with a probability proportional to their average fitness across the niches they can survive in and reproduction stops when the environment's capacity is reached b) under _local competition_ we consider reproduction in each niche independently: an agent is selected based on its fitness in each niche and reproduction stops when the capacity of that niche is filled. Thus, agents benefit from surviving in multiple niches. Our models of local and global competition have their equivalents both in previous studies in evolutionary computation (with Quality-Diversity algorithms representing local competition and classical algorithms global) and in the study of natural populations (where local adaptation considers that populations of the same species inhabiting different niches experience different natural selection pressures (Leimu and Fischer, 2008)).
An agent may benefit from NC by: a) increasing the capacity of a niche, for example through nest-building, thus reducing competition for resources b) reducing the capacity of a niche, thus making it less desirable to others. This behavior, which we refer to as negative niche construction, is exemplified in nature by pines, which spread their needles to increase the probability of fire and out-compete non-fire-resistant species (Schwilk, 2003) c) bringing the environmental state closer to its preferred niche and further away from others. For example, aquatic earthworms can inhabit earth soil only by changing its consistency (Turner, 2000). But NC also comes at a cost: a) increasing the capacity of a niche may invite other species in and increase competition b) decreasing its capacity may make it uninhabitable c) changing the environmental state will require adaptability, which comes at its own cost d) multiple agents co-inhabiting a niche can create high environmental uncertainty that can lead to mass extinctions if adaptability cannot increase enough to respond to it, a case that feels most relevant to our own evolution (Boivin et al., 2016; West, 2017).
Taking into account these complex interactions between environmental heterogeneity, population adaptability and niche construction, we believe that studies of NC should depart from simplified settings with a single niche (Laland et al., 1999; Suzuki and Arita, 2009; Chiba et al., 2020) and non-plastic populations (Laland et al., 1999; Krakauer et al., 2009). As we show in this work, rich models can offer interesting hypotheses about the dynamics of NC, such as that:
* having multiple niches is necessary for avoiding mass extinctions. We observed that, in a single niche, environmental uncertainty induced by NC becomes too high for adaptability to cope with;
* NC promotes adaptability and populations adapt differently depending on the selection mechanism: under global competition NC leads to higher plasticity while under local to higher evolvability;
* the population may niche-construct negatively for prolonged periods of time, making niches uninhabitable. Agents deal with this by evacuating the niche until it becomes inhabitable due to the decay of NC;
* under local competition NC increases genomic and environmental diversity.
## Related works
Previous studies have primarily modeled NC in two different ways: as an increase in the amount of resources that increases the environment's capacity (Laland et al., 1999; Krakauer et al., 2009) and as a direct increase or decrease in fitness that does not affect capacity (Suzuki and Arita, 2009). We follow the first approach but also allow reductions in capacity. NC studies were at first primarily theoretical, employing differential equations to predict how NC affects evolution (Laland et al., 1999; Krakauer et al., 2009). Such models lent intuitive insights, such that NC can emerge without direct selection (Laland et al., 1999) and that
the ability to monopolize niches enables the emergence of NC (Krakauer et al., 2009), but such models come with limitations, such as the fact that niches are identical and agents are not plastic.
Suzuki and Arita (2009) study the co-evolution of plasticity and NC in an agent-based model and show that populations alternate between phases of high plasticity and NC, provided that agents niche-construct sequentially. This model differs from ours as the environment consists of a single niche, NC does not modify its capacity and evolvability is constant. Interestingly, when Suzuki and Arita (2009) applied NC in parallel these patterns disappeared due to agents canceling out each other's behavior. Instead, the patterns in our analysis occur with NC applied in parallel. As we show, this becomes possible due to the presence of multiple niches that stabilize NC. Nisioti and Moulin-Frier (2022) study the co-evolution of plasticity and evolvability in an environment with multiple niches, but do not consider NC.
Recent advances in deep reinforcement learning (DRL) and neuroevolution have enabled the study of complex behaviors in simulations, such as foraging and tool-use (Perolat et al., 2017; Hamon et al., 2023; Baker et al., 2020). Chiba et al. (2020) leverage such techniques to study NC in a 2D environment where agents can construct artifacts useful for avoiding predation. Our study can lend insights to this direction, as we can view DRL as the mechanism that underlies the behavioral plasticity considered in our model.
## Modeling and methodology
We now separately discuss our model of the environment and genomes, the evolutionary algorithm and the set of metrics we use to monitor evolution.
### Modeling the environment
The environment, illustrated in Figure 1, is divided into \(N\) niches arranged in a simple latitudinal model: we consider a reference niche at \(n=0\), \(N/2\) "northern" niches indicated with positive indexes \(n\in(0,N/2]\) and \(N/2-1\) "southern" niches with negative indexes \(n\in(-N/2,0)\). Each niche is characterized by its environmental state \(e_{n}^{g}\) which is the sum of two elements:
* an intrinsic state \(i_{n}\) that remains constant with evolutionary time and depends on the location of the niche. Specifically, the state of niche \(n\) at generation \(g\) is \(i_{n}=i_{0}+\epsilon\cdot n\), where \(i_{0}\) is the state of the reference niche and \(\epsilon\) is a constant capturing the difference between adjacent niches.
* the niche-constructed state \(b_{n}^{g}\), capturing the modifications that agents inhabiting the niche cause. These modifications are carried over generations but are discounted by a factor \(\gamma\). Formally, the niche-constructed state \(b_{n}^{g}\) of niche \(n\) is equal to \(b_{n}^{g-1}\cdot\gamma\), with \(\gamma<1\), plus the total amount of NC applied by all agents that reproduced in this niche in generation \(g\). We denote this latter amount as \(\sum_{k\in\mathcal{K}_{n}}a_{k}^{g}\), where \(\mathcal{K}_{n}\) is the subset of all agents that reproduced in niche \(n\) and \(a_{k}^{g}\) is the amount of niche construction by a single agent (we later explain how agents niche-construct).
Thus, the general equation describing the evolution of niche \(n\) with generations \(g\) is:
\[b_{n}^{g} =b_{n}^{g-1}\cdot\gamma+\sum_{k\in\mathcal{K}_{n}}a_{k}^{g}\] \[e_{n}^{g} =i_{0}+\epsilon\cdot n+b_{n}^{g} \tag{1}\]
As we explain later, the environmental state determines the fitness of an agent (based on its genome) and sets the capacity of the niche \(c_{n}^{g}\) as \(c_{n}^{g}=e_{n}^{g}\,C_{N}\), where \(C_{N}\) is the reference niche capacity. Thus, higher environmental states can support larger populations and are termed "high-quality". To ensure that the maximum population size is independent of the number of niches we define \(C_{N}=C_{\text{ref}}/N\), where \(C_{\text{ref}}\) is equal to the desirable maximum population size. An assumption of this model is that there is spatial smoothness, i.e., nearby niches are similar. Note that \(C_{\text{ref}}\) is independent of niche-constructing behavior: by niche-constructing the population can exceed this capacity. We, therefore, further bound the population size by randomly discarding agents if the population exceeds a value \(K_{\text{max}}\).
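A minimal sketch of this environment model under our reading of Eq. (1) and the capacity definition above; all class and attribute names are our own (the authors' actual implementation lives in their online repo).

```python
import numpy as np

class Environment:
    """Latitudinal niche model, Eq. (1) plus the capacity definition."""
    def __init__(self, N=100, i0=0.6, eps=0.01, gamma=0.5, C_ref=1000):
        self.gamma = gamma
        # one reference niche (n=0), N/2 northern, N/2-1 southern niches
        self.idx = np.arange(-(N // 2) + 1, N // 2 + 1)
        self.intrinsic = i0 + eps * self.idx     # i_n, constant in time
        self.constructed = np.zeros(N)           # b_n^g, starts at zero
        self.C_N = C_ref / N                     # per-niche reference capacity

    def step(self, nc_per_niche):
        """nc_per_niche[n]: sum of a_k over agents that reproduced in n."""
        self.constructed = self.constructed * self.gamma + nc_per_niche
        return self.state()

    def state(self):
        return self.intrinsic + self.constructed  # e_n^g

    def capacity(self):
        return self.state() * self.C_N            # c_n^g = e_n^g * C_N
```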
Figure 1: Our model of the environment: each niche \(n\) is characterized by its environmental state \(e_{n}^{g}\) which is the sum of the intrinsic state \(i_{n}\) (in blue) and the niche-constructed state \(b_{n}^{g}\) (in red). \(i_{n}\) depends on the state of the reference niche \((e_{0})\), the offset \(\epsilon\) and \(n\). \(b_{n}^{g}\) is the sum of the niche-constructed state at the previous generation discounted by \(\gamma\) plus the sum of the niche-constructing behavior of agents that reproduced in this niche in the current generation.

### Modeling the genome

We model plasticity through tolerance curves, a tool developed in ecology (Lynch and Gabriel, 1987) and previously employed in simulation environments (Grove, 2014; Nisioti and Moulin-Frier, 2022). A tolerance curve is a normal distribution with mean \(\mu_{k}^{g}\), indicating the environmental state of highest fitness for an individual \(k\) at generation \(g\), called the preferred state, and standard deviation \(\sigma_{k}^{g}\) that captures how quickly the fitness of the genome drops as the environmental state varies from its preferred state. Genomes with large \(\sigma_{k}^{g}\) are indicative of plastic individuals (we illustrate the tolerance curves of a plastic and a non-plastic individual in Figure 2).
To model niche construction we introduce a niche-constructing gene \(a_{k}\in[-N/K_{\text{max}},N/K_{\text{max}}]\) where we bound the amount an agent can niche-construct to ensure that agents inhabiting large environments have limited capabilities. The niche-constructing gene is expressed in the niche an agent reproduces in by modifying its environmental state. Thus, at each generation, niche \(n\) is constructed by an amount \(\sum_{k\in K_{n}}a_{k}\), where \(\mathcal{K}_{n}\) denotes the subset of agents that reproduced in it. The genome \(o_{k}^{g}\) also includes the mutation rate \(r_{k}^{g}\). Thus the complete form of a genome is \(o_{k}^{g}=[\mu_{k}^{g},\sigma_{k}^{g},r_{k}^{g},a_{k}^{g}]\), and upon reproduction, it mutates as:
\[\mu_{k}^{g+1} =\mu_{k}^{g}+\mathcal{N}(0,r_{k}^{g})\] \[\sigma_{k}^{g+1} =\sigma_{k}^{g}+\mathcal{N}(0,r_{k}^{g})\] \[a_{k}^{g+1} =[a_{k}^{g}+\mathcal{N}(0,r_{\alpha})]_{-N/K_{\text{max}}}^{N/K_{\text{max}}}\] \[r_{k}^{g+1} =r_{k}^{g}+\mathcal{N}(0,r_{k}^{g}) \tag{2}\]
where \(\mathcal{N}(x,y)\) denotes a normal distribution with mean \(x\) and variance \(y\) and we have highlighted that the niche-constructing gene is bounded. Note also that, for stability reasons, we employ a different mutation rate for the niche-constructing gene, \(r_{\alpha}\), that remains constant.
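A sketch of the mutation operator of Eq. (2). Since \(\mathcal{N}(x,y)\) has variance \(y\), the square roots below convert variances to standard deviations; keeping \(\sigma\) and \(r\) strictly positive via clipping is our own practical addition, not something stated in the text.

```python
import numpy as np

def mutate(genome, r_alpha=3e-4, N=100, K_max=5000, rng=None):
    """One offspring genome per Eq. (2); genome = (mu, sigma, r, a)."""
    rng = rng if rng is not None else np.random.default_rng()
    mu, sigma, r, a = genome
    std = np.sqrt(r)                       # N(x, y) has mean x, variance y
    bound = N / K_max                      # bound on the NC gene
    return (mu + rng.normal(0.0, std),
            max(sigma + rng.normal(0.0, std), 1e-6),   # keep tolerance valid
            max(r + rng.normal(0.0, std), 1e-6),       # keep rate positive
            float(np.clip(a + rng.normal(0.0, np.sqrt(r_alpha)),
                          -bound, bound)))
```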
### Global and local selection
At the end of a generation, agents are selected for reproduction based on their fitness. Each chosen individual has two offspring and the next generation consists only of offspring. To compute the fitness of an individual \(k\) in generation \(g\) we first detect the niches in which it can survive as:
\[n\in\{1,\cdots,N\}\quad|\quad e_{n}^{g}\in[\mu_{k}^{g}-2\sigma_{k}^{g},\mu_{k }^{g}+2\sigma_{k}^{g}] \tag{3}\]
and compute its fitness in each one of them as \(f_{k,n}^{g}=pdf(\mu_{k}^{g},\sigma_{k}^{g},e_{n}^{g})\), where \(pdf\) denotes the value of the normal probability density function with mean \(\mu_{k}^{g}\), and variance \(\sigma_{k}^{g}\) at location \(e_{n}^{g}\). We study two selection mechanisms:
* under global selection all agents in the population are ranked based on their average fitness across the niches they can survive in and reproduce with a probability proportional to it. Agents reproduce until the capacity of the environment is filled.
* under local selection we apply the same criterion but independently for each niche: we detect which agents survive in a niche and reproduce them with a probability proportional to their fitness. Thus, agents that can survive in multiple niches have higher chances of reproduction under this mechanism. Again, agents reproduce in a niche until its capacity is filled.
Thus, at the end of a generation a new population is formed that consists of the offspring of agents from the previous population that were chosen for reproduction, based on their fitness and the environment's capacity. Note that the niche is not inherited from parent to offspring: the offspring will inhabit the niches its genome describes (as in Eq. 3). We present the pseudocode of our algorithm in Algorithm 1 and 2, provided in our online repo due to limited space.
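A sketch of the survival test of Eq. (3), the fitness computation, and the local-selection loop as we read them. The translation of "reproduce until the capacity is filled" into a parent count (two offspring per parent) is our own interpretation, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def survives(mu, sigma, e):
    """Eq. (3): the niche state lies within two std of the preferred state."""
    return mu - 2 * sigma <= e <= mu + 2 * sigma

def fitness(mu, sigma, e):
    """f_{k,n}^g: value of the tolerance-curve pdf at the niche state e."""
    return norm.pdf(e, loc=mu, scale=sigma)

def local_selection(pop, env_states, capacities, rng):
    """Pick parents niche by niche; returns (parent_index, niche) pairs."""
    pairs = []
    for n, e in enumerate(env_states):
        f = np.array([fitness(mu, s, e) if survives(mu, s, e) else 0.0
                      for (mu, s, _, _) in pop])
        if f.sum() == 0.0:
            continue                          # no survivor: niche stays empty
        n_parents = int(capacities[n]) // 2   # each parent has two offspring
        if n_parents <= 0:
            continue
        parents = rng.choice(len(pop), size=n_parents, p=f / f.sum())
        pairs.extend((int(k), n) for k in parents)
    return pairs
```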
### Metrics
In addition to population-wide averages of the genome values and environment-wide averages of the intrinsic and niche-constructed states we monitor the following metrics:
1. \(X^{g}=\sum_{k}X_{k}^{g}\), the number of extinctions. We denote the survival of individual \(k\) in niche \(n\) at generation \(g\) as a binary variable: \[s_{k,n}^{g}=(e_{n}^{g}\in[\mu_{k}^{g}-2\sigma_{k}^{g},\mu_{k}^{g}+2\sigma_{k}^{g}])\] (4) Thus, an individual goes extinct (\(X_{k}^{g}=1\)) if \(\sum_{n=1}^{N}s_{k,n}^{g}\) is zero and survives (\(X_{k}^{g}=0\)) if \(\sum_{n=1}^{N}s_{k,n}^{g}\) is positive.
2. \(V^{g}_{\mu}\), the diversity of the population, defined as the standard deviation of the population's preferred states, namely \[V^{g}_{\mu}=\sigma\left(\{\mu_{k}^{g}\}_{k}\right)\] (5) This metric captures the genetic diversity of the population computed for the gene of preferred state.
3. \(D^{g}\), the dispersal of the population, computed as the number of niches over which at least one individual survives for a temporal window of at least \(w\) generations. Formally, \(D^{g}=\sum_{n=1}^{N}d^{g}_{n,w}\), where \(d^{g}_{n,w}\) denotes the persistence of the population in a given niche for the required time window and is computed as \[d^{g}_{n,w}=\begin{cases}1&\text{if }\sum_{g^{\prime}=g-w}^{g}s^{g^{\prime}}_{n}=w\\ 0&\text{otherwise}\end{cases}\] (6) where \(s^{g}_{n}\) indicates the survival of at least one individual in a given niche and is computed as \[s^{g}_{n}=\begin{cases}1&\text{if }\sum_{k=1}^{K}s^{g}_{k,n}\geq 1\\ 0&\text{otherwise}\end{cases}\] (7) with \(s^{g}_{k,n}\) defined in Eq. (4).
4. \(H^{g}\), the competition for reproduction within the population. We count, for each niche, the number of agents that survived in it and were fit enough to be chosen for reproduction but did not reproduce because its capacity was reached and, then, sum over all niches.

Figure 2: Modeling plasticity as a normal distribution \(\mathcal{N}(\mu_{k},\sigma_{k})\). A non-plastic individual (\(k\)) has small \(\sigma_{k}\) and a high peak at their preferred niche, while a plastic individual (\(k^{\prime}\)) has large \(\sigma_{k}\) and a lower peak at their preferred niche. Fitness in a given niche \(n\) is computed as the probability density function of the distribution at the environmental state \(e_{n}\). This figure also illustrates the cost and benefit of plasticity, assuming that \(\mu_{k}=\mu_{k^{\prime}}\). If \(e_{n}=\mu_{k}\) (the actual environmental state is identical to the preferred niche of both individuals) the plastic individual has lower fitness (cost of plasticity). If \(e_{n}\) differs significantly from \(\mu_{k}\) (the actual environmental state differs from the preferred one) the plastic individual has higher fitness (benefit of plasticity).
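Minimal sketches of the first three metrics, under our reading of Eqs. (4)-(6):

```python
import numpy as np

def extinctions(pop, env_states):
    """X^g: number of agents surviving in no niche at all (Eq. 4)."""
    dead = 0
    for (mu, sigma, _, _) in pop:
        alive = np.any((env_states >= mu - 2 * sigma) &
                       (env_states <= mu + 2 * sigma))
        dead += 0 if alive else 1
    return dead

def diversity(pop):
    """V^g_mu: standard deviation of the preferred states (Eq. 5)."""
    return float(np.std([g[0] for g in pop]))

def dispersal(survival_history):
    """D^g (Eq. 6): survival_history is a (w, N) boolean array whose entry
    [t, n] says whether at least one agent survived in niche n at step t."""
    return int(survival_history.all(axis=0).sum())
```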
## Results
We now examine the behavior of agents under different settings. First, we compare behaviors under global competition between environments with different numbers of niches. Afterwards, we consider only heterogeneous environments (\(N=100\) niches) and we contrast the behavior across two dimensions: a) whether NC takes place or not (where we force all niche-constructing genes to be zero) and b) whether selection is local or global. In all simulations we set the discount factor \(\gamma\) to 0.5, the intrinsic state of the reference niche \(i_{0}\) to 0.6, \(\epsilon\) (the difference between adjacent niches) to 0.01, the reference capacity to \(C_{\text{ref}}=1000\), the maximum population size to \(K_{\text{max}}=5000\) and the mutation rate of the niche-constructing gene, \(r_{\alpha}\), to 0.0003. As we discuss later, it would be interesting to study the effect of some of these hyper-parameters. For this study we note that the values of \(C_{\text{ref}},K_{\text{max}}\) were chosen to limit computational complexity; increasing them further should not qualitatively change our conclusions. Also regarding \(r_{\alpha}\), setting it to too low a value will disable NC and setting it too high may destabilize NC. We performed ten independent trials and present in plots median values and \(95\%\) confidence intervals. When studying niche-constructing behaviors we do not average across trials but present them individually, as differences would average out and conceal information.
### Spatial heterogeneity stabilizes niche construction
In Figure 3 we compare the behavior under global competition for different numbers of niches (\(N\in\{1,50,100\}\)) for a single trial. We observe that, for \(N=1\), the agents are niche-constructing positively (third row), which pushes the environment state to higher values and more variability (first row). We see that the population reacts by adapting its preferred niche (second row), increasing its plasticity (fourth row) and keeping its evolvability high (fifth row). Despite that, the population goes extinct around generation 250, after experiencing many booms and busts (sixth row). This behavior was consistent across all trials for \(N=1\), with half of the trials experiencing a mass extinction due to negative NC and half due to positive NC (as in Figure 3).
In the heterogeneous environments, on the other hand, the population survives for the whole simulation. We see that the average NC among agents (\(\mathbb{E}(a)\)) is similar to that of the homogeneous environment but that this leads to less niche construction in the environment (evident through the environmental state). This is because the population in the heterogeneous environment spreads over multiple niches and often migrates collectively out of a niche (we will take a look at this behavior in the next section), leading to a decrease in the accumulation of NC due to its discounting. In contrast, when \(N=1\) NC accumulates in a single niche. We also observe that NC in \(N=100\) does not increase continuously but experiences oscillations, which suggests that some agents are niche-constructing negatively or reducing their positive NC. As a result, the environment experiences lower oscillations which enables the population to reduce its evolvability. Plasticity remains relatively high but much lower than in the homogeneous environment. We also measured the number of mass extinctions for environments with different numbers of niches and observed that even a small number of niches is useful and increasing the number of niches continuously decreases the probability of extinction (10/10 went extinct for \(N=1\), 4/10 for 20, 3/10 for 50 and 2/10 for 100). Thus, the presence of multiple niches was necessary for the survival of the niche-constructing population. Next, we will look into how adaptability interacts with NC to give rise to this behavior. As we hypothesize that the presence of niches matters, we will focus on how agents disperse and niche-construct differently in different niches.

Figure 3: Comparison between a homogeneous (\(N=1\)) and a heterogeneous environment (\(N=100\)) under global competition.
### Niche construction promotes adaptability
We now compare a niche-constructing population (denoted as \(R_{NC}\)) and a population that cannot niche-construct (\(R\)). We examine populations under global selection in Figure 4 and under local selection in Figure 5. We observe that both populations increase their adaptability but do so differently.
Under global selection, the \(R_{NC}\) population increases its plasticity but keeps its evolvability low. Dispersal is a bit higher and genomic diversity is initially high but, by the middle of the simulation, reaches the low level of the \(R\) population. We also see that \(R_{NC}\) has a smaller population size early in the simulation. As extinctions are low, this is due to NC reducing the capacity of the environment. Near the end of the simulation we see that \(R_{NC}\) reaches and, in some trials, surpasses the size of the \(R\) population. This indicates that positive NC has increased the environment's capacity.
Under local selection \(R_{NC}\) has very high evolvability but plasticity is at a similar level to \(R\). We observe that some extinctions persist throughout evolution but the population size is relatively stable. Why did the niche-constructing population prefer to adapt through evolvability rather than plasticity? As we will see more closely in our analysis of specific trials, agents in a niche-constructing population manage to coordinate their NC behavior within a niche. This would not have been possible under high plasticity, as agents would stochastically niche-construct in different niches and increase environmental uncertainty.
### Niche construction can cause an arms race
In Figure 6 we analyze one of the trials for the niche-constructing population under global competition through heatmaps that show the amount of NC at each generation and niche. Populations are initialized with diverse genomes, so that at the beginning they are dispersed in all niches.
Figure 4: Comparison between populations with (\(R_{NC}\)) and without NC (\(R\)) under global competition.
Figure 5: Comparison between populations with (\(R_{NC}\)) and without NC (\(R\)) under local competition.
Then, global competition leads to a decrease in plasticity and quickly wipes out the diversity of the population, gathering all agents in the same narrow stripe of niches. Around generation 50, the population starts positively niche-constructing in the middle niche, then starts moving to the north and, once it reaches the end of the environment (around generation 150), stays there and oscillates between phases of positive and negative NC. On the top plot, we see that this population has adapted its preferred state to the last niche and that as soon as it reaches it, plasticity stops decreasing and the population size stops increasing. The trial depicted in Figure 6 is representative of 4 out of the 10 trials. In another five trials the same pattern occurred, but in the south instead of the north. In contrast, we observed that populations without NC do not gather at the edge of the environment but stay in a random niche near the center. Thus, NC under global competition leads to a "geographic" arms race due to the following dynamics: once the agents have been gathered to a narrow stripe of niches, they begin niche-constructing, initially randomly. Then, NC will, by random chance, become slightly positive or negative. As all agents are competing in the same niche, this will force them to adapt their preferred state towards the direction in which the sum of agents is constructing. This enables inhabiting the adjacent niche, where the agents will niche-construct in the same direction. Eventually, the population will reach the end of the environment. Why doesn't the population continue increasing its niche construction in the same direction instead of experiencing cycles? We believe that this is not possible as the mutation rate is low and, therefore, the preferred niche cannot be adapted anymore.
### Negative niche construction reduces competition
In Figure 7 we analyze another trial for the niche-constructing population under global competition, where a different behavior emerged: the population niche-constructs negatively, which pushes it to the low-index niches. There we see an interesting pattern: the population is gathered in a narrow stripe of niches where it niche-constructs negatively and with increasing intensity. Then the population leaves this stripe and moves to the adjacent northern and southern niches. This is possible due to the maintained plasticity and does not require an adaptation of the preferred niche. Then, the same behavior happens until the population moves back to the previous stripe and we see many cycles of moving back and forth. This switching behavior is caused by the fact that, once the population niche-constructs negatively in a niche for some time, this niche becomes uninhabitable, so that the population needs to move out until it becomes inhabitable again. Eventually, some positive NC happens, which leads to an extinction, as evolvability is too low to enable adaptation. This indicates that negative niche construction is not stable in the long term.

Figure 6: Analyzing one trial for global competition with a niche-constructing population where an arms race emerged. (Top) Evolution of metrics. (Bottom) Heatmap with rows corresponding to niches, columns to generations and the value/color of a cell to the sum of niche construction of all agents in a given niche and generation.

Figure 7: Analyzing one trial for global competition with a niche-constructing population where negative niche construction emerged.
### Niche construction diversifies the environment
We now move to local selection, where we saw in Figure 5 that NC leads to a more evolvable population. In Figure 8, we analyze a typical trial where we see that NC is positive and high in northern niches and negative in more southern niches. Thus, the majority lives in the north (this can be inferred from the preferred state \(\mathbb{E}(\mu)=0.8\) surpassing the state of the reference niche \(i_{0}=0.6\)). The population experiences many changes, with niches often switching between being positively and negatively niche-constructed. We also measured the variance of NC within a niche and found that it is two orders of magnitude smaller than the one under global selection. This suggests that local selection enables agents to coordinate their niche construction within a niche, which diversifies the environment and leads to the higher genomic diversity observed in Figure 5.
An intriguing question is why populations under local selection adapt by increasing their evolvability but keep their plasticity at even lower levels than without NC. We believe that this is due to the interplay between NC and plasticity: very plastic agents could niche-construct in many niches, which would make it difficult to maintain low variance in NC within a niche. By monopolizing a limited number of niches and quickly changing niche through mutations, these populations can better coordinate their NC.
## Discussion
We have shown that the evolution of niche construction is contingent on the number of niches in the environment and on the selection mechanism. We explained this by looking into the interplay between adaptability and NC, in particular their effect on the population's and environment's diversity and on dispersal patterns. Populations are impressive in their ability to self-regulate: NC remains bounded even though we do not introduce an explicit cost for it. Populations fail to coordinate their NC and go extinct only when the environment has a single niche.
Our empirical study could be extended in multiple ways. First, we could study the effect of additional hyperparameters. For example, we could allow the intrinsic state to vary with time, following a noisy or periodic signal (Nisioti and Moulin-Frier, 2022). We could also consider that NC is plastic: currently, an agent's gene determines its NC independently of the environmental state, while we could imagine scenarios where agents niche-construct differently in different states. Finally, we believe that our empirical observations should be tested in grounded environments with RL agents, such as grid-worlds where a population forages or avoids predation (Hamon et al., 2023; Chiba et al., 2020).
We hope that our work will contribute towards incorporating NC in future studies of collective adaptation. We think this is particularly relevant for the Artificial Intelligence community, which has recently focused on agents that can generalize to diverse environments, on the importance of environmental complexity, and on multi-agent dynamics (Reed et al., 2022; Jaderberg et al., 2022; Nisioti et al., 2021; Moulin-Frier, 2022). As we show here, niche construction, which can be seen as a meta-learning mechanism (Constant et al., 2018), can prove promising for complexifying environments and population dynamics, and can bring us closer to artificial agents with behaviors reminiscent of natural ones.
## Acknowledgements

This research was partially funded by the French National Research Agency ([https://anr.fr/](https://anr.fr/), project ECOCURL, Grant ANR-20-CE23-0006). This work also benefited from access to the HPC resources of IDRIS under the allocation 2020-[A0091011996] made by GENCI.
Figure 8: Analyzing a typical trial for local competition with a niche-constructing population. |
2306.08527 | Variance-Preserving-Based Interpolation Diffusion Models for Speech
Enhancement | The goal of this study is to implement diffusion models for speech
enhancement (SE). The first step is to emphasize the theoretical foundation of
variance-preserving (VP)-based interpolation diffusion under continuous
conditions. Subsequently, we present a more concise framework that encapsulates
both the VP- and variance-exploding (VE)-based interpolation diffusion methods.
We demonstrate that these two methods are special cases of the proposed
framework. Additionally, we provide a practical example of VP-based
interpolation diffusion for the SE task. To improve performance and ease model
training, we analyze the common difficulties encountered in diffusion models
and suggest amenable hyper-parameters. Finally, we evaluate our model against
several methods using a public benchmark to showcase the effectiveness of our
approach | Zilu Guo, Jun Du, Chin-Hui Lee, Yu Gao, Wenbin Zhang | 2023-06-14T14:22:22Z | http://arxiv.org/abs/2306.08527v2 | # Variance-Preserving-Based Interpolation Diffusion Models
###### Abstract
The goal of this study is to implement diffusion models for speech enhancement (SE). The first step is to emphasize the theoretical foundation of variance-preserving (VP)-based interpolation diffusion under continuous conditions. Subsequently, we present a more concise framework that encapsulates both the VP- and variance-exploding (VE)-based interpolation diffusion methods. We demonstrate that these two methods are special cases of the proposed framework. Additionally, we provide a practical example of VP-based interpolation diffusion for the SE task. To improve performance and ease model training, we analyze the common difficulties encountered in diffusion models and suggest amenable hyper-parameters. Finally, we evaluate our model against several methods using a public benchmark to showcase the effectiveness of our approach.
Zilu Guo\({}^{1}\), Jun Du\({}^{1,*}\), Chin-Hui Lee\({}^{2}\), Yu Gao\({}^{3}\), Wenbin Zhang\({}^{3}\)
\({}^{1}\)University of Science and Technology of China, Hefei 230027, China
\({}^{2}\)Georgia Institute of Technology, Atlanta, GA. 30332-0250, USA
\({}^{3}\)AI Innovation Center, Midea Group (Shanghai) Co., Ltd., Shanghai 201702, China
[email protected], [email protected], [email protected], {gaoyu11, zhangwb87}@midea.com
**Index Terms**: speech enhancement, speech denoising, diffusion models, score-based, interpolation diffusion
## 1 Introduction
Speech enhancement (SE) [1] has been the subject of research for several decades, with the goal of diminishing or even eliminating the noise in noisy speech while minimizing the distortion to speech quality. In recent years, SE has been approached as a supervised regression task with the assistance of deep learning. Early attempts to deploy deep learning for SE employed off-the-shelf models to predict the targets typically utilized in traditional approaches. However, these approaches are sub-optimal since they are not able to recover the clean phase of the speech. Common targets for these methods include the magnitude of the spectrogram [2], the log magnitude (Mapping) [3], the ideal ratio mask (IRM) [4], the spectral magnitude mask (SMM) [2], etc. To obtain the clean phase, the real and imaginary parts of the spectrogram are utilized directly as the target [5, 6, 7]. Meanwhile, another mainstream SE approach seeks to predict clean waveforms in an end-to-end (E2E) manner [8], rather than the spectrum. In addition to regressive methods, some researchers have utilized generative models for SE, such as VAEs [9, 10], GANs [11, 12, 13], flows [14, 15], etc.
Recently, diffusion models have been successful in various generative tasks, including image generation [16, 17, 18], image editing [19], speech synthesis [20], etc. Diffusion models involve two processes, i.e., the diffusion (or forward) process and the reverse (or backward) process. Another approach in the discrete domain is the score-based model [18]. Both the diffusion [16] and score-based [18, 21] models are unified in [17, 22] and generalized to the continuous time domain, endowing the model with more capacity. Moreover, the authors in [17] classify diffusion models into two types based on their intuitive properties: the variance-preserving (VP)-based [16] and the variance-exploding (VE)-based [18] methods. The VE-based approach gradually increases the variance over time while keeping the clean component unaltered. In contrast, VP-based diffusion attempts to preserve the amplitude with fewer fluctuations. Besides their applications in the image processing field, diffusion models have also been explored for SE tasks.
In their work, CDiffuSE [23] proposes that a degraded signal in the diffusion process is composed of three components, i.e., the clean signal, the noisy speech, and Gaussian noise. The authors suggest that the mean of the degraded signal in the diffusion process is obtained through a linear combination, specifically a linear interpolation, of the clean and noisy speech. They then apply this interpolation method to Diffwave [20], a model for speech synthesis, for the SE task. SRTNET [24] follows a similar approach, utilizing interpolation diffusion to estimate the distortion between the clean speech and an enhanced speech predicted by a pre-trained discriminative model. However, such two-stage models commonly incur higher computational overhead. Apart from the discrete diffusion models mentioned above, there are also several interpolation-based methods for SE under continuous conditions. In [25, 26], the authors formulate the theoretical foundation of the VE-based interpolation diffusion model (VEIDM) and generalize it to the continuous-time setting. However, they leave the VP-based variant untouched, even though VP-based diffusion shows better performance than the VE-based one in some tasks. In this paper, we apply linear interpolation to VP-based diffusion.
The rest of the paper is organized as follows. The proposed method is introduced in Sec. 2, where we accentuate the proposed signal model, training method, and sampling algorithms. Experimental settings, results, and analyses are presented in Sec. 3. We draw conclusions in Sec. 4.
## 2 The proposed method
In the forward process of the vanilla VP-based diffusion model, a clean signal (such as an image or speech) is gradually degraded by adding Gaussian noise step by step until it becomes approximately Gaussian noise. In this process, the mean of the state at time \(t\) is an affine function of the clean signal. In the VP-based interpolation diffusion, however, the mean is replaced with a linear interpolation of the clean and the noisy speech.
### The VP-based interpolation diffusion model (VPIDM)
For tasks such as SE, image editing, and voice conversion, an existing condition holds copious information about the target. In the case of SE, for instance, the readily available noisy
speech can be used to guide the diffusion process of the clean speech. We refer to this approach as the VP-based interpolation diffusion model (VPIDM). The signal model for VPIDM is defined as
\[\mathbf{x}(t)=\alpha_{t}[\lambda_{t}\mathbf{x}_{0}+(1-\lambda_{t})\mathbf{y}]+\sqrt{1-\alpha _{t}^{2}}\mathbf{z} \tag{1}\]
where \(\mathbf{x}(t)\) is the degraded signal at time index \(t\) in the diffusion process, \(\mathbf{x}_{0}\) is the clean signal, \(\mathbf{y}\) is the noisy speech, \(\mathbf{z}\) is Gaussian noise sampled from the standard normal distribution, \(\alpha_{t}\) determines the diffusion process, \(\lambda_{t}\) is the slope of the interpolation process, and both \(\alpha_{t}\) and \(\lambda_{t}\) are functions of \(t\). The differential of \(\mathbf{x}(t)\) is
\[d\mathbf{x}(t)=[\mathbf{x}(t)\ln^{{}^{\prime}}(\alpha_{t}\lambda_{t})-\mathbf{y}\alpha_{t} \ln^{{}^{\prime}}\lambda_{t}]dt+\mathbf{\Sigma}_{t}d\mathbf{w} \tag{2}\]
where \(\mathbf{w}\) is the stochastic process, \(\ln^{\prime}[\cdot]\) is the derivative of \(\ln[\cdot]\) with respect to \(t\), and \(d[\cdot]\) is the differential operator. Assuming \(\mathbf{\Sigma}_{t}\) is a diagonal matrix, let \(\mathbf{\Sigma}_{t}=g(t)\mathbf{I}\). From Eq. (5.53) in [27], we get
\[\frac{dG^{2}(t)}{dt}=2G^{2}(t)\ln^{{}^{\prime}}(\alpha_{t}\lambda_{t})+g^{2}(t) \tag{3}\]
where \(g(t)\) indicates the spreading speed of the stochastic process in the differential of \(\mathbf{x}(t)\), \(g(t)=\sqrt{-2G^{2}(t)\ln^{\prime}\lambda_{t}-2\ln^{\prime}\alpha_{t}}\), \(G(t)\) is the coefficient of the Gaussian noise in \(\mathbf{x}(t)\), \(G^{2}(t)=1-\alpha_{t}^{2}\), \(t\in(0,1]\), and \(\ln^{\prime}\alpha_{t}\leq 0\) and \(\ln^{\prime}\lambda_{t}\leq 0\), which means \(\lambda_{t}\) and \(\alpha_{t}\) are monotonically decreasing functions. In this article, all constant superscripts represent powers unless otherwise specified. When \(t\to 0\), \(\lambda_{t}\to 1\) and \(\alpha_{t}\to 1\).
In principle, we hope \(\lambda_{1}\to 0\), which implies that the final state is a combination of the noisy signal and the Gaussian noise. Therefore, a larger \(-\ln^{\prime}\lambda_{t}\) appears more favorable. However, empirical evidence suggests that \(-\ln^{\prime}\lambda_{t}\) cannot be made arbitrarily large: when \(-\ln^{\prime}\lambda_{t}\) is set sufficiently large, the linear interpolation changes quickly from \(\mathbf{y}\) to \(\mathbf{x}_{0}\) within a few steps, which is difficult for neural networks to capture.
In this paper, we adopt the similar \(\alpha_{t}\) schedule of the VP-based diffusion in [17] for the SE, i.e., \(\alpha_{t}=e^{-0.5\int_{0}^{t}\beta(\tau)d\tau}\), where \(\beta(t)=(\beta_{\text{max}}-\beta_{\text{min}})t+\beta_{\text{min}}\), \(\beta_{\text{min}}\) controls the slope of the clean scale when \(t\to 0\), \((\beta_{\text{max}}-\beta_{\text{min}})\) controls the changing speed of \(\mathbf{x}_{t}\) from the clean to the Gaussian.
\[g(t)=\sqrt{\beta(t)+2\lambda(1-e^{-\int_{0}^{t}\beta(\tau)d\tau})} \tag{4}\]
where \(\lambda_{t}=e^{-\lambda t}\). From Eq. (2), \(d\mathbf{x}(t)\) becomes
\[d\mathbf{x}(t)=[-(0.5\beta(t)+\lambda)\mathbf{x}(t)+\lambda\alpha_{t}\mathbf{y}]dt+g(t)d \mathbf{w} \tag{5}\]
In the reverse process, \(\mathbf{x}(1)\) is sampled from the distribution \(\mathcal{N}(\alpha_{1}\mathbf{y},\sqrt{1-\alpha_{1}^{2}}\mathbf{I})\).
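To make the schedules concrete, the following is a minimal NumPy sketch of sampling \(\mathbf{x}(t)\) from Eq. (1); the hyper-parameter values are those reported later in Sec. 3.1, and the function names (`alpha_t`, `forward_state`) are our own, not from the authors' code.

```python
import numpy as np

# Schedules from Sec. 2.1 (values from Sec. 3.1): beta(t) linear in t,
# alpha_t = exp(-0.5 * int_0^t beta(tau) dtau), lambda_t = exp(-lam * t).
BETA_MIN, BETA_MAX, LAM = 0.1, 2.0, 1.5

def alpha_t(t: float) -> float:
    # int_0^t beta(tau) dtau = 0.5*(beta_max - beta_min)*t^2 + beta_min*t
    integral = 0.5 * (BETA_MAX - BETA_MIN) * t ** 2 + BETA_MIN * t
    return float(np.exp(-0.5 * integral))

def forward_state(x0: np.ndarray, y: np.ndarray, t: float,
                  rng: np.random.Generator) -> np.ndarray:
    """Sample x(t) from Eq. (1): interpolated mean plus Gaussian noise."""
    a, lam = alpha_t(t), float(np.exp(-LAM * t))
    mean = a * (lam * x0 + (1.0 - lam) * y)
    G = np.sqrt(1.0 - a ** 2)                       # G(t) for the VP case
    return mean + G * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
x0, y = rng.standard_normal(256), rng.standard_normal(256)  # toy real-valued signals
xt = forward_state(x0, y, t=0.5, rng=rng)
```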
### The loss function and the training stage
For unconditional diffusion, the neural network is trained for predicting \(\mathbf{\nabla}_{\mathbf{x}}\ln(p_{t}(\mathbf{x}))\). This is equivalent to optimizing the following cost function
\[\mathcal{L}=\mathbb{E}_{t,\mathbf{x}_{0},\mathbf{x}(t)}\{W\cdot||\mathbf{\theta}(\mathbf{x}(t ))-\mathbf{\nabla}_{\mathbf{x}}\ln(p_{t}(\mathbf{x}))||^{2}\} \tag{6}\]
where \(\mathbf{\theta}(\mathbf{x}(t))\) is the output of the neural network. For interpolation-based diffusion,
\[p_{t}(\mathbf{x})=p(\mathbf{x}(t)|\mathbf{x}_{0},\mathbf{y})=\mathcal{N}(\mathbf{m}(\mathbf{x}_{0}, \mathbf{y});G(t)\mathbf{I}) \tag{7}\]
where \(p_{t}(\mathbf{x})\) is the conditional probability density function of \(\mathbf{x}(t)\), \(\mathbf{m}(\mathbf{x}_{0},\mathbf{y})=\alpha_{t}[\lambda_{t}\mathbf{x}_{0}+(1-\lambda_{t})\mathbf{ y}]\) is the mean of \(p_{t}(\mathbf{x})\).
\[\mathbf{\nabla}_{\mathbf{x}}\ln(p_{t}(\mathbf{x}))=\mathbf{\nabla}_{\mathbf{x}}\Big{[}-\frac{||\mathbf{x}(t)-\mathbf{m}(\mathbf{x}_{0},\mathbf{y})||^{2}}{2G^{2}(t)}\Big{]}=-\frac{\mathbf{z}}{G(t)} \tag{8}\]
here \(\mathbf{x}(t)-\mathbf{m}(\mathbf{x}_{0},\mathbf{y})=G(t)\mathbf{z}\). Then we get the loss function
\[\mathcal{L}=\mathbb{E}\{W\cdot||\mathbf{\theta}(\mathbf{x}(t))+\frac{\mathbf{z}}{G(t)}||^ {2}\}=\mathbb{E}||G(t)\mathbf{\theta}(\mathbf{x}(t))+\mathbf{z}||^{2} \tag{9}\]
We follow the settings in [17, 18, 25], utilize the weighted loss, and set \(W=G^{2}(t)\) for better performance.
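A minimal PyTorch sketch of the weighted loss in Eq. (9), assuming a score network with the (hypothetical) interface `model(x_t, y, t)` and per-sample schedule values `alpha_t`, `lam_t` of shape `(batch,)`:

```python
import torch

def vpidm_loss(model, x0, y, t, alpha_t, lam_t):
    """Weighted loss of Eq. (9): E || G(t) * theta(x(t)) + z ||^2 with W = G^2(t)."""
    z = torch.randn_like(x0)
    shape = (-1,) + (1,) * (x0.dim() - 1)          # broadcast over feature dims
    a, l = alpha_t.view(shape), lam_t.view(shape)
    G = torch.sqrt(1.0 - a ** 2)
    x_t = a * (l * x0 + (1.0 - l) * y) + G * z     # forward state, Eq. (1)
    return ((G * model(x_t, y, t) + z) ** 2).mean()
```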
The training algorithm is shown in Alg. 1, where \(\epsilon\) represents the minimum sample time, the superscript \(b\) in \([\cdot]^{b}\) denotes the \(b\)-th sample of a batch, \(B\) is the batch size, and \(p(\mathbf{x_{0}},\mathbf{y})\) denotes the joint probability density function of the clean and noisy pair \(\mathbf{x}_{0},\mathbf{y}\).
```
Suppose the number of sample steps is \(K\), \(t_{k}=\frac{(1-\epsilon)}{K}k+\epsilon\)
Sample \(\mathbf{x}_{K}=\mathbf{x}(t_{K})=\mathbf{x}(1)=\alpha_{1}\mathbf{y}+G(1)\mathbf{z}\)
for \(k=K-1,K-2,\ldots,1\) do
    Input \(\mathbf{x}_{k+1}\), get \(\hat{\mathbf{\theta}}(\mathbf{x}_{k+1})\)
    Sample a Gaussian noise \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\)
    Compute \(\mathbf{x}_{k}\) from Eq. (17)
end for
Input \(\mathbf{x}_{1}\), get \(\hat{\mathbf{\theta}}(\mathbf{x}_{1})\)
\(\hat{\mathbf{x}}_{0}=\mathbf{x}_{1}-[f(\mathbf{x}_{1},\mathbf{y})-g_{1}^{2}\hat{\mathbf{\theta}}(\mathbf{x}_{1})]\Delta\)
return \(\hat{\mathbf{x}}_{0}\)
```
**Algorithm 2** Sampling (enhancing) stage
### The reverse process for sampling a clean
From [17, 28], the reverse process is also a diffusion process which can be represented as
\[d\mathbf{x}(t)=[f(\mathbf{x}(t),\mathbf{y})-g(t)^{2}\mathbf{\theta}(\mathbf{x}(t))]dt+g(t)d\mathbf{w} \tag{16}\]
where \(f(\mathbf{x}(t),\mathbf{y})=\mathbf{x}(t)\ln^{\prime}(\alpha_{t}\lambda_{t})-\mathbf{y}\alpha_{t}\ln^{\prime}\lambda_{t}\) for the VPIDM and \(\bar{\mathbf{w}}\) is another stochastic process with the same distribution as \(\mathbf{w}\). Typically, the continuous process is discretized in the sampling stage. We set \(\Delta=\frac{1-\epsilon}{K}\) and let \(\mathbf{x}_{k}=\mathbf{x}(\frac{k(1-\epsilon)}{K}+\epsilon)\) and \(g_{k}=g(\frac{k(1-\epsilon)}{K}+\epsilon)\sqrt{\Delta}\). The sampling stage is elaborated in Alg. 2.
\[\mathbf{x}_{k-1}=\mathbf{x}_{k}-[f(\mathbf{x}_{k},\mathbf{y})-g_{k}^{2}\mathbf{\theta}(\mathbf{x}_{k})] \Delta+g_{k}\mathbf{z} \tag{17}\]
where the subscript \(k\) of \(\mathbf{x}_{k}\) and \(g_{k}\) denote the discrete sampling time index, \(\mathbf{x}_{k}\) and \(g_{k}\) represent the discrete samplings of \(\mathbf{\pi}(t)\) and \(g(t)\).
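The sampling loop can be sketched as a standard Euler–Maruyama discretization of the reverse SDE in Eqs. (16)–(17); the interface `model(x, y, t)` and the default hyper-parameters (taken from Sec. 3.1) are assumptions, not the authors' released code.

```python
import math
import torch

@torch.no_grad()
def reverse_sample(model, y, K=25, eps=4e-2,
                   beta_min=0.1, beta_max=2.0, lam=1.5):
    """Euler-Maruyama discretization of the reverse SDE (Alg. 2 / Eq. (17))."""
    def coeffs(t):
        beta = (beta_max - beta_min) * t + beta_min
        integral = 0.5 * (beta_max - beta_min) * t ** 2 + beta_min * t
        alpha = math.exp(-0.5 * integral)
        g2 = beta + 2.0 * lam * (1.0 - math.exp(-integral))   # g(t)^2, Eq. (4)
        return alpha, beta, g2

    delta = (1.0 - eps) / K
    a1, _, _ = coeffs(1.0)
    x = a1 * y + math.sqrt(1.0 - a1 ** 2) * torch.randn_like(y)  # x_K
    for k in range(K, 0, -1):
        t = k * delta + eps
        alpha, beta, g2 = coeffs(t)
        f = -(0.5 * beta + lam) * x + lam * alpha * y            # drift, Eq. (5)
        x = x - (f - g2 * model(x, y, t)) * delta                # reverse update
        if k > 1:                                                # no noise at the last step
            x = x + math.sqrt(g2 * delta) * torch.randn_like(y)
    return x
```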
### Comparison with the VEIDM
Furthermore, the proposed VPIDM and the VEIDM proposed in [25] can be unified as
\[d\mathbf{x}(t)=\mathbf{x}(t)d\ln{(\alpha_{t}\lambda_{t})}-\mathbf{y}\alpha_{t}d\ln{ \lambda_{t}}+g(t)d\mathbf{w} \tag{18}\]
\[\mathbf{x}(t)=\alpha_{t}[\lambda_{t}\mathbf{x}_{0}+(1-\lambda_{t})\mathbf{y}]+G(t)\mathbf{z} \tag{19}\]
The relation between \(G\) and \(g\) is constrained by Eq. (3); we refer to this general formulation as the interpolation diffusion model (IDM). When \(G^{2}(t)=1-\alpha_{t}^{2}\), the interpolation diffusion becomes a VP-based method. In the case of VE-based interpolation diffusion, \(\alpha_{t}\) in Eqs. (18) and (19) is the constant \(1\). Substituting \(\alpha_{t}\) with the constant \(1\) and solving the ordinary differential equation in Eq. (3), we get \(G(t)\)
\[G^{2}(t)=\lambda_{t}^{2}G(0)^{2}+\lambda_{t}^{2}\int_{0}^{t}g^{2}(\tau)/\lambda _{\tau}^{2}d\tau \tag{20}\]
In [25], the interpolation coefficient \(\lambda_{t}\) is defined as \(e^{-\lambda t}\). It can be verified that the VEIDM, which uses \(G(t)\) from Eq. (10) and \(g(t)\) from Eq. (11), satisfies Eq. (20) as a special case. Additionally, a comparison between the VEIDM, the proposed VPIDM, and the IDM is given in Tab. 1.
In the reverse process, the initial sample of the VEIDM in [25] is taken as \(\mathbf{y}+G(1)\mathbf{z}\), rather than the ground truth \(\lambda_{1}\mathbf{x}_{0}+(1-\lambda_{1})\mathbf{y}+G(1)\mathbf{z}\), because the clean signal is unavailable at this stage. However, this mismatch can degrade the enhanced speech. To quantify this effect, we define the initial error (_IE_)
\[\textit{IE}_{\text{VEIDM}} =[\mathbf{y}+G(1)\mathbf{z}]-[\lambda_{1}\mathbf{x}_{0}+(1-\lambda_{1})\mathbf{y }+G(1)\mathbf{z}] \tag{21}\] \[=\lambda_{1}(\mathbf{y}-\mathbf{x}_{0}) \tag{22}\]
In contrast, the reverse process of the proposed VPIDM starts from \(\alpha_{1}\mathbf{y}+G(1)\mathbf{z}\), while the ground truth is \(\alpha_{1}(\lambda_{1}\mathbf{x}_{0}+(1-\lambda_{1})\mathbf{y})+G(1)\mathbf{z}\). The _IE_ is
\[\textit{IE}_{\text{VPIDM}}=\alpha_{1}\lambda_{1}(\mathbf{y}-\mathbf{x}_{0}) \tag{23}\]
When the same \(\lambda_{t}\) is utilized in Sgmse+ (the VEIDM implementation) and the VPIDM, \(\textit{IE}_{\text{VPIDM}}\ll\textit{IE}_{\text{Sgmse+}}\), since \(\alpha_{1}\to 0\). That is, the VPIDM has a smaller _IE_ than the VEIDM.
## 3 Experiments
### Training settings
We conduct our experiments on the publicly available VoiceBank-DEMAND (VBD) benchmark [29]. The metrics in [30, 31, 32], i.e., SI-SDR, SI-SIR, SI-SAR, PESQ, CSIG, CBAK, and COVL, are adopted to compare performance with other state-of-the-art methods. 25 speech clips from the test dataset are randomly selected as the validation dataset. We train the neural network for 120 epochs and save the best checkpoint, i.e., the one with the optimal PESQ on the validation set.
We use the neural network proposed in [26] as our backbone model, which was originally introduced in [17] for the image generation task. We treat the complex spectrum as a real-valued tensor to circumvent complex-valued computation, with the real and imaginary parts of the spectrum represented as two channels of the tensor. The tensor is scaled so that its amplitude approximately falls within the range of \(-1\) to \(1\) before being fed into the neural network. We follow the scaling function described in [25]: given a complex-valued spectrum \(\mathbf{x}(t)=|\mathbf{x}(t)|e^{\angle\mathbf{x}(t)}\), the scaled spectrum is \([\mathbf{x}(t)]^{s}=a|\mathbf{x}(t)|^{c}e^{\angle\mathbf{x}(t)}\), where \(a,c\) are two hyper-parameters and \([\cdot]^{s}\) denotes the scaling operation. We find that \(a50^{c}\approx 1\), \(0<c\leq 1\) benefits performance, and that \(c\) cannot be set too small, as that compresses the dynamic range of the signal drastically and makes it more difficult to learn the structure of the clean spectrum. In this paper, we empirically set \(a=0.15\), \(c=0.5\), \(\beta_{\text{min}}=0.1\), \(\beta_{\text{max}}=2\), \(\lambda_{t}=e^{-\lambda t}\) with \(\lambda=1.5\), and \(G(t)=\sqrt{1-\alpha_{t}^{2}}\).
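The scaling function is straightforward to reproduce; a sketch with the values \(a=0.15\), \(c=0.5\) above (function names are ours):

```python
import numpy as np

A, C = 0.15, 0.5   # chosen so that a * 50**c ≈ 1 (here 0.15 * 50**0.5 ≈ 1.06)

def scale_spectrum(x: np.ndarray) -> np.ndarray:
    """[x]^s = a * |x|^c * exp(j * angle(x)) on a complex STFT."""
    return A * np.abs(x) ** C * np.exp(1j * np.angle(x))

def unscale_spectrum(xs: np.ndarray) -> np.ndarray:
    """Inverse: |x| = (|xs| / a)^(1/c); the phase is untouched."""
    return (np.abs(xs) / A) ** (1.0 / C) * np.exp(1j * np.angle(xs))
```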
### Results and analyses
For the continuous-time diffusion model, we typically sample \(t\) from the uniform distribution \(\mathcal{U}(\epsilon,1)\), where \(\epsilon\) denotes the time index of the first state after the clean signal. Ideally, we want to set \(\epsilon\) as small as possible. However, when \(\epsilon\) is too small, training may fail to converge and the optimization process may fluctuate unstably.
In Fig. 2, we vary the value of \(\epsilon\) and observe the training loss over training steps. The results show that when \(\epsilon\) is too small, the training loss fluctuates heavily and has difficulty converging. In our view, the reason is
Table 1 contrasts the state equations and the stochastic differential equations of the three models.

VEIDM [25]:
\[\mathbf{x}(t)=\lambda_{t}\mathbf{x}_{0}+(1-\lambda_{t})\mathbf{y}+\sqrt{\frac{\ln\left(\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}}\right)\sigma_{\text{min}}^{2}\left((\sigma_{\text{max}}/\sigma_{\text{min}})^{2t}-e^{-2\lambda t}\right)}{\lambda+\ln(\sigma_{\text{max}}/\sigma_{\text{min}})}}\,\mathbf{z} \tag{10}\]
\[d\mathbf{x}(t)=\lambda(\mathbf{y}-\mathbf{x}(t))dt+\sigma_{\text{min}}\left(\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}}\right)^{t}\sqrt{2\ln\left(\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}}\right)}\,d\mathbf{w} \tag{11}\]

VPIDM:
\[\mathbf{x}(t)=\alpha_{t}[\lambda_{t}\mathbf{x}_{0}+(1-\lambda_{t})\mathbf{y}]+\sqrt{1-\alpha_{t}^{2}}\,\mathbf{z} \tag{12}\]
\[d\mathbf{x}(t)=[\mathbf{x}(t)\ln^{\prime}(\alpha_{t}\lambda_{t})-\mathbf{y}\alpha_{t}\ln^{\prime}\lambda_{t}]dt+\mathbf{\Sigma}_{t}d\mathbf{w} \tag{13}\]

IDM:
\[\mathbf{x}(t)=\alpha_{t}[\lambda_{t}\mathbf{x}_{0}+(1-\lambda_{t})\mathbf{y}]+G(t)\mathbf{z} \tag{14}\]
\[d\mathbf{x}(t)=\mathbf{x}(t)d\ln(\alpha_{t}\lambda_{t})-\mathbf{y}\alpha_{t}d\ln\lambda_{t}+g(t)d\mathbf{w} \tag{15}\]

Table 1: Comparison of the VEIDM, the proposed VPIDM, and the IDM.
| Settings | PESQ \(\uparrow\) | SI-SDR \(\uparrow\) | SI-SIR \(\uparrow\) | SI-SAR \(\uparrow\) |
| --- | --- | --- | --- | --- |
| \(\epsilon=1\cdot 10^{-2}\) | 3.01 | 18.3 | **31.9** | 18.6 |
| \(\epsilon=3\cdot 10^{-2}\) | 3.02 | **18.9** | 30.1 | 19.4 |
| \(\epsilon=4\cdot 10^{-2}\) | **3.13** | 18.7 | 28.6 | 19.3 |
| \(\epsilon=5\cdot 10^{-2}\) | 2.86 | 18.6 | 27.6 | 19.4 |
| \(\epsilon=6\cdot 10^{-2}\) | 2.95 | 18.7 | 26.6 | **19.7** |

Table 2: Performance under different settings of \(\epsilon\).
that, when \(\epsilon\) is too small, the model is required to estimate the target under a wider range of SNR conditions. We conducted experiments to check the model's ability to predict the target in low-SNR conditions using only one state (\(t=\epsilon\)) as the input to the model. However, the model cannot predict the target well when \(\epsilon<10^{-2}\). That is, when \(t\to 0\), \(\alpha_{t}\to 1\), \(\lambda_{t}\to 1\), and then \(\mathbf{x}(t)\approx\mathbf{x}_{0}+\sqrt{1-\alpha_{t}^{2}}\mathbf{z}\). From the perspective of the Gaussian noise, the SNR \(\approx 10\log_{10}(\frac{1-\alpha_{t}^{2}}{\alpha_{t}^{2}})\approx 10\log_{10}(\beta_{\text{min}}t)\). Therefore, when \(t=10^{-5},10^{-4},10^{-3},10^{-2}\), the model predicts the target at about \(-60\) dB, \(-50\) dB, \(-40\) dB, and \(-30\) dB SNR, respectively. Based on these results, we choose \(\epsilon\geq 10^{-2}\). We treat \(\epsilon\) as the minimum resolution of the model, so we set the number of sample steps in the reverse process to \(K\approx[\frac{1}{\epsilon}]\). As a result, for \(\epsilon=[10^{-2},3\cdot 10^{-2},4\cdot 10^{-2},5\cdot 10^{-2},6\cdot 10^{-2},10^{-1}]\), the numbers of sampling steps are \(100,30,25,20,15,10\), respectively. According to Tab. 2, the best PESQ is achieved when \(\epsilon=4\cdot 10^{-2}\), which corresponds to \(25\) sample steps. The model demonstrates strong performance across the evaluation metrics when the number of sample steps exceeds \(15\), but when there are fewer than \(15\) sample steps, performance drops significantly.
In Tab. 3, we compare our model to several SOTA methods. Sgmse+ (1) denotes the vanilla model without the corrector, while Sgmse+ (2) includes the corrector. The proposed model achieves the best CSIG, indicating the least speech distortion. This demonstrates that our generative model has learned the distribution closest to the ground-truth clean speech and can thus best preserve it. Sgmse+ (1) in [26] is the VE-based interpolation diffusion with 30 sampling steps. The proposed method, with fewer steps, i.e., 25 steps (\(\epsilon=4\cdot 10^{-2}\)), achieves a 0.3 PESQ increment over Sgmse+ (1). To further improve performance, Sgmse+ (2) in [26] employs a corrector, which requires 60 total steps. The proposed method requires fewer than half the steps of Sgmse+ (2) and obtains a 0.2 PESQ improvement. In fact, the scaling of the signal by \(\alpha_{t}\) on the right side of Eq. (1) is a type of data augmentation that helps the model learn the intrinsic structure of clean speech. However, as shown in Fig. 1, starting from \(\mathbf{y}+g(1)\mathbf{z}\) in the sampling stage causes the neural network of Sgmse+ (2) to treat some noise in \(\mathbf{y}\) as clean speech. Therefore, some noise in the noisy speech is not removed thoroughly. From the two black ellipses in Fig. 1, we can see that Sgmse+ (2) leaves residual noise in the 0–0.6 s and 2–2.1 s time intervals.
## 4 Conclusions
In this paper, we present the VP-based interpolation diffusion model in a continuous time system and summarize the VE- and VP-based interpolation diffusion models into a more concise framework called the IDM. The VE- and VP-based interpolation diffusion models serve as examples of the IDM. While we only apply the VPIDM to the SE task to showcase our proposed method, it is worth noting that the approach is a general method that can be used for other tasks as well.
## 5 Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grant 62171427. We also thank Midea Group Co., Ltd. for funding this work.
Figure 1: The log spectrogram of the STFT magnitudes. a) the clean \(\mathbf{x}_{0}\) spectrogram; b) the noisy \(\mathbf{y}\) spectrogram; c) the spectrogram of the estimated clean from the Sgmse+ (2); d) the spectrogram of the estimated clean from the VPIDM.
| Model | PESQ \(\uparrow\) | CSIG \(\uparrow\) | CBAK \(\uparrow\) | COVL \(\uparrow\) |
| --- | --- | --- | --- | --- |
| noisy | 1.97 | 3.35 | 2.44 | 2.64 |
| PFP [33] | **3.15** | 4.18 | **3.60** | 3.67 |
| MetricGAN [12] | 2.86 | 3.99 | 3.18 | 3.42 |
| MetricGAN+ [13] | 3.15 | 4.14 | 3.16 | 3.64 |
| CDiffuSE [23] | 2.52 | 3.72 | 2.91 | 3.10 |
| SRTNET [24] | 2.69 | 4.12 | 3.19 | 3.39 |
| CDSE [34] | 2.77 | 3.91 | 3.32 | 3.33 |
| Sgmse+ (1) [26] | 2.80 | 4.10 | 3.24 | 3.44 |
| Sgmse+ (2) [26] | 2.93 | 4.12 | 3.37 | 3.51 |
| VPIDM | 3.13 | **4.63** | 3.41 | **3.94** |

Table 3: The proposed method versus some SOTA methods with respect to different metrics.
Figure 2: The training loss when \(t\sim\mathcal{U}(\epsilon,1)\). |
2305.11526 | Enhancing Short-Term Wind Speed Forecasting using Graph Attention and
Frequency-Enhanced Mechanisms | The safe and stable operation of power systems is greatly challenged by the
high variability and randomness of wind power in large-scale
wind-power-integrated grids. Wind power forecasting is an effective solution to
tackle this issue, with wind speed forecasting being an essential aspect. In
this paper, a Graph-attentive Frequency-enhanced Spatial-Temporal Wind Speed
Forecasting model based on graph attention and frequency-enhanced mechanisms,
i.e., GFST-WSF, is proposed to improve the accuracy of short-term wind speed
forecasting. The GFST-WSF comprises a Transformer architecture for temporal
feature extraction and a Graph Attention Network (GAT) for spatial feature
extraction. The GAT is specifically designed to capture the complex spatial
dependencies among wind speed stations to effectively aggregate information
from neighboring nodes in the graph, thus enhancing the spatial representation
of the data. To model the time lag in wind speed correlation between adjacent
wind farms caused by geographical factors, a dynamic complex adjacency matrix
is formulated and utilized by the GAT. Benefiting from the effective
spatio-temporal feature extraction and the deep architecture of the
Transformer, the GFST-WSF outperforms other baselines in wind speed forecasting
for the 6-24 hours ahead forecast horizon in case studies. | Hao Liu, Huimin Ma, Tianyu Hu | 2023-05-19T08:50:58Z | http://arxiv.org/abs/2305.11526v2 | # Enhancing Short-Term Wind Speed Forecasting using Graph Attention and Frequency-Enhanced Mechanisms
###### Abstract
The safe and stable operation of power systems is greatly challenged by the high variability and randomness of wind power in large-scale wind-power-integrated grids. Wind power forecasting is an effective solution to tackle this issue, with wind speed forecasting being an essential aspect. In this paper, a Graph-attentive Frequency-enhanced Spatial-Temporal Wind Speed Forecasting model based on graph attention and frequency-enhanced mechanisms, i.e., GFST-WSF, is proposed to improve the accuracy of short-term wind speed forecasting. The GFST-WSF comprises a Transformer architecture for temporal feature extraction and a Graph Attention Network (GAT) for spatial feature extraction. The GAT is specifically designed to capture the complex spatial dependencies among wind speed stations to effectively aggregate information from neighboring nodes in the graph, thus enhancing the spatial representation of the data. To model the time lag in wind speed correlation between adjacent wind farms caused by geographical factors, a dynamic complex adjacency matrix is formulated and utilized by the GAT. Benefiting from the effective spatio-temporal feature extraction and the deep architecture of the Transformer, the GFST-WSF outperforms other baselines in wind speed forecasting for the 6-24 hours ahead forecast horizon in case studies.
Wind speed forecast, deep learning, spatial-temporal correlations, Transformer, GAT.
## I Introduction
In recent years, the rapid development of modern industry and the exponential growth of population have caused environmental pollution and the global energy crisis to become increasingly serious. As a result, renewable energy has gained widespread attention for its potential to solve environmental and energy problems. Among clean renewable energy sources, wind power has seen overwhelming growth over the past decade, with global wind power capacity reaching 837 GW in 2021 [1]. However, the variability of wind speed can lead to intermittent power generation, which poses challenges to the safety and stability of smart grid energy management systems. Studies have shown that wind power forecasting is one of the most cost-effective and efficient methods to minimize this risk [2, 3, 4]. Therefore, accurate wind power forecasting is urgently needed, as is wind speed forecasting, which serves as the foundation for wind power forecasting [5, 6].
Numerous wind speed forecasting methods have been proposed, including physical, statistical, machine learning, and deep learning methods. Physical methods mostly rely on Numeric Weather Prediction (NWP), which uses weather data such as temperatures and pressure to solve complex mathematical and physical models and forecast wind speed [7]. Statistical methods, on the other hand, construct a linear or nonlinear function from historical data to forecast future wind speed values [8], e.g., Auto-Regressive Moving Average (ARMA) [9], Auto-Regressive Integrated Moving Average (ARIMA) [10], and Support Vector Regression (SVR) [11], etc.
Artificial intelligence technologies have led to the emergence and development of numerous machine learning and deep learning models for wind speed forecasting. AI-based short-term wind power forecasting methods include Extreme Learning Machines (ELM) [12], Light Gradient Boosting Machine (LightGBM) [13], Artificial Neural Network (ANN) [14], Convolutional Neural Network (CNN) [15], Recurrent Neural Network (RNN) [16], Long Short-term Memory Network (LSTM) [17], among others. However, these methods have mainly relied on historical data from individual wind farms, hardly considering cross-farm spatial correlation [18]. As multiple wind farms close to each other are typically within the same wind belt, wind speeds are strongly correlated spatially. As a result, several studies considering both temporal and spatial correlation between adjacent wind fields have emerged in recent years, and have further improved forecasting performance [19, 20, 21, 22].
In [19], a spatial vector consisting of wind speed values from multiple nodes was used as the spatial feature input to the forecasting model, while the temporal features were extracted by LSTM. [20] proposed a model for wind speed forecasting with spatio-temporal correlation, integrating CNN and a multi-layer perceptron (MLP). Here, the spatial features were extracted by CNN, and the MLP captured the temporal dependencies among these extracted spatial features. In [21], a deep architecture named the Predictive Spatio-Temporal Network (PSTN) was used for wind speed forecasting, integrating CNN and LSTM. Similar to the CNN-MLP model, the temporal feature extraction module of this model was changed from MLP to LSTM. Finally, in [22], a model was proposed to forecast the wind farm cluster by combining Graph Convolution Networks (GCN) with LSTM. The performance improvements observed in these methods, whether using CNN or Graph Neural Network (GNN), demonstrate the effectiveness of utilizing spatial correlation for wind speed forecasting. Therefore, we utilize the GAT [23] to extract spatial correlation features between neighboring wind farms, aiming to further enhance the forecasting performance of our
model.
Recently, the Transformer architecture has gained attention in deep learning for its ability to model long-range dependencies (as exemplified by ChatGPT), which makes it suitable for time series modeling [24]. In this paper, we leverage the high-performing Frequency Enhanced Decomposed Transformer (FEDformer) [25] to model the auto-correlation of wind speed series. However, we observe that FEDformer primarily operates on frequency-domain information obtained through Fourier transformation and lacks traditional processing of time-domain information. To address this limitation, we introduce a multi-head self-attention mechanism that allows us to effectively handle time-domain information. By incorporating this mechanism, we enhance the model's ability to capture temporal features and improve its forecasting performance.
In this study, we analyzed wind speed data from 25 wind farms to determine their correlation. We found that the spatial distribution of wind farms varies, resulting in potential time lags in the correlation of wind speeds. Additionally, wind speeds can vary with different wind directions. To address these challenges, we propose a dynamic complex adjacency matrix that can simultaneously consider the spatial correlation and time lags between wind speeds from different wind farms. Furthermore, it can reflect real-time changes in the correlation between wind speeds, capturing their evolving relationship.
To capture the spatio-temporal features among neighboring wind farms, we integrate GAT and FEDformer models and incorporate a multi-head self-attention mechanism to effectively process time-domain information. As a result, we propose the GFST-WSF model.
The main contributions of this paper are summarized as follows:
1. We propose a complex adjacency matrix to represent the spatio-temporal correlation among neighboring wind farms. Each element of the matrix is a complex number, where the real part represents the strength of the correlation, and the imaginary part represents the time difference of the correlation's occurrence. This approach provides a more comprehensive representation of the spatio-temporal correlation, leading to better forecasting performance in the proposed GFST-WSF model.
2. We enhance the GFST-WSF model with the GAT to effectively utilize the complex adjacency matrix and capture intricate spatial dependencies among neighboring wind farms' wind speed. This approach improves the model's ability to extract spatial features, which is crucial for wind speed forecasting accuracy.
3. We introduce the FEDformer architecture, which leverages a frequency-enhanced mechanism to improve the temporal information extraction process in wind speed forecasting. We have further improved the FEDformer by adding the multi-headed attention mechanism to enhance its ability to capture long and short-term dependencies in the wind speed series. The deep architecture and multi-head attention of the GFST-WSF model enables it to effectively extract the temporal correlation within the wind speed from neighboring wind farms.
The rest of this paper is organized as follows: Section II presents the problem description; Section III presents the framework of GFST-WSF; Section IV presents the case studies; conclusions are drawn in Section V.
## II Problem Description
We focus on the problem of short-term spatial-temporal wind speed forecasting for multiple wind farms. The meteorological data (including wind speed, wind direction, air pressure, temperature, etc.) of the \(i\)-th wind farm at time period \(t\) can be denoted as \(x_{t}^{i}\). Thus, the data vector of all the wind farms at time \(t\) is \(\mathbf{x}_{t}=\{x_{t}^{1},x_{t}^{2},x_{t}^{3},...,x_{t}^{n}\}\), where \(n\) represents the total number of wind farms. We use multiple variables to forecast the wind velocity vector \(p_{t+r}\) of a single wind farm, where \(r\in Z^{+}\) represents the lead time.
Therefore, the forecasting model can be formulated as follows:
\[f(\mathbf{x}_{t},\mathbf{x}_{t-1},\mathbf{x}_{t-2},...\mathbf{x}_{t-(m-1)}, \theta,\mathcal{G})=\hat{p}_{t+r} \tag{1}\]
where \(f(\cdot)\) represents mapping function, \(\theta\) represents the parameters of \(f(\cdot)\), \(\mathcal{G}\) represents the spatial relationships among wind farms, \(\hat{p}_{t+r}\) is the forecasted value of \(p_{t+r}\), \(m\) is the historical period for forecasting.
## III The Framework of GFST-WSF
Inspired by recent innovations in the field of time series forecasting, GFST-WSF adopts an encoder-decoder structure, which includes graph neural network encoding, multi-head attention from the original Transformer, sequence decomposition using average pooling, frequency-enhanced mechanism from FEDformer for sequence-level connections, and feedforward networks, as shown in Fig.1.
### _GAT Block_
The GAT Block, illustrated in Fig.2, consists of two components: a GAT and a Time Series Transform module (TST). Initially, the dynamic complex adjacency matrix \(\Lambda\) is decomposed into a real part matrix \(\mathcal{A}\) and an imaginary part matrix \(\mathcal{B}\). The elements of \(\mathcal{A}\) reflect the correlation between wind farms, while the elements of \(\mathcal{B}\) represent the time difference between them. In the TST module, the original time series features are transformed according to the elements of \(\mathcal{B}\). Then, the updated time series features are used as input to the GAT, with the adjacency matrix of GAT being \(\mathcal{A}\). Finally, the learned spatio-temporal information is utilized as input to the encoder.
#### III-A1 Building a Dynamic Complex Adjacency Matrix
In this study, we propose a novel approach to represent the correlation between all wind farms in a given graph. Traditionally, the correlation between wind farms is represented using a simple adjacency matrix, where the nodes are the wind farms and the edges represent the correlation between them. However, due to the variability of wind speed, distance between wind farms, and constantly changing wind direction, the correlation between adjacent wind farms may exist at different time lags. To address this issue, we introduce a dynamic complex adjacency matrix denoted as \(\Lambda\) to represent the time difference in the
correlation between wind farms. This matrix contains complex values of the form \(a+bi\), as illustrated in Fig.2. The real part of the complex value, \(a\), represents the correlation between wind farms. If \(a>0\), it indicates a positive correlation between wind farms. On the other hand, the imaginary part of the complex value, \(b\), represents the time difference in the correlation between wind farms. If there is no correlation between wind farms, the value of \(b\) is invalid. The complex adjacency matrix is defined as follows:
\[\Lambda(s_{i},s_{j})=\begin{cases}a+bi&\text{if}\ \ a>0\\ 0&\text{otherwise}\end{cases} \tag{2}\]
And the definition of the parameter \(a\) is given as follows:
\[a=\begin{cases}1&if\ \ \ F(s_{i},s_{j})>\beta\\ 0&otherwise\end{cases} \tag{3}\]
where \(\beta\) is the threshold of the correlation coefficient, \(s_{i}\) and \(s_{j}\) represent the wind speed series of the \(i\)-th and \(j\)-th wind farm, and \(F(\cdot)\) is a function that calculates the correlation between the two wind farms. In our study, we use the Pearson correlation coefficient, which is defined as follows:
\[F(s_{i},s_{j})=\frac{\sum{(s_{i}-\bar{s}_{i})(s_{j}-\bar{s}_{j})}}{\sqrt{\sum{( s_{i}-\bar{s}_{i})^{2}\sum{(s_{j}-\bar{s}_{j})^{2}}}}} \tag{4}\]
To calculate the time difference \(b\) between any two wind farms, we use the latitude and longitude information in the dataset to determine the azimuth information. We also apply the Haversine formula [26] to calculate the distance between any two points. Based on these values, we factor in the azimuth, wind direction, and distance to determine the time difference \(b\).
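Since the paper gives no closed form for \(b\), the sketch below combines the correlation thresholding of Eqs. (2)–(4) with one plausible lag rule: the Haversine distance projected onto the wind direction, divided by the wind speed, and rounded to 15-minute steps. `complex_adjacency` and its arguments are our own naming, not the authors' code.

```python
import numpy as np

R_EARTH_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    h = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * R_EARTH_KM * np.arcsin(np.sqrt(h))

def complex_adjacency(speeds, coords, wind_dir_deg, wind_speed_ms,
                      beta=0.5, step_min=15.0):
    """Eq. (2): Lambda[i, j] = a + b*1j. `speeds` is an (n, T) array of wind
    series, `coords` a list of (lat, lon) pairs. The lag rule is hypothetical."""
    n = len(coords)
    corr = np.corrcoef(speeds)                    # Pearson correlation, Eq. (4)
    adj = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i == j or corr[i, j] <= beta:
                continue                          # a = 0, Eq. (3)
            (lat1, lon1), (lat2, lon2) = coords[i], coords[j]
            dist = haversine_km(lat1, lon1, lat2, lon2)
            azim = np.degrees(np.arctan2(lon2 - lon1, lat2 - lat1))
            # distance component along the wind direction -> propagation time
            eff = dist * max(np.cos(np.radians(azim - wind_dir_deg)), 0.0)
            lag = round(eff * 1000.0 / wind_speed_ms / 60.0 / step_min)
            adj[i, j] = 1 + lag * 1j              # a = 1 plus the lag in steps
    return adj
```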
#### Ii-B2 Graph Attention Networks
Wind is a rapidly propagating air flow that affects a large area, resulting in intrinsic
Fig. 1: The GFST-WSF Structure. The Self-Attention Block utilizes the original multi-headed attention mechanism in Transformer. The GAT Block is composed of the time series transform module (TST) and the graph attention network (GAT). The Frequency Enhanced Block (FEB) and Frequency Enhanced Attention (FEA) are utilized to perform representation learning in frequency domain. The series decomposition blocks (SeriesDecomp) are employed to decompose the series into trend-cyclical and seasonal parts.
correlations between the wind speeds of adjacent wind farms. To capture these relationships, we model each wind farm as a graph node and the meteorological information series for the wind farm as the node's characteristic quantity. The correlation between wind farms is captured as the attribute of the graph's edges.
GNN [27] refers to a general class of models that apply neural networks to graphs. These models can be categorized into different types based on their underlying techniques. However, as wind direction changes over time, the correlation between adjacent wind farms can also change. To account for this dynamic relationship, we employ GAT [23], which leverages masked self-attentional layers to overcome the limitations of prior methods, such as GCN and Graph Sample and Aggregate (GraphSAGE), which cannot handle dynamic graph relationships.
The input of GAT is a set of node features, \(\mathbf{h}=\{\vec{h}_{1},\vec{h}_{2},...,\vec{h}_{n}\}\), where \(n\) is the number of nodes. The importance of node \(j\)'s features to node \(i\) is defined as:
\[e_{ij}=a(\mathbf{W}\vec{h}_{i},\mathbf{W}\vec{h}_{j}) \tag{5}\]
where \(\mathbf{W}\) is a weight matrix, and the attention mechanism \(a(\cdot)\) is a single-layer feedforward neural network. The coefficients are then normalized across neighbors using the softmax function:
\[\alpha_{ij}=softmax(e_{ij})=\frac{exp(e_{ij})}{\sum_{k\in\mathcal{N}_{i}}exp(e _{ik})} \tag{6}\]
Furthermore, GAT introduces the multi-head attention mechanism into graph neural networks. Specifically, \(K\) independent attention computations are performed and their outputs concatenated:
\[\vec{h}^{\prime}_{i}=\|_{k=1}^{K}\sigma\left(\sum_{j\in\mathcal{N}_{i}}\alpha _{ij}^{k}\omega^{k}\vec{h}_{j}\right) \tag{7}\]
where \(\|\) is the concatenation operation and \(\omega^{k}\) is the corresponding input linear transformation's weight matrix.
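A single-head PyTorch sketch of Eqs. (5)–(7) (the multi-head version of Eq. (7) stacks \(K\) of these and concatenates); the masking convention, which assumes self-loops on the diagonal of the adjacency, is our choice:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head graph attention layer following Eqs. (5)-(7) (a sketch)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared transform W
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention network a(.)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (n, in_dim) node features; adj: (n, n) real part of Eq. (2),
        # assumed to contain self-loops (non-zero diagonal)
        wh = self.W(h)
        n = wh.size(0)
        pairs = torch.cat([wh.repeat_interleave(n, dim=0),
                           wh.repeat(n, 1)], dim=-1)       # every (i, j) pair
        e = F.leaky_relu(self.a(pairs)).view(n, n)         # Eq. (5)
        e = e.masked_fill(adj == 0, float("-inf"))         # mask non-neighbors
        alpha = torch.softmax(e, dim=-1)                   # Eq. (6)
        return F.elu(alpha @ wh)                           # Eq. (7) with K = 1
```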
### _Encoder_
The encoder comprises multiple layers, with the output of the \(l\)-th layer denoted as \(\mathcal{X}_{en}^{l}=Encoder(\mathcal{X}_{en}^{l-1})\), where \(l\in\{1,2,..,N\}\). The input to the encoder is \(\mathcal{X}_{en}^{0}\), which is the historical series that has been embedded using a graph attention network and one-dimensional convolution. The spatio-temporal information learned from this embedding is fed to the encoder to enable learning of the sequence through frequency enhanced blocks, which are then decomposed using average pooling. To capture interdependent features in the sequence and increase model variability, multi-head attention is used. The overall learning process is formulated as follows:
\[\mathcal{X}_{en}^{0}=Conv1D(GAT(\mathcal{X},\mathcal{G})) \tag{8}\]
\[\mathcal{I}_{en}^{l}=Attention(\mathcal{X}_{en}^{l-1})+\mathcal{X}_{en}^{l-1} \tag{9}\]
\[\mathcal{S}_{en}^{l,1},\_=SeriesDecomp(FEB(\mathcal{I}_{en}^{l})+\mathcal{I}_{en}^{l}) \tag{10}\]
\[\mathcal{S}_{en}^{l,2},\_=SeriesDecomp(FeedForward(\mathcal{S}_{en}^{l,1})+\mathcal{S}_{en}^{l,1}) \tag{11}\]
\[\mathcal{X}_{en}^{l}=\mathcal{S}_{en}^{l,2} \tag{12}\]
where \(\mathcal{X}\) represents the meteorological information of all wind farms, \(\mathcal{G}\) represents the graph network code of the sites, \(\mathcal{I}_{en}^{l}\) is the result of multi-head attention, and \(\mathcal{S}_{en}^{l,i},i\in{1,2}\) represents the seasonal component after the \(i\)-th series decomposition block in the \(l\)-th layer respectively. The frequency enhanced block (FEB) module is implemented using a Discrete Fourier Transform (DFT) mechanism.
#### III-B1 Frequency Enhanced Block with Fourier Transform (FEB)
The structure presented in this paper leverages the Discrete Fourier Transform (DFT), which was first introduced in [25]. By employing the Fast Fourier Transform (FFT) and randomly selecting a fixed number of Fourier components, the computational complexity of the structure can be reduced to \(O(N)\), making it more efficient.
The FEB block takes an input \(\mathbf{x}\in\mathbb{R}^{N\times D}\), which is first linearly projected using \(\mathbf{w}\in\mathbb{R}^{D\times D}\), and then converted to the
Fig. 2: GAT Block. Constructing a complex adjacency matrix based on historical data (illustrated by a simple example in the figure), where the upper part represents the real components and the lower part represents the imaginary components.
frequency domain using the Fourier transform. Specifically, the input is transformed as follows: \(\mathbf{q}=\mathbf{x}\cdot\mathbf{w},~{}~{}\mathbf{Q}=\mathcal{F}(\mathbf{q})\), where \(\mathcal{F}\) represents the Fourier transform, and \(\mathbf{Q}\in\mathbb{C}^{N\times D}\). In the frequency domain, only a randomly selected subset of \(M(M<<N)\) modes are retained, resulting in \(\mathbf{\tilde{Q}}=Select(\mathbf{Q})\), where \(\mathbf{\tilde{Q}}\in\mathbb{C}^{M\times D}\). Thus, the FEB is defined as
\[FEB(\mathbf{q})=\mathcal{F}^{-1}(Padding(\mathbf{\tilde{Q}}\odot\mathbf{R})) \tag{13}\]
where \(\mathbf{R}\in\mathbb{C}^{D\times D\times M}\) is a randomly initialized parameterized kernel and \(\mathcal{F}^{-1}\) denotes the inverse Fourier transform. The operator \(\odot\) is complex matrix multiplication. The result is zero-padded to \(\mathbb{C}^{N\times D}\) and converted back to the time domain by \(\mathcal{F}^{-1}\).
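A compact PyTorch sketch of the FEB pipeline: linear projection, FFT, a fixed random selection of \(M\) modes, multiplication by the learned complex kernel \(\mathbf{R}\), zero padding, and inverse FFT. Module and parameter names are our assumptions.

```python
import torch
import torch.nn as nn

class FEB(nn.Module):
    """Frequency Enhanced Block implementing Eq. (13) (a sketch)."""
    def __init__(self, d_model: int, seq_len: int, modes: int = 32):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)          # q = x . w
        n_freq = seq_len // 2 + 1                        # rfft output length
        self.index = sorted(torch.randperm(n_freq)[:modes].tolist())
        self.R = nn.Parameter(torch.randn(len(self.index), d_model, d_model,
                                          dtype=torch.cfloat) / d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        Q = torch.fft.rfft(self.proj(x), dim=1)          # to the frequency domain
        out = torch.zeros_like(Q)                        # zero padding of unkept modes
        for m, idx in enumerate(self.index):
            out[:, idx, :] = Q[:, idx, :] @ self.R[m]    # complex matrix multiply
        return torch.fft.irfft(out, n=x.size(1), dim=1)  # back to the time domain
```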
#### III-B2 Series Decomp
In order to model the complex temporal patterns in wind speed sequences, the decomposition approach is adopted to divide the time series into two components: the trend-cyclical component and the seasonal component, which respectively reflect the long-term progress and periodicity of the time series. In practice, the moving average method is used to smooth out the cyclical fluctuations and highlight the long-term trend. Specifically, for an input sequence \(\mathcal{X}\) of length \(L\), the decomposition process is as follows:
\[\mathcal{X}_{t} =AvgPool(Padding(\mathcal{X})) \tag{14}\] \[\mathcal{X}_{s} =\mathcal{X}-\mathcal{X}_{t} \tag{15}\]
where \(\mathcal{X}_{t},\mathcal{X}_{s}\) denote the trend-cyclical and the seasonal part respectively. We use the \(AvgPool()\) operation with padding for moving average and summarize the above equations using \(\mathcal{X}_{t},\mathcal{X}_{s}=SeriesDecomp(\mathcal{X})\).
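A sketch of the decomposition block, replicating the series endpoints so the moving average preserves the sequence length:

```python
import torch
import torch.nn as nn

class SeriesDecomp(nn.Module):
    """Eqs. (14)-(15): moving-average trend plus seasonal residual (a sketch)."""
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.k = kernel_size
        self.avg = nn.AvgPool1d(kernel_size, stride=1)

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, d_model); repeat the endpoints so that the
        # moving average keeps the original sequence length
        front = x[:, :1, :].repeat(1, (self.k - 1) // 2, 1)
        back = x[:, -1:, :].repeat(1, self.k // 2, 1)
        padded = torch.cat([front, x, back], dim=1).transpose(1, 2)
        trend = self.avg(padded).transpose(1, 2)          # Eq. (14)
        return x - trend, trend                           # seasonal (Eq. (15)), trend
```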
### _Decoder_
The decoder also adopts a multilayer structure: \(\mathcal{X}^{l}_{de},\mathcal{T}^{l}_{de}=Decoder(\mathcal{X}^{l-1}_{de},\mathcal{T}^{l-1}_{de})\), where \(\mathcal{X}^{l}_{de}\) and \(\mathcal{T}^{l}_{de}\) are the outputs of the \(l\)-th decoder layer, \(l\in\{1,2,...,M\}\). The decoder takes two inputs: \(\mathcal{X}_{de}\), the spatio-temporal embedding obtained from the seasonal initialization term through graph attention and one-dimensional convolution, and \(\mathcal{T}_{de}\), the direct input of the trend initialization term. The first-layer input \(\mathcal{X}^{0}_{de}\) is obtained by applying a one-dimensional convolution to the output of a graph attention layer that takes as input the concatenation of the decomposed \(\mathcal{X}\) and the seasonal initialization term \(\mathcal{S}_{init}\), with the graph \(\mathcal{G}\) providing the attention structure. In each subsequent layer \(l\), the input \(\mathcal{X}^{l-1}_{de}\) is refined by an attention mechanism and combined with the output of the encoder for the following decoding. The trend component \(\mathcal{T}_{de}\) is accumulated across the layers to improve the model's inference ability. Specifically, we define the decoder layers as follows:
\[\mathcal{S}^{0}_{de}=Concat(SeriesDecomp(\mathcal{X}),\mathcal{S}_{init}) \tag{16}\]
\[\mathcal{X}^{0}_{de}=Conv1D(GAT(\mathcal{S}^{0}_{de},\mathcal{G})) \tag{17}\]
\[\mathcal{I}^{l}_{de}=Attention(\mathcal{X}^{l-1}_{de})+\mathcal{X}^{l-1}_{de} \tag{18}\]
\[\mathcal{T}^{0}_{de}=Concat(SeriesDecomp(\mathcal{X}),\mathcal{T}_{init}) \tag{19}\]
\[\mathcal{S}^{l,1}_{de},\mathcal{T}^{l,1}_{de}=SeriesDecomp(FEB(\mathcal{I}^{l}_{de})+\mathcal{I}^{l}_{de}) \tag{20}\]
\[\mathcal{S}^{l,2}_{de},\mathcal{T}^{l,2}_{de}=SeriesDecomp(FEA(\mathcal{S}^{l,1}_{de},\mathcal{X}^{N}_{en})+\mathcal{S}^{l,1}_{de}) \tag{21}\]
\[\mathcal{S}^{l,3}_{de},\mathcal{T}^{l,3}_{de}=SeriesDecomp(FeedForward(\mathcal{S}^{l,2}_{de})+\mathcal{S}^{l,2}_{de}) \tag{22}\]
\[\mathcal{T}^{l}_{de}=\mathcal{T}^{l-1}_{de}+\mathcal{W}_{l,1}\cdot\mathcal{T}^{l,1}_{de}+\mathcal{W}_{l,2}\cdot\mathcal{T}^{l,2}_{de}+\mathcal{W}_{l,3}\cdot\mathcal{T}^{l,3}_{de} \tag{23}\]
\[\mathcal{X}^{l}_{de}=\mathcal{S}^{l,3}_{de} \tag{24}\]
where \(\mathcal{S}_{init}\) and \(\mathcal{T}_{init}\) represent the initialization of seasonal and trend terms respectively. After the \(i\)-th series decomposition block in the \(l\)-th layer, the seasonal and trend components are represented by \(\mathcal{S}^{l,i}_{de}\) and \(\mathcal{T}^{l,i}_{de}\), where \(i\in\{1,2,3\}\). The linear projector for the \(i\)-th extracted trend \(\mathcal{T}^{l,i}_{de}\) is denoted by \(\mathcal{W}_{l,i}\). The frequency enhanced attention (FEA) is similar to FEB in that it uses DFT projection with an attention design.
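Putting the pieces together, the following sketches one decoder layer implementing Eqs. (18)–(24). The `feb`, `fea`, and `decomp` modules are injected; the FEB and SeriesDecomp sketches above can be passed in, and any cross-attention module can stand in for the FEA of Eq. (26). The class name and constructor interface are our assumptions.

```python
import torch
import torch.nn as nn

class DecoderLayerSketch(nn.Module):
    """One decoder layer per Eqs. (18)-(24): three decompositions, accumulated trend."""
    def __init__(self, d_model: int, feb: nn.Module, fea: nn.Module,
                 decomp: nn.Module, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.feb, self.fea, self.decomp = feb, fea, decomp
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        # W_{l,i}: linear projectors for the three extracted trends, Eq. (23)
        self.trend_proj = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(3)])

    def forward(self, x, trend, enc_out):
        i_de = self.attn(x, x, x)[0] + x                     # Eq. (18)
        s1, t1 = self.decomp(self.feb(i_de) + i_de)          # Eq. (20)
        s2, t2 = self.decomp(self.fea(s1, enc_out) + s1)     # Eq. (21)
        s3, t3 = self.decomp(self.ffn(s2) + s2)              # Eq. (22)
        for proj, t in zip(self.trend_proj, (t1, t2, t3)):
            trend = trend + proj(t)                          # Eq. (23)
        return s3, trend                                     # Eq. (24)
```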
**Frequency Enhanced Attention with Fourier Transform(FEA)** FEA takes inputs that are similar to those of the original Transformer, including queries \(\mathbf{q}\in\mathbb{R}^{L\times D}\), keys \(\mathbf{k}\in\mathbb{R}^{L\times D}\), and values \(\mathbf{v}\in\mathbb{R}^{L\times D}\). The queries are sourced from the decoder, while the keys and values come from the encoder : \(\mathbf{q}=\mathbf{x}_{de}\cdot\mathbf{w}_{q}\), \(\mathbf{k}=\mathbf{x}_{en}\cdot\mathbf{w}_{k}\), \(\mathbf{v}=\mathbf{x}_{en}\cdot\mathbf{w}_{v}\), where \(\mathbf{w}_{q},\mathbf{w}_{k},\mathbf{w}_{v}\in\mathbb{R}^{D\times D}\). Then the canonical attention can be defined as
\[Atten(\mathbf{q},\mathbf{k},\mathbf{v})=Softmax(\frac{\mathbf{q}\mathbf{k}^{T}}{\sqrt{d_{q}}})\mathbf{v} \tag{25}\]
where \(d_{q}\) represents the length of \(\mathbf{q}\). In FEA, the queries, keys, and values (\(\mathbf{q},\mathbf{k}\), and \(\mathbf{v}\)) are first transformed into the frequency domain using the Fourier transform, after which a subset of \(M\) modes is randomly selected in the frequency domain. Next, a similar cross-attention mechanism is applied in the frequency domain. The FEA can be defined as follows:
\[FEA(\mathbf{q},\mathbf{k},\mathbf{v})=\mathcal{F}^{-1}(Padding(\sigma(\mathbf{\tilde{Q}}\cdot \mathbf{\widetilde{K}})\cdot\mathbf{\tilde{V}})) \tag{26}\]
where \(\sigma\) is the activation function, \(\mathbf{\tilde{Q}}=Select(\mathcal{F}(\mathbf{q}))\), \(\mathbf{\widetilde{K}}=Select(\mathcal{F}(\mathbf{k}))\), and \(\mathbf{\tilde{V}}=Select(\mathcal{F}(\mathbf{v}))\). The result of \(\sigma(\mathbf{\tilde{Q}}\cdot\mathbf{\widetilde{K}})\cdot\mathbf{\tilde{V}}\) also need to be zero-padded to \(\mathbb{C}^{L\times D}\) before converting back to the time dimension.
## IV Case Studies
### _Evaluation Metrics_
The Mean Squared Error (MSE) and the Mean Absolute Error (MAE) are commonly used as evaluation metrics for forecasting wind speed at a single site. The metrics are defined as follows:
\[MSE =\frac{1}{T}\sum_{t=1}^{T}(p_{t+r}-\hat{p}_{t+r})^{2} \tag{27}\] \[MAE =\frac{1}{T}\sum_{t=1}^{T}|p_{t+r}-\hat{p}_{t+r}| \tag{28}\]
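For completeness, Eqs. (27)–(28) in NumPy:

```python
import numpy as np

def mse(p: np.ndarray, p_hat: np.ndarray) -> float:
    """Mean squared error over the forecast horizon, Eq. (27)."""
    return float(np.mean((p - p_hat) ** 2))

def mae(p: np.ndarray, p_hat: np.ndarray) -> float:
    """Mean absolute error over the forecast horizon, Eq. (28)."""
    return float(np.mean(np.abs(p - p_hat)))
```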
### _Data Description_
The Wind Integration National Dataset (WIND) Toolkit [28] is a software package developed by the National Renewable
Energy Laboratory (NREL) in the United States. It provides researchers with access to a wide range of wind power data. For this study, we utilized the India Wind Dataset from WIND, which was developed in part through the India Renewable Integration Study. This dataset contains wind speed, wind direction, temperature, and pressure at heights of 40m, 80m, 100m, and 120m above the ground. The data has a spatial resolution of 3 km and a temporal resolution of 5 minutes. From this dataset, we selected 25 sites in Tamil Nadu, India for short-term wind speed forecasting, as shown in Fig.3. The annual dataset for 2014 was chosen and the time resolution was adjusted to 15 minutes. The wind farm being forecasted is located at the center of the selected sites and is the largest wind farm in southern India, named the Muppandal Wind Farm (the green point). The distance between sites ranges from 5 kilometers to 40 kilometers, as shown in the Fig.3.
### _Comparison on Test Set_
To verify the superiority of the GFST-WSF, we employ state-of-the-art time series forecasting methods and deep learning methods as baselines. They are the following:
1. The Persistence method (Pers.): The Persistence method forecasts the next time step by taking the current time step's observed value as the forecasted value, i.e., all forecasted values are equal to the last observed value at the known time step.
2. LightGBM: LightGBM is a mainstream implementation of the Gradient Boosting Decision Tree (GBDT) algorithm, which utilizes iteratively trained weak classifiers (decision trees) to obtain an optimized model. It is commonly used for forecasting tasks.
3. Long Short-Term Memory (LSTM): LSTM, as a deep learning algorithm used for processing sequence data, features memory cells and gate mechanisms, exhibiting strong abilities in modeling long-term dependencies and effectively addressing the issues of vanishing and exploding gradients.
4. Transformer: Transformer is a neural network architecture based on self-attention mechanism used for sequence-to-sequence tasks and is one of the state-of-the-art models in this field.
5. FEDformer: FEDformer is a variant of the Transformer model that introduces frequency domain enhancement mechanisms, and currently demonstrates excellent performance in time series forecasting tasks.
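As an illustration of the simplest baseline (item 1 above), a minimal sketch of the Persistence method, assuming a one-dimensional history of observations:

```python
import numpy as np

def persistence_forecast(history, horizon):
    """Persistence baseline: every forecasted value equals the last
    observed value at the known time step."""
    return np.full(horizon, np.asarray(history)[-1])

# Example: with observations [4.2, 5.0, 5.6], the 3-step-ahead
# persistence forecast is [5.6, 5.6, 5.6].
print(persistence_forecast([4.2, 5.0, 5.6], 3))
```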
In TABLE I and TABLE II, we present a comparison of the performance of our GFST-WSF model with other time series forecasting models. The results clearly demonstrate that GFST-WSF outperforms all considered methods, achieving the lowest MAE and MSE values. As a commonly used machine learning method, LightGBM performs well in short-term forecasting but poorly in long-term forecasting. Among the baselines, LSTM showed the worst forecasting performance, with its error increasing sharply as the forecasting horizon lengthened. In contrast, the Transformer model performed better, owing to its attention mechanism's effective capture of long- and short-term dependencies in the time series. Building upon the Transformer architecture, FEDformer achieved significant performance improvements in forecasting long time series. Specifically, in the 12-hour forecasting task, FEDformer lowered MSE and MAE by approximately 5%, while in the 24-hour forecasting task, it achieved a 28% reduction in MSE and a 17% reduction in MAE. This is because FEDformer replaced the multi-head self-attention mechanism
Fig. 3: The geographic distribution of the wind farms. These wind farms are located in southern India and have been clearly marked on the map.
in Transformer with a frequency-enhanced mechanism, and applied cross-attention to the frequency-domain information. Additionally, FEDformer utilized a temporal decomposition module, which improved its performance in long sequence forecasting.
GFST-WSF is a model based on FEDformer, which incorporates a GAT module to extract spatial features and uses a multi-head attention mechanism to extract temporal information, thus further improving the forecasting performance. At the same time, GFST-WSF uses a dynamic complex adjacency matrix to model the time lag relationship between adjacent wind farms in order to better capture wind speed correlations. Through these improvements, GFST-WSF can effectively extract various wind speed features and capture complex patterns in wind speed data, thereby performing well in short-term wind speed forecasting tasks. Compared with the other baseline models, GFST-WSF has stronger modeling and representation capabilities, and therefore better application prospects for ensuring the safe and stable operation of grids with integrated wind power.
The experimental results on the test set are shown in Fig. 4, whose panels show the results of all comparative models when forecasting the future 6, 12, and 24 hours, respectively. It can be clearly observed that GFST-WSF stays closest to the ground truth across the different forecasting lengths. Moreover, as the forecasting length increases, the performance advantage of GFST-WSF becomes more pronounced.
### _Ablation experiments_
To verify the effectiveness and necessity of each module in GFST-WSF, we conducted an additional ablation experiment; the results are listed in TABLE III. Removing the multi-head self-attention yields a model we call GAT-FEDformer; further removing the GAT block recovers the original FEDformer. TABLE III shows the following.
1. FEDformer has shown remarkable performance in forecasting longer time series, with its frequency-enhanced mechanism playing a crucial role.
2. GAT-FEDformer outperforms FEDformer by 5%-12% in forecasting wind speed at different lengths, indicating the effectiveness of the GAT and the designed dynamic complex adjacency matrix in capturing spatial features.
3. GFST-WSF surpasses GAT-FEDformer in terms of forecasting accuracy, demonstrating that the multi-head self-attention on temporal information can enhance the model's representation ability and improve its overall performance.
Based on the results of the ablation experiment, which are shown in Fig.5 and TABLE III, it is evident that the GFST-WSF model exhibits the highest forecast performance. Specifically, the results indicate that each module plays a
Fig. 4: Comparison between GFST-WSF and baselines.
Fig. 5: Comparison of results from ablation experiments.
significant role in the model's overall performance, and that the removal of any one module significantly impairs the forecast capabilities of the model. Thus, the GFST-WSF model stands out as a superior choice for this task, owing to its comprehensive architecture and the high degree of synergy among its constituent parts.
### _Convergence and Stability of GFST-WSF_
In order to verify the convergence and stability of GFST-WSF, we trained 100 instances of GFST-WSF. The final distributions of MSE and MAE on the test set, shown in subfigures (a) and (b), respectively, demonstrate that the performance of GFST-WSF in terms of convergence and stability is satisfactory, with a small range of fluctuation in both MSE and MAE, as summarized in TABLE IV. These case studies and the convergence analysis reveal that GFST-WSF performs well and trains stably.
### _Summary of Case Studies_
In summary of the case studies conducted on the test set, the following phenomena can be observed:
1. LSTM performs poorly compared to deep Transformer-based forecasting models. As the forecasting length increases, the performance of LSTM decreases, while Transformer-based models perform better.
2. FEDformer, which is the latest variant of Transformer, exhibits increasingly significant performance improvements as the forecasting length increases.
3. The introduction of graph neural networks (GNNs) enhances the model's performance compared to methods that only utilize data from a single wind farm. This is because GNNs can extract spatial features and improve the model's ability to handle uncertainty in wind speed data.
4. GFST-WSF outperforms all benchmark methods because it not only extracts spatio-temporal features of wind speed data but also performs representation learning in both the time and frequency domains.
## V Conclusion
In this paper, we propose a novel GFST-WSF model for short-term wind speed forecasting based on spatio-temporal information. It is the first work to apply the Transformer to wind speed forecasting and to design a dynamic complex adjacency matrix for GAT. The set of wind farms is modeled as a graph, where nodes with highly correlated wind speeds and directions are connected by an edge. The connectivity of nodes in the graph is represented by a complex adjacency matrix, which clearly characterizes the wind speed correlation and time lag between different nodes. GAT can better capture the spatial features of wind speeds based on this matrix. In addition, FEDformer, a variant of the Transformer with strong representation learning ability for time series, enables GFST-WSF to better forecast wind speed sequences. The spatio-temporal features obtained by the model can handle noise and uncertainty in wind speed data. The proposed model is validated by comparison with various benchmark methods.
Case studies on multi-wind farm data demonstrated the superiority of GFST-WSF in wind speed forecasting. On the testing dataset, compared with state-of-the-art deep learning models, the GFST-WSF achieved a 14%, 12%, and 8% decrease in MSE when forecasting wind speeds for 6, 12, and 24 hours, respectively. These results not only demonstrate the successful application of GFST-WSF in capturing spatial features, but also suggest that deep learning models with deep architectures can better capture complex wind speed variations.
|
2302.10847 | There Are No Post-Quantum Weakly Pseudo-Free Families in Any Nontrivial
Variety of Expanded Groups | Let $\Omega$ be a finite set of finitary operation symbols and let $\mathfrak
V$ be a nontrivial variety of $\Omega$-algebras. Assume that for some set
$\Gamma\subseteq\Omega$ of group operation symbols, all $\Omega$-algebras in
$\mathfrak V$ are groups under the operations associated with the symbols in
$\Gamma$. In other words, $\mathfrak V$ is assumed to be a nontrivial variety
of expanded groups. In particular, $\mathfrak V$ can be a nontrivial variety of
groups or rings. Our main result is that there are no post-quantum weakly
pseudo-free families in $\mathfrak V$, even in the worst-case setting and/or
the black-box model. In this paper, we restrict ourselves to families
$(H_d\mathbin|d\in D)$ of computational and black-box $\Omega$-algebras (where
$D\subseteq\{0,1\}^*$) such that for every $d\in D$, each element of $H_d$ is
represented by a unique bit string of length polynomial in the length of $d$.
In our main result, we use straight-line programs to represent nontrivial
relations between elements of $\Omega$-algebras. Note that under certain
conditions, this result depends on the classification of finite simple groups.
Also, we define and study some types of weak pseudo-freeness for families of
computational and black-box $\Omega$-algebras. | Mikhail Anokhin | 2023-02-21T17:55:42Z | http://arxiv.org/abs/2302.10847v2 | # There Are No Post-Quantum Weakly Pseudo-Free Families in Any Nontrivial Variety of Expanded Groups
###### Abstract
Let \(\Omega\) be a finite set of finitary operation symbols and let \(\mathfrak{V}\) be a nontrivial variety of \(\Omega\)-algebras. Assume that for some set \(\Gamma\subseteq\Omega\) of group operation symbols, all \(\Omega\)-algebras in \(\mathfrak{V}\) are groups under the operations associated with the symbols in \(\Gamma\). In other words, \(\mathfrak{V}\) is assumed to be a nontrivial variety of expanded groups. In particular, \(\mathfrak{V}\) can be a nontrivial variety of groups or rings. Our main result is that there are no post-quantum weakly pseudo-free families in \(\mathfrak{V}\), even in the worst-case setting and/or the black-box model. In this paper, we restrict ourselves to families \((H_{d}\,|\,d\in D)\) of computational and black-box \(\Omega\)-algebras (where \(D\subseteq\{0,1\}^{+}\)) such that for every \(d\in D\), each element of \(H_{d}\) is represented by a unique bit string of length polynomial in the length of \(d\). We use straight-line programs to represent nontrivial relations between elements of \(\Omega\)-algebras in our main result. Note that under certain conditions, this result depends on the classification of finite simple groups. Also, we define and study some types of weak pseudo-freeness for families of computational and black-box \(\Omega\)-algebras.
**Keywords:** post-quantum cryptography, universal algebra, expanded group, family of computational universal algebras, black-box model, weakly pseudo-free family.
###### Contents
* 1 Introduction
* 1.1 Related Work
* 1.2 Our Contribution and Organization of the Paper
* 2 Preliminaries
* 2.1 General Preliminaries
* 2.2 Universal-Algebraic Preliminaries
* 2.3 Group-Theoretic Preliminaries
* 2.4 Probabilistic Preliminaries
* 2.5 Cryptographic Preliminaries
* 3 Weakly Pseudo-Free Families of Computational and Black-Box \(\Omega\)-Algebras
* 3.1 Standard Model
* 3.2 Black-Box \(\Omega\)-Algebra Model
* 3.3 Quantum Computation Model
* 3.4 Relations between the Types of Weak Pseudo-Freeness
* 3.5 Weak Pseudo-Freeness in the Variety Generated by the \(\Psi\)-Reducts of All \(\Omega\)-Algebras in \(\mathfrak{V}\)
* 4 Some Polynomial-Time Black-Box Group Quantum Algorithms
* 4.1 The Case Where \(\mathfrak{V}\) Has Infinite Exponent
* 4.2 The Case Where \(\mathfrak{V}\) Is Nontrivial and Is Not the Variety of All Groups
* 5 Main Result
* 6 Conclusion
* A Table of Notation
## 1 Introduction
Let \(\Omega\) be a finite set of finitary operation symbols and let \(\mathfrak{V}\) be a variety of \(\Omega\)-algebras. (See Subsection 2.2 for definitions.) Informally, a family of computational \(\Omega\)-algebras is a family of \(\Omega\)-algebras whose elements are represented by bit strings in such a way that equality testing, the fundamental operations, and generating random elements can be performed efficiently. Loosely speaking, a family of computational \(\Omega\)-algebras is called pseudo-free in \(\mathfrak{V}\) if all members of this family belong to \(\mathfrak{V}\) and, given a random member \(H\) of the family (for a given security parameter) and random elements \(g_{1},\dots,g_{m}\in H\), it is computationally hard to find a system of equations
\[v_{i}(a_{1},\dots,a_{m};x_{1},\dots,x_{n})=w_{i}(a_{1},\dots,a_{m};x_{1},\dots,x_{n}),\quad i\in\{1,\dots,s\}, \tag{1}\]
in the variables \(x_{1},\dots,x_{n}\) together with elements \(h_{1},\dots,h_{n}\in H\) such that
* for each \(i\in\{1,\dots,s\}\), \(v_{i}(a_{1},\dots,a_{m};x_{1},\dots,x_{n})\) and \(w_{i}(a_{1},\dots,a_{m};x_{1},\dots,x_{n})\) are elements of the \(\mathfrak{V}\)-free \(\Omega\)-algebra freely generated by \(a_{1},\dots,a_{m},x_{1},\dots,x_{n}\),
* system (1) is unsatisfiable in the \(\mathfrak{V}\)-free \(\Omega\)-algebra freely generated by \(a_{1},\dots,a_{m}\), and
* \(v_{i}(g_{1},\dots,g_{m};h_{1},\dots,h_{n})=w_{i}(g_{1},\dots,g_{m};h_{1},\dots,h_{n})\) in \(H\) for all \(i\in\{1,\dots,s\}\).
If a family of computational \(\Omega\)-algebras satisfies this definition with the additional requirement that \(n=0\) (i.e., that the equations in (1) be variable-free), then this family is said to be weakly pseudo-free in \(\mathfrak{V}\). By fixing the number \(s\) of equations in the definition of a pseudo-free (resp., weakly pseudo-free) family in \(\mathfrak{V}\), we obtain a definition of an \(s\)-pseudo-free (resp., weakly \(s\)-pseudo-free) family in \(\mathfrak{V}\). Of course, pseudo-freeness (in any above version) may depend heavily on the form in which system (1) is required to be found, i.e., on the representation of such systems.
The notion of pseudo-freeness (which is a variant of weak \(1\)-pseudo-freeness in the above sense) was introduced by Hohenberger in [14, Section 4.5] for black-box groups. Rivest gave formal definitions of a pseudo-free family of computational groups (see [13, Definition 2], [13, Slide 17]) and a weakly pseudo-free one (see [13, Slide 11]). These authors consider (weak) pseudo-freeness only in the varieties of all groups and of all abelian groups. Note that pseudo-freeness (resp., weak pseudo-freeness) in [13, Riv04b] is in fact \(1\)-pseudo-freeness (resp., weak \(1\)-pseudo-freeness) in our terminology. For motivation of the study of pseudo-freeness, we refer the reader to [14, Riv04a, Mic10].
Let \(\mathtt{H}=(H_{d}\,|\,d\in D)\) be a family of computational \(\Omega\)-algebras, where \(D\subseteq\{0,1\}^{*}\). (We specify only the \(\Omega\)-algebras here.) Then this family is said to have exponential size if there exists a polynomial \(\xi\) such that \(|H_{d}|\leq 2^{\xi(|d|)}\) for all \(d\in D\) (see also [10, Definition 3.2]). The family \(\mathtt{H}\) is called polynomially bounded if there exists a polynomial \(\eta\) such that the length of any representation of every \(h\in H_{d}\) is at most \(\eta(|d|)\) for all \(d\in D\) (see also [10, Definition 3.3]). Of course, if \(\mathtt{H}\) is polynomially bounded, then it has exponential size. It should be noted that a (weakly) pseudo-free family of computational \(\Omega\)-algebras can have applications in cryptography only if it is polynomially bounded or at least has exponential size. Such families that do not have exponential size _per se_ are of little interest; they can be constructed unconditionally (see [10, Subsection 3.4]). Finally, the family \(\mathtt{H}\) is said to have unique representations of elements if for every \(d\in D\), each element of \(H_{d}\) is represented by a unique bit string (see also [10, Definition 3.4]). This property seems to be useful for applications. In this paper, unless otherwise specified, families of computational \(\Omega\)-algebras are assumed to be polynomially bounded and to have unique representations of elements.
We emphasize that in the introduction, all results are stated loosely. In particular, we do not mention the probability distribution (depending on the security parameter) according to which the index of the \(\Omega\)-algebra in the family is sampled. Also, we usually do not specify the representation of elements of the \(\mathfrak{V}\)-free \(\Omega\)-algebra by bit strings. This representation is used for representing systems of the form (1).
### Related Work
Most researchers consider pseudo-freeness (in various versions) in the varieties of all groups [14, 15, 16, 17, 18, 19, 20], of all abelian groups [14, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25], and of all elementary abelian \(p\)-groups, where \(p\) is a prime [1]. Surveys of this area can be found in [13, Chapter 1], [1, Section 1], [1, 1], and [1, Subsection 1.1].
We mention some conjectures and results concerning (weakly) pseudo-free families of computational groups. In these conjectures and results, families of computational groups are presented in the form \(((G_{d},\mathcal{G}_{d})\,|\,d\in D)\), where \(D\subseteq\{0,1\}^{*}\), \(G_{d}\) is a group whose every element is represented by a unique bit string of length polynomial in the length of \(d\), and \(\mathcal{G}_{d}\) is a probability distribution on \(G_{d}\) (\(d\in D\)). Thus, these families are polynomially bounded and have unique representations of elements as assumed above. Of course, the multiplication, the inversion, and computing the identity element in \(G_{d}\) should be performed efficiently when \(d\) is given. Furthermore, given \((d,1^{k})\), one can efficiently generate random elements of \(G_{d}\) according to a probability distribution that is statistically \(2^{-k}\)-close to \(\mathcal{G}_{d}\). For a positive integer \(n\), denote by \(\mathbb{Z}_{n}\) the set \(\{0,\ldots,n-1\}\) considered as a ring under addition and multiplication modulo \(n\) and by \(\mathbb{Z}_{n}^{*}\) the group of units of \(\mathbb{Z}_{n}\). Also, let \(\mathbb{S}_{n}\) and \(\mathbb{O}_{n}\) be the subgroups of squares in \(\mathbb{Z}_{n}^{*}\) (i.e., \(\{z^{2}\bmod n\,|\,z\in\mathbb{Z}_{n}^{*}\}\)) and of elements of odd order in \(\mathbb{Z}_{n}^{*}\), respectively. We denote by \(\mathcal{U}(Y)\) the uniform probability distribution on a nonempty finite set \(Y\).
Suppose \(N\) is the set of all products of two distinct primes. Rivest conjectured that the family \(((\mathbb{Z}_{n}^{*},\mathcal{U}(\mathbb{Z}_{n}^{*}))\,|\,n\in N)\) is pseudo-free in the variety \(\mathfrak{A}\) of all abelian groups (super-strong RSA conjecture, see [15, Conjecture 1], [15, Slide 18]). If both \(p\) and \(2p+1\) are prime numbers, then \(p\) is called a _Sophie Germain prime_ and \(2p+1\) is said to be a _safe prime_. Let \(S\) be the set of all products of two distinct safe primes. Micciancio [15] proved that the family \(((\mathbb{Z}_{n}^{*},\mathcal{U}(\mathbb{S}_{n}))\,|\,n\in S)\) is pseudo-free in \(\mathfrak{A}\) under the strong RSA assumption for \(S\) as the set of moduli. Informally, this assumption is that, given a random \(n\in S\) (for a given security parameter) and a uniformly random \(g\in\mathbb{Z}_{n}^{*}\), it is computationally hard to find an integer \(e\geq 2\) together with an \(e\)th root of \(g\) in \(\mathbb{Z}_{n}^{*}\). It is easy to see that if \(n\in S\) and the prime factors of \(n\) are different from \(5\), then \(\mathbb{S}_{n}=\mathbb{O}_{n}\). Hence the above result of Micciancio remains valid if we replace \(\mathbb{S}_{n}\) by \(\mathbb{O}_{n}\) in it. The same result as in [15], but with slightly different representations of group elements by bit strings and different distributions of random elements of the groups, was obtained by Jhanwar and Barua [19]. Moreover, Catalano, Fiore, and Warinschi [13] proved that under the same assumption as in the above result of Micciancio, the family \(((\mathbb{Z}_{n}^{*},\mathcal{U}(\mathbb{S}_{n}))\,|\,n\in S)\) satisfies an apparently stronger condition than pseudo-freeness in \(\mathfrak{A}\). That condition, called adaptive pseudo-freeness, was introduced in [13].
Note that it is unknown whether the set \(S\) is infinite. Indeed, this holds if and only if there are infinitely many Sophie Germain primes, which is a well-known unproven conjecture in number theory. Thus, the assumption used in [15, 16, 17] is very strong.
A natural candidate for a pseudo-free family in the variety of all groups is \(((\mathrm{GL}_{2}(\mathbb{Z}_{n}),\mathcal{U}(\mathrm{GL}_{2}(\mathbb{Z}_{n})) )\,|\,n\in N)\), where \(\mathrm{GL}_{2}(\mathbb{Z}_{n})\) is the group of invertible \(2\times 2\) matrices over \(\mathbb{Z}_{n}\) (see [14]). However, (weak) pseudo-freeness of this family under a standard cryptographic assumption is still unproven. Assume that finding a nontrivial divisor of a random number in some set \(C\) of composite numbers (for a given security parameter) is a computationally hard problem. Then Anokhin [1] constructed an exponential-size pseudo-free family in the variety of all groups. That family is not polynomially bounded and does not have unique representations of elements. Moreover, each element of any group in that family is represented by infinitely many bit strings. Note that the family presented in [1] is pseudo-free with respect to a natural but non-succinct representation of elements of the free group by bit strings. Under the same assumption, Anokhin [1] proved that the family \(((\mathbb{O}_{n},\mathcal{U}(\mathbb{O}_{n}))\,|\,n\in C)\) is weakly pseudo-free in \(\mathfrak{A}\). It is evident that this result also holds for \(((\mathbb{Z}_{n}^{*},\mathcal{U}(\mathbb{O}_{n}))\,|\,n\in C)\). Compared to the above result of Micciancio, this is a weaker statement, but it is proved under a much weaker cryptographic assumption.
Suppose \(p\) is an arbitrary fixed prime number and let \(\mathfrak{A}_{p}\) be the variety of all elementary abelian \(p\)-groups. Then pseudo-free families in \(\mathfrak{A}_{p}\) exist if and only if certain homomorphic collision-resistant \(p\)-ary hash function families exist or, equivalently, certain homomorphic one-way families of functions exist. See [1, Theorem 4.12] for details. Note that for families of computational elementary abelian \(p\)-groups, pseudo-freeness in \(\mathfrak{A}_{p}\) is equivalent to weak pseudo-freeness in \(\mathfrak{A}_{p}\) (see [1, Theorem 3.7]).
There are many constructions of cryptographic objects based on classical algebraic structures (e.g.,
groups). However, to the best of our knowledge, there are only a few works concerning both universal algebra and cryptography. Probably the first such work is by Artamonov and Yashchenko [1]. In that work, the authors introduced and studied the notion of a pk-algebra. This notion naturally formalizes the syntax of a two-message two-party key agreement scheme. See also the extended version [1] of [1]. Partala [15] proposed a generalization of the well-known Diffie-Hellman key agreement scheme based on universal algebras. Moreover, he considered some approaches to the instantiation of the proposed scheme. Loosely speaking, that scheme is secure if it is computationally hard to compute images under an unknown homomorphism (in a certain setting). See also [15] (a preliminary version of [15]) and the thesis [15].
Anokhin [1] initiated the study of (weakly) pseudo-free families of computational \(\Omega\)-algebras in arbitrary varieties of \(\Omega\)-algebras. In our opinion, the study of these families opens up new opportunities for using (weak) pseudo-freeness in mathematical cryptography. We briefly recall the main results of [1].
Let \(\mathfrak{O}\) denote the variety of all \(\Omega\)-algebras. Then the following trichotomy holds:
1. If \(\Omega\) consists of nullary operation symbols only, then unconditionally there exists a pseudo-free family in \(\mathfrak{O}\). This family consists of free \(\Omega\)-algebras.
2. If \(\Omega=\Omega_{0}\cup\{\omega\}\), where \(\Omega_{0}\) consists of nullary operation symbols and the arity of \(\omega\) is \(1\), then in \(\mathfrak{O}\), unconditionally there exist an exponential-size pseudo-free family and a weakly pseudo-free family. The former family has unique representations of elements but is not polynomially bounded.
3. In all other cases, the existence of polynomially bounded weakly pseudo-free families in \(\mathfrak{O}\) (not necessarily having unique representations of elements) implies the existence of collision-resistant hash function families.
Assume that \(\Omega\) contains a binary operation symbol \(\omega\) and \(\mathfrak{V}\) is a nontrivial variety of \(\Omega\)-algebras such that any \(\Omega\)-algebra in \(\mathfrak{V}\) is a groupoid with an identity element under \(\omega\). (In particular, this holds if \(\mathfrak{V}\) is a nontrivial variety of monoids, loops, groups, or rings.) Then the existence of polynomially bounded weakly pseudo-free families in \(\mathfrak{V}\) (not necessarily having unique representations of elements) implies the existence of collision-resistant hash function families. See [1, Section 4] for details.
Suppose \(\Omega\) consists of a single \(m\)-ary operation symbol, where \(m\geq 1\). In other words, we consider \(m\)-ary groupoids. Furthermore, assume the existence of collision-resistant hash function families. Then in \(\mathfrak{O}\), there exist a weakly pseudo-free family and an exponential-size pseudo-free family. The latter family is not polynomially bounded and does not have unique representations of elements. See [1, Section 5] for details. As we have already seen, if \(m=1\), then such families (even an exponential-size pseudo-free family having unique representations of elements) exist unconditionally.
In [1], Anokhin studied the connections between pseudo-free families of computational \(\Omega\)-algebras (in appropriate varieties of \(\Omega\)-algebras) and certain standard cryptographic primitives. The main results of that paper are as follows:
* Any \(1\)-pseudo-free (in particular, pseudo-free) family of computational mono-unary algebras with one-to-one fundamental operation (satisfying an additional condition) in \(\mathfrak{O}\) naturally defines a one-way family of permutations. Conversely, if there exists a one-way family of permutations, then there exists a pseudo-free family of computational mono-unary algebras in \(\mathfrak{O}\) with one-to-one fundamental operation.
* Let \(m\in\{2,3,\dots\}\). Then any \(1\)-pseudo-free (in particular, pseudo-free) family of computational \(m\)-unary algebras with one-to-one fundamental operations (satisfying an additional condition) in \(\mathfrak{O}\) naturally defines a claw-resistant family of \(m\)-tuples of permutations. Conversely, if there exists a claw-resistant family of \(m\)-tuples of permutations, then there exists a pseudo-free family of computational \(m\)-unary algebras in \(\mathfrak{O}\) with one-to-one fundamental operations.
* For a certain \(\Omega\) and a certain variety \(\mathfrak{V}\) of \(\Omega\)-algebras, any \(1\)-pseudo-free (in particular, pseudo-free) family of computational \(\Omega\)-algebras (satisfying some additional conditions) in \(\mathfrak{V}\) naturally defines a family of trapdoor permutations.
Recall that if \(\Omega\) consists of a single unary operation symbol (resp., of \(m\) unary operation symbols), then \(\Omega\)-algebras are called mono-unary (resp., \(m\)-unary) algebras.
### Our Contribution and Organization of the Paper
We note that all known candidates for (weakly) pseudo-free families in nontrivial varieties of groups are not weakly pseudo-free in a post-quantum world. This raises the following question: Does there exist (under a standard cryptographic assumption) a post-quantum (in the natural sense) weakly pseudo-free family in some nontrivial variety of groups? Recall that, unless otherwise specified, families of computational \(\Omega\)-algebras are assumed to be polynomially bounded and to have unique representations of elements. Of course, all families of computational \(\Omega\)-algebras (in particular, groups) in the trivial variety are post-quantum pseudo-free in it. This is because every system of equations of the form (1) is satisfiable in any trivial \(\Omega\)-algebra. See [1, Remark 3.4] for groups and [1, Remark 3.7] for \(\Omega\)-algebras.
In this paper, we also consider a worst-case version of weak pseudo-freeness. In this version, loosely speaking, a member \(H\) of the family and elements \(g_{1},\dots,g_{m}\in H\) (see the informal definition at the beginning of the paper) are arbitrary rather than random. It is easy to see that weak pseudo-freeness implies worst-case weak pseudo-freeness in the same variety (see Remark 3.22).
Moreover, in addition to families of computational \(\Omega\)-algebras, we consider families of black-box \(\Omega\)-algebras. In the black-box \(\Omega\)-algebra model, elements of a finite \(\Omega\)-algebra \(H\) are represented for computational purposes by bit strings of the same length (depending of \(H\)) and the fundamental operations of \(H\) are performed by an oracle. This model was introduced by Babai and Szemeredi [1] for groups. See Subsection 3.2 for details.
The above properties of families of computational \(\Omega\)-algebras (pseudo-freeness, weak pseudo-freeness, polynomial boundedness, etc.) can be defined for families of black-box \(\Omega\)-algebras similarly. Like families of computational \(\Omega\)-algebras, unless otherwise specified, families of black-box \(\Omega\)-algebras are assumed to be polynomially bounded and to have unique representations of elements. Note that if there exists a weakly pseudo-free family of computational \(\Omega\)-algebras, then there exists a weakly pseudo-free family of black-box \(\Omega\)-algebras in the same variety (see Proposition 3.25).
Let \(\mathfrak{V}\) be a variety of \(\Omega\)-algebras. Suppose \(\Omega\) contains a set \(\Gamma\) of group operation symbols such that for all \(H\in\mathfrak{V}\), the \(\Gamma\)-product of \(H\) (i.e., \(H\) considered as a \(\Gamma\)-algebra) is a group. (A set of group operation symbols consists of a binary, a unary, and a nullary operation symbols for the multiplication, the inversion, and the identity element in a group, respectively.) In this case, \(\mathfrak{V}\) is called a variety of expanded groups. Choose such a set \(\Gamma\). Furthermore, we assume that \(\mathfrak{V}\) is nontrivial and elements of the \(\mathfrak{V}\)-free \(\Omega\)-algebra freely generated by \(a_{1},a_{2},\dots\) are represented by straight-line programs (see Example 3.1). Then our main result (Theorem 5.1) states that in \(\mathfrak{V}\), there are no families of any of the following types:
* post-quantum weakly pseudo-free families of computational \(\Omega\)-algebras,
* post-quantum worst-case weakly pseudo-free families of computational \(\Omega\)-algebras,
* post-quantum weakly pseudo-free families of black-box \(\Omega\)-algebras,
* post-quantum worst-case weakly pseudo-free families of black-box \(\Omega\)-algebras.
In particular, this is true for nontrivial varieties of groups, rings, modules and algebras over a finitely generated commutative associative ring with \(1\), near-rings, and, more generally, groups with finitely many multiple operators (see Remark 5.2). Thus, we give a negative answer to the above question.
We denote by \(\mathfrak{V}|_{\Gamma}\) the variety of groups generated by the \(\Gamma\)-reducts of all \(\Omega\)-algebras in \(\mathfrak{V}\). Note that if the set \(\Gamma\) cannot be chosen so that \(\mathfrak{V}|_{\Gamma}\) has infinite exponent or is solvable, then our main result depends on the classification of finite simple groups.
The outline of the proof of our main result is as follows. First, it is sufficient to prove the nonexistence of post-quantum worst-case weakly pseudo-free families of black-box \(\Omega\)-algebras in \(\mathfrak{V}\) (see, e.g., Figure 1). Second, the results of Subsection 3.5 imply that for this it suffices to prove the nonexistence of post-quantum worst-case weakly pseudo-free families of black-box groups in \(\mathfrak{V}|_{\Gamma}\). Third, for any family of black-box groups in \(\mathfrak{V}|_{\Gamma}\), we construct a polynomial-time black-box group quantum algorithm that breaks the worst-case weak pseudo-freeness of this family. This algorithm is based on a polynomial-time black-box group quantum algorithm for one of the following problems:
1. Given a black-box group \(G\in\mathfrak{V}|_{\Gamma}\) and its element \(g\), find a multiple of the order of \(g\) (if \(\mathfrak{V}|_{\Gamma}\) has infinite exponent).
2. Given a black-box group \(G\in\mathfrak{V}|_{\Gamma}\) and its elements \(g_{1},\ldots,g_{m},h\) such that \(h\) is in the subgroup generated by \(g_{1},\ldots,g_{m}\), find a straight-line program computing \(h\) from \(g_{1},\ldots,g_{m}\) (if \(\mathfrak{V}|_{\Gamma}\) has finite exponent).
Such algorithms for these problems do exist. Indeed, problem (i) can be solved in quantum polynomial time by Shor's order-finding algorithm (see [21, Section 5], [22, Subsection 5.3.1], or [17, 16]) modified for the black-box group model. Problem (ii) can be solved by the black-box group quantum algorithm of Ivanyos, Magniez, and Santha for the constructive membership problem (see [14, Theorem 5]). If \(\mathfrak{V}|_{\Gamma}\) is not the variety of all groups (in particular, if \(\mathfrak{V}|_{\Gamma}\) has finite exponent), then that algorithm runs in polynomial time whenever the given black-box group is in \(\mathfrak{V}|_{\Gamma}\). This follows from a result of Jones [15] together with the classification of finite simple groups. See Remark 4.2 for details.
For a positive integer \(e\), \(\mathfrak{A}_{e}\) denotes the variety of all abelian groups \(G\) such that \(g^{e}=1\) for all \(g\in G\). We note that if \(\mathfrak{V}|_{\Gamma}=\mathfrak{A}_{e}\), where \(e\geq 2\), then the third step of the proof of our main result can be done using a polynomial-time quantum algorithm for the hidden subgroup problem for the \(\mathfrak{A}_{e}\)-free group generated by \(a_{1},\ldots,a_{m}\) (\(m\geq 1\)). (This group is the direct product of the cyclic subgroups generated by \(a_{1},\ldots,a_{m}\); each of these subgroups has order \(e\).) Such an algorithm exists, e.g., by [13, Theorem 3.13]. We recommend [13] as a good source of information on quantum algorithms for the hidden subgroup problem.
The rest of the paper is organized as follows. Section 2 contains notation, basic definitions, and general results used in the paper. In Section 3, we define and discuss some types of weak pseudo-freeness (including post-quantum ones) for families of computational and black-box \(\Omega\)-algebras. Relations between these types are studied in Subsection 3.4 and depicted in Figure 1. In our opinion, some of these types of weak pseudo-freeness might be interesting for future research. Also, we want to state our main result in the strongest possible form. This is another motivation for introducing new types of weak pseudo-freeness. In Subsection 3.5, loosely speaking, we show that if \(\Psi\subseteq\Omega\), then the family of \(\Psi\)-reducts of \(\Omega\)-algebras in a weakly pseudo-free family in \(\mathfrak{V}\) is weakly pseudo-free in \(\mathfrak{V}|_{\Psi}\). The purpose of Section 4 is to prove the existence of polynomial-time black-box group quantum algorithms that are used in the proof of our main result. These algorithms are constructed in the proofs of Lemmas 4.1 and 4.3. In Section 5, we prove the main result of this paper (Theorem 5.1). Section 6 concludes and suggests some directions for future research. Finally, in Appendix A, we briefly recall some notation introduced in Sections 2 and 3.
## 2 Preliminaries
### General Preliminaries
In this paper, \(\mathbb{N}\) denotes the set of all nonnegative integers. Let \(Y\) be a set and let \(n\in\mathbb{N}\). We denote by \(Y^{n}\) the set of all (ordered) \(n\)-tuples of elements from \(Y\). Of course, \(Y^{1}\) is identified with \(Y\). Furthermore, we put \(Y^{\leq n}=\bigcup_{i=0}^{n}Y^{i}\) and \(Y^{*}=\bigcup_{i=0}^{\infty}Y^{i}\). In particular, \(\emptyset^{*}\) consists only of the empty tuple.
We consider elements of \(\{0,1\}^{*}\) as bit strings and denote the length of a string \(u\in\{0,1\}^{*}\) by \(|u|\). The unary representation of \(n\), i.e., the string of \(n\) ones, is denoted by \(1^{n}\). Similarly, \(0^{n}\) is the string of \(n\) zeros. As usual, \(\oplus\) denotes the bitwise XOR operation.
Let \(I\) be a set. Suppose each \(i\in I\) is assigned an object \(q_{i}\). Then we denote by \((q_{i}\,|\,i\in I)\) the family of all these objects, whereas \(\{q_{i}\,|\,i\in I\}\) denotes the set of all elements of this family.
When necessary, we assume that all "finite" objects (e.g., integers, tuples of integers, tuples of tuples of integers) are represented by bit strings in some natural way. Sometimes we identify such objects with their representations. Unless otherwise specified, integers are represented by their binary expansions.
Suppose \(\phi\) is a function. We denote by \(\operatorname{dom}\phi\) the domain of \(\phi\). Also, we use the same notation for \(\phi\) and for the function \((z_{1},\ldots,z_{n})\mapsto(\phi(z_{1}),\ldots,\phi(z_{n}))\), where \(n\in\mathbb{N}\) and \(z_{1},\ldots,z_{n}\in\operatorname{dom}\phi\). The identity function on the set \(Y\) is denoted by \(\operatorname{id}_{Y}\).
Let \(\rho\) be a function from a subset of \(\{0,1\}^{*}\) onto a set \(T\) and let \(t\in T\). Then \([t]_{\rho}\) denotes an arbitrary preimage of \(t\) under \(\rho\). A similar notation was used by Boneh and Lipton in [1] and by Hohenberger in [1]. In general, \([t]_{\rho}\) denotes many strings in \(\{0,1\}^{*}\) unless \(\rho\) is one-to-one. We use any of these strings as a representation of \(t\) for computational purposes.
For convenience, we say that a function \(\pi\colon\mathbb{N}\to\mathbb{N}\setminus\{0\}\) is a _polynomial_ if there exist \(c\in\mathbb{N}\setminus\{0\}\) and \(d\in\mathbb{N}\) such that \(\pi(n)=cn^{d}\) for any \(n\in\mathbb{N}\setminus\{0\}\) (\(\pi(0)\) can be an arbitrary positive integer). Of course,
every polynomial growth function from \(\mathbb{N}\) to \(\mathbb{R}_{+}=\{r\in\mathbb{R}\,\big{|}\,r\geq 0\}\) can be upper bounded by a polynomial in this sense. Therefore this notion of a polynomial is sufficient for our purposes.
### Universal-Algebraic Preliminaries
In this subsection, we recall the basic definitions and simple facts from universal algebra. For a detailed introduction to this topic, the reader is referred to standard books, e.g., [11, 12, 13].
Throughout the paper, \(\Omega\) denotes a set of finitary operation symbols. Moreover, in all sections except this one, we assume that \(\Omega\) is finite and algorithms can work with its elements. Each \(\omega\in\Omega\) is assigned a nonnegative integer called the _arity_ of \(\omega\) and denoted by \(\operatorname{ar}\omega\). An _\(\Omega\)-algebra_ is a set \(H\) called the _carrier_ (or the _underlying set_) together with a family \((\widehat{\omega}\colon H^{\operatorname{ar}\omega}\to H\,|\,\omega\in\Omega)\) of operations on \(H\) called the _fundamental operations_. For simplicity of notation, the fundamental operation \(\widehat{\omega}\) associated with a symbol \(\omega\in\Omega\) will be denoted by \(\omega\). Furthermore, we often denote an \(\Omega\)-algebra and its carrier by the same symbol.
Let \(H\) be an \(\Omega\)-algebra. A subset of \(H\) is called a _subalgebra_ of \(H\) if it is closed under the fundamental operations of \(H\). If \(S\) is a system of elements of \(H\), then we denote by \(\langle S\rangle\) the subalgebra of \(H\) generated by \(S\), i.e., the smallest subalgebra of \(H\) containing \(S\).
Suppose \(G\) is an \(\Omega\)-algebra. A _homomorphism_ of \(G\) to \(H\) is a function \(\phi\colon G\to H\) such that for every \(\omega\in\Omega\) and \(g_{1},\dots,g_{\operatorname{ar}\omega}\in G\),
\[\phi(\omega(g_{1},\dots,g_{\operatorname{ar}\omega}))=\omega(\phi(g_{1}), \dots,\phi(g_{\operatorname{ar}\omega})).\]
If a homomorphism of \(G\) onto \(H\) is one-to-one, then it is called an _isomorphism_. Of course, the \(\Omega\)-algebras \(G\) and \(H\) are said to be _isomorphic_ if there exists an isomorphism of \(G\) onto \(H\).
Let \((H_{i}\,|\,i\in I)\) be a family of \(\Omega\)-algebras. Recall that the fundamental operations of the _direct product_ of this family are defined as follows:
\[\omega((h_{1,i}\,|\,i\in I),\dots,(h_{\operatorname{ar}\omega,i}\,|\,i\in I) )=(\omega(h_{1,i},\dots,h_{\operatorname{ar}\omega,i})\,|\,i\in I),\]
where \(\omega\in\Omega\) and \(h_{1,i},\dots,h_{\operatorname{ar}\omega,i}\in H_{i}\) for all \(i\in I\).
An \(\Omega\)-algebra with only one element is said to be _trivial_. It is obvious that all trivial \(\Omega\)-algebras are isomorphic.
For every \(n\in\mathbb{N}\), put \(\Omega_{n}=\{\omega\in\Omega\,|\,\operatorname{ar}\omega=n\}\). We note that if \(\Omega_{0}=\emptyset\), then an \(\Omega\)-algebra may be empty. Whenever \(\omega\in\Omega_{0}\), it is common to write \(\omega\) instead of \(\omega()\).
Let \(Z\) be a set of objects called variables. We always assume that any variable is not in \(\Omega\). The set \(\operatorname{Tm}Z\) of all _\(\Omega\)-terms_ (or simply _terms_) over \(Z\) is defined as the smallest set such that \(\Omega_{0}\cup Z\subseteq\operatorname{Tm}Z\) and if \(\omega\in\Omega\setminus\Omega_{0}\) and \(v_{1},\dots,v_{\operatorname{ar}\omega}\in\operatorname{Tm}Z\), then the formal expression \(\omega(v_{1},\dots,v_{\operatorname{ar}\omega})\) is in \(\operatorname{Tm}Z\). Of course, \(\operatorname{Tm}Z\) is an \(\Omega\)-algebra under the natural fundamental operations. This \(\Omega\)-algebra is called the _\(\Omega\)-term algebra_ over \(Z\).
Consider the case when \(Z=\{z_{1},z_{2},\dots\}\), where \(z_{1},z_{2},\dots\) are distinct. Let \(m\in\mathbb{N}\). We denote by \(T_{\infty}\) and \(T_{m}\) the \(\Omega\)-term algebras \(\operatorname{Tm}\{z_{1},z_{2},\dots\}\) and \(\operatorname{Tm}\{z_{1},\dots,z_{m}\}\), respectively. Suppose \(h=(h_{1},\dots,h_{m},\dots)\) is either an \(m^{\prime}\)-tuple, where \(m^{\prime}\geq m\), or an infinite sequence of elements of \(H\). Furthermore, for every \(v\in T_{m}\), the element \(v(h)=v(h_{1},\dots,h_{m})\in H\) is defined inductively in the natural way. It is easy to see that \(\{v(h_{1},\dots,h_{m})\,|\,v\in T_{m}\}=\langle h_{1},\dots,h_{m}\rangle\). If \(h\) is an infinite sequence, then \(\{v(h)\,|\,v\in T_{\infty}\}=\langle h_{1},h_{2},\dots\rangle\).
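To make the inductive definition of \(v(h)\) concrete, the following is a minimal sketch in which an \(\Omega\)-algebra is given by a dictionary of fundamental operations and a term is either a variable index or a pair of an operation symbol and a tuple of subterms; the symbol names and the additive example are illustrative assumptions, not part of the paper.

```python
def eval_term(term, ops, h):
    """Evaluate an Omega-term at elements h = (h_1, ..., h_m).

    term: either an int i (the variable z_i, 1-indexed) or a pair
          (omega, subterms); nullary symbols appear as (omega, ()).
    ops:  dict mapping each operation symbol to its fundamental operation.
    """
    if isinstance(term, int):
        return h[term - 1]
    omega, subterms = term
    return ops[omega](*(eval_term(t, ops, h) for t in subterms))

# Example: the group term z1 * (z2)^{-1} evaluated in the integers
# viewed as a group under addition (illustrative operation symbols).
ops = {"mul": lambda x, y: x + y, "inv": lambda x: -x, "e": lambda: 0}
term = ("mul", (1, ("inv", (2,))))
assert eval_term(term, ops, (5, 3)) == 2
```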
An _identity_ (or a _law_) over \(\Omega\) is a closed first-order formula of the form \(\forall\,z_{1},\dots,z_{m}\) (\(v=w\)), where \(m\in\mathbb{N}\) and \(v,w\in T_{m}\). Usually we will omit the phrase "over \(\Omega\)." We will write identities simply as \(v=w\), where \(v,w\in T_{\infty}\), assuming that all variables are universally quantified. A class \(\mathfrak{V}\) of \(\Omega\)-algebras is said to be a _variety_ if it can be defined by a set \(\Upsilon\) of identities. This means that for any \(\Omega\)-algebra \(G\), \(G\in\mathfrak{V}\) if and only if \(G\) satisfies all identities in \(\Upsilon\). By the famous Birkhoff variety theorem (see, e.g., [11, Chapter IV, Theorem 3.1], [12, Chapter II, Theorem 11.9], or [13, Subsection 3.2.3, Theorem 21]), a class of \(\Omega\)-algebras is a variety if and only if it is closed under taking subalgebras, homomorphic images, and direct products. Note that if a class of \(\Omega\)-algebras is closed under taking direct products, then it contains a trivial \(\Omega\)-algebra as the direct product of the empty family of \(\Omega\)-algebras.
The variety consisting of all \(\Omega\)-algebras with at most one element is said to be _trivial_; all other varieties of \(\Omega\)-algebras are called _nontrivial_. The trivial variety is defined by the identity \(z_{1}=z_{2}\). When \(\Omega_{0}=\emptyset\), the trivial variety contains not only trivial \(\Omega\)-algebras, but also the empty \(\Omega\)-algebra. If \(\mathfrak{C}\) is a class
of \(\Omega\)-algebras, then the variety _generated_ by \(\mathfrak{C}\) is the smallest variety of \(\Omega\)-algebras containing \(\mathfrak{C}\). This variety is defined by the set of all identities holding in all \(\Omega\)-algebras in \(\mathfrak{C}\).
Throughout the paper, \(\mathfrak{V}\) denotes a variety of \(\Omega\)-algebras. An \(\Omega\)-algebra \(F\in\mathfrak{V}\) is said to be _\(\mathfrak{V}\)-free_ if it has a generating system \((f_{i}\,|\,i\in I)\) such that for every system of elements \((g_{i}\,|\,i\in I)\) of any \(\Omega\)-algebra \(G\in\mathfrak{V}\) there exists a homomorphism \(\alpha\colon F\to G\) satisfying \(\alpha(f_{i})=g_{i}\) for all \(i\in I\) (evidently, this homomorphism \(\alpha\) is unique). Any generating system \((f_{i}\,|\,i\in I)\) with this property is called _free_ and the \(\Omega\)-algebra \(F\) is said to be _freely generated_ by every such system. The next lemma is well known and/or can be proved straightforwardly.
**Lemma 2.1**.: _Suppose \(F\) is an \(\Omega\)-algebra in \(\mathfrak{V}\) and \((f_{i}\,|\,i\in I)\) is a generating system of \(F\). Then \(F\) is a \(\mathfrak{V}\)-free \(\Omega\)-algebra freely generated by \((f_{i}\,|\,i\in I)\) if and only if for any \(m\in\mathbb{N}\) and any \(v,w\in T_{m}\), the identity \(v=w\) holds in \(\mathfrak{V}\) whenever \(v(f_{i_{1}},\dots,f_{i_{m}})=w(f_{i_{1}},\dots,f_{i_{m}})\) for some distinct \(i_{1},\dots,i_{m}\in I\)._
It is well known (see, e.g., [10, Chapter IV, Corollary 3.3], [11, Chapter II, Definition 10.9 and Theorem 10.10], or [11, Subsection 3.2.3, Theorem 16]) that for any set \(I\) there exists a unique (up to isomorphism) \(\mathfrak{V}\)-free \(\Omega\)-algebra with a free generating system indexed by \(I\). It is easy to see that if \(\mathfrak{V}\) is nontrivial, then for every free generating system \((f_{i}\,|\,i\in I)\) of a \(\mathfrak{V}\)-free \(\Omega\)-algebra, \(f_{i}\) are distinct. In this case, one may consider free generating systems as sets.
We denote by \(F_{\infty}(\mathfrak{V})\) the \(\mathfrak{V}\)-free \(\Omega\)-algebra freely generated by \(a_{1},a_{2},\dots\). Of course, if \(\mathfrak{V}\) is nontrivial, then \(a_{1},a_{2},\dots\) are assumed to be distinct. Furthermore, suppose \(m\in\mathbb{N}\) and let \(F_{m}(\mathfrak{V})=\langle a_{1},\dots,a_{m}\rangle\). For elements of \(F_{m}(\mathfrak{V})\), we use the notation \(v(a)=v(a_{1},\dots,a_{m})\), where \(v\in T_{m}\). It is well known that \(a_{i}\) can be considered as variables taking values in an arbitrary \(\Omega\)-algebra \(G\in\mathfrak{V}\). That is, for any \(v(a)\in F_{m}(\mathfrak{V})\) and any \(g=(g_{1},\dots,g_{m})\in G^{m}\), the element \(v(g)=v(g_{1},\dots,g_{m})\in G\) is well defined as \(\alpha(v(a))\), where \(\alpha\) is the unique homomorphism of \(F_{m}(\mathfrak{V})\) to \(G\) such that \(\alpha(a_{i})=g_{i}\) for all \(i\in\{1,\dots,m\}\).
By a _straight-line program_ over \(\Omega\) we mean a nonempty sequence \((u_{1},\dots,u_{n})\) such that for every \(i\in\{1,\dots,n\}\), either \(u_{i}\in\mathbb{N}\setminus\{0\}\) or \(u_{i}=(\omega,j_{1},\dots,j_{\mathrm{ar}\,\omega})\), where \(\omega\in\Omega\) and \(j_{1},\dots,j_{\mathrm{ar}\,\omega}\in\{1,\dots,i-1\}\). These two cases should be clearly distinguished. Usually we will omit the phrase "over \(\Omega\)." Suppose \(g_{1},\dots,g_{m}\in H\), where \(m\in\mathbb{N}\). Furthermore, let \(u=(u_{1},\dots,u_{n})\) be a straight-line program such that if \(u_{i}\in\mathbb{N}\setminus\{0\}\), then \(u_{i}\leq m\). Then \(u\) naturally defines the sequence \((h_{1},\dots,h_{n})\) of elements of \(H\) by induction. Namely, for each \(i\in\{1,\dots,n\}\), we put \(h_{i}=g_{u_{i}}\) if \(u_{i}\in\mathbb{N}\setminus\{0\}\) and \(h_{i}=\omega(h_{j_{1}},\dots,h_{j_{\mathrm{ar}\,\omega}})\) if \(u_{i}=(\omega,j_{1},\dots,j_{\mathrm{ar}\,\omega})\), where \(\omega\) and \(j_{1},\dots,j_{\mathrm{ar}\,\omega}\) are as above. We say that \(u\)_computes_ the element \(h_{n}\) from \(g_{1},\dots,g_{m}\). The positive integer \(n\) is called the _length_ of the straight-line program \(u\). It is easy to see that an element \(h\in H\) can be computed from \(g_{1},\dots,g_{m}\) by a straight-line program if and only if \(h\in\langle g_{1},\dots,g_{m}\rangle\).
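A minimal sketch of how a straight-line program defines the sequence \((h_{1},\dots,h_{n})\) and computes \(h_{n}\) from \(g_{1},\dots,g_{m}\); the operation symbols and the additive-group example are illustrative assumptions, not part of the paper.

```python
def run_slp(program, ops, g):
    """Compute the element defined by a straight-line program.

    program: sequence of steps; each step is either a positive int i
             (copy the input g_i) or a tuple (omega, j_1, ..., j_k)
             applying omega to previously computed elements.
    ops:     dict mapping each operation symbol to its fundamental operation.
    g:       tuple of input elements (g_1, ..., g_m).
    Returns h_n, the element computed by the last step.
    """
    h = []
    for step in program:
        if isinstance(step, int):
            h.append(g[step - 1])
        else:
            omega, *args = step
            h.append(ops[omega](*(h[j - 1] for j in args)))
    return h[-1]

# Example: computing g1 * g1 * g2 in the integers under addition.
ops = {"mul": lambda x, y: x + y, "inv": lambda x: -x, "e": lambda: 0}
assert run_slp([1, 2, ("mul", 1, 1), ("mul", 3, 2)], ops, (4, 10)) == 18
```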
### Group-Theoretic Preliminaries
In this subsection, we recall some definitions and facts from group theory. For a detailed introduction to this topic, the reader is referred to standard textbooks, e.g., [11, 12, 13].
We say that \(\Omega\) is a _set of group operation symbols_ if it consists of a binary, a unary, and a nullary operation symbols (for the multiplication, the inversion, and the identity element in a group, respectively). We consider groups as \(\Omega\)-algebras, where \(\Omega\) is a set of group operation symbols. Therefore the content of Subsection 2.2 is applicable to groups. Of course, we use the standard group-theoretic notation, e.g., \(gh\), \(g^{n}\), and \(1\), where \(g\) and \(h\) are elements of a group and \(n\) is an integer.
The abbreviation CFSG stands for the Classification of Finite Simple Groups. This classification states that every finite simple group is isomorphic to
* a cyclic group of prime order,
* an alternating group of degree at least \(5\),
* a finite simple group of Lie type, or
* one of the \(26\) sporadic finite simple groups.
See [13, Chapter 2] or [12, Part I, Chapter 1, Section 1] for details.
Let \(G\) be a group. The notation \(H\lhd G\) means that \(H\) is a proper normal subgroup of \(G\). A subnormal series
\[\{1\}=G_{0}\lhd G_{1}\lhd\dots\lhd G_{n}=G\]
is said to be a _composition series_ of the group \(G\) if all factors \(G_{i}/G_{i-1}\) (\(i\in\{1,\ldots,n\}\)) of this series are simple groups. Of course, not every group has a composition series. However, any finite group certainly has one. By the well-known Jordan-Holder theorem, the factors of a composition series of \(G\) do not depend on the series (up to isomorphism and permutation of factors); these factors are called the _composition factors_ of \(G\). See [10, Chapter 5, Section "The Jordan-Holder Theorem"], [10, Section 3.1], or [11, Subsection 4.4]. For finite groups, see also [10, Section 1.1, Definition D6] or [10, Part I, Chapter 1, Section 3]. It is well known and easy to see that a finite group is solvable if and only if all its composition factors are abelian (or, equivalently, cyclic of prime order).
The next lemma, due to Babai and Szemeredi, is known as the Reachability Lemma (see [1, Theorem 3.1] or [1, Lemma 6.4]).
**Lemma 2.2**.: _Suppose \(G\) is a finite group. Let \(g_{1},\ldots,g_{m}\) (where \(m\in\mathbb{N}\)) be a generating system of \(G\). Then any element of \(G\) can be computed from \(g_{1},\ldots,g_{m}\) by a straight-line program of length at most \((1+\log_{2}\lvert G\rvert)^{2}\)._
Suppose \(\mathfrak{W}\) is a variety of groups. Assume that there exists a positive integer \(n\) such that every group in \(\mathfrak{W}\) satisfies the identity \(z_{1}^{n}=1\). Then the smallest such positive integer is called the _exponent_ of the variety \(\mathfrak{W}\). Otherwise, the _exponent_ of \(\mathfrak{W}\) is said to be infinite. In the latter case, some authors say that \(\mathfrak{W}\) is of exponent zero (see, e.g., [11]). It is easy to see that the exponent of \(\mathfrak{W}\) (finite or infinite) coincides with \(\lvert F_{1}(\mathfrak{W})\rvert\).
The variety \(\mathfrak{W}\) is called _solvable_ if it consists of solvable groups. It is evident that the derived length of groups in any solvable variety is upper bounded by a nonnegative integer depending on the variety.
### Probabilistic Preliminaries
Let \(\mathcal{Y}\) be a probability distribution on a finite or countably infinite sample space \(Y\). Then we denote by \(\operatorname{supp}\mathcal{Y}\) the _support_ of \(\mathcal{Y}\), i.e., the set \(\{y\in Y\,|\,\operatorname{Pr}_{\mathcal{Y}}\{y\}\neq 0\}\). In many cases, one can consider \(\mathcal{Y}\) as a distribution on \(\operatorname{supp}\mathcal{Y}\).
Suppose \(Z\) is a finite or countably infinite set and \(\alpha\) is a function from \(Y\) to \(Z\). Then the image of \(\mathcal{Y}\) under \(\alpha\), which is a probability distribution on \(Z\), is denoted by \(\alpha(\mathcal{Y})\). This distribution is defined by \(\operatorname{Pr}_{\alpha(\mathcal{Y})}\{z\}=\operatorname{Pr}_{\mathcal{Y}} \alpha^{-1}(z)\) for each \(z\in Z\). Note that if a random variable \(\mathbf{y}\) is distributed according to \(\mathcal{Y}\), then the random variable \(\alpha(\mathbf{y})\) is distributed according to \(\alpha(\mathcal{Y})\).
We use the notation \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\sim\mathcal{Y}\) to indicate that \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) (denoted by upright bold letters) are independent random variables distributed according to \(\mathcal{Y}\). We assume that these random variables are independent of all other random variables defined in such a way. Furthermore, all occurrences of an upright bold letter in a probabilistic statement refer to the same (unique) random variable. Of course, all random variables in a probabilistic statement are assumed to be defined on the same sample space. Other specifics of random variables do not matter for us. Note that the probability distribution \(\mathcal{Y}\) in this notation may be random. For example, let \((\mathcal{Y}_{i}\,|\,i\in I)\) be a probability ensemble consisting of distributions on the set \(Y\), where the set \(I\) is finite or countably infinite. Moreover, suppose \(\mathcal{I}\) is a probability distribution on \(I\). Then \(\mathbf{i}\sim\mathcal{I}\) and \(\mathbf{y}\sim\mathcal{Y}_{\mathbf{i}}\) mean that the joint distribution of the random variables \(\mathbf{i}\) and \(\mathbf{y}\) is given by \(\Pr[\mathbf{i}=i,\,\mathbf{y}=y]=\operatorname{Pr}_{\mathcal{I}}\{i\} \operatorname{Pr}_{\mathcal{Y}_{i}}\{y\}\) for each \(i\in I\) and \(y\in Y\).
For any \(n\in\mathbb{N}\), we denote by \(\mathcal{Y}^{n}\) the distribution of a random variable \((\mathbf{y}_{1},\ldots,\mathbf{y}_{n})\), where \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\sim\mathcal{Y}\). (Of course, the distribution of this random variable does not depend on the choice of independent random variables \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) distributed according to \(\mathcal{Y}\).) It is easy to see that \(\alpha(\mathcal{Y}^{n})=(\alpha(\mathcal{Y}))^{n}\) for every \(\alpha\colon Y\to Z\) and \(n\in\mathbb{N}\).
### Cryptographic Preliminaries
Let \(\mathcal{P}=(\mathcal{P}_{i}\,|\,i\in I)\) be a probability ensemble consisting of distributions on \(\{0,1\}^{*}\), where \(I\subseteq\{0,1\}^{*}\). Then \(\mathcal{P}\) is called _polynomial-time samplable_ (or _polynomial-time constructible_) if there exists a probabilistic polynomial-time algorithm \(A\) such that for every \(i\in I\) the random variable \(A(i)\) is distributed according to \(\mathcal{P}_{i}\). It is easy to see that if \(\mathcal{P}\) is polynomial-time samplable, then there exists a polynomial \(\pi\) satisfying \(\operatorname{supp}\mathcal{P}_{i}\subseteq\{0,1\}^{\leq\pi(|i|)}\) for any \(i\in I\). Furthermore, let \(\mathcal{Q}=(\mathcal{Q}_{j}\,|\,j\in J)\) be a probability ensemble consisting of distributions on \(\{0,1\}^{*}\), where \(J\subseteq\mathbb{N}\). Usually, when it comes to polynomial-time samplability of \(\mathcal{Q}\), the indices are assumed to be represented in binary. If, however, these indices are
represented in unary, then we specify this explicitly. Thus, the ensemble \(\mathcal{Q}\) is said to be _polynomial-time samplable when the indices are represented in unary_ if there exists a probabilistic polynomial-time algorithm \(B\) such that for every \(j\in J\) the random variable \(B(1^{j})\) is distributed according to \(\mathcal{Q}_{j}\).
Suppose \(K\) is an infinite subset of \(\mathbb{N}\) and \(D\) is a subset of \(\{0,1\}^{*}\). Also, let \((\mathcal{D}_{k}\,|\,k\in K)\) be a probability ensemble consisting of distributions on \(D\). We assume that this probability ensemble is polynomial-time samplable when the indices are represented in unary. Furthermore, suppose \((D_{k}\,|\,k\in K)\) is a family of nonempty subsets of \(D\) such that there exists a polynomial \(\theta\) satisfying \(D_{k}\subseteq\{0,1\}^{\leq\theta(k)}\) for all \(k\in K\). This notation is used throughout the paper.
A function \(\delta\colon K\to\mathbb{R}_{+}\) is called _negligible_ if for every polynomial \(\pi\) there exists a nonnegative integer \(n\) such that \(\delta(k)\leq 1/\pi(k)\) whenever \(k\in K\) and \(k\geq n\). We denote by \(\operatorname{negl}\) an unspecified negligible function on \(K\). Any (in)equality containing \(\operatorname{negl}(k)\) is meant to hold for all \(k\in K\).
## 3 Weakly Pseudo-Free Families of Computational and Black-Box \(\Omega\)-Algebras
From now on, we assume that \(\Omega\) is finite and algorithms can work with its elements. In this section, we formally define and discuss families of computational and black-box \(\Omega\)-algebras, as well as some types of weak pseudo-freeness (including post-quantum ones) for these families. Of course, one can easily define the respective types of pseudo-freeness.
Throughout the paper, we denote by \(\sigma\) a function from a subset of \(\{0,1\}^{*}\) onto \(F_{\infty}(\mathfrak{V})\). This function is used for representation of elements of \(F_{\infty}(\mathfrak{V})\) for computational purposes. Let \(H\in\mathfrak{V}\) and \(g=(g_{1},\dots,g_{m})\), where \(m\in\mathbb{N}\setminus\{0\}\) and \(g_{1},\dots,g_{m}\in H\). Then we put
\[\Lambda(H,\mathfrak{V},\sigma,g) =\{(t,u)\in(\operatorname{dom}\sigma)^{2}\,|\,\sigma(t),\sigma(u )\in F_{m}(\mathfrak{V}),\,\sigma(t)\neq\sigma(u),\,\sigma(t)(g)=\sigma(u)(g)\}\] \[=\bigcup_{\begin{subarray}{c}v,w\in F_{m}(\mathfrak{V})\text{ s.t.}\\ v\neq w\wedge v(g)=w(g)\end{subarray}}(\sigma^{-1}(v)\times\sigma^{-1}(w)).\]
It is natural to call a pair \((v,w)\in(F_{m}(\mathfrak{V}))^{2}\) a _nontrivial relation_ between \(g_{1},\dots,g_{m}\) if \(v\neq w\) and \(v(g)=w(g)\). Then \(\Lambda(H,\mathfrak{V},\sigma,g)\) is the set of all representations of nontrivial relations between \(g_{1},\dots,g_{m}\) using \(\sigma\).
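For concreteness, here is a small worked example (ours; it assumes for the moment that \(\mathfrak{V}\) is the variety of all groups): take \(H=\mathbb{Z}/2\mathbb{Z}\in\mathfrak{V}\) and \(g=(g_{1})\) for any \(g_{1}\in H\).

```latex
% A nontrivial relation in H = Z/2Z with g = (g_1), assuming \mathfrak{V}
% is the variety of all groups:
v = a_1^2 \quad\text{and}\quad w = 1
\quad\text{satisfy}\quad
v \neq w \ \text{in } F_1(\mathfrak{V})
\quad\text{but}\quad
v(g) = g_1^2 = 1 = w(g),
% hence every pair in \sigma^{-1}(a_1^2) \times \sigma^{-1}(1) belongs to
% \Lambda(H, \mathfrak{V}, \sigma, g).
```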
**Example 3.1** (representation of elements of \(F_{\infty}(\mathfrak{V})\) by straight-line programs, see also [1, Example 3.13] or [1, Example 2.10]).: Denote by \(\operatorname{SLP}_{\mathfrak{V}}\) the function that takes each straight-line program \(u\) (over \(\Omega\)) to the element of \(F_{\infty}(\mathfrak{V})\) computed by \(u\) from \(a_{1},\dots,a_{m}\), where the nonnegative integer \(m\) is an upper bound for all integer elements of the sequence \(u\). (Of course, this element of \(F_{\infty}(\mathfrak{V})\) does not depend on \(m\).) Usually we will write \(\operatorname{SLP}\) instead of \(\operatorname{SLP}_{\mathfrak{V}}\). It is evident that \(\operatorname{SLP}\) is a function onto \(F_{\infty}(\mathfrak{V})\). Sometimes we will use this function as the function \(\sigma\). Note that this method of representation (for elements of the free group) was used in [11].
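The precise encoding of straight-line programs is fixed elsewhere in the paper, so the following Python sketch adopts one hypothetical convention purely for illustration: an entry of the sequence is either a positive integer \(i\) (copy the \(i\)th input element) or a pair \((\omega,\text{args})\) applying a fundamental operation to the results of earlier entries. Under that assumption, the value \(v(g)\) of the term \(v\) computed by a program can be evaluated in any concrete \(\Omega\)-algebra as follows.

```python
from typing import Callable, Dict, List, Sequence, Tuple, Union

# Hypothetical encoding: an entry is either a positive integer i (copy the
# i-th input element) or a pair (omega, args) applying the operation named
# omega to the results of earlier entries (all indices are 1-based).
Entry = Union[int, Tuple[str, Tuple[int, ...]]]

def evaluate_slp(slp: Sequence[Entry], ops: Dict[str, Callable], g: List):
    """Evaluate a straight-line program on the tuple g in an Omega-algebra
    whose fundamental operations are given by `ops`; the value of the last
    entry is v(g) for the term v computed by the program."""
    results = []
    for entry in slp:
        if isinstance(entry, int):               # copy an input element
            results.append(g[entry - 1])
        else:                                    # apply a fundamental operation
            omega, args = entry
            results.append(ops[omega](*(results[j - 1] for j in args)))
    return results[-1]

# Example in the additive group Z_15: the program computes g1 + g1 + g2.
ops = {"mul": lambda x, y: (x + y) % 15, "inv": lambda x: (-x) % 15}
print(evaluate_slp([1, 2, ("mul", (1, 1)), ("mul", (3, 2))], ops, [4, 6]))  # 14
```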
### Standard Model
A general definition of a family of computational \(\Omega\)-algebras was given in [1] (see Definition 3.1 in that work). These families consist of triples of the form \((H_{d},\rho_{d},\mathcal{R}_{d})\), where \(d\) ranges over \(D\), \(H_{d}\) is an \(\Omega\)-algebra, \(\rho_{d}\) is a function from a subset of \(\{0,1\}^{*}\) onto \(H_{d}\), and \(\mathcal{R}_{d}\) is a probability distribution on \(\operatorname{dom}\rho_{d}\) for any \(d\in D\). In this paper, we study only polynomially bounded families \(((H_{d},\rho_{d},\mathcal{R}_{d})\,|\,d\in D)\) of computational \(\Omega\)-algebras that have unique representations of elements. This means that the following conditions hold:
* There exists a polynomial \(\eta\) such that \(\operatorname{dom}\rho_{d}\subseteq\{0,1\}^{\leq\eta(|d|)}\) for all \(d\in D\). See also [1, Definition 3.3].
* For each \(d\in D\), the function \(\rho_{d}\) is one-to-one. Hence we can assume that for every \(d\in D\), \(H_{d}\subseteq\{0,1\}^{*}\) and the unique representation of each \(h\in H_{d}\) is \(h\) itself. Namely, we use the family \(((\operatorname{dom}\rho_{d},\operatorname{id}_{\operatorname{dom}\rho_{d}}, \mathcal{R}_{d})\,|\,d\in D)\) instead of \(((H_{d},\rho_{d},\mathcal{R}_{d})\,|\,d\in D)\). Here \(\operatorname{dom}\rho_{d}\) is considered as the unique \(\Omega\)-algebra such that \(\rho_{d}\) is an isomorphism of this \(\Omega\)-algebra onto \(H_{d}\) (\(d\in D\)). See also [1, Definition 3.4 and Remark 3.5]. Moreover, if \(H_{d}\subseteq\{0,1\}^{*}\), then we write \((H_{d},\mathcal{R}_{d})\) instead of \((H_{d},\operatorname{id}_{H_{d}},\mathcal{R}_{d})\).
Now we give a formal definition of a family of computational \(\Omega\)-algebras with the above restrictions. We also need a variant of this definition without probability distributions.
Suppose an \(\Omega\)-algebra \(H_{d}\subseteq\{0,1\}^{*}\) is assigned to each \(d\in D\). When necessary, we denote by \(\mathcal{H}_{d}\) a probability distribution on the (necessarily nonempty) \(\Omega\)-algebra \(H_{d}\) for every \(d\in D\). Note that some definitions do not depend on these probability distributions.
**Definition 3.2** (family of computational \(\Omega\)-algebras without distributions).: The family \((H_{d}\,|\,d\in D)\) is called a _family of computational \(\Omega\)-algebras without distributions_ if the following two conditions hold:
1. There exists a polynomial \(\eta\) such that \(H_{d}\subseteq\{0,1\}^{\leq\eta(|d|)}\) for all \(d\in D\).
2. For every \(\omega\in\Omega\) there exists a deterministic polynomial-time algorithm that, given \(d\in D\) and \(h_{1},\dots,h_{\operatorname{ar}\omega}\in H_{d}\), computes \(\omega(h_{1},\dots,h_{\operatorname{ar}\omega})\) in \(H_{d}\).
**Definition 3.3** (family of computational \(\Omega\)-algebras (with distributions), see also [1, Definition 3.1] or [1, Definition 2.6]).: The family \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is said to be a _family of computational \(\Omega\)-algebras with distributions_ or simply a _family of computational \(\Omega\)-algebras_ if the following two conditions hold:
1. The family \((H_{d}\,|\,d\in D)\) is a family of computational \(\Omega\)-algebras without distributions.
2. The probability ensemble \((\mathcal{H}_{d}\,|\,d\in D)\) is polynomial-time samplable.
Thus, by default, a family of computational \(\Omega\)-algebras is a family with distributions. The main motivation for introducing the notion of a family of computational \(\Omega\)-algebras without distributions is to abstract from these distributions whenever this is possible.
**Definition 3.4** (family is in \(\mathfrak{V}\)).: We say that the family \((H_{d}\,|\,d\in D)\) (or \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\)) is in \(\mathfrak{V}\) if \(H_{d}\in\mathfrak{V}\) for all \(d\in D\).
**Definition 3.5** (weakly pseudo-free family of computational \(\Omega\)-algebras).: Assume that \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is a family of computational \(\Omega\)-algebras in \(\mathfrak{V}\). Then this family is called _weakly pseudo-free_ in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) and \(\sigma\) if for any polynomial \(\pi\) and any probabilistic polynomial-time algorithm \(A\),
\[\Pr[A(1^{k},\mathbf{d},\mathbf{g})\in\Lambda(H_{\mathbf{d}},\mathfrak{V}, \sigma,\mathbf{g})]=\operatorname{negl}(k),\]
where \(\mathbf{d}\sim\mathcal{D}_{k}\) and \(\mathbf{g}\sim\mathcal{H}_{\mathbf{d}}^{\pi(k)}\).
_Remark 3.6_.: Note that for any \(H\in\mathfrak{V}\) and any \(g\in H^{m}\) (where \(m\in\mathbb{N}\setminus\{0\}\)), \(\Lambda(H,\mathfrak{V},\sigma,g)\) coincides with \(\Sigma_{1}^{\prime}(H,\mathfrak{V},\sigma,g)\) in the notation of [1]. So weak pseudo-freeness in the sense of Definition 3.5 is in fact weak \(1\)-pseudo-freeness in the sense of [1, Remark 3.9]. However, it is easy to see that weak \(1\)-pseudo-freeness in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) and \(\sigma\) is equivalent to weak pseudo-freeness in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) and \(\sigma\) (see [1, Remark 3.9]).
**Definition 3.7** (worst-case weakly pseudo-free family of computational \(\Omega\)-algebras without distributions).: Assume that \((H_{d}\,|\,d\in D)\) is a family of nonempty computational \(\Omega\)-algebras in \(\mathfrak{V}\) without distributions. Then this family is said to be _worst-case weakly pseudo-free_ in \(\mathfrak{V}\) with respect to \((D_{k}\,|\,k\in K)\) and \(\sigma\) if for any polynomial \(\pi\) and any probabilistic polynomial-time algorithm \(A\),
\[\min_{d\in D_{k},\,g\in H_{d}^{\pi(k)}}\Pr[A(1^{k},d,g)\in\Lambda(H_{d}, \mathfrak{V},\sigma,g)]=\operatorname{negl}(k).\]
### Black-Box \(\Omega\)-Algebra Model
Babai and Szemerédi [1] introduced a model of computation in finite groups, called the black-box group model. In that model, elements of a finite group \(G\) are represented for computational purposes by bit strings of the same length (depending on \(G\)), and the group operations in \(G\) are performed by an oracle. Such groups are called black-box groups. This model can be naturally generalized to \(\Omega\)-algebras.
In this paper, unless otherwise specified, we require every element of a black-box \(\Omega\)-algebra to be represented by a unique bit string. Therefore we can assume that for any black-box \(\Omega\)-algebra \(H\), we have \(H\subseteq\{0,1\}^{n}\), where \(n\in\mathbb{N}\), and the unique representation of each \(h\in H\) is \(h\) itself.
**Definition 3.8** (black-box \(\Omega\)-algebra).: Any \(\Omega\)-algebra \(H\) such that \(H\subseteq\{0,1\}^{n}\) for some \(n\in\mathbb{N}\) is called a _black-box \(\Omega\)-algebra_.
Let \(H\) be a black-box \(\Omega\)-algebra. It is evident that if \(H\neq\emptyset\), then \(H\subseteq\{0,1\}^{n}\) for a single \(n\in\mathbb{N}\). Otherwise, this inclusion holds for all \(n\in\mathbb{N}\).
**Definition 3.9** (\(\Omega\)-oracle).: An oracle is said to be an _\(\Omega\)-oracle_ for \(H\) if, given any query of the form \((\omega,h_{1},\ldots,h_{\operatorname{ar}\omega})\) with \(\omega\in\Omega\) and \(h_{1},\ldots,h_{\operatorname{ar}\omega}\in H\), this oracle returns \(\omega(h_{1},\ldots,h_{\operatorname{ar}\omega})\). (On other queries, the behavior of the oracle may be arbitrary.)
If \(\Omega\) is a set of group operation symbols, then an \(\Omega\)-oracle for a black-box group is called a _group oracle_. Note that some authors require a group oracle for a black-box group to perform only the multiplication and the inversion in this group (see, e.g., [11, 12, 13]). It is obvious that the identity element of any group can be computed as \(g^{-1}g\), where \(g\) is an arbitrary element of this group.
**Definition 3.10** (black-box \(\Omega\)-algebra algorithm).: A (possibly probabilistic) algorithm \(A\) is called a _black-box \(\Omega\)-algebra algorithm_ if, when \(A\) performs a computation in an arbitrary black-box \(\Omega\)-algebra,
* \(A\) has access to an \(\Omega\)-oracle for this black-box \(\Omega\)-algebra and
* all queries made by \(A\) to this \(\Omega\)-oracle have the form specified in Definition 3.9.
Suppose \(A\) is a probabilistic black-box \(\Omega\)-algebra algorithm. Consider a computation of \(A\) in the black-box \(\Omega\)-algebra \(H\). Then Definitions 3.9 and 3.10 imply that this computation and its output depend only on \(H\) but not on the \(\Omega\)-oracle for \(H\) used by \(A\). This is because the answers of this oracle to the queries made by \(A\) are completely determined by the \(\Omega\)-algebra \(H\). Therefore we can denote by \(A^{H}\) the algorithm \(A\) performing a computation in \(H\) and hence using an \(\Omega\)-oracle for \(H\). If the algorithm \(A^{H}\) has access to an additional oracle, say, \(O\), then we denote this algorithm by \(A^{H,O}\).
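In the classical setting, the interaction pattern of Definitions 3.9 and 3.10 can be mimicked by the following minimal Python sketch (ours; the class and function names are hypothetical): the \(\Omega\)-oracle is the only interface through which a black-box algorithm touches the algebra, which also makes query counting straightforward.

```python
class OmegaOracle:
    """An Omega-oracle for a black-box Omega-algebra (Definition 3.9):
    elements are bit strings of a fixed length, and the algorithm may only
    submit queries of the form (omega, h_1, ..., h_{ar omega})."""

    def __init__(self, ops):
        self._ops = ops      # hidden: maps a symbol to a function on strings
        self.queries = 0     # query counter, for accounting purposes

    def query(self, omega, *args):
        self.queries += 1
        return self._ops[omega](*args)

# A black-box group algorithm computing the identity element as g^{-1} g,
# as noted after Definition 3.9.
def identity_element(oracle: OmegaOracle, g: str) -> str:
    return oracle.query("mul", oracle.query("inv", g), g)

# Z_4 with 2-bit strings as a toy black-box group.
enc = lambda x: format(x, "02b")
dec = lambda s: int(s, 2)
ops = {"mul": lambda a, b: enc((dec(a) + dec(b)) % 4),
       "inv": lambda a: enc((-dec(a)) % 4)}
oracle = OmegaOracle(ops)
print(identity_element(oracle, "11"), oracle.queries)  # prints: 00 2
```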
Similarly to Subsection 3.1, let a black-box \(\Omega\)-algebra \(H_{d}\) be assigned to each \(d\in D\). When necessary, we denote by \(\mathcal{H}_{d}\) a probability distribution on the (necessarily nonempty) \(\Omega\)-algebra \(H_{d}\) for every \(d\in D\). We give analogs of Definitions 3.2, 3.3, 3.5, and 3.7 in the black-box \(\Omega\)-algebra model. Note that some of these definitions do not depend on the probability distributions \(\mathcal{H}_{d}\).
**Definition 3.11** (family of black-box \(\Omega\)-algebras without and with distributions).:
* The family \((H_{d}\,|\,d\in D)\) is called a _family of black-box \(\Omega\)-algebras without distributions_ if there exist a function \(\xi\colon D\to\mathbb{N}\) and a polynomial \(\eta\) such that \(H_{d}\subseteq\{0,1\}^{\xi(d)}\) and \(\xi(d)\leq\eta(|d|)\) for all \(d\in D\).
* The family \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is said to be a _family of black-box \(\Omega\)-algebras with distributions_ or simply a _family of black-box \(\Omega\)-algebras_ if \((H_{d}\,|\,d\in D)\) is a family of black-box \(\Omega\)-algebras without distributions.
By default, similarly to families of computational \(\Omega\)-algebras, a family of black-box \(\Omega\)-algebras is a family with distributions. The main motivation for introducing the notion of a family of black-box \(\Omega\)-algebras without distributions is to abstract from these distributions whenever this is possible. Cf. Subsection 3.1.
**Definition 3.12** (weakly pseudo-free family of black-box \(\Omega\)-algebras).: Assume that \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is a family of black-box \(\Omega\)-algebras in \(\mathfrak{V}\). Then this family is called _weakly pseudo-free_ in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) and \(\sigma\) if for any polynomials \(\pi\) and \(\tau\) and any probabilistic polynomial-time black-box \(\Omega\)-algebra algorithm \(A\),
\[\Pr[A^{H_{\mathbf{d}}}(1^{k},\mathbf{d},\mathbf{g},\mathbf{r})\in\Lambda(H_{\mathbf{d}},\mathfrak{V},\sigma,\mathbf{g})]=\operatorname{negl}(k),\]
where \(\mathbf{d}\sim\mathcal{D}_{k}\), \(\mathbf{g}\sim\mathcal{H}_{\mathbf{d}}^{\pi(k)}\), and \(\mathbf{r}\sim\mathcal{H}_{\mathbf{d}}^{\tau(k)}\).
_Remark 3.13_.: For a probability distribution \(\mathcal{Y}\) on \(\{0,1\}^{*}\), let \(\operatorname{Smpl}\mathcal{Y}\) be a probabilistic oracle that returns a random sample from \(\mathcal{Y}\) on every query. These samples are chosen independently of each other regardless of the queries. Consider the definition obtained from Definition 3.12 by removing \(\mathbf{r}\) (and \(\tau\)) and giving the algorithm \(A\) access to \(\operatorname{Smpl}\mathcal{H}_{\mathbf{d}}\). It is easy to see that this definition is equivalent to the original one. Namely, assume that \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is a family of black-box \(\Omega\)-algebras in \(\mathfrak{V}\), as in
Definition 3.12. Then this family is weakly pseudo-free in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) and \(\sigma\) if and only if for any polynomial \(\pi\) and any probabilistic polynomial-time black-box \(\Omega\)-algebra algorithm \(A\),
\[\Pr[A^{H_{\mathbf{d}},\operatorname{Smpl}\mathcal{H}_{\mathbf{d}}}(1^{k}, \mathbf{d},\mathbf{g})\in\Lambda(H_{\mathbf{d}},\mathfrak{V},\sigma,\mathbf{g })]=\operatorname{negl}(k),\]
where \(\mathbf{d}\sim\mathcal{D}_{k}\) and \(\mathbf{g}\sim\mathcal{H}_{\mathbf{d}}^{\pi(k)}\).
**Definition 3.14** (worst-case weakly pseudo-free family of black-box \(\Omega\)-algebras without distributions).: Assume that \((H_{d}\,|\,d\in D)\) is a family of nonempty black-box \(\Omega\)-algebras in \(\mathfrak{V}\) without distributions. Then this family is said to be _worst-case weakly pseudo-free_ in \(\mathfrak{V}\) with respect to \((D_{k}\,|\,k\in K)\) and \(\sigma\) if for any polynomial \(\pi\) and any probabilistic polynomial-time black-box \(\Omega\)-algebra algorithm \(A\),
\[\min_{d\in D_{k},\,g\in H_{d}^{\pi(k)}}\Pr[A^{H_{d}}(1^{k},d,g)\in\Lambda(H_{d },\mathfrak{V},\sigma,g)]=\operatorname{negl}(k).\]
### Quantum Computation Model
We assume that the reader is familiar with the basics of quantum computation. For a detailed introduction to this model of computation, see [10], [11, Part 2], or [14, Section 2 and Appendix C].
The purpose of this subsection is to give analogs of Definitions 3.5, 3.7, 3.9, 3.10, 3.12, and 3.14 in the quantum computation model. For Definitions 3.5 and 3.7, this is straightforward. Namely, it suffices to require the algorithm \(A\) to be quantum.
**Definition 3.15** (post-quantum weakly pseudo-free family of computational \(\Omega\)-algebras).: Let \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) be a family of computational \(\Omega\)-algebras in \(\mathfrak{V}\). Then this family is called _post-quantum weakly pseudo-free_ in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) and \(\sigma\) if for any polynomial \(\pi\) and any polynomial-time quantum algorithm \(A\),
\[\Pr[A(1^{k},\mathbf{d},\mathbf{g})\in\Lambda(H_{\mathbf{d}},\mathfrak{V}, \sigma,\mathbf{g})]=\operatorname{negl}(k),\]
where \(\mathbf{d}\sim\mathcal{D}_{k}\) and \(\mathbf{g}\sim\mathcal{H}_{\mathbf{d}}^{\pi(k)}\).
**Definition 3.16** (post-quantum worst-case weakly pseudo-free family of computational \(\Omega\)-algebras without distributions).: Suppose \((H_{d}\,|\,d\in D)\) is a family of nonempty computational \(\Omega\)-algebras in \(\mathfrak{V}\) without distributions. Then this family is said to be _post-quantum worst-case weakly pseudo-free_ in \(\mathfrak{V}\) with respect to \((D_{k}\,|\,k\in K)\) and \(\sigma\) if for any polynomial \(\pi\) and any polynomial-time quantum algorithm \(A\),
\[\min_{d\in D_{k},\,g\in H_{d}^{\pi(k)}}\Pr[A(1^{k},d,g)\in\Lambda(H_{d}, \mathfrak{V},\sigma,g)]=\operatorname{negl}(k).\]
Let \(H\) be a black-box \(\Omega\)-algebra and let \(n\) be a nonnegative integer such that \(H\subseteq\{0,1\}^{n}\). (If \(H\neq\emptyset\), then \(n\) is unique; otherwise, \(n\) can be chosen arbitrarily.) We denote by \(Q_{n}\) the state space of \(n\) qubits. Suppose \(m\in\mathbb{N}\setminus\{0\}\). Consider a system of \(m\) quantum registers, each consisting of \(n\) qubits. Of course, the state space of this system is the \(m\)th tensor power of \(Q_{n}\), denoted by \(Q_{n}^{\otimes m}\). If for every \(i\in\{1,\ldots,m\}\) the \(i\)th quantum register is in the state \(|y_{i}\rangle\in Q_{n}\), then we write the state of the total system as \(|y_{1}\rangle\ldots|y_{m}\rangle\) instead of \(|y_{1}\rangle\otimes\cdots\otimes|y_{m}\rangle\). (We use the Dirac ket notation \(|\cdot\rangle\) for quantum state vectors.) For a unitary operator \(W\) on \(Q_{n}^{\otimes r}\) (where \(r\in\{1,\ldots,m\}\)) and a tuple \((i_{1},\ldots,i_{r})\) of distinct integers in \(\{1,\ldots,m\}\), we denote by \(W[i_{1},\ldots,i_{r}]\) the unitary operator on \(Q_{n}^{\otimes m}\) acting as \(W\) on the system of quantum registers with numbers \(i_{1},\ldots,i_{r}\) (taken in this order) and leaving all other registers unchanged.
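The register-indexing notation \(W[i_{1},\ldots,i_{r}]\) can be made concrete with a small numpy sketch (ours, purely illustrative): move the selected register axes to the front, apply \(W\) there, and move the axes back.

```python
import numpy as np

def apply_on_registers(W, state, regs, m, d):
    """Apply W[i_1,...,i_r]: W acts on the registers listed in `regs`
    (1-based, in that order) and leaves the other registers unchanged.
    `state` is a vector on m registers of local dimension d (here d = 2^n)."""
    r = len(regs)
    axes = [i - 1 for i in regs]
    psi = np.moveaxis(state.reshape((d,) * m), axes, list(range(r)))
    psi = (W @ psi.reshape(d**r, -1)).reshape((d,) * m)
    return np.moveaxis(psi, list(range(r)), axes).reshape(-1)

# Example with n = 1 (so d = 2) and m = 3: the operator CNOT_1, which maps
# |v>|w> to |v>|v xor w>, applied as CNOT_1[1,3] to the state |1>|0>|0>.
CNOT = np.zeros((4, 4))
for v in range(2):
    for w in range(2):
        CNOT[2 * v + (v ^ w), 2 * v + w] = 1.0

state = np.zeros(8)
state[0b100] = 1.0                                       # |1>|0>|0>
out = apply_on_registers(CNOT, state, [1, 3], m=3, d=2)
assert out[0b101] == 1.0                                 # |1>|0>|1>
```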
**Definition 3.17** (quantum \(\Omega\)-oracle).: A family \((U_{\omega}\,|\,\omega\in\Omega)\), where \(U_{\omega}\) is a unitary operator on \(Q_{n}^{\otimes((\operatorname{ar}\omega)+1)}\) for every \(\omega\in\Omega\), is called a _quantum \(\Omega\)-oracle_ for \(H\) if
\[U_{\omega}(|h_{1}\rangle\ldots|h_{\operatorname{ar}\omega}\rangle|v\rangle)=|h _{1}\rangle\ldots|h_{\operatorname{ar}\omega}\rangle|v\oplus\omega(h_{1}, \ldots,h_{\operatorname{ar}\omega})\rangle\]
for all \(\omega\in\Omega\), \(h_{1},\ldots,h_{\operatorname{ar}\omega}\in H\), and \(v\in\{0,1\}^{n}\).
Similarly to Subsection 3.2, if \(\Omega\) is a set of group operation symbols, then a quantum \(\Omega\)-oracle for a black-box group is called a _quantum group oracle_.
_Remark 3.18_.: In this remark, we assume that \(\Omega\) is a set of group operation symbols and \(H\) is a (black-box) group. In some works (see, e.g., [20, Section 2] and [21, Section 2]), a quantum group oracle for \(H\) is given by a pair \((M,M^{\prime})\) of unitary operators on \(Q_{n}^{\otimes 2}\) such that
\[M(|g\rangle|h\rangle)=|g\rangle|gh\rangle\quad\text{and}\quad M^{\prime}(|g \rangle|h\rangle)=|g\rangle|g^{-1}h\rangle\quad\text{for all }g,h\in H. \tag{2}\]
However, such a pair can be efficiently implemented using a quantum group oracle for \(H\) in the sense of Definition 3.17, and vice versa. Details follow. In particular, the result of [21] used by us in Subsection 4.2 holds in our model as well.
Let \(\text{CNOT}_{n}\) be the unitary operator on \(Q_{n}^{\otimes 2}\) such that \(\text{CNOT}_{n}(|v\rangle|w\rangle)=|v\rangle|v\oplus w\rangle\) for all \(v,w\in\{0,1\}^{n}\). Of course, \(\text{CNOT}_{n}\) can be efficiently implemented by a quantum circuit consisting of \(n\) controlled-NOT gates (these gates implement \(\text{CNOT}_{1}\)).
Denote by \(\mu\), \(\iota\), and \(1\) the symbols in \(\Omega\) for the multiplication, the inversion, and the identity element in a group, respectively.
1. Suppose \((U_{\mu},U_{\iota},U_{1})\) is a quantum group oracle for \(H\). Then \[\underline{|g\rangle}|h\rangle\,\underline{|gh\rangle}=U_{\mu}(|g\rangle|h\rangle|0^{n}\rangle)\quad\text{and}\quad\underline{|g\rangle}|h\rangle|g^{-1}\rangle\,\underline{|g^{-1}h\rangle}=U_{\mu}[3,2,4]\,U_{\iota}[1,3](|g\rangle|h\rangle|0^{n}\rangle|0^{n}\rangle)\] for all \(g,h\in H\). This yields an efficient implementation of a pair \((M,M^{\prime})\) of unitary operators on \(Q_{n}^{\otimes 2}\) satisfying condition (2). The contents of the registers that form the outputs of \(M\) and \(M^{\prime}\) are underlined.
2. Let \((M,M^{\prime})\) be a pair of unitary operators on \(Q_{n}^{\otimes 2}\) satisfying condition (2). Then \[\underline{|g\rangle|h\rangle|v\oplus gh\rangle}|gh\rangle=\text{CNOT}_{n}[4,3]\,M[1,4]\,\text{CNOT}_{n}[2,4](|g\rangle|h\rangle|v\rangle|0^{n}\rangle),\] \[\underline{|h\rangle|v\oplus h^{-1}\rangle}|h^{-1}\rangle=\text{CNOT}_{n}[3,2]\,(M^{\prime}[1,3])^{2}\,\text{CNOT}_{n}[1,3](|h\rangle|v\rangle|0^{n}\rangle),\text{ and}\] \[|h\rangle\,\underline{|v\oplus 1\rangle}\,|1\rangle=\text{CNOT}_{n}[3,2]\,M^{\prime}[1,3]\,\text{CNOT}_{n}[1,3](|h\rangle|v\rangle|0^{n}\rangle)\] for all \(g,h\in H\) and \(v\in\{0,1\}^{n}\). This yields an efficient implementation of a quantum group oracle \((U_{\mu},U_{\iota},U_{1})\) for \(H\). The contents of the registers that form the outputs of \(U_{\mu}\), \(U_{\iota}\), and \(U_{1}\) are underlined.
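To make item 1 of Remark 3.18 concrete, the following numpy sketch (ours, for illustration only) builds the operator \(U_{\mu}\) of a quantum group oracle for the toy group \(\mathbb{Z}/2\mathbb{Z}\) with \(n=1\) and checks that applying it to \(|g\rangle|h\rangle|0\rangle\) leaves \(|gh\rangle\) in the third register, so that registers 1 and 3 indeed carry \(M(|g\rangle|h\rangle)=|g\rangle|gh\rangle\).

```python
import numpy as np

mul = lambda g, h: g ^ h   # toy black-box group: Z/2Z, operation XOR, n = 1

# U_mu |g>|h>|v> = |g>|h>|v xor gh>  (Definition 3.17), as an 8x8 matrix
# over the basis |g>|h>|v> with index 4g + 2h + v.
U = np.zeros((8, 8))
for g in range(2):
    for h in range(2):
        for v in range(2):
            U[(g << 2) | (h << 1) | (v ^ mul(g, h)),
              (g << 2) | (h << 1) | v] = 1.0

assert np.allclose(U @ U.T, np.eye(8))   # unitary (here, a permutation)

# Item 1: U_mu(|g>|h>|0>) = |g>|h>|gh>, so registers 1 and 3 implement M.
for g in range(2):
    for h in range(2):
        state = np.zeros(8)
        state[(g << 2) | (h << 1)] = 1.0          # the state |g>|h>|0>
        idx = int(np.argmax(U @ state))
        assert (idx >> 2, idx & 1) == (g, mul(g, h))
print("Remark 3.18, item 1, verified for Z/2Z")
```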
For each \(\omega\in\Omega\), we denote by \(E_{H,\omega}\) the subspace of \(Q_{n}^{\otimes((\operatorname{ar}\omega)+1)}\) spanned by
\[\{|h_{1}\rangle\dots|h_{\operatorname{ar}\omega}\rangle|v\rangle\,|\,h_{1}, \dots,h_{\operatorname{ar}\omega}\in H,\,v\in\{0,1\}^{n}\}.\]
**Definition 3.19** (black-box \(\Omega\)-algebra quantum algorithm).: A quantum algorithm \(A\) is said to be a _black-box \(\Omega\)-algebra quantum algorithm_ if, when \(A\) performs a computation in an arbitrary black-box \(\Omega\)-algebra \(G\),
* \(A\) has access to a quantum \(\Omega\)-oracle (say, \((U_{\omega}\,|\,\omega\in\Omega)\)) for \(G\) and
* for every \(\omega\in\Omega\), the operator \(U_{\omega}\) is applied only to state vectors in \(E_{G,\omega}\).
If \(\Omega\) is a set of group operation symbols, then a black-box \(\Omega\)-algebra quantum algorithm is called a _black-box group quantum algorithm_ when we are interested in its computations only in black-box groups.
Suppose \(A\) is a black-box \(\Omega\)-algebra quantum algorithm. Consider a computation of \(A\) in the black-box \(\Omega\)-algebra \(H\). Then Definitions 3.17 and 3.19 imply that this computation and its output depend only on \(H\) but not on the quantum \(\Omega\)-oracle for \(H\) (say, \((U_{\omega}\,|\,\omega\in\Omega)\)) used by \(A\). This is because for every \(\omega\in\Omega\), the action of the operator \(U_{\omega}\) on \(E_{H,\omega}\) is completely determined by the \(\Omega\)-algebra \(H\). Therefore, similarly to Subsection 3.2, we can denote by \(A^{H}\) the algorithm \(A\) performing a computation in \(H\) and hence using a quantum \(\Omega\)-oracle for \(H\).
**Definition 3.20** (post-quantum weakly pseudo-free family of black-box \(\Omega\)-algebras).: Let \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) be a family of black-box \(\Omega\)-algebras in \(\mathfrak{V}\). Then this family is called _post-quantum weakly pseudo-free_ in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) and \(\sigma\) if for any polynomials \(\pi\) and \(\tau\) and any polynomial-time black-box \(\Omega\)-algebra quantum algorithm \(A\),
\[\Pr[A^{H_{\mathbf{d}}}(1^{k},\mathbf{d},\mathbf{g},\mathbf{r})\in\Lambda(H_{\mathbf{d}},\mathfrak{V},\sigma,\mathbf{g})]=\operatorname{negl}(k),\]
where \(\mathbf{d}\sim\mathcal{D}_{k}\), \(\mathbf{g}\sim\mathcal{H}_{\mathbf{d}}^{\pi(k)}\), and \(\mathbf{r}\sim\mathcal{H}_{\mathbf{d}}^{\tau(k)}\).
**Definition 3.21** (post-quantum worst-case weakly pseudo-free family of black-box \(\Omega\)-algebras without distributions).: Suppose \((H_{d}\,|\,d\in D)\) is a family of nonempty black-box \(\Omega\)-algebras in \(\mathfrak{V}\) without distributions. Then this family is said to be _post-quantum worst-case weakly pseudo-free_ in \(\mathfrak{V}\) with respect to \((D_{k}\,|\,k\in K)\) and \(\sigma\) if for any polynomial \(\pi\) and any polynomial-time black-box \(\Omega\)-algebra quantum algorithm \(A\),
\[\min_{d\in D_{k},\,g\in H_{d}^{\pi(k)}}\Pr[A^{H_{d}}(1^{k},d,g)\in\Lambda(H_{d},\mathfrak{V},\sigma,g)]=\operatorname{negl}(k).\]
### Relations between the Types of Weak Pseudo-Freeness
In this subsection, weak pseudo-freeness of any type means weak pseudo-freeness of this type in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) (or \((D_{k}\,|\,k\in K)\) in the worst-case setting) and \(\sigma\).
_Remark 3.22_.: In this remark, we assume that \(\operatorname{supp}\mathcal{D}_{k}\subseteq D_{k}\) for all \(k\in K\). It is easy to see that if \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is a weakly (resp., post-quantum weakly) pseudo-free family of computational \(\Omega\)-algebras, then \((H_{d}\,|\,d\in D)\) is a worst-case weakly (resp., post-quantum worst-case weakly) pseudo-free family of computational \(\Omega\)-algebras without distributions. Similarly, if \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is a weakly (resp., post-quantum weakly) pseudo-free family of black-box \(\Omega\)-algebras, then \((H_{d}\,|\,d\in D)\) is a worst-case weakly (resp., post-quantum worst-case weakly) pseudo-free family of black-box \(\Omega\)-algebras without distributions. This is because the minimum of a real-valued random variable does not exceed the expectation of this random variable, provided that these minimum and expectation exist.
_Remark 3.23_.: It is well known that every probabilistic polynomial-time algorithm can be simulated by a polynomial-time quantum algorithm (see, e.g., [10, Subsection 1.4.1] or [13, Section 7]). Also, every probabilistic polynomial-time black-box \(\Omega\)-algebra algorithm can be simulated by a polynomial-time black-box \(\Omega\)-algebra quantum algorithm. Therefore any post-quantum weakly (resp., post-quantum worst-case weakly) pseudo-free family of computational or black-box \(\Omega\)-algebras with (resp., without) distributions is also weakly (resp., worst-case weakly) pseudo-free.
_Remark 3.24_.: Let \(\mathtt{H}=((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) (resp., \(\mathtt{H}=(H_{d}\,|\,d\in D)\)) be a family of computational \(\Omega\)-algebras in \(\mathfrak{V}\) with (resp., without) distributions. Assume that there exist a function \(\xi\colon D\to\mathbb{N}\) and a polynomial \(\eta\) such that \(H_{d}\subseteq\{0,1\}^{\xi(d)}\) and \(\xi(d)\leq\eta(|d|)\) for all \(d\in D\). Then by Definition 3.11, \(\mathtt{H}\) is a family of black-box \(\Omega\)-algebras with (resp., without) distributions. Furthermore, \(\mathtt{H}\) is weakly pseudo-free or post-quantum weakly pseudo-free (resp., worst-case weakly pseudo-free or post-quantum worst-case weakly pseudo-free) as a family of computational \(\Omega\)-algebras with (resp., without) distributions if and only if \(\mathtt{H}\) satisfies the same weak pseudo-freeness condition as a family of black-box \(\Omega\)-algebras with (resp., without) distributions. This can be proved straightforwardly.
**Proposition 3.25**.: _Assume that there exists a weakly pseudo-free or a post-quantum weakly pseudo-free (resp., a worst-case weakly pseudo-free or a post-quantum worst-case weakly pseudo-free) family of computational \(\Omega\)-algebras with (resp., without) distributions. Then there exists a family of black-box \(\Omega\)-algebras with (resp., without) distributions that satisfies the same weak pseudo-freeness condition as a family of black-box \(\Omega\)-algebras with (resp., without) distributions._
Proof.: For any \(n\in\mathbb{N}\), let \(\alpha_{n}\) be the one-to-one function from \(\{0,1\}^{\leq n}\) onto \(\{0,1\}^{n+1}\setminus\{0^{n+1}\}\) defined by \(\alpha_{n}(u)=u10^{n-|u|}\) for all \(u\in\{0,1\}^{\leq n}\). (Here, of course, \(u10^{n-|u|}\) denotes the concatenation of \(u\), \(1\), and \(0^{n-|u|}\).) Then the functions \((1^{n},u)\mapsto\alpha_{n}(u)\) and \((1^{n},t)\mapsto\alpha_{n}^{-1}(t)\), where \(n\in\mathbb{N}\), \(u\in\{0,1\}^{\leq n}\), and \(t\in\{0,1\}^{n+1}\setminus\{0^{n+1}\}\), are polynomial-time computable.
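The padding map \(\alpha_{n}\) and its inverse are straightforward to implement; the following Python sketch (ours) mirrors the definition and makes the polynomial-time computability claim evident.

```python
def alpha(n: int, u: str) -> str:
    """alpha_n: {0,1}^{<=n} -> {0,1}^{n+1} minus {0^{n+1}}, u |-> u 1 0^{n-|u|}."""
    assert len(u) <= n and set(u) <= {"0", "1"}
    return u + "1" + "0" * (n - len(u))

def alpha_inv(n: int, t: str) -> str:
    """The inverse map: strip the trailing run of zeros and the preceding 1."""
    assert len(t) == n + 1 and t != "0" * (n + 1)
    return t[:t.rindex("1")]

u = "010"
t = alpha(5, u)                   # '010100', a string of length n + 1 = 6
assert alpha_inv(5, t) == u
```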
Suppose \(\mathtt{G}=((G_{d},\mathcal{G}_{d})\,|\,d\in D)\) is a family of computational \(\Omega\)-algebras in \(\mathfrak{V}\). Choose a polynomial \(\eta\) such that \(G_{d}\subseteq\{0,1\}^{\leq\eta(|d|)}\) for all \(d\in D\). For each such \(d\), let \(H_{d}=\alpha_{\eta(|d|)}(G_{d})\subseteq\{0,1\}^{\eta(|d|)+1}\setminus\{0^{\eta(|d|)+1}\}\) and \(\mathcal{H}_{d}=\alpha_{\eta(|d|)}(\mathcal{G}_{d})\). Consider \(H_{d}\) as the unique \(\Omega\)-algebra such that the restriction of \(\alpha_{\eta(|d|)}\) to \(G_{d}\) is an isomorphism of \(G_{d}\) onto \(H_{d}\). Then it is easy to see that \(\mathtt{H}=((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is a family of computational \(\Omega\)-algebras in \(\mathfrak{V}\). Moreover, \(\mathtt{H}\) is also a family of black-box \(\Omega\)-algebras (see Remark 3.24). This is because there exists a polynomial \(\eta^{\prime}\) such that \(\eta(n)+1\leq\eta^{\prime}(n)\) for all \(n\in\mathbb{N}\).
Assume that \(\mathtt{G}\) is a weakly pseudo-free family of computational \(\Omega\)-algebras. It is easy to show that for any isomorphic \(\Omega\)-algebras \(G,H\in\mathfrak{V}\), any \(g\in G^{m}\), where \(m\in\mathbb{N}\setminus\{0\}\), and any isomorphism \(\alpha\colon G\to H\), we have \(\Lambda(H,\mathfrak{V},\sigma,\alpha(g))=\Lambda(G,\mathfrak{V},\sigma,g)\). This implies that \(\mathtt{H}\) is a weakly pseudo-free family of computational \(\Omega\)-algebras. Indeed, suppose \(\pi\) is a polynomial and \(A\) is a probabilistic polynomial-time algorithm trying to break the weak pseudo-freeness of \(\mathtt{H}\) for \(\pi\) (i.e., the condition of Definition 3.5 for
\(A\) and \(\pi\)). Let \(B\) be a probabilistic polynomial-time algorithm (trying to break the weak pseudo-freeness of \(\mathtt{G}\) for \(\pi\)) that on input \((1^{k},d,g)\) for every \(k\in K\), \(d\in\operatorname{supp}\mathcal{D}_{k}\), and \(g\in(\operatorname{supp}\mathcal{G}_{d})^{\pi(k)}\) runs \(A\) on input \((1^{k},d,\alpha_{\eta(|d|)}(g))\) and returns the output (if it exists). Then
\[\Pr[A(1^{k},\mathbf{d},\mathbf{h})\in\Lambda(H_{\mathbf{d}}, \mathfrak{V},\sigma,\mathbf{h})] =\Pr[A(1^{k},\mathbf{d},\alpha_{\eta(|\mathbf{d}|)}(\mathbf{g})) \in\Lambda(H_{\mathbf{d}},\mathfrak{V},\sigma,\alpha_{\eta(|\mathbf{d}|)}( \mathbf{g}))]\] \[=\Pr[B(1^{k},\mathbf{d},\mathbf{g})\in\Lambda(G_{\mathbf{d}}, \mathfrak{V},\sigma,\mathbf{g})]=\operatorname{negl}(k),\]
where \(\mathbf{d}\sim\mathcal{D}_{k}\), \(\mathbf{h}\sim\mathcal{H}_{\mathbf{d}}^{\pi(k)}\), and \(\mathbf{g}\sim\mathcal{G}_{\mathbf{d}}^{\pi(k)}\). Here we use the fact that the random variables \((\mathbf{d},\mathbf{h})\) and \((\mathbf{d},\alpha_{\eta(|\mathbf{d}|)}(\mathbf{g}))\) are identically distributed.
By Remark 3.24, \(\mathtt{H}\) is also a weakly pseudo-free family of black-box \(\Omega\)-algebras. Thus, if there exists a weakly pseudo-free family of computational \(\Omega\)-algebras, then there exists a weakly pseudo-free family of black-box \(\Omega\)-algebras. For other types of weak pseudo-freeness mentioned in the proposition, the proofs are the same, _mutatis mutandis_.
We illustrate the statements of Remarks 3.22 and 3.23 and of Proposition 3.25 by the diagram in Figure 1.
### Weak Pseudo-Freeness in the Variety Generated by the \(\Psi\)-Reducts of All \(\Omega\)-Algebras in \(\mathfrak{V}\)
Let \(\Psi\) be a subset of \(\Omega\) and let \(H\) be an \(\Omega\)-algebra. Then the \(\Psi\)-algebra obtained from \(H\) by omitting the fundamental operations associated with the symbols in \(\Omega\setminus\Psi\) is called the _\(\Psi\)-reduct_ of \(H\) (or the _reduct_ of \(H\) to \(\Psi\)). We denote the \(\Psi\)-reduct of \(H\) by \(H|_{\Psi}\). The \(\Omega\)-algebra \(H\) is said to be an _expansion_ of \(H|_{\Psi}\) to \(\Omega\). Furthermore, \(\mathfrak{V}|_{\Psi}\) denotes the variety of \(\Psi\)-algebras generated by the \(\Psi\)-reducts of all \(\Omega\)-algebras in \(\mathfrak{V}\). In other words, \(\mathfrak{V}|_{\Psi}\) is the variety of \(\Psi\)-algebras defined by the set of all identities over \(\Psi\) that hold in \(\mathfrak{V}\) (actually, in \(G|_{\Psi}\) for every \(G\in\mathfrak{V}\)). Clearly, \(\mathfrak{V}|_{\Psi}\) is nontrivial if and only if \(\mathfrak{V}\) is nontrivial.
The \(\Omega\)-algebra \(H\) is called an _expanded group_ if there exists a set \(\Gamma\subseteq\Omega\) of group operation symbols such that \(H|_{\Gamma}\) is a group. When it comes to classes of expanded groups, we assume that this set \(\Gamma\) is the same for all expanded groups in the class. Thus, \(\mathfrak{V}\) is said to be a _variety of expanded groups_ if \(\Omega\) contains a set \(\Gamma\) of group operation symbols such that \(H|_{\Gamma}\) is a group for all \(H\in\mathfrak{V}\) (or, equivalently, \(\mathfrak{V}|_{\Gamma}\) is a variety of groups).

Figure 1: Relations between the types of weak pseudo-freeness defined in Section 3. The abbreviations PQ, WC, FoC\(\Omega\)A, FoBB\(\Omega\)A, and w/oD stand for Post-Quantum, Worst-Case, (weakly pseudo-free) Family of Computational \(\Omega\)-Algebras, (weakly pseudo-free) Family of Black-Box \(\Omega\)-Algebras, and without Distributions, respectively. For brevity, we do not write an abbreviation for Weakly Pseudo-Free in the diagram. Weak pseudo-freeness of any type means weak pseudo-freeness of this type in \(\mathfrak{V}\) with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) (or \((D_{k}\,|\,k\in K)\) in the worst-case setting) and \(\sigma\). A horizontal double-line arrow from \(Y\) to \(Z\) labeled "if \(\forall\,k\,(\operatorname{supp}\mathcal{D}_{k}\subseteq D_{k})\)" means that if \(((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) is a family of type \(Y\) and \(\operatorname{supp}\mathcal{D}_{k}\subseteq D_{k}\) for all \(k\in K\), then \((H_{d}\,|\,d\in D)\) is a family of type \(Z\) (see Remark 3.22). Furthermore, a vertical double-line arrow from \(Y\) to \(Z\) means that the existence of a family of type \(Y\) implies the existence of a family of type \(Z\) (see Remark 3.23 and Proposition 3.25).
Lemma 2.1 implies that the subalgebra of \(F_{\infty}(\mathfrak{V})|_{\Psi}\) generated by \(a_{1},a_{2},\dots\) is a \(\mathfrak{V}|_{\Psi}\)-free \(\Psi\)-algebra freely generated by this system. So we choose this \(\Psi\)-algebra as \(F_{\infty}(\mathfrak{V}|_{\Psi})\). Similarly, we assume that for any \(m\in\mathbb{N}\), \(F_{m}(\mathfrak{V}|_{\Psi})\) is the subalgebra of \(F_{m}(\mathfrak{V})|_{\Psi}\) generated by \(a_{1},\dots,a_{m}\). In particular, \(F_{\infty}(\mathfrak{V}|_{\Psi})\subseteq F_{\infty}(\mathfrak{V})\) and \(F_{m}(\mathfrak{V}|_{\Psi})\subseteq F_{m}(\mathfrak{V})\) for all \(m\in\mathbb{N}\).
In the next proposition, we assume that an \(\Omega\)-algebra \(H_{d}\subseteq\{0,1\}^{*}\) is assigned to each \(d\in D\). When necessary, we denote by \(\mathcal{H}_{d}\) a probability distribution on the (necessarily nonempty) \(\Omega\)-algebra \(H_{d}\) for every \(d\in D\). Cf. Subsections 3.1 and 3.2.
**Proposition 3.26**.: _Suppose \(S\) is a subset of \(\operatorname{dom}\sigma\) such that \(\sigma(S)=F_{\infty}(\mathfrak{V}|_{\Psi})\). Let \(\mathtt{H}=((H_{d},\mathcal{H}_{d})\,|\,d\in D)\) (resp., \(\mathtt{H}=(H_{d}\,|\,d\in D)\)) and \(\mathtt{H}^{\prime}=((H_{d}|_{\Psi},\mathcal{H}_{d})\,|\,d\in D)\) (resp., \(\mathtt{H}^{\prime}=(H_{d}|_{\Psi}\,|\,d\in D)\)). Then the following statements hold:_
* _If_ \(\mathtt{H}\) _is a family of computational_ \(\Omega\)_-algebras in_ \(\mathfrak{V}\) _with (resp., without) distributions, then_ \(\mathtt{H}^{\prime}\) _is a family of computational_ \(\Psi\)_-algebras in_ \(\mathfrak{V}|_{\Psi}\) _with (resp., without) distributions._
* _If_ \(\mathtt{H}\) _is a family of black-box_ \(\Omega\)_-algebras in_ \(\mathfrak{V}\) _with (resp., without) distributions, then_ \(\mathtt{H}^{\prime}\) _is a family of black-box_ \(\Psi\)_-algebras in_ \(\mathfrak{V}|_{\Psi}\) _with (resp., without) distributions._
* _If_ \(\mathtt{H}\) _is a weakly pseudo-free or post-quantum weakly pseudo-free (resp., worst-case weakly pseudo-free or post-quantum worst-case weakly pseudo-free) family of computational or black-box_ \(\Omega\)_-algebras (with (resp., without) distributions) in_ \(\mathfrak{V}\) _with respect to_ \((\mathcal{D}_{k}\,|\,k\in K)\) _(resp.,_ \((D_{k}\,|\,k\in K)\)_) and_ \(\sigma\)_, then_ \(\mathtt{H}^{\prime}\) _satisfies the same weak pseudo-freeness condition in_ \(\mathfrak{V}|_{\Psi}\) _with respect to_ \((\mathcal{D}_{k}\,|\,k\in K)\) _(resp.,_ \((D_{k}\,|\,k\in K)\)_) and_ \(\sigma|_{S}\)_._
Proof.: Statements (i) and (ii) can be proved straightforwardly. Suppose \(H\in\mathfrak{V}\) and \(g\in H^{m}\), where \(m\in\mathbb{N}\setminus\{0\}\). Then it is easy to show that \(\Lambda(H|_{\Psi},\mathfrak{V}|_{\Psi},\sigma|_{S},g)\subseteq\Lambda(H, \mathfrak{V},\sigma,g)\). (Note that \(\Lambda(H|_{\Psi},\mathfrak{V}|_{\Psi},\sigma|_{S},g)=\Lambda(H,\mathfrak{V}, \sigma,g)\cap S^{2}\), but we do not need this fact.) Furthermore, a black-box \(\Psi\)-algebra algorithm can be considered as a black-box \(\Omega\)-algebra algorithm. The same holds for quantum algorithms. These observations imply statement (iii).
_Remark 3.27_.: In particular, Proposition 3.26 can be applied to the case where \(\sigma=\mathrm{SLP}\) and \(S\) is the set of all straight-line programs over \(\Psi\). For this set \(S\), we have \(\mathrm{SLP}|_{S}=\mathrm{SLP}_{\mathfrak{V}|_{\Psi}}\).
## 4 Some Polynomial-Time Black-Box Group Quantum Algorithms
In this section, we assume that \(\Omega\) is a set of group operation symbols and \(\mathfrak{V}\) is a variety of groups. We prove that if \(\mathfrak{V}\) is nontrivial, then there exists a polynomial-time black-box group quantum algorithm \(B\) such that for any black-box group \(G\in\mathfrak{V}\) and any \(g\in G^{m}\) with \(m>\log_{2}\!|G|\), we have \(\Pr[B^{G}(g)\in\Lambda(G,\mathfrak{V},\mathrm{SLP},g)]\geq\epsilon\), where \(\epsilon\) is a positive constant. See Lemmas 4.1 and 4.3 below.
### The Case Where \(\mathfrak{V}\) Has Infinite Exponent
Throughout this subsection, we assume that the variety \(\mathfrak{V}\) is of infinite exponent and that, given \(s\in\mathbb{N}\setminus\{0\}\), one can compute \([a_{1}^{s}]_{\sigma}\) in polynomial time. Of course, \(\mathrm{SLP}\) satisfies the latter assumption.
In the next lemma, we denote by \(A\) a polynomial-time black-box group quantum algorithm such that for any black-box group \(G\in\mathfrak{V}\) and any \(g\in G\),
\[\Pr[A^{G}(g)=s\in\mathbb{N}\setminus\{0\}\text{ s.t. }g^{s}=1]\geq\epsilon, \tag{3}\]
where \(\epsilon\) is a positive constant. Such an algorithm exists. For example, Shor's order-finding algorithm (see [20, Section 5], [10, Subsection 5.3.1], or [13, Subsections 13.4-13.6]) can be easily converted to a polynomial-time black-box group quantum algorithm that satisfies the required condition.
**Lemma 4.1**.: _There exists a polynomial-time black-box group quantum algorithm \(B\) such that for any black-box group \(G\in\mathfrak{V}\) and any \(g\in G\),_
\[\Pr[B^{G}(g)\in\Lambda(G,\mathfrak{V},\sigma,g)]\geq\epsilon,\]
_where \(\epsilon\) is the same positive constant as in (3)._
Proof.: Let \(B\) be a polynomial-time black-box group quantum algorithm such that for any black-box group \(G\in\mathfrak{V}\) and any \(g\in G\), \(B\) on input \(g\) with access to a quantum group oracle for \(G\) proceeds as follows:
1. Run \(A^{G}\) on input \(g\).
2. If the output is a positive integer \(s\) satisfying \(g^{s}=1\), then return \(([a_{1}^{s}]_{\sigma},[1]_{\sigma})\). Otherwise, the algorithm \(B\) fails.
Suppose \(G\) and \(g\) are as in the statement of the lemma. Then it is easy to see that \(B^{G}(g)\in\Lambda(G,\mathfrak{V},\sigma,g)\) if and only if \(A^{G}(g)=s\), where \(s\in\mathbb{N}\setminus\{0\}\) and \(g^{s}=1\). (Note that \(a_{1}^{s}\neq 1\) for all \(s\in\mathbb{N}\setminus\{0\}\) because \(\mathfrak{V}\) has infinite exponent.) Hence,
\[\Pr[B^{G}(g)\in\Lambda(G,\mathfrak{V},\sigma,g)]=\Pr[A^{G}(g)=s\in\mathbb{N} \setminus\{0\}\text{ s.t. }g^{s}=1]\geq\epsilon.\qed\]
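Classically, the reduction in this proof is a thin wrapper around the order-finding subroutine. The Python sketch below (ours) shows its structure; `find_multiple_of_order` stands in for the quantum algorithm \(A\) of (3) and is here a naive classical placeholder, not Shor's algorithm, and the returned strings are hypothetical stand-ins for \([a_{1}^{s}]_{\sigma}\) and \([1]_{\sigma}\).

```python
def find_multiple_of_order(mul, g, identity):
    """Placeholder for the quantum algorithm A of (3): return some s >= 1
    with g^s = 1.  (Computed here naively by repeated multiplication; the
    point of Lemma 4.1 is that a quantum algorithm achieves this in
    polynomial time with at least constant probability.)"""
    s, power = 1, g
    while power != identity:
        power, s = mul(power, g), s + 1
    return s

def algorithm_B(mul, g, identity):
    """Algorithm B of Lemma 4.1: output a representation of the nontrivial
    relation (a_1^s, 1); since the variety has infinite exponent, a_1^s != 1
    in the free algebra, while g^s = 1 in G."""
    s = find_multiple_of_order(mul, g, identity)
    return (f"a1^{s}", "1")

# Toy run in the additive group Z_6 (e.g., inside the variety of all
# abelian groups, which has infinite exponent):
print(algorithm_B(lambda x, y: (x + y) % 6, 4, 0))  # ('a1^3', '1')
```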
### The Case Where \(\mathfrak{V}\) Is Nontrivial and Is Not the Variety of All Groups
Throughout this subsection, we assume that \(\mathfrak{V}\) is nontrivial and is not the variety of all groups. Moreover, all results of this subsection depend on the CFSG unless \(\mathfrak{V}\) is solvable.
Let \(H\) be a finite group. As in [1] and [13], we denote by \(\nu(H)\) the smallest \(n\in\mathbb{N}\setminus\{0\}\) such that all nonabelian composition factors of \(H\) can be embedded in the symmetric group of degree \(n\). If all composition factors of \(H\) are abelian (i.e., \(H\) is solvable), then \(\nu(H)=1\).
The _constructive membership problem_ for subgroups of \(H\) is defined as follows: Given \(g_{1},\ldots,g_{m},h\in H\) (where \(m\in\mathbb{N}\)), either find a straight-line program computing \(h\) from \(g_{1},\ldots,g_{m}\) (if \(h\in\langle g_{1},\ldots,g_{m}\rangle\)) or report that no such straight-line program exists (if \(h\notin\langle g_{1},\ldots,g_{m}\rangle\)). Lemma 2.2 implies that if \(h\in\langle g_{1},\ldots,g_{m}\rangle\), then there exists such a straight-line program of length at most \((1+\log_{2}\lvert H\rvert)^{2}\).
_Remark 4.2_.: Ivanyos, Magniez, and Santha [13, Theorem 5] proved the existence of a black-box group quantum algorithm that solves the constructive membership problem for subgroups of any given black-box group \(G\) in time polynomial in the input length plus \(\nu(G)\) with success probability at least \(\epsilon\), where \(\epsilon\) is a constant satisfying \(1/2<\epsilon<1\). The proof of this uses deep algorithmic results of Beals and Babai [1, Theorem 1.2], which in turn depend on the CFSG. Note that in [13], straight-line programs for groups are defined slightly differently. However, this does not matter for us. The reason is that if \(g_{1},\ldots,g_{m}\) are elements of a group, where \(m\in\mathbb{N}\setminus\{0\}\), and \(h\in\langle g_{1},\ldots,g_{m}\rangle\), then a straight-line program computing \(h\) from \(g_{1},\ldots,g_{m}\) in the sense of [13] can be efficiently converted to a straight-line program computing \(h\) from \(g_{1},\ldots,g_{m}\) in our sense, and vice versa.
Furthermore, by a result of Jones [14] together with the CFSG, there are only finitely many (up to isomorphism) nonabelian finite simple groups in every variety of groups different from the variety of all groups. (This gives a negative answer to Problem 23 in [12].) Hence for any finite group \(H\in\mathfrak{V}\), \(\nu(H)\) is upper bounded by a constant because \(\mathfrak{V}\) is not the variety of all groups. Thus, the above-mentioned black-box group quantum algorithm for the constructive membership problem runs in polynomial time whenever the given black-box group is in \(\mathfrak{V}\).
Note that if \(\mathfrak{V}\) is solvable, then \(\nu(H)=1\) for every finite group \(H\in\mathfrak{V}\) and the above-mentioned algorithm of Ivanyos, Magniez, and Santha does not deal with nonabelian finite simple groups during a computation in a black-box group in \(\mathfrak{V}\). Therefore in this case, we do not need the CFSG for our purposes.
In the next lemma, we denote by \(A\) a polynomial-time black-box group quantum algorithm such that for any black-box group \(G\in\mathfrak{V}\), any \(g_{1},\ldots,g_{m}\in G\) (where \(m\in\mathbb{N}\)), and any \(h\in\langle g_{1},\ldots,g_{m}\rangle\),
\[\Pr[A^{G}(g_{1},\ldots,g_{m},h)\text{ is a straight-line program computing }h\text{ from }g_{1},\ldots,g_{m}]\geq\epsilon, \tag{4}\]
where \(\epsilon\) is a positive constant. By Remark 4.2, such an algorithm exists.
**Lemma 4.3**.: _There exists a polynomial-time black-box group quantum algorithm \(B\) such that for any black-box group \(G\in\mathfrak{V}\) and any \(g\in G^{m}\) with \(m>\log_{2}\lvert G\rvert\),_
\[\Pr[B^{G}(g)\in\Lambda(G,\mathfrak{V},\mathrm{SLP},g)]\geq\epsilon,\]
_where \(\epsilon\) is the same positive constant as in (4)._
Proof.: Let \(B\) be a polynomial-time black-box group quantum algorithm such that for any black-box group \(G\in\mathfrak{V}\) and any \(g=(g_{1},\ldots,g_{m})\), where \(m\in\mathbb{N}\) and \(g_{1},\ldots,g_{m}\in G\), \(B\) on input \(g\) with access to a quantum group oracle for \(G\) proceeds as follows:
1. For each \(i\in\{1,\ldots,m\}\) (in ascending order), run \(A^{G}\) on input \((g_{1},\ldots,g_{i})\). If the output is a straight-line program \(u\) that computes \(g_{i}\) from \(g_{1},\ldots,g_{i-1}\), then return \((u,(i))\) and stop. (The straight-line program \((i)\) computes the \(i\)th element of the input sequence.)
2. If this point is reached, then the algorithm \(B\) fails.
Suppose \(G\) and \(g=(g_{1},\ldots,g_{m})\) are as in the statement of the lemma (in particular, \(m>\log_{2}\lvert G\rvert\)). If \(g_{i}\notin\langle g_{1},\ldots,g_{i-1}\rangle\) for all \(i\in\{1,\ldots,m\}\), then an easy induction on \(i\) shows that \(\lvert\langle g_{1},\ldots,g_{i}\rangle\rvert\geq 2^{i}\) for every \(i\in\{0,\ldots,m\}\) (indeed, adjoining an element outside a subgroup at least doubles the order of that subgroup by Lagrange's theorem). In particular, \(2^{m}\leq\lvert\langle g_{1},\ldots,g_{m}\rangle\rvert\leq\lvert G\rvert\), which contradicts \(m>\log_{2}\lvert G\rvert\). Hence \(g_{j}\in\langle g_{1},\ldots,g_{j-1}\rangle\) for some \(j\in\{1,\ldots,m\}\). Choose the smallest such \(j\). Assume that \(A^{G}(g_{1},\ldots,g_{j})=u\), where \(u\) is a straight-line program computing \(g_{j}\) from \(g_{1},\ldots,g_{j-1}\); this holds with probability at least \(\epsilon\). Let \(v=\mathrm{SLP}(u)\), i.e., \(v\) is the element of \(F_{j-1}(\mathfrak{V})\) computed from \(a_{1},\ldots,a_{j-1}\) by \(u\). Then \(v=v(a)=v(a_{1},\ldots,a_{j-1})\neq a_{j}\) (because \(\mathfrak{V}\) is nontrivial) and \(v(g)=v(g_{1},\ldots,g_{j-1})=g_{j}\). This shows that \(B^{G}(g)=(u,(j))=([v]_{\mathrm{SLP}},[a_{j}]_{\mathrm{SLP}})\in\Lambda(G,\mathfrak{V},\mathrm{SLP},g)\) with probability at least \(\epsilon\).
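The structure of algorithm \(B\) in this proof is again simple; the Python sketch below (ours) scans the prefixes exactly as in step 1, with a brute-force `membership_slp` standing in for the constructive-membership subroutine \(A\) of (4) (the placeholder is of course not polynomial-time, and its exponent-vector output is a hypothetical stand-in for a straight-line program).

```python
from itertools import product

def membership_slp(gs, h, mul=lambda x, y: (x + y) % 8, identity=0, bound=8):
    """Brute-force placeholder for the subroutine A of (4) in the toy
    abelian group Z_8: search for exponents with g_1^{e_1}...g_m^{e_m} = h,
    returning the exponent vector (standing in for a straight-line program)
    or None if h is not in <g_1, ..., g_m>."""
    for exps in product(range(bound), repeat=len(gs)):
        val = identity
        for e, gj in zip(exps, gs):
            for _ in range(e):
                val = mul(val, gj)
        if val == h:
            return exps
    return None

def algorithm_B(g):
    """Algorithm B of Lemma 4.3: find the first i with
    g_i in <g_1, ..., g_{i-1}> and return a representation of the resulting
    nontrivial relation (v, a_i); here ('a', i) stands in for the one-entry
    program (i)."""
    for i in range(1, len(g) + 1):
        u = membership_slp(g[:i - 1], g[i - 1])
        if u is not None:
            return (u, ("a", i))
    return None  # unreachable once len(g) > log2(|G|)

# In Z_8, any m = 4 > log2(8) elements must yield such an i:
print(algorithm_B([2, 3, 5, 7]))  # ((0, 7), ('a', 3)): 7*3 = 21 = 5 (mod 8)
```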
## 5 Main Result
**Theorem 5.1**.: _Assume that \(\mathfrak{V}\) is a nontrivial variety of expanded groups. Then there are no families of any of the following types:_
1. _post-quantum weakly pseudo-free families of computational_ \(\Omega\)_-algebras in_ \(\mathfrak{V}\) _with respect to_ \((\mathcal{D}_{k}\,|\,k\in K)\) _and_ \(\mathrm{SLP}\)_,_
2. _post-quantum worst-case weakly pseudo-free families of computational_ \(\Omega\)_-algebras (without distributions) in_ \(\mathfrak{V}\) _with respect to_ \((D_{k}\,|\,k\in K)\) _and_ \(\mathrm{SLP}\)_,_
3. _post-quantum weakly pseudo-free families of black-box_ \(\Omega\)_-algebras in_ \(\mathfrak{V}\) _with respect to_ \((\mathcal{D}_{k}\,|\,k\in K)\) _and_ \(\mathrm{SLP}\)_,_
4. _post-quantum worst-case weakly pseudo-free families of black-box_ \(\Omega\)_-algebras (without distributions) in_ \(\mathfrak{V}\) _with respect to_ \((D_{k}\,|\,k\in K)\) _and_ \(\mathrm{SLP}\)_._
Proof.: Choose a set \(\Gamma\subseteq\Omega\) of group operation symbols such that \(\mathfrak{V}|_{\Gamma}\) is a variety of groups. Remark 3.22 and Proposition 3.25 (see also Figure 1) show that it is sufficient to prove the nonexistence of families of type (iv) (for families of types (i) and (iii), we put \(D_{k}=\operatorname{supp}\mathcal{D}_{k}\) for all \(k\in K\)). Furthermore, Proposition 3.26 and Remark 3.27 imply that for this it suffices to prove the nonexistence of post-quantum worst-case weakly pseudo-free families of black-box groups (without distributions) in \(\mathfrak{V}|_{\Gamma}\) with respect to \((D_{k}\,|\,k\in K)\) and \(\mathrm{SLP}_{\mathfrak{V}|_{\Gamma}}\).
Suppose \((G_{d}\,|\,d\in D)\) is a family of black-box groups in \(\mathfrak{V}|_{\Gamma}\) without distributions, where \(G_{d}\subseteq\{0,1\}^{\xi(d)}\) for each \(d\in D\) (\(\xi\colon D\to\mathbb{N}\)). Let \(B\) be a polynomial-time black-box group quantum algorithm from either Lemma 4.1 if the exponent of \(\mathfrak{V}|_{\Gamma}\) is infinite or Lemma 4.3 otherwise. Also, suppose \(\epsilon\) is the positive constant from that lemma. If the exponent of \(\mathfrak{V}|_{\Gamma}\) is infinite, then let \(\pi\) be the constant polynomial \(n\mapsto 1\) (\(n\in\mathbb{N}\)). Otherwise, choose a polynomial \(\pi\) such that \(\xi(d)<\pi(k)\) for all \(k\in K\) and \(d\in D_{k}\). Such a polynomial exists because there are polynomials \(\eta\) and \(\theta\) satisfying \(\xi(d)\leq\eta(\lvert d\rvert)\) for all \(d\in D\) and \(\lvert d\rvert\leq\theta(k)\) for all \(k\in K\) and \(d\in D_{k}\).
Let \(C\) be a polynomial-time black-box group quantum algorithm such that for any \(k\in K\), \(d\in D_{k}\), and \(g\in G_{d}^{\pi(k)}\), \(C\) on input \((1^{k},d,g)\) with access to a quantum group oracle for \(G_{d}\) runs \(B^{G_{d}}\) on input \(g\) and returns the output (if it exists). Suppose \(k\), \(d\), and \(g\) are as in the previous sentence. Note that if
the exponent of \(\mathfrak{V}|_{\Gamma}\) is finite, then \(|G_{d}|\leq 2^{\xi(d)}\) and hence \(\pi(k)>\xi(d)\geq\log_{2}|G_{d}|\). By either Lemma 4.1 (if the exponent of \(\mathfrak{V}|_{\Gamma}\) is infinite) or Lemma 4.3 (otherwise), we have
\[\Pr[C^{G_{d}}(1^{k},d,g)\in\Lambda(G_{d},\mathfrak{V}|_{\Gamma},\mathrm{SLP}_{ \mathfrak{V}|_{\Gamma}},g)]\geq\epsilon.\]
Therefore,
\[\min_{d\in D_{k},\,g\in G_{d}^{\pi(k)}}\Pr[C^{G_{d}}(1^{k},d,g)\in\Lambda(G_{d},\mathfrak{V}|_{\Gamma},\mathrm{SLP}_{\mathfrak{V}|_{\Gamma}},g)]\geq\epsilon\]
for all \(k\in K\). This shows that \((G_{d}\,|\,d\in D)\) is not post-quantum worst-case weakly pseudo-free in \(\mathfrak{V}|_{\Gamma}\) with respect to \((D_{k}\,|\,k\in K)\) and \(\mathrm{SLP}_{\mathfrak{V}|_{\Gamma}}\). Thus, there are no post-quantum worst-case weakly pseudo-free families of black-box groups (without distributions) in \(\mathfrak{V}|_{\Gamma}\) with respect to \((D_{k}\,|\,k\in K)\) and \(\mathrm{SLP}_{\mathfrak{V}|_{\Gamma}}\).
Note that if the set \(\Gamma\) in the proof of Theorem 5.1 cannot be chosen so that \(\mathfrak{V}|_{\Gamma}\) has infinite exponent or is solvable, then the statement of this theorem depends on the CFSG.
_Remark 5.2_.: In particular, Theorem 5.1 can be applied to nontrivial varieties of groups, rings, modules and algebras over a finitely generated commutative associative ring with \(1\), near-rings, and, more generally, groups with finitely many multiple operators. See [10] or [14, Chapter II, Section 2] for the definition of a group with multiple operators (also known as a multi-operator group).
## 6 Conclusion
We have shown that in any nontrivial variety of expanded groups, there are no post-quantum weakly pseudo-free families with respect to \((\mathcal{D}_{k}\,|\,k\in K)\) (or \((D_{k}\,|\,k\in K)\) in the worst-case setting) and SLP, even in the worst-case setting and/or the black-box model. In our opinion, this is an additional motivation for studying (weak) pseudo-freeness in varieties of \(\Omega\)-algebras that in general are not expanded groups. In particular, it would be interesting to explore (weakly) pseudo-free families of semigroups, monoids, equasigroups, eloops, and lattices. The terms "equasigroup" and "eloop" mean "equationally definable quasigroup" and "equationally definable loop," respectively; see [14, Chapter IV, Section 1] for definitions of these terms.
Here are some open questions for future research:
* Does the statement of Theorem 5.1 hold if \(\mathfrak{V}\) is a nontrivial variety of semigroups, monoids, equasigroups, eloops, or lattices?
* Does Theorem 5.1 remain valid if the families of computational and black-box \(\Omega\)-algebras are polynomially bounded but do not necessarily have unique representations of elements?
* Do the types of weak pseudo-freeness defined in Section 3 (and/or the respective types of pseudo-freeness) have interesting properties?
* Does there exist an exponential-size post-quantum (weakly) pseudo-free family of computational groups in some nontrivial variety \(\mathfrak{V}\) of groups under a standard cryptographic assumption? We do not require this family to be polynomially bounded or to have unique representations of elements. Also, one may use any natural representation of elements of the \(\mathfrak{V}\)-free group by bit strings.
* Theorem 4.2 in [1] states that under the general integer factoring intractability assumption, a certain family (say, \(\mathfrak{G}\)) of computational groups is pseudo-free in the variety of all groups with respect to a certain probability ensemble \((\mathcal{E}_{k}\,|\,k\in K)\) and a certain function \(\beta\). The family \(\mathfrak{G}\) has exponential size, but is not polynomially bounded and does not have unique representations of elements. Is \(\mathfrak{G}\) post-quantum weakly pseudo-free in the variety of all groups with respect to \((\mathcal{E}_{k}\,|\,k\in K)\) and \(\beta\)? The conjectured answer is no.
|
2305.05553 | Active thermodynamic force driven mitochondrial alignment | Mitochondria are critical organelles in eukaryotes that produce the energy
currency ATP. In nerve axons, mitochondria are known to align at almost regular
intervals to maintain a constant ATP concentration, but little is known about
the mechanism. In this letter, we show theoretically that ATP production and
ATP-dependent non-directional movement of mitochondria are sufficient for
alignment, even in the absence of an explicit repulsive force between them.
This is similar to thermodynamic forces driven by thermal fluctuations, even
generated by non-equilibrium processes, and demonstrates the diversity of
mechanisms governing the motion of biological matter. | Masashi K. Kajita, Yoshiyuki Konishi, Tetsuhiro S. Hatakeyama | 2023-05-09T15:44:11Z | http://arxiv.org/abs/2305.05553v1 | # Active thermodynamic force driven mitochondrial alignment
###### Abstract
Mitochondria are critical organelles in eukaryotes that produce the energy currency ATP. In nerve axons, mitochondria are known to align at almost regular intervals to maintain a constant ATP concentration, but little is known about the mechanism. In this letter, we show theoretically that ATP production and ATP-dependent non-directional movement of mitochondria are sufficient for alignment, even in the absence of an explicit repulsive force between them. This is similar to thermodynamic forces driven by thermal fluctuations, even generated by non-equilibrium processes, and demonstrates the diversity of mechanisms governing the motion of biological matter.
Understanding how the position of organelles is regulated in eukaryotic cells will be important both biologically and physically. In particular, studying the positioning of mitochondria will be necessary when considering the energetics of the cell [1; 2; 3; 4]. The mitochondrion is a fundamental organelle in most eukaryotic cells that produces adenosine triphosphate (ATP) [5; 6]. ATP is hydrolyzed and used as an energy source for many processes in the cell, such as the synthesis of biomolecules, signal transduction, and the motility of molecular motors, and then it is essential to transport ATP to its precise location. Since ATP is synthesized by the mitochondria and diffuses, ATP would be concentrated around the mitochondria and its concentration would decrease as the distance from the mitochondria increases [4]. Mitochondrial positioning is then essential for the proper transport of ATP to its precise location in the cell, but the physical mechanism for this is still unknown.
The importance of mitochondrial positioning may become more critical as the size of the cell increases. In particular, the nerve axons of neurons in animals are quite long, reaching lengths of centimeters in rodents and meters in large mammals [7], while the size of the cell bodies of neurons is typically between a few and several tens of micrometers in diameter [8]. Thus, the positioning of mitochondria within a nerve axon may be essential for the distribution of ATP throughout the axon. Indeed, it has been reported that mitochondria are aligned at nearly equal intervals within a micrometer-to-centimeter-length nerve axon [2; 4]. Mitochondrial alignment requires that mitochondria move away from each other. As for the movement itself, cell biological observations have shown that mitochondria in nerve axons move by axonal transport of kinesin and dynein on microtubules [9; 3; 10; 11]. However, little is known about how the repulsive movement occurs.
In this letter, we show that, contrary to intuition, direct repulsion between mitochondria is not necessary, and that mitochondrial alignment can arise only from mitochondrial ATP production and ATP concentration-dependent fluctuations in movement. This may seem strange at first, but as mitochondria approach each other, the increase in local ATP concentration leads to an increase in motion fluctuations and effective repulsion between mitochondria, and then the mitochondria are aligned in a steady state. This mechanism of generating an effective unidirectional force is very similar to the mechanism of the Soret effect [12; 13; 14], diffusion phoresis [15; 16; 17], or chemophoresis [18; 19; 20], where a force is generated that moves particles according to a gradient of temperature, diffusion constant, or adsorptive substance, respectively. This effective unidirectional force, generated only by non-directional fluctuations in motion, is called the thermodynamic force. Our study shows that even when mitochondria are driven by a non-directional non-equilibrium force, an effective unidirectional force can be generated by a mechanism similar to the thermodynamic force, and then mitochondrial alignment is achieved.
Here we consider the movement of mitochondria along one-dimensional microtubules in a nerve axon (Fig. 1). Many molecular motors, i.e., dynein and kinesin, move along the microtubules by using chemical energy from ATP [11; 21]. Although dynein and kinesin move in opposite directions [22] and have different properties, we assume that they have the same properties for simplicity. Since mitochondria are sufficiently large, the thermal noise of their motion is negligible. Instead, mitochondria stochastically attach to and detach from molecular motors moving forward or backward, and if we observe their motion on a slower timescale than that of attachment and detachment, mitochondria appear to exhibit a random walk. Furthermore, when a mitochondrion detaches from a molecular motor, it does not move, and we cannot observe any inertia of the movement. We then model the mitochondrial motion as a random walk using the overdamped Langevin equation [23].
Molecular motors can only move if they are attached to an ATP molecule [11; 21], and then the probability of movement increases with the concentration of ATP. Just as ambient temperature determines the intensity of Brownian motion, ATP concentration determines the intensity of mitochondrial motion. We consider the case where there is no explicit force to align the mitochondria. Therefore, the Langevin equation for the position of the \(i\)th mitochondrion is given by
\[\frac{dx_{i}}{dt}=f\left(a(x_{i})\right)\eta_{i}(t), \tag{1}\]

\[\langle\eta_{i}(t)\rangle=0,\qquad\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=2\delta_{i,j}\delta(t-t^{\prime}),\]
where \(a(x_{i})\) is the ATP concentration at \(x_{i}\), and \(f(a)\) is an increasing function of \(a\), since the probability of moving increases with the ATP concentration.
Here we consider the concentration of ATP around the mitochondria. It is natural to assume that the diffusion and consumption of ATP are much faster than the movement of the mitochondria, because ATP is a small molecule and is consumed by a vast number of molecular processes. We therefore assume that the ATP concentration immediately relaxes to its steady-state value following the mitochondrial movement. ATP is produced by mitochondria, consumed as an energy source, and diffuses. We assume that the molecules consuming ATP are uniformly distributed in space, so that the consumption of ATP is spatially uniform. Thus, if the consumption of ATP is linearly proportional to its concentration by mass action, the equation for the ATP concentration \(a\) produced by a mitochondrion located at \(x=x_{i}\) is
\[\frac{\partial a(x,t)}{\partial t}=p_{a}\delta(x-x_{i})+D_{a}\frac{\partial^{ 2}a}{\partial x^{2}}-d_{a}a, \tag{2}\]
where \(p_{a}\), \(D_{a}\), \(d_{a}\) are the production rate, diffusion constant, and consumption rate of ATP, respectively. When the ATP concentration at \(x=\infty\) and \(-\infty\) is 0, the steady state ATP concentration produced by one mitochondrion is
\[a_{i}^{*}=a_{0}e^{-\sqrt{\frac{d_{a}}{D_{a}}}|x-x_{i}|}, \tag{3}\]
where \(a_{0}\) is given by \(a_{0}=p_{a}/d_{a}\). This is consistent with the previous experimental observation (see Fig. 1C and [4]). Since Eq. 2 is linear, the concentration of ATP produced by different mitochondria can be linearly superposed, and the ATP concentration is given by
\[a^{*}=a_{0}\sum_{i=1}^{N}e^{-\sqrt{\frac{d_{a}}{D_{a}}}|x-x_{i}|}, \tag{4}\]
where \(N\) is the number of mitochondria.
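For concreteness, the steady-state field of Eq. (4) is straightforward to evaluate numerically. The following minimal Python sketch is our own illustration (function and variable names are not from the paper); it superposes one exponential kernel per mitochondrion, with \(a_{0}=0.3\) and \(\sqrt{d_{a}/D_{a}}=3\) matching the parameter values of Fig. 2.

```python
import numpy as np

def atp_field(x, positions, a0=0.3, kappa=3.0):
    """Steady-state ATP concentration of Eq. (4).

    x         : points where the field is evaluated
    positions : mitochondrial positions x_i
    a0        : p_a / d_a, peak concentration from a single mitochondrion
    kappa     : sqrt(d_a / D_a), inverse decay length of the ATP profile
    """
    # Eq. (2) is linear, so the exponential kernels can simply be summed.
    return a0 * np.exp(-kappa * np.abs(x[:, None] - positions[None, :])).sum(axis=1)

# Example: the field produced by two mitochondria at x = -1 and x = +1
x = np.linspace(-5.0, 5.0, 201)
a = atp_field(x, np.array([-1.0, 1.0]))
```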
First, we show that there is an effective repulsive force between mitochondria. For simplicity, we consider the interaction between two mitochondria, one of which is fixed at \(x=0\). If the position of the freely moving mitochondrion is \(x=x_{1}\), the ATP concentration at this location is given as the sum of the ATP produced by the fixed and moving mitochondria as \(a(x_{1})=a_{0}+a_{0}\exp\left(-\sqrt{\frac{d_{a}}{D_{a}}}\left|x_{1}\right|\right)\). Then, the dynamics of the freely moving mitochondrion is given by
\[\frac{dx_{1}}{dt}=f\left(a_{0}\left(1+e^{-\sqrt{\frac{d_{a}}{D_{a}}}|x_{1}|} \right)\right)\eta_{1}(t). \tag{5}\]
From the above equation, interpreting Eq. (5) as a Stratonovich stochastic differential equation, we obtain the corresponding Fokker-Planck equation

\[\frac{dP(x_{1},t)}{dt}=\frac{\partial}{\partial x_{1}}\left[f\left(a(x_{1})\right)\frac{\partial}{\partial x_{1}}\left\{f\left(a(x_{1})\right)P(x_{1},t)\right\}\right] \tag{6}\]

\[=-\frac{df(a)}{da}\frac{\partial}{\partial x_{1}}\left\{\frac{da(x_{1})}{dx_{1}}f\left(a(x_{1})\right)P(x_{1},t)\right\}+\frac{\partial^{2}}{\partial x_{1}^{2}}\left\{f\left(a(x_{1})\right)P(x_{1},t)\right\}\]

\[=\pm a_{0}\sqrt{\frac{d_{a}}{D_{a}}}\frac{df(a)}{da}\frac{\partial}{\partial x_{1}}\left\{e^{-\sqrt{\frac{d_{a}}{D_{a}}}|x_{1}|}f\left(a(x_{1})\right)P(x_{1},t)\right\}+\frac{\partial^{2}}{\partial x_{1}^{2}}\left\{f\left(a(x_{1})\right)P(x_{1},t)\right\},\]

Figure 1: (A) Schematic representation of the model. We consider one-dimensional dynamics of mitochondria and ATP concentration. Mitochondria produce ATP, and then a gradient of ATP concentration forms around the mitochondria. Molecular motors, i.e., dynein and kinesin, attach to mitochondria and move on microtubules depending on the ATP concentration. We do not explicitly include the molecular motors in the model and instead represent the mitochondrial movements as a function of ATP concentration. (B) Microscopic images of axons showing mitochondria and tubulin stained with MitoTracker Red CM-H2XRos (red) and anti-tubulin antibody (green), respectively. Scale bars represent 10 \(\mu\)m. (C) Relative ATP:ADP signal ratio at different distances from a mitochondrion along the axon. The red line and red band represent the mean and 95% confidence interval, respectively. This figure was adopted from [4].
where the sign of the first term is positive for \(x_{1}>0\) and negative for \(x_{1}<0\). The first and second terms in the above equation represent an anisotropic flow and an isotropic diffusion, respectively. Although no explicit force acts on the mitochondrion in Eq. (1), an effective force arises between mitochondria due to the ATP concentration gradient, similar to a thermodynamic force. Here, \(f(a)\) is an increasing function of \(a\), so \(\frac{df(a)}{da}\) is positive. Hence, the first term acts to increase the distance between the two mitochondria, i.e., the active thermodynamic force between two mitochondria acts as a repulsive force.
We confirmed that this repulsive force can drive the alignment of mitochondria. As shown in Fig. 2, initially all mitochondria were randomly distributed. When two or more mitochondria came close, the local ATP concentration around each of them increased, and their positions fluctuated intensely. These mitochondria then moved away from each other with high probability, and once they separated and became isolated, the ATP concentration around each mitochondrion decreased (see also the Supplementary Movie [24]). The fluctuation of the mitochondrial positions therefore also decreased, and the mitochondria were kept apart for a long time. That is, the mitochondria moved away from each other autonomously and became almost equally spaced without any explicit repulsive force between them.
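A minimal simulation sketch of Eqs. (1) and (4) is given below, assuming an Euler-Heun integrator for the Stratonovich interpretation used in Eq. (6) and the choice \(f(a)=a^{3}\); the time step and step count are our own illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def heun_step(x, dt, L, a0=0.3, kappa=3.0, order=3):
    """One Euler-Heun (Stratonovich) step of Eq. (1) with f(a) = a**order."""
    def g(x):
        # ATP field of Eq. (4) at each mitochondrion, on a ring of length L
        d = np.abs(x[:, None] - x[None, :])
        d = np.minimum(d, L - d)                  # periodic distance
        a = a0 * np.exp(-kappa * d).sum(axis=1)   # includes own production
        return a ** order                         # f(a), increasing in a
    dW = np.sqrt(2.0 * dt) * rng.standard_normal(x.size)  # <eta eta'> = 2 delta
    x_pred = np.mod(x + g(x) * dW, L)             # predictor
    return np.mod(x + 0.5 * (g(x) + g(x_pred)) * dW, L)  # corrector

# N = 10 mitochondria on a ring of length L = 10, as in Fig. 2
L, N, dt = 10.0, 10, 1e-4
x = rng.uniform(0.0, L, N)
for _ in range(100_000):
    x = heun_step(x, dt, L)
```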
Statistical properties also showed that mitochondria move away from each other without explicit repulsive forces. When there is no interaction between mitochondria, their positions are completely random. The distribution of the distance between two neighboring mitochondria in steady state then follows an exponential distribution: in the limit of infinite system size, with the ratio \(N/L\) fixed, where \(L\) is the system size, the occurrence of mitochondria is described by a Poisson process, in which on average \(Nx/L\) mitochondria appear within any fixed distance \(x\). Thus, the probability distribution of the distance between two adjacent mitochondria at steady state is given by \(N\exp(-Nx/L)/L\), which decreases monotonically with \(x\). In fact, if \(f(a)\) is given as a constant independent of \(a\), i.e., each mitochondrion exhibits a random walk independent of the ATP concentration, the distribution of the distance between two adjacent mitochondria was well fitted by the exponential distribution (see gray and black dashed lines in Fig. 3), and no peak appears at nonzero distances.

Figure 2: Time evolution of mitochondrial positions. (A) Each line corresponds to the time evolution of the position of a different mitochondrion. At \(t=0\), the position of each mitochondrion was randomly distributed. (B) Snapshots of mitochondrial positions and ATP concentration. The black circles are the mitochondrial positions, and the red area is the ATP concentration. We set \(N\) to 10, \(L\) to 10, \(a_{0}\) to 0.3, and \(d_{a}/D_{a}\) to 9.0. We numerically solved the dynamics under the periodic boundary condition. See also the Supplementary Movie [24].

Figure 3: Probability distribution of the distance between two adjacent mitochondria at steady state. The gray, magenta, cyan, and green solid lines are for cases where \(f(a)\) is given as constant, \(a\), \(a^{3}\), \(a^{5}\), respectively. The black dashed line is the analytically derived distribution without interaction between mitochondria. To obtain the distribution, we set \(N\) to 100 and \(L\) to 100, the same ratio \(N/L\) as in Fig. 2. Other parameters are the same as in Fig. 2. Inset: The distribution is plotted with a logarithmic scale.
In contrast, when \(f(a)\) was an increasing function of \(a\), i.e., when the intensity of the fluctuation of the mitochondrial position depended on the local ATP concentration, the distribution of the distance between two adjacent mitochondria at steady state was no longer fitted by an exponential function, but showed a peak at a nonzero distance. Although the peak already appeared when \(f(a)\) was a linear function of \(a\), it was less obvious because its position was close to the origin and its height was not as high (see magenta line in Fig. 3). The peak became more pronounced when \(f(a)\) was a higher-order function of \(a\) with the same parameter set (see cyan and green lines in Fig. 3). This is because the higher the order of \(f(a)\) in \(a\), the greater the difference in the magnitude of the fluctuations between a mitochondrion existing alone and one in the vicinity of other mitochondria. Thus, the thermodynamic repulsive force between mitochondria also becomes greater as the order of \(f(a)\) increases; indeed, the repulsive force between two mitochondria is greater if \(f(a)\) is of higher order in \(a\), as can be seen from Eq. (6). In any case, if the magnitude of the fluctuations in mitochondrial position depends positively on the ATP concentration, the mitochondria align at steady state even without an explicit repulsive force between them.
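The gap statistics underlying Fig. 3 can be extracted from such a simulation as sketched below; the helper names are ours, and the reference curve is the non-interacting exponential distribution \(N\exp(-Nx/L)/L\) derived above.

```python
import numpy as np

def neighbor_gaps(x, L):
    """Distances between adjacent mitochondria on a ring of length L."""
    xs = np.sort(np.mod(x, L))
    return np.diff(xs, append=xs[0] + L)  # the last gap wraps around the ring

def exponential_reference(x, N, L):
    """Non-interacting prediction p(x) = (N/L) exp(-N x / L)."""
    return (N / L) * np.exp(-N * x / L)
```

Histogramming `neighbor_gaps` over many steady-state snapshots and comparing with `exponential_reference` reproduces the qualitative contrast between the constant-\(f\) and increasing-\(f\) cases.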
Here we show that non-directional fluctuations in mitochondrial movement and mitochondrial ATP production are sufficient to align mitochondria along a nerve axon. This suggests that mitochondrial function itself is linked to patterning, and that no special function such as signaling between mitochondria is required for equispaced patterning. In addition, it has been observed experimentally that the movement of mitochondrial position in nerve axons is initially strong, but as it gradually approaches the equispaced pattern, the movement becomes weaker [4], as we observed in Fig. 2. For this gradual fixation, the nonlinear dependence of the fluctuation of the mitochondrial movement on the ATP concentration will play an important role: the stronger the nonlinearity of the dependence of the fluctuations on the ATP concentration, the greater the relative difference in the fluctuations when the mitochondria are close together and when they are far apart, and the less likely they are to move once aligned. Since multiple molecular motors are known to act in concert for cargo transport [22; 25; 26; 27], the ATP concentration dependence of mitochondrial fluctuations would inevitably be nonlinear. In the future, such nonlinearity will be validated both experimentally and theoretically using more microscopic models. Note that even after alignment, the fluctuations continue, albeit weakly. Therefore, in the real system, after the initial alignment by the mechanism we propose here, there may be a mechanism to further fix the position of the mitochondria. Indeed, some mechanisms have been proposed to arrest mitochondria after positioning [3; 28; 29].
Mitochondrial alignment is thought to be physiologically important for maintaining a uniform ATP concentration in cells [4]. The mechanism proposed here shows that mitochondria move to autonomously resolve deviations in ATP concentration without any special mechanism. Moreover, it has long been known that mitochondria move in the direction of lower ATP concentrations in a nerve axon [1]. Our results are in good agreement with this observation. Although we consider a one-dimensional system here because the nerve axon is pseudo-one-dimensional, the mechanism presented will also work for higher-dimensional systems. Therefore, the proposed mechanism can be used to achieve a uniform intracellular ATP concentration in different cells, even beyond the nerve axon. Furthermore, a similar mechanism may work for the uniform distribution of organelles other than mitochondria and molecules. This study will provide an important basis for future discussions of the distribution of substances within cells.
In this letter, we have shown that non-directional motion due to non-equilibrium processes, not thermal noise, can generate unidirectional motion of biological matter. This is more like thermodynamic forces driven by thermal fluctuations than the motion of active matter, and we term it the active thermodynamic force. In fact, Eq. (1) contains neither interaction terms between mitochondria nor ATP gradient-dependent terms, unlike many equations describing active matter [30; 31]. This fundamental equation contains only an ATP concentration-dependent fluctuation, like a Brownian particle in a thermal gradient. Although the equations governing the motion are so simple, by combining them with non-equilibrium reactions that change the chemical field, we found that interactions between biological matter can arise through the active thermodynamic force. This result reminds us of the diversity of mechanisms governing the motion of biological matter. Our study will be a pioneering step in understanding the motion of biological matter driven by active thermodynamic forces arising from non-equilibrium processes, and further experimental and theoretical studies are expected to reveal that a variety of processes are driven by similar mechanisms.
###### Acknowledgements.
We would like to thank Kunihiko Kaneko and Shuji Ishihara for fruitful discussions. This work was partially supported by the JSPS KAKENHI under Grant Numbers 20K06889, 21K15048, and 21K17851.
|
2307.13548 | Node Injection Link Stealing Attack | In this paper, we present a stealthy and effective attack that exposes
privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private
links within graph-structured data. Focusing on the inductive setting where new
nodes join the graph and an API is used to query predictions, we investigate
the potential leakage of private edge information. We also propose methods to
preserve privacy while maintaining model utility. Our attack demonstrates
superior performance in inferring the links compared to the state of the art.
Furthermore, we examine the application of differential privacy (DP) mechanisms
to mitigate the impact of our proposed attack, we analyze the trade-off between
privacy preservation and model utility. Our work highlights the privacy
vulnerabilities inherent in GNNs, underscoring the importance of developing
robust privacy-preserving mechanisms for their application. | Oualid Zari, Javier Parra-Arnau, Ayşe Ünsal, Melek Önen | 2023-07-25T14:51:01Z | http://arxiv.org/abs/2307.13548v1 | # Node Injection Link Stealing Attack
###### Abstract.
In this paper, we present a stealthy and effective attack that exposes privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private links within graph-structured data. Focusing on the inductive setting where new nodes join the graph and an API is used to query predictions, we investigate the potential leakage of private edge information. We also propose methods to preserve privacy while maintaining model utility. Our attack demonstrates superior performance in inferring the links compared to the state of the art. Furthermore, we examine the application of differential privacy (DP) mechanisms to mitigate the impact of our proposed attack, and we analyze the trade-off between privacy preservation and model utility. Our work highlights the privacy vulnerabilities inherent in GNNs, underscoring the importance of developing robust privacy-preserving mechanisms for their application.
graph neural networks, privacy attacks, link inference, link stealing, differential privacy
### Graph neural networks
#### 2.1.1. GNNs Overview
GNNs (GNNs, 2017) have emerged as a powerful class of machine learning models specifically designed to handle graph-structured data. They have gained considerable attention due to their ability to effectively learn and capture complex patterns in graph data, showing significant performance across a wide range of tasks, such as node classification (Krizhevsky et al., 2014; Szegedy et al., 2015), link prediction (Zhu et al., 2015), and graph classification (Zhu et al., 2016; Zhu et al., 2017). In this paper, we specifically focus on the task of node classification, where the objective is to assign labels to individual nodes based on their features and on the overall graph structure.
More specifically, a graph \(G=(V,E)\) is defined as a collection of nodes \(V\) and edges \(E\). Nodes represent data points such as users in social networks or proteins in biological networks, while edges represent relationships or interactions between the nodes. Graphs can be represented using an adjacency matrix \(A\in\mathbb{R}^{n\times n}\), where \(n=|V|\) is the number of nodes in the graph, and \(A_{ij}=1\) if there exists an edge between nodes \(i\) and \(j\), and \(A_{ij}=0\) otherwise. Additionally, nodes exhibit a set of features, which can be represented as vectors containing \(d\) elements, where \(d\) corresponds to the number of features. In social networks, these features may include demographic information such as age, gender, and location, as well as user interests and preferences. To capture these features, the graph is also associated with a feature matrix \(X\in\mathbb{R}^{n\times d}\). This matrix provides essential information about the characteristics of each node in the graph.
GNNs primarily operate by employing a message-passing mechanism (GNNs, 2017) that allows nodes to exchange and aggregate information from their local neighborhoods. This iterative process helps GNNs capture local and global structural information in the graph. For instance, graph convolutional networks (GCNs) (GCNs, 2017), the most representative and well-established GNN models, have a core architecture consisting of a series of graph convolutional layers, which can be formulated as follows:
\[H^{(0)}=X,\quad H^{(l+1)}=\sigma\left(\hat{A}H^{(l)}W^{(l)}\right),\quad H^{(L)}=P \tag{1}\]
Here, \(H^{(0)}\) denotes the node feature matrix \(X\); \(H^{(l)}\in\mathbb{R}^{n\times d_{l}}\) is the hidden node representation matrix at layer \(l\), where \(L\) is the total number of layers; and \(P\in\mathbb{R}^{n\times c}\) represents the prediction scores for each potential class or label associated with the queried nodes, where \(c\) represents the number of classes. \(W^{(l)}\in\mathbb{R}^{d_{l}\times d_{l+1}}\) is the learnable weight matrix for layer \(l\); \(\sigma(\cdot)\) is an activation function (e.g., ReLU); and \(\hat{A}\) is a normalized adjacency matrix.
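As a concrete illustration of Eq. (1), a minimal NumPy sketch of a GCN forward pass is given below. The symmetric normalization \(\hat{A}=\tilde{D}^{-1/2}(A+I)\tilde{D}^{-1/2}\) used here is one common choice and an assumption of this sketch, as is the softmax on the output layer.

```python
import numpy as np

def gcn_forward(A, X, weights):
    """Minimal forward pass of the GCN in Eq. (1).

    A       : (n, n) binary adjacency matrix
    X       : (n, d) node feature matrix
    weights : list of weight matrices W^(l), one per layer
    """
    # Symmetric normalization with self-loops: A_hat = D^{-1/2}(A + I)D^{-1/2}
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    H = X
    for l, W in enumerate(weights):
        H = A_hat @ H @ W                       # propagate and transform
        if l < len(weights) - 1:
            H = np.maximum(H, 0.0)              # ReLU on hidden layers
    # Row-wise softmax turns the last layer's output into prediction scores P
    e = np.exp(H - H.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```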
#### 2.1.2. GNNs with dynamic graphs
GNNs often have to handle dynamic graph data, as in real-life scenarios such as social network applications or recommendation systems, where graphs evolve over time. New nodes or edges may be introduced over time, and the goal is then to make predictions for such new nodes.
When a new node is added to the graph, both the adjacency matrix \(A\in\mathbb{R}^{n\times n}\), and the feature matrix \(X\in\mathbb{R}^{n\times d}\) are updated. The adjacency matrix expands to \(A^{\prime}\in\mathbb{R}^{(n+1)\times(n+1)}\), while the feature matrix becomes \(X^{\prime}\in\mathbb{R}^{(n+1)\times d}\), incorporating the new node's connections and features, respectively.
Once the graph is updated, the GNN performs inference on the modified graph, using the message-passing mechanism described earlier.
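A minimal sketch of this update step is shown below; the function name and the assumption of an undirected, unweighted graph are ours.

```python
import numpy as np

def inject_node(A, X, x_new, neighbor_ids):
    """Grow A to (n+1, n+1) and X to (n+1, d) for one newly added node."""
    n = A.shape[0]
    A_new = np.zeros((n + 1, n + 1), dtype=A.dtype)
    A_new[:n, :n] = A                 # keep all existing edges
    A_new[n, neighbor_ids] = 1        # undirected edges to the chosen nodes
    A_new[neighbor_ids, n] = 1
    X_new = np.vstack([X, x_new])     # append the new node's features
    return A_new, X_new
```

After this update, inference proceeds on \(A^{\prime}\) and \(X^{\prime}\) exactly as before.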
### Differential privacy
The original definition of DP (Krizhevsky et al., 2014; Goyal et al., 2015) was introduced in the context of microdata, that is, databases containing records at the level of individuals. A central aspect of DP is the concept of _neighborhood_, which was defined originally for that data structure as follows.
Definition 2.1 (**Neighboring databases**).: Let \(\mathcal{D}\) be the class of possible databases. Any two databases \(D,D^{\prime}\in\mathcal{D}\) that differ in one record are called _neighbors_. For two neighbor databases, the following equality holds:
\[d(D,D^{\prime})=1,\]
where \(d\) denotes the Hamming distance.
Definition 2.2 (\((\varepsilon,\delta)\)-**Differential privacy**(Krizhevsky et al., 2014; Goyal et al., 2015)).: A randomized mechanism \(\mathcal{M}\) satisfies \((\varepsilon,\delta)\)-DP with \(\varepsilon,\delta\geqslant 0\) if, for all pairs of neighboring databases \(D,D^{\prime}\in\mathcal{D}\) and for all measurable \(\mathcal{O}\subseteq\text{Range}(\mathcal{M})\),
\[\text{P}\{\mathcal{M}(D)\in\mathcal{O}\}\leqslant e^{\varepsilon}\,\text{P} \{\mathcal{M}(D^{\prime})\in\mathcal{O}\}+\delta.\]
In words, the output of a mechanism satisfying DP should not reveal the presence or absence of any specific record in the database, up to an exponential factor of \(\varepsilon\). When each record corresponds to a distinct individual respondent, DP aims to ensure their information will remain confidential. A lower value of \(\varepsilon\), referred to as the _privacy budget_, provides stronger protection.
Probably the most popular mechanism satisfying DP is the Laplace mechanism, which relies on a quantity called _global sensitivity_, defined next.
Definition 2.3 (\(L_{p}\)-**Global sensitivity**(Goyal et al., 2015)).: The \(L_{p}\)-global sensitivity of a query function \(f\colon\mathcal{D}\to\mathbb{R}^{d}\) is defined as
\[\Delta_{p}(f)=\max_{\forall D,D^{\prime}\in\mathcal{D}}\|f(D)-f(D^{\prime})\| _{p},\]
where \(D,D^{\prime}\) are any two neighbor databases.
| Symbol | Description |
| --- | --- |
| \(A\) | Adjacency matrix |
| \(\mathcal{A}\) | Adversary |
| \(E\) | Set of edges in the graph |
| \(G\) | Graph |
| \(n\) | Number of nodes |
| \(V\) | Set of nodes in the graph |
| \(V_{\mathcal{A}}\) | Set of target nodes |
| \(v_{m}\) | Malicious injected node |
| \(v_{t}\) | Target node |
| \(P\) | Prediction scores of the GNN |
| \(X\) | Feature matrix |
| \(x_{t}\) | Features of the target node |
| \(x_{m}\) | Features of the malicious node |

Table 1. List of notations.
**Definition 2.4** (**Laplace mechanism**(Kumar et al., 2017)).: Given any function \(f\colon\mathcal{D}\to\mathbb{R}^{d}\), the Laplace mechanism is defined as follows:
\[\mathcal{M}_{L}(D,f(\cdot),\varepsilon)=f(D)+(Y_{1},\ldots,Y_{d}),\]
where \(Y_{i}\) are i.i.d. random variables drawn from a Laplace distribution with zero mean and scale \(\Delta_{1}(f)/\varepsilon\).
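For illustration, the Laplace mechanism of Definition 2.4 can be implemented in a few lines of Python; this is a sketch under the assumption that the \(L_{1}\)-global sensitivity of \(f\) is known.

```python
import numpy as np

def laplace_mechanism(value, l1_sensitivity, epsilon, rng=None):
    """Definition 2.4: add Laplace(0, Delta_1(f)/epsilon) noise per coordinate."""
    rng = rng if rng is not None else np.random.default_rng()
    scale = l1_sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))
```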
## 3. Related Work
GNNs have gained significant attention in recent years due to their effectiveness in handling graph-based data across various applications (Kang et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019). As the adoption of GNNs increases, concerns regarding privacy and adversarial attacks on these networks also arise and become more significant (Li et al., 2019; Li et al., 2019). On the other hand, several privacy-preserving methods are developed to mitigate the effectiveness of these privacy attacks against GNNs (Li et al., 2019; Li et al., 2019).
### Privacy attacks on GNNs
Privacy attacks on GNNs can be categorized based on the actual leakage in the graph, namely, information about graph nodes, their attributes, or graph edges. Node privacy attacks, such as membership inference attacks (MIA) (Beng et al., 2015; Li et al., 2019; Li et al., 2019; Li et al., 2019), aim to determine if a given node was part of the training set. In contrast, attribute inference attacks (Li et al., 2019) focus on revealing sensitive information related to node attributes, violating attribute privacy. In this work, we concentrate on edge privacy violation, where the common attacks are the so-called link stealing, re-identification, or inference attacks, which aim to uncover the edges of the graph structure used by the GNN.
Early works (Li et al., 2019; Li et al., 2019; Li et al., 2019) have demonstrated the success and feasibility of link-stealing attacks. In the attack proposed in (Li et al., 2019), the adversary leverages prior knowledge about the graph, such as the likelihood of nodes with similar features or predictions being connected, to infer links in the graph. The attacker applies methods such as clustering to predict connections for nodes within the same cluster. In (Li et al., 2019), the authors demonstrate that by accessing the node embeddings trained to preserve the graph structure, one can recover edges by training a decoder to convert the embedding to the graph structure. The Linkteller attack (Li et al., 2019) involves probing the features of the nodes and studying their output predictions by the GNN to infer the links of the graph.
Existing link-stealing attacks exhibit certain weaknesses. The attack proposed in (Li et al., 2019) assumes a powerful adversary who requires access to the features, a shadow dataset, and the ability to train shadow GNNs in order to train an attack model. The attack model is trained to classify link presence based on the output predictions or features. The attack's performance declines in the inductive setting, where training and inference occur on different graphs, as evidenced in the subsequent LinkTeller paper (Li et al., 2019). Additionally, its effectiveness diminishes when there is no correlation between the features and the links of the nodes.
On the other hand, the main drawback of the LinkTeller attack (Li et al., 2019) is its non-stealthy perturbation of features, particularly when dealing with discrete datasets. LinkTeller's strategy consists of altering the input features of the graph to obtain information about the links. For discrete datasets, the perturbation can turn the features into real values, making them easier to detect. Moreover, the effectiveness of the LinkTeller attack decreases when mounted against deep GNNs with more than three layers.
In addition to privacy attacks targeting GNNs, adversarial attacks exist where the adversary's goal is to deceive the GNN's predictions or degrade its utility. These attacks involve altering the graph structure through node addition or deletion (Li et al., 2019; Li et al., 2019).
In this paper, we propose a novel link-stealing attack, NILS, that addresses the limitations of existing approaches, taking advantage of the dynamic nature of GNNs by injecting malicious nodes in the style of an adversarial attack. Our proposed NILS attack outperforms previous link-stealing attacks (Li et al., 2019; Li et al., 2019).
### Differential privacy mechanisms for graphs
DP has been extensively studied and applied to various data types, including graphs, with the aim of preserving sensitive information. Various DP mechanisms have been developed (Li et al., 2019; Li et al., 2019) to protect both node and edge information. Node-level DP focuses on preserving the privacy of individual nodes, protecting from attacks, such as membership inference attacks (Beng et al., 2015; Li et al., 2019; Li et al., 2019). In contrast, Edge-level DP seeks to preserve the privacy of edge information, which represents relationships between nodes, preventing link stealing attacks (Li et al., 2019; Li et al., 2019; Li et al., 2019).
Substantial research has been conducted on achieving node-level DP and edge-level DP in graph-based models. Several approaches allow for the publication of graph statistics with edge-level DP guarantees, including degree subgraph count (Li et al., 2019), and degree distributions (Li et al., 2019; Li et al., 2019). Although these statistics are beneficial for graph analysis, they are inadequate for training a GNN model, as most of the GNNs require access to the raw graph structure for the message-passing mechanism. Consequently, other approaches have been developed to train GNN models by adopting input perturbation DP, releasing the graph while ensuring edge-level DP (Beng et al., 2015; Li et al., 2019; Li et al., 2019)
Furthermore, when designing DP solutions, it is crucial to consider specific privacy threats and adversary strengths. In the context of our proposed NILS attack, the adversary is capable of injecting nodes into the graph to discover sensitive edge information, violating edge privacy. Therefore, we propose a customized DP notion that specifically addresses this type of privacy attack. We then leverage the LapGraph algorithm (Li et al., 2019) to achieve the desired DP guarantees under the new, tailored notion and study its effectiveness.
## 4. Node Injection Link Stealing Attack
GNNs are prone to various privacy attacks that usually aim at learning as much information as possible about their underlying graph structure. GNNs inherit the potential attacks against standard neural networks such as membership inference attacks (Li et al., 2019; Li et al., 2019), whereby the goal of the adversary is to ascertain whether a sample is included in the training dataset or not.
As introduced earlier, in this paper, we focus on a particular attack named as _link stealing attack_, where an adversary without access to the adjacency matrix aims to learn whether a particular edge exists or not.
In this section, we first introduce the threat model to characterize the adversary's background knowledge. Then, we propose our node
injection link stealing attack that takes advantage of the dynamicity of GNNs.
### Threat model
#### 4.1.1. Environment
As mentioned in the previous section, we consider a GNN application in which a server has already trained the GNN using a specific dataset and offers access to this GNN through a black-box API. In this context, the black-box API is an interface provided by the server that enables users to interact with the pre-trained GNN model without directly accessing its internal components, such as the model architecture, parameters, or graph structure. Users can submit prediction queries using node IDs. If a new node needs to be added to the graph, users can employ a _connect_ query to attach the node to the graph before querying its prediction based on its ID. The API processes input data into output predictions, ensuring that the model's underlying computations remain hidden from the user. Users can query this GNN for the purpose of node classification. Hence, a query consists of the queried node's ID, and the output of this query is the vector of prediction scores for this particular node. Users do not have knowledge of the edges of this graph; hence, the only information a user knows is the set of node IDs.
#### 4.1.2. Adversary's goal and knowledge
We consider an adversary, \(\mathcal{A}\), who assumes the role of a GNN user. Her objective is to determine the neighbors of a specific _target node_, \(v_{t}\), selected from a set of _target nodes_, \(V_{\mathcal{A}}\), within the graph. This is done based on the GNN's predictions for the node set \(V_{\mathcal{A}}\). In simpler terms, \(\mathcal{A}\) aims to identify the neighbors of the target node \(v_{t}\) that are included in the target set nodes \(V_{\mathcal{A}}\).
We should note that if the adversary aims to identify all the links within the graph, then the set of target nodes \(V_{\mathcal{A}}\) becomes the set containing all the nodes of the graph \(V\). To achieve this, the adversary may need to perform multiple node injections, targeting different nodes from the graph each time. However, the practicality of such an approach is debatable. The adversary's selection of target nodes reflects her background knowledge about these nodes. For instance, in the context of social networks, the adversary's background knowledge could include information such as users' interests. This information can guide the adversary in selecting target nodes \(V_{\mathcal{A}}\) that are more likely to be connected. In our attack scenario, we choose the target nodes uniformly at random.
The adversary \(\mathcal{A}\) is able to obtain the predictions of the target nodes \(V_{\mathcal{A}}\) by sending the server their corresponding IDs through the provided API. In addition, the adversary \(\mathcal{A}\) is able to use the _connect_ query to connect a node \(v_{m}\) to a target node \(v_{t}\). In general, we assume that the adversary does not have access to the features of the nodes in the graph, with the exception of certain attack strategies described in Sec. 4.3.
### Node injection link stealing attack
In this section, we formally define our NILS attack that, unlike existing link-stealing attacks, exploits the dynamic nature of the underlying GNN. Indeed, adversary \(\mathcal{A}\) can _connect_ new nodes and further query the prediction scores of a set of nodes \(V_{\mathcal{A}}\) in the graph. While adding this new node \(v_{m},\mathcal{A}\) can choose which existing node \(v_{t}\) it actually connects to and hence try to discover its neighbors. More formally:
1. \(\mathcal{A}\) first queries the prediction scores of the target nodes \(V_{\mathcal{A}}\) and receives the corresponding prediction matrix \(P\) of the target nodes \(V_{\mathcal{A}}\).
2. \(\mathcal{A}\) generates malicious features of a malicious node \(v_{m}\) based on the obtained prediction matrix \(P\) (see Sec. 4.3 for further details on this step).
3. Next, \(\mathcal{A}\) sends a _connect_ query to inject the malicious node \(v_{m}\). The query has the following parameters: the features \(x_{m}\) of the new node, and the ID of the target node \(v_{t}\) the adversary wishes to connect \(v_{m}\) to.
4. The server adds this malicious node \(v_{m}\) to the graph and links it to the target node \(v_{t}\).
5. \(\mathcal{A}\) queries back the server for new prediction matrix \(P^{\prime}\) of the target nodes \(V_{\mathcal{A}}\) and obtains it.
6. With access to \(P\) and \(P^{\prime}\), \(\mathcal{A}\) computes the \(L_{1}\) distance between \(P(v)\) and \(P^{\prime}(v)\) of each node \(v\) in \(V_{\mathcal{A}}\). A significant change in the prediction scores of a node \(v\) indicates a high probability of being a neighbor to \(v_{t}\). If the difference exceeds a threshold \(R\), the adversary infers that node \(v\) is a neighbor of \(v_{t}\).
The decision threshold \(R\) is determined through an extensive parameter tuning process, aiming for an optimal trade-off between precision and recall in identifying the true neighbors of the target node. This balance is represented by the \(F_{1}\) score. We evaluate various candidate values of \(R\), selecting the one that yields the highest \(F_{1}\) score as the optimal threshold. The results reported in our study are based on this optimal value of \(R\).
This attack strategy is depicted in Figure 1 and outlined in Algorithm 1; a minimal code sketch of the procedure is given after Figure 1.
Figure 1. Adversary-Server Interaction: In the inference phase, the adversary first queries the prediction scores \(P\) of the target nodes, represented as \(V_{\mathcal{A}}\). Next, the server sends the predictions \(P\) of the GNN to the adversary. Then, the adversary sends a _Connect_ query to inject the malicious node \(v_{m}\), with features \(x_{m}\), to the target node \(v_{t}\). Finally, after the injection, the adversary queries again the prediction scores \(P^{\prime}\) of the target nodes \(V_{\mathcal{A}}\).
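To make the six steps above concrete, the following Python sketch walks through them against a hypothetical black-box API object; `api.predict` and `api.connect` are stand-ins for the prediction and _connect_ queries of the threat model, not a real library interface.

```python
import numpy as np

def nils_attack(api, target_ids, v_t, x_m, threshold):
    """Sketch of the six attack steps against a hypothetical black-box `api`."""
    P = np.asarray(api.predict(target_ids))        # step 1: scores before
    # step 2 happens outside: x_m is crafted from P (see Sec. 4.3)
    api.connect(features=x_m, attach_to=v_t)       # steps 3-4: inject v_m
    P_prime = np.asarray(api.predict(target_ids))  # step 5: scores after
    change = np.abs(P_prime - P).sum(axis=1)       # step 6: per-node L1 distance
    # nodes whose predictions changed by more than R are inferred neighbors
    return [v for v, c in zip(target_ids, change) if c > threshold]
```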
### Strategies for malicious node's features
In order to evaluate how the injection of the malicious node \(v_{m}\) influences the predictions of the GNN, we study five strategies for generating the malicious node's features \(x_{m}\), which helps us assess the success of our attack. These five strategies are designed with varying degrees of sparsity and stealthiness, enabling us to explore their effectiveness in altering the model's predictions. We define the proposed strategies as follows:
1. **All-ones strategy**: Generates a dense feature vector for the malicious node, containing all ones, as shown in the equation below: \[x_{m}=1.\] This strategy potentially causes significant changes in predictions but may be less stealthy due to its dense feature vector.
2. **All-zeros strategy**: Creates a sparse feature vector for the malicious node, containing all zeros, as shown in the equation below: \[x_{m}=0.\] This approach may subtly alter the output of the GNN, leading to smaller changes in predictions, while offering increased stealthiness.
3. **Identity strategy**: Introduces a malicious node with a feature vector identical to the target node's feature vector, as shown below: \[x_{m}=x_{t}.\] This strategy causes confusion in the model's predictions for neighboring nodes and has variable stealthiness based on the similarity between injected and target nodes. For this strategy, we assume that \(\mathcal{A}\) knows the features of the target node \(x_{t}\).
4. **Max attributes strategy**: This method creates a malicious node feature vector by computing the element-wise maximum of each attribute in the target nodes' feature matrix. Specifically, it considers only nodes from classes different from the target node's class, as shown below: \[x_{m,k}=\max_{i\in V_{\mathcal{A}},\,C_{i}\neq C_{t}}X_{i,k},\quad\text{for}\quad k=1,\dots,d.\] Here, \(C_{i}\) represents the class of node \(i\), and \(C_{t}\) is the class of the target node. This strategy potentially causes significant changes in predictions but may be less stealthy due to its exaggerated features. We assume in this strategy that the adversary has access to the features of the set of target nodes \(V_{\mathcal{A}}\) and also to their classes as predicted by the GNN; the predicted classes are accessible to the adversary after step 1 of Algorithm 1.
5. **Class representative strategy**: This approach generates a malicious node feature vector by selecting the feature vector of the node with the highest confidence score for a specific class, different from the target node's class, as shown below: \[x_{m}=x_{i^{*}}\text{ with }i^{*}=\operatorname*{arg\,max}_{\begin{subarray}{c}i\in V_{\mathcal{A}},\\ C_{i}\neq C_{t}\end{subarray}}p_{i,j}.\] In this equation, \(x_{m}\) is the malicious node feature vector, \(i^{*}\) is the index of the node with the highest confidence score \(p_{i,j}\) for a class \(j\) different from the target node's class, \(V_{\mathcal{A}}\) is the set of target nodes, \(C_{i}\) represents the class of node \(i\), and \(C_{t}\) is the class of the target node. This strategy leverages the model's predictions to alter the predictions of the target node's neighbors, potentially offering increased stealthiness.
Additionally, we introduce the so-called LinkTeller **Influence** strategy, an alternative to the original method in (Zhu et al., 2017) that incorporates their feature perturbation strategy into our node injection attack. This strategy entails perturbing the features of the target node by adding a small real value \(\delta\), as shown below:
\[x_{m}=x_{t}+\delta.\]
We assess the performance of the Influence strategy in comparison to other strategies, aiming to determine whether the attack performance gains are attributable to node injection or the crafting of malicious features. It is worth noting, however, that the Influence strategy may be easily detected if the feature \(x_{t}\) has a discrete nature, given that \(x_{m}\) is real-valued.
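A compact sketch covering all six feature-generation strategies is given below; this is our own NumPy illustration, in which the argument names are assumptions and the class-representative case simplifies the confidence-score selection to the per-node maximum over classes.

```python
import numpy as np

def malicious_features(strategy, x_t, X_targets=None, classes=None,
                       c_t=None, P=None, delta=1e-3):
    """Candidate feature vectors x_m for the injected node (Sec. 4.3)."""
    if strategy == "all_ones":
        return np.ones_like(x_t)
    if strategy == "all_zeros":
        return np.zeros_like(x_t)
    if strategy == "identity":
        return x_t.copy()
    if strategy == "max_attributes":
        other = X_targets[classes != c_t]     # nodes of other classes only
        return other.max(axis=0)              # element-wise maximum
    if strategy == "class_representative":
        other = np.flatnonzero(classes != c_t)
        i_star = other[P[other].max(axis=1).argmax()]  # most confident node
        return X_targets[i_star].copy()
    if strategy == "influence":
        return x_t + delta                    # LinkTeller-style perturbation
    raise ValueError(f"unknown strategy: {strategy}")
```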
## 5. Evaluation of the attack
In this section, we present the evaluation results of our proposed attack. First, we introduce our experimental setup. Then, we provide a detailed analysis of the performance of our attack on various datasets, discussing its effectiveness and limitations.
### Experimental setup
#### 5.1.1. Datasets
In order to evaluate the effectiveness of our attack, we conducted experiments on various real-world datasets previously utilized in related research. We include the Flickr (Flickr et al., 2017) dataset, where nodes represent images uploaded to the Flickr platform. Edges connect nodes if the images share common properties like geographic location, gallery, or user comments. Node features contain word representations. Additionally, we utilize two Twitch datasets (TWITCH-FR and TWITCH-RU)(Zhu et al., 2018) to evaluate NILS. We use Twitch-ES to train the GNNs as done previously in (Zhu et al., 2018) for the inductive setting. Twitch datasets (Zhu et al., 2018) illustrate follow relationships between users on the Twitch streaming platform. The objective of these datasets is to perform binary classification to determine if a streamer uses explicit language, using features such as users' preferred games, location, and streaming habits.
Furthermore, for the transductive setting, where the training and testing of the GNNs occur on the same graph, we incorporate three citation network datasets (Zhou et al., 2017): Cora, Citeseer, and Pubmed. These datasets capture citation relationships among scientific publications across various fields. The classification task for these datasets involves predicting the topic of publications based on their textual features. While Cora and Citeseer encompass general scientific publications, Pubmed is dedicated to biomedical publications. By employing these datasets in our evaluation, we aim to demonstrate the effectiveness of our proposed attack in both inductive and transductive settings, as well as across a range of application domains.
#### 5.1.2. Models
In our study, we follow LinkTeller's approach to training the models and selecting hyperparameters (Zhou et al., 2017). In LinkTeller (Zhou et al., 2017), the authors trained Graph Convolutional Networks (GCNs) using various configurations and hyperparameters, which encompassed normalization techniques applied to the adjacency matrix, the number of hidden layers, input and output units, and dropout rates. In order to identify the optimal set of hyperparameters, the authors employed a grid search strategy, systematically exploring combinations of hyperparameters and evaluating their performance on a validation set. The search space for hyperparameters and the formulae for different normalization techniques were provided in (Zhou et al., 2017, Appendix F). After obtaining the best set of hyperparameters, the authors trained the GCN models to minimize the cross-entropy loss for the intended tasks.
In our experiments, we adhere to the same methodology as in LinkTeller (Zhou et al., 2017), ensuring consistency across the studies. By utilizing the same training procedures and hyperparameter tuning strategies, we aim to provide a comprehensive understanding of the attack performance across different layer configurations (two, three, and four layers) while maintaining consistency.
#### 5.1.3. Evaluation of attack performance
In accordance with the evaluation methodology presented in the LinkTeller paper (Zhou et al., 2017), we employ precision, recall, and the \(F_{1}\) score as our primary evaluation metrics. These metrics are particularly suitable for the imbalanced binary classification problem at hand, in which the minority class (i.e., connected nodes) is of central interest. We primarily select the set of target nodes \(V_{\mathcal{A}}\), with \(|V_{\mathcal{A}}|=500\), using a uniform random sampling approach. Furthermore, following the baseline study (Zhou et al., 2017), we explore scenarios where target nodes exhibit either low or high degrees. A comprehensive discussion of the sampling strategy can be found in (Zhou et al., 2017, Section V.D.). We report results averaged over three runs with different random seeds, along with the standard deviation.
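For reference, a minimal sketch of the metric computation and of the \(F_{1}\)-based selection of the decision threshold \(R\) described in Sec. 4.2 could look as follows; the candidate grid and guard constants are our own choices.

```python
import numpy as np

def tune_threshold(change, true_neighbors, candidates):
    """Pick the decision threshold R with the best F1 score (Sec. 4.2).

    change         : per-node L1 prediction change after injection
    true_neighbors : boolean ground-truth mask over the target nodes
    candidates     : iterable of candidate values for R
    """
    best_R, best_f1 = None, -1.0
    for R in candidates:
        pred = change > R
        tp = np.sum(pred & true_neighbors)
        precision = tp / max(pred.sum(), 1)
        recall = tp / max(true_neighbors.sum(), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        if f1 > best_f1:
            best_R, best_f1 = R, f1
    return best_R, best_f1
```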
### Analysis of strategies for malicious node's features
In this section, we analyze the impact of different strategies, as defined in Section 4.3, for generating the features \(x_{m}\) of the malicious node \(v_{m}\) on the success of our attack.
The success rates of these strategies, as shown in Table 2, reveal that the All-ones, Max attributes, and Class representative strategies are the most effective in causing significant changes in the predictions of the target node's neighbors. These results suggest that injecting nodes with high-valued or class-specific features can effectively disrupt the model's output predictions.
Conversely, the All-zeros and Identity strategies exhibit relatively lower success rates, as shown in Table 2. While these strategies offer certain benefits in terms of stealthiness, their impact on the graph structure and predictions is less pronounced, highlighting a trade-off between attack effectiveness and stealthiness.
Concerning the Influence strategy, our NILS method exhibits a modest improvement over the LinkTeller baseline for the Twitch-FR dataset, as illustrated in Table 2. This suggests that the node injection property of our NILS attack is effective in this context. However, for the Twitch-RU dataset, NILS underperforms in comparison to the LinkTeller baseline. The most significant improvement is observed in the Flickr dataset, where the node injection property of NILS considerably increases the \(F_{1}\) score from \(0.32\pm 0.13\) of LinkTeller to \(0.89\pm 0.10\). This outcome highlights the advantage of NILS attack's node injection method within the Influence strategy, particularly when compared to the LinkTeller attack, which employs the Influence strategy without node injection.
These findings underscore the importance of considering both the effectiveness and stealthiness of malicious feature generation strategies when devising link inference attacks on GNNs.
### Comparison with the baselines
In this study, we conducted experiments to evaluate the performance of our proposed NILS attack in comparison to the LinkTeller attack using the same experimental setup. Our focus is on analyzing the optimal attacks for both approaches, which involves accurately estimating the number of neighbors of the target set nodes. The results, summarized in Table 3, demonstrate that our attack outperforms LinkTeller on both Twitch datasets (TWITCH-FR and TWITCH-RU). Furthermore, our method exhibits a substantial improvement over LinkTeller on the Flickr dataset, achieving nearly double the precision and recall values. Notably, our attack demonstrates stable performance across varying node degrees, with only a marginal decrease in effectiveness for high-degree target nodes. This can be attributed to the smaller influence that each neighboring node has on the aggregation of the GCN layer when the target node degree is high. Overall, our proposed NILS attack consistently demonstrates superior performance compared to the LinkTeller attack.
| Method | Twitch-FR | Twitch-RU | Flickr |
| --- | --- | --- | --- |
| Class Rep. | \(0.94\pm 0.01\) | \(0.83\pm 0.06\) | \(0.96\pm 0.06\) |
| Max Attr. | \(0.99\pm 0.00\) | \(0.98\pm 0.02\) | \(\mathbf{1.00\pm 0.00}\) |
| All-ones | \(\mathbf{0.99\pm 0.00}\) | \(\mathbf{0.97\pm 0.01}\) | \(0.99\pm 0.02\) |
| All-zeros | \(0.58\pm 0.02\) | \(0.48\pm 0.01\) | \(0.71\pm 0.07\) |
| Identity | \(0.81\pm 0.02\) | \(0.69\pm 0.01\) | \(0.95\pm 0.07\) |
| Influence NILS | \(0.81\pm 0.02\) | \(0.70\pm 0.01\) | \(0.89\pm 0.10\) |
| Influence LinkTeller (Zhou et al., 2017) | \(0.80\pm 0.02\) | \(0.74\pm 0.01\) | \(0.32\pm 0.13\) |

Table 2. \(F_{1}\) scores and standard deviations for different attack methods and datasets.

We further compare our attack with the link-stealing attacks introduced in (Kumar et al., 2019), where the authors' various attack strategies rely on different types of background knowledge available to the adversary, such as node attributes and shadow datasets. Specifically, in their Attack-2, the adversary has access to both the features and prediction scores of the nodes. Utilizing this information, the adversary creates two types of attacks: LSA2-attr and LSA2-post. LSA2-attr calculates distances between node attributes, while LSA2-post computes distances between node prediction scores (posteriors). It is important to highlight that these two attacks align closely with our threat model, as both assume that the adversary has access to the features and prediction scores of the target node. This similarity in assumptions renders these attacks particularly relevant for comparison with our proposed NILS attack. The attacks are executed under the transductive setting, where training and inference occur on the same graph. As shown in Table 4, our proposed NILS attack outperforms the LSA2-post and LSA2-attr attacks constructed in (Kumar et al., 2019), and its performance is nearly equivalent to that of LinkTeller. These results demonstrate that the NILS attack maintains its effectiveness under the transductive setting, just as in the inductive setting.
### Depth of the GNN
In this section, we examine the impact of increasing the depth of the GNN on the success rate of the attack for the Twitch-FR dataset. Our findings, illustrated in Figure 2, indicate that as the depth of the GNN increases, the attack's success rate decreases, which can be attributed to the dilution of the injected node's influence within the target node's neighborhood. As the GNN depth increases, the model aggregates information from a larger neighborhood, encompassing nodes that are \(k\) hops away from the target node. Consequently, the injected malicious node's features become one among many contributing factors in the aggregated information. This reduction in the injected node's impact on the aggregated information diminishes the overall effectiveness of the attack, making it less successful in altering the predictions of the target node's neighbors.
In comparison with LinkTeller (Kumar et al., 2019), as shown in Table 5, NILS outperforms LinkTeller across various GCN depths. Specifically, for the Twitch-FR dataset at a GCN depth of 3, NILS demonstrates higher precision and recall values (precision: 85.06 \(\pm\)1.2, recall: 81.56 \(\pm\)1.2) than LinkTeller (precision: 50.01 \(\pm\)1.1, recall: 46.6 \(\pm\)5.0). Notably, NILS at a GCN depth of 3 even outperforms LinkTeller at a GCN depth of 2 (precision: 84.1 \(\pm\)3.7, recall: 78.2 \(\pm\)1.9). These results highlight the effectiveness of our node injection strategy, as it consistently outperforms the LinkTeller method across different depths of the GCN.
## 6. Defense
This section introduces the basic notions of DP in the context of GNNs. As a reminder, the goal is to protect the privacy of the graph, in the sense of preventing an adversary from discovering whether, in a given graph, there is a link between two nodes. With this aim, we need to define the neighboring relation for graphs and revise the definition of DP accordingly.
### DP for graphs
Recall from Sec. 2.2 that the notion of neighborhood of DP was defined originally for microdata, and, accordingly, two databases are said to be neighbors if they differ just in one record. In the context of graphs at hand, however, this notion must be adapted since two graphs may differ with respect to either one edge or one node.
In the literature, we find two attempts (Kumar et al., 2019; Kumar et al., 2019) to adapt DP to graphs. Before examining them, recall from Sec. 2.1.1 that a graph \(\mathcal{G}=(V,E)\) is represented with an adjacency matrix \(A\), whereby \(A_{ij}=1\) if there is a link between node \(i\) and node \(j\), and \(A_{ij}=0\) otherwise (where \(i,j\in\{1,\ldots,|V|\}\)).
Definition 6.1 (Edge-level adjacent graphs (Kumar et al., 2019)).: \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) are considered _edge-level adjacent graphs_ if one can be obtained from the other by removing a single edge. In other words, \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) differ by at most one edge. Hence, their adjacency matrices differ by one element only.
Accordingly, an edge-level DP mechanism is defined as follows:
Definition 6.2 (\((\varepsilon,\delta)\)-Edge-level differential privacy).: A randomized mechanism \(\mathcal{M}\) satisfies \((\varepsilon,\delta)\)**-edge-level DP** with \(\varepsilon,\delta\geqslant 0\) if, for all pairs of edge-level adjacent graphs \(\mathcal{G},\mathcal{G}^{\prime}\) and for all measurable \(\mathcal{O}\subseteq\mathrm{Range}(\mathcal{M})\),
\[\mathrm{P}\{\mathcal{M}(\mathcal{G})\in\mathcal{O}\}\leqslant e^{\varepsilon}\,\mathrm{P}\{\mathcal{M}(\mathcal{G}^{\prime})\in\mathcal{O}\}+\delta.\]
Definition 6.3 (Node-level adjacent graphs (Kumar et al., 2019)).: \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) are said to be _node-level adjacent graphs_ if one can be obtained from the other by removing a single node and all of its incident edges.
Node-level DP is defined analogously as follows:
Figure 2. Success rates of the attack for different depths and malicious feature generation strategies for the Twitch-FR dataset.
**Definition 6.4** (\((\varepsilon,\delta)\)-**Node-level differential privacy**).: A randomized mechanism \(\mathcal{M}\) satisfies \((\varepsilon,\delta)\)**-node-level DP** with \(\varepsilon,\delta\geqslant 0\) if, for all pairs of node-level adjacent graphs \(\mathcal{G},\mathcal{G}^{\prime}\) and for all measurable \(\mathcal{O}\subseteq\operatorname{Range}(\mathcal{M})\), the following inequality holds:
\[\operatorname{P}\{\mathcal{M}(\mathcal{G})\in\mathcal{O}\}\leqslant e^{ \varepsilon}\operatorname{P}\{\mathcal{M}(\mathcal{G}^{\prime})\in\mathcal{O }\}+\delta\]
### One-node-one-edge-level DP
The adversary defined in Sec. 4.2 adds a malicious node to a graph and connects it to a target node through a _single_ edge. Countering such an adversary with a node-level DP mechanism (see Definition 6.3) is clearly not a suitable choice in terms of model accuracy since node-level DP targets a stronger adversary. Trying to hide the presence or absence of one node and _all_ of its incident edges intuitively would increase the scale of the noise to be added (for example to the original adjacency matrix), and incur more data inaccuracy than necessary. Motivated by this, we define a new notion of neighboring graphs (and the corresponding DP mechanism), which is designed to specifically counter the adversary proposed in this work.
**Definition 6.5** (**One-node-one-edge-level adjacent graphs**).: \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) are considered _one-node-one-edge-level adjacent graphs_ if one can be obtained from the other by adding a single node with one edge only.
Note that, as in node-level adjacent graphs (Definition 6.3), the adjacency matrices of two neighboring graphs (in the sense of one-node-one-edge) differ by one row and one column only, but unlike the node-level case, the difference in \(L_{1}\)-norm between the adjacency matrices is always one. Based on Definition 6.5, one-node-one-edge-level DP is defined as follows:
**Definition 6.6** (\((\varepsilon,\delta)\)-**One-node-one-edge-level differential privacy**).: A randomized mechanism \(\mathcal{M}\) satisfies \((\varepsilon,\delta)\)**-one-node-one-edge-level DP** with \(\varepsilon,\delta\geqslant 0\) if, for all pairs of one-node-one-edge-level adjacent graphs \(\mathcal{G},\mathcal{G}^{\prime}\) and for all measurable \(\mathcal{O}\subseteq\operatorname{Range}(\mathcal{M})\), the following holds:
\[\operatorname{P}\{\mathcal{M}(\mathcal{G})\in\mathcal{O}\}\leqslant e^{ \varepsilon}\operatorname{P}\{\mathcal{M}(\mathcal{G}^{\prime})\in\mathcal{O }\}+\delta\]
### Countermeasures for our attack
In this section, we describe one DP-based strategy, namely the LapGraph mechanism, which was introduced in (Srivastava et al., 2017). A much simpler defense approach against any privacy attack on GNNs would of course be output perturbation (Kipf and Welling, 2017), whereby the output of the GNN prediction is directly perturbed with some DP mechanism (e.g., the classical Laplace mechanism). While this solution is straightforward to implement and indeed can be used to satisfy the one-node-one-edge-level DP notion, unfortunately, it would significantly deteriorate the accuracy of the GNN output. It is easy to see that the \(L_{1}\)-global sensitivity of a prediction matrix for the set of nodes \(V_{\mathcal{G}}\) is as large as \(2\left|V_{\mathcal{G}}\right|\), which makes us rule out output perturbation.
To defend against the newly proposed attack, similar to (Srivastava et al., 2017), we propose to apply the LapGraph algorithm, which consists in perturbing the adjacency matrix using the Laplace mechanism and binarizing it by replacing the top-\(N\) largest values by \(1\) and the remaining values by \(0\). Here, \(N\) represents the estimated number of edges in the graph, which is also computed using the Laplace mechanism.
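For concreteness, the following is a minimal sketch of this mechanism (our illustration, not the authors' implementation). The dense-matrix representation and the 10%/90% split of the privacy budget between the edge-count estimate and the cell perturbation are assumptions made for this sketch only:

```
#include <algorithm>
#include <cmath>
#include <functional>
#include <limits>
#include <random>
#include <vector>

using AdjMatrix = std::vector<std::vector<int>>;

// Draw one sample from Laplace(0, scale) via inverse CDF sampling.
static double sampleLaplace(double scale, std::mt19937& gen) {
    std::uniform_real_distribution<double> u(-0.5, 0.5);
    const double x = u(gen);
    return -scale * std::copysign(1.0, x) * std::log(1.0 - 2.0 * std::abs(x));
}

// LapGraph sketch: add Laplace noise to every upper-triangular cell of the
// adjacency matrix, estimate the number of edges N with the Laplace
// mechanism, and keep the top-N noisy cells as edges.
AdjMatrix lapGraph(const AdjMatrix& A, double epsilon, std::mt19937& gen) {
    const std::size_t n = A.size();
    const double epsCount = 0.1 * epsilon;  // assumed budget split
    const double epsCells = 0.9 * epsilon;
    // True edge count over the upper triangle.
    double edges = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j) edges += A[i][j];
    // Noise every upper-triangular cell (sensitivity 1 -> scale 1/epsCells).
    std::vector<double> noisy;
    noisy.reserve(n * (n - 1) / 2);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j)
            noisy.push_back(A[i][j] + sampleLaplace(1.0 / epsCells, gen));
    // Noisy edge-count estimate, clamped to the number of cells.
    const long N = std::clamp(std::lround(edges + sampleLaplace(1.0 / epsCount, gen)),
                              0L, static_cast<long>(noisy.size()));
    // Threshold at the N-th largest noisy value (ties may keep a few more).
    std::vector<double> sorted = noisy;
    std::sort(sorted.begin(), sorted.end(), std::greater<double>());
    const double threshold = (N > 0) ? sorted[N - 1]
                                     : std::numeric_limits<double>::infinity();
    // Binarize symmetrically.
    AdjMatrix Ap(n, std::vector<int>(n, 0));
    std::size_t k = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j, ++k)
            if (noisy[k] >= threshold) Ap[i][j] = Ap[j][i] = 1;
    return Ap;
}
```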
By leveraging the post-processing property of DP, the edge information remains protected even if the adversary observes the predictions generated by the GNN. Furthermore,
Table 4. Comparative performance of the NILS attack with LinkTeller (Srivastava et al., 2017) and the link-stealing attacks in (Kipf and Welling, 2017) across three datasets (Cora, Citeseer, and Pubmed).
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{low} & \multicolumn{2}{c}{unconstrained} & \multicolumn{2}{c}{high} \\ \cline{3-8} & & precision & recall & precision & recall & precision & recall \\ \hline \multirow{2}{*}{TWITCH-FR} & NILS (Ours) & 100.0 \(\pm\)0.0 & 100.0 \(\pm\)0.0 & 99.13 \(\pm\)0.8 & 99.57 \(\pm\)0.35 & 99.91 \(\pm\)2.6 & 100.0 \(\pm\)0.0 \\ & LinkTeller & 92.5 \(\pm\)5.4 & 92.5 \(\pm\)5.4 & 84.1 \(\pm\)3.7 & 78.2 \(\pm\)1.9 & 83.2 \(\pm\)1.4 & 80.6 \(\pm\)6.7 \\ \hline \multirow{2}{*}{TWITCH-RU} & NILS (Ours) & 100.0 \(\pm\)0.0 & 100.0 \(\pm\)0.0 & 96.45 \(\pm\)0.4 & 98.34 \(\pm\)0.7 & 99.77 \(\pm\)0.1 & 99.37 \(\pm\)0.1 \\ & LinkTeller & 78.8 \(\pm\)1.9 & 92.6 \(\pm\)5.5 & 71.8 \(\pm\)2.2 & 78.5 \(\pm\)2.4 & 89.7 \(\pm\)1.7 & 65.7 \(\pm\)3.9 \\ \hline \multirow{2}{*}{Flickr} & NILS (Ours) & 100.0 \(\pm\)0.0 & 100.0 \(\pm\)0.0 & 99.11 \(\pm\)1.7 & 95.83 \(\pm\)5.0 & 93.72 \(\pm\)3.1 & 78.9 \(\pm\)1.9 \\ & LinkTeller & 51.0 \(\pm\)7.0 & 53.3 \(\pm\)4.7 & 33.8 \(\pm\)13.3 & 32.1 \(\pm\)13.3 & 18.2 \(\pm\)4.5 & 18.5 \(\pm\)6.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Comparative performance of our proposed attack NILS and LinkTeller across three datasets (TWITCH-FR, TWITCH-RU, and Flickr) under low, unconstrained, and high constraint settings. The results are presented in terms of precision and recall with corresponding standard deviations.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Depth-\(2\)} & \multicolumn{2}{c}{Depth-\(3\)} \\ \cline{3-6} & & precision & recall & precision & recall \\ \hline \multirow{2}{*}{TWITCH-FR} & NILS (Ours) & 99.13 \(\pm\)0.8 & 99.57 \(\pm\)0.35 & 85.06 \(\pm\)1.2 & 81.56 \(\pm\)1.2 \\ & LinkTeller & 84.1 \(\pm\)3.7 & 78.2 \(\pm\)1.9 & 50.1 \(\pm\)5.1 & 46.6 \(\pm\)0.0 \\ \hline \multirow{2}{*}{TWITCH-RU} & NILS (Ours) & 96.45 \(\pm\)0.4 & 98.34 \(\pm\)0.7 & 78.78 \(\pm\)3.8 & 76.35 \(\pm\)9.3 \\ & LinkTeller & 71.8 \(\pm\)2.2 & 78.5 \(\pm\)2.4 & 45.7 \(\pm\)2.2 & 50.0 \(\pm\)2.8 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Success rates of the attack for different GCN depths in comparison with LinkTeller (Srivastava et al., 2017). We use the all-ones strategy on the Twitch-FR and Twitch-RU datasets.
each time a user connects a new node, a new adjacency matrix is generated following the same LapGraph mechanism, thereby accumulating the privacy budget according to the sequential composition property of DP (Kumar et al., 2018).
Although the LapGraph mechanism was proposed to meet edge-level DP, it is not difficult to show that the mechanism can also be used to satisfy one-node-one-edge-level DP. For this, let \(f_{A}\) be the query function returning the adjacency matrix of a graph \(\mathcal{G}\). Unlike edge-level neighborhood, the corresponding matrices \(A,A^{\prime}\) of two one-node-one-edge neighboring graphs \(\mathcal{G},\mathcal{G}^{\prime}\) have different dimensions, namely, either \(A\in\mathbb{R}^{n\times n}\) and \(A^{\prime}\in\mathbb{R}^{(n+1)\times(n+1)}\), or \(A\in\mathbb{R}^{(n+1)\times(n+1)}\) and \(A^{\prime}\in\mathbb{R}^{n\times n}\). Without loss of generality, we assume the former case, where \(A\) and \(A^{\prime}\) represent the adjacency matrices _before_ and _after_ the new node is connected to \(\mathcal{G}\) (resulting in \(\mathcal{G}^{\prime}\)). We shall also assume that the new node corresponds to the \((n+1)\)-th row and, for symmetry, to the \((n+1)\)-th column of \(A^{\prime}\). More precisely, since any adjacency matrix is symmetric by definition, the computation of the sensitivity of \(f_{A}\) only requires the upper or lower triangular part of \(A\).
To enable the subtraction operation \(A-A^{\prime}\) implicit in the definition of the global sensitivity (see Definition 2.3), we append one zero row and one zero column to \(A\) and denote the resulting matrix by \(\tilde{A}\in\mathbb{R}^{(n+1)\times(n+1)}\). As in the case of \(A^{\prime}\), we assume that the appended row and column are in the \((n+1)\)-th position of \(\tilde{A}\).
To compute the sensitivity of \(f_{A}\) for the notion of one-node-one-edge neighboring graphs, we just need to note that the \((n+1)\)-th columns (or rows, if we consider the lower triangular part of the adjacency matrix) of \(\tilde{A}\) and \(A^{\prime}\) always differ in one element. The reason is that one-node-one-edge neighboring graphs differ in exactly one edge. As a result,
\[\|\tilde{A}-A^{\prime}\|_{1}=1\]
for any pair of neighboring graphs, which yields an \(L_{1}\)-global sensitivity of \(1\), as in the original LapGraph mechanism intended for edge-level DP. The fact that the two sensitivities coincide implies that the LapGraph version utilized in this work provides stronger protection for the same level of utility compared to the original LapGraph. This is because, while the scale of the Laplace noise is the same for a given \(\varepsilon\), one-node-one-edge DP guarantees indistinguishability between any pair of graphs differing not only in one edge but also in one node.
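For illustration, consider the following toy case (ours): let \(\mathcal{G}\) contain two connected nodes and let \(\mathcal{G}^{\prime}\) add a third node attached to node 1 by a single edge. Then
\[A=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad\tilde{A}=\begin{pmatrix}0&1&0\\ 1&0&0\\ 0&0&0\end{pmatrix},\qquad A^{\prime}=\begin{pmatrix}0&1&1\\ 1&0&0\\ 1&0&0\end{pmatrix},\]
and the upper triangles of \(\tilde{A}\) and \(A^{\prime}\) differ in exactly one entry (position \((1,3)\)), so \(\|\tilde{A}-A^{\prime}\|_{1}=1\).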
### LapGraph evaluation
In this section, we evaluate the effectiveness of LapGraph (Zhu et al., 2017) in reducing the success of the NILS attack while ensuring our one-node-one-edge-level DP notion. We also investigate the utility of GCN models trained with LapGraph protection.
#### 6.4.1. Evaluation setup
We use the same training hyperparameters and normalization techniques as in the vanilla case, where DP is not applied. Initially, we protect the training graph with LapGraph. Following that, we apply LapGraph each time the graph changes due to node injection by the adversary. In line with the setup in (Zhu et al., 2017), we compute the \(F_{1}\) score for our NILS attack as well as the classification task's \(F_{1}\) score for the GCN. This allows us to measure LapGraph protection along with the GCN utility across various privacy budgets \(\varepsilon\). We report the results averaged over 5 runs with different random seeds for LapGraph.
#### 6.4.2. Evaluation results
Figure 3 presents the \(F_{1}\) score of the attack for various \(\varepsilon\) values. We observe that applying LapGraph reduces the effectiveness of NILS. The \(F_{1}\) score becomes almost zero when the privacy budget \(\varepsilon\) is small. However, for large \(\varepsilon\), LapGraph provides moderate protection, but the attack's \(F_{1}\) score remains significantly lower than in the non-private case where DP is not applied.
For comparison, in the LinkTeller (Zhu et al., 2017) attack, where LapGraph is applied only once to ensure edge-level DP, LapGraph offers limited protection when \(\varepsilon\) is large, allowing LinkTeller to achieve a success rate nearly as high as in the non-private case. Conversely, in our scenario, where LapGraph is also applied after the adversary's node injection, LapGraph provides stronger protection. The application of LapGraph during inference makes it more challenging for the adversary to distinguish between the target node's neighbors and non-neighbors, as the prediction scores of all target nodes change after each inference query. Consequently, the distances between the prediction scores \(P\) and \(P^{\prime}\), before and after the node injection, become noisier due to LapGraph's application following the node injection.
To provide insights about the privacy-utility tradeoff of LapGraph, we present in Figure 4 the utility of the GCNs for different values of the privacy budget. We observe that the utility increases as \(\varepsilon\) increases, as expected. Values of \(\varepsilon\geq 7\) yield utility close to that of the non-private vanilla case. Therefore, carefully choosing \(\varepsilon\) gives fairly good utility together with a certain level of protection against the NILS attack.
## 7. Conclusion
In this paper, we have presented NILS, a powerful new link-stealing attack that uses node injection against GNNs. Our results have demonstrated the superior performance of NILS compared to previous attacks, further emphasizing the vulnerabilities of GNNs regarding edge information leakage. We have also evaluated NILS against differentially private GNNs, ensuring a one-node-one-edge-level DP notion specifically designed to protect against our proposed attack. |
2308.15964 | Specx: a C++ task-based runtime system for heterogeneous distributed
architectures | Parallelization is needed everywhere, from laptops and mobile phones to
supercomputers. Among parallel programming models, task-based programming has
demonstrated a powerful potential and is widely used in high-performance
scientific computing. Not only does it allow for efficient parallelization
across distributed heterogeneous computing nodes, but it also allows for
elegant source code structuring by describing hardware-independent algorithms.
In this paper, we present Specx, a task-based runtime system written in modern
C++. Specx supports distributed heterogeneous computing by simultaneously
exploiting CPUs and GPUs (CUDA/HIP) and incorporating communication into the
task graph. We describe the specificities of Specx and demonstrate its
potential by running parallel applications. | Paul Cardosi, BΓ©renger Bramas | 2023-08-30T11:41:30Z | http://arxiv.org/abs/2308.15964v1 | # Specx: a C++ task-based runtime system for heterogeneous distributed architectures
###### Abstract
Parallelization is needed everywhere, from laptops and mobile phones to supercomputers. Among parallel programming models, task-based programming has demonstrated a powerful potential and is widely used in high-performance scientific computing. Not only does it allow for efficient parallelization across distributed heterogeneous computing nodes, but it also allows for elegant source code structuring by describing hardware-independent algorithms. In this paper, we present Specx, a task-based runtime system written in modern C++. Specx supports distributed heterogeneous computing by simultaneously exploiting CPUs and GPUs (CUDA/HIP) and incorporating communication into the task graph. We describe the specificities of Specx and demonstrate its potential by running parallel applications.
**This document is a preliminary version of the publication on Specx, which does not include the benchmarks. We invite the readers to regularly check if a new version is available online.**
## 1 Introduction
Modern computers are increasingly heterogeneous and structured hierarchically, both in terms of memory and parallelization. This is especially visible in the high-performance computing (HPC) environment, where clusters of computing nodes equipped with multi-core CPUs and several GPUs are becoming the norm. Programming applications for this type of architectures is challenging, and using them efficiently requires expertise.
The research community has proposed various runtime systems to help parallelize computational codes. These tools differ on many aspects, including the hardware they target, their ease of use, their performance, and their level of abstraction. Some runtime systems, such as StarPU [6], have demonstrated great flexibility, but they are designed for experts. Others, such as Taskflow [30], provide a modern C++ interface but do not support as many features as HPC applications need.
In our current study, we describe Specx (/'speks/) 1, a runtime system that has been designed with the objective of providing the features of advanced HPC runtime systems, while being easy to use and allowing developers to obtain modular and easy-to-maintain applications.
Footnote 1: [https://gitlab.inria.fr/bramas/specx](https://gitlab.inria.fr/bramas/specx)
The contribution of our work can be summarized as follows:
* We describe the internal organization of Specx, a task-based runtime system written in modern C++.
* We present the key features needed to develop advanced HPC applications, such as scheduler customization, heterogeneous tasks, and dynamic worker teams.
* We show that Specx allows developers to write compact C++ code thanks to advanced meta-programming.
* Finally, we demonstrate the performance of Specx on several test cases.
The manuscript is organized as follows. We provide the prerequisites in Section 2 and the related work in Section 3. Then, we describe Specx in Section 4, before the performance study in Section 5.
## 2 Background
In this section, we briefly describe task-based parallelization and the challenges it faces when computing on heterogeneous architectures.
### Task-based parallelization
Task-based parallelization is a programming model in which the application is decomposed into a set of tasks. It relies on the principle that an algorithm can be decomposed into interdependent operations, where the output of some tasks is the input of others. These tasks can be executed independently or in parallel, and they can be dynamically scheduled to different processing units while ensuring execution coherency. The result can be seen as a directed acyclic graph (DAG) of tasks, or simply a graph of tasks, where each node is a task and each edge is a dependency. An execution of such a graph will start from the nodes that have no predecessor and continue inside the graph, ensuring that when a task starts, all its predecessors have completed. The granularity of the tasks, that is, the
content in terms of computation, cannot be too fine-grained because the internal management of the graph implies an overhead that must remain negligible to ensure good performance [48]. Therefore, it is usually the developer's responsibility to decide what a task should represent. The granularity is then a balance between the degree of parallelism and the runtime system overhead. For that reason, several research efforts attempt to delegate the runtime system partially or totally to the hardware, with the objective of relieving the worker threads, as in [17].
The dependencies between tasks can be described in various ways. One way is to have the user explicitly connect tasks together. For example, the user might call a function \(connect(t_{i},t_{j})\) to connect tasks \(t_{i}\) and \(t_{j}\). This approach requires the user to manage the coherency and to keep track of the dependencies between tasks, which can be error-prone and complicated between different stages of an application. TaskFlow uses this approach.
Another way is to inform the runtime system about the input/output of the tasks and to let it take care of the coherency. This approach is more convenient for the users, but it can be implemented in many different ways. One approach is to use a mechanism like the C++ future to access the result of asynchronous operations. This approach allows the runtime system to track the dependencies between tasks and ensure that they are satisfied without having a view on the input/output. This approach is used by the ORWL runtime system [18].
An alternative is to use the sequential task-flow (STF) model [5], also called task-based data programming. In this approach, the user describes the tasks and specifies the data input/output of each task. In general, a single thread creates the tasks and posts them to the runtime system, while declaring how each of them accesses the data. The runtime system is then able to generate the graph and guarantee that the parallel execution will have the exact same result as a sequential one. This results in very compact code, with few modifications required to adapt an existing application, as the complexity is moved into the runtime system. The sequential order is used to set the dependencies caused by read-after-write or write-after-read accesses. This approach has a number of advantages, including:
* A sequential program can be transformed into a parallel equivalent very easily.
* The users do not have to manage the dependencies.
* The accesses can be more precise than read/write and specific properties can be set to the accesses, such as commutativity.
* The tasks can be mapped to a graph, allowing the runtime system to analyze the graph to predict the workload or memory transfers and takes clever decisions.
In our work, we use the STF model.
### Computing on heterogeneous architectures
Heterogeneous computing nodes consist of at least two distinct types of processing units. The most common configuration includes a dual-socket CPU paired with one or several GPUs, each having separate memory nodes (Figure 1). However, similar principles apply to other types of processing units as well.
Traditionally, these nodes operate in a pattern where a single CPU thread manages data movement to the device's memory, initiates the computational kernel, and waits for its completion before transferring the data back if required. To assist programmers, vendors have introduced unified memory, a mechanism that creates the illusion of a shared memory space between CPUs and GPUs. However, due to potential unpredictability and lack of control, its use remains rare in HPC. Meanwhile, this usage pattern leaves other CPU cores idle, which is untenable in HPC.
To enhance the utilization of processing units, this pattern can be expanded to incorporate multiple CPU threads sharing a single GPU, enabled by mechanisms like streams or queues. This arrangement allows full exploitation of the GPU in terms of computational capability and memory transfers. However, managing the device's memory becomes increasingly complex, as it becomes crucial to avoid redundant object copying and ensure memory capacity isn't exceeded.
Furthermore, this method introduces the key challenge of balancing heterogeneous computing. Specifically, determining the optimal number of CPU threads per GPU, figuring out how to best use idle CPU cores, and deciding which parts of applications are more suited for CPUs than GPUs. In essence, how can we optimally distribute work among all processing units?
Task-based runtime systems aim to resolve these issues. They predominantly manage data transfers between CPUs and GPUs, allocate specific CPU cores to control GPUs, separate these cores from others to allow for concurrent executions, and schedule tasks across various types of processing units while considering the workload and the most efficient processing unit type.

Figure 1: Simplified view of a heterogeneous computing node with 2 CPUs and 4 GPUs. Multiple nodes can be interconnected via a network.
## 3 Related work
### Task-based parallelization
The most common task-based programming pattern can be described as a tasks-and-wait scheme, where independent tasks are inserted into a runtime system and a synchronization point allows waiting for their completion. The task model from OpenMP 3 [40][8] and the task-based programming language Cilk [10] (later extended in Cilk++ [38] and Cilk Plus [31]) follow this idea. This remains a fork-join model because successive spawn phases of independent tasks (fork) must be explicitly synchronized (join) to ensure a correct execution. Therefore, it limits scalability because of the waiting time and the imbalance between tasks. Of course, developers can increase the degree of parallelism by using multiple sources of tasks that they know are independent. However, such an implementation amounts to manual management of the dependencies, which a modern task-based runtime system is intended to do.
This is why there now exist numerous different task-based runtime systems that support dependency management. The most popular ones are implementations of the OpenMP version 4 [41, 15] standard, which defines the additional pragma keyword _depend_ to inform the runtime system about the type of data accesses performed by the tasks. However, using pragmas is, in general, tedious when a task has hundreds of dependencies or when the number of dependencies is known only at runtime. This can lead to ugly and error-prone code. In addition, as OpenMP is a standard, it is upgraded slowly to ensure backward compatibility. Moreover, the standard is weak in the sense that it does not impose any constraints on the implementation and complexity of the underlying algorithms. This can cause performance surprises for users when they switch between different OpenMP runtime systems. In addition, OpenMP does not support distributed memory parallelization. Nonetheless, its portability, stability, and maturity make it a safe long-term choice.
StarPU [7] is a runtime system that was first designed to manage heterogeneous architectures. It is a C library, which means that users are constrained to use low-level programming and function pointers. However, it is extremely flexible and used by many HPC applications. It also supports distributed memory parallelization with three different approaches [3]. Each of these approaches uses a different description of the task graph, and the degree of information that the StarPU instances have on the complete graph is different (there is one StarPU instance per computing node). The first approach is the most trivial, and it consists of declaring the complete graph on all computing nodes. This means that there is one thread that describes the graph in each StarPU instance. Each instance can analyze the graph without any communication, since it holds the complete graph. This can be used to create low-cost scheduling
strategies. For example, consider that all instances iterate over the task graph and have to decide on which computing node each task is executed. Thanks to the view of the complete graph, they can assign a task to a computing node while minimizing the communication, and all instances take the same decision without communicating. Moreover, the instances know where the data dependencies are located because they can track them while iterating over the graph. This can be used to post send/receive operations accordingly and manage the communication automatically. However, there is a clear disadvantage to this approach: the method cannot scale because its cost and overhead increase with the size of the task graph, independently of the number of computing nodes that will be used. In the second approach, each instance declares only a partial task graph that covers only the tasks it will compute. However, StarPU needs additional information to track the data movement and to connect the different partial task graphs that manage the communication. The first option consists in requesting explicit communication calls (similar to MPI) that connect the tasks between the instances. In the second option, each instance inserts the tasks that will be computed by others and that are at the frontier of its partial graph. These two approaches remove abstraction because the developers manually split the task graph and have to manage the boundaries of the partial graphs. Moreover, each instance has only a partial view, making analysis and scheduling difficult. Specx uses a similar approach.
PaRSEC [11; 20] is a runtime system based on the parametrized task graph (PTG) model [19]. It has been demonstrated to be effective in various scientific applications. The PTG is a domain-specific language (DSL) that captures a static, algebraic description of a task graph that can be expanded efficiently at runtime. This allows PaRSEC to manage large graphs without fully instantiating them. This approach works well on affine loops thanks to polyhedral analysis: the analysis of the data-flow of a task instance is constant in time, and the representation of the graph is constant in space, which makes the PTG a very efficient way to represent task graphs. However, the PTG is not as expressive as other task graph models. While it is theoretically possible to write PTGs for highly dynamic applications, this would imply an unbounded amount of time building and traversing dynamic meta-data in memory; in practice, the PTG is impractical for applications with irregular or sparse algorithmic or data access patterns, where the logic is difficult to express with linear equations. Despite this limitation, the PTG has been shown to be effective in a wide variety of scientific applications that can be expressed in terms of affine loops. Internally, the representation allows collapsing a task graph in two dimensions, i.e., time and parallelism [45], which permits several optimizations. In distributed memory, the different PaRSEC instances all hold an algebraic representation of the complete graph. PaRSEC uses advanced mechanisms to schedule the tasks efficiently using heuristics and potential input from the users.
Charm++ [34, 35, 14] is an object-oriented parallel programming framework that relies on a partitioned global address space (PGAS) and supports the concept of graphs of actors. It includes parallelism by design with a migratable-objects programming model, and it supports task-based execution. The actors (called _chares_) interact with each other through invocations of asynchronous methods. However, with Charm++, there is no notion of tasks in the sense we use here. Instead, tasks are objects that communicate by exchanging messages. Charm++ schedules the chares on processors and provides object migration and a load balancing mechanism. PGAS allows accessing data independently of their actual location, which is the inverse of what the task-based method intends to offer. A task is a piece of work that should not include any logic or communication. This approach forbids many optimizations and mechanisms that task graphs support [49].
HPX [33] is an open-source implementation of the ParalleX execution model. Its implementation aims to respect the C++ standard, which is an asset for portability and compliance with existing C++ source code. In HPX, tasks request access to data by calling an accessor function (get/wait). The threads provide the parallelism description, which is tied to the order and type of data accesses.
OmpSs [21, 42] uses the insert-task programming model with pragmas similar to OpenMP through the Nanos++ runtime to manage tasks. When running in distributed memory, it follows a master-slave model, which may suffer from scalability issues as the number of available resources or the problem size increase.
XKaapi [26] is a runtime system that can be used with standard C++ or with specific annotations, but it requires a specific compiler. Legion [9] is a data-centric programming language that allows for parallelization with a task-based approach. SuperGlue [52] is a lightweight C++ task-based runtime system. It manages the dependencies between tasks using a data version pattern. X10 [16] is a programming model and a language that relies on PGAS, too, and hence has properties similar to Charm++. Intel Threading Building Blocks [32] (ITBB) is an industrial runtime system provided as a C++ library. It is designed for multicore parallelization or in conjunction with oneAPI, but it follows a fork-join parallelization pattern.
Regarding distributed parallelization, most runtime systems can be used with MPI [44]. The developers implement code that alternates between calls to the runtime system and the posting of MPI communications. When supported by the runtime system, the data movements between CPUs/GPUs and in-node load balancing are delegated to the runtime system. More advanced methods have been elaborated that entirely delegate the communications to the runtime system [55, 54, 36, 29, 24], like PaRSEC, StarPU [2], Legion, Charm++, TaskTorrent [13], and HPX.
Most of these tools support a core aspect of a task-based runtime system, including the creation of a task graph (although the implementation may vary) where tasks can read or write data. However, scheduling is an important factor in the performance [4], and few of these runtime systems propose a way to create
a scheduler easily without having to modify the code. Moreover, specific features offer mechanisms to increase the degree of parallelism. For instance, some runtime systems permit the specification of whether a data access is commutative, implying that tasks write the data without any particular order. Such advanced features can significantly impact performance [1]. Runtime systems also differ in whether the task graphs are statically or dynamically generated, how the generation is performed, which in-memory representation is used, which parallelization levels are supported, and many other aspects [51, 50, 27, 28].
### Speculative execution
Speculative execution is an approach that can increase the degree of parallelism. It has been widely used in hardware and is an ongoing research topic in software [22, 37, 39]. The key idea of speculative execution is to utilize idle components to execute operations in advance, which includes the risk of performing actions that may later be invalidated. The prominent approach is to parallelize an application and to detect at runtime whether race conditions or out-of-order accesses that violate dependencies occur. The detection of invalid speculative execution can be expensive, and as a result, some research aims to design hardware modules for assistance [47, 43]. However, these low-level strategies are unsuitable for massively parallelized applications and impose the need for either detecting the code parts suitable for speculation or relying on explicit assistance from developers.
In a previous study, we have shown that characterizing accesses as 'maybe-write' instead of 'write' allows us to increase the degree of parallelism thanks to speculative execution in the task-based paradigm [12]. This novel kind of uncertain data access (UDA) can be used when it is uncertain at task insertion time whether the tasks will modify some data or not. Similar to the 'commutative write', developers simply provide additional information to the runtime system, enabling it to set up a strategy by modifying the graph of tasks on the fly. This also makes it possible to delay some decisions from the implementation time to the execution time, where valuable information about the ongoing execution is available. We have implemented this mechanism in our task-based runtime system Specx (originally called SPETABARU) and conducted an evaluation on Monte Carlo simulations, which demonstrated significant speedups. We are currently developing a new model [46].
In other fields, speculation has also been used in a tasking framework for adaptive speculative parallel mesh generation [53] and for resource allocation in parallel trajectory splicing [25].
## 4 Specx's features, design and implementation
### Task graph description
In Specx, we dissociated the task-graph from the so-called compute engine that contains the workers. Therefore, the user has to instantiate a task-graph and select between two types, one with speculative execution capability and one without, which removes the overhead of speculative execution management when no UDAs are used. We provide an example in Code 1.
```
// Create a task graph
SpTaskGraph<SpSpeculativeModel::SP_NO_SPEC> tg;
// Legacy version: create a runtime (a compute engine + a task graph)
SpRuntime runtime(SpUtils::DefaultNumThread());
```
Code 1: Specx example - creation of a task graph.
**Task Insertion.** Specx follows the STF model: a single thread inserts the task in the runtime system (task-graph object) and tells which variables will be written or read. Additionally, the user can pass a priority that the scheduler is free to use when making decisions. The core part of the task consists of a callable object with the operator _()_, which allows for the use of C++ lambda functions. The data access modes that Specx currently supports are:
* SpRead: the given dependency will only be read by the task. As such, the parameter given to the task function must be _const_.
* SpWrite: the given dependency will be read and/or written by the task.
* SpCommutativeWrite: the given dependency will be read and/or written by the task, but the order of execution of all the _SpCommutativeWrite_ tasks inserted jointly is not important.
* SpMaybeWrite: the given dependency might be read and/or written by the task. Possible speculative execution patterns can be applied.
* SpAtomicWrite: the given dependency will be read and/or written by the task, but the user will protect the access with their own mechanisms (using mutual exclusion, for example). The runtime system manages this access very similarly to a read access (multiple _SpAtomicWrite_ accesses can be done concurrently, but the runtime system has to take care of the read-after-write and write-after-read coherency).
When a dependency \(X\) is passed, the runtime dereferences \(X\) to get its address, and this is what will be used as the dependency. An important point when using task-based programming is that it is the user's responsibility to ensure that the objects will not be destroyed before all tasks that use them are completed. We provide an example in Code 2.
```
const int initVal = 1;
int writeVal = 0;
// Create a task with a lambda function
tg.task(SpRead(initVal), SpWrite(writeVal),
    [](const int& initValParam, int& writeValParam){
        writeValParam += initValParam;
});
```
Code 2: Specx example - creation of a task for CPU.
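To make the implied ordering explicit, here is an additional two-task illustration (ours, reusing the variables from Code 2): inserting the tasks sequentially is enough for the runtime system to build the read-after-write dependency.

```
// Two tasks inserted sequentially: the second reads writeVal, which the
// first one writes, so the runtime system creates a read-after-write
// dependency and guarantees the sequential-equivalent result.
tg.task(SpWrite(writeVal), [](int& w){ w = 42; });
tg.task(SpRead(writeVal), [](const int& w){ /* sees 42 */ });
```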
**Dependencies on a Subset of Objects.** A critical drawback of OpenMP is the rigidity of the dependency declaration. Indeed, the number of dependencies of a task has to be set at compile time. For example, if we use a vector of objects and want to declare a dependency on all or some elements and not on the vector itself, we cannot do it in OpenMP if the size of the vector is not known when we write the code, because we have to write one pragma _depend_ statement per dependency.
To solve this issue, in Specx, we can declare the dependencies on a set of objects using the mechanisms _SpReadArray_, _SpWriteArray_, _SpMaybeWriteArray_, _SpCommutativeWriteArray_, and _SpAtomicWriteArray_, each taking arguments _(XTy x, ViewTy view)_, where \(x\) should be a pointer to a contiguous buffer (or any container that supports the _[]_ operator), and _view_ should be an object representing the collection of specific indices of the container elements that are affected by the dependency. _view_ should be iterable (in the sense of "STL iterable").
With this mechanism, Specx can iterate over the elements and apply the dependencies on the selected ones. We provide an example in Code 3.
```
std::vector<int> vec = ...;
// Access all the elements with an SpArrayView
tg.task(SpPriority(1), SpWriteArray(vec.data(), SpArrayView(vec.size())),
    [](SpArrayAccessor<int>& vecView){
        // ...
});
```
Code 3: Specx example - use array of dependencies.
**Task Viewer.** Inserting a task in the task-graph returns a task view object, which allows accessing some attributes of the real task object. For instance, it allows setting the name of the task, waiting for the task completion, or getting the value produced by the task (in case the task returns a value). Unfortunately, there is a pitfall in the current design: the task can be accessed through the viewer after it has already been computed. For instance, we cannot use the tasks' names in the scheduler because they might be set after the tasks are computed. We provide an example in Code 4.
```
auto taskViewer = runtime.task(SpRead(initVal), SpWrite(writeVal),
    [](const int& initValParam, int& writeValParam) -> bool {
        writeValParam += initValParam;
        return true;
});
taskViewer.setName("The name of the task");
taskViewer.wait();      // Wait for this single task
taskViewer.getValue();  // Get the value (when the task is over)
```
Code 4: Specx example - task viewer.
### Teams of Workers and Compute Engines
Within Specx, a team of workers constitutes a collection of workers that can be assigned to compute engines. In the current implementation, each worker is associated with a CPU thread that continuously retrieves tasks from the scheduler and handles them. If the worker is CPU-based, the task is directly executed by the CPU thread. Conversely, in the case of a GPU worker, the CPU thread manages the data movement between memory nodes and calls the device kernel.
A compute engine necessitates a team of workers and may be responsible for several task-graphs. Currently, it is not possible to change the compute engine assigned to a task-graph, but it is possible to shift workers among different compute engines. This feature provides the ability to dynamically adjust the capabilities of the compute engine during execution and design advanced strategies to adapt to the workload of the graphs.
Given that dependencies among task-graphs are not shared, the insertion of tasks and their dependencies into a task-graph does not affect others. This allows for the creation of recursive parallelism, in which a task-graph is created within a task. Such a task-graph could potentially be attached to the same compute engine as the parent task. This approach could help mitigate the overhead associated with the creation of a large set of tasks by organizing them into sub-task-graphs. We provide an example in Code 5.
```
SpTaskGraph<SpSpeculativeModel::SP_NO_SPEC> tg;
// Create the compute engine
SpComputeEngine ce(SpWorkerTeamBuilder::TeamOfCpuWorkers(NbThreads));
// OR
SpComputeEngine ce(SpWorkerTeamBuilder::TeamOfCpuCudaWorkers());
// Tell which compute engine will manage the graph
tg.computeOn(ce);
```
Code 5: Specx example - creation of a compute engine.
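The dynamic reassignment of workers mentioned above could be used as sketched below; the _sendWorkersTo_ method name is our assumption for illustration and may differ from the actual Specx interface.

```
// Hypothetical sketch: dynamically rebalancing workers between engines.
// The sendWorkersTo method name is an assumption, not a confirmed API.
SpTaskGraph<SpSpeculativeModel::SP_NO_SPEC> tg1, tg2;
SpComputeEngine ce1(SpWorkerTeamBuilder::TeamOfCpuWorkers(4));
SpComputeEngine ce2(SpWorkerTeamBuilder::TeamOfCpuWorkers(4));
tg1.computeOn(ce1);
tg2.computeOn(ce2);
// If tg1 turns out to dominate the workload, shift capacity to ce1:
ce2.sendWorkersTo(ce1);
```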
### Tasks for Heterogeneous Hardware
Specx relies on the same principles as StarPU in supporting heterogeneous hardware, i.e., we have distinct workers for each type of processing unit, and each
task can operate on CPUs, GPUs, or both. Specifically, at task insertion, we require a unique callable object for each processing unit type capable of executing the task. During execution, the scheduler determines where the task will be executed. This represents a critical challenge in task-based computing on heterogeneous systems.
Regarding the interface, the primary challenge is the movement of data between memory nodes. More specifically, we strive to exploit C++ and use an abstraction mechanism to facilitate object movement. Consequently, we have determined that objects passed to tasks should comply with one of the following rules: 1) the object is trivially copyable 2; 2) the object is a std::vector of trivially copyable objects; 3) the object's class implements specific methods that the runtime system will call.
Footnote 2: [https://en.cppreference.com/w/cpp/types/is_trivially_copyable](https://en.cppreference.com/w/cpp/types/is_trivially_copyable)
In the last case, the object's class must have as a class attribute a data type called _DataDescriptor_ and three methods:
* _memmovNeededSize_: Invoking this method on the object should yield the required size of the memory to be allocated on the device for copying the object.
* _memmovHostToDevice_: This method is called to transfer the object to the device. The method receives a mover class (with a copy-to-device method) and the address of a memory block of the size determined by _memmovNeededSize_ as parameters. The method may return a _DataDescriptor_ object, which will later be passed to _memmovDeviceToHost_ and to the task utilizing the object.
* _memmovDeviceToHost_: This method is invoked to move the data back from the GPU to the object. The method receives a mover class (with a copy-from-device method), the address of a GPU memory block, and an optional _DataDescriptor_ object as parameters.
From a programming perspective, we require the users to determine how the data should be moved as they have the knowledge to do so. For example, consider an object on a CPU being a binary tree where each node is a separate memory block. It would be inefficient to allocate and copy each node. Consequently, we ask the users to estimate the needed memory block size, and we perform a single allocation. Then it is the users' responsibility to mirror the tree on the GPU using the block we allocated, and to implement the task such that it can use this mirror version. This design may change in the future as we continue to apply Specx to existing applications.
Currently, we employ the Least Recently Used (LRU) policy to determine which memory blocks should be evicted from the devices when they are full. Concretely, this implies that when a task is about to be computed on the device, the worker's thread will iterate over the dependencies and copy them onto the GPU's memory using a stream/queue. If an object already has an up-to-date version on the device, the copy will be skipped, and if there is not enough free
memory, older blocks may be evicted. As a result, at the end of a simulation, the up-to-date versions of the objects might be on the GPUs, necessitating their transfer back to the CPUs if required. At present, this can be accomplished by inserting empty CPU tasks that use these objects.
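As a generic illustration of this bookkeeping (our own sketch, not Specx's actual data structures), an LRU cache of device blocks can be maintained with a usage list plus an index, where blocks locked by running tasks are skipped during eviction:

```
#include <cstddef>
#include <list>
#include <unordered_map>

// Generic LRU bookkeeping sketch for device memory blocks: each data
// handle (keyed by its host address) maps to an entry in a usage list,
// with the most recently used block at the front.
class LruDeviceCache {
    struct Block { const void* hostKey; std::size_t size; bool inUse; };
    std::list<Block> usage_;  // front = most recently used
    std::unordered_map<const void*, std::list<Block>::iterator> index_;
public:
    // Mark a block as just used (move it to the front).
    void touch(const void* hostKey) {
        auto it = index_.find(hostKey);
        if (it != index_.end())
            usage_.splice(usage_.begin(), usage_, it->second);
    }
    // Register a newly copied block as most recently used.
    void insert(const void* hostKey, std::size_t size) {
        usage_.push_front(Block{hostKey, size, true});
        index_[hostKey] = usage_.begin();
    }
    // Return the least-recently-used block that can be evicted, or nullptr.
    const void* evictionCandidate() const {
        for (auto it = usage_.rbegin(); it != usage_.rend(); ++it)
            if (!it->inUse) return it->hostKey;
        return nullptr;
    }
};
```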
By default, worker teams align with hardware configurations, i.e., they will contain GPU workers for each available GPU. Therefore, if users are not careful and only need one type of processing unit for their tasks, the hardware will be underutilized as some workers will remain idle. We provide an example in Code 6.
```
class Matrix {
    int nbRows;
    int nbCols;
    std::vector<double> values;
public:
    // What to allocate on the device
    std::size_t memmovNeededSize() const {
        return sizeof(double) * nbRows * nbCols;
    }

    // Copy to the device (size == memmovNeededSize())
    template <class DeviceMemmov>
    auto memmovHostToDevice(DeviceMemmov& mover, void* devicePtr,
                            std::size_t size) {
        double* doubleDevicePtr = reinterpret_cast<double*>(devicePtr);
        mover.copyHostToDevice(doubleDevicePtr, values.data(),
                               nbRows * nbCols * sizeof(double));
        // DataDescr is the user-defined descriptor type (definition omitted)
        return DataDescr(nbRows, nbCols);
    }

    // Copy back to the CPU
    template <class DeviceMemmov>
    void memmovDeviceToHost(DeviceMemmov& mover, void* devicePtr,
                            std::size_t size, const DataDescr& /*inDataDescr*/) {
        double* doubleDevicePtr = reinterpret_cast<double*>(devicePtr);
        mover.copyDeviceToHost(values.data(), doubleDevicePtr,
                               nbRows * nbCols * sizeof(double));
    }
};
// ....
Matrix matrix;

tg.task(SpPriority(1), SpWrite(matrix),
    SpCpu([](Matrix& matrix){
        // ...
    })
#ifdef SPECX_COMPILE_WITH_CUDA
    , SpCuda([](SpDeviceDataView<Matrix> matrix){
        // ...
    })
#endif
).setTaskName("My operation"); // Set the name of the task
```
Code 6: Specx example - creation of a task for CPU/GPU.
### Mixing Communication and Tasks
In the context of distributed memory parallelization, Specx provides the capability to mix send/receive operations (MPI) and computational tasks. Putting MPI communications directly inside tasks would fail due to the potential concurrent accesses to the communication library (which are not universally supported by MPI libraries) and the risk of having workers waiting inside tasks for communication completion, leading to deadlocks if tasks sending data on one node do not coincide with tasks receiving data on another. Therefore, to avoid having the workers deal with communication, our solution is to use a dedicated background thread that manages all the MPI calls.
In this approach, a _send_ operation is transformed into a communication task that performs a read access on the data, and whose execution is carried out by the background thread. Similarly, a _receive_ operation becomes a communication task that performs a write access, also managed by the background thread. Once a communication task is ready, the background thread executes the corresponding non-blocking MPI calls, receiving an MPI request in return. This request is stored in a list, which the background thread aims to complete by calling the MPI _test-any_ function. When a request is fulfilled, the background thread releases the dependencies of the associated communication task, thereby ensuring the progression of the task-graph execution. In this way, the progression happens as early as possible.
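The progress loop of such a background thread could be organized as follows; this is our own illustrative sketch of the protocol just described, and the CommTask type and the surrounding structure are stand-ins, not the actual Specx internals.

```
#include <mpi.h>
#include <vector>

// Hypothetical sketch of the background communication thread's loop.
struct CommTask { /* dependency info of a send/recv task (illustrative) */ };

void progressLoop(bool& running,
                  std::vector<MPI_Request>& pending,
                  std::vector<CommTask*>& owners) {
    while (running) {
        // 1) Ready communication tasks would be started here with
        //    MPI_Isend / MPI_Irecv, appending to `pending` and `owners`.
        // 2) Try to complete any pending request.
        if (!pending.empty()) {
            int index = MPI_UNDEFINED, flag = 0;
            MPI_Testany(static_cast<int>(pending.size()), pending.data(),
                        &index, &flag, MPI_STATUS_IGNORE);
            if (flag && index != MPI_UNDEFINED) {
                // 3) Release the dependencies of the finished task so the
                //    task-graph execution can progress (call into runtime).
                owners.erase(owners.begin() + index);
                pending.erase(pending.begin() + index);
            }
        }
    }
}
```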
In order to send/receive C++ objects using MPI in a single communication (although we perform two - one for the size and one for the data), we need a way to store the object into a single array. To achieve this, the object must comply with one of the following rules:
* It should be trivially copyable;
* It should provide access to a pointer of the array to be sent (or received). For example, if a class has virtual methods, it will not be trivially copyable. However, if the class's only attribute is a vector of integers, sending the object is equivalent to sending the vector's data.
* It should support our serialization/deserialization methods. Here, we allow the object to serialize itself using our utility serializer class, yielding a single array suitable for communication. Upon receipt, the buffer can be deserialized to recreate the object. This method offers the most flexibility, but it is also the least efficient.
Specx also supports MPI broadcast as part of the MPI global communication functions. Currently, users must ensure that all Specx instances perform the same broadcasts in the same order.
As a side note, MPI communications are incompatible with the speculative execution capabilities of Specx due to the potential creation of extra tasks and instantiation of diverse execution paths. We provide an example in Code 7.
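As a stand-in illustration of mixing communication and computation, consider the following hypothetical sketch; the _mpiSend_/_mpiRecv_ method names, their signatures, _waitAllTasks_, and the way the rank is obtained are assumptions for illustration and may differ from the actual Specx API.

```
// Hypothetical sketch only: mpiSend/mpiRecv names and signatures are
// assumptions, not a confirmed API.
SpTaskGraph<SpSpeculativeModel::SP_NO_SPEC> tg;
tg.computeOn(ce);
std::vector<double> buffer(1024);
const int myRank = /* this process's MPI rank */ 0;

if (myRank == 0) {
    // Produce the data, then send it: internally a communication task with
    // a read access, executed by the background communication thread.
    tg.task(SpWrite(buffer), [](std::vector<double>& b){ /* fill b */ });
    tg.mpiSend(buffer, /*destRank=*/1, /*tag=*/0);
} else if (myRank == 1) {
    // Receive: a communication task with a write access on the buffer.
    tg.mpiRecv(buffer, /*srcRank=*/0, /*tag=*/0);
    tg.task(SpRead(buffer), [](const std::vector<double>& b){ /* use b */ });
}
tg.waitAllTasks();
```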
### Scheduling
We designed the scheduler module following the implementation approach used in StarPU, with a scheduler providing two key functions: _push_ and _pop_. When a task becomes ready (i.e., its predecessors are finished), it is pushed into the scheduler. Conversely, when a worker becomes available, it calls the pop function on the scheduler, which may return no task if none is compatible with its processing unit type, or if the scheduler makes such a decision. As such, the scheduler plays a crucial role, as it manages task distribution and the order of task execution.
At present, Specx utilizes a simple First-In-First-Out (FIFO) scheduler, but we plan to introduce more sophisticated schedulers [23] in the near future. In any case, it is straightforward to provide a new scheduler, and users have the flexibility to implement a custom scheduler specifically designed for their application. This can be accomplished by creating a new class that inherits from our abstract scheduler interface.
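As a sketch of what such a class could look like, consider the following priority-based scheduler; the task type and its _getPriority()_ accessor are our assumptions for illustration, not the exact Specx abstract scheduler interface.

```
#include <mutex>
#include <queue>
#include <vector>

// Hypothetical custom scheduler sketch. TaskTy stands in for Specx's task
// type and is assumed to expose an int getPriority() const accessor.
template <class TaskTy>
class PriorityScheduler /* would inherit from the abstract scheduler */ {
    struct ByPriority {
        bool operator()(TaskTy* a, TaskTy* b) const {
            return a->getPriority() < b->getPriority();
        }
    };
    std::priority_queue<TaskTy*, std::vector<TaskTy*>, ByPriority> ready;
    std::mutex mutex;
public:
    // Called by the runtime when a task becomes ready.
    void push(TaskTy* task) {
        std::lock_guard<std::mutex> lock(mutex);
        ready.push(task);
    }
    // Called by an idle worker; returns nullptr if no suitable task exists.
    TaskTy* pop() {
        std::lock_guard<std::mutex> lock(mutex);
        if (ready.empty()) return nullptr;
        TaskTy* task = ready.top();
        ready.pop();
        return task;
    }
};
```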
### Speculative execution
Specx supports task-based speculative execution, which is an ongoing research problem. We currently support two speculative models applicable when certain data accesses are flagged as _maybe-write_. In these scenarios, the runtime system may duplicate some data objects and tasks to enable potential speculative work, subsequently performing a rollback if the uncertain tasks actually modified the data.
### Internal implementation
In this section, we delve into the finer details of Specx's implementation. When a task is inserted, the callable's prototype should match the dependency types. Hence, read parameters are passed as _const_. For CPU callables, parameters should be references to the object types passed as arguments. Using a value instead of a reference will simply result in a copy, which is typically not the intended outcome. Indeed, if values are required but are not significant as dependencies, it is more appropriate to pass them as captures in the lambda.
When an object is passed to a task, the runtime system dereferences it to obtain its address. This address is utilized as a dependency value and also as a key in an unordered hashmap that maps pointers to data handles. A data handle is a class that contains all the information the runtime system requires concerning a dependency. For instance, the data handle includes the list of dependencies applied to the associated object, which allows progressing in the list and releasing dependencies accordingly. In terms of implementation, we do not construct a graph; instead, we have one data handle per address that has been used as a dependency, and the data handles' dependency lists contain pointers to the tasks that use the related objects. Consequently, when a task is finished, we increment a counter on the dependency list and access the next tasks. Doing so, we examine whether the task now pointed to by the updated counter is ready, and we push the task into the scheduler if that is the case. The data handle also possesses a mutual exclusion object that enables locking it for modification. When several data handles need to be locked, we sort them based on their address to prevent deadlocks.
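The following condensed sketch illustrates this bookkeeping; all type names here are ours, chosen for illustration, and the real Specx classes are richer.

```
#include <cstddef>
#include <mutex>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative sketch of the dependency bookkeeping (names are ours).
enum class AccessMode { Read, Write, CommutativeWrite, AtomicWrite, MaybeWrite };

struct Task;  // a task object managed by the runtime

struct DataHandle {
    std::mutex mutex;  // handles are locked in address order to avoid deadlocks
    // Uses of the object, kept in task-insertion order.
    std::vector<std::pair<Task*, AccessMode>> uses;
    std::size_t nextUse = 0;  // progress counter along `uses`
};

// One handle per address that appeared as a dependency.
std::unordered_map<const void*, DataHandle> handles;
```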
The commutative (_SpCommutativeWrite_) dependency is managed differently because the order of the related tasks is not static. Said differently, when the next tasks use the commutative access, we do not know which one should be executed. Consequently, we cannot merely point to the first task in the dependency list and stop our inspection if it is not ready, as the following tasks that also have commutative access to the dependency might be ready. This necessitates checking all the tasks performing a commutative access at that point. However, several threads may complete tasks simultaneously, which requires a mutual exclusion protecting all the commutative dependencies. In other words, using commutative dependencies implies a global mutual exclusion, as sketched below.
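A minimal sketch of this release logic, with illustrative names (`CommutativeTask`, `pickCommutative`) and the readiness check reduced to a flag:

```cpp
#include <mutex>
#include <vector>

struct CommutativeTask {
    bool otherDepsReady = false; // stand-in for the real readiness check
};

std::mutex commutativeMutex; // global: serializes all commutative releases

// Because the order of commutative tasks is not fixed, every candidate must
// be examined under the global lock; any ready candidate may run next.
CommutativeTask* pickCommutative(std::vector<CommutativeTask*>& candidates) {
    std::lock_guard<std::mutex> lock(commutativeMutex);
    for (CommutativeTask* t : candidates) {
        if (t->otherDepsReady) return t;
    }
    return nullptr; // none ready; re-examined at the next task completion
}
```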
We use C++ meta-programming extensively, for example to test whether an object is trivially copyable or provides serialization methods. We also utilize the inheritance/interface pattern and the template method design pattern. For instance, this enables us to have a task class that carries the callable type, and thus the types of all parameters and arguments; we can then run meta-programming tests on the arguments to ensure compliance with specific rules. A small detection sketch is given below.
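A minimal sketch of such a compile-time test (standard C++17); the member name `serialize` and the trait name are assumptions chosen for illustration.

```cpp
#include <type_traits>
#include <utility>

// Detect whether T exposes a callable serialize() member (hypothetical name).
template <typename T, typename = void>
struct has_serialize : std::false_type {};

template <typename T>
struct has_serialize<T, std::void_t<decltype(std::declval<T>().serialize())>>
    : std::true_type {};

// A type can cross process boundaries if it is trivially copyable (raw bytes)
// or knows how to serialize itself (hypothetical trait name).
template <typename T>
constexpr bool is_transferable_v =
    std::is_trivially_copyable_v<T> || has_serialize<T>::value;

static_assert(is_transferable_v<int>, "int can be sent as raw bytes");
```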
Finally, as we use a hashmap to store the information on data dependency objects, with their addresses as keys, it is currently undefined behavior to have objects of different types stored at the same address. This primarily occurs when an object of type \(x\) is freed and an object of type \(y\) is subsequently allocated in the same memory block. The problem arises because the data handle class uses a data copier through an interface, but the copier is actually templatized over the dependency object type.
### Visualization
Profiling and optimizing task-based applications is crucial to achieving high performance. The main pieces of information are:
* Degree of parallelism: This represents how many tasks can be executed in parallel. The task graph can be used to evaluate if the degree of parallelism is sufficient to utilize all the processing units fully. Furthermore, during the execution, the number of ready tasks over time can also be analyzed.
* Task granularity: The task granularity can impact the degree of parallelism. An examination of an execution trace can help determine if the granularity is too small. If so, the overhead of task management and/or data displacement may be too large compared to the task durations, thereby negatively affecting performance. Conversely, if the tasks are too large, the degree of parallelism can be too small, and the end of the execution can be penalized with too few large tasks to compute.
* Scheduling choices for task distribution: If a slow worker is mistakenly selected (a worker that can compute a task but is not efficient at doing so), performance can degrade. It could be faster to wait and assign the task to a quicker worker, but this decision depends on the scheduler.
* Scheduling choices for task order: The degree of parallelism (and sometimes the availability of suitable tasks for all workers) can be influenced by the order of task execution, that is, the choice among the ready tasks.
A good situation, though not necessarily an optimal one, is when no worker has been idle and the tasks have been assigned to the processing units that can execute them most efficiently.
To facilitate profiling, Specx provides features to export the task graph and the execution trace. The task graph is generated in the _dot_ format.3 For the execution trace, an SVG file4 is exported that can be opened with any modern web browser. The execution trace also indicates the number of ready tasks available during the execution. In the next release, we plan to export metrics that will provide concise but meaningful numbers on execution quality, such as the idle time.
Footnote 3: [https://gitlab.com/graphviz/graphviz](https://gitlab.com/graphviz/graphviz)
Footnote 4: [https://www.w3.org/Graphics/SVG/](https://www.w3.org/Graphics/SVG/)
A graph and an execution trace are provided in Figure 2, and the corresponding calls are given in Code 8.
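Code 8 itself is not reproduced in this excerpt; the following is a minimal sketch of what such export calls may look like, where the method names (`generateDot`, `generateTrace`) and the `runtime` object are assumptions to be checked against the Specx headers.

```cpp
// Hedged sketch (method names are assumptions, not confirmed Specx API);
// 'runtime' denotes the runtime object used in the previous listings.
runtime.waitAllTasks();                // ensure all tasks have completed
runtime.generateDot("taskgraph.dot");  // task graph in dot format (Graphviz)
runtime.generateTrace("trace.svg");    // execution trace, opens in a browser
```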
## 5 Performance and usability study
### Configuration and test cases
We assess our method on two configurations:
* Intel-AVX512: it is a 2 \(\times\) 18-core Cascade Lake Intel Xeon Gold 6240 at 2.6 GHz with AVX-512 (Advanced Vector 512-bit, Foundation, Conflict Detection, Byte and Word, Doubleword and Quadword Instructions, and Vector Length). The main memory consists of 190 GB DRAM memory arranged in two NUMA nodes. Each CPU has 18 cores with 32KB private L1 cache, 1024KB private L2 cache, and 25MB shared L3 cache. We use the GNU compiler 11.2.0 and the MKL 2022.0.2.
### Results
Figure 2: Example of graphs and execution trace exported after a run.

**Overhead.** In this section, we discuss the engine overhead, which we evaluate with the following pattern. We create a runtime system with \(T\) CPU workers and \(T\) distinct data objects. Then, we insert \(T\times N\) tasks, with each task accessing one of the data objects. Consequently, the task graph we generate is actually composed of \(T\) independent sub-graphs. Inside each task, the worker that executes it simply waits for a given duration \(D\). As a result, the final execution time is given by \(N\times(D+O)\), where \(O\) is the overhead of picking a task from the runtime. Also, we can measure the time it takes to insert the \(T\times N\) tasks to obtain an insertion cost \(I\). A sketch of this pattern is given below.
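A sketch of the micro-benchmark pattern just described; the runtime insertion call is kept as a comment, with the same assumed API as the earlier listings, since what matters here is the shape: \(T\) data objects, \(T\times N\) tasks, each waiting for a duration \(D\).

```cpp
#include <chrono>
#include <thread>
#include <vector>

void overheadPattern(int T, int N, double D) {
    std::vector<int> data(T); // T distinct objects => T independent sub-graphs
    const auto taskBody = [D](int&) {
        std::this_thread::sleep_for(std::chrono::duration<double>(D));
    };
    const auto start = std::chrono::steady_clock::now();
    for (int n = 0; n < N; ++n) {
        for (int t = 0; t < T; ++t) {
            // Sequential stand-in; with the runtime this line would be:
            // runtime.task(SpWrite(data[t]), taskBody);
            taskBody(data[t]);
        }
    }
    // With the runtime, timing this loop alone gives the insertion cost I,
    // and waiting for completion gives a total of about N * (D + O).
    const std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    (void)elapsed;
}
```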
We provide the results in Figure 3 for 1 to 20 dependencies. As expected, the overhead of using a commutative write is significant compared to a normal write. The insertion cost is also higher when the task duration is smaller (\(D=10^{-5}\)). The reason is that as the tasks get smaller, the workers query the runtime system more often, which can compete with the insertion of ready tasks by the master thread and create contention. The cost also increases slightly as the number of dependencies per task increases. Finally, for the write access, the overhead per task is stable as the number of dependencies per task increases; for the commutative access, however, the overhead increases with the number of dependencies.
**Application.** This part is not ready yet.
## 6 Conclusion
We presented Specx, a task-based runtime system written in C++ and for C++ applications. Specx allows parallelizing over distributed computing nodes and exploiting CPUs and GPUs jointly. It is easy to use and provides advanced features such as scheduler customization and execution trace visualization.
We plan to improve Specx by providing a scheduler designed for heterogeneous computing nodes, creating new speculative execution models, adding conditional tasks, and improving the handling of compilation errors.
## 7 Acknowledgement
We used the PlaFRIM experimental testbed, supported by Inria, CNRS (LABRI and IMB), Universite de Bordeaux, Bordeaux INP and Conseil Regional d'Aquitaine 5.
Footnote 5: [https://www.plafrim.fr](https://www.plafrim.fr)
This work has been funded by the Inria ADT project SPETABARU-H, and the ANR National project AUTOSPEC (ANR-21-CE25-0009).
|
2301.08580 | Fractional Zernike functions | We consider and provide an accurate study for the fractional Zernike
functions on the punctured unit disc, generalizing the classical Zernike
polynomials and their associated $\beta$-restricted Zernike functions. Mainly,
we give the spectral realization of the latter ones and show that they are
orthogonal $L^2$-eigenfunctions for certain perturbed magnetic (hyperbolic)
Laplacian. The algebraic and analytic properties for the fractional Zernike
functions to be established include the connection to special functions, their
zeros, their orthogonality property, as well as the differential equations,
recurrence, and operational formulas they satisfy. Integral representations are
also obtained. Their regularity as poly-meromorphic functions is discussed and
their generating functions including a bilinear one of "Hardy--Hille type" are
derived. Moreover, we prove that a truncated subclass defines a complete
orthogonal system in the underlying Hilbert space giving rise to a specific
Hilbertian orthogonal decomposition in terms of a second class of generalized
Bergman spaces. | Hajar Dkhissi, Allal Ghanmi, Safa Snoun | 2023-01-19T14:02:16Z | http://arxiv.org/abs/2301.08580v1 | # Fractional Zernike functions
###### Abstract
We consider and provide an accurate study for the fractional Zernike functions on the punctured unit disc, generalizing the classical Zernike polynomials and their associated \(\beta\)-restricted Zernike functions. Mainly, we give the spectral realization of the latter ones and show that they are orthogonal \(L^{2}\)-eigenfunctions for a certain perturbed magnetic (hyperbolic) Laplacian. The algebraic and analytic properties of the fractional Zernike functions to be established include the connection to special functions, their zeros, their orthogonality property, as well as the differential equations, recurrence and operational formulas they satisfy. Integral representations are also obtained. Their regularity as poly-meromorphic functions is discussed and their generating functions, including a bilinear one of "Hardy-Hille type", are derived. Moreover, we prove that a truncated subclass defines a complete orthogonal system in the underlying Hilbert space, giving rise to a specific Hilbertian orthogonal decomposition in terms of a second class of generalized Bergman spaces.
**Keywords:** Zernike polynomials; \(\beta\)-restricted Zernike functions; fractional Zernike functions; \(\beta\)-weighted poly-Bergman spaces; zeros set; poly-meromorphy; generating functions; orthogonality; completeness.
## 1 Introduction
The classical real Zernike polynomials were introduced in the framework of optical problems, especially in order to analyze the figure of a circular mirror. In Zernike's paper on the knife-edge test and the phase contrast method [31], they are defined as eigenfunctions of a rotationally invariant second order partial differential equation. They were next used in Nijboer's work to develop the diffraction theory of optical aberrations. Since then, they have been extensively employed to express the propagation of wavefront data in optical tests through imaging systems [14, 15, 20], and to represent the aberrations of optical systems (caused by atmospheric turbulence) [26, 29]. They are also used to study diffraction problems in rotationally symmetric systems with circular pupils [24, 32] and in pattern recognition [18, 28]. More recently, they have been applied efficiently to characterize the shape of any portion of molecular surfaces and to evaluate the shape complementarity of protein-protein interfaces [23].
A generalized complex version (called Zernike or disc polynomials) defines them as the orthogonal ones on the unit disc \(\mathbb{D}=\{z\in\mathbb{C};\,|z|<1\}\) with finite values at the boundary. They are given by the Rodrigues type formula
\[\mathcal{Z}^{\gamma}_{m,n}(z,\overline{z}):=(-1)^{m+n}(1-|z|^{2})^{-\gamma} \frac{\partial^{m+n}}{\partial z^{m}\overline{z}^{n}}\left(1-|z|^{2}\right)^{ \gamma+m+n} \tag{1.1}\]
for varying nonnegative integers \(m,n\), and real \(\gamma>-1\). This definition agrees with the one provided by Koornwinder [19] as well as the one considered by Dunkl [6]. Algebraic and analytic properties of \(\mathcal{Z}^{\gamma}_{m,n}(z,\bar{z})\) have been discussed in many papers [1, 19, 30]. The corresponding Wiener and Paley type theorems have been obtained by Kanjin in [16]. Recently, they have been shown to be useful in the concrete description of spectral properties of different types of Cauchy transforms [7, 8].
In the present paper, we consider a specific generalization of the Zernike polynomials in (1.1). Namely, we deal with the family of functions
\[\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z}):=(-1)^{m}z^{-\rho}(1-|z|^{2}) ^{-\kappa}\frac{\partial^{m}}{\partial z^{m}}\left(z^{n+\rho}(1-|z|^{2})^{ \kappa+m}\right) \tag{1.2}\]
on the punctured unit disc \(\mathbb{D}^{*}=\mathbb{D}\setminus\{0\}\), for fixed real numbers \(\rho,\kappa>-1\) and varying integers \(m\) and \(n\) such that \(m\geq 0\) and \(n+\rho\geq 0\). Thus, for arbitrary nonnegative integer \(\rho\), they reduce further to the Zernike polynomials in (1.1) since for every \(\ell=0,1,\cdots\) we have
\[z^{\ell}\mathcal{Z}^{\kappa,\ell}_{m,n}(z,\overline{z})=\frac{\mathcal{Z}^{ \kappa}_{m,n+\ell}(z,\overline{z})}{(\kappa+m+1)_{n+\ell}}. \tag{1.3}\]
Otherwise, they are no longer polynomials. Their study for arbitrary \(\rho\) can be reduced to the subclass corresponding to \(0\leq\rho<1\). More precisely, we have
\[\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})=z^{-[\rho]}\mathcal{Z}^{ \kappa,\widetilde{\rho}}_{m,n+[\rho]}(z,\overline{z})\]
whenever \(n+[\rho]\geq 0\), where \([\rho]\) denotes the integer part of \(\rho\) and \(0\leq\widetilde{\rho}=\rho-[\rho]<1\). This somewhat justifies the following definition, which is further supported by these functions being poly-meromorphic (see Theorem 3.10).
**Definition 1.1**.: The functions \(\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})\) in (1.2) are referred to as fractional Zernike functions.
Contrary to the classical Zernike polynomials, which satisfy the symmetry relationships \(\overline{\mathcal{Z}^{\kappa}_{m,n}(z,\overline{z})}=\mathcal{Z}^{\kappa}_{m,n}(\overline{z},z)=\mathcal{Z}^{\kappa}_{n,m}(z,\overline{z})\) playing a crucial role in their study, this relation is no longer valid
for the fractional Zernike functions \(\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})\) even if \(\rho\) is a positive integer. In fact, we have only \(\overline{\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})}=\mathcal{Z}^{\kappa, \rho}_{m,n}(\overline{z},z)\) for arbitrary \(\rho\) and one gets from (1.3) the identity
\[\overline{\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})}=\frac{(\kappa+1)_{m }}{(\kappa+1)_{n+\rho}}z^{\rho}\overline{z}^{-\rho}\mathcal{Z}^{\kappa,\rho}_ {n+\rho,m-\rho}(z,\overline{z}) \tag{1.4}\]
valid for \(\rho\) being a nonnegative integer. This reveals in particular that the analytic and spectral properties of the functions \(\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})\) cannot be directly recovered from the Zernike polynomials, and the relevant properties may be completely different from the classical ones, essentially when \(\rho\) is non-integer. Thus, a concrete description of their algebraic and analytic properties for fixed reals \(\rho,\kappa>-1\) is desirable.
To this purpose we begin by considering the so-called \(\beta\)-restricted Zernike functions \(\psi^{\gamma,\eta}_{m,n}\). They are shown to be a special class of polyanalytic excited states in the weighted Hilbert space \(L^{2,\alpha}_{\beta}(\mathbb{D})=L^{2}(\mathbb{D},d\mu_{\alpha,\beta})\) of all complex-valued functions that are square integrable with respect to the positive measure
\[d\mu_{\alpha,\beta}(z):=(1-|z|^{2})^{\alpha}|z|^{2\beta}dxdy;\quad z=x+iy,\ \alpha,\beta>-1. \tag{1.5}\]
The main results concerning the functions \(\psi^{\gamma,\eta}_{m,n}\) are summarized in Theorem 2.6. Namely, we prove that they form an orthogonal system of eigenfunctions in \(L^{2,\alpha}_{\beta}(\mathbb{D})\) for a perturbed magnetic Laplacian, which is essentially the classical magnetic Schrodinger operator on the hyperbolic disc perturbed by a particular potential (with zero magnetic field) modeling the Aharonov-Bohm effect (see Remark 2.3). Moreover, the \(L^{2,\alpha}_{\beta}\)-eigenspace of the considered Laplacian associated with its lowest eigenvalue is shown to be the \(\beta\)-modified Bergman space \(\mathcal{A}^{2,\alpha}_{\beta}(\mathbb{D})\) on the punctured unit disc \(\mathbb{D}^{*}\) recently introduced and studied in [11, 12]. The other \(L^{2,\alpha}_{\beta}\)-eigenspaces associated with the hyperbolic Landau levels for the considered Laplacian can be seen as the polyanalytic analogs of \(\mathcal{A}^{2,\alpha}_{\beta}(\mathbb{D})\) (see Remark 2.7).
The motivation for considering \(\psi^{\gamma,\eta}_{m,n}\) is that they can be seen as the spectral side of the fractional Zernike functions. For special values of \(\gamma\) and \(\eta\), the two families are closely connected by
\[\mathcal{Z}^{\kappa_{m},\rho}_{m,n}(z,\overline{z})=|z|^{2\eta}(1-|z|^{2})^{ \frac{\alpha+1-\kappa_{m}}{2}}\psi^{\gamma,\eta}_{m,n}(z,\overline{z}) \tag{1.6}\]
for \(m,n\geq 0\) with \(\rho=\beta-2\eta\) and for \(\kappa\) depending on \(m\) and given by \(\kappa=\kappa_{m}=\alpha-2(\gamma+m)-1\). However, this last fact cannot be employed to recover the global properties of the fractional Zernike functions \(\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})\); only the local ones, for every fixed \(m\), \(n\) and \(\rho\) with the specific \(\kappa=\kappa_{m}\), can be derived.
For the concrete study of \(\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})\) we begin by establishing their explicit expressions, their different hypergeometric representations, their expression in terms of the Jacobi polynomials, as well as their connection to the complex Zernike polynomials in (1.1). Subsequently, the zero sets of \(\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})\) are described (Corollary 3.6) and shown to be circles centered at the origin whose radii are determined by the zeros of real Jacobi polynomials. The orthogonality in the Hilbert space \(L^{2,\kappa}_{\rho}(\mathbb{D})=L^{2}(\mathbb{D},d\mu_{\kappa,\rho})\) is discussed and the square norm is explicitly computed. The membership to a specific class of poly-meromorphic functions in \(\mathbb{D}\) is also considered (Theorem 3.10). Moreover, we investigate the operational formulas they satisfy, including those of Burchnall type, and discuss some recurrence relations and the differential equations they obey (Theorems 3.15 and 3.12). Certain associated generating functions are obtained, such as a bilinear generating function analogous to the Hardy-Hille generating function for the generalized Laguerre polynomials. The latter can be employed to derive a special integral representation for \(\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})\). Another integral representation, of Cauchy type, is obtained as a special integral on the unit circle. Finally, we show in Theorem 3.25 that the truncated fractional Zernike
functions
\[\Upsilon^{\kappa,\rho}_{m,s}(z,\overline{z}):=z^{s}|z|^{-s}{\cal Z}^{\kappa,\rho}_{ m,m}(z,\overline{z}),\,s\in\mathbb{Z},\,m=0,1,\cdots, \tag{1.7}\]
constitute an orthogonal basis of the Hilbert space \(L^{2,\kappa}_{\rho}(\mathbb{D})\). Accordingly, we define a second class of poly-meromorphic Bergman spaces leading to a complete microlocal orthogonal decomposition of the underlying Hilbert space \(L^{2,\kappa}_{\rho}(\mathbb{D})\). The obtained results will contribute efficiently to the study of the associated isometric integral transforms (of Bargmann type) on the configuration space on the positive real half-line.
The remaining sections are organized as follows. Section 2 deals with the spectral realization of the \(\beta\)-restricted Zernike functions \(\psi^{\gamma,\eta}_{m,n}\) by means of Schrodinger's factorization method. A proof that the \(\psi^{\gamma,\eta}_{m,n}\) form an orthogonal system of eigenfunctions in \(L^{2,\alpha}_{\beta}(\mathbb{D})\) is also presented in this section. The basic properties of the fractional Zernike functions described above are stated and proved in Section 3.
## 2 The \(\beta\)-restricted Zernike functions (spectral realization)
In this section we are concerned with the functions
\[\psi^{\gamma,\eta}_{m,n}(z,\overline{z})=(-1)^{m}z^{\eta-\beta}\overline{z}^{ -\eta}(1-|z|^{2})^{\gamma-\alpha+m}\frac{\partial^{m}}{\partial z^{m}}\big{(} z^{n+\beta-2\eta}(1-|z|^{2})^{\alpha-2\gamma-m-1}\big{)} \tag{2.1}\]
for given reals \(\alpha,\beta,\gamma,\eta\). They are referred to as the \(\beta\)-restricted Zernike functions (a terminology justified by Remark 2.7 below). We aim to derive their basic properties and show that they form an orthogonal system of \(L^{2,\alpha}_{\beta}\)-eigenfunctions for a perturbed magnetic Laplacian of the form
\[\Delta^{c,d}_{a,b}=\Delta_{hyp}+(1-|z|^{2})\left(H^{b}_{a}(z)E-H^{d}_{c}(z) \overline{E}\right)+H^{b}_{a}(z)H^{d}_{c}(z)|z|^{2} \tag{2.2}\]
acting on the weighted Hilbert space \(L^{2,\alpha}_{\beta}(\mathbb{D})\), \(\alpha,\beta>-1\). Above \(a\), \(b\), \(c\) and \(d\) are given real numbers, \(\Delta_{hyp}=-(1-|z|^{2})^{2}\partial^{2}/\partial z\partial\bar{z}\) is the Laplace-Beltrami operator on the hyperbolic disc, \(\overline{E}=\overline{z}\partial/\partial\overline{z}\) denotes the complex conjugate of the complex Euler operator \(E:=z\partial/\partial z\) and
\[H^{b}_{a}(z):=a+b-\frac{b}{|z|^{2}}. \tag{2.3}\]
It is worth noting that for particular values of \(a,b,c,d\) one recovers the magnetic Schrodinger operator on the hyperbolic unit disc representing the Hamiltonian of a charged particle in motion under an external uniform magnetic field [3, 5, 10, 13].
To this end, we have to factorize the considered Laplacian in terms of some first order differential operators (leading in particular to their Rodrigues type formula). Thus, if we set \(h_{\alpha,\beta}(z)=h(z)^{\alpha}|z|^{2\beta}\) with \(h(z)=1-|z|^{2}\), we can consider the first order differential operator
\[A_{\gamma,\eta}f(z):=h_{1-\gamma,-\eta}(z)\frac{\partial}{\partial\overline{z }}(h_{\gamma,\eta}f)(z)\]
for given fixed reals \(\gamma\) and \(\eta\). Its explicit expression is given by
\[A_{\gamma,\eta}f(z)=\left\{(1-|z|^{2})\frac{\partial}{\partial\overline{z}}-H^ {\eta}_{\gamma}(z)\right\}f(z). \tag{2.4}\]
The corresponding null space is closely connected to the set \(\mathrm{Hol}(\mathbb{D}^{*})\) of holomorphic functions on the punctured unit disc. Namely, we have \(\ker(A_{\gamma,\eta})=h_{-\gamma,-\eta}\mathrm{Hol}(\mathbb{D}^{*})\). Moreover, the formal adjoint operator \(A_{\gamma,\eta}^{*_{\alpha,\beta}}\) of \(A_{\gamma,\eta}\) with respect to the inner scalar product
\[\langle f,g\rangle_{\alpha,\beta}:=\int_{\mathbb{D}}f(z)\overline{g(z)}d\mu_{ \alpha,\beta}(z) \tag{2.5}\]
in \(L_{\beta}^{2,\alpha}(\mathbb{D})\) is given by
\[A_{\gamma,\eta}^{*_{\alpha,\beta}}f(z):=-h_{\gamma-\alpha,\eta-\beta}(z) \frac{\partial}{\partial z}(h_{\alpha-\gamma+1,\beta-\eta}f)(z) \tag{2.6}\]
by a standard computation. Accordingly, we set
\[\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}=A_{\gamma,\eta}A_{\gamma,\eta}^{*_ {\alpha,\beta}}\quad and\quad\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,-}=A_{ \gamma,\eta}^{*_{\alpha,\beta}}A_{\gamma,\eta}. \tag{2.7}\]
Straightforward computation leads to the explicit expression of these second order differential operators in terms of \(\Delta_{a,b}^{c,d}\) in (2.2) (we omit the proof).
**Lemma 2.1**.: _The expression of \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}\) in the \(z\)-coordinate is given by_
\[\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}=\Delta_{\gamma+1,\eta}^{\gamma- \alpha-1,\eta-\beta}+(\alpha-\gamma+1).\]
_Moreover, the operators \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}\) and \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,-}\) satisfy_
\[\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}=\mathcal{L}_{\gamma+1,\eta}^{\alpha,\beta,-}+(\alpha-2\gamma). \tag{2.8}\]
**Remark 2.2**.: For \(\alpha=-2\) and \(\beta=0\) we have \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}=\mathcal{L}_{\gamma+1,\eta}^{ \alpha,\beta,-}-2(\gamma+1)\). Also \(H_{\alpha-\gamma+1}^{\beta-\eta}=-H_{\gamma+1}^{\eta}\) so that the Laplacian \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}\) reduces further to
\[\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}=-h\left\{h\frac{\partial^{2}}{ \partial z\partial\overline{z}}-H_{\gamma+1}^{\eta}(z)\left(E-\overline{E} \right)\right\}+(H_{\gamma+1}^{\eta}(z))^{2}|z|^{2}-(\gamma+1). \tag{2.9}\]
For the particular cases of \(\gamma,\eta\) we recover the Landau-like Hamiltonian on \(\mathbb{D}\) (see e.g. [3, 10, 13]).
**Remark 2.3**.: The considered operators \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}\) and \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,-}\) can be realized geometrically as magnetic Schrodinger operators associated with a singular real differential 1-form (vector potential) \(\theta_{\alpha,\beta}=\theta_{\alpha}+\widetilde{\theta_{\beta}}\), where \(d\widetilde{\theta_{\beta}}=0\) and \(d\theta_{\alpha}\) is, up to a multiplicative constant, the Kähler two-form on the hyperbolic unit disc. More precisely, we have
\[\theta_{\alpha,\beta}(z)=\frac{i\alpha\left(\bar{z}dz-zd\bar{z}\right)}{1-|z| ^{2}}-i\beta\left(\frac{dz}{z}-\frac{d\overline{z}}{\overline{z}}\right). \tag{2.10}\]
Now, by means of the identity (2.8) we can establish the following (we omit the proof).
**Lemma 2.4**.: _The following commutation rules hold true_
\[(i) \mathcal{L}^{\alpha,\beta,+}_{\gamma,\eta}A_{\gamma,\eta}=A_{\gamma, \eta}\mathcal{L}^{\alpha,\beta,-}_{\gamma,\eta}\quad\text{and}\quad A^{*_{ \alpha,\beta}}_{\gamma,\eta}\mathcal{L}^{\alpha,\beta,+}_{\gamma,\eta}= \mathcal{L}^{\alpha,\beta,-}_{\gamma,\eta}A^{*_{\alpha,\beta}}_{\gamma,\eta}.\] \[(ii) \mathcal{L}^{\alpha,\beta,+}_{\gamma,\eta}A^{*_{\alpha,\beta}}_{ \gamma+1,\eta}=A^{*_{\alpha,\beta}}_{\gamma+1,\eta}\left(\mathcal{L}^{\alpha, \beta,+}_{\gamma+1,\eta}+(\alpha-2\gamma)\right).\] \[(iii) A_{\gamma+1,\eta}\mathcal{L}^{\alpha,\beta,+}_{\gamma,\eta}= \left(\mathcal{L}^{\alpha,\beta,+}_{\gamma+1,\eta}+(\alpha-2\gamma)\right)A_{ \gamma+1,\eta}.\] \[(iv) \mathcal{L}^{\alpha,\beta,-}_{\gamma+1,\eta}A_{\gamma,\eta}=A_{ \gamma,\eta}\left(\mathcal{L}^{\alpha,\beta,-}_{\gamma,\eta}-(\alpha-2\gamma) \right).\] \[(v) A^{*_{\alpha,\beta}}_{\gamma,\eta}\mathcal{L}^{\alpha,\beta,-}_{ \gamma+1,\eta}=\left(\mathcal{L}^{\alpha,\beta,-}_{\gamma,\eta}-(\alpha-2 \gamma)\right)A^{*_{\alpha,\beta}}_{\gamma,\eta}.\]
Lemma 2.4 is instrumental in analyzing and studying the family of functions \(\psi^{\gamma,\eta}_{m,n}\) in (2.1). In fact, we show that they can be obtained by successive application of \(A^{*}_{\gamma+j,\eta}\); \(j=1,2,\cdots,m\), to the ground state functions. Namely, we consider the differential operator
\[A^{*,m}_{\gamma,\eta}(f):=A^{*_{\alpha,\beta}}_{\gamma+1,\eta}\circ A^{*_{ \alpha,\beta}}_{\gamma+2,\eta}\circ\cdots\circ A^{*_{\alpha,\beta}}_{\gamma+m, \eta}(f).\]
Then, we claim the following.
**Lemma 2.5**.: _The closed expression of Rodrigues type for \(A^{*,m}_{\gamma,\eta}\) is given by_
\[A^{*,m}_{\gamma,\eta}(f)=(-1)^{m}h_{\gamma-\alpha+m,\eta-\beta}\frac{\partial ^{m}}{\partial z^{m}}(h_{\alpha-\gamma,\beta-\eta}f). \tag{2.11}\]
Proof.: Starting from the definition of \(A^{*,m}_{\gamma,\eta}\) one gets
\[A^{*,m}_{\gamma,\eta}(f)=(-1)^{m}h_{\gamma-\alpha-1,\eta-\beta}\left(h^{2} \frac{\partial}{\partial z}\right)^{m}(h_{\alpha-\gamma-m+1,\beta-\eta}f).\]
Then (2.11) readily follows thanks to the fact \((h^{2}\partial)^{m}(f)=h^{m+1}\partial^{m}(h^{m-1}f)\) in [9].
The main result in this section is the following.
**Theorem 2.6**.: _Fix \(\gamma\) such that \(\alpha>2\gamma+1\). Then, for integers \(m,n\) such that \(n>2\eta-\beta-1\) and \(0\leq m<(\alpha-1-2\gamma)/2\), the following assertions hold._
1. _The function_ \(\psi^{\gamma,\eta}_{m,n}\) _is an_ \(L^{2,\alpha}_{\beta}\)_-eigenfunction of_ \(\mathcal{L}^{\alpha,\beta,+}_{\gamma,\eta}\) _with_ \(E^{\gamma,\alpha}_{m}=(m+1)(\alpha-2\gamma-m)\) _as corresponding eigenvalue._
2. _The functions_ \(\psi^{\gamma,\eta}_{m,n}\) _form an orthogonal system in the Hilbert space_ \(L^{2,\alpha}_{\beta}(\mathbb{D})\) _and their square norm (induced from (_2.5_)) is given by_ \[\left\|\psi^{\gamma,\eta}_{m,n}\right\|^{2}_{\alpha,\beta}=\frac{\pi m!}{( \alpha-2(\gamma+m)-1)}\frac{\Gamma(\alpha-2\gamma-m)\Gamma(n+\beta-2\eta+1)} {\Gamma(n+\alpha+\beta-2(\gamma+\eta+m))}.\] (2.12) _Here_ \(\Gamma\) _is the Gamma Euler function._
Proof.: By virtue of the algebraic identity \((iii)\) in Lemma 2.4 and the identity (2.8), we can proceed
by mathematical induction to get
\[\mathcal{L}^{\alpha,\beta,+}_{\gamma,\eta}A^{*,m}_{\gamma,\eta} =A^{*,m}_{\gamma,\eta}\mathcal{L}^{\alpha,\beta,+}_{\gamma+m,\eta}+ \sum_{j=0}^{m-1}(\alpha-2(\gamma+j))A^{*,m}_{\gamma,\eta}\] \[=A^{*,m}_{\gamma,\eta}\left(\mathcal{L}^{\alpha,\beta,-}_{\gamma+m +1,\eta}+(\alpha-2(\gamma+m))\right)+\sum_{j=0}^{m-1}(\alpha-2(\gamma+j))A^{*,m} _{\gamma,\eta}\] \[=A^{*,m}_{\gamma,\eta}\mathcal{L}^{\alpha,\beta,-}_{\gamma+m+1, \eta}+\sum_{j=0}^{m}(\alpha-2(\gamma+j))A^{*,m}_{\gamma,\eta}\] \[=A^{*,m}_{\gamma,\eta}\mathcal{L}^{\alpha,\beta,-}_{\gamma+m+1, \eta}+(m+1)(\alpha-2\gamma-m)A^{*,m}_{\gamma,\eta}.\]
Accordingly, it becomes clear that the functions \(A^{*,m}_{\gamma,\eta}(\varphi^{\gamma,\eta}_{m})\) are eigenfunctions of \(\mathcal{L}^{\alpha,\beta,+}_{\gamma,\eta}\) whenever \(\varphi^{\gamma,\eta}_{m}\) belongs to the null space of \(A_{\gamma+m+1,\eta}\),
\[\ker(A_{\gamma+m+1,\eta})=\{f:\mathbb{D}^{*}\longrightarrow\mathbb{C};\ A_{ \gamma+m+1,\eta}f=0\}\subseteq\ker\left(\mathcal{L}^{-}_{\gamma+m+1,\eta} \right).\]
This is the case when considering
\[\varphi^{\gamma,\eta}_{m}(z)=\varphi^{\gamma,\eta}_{m,n}(z):=z^{n}(1-|z|^{2})^ {-(\gamma+m+1)}|z|^{-2\eta};\ \ n\in\mathbb{Z}. \tag{2.13}\]
More precisely, the functions \(A^{*,m}_{\gamma,\eta}(\varphi^{\gamma,\eta}_{m})=A^{*,m}_{\gamma,\eta}(z^{n}h _{-(\gamma+m+1),-\eta})\) are given by
\[A^{*,m}_{\gamma,\eta}(z^{n}h_{-(\gamma+m+1),-\eta})=(-1)^{m}h_{ \gamma-\alpha+m,\eta-\beta}\frac{\partial^{m}}{\partial z^{m}}(z^{n}h_{\alpha- 2\gamma-m-1,\beta-2\eta}) \tag{2.14}\] \[\qquad=(-1)^{m}(1-|z|^{2})^{\gamma-\alpha+m}|z|^{2(\eta-\beta)} \frac{\partial^{m}}{\partial z^{m}}\big{(}z^{n}|z|^{2(\beta-2\eta)}(1-|z|^{2}) ^{\alpha-2(\gamma+m)+m-1}\big{)}\]
thanks to Lemma 2.5. The latter formula reduces further to the expression of the \(\beta\)-restricted Zernike functions in (2.1). Moreover, they satisfy
\[\mathcal{L}^{\alpha,\beta,+}_{\gamma,\eta}\left(\psi^{\gamma,\eta}_{m,n}\right)=(m+1)(\alpha-2\gamma-m)\psi^{\gamma,\eta}_{m,n}=E^{\gamma,\alpha}_{m}\psi^{\gamma,\eta}_{m,n}. \tag{2.15}\]
Now, for their orthogonality in \(L^{2,\alpha}_{\beta}(\mathbb{D})\) one can use their explicit expressions in terms of certain special functions (see for example Remark 3.7 below). However, we present below another proof using the factorization method. To this purpose, notice first that \(A^{*,m}_{\gamma,\eta}=A^{*_{\alpha,\beta}}_{\gamma+1,\eta}\circ A^{*,m-1}_{ \gamma+1,\eta}\) and that \(\varphi^{\gamma,\eta}_{m,n}=\varphi^{\gamma+1,\eta}_{m-1,n}\). It follows
\[\psi^{\gamma,\eta}_{m,n}=A^{*,m}_{\gamma,\eta}(\varphi^{\gamma,\eta}_{m,n})=A^ {*_{\alpha,\beta}}_{\gamma+1,\eta}\circ A^{*,m-1}_{\gamma+1,\eta}(\varphi^{ \gamma,\eta}_{m,n})=A^{*_{\alpha,\beta}}_{\gamma+1,\eta}(\psi^{\gamma+1,\eta }_{m-1,n}).\]
Accordingly, making use of (2.15) we obtain
\[\left\langle\psi^{\gamma,\eta}_{m,n},\psi^{\gamma,\eta}_{j,k}\right\rangle= \left\langle\mathcal{L}^{\alpha,\beta,+}_{\gamma+1,\eta}(\psi^{\gamma+1,\eta}_ {m-1,n}),\psi^{\gamma+1,\eta}_{j-1,k}\right\rangle=E^{\gamma+1,\alpha}_{m-1} \Big{\langle}\psi^{\gamma+1,\eta}_{m-1,n},\psi^{\gamma+1,\eta}_{j-1,k}\Big{\rangle}.\]
More generally, by induction we arrive at
\[\left\langle\psi_{m,n}^{\gamma,\eta},\psi_{j,k}^{\gamma,\eta}\right\rangle=\prod_{ \ell=1}^{s}E_{m-\ell}^{\gamma+\ell,\alpha}\Big{\langle}\psi_{m-s,n}^{\gamma+s,\eta},\psi_{j-s,k}^{\gamma+s,\eta}\Big{\rangle};\,1\leq s\leq m.\]
Therefore, without loss of generality we can assume that \(m\leq j\) and take \(s=m\) to get
\[\left\langle\psi_{m,n}^{\gamma,\eta},\psi_{j,k}^{\gamma,\eta}\right\rangle =\prod_{\ell=1}^{m}E_{m-\ell}^{\gamma+\ell,\alpha}\Big{\langle} \psi_{0,n}^{\gamma+m,\eta},\psi_{j-m,k}^{\gamma+m,\eta}\Big{\rangle}\] \[=\prod_{\ell=1}^{m}E_{m-\ell}^{\gamma+\ell,\alpha}\Big{\langle} \psi_{0,n}^{\gamma+m,\eta},A_{\gamma+m+1,\eta}^{*}\circ A_{\gamma+m+2,\eta}^{ *}\circ\cdots\circ A_{\gamma+j,\eta}^{*}(\varphi_{j-m,k}^{\gamma+m,\eta}) \Big{\rangle}\] \[=\prod_{\ell=1}^{m}E_{m-\ell}^{\gamma+\ell,\alpha}\Big{\langle}A _{\gamma+j,\eta}\circ\cdots\circ A_{\gamma+m+1,\eta}(\varphi_{0,n}^{\gamma+m, \eta}),\varphi_{j-m,k}^{\gamma+m,\eta}\Big{\rangle}.\]
The last identity holds by observing that \(\psi_{0,n}^{\gamma+s,\eta}=\varphi_{0,n}^{\gamma+s,\eta},\) which readily follows from (2.11) or (2.14). Next, since \(\varphi_{0,n}^{\gamma+m,\eta}\) belongs to \(\ker(A_{\gamma+m+1,\eta})\) and then \(A_{\gamma+j,\eta}\circ\cdots\circ A_{\gamma+m+2,\eta}\circ A_{\gamma+m+1,\eta} (\varphi_{0,n}^{\gamma+m,\eta})\) vanishes whenever \(m<j\), we obtain
\[\left\langle\psi_{m,n}^{\gamma,\eta},\psi_{j,k}^{\gamma,\eta}\right\rangle= \left(\prod_{\ell=1}^{m}E_{m-\ell}^{\gamma+\ell,\alpha}\right)\Big{\langle} \varphi_{0,n}^{\gamma+m,\eta},\varphi_{0,k}^{\gamma+m,\eta}\Big{\rangle} \delta_{m,j}.\]
For the computation of the quantity \(\left\langle\varphi_{0,n}^{\gamma+m,\eta},\varphi_{0,k}^{\gamma+m,\eta}\right\rangle\) we make use of (2.13), which gives the explicit expression of \(\varphi_{m,n}^{\gamma,\eta}\). This yields
\[\left\langle\varphi_{0,n}^{\gamma+m,\eta},\varphi_{0,k}^{\gamma+ m,\eta}\right\rangle =\int_{\mathbb{D}}(1-|z|^{2})^{\alpha-2(\gamma+m+1)}|z|^{2(\beta- 2\eta)}z^{n}\overline{z}^{k}d\lambda(z)\] \[=\pi\left(\int_{0}^{1}(1-t)^{\alpha-2(\gamma+m+1)}t^{n+\beta-2 \eta}dt\right)\delta_{n,k}\] \[=\pi B(n+\beta-2\eta+1,\alpha-2(\gamma+m)-1)\delta_{n,k},\]
where \(B(a,b)\) denotes the classical beta function. The validity of the previous formula requires that \(n>2\eta-\beta-1\) and \(\alpha-2\gamma-1>2m\) with \(\alpha-2\gamma-1>0.\) Finally, since
\[\prod_{\ell=1}^{m}E_{m-\ell}^{\gamma+\ell,\alpha}=m!(\alpha-2(\gamma+m))_{m}=m! \frac{\Gamma(\alpha-2\gamma-m)}{\Gamma(\alpha-2(\gamma+m))},\]
we arrive at
\[\left\langle\psi_{m,n}^{\gamma,\eta},\psi_{j,k}^{\gamma,\eta}\right\rangle= \frac{\pi m!}{(\alpha-2(\gamma+m)-1)}\frac{\Gamma(\alpha-2\gamma-m)\Gamma(n+ \beta-2\eta+1)}{\Gamma(n+\alpha+\beta-2(\gamma+\eta+m))}\delta_{m,j}\delta_{ n,k}.\]
This completes the proof.
**Remark 2.7**.: The functions in (2.1) corresponding to \(\gamma=-1\), \(\eta=0\) and \(m=0\) reduce further to \(\psi_{0,n}^{-1,0}(z,\overline{z})=z^{n}\), for varying integer \(n>-(\beta+1)\), whose square norm in \(L^{2,\alpha}_{\beta}(\mathbb{D})\) is given by
\[\left\|\psi_{0,n}^{-1,0}\right\|_{\alpha,\beta}^{2}=\pi\frac{\Gamma(\alpha+1) \Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+2)}.\]
They form an orthogonal basis of the \(\beta\)-modified Bergman space \(\mathcal{A}^{2,\alpha}_{\beta}(\mathbb{D})\) defined as the closed subspace in \(L^{2,\alpha}_{\beta}(\mathbb{D})\) formed by the holomorphic functions on the punctured disc \(\mathbb{D}^{*}\) (see [11, 12] for details). In other words, the \(\beta\)-modified Bergman space is the \(L^{2}\)-eigenspace of our magnetic Laplacian \(\mathcal{L}^{\alpha,\beta,+}_{\gamma+1,\eta}\) associated with its lowest Landau level. For the particular case of \(\beta=0\) we recover the classical Bergman space on the unit disc with respect to the weight function being of the generalized Gegenbauer form \((1-|z|^{2})^{\alpha}\).
**Remark 2.8**.: The functions \(\psi_{m,n}^{\gamma,\eta}\) do not form a complete system in \(L^{2,\alpha}_{\beta}(\mathbb{D})\). However, for fixed \(m\) such that \(0\leq m<(\alpha-1-2\gamma)/2\) and varying integer \(n\geq 2\eta-\beta\) they span a specific closed subspace \(\mathcal{A}^{2,\alpha}_{\beta,m}(\mathbb{D})\) in \(L^{2,\alpha}_{\beta}(\mathbb{D})\). This gives rise to what can be called the \(m\)-th generalized (or also poly-meromorphic) \(\beta\)-modified Bergman space on \(\mathbb{D}^{*}\), which can be seen as the polyanalytic analog of the \(\beta\)-modified Bergman space. Its reproducing kernel is given in Remark 3.20 below.
## 3 Fractional Zernike functions
In this section we provide an accurate theoretical study for the fractional Zernike functions in (1.2). We discuss their connection to some special functions, zeros, orthogonality in \(L^{2,\kappa}_{\rho}(\mathbb{D})\), regularity, differential equations, recurrence and operational formulas. Some results concerning the generating functions, the integral representations and completeness are also obtained.
### Connection to special functions and explicit expression.
We begin by establishing the explicit expression of the fractional Zernike functions \(\mathcal{Z}^{\kappa,\rho}_{m,n}\) in terms of the classical Zernike polynomials. Thus, for given real \(b\) and nonnegative integer \(m\), we define the modified minimum \(m\wedge^{*}b\) to be
\[m\wedge^{*}b=\left\{\begin{array}{ll}\min(m,b),&b=0,1,2,\cdots\\ m,&b\in\mathbb{R},\,b\neq 0,1,2,\cdots\,.\end{array}\right.\]
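For instance, \(3\wedge^{*}2=2\) since \(2\) is a nonnegative integer, whereas \(3\wedge^{*}(3/2)=3\).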
**Proposition 3.1**.: _For every \(\rho>-1\) we have_
\[\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})=\frac{m!\Gamma(\rho+1)}{( \kappa+m+1)_{n}}\sum_{j=0}^{m\wedge^{*}\rho}\frac{(-1)^{j}}{j!(m-j)!\Gamma( \rho-j+1)}\frac{\left(1-|z|^{2}\right)^{j}}{z^{j}}\mathcal{Z}^{\kappa+j}_{m-j,n}(z,\overline{z}). \tag{3.1}\]
Proof.: Using the facts (3.4) and
\[z^{n}(1-|z|^{2})^{\kappa+m}=\frac{(-1)^{n}}{(\kappa+m+1)_{n}}\frac{\partial^{ n}}{\partial\overline{z}^{n}}\left((1-|z|^{2})^{\kappa+m+n}\right)\]
we get
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z}) =\frac{(-1)^{m+n}}{(\kappa+m+1)_{n}}z^{-\rho}(1-|z|^{2})^{-\kappa} \frac{\partial^{m}}{\partial z^{m}}\left(z^{\rho}\frac{\partial^{n}}{\partial \overline{z}^{n}}\left((1-|z|^{2})^{\kappa+m+n}\right)\right)\] \[=\frac{(-1)^{m+n}m!}{(\kappa+m+1)_{n}}z^{-\rho}(1-|z|^{2})^{- \kappa}\sum_{j=0}^{m}\frac{(-\rho)_{j}}{j!(m-j)!}z^{\rho-j}\frac{\partial^{m-j+ n}}{\partial z^{m-j}\partial\overline{z}^{n}}\left((1-|z|^{2})^{\kappa+j+m-j+n} \right),\]
which can be rewritten as (3.1).
**Remark 3.2**.: For \(\rho=0\) we recover the Zernike polynomials \(\mathcal{Z}_{m,n}^{\kappa}(z,\overline{z})\) up to the multiplicative constant \(1/(\kappa+m+1)_{n}\), while for \(\rho=1\) we get
\[\mathcal{Z}_{m,n+1}^{\kappa}(z,\overline{z})=(\kappa+m+n+1)\left(z\mathcal{Z} _{m,n}^{\kappa}(z,\overline{z})+m\left(1-|z|^{2}\right)\mathcal{Z}_{m-1,n}^{ \kappa+1}(z,\overline{z})\right),\]
which is exactly the three-term recurrence formula for the Zernike polynomials [1, p. 403, Eq. (5.1)]. This follows since \((-\rho)_{j}=0\) for \(j\geq\rho+1\) whenever \(\rho=0,1,2,\cdots.\) More generally, from (1.3) with \(\rho\) being a nonnegative integer, we obtain a new recurrence formula for the classical complex Zernike polynomials
\[\mathcal{Z}_{m,n+\rho}^{\kappa}(z,\overline{z})=m!\Gamma(\rho+1)(\kappa+m+n+1) _{\rho}\sum_{j=0}^{m\wedge\rho}\frac{(-1)^{j}z^{\rho-j}\left(1-|z|^{2}\right) ^{j}}{j!(m-j)!\Gamma(\rho-j+1)}\mathcal{Z}_{m-j,n}^{\kappa+j}(z,\overline{z}). \tag{3.2}\]
The explicit expression of the first few terms of \(\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})\) can easily be computed from the Rodrigues formula (1.2) or from (3.1). Thus, those corresponding to \(m=0\) reduce to the monomials, \(\mathcal{Z}_{0,n}^{\kappa,\rho}(z,\overline{z})=z^{n}.\) For \(m=1\) and \(m=2\) we get respectively
\[\mathcal{Z}_{1,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+n_{\rho}+1)\overline{ z}z^{n}-n_{\rho}z^{n-1}\]
and
\[\mathcal{Z}_{2,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+n_{\rho}+1)(\kappa+n_ {\rho}+2)\overline{z}^{2}z^{n}-2n_{\rho}(\kappa+n_{\rho}+1)\overline{z}z^{n-1 }+n_{\rho}(n_{\rho}-1)z^{n-2},\]
where we have set \(n_{\rho}=n+\rho\). As a quick check, the \(m=1\) expression can be recovered directly from the Rodrigues formula (1.2), as shown below; a general formula for the explicit expression of \(\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})\) is then given by Proposition 3.3.
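Indeed, a direct differentiation gives

\[\frac{\partial}{\partial z}\big{(}z^{n+\rho}(1-|z|^{2})^{\kappa+1}\big{)}=(n+\rho)z^{n+\rho-1}(1-|z|^{2})^{\kappa+1}-(\kappa+1)\overline{z}\,z^{n+\rho}(1-|z|^{2})^{\kappa},\]

so that multiplying by \(-z^{-\rho}(1-|z|^{2})^{-\kappa}\) and using \(z^{n-1}(1-|z|^{2})=z^{n-1}-\overline{z}z^{n}\) yields \(\mathcal{Z}_{1,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+n_{\rho}+1)\overline{z}z^{n}-n_{\rho}z^{n-1}\), in agreement with the displayed expression.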
**Proposition 3.3**.: _We have_
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})=\sum_{j=0}^{m\wedge^{*}(n+\rho )}\frac{(-1)^{j}m!\Gamma(n+\rho+1)\Gamma(\kappa+m+1)}{j!(m-j)!\Gamma(n+\rho-j+ 1)\Gamma(\kappa+j+1)}\left(1-|z|^{2}\right)^{j}z^{n-j}\overline{z}^{n-j}. \tag{3.3}\]
Proof.: Using the fact \((x)^{\underline{n}}=\Gamma(x+1)/\Gamma(x-n+1)\) for the falling factorial \((x)^{\underline{n}}=x(x-1)\cdots(x-n+1)\), we obtain
\[\frac{\partial^{m}}{\partial z^{m}}\left(1-xz\right)^{a}=(-1)^{m}\frac{\Gamma( a+1)}{\Gamma(a+1-m)}x^{m}\left(1-xz\right)^{a-m}\]
and

\[\frac{\partial^{m}}{\partial z^{m}}\left(z^{a}\right)=(-1)^{m}(-a)_{m}z^{a-m}=\varepsilon_{a,m}^{*}\frac{\Gamma(a+1)}{\Gamma(a+1-m)}z^{a-m}, \tag{3.4}\]
where for the nonnegative integer \(m\) we have set
\[\varepsilon_{a,m}^{*}=\left\{\begin{array}{ll}1,&a\geq m;\,a=0,1,\cdots\\ 0,&a<m;\,a=0,1,\cdots\\ 1,&a\in\mathbb{R},\,a\neq 0,1,\cdots\,.\end{array}\right.\]
Thus, applying the Leibniz formula for the high order derivative of a product yields
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z}) =(-1)^{m}z^{-\rho}(1-|z|^{2})^{-\kappa}\frac{\partial^{m}}{ \partial z^{m}}\big{(}z^{n+\rho}(1-|z|^{2})^{\kappa+m}\big{)}\] \[=\sum_{j=0}^{m}\varepsilon_{n+\rho,j}^{*}\frac{(-1)^{j}m!\Gamma(n +\rho+1)\Gamma(\kappa+m+1)}{j!(m-j)!\Gamma(n+\rho-j+1)\Gamma(\kappa+j+1)} \overline{z}^{m-j}z^{n-j}\left(1-|z|^{2}\right)^{j}.\]
This gives rise to (3.3).
Below, we present different hypergeometric representations of \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) in terms of the Gauss hypergeometric function defined on the open unit disc by the power series
\[{}_{2}F_{1}\left(\begin{array}{c}a,b\\ c\end{array}\bigg{|}z\right)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}} \frac{z^{n}}{n!}\]
provided that \(c\neq 0,-1,-2,\cdots\).
**Proposition 3.4**.: _The functions \(\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})\) are given in terms of the \({}_{2}F_{1}\) function by_
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+1)_{m}z^{n}\overline{ z}^{m}{}_{2}F_{1}\left(\begin{array}{c}-m,-n-\rho\\ \kappa+1\end{array}\bigg{|}1-\frac{1}{|z|^{2}}\right). \tag{3.5}\]
Proof.: By means of \((a)^{\underline{n}}=(-1)^{n}(-a)_{n}=\Gamma(a+1)/\Gamma(a-n+1)\) combined with \((-1)^{j}(-m)_{j}(m-j)!=m!\), we can rewrite (3.3) as
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z}) =(\kappa+1)_{m}z^{n}\overline{z}^{m}\sum_{j=0}^{m\wedge^{*}(n+ \rho)}\frac{(-m)_{j}(-n-\rho)_{j}}{(\kappa+1)_{j}j!}\left(1-\frac{1}{|z|^{2}} \right)^{j}\] \[=(\kappa+1)_{m}z^{n}\overline{z}^{m}{}_{2}F_{1}\left(\begin{array} []{c}-m,-n-\rho\\ \kappa+1\end{array}\bigg{|}1-\frac{1}{|z|^{2}}\right).\]
There are several equivalent expressions for \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) in terms of the Gauss hypergeometric function, which follow from the well-known linear transformations for \({}_{2}F_{1}\). Thus, from the second and the third ones in [22, § 2.4, p. 47], it follows that
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+1)_{m}z^{n-m}{}_{2}F _{1}\left(\begin{array}{c}-m,n+\kappa+\rho+1\\ \kappa+1\end{array}\bigg{|}1-|z|^{2}\right) \tag{3.6}\]
and

\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+1)_{m}z^{-\rho}\overline{z}^{m-n-\rho}{}_{2}F_{1}\left(\begin{array}{c}-n-\rho,\kappa+m+1\\ \kappa+1\end{array}\bigg{|}1-|z|^{2}\right). \tag{3.7}\]
However, starting from (3.5) and applying the linear transformation [25, Eq. (15.8.7)] in the terminating case, one obtains

\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+\rho+n+1)_{m}z^{n}\overline{z}^{m}\,{}_{2}F_{1}\left(\begin{array}{c}-m,-n-\rho\\ -\kappa-\rho-n-m\end{array}\bigg{|}\frac{1}{|z|^{2}}\right).\]
which clearly leads to (3.10). This can also be deduced from the fact that the functions \(z^{\rho}\mathcal{Z}^{\kappa,\rho}_{m,n}\) are, up to a multiplicative constant, the classical Zernike functions in (1.1), and then appealing to the corresponding formula in [9].
Finally, the formula (3.11) for \(\rho\) non-integer is exactly (3.10) (valid when \(\rho\) is an integer and \(m\leq n+\rho\)). It can be obtained by means of (3.6) and (3.12), valid whenever \(m=m\wedge^{*}(n+\rho)\), which corresponds to \(\rho>-1\) being non-integer or to \(m=m\wedge(n+\rho)\). A direct proof can be given starting from the derivation formula [4, p. 1, 1.1.2, Eq. 2], which we can rewrite as
\[\frac{d^{m}}{dz^{m}}\left(z^{a}(1-xz)^{b}\right)=m!z^{a-m}(1-xz)^{b-m}P_{m}^{(a-m,b-m)}(1-2xz). \tag{3.15}\]
Therefore, with the specification \(a=n+\rho\), \(b=\kappa+m\) and \(x=\overline{z}\), the expression of \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) in (1.2) becomes (3.11).
The next result concerning the zeros of \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) is an immediate consequence of Proposition 3.5.
**Corollary 3.6**.: _The point \(0\) is a zero of \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) when \(\rho=0,1,2,\cdots\) if and only if \(n>m\), or \(m>n\) with \(\rho=0\). However, when \(\rho\) is non-integer, it is a zero of \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) only when \(n>m\). The other zeros are the circles centered at the origin with squared radii \((1+r_{m,n}^{\kappa,\rho})/2\), where \(r_{m,n}^{\kappa,\rho}\) are the zeros, for which \((1+r_{m,n}^{\kappa,\rho})/2\) lies in the interval \((0,1)\), of the real Jacobi polynomials \(P_{m}^{(\kappa,n-m+\rho)}(x)\) when \(\rho\) is non-integer and \(P_{m\wedge(n+\rho)}^{(\kappa,|n-m+\rho|)}(x)\) when \(\rho\) is a nonnegative integer._
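For instance, for \(m=1\) and \(n\geq 1\), the expression obtained earlier factors as

\[\mathcal{Z}_{1,n}^{\kappa,\rho}(z,\overline{z})=z^{n-1}\left((\kappa+n_{\rho}+1)|z|^{2}-n_{\rho}\right),\]

so that, besides a possible zero at the origin, the zero set is the single circle \(|z|^{2}=n_{\rho}/(\kappa+n_{\rho}+1)\), which lies inside \(\mathbb{D}\) since \(\kappa>-1\). This matches Corollary 3.6: the zero of \(P_{1}^{(\kappa,n-1+\rho)}\) is \(r=(n_{\rho}-1-\kappa)/(\kappa+n_{\rho}+1)\), for which \((1+r)/2=n_{\rho}/(\kappa+n_{\rho}+1)\).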
**Remark 3.7**.: According to Proposition 3.5, the expression of the \(\beta\)-restricted Zernike functions \(\psi_{m,n}^{\gamma,\eta}\) in terms of the Jacobi polynomials reads
\[\psi_{m,n}^{\gamma,\eta}(z,\overline{z})=(-1)^{m}m!\frac{z^{n-m}}{|z|^{2\eta}}(1-|z|^{2})^{\frac{\kappa-\alpha-1}{2}}P_{m}^{(n-m+\rho,\kappa)}(1-2|z|^{2}), \tag{3.16}\]
where \(\rho=\beta-2\eta>-1\) is non-integer and \(\kappa=\alpha-2(\gamma+m)-1\).
We conclude this subsection by discussing the orthogonality of the considered functions.
**Corollary 3.8**.: _The functions \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) form an orthogonal system in \(L_{\rho}^{2,\kappa}(\mathbb{D})\) with square norm given by_
\[\left\|\mathcal{Z}_{m,n}^{\kappa,\rho}\right\|_{L_{\rho}^{2,\kappa}(\mathbb{D })}^{2}=\frac{\pi m!(n+\rho)!\Gamma(m+\kappa+1)}{(m+n+\rho+\kappa+1)\Gamma(n+ \rho+\kappa+1)}=:\frac{1}{\gamma_{m,n}^{\kappa,\rho}} \tag{3.17}\]
Proof.: We provide an explicit computation only for \(\rho\) being an integer. For \(\rho\) non-integer, one can proceed as in the case \(m=m\wedge(n+\rho)\) with \(\rho\) integer. Thus, let \(\rho\) be a fixed integer and set
\[d_{m,n}^{\rho,\kappa}:=\frac{(\kappa+1)_{m\vee(n+\rho)}(m\wedge(n+\rho))!}{( \kappa+1)_{n+\rho}}.\]
Then, from Proposition 3.5 and the use of the polar coordinates \(z=\sqrt{t}e^{i\theta}\); \(0\leq t<1\), \(0\leq\theta<2\pi\), we get
\[I_{m,n,j,k}^{\rho,\kappa} :=\int_{D}\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z}) \overline{\mathcal{Z}_{j,k}^{\kappa,\rho}(z,\overline{z})}|z|^{2\rho}(1-|z|^{2 })^{\kappa}d\Lambda(z)\] \[=\pi d_{m,n}^{\rho,\kappa}d_{j,k}^{\rho,\kappa}\left(\int_{0}^{1 }t^{|m-n-\rho|}(1-t)^{\kappa}P_{m\wedge(n+\rho)}^{(\kappa,|m-n-\rho|)}(2t-1)P_ {j\wedge(k+\rho)}^{(\kappa,|m-n-\rho|)}(2t-1)dt\right)\delta_{n-m,k-j}\] \[=\frac{\pi d_{m,n}^{\rho,\kappa}d_{j,k}^{\rho,\kappa}}{2^{|m-n- \rho|+\kappa+1}}\left(\int_{-1}^{1}(1+x)^{|m-n-\rho|}(1-x)^{\kappa}P_{m\wedge (n+\rho)}^{(\kappa,|m-n-\rho|)}(x)P_{j\wedge(k+\rho)}^{(\kappa,|m-n-\rho|)}(x) dx\right)\delta_{n-m,k-j}.\]
Now, by the orthogonal property for the classical Jacobi polynomials [22, p.212], it follows
\[I_{m,n,j,k}^{\rho,\kappa} =\frac{\pi d_{m,n}^{\rho,\kappa}d_{j,k}^{\rho,\kappa}}{2^{|m-n-\rho| +\kappa+1}}\Big{\|}P_{m\wedge(n+\rho)}^{(\kappa,|m-n-\rho|)}\Big{\|}^{2}\delta_ {n-m,k-j}\delta_{m\wedge(n+\rho),j\wedge(k+\rho)}\] \[=\frac{\pi m!(n+\rho)!\Gamma(m+\kappa+1)}{(m+n+\rho+\kappa+1) \Gamma(n+\rho+\kappa+1)}\delta_{m,j}\delta_{n,k}.\]
This proves the orthogonality of \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) in the Hilbert space \(L_{\rho}^{2,\kappa}(\mathbb{D})\).
### Poly-meromorphy
In analogy with the definition of polyanalytic functions, defined as those satisfying the generalized Cauchy-Riemann equation \(\partial^{n}f/\partial\overline{z}^{n}=0\), one suggests the following definition of poly-meromorphy [2, p. 199].
**Definition 3.9**.: A complex-valued function \(f\) on an open set \(U\) in the complex plane is said to be poly-meromorphic of order \(n\) (of first kind) if there exist certain meromorphic functions \(\psi_{k};\)\(k=0,1,\cdots,n-1\) on \(U\) such that
\[f(z)=\psi_{0}(z)+\overline{z}\psi_{1}(z)+\cdots+\overline{z}^{n-1}\psi_{n-1}(z).\]
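For instance, \(f(z)=\frac{1}{z}+\overline{z}\,\frac{e^{z}}{z^{2}}\) is poly-meromorphic of order \(2\) on the punctured disc, with \(\psi_{0}(z)=1/z\) and \(\psi_{1}(z)=e^{z}/z^{2}\).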
The main result in this subsection discusses the regularity of the considered fractional Zernike functions.
**Theorem 3.10**.: _The fractional Zernike functions \(\mathcal{Z}_{m,n}^{\kappa,\rho}\) are polynomials in \(z\) and \(\overline{z}\) if and only if \(\rho=0\) or \(m\leq n\). Otherwise, they are poly-meromorphic functions of order \(m\) with \(0\) as unique pole. Its order of multiplicity is given by \(Ord_{m,n}^{\kappa,\rho}=\rho\) for \(m>n+\rho\) when \(\rho=1,2,\cdots\), and by \(Ord_{m,n}^{\kappa,\rho}=m-n\) for \(m>n\) when \(\rho>-1\) is non-integer, or when \(n<m\leq n+\rho\) with \(\rho=1,2,\cdots\)._
Proof.: Set
\[c_{m,n,j}^{\kappa,\rho}:=\frac{(-1)^{j}m!\Gamma(n+\rho+1)\Gamma(\kappa+m+1)}{j!(m-j)!\Gamma(n+\rho-j+1)\Gamma(\kappa+j+1)}\]
and for \(p<q\leq m\wedge^{*}(n+\rho)\) consider the quantities
\[S_{q,p}^{\kappa,\rho,m,n}:=R_{q}^{\kappa,\rho,m,n}-X^{q-p}R_{p}^{\kappa,\rho,m,n},\]
where \(R_{p}^{\kappa,\rho,m,n}\) is the polynomial of degree less or equal to \(p\) given by
\[R_{p}^{\kappa,\rho,m,n}(X)=\sum_{k=0}^{p}\left(\sum_{j=0}^{p-k}(-1)^{j}\frac{( j+k)!}{j!k!}c_{m,n,j+k}^{\kappa,\rho}\right)X^{p-k}. \tag{3.18}\]
Notice for instance that its constant coefficient is given by \(S_{q,p}^{\kappa,\rho,m,n}(0)=R_{q}^{\kappa,\rho,m,n}(0)=c_{m,n,q}^{\kappa,\rho}.\) Thus, starting from (3.3) we can rewrite \(\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})\) as
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})=z^{n-[m\wedge^{*}(n+\rho)]} \overline{z}^{m-[m\wedge^{*}(n+\rho)]}R_{m\wedge^{*}(n+\rho)}^{\kappa,\rho,m,n} (|z|^{2}). \tag{3.19}\]
But, since \(c^{\kappa,\rho}_{m,n,m\wedge^{*}(n+\rho)}\neq 0\), it becomes clear from (3.19) that the functions \(\mathcal{Z}^{\kappa,\rho}_{m,n}\) are polynomials if and only if \(\rho=0\) or \(m\leq n\), independently of \(\rho>-1\) being integer or not. This assertion is also immediate from Proposition 3.5. Next, using the fact that \(R^{\kappa,\rho,m,n}_{q}=X^{q-p}R^{\kappa,\rho,m,n}_{p}+S^{\kappa,\rho,m,n}_{q,p}\) we obtain
\[\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})=\overline{z}^{m-n}R^{\kappa, \rho,m,n}_{n}(|z|^{2})+z^{n-[m\wedge^{*}(n+\rho)]}\overline{z}^{m-[m\wedge^{*} (n+\rho)]}S^{\kappa,\rho,m,n}_{m\wedge^{*}(n+\rho),n}(|z|^{2}). \tag{3.20}\]
A meticulous study of the different possible cases of \(m\) compared to \(n\) and \(n+\rho\) for given \(\rho>-1\) leads to
\[\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})=\left\{\begin{array}{ll}z^{n-m}R^{\kappa,\rho,m,n}_{m}(|z|^{2})&\mbox{if $m\leq n$; $\rho>-1$}\\ \overline{z}^{m-n}R^{\kappa,\rho,m,n}_{n}(|z|^{2})&\mbox{if $m>n$; $\rho=0$}\\ \overline{z}^{m-n}R^{\kappa,\rho,m,n}_{n}(|z|^{2})+\frac{1}{z^{m-n}}S^{\kappa,\rho,m,n}_{m,n}(|z|^{2})&\mbox{if $m>n$; $\rho$ non-integer}\\ &\mbox{or $n<m\leq n+\rho$; $\rho=1,2,\cdots$}\\ \overline{z}^{m-n}R^{\kappa,\rho,m,n}_{n}(|z|^{2})+\frac{\overline{z}^{m-(n+\rho)}}{z^{\rho}}S^{\kappa,\rho,m,n}_{n+\rho,n}(|z|^{2})&\mbox{if $m>n+\rho$; $\rho=1,2,\cdots$}.\end{array}\right.\]
Moreover, this reveals that the regular part of \(\mathcal{Z}^{\kappa,\rho}_{m,n}\) is always a polyanalytic function of order \(m\) and anti-polyanalytic of order \(n\). It is given by
\[R^{\kappa,\rho}_{m,n}(|z|^{2})=\left\{\begin{array}{ll}z^{n-m}R^{\kappa,\rho,m,n}_{m}(|z|^{2}),&m\leq n\\ \overline{z}^{m-n}R^{\kappa,\rho,m,n}_{n}(|z|^{2}),&m\geq n\end{array}\right.= z^{n-m\wedge n}\overline{z}^{m-m\wedge n}R^{\kappa,\rho,m,n}_{m\wedge n}(|z|^{2}). \tag{3.21}\]
However, in general \(\mathcal{Z}^{\kappa,\rho}_{m,n}\) are poly-meromorphic with \(0\) as the unique pole for \(\rho\neq 0\) and \(m>n\). Thus, for \(m>n+\rho\) with \(\rho=1,2,\cdots\) the singular part is clearly given by \(z^{-\rho}\overline{z}^{m-(n+\rho)}S^{\kappa,\rho,m,n}_{n+\rho,n}(|z|^{2})\). It is given by \(z^{n-m}S^{\kappa,\rho,m,n}_{m,n}(|z|^{2})\) whenever \(n<m\) and \(\rho>-1\) is non-integer, or \(n<m\leq n+\rho\) when \(\rho=1,2,\cdots\). Therefore, it becomes clear that the multiplicity of the singularity is given by
\[Ord^{\kappa,\rho}_{m,n}=\left\{\begin{array}{ll}\rho&\mbox{if $m>n+\rho$; $\rho=1,2,\cdots$}\\ m-n&\mbox{if $n<m$; $\rho$ non-integer}\\ &\mbox{or $n<m\leq n+\rho$; $\rho=1,2,\cdots$}.\end{array}\right.\]
This completes the proof.
**Remark 3.11**.: The obtained result somewhat justifies calling the \(\mathcal{Z}^{\kappa_{m},\rho}_{m,n}\) fractional Zernike functions.
### Differential equations.
In this subsection we are concerned with some second order differential equations satisfied by the fractional Zernike functions.
**Theorem 3.12**.: _Let \(m_{\rho}=m+\rho\). Then, the function \(\mathcal{Z}^{\kappa,\rho}_{m,n}\) satisfies the differential equation_

\[\left(z^{2}(1-|z|^{2})\frac{\partial^{2}}{\partial z^{2}}+\left((m_{\rho}-n+1)-(\kappa+m_{\rho}-n+2)|z|^{2}\right)z\frac{\partial}{\partial z}+n(\kappa+m_{\rho}+1)|z|^{2}\right)\mathcal{Z}^{\kappa,\rho}_{m,n}=\rho(n-m)\,\mathcal{Z}^{\kappa,\rho}_{m,n}.\]
Proof.: Consider the first order differential operator
\[\nabla^{\kappa,\rho}(f)(z)=-\left((1-|z|^{2})\frac{\partial}{\partial z}- \mathcal{Z}^{\kappa,\rho}_{1,0}(z,\overline{z})\right)(f)(z). \tag{3.22}\]
Also, for varying \(j=1,2,\cdots,m\) we set
\[\nabla^{\kappa,\rho}_{j}(f):=-z^{-\rho}(1-|z|^{2})^{-\kappa-j+1}\frac{\partial}{ \partial z}\big{(}z^{\rho}(1-|z|^{2})^{\kappa+j}f\big{)}, \tag{3.23}\]
so that \(\nabla^{\kappa,\rho}(f)(z)=\nabla^{\kappa,\rho}_{1}(f)(z).\) Successive application of \(\nabla^{\kappa,\rho}_{j}\) leads to the operator \(\widetilde{\nabla}^{\kappa,\rho}_{m}:=\nabla^{\kappa,\rho}_{1}\circ\nabla^{ \kappa,\rho}_{2}\circ\cdots\circ\nabla^{\kappa,\rho}_{m}\) satisfying \(\widetilde{\nabla}^{\kappa,\rho}_{m+1}=\nabla^{\kappa,\rho}_{1}\circ \widetilde{\nabla}^{\kappa+1,\rho}_{m}\) since \(\nabla^{\kappa,\rho}_{j+1}=\nabla^{\kappa+1,\rho}_{j}.\) It is explicitly given by
\[\widetilde{\nabla}^{\kappa,\rho}_{m}(f)(z)=(-1)^{m}z^{-\rho}(1-|z|^{2})^{- \kappa}\frac{\partial^{m}}{\partial z^{m}}\big{(}z^{\rho}(1-|z|^{2})^{\kappa+ m}f\big{)}. \tag{3.24}\]
Thus, in view of (1.2) it is clear that \(\widetilde{\nabla}^{\kappa,\rho}_{m}(e_{n})(z)=\mathcal{Z}^{\kappa,\rho}_{m,n }(z,\overline{z})\) for \(e_{n}(z):=z^{n}\), and therefore
\[\nabla^{\kappa,\rho}(\mathcal{Z}^{\kappa+1,\rho}_{m,n}(z,\overline{z}))= \nabla^{\kappa,\rho}_{1}\left(\nabla^{\kappa+1,\rho}_{1}\circ\nabla^{\kappa+1,\rho}_{2}\circ\cdots\circ\nabla^{\kappa+1,\rho}_{m}\right)(e_{n})=\mathcal{Z} ^{\kappa,\rho}_{m+1,n}(z,\overline{z}). \tag{3.25}\]
Now associated with the Euler differential operator \(E_{z}=z\partial/\partial z\) and the constant \(c^{\kappa,\rho}_{m,n}=m(n+\rho+\kappa+1)\), we define the first order differential operator
\[D^{\kappa,\rho}_{m,n}=\frac{1}{c^{\kappa,\rho}_{m,n}\overline{z}}\left(E_{z}-(n-m)\right)=\frac{1}{c^{\kappa,\rho}_{m,n}}\left(\frac{z}{\overline{z}}\frac{\partial}{\partial z}-\frac{(n-m)}{\overline{z}}\right).\]
Hence, making use of (3.11) combined with the differentiation formula for the Jacobi polynomials in [22, p. 213], we obtain the identity
\[D^{\kappa,\rho}_{m,n}(\mathcal{Z}^{\kappa,\rho}_{m,n})=\mathcal{Z}^{\kappa+1, \rho}_{m-1,n}. \tag{3.26}\]
Therefore, from (3.25) and (3.26) it is immediate that \(\nabla^{\kappa,\rho}\circ D^{\kappa,\rho}_{m,n}(\mathcal{Z}^{\kappa,\rho}_{m,n }(z,\overline{z}))=\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z}).\) A direct computation shows that \(-c^{\kappa,\rho}_{m,n}\overline{z}\nabla^{\kappa,\rho}\circ D^{\kappa,\rho}_ {m,n}\) is given by
\[z(1-|z|^{2})\frac{\partial^{2}}{\partial z^{2}}+\left([m_{\rho}-n+1]-[\kappa+m _{\rho}-n+2]|z|^{2}\right)\frac{\partial}{\partial z}+(m-n)\left(\frac{\rho}{ z}-[\kappa+\rho+1]\overline{z}\right)\]
This shows that the fractional Zernike function \(\mathcal{Z}^{\kappa,\rho}_{m,n}\) satisfies the desired differential equation.
**Remark 3.13**.: In view of (3.25) and (3.26) the considered operators \(\nabla^{\kappa,\rho}\) and \(D^{\kappa,\rho}_{m,n}\) appear as creation and annihilation operators for the fractional Zernike functions.
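As a concrete sanity check of the ladder structure in Remark 3.13, the sketch below (our illustration, for one sample choice of non-integer parameters) tests the creation relation (3.25) and the annihilation relation (3.26) symbolically on the Rodrigues representation; `Z`, `nabla`, and `D` are ad-hoc names.

```python
# Hedged sketch: symbolic test of (3.25) and (3.26) for sample parameters,
# with z and zb := conj(z) treated as independent symbols.
import sympy as sp

z, zb = sp.symbols('z zb')

def Z(m, n, k, r):
    body = z**(n + r) * (1 - z*zb)**(k + m)
    return sp.expand((-1)**m * z**(-r) * (1 - z*zb)**(-k) * sp.diff(body, z, m))

def nabla(f, k, r):
    # first-order operator (3.22), i.e. (3.23) with j = 1
    return -z**(-r) * (1 - z*zb)**(-k) * sp.diff(z**r * (1 - z*zb)**(k + 1) * f, z)

def D(f, m, n, k, r):
    # annihilation operator defined just before (3.26)
    c = m * (n + r + k + 1)
    return (z*sp.diff(f, z) - (n - m)*f) / (c*zb)

k, r, m, n = sp.Rational(3, 2), sp.Rational(1, 2), 2, 1
assert sp.simplify(nabla(Z(m, n, k + 1, r), k, r) - Z(m + 1, n, k, r)) == 0    # (3.25)
assert sp.simplify(D(Z(m, n, k, r), m, n, k, r) - Z(m - 1, n, k + 1, r)) == 0  # (3.26)
```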
**Remark 3.14**.: Let \((P_{j}f)(z)=z^{j}f(z).\) Then, the commutation relation \(P_{j}\circ\nabla^{\kappa,\rho}_{m}\circ P^{-1}_{j}=\nabla^{\kappa,\rho-j}_{m}\) holds for all \(z\in\mathbb{D}^{*}.\) This follows by observing that from (1.2), we have
\[z^{j}\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})=\mathcal{Z}^{\kappa,\rho- j}_{m,n+j}(z,\overline{z}). \tag{3.27}\]
The following can be proved using the close connection of \(\mathcal{Z}^{\kappa_{m},\rho}_{m,n}\) to the \(\beta\)-restricted Zernike functions studied in Section 2.
**Theorem 3.15**.: _For given fixed nonnegative integer \(m\) and reals \(\alpha>-1\) and \(\gamma\) such that \(\kappa_{m}=\alpha-2(\gamma+m)-1\), the fractional Zernike functions \(\mathcal{Z}^{\kappa_{m},\rho}_{m,n}\) for varying \(n\) are eigenfunctions of_
\[-(1-|z|^{2})^{2}\partial\overline{\partial}-(1-|z|^{2})\left(mE-H^{\rho}_{ \kappa_{m}+m+1}(z)\overline{E}\right)+mH^{\rho}_{\kappa_{m}+m+1}(z)|z|^{2} \tag{3.28}\]
_with \(m(\kappa_{m}+m+1)\) as corresponding eigenvalue._
Proof.: For the proof, observe that the fractional Zernike functions \(\mathcal{Z}_{m,n}^{\kappa_{m},\rho}\) are closely connected to the \(\beta\)-restricted Zernike functions \(\psi_{m,n}^{\gamma,\eta}(z,\overline{z})\) by (2.1) for every fixed nonnegative integer \(m\). The latter ones are eigenfunctions of the Hamiltonian \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}\) in (2.7) with \(E_{m}^{\gamma,\alpha}=(m+1)(\alpha-2\gamma-m)\) as corresponding eigenvalue (see \((i)\) in Theorem 2.6). The key observation to conclude is that the operators \(\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}\) and \(\mathcal{L}_{\gamma+a,\eta+b}^{\alpha+2a,\beta+2b,+}\) are unitarily equivalent for arbitrary reals \(a\) and \(b\). More precisely, since \(A_{\gamma,\eta}^{*_{\alpha,\beta}}(h_{a,b}f)=h_{a,b}A_{\gamma+a,\eta+b}^{*_{\alpha+2a,\beta+2b}}(f)\) and \(A_{\gamma,\eta}(h_{a,b}f)=h_{a,b}A_{\gamma+a,\eta+b}(f)\), we obtain
\[\mathcal{L}_{\gamma,\eta}^{\alpha,\beta,+}(h_{a,b}f)=A_{\gamma,\eta}A_{\gamma,\eta}^{*_{\alpha,\beta}}(h_{a,b}f)=h_{a,b}A_{\gamma+a,\eta+b}A_{\gamma+a,\eta+b}^{*_{\alpha+2a,\beta+2b}}(f)=h_{a,b}\mathcal{L}_{\gamma+a,\eta+b}^{\alpha+2a,\beta+2b,+}(f).\]
Subsequently, for \(\kappa_{m}=\alpha-2(\gamma+m)-1,\)\(\rho=\beta-2\eta,\)\(b=-\eta\) and \(a=(\kappa_{m}-\alpha-1)/2=-(\gamma+m+1),\) the fractional Zernike function \(\mathcal{Z}_{m,n}^{\kappa_{m},\rho}\) satisfies
\[\mathcal{L}_{-m-1,0}^{\kappa_{m}-1,\rho,+}\mathcal{Z}_{m,n}^{\kappa_{m},\rho }=E_{m}^{\gamma,\alpha}\mathcal{Z}_{m,n}^{\kappa_{m},\rho}.\]
But from Lemma 2.1, it is clear that the second order partial differential equation in (3.28) is exactly \(\mathcal{L}_{-m-1,0}^{\kappa_{m}-1,\rho,+}-(\kappa_{m}+m+1).\)
**Corollary 3.16**.: _The Zernike polynomials \(\mathcal{Z}_{m,n}^{-1}\), corresponding to the limit case of \(\kappa_{m}=-1\) and fixed \(m\), are harmonic functions for the Laplacian below, in the sense that_
\[\left\{(1-|z|^{2})\partial\overline{\partial}+m\left(E-\overline{E}\right)-m^{2}\right\}\mathcal{Z}_{m,n}^{-1}=0.\]
Proof.: This readily follows by specifying \(\rho=0\) in Theorem 3.15 and choosing \(\alpha\) and \(\gamma\) such that \(m=(\alpha/2)-\gamma.\) Indeed, in this case we have \(\kappa_{m}+m+1=m\) and the left hand side of (3.28) reduces further to the Landau Hamiltonian \((1-|z|^{2})\left\{(1-|z|^{2})\partial\overline{\partial}+m\left(E-\overline{E }\right)\right\}+m^{2}|z|^{2}\) with quantized constant magnetic field of magnitude \(m.\)
### Recurrence and operational formulas.
From the three-term recurrence formula in [22, p. 213] for the Jacobi polynomials one deduces
\[A_{m,b}z^{2}\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})+B_{m,b}z\mathcal{ Z}_{m-1,n}^{\kappa,\rho-1}(z,\overline{z})+C_{m,b}\mathcal{Z}_{m-2,n}^{\kappa, \rho-2}(z,\overline{z})=0\]
valid for all \(m=2,3,4,\cdots\), where we have set \(A_{m,b}:=(b-m)(b+2)\), \(B_{m,b}:=b(b-1)(b-\kappa-1)\) and \(C_{m,b}:=b(m-1)(\kappa+m-1)(b-\kappa-m)\) with \(b=\kappa+n+m+\rho\). However, starting from the Rodrigues formula for the fractional Zernike functions and rewriting it in the form
\[\mathcal{Z}_{m,n}^{\kappa,\rho}=(-1)^{m}z^{-\rho}(1-|z|^{2})^{-\kappa}\partial _{z}^{m-1}\left(\partial_{z}(z^{n+\rho}(1-|z|^{2})^{\kappa+m})\right),\]
one derives the recurrence formula
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+m)\overline{z} \mathcal{Z}_{m-1,n}^{\kappa,\rho}(z,\overline{z})-(n+\rho)(1-|z|^{2})\mathcal{ Z}_{m-1,n-1}^{\kappa+1,\rho}(z,\overline{z}). \tag{3.29}\]
But, by means of (3.27) we can rewrite the recurrence formula (3.29) as
\[z\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})=(\kappa+m)\overline{z} \mathcal{Z}_{m-1,n+1}^{\kappa,\rho-1}(z,\overline{z})-(n+\rho)(1-|z|^{2}) \mathcal{Z}_{m-1,n}^{\kappa+1,\rho-1}(z,\overline{z}). \tag{3.30}\]
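Recurrences of this kind are easy to test symbolically. The following sketch (ours; sample non-integer \(\rho\)) verifies (3.29) from the Rodrigues formula for a few index pairs.

```python
# Hedged sketch: symbolic test of the recurrence (3.29).
import sympy as sp

z, zb = sp.symbols('z zb')

def Z(m, n, k, r):
    body = z**(n + r) * (1 - z*zb)**(k + m)
    return sp.expand((-1)**m * z**(-r) * (1 - z*zb)**(-k) * sp.diff(body, z, m))

k, r = sp.Rational(3, 2), sp.Rational(1, 2)
for m, n in [(1, 1), (2, 1), (3, 2)]:
    lhs = Z(m, n, k, r)
    rhs = (k + m)*zb*Z(m - 1, n, k, r) - (n + r)*(1 - z*zb)*Z(m - 1, n - 1, k + 1, r)
    assert sp.simplify(lhs - rhs) == 0
```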
Moreover, we can prove the following
\[\mathcal{Z}_{m+1,n+1}^{\kappa-1,\rho-1}=[\kappa|z|^{2}+(m-n-\rho)(1-|z|^{2})] \mathcal{Z}_{m,n}^{\kappa,\rho}-m(n+\rho+\kappa+1)\overline{z}(1-|z|^{2}) \mathcal{Z}_{m-1,n}^{\kappa+1,\rho}. \tag{3.31}\]
Indeed, it follows from the use of (3.26), leading to
\[z\frac{\partial}{\partial z}(\mathcal{Z}_{m,n}^{\kappa,\rho})-(n-m)\mathcal{Z}_{m,n}^{\kappa,\rho}=\overline{z}m(n+\rho+\kappa+1)\mathcal{Z}_{m-1,n}^{\kappa+1,\rho},\]
combined with the derivation formula
\[(1-|z|^{2})\frac{\partial}{\partial z}(\mathcal{Z}_{m,n}^{\kappa,\rho})=-\rho(1-|z|^{2})\mathcal{Z}_{m,n-1}^{\kappa,\rho+1}+\kappa\overline{z}\mathcal{Z}_{m,n}^{\kappa,\rho}-\mathcal{Z}_{m+1,n}^{\kappa-1,\rho}.\]
The latter one follows from the Rodrigues formula (1.2). In the sequel, we obtain non-trivial recurrence formulas of Nielsen type for the fractional Zernike functions. These follow as specific cases of the so-called Burchnall-type representation formulas for the fractional Zernike functions. To state the exact result, we let \(\mathbf{a}_{q,j,\ell}^{\kappa,\rho,m,n}\) and \(\mathbf{b}_{j,k}^{\kappa,\rho,m,n}\), respectively, be the constants given by
\[\mathbf{a}_{q,j,\ell}^{\kappa,\rho,m,n}:=\varepsilon_{\rho,m-j}^{*}\frac{(-1)^{m+j+\ell}m!q!\Gamma(\rho+1)\Gamma(\kappa+m+1)}{\ell!(m-j)!(j-\ell)!(q-\ell)!\Gamma(\rho-m+j+1)\Gamma(\kappa+m+n+1)} \tag{3.32}\]
and
\[\mathbf{b}_{j,k}^{\kappa,\rho,m,n}:=\frac{(-1)^{j+k}m!n!\Gamma(\kappa+m+n+1)} {j!k!(m-j)!(n-k)!\Gamma(\kappa+m+k+1)}. \tag{3.33}\]
**Proposition 3.17**.: _Let \(\kappa,\rho,m\) and \(n\) be as above. Let \(q\) be a nonnegative integer and \(u\) a real such that \(u\geq\max(-\kappa,-1)\). Then, we have_
\[\mathcal{Z}_{m,n+q}^{\kappa,\rho}(z,\overline{z})=z^{q}\sum_{j=0}^{m}\sum_{\ell=0}^{j\wedge q}\mathbf{a}_{q,j,\ell}^{\kappa,\rho,m,n}\left(\frac{1-|z|^{2}}{z}\right)^{m+\ell-j}\mathcal{Z}_{j-\ell,n}^{\kappa+m+\ell-j}(z,\overline{z}), \tag{3.34}\]
\[\mathcal{Z}_{m,n+q}^{\kappa,\rho}(z,\overline{z})=\frac{m!q!\Gamma(\kappa+m+1)}{\Gamma(\kappa+m+n+1)}z^{q}\sum_{j=0}^{m\wedge q}\frac{(-1)^{j}}{j!(m-j)!(q-j)!}\left(\frac{1-|z|^{2}}{z}\right)^{j}\mathcal{Z}_{m-j,n}^{\kappa+j,\rho}(z,\overline{z}) \tag{3.35}\]
_and_
\[\mathcal{Z}_{m,n}^{\kappa+u,\rho}(z,\overline{z})=\frac{\Gamma(\kappa+u+m+1)} {\Gamma(\kappa+u+m+n+1)}\sum_{j=0}^{m}\sum_{k=0}^{n}(-1)^{j+k}\mathbf{b}_{j,k }^{\kappa,\rho,m,n}\mathcal{Z}_{j,k}^{u-j-k}(z,\overline{z}). \tag{3.36}\]
Proof.: Consider the operator
\[B_{m,n}^{\kappa,\rho}(f)=\frac{\partial^{m}}{\partial z^{m}}\left(z^{\rho} \frac{\partial^{n}}{\partial\overline{z}^{n}}\left((1-|z|^{2})^{\kappa+m+n}f \right)\right), \tag{3.37}\]
for every sufficiently differentiable function \(f\). Making use of the Leibniz formula applied from outside to inside, we arrive at the Burchnall type formula
\[B_{m,n}^{\kappa,\rho}(f) =\sum_{j=0}^{m}\binom{m}{j}\frac{\partial^{m-j}}{\partial z^{m-j}} (z^{\rho})\frac{\partial^{j}}{\partial z^{j}}\left(\frac{\partial^{n}}{ \partial\overline{z}^{n}}[(1-|z|^{2})^{\kappa+m+n}f]\right) \tag{3.38}\] \[=z^{\rho}(1-|z|^{2})^{\kappa+m}\sum_{j=0}^{m}\sum_{\ell=0}^{j} \sum_{k=0}^{n}\mathbf{a}_{m,n,j,\ell,k}^{\kappa,\rho}z^{j-m}(1-|z|^{2})^{k+\ell -j}\mathcal{Z}_{j-\ell,n-k}^{\kappa+m+k+\ell-j}(z,\overline{z})\frac{\partial ^{\ell+k}}{\partial z^{\ell}\partial\overline{z}^{k}}(f),\]
where the involved constant is given by
\[\mathbf{a}_{m,n,j,\ell,k}^{\kappa,\rho}:=(-1)^{m+n+k}\frac{n!(q- \ell)!\Gamma(\kappa+m+n+1)}{q!k!(n-k)!\Gamma(\kappa+m+1)}\mathbf{a}_{q,j,\ell }^{\kappa,\rho,m,n}.\]
Similarly, we get (by the Leibniz formula from inside to outside)
\[B_{m,n}^{\kappa,\rho}(f) =\frac{\partial^{m}}{\partial z^{m}}\left(z^{\rho}\left[\sum_{k=0 }^{n}\binom{n}{k}\frac{\partial^{n-k}}{\partial\overline{z}^{n-k}}\left((1-|z |^{2})^{\kappa+m+n}\right)\frac{\partial^{k}}{\partial\overline{z}^{k}}(f) \right]\right)\] \[=(-1)^{m+n}z^{\rho}\sum_{j=0}^{m}\sum_{k=0}^{n}\mathbf{b}_{m,n,j, k}^{\kappa,\rho}(1-|z|^{2})^{\kappa+j+k}\mathcal{Z}_{m-j,n-k}^{\kappa+j+k, \rho}(z,\overline{z})\frac{\partial^{j+k}}{\partial z^{j}\partial\overline{z} ^{k}}(f). \tag{3.39}\]
Therefore, since the action of \(B_{m,n}^{\kappa,\rho}\) on the specific case of \(f=e_{q}=z^{q}\) reduces to
\[B_{m,n}^{\kappa,\rho}(z^{q})=(-1)^{m+n}(\kappa+m+1)_{n}z^{\rho}(1-|z|^{2})^{ \kappa}\mathcal{Z}_{m,n+q}^{\kappa,\rho}(z,\overline{z}),\]
we obtain (3.34) (resp. (3.35)) from (3.38) (resp. (3.39)). The identity (3.36) follows from (3.39) by considering the particular case of \(f(z)=(1-|z|^{2})^{u}\) and observing that \(B_{m,n}^{\kappa,\rho}((1-|z|^{2})^{u}g)=B_{m,n}^{\kappa+u,\rho}(g).\)
**Remark 3.18**.: The Burchnall representation in (3.39) for \(f\) a holomorphic function simply reads
\[B_{m,n}^{\kappa,\rho}(f)=(-1)^{m+n}z^{\rho}h^{\kappa}\sum_{j=0}^{m}\mathbf{b}_ {m,n,j,0}^{\kappa,\rho}(1-|z|^{2})^{j}\mathcal{Z}_{m-j,n}^{\kappa+j,\rho}(z, \overline{z})\frac{\partial^{j}(f)}{\partial z^{j}}. \tag{3.40}\]
An analogous representation for arbitrary \(f\) (not necessarily holomorphic) can be developed using the differential operator
\[A_{m,n}^{\rho,\kappa}(f)=\frac{\partial^{m}}{\partial z^{m}} \left(z^{n+\rho}(1-|z|^{2})^{\kappa+m}f\right). \tag{3.41}\]
More precisely, one obtains
\[A_{m,n}^{\rho,\kappa}(f)=(-1)^{m}m!z^{\rho}h^{\kappa}\sum_{j=0} ^{m}\frac{(-1)^{j}(1-|z|^{2})^{j}}{j!(m-j)!}\mathcal{Z}_{m-j,n}^{\kappa+j, \rho}(z,\overline{z})\frac{\partial^{j}f}{\partial z^{j}}. \tag{3.42}\]
### Generating and bilinear generating functions.
The aim here is to obtain some generating and bilinear generating functions for the fractional Zernike functions. First, it is worth noting that from Proposition 3.5, and making use of the generating function for the Jacobi polynomials in [22, p. 213], we obtain the generating function
\[\sum_{m=0}^{+\infty}\frac{u^{m}}{m!}\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{ z})=2^{n}z^{2n+1-m+\rho+\kappa}\frac{(z-u+R(u,z))^{m-n-\rho}(z+u+R(u,z))^{- \kappa}}{R(u,z)}.\]
Here \(R(u,z)=1\) for \(u=0\) and \(R(u,z)=(z^{2}+2uz(1-2|z|^{2})+z^{2}(1-2|z|^{2})^{2})^{1/2}\) when \(u\neq 0\).
The next result expresses special bilinear generating functions as derivatives of the confluent and Gauss hypergeometric functions by means of the partial differential operator
\[R_{m}^{\kappa,\rho}f(z)=\frac{1}{(z\overline{w})^{\rho}(1-|z|^{2})^{\kappa}(1- |w|^{2})^{\kappa}}\frac{\partial^{2m}}{\partial z^{m}\partial\overline{w}^{m} }\left((z\overline{w})^{\rho}(1-|z|^{2})^{\kappa+m}(1-|w|^{2})^{\kappa+m}f \right)(z)\]
for every sufficiently differentiable function \(f\).
**Proposition 3.19**.: _We have_
\[\sum_{n=0}^{+\infty}\frac{(a)_{n}}{n!(c)_{n}}\mathcal{Z}_{m,n}^{ \kappa,\rho}(z,\overline{z})\overline{\mathcal{Z}_{m,n}^{\kappa,\rho}(w, \overline{w})}=R_{m}^{\kappa,\rho}\left({}_{1}F_{1}\left(\begin{array}{c}a \\ c\end{array}\right|z\overline{w}\right)\right) \tag{3.43}\]
_and_
\[\sum_{n=0}^{+\infty}\frac{(a)_{n}(b)_{n}}{n!(c)_{n}}\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})\overline{\mathcal{Z}_{m,n}^{\kappa,\rho}(w, \overline{w})}=R_{m}^{\kappa,\rho}\left({}_{2}F_{1}\left(\begin{array}{c}a,b\\ c\end{array}\right|z\overline{w}\right)\right). \tag{3.44}\]
Proof.: This readily follows by means of the Rodrigues formula for \(\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z})\). Indeed, we can rewrite the left-hand side in (3.43) as
\[\frac{1}{(z\overline{w})^{\rho}(1-|z|^{2})^{\kappa}(1-|w|^{2})^{\kappa}}\frac{\partial^{2m}}{\partial z^{m}\partial\overline{w}^{m}}\left((z\overline{w})^{\rho}[(1-|z|^{2})(1-|w|^{2})]^{\kappa+m}\sum_{n=0}^{+\infty}\frac{(a)_{n}}{(c)_{n}}\frac{(z\overline{w})^{n}}{n!}\right),\]
which reduces further to (3.43). The formula (3.44) follows in a similar way.
**Remark 3.20**.: For the special values of \(a=1\), \(b=\kappa+\rho+1\) and \(c=\rho+1\) with \(\rho=\beta-2\eta\) and \(\kappa=\kappa_{m}=\alpha-2(\gamma+m)-1\), the quantity \(n!(c)_{n}/(a)_{n}(b)_{n}\) reduces to the square norm of \(\psi_{m,n}^{\gamma,\eta}\) in (2.12) up to a multiplicative constant \(d_{m}^{\kappa,\rho}\) independent of \(n\). Thus, the formula in (3.44) leads to the reproducing kernel
\[K_{\gamma,\eta,m}^{\alpha,\beta}(z,w)=\sum_{n=0}^{+\infty}\frac{\psi_{m,n}^{ \gamma,\eta}(z,\overline{z})\overline{\psi_{m,n}^{\gamma,\eta}(w,\overline{w} )}}{\left\|\psi_{m,n}^{\gamma,\eta}\right\|_{\alpha,\beta}^{2}}\]
of the \(m\)-th generalized \(\beta\)-modified Bergman space introduced in Remark 2.8. In fact, we have
\[K_{\gamma,\eta,m}^{\alpha,\beta}(z,w)=d_{m}^{\kappa,\rho}\frac{[(1-|z|^{2})(1- |w|^{2})]^{\gamma+m}}{|zw|^{2\eta}}R_{m}^{\kappa,\rho}\left({}_{2}F_{1}\left( \begin{array}{c}a,b\\ c\end{array}\right|z\overline{w}\right)\right).\]
A closed formula for \(K^{\alpha,\beta}_{\gamma,\eta,m}(z,w)\) needs further investigation.
Below, we prove a bilinear generating function for the fractional Zernike functions that looks like the Hardy–Hille formula for the generalized Laguerre polynomials. Thus, we deal with
\[G^{\kappa,\rho}_{n}(z,w|t):=\sum_{m=0}^{+\infty}\frac{t^{m}\mathcal{Z}^{\kappa, \rho}_{m,n}(z,\overline{z})\mathcal{Z}^{\kappa,\rho}_{m,n}(\overline{w},w)}{m! (\kappa+1)_{m}}. \tag{3.45}\]
**Proposition 3.21**.: _For every sufficiently small \(t\), the closed expression of \(G^{\kappa,\rho}_{n}(z,w|t)\) is given by_
\[G^{\kappa,\rho}_{n}(z,w|t)=\frac{(z+tw)^{n+\rho}(\overline{w}+t\overline{z})^{n+\rho}}{(z\overline{w})^{\rho}(1+t\overline{z}w)^{2(n+\rho)+\kappa+1}}\,{}_{2}F_{1}\left(\begin{array}{c}-n-\rho,-n-\rho\\ \kappa+1\end{array}\bigg{|}-\frac{t(1-|z|^{2})(1-|w|^{2})}{(z+tw)(\overline{w}+t\overline{z})}\right).\]
Proof.: Using the hypergeometric representation (Proposition 3.5) we can rewrite \(G^{\kappa,\rho}_{n}(z,w|t)\) as
\[G^{\kappa,\rho}_{n}(z,w|t)=(z\overline{w})^{n}\sum_{m=0}^{+ \infty}\frac{(\kappa+1)_{m}}{m!}(t\overline{z}w)^{m}\] \[{}_{2}F_{1}\left(\begin{array}{c}-m,-n-\rho\\ \kappa+1\end{array}\bigg{|}1-\frac{1}{|z|^{2}}\right){}_{2}F_{1}\left( \begin{array}{c}-m,-n-\rho\\ \kappa+1\end{array}\bigg{|}1-\frac{1}{|w|^{2}}\right).\]
Thus, one concludes the result in Proposition 3.21 by making use of the Meixner bilinear relation in [21, Eq. (12), p. 85].
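Closed-form identities of this kind are also convenient to test numerically. The sketch below (our illustration; the parameter values are arbitrary) compares a truncated version of the series (3.45), evaluated from the Rodrigues formula at real \(z,w\in(0,1)\), against the closed expression of Proposition 3.21; the two printed numbers should agree closely if the identity is transcribed correctly.

```python
# Hedged sketch: numerically compare the truncated series (3.45) with the
# closed form of Proposition 3.21 at real z, w and small t.
import sympy as sp
from mpmath import hyp2f1

zs, zbs = sp.symbols('z zb')

def Zval(m, n, k, r, x):
    # fractional Zernike function at a real point (so zb = z = x)
    body = zs**(n + r) * (1 - zs*zbs)**(k + m)
    expr = (-1)**m * zs**(-r) * (1 - zs*zbs)**(-k) * sp.diff(body, zs, m)
    return float(expr.subs({zs: x, zbs: x}))

k, r, n = 1.5, 0.5, 1
zv, wv, t = 0.4, 0.3, 0.05

series = sum(t**m * Zval(m, n, k, r, zv) * Zval(m, n, k, r, wv)
             / (sp.factorial(m) * sp.rf(k + 1, m)) for m in range(13))

a, b = zv + t*wv, wv + t*zv
closed = (a*b)**(n + r) / ((zv*wv)**r * (1 + t*zv*wv)**(2*(n + r) + k + 1)) \
         * hyp2f1(-(n + r), -(n + r), k + 1, -t*(1 - zv**2)*(1 - wv**2)/(a*b))
print(float(series), float(closed))
```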
### Integral representations
By means of the classical integral representations for the Gauss hypergeometric functions in the right-hand side of (3.6) and (3.9), or for the Jacobi polynomials in (3.11), we can derive different integral representations for \(\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\bar{z})\). However, we give below some non-trivial ones. The first one is based on the Cauchy integral formula for holomorphic functions, following in spirit the idea of Kazantsev and Bukhgeim [17].
**Theorem 3.22**.: _The fractional Zernike functions \(\mathcal{Z}^{\kappa,\rho}_{m,n}\) admit the following integral representation_
\[\mathcal{Z}^{\kappa,\rho}_{m,n}(z,\overline{z})=\frac{(-1)^{m}m!} {2\pi i}z^{-\rho}(1-|z|^{2})^{-\kappa}\oint_{|t|=1}t^{n+m+\rho+\kappa}\frac{( \overline{t}-\overline{z})^{\kappa+m}}{(t-z)^{m+1}}dt. \tag{3.46}\]
Proof.: Make use of the ordinary binomial expansion with the factorial function [27, p. 56],
\[(1-\xi)^{-a}=\sum_{j=0}^{+\infty}{(a)_{j}\frac{\xi^{j}}{j!}},\]
to expand the factor \((1-|z|^{2})^{\kappa+m}\) in the expression of \(\mathcal{Z}^{\kappa,\rho}_{m,n}\) given by (3.3). Also, we need the fact that
\[\frac{\partial^{m}}{\partial z^{m}}(z^{j+n+\rho})=\frac{m!}{2\pi i }\oint_{|t|=1}\frac{t^{j+n+\rho}}{(t-z)^{m+1}}dt,\]
which follows from the Cauchy integral formula for the \(m\)-th derivative applied to the function \(\varphi(t)=t^{j+n+\rho}\). Thus, we obtain
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(z,\overline{z}) =(-1)^{m}z^{-\rho}(1-|z|^{2})^{-\kappa}\sum_{j=0}^{+\infty}\frac{ (-\kappa-m)_{j}}{j!}\overline{z}^{j}\frac{\partial^{m}}{\partial z^{m}}\left(z ^{n+\rho+j}\right)\] \[=\frac{(-1)^{m}m!}{2\pi i}z^{-\rho}(1-|z|^{2})^{-\kappa}\oint_{|t |=1}\frac{t^{n+\rho}}{(t-z)^{m+1}}\left(\sum_{j=0}^{+\infty}(-\kappa-m)_{j} \frac{(t\overline{z})^{j}}{j!}\right)dt\] \[=\frac{(-1)^{m}m!}{2\pi i}z^{-\rho}(1-|z|^{2})^{-\kappa}\oint_{|t |=1}\frac{t^{n+\rho}(1-t\overline{z})^{\kappa+m}}{(t-z)^{m+1}}dt\] \[=\frac{(-1)^{m}m!}{2\pi i}z^{-\rho}(1-|z|^{2})^{-\kappa}\oint_{|t |=1}t^{n+m+\rho+\kappa}\frac{(\overline{t}-\overline{z})^{\kappa+m}}{(t-z)^{m +1}}dt.\]
This proves (3.46).
The next integral representation for the fractional Zernike functions appears as corollary of the bilinear generating function in Proposition 3.21.
**Proposition 3.23**.: _Let \(\gamma_{m,n}^{\kappa,\rho}\) be as in (3.17). The fractional Zernike functions have the integral representation_
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(w,\overline{w})=\frac{m!(\kappa+1)_{m}\gamma_{m,n}^{\kappa,\rho}}{w^{\rho}t^{m}} \int_{\mathbb{D}}\Xi_{m,n}^{\kappa,\rho}(z,w|t)\,{}_{2}F_{1}\left(\begin{array}{c}-m,n+\kappa+\rho+1\\ \kappa+1\end{array}\bigg{|}1-|z|^{2}\right) \tag{3.47}\] \[\times{}_{2}F_{1}\left(\begin{array}{c}-n-\rho,-n-\rho\\ \kappa+1\end{array}\bigg{|}-\frac{t(1-|z|^{2})(1-|w|^{2})}{(w+tz)(\overline{z}+t\overline{w})}\right)d\lambda(z);\]
_where_
\[\Xi_{m,n}^{\kappa,\rho}(z,w|t):=\frac{\overline{z}^{n-n-\rho}(w+tz)^{n+\rho}( \overline{z}+t\overline{w})^{n+\rho}}{(1+tz\overline{w})^{2(n+\rho)+\kappa+1 }}(1-|z|^{2})^{\kappa}.\]
Proof.: Starting from the expansion (3.45) one gets
\[\mathcal{Z}_{m,n}^{\kappa,\rho}(w,\overline{w})=\frac{m!(\kappa+1)_{m}\gamma_ {m,n}^{\kappa,\rho}}{t^{m}}\overline{\langle G_{n}^{\kappa,\rho}(\cdot,w|t), \mathcal{Z}_{m,n}^{\kappa,\rho}\rangle_{L_{\rho}^{2,\kappa}(\mathbb{D})}}.\]
Next, by the explicit expression of the kernel function \(G_{n}^{\kappa,\rho}(z,w|t)\) given in Proposition 3.21 combined with the hypergeometric representation in (3.6) we derive the integral representation (3.47).
**Remark 3.24**.: The integral in the right-hand side of (3.47) is a rigid integral on the complex domain \(\mathbb{D}\), in the sense that it is nontrivial and cannot be reduced to a classical integral on real domains.
### Completeness.
Here we discuss the completeness of the fractional Zernike functions in \(L_{\rho}^{2,\kappa}(\mathbb{D}).\) Notice for instance that for \(\rho=0,1,2,\cdots\) the functions \(z^{\rho}\mathcal{Z}_{m,n}^{\kappa,\rho}\) for varying \(m,n+\rho=0,1,2,\cdots\) constitute an orthogonal basis of \(L_{\rho}^{2,\kappa}(\mathbb{D})\) since they are closely connected to the complex Zernike polynomials \(\mathcal{Z}_{m,n+\rho}^{\kappa}\) by (1.3). The latter ones are known to form an orthogonal basis for the Hilbert space \(L_{0}^{2,\kappa}(\mathbb{D})\) (see e.g., [6, 16]).
A direct proof starts from the observation that each \((\mathcal{Z}_{m,n}^{\kappa,\ell})_{m,n}\) is a polynomial of exact degrees \(m\) in \(\overline{z}\) and \(n\) in \(z\), so that any \(z^{n}\overline{z}^{m}\) can be rewritten as
\[\overline{z}^{m}z^{n}=\sum_{j=0}^{m}\sum_{k=0}^{n}a_{j,k}^{m,n}\,\mathcal{Z}_{ j,k}^{\kappa,\ell}(z,\overline{z}).\]
The coefficients \(a_{j,k}^{m,n}\) are explicit and can be computed by the formula
\[a_{j,k}^{m,n}=\gamma_{m,n}^{\kappa,\rho}\int_{\mathbb{D}}\overline{z}^{m}z^{n} \overline{\mathcal{Z}_{j,k}^{\kappa,\ell}}(z,\overline{z})|z|^{2\rho}(1-|z|^{ 2})^{\kappa}d\lambda(z),\]
where \(\gamma_{m,n}^{\kappa,\rho}\) is as in (3.17). In the sequel, we consider only the case of non-integer \(\rho>-1\).
**Theorem 3.25**.: _The functions \(\Upsilon_{m,s}^{\kappa,\rho}:=\overline{z}^{-s/2}\mathcal{Z}_{m,m+s/2}^{ \kappa,\rho-s/2}\), for varying nonnegative integer \(m\) and varying integer \(s\), form an orthogonal complete system in \(L_{\rho}^{2,\kappa}(\mathbb{D})\)._
_Proof._ The key observation is that for \(z=\sqrt{\left(1+x\right)/2}\,e^{i\theta}\) with \(x\in[-1,1[\) and \(\theta\in[0,2\pi[\) we have
\[\Upsilon_{m,s}^{\kappa,\rho}(z,\overline{z})=\frac{z^{s}}{|z|^{s}}\mathcal{Z}_ {m,m}^{\kappa,\rho}(z,\overline{z})=m!e^{is\theta}P_{m}^{(\kappa,\rho)}(x).\]
Subsequently, their orthogonality in \(L_{\rho}^{2,\kappa}(\mathbb{D})\) readily follows using the orthogonality of the Jacobi polynomials \(P_{m}^{(\kappa,\rho)}\) in the Hilbert space \(L_{\kappa,\rho}^{2}\) of square integrable functions on \([-1,1[\) with respect to the measure \((1-x)^{\kappa}(1+x)^{\rho}dx\). More exactly, we have
\[\int_{\mathbb{D}}\Upsilon_{m,s}^{\kappa,\rho}(z,\overline{z})\overline{\Upsilon_{n,r}^{\kappa,\rho}(z,\overline{z})}|z|^{2\rho}(1-|z|^{2})^{\kappa}d\lambda(z)=\frac{m!n!\pi}{2^{\kappa+\rho+1}}\Big{\|}P_{m}^{(\kappa,\rho)}\Big{\|}_{L_{\kappa,\rho}^{2}}^{2}\,\delta_{m,n}\delta_{s,r}.\]
For their completeness, let \(f\in F^{\perp}\), the orthogonal complement of \(F=Span\{\Upsilon_{m,s}^{\kappa,\rho};m=0,1,2,\cdots,s\in\mathbb{Z}\}\) in \(L_{\rho}^{2,\kappa}(\mathbb{D})\). Thus, by Proposition 3.5, the assumption that \(f\in F^{\perp}\) becomes equivalent to
\[\left\langle f,\Upsilon_{m,s}^{\kappa,\rho}\right\rangle_{L_{\rho}^{2,\kappa} }= \frac{m!}{2^{\kappa+\rho+2}}\int_{-1}^{1}\left(1-x\right)^{\kappa/2} \left(1+x\right)^{\rho/2}P_{m}^{(\kappa,\rho)}(x)\hat{f}_{s}^{\kappa,\rho}(x)dx=0\]
for every integer \(s\) and \(m=0,1,2,\cdots\). The involved function is defined by
\[\hat{f}_{s}^{\kappa,\rho}(x):=\left(1-x\right)^{\kappa/2}\left(1+x\right)^{ \rho/2}\hat{f}_{s}(x),\]
where \(\hat{f}_{s}(x)\) denotes the \(s\)-th Fourier coefficient of the function \(f_{x}:=\theta\longmapsto f(\sqrt{\left(1+x\right)/2}\,e^{i\theta})\) for every fixed \(x\in[-1,1[\). Clearly \(\hat{f}_{s}^{\kappa,\rho}\) belongs to \(L^{2}([-1,1[,dx)\) since, by means of the Cauchy–Schwarz inequality and Fubini's theorem, one gets
\[\int_{-1}^{1}|\hat{f}_{s}^{\kappa,\rho}(x)|^{2}dx\leq 2\pi\int_{-1}^{1} \left(1-x\right)^{\kappa}\left(1+x\right)^{\rho}\left(\int_{0}^{2\pi}\left|f \left(\sqrt{\frac{1+x}{2}}e^{i\theta}\right)\right|^{2}d\theta\right)dx=2^{ \kappa+\rho+3}\pi\|f\|_{L_{\rho}^{2,\kappa}}^{2}.\]
Therefore, \(\hat{f}_{s}^{\kappa,\rho}=0\) a.e. on \([-1,1[\) for every \(s\in\mathbb{Z}\), since the functions \(\left(1-x\right)^{\kappa/2}\left(1+x\right)^{\rho/2}P_{m}^{(\kappa,\rho)}\), \(m=0,1,2,\cdots\), form an orthogonal basis of \(L^{2}([-1,1[,dx)\). This implies in particular that the Fourier transform of \(f_{x}\in L^{2}([0,2\pi[,d\theta)\) satisfies \(\mathcal{F}(f_{x})(s)=\hat{f}_{s}(x)=0\) for every \(s\in\mathbb{Z}\) and every fixed \(x\in[-1,1[\setminus N\), where the exceptional set \(N:=\cup_{s}\{x\in[-1,1[;\,\hat{f}_{s}(x)\neq 0\}\) is null. This proves that \(f_{x}=0\) a.e. on \([0,2\pi[\) for almost every \(x\in[-1,1[\), and hence that \(f\) vanishes almost everywhere on \(\mathbb{D}\). This completes the proof.
**Corollary 3.26**.: _The Hilbert spaces \(A_{s}^{\kappa,\rho}(\mathbb{D}):=\overline{Span\{\overline{z}^{-s/2}\mathcal{Z}_{m,m+s/2}^{\kappa,\rho-s/2};m=0,1,2,\cdots\}}^{L_{\rho}^{2,\kappa}}\), \(s\in\mathbb{Z}\), define a Hilbertian orthogonal decomposition of \(L_{\rho}^{2,\kappa}(\mathbb{D})\). Namely, we have_
\[L_{\rho}^{2,\kappa}(\mathbb{D})=\bigoplus_{s\in\mathbb{Z}}A_{s}^{\kappa,\rho} (\mathbb{D}).\]
**Definition 3.27**.: The closed subspaces \(A_{s}^{\kappa,\rho}(\mathbb{D})\) are called generalized (poly-meromorphic) Bergman spaces of second kind.
**Acknowledgement:** The assistance of the members of "Ahmed Intissar" and "Analysis, P.D.E. & Spectral Geometry" seminars is gratefully acknowledged.
**Data availability statement:** All data generated or analyzed during this study are included in this article.
**Conflict of interest:** The authors declare that they have no conflict of interest.
|
2302.11581 | HEROES: The Hawaii eROSITA Ecliptic Pole Survey Catalog | We present a seven band (g, r, i, z, y, NB816, NB921) catalog derived from a
Subaru Hyper Suprime-Cam (HSC) imaging survey of the North Ecliptic Pole (NEP).
The survey, known as HEROES, consists of 44 sq. deg of contiguous imaging
reaching median 5-sigma depths of g: 26.5, r: 26.2, i: 25.7, z: 25.1, y: 23.9,
NB816: 24.4, NB921: 24.4 mag. We reduced these data with the HSC pipeline
software hscPipe, and produced a resulting multiband catalog containing over 25
million objects. We provide the catalog in three formats: (1) a collection of
hscPipe format forced photometry catalogs, (2) a single combined catalog
containing every object in that dataset with selected useful columns, and (3) a
smaller variation of the combined catalog with only essential columns for basic
analysis or low memory machines. The catalog uses all the available HSC data on
the NEP and may serve as the primary optical catalog for current and future NEP
deep fields from instruments and observatories such as SCUBA-2, eROSITA,
Spitzer, Euclid, and JWST. | A. J. Taylor, A. J. Barger, L. L. Cowie, G. Hasinger, E. M. Hu, A. Songaila | 2023-02-22T19:00:01Z | http://arxiv.org/abs/2302.11581v2 | # HEROES: The Hawaii eROSITA Ecliptic Pole Survey Catalog
###### Abstract
We present a seven band (\(g\), \(r\), \(i\), \(z\), \(y\), NB816, NB921) catalog derived from a Subaru Hyper Suprime-Cam (HSC) imaging survey of the North Ecliptic Pole (NEP). The survey, known as HEROES, consists of 44 deg\({}^{2}\) of contiguous imaging reaching median 5\(\sigma\) depths of \(g\): 26.5, \(r\): 26.2, \(i\): 25.7, \(z\): 25.1, \(y\): 23.9, NB816: 24.4, NB921: 24.4 mag. We reduced these data with the HSC pipeline software hscPipe, and produced a resulting multiband catalog containing over 25 million objects. We provide the catalog in three formats: (1) a collection of hscPipe format forced photometry catalogs, (2) a single combined catalog containing every object in that dataset with selected useful columns, and (3) a smaller variation of the combined catalog with only essential columns for basic analysis or low memory machines. The catalog uses all the available HSC data on the NEP and may serve as the primary optical catalog for current and future NEP deep fields from instruments and observatories such as SCUBA-2, eROSITA, Spitzer, Euclid, and JWST.
Catalogs (205), Galaxy counts (588), Surveys (1671), Observational cosmology (1146)

A. J. Taylor, A. J. Barger, L. L. Cowie, G. Hasinger, E. M. Hu, A. Songaila
## 1 Introduction
Since its first light in 2013, Hyper Suprime-Cam (HSC; Miyazaki et al., 2018) on the Subaru 8.2m Telescope has been the premier wide field optical imager on 6-10 meter class telescopes. The large collecting area of Subaru and the wide field of view of HSC (\(1\fdg 5\)) allow for the efficient observation of both wide and deep surveys. The largest of these projects--the HSC Subaru Strategic Program (HSC-SSP)--has fully mapped 670 deg\({}^{2}\) of sky in \(grizy\) broadband filters and over 1470 deg\({}^{2}\) in partially observed (incomplete coverage in \(grizy\)) area (Aihara et al., 2018, 2019, 2022). HSC-SSP has also produced deep observations in the DEEP2-3, SXDS+XMM-LSS, ELAIS-N1, and COSMOS fields totaling \(>\)30 deg\({}^{2}\) of imaging in \(grizy\) broadband filters, as well as NB387, NB816, and NB921 narrowband filters, all at a 5\(\sigma\) depth \(>\)25 mag.
In 2015, given the enormous potential of HSC and its available narrowband filters, we designed and began to execute HEROES: the Hawaii EROsita Ecliptic pole Survey. Our goal was to produce a survey of the North Ecliptic Pole (NEP) that would match the filter coverage and imaging depths of the HSC-SSP Deep fields with a much wider contiguous area, as well as complement the then-future deepest eROSITA X-ray observations (Merloni et al., 2012).
In recent years, the NEP has become a major focus of a number of ground and space-based surveys and missions spanning sub-millimeter to X-ray wavelengths. HEROES provides complementary wide-field optical broadband and narrowband data to these surveys.
At sub-millimeter wavelengths, the NEP has also been extensively observed with SCUBA-2 on the James Clerk Maxwell Telescope through the S2CLS (0.6 deg\({}^{2}\), 850 \(\mu\)m; Geach et al., 2017), NEPSC2 (2 deg\({}^{2}\), 850 \(\mu\)m; Shim et al., 2020), and S2TDF (0.087 deg\({}^{2}\), 850 \(\mu\)m; Hyun et al., 2023) surveys. HEROES will provide sub-millimeter studies with corresponding optical data that may be used to find optical counterparts for sub-millimeter bright sources. The combination of optical/NIR and sub-millimeter flux may better constrain photometric redshifts and spectral energy distribution fittings for the determination of stellar mass, star formation rate, age, and dust attenuation for such sources (e.g. S. McKay et al. submitted).
In the infrared, the NEP contains the Spitzer IRAC Dark Field (Krick et al., 2008, 2009; Frost et al., 2009), as well as the upcoming 20 deg\({}^{2}\) Euclid Deep Field North (Amendola et al., 2013, 2018). HEROES will serve as a natural complement and extension of these datasets into optical wavelengths, providing broadband coverage from 0.4-0.8 \(\mu\)m (when combined with Spitzer/IRAC) and providing individual \(griz\) magnitudes to help better complement the upcoming single 550-900 nm very-broadband Euclid/VIS observations and 0.92-2 \(\mu\)m Euclid/NISP \(YJH\) photometric and 1.1-2 \(\mu\)m slitless spectroscopic observations (Laureijs et al., 2011). The HEROES data will permit improved SED fitting for Euclid detected sources and ultimately provide a more robust understanding of the properties of these objects.
In X-rays, the NEP is already home to the deepest eROSITA X-ray observations (Merloni et al., 2012), and will contain the future SPHEREx Deep North Field (Dore et al., 2016, 2018). For these missions, HEROES may provide insight into the optical properties of X-ray detected AGN, including photometric redshifts, as well as provide superior target astrometry when compared to the X-ray measurements. For example, Radzom et al. (2022) used Chandra data in conjunction with HSC data in the SSA22 field to produce X-ray luminosity functions at redshifts \(z=0.2-4\). With the combination of HEROES, eROSITA, SPHEREx, and Euclid, this type of study could be replicated in the NEP with \(>500\times\) the survey area.
Moreover, HEROES and the NEP are especially important for space-based observatories that orbit the Sun-Earth L2 Lagrange point, as the NEP and the South Ecliptic Pole are typically part of any such observatory's continuous viewing zone. As such, the NEP contains the aforementioned Spitzer IRAC Dark Field, eROSITA Deep Field, upcoming Euclid Deep Field North, and SPHEREx Deep Field North, as well as the JWST Time Domain Field (TDF; part of the PEARLS project; Windhorst et al., 2017, 2022).
HEROES was initially reduced in 2017 using the Pan-STARRS Image Processing Pipeline (IPP; Magnier et al., 2016, 2020, 2020). This version of HEROES had incomplete coverage in the \(r\) and \(y\) bands at low R.A. pointings and did not have the additional JWST TDF pointing (see §2 below).
HEROES is the largest contiguous HSC narrowband survey to date. As such, we previously used the wide-field narrowband coverage in the initial HEROES dataset for studies of Lyman-\(\alpha\) Emitters (LAEs) near the epoch of reionization at \(z=5.7-6.6\)(Songaila et al., 2018, 2022; Taylor et al., 2020, 2021), as well as the development of a broadband selection technique for emission line galaxies (Rosenwasser et al., 2022).
In 2021, we completed the final HEROES observations. Here we present the photometric catalog of the complete and newly re-reduced version of HEROES, which also incorporates all the archival HSC data on this field.
In Section 2, we describe the HSC data. In Section 3, we summarize the data reduction and processing. In Section 4, we present the final catalog and describe its format and availability. In Section 5, we verify the catalog's quality and measure its depth in each filter. Finally, in Section 6, we demonstrate both a \(z=6.6\) LAE sample selection and \(g\), \(r\), and \(i\)-band dropout selections.
We assume \(\Omega_{M}=0.3\), \(\Omega_{\Lambda}=0.7\), and \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) throughout. All magnitudes are given in the AB magnitude system, where an AB magnitude is defined by \(m_{AB}=-2.5\log f_{\nu}-48.60\). Here \(f_{\nu}\) is the flux of the source in units of ergs cm\({}^{-2}\) s\({}^{-1}\) Hz\({}^{-1}\).
## 2 Observations
We summarize the NEP HSC observations in Table 1 and illustrate the pointing centers in Figure 1. Our HEROES observations contributed 2574 targeted shots (a "shot" is a single HSC exposure consisting of 112 mosaicked CCDs). We used a hexagonal grid of 38 pointings separated by \(1\fdg 0\) to balance pointing overlap with the \(1\fdg 5\) diameter HSC field of view (see Figure 1). In our pointings, we used a six point mosaic dither pattern with one central shot and five shots evenly distributed around a \(2\farcm 0\) radius circle centered on the central shot (see Songaila et al., 2018, their Figure 1). In addition to our pointing grid, we conducted observations at a specialized pointing with the HSC-r2, HSC-z, and NB921 filters centered on the JWST TDF (Windhorst et al., 2017, 2022). Our HEROES observations were taken using the standard set of HSC broadband filters: HSC-g, HSC-r2, HSC-i2, HSC-z, HSC-Y, and the NB816 and NB921 narrowband filters (hereafter, referred to as \(g\), \(r\), \(i\), \(z\), \(y\), NB816, and NB921).
In addition to the HEROES NEP observations, the AKARI-HSC survey (Oi et al., 2021, hereafter, AKARI) previously observed 5.4 deg\({}^{2}\) of the NEP in \(grizy\) broadbands to complement the infrared coverage of the AKARI satellite (Matsuhara et al., 2006; Murakami et al., 2007; Lee et al., 2009). The Hawaii Two-0 (H20; Beck et al., 2020) survey also combined parts of the HEROES raw data with new observations to provide \(griz\) broadband coverage to complement the upcoming Euclid Deep Field North (Amendola et al., 2013, 2018). We incorporated 181 archival shots in HSC \(g\), \(z\), and \(y\) filters from
the AKARI (Oi et al., 2021) and 305 archival shots in \(g\), \(r\), \(i\), and \(z\) filters from H20 (Beck et al., 2020) into our present HEROES data reduction. Altogether, these observations cover 44 deg\({}^{2}\) with full 7-band imaging.
## 3 Data Reduction
We processed the HEROES dataset using the standard procedure provided by the hscPipe pipeline version 8.4 (Bosch et al., 2018). We performed this processing on the National Astronomical Observatory of Japan Large-scale Data Analysis System computing cluster with access provided from our September 2021 HSC observations. hscPipe is a comprehensive pipeline for the reduction and processing of HSC data that acts in 5 main steps: Bias/Dark/Flat/Fringe processing, Single visit processing, Mosaicking, Coadding images, and Multiband analysis. In each of these steps, we used the default hscPipe parameters, unless noted below.
We assembled preprocessed Bias/Dark/Flat/Fringe data from the HSC calibration data archive1 that matched all of the science data filters and observing runs. We performed single visit processing (detrending, WCS, and photometric calibration) on all individual CCD frames (103 science frames per shot) using the pipeline defaults. In the mosaicking step, we defined a custom tract and patch scheme. A hscPipe tract is an area of sky over which images are mosaicked and coadded with a common WCS solution. To produce data products of manageable size, a tract is divided into a grid of images called patches. We defined a single tract that encompassed the entire data coverage, and we divided it into a grid of \(20\times 14\) patches, each measuring 10,200 \(\times\) 10,200 pixels to allow for overlap between adjacent patches. We show the position of the patches on the sky in Figure 2. We used the hscPipe FGCM (Forward Global Calibration Method) to calibrate the data simultaneously in all seven filters to Pan-STARRS photometry and astrometry.
Footnote 1: [https://www.naoj.org/Observing/Instruments/HSC/calib_data.html](https://www.naoj.org/Observing/Instruments/HSC/calib_data.html) - last accessed 2023 February 13
We executed the standard imaging coadding routines for each filter. We then ran the standard multiband analysis pipeline procedures to combine the detected source catalogs across the seven different filters for each patch and to perform forced photometry on the resulting combined catalogs. After completing this step, we noticed that the coadded images at the edges of the field included extrapolated background modeled pixels that extended beyond the science imaging coverage. These extrapolated pixels resulted in erroneous measurements of object fluxes during the multiband analysis step on our initial processing run. To remedy this, we masked all pixels outside of the science imaging coverage. We re-ran the multiband measurements after this manual change and confirmed that our modification resolved the irregular photometric measurements.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Filter\({}^{a}\) & \multicolumn{2}{c}{HEROES} & \multicolumn{2}{c}{H20} & \multicolumn{2}{c}{AKARI} & \multicolumn{2}{c}{Total} \\ & Shots & Hours & Shots & Hours & Shots & Hours & Shots & Hours \\ \hline HSC-g & 355 & 8.90 & 40 & 2.78 & 100 & 7.92 & 495 & 19.59 \\ HSC-r2 & 208 & 9.02 & 60 & 3.33 & & & 268 & 12.35 \\ HSC-i2 & 452 & 16.80 & 100 & 6.14 & & & 552 & 22.94 \\ HSC-z & 735 & 24.21 & 105 & 7.06 & 29 & 1.85 & 869 & 33.11 \\ HSC-Y & 241 & 10.78 & & & 52 & 4.55 & 293 & 12.23 \\ NB816 & 235 & 11.33 & & & & & 235 & 11.33 \\ NB921 & 348 & 20.86 & & & & & 348 & 20.86 \\ \hline Total & 2574 & 101.79 & 305 & 19.31 & 181 & 14.32 & 3060 & 135.42 \\ \hline \end{tabular}
\({}^{a}\)We refer to the HSC-g, HSC-r2, HSC-i2, HSC-z, and HSC-Y filters as \(g\), \(r\), \(i\), \(z\), and \(y\), respectively, throughout this article.
\end{table}
Table 1: HSC Observations
Figure 1: Pointing centers and overlap for the HSC data in the NEP. Our HEROES pointings are shown in blue, the H20 pointings in red, and the AKARI pointings in green. Our pointing targeting the JWST TDF is shown in yellow. Each of the pointings used either the HEROES dither pattern or something similar (see §2). Color shading shows pointing overlaps but does not explicitly indicate stacked imaging depth.
## 4 The Catalog
We present the final HEROES catalog in three forms to best provide the maximum amount of available information, as well as convenient and manageable file sizes.
First, we provide the forced photometry output catalogs from the multiband analysis step of hscPipe. These are organized by filter and patch using the naming format forced_src-{filter}-0-{patch}.fits. The columns of these FITS tables are provided in the FITS headers and are summarized in the hscPipe documentation2. These 1671 catalogs each contain 239 columns and total 211 GB (compressed). While these catalogs provide all of the available hscPipe forced photometry information, they are sometimes inconvenient for use in multifilter full field studies and contain many columns that are not useful for typical research applications.
Footnote 2: [https://hsc.mtk.nao.ac.jp/pipedoc/pipedoc_8_e/tutorial_e/schema_multiband.html](https://hsc.mtk.nao.ac.jp/pipedoc/pipedoc_8_e/tutorial_e/schema_multiband.html) - last accessed 2023 February 13
We also provide two catalogs that each include all of the objects in the dataset and select columns from the forced_src catalogs. The first catalog (HEROES_Full_Catalog.fits, 25,445,387 objects, 172 columns, 12.4 GB) contains all of the columns described in the Appendix in Table 2. The second catalog (HEROES_Small_Catalog.fits, 25,445,387 objects, 37 columns, 4.1 GB) is designed for more basic analyses or for machines with less system memory and only contains a subset of the full catalog's columns. It contains selected columns, as noted in the Appendix in Table 2.
In these catalogs, we converted forced_src fluxes to AB magnitudes. To preserve the information for sources with negative measured fluxes, we report these values as negative magnitudes. For example, we report an object with a measured flux of \(-3631\times 10^{-11}\) Jy as an AB magnitude of -27.5. For objects that do not have imaging coverage in a given filter, or that lack measured fluxes in a given filter, we report magnitudes of -99.
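The sign convention above takes only a few lines to implement; the following numpy sketch (ours, not code from the HEROES release; it assumes input fluxes in janskys) reproduces the example, mapping \(-3631\times 10^{-11}\) Jy to \(-27.5\) mag and invalid measurements to \(-99\).

```python
import numpy as np

def flux_to_signed_ab_mag(flux_jy):
    """AB magnitudes with the catalog's sign convention: negative fluxes
    become negative magnitudes; missing/invalid fluxes become -99."""
    flux_jy = np.asarray(flux_jy, dtype=float)
    out = np.full(flux_jy.shape, -99.0)
    ok = np.isfinite(flux_jy) & (flux_jy != 0)
    mag = -2.5 * np.log10(np.abs(flux_jy[ok]) / 3631.0)
    out[ok] = np.where(flux_jy[ok] > 0, mag, -mag)
    return out

flux_to_signed_ab_mag([-3631e-11, np.nan])  # -> array([-27.5, -99. ])
```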
Figure 2: Patch locations for the HSC data in the NEP. The bounding box of each patch is shown in black and labeled with its patch ID. The field imaging coverage is shown in the background in light gray shading.
These data products are all publicly available at Harvard Dataverse3.
Footnote 3: [https://dataverse.harvard.edu/dataverse/heroes](https://dataverse.harvard.edu/dataverse/heroes)
## 5 Data Quality
We tested the processed data quality by calculating 2\(\farcs\)0 diameter aperture magnitude depths across the field and in each filter with two different methods. In our first method, we placed 100,000 apertures randomly in each patch (\(\sim\)150 apertures per arcmin\({}^{2}\)) and measured the flux in each aperture. We then divided the patch into 100 subregions and analyzed the apertures in each region separately. We used sigma-clipping to discard apertures that captured flux from field objects and took the standard deviation of the flux measurements from the remaining apertures. We converted this flux standard deviation to a magnitude to produce a 1\(\sigma\) aperture magnitude depth, which we then converted to a 5\(\sigma\) depth for the subregion by subtracting 1.75 mag (\(-2.5\log_{10}5\)). We report the overall median 2\(\farcs\)0 diameter aperture magnitude 5\(\sigma\) depths as: \(g\): 26.0, \(r\): 25.7, \(i\): 25.2, \(z\): 24.7, \(y\): 23.7, NB816: 24.3, NB921: 24.2.
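A minimal sketch of this first method is given below (our illustration, not the HEROES pipeline itself); it assumes a background-subtracted coadd array with the zeropoint of 27 mag quoted for the imaging data in §6, and an aperture radius `r_pix` corresponding to a 2\(\farcs\)0 diameter.

```python
# Hedged sketch: 5-sigma depth from random apertures with sigma-clipping.
import numpy as np
from astropy.stats import sigma_clip
from photutils.aperture import CircularAperture, aperture_photometry

def five_sigma_depth(image, r_pix, n_apertures=100_000, zp=27.0, seed=0):
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    x = rng.uniform(r_pix, nx - r_pix, n_apertures)
    y = rng.uniform(r_pix, ny - r_pix, n_apertures)
    apers = CircularAperture(np.column_stack([x, y]), r=r_pix)
    flux = np.asarray(aperture_photometry(image, apers)['aperture_sum'])
    clipped = sigma_clip(flux, sigma=3, maxiters=5)  # drop apertures on objects
    sigma1 = np.std(clipped.compressed())            # 1-sigma flux scatter
    return zp - 2.5*np.log10(5*sigma1)               # = 1-sigma depth - 1.75 mag
```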
In our second calculation method, we adopted the methodology of Oi et al. (2021). Here, we divided the survey into \(150\times 150\) subregions. In each subregion, we used the catalog-provided fluxes and flux errors to select objects with a signal-to-noise of \(\sim\)5 (\(5\pm 0.1\)). We then took the median magnitude of this S/N\(\sim\)5 population as the 5\(\sigma\) depth for the subregion. For this method, we report the overall median 2\(\farcs\)0 diameter aperture magnitude 5\(\sigma\) depths as: \(g\): 26.5, \(r\): 26.2, \(i\): 25.7, \(z\): 25.1, \(y\): 23.9, NB816: 24.4, NB921: 24.4, and we show their variations across the field in Figure 3. We attribute the 0.1-0.5 magnitude differences between the two methods to potential pattern noise effects or correlated variance that may not be fully sampled by the measurement routines in hscPipe, and/or to not fully cleaning contamination in the aperture method. The two methods may bracket the true limits.
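The second method reduces to a few lines per subregion; a sketch (ours, with illustrative argument names) is:

```python
# Hedged sketch: median magnitude of objects with S/N within 5 +/- 0.1.
import numpy as np

def five_sigma_depth_sn(mag, flux, flux_err, tol=0.1):
    near5 = np.abs(flux / flux_err - 5.0) < tol
    return np.median(mag[near5]) if near5.any() else np.nan
```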
We tested the data quality by comparing the spatial number density of sources as a function of magnitude against other large surveys. To limit contaminating sources in our catalog for these comparisons, we applied a number of cuts to the catalog. First, we required all catalog objects (both stars and galaxies) to have the is_primary flag. This flag selects objects that are not detected as blended composite objects. For each filter, we also removed all objects with the flag {filter}_base_PixelFlags_flag_edge. This cut has two main effects: First, it removes all sources that are outside of the science imaging coverage. Second, it removes bad sources near saturated pixels (e.g., near diffraction spikes from bright stars). After these cuts, we compared the area density of sources in our catalog to the HSC COSMOS Deep/UltraDeep (D/UD) catalog from HSC-SSP (10.0 deg\({}^{2}\), Aihara et al., 2021) and the Dark Energy Survey Year 3 GOLD catalog (DESY3; 5347 deg\({}^{2}\); Sevilla-Noarbe et al., 2021; Hartley et al., 2022). We use 2\(\farcs\)0 diameter aperture magnitudes from hscPipe for HEROES and COSMOS, and we use "Single Object Fitted Corrected" Magnitudes for DESY3 (2\(\farcs\)0 diameter aperture magnitudes are not provided in the DESY3 Gold Catalog). We show this comparison in Figure 4.
Across \(g\), \(r\), \(i\), and \(z\), HEROES (blue) shows excellent agreement with COSMOS D/UD (red). The DESY3 Gold catalog appears to be slightly overdense in the bluer \(g\) and \(r\) bands when compared to the other catalogs, but it shows good agreement in the \(i\) and \(z\) bands. Differences between the catalogs at the bright end are likely due to minor contamination by bright stars and image artifacts, while differences between the catalogs at the faint end are due to the different depths between the surveys. From these comparisons, we conclude that HEROES is consistent with other leading surveys that have well-calibrated photometry and low contamination.
We further characterize the contamination rate in HEROES through visual inspections of 1000 sources drawn randomly from the filtered sample described above. In these inspections, we inspect \(grizy\) thumbnails of the sources and look for diffraction spikes, glints, halos, or other visual artifacts that are detected as the source under inspection. Of the 1000 inspected sources, we find that only 27 (2.7\(\pm\)0.5%) are impacted by visual artifacts. In most cases, the artifacts are the unsaturated tails of diffraction spikes or halo-like arcs from bright stars in the field.
## 6 Sample Selections
Here we demonstrate as examples two different sample selections using HEROES. These are the focus of our previous and upcoming research projects.
### Narrowband Selection of \(z\sim 6\) LAEs
We used a previous reduction of the HEROES data for our sample selections of \(z\sim 5\) and \(z\sim 6\) LAEs in Taylor et al. (2020, 2021) and Songaila et al. (2022). We now repeat these selections to verify the photometric consistency between the two reductions.
Our selection criteria are as follows. First, we remove sources with bad pixel, bright object, edge pixel,
saturated center, cosmic ray, and interpolated center flags in the \(i\), \(z\), \(y\), and NB921 filters. We then remove sources within 6\(\farcs\)0 of any _Gaia_ star with _Gaia_\(g\) magnitude less than 8. In Figure 5, we show 1% of the resulting clean sample as black points. From this clean sample, we next require strong detections (n921_detect = True, n921_base_SdssCentroid_flag = False, n921_base_SdssShape_flag = False) in NB921 and 2\(\sigma\) non-detections ({filter}_detect = False) with forced aperture magnitude uncertainties greater than 0.5 mags in the \(g\), \(r\), and \(i\) bands to enforce a strong Lyman break blueward of the Ly\(\alpha\) line. We select a narrowband excess \(z\) - NB921 \(>\) 1.3 mags. We also adopt the \(\Sigma\) parameter from Sobral et al. (2013). This parameter characterizes the significance of a narrowband excess above the uncertainties in the NB921 and \(z\) source magnitudes and is given by
\[\Sigma=\frac{1-10^{-0.4(z-NB921)}}{10^{-0.4(27-NB921)}\sqrt{\sigma_{NB921}^{2}+ \sigma_{z}^{2}}}\,, \tag{1}\]
where \(z\) and NB921 are the AB magnitudes of \(z\) and NB921, \(\sigma_{\rm NB921}\) and \(\sigma_{z}\) are the average 1\(\sigma\) image count rate uncertainties in 2\(\farcs\)0 diameter apertures in NB921 and \(z\), and 27 is the magnitude zeropoint of the imaging data. In our source selection, we require \(\Sigma>3\). We show the resulting cut in Figure 5 (blue curve).
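Equation (1) translates directly into code; the following function (our sketch, with illustrative argument names) evaluates \(\Sigma\) for arrays of catalog magnitudes.

```python
# Direct implementation of Eq. (1).
import numpy as np

def narrowband_sigma(z_mag, nb921_mag, sig_nb921, sig_z, zp=27.0):
    num = 1.0 - 10.0**(-0.4*(z_mag - nb921_mag))
    den = 10.0**(-0.4*(zp - nb921_mag)) * np.sqrt(sig_nb921**2 + sig_z**2)
    return num / den
```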
We then visually inspect cutouts of the remaining sources in stacked \(gri\), \(z\), \(y\), and NB921 to reject sources with significant contamination from an elevated background, glints, diffraction spikes, transients, or other artifacts. This visual inspection is also helpful in rejecting sources that are not detected in \(g\), \(r\), or \(i\) separately but are visually identifiable in stacked \(gri\) cutouts. For a narrowband selection targeting objects with \(z\) - NB921 \(>\) 1.3 and NB921\(<\) 24.25, the above cuts produced a sample of 384 LAE candidates. After visual inspection, we reduced this sample to 63 candidates that showed no hint of emission in stacked and smoothed \(gri\) cutouts, had compact morphologies, and were visually free of contamination. This significant reduction in candidates through visual inspection is primarily due to the ability of stacked \(gri\) cutouts to detect low-redshift sources that do not show significant \(gri\) emission in single filter observations. Furthermore, narrowband excess and Lyman break samples are more susceptible to objects with visual artifacts and contamination, as many forms of contamination may artificially emulate the narrowband excess criteria and non-detections in the bluer bands. Removing these sources through stricter magnitude and color cuts may risk reducing the sample completeness of the inherently rare bright \(z>6\) LAEs, thus we use visual inspections to eliminate contaminating objects to ensure that our samples remain both complete and pure.
From these selection criteria (excluding the sources at NB921\(<\) 24.25 that were not visually re-inspected), we completely recover the Taylor et al. (2020) and Songaila et al. (2022) samples (shown in Figure 5 as red crosses) and identify additional candidates for spectroscopic follow-up (A. Songaila et al., 2023, in prep). These candidates and the recovered previous samples are uniformly distributed across the survey field with no obvious visual clustering or gradient beyond minor correlations with the imaging depth.
Figure 3: 5\(\sigma\) detection limits for each filter across the survey field from the Oi et al. (2021) method. Note the depth increases in the centers of the \(g,r,i,z,y\) imaging from the overlapping archival data and the significant depth increases in \(r\), \(z\), and NB921 near the JWST TDF (R.A. 17:22:47.896, Decl. +65:49:21.54) from our additional targeted pointing.
### Broadband Selection of Dropout Galaxies
In order to test further the quality and science potential of the dataset, we also demonstrate a broadband dropout selection using the selection criteria and color-color cuts from Ono et al. (2018). In their study, "GOLDRUSH", they selected \(z\sim 4,5,6,7\) galaxies using the dropout method with the UltraDeep, Deep, and Wide fields from HSC-SSP. These fields total 102.7 deg\({}^{2}\) in combined area, with the largest field (W-XMM) providing 28.5 deg\({}^{2}\) of coverage. We are currently working on a full comparison with the GOLDRUSH luminosity functions and clustering analysis (Harikane et al., 2018), and we summarize the preliminary galaxy selection results below.
As both the HEROES and HSC-SSP catalogs are produced by hscPipe, it is simple to adopt the Ono et al. (2018) selection criteria from their Table 2. In brief, they first required sources to have no bad pixel, bright object, edge pixel, saturated center, cosmic ray, or interpolated center flags in \(grizy\). For each sample of \(g\), \(r\), \(i\), and \(z\)-dropouts, they required non-detections in filters blueward of the dropout filter and strong detections in filters redward of the dropout filter. They then adopted color-color cuts from Hildebrandt et al. (2009) (see Ono et al., 2018, Equations 1-10) to produce their initial dropout selections. We adopt the same cuts and show the color-color criteria and our results in Figure 6.
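For illustration only, a \(g\)-dropout cut in the spirit of Hildebrandt et al. (2009) and Ono et al. (2018) can be written as below; the numeric thresholds are quoted from memory and must be checked against Ono et al. (2018) before any scientific use, and non-detections should first be clipped to their \(2\sigma\) limiting magnitudes as in that work.

```python
# Illustrative (unverified) g-dropout colour-colour selection.
import numpy as np

def g_dropout(g, r, i):
    gr, ri = g - r, r - i
    return (gr > 1.0) & (ri < 1.0) & (gr > 1.5*ri + 0.8)
```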
Figure 4: Area density of sources in the \(g\), \(r\), \(i\), and \(z\) bands as a function of magnitude from HEROES (blue, 2\(\farcs\)0 diameter aperture magnitudes), COSMOS D/UD (red, 2\(\farcs\)0 diameter aperture magnitudes, Aihara et al., 2021), and the DESY3 GOLD catalog (green, single object fitted magnitudes, Sevilla-Noarbe et al., 2021; Hartley et al., 2022). While the DESY3 catalog appears to be slightly overdense in the \(g\) and \(r\) bands, HEROES shows strong agreement with the area densities from COSMOS D/UD in all four bands and DESY3 in \(i\) and \(z\).
In each panel of Figure 6, we show a subset of the sources that pass the above described quality cuts as black points, and those that also pass the color-color dropout cuts as red points. For \(g\), \(r\), and \(i\) dropouts, we find 295129, 18607, and 124 galaxies, respectively. This corresponds to \(g\): 6700, \(r\): 420, and \(i\): 2.8 dropouts deg\({}^{-2}\). These surface densities are comparable to the densities from Ono et al. (2018) of \(g\): 5300, \(r\): 380, \(i\): 5.2 deg\({}^{-2}\), and we attribute the offsets to differences in imaging depth and Poisson statistics.
The surface densities of all three classes of dropouts are roughly uniform, with differences of no more than a factor of 2 over the HEROES field due primarily to differences in imaging depth both between bands and across the field. We will refine these selections and compare our resulting luminosity functions and galaxy-galaxy angular correlation functions to the GOLDRUSH sample in A. J. Taylor et al. 2023, in prep.
## 7 Summary
We present the complete photometric catalog from HEROES: a 44 deg\({}^{2}\) Subaru/HSC imaging survey of the NEP in \(grizy\) broadbands and NB816+NB921 narrowbands. The catalog contains 25.4 million objects and is available in patch-by-patch, filter-by-filter hscPipe forced_src format, as well as in two combined catalogs with selected columns.
HEROES has enormous potential due to its overlap with other legacy, current, and future missions and surveys (e.g., AKARI, eROSITA, H20, S2CLS, NEPSC2, Spitzer, Euclid, JWST TDF). Outside of these complementary datasets, we are using HEROES to produce luminosity functions and angular correlation functions for \(z\sim 3-7\) dropout galaxies, as well as continuing to search for LAEs near the epoch of reionization. We hope this public catalog release will enable new studies of galaxy evolution across cosmic time and provide complementary optical data for upcoming NEP surveys.
We thank the anonymous referee for their constructive report that helped us to improve this work. We gratefully acknowledge support for this research from NSF grants AST-1715145 (A. J. B) and AST-1716093 (E. M. H., A. S.). We also gratefully acknowledge the William F. Vilas Estate (A. J. T.) and a Kellett Mid-Career Award and a WARF Named Professorship from the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation (A. J. B.). This research is based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. Data analysis was carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center and the Large-scale Data Analysis System cooperated by the Astronomy Data Center and the Subaru Telescope. We especially thank the HSC Software Help Desk for their rapid replies and helpful support during the data reduction process. We wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
_Facility:_ Subaru Telescope. _Software:_ astropy (Astropy Collaboration et al., 2013, 2018), hscPipe (Bosch et al., 2018)
|
2307.15139 | Online Clustered Codebook | Vector Quantisation (VQ) is experiencing a comeback in machine learning,
where it is increasingly used in representation learning. However, optimizing
the codevectors in existing VQ-VAE is not entirely trivial. A problem is
codebook collapse, where only a small subset of codevectors receive gradients
useful for their optimisation, whereas a majority of them simply ``dies off''
and is never updated or used. This limits the effectiveness of VQ for learning
larger codebooks in complex computer vision tasks that require high-capacity
representations. In this paper, we present a simple alternative method for
online codebook learning, Clustering VQ-VAE (CVQ-VAE). Our approach selects
encoded features as anchors to update the ``dead'' codevectors, while
optimising the codebooks which are alive via the original loss. This strategy
brings unused codevectors closer in distribution to the encoded features,
increasing the likelihood of being chosen and optimized. We extensively
validate the generalization capability of our quantiser on various datasets,
tasks (e.g. reconstruction and generation), and architectures (e.g. VQ-VAE,
VQGAN, LDM). Our CVQ-VAE can be easily integrated into the existing models with
just a few lines of code. | Chuanxia Zheng, Andrea Vedaldi | 2023-07-27T18:31:04Z | http://arxiv.org/abs/2307.15139v1 | # Online Clustered Codebook
###### Abstract
Vector Quantisation (VQ) is experiencing a comeback in machine learning, where it is increasingly used in representation learning. However, optimizing the codevectors in existing VQ-VAE is not entirely trivial. A problem is codebook collapse, where only a small subset of codevectors receive gradients useful for their optimisation, whereas a majority of them simply "dies off" and is never updated or used. This limits the effectiveness of VQ for learning larger codebooks in complex computer vision tasks that require high-capacity representations. In this paper, we present a simple alternative method for online codebook learning, Clustering VQ-VAE (CVQ-VAE). Our approach selects encoded features as anchors to update the "dead" codevectors, while optimising the codebooks which are alive via the original loss. This strategy brings unused codevectors closer in distribution to the encoded features, increasing the likelihood of being chosen and optimized. We extensively validate the generalization capability of our quantiser on various datasets, tasks (_e.g_. reconstruction and generation), and architectures (_e.g_. VQ-VAE, VQGAN, LDM). CVQ-VAE can be easily integrated into the existing models with just a few lines of code.
## 1 Introduction
Vector Quantisation (VQ) [12] is a basic building block of many machine learning techniques. It is often used to help learn unsupervised representations for vision and language tasks, including data compression [1, 39, 36], recognition [26, 3, 44, 24, 23], and generation [37, 31, 11, 32, 47, 34, 33]. VQ quantises continuous feature vectors into a discrete space by mapping them to the closest vectors in a codebook of representatives, or codevectors. Quantisation has been shown to simplify optimization problems by reducing a continuous search space to a discrete one.
Despite its success, VQ has some drawbacks when applied to deep networks [37]. One of them is that quantisation stops gradients from back-propagating to the codevectors. This has been linked to _codebook collapse_[36], which means that only a small subset of active codevectors are optimized alongside the learnable features, while the majority of them _are not used at all_ (see the green "dead" points in Fig. 1(a)). As a result, many recent methods [11, 10, 44, 32, 6, 47] fail to utilise the full expressive power of a codebook due to the low codevector utilisation, especially when the codebook size is large. This significantly limits VQ's effectiveness.
To tackle this issue, we propose a new alternative quantiser called _Clustering VQ-VAE_ (CVQ-VAE). We observe that classical clustering algorithms, such as refined initialization \(k\)-means [4] and \(k\)-means++ [2], use a dynamic cluster initialization approach. For example, \(k\)-means++ randomly selects a data point as the first cluster centre, and
Figure 1: **Codebook usage and reconstruction error. The setting is the same as VQ-VAE [37], except for the different quantisers. All models are trained and evaluated on the CIFAR10 [20] dataset. VQ-VAE has many "dead" vectors (green points) which are _not_ used. CVQ-VAE updates these unoptimized vectors by using online sampled feature anchors, leading to a 100% usage of the codebook. CVQ-VAE achieves substantially higher codebook perplexity and better reconstruction results than with the fixed initialization.**
then chooses the next new centre based on a weighted probability calculated from the distance to the previous centres. Analogously, CVQ-VAE _dynamically_ initializes unoptimized codebooks by resampling them from the learned features (Fig. 2). This simple approach can avoid codebook collapse and significantly enhance the usage of larger codebooks by enabling optimization of all codevectors (achieving \(100\%\) codebook utilisation in Fig. 1(c)).
While CVQ-VAE is inspired by previous dynamic cluster initialization techniques [4, 2], its implementation in deep networks requires careful consideration. Unlike traditional clustering algorithms [25, 4, 14, 2] where source data points are fixed, in deep networks features and their corresponding codevectors are mutually and incrementally optimized. Thus, simply sampling codevectors from a single snapshot of features would not work well because any mini-batch used for learning _cannot_ capture the true data distribution, as demonstrated in our offline version in Tab. 3. To fix this issue, we propose to _compute running averages_ of the encoded features across different training mini-batches and use these to improve the dynamic reinitialization of the collapsed codevectors. This operation is similar to an online feature clustering method that calculates average features across different training iterations (Fig. 2). While this may seem a minor change, it leads to a very significant improvement in terms of performance (Fig. 1(e)).
As a result of these changes, CVQ-VAE significantly outperforms the previous models VQ-VAE [37] and SQ-VAE [36] on various datasets under the same setting, and with no other changes except for swapping in the new quantiser. Moreover, we conduct thorough ablation experiments on variants of the method to demonstrate the effectiveness of our design and analyse the importance of various design factors. Finally, we incorporate CVQ-VAE into large models (VQ-GAN [11] and LDM [32]) to further demonstrate its generality and potential in various applications.
## 2 Related Works
VQ-VAE [37] learns to quantise the continuous features into a discrete space using a restricted number of codebook vectors. By clustering features in the latent space, VQ-VAE can automatically learn a compact representation and store the domain information in the decoder, without requiring supervision. This discrete representation has been applied to various downstream tasks, including image generation [31, 44, 6, 22, 17], image-to-image translation [11, 30, 10, 32], text-to-image synthesis [30, 9, 29, 18], conditional video generation [28, 40, 42], image completion [11, 10, 46], recognition [26, 3, 44, 24, 23] and 3D reconstruction [27, 34, 33].
Among them, VQ-GAN [11], ViT-VQGAN [44], RQ-VAE [22], and MoVQ [46] aim to train a better discrete representation through deeper network architectures, additional loss functions, multichannel or higher resolution representations. However, none of them tackle the _codebook collapse_ issue for the unoptimized "dead" point.
To address this issue, additional training heuristics are proposed in recent works. SQ-VAE [36] improves VQ-VAE with stochastic quantisation and a trainable posterior categorical distribution. VQ-WAE [38] builds upon SQ-VAE by directly encouraging the discrete representation to be a uniform distribution via a _Wasserstein_ distance. The most related works are HVQ-VAE [39] and Jukebox [8] that use _codebook reset_ to randomly reinitialize unused or low-used codebook entries. However, they only assign a single sampled anchor to each unoptimized codevector. In contrast, our CVQ-VAE considers the changing of features in deep networks and designs an online clustering algorithm by running average updates across the training mini-batch. Additionally, our work bridges codebook reset in Jukebox for music generation to the more general class of running average updates that are applicable to image compression and generation problems in computer vision.
## 3 Method
We consider VQ in the context of unsupervised representation learning. Our main goal is to learn a discrete codebook that efficiently utilizes _all codebook entries within it_. To achieve this, our quantisation method, as illustrated in Fig. 2, is conceptually similar to VQ-VAE [37], except that our codevectors are _dynamically initialized_ rather than being sampled from a _fixed_ uniform or Gaussian distribution. In the following sections, we provide a general overview of VQ (Sec. 3.1), followed by our proposed CVQ-VAE (Sec. 3.2).
### Background: VQ-VAE
Given a high dimensional image \(x\in\mathbb{R}^{H\times W\times c}\), VQ-VAE [37] learns to embed it with low dimensional codevectors \(z_{q}\in\mathbb{R}^{h\times w\times n_{q}}\), where \(n_{q}\) is the dimensionality of the vectors in the codebook. Then, the feature tensor can be equivalently described as a compact representation with \(h\times w\) indices corresponding to the codebook entries \(z_{q}\). This is done via an autoencoder
\[\hat{x}=\mathcal{G}_{\theta}(z_{q})=\mathcal{G}_{\theta}(\mathbf{q}(\hat{z})) =\mathcal{G}_{\theta}(\mathbf{q}(\mathcal{E}_{\phi}(x))). \tag{1}\]
Here \(\mathcal{E}_{\phi}\) and \(\mathcal{G}_{\theta}\) refer to the encoder and decoder, respectively. The encoder embeds images into the continuous latent space, while the decoder inversely maps the latent vectors back to the original image. \(\mathbf{q}(\cdot)\) is a quantisation operation that maps the continuous encoded observations \(\hat{z}\) into the discrete space by looking up the closest codebook entry \(e_{k}\) for each grid feature \(\hat{z}_{i}\) using the following equation:
\[z_{q_{i}}=\mathbf{q}(\hat{z}_{i})=e_{k},\quad\text{where}\quad k=\underset{e_ {k}\in\mathcal{Z}}{\operatorname{argmin}}\|\hat{z}_{i}-e_{k}\|, \tag{2}\]
where \(\mathcal{Z}=\{e_{k}\}_{k=1}^{K}\) is the codebook that consists of \(K\) entries \(e_{k}\in\mathbb{R}^{n_{q}}\) with dimensionality \(n_{q}\). During training, the encoder \(\mathcal{E}_{\phi}\), decoder \(\mathcal{G}_{\theta}\) and codebook \(\mathcal{Z}\) are jointly optimized by minimizing the following objective:
\[\mathcal{L}=\|x-\hat{x}\|_{2}^{2}+\|\mathrm{sg}[\mathcal{E}_{\phi}(x)]-z_{q}\|_{2}^{2}+\beta\|\mathcal{E}_{\phi}(x)-\mathrm{sg}[z_{q}]\|_{2}^{2}, \tag{3}\]
where \(\mathrm{sg}\) denotes a stop-gradient operator, and \(\beta\) is the hyperparameter for the last term _commitment loss_. The first term is known as _reconstruction loss_, which measures the difference between the observed \(x\) and the reconstructed \(\hat{x}\). The second term is the _codebook loss_, which encourages the codevectors to be close to the encoded features. In practice, the codebook \(\mathcal{Z}\) is optimized using either the _codebook loss_[37] or using an exponential moving average (EMA) [31]. However, these methods work only for the active codevectors, _leaving the "dead" ones unoptimized_.
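For concreteness, the lookup in Eq. (2) together with the straight-through gradient trick and the losses in Eq. (3) can be sketched in a few lines of PyTorch. This is a minimal illustration written for this text, not the authors' released code; the tensor shapes, function name, and the `beta` default are our own assumptions.

```python
import torch
import torch.nn.functional as F

def vq_forward(z_e, codebook, beta=0.25):
    """Illustrative sketch: nearest-neighbour lookup (Eq. 2) and VQ losses (Eq. 3).

    z_e:      encoded features of shape (B, h, w, n_q)
    codebook: codevectors of shape (K, n_q)
    """
    flat = z_e.reshape(-1, z_e.shape[-1])                    # (B*h*w, n_q)
    # Pairwise squared Euclidean distances ||z_i - e_k||^2.
    d = (flat.pow(2).sum(1, keepdim=True)
         - 2.0 * flat @ codebook.t()
         + codebook.pow(2).sum(1))                           # (B*h*w, K)
    idx = d.argmin(dim=1)                                    # closest entry per feature
    z_q = codebook[idx].view_as(z_e)

    codebook_loss = F.mse_loss(z_q, z_e.detach())            # ||sg[E(x)] - z_q||^2
    commitment_loss = F.mse_loss(z_e, z_q.detach())          # ||E(x) - sg[z_q]||^2
    # Straight-through estimator: decoder gradients bypass the argmin.
    z_q = z_e + (z_q - z_e).detach()
    return z_q, idx, codebook_loss + beta * commitment_loss
```

Note that only the codevectors selected by the argmin receive gradients through the codebook loss, which is precisely the mechanism behind the collapse discussed above.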
### Clustering VQ-VAE (CVQ-VAE)
The choice of initial points is a crucial aspect of unsupervised codebook learning. Classical clustering methods like refined \(k\)-means [4] and \(k\)-means++ [2] are _dynamically-initialized_, which means that each new clustering centre is initialized based on previously calculated distance or points. This leads to a more robust and effective clustering result, as reported in comparative studies [5].
Analogously, we build a _dynamically-initialized_ vector quantized codebook in deep networks. However, unlike traditional clustering settings, the data points, _i.e_. the encoded features \(\hat{z}\) in the deep network, are also updated during training instead of being fixed. Therefore, a dynamical initialization strategy should take into account the changing feature representations during training.
Running average updates. To build the online initialization for the codebook, we start by accumulating the average usage of codevectors in each training mini-batch:
\[N_{k}^{(t)}=N_{k}^{(t-1)}\cdot\gamma+\frac{n_{k}^{(t)}}{Bhw}\cdot(1-\gamma), \tag{4}\]
where \(n_{k}^{(t)}\) is the number of encoded features in a training mini-batch that will be quantised to the closest codebook entry \(e_{k}\), and \(Bhw\) denotes the total number of features over the batch, height, and width dimensions. \(\gamma\) is a decay hyperparameter with a value in \((0,1)\) (default \(\gamma=0.99\)). \(N_{k}^{(0)}\) is initially set to zero.
We then select a subset \(\bar{\mathcal{Z}}\) with \(K\) vectors from the encoded features \(\hat{z}\), which we denote as **anchors**. Instead of directly using the anchors to reinitialize the unoptimized codevectors, we expect that _codevectors that are less-used or unused should be modified more than frequently used ones_. To achieve this goal, we compute a decay value \(\alpha_{k}^{(t)}\) for each entry \(e_{k}\) using the accumulative average usage
Figure 2: **Codebook optimization**. The Red points indicate the encoded features, while the Green and Peach points denote the unoptimized and active vectors in the codebook, respectively. 1) In VQ-VAE [37] (row 1), only the active "lucky" seeds (in Peach) are optimized alongside the encoded features (in Red) during training. The other "dead" vectors (in Green) are _not_ given attention and remain fixed. 2) In our CVQ-VAE (offline) (row 2), we reinitialize the codevectors based on the anchors sampled from the encoded features (in Red), encouraging the "dead" ones to be closer to the features in distribution. 3) To address the difficulty of covering all samples by single sampling in mini-batch learning, we further propose an online learning variant (row 3), where the anchor is obtained by calculating the moving average of the encoded features in different batches. We highlight the differences between the methods with blue thickened borders.
and reinitialize the entries as follows:
\[\alpha_{k}^{(t)}=\exp\left(-N_{k}^{(t)}K\frac{10}{1-\gamma}-\epsilon\right), \tag{5}\] \[e_{k}^{(t)}=e_{k}^{(t-1)}\cdot(1-\alpha_{k}^{(t)})+\hat{z}_{k}^{(t)}\cdot\alpha_{k}^{(t)}, \tag{6}\]
where \(\epsilon\) is a small constant that ensures the entries are assigned the average values of the features across different mini-batches, and \(\hat{z}_{k}^{(t)}\) is the sampled anchor from the anchor set \(\bar{\mathcal{Z}}\in\mathbb{R}^{K\times n_{q}}\).
This running average operation differs from the exponential moving average (EMA) used in VQ-VAE [31]. Our equation is applied to reinitialize unused or low-used codevectors, instead of updating the active ones. Furthermore, our decay parameter in Eq. (5) is computed based on the average usage, which is _not a pre-defined hyperparameter_.
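A single training-step sketch of Eqs. (4)–(6) in PyTorch might look as follows. This is our own hedged illustration: the variable names, the in-place update style, and the assumption that `counts` and `anchors` come from the quantisation step and the anchor sampler are ours, not the released implementation.

```python
import torch

@torch.no_grad()
def reinit_dead_codes(codebook, N, counts, anchors, gamma=0.99, eps=1e-3):
    """Illustrative sketch of one update implementing Eqs. (4)-(6).

    codebook: (K, n_q) entries e_k, updated in place
    N:        (K,) running average usage N_k, updated in place
    counts:   (K,) number of features assigned to each entry in this batch
    anchors:  (K, n_q) features sampled from the current batch
    """
    K = codebook.shape[0]
    Bhw = counts.sum().clamp(min=1)
    # Eq. (4): accumulate the average usage of each codevector.
    N.mul_(gamma).add_((1.0 - gamma) * counts / Bhw)
    # Eq. (5): decay close to 1 for unused entries, close to 0 for busy ones.
    alpha = torch.exp(-N * K * 10.0 / (1.0 - gamma) - eps)
    # Eq. (6): pull rarely used codevectors towards the sampled anchors.
    codebook.mul_(1.0 - alpha[:, None]).add_(anchors * alpha[:, None])
    return codebook, N
```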
Choice of the anchors. Next, we describe several versions of the anchor sampling methods; a short sketch of these samplers is given after the list below. Interestingly, experimental results (Tab. 3(c)) show that our online version is _not_ sensitive to these choices. However, the different anchor sampling methods have a direct impact on the _offline_ version, suggesting that our running-average update behaviour is the primary reason for the observed improvements.
* **Random.** Following the codebook reset [8, 39], a natural choice of anchors is randomly sampled from the encoded features.
* **Unique.** To avoid repeated anchors, a random permutation of integers within the number of features (\(Bhw\)) is performed. Then, we select the first \(K\) features.
* **Closest.** A simple choice is to inversely look up the closest feature for each entry, _i.e_. \(i=\operatorname*{argmin}_{\hat{z}_{i}\in\mathcal{E}_{\phi}(x)}\lVert\hat{z}_{i}-e_{k}\rVert\).
* **Probabilistic random.** We can also sample anchors based on the distance \(D_{i,k}\) between the codevectors and the encoded features. In this paper, we consider the probability \(p=\frac{\exp{(-D_{i,k})}}{\sum_{i=1}^{Bhw}\exp{(-D_{i,k})}}\).
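The four options above could be implemented along these lines. This is a hypothetical sketch under our own naming conventions; the "unique" option assumes the batch provides at least \(K\) features.

```python
import torch

def sample_anchors(features, codebook, method="probabilistic"):
    """Illustrative sketch: select K anchor features, one per codebook entry.

    features: (Bhw, n_q) encoded features of the current mini-batch
    codebook: (K, n_q) current codevectors
    """
    n, K = features.shape[0], codebook.shape[0]
    if method == "random":
        idx = torch.randint(n, (K,))
    elif method == "unique":                 # assumes n >= K
        idx = torch.randperm(n)[:K]
    else:
        d = torch.cdist(codebook, features)  # distances D_{k,i}, shape (K, n)
        if method == "closest":
            idx = d.argmin(dim=1)
        else:                                # probabilistic random
            p = torch.softmax(-d, dim=1)     # p = exp(-D) / sum_i exp(-D)
            idx = torch.multinomial(p, 1).squeeze(1)
    return features[idx]
```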
Contrastive loss. We further introduce a contrastive loss \(-\log\frac{e^{\mathrm{sim}(e_{k},\hat{z}_{i}^{+})/\tau}}{\sum_{i=1}^{Bhw}e^{\mathrm{sim}(e_{k},\hat{z}_{i}^{-})/\tau}}\) to encourage sparsity in the codebook. In particular, for each codevector \(e_{k}\), we directly select the closest feature \(\hat{z}_{i}^{+}\) as the positive pair and sample other farther features \(\hat{z}_{i}^{-}\) as negative pairs using the distances \(D_{i,k}\).
Relation to prior work. To mitigate the codebook collapse issue, several methods have been proposed, like normalized codevectors in ViT-VQGAN [44]. However, these methods only optimize the _active_ entries, rather than the entire codebook. Recently, SQ-VAE [36], SeQ-GAN [13], and VQ-WAE [38] assume that the codebook follows a fixed distribution. Although these methods achieve high perplexity, the reconstruction quality is _not_ always improved (Tab. 3). The most relevant work to ours is codebook reset, which randomly reinitializes the unused or low-used codevectors to high-usage ones [39] or encoder outputs [8]. However, these methods rely only on a temporary single value for initialization and miss the opportunity of exploiting online clustering across different training steps.
## 4 Experiments: Image Quantisation
### Experimental Details
Implementation. CVQ-VAE can be easily implemented in a few lines of PyTorch code, where the gradient for the selected codevectors is preserved. The code is available at [https://github.com/lyndonzheng/CVQ-VAE](https://github.com/lyndonzheng/CVQ-VAE).
Our implementation is built upon existing network architectures. We set all hyperparameters following the original code, except that we replace the original quantisers with our online clustering codebook. In particular, we first demonstrate our assumption on small datasets with the officially released VQ-VAE [37] implementation 1,2. Then, we verify the generality of our quantiser on large datasets using the officially released VQ-GAN [11] architecture 3.
Footnote 1: [https://github.com/deepmind/sonnet/blob/v2/sonnet/src/nets/vqvae.py](https://github.com/deepmind/sonnet/blob/v2/sonnet/src/nets/vqvae.py)
Footnote 2: [https://github.com/deepmind/sonnet/blob/v1/sonnet/examples/vqvae.py](https://github.com/deepmind/sonnet/blob/v1/sonnet/examples/vqvae.py)
Footnote 3: [https://github.com/CompVis/taming-transformers](https://github.com/CompVis/taming-transformers)
Datasets. We evaluated the proposed quantiser on various datasets, including MNIST [21], CIFAR10 [20], Fashion MNIST [41], the higher-resolution FFHQ [19], and the large-scale ImageNet [7].
Metrics. Following existing works [11, 47, 13], we evaluated the image quality between reconstructed and original images on different scales, including patch-level structural similarity index (SSIM), feature-level Learned Perceptual Image Patch Similarity (LPIPS) [45], and dataset-level Frechet Inception Distance (FID) [15]. We also report the perplexity score for the codebook ablation study as in SQ-VAE [36] and VQ-WAE [38]. It is defined as \(e^{-\sum_{k=1}^{K}p_{e_{k}}\log{p_{e_{k}}}}\), where \(p_{e_{k}}=\frac{n_{k}}{\sum_{k=1}^{K}n_{k}}\), and \(n_{k}\) is the number of encoded features associated with codevector \(e_{k}\).
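As an illustration, the perplexity of a batch of code assignments can be computed as follows. This small NumPy sketch is ours, not part of any released codebase.

```python
import numpy as np

def codebook_perplexity(assignments, K):
    """Illustrative sketch: perplexity exp(-sum_k p_k log p_k) of codebook usage.

    assignments: 1-D integer array of codevector indices, one per feature
    K:           number of codebook entries
    """
    counts = np.bincount(assignments, minlength=K)
    p = counts / counts.sum()
    nonzero = p > 0                      # unused entries contribute 0 to the entropy
    return np.exp(-np.sum(p[nonzero] * np.log(p[nonzero])))
```

A perfectly uniform usage of all \(K\) entries gives a perplexity of \(K\), while codebook collapse drives it far below \(K\).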
### Main Results
Quantitative Results: We first evaluated our CVQ-VAE and various quantisers, including VQ-VAE [37]\({}_{\text{NeurIPS'2017}}\), HVQ-VAE [39]\({}_{\text{NeurIPS'2020}}\), and SQ-VAE [36]\({}_{\text{ICML'2022}}\), under identical experimental settings in Tab. 1. All instantiations of our model outperform the baseline variants of previous state-of-the-art models. Although the latest SQ-VAE [36] optimizes all code entries by explicitly enforcing the codebook to be a defined distribution, this assumption may not hold for all datasets. For instance, code entries that respond to background elements like sky and ground should receive higher counts than code entries that represent specific objects, such as vehicle wheels. In contrast, our quantiser only encourages all code entries to be optimized, leaving the association to be automatically learned.
Then, we compared our CVQ-VAE with the state-of-the-art methods, including VQGAN [11]\({}_{\text{CVPR'2021}}\), ViT-VQGAN [44]\({}_{\text{ICLR'2022}}\), RQ-VAE [22]\({}_{\text{CVPR'2022}}\), and MoVQ [47]\({}_{\text{NeurIPS'2022}}\), for the task of reconstruction. Table 2 shows quantitative results on two large datasets. Under the same compression ratio (768\(\times\), _i.e_. 256\(\times\)256\(\times\)3\(\rightarrow\)16\(\times\)16), our model significantly outperforms the state-of-the-art models, including the baseline VQGAN [11] and the concurrent SeQ-GAN [13]. Interestingly, on the FFHQ dataset, our model even outperforms ViT-VQGAN [44] and RQ-VAE [22], which utilize 4\(\times\) tokens for the representation. This suggests that the high usage of codevectors is significant for maintaining information during data compression. Additionally, we also ran 4\(\times\) tokens experiments, as in MoVQ [47]. The CVQ-VAE further achieves a relative 10.1% improvement. Although our 4\(\times\) version shows a slightly worse rFID score than MoVQ [47] on the ImageNet dataset (1.20 _vs_. 1.12), we achieve better performance on other metrics (as shown in Appendix Tab. B.2).
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Method** & **Dataset** & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & rFID \(\downarrow\) \\ \hline
VQ-VAE [37] & MNIST & 0.9777 & 0.0282 & 3.43 \\
HVQ-VAE [39] & MNIST & 0.9790 & 0.0270 & 3.17 \\
SQ-VAE [36] & MNIST & 0.9819 & 0.0256 & 3.05 \\
**CVQ-VAE** & MNIST & **0.9833** & **0.0222** & **1.80** \\ \hline
VQ-VAE [37] & CIFAR10 & 0.8595 & 0.2504 & 39.67 \\
HVQ-VAE [39] & CIFAR10 & 0.8553 & 0.2553 & 41.08 \\
SQ-VAE [36] & CIFAR10 & 0.8779 & 0.2333 & 37.92 \\
**CVQ-VAE** & CIFAR10 & **0.8978** & **0.1883** & **24.73** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: **Reconstruction results** on the validation sets of MNIST (10,000 images) and CIFAR10 (10,000 images). All models are trained with the same experimental settings, except for the different quantisers.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
**Method** & **Dataset** & \(\mathcal{S}\) & \(\mathcal{K}\) & Usage \(\uparrow\) & rFID \(\downarrow\) \\ \hline
VQGAN [11] & FFHQ & 16\({}^{2}\) & 1024 & 42\% & 4.42 \\
ViT-VQGAN [44] & FFHQ & 32\({}^{2}\) & 8192 & -- & 3.13 \\
RQ-VAE [22] & FFHQ & 16\({}^{2}\)\(\times\)4 & 2048 & -- & 3.88 \\
MoVQ [47] & FFHQ & 16\({}^{2}\)\(\times\)4 & 1024 & 56\% & 2.26\({}^{*}\) \\
SeQ-GAN [13] & FFHQ & 16\({}^{2}\) & 1024 & 100\% & 3.12 \\
**CVQ-VAE** (ours) & FFHQ & 16\({}^{2}\) & 1024 & 100\% & 2.80 \\
**CVQ-VAE** (ours) & FFHQ & 16\({}^{2}\)\(\times\)4 & 1024 & 100\% & **2.03** \\ \hline
VQGAN [11] & ImageNet & 16\({}^{2}\) & 1024 & 44\% & 7.94 \\
ViT-VQGAN [44] & ImageNet & 32\({}^{2}\) & 8192 & 96\% & 1.28 \\
RQ-VAE [22] & ImageNet & 8\({}^{2}\)\(\times\)16 & 16384 & -- & 1.83 \\
MoVQ [47] & ImageNet & 16\({}^{2}\)\(\times\)4 & 1024 & 63\% & **1.12** \\
SeQ-GAN [13] & ImageNet & 16\({}^{2}\) & 1024 & 100\% & 1.99 \\
**CVQ-VAE** (ours) & ImageNet & 16\({}^{2}\) & 1024 & 100\% & 1.57 \\
**CVQ-VAE** (ours) & ImageNet & 16\({}^{2}\)\(\times\)4 & 1024 & 100\% & 1.20\({}^{*}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: **Reconstruction results** on the validation sets of FFHQ (10,000 images) and ImageNet (50,000 images). \(\mathcal{S}\) denotes the latent size of encoded features, and \(\mathcal{K}\) is the number of codevectors in the codebook. Usage indicates how many entries in a codebook are used during quantisation on the validation set. More evaluation metrics are reported in Appendix Table B.2.
Figure 3: **Reconstructions from different models.** The two models are trained under the same settings, except for the different quantisers. Compared with the state-of-the-art baseline VQGAN [11], the proposed model significantly improves the reconstruction quality (highlighted in red boxes) under the same compression ratio (768\(\times\), with 16\(\times\) downsampling).
Qualitative Results: The qualitative comparisons are presented in Fig. 3. Our model achieves superior results even under challenging conditions. Compared to the baseline model VQGAN [11], our CVQ-VAE provides higher-quality reconstructed images that retain much more details. In particular, VQGAN struggles with reconstructing abundant scene elements, as evidenced by the artifacts on the bowls. In contrast, our CVQ-VAE shows no such artifacts. These fine-grained details are crucial for downstream generation-related tasks, such as generation, completion, and translation [47].
### Ablation Experiments
We ran a number of ablations to analyse the effects of core factors in codebook learning. Results are reported in Tabs. 3, 4, B.3 and B.4.
Core Factors. We evaluated core components in our redesigned online clustering quantiser in Tab. 3, which shows that the new quantiser considerably enhances the reconstruction quality. We started by implementing the baseline configuration (\(\mathbb{A}\)) from VQ-VAE [37]. Next, we explored different distance metrics, which are used to look up the closest entry for each encoded feature. We found that using
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
 & \multicolumn{3}{c}{**MNIST** (28\(\times\)28)} & \multicolumn{3}{c}{**CIFAR10** (32\(\times\)32)} & \multicolumn{3}{c}{**FFHQ** (256\(\times\)256)} \\
**Methods** & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & rFID \(\downarrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & rFID \(\downarrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & rFID \(\downarrow\) \\ \hline
near codevectors [39] & 0.9790 & 0.0270 & 3.17 & 0.8553 & 0.2553 & 41.08 & 0.7282 & 0.1085 & 4.31 \\
hard encoded features [8] & 0.9814 & 0.0243 & 2.25 & 0.8988 & 0.1978 & 29.16 & 0.7646 & 0.0870 & 3.91 \\
running average (ours) & **0.9823** & **0.0236** & **2.23** & **0.8991** & **0.1897** & **26.62** & **0.8193** & **0.0603** & **2.94** \\ \hline \hline
\end{tabular}
\end{table}
Table 4: **Ablations for CVQ-VAE on image quantisation.** We mainly train on the MNIST and CIFAR10 training sets, and evaluate on the validation sets unless otherwise noted.
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline
 & & \multicolumn{3}{c}{**MNIST** (28\(\times\)28)} & \multicolumn{3}{c}{**CIFAR10** (32\(\times\)32)} & \multicolumn{3}{c}{**Fashion MNIST** (28\(\times\)28)} \\
 & **Method** & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & rFID \(\downarrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & rFID \(\downarrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & rFID \(\downarrow\) \\ \hline
A & Baseline VQ-VAE [37]\({}_{\text{NeurIPS'2017}}\) & 0.9777 & 0.0282 & 3.43 & 0.8595 & 0.2504 & 39.67 & 0.9140 & 0.0801 & 12.73 \\
B & + Cosine distance & 0.9791 & 0.0266 & 3.06 & 0.8709 & 0.2303 & 35.14 & 0.9160 & 0.0764 & 11.40 \\
C & + Anchor initialization (offline) & 0.9810 & 0.0253 & 2.78 & 0.8829 & 0.2132 & 31.10 & 0.9145 & 0.0773 & 11.92 \\
D & + Anchor initialization (online) & 0.9823 & 0.0236 & 2.23 & **0.8991** & 0.1897 & 26.62 & **0.9254** & **0.0683** & 9.27 \\
E & + Contrastive loss & **0.9833** & **0.0222** & **1.80** & 0.8978 & **0.1883** & **24.73** & 0.9233 & 0.0693 & **8.85** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: **Results on various settings.** We report patch-level SSIM, feature-level LPIPS, and dataset-level FID. All evaluation metrics are reported in Appendix Table B.3.
cosine similarity (\(\mathbb{B}\)) improved performance on some datasets, which is consistent with the findings in previous works such as ViT-VQGAN [44]. In configuration (\(\mathbb{C}\)), we reinitialized the unoptimized code entries with the selected anchors, but only in the first training batch, which we refer to as the _offline_ version. This improved the usage of the codebook, resulting in slightly better gains. Significantly, when we applied the proposed _running average updates_ across different training mini-batches in configuration (\(\mathbb{D}\)), the performance on all metrics in various datasets improved substantially. This suggests that our proposed online clustering is significant for handling the changing encoded feature representation in deep networks. Finally, we introduced a contrastive loss for each entry based on its similarity to the features (\(\mathbb{E}\)), which further improved the results.
Codebook Size. VQ embeds the continuous features into a discrete space with a finite size of \(K\) codebook entries. The codebook size has significant effects on traditional clustering. In Tab. 3(a), we show the performance of various quantisers with different codebook sizes. Our CVQ-VAE benefits greatly from a larger number of codebook entries, while SQ-VAE [36] shows smaller improvements. It is worth noting that _not_ all quantisers automatically benefit from a larger codebook size, as exemplified by VQ-VAE's performance on the CIFAR10 dataset shown in Tab. 3(a) (bottom).
Perplexity _vs._ rFID. Recent concurrent studies [36, 13, 38] have explicitly promoted a large perplexity by optimizing a perplexity-related loss. However, as illustrated in Tab. 3(a), a larger perplexity does _not_ always guarantee a lower rFID. This suggests that a uniform codebook distribution, represented by the highest perplexity score, may not be the optimal solution for the codebook.
Codebook Dimensionality. Table 3(b) presents the results on various codebook dimensionalities. Interestingly, the performance of the quantisers _does not exhibit a straightforward positive correlation_ with the codebook dimensionality. In fact, some smaller codebook dimensionalities yield better results than larger ones, indicating that the choice of codebook dimensionality should be carefully considered depending on the specific application and dataset. Based on this observation, a low-dimensional codebook can be employed to represent images and used in downstream tasks, as demonstrated in the latent diffusion model (LDM) [32]. The relevant downstream generation applications can be found in Sec. 5.
Anchor Sampling Methods. An evaluation of various _anchor sampling methods_ is reported in Tab. 3(c). The results indicate that the _offline_ version with only one reinitialization is highly sensitive to the anchor sampling methods. Interestingly, the random, unique, closest, and probabilistic random versions perform similarly for the _online_ version, up to some random variations (rFID from 2.23 to 2.59 on MNIST, and from 25.99 to 26.62 on CIFAR10). As discussed in Sec. 3.2, different anchor sampling methods have significant effects on traditional clustering [4, 14]. However, our experiments demonstrate that the codebook reinitialization needs to consider the fact that _the encoded features change as the deep network is trained_. The results highlight the effectiveness of our _online_ version with the running average updates, which is insensitive to the different instantiations.
Reinitialization Methods. Several recent works [39, 8] also consider updating the unoptimized codevectors, a strategy called _codebook reset_. In Tab. 3(d), we compare these methods with VQ-VAE's architecture [37] under the same experimental setting, except for the different quantisers. As discussed in Sec. 3.2, HVQ-VAE [39] resets low-usage codevectors using high-usage ones, which learns a narrow codebook, resulting in limited improvement. The hard encoded features presented in [8] achieve better results (3.17\(\rightarrow\)2.25, 41.08\(\rightarrow\)29.16, and 4.31\(\rightarrow\)3.91) than HVQ-VAE [39] by adding a noise signal to ensure independent anchors for each codebook entry. In contrast, our CVQ-VAE calculates the running average updates, resulting in a significant improvement. This further suggests that tracking the online clustering centres across different training mini-batches is crucial for proper codebook reinitialization.
## 5 Experiments: Applications
Beyond data compression, our CVQ-VAE can also be easily applied to downstream tasks, such as generation and completion. Following existing works [11, 32, 47], we conduct a simple experiment to verify the effectiveness of the proposed quantiser. Although this simple yet effective quantiser could be applied to further applications, exploring them is beyond the main scope of this paper.
\begin{table}
\begin{tabular}{l c c} \hline \hline
 & \multicolumn{2}{c}{FID \(\downarrow\)} \\
**Methods** & Churches & Bedrooms \\ \hline
StyleGAN [19] & 4.21 & 2.35 \\
DDPM [16] & 7.89 & 4.90 \\
ImageBART [10] & 7.32 & 5.51 \\
Projected-GAN [35] & **1.59** & **1.52** \\ \hline
LDM [32]-8\({}^{*}\) & 4.02 & -- \\
LDM [32]-4 & -- & 2.95 \\ \hline
LDM [32]-8 (reproduced) & 4.15 & 3.57 \\
CVQ-VAE-LDM [32]-8 & 3.86 & 3.02 \\ \hline \hline
\end{tabular}
\end{table}
Table 5: **Quantitative comparisons on unconditional image generation.** A better quantiser improves the generation quality without modifying the training settings in the second stage. \({}^{*}\): trained in a \(KL\)-regularized latent space, instead of the VQ discrete space.
Implementation Details. We made minor modifications to the baseline LDM [32] system when adapting it to our quantiser for the downstream tasks. We first replace the original quantiser from VQGAN [11] with our proposed CVQ-VAE quantiser. Then, we trained the models on LSUN [43] and ImageNet [7] for generation (8\(\times\) downsampling). Following the setting in LDM [32], we set the number of sampling steps to 200 during inference.
### Unconditional Generation
Tables 5 and 6 compare our proposed CVQ-VAE to the state-of-the-art methods on the LSUN, FFHQ, and ImageNet datasets for unconditional and _class_-conditional image generation. The results show that our model consistently improves the generated image quality under the same compression ratio, as in reconstruction. This confirms the advantages of using a better codebook for downstream tasks. Our CVQ-VAE also outperforms the LDM-8\({}^{*}\) that is trained with a \(KL\)-regularized latent space, indicating that exploring a better discrete codebook is worth pursuing for unsupervised representation learning. Our CVQ-VAE also achieves comparable results to LDM-4 (3.02 _vs._ 2.95), whereas LDM-4 uses a 4\(\times\) higher resolution representation, requiring more computational cost.
Example results are presented in Fig. 4. As we can see, even with 8\(\times\) downsampling, the proposed CVQ-VAE is still able to generate reasonable structures for these complicated scenes with various instances. Although there are artifacts on windows in the two scenarios, the other high-frequency details are realistic, such as the sheet on the bed.
## 6 Conclusion and Limitation
We have introduced CVQ-VAE, a novel codebook reinitialization method that tackles the _codebook collapse_ issue by assigning the online clustered anchors to unoptimized code entries. Our proposed quantiser is a simple yet effective solution that can be integrated into many existing architectures for representation learning. Experimental results show that our CVQ-VAE significantly outperforms the state-of-the-art VQ models on image modeling, yet without increasing computational cost and latent size. We hope this new plug-and-play quantiser will become an important component of future methods that use VQ in their learned architectures.
Ethics.We use the MNIST, Fashion-MNIST, CIFAR10, LSUN, and ImageNet datasets in a manner compatible with their terms. While some of these images contain personal information (_e.g_., faces) collected without consent, algorithms in this research do not extract biometric information. For further details on ethics, data protection, and copyright please see [https://www.robots.ox.ac.uk/~vedaldi/research/union/ethics.html](https://www.robots.ox.ac.uk/~vedaldi/research/union/ethics.html).
Acknowledgements.This research is supported by ERC-CoG UNION 101001212.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
 & \multicolumn{2}{c}{**FFHQ**} & \multicolumn{2}{c}{**ImageNet**} \\
**Model** & Steps & FID \(\downarrow\) & Steps & FID \(\downarrow\) \\ \hline
RQ-VAE [22]\({}_{\text{CVPR'2022}}\) & 256 & 10.38 & 1024 & 7.55 \\
MoVQ [47]\({}_{\text{NeurIPS'2022}}\) & 1024 & 8.52 & 1024 & 7.13 \\
SQ-VAE [36]\({}_{\text{ICML'2022}}\) & 200 & 5.17 & 250 & 9.31 \\
LDM-4 [32]\({}_{\text{CVPR'2022}}\) & 200 & 4.98 & 250 & 10.56 \\
**CVQ-VAE** (ours) & 200 & **4.46** & 250 & **6.87** \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Quantitative results for unconditional generation on FFHQ and _class_-conditional generation on ImageNet.
Figure 4: **Unconditional image generation on LSUN [43], and _class_-conditional image generation on ImageNet [7]. Following the baseline LDM [32], our results are generated at 256\(\times\)256 resolution. Our training parameters are the same as in LDM, except for the different quantisers and 8\(\times\) downsampling for the latent representations.** |
2307.02445 | Quantifying Poynting flux in the Quiet Sun Photosphere | Poynting flux is the flux of magnetic energy, which is responsible for
chromospheric and coronal heating in the solar atmosphere. It is defined as a
cross product of electric and magnetic fields, and in ideal MHD conditions it
can be expressed in terms of magnetic field and plasma velocity. Poynting flux
has been computed for active regions and plages, but estimating it in the quiet
Sun (QS) remains challenging due to resolution effects and polarimetric noise.
However, with upcoming DKIST capabilities, these estimates will become more
feasible than ever before. Here, we study QS Poynting flux in Sunrise/IMaX
observations and MURaM simulations. We explore two methods for inferring
transverse velocities from observations - FLCT and a neural network based
method DeepVel - and show DeepVel to be the more suitable method in the context
of small-scale QS flows. We investigate the effect of azimuthal ambiguity on
Poynting flux estimates, and we describe a new method for azimuth
disambiguation. Finally, we use two methods for obtaining the electric field.
The first method relies on idealized Ohm's law, whereas the second is a
state-of-the-art inductive electric field inversion method PDFI SS. We compare
the resulting Poynting flux values with theoretical estimates for chromospheric
and coronal energy losses and find that some of Poynting flux estimates are
sufficient to match the losses. Using MURaM simulations, we show that
photospheric Poynting fluxes vary significantly with optical depth, and that
there is an observational bias that results in underestimated Poynting fluxes
due to unaccounted shear term contribution. | Dennis Tilipman, Maria Kazachenko, Benoit Tremblay, Ivan Milic, Valentin Martinez Pillet, Matthias Rempel | 2023-07-05T17:15:11Z | http://arxiv.org/abs/2307.02445v1 | # Quantifying Poynting flux in the Quiet Sun Photosphere
###### Abstract
Poynting flux is the flux of magnetic energy, which is responsible for chromospheric and coronal heating in the solar atmosphere. It is defined as a cross product of electric and magnetic fields, and in ideal MHD conditions it can be expressed in terms of magnetic field and plasma velocity. Poynting flux has been computed for active regions and plages, but estimating it in the quiet Sun (QS) remains challenging due to resolution effects and polarimetric noise. However, with upcoming DKIST capabilities, these estimates will become more feasible than ever before. Here, we study QS Poynting flux in Sunrise/IMaX observations and MURaM simulations. We explore two methods for inferring transverse velocities from observations - FLCT and a neural network based method DeepVel - and show DeepVel to be the more suitable method in the context of small-scale QS flows. We investigate the effect of azimuthal ambiguity on Poynting flux estimates, and we describe a new method for azimuth disambiguation. Finally, we use two methods for obtaining the electric field. The first method relies on idealized Ohm's law, whereas the second is a state-of-the-art inductive electric field inversion method PDFI_SS. We compare the resulting Poynting flux values with theoretical estimates for chromospheric and coronal energy losses and find that some of Poynting flux estimates are sufficient to match the losses. Using MURaM simulations, we show that photospheric Poynting fluxes vary significantly with optical depth, and that there is an observational bias that results in underestimated Poynting fluxes due to unaccounted shear term contribution.
The Sun (1693) -- Solar atmosphere (1477) -- Solar photosphere (1518) -- Solar chromospheric heating (1987) -- Solar physics (1476)
## 1 Introduction
Quantitative estimates of vertical energy transport in the solar photosphere have been limited, yet they are explicitly relevant to many observed phenomena on the Sun, including flux emergence (Cheung & Isobe, 2014; Afanasyev et al., 2021), chromospheric and coronal heating (Withbroe & Noyes, 1977; Vernazza et al., 1981), and solar flares and coronal mass ejections (Tziotziou et al., 2013; Kazachenko et al., 2015; Pomoell et al., 2019). The flux of magnetic energy, i.e. Poynting flux or Poynting vector, defined as the cross product of electric and magnetic fields, has long been considered a primary mechanism for the energy transport from the photosphere to the overlying atmosphere, but specific magnetically-driven processes and their relative importance have remained somewhat elusive (Steiner et al., 2008; Shelyag et al., 2012; Liu & Schuck, 2012; Welsch & Fisher, 2015). Typically, the flux of magnetic energy is divided
into emergence and shear terms. The emergence term arises from advection of magnetic field lines by upward plasma flows, and the shear term (also called wave term) is associated with twisting of the field lines by horizontal flows.
Quantitative investigations of photospheric Poynting flux are a relatively recent development, owing to the fact that the intermediate quantities needed to compute it - full electric and magnetic field vectors - are difficult to obtain even from modern state-of-the-art observations. Significant strides have been made in both magnetic field inversions from observed Stokes profiles (de la Cruz Rodriguez, 2019; Asensio Ramos and Diaz Baso, 2019), and electric field inversions (e.g., Welsch and Fisher, 2015; Fisher et al., 2020). However, most of the quantitative studies of Poynting flux have been constrained to either simulated data (Shelyag et al., 2012; Kazachenko et al., 2014; Afanasyev et al., 2021; Breu et al., 2022, 2023), or active regions and plages (Kazachenko et al., 2015; Lumme et al., 2019), since in these settings one deals with relatively high polarimetric signal-to-noise ratios (SNR). In particular, Breu et al. (2022) used high-fidelity simulations of the quiet-Sun (QS) photosphere to explain heating in a coronal loop, while Kazachenko et al. (2015) computed Poynting flux from the active region AR 11158 and found it to be sufficient to explain the heating of chromosphere and corona, according to theoretical estimates in Withbroe and Noyes (1977). However, an analogous, observation-based study into Poynting flux in QS has not been conducted. Yeates et al. (2014) and Welsch (2014) put constraints on the coronal energy associated with motions of photospheric footpoints and plage, but they used ideal MHD formulation of Ohm's law and they assumed zero upward advective motion, thereby neglecting the emergence term of Poynting flux. More recently, Silva et al. (2022) produced quantitative estimates of QS Poynting flux, but their focus was mostly on the horizontal flux and their method also included several simplifications, such as the idealized Ohm's law and reliance on apparent motions of magnetic field concentrations to obtain velocities transverse to the line-of-sight (i.e. parallel to plane of sky).
The studies of magnetic features in the QS have been few and far between due to both the noisiness of observations and systematic issues. The Sunrise/IMaX balloon-borne probe provides some of the best currently available QS polarimetry (Martinez Pillet et al., 2011), yet even in this data sample, strong linearly polarized light constitutes only about 10% of the field of view (Kianfar et al., 2018). Furthermore, there is the outstanding problem of magnetic field 180\({}^{\circ}\) azimuthal ambiguity, wherein spectropolarimetric inversions of Stokes profiles return two mathematically valid configurations of transverse magnetic field. While many methods of disambiguation have been proposed (for review of some of them and their respective limitations, see e.g. Pevtsov et al., 2021), none of them have been validated on QS magnetograms. Since full magnetic vector is necessary to compute Poynting flux, the task of disambiguation is necessary.
As a result of these observational and methodological challenges, quantitative investigations into Poynting flux in QS have been limited. At the same time, there will soon be unprecedented observations of QS from the Daniel K. Inouye Solar Telescope (DKIST), which will allow us to improve significantly on spatial resolution, cadence, and/or polarimetric sensitivity (Rimmele et al., 2020). There are also sophisticated methods of computing Poynting flux, which have not yet been tested on QS data. This presents a gap in the current state of this discipline, which this paper seeks to fill. Since QS constitutes the majority of observed photosphere area-wise, it is imperative that we understand the energy flux from it. The goal of this paper is to compute Poynting flux in the QS photosphere. To this end, we use several methods and we apply them to both observational and simulated data, with a focus on the former.
The remainder of the paper is structured as follows: in §2 we describe the observational and simulated data we used in this work, in §3 we explain how we obtain Poynting flux and the necessary intermediate quantities - full velocity, magnetic field, and electric field vectors. In §4 we describe Poynting flux estimates from the various employed methods, and in §5 we discuss them. Finally, in §6 we summarize our findings and outline some of the possible future work.
## 2 Data
### Observational Data: IMaX
We perform our analysis on spectropolarimetric observations from the Imaging Magnetograph eXperiment (IMaX) instrument on board the SUNRISE balloon-borne observatory (Martinez Pillet et al., 2011). We use one continuous IMaX/SUNRISE time series taken on June 9th, 2009 between 01:30:54-02:02:29 UT. This data set covers a \(40\times 40\) Mm region of QS at the disk center and includes a slowly evolving region of relatively high (\(>200\) G, for filling factor unity) magnetic field concentration seen at the bottom of the V Stokes vector map in panels e-h of Fig. 1. The photon SNR of 1000, cadence of 33.25 s, and sampling resolution of 0.0545"/px make the IMaX data set the best available source for the purposes of studying Poynting flux in QS. With this combination of cadence and spatial resolution, a
typical flux element moving at a moderate speed of 3 km s\({}^{-1}\) in the plane of sky (see e.g. Asensio Ramos et al., 2017) would traverse two pixels.
IMaX provides high-quality, diffraction-limited polarimetric observations of QS in the Fe I 5250.2 A line, which is sensitive to photospheric magnetic fields. The observations include full Stokes vector (\(I,Q,U,V\)) sampled in five wavelength positions: \(\pm 40\) and \(\pm 80\) mA on either side of the Fe I 5250.2 A line, and at \(+227\) mA in the continuum, with spectral resolution of 65 mA (85 mA Gaussian). The IMaX Fabry-Perot sensor introduces a systematic blue shift which grows as a function of distance from the center of field of view (FOV). We apply a correction in the form of distance-to-center-dependent red shift to account for this effect on LOS velocity. The level 0 data had also been corrected to minimize instrumental effects, such as dark and flat-fielding and removal of dust-induced effects, resulting in non-reconstructed (NR) data (Martinez Pillet et al., 2011). The \(Q\) and \(U\) noise levels in the NR data set were estimated to be \(8.3\times 10^{-4}I_{c}\) and \(1.1\times 10^{-3}I_{c}\), respectively (Jafarzadeh et al., 2014). In addition, the IMaX point-spread function (PSF) was used to apply a phase diversity reconstruction (PDR) to NR data, thereby increasing spatial resolution to 0.15" at the expense of increasing \(Q\) and \(U\) noise levels to \(2.6\times 10^{-3}I_{c}\) and \(3.6\times 10^{-3}I_{c}\), respectively (Kianfar et al., 2018). We used the NR data, with their lower polarimetric noise, for magnetic field inversions, and the PDR data, with their higher spatial resolution, for velocity inversions.
### Simulation Data: STAGGER
STAGGER (Magic et al., 2013) is a 3-D radiative magneto-hydrodynamic (MHD) code, which solves for conservation of mass, energy, and momentum equations. Those equations are coupled with radiative transfer equations in local thermodynamic equilibrium (LTE) non-grey atmosphere on a 48-km grid size. The simulation cadence is 60 s. We use continuum intensities and transverse velocities from STAGGER simulations of QS to validate velocities obtained with FLCT and the neural network based method DeepVel, which is discussed further in Section 3.2.1.
### Simulation Data: MURaM
Figure 1: _Left panels, a–d:_ IMaX Stokes vector maps at \(t=1430\) s. _Right panels, e–h:_ IMaX Magnetogram and LOS velocity map derived using Milne-Eddington inversions. For LOS velocity, a correction was applied to account for the systematic bias introduced by IMaX Fabry-Perot sensor. The bias scales as a function of distance from FOV center. The green square in the four right panels (e–h) outlines the region of interest (ROI), which contains the strongest magnetic field concentration. A close-up view of ROI is shown in Figure 4. See §3.1.1 for detailed discussion. The full FOV is \(836\times 836\) pixels, the ROI is \(100\times 100\) pixels, and each pixel is 48 km across.
MURaM (Vogler et al., 2005; Rempel, 2014, 2017) is a state-of-the-art radiative MHD code used to model a variety of features in the solar atmosphere and below. The MURaM code solves for mass and energy transfers between the subsurface convection zone and the photosphere, chromosphere, and corona. The simulation we analyze here is based on the case 'O16bM' from Rempel (2014) and was extended in the vertical direction by about 500 km. The simulation solves for all the main MHD quantities (magnetic and velocity vectors, temperature, pressure, heat and energy fluxes) in a domain with the physical extent of \(24.576\times 24.576\times 8.192\) Mm\({}^{3}\), with an isotropic grid spacing of 16 km, resulting in a \(1536\times 1536\times 512\) grid. It spans optical depths between approximately \(5\times 10^{-8}<\tau<10^{9}\), i.e. from the convection zone to the upper chromosphere and transition region. The location \(\tau=1\) is found about 2 Mm beneath the top boundary. The relevant simulation quantities from a QS MURaM simulation (LOS velocity, \(|B|\), \(S_{z}\), and \(S_{h}\)) are shown in Figure 2.
## 3 Methodology
Recall that Poynting flux is defined as
\[\mathbf{S}=\frac{1}{4\pi}\mathbf{E}\times\mathbf{B}, \tag{1}\]
where \(\mathbf{B}\) and \(\mathbf{E}\) are magnetic and electric field vectors, respectively. In §3.1, we first describe how we use the polarimetric observations to infer the magnetic field \(\mathbf{B}\) in the quiet Sun. In §3.1.1, we summarize the three methods we use to disambiguate the azimuth of the horizontal magnetic field: ME0 (Leka et al., 2009), random azimuth and Poynting-flux optimization methods. Finally, in §3.2, we overview the two approaches we use to derive the electric field \(\mathbf{E}\): the
Figure 2: Outputs of a MURaM simulation at the geometrical surface \(z=0\), which corresponds to optical depth \(\overline{\tau}=1.1\), averaged over FOV. The top right panel is the inset designated by the red square on the top center panel. Arrows represent the orientation of transverse magnetic fields. The vertical and horizontal Poynting fluxes \(S_{z}\) and \(S_{h}\) are computed using the ideal-MHD method (for details, see §3).
PDFI_SS electric field inversion method, which solves Faraday's induction equation
\[-\nabla\times\mathbf{E}=\frac{\partial\mathbf{B}}{\partial t}, \tag{2}\]
and the simplified electric field inversion method that strictly imposes the idealized Ohm's law
\[\mathbf{E}=-\mathbf{v}\times\mathbf{B}, \tag{3}\]
where \(\mathbf{v}\) is the velocity vector. In the simplified formulation of Poynting flux, where idealized Ohm's law is imposed strictly, we can express vertical Poynting flux (\(S_{z}\)) as follows:
\[S_{z}=\frac{1}{4\pi}[v_{z}B_{h}^{2}-(\mathbf{v}_{h}\cdot\mathbf{B}_{h})B_{z}], \tag{4}\]
where the \(z\) and \(h\) subscripts denote vertical and transverse variables, respectively. In this expression, the first term is the emergence term and the second is the wave, or shear, term. In §3.2.1, we describe the two transverse velocity reconstruction methods, DeepVel and FLCT, since transverse velocities are a required intermediate quantity for using either of the electric field inversions. For brevity, we refer to the simplified approach as the "ideal-MHD" method and to the PDFI_SS method as the "inductive" method. We emphasize, however, that both approaches could enforce the ideal MHD condition, but to different extents: the "ideal-MHD" method does so strictly, whereas the "inductive" method does not by default, though it can enforce it via an ideal non-inductive contribution (see Section 2.4 in Kazachenko et al. (2014)).
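Under these conventions, Eq. (4) can be evaluated pixel by pixel from maps of \(\mathbf{v}\) and \(\mathbf{B}\). The NumPy sketch below is our own illustration (CGS units assumed), not code from this paper's pipeline:

```python
import numpy as np

def vertical_poynting_flux(vx, vy, vz, bx, by, bz):
    """Illustrative sketch: vertical Poynting flux S_z of Eq. (4), ideal-MHD method.

    All inputs are 2-D maps; with B in Gauss and v in cm/s the result is
    in erg cm^-2 s^-1 (CGS).
    """
    emergence = vz * (bx**2 + by**2)        # v_z * B_h^2
    shear = -(vx * bx + vy * by) * bz       # -(v_h . B_h) * B_z
    return (emergence + shear) / (4.0 * np.pi)
```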
### Magnetic Field inversions
To obtain the magnetic field configuration and LOS velocity from level-1 IMaX polarimetry, we apply the Milne-Eddington (ME) inversion code pyMilne (de la Cruz Rodriguez, 2019) to the NR IMaX data set. We chose this method for its relative computational efficiency - the assumptions of Milne-Eddington atmosphere simplify the inversion scheme while adequately capturing the physics of the photospheric Fe I 5250.2 A line formation. For each IMaX frame, the inversion code uses several seeds (nRandom parameter) to prevent the scheme from converging to local minima, and several Levenberg-Marquardt iterations (nIter) per seed. It should be noted that pyMilne assumes magnetic filling factor of unity, which may introduce bias in transverse magnetic field inversions (Leka et al., 2022).
We show an example of level 1 Stokes data in panels a-d in Figure 1 and the corresponding outputs of Milne-Eddington inversions in panels e-h. Clearly visible is the high-V signal region at the bottom of FOV. It corresponds to a strong B-field region that persists and slowly evolves throughout the observation window. We designate it as the region of interest (ROI) and denote it by a green rectangle in the four right panels. The ROI is not associated specifically with either upflows or downflows, as seen from the dopplergram.
We also show the resulting distributions of LOS and transverse magnetic field components in Figure 3. The negative polarity in ROI is clearly seen in the skewed shape of \(B_{z}\) histogram. As seen from the bottom-left panel, other regions with strong polarization signal are much smaller in extent. They are also more transient, highlighting the difficulties of QS polarimetric observations.
As can be seen in Figure 1, the polarimetric signal, particularly in Q and U (panels b and c), is quite weak in our data (\(<200\) G in most of the FOV). This is of course to be expected in the quiet Sun regime, where magnetic fields are only strong enough to produce distinct linear polarization features in 3-16 % of pixels in the FOV of SUNRISE/IMaX (Kianfar et al., 2018; Liu et al., 2022). In parts of the analysis that follows, we only consider regions of the FOV with signal strengths above a certain threshold. We chose the threshold of 50 G for masking out pixels with insufficiently strong magnetic fields, for the following reasons: 1) 50 G is approximately equal to \(3\sigma\) in magnetic field strength distribution (bottom right panel of Figure 3), 2) the subset of pixels in FOV with \(B>50\)G closely (within 5 G) corresponds to the pixels where at least one of Q, U, or V spectra exhibits strong enough (\(>3\sigma\)) deviations from continua, 3) this threshold is consistent with the minimum horizontal field strength described in Kianfar et al. (2018), where the strength of linear polarization features in IMaX magnetograms was found to be in the range 50-500 G.
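As an illustration, such a mask could be built as follows; this is a hypothetical NumPy sketch, and the array layouts and scalar noise levels are our own assumptions rather than the actual reduction code:

```python
import numpy as np

def quiet_sun_mask(B, stokes_q, stokes_u, stokes_v,
                   sigma_q, sigma_u, sigma_v, b_min=50.0):
    """Illustrative sketch: pixels with |B| > 50 G or a > 3 sigma polarimetric signal.

    B:        (ny, nx) field strength in Gauss (filling factor unity)
    stokes_*: (ny, nx, n_wav) Q, U, V profiles normalised to the continuum
    sigma_*:  scalar noise levels in units of I_c
    """
    field_mask = B > b_min
    pol_mask = (np.any(np.abs(stokes_q) > 3 * sigma_q, axis=-1)
                | np.any(np.abs(stokes_u) > 3 * sigma_u, axis=-1)
                | np.any(np.abs(stokes_v) > 3 * sigma_v, axis=-1))
    return field_mask, pol_mask
```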
#### 3.1.1 Azimuth Disambiguation
Azimuthal \(180^{\circ}\) ambiguity is a well-known problem, wherein spectropolarimetric inversions based on the Zeeman effect produce two solutions for the \(\mathbf{B}\)-field azimuth, and the two solutions are mathematically equally valid. Several solutions to this problem have been proposed. These include global optimization mechanisms, such as ME0, where the preferred
Figure 3: Distributions of **B**-field components in IMaX magnetograms obtained with Milne-Eddington inversions, before disambiguation at \(t=1430\) s. _Top row:_ Histograms of **B**-field components, showing average and standard deviation in each bin taken over all magnetograms during the observation window. The relatively strong negative polarity is evident in the non-symmetric \(B_{z}\) histogram. _Bottom row, left:_ LOS velocity map with overlaid contours corresponding to area where polarimetric signal exceeds \(3\sigma\) (yellow) and \(5\sigma\) (red) at \(t=1430\) s. _Bottom row, right:_ Standard deviation in **B**-field components as a function of time taken over the entire FOV (solid lines) and only the pixels with polarimetric signal (at least one of Stokes Q, U, or V vectors) in excess of \(3\sigma\) from the continuum (dashed lines). \(B_{y}\) variance is the lowest, since these magnetograms have not been solved for the \(180^{\circ}\) ambiguity. For the same reason, the \(B_{y}\) histogram in the top row does not include negative values. For more discussion on azimuthal ambiguity, see §3.1.1.
magnetic field configuration results in a globally minimized magnetic energy (Leka et al., 2009). Other methods select the orientation of magnetic fields that results in the highest \(B_{z}\), if the magnetogram is taken off disk center, or they look for opposite polarities and select an orientation that would close field lines between the polarities (Metcalf et al., 2006). It should be noted that none of these methods have been rigorously tested in the QS regime, as linear polarization strength is usually too low to adequately employ these methods.
In this work, we attempt to use three (and end up using two) methods to disambiguate azimuths: ME0, randomization, and Poynting flux optimization. We first use ME0, as it is the most physically rigorous of the three methods and it has been extensively used, including, for example, in Hinode and SDO/HMI data processing pipelines (Leka et al., 2009; Hoeksema et al., 2014). ME0, or the minimum energy method, is an optimization algorithm that minimizes the global quantity \(\lambda|J_{z}|+|\nabla\cdot\mathbf{B}|\), where \(J_{z}\) is vertical current and \(\lambda\) is a modifiable scalar parameter that determines the relative importance of the two terms. As mentioned, ME0 has not been tested on QS data, so, to that end, we tested ME0 on synthetic magnetograms obtained from the 3D MHD STAGGER code (see §2.2). Unfortunately, ME0 performed poorly on QS magnetograms produced by STAGGER, likely due to different physical assumptions under which STAGGER and ME0 operate. The issue with ME0 validation warrants a more detailed investigation, but we leave it for future work, as it is not the focus of the present paper.
While we cannot use full ME0 capabilities, the code is capable of performing a potential-field acute-angle disambiguation. We use this method to disambiguate azimuths in the first frame, and for each subsequent frame, we resolve the ambiguity using the acute angle with respect to the previous frame. Another approach is to use the regular ME0 disambiguation while setting the \(\lambda\) weighting factor for \(|J_{z}|\) to 0. The minimized quantity is then simply the divergence of the magnetic field. As in the potential-field disambiguation, we only apply this method to the first frame and then select the azimuths resulting in an acute angle with respect to the previous frame. In both cases this is done in order to minimize temporal discontinuities. While not strictly physical, this approach has been taken before, e.g. in Kaithakkal et al. (2023).
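A sketch of this frame-to-frame acute-angle step is given below. It is a minimal illustration of the continuity criterion, not the ME0 implementation itself; azimuths are assumed to be stored in degrees.

```python
import numpy as np

def disambiguate_to_previous(azimuth_deg, prev_azimuth_deg):
    """Per pixel, pick the branch (azimuth or azimuth + 180 deg) that makes
    the smaller angle with the previous frame's disambiguated azimuth."""
    cand = np.stack([azimuth_deg % 360.0, (azimuth_deg + 180.0) % 360.0])
    diff = np.abs(cand - prev_azimuth_deg) % 360.0
    diff = np.minimum(diff, 360.0 - diff)     # angular distance in [0, 180]
    pick = np.argmin(diff, axis=0)
    return np.take_along_axis(cand, pick[None], axis=0)[0]
```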
In the absence of a validated physical disambiguation method, we asked two questions: how sensitive is Poynting flux to the orientation of transverse magnetic fields (in other words, how much does the "choice" of azimuth affect our computed quantities of Poynting flux), and what is the maximum Poynting flux that can be obtained from any given magnetogram that is yet to be disambiguated? These two questions lead us, respectively, to two other disambiguation methods: azimuth randomization and Poynting flux optimization.
Azimuth randomization can be thought of as an absolutely imperfect disambiguation, wherein we randomly add either \(0^{\circ}\) or \(180^{\circ}\) to the azimuth value of each pixel in a magnetogram. The random assignment for each pixel is performed independently of its neighboring pixels or earlier azimuth values in that pixel. Thus, this method yields a disambiguated magnetogram that almost certainly has spatial and temporal discontinuities in transverse field orientations.
The Poynting flux optimization disambiguation method consists of two steps: in the first magnetogram (\(t=0\)), we disambiguate azimuths in each pixel by selecting the one that results in the higher value of \(S_{z}\) as computed using the ideal-MHD method, i.e. using Equation 3. Then, for each consecutive magnetogram, we select for each pixel the azimuth value that is closer to the azimuth value of that pixel in the previous frame. In contrast with the randomization method, where every pixel is completely independent of both its surrounding pixels and of the same pixel in adjacent magnetograms in the time series, the Poynting flux optimization method results in some degree of spatial and temporal azimuth continuity, while also providing a physical ceiling (i.e., an upper bound) for the Poynting flux. We stress, however, that this disambiguation method is only physically meaningful insofar as it provides the ceiling for Poynting flux.
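Both procedures can be summarized in a few lines of numpy, as sketched below under the ideal-MHD expression for \(S_{z}\) (Equation 4): a \(180^{\circ}\) azimuth flip sends \(\mathbf{B}_{h}\to-\mathbf{B}_{h}\), which leaves the emergence term unchanged and only flips the sign of the shear term. Names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_azimuth(bx, by):
    """Randomization: flip B_h by 180 deg in an independent random subset
    of pixels, with no spatial or temporal coherence."""
    s = np.where(rng.integers(0, 2, size=bx.shape) == 1, -1.0, 1.0)
    return s * bx, s * by

def optimize_azimuth_frame0(vx, vy, vz, bx, by, bz):
    """Optimization (first frame only): choose the branch of B_h that
    maximizes ideal-MHD S_z. Since a flip leaves the emergence term
    (prop. to B_h^2) unchanged, it suffices to make the shear term
    -(v_h . B_h) B_z non-negative in every pixel."""
    shear = -(vx * bx + vy * by) * bz
    s = np.where(shear < 0.0, -1.0, 1.0)
    return s * bx, s * by
```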
### Electric Field Inversion Methods
To find the electric field needed to estimate the Poynting flux (Equation 1), we use two approaches. The first, "ideal-MHD" approach strictly enforces the ideal MHD condition (Equation 3): we use Doppler measurements to derive the vertical velocity component and two reconstruction methods, FLCT and DeepVel, to invert the transverse velocity component (see §3.2.1 below). In the second, "inductive" approach, we use the PDFI_SS method to derive the electric field directly by inverting Faraday's law, without necessarily enforcing the ideal MHD condition (see §3.2.2 below).
#### 3.2.1 Transverse Velocity Inversion Methods
As shown in Section 3, the full plasma velocity vector (or alternatively, the horizontal electric field) is required to compute Poynting flux. Unlike the LOS velocity, which can be recovered from Doppler data (e.g. Welsch et al., 2013), transverse velocities cannot be directly inferred from observables. The two velocity retrieval methods we use in
this work are Fourier Local Correlation Tracking (FLCT, Fisher & Welsch, 2008) and a convolutional neural network (CNN) DeepVel (Asensio Ramos et al., 2017).
FLCT (Welsch et al., 2007) is a plasma flow tracking method that takes two consecutive magnetograms or intensitygrams and, using a finite sliding window, infers the plane-of-sky displacement needed to produce the second map from the first one. It has been used as the flow inversion method for PDFI_SS (Kazachenko et al., 2015; Lumme et al., 2019; Afanasyev et al., 2021) and for tracking flows in various environments (e.g., Tremblay et al., 2018; Löptien et al., 2016), but it has some constraints. The main constraint of the FLCT approach is that it assumes that any change in continuum or magnetic field intensity is due to advective motion without obeying the induction equation (i.e., FLCT measures an optical flow). A second constraint is that FLCT has been applied to data with either relatively strong magnetic fields (Welsch et al., 2012; Lumme et al., 2019), where tracking is made possible by relatively large signal-to-noise ratios, or to low-resolution, large-FOV images, where the objective was to track meso- and super-granular motions (Fisher & Welsch, 2008). Neither of these contexts applies to our QS case: magnetic concentrations are, for the most part, transient and limited in spatial extent and strength, making it necessary to rely on continuum images, and the relevant scales of plasma motions are well below even meso-granular scales.
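For intuition, the core of a correlation-tracking scheme can be sketched as follows. This toy version uses non-overlapping boxes and integer shifts only; the actual FLCT code uses a sliding Gaussian window of width \(\sigma\) and sub-pixel peak fitting, so the sketch is illustrative rather than a substitute for the published implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def lct_displacement(im1, im2, win=16):
    """Toy local correlation tracking: for each non-overlapping win x win
    tile, return the integer shift of im2 relative to im1 that maximizes
    their cross-correlation."""
    ny, nx = im1.shape
    ux = np.zeros((ny // win, nx // win))
    uy = np.zeros_like(ux)
    for j in range(ny // win):
        for i in range(nx // win):
            a = im1[j*win:(j+1)*win, i*win:(i+1)*win]
            b = im2[j*win:(j+1)*win, i*win:(i+1)*win]
            cc = fftconvolve(b - b.mean(), (a - a.mean())[::-1, ::-1],
                             mode="same")
            dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
            uy[j, i], ux[j, i] = dy - win // 2, dx - win // 2
    return ux, uy   # pixels per frame interval
```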
To validate FLCT plasma flow inferences in a setting more closely resembling the IMaX observations, we apply FLCT to continuum images from STAGGER simulations. We find the correlation between FLCT and reference flows to be low, with a Pearson correlation coefficient of \(r<0.45\). The correlation is even weaker if the \(\sigma\) parameter, which defines the width of the sliding Gaussian window, is lower than 10 pixels or higher than 15. In our work, we pick
Figure 4: Close-up view of the strong magnetic field region of interest (ROI) at the bottom of FOV (see panels eβh of Figure 1) showing magnetic field azimuthal angles obtained via randomization (top left), optimization (top center), matching to potential field (bottom left), and minimizing B-field divergence (bottom center). The top right panel shows the spatial distribution of horizontal magnetic field in the same region, where green contours correspond to regions with \(|\mathbf{B}|>200\) G. The bottom right panel shows LOS magnetic field.
\(\sigma=10\), as it produces the strongest correlation. Following analyses in Schrijver et al. (2006) and Tremblay et al. (2021), we consider three other correlation metrics between reference STAGGER velocities and velocities obtained from inversions: the spatially averaged relative error
\[E_{\rm rel}[\mathbf{v}_{\rm inv},\mathbf{v}_{\rm ref}]\equiv\left\langle\sqrt{ \frac{(\mathbf{v}_{\rm ref}-\mathbf{v}_{\rm inv})\cdot(\mathbf{v}_{\rm ref}- \mathbf{v}_{\rm inv})}{\mathbf{v}_{\rm ref}\cdot\mathbf{v}_{\rm ref}}}\right\rangle,\]
the vector correlation coefficient
\[C[\mathbf{v}_{\rm inv},\mathbf{v}_{\rm ref}]\equiv\frac{\left\langle\mathbf{v} _{\rm inv}\cdot\mathbf{v}_{\rm ref}\right\rangle}{\sqrt{\left\langle\mathbf{ v}_{\rm ref}\cdot\mathbf{v}_{\rm ref}\right\rangle\cdot\left\langle\mathbf{v}_{ \rm inv}\cdot\mathbf{v}_{\rm inv}\right\rangle}},\]
and the cosine similarity index, which measures the global spatial distribution of velocity vector orientations
\[A[\mathbf{v}_{\rm inv},\mathbf{v}_{\rm ref}]\equiv\left\langle\frac{\mathbf{v }_{\rm inv}\cdot\mathbf{v}_{\rm ref}}{\left\|\mathbf{v}_{\rm inv}\right\| \left\|\mathbf{v}_{\rm ref}\right\|}\right\rangle,\]
where the \(\langle\cdot\rangle\) operation denotes spatial averaging. The \(C\) coefficient is defined so that it is \(0\) when the velocity vectors are perpendicular everywhere and \(1\) when parallel everywhere. Likewise, the \(A\) coefficient is \(-1\) when the vectors are anti-parallel and \(1\) when identical. Thus, the agreement between the two vector fields improves as both \(C\) and \(A\) approach unity. For mathematical expressions of these metrics, see equations 3-5 in Tremblay et al. (2021). For FLCT, the values of these metrics are (\(E_{\rm rel}=1.09\), \(C=0.35\), \(A=0.21\)). FLCT and reference STAGGER flows are also qualitatively different (see Figure 5, left and right panels). STAGGER velocities show a clear pattern of divergence in granules and convergent flows with vortices in intergranular lanes (IGLs), whereas FLCT velocity fields are much more laminar (and smaller in magnitude) on average.
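For reference, the three metrics can be computed as in the sketch below, where flow fields are stored as arrays of shape (2, ny, nx); this layout is an assumption of the sketch, not of the codes being compared.

```python
import numpy as np

def flow_metrics(v_inv, v_ref):
    """E_rel, C, and A for flow fields stored as arrays of shape (2, ny, nx).
    Pixels with vanishing reference flow are assumed to be masked out."""
    dot = lambda a, b: (a * b).sum(axis=0)          # pixelwise dot product
    diff = v_ref - v_inv
    e_rel = np.mean(np.sqrt(dot(diff, diff) / dot(v_ref, v_ref)))
    c = np.mean(dot(v_inv, v_ref)) / np.sqrt(
        np.mean(dot(v_ref, v_ref)) * np.mean(dot(v_inv, v_inv)))
    a = np.mean(dot(v_inv, v_ref) /
                np.sqrt(dot(v_inv, v_inv) * dot(v_ref, v_ref)))
    return e_rel, c, a
```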
We find that FLCT velocity inferences in QS can be improved by averaging instantaneous velocities over 30-minute time windows (Asensio Ramos et al., 2017; Tremblay et al., 2018). The correlation coefficient between FLCT and STAGGER velocities then improves to \(r=0.75\), but, given that photospheric timescales are on the order of five minutes, this improvement comes at the cost of losing time-dependent information. We therefore conclude that the FLCT method is an inadequate velocity inversion for our purposes, where instantaneous or near-instantaneous (\(<2.5\) minutes) velocities are to have high fidelity.
In addition to FLCT, we use DeepVel - a convolutional neural network that has been previously used to infer velocities on granular scales, including in the quiet Sun (Asensio Ramos et al., 2017; Tremblay et al., 2018; Tremblay and Attie, 2020). DeepVel is trained using simulation data, for which all flow components are known, to map a pair of input images (e.g., continuum intensity images at two timesteps) to the transverse flows at a given optical depth or geometrical height. This approach is known as supervised learning. In other words, the output velocities approximate
Figure 5: Comparison between the velocity fields computed by the STAGGER simulation (left; reference) and those predicted by DeepVel (center) and FLCT (right) velocity tracking methods. Both FLCT and DeepVel velocity fields were obtained using STAGGER simulated QS intensities, which serve as background for the figure. Note the different scale for FLCT vector arrows.
what the flows in the training simulation would be if we assume that the input data provided to the neural network was generated by the training simulation (i.e., there is a model dependency). For this work, we train DeepVel on a set of STAGGER data frames. To test the trained network, we run it on a STAGGER intensity map that is outside of the training set. The test map has the same properties as those described in Section 2.2. As in previous works, we find that DeepVel instantaneous velocities are highly (\(r=0.91\)) correlated with simulated velocities (Figure 5).
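The supervised setup can be illustrated schematically: pairs of consecutive simulated intensity frames are mapped to the corresponding simulated transverse flows, and the network is fit by minimizing a pixelwise loss. The toy network below is only a stand-in for illustration; it is not the actual DeepVel architecture.

```python
import torch
import torch.nn as nn

# Illustrative stand-in only: the real DeepVel (Asensio Ramos et al. 2017)
# is a much deeper residual CNN.
model = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),        # per-pixel (vx, vy)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(frame_pairs, flows_ref):
    """frame_pairs: (batch, 2, ny, nx) consecutive simulated intensity
    frames; flows_ref: (batch, 2, ny, nx) simulated transverse flows."""
    optimizer.zero_grad()
    loss = loss_fn(model(frame_pairs), flows_ref)
    loss.backward()
    optimizer.step()
    return loss.item()
```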
We find that the correlation metric values are significantly better for DeepVel (\(E_{\rm rel}=0.74\), \(C=0.91\), \(A=0.87\)) than for FLCT. These are stark improvements in accuracy, achieved without sacrificing temporal resolution through time averaging. We therefore apply DeepVel, rather than FLCT, to IMaX intensitygrams to retrieve transverse velocities in our analysis; hereafter, all retrieved transverse velocities are obtained with DeepVel. DeepVel is not without its limitations, however (as discussed further in §5 and in, e.g., Tremblay and Attie, 2020): even though we obtain very good agreement (\(r\approx 0.9\)) between simulated and DeepVel velocities and divergences, DeepVel is less reliable at reproducing vorticities (see Figure 6, panel d).
#### 3.2.2 PDFI Electric Field Inversion Method
To find Poynting flux without assuming ideal MHD conditions, we use the PDFI_SS method (Fisher et al., 2020). Briefly, the magnetic field in the PDFI_SS method is expressed as a sum of poloidal and toroidal components. This decomposition allows us to derive the _inductive_ component of the electric field from observed quantities by uncurling Faraday's law (Equation 2). The gradient of the scalar part of the electric field that appears from uncurling Faraday's law is called "non-inductive" and can be computed from additional constraints, including the ideal MHD constraint \(\mathbf{E}\cdot\mathbf{B}=0\) (Kazachenko et al., 2014).
PDFI_SS has been used to describe the evolution of Poynting flux and magnetic helicity in multiple works, but notably, these were all concerned with either observed or simulated active regions (e.g., Kazachenko et al., 2015; Lumme et al., 2019) or regions of flux emergence (e.g., Afanasyev et al., 2021). To our knowledge, PDFI_SS has not been applied to QS magnetic fields. Apart from the general challenge of studying QS magnetism, the reliance of PDFI_SS on the \(\frac{\partial\mathbf{B}}{\partial t}\) term in Faraday's law makes it especially susceptible to noise. To mitigate the influence of noise, we set the bmin parameter, which masks pixels with lower magnetic field strength (see Section 10.2 in Fisher et al., 2020), to 50 G - the same threshold we chose in §3.1. PDFI_SS also requires high-cadence observations, so as to not miss the transient magnetic concentrations that are ubiquitous in QS (Gosic et al., 2018).
## 4 Results
We compute Poynting fluxes using two approaches: from velocity fields under the ideal-MHD assumption, and from the PDFI_SS electric fields, for which time derivatives of the magnetic field serve as a source term. Within each approach, we use both randomly disambiguated azimuths and azimuths obtained via the optimization procedure (see Section 3.1.1). We show the temporal evolution of Poynting fluxes in both settings in Figure 8.
As discussed in §3.1.1, the azimuthal orientation of vector magnetic fields can affect Poynting flux magnitudes. Since one of our two principal methods of azimuthal disambiguation relies on randomizing azimuths on a pixel-by-pixel basis, we investigate the resulting uncertainty in Poynting fluxes in the ideal-MHD setting by repeating the randomization for each magnetogram 5000 times. We find average Poynting flux estimates in each frame to be highly robust to different
Figure 6: Scatterplots comparing the velocities inferred by the DeepVel neural network to the STAGGER simulation (reference): (a) \(v_{x}\), (b) \(v_{y}\), (c) \(\nabla_{h}\cdot\mathbf{v}_{h}\), and (d) \(\omega_{z}=[\nabla\times\mathbf{v}]_{z}\). Statistical metrics are provided in legends to each panel. MAE stands for mean absolute error.
realizations of azimuth randomization, with both signed (net) and unsigned (absolute values) fluxes tightly clustered (see Figure 7). We also find that in the ideal-MHD setting, the emergence term \(v_{z}B_{h}^{2}\) dominates both signed and unsigned fluxes over the shear, or wave, term \((\mathbf{v}_{h}\cdot\mathbf{B}_{h})B_{z}\) (see Equation 4), which accounts for less than 1% of the total vertical Poynting flux.
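Because an azimuth flip only changes the sign of the shear term, the 5000-realization experiment is inexpensive; a sketch of the Monte Carlo loop is given below (illustrative names, ideal-MHD \(S_{z}\) in Gaussian units).

```python
import numpy as np

def sz_under_randomization(vx, vy, vz, bx, by, bz, n_real=5000, seed=0):
    """Spatially averaged ideal-MHD S_z for many independent azimuth
    randomizations. A flip only changes the sign of the shear term, so
    both terms are computed once."""
    rng = np.random.default_rng(seed)
    emergence = vz * (bx**2 + by**2) / (4.0 * np.pi)
    shear = -(vx * bx + vy * by) * bz / (4.0 * np.pi)
    means = np.empty(n_real)
    for k in range(n_real):
        s = rng.choice([-1.0, 1.0], size=bx.shape)
        means[k] = np.mean(emergence + s * shear)
    return means   # histogram these, as in Figure 7
```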
The left panel of Figure 8 shows the Poynting flux evolution for the ideal-MHD case. Different plot colors correspond to the spatially averaged Poynting flux (\(\overline{S_{z}}\)) in all pixels as well as only in pixels where the magnetic field strength (\(|\mathbf{B}|\)) exceeds the 50 G and 100 G thresholds. The first thing to note here is that the choice of azimuth disambiguation method has a negligible effect on \(S_{z}\) values. The largest difference is in the first frame, where, in the optimization procedure, we explicitly optimize for the largest \(S_{z}\) value. In just over one minute this difference disappears, and \(\overline{S_{z}}\) values stabilize at \(6.0\pm 0.56\times 10^{5}\) erg cm\({}^{-2}\) s\({}^{-1}\) and \(1.1\pm 0.087\times 10^{7}\) erg cm\({}^{-2}\) s\({}^{-1}\) for all pixels and for pixels with high \(B\)-fields, respectively. This is consistent with our analysis of the randomization procedure shown in Figure 7, where we find very little variation in FOV-integrated \(S_{z}\) across different azimuth realizations.
We also observe that selecting pixels with relatively strong \(|\mathbf{B}|\) increases the average \(S_{z}\) by an order of magnitude, but there is little variation between the 50 G and 100 G thresholds (or even thresholds of 150 G and above, not shown here). Increasing the threshold to 100 G, however, reveals quasi-periodic oscillations that could conceivably be linked to 5-minute photospheric oscillations (Leighton et al., 1962; Ulrich, 1970).
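A simple way to check whether such oscillations are consistent with p modes is to inspect the power spectrum of the \(\overline{S_{z}}\) time series for a peak near 3.3 mHz, as in the sketch below; the cadence value there is a placeholder, not the actual IMaX cadence.

```python
import numpy as np

def sz_power_spectrum(sz_mean, cadence_s=33.0):
    """One-sided power spectrum of the spatially averaged S_z time series;
    a peak near ~3.3 mHz (5 minutes) would support a p-mode origin."""
    x = sz_mean - sz_mean.mean()
    power = np.abs(np.fft.rfft(x))**2
    freq = np.fft.rfftfreq(x.size, d=cadence_s)   # Hz
    return freq, power
```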
Figure 8 (right panel) shows Poynting fluxes derived from the PDFI_SS method. Recall that, since we set the bmin parameter to 50 G, all pixels with magnetic fields below that threshold are set to zero and are not considered in the following analysis. We find that these estimates are very different from the ideal-MHD estimates shown in the left panel. Firstly, the optimized (randomized) \(\overline{S_{z}}\) is \(-2.1\pm 13\times 10^{5}\) (\(-1.3\pm 9.4\times 10^{5}\)) erg cm\({}^{-2}\) s\({}^{-1}\) - significantly lower, even in terms of absolute values, than \(\overline{S_{z}}\) in the ideal-MHD case with pixels with weak magnetic fields counted. Secondly, in both cases (randomized and optimized azimuths), \(S_{z}\) oscillates around zero and is sometimes well below it, meaning that magnetic energy is transported downwards instead of upwards. Thirdly and less surprisingly, \(S_{z}\) values obtained from the randomization and \(S_{z}\) optimization disambiguation methods are much more different from one another than in the ideal-MHD case. This is due to the fact that PDFI_SS uses spatial and temporal derivatives of the \(\mathbf{B}\)-fields to compute \(S_{z}\), and both are affected in the randomization procedure, which produces highly discontinuous magnetic field configurations, more so than in the case of optimized azimuths. However, the two other azimuthal ambiguity resolutions - from the potential field and from \(|\nabla\cdot\mathbf{B}|=0\) - also produce Poynting fluxes that oscillate frequently around zero and almost never exceed \(2\times 10^{6}\) erg cm\({}^{-2}\) s\({}^{-1}\). This most likely indicates that significant spatial and temporal discontinuities are present in IMaX QS magnetograms regardless of the azimuthal disambiguation method, as can be seen in Figure 4.
To evaluate how Poynting flux and its components vary with height, we use the outputs of MURaM simulations, since the IMaX data set only includes data from one optical surface. In Figure 9, we examine Poynting fluxes derived directly from MURaM simulations. We find that the MURaM averaged vertical Poynting flux reverses sign very close to the \(\tau=1\) surface, and that it is exceeded by \(|S_{h}|\) from the convection zone until well above \(\tau=0.1\). We find that at \(\overline{\tau}=1\), \(\overline{S_{z}}=4.38\times 10^{6}\) erg cm\({}^{-2}\) s\({}^{-1}\), and it rises to \(2.28\times 10^{7}\) erg cm\({}^{-2}\) s\({}^{-1}\) at \(\overline{\tau}=0.1\).
## 5 Discussion
In their seminal paper, Withbroe & Noyes (1977) derived a threshold of upward energy flux from the photosphere that would be necessary to explain chromospheric and coronal heating in the quiet Sun - \(S_{z,thr}=4.3\times 10^{6}\) erg cm\({}^{-2}\) s\({}^{-1}\). In MURaM simulations, the vertical Poynting flux at \(\overline{\tau}=1\) is just above the \(S_{z,thr}\) value from Withbroe & Noyes (1977). This is consistent with existing MURaM simulations, where a hot corona is maintained by photospheric magnetoconvection (Rempel, 2017; Breu et al., 2022, 2023). However, we find that in IMaX observations, there is not enough Poynting flux whether we use the ideal-MHD method or PDFI_SS, unless we consider only strong B-field pixels in the ideal-MHD case (Figure 8). Furthermore, in the ideal-MHD case, the vertical Poynting flux averaged over all pixels is lower than the minimum value required for heating by an order of magnitude. On the other hand, PDFI_SS values are closer to \(4.3\times 10^{6}\) erg cm\({}^{-2}\) s\({}^{-1}\) in magnitude, but are frequently negative, indicating downward energy flux.
What are the possible causes of discrepancies between different Poynting flux estimates, and why is Poynting flux negative in some of them? We propose several physical and methodological explanations.
A non-trivial methodological issue with our analysis is the weak signal strength in the quiet Sun. This is particularly severe when it comes to Q and U Stokes vector signal strength, which adversely affects our inversions of \(\mathbf{B}_{h}\). Even in the ROI, where magnetic field strength exceeds 200 G - a relatively high value for our data set - there are significant
discontinuities in the spatial distribution of horizontal magnetic field (right panel of Figure 4). These gaps affect both ideal-MHD and PDFI_SS Poynting flux inversion methods. In ideal-MHD, as can be seen from Equation 4, Poynting flux is highly sensitive to \(\mathbf{B}_{h}\) as it appears in both terms of the expression. In PDFI_SS, \(\mathbf{B}_{h}\) uncertainties affect both the \(\mathbf{v}\times\mathbf{B}\) term as above and the spatial and temporal derivatives of the magnetic field.
Figure 8: _Left panel:_ Temporal evolution of the average Poynting flux, computed using the ideal-MHD assumption and DeepVel velocities. The dashed green lines represent Poynting flux computed in the same sets of pixels as the solid lines, but using random azimuths instead of azimuths obtained via the Poynting flux optimization procedure. _Right panel:_ Poynting flux computed via PDFI_SS and DeepVel velocities and averaged over pixels above the threshold B-field value (see §3.2.2), using random azimuths (orange line), optimized azimuths (red line), and azimuths obtained from the potential field acute angle method (solid black line) and from imposing \(|\nabla\cdot\mathbf{B}|=0\) (dashed black line). Note the different y-axis limits.
Figure 7: Histograms of the spatially averaged Poynting flux within the FOV at \(t=1430\) s. The computations were performed using the ideal-MHD method with 5000 realizations of randomized azimuth disambiguation. The vertical red line corresponds to the value of the emergence term of \(S_{z}\) (see §4).
Uncertainties in transverse magnetic field inversions \(B_{h}\) propagate into issues with azimuth disambiguation. However, we see that with the IMaX signal strength, they do not meaningfully affect Poynting flux estimates, especially in the ideal-MHD scenario (Figure 7). Instead, the emergence term \(v_{z}B_{h}^{2}\) is responsible for virtually all signed Poynting flux and 99% of unsigned Poynting flux. This is qualitatively consistent with some of the existing literature (e.g., Liu &
\begin{table}
\begin{tabular}{l l c} Method & Target & \(\overline{S_{z}}\) [erg cm\({}^{-2}\) s\({}^{-1}\)] \\ \hline \hline present work, ideal-MHD & observed QS, all pixels & \(6.0\pm 0.56\times 10^{5}\) \\ present work, ideal-MHD & observed QS, high B-field pixels & \(1.1\pm 0.087\times 10^{7}\) \\ present work, PDFI_SS & observed QS & \(-0.21\pm 1.3\times 10^{6}\) \\ present work, MURaM & simulated QS, geometrical surface \(z=0.000\) Mm (\(\overline{\tau}=1.1\)) & \(4.36\times 10^{6}\) \\ present work, MURaM & simulated QS, geometrical surface \(z=0.128\) Mm (\(\overline{\tau}=0.11\)) & \(2.28\times 10^{7}\) \\ present work, MURaM & simulated QS, optical surface \(\tau=1.0\) & \(-3.11\times 10^{7}\) \\ present work, MURaM & simulated QS, optical surface \(\tau=0.1\) & \(1.94\times 10^{7}\) \\ Kazachenko et al. (2015), PDFI_SS & AR 11158 & \(10^{8}-10^{9}\) \\ Welsch \& Fisher (2015), ideal-MHD & AR 10930, plage & \(\approx 5\times 10^{7}\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of photospheric Poynting flux estimates in the present work and in existing literature.
Figure 9: Average Poynting flux in MURaM simulations as a function of optical depth. Red curve shows transverse Poynting flux. The x-axis corresponds to a vertical range of \(\sim 0.35\) Mm. The black curves (grey diamonds) correspond to vertical Poynting flux computed on geometrical (optical) surfaces and spatially averaged over subsets of pixels with varying magnetic field strengths. The dotted curve represents the emergence term (\(v_{z}B_{h}^{2}\)) of the vertical Poynting flux averaged over all pixels on geometrical surfaces. The blue curve represents the response function of the Fe I 5250.2 Å line in an IGL, in arbitrary units. Its peak is at \(\overline{\tau}=3\times 10^{-3}\), which is slightly below 400 km. The horizontal and vertical orange lines in the inset represent \(S_{z,thr}\) from Withbroe & Noyes (1977) (see §5) and the \(\overline{\tau}=1\) surface, respectively.
Schuck, 2012), but this fraction is much higher than in previous works. This is likely due to the weak magnetic field signal in our observational sample: for the shear term to be present, both linear and circular polarization signatures must be strong in the same pixel.
Transverse velocity inversions are another potential source of errors in Poynting flux inversions, though in the present work it is likely to be a second-order error. As discussed in §3.2.1, vorticity values inferred with DeepVel may be unreliable (Figure 6). Further, we do not have access to "ground truth" when it comes to transverse velocity on the real Sun, and a neural network trained using supervised learning generates predictions that are only as good as the simulations they were trained on (from the relationship between continuum intensity and transverse flows, to the topology and magnitude of the flows). In MHD simulations, including STAGGER and MURaM, vortices are mostly concentrated in IGLs and have been shown to be spatially correlated with vertical Poynting flux in MURaM simulations (Yadav et al., 2020, 2021). This, then, presents a clear avenue for improvement, particularly when DKIST observations with higher spatial resolution (down to 0.03", Rimmele et al., 2020) become available, since features in IGLs are especially vulnerable to resolution effects. Another way to improve the neural network approach is to train it to match coherence spectra, i.e. to match velocities at different frequencies in Fourier space, as was done in Ishikawa et al. (2022).
Poynting flux inversions themselves can still be improved. We already explained how, unlike PDFI_SS, the ideal-MHD method does not account for Poynting flux derived from \(\dfrac{\partial\mathbf{B}}{\partial t}\), but PDFI_SS also has limitations. It has not been tested in the QS regime, particularly when only one polarity (negative in our case) of \(B_{z}\) is present in the FOV.
Figure 11: Same as Figure 10, but for the optical depth \(\tau=0.1\) and geometrical surface \(z=0.128\) Mm with spatially averaged optical depth \(\overline{\tau}=0.11\).
Figure 10: Vertical Poynting flux computed on optical surface \(\tau=1\) (left panel) and geometrical surface \(z=0\) Mm, where the spatially averaged optical depth is \(\overline{\tau}=1.1\) (center panel). Notice the concentration of both positive and negative \(S_{z}\) in IGLs. On the right panel is a 2-D histogram comparing pixel-by-pixel \(S_{z}\) values computed on optical and geometrical surfaces. Note the different scales for the axes.
Another explanation for negative vertical Poynting flux could be due to physical reasons. There are several pieces of evidence in favor of that possibility.
First, we are studying Poynting flux at the boundary layer between the convection-dominated layers below the photosphere and the radiation-dominated atmosphere. In such an environment, it is reasonable to expect all energy fluxes (e.g. mass flux, convective flux) averaged over a representative FOV to become dominated by their horizontal components, which are mostly self-canceling, while vertical components approach near-zero values (Steiner et al., 2008). This is indeed the case in IMaX observations. Assuming ideal MHD conditions, we find that the horizontal Poynting flux (\(|S_{h}|\)) exceeds \(|S_{z}|\) by a factor of \(\approx 3\). Silva et al. (2022) analyzed the same IMaX data set we use in our work and reported an even higher ratio of horizontal-to-vertical Poynting fluxes, likely due to higher velocity values in their inversions.
In MURaM simulations, we see that Poynting fluxes are principally concentrated in IGLs. An important corollary of this observation is that the emergence term of the Poynting flux, which is negative only where \(v_{z}<0\), is primarily negative at photospheric and chromospheric heights, which is indeed what we find (see Figure 9). Therefore, on average, the wave term of the Poynting flux is larger in magnitude than the emergence term, in stark contrast with our findings in IMaX observations. As discussed above, this is likely due to a strong observational bias, wherein both the IGL structure and magnetic concentrations that are not trivially oriented (neither completely parallel nor perpendicular to the LOS) are subject to instrumental limitations. Observed ideal-MHD Poynting fluxes in IMaX, which are dominated by the emergence term yet positive, likely arise from magnetic concentrations located in granule interiors, such as those inside the ROI (see Figure 4). Analogs of such a structure can be seen in MURaM simulations as well, e.g. in Figures 2 and 10 at \([x,y]=[3.5\text{ Mm},3.5\text{ Mm}]\) (just left of and below the center of the FOV).
There are of course caveats when it comes to optical depth. First, it is unclear what optical depth corresponds to the formation of the Fe I 5250.2 Å line, which we used for spectropolarimetric inversions. It is evident that the line forms somewhere in the photosphere, but we are not aware of existing studies that looked at its response function. We find that this could be important, since in MURaM simulations average Poynting flux values are sensitive to changes in optical depth: from \(\overline{\tau}=1.1\) to \(\overline{\tau}=0.63\), \(\overline{S_{z}}\) increases by a factor of 2.7 (see Figure 9). To see where the Fe I 5250.2 Å line could form, we calculate its response function using a MURaM atmospheric profile (Rempel, 2014). We use one profile from a granule and one from an IGL, and in both cases (the latter is shown in Figure 9) we find that the line forms in the photosphere (300-400 km), but that it has a broad formation height range.
The second caveat related to optical depth is that a constant \(\tau\) surface is very different from a constant height surface, and Poynting fluxes computed on optical surfaces deviate significantly from those computed on geometrical surfaces with comparable optical depth averaged over the FOV (see Figures 10 and 11). In particular, while the average Poynting flux computed on the geometrical surface with \(\overline{\tau}=1\) may be sufficient to match energy losses in the chromosphere and corona, the average Poynting flux computed on the optical surface \(\tau=1\) is far from it, as it is \(-3.11\times 10^{7}\) erg cm\({}^{-2}\) s\({}^{-1}\) (Figure 9). Despite these differences, we treat all of our vector quantities' components as being either parallel or perpendicular to the plane-of-sky (and line-of-sight). This can lead to unphysical results, since we are essentially dealing with vector projections and not true vectors. It is especially so in regions where optical surfaces are least aligned with geometrical ones, such as on the boundaries between granules and intergranular lanes. Incidentally, this is where 1) MURaM vertical Poynting flux is primarily concentrated, 2) transverse flows have the most shear and vorticity, and 3) resolution constraints have the highest detrimental effect (Leka et al., 2022).
## 6 Conclusions
In this work we used two approaches - the ideal-MHD and the PDFI_SS methods - to compute average photospheric Poynting fluxes from IMaX polarimetric observations. We tested several methods for deriving intermediate quantities required for computing Poynting flux. Principally, such quantities include the magnetic field azimuth, transverse velocities, and electric fields. We also looked at the outputs of the 3-D radiative MHD code MURaM between \(\tau=10^{9}\) and \(\tau=5\times 10^{-8}\) to glean insights from simulated photospheric data.
Our quantitative estimates of Poynting flux do not reveal a consistent picture with respect to whether photospheric Poynting flux is sufficient to explain chromospheric and coronal heating (Table 1). However, we can outline several important findings:
1. The ideal-MHD approach yields ambiguous estimates of \(S_{z}\), but this could be explained by the quality of available data. When considering only pixels with relatively high magnetic field values (\(|B|>50\) G), the resulting average Poynting flux suffices to explain chromospheric heating, but \(S_{z}\) averaged over all pixels does not (Figure 8).
It is possible that, due to instrumental limitations, we miss many of the small and/or transient magnetic concentrations;
2. The 180\({}^{\circ}\) azimuthal ambiguity barely affects the estimates of the ideal-MHD approach (Figure 7). This is because Poynting fluxes derived via the ideal-MHD method are dominated by the emergence term \(v_{z}B_{h}^{2}\). This could also explain the lack of Poynting flux when averaged over all pixels. The importance of the emergence term has been reported before, but it is likely exaggerated in our results, since pixels with both vertical and transverse magnetic field (which are both necessary to produce the wave term \((\mathbf{v}_{h}\cdot\mathbf{B}_{h})B_{z}\) of Poynting flux) are difficult to detect in QS magnetograms. Indeed, in MURaM simulations, the wave term is on average positive and larger in magnitude than the emergence term, which is concentrated in IGLs and, consequently, is negative on average. When more advanced observations are available, such that the bias against the wave term is diminished and resolving azimuthal ambiguity becomes relevant, we point to our Poynting flux optimization method as a way to disambiguate azimuths while meaningfully constraining \(S_{z}\);
3. Poynting flux obtained with the PDFI_SS method is highly time-dependent, insufficient for chromospheric and coronal heating, and is negative in many of the frames in our time series (Figure 8). It is also sensitive to azimuth disambiguation. The variability and sensitivity to the magnetic field azimuthal orientation can be caused by the reliance of PDFI_SS on spatial and temporal derivatives, combined with the noisy data sample. Its closeness to zero can be attributed to the photosphere being a boundary layer between the convection-dominated subsurface and the radiation-dominated lower atmosphere;
4. MURaM simulations also display vertical Poynting flux that flips sign around \(\tau=1\) and is dominated by (unsigned) horizontal Poynting flux, supporting the boundary layer explanation (Figure 9). \(S_{z}\) in MURaM simulations is frequently negative around the \(\tau=1\) surface, particularly in IGLs (see Figures 2 and 10). At the same time, the upward Poynting flux is more than sufficient to explain chromospheric and coronal heating. While it may look like the Poynting flux is close to the heating threshold around \(\tau=1\) (Figure 9), this region is in the deep photosphere, where the sign of the Poynting flux flips, and it lies below the formation height of most observable spectral lines. It should be noted that MURaM simulations that extend into the corona (Rempel, 2017) produce a self-maintained QS corona (about 1.5 million K) with sufficient Poynting flux. However, those simulations have lower resolution, and the Poynting flux comes more from braiding of the QS network field that is mostly absent in our simulation set. MURaM simulations of coronal loops also show that photospheric energy output is sufficient to maintain a hot corona (Breu et al., 2022, 2023).
The main question - whether the observed QS photosphere produces enough magnetic energy in the form of Poynting flux to heat the chromosphere and corona - remains open. There are, however, promising signs that this uncertainty will be cleared up in the future. DKIST observations, particularly with the VBI, DL-NIRSP, and VTF instruments, can be used to observe photospheric magnetic fields with unprecedented polarization sensitivity, resolution, and cadence (Rimmele et al., 2020). Repeating this analysis with DKIST data is one of the most obvious avenues for future work.
We can also improve our methodology moving forward, particularly as it pertains to transverse magnetic field inversions, including azimuth disambiguation, and transverse velocity inversions. For the former, a physics-based approach such as ME0 is preferable to the more stochastic or optimizing approaches used in this work. We can also use acute angle disambiguation, provided we have QS observations sufficiently far from the disk center. We may also achieve higher fidelity in transverse magnetic field inversions by using an inversion scheme that solves for magnetic filling factor (Leka et al., 2022). For transverse velocity inversions, modifying DeepVel so that it is trained to match vorticity as well as velocity can be useful, since Poynting flux is associated with shear flows and vortices in IGLs.
Finally, numerical MHD simulations present a convenient avenue of exploring relationships between time- and height-dependent upward flux of magnetic energy and different structures in the quiet Sun. This area has remained largely unexplored, due to a lack of observational counterparts with which to verify potential findings, but it is more relevant now, in the era of DKIST. For an investigation of Poynting flux that is more directly comparable to observations, observables such as Stokes vectors must be computed using forward models and then inverted. It should be noted that it is unclear whether such an approach will result in physical values, since inversions produce quantities on optical surfaces where vector cross products are not meaningful. However, an approach involving forward modeling can be used to assess the model fidelity and, by extension, whether it can be used to make useful Poynting flux predictions. Detailed and focused studies of numerical simulations are therefore necessary.
## Acknowledgements
This work is supported by NASA FINESST award 20-HELIO20-0004. This material is based upon work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement No. 1852977. The authors thank Anna Malanushenko for general comments on the paper and K.D. Leka for her valuable advice on azimuth disambiguation. The authors also thank the anonymous referee for useful suggestions, particularly in regards to azimuth disambiguation.
_Facility:_ SUNRISE (IMaX). _Software:_ pyMilne (de la Cruz Rodriguez, 2019), ME0 (Leka et al., 2009), FLCT (Fisher & Welsch, 2008), DeepVel (Asensio Ramos et al., 2017), PDFI_SS (Fisher et al., 2020), MURaM (Rempel, 2014).
|
2308.04869 | RBG-Maxwell Framework: Simulation of Collisional Plasma Systems via
Coupled Boltzmann-Maxwell equations on GPU | This paper presents the RBG-Maxwell framework, a relativistic collisional
plasma simulator on GPUs. We provide detailed discussions on the fundamental
equations, numerical algorithms, implementation specifics, and key testing
outcomes. The RBG-Maxwell framework is a robust numerical code designed for
simulating the evolution of plasma systems through a kinetic approach on
large-scale GPUs. It offers easy adaptability to a wide range of physical
systems. Given the appropriate initial distributions, particle masses, charges,
differential cross-sections, and external forces (which are not confined to
electromagnetic forces), the RBG-Maxwell framework can direct the evolution of
a particle system from a non-equilibrium state to a thermal state. | Ming-Yan Sun, Peng Xu, Jun-Jie Zhang, Qun Wang, Tai-Jiao Du, Jian-Guo Wang | 2023-08-09T11:02:39Z | http://arxiv.org/abs/2308.04869v1 | RBG-Maxwell Framework: Simulation of Collisional Plasma Systems via Coupled Boltzmann-Maxwell equations on GPU
###### Abstract
This paper presents the RBG-Maxwell framework, a relativistic collisional plasma simulator on GPUs. We provide detailed discussions on the fundamental equations, numerical algorithms, implementation specifics, and key testing outcomes. The RBG-Maxwell framework is a robust numerical code designed for simulating the evolution of plasma systems through a kinetic approach on large-scale GPUs. It offers easy adaptability to a wide range of physical systems. Given the appropriate initial distributions, particle masses, charges, differential cross-sections, and external forces (which are not confined to electromagnetic forces), the RBG-Maxwell framework can direct the evolution of a particle system from a non-equilibrium state to a thermal state.
Footnote †: Corresponding authors: Jian-Guo Wang E-mail address: [email protected]; Jun-Jie Zhang E-mail address: [email protected]; To use our code, please refer to [https://Juanjie.github.io](https://Juanjie.github.io) or [https://sunminmyan.github.io](https://sunminmyan.github.io)
## I Introduction
As the fourth state of matter, along with solid, liquid, and gas, plasma comprises 99% of the visible universe, ranging from quark-gluon matter at microscopic scales to plasma at macroscopic scales [1]. The self-consistent interaction of charged particles with electromagnetic (EM) fields is essential to describe many plasma systems such as the early universe [2], plasma [3; 4], Tokamak [5; 6], high-altitude nuclear explosion [7; 8; 9], vacuum electronic devices [10; 11; 12], system generated EM pulses [13; 14; 15], and solar plasma, etc.
The self-consistent plasma model involves the classical EM fields governed by Maxwell's equations and the particle distributions in phase space governed by conservation (kinetic or Boltzmann) equations. These equations are coupled to each other, namely, the particles are sources for the fields while the fields exert forces on the particles. In some cases, the particles can emit radiation (including \(\alpha\), \(\beta\) or \(\gamma\) rays depending on energy scales) in quantum transition processes. These particles and radiations as well as the EM fields all interact with one another both classically and in quantum processes, forming a complex system of particles and fields at a wide range of energy scales. We take a plasma of ionized hydrogen and electrons as an example. The mass of an H\({}^{+}\) ion is about 1836 times that of an electron, but they have the same electric charge. Thus, the two species are at extremely different space-time (or energy) scales under the influence of the same EM fields: electrons can be easily accelerated even to the speed of light while it is much harder for ions. This multi-scale feature is one of the major challenges in a theoretical
2304.04202 | Continuous eigenfunctions of the transfer operator for Dyson models | In this article we address a well-known problem at the intersection of
ergodic theory and statistical mechanics. We prove that there exists a
continuous eigenfunction for the transfer operator corresponding to pair
potentials that satisfy a square summability condition, when the inverse
temperature is subcritical. As a corollary we obtain a continuous eigenfunction
for the classical Dyson model, with interactions $J(k)=\beta \, k^{-\alpha}$,
$k\ge1$, in the whole subcritical regime $\beta<\beta_c$ for which the
parameter $\alpha$ is greater than $3/2$. | Anders Johansson, Anders Öberg, Mark Pollicott | 2023-04-09T10:04:54Z | http://arxiv.org/abs/2304.04202v4 | # Continuous eigenfunctions of the transfer operator for the Dyson model
###### Abstract.
In this article we prove that there exists a continuous eigenfunction for the transfer operator corresponding to potentials for the classical Dyson model in the subcritical regime for which the parameter \(\alpha\) is greater than \(3/2\), and we conjecture that this value is sharp.
This is a significant improvement on previous results, where the existence of a continuous eigenfunction of the transfer operator was only established for general potentials satisfying summable variations, which would correspond to the parameter range \(\alpha>2\). Moreover, this complements a result by Bissacot, Endo, van Enter and Le Ny [8], who showed that there is no continuous eigenfunction at low temperatures.
Our approach to obtaining these new results is based on a novel use of random cluster models.
Key words and phrases: Dyson model, transfer operator, eigenfunction, long-range Ising model. 2020 Mathematics Subject Classification: Primary 37D35, 37A60, 82B20, 82B26, 82C27.
## 1. Introduction
It is well-known [30] that there exists a continuous and strictly positive eigenfunction of a transfer operator defined on a symbolic shift space with a finite number of symbols such that the potential has summable variations. Here we prove the existence of a continuous eigenfunction for the important special class of Dyson potentials in the subcritical regime up to at least the critical line for Bernoulli percolation, i.e., in particular when the potential does not satisfy the condition of summable variations.
For the Dyson model a continuous eigenfunction means that there is a continuous Radon-Nikodym derivative between the two-sided equilibrium measure (a translation invariant Gibbs measure) and the one-sided Gibbs measure.
A complementary paper to ours is the one by Bissacot, Endo, van Enter, and Le Ny [8], where they show that there is no continuous eigenfunction in the context of the Dyson model for low enough temperatures, although they phrase their result in terms of the lack of the \(g\)-measure property. We will describe the connection further below.
### Preliminaries and results.
Let \(T\) be the left shift on the space \(X=\mathcal{A}^{\mathbb{N}}\), where \(\mathcal{A}=\{-1,+1\}\). We denote by \(C(X)\) the space of continuous functions and by \(\mathcal{M}(X)\) the space of probability measures on \(X\). Let \(\phi:X\to\mathbb{R}\) be a continuous function which we refer to as the _one-point potential_. The transfer operator \(\mathcal{L}=\mathcal{L}_{\phi}\) is a positive operator \(\mathcal{L}:C(X)\to C(X)\) defined by
\[\mathcal{L}f(x)=\sum_{y\in T^{-1}x}e^{\phi(y)}\,f(y). \tag{1}\]
From the Ruelle-Perron-Frobenius theorem, we can deduce the existence of an _eigenmeasure_\(\nu\in\mathcal{M}(X)\) to the dual operator \(\mathcal{L}^{*}:\mathcal{M}(X)\to\mathcal{M}(X)\) that satisfies \(\mathcal{L}^{*}\nu=\lambda\nu\), or equivalently
\[\int\mathcal{L}f\,d\nu=\lambda\int f\,d\nu,\quad f\in C(X),\]
for the maximum eigenvalue \(\lambda>0\).
In this paper, we want to establish the existence of a corresponding continuous _eigenfunction_\(h(x)\), \(0<h<\infty\), for the _long-range Dyson model_ where
\[\phi(x_{0},x_{1},\ldots)=x_{0}\cdot\beta\sum_{j=1}^{\infty}\frac{x_{j}}{j^{ \alpha}}, \tag{2}\]
for parameters \(\alpha>1\) and \(\beta>0\).
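Although our results are purely analytic, the objects above are easy to approximate numerically, which may help fix intuition. The sketch below truncates the potential (2) after \(n\) coordinates and power-iterates the resulting finite transfer operator on functions of the first \(n\) symbols; the parameter values are arbitrary and the computation plays no role in the proofs.

```python
import numpy as np
from itertools import product

def dyson_transfer_eigen(alpha=1.8, beta=0.5, n=12, iters=500):
    """Truncate the Dyson potential after n coordinates and power-iterate
    the resulting finite transfer operator on functions of the first n
    symbols; returns approximations of lambda and h on n-cylinders."""
    words = list(product([-1, 1], repeat=n))
    idx = {w: i for i, w in enumerate(words)}
    # Predecessors of x under T are y = (a, x_0, ..., x_{n-2}, ...).
    pred = np.array([[idx[(a,) + w[:-1]] for a in (-1, 1)] for w in words])
    W = np.array(words)
    weights = beta / np.arange(1, n) ** alpha          # beta / j^alpha
    phi = W[:, 0] * (W[:, 1:] @ weights)               # truncated potential
    f, lam = np.ones(len(words)), 1.0
    for _ in range(iters):
        f = (np.exp(phi[pred]) * f[pred]).sum(axis=1)  # f <- L f
        lam = f.max()
        f /= lam
    return lam, f
```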
An _equilibrium measure_\(\mu\in\mathcal{M}(X)\) corresponding to \(\phi\) is a minimiser of the free energy \(P(\mu;\phi)=\mu(\phi)-H(\mu)\) among the set \(\mu\in\mathcal{M}_{T}(X)\) of all translation invariant probability measures, that is measures such that \(\mu=\mu\circ T^{-1}\). If there is an eigenfunction \(h\) with \(\int h\,d\nu=1\) then the equilibrium measure \(\mu\) can be written as \(\mu=h\nu\).
It is well-known ([18, 1]) that the long-range Dyson model with potential (2) has a critical value of the parameter \(\beta\), \(\beta_{c}=\beta_{c}(\alpha)\), such that we have a unique equilibrium state \(\mu\) and a unique eigenmeasure \(\nu\) for \(0<\beta<\beta_{c}\) and two ergodic states for \(\beta_{c}<\beta\). This \(\beta_{c}(\alpha)\) is also the critical \(\beta\) for _percolation_ in the corresponding _random cluster_ model with \(q=2\). There is also the critical parameter \(\beta_{c}^{1}(\alpha)\) for percolation in the Bernoulli random graph model.
We can now present our main result.
**Theorem 1**.: _For \(3/2<\alpha\leq 2\) there exists a continuous eigenfunction of \(\mathcal{L}\) whenever \(0<\beta\leq\beta_{*}\). Here, \(\beta_{*}=\beta_{*}(\alpha)\) is a critical value satisfying \(0<\beta_{c}^{1}\leq\beta_{*}\leq\beta_{c}\)._
We can define \(\beta_{*}\) as the supremum of \(\beta\leq\beta_{c}\) for which the corresponding random cluster model has a cluster size distribution with an exponentially decreasing tail, see (28). Note that \(\beta_{*}\) could well be equal to \(\beta_{c}\).
Let \(\operatorname{var}_{n}f=\sup\{|f(x)-f(y)|:x_{i}=y_{i},\,0\leq i\leq n-1\}\). If one assumes _summable variations_\(\sum_{n=1}^{\infty}\operatorname{var}_{n}(\phi)<\infty\) then the existence of a continuous eigenfunction \(h(x)\) for \(\mathcal{L}\) follows from a classical "cone-argument" used in, for example, Walters [30]. For the Dyson model, summability of variations means that \(\alpha>2\) and that the eigenfunction \(h(x)\) is Hölder continuous.
In Theorem 1, we have a continuous eigenfunction in a context when \(\alpha>3/2\) and \(0<\beta<\beta_{*}\), and thus with the variations \(\operatorname{var}_{n}\phi=O(1/n^{\alpha-1})\), as \(n\to\infty\). Note that the condition \(\alpha>3/2\) means that \(\sum_{n=1}^{\infty}(\operatorname{var}_{n}\phi)^{2}<\infty\).
If we have a strictly positive continuous eigenfunction \(h\) of the transfer operator, then we can recover an equilibrium measure \(\mu\) ([29, 30]) as the _Doeblin measure_ ([7]; \(g\)-measure in Keane's terminology [27]) for the Doeblin function (\(g\)-function) \(g(x)\) defined by
\[g(x)=\frac{e^{\phi(x)}}{\lambda}\cdot\frac{h(x)}{h(Tx)}, \tag{3}\]
since \(\sum_{y\in T^{-1}x}g(y)=1\) for all \(x\). The corresponding transfer operator \(\mathcal{L}_{g}\) is a Markov transition operator and \(\mu\in\mathcal{M}(X)\) is a Doeblin measure for \(g(x)\) if \(\mathcal{L}_{g}^{*}\mu=\mu\), i.e., it is the invariant distribution for a stationary Markov process on the state space \(X\). We refer to [23, 25, 16, 6, 21, 19]) for results on Doeblin measures.
In Bissacot, Endo, van Enter and Le Ny [8], they show that at low temperatures for the Dyson model, there is no continuous Doeblin function \(g(x)\) that represents the Gibbs measure, and this gives a counterexample to the existence of a continuous eigenfunction of the transfer operator for the Dyson model. Fernandez and Maillard [17] proved that a Gibbs measure can be represented by a Doeblin measure in the Dobrushin regime.
There were some extensions made to establish the existence of a measurable eigenfunction bounded away from zero and infinity [10]. Walters proved some regularity (but not continuity) for an eigenfunction ([30], Theorem 5.1) under the so-called Bowen condition.
We conjecture that the condition \(\alpha>3/2\) is sharp in the sense that we do not have a continuous eigenfunction \(h\), \(0<h<\infty\), for the transfer operator for the Dyson model when \(\alpha<3/2\). We are grateful to Aernout van Enter for pointing out that there is support in the mathematical physics literature for such a conjecture, see Endo, van Enter, and Le Ny [14]. One possible approach to prove sharpness might be to use the theory of Gallesco, Gallo, and Takahashi [19] in combination with the observation that \(\alpha>3/2\) means that \(\sum_{n=1}^{\infty}(\operatorname{var}_{n}\phi)^{2}<\infty\).
### The method of proof
Assume that \(\nu\) is an eigenmeasure for \(\mathcal{L}^{*}\) and that there is some translation invariant \(\mu\) which is absolutely continuous with respect to \(\nu\), i.e. such that \(\mu=h\nu\). We use that the Radon-Nikodym
derivative \(h(x)=d\mu/d\nu\) is an eigenfunction of the transfer operator \(\mathcal{L}\). This follows from the identities
\[\int g\cdot h\,d\nu=\int(g\circ T)\cdot h\,d\nu=\int\frac{1}{\lambda}\mathcal{L} (g\circ T\cdot h)\,d\nu=\int g\cdot\left(\frac{1}{\lambda}\mathcal{L}h\right)\,d\nu,\]
where the last equality follows from the definition of \(\mathcal{L}\). This holds for all \(g\in C(X)\) if and only if \(\mathcal{L}h=\lambda h\) as elements of \(L^{1}(\nu)\).
Starting with the translation invariant \(\mu\) and the eigenmeasure \(\nu\), we can try to construct the Radon-Nikodym derivative \(h(x)\) as the limit of the likelihood ratios \(d\mu|_{\mathcal{F}_{n}}/d\nu|_{\mathcal{F}_{n}}\), i.e.
\[h(x)=\lim_{n\to\infty}h_{n}(x)\quad\text{where}\quad h_{n}(x)=\frac{\mu[x_{0}, \ldots x_{n}]}{\nu[x_{0},\ldots,x_{n}]}. \tag{4}\]
The limit (4) is well-defined \(\nu\)-almost everywhere by the martingale convergence theorem. If it exists in \(L^{1}(\nu)\) then it equals \(h=d\mu/d\nu\) and we can deduce the existence of an eigenfunction \(h\) in \(L^{1}(\nu)\). By studying the associated _random cluster model_, we can prove that the limit in (4) is actually _continuous_ for the relevant Dyson models.
The proof goes roughly as follows. Let \(\nu(x)\) be the eigenmeasure of \(\mathcal{L}_{\phi}^{*}\) and let \(\mu(\bar{x})\), \(\bar{x}\in\mathcal{A}^{\mathbb{Z}}\), denote the natural extension of the equilibrium measure \(\mu\). We can represent \(\nu(x)\) and \(\mu(\bar{x})\) as Gibbs measures for the Ising model corresponding to potentials \(\Phi(x)\) and \(\Phi(\bar{x})\), respectively. Let \(\Gamma(V)\) denote the space of graphs on vertex set \(V\). We lift \(\nu(x)\) to a spin-cluster model \(\nu(\gamma_{+},x)\) and \(\mu(\bar{x})\) to a spin-cluster model \(\mu(\gamma,\bar{x})\), where \(\gamma_{+}\in\Gamma(\mathbb{N})\) and \(\gamma\in\Gamma(\mathbb{Z})\) are random graphs. The bipartition \(\mathbb{Z}=\mathbb{Z}_{<0}\uplus\mathbb{N}\) decomposes the graph \(\gamma\) as \(\gamma=(\gamma_{-},\gamma_{0},\gamma_{+})\). It follows from the properties of the random cluster model that we can factorise the distribution of \(\gamma\) as
\[\mu(\gamma_{-},\gamma_{0},\gamma_{+})=e^{R(\gamma)}\cdot\nu(\gamma_{-})\otimes \eta(\gamma_{0})\otimes\nu(\gamma_{+}),\]
where we prove in Lemma 2 that \(e^{R}\) is an element of \(L^{1}\). This factorisation gives us an opening to compute the likelihood ratios in (4) and to prove the continuity of the limit using the dominated convergence theorem.
**Acknowledgements**. We would like to thank Noam Berger and Evgeny Verbitskiy for valuable discussions, and in particular Aernout van Enter for a very important correspondence for this work. The second author wishes to thank the Knut and Alice Wallenberg Foundation for financial support. The third author acknowledges the ERC Grant 833802-Resonances.
## 2. The proof of Theorem 1
### Configurations, graphs and potentials
#### 2.1.1. Configurations and potentials
By a configuration space we mean a set of the form \(X=\mathcal{A}^{S}=\{x:S\to\mathcal{A}\}\), where \(\mathcal{A}\) is a finite set (the "alphabet") and \(S\) (the "sites") is a countable set. We refer to elements \(x\in X\) as configurations and we give the space \(X\) the usual product topology and sigma-algebra \(\mathcal{F}\). For \(G\subset S\), we write \(x_{G}\) for the restriction \(x|_{G}:G\to\mathcal{A}\) of \(x\) to \(G\), an element of \(\mathcal{A}^{G}\), and \(\mathcal{F}_{G}\) for the \(\sigma\)-algebra generated by \(x_{G}\).
We use \(\bar{F}\) to denote the complement \(S\setminus F\) of \(F\). For all \(F\) we can decompose \(x\in X\) as \(x=(x_{F},x_{\bar{F}})\). We express that \(F\) is a finite subset of \(S\) by writing \(F\Subset S\). Writing \(F\uparrow S\) signifies that \(F\Subset S\) runs through an arbitrary increasing sequence of finite sets with limit \(S\). We denote by \([x]_{F}\) the cylinder set \([x]_{F}=\{y\mid y_{F}=x_{F}\}\) at \(F\) containing \(x\).
Implicitly, we assume all functions introduced are measurable. For a function \(f:X\to\mathbb{R}\) the _variation_ at \(F\subset S\) is \(\operatorname{var}_{F}f=\sup\{|f(x)-f(y)|:x_{F}=y_{F}\}\). A function \(f\) is _local_ if it is \(\mathcal{F}_{U}\)-measurable for some finite subset \(U\Subset S\), and it is _continuous_ if \(\lim_{F\uparrow S}\operatorname{var}_{F}f=0\).
For two sequences \(x,y\in X\), let \(\Delta(x,y)\subset S\) denote the set where they are different, i.e. \(\Delta(x,y):=\{i:x(i)\neq y(i)\}\). By a _potential_ \(\phi(x)\) on \(X\), we mean a formal limit \(\phi(x)=\lim_{F\uparrow S}\phi_{F}(x)\) of local functions \(\phi_{F}\) (i.e. \(\operatorname{var}_{F}\phi_{F}=0\)), such that the difference
\[\Delta\phi(x,y):=\lim_{F\uparrow S}\phi_{F}(x)-\phi_{F}(y)\]
is finite and well defined for any pair of configurations \(x\) and \(y\) such that \(\Delta(x,y)\) is a finite set. We can formally add potentials as long as it is clear that the underlying limits give a well defined equality \(\Delta(\phi+\psi)(x,y)=\Delta\phi(x,y)+\Delta\psi(x,y)\) for the differences. Note that, we can consider two potentials \(\phi=\lim\phi_{F}\) and \(\psi=\lim\psi_{F}\) to be equal when it holds for all \(F\Subset S\) that \(\phi_{F}(x)-\psi_{F}(x)\) does not depend on \(x\).
#### 2.1.2. Graphs
Let \(V^{(2)}\) denote the set of unordered pairs \(ij\) of elements \(i,j\in V\), i.e. \(V^{(2)}\) is \(V\times V\) modulo the equivalence relation with equivalence classes \(ij=\{(i,j),(j,i)\}\). We consider a _graph_ on the vertex set \(V=V(G)\) to be a map \(G:E\to V\) from a set of edges \(E=E(G)\) to \(V^{(2)}\). Edges \(e\) of the form \(G(e)=ii\), \(i\in V\), are _loops_. We thus allow for multiple edges and loops. The _complete graph_ on \(V\), \(K(V)\), is the inclusion map of the non-loops in \(V^{(2)}\). Given a bipartition \(V=V_{+}\uplus V_{-}\) of \(V\) the _complete bipartite_ graph \(K(V_{+},V_{-})\) is the inclusion of \(V_{+}\times V_{-}\) in \(V^{(2)}\).
A spanning subgraph \(\gamma\) of \(G\) is a restriction of \(G\) to a subset \(E(\gamma)\subset E(G)\). We denote by \(\Gamma(G)\) the space of spanning subgraphs \(\gamma\) of \(G\) and we can represent \(\gamma\in\Gamma(G)\) with a configuration \(\gamma\in\{0,1\}^{E(G)}\) or, equivalently, a subset \(\gamma\subset E(G)\). If \(G\) is the complete graph \(K(V)\) or the complete
bipartite graph \(K(V_{+},V_{-})\), we write \(\Gamma(V)\) or \(\Gamma(V_{+},V_{-})\) instead. We denote by \(G[F]\) the subgraph _induced_ on \(F\subset V\), which means the restriction \(G[F]:\,G^{-1}(F^{(2)})\to F^{(2)}\) of \(G\) in both the domain and codomain. All spanning subgraphs \(\gamma\in\Gamma(G)\) we consider will have finite degrees, i.e.
\[D(F,\gamma):=\sum_{i\in F}\sum_{j\in V}\gamma(ij)<\infty,\]
for all \(F\Subset V\).
Consider an equivalence relation \(\sim\) on \(V\) or, equivalently, a partition \(\tilde{V}=V/{\sim}\) into equivalence classes and identifying projection \(\pi:V\to\tilde{V}\). A _contraction_\(G/{\sim}\) of \(G\) along \(\sim\) is the graph \(G/{\sim}:E(G)\to\tilde{V}^{(2)}\) obtained from \(G\) by identifying pairs in the codomain along \(\sim\). Note that \(E(G)=E(G/{\sim})\) so \(\Gamma(G)\) and \(\Gamma(G/{\sim})\) are equal as sets. If \(F\Subset V\) then we write \(G^{F}\) for the local contraction at \(F\) obtained from the equivalence relation "\(x,y\in F\) or \(x=y\)", i.e. by contracting all vertices in \(F\).
#### 2.1.3. Clusters and decomposition along a cut \((V_{+},V_{-})\)
For a graph \(\gamma\in\Gamma(V)\), let \(\mathcal{C}(\gamma)=\{C\}\) denote the partition of \(V\) into connected components ("clusters"). If \(V\) is countably infinite, we consider the number of clusters \(\omega(\gamma)=|\mathcal{C}(\gamma)|\) as a potential: The difference
\[\Delta\omega(\gamma,\gamma^{\prime})=\lim_{F\uparrow V}\omega(\gamma[F])- \omega(\gamma^{\prime}[F])\]
for induced subgraphs along \(F\uparrow V\) is well defined and finite. We have \(|\Delta\omega(\gamma,\gamma^{\prime})|\leq|\Delta(\gamma,\gamma^{\prime})|\) for any two graphs \(\gamma\) and \(\gamma^{\prime}\) with a finite symmetric difference and the limit is eventually constant. Moreover, the potential \(\omega(\gamma)\) is continuous at \(\gamma\) unless \(\gamma\) contains more than one infinite cluster.
The analysis of the likelihood ratios in (4) leads us to consider contractions \(\gamma^{F}\) of a random graph \(\gamma\) at the finite sets \(F=[0,n-1]\). We see that, given \(F\Subset V\), the number of clusters \(\omega(\gamma)=|\mathcal{C}(\gamma)|\) satisfies the potential equality
\[\omega(\gamma)=\omega_{F}(\gamma)+\omega(\gamma^{F})-1 \tag{5}\]
where \(\omega_{F}(\gamma)\) is the number of clusters intersecting \(F\) and \(\omega(\gamma^{F})\) is the number of clusters in the contraction \(\gamma^{F}\). The constant difference of one is irrelevant for equality between potentials.
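The bookkeeping identity (5) is easy to sanity-check on finite graphs. A minimal sketch of ours, using networkx, where the contraction at \(F\) is emulated by wiring \(F\) together with a path (which merges exactly the clusters meeting \(F\)):

```python
import networkx as nx

G = nx.gnp_random_graph(30, 0.05, seed=7)   # a finite test graph
F = {0, 1, 2, 3}

omega = nx.number_connected_components(G)
omega_F = sum(1 for comp in nx.connected_components(G) if comp & F)

H = G.copy()                                          # contraction at F,
H.add_edges_from(zip(sorted(F)[:-1], sorted(F)[1:]))  # up to component count
omega_contracted = nx.number_connected_components(H)

assert omega == omega_F + omega_contracted - 1        # identity (5)
```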
We will on occasion consider a bipartition (or "cut") \(V=V_{+}\uplus V_{-}\) of the vertex set in two sets. Such a cut decomposes a given graph \(\gamma\in\Gamma(V)\) into three graphs
\[\gamma=(\gamma_{+},\gamma_{0},\gamma_{-})\in\Gamma(V_{+})\times\Gamma(V_{+},V_ {-})\times\Gamma(V_{-}) \tag{6}\]
and the comparison of the two-sided model with the one-sided ones relies on analysing this decomposition for random graphs \(\gamma\). The graphs \(\gamma_{\pm}=\gamma[V_{\pm}]\) are \(\gamma\) induced on the two vertex parts and the subgraph \(\gamma_{0}\) is the bipartite subgraph \(\gamma\cap K(V_{+},V_{-})\) connecting vertices \(i\in V_{-}\) with vertices \(j\in V_{+}\).
A cut gives the potential identity
\[\omega(\gamma)=\omega(\gamma_{+})+\omega(\gamma_{-})-|\gamma_{0}|+R(\gamma), \tag{7}\]
where the cardinality \(|\gamma_{0}|\) of \(\gamma_{0}\) is taken as a linear potential and where \(R(\gamma)\) is a correction term. Since \(\omega(\gamma\setminus\gamma_{0})=\omega(\gamma_{+})+\omega(\gamma_{-})\), we obtain the following expression for \(R\)
\[R(\gamma)=|\gamma_{0}|-(\omega(\gamma\setminus\gamma_{0})-\omega(\gamma))=| \gamma_{0}|-(\omega(\gamma_{+})+\omega(\gamma_{-})-\omega(\gamma)). \tag{8}\]
We could treat \(R(\gamma)\) as a potential, but, in the cases of our interest, we shall see that \(R(\gamma)\) is a regular function which is small in a certain sense.
The corank of a graph \(\gamma\) is the maximum number of edges that one can remove from \(\gamma\) without increasing the number of components \(\omega(\gamma)\). It is also the minimum number one must remove in order to make the graph acyclic. If we consider the bipartite (multi-)graph \(\tilde{\gamma}_{0}\in\Gamma(V_{+}/{\mathcal{C}}(\gamma_{+}),V_{-}/{\mathcal{C }}(\gamma_{-}))\) obtained from contracting \(\gamma_{0}\) along the clusters in \({\mathcal{C}}(\gamma_{+})\) and \({\mathcal{C}}(\gamma_{-})\), then \(R(\gamma)\) in (8) equals the corank of \(\tilde{\gamma}_{0}\).
If we contract the terms in the equality (7) at a finite set \(F\Subset V_{+}\), we obtain
\[\omega(\gamma^{F})=\omega(\gamma_{+}^{F})+\omega(\gamma_{-})-|\gamma_{0}|+R_{F }(\gamma). \tag{9}\]
Here \(R_{F}(\gamma)\) equals \(\omega(\gamma^{F})-\omega(\gamma^{F}\setminus\gamma_{0}{}^{F})=\operatorname{ corank}\tilde{\gamma}_{0}^{F}\), where \(\tilde{\gamma}_{0}^{F}\) denotes the bipartite graph \(\gamma_{0}\) contracted first at \(F\) and then along \({\mathcal{C}}(\gamma_{+})\uplus{\mathcal{C}}(\gamma_{-})\).
For some fixed chosen order on the edges in \(\gamma_{0}\), we say an edge \(ij\) is _irrelevant_ if there is an edge in \(\gamma_{0}\) from the same cluster \(C\) in \({\mathcal{C}}(\gamma_{-})\) that precedes it in the chosen order. We define \(Q(\gamma_{-},\gamma_{0})\) as the total number of irrelevant edges. That is,
\[Q(\gamma_{-},\gamma_{0})=\sum_{C\in{\mathcal{C}}(\gamma_{-})}\left(D(C,\gamma_ {0})-1\right)_{+}, \tag{10}\]
where \(D(C,\gamma_{0})=\sum_{i\in C}\sum_{j\in V_{+}}\gamma_{0}(ij)\) is the degree of \(C\subset V_{-}\) in \(\gamma_{0}\). Note that \(Q\) does not depend on \(\gamma_{+}\).
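In code, (10) is a one-liner; a direct transcription of ours, assuming \(\gamma_{0}\) is given as a list of pairs \((i,j)\) with \(i\in V_{-}\), \(j\in V_{+}\), and the clusters of \(\gamma_{-}\) as sets of vertices:

```python
def Q(clusters_minus, gamma0):
    """Number of irrelevant edges: sum over clusters C of (D(C, gamma0) - 1)_+."""
    total = 0
    for C in clusters_minus:
        deg = sum(1 for (i, _) in gamma0 if i in C)  # D(C, gamma0)
        total += max(deg - 1, 0)
    return total

# tiny example: the cluster {-1, -2} receives two edges, so Q = 1
print(Q([{-1, -2}, {-3}], [(-1, 5), (-2, 7), (-3, 0)]))
```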
From the interpretation of \(R_{F}\) as the co-rank in \(\tilde{\gamma}_{0}^{F}\), it follows that for all \(F\)
\[R_{F}(\gamma)\leq Q(\gamma_{-},\gamma_{0}) \tag{11}\]
since a bipartite graph in which every vertex in one part has degree at most \(1\) is acyclic. Assuming \(Q<\infty\), we also have
\[R_{F}(\gamma)\to Q(\gamma_{-},\gamma_{0})\quad\text{as $F\uparrow V_{+}$}, \tag{12}\]
since all cycles in \(\tilde{\gamma}_{0}^{F}\) eventually become 2-cycles when \(F\uparrow V_{+}\).
### Random configurations and random graphs
Let \(\mathcal{M}(X)\) denote the space of probability distributions of configurations. Elements \(\alpha\in\mathcal{M}(X)\) are usually written as \(\alpha(x)\) in order to make it clear that \(\alpha\) is a distribution of the random configuration \(x\in X\). We also use \(x\sim\alpha\) to denote that \(x\) has distribution \(\alpha\). When we need to specify a parameter \(p\) of a distribution \(\alpha\) we use the form \(\alpha(x;p)\). We denote the marginal distribution of \(x_{A}\) by \(\alpha(x_{A})\). For a partition \(A\uplus B=S\), we denote by \(\alpha(x_{A})\otimes\beta(x_{B})\) the product measure \((\alpha\otimes\beta)(x)\) of \(\alpha(x_{A})\) and \(\beta(x_{B})\). A measure \(\eta\) is a _Bernoulli measure_ if and only if \(\eta(x)=\eta(x_{A})\otimes\eta(x_{B})\) for every bipartition \(S=A\uplus B\). To parametrise a general Bernoulli distribution, we use a function \(p:S\to\mathcal{M}(\mathcal{A})\) such that \(p(s)\) is a probability distribution on \(\mathcal{A}\). The product distribution \(\bigotimes_{s\in S}p(s)(x_{s})\) is the corresponding Bernoulli measure \(\eta(x;p)\). We write \(\mu(x)\prec\nu(x^{\prime})\) to state _stochastic domination_ between elements in \(\mathcal{M}(X)\), which means that we can couple \(\mu(x)\) and \(\nu(x^{\prime})\) so that with probability one \(x\leq x^{\prime}\) with respect to the partial order \(\cdot\leq\cdot\).
By a _gibbs measure_ (small \(g\)), we mean a Gibbs measure, sufficiently generalised to cover the random cluster setting below. It is a probability distribution \(\mu(x)\in\mathcal{M}(X)\) consistent with an associated _specification_. That is, for every finite set \(F\Subset S\) and for every (instead of just for \(\mu\)-almost every) exterior configuration \(x_{\bar{F}}\), we have a well defined conditional probability \(\mu(x_{F}\mid x_{\bar{F}})\) of \(x_{F}\in\mathcal{A}^{F}\) given \(x_{\bar{F}}\in\mathcal{A}^{\bar{F}}\). Hence, for each finite set \(F\subset S\), the map \(x_{\bar{F}}\mapsto\mu(\cdot\mid x_{\bar{F}})\) is a well defined function of the exterior configuration \(x_{\bar{F}}\), satisfying the obvious consistency conditions for conditional probabilities. An immediate class of unique gibbs measures are the Bernoulli measures. The corresponding specification \(\eta(x_{F}\mid x_{\bar{F}};p)=\eta(x_{F};p|_{F})\) is independent of the boundary \(x_{\bar{F}}\).
Note that we do not require the specification of a gibbs measure to be continuous and we are thus talking about gibbs measures in an extended sense. The same specification may have multiple consistent gibbs measures, but, in our context, we can consider all gibbs measures of this form to be _unique_ by the subcriticality assumption on \(\beta\). The reference [15] gives a more thorough and rigorous introduction of Gibbs measures and some of their extensions.
We can _modulate_ a gibbs measure \(\alpha(x)\) with an "exponential of a potential" \(e^{\phi(x)}\) to obtain a set of new gibbs measures denoted by \(e^{\phi}\ltimes\alpha\). Let \(\mu=e^{\phi}\ltimes\alpha\) denote an element of this set. For all finite subsets \(F\Subset S\), the specification of \(e^{\phi}\ltimes\alpha\) at \(F\), i.e. \(\mu(x_{F}\mid x_{\bar{F}})\), is well-defined by the relation
\[\frac{\mu(x_{F}\mid x_{\bar{F}})}{\mu(y_{F}\mid x_{\bar{F}})}=\exp\left(\Delta \phi(x,y)\right)\cdot\frac{\alpha(x_{F}\mid x_{\bar{F}})}{\alpha(y_{F}\mid x _{\bar{F}})}, \tag{13}\]
where \(y=(y_{F},x_{\bar{F}})\) and \(\alpha(\cdot|x_{\bar{F}})\) is the specification of \(\alpha\). We write \(e^{\phi}\ltimes\alpha\) to denote the set of weak limit points, as \(F\uparrow S\), of \(\mu(x_{F}\mid x_{\bar{F}})\cdot\alpha(x_{\bar{F}})\) where \(\alpha(x_{\bar{F}})\) denotes the marginal distribution of \(x_{\bar{F}}\). In our context, this limit
will be a unique measure and we usually write \(\mu=e^{\phi}\ltimes\alpha\). If \(g(x)>0\) is a regular positive function such that \(g(x)\in L^{1}(\alpha)\) then the modulation \(g\ltimes\alpha\) simply means taking the product with \(g(x)\) and normalising with a constant, i.e.
\[g\ltimes\alpha=\frac{g\cdot\alpha}{\int g(x)\,d\alpha(x)}. \tag{14}\]
We can for example construct the Bernoulli measure \(\eta(x;p)\) as the modulation \(e^{\phi(x)}\ltimes\upsilon(x)\) where \(\upsilon(x)\) denotes the _uniform_ Bernoulli measure on \(X\) and \(\phi(x)\) is the linear potential
\[\phi(x)=\sum_{i\in S}\log p(i)(x_{i}).\]
Modulation of a Bernoulli measure with a linear potential results in a new Bernoulli measure.
The following rule shows that composition of modulation behaves naturally, i.e.
\[e^{\psi}\ltimes(e^{\phi}\ltimes\alpha)=e^{\psi+\phi}\ltimes\alpha, \tag{15}\]
provided \(e^{\phi}\ltimes\alpha\) is unique. Another rule of computation is that of distributivity over products of measures, i.e.
\[e^{\psi(x_{A})+\phi(x_{B})}\ltimes(\alpha(x_{A})\otimes\beta(x_{B}))=(e^{\psi} \ltimes\alpha)(x_{A})\otimes(e^{\phi}\ltimes\beta)(x_{B}). \tag{16}\]
Both rules (15) and (16) are immediate when applied to the specification determined by (13). Since we assume that gibbs measures are uniquely specified by their specifications, we can state the rules above as equalities between gibbs measures.
#### 2.2.1. The random cluster model
A _random graph model_\(\alpha(\gamma)\) is a probability distribution \(\alpha(\gamma)\in\mathcal{M}(\Gamma(G))\) on the space \(\Gamma(G)\) of (spanning) subgraphs \(\gamma\) of a fixed ambient graph \(G\). We can identify \(\gamma\) with the corresponding configuration \(\gamma:E(G)\to\{0,1\}\) so that \(\Gamma(G)\cong\{0,1\}^{E(G)}\). In our context of "long range models", we will use the complete graph \(G=K(V)\) on a countable vertex set \(V\) as the ambient graph and we write \(\gamma\in\Gamma(V)\). However, we will almost surely have finite degrees \(D(F,\gamma)\).
The Bernoulli graph model \(\eta(\gamma;p)\) is uniquely specified by its _edge probabilities_\(p:G\to[0,1]\), so that \(\gamma_{ij}=1\) with probability \(p(ij)\) independently at each edge \(ij\in E(G)\). The finite degree condition holds whenever \(\sum_{j:ij\in E(G)}p(ij)<\infty\) for all \(i\). Note that the Bernoulli graph model is independent of the graph structure and is the same for all ambient graphs with the same set of edges.
The _random cluster model_ \(\mathsf{RC}_{q}(\gamma;p)\) (or FK-model, see [20]) is the random graph distribution \(\mu=q^{\omega(\gamma)}\ltimes\eta(\gamma;p)\) on \(\Gamma(G)\) that one obtains if one modulates the Bernoulli graph \(\eta(\gamma;p)\) with \(q^{\omega(\gamma)}\). Note that \(\mathsf{RC}_{1}(\gamma;p)=\eta(\gamma;p)\).
Since we focus on the Ising model, we will assume that \(q=2\) unless otherwise stated. We will only work in the sub-critical regimes where the uniqueness of the random cluster model \(\mu=q^{\omega(\gamma)}\ltimes\eta(p)\) is well established [20].
It is well-known (see [20]) that the random cluster models satisfy a stochastic domination relation, so that
\[\mathsf{RC}_{q}(\gamma;p)\prec\mathsf{RC}_{q^{\prime}}(\gamma^{\prime};p^{ \prime})\quad\text{when $p\leq p^{\prime}$ and $q\geq q^{\prime}$.}\]
In particular it holds that \(\mathsf{RC}(\gamma;p)\prec\eta(\gamma;p)\). For a fixed vertex \(\mathrm{o}\in V\), we use \(C_{\mathrm{o}}(\gamma)\) to denote the cluster containing the vertex \(\mathrm{o}\). It is a fact, see [5], that if we condition on the cluster \(C_{\mathrm{o}}\) then the distribution of the remaining graph \(\gamma\setminus C_{\mathrm{o}}\) is the random cluster model with edge probabilities \(p^{\prime}(ij)=p(ij)\mathbf{1}_{i,j\not\in C_{\mathrm{o}}}\). It follows that the conditional distribution of \(\gamma\setminus C_{\mathrm{o}}\) is dominated by the unconditional distribution of \(\gamma\setminus C_{\mathrm{o}}\), so that
\[\mu(\gamma\setminus C_{\mathrm{o}}\mid C_{\mathrm{o}})\prec\mu(\gamma). \tag{17}\]
The _random spin-cluster model_\(\mathsf{RC}((x,\gamma);p)\) is the joint distribution of a random graph \(\gamma\) together with an Ising _spin_ configuration, \(x\in X=\left\{+1,-1\right\}^{V}\), on the vertex set. One can obtain the distribution
\[\mu(x,\gamma)=\mathsf{RC}((x,\gamma);p)\]
of \((x,\gamma)\) by first considering the product distribution \(\upsilon(x)\otimes\eta(\gamma)\) of the uniform distribution of \(x\in X\) and the Bernoulli distribution \(\eta(\gamma;p)\) and then conditioning on the event that \(x\) and \(\gamma\) are compatible in the sense that no edge in \(\gamma\) connects vertices of opposite spin. An alternate, perhaps more direct, way to derive the spin-cluster distribution \(\mu(x,\gamma)\) is to first choose the random graph \(\gamma\) according to the random cluster model \(\mu(\gamma)\) and then to assign a spin \(x(C)\in\left\{+1,-1\right\}\) to each cluster \(C\in\mathcal{C}(\gamma)\) uniformly at random.
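The conditioning description above translates directly into an exact (if inefficient) rejection sampler on a finite window of sites; a sketch of ours, with free boundary, so the marginals only approximate the infinite-volume measures:

```python
import itertools, math, random

random.seed(1)
ALPHA, BETA = 2.0, 0.3
V = range(8)                                  # finite window of sites
E = list(itertools.combinations(V, 2))
p = {e: 1 - math.exp(-BETA / abs(e[0] - e[1]) ** ALPHA) for e in E}

def sample_spin_cluster():
    """Draw x uniform and gamma ~ Bernoulli(p); accept iff compatible,
    i.e. no open edge joins vertices of opposite spin."""
    while True:
        x = {i: random.choice([-1, 1]) for i in V}
        gamma = [e for e in E if random.random() < p[e]]
        if all(x[i] == x[j] for (i, j) in gamma):
            return x, gamma

x, gamma = sample_spin_cluster()
```

By the discussion above, the accepted \(x\) then follows (a finite-volume version of) the Ising Gibbs measure (18) and the accepted \(\gamma\) the \(q=2\) random cluster model.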
For our purposes, one should note that the marginal distribution \(\mu(x)\) of the spins \(x\in X\) is the Gibbs measure corresponding to the potential
\[\Phi(x)=\sum_{ij\in E(G)}-\log(1-p(ij))x_{i}x_{j}. \tag{18}\]
The marginal distribution \(\mu(\gamma)\) of \(\gamma\) is the random cluster model \(\mathsf{RC}(\gamma;p)\). Percolation is the event that the random graph \(\gamma\) contains a cluster of infinite size. The almost sure existence of an infinite cluster coincides with the existence of multiple Gibbs measures for the spins \(x\in X\).
#### 2.2.2. Cylinder probabilities
Let \(F\) be a finite subset of \(V\) and consider the cylinder \([x]_{F}\) of spins. Let \(B_{F}(x,\gamma)\in\left\{0,1\right\}\) indicate the event that the graph \(\gamma\) is compatible with the cylinder: that is, that no _path_ in \(\gamma\) connects \(i,j\in F\) such that the spins \(x_{i}\) and \(x_{j}\) have opposite signs. Recall that \(\omega_{F}(\gamma)\) denotes the number of clusters in \(\gamma\) that intersect \(F\). From the alternate
way to derive the spin-cluster distribution, we deduce that the probability of the cylinder is
\[\mu([x]_{F})=\int 2^{-\omega_{F}(\gamma)}\,B_{F}(x,\gamma)\,d\mu(\gamma), \tag{19}\]
since the probability that the cluster-wise assignment of spins \(\{x(C)\}\) give rise to the cylinder \([x]_{F}\) equals \(2^{-\omega_{F}(\gamma)}\) provided the graph \(\gamma\) is compatible with \([x]_{F}\).
Note that \(\omega_{F}(\gamma)\leq|F|\) is a bounded function. From (14), (19) and the potential equality (5), we arrive at the following expression for the cylinder probability
\[\mu([x]_{F})=\frac{1}{\int 2^{-\omega_{F}}\,d\mu}\cdot\int B_{F}(x,\gamma)\,d \mu^{F}(\gamma), \tag{20}\]
where
\[\mu^{F}(\gamma)=2^{\omega(\gamma^{F})}\ltimes\eta(\gamma^{F};p)\]
denotes the random cluster model \(\mu\) _contracted at_ \(F\). Since \(\Gamma(G)\cong\Gamma(G^{F})\) as sets, we may choose to consider \(\mu^{F}(\gamma)\) as a perturbed random cluster distribution for \(\gamma\in\Gamma(G)\) or the random cluster distribution for the contraction \(\gamma^{F}\in\Gamma(G^{F})\).
#### 2.2.3. Decomposition of the random cluster model across a cut
Consider the decomposition in (6) of a graph \(\gamma\) across a cut \((V_{+},V_{-})\). Let \(\mu=2^{\omega(\gamma)}\ltimes\eta(\gamma)\) be the full "two-sided" random cluster model. Similarly, let \(\nu(\gamma_{\pm})=2^{\omega(\gamma_{\pm})}\ltimes\eta(\gamma_{\pm})\) be the "one-sided" random cluster models for the graphs \(\gamma_{\pm}\) on vertex sets \(V_{\pm}\). Assume that \(F\Subset V_{+}\) is a fixed finite subset of \(V_{+}\). Let also \(\mu^{F}(\gamma)\) and \(\nu^{F}(\gamma_{+})\) be the contractions at \(F\) of \(\mu(\gamma)\) and \(\nu(\gamma_{+})\), respectively. From (6), it is clear that the Bernoulli distribution \(\eta(\gamma)=\eta(\gamma;p)\) factorises into three Bernoulli measures
\[\eta(\gamma)=\eta(\gamma_{+})\otimes\eta(\gamma_{0})\otimes\eta(\gamma_{-}). \tag{21}\]
We construct these Bernoulli measures by restricting the given edge probabilities.
A similar factorisation for the contracted random cluster measure \(\mu^{F}\) uses (16). From (21) and (9), we obtain that
\[\mu^{F}(\gamma) =2^{\omega(\gamma_{+}^{F})+\omega(\gamma_{-})-|\gamma_{0}|+R_{F} (\gamma)}\ltimes(\eta(\gamma_{+})\otimes\eta(\gamma_{0})\otimes\eta(\gamma_{- }))=\] \[=2^{R_{F}}\ltimes\left((2^{\omega(\gamma_{+}^{F})}\ltimes\eta( \gamma_{+}))\otimes(2^{-|\gamma_{0}|}\ltimes\eta(\gamma_{0}))\otimes(2^{- \omega(\gamma_{-})}\ltimes\eta(\gamma_{-}))\right)\]
and, hence, we have the following factorisation
\[\mu^{F}(\gamma)=2^{R_{F}(\gamma)}\ltimes\left(\nu^{F}(\gamma_{+})\otimes\tilde{\eta}(\gamma_{0})\otimes\nu(\gamma_{-})\right), \tag{22}\]
where the measure \(\tilde{\eta}\) is the Bernoulli measure
\[\tilde{\eta}(\gamma_{0})=2^{-|\gamma_{0}|}\ltimes\eta(\gamma_{0})=\eta(\gamma_ {0};\tilde{p})\quad\text{where }\tilde{p}=p/(2-p). \tag{23}\]
Clearly, \(\tilde{\eta}(\gamma_{0})\prec\eta(\gamma_{0})\).
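Edge by edge, (23) can be verified directly: the modulation \(2^{-|\gamma_{0}|}\) multiplies the weight of an open edge by \(1/2\) and leaves a closed edge untouched, so after normalising,
\[\tilde{p}=\frac{p/2}{p/2+(1-p)}=\frac{p}{2-p}\leq p.\]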
Specialising (22) with \(F=\emptyset\) allows us to write the random cluster model \(\mu(\gamma)\) as
\[\mu(\gamma)=2^{R(\gamma)}\ltimes\left(\nu(\gamma_{+})\otimes\tilde{\eta}(\gamma_ {0})\otimes\nu(\gamma_{-})\right). \tag{24}\]
We shall see that, for the Dyson model, \(2^{R(\gamma)}\in L^{1}(\mu)\), which shows that the marginal distribution \(\mu(\gamma_{+})\) of \(\gamma_{+}\) under the two-sided model is absolutely continuous with respect to the one-sided measure \(\nu(\gamma_{+})\).
### The Dyson model
#### 2.3.1. The one-sided and two-sided models
Let \(\bar{X}=\mathcal{A}^{\mathbb{Z}}\) with projection \(\bar{X}\to X\) given by \(\bar{x}\mapsto x=\bar{x}|_{\mathbb{N}}\). For the analysis of the long range one-dimensional Dyson model, we consider the "two-sided" random spin-cluster model
\[\mu(\bar{x},\gamma)=\mathsf{RC}((\bar{x},\gamma);p)\]
with vertex set \(V=\mathbb{Z}\) where the edge probabilities are
\[p(ij)=1-e^{-J(ij)}\quad\text{where}\quad J(ij)=\frac{\beta}{|i-j|^{\alpha}}. \tag{25}\]
By (18), the marginal spin distribution \(\mu(\bar{x})\) is the Gibbs measure corresponding to the potential
\[\bar{\Phi}(\bar{x})=\sum_{i,j}\frac{\beta}{|i-j|^{\alpha}}\bar{x}_{i}\bar{x}_ {j}=\sum_{k=-\infty}^{\infty}\bar{\phi}(T^{k}\bar{x}) \tag{26}\]
where \(\bar{\phi}\) is the lift to \(\bar{X}\) of the one-point potential \(\phi\) from (2). By symmetry and uniqueness, \(\mu(\bar{x},\gamma)\) is translation invariant with respect to the left shift \(T\) on \(\mathbb{Z}\). In particular, so is the marginal distribution \(\mu(x)\) of \(x\in X\).
Taking the cut of \(\mathbb{Z}=V_{+}\uplus V_{-}\) where \(V_{+}=\mathbb{N}=\{0,1,2,\dots\}\) and \(V_{-}=\{\dots,-2,-1\}\), we also consider the two "one-sided" spin-cluster models
\[\nu(x_{\pm},\gamma_{\pm})=\mathsf{RC}(x_{\pm},\gamma_{\pm};p_{\pm}).\]
By the vertex map \(j\mapsto-1-j\), \(j\in\mathbb{N}\), we have an isomorphism \(\nu((x_{+},\gamma_{+}))\cong\nu((x_{-},\gamma_{-}))\). For this one-sided model, the spin distribution \(\nu(x)\) is the Gibbs measure corresponding to the one-sided potential \(\Phi(x)\)
\[\Phi(x)=\sum_{k=0}^{\infty}\phi(T^{k}x) \tag{27}\]
for \(x\in X\), where we drop the subscripting with \(\pm\) on the spin sequences. The Gibbs measure \(\nu(x)\) for the potential \(\Phi(x)\) in (27) is also the eigenmeasure for \(\mathcal{L}_{\phi}^{*}\) since the definition of \(\mathcal{L}_{\phi}^{*}\) gives that
\[\mathcal{L}^{*}\nu(x)=e^{\phi(x)}\cdot\nu(Tx)\]
and the right hand side is, up to the normalising constant \(\frac{1}{\lambda}\), the Gibbs measure for \(\Phi(x)\) due to the identity \(\Phi(x)=\phi(x)+\Phi(Tx)\).
#### 2.3.2. Cluster size distribution
It is well-known (see e.g. [1], [20] or [12]) that for all \(\alpha\), \(1<\alpha\leq 2\), and \(q\geq 1\) there exists a critical parameter \(\beta_{c}=\beta_{c}(\alpha,q)\), such that percolation does not occur with probability one for \(0\leq\beta<\beta_{c}\) (the "sub-critical" regime), while it occurs with probability one for \(\beta_{c}<\beta<\infty\). The random cluster model is moreover unique except for possibly at \(\beta=\beta_{c}\).
We claim that there is a \(\beta_{*}>0\) such that, for \(0<\beta<\beta_{*}\), the moment generating function of the cluster size has a positive radius of convergence. In other words, there is some \(t_{0}=t_{0}(\alpha,\beta)>0\) such that, for \(0<t<t_{0}\),
\[\mathsf{E}\left(e^{t\cdot|C_{\mathrm{o}}(\gamma)|}\right)=\sum_{k=0}^{\infty} \frac{\mathsf{E}(|C_{\mathrm{o}}|^{k})}{k!}t^{k}<\infty, \tag{28}\]
where \(\mathrm{o}\) is any fixed vertex. The property (28) follows if the cluster size \(|C_{\mathrm{o}}|\) has exponentially bounded tails, i.e. if
\[\mathsf{P}(|C_{\mathrm{o}}|>n)\leq A\cdot e^{-t_{0}n}/\sqrt{n} \tag{29}\]
for some \(A\) and \(t_{0}>0\). We can easily see that (29) implies (28) using the identity
\[\mathsf{E}\left(e^{sX}\right)=1+\int_{0}^{\infty}se^{sx}\,\mathsf{P}(X>x)\,dx,\qquad X\geq 0.\]
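To see the identity, write \(e^{sX}=1+\int_{0}^{X}se^{sx}\,dx\) for \(X\geq 0\) and take expectations using Tonelli. Combined with the tail bound (29) (together with \(\mathsf{P}(X>x)\leq 1\) for \(x<1\) and \(1/\sqrt{x}\leq 1\) for \(x\geq 1\)) this gives, for \(0<s<t_{0}\),
\[\mathsf{E}\left(e^{s|C_{\mathrm{o}}|}\right)\leq 1+\int_{0}^{1}se^{sx}\,dx+\int_{1}^{\infty}se^{sx}\cdot Ae^{-t_{0}x}\,dx=e^{s}+\frac{sA\,e^{s-t_{0}}}{t_{0}-s}<\infty.\]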
Moreover, it is well-known that (29) holds (see Panagiotis [28] Theorem 1.2.1; Aizenman and Newman [2], Proposition 5.1) for \(\beta<\beta_{c}^{1}(\alpha)\). By stochastic domination we have \(\beta_{c}^{1}(\alpha)\leq\beta_{c}(\alpha)\) and we can thus infer that
\[\beta_{c}^{1}(\alpha)\leq\beta_{*}\leq\beta_{c}(\alpha) \tag{30}\]
as claimed in the discussion following Theorem 1. From now on we assume that \(\beta<\beta_{*}\) and thus that (28) holds.
A major part of our argument depends on the following lemma stating that the moment generating function (MGF) \(\mathsf{E}(e^{sQ})\) of \(Q\) from (10) is finite for all \(s\).
**Lemma 2**.: _If \(\nu(\gamma_{-})\) satisfies (28) then_
\[\int\exp\left(sQ(\gamma_{-},\gamma_{0})\right)\,d\tilde{\eta}(\gamma_{0})\,d \nu(\gamma_{-})<\infty, \tag{31}\]
_for every \(s>0\)._
#### 2.3.3. Proof of Lemma 2
The edge-indicators \(\gamma_{0}(ij)\), distributed according to \(\tilde{\eta}\), are independent with Bernoulli distribution \(\mathrm{Be}(\tilde{p}(ij))\) where \(\tilde{p}(ij)\leq p(ij)=1-e^{-J(ij)}\). Since \(\mathrm{Be}(1-e^{-J})\prec\mathrm{Po}(J)\), we have
\[\tilde{\eta}(\gamma_{0})\prec\eta(\gamma_{0})\prec\bigotimes_{ij}\mathrm{Po}( J(ij)).\]
We assume an underlying probability space \((\Omega,\mathcal{F},\mathsf{P})\) carrying the processes \((\gamma_{-},\gamma_{0})\sim\nu(\gamma_{-})\otimes\tilde{\eta}(\gamma_{0})\) as in (31). In addition, we assume a discrete Poisson process \(C\mapsto X(C)\), \(C\subset V_{-}\), specified by
\[X(C):=\sum_{i\in C}\sum_{j\in V_{+}}X(ij)\sim\mathrm{Po}(\lambda(C)),\quad \lambda(C)=\sum_{i\in C}\sum_{j\in V_{+}}J(ij),\]
where \(D(C)=\sum_{i\in C}\sum_{j}\gamma_{0}(ij)\leq X(C)\).
Let \(Y(C)=\left(X(C)-1\right)_{+}\). To prove Lemma 2, it is enough to show that
\[\mathsf{E}\left(\exp\left(s\cdot\sum_{C\in\mathcal{C}(\gamma_{-})}Y(C)\right) \right)<\infty, \tag{32}\]
for \(s>0\).
Choose \(m_{0}\geq 2\) so that
\[t=\frac{e^{s}\beta}{\alpha^{\prime}(m_{0}-1)^{\alpha^{\prime}}}<t_{0}, \tag{33}\]
where \(t_{0}=t_{0}(\alpha,\beta)\) is the radius of convergence from (28). Let \(S=\{-1,-2,\ldots,-m_{0}\}\) and let \(\mathcal{C}^{\prime}=\{C\setminus S\mid C\in\mathcal{C}(\gamma_{-})\}\). Note that, for every \(C\subset V_{-}\), we have
\[Y(C)\leq X(S\cap C)+Y(C\setminus S).\]
Hence, it follows that
\[\sum_{C\in\mathcal{C}(\gamma_{-})}Y(C)\leq X(S)+\sum_{C\in\mathcal{C}^{\prime }}Y(C).\]
Since \(\sum_{C\in\mathcal{C}^{\prime}}Y(C)\) is independent of \(X(S)\sim\mathrm{Po}(\lambda(S))\) by disjointness, it is enough to show that \(\mathsf{E}\left(e^{s\sum_{C\in\mathcal{C}^{\prime}}Y(C)}\right)<\infty\). This amounts to showing that
\[K_{3}:=\mathsf{E}\left(\prod_{C\in\mathcal{C}^{\prime}}\Psi(C)\right)<\infty, \tag{34}\]
where
\[\Psi(C) =\mathsf{E}\left(e^{s\left(X(C)-1\right)_{+}}\mid C\right)\] \[=\sum_{k=0}^{\infty}e^{-\lambda}\cdot\frac{\lambda^{k}}{k!}\cdot e ^{s\left(k-1\right)_{+}} \tag{35}\] \[=e^{-\lambda}+\lambda e^{-\lambda}+e^{-\lambda-s}\cdot\sum_{k=2} ^{\infty}\frac{\left(e^{s}\lambda\right)^{k}}{k!},\quad\text{with }\lambda= \lambda(C).\]
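Summing the series in (35) in closed form gives
\[\Psi(C)=e^{-\lambda}(1+\lambda)+e^{-\lambda-s}\left(e^{e^{s}\lambda}-1-e^{s}\lambda\right),\]
which is finite for every \(\lambda\) and \(s\); the estimates that follow control how fast \(\Psi(C)-1\) decays in the distance \(J(C)\).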
For \(|i|\geq 2\), an elementary integral estimate of \(\lambda(\{i\})=\sum_{j\in\mathbb{N}}J(ij)\) gives that
\[\lambda(\{i\})\leq\frac{\beta}{\alpha^{\prime}\cdot\left(|i|-1\right)^{\alpha^{ \prime}}},\]
where \(\alpha^{\prime}=\alpha-1\). Hence, for any \(C\subset V_{-}\)
\[\lambda(C)\leq\frac{\beta}{\alpha^{\prime}\cdot\left(J(C)-1\right)^{\alpha^{ \prime}}}\cdot|C|, \tag{36}\]
where \(J(C)=\min\{|i|:i\in C\}\) is the rightmost (first) element of \(C\).
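For the record, the comparison behind this estimate is (for \(|i|\geq 2\))
\[\lambda(\{i\})=\sum_{j=0}^{\infty}\frac{\beta}{(|i|+j)^{\alpha}}\leq\beta\int_{|i|-1}^{\infty}u^{-\alpha}\,du=\frac{\beta}{\alpha^{\prime}(|i|-1)^{\alpha^{\prime}}},\]
and (36) follows by summing over \(i\in C\) and bounding each term using \(|i|\geq J(C)\).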
Since \(e^{-\lambda}+\lambda e^{-\lambda}<1\) and \(e^{-\lambda-s}<1\), the expression (35) and the bound (36) implies that
\[\Psi(C)\leq 1+\sum_{k=2}^{\infty}\left(\frac{e^{s}\beta}{\alpha^{\prime}(J(C)-1 )^{\alpha^{\prime}}}\right)^{k}\cdot\frac{|C|^{k}}{k!} \tag{37}\]
Since \(J(C)\geq m_{0}\), for all \(C\in\mathcal{C}^{\prime}\), we obtain
\[\Psi(C)\leq 1+w(J)\cdot\Theta(|C|) \tag{38}\]
where
\[w(J)=\frac{(m_{0}-1)^{2\alpha^{\prime}}}{(J-1)^{2\alpha^{\prime}}}\quad\text{ and}\quad\Theta(N)=\sum_{k=2}^{\infty}t^{k}\cdot\frac{N^{k}}{k!}<\infty,\]
with \(t<t_{0}\) as in (33).
Order the elements in \(\mathcal{C}^{\prime}=\{C_{1},C_{2},\dots\}\) so that
\[m_{0}+1=J(C_{1})<J(C_{2})<\cdots.\]
Note that \(J(C_{k})=\min\{|i|:i\not\in S\cup C_{1}\cup\cdots\cup C_{k-1}\}\), i.e., we can determine \(J(C_{k})\) from the preceding clusters. By induction on (17), we see that
\[\mathsf{P}(|C_{k}|\mid C_{1},\dots,C_{k-1})\prec\mathsf{P}(|C_{\mathrm{o}}|)\]
and, hence, with \(\Theta(N)\) as in (38), we have, for all \(k\),
\[\mathsf{E}\left(\Theta(|C_{k}|)\mid C_{1},C_{2},\dots,C_{k-1}\right)\leq \mathsf{E}\left(\Theta(|C_{\mathrm{o}}|)\right)=:\Theta_{0}. \tag{39}\]
Since \(t<t_{0}\), the constant \(\Theta_{0}\) is finite by (28).
To complete the proof, we take conditional expectations in (34) and obtain from (38) and (39) that
\[K_{3} \leq\mathsf{E}\left(\prod_{k=1}^{\infty}\mathsf{E}\left(1+w(J(C_{ k}))\cdot\Theta(|C_{k}|)\mid C_{1},C_{2},\dots,C_{k-1}\right)\right)\] \[\leq\mathsf{E}\left(\prod_{k=1}^{\infty}\left(1+w(J(C_{k}))\cdot \Theta_{0}\right)\right)\] \[\leq\exp\left(\Theta_{0}\cdot\mathrm{const}\cdot\sum_{k=m_{0}}^{ \infty}\frac{1}{(m_{0}+k-1)^{2\alpha^{\prime}}}\right)<\infty,\]
since \(J(C_{k})\geq m_{0}+k\) and \(w\) is decreasing; the last step uses \(1+u\leq e^{u}\), and the sum is finite because \(2\alpha^{\prime}>1\).
### The proof of the theorem
Recall that our aim is to show that the sequence of the local likelihood ratios
\[h_{n}(x)=\frac{\mu(\left[x\right]_{n})}{\nu(\left[x\right]_{n})}\]
in (4) is a Cauchy sequence with respect to the supremum norm. That is, we aim to show that
\[\|h_{n}(x)-h_{m}(x)\|_{\infty}\to 0\quad\text{as }n,m\to\infty \tag{40}\]
which means that the limit \(h(x)\) is a continuous function bounded away from \(0\) and \(\infty\).
We will refer to previous relations concerning representations and cylinder probabilities such as (24), (22), (20), etc. which use notation for a more general setting. We can now specialise to the case where we consider cylinders at \(F=[0,n-1]=\{0,1,\ldots,n-1\}\) where \(n\to\infty\). For notational simplicity, we use subscript \(n\) instead of \(F\) when referring to cylinders, cluster counts and measures, etc. i.e. \(\left[x\right]_{n}\) stands for the cylinder \(\left[x\right]_{[0,n-1]}\) and we write \(\omega_{n}\) for \(\omega_{[0,n-1]}\), \(\nu^{n}\) for \(\nu^{F}\), \(R_{n}\) for \(R_{F}\), etc.
From (22) and (20), we have in this notation
\[\mu(\left[x\right]_{n})=k_{2}\cdot\int B_{n}(x,\gamma)\cdot 2^{R_{n}(\gamma)} \,d\left(\nu^{n}(\gamma_{+})\otimes\tilde{\eta}(\gamma_{0})\otimes\,d\nu( \gamma_{-})\right)\]
and
\[\nu(\left[x\right]_{n})=k_{1}\cdot\int B_{n}(x,\gamma_{+})\,d\nu^{n}(\gamma_{ +}),\]
where \(k_{2}=\int 2^{-\omega_{n}(\gamma)}\,d\mu(\gamma)\) and \(k_{1}=\int 2^{-\omega_{n}(\gamma_{+})}\,d\nu(\gamma_{+})\). Hence, by taking the ratio, we have
\[h_{n}(x)=K_{n}\cdot\frac{1}{L_{n}}\cdot I_{n}(x) \tag{41}\]
where \(K_{n}\) and \(L_{n}\) are
\[K_{n}=\frac{k_{2}}{k_{1}}=\frac{\int 2^{-\omega_{n}(\gamma)}\,d\mu(\gamma)}{\int 2^{-\omega_{n}(\gamma_{+})}\,d\nu(\gamma_{+})} \tag{42}\] \[L_{n}=\int 2^{R_{n}(\gamma)}\,d(\nu^{n}(\gamma_{+})\otimes\tilde{\eta}(\gamma_{0})\otimes\nu(\gamma_{-})). \tag{43}\]
Only the integral \(I_{n}(x)\) depends on \(x\in X\) and we have
\[I_{n}(x)=\frac{\int B_{n}(x,\gamma)\cdot 2^{R_{n}(\gamma)}\cdot\,d(\nu^{n}( \gamma_{+})\otimes\tilde{\eta}(\gamma_{0})\otimes\nu(\gamma_{-}))}{\int B_{n} (x,\gamma_{+})\,d\nu^{n}(\gamma_{+})}. \tag{44}\]
Note that \(K_{n}\) and \(L_{n}\) do not depend on \(x\).
Let
\[B^{\prime}_{n}(x,\gamma)=\begin{cases}1&B_{n}(x,\gamma_{+})=0\\ B_{n}(x,\gamma)&\text{otherwise}.\end{cases}\]
Thus \(B^{\prime}_{n}(x,\gamma)\) is zero only if \(B_{n}(x,\gamma_{+})=1\) and there is some cluster \(C\) in \(\mathcal{C}(\gamma_{-})\) that sends a pair of edges in \(\gamma_{0}\) joining two clusters in \(\gamma_{+}\) that intersect \([0,n-1]\) at positions with opposite spins. We can now rewrite \(I_{n}(x)\) as
\[I_{n}(x)=\int B^{\prime}_{n}(x,\gamma)2^{R_{n}(\gamma)}\ d(\hat{\nu}_{n}(\gamma _{+})\otimes\tilde{\eta}(\gamma_{0})\otimes\nu(\gamma_{-})), \tag{45}\]
where \(\hat{\nu}_{n}(\gamma_{+})\) is the probability measure given by
\[\hat{\nu}_{n}=\frac{B_{n}(x,\gamma_{+})\cdot\,d\nu^{n}(\gamma_{+})}{\int B_{n }(x,\gamma_{+})\,d\nu^{n}(\gamma_{+})}. \tag{46}\]
In other words, it is the measure \(\nu^{n}(\gamma_{+})\) conditioned on \(\gamma_{+}\) and \(\left[x\right]_{n}\) being compatible.
We define the endpoint of the "last" irrelevant edge as
\[N=\max\{j\in V_{+}\mid ij\in\gamma_{0},\ i\in C\in\mathcal{C}(\gamma_{-}),\ D(C)\geq 2\}. \tag{47}\]
By Lemma 2, \(\mathsf{P}(N<\infty)=1\). Let \(A(x,\gamma_{-},\gamma_{0})\) indicate the event that no cluster \(C\in\mathcal{C}(\gamma_{-})\) sends two edges in \(\gamma_{0}\) to opposite spins of \(x\). We have
\[B^{\prime}_{n}(x,\gamma)=A(x,\gamma_{-},\gamma_{0})\quad\text{for all $n\geq N$.} \tag{48}\]
Moreover, by (12), we have for \(n\geq N\)
\[R_{n}(\gamma)=Q(\gamma_{-},\gamma_{0}). \tag{49}\]
We note that, on the event \(\{n\geq N\}\), the quantities \(B^{\prime}_{n}\) and \(R_{n}\) depend only on \((\gamma_{-},\gamma_{0})\) and no longer on \(n\).
We can now start to establish the convergence of the quantities (42), (43) and (44). From (49) it is clear that the integrals \(L_{n}\) converge
\[L_{n}\to\int 2^{Q(\gamma_{-},\gamma_{0})}\,d(\tilde{\eta}(\gamma_{0})\otimes\nu(\gamma_{-})),\]
as \(n\to\infty\). This is finite by Lemma 2.
It also follows from (48) and (49) that conditioning on \((\gamma_{-},\gamma_{0})\) gives
\[g_{n}(x):=\mathsf{E}(I_{n}(x)\mid\gamma_{-},\gamma_{0})=A(x,\gamma_{-},\gamma_{0})\cdot 2^{Q(\gamma_{-},\gamma_{0})}>0\]
on the event \(\{n\geq N\}\), and hence the differences \(g_{n}(x)-g_{m}(x)\) are eventually equal to \(0\). It follows from dominated convergence that
\[\|I_{n}(x)-I_{m}(x)\|_{\infty}\leq\|2^{Q}\|_{L^{1}}\cdot\mathsf{P}(N\geq\min(n,m)),\]
which goes to zero as \(n,m\to\infty\). Thus the functions \(\{I_{n}(x)\}\) constitute a Cauchy sequence with respect to the supremum norm; the limit \(I(x)=\lim I_{n}(x)\) is therefore a continuous function, and it is also clear that \(I(x)>0\) for all \(x\).
In order to establish (40), we must also show that the limit of \(K_{n}\) exists as a value bounded away from zero and infinity. That is, we want to show that
\[\lim_{n\to\infty}\log K_{n}=\log K \tag{50}\]
exists as a finite value. By the representation (24) of \(\mu\), we can write
\[\log K_{n}=\log\int 2^{-\omega_{n}(\gamma)}\,d\mu(\gamma)-\log\int 2^{-\omega_{n}(\gamma_{+})}\,d\nu(\gamma_{+})\] \[=\log\frac{\int 2^{\,\omega_{n}(\gamma_{+})-\omega_{n}(\gamma)+R(\gamma)}\cdot 2^{-\omega_{n}(\gamma_{+})}\,d(\nu(\gamma_{+})\otimes\tilde{\eta}(\gamma_{0})\otimes\nu(\gamma_{-}))}{\int 2^{-\omega_{n}(\gamma_{+})}\,d\nu(\gamma_{+})}\] \[=\log\int 2^{\,\omega_{n}(\gamma_{+})-\omega_{n}(\gamma)+R(\gamma)}\,d(\check{\nu}_{n}(\gamma_{+})\otimes\tilde{\eta}(\gamma_{0})\otimes\nu(\gamma_{-}))\]
where \(\check{\nu}_{n}(\gamma_{+})\) is the probability distribution \(2^{-\omega_{n}(\gamma_{+})}\ltimes\nu(\gamma_{+})\) (note this is a different normalisation from \(\hat{\nu}_{n}\) in (46)).
But each cluster \(C\) in \(\gamma_{-}\) can contribute at most \(\left(D(C)-1\right)_{+}\) to the difference \(\omega_{n}(\gamma_{+})-\omega_{n}(\gamma)\), since each irrelevant edge from \(C\) can join at most two clusters intersecting \([0,n-1]\). It follows that
\[\omega_{n}(\gamma_{+})-\omega_{n}(\gamma)\leq Q(\gamma_{-},\gamma_{0}) \tag{51}\]
Thus by (11), we have
\[\omega_{n}(\gamma_{+})-\omega_{n}(\gamma)+R(\gamma)\leq 2\cdot Q(\gamma_{-}, \gamma_{0}),\]
and, since \(\int 2^{2Q}\,d\tilde{\eta}(\gamma_{0})\,d\nu(\gamma_{-})<\infty\) by Lemma 2, the dominated convergence theorem implies (50).
|
2310.17487 | High Transmission in 120-degree Sharp Bends of Inversion-symmetric and
Inversion-asymmetric Photonic Crystal Waveguides | Bending loss is one of the serious problems for constructing nanophotonic
integrated circuits. Recently, many works reported that valley photonic
crystals (VPhCs) enable significantly high transmission via 120-degree sharp
bends. However, it is unclear whether the high bend-transmission results
directly from the valley-photonic effects, which are based on the breaking of
inversion symmetry. In this study, we conduct a series of comparative numerical
and experimental investigations of bend-transmission in various triangular PhCs
with and without inversion symmetry and reveal that the high bend-transmission
is solely determined by the domain-wall configuration and independent of the
existence of the inversion symmetry. Preliminary analysis of the polarization
distribution indicates that high bend-transmissions are closely related to the
appearance of local topological polarization singularities near the bending
section. Our work demonstrates that high transmission can be achieved in a much
wider family of PhC waveguides, which may provide novel designs for low-loss
nanophotonic integrated circuits with enhanced flexibility and a new
understanding of the nature of valley-photonics | Wei Dai, Taiki Yoda, Yuto Moritake, Masaaki Ono, Eiichi Kuramochi, Masaya Notomi | 2023-10-26T15:44:10Z | http://arxiv.org/abs/2310.17487v1 | High Transmission in 120-degree Sharp Bends of Inversion-symmetric and Inversion-asymmetric Photonic Crystal Waveguides
###### Abstract
Bending loss is one of the serious problems for constructing nanophotonic integrated circuits. Recently, many works reported that valley photonic crystals (VPhCs) enable significantly high transmission via 120-degree sharp bends. However, it is unclear whether the high bend-transmission results directly from the valley-photonic effects, which are based on the breaking of inversion symmetry. In this study, we conduct a series of comparative numerical and experimental investigations of bend-transmission in various triangular PhCs with and without inversion symmetry and reveal that the high bend-transmission is solely determined by the domain-wall configuration and independent of the existence of the inversion symmetry. Preliminary analysis of the polarization distribution indicates that high bend-transmissions are closely related to the appearance of local topological polarization singularities near the bending section. Our work demonstrates that high transmission can be achieved in a much wider family of PhC waveguides, which may provide novel designs for low-loss nanophotonic integrated circuits with enhanced flexibility and a new understanding of the nature of valley-photonics
## Introduction
Photonic crystal waveguides (PhCWGs) that support highly confined light modes have wide applications in telecommunication and data processing [1, 2, 3, 4]. Recently, valley photonic crystals (VPhCs), an optical implementation of the valley Hall effect [5, 6, 7], have offered large-scale, all-dielectric designs for PhCWGs. The heart of valley-photonic properties is the breaking of inversion symmetry, which gives rise to non-trivial Berry curvatures around the \(K\) and \(K^{\prime}\) points, leading to a distinct valley Chern number and topological bandgaps, and suggesting the possibility of suppressed backscattering [8, 9, 10]. When two VPhCs with different Chern numbers are connected by an interface, topological domain-wall modes appear within the bandgaps.
Bending loss is one of the most serious problems in constructing photonic integrated circuits employing nanophotonics [11, 12, 13, 14]. This is because sharp bends exhibit significantly large reflections when the bending radius is comparable to the wavelength of light. For example, a simple single-missing-hole line-defect waveguide (the so-called W1), one of the most widely used PhC waveguides, has large reflections at 120-degree bends unless sophisticatedly modified at the corners [11, 12, 13]. In contrast, many recent reports showed that various VPhC waveguides exhibit extraordinarily high transmission through 120-degree bends within a wide frequency range [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. This interesting property of VPhCs has attracted considerable attention. Since backscattering suppression is generally expected for edge or domain-wall modes in topological insulators, reflection-free transmission has been considered a topological feature of VPhCs. Naively, if one assumes that inter-valley scattering is prohibited, valley spins should be conserved, and thus the back reflection should be suppressed.
However, no unambiguous demonstration proves the direct relationship between high transmission in bends and the topological properties. In fact, we believe some ambiguities remain. (1) Theoretically, it is not apparent whether inter-valley scattering could be prohibited or valley spins could be conserved at bends. In the usual situation, valley spin can be easily flipped upon reflection. For example, Arregui et al. [36] numerically showed that the suppression of backscattering occurs only at ultraslow light modes in straight VPhCWGs with minimal disorders, and a recent experimental work supported their conclusion [37]. This minimal perturbation condition is hardly satisfied in the transmission in 120-degree bends. (2) Experimentally or numerically, it has not been directly proved that the high transmission is due to the valley-photonic effects. In some previous studies [26, 21], W1WGs are used as the reference to verify the high transmission in VPhCWGs. However, the domain-wall configurations of W1WGs and VPhCWGs are largely different. Besides the inversion symmetry in the bulk lattice, W1WGs also have larger
waveguide widths and their domain-wall configuration is not compatible with a honeycomb structure. Therefore, there remain possibilities that the high transmissions result from the difference in the domain-wall configuration instead of the topological effect.
In this study, to identify the origin of high transmission in 120-degree bent PhCWGs, we separate the effect of inversion symmetry and the domain-wall configuration by employing a specific model structure in which we can vary the interface condition and the inversion symmetry separately. We present theoretical treatments first, followed by extensive experimental work. Our theoretical and experimental studies reveal a surprising result: the high bend-transmittance appears irrespective of the existence of the inversion symmetry. It is shown that the high bend-transmission can be realized in a much wider range of structures than previously expected. Since the breaking of inversion symmetry is the origin of the valley-photonic effect, our finding indicates that the high bend-transmittance does not originate from the valley effect. Furthermore, we carefully investigate the appearance condition of the high bend-transmission for various interface structures, and finally give an intriguing insight into the origin of the high bend-transmission.
## Results
### Model structures
We propose to employ a systematic model representing various domain-wall configurations with and without the inversion symmetry. In a honeycomb lattice, restoring the inversion symmetry closes the bandgap and thus no domain-wall modes remain. Here, we adopt triangular-lattice air-hole PhCs as shown in the inset image of Fig.1(a,b). We manipulate the inversion symmetry by changing the hole shapes. The bulk lattices are either inversion-symmetric with circular air holes (IS-PhCs) or inversion-asymmetric with triangular air holes (IA-PhCs).
IA-PhCWGs based on a triangular lattice have already been proposed and high transmissions through 120-degree bends have been observed [19, 22, 29, 30, 32, 38]. Here, we employ slightly different IA-PhCWGs based on triangular air-hole lattices and focus on the first bandgap in TE-polarization (Fig.1(a,b)). This first bandgap is most widely used in PhC waveguides, including W1WGs. Since the Dirac degeneracy occurs between the second and third bands at \(K\) points, the first bandgap always exists even with the inversion symmetry. Therefore, one can easily alter the inversion symmetry without affecting the lattice configuration. The crucial point is that the valley-photonic effect still exists in this case. Recently, we investigated [38, 39] these triangular-hole PhCs without inversion symmetry (IA-PhCs, shown in Fig.1(b)) and found that they exhibit the valley-photonic effects in an essentially similar way to conventional VPhCs. We theoretically confirmed that these PhCs show large nontrivial Berry curvature around the \(K\) and \(K^{\prime}\) points of the first and second bands. Interestingly, the Berry curvature in these bands is even larger than that of the third band, which originates from the Dirac point in IS-PhCs. Moreover, the first and second bands have opposite distinctive angular momentum. Thus, these valleys naturally lead to various valley-photonic effects.
Starting from this bulk design, a wide variety of domain-walls can be constructed by simply shifting the lattice in the half-space. As shown in Fig.1(c), we divide the triangular lattice PhC with a 60-degree angle boundary into the blue and red regions. We can shift the two divided regions along the angle bisector (dotted red line) to introduce a line defect to the bulk lattice and thus construct a 120-degree bent waveguide. Waveguides constructed in this manner can be characterized by the shift direction and shift distance \(D\). Here, we define a shifting parameter \(S=\sqrt{3}D/a\). \(S\) is positive (negative) when the red region is shifted away from (towards) the blue region. When \(S=3\), the waveguide is a conventional W1WG with circular holes. When \(S=\pm 1\), the domain wall is a zigzag interface. When \(S=\pm 2\), the domain wall is a bearded interface. Note that when \(S\) is even, the waveguide is mirror-symmetric. When \(S\) is odd, the waveguide is glide-symmetric. When \(S\) is a non-integer, the interface has neither mirror symmetry nor glide symmetry.
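As a rough aid to visualising this construction, the following coordinate sketch (our own simplification: a straight interface, shifting the upper half-plane rigidly, whereas Fig.1(c) shifts a 60-degree wedge along the bend bisector) generates hole centres parameterised by \(S\); the hole shapes (circular vs. triangular), which carry the inversion-symmetry choice, are independent of \(S\):

```python
import numpy as np

A = 400e-9                  # lattice constant a [m]
S = -2                      # shifting parameter, S = sqrt(3) * D / a
D = S * A / np.sqrt(3)      # shift distance

def hole_centers(n_rows=10, n_cols=20):
    """Triangular-lattice hole centres; the half-plane y > 0 is shifted
    rigidly by D (away from the other half for S > 0, towards it for S < 0)."""
    pts = []
    for m in range(-n_rows, n_rows + 1):
        y = m * A * np.sqrt(3) / 2
        x_off = (m % 2) * A / 2      # alternate rows are offset by a/2
        shift = D if y > 0 else 0.0
        pts.extend((x_off + k * A, y + shift) for k in range(n_cols))
    return np.array(pts)

pts = hole_centers()         # feed into a plotting or layout tool
```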
We can also classify the domain-wall configuration of other types of VPhCs previously reported in a similar manner. For example, the zigzag interface in reference [19] corresponds to \(S=-1\), and the bearded interface in reference [22] corresponds to \(S=-2\). In addition, honeycomb lattice VPhCs can be classified in the same manner if we focus on the configuration of large holes (or pillars). When \(S=\pm 1\), the lattice becomes one of the sublattices of a zigzag interface honeycomb lattice [15, 16, 17, 18, 19, 20, 36]. When \(S=\pm 2\), the lattice becomes one of the sublattices of a bearded interface honeycomb lattice [22, 23, 24, 25, 26, 27]. The sign change of \(S\) corresponds to the sublattice exchange in a honeycomb lattice. Both the broken inversion symmetry and the domain-wall type (parameter \(S\)) could, in principle, affect the high transmission through sharp bends, and the comparison between VPhCWGs and W1WGs alone cannot distinguish between the two factors. Therefore, in this study, we compare the light transmission through 120-degree sharp bends between IS-PhCWGs and IA-PhCWGs having the same domain-wall type and then examine different domain-wall types. We mainly investigate the aforementioned five types of interfaces: \(S=-2,-1,1,2,3\).
### Numerical studies of Z-shaped waveguides
### (i) W1WG (\(S=3\), mirror-symmetric waveguides with inversion symmetry)
Here we numerically investigate Z-shaped waveguides in Si PhC slabs, consisting of a pair of 120-degree bends with a middle segment of length \(20a\). Firstly, we investigate the configuration of \(S=3\) with circular holes, corresponding to the W1WG, which is mirror-symmetric and possesses inversion symmetry. It has an even and an odd band in the PBG (Fig.2(a)), and here we focus on the even modes in the lower-frequency band. As shown in Fig.2(a) with a black arrow, there is a single-mode region in the frequency range \(a/\lambda=0.265-0.282\). Within this single-mode region, we can see a clear transmission contrast between the straight (Fig.2(b), black curve) and the bent (Fig.2(b), red curve) waveguides. There are strong ripples in the spectrum of the bent waveguide. We calculate the corresponding cavity length to be around \(21.5a\) from the free spectral range (FSR) of the ripples. This length is very close to the middle segment's length of \(20a\) in the Z-shaped waveguide. Therefore, we confirm that these are Fabry-Perot (F-P) ripples resulting from strong reflection at the two bends. In order to quantitatively analyze the transmittance, we calculate the average transmittance \(T_{av}\) and the F-P reflectivity (\(R_{FP}\)) in the single-mode region, discarding the ultraslow-light region (see the Method section for the definitions of \(T_{av}\) and \(R_{FP}\)). For the W1WG the frequency range is \(a/\lambda=0.269-0.278\). The calculated \(T_{av}\) and \(R_{FP}\) are 0.47 and 0.40.
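As a sketch of how such quantities can be extracted from a rippled spectrum (our own reconstruction; the authors' exact procedure is defined in their Method section), the ripple contrast of a lossless symmetric Fabry-Perot gives the mirror reflectivity, and the FSR in normalised frequency \(u=a/\lambda\) gives the cavity length:

```python
import numpy as np

def fp_reflectivity(t_max, t_min):
    # lossless symmetric Fabry-Perot: T_max / T_min = ((1 + R) / (1 - R))^2
    k = np.sqrt(t_max / t_min)
    return (k - 1) / (k + 1)

def cavity_length_in_a(fsr_u, n_g):
    # resonance spacing in u = a / lambda is delta_u = a / (2 * n_g * L)
    return 1.0 / (2 * n_g * fsr_u)

# hypothetical illustrative numbers, not taken from the paper
print(fp_reflectivity(0.80, 0.20))       # ~0.33
print(cavity_length_in_a(2.3e-3, 10.0))  # ~21.7 lattice constants
```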
As a typical example of the field distribution for the Z-shaped waveguide, \(H_{z}\) near a bend at \(a/\lambda=0.270\) is shown in Fig.2(c) where the transmittance is 0.39. The five-pointed star indicates the location of the light source, exciting a rightward
Figure 1: The band structures of triangular lattice Si-slab PhCs (a) with and (b) without inversion symmetry. Inset in (a) shows the IS-PhC with circular air holes in the silicon slab. The lattice constant is 400 nm. The radius of air holes is 102 nm. Inset in (b) shows the A-type and B-type IA-PhC with triangular holes. The lattice constant is 400 nm. The side length of air holes is 277 nm. The effective refractive index of silicon is 2.65. (c) conceptual illustration of the universal design of triangular lattice waveguides that are compatible with 120-degree sharp bends. This large picture shows a bulk triangular lattice (\(S=0\)) with inversion symmetry (circular holes). The bent interface forms a 60-degree angle. The red dotted line is the angle bisector. The smaller pictures show the five domain-wall configurations obtained by shifting the red region when the shifting parameter \(S\) is -2,-1,1,2, and 3.
propagating mode. The light intensity is considerably reduced at the output port, and there is significant reflected intensity behind the excitation source. Hereafter, we investigate other types of waveguides, focusing on their \(T_{av}\) and \(R_{FP}\).
### (ii) \(S=-2\), glide-symmetric waveguides
Here we investigate the \(S=-2\) IA-PhCWGs without inversion symmetry (inset of Fig.3(a)). As described before, this lattice and its domain-wall structure possess typical VPhC properties, such as non-trivial Berry curvature and angular momentum. In addition, due to the glide symmetry of this waveguide, two bands degenerate at the edge of the Brillouin zone (BZ), as shown in Fig.3(a). The modes have a mixed spatial parity at the waveguide's interface. The band above the degenerate frequency (upper band) has a broad single-mode region (\(a/\lambda=0.267-0.301\)). However, the band below the degenerate frequency (lower band) overlaps with the bulk modes, making it impossible to excite the lower band. Here we focus on the upper band only. The upper band exhibits very high transmission, as shown in Fig.3(b). \(T_{av}\) and \(R_{FP}\) in this Z-shaped waveguide are estimated to be 0.97 and 0.03. Figure 3(c) shows the \(H_{z}\) distribution at \(a/\lambda=0.270\), where the transmittance is 0.97. There is no indication of attenuation during the propagation. In addition, there is no apparent reflected intensity behind the excitation source, indicating very weak backscattering. These results show that the reflection at the bends is significantly small. This configuration (\(S=-2\)) corresponds to a bearded interface in a honeycomb lattice VPhC waveguide [23]. This result of high transmission is essentially similar to those reported in references [19, 23].
Next, we investigate \(S=-2\) waveguides with inversion symmetry by changing the air hole shape from triangular to circular. That is, we keep the same lattice configuration but restore the inversion symmetry. As shown in Fig.3(d), the \(S=-2\) IS-PhCWG has a wide single-mode region (\(a/\lambda=0.261-0.293\)) with a degeneracy point at the BZ edge (\(a/\lambda=0.270\)). Since the degeneracy point is located within the bandgap, both upper and lower bands have sufficient single-mode regions. Surprisingly, the Z-shaped waveguide also shows very high transmission (Fig.3(e)) with \(T_{av}\) of 0.94 in the upper band, even though the inversion symmetry is NOT broken. The \(R_{FP}\) is 0.06. This high transmittance is comparable to that of the upper band in the \(S=-2\) IA-PhCWG and other reported results in Z-shaped VPhCWGs [10, 15, 16, 17, 18, 19, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. Figure 3(f) shows the \(H_{z}\) distribution at \(a/\lambda=0.283\) (upper band), where the transmittance is 0.93. As for the \(S=-2\) IA-PhCWG, there is no indication of attenuation during the propagation. The present result implies an important consequence: because the inversion symmetry is not broken in this waveguide (\(S=-2\) IS-PhCWG), the observed high transmission may not be caused by the valley-photonic effect, which essentially requires broken inversion symmetry.
Interestingly, the transmittance of the lower band differs significantly from that of the upper band in \(S=-2\) IS-PhCWGs. The \(T_{av}\) is only 0.40, and the \(R_{FP}\) is 0.52, which is even larger than that of the W1WG. It should be noted that a similar transmission contrast between upper and lower modes has been reported for glide-symmetric honeycomb-lattice VPhC waveguides. Yoshimi et al. [25] have reported a distinctive transmission contrast in a bearded-interface glide-symmetric PhCWG with a honeycomb lattice (the \(S=2\) waveguide in the terminology of the present paper). Although high bend-transmission was recently suggested for the upper band of an \(S=-2\) triangular-lattice IS-PhCWG [40], it has not been reported that a similar high contrast between upper and lower modes exists in the triangular-lattice IS-PhCWG, which we believe is important for understanding the nature of the transmission through sharp bends. Since the upper/lower band transmission contrast has been observed in Z-shaped
Figure 2: Calculation results of the straight and Z-shaped \(S=3\) IS-PhCWGs (W1WGs). (a) The 2-dimensional band structure. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. (b) The transmittance spectra of the straight W1WG (black curve) and Z-shaped W1WG. The average transmittance is 0.47, and the F-P reflectivity is 0.40. (c) The out-of-plane magnetic field \(H_{z}\) of the Z-shaped W1WG at \(a/\lambda=0.270\). The transmittance is 0.39. The five-pointed star indicates the location of the wave source. The arrow points toward the propagation direction.
glide-symmetric PhCWGs irrespective of the inversion symmetry, we speculate that this phenomenon originates from the domain-wall configuration instead of the symmetry-breaking in the bulk lattice.
### (iii) Other \(S\)-value waveguides and summary of numerical studies
Following the same method as for the W1WGs and \(S=-2\) PhCWGs, we have also numerically investigated \(S=-1\) IA-PhCWGs, \(S=-1\) IS-PhCWGs, \(S=1\) IA-PhCWGs, \(S=1\) IS-PhCWGs, \(S=2\) IA-PhCWGs, \(S=2\) IS-PhCWGs, and \(S=3\) IA-PhCWGs. For these waveguide types, we only report brief results here and leave the detailed discussion to the supplementary information (2).
Table 1 summarizes \(T_{av}\) and \(R_{FP}\) for each \(S\) with and without inversion symmetry. Bands with high bend-transmittance are marked in bold, and bands with low bend-transmittance are unmarked. When \(S\) is even, the results of both the upper and the lower bands are shown.
This table shows that \(S=1\), \(S=-1\), the upper band of \(S=-2\), and the lower band of \(S=2\) have high \(T_{av}\) and low \(R_{FP}\). Most importantly, these characteristics do not depend on the existence of the inversion symmetry. It is worth noting that the large transmission contrast between the upper and lower bands of glide-symmetric waveguides, which was previously regarded as proof of the "topological property" of one of the bands, is also seen for inversion-symmetric waveguides. The last column in Table 1 shows the classification of the reported results for VPhCs by the \(S\) parameter. All previous results of VPhCWG studies can be
Figure 3: Calculation results of the straight and Z-shaped \(S=-2\) PhCWGs. (a) The PBS of \(S=-2\) IA-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. (b) The transmittance spectra of the straight \(S=-2\) IA-PhCWG (black curve) and Z-shaped \(S=-2\) IA-PhCWG (red curve). The average transmittance is 0.97. The F-P reflectivity is 0.03. (c) The out-of-plane magnetic field \(H_{z}\) of the Z-shaped \(S=-2\) IA-PhCWG at \(a/\lambda=0.280\), with a transmittance of 0.97. The Hz has mixed spatial parity and meanders along the glide-symmetric interface. (d) The PBS of \(S=-2\) IS-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. (e) The transmittance spectra of the straight \(S=-2\) IS-PhCWG (black curve) and Z-shaped \(S=-2\) IS-PhCWG (red curve). The average transmittance of the upper band is 0.94. The F-P reflectivity of the upper band is 0.06. The average transmittance of the lower band is 0.40. The F-P reflectivity of the lower band is 0.52. (f) The out-of-plane magnetic field \(H_{z}\) of the Z-shaped \(S=-2\) IS-PhCWG at \(a/\lambda=0.283\), with transmittance 0.93.
classified in the same table and coincide with our results. This table strongly suggests that the high-transmission behavior does not originate from the broken inversion symmetry but possibly originates from the domain-wall configuration. Before drawing this conclusion, we check other possible causes. In Supplementary Table 1, we examine the parity of the modes, the group refractive index, and the sign of the group velocity for each case. As shown in the table, there is no correlation between these variables and the observed distinctive difference in the bend-transmittance; thus, these variables cannot explain the present phenomenon. Consequently, our results strongly suggest that the observed high bend-transmittance and low reflectivity are attributed to the domain-wall configuration.
### Experimental studies of Z-shaped waveguides
In this part, we experimentally examine the transmission properties of bent PhCWGs. We implement the PhC structures in Si slabs fabricated by a highly accurate lithography and etching process; the fabrication details are described in Methods. We couple light from a wavelength-tunable laser with 5 dBm power into the waveguides and measure the transmission spectra from 1355 to 1640 nm.
Figure 4(a) shows the optical microscope image of the fabricated Z-shaped waveguide. The TE-polarized light is guided through a silicon taper and is coupled to the PhCWG via a silicon nanowire. The three segments of the Z-shaped waveguides have lengths of \(100a\), \(30a\), and \(100a\), respectively. Figure 4(b) shows the bending part of an \(S=-2\) IS-PhCWG and Fig.4(c) shows the straight part of an \(S=1\) IA-PhCWG. The length of the straight waveguides is \(230a\). In a straight waveguide, there are reflections between the PhCWG and the silicon waveguide, making the whole waveguide an F-P cavity. F-P resonances may also occur inside both the \(100a\) segment and the \(30a\) segment of a Z-shaped waveguide. As a typical example in our measurement, for a waveguide with \(a=400\) nm and a waveguide mode with a group velocity of \(0.1c\) at a wavelength of 1500 nm, the wavelength FSR is 1.2 nm, 2.8 nm, and 9.4 nm in the \(230a\), \(100a\), and \(30a\) cavities, respectively.
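The quoted FSR values follow from the standard Fabry-Perot relation \(\Delta\lambda=\lambda^{2}/(2n_{g}L)\). The following minimal sketch (our own illustration; the function and variable names are not from the original analysis) reproduces them:

```python
import numpy as np

def fp_fsr(wavelength, group_index, cavity_length):
    """Fabry-Perot free spectral range: FSR = lambda^2 / (2 * n_g * L)."""
    return wavelength**2 / (2.0 * group_index * cavity_length)

a = 400e-9      # lattice constant [m]
lam = 1500e-9   # wavelength [m]
ng = 10.0       # group index corresponding to a group velocity of 0.1c

for n_cells in (230, 100, 30):
    fsr_nm = fp_fsr(lam, ng, n_cells * a) * 1e9
    print(f"{n_cells}a cavity: FSR = {fsr_nm:.1f} nm")
# prints 1.2 nm, 2.8 nm, and 9.4 nm, matching the values quoted above
```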
Here we show the measured transmitted intensities. We begin with the \(S=3\) IS-PhCWGs (W1WGs). The lattice constant \(a\) is 424 nm and the radius of the air holes \(r\) is 97 nm. As shown in Fig.5(a), the \(S=3\) IS-PhCWGs have single waveguide modes between 1515-1574 nm (yellow region). We evaluate \(T_{av}\) in the single-mode region; for the \(S=3\) IS-PhCWGs, \(T_{av}=0.31\). Note that this Z-shaped waveguide shows ripples in the spectrum with an FSR of 3-10 nm. The observed FSR seems to roughly correspond to the F-P resonance of the \(30a\) segment, but the ripple is complicated, especially in the longer-wavelength region. Figure 5(b) shows the spectra of the \(S=3\) IA-PhCWGs (\(a=416\) nm), which have the same domain-wall configuration as the W1WG but triangular air holes that break the inversion symmetry. The side length of the triangular air holes \(s\) is 232 nm. The \(S=3\) IA-PhCWGs have single modes between 1463-1532 nm. The evaluated \(T_{av}\) is also low, 0.48. These results agree with our theoretical simulation. Similar to the \(S=3\) IS-PhCWGs, there are strong ripples in the spectrum; however, the ripples are rather complicated and hard to analyze. This trend is seen in all samples shown below. We regard that these complicated spectra result from the complex
| Waveguide | Band | Group | \(T_{av}\) (with inversion) | \(R_{FP}\) (with inversion) | \(T_{av}\) (without inversion) | \(R_{FP}\) (without inversion) | Previous reports |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(S=3\) | - | 1 | 0.47 | 0.40 | 0.42 | 0.56 | No report |
| \(S=2\) | upper | 1 | 0.13 | 1.00 | 0.06 | 1.00 | Refs [24-26] |
| \(S=2\) | lower | 2 | **1.00** | 0.06 | **1.00** | 0.06 | Refs [24-27] |
| \(S=1\) | - | 2 | **1.00** | 0.14 | **0.94** | 0.04 | Refs [15-18, 36] |
| \(S=-1\) | - | 3 | **1.00** | 0.17 | **1.00** | 0.03 | Refs [19-21] |
| \(S=-2\) | upper | 3 | **0.94** | 0.06 | **0.97** | 0.03 | Refs [22, 23] |
| \(S=-2\) | lower | 4 | 0.40 | 0.52 | no available mode | - | No report |

Table 1: The simulation results for the average transmittance and F-P reflectivity of the Z-shaped waveguides. The rightmost column shows the previous reports. Bold \(T_{av}\) entries indicate high bend-transmittance; the remaining rows correspond to low bend-transmittance. The group numbers will be explained in the discussion section.
multiple reflections at section boundaries, which exist in the fabricated devices but not in the simulated structures. Hence, we could not evaluate \(R_{FP}\) from the measured spectra.
Hereafter, we investigate the other domain-wall configurations one by one and evaluate \(T_{av}\). Figure 5(c) shows the \(S=2\) IS-PhCWGs (\(a=441\) nm, \(r=92\) nm). As discussed in the simulation section, the \(S=2\) PhCWGs have glide-symmetric interfaces and two touching bands in the photonic bandgap. However, the single-mode region (1410-1580 nm) only exists in the (frequency-wise) upper band. The upper band has an average bend-transmittance \(T_{av}\) of 0.19, lower than that of the W1WG. Figure 5(d) shows the spectra of the \(S=2\) IA-PhCWGs (\(a=418\) nm, \(s=342\) nm), which correspond to a bearded interface in honeycomb-lattice VPhCWGs [24-36]. Like its inversion-symmetric counterpart, the \(S=2\) IA-PhCWG has single modes (1405-1530 nm) in the upper band. The \(T_{av}\) is 0.15. The extremely low transmittance in the upper band of the \(S=2\) PhCWGs agrees with the previous reports [24, 25, 26, 27] as well as our simulation results.
Figure 5(e) shows the result for the \(S=1\) IS-PhCWGs (\(a=427\) nm, \(r=120\) nm). The overall transmission in the Z-shaped \(S=1\) IS-PhCWGs (red) is comparable to that of the straight waveguide; \(T_{av}\) is as high as 0.85 in the single-mode range 1437-1504 nm. The \(S=1\) IA-PhCWGs (\(a=441\) nm, \(s=314\) nm) correspond to a zigzag-interface VPhCWG; the previous reports are all based on the honeycomb lattice [15, 16, 17, 18, 41]. Similar to their IS- counterparts, the Z-shaped \(S=1\) IA-PhCWGs have transmission comparable to that of the straight waveguides (Fig.5(f)). \(T_{av}\) is 0.86 in the single-mode region 1464-1553 nm.
Figure 5(g) shows the measured spectra of the \(S=-1\) IS-PhCWG (\(a=419\) nm, \(r=92\) nm), which has a narrow single-mode region from 1420 to 1448 nm. The \(T_{av}\) is 0.74. The amplitude of the F-P resonances in the Z-shaped \(S=-1\) IS-PhCWG's spectrum is as small as that of the straight waveguide. The \(S=-1\) IA-PhCWG (Fig.5(h), \(a=442\) nm, \(s=268\) nm) corresponds to another type of zigzag VPhCWG [19]. It also has a narrow single-mode region, from 1355 to 1404 nm. The \(T_{av}\) is 0.83.
Finally, we investigate the \(S=-2\) PhCWGs with glide-symmetric interfaces. Figure 5(i) shows the result for the \(S=-2\) IS-PhCWGs (\(a=418\) nm, \(r=115\) nm). Both the straight and the Z-shaped waveguides have a transmission gap near 1500 nm. We speculate that fabrication errors in the air-hole size break the glide symmetry, opening a bandgap and creating flat-band regions near the edges of the upper/lower bands. The upper band (green, 1379-1486 nm) shows higher transmission in the Z-shaped waveguide than in the straight waveguide; therefore we set the \(T_{av}\) to 1.00. The lower band (yellow,
Figure 4: (a) Optical microscope image of the fabricated Z-shaped \(S=3\) IS-PhCWG. The three segments of the waveguide have lengths \(100a\), \(30a\), and \(100a\). (b) SEM image of the bending part of an \(S=-2\) IS-PhCWG. (c) SEM image of the straight part of an \(S=1\) IA-PhCWG.
1516-1610 nm) shows low transmittance in the Z-shaped waveguide; the \(T_{av}\) is 0.38. Figure 5(j) shows the spectra of the \(S=-2\) IA-PhCWGs (\(a=469\) nm, \(s=287\) nm). In the numerical calculations, only the upper band of the \(S=-2\) PhCWGs has single modes; actually, the lower band also lies within the bandgap in the 3-dimensional device. Here only the lower band can be observed, possibly owing to fabrication errors that separate the two bands with a large bandgap. Like that of the \(S=-2\) IS-PhCWGs, the lower band has low transmittance at the Z-shaped bends. The \(T_{av}\) is 0.41.
To summarize, the experimental results clearly show a significant difference in the transmission properties among different domain-wall configurations. The waveguide modes in \(S=1\), \(S=-1\) PhCWGs, and the upper band of \(S=-2\) PhCWG have high bend-transmittance, which agrees well with the numerical calculation results. Our experimental results support our proposal that the bend-transmittance is dominantly determined by the domain-wall configuration.
Figure 5: Measured transmission spectra of (a,b) \(S=3\) PhCWG, (c,d) \(S=2\) PhCWG, (e,f) \(S=1\) PhCWG, (g,h) \(S=-1\) PhCWG and (i,j) \(S=-2\) PhCWG. Black lines indicate the straight waveguides and red lines indicate the Z-shaped waveguides with two bends. Yellow and green boxes show the single-mode regions. \(T_{av}\) shows the relative average transmittance of the Z-shaped waveguides. An SEM image of the corresponding waveguide is shown on the right side of each plot.
## Discussion
We have numerically and experimentally demonstrated high bend-transmission through 120-degree sharp bends for PhCWGs with various \(S\), regardless of the existence of the inversion symmetry. In addition, the high transmission appears regardless of the mode parity and the group velocity. To observe the correlation between \(S\) and the bend-transmittance more intuitively, we plot \(T_{av}\) against \(R_{FP}\) for each simulation result in Fig.6(a). If \(S\) changes continuously, the waveguide bands accordingly evolve in the bandgap until they disappear into the bulk-mode regions. Different bands can be traced to one another in this process before disappearing. We classify such bands into the same group as \(S\) changes from 3 to -2 (see supplementary (5) for detailed explanations). Thus we can classify the investigated waveguide bands into four different groups, each labeled with a different color in Fig.6. The even mode of the W1WG and the upper band of the \(S=2\) glide-symmetric PhCWG belong to group 1 (green). The lower band of the \(S=2\) glide-symmetric PhCWG and the even band of the \(S=1\) PhCWG belong to group 2 (blue). The odd mode of the \(S=-1\) PhCWG and the upper band of the \(S=-2\) glide-symmetric PhCWG belong to group 3 (orange). Finally, the lower band of the \(S=-2\) glide-symmetric PhCWG belongs to group 4 (gray).
Now we focus on the transmission properties of each band group. As shown in Fig.6(a), group 2 (blue) and group 3 (orange) have high bend-transmittance and low F-P reflectivity, whereas group 1 (green) and group 4 (gray) have low bend-transmittance and high F-P reflectivity. This shows that the mode classification corresponds very well with the transmission property of the Z-shaped waveguide. In addition, each group includes waveguide bands of both IS-PhCWGs and IA-PhCWGs, meaning that the existence of the inversion symmetry has no significant influence.
It is worth noting that when continuously changing \(S\), groups 1 and 2 are connected at the BZ edge degeneracy point of the \(S=2\) PhCWG, and groups 3 and 4 are connected at the BZ-edge degeneracy point of the \(S=-2\) PhCWG. Interestingly, a degeneracy point of glide-symmetric waveguides connects two different band groups having high and low transmissions.
Next, we summarize the \(T_{av}\) obtained in simulation and measurement in Fig.6(b). We plot \(T_{av}\) against the shifting parameters. Solid markers show the simulation results and hollow markers show the experiment results. The experiment results agree well with the simulation results, showing that group 1 and group 4 have low bend-transmittance, and group 2 and group 3 have high bend-transmittance.
So far, we have clarified that the high bend-transmission is not influenced by the bulk property (in particular, the existence of the inversion symmetry) and is mostly determined by the domain-wall configuration. This result contradicts the conventional understanding of high bend-transmission as a valley-photonic property, in which the transmission is determined by the bulk topological property. More importantly, our result shows that high bend-transmission can occur for a much wider range of PhCWGs, not restricted to inversion-asymmetric PhCs. This finding is promising for applications.
We further discuss the mechanism behind the mode classification. It has been clarified that one can find topologically-stable polarization singularities, such as the circular polarization singularities (C-points or CPs) and vortex singularities (V-points or
Figure 6: (a) Average transmittance against F-P reflectivity for simulation results. The circles indicate the results of waveguide modes in IS-PhCWGs, and the triangles indicate that of IA-PhCWGs. Colors indicate the mode classification. Black texts show the shifting parameters of each waveguide, where U indicates the upper band, and L indicates the lower band for a glide-symmetric interface. (b) Average transmittance against shifting parameters for simulation and experiment results. Solid markers indicate the simulation results. Hollow markers indicate the experiment results.
VPs) [42, 43, 44, 45] of the electric/magnetic fields. Unidirectional excitation of waveguide modes can be realized by putting the circularly polarized wave source at the location of the CPs [24, 40, 43, 44]. CPs and VPs exist pervasively in many types of PhCWGs regardless of the band topology or the particular symmetry of the structure. Here we speculate that the transmittances via sharp bends are related to the spatial distribution of CPs and VPs in the PhCWGs.
When the light is circularly polarized at the mirror-symmetric line of a 120-degree bend (broken lines in Fig.7(a)), the polarization state would be preserved across the bend, i.e., changing the propagation direction by 120 degrees does not cause a polarization mismatch. Otherwise, if the light is not circularly polarized along this line, the polarization state is not preserved after the bend, and a polarization mismatch occurs when constructing bends, which leads to a large modification of the field profile and may generate unwanted local resonances and/or high reflection (Fig.7(b)). Thus, circular polarization may lead to a smooth connection and thus small reflection within a broad frequency range. Here we examine this speculation by analyzing the polarization profile of the simulation results and investigating the distribution of CPs. In Fig.7(c), we plot the CP distributions in unit cells of the \(S=-2\) IS-PhCWG at several frequencies. To identify the location of CPs, we plot the zero-value isolines of the Stokes parameters [45, 46]. The crossing nodes of the \(S_{1}=0\) (red) and \(S_{2}=0\) (green) lines indicate C-points (CPs). The crossing nodes of the \(S_{1}=0\), \(S_{2}=0\), and \(S_{3}=0\) (blue) lines indicate V-points (VPs).
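To make the procedure concrete, the following is a minimal sketch (our own; one common sign convention for \(S_{3}\), and the field arrays, tolerance, and brightness threshold are illustrative assumptions) of how the Stokes parameters can be computed from exported in-plane field components and how candidate CPs and VPs can be flagged:

```python
import numpy as np

def stokes(Ex, Ey):
    """Stokes parameters of the in-plane field (complex phasor arrays)."""
    S0 = np.abs(Ex)**2 + np.abs(Ey)**2
    S1 = np.abs(Ex)**2 - np.abs(Ey)**2
    S2 = 2.0 * np.real(Ex * np.conj(Ey))
    S3 = -2.0 * np.imag(Ex * np.conj(Ey))  # sign convention varies by reference
    return S0, S1, S2, S3

def polarization_singularities(Ex, Ey, tol=1e-2, bright=0.5):
    """Masks of candidate C-points (S1 = S2 = 0, S3 != 0) and V-points
    (S1 = S2 = S3 = 0); 'bright' CPs additionally satisfy S0 > bright * max(S0)."""
    S0, S1, S2, S3 = stokes(Ex, Ey)
    ref = S0.max()
    z1, z2, z3 = (np.abs(S) < tol * ref for S in (S1, S2, S3))
    cpoints = z1 & z2 & ~z3
    vpoints = z1 & z2 & z3
    bright_cpoints = cpoints & (S0 > bright * ref)
    return cpoints, vpoints, bright_cpoints
```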
The upper band of \(S=-2\) IS-PhCWG has high bend-transmittance, and there are CPs (black boxes) near the broken lines. The calculated degree of polarization (\(S_{3}/S_{0}\)) is over 0.99 at each CP. The normalized electric field amplitudes (\(I/I_{max}\)) are over 0.5 at each CP, i.e., these are bright CPs. These CPs are located inside the air holes. As we change the frequency, we observe that these CPs gradually move away from the hole centers and disappear exactly at the degeneracy point. Beyond the degeneracy point, the lower band appears, which has low bend-transmittance. In this regime, as shown in the fourth and fifth columns in Fig.7(c), there are no bright CPs near the broken lines. There are only dark CPs in the silicon area where \(I/I_{max}\) is lower than 0.1. In addition, VPs (gray boxes) appear near the broken lines. The sudden disappearance of bright air-hole CPs is consistent with the abrupt change in bend-transmittance across the degeneracy point.
Figure 7: Schematic illustration of (a) polarization singularities causing high transmittance and (b) linear polarization causing low transmittance in the bend of an \(S=-2\) IS-PhCWG. The red and blue arrows represent the directions of polarization. Black dotted lines indicate the connection interface of the 120-degree bend. (c) The simulation results of the location of polarization singularities. Red, green, and blue lines show the zero-values of \(S_{1}\),\(S_{2}\), and \(S_{3}\), respectively. Black boxes show the location of CPs. Gray boxes show the location of VPs. Color maps show the amplitude of electric fields in the waveguide. Black dotted lines show the connection interfaces of a 120-degree bend. The normalized frequencies, the normalized amplitudes of the electric field, and the bend-transmittances are shown below each plot.
We have also investigated other domain-wall types of IS- and IA-PhCWGs (supplementary information (6)). We have confirmed the existence of bright air-hole CPs in most of the high bend-transmission bands and their disappearance in the low bend-transmission bands. For the glide-symmetric \(S=2\) PhCWGs, we have also observed that the CPs located inside the air holes disappear around the position where the bend-transmission abruptly decreases. These results suggest that there is some correlation between the high bend-transmission and the existence of CPs near the mirror-symmetric line. Another important point is that, for all the investigated domain walls, the CP distributions in the IA- and IS-PhCWGs show no significant difference, meaning that the inversion symmetry does not alter the CP properties. We admit that these arguments are still speculative because it is difficult to estimate the transmittance quantitatively from the distribution and brightness of the CPs. We leave detailed investigations in this direction for future work.
As a final remark, we would like to address the influence of breaking inversion symmetry on bend-transmissions. Notably, we have observed a marginal enhancement in bend-transmission for certain IA-PhCWGs when compared to IS-PhCWGs, as evidenced by both numerical calculations and experimental data. It is important to note that this modest variation should be distinguished from the primary transmission contrast discussed in this study. Nevertheless, there exists the possibility that this slight improvement can be attributed to the valley-photonic effect. However, it is currently beyond the scope of this study to conduct a quantitative analysis of this subtle difference with our waveguide design, and we leave it for future works. Since the recent works [37, 38] indicate that the suppression of the backscattering may occur in a slow light region with a small disorder, one needs to analyze this issue in sharp bends in a meticulous manner.
## Conclusion
In summary, we have investigated the bend-transmission in a series of triangular-lattice PhCWGs compatible with 120-degree sharp bends. We systematically investigated different domain-wall configurations by adjusting \(S\) for waveguides with and without the inversion symmetry. Our numerical and experimental results demonstrate that significantly high bend-transmission can be achieved for certain domain-wall types, including typical VPhCWGs. Surprisingly, the presence of the inversion symmetry does not affect the emergence of high bend-transmission, which contradicts the previous understanding of the VPhCWGs. Our findings provide new possibilities for achieving uniquely high bend-transmission in a broader range of PhCs, not restricted to VPhCWGs. Since bending loss is one of the serious issues for nanophotonic integrated circuits, this work carries significant implications for constructing flexible low-loss nanophotonic circuits. As an empirical explanation, we propose a mode classification that links the high bend-transmission to specific groups of waveguide modes. Regarding the origin of the high bend-transmission, a preliminary study suggests that the abrupt change in bend-transmission is accompanied by the emergence or disappearance of topologically-protected CPs near the bending interface. Therefore, we speculate that the high bend-transmission phenomenon is related to the existence of CPs at the interface, whose behavior is mostly determined by the domain lattice configuration and is minimally influenced by the presence of the inversion symmetry. It remains possible that the observed slight difference in the bend-transmission between the PhCWGs with and without the inversion symmetry can be attributed to the suppression of backscattering due to the valley-photonic effect. However, an unambiguous conclusion requires more detailed and deliberate work. Our present work may pave the way toward novel designs of nano-waveguides for low-loss nanophotonic integrated circuits and shed new light on the nature of valley-photonic properties.
## Method
### Simulations
We implement our waveguide designs on the SOI platform, using the transverse-electric (TE) modes confined in the PhC slab. The thickness of the slab is 220 nm. The lattice constant is 400 nm. The radii of the circular air holes are 102 nm, and the side length of the triangular air holes is 277 nm; the areas of the circular and triangular air holes are approximately the same. We conduct simulations based on the finite element method using commercial software (COMSOL). We first calculate the three-dimensional (3D) photonic band structure (PBS) and then approximate the 3D PBS with two-dimensional (2D) models. In the 3D calculation, the refractive index of silicon is set to 3.48. In the 2D calculation, the effective refractive index of silicon is set to 2.65 to keep the photonic bandgap (PBG) within approximately the same wavelength range as the 3D results. As shown in Fig.1, we confirmed that both IS- and IA-PhCs have a broad PBG of over 250 nm.
By connecting the bulk PhCs with an arbitrary interface, we can construct PhCWGs that support interface modes. The broken inversion symmetry that causes the coupling between the chirality of modes and valley DoF is the foundation of the valley-photonic explanation of high transmission in sharp bends. This explanation will remain valid if we observe low transmission in IS-PhCWGs and high transmission in IA-PhCWGs. Otherwise, if we observe relatively high transmission in some IS-PhCWGs or relatively low transmission in some IA-PhCWGs, we should consider other factors affecting the transmission other than inversion symmetry. For each domain-wall type in Fig.1(c), we construct the IA-PhCWGs and the IS-PhCWGs. We connect domains constructed from patterns A and B to construct the IA-PhCWGs with broken inversion
symmetry. In supplementary information (1), we have numerically confirmed that the A-B and B-A type waveguides have no essential difference in their transmission properties. Here we set most of the IA-PhCWGs to the A-B type interface for convenience; the \(S=-1\) IA-PhCWGs are exceptions and have a B-A type interface because the air holes would overlap with each other at an A-B type interface. For each type of waveguide design, we calculate the light transmittance through a straight waveguide and a Z-shaped waveguide of the same length. Details of the settings of the wave source are described in the supplementary information (1).
For each waveguide band, we calculate the average transmittance and estimate the reflectivity at the bends from the amplitude of the F-P ripples. Given the bend reflectivity \(R\), the transmittance through an F-P cavity is \(T=\frac{1}{1+F\sin^{2}(\delta/2)}\), where \(\delta\) is the round-trip phase and \(F=\frac{4R}{(1-R)^{2}}\) is the finesse coefficient. Therefore, we can derive the reflectivity \(R\) from the transmittance spectra. We call the calculated \(R\) the F-P reflectivity and use this value, in addition to the average transmittance, to evaluate the bend-transmission.
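Assuming the ripples follow the Airy function above, the contrast \(m=T_{max}/T_{min}=1+F\) gives \(R=(\sqrt{m}-1)/(\sqrt{m}+1)\); a minimal sketch of this extraction (our own illustration, not the original post-processing code) is:

```python
import numpy as np

def fp_reflectivity(t_max, t_min):
    """Bend reflectivity from the F-P ripple contrast.

    T = 1 / (1 + F sin^2(delta/2)) oscillates between T_max and
    T_min = T_max / (1 + F), so m = T_max / T_min = 1 + F and, with
    F = 4R / (1 - R)^2, R = (sqrt(m) - 1) / (sqrt(m) + 1).
    """
    m = t_max / t_min
    return (np.sqrt(m) - 1.0) / (np.sqrt(m) + 1.0)

# e.g. ripples oscillating between 1.00 and 0.60 would imply R ~ 0.13
print(f"{fp_reflectivity(1.00, 0.60):.2f}")
```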
Undesirable mode conversions may occur if other waveguide modes or bulk modes are near the edge of the single-mode region. In addition, we disregard the ultraslow-light region near the mode edge, where large reflection makes the analysis difficult. Thus, the actual frequency range in which the transmittance can be accurately evaluated is slightly narrower than that calculated from the PBS. In our calculation, the frequency range over which the average transmittance and F-P reflectivity are evaluated is set to be narrower than the single-mode region of the waveguide band by 6 THz (\(\Delta(a/\lambda)=a\Delta\nu/c\approx0.008\) when the lattice constant is 400 nm).
### Fabrication and experiments
We implement our waveguide design in 220 nm-thick silicon slabs on 3000 nm-thick, air-bridge-structured SiO\({}_{2}\) under-claddings. The PhCWG patterns are fabricated by electron-beam lithography and a dry-etching technique. The air bridge is formed by removing the sacrificial SiO\({}_{2}\) layer with hydrofluoric acid.
To measure the transmittance spectrum, light from a wavelength-tunable laser is launched into an input silicon waveguide with a width of \(8\,\mu m\). The laser output power is 5 dBm. The silicon waveguide is connected through an appropriate taper to a silicon nanowire, which is 400-700 nm in width; the silicon nanowire is straight in the straight-PhCWG samples and is \(12.5\,\mu m\) in length. Light is coupled to the PhC region via the silicon nanowire. The transmitted light is collected from an output silicon waveguide of the same design. Due to fabrication errors, the transmitting wavelength ranges of most waveguides deviate from those predicted by the 3-dimensional band calculation. Therefore, we determine the single-mode region directly from the transmission spectra. Considering that multi-modes can have relatively high transmission through a straight waveguide, we determine the cut-off wavelengths of the single modes using the Z-shaped waveguide's spectra. Supposing the peak transmitted intensity is \(I_{max}\) at wavelength \(\lambda_{max}\), we define the single-mode range as \((\lambda_{1},\lambda_{2})\), where \(\lambda_{1}=\max(\{\lambda|\lambda<\lambda_{max},\,I(\lambda)<0.1I_{max}\})\) and \(\lambda_{2}=\min(\{\lambda|\lambda>\lambda_{max},\,I(\lambda)<0.1I_{max}\})\). For waveguide bands that have extremely low transmittance via 120-degree bends, like the upper band of the \(S=2\) PhCWGs, we calculate \((\lambda_{1},\lambda_{2})\) from the straight waveguides' spectra in the same manner.
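A direct transcription of this definition (a minimal sketch under the assumption that the spectrum is sampled on an ascending wavelength grid; names are ours) could read:

```python
import numpy as np

def single_mode_range(wavelengths, intensity):
    """(lambda_1, lambda_2): the cut-off wavelengths nearest to the peak
    at which the transmitted intensity drops below 0.1 * I_max."""
    i_peak = int(np.argmax(intensity))
    low = intensity < 0.1 * intensity[i_peak]
    below = np.where(low[:i_peak])[0]            # low points on the short side
    above = np.where(low[i_peak:])[0] + i_peak   # low points on the long side
    lam1 = wavelengths[below[-1]] if below.size else wavelengths[0]
    lam2 = wavelengths[above[0]] if above.size else wavelengths[-1]
    return lam1, lam2
```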
To eliminate the influence of insertion loss and coupling loss, we use the measured intensities of the straight PhCWGs as the reference to calculate the average transmittance of the bent PhCWGs. We convert the transmitted intensities of the straight and bent waveguides into the linear scale (in milliwatts) and calculate their respective average intensities in the single-mode region. The average transmittance of the bent waveguide is then the ratio of the bent waveguide's average intensity to the straight waveguide's average intensity. We could instead calculate the relative transmittance of the bent waveguides before taking the average; however, due to fluctuations in the spectra, the bent waveguides' intensities can exceed the straight waveguides' at some wavelengths, which is amplified on the linear scale and introduces unnecessary errors. Most of the measured transmission spectra of the Z-shaped waveguides have complicated resonance ripples in addition to the simple pattern of the \(30a\)-cavity F-P resonance. Therefore, it is difficult to calculate the F-P reflectivity from the measured data in the same manner as in the numerical studies.
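The averaging order described above (convert to linear scale, average each spectrum, then take the ratio) can be written compactly; again, this is our own illustrative sketch with placeholder names:

```python
import numpy as np

def average_transmittance(bent_dbm, straight_dbm):
    """Average bend transmittance within the single-mode region:
    convert dBm to mW, average each spectrum, then take the ratio
    (rather than averaging a point-wise ratio)."""
    bent_mw = 10.0 ** (np.asarray(bent_dbm) / 10.0)
    straight_mw = 10.0 ** (np.asarray(straight_dbm) / 10.0)
    return bent_mw.mean() / straight_mw.mean()
```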
**Data availability.** The data which support the figures and other findings within this paper are available from the corresponding authors upon request.
## Acknowledgement
The authors would like to thank Masato Takiguchi for his help in the experimental setup, and Toshiaki Tamamura for his help in the fabrication process. This work was supported by the Japan Society for the Promotion of Science (Grant number JP20H05641) and the Japan Science and Technology Agency (JST Spring, Grant Number JPMJSP2106).
## Author contributions
M.N. conceived the ideas. M.N. and Y.M. supervised the project. M.N. and T.Y. proposed the theoretical background. W.D., M.O., and E.K. conducted the fabrication process. W.D. conducted the simulations and experimental measurements. Y.M. and
T.Y. helped with simulations. W.D. and M.N. wrote the manuscript with feedback from other authors.
## Additional information
**Supplementary Information**: Supplementary Information accompanies this paper at doi:
**Competing interests**: The authors declare no competing financial interests.
## References
* [1] Notomi, M. _et al._ Extremely large group-velocity dispersion of line-defect waveguides in photonic crystal slabs. _Phys. Rev. Lett._**87**, 253902, DOI: 10.1103/PhysRevLett.87.253902 (2001).
* [2] McNab, S. J., Moll, N. & Vlasov, Y. A. Ultra-low loss photonic integrated circuit with membrane-type photonic crystal waveguides. _Opt. Express_**11**, 2927-2939, DOI: 10.1364/OE.11.002927 (2003).
* [3] Kuramochi, E. _et al._ Disorder-induced scattering loss of line-defect waveguides in photonic crystal slabs. _Phys. Rev. B_**72**, 161318, DOI: 10.1103/PhysRevB.72.161318 (2005).
* [4] Notomi, M., Nozaki, K., Shinya, A., Matsuo, S. & Kuramochi, E. Toward fj/bit optical communication in a chip. _Opt. Commun._**314**, 3-17, DOI: [https://doi.org/10.1016/j.optcom.2013.09.073](https://doi.org/10.1016/j.optcom.2013.09.073) (2014). Energy efficient nanophotonics: Engineered light-matter interaction in sub-wavelength structures.
* [5] Vitale, S. A. _et al._ Valleytronics: Opportunities, challenges, and paths forward. _Small_**14**, 1801483, DOI: [https://doi.org/10.1002/smll.201801483](https://doi.org/10.1002/smll.201801483) (2018). [https://onlinelibrary.wiley.com/doi/pdf/10.1002/smll.201801483](https://onlinelibrary.wiley.com/doi/pdf/10.1002/smll.201801483).
* [6] Xiao, D., Liu, G.-B., Feng, W., Xu, X. & Yao, W. Coupled spin and valley physics in monolayers of MoS\(_2\) and other group-VI dichalcogenides. _Phys. Rev. Lett._**108**, 196802, DOI: 10.1103/PhysRevLett.108.196802 (2012).
* [7] Mak, K. F., McGill, K. L., Park, J. & McEuen, P. L. The valley Hall effect in MoS\(_2\) transistors. _Science_**344**, 1489-1492, DOI: 10.1126/science.1250140 (2014). [https://www.science.org/doi/pdf/10.1126/science.1250140](https://www.science.org/doi/pdf/10.1126/science.1250140).
* [8] Dong, J.-W., Chen, X.-D., Zhu, H., Wang, Y. & Zhang, X. Valley photonic crystals for control of spin and topology. _Nat. Mater._**16**, 298-302, DOI: 10.1038/nmat4807 (2017).
* [9] Yang, Y., Jiang, H. & Hang, Z. H. Topological valley transport in two-dimensional honeycomb photonic crystals. _Sci. Reports_**8**, 1588, DOI: 10.1038/s41598-018-20001-3 (2018).
* [10] Ma, T. & Shvets, G. All-Si valley-Hall photonic topological insulator. _New J. Phys._**18**, 025012, DOI: 10.1088/1367-2630/18/2/025012 (2016).
* [11] Strasser, P. _et al._ Optimization of a 60\({}^{\circ}\) waveguide bend in inp-based 2d planar photonic crystals. _J. Opt. Soc. Am. B_**25**, 67-73, DOI: 10.1364/JOSAB.25.000067 (2008).
* [12] Ntakis, I., Pottier, P. & De La Rue, R. M. Optimization of transmission properties of two-dimensional photonic crystal channel waveguide bends through local lattice deformation. _J. Appl. Phys._**96**, 12-18, DOI: 10.1063/1.1753084 (2004). [https://doi.org/10.1063/1.1753084](https://doi.org/10.1063/1.1753084).
* [13] Tokushima, M., Kosaka, H., Tomita, A. & Yamada, H. Lightwave propagation through a 120\({}^{\circ}\) sharply bent single-line-defect photonic crystal waveguide. _Appl. Phys. Lett._**76**, 952-954, DOI: 10.1063/1.125902 (2000).
* [14] Borel, P. I. _et al._ Topology optimization and fabrication of photonic crystal structures. _Opt. Express_**12**, 1996-2001, DOI: 10.1364/OPEX.12.001996 (2004).
* [15] Ma, J., Xi, X. & Sun, X. Topological photonic integrated circuits based on valley kink states. _Laser & Photonics Rev._**13**, 1900087, DOI: [https://doi.org/10.1002/lpor.201900087](https://doi.org/10.1002/lpor.201900087) (2019). [https://onlinelibrary.wiley.com/doi/pdf/10.1002/lpor.201900087](https://onlinelibrary.wiley.com/doi/pdf/10.1002/lpor.201900087).
* [16] Yamaguchi, T. _et al._ GaAs valley photonic crystal waveguide with light-emitting InAs quantum dots. _Appl. Phys. Express_**12**, 062005, DOI: 10.7567/1882-0786/ab1cc5 (2019).
* [17] Shalaev, M. I., Walasik, W., Tsukernik, A., Xu, Y. & Litchinitser, N. M. Robust topologically protected transport in photonic crystals at telecommunication wavelengths. _Nat. Nanotechnol._**14**, 31-34, DOI: 10.1038/s41565-018-0297-6 (2019).
* [18] Chen, X.-D., Zhao, F.-L., Chen, M. & Dong, J.-W. Valley-contrasting physics in all-dielectric photonic crystals: Orbital angular momentum and topological propagation. _Phys. Rev. B_**96**, 020202, DOI: 10.1103/PhysRevB.96.020202 (2017).
* [19] He, X.-T. _et al._ Topological polarization beam splitter in dual-polarization all-dielectric valley photonic crystals. _Phys. Rev. Appl._**18**, 044080, DOI: 10.1103/PhysRevApplied.18.044080 (2022).
* [20] Kumar, A. _et al._ Phototunable chip-scale topological photonics: 160 gbps waveguide and demultiplexer for thz 6g communication. _Nat. Commun._**13**, 5404, DOI: 10.1038/s41467-022-32909-6 (2022).
* [21] Arora, S., Bauer, T., Barczyk, R., Verhagen, E. & Kuipers, L. Direct quantification of topological protection in symmetry-protected photonic edge states at telecom wavelengths. _Light. Sci. & Appl._**10**, 9, DOI: 10.1038/s41377-020-00458-6 (2021).
* [22] Han, Y. _et al._ Design of broadband all-dielectric valley photonic crystals at telecommunication wavelength. _Opt. Commun._**488**, 126847, DOI: [https://doi.org/10.1016/j.optcom.2021.126847](https://doi.org/10.1016/j.optcom.2021.126847) (2021).
* [23] He, X.-T. _et al._ A silicon-on-insulator slab for topological valley transport. _Nat. Commun._**10**, 872, DOI: 10.1038/s41467-019-08881-z (2019).
* [24] Mehrabad, M. J. _et al._ Chiral topological photonics with an embedded quantum emitter. _Optica_**7**, 1690-1696, DOI: 10.1364/OPTICA.393035 (2020).
* [25] Yoshimi, H. _et al._ Experimental demonstration of topological slow light waveguides in valley photonic crystals. _Opt. Express_**29**, 13441-13450, DOI: 10.1364/OE.422962 (2021).
* [26] Yoshimi, H., Yamaguchi, T., Ota, Y., Arakawa, Y. & Iwamoto, S. Slow light waveguides in topological valley photonic crystals. _Opt. Lett._**45**, 2648-2651, DOI: 10.1364/OL.391764 (2020).
* [27] Gao, Z. _et al._ Valley surface-wave photonic crystal and its bulk/edge transport. _Phys. Rev. B_**96**, 201402, DOI: 10.1103/PhysRevB.96.201402 (2017).
* [28] Chen, Q. _et al._ Valley-hall photonic topological insulators with dual-band kink states. _Adv. Opt. Mater._**7**, 1900036, DOI: [https://doi.org/10.1002/adom.201900036](https://doi.org/10.1002/adom.201900036) (2019). [https://onlinelibrary.wiley.com/doi/pdf/10.1002/adom.201900036](https://onlinelibrary.wiley.com/doi/pdf/10.1002/adom.201900036).
* [29] Wu, X. _et al._ Direct observation of valley-polarized topological edge states in designer surface plasmon crystals. _Nat. Commun._**8**, 1304, DOI: 10.1038/s41467-017-01515-2 (2017).
* [30] Zhang, Z. _et al._ Broadband photonic topological insulator based on triangular-holes array with higher energy filling efficiency. _Nanophotonics_**9**, 2839-2846, DOI: doi:10.1515/nanoph-2020-0086 (2020).
* [31] Kang, Y., Ni, X., Cheng, X., Khanikaev, A. B. & Genack, A. Z. Pseudo-spin-valley coupled edge states in a photonic topological insulator. _Nat. Commun._**9**, 3029, DOI: 10.1038/s41467-018-05408-w (2018).
* [32] Zeng, Y. _et al._ Electrically pumped topological laser with valley edge modes. _Nature_**578**, 246-250, DOI: 10.1038/s41586-020-1981-x (2020).
* [33] Du, Z., Chen, H. & Huang, G. Optimal quantum valley hall insulators by rationally engineering berry curvature and band structure. _J. Mech. Phys. Solids_**135**, 103784, DOI: [https://doi.org/10.1016/j.jmps.2019.103784](https://doi.org/10.1016/j.jmps.2019.103784) (2020).
* [34] Wang, Y., Zhang, W. & Zhang, X. Tunable topological valley transport in two-dimensional photonic crystals. _New J. Phys._**21**, 093020, DOI: 10.1088/1367-2630/ab3ca3 (2019).
* [35] Xi, X., Ma, J., Wan, S., Dong, C.-H. & Sun, X. Observation of chiral edge states in gapped nanomechanical graphene. _Sci. Adv._**7**, eabe1398, DOI: 10.1126/sciadv.abe1398 (2021). [https://www.science.org/doi/pdf/10.1126/sciadv.abe1398](https://www.science.org/doi/pdf/10.1126/sciadv.abe1398).
* [36] Arregui, G., Gomis-Bresco, J., Sotomayor-Torres, C. M. & Garcia, P. D. Quantifying the robustness of topological slow light. _Phys. Rev. Lett._**126**, 027403, DOI: 10.1103/PhysRevLett.126.027403 (2021).
* [37] Rosiek, C. A. _et al._ Observation of strong backscattering in valley-hall photonic topological interface modes. _Nat. Photonics_**17**, 386-392, DOI: 10.1038/s41566-023-01189-x (2023).
* [38] Yoda, T. & Notomi, M. Air-hole-type valley photonic crystal slab with simple triangular lattice for valley-contrasting physics. In _2019 Conference on Lasers and Electro-Optics_, JTh2A.10 (Optica, formerly Optical Society of America, 2019).
* [39] Yoda, T., Dai, W., & Notomi, M. Novel design principle for valley-dependent physics in photonic crystals with triangular lattice. Manuscript in preparation.
* [40] Yang, J.-K., Hwang, Y. & Oh, S. S. Evolution of topological edge modes from honeycomb photonic crystals to triangular-lattice photonic crystals. _Phys. Rev. Res._**3**, L022025, DOI: 10.1103/PhysRevResearch.3.L022025 (2021).
* [41] Chen, Q. _et al._ Photonic topological valley-locked waveguides. _ACS Photonics_**8**, 1400-1406, DOI: 10.1021/acsphotonics.1c00029 (2021).
* [42] Burresi, M. _et al._ Observation of polarization singularities at the nanoscale. _Phys. Rev. Lett._**102**, 033902, DOI: 10.1103/PhysRevLett.102.033902 (2009).
* [43] Young, A. B. _et al._ Polarization engineering in photonic crystal waveguides for spin-photon entanglers. _Phys. Rev. Lett._**115**, 153901, DOI: 10.1103/PhysRevLett.115.153901 (2015).
* [44] Sollner, I. _et al._ Deterministic photon-emitter coupling in chiral photonic circuits. _Nat. Nanotechnol._**10**, 775-778, DOI: 10.1038/nnano.2015.159 (2015).
* [45] Lang, B., Beggs, D. M., Young, A. B., Rarity, J. G. & Oulton, R. Stability of polarization singularities in disordered photonic crystal waveguides. _Phys. Rev. A_**92**, 063819, DOI: 10.1103/PhysRevA.92.063819 (2015).
* [46] Arora, G., Ruchi & Senthilkumaran, P. Full poincare beam with all the stokes vortices. _Opt. Lett._**44**, 5638-5641, DOI: 10.1364/OL.44.005638 (2019).
Supplementary Information
## 1 Simulation method
As shown in Fig.1(a), the PhCWGs are surrounded by perfectly matched layers (PMLs). The total length of each waveguide is 100a. A unidirectional dipole source is placed inside each waveguide, perpendicular to the waveguide direction, to excite the waveguide modes. The width of the dipole source is set to 2a; we have confirmed that changing the wave-source width does not affect the simulation results. The source is a combination of a surface electric current and a surface magnetic current: the surface electric current density is \(J_{s}=(0,\sqrt{2n_{eff}}/\sqrt{Z_{0}L_{dipole}},0)\) and the surface magnetic current density is \(J_{ms}=(0,0,\sqrt{2Z_{0}}/\sqrt{n_{eff}L_{dipole}})\), where \(n_{eff}\) is the effective refractive index, \(L_{dipole}\) is the dipole length, and \(Z_{0}\) is the impedance of free space. By fixing the ratio \(J_{ms}/J_{s}=Z_{0}/n_{eff}\), we excite an EM wave propagating only rightwards.
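As a small numerical check (our own sketch; the constants mirror the symbols above, and the \(2a\) dipole width with \(a=400\) nm is taken from this section), the two current amplitudes indeed satisfy \(J_{ms}/J_{s}=Z_{0}/n_{eff}\), which enforces the one-way radiation:

```python
import numpy as np

Z0 = 376.73             # impedance of free space [ohm]
n_eff = 2.65            # effective index used in the 2D model
L_dipole = 2 * 400e-9   # dipole width of 2a, with a = 400 nm [m]

Js = np.sqrt(2.0 * n_eff) / np.sqrt(Z0 * L_dipole)     # surface electric current
Jms = np.sqrt(2.0 * Z0) / np.sqrt(n_eff * L_dipole)    # surface magnetic current

print(Jms / Js)   # equals Z0 / n_eff ~ 142.2, the unidirectionality condition
print(Z0 / n_eff)
```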
Figure 1: (a) Schematic illustration of the straight and bent waveguides in the numerical calculation. The unidirectional dipole sources are positioned inside the red circles. Two observation ports, port-1 and port-2, are positioned inside the blue circles. The PhCWGs are surrounded by PMLs. Left: the straight waveguide; right: the Z-shaped bent waveguide with two 120-degree sharp bends. Bottom right: the enlarged bulk triangular lattice with and without inversion symmetry. (b) Power-flow spectra of a silicon waveguide. The excited wave is assumed to propagate rightwards. Three ports are set at each of the left and right sides of the dipole wave source, 10a, 20a, and 30a away from the source. (c) The E-field intensity of the excited wave in the silicon waveguide at 200 THz. (d) Power-flow spectra of the W1WG discussed in the simulation results. (e) The E-field intensity of the excited wave in the W1WG at 210 THz.
In the straight waveguide, two observation ports, port-1 and port-2, are located 10a and 70a away from the wave source. In the bent waveguide, port-1 is located 10a away and the first sharp bend is 30a away from the wave source; the second sharp bend is 20a away from the first one, and port-2 is 20a away from the second sharp bend. Thus, the observed EM waves travel the same distance between port-1 and port-2 in the straight and bent waveguides. The source power \(P_{0}\) is calculated as the total power flow (Poynting vector) orthogonal to port-1 in the straight waveguide. The transmitted power in a straight waveguide \(P_{s}\) and that in the Z-shaped waveguide \(P_{z}\) are calculated as the total power flow orthogonal to port-2 in each waveguide. The transmittance is calculated as \(P_{s}/P_{0}\) in straight waveguides and \(P_{z}/P_{0}\) in Z-shaped waveguides. We cannot naively calculate the bend-transmittance as \(P_{z}/P_{1}\), where \(P_{1}\) is the total power flow passing through port-1 in the Z-shaped waveguide, because \(P_{1}\) also collects the power of waves reflected at the bends and is strongly affected by the F-P resonance.

To examine the unidirectionality of the dipole source, we first confirm the wave propagation in a rectangular silicon waveguide without a photonic crystal. The waveguide width is \(\sqrt{3}a\). As shown in Fig.1(b), the energy flow observed at the left-side ports (-10a, -20a, and -30a) is less than 10% of that observed at the right-side ports (10a, 20a, and 30a). The E-field intensity at 200 THz is plotted in Fig.1(c) on a logarithmic scale; we can see that the majority of the energy propagates rightwards. In Fig.1(d,e), we have confirmed the same unidirectionality for a straight W1WG.
## 2 Other simulation results
We have reported in Table 1 and Fig.6(a) of the main manuscript the simulation results of the \(S=-1\), \(S=1\), and \(S=2\) IS- and IA-PhCWGs, and the \(S=3\) IA-PhCWGs. The \(S=-1\) and \(S=1\) PhCWGs have high transmission, and the \(S=2\) and \(S=3\) PhCWGs have low transmission via 120-degree sharp bends. Here we discuss these results in detail.
Figure 2: Calculation results of the straight and Z-shaped \(S=1\) PhCWGs. (a) PBS of the \(S=1\) IA-PhCWG. (b) The transmittance spectra of the straight \(S=1\) IA-PhCWG (black curve) and Z-shaped bent \(S=1\) IA-PhCWG (red curve). The \(T_{av}\) is 0.94. The \(R_{FP}\) is 0.04. (c) The out-of-plane magnetic field \(H_{z}\) of a Z-shaped \(S=1\) IA-PhCWG at \(a/\lambda=0.272\) with a transmittance of 0.88. (d) PBS of the \(S=1\) IS-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. (e) The transmittance spectra of the straight \(S=1\) IS-PhCWG (black curve) and Z-shaped bent \(S=1\) IS-PhCWG (red curve). The \(T_{av}\) is 1. The \(R_{FP}\) is 0.14. (f) The out-of-plane magnetic field \(H_{z}\) of a Z-shaped \(S=1\) IS-PhCWG at \(a/\lambda=0.267\) with transmittance of 0.90.
**(i) \(S=1\), mirror-symmetric waveguides**
\(S=1\) waveguides have zigzag interfaces. This domain-wall configuration appears in many previous VPhC studies, which report high transmission in Z-shaped \(S=1\) PhCWGs with a honeycomb lattice [1, 2, 3, 4]. As shown in Fig.2(a,d), the single-mode region is 0.260-0.272 in the IS-PhCWG and 0.266-0.283 in the IA-PhCWG. As shown in Fig.2(b,e), the transmittances of the Z-shaped \(S=1\) IS-PhCWG and \(S=1\) IA-PhCWG are both very high compared to that of the W1WG. The \(S=1\) IS-PhCWG has a \(T_{av}\) of 1.00 and an \(R_{FP}\) of 0.14. The \(S=1\) IA-PhCWG has a \(T_{av}\) of 0.94 and an \(R_{FP}\) of 0.04. Note that the bend-transmittance of the \(S=1\) IS-PhCWG is greater than 1 at some frequencies and averages 1.00 despite strong fluctuations in the spectrum; this is due to imperfect coupling between the waveguide mode and the wave source and the resulting complicated F-P resonance. We can see from Fig.2(f) that the waveguide mode is well confined in the \(S=1\) IS-PhCWG, and the majority of the wave flux travels through the Z-shaped bends without much scattering. Despite the technical issues in the numerical calculation, we can conclude that the \(S=1\) IS-PhCWG has good bend-transmission, comparable to that of the \(S=1\) IA-PhCWG.
**(ii) \(S=-1\), mirror-symmetric waveguides**
\(S=-1\) PhCWGs have another type of zigzag interface, where the lattice configuration is more compact. Similar to that in the honeycomb-lattice ones [5, 6, 7], the waveguide mode has odd spatial parity about the central waveguide line. As shown in Fig.3(a), the \(S=-1\) IA-PhCWG has a wide single-mode region in \(a/\lambda=0.263-0.292\). As shown in Fig.3(b), the transmittance is very high throughout the whole single-mode region with weak F-P ripples. The \(T_{av}\) is 1.00. The \(R_{FP}\) is 0.03. Figure 3(c) shows the \(H_{z}\) distribution at \(a/\lambda=0.277\), where the transmittance is 0.97. The simulated results of the \(S=-1\) IA-PhCWG agree well with a recent report by He et al. [8].
Figure 3: Calculation results of the straight and Z-shaped \(S=-1\) PhCWGs. (a) PBS of the \(S=-1\) IA-PhCWG. (b) The transmittance spectra of the straight \(S=-1\) IA-PhCWG (black curve) and Z-shaped bent \(S=-1\) IA-PhCWG (red curve). The \(T_{av}\) is 1.00. The \(R_{FP}\) is 0.03. (c) The out-of-plane magnetic field of a Z-shaped \(S=-1\) IA-PhCWG at \(a/\lambda=0.277\) with a transmittance of 0.97. (d) PBS of the \(S=-1\) IS-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. (e) The transmittance spectra of the straight \(S=-1\) IS-PhCWG (black curve) and Z-shaped bent \(S=-1\) IS-PhCWG (red curve). The \(T_{av}\) is 1.00. The \(R_{FP}\) is 0.17.(f) The out-of-plane magnetic field of a Z-shaped \(S=-1\) IS-PhCWG at \(a/\lambda=0.273\), with a transmittance of 0.96.
The \(S=-1\) IS-PhCWG has a single-mode region in \(a/\lambda=0.260-0.284\). As shown in Fig.3(e), the bend-transmittance fluctuates around unity with obvious F-P ripples in the spectrum; the cause of this irregular spectrum is the same as that for the \(S=1\) IS-PhCWG. The \(T_{av}\) of the \(S=-1\) IS-PhCWG is 1.00, and the \(R_{FP}\) is 0.17. Figure 3(f) shows the \(H_{z}\) distribution at \(a/\lambda=0.273\), where the transmittance is 0.96. Similar to the \(S=1\) and \(S=-2\) waveguides, the \(S=-1\) PhCWGs have high bend-transmission regardless of the inversion symmetry in the bulk lattice.
**(iii) \(S=2\), glide-symmetric waveguides**
\(S=2\) PhCWGs correspond to another type of bearded interface [9, 10, 11, 12, 13], distinct from the \(S=-2\) waveguides. It has been reported that the lower band of this domain-wall type in honeycomb-lattice VPhCs has high bend-transmission while the upper band has low bend-transmission [9, 11, 13]. In our triangular-lattice PhCWG, the lower band has no single-mode region, so we focus only on the upper band. As shown in Fig.4(a,d), the single-mode region is 0.271-0.301 for the IS-PhCWG and 0.277-0.306 for the IA-PhCWG. As shown in Fig.4(b,e), both \(S=2\) PhCWGs have low bend-transmission in the upper band. The \(T_{av}\) is 0.33 for the IA-PhCWG and 0.14 for the IS-PhCWG, both lower than that of a W1WG. The calculated \(R_{FP}\) is 1.00 for both the IA-PhCWG and the IS-PhCWG. Figure 4(f) shows the \(H_{z}\) distribution of the \(S=2\) IA-PhCWG at \(a/\lambda=0.285\), where the transmittance is 0.39. Figure 4(c) shows the \(H_{z}\) distribution of the \(S=2\) IS-PhCWG at \(a/\lambda=0.279\), where the transmittance is 0.17. In Fig.4(c,f), there is an apparent intensity attenuation during propagation and strong backscattering at the left side of the wave sources.
Figure 4: Calculation results of the straight and Z-shaped \(S=2\) PhCWGs. (a) PBS of the \(S=2\) IS-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. (b) The transmittance spectra of the straight \(S=2\) IS-PhCWG (black curve) and Z-shaped bent \(S=2\) IS-PhCWG (red curve). (c) The out-of-plane magnetic field of a Z-shaped \(S=2\) IS-PhCWG at \(a/\lambda=0.279\) with a transmittance of 0.17. (d) PBS of the \(S=2\) IA-PhCWG. (e) The transmittance spectra of the straight \(S=2\) IA-PhCWG (black curve) and Z-shaped bent \(S=2\) IA-PhCWG (red curve). (f) The out-of-plane magnetic field of a Z-shaped \(S=2\) IA-PhCWG at \(a/\lambda=0.285\) with a transmittance of 0.39.
In this waveguide design, we are not able to evaluate the lower band of the \(S=2\) PhCWGs. However, one can obtain single modes for both bands in an \(S=2\) PhCWG by locally changing the diameter of the air holes along the waveguide interface. For a full comparison with the previous VPhCWGs, we discuss the hole-resized \(S=2\) PhCWGs in section 4.
**(iv) \(S=3\), mirror-symmetric waveguides without inversion symmetry**
Finally, we return to the \(S=3\) PhCWG with triangular holes (broken inversion symmetry). Similar to the \(S=3\) IS-PhCWG (W1WG), the lower band has a wide single-mode region. This interface does not correspond to any conventional VPhCWG. As shown in Fig.5(a), the single-mode region is 0.270-0.293. As shown in Fig.5(b), the bend-transmission is very low: the \(T_{av}\) is 0.41 and the \(R_{FP}\) is 0.56. Figure 5(c) shows the \(H_{z}\) distribution of the \(S=3\) IA-PhCWG at \(a/\lambda=0.275\), where the transmittance is 0.58. There is an apparent attenuation and distortion of \(H_{z}\) during propagation and strong backscattering at the left side of the wave source.
## 3 Excluding trivial factors
We have discussed the influence of the domain-wall types and inversion-symmetry breaking on the bend-transmission. In Table 1, we demonstrate that other factors, such as the spatial parity, the waveguide width, the group refractive index (\(n_{g}\)), and the band index, do not determine the bend-transmission of the waveguide modes. While it is known that bend scattering is strong in slow-light regions, the focus of this work is the overall performance of waveguide bands over a broad wavelength range. The \(n_{g}\) here is calculated at the middle of the single-mode region for each waveguide band. We can see that a waveguide mode can have either high or low bend-transmittance regardless of the value of \(n_{g}\); the same holds for the other properties, such as the spatial parity and the band index.
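For reference, \(n_{g}\) at the middle of a single-mode region can be extracted from a sampled band by finite differences. The sketch below (our own illustration in normalized band-structure units) assumes the band is given as normalized frequency \(u=a/\lambda\) versus normalized wavevector \(\kappa=ka/2\pi\), for which \(v_{g}/c=du/d\kappa\):

```python
import numpy as np

def group_index(k_norm, freq_norm):
    """Group index n_g = c / v_g from a band sampled in normalized units:
    with u = a/lambda and kappa = k a / (2 pi), v_g / c = du/dkappa."""
    return 1.0 / np.abs(np.gradient(freq_norm, k_norm))

def ng_at_midband(k_norm, freq_norm, u_lo, u_hi):
    """Evaluate n_g at the middle of a single-mode region [u_lo, u_hi]."""
    ng = group_index(k_norm, freq_norm)
    i_mid = int(np.argmin(np.abs(freq_norm - 0.5 * (u_lo + u_hi))))
    return ng[i_mid]
```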
Figure 5: Calculation results of the straight and Z-shaped \(S=3\) IA-PhCWGs. (a) PBS of the \(S=3\) IA-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. (b) The transmittance spectra of the straight \(S=3\) IA-PhCWG (black curve) and Z-shaped bent \(S=3\) IA-PhCWG (red curve). The \(T_{av}\) is 0.41. The \(R_{FP}\) is 0.56. (c) The out-of-plane magnetic field of a Z-shaped \(S=3\) IA-PhCWG at \(a/\lambda=0.275\) with a transmittance of 0.58.
| | \(S=-2\) (lower) | \(S=-2\) (upper) | \(S=-1\) | \(S=1\) | \(S=2\) (lower) | \(S=2\) (upper) | \(S=3\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Waveguide width (\(\sqrt{3}a\)) | 1/6 | 1/6 | 1/3 | 2/3 | 5/6 | 5/6 | 1 |
| Parity of \(H_{z}\) | mixed | mixed | odd | even | mixed | mixed | even |
| \(n_{g}\) (with I-symmetry) | 5.2-15 | 3.9-5.2 | 6.3-11.3 | 3.6-4.1 | 6.8-254 | 3.5-5.4 | 2.5-250 |
| Sign of \(v_{g}\) (with I-symmetry) | \(+\) | \(-\) | \(+\) | \(-\) | \(+\) | \(-\) | \(-\) |
| Bend transmittance (with I-symmetry) | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |
| \(n_{g}\) (without I-symmetry) | no mode | 3.8-5.2 | 5.4-7.2 | 3.3-4.7 | 6.2-7.3 | 3.3-5.2 | 3.2-572 |
| Sign of \(v_{g}\) (without I-symmetry) | no mode | \(-\) | \(+\) | \(-\) | \(+\) | \(-\) | \(-\) |
| Bend transmittance (without I-symmetry) | no mode | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |

Table 1: Summary of the properties of the waveguide modes by their domain-wall types (\(S\) parameters) and bulk lattice symmetry. Check marks indicate high and ✗ marks indicate low bend-transmittance; "no mode" indicates that no single-mode band is available.
## 3 Resizing air holes
By locally resizing the air holes in the PhCWG, we can further engineer the band structure. Here we demonstrate two examples: the shrunken \(S=0\) PhCWG and the expanded \(S=2\) PhCWG.
**i) The shrunken \(S=0\) PhCWG**
Without lattice shifting, we directly introduce a defect into the bulk triangular lattice by shrinking two arrays of air holes along the \(\Gamma K\) direction, as shown in the inset of Fig.6(a). In the simulation, the lattice constant is 500 nm. The larger air holes in the bulk lattice have a radius of 136 nm, and the smaller air holes at the interface have a radius of 81 nm. The effective refractive index is 2.74. The shrunken \(S=0\) PhCWGs have glide symmetry along the interface. As shown in Fig.6(a), the band structure is more complicated than those of the previously discussed waveguides, with 4 bands and 2 degeneracies in the bandgap. Here only one band has single modes, at \(a/\lambda=0.309-0.315\). The bend-transmittance of this band is very high, as shown by the red curve in Fig.6(b). The \(T_{av}\) is 0.933. The \(R_{FP}\) is 0.149. We can also confirm from the \(H_{z}\) profile in Fig.6(c) that the bend-transmission is high, with no apparent backward flux at the left side of the wave source (five-point star).
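For clarity, the two figures of merit used throughout can be extracted from a transmittance spectrum as sketched below; here we assume \(T_{av}\) is the mean transmittance over the single-mode window and that \(R_{FP}\) is estimated from the Fabry-Perot fringe contrast, \(R=(\sqrt{T_{max}/T_{min}}-1)/(\sqrt{T_{max}/T_{min}}+1)\) (the exact definitions in the main text may differ):

```python
import numpy as np

def fp_metrics(freq, trans, band):
    """Average transmittance T_av and a Fabry-Perot reflection estimate
    R_FP over a single-mode window `band` = (f_lo, f_hi) in a/lambda."""
    m = (freq >= band[0]) & (freq <= band[1])
    t = trans[m]
    t_av = t.mean()
    # Fringe contrast -> facet reflectance for an ideal F-P cavity:
    # T_max / T_min = ((1 + R) / (1 - R))**2
    ratio = np.sqrt(t.max() / max(t.min(), 1e-12))
    r_fp = (ratio - 1.0) / (ratio + 1.0)
    return t_av, r_fp

# Illustrative spectrum with weak fringes (not measured data)
f = np.linspace(0.30, 0.32, 400)
t = 0.9 + 0.05 * np.cos(2.0 * np.pi * (f - 0.30) / 0.002)
print(fp_metrics(f, t, band=(0.309, 0.315)))
```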
**ii) The expanded \(S=2\) PhCWG**
The expanded \(S=2\) PhCWGs have the same lattice configuration as the \(S=2\) PhCWGs, except that the nearest two arrays of air holes along the interface are larger than those in the bulk lattice. The expanded \(S=2\) IS-PhCWGs have a lattice constant of 460 nm. The radius of the larger holes is 204 nm. The radius of the smaller holes is 146 nm. The effective refractive index of silicon is set to 2.7. As the hole size grows larger along the interface, the lower of the two touching bands rises and forms a single-mode region, making it possible to investigate its transmission property (Fig.7(a)). As shown in Figure 7(b), the lower band has high transmission through Z-shaped waveguides: the \(T_{av}\) is 1.00 in the lower band and 0.13 in the upper band, and the \(R_{FP}\) is 0.18 in the lower band and 1.00 in the upper band.
Figure 6: Calculation results of the straight and Z-shaped \(S=0\) IS-PhCWGs. (a) PBS of the \(S=0\) Cir-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. The larger air holes in the bulk lattice have a radius of 136 nm. The small air holes have a radius of 81 nm. (b) The transmittance spectra of the straight \(S=0\) IS-PhCWG (black curve) and Z-shaped bent \(S=0\) IS-PhCWG (red curve). (c) The out-of-plane magnetic field of a Z-shaped \(S=0\) IS-PhCWG at \(a/\lambda=0.313\) with a transmittance of 1.00.
Figure 7: Calculation results of the straight and Z-shaped expanded \(S=2\) PhCWGs. (a) PBS of the \(S=2\) IS-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. The large air holes along the interface have a radius of 204 nm; the holes in the bulk lattice have a radius of 146 nm. (b) The transmittance spectra of the straight \(S=2\) IS-PhCWG (black curve) and Z-shaped bent \(S=2\) IS-PhCWG (red curve). (c) The out-of-plane magnetic field of a Z-shaped \(S=2\) IS-PhCWG at \(a/\lambda=0.313\) with a transmittance of 0.96. (d) PBS of the \(S=2\) IA-PhCWG. The black arrow shows the single-mode region. The inset shows one corner of the bent waveguide. The large air holes along the interface have a side length of 381 nm; the holes in the bulk lattice have a side length of 319 nm. (e) The transmittance spectra of the straight \(S=2\) IA-PhCWG (black curve) and Z-shaped bent \(S=2\) IA-PhCWG (red curve). (f) The out-of-plane magnetic field of a Z-shaped \(S=2\) IA-PhCWG at \(a/\lambda=0.267\) with a transmittance of 1.00.
The \(S=2\) IA-PhCWGs have a lattice constant of 460 nm. The side length of the larger triangular holes is 382 nm. The side length of the smaller triangular holes is 319 nm. The effective refractive index of silicon is 2.7. As for the IS-PhCWGs, the lower band of the IA-PhCWGs rises due to the resizing of the air holes, as shown in Fig.7(d). The \(T_{av}\) is 1.00 in the lower band and 0.05 in the upper band. The \(R_{FP}\) is 0.07 in the lower band and 1.00 in the upper band.
We have included the results of the hole-resized \(S=2\) PhCWGs in Table 1 and Fig.7 of the main manuscript.
## 4 Experiment setting
We conduct the transmission measurements of the waveguides using the experimental setup shown in Fig.8. Light from a wavelength-tunable laser is collimated by lens 1 and acquires linear polarization through POL 1. The polarization direction can be adjusted by rotating the half-wave plate (HWP) 1. In our measurement, we set the orientation of the HWPs so that only TE waves are transmitted into the device. The incident light is then focused by lens 2 and coupled into the waveguide. The transmitted light undergoes the reverse process and is collected by the power meter. The power meter measures the light intensity in decibel-milliwatts, down to \(-110\) dBm. There are time-variant fluctuations of up to 1 dB in the measured intensity. The measured spectra are smoothed using adjacent averaging to remove these irrelevant fluctuations from the raw data.
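As a small illustration of this post-processing (the window length and function names are our own choices), the dBm readings can be converted to linear power and smoothed by adjacent averaging, i.e. a centered moving average:

```python
import numpy as np

def dbm_to_mw(p_dbm):
    """Convert power readings from dBm to milliwatts."""
    return 10.0 ** (np.asarray(p_dbm) / 10.0)

def adjacent_average(y, window=5):
    """Centered moving average ('adjacent averaging'); note that the
    first and last window//2 samples are affected by edge effects."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

raw_dbm = np.array([-62.3, -61.8, -62.9, -61.5, -62.1, -61.7])
print(adjacent_average(dbm_to_mw(raw_dbm), window=3))
```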
## 5 Mode classification based on band structures
We have discussed various waveguides with integer \(S\) values. In fact, \(S\) can be tuned continuously. We calculate the photonic band structure for various \(S\) values in a 3-dimensional Si PhC slab in Fig.9. As the shifting parameter \(S\) decreases continuously, waveguide modes emerge from the bulk modes below the PBG and move upward into the bulk modes above the PBG. Different bands can be traced to one another in this process. Thus, we can classify these bands into four groups as \(S\) changes from 3 to -2. The even mode of the W1WG and the upper band of the \(S=2\) glide-symmetric PhCWG belong to group 1 (green). The lower band of the \(S=2\) glide-symmetric PhCWG and the even band of the \(S=1\) PhCWG belong to group 2 (blue). The odd mode of the \(S=-1\) PhCWG and the upper band of the \(S=-2\) glide-symmetric PhCWG belong to group 3 (orange). Finally, the lower band of the \(S=-2\) glide-symmetric PhCWG belongs to group 4 (gray). Waveguide modes in groups 1 and 4 have low bend-transmission, and those in groups 2 and 3 have high bend-transmission. This mode classification originates purely from the photonic band structures. Therefore, it is not self-evident that the bend-transmission through 120-degree bends should be related to this mode classification.
Figure 8: Schematic of the experimental setup. Light from a wavelength-tunable laser is collimated by lens 1 and obtains linear polarization through POL 1. The polarization direction can be adjusted by rotating the half-wave plate (HWP) 1.
## 6 Simulation results of circular polarization singularities
We have investigated the spatial distribution of circular polarization singularities, or C-points (CPs), for all domain-wall types discussed in the main text. Figure 10 shows the CP distribution inside an \(S=-2\) IA-PhCWG in the high bend-transmission band. The top four images show the zero-value isolines of the Stokes parameters, where the crossing nodes of the \(S_{1}=0\) and \(S_{2}=0\) lines represent the locations of CPs. The bright CPs near the connection interface are marked with black boxes. The bottom images show the normalized E-field intensities at the corresponding frequencies. Below each image are the normalized frequencies (\(a/\lambda\)), the normalized intensities at the locations of the CPs, and the bend-transmittance of the Z-shaped waveguide. For waveguide modes at \(a/\lambda=0.269\) and \(a/\lambda=0.282\), there are bright CPs inside the first nearest air holes. At higher frequencies, the CPs disappear from these holes, although the degree of polarization is still very high (over 0.99), and new CPs appear inside the second nearest air holes. This change of location differs from the case of the \(S=-2\) IS-PhCWGs, which indicates that the shape of the air holes can modify the location of CPs to some extent.
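For completeness, here is a minimal sketch of how such CP maps can be generated, assuming the complex in-plane field components \(E_{x},E_{y}\) are available on a grid (the tolerance, sign convention for \(S_{3}\), and the toy field are our own choices); C-points are the crossings of the \(S_{1}=0\) and \(S_{2}=0\) isolines, where the local polarization is circular:

```python
import numpy as np

def stokes(ex, ey):
    """Transverse Stokes parameters of a complex field (Ex, Ey)."""
    s0 = np.abs(ex) ** 2 + np.abs(ey) ** 2
    s1 = np.abs(ex) ** 2 - np.abs(ey) ** 2
    s2 = 2.0 * np.real(ex * np.conj(ey))
    s3 = -2.0 * np.imag(ex * np.conj(ey))  # sign convention varies
    return s0, s1, s2, s3

def c_point_mask(ex, ey, tol=1e-2):
    """Flag grid cells where the normalized S1 and S2 both vanish,
    i.e. candidate circular-polarization singularities (|S3|/S0 -> 1)."""
    s0, s1, s2, _ = stokes(ex, ey)
    norm = np.maximum(s0, 1e-30)
    return (np.abs(s1) / norm < tol) & (np.abs(s2) / norm < tol)

# Toy circularly polarized spot: Ey = i*Ex gives S1 = S2 = 0 everywhere
x = np.linspace(-1.0, 1.0, 64)
ex = np.exp(-x[None, :] ** 2 - x[:, None] ** 2).astype(complex)
ey = 1j * ex
print(c_point_mask(ex, ey).all())   # True
```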
Figure 11 shows the CP distributions inside \(S=-1\) PhCWGs. The left six images show the \(S=-1\) IS-PhCWGs and the right six images show the \(S=-1\) IA-PhCWGs. In both cases, there are bright CPs inside the nearest air holes near the connection interfaces. Figure 12 shows the case of \(S=1\) PhCWGs. The left two images show the result of the \(S=1\) IS-PhCWGs. We plot the result for only one frequency for the \(S=1\) IS-PhCWG because the CPs and field intensities do not change much within the single-mode region, owing to the narrow waveguide band. The right six images show the result of the \(S=1\) IA-PhCWGs. In the \(S=1\) PhCWGs, the bright CPs near the connection interface are located inside the silicon area instead of inside the
Figure 10: The zero-value isoline of Stokes parameters (top) and the Electric field amplitudes (bottom) of \(S=-2\) IA-PhCWGs. The corresponding frequencies, normalized E-field amplitudes at the location of CPs, and the bend-transmittance are shown below each plot.
air holes. We speculate that CPs in the silicon areas are more susceptible to perturbations and less stable at the bends, which explains the relatively larger fluctuations in the transmittance spectra observed in the \(S=1\) PhCWGs. Actually, when we restore the second sublattice in the \(S=1\) PhCWGs (a zigzag interface PhCWG having a honeycomb lattice), there are also bright CPs at similar locations, except that these CPs fall within the air holes of the second sublattice. We have found that the simulated transmittance spectrum of the bent honeycomb lattice \(S=1\) PhCWG is more stable than its triangular lattice counterparts.
Figure 13 shows the results of the \(S=2\) IS-PhCWGs. If we ignore the "unstable" CPs inside the silicon areas, we can see that the high bend-transmission band has bright CPs near the connection interface, lying inside the air holes, while in the low bend-transmission band these bright CPs disappear. We apply the same rule to the \(S=3\) PhCWGs and ignore the silicon-region CPs. The left six images in Fig.14 show the results of the \(S=3\) IS-PhCWGs. For lower frequencies such as \(a/\lambda=0.269\), there are bright CPs near the connection interface. However, due to the small group velocity and F-P resonance, the bend-transmittances are very low. At higher frequencies, the bright CPs inside the air holes disappear; thus there are no stable, bright CPs near the connection interface. The right six images in Fig.14 show the results of the \(S=3\) IA-PhCWGs. For the \(S=3\) IA-PhCWGs, we find that at higher frequencies such as \(a/\lambda=0.272\) and \(a/\lambda=0.276\) there exist "stable", bright CPs near the connection interface. Interestingly, the bend-transmittances near these frequencies are relatively higher. We have observed that the \(S=3\) IA-PhCWG has higher bend-transmittance than the \(S=3\) IS-PhCWG overall. This may be related to the "stable", bright CPs we have found in the \(S=3\) IA-PhCWGs. In addition, this is another case where the shape of the air holes mildly modifies the location of CPs and affects the bend-transmittance when the domain-wall type is unchanged.
To conclude our preliminary investigation into the circular polarization singularities, we have observed the existence of air-hole bright CPs in several high bend-transmittance bands and their disappearance in some low bend-transmittance bands. In the \(S=2\) and \(S=3\) PhCWGs, however, there are bright CPs in the silicon region near the connection interface. We speculate that multiple CPs can appear inside the dielectric region with strong field intensity when the waveguide width is large. Because they are exposed to a large area of uniform dielectric, these dielectric CPs can be unstable at the bends where the
Figure 11: The zero-value isoline of Stokes parameters (top) and the Electric field amplitudes (bottom) of \(S=-1\) IS- (left three columns) and IA-(right three columns) PhCWGs. The corresponding frequencies, normalized E-field amplitudes at the location of CPs, and the bend-transmittance are shown below each plot.
Figure 12: The zero-value isoline of Stokes parameters (top) and the Electric field amplitudes (bottom) of \(S=1\) IS-(left one column) and IA-(right three columns) PhCWGs. The corresponding frequencies, normalized E-field amplitudes at the location of CPs, and the bend-transmittance are shown below each plot.
translational symmetry is broken. Furthermore, because the lattice configuration is more compact at the bends in narrower PhCWGs, CPs are less affected by the broken translational symmetry. The most important point is that CPs located inside the air holes disappear around the position where the bend-transmission abruptly decreases. This suggests that there is some correlation between the high bend-transmission and the existence of CPs.
Finally, we give an example showing that the bright CPs continue to exist inside the bending corners of a Z-shaped waveguide (Fig.15). In the high bend-transmittance band of the \(S=-2\) IA-PhCWG, we have confirmed the existence of bright CPs near the connection interface in the unit-cell calculation. Here we investigate whether the same CPs persist in the bends if we excite the same waveguide mode in a Z-shaped waveguide. As in the unit-cell calculations, we identify the locations of CPs using the Stokes parameter isolines. The bright CPs near the bending corners are marked with black boxes. In the straight waveguide segment, we find the bright CPs at the same locations as in the unit-cell calculation. In the bends, there are strong distortions of the Stokes parameter isolines, but the CPs (crossing nodes of the red and green lines) exist in most of the air holes. After the light wave propagates through the second bend, the Stokes parameter isolines recover their shapes and the CPs return to the same locations as before. While it is possible that the perturbation at the bends may give rise to accidental CPs, we believe this result demonstrates an uninterrupted distribution of the same type of CPs as identified in the unit-cell calculation.
|
2306.05765 | On change of slow variables at crossing the separatrices | We consider general (not necessarily Hamiltonian) perturbations of
Hamiltonian systems with one degree of freedom near separatrices of the
unperturbed system. We present asymptotic formulas for change of slow variables
at evolution across separatrices. | Anatoly Neishtadt | 2023-06-09T09:02:59Z | http://arxiv.org/abs/2306.05765v1 | # On change of slow variables at crossing the separatrices
###### Abstract
We consider general (not necessarily Hamiltonian) perturbations of Hamiltonian systems with one degree of freedom near separatrices of the unperturbed system. We present asymptotic formulas for change of slow variables at evolution across separatrices.
## 1 Outline of the problem
We consider systems described by differential equations of the form
\[\dot{q} = \frac{\partial H}{\partial p}+\varepsilon f_{q},\,\dot{p}=-\frac {\partial H}{\partial q}+\varepsilon f_{p},\,\dot{z}=\varepsilon f_{z}\,, \tag{1.1}\] \[H = H(p,q,z),\,f_{\alpha}=f_{\alpha}(p,q,z,\varepsilon),\alpha=p,q,z, \,(p,q)\in\mathbb{R}^{2},z\in\mathbb{R}^{l-2}\,.\]
Here \(\varepsilon\) is a small parameter, \(|\varepsilon|\ll 1\). For \(\varepsilon=0,\,z=\mbox{const}\) we have _an unperturbed system_ for \(p,q\), which is a Hamiltonian system with one degree of freedom. The function \(H\) is an unperturbed Hamiltonian. For \(\varepsilon>0\) we have _a perturbed system_, and functions \(\varepsilon f_{\alpha}\) are _perturbations_.
It is supposed that the phase portrait of the unperturbed system contains a saddle point and separatrices passing through it, Fig. 1. Under the action of the perturbations, the projection of the phase point onto the \((p,q)\) plane crosses a separatrix.
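As a concrete toy instance of system (1.1) (our own illustration, not an example from this note), consider a pendulum with weak friction and a slowly growing gravity parameter; a minimal sketch integrating it numerically shows the projection onto the \((p,q)\) plane drifting across the separatrix:

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS = 1e-3  # the small parameter epsilon

def rhs(t, y):
    """Toy instance of system (1.1): pendulum H = p**2/2 - z*cos(q)
    with f_q = 0, f_p = -p (weak friction) and f_z = 1 (slow drift)."""
    q, p, z = y
    return [p, -z * np.sin(q) - EPS * p, EPS]

# Start in the analogue of G3 (rotations, E > 0), away from the separatrix
sol = solve_ivp(rhs, (0.0, 5.0 / EPS), [0.0, 2.2, 1.0], max_step=0.1)

q, p, z = sol.y
E = p ** 2 / 2 - z * np.cos(q) - z   # H minus its saddle value H(pi, 0) = z
print("final E (its sign tells the final domain):", E[-1])
```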
Separatrices divide the phase plane of the unperturbed system into domains \(G_{1}(z),G_{2}(z),G_{3}(z)\), Fig. 1. In each of these domains it is possible to use variables \(h,\varphi\) instead of \(p,q\), where \(h\) is the difference between \(H\) and its value at the saddle point, and \(\varphi\) is "the angle" (from the pair "action-angle"
variables [1] of the unperturbed system. Then for \(h,z,\varphi\) we get a perturbed system in the standard form of a system with one rotating phase [2]: in this system \(h,z\) are called _slow variables_ and \(\varphi\) is _the rotating phase_. It is a classical result that the system averaged with respect to \(\varphi\) describes the evolution of \(h,z\) far from separatrices with accuracy \(O(\varepsilon)\) over time intervals of order \(1/\varepsilon\)[2]. For an approximate description of the evolution of \(h,z\) along trajectories that cross the unperturbed separatrices, one can use the averaged system up to the separatrix and then the averaged system with initial conditions on the separatrix in one of the domains in which the trajectory is captured (a certain probability can be assigned to each such continuation). For the majority of initial conditions this procedure describes the behaviour of the slow variables with accuracy \(O(\varepsilon\ln\varepsilon)\) over times of order \(1/\varepsilon\); the measure of the "bad" set of initial conditions, for which this description is not valid, tends to \(0\) faster than any given power of \(\varepsilon\) as \(\varepsilon\to 0\)[10]. One can make one more step of the averaging method and use the same procedure for the second order averaged system (it is shown in [11] that solutions of this system indeed arrive at the separatrices). This improves the accuracy to \(O(\varepsilon^{2})\) for motions far from separatrices. However, for motions with separatrix crossing there is no improvement. The reason is that the slow variables change by at least order \(\varepsilon\) while crossing a narrow neighbourhood of the separatrices. Because the width of this neighbourhood tends to \(0\) as \(\varepsilon\to 0\), it is reasonable to call this change _a jump_ of the slow variables at the separatrix. In this note we give asymptotic formulas for this jump. Such formulas were first obtained in [12] for the
Figure 1: Phase portrait of the unperturbed system.
pendulum in a slowly varying gravitational field, then in [5, 7] for the general case of a Hamiltonian system with one degree of freedom and slowly varying parameters, in [8] for the general case of a slow-fast Hamiltonian system with two degrees of freedom, and in [3, 4] for motion in a slowly time-dependent potential with a dissipation. Jump of slow variables is interpreted as _a jump of an adiabatic invariant_ for Hamiltonian systems [5, 7, 8, 12] and as _a time shift_ for systems with a dissipation [3, 4]. We consider the case of general perturbed system (1.1). For derivation of intermediate estimates used in this note see, e.g., [10, 11].
## 2 Asymptotic expansions for unperturbed motions near separatrices
In the phase portrait of the unperturbed system there is a saddle point \(C=C(z)\), and separatrices \(l_{1}=l_{1}(z),l_{2}=l_{2}(z)\) passing through it. We denote \(l_{3}=l_{3}(z)=l_{1}(z)\cup l_{2}(z)\). Denote \(q_{C}=q_{C}(z),p_{C}=p_{C}(z)\) the coordinates of the point \(C\). Denote
\[h_{C}(z)=H(p_{C}(z),q_{C}(z),z),\ E(p,q,z)=H(p,q,z)-h_{C}(z).\]
We assume that \(E>0\) in \(G_{3}\), \(E<0\) in \(G_{1,2}\).
Denote
\[f_{z,C}(z)=f_{z}(p_{C}(z),q_{C}(z),z,0),\ F_{z}(p,q,z)=f_{z}(p,q,z,0)-f_{z,C}(z),\]
\[f_{h}(p,q,z)=\frac{\partial E}{\partial p}f_{p}(p,q,z,0)+\frac{\partial E}{ \partial q}f_{q}(p,q,z,0)+\frac{\partial E}{\partial z}f_{z}(p,q,z,0).\]
For the period \(T\) of the trajectory \(E=h\) in domain \(G_{i}\) we have
\[T=-a_{i}\ln|h|+b_{i}+O(h\ln|h|),\ a_{1}=a_{2}=a,a_{3}=2a,\ b_{3}=b_{1}+b_{2}.\]
Denote
\[\oint_{l_{i}}f_{h}(p,q,z)dt=-\Theta_{i}(z),\ \oint_{l_{i}}F_{z}(p,q,z)dt=A_{i}(z).\]
Then for integrals along the unperturbed phase trajectory \(E=h\) in the domain \(G_{i}\) we have
\[\oint_{E=h}f_{h}(p,q,z)dt=-\Theta_{i}(z)+O(h\ln|h|),\] \[\oint_{E=h}F_{z}(p,q,z)dt=A_{i}(z)+O(h\ln|h|).\]
We assume that \(\Theta_{1}(z)>0,\Theta_{2}(z)>0\) for all considered values of \(z\).
Introduce the coordinate system \(C\xi\eta\) as shown in Fig. 1. For initial points on the positive side of the axis \(C\eta\) and integrals on the unperturbed phase trajectory \(E=h\) (i.e. in \(G_{3}\)) we have
\[\frac{1}{T}\int_{0}^{T}(t-\frac{T}{2})f_{h}dt=-\frac{a\ln h(\Theta _{2}-\Theta_{1})/2+(\Theta_{1}b_{2}-\Theta_{2}b_{1})/2+d_{3}}{-2a\ln h+b_{3}}+O (\sqrt{h}\,),\] \[\frac{1}{T}\int_{0}^{T}(t-\frac{T}{2})F_{z}dt=-\frac{a\ln h(A_{1}- A_{2})/2-(A_{1}b_{2}-A_{2}b_{1})/2+g_{3}}{-2a\ln h+b_{3}}+O(\sqrt{h}\,).\]
For initial points on the axis \(C\xi\) and integrals on the unperturbed phase trajectory \(E=h\) in the domain \(G_{i},i=1,2\) we have
\[\frac{1}{T}\int_{0}^{T}(t-\frac{T}{2})f_{h}dt=-\frac{d_{i}}{-a\ln |h|+b_{1}}+O(\sqrt{|h|}\,),\] \[\frac{1}{T}\int_{0}^{T}(t-\frac{T}{2})F_{z}dt=-\frac{g_{i}}{-a\ln |h|+b_{i}}+O(\sqrt{|h|}\,).\]
We have \(d_{3}=d_{1}+d_{2},g_{3}=g_{1}+g_{2}\).
In line with the general approach of the averaging method, one can make a change of variables
\[h=\overline{h}+\varepsilon u_{h,1}(\overline{h},\overline{z}, \overline{\varphi})+\varepsilon^{2}u_{h,2}(\overline{h},\overline{z}, \overline{\varphi}),\] \[z=\overline{z}+\varepsilon u_{z,1}(\overline{h},\overline{z}, \overline{\varphi})+\varepsilon^{2}u_{z,2}(\overline{h},\overline{z}, \overline{\varphi}), \tag{2.1}\] \[\varphi=\overline{\varphi}+\varepsilon u_{\varphi,1}(\overline{h},\overline{z},\overline{\varphi})\]
that transforms original equations of motion to the following form:
\[\dot{\overline{h}}=\varepsilon\overline{f}_{h,1}(\overline{h}, \overline{z})+\varepsilon^{2}\overline{f}_{h,2}(\overline{h},\overline{z})+ \varepsilon^{3}\overline{f}_{h,3}(\overline{h},\overline{z},\overline{ \varphi},\varepsilon),\] \[\dot{\overline{z}}=\varepsilon\overline{f}_{z,1}(\overline{h}, \overline{z})+\varepsilon^{2}\overline{f}_{z,2}(\overline{h},\overline{z})+ \varepsilon^{3}\overline{f}_{z,3}(\overline{h},\overline{z},\overline{\varphi},\varepsilon), \tag{2.2}\] \[\dot{\overline{\varphi}}=\omega(\overline{h},\overline{z})+ \varepsilon\overline{f}_{\varphi,1}(\overline{h},\overline{z})+\varepsilon^{ 2}\overline{f}_{\varphi,2}(\overline{h},\overline{z},\overline{\varphi}, \varepsilon).\]
The first order averaged system is obtained by keeping only the first term in each of these equations. The second order averaged system is obtained by neglecting highest order terms in each of these equations.
One can show that (see [11])
\[u_{h,1}=\frac{1}{T}\int_{0}^{T}(t-\frac{T}{2})f_{h}dt,\ u_{z,1}=\frac{1}{T} \int_{0}^{T}(t-\frac{T}{2})F_{z}dt.\]
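For a given unperturbed orbit these corrections are ordinary quadratures; a minimal sketch (our own helper) evaluating \(u_{h,1}\) from samples of \(f_{h}\) over one period:

```python
import numpy as np

def u_h1(t, f_h):
    """First averaging correction u_{h,1} = (1/T) int_0^T (t - T/2) f_h dt,
    computed by trapezoidal quadrature from samples of f_h over one period."""
    T = t[-1] - t[0]
    g = ((t - t[0]) - T / 2.0) * f_h
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))
    return integral / T

# Sanity check: for f_h = sin(2*pi*t/T), the exact value is -T/(2*pi)
t = np.linspace(0.0, 1.0, 2001)
print(u_h1(t, np.sin(2 * np.pi * t)))   # ~ -0.1592
```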
It is convenient to consider the evolution using both the usual time \(t\) and the slow time \(\tau=\varepsilon t\).
## 3 Jump of slow variables
### 3.1 General description of motion
Let a phase point start to move at \(t=t_{-}=0\) (thus \(\tau=\tau_{-}=0\)) in the domain \(G_{3}\), at a distance of order 1 from the separatrix. Denote \(h_{-},z_{-},\varphi_{-}\) the initial values of the variables \(h,z,\varphi\). Denote \(h(t),z(t),\varphi(t)\) the solution of system (1.1) with this initial condition (written in the variables \(h,z,\varphi\)). The phase point makes rounds close to unperturbed trajectories in \(G_{3}\) while moving closer to the separatrix with each round, approaches the separatrix, crosses it, and continues the motion in domain \(G_{i}\), \(i=1\) or \(i=2\). Assume, for definiteness, that this is motion in \(G_{2}\). At \(t=t_{+}=K/\varepsilon\) (thus \(\tau=\tau_{+}=K\)) the phase point is in \(G_{2}\) at a distance of order 1 from the separatrix. Here \(K=\mbox{const}\). Denote \(h_{+}=h(t_{+}),z_{+}=z(t_{+}),\varphi_{+}=\varphi(t_{+})\).
Denote \(\overline{h}(\tau),\overline{z}(\tau)\) the solution of the first order averaged system with initial conditions \(h_{-},z_{-}\), glued from the solutions of the averaged systems for domains \(G_{3}\) and \(G_{2}\) (cf. [10]). Denote \(\tau_{*}\) the moment of slow time such that \(\overline{h}(\tau_{*})=0\) (i.e. \(\tau_{*}\) is the moment of slow time at which this solution arrives at the separatrix). Denote \(z_{*}=\overline{z}(\tau_{*})\).
Denote \(\hat{h}_{-}(\tau),\hat{z}_{-}(\tau)\) the solution of the second order averaged system with initial conditions at \(\tau=0\) corresponding to \(h_{-},z_{-},\varphi_{-}\) (i.e., these initial conditions are obtained from \(h_{-},z_{-},\varphi_{-}\) by transformation (2.1)). Denote \(\hat{h}_{+}(\tau),\hat{z}_{+}(\tau)\) the solution of the second order averaged system with initial conditions at \(\tau=\tau_{+}\) corresponding to \(h_{+},z_{+},\varphi_{+}\). We consider this solution for \(\tau\leq\tau_{+}\). Denote \(\hat{\tau}_{*,\mp}\) the moments of arrival of these two solutions at the separatrix, \(\hat{h}_{\mp}(\hat{\tau}_{*,\mp})=0\). Denote \(\hat{z}_{*,\mp}=\hat{z}_{\mp}(\hat{\tau}_{*,\mp})\). Denote
\[\Delta\hat{\tau}_{*}=\hat{\tau}_{*,+}-\hat{\tau}_{*,-},\ \Delta\hat{z}_{*}=\hat{z}_{*,+}-\hat{z}_{*,-}. \tag{3.1}\]
We will call these values _jumps of slow variables at the separatrix_. To estimate these jumps, we will consider description of dynamics by the second order averaged system at approaching the separatrix (in \(G_{3}\)) and at moving away from the separatrix (in \(G_{2}\)).
For crossing from domain \(G_{3}\) to domain \(G_{i}\), \(i=1,2\), we use also notations \(\hat{\tau}_{*,3}=\hat{\tau}_{*,-},\hat{\tau}_{*,i}=\hat{\tau}_{*,+},\hat{z}_{*,3}=\hat{z}_{*,-},\hat{z}_{*,i}=\hat{z}_{*,+}\).
Values \(f_{z,C},\Theta_{i},A_{i},a_{i},b_{i},d_{i},g_{i}\) are taken at \(z=z_{*}\) in all expansions below.
### 3.2 Approaching the separatrix
Consider the motion of the phase point in \(G_{3}\). The projection of the phase point onto the \(p,q\) plane makes rounds close to unperturbed trajectories while moving closer to the separatrix with each round. This projection crosses the ray \(C\eta\) on each such round once it is close enough to the separatrix. We enumerate \(N+1\) moments of time for these intersections starting with the last one: \(t_{0}>t_{1}>\ldots>t_{N}>0\). The moment of time \(t_{N}\) is chosen in such a way that for \(0\leq t\leq t_{N}\) the dynamics of \(h,z\) is described with a required (high enough) accuracy by the second order averaged system, while for \(t_{N}\leq t\leq t_{0}\) expansions near the separatrix can be used for the description of the motion because the phase point is close enough to the separatrix.
Denote \(\tilde{h}(t),\tilde{z}(t),\tilde{\varphi}(t)\) the result of transformation of solution \(h(t),z(t),\varphi(t)\) via formulas (2.1). Denote
\[h_{n}=h(t_{n}),z_{n}=z(t_{n}),\hat{h}_{n}=\hat{h}(t_{n}),\hat{z}_{n}=\hat{z}(t _{n}),\tilde{h}_{N}=\tilde{h}(t_{N}),\tilde{z}_{N}=\tilde{z}(t_{N}). \tag{3.2}\]
Denote \(U_{h}(t)=u_{h,1}+\varepsilon u_{h,2},\ U_{z}(t)=u_{z,1}+\varepsilon u_{z,2}\) where functions \(u_{h,i},u_{z,i}\) are those in (2.1), and they are calculated at the point \(\tilde{h}(t),\tilde{z}(t),\tilde{\varphi}(t)\). Denote \(U_{h,n}=U_{h}(t_{n}),U_{z,n}=U_{z}(t_{n})\).
We will use the symbol \(\simeq\) in approximate equalities without indication of accuracy of the approximation. We have
\[z_{N}=\tilde{z}_{N}+\varepsilon U_{z,N}\simeq\hat{z}_{N}+\varepsilon U_{z,N}.\]
Then we have an identity
\[z_{0}\simeq\hat{z}_{3,*}+(\hat{z}|_{h=h_{0}}-\hat{z}_{3,*})+((\hat{z}|_{h=h_{ N}}-\hat{z}|_{h=h_{0}})-(z_{N}-z_{0}))+(\hat{z}_{N}-\hat{z}|_{h=h_{N}})+ \varepsilon U_{z,N}. \tag{3.3}\]
Estimate terms in this expression separately.
a) For \((\hat{z}|_{h=h_{0}}-\hat{z}_{3,*})\).
This value is the change of \(\hat{z}\) from the moment of time when \(\hat{h}=0\) till the moment of time when \(\hat{h}=h_{0}\). In the principal approximation
\[\dot{\hat{z}}=\varepsilon(f_{z,C}+\frac{1}{T}A_{3}),\ \dot{\hat{h}}=-\varepsilon \frac{1}{T}\Theta_{3}.\]
Hence
\[\frac{d\hat{z}}{d\hat{h}}=-\frac{1}{\Theta_{3}}(Tf_{z,C}+A_{3})\]
and
\[\hat{z}|_{h=h_{0}} -\hat{z}_{3,*}=-\frac{1}{\Theta_{3}}\int_{0}^{h_{0}}(Tf_{z,C}+A_{3}) dh=-\frac{f_{z,C}}{\Theta_{3}}\int_{0}^{h_{0}}Tdh-\frac{A_{3}}{\Theta_{3}}h_{0}\] \[=-\frac{f_{z,C}}{\Theta_{3}}\int_{0}^{h_{0}}(-2a\ln h+b_{3})dh- \frac{A_{3}}{\Theta_{3}}h_{0}=-\frac{f_{z,C}}{\Theta_{3}}\left[-2a(h_{0}\ln h_{ 0}-h_{0})+b_{3}h_{0}\right]-\frac{A_{3}}{\Theta_{3}}h_{0}.\]
b) For \(((\hat{z}|_{h=h_{N}}-\hat{z}|_{h=h_{0}})-(z_{N}-z_{0}))\).
To calculate this term one can consider motion round by round, calculate differences between changes of \(\hat{z}\) and \(z\) on each round, and sum up these differences. For changes of \(h,z\) one can use
\[h_{n+1}-h_{n}\simeq\varepsilon\Theta_{3},\] \[z_{n+1}-z_{n}\simeq-\varepsilon f_{z,C}\left(-\frac{a}{2}\ln h_{ n}-a\ln(h_{n}+\varepsilon\Theta_{1})-\frac{a}{2}\ln h_{n+1}+b_{3}\right)- \varepsilon A_{3}.\]
The change of \(\hat{z}\) is calculated as
\[\hat{z}|_{h=h_{n+1}}-\hat{z}|_{h=h_{n}}\simeq-\frac{f_{z,C}}{\Theta_{3}}\int_ {h_{n}}^{h_{n+1}}(-2a\ln h+b_{3})dh-\varepsilon A_{3}.\]
Thus
\[(z_{n+1}-z_{n})-(\hat{z}|_{h=h_{n+1}}-\hat{z}|_{h=h_{n}})\] \[\simeq a\frac{f_{z,C}}{\Theta_{3}}\left[\int_{h_{n}}^{h_{n+1}}(-2 \ln h)dh-\varepsilon\Theta_{3}\left(-\frac{1}{2}\ln h_{n}-a\ln(h_{n}+ \varepsilon\Theta_{1})-\frac{1}{2}\ln h_{n+1}\right)\right]\]
The expression in the square brackets is related to the calculation of the integral of \(-\ln h\) by the trapezoidal rule, as in [7]. Thus, we can directly use the expression for the change of an adiabatic invariant from [7]. This gives
\[(\hat{z}|_{h=h_{N}}-\hat{z}|_{h=h_{0}})-(z_{N}-z_{0})\] \[\simeq 2\varepsilon af_{z,C}\left[-\frac{1}{2}\ln\frac{2\pi}{\Gamma( \xi_{3})\Gamma(\xi_{3}+\theta_{13})}+\xi_{3}+\left(-\xi_{3}+\frac{1}{2}\theta _{23}\right)\ln\xi_{3}\right]\] \[+\varepsilon\frac{1}{2}af_{z,C}(\theta_{23}-\theta_{13})(\ln h_{ N}-\ln h_{0}).\]
Here \(\Gamma(\cdot)\) is the gamma function, \(\xi_{3}=h_{0}/(\varepsilon\Theta_{3})\), \(\theta_{ij}=\Theta_{i}/\Theta_{j}\).
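The Gamma functions enter because \(h_{n}=\varepsilon\Theta_{3}(\xi_{3}+n)\), so the accumulated sums of \(\ln h_{n}\) reduce to \(\sum_{n=0}^{N-1}\ln(\xi+n)=\ln\Gamma(\xi+N)-\ln\Gamma(\xi)\), by the identity \(\prod_{n=0}^{N-1}(\xi+n)=\Gamma(\xi+N)/\Gamma(\xi)\). A quick numerical check of this step:

```python
import numpy as np
from scipy.special import gammaln

xi, N = 0.37, 25
lhs = np.sum(np.log(xi + np.arange(N)))   # sum of ln(xi + n), n = 0..N-1
rhs = gammaln(xi + N) - gammaln(xi)       # ln Gamma(xi + N) - ln Gamma(xi)
print(lhs, rhs)                           # agree to machine precision
```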
c) For \((\hat{z}_{N}-\hat{z}|_{h=h_{N}})\).
We have \(h(t_{N})=h_{N}\). Denote \(\hat{t}_{N}\) the moment of time such that \(\hat{h}(\hat{t}_{N})=h_{N}\). Find \(\hat{t}_{N}-t_{N}\).
We have
\[\hat{h}(\hat{t}_{N})=h_{N}=h(t_{N})=\tilde{h}(t_{N})+\varepsilon U_{h,N}\simeq \hat{h}(t_{N})+\varepsilon U_{h,N}.\]
Thus
\[\hat{t}_{N}-t_{N}\simeq-\frac{1}{\Theta_{3}}TU_{h,N}.\]
Value of \(T\) is calculated at \(h=h_{N}\). Then
\[\begin{split}&\hat{z}_{N}-\hat{z}|_{h=h_{N}}=\hat{z}(t_{N})- \hat{z}(\hat{t}_{N})\simeq\varepsilon\left(f_{z,C}+\frac{1}{T}A_{3}\right)(t_ {N}-\hat{t}_{N})\simeq\varepsilon\left(f_{z,C}+\frac{1}{T}A_{3}\right)\frac{1 }{\Theta_{3}}TU_{h,N}\\ &\simeq-\varepsilon\frac{f_{z,C}}{\Theta_{3}}\left(\frac{a}{2}( \Theta_{2}-\Theta_{1})\ln h_{N}+(\Theta_{1}b_{2}-\Theta_{2}b_{1})/2+d_{3} \right)+\varepsilon\frac{1}{4}A_{3}(\theta_{23}-\theta_{13}).\end{split}\]
d) For \(\varepsilon U_{z,N}\).
We have \(\varepsilon U_{z,N}\simeq\varepsilon(A_{1}-A_{2})/4\).
Combining results of a) - d) we get from identity (3.3)
\[\begin{split}& z_{0}\simeq\hat{z}_{3,*}-\frac{f_{z,C}}{\Theta_{3 }}\left[-2a(h_{0}\ln h_{0}-h_{0})+b_{3}h_{0}\right]-\frac{A_{3}}{\Theta_{3}}h _{0}\\ &+2\varepsilon af_{z,C}\left[-\frac{1}{2}\ln\frac{2\pi}{\Gamma( \xi_{3})\Gamma(\xi_{3}+\theta_{13})}+\xi_{3}+\left(-\xi_{3}+\frac{1}{2}\theta _{23}\right)\ln\xi_{3}\right]\\ &-\varepsilon\frac{1}{2}af_{z,C}(\theta_{23}-\theta_{13})\ln h_{ 0}\\ &-\varepsilon\frac{1}{2}f_{z,C}\left((\theta_{13}b_{2}-\theta_{23 }b_{1})+2\frac{d_{3}}{\Theta_{3}}\right)+\varepsilon\frac{1}{4}A_{3}(\theta_ {23}-\theta_{13})\\ &+\varepsilon(A_{1}-A_{2})/4\\ &=\hat{z}_{3,*}-\frac{f_{z,C}}{\Theta_{3}}\left[-2ah_{0}\ln( \varepsilon\Theta_{3})+b_{3}h_{0}\right]-\frac{A_{3}}{\Theta_{3}}h_{0}\\ &+2\varepsilon af_{z,C}\left[-\frac{1}{2}\ln\frac{2\pi}{\Gamma( \xi_{3})\Gamma(\xi_{3}+\theta_{13})}+\frac{1}{2}\theta_{23}\ln\xi_{3}\right] \\ &-\varepsilon\frac{1}{2}af_{z,C}(\theta_{23}-\theta_{13})\ln h_{ 0}\\ &-\varepsilon\frac{1}{2}f_{z,C}\left((\theta_{13}b_{2}-\theta_{23 }b_{1})+2\frac{d_{3}}{\Theta_{3}}\right)+\varepsilon\frac{1}{4}A_{3}(\theta _{23}-\theta_{13})\\ &+\varepsilon(A_{1}-A_{2})/4.\end{split} \tag{3.4}\]
### 3.3 Passage through the separatrix
We assume that \(k\sqrt{\varepsilon}\leq\xi_{3}\leq\theta_{23}-k\sqrt{\varepsilon}\), where \(k\) is a large enough constant. Then for \(t>t_{0}\) the phase point makes a round close to the separatrix \(l_{2}\) and arrives at the ray \(C\xi\) in \(G_{2}\) (Fig. 1) at some moment of time \(t^{\prime}_{0}=t_{0}+O(|\ln\varepsilon|)\). Denote \(h^{\prime}_{0}=h(t^{\prime}_{0}),z^{\prime}_{0}=z(t^{\prime}_{0})\). We have
\[h^{\prime}_{0}\simeq h_{0}-\varepsilon\Theta_{2},\ z^{\prime}_{0}\simeq z_{0} +\varepsilon f_{z,C}\left[-\frac{a}{2}\ln h_{0}-\frac{a}{2}\ln(-h^{\prime}_{0} )+b_{2}\right]+\varepsilon A_{2}. \tag{3.5}\]
Denote \(\xi_{2}=(-h^{\prime}_{0})/(\varepsilon\Theta_{2})\simeq(\varepsilon\Theta_{2} -h_{0})/(\varepsilon\Theta_{2})=(\varepsilon\Theta_{2}-\varepsilon\Theta_{3} \xi_{3})/(\varepsilon\Theta_{2})=1-(\Theta_{3}/\Theta_{2})\xi_{3}\). Thus \(\xi_{3}\simeq\theta_{23}(1-\xi_{2})\).
### 3.4 Moving away from the separatrix
For \(t>t^{\prime}_{0}\) the projection of the phase point onto the \(p,q\) plane makes rounds close to unperturbed trajectories while moving farther away from the separatrix with each round. This projection crosses the ray \(C\xi\) in \(G_{2}\) on each such round while it remains close enough to the separatrix. We enumerate \(N+1\) moments of time for these intersections starting with the first one: \(t^{\prime}_{0}<t^{\prime}_{1}<\ldots<t^{\prime}_{N}<K/\varepsilon\). The moment of time \(t^{\prime}_{N}\) is chosen in such a way that for \(t^{\prime}_{N}\leq t\leq K/\varepsilon\) the changes of \(h,z\) are described with a required (high enough) accuracy by the second order averaged system, while for \(t^{\prime}_{0}\leq t\leq t^{\prime}_{N}\) expansions near the separatrix can be used for the description of the motion because the phase point is close enough to the separatrix. The calculations here are similar to those for approaching the separatrix in Section 3.2. In what follows, we omit the prime in the notation for the moments of time and use for the variables \(h,z\) the same notation as in Section 3.2, except for \(h^{\prime}_{0},z^{\prime}_{0}\).
We have
\[z_{N}=\tilde{z}_{N}+\varepsilon U_{z,N}\simeq\hat{z}_{N}+\varepsilon U_{z,N}.\]
Then we have an identity
\[z^{\prime}_{0}\simeq\hat{z}_{2,*}+(\hat{z}|_{h=h^{\prime}_{0}}-\hat{z}_{2,*}) +\left((\hat{z}|_{h=h_{N}}-\hat{z}|_{h=h^{\prime}_{0}})-(z_{N}-z^{\prime}_{0} )\right)+(\hat{z}_{N}-\hat{z}|_{h=h_{N}})+\varepsilon U_{z,N}. \tag{3.6}\]
Estimate terms in this expression separately.
a) For \((\hat{z}|_{h=h^{\prime}_{0}}-\hat{z}_{2,*})\).
Similarly to Section 3.2 we get
\[\hat{z}|_{h=h^{\prime}_{0}}-\hat{z}_{2,*}\simeq-\frac{f_{z,C}}{\Theta_{2}} \left[-a(h^{\prime}_{0}\ln|h^{\prime}_{0}|-h^{\prime}_{0})+b_{2}h^{\prime}_{0 }\right]-\frac{A_{2}}{\Theta_{2}}h^{\prime}_{0}. \tag{3.7}\]
b) For \(\big{(}(\hat{z}|_{h=h_{N}}-\hat{z}|_{h=h^{\prime}_{0}})-(z_{N}-z^{\prime}_{0}) \big{)}\).
Similarly to Section 3.2 we can use result of [7]. This gives
\[(\hat{z}|_{h=h_{N}}-\hat{z}|_{h=h^{\prime}_{0}})-(z_{N}-z^{\prime}_{0})\simeq- \varepsilon af_{z,C}\left[-\ln\frac{\sqrt{2\pi}}{\Gamma(\xi_{2})}+\xi_{2}+( \frac{1}{2}-\xi_{2})\ln\xi_{2}\right]. \tag{3.8}\]
c) For \((\hat{z}_{N}-\hat{z}|_{h=h_{N}})\)
Similarly to Section 3.2 we get
\[\hat{z}_{N}-\hat{z}|_{h=h_{N}}\simeq-\varepsilon\frac{f_{z,C}}{\Theta_{2}}d_{2}. \tag{3.9}\]
d) For \(\varepsilon U_{z,N}\).
We get \(\varepsilon U_{z,N}\simeq 0\).
Combining results of a) - d) we get from identity (3.6)
\[\begin{split} z^{\prime}_{0}&\simeq\hat{z}_{2,*}- \frac{f_{z,C}}{\Theta_{2}}\left[-a(h^{\prime}_{0}\ln|h^{\prime}_{0}|-h^{\prime }_{0})+b_{2}h^{\prime}_{0}\right]-\frac{A_{2}}{\Theta_{2}}h^{\prime}_{0}\\ &-\varepsilon af_{z,C}\left[-\ln\frac{\sqrt{2\pi}}{\Gamma(\xi_{2 })}+\xi_{2}+(\frac{1}{2}-\xi_{2})\ln\xi_{2}\right]-\varepsilon\frac{f_{z,C}}{ \Theta_{2}}d_{2}\\ &=\hat{z}_{2,*}-\varepsilon f_{z,C}\left[a(\xi_{2}\ln( \varepsilon\Theta_{2}\xi_{2})-\xi_{2})-b_{2}\xi_{2}\right]+\varepsilon A_{2} \xi_{2}\\ &-\varepsilon af_{z,C}\left[-\ln\frac{\sqrt{2\pi}}{\Gamma(\xi_{2 })}+\xi_{2}+(\frac{1}{2}-\xi_{2})\ln\xi_{2}\right]-\varepsilon\frac{f_{z,C}}{ \Theta_{2}}d_{2}\\ &=\hat{z}_{2,*}-\varepsilon f_{z,C}\left[a\xi_{2}\ln(\varepsilon \Theta_{2})-b_{2}\xi_{2}\right]+\varepsilon A_{2}\xi_{2}\\ &-\varepsilon af_{z,C}\left[-\ln\frac{\sqrt{2\pi}}{\Gamma(\xi_{2 })}+\frac{1}{2}\ln\xi_{2}\right]-\varepsilon\frac{f_{z,C}}{\Theta_{2}}d_{2}. \end{split} \tag{3.10}\]
### 3.5 Formula for jump of slow variables
Combining results of Sections 3.2, 3.3, 3.4 (formulas (3.4), (3.5) and (3.10) ) we get
\[\hat{z}_{2,*}-\varepsilon f_{z,C}\left[a\xi_{2}\ln(\varepsilon\Theta _{2})-b_{2}\xi_{2}\right]+\varepsilon A_{2}\xi_{2}\] \[-\varepsilon af_{z,C}\left[-\ln\frac{\sqrt{2\pi}}{\Gamma(\xi_{2} )}+\frac{1}{2}\ln\xi_{2}\right]-\varepsilon\frac{f_{z,C}}{\Theta_{2}}d_{2}\] \[\simeq\hat{z}_{3,*}-\frac{f_{z,C}}{\Theta_{3}}\left[-2ah_{0}\ln( \varepsilon\Theta_{3})+b_{3}h_{0}\right]-\frac{A_{3}}{\Theta_{3}}h_{0}\] \[+2\varepsilon af_{z,C}\left[-\frac{1}{2}\ln\frac{2\pi}{\Gamma( \xi_{3})\Gamma(\xi_{3}+\theta_{13})}+\frac{1}{2}\theta_{23}\ln\xi_{3}\right] \tag{3.11}\] \[-\varepsilon\frac{1}{2}af_{z,C}(\theta_{23}-\theta_{13})\ln h_{0}\] \[-\varepsilon\frac{1}{2}f_{z,C}\left((\theta_{13}b_{2}-\theta_{2 3}b_{1})+2\frac{d_{3}}{\Theta_{3}}\right)+\varepsilon\frac{1}{4}A_{3}(\theta _{23}-\theta_{13})\] \[+\varepsilon(A_{1}-A_{2})/4\] \[+\varepsilon f_{z,C}\left[-\frac{a}{2}\ln h_{0}-\frac{a}{2}\ln( -h_{0}^{\prime})+b_{2}\right]+\varepsilon A_{2}\]
Therefore
\[\begin{split}\Delta\hat{z}_{*}&=\hat{z}_{*,+}-\hat{z}_{*,-}=\hat{z}_{2,*}-\hat{z}_{3,*}\\ &\simeq\varepsilon f_{z,C}\left[a\xi_{2}\ln(\varepsilon\Theta_{2})-b_{2}\xi_{2}\right]-\varepsilon A_{2}\xi_{2}\\ &+\varepsilon af_{z,C}\left[-\ln\frac{\sqrt{2\pi}}{\Gamma(\xi_{2})}+\frac{1}{2}\ln\xi_{2}\right]+\varepsilon\frac{f_{z,C}}{\Theta_{2}}d_{2}\\ &-\frac{f_{z,C}}{\Theta_{3}}\left[-2ah_{0}\ln(\varepsilon\Theta_{3})+b_{3}h_{0}\right]-\frac{A_{3}}{\Theta_{3}}h_{0}\\ &+2\varepsilon af_{z,C}\left[-\frac{1}{2}\ln\frac{2\pi}{\Gamma(\xi_{3})\Gamma(\xi_{3}+\theta_{13})}+\frac{1}{2}\theta_{23}\ln\xi_{3}\right]\\ &-\varepsilon\frac{1}{2}af_{z,C}(\theta_{23}-\theta_{13})\ln h_{0}\\ &-\varepsilon\frac{1}{2}f_{z,C}\left((\theta_{13}b_{2}-\theta_{23}b_{1})+2\frac{d_{3}}{\Theta_{3}}\right)+\varepsilon\frac{1}{4}A_{3}(\theta_{23}-\theta_{13})\\ &+\varepsilon(A_{1}-A_{2})/4\\ &+\varepsilon f_{z,C}\left[-\frac{a}{2}\ln h_{0}-\frac{a}{2}\ln(-h_{0}^{\prime})+b_{2}\right]+\varepsilon A_{2}.\end{split} \tag{3.12}\]
For passage from \(G_{3}\) to \(G_{1}\) we would have relation (3.12) with the index '2' replaced by the index '1'. The final result, written for passage from \(G_{3}\) to \(G_{i}\) with \(i=1\) or \(2\), simplifies to
\[\begin{split}\Delta\hat{z}_{*}&=\hat{z}_{i,*}-\hat{z}_{3,*}\simeq\varepsilon f_{z,C}\,a\Big(\xi_{i}-\frac{1}{2}\Big)\big(\ln(\varepsilon\Theta_{i})-2\theta_{i3}\ln(\varepsilon\Theta_{3})\big)\\ &-\varepsilon af_{z,C}\ln\frac{(2\pi)^{3/2}}{\Gamma(\xi_{i})\,\Gamma(\theta_{i3}(1-\xi_{i}))\,\Gamma(1-\theta_{i3}\xi_{i})}\\ &-\varepsilon f_{z,C}\Big(\xi_{i}-\frac{1}{2}\Big)(b_{i}-\theta_{i3}b_{3})-\varepsilon\Big(\xi_{i}-\frac{1}{2}\Big)(A_{i}-\theta_{i3}A_{3})\\ &+\varepsilon\frac{f_{z,C}}{\Theta_{i}}\left(d_{i}-\theta_{i3}d_{3}\right).\end{split} \tag{3.13}\]
This formula is the main result of the current note. In a similar way one can write formulas for jumps of slow variables due to other passages between domains \(G_{j}\) that occur for other signs of values \(\Theta_{j},j=1,2,3\).
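For numerical use, formula (3.13) is straightforward to evaluate; a minimal sketch (our own code; scalar slow variable, log-Gamma used for stability, all inputs taken at \(z=z_{*}\)):

```python
import numpy as np
from scipy.special import gammaln

def delta_z(eps, xi, f_zC, a, b_i, b_3, Theta_i, Theta_3, A_i, A_3, d_i, d_3):
    """Jump of a (scalar) slow variable at the separatrix, formula (3.13),
    for capture from G3 into G_i; xi is the pseudo-phase xi_i.  Valid for
    0 < xi < 1 and 0 < theta_i3 * xi < 1, so all Gamma arguments are > 0."""
    th = Theta_i / Theta_3                                   # theta_{i3}
    log_gamma = (1.5 * np.log(2.0 * np.pi)
                 - gammaln(xi) - gammaln(th * (1.0 - xi))
                 - gammaln(1.0 - th * xi))
    return (eps * f_zC * a * (xi - 0.5)
            * (np.log(eps * Theta_i) - 2.0 * th * np.log(eps * Theta_3))
            - eps * a * f_zC * log_gamma
            - eps * f_zC * (xi - 0.5) * (b_i - th * b_3)
            - eps * (xi - 0.5) * (A_i - th * A_3)
            + eps * f_zC / Theta_i * (d_i - th * d_3))

# Illustrative parameter values only
print(delta_z(1e-3, 0.4, 1.0, 1.0, 0.5, 1.0, 1.0, 2.0, 0.1, 0.2, 0.0, 0.0))
```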
The value \(\xi_{i}\) is called _a crossing parameter_ or _a pseudo-phase_. Asymptotic formulas for the pseudo-phase were obtained in [6] for Hamiltonian systems with one degree of freedom and slow time dependence, in [9] for slow-fast Hamiltonian systems with one degree of freedom corresponding to fast motion, in [3, 4] for motion in a slowly time-dependent potential with a dissipation, and in [11] for a general perturbed system of form (1.1).
**Remark.** We do not indicate the accuracy of formula (3.13). One can see that terms \(\sim\varepsilon/\ln h_{N}\) are neglected in some intermediate relations. However, because the final result should not depend on the choice of \(h_{N}\), the accuracy of the final formula should be much better. For Hamiltonian perturbations the accuracy of the final formula is \(O(\varepsilon^{3/2}(|\ln\varepsilon|+(1-\xi_{i})^{-1}))\)[7, 8].
### 3.6 Shift of slow time
The slow time \(\tau\) can be considered as a particular slow variable, \(\dot{\tau}=\varepsilon\). The formula for the jump (or _shift_) of slow time for passage from \(G_{3}\) to \(G_{i}\), \(i=1\) or \(2\),
is a particular case of (3.13) with \(f_{z,C}=1\), \(A_{j}=0,j=1,2,3\). Thus we get
\[\begin{split}\hat{\tau}_{i,*}-\hat{\tau}_{3,*}&\simeq\varepsilon a\Big(\xi_{i}-\frac{1}{2}\Big)\big(\ln(\varepsilon\Theta_{i})-2\theta_{i3}\ln(\varepsilon\Theta_{3})\big)\\ &-\varepsilon a\ln\frac{(2\pi)^{3/2}}{\Gamma(\xi_{i})\,\Gamma(\theta_{i3}(1-\xi_{i}))\,\Gamma(1-\theta_{i3}\xi_{i})}\\ &-\varepsilon\Big(\xi_{i}-\frac{1}{2}\Big)(b_{i}-\theta_{i3}b_{3})+\frac{\varepsilon}{\Theta_{i}}\left(d_{i}-\theta_{i3}d_{3}\right).\end{split} \tag{3.14}\]
## 4 Jump of adiabatic invariant
In this section we derive formulas for the jumps of adiabatic invariants in Hamiltonian systems from the formulas for the jumps of slow variables obtained above.
### 4.1 Time-dependent Hamiltonian system
Let system (1.1) be a Hamiltonian system with the Hamiltonian \(H=H(p,q,\tau)\), \(\tau=\varepsilon t\). Denote \(S_{i}(\tau)\) the area of the domain \(G_{i}\), \(i=1,2\). Denote \(S_{3}(\tau)=S_{1}(\tau)+S_{2}(\tau)\). Then \(\Theta_{j}=dS_{j}/d\tau\), \(j=1,2,3\). Consider motion with passage from \(G_{3}\) to \(G_{i}\), \(i=1\) or \(2\), as in Section 3. Let \(J_{-}\) and \(J_{+}\) be the initial (at \(t=0\), in \(G_{3}\)) and final (at \(t=K/\varepsilon\), in \(G_{i}\)) values of the improved adiabatic invariant. (For the definition of the improved adiabatic invariant and related formulas see, e.g., [7].) Then \(S_{3}(\hat{\tau}_{3,*})\simeq 2\pi J_{-}\), \(S_{i}(\hat{\tau}_{i,*})\simeq 2\pi J_{+}\). We get
\[2\pi J_{+}\simeq S_{i}(\hat{\tau}_{i,*})=S_{i}(\hat{\tau}_{3,*}+\hat{\tau}_{i, *}-\hat{\tau}_{3,*})\simeq S_{i}(\hat{\tau}_{3,*})+\Theta_{i}(\hat{\tau}_{i,*}- \hat{\tau}_{3,*}). \tag{4.1}\]
Substitute \((\hat{\tau}_{i,*}-\hat{\tau}_{3,*})\) from (3.14). We get
\[\begin{split}2\pi J_{+}&\simeq S_{i}(\hat{\tau}_{3,*})+\varepsilon a\Theta_{i}\Big(\xi_{i}-\frac{1}{2}\Big)\big(\ln(\varepsilon\Theta_{i})-2\theta_{i3}\ln(\varepsilon\Theta_{3})\big)\\ &-\varepsilon a\Theta_{i}\ln\frac{(2\pi)^{3/2}}{\Gamma(\xi_{i})\,\Gamma(\theta_{i3}(1-\xi_{i}))\,\Gamma(1-\theta_{i3}\xi_{i})}\\ &-\varepsilon\Theta_{i}\Big(\xi_{i}-\frac{1}{2}\Big)(b_{i}-\theta_{i3}b_{3})+\varepsilon\left(d_{i}-\theta_{i3}d_{3}\right)\end{split} \tag{4.2}\]
as in [5, 7]. One can replace \(S_{i}(\hat{\tau}_{3,*})\) with \(S_{i}(\tau_{*})+\theta_{i3}(2\pi J_{-}-S_{3}(\tau_{*}))\) here.
### 4.2 Slow-fast Hamiltonian system
Let system (1.1) be a slow-fast Hamiltonian system. The Hamiltonian is \(H(p,q,y,x)\) with pairs of conjugate variables \((p,q)\) and \((y,\varepsilon^{-1}x)\). Equations of motion are
\[\dot{q}=\frac{\partial H}{\partial p},\ \dot{p}=-\frac{\partial H}{\partial q},\ \dot{x}= \varepsilon\frac{\partial H}{\partial y},\ \dot{y}=-\varepsilon\frac{\partial H}{ \partial x}. \tag{4.3}\]
Thus, \(z=(y,x)\), \(f_{z,C}=(-\partial h_{C}(y,x)/\partial x,\partial h_{C}(y,x)/\partial y)\). Denote \(S_{i}(z)=S_{i}(y,x)\) the area of domain \(G_{i}\), \(i=1,2\). Denote \(S_{3}(z)=S_{1}(z)+S_{2}(z)\). Then \(\Theta_{j}=\{S_{j},h_{C}\},j=1,2,3\), where \(\{\cdot,\cdot\}\) is the Poisson bracket with respect to variables \((y,x)\), \(\{a,b\}=a^{\prime}_{x}b^{\prime}_{y}-a^{\prime}_{y}b^{\prime}_{x}\) (see [8]) and
\[A_{j}=\left(-\oint_{l_{j}}\frac{\partial(H-h_{c})}{\partial x}dt,\oint_{l_{j} }\frac{\partial(H-h_{c})}{\partial y}dt\right)=\left(\frac{\partial S_{j}}{ \partial x},-\frac{\partial S_{j}}{\partial y}\right). \tag{4.4}\]
(cf. [8]).
Consider motion with passage from \(G_{3}\) to \(G_{i}\), \(i=1\) or \(2\), as in Section 3. Let \(J_{-}\) and \(J_{+}\) be the initial (at \(t=0\), in \(G_{3}\)) and final (at \(t=K/\varepsilon\), in \(G_{i}\)) values of the improved adiabatic invariant. (For the definition of the improved adiabatic invariant and related formulas see, e.g., [8]). Then \(S_{3}(\hat{z}_{3,*})\simeq 2\pi J_{-}\), \(S_{i}(\hat{z}_{i,*})\simeq 2\pi J_{+}\). Then we get
\[2\pi J_{+}\simeq S_{i}(\hat{z}_{i,*})=S_{i}(\hat{z}_{3,*}+\hat{z}_{i,*}-\hat{ z}_{3,*})\simeq S_{i}(\hat{z}_{3,*})+(\mathop{\rm grad}S_{i}\cdot(\hat{z}_{i,*}- \hat{z}_{3,*})). \tag{4.5}\]
Here \((\ \cdot\ \ )\) is the standard scalar product. Substitute \((\hat{z}_{i,*}-\hat{z}_{3,*})\) from (3.13) and note that
\[(\mathop{\rm grad}S_{i}\cdot f_{z,C})=\Theta_{i},\ (\mathop{\rm grad}S_{i} \cdot A_{i})=\{S_{i},S_{i}\}=0,\ (\mathop{\rm grad}S_{i}\cdot A_{3})=-\{S_{i},S_{3}\}. \tag{4.6}\]
We get
\[\begin{split}2\pi J_{+}&\simeq S_{i}(\hat{z}_{3,*})+\varepsilon\Theta_{i}a\Big(\xi_{i}-\frac{1}{2}\Big)\big(\ln(\varepsilon\Theta_{i})-2\theta_{i3}\ln(\varepsilon\Theta_{3})\big)\\ &-\varepsilon a\Theta_{i}\ln\frac{(2\pi)^{3/2}}{\Gamma(\xi_{i})\,\Gamma(\theta_{i3}(1-\xi_{i}))\,\Gamma(1-\theta_{i3}\xi_{i})}\\ &-\varepsilon\Theta_{i}\Big(\xi_{i}-\frac{1}{2}\Big)(b_{i}-\theta_{i3}b_{3})-\varepsilon\theta_{i3}\Big(\xi_{i}-\frac{1}{2}\Big)\{S_{i},S_{3}\}\\ &+\varepsilon\left(d_{i}-\theta_{i3}d_{3}\right).\end{split} \tag{4.7}\]
For systems with two degrees of freedom one can approximately calculate \(S_{i}(\hat{z}_{3,*})\) via initial value of the improved adiabatic invariant and solution of the first order averaged system. Consider motion in the energy level \(H=h\). Relations
\[S_{3}(\hat{z}_{3,*})\simeq 2\pi J_{-},\quad h_{C}(\hat{z}_{3,*})\simeq h,\quad h _{C}(z_{*})=h \tag{4.8}\]
imply that
\[(\mbox{grad}\,S_{3}\cdot(\hat{z}_{3,*}-z_{*}))\simeq 2\pi J_{-}-S_{3}(z_{*}), \quad(\mbox{grad}\,h_{C}\cdot(\hat{z}_{3,*}-z_{*}))\simeq 0. \tag{4.9}\]
We have
\[S_{i}(\hat{z}_{3,*})=S_{i}(z_{*}+\hat{z}_{3,*}-z_{*})\simeq S_{i}(z_{*})+( \mbox{grad}\,S_{i}\cdot(\hat{z}_{3,*}-z_{*})). \tag{4.10}\]
Solve equations (4.9) for \((\hat{z}_{3,*}-z_{*})\) and substitute the result to (4.10). We get
\[S_{i}(\hat{z}_{3,*})\simeq S_{i}(z_{*})+\theta_{i3}(2\pi J_{-}-S_{3}(z_{*})). \tag{4.11}\]
Substitution of this relation into (4.7) gives the expression for the jump of the adiabatic invariant obtained in [8].
## 5 Conclusions
The main result of this note is the asymptotic formula (3.13) for the change of slow variables at evolution across separatrices in systems of form (1.1). Together with the formula for the phase change in such systems [11], this gives a rather complete description of the dynamics with separatrix crossings in the considered class of systems.
2307.12130 | Estimating temperatures with low-cost infrared cameras using deep neural
networks | Low-cost thermal cameras are inaccurate (usually $\pm 3^\circ C$) and have
space-variant nonuniformity across their detector. Both inaccuracy and
nonuniformity are dependent on the ambient temperature of the camera. The goal
of this work was to estimate temperatures with low-cost infrared cameras, and
rectify the nonuniformity.
A nonuniformity simulator that accounts for the ambient temperature was
developed. An end-to-end neural network that incorporates both the physical
model of the camera and the ambient camera temperature was introduced. The
neural network was trained with the simulated nonuniformity data to estimate
the object's temperature and correct the nonuniformity, using only a single
image and the ambient temperature measured by the camera itself. Results of the
proposed method significantly improved the mean temperature error compared to
previous works by up to $0.5^\circ C$. In addition, constraining the physical
model of the camera with the network lowered the error by an additional
$0.1^\circ C$.
The mean temperature error over an extensive validation dataset was
$0.37^\circ C$. The method was verified on real data in the field and produced
equivalent results. | Navot Oz, Nir Sochen, David Mendelovich, Iftach Klapp | 2023-07-22T17:13:49Z | http://arxiv.org/abs/2307.12130v2 | # Improving temperature estimation in low-cost infrared cameras using deep neural networks
###### Abstract
Low-cost thermal cameras are inaccurate (usually \(\pm 3^{\circ}C\)) and have space-variant nonuniformity across their detector. Both inaccuracy and nonuniformity are dependent on the ambient temperature of the camera. The main goal of this work was to improve the temperature accuracy of low-cost cameras and rectify the nonuniformity.
A nonuniformity simulator that accounts for the ambient temperature was developed. An end-to-end neural network that incorporates the ambient temperature at image acquisition was introduced. The neural network was trained with the simulated nonuniformity data to estimate the object's temperature and correct the nonuniformity, using only a single image and the ambient temperature measured by the camera itself. Results show that the proposed method lowered the mean temperature error by approximately \(1^{\circ}C\) compared to previous works. In addition, applying a physical constraint on the network lowered the error by an additional \(4\%\).
The mean temperature error over an extensive validation dataset was \(0.37^{\circ}C\). The method was verified on real data in the field and produced equivalent results.
Deep learning, Convolutional neural network (CNN), Calibration, Bolometer, Image processing, Space- and time-variant nonuniformity, Fixed-Pattern Noise (FPN)
## I Introduction
Infrared (IR) imagery in the \(8\,\mu m-14\,\mu m\) atmospheric window measures the thermal radiation emitted from an object. IR imagery is extensively used for various applications, such as military night vision [1], medical fever screening [2] and machinery fault diagnosis [3], among many others. One interesting application of such an imaging system is agriculture, where the temperature of a plant carries important information on its well-being [4, 5].
Low-cost IR cameras are usually uncooled and rely on microbolometer arrays as sensors. The microbolometer array enables the construction of inexpensive IR cameras with low energy requirements. Unlike photon-counting detector arrays (e.g., CMOS in the visible range), microbolometers measure changes in electrical resistance caused by the incident thermal radiation originating from an object [6]. The thermal radiation heats each microbolometer in the array to a slightly different temperature, depending on the observed scene and the incident angle of the radiation, and the resistance of each microbolometer changes accordingly. These minuscule resistance changes are used to construct an image corresponding to the temperature of the observed scene.
However, microbolometer arrays are subject to space-variant nonuniformity and noise from various sources. The microbolometer array is uncooled, and so a prominent source of nonuniformity is thermal radiation emitted by the camera itself [7]. Another parasitic thermal radiation source is the narcissus effect, where unfocused reflection of the detector returns from the optical surfaces [8]. The effect of the internal self-radiation (red lines) mixed with the incident thermal radiation from the scene (green lines) is schematically presented in Fig. 1.
These parasitic effects are dependent on ambient temperatures, meaning that their influence on the measurements
Fig. 1: Side view of a thermal camera. Green lines are thermal radiation propagating from the object plane to the sensor plane. Red lines are thermal radiation emitted by the camera itself, which has a major effect on nonuniformity [9].
Fig. 2: Image of a uniform heat source: SR-800N blackbody. The narcissus effect is the axis-symmetric ripple effect, meaning that the nonuniformity is also spatially variant.
changes with ambient temperature of the camera. Fig. 2 demonstrates the nonuniformity effect on an image of a uniform heat source (black body). Notice that the effect is also spatially variant.
Another source of nonuniformity is fixed-pattern noise (FPN). The readout circuitry of the microbolometer array is usually line-based (similar to charge-coupled devices). Slight differences between the readout circuits of the same array can lead to considerable disparity between lines in the image [10].
Finally, the signal-to-noise ratio of the camera is often low due to readout and electronic noise [7]. These noises affect the minimum detectable change in scene temperature, known in the literature as the noise-equivalent temperature difference (NETD). Lower NETD values are preferable, and noise in the camera increases this value [10].
The thermal radiation emitted by a body for all wavelengths can be found using the Stefan-Boltzmann law, whereby the emitted radiation can be approximated by the fourth power of the object's temperature [7]:
\[L(T)\approx\epsilon\sigma T^{4}\quad\left[\frac{W}{m^{2}}\right] \tag{1}\]
where \(T\) is the object's temperature, \(\epsilon\) is the emissivity and \(\sigma\) is the Stefan-Boltzmann constant.
In a small environment near a reference temperature \(T_{0}\), the Stefan-Boltzmann law can be expanded by Taylor series:
\[\begin{split} L(T)&=\epsilon\cdot\sigma T^{4}= \epsilon\cdot\sigma(T_{0}+\Delta T)^{4}\\ &\approx\epsilon\cdot\sigma(T_{0})^{4}+4\epsilon\cdot\sigma(T_{ 0})^{3}\Delta T\approx a_{1}\cdot t_{obj}+a_{0}\end{split} \tag{2}\]
where \(a_{0}\), \(a_{1}\) are the coefficients and \(T_{0}\) is a reference temperature. \(\Delta T\) was changed to \(t_{obj}\) for brevity.
Equation 2 demonstrates that the radiation can be approximated as linear in scene temperature for a small environment around a reference temperature. This result means that the incident thermal radiation on the sensor has a temperature-_dependent_ element and a temperature-_independent_ element.
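The size of the neglected quadratic term is easy to quantify; a short sketch (our own illustration, with \(\epsilon=1\) and \(T_{0}=300\,K\)) comparing \(\sigma T^{4}\) with its first-order expansion:

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant [W / (m^2 K^4)]
T0 = 300.0               # reference temperature [K]

def radiance(T):
    return SIGMA * T ** 4            # Eq. 1 with emissivity = 1

def radiance_linear(T):
    a1 = 4 * SIGMA * T0 ** 3         # Eq. 2 coefficients
    a0 = SIGMA * T0 ** 4
    return a0 + a1 * (T - T0)

T = np.linspace(T0 - 20.0, T0 + 20.0, 81)
rel_err = np.abs(radiance_linear(T) - radiance(T)) / radiance(T)
print(rel_err.max())   # about 3% at the edges of a +/- 20 K span
```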
The ambient temperature of the camera has a profound effect on the measurements that it produces. Fig. 3 demonstrates the drift in measurements caused by a change in ambient temperature. Thus, the model in Equation 2 must also account for changes in ambient temperature. The linear approximation of the overall reading of the camera depends on both the ambient temperature and the object temperature:
\[L(t_{obj},t_{amb})=G(t_{amb})\cdot L(t_{obj})+D(t_{amb}) \tag{3}\]
where \(t_{amb},t_{obj}\) are the ambient and object temperatures, respectively.
\(G(t_{amb})\) and \(D(t_{amb})\) in Eq. 3 are polynomials of \(t_{amb}\). The polynomial model has been previously shown to be representative of the underlying physical thermal radiation model (e.g,[11, 12]). For the remainder of this work, higher-order polynomials, mainly quadratic, will be used for approximations.
Separating the coefficients from the object temperature in Eq. 3 is complicated when only the camera response is given [13]. However, some mathematical functions can separate a product into a summation, such as the \(\log()\) function [14]. The existence of a separation function suggests the use of neural networks, which can approximate any continuous function [15]. Thus, this work attempts to develop an end-to-end neural network that represents the polynomial model in Eq. 3. For completeness, a network with a linear physical constraint will also be developed and compared.
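As a sketch of what such a physically constrained estimator could look like (our own illustrative PyTorch code, not the architecture proposed in this paper), a small convolutional backbone can predict per-pixel gain and offset maps from the frame and the ambient temperature, with the object temperature obtained by inverting Eq. 3:

```python
import torch
import torch.nn as nn

class PhysicalNUC(nn.Module):
    """Illustrative physically constrained head: a small CNN predicts
    per-pixel gain G and offset D from (frame, t_amb), and the output
    temperature map inverts Eq. 3 as t_obj ~ (x - D) / G."""

    def __init__(self, ch=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1),  # -> gain and offset maps
        )

    def forward(self, frame, t_amb):
        # Broadcast the scalar ambient temperature into an input channel
        amb = t_amb.view(-1, 1, 1, 1).expand_as(frame)
        g, d = self.backbone(torch.cat([frame, amb], dim=1)).chunk(2, dim=1)
        g = torch.nn.functional.softplus(g) + 1e-3  # keep the gain positive
        return (frame - d) / g                      # estimated t_obj map

model = PhysicalNUC()
frames = torch.randn(4, 1, 64, 64)               # simulated nonuniform frames
t_amb = torch.tensor([25.0, 30.0, 35.0, 40.0])   # ambient temperatures
print(model(frames, t_amb).shape)                # torch.Size([4, 1, 64, 64])
```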
## II Prior work
Nonuniformity correction is an ongoing area of research. Different approaches are described in Sections II-A-II-C.
### _Calibration-based methods_
The process of calibration requires collecting data of a known heat source under different environmental conditions. This process is usually conducted with a scientifically calibrated blackbody in an environmental chamber. The data are used to find coefficients that solve an equation for the calibration.
The baseline for the calibration methods is a one-point correction. These methods usually assume a known and constant gain across ambient temperatures, and only solve for the offset (e.g., [9]). The natural extension is the two-point correction where no assumption is made for the gain (e.g., [10]). These early methods solved for the coefficients using a simple linear regression model.
Contemporary methods formulate the nonuniformity correction (NUC) as an inverse problem. Nugent et al. [11] solved it as a least-squares problem, with the offset and gain modeled as polynomials of the object's temperature. In ref. [16], they used the internal shutter of the camera to periodically update the results of the calibration. Liang et al. [17] based the solution on interpolation of a predefined offset table for each ambient temperature, and the offset values for this table were found using a two-point correction. Chang and Li [18] solved for both the ambient temperature and integration time of the camera.
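To make the calibration formulation concrete, the following minimal sketch (variable names and the quadratic order are our own choices) fits the per-pixel model \(x=G(t_{amb})\,t_{obj}+D(t_{amb})\) by ordinary least squares, which suffices because the model is linear in the unknown polynomial coefficients:

```python
import numpy as np

def fit_calibration(t_amb, t_obj, x, deg=2):
    """Least-squares fit of x = G(t_amb)*t_obj + D(t_amb) for one pixel,
    with G and D polynomials of degree `deg` in t_amb."""
    powers = np.stack([t_amb ** k for k in range(deg + 1)], axis=1)
    A = np.hstack([powers * t_obj[:, None], powers])  # [G coeffs | D coeffs]
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coeffs[: deg + 1], coeffs[deg + 1 :]       # g, d (ascending order)

def apply_calibration(g, d, t_amb, x):
    """Invert the fitted model to estimate the object temperature."""
    G = np.polyval(g[::-1], t_amb)
    D = np.polyval(d[::-1], t_amb)
    return (x - D) / G

# Synthetic example: known gain/offset plus readout noise
rng = np.random.default_rng(0)
t_amb = rng.uniform(10, 50, 500)
t_obj = rng.uniform(0, 60, 500)
x = (1.0 + 0.01 * t_amb) * t_obj + (5.0 - 0.2 * t_amb) + rng.normal(0, 0.1, 500)
g, d = fit_calibration(t_amb, t_obj, x)
print(np.abs(apply_calibration(g, d, t_amb, x) - t_obj).mean())
```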
Calibration-based methods produce good results but rely on the collection of extensive data. The data must be accurate and contain both varying object temperatures and ambient temperatures, calling for the use of scientific-grade equipment. Moreover, these methods are valid only for the camera used to collect the data, meaning that the data-collection process must be performed for every camera to be calibrated. Any attempt to apply the calibration data to another camera will be noisy and have noticeable FPN, because the calibration coefficients do not transfer between cameras.

Fig. 3: Drift between true temperature and camera estimation at different operating points. Each measurement was taken after performing the internal flat-field correction (FFC) calibration procedure.
### _Scene-based methods_
Scene-based methods exploit redundant data in and between frames, rendering calibration unnecessary. The redundant data can be movement between frames, camera jitter between images, or a constraint on the dataset itself.
Most of these methods assume that the change in ambient temperature is slow; thus the gain and offset change slowly, and both can be regarded as constant between frames. This assumption holds true, but only for a limited time and ambient-temperature span.
Averbuch et al. [19] used the motion between frames. Consecutive frames were registered to add data on each pixel, and an inverse problem was solved to find the offset. The solution was updated using a Kalman filter. Papini et al. [13] used pairs of blurred and sharp images to approximate the gain and offset.
These approaches offer good approximations for the temperatures but are expensive to calculate and require redundancy between frames.
### _Single image-based methods_
The idea of this approach is to use only information that is already embedded in the frame itself. Scribner et al. [20] used a neural network to find offset and gain. The neural network acted as a locally adaptive filter on a small neighborhood. Tendero and Gilles [21] equalized the frame using the cumulative histogram of all of the columns in the frame, and then used the discrete cosine transform to denoise the results.
Recently, methods that utilize neural networks in general, and convolutional neural networks (CNN) in particular, have been suggested. He et al. [22] suggested using a U-Net-type CNN trained end-to-end. Jian et al. [23] filtered the frame with a bilateral filter to allow the network to concentrate only on the high-frequency information. Chang et al. [24] shared multiscale information between layers of the network to improve the NUC results. Saragadam et al. [25] used a neural network as prior information for solving an optimization problem. The input to the network was jittered frames of the same object. The physical constraint shown in Eq. 2 was imposed as part of the optimization problem.
Jointly estimating accurate scene temperature and correcting nonuniformity using only a single image at different ambient temperatures, without the need to calibrate for each camera, has yet to be achieved.
This work aims to both estimate the scene temperature and correct the nonuniformity in frames captured by uncooled microbolometer-based cameras. We introduce a method of accurately estimating a space-variant nonuniformity model from measurements, which also accounts for the ambient temperature of the camera. The model utilizes prior knowledge on the physics of the domain, namely radial spatial dependence, which is incorporated into the mathematical modeling of the nonuniformity. The nonuniformity model is general and represents different cameras, unlike previous calibration methods. Thus model is used to train a CNN to correct the nonuniformity and produce accurate scene temperatures based only on a single frame and the ambient temperature of the camera. We also compare a neural network with a physical constraint based on Eq. 3 to an end-to-end temperature-estimation network and show that the physical constraint improves performance by a small margin. Finally, we demonstrate our method on real data collected with a low-cost uncooled microbolometer camera and compare it to measurements taken with a scientific-grade radiometric camera to show that the method indeed works and can provide generalizations. To summarize the findings of this work:
1. A method for jointly estimating scene temperature and performing nonuniformity correction using only a single gray-level frame and the ambient temperature.
2. Development of a nonuniformity simulator that uses physical prior knowledge of radial pixel dependence. The simulator is general and can faithfully represent multiple cameras and situations for a wide range of ambient temperatures.
3. Elimination of the need to calibrate each camera separately.
4. Investigation of the effect of the physical constraint introduced in Eq. 2 on the scene temperature estimation.
## III Proposed method
The described method is aimed at providing a single-image scene-based method for correcting the space-variant degradation and temperature drift in microbolometer arrays.
The proposed method is composed of four steps:

1. Characterize the nonuniformity in an uncooled microbolometer thermal camera (section III-A), itself described by four steps:
    1. Model the camera response to a set of object temperatures (III-A1).
    2. Use the spatial dependency between pixels as a constraint (III-A2).
    3. Exploit symmetry around the middle of the frame (III-A3).
    4. Apply the method to new frames (III-A4).
2. Acquire a large dataset of accurate temperature maps.
3. Create samples using the accurate temperature maps and the synthetic nonuniformity (Alg. 2).
4. Train a CNN to perform NUC in a supervised manner (section III-B).
### _Characterization of the nonuniformity_
In this work, we used the low-cost uncooled-microbolometer thermal camera FLIR Tau2, because it allows access to raw measurements of thermal radiation. To estimate the nonuniformity for various ambient temperatures, the camera was placed in an environmental chamber (Fig. 4) and focused on an SR-800N blackbody, which served as the object of the setup.
The Tau2 was set to \(60\) frames per second (FPS). The camera output was set to radiation flux, so the raw measurement of each microbolometer is represented as a 14-bit integer. To acquire the rawest possible radiation flux, without any image processing, all the automatic image enhancements were disabled before each measurement (details are in a table in the supplementary material). The equipment and Tau2 parameters are elaborated in section IV-B.
An extensive dataset of camera responses was collected, comprised of the camera response at a known object temperature for different ambient temperatures. The camera response was measured for a series of operating points denoted as \(R(t_{amb},t_{obj})_{i}\), where \(t_{amb}\) is the ambient temperature, \(t_{obj}\) is the object temperature and \(i\in[1,\ldots,N]\). The measurements were made on a predefined set of temperatures such that \(t_{amb}\in T_{amb}\) and \(t_{obj}\in T_{obj}\). \(N\) images were averaged for each operating point to lower the noise per pixel \(\left(\propto N^{-0.5}\right)\). The averaged images at an operating point are denoted as \(\bar{R}(t_{amb},t_{obj})\).
#### III-A1 Object temperature dependence
The camera response for a given object temperature and ambient temperature can be estimated as a polynomial for each pixel:
\[\tilde{R}(t_{amb},t_{obj})[x,y]=\sum_{m=0}^{M_{pl}}\vec{b}_{C}(t_{amb})[m][x,y]\cdot t_{obj}^{m} \tag{4}\]
where \(M_{pl}\) is the degree of the polynomial fit, and \(\vec{b}_{C}\) are the pixel-wise coefficients of the polynomial, which depend on the ambient temperature.
To estimate the coefficient vector \(\vec{b}_{C}\), the dependence of the response on the object temperature was fitted separately for each ambient temperature. The dependence was determined from the operating-point measurements by estimating the polynomial coefficients for each \(t_{amb}\). A matrix of object temperatures at each operating point and a vector of camera responses are built for each \(t_{amb}\), and the polynomial coefficients are found _per pixel_ using least squares. We denote:
\[\underline{\underline{A}}_{O}[t_{amb}] =\begin{bmatrix}T_{obj}^{0}[1]&\ldots&T_{obj}^{M_{pl}}[1]\\ \vdots&\ddots&\vdots\\ T_{obj}^{0}[L_{obj}]&\ldots&T_{obj}^{M_{pl}}[L_{obj}]\end{bmatrix}_{L_{obj}\times(M_{pl}+1)}\] \[\vec{b}_{C}[t_{amb}] =\begin{bmatrix}c_{0}[t_{amb}]\\ \vdots\\ c_{M_{pl}}[t_{amb}]\end{bmatrix}_{(M_{pl}+1)\times 1}\] \[\vec{R}[t_{amb}] =\begin{bmatrix}R\left(t_{amb},T_{obj}[1]\right)\\ \vdots\\ R\left(t_{amb},T_{obj}[L_{obj}]\right)\end{bmatrix}_{L_{obj}\times 1}\]
where \(M_{pl}\) is the degree of the polynomial to fit, and \(L_{obj}\) is the length of \(T_{obj}\). Then the values of \(\vec{b}_{C}[t_{amb}][x,y]\) are estimated by solving the inverse problem:
\[\vec{R}[t_{amb}][x,y] =\underline{\underline{A}}_{O}[t_{amb}]\cdot\tilde{b}_{C}[t_{amb} ][x,y]\rightarrow \tag{5a}\] \[\vec{b}_{C}[t_{amb}][x,y] =\underline{\underline{A}}_{O}^{+}[t_{amb}]\cdot\vec{R}[t_{amb}][x,y] \tag{5b}\]
where \(\underline{\underline{A}}_{O}^{+}\) is the Moore-Penrose inverse of \(\underline{\underline{A}}_{O}\).
A set of coefficients \(\vec{b}_{C}\in\mathcal{R}^{M_{pl}}\) exists for each \(t_{amb}\in T_{amb}\). These coefficients are _pixel-wise_, meaning there are \(M_{pl}\) coefficient maps with spatial dimensions of \(h\times w\) for \(h,w\) the dimensions of each image. These coefficient maps were filtered using a Gaussian filter with \(\sigma=1\) to remove high-frequency noise stemming from dead pixels in the camera.
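The per-pixel fit of Eq. 5 is convenient to implement in vectorized form, since the design matrix \(\underline{\underline{A}}_{O}\) is shared by all pixels. Below is a minimal sketch; the function name and the joint solve over all pixels are our own, while the Gaussian smoothing with \(\sigma=1\) follows the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fit_object_poly(responses, t_obj, deg=2):
    """Per-pixel polynomial fit of camera response vs. object temperature (Eq. 5).

    responses : (L_obj, h, w) averaged frames, one per operating point at a fixed t_amb.
    t_obj     : (L_obj,) blackbody temperatures of the operating points.
    Returns   : (deg + 1, h, w) coefficient maps b_C[t_amb].
    """
    l_obj, h, w = responses.shape
    a_o = np.vander(t_obj, deg + 1, increasing=True)   # design matrix A_O
    r = responses.reshape(l_obj, h * w)
    b = np.linalg.pinv(a_o) @ r                        # Moore-Penrose solve, Eq. 5b
    b = b.reshape(deg + 1, h, w)
    # Smooth each coefficient map to suppress dead-pixel noise (sigma = 1).
    return np.stack([gaussian_filter(c, sigma=1) for c in b])
```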
Fig. 5 presents an example of the fitting results between the gray levels at an operating point and the real blackbody temperatures, as described in Eq. 5b. The number of coefficients was chosen empirically as \(M_{pl}=3\). The fit provides a good estimation of the data (\(R^{2}\geq 0.99\)).
Fig. 6 is a scheme of the coefficients for a given ambient temperature as described in Eq. 5b. Each coefficient map is two-dimensional.
The measurements are expected to be symmetrical around the middle of the image [12], but practical effects can create skew. The skewing limits the usability of the model because the skewed model does not accurately depict a general symmetrical case. The effect of the skewing on real data can be seen in Fig. 7a.

Fig. 4: Schematic of the environmental chamber. The chamber is controlled via a PC and is comprised of a Tau2 camera, a heating element and an SR-800N blackbody. FPA is the focal plane array of the camera.

Fig. 5: Example of quadratic fitting between the gray-level output of the Tau2 camera and the temperature of the blackbody as described in Eq. 5b. The measurements are taken at \(t_{amb}=38.9^{\circ}C\) and the coefficients for the pixel in the middle of the frame are displayed. Specifically, the estimated gray level of the camera in the middle pixel is \(\bar{R}(38.9,t_{obj})\approx 2215.32+0.36\cdot t_{obj}+2.55\cdot t_{obj}^{2}\).
#### III-A2 Spatial dependence
So far, the polynomial dependence of the camera's readings on \(t_{obj}\) and \(t_{amb}\) has been found for each pixel. To overcome the skewing, the coefficients from Eq. 5b are fitted to a spatial function. The spatial fitting is performed separately on each set of coefficients \(\vec{b}_{C}[m],\forall m\in[0,...,M_{pl}]\). The spatial fitting is performed twice: once with a quadratic polynomial and once with a high-degree polynomial. The coefficients that have the most profound effect on the skewing are the linear and quadratic coefficients. The ideal form of nonuniformity is expected to be axis-symmetric, and a quadratic function can be viewed as a low-frequency distortion of this symmetry. Thus, subtraction of the polynomials up to the quadratic coefficient removes the low-frequency distortion and alleviates the skew. An example of the skewing effect on real data and fitting results can be seen in Fig. 7.
Under these assumptions, we intend to find a skew-less axisymmetric polynomial approximation of the measurements. This will be achieved by first fitting the results to a spatial function, and then fitting again to a radial function.
The first step, fitting the coefficients of the camera response to a spatial function, exploits the correlation between neighboring pixels. The spatial fitting reduces the number of coefficients considerably, from \(\propto h\times w\) - the number of pixels - to \(\propto M_{sp}\) - the number of coefficients in the spatial fit, where \(M_{sp}\ll h\times w\).
To fit to a spatial function, we first define two matrices of dimensions \(h\times w\). The matrices are built from vectors in the range \([-0.5,0.5]\), in \(\underline{\underline{H}}\) as columns and in \(\underline{\underline{W}}\) as rows:
\[\underline{\underline{W}}=\begin{bmatrix}-0.5&\ldots&0.5\\ \vdots&\ddots&\vdots\\ -0.5&\ldots&0.5\end{bmatrix}_{h,w},\quad\underline{\underline{H}}=\underline{ \underline{W}}^{T}\]
The spatial fit is defined as:
\[K[t_{amb}] =\sum_{q=0}^{M_{sp}}\sum_{z=0}^{M_{sp}}\vec{\beta}_{C}[t_{amb}][q,z]\cdot\underline{\underline{H}}^{q}\cdot\underline{\underline{W}}^{z} \tag{6a}\] \[\vec{\beta}_{C}[t_{amb}] =\operatorname*{argmin}_{\vec{\beta}_{C}[t_{amb}]}\left\|\vec{b}_{C}[t_{amb}]-K[t_{amb}]\right\| \tag{6b}\]
where \(M_{sp}\) is the number of coefficients in the spatial fit. The powers \(q,z\) are applied element-wise to the matrices \(\underline{\underline{H}},\underline{\underline{W}}\), respectively.
We define \(\vec{\beta}_{C}^{C}\) as the quadratic fit with \(M_{sp}=2\), \(\vec{\beta}_{C}^{F}\) the fine fit with \(M_{sp}\gg 2\), and \(\vec{\beta}_{C}^{S}\) the final skew-less fit:
\[\vec{\beta}_{C}^{S}[t_{amb}]=\begin{cases}\text{Mean}(\vec{\beta}_{C}^{C}[q,z],\vec{\beta}_{C}^{F}[q,z]),&q=z=0\\ \vec{\beta}_{C}^{F}[q,z]-\vec{\beta}_{C}^{C}[q,z],&\forall q,z\neq 0\end{cases} \tag{7}\]
Notice that \(\vec{\beta}_{C}^{S}[t_{amb}]\in\mathcal{R}^{M_{sp}\times M_{sp}}\). The bias coefficient is averaged between the fits. Empirically, this is found to produce better results.
Fig. 8 shows a horizontal cross-sectional view of Fig. 7, along with the results of the spatial fitting in Eq. 7. The cross-sections of the measurements, fine fit, quadratic fit and subtraction fitting are presented. The subtraction fitting is calculated by subtracting the quadratic polynomial from the fine polynomial. The number of coefficients for the fine fit was set to \(M_{sp}=15\). The final fit does indeed alleviate the skewing, while remaining faithful to the measurements.
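A sketch of the spatial fit and skew removal of Eqs. 6-7 is given below. The helper names are ours, and treating the quadratic coefficients as simply subtracted from the matching low-order entries of the fine fit is our reading of Eq. 7:

```python
import numpy as np
from itertools import product

def spatial_fit(coef_map, m_sp):
    """Fit one pixel-wise coefficient map to a 2-D polynomial in H, W (Eq. 6)."""
    h, w = coef_map.shape
    ww, hh = np.meshgrid(np.linspace(-0.5, 0.5, w), np.linspace(-0.5, 0.5, h))
    # One design-matrix column per (q, z) power pair.
    cols = [(hh ** q * ww ** z).ravel()
            for q, z in product(range(m_sp + 1), repeat=2)]
    beta, *_ = np.linalg.lstsq(np.stack(cols, axis=1), coef_map.ravel(), rcond=None)
    return beta.reshape(m_sp + 1, m_sp + 1)

def deskew(coef_map, m_fine=15):
    """Skew-less coefficients of Eq. 7: fine fit minus quadratic fit."""
    beta_q = spatial_fit(coef_map, 2)       # coarse (quadratic) fit
    beta_f = spatial_fit(coef_map, m_fine)  # fine fit
    beta_s = beta_f.copy()
    beta_s[:3, :3] -= beta_q                # remove the low-frequency distortion
    beta_s[0, 0] = 0.5 * (beta_f[0, 0] + beta_q[0, 0])  # average the bias term
    return beta_s
```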
#### III-A3 Axis-symmetric fitting
To exploit the radial symmetry around the middle of the image, the spatial fit results of Eq. 7 are fitted to a radial kernel:
\[\underline{\underline{P}} =\sqrt{\underline{\underline{H}}^{2}+\underline{\underline{W}}^{2}},\qquad\underline{\underline{P}}\in\mathcal{R}^{h\times w} \tag{8a}\] \[J[t_{amb}] =\sum_{r=0}^{M_{rad}}\vec{\mathcal{B}}_{C}[t_{amb}][r]\cdot\underline{\underline{P}}^{r} \tag{8b}\] \[\vec{\mathcal{B}}_{C}[t_{amb}] =\operatorname*{argmin}_{\vec{\mathcal{B}}_{C}[t_{amb}]}\left\|\vec{\beta}_{C}^{S}[t_{amb}]-J[t_{amb}]\right\| \tag{8c}\]
Fig. 8: Side view of the skewing effect in the measurements, the second-order quadratic fit, the fine fit and the subtraction between the coefficients, as detailed in Eq. 7
Fig. 6: Example of pixel-wise coefficients found in Eq. 5b. The estimated radiance is the pixel-wise sum of the coefficients times the object temperature with the appropriate power. The coefficients are unique for each ambient temperature.
Fig. 7: Example of skewing in the measurements, and the spatial fitting is according to Eq. 7. Nonuniformity in the real image is \(1.4\%\).
where \(M_{rad}\) is the number of coefficients in the radial fit and \(\vec{\mathcal{B}}_{C}[t_{amb}]\in\mathcal{R}^{M_{rad}}\) are the radial fitting coefficients.
Notice that for each ambient temperature \(t_{amb}\) in the discrete set of measurements \(\{T_{amb}[0],\ldots,T_{amb}[L_{FPA}]\}\), there is a unique vector of radial coefficients \(\vec{\mathcal{B}}_{C}[t_{amb}]\). The last step in the estimation process is to express a polynomial approximation of the radial coefficients as a function of \(t_{amb}\); specifically, to find \(\vec{\mathcal{B}}(t)\) for any \(t\) in the continuous range \([T_{amb}[0],T_{amb}[L_{FPA}]]\).
The result of Eq. 8 should output the following approximation:
\[\tilde{\mathcal{B}}_{C}(t_{amb})\approx\sum_{m=0}^{M_{amb}}\Gamma[m]\cdot t_{amb }^{m}\]
where \(\Gamma\) are the coefficients of the ambient-temperature polynomial, \(M_{amb}\) is the degree of the polynomial and \(L_{FPA}\) is the length of \(T_{amb}\). Notice that the \(\Gamma\) coefficients do not depend on the spatial dimension. The approximated coefficients \(\tilde{\mathcal{B}}_{C}(t_{amb})\) are used together with the radial kernel \(\underline{\underline{P}}\) to synthesize a nonuniformity map for any ambient temperature in the measured range. The entire estimation process is summarized in Alg. 1.
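The radial fit of Eq. 8 and the subsequent ambient-temperature fit can be sketched as follows. Here `surface` is assumed to be the deskewed spatial fit evaluated on the pixel grid; the interface between the two functions is our own simplification:

```python
import numpy as np

def radial_fit(surface, m_rad=8):
    """Fit a 2-D surface to powers of the radial kernel P (Eq. 8)."""
    h, w = surface.shape
    ww, hh = np.meshgrid(np.linspace(-0.5, 0.5, w), np.linspace(-0.5, 0.5, h))
    p = np.sqrt(hh ** 2 + ww ** 2)
    a = np.stack([(p ** r).ravel() for r in range(m_rad + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(a, surface.ravel(), rcond=None)
    return coeffs  # radial coefficients B_C for one t_amb

def fit_ambient_poly(t_amb_grid, radial_coeffs, m_amb=3):
    """Express each radial coefficient as a polynomial of t_amb (the Gamma maps)."""
    # radial_coeffs: (L_FPA, m_rad + 1), one row per measured ambient temperature.
    return np.polynomial.polynomial.polyfit(t_amb_grid, radial_coeffs, m_amb)
```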
```
Data: Images of SR-800N blackbody at different operating points. Input:\(\mathbf{M_{pl}}\) is the degree of the polynomial of the object's temperature. \(\mathbf{M_{C}}\) is the degree of the coarse spatial fit. \(\mathbf{M_{F}}\) is the degree of the fine spatial fit. \(\mathbf{M_{rad}}\) is the degree of the radial fit. \(\mathbf{M_{amb}}\) is the degree of the camera temperature fit for the radial coefficients. Output: The \(t_{amb}\)-dependent spatial nonuniformity coefficients \(\Gamma\in\mathcal{R}^{M_{amb}\times M_{rad}}\).
1for\(t_{amb}\in T_{amb}\)do
2\(\tilde{b}_{C}[t_{amb}][x,y]\longleftarrow\) Eq. 5b, \(\forall x,y\in\) image
3 end for
4for\(m\in[0,...,M_{pl}]\)do
5for\(t_{amb}\in T_{amb}\)do
6 Gaussian filter on \(\tilde{b}_{C}[t_{amb}][m]\)
7\(\vec{\beta}_{C}^{C}[t_{amb}][m]\longleftarrow\) Quadratic spatial fit (Eq. 6)
8\(\vec{\beta}_{C}^{F}[t_{amb}][m]\longleftarrow\) Fine spatial fit (Eq. 6)
9\(\vec{\beta}_{C}^{S}[t_{amb}][m]\longleftarrow\) Subtract the spatial fits (Eq. 7)
10\(\vec{\mathcal{B}}_{C}[t_{amb}][m]\longleftarrow\) Radial spatial fit (Eq. 8c)
11 end for
12\(\Gamma[m]\longleftarrow\) Fit radial coefficients to \(t_{amb}\) (Eq. 13)
13 end for
```
**Algorithm 1**Estimation of nonuniformity maps.
The network shown in Fig. 10 operates as an end-to-end function:
\[\tilde{t}_{obj}=F(I(t_{amb}),t_{amb}) \tag{16}\]
where \(I(t_{amb})\) is a gray-level map taken at known ambient temperature \(t_{amb}\), and \(F\) is the output of the network blocks. We name this configuration **E2E** (end to end).
Equation 2 shows that the radiance is a linear function of the scene temperature, and Eq. 3 shows a linear dependence on the ambient temperature. To plug this prior knowledge into the
Fig. 10: Architecture of the estimation network U-Net [26]. The input is a frame with gray level values, and the output is a temperature map. This figure shows the end-to-end (E2E) configuration. In the network with a linear physical constraint (GxPD), the final block is replaced by two identical blocks as described in section III-B.
network, the final block in the U-NET was replaced with two blocks of the same configuration. Both blocks have the same input, which is the output of the layer before the split. These blocks extract the estimated object temperature \(\tilde{t}_{obj}\) from the linear approximation of the radiance shown in Eq. 3:
\[\begin{split} I(t_{amb})&=G(t_{amb})\cdot t_{obj}+D(t _{amb})\longrightarrow\\ \tilde{t}_{obj}&=\mathcal{G}\cdot I(t_{amb})+ \mathcal{D}\end{split} \tag{17}\]
where \(I(t_{amb})\) is the input to the network, and \(\mathcal{G}\approx\frac{1}{G},\mathcal{D}\approx-\frac{D}{G}\) are the outputs of the respective blocks. We name this configuration **GxPD** (Gx + D).
The effects of both networks are elaborated in section IV.
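A minimal PyTorch sketch of the GxPD head is shown below. The internal structure of the duplicated final block is not specified in the text, so the convolutional placeholder here is an assumption; only the split into two parallel blocks and the linear recombination of Eq. 17 follow the description:

```python
import torch.nn as nn

class GxPDHead(nn.Module):
    """Final stage of GxPD: two parallel blocks estimate the pixel-wise terms
    of Eq. 17, and the object temperature is recovered linearly from the input."""

    def __init__(self, in_ch):
        super().__init__()
        def block():  # placeholder for the final U-Net block (an assumption)
            return nn.Sequential(nn.Conv2d(in_ch, in_ch, 3, padding=1),
                                 nn.LeakyReLU(),
                                 nn.Conv2d(in_ch, 1, 1))
        self.g_block = block()  # outputs G ~ 1/G(t_amb)
        self.d_block = block()  # outputs D ~ -D(t_amb)/G(t_amb)

    def forward(self, features, frame):
        # Both blocks share the same input: the output of the layer before the split.
        return self.g_block(features) * frame + self.d_block(features)  # Eq. 17
```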
### _Loss functions_
The loss function is comprised of a fidelity term, a structural term, and a noise-reduction term. The fidelity term is the mean absolute error (MAE) which is robust to outliers [28], applied on the difference between the accurate temperature map \(T\) and the output of the network \(\hat{T}\):
\[\mathcal{L}_{Fid}=\frac{1}{h\cdot w}\sum_{i,j}\left|T_{i,j}-\hat{T}_{i,j}\right| \tag{18}\]
where \(h,w\) are height and width respectively.
The structural term is a dissimilarity index (DSSIM) based on the structural similarity metric (SSIM). The SSIM is aimed at providing a good metric for the human visual-perception system. Use of the DSSIM method has been shown to improve network performance in image-restoration tasks [29]. It is calculated as:
\[\mathcal{L}_{DSSIM}=\frac{1-\text{SSIM}(\bar{T},\hat{T})}{2} \tag{19}\]
The noise-reduction term is total variation loss [30]. The underlying assumption is that the sum of absolute gradients for noisy images is higher than for clean images:
\[\mathcal{L}_{TV}(\hat{T})=\frac{1}{h\cdot w}\sum_{i,j}\left|\hat{T}_{i,j+1}- \hat{T}_{i,j}\right|+\left|\hat{T}_{i+1,j}-\hat{T}_{i,j}\right| \tag{20}\]
where \(i,j\) denotes the pixel position.
The overall loss term for the network training is:
\[\mathcal{L}=\mathcal{L}_{Fid}+\beta\cdot\mathcal{L}_{DSSIM}+\gamma\cdot \mathcal{L}_{TV} \tag{21}\]
where \(\beta,\gamma\) are the hyperparameters that balance the loss terms.
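A sketch of the combined loss of Eq. 21 in PyTorch; the `pytorch_msssim` package is an assumed SSIM implementation, and the \(\beta,\gamma\) defaults below are placeholders for the tuned hyperparameters reported in the supplementary material:

```python
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed third-party SSIM implementation

def total_loss(pred, target, beta=1.0, gamma=1.0):
    """Combined training loss of Eq. 21: MAE + DSSIM + total variation."""
    l_fid = F.l1_loss(pred, target)                              # Eq. 18
    l_dssim = (1.0 - ssim(pred, target, data_range=1.0)) / 2.0   # Eq. 19
    l_tv = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean() \
         + (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()   # Eq. 20
    return l_fid + beta * l_dssim + gamma * l_tv
```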
### _Preprocessing_
The input to the network is a gray-level map created from an accurate temperature map using the synthetic nonuniformity described in Alg. 2. It can be written as:
\[I(t_{amb})=\hat{R}(t_{amb},t_{obj})+\mathcal{N}(0,\sigma^{2}) \tag{22}\]
where \(\hat{R}(t_{amb},t_{obj})\) is the synthetic gray-level map (Alg. 2), and \(\mathcal{N}\) is the additive Gaussian noise.
The input of the network is a frame representing the radiation flux measured by the microbolometers, given as 14-bit gray levels. To normalize them to the range \([0,1]\), the maximal and minimal gray-level values in the entire training and validation sets were obtained, and all inputs were normalized by:
\[\bar{I}(t_{amb})=\frac{I(t_{amb})-I_{\text{min}}}{I_{\text{max}}-I_{\text{min}}} \tag{23}\]
where \(\bar{I}(t_{amb})\) is the normalized input and \(I_{\text{min}},I_{\text{max}}\) are the minimal and maximal gray-levels over the datasets.
The accurate temperature maps must also be normalized to the range [0,1]. Again, the maximal and minimal temperatures were found over all datasets and both the output of the network and the original accurate temperature maps were normalized:
\[\bar{T}=\frac{T-T_{\text{min}}}{T_{\text{max}}-T_{\text{min}}} \tag{24}\]
where \(\bar{T}\) is the normalized accurate temperature map and \(T_{\text{min}},T_{\text{max}}\) are the minimal and maximal temperatures over all datasets.
Augmentations were applied during training and validation to enrich the dataset further. These included cropping to \(256\times 256\) pixels, random horizontal and vertical flips, and \(90^{\circ}\) rotations.
Random Gaussian noise with \(\sigma^{2}=5\) gray levels and FPN were generated for each frame. FPN was generated as:
\[M_{FPN}=\begin{bmatrix}1\\ \vdots\\ 1\end{bmatrix}_{h\times 1}\cdot\left(\begin{bmatrix}\mathcal{U}[v_{\min},v_{ \max}]\\ \vdots\\ \mathcal{U}[v_{\min},v_{\max}]\\ \end{bmatrix}^{T}\right)_{1\times w} \tag{25}\]
where \(\mathcal{U}\) is uniform distribution. \(v_{\min},v_{\max}\) were chosen as \(v_{\min}=0.9,v_{\max}=1\).
During training, all augmentations were generated and applied randomly, i.e., random cropping and flipping, and randomly generated noise and FPN. During validation, the crop was a \(256\times 256\)-pixel rectangle around the center of the frame, to make the validation process deterministic. Moreover, the Gaussian noise and FPN were generated once for each frame and used throughout the entire validation process. This was done to allow a fair comparison between experiments.
To construct the input of the network, first cropping and flipping augmentations were applied to a temperature map \(T\). Second, a random \(t_{amb}\) was generated and used with the augmented temperature map in Eq. 15 to obtain a simulated camera response \(\tilde{R}(t_{amb},T)\). Then, normalization was applied to this simulated response to get \(\bar{I}(t_{amb})\). The last step was to apply the Gaussian noise \(\left(\mathcal{N}(1,\sigma^{2})\right)\) and FPN \((M_{FPN})\) to the normalized simulated camera response:
\[I^{in}_{t_{amb}}=\mathcal{N}(1,\sigma^{2})\otimes M_{FPN}\otimes\bar{I}(t_{amb}) \tag{26}\]
where \(I^{in}_{t_{amb}}\) is the normalized gray-level input to the network and \(\otimes\) is the element-wise multiplication.
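The preprocessing pipeline of Eqs. 23, 25 and 26 can be sketched as below. `simulate_response` stands in for the simulator of Alg. 2 / Eq. 15 (not reproduced here), and the rescaling of the Gaussian \(\sigma\) to the normalized range is our assumption:

```python
import numpy as np

def make_network_input(t_map, t_amb, simulate_response, i_min, i_max,
                       sigma2=5.0, v_min=0.9, v_max=1.0, rng=None):
    """Build one normalized, noisy network input from a temperature map."""
    rng = rng or np.random.default_rng()
    h, w = t_map.shape
    r = simulate_response(t_amb, t_map)              # synthetic gray levels
    i_norm = (r - i_min) / (i_max - i_min)           # Eq. 23
    fpn = np.ones((h, 1)) @ rng.uniform(v_min, v_max, (1, w))  # Eq. 25
    sigma_norm = np.sqrt(sigma2) / (i_max - i_min)   # sigma in gray levels, rescaled
    noise = rng.normal(1.0, sigma_norm, (h, w))
    return noise * fpn * i_norm                      # Eq. 26
```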
### _Training details_
The network was trained using the ADAM optimizer [31] with a learning rate of \(10^{-4}\). The learning rate was halved on a validation-loss plateau of more than 3 epochs. The network was run for 100 epochs, but early stopping was applied for a validation-loss plateau of 8 epochs. The weights were initialized using the orthogonal scheme [32] with a scaling of \(50^{-2}\). The training was run on a single Nvidia 2080Ti. The network was written in Python 3.8 using PyTorch 1.10. The hyperparameters of the network are given in a table in the supplementary material. The optimal hyperparameters were found by optimizing the average MAE (Eq. 18) on the validation sets.
The convergence results for the validation MAE of the E2E and GxPD networks can be found in the supplementary material.
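A sketch of the optimization setup described above; plateau detection for early stopping is left to the training loop, and applying the orthogonal initialization only to convolutional and linear weights is our assumption:

```python
import torch

def make_training_setup(model):
    """ADAM at lr 1e-4, halving on a 3-epoch validation plateau."""
    for m in model.modules():
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
            torch.nn.init.orthogonal_(m.weight, gain=50 ** -2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=3)
    return optimizer, scheduler
```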
## IV Experimental results
The methods described in section III were used to estimate temperature maps and correct nonuniformity in microbolometer-based thermal cameras. The presented experiments are organized as follows:
1. The data and equipment used to develop the proposed method.
2. The results for characterization of the nonuniformity as presented in section III-A.
3. The results of the NUC performed by the network, including the effect of the physical constraint.
4. The results of the NUC performed by the network on real data.
### _Data_
The data for the environmental chamber were measured at ambient temperatures \(T_{amb}\)= {27, 31, 37.2, 38.9, 40.4, 41.5, 43.6, 44.7, 46.2, 46.8, 48, 50.8}\({}^{\circ}C\). The SR-800N temperatures at each operating point were \(T_{obj}\)= {20, 25, 30, 35, 40, 45, 50, 55, 60}\({}^{\circ}C\).
Noise variance was determined from the environmental chamber measurements:
\[\sigma^{2}[t_{amb},t_{obj}]=\frac{1}{h\cdot w}\sum_{i=0}^{h}\sum_{j=0}^{w} \text{Var}(R[t_{amb},t_{obj}][i,j])_{N} \tag{27}\]
As seen in Eq. 27, the noise variance used as input to train the network in Eq. 22 was the average over the spatial dimension of the variance map obtained from \(N\) images. The effects of \(t_{amb},t_{obj}\) on \(\sigma^{2}\) were found to be negligible, so the average of all \(\sigma^{2}\) was \(\sigma^{2}=5\) gray levels.
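Eq. 27 amounts to the spatial mean of the per-pixel temporal variance; a short numpy sketch (the function name is ours):

```python
import numpy as np

def noise_variance(stack):
    """Eq. 27: spatial average of the per-pixel variance over N repeated frames.

    stack : (N, h, w) frames captured at a single operating point."""
    return np.var(stack, axis=0).mean()
```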
As for the training of the network, the datasets were temperature maps collected using a FLIR A655sc camera, which is a scientific-level radiometric camera. The A655sc accuracy is only \(2\%\) of the temperature range in each frame.
The training dataset comprised \(12,897\) frames and the validation set \(4,723\) frames. All frames were of different agricultural fields in Israel, taken from an unmanned aerial vehicle (UAV) flying \(70_{m}-100_{m}\) above the ground. Only sharp frames, hand-picked by a human user, were used.
The validation sets were captured at the same locations as the training sets, but on different days. This validation procedure was chosen to eliminate data leakage between the training and validation sets, so that the metrics represent the ability of the network to generalize to different data. The training and validation dataset split remained the same for all training schemes, to allow a fair comparison between different experiments.
### _Equipment_
The environmental chamber used for the characterization process (section III-A) was designed and built at the Agricultural Research Organization, Volcani Institute. A cooking oven was adapted by controlling the heating element with a Campbell CR1000 controller. A PID control loop was implemented on the Campbell CR1000 to achieve a stable ambient temperature for the camera inside the oven. A schematic of the environmental chamber is presented in Fig. 4.
The Campbell CR1000, Tau2 and SR-800N were all controlled via Python3.8 from a Linux Ubuntu 20.04 computer.
The camera was calibrated using FLIR ThermalResearch v2.1. The configuration of the camera can be seen in a table in the supplementary material, and information on the various functions can be found in the Tau2 Quark Software IDD.
### _Camera characterization_
The number of coefficients for the radial fit (Eq. 8c) was set to \(M_{rad}=8\). The number of coefficients for the FPA fit of the radial coefficients (Eq. 13) was set to \(M_{amb}=3\). These values were chosen empirically.
The results of the nonuniformity characterization process described in section III-A as summarized by Alg. 1 are shown in Fig. 11. Four examples from different operating points are shown. These results illustrate that the fitting is both valid and corrects the skew in the measurements.
### _Nonuniformity correction_
A visual comparison of NUC between the proposed method and other methods is presented in Fig. 12. The left-most figure is the input to the network. The patch in the red square is zoomed-in and presented for GxPD, He et al. [22] and ADMIRE [21]. Observing the results, He et al. [22] does not thoroughly remove the nonuniformity, and ADMIRE [21] increases noise and adds surplus edges and details, thus limiting the
Fig. 11: Side view of the fitting described in Alg. 1. Panel (d) demonstrates how the skewing is corrected.
fidelity of its estimation. GxPD appears similar to the ground truth data. More visual results are in figures S17-S29 in the _supplementary material_.
A side-view of the results of the temperature estimation can be seen in Fig. 13. These figures contain the real temperature and the estimations made by the E2E network and the physically constrained GxPD network. As can be seen, both estimations are accurate and both network configurations are similar. The input to the network cannot be displayed alongside the plots, because it is in gray levels, whereas the network outputs temperatures in \({}^{\circ}C\).
Table I compares the metrics of the estimations between the different configurations and compares them to He et al. [22]. The latter results were retrained on the same data using the training scheme suggested by those authors [22]. For a fair comparison, we constrained our network to the same depth and number of filters as He et al. [22]. The results of the E2E network without the ambient temperature are also compared. The metrics in the table are an average of the metrics from all validation sets. Although ADMIRE [21] is compared visually in Fig. 12, its metrics cannot be compared in the table because the method does not estimate temperature, it only corrects nonuniformity.
As can be seen in Table I, the ambient temperature significantly improves the performance of the network. The results for the E2E and GxPD are similar, with the latter showing a marginal advantage. The similar performance between the E2E and GxPD networks can be explained by the expressive power of the neural network. The network in E2E can intrinsically represent the GxPD network [15]. Having said that, the MAE in GxPD is lower by \(4\%\) in comparison to E2E, meaning that the physical constraint still has a measurable effect on the results.
### _Real data_
We captured the same scene with an accurate A655sc scientific-level radiometric camera and with the Tau2 camera. The A655sc outputs a temperature map and the Tau2 outputs gray levels corresponding to the radiation flux. The camera
Fig. 12: Comparison of the results. A zoomed patch from the sample is presented (surrounded by a red square). From left to right - (a) the input sample, (b) the ground truth temperature map, (c) GxPD, (d) He et al. [22] and (e) ADMIRE [21].
used for capturing these images was different from the one used for the calibration process.
The ambient temperature and emissivity of the A655sc were tuned using an accurate temperature sensor placed in the scene. The scenes were registered by hand-picking correspondence points and performing a homography with OpenCV V4.5.4.
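The registration step can be sketched with OpenCV as below; the use of RANSAC for robustness is our addition, as the text only specifies hand-picked correspondence points and a homography:

```python
import cv2
import numpy as np

def register_frames(pts_tau2, pts_a655, tau2_frame, a655_shape):
    """Warp a Tau2 frame onto the A655sc view from hand-picked point pairs."""
    h_mat, _ = cv2.findHomography(np.float32(pts_tau2), np.float32(pts_a655),
                                  method=cv2.RANSAC)
    h, w = a655_shape
    return cv2.warpPerspective(tau2_frame, h_mat, (w, h))
```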
Five results are presented in Fig. 15, another result is presented in Fig. 14, and six more are presented in figures S1-S6 in the supplementary material. The gray scales are the temperatures taken using the A655sc. The blue patches in the frames are the per-pixel differences between the temperatures and the results of GxPD. The numbers in white are the MAE between GxPD and the temperature map. We used the GxPD method because its MAE results were better. The two uppermost figures are cars taken in the morning. The hot areas with high errors stem from direct sunlight hitting the metal and glass surfaces of the cars. The next two figures are buildings captured from a great distance. The last figure is a tree from a distance. Part of the error stems from registration errors between the two cameras, or from objects moving during acquisition (e.g., leaves in the lowest figure).
The range of the MAE is \(0.15^{\circ}C-0.93^{\circ}C\). This small error in temperature estimation is of the same order as the accuracy of the scientific A655sc. This accurate result was achieved without any thermographic corrections or NUC from the Tau2, only the radiation flux as gray levels. The exact configuration can be seen in the supplementary material. These results are also on-par with the results on the validation set (Table I) and with the visual results (Fig. 12).
## V Conclusion
A method to characterize the physical behavior of a system was demonstrated (section III-A). The characterization process allowed for supervised training of a deep learning network (section III-B). The temperature estimation performed by the network can be generalized to real data and different cameras (section IV-E). This allows for a faster NUC process that only requires a single collection of calibration data.
We also showed that the ambient temperature of the camera has a significant effect on the accuracy of the temperature estimation.
The proposed method (E2E) shows a significant improvement of roughly \(1^{\circ}C\) compared to previous works [22], producing an MAE of only \(0.42^{\circ}C\). This error was lowered even further by imposing a physical constraint (GxPD).
The physically constrained GxPD network achieved slightly better results than the E2E network. The small margin suggests that the non-linearity of the neural network is already capable of decomposing the dual dependency of the measurements on \(t_{obj}\) and \(t_{amb}\). We leave a deeper investigation of this aspect to future work.
The results show good agreement between the image estimation and the ground truth on simulated data with mean temperature error of \(0.37^{\circ}C\), as well as on real-world experimental data with mean temperature error ranging in \(0.15^{\circ}C-0.93^{\circ}C\).
## Acknowledgments
The authors thank Dr. Yaffit Cohen and Dr. Eitan Goldstein for the UAV data used in this work; and Moti Barak, Lavi Rosenfeld and Liad Reshef for the design and construction of the environmental chamber.
## Disclosures
The authors declare no conflicts of interest.
Fig. 14: Results of GxPD on an image of a building. The temperature map taken with the A655sc serves as the gray background, and the colored map is the difference between the results of GxPD and the temperature map. The number in white is the MAE in \({}^{\circ}C\) between the temperature map and the results of GxPD.
Fig. 13: Side view comparison of the network temperature estimation using the end-to-end (E2E) configuration and the physically constrained (GxPD) configurations.
## References
* [1] J. A. Ratches (2006) Current and future trends in military night vision applications. Ferroelectrics 342 (1), pp. 183-192.
* [2] P. Ghassemi, T. J. Pfefer, J. P. Casamento, R. Simpson, and Q. Wang (2018) Best practices for standardized performance testing of infrared thermographs intended for fever screening. PLOS ONE 13 (9), pp. 1-24.
* [3] P. W. Nugent, J. A. Shaw, and N. J. Pust (2012) Correcting for focal-plane-array temperature dependence in microbolometer infrared cameras lacking thermal stabilization. Optical Engineering 52, pp. 061304.
* [4] P. W. Nugent, J. A. Shaw, and N. J. Pust (2014) Radiometric calibration of infrared imagers using an internal shutter as an equivalent external blackbody. Optical Engineering 53, pp. 123106.
* [5] P. W. Nugent, J. A. Shaw, and N. J. Pust (2014) Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter. Applied Optics 56, pp. 884.
* [19] A. Averbuch, G. Liron, and B. Z. Bobrowsky (2007) Scene based non-uniformity correction in thermal images using Kalman filter. Image and Vision Computing 25 (6), pp. 833-851.
Supplementary material
Navot Oz, Nir Sochen, David Mendelovich, and Iftach Klapp
A quantitative evaluation of the proposed method is given in the experimental results section (IV) and summarized in Table I of the article. For completeness, below are qualitative results, i.e., images, of the proposed methods.
Figures S1, S2, S3, S4, S5, S6 show the results of the temperature estimation on real-world data. The temperature map taken with the A655sc serves as the gray background, and the colored map is the difference between the results of GxPD and the temperature map. The number in white is the MAE in \({}^{\circ}C\) between the temperature map and the results of GxPD.
Figures S7, S8, S9, S10, S11, S12 are some of the original frames taken by the Tau2. Figures S13, S14, S15, S16 are some of the ground truth frames taken by the A655sc. Notice that not all of the original frames could be displayed due to privacy limitations.
Figures S17-S29 are results of our GxPD model. A zoomed patch from the sample is presented (surrounded by a red square). From left to right - (a) the input sample, (b) the ground truth temperature map, (c) GxPD, (d) He et al. [1] and (e) ADMIRE [2].
Fig. S30 shows the convergence of the training for both E2E and GxPD networks.
Table I are the hyperparameters of the final networks used for all the metrics and figures. Table II are the parameters of the Tau2 used throughout the experiments.
## I Data
The training dataset was \(12,897\) frames of different agricultural fields with dimensions \(640\times 480\) pixels. The validation set was comprised of \(4,723\) frames. All frames were of different agricultural fields in Israel, taken from an unmanned aerial vehicle (UAV) flying \(70_{m}-100_{m}\) above the ground. 369 frames of corn fields taken in _Tzora_ village, 646 frames of peach trees taken in _Nir Eliyahu_ village, \(1,162\) frames of wheat fields taken in _Neve Yaar_ research station, 372 frames of vineyards taken in _Mevo Beitar_ village, 765 frames of vineyards taken in _Mevo Beitar_ village on a different day, \(1,048\) frames of cotton fields taken in _Neve Yaar_ research station, and 361 frames of wheat fields taken in _Gilat_ research station - for a total of \(4,723\) frames for validation.
The collection of data for the comparison between the A655sc and the Tau2 is elaborated in the article.
Fig. S4. Results of the proposed method on an image of a shed.
Fig. S5. Results of the proposed method on an image of a building. The errors stems from direct sunlight reflecting from glass surfaces (the sun is in the back of the camera).
Fig. S6. Results of the proposed method inside a warehouse. The image was taken from a close distance (\(15_{m}-20_{m}\)), so the registration between the two cameras was imperfect. The high error on the right frame stems from the registration error.
Fig. S7. The original frame taken by Tau2 of the jeep in Fig. S1 in the supplementary material.
Fig. S8. The original frame taken by Tau2 of the shed in Fig. S2 in the supplementary material. Notice the leaves at the edge of the tree.
Fig. S9. The original frame taken by Tau2 of the building in Fig. S3 in the supplementary material.
Fig. S11. The original frame taken by Tau2 of the building with reflections from the sunlight in Fig. S5 in the supplementary material.
Fig. S12. The original frame taken by Tau2 of the warehouse in Fig. S6 in the supplementary material.
Fig. S13. The original frame taken by A655sc of the building in Fig. 14 in the article.
Fig. S16. The original frame taken by A655sc of the warehouse in Fig. S6 in the supplementary material.
Fig. S30. Validation mean estimation error (MAE) in \({}^{\circ}C\) for the training of E2E and GxPD. |
2305.03644 | Rankings-Dependent Preferences: A Real Goods Matching Experiment | We investigate whether preferences for objects received via a matching
mechanism are influenced by how highly agents rank them in their reported rank
order list. We hypothesize that all else equal, agents receive greater utility
for the same object when they rank it higher. The addition of
rankings-dependent utility implies that it may not be a dominant strategy to
submit truthful preferences to a strategyproof mechanism, and that
non-strategyproof mechanisms that give more agents objects they \emph{report}
as higher ranked may increase market welfare. We test these hypotheses with a
matching experiment in a strategyproof mechanism, the random serial
dictatorship, and a non-strategyproof mechanism, the Boston mechanism. A novel
feature of our experimental design is that the objects allocated in the
matching markets are real goods, which allows us to directly measure
rankings-dependence by eliciting values for goods both inside and outside of
the mechanism. The experimental results are mixed, with stronger evidence for
rankings-dependence in the RSD treatment than the Boston treatment. We find no
differences between the two mechanisms for the rates of truth-telling and the
final welfare. | Andrew Kloosterman, Peter Troyan | 2023-05-05T16:04:36Z | http://arxiv.org/abs/2305.03644v3 | # Rankings-Dependent Preferences: A Real Goods Matching Experiment+
###### Abstract
We investigate whether preferences for objects received via a matching mechanism are influenced by how highly agents rank them in their reported rank order list. We hypothesize that all else equal, agents receive greater utility for the same object when they rank it higher. The addition of rankings-dependent utility implies that it may not be a dominant strategy to submit truthful preferences to a strategyproof mechanism, and that non-strategyproof mechanisms that give more agents objects they _report_ as higher ranked may increase market welfare. We test these hypotheses with a matching experiment in a strategyproof mechanism, the random serial dictatorship, and a non-strategyproof mechanism, the Boston mechanism. A novel feature of our experimental design is that the objects allocated in the matching markets are real goods, which allows us to directly measure rankings-dependence by eliciting values for goods both inside and outside of the mechanism. Our experimental results confirm that the elicited differences in values do decrease for lower-ranked goods. We find no differences between the two mechanisms for the rates of truth-telling and the final welfare.
## 1 Introduction
In strategyproof mechanisms, it is always an optimal strategy for agents to truthfully report their private information to the mechanism. This theoretical property is clearly appealing, as it gives a mechanism designer the ability to predict play and make meaningful statements about other criteria such as welfare. However, a growing body of empirical evidence has documented significant deviations from truthful reporting in such mechanisms. This issue is particularly important in matching markets, the focus in this paper, in which participants submit preference rankings of alternatives such as schools or medical residency programs to a centralized clearinghouse which determines the assignment. Evidence of non-truthful behavior in strategyproof matching mechanisms can be found both in the lab (Chen and Sonmez, 2006; Pais and Pinter,
2008; Li, 2017) and in high-stakes decisions in the field (Chen and Pereyra, 2019; Shorrer and Sovago, 2018; Hassidim et al., 2021).
Deviations from truthful reporting are harmful under the implicit assumption that agents' preferences are standard economic preferences in that the values for the objects they receive in the mechanism are determined solely by the characteristics of these objects. If this assumption holds, then, when we observe agents rank objects they value less above objects they value more in a strategyproof mechanism, we can claim that these deviations from truthful reporting are indeed "mistakes". But what if this underlying assumption is wrong? Then, these mistakes may not really be mistakes, but rather optimal behavior from agents with non-standard preferences.
In this paper, we explore the possibility that the rankings agents submit to the mechanism influence values. For instance, an agent may value an object higher when they rank it \(2^{nd}\) compared to a counterfactual in which they receive the same object but rank it \(4^{th}\), because they suffer disutility when they get a low-ranked object. There are a number of reasons why receiving low-ranked objects may be undesirable. These include reference-dependent loss aversion where agents expect to get a high-ranked object and are disappointed when they do not (Dreyfuss et al., 2019; Meisner and von Wangenheim, 2021), ego utility where agents think that it looks good to others to receive a high-ranked object (Koszegi, 2006), preferences that focus on beating others rather than maximizing one's own utility (a "joy of winning", Cooper and Fang, 2008), or limited information on quality that instigates a 'curse of acceptance' whereby receiving a low-ranked object indicates that it is bad (Kloosterman and Troyan, 2020).
Following this discussion, we consider agents who have utility from receiving object \(x\) that takes the form
\[u(x)=v(x)+\rho(\text{rank}(x))\]
We call \(v(x)\) the agent's _fundamental value_ for object \(x\); this corresponds to the standard economic preferences assumed in typical matching models. The second term, \(\rho(\text{rank}(x))\), is an additional _rankings-dependent utility_ component that is determined by how highly the agent ranked \(x\) in their reported preferences. Our main assumption is that \(\rho(\cdot)\) is a decreasing function, i.e., all else equal, agents receive more utility when they rank an object higher.
Rankings-dependent utility has important consequences for real-world market design. First, with rankings dependence, deviations from truthful reporting may be optimal in a strategyproof mechanism because an agent may be able to attain a higher-ranked, though less desirable (in terms of fundamental value), object by ranking a less popular object highly.1 Second, it has potential implications when evaluating the welfare of various mechanisms. Starting with the seminal paper of Abdulkadiroglu and Sonmez (2003), there has been much written about the debate between strategyproof mechanism such as deferred acceptance (DA) and manipulable mechanisms such as the Boston mechanism (also called immediate acceptance). While DA gives better incentives to the agents, comparing the final rank distributions (how many students receive their reported first choice, reported second choice, etc.), which is a common outcome metric for many school districts, (unsurprisingly) shows better performance for mechanisms
such as Boston which are designed with this goal in mind.2
Footnote 2: Featherstone (2020) provides a detailed analysis of a general class of _rank-efficient mechanisms_ that implement an assignment whose rank distribution is not first-order stochastically dominated by any other. See also Ortega and Klein (2022) and Troyan (2022), who look at the welfare and incentive properties of these mechanisms.
The standard critique is that these data cannot be taken at face value, because the Boston mechanism gives clear incentives for agents to manipulate their preferences, even in the absence of rankings dependence (see Dur et al. (2018) for evidence of such behavior in a real-world school choice environment). However, if preferences are indeed rankings-dependent, this gives a stronger argument for mechanisms in which more agents receive objects they report as higher-ranked: if more agents are receiving their reported first choices, and this in turn gives agents additional rankings-dependent utility, then such mechanisms can lead to increased total welfare. Indeed, in discussions with school district administrators, this is one argument that has been given for continued use of the Boston mechanism over strategyproof alternatives: parents just do not like to get something they ranked low in their list (Cambridge, MA School District, personal communication).
Thus, rankings-dependence has potentially important consequences for matching market design, and so determining whether these factors are relevant in practice is crucial. This is the main contribution of our paper: we design and implement a laboratory experiment to test the hypothesis of rankings-dependent utility. In our experiment, participants play in a matching market with five agents and five goods. As in most real-world implementations of matching markets, they play a one-shot game in which they are asked to submit a rank-order list of the five goods, and a mechanism is used to determine the final allocation to each agent.
The mechanisms that we use for our experiment are the random serial dictatorship (RSD) and the Boston mechanism. We chose these mechanisms because they are two canonical mechanisms that are widely used in practice. Further, RSD is strategyproof, while the Boston mechanism is not, yet the Boston mechanism may result in agents receiving higher-ranked goods. This allows us to answer not only our main question of rankings-dependent utility, but also to test the hypothesis that a non-strategyproof mechanism may be welfare-enhancing once rankings-dependent utility is taken into account.
We formalize hypotheses in Section 3 in a simplified environment that is relevant for the experiment. Participants have an incentive to rank less popular goods higher under both mechanisms to achieve a higher rankings-dependent utility; however, this incentive is stronger in Boston because popular goods are likely to be unavailable to them in later rounds of the mechanism. This means that if truthful reporting with respect to fundamental value is optimal in Boston, then it is also optimal in the RSD (Theorem 1), and that more agents get their top-ranked object in Boston than in RSD (Theorem 2). Finally, the extra incentive in Boston results in higher welfare in Boston than in the RSD in equilibrium (Theorem 3).
A novel and key feature of our experimental design is that we use real objects that are in the room at the time of the experiment and that the participants may take home with them. To determine whether utility is rankings-dependent, in Phase I of the experiment, we first elicit valuations for 20 common objects (backpacks, alarm clocks, phone chargers, etc.) with the multiple price list elicitation method. In Phase II of the experiment, five of the objects (a Fjallraven backpack, a Hydroflask water bottle, a Moleskine notebook, a generic ceramic coffee
mug, and a package of 4 ballpoint pens) were chosen and the participants were asked to submit a rank-order list of these five objects to a mechanism. The mechanism (either RSD or Boston) produces an allocation of one object to each participant. After the mechanism, we once again elicit each participant's valuation for the object that they were allocated in the mechanism. The important feature of this design is that the Net Value (Phase II value minus Phase I value), which we abbreviate NV, measures \(\rho(\text{rank}(x))\).
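Computationally, this identification amounts to a difference and a grouping step; the following minimal sketch (the records are fabricated placeholders, not our data) averages NV by the rank the participant assigned to the object they received:

```python
# NV = (Phase II value) - (Phase I value), grouped by the reported rank
# of the received object; all records below are fabricated placeholders.
records = [
    # (rank assigned to the received object, Phase I value, Phase II value)
    (1, 12.0, 15.0),
    (1, 10.0, 12.5),
    (3, 9.0, 9.5),
    (5, 7.0, 6.0),
]

nv_by_rank: dict[int, list[float]] = {}
for rank, phase1, phase2 in records:
    nv_by_rank.setdefault(rank, []).append(phase2 - phase1)  # NV measures rho(rank(x))

for rank in sorted(nv_by_rank):
    values = nv_by_rank[rank]
    print(f"rank {rank}: mean NV = {sum(values) / len(values):+.2f}")
```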
The use of real goods is a significant departure from the experimental literature on matching mechanisms which usually uses fictitious "goods" with induced monetary values (see Hakimov and Kubler (2021) for a recent survey of this literature). Measuring rankings-dependent utility would likely fail with induced values, because a participant's elicited valuation for an amount of money is likely to just be that amount of money, and so \(\rho(\text{rank}(x))\) would be zero. Further, using real goods makes the experiment more similar to real-world environments where participants must form their own preferences rather than have them being induced. It also mitigates a possible "experimenter demand" effect that could be present under an induced values framework, in which participants are given their preferences and asked to report them back. To our knowledge, we are the first to use real goods in a matching experiment, which is another contribution of our paper.
We present our results in Section 4. Our main hypothesis of interest is that the NV--which measures \(\rho(\text{rank}(x))\)--should be non-zero, and in particular decreasing in the reported rank of the good received in the mechanism. For the RSD treatment, we find clear evidence to support the hypothesis. The NV is nearly monotonically decreasing, from an average of \(+\$2.87\) for participants who receive their top-ranked good to an average of \(-\$0.69\) for participants who receive their fifth-ranked good (out of five). For the Boston treatment, there is a much smaller increase in NV for the first-ranked good (only about \(+\$0.60\)), and, looking at the raw data averages, there is no clear evidence that NV is decreasing in rank. Non-parametric tests provide statistical support for these impressions. However, when we move to regression analysis to further explore these results, we find support for this hypothesis in both treatments, with the rank being a statistically significant predictor of NV for both the RSD treatment and the Boston treatment separately, as well as for the pooled sample. What appears to explain the discrepancy for Boston is the inclusion of a regressor for risk aversion, which was measured using the Holt-Laury switching point, and a regressor for the Phase I value. For Boston, risk aversion enters positively in the regression, indicating that, all else equal, more risk-averse individuals have larger increases in NV.
Next, we evaluate truth-telling with respect to fundamental values and welfare. Interestingly, we find no differences in the rates of truth-telling between the two treatments according to a wide range of measures. This is notable given the plethora of experimental work going back to Chen and Sonmez (2006) that generally finds much less truth-telling in Boston than in deferred acceptance (which is equivalent to RSD in our setting), as predicted by the theory. In particular, the rate of truth-telling in our RSD treatment is much lower than others have found for DA, which is arguably a more complex mechanism.3 There are several possible
explanations for this finding. First, these other experiments all use an induced values design, whereas we use real goods. This means that participants must form both their own preferences and beliefs about the preferences of others, which could mean that they spent more effort on these tasks, and less on understanding the details of the mechanism and formulating their strategy. Further, we had to infer the truthful rankings from Phase I valuations, which are noisy. Last, our participants only played an incentivized mechanism once, though we did have an 8-minute unincentivized practice period against robots for them to learn the mechanism. On the other hand, we do find that while the rates of truthful reporting are the same between the two treatments, the reasons underlying truthful reporting may be different. Using regression analysis, we find that participants who scored higher on a Cognitive Reflection Task were more likely to report truthfully in RSD, while less risk averse participants and females were more likely to report truthfully in Boston. This suggests that non-truthful reporting in RSD may be mistakes akin to the standard argument, while non-truthful reporting in Boston is related to risk aversion and gender.
A growing literature in market design studies deviations from truthful reporting in strategyproof mechanisms, with most explanations treating these deviations as mistakes by participants.4
Footnote 4: Theoretical explorations include Ashlagi and Gonczarowski (2018), Troyan (2019), Pycia and Troyan (2020), and Bade and Gonczarowski (2017). For lab experiments, see Zhang and Levin (2017) and Bo and Hakimov (2019).
We take a different approach in this paper, which is a reassessment of the assumption that agent preferences are determined solely by the characteristics of the goods they receive. We discuss a few other recent papers that have explored related ideas.
The closest paper to ours theoretically is Meisner (2021). He proposes an equivalent model of utility that consists of a fundamental value plus a rankings-dependent component. He then focuses on strategyproof mechanisms, and proves that any non-truthful preference ranking can be rationalized as optimal for some beliefs over match probabilities (what he refers to as "attainability distributions", which are determined by the mechanism itself combined with beliefs about the strategies of the other agents).
Dreyfuss et al. (2019), Dreyfuss et al. (2022), and Meisner and von Wangenheim (2021) all focus on expectations-based reference-dependent preferences (EBRD preferences for short, also referred to as EBLA for expectations-based loss aversion; Koszegi and Rabin (2006)) as a possible explanation for seemingly dominated choices in strategyproof mechanisms. Dreyfuss et al. (2019) re-evaluate the experimental data of Li (2017), who finds mistakes in the non-obviously strategyproof RSD mechanism, and find that EBRD preferences might explain this behavior. Dreyfuss et al. (2022) study a lab experiment using four different implementations of the DA mechanism. Their experiment is very different from ours. Most notably, they use induced values and reduce the game to an individual decision problem in which participants know the probability of their priority score being above the threshold for admission to each school and are asked to choose actions that then induce lotteries over the schools.5 The focus of their experiment is on the rates of non-truthful behavior (what they call non-straightforward behavior). They show that the variations in non-truthful behavior they find can be better rationalized by EBRD preferences compared to classical preferences. However, Dreyfuss et al. (2022) also write that "the EBRD model, while explaining a lot of the observed data, appears to be an incomplete explanation", and note that other explanations likely play an important role too.
Footnote 5: Dreyfuss et al. (2022) frame their experiment using the language of school choice and ask students to rank hypothetical schools which have an induced monetary value. The schools are analogous to the objects in our set-up.
Our experimental design, on the other hand, was devised to provide a direct measurement of rankings dependent utility, though we are agnostic on the underlying source of it. Additionally, we have participants play a multiplayer game, an area in which there is less work incorporating non-standard preferences relative to decision problems. Finally, while Dreyfuss et al. (2022) focus on explaining deviations from truthful behavior, we also consider the possible welfare implications of rankings-dependent preferences. Put together, we think these papers are all very complementary, and combined make a strong case that rankings-dependent preferences are likely to be relevant in matching market environments. Having established that, though, there is still much more work to be done to understand both the behavioral foundations and welfare implications of these findings.6
## Model
2310.07694 | Speeding Up Squeezing with a Periodically Driven Dicke Model | We present a simple and effective method to create highly entangled spin
states on a faster timescale than that of the commonly employed one-axis
twisting (OAT) model. We demonstrate that by periodically driving the Dicke
Hamiltonian at a resonance frequency, the system effectively becomes a two-axis
countertwisting Hamiltonian which is known to quickly create Heisenberg limit
scaled entangled states. For these states we show that simple quadrature
measurements can saturate the ultimate precision limit for parameter estimation
determined by the quantum Cram\'er-Rao bound. An example experimental
realization of the periodically driven scheme is discussed with the potential
to quickly generate momentum entanglement in a recently described experimental
vertical cavity system. We analyze effects of collective dissipation in this
vertical cavity system and find that our squeezing protocol can be more robust
than the previous realization of OAT. | Jarrod T. Reilly, Simon B. Jäger, John Drew Wilson, John Cooper, Sebastian Eggert, Murray J. Holland | 2023-10-11T17:39:17Z | http://arxiv.org/abs/2310.07694v2 | # Speeding Up Squeezing with a Periodically Driven Dicke Model
###### Abstract
We present a simple and effective method to create highly entangled spin states on a faster timescale than that of the commonly employed one-axis twisting (OAT) model. We demonstrate that by periodically driving the Dicke Hamiltonian at a resonance frequency, the system effectively becomes a two-axis countertwisting Hamiltonian which is known to quickly create Heisenberg limit scaled entangled states. For these states we show that simple quadrature measurements can saturate the ultimate precision limit for parameter estimation determined by the quantum Cramer-Rao bound. An example experimental realization of the periodically driven scheme is discussed with the potential to quickly generate momentum entanglement in a recently described experimental vertical cavity system. We analyze effects of collective dissipation in this vertical cavity system and find that our squeezing protocol can be more robust than the previous realization of OAT.
_Introduction.--_ For centuries, advancements in precision measurements have continuously propelled the scientific community's understanding of the fundamental nature of reality. This inspired both the quantum revolution and Einstein's theories of relativity, with the frontier of each still advancing through the use of increasingly precise experiments [1; 2; 3; 4; 5]. Current state-of-the-art precision measurements can detect a change of mirror distance of \(10^{-3}\) of the proton's width in gravitational wave detectors [6; 7; 5] and have led to the development of atomic clocks with a fractional frequency uncertainty of \(10^{-21}\)[8], among many other groundbreaking achievements [9; 10; 11; 12; 13; 14; 15; 16; 17].
Most precision metrology experiments still operate at or above the standard quantum limit (SQL), which is the fundamental sensitivity threshold that arises from shot noise in measurements of uncorrelated quantum states. This limit on product states can be overcome through the use of entangled quantum states, and if this can be consistently utilized, it would revolutionize precision measurements with the potential to discover new physics. Although there have been proof-of-principle experimental demonstrations of quantum entanglement, applications for a true sensing purpose have so far been limited [18; 19; 20; 21; 22]. For example, spin squeezing offers a promising platform to perform atomic clock experiments beyond the SQL, but often requires a long squeezing time during which quantum correlations may be destroyed by decoherence.
In this Letter, we propose an experimentally relevant scheme to realize spin squeezing in a short propagation time. We show that driving the Dicke model [23; 24; 25; 26; 27; 28] at a parametric resonance leads to an effective two-axis countertwisting (TACT) Hamiltonian which can reach Heisenberg-limited scaling on a shorter timescale than the commonly employed one-axis twisting (OAT) Hamiltonian. While the TACT Hamiltonian has been studied theoretically [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42], it has so far remained elusive experimentally. We demonstrate how TACT may be realized in a current, state-of-the-art vertical cavity experiment [43; 44; 45; 46] by periodically modulating an injected field that drives the cavity. We discuss how to make optimal use of the system's entanglement for phase estimation using a recent advance that uncovers a state's full metrological potential by diagonalizing the quantum Fisher information matrix (QFIM) [47]. We then perform a Bayesian phase reconstruction sequence where, remarkably, we find that simple quadrature measurements saturate the quantum Cramer-Rao bound (QCRB) [48].
_Periodically Driven Dicke Model.--_ We consider \(N\) atoms that are collectively coupled through a cavity field. The atoms have ground state \(\left|\downarrow\right\rangle\) and excited state \(\left|\uparrow\right\rangle\), and we define the collective raising and lowering operators \(\hat{J}_{+}=\sum_{j}|\uparrow\rangle_{j}\left\langle\downarrow\right|_{j}=\hat{J}_{-}^{\dagger}\). This system has an underlying SU(2) symmetry with basis operators \(\hat{J}_{x}=(\hat{J}_{+}+\hat{J}_{-})/2\), \(\hat{J}_{y}=i(\hat{J}_{-}-\hat{J}_{+})/2\), and \(\hat{J}_{z}=[\hat{J}_{+},\hat{J}_{-}]/2\), as well as the quadratic Casimir operator \(\hat{J}^{2}=\hat{J}_{x}^{2}+\hat{J}_{y}^{2}+\hat{J}_{z}^{2}\). After eliminating the cavity in the dispersive regime, we consider dynamics governed by the time-dependent Dicke Hamiltonian [49]
\[\hat{H}=\hbar\Delta\hat{J}_{z}+\hbar\chi\cos(\omega t)\hat{J}_{x}^{2}, \tag{1}\]
where \(\Delta\) is a detuning and \(\chi\) scales the cavity-mediated nonlinearity. This Hamiltonian can model, for example, Raman transitions between hyperfine states using two time-dependent transverse fields [26; 27]. For now, we ignore cavity decay based on large cavity detuning, such that the dynamics are governed by the von Neumann equation \(\partial_{t}\hat{\rho}=-i[\hat{H},\hat{\rho}]/\hbar\) with density matrix \(\hat{\rho}\). We discuss the effects of non-negligible dissipation in the next section.
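For concreteness, the collective operators and Eq. (1) can be set up in a few lines; the following minimal Python sketch (with \(\hbar=1\) and a deliberately small atom number, not the code used for the figures) works in the symmetric subspace \(\{|j=N/2,m\rangle\}\):

```python
import numpy as np

N = 20                       # atom number, kept small for illustration
j = N / 2
m = np.arange(-j, j + 1)     # magnetic quantum numbers; dimension N + 1

Jz = np.diag(m).astype(complex)
# <m+1|J+|m> = sqrt(j(j+1) - m(m+1)) on the subdiagonal (basis ordered m=-j..j)
Jp = np.diag(np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)), k=-1).astype(complex)
Jm = Jp.conj().T
Jx = (Jp + Jm) / 2
Jy = 1j * (Jm - Jp) / 2

def dicke_H(t, Delta, chi, omega):
    """Time-dependent Dicke Hamiltonian of Eq. (1) with hbar = 1."""
    return Delta * Jz + chi * np.cos(omega * t) * (Jx @ Jx)
```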
The nonlinearity in Eq. (1) creates an entangled state which can be used to sense a physical parameter with a quantum advantage. To find the parameter \(\Phi\) that the generated state is most sensitive to, one finds the maximum quantum Fisher information (QFI), \(\lambda_{\max}\), by
calculating the largest eigenvalue of the quantum Fisher information matrix (QFIM) [47],
\[\mathbf{\mathcal{F}}\vec{\mathcal{O}}=\lambda_{\text{max}}\vec{\mathcal{O}}, \tag{2}\]
where the elements of the QFIM are given by [50]
\[\mathbf{\mathcal{F}}_{\mu\nu}=\sum_{i,j=0;\,\varrho_{i}+\varrho_{j}\neq 0}^{\text{dim}[\hat{\rho}]-1}\frac{2\operatorname{Re}\left[\bra{\varrho_{i}}\left[\hat{J}_{\mu},\hat{\rho}\right]\ket{\varrho_{j}}\bra{\varrho_{j}}\left[\hat{\rho},\hat{J}_{\nu}\right]\ket{\varrho_{i}}\right]}{\varrho_{i}+\varrho_{j}}, \tag{3}\]
with \(\mu,\nu\in\{x,y,z\}\) and the spectral decomposition \(\hat{\rho}=\sum_{i}\varrho_{i}\ket{\varrho_{i}}\!\bra{\varrho_{i}}\). The eigenvector \(\vec{\mathcal{O}}\) associated with this maximum eigenvalue corresponds to the generator \(\hat{\mathcal{G}}\) that encodes the optimal parameter [47], \(\exp\!\left[-i\hat{\mathcal{G}}\Phi\right]\). The QFI for unentangled states can reach the SQL, \(\lambda_{\text{max}}=N\), while entangled states can reach the Heisenberg limit (HL), \(\lambda_{\text{max}}=N^{2}\), which is the fundamental limit on sensing set by the Heisenberg uncertainty principle [51].
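Equation (3) can be evaluated directly from the spectral decomposition of \(\hat{\rho}\). The sketch below (reusing the operators from the previous block) implements an equivalent standard rewriting of Eq. (3) in the eigenbasis of \(\hat{\rho}\); the largest eigenvalue and eigenvector of the returned matrix give \(\lambda_{\max}\) and the optimal generator:

```python
import numpy as np

def qfim(rho, ops, tol=1e-12):
    """QFIM of Eq. (3) for Hermitian generators `ops`, e.g. [Jx, Jy, Jz].

    Uses the equivalent form F_mn = sum_ik 2 (w_i - w_k)^2 / (w_i + w_k)
    * Re[(J_m)_ik (J_n)_ki] in the eigenbasis of rho, skipping w_i + w_k ~ 0.
    """
    w, V = np.linalg.eigh(rho)                  # rho = V diag(w) V^dag
    tops = [V.conj().T @ op @ V for op in ops]  # generators in the eigenbasis
    F = np.zeros((len(ops), len(ops)))
    for a in range(len(ops)):
        for b in range(len(ops)):
            s = 0.0
            for i in range(len(w)):
                for k in range(len(w)):
                    if w[i] + w[k] > tol:
                        s += (2 * (w[i] - w[k]) ** 2 / (w[i] + w[k])
                              * (tops[a][i, k] * tops[b][k, i]).real)
            F[a, b] = s
    return F

# lam, O = np.linalg.eigh(qfim(rho, [Jx, Jy, Jz]))  # lam[-1] = maximum QFI and
# O[:, -1] = the optimal generator's direction in the (Jx, Jy, Jz) basis.
```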
Although we will use Eq. (1) for our numerical simulations, one can gain a better intuition of the dynamics by transforming into a rotating frame. We move into an interaction picture \(\tilde{\hat{\rho}}=\hat{U}^{\dagger}\hat{\rho}\hat{U}\) with \(\hat{U}=\exp\!\left[-i\Delta\hat{J}_{z}t\right]\), so that Eq. (1) becomes
\[\tilde{\hat{H}}=\frac{\hbar\chi}{4}\cos(\omega t)\left[e^{2i\Delta t}\hat{J}_ {+}^{2}+2\left(\hat{J}_{+}\hat{J}_{-}-\hat{J}_{z}\right)+e^{-2i\Delta t}\hat{ J}_{-}^{2}\right]. \tag{4}\]
In the majority of previous work, one assumes a constant nonlinear interaction rate \(\omega=0\). Then, in the limit \(|\Delta|\gg N|\chi|\), one makes the rotating-wave approximation (RWA) [53] to drop the fast-oscillating \(\hat{J}_{\pm}^{2}\) terms. We now explore an opposite regime in which the system is instead driven on the special resonance \(\omega=2\Delta\). Equation (4) after the RWA becomes
\[\hat{H}_{\text{PDD}}\approx\frac{\hbar\chi}{8}\left(\hat{J}_{+}^{2}+\hat{J}_{ -}^{2}\right), \tag{5}\]
which is seen by expanding \(\cos(\omega t)=(\exp[i\omega t]+\exp[-i\omega t])/2\). We label this as the periodically driven Dicke (PDD) model and note that it is reminiscent of two-axis countertwisting (TACT) [29], which was found to reach HL scaling on an exponential timescale [35; 31] through the pair production and twisting processes shown in Fig. 1. Beginning in the collective ground state \(\hat{\rho}_{0}=|\!\!\!\downarrow\rangle\!\!\langle\downarrow|^{\otimes N}\), we examine the sensitivity of the PDD model using the maximum QFI from Eq. (2). We display the dynamics of the QFIM eigenvalues in Fig. 2(a) for the case of \(N=100\). Here, one can see the exponential scaling of the maximum QFI on short timescales. In the rotating frame of Eq. (4), we find that the optimal generator corresponding to \(\lambda_{\text{max}}\) is given by \(\hat{\mathcal{G}}=(\hat{J}_{x}+\hat{J}_{y})/\sqrt{2}\). This can be understood by interpreting Eq. (5) as an analog to the photonic Kerr nonlinearity [54] which can be formalized if one performs the Holstein-Primakoff approximation assuming low atomic excitations [55; 56].
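A short propagation sketch (reusing `N`, the spin operators, and `qfim` from the blocks above, and setting \(\chi=1\)) makes this exponential growth of the maximum QFI explicit, using \(\hat{J}_{+}^{2}+\hat{J}_{-}^{2}=2(\hat{J}_{x}^{2}-\hat{J}_{y}^{2})\) to build Eq. (5):

```python
import numpy as np
from scipy.linalg import expm

chi = 1.0
H_pdd = (chi / 4) * (Jx @ Jx - Jy @ Jy)   # Eq. (5): (chi/8) * (J+^2 + J-^2)
psi = np.zeros(N + 1, dtype=complex)
psi[0] = 1.0                              # collective ground state |m = -j>

dt = 0.05 / (N * chi)
U = expm(-1j * H_pdd * dt)                # exact step (H_pdd is time independent)
for step in range(1, 201):
    psi = U @ psi
    if step % 40 == 0:                    # t_peak of Eq. (7) is ~0.5/chi for N = 20
        rho = np.outer(psi, psi.conj())
        qfi_max = np.linalg.eigvalsh(qfim(rho, [Jx, Jy, Jz]))[-1]
        print(f"t = {step * dt:5.3f}/chi : max QFI = {qfi_max:7.2f} "
              f"(SQL = {N}, HL = {N**2})")
```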
During the initial squeezing, the state can be rotated to have a high overlap with the Berry-Wiseman (BW)
Figure 2: (a) The three eigenvalues of the QFIM \(\mathbf{\mathcal{F}}\) for \(N=100\). The state evolves under Eq. (1) with \(\Delta=100N|\chi|\). The gray plus and asterisk indicate when the system reaches \(\hat{\rho}_{\text{BW}}\) and \(\hat{\rho}_{\text{peak}}\), respectively. Also shown is the largest eigenvalue of the QFIM for OAT with the same parameters (dashed black line). (b) The largest QFIM eigenvalue for \(\hat{\rho}_{\text{peak}}\). Also shown is the plateau value of \(N(N+1)/2\) for OAT. (c) Sensitivity, given by the standard deviation \(\sigma\) of the posterior distribution, for the optimal parameter \(\Phi\) after applying Bayes theorem. We display results for the states \(\hat{\rho}_{\text{BW}}\) and \(\hat{\rho}_{\text{peak}}\), and the dashed lines represent the QCRB for the respective state. The top and bottom dotted lines represent the SQL and HL, respectively. (d) Comparing the time of maximum QFI for PDD \(t_{\text{peak}}\) (orange plus) with the time OAT reaches its plateau \(t_{\text{pl}}\) (dotted red line) for a constant \(N|\chi|\). We also show the curve fit of the PDD simulations given by Eq. (7) (dashed blue line).
Figure 1: (a) Schematic of the pair production process \(\hat{J}_{+}^{2}\) that the PDD model drives (along with \(\hat{J}_{-}^{2}\)) to generate interparticle entanglement (dashed line). Here, \(\mathcal{S}\) is the symmetrizer which sums over all permutations of \(i\) and \(j\) [52]. (b) The collective Bloch sphere for \(\hat{\rho}_{\rm BW}\) in the rotating frame of Eq. (4). The color represents the stateβs overlap with the coherent spin state \(\ket{\theta,\phi}=\exp\!\left[-i\phi\hat{J}_{z}\right]\exp\!\left[-i\theta\hat{J}_{y}\right]\left|\downarrow\right\rangle^{\otimes N}\) at each point. The arrows indicate the direction of twisting about each axis.
phase state [57] which we label as \(\hat{\rho}_{\rm BW}\) and display in Fig. 1(b). This state maximizes the information gained about an unknown phase after a single measurement [58] and has a full \(2\pi\) dynamic range (see Supplemental Material (SM) [59]). The fidelity with the phase state reaches unity [35], which occurs at \(t\approx 5.57/(N|\chi|)\) for \(N=100\). As the system continues to squeeze, it reaches the state \(\hat{\rho}_{\rm peak}\) which maximizes the QFI in time. Here, the system is HL scaled with \(\lambda_{\rm max}\sim 0.65N^{2}\), and we discuss interesting properties of this state in the SM [59]. In the large \(N\) limit, we find that the maximum QFI asymptotes to \(\lambda_{\rm max}\sim 0.64N^{2}\), as shown in Fig. 2(b). One can then rotate \(\hat{\rho}_{\rm peak}\) to make a specific operator the optimal generator in order to exploit the largest amount of interparticle entanglement for a specific sensing purpose [47]. For example, in atomic clock systems, one would perform a \(\pi/2\) pulse about \((\hat{J}_{x}-\hat{J}_{y})/\sqrt{2}\) in the rotating frame to make \(\hat{J}_{z}\) the optimal generator.
While the PDD model can clearly reach a high QFI on an exponentially short timescale, the QCRB is not guaranteed to be achievable with experimentally accessible measurements. This is because the QFI implicitly optimizes over all measurement bases [50]. Remarkably, the system saturates the QCRB with simple population measurements by performing a Bayesian reconstruction protocol. To demonstrate this, we first rotate the state by a \(\pi/2\) pulse with the optimal generator \(\hat{\mathcal{G}}\) such that its anti-squeezed axis is parallel with the equator of the Bloch sphere. We encode the parameter \(\Phi\) using \(\hat{\mathcal{G}}\) and implement Bayes theorem \(P(\Phi|m)=P(m|\Phi)P(\Phi)/P(m)\), where \(P(m|\Phi)\) is the conditional probability of outcome \(m\) of a \(\hat{J}_{z}\) measurement given the phase \(\Phi\). We begin the process with a flat prior \(P(\Phi)={\rm const.}\), which we then consistently update using the posterior distribution \(P(\Phi|m)\) [60].
Figure 2(c) displays the sensitivity of the posterior distribution for the states \(\hat{\rho}_{\rm BW}\) and \(\hat{\rho}_{\rm peak}\) after the rotation to the Bloch sphere's equator. We also show the SQL and HL as the upper and lower dotted lines, and remarkably, the sensitivity of \(\hat{\rho}_{\rm peak}\) nearly reaches the HL. After \(M\) measurements, the respective QCRBs are given by \(1/\sqrt{M\lambda_{\rm max}(t)}\), which we plot as dashed lines. In both cases, the standard deviation \(\sigma\) of the posterior distribution \(P(\Phi|m)\) saturates this bound when \(M\gtrsim 100\), showing that simple quadrature measurements are optimal for the generated states. We can calculate the decibel gain over the SQL, \(G=10\log_{10}(\sqrt{\lambda_{\rm max}/N})\), and obtain \(G=5.7\,\)dB and \(G=9.1\,\)dB of squeezing for \(\hat{\rho}_{\rm BW}\) and \(\hat{\rho}_{\rm peak}\), respectively. For \(\hat{\rho}_{\rm peak}\) in the large \(N\) limit, we expect the gain to scale as \(G\approx 5\log_{10}(N)-1\).
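The reconstruction itself is a few lines of bookkeeping; the sketch below (reusing `psi`, `N`, and the spin operators from above, with illustrative grid and measurement numbers) applies the \(\pi/2\) alignment pulse, encodes a hidden phase with \(\hat{\mathcal{G}}=(\hat{J}_{x}+\hat{J}_{y})/\sqrt{2}\), and updates a gridded posterior after each simulated \(\hat{J}_{z}\) measurement:

```python
import numpy as np

rng = np.random.default_rng(1)

G = (Jx + Jy) / np.sqrt(2)                 # optimal generator (rotating frame)
g, W = np.linalg.eigh(G)                   # exp(-i G a) = W diag(e^{-i g a}) W^dag
probe = W @ (np.exp(-1j * (np.pi / 2) * g) * (W.conj().T @ psi))  # pi/2 pulse

grid = np.linspace(-np.pi, np.pi, 2001)    # candidate phases Phi
amps = np.exp(-1j * np.outer(g, grid)) * (W.conj().T @ probe)[:, None]
likelihood = np.abs(W @ amps).T ** 2       # P(m|Phi): rows = Phi, columns = m

idx_true = 1100                            # hidden phase, grid[1100] ~ +0.314
posterior = np.ones_like(grid) / grid.size # flat prior P(Phi) = const.
for _ in range(200):                       # M = 200 simulated measurements
    p = likelihood[idx_true] / likelihood[idx_true].sum()
    m = rng.choice(N + 1, p=p)             # draw a Jz outcome
    posterior *= likelihood[:, m]          # Bayes update, then renormalize
    posterior /= posterior.sum()

mean = np.sum(grid * posterior)
sigma = np.sqrt(np.sum((grid - mean) ** 2 * posterior))
print(f"estimate {mean:+.4f} +/- {sigma:.4f}, true {grid[idx_true]:+.4f}")
```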
As a means for comparison, we now consider \(\omega=0\) in Eq. (1) and eliminate the fast-oscillating \(\hat{J}_{\pm}^{2}\) terms via the RWA. This gives the one-axis twisting (OAT) Hamiltonian [29],
\[\hat{H}_{\rm OAT}\approx-\frac{\hbar\chi}{2}\hat{J}_{z}^{2}, \tag{6}\]
as exploited in Refs. [44; 45]. Here, we have used the relation \(\hat{J}_{+}\hat{J}_{-}=\hat{J}^{2}-\hat{J}_{z}^{2}+\hat{J}_{z}\) and ignored a constant energy shift of \(N(N/2+1)/2\) from the \(\hat{J}^{2}\) term since we remain in the collective subspace \(\{|j=N/2,m\rangle\,,\,\,-j\leq m\leq j\}\).
When the state begins in an eigenvector of \(\hat{J}_{x}\), \(\hat{\rho}_{0}=[(|\uparrow\rangle+|\downarrow\rangle)(\langle\uparrow|+\langle\downarrow|)/2]^{\otimes N}\), the OAT Hamiltonian reaches \(\lambda_{\rm max}=N(N+1)/2\) on a timescale of \(t_{\rm pl}\sim 4/(\sqrt{N}|\chi|)\) [47; 48; 61]. We show this initial behavior of the maximum QFI for OAT as a dashed black line in Fig. 2(a). The QFI then remains at this value for a long plateau before eventually growing again at \(t_{\rm pl,f}\sim\pi/|\chi|-4/(\sqrt{N}|\chi|)\) [47; 48; 61]. For typical parameters, this is often too long of a timescale since decoherence will significantly reduce the squeezing performance. We compare the typical timescales for HL scaling of PDD and OAT in Fig. 2(d). We find that the PDD model indeed scales on a much faster timescale, an observation which becomes more pronounced if one considers larger atom numbers. Fitting the scaling of the PDD model, we find that the time at which the QFI is maximized is given by
\[t_{\rm peak}\approx[\ln\bigl{(}N^{2}\bigr{)}+4]/(N|\chi|), \tag{7}\]
which approximately matches the analysis of Ref. [31] with the Wineland squeezing parameter. Therefore, the PDD model is a full order of magnitude faster than OAT when one scales up to \(N=10^{4}\) while reaching a higher QFI, as shown in Figs. 2(b) and 2(d). Moreover, the states created by OAT do not, in general, saturate the QCRB using simple quadrature measurements when encoding the optimal parameter \(\Phi\).
_Example Experimental Realization.--_ Having established that the PDD model can outperform OAT on short timescales, we now turn to a prototypical experimental realization of this scheme. For this, we consider momentum squeezing in a recent vertical cavity (VC) experiment [43; 44; 45; 46], shown schematically in Fig. 3(a). Details of the theoretical analysis of this setup are given in the SM [59], and we describe the general features below. A packet of \(N\) \({}^{87}\)Rb atoms falls through an optical mode of a VC under the influence of gravity \(\vec{g}=-g\vec{\varepsilon}_{z}\) with unit vector \(\vec{\varepsilon}_{z}\) along the vertical axis. The cavity decays at an intensity decay rate \(\kappa\), while an injected field pumps the VC at a rate \(\eta\). The atoms undergo Bragg transitions on the \(D_{2}\) cycling transition \(|F=2,m_{F}=2\rangle\leftrightarrow|F^{\prime}=3,m_{F^{\prime}}=3\rangle\) when the detuning between the cavity mode and atomic transition frequency is large. In this regime, the excited state \(|F^{\prime}=3,m_{F^{\prime}}=3\rangle\) can be adiabatically eliminated [45; 59] such that we can focus solely on the external degrees of freedom of the atoms.
The atoms are prepared with high overlap with the momentum ground state \(|0\hbar k\rangle\). By letting the atomic packet fall for a sufficient time \(\tau\) before turning on the injected field, the momentum states \(|0\hbar k\rangle\) and \(|2\hbar k\rangle\) become nearly degenerate in the co-falling reference frame [45]. This allows one to drive Bragg transitions
between \(\left|0\hbar k\right\rangle\leftrightarrow\left|2\hbar k\right\rangle\) while being energetically far from coupling to the \(\left|-2\hbar k\right\rangle\) and \(\left|4\hbar k\right\rangle\) states, truncating the momentum space to a collective two-level system (see SM [59]). We therefore define the collective momentum operators \(\hat{J}_{+}=\sum_{j}\left|2\hbar k\right\rangle\!\left\langle 0\hbar k\right|_{j} =\hat{J}_{-}^{\dagger}\).
We then displace the cavity field to account for the injected field from the external driving laser [59]. When the injected field is far detuned from the dressed cavity frequency, this displaced cavity field can be adiabatically eliminated with the result [59; 62]
\[\hat{H}_{\rm VC}=\hbar\omega_{g}\hat{J}_{z}-\hbar\chi(t)\hat{J}_{x}^{2}, \tag{8}\]
where we have defined the injected light field and the nonlinear interaction rate [59]
\[\beta(t)\approx-\frac{\eta(t)}{\Delta_{c}^{\prime}-\frac{i\kappa}{2}},\quad \chi(t)=\frac{\Delta_{c}^{\prime}U_{0}^{2}|\beta|^{2}}{(\Delta_{c}^{\prime})^{ 2}+\frac{\kappa^{2}}{4}}. \tag{9}\]
Here, \(\omega_{g}=4\omega_{r}-2kg\tau\) with recoil frequency \(\omega_{r}\), \(U_{0}\) is the light-momentum coupling strength [59; 45], and we have defined the dressed pump-cavity detuning \(\Delta_{c}^{\prime}\) that includes the Stark shift from the atoms \(\Delta_{c}^{\prime}=\Delta_{c}-NU_{0}\). We present a table outlining the various approximations that we assume to be valid to derive Eq. (8) in the SM [59], as well as relevant experimental parameters that satisfy these conditions.
We now wish to reverse engineer the driving profile \(\eta(t)\) such that Eq. (8) simplifies to the PDD Hamiltonian from Eq. (1). For this, we require \(\beta(t)=\beta_{0}\sqrt{\cos(\omega t)}\) and so we set \(\eta\propto\sqrt{\cos(\omega t)}\) which amounts to varying the amplitude and phase of the driving laser. However, since \(\chi\propto\left|\beta\right|^{2}\), this does not yet have the needed harmonics to parameterically drive TACT in Eq. (8). Therefore, we also oscillate the cavity detuning such that \(\Delta_{c}^{\prime}(t)=\Delta_{c}^{\prime}(0)\operatorname{sgn}[\cos(\omega t)]\). This promotes \(\left|\cos(\omega t)\right|\rightarrow\cos(\omega t)\) whereupon one sets \(\omega=2\omega_{g}\). The oscillation of \(\Delta_{c}^{\prime}\) can be accomplished with a time-dependent pump frequency or with time-dependent laser powers when one adds a second pump laser with shifted frequency. With this oscillation, Eq. (8) reduces to the PDD model of Eq. (1),
\[\hat{H}_{\rm VC}=\hbar\omega_{g}\hat{J}_{z}-\hbar\chi_{0}\cos(\omega t)\hat{J }_{x}^{2}, \tag{10}\]
where \(\chi_{0}=U_{0}^{2}|\beta_{0}|^{2}\Delta_{c}^{\prime}(0)/([\Delta_{c}^{\prime} (0)]^{2}+\kappa^{2}/4)\).
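The required modulation can be verified in a few lines. The sketch below (with arbitrary illustrative parameter values) builds \(|\beta(t)|^{2}\) from \(\eta\propto\sqrt{\cos(\omega t)}\), flips the sign of \(\Delta_{c}^{\prime}\) with \(\cos(\omega t)\), and checks that Eq. (9) collapses to the \(\chi_{0}\cos(\omega t)\) form of Eq. (10):

```python
import numpy as np

w = 2 * np.pi                                  # drive frequency w = 2 w_g
kappa, Dc0, U0, eta0 = 0.1, 10.0, 0.05, 1.0    # illustrative values only
t = np.linspace(0.0, 2.0, 1001)

Dc = Dc0 * np.sign(np.cos(w * t))              # sign-modulated dressed detuning
# eta ~ sqrt(cos(w t)) is complex when cos < 0, so |eta|^2 = eta0^2 |cos(w t)|
beta2 = eta0**2 * np.abs(np.cos(w * t)) / (Dc0**2 + kappa**2 / 4)   # |beta(t)|^2
chi = Dc * U0**2 * beta2 / (Dc**2 + kappa**2 / 4)                   # Eq. (9)

chi0 = U0**2 * eta0**2 * Dc0 / (Dc0**2 + kappa**2 / 4) ** 2
assert np.allclose(chi, chi0 * np.cos(w * t))  # reduces to Eq. (10)
```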
Since the cavity decays, we also obtain an effective jump operator
\[\hat{L}=\sqrt{\frac{\kappa U_{0}^{2}|\beta_{0}|^{2}|\cos(\omega t)|}{(\Delta_ {c}^{\prime})^{2}+\frac{\kappa^{2}}{4}}}\hat{J}_{x}, \tag{11}\]
from the adiabatic elimination of the cavity (see SM [59]). We can now evolve the system's density matrix \(\hat{\rho}\) under the Born-Markov master equation
\[\frac{\partial\hat{\rho}}{\partial t}=-\frac{i}{\hbar}\left[\hat{H}_{\rm VC}(t ),\hat{\rho}\right]+\hat{\mathcal{D}}[\hat{L}(t)]\hat{\rho}, \tag{12}\]
where the Lindbladian superoperator is given by \(\hat{\mathcal{D}}[\hat{O}]\hat{\rho}=\hat{O}\hat{\rho}\hat{O}^{\dagger}-(\hat {O}^{\dagger}\hat{O}\hat{\rho}+\hat{\rho}\hat{O}^{\dagger}\hat{O})/2\). In Fig. 3(b), we display the results for the maximum QFI, given by Eq. (2), for a density matrix evolved under Eq. (12) with different dissipation rates. For comparison, we also display results for OAT (\(\omega=0\)) with \(\kappa/|\Delta_{c}^{\prime}|=10^{-5}\) (solid black line) and \(\kappa/|\Delta_{c}^{\prime}|=10^{-2}\) (dashed black line). Notably, we find that even with a three orders of magnitude larger dissipation rate, the PDD model (\(\omega=2\omega_{g}\)) outperforms OAT on short timescales, which can be seen by comparing the dotted brown line to the solid black line.
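Equation (12) can be integrated directly with a fixed-step scheme; the sketch below (reusing `N`, `Jz`, and `Jx` from above, with \(\hbar=1\); the decay prefactor `gamma0` is an assumed illustrative scale, not one of the experimental rates) propagates Eq. (10) together with the jump operator of Eq. (11):

```python
import numpy as np

wg, chi0 = 1.0, 0.012 / N    # N*chi0 ~ 0.012 |w_g|, as quoted below
w = 2 * wg
gamma0 = 0.01 * N * chi0     # assumed illustrative collective decay scale
Jx2 = Jx @ Jx

def rhs(t, rho):
    """Right-hand side of the master equation, Eq. (12)."""
    H = wg * Jz - chi0 * np.cos(w * t) * Jx2             # Eq. (10)
    c2 = gamma0 * np.abs(np.cos(w * t))                  # |L|^2 prefactor, Eq. (11)
    drho = -1j * (H @ rho - rho @ H)
    drho += c2 * (Jx @ rho @ Jx - 0.5 * (Jx2 @ rho + rho @ Jx2))
    return drho

rho = np.zeros((N + 1, N + 1), dtype=complex)
rho[0, 0] = 1.0              # start in the collective ground state
dt = 2 * np.pi / w / 200     # resolve the drive at w = 2 w_g
for step in range(20000):    # short illustrative run with RK4 steps
    t = step * dt
    k1 = rhs(t, rho)
    k2 = rhs(t + dt / 2, rho + dt / 2 * k1)
    k3 = rhs(t + dt / 2, rho + dt / 2 * k2)
    k4 = rhs(t + dt, rho + dt * k3)
    rho += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("purity Tr[rho^2] =", np.trace(rho @ rho).real)
```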
To put our results into an experimental context, we adopt the setup of Refs. [44; 45] in which the atoms are allowed to fall for \(\tau=20\,\mathrm{ms}\) before the pump is turned on. This corresponds to \(|\omega_{g}|\sim 2\pi\times 0.5\,\mathrm{MHz}\) such that \(|\omega|\sim 2\pi\times 1\,\mathrm{MHz}\). Therefore, Fig. 3(b) shows an appreciable advantage of the PDD model compared to OAT after \(O(100\,\mu\mathrm{s})\). Furthermore, using the parameters of Fig. 3(b) with small dissipation rates \(\kappa/|\Delta_{c}^{\prime}|\ll 1\), we find \(N\chi_{0}\approx 0.012|\omega_{g}|\) and so Eq. (7) gives \(t_{\rm peak}\sim 355\,\mu\mathrm{s}\) while the OAT plateau time is \(t_{\rm pl}\sim 1.1\,\mathrm{ms}\). On the timescale of \(t_{\rm peak}\), an effective dephasing effect occurs from the increased energy difference between the momentum states as time progresses, which is accounted for in Ref. [44] by a spin echo sequence [53]. Furthermore, this dephasing is a single-particle effect and so increasing \(N\) can grow the collective squeezing rate without increasing the effective dephasing rate. We also confirm that the QCRB is saturated from the simple quadrature measurements considered in the previous section, which can be implemented in the experiment by performing fluorescence measurements
after a Mach-Zehnder interferometry sequence [43]. For the case of \(\kappa\approx|\Delta_{c}^{\prime}|/87\), which corresponds to the cavity decay of Ref. [44], we find that the PDD model reaches a maximum of \(G=7.5\,\mathrm{dB}\) during the initial squeezing.
_Conclusion and outlook.--_ Similar to parametric driving of nonlinear optical interactions to create non-classical states of light [63], in this Letter, we propose an analogous procedure to create non-classical states of matter through parametric driving. While we have focused on long-range interparticle interactions mediated through a dispersive cavity mode, our periodic driving methodology should be more broadly applicable to any system with controllable nonlinearities, such as trapped ions with phonon-mediated interactions [64; 65; 66; 67; 68], Bose-Einstein condensates with short- and long-range interactions [69; 70; 71], and solid state materials with spin-spin interactions [72; 73; 74]. Our periodic driving scheme is distinct from previous modulation proposals [32; 37] as it is implemented by simple parameter modulation of classical driving fields, thereby allowing direct modulation of nonlinear Hamiltonian terms. Unlike previous works on bosonic-mediated quantum amplification [75; 76; 77], the protocol presented here does not require squeezed bosonic modes and instead amplifies nonlinearities in the underlying matter to create non-classical, squeezed states. We have demonstrated that our proposed method can potentially be implemented in a current, state-of-the-art VC experiment [43; 44], which would be the first experimental realization of TACT. The system achieves HL scaling in reasonable timescales and has a simple optimal measurement basis, and therefore is a promising platform to create matterwave sensors with a true quantum advantage. Furthermore, it has been shown [34; 35] that TACT creates the Berry-Wiseman phase state, as well as high overlap with other theoretically studied states [78; 79; 80; 51].
* Huang _et al._ [2015]W. Huang, Y.-L. Zhang, C.-L. Zou, X.-B. Zou, and G.-C. Guo, Phys. Rev. A **91**, 043642 (2015).
* Liu _et al._ [2011]Y. C. Liu, Z. F. Xu, G. R. Jin, and L. You, Phys. Rev. Lett. **107**, 013601 (2011).
* Yukawa _et al._ [2014]E. Yukawa, G. J. Milburn, C. A. Holmes, M. Ueda, and K. Nemoto, Phys. Rev. A **90**, 062132 (2014).
* Kajtoch and Witkowska [2015]D. Kajtoch and E. Witkowska, Phys. Rev. A **92**, 013623 (2015).
* Opatrny [2015]T. Opatrny, Phys. Rev. A **91**, 053826 (2015).
* Wu _et al._ [2015]L.-N. Wu, M. K. Tey, and L. You, Phys. Rev. A **92**, 063610 (2015).
* Kruse _et al._ [2016]I. Kruse, K. Lange, J. Peise, B. Lucke, L. Pezze, J. Arlt, W. Ertmer, C. Lisdat, L. Santos, A. Smerzi, and C. Klempt, Phys. Rev. Lett. **117**, 143004 (2016).
* Borregaard _et al._ [2017]J. Borregaard, E. J. Davis, G. S. Bentsen, M. H. Schleier-Smith, and A. S. Sorensen, New Journal of Physics **19**, 093021 (2017).
* Anders _et al._ [2018]F. Anders, L. Pezze, A. Smerzi, and C. Klempt, Phys. Rev. A **97**, 043813 (2018).
* Zhang _et al._ [2021]J. Zhang, S. Wu, Y. Zhang, and Z. Zhou, Science China Information Sciences **64**, 122502 (2021).
* Hernandez Yanes _et al._ [2022]T. Hernandez Yanes, M. Plodzien, M. Mackoti Shinkeviciene, G. Zlabys, G. Juzeliunas, and E. Witkowska, Phys. Rev. Lett. **129**, 090403 (2022).
* Greve _et al._ [2022]G. P. Greve, C. Luo, B. Wu, and J. K. Thompson, Nature **610**, 472 (2022).
* Luo _et al._ [2023]C. Luo, H. Zhang, V. P. Koh, J. D. Wilson, A. Chu, M. J. Holland, A. M. Rey, and J. K. Thompson, arXiv preprint arXiv:2304.01411 (2023).
* Wilson _et al._ [2023]J. D. Wilson, C. Luo, J. T. Reilly, H. Zhang, A. Chu, A. M. Rey, M. J. Holland, and J. K. Thompson, Momentum based entanglement in a vertical cavity (2023), (to be published).
* Zhang _et al._ [2023]H. Zhang, A. Chu, C. Luo, J. K. Thompson, and A. M. Rey, Phys. Rev. Res. **5**, L032039 (2023).
* Reilly _et al._ [2023]J. T. Reilly, J. D. Wilson, S. B. Jager, C. Wilson, and M. J. Holland, Phys. Rev. Lett. **131**, 150802 (2023).
* Pezze _et al._ [2018]L. Pezze, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Rev. Mod. Phys. **90**, 035005 (2018).
* Kirton _et al._ [2019]P. Kirton, M. M. Roses, J. Keeling, and E. G. Dalla Torre, Advanced Quantum Technologies **2**, 1800043 (2019).
* Liu _et al._ [2019]J. Liu, H. Yuan, X.-M. Lu, and X. Wang, Journal of Physics A: Mathematical and Theoretical **53**, 023001 (2019).
* Holland and Burnett [1993]M. J. Holland and K. Burnett, Phys. Rev. Lett. **71**, 1355 (1993).
* Xu _et al._ [2013]M. Xu, D. A. Tieri, and M. J. Holland, Phys. Rev. A **87**, 062101 (2013).
* Steck [2007]D. A. Steck, Quantum and atom optics (2007).
* Caves [2020]C. M. Caves, Advanced Quantum Technologies **3**, 1900138 (2020).
* Holstein and Primakoff [1940]T. Holstein and H. Primakoff, Phys. Rev. **58**, 1098 (1940).
* Byrnes and Ilo-Okeke [2021]T. Byrnes and E. O. Ilo-Okeke, _Quantum atom optics: Theory and applications to quantum technology_ (Cambridge University Press, 2021).
* Berry [2001]D. W. Berry, _Adaptive Phase Measurements_, Ph.D. thesis, University of Queensland, Queensland (2001).
* Berry _et al._ [2009]D. W. Berry, B. L. Higgins, S. D. Bartlett, M. W. Mitchell, G. J. Pryde, and H. M. Wiseman, Phys. Rev. A **80**, 052114 (2009).
* [59]See Supplemental Material which includes Ref. [81, 82, 83, 84, 85]. Here, we comment on the states created by the PDD model, display a full derivation of achieving the PDD in the vertical cavity experiment, and comment on relevant experimental parameters that satisfy the needed approximations to achieve the PDD model.
* Reilly [2020]J. T. Reilly, _Entropy Removal and Coherence with Lasers_, Bachelor's thesis, University of Colorado Boulder (2020).
* Pezze and Smerzi [2009]L. Pezze and A. Smerzi, Phys. Rev. Lett. **102**, 100401 (2009).
* Jager _et al._ [2022]S. B. Jager, T. Schmit, G. Morigi, M. J. Holland, and R. Betzholz, Phys. Rev. Lett. **129**, 063601 (2022).
* Kippenberg _et al._ [2004]T. J. Kippenberg, S. M. Spillane, and K. J. Vahala, Phys. Rev. Lett. **93**, 083904 (2004).
* Sterk _et al._ [2012]J. D. Sterk, L. Luo, T. A. Manning, P. Maunz, and C. Monroe, Phys. Rev. A **85**, 062308 (2012).
* Linnet _et al._ [2012]R. B. Linnet, I. D. Leroux, M. Marciante, A. Dantan, and M. Drewsen, Phys. Rev. Lett. **109**, 233005 (2012).
* Wilson _et al._ [2014]A. C. Wilson, Y. Colombe, K. R. Brown, E. Knill, D. Leibfried, and D. J. Wineland, Nature **512**, 57 (2014).
* Bohnet _et al._ [2016]J. G. Bohnet, B. C. Sawyer, J. W. Britton, M. L. Wall, A. M. Rey, M. Foss-Feig, and J. J. Bollinger, Science **352**, 1297 (2016).
* Kahan _et al._ [2021]A. Kahan, L. Ermann, and C. Cormick, Phys. Rev. A **104**, 043705 (2021).
* Pyrkov and Byrnes [2013]A. N. Pyrkov and T. Byrnes, New Journal of Physics **15**, 093019 (2013).
* Kroeze _et al._ [2018]R. M. Kroeze, Y. Guo, V. D. Vaidya, J. Keeling, and B. L. Lev, Phys. Rev. Lett. **121**, 163601 (2018).
* Mivehvar _et al._ [2019]F. Mivehvar, H. Ritsch, and F. Piazza, Phys. Rev. Lett. **122**, 113603 (2019).
* Zhou _et al._ [2020]H. Zhou, J. Choi, S. Choi, R. Landig, A. M. Douglas, J. Isoya, F. Jelezko, S. Onoda, H. Sumiya, P. Cappellaro, H. S. Knowles, H. Park, and M. D. Lukin, Phys. Rev. X **10**, 031003 (2020).
* Xie _et al._ [2021]T. Xie, Z. Zhao, X. Kong, W. Ma, M. Wang, X. Ye, P. Yu, Z. Yang, S. Xu, P. Wang, Y. Wang, F. Shi, and J. Du, Science Advances **7**, eabg9204 (2021).
* Lee _et al._ [2023]J. Lee, M. Tatsuta, A. Xu, E. Bauch, M. J. H. Ku, and R. L. Walsworth, NPJ Quantum Information **9**, 77 (2023).
* Lu _et al._ [2015]X.-Y. Lu, Y. Wu, J. R. Johansson, H. Jing, J. Zhang, and F. Nori, Phys. Rev. Lett. **114**, 093602 (2015).
* Zeytinoglu _et al._ [2017]S. Zeytinoglu, A. m. c. Imamoglu, and S. Huber, Phys. Rev. X **7**, 021041 (2017).
* Burd _et al._ [2021]S. C. Burd, R. Srinivas, H. M. Knaack, W. Ge, A. C. Wilson, D. J. Wineland, D. Leibfried, J. J. Bollinger, D. T. C. Allcock, and D. H. Slichter, Nature Physics **17**, 898 (2021).
* Yurke and Stoler [1986]B. Yurke and D. Stoler, Phys. Rev. Lett. **57**, 13 (1986).
* Stockton _et al._ [2003]J. K. Stockton, J. M. Geremia, A. C. Doherty, and H. Mabuchi, Phys. Rev. A **67**, 022112 (2003).
* Combes and Wiseman [2004]J. Combes and H. M. Wiseman, Journal of Optics B: Quantum and Semiclassical Optics **7**, 14 (2004).
* Holevo [1984]A. S. Holevo, in _Quantum Probability and Applications to the Quantum Theory of Irreversible Processes_, edited by L. Accardi, A. Frigerio, and V. Gorini (Springer Berlin Heidelberg, Berlin, Heidelberg, 1984) pp. 153-172.
* Susskind and Glogower [1964]L. Susskind and J. Glogower, Physics Physique Fizika **1**, 49 (1964).
* Gradshteyn and Ryzhik [2000]I. Gradshteyn and I. Ryzhik, _Table of Integrals, Series, and Products_, 6th ed., edited by D. Zwillinger and A. Jeffrey (Elsevier Science, 2000).
* Metelmann _et al._ [2022]A. Metelmann, O. Lanes, T. Chien, A. McDonald, M. Hatridge, and A. Clerk, arXiv preprint arXiv:2208.00024 (2022).
* [85] D. A. Steck, Rubidium 87 d line data (2001).
# Supplemental Material: Speeding Up Squeezing with a Periodically Driven Dicke Model
Jarrod T. Reilly
JILA, NIST, and Department of Physics, University of Colorado, 440 UCB, Boulder, CO 80309, USA
Simon B. Jager
Physics Department and Research Center OPTIMAS, University of Kaiserslautern-Landau, D-67663, Kaiserslautern, Germany
John Drew Wilson
JILA, NIST, and Department of Physics, University of Colorado, 440 UCB, Boulder, CO 80309, USA
John Cooper
JILA, NIST, and Department of Physics, University of Colorado, 440 UCB, Boulder, CO 80309, USA
Sebastian Eggert
Physics Department and Research Center OPTIMAS, University of Kaiserslautern-Landau, D-67663, Kaiserslautern, Germany
Murray J. Holland
JILA, NIST, and Department of Physics, University of Colorado, 440 UCB, Boulder, CO 80309, USA
November 3, 2021
###### Contents
* I States Created by the Periodically Driven Dicke Model
* I.1 Berry-Wiseman Phase State
* I.2 State with Peak QFI
* II Model for Periodically Driving a Vertical Cavity
* II.1 Starting Point
* II.2 Elimination of the Electronic Excited State
* II.3 Displacement of the Cavity Field
* II.4 Adiabatic Elimination of the Cavity Field
* II.5 Reduction to Two Momentum States
* III Profile of the Injected Field
* IV Experimental Parameters
## I States created by the periodically driven Dicke model
In this section, we comment on some of the properties of two states that the periodically driven Dicke (PDD) model creates. We focus on the states examined in Fig. 2 of the Main Text, namely, the Berry-Wiseman (BW) phase state \(\hat{\rho}_{\text{BW}}\) and the state with the peak QFI \(\hat{\rho}_{\text{peak}}\).
### Berry-Wiseman Phase State
We begin by discussing the BW phase state, whose Q-function is shown in Fig. 1(b) of the Main Text. The Holevo variance for an ensemble of pseudospin-1/2 particles is defined as [1; 2]
\[V(\varphi)_{\psi}\equiv|\langle e^{-i\varphi}\rangle_{\psi}|^{-2}-1,\] (S1)
where
\[\begin{split}\langle e^{-i\varphi}\rangle_{\psi}& \equiv\int_{0}^{2\pi}P_{\psi}(\varphi)e^{-i\varphi}d\varphi,\\ P_{\psi}(\varphi)&\equiv\langle\psi|\,e^{-i\varphi \hat{J}_{z}}\,|\psi\rangle\,.\end{split}\] (S2)
This variance is useful because states with complete phase uncertainty (e.g., any \(|\psi\rangle=|j=N/2,m\rangle\)) will have infinite Holevo variance, whereas the typical phase variance [2], \(\Delta\varphi^{2}=\langle\varphi^{2}\rangle_{\psi}-\langle\varphi\rangle_{\psi} ^{2}\), has a maximum uncertainty of \(\Delta\varphi=2\pi\). It has been shown [2; 3] that the state which minimizes the Holevo variance is the BW phase state,
\[|\psi_{\text{BW}}\rangle=\frac{1}{\sqrt{\frac{N}{2}+1}}\sum_{m=-\frac{N}{2}} ^{\frac{N}{2}}\sin\left[\frac{\pi(\frac{N}{2}+m+1)}{N+2}\right]\left|\frac{N} {2},m\right\rangle,\] (S3)
such that \(\hat{\rho}_{\text{BW}}=|\psi_{\text{BW}}\rangle\!\langle\psi_{\text{BW}}|\). This state has \(V(\varphi)_{\text{BW}}=\pi^{2}/N^{2}\) and is notably an eigenstate of the Susskind cosine operator [4],
\[\widehat{\text{cos}}(\varphi)\equiv\frac{1}{2}\sum_{m=-N/2}^{N/2}\left( \left|\frac{N}{2},m+1\right\rangle\!\!\left\langle\frac{N}{2},m\right|+\text {H.c.}\right).\] (S4)
The BW phase state is of particular interest for phase estimation because its dynamic range is a full \(2\pi\), meaning \(\langle\psi_{\text{BW}}|\,e^{-i\varphi\hat{J}_{z}}\,|\psi_{\text{BW}}\rangle=1\) only if \(\varphi=n2\pi\) for integer \(n\). Simultaneously, it has a quantum Fisher information (QFI) reaching Heisenberg limit (HL) scaling at
\[\mathcal{F}_{\text{BW}}\approx\left(\frac{1}{3}-\frac{2}{\pi^{2}}\right)N^{2} \approx 0.13N^{2}.\] (S5)
These conditions guarantee that, with no a priori knowledge of \(\varphi\), the BW phase state is the optimal state to gain information in a single measurement [5], making it a useful state for a multitude of sensing applications. For example, creating a BW phase state in matterwave interferometry would guarantee that each measurement gives the highest resolution estimation of an acceleration, which would be a powerful tool for time-varying gravitational fields such as those that an orbiting satellite experiences.
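These properties are straightforward to verify numerically. The following is a small sketch of our own (not part of the original analysis) that builds the amplitudes of Eq. (S3) and checks the scaling of Eq. (S5) through \(\mathcal{F}=4\,\mathrm{Var}(\hat{J}_{z})\), which holds for pure states:

```python
import numpy as np

N = 100                                        # atom number
m = np.arange(-N / 2, N / 2 + 1)               # Dicke labels m = -N/2 ... N/2
c = np.sin(np.pi * (N / 2 + m + 1) / (N + 2)) / np.sqrt(N / 2 + 1)

print(np.sum(c**2))                            # normalization: 1.0
F = 4 * (np.sum(c**2 * m**2) - np.sum(c**2 * m) ** 2)
print(F / N**2, 1 / 3 - 2 / np.pi**2)          # both ~ 0.13, cf. Eq. (S5)
```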
### State with Peak QFI
We now discuss the state with the maximum QFI during the initial squeezing under the PDD model, \(\hat{\rho}_{\text{peak}}\). We display the Q-function of this state in Fig. 1(a) which shows that \(\hat{\rho}_{\text{peak}}\) has properties of a partial ring state [6]. One would expect this structure to be highly sensitive to rotations about \(\hat{J}_{z}\) and a point on the Bloch sphere's equator in the direction of the anti-squeezed
axis. This explains why the two largest eigenvalues of the QFIM in Fig. 2(a) of the Main Text correspond to \(\hat{\mathcal{G}}=(\hat{J}_{x}+\hat{J}_{y})/\sqrt{2}\) and \(\hat{\mathcal{G}}_{2}=\hat{J}_{z}\) in the rotating frame. Moreover, by taking a log of the Q-function, which we show in Fig. 1(b), one can see interference fringes form around part of a longitude line of the Bloch sphere. This is reminiscent of the interference fringes that are present in the \(N00N\) state [3; 7] and may explain why the state is more sensitive to rotations about \(\hat{\mathcal{G}}\) than \(\hat{J}_{z}\).
As the squeezing continues past \(\hat{\rho}_{\text{peak}}\) under the PDD model with small dissipation, the large population packets begin to converge towards each other at the north pole. However, the state's QFI remains larger than the SQL as interference fringes remain present with a small amount of population still in a partial ring. The state reaches a local minimum in QFI when the large population packets meet at the north pole, but then the QFI climbs back to \(\lambda_{\text{max}}>N^{2}/2\) as a ring-like structure reemerges. This ring-like state has \(\hat{\mathcal{G}}=\hat{J}_{z}\).
We can also briefly comment on the case of non-negligible dissipation. Damping of the fringes shown in Fig. 1(b) may explain why the optimal generator switches from \(\hat{\mathcal{G}}\) to \(\hat{J}_{z}\) in the double peak structure of Fig. 3(b) of the Main Text when \(\kappa/|\Delta_{c}^{\prime}|\gtrsim 10^{-2}\). Here, the first peak corresponds to the initial squeezing with \(\hat{\mathcal{G}}=(\hat{J}_{x}+\hat{J}_{y})/\sqrt{2}\), but now with a lower QFI that reaches its maximum value more quickly. The optimal generator then switches to \(\hat{J}_{z}\) for the second peak as the QFI with respect to \(\hat{J}_{z}\) rotations falls off less quickly when increasing \(\kappa\).
## II Model for periodically driving a vertical cavity
This section is dedicated to deriving the effective model for the vertical cavity (VC) experiment. We further discuss how the periodically driven Dicke model can be implemented in this system. We consider the experimental VC setup displayed schematically in Fig. 3(a) of the Main Text.
We also show an energy diagram of each atom in the system after they have fallen for a certain time \(\tau\) in Fig. 2. The internal states \(\ket{g}\equiv\ket{F=2,m_{F}=2}\) and \(\ket{e}\equiv\ket{F^{\prime}=3,m_{F^{\prime}}=3}\) are separated by an optical frequency \(\omega_{a}\) and we assume a closed-cycling transition where \(\ket{e}\) can decay back to \(\ket{g}\) at a rate \(\gamma\). The atoms interact with a single mode of an optical cavity, which has frequency \(\omega_{c}\), at a single atom vacuum coupling rate \(\varLambda\). A coherent field is injected into the cavity which drives the mode with a time-dependent rate \(|\eta(t)|\) and frequency \(\omega_{p}(t)\). The modulation of the frequency \(\omega_{p}(t)=\omega_{p}^{(0)}+\omega_{p}^{(1)}(t)\) is chosen to be around a frequency \(\omega_{p}^{(0)}\) with a fixed detuning to the atoms \(\varDelta_{a}=\omega_{a}-\omega_{p}^{(0)}\) and the cavity \(\varDelta_{c}=\omega_{c}-\omega_{p}^{(0)}\).
Figure 1: The state with the maximum QFI \(\hat{\rho}_{\text{peak}}\) for \(N=100\). (a) The Q-function calculated by finding the overlap with the coherent spin state \(\ket{\theta,\phi}\) at every point on the Bloch sphere. (b) The log of the Q-function to make the interference fringes more pronounced.
Figure 2: Schematic diagram of the frequency spectrum of a single atom in the vertical cavity setup. The states are labeled by their initial momentum value, i.e., \(\ket{i,p_{0}-mg\tau}\rightarrow\ket{i,p_{0}}\).
### Starting Point
Our theoretical analysis starts with the master equation for the density matrix \(\hat{\rho}_{\text{apc}}\) describing the atomic internal and motional degrees of freedom, as well as the cavity degree of freedom. The master equation is given by
\[\frac{\partial\hat{\rho}_{\text{apc}}}{\partial t}=-\frac{i}{\hbar}\left[\hat{H}_ {\text{apc}},\hat{\rho}_{\text{apc}}\right]+\hat{\mathcal{D}}\left[\sqrt{\kappa} \hat{a}\right]\hat{\rho}_{\text{apc}}+\sum_{j}\hat{\mathcal{D}}\left[\sqrt{ \gamma}\hat{\sigma}_{j}^{-}\right]\hat{\rho}_{\text{apc}},\] (S6)
where the coherent dynamics are governed by the Hamiltonian
\[\hat{H}_{\text{apc}}=\sum_{j}\left[\frac{(\hat{p}_{j}-mg\tau)^{2}}{2m}+\hbar \Lambda\cos(k\hat{x}_{j})(\hat{a}^{\dagger}\hat{\sigma}_{j}^{-}+\hat{\sigma}_ {j}^{+}\hat{a})+\hbar\Delta_{a}\hat{\sigma}_{j}^{+}\hat{\sigma}_{j}^{-}\right] +\hbar\Delta_{c}\hat{a}^{\dagger}\hat{a}+\hbar\left[\eta(t)\hat{a}^{\dagger}+ \text{H.c.}\right].\] (S7)
The first term in the Hamiltonian describes the kinetic energy with momentum operators \(\hat{p}_{j}\) of the atoms with mass \(m\) after falling for a time \(\tau\) under acceleration \(g\). The second term corresponds to the atomic-cavity coupling, where \(\cos(kx)\) is the standing-wave mode function of the cavity evaluated at the atomic position operators \(\hat{x}_{j}\) with wavenumber \(k\). In addition, we have introduced the creation and annihilation operators \(\hat{a}^{\dagger}\) and \(\hat{a}\) for the cavity mode and the internal excitation \(\hat{\sigma}_{j}^{+}=\ket{e}_{j}\bra{g}_{j}\) and \(\hat{\sigma}_{j}^{-}=\ket{g}_{j}\bra{e}_{j}\), respectively. The third term in Eq. (S7) is the energy of the excited state in the frame rotating with \(\omega_{p}^{(0)}\). The last two terms describe the energy of the photons and the driving of the cavity mode, where modulations of frequency \(\omega_{p}^{(1)}(t)\) and amplitude \(|\eta(t)|\) are encoded in the complex, time-dependent rate \(\eta(t)\). In addition to the coherent effects, the master equation also includes cavity photon losses with rate \(\kappa\) and spontaneous emission with rate \(\gamma\). The Lindblad superoperator \(\hat{\mathcal{D}}\) for these Markov processes is defined as
\[\hat{\mathcal{D}}[\hat{O}]\hat{\rho}=-\frac{1}{2}[\hat{O}^{\dagger}\hat{O} \hat{\rho}+\hat{\rho}\hat{O}^{\dagger}\hat{O}-2\hat{O}\hat{\rho}\hat{O}^{ \dagger}].\] (S8)
### Elimination of the Electronic Excited State
We work in the regime where the detuning \(|\Delta_{a}|\) is much larger than the spontaneous emission rate and any characteristic frequency determining the dynamics of the cavity and the atomic external degrees of freedom. In this regime, the atoms remain, to good approximation, in the electronic ground state and the dominant scattering process is coherent scattering of laser photons. In addition, we assume that the fixed atom-laser detuning is much larger than the dynamical variation of the frequency, \(|\Delta_{a}|\gg\omega_{p}^{(1)}\), which implies that the small modifications in the laser frequency have only a minor effect on the coherent scattering rates. Using these approximations based on the parameter regime of interest, we derive an effective master equation which governs the dynamics of the density matrix \(\hat{\rho}_{\text{pc}}\) of atomic external degrees of freedom and the cavity. This master equation is given by
\[\frac{\partial\hat{\rho}_{\text{pc}}}{\partial t}=-\frac{i}{\hbar}\left[\hat{H} _{\text{pc}},\hat{\rho}_{\text{pc}}\right]+\hat{\mathcal{D}}\left[\sqrt{\kappa}\hat{a} \right]\hat{\rho}_{\text{pc}},\] (S9)
with the Hamiltonian [8]
\[\hat{H}_{\text{pc}}=\sum_{j}\frac{(\hat{p}_{j}-mg\tau)^{2}}{2m}+\hbar\Delta_{c} ^{\prime}\left[1-\frac{U_{0}}{\Delta_{c}^{\prime}}\sum_{j}\cos(2k\hat{x}_{j}) \right]\hat{a}^{\dagger}\hat{a}+\hbar\left[\eta(t)\hat{a}^{\dagger}+\text{H.c.}\right].\] (S10)
The second term in Eq. (S10) describes the modified frequency of cavity photons which is shifted due to the presence of the atoms. Here, \(\Delta_{c}^{\prime}=\Delta_{c}-NU_{0}\) is the dressed cavity detuning with the ac Stark shift
\[U_{0}=\frac{\Lambda^{2}\Delta_{a}/2}{\Delta_{a}^{2}+\gamma^{2}/4}.\] (S11)
### Displacement of the Cavity Field
Next, we displace the cavity field by the field which is injected by the external laser. This is formally done by applying the displacement transformation
\[\hat{D}_{1}=\exp\bigl{[}\hat{a}^{\dagger}\beta(t)-\beta^{*}(t)\hat{a}\bigr{]},\] (S12)
onto the density matrix \(\tilde{\hat{\rho}}_{\rm pc}=\hat{D}_{1}^{\dagger}\hat{\rho}_{\rm pc}\hat{D}_{1}\). In this new displaced picture, we find
\[\frac{\partial\tilde{\hat{\rho}}_{\rm pc}}{\partial t}=-\frac{i}{\hbar}\left[ \tilde{H},\tilde{\hat{\rho}}_{\rm pc}\right]+\hat{\cal D}\left[\sqrt{\kappa} \hat{a}\right]\tilde{\hat{\rho}}_{\rm pc},\] (S13)
where the injected light field \(\beta\) follows the differential equation
\[\frac{\partial\beta}{\partial t}=-i\left(\Delta_{c}^{\prime}-\frac{i\kappa}{2} \right)\beta-i\eta.\] (S14)
With a solution for this differential equation, we obtain the following displaced Hamiltonian:
\[\tilde{\hat{H}}_{\rm pc}=\sum_{j}\left[\frac{(\hat{p}_{j}-mg\tau)^{2}}{2m}- \hbar U_{0}\cos(2k\hat{x}_{j})(\hat{a}^{\dagger}\beta+\beta^{*}\hat{a})-\hbar U _{0}|\beta|^{2}\cos(2k\hat{x}_{j})\right]+\hbar\Delta_{c}^{\prime}\left[1- \epsilon\sum_{j}\cos(2k\hat{x}_{j})\right]\hat{a}^{\dagger}\hat{a},\] (S15)
where \(\epsilon=U_{0}/\Delta_{c}^{\prime}\). We remark that in this displaced picture, there is no external driving of the cavity. Instead, the meaning of \(\hat{a}\) is now the scattered field due to the presence of atoms which, in the original picture, needs to be added to the injected field \(\beta\).
### Adiabatic Elimination of the Cavity Field
By assuming \(|\Delta_{c}^{\prime}|\) is now the largest frequency in the effective system, we are able to adiabatically eliminate the scattered cavity field. This requires that \(|\Delta_{c}^{\prime}|\) is much larger than the Doppler-shift of the atoms and also that the modulation of the drive is slow compared to \(1/|\Delta_{c}^{\prime}|\). In this limit, we can derive an effective master equation for the density matrix describing the atomic external degrees of freedom \(\hat{\rho}_{\rm p}\)[9].
To eliminate the field, we assume that the scattered field is, to a good approximation, in vacuum. We can then displace the field by
\[\hat{D}_{2}=\exp\bigl{[}\hat{a}^{\dagger}\hat{\alpha}-\hat{\alpha}^{\dagger} \hat{a}\bigr{]},\] (S16)
such that the equation of motion for \(\hat{\rho}\), where we dropped the "p" index for brevity, is given by [9]
\[\frac{\partial\hat{\rho}}{\partial t}=-\frac{i}{\hbar}[\hat{H}_{\rm VC},\hat{ \rho}]+\hat{\cal D}\left[\sqrt{\kappa}\hat{\alpha}\right]\hat{\rho},\] (S17)
with the Hamiltonian
\[\hat{H}_{\rm VC}=\sum_{j}\left[\frac{(\hat{p}_{j}-mg\tau)^{2}}{2m}-\hbar U_{0} |\beta|^{2}\cos(2k\hat{x}_{j})\right]-\frac{\hbar U_{0}}{2}\left[\beta\hat{ \alpha}^{\dagger}\sum_{j}\cos(2k\hat{x}_{j})+{\rm H.c.}\right].\] (S18)
We then solve for the effective field operator
\[\frac{\partial\hat{\alpha}}{\partial t}=-i\left[\frac{(\hat{p}_{j}-mg\tau)^{2} }{2m},\hat{\alpha}\right]-i\left[\Delta_{c}^{\prime}\left(1-\epsilon\sum_{j} \cos(2k\hat{x}_{j})\right)-\frac{i\kappa}{2}\right]\hat{\alpha}+iU_{0}\beta \sum_{j}\cos(2k\hat{x}_{j}).\] (S19)
Here, we have assumed that \(U_{0}|\beta|^{2}\) is much smaller than any momentum energy gaps (see Section II.5 for the relevant gaps) such that it can be dropped from the commutator in Eq. (S19).
We are considering parameters such that \(N|\epsilon|/2\ll 1\) so that we can drop the non-linearity \(\propto\epsilon\) in Eq. (S19). By further making the ansatz \(\hat{\alpha}(t)=a_{+}(t)\sum_{j}\exp[2ik\hat{x}_{j}]+a_{-}(t)\sum_{j}\exp[-2 ik\hat{x}_{j}]\), we can find equations of motion for the coefficients \(a_{\pm}\). In the parameter regime \(|\Delta_{c}^{\prime}-i\kappa/2|\gg\omega\), where \(\omega\) is the characteristic modulation frequency of \(\beta\) [see Eqs. (S26) and (S27)], we can integrate the differential equations for \(a_{\pm}\). Using the obtained results in \(\hat{\alpha}(t)\) leads to the effective field operator
\[\hat{\alpha}(t)\approx\frac{U_{0}\beta}{2}\sum_{j}\left[\frac{1}{\Delta_{c}^{ \prime}+\Delta p_{\pm 2}-\frac{i\kappa}{2}}e^{2ik\hat{x}_{j}}+\frac{1}{\Delta_{c}^{ \prime}-\Delta p_{\pm 2}-\frac{i\kappa}{2}}e^{-2ik\hat{x}_{j}}\right],\] (S20)
where \(\Delta p_{\pm 2}=(p\pm 2\hbar k-mg\tau)^{2}/(2\hbar m)-(p-mg\tau)^{2}/(2\hbar m)\).
We now assume that we are restricted to low energy motional states, which we will formally justify in Sec. II.5. For these states, we can set \(\Delta^{\prime}_{c}\pm\Delta p_{\pm 2}\approx\Delta^{\prime}_{c}\) such that the effective field operator becomes
\[\hat{\alpha}(t)\approx\frac{U_{0}\beta}{\Delta^{\prime}_{c}-\frac{i\kappa}{2}} \sum_{j}\cos(2k\hat{x}_{j}).\] (S21)
This is valid if \(\Delta^{\prime}_{c}\gg\Delta p_{\pm 2}\) and, for the situation considered here, amounts to \(\Delta^{\prime}_{c}\pm\omega_{g}\approx\Delta^{\prime}_{c}\) where \(\omega_{g}\) is given by Eq. (S27) in Sec. II.5.
Using Eq. (S21) in Eqs. (S17) and (S18), we find
\[\frac{\partial\hat{\rho}}{\partial t}\approx-\frac{i}{\hbar}[\hat{H}_{\rm VC},\hat{\rho}]+\hat{\cal D}\left[\sqrt{\Gamma_{c}(t)}\sum_{j}\cos(2k\hat{x}_{j} )\right]\hat{\rho},\] (S22)
and the Hamiltonian
\[\hat{H}_{\rm VC}\approx\sum_{j}\left[\frac{(\hat{p}_{j}-mg\tau)^{2}}{2m}-\hbar U _{0}|\beta|^{2}\cos(2k\hat{x}_{j})\right]-\hbar\chi(t)\sum_{i,j}\cos(2k\hat{x} _{i})\cos(2k\hat{x}_{j}).\] (S23)
Here, we have defined the nonlinear interaction rate
\[\chi(t)=\frac{\Delta^{\prime}_{c}U_{0}^{2}|\beta|^{2}}{(\Delta^{\prime}_{c})^ {2}+\kappa^{2}/4},\] (S24)
and the dissipation rate
\[\Gamma_{c}(t)=\frac{\kappa U_{0}^{2}|\beta|^{2}}{(\Delta^{\prime}_{c})^{2}+ \kappa^{2}/4}.\] (S25)
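As a compact consistency check of our own (a restatement of Eqs. (S24) and (S25), not an additional result): inserting Eq. (S21) into Eqs. (S17) and (S18) produces a single complex Lorentzian,
\[\chi(t)+\frac{i\Gamma_{c}(t)}{2}=\frac{U_{0}^{2}|\beta|^{2}}{\Delta_{c}^{\prime}-\frac{i\kappa}{2}},\]
so the coherent interaction rate and the dissipation rate are, respectively, the real part and twice the imaginary part of one and the same cavity response function.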
### Reduction to Two Momentum States
In our protocol, the atoms are initialized with momentum \(p=0\), which means they have the kinetic energy \(Nmg^{2}\tau^{2}/2\) after gravitational acceleration. The idea of the periodic driving with \(\eta\) is now to engineer an injected light field \(\beta\) which drives a pair creation process by flipping the momenta of two atoms to \(p=2\hbar k\). This requires that we drive with a frequency
\[\omega=2\omega_{g},\] (S26)
where \(\omega_{g}\) denotes the frequency required to excite a single atom from \(p=0\hbar k\) to the momentum state \(p=2\hbar k\),
\[\omega_{g}=\frac{(2\hbar k-mg\tau)^{2}-(mg\tau)^{2}}{2\hbar m}=4\omega_{r}-2 kg\tau.\] (S27)
Here, we have introduced the recoil frequency \(\omega_{r}=\hbar k^{2}/(2m)\). Thus, an appropriate driving profile would realize \(\chi(t)\propto\cos(\omega t)\). Using Eq. (S24), this can be realized with a driving resulting in \(\left|\beta(t)\right|^{2}\propto\left|\cos(\omega t)\right|\) and \(\Delta^{\prime}_{c}\propto\mathrm{sgn}[\cos(\omega t)]\), as explained in the Main Text. The latter corresponds to switching the driving frequency of the laser with respect to the cavity from red to blue detuned and back periodically in time.
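For completeness, the intermediate algebra behind Eq. (S27), which we spell out explicitly here, reads
\[\omega_{g}=\frac{(2\hbar k-mg\tau)^{2}-(mg\tau)^{2}}{2\hbar m}=\frac{4\hbar^{2}k^{2}-4\hbar k\,mg\tau}{2\hbar m}=\frac{2\hbar k^{2}}{m}-2kg\tau=4\omega_{r}-2kg\tau.\]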
We now want to restrict the dynamics of the atomic motional states to the momentum states \(\left|p=0\right>\) and \(\left|p=2\hbar k\right>\). This requires that we do not excite other momentum states, which can be justified using time-dependent perturbation theory. The two most relevant momentum flips occur due to (a) the single-particle term proportional to \(\cos(2k\hat{x}_{j})\) in Eq. (S23) which induces the momentum flip of a single atom \(p=\pm 2\hbar k\), and (b) the two-particle term proportional to \(\cos(2k\hat{x}_{i})\cos(2k\hat{x}_{j})\) in Eq. (S23) which can also amplify a pair with \(p_{1}=\pm 2\hbar k\) and \(p_{2}=-2\hbar k\). We examine the requirements to avoid these two processes individually:
(a) The frequency gap for a single flip into the state \(p=\pm 2\hbar k\) is \(\Delta\omega_{\pm}^{(1)}\). It can be calculated as
\[\Delta\omega_{\pm}^{(1)}=\frac{(\pm 2\hbar k-mg\tau)^{2}-(mg\tau)^{2}}{2\hbar m}=4 \omega_{r}\mp 2kg\tau.\] (S28)
The driving field \(\left|\beta\right|^{2}\propto\left|\cos(\omega t)\right|\) has frequency components that are multiples of \(2\omega=4\omega_{g}\). To neglect single momentum flips, we therefore require
\[\left|\frac{U_{0}|\beta|^{2}}{\left|4\omega_{g}\right|-\left|4\omega_{r}\mp 2kg \tau\right|}\right|\ll 1.\] (S29)
For large \(kg\tau\gg 2\omega_{r}\), this is true when \(\left|U_{0}||\beta\right|^{2}\ll 6kg\tau\).
(b) We now determine the frequency gaps \(\Delta\omega_{\pm}^{(2)}\) for the unwanted pair creation processes corresponding to creating \(p_{1}=\pm 2\hbar k\) and \(p_{2}=-2\hbar k\). The frequency gaps are given by
\[\begin{split}\Delta\omega_{+}^{(2)}&=\frac{(2\hbar k -mg\tau)^{2}-(mg\tau)^{2}}{2\hbar m}+\frac{(-2\hbar k-mg\tau)^{2}-(mg\tau)^{2}}{2\hbar m}= 8\omega_{r},\\ \Delta\omega_{-}^{(2)}&=2\frac{(-2\hbar k-mg\tau)^{ 2}-(mg\tau)^{2}}{2\hbar m}=8\omega_{r}+4kg\tau.\end{split}\] (S30)
Since we assume \(\chi(t)\propto\cos(\omega t)\) with \(\omega=2\omega_{g}\), these pair creation processes can be neglected if
\[\left|\frac{N\chi}{\left|2\omega_{g}\right|-\left|\Delta\omega_{\pm}^{(2)} \right|}\right|\ll 1.\] (S31)
Again assuming \(kg\tau\gg 2\omega_{r}\), this approximation is valid if \(N\chi\ll 16\omega_{r}\). In this calculation, we have included a factor of \(N\) because of the collective enhancement.
In the parameter regime where we can reduce the dynamics to atoms with momenta \(p=0\) and \(p=2\hbar k\), we can identify the momentum raising operator as an effective collective spin raising operator
\[\sum_{j}\exp[2ik\hat{x}_{j}]\rightarrow\hat{J}_{+}=\sum_{j}\left|2\hbar k \right>_{j}\left<0\hbar k\right|_{j}.\] (S32)
We also define \(\hat{J}_{-}=\hat{J}_{+}^{\dagger}\) as well as the SU(2) basis operators \(\hat{J}_{x}=(\hat{J}_{+}+\hat{J}_{-})/2\), \(\hat{J}_{y}=i(\hat{J}_{-}-\hat{J}_{+})/2\), and \(\hat{J}_{z}=[\hat{J}_{+},\hat{J}_{-}]/2\), where we note \(\sum_{j}\cos(2k\hat{x}_{j})\rightarrow\hat{J}_{x}\). With these definitions, we can rewrite the Hamiltonian in Eq. (S23) as the periodically driven Dicke (PDD) model
\[\begin{split}\hat{H}_{\text{VC}}&=\hbar\omega_{g} \hat{J}_{z}-\hbar\chi(t)\hat{J}_{x}^{2}\\ &=\hbar\omega_{g}\hat{J}_{z}-\hbar\chi_{0}\cos(\omega t)\hat{J}_{x}^{2},\end{split}\] (S33)
with \(\chi_{0}=U_{0}^{2}|\beta_{0}|^{2}\Delta_{c}^{\prime}(0)/([\Delta_{c}^{\prime} (0)]^{2}+\kappa^{2}/4)\). We also find a dissipative term with jump operator
\[\hat{L}=\sqrt{\Gamma_{c}(t)}\hat{J}_{x},\] (S34)
with \(\Gamma_{c}(t)\propto|\cos(\omega t)|\).
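The PDD model of Eqs. (S33) and (S34) is straightforward to integrate numerically. The snippet below is a minimal sketch of our own (not part of the original analysis), written against the QuTiP master-equation solver with QuTiP-4-style \((t,\text{args})\) coefficient functions; all parameter values are illustrative placeholders rather than the experimental numbers of Sec. IV.

```python
import numpy as np
import qutip as qt

N = 20                                     # atom number (kept small for speed)
j = N / 2
Jx, Jy, Jz = (qt.jmat(j, a) for a in 'xyz')

wg, chi0, Gamma0 = -1.0, 0.05, 1e-3        # |omega_g| sets the frequency unit
w = 2 * abs(wg)                            # resonant modulation, Eq. (S26)

# H = wg*Jz - chi0*cos(w*t)*Jx^2 with jump operator sqrt(Gamma0*|cos(w*t)|)*Jx
H = [wg * Jz, [-chi0 * Jx * Jx, lambda t, args: np.cos(args['w'] * t)]]
c_ops = [[np.sqrt(Gamma0) * Jx,
          lambda t, args: np.sqrt(np.abs(np.cos(args['w'] * t)))]]

psi0 = qt.basis(N + 1, N)                  # all atoms at p = 0, i.e. |j, -j>
tlist = np.linspace(0.0, 200.0, 400)
result = qt.mesolve(H, psi0, tlist, c_ops=c_ops, args={'w': w})

# 4*Var(G) equals the QFI for pure states and upper-bounds it for mixed states.
G = (Jx + Jy) / np.sqrt(2)
qfi_proxy = [4 * (qt.expect(G * G, s) - qt.expect(G, s) ** 2)
             for s in result.states]
print(f"max 4 Var(G) = {max(qfi_proxy):.1f}  (SQL: N = {N})")
```

Whether and when the standard quantum limit is surpassed depends on the chosen ratios \(\chi_{0}/|\omega_{g}|\) and \(\Gamma_{0}/\chi_{0}\), mirroring the dissipation dependence shown in Fig. 3(b) of the Main Text.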
## III Profile of the injected field
We now comment on the driving profile of the injected field into the VC setup that reproduces the behavior of the periodically driven Dicke model. We begin with the relationship between the injected field and standing field, Eq. (S14). Formally integrating and making a coarse-graining approximation, we find
\[\begin{split}\beta(t)&=e^{-i(\Delta_{c}^{\prime}-\frac{i\kappa}{2})t}\beta(0)-i\int_{0}^{t}ds\,e^{-i(\Delta_{c}^{\prime}-\frac{i\kappa}{2})s}\eta(t-s)\\ &\approx-\frac{\eta(t)}{\Delta_{c}^{\prime}-\frac{i\kappa}{2}},\end{split}\] (S35)
where we have assumed that the temporal variation of \(\eta\) is slow compared to the exponential kernel in the integral. Within this limit, we can now reverse engineer \(\eta(t)\) by simply inverting Eq. (S35).
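Explicitly, in this adiabatic limit the inversion is simply (our restatement)
\[\eta(t)\approx-\left(\Delta_{c}^{\prime}(t)-\frac{i\kappa}{2}\right)\beta(t),\]
which, for the target profile \(\beta(t)=\beta_{0}\sqrt{\cos(\omega t)}\), reproduces the second term of Eq. (S36) below.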
In the case that the coarse-graining approximation used in Eq. (S35) breaks down, one can instead plug
\(\beta(t)=\beta_{0}\sqrt{\cos(\omega t)}\) into Eq. (S14) with the result
\[\eta(t)=-\frac{i\omega\beta_{0}}{2}\sqrt{\sin(\omega t)\tan(\omega t)}-\beta_{0} \left(\Delta^{\prime}_{c}-\frac{i\kappa}{2}\right)\sqrt{\cos(\omega t)}.\] (S36)
While the second term in this equation is the adiabatic result for the driving profile, the first term exhibits divergences which originate from the non-analyticities of \(\sqrt{\tan(\omega t)}\). The first term contributes a factor of \(\sqrt{\omega/\Delta^{\prime}_{c}}\) in the integral for \(\beta(t)\), Eq. (S35). In an integral over \(\beta(t)\), it will be suppressed by a factor of \((\omega/\Delta^{\prime}_{c})^{3/2}\)[10], and so the second term in Eq. (S36) will be the dominant contribution. However, for experimental considerations, it might be advantageous to use a driving profile with a smooth intensity profile. In general, it might be interesting to explore several other driving profiles such as \(\eta(t)\propto\mathrm{sgn}[\cos(\omega t)]\) (square wave) and \(\eta(t)\propto 1-2\arccos[\cos(\omega t)]/\pi\) (triangle wave). We expect that these profiles can have similar performance for squeezing although they might lead to shifted parametric resonances for \(\omega\)[11] which can be derived using a Holstein-Primakoff approximation [12; 13] for early times. For practical applications, it is also of interest to optimize \(\eta(t)\) in order to achieve the maximum squeezing in minimum time with given experimental constraints. These considerations are left for future work.
## IV Experimental parameters
In this section, we present experimental parameters that lead to the values of \(\omega_{g}\), \(\beta_{0}\), \(U_{0}\), and \(\Delta^{\prime}_{c}\) used in Fig. 3(b) of the Main Text. We use \(N=100\) throughout this section. We begin with the single-atom coupling constant of the cavity used in Ref. [14], \(\Lambda=2\pi\times 0.5\,\mathrm{MHz}\). For this section, we also use the cavity loss rate from Ref. [14], \(\kappa=2\pi\times 56\,\mathrm{kHz}\). The cavity addresses the \(D_{2}\) cycling transition of \({}^{87}\)Rb, which is a \(\lambda=780\,\mathrm{nm}\) transition with a decay rate of \(\gamma=2\pi\times 6.066\,\mathrm{MHz}\)[15]. We assume the injected field leads to a cavity pump rate \(\eta_{0}=2\pi\times 33\,\mathrm{MHz}\) and is detuned from the atomic resonance by \(|\Delta_{a}|=2\pi\times 50\,\mathrm{MHz}\). The cavity frequency is also far detuned from the atomic resonance, while being detuned from the pump's frequency by \(|\Delta_{c}|=2\pi\times 5.1\,\mathrm{MHz}\). Since all frequencies are within \(O(100\,\mathrm{MHz})\) of one another, we approximate the wavenumbers \(k\) to be constant such that the recoil frequency from all photons in the system is approximated as \(\omega_{r}=2\pi\times 3.77\,\mathrm{kHz}\)[15].
With all of these specified experimental parameters, we obtain \(|U_{0}|=2\pi\times 2.5\,\mathrm{kHz}\), \(|\Delta^{\prime}_{c}|=2\pi\times 4.85\,\mathrm{MHz}\), and \(|\beta_{0}|=6.8\). Furthermore, a drop time of \(\tau=20\,\mathrm{ms}\) leads to \(kg\tau=2\pi\times 0.25\,\mathrm{MHz}\) such that \(\omega_{g}=-2\pi\times 0.488\,\mathrm{MHz}\). This satisfies \(kg\tau\gg 2\omega_{r}\), which was used in Eqs. (S29) and (S31), by a factor of 33. We thus have all of the needed quantities to simulate Eq. (S33). We can also calculate the perturbation \(|\epsilon|=5.1\times 10^{-4}\), standing field \(|U_{0}||\beta_{0}|^{2}=2\pi\times 0.115\,\mathrm{MHz}\), and effective non-linear interaction rate \(|\chi_{0}|=2\pi\times 59.2\,\mathrm{Hz}\) such that \(N|\chi_{0}|=2\pi\times 5.92\,\mathrm{kHz}\). We can now calculate ratios to check each of the approximations used in deriving Eq. (S33), which we present in Table 1. We find that all our approximations are satisfied by at least a factor of 10, while also satisfying \(|\Delta^{\prime}_{c}|\gg\omega_{g}\), used in Eq. (S21), by a factor of 10. We therefore expect our simulations of Eq. (S33) to be a realistic model of the current vertical cavity experiment of Ref. [14].
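These derived quantities can be reproduced directly from the stated inputs. The script below is our own consistency check (all frequencies are entered as their values in units of \(2\pi\times\mathrm{MHz}\)); it also evaluates the approximation ratios entering Eqs. (S29) and (S31):

```python
import numpy as np

# Stated inputs, in units of 2*pi*MHz unless noted otherwise.
Lam, kappa, gamma = 0.5, 0.056, 6.066
Da, Dc, eta0, N = 50.0, 5.1, 33.0, 100
wr, tau = 3.77e-3, 20e-3                       # recoil (2*pi*MHz), drop time (s)
k = 2 * np.pi / 780e-9                         # wavenumber (1/m)

U0 = Lam**2 * (Da / 2) / (Da**2 + gamma**2 / 4)          # Eq. (S11)
Dcp = Dc - N * U0                                        # dressed detuning
beta0 = eta0 / abs(Dcp - 1j * kappa / 2)                 # from Eq. (S35)
chi0 = U0**2 * beta0**2 * Dcp / (Dcp**2 + kappa**2 / 4)  # Eq. (S24)
kgt = k * 9.81 * tau / (2 * np.pi * 1e6)                 # k*g*tau (2*pi*MHz)
wg = 4 * wr - 2 * kgt                                    # Eq. (S27)

print(f"U0      = 2pi x {U0 * 1e3:.2f} kHz")             # quoted: 2.5 kHz
print(f"Dc'     = 2pi x {Dcp:.2f} MHz")                  # quoted: 4.85 MHz
print(f"|beta0| = {beta0:.1f}")                          # quoted: 6.8
print(f"U0*b0^2 = 2pi x {U0 * beta0**2:.3f} MHz")        # quoted: 0.115 MHz
print(f"N*chi0  = 2pi x {N * chi0 * 1e3:.2f} kHz")       # quoted: 5.92 kHz
print(f"wg      = -2pi x {-wg:.3f} MHz")                 # quoted: -0.488 MHz
# Approximation ratios of Eqs. (S29) and (S31):
print(f"U0*b0^2 / (6 k g tau) = 1/{6 * kgt / (U0 * beta0**2):.0f}")
print(f"N*chi0  / (16 wr)     = 1/{16 * wr / (N * chi0):.0f}")
```

Running this returns the quoted values to within rounding, and approximation ratios of roughly 1/13 and 1/10, consistent with the statement that all approximations hold by at least a factor of 10.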
|
2307.02323 | Enhanced Electron Spin Coherence in a GaAs Quantum Emitter | A spin-photon interface should operate with both coherent photons and a
coherent spin to enable cluster-state generation and entanglement distribution.
In high-quality devices, self-assembled GaAs quantum dots are near-perfect
emitters of on-demand coherent photons. However, the spin rapidly decoheres via
the magnetic noise arising from the host nuclei. Here, we address this drawback
by implementing an all-optical nuclear-spin cooling scheme on a GaAs quantum
dot. The electron-spin coherence time increases 156-fold from $T_2^*$ = 3.9 ns
to 0.608 $\mu$s. The cooling scheme depends on a non-collinear term in the
hyperfine interaction. The results show that such a term is present even though
the strain is low and no external stress is applied. Our work highlights the
potential of optically-active GaAs quantum dots as fast, highly coherent
spin-photon interfaces. | Giang N. Nguyen, Clemens Spinnler, Mark R. Hogg, Liang Zhai, Alisa Javadi, Carolin A. Schrader, Marcel Erbe, Marcus Wyss, Julian Ritzmann, Hans-Georg Babin, Andreas D. Wieck, Arne Ludwig, Richard J. Warburton | 2023-07-05T14:25:36Z | http://arxiv.org/abs/2307.02323v1 | # Enhanced Electron Spin Coherence in a GaAs Quantum Emitter
###### Abstract
A spin-photon interface should operate with both coherent photons and a coherent spin to enable cluster-state generation and entanglement distribution. In high-quality devices, self-assembled GaAs quantum dots are near-perfect emitters of on-demand coherent photons. However, the spin rapidly decoheres via the magnetic noise arising from the host nuclei. Here, we address this drawback by implementing an all-optical nuclear-spin cooling scheme on a GaAs quantum dot. The electron-spin coherence time increases 156-fold from \(T_{2}^{*}=3.9\,\mathrm{ns}\) to \(0.608\,\mathrm{\SIUnitSymbolMicro s}\). The cooling scheme depends on a non-collinear term in the hyperfine interaction. The results show that such a term is present even though the strain is low and no external stress is applied. Our work highlights the potential of optically-active GaAs quantum dots as fast, highly coherent spin-photon interfaces.
## I Introduction
Photonic quantum technologies require quantum emitters capable of high-fidelity and high-rate operation. Of particular interest for quantum networks [1; 2; 3; 4; 5; 6] and measurement-based quantum computing [7; 8; 9] are quantum emitters that host a spin [10], allowing the creation of spin-photon entanglement.
Self-assembled semiconductor quantum dots (QDs) are potential candidates for spin-photon interfaces due to the deterministic photon emission at exceptionally high quality and rates [11; 12; 13; 14] and the ability to load a QD with a single electron or hole [15]. This has led to demonstrations of spin-photon entanglement [16; 17; 18; 19], distant spin-spin entanglement [20; 21], and the creation of multi-photon cluster states [22; 23; 24]. However, in these previous experiments, the short spin coherence times, \(T_{2}^{*}\sim 1-10\,\mathrm{ns}\), limited the entanglement fidelity. The short \(T_{2}^{*}\) is a consequence of magnetic noise in the host nuclear spins, coupling to the electron spin via the hyperfine interaction [25; 26; 27].
A powerful way to mitigate the short \(T_{2}^{*}\) is to cool the nuclear spins to ultralow temperatures in order to reduce the fluctuations. The nuclei can be cooled via the electron spin itself, exploiting the hyperfine interaction [28]. In an optical experiment, this was originally demonstrated on an ensemble of QDs [29]. On single QDs, nuclear spin cooling was demonstrated on gate-defined GaAs QDs via a measure-and-correct feedback loop [30; 31]. More recently, the highly inhomogeneous nuclear spins of a self-assembled InGaAs QD were cooled via an autonomous feedback [32]. Subsequently, a quantum sensing protocol was employed, narrowing the nuclear distribution further, thereby increasing \(T_{2}^{*}\) to \(300\,\mathrm{ns}\)[33]. For both schemes, a non-collinear term in the hyperfine interaction is required to allow for the cooling of the nuclei. In contrast to the collinear term from the contact hyperfine interaction (\(\propto S_{z}I_{z}\)), the non-collinear term (\(\propto S_{z}I_{x}\)) arises from nuclear quadrupolar fields in strained QDs; here \(S_{z}\) (\(I_{z}\)) is the electron (nuclear) spin operator along the direction of the applied magnetic field [34; 28; 35].
The most studied QDs for spin-photon applications are QDs in the InGaAs/GaAs system. InGaAs QDs are self-assembled via the strain-driven Stranski-Krastanov mechanism. Self-assembled GaAs QDs in an AlGaAs matrix represent an alternative platform. The strain is low such that these QDs are self-assembled via an alternative mechanism, droplet-etching. Low-noise GaAs QDs have excellent photonic properties, all at a convenient wavelength (around 780 nm). In high-quality material, the optical linewidths are within 10% of the transform limit [36]. Photons emitted by remote QDs have achieved a two-photon interference visibility of 93% without spectral or temporal filtering [37]. The biexciton cascade generates entangled photon pairs with an extremely high entanglement concurrence [38]. In terms of the nuclear spins, the lack of both strain and spin-\(\frac{9}{2}\) In atoms results in a homogeneous nuclear spin ensemble [39], as demonstrated by the success of the Carr-Purcell-Meiboom-Gill (CPMG) decoupling scheme in prolonging the electron spin \(T_{2}\) from \(3.8\,\mathrm{ns}\) to \(113\,\mathrm{\SIUnitSymbolMicro s}\)[40]. However, as for InGaAs QDs, noise in the nuclear spins limits \(T_{2}^{*}\) to values of a few ns. To date, the possibility of feedback cooling the nuclear spins via the electron spin has remained uncertain, due to the predicted absence of the strain-generated non-collinear hyperfine interaction.
Here, we implement all-optical cooling schemes on low-noise GaAs QDs and demonstrate an increase in the electron spin coherence time from \(T_{2}^{*}=3.9\,\mathrm{ns}\) to \(0.608\,\mathrm{\SIUnitSymbolMicro s}\). This is achieved with autonomous feedback and without any external perturbation (such as strain tuning). We demonstrate spin control with \(T_{2}^{*}=0.608\,\mathrm{\SIUnitSymbolMicro s}\), an extension of \(T_{2}\) with CPMG (with a scaling of \(T_{2}^{\mathrm{CPMG}}\propto N^{0.69}\)
matching previous experiments [40]), fast spin rotations (Rabi frequencies above 100 MHz), and high-fidelity spin control (\(F_{\pi}>98\%\)). Our results establish GaAs QDs as an emitter of coherent photons and a host to a coherent spin.
To create the QDs, droplet-etched nanoholes in an Al\({}_{0.15}\)Ga\({}_{0.85}\)As matrix are filled with GaAs and capped by an Al\({}_{0.33}\)Ga\({}_{0.67}\)As layer. The materials are almost lattice-matched. Figure 1(a) shows a high-angle dark-field scanning transmission (HAADF-STEM) image of a GaAs QD [41]. Notable is a thin, Al-rich layer at the bottom surface of the QD [41]. The QD is embedded in a p-i-n diode structure (see Fig. 1(b)) such that the QD charge is stabilised via the Coulomb blockade. Individual QDs exhibit near-transform-limited optical linewidths [36; 37; 41]. A 3.00 T magnetic field is applied perpendicular to the growth direction (Voigt geometry), at an angle of \(45^{\circ}\) to the in-plane crystal axes. The electron Zeeman frequency is \(f_{\mathrm{Z}}=4.54\) GHz corresponding to a g-factor of \(g_{\mathrm{e}}=-0.11\).
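As a quick arithmetic check of our own on the quoted g-factor:

```python
# |g_e| = h * f_Z / (mu_B * B) for f_Z = 4.54 GHz at B = 3.00 T.
from scipy.constants import h, physical_constants

mu_B = physical_constants["Bohr magneton"][0]
print(h * 4.54e9 / (mu_B * 3.00))   # ~ 0.11
```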
The spin is manipulated by a two-colour Raman pulse detuned from the excited states by \(\Delta_{\mathrm{L}}=700\) GHz (see Fig. 1(c)). This pulse is created by amplitude-modulating circularly-polarised light with an electro-optic modulator driven by an arbitrary waveform generator [41; 42]. A laser resonant with the red "vertical" transition is used to read out the spin (such that the \(\ket{\downarrow}\)-state is bright, the \(\ket{\uparrow}\)-state is dark) and to prepare the spin in the \(\ket{\uparrow}\)-state via optical spin pumping [41].
Driving the electron spin resonance (ESR) (Fig. 1(d)) shows clear Rabi oscillations between \(\ket{\uparrow}\) and \(\ket{\downarrow}\) with increasing drive time \(t\). We find an exponential decay of the oscillations with \(T_{2}^{\mathrm{Rabi}}=73(5)\) ns, corresponding to a quality factor of \(Q=2T_{2}^{\mathrm{Rabi}}f_{\mathrm{Rabi}}=19(1)\) and \(\pi\)-pulse fidelity \(f_{\pi}=\frac{1}{2}(1+e^{-1/Q})=0.975(2)\) at \(\Omega=2\pi\times 130\) MHz. As has been observed for InGaAs QDs [42], we find a strong modulation of the quality factor [41] when the electron spin is driven close to the nuclear Larmor frequencies \(\omega_{\mathrm{n}}\) (i.e., \(\Omega\sim\omega_{\mathrm{n}}\)), a signature of an electron-nuclei interaction via a Hartmann-Hahn resonance [43].
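The quoted quality factor and fidelity follow directly from these two numbers; the snippet below is our own arithmetic check:

```python
import numpy as np

T2_rabi, f_rabi = 73e-9, 130e6           # seconds, Hz (Omega = 2*pi*f_rabi)
Q = 2 * T2_rabi * f_rabi                 # ~ 19
f_pi = 0.5 * (1 + np.exp(-1 / Q))        # ~ 0.975
print(f"Q = {Q:.0f}, pi-pulse fidelity = {f_pi:.3f}")
```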
We access rotation around a second axis on the Bloch sphere by controlling the phase of the microwave signal that is imprinted on the optical field. Fig. 1(e) shows the sinusoidal response after two consecutive \(\frac{\pi}{2}\)-pulses on changing the phase \(\phi\) of the second pulse, thereby demonstrating rotation around an arbitrary axis on the equator of the Bloch sphere.
On driving Rabi oscillations as a function of the detuning \(\Delta\) with respect to the Zeeman frequency (\(\Delta=f_{\mathrm{Z}}-f_{\mathrm{probe}}\)), we find strong deviations from the typical chevron pattern expected for a two-level system
Figure 1: Coherent spin control of an electron in a droplet-etched GaAs QD. (a) High-angle dark-field scanning transmission image of a droplet-etched GaAs QD. The dashed line is a guide to the eye to describe the droplet shape. (b) Schematic of the sample design: a layer of GaAs QDs is embedded in a diode structure. A magnetic field perpendicular to the growth direction defines the quantization axis. (c) Energy level diagram of a charged QD in an in-plane magnetic field. The βverticalβ transitions are \(x\)-polarised while the βdiagonalβ transitions are \(y\)-polarised. A circularly polarized rotation pulse detuned by \(\Delta_{\mathrm{L}}=700\) GHz drives a Raman transition between the electron spin states. The readout laser is on resonance with the lower-frequency βverticalβ transition and initialises the electron into the \(\ket{\uparrow}\)-state. (d) Electron spin Rabi oscillations as a function of drive time \(t\). The solid line is an exponential fit to the data with \(T_{2}^{\mathrm{Rabi}}=73(5)\) ns. (e) Full control of the rotation axis about the Bloch sphere using two consecutive \(\frac{\pi}{2}\)-pulses as a function of the phase \(\phi\) of the second pulse. The solid line is a sinusoidal fit to the data.
(see Fig. 2(a)). In a \(\sim 200\,\mathrm{MHz}\) window around the Zeeman frequency, we find that the spin rotations lock to the probe frequency \(f_{\mathrm{probe}}\), a clear signature of electron spin-nuclear spin coupling [44, 45, 46, 35].
When the ESR is locked via the hyperfine interaction, cooling of the nuclei, equivalently narrowing of the nuclear distribution, is predicted [47, 48]. This can be quantified by a reduction in \(\sigma_{\mathrm{OH}}\), the standard deviation of the ESR frequency fluctuations due to the changing Overhauser field. To probe this, we perform a free-induction decay (FID) experiment to measure the electron coherence time \(T_{2}^{*}\) in a Ramsey experiment, which acts as a gauge of the temperature of the nuclear spin ensemble (\(\sigma_{\mathrm{OH}}\propto 1/T_{2}^{*}\)) [26, 27]. We compare the bare \(T_{2}^{*}\) to that obtained after locking the ESR (see Fig. 2(b)). We observe a 20-fold increase from \(T_{2}^{*}=3.9(2)\,\mathrm{ns}\) to \(78(2)\,\mathrm{ns}\) corresponding to a narrowing of \(\sigma_{\mathrm{OH}}\) from \(52(1)\,\mathrm{MHz}\) to \(2.90(5)\,\mathrm{MHz}\) following the Rabi drive. Remarkably, we already find an enhancement in coherence time without a dedicated cooling pulse when the Ramsey experiment is carried out with a high duty cycle: repetitive Ramsey experiments lead to a \(T_{2}^{*}\) of \(7.8(2)\,\mathrm{ns}\). To determine the bare electron coherence time, we add a \(100\,\mathrm{\SIUnitSymbolMicro s}\) buffer between each cycle. This observation suggests that the repetitive application of spin manipulation pulses as short as \(4\,\mathrm{ns}\) already leads to a narrowing of \(\sigma_{\mathrm{OH}}\).
We confirm the nuclear-spin cooling and locking of the ESR to the Rabi drive by fixing the cooling frequency \(f_{\mathrm{c}}\) during Rabi cooling, subsequently detuning the probe frequency \(f_{\mathrm{probe}}\) in a Ramsey experiment. Oscillations arise at the detuning frequencies \(\Delta=f_{\mathrm{c}}-f_{\mathrm{probe}}\) as expected in a classic Ramsey experiment (see Fig. 2 (c,d)), now with an increased coherence time.
To cool the nuclei further, we implement the recently developed quantum-sensing-based cooling scheme [33]. In this protocol, each cooling cycle consists of three steps (see Fig. 3(a, top)): (i) The electron spin is initialised and then rotated to the equator with a \(\frac{\pi}{2}\)-pulse. A period of free evolution \(\tau_{\mathrm{sense}}\) allows the electron to sense the Overhauser field fluctuation that leads to a detuning \(\Delta\) from the target frequency \(f_{\mathrm{c}}\). (ii) A coherent electron-nuclei flip-flop interaction arising from a non-collinear term in the hyperfine interaction is activated through ESR driving at Hartmann-Hahn resonance \(\Omega\approx\omega_{\mathrm{n}}\). The sign of the detuning \(\Delta\) determines the direction of the nuclear flops and thus leads to a reversal of the measured fluctuation. (iii) A projective measurement of the spin state transfers entropy from the nuclei and concludes one cycle of the cooling scheme. Repeating this cycle with increasing sensing time \(\tau\) results in a narrower feedback function in each cycle and hence an increased sensitivity to changes in \(\sigma_{\mathrm{OH}}\).
We find optimal parameters for the quantum-sensing-based cooling at \(N=40\) cycles with a linearly increasing sensing time \(\tau_{\mathrm{sense}}\) from \(\tau_{\mathrm{min}}=20\,\mathrm{ns}\) to \(\tau_{\mathrm{max}}=400\,\mathrm{ns}\), and electron-nuclei drive time \(T_{\mathrm{c}}=125\,\mathrm{ns}\) at a Rabi frequency \(\Omega_{\mathrm{c}}=2\pi\times 17\,\mathrm{MHz}\), followed by a spin pumping pulse of \(200\,\mathrm{ns}\)[41]. This preparation sequence takes \(\sim 22\,\mathrm{\SIUnitSymbolMicro s}\) and is repeated before each Ramsey cycle.
Figure 2: Locking of electron spin resonance (ESR) and cooling of nuclei with a Rabi drive. (a) Rabi oscillations versus detuning show locking of the ESR to the drive within a window of frequencies and unstable Rabi oscillations outside the window. (b) Top: Pulse sequence for Ramsey interferometry with prior Rabi cooling. For Rabi cooling a \(T_{\mathrm{c}}=1\,\mathrm{\SIUnitSymbolMicro s}\) long pulse at a Rabi frequency of \(\Omega_{\mathrm{c}}=2\pi\times 17\,\mathrm{MHz}\) is used. The Ramsey experiment was performed at a larger Rabi frequency of \(2\pi\times 100\,\mathrm{MHz}\). Bottom: Top and bottom envelopes of the Ramsey interferometry with \(100\,\mathrm{\SIUnitSymbolMicro s}\) pause (circles), zero pause (squares) and Rabi cooling (diamonds); the extracted coherence times are \(T_{2}^{*}=3.9(2)\,\mathrm{ns}\), \(T_{2}^{*}=7.8(2)\,\mathrm{ns}\), and \(T_{2}^{*}=78(2)\,\mathrm{ns}\), respectively. Counts are normalised to \(0.5\) for long delays. (c) Ramsey interferometry as a function of detuning with respect to the cooling frequency \(f_{\mathrm{c}}\) of the Rabi drive. (d) Linecut at \(\Delta=50\,\mathrm{MHz}\) with \(T_{2}^{*}=87(6)\,\mathrm{ns}\) (dashed box in (c)). The solid lines in (b) and (d) are Gaussian fits to the data.
The electron coherence time \(T_{2}^{*}\) increases from \(3.9(2)\,\mathrm{ns}\) to \(0.608(13)\,\mathrm{\SIUnitSymbolMicro s}\) after application of the protocol (see Fig. 3(a, b)). This constitutes a 156-fold increase in \(T_{2}^{*}\). The final \(T_{2}^{*}\) is a factor of two larger than the previous highest \(T_{2}^{*}\) reported on an electron spin hosted by an InGaAs QD (296 ns [33]) and just below the highest reported \(T_{2}^{*}\) of a single electron spin qubit in a gate-defined GaAs QD (767 ns [31]). The enhancement corresponds to a narrowing of the nuclear-spin ensemble from \(\sigma_{\mathrm{OH}}=52(1)\,\mathrm{MHz}\) to \(0.355(4)\,\mathrm{MHz}\) (see Fig. 3(b, inset)).
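The quoted pairs of \(T_{2}^{*}\) and \(\sigma_{\mathrm{OH}}\) are mutually consistent if one assumes a Gaussian free-induction-decay envelope \(\exp[-(t/T_{2}^{*})^{2}]\), for which \(T_{2}^{*}=\sqrt{2}/(2\pi\sigma_{\mathrm{OH}})\). This is a cross-check of our own (the assumed envelope is not stated explicitly here), and it holds to within about 10%:

```python
import numpy as np

# (sigma_OH in Hz, quoted T2* in s): no cooling, Rabi cooling, sensing-based cooling
pairs = [(52e6, 3.9e-9), (2.90e6, 78e-9), (0.355e6, 608e-9)]
for sigma, T2_quoted in pairs:
    T2 = np.sqrt(2) / (2 * np.pi * sigma)
    print(f"sigma = {sigma / 1e6:5.2f} MHz -> T2* = {T2 / 1e-9:6.1f} ns "
          f"(quoted {T2_quoted / 1e-9:.0f} ns)")
```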
Using hyperfine constants \(A_{k}\) and abundances \(\eta_{k}\) of the nuclear species \(k\in\{\mathrm{{}^{69}Ga},\mathrm{{}^{71}Ga},\mathrm{{}^{75}As}\}\) we can estimate the number of nuclei involved, \(N=5/4\sum_{k}\eta_{k}A_{k}^{2}T_{2}^{*2}=1.4\cdot 10^{5}\), and estimate the hyperfine interaction per nucleus, \(A_{\mathrm{c}}=1/(\sqrt{5N/2}\pi T_{2}^{*})=0.13\,\mathrm{MHz}\)[25, 33, 40]. This corresponds to a distribution of \(\sigma_{\mathrm{OH}}/A_{\mathrm{c}}\approx 376.8\) macrostates in the uncooled state and 2.6 after quantum-sensing-based cooling, entering the regime where just a few nuclear excitations remain.
For both the quantum-sensing-based and Rabi cooling schemes, the Rabi frequency \(\Omega_{\mathrm{c}}\) is an important parameter (see Fig. 3(c)). The maximum performance for both cooling schemes occurs at \(\Omega_{\mathrm{c}}=2\pi\times 17\,\mathrm{MHz}\), close to the difference frequency of \({}^{71}\mathrm{Ga}\) and \({}^{75}\mathrm{As}\)\(\left(\Delta\omega=\omega(^{71}\mathrm{Ga})-\omega(^{75}\mathrm{As})=2\pi \times 17.08\,\mathrm{MHz}\right)\). This result is in contrast to those on InGaAs QDs for which cooling was most effective at a direct Hartmann-Hahn
Figure 3: Quantum-sensing-based cooling and dynamical decoupling. (a) Top: Pulse scheme for the quantum-sensing-based cooling consisting of (i) a sensing step, (ii) a driven electron-nuclei interaction, and (iii) a reset. The last reset pulse in the cooling scheme initialises the electron spin for the Ramsey experiment performed at a Rabi frequency of \(2\pi\times 100\,\mathrm{MHz}\). Bottom: Ramsey interferometry with serrodyne frequency \(\omega_{\mathrm{serr}}=2\pi\times 20\,\mathrm{MHz}\) (\(\phi(\tau)=\sin(\omega_{\mathrm{serr}}\tau)\)) following quantum-sensing-based cooling gives \(T_{2}^{*}=0.608(13)\,\mathrm{\SIUnitSymbolMicro s}\). (b) Comparison of \(T_{2}^{*}\) before cooling (squares), after Rabi cooling (diamonds), and after quantum-sensing-based cooling (circles). Inset: fast Fourier transform of the Ramsey visibilities gives \(\sigma_{\mathrm{OH}}=52(1)\,\mathrm{MHz}\), \(\sigma_{\mathrm{OH}}=2.90(5)\,\mathrm{MHz}\), and \(\sigma_{\mathrm{OH}}=0.355(4)\,\mathrm{MHz}\), respectively. (c) \(T_{2}^{*}\) versus Rabi frequency during cooling (\(\Omega_{\mathrm{c}}\)) for Rabi cooling (diamonds) and quantum-sensing-based cooling (circles). Dashed lines correspond to nuclear Larmor frequencies, from left to right: \(\Delta\omega=2\pi\times 17.08\,\mathrm{MHz}\), \(\omega(^{75}\mathrm{As})=2\pi\times 21.9\,\mathrm{MHz}\), \(\omega(^{69}\mathrm{Ga})=2\pi\times 30.7\,\mathrm{MHz}\), \(\omega(^{27}\mathrm{Al})=2\pi\times 33.28\,\mathrm{MHz}\), \(\omega(^{71}\mathrm{Ga})=2\pi\times 39.0\,\mathrm{MHz}\). (d) Rabi oscillations at \(\Omega_{\mathrm{c}}=2\pi\times 8.9\,\mathrm{MHz}\) as a function of detuning from \(f_{\mathrm{c}}\) following quantum-sensing-based cooling. (e) Dynamical decoupling of the electron spin with a CPMG sequence. The solid lines in (a,b) are Gaussian fits to the data. The solid line in (e) is a power law fit to the data.
resonance [33]. Generally speaking, the fact that cooling via an autonomous feedback process is effective on GaAs QDs shows that a non-collinear term in the hyperfine interaction [28; 35; 44] must be present even though the strain in the QDs is small.
Following cooling, a typical chevron pattern is observed on driving Rabi oscillations as a function of detuning with respect to the cooling frequency \(f_{c}\) (Fig. 3(d)), using here a Rabi frequency below the Hartmann-Hahn resonances. This demonstrates that in this case the electron spin is isolated from the nuclear environment and behaves as a two-level system. In addition, the quality factor of the oscillations now increases to \(Q=30.0(14)\) (corresponding to a \(\pi\)-pulse fidelity of \(98.4(1)\,\%\)) [41], consistent with a reduction of hyperfine-interaction-induced Rabi decay.
Recent experiments showed that the electron spin \(T_{2}\) can be increased by implementing a decoupling scheme, the CPMG protocol. As a final step, we verify that this is also possible on the QD for which nuclear spin cooling was highly effective (see Fig. 3(e)). By applying CPMG pulses, we extend \(T_{2}\) from \(T_{2}^{\mathrm{HE}}=2.93(6)\,\mathrm{\SIUnitSymbolMicro s}\) using a Hahn echo (\(N_{\pi}\)=1) to \(T_{2}^{\mathrm{CPMG}}=22(8)\,\mathrm{\SIUnitSymbolMicro s}\), an order of magnitude increase, with \(N_{\pi}\)=20 pulses. We extract a \(T_{2}\) scaling of \(T_{2}^{\mathrm{CPMG}}\propto N_{\pi}^{\gamma}\) with \(\gamma=0.69(12)\), consistent with recent results on droplet-etched QDs [40] and gate-defined QDs [49]. This result confirms that the nuclear spin ensemble is highly homogeneous. The application of more pulses is currently limited by imperfect pulse calibrations and the electron spin relaxation time \(T_{1}\sim 40\,\mathrm{\SIUnitSymbolMicro s}\)[41].
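As one more arithmetic check of our own, extrapolating the fitted power law from the Hahn-echo point reproduces the quoted CPMG value:

```python
T2_HE, gamma_exp = 2.93, 0.69            # us, fitted scaling exponent
print(T2_HE * 20 ** gamma_exp)           # ~ 23 us, consistent with 22(8) us
```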
In conclusion, we have demonstrated fast and flexible optical control of an electron spin confined to a self-assembled GaAs QD. We show that autonomous feedback protocols to cool the nuclear spins are very effective even on an as-grown, close-to-strain-free QD. Nuclear-spin cooling leads to a 156-fold increase in the \(T_{2}^{*}\) time, \(T_{2}^{*}=0.608\,\mathrm{\SIUnitSymbolMicro s}\). Furthermore, both \(T_{2}^{*}\) and \(T_{2}\) can be extended on exactly the same QD, \(T_{2}^{*}\) by nuclear spin cooling, \(T_{2}\) by dynamic decoupling. These results imply that a small non-collinear term must be present in the hyperfine Hamiltonian. Following nuclear spin cooling, \(T_{2}^{*}\) becomes much longer than both the time required to rotate the spin and the time required to generate a photon. Together with recent results on the generation of indistinguishable photons from remote GaAs QDs [37] performed on the same sample as used in this experiment, our results highlight the promise of GaAs QDs for a coherent spin-photon interface. Furthermore, the system represents an ideal testbed for creating non-classical collective states within the nuclear spin ensemble [50].
We thank Ming-Lai Chan and Peter Lodahl at the Niels-Bohr Institute, Leon Zaporski and Mete Atature at the University of Cambridge, and Dorian Gangloff at the University of Oxford for stimulating discussions.
The work was supported by SNF Project 200020 204069 and Horizon 2020 FET-Open Project QLUSTER. LZ, GNN and AJ received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement No. 721394 (4PHOTON), No. 861097 (QUDOT-TECH), and No. 840453 (HiFig), respectively. HGB, JR, ADW and AL acknowledge financial support from the grants DFH/UFA CDFA05-06, DFG TRR160, DFG project 383065199, and BMBF Q.Link.X 16KIS0867.
|
2306.03314 | Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM
Agents | In this paper, we present a novel framework for enhancing the capabilities of
large language models (LLMs) by leveraging the power of multi-agent systems.
Our framework introduces a collaborative environment where multiple intelligent
agent components, each with distinctive attributes and roles, work together to
handle complex tasks more efficiently and effectively. We demonstrate the
practicality and versatility of our framework through case studies in
artificial general intelligence (AGI), specifically focusing on the Auto-GPT
and BabyAGI models. We also examine the "Gorilla" model, which integrates
external APIs into the LLM. Our framework addresses limitations and challenges
such as looping issues, security risks, scalability, system evaluation, and
ethical considerations. By modeling various domains such as courtroom
simulations and software development scenarios, we showcase the potential
applications and benefits of our proposed multi-agent system. Our framework
provides an avenue for advancing the capabilities and performance of LLMs
through collaboration and knowledge exchange among intelligent agents. | Yashar Talebirad, Amirhossein Nadiri | 2023-06-05T23:55:37Z | http://arxiv.org/abs/2306.03314v1 | # Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents
###### Abstract
In this paper, we present a novel framework for enhancing the capabilities of large language models (LLMs) by leveraging the power of multi-agent systems. Our framework introduces a collaborative environment where multiple intelligent agent components, each with distinctive attributes and roles, work together to handle complex tasks more efficiently and effectively. We demonstrate the practicality and versatility of our framework through case studies in artificial general intelligence (AGI), specifically focusing on the Auto-GPT and BabyAGI models. We also examine the "Gorilla" model, which integrates external APIs into the LLM. Our framework addresses limitations and challenges such as looping issues, security risks, scalability, system evaluation, and ethical considerations. By modeling various domains such as courtroom simulations and software development scenarios, we showcase the potential applications and benefits of our proposed multi-agent system. Our framework provides an avenue for advancing the capabilities and performance of LLMs through collaboration and knowledge exchange among intelligent agents.
## 1 Introduction
The field of artificial intelligence is rapidly advancing, bringing with it the complexity and challenges of deploying AI models to manage an array of tasks. In response to these challenges, researchers are delving into multi-agent systems where multiple AI entities collaborate towards a common goal [1]. One such multi-agent system can be seen in the work of [2], who introduced generative agents that imitate plausible human behavior within an interactive sandbox environment. Another instance of this exploration is Camel [3], which introduces a system that leverages a Large Language Model (LLM) to generate diverse and detailed instructions for a wide range of tasks. It incorporates role-playing scenarios where two agents interact, thereby demonstrating the potential of such systems in handling complex real-world scenarios. In a similar vein, our paper proposes the use of multiple LLMs, each bearing diverse characteristics, to enhance performance across a range of tasks.
We focus particularly on the recent iterations of the Generative Pretrained Transformer (GPT) models, GPT-4 and GPT-3.5-turbo. From content creation and question-answering systems to language translation, GPT models have manifested immense potential across a plethora of applications. Early experiments with GPT-4 [4] reinforce this fact, showing GPT-4's ability to solve complex tasks across diverse fields, approaching human-level performance. As a result, the adeptness of these models at managing complex language tasks makes them ideal candidates for our purposes. Moving forward, we will use the term "Intelligent Generative Agents" (IGAs) to refer to a series of agents, each equipped with unique attributes and roles. GPT models, despite their commendable text generation capabilities, operate as isolated entities in their conventional form. They lack the capability to collaborate with other agents or draw from external knowledge repositories. This inherent shortcoming restricts their utility in complex scenarios that necessitate collaborative efforts and information sharing among multiple intelligent systems.
Our proposed use of multiple IGAs stems from the notion that diversity in a system enhances performance. The idea is based on the principle of division of labor, where each agent specializes in a specific function, thereby improving the efficiency and effectiveness of the system as a whole. This mirrors the concept of teamwork in human systems, where different individuals contribute their expertise to complete a task. A diverse set of agents, each configured with unique
strengths, can collectively handle a wider range of potential inputs, outputs, and processing strategies. Furthermore, delegating different roles to each agent introduces more flexibility and efficiency in the context of task management. This brings about the concept of "multi-agent systems", in which numerous IGAs interact and collaborate to achieve a shared goal. These agents are capable of creating subtasks, seeking information, and soliciting assistance from each other, and can also engage in competitive evaluation for better outcomes. The emphasis on collaboration and knowledge exchange in multi-agent systems can bolster the problem-solving proficiency of GPT models, thereby paving the way for more sophisticated intelligent systems. In fact, our proposed multi-agent system also aims to make strides toward achieving a higher level of artificial general intelligence (AGI). The development and deployment of advanced generative AI models like GPT-4 represent significant steps towards AGI [5]. By fostering collaboration and knowledge exchange among multiple IGAs, our system seeks to further this progress. It aims to emulate the diverse and adaptable problem-solving capabilities that are characteristic of an AGI, thereby pushing the boundaries of what AI can achieve.
Our proposed abstraction allows users to engage with a "black box" by providing an initial prompt and receiving the final output without grappling with the underlying complexities of agent collaborations and interactions. Consider, for instance, the success of Generative Adversarial Networks (GANs), where two networks (a generator and a discriminator) work collaboratively to produce better results; taking inspiration from this, a simple yet effective multi-agent system can be made utilizing two IGAs: one with memory and one with Internet access. By combining their strengths, these agents could cooperate to reduce the occurrence of 'hallucinations' or false information generation, thereby increasing the reliability of the output [6]. This could be particularly beneficial in tasks where accuracy and fact-checking are critical.
The objectives of this paper are to explore and demonstrate the potential of having multiple agents within a black box environment for enhanced collaboration and problem-solving. The specific objectives are outlined as follows:
* Introducing a General Framework: The primary objective is to pave the way for the creation of more powerful AGI models. By providing a general framework for multi-agent systems using LLMs, we aim to push the boundaries of what AI can achieve, thereby contributing to the advancement toward Artificial General Intelligence.
* Adaptive and Dynamic System: We introduce a system that is adaptive and can change itself to suit the tasks at hand. Having a static structure will limit a system to a set of predefined tasks. The possibility of addition and removal of agents in the system will make it flexible and capable of solving more complex problems.
* Multi-Agent Collaboration: In this paper, we explore the use of multiple LLMs in a collaborative manner. This collaboration aims to enhance performance across diverse tasks, utilizing the strengths of each agent and encouraging a synergistic relationship amongst them.
By effectively addressing these objectives, this paper aims to significantly advance the understanding and utilization of multi-agent collaboration in the realm of IGAs. The exploration and demonstration of such a model of collaboration serve as a stepping stone for future research in this domain.
The remainder of this paper is organized as follows: We begin by laying the foundation of our discussion in Section 2, where we introduce the essential components that make up the proposed framework. Following this, Section 3 provides a comprehensive description of the proposed multi-agent framework and its functionalities. In Section 4, we explore various potential use cases and applications of this framework, demonstrating its versatility and adaptability. Section 5 then discusses the potential challenges associated with our approach, shedding light on its limitations. We conclude our discussion in Section 6, where we summarize the main points of our discussion, draw final conclusions, and suggest directions for future research in this domain.
## 2 Building Blocks of the Multi-Agent System
The environment in which the multi-agent system operates can be considered a black box. This is due to the inherent nature of IGAs, which are trained on vast amounts of data and generate outputs based on complex internal computations that are not directly observable or interpretable. This black box environment is a digital workspace where multiple instances of IGAs interact and collaborate. This workspace provides a shared platform for the agents to communicate, exchange information, and perform tasks. Additionally, plugins can be used to provide additional functionality or capabilities to agents. A plugin can be seen as a specialized tool or service that agents can utilize to perform specific tasks or access certain (internal or external) resources. Any of the agents or plugins can also be responsible for receiving the initial prompt from the user or for responding to the main user.
We denote the black box environment as a graph \(G(V,E)\), where \(V\) is the set of vertices representing the IGAs and the plugins, and \(E\) is the set of edges representing the connection channels between the agents and the plugins, and between the agents themselves.
### Agent Representation
Each agent \(i\in V\) is represented as a tuple \(A_{i}=(L_{i},R_{i},S_{i},C_{i},H_{i})\), where:
* \(L_{i}\) refers to the language model instance utilized by the agent. This encompasses the model's type (such as GPT-4 or GPT-3.5-turbo) and its specific configuration parameters, including the 'temperature' setting which influences the degree of randomness in the agent's output. The choice of the language model can be dictated by the task requirements. For instance, while GPT-4, due to its exceptional reasoning capabilities, could be assigned tasks demanding deep insights and complex problem-solving, GPT-3.5-turbo could be employed for tasks requiring quicker execution owing to its faster performance.
* \(R_{i}\) is the role of the agent. The role defines the responsibilities of the agent and provides the agent with a sense of purpose and direction, guiding its actions and interactions. More specifically, these responsibilities are the tasks or functions the agent is expected to carry out within the system. For instance, an agent's responsibilities could include processing and responding to user queries, coordinating interactions between other agents, managing resources, or overseeing a particular aspect of the system's operations.
* \(S_{i}\) is the state of the agent, encompassing its current knowledge base and its "thoughts". The agent's state evolves over time based on the tasks it performs and the interactions it has with other agents or the environment.
* The "knowledge" component of the state can be seen as a representation of the agent's current understanding or awareness of its environment, tasks, and interactions. It's updated whenever the agent learns new information or gains a new understanding.
* "Thoughts" in this context can be interpreted as the agent's current focus or intent [7]. They represent what the agent is currently contemplating or planning, based on its knowledge and the task at hand. Thoughts may inform the agent's next action and may be updated after each operation. They may also encapsulate the agent's internal dialogue or reasoning process as it works towards its goal.
* \(C_{i}\) is the boolean property indicating whether an agent has the ability to create new agents. By setting this property to true, an agent can dynamically generate new agents within the system.
* \(H_{i}\) is the set of agents that this agent has the authority to halt. By specifying the agents that can be halted, an agent can exercise control over the execution of other agents within the system.
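As a concrete (if minimal) sketch, the tuple above can be rendered as a Python dataclass; the class and field names below are our own illustrative choices, not part of the formal model.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """S_i: the agent's evolving knowledge base and current 'thoughts'."""
    knowledge: dict = field(default_factory=dict)  # updated as the agent learns
    thoughts: list = field(default_factory=list)   # current focus/intent per step

@dataclass
class Agent:
    """A vertex of G(V, E) holding the tuple A_i = (L_i, R_i, S_i, C_i, H_i)."""
    name: str                                   # identifier used as the graph vertex
    model: str                                  # L_i: e.g. "gpt-4" or "gpt-3.5-turbo"
    temperature: float                          # L_i configuration: output randomness
    role: str                                   # R_i: responsibilities and duties
    state: AgentState = field(default_factory=AgentState)  # S_i
    can_create_agents: bool = False             # C_i
    can_halt: set = field(default_factory=set)  # H_i: names of agents it may halt
```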
### Plugin Representation
Each plugin \(j\in V\) is represented as a tuple \(P_{j}=(F_{j},C_{j},U_{j})\), where:
* \(F_{j}\) is the set of the functionalities of the plugin, which are the actions that an agent can perform. This can include accessing and manipulating digital resources within this environment, such as files and databases, and interacting with external systems through APIs and other interfaces.
* \(C_{j}\) are the configurations associated with the plugin. Parameters are variables that are set when the plugin is initialized. Examples include API keys for accessing specific services, or threshold values to determine specific behavior. Parameters help in customizing the functionality of the plugin according to the task or application at hand.
* \(U_{j}\) is the set of usage constraints or conditions that govern the usage of the plugin. These constraints define the boundaries and capabilities of the plugin and can include limitations in terms of computational resources, input data types, and output capabilities.
### Connection and Message Representation
Each edge \(e_{ij}\in E\) connects an agent \(A_{i}\) to either a plugin \(P_{j}\) or another agent \(A_{j}\) using a communication channel. Each agent can send messages through the channels that it is connected to, and each message \(m\in M_{ij}\), sent from agent \(A_{i}\) to \(A_{j}\), is represented as a tuple \(m=(S_{m},A_{m},D_{m})\), where:
* \(S_{m}\) is the content of the message.
* \(A_{m}\) is the action associated with the message, which can be a task assignment, a report, a request, or any other action.
* \(D_{m}\) is the metadata associated with the message, which can include information such as the timestamp, sender, receiver, and potentially additional context-specific data.
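Continuing the sketch above, plugins and messages admit the same treatment, and the edge set \(E\) can be kept as an adjacency map; again, all names are illustrative assumptions rather than a prescribed implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, Set

@dataclass
class Plugin:
    """A vertex of G(V, E) holding the tuple P_j = (F_j, C_j, U_j)."""
    name: str
    functionalities: Dict[str, Callable]                       # F_j: invocable actions
    config: Dict[str, Any] = field(default_factory=dict)       # C_j: API keys, thresholds
    constraints: Dict[str, Any] = field(default_factory=dict)  # U_j: usage limits

@dataclass
class Message:
    """A message m = (S_m, A_m, D_m) sent over an edge e_ij."""
    content: str                                  # S_m
    action: str                                   # A_m: assignment, report, request...
    metadata: dict = field(default_factory=dict)  # D_m: timestamp, sender, receiver

# The edge set E as an adjacency map over vertex names (agents and plugins).
edges: Dict[str, Set[str]] = {}

def connect(a: str, b: str) -> None:
    """Add an undirected communication channel e_ab to E."""
    edges.setdefault(a, set()).add(b)
    edges.setdefault(b, set()).add(a)
```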
Another approach to data transmission between agents can involve the use of plugins. For example, plugins designed for data storage can serve as shared databases, enabling different agents to access and retrieve information stored by other agents. Further extending this concept, a plugin could act as a communication board, enabling multi-directional communication between multiple agents. This essentially forms a many-to-many communication platform within the system.
## 3 Detailing Proposed Framework
In any multi-agent system, the nature of interaction and collaboration between the agents play a significant role in determining the overall system performance. This section explores the ways in which these interactions can be managed and optimized, particularly in the context of a system composed of multiple IGAs.
### System Design
The design of a multi-agent system involves determining the number of agents, the required plugins, establishing connections between agents and plugins, creating connections between agents to enable communication, and assigning roles and properties of agents. This design aims to optimize the configuration and align it with the desired end goal of the system, enabling efficient collaboration and interaction among the agents.
While designing the system, the following steps are taken:
* Agent Roles: Roles for the agents are identified and defined within the environment, based on the specific requirements of the task at hand. Each agent is assigned a role, which specifies their responsibilities and duties in the system.
* Agent-Plugin Connections: Connections between agents and plugins are established to provide agents with additional functionality. By connecting agents to plugins, agents gain access to tools, resources, or external services that enhance their capabilities. These connections allow agents to leverage the functionalities of the plugins.
* Agent-Agent Connections: Connections between agents are created to enable communication and collaboration. These connections allow agents to exchange messages, share information, and cooperate toward achieving the common goal.
* System Operations: Agents can be granted specific permissions to create new agents or halt a specific set of agents. Also, any of the plugins or agents can be responsible for receiving the initial prompt from the user or responding to them.
By carefully designing the system with well-defined agents, plugins and the connections between them, the framework enables efficient multi-agent interaction and collaboration. This design allows agents to effectively communicate, coordinate, and work together towards achieving the common goal within the black box environment.
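A minimal wiring example of these design steps, reusing the `Agent`, `Plugin` and `connect` sketches above; the roles, plugin, and entry point are invented for illustration only.

```python
coordinator = Agent(name="coordinator", model="gpt-4", temperature=0.2,
                    role="decompose the user's goal and assign subtasks",
                    can_create_agents=True, can_halt={"researcher", "writer"})
researcher = Agent(name="researcher", model="gpt-3.5-turbo", temperature=0.7,
                   role="gather information supporting the assigned subtasks")
writer = Agent(name="writer", model="gpt-4", temperature=0.9,
               role="draft the final answer from the gathered material")

web_search = Plugin(name="web_search",
                    functionalities={"search": lambda query: f"results for {query!r}"})

connect("researcher", "web_search")   # agent-plugin connection
connect("coordinator", "researcher")  # agent-agent connections
connect("coordinator", "writer")

entry_point = coordinator  # system operation: receives the initial user prompt
```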
### Dynamic Addition of Agents
In certain scenarios, an agent with the ability to create new agents may dynamically add additional agents to the system. This capability enables agents to distribute their workload and assign specific responsibilities to enhance collaboration and workload management. This need may arise as a byproduct of a sudden increase in the workload of the system. When a new agent is created, the creator assigns the new agent a role, grants it the necessary properties, and establishes connections with other agents and plugins. These properties and connections are subsets of those available to the creator agent. Also, a connection to the creator is established.
Once the new agent is created and initialized, it operates independently within its defined role. The creator agent sets a clear goal for the new agent, providing initial guidance to ensure a smooth transition of responsibilities. By allowing agents to dynamically create new agents and delegate tasks, the system can effectively manage workloads, enhance parallel processing capabilities, and improve overall system performance. This dynamic approach fosters a collaborative environment where agents can dynamically organize and distribute tasks, ultimately contributing to the achievement of the common goal.
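A sketch of how the subset constraints of this mechanism could be enforced in code, reusing the earlier `Agent` and `connect` sketches; the helper name and the specific checks are our assumptions.

```python
def create_agent(creator: Agent, name: str, role: str, can_halt=None) -> Agent:
    """Dynamic addition: permissions granted to the new agent must be subsets of
    the creator's, and a connection back to the creator is always established."""
    if not creator.can_create_agents:
        raise PermissionError(f"{creator.name} is not allowed to create agents")
    halt = set(can_halt or ())
    if not halt <= creator.can_halt:
        raise PermissionError("halt rights must be a subset of the creator's")
    child = Agent(name=name, model=creator.model, temperature=creator.temperature,
                  role=role, can_halt=halt)
    connect(creator.name, child.name)  # link to the creator for guidance/supervision
    return child
```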
The fact that a designer designed the system and defined the capabilities, connections, and permissions of the agents does not contradict the dynamic addition of agents and their ability to distribute workload and delegate responsibilities.
Although the designer has designed the initial framework, the dynamic addition of agents allows for flexibility and adaptation within the designed system. It empowers the agents themselves to make decisions and create new agents based on their own assessments of workload and the need for assistance. The designer's role is to provide the initial structure and guidelines, but the system allows for agent autonomy and self-organization.
Hence, system design and the dynamic addition of agents function harmoniously. The initial framework laid out by the designer serves as a robust foundation, while the agents' ability to dynamically adapt and distribute workload ensures flexibility and resilience under changing conditions and demands.
### Feedback and Self-Feedback Mechanisms
Feedback mechanisms play a pivotal role in multi-agent systems, enabling agents to learn from their experiences and adapt their strategies for improved performance. These mechanisms can be categorized into inter-agent feedback and self-feedback [8, 9, 10]. Inter-agent feedback involves agents providing constructive criticism to each other based on their interactions and collaborations. Such feedback can help agents identify areas of improvement and adapt their strategies accordingly, enabling continuous learning and improvement within the system [8]. Some multi-agent systems employ inter-agent feedback by involving agents in role-playing. This approach involves designing specific prompts (denoted as Inception Prompting in [3]) to guide chat agents toward task completion while maintaining consistency with the main goal. This approach can be integrated into the proposed model by giving different roles to multiple agents and connecting them together.
Self-feedback, on the other hand, involves agents assessing their own performance and identifying areas of improvement. This can be achieved through a self-assessment mechanism where agents evaluate their performance based on predefined criteria or goals. This self-assessment can help agents identify their strengths and weaknesses, adapt their strategies, and improve their problem-solving capabilities [9]. In the proposed model, self-feedback can be simulated by a pair of connected agents: one with the role of giving feedback and the other tasked with refining the response based on the feedback received. Note that this simulation removes the need for a human to ask for possible refinement of the response.
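The critic-refiner pair described above can be simulated with a short loop; `llm_call` is a placeholder for whatever completion API backs the agents (not a real library call), and the "LGTM" stop token is an assumed convention.

```python
def llm_call(agent: Agent, prompt: str) -> str:
    """Placeholder for a chat-completion call configured by L_i (not a real API)."""
    raise NotImplementedError

def refine_with_feedback(worker: Agent, critic: Agent, task: str,
                         max_rounds: int = 3) -> str:
    """Simulated self-feedback: critic reviews, worker revises -- no human needed."""
    answer = llm_call(worker, task)
    for _ in range(max_rounds):
        feedback = llm_call(critic, f"Critique this answer to '{task}':\n{answer}")
        if "LGTM" in feedback:  # the critic is prompted to emit LGTM when satisfied
            break
        answer = llm_call(worker, f"Revise using this feedback:\n{feedback}")
    return answer
```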
### Oracle Agent
An oracle agent is a unique type of agent in the system that operates in a stateless and memory-less manner. Unlike other agents that may maintain a state or memory to guide their actions, an oracle agent performs actions based solely on the current input it receives, without any regard for past inputs or outputs. This characteristic makes oracle agents particularly useful in scenarios where the task at hand is independent of previous interactions or states.
Every interaction with an oracle agent is treated as an isolated event, independent of any previous interactions. This makes oracle agents highly predictable, as their actions are solely determined by the current input and not influenced by any past events. Oracle agents are mainly designed to be utilized by other agents. For instance, an oracle agent can give feedback on the responses of the other agents, letting them refine their responses before proceeding.
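In code, statelessness amounts to a pure function of the current input. A minimal sketch, reusing the `llm_call` placeholder above; the zero temperature is our own extra choice to make outputs as repeatable as possible, not a requirement of the definition.

```python
def oracle_call(model: str, prompt: str) -> str:
    """An oracle agent as a pure function: no memory is read or written, so each
    invocation is an isolated event determined by the current input alone."""
    oracle = Agent(name="oracle", model=model, temperature=0.0,
                   role="stateless evaluator")
    return llm_call(oracle, prompt)
```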
### Halting Mechanism and Supervision
The proposed framework incorporates an essential mechanism whereby an agent can halt other agents under certain conditions. This capability is crucial for effective management and coordination of tasks within a multi-agent system. Specifically, this ability can be granted to any agent in the system, including those that create new agents. The authority to halt becomes inherently necessary for these parent agents to maintain control and ensure the proper functioning of their created agents.
In practice, an agent halting another would involve signaling the targeted agent to cease its current activity. This signaling could be in the form of a command or a message transmitted via the communication interfaces defined within the system. Upon receiving this signal, the halted agent would immediately stop its current operation and wait for further instructions. Depending upon the system design, it could either enter an idle state or undertake a default operation in such instances. For creator agents and the agents they created, the halting mechanism works similarly. If a creator agent identifies undesirable activity in its created agent, it can initiate the halt command, causing it to stop its current operation immediately. This interaction emphasizes the supervisory role of the creator agent, ensuring that the created agent functions correctly and does not deviate from its intended role.
In fact, this supervisory role can be enhanced by the introduction of a specialized "Supervisor Agent". This Supervisor Agent can monitor the progress and task list of the main agent, providing timely feedback when necessary. For example, if an agent is stuck in a loop or deviates from its assigned task, the Supervisor Agent can detect these issues by reviewing
recent activities. Upon such detection, the Supervisor Agent can initiate the halt command, prompting the main agent to cease its current operation and change its course of action. This mechanism not only facilitates better task management but also reduces the need for constant human monitoring and intervention in the feedback process.
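A sketch of the Supervisor Agent's loop check and halt signal, reusing the `Agent` sketch above; the window size, repetition threshold, and signalling convention are all assumptions.

```python
from collections import Counter

def send_halt(target: Agent) -> None:
    """Placeholder for transmitting the halt command over a channel: the target
    stops its current operation and idles awaiting further instructions."""
    target.state.thoughts.append("HALTED: awaiting further instructions")

def supervise(target: Agent, recent_actions: list, halt_rights: set,
              window: int = 6, repeats: int = 3) -> bool:
    """Halt the target if its recent activity looks like a loop."""
    counts = Counter(recent_actions[-window:])
    looping = any(c >= repeats for c in counts.values())
    if looping and target.name in halt_rights:  # supervisor must hold the right
        send_halt(target)
        return True
    return False
```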
### Autonomous System Design
One notable aspect of the proposed framework is the potential role of an intelligent LLM as the system designer. The unique capabilities of an IGA extend beyond being just an agent within the environment, as it possesses the ability to fulfill the role of designing the system itself. It can consider the system's objectives, constraints, and desired functionalities to define the roles and responsibilities assigned to each agent. Additionally, the IGA can employ its knowledge of communication protocols and collaborative frameworks to determine the optimal interactions and connections between agents, plugins, and other system components. Drawing upon its comprehensive understanding of the problem domain, combined with precise system formulation and specified objectives, the IGA can design an effective system that optimally addresses the task at hand. Alternatively, after a human designs the initial system, an IGA can analyze the system structure, roles, interactions, and connections, and provide feedback and refinement to an already designed system. The IGA can also utilize its natural language generation capabilities to communicate the system design to the system owners. It can provide clear and concise descriptions of the agents' positions, roles, and interactions, allowing for a comprehensive understanding of the system's structure and functioning.
## 4 Use Cases and Applications
This section aims to demonstrate the practicality and versatility of the proposed multi-agent framework by examining its applicability to existing AI models. We focus specifically on two cutting-edge artificial general intelligence (AGI) models, Auto-GPT1 and BabyAGI2, and examine how our framework could potentially enhance their design and operation. We explore the models' main components, their operation, and limitations, and how our framework could be applied to improve their performance. Additionally, we discuss potential modifications that our framework can add, thus offering a broader understanding of the potential applications and benefits of the proposed multi-agent system.
Footnote 1: [https://github.com/Significant-Gravitas/Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT)
Footnote 2: [https://github.com/yobeinakajima/babyagi](https://github.com/yobeinakajima/babyagi)
### Artificial General Intelligence: Auto-GPT
Auto-GPT is an experimental open-source AI application that has been gaining significant attention due to its promising autonomous abilities. It is considered a step towards AGI, a type of AI that can perform human-level intellectual tasks. Auto-GPT has internet access, long-term and short-term memory management, GPT-4 for text generation, and file storage and summarization with GPT-3.5. It can perform tasks that ChatGPT [11] can do, such as debugging code and writing an email, but it can also complete more advanced tasks with fewer prompts. Auto-GPT's design is based on the concept of thoughts, which are essentially the steps it takes to complete a task.
#### 4.1.1 Model
The framework on which Auto-GPT runs can be modeled using our proposed framework. We can consider Auto-GPT's main agent as a single agent in our model. The agent's goal is to perform tasks autonomously by chaining thoughts together, while working towards the goals specified by the user. The state of the agent includes the current task it is working on and the chain of thoughts it has generated so far. This agent can also create other agents and halt any of them. Plugins can be represented as external services or tools that the agent uses to perform its tasks. For example, browsing the internet, managing memory, interacting with files, executing code, generating images, and similar tasks can be identified as plugins in our framework. There will also be an oracle agent, which is responsible for tasks such as summarization and criticizing the responses of the main agent. These plugins, along with the agents the main agent creates, can all be considered nodes in the graph corresponding to the system, while the connections between the Auto-GPT agent and its plugins, along with the connections between the agent and the other agents it creates, can be represented as edges in the graph. Messages sent through these connections may include task assignments, requests for information, or commands to execute certain operations.
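As a rough rendering of this mapping in the framework's terms (not Auto-GPT's actual internals), reusing the earlier sketches; the plugin names and roles are invented by us.

```python
autogpt = Agent(name="autogpt", model="gpt-4", temperature=0.7,
                role="chain thoughts together to pursue the user's goals",
                can_create_agents=True)
oracle = Agent(name="summarizer-oracle", model="gpt-3.5-turbo", temperature=0.0,
               role="stateless summarization and self-criticism")

# Each capability becomes a plugin vertex connected to the main agent.
for tool in ("browse_web", "memory", "file_io", "execute_code", "generate_image"):
    connect("autogpt", tool)
connect("autogpt", "summarizer-oracle")
```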
#### 4.1.2 Limitations and Possible Improvements
Despite its potential, Auto-GPT faces several challenges and limitations. One significant obstacle is that it might get stuck in a loop, rendering it unable to function properly. The looping issue is a result of the system's reliance on chaining
thoughts together to perform tasks. While this approach allows the system to perform complex tasks, it also makes it prone to getting stuck in loops, especially when dealing with complex or ambiguous problems. However, features proposed in our framework can possibly address this shortcoming and open further avenues for improvement. For instance, the agent's inability to realize when it has gotten stuck, or to notice that it has gone off task, can potentially be mitigated by adding the "Supervisor Agent" introduced in Section 3.5.
As another example, one can implement a concept of co-agents, where multiple autonomous instances of Auto-GPT could collaborate, share a workspace for files, and communicate in a board, essentially mimicking a team of humans working remotely, with each having a specific role.
Additionally, the system's ability to interact with files and execute code opens up a wide range of possibilities for its use, but it also introduces potential security risks. These risks can be alleviated by having a human provide feedback and authorize each step, but this safeguard is completely bypassed when using the app's "continuous" mode. This means that the system should be designed with robust security measures in place to prevent unauthorized access or misuse. This can be done using a stateless oracle agent, which can monitor each sensitive task and decide whether it is indeed malicious or not.
### Artificial General Intelligence: BabyAGI
BabyAGI is an AI agent that can generate and attempt to execute tasks based on a given objective. BabyAGI operates based on three LLM chains: Task creation chain, Task prioritization chain, and Execution chain.
#### 4.2.1 Model
In our proposed framework, BabyAGI can be modeled as a system of interconnected agents, each with a specific role. The agents in BabyAGI include a task creation agent, a task prioritization agent, and an execution agent. In addition to these agents, BabyAGI uses a vector database to store and retrieve task results for context. This can be modeled as a plugin in our framework that interacts with a vector database, with operations for storing and retrieving data. Furthermore, there can be an additional agent in our framework that interacts with the user, refines the input, and places it into the task storage.
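A pseudocode-level sketch of how the three chains and the vector store could be wired in our framework, reusing the `Agent` and `llm_call` sketches above; `store_in_vector_db` is a placeholder for the storage plugin, and the loop mirrors the create-prioritize-execute cycle only loosely.

```python
creator = Agent(name="task_creator", model="gpt-3.5-turbo", temperature=0.7,
                role="derive new tasks from the objective and the last result")
prioritizer = Agent(name="task_prioritizer", model="gpt-3.5-turbo", temperature=0.0,
                    role="reorder the task list")
executor = Agent(name="executor", model="gpt-4", temperature=0.5,
                 role="carry out the top task")

def store_in_vector_db(task: str, result: str) -> None:
    """Placeholder for the vector-database plugin used for context."""

tasks = ["bootstrap: analyze the objective"]
while tasks:
    task = tasks.pop(0)
    result = llm_call(executor, task)                      # execution chain
    store_in_vector_db(task, result)                       # context storage plugin
    new = llm_call(creator, f"New tasks given: {result}").splitlines()
    tasks = llm_call(prioritizer, "\n".join(tasks + new)).splitlines()  # prioritization
```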
#### 4.2.2 Limitations and Possible Improvements
Our framework can potentially improve upon the current implementation of BabyAGI by providing a more structured and modular approach to designing the system. By modeling each agent, plugin, and operation explicitly, our framework can make it easier to understand and modify the system. Furthermore, our framework's support for feedback loops can enable the agents in BabyAGI to learn from their own performance and improve over time.
### The "Gorilla" Model
#### 4.3.1 Model
The Gorilla system [12] is based on a fine-tuned LLaMA [13] model with additional capabilities to retrieve documents and integrate this information during both training and inference. It is capable of extending beyond mere language modelling, embracing features that enable interaction with external APIs, document retrieval, and adaptation to version changes. In this system, API calls and their documentation are used to instruct the LLM about the specific tasks each API can handle. The model learns to map prompts to API calls by using a retrieval system to access the most up-to-date API documentation from the database. Gorilla also mitigates hallucination and adapts to changes in API documentation by using a retriever during training and inference.
In our framework, a single agent suffices to model the Gorilla system. To handle APIs, our model can employ plugins, which can be seen as extensions or modules designed to handle specific tasks. This results in enhanced flexibility and versatility, allowing the system to handle a broader range of tasks.
#### 4.3.2 Limitations and Possible Improvements
Although integrating information during both training and inference shows significant improvements over GPT-4 in writing API calls, our model offers a more generalized and robust framework that can be customized to different use cases. For example, our model can handle real-time updates to APIs and their documentation more efficiently by updating the relevant agent's knowledge base, rather than having to update the entire model. Additionally, it can handle overlapping functionality between different APIs more elegantly by deciding between different agents based on their functionality.
Our model can also potentially improve the process of mitigating hallucinations by using a dedicated agent for this task. This agent could verify the main agent's responses to find out when the agent is hallucinating and intervene to correct the output.
Our model can further improve the process of interacting with APIs by employing different agents for different APIs, each equipped with its plugin for the relevant API documentation. This would allow our model to handle more complex tasks and interactions, as it can leverage the combined capabilities of multiple agents at once.
### Case Study
In this section, we will delve into two distinct case studies to illustrate the practical applications of our proposed multi-agent system. These case studies, namely a court simulation and a software development scenario, have been chosen due to their inherent complexity and the necessity for diverse roles and interactions within them. Both scenarios involve a multitude of tasks and responsibilities that need to be coordinated effectively for successful outcomes. By employing our multi-agent framework, we aim to demonstrate how such complex processes can be modeled in a common framework. Each agent in the system will be assigned a specific role, mirroring the real-world roles in these scenarios. They will be equipped with the necessary tools and capabilities to fulfill their responsibilities, thereby contributing to the overall objective.
#### 4.4.1 Court Simulation
Before the introduction of new LLMs, attempts to simulate environments like a courtroom required training with specific data [14]. However, with the recent advancements in the area of language models, the training process might not be necessary anymore. In this context, our framework can be utilized to model the various roles and interactions that take place within a courtroom. This includes the roles of the judge, jury, attorneys, witnesses, and even the court clerk. Each of these roles can be represented by an agent within the system, with specific responsibilities and capabilities assigned to them.
* **Judge Agent:** The Judge Agent is responsible for overseeing the proceedings, making rulings on legal issues, and ultimately delivering the verdict in non-jury trials. This agent would require a plugin that provides access to a comprehensive database of legal knowledge and precedents, enabling it to make informed decisions.
* **Jury Agent:** The Jury Agent is responsible for determining the facts of the case and delivering a verdict in jury trials. This agent would require a plugin that allows it to understand and evaluate the evidence presented during the trial.
* **Attorney Agents:** The Attorney Agents represent the prosecution and the defence in the trial. They are responsible for presenting their respective cases, cross-examining witnesses, and making closing arguments. These agents would require plugins that provide access to legal knowledge, as well as plugins that enable them to understand and generate persuasive arguments.
* **Witness Agents:** The Witness Agents provide testimony during the trial. They would require plugins that allow them to accurately recall and describe events.
* **Court Clerk Agent:** The Court Clerk Agent is responsible for administrative tasks such as maintaining court records and administering oaths. This agent would require plugins that enable it to manage and retrieve documents, as well as plugins that allow it to perform its administrative duties.
The interactions between these agents would be governed by a set of predefined rules and protocols, simulating the procedures followed in a real courtroom. For instance, the Judge Agent could issue instructions to the other agents, the Attorney Agents could question the Witness Agents, and the Jury Agent could request clarification or additional information from any of the other agents.
In terms of operations, the simulation process would proceed in stages, similar to a real trial. The Attorney agents would present their opening statements, followed by the presentation of evidence and witness testimony. The Jury Agent would then deliberate and deliver a verdict, after which the Judge Agent would conclude the proceedings.
This simulation could be used for a variety of purposes, such as training for law students, testing new legal theories, or even automating certain aspects of the legal process. However, it's important to note that while our framework can simulate the process and interactions in a courtroom, it cannot fully replicate the complexities of human decision-making and judgement. Therefore, any outcomes produced by the simulation should be interpreted with caution.
#### 4.4.2 Software Development
Our model can be effectively used in the context of software development, enabling the creation of a multi-agent system where each agent embodies a specific role integral to the software development process. By assigning distinct responsibilities to individual agents, the development process can be significantly optimized and streamlined. The key roles, as derived from the software development team structure, can be represented as follows:
* **User Experience Designer**: This agent is responsible for understanding and designing the user experience. It can use a plugin that simulates user interactions to test different designs and gather data on user preferences. The agent can then use this data to refine the design.
* **Product Manager**: The Product Manager is responsible for understanding the needs of the users and defining the product's features accordingly. It can use a plugin that collects and analyzes user feedback to understand what features are most important to the users. This agent can also interact with the User Experience Designer Agent to ensure that the product's design aligns with the users' needs.
* **Software Architect**: The Software Architect Agent is responsible for designing the software's architecture. It can use a plugin that simulates different architectural designs to test their performance and scalability. This agent can also interact with the Software Developer Agent to ensure that the architecture is implemented correctly.
* **Software Developer**: The Software Developer is responsible for writing the code that implements the software's features. It can use a plugin that provides access to a code repository to store and manage the code. This agent can also interact with the Software Architect Agent to ensure that the code aligns with the architecture.
* **Software Tester**: The Software Tester is responsible for testing the software to ensure that it works correctly. It can use a plugin that automates the testing process, running a suite of tests on the code and reporting any failures. This agent can also interact with the Software Developer Agent to help identify and fix any bugs in the code.
* **User Interface Designer**: The User Interface Designer is responsible for designing the software's user interface. It can use a plugin that simulates user interactions to test different designs and gather data on user preferences. This agent can then use this data to refine the design.
* **Debugger**: The Debugger is responsible for identifying and fixing bugs in the code. It can use a plugin that provides debugging tools to help identify the cause of any bugs. This agent can also interact with the Software Developer Agent to help fix the bugs.
* **Oracle Agent**: The oracle agent in this context can be used to provide feedback on the overall software development process. It can assess the performance of the other agents and provide feedback to help them improve. For example, it might suggest that the Software Developer Agent needs to write more efficient code, or that the User Experience Designer Agent needs to consider a different design approach.
In this way, our model can be used to realize a more efficient and effective software development process. By assigning specific roles to each agent and using plugins to enhance their capabilities, we can create a system that is capable of automating the development of high-quality software based on the user's needs.
## 5 Challenges and Limitations
Multi-agent systems, by their very nature, are complex entities. They involve the interaction of multiple autonomous agents, each with its own capabilities and objectives. This complexity, while being a source of the system's strength, also gives rise to a host of challenges and limitations. In the following subsections, we will explore some of these challenges, shedding light on the potential hurdles that need to be overcome in the context of multi-agent systems.
### Challenges of a Dynamic System
The dynamic addition of agents, while offering the potential for enhanced flexibility and adaptability, also presents several challenges. One of the primary concerns is the risk of over-proliferation of agents, which could lead to resource exhaustion or inefficiencies in the system. To mitigate this risk, the system needs to incorporate mechanisms to monitor and control the creation of new agents.
Specifically, the system needs to employ a resource management module that tracks the computational resources consumed by each agent and the system as a whole. This module can alert the system when resource usage approaches
a predefined threshold, triggering measures to prevent resource exhaustion. These measures could include halting the creation of new agents.
In addition to resource management, the system also needs to ensure that the dynamic addition of agents does not lead to inefficiencies or conflicts. This is achieved through a coordination mechanism that oversees the assignment of roles and tasks to the agents. When a new agent is created, this mechanism ensures that its role and tasks do not overlap significantly with those of existing agents, thereby preventing redundancies and potential conflicts.
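A compact sketch of the two guards just described: a resource threshold and a naive role-overlap check gating agent creation. The token budget and the 90% threshold are arbitrary assumptions.

```python
MAX_TOKEN_BUDGET = 1_000_000  # assumed system-wide resource budget

def may_create_agent(tokens_used: int, proposed_role: str, existing_roles) -> bool:
    if tokens_used >= 0.9 * MAX_TOKEN_BUDGET:  # threshold reached: stop new agents
        return False
    overlap = any(proposed_role.strip().lower() == r.strip().lower()
                  for r in existing_roles)     # crude redundancy/conflict check
    return not overlap
```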
### Scalability
Scalability is another significant challenge in multi-agent systems. As the system grows in size and complexity, maintaining the performance and efficiency of the system can become increasingly difficult. The computational resources required for managing the interactions and operations of a large number of agents can be substantial. Additionally, as the number of agents increases, the potential for conflicts and inconsistencies also increases, which can further impact the performance of the system.
### System Evaluation
Evaluating the performance of a multi-agent system can be challenging due to the complexity and diversity of the tasks that the system can handle. Traditional evaluation metrics might not be sufficient or appropriate for assessing the performance of the system. Therefore, new evaluation metrics and methodologies might need to be developed to accurately measure the performance of the system and its individual agents.
### Ethical Considerations
The use of multi-agent systems also raises several ethical considerations. For instance, the system might make decisions or take actions that have significant impacts on individuals or society. Therefore, it is crucial to ensure that the system operates in an ethical manner and that it respects the rights and interests of all users. This requires careful design and oversight of the system, as well as the implementation of appropriate ethical guidelines and safeguards.
## 6 Conclusion
In this paper, we proposed a novel framework for enhancing the performance and capabilities of LLMs by leveraging the power of multi-agent systems. Our framework introduces a black box environment where multiple IGAs, each with unique attributes and roles, collaborate to handle complex tasks more efficiently and effectively. By introducing collaboration and knowledge exchange among these agents, our system seeks to push the boundaries of what AI can achieve, potentially paving the way towards achieving a higher level of AGI.
Despite the potential benefits, the proposed framework also presents several challenges and limitations, including issues related to security and privacy, agent design and training, system evaluation, and ethical considerations. Addressing these challenges will require further research and development, as well as careful consideration of the ethical implications of deploying such systems. Another promising direction for future work could involve the application of the proposed framework to specific use cases or domains. For instance, the framework could be adapted to handle complex tasks in areas such as healthcare, finance, education, or transportation. This could provide valuable insights into the practical utility and potential impact of the proposed framework.
|
2307.14541 | Novel BCI paradigm for ALS patients based on EEG and Pupillary
Accommodative Response | Brain-computer interfaces (BCIs) are one of the few alternatives to enable
locked-in syndrome (LIS) patients to communicate with the external world, while
they are the only solution for complete locked-in syndrome (CLIS) patients, who
lost the ability to control eye movements. However, successful usage of
endogenous electroencephalogram(EEG)-based BCI applications is often not
trivial, due to EEG variations between and within sessions and long user
training required. In this work we suggest an approach to deal with this two
main limitations of EEG-BCIs by inserting a progressive and expandable
neurofeedback training program, able to continuously tailor the classifier to
the specific user, into a multimodal BCI paradigm. We propose indeed the
integration of EEG with a non-brain signal: the pupillary accommodative
response (PAR). The PAR is a change in pupil size associated with gaze shifts
from far to close targets; it is not governed by the somatic nervous system and
is thus potentially preserved after the evolution from LIS to CLIS, which often
occurs in neurodegenerative diseases, such as amyotrophic lateral sclerosis.
Multimodal BCIs have been broadly investigated in literature, due to their
ability to yield better overall control performances, but this would be the
first attempt combining EEG and PAR. In the context of the BciPar4Sla, we are
exploiting these two signals, with the aim of developing a more reliable BCI,
adaptive to the extent of evolving together with the user's ability to elicit
the brain phenomena needed for optimal control, and providing support even in
the transition from LIS to CLIS. | Davide D'Adamo, Emiliano Robert, Cristina Gena, Silvestro Roatta | 2023-07-26T23:15:50Z | http://arxiv.org/abs/2307.14541v1 | Novel BCI paradigm for ALS patients based on EEG and Pupillary
###### Abstract
Brain-computer interfaces (BCIs) are one of the few alternatives to enable locked-in syndrome (LIS) patients to communicate with the external world, while they are the only solution for complete locked-in syndrome (CLIS) patients, who lost the ability to control eye movements. However, successful usage of endogenous electroencephalogram (EEG)-based BCI applications is often not trivial, due to EEG variations between and within sessions and the long user training required. In this work we suggest an approach to deal with these two main limitations of EEG-BCIs by inserting a progressive and expandable neurofeedback training program, able to continuously tailor the classifier to the specific user, into a multimodal BCI paradigm. We propose indeed the integration of EEG with a non-brain signal: the pupillary accommodative response (PAR). The PAR is a change in pupil size associated with gaze shifts from far to close targets; it is not governed by the somatic nervous system and is thus potentially preserved after the evolution from LIS to CLIS, which often occurs in neurodegenerative diseases, such as amyotrophic lateral sclerosis. Multimodal BCIs have been broadly investigated in the literature, due to their ability to yield better overall control performances, but this would be the first attempt combining EEG and PAR. In the context of the BciPar4Sla project, we are exploiting these two signals, with the aim of developing a more reliable BCI, adaptive to the extent of evolving together with the user's ability to elicit the brain phenomena needed for optimal control, and providing support even in the transition from LIS to CLIS.
## 1 Introduction
Locked-in Syndrome (LIS) is a rare neurological condition, possibly due to neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS), which leaves the patient incapable of any voluntary movement except for eye movements. A LIS patient is therefore unable to communicate autonomously. A set of solutions exists to assist communication in this population: from no-tech (e.g., E-Tran boards) to high-tech (e.g., eye-tracker-based systems), these solutions are mainly based on residual muscular control or eye gaze [32].
One way to assess LIS patients' intentions without depending on eye movements is to rely on brain signals. The most common approach to non-invasively extract information about brain activity is through EEG signals, which represent the activity of neuronal ensembles in the cortical areas underneath the electrodes [23]. A Brain-Computer Interface (BCI) [22, 31] is a system capable of using this kind of signal to control any kind of external effector, from robotic arms to communication procedures. The main brain phenomena exploited for BCI control are visually evoked potentials [3, 9], used in speller systems and necessarily phase-locked to an external stimulus (exogenous BCI), and sensory-motor rhythm (SMR) variations, spontaneously evocable by the user (endogenous BCI) by practicing Motor Imagery (MI) tasks, i.e., imagined movements of a specific body part [36]. Unfortunately, EEG-based BCIs suffer from several issues: low signal-to-noise ratio, high inter-session instability, the need for periodic recalibration of the predicting model and, in the case of SMR control, a long user-training process [17, 21, 23, 29, 34].
From this point of view, a BCI could benefit from a secondary input other than EEG, reliable and easy for the user to control, to support him/her during the MI training period by giving the possibility to control the interface from day zero. The pupillary accommodative response (PAR) [8, 25] is a good candidate for this role: the control signal is elicited by the natural act of shifting the gaze from a far to a near target, making the learning process for the user quite straightforward. PAR is based on variations of pupil size, governed by the autonomic nervous system; therefore it is potentially retained by complete locked-in syndrome (CLIS) patients. CLIS represents the subsequent stage in many cases of LIS (e.g., when caused by neurodegenerative diseases), defined as the condition in which the patient also loses control of the eye movements. To our knowledge, this is the first attempt in the literature to develop a multimodal endogenous BCI combining EEG and PAR.
The idea presented in this work aims at developing a low-cost adaptive EEG-based BCI, easy to use from the beginning thanks to PAR support, capable of continuously improving both the BCI classification model and the user's ability to control SMR, and providing a safer transition to the CLIS condition, usually an unavoidable stage for this kind of patient. The work has been realized in the context of the BciPar4Sla project, a follow-up of a past project on BCI [13], which aims to develop an innovative form of human-machine interaction based on two possible communication channels: brain waves voluntarily modulated by the patient (EEG) and pupillary movement.
This paper has been organized as follows: Section 2 discusses related work in the field, Section 3 presents our approach, while Section 4 concludes the paper.
## 2 Related Work
The PAR signal has been proven robust and effective in terms of human-computer interface control [25] and has allowed the development of a stand-alone Augmentative and Alternative Communication device, e-Pupil [6], enabling the user to answer simple questions or summon the attention of caregivers, yielding an accuracy of 100% over a 4-class discrimination paradigm based on the duration and instant of initiation of the pupillary constriction.
Integrating an EEG-based BCI with such a control signal would make it a multimodal BCI [20], potentially yielding better performance in target detection and/or allowing multidimensional control. Many examples of multimodal BCIs can be found in the literature. For example, Kim et al. combined mental state recognition and eye-gaze direction to increase the range of commands callable by the user in a quadcopter driving task, while keeping the UI intuitive and simple. Another example is the work of Pfurtscheller et al., which demonstrated that using an MI-based switch to activate a steady-state visual evoked potential BCI paradigm helps substantially to reduce the misclassification rate. In this way an exogenous paradigm could be used in a self-paced manner, exploiting the endogenous nature of MI. Finally, de'Sperati et al. combined pupillary frequency tagging, due to the pupillary response to periodic oscillations of light intensity, and steady-state visual evoked potentials to increase accuracy in a simple binary communication protocol; however, to our knowledge there is no work in the literature using both PAR and MI in a multimodal BCI paradigm.
In order to exploit SMR as control signals, the user needs to learn how to correctly execute MI tasks, and to this aim neurofeedback (NF) training protocols are usually adopted [10, 19, 27]. NF training sessions consist of short trials (less than 20 seconds) in which the user is told which MI task to exercise (e.g., right hand, left hand, feet) and given a feedback linked to the online classification score, enabling him/her to hone the execution technique and reach better performance (i.e., faster SMR modulation, better separation between classes). Since the user progressively learns how to elicit changes in SMR, training the classifier only at the beginning of the BCI experience (i.e., before the user finds the best way to execute MI) is not an efficient strategy. Therefore, NF training embedded with online model adaptation techniques has emerged as an effective tool for BCI users to obtain optimal BCI performance [11, 14, 1].
## 3 Approach
This section describes the development perspectives of the current work: subsection 3.1 introduces the signals exploited as controls over the interface, their generation, the required processing and the classification methods; subsection 3.2 describes the User Interface (UI) in terms of interaction modalities and adaptation to user's control capabilities; finally, subsection 3.3 describes the user training protocol.
### Control Signals
The BCI to be developed will be based on two main physiological signals: pupil area and SMR variations, obtained respectively through PAR and MI tasks. In the following paragraphs signal generation, acquisition, processing and classification for both phenomena are briefly described.
_PAR: Task Execution._ Pupil constriction is obtained through PAR when the user shifts the gaze from a far target to a near one. A single PAR task is executed by shifting the gaze to the near target and back to the far one. Taking inspiration from previous works exploiting this phenomenon [5, 6, 25, 33], the main UI display will constitute the far target (about 150 cm from the subject), while a transparent plastic sheet covered in white dots, placed about 30 cm away from the subject, will constitute the near target. The acquisition device will be an infrared (IR) camera, coupled with an IR LED, mounted on a customized pair of eyeglasses and connected to the PC, although the intended application could in principle work well with a remote eye-tracking system. However, most remote eye-trackers do not provide real-time access to pupil size measurements and may be considerably expensive. On the contrary, the present prototype was developed following a low-cost approach, which, however, grants full control of all acquisition and processing steps [5, 6, 25], as required in research applications.
_PAR: Image Processing and Classification._ The image processing pipeline to be adopted reproduces the one presented in [5]. Briefly, after a preliminary automatic identification of the region of interest (ROI) containing the pupil, frames coming from the camera are cropped to match the ROI and processed via the ellipse-fitting method described in [28] to detect the pupil, whose area can finally be computed. Signal conditioning applied to the pupil-area time series follows the pipeline designed in [6] and allows coping with physiological fluctuations of pupil size.
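As a rough illustration only, and not the actual pipeline of [5, 28], the per-frame pupil area could be obtained with OpenCV by thresholding the dark pupil in the IR image and fitting an ellipse to the largest contour; the threshold value is an assumption.

```python
import math
import cv2  # OpenCV

def pupil_area(roi_gray, threshold=40):
    """Return the fitted-ellipse area (in pixels) of the pupil in a grayscale
    ROI, or None if no plausible pupil is found."""
    _, mask = cv2.threshold(roi_gray, threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:  # cv2.fitEllipse requires at least 5 points
        return None
    (_, _), (major, minor), _ = cv2.fitEllipse(pupil)
    return math.pi * (major / 2.0) * (minor / 2.0)
```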
_MI: Task execution._ As said above, SMR modulation can happen as a consequence of specific mental tasks: during MI, the user performs an imaginary movement of a specific body part, which will trigger, similarly to an actual motor action, a frequency- and location-specific modulation of the EEG power (event related (de)synchronization; for details see [16]). The EEG acquisition setup consists of an electrodes headset, a bioamplifier and a processing workstation (PC). Two different acquisition system will be tried in this work: OpenBC1 CytonDaisy board (bioamplifier) coupled with a
Footnote 1: [http://www.openbci.com/](http://www.openbci.com/)
Footnote 2: [https://www.greentekensor.com](https://www.greentekensor.com)
Greentek2 EEG cap (headset), and an Emotiv EPOC+ (embedded). Proceeding with implementation and testing, the best performing system will be chosen.
_MI: EEG Classification of mental tasks._ Among the different methods designed in the last decades, according to the literature [21, 35], Riemannian-geometry-based classifiers are considered state-of-the-art for MI task classification. Following the pipeline described in [11], for the initial model training the EEG data is bandpass filtered and divided into 50% overlapping, 0.5 s long labeled epochs; covariance matrices are computed and averaged in the Riemannian space for the different classes (at least one MI task and the idle state, i.e. no-control), obtaining class-specific prototypes. During actual classification, new data epochs will be classified based on the Riemannian distance from the class prototypes.
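A minimal sketch of this classification rule (not the full pipeline of [11]; function names and array shapes are our assumptions, and the class prototypes are taken as precomputed) computes the affine-invariant Riemannian distance from generalized eigenvalues and assigns each new epoch to the nearest prototype:

```python
import numpy as np
from scipy.linalg import eigh

def riemann_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    sqrt(sum_i log^2 lambda_i), with lambda_i the generalized eigenvalues."""
    lam = eigh(A, B, eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def classify_epoch(epoch, prototypes):
    """Assign a bandpass-filtered epoch (channels x samples) to the class
    whose prototype covariance is closest in the Riemannian sense."""
    C = np.cov(epoch)   # sample covariance matrix of the epoch
    return min(prototypes, key=lambda c: riemann_distance(C, prototypes[c]))
```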
_Model adaptation._ To address the strong non-stationarity of EEG [23], adaptive approaches are encouraged [29]; indeed, the work in [11] includes a comparison of strategies to continuously update the classifier references (class prototypes in the Riemannian space). The best performing strategy in terms of classification accuracy and computational cost is based on a periodical re-estimation of the Riemannian class means considering both incoming new data epochs and the previous prototypes, the latter being heavily weighted. Moreover, the classifier adaptation takes place during NF training sessions, matching perfectly the intents of our work. Therefore, the adaptation method to be implemented will strongly take inspiration from the just presented design.
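A possible form of such a weighted update (our own sketch: the geodesic step size `t` plays the role of the weight on new data, and [11] may use a different scheme) moves each prototype a small step along the geodesic towards the covariance of the newly acquired epoch:

```python
from scipy.linalg import fractional_matrix_power as powm

def adapt_prototype(P, C_new, t=0.05):
    """Geodesic update of a class prototype P towards a new epoch
    covariance C_new; a small t keeps the previous prototype heavily
    weighted, as required by the text."""
    P_half = powm(P, 0.5)
    P_inv_half = powm(P, -0.5)
    inner = powm(P_inv_half @ C_new @ P_inv_half, t)
    return P_half @ inner @ P_half
```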
### User interface
Another core objective of this work is developing a BCI application capable of adapting to the user in a smart and personalized way [4], helping him/her to make his/her _preferential choices_ [15]. This aim can be reached by designing an adaptive user interface (UI), able to evolve together with the control capabilities of the user. At the beginning, when the user is not yet confident with the execution of MI, the interface can be driven using PAR only (PAR-based UI). After an initial period of user MI training through NF protocols, and automatic model fine-tuning, if the scores obtained in the NF training sessions are high enough (see subsection 3.3), the UI evolves to a stage where both PAR and EEG can be exploited to obtain a smoother user experience. In any of these configurations, the possibility of going back to the previous menu must always be present and the UI should give the possibility to i) easily access NF training sessions and ii) promptly call the caregiver when needed. In the following paragraphs the design of interaction for the PAR-based and multimodal configurations is described, and finally the possibility of answering simple external questions is discussed.
_PAR-based UI._ When the UI has to be entirely driven by PAR, the main paradigm for making choices is selecting them from a dynamic menu by executing the task, and confirming the selection in a secondary _confirmation menu_. In this way, by executing two PAR tasks the user achieves a successful selection, supposedly in less than five seconds [6]. A plausible example of the main menu is presented together with an example of the confirmation menu for the choice "Mental task training" (Fig. 1b).
_Multimodal UI._ Once the user is trained and ready to use MI as a browsing control, the PAR-based menu configuration will be integrated with "MI shortcuts", that is, the possibility for the user to execute an MI task to directly choose one of the available options. For example, if the user was trained in right hand MI, this task could be used to access the communication mode (Speller from the menu in Fig. 1) and the confirmation phase could be skipped. The more MI tasks the subject learns to use (and the model learns to recognize), the more shortcuts can be integrated in the PAR-based menu configuration.
_Simple answers._ Interaction with others is based on simple questions that require only a confirmation or denial answer from the patient. Therefore this design includes the possibility to trigger, via an external input (i.e. a push button), a special menu with only three choices: Yes, No and Don't want to answer. The selection follows the PAR-based menu paradigm.
We have to emphasise that the sketches proposed in Fig. 1 will be shown and discussed with stakeholders such as doctors, caregivers and patients in order to review and redesign them in a co-design perspective [7]. Once implemented, the proposed UI will be tested in the wild with neurotypical users and then with patients, by proposing a set of gamified activities, as already successfully experienced in [12, 26], to make the experience more meaningful and enjoyable, and then collect feedback in a real context of use.
### Neurofeedback Training
As anticipated above, NF user training is a fundamental building block for SMR-based BCIs and, given the subject-tailored nature of the application to be developed, its importance in this context is even greater. It consists of a closed-loop system, whose actors are i) the user, reproducing the required task, ii) the acquisition system, which streams real-time EEG data to iii) the classifier running on the PC, which in turn makes a prediction and gives it back to the user through iv) an audio and/or visual apparatus [30]. The feedback reflects how close the user is to the ideal execution of the task, therefore allowing him/her to try different strategies to reach better or faster task recognition.
Since, in this context, NF training sessions are closely bound to classifier adaptation, the interface used for this function will also be inspired by [11]. The main difference with the paradigm developed here lies in the number of tasks to be trained at once: the authors of [11] implemented a 4-class training, while this work tends towards a more gradual path, as suggested in [2, 27], training a new task only after the user gets confident with the previous ones. Moreover, in this work the MI performance of the user must be tracked in order to assess the potential reliability of using it as a control signal. According to [14], novel Riemannian-geometry-based user performance metrics reflecting class separability and within-class consistency could be a valid index of user training progress; therefore, these could be implemented and evaluated to define the switch from the _PAR-based UI_ to the _Multimodal UI_.
## 4 Conclusion
The vision presented in this short paper is essentially an attempt to merge the work done with PAR in [6] with the advances in adaptive MI classifiers represented here by [11] and [14], in a novel multimodal BCI application tailored to the specific user, following his/her progress in training those skills (MI execution in this preliminary prototype) which could restore, at least partially, his/her independence. The novelty of this work resides mainly in the use of PAR as an additional control signal: this gives the user the possibility to interact with the system from the first moment, with no need for training and through a quite natural act. Moreover, the PAR control signal features the possibility to expand its
communication potential in different ways. As done in [6], evaluating the duration of pupillary constriction may allow defining different commands. Alternatively, using a secondary display as the near target would enable new interface designs and increase the interaction speed. However, considering that CLIS patients cannot move their eyes, the two displays should be superimposed, so the near one should be semi-transparent. Moreover, the PAR decoding algorithm should be tailored to recognize single gaze shifts rather than complete PAR tasks as described in subsection 3.1. Future works may head in this direction, with the objective of obtaining a smooth user experience, involving the main stakeholders in the co-design and evaluation of both the UI and the UX, matching CLIS patients' needs and thus enabling an easier and more effective interaction with the outer world.
## Acknowledgments
This research was funded by Fondazione CRT1 in the context of the 2021 funding program, grant number 2021.0609.
Footnote 1: [https://www.fondazionecrt.it/](https://www.fondazionecrt.it/)
|
2303.09781 | Ising model on a $restricted$ scale-free network | The Ising model on a $restricted$ scale-free network (SFN) has been studied
employing Monte Carlo simulations. This network is described by a power-law
degree distribution in the form $P(k)\sim k^{-\alpha}$, and is called restricted,
because independently of the network size, we always have fixed the maximum
$k_{m}$ and a minimum $k_{0}$ degree on distribution, being that for it, we
only limit the minimum network size of the system. We calculated the
thermodynamic quantities of the system, such as, the magnetization per spin
$\textrm{m}_{\textrm{L}}$, the magnetic susceptibility $\chi_{\textrm{L}}$, and
the reduced fourth-order Binder cumulant $\textrm{U}_{\textrm{L}}$, as a
function of temperature $T$ for several values of lattice size $N$ and exponent
$1\le\alpha\le5$. For the values of $\alpha$, we have obtained the finite
critical points due to we also have finite second and fourth moments in the
degree distribution, and the phase diagram was constructed for the equilibrium
states of the model in the plane $T$ versus $k_{0}$, $k_{m}$, and $\alpha$,
showing a transition between the ferromagnetic $F$ to paramagnetic $P$ phases.
Using the finite-size scaling (FSS) theory, we also have obtained the critical
exponents for the system, and a mean-field critical behavior is observed. | R. A. Dumer, M. Godoy | 2023-03-17T05:32:55Z | http://arxiv.org/abs/2303.09781v1 | # Ising model on a _restricted_ scale-free network
###### Abstract
The Ising model on a _restricted_ scale-free network (SFN) has been studied employing Monte Carlo simulations. This network is described by a power-law degree distribution in the form \(P(k)\sim k^{-\alpha}\), and is called restricted because, independently of the network size, the maximum \(k_{m}\) and minimum \(k_{0}\) degrees of the distribution are always kept fixed, which only limits the minimum network size of the system. We calculated the thermodynamic quantities of the system, such as the magnetization per spin \(\mathrm{m_{L}}\), the magnetic susceptibility \(\chi_{\mathrm{L}}\), and the reduced fourth-order Binder cumulant \(\mathrm{U_{L}}\), as a function of temperature \(T\) for several values of lattice size \(N\) and exponent \(1\leq\alpha\leq 5\). For these values of \(\alpha\), we have obtained finite critical points because we also have finite second and fourth moments of the degree distribution, and the phase diagram was constructed for the equilibrium states of the model in the plane \(T\) versus \(k_{0}\), \(k_{m}\), and \(\alpha\), showing a transition from the ferromagnetic \(F\) to the paramagnetic \(P\) phase. Using finite-size scaling (FSS) theory, we also have obtained the critical exponents for the system, and a mean-field critical behavior is observed.
## I Introduction
Hyperlinks pointing from one web page to another (World Wide Web), computers physically linked (internet), actors that have acted in a movie together, scientists that have an article together, and proteins that bind together experimentally are some of the cases that, when analyzed in terms of nodes and edges of a network, belong to a broad group of real systems in which the degree distribution has a power-law tail [1; 2]. This degree distribution has the form \(P(k)\sim k^{-\alpha}\), representing the probability of a site in the network having a degree \(k\), i.e., \(k\) edges linked to it, with exponent \(\alpha\). Barabasi-Albert [3] proposed a growing process of network creation in which the resulting degree distribution also has a power-law form. That growing process is based only on two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. With these mechanisms, the most connected sites are the most likely to receive new connections, and a "rich-get-richer" self-organization phenomenon, as in real networks, is observed.
Due to the applicability of networks with a power-law degree distribution, also called scale-free networks, they have been implemented in many physical problems [4; 5; 6; 7; 8; 9; 10; 11]. Highlighting the critical phenomena arising in the Ising model, Dorogovtsev, Goltsev, and Mendes showed analytically that its critical behavior is very dependent on the distribution of connections [12; 13]. From this dependence, when \(\alpha>5\) in \(P(k)\), its fourth moment \(\left\langle k^{4}\right\rangle\) is convergent and a mean-field critical behavior is obtained. When \(3<\alpha\leq 5\), anomalous behavior of the thermodynamic quantities is observed, due to the divergence of \(\left\langle k^{4}\right\rangle\); but when \(\alpha\leq 3\), the divergence is in the second moment, \(\left\langle k^{2}\right\rangle\), and the criticality varies with the size of the system, being an infinite-order phase transition in the thermodynamic limit. In the network proposed in the Barabasi-Albert (BA) model [3], the exponent is limited to \(\alpha=3\), and in previous approximate and numerical results, the infinite-order phase transition is verified [14; 15]. In addition to \(\alpha=3\), for the cases where \(\left\langle k^{4}\right\rangle\) and \(\left\langle k^{2}\right\rangle\) are convergent or divergent, Monte Carlo simulations [16] and the replica method [17] have confirmed non-trivial critical exponents, beyond the size-dependent critical temperature.
In these Monte Carlo simulations, the SFN, constructed with a selected value of \(\alpha\), does not rely on the two growing mechanisms of the BA model; instead, degree distributions for the vertices are created based on the exact value of \(P(k)=Ak^{-\alpha}\) [16]. For this exact value, the minimum \(k_{0}\) and maximum \(k_{m}\) degrees of the network are predefined, and the normalization constant of the distribution is calculated as \(A=\left(\sum_{k=k_{0}}^{k_{m}}k^{-\alpha}\right)^{-1}\). In this sense, supposing that the number of sites with the degree \(k_{m}\) is \(N_{k_{m}}=1\), the network size with these characteristics is given by \(N=k_{m}^{\alpha}/A\). In this way, fixing \(k_{0}\) and varying the network sizes \(N\), Herrero [16] could reproduce the main critical phenomena seen analytically [12; 13], by the FSS analysis of networks with non-fixed \(k_{m}\).
In this work, we investigated the Ising model on a _restricted_ SFN, where each site of the network is occupied by a spin-\(1/2\) variable \(\sigma\) that can assume the values \(\pm 1\). Our network was built for various integer values of the exponent \(\alpha\), and divided into two sublattices, with each connection distributed by \(P(k)\), a power-law degree distribution, required to
connect these sublattices. Besides that, in a similar way to what was proposed by Herrero [16] to construct his random uncorrelated network, we also predefine the values of \(k_{0}\) and \(k_{m}\) in our system. These values are kept fixed while we vary the size \(N\) of the network. To this end, the minimum network size that we can use is defined by \(N_{0}=k_{m}^{\alpha}/A\), and \(N_{k_{m}}=1\) is not always obtained. However, as we have the same degrees on all network sizes, we always have convergent \(\left\langle k^{2}\right\rangle\) and \(\left\langle k^{4}\right\rangle\), and consequently finite transition points, from the ferromagnetic to the paramagnetic phase, for all values of \(\alpha\). Thus, through Monte Carlo simulations, we have built phase diagrams of the critical temperature \(T_{c}\) as a function of \(k_{0}\) and \(k_{m}\) for the studied values of \(\alpha\), and using FSS theory, we have obtained the critical exponents for the system.
This article is organized as follows: in Section II, we describe the network used and the Hamiltonian model of the system. In Section III, we present the Monte Carlo simulation method, some details concerning the simulation procedures, and the thermodynamic quantities of the system, also necessary for the application of the FSS analysis. The behavior of the thermodynamic quantities, the phase diagrams, and the critical exponents are described in Section IV. Finally, in Section V, we present our conclusions.
## II Model
The Ising model studied in this work has \(N=L^{2}\) spins \(\sigma_{i}=\pm 1\) on a _restricted_ SFN, with a ferromagnetic interaction of strength \(J_{ij}\). To distribute the connections on this network, we first defined \(k_{0}\), \(k_{m}\) and \(\alpha\), i.e., the minimum and maximum degrees that the network is required to have and the exponent of the distribution, respectively. Next, we calculated the normalization constant of the distribution, \(A=\left(\sum_{k=k_{0}}^{k_{m}}k^{-\alpha}\right)^{-1}\), and found the smallest network that we will use in the system, \(N_{0}=k_{m}^{\alpha}/A\). With these values, we create the set of numbers of sites, \(\{N_{k}\}\), that will have the respective degrees \(k\), \(N_{k}=AN/k^{\alpha}\), and distribute them over the network. For this distribution of connections, we have divided the network into two sublattices, where one sublattice plays the role of central spins, while the other sublattice contains the spins to which the central spins can connect. Thus, starting with the lowest degree, we select one site \(i\) on the network, and its sublattice will be the sublattice of central spins; then, from the other sublattice, we select at random one site \(j\) that has not yet received its respective connections. With this, we add \(j\) to the neighbors of \(i\), and \(i\) to the neighbors of \(j\). This process is repeated until \(i\) has its \(k\) connections and the whole set \(\{N_{k}\}\) has been visited. It is valid to say that, for this network, there is no need to create these sublattices and the connections could be created completely at random between free sites, but here this implementation was done as a way to prepare the system for future works on non-equilibrium systems, without loss of generality.
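To make the construction concrete, a minimal Python sketch is given below; the function name, the rounding of the set \(\{N_{k}\}\), the padding of rounding losses with degree \(k_{0}\), and the random stub pairing between the two sublattices are our own implementation choices, not necessarily the exact procedure of the simulations.

```python
import numpy as np

def build_restricted_sfn(N, k0, km, alpha, seed=0):
    """Return the edge list of a restricted SFN with N sites.

    Degrees k0..km follow N_k = A N k^{-alpha}; sites are split into two
    sublattices and every edge connects the two sublattices.
    """
    rng = np.random.default_rng(seed)
    ks = np.arange(k0, km + 1, dtype=float)
    A = 1.0 / np.sum(ks ** -alpha)                  # normalization constant
    Nk = np.rint(A * N * ks ** -alpha).astype(int)  # sites per degree
    degrees = np.repeat(ks.astype(int), Nk)[:N]
    if degrees.size < N:                            # pad rounding losses
        degrees = np.concatenate(
            [degrees, np.full(N - degrees.size, k0, dtype=int)])
    rng.shuffle(degrees)
    half = N // 2
    stubs_a = np.repeat(np.arange(half), degrees[:half])
    stubs_b = np.repeat(np.arange(half, N), degrees[half:])
    rng.shuffle(stubs_a)
    rng.shuffle(stubs_b)
    m = min(stubs_a.size, stubs_b.size)             # equalize stub counts
    return list(zip(stubs_a[:m], stubs_b[:m]))
```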
In Fig. 1, we display an example of the network, with \(\alpha=3\), \(k_{0}=2\), \(k_{m}=8\) and \(N=10^{2}\). The sites in the middle of the figure are the most connected, while the peripheral sites are the least connected, and sites from the green sublattice are only connected with sites from the red sublattice.
Based on this construction, throughout this work we have selected the integer values of \(1\leq\alpha\leq 5\), and network sizes
Figure 1: Schematic representation of the _restricted_ SFN. The red circles indicate the sites on one of the sublattices, the green circles are the sites on the other sublattice, and the black solid lines are the connections between the two sublattices. The size of the circles is proportional to the site degree, varying from \(k_{0}=2\) to \(k_{m}=8\), with \(\alpha=3\), and \(N=10^{2}\).
\((32)^{2}\leq N\leq(256)^{2}\) to study the Ising model. In Fig. 2, we show the degree distribution in the largest network size of the system for these selected values of \(\alpha\). With these distributions in the log-log plot, we can see that the construction method used here guarantees the power-law form, as predicted for SFNs.
The ferromagnetic Ising spin energy is described by the Hamiltonian of the form
\[\mathcal{H}=-\sum_{\langle i,j\rangle}J_{ij}\sigma_{i}\sigma_{j} \tag{1}\]
where the sum is over all pairs of spins, and \(J_{ij}\) is the ferromagnetic interaction, which assumes the value of unity if sites \(i\) and \(j\) are connected by a link, and zero otherwise.
## III Monte Carlo simulations
In the simulation of the system specified by the Hamiltonian in Eq. (1) and with a _restricted_ SFN, we have chosen the initial state of the system with all spins in random states, and a new configuration is generated by a Markov process. In this process, for a given temperature \(T\), exponent \(\alpha\) of the degree distribution, network size \(N\), and minimum \(k_{0}\) and maximum \(k_{m}\) degrees, we choose at random a spin \(\sigma_{i}\) on the network and change its state by the one-spin-flip mechanism with a transition rate given by the following Metropolis prescription
\[W(\sigma_{i}\rightarrow\sigma_{i}^{\prime})=\left\{\begin{array}{rl}e^{(- \Delta E/k_{B}T)}&\mbox{if}\ \ \Delta E>0\\ 1&\mbox{if}\ \ \Delta E\leq 0\end{array}\right., \tag{2}\]
where \(\Delta E\) is the change in energy after flipping the spin, \(\sigma_{i}\rightarrow\sigma_{i}^{\prime}\), \(k_{B}\) is the Boltzmann constant, and \(T\) the temperature of the system. Therefore, in this scenario, a new state is accepted if \(\Delta E\leq 0\); in the case where \(\Delta E>0\), the acceptance is weighted by the probability \(\exp{(-\Delta E/k_{B}T)}\), and the flip is accepted only if a randomly chosen number \(0<\xi<1\) satisfies \(\xi\leq\exp{(-\Delta E/k_{B}T)}\). If neither of these conditions is satisfied, we do not change the state of the spin.
Repeating the Markov process \(N\) times, we have one Monte Carlo step (MCS). In our simulations, we have waited \(10^{4}\) MCS for the system to reach the equilibrium state, for all lattice sizes and parameter values. To calculate the thermal averages of the quantities of interest, we used a further \(4\times 10^{4}\) MCS. The averages over samples were done using 10 independent samples for each configuration.
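A single MCS of this procedure could be sketched as follows, with \(J=k_{B}=1\) and `neighbors[i]` an array of the sites connected to site \(i\) (the adjacency built from the edge list above; variable names are our assumptions):

```python
import numpy as np

def monte_carlo_step(spins, neighbors, T, rng):
    """One MCS: N single-spin-flip Metropolis updates with the
    transition rate of Eq. (2), taking J = k_B = 1."""
    N = spins.size
    for _ in range(N):
        i = rng.integers(N)
        # energy change of flipping spin i under the Hamiltonian of Eq. (1)
        dE = 2.0 * spins[i] * np.sum(spins[neighbors[i]])
        if dE <= 0.0 or rng.random() <= np.exp(-dE / T):
            spins[i] = -spins[i]
```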
After reaching the equilibrium state, we have measured the following thermodynamic quantities: magnetization per spin m\({}_{\rm L}\), magnetic susceptibility \(\chi_{\rm L}\) and reduced fourth-order Binder cumulant U\({}_{\rm L}\):
\[{\rm m}_{\rm L}=\frac{1}{N}\left[\left\langle\sum_{i=1}^{N}\sigma_{i}\right \rangle\right], \tag{3}\]
Figure 2: Log-log plot of the degree distribution on the _restricted_ SFN for some selected values of \(\alpha\), as shown in the figure. All curves refer to the network size \(N=256^{2}\), minimum degree \(k_{0}=4\) and maximum degree \(k_{m}=10\). The dotted lines have the exact expected slopes, \(\alpha\); the errors of the slopes are of order \(10^{-4}\) and the error bars are the same size or smaller than the symbols.
\[\chi_{\rm L}=\frac{N}{k_{B}T}\left[\left\langle{\rm m}_{\rm L}^{2}\right\rangle- \left\langle{\rm m}_{\rm L}\right\rangle^{2}\right], \tag{4}\]
\[{\rm U}_{\rm L}=1-\frac{\left[\left\langle{\rm m}_{\rm L}^{4}\right\rangle \right]}{3\left[\left\langle{\rm m}_{\rm L}^{2}\right\rangle^{2}\right]}, \tag{5}\]
where \([\ldots]\) represents the average over the samples, and \(\left\langle\ldots\right\rangle\) the thermal average over the MCS in the equilibrium state. In the vicinity of the critical temperature \(T_{c}\), the above-defined quantities obey the following finite-size scaling relations:
\[{\rm m}_{\rm L}=L^{-\beta/\nu}m_{0}(L^{1/\nu}\epsilon), \tag{6}\]
\[\chi_{\rm L}=L^{\gamma/\nu}\chi_{0}(L^{1/\nu}\epsilon), \tag{7}\]
\[{\rm U}_{\rm L}^{\prime}=L^{1/\nu}\frac{U_{0}^{\prime}(L^{1/\nu}\epsilon)}{T_ {c}}, \tag{8}\]
where \(\epsilon=(T-T_{c})/T_{c}\), \(m_{0}(L^{1/\nu}\epsilon)\), \(\chi_{0}(L^{1/\nu}\epsilon)\) and \(U_{0}(L^{1/\nu}\epsilon)\) are the scaling functions, and \(\beta\), \(\gamma\) and \(\nu\) are the magnetization, magnetic susceptibility and correlation length critical exponents, respectively.
Using Eqs. (6), (7), (8) and the data from simulations for the network sizes \(32\leq L\leq 256\), we have obtained the critical exponent ratios \(\beta/\nu\), \(\gamma/\nu\) and \(1/\nu\) from the slopes of \({\rm m}_{\rm L}(T_{c})\), \(\chi_{\rm L}(T_{c})\) and \({\rm U}_{\rm L}^{\prime}(T_{c})\) as a function of \(L\) in a log-log plot. Aside from that, we also used the data collapse of the scaling functions to estimate the critical exponent values.
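The slope extraction of the first approach amounts to a linear fit in log-log scale; a minimal sketch, with illustrative (not measured) magnetization values, is:

```python
import numpy as np

def exponent_ratio(L, Q):
    """Slope of log Q(T_c) versus log L, i.e. -beta/nu for m_L,
    gamma/nu for chi_L and 1/nu for U'_L (Eqs. (6)-(8))."""
    slope, _ = np.polyfit(np.log(L), np.log(Q), 1)
    return slope

L = np.array([32, 64, 128, 256])
m_at_Tc = np.array([0.52, 0.38, 0.27, 0.19])   # illustrative values only
print(exponent_ratio(L, m_at_Tc))              # approximately -beta/nu
```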
## IV Results
The interesting results about critical phenomena in complex networks, more specifically on random uncorrelated networks, led us to understand the importance of the degree distribution and its respective moments. For the Ising model on the uncorrelated SFN, in the limit where \(N\rightarrow\infty\) and \(k_{m}\rightarrow\infty\), the change in the network structure for \(\alpha>3\) decreases the number of more connected spins, admitting a finite-order phase transition until the standard mean-field critical behavior is reached, due to the strong correlations of the most connected vertices with their neighborhood [12; 13]. However, here, by the restriction of the maximum \(k_{m}\) and minimum \(k_{0}\) degrees, and of the minimum network size, when \(N\rightarrow\infty\), \(k_{m}\) remains finite, consequently changing the number of more and less connected sites, their correlations, and the critical phenomena, as we can see in our results.
Figure 3: Thermodynamic quantities as a function of temperature \(T\) for a fixed value of \(\alpha=1\), \(k_{0}=4\) and \(k_{m}=10\), and for different network sizes, as present in the figure. (a) Magnetization \({\rm m}_{\rm L}\), (b) reduced fourth-order Binder cumulant \({\rm U}_{\rm L}\), and (c) susceptibility \(\chi_{\rm L}\).
To begin with, we have identified the transition point between the ferromagnetic \(F\) and the paramagnetic \(P\) phases through the curves of the fourth-order Binder cumulant \(\mathrm{U_{L}}\) for different network sizes [18; 19; 20; 21]. This critical point of the second-order phase transition can be identified by the crossing point of the \(\mathrm{U_{L}}\) curves, and an example is shown in Fig. 3. In this figure, we present one of the best results for the thermodynamic quantities obtained from Eqs. (3), (4) and (5): in Fig. 3(a) we can see the behavior of the magnetization \(\mathrm{m_{L}}\) as a function of \(T\), in Fig. 3(b) the reduced fourth-order Binder cumulant \(\mathrm{U_{L}}\), and in Fig. 3(c) the magnetic susceptibility \(\chi_{\mathrm{L}}\). In this case, we have used fixed values of \(\alpha=1\), \(k_{0}=4\) and \(k_{m}=10\), and different network sizes, as presented in Fig. 3.
With the critical points obtained, the phase diagrams were built, see Fig. 4. For these diagrams, the temperature as a function of \(k_{0}\), \(k_{m}\) and \(\alpha\) is studied, and for that we have used \(4\leq k_{0},k_{m}\leq 10\) and integer values of \(\alpha\). The continuous transition lines can be seen in Fig. 4(a) for temperature \(T\) versus \(k_{0}\), for a fixed value of \(k_{m}=10\) and some selected values of \(\alpha\), where we can verify that there is a finite critical point for all the values of \(\alpha\). Because we have a fixed value of \(k_{m}\), as we increase \(k_{0}\) the number of different degrees on the network decreases until the whole system has the coordination number \(k_{0}=k_{m}=10\); when that happens, the exponent of the degree distribution is no longer important, returning us to a unique critical point for all the exponents \(\alpha\). On the other hand, in the case where \(k_{0}\neq k_{m}\), the degrees can be distributed over the network, and different values of \(\alpha\) lead to different critical points in the system, with each value of \(\alpha\) responsible for a distinct curve in Fig. 4(a). Alternatively, we can fix \(k_{0}=4\) and vary \(k_{m}\), as can be seen in Fig. 4(b) for \(T\) versus \(k_{m}\). Thus, we can observe the same qualitative behavior, namely continuous transition lines for all parameter values, distinct curves for distinct exponents \(\alpha\), and a unique critical point when \(k_{m}=k_{0}\). Despite these similarities, if we approximate these curves by a linear fit, the slopes of the curves in Fig. 4(a) increase as we increase \(\alpha\), while the slopes of the curves in Fig. 4(b) decrease as \(\alpha\) increases. This is visually perceptible, but is also explicit in the
Figure 4: (a) Phase diagram of temperature \(T\) as a function of \(k_{0}\) for a fixed value of \(k_{m}=10\), and (b) as a function of \(k_{m}\) for a fixed value of \(k_{0}=4\). Both figures represent transitions between the \(F\) and \(P\) phases. The solid lines are just a guide for the eyes and the dashed lines represent the analytical result obtained by Eq. (9).
Figure 5: (a) Phase diagram of temperature \(T\) as a function of \(\alpha\), for the transition between the \(F\) and \(P\) phases, with fixed values of \(k_{0}=4\) and \(k_{m}=10\). The dashed line is the theoretical result expected for random uncorrelated networks, obtained using Eq. (9). The red squares are the simulation data points on the _restricted_ SFN, calculated from the crossing of the \(\mathrm{U_{L}}\) curves. (b) Degree-degree correlations \(r\) as a function of \(\alpha\), obtained from Eq. (10).
variables \(\Theta_{k_{0}}\) and \(\Theta_{k_{m}}\) presented in Tab. 1.
When \(\alpha>3\), the analytical calculations predict well-defined finite critical points for the \(F\) to \(P\) phase transition [12; 13; 17]. As a matter of comparison, the analytical results for random uncorrelated networks, for which the first and second moments of the degree distribution are known, introduced by the equation
\[T_{c}=\frac{2}{\ln\left(\frac{\left\langle k^{2}\right\rangle}{\left\langle k ^{2}\right\rangle-2\left\langle k\right\rangle}\right)}, \tag{9}\]
were also plotted in Fig. 4 (see dashed lines). As we can see in this figure, we have also plotted values for \(\alpha<3\); this is possible because, instead of the usual integral approximation of the sums in \(\left\langle k\right\rangle\) and \(\left\langle k^{2}\right\rangle\), which results in \(T_{c}\to 0\) and \(T_{c}\rightarrow\infty\) for \(\alpha=2\) and \(\alpha=3\), respectively, we have kept the sums, since \(k_{0}\) and \(k_{m}\) are restricted. The dashed lines in Figs. 4(a) and 4(b) are described by Eq. (9), and apparently we can observe the same signature in both the analytical result and the simulations. However, in the building process of our _restricted_ SFN, the more connected sites are the last to be chosen to add their connections, and their missing connections can only be attached to sites that have not been chosen yet, i.e., we implicitly have an increase of the degree-degree correlations [22; 23], since in the last steps of building the network, more connected sites can only connect with more connected sites. These correlations can be identified in the higher values of \(T_{c}\) in the simulations compared with the analytical results, because as we increase the difference between \(k_{0}\) and \(k_{m}\), we also increase the number of different degrees on the network and consequently the possibility of degree-degree correlations.
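Keeping the sums, the dashed analytical curves of Fig. 4 can be reproduced directly; the sketch below (with \(k_{B}=J=1\)) is our own numerical evaluation of Eq. (9):

```python
import numpy as np

def tc_uncorrelated(k0, km, alpha):
    """Critical temperature of Eq. (9), with <k> and <k^2> computed by
    keeping the sums over k0 <= k <= km (k_B = J = 1)."""
    k = np.arange(k0, km + 1, dtype=float)
    P = k ** -alpha
    P /= P.sum()
    k1 = np.sum(P * k)
    k2 = np.sum(P * k ** 2)
    return 2.0 / np.log(k2 / (k2 - 2.0 * k1))

for a in (1, 2, 3, 4, 5):
    print(a, tc_uncorrelated(4, 10, a))
```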
The degree-degree correlations were calculated here using the equation
\[r=\frac{M^{-1}\sum_{i}u_{i}v_{i}-\left[M^{-1}\sum_{i}\frac{1}{2}\left(u_{i}+v_{ i}\right)\right]^{2}}{M^{-1}\sum_{i}\frac{1}{2}\left(u_{i}^{2}+v_{i}^{2} \right)-\left[M^{-1}\sum_{i}\frac{1}{2}\left(u_{i}+v_{i}\right)\right]^{2}}, \tag{10}\]
for a network with \(M\) edges connecting pairs of vertices with respective degrees \(u_{i}\) and \(v_{i}\), as defined in Ref. [22]. These correlations are shown in Fig. 5(b) as a function of \(\alpha\), and, besides being independent of the network size, show a descending behavior as the number of more connected sites increases, and an ascending one when this number becomes small but still relevant. The peak of the correlation \(r\) was observed at \(\alpha=3.5\), with value \(r=0.268\). Therefore, the network presents assortative mixing, confirming that sites with high degree prefer to connect to other highly connected sites, which is the case for high values of \(\alpha\). It is interesting to mention that many social networks also have significant assortative mixing, which is absent in some complex network models, like random graphs or the growing network model (BA model [22]).
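Given the edge list of the network, Eq. (10) translates directly into code; in the sketch below, `degree[i]` is the degree of site \(i\) and `edges` the list of connected pairs (our naming):

```python
import numpy as np

def assortativity(edges, degree):
    """Degree-degree correlation r of Eq. (10) for an edge list,
    with u_i and v_i the degrees at the two ends of edge i."""
    u = np.array([degree[a] for a, b in edges], dtype=float)
    v = np.array([degree[b] for a, b in edges], dtype=float)
    mean_uv = np.mean(u * v)
    mean_half_sum = np.mean(0.5 * (u + v))
    mean_half_sq = np.mean(0.5 * (u ** 2 + v ** 2))
    return (mean_uv - mean_half_sum ** 2) / (mean_half_sq - mean_half_sum ** 2)
```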
Now that we have established the critical behavior of the system as a function of the degrees on the network, using the most distinct and distributed values of degree, \(k_{0}=4\) and \(k_{m}=10\), we can effectively verify the critical behavior of the _restricted_ SFN as a function of \(\alpha\). As presented in Fig. 5(a), this verification was first done by using some points of Fig. 4 from the simulation data, together with Eq. (9). Eq. (9) makes explicit the lower values of \(T_{c}\) where we have a high degree-degree correlation; for small \(r\), i.e., at lower values of \(\alpha\), the simulation results approach the analytical result of a random uncorrelated network. That approach to the analytical result is also a consequence of the small influence of the more connected sites, and is also observed for \(\alpha>3.5\). Because of the fixed values of \(k_{m}\) and \(k_{0}\), the minimum critical temperature is limited by the case \(k_{m}=k_{0}=4\) of Fig. 4(b) as we increase \(\alpha\), because we will never reach a network with coordination number 4 without losing the structure of degrees. The same is observable when decreasing \(\alpha\), because we have the limit of \(T_{c}\) in the case \(k_{m}=k_{0}=10\), present in Fig. 4(a) and unreachable in our _restricted_ SFN. For the critical points in Fig. 5, \(1\leq\alpha\leq 5\), the explicit estimates and respective errors are presented in Tab. 2.
Analytical results for random uncorrelated networks also extend to the critical exponents of the Ising model: the magnetization critical exponent is predicted to have a mean-field character, \(\beta=1/2\), for \(\alpha>5\), with logarithmic corrections
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\alpha\) & \(\Theta_{k_{0}}\) & \(\Theta_{k_{m}}\) & \(-\beta/\nu\) & \(\gamma/\nu\) & \(1/\nu\) \\ \hline \hline
1 & \(0.46\pm 0.04\) & \(0.55\pm 0.04\) & \(0.45\pm 0.04\) & \(1.04\pm 0.05\) & \(0.99\pm 0.05\) \\ \hline
2 & \(0.54\pm 0.02\) & \(0.47\pm 0.01\) & \(0.53\pm 0.04\) & \(0.92\pm 0.06\) & \(0.95\pm 0.04\) \\ \hline
3 & \(0.62\pm 0.04\) & \(0.38\pm 0.03\) & \(0.52\pm 0.04\) & \(0.91\pm 0.05\) & \(0.97\pm 0.06\) \\ \hline
4 & \(0.71\pm 0.05\) & \(0.30\pm 0.03\) & \(0.51\pm 0.06\) & \(0.90\pm 0.07\) & \(0.96\pm 0.05\) \\ \hline
5 & \(0.79\pm 0.08\) & \(0.21\pm 0.06\) & \(0.52\pm 0.07\) & \(0.90\pm 0.08\) & \(0.95\pm 0.04\) \\ \hline \end{tabular}
\end{table}
Table 1: Slopes of curves presented in Fig. 4(a), \(\Theta_{k_{0}}\), and Fig. 4(b), \(\Theta_{k_{m}}\), and the ratio between the critical exponents obtained by the method presented in Fig. 6.
at \(\alpha=5\), and \(\beta=1/(\alpha-3)\) for \(3<\alpha<5\). Interestingly, the magnetic susceptibility critical exponent, \(\gamma\), has a universal mean-field character for \(\alpha>3\), i.e., \(\gamma=1\) for all values of the exponent of the degree distribution for which a finite-order phase transition is predicted, \(\left\langle k^{2}\right\rangle<\infty\). In accordance with the scaling law relation \(\gamma/\nu=2-\eta\), a universal mean-field character is also expected for the correlation length exponent, \(\nu=1/2\), and the Fisher exponent, \(\eta=0\) [24]. With these results, here, only for \(1\leq\alpha\leq 5\) due to limitations of network size and computational time, we have made a direct comparison with these analytical results using our _restricted_ SFN. Therefore, we have calculated the critical exponents for the system, and this was done by two methods. One of these methods, using the FSS relations, is based on the slope of a linear fit of the thermodynamic quantities near the critical point, and the second one is based on the data collapse of the thermodynamic quantities in the form of scaling functions [18; 19; 20].
In the first method, we have used the data of the thermodynamic quantities near the critical point for different network sizes. For instance, in the log-log plots of Fig. 6, we have fitted the thermodynamic quantities as a function of the effective length of the system \(L\), and the slope gives us the ratios between the critical exponents present in Eqs. (6), (7) and (8). For the ratio \(-\beta/\nu\) present in Eq. (6), the curves of the magnetization for different network sizes and values of \(\alpha\) are presented in Fig. 6(a), where we show the linear fits. In the same way, but now for the ratio \(\gamma/\nu\), using Eq. (7), the linear fits of the magnetic susceptibility curves can be seen in Fig. 6(b). These two ratios alone do not give us all the critical exponents of the system. Therefore, we have also used the fourth-order Binder cumulant curves and their derivative, Eq. (8); the corresponding linear fits are presented in Fig. 6(c) and give us the ratio \(1/\nu\), the inverse of the correlation length exponent, from which we can then obtain the exponents \(\beta\) and \(\gamma\). The set of exponents obtained by this method can be found in Tab. 1. However, it is necessary to emphasize that, because we are dealing with a random network with a modifiable degree distribution, it is required to modify Eqs. (6), (7) and (8) to the mean-field scaling relations, in which we replace the effective length \(L\) by the total number of spins on the network, \(N\). This modification in the scaling relations can be easily implemented in the obtained critical exponents by dividing \(\nu\) by two, since \(N=L^{2}\).
In the second method, we can improve the values found for the critical exponents by collapsing the data points. Our main goal is to use the thermodynamic quantity curves for different network sizes to obtain the form of the scaling functions, as a function of \(L^{1/\nu}\epsilon\). This is possible because, in the proximity of the critical point, the scaling functions in Eqs. (6), (7) and (8) must be independent of network size if the correct critical exponents of the system are used, i.e., in the proximity of \(T_{c}\), using the correct exponents in the scaling relations, we obtain a collapsed curve in the form of the scaling function. In Fig. 7, we show some examples of the best collapsed curves obtained. In this figure,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\alpha\) & \(T_{c}\) & \(\beta\) & \(\gamma\) & \(\nu_{\text{m}_{\text{L}}}\) & \(\nu_{\chi_{\text{L}}}\) \\ \hline \hline
1 & \(6.228\pm 0.004\) & \(0.47\pm 0.03\) & \(1.03\pm 0.03\) & \(1.00\pm 0.06\) & \(1.00\pm 0.04\) \\ \hline
2 & \(5.734\pm 0.008\) & \(0.55\pm 0.02\) & \(0.95\pm 0.03\) & \(1.03\pm 0.03\) & \(1.05\pm 0.04\) \\ \hline
3 & \(5.225\pm 0.009\) & \(0.57\pm 0.03\) & \(0.91\pm 0.05\) & \(0.96\pm 0.05\) & \(1.08\pm 0.03\) \\ \hline
4 & \(4.710\pm 0.020\) & \(0.56\pm 0.03\) & \(0.90\pm 0.04\) & \(1.05\pm 0.05\) & \(1.10\pm 0.03\) \\ \hline
5 & \(4.210\pm 0.020\) & \(0.58\pm 0.03\) & \(0.92\pm 0.05\) & \(1.03\pm 0.04\) & \(1.10\pm 0.05\) \\ \hline \end{tabular}
\end{table}
Table 2: As a function of \(\alpha\), we show the critical temperature \(T_{c}\) present in the phase diagram of Fig. 5, and the critical exponents obtained by the data collapse method, where \(\nu_{\text{m}_{\text{L}}}\) and \(\nu_{\chi_{\text{L}}}\) are the correlation length exponents obtained from the magnetization and magnetic susceptibility collapsed curves, respectively.
Figure 6: Log-log plots of (a) the magnetization \(\mathrm{m_{L}}\), (b) the susceptibility \(\chi_{\mathrm{L}}\), and (c) the derivative of the cumulant \(\mathrm{U_{L}^{\prime}}\), at the critical point, as a function of the effective length \(L\). We have fixed the values of \(k_{0}=4\), \(k_{m}=10\), and some selected values of \(\alpha\), as indicated in the figure. As our interest is only in the slope, the linear coefficients are changed for better visualization of each of the curves. The slopes obtained here can be seen in Tab. 1.
what we have done is adjust the exponents in the isolated scaling functions, \(m_{0}\) and \(\chi_{0}\), as a function of \(L^{1/\nu}\epsilon\); when the thermodynamic quantity for different network sizes best collapses into a single curve, those exponents are considered the critical exponents of the system. Fig. 7(a) shows, for \(\alpha=1\), the collapsed curves of the magnetization data, Eq. (6), which allowed us to adjust and obtain the exponents \(\beta\) and \(\nu\); besides that, in Fig. 7(b), also for \(\alpha=1\), the magnetic susceptibility data, Eq. (7), enabled us to obtain the exponents \(\gamma\) and \(\nu\). The estimated critical exponents for \(1\leq\alpha\leq 5\) are presented in Tab. 2; here too we have to take into account the modification of the scaling functions for the mean-field character, and the real value of the exponent \(\nu\) is only obtained by dividing the results by two.
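The rescaling behind this collapse can be sketched as follows (our naming; for the mean-field relations discussed above, \(L\) would be replaced by \(N\) and \(\nu\) adjusted accordingly):

```python
import numpy as np

def collapse(T, mL, L, Tc, beta, nu):
    """Rescale one magnetization curve for the collapse of Eq. (6):
    returns (x, y) = (L^{1/nu} * eps, L^{beta/nu} * m_L), so curves for
    all sizes fall onto the scaling function m_0 when beta, nu are right."""
    eps = (np.asarray(T) - Tc) / Tc
    return L ** (1.0 / nu) * eps, L ** (beta / nu) * np.asarray(mL)
```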
For a better comprehension of the critical exponents obtained for the _restricted_ SFN, the average of their values, which is equivalent for both methods used in this work, is plotted in Fig. 8 as a function of \(\alpha\). At lower values of \(\alpha\), we tend to a system with all degrees having the same number of sites, and the mean-field critical behavior is present; when \(\alpha\) increases, the number of more connected sites decreases, and a slight deviation from this behavior is observed. With these observations, we can say that degree-degree correlations also cause a deviation from the expected mean-field behavior on random networks with convergent moments of the degree distribution [24; 25; 26].
## V Conclusions
Here, we have employed Monte Carlo simulations in the study of the thermodynamic quantities and the critical behavior of the Ising model on a _restricted_ SFN. When we fix the maximum \(k_{m}\) and minimum \(k_{0}\) degrees for all network sizes, as we did in our _restricted_ SFN, we always have convergent moments of the degree distribution \(P(k)\). We have used a power-law degree distribution, and, as the analytical results predict, convergent second and fourth moments of an arbitrary degree distribution lead to a finite-order phase transition [12; 13; 17; 24]. We have obtained the critical points of the second-order phase transitions and built the phase diagram of temperature \(T\) as
Figure 8: Static critical exponents \(\beta\), \(\gamma\), and \(\nu\) as functions of the exponent \(\alpha\).
Figure 7: Data collapse near the critical point for the magnetization \(\mathrm{m_{L}}\) (a) and susceptibility \(\chi_{\mathrm{L}}\) (b) for different network sizes, as shown in the figure. Here, the minimum and maximum degrees are fixed, \(k_{0}=4\), \(k_{m}=10\), and \(\alpha=1\). The log-log plots were used to obtain the slope \(\Theta\) of the asymptotic behavior of the scaling functions, i.e., far from \(\epsilon=0\). The straight dashed lines represent the asymptotic behavior of the scaling functions, Eqs. (6) and (7).
a function of \(k_{0}\) and \(k_{m}\), in which a random uncorrelated network is obtained when \(k_{0}=k_{m}\). The critical points are therefore in agreement with the analytical calculations, but increasing the difference between \(k_{0}\) and \(k_{m}\) also increases the degree-degree correlations and causes a deviation from those calculations. The phase diagram of temperature \(T\) as a function of \(\alpha\) was also built, where we always have a finite critical temperature and a decrease of the critical point as we increase \(\alpha\), since we also decrease the number of more connected sites on the network. With these critical points, we estimated the critical exponents for the system as a function of \(\alpha\), and, different from what is predicted by the analytical results for a random uncorrelated SFN, here, at lower values of \(\alpha\), the increase in the number of more connected sites leads to a mean-field critical behavior. Otherwise, when \(\alpha\) increases, a mean-field critical behavior is also observed, but, as we are dealing with correlated degrees on the network, the decrease in the number of more connected sites causes a slight deviation in those exponents. This mean-field behavior was predicted for networks with convergent moments of the degree distribution and has been found in a diversity of complex networks [24; 25; 26]. In our work, it was observed for the Ising model on a network with a power-law degree distribution, even for unexpected values of \(\alpha\).
|
2306.14668 | A Dual-Band 28/38-GHz Power Amplifier With Inter-Band Suppression in
22-nm FD-SOI CMOS for Multi-Standard mm-Wave 5G Communications | In this article, we present a dual-band 28/38-GHz power amplifier (PA) with
inter-band suppression for millimeter-wave 5G communications. The dual-band
operation is achieved using a center-tapped transformer network with an extra
resonator which can provide optimum load impedance of the transistor in the two
bands and synthesize a short-circuit between the two bands. This feature
suppresses the PA signal emissions in the inter band, commonly allocated for
other applications. A design procedure is developed for the proposed matching
network including physical limits on the quality factor and the coupling
coefficient of the transformer. The PA is designed using a 22-nm fully-depleted
silicon-on-insulator (FD-SOI) CMOS process. The transistor stacking and a
four-path transformer parallel-series power combining techniques are used to
achieve high output power using the low-voltage process. The PA achieves
simulated performance of 22.6/22.0 dBm saturated output power, 19.8/20.0 dBm
output power at 1-dB gain compression, and 33/32 % maximum power-added
efficiency (PAE) at 28/38 GHz. The inter-band suppression is 6 dB at 33 GHz. | Abbas Nasri, Alireza Yousefi, Reza Nikandish | 2023-06-26T13:05:00Z | http://arxiv.org/abs/2306.14668v1 | A Dual-Band 28/38-GHz Power Amplifier With Inter-Band Suppression in 22-nm FD-SOI CMOS for Multi-Standard mm-Wave 5G Communications
###### Abstract
In this article, we present a dual-band 28/38-GHz power amplifier (PA) with inter-band suppression for millimeter-wave 5G communications. The dual-band operation is achieved using a center-tapped transformer network with an extra resonator which can provide optimum load impedance of the transistor in the two bands and synthesize a short-circuit between the two bands. This feature suppresses the PA signal emissions in the inter band, commonly allocated for other applications. A design procedure is developed for the proposed matching network including physical limits on the quality factor and the coupling coefficient of the transformer. The PA is designed using a 22-nm fully-depleted silicon-on-insulator (FD-SOI) CMOS process. The transistor stacking and a four-path transformer parallel-series power combining techniques are used to achieve high output power using the low-voltage process. The PA achieves simulated performance of 22.6/22.0 dBm saturated output power, 19.8/20.0 dBm output power at 1-dB gain compression, and 33/32 % maximum power-added efficiency (PAE) at 28/38 GHz. The inter-band suppression is 6 dB at 33 GHz.
CMOS, dual-band, fifth generation (5G), fully-depleted silicon-on-insulator (FD-SOI), millimeter-wave, multi-band, power amplifier (PA), transformer.
## I Introduction
Millimeter-wave (mm-wave) frequency bands are of paramount importance in the fifth generation (5G) and the future sixth generation (6G) networks where the broad spectrum availability can open up opportunities for new high-capacity communication applications [1, 2]. The development of innovative system and circuit architectures is essential to leverage exciting potentials of the mm-wave spectrum. A number of mm-wave bands are allocated for mobile communications, e.g., 28 GHz, 38 GHz, 60 GHz, 140 GHz, and several radio transceivers [3, 4, 5, 6, 7] and circuit components [8, 9, 10, 11, 12, 13, 14, 15] operating in these bands are presented. There is an increasing quest for the concurrent multi-band circuits to develop universal and multi-standard products in smaller chip area and with lower cost [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27].
There are three main approaches to realize multi-band mm-wave circuits. In the most straightforward design method, multiple independent circuits, each operating in one of the frequency bands, are used to realize the multi-band circuit. The operating band is selected by switches integrated in the circuits. This approach offers higher reliability and robustness and, as a result, is commonly used in commercial products. However, this leads to larger chip area, higher fabrication cost, and lower performance due to the high insertion loss and low isolation of the switches in mm-wave bands.
Another approach to multi-band operation is through broadband circuits covering multiple bands [24, 25, 26, 27]. Examples include a 29-57-GHz class-AB power amplifier (PA) using a fourth-order matching network in 28-nm CMOS [24], a 0.4-31.6-GHz distributed PA in 22-nm fully-depleted silicon-on-insulator (FD-SOI) CMOS [25], and a frequency-reconfigurable dual-path 40-65-GHz PA in 130-nm SiGe BiCMOS [26]. This approach offers a lower sensitivity to modeling inaccuracies and process variations, but suffers from two major issues. First, a broadband circuit usually requires complicated impedance matching networks whose high insertion loss, due to the low quality factor of on-chip passive components, can degrade the circuit performance. Second, the broadband PAs can transmit spurious signals in undesired communication bands, i.e., out-of-band emissions, while the broadband low-noise amplifiers (LNAs) can receive blocker signals and noise present in unintended bands. It is, therefore, essential to use additional filtering with such broadband circuits, which leads to extra loss and higher system cost.
In the third approach, the multi-band circuits are realized using dedicated multi-resonance circuit architectures [16, 17, 18, 19, 20, 21]. The design of these multi-band circuits requires special attention to the circuit functionality to develop impedance matching networks which can provide the required conditions at multiple frequencies. This approach has received increased interests recently, where a number of developments include a dual-band 28/38-GHz PA in 250-nm SiGe BiCMOS [17], a tri-band 28/37/39-GHz Doherty PA with reconfigurable matching networks in 130-nm SiGe BiCMOS [18], a dual-band 28/38-GHz PA in 22-nm FD-SOI CMOS [20], and a dual-band 27/33-GHz amplifier with transformer-feedback neutralization of the gate-drain capacitance in 100-nm GaAs pHEMT [23]. The main challenge of this approach is the development of multi-band circuits which can provide a good performance in the presence of practical limitations of the process, e.g., low quality factor of passive elements, parasitic capacitances, modeling inaccuracies, and process variations.
In this article, we present a dual-band 28/38-GHz PA using a transformer-based impedance matching network. The
transformer network, which provides the impedance matching and differential-to-single ended transformation, is _center-tapped with an extra resonator_. This resonator enables the transformer network to operate at two frequency bands and provide the _inter-band suppression_. This attenuates the PA out-of-band emissions in the frequency bands allocated for other communication applications.
The article is organized as follows. In Section II, we present the dual-band matching network architecture, principles of operation, and a detailed analysis of the impacts of circuit imperfections on its performance. In Section III, we discuss the circuit design of a dual-band PA using the developed transformer network realized in 22-nm FD-SOI CMOS process. The post-layout simulation results of the PA are presented in Section IV and the conclusions are discussed in Section V.
## II Dual-Band Transformer Network
### _Proposed Network_
The proposed transformer network for dual-band impedance matching and inter-band suppression is shown in Fig. 1(a). The secondary winding of the transformer is center-tapped with a reactive resonator network. We will discuss the selection criteria for this resonator circuit later. The source impedance of the transformer network is modeled by the optimum load resistance of the transistor \(R_{opt}\) in parallel with a capacitance \(C_{p}\) which should be absorbed into the transformer network. The capacitance \(C_{p}\) comprises the output-referred parasitic capacitance of the transistor and the primary parasitic capacitance of the transformer. The load impedance of the network includes the load resistance \(R_{L}\) in parallel with a capacitance \(C_{s}\), which comprises secondary parasitic capacitance of the transformer and parasitic capacitance of the output signal pad.
The transformer turn ratio is selected such that it transforms the load impedance \(R_{L}\) to the optimum resistance \(R_{opt}\), while the transformer inductances absorb the parasitic capacitances \(C_{p}\) and \(C_{s}\). These conditions should be concurrently satisfied at the lower and upper frequencies, \(\omega_{L}\) and \(\omega_{H}\), to realize a dual-band impedance matching network
\[Z_{in}(j\omega_{L})=Z_{in}(j\omega_{H})=R_{opt}. \tag{1}\]
It should be noted that the resistive part of the input impedance can be set to the optimum resistance by the proper selection of the transformer turns ratio as
\[n\approx\sqrt{\frac{R_{opt}}{R_{L}}}. \tag{2}\]
However, it is challenging to satisfy the reactive part conditions. _A conventional transformer cannot absorb the parasitic capacitances in two frequency bands_. A double-tuned transformer network which can provide two peaks in its frequency response is useful as interstage network of broadband amplifiers and resonator of oscillators [24, 28, 29]. This network has disadvantages of loss, imbalanced magnitudes in the two peaks, and tuning complexity as the ratio of the two peak frequencies is mainly controlled by the transformer coupling coefficient. Therefore, we propose the transformer network of Fig. 1(a) with the center-tap impedance of \(Z_{T}(j\omega)\) which should provide three conditions:
1. At the lower frequency \(\omega_{L}\), the resonator should operate as an open-circuit. The transformer's primary and secondary inductances are \(L_{p}=L_{p1}+L_{p2}\) and \(L_{s}=L_{s1}+L_{s2}\), and resonate with the parasitic capacitances.
2. At the higher frequency \(\omega_{H}\), the resonator should be equivalent with a short-circuit to effectively decrease the inductances of the transformer to \(L_{p1}\) and \(L_{s1}\). This enables the transformer to absorb the parasitic capacitances at the higher frequency.
3. At the inter-band frequency \(\omega_{SC}\), it should provide a proper reactance which, along with the transformer, creates a short-circuit at _output port of the transformer_. This realizes the expected inter-band suppression.
The discussed conditions can be summarized as follows.
\[\begin{cases}Z_{T}(j\omega_{L})=\infty\\ Z_{T}(j\omega_{H})=0\\ Z_{out}(j\omega_{SC})=0\end{cases} \tag{3}\]
The first two conditions are only dependent on the resonator network's elements, while the third condition is also related to the transformer's elements. We develop a design approach for the dual-band matching network.
In the dual-band transformer circuit of Fig. 1(a), several circuit structures can be envisioned as the resonator network.
Fig. 1: (a) Dual-band transformer network with center-tap resonator. (b) Four possible three-element circuits as the resonator.
We consider the four resonator circuits shown in Fig. 1(b) as the possible realizable circuits using three reactive elements. These circuits have a zero and a pole in their impedance frequency response. The requirements of the resonator network given in (3) indicate that the circuit should ideally have a pole at \(\omega_{L}\) and a zero at \(\omega_{H}\). This condition suggests using the network I or II with \(\omega_{z}>\omega_{p}\). However, these networks include two inductors, which can lead to higher loss and larger chip area compared to the networks III and IV with only one inductor. We can instead use the network III or IV, whose pole is set at \(\omega_{L}\) while the impedance at \(\omega_{H}\) is practically small enough that it can be approximated by zero. This condition can be considered as \(|Z_{T}(j\omega_{H})|\ll R_{L}\) or, for the sake of future derivations, as follows
\[|Z_{T}(j\omega_{H})|=\delta R_{L}, \tag{4}\]
where \(\delta\) is a constant which can be set in the typical range of 0.01-0.1 depending on the required accuracy and circuit element values. For the resonator network, we select the network III and derive the design criteria.
### _Design Approach_
The impedance of the resonator network III in Fig. 1(b) can be derived as
\[Z_{T}(j\omega)=\frac{1-\omega^{2}L_{ts}(C_{ts}+C_{ts1})}{1-\omega^{2}L_{ts}C_{ ts}}\frac{1}{j\omega C_{ts1}}. \tag{5}\]
The resonator network should have a pole at \(\omega_{L}\), leading to
\[\omega_{L}^{2}L_{ts}C_{ts}=1. \tag{6}\]
The resonator network should meet the condition (4) at \(\omega_{H}\) which using (5) and (6) can be derived as
\[\frac{\left(1+\frac{C_{ts1}}{C_{ts}}\right)\omega_{H}^{2}-\omega_{L}^{2}}{ \omega_{H}^{2}-\omega_{L}^{2}}\frac{1}{\omega_{H}C_{ts1}R_{L}}=\delta. \tag{7}\]
This can be solved for \(C_{ts1}\) as follows
\[C_{ts1}=\frac{C_{ts}}{\delta\omega_{H}C_{ts}R_{L}-\alpha}, \tag{8}\]
where \(\alpha=\omega_{H}^{2}/(\omega_{H}^{2}-\omega_{L}^{2})\). This indicates that the parameter \(\delta\) should satisfy the condition \(\delta>\alpha/(\omega_{H}C_{ts}R_{L})\) and cannot be selected arbitrarily small. Therefore, the minimum theoretical impedance of the resonator in (4) is derived as
\[|Z_{T}(j\omega_{H})|_{\min}=\frac{\omega_{H}^{2}}{\omega_{H}^{2}-\omega_{L}^{2 }}\frac{1}{\omega_{H}C_{ts}}. \tag{9}\]
The short-circuit frequency of the transformer network should meet the condition \(\omega_{L}<\omega_{SC}<\omega_{H}\). We place it at the logarithmic midpoint between the lower and higher frequencies, leading to
\[\frac{\omega_{SC}}{\omega_{L}}=\frac{\omega_{H}}{\omega_{SC}}\Rightarrow \omega_{SC}=\sqrt{\omega_{L}\omega_{H}}. \tag{10}\]
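For the 28/38-GHz design considered later in this paper, (10) places the notch at

\[f_{SC}=\sqrt{f_{L}f_{H}}=\sqrt{28\times 38}\;\mathrm{GHz}\approx 32.6\;\mathrm{GHz},\]

consistent with the \(f_{SC}\approx\) 33 GHz value quoted below.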
The impedance of the resonator network at the short-circuit frequency can be derived using (5), (6), and (10) as
\[Z_{T}(j\omega_{SC})=\frac{\left(1+\frac{C_{ts1}}{C_{ts}}\right)\omega_{H}- \omega_{L}}{\omega_{H}-\omega_{L}}\frac{1}{j\omega_{SC}C_{ts1}}, \tag{11}\]
which is equivalent to a capacitance given by
\[C_{sc}=\frac{(\omega_{H}-\omega_{L})C_{ts1}}{\left(1+\frac{C_{ts1}}{C_{ts}} \right)\omega_{H}-\omega_{L}}. \tag{12}\]
This capacitance can generate a resonance with the transformer to produce a short-circuit under certain conditions. The output impedance of the transformer network should be derived to apply the inter-band suppression condition, \(Z_{out}(j\omega_{SC})=0\). Using the circuit of Fig. 1 with the resonator network III, assuming \(L_{p1}=L_{p2}=\frac{1}{2}L_{p}\) and \(L_{s1}=L_{s2}=\frac{1}{2}L_{s}\), and setting \(Z_{out}(j\omega_{SC})=0\), it can be shown that the following condition should be satisfied
\[Z_{T}(j\omega_{SC})+\frac{1}{4}j\omega_{SC}L_{s}=0. \tag{13}\]
Using (11) and (12), this condition can be derived as
\[\frac{1}{4}\omega_{SC}^{2}L_{s}C_{sc}=1. \tag{14}\]
The resonator network can be designed using the conditions given by (6), (9), and (14). There are extra degrees of freedom in this system of equations which provide more design flexibility. The transformer network can be designed using the approximate input-referred equivalent circuit of the transformer (assuming \(K_{m}\approx 1\)) comprising a parallel RLC circuit with the elements
\[L_{in}\approx L_{p} \tag{15}\]
\[C_{in}\approx C_{p}+\frac{1}{n^{2}}C_{s} \tag{16}\]
\[R_{in}\approx n^{2}R_{L}. \tag{17}\]
The network is designed such that \(L_{in}\) and \(C_{in}\) resonate at the lower frequency \(\omega_{L}\), while \(R_{in}=R_{opt}\). The resonator network effectively reduces the inductances to \(L_{p1}\) and \(L_{s1}\) at the higher frequency \(\omega_{H}\), enabling the network to resonate at both \(\omega_{L}\) and \(\omega_{H}\). An important advantage of this network is that the transformer coupling coefficient is not a design parameter, so the transformer layout can be optimized for maximum coupling. This is in contrast to the conventional double-tuned transformer network, which usually must be designed with a low coupling coefficient to achieve a broadband response at the cost of lower gain [24, 28, 29].
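As a concrete illustration of this procedure, the following sketch chains conditions (6), (8), (10), (12), and (14) for the 28/38-GHz targets. The values of \(\delta\) and \(C_{ts}\) are free, illustrative choices rather than the actual design values, and a lossless 1:1 transformer with \(R_{L}=50\;\Omega\) is assumed.

```python
import numpy as np

f_L, f_H = 28e9, 38e9                        # band centers (Hz)
w_L, w_H = 2 * np.pi * f_L, 2 * np.pi * f_H
w_SC = np.sqrt(w_L * w_H)                    # notch frequency, condition (10)

R_L = 50.0     # load resistance; 1:1 transformer assumed (n = 1)
delta = 0.1    # |Z_T(j*w_H)| = delta * R_L, free choice in the 0.01-0.1 range
C_ts = 2e-12   # free, illustrative starting choice

# Condition (6): resonator pole at w_L.
L_ts = 1.0 / (w_L**2 * C_ts)

# Equation (8): series capacitor from the approximate-zero requirement (4).
alpha = w_H**2 / (w_H**2 - w_L**2)
assert delta * w_H * C_ts * R_L > alpha, "delta is below the minimum implied by (9)"
C_ts1 = C_ts / (delta * w_H * C_ts * R_L - alpha)

# Equation (12): equivalent resonator capacitance at w_SC.
C_sc = (w_H - w_L) * C_ts1 / ((1 + C_ts1 / C_ts) * w_H - w_L)

# Condition (14): secondary inductance that resonates with C_sc at w_SC.
L_s = 4.0 / (w_SC**2 * C_sc)
L_p = L_s  # n = 1, since L_p = n^2 * L_s

print(f"f_SC = {w_SC / (2 * np.pi) / 1e9:.1f} GHz")
print(f"L_ts = {L_ts * 1e12:.1f} pH, C_ts1 = {C_ts1 * 1e12:.1f} pF, L_s = {L_s * 1e12:.0f} pH")
```

With these particular choices the secondary inductance comes out near 190 pH, close to the roughly 200 pH extracted for the fabricated transformer in Section III; other valid \((\delta, C_{ts})\) pairs lead to different element values, reflecting the extra degrees of freedom noted above.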
We use the transducer power gain as the performance metric for the transformer network such that the impact of losses can also be evaluated [30]. This is defined as the ratio of the output power delivered to the load to the available source power and can be derived as
\[G_{T}(\omega)=\frac{4R_{opt}R_{L}|Z_{21}|^{2}}{\left|(Z_{11}+R_{opt})(Z_{22}+R_{L})-Z_{12}Z_{21}\right|^{2}}, \tag{18}\]
where \(Z_{ij}\) denotes the impedance parameter of the transformer two-port network.
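A minimal numerical sketch of (18) for a lossy transformer two-port in the mutual-inductance model; the center-tap resonator is omitted for brevity, and all element values are illustrative rather than taken from the design:

```python
import numpy as np

def transducer_gain(w, Lp, Ls, Km, Q, R_opt, R_L):
    """Evaluate (18) for a simple transformer two-port.

    Loss is modeled as series resistances r = w*L/Q (Sections II-C and II-D);
    Z11/Z22 are the primary/secondary self-impedances and Z12 = Z21 = jwM.
    """
    M = Km * np.sqrt(Lp * Ls)          # mutual inductance
    Z11 = w * Lp / Q + 1j * w * Lp
    Z22 = w * Ls / Q + 1j * w * Ls
    Z21 = 1j * w * M
    num = 4 * R_opt * R_L * np.abs(Z21) ** 2
    den = np.abs((Z11 + R_opt) * (Z22 + R_L) - Z21 * Z21) ** 2
    return num / den

f = np.linspace(20e9, 45e9, 501)
GT = transducer_gain(2 * np.pi * f, 200e-12, 200e-12, Km=0.8, Q=22, R_opt=50.0, R_L=50.0)
print(f"peak G_T = {10 * np.log10(GT.max()):.1f} dB")  # the negative of the insertion loss
```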
In Fig. 2, \(G_{T}(\omega)\) of the dual-band transformer network is shown. The network is realized using ideal lossless elements and perfect transformer coupling (\(K_{m}=1\)). The network is designed for the lower frequency of 28 GHz, the upper frequency of 38 GHz, and the short-circuit frequency of \(f_{SC}=\sqrt{f_{L}f_{H}}\approx\) 33 GHz.
In Fig. 3, the impact of changing the capacitance \(C_{ts}\) on the frequency response is shown. This indicates that the short-circuit frequency can be controlled by using a digitally controlled or switched \(C_{ts}\). The network has no notch for \(C_{ts}=0\), while the notch is shifted toward the lower frequencies by increasing \(C_{ts}\).
### _Impact of Transformer Loss_
The transformer loss arises from the limited quality factor of inductors, due to metal and substrate losses, and the imperfect transformer coupling. It is assumed that the primary and secondary inductors of the transformer have the quality factor of \(Q_{\mathrm{XFMR}}\). This can be modeled by resistances \(r_{p}=\omega L_{p}/Q_{\mathrm{XFMR}}\) and \(r_{s}=\omega L_{s}/Q_{\mathrm{XFMR}}\) in series with \(L_{p}\) and \(L_{s}\), respectively. For the inductor of the resonator network with a quality factor of \(Q_{T}\), the loss can be similarly modeled as a resistance \(r_{ts}=\omega L_{ts}/Q_{T}\) in series with the inductor \(L_{ts}\).
In Fig. 4, the impact of the transformer quality factor on the transfer function is illustrated. The insertion loss in the two pass bands increases as \(Q_{\mathrm{XFMR}}\) is lowered. For high \(Q_{\mathrm{XFMR}}\), the insertion loss is limited by the resonator network quality factor \(Q_{T}\), which is assumed to be 30 in this simulation. A transformer coupling coefficient of 1 is used to focus on the effects of the transformer quality factor. As will be discussed in Section III, the transformer's physical structure is a hexagonal spiral to comply with the design rules of the process, which limits its quality factor to about 20-25.
In Fig. 5, the transfer function versus frequency is shown for different transformer coupling coefficients \(K_{m}\). The insertion loss increases as \(K_{m}\) is reduced from 0.8 to 0.5, and the impact is more significant in the upper band.
### _Impact of Resonator Loss_
The resonator loss is mainly caused by the limited quality factor of the inductor \(L_{ts}\). This is modeled by a resistance \(r_{ts}=\omega L_{ts}/Q_{T}\) in series with the inductor \(L_{ts}\) in network III of Fig. 1(b). The effects of \(Q_{T}\) on the transfer function of the transformer network are evaluated in Fig. 6. The resonator quality factor has a significant impact on the suppression at the notch frequency. For example, to achieve 10 dB suppression, the quality factor of the resonator inductor should be at least 40, which is difficult to reach in this low-resistivity substrate process. Fortunately, the resonator inductor can be implemented as a straight transmission line, unlike the hexagonal spiral transformer, and therefore features a higher quality factor than the transformer.

Fig. 2: Transfer function of the dual-band 28/38-GHz transformer network realized using ideal elements.

Fig. 3: Transfer function of the dual-band transformer with swept \(C_{ts}\). The notch frequency can be controlled by \(C_{ts}\).

Fig. 4: Transfer function of the dual-band transformer for different values of the transformer quality factor (assuming \(K_{m}=1\) and \(Q_{T}=30\)).

Fig. 5: Transfer function of the dual-band transformer with swept coupling coefficient \(K_{m}\) (assuming \(Q_{T}=30\)).
## III Power Amplifier Circuit Design
The dual-band PA architecture and circuit details are shown in Fig. 7. The PA comprises four power cells (PA1-4) combined in a parallel-series configuration. The power cells are matched to 50 \(\Omega\) using the dual-band transformer network center-tapped with the resonator (\(\mathrm{XFMR_{o}}\)). The parallel combining of output signals from two transformers \(\mathrm{XFMR_{o}}\) also transforms the impedance level to 25 \(\Omega\), which is then returned to 50 \(\Omega\) through series combining by the transformer \(\mathrm{XFMR_{out}}\). The input power is divided between the power cells using the transformers \(\mathrm{XFMR_{i}}\) and the transmission lines \(\mathrm{TL_{in}}\). The power cells are realized using the differential double-stacked structure shown in Fig. 7. We now discuss the details of the PA circuit design.
### _FD-SOI CMOS Process_
The PA is implemented in the GlobalFoundries 22-nm fully-depleted silicon-on-insulator (FD-SOI) CMOS process (22FDX). The process metal stack is shown in Fig. 8 and includes one thick aluminum top metal layer plus two thick and seven thin copper metal layers. The substrate has a low resistivity (7 \(\Omega\cdot\mathrm{cm}\)), unlike conventional SOI processes. The transformers in the PA circuit are implemented on the two top copper layers (IA and OI), which have a lower resistivity than the top aluminum layer (LB). The circuit capacitors are realized as metal-oxide-metal (MOM) capacitors using the stacked thin metal layers (M1-2 and C1-5).
The process offers multiple types of transistors with different threshold and breakdown voltages. Super-low threshold voltage (SLVT) transistors are used in this design to benefit from their high transconductance \(g_{m}\), unity current gain frequency \(f_{T}\), and unity unilateral power gain frequency \(f_{\mathrm{max}}\). The SLVT NMOS transistor features a 0.25 V threshold voltage, 0.8 V nominal supply voltage, \(f_{T}\) of 350 GHz, and \(f_{max}\) of 370 GHz. The process also provides a body-bias capability to adjust the threshold voltage of transistors, which has not been used in this design.
### _Stacked Power Cell_
The circuit schematic of the power cells is shown in Fig. 7. The low breakdown voltage of the thin-oxide transistors limits their supply voltage and output power capability. Therefore, two transistors are stacked to increase the maximum supply voltage from the nominal 0.8 V to 1.6 V. In the double-stacked amplifier, the gate node of the top transistors \(M_{3,4}\) is biased through a large resistor \(R_{1}\) operating roughly as an open-circuit for the RF signal, unlike the cascode amplifier in which the gate of the top transistors is RF grounded. An accurately designed capacitor \(C_{1}\) is included at the gate of the top transistors to control the gate-source voltage swing of these devices. This results in higher output power and efficiency in the stacked amplifier. The input transistors \(M_{1,2}\) are biased in class AB to improve their gain and power-added efficiency (PAE).
The power cell is realized as a differential amplifier, shown in Fig. 9(a), where the cross-connected neutralization capacitors \(C_{neut}\) are used to cancel the gate-drain capacitors and achieve unconditional stability. Furthermore, an inductor \(L_{s}\) is placed in the common node of the input transistors pair to improve the common-mode stability. This inductor has no effect in the differential mode, as shown in Fig. 9(b), while it reduces gain of the transistors in the common mode, as can be inferred from Fig. 9(c). Layout of the power cell is shown in Fig. 10.
### _Size of Transistors_
The gate length of all transistors is set at the minimum length of the process, 20 nm, to achieve the best RF performance. The width of the transistors is selected based on the output power (14 dBm at 1-dB gain compression) and optimum load resistance (close to 50 \(\Omega\)) requirements for each power cell. The width is determined using load-pull and source-pull simulations performed on the power cell. The load-pull simulation results at the two frequencies 28 GHz and 38 GHz are shown in Fig. 11. The optimum output power is 14.8/14.4 dBm at 28/38 GHz.
### _Output Power Combiner_
The output power combiner, which also transforms the load resistance to the optimum load resistance of the transistors, is realized using four dual-band transformer networks followed by an output combining transformer. The real part of the optimum load impedance of the power cells is about 50 \(\Omega\). Thus, the dual-band transformer \(\mathrm{XFMR_{o}}\) has a 1:1 turn ratio. The parallel combining of two power cells transforms the resistance level to 25 \(\Omega\), which is then converted back to 50 \(\Omega\) using the series combining. This allows the output transformer \(\mathrm{XFMR_{out}}\) to also be realized as a 1:1 transformer. The result is a parallel-series power combiner which can ideally convert the 14 dBm output power of each power cell to 20 dBm total output power. The output network layout is shown in Fig. 13.

Fig. 6: Transfer function of the dual-band transformer network for different values of the inductor quality factor \(Q_{T}\) in the resonator network.
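The ideal power budget of this four-way parallel-series combiner follows directly from the combining:

\[P_{out}=P_{cell}+10\log_{10}4\approx 14\;\mathrm{dBm}+6\;\mathrm{dB}=20\;\mathrm{dBm}.\]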
The power combiner performance is simulated using the EMX planar 3D electromagnetic (EM) simulator. The extracted inductance and quality factor of the dual-band transformers are shown in Fig. 12. The inductance is about 200 pH and the quality factor is 22-25. Simulated scattering parameters of the dual-band transformer network are shown in Fig. 14. The insertion loss is about 1 dB, the inter-band suppression is 6 dB, and the input/output return losses are higher than 15 dB.

Fig. 7: Schematic of the dual-band power amplifier.

Fig. 8: The 22-nm FD-SOI CMOS process structure.

Fig. 9: (a) Power cell differential amplifier with stability network, (b) differential-mode circuit, (c) common-mode circuit.

Fig. 10: Layout of the power cell.

Fig. 11: The load-pull simulation result for output power at 1-dB gain compression of the power cell: (a) 28 GHz, (b) 38 GHz. The optimum output power is 14.8/14.4 dBm at 28/38 GHz.
### _Input Power Splitter_
The input power splitter, which also serves as the input impedance matching network, is shown in Fig. 15. This network comprises two single-to-differential transformers and two transmission lines. The input impedance of the transformers should be 100 \(\Omega\) to provide a 50-\(\Omega\) input impedance with the two-way power splitter. The input transformers match the 100 \(\Omega\) resistance to the optimum source impedance of the transistors.
The input network is designed with a broad bandwidth that covers the lower and upper bands. Scattering parameters of the input network are shown in Fig. 16, which indicates an insertion loss lower than 1.2 dB in the target bands.
## IV Power Amplifier Simulation Results
The layout of the PA is shown in Fig. 17, where the chip measures 0.5 mm \(\times\) 0.9 mm. The PA is biased at the supply voltage of \(\mathrm{V_{DD}}=1.6\,\mathrm{V}\), the gate bias voltage of \(\mathrm{V_{G}}=0.35\,\mathrm{V}\) for the input transistors, and \(\mathrm{V_{stack}}=1.2\,\mathrm{V}\) for the stack devices. The PA consumes \(80\,\mathrm{mA}\) drain current in the quiescent condition. The PA small-signal and large-signal simulations are performed using the 22-nm FD-SOI process design kit (PDK) for active devices and full electromagnetic simulations of the passive devices.
### _Small-Signal Simulations_
Simulated scattering parameters of the PA are shown in Fig. 18. The small-signal gain is 16.0 dB at 28 GHz and 15.5 dB at 38 GHz. The inter-band suppression is around 6 dB, limited by the low quality factors of the transformers and the resonator. Higher inter-band suppression is expected in a process with a higher passives quality factor. The input return loss is higher than 10 dB in both bands, which indicates good input impedance matching. The output return loss is 13 dB in the lower and 8 dB in the higher band. Furthermore, the stability K factor (Rollett criterion), shown in Fig. 19, indicates that the PA is unconditionally stable.

Fig. 12: Extracted (a) inductance and (b) quality factor of the output transformer \(\mathrm{XFMR_{o}}\).

Fig. 13: Layout of the output power combiner and impedance matching network.

Fig. 14: Simulated scattering parameters of the dual-band transformer with the center-tap resonator network: (a) \(S_{11}\) and \(S_{22}\), (b) \(S_{21}\).

Fig. 15: Layout of the input power splitter and impedance matching network.

Fig. 16: Simulated scattering parameters of the input network. Insertion loss is lower than 1.2 dB in the target band.
### _Large-Signal Simulations_
Simulation results for the large-signal performance of the PA are shown in Fig. 20. The saturated output power \(\mathrm{P_{sat}}\) is 22.6 dBm at 28 GHz and 22.0 dBm at 38 GHz. The output-input characteristics indicate that the output power saturates at around 4-dB gain compression. The output power at 1-dB gain compression \(\mathrm{P_{1dB}}\) is 19.8/20.0 dBm at 28/38 GHz, about 2-3 dB lower than the saturated power. Furthermore, the maximum PAE is 33/32% at 28/38 GHz.
### _Performance Comparison_
A performance comparison of the designed dual-band PA with state-of-the-art mm-wave CMOS PAs operating in similar frequency bands is presented in Table I. To make a fair comparison, we have marked the references based on simulation results [27, 31, 32] with an asterisk. The saturated output power \(\mathrm{P_{sat}}\) of the PA in this work is over 3 dB higher than that of the simulation-based PAs, while \(\mathrm{PAE_{max}}\) is also higher than in these works. Furthermore, \(\mathrm{P_{sat}}\) is higher than that of the fabricated PAs except for [10], which has single-band operation, a 37% higher supply voltage, and 6\(\times\) the chip area. \(\mathrm{PAE_{max}}\) is competitive with the state of the art, even though it is challenging to achieve high efficiency at such a high power level. Furthermore, the proposed inter-band suppression technique for dual-band PAs has not been considered in the literature.
Fig. 17: Layout of the PA in the 22-nm FD-SOI process.

Fig. 18: Simulated S-parameters of the proposed dual-band output matching transformer: (a) \(S_{21}\), (b) \(S_{11}\) and \(S_{22}\).

Fig. 19: Simulated stability K factor of the PA.
## V Conclusion
We presented a dual-band transformer network with inter-band suppression, enabled by a resonator in the center-tap of the transformer. A theory was developed to provide insights into the circuit operation, design guidelines, and the impact of imperfect on-chip circuit elements. A proof-of-concept 28/38-GHz power amplifier (PA) was designed using the proposed network in a 22-nm fully-depleted silicon-on-insulator (FD-SOI) CMOS process. The PA achieves state-of-the-art output power through multiple techniques, including transistor stacking, sizing the power cells for a 50-\(\Omega\) optimum resistance, and four-path parallel-series transformer combining. The inter-band suppression is 6 dB at 33 GHz, limited by the quality factor of the inductors and transformers. It can be improved in future research by applying the inter-band suppression in the input transformers, using a modified resonator circuit, or using a process with a higher inductor quality factor.
## Acknowledgment
The authors would like to thank Global Foundries for the PDK support.
|
2303.07974 | Thermal effects in hot and dilute homogeneous asymmetric nuclear matter | We present a comprehensive analysis of hot and dilute isospin-asymmetric
nuclear matter employing the temperature-dependent effective-relativistic
mean-field theory (E-RMF). The E-RMF is applied to study the effect of $\delta$
and $\omega-\rho$ meson cross-coupling on the thermal properties of asymmetric
nuclear matter using two recently developed IOPB-I and G3 parameter sets. These
sets are known to reproduce the nuclear matter properties in agreement with
various experimental and observational constraints. We consider the nuclear
matter to be homogeneous and study the equation of state (EoS) for densities,
temperature and asymmetry which are relevant for astrophysical simulations such
as supernovae explosion. The effect of temperature is investigated in reference
to the density-dependent free symmetry energy and its higher-order derivatives
using the well known parabolic approximation. The larger value of
$\lambda_\omega$ cross-coupling in G3 in addition to the $\delta$ meson
coupling in G3 smoothen the free symmetry energy. Thermal effects on various
state variables are examined at fixed temperature and isospin asymmetry by
separating their T=0 and the finite-T expressions. The thermal effects are
mainly governed by effective mass with larger effective mass estimating larger
thermal contribution. The effect of temperature on isothermal and isentropic
incompressibility is discussed which is in harmony with various available
microscopic calculations. The liquid-gas phase transition properties are
examined in asymmetric matter with two conserved charges in the context of
different slope parameter and comparable symmetry energy in IOPB-I and G3 set.
The spinodal instability, binodal curve and critical properties are found to be
influenced by the slope parameter $L_{sym}$. | Vishal Parmar, Manoj K Sharma, S K Patra | 2023-03-14T15:30:26Z | http://arxiv.org/abs/2303.07974v1 | # Thermal effects in hot and dilute homogeneous asymmetric nuclear matter
###### Abstract
We present a comprehensive analysis of hot and dilute isospin-asymmetric nuclear matter employing the temperature-dependent effective-relativistic mean-field theory (E-RMF). The E-RMF is applied to study the effect of \(\delta\) and \(\omega-\rho\) meson cross-coupling on the thermal properties of asymmetric nuclear matter using two recently developed IOPB-I and G3 parameter sets. These sets are known to reproduce the nuclear matter properties in agreement with various experimental and observational constraints. We consider the nuclear matter to be homogeneous and study the equation of state (EoS) for densities, temperature and asymmetry which are relevant for astrophysical simulations such as supernovae explosion. The effect of temperature is investigated in reference to the density-dependent free symmetry energy and its higher-order derivatives using the well known parabolic approximation. The larger value of \(\lambda_{\omega}\) cross-coupling in G3 in addition to the \(\delta\) meson coupling in G3 smoothen the free symmetry energy. Thermal effects on various state variables are examined at fixed temperature and isospin asymmetry by separating their T=0 and the finite-T expressions. The thermal effects are mainly governed by effective mass with larger effective mass estimating larger thermal contribution. The effect of temperature on isothermal and isentropic incompressibility is discussed which is in harmony with various available microscopic calculations. The liquid-gas phase transition properties are examined in asymmetric matter with two conserved charges in the context of different slope parameter and comparable symmetry energy in IOPB-I and G3 set. The spinodal instability, binodal curve and critical properties are found to be influenced by the slope parameter \(L_{sym}\). Finally, we consider a more realistic system with the inclusion of electrons and analyse their effect on instability and adiabatic index of isospin asymmetric nuclear matter.
pacs: 21.65.+f, 26.60.+c, 26.60.-c, 25.70.Pq
## I Introduction
Core-collapse supernovae are one of nature's brightest optical displays, in which the million-year life of a giant star (\(M>8M_{\odot}\)) is ended violently and abruptly within fractions of a second [1; 2]. The exact mechanism of the collapse explosion is still not well understood even after several decades of thorough investigation. In recent years, such explosions have been studied using several ab-initio core-collapse simulations in which the hydrodynamics equations are solved numerically [3; 4]. These simulations estimate that an explosion energy of \(\approx 10^{51}\) erg is attained within a time scale of \(\geq 1\) s [5]. The temperature of the matter rises to 20 MeV, and the density at bounce can vary up to two times the nuclear saturation density. The short time scale of the collapse does not allow the matter to reach \(\beta\) equilibrium, and calculations are usually done at a fixed asymmetry \(\alpha=\frac{\rho_{n}-\rho_{p}}{\rho_{n}+\rho_{p}}\approx 0.4\) [6; 7].
The determination of the EoS of isospin-asymmetric nuclear matter (ANM) is relevant in various areas of nuclear physics, ranging from finite nuclei to infinite matter. Not only is the understanding of its ground state important, but its behaviour at finite temperature is equally significant. The finite-temperature behaviour of ANM is relevant in the context of astrophysical events such as neutron-star mergers, gamma-ray bursts, proto-neutron stars, the early universe, etc. [8]. Furthermore, the composition of matter inside a neutron star impacts its transport and cooling processes, which are governed by the so-called direct URCA process [9]. The recent detection of the gravitational wave GW170817, accompanied by a gamma-ray burst and an electromagnetic afterglow from the merger of a neutron-star binary, opened a new era of astrophysics [10; 11]. In view of the above, a systematic understanding of asymmetric nuclear matter at finite temperature is highly desirable.
Nuclear matter, which is predominantly governed by a residual short-range strong force and a long-range electromagnetic interaction, exhibits various structures which in turn depend upon parameters such as density, asymmetry, and temperature. At low temperature or entropy, the matter is in a non-homogeneous form below the subnuclear density (\(\rho<0.1\) fm\({}^{-3}\)). The nuclear matter is a mixture of heavy nuclei and light clusters in a background of nucleon gas [12]. As the density increases, the nuclei become deformed, constituting a frustrated system usually known as pasta structures.
2305.12749 | A Benchmark on Extremely Weakly Supervised Text Classification:
Reconcile Seed Matching and Prompting Approaches | Extremely Weakly Supervised Text Classification (XWS-TC) refers to text
classification based on minimal high-level human guidance, such as a few
label-indicative seed words or classification instructions. There are two
mainstream approaches for XWS-TC, however, never being rigorously compared: (1)
training classifiers based on pseudo-labels generated by (softly) matching seed
words (SEED) and (2) prompting (and calibrating) language models using
classification instruction (and raw texts) to decode label words (PROMPT). This
paper presents the first XWS-TC benchmark to compare the two approaches on fair
grounds, where the datasets, supervisions, and hyperparameter choices are
standardized across methods. Our benchmarking results suggest that (1) Both
SEED and PROMPT approaches are competitive and there is no clear winner; (2)
SEED is empirically more tolerant than PROMPT to human guidance (e.g., seed
words, classification instructions, and label words) changes; (3) SEED is
empirically more selective than PROMPT to the pre-trained language models; (4)
Recent SEED and PROMPT methods have close connections and a clustering
post-processing step based on raw in-domain texts is a strong performance
booster to both. We hope this benchmark serves as a guideline in selecting
XWS-TC methods in different scenarios and stimulate interest in developing
guidance- and model-robust XWS-TC methods. We release the repo at
https://github.com/ZihanWangKi/x-TC. | Zihan Wang, Tianle Wang, Dheeraj Mekala, Jingbo Shang | 2023-05-22T06:18:23Z | http://arxiv.org/abs/2305.12749v1 | # A Benchmark on Extremely Weakly Supervised Text Classification:
###### Abstract
Extremely Weakly Supervised Text Classification (XWS-TC) refers to text classification based on minimal high-level human guidance, such as a few label-indicative seed words or classification instructions. There are two mainstream approaches for XWS-TC, however, never being rigorously compared: (1) training classifiers based on pseudo-labels generated by _(softly) matching seed words_ (Seed) and (2) _prompting (and calibrating)_ language models using classification instruction (and raw texts) to decode label words (Prompt). This paper presents the first XWS-TC benchmark to compare the two approaches on fair grounds, where the datasets, supervisions, and hyperparameter choices are standardized across methods. Our benchmarking results suggest that (1) Both Seed and Prompt approaches are competitive and there is no clear winner; (2) Seed is empirically more tolerant than Prompt to human guidance (e.g., seed words, classification instructions, and label words) changes; (3) Seed is empirically more selective than Prompt to the pre-trained language models; (4) Recent Seed and Prompt methods have close connections and a clustering post-processing step based on raw in-domain texts is a strong performance booster to both. We hope this benchmark serves as a guideline in selecting XWS-TC methods in different scenarios and stimulate interest in developing guidance- and model-robust XWS-TC methods1.
Footnote 1: Github repo at [https://github.com/ZihanWangKi/x-TC](https://github.com/ZihanWangKi/x-TC)
## 1 Introduction
Recently, there has been significant advancement in text classification with the emergence of Extremely Weakly Supervised Text Classification (XWS-TC) methods, which require only minimal high-level human guidance such as a few label-indicative seed words or classification instructions. Two mainstream approaches have emerged: training classifiers on pseudo-labels generated by (softly) matching seed words (Seed), and prompting (and calibrating) language models with classification instructions (Prompt).
For example, given an instruction template of <text>. sentiment:, a model generating "happy" or "sad" helps classify the sentiment of the text. Naive zero-shot prompting considers the label with the highest likelihood as the answer, and recent improvements for more accurate likelihoods include calibration of likelihood scores Holtzman et al. (2021); Zhao et al. (2021); Han et al. (2022) and verbalizers that find more label words to better represent the class Schick and Schutze (2021); Ma et al. (2023); Hu et al. (2022).
Both Seed and Prompt methods have demonstrated strong performance in XWS-TC. However, there has been a lack of comprehensive comparison between these two approaches. This is due to the perception that the approaches are unrelated and the lack of standardization in datasets, supervision, and hyperparameter choices across methods.
We are motivated to construct a benchmark that fairly evaluates the performance of XWS-TC methods. The benchmark consists of 11 datasets covering four domains along with their fine-grained variants and different numbers of classes. In addition, we make an effort to use the same hyperparameters across datasets for the methods, as there should not be a development set to tune the hyperparameters in the XWS setting Perez et al. (2021).
Our benchmarking results suggest that both Seed and Prompt approaches are competitive, with no clear winner. Seed tends to perform better when both approaches use a similar-sized pre-trained model and is more robust and tolerant to changes in human guidance (such as seed words, classification instructions, and label words). On the other hand, Prompt methods have the ability to handle more general types of human guidance (such as descriptions of class names, rather than specific words) and do not have a strict requirement for an unlabeled corpus. When the underlying pre-trained language model changes, Prompt is more robust and scales better with the language model than Seed. We also examine two specific methods from each approach, X-Class Wang et al. (2021) and ProtoCal Han et al. (2022), which independently proposed a post-processing approach to calibrate the class predictions through clustering on an unlabeled in-domain corpus to improve classification performance. Our results show that this subroutine can be a universal booster for both Seed and Prompt approaches.
Through this benchmark, we aim to advance the study of XWS-TC methods and call for the development of methods that are robust to different human guidance and language models. We firmly believe that this paper will serve as a guide for selecting the appropriate method in different scenarios and contribute to the advancement of the field.
## 2 Related Work
### Different Types of Weak Supervision
Extremely Weak Supervision is a setting that assumes access to only high-level human inputs, such as names of classes or instructions about classification criteria. We briefly discuss different types of minimal supervision in the following paragraphs.
**Few-shot Supervision.** Few-shot supervision is the setting where there are only a small number of labeled examples for each of the classes. An intuitive way is to directly train the classifier on the few-shot data, but this usually yields subpar performance. Another popular way is called _in-context learning_, where the few-shot supervision is used as _context_ to prompt the LM for the answer Brown et al. (2020). Various methods have been proposed to improve it by searching for better label words Schick and Schutze (2021); Ma et al. (2023), stabilizing the output Lu et al. (2022), and efficient fine-tuning Gao et al. (2021).
**Distant Supervision.** Distant supervision includes supervision from external resources such as encyclopedias or gazetteers. There have been efforts to incorporate external knowledge into prompting Hu et al. (2022), phrase mining Shang et al. (2018), and named entity recognition Liang et al. (2020). External models can also be used to help with extremely weak supervision. A line of research leverages models trained on natural language inference data to suggest better related words Park and Lee (2022) or directly classify the text Yin et al. (2019); Gera et al. (2022).
**No Supervision.** Unsupervised methods fall into this category as they require no supervision. These methods typically take one of the two following approaches: (1) clustering Aharoni and Goldberg (2020), (2) topic modeling Blei et al. (2003). However, both of these approaches lack control over the clusters/topics generated, i.e., the classes. For example, a text corpus can be categorized on several bases, including topic, location, and sentiment. An unsupervised method cannot handle such scenarios. It would be beneficial to be able to retrieve all possible classifications of a corpus in an
unsupervised manner, but as far as we are aware, there are no methods with this ability.
### Weak Supervision Benchmarks
We introduce two other Weak Supervision Benchmarks and talk about differences with this work.
Wrench (Zhang et al., 2021) is a benchmark that explored various types of weak supervision labeling functions (i.e., rules used to label the text). They synthesize the performance of different labeling functions, ways to combine them, and the fine-tuning process to learn the pseudo-training data. In our benchmark, we analyze extremely weak text classifiers that go beyond the labeling functions and compare their performance and robustness with zero-shot prompting.
AutoWS-Bench-101 (Roberts et al., 2022) is another benchmark that analyzes how labeling functions help text classification along with additional few-shot supervision. They conclude that pre-trained models are strong baselines for in-domain settings and should be considered integrating with weak supervision methods. In this work, we focus on extremely weak supervision methods without any labeled data. The Seed and Prompt methods compared in this benchmark are all based on pre-trained language models.
### Verbalizers
Verbalizers are a type of Prompt method that find a larger set of label words so that the class choices are accurately represented. We did not consider Verbalizer methods in this benchmark since they mostly rely on additional supervision, such as few-shot (Schick and Schutze, 2021; Ma et al., 2023) or an external knowledge base (Hu et al., 2022).
## 3 Background
Extremely Weak Supervision in Text Classification refers to a few high-level human guidance as supervision. This guidance typically is in the form of seed words that describe each class, or an instruction paired with label words that define the task. There are two main approaches for XWS-TC: matching seed words (Seed) and prompting language models (Prompt).
### Seed Matching Methods
Seed approaches are provided with a few class-indicative seed words and unlabeled documents as input. These methods typically involve seed word expansion where more words related to provided seed words are identified in the unlabeled corpus through several statistics-based (Salton and Buckley, 1988; Mekala and Shang, 2020) or deep learning-based strategies (Meng et al., 2020; Wang et al., 2021; Zhang et al., 2021). Using these expanded seed words, each unlabeled document is pseudo-labeled. Different heuristics have been explored for pseudo-labeling such as string-matching (Meng et al., 2018). Recently, the matching approach has also evolved into softer manners such as embedding-based matching (Wang et al., 2021), and graph-based matching (Zhang et al., 2021), that can address conflicts in a principled manner during pseudo-labeling.
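As a minimal illustration, the sketch below shows the simplest string-matching flavor of pseudo-labeling; the seed lists and the no-match handling are illustrative choices rather than those of any specific method:

```python
from collections import Counter
from typing import Optional

# Illustrative seed words; real Seed methods expand these over an unlabeled corpus.
SEED_WORDS = {
    "sports": ["game", "team", "score"],
    "business": ["market", "stock", "profit"],
}

def pseudo_label(text: str) -> Optional[str]:
    """Assign the class whose seed words occur most often; None if nothing matches."""
    tokens = text.lower().split()
    hits = Counter({label: sum(tokens.count(s) for s in seeds)
                    for label, seeds in SEED_WORDS.items()})
    label, count = hits.most_common(1)[0]
    return label if count > 0 else None  # unmatched texts stay unlabeled

# The pseudo-labeled documents would then be used to train a supervised classifier.
print(pseudo_label("the team tied the score in the final game"))  # -> sports
```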
We introduce 4 strong-performing Seed methods to include in our benchmark.
**LotClass** (Meng et al., 2020) obtains related words by predicting masked tokens with a masked-language-modeling model (Devlin et al., 2019) over an unlabelled corpus. They match text to related words by fine-tuning a model to predict the related words given a text.
**XClass** (Wang et al., 2021) obtains related words by finding words that have similar representations. They construct class-oriented representations for text and match the text to related words by representation similarity. They also showed that the performance can be improved significantly by matching based on clusters of text representations.
**ClassKG** (Zhang et al., 2021) models the dependence among related words as an annotation problem on a keyword graph.
**NPPrompt** (Zhao et al., 2022) obtains related words through embedding similarity from a pre-trained LM. The related words are used as label words to prompt a generative LM for predictions, which are then aggregated into the matching result. To some extent, NPPrompt belongs to the intersection of Prompt and Seed methods.
### Prompt Methods
Prompting language models is another approach to extremely weak supervision in text classification. This approach involves prompting a generative language model with an instructive text and extracting the _likelihoods_ of different label words. This approach does not require an unlabeled in-domain corpus and can be used to predict text in an online fashion. However, language models have been known to be biased towards text sequences more
common in pre-training data, leading to instability in zero-shot & few-shot settings. Recently proposed post-processing methods Holtzman et al. (2021); Han et al. (2022) have attempted to address this by calibrating the predicted probabilities using estimates of the model's bias towards each verbalized label. We describe 2 calibration methods.
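A minimal sketch of this likelihood-extraction step with the HuggingFace Transformers library; GPT-2, the template, and the label words are illustrative choices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def label_logprobs(text, labels=("positive", "negative")):
    """Log-probability of each label word continuing the prompt.

    Only the first sub-token of each label word is scored, a common shortcut.
    """
    prompt = f"{text}. sentiment:"  # illustrative instruction template
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        next_token_logits = model(ids).logits[0, -1]
    logprobs = torch.log_softmax(next_token_logits, dim=-1)
    return {lab: logprobs[tokenizer(" " + lab).input_ids[0]].item() for lab in labels}

scores = label_logprobs("The movie was a delight")
print(max(scores, key=scores.get))  # naive (uncalibrated) zero-shot prediction
```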
**DC-PMI** Holtzman et al. (2021) considers a null prompt to obtain the raw likelihoods of the language model predicting each label. Then, for each text, they adjust the likelihood of the predicted label by marginalizing out these raw likelihoods.
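A sketch of the DC-PMI adjustment, reusing `label_logprobs` from the sketch above; scoring against the bare instruction as the null prompt is a simplification of the original domain-conditional formulation:

```python
def dcpmi_scores(text, labels=("positive", "negative")):
    """PMI-style score: subtract each label's likelihood under a text-free prompt."""
    conditional = label_logprobs(text, labels)  # log p(label | text + instruction)
    null = label_logprobs("", labels)           # log p(label | instruction only)
    return {lab: conditional[lab] - null[lab] for lab in labels}
```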
**ProtoCal** Han et al. (2022) considers an unlabelled corpus and obtains the predicted likelihoods on the corpus. The likelihood vectors are then clustered to better obtain the prediction boundary for each class. Instead of maximum likelihood, this prediction boundary is used to predict the class.
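A sketch of ProtoCal-style calibration; mapping each cluster to the class its center scores highest on is one simple alignment choice:

```python
import numpy as np
from sklearn.cluster import KMeans

def protocal_predict(prob_vectors: np.ndarray) -> np.ndarray:
    """Cluster per-text probability vectors and use the clusters as the decision rule.

    prob_vectors: (num_texts, num_classes) predicted likelihoods on an unlabeled corpus.
    """
    num_classes = prob_vectors.shape[1]
    km = KMeans(n_clusters=num_classes, n_init=10, random_state=0).fit(prob_vectors)
    # Align each cluster with the class its center scores highest on.
    cluster_to_class = km.cluster_centers_.argmax(axis=1)
    return cluster_to_class[km.labels_]
```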
Some more Seed and Prompt methods are described in Appendix A.
## 4 Benchmark
In order to establish a benchmark that can accurately evaluate various XWS-TC methods, it is essential to consider a range of factors, including dataset choices, instructions, label words, hyperparameter control, the use of pre-trained language models, and metrics, and to ensure their consistency across all experiments. We will discuss each of these factors in detail in the following sections.
### Dataset
We consider datasets from prior evaluations Holtzman et al. (2021); Wang et al. (2021); Meng et al. (2020) that contain data from diverse domains. To facilitate the evaluation process, the size of the evaluation set for each dataset has been controlled to a few thousand instances. Additionally, as many XWS-TC methods require the use of an unlabelled in-domain corpus, a similar-sized sample has been sampled from the training split to serve this purpose, with the evaluation set and unlabelled corpus being disjoint. The datasets have been uniformly sampled without altering the distribution of labels, thus preserving the imbalance ratio, which is defined as the ratio between the size of the largest class and the smallest class. The statistics of the datasets are presented in Table 1. Details of the sources of the datasets are in Appendix B.
### Instructions and Label/Seed Words
To fairly compare Seed and Prompt methods, we need to provide equal amounts of human supervision. That means, for Seed methods, we should only allow a single seed word for each class, matching the amount used for label words. For instructions, we consider simple ones that hint at the classification criteria Holtzman et al. (2021). Detailed choices can be found in Appendix C.
### Metrics
For evaluation metrics, we consider the macro F\({}_{1}\) score on a dataset-by-dataset basis, which values each class within a dataset equally. To understand the performance of a method on all datasets, we employ two metrics: the average of the macro F\({}_{1}\) scores, and a ranking-based metric that combines the ranking of methods on each dataset to obtain a scale-prone value Colombo et al. (2022).
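For reference, a sketch of both aggregate metrics; the mean-rank aggregation shown here is a simple stand-in, not necessarily the exact procedure of Colombo et al. (2022):

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import f1_score

def macro_f1(y_true, y_pred):
    """Macro F1: the unweighted mean of per-class F1 scores."""
    return f1_score(y_true, y_pred, average="macro")

def mean_rank(scores: np.ndarray) -> np.ndarray:
    """scores: (num_methods, num_datasets), higher is better.

    Returns each method's rank averaged over datasets (higher = better).
    """
    ranks = rankdata(scores, axis=0)  # 1 = worst method on that dataset
    return ranks.mean(axis=1)
```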
### Hyperparameters
Another crucial aspect of the benchmark is the number of hyperparameters utilized by each method. In the context of extremely weak supervision, we argue that it is unrealistic to use different hyperparameters for different datasets, as doing so would necessitate the use of a separate development set, thereby defeating the purpose of using only high-level human supervision Perez et al. (2021). Therefore, we slightly tune the hyperparameters on one of the datasets to rule out failing scenarios and then stick with a single choice of hyperparameters throughout all datasets. Under this hyperparameter enforcement, the ideal method should exhibit consistent performance across all datasets.
### Pre-trained Language Models
Prompt methods use generative language models such as GPT while Seed methods use representation encoding language models such as BERT. To fairly compare methods between these two approaches on XWS-TC, we have to consider the ability of language models as a factor. We use the number of parameters of the pre-trained language model as an approximation of the power of the language model. Since all language models use the transformer as the backbone, this implies that the number of layers and size of hidden states is controlled. A further discussion is in Appendix D.
### Large Language Models
This benchmark specifically excludes the evaluation of (multi-task) fine-tuned language models such as T0 (Sanh et al., 2022), large language models (LLMs) such as GPT-3, and human-feedback-trained language models like Instruct-GPT (Ouyang et al., 2022) and ChatGPT, because there are no equivalent representation-encoding language models for the Seed approaches. We discuss this in more detail and include an evaluation of ChatGPT on a single dataset as a reference in Appendix E.
## 5 Benchmark Experiments
### Main Results
In Table 2 we show the performances of all Seed and Prompt methods considered in the benchmark across the 11 datasets and report the average macro F\({}_{1}\) performance and the rank score.
**Performance of Prompt Methods.** We note that the performance of the standalone Prompt method is about 20 points lower than its counterparts with calibration methods. The use of additional instance-independent instructions (DC-PMI) or an additional clustering step based on unlabelled text (ProtoCal) is crucial for Prompt methods to work well in XWS (zero-shot) text classification.
**Performance of Seed Methods.** All the Seed methods exhibit strong performance, with X-Class performing stably well across all datasets, and ClassKG performing the best on several datasets but losing on certain fine-grained datasets.
**Comparing Prompt and Seed Methods.** First, on absolute performance, we can see that Seed methods perform better overall than Prompt methods, even when appropriate calibration is added for Prompt methods. However, we can also observe that a larger pre-trained GPT model increases the performance of Prompt methods quite significantly, while Seed methods show a smaller improvement when a larger pre-trained language model is used. This effect is further studied in Section 5.2.3.
### Robustness
Through this benchmark, we hope not only to decide which method performs best, but also to analyze which method is more robust under changing circumstances. Different choices of label words/seed words, instructions, and pre-trained language models can occur in practice. Therefore, the robustness of methods when these ingredients are reasonably varied indicates how stable each method is under varying circumstances. Due to the cost of multiple runs of each method, we focus on 4 datasets covering different domains, imbalance ratios, and numbers of classes: Yelp, AGNews, NYT-S, and DBpedia. We leave out two methods, LotClass and NPPrompt, to save computational resources.
#### 5.2.1 Different Seed/Label words
In Table 3 we explore the effect when a different choice of label words and seed words is used. For example, for Yelp-2, we chose negative/positive, terrible/great, bad/good, awful/fine, and nasty/nice as the variants. We report the performance of the methods on each of the five choices, and also the aggregated performance over the 4 aforementioned datasets. We notice that Prompt methods in general exhibit high instability. While DC-PMI and ProtoCal can remedy the variance a bit, Seed methods are still more robust to changes of seed words.

\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
**Name** & **Domain** & **\# Classes** & **\(\|\)Unlabelled\(\|\)** & **\(\|\)Eval\(\|\)** & **Imbalance** \\ \hline
IMDB & Reviews/Sentiment & 2 & 5000 & 5000 & 1.0 \\
Yelp-2 & Reviews/Sentiment & 2 & 5600 & 3800 & 1.1 \\
Yelp-5 & Reviews/Sentiment & 5 & 6500 & 5000 & 1.1 \\
AGNews & News/Topic & 4 & 6000 & 7600 & 1.0 \\
20News & News/Topic & 5 & 6254 & 5362 & 1.9 \\
20News-Fine & News/Topic & 17 & 5589 & 4792 & 1.3 \\
NYT-S & News/Topic & 5 & 4578 & 3925 & 17.1 \\
NYT-S-Fine & News/Topic & 26 & 4034 & 3459 & 96.3 \\
NYT & News/Topic & 9 & 5119 & 6400 & 30.7 \\
NYT-Loc & News/Location & 10 & 5119 & 6400 & 17.1 \\
DBpedia & Wikipedia/Ontology & 14 & 5600 & 7000 & 1.3 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Dataset statistics in our benchmark.
#### 5.2.2 Different Instructions
A high variance is also observed when the instructions are changed for the Prompt methods, as shown in Table 4. A noticeable trend is that when the pre-trained model is larger, the performance increases, but the variance brought by instructions or label words also increases. This could be alarming for Prompt methods.
#### 5.2.3 Different Pre-trained Language Models
In Table 5 we analyze how changes in the pre-trained language model affect the performance of Seed and Prompt methods (see Appendix H for the full table). Although Seed performs better than Prompt, Prompt methods show a strong increasing trend as the size of the pre-trained language model grows (e.g., changing from BERT-base to BERT-large). Also, X-Class and NPPrompt fail on RoBERTa and BERT respectively, which we hypothesize is because assumptions made in the methods do not generalize to all pre-trained language models; for example, the distribution of similarities of representations generated by a language model might differ across models. This scaling trend is a factor that should be taken into account when selecting methods for XWS-TC, especially when the language model size differs from those evaluated in this benchmark.
\begin{table}
\begin{tabular}{l l|ccccccccccc|cc} \hline \hline
**Method** & **Model** & **IMDB** & **Yelp-2** & **Yelp-5** & **AGNews** & **20News** & **20News-Fine** & **NYT-S** & **NYT-S-Fine** & **NYT** & **NYT-Loc** & **DBpedia** & **Average** & **Rank Score** \\ \hline
Prompt & GPT2-small & 56.42 & 47.36 & 7.62 & 38.42 & 36.32 & 28.76 & 22.45 & 38.90 & 33.44 & 60.32 & 13.93 & 34.90 & 0 \\
 & GPT2-medium & 35.80 & 33.57 & 25.87 & 69.36 & 55.16 & 46.03 & 54.08 & 46.14 & 24.92 & 79.00 & 24.52 & 44.95 & 1 \\ \hline
Prompt + DC-PMI & GPT2-small & 70.13 & 65.34 & 23.01 & 72.67 & 61.64 & 37.45 & 73.93 & 63.19 & 55.20 & 70.40 & 51.10 & 58.55 & 4 \\
 & GPT2-medium & 63.24 & 87.00 & 11.34 & 74.13 & 61.15 & 52.74 & 79.80 & 67.66 & 58.44 & 87.35 & 57.30 & 63.65 & 8 \\ \hline
Prompt + ProtoCal & GPT2-small & 70.35 & 65.89 & 23.77 & 72.66 & 58.62 & 36.77 & 53.69 & 29.82 & 55.15 & 65.80 & 51.97 & 53.14 & 2 \\
 & GPT2-medium & 70.58 & 88.60 & 36.62 & 75.26 & 62.58 & 48.55 & 51.97 & 46.85 & 59.04 & 72.45 & 66.46 & 61.54 & 9 \\ \hline \hline
LotClass & BERT-base & 58.56 & 67.96 & 24.92 & 73.94 & 70.57 & 9.40 & 61.36 & 23.05 & 48.59 & 67.13 & 57.98 & 51.22 & 3 \\
 & BERT-large & 81.03 & 77.03 & 25.17 & 68.25 & 65.71 & 45.51 & 44.00 & 37.11 & 43.08 & 80.55 & 58.04 & 56.86 & 5 \\ \hline
X-Class & BERT-base & 82.89 & 85.44 & 28.80 & 81.81 & 76.98 & 58.78 & 91.94 & 61.06 & 67.19 & 86.38 & 89.50 & 73.71 & 10 \\
 & BERT-large & 82.05 & 90.39 & 31.02 & 85.91 & 77.52 & 59.98 & 87.53 & 68.40 & 68.73 & 85.77 & 87.91 & 75.02 & 12 \\ \hline
ClassKG & BERT-base & 80.89 & 92.21 & 32.33 & 88.10 & 81.72 & 52.29 & 84.12 & 49.59 & 60.79 & 92.81 & 94.75 & 74.25 & 13 \\
 & BERT-large & 90.96 & 93.10 & 39.41 & 87.30 & 83.84 & 51.62 & 80.95 & 59.95 & 56.31 & 91.03 & 72.74 & 73.38 & 11 \\ \hline
NPPrompt & RoBERTa-base & 85.19 & 81.17 & 14.20 & 80.42 & 68.92 & 48.64 & 77.76 & 55.23 & 64.46 & 53.85 & 60.36 & 62.75 & 7 \\
 & RoBERTa-large & 85.67 & 93.58 & 23.45 & 83.62 & 69.82 & 43.33 & 77.93 & 35.91 & 59.96 & 65.83 & 47.11 & 62.38 & 6 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Performance of Prompt and Seed methods on the benchmark with standard models, prompt instructions, label words, and seed word choices. Higher is better for all scores.
\begin{table}
\begin{tabular}{l l|ccccccc|ccc} \hline \hline
\multirow{2}{*}{**Method**} & \multirow{2}{*}{**Model**} & \multicolumn{7}{c|}{**Yelp-2**} & \multicolumn{3}{c}{**Averaged over Datasets**} \\
 & & **default** & **alt. 1** & **alt. 2** & **alt. 3** & **alt. 4** & **Median** & **Average (std)** & **Median** & **Average** & **std** \\ \hline
Prompt & GPT2-small & 47.36 & 49.34 & 32.84 & 58.19 & 32.24 & 47.36 & 43.99 (10.04) & 32.88 & 31.01 & 6.37 \\
 & GPT2-medium & 33.57 & 32.89 & 32.84 & 55.10 & 32.78 & 32.89 & 37.44 (8.84) & 39.39 & 40.70 & 8.77 \\ \hline
Prompt + DC-PMI & GPT2-small & 65.34 & 57.19 & 72.80 & 45.12 & 56.98 & 57.19 & 59.49 (9.27) & 61.81 & 62.46 & 5.13 \\
 & GPT2-medium & 87.00 & 66.65 & 36.53 & 75.31 & 39.23 & 66.65 & 60.94 (19.93) & 68.56 & 66.54 & 7.26 \\ \hline
Prompt + ProtoCal & GPT2-small & 65.89 & 54.59 & 70.43 & 58.03 & 63.72 & 63.72 & 62.53 (5.63) & 64.62 & 64.03 & 6.17 \\
 & GPT2-medium & 88.60 & 87.31 & 90.53 & 80.53 & 68.59 & 87.31 & 83.11 (8.00) & 72.17 & 70.74 & 8.76 \\ \hline \hline
X-Class & BERT-base & 85.44 & 88.01 & 85.69 & 62.24 & 84.33 & 85.44 & 81.14 (9.53) & 86.18 & 83.83 & 5.70 \\
 & BERT-large & 90.39 & 89.71 & 88.70 & 84.75 & 85.49 & 88.70 & 87.81 (2.27) & 83.77 & 83.36 & 4.47 \\ \hline
ClassKG & BERT-base & 92.21 & 91.71 & 87.78 & 91.18 & 92.47 & 91.71 & 91.07 (1.70) & 87.71 & 85.88 & 4.45 \\
 & BERT-large & 93.10 & 93.16 & 94.13 & 93.89 & 92.01 & 93.16 & 93.26 (0.74) & 84.93 & 85.40 & 3.74 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Performance of Prompt and Seed methods when the label word/seed word are changed to similar meaning alternatives. We show the performance on 5 choices of label words on Yelp-2 (4 alternatives + 1 default), its median, average, and standard deviation, and the averaged metrics across all datasets.
## 6 Connections between Recent Seed and Prompt Methods
While Prompt was introduced by the seminal GPT-3 paper Brown et al. (2020) not too long ago, Seed has a longer history and can be traced back to early tf-idf retrieval methods Salton and Buckley (1988). In recent years, Seed and Prompt methods have been exploring similar ideas. Seed methods have been leveraging pre-trained language models to better understand the semantics of seed words, for example, by asking the language model to fill in masks Meng et al. (2020) or through means of representation similarities Wang et al. (2021); Zhao et al. (2022). Prompt methods have been exploring calibration and verbalizers to improve and stabilize their predictions. A verbalizer includes a step of finding more label words that better represent the class, which is a similar approach to the one used in Seed. We show that a recent representative Seed method, X-Class, and two Prompt methods, Verbalizers and ProtoCal, share strong similarities and deep connections in their design. This is particularly interesting as the two directions have been developing independently. In Figure 2, we provide a pipeline of the methods and highlight the similarities.
### Obtaining Text Representations
X-Class matches text to classes by learning class-oriented text representations from an encoder-based language model. X-Class views class representations as the union of representations describing the words. The text representation in X-Class is defined as a weighted average of individual token representations where the weights are based on their respective similarity to the class representations. On the other hand, general prompting relies on a decoder-based language model to produce a next token representation. In the penultimate layer of the decoder, the last token representation is computed by an attention mechanism over all other tokens, which essentially produces a weighted average of all the token representations.
In both methods, the text representation is obtained using an attention-like weighted average of tokens in the text. The attention is guided such that the output representation is indicative of the class. X-Class uses signals from class names to guide the attention while prompting relies on the understanding of the instruction.
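To make this shared view concrete, the snippet below sketches a class-guided weighted average of token representations in the spirit of X-Class. It is a minimal illustration, not the released implementation: the cosine-similarity weighting and the exponential re-weighting are simplifying assumptions, and `token_reps`/`class_reps` are placeholders for encoder outputs.

```python
import numpy as np

def class_oriented_text_rep(token_reps, class_reps):
    """Attention-like weighted average of token representations,
    with weights guided by similarity to the closest class."""
    t = token_reps / np.linalg.norm(token_reps, axis=1, keepdims=True)
    c = class_reps / np.linalg.norm(class_reps, axis=1, keepdims=True)
    sims = t @ c.T                       # (num_tokens, num_classes)
    weights = np.exp(sims.max(axis=1))   # favor class-indicative tokens
    weights /= weights.sum()             # softmax-style normalization
    return weights @ token_reps          # (hidden_dim,) text representation
```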
### Obtaining Predicted Likelihoods
Prompt methods obtain likelihoods of the class by comparing the similarity of the next token representation
\begin{table}
\begin{tabular}{l l|c c c c c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Model**} & \multicolumn{7}{c|}{**Yelp-2**} & \multicolumn{3}{c}{**Averaged over Datasets**} \\ & & **default** & **alt. 1** & **alt. 2** & **alt. 3** & **alt. 4** & **Median** & **Average (std)** & **Median** & **Average** & **std** \\ \hline \multirow{2}{*}{Prompt} & GPT2-small & 47.36 & 32.89 & 37.31 & 73.11 & 39.01 & 39.01 & 45.94 (14.37) & 31.06 & 32.32 & 8.40 \\ & GPT2-medium & 33.57 & 33.18 & 56.77 & 78.41 & 42.34 & 42.34 & 48.85 (17.08) & 38.34 & 39.11 & 11.73 \\ \hline \multirow{2}{*}{Prompt + DCPMI} & GPT2-small & 65.34 & 76.96 & 50.14 & 48.83 & 39.53 & 50.14 & 56.16 (13.29) & 60.00 & 61.48 & 6.45 \\ & GPT2-medium & 87.00 & 88.03 & 48.56 & 79.67 & 67.76 & 79.67 & 74.20 (14.72) & 65.26 & 61.54 & 14.18 \\ \hline \multirow{2}{*}{Prompt + ProtoCal} & GPT2-small & 65.89 & 83.87 & 60.54 & 71.23 & 72.25 & 72.25 & 70.76 (7.78) & 65.54 & 64.80 & 6.23 \\ & GPT2-medium & 88.60 & 87.40 & 57.85 & 80.13 & 82.73 & 82.73 & 79.34 (11.18) & 62.59 & 62.07 & 10.85 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of Prompt methods when the instructions are changed to similar meaning alternatives. We show the performance on 5 choices of instructions on Yelp-2 (4 alternatives + 1 default), its median, average, and standard deviation, and the averaged metrics across all datasets.
Figure 2: We highlight similarities (green) between a Seed method X-Class (orange) and two Prompt methods Verbalizers and ProtoCal (blue).
to representations of the label words. A recent line of research on improving prompting for classification enlarges the set of label words to capture more diverse meanings of the classes, known as verbalizers, such as PET Schick and Schutze (2021), ProtoVerb Ma et al. (2023), and KPT Hu et al. (2022). The notion of verbalizers is very similar to seed-word expansion in Seed methods. For example, X-Class and verbalizers both obtain a list of related words and use it to aggregate a class representation that replaces the naive usage of the label/seed word representation. Notably, the verbalizer methods require external supervision to find the related words, such as few-shot data Schick and Schutze (2021); Ma et al. (2023) or a knowledge base Hu et al. (2022), while Seed methods detect related words through an unlabelled corpus. Both approaches could be useful under different input settings.
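As a concrete illustration of verbalizer-style scoring, the sketch below aggregates next-token probability mass over each class's label-word list. It is a schematic under simplifying assumptions (a single forward pass, mean aggregation over words); `next_token_logits` and `label_word_ids` are placeholders, not names from any of the cited methods.

```python
import numpy as np

def verbalizer_likelihoods(next_token_logits, label_word_ids):
    """Class likelihoods from next-token logits, aggregated over each
    class's list of label-word vocabulary ids."""
    logits = next_token_logits - next_token_logits.max()  # stable softmax
    probs = np.exp(logits)
    probs /= probs.sum()
    scores = np.array([probs[ids].mean() for ids in label_word_ids])
    return scores / scores.sum()
```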
### Unlabeled Corpus Clustering
Finally, a Seed method, X-Class, and a Prompt method, ProtoCal, independently introduced a post-processing step that clusters an unlabelled corpus, with the goal of obtaining a better decision boundary. X-Class clusters the text representations and initializes the clusters with the prior text-class similarity so that the clusters and classes are aligned. ProtoCal clusters the predicted likelihoods and aligns the clusters to classes by post-matching the cluster centers to the classes. We further explore the effect of the two clustering ideas; a summary is in Table 6 (full table in Appendix I). We show that adding such a post-clustering step can almost freely (apart from requiring an unlabeled corpus) and consistently improve the performance of five different methods.
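The following is a minimal sketch of this clustering post-processing in the ProtoCal style. It assumes an `(N, C)` array of predicted class likelihoods over an unlabeled corpus, and it uses vanilla k-means plus Hungarian matching to align clusters to classes; both choices are simplifying assumptions, not the exact procedure of either paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def cluster_calibrate(likelihoods):
    """Re-assign labels by clustering predicted likelihoods and
    matching each cluster center to the class it scores highest on."""
    n_classes = likelihoods.shape[1]
    km = KMeans(n_clusters=n_classes, n_init=10).fit(likelihoods)
    cost = -km.cluster_centers_              # maximize center-class mass
    clusters, classes = linear_sum_assignment(cost)
    mapping = dict(zip(clusters, classes))
    return np.array([mapping[c] for c in km.labels_])
```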
### Implications
Given these connections between Seed and Prompt methods and the previous analysis on robustness, a natural extension is to analyze the cause of the stability issues with respect to label/seed words and model differences. We presented one empirical analysis of the clustering step in X-Class and ProtoCal and showed that this step can improve performance for the various methods discussed in the benchmark (Section 6.3). Further analysis of other components is left as future work. For example, one could reason that the introduction of related words makes the model less sensitive to the given label/seed words. This would require an exploration of the quality of the related words found by different Seed and verbalizer methods, and
\begin{table}
\begin{tabular}{l l|c c} \hline \hline
**Method** & **Model** & **Average** & **Rank Score** \\ \hline \hline Prompt & GPT2-small & 34.90 & 0 \\ Prompt + clustering & GPT2-small & 53.14 & 1 \\ \hline Prompt + DCPMI & GPT2-small & 58.55 & 2 \\ Prompt + DCPMI + clustering & GPT2-small & 59.70 & 3 \\ \hline X-Class (w/o clustering) & BERT-base & 67.40 & 6 \\ X-Class (w/ clustering) & BERT-base & 73.71 & 8 \\ \hline NPPrompt & RoBERTa-base & 62.75 & 4 \\ NPPrompt + clustering & RoBERTa-base & 64.54 & 5 \\ \hline ClassKG & BERT-base & 74.25 & 7 \\ ClassKG + clustering & BERT-base & 75.16 & 9 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance of Prompt and Seed methods with and without the clustering post-processing.
\begin{table}
\begin{tabular}{l l|c c} \hline \hline
**Method** & **Model** & **Average** & **Rank Score** \\ \hline \hline \multicolumn{4}{c}{Prompt} \\ \hline \hline \multirow{6}{*}{Prompt} & GPT2-small & 30.54 & 1 \\ & GPT2-medium & 45.38 & 8 \\ \cline{2-3} & BERT-base & 43.04 & 7 \\ & BERT-large & 51.84 & 15 \\ \cline{2-3} & RoBERTa-base & 45.71 & 6 \\ & RoBERTa-large & 59.85 & 22 \\ \hline \multirow{6}{*}{Prompt + DCPMI} & GPT2-small & 65.76 & 24 \\ & GPT2-medium & 74.56 & 31 \\ \cline{2-3} & BERT-base & 60.52 & 23 \\ & BERT-large & 55.88 & 14 \\ \cline{2-3} & RoBERTa-base & 47.14 & 5 \\ & RoBERTa-large & 55.86 & 18 \\ \hline \multirow{6}{*}{Prompt + ProtoCal} & GPT2-small & 61.05 & 21 \\ & GPT2-medium & 70.07 & 30 \\ \cline{2-3} & BERT-base & 55.74 & 11 \\ & BERT-large & 70.16 & 25 \\ \cline{2-3} & RoBERTa-base & 61.07 & 20 \\ & RoBERTa-large & 66.09 & 28 \\ \hline \multicolumn{4}{c}{Seed} \\ \hline \hline \multirow{4}{*}{X-Class} & BERT-base & 87.17 & 37 \\ & BERT-large & 87.94 & 39 \\ \cline{2-3} & RoBERTa-base & 60.18 & 19 \\ & RoBERTa-large & 46.78 & 13 \\ \hline \multirow{4}{*}{ClassKG} & BERT-base & 89.80 & 40 \\ & BERT-large & 83.52 & 38 \\ \cline{2-3} & RoBERTa-base & 86.94 & 36 \\ & RoBERTa-large & 93.17 & 41 \\ \hline \multirow{4}{*}{NPPrompt} & BERT-base & 32.46 & 0 \\ & BERT-large & 31.45 & 2 \\ \cline{2-3} & RoBERTa-base & 74.93 & 32 \\ & RoBERTa-large & 75.56 & 33 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of Prompt and Seed methods when the choice of the pre-trained model is alternated.
whether the related words between methods can be used interchangeably.
## 7 Conclusions and Future Work
In this work, we introduce a benchmark to quantitatively evaluate different Seed and Prompt approaches for extremely weakly supervised text classification. Through the benchmark, we raise awareness of Seed approaches, which are strong competitors to the more well-known zero-shot prompting (with calibrations). We also examine the robustness of these two families of approaches, and show that Seed methods are more tolerant to changes in the given human guidance, while also being more sensitive to the choice of pre-trained language model. We further analyzed the connections between Seed and Prompt approaches through the lens of a few representative methods of the two approaches and showed that their methodologies have been converging recently. Finally, we also include a study of clustering as a calibration technique that was independently proposed for both approaches, and show that it can be a good performance booster.
We envision future work in two directions. The first is to understand the source of the robustness differences and design a method that takes the best of both worlds (see Section 6.4). The other is to scale up the experiments and test whether the conclusions still hold for larger pre-trained language models.
## Limitations
**Limitation of Model Scale** The benchmark only included the evaluation of moderate-size language models and did not experiment on large language models. We justify our reasons in Section 4.6 and Appendix E and include an evaluation of ChatGPT in Appendix E, showing that even large language models fine-tuned with human feedback are far from perfect on XWS-TC. However, we acknowledge that the current state of extremely weak supervision would be better understood and assessed if complete evaluations on state-of-the-art large language models, such as Instruct-GPT (Ouyang et al., 2022), PaLM (Chowdhery et al., 2022), and ChatGPT existed. While we lack the computational resources to perform such an evaluation, we hope this work can stimulate interest in XWS-TC and complete the study.
**Limitation of Text Classification** Another limitation is the scope of Text Classification. While Prompt and Seed methods have shown strong performances on text classification, this performance does not extend to other general classification tasks, such as natural language inference/entailment (Zhao et al., 2022).
## Ethics Statement
This paper establishes a benchmark for extremely weakly supervised text classification frameworks. We provide empirical results on various Seed and Prompt methods, test their robustness, and analyze their connections. We give intuitions and insights on what method one should use for XWS-TC in different circumstances. We believe that we are on the ethical side and do not find any ethical concerns in this work.
|
2310.16799 | Measuring Supermassive Black Hole Properties via Gravitational Radiation
from Eccentrically Orbiting Stellar Mass Black Hole Binaries | There may exist stellar-mass binary black holes (BBH) which merge while
orbiting nearby a supermassive black hole (SMBH). In such a triple system, the
SMBH will modulate the gravitational waveform of the BBH through orbital
Doppler shift and de Sitter precession of the angular momentum. Future
space-based GW observatories focused on the milli- and decihertz band will be
uniquely poised to observe these waveform modulations, as the GW frequency from
stellar-mass BBHs varies slowly in this band while modulation effects
accumulate. In this work, we apply the Fisher information matrix formalism to
estimate how well space-borne GW detectors can measure properties of BBH+SMBH
hierarchical triples using the GW from orbiting BBH. We extend previous work by
considering the more realistic case of an eccentric orbit around the SMBH, and
notably include the effects of orbital pericenter precession. We find that for
detector concepts such as LISA, B-DECIGO, and TianGO, we can extract the SMBH
mass and semimajor axis of the orbit with a fractional uncertainty below the
0.1% level over a wide range of triple system parameters. Furthermore, we find
that the effects of pericenter precession and orbital eccentricity
significantly improve our ability to measure this system. We also find that
while LISA could measure these systems, the decihertz detector concepts
B-DECIGO and TianGO would enable better sensitivity to the triple's parameters. | Andrew Laeuger, Brian Seymour, Yanbei Chen, Hang Yu | 2023-10-25T17:29:19Z | http://arxiv.org/abs/2310.16799v1 | Measuring Supermassive Black Hole Properties via Gravitational Radiation from Eccentrically Orbiting Stellar Mass Black Hole Binaries
###### Abstract
There may exist stellar-mass binary black holes (BBH) which merge while orbiting nearby a supermassive black hole (SMBH). In such a triple system, the SMBH will modulate the gravitational waveform of the BBH through orbital Doppler shift and de Sitter precession of the angular momentum. Future space-based GW observatories focused on the milli- and decihertz band will be uniquely poised to observe these waveform modulations, as the GW frequency from stellar-mass BBHs varies slowly in this band while modulation effects accumulate. In this work, we apply the Fisher information matrix formalism to estimate how well space-borne GW detectors can measure properties of BBH+SMBH hierarchical triples using the GW from orbiting BBH. We extend previous work by considering the more realistic case of an eccentric orbit around the SMBH, and notably include the effects of orbital pericenter precession. We find that for detector concepts such as LISA, B-DECIGO, and TianGO, we can extract the SMBH mass and semimajor axis of the orbit with a fractional uncertainty below the 0.1% level over a wide range of triple system parameters. Furthermore, we find that the effects of pericenter precession and orbital eccentricity significantly improve our ability to measure this system. We also find that while LISA could measure these systems, the decihertz detector concepts B-DECIGO and TianGO would enable better sensitivity to the triple's parameters.
## I Introduction
Since the first detection of gravitational waves (GWs), GW astronomy by ground-based detectors has cemented itself as an advantageous method for studying binary systems of compact objects, the majority of which are binary black holes (BBHs) [1; 2; 3]. Within the population of observed BBHs, there are systems with progenitors whose masses exceed the predictions of stellar evolution [4; 5; 6; 7]. One possible explanation of these detections could be that the progenitors were themselves products of previous mergers [8; 9; 10; 11]. The deep potential wells created by supermassive black holes (SMBHs) and their host galactic nuclei could trap the products of stellar mass BBH mergers, making galactic nuclei ideal locations for generating many repeated compact object mergers [10; 11; 12]. Numerical simulations of BBH formation in galactic nuclei due to gas friction [13; 14] and dynamic capture through gravitational interactions [15] suggest that the cosmological merger rate of BBH near galactic nuclei could be of order \(\sim\) a few \(\text{Gpc}^{-3}\text{yr}^{-1}\). Studying the properties of these repeated merger systems and of the SMBHs which encourage their formation could open a new window on understanding the dynamics of galactic nuclei and the processes which drive galaxy evolution. The most recent analysis of the BBH population in GWTC-3 is consistent with contributions from both isolated and AGN formation channels [16], though more observations are needed.
In a hierarchical triple system consisting of a stellar-mass BBH orbiting an SMBH, as depicted in Fig. 1, the presence of the SMBH would modulate the BBH GW signal through many effects. For example, the velocity of the BBH in its orbit will produce a Doppler shift in the waveform [17; 18; 19]. Allowing the BBH to take an _eccentric_ orbit around the SMBH introduces relativistic effects such as pericenter precession as the outer orbital path approaches near the SMBH [20]. Furthermore, the presence of the SMBH will cause the orbital angular momentum of the inner binary \(\hat{L}_{i}\) to experience de Sitter precession about the orbital angular momentum of the outer binary \(\hat{L}_{o}\)[21]. This effect modulates the inclination angle of the BBH angular momentum relative to an observatory in the Solar System. The Lidov-Kozai and Lense-Thirring effects also play a role in the evolution of hierarchical triples [22; 23].
By measuring the effects of Doppler shifts, pericenter precession, and de Sitter precession on the stellar-mass BBH gravitational waveform, one can measure the properties of this triple system, including the SMBH mass, semimajor axis of the outer orbit, and various angles describing the system geometry [17; 18; 19; 20; 22; 23; 24]. These effects accumulate substantially over time scales roughly on the order of an orbital period, which for typical BBH+SMBH triple systems can range from months to years. But because the current ground-based detectors LIGO/Virgo/KAGRA are most sensitive between 10 Hz and a few kHz, frequencies which correspond to only the final seconds before merger for a stellar-mass BBH, current GW observatories are not optimal for extracting hierarchical triple system parameters through the influence of the SMBH on the waveform [25; 26].
However, the coming decades could see the construction of a number of proposed space-based detectors which would be sensitive to frequencies below \(\sim\)1-10 Hz.
Building low-frequency detectors in space is necessary due to technical challenges from seismic noise [27; 28] and the need to create arms which are large compared to the curvature of the Earth. The LISA [29], TianQin [30], and Taiji [31; 32] detectors will target the millihertz GW band, while detector concepts such as B-DECIGO [33; 34] and TianGO [35; 36] will focus on the decihertz band. Since the instantaneous orbital decay timescale due to GW emission during inspiral scales roughly with \(\omega_{orb}^{-8/3}\)[37], space-based low-frequency detectors could observe stellar mass BBH for much longer times than ground detectors, making them more favorable for measurements of SMBH-driven effects in the BBH waveform.
Measuring an SMBH with an orbiting binary's GW would be useful for studying the environment at the centers of galaxies. In a recent work by Yu and Chen, it is shown that these proposed low-frequency GW observatories could feasibly measure properties of interest to the few percent level over a wide range of possible BBH+SMBH systems [24]. Current observational methods for measuring properties of SMBHs and their local environments include tracking the orbital dynamics of nearby test masses, like stars, and reverberation mapping of the emission line fluxes from the accretion disk, if the SMBH is active [38]. Recent advances in observational technology and modeling active galactic nuclei have enabled constraints of the masses of their central SMBHs to roughly 10% precision [39; 40; 41; 42], though the results obtained by each method do not always agree [43]. Adding a GW-based technique to this toolkit could expand the set of observable SMBHs with well-constrained properties to those which may have few electromagnetic radiation sources nearby [24] or foster improvements in established electromagnetic techniques through comparisons of joint measurements. Indeed, there has been significant progress in understanding how space-based GW observatories may be able to measure properties of SMBHs and the objects orbiting them through a variety of triple system phenomena [44; 45; 23; 46].
The initial work of Yu and Chen assumes a circular Newtonian outer orbit in the BBH+SMBH triple system [24]; however, it is expected that formation channels for these systems, especially those which are dynamical in nature, should produce a sizeable population of triples with eccentric outer orbits [47]. In this work, we examine how adding a nonzero eccentricity to the outer orbit affects parameter measurement uncertainties. We demonstrate that a nonzero outer eccentricity can significantly improve these uncertainties compared to the circular case, primarily through the inclusion of outer orbit pericenter precession. In order to estimate parameter uncertainties, we rely on the Fisher information matrix, a method which has been frequently used in the past to gauge the measurability of compact binary parameters by ground-based GW observatories [48]. In short, we find that uncertainties in triple system parameters can consistently fall below the 0.1% level, and that these parameters are measured more precisely with larger \(e_{o}\) and by detectors targeting the decihertz band. We also find that the general trends in parameter measurement are influenced almost entirely by pericenter and de Sitter precession.
In Sec. II, we outline the mathematical description of the gravitational waveform emitted from a BBH in a hierarchical triple and detected by a space-borne observatory. In Sec. III, we outline the Fisher matrix calculation as applied to parameter estimation and explain some simplifications we make to the computation. In Sec. IV, we present the results of our Fisher matrix computations, and in Sec. V, we offer conclusions and possible directions for this work to proceed in the future. In this work, we use geometrized units \(G=c=1\).
## II Mathematical description of the SMBH+BBH triple system
### Geometry
We first describe the full geometry of the SMBH+BBH triple system with an eccentric outer orbit. Table 1 below outlines the set of relevant parameters used in calculating the waveform measured by a space-borne GW observatory. In Fig. 1, the barred coordinates demarcate a Solar System centered coordinate system, while the unbarred coordinates demarcate a coordinate system based on the orientation of the observatory.
In order to compute the antenna response, we need to be able to convert from the unbarred coordinates to the barred coordinates, which for a constellation-preserving
\begin{table}
\begin{tabular}{|c|c|} \hline \(\mathbf{\theta^{a}}\) & **Definition** \\ \hline \(\log\mathcal{M}_{z}\) & Detector Frame Chirp Mass: \(\mu^{3/5}(m_{1}+m_{2})^{2/5}\) \\ \hline \(q\) & Mass Ratio \(M_{2}/M_{1}\) \\ \hline \(\log D_{L}\) & Luminosity Distance \\ \hline \(t_{c}\) & Coalescence Time \\ \hline \(\phi_{c}\) & Coalescence Phase \\ \hline \(\overline{\theta}_{S},\overline{\phi}_{S}\) & Line of Sight of BBH+SMBH Triple \\ \hline \(\overline{\theta}_{J},\overline{\phi}_{J}\) & Orientation of Total Angular Momentum \(\mathbf{J}\) \\ \hline \(\lambda_{L}\) & Angle Between \(\mathbf{L}_{i}\) and \(\mathbf{L}_{o}\) \\ \hline \(\alpha_{0}\) & Initial Phase of \(\mathbf{L}_{i}\) Around \(\mathbf{L}_{o}\) \\ \hline \(\log M_{3}\) & SMBH Mass \\ \hline \(\log a_{o}\) & Outer Orbit Semimajor Axis \\ \hline \(\gamma_{o}\) & Initial Outer Orbit Argument of Pericenter (See Note 1) \\ \hline \(e_{o}\) & Outer Orbit Eccentricity \\ \hline \(\varphi_{0}\) & Initial BBH Azimuthal Coordinate \\ \hline \end{tabular}
\end{table}
Table 1: Relevant parameters in BBH+SMBH triple system for GW observed by detectors. Bars over angles indicate the Solar System coordinate frame.
observatory such as LISA, is as follows [49]:
\[\hat{x}=-\frac{1}{4}\sin(2\phi_{d})\hat{\overline{x}}+\frac{3+\cos(2\phi_{d})}{4}\hat{\overline{y}}+\frac{\sqrt{3}}{2}\sin(\phi_{d})\hat{\overline{z}} \tag{1}\] \[\hat{y}=\frac{-3+\cos(2\phi_{d})}{4}\hat{\overline{x}}+\frac{1}{4}\sin(2\phi_{d})\hat{\overline{y}}-\frac{\sqrt{3}}{2}\cos(\phi_{d})\hat{\overline{z}} \tag{2}\] \[\hat{z}=-\frac{\sqrt{3}}{2}\cos(\phi_{d})\hat{\overline{x}}-\frac{\sqrt{3}}{2}\sin(\phi_{d})\hat{\overline{y}}+\frac{1}{2}\hat{\overline{z}}. \tag{3}\]
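As a quick numerical sanity check of Eqs. (1)-(3), the sketch below packs the conversion into a function and verifies that the unbarred basis stays orthonormal as the constellation phase \(\phi_{d}\) advances. This is an illustrative snippet, not part of any released pipeline; the function name is ours.

```python
import numpy as np

def detector_basis(phi_d):
    """Unbarred detector basis vectors expressed in the barred Solar
    System frame, following Eqs. (1)-(3)."""
    c, s = np.cos(phi_d), np.sin(phi_d)
    c2, s2 = np.cos(2 * phi_d), np.sin(2 * phi_d)
    x = np.array([-s2 / 4, (3 + c2) / 4, np.sqrt(3) / 2 * s])
    y = np.array([(-3 + c2) / 4, s2 / 4, -np.sqrt(3) / 2 * c])
    z = np.array([-np.sqrt(3) / 2 * c, -np.sqrt(3) / 2 * s, 1 / 2])
    return x, y, z

# the basis should remain orthonormal for any orbital phase
x, y, z = detector_basis(0.7)
assert np.allclose([x @ x, y @ y, z @ z], 1.0)
assert np.allclose([x @ y, y @ z, x @ z], 0.0)
```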
We note that even though B-DECIGO will possess a different detector geometry than LISA during its orbit, we use the same configuration to simplify the analysis. The sky location of the hierarchical triple is \((\overline{\theta}_{S},\overline{\phi}_{S})\), which points along the vector \(\hat{N}\), and has a luminosity distance of \(D_{L}\). The triple itself consists of a BBH with black holes of masses \(M_{1}\) and \(M_{2}\), or equivalently, a chirp mass of \(\mathcal{M}=\frac{(M_{1}M_{2})^{3/5}}{(M_{1}+M_{2})^{1/5}}\) and mass ratio of \(q=M_{2}/M_{1}\), and an SMBH of mass \(M_{3}\). The shape of the BBH's orbit around the SMBH can be determined by the semimajor axis \(a_{o}\), the eccentricity \(e_{o}\), the angle \(\gamma_{o}\), analogous to the initial Keplerian argument of pericenter (see Footnote 1), and the initial BBH azimuthal coordinate \(\varphi_{0}\).
Footnote 1: Of course, the outer orbit is not strictly Keplerian. A rigorous definition of the instantaneous argument of pericenter is subtle, though the picture of an elliptical orbital path with a pericenter that rotates in space at the 1PN-accurate angular velocity of \(\frac{3M_{3}}{a_{o}(1-e_{o}^{2})}\Omega_{o}\) is appropriate as a rough approximation. Within the mathematical framework of [50], \(\gamma_{o}\) is implemented as a simple arbitrary rotation of the orbital plane, as in Eq. (12).
The unit vector of the angular momentum of the two lighter black holes in the binary system is \(\hat{L}_{i}\), and the unit vector of the angular momentum of the binary's orbit about the SMBH is \(\hat{L}_{o}\). The opening angle \(\lambda_{L}\) is defined by
\[\cos\lambda_{L}=\hat{L}_{o}\cdot\hat{L}_{i}. \tag{4}\]
For \(|\vec{L}_{o}|>>|\vec{L}_{i}|\) and neglecting long time scale orbital effects as well as the spin of the SMBH (see Sec. II.4), the opening angle stays constant in time, but the orientation of \(\hat{L}_{i}\) traces a cone around \(\hat{L}_{o}\) due to de Sitter precession, with
\[\frac{d\hat{L}_{i}}{dt}=\Omega_{dS}\hat{L}_{o}\times\hat{L}_{i}. \tag{5}\]
Based on Eq. (9.200) of [51], we use the instantaneous de Sitter precession frequency (see Footnote 2)
Footnote 2: Eq. (1) of the previous work [24] gave the orbit-averaged de Sitter precession rate, which agrees with Eq. (6).
\[\Omega_{dS}(t)=\frac{3}{2}\frac{M_{3}}{r(t)}\dot{\varphi}(t), \tag{6}\]
where \(r\) is the distance from the SMBH to the center of the BBH and \(\varphi(t)\) is the azimuthal coordinate of the BBH in its orbit (as shown in the inset of Fig. 1). The orbit-averaged precession rate is
\[\langle\Omega_{dS}\rangle=\frac{3}{2}\frac{M_{3}}{a_{o}(1-e_{o}^{2})}\Omega_{o}, \tag{7}\]
where \(\Omega_{o}\equiv\sqrt{M_{3}/a_{o}^{3}}\) is the Newtonian orbital frequency. The phase of \(\hat{L}_{i}\) in this cone, as shown in the inset of Fig. 1, can be found by integrating the time-dependent de Sitter precession rate:
\[\alpha(t)=\alpha_{0}+\int_{t}^{t_{c}}\Omega_{dS}(t^{\prime})dt^{\prime}, \tag{8}\]
where \(\alpha_{0}\) is the phase at the time of the binary coalescence \(t_{c}\).
It is also useful to define the inclination angle \(\iota_{J}\) of the outer orbit angular momentum, given by
\[\cos\iota_{J}=\hat{N}\cdot\hat{L}_{o}. \tag{9}\]
Figure 1: Top: Geometry of the SMBH+BBH triple system. Bottom, inset: View of the triple system normal to the plane of the outer orbit. The outer orbit angular momentum \(\hat{L}_{o}\) points out of the page. See the discussion below and Table 1 for definition of all parameters. Figure dimensions are not an indication of true scale.
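For intuition, the short sketch below integrates Eq. (5) with a constant, orbit-averaged rate from Eq. (7); using the instantaneous rate of Eq. (6) would only require swapping in \(\Omega_{dS}(t)\). The explicit Euler step and the re-normalization are simplifying assumptions for illustration, not the integrator used in this work.

```python
import numpy as np

def precess_Li(Li0, Lo_hat, Omega_dS, times):
    """Evolve the inner angular momentum direction under
    dL_i/dt = Omega_dS * (L_o x L_i), Eq. (5)."""
    Li = np.asarray(Li0, dtype=float)
    history = [Li.copy()]
    for dt in np.diff(times):
        Li = Li + dt * Omega_dS * np.cross(Lo_hat, Li)
        Li /= np.linalg.norm(Li)   # opening angle lambda_L is conserved
        history.append(Li.copy())
    return np.array(history)

# example: L_i opens 45 deg from L_o = z-hat and traces a cone
Lo = np.array([0.0, 0.0, 1.0])
Li0 = np.array([np.sin(np.pi / 4), 0.0, np.cos(np.pi / 4)])
traj = precess_Li(Li0, Lo, 2 * np.pi, np.linspace(0.0, 1.0, 10000))
print(traj[-1])   # close to Li0 after one full precession cycle
```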
### BBH Orbit in Schwarzschild Spacetime
Although no analytic description of an eccentric orbit exists in Schwarzschild spacetime, there are well-established methods for computing Schwarzschild geodesics which can be applied to numerically calculate the BBH orbital trajectory [52; 53; 54; 50; 55]. In particular, we follow the procedure of [50]. Defining \(p=\frac{a_{o}}{M_{3}}(1-e_{o}^{2})\) for semimajor axis \(a_{o}\) and eccentricity \(e_{o}\), we find a minimum and maximum orbital radius
\[r_{\rm min}=\frac{pM_{3}}{1+e_{o}},\quad r_{\rm max}=\frac{pM_{3}}{1-e_{o}} \tag{10}\]
Stable orbits only exist for \(p>6+2e_{o}\)[50], and we will exclude unstable systems from this analysis.
A relativistic anomaly \(\chi\), which ranges from \(0\) to \(2\pi\), is defined so that
\[r(\chi)=\frac{pM_{3}}{1+e_{o}\cos\chi}, \tag{11}\]
Furthermore, the azimuthal coordinate is given by
\[\varphi(\chi)=2\Big{(}\frac{p}{p-6+2e_{o}}\Big{)}^{1/2}\Big{[}F\Big{(}\frac{ \chi}{2}+\frac{\pi}{2},k^{2}\Big{)}-F\Big{(}\frac{\pi}{2},k^{2}\Big{)}\Big{]}+ \gamma_{o}, \tag{12}\]
where \(k^{2}=\frac{4e_{o}}{p-6+2e_{o}}\), \(F\) is the incomplete elliptic integral of the first kind, and \(\gamma_{o}\) denotes the initial argument of pericenter for the outer orbit (see Note 1).
The relationship between time and the relativistic anomaly is given by
\[t(\chi)=p^{2}M_{3}(p-2-2e_{o})^{1/2}(p-2+2e_{o})^{1/2}\\ \times\int_{0}^{\chi}d\chi^{\prime}\Big{\{}(p-2-2e_{o}\cos\chi^{ \prime})^{-1}(1+e_{o}\cos\chi^{\prime})^{-2}\\ \times(p-6-2e_{o}\cos\chi^{\prime})^{-1/2}\Big{\}}. \tag{13}\]
In the end, the geodesic has a doubly periodic structure: the radius \(r(\chi)\) has a period of \(P_{r}=t(2\pi)\). During a time of \(P_{r}\), however, the azimuthal variable travels further than \(2\pi\), which is the relativistic pericenter precession. It is useful to define the shift in angle over a radial period, which is equal to
\[\Delta\varphi=4\Big{(}\frac{p}{p-6+2e_{o}}\Big{)}^{1/2}F(\pi/2,k^{2}). \tag{14}\]
We note that this matches the 1PN GR result [56] for the amount of precession during a radial period in the limit \(p\gg 1\)
\[\Delta\varphi\approx 2\pi(1+3/p)=6\pi/p+2\pi\,. \tag{15}\]
Defining the azimuthal frequency \(\Omega_{\varphi}\equiv\Delta\varphi/P_{r}\), it is shown that \(\varphi(t)-\Omega_{\varphi}t\) is \(P_{r}\)-periodic [50]. We note that \(\varphi(t)\) itself is _not_ periodic - since the orbit precesses, it takes \(<P_{r}\) time for \(\varphi\) to move through \(2\pi\) radians. Even though the precession angle over a full orbit remains constant, the time it takes to move through the precession angle will depend on the BBH distance from the SMBH (conserving angular momentum), so for an eccentric orbit, the time to complete a full \(2\pi\) in \(\varphi\) will depend on the starting value of \(\varphi\) itself.
To find \(r(t)\) and \(\varphi(t)\) numerically over many full orbits, we calculate the orbit over \(\chi\in[0,2\pi]\) and utilize the periodicity of \(r(\chi)\) and \(\varphi(t)-\Omega_{\varphi}t\). We furthermore choose some \(\chi_{0}\equiv\chi(t=0)\) so that \(\varphi(\chi_{0})=\varphi_{0}\), where \(\varphi_{0}\) is the initial azimuthal coordinate of the BBH in the plane of the outer orbit (see the bottom of Fig. 1). Furthermore, \(\dot{r}(t)\) and \(\dot{\varphi}(t)\) can be calculated by application of the chain rule to the expressions relating \(r\), \(\varphi\), and \(t\) to \(\chi\) above.
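The procedure of Eqs. (10)-(14) is straightforward to implement. The sketch below, a minimal illustration in geometrized units \(G=c=1\) (it is not the code used for this paper), evaluates \(r(\chi)\), \(\varphi(\chi)\), and \(t(\chi)\) over one radial period and checks the pericenter advance against the 1PN estimate of Eq. (15).

```python
import numpy as np
from scipy.special import ellipkinc     # incomplete elliptic integral F(phi, m = k^2)
from scipy.integrate import quad

def schwarzschild_orbit(p, e_o, M3, gamma_o=0.0, n=400):
    """r(chi), phi(chi), t(chi) over one radial period, Eqs. (11)-(13)."""
    assert p > 6 + 2 * e_o, "stable orbits require p > 6 + 2 e_o"
    chi = np.linspace(0.0, 2.0 * np.pi, n)
    r = p * M3 / (1.0 + e_o * np.cos(chi))                       # Eq. (11)
    k2 = 4.0 * e_o / (p - 6.0 + 2.0 * e_o)
    pref = 2.0 * np.sqrt(p / (p - 6.0 + 2.0 * e_o))
    phi = pref * (ellipkinc(chi / 2 + np.pi / 2, k2)
                  - ellipkinc(np.pi / 2, k2)) + gamma_o          # Eq. (12)
    c0 = p**2 * M3 * np.sqrt((p - 2 - 2 * e_o) * (p - 2 + 2 * e_o))
    def dt_dchi(x):                                              # integrand of Eq. (13)
        ec = e_o * np.cos(x)
        return c0 / ((p - 2 - 2 * ec) * (1 + ec) ** 2 * np.sqrt(p - 6 - 2 * ec))
    t = np.array([quad(dt_dchi, 0.0, xi)[0] for xi in chi])
    return chi, r, phi, t

# pericenter advance per radial period, Eq. (14), vs the 1PN estimate 6*pi/p
p, e_o = 100.0 * (1 - 0.3**2), 0.3
k2 = 4 * e_o / (p - 6 + 2 * e_o)
dphi = 4 * np.sqrt(p / (p - 6 + 2 * e_o)) * ellipkinc(np.pi / 2, k2)
print(dphi - 2 * np.pi, 6 * np.pi / p)   # ~0.218 vs ~0.207 rad
```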
### Waveform
We can now proceed to calculate the strain detected by the space-based observatory, using the formalism of [21]. The overall measured signal is
\[\tilde{h}(f)=\tilde{h}_{C}\sqrt{(A_{+}F_{+})^{2}+(A_{\times}F_{ \times})^{2}}\\ \times\exp\{-i[\Phi_{P}+2\Phi_{T}+\Phi_{D}]\}, \tag{16}\]
where \(\tilde{h}_{C}\) is the carrier waveform of the BBH, \(A_{+,\times}\) and \(F_{+,\times}\) are the polarization amplitude and antenna response, respectively, and \(\Phi_{P}\), \(\Phi_{D}\), and \(\Phi_{T}\) are the polarization, Thomas, and Doppler phases. The carrier waveform in the frequency domain to leading post-Newtonian (PN) order is [57]
\[\tilde{h}_{C}(f)= \Big{(}\frac{5}{96}\Big{)}^{1/2}\frac{\mathcal{M}^{5/6}}{\pi^{2/3 }D_{L}}f^{-7/6}\] \[\times\exp\{i[2\pi ft_{c}-\phi_{c}-\frac{\pi}{4}+\frac{3}{4}(8\pi \mathcal{M}f)^{-5/3}]\}, \tag{17}\]
where \(t_{c}\) and \(\phi_{c}\) are the time and phase at coalescence. To the leading PN order, the relationship between GW frequency and time is given by
\[t(f)\approx t_{c}-\frac{5}{256\pi^{8/3}}\frac{1}{\mathcal{M}^{5/3}f^{8/3}}. \tag{18}\]
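As a numeric check of Eq. (18), the snippet below recovers the GW frequency emitted by a \(50+50\,M_{\odot}\) BBH five years before merger. It is illustrative only; the solar mass in seconds, \(GM_{\odot}/c^{3}\approx 4.925\times 10^{-6}\,\mathrm{s}\), is the only physical input.

```python
import numpy as np

Msun = 4.925e-6                                  # G * M_sun / c^3 in seconds
Mc = (50.0 * 50.0) ** 0.6 / 100.0 ** 0.2 * Msun  # chirp mass of a 50+50 Msun BBH
tau = 5.0 * 3.156e7                              # five years in seconds
# invert Eq. (18) for the frequency at time tau before coalescence
f = (5.0 / (256.0 * np.pi ** (8 / 3) * Mc ** (5 / 3) * tau)) ** (3 / 8)
print(f)   # ~0.012 Hz, matching the f_min ~ 12 mHz quoted in Sec. IV
```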
The two polarizations of the strain, \(h_{+}\) and \(h_{\times}\), are modified by the amplitude factors
\[A_{+}=1+(\hat{L}_{i}\cdot\hat{N})^{2} \tag{19}\] \[A_{\times}=-2\hat{L}_{i}\cdot\hat{N}, \tag{20}\]
and furthermore, the antenna responses for a 90-degree detector are
\[F_{+}(\theta_{S},\phi_{S},\psi_{S})=\frac{1}{2}(1+\cos^{2}\theta _{S})\cos 2\phi_{S}\cos 2\psi_{S}\\ -\cos\theta_{S}\sin 2\phi_{S}\sin 2\psi_{S}\,, \tag{21}\]
\[F_{\times}(\theta_{S},\phi_{S},\psi_{S})=\frac{1}{2}(1+\cos^{2} \theta_{S})\cos 2\phi_{S}\sin 2\psi_{S}\\ +\cos\theta_{S}\sin 2\phi_{S}\cos 2\psi_{S}, \tag{22}\]
where
\[\tan\psi_{S}(t)=\frac{\hat{L}_{i}\cdot\hat{z}-(\hat{L}_{i}\cdot\hat{N})(\hat{z} \cdot\hat{N})}{\hat{N}\cdot(\hat{L}_{i}\times\hat{z})}. \tag{23}\]
Note the use of the detector-frame coordinates in Eqs. (21) and (22). For a triangular detector such as LISA or B-DECIGO, the antenna pattern acquires a factor of \(\sqrt{3}/2\) and there are two effective detectors [58].
Let us now specify the phases in Eq. (16). Since the phases are slowly varying functions of time, the stationary phase approximation is used to convert them into frequency-dependent components via Eq. (18) - i.e., for some function \(g(t)\) appearing in the time-domain waveform \(h(t)\), \(g(f)\approx g(t(f))\)[59]. The polarization phase is given by
\[\tan\Phi_{P}(t)=-\frac{A_{\times}(t)F_{\times}(t)}{A_{+}(t)F_{+}(t)}. \tag{24}\]
The Thomas phase arises from the evolution of the principal +-polarization axis [21], and thus the inner orbital phase of the two stellar mass BH in the BBH, as the angular momentum \(\hat{L}_{i}\) precesses. It is given by
\[\Phi_{T}(t)=-\int_{t}^{t_{e}}dt\Big{[}\frac{\hat{L}_{i}\cdot\hat{N}}{1-(\hat{L }_{i}\cdot\hat{N})^{2}}\Big{]}(\hat{L}_{i}\times\hat{N})\cdot\frac{d\hat{L}_{ i}}{dt}\,. \tag{25}\]
The final phase term is the Doppler phase shift, the phase shift induced by the changing distance between the detector and the GW source. There are two contributions to this phase. The first is the contribution from the detector, given at a particular time \(t\) by
\[\Phi_{D,\text{det}}=2\pi f\times(1\text{ AU})\sin\theta_{S}\cos(\phi_{det}-\phi_{S}). \tag{26}\]
The other is from the source, which is modulated by the changing orbital radius as well as the inclination of the outer orbit and the position of the BBH in that orbit:
\[\Phi_{D,\text{src}}=2\pi f\times r\sin{\iota_{J}}\sin\varphi. \tag{27}\]
Gravitational lensing from the SMBH and its host galactic nucleus is neglected in this waveform, though its effects on parameter estimation have been studied in [60; 44].
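The source Doppler term of Eq. (27) is enormous compared to \(2\pi\), which is why the outer orbit imprints so strongly on the waveform. A quick order-of-magnitude evaluation makes the point; the fiducial numbers below are chosen here for illustration.

```python
import numpy as np

Msun = 4.925e-6              # G * M_sun / c^3 in seconds
M3 = 1e8 * Msun              # fiducial SMBH mass
a_o = 100.0 * M3             # fiducial outer semimajor axis, in light-seconds
f = 0.05                     # a representative GW frequency in Hz
# peak scale of Eq. (27), taking sin(iota_J) * sin(varphi) ~ 1
print(2 * np.pi * f * a_o)   # ~1.5e4 rad of Doppler phase modulation
```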
### Neglected Orbital Dynamics
A three-body system is dynamically rich and exhibits a variety of well-studied phenomena. We now discuss several additional well-known behaviors and why we neglect them. A useful benchmark for comparison is the characteristic de Sitter precession frequency, which scales as
\[\Omega_{\text{dS}}=\frac{1}{1.1\text{ yr}}\bigg{(}\frac{100}{a_{o}/M_{3}} \bigg{)}^{5/2}\bigg{(}\frac{10^{8}M_{\odot}}{M_{3}}\bigg{)}\bigg{(}\frac{1-0. 3^{2}}{1-e^{2}}\bigg{)}\,. \tag{28}\]
We consider the implications of non-zero BH spins on the orbital dynamics. The precession of \(\hat{L}_{o}\) around the spin of the SMBH \(\hat{S}_{3}\) with \(S_{3}=\chi_{3}M_{3}^{2}\) has characteristic frequency [23]
\[\Omega_{L_{o},S_{3}}=\frac{S_{3}(4+3(M_{1}+M_{2})/M_{3})}{2a_{o}^{3}(1-e_{o}^ {2})^{3/2}}. \tag{29}\]
If we consider the case \(M_{3}\gg M_{1}+M_{2}\), this corresponds to
\[\frac{1}{t_{L_{o},S_{3}}}=\frac{1}{9.7\text{ yr}}\bigg{(}\frac{\chi_{3}}{0.7} \bigg{)}\bigg{(}\frac{100}{a_{o}/M_{3}}\bigg{)}^{3}\bigg{(}\frac{1-0.3^{2}}{1 -e^{2}}\bigg{)}^{3/2}. \tag{30}\]
Even for rapidly spinning SMBHs, this effect is about one order of magnitude slower than de Sitter precession, so for now, we neglect it. It is worth noting that each successive effect included in the waveform modulation generally increases the amount of Fisher information. As such, we expect that future inclusion of this effect will lead to further improved parameter estimation uncertainties.
Lense-Thirring precession of \(\hat{L}_{i}\) around \(\hat{S}_{3}\) also contributes to the orbital dynamics, with
\[\Omega_{L_{i},S_{3}}=\frac{S_{3}}{2a_{0}^{3}(1-e_{o}^{2})^{3/2}}. \tag{31}\]
This precession frequency is one-quarter of \(\Omega_{L_{o},S_{3}}\), and thus, since we treat \(\Omega_{L_{o},S_{3}}\) as small in this work, we do the same for \(\Omega_{L_{i},S_{3}}\).
As in [24], we also neglect the precession of \(\hat{L}_{i}\) around the spins of the two stellar mass BH. The opening angle of this precession will be of order \(1^{\circ}\), much less than a typical value of \(\lambda_{L}\)[61]. Also, the effects of this spin-induced precession should be easily distinguishable from the Doppler shift or de Sitter precession because the spin-induced precession will occur over just days, rather than years, for GW frequencies in the bands of space-based observatories.
We also consider Lidov-Kozai oscillations, the Newtonian tidal effect which exchanges inner orbit eccentricity with inclination between \(\hat{L}_{o}\) and \(\hat{L}_{i}\)[62]. These oscillations have a characteristic frequency of [23]
\[\Omega_{\text{LK}}=\Omega_{i}\frac{M_{3}}{M_{1}+M_{2}}\Big{(}\frac{a_{i}}{a_{o }\sqrt{1-e_{o}^{2}}}\Big{)}^{3}, \tag{32}\]
where \(\Omega_{i}=\sqrt{(M_{1}+M_{2})/a_{i}^{3}}\). The LK timescale is
\[\frac{1}{t_{\text{LK}}}=\frac{1}{67\text{ yr}}\bigg{(}\frac{10^{8} M_{\odot}}{M_{3}}\bigg{)}^{2}\bigg{(}\frac{100}{a_{o}/M_{3}}\bigg{)}^{3}\\ \times\bigg{(}\frac{1-0.3^{2}}{1-e^{2}}\bigg{)}^{3/2}\bigg{(} \frac{10^{-2}\text{ Hz}}{f}\bigg{)}. \tag{33}\]
In our frequency band of interest, this effect occurs over much longer time scales than the de Sitter precession, and since both de Sitter precession and Lidov-Kozai oscillations modulate \(L_{i}\), we neglect the slower of the two processes.
We furthermore assume that the eccentricity of the inner binary \(e_{i}\) is zero. As explained in [24], the inner eccentricity does not affect any component of the measured strain outside of the carrier waveform \(\tilde{h}_{C}(f)\), and thus should influence parameter estimation uncertainties primarily through the SNR. Furthermore, the eccentric Kozai-Lidov mechanism can drive periodic modulation of \(e_{i}\) between moderate and very high values. The GW signal frequency from a stellar-mass BBH can be pushed into the sensitivity range of space-based observatories when the inner eccentricity is high, so the eccentric Kozai-Lidov mechanism can produce periodic high SNR bursts in these detectors, driving up the total SNR measured for that particular binary [63; 64]. However, the time scale of this periodic burst behavior scales roughly as \(\Omega_{\rm o}^{-2}f(1-e_{\rm o}^{2})^{3/2}\) [65]. These effects therefore occur much more slowly than de Sitter and pericenter precession, and thus are left for implementation into future analyses.
A higher \(e_{i}\) also leads to faster merger times; however, high eccentricity BBHs can still remain in the millihertz and decihertz frequency bands throughout the entire observation period with just a larger initial separation between the two stellar mass BHs. So, it is expected that even for \(e_{i}\) approaching 1, such BBHs will offer long enough integration times to generate a moderate SNR, and therefore the inner eccentricity should not significantly alter the results of the simplified Fisher matrix analysis (see [24] for a more detailed discussion).
## III Parameter estimation with the Fisher information matrix
In this analysis, we implement the Fisher information matrix method (as done in [24]) as a simple estimator for how well properties of a BBH+SMBH triple system can be measured. We make a number of well-supported assumptions to reduce the complexity of the numerical methods used to estimate parameter uncertainties.
### Parameter Uncertainties from the Fisher Information Matrix
We first outline how the Fisher information matrix (from now on, Fisher matrix) is used to estimate parameter measurement uncertainties. The elements of the Fisher matrix are defined as
\[\Gamma_{ab}\equiv\left(\frac{\partial\tilde{h}(f)}{\partial\theta_{a}}\Big{|} \frac{\partial\tilde{h}(f)}{\partial\theta_{b}}\right)\,, \tag{34}\]
where
\[\left(\tilde{g}\big{|}\tilde{h}\right)=4\,\text{Re}\int_{0}^{\infty}\frac{\tilde{g}^{*}(f)\tilde{h}(f)}{S_{n}(f)}df, \tag{35}\]
\(\tilde{h}\) is the frequency-domain waveform, \(S_{n}(f)\) is the PSD of the detector noise, and \(\theta_{a}\) are the various parameters of the system. In practice, we limit the frequency bounds of integration to \([f_{min},f_{max}]\), where \(f_{max}\) is at the upper edge of the detector sensitivity range and \(t(f_{max})-t(f_{min})=5\) years (via Eq. (18)) - see Sec. IV.
Footnote 3: We make the approximation that the PSD \(S_{n}(f)\) varies slowly enough so that \(S_{n}(f)\) for the GW frequency in the BBH frame and the Doppler-shifted GW frequency in the observer frame are roughly equal. See App. A.
We note that we use a finite difference method to compute \(\partial\tilde{h}/\partial\theta_{a}\). To choose a finite parameter difference \(\Delta\theta_{a}\) from which to estimate \(\partial\tilde{h}/\partial\theta_{a}\), we minimize the quantity \(\epsilon\), analogous to waveform mismatch,
\[\epsilon=1-\frac{(\partial_{[\Delta\theta_{a}]}\tilde{h}\,|\,\partial_{[4\Delta\theta_{a}]}\tilde{h})}{\sqrt{(\partial_{[\Delta\theta_{a}]}\tilde{h}\,|\,\partial_{[\Delta\theta_{a}]}\tilde{h})(\partial_{[4\Delta\theta_{a}]}\tilde{h}\,|\,\partial_{[4\Delta\theta_{a}]}\tilde{h})}}, \tag{36}\]
where
\[\partial_{[\Delta\theta]}\tilde{h}=\frac{\tilde{h}(\theta+\Delta\theta)- \tilde{h}(\theta-\Delta\theta)}{2\Delta\theta}. \tag{37}\]
Empirically choosing \(\Delta\theta_{a}\) to make \(\epsilon\) small gives us the best accuracy in computing the numerical derivative, as \(\epsilon\) begins to increase once \(\Delta\theta_{a}\) becomes so small that the changes in \(\tilde{h}\) are smaller than computer precision. The choice of \(4\Delta\theta_{a}\) to compare to \(\Delta\theta_{a}\) is arbitrary.
The Fisher information matrix is related to the covariance matrix roughly by
\[\Sigma_{ab}=[\Gamma^{-1}]_{ab}+\mathcal{O}(\rho^{-4}), \tag{38}\]
where \(\rho\) is the signal-to-noise ratio (SNR). So, in the limit of large SNR, the covariance between two parameters \(\Delta\theta_{i}\Delta\theta_{j}\) is approximately equal to the corresponding element of the inverse of the Fisher information matrix. As such, the parameter estimation uncertainty is given by \(\Delta\theta_{i}=(\Sigma_{ii})^{0.5}\). If a network of GW detectors were to observe the same system, the Fisher information matrix would scale as the sum of the matrix elements for each detector, or
\[(\Gamma_{ab})^{\text{network}}=\sum_{\text{det}}\Gamma_{ab}^{\text{det}}. \tag{39}\]
This also applies to a triangular observatory, wherein three arms compose two interferometric detectors.
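The whole pipeline of Eqs. (34)-(38) fits in a few lines. The sketch below is a schematic, not the analysis code of this paper: `htilde(freqs, theta)` and the PSD callable `Sn` are placeholders the user must supply, and the frequency integral is a simple trapezoid rule on a discrete grid.

```python
import numpy as np

def inner_product(g, h, freqs, Sn):
    """Noise-weighted inner product of Eq. (35) on a frequency grid."""
    return 4.0 * np.real(np.trapz(np.conj(g) * h / Sn(freqs), freqs))

def fisher_matrix(htilde, theta0, steps, freqs, Sn):
    """Fisher matrix of Eq. (34) from central differences, Eq. (37)."""
    derivs = []
    for a, d in enumerate(steps):
        tp, tm = np.array(theta0, float), np.array(theta0, float)
        tp[a] += d
        tm[a] -= d
        derivs.append((htilde(freqs, tp) - htilde(freqs, tm)) / (2.0 * d))
    n = len(theta0)
    G = np.empty((n, n))
    for a in range(n):
        for b in range(a, n):
            G[a, b] = G[b, a] = inner_product(derivs[a], derivs[b], freqs, Sn)
    return G

# high-SNR limit of Eq. (38): Sigma = np.linalg.inv(G), and the 1-sigma
# uncertainty on parameter i is np.sqrt(Sigma[i, i]); a detector network
# simply sums the per-detector Fisher matrices, as in Eq. (39).
```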
### Reduced Fisher Matrix Dimensions
We can reduce the dimensions of the Fisher matrix by removing certain physical parameters from the analysis. Doing so reduces the total computation time as well as the condition number, leading to improved numerical accuracy in the Fisher matrix inversion [48]. From the parameters listed in Table 1, our Fisher matrices include the following 12 parameters:
\[\theta_{a}=(\log D_{L},\overline{\theta}_{S},\overline{\phi}_{S}, \overline{\theta}_{J},\overline{\phi}_{J},\lambda_{L},\alpha_{0},\\ \log M_{3},\log\Omega_{o},\gamma_{o},e_{o},\varphi_{0}). \tag{40}\]
We can remove parameters which we expect will have strong priors obtained from other GW measurements, or which contribute only weakly to the gravitational waveform. For example, we assume that space-based detectors like LISA or TianGO will act in conjunction with ground-based observatories, which are far more sensitive to the chirp mass \(\mathcal{M}\), the mass ratio \(q\), and the time and phase of coalescence \(t_{c}\) and \(\phi_{c}\)[36], and thus treat these four parameters as perfectly known in our analysis. Removing the chirp mass from the Fisher matrix also improves the numerical stability of our analysis. Furthermore, we neglect the spins of the three black holes because the precessional effects they induce accumulate much more slowly than the outer orbital motion and de Sitter precession, as described in Sec. II.4.
## IV Results and Discussion
We examine a BBH+SMBH triple system with fixed parameters \(M_{1}=M_{2}=50M_{\odot}\), \(t_{c}=0\), \(\phi_{c}=0\), \(D_{L}=1\)Gpc, \((\overline{\theta}_{S},\overline{\phi}_{S})=(33^{\circ},147^{\circ})\), \((\overline{\theta}_{J},\overline{\phi}_{J})=(75^{\circ},150^{\circ})\), and \(\lambda_{L}=45^{\circ}\). For B-DECIGO, TianGO, and LISA, we compute the Fisher matrix where the integration is taken over a frequency window corresponding to an observation time of five years and the highest frequency is \(f_{\rm max}=12\) Hz - this roughly corresponds to a lowest frequency of \(f_{\rm min}\sim 12\) mHz. In Fig. 2, we plot an example frequency-domain waveform along with the B-DECIGO, TianGO, and LISA sensitivity curves used in computing Fisher matrix elements.
In Fig. 3, we plot the fractional uncertainty in the SMBH mass \(M_{3}\), measured by B-DECIGO, as we vary \(M_{3}\) and \(a_{o}\). The Fisher matrix breaks down if \(e_{o}\) is identically zero, so in order to facilitate comparisons to the circular orbits used in [24], we use \(e_{o}=0.001\). At each point, we sample the covariance found with the Fisher matrix over combinations of the three geometrical phases - that is, 6 choices of \(\gamma_{o}\), \(\varphi_{o}\) and \(\alpha_{0}\), or 216 sets of \((\gamma_{o},\alpha_{0},\varphi_{o})\) - and find the median.
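The angle-marginalization just described is a simple median over a \(6\times 6\times 6\) phase grid. Schematically, with `fisher_uncertainty` as a stand-in for the full Fisher pipeline at fixed \((\gamma_{o},\alpha_{0},\varphi_{0})\) (the dummy return value is purely illustrative):

```python
import numpy as np
from itertools import product

def fisher_uncertainty(gamma_o, alpha_0, varphi_0):
    """Placeholder: the real pipeline builds the Fisher matrix for one
    choice of initial phases and returns sqrt(Sigma_ii)."""
    return 1.0 + 0.1 * np.cos(gamma_o + alpha_0 + varphi_0)

phases = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
sigmas = [fisher_uncertainty(g, a, v) for g, a, v in product(phases, repeat=3)]
print(len(sigmas), np.median(sigmas))   # 216 combinations, as in the text
```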
The purple regions denote where the outer binary merges in less than the proposed observation length of five years. We expect systems in this region to be exceedingly rare, as there is only a short window for such systems to form in order to be detected by B-DECIGO. We also shade out the region where the outer orbital period \(P_{\rm outer}\) exceeds twice the observation duration. In this region, the most dominant source of waveform modulation - namely, the Doppler phase shift - is difficult to measure because the BBH only passes through a small range of angles over the observation period. Furthermore, when the Doppler phase shift varies slowly, remaining roughly constant over the observation run, it becomes degenerate with \(t_{c}\), which itself can be changed by a simple redefinition of when \(t=0\). So, in this shaded region, our assumption that \(t_{c}\) can be safely removed from the list of parameters in the Fisher matrix does not hold well. Indeed, we encounter problems with numerical instability when computing the Fisher matrix in this region of the contour plots.
Figure 4 gives the same results, but using the LISA detector response and noise curve instead of that of B-DECIGO. The contour plots using the TianGO observatory have a similar structure to those using B-DECIGO, as the two detectors have similar sensitivity curves. Across the majority of the parameter space studied, the two sets of contours differ only in magnitude and not in shape, so for the sake of brevity, they are omitted here.
We note that the fractional uncertainty in the outer orbit semimajor axis \(\Delta a_{o}/a_{o}\) follows a similar contour structure to that of \(\Delta M_{3}/M_{3}\). For the outer orbit,
\[3\frac{a_{o}^{3}}{M_{3}}\frac{\Delta a_{o}}{a_{o}}\approx\frac{1}{\Omega_{o}^{ 2}}\frac{\Delta M_{3}}{M_{3}}-2\frac{1}{\Omega_{o}^{2}}\frac{\Delta\Omega_{o }}{\Omega_{o}}. \tag{41}\]
Our calculations determined that across the \((M_{3},a_{o}/M_{3})\) parameter space, \(\Delta\Omega_{o}/\Omega_{o}\) is much smaller in magnitude than \(\Delta M_{3}/M_{3}\), so
\[\frac{\Delta a_{o}}{a_{o}}\approx\frac{1}{3}\frac{\Delta M_{3}}{M_{3}}. \tag{42}\]
Figure 2: An example waveform \(\tilde{h}(f)\) with \(M_{3}=10^{8}M_{\odot}\), \(a_{o}=100M_{3}\), and \(e_{o}=0.3\), along with approximate sensitivity curves for B-DECIGO, TianGO, and LISA used in the Fisher matrix calculations done in this work. The red dashed curve gives the same waveform but with the effects of de Sitter precession removed.
This result is verified in the structures of Figs. 5 and 6, and we observe that both B-DECIGO and LISA have the potential to realize fractional uncertainties in \(M_{3}\) and \(a_{o}\) significantly below the \(0.1\%\) level across a wide range of parameters of the triple systems.
To understand the structure of the contour plots, we examine the contour plot in Fig. 7. For small \(M_{3}\) and \(a_{o}/M_{3}\), the contours are roughly separated by lines of constant \(a_{o}^{5}/M_{3}^{3}\). We correlate these trends to evolving components of the waveform. First, the de Sitter precession frequency is proportional to \(\Omega_{\rm dS}\propto\sqrt{M_{3}^{3}/a_{o}^{5}}\). As discussed in App. B, the Thomas phase and polarization phase scale as \(\Phi_{T}\sim\Omega_{\rm dS}t\). Thus, measurement accuracy scales with the number of de Sitter cycles within the five-year window. In this region of parameter space, the modulations of de Sitter precession are the dominant effect for how well we can measure \(M_{3}\) and \(a_{o}\).
For larger \(M_{3}\) and a wide range of \(a_{o}/M_{3}\), the contours are roughly separated by lines of constant \(a_{o}/M_{3}\). In this region, the Doppler phase is the
Figure 3: Fractional uncertainty in \(M_{3}\) as measured by B-DECIGO for three different eccentricities \(e_{o}=\{0.001,0.3,0.6\}\). At each point in the contour plot, we take the median uncertainty over a set of combinations of \((\gamma_{o},\alpha_{0},\phi_{o})\). The purple region corresponds to where the outer binary merges in less time than the observation duration. We lightly shade out the region with an outer orbital period greater than 10 years, where the cumulative effect of the Doppler shift becomes small.
Figure 4: Same as Fig. 3, but measured by LISA instead.
dominant term in the frequency domain waveform phase. The Doppler phase magnitude features a degeneracy between \(a_{o}\) and \(\sin\iota_{J}\) (with \(\sin\iota_{J}\) being a function of the angles \(\overline{\theta}_{S}\), \(\overline{\phi}_{S}\), \(\overline{\theta}_{J}\), and \(\overline{\phi}_{J}\)), as these quantities appear in the magnitude only as the product \(a_{o}\sin\iota_{J}=M_{3}^{1/3}\Omega_{o}^{-2/3}\sin\iota_{J}\). This degeneracy is broken by the inclusion of relativistic pericenter precession, as this produces different periods in the radial and azimuthal motion of the BBH in the outer orbit (cf. Sec. II.2). The inclusion of this precession produces lines of constant \(\Delta M_{3}/M_{3}\) that scale roughly with \((a_{o}/M_{3})^{3/2}\). See App. B for more detailed discussion.
Studying Fig. 3, we see that for \(e_{o}\approx 0\), these flat contours do not appear, as for a circular orbit, pericenter precession is essentially degenerate with an increase in \(\Omega_{o}\). The resulting contour plot shape is similar to the results seen in Fig. 5 of [24], where \(e_{o}\) is assumed to be zero - over a wide range of the parameter space, de Sitter precession is the dominant effect in determining \(\Delta M_{3}/M_{3}\). However, once \(e_{o}>0\), pericenter precession, rather than de Sitter precession, becomes the leading contribution to \(\Delta M_{3}/M_{3}\) over a significant portion of the parameter space. The importance of pericenter precession is further emphasized by comparing the magnitudes of \(\Delta M_{3}/M_{3}\) in our plots to Fig. 5 of [24], which sets \(e_{o}=0\) and therefore does not include pericenter precession (though it does
Figure 5: Uncertainty in \(a_{o}\) as measured by B-DECIGO for three different eccentricities \(e_{o}=\{0.001,0.3,0.6\}\). The same sampling procedure as used in Fig. 3 is applied here.
Figure 6: Same as Fig. 5, but measured by LISA instead.
include all other effects used in this work). With pericenter precession included, the parameter uncertainties across a wide region of the overall parameter space can drop by multiple orders of magnitude.
We also estimate how well the eccentricity can be measured with B-DECIGO and LISA, as shown in Fig. 8 and Fig. 9. These results suggest that the eccentricity can be constrained to high precision, with B-DECIGO able to achieve a lower bound of \(\Delta e_{o}\sim 10^{-6}-10^{-5}\) and LISA able to achieve \(\Delta e_{o}\sim 10^{-5}-10^{-4}\) across a substantial portion of the parameter space where precession is detectable. Once again, we see the importance of de Sitter precession in the measurability of this parameter - in the portion of the parameter space where de Sitter precession is rapid, contours of equal estimation uncertainty track contours of equal de Sitter precession period. Unlike the contour plots for \(\Delta M_{3}/M_{3}\), the shape of these contours is not heavily dictated by power laws related to pericenter precession. Indeed, there are no degeneracies between \(e_{o}\) and other waveform parameters which are broken by pericenter precession.
An important question is the impact of increasing outer orbit eccentricity on the ability to measure parameters like \(M_{3}\), \(a_{o}\), and \(e_{o}\) itself. In Figs. 10 and 11, we consider B-DECIGO, LISA, and the TianGO concept and three different combinations of \((M_{3},a_{o}/M_{3})\) across our chosen parameter space. We study the effect of increasing eccentricity on the estimation uncertainties in \(M_{3}\) and \(e_{o}\) (still averaging over initial orbital angles) and find that increasing eccentricity can produce marginal improvements in the measurement of \(M_{3}\) and \(e_{o}\) - a factor of \(\sim\) a few - though such improvement is not universal across \((M_{3},a_{o}/M_{3})\) parameter space.
Considering the arguments given in App. B, we see that the leading contributions to the Fisher matrix elements come from the derivatives of \(\Phi_{D}\), \(\Phi_{P}\), and \(\Phi_{T}\). Noting that these phases evolve at secular rates of \(\Omega_{\rm dS}\) (for \(\Phi_{P}\) and \(\Phi_{T}\)) or \(\Omega_{\rm pericenter}=\Omega_{o}\frac{3}{p}\) (for \(\Phi_{D}\) - specifically, this is the rate at which the degeneracy between \(a_{o}\) and \(\sin\iota_{J}\) is broken), and recalling that these rates scale with \((1-e_{o}^{2})^{-1}\), it follows that larger eccentricities produce more rapid evolution, larger Fisher matrix entries, and ultimately smaller parameter uncertainties.
The relative sensitivities between the three detectors are responsible for the clear hierarchy in the parameter uncertainties they produce. For example, the rates of precession and orbital velocity are sensitive to both \(M_{3}\) and \(e_{o}\) but with different dependencies, so there exist degeneracies between these two parameters. These
Figure 8: Uncertainty in \(e_{o}\) as measured by B-DECIGO for three different eccentricities \(e_{o}=\{0.001,0.3,0.6\}\). The same sampling procedure as used in Fig. 3 is applied here.
Figure 7: Contour plot for the fractional uncertainty in \(M_{3}\) as measured by B-DECIGO, taken from Fig. 3. Plotted on top of the contours are lines of constant \(a_{o}^{5/2}/M_{3}^{3/2}\) and \(M_{3}^{1/2}/a_{o}^{1/2}\) to indicate the structure of the contours.
degeneracies can be lifted by observing the system over long periods of time so that these effects can accumulate, enabling tighter constraints on their respective individual rates. Examining Fig. 2, we see that LISA effectively measures the BBH signal over a smaller frequency band than the other two detectors in the five years prior to merger. Since the LISA sensitivity is poorer than the other two detectors in the frequencies sampled in the five year observation run, the SNR of the waveform is reduced and it becomes more difficult to extract the waveform modulations driven by orbital and precessional effects over that period of time. Therefore, the degeneracies are not as cleanly lifted in LISA measurements, especially when these rates are slow (i.e., low \(M_{3}\), high \(a_{o}/M_{3}\)), producing less precise parameter estimates.
The primary effect of eccentricity then is to increase the strength of waveform modulations by increasing the magnitude of the precessional effects (pericenter, de Sitter); however, we see that for the LISA observatory, the improvement in parameter estimation uncertainty with rising \(e_{o}\) is not as significant as in B-DECIGO and TianGO, and in some cases, a larger \(e_{o}\) produces larger uncertainties. While increasing the eccentricity boosts the orbit averaged rate of de Sitter and pericenter precession (Cf. Eqs. 7 and 15), the majority of this evolution occurs when the BBH is near the outer orbit pericenter and the instantaneous precession rate is largest. So, for systems with slow outer orbits (once again, low \(M_{3}\) and high \(a_{o}/M_{3}\)), an increasing eccentricity constrains the majority of the waveform modulation effects to a shorter time window, as the BBH passes through the region near the pericenter at a faster rate. The GW radiation from the BBH then evolves through a smaller range of frequencies while the waveform is significantly modulated.
## V Conclusion and Future Directions
Using the Fisher information matrix, we have shown that future space-based GW observatories may be able to precisely constrain the properties of BBH+SMBH triple
Figure 10: The fractional uncertainties in \(M_{3}\) obtainable by B-DECIGO (blue), TianGO (orange), and LISA (green) as the eccentricity is varied. The solid, dashed, and dotted lines correspond to different choices of \((M_{3},a_{o}/M_{3})\).
Figure 9: Same as Fig. 8, but measured by LISA instead.
systems, like the SMBH mass and outer orbit semimajor axis and eccentricity, through the GW signal observed from the BBH. We have demonstrated that the rate of change of the Doppler phase shift and the de Sitter precession rate are the dominant factors determining the measurability of triple system parameters, and that an increasing outer orbit eccentricity leads to improved measurement uncertainties through greater Doppler phase shift modulation and faster de Sitter precession. We have also shown that the planned LISA detector is capable of measuring these systems, though decihertz detector concepts such as TianGO or B-DECIGO would possess a competitive advantage over LISA in measuring such quantities.
There are some important limitations of the Fisher information method implemented in this work. As described in [48], a high SNR is required for the inverse Fisher matrix to give the covariance of the posterior probability distribution for the true source parameters \(\vec{\theta}_{0}\). While the SNR we compute for our waveform is generally \(\sim 40\) for TianGO, it is only \(\sim 4\) for LISA, suggesting that the true parameter estimation uncertainties may be significantly different from those calculated here. However, the inverse Fisher matrix is also a _lower bound_ for the uncertainty of an unbiased estimator of \(\vec{\theta}_{0}\)[48], so our results essentially offer a best-case scenario for the parameter estimation precision obtainable by future space-based observatories. A more thorough approach to this analysis would implement a full Bayesian methodology.
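For concreteness, the following is a minimal, generic sketch of the Fisher-matrix computation underlying such forecasts; the array shapes, helper names, and discretized frequency integral are illustrative assumptions, not the code used for the results reported here.

```python
import numpy as np

def fisher_matrix(dh, Sn, df):
    # Gamma_ij = 4 Re \int dh_i(f) dh_j(f)^* / S_n(f) df, discretized on a
    # uniform frequency grid; dh has shape (n_params, n_freqs), Sn (n_freqs,).
    integrand = np.einsum('if,jf->ijf', dh, dh.conj()) / Sn
    return 4.0 * np.real(np.sum(integrand, axis=-1)) * df

def cramer_rao_uncertainties(gamma):
    # Square roots of the diagonal of the inverse Fisher matrix: the
    # best-case 1-sigma parameter uncertainties discussed in the text.
    return np.sqrt(np.diag(np.linalg.inv(gamma)))
```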
We can further develop this work by including additional effects in the waveform. One could incorporate the spin-precession effects that we chose to neglect in Sec. II.4 due to their significantly slower time scales. Furthermore, for triple systems with lower outer binary merger times (i.e., with \(M_{3}\) and \(a_{o}/M_{3}\) near the purple regions shown in the contour plots such as Fig. 3), the semimajor axis and outer eccentricity can evolve significantly in time due to radiation reaction [37]. Considering the frequency integral that composes the Fisher matrix elements, we can include the effects of gravitational redshift and Doppler frequency shift, which would require the waveform and detector sensitivity to be evaluated at different frequencies in the integrand. Also, the stationary phase approximation used in the frequency domain waveform (outlined in App. A) may not hold well for highly eccentric outer orbits, as the outer orbital angle varies quite rapidly near the pericenter for such orbits.
In Ref. [44], it is discussed how gravitational lensing of GWs by the SMBH combined with the de Sitter precession of \(\mathbf{L}_{\rm i}\) can further constrain the parameters of a triple system as estimated by a space-based GW observatory, even in the case of a circular outer orbit. It would be interesting to examine the combined effects of an eccentric outer orbit and repeated GW lensing in parameter estimation problems.
Finally, measurements of the motion of a BBH through space via its modulated waveform may prove useful for understanding phenomena besides BBH+SMBH hierarchical triples. For example, measuring the evolving Doppler shift and aberrations induced by the evolving position and velocity of an isolated BBH might enable estimates of BBH kicks that occur shortly before merger or improve the precision of estimates of the Hubble constant by further constraining the redshifts of GW standard sirens [66; 67].
###### Acknowledgements.
This work was supported in part by the Caltech LIGO Summer Undergraduate Research Fellowship and by the REU program of the NSF. B.S. acknowledges support by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1745301. H.Y. is supported by NSF PHY-2308415.
|
2304.12460 | Functional Causal Inference with Time-to-Event Data | Functional data is a powerful tool for capturing and analyzing complex
patterns and relationships in a variety of fields, allowing for more precise
modeling, visualization, and decision-making. For example, in healthcare,
functional data such as medical images can help doctors make more accurate
diagnoses and develop more effective treatment plans. However, understanding
the causal relationships between functional predictors and time-to-event
outcomes remains a challenge. To address this, we propose a functional causal
framework including a functional accelerated failure time (FAFT) model and
three causal approaches. The regression adjustment approach is based on
conditional FAFT with subsequent confounding marginalization, while the
functional-inverse-probability-weighting approach is based on marginal FAFT
with well-defined functional propensity scores. The double robust approach
combines the strengths of both methods and achieves a balance condition through
the weighted residuals between imputed observations and regression adjustment
outcomes. Our approach can accurately estimate causality, predict outcomes, and
is robust to different censoring rates. We demonstrate the power of our
framework with simulations and real-world data from the Alzheimer's Disease
Neuroimaging Initiative (ADNI) study. Our findings provide more precise
subregions of the hippocampus that align with medical research, highlighting
the power of this work for improving healthcare outcomes. | Xiyuan Gao, Jiayi Wang, Guanyu Hu, Jianguo Sun | 2023-04-24T21:38:49Z | http://arxiv.org/abs/2304.12460v1 | # Functional Causal Inference with Time-to-Event Data
###### Abstract
Functional data is a powerful tool for capturing and analyzing complex patterns and relationships in a variety of fields, allowing for more precise modeling, visualization, and decision-making. For example, in healthcare, functional data such as medical images can help doctors make more accurate diagnoses and develop more effective treatment plans. However, understanding the causal relationships between functional predictors and time-to-event outcomes remains a challenge. To address this, we propose a functional causal framework including a functional accelerated failure time (FAFT) model and three causal approaches. The regression adjustment approach is based on conditional FAFT with subsequent confounding marginalization, while the functional-inverse-probability-weighting approach is based on marginal FAFT with well-defined functional propensity scores. The double robust approach combines the strengths
of both methods and achieves a balance condition through the weighted residuals between imputed observations and regression adjustment outcomes. Our approach can accurately estimate causality, predict outcomes, and is robust to different censoring rates. We demonstrate the power of our framework with simulations and real-world data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. Our findings provide more precise subregions of the hippocampus that align with medical research, highlighting the power of this work for improving healthcare outcomes.
**Keywords:** Accelerated failure time; Functional treatment; Functional propensity score; Double robust estimator.
## 1 Introduction
Alzheimer's disease (AD) is a debilitating and fatal neurodegenerative disorder that affects millions of people worldwide and has become the 6th leading cause of death among US adults (HHS, CDC, and NSHS, 2022), with increasing death rates and high medical costs (Alzheimer's Association, 2022). However, the pathogenesis of AD remains unclear, and there is currently no effective cure (Ashleigh et al., 2023). Early detection of mild cognitive impairment (MCI) and accurate assessment of progression from MCI to AD (MCI-AD) are crucial for the effective management of the disease (Potashman et al., 2023). Magnetic resonance imaging (MRI) has emerged as a valuable tool in the diagnosis and prognosis of AD, particularly in the analysis of the hippocampus, a brain region whose atrophy is detectable, susceptible to the impact of AD, and frequently used as an MCI-AD progression biomarker. As a result, increasing attention has been given to exploring the use of MRI imaging data on the hippocampus to determine its relationship with MCI-AD and predict the time of conversion from MCI to AD (Warren and Moustafa, 2023).
By treating the MRI on the hippocampus as a functional predictor, recent advancements including Kong et al. (2018) and Yang et al. (2021) investigated its conditional effects on the hazard ratio while assuming that the effects of other covariates are held constant in the functional linear Cox regression model, but left the causal relationship as an open question. In an AD observational study such as Alzheimer's Disease Neuroimaging Initiative (ADNI), it is challenging to interpret
the fitted models if the clinical measurements are confounded by other factors such as age, education, and ADAS-Cog score. In order to explore the causal relationship between the hippocampus and MCI-AD, MRI imaging is regarded as a treatment throughout the paper to be in line with the causal inference language.
The existing causal inference methods considering time-to-event data primarily focused on binary (Cao and Yu, 2023), multi-level (Corder and Yang, 2022), continuous (Cui et al., 2021), and time-varying treatments (Yang et al., 2020), and have not addressed the complexities of functional treatments. Unlike time-varying treatments that are measured sequentially at a sparse time grid in a longitudinal analysis, functional treatments measure the entire trajectory synchronously over the whole domain at one time point, without a dynamic component. They are characterized by high dimensionality, a continuous nature, and discrete observations, resulting in a non-sparse effect on outcomes. By focusing on continuous responses, Zhang et al. (2021) proposed the functional weights, which cannot be straightforwardly adapted to time-to-event data due to the existence of censoring. Therefore, new methods are needed to account for the unique features of functional treatments and determine their causal effect on time-to-event data.
The well-known Cox regression model has been widely investigated with a functional variable (Kong et al., 2018; Yang et al., 2021), whose clinical interpretation of hazard ratios (HR) is limited by the proportional hazard (PH) assumption. However, even when the PH is plausible, the HR is not causally interpretable unless there is no treatment effect or an untestable and unrealistic assumption holds (Martinussen, 2022). As an alternative, the accelerated failure time (AFT) model was proposed and was able to provide a direct interpretation of the mean survival time (Ritov, 1990). It has been widely studied for a small number of predictors (Saikia and Barman, 2017) but has not yet been generalized to functional data. In order to fill the research gap, this paper aims to develop a novel functional causal framework that can effectively handle the functional treatment and provide a meaningful causal interpretation with time-to-event data.
The novel functional causal survival framework consists of the functional AFT (FAFT) model and three functional causal estimation approaches. Distinguishing from previous research, this paper makes several new contributions. The FAFT is proposed by incorporating functional linear regression into AFT for modeling the relationship between survival outcomes and functional
treatment. We define the functional estimand as the coefficient of the functional treatment in FAFT, representing the causal differences and variations over a domain. In fact, this idea and our proposed approach are widely applicable to general time-to-event data with functional predictors. To deal with the nonparametric functional treatment and corresponding coefficient, the functional principal component analysis (FPCA) is employed to achieve dimension reduction and avoid the computational difficulty of regularization methods (Sang et al., 2022). To adjust for the confounding effects, three causal inference approaches are proposed for time-to-event data, including the regression adjustment approach, the functional inverse-probability-of-weighting (FIPW) approach, and the double robust approach.
The remainder of the paper is organized as follows. In Section 2, we first review the FPCA on a functional treatment, and then introduce the required assumptions and propose the FAFT model. Details of three causal inference approaches are presented in Section 3. To facilitate computation, Section 4 develops the corresponding algorithms used for FAFT estimation, regression adjustment, FIPW, and double robust approach. Extensive simulation studies are conducted in Section 5 and an application on AD is included in Section 6. In addition, the discussion can be found in Section 7. For ease of exposition, additional technical and numerical results are given in supplementary material.
## 2 Model and assumptions
### Notation and FPCA
The functional treatment \(X(s)\) is defined over a compact set \(\mathcal{S}\subseteq\mathbb{R}\), where \(s\) denotes the continuous domain of \(X(\cdot)\). However, in practice, \(X(s)\) can only be observed on a discrete set of points within \(\mathcal{S}\). The basic idea behind functional data analysis is to express discrete observed points in the form of a function, representing the entire set of measurements as a single observation. The functional linear regression considers a scalar response \(Y\) and models its relationship with \(X(\cdot)\) as follows:
\[Y=\alpha+\int_{\mathcal{S}}\beta(s)X(s)\;\mathrm{d}s+\epsilon, \tag{1}\]
where \(\alpha\) is an unknown intercept, \(\beta(\cdot)\) is an unknown coefficient with a functional form, and \(\epsilon\) is an error term independent of \(X(\cdot)\). The variability of the functional coefficient \(\beta(s)\) over \(\mathcal{S}\) characterizes how the effect of a predictor variable changes smoothly over the entire domain. It is also possible to interpret \(\beta(\cdot)\) at specific points or regions. In the AD example, \(\beta(\cdot)\) allows us to study how different brain sub-regions impact the MCI-AD progression. By examining how the coefficient varies over the entire brain, we can gain insights into which sub-regions have the most significant impact and which are most vulnerable to progression.
Unlike scalar values, \(X(\cdot)\) often exhibits complex structures such as high dimensionality, non-linearity, and non-stationarity, making it difficult to visualize and analyze. To overcome these challenges, this paper adopts FPCA due to its ability to represent functional data in the most parsimonious way (Jiao et al., 2021), and then performs statistical analysis on the coefficients of these representations.
Explicitly, the Karhunen-Loeve theorem allows for the representation \(X(s)=\sum_{k=1}^{\infty}A_{k}\phi_{k}(s),\)\(s\in\mathcal{S},\) where \(\{\phi_{k}(\cdot):1\leqslant k<\infty\}\) are eigenfunctions and \(\{A_{k}=\int_{\mathcal{S}}X(s)\phi_{k}(s)\mathrm{d}s\}\) are functional principal component scores (FPCS), with the properties \(E(A_{k})=0,\mathrm{Var}\left(A_{k}\right)=\lambda_{k}\), and \(\mathrm{Cov}\left(A_{k_{1}},A_{k_{2}}\right)=0\) for any \(k_{1}\neq k_{2}\). The corresponding eigenvalues \(\{\lambda_{k}:1\leqslant k<\infty\}\) satisfy that \(\lambda_{k}\geqslant 0\) in a decreasing order and \(\sum_{k=1}^{\infty}\lambda_{k}<\infty\). Based on the span of \(\phi_{k}(\cdot)\)'s, the projection of \(\beta(\cdot)\) corresponding to \(X(\cdot)\) is also identifiable, as shown below:
\[\int_{\mathcal{S}}\beta(s)X(s)\;\mathrm{d}s=\int_{\mathcal{S}}\beta(s)\left( \sum_{k=1}^{\infty}A_{k}\phi_{k}(s)\right)\;\mathrm{d}s=\sum_{k=1}^{\infty}A_{ k}\left(\int_{\mathcal{S}}\beta(s)\phi_{k}(s)\;\mathrm{d}s\right).\]
Let \(\beta_{k}=\int_{\mathcal{S}}\beta(s)\phi_{k}(s)\;\mathrm{d}s\), the expanded functional coefficient is \(\beta(s)=\sum_{k=1}^{\infty}\beta_{k}\phi_{k}(s),s\in\mathcal{S}.\)
In practice, the number of FPCS used in modeling the data is usually truncated at some number \(K\) due to computational constraints and to avoid overfitting. The selection of \(K\) is typically based on the percentage of variation explained (PVE), which measures the proportion of the total variability in the data that is captured by the first \(K\) FPCS. In a sample of size \(n\), the selected threshold is denoted as \(K_{n}\) and calculated as \(\text{PVE}(K_{n})=\sum_{k=1}^{K_{n}}\lambda_{k}/\sum_{k=1}^{\infty}\lambda_{k}\). Therefore, \(X(s)\approx\sum_{k=1}^{K_{n}}A_{k}\phi_{k}(s),s\in\mathcal{S},\) and the accuracy of such approximation has been proved to increase asymptotically as \(n\rightarrow\infty\)(Wang et al., 2016). Common choices of \(\text{PVE}(K_{n})\) range from
\(70\%\) to \(99\%\), depending on the complexity of the data and the research question of interest (Kong et al., 2018).
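To make the truncation step concrete, the following is a minimal sketch of FPCA on densely observed curves with \(K_{n}\) selected by PVE; the grid size, sample size, simulated eigenstructure, and the 95% threshold are illustrative assumptions rather than quantities from any data analysis in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 100, 200                          # grid points on S = [0, 1]; sample size
s = np.linspace(0.0, 1.0, M)
ds = s[1] - s[0]

# Simulate curves X_i(s) = sum_k A_ik phi_k(s) plus small noise.
phi_true = np.sqrt(2.0) * np.vstack([np.sin(2 * np.pi * k * s) for k in (1, 2, 3)])
A_true = rng.normal(size=(n, 3)) * np.sqrt(np.array([4.0, 2.0, 1.0]))
X = A_true @ phi_true + 0.1 * rng.normal(size=(n, M))

# Eigendecomposition of the discretized sample covariance operator.
Xc = X - X.mean(axis=0)
cov_op = (Xc.T @ Xc / n) * ds            # kernel values times quadrature weight
evals, evecs = np.linalg.eigh(cov_op)
evals, evecs = evals[::-1], evecs[:, ::-1]
evecs = evecs / np.sqrt(ds)              # eigenfunctions with unit L2 norm

# Truncate at K_n via the percentage of variation explained (PVE).
pve = np.cumsum(evals) / np.sum(evals)
K_n = int(np.searchsorted(pve, 0.95) + 1)
scores = (Xc @ evecs[:, :K_n]) * ds      # FPCS: A_ik = int X_i(s) phi_k(s) ds
print(K_n, evals[:K_n])
```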
### Causality identification
When observing a time-to-event response, take \(T\) and \(C\) as the failure time and censoring time, respectively. The observed event time is denoted as \(\widetilde{T}=\min(T,C)\) and the corresponding censoring indicator is determined by \(\delta=\mathrm{I}(T\leqslant C)\), which equals 1 if observing a failure and 0 otherwise. To remove the non-negativity restriction on event times, we define the response on its \(\log\) scale, i.e., \(Y=\log T\). In addition, a \(p\)-dimensional vector of observed covariates is represented by the vector \(\mathbf{Z}\).
In accordance with the potential outcomes framework (Rubin, 1974), consider a potential treatment \(x:=x(s)\in L^{2}=\{f:\int_{\mathcal{S}}f^{2}(s)\;\mathrm{d}s<\infty\}\). Let \(Y(x)\) be the potential outcome and \(\beta(\cdot)\) be the functional coefficient representing the causal effect curve. Our objective is to model \(\mathbb{E}\left[Y(x)\right]\) and estimate \(\beta(\cdot|x)\), in terms of the observed data \(\mathcal{O}=\left\{\left(X_{i}(s),\mathbf{Z}_{i},\delta_{i},\widetilde{T}_{i }\right),\;s\in\mathcal{S},i=1,...,n\right\}\). To link the observed data to counterfactual data, the following assumptions are required to identify the causal effect.
**Assumption 1**.: _Ignorability \(X\perp\!\!\!\perp Y(x)\mid\mathbf{Z},\;\forall x\in L^{2}\)._
This is also known as conditional exchangeability, i.e., \(\mathbb{E}\left[Y(x)|X,Z\right]=\mathbb{E}\left[Y(x)|\mathbf{Z}\right]\). In a randomized clinical trial (RCT), it always holds since the treatment allocation mechanism provides no information on the counterfactual outcomes. In an observational study, this is generally not true and is said to hold provided that the treatment is randomized within strata of the recorded covariates. Such a condition is generally not verifiable empirically and its plausibility should be justified based on subject matter knowledge in practice. In general, it holds if the study guarantees it by design (e.g., stratified randomized trial) or sufficient potential confounders have been collected.
The following two assumptions are associated with the propensity score for a functional treatment. Since the conditional density function of a functional treatment does not exist, the rank-\(K\) functional propensity score is defined as \(\pi_{\mathbf{Z}}(\mathbf{A})=P(\mathbf{A}=\mathbf{a}|\mathbf{Z})\) by using the selected FPCS vector
\(\mathbf{A}=\{A_{1},...,A_{K}\}\)(Zhang et al., 2021).
**Assumption 2**.: _Consistency \(\sum_{k=1}^{K}A_{k}\phi_{k}(s)=\sum_{k=1}^{K}a_{k}\phi_{k}(s),s\in\mathcal{S} \Rightarrow Y=Y(x).\)_
**Assumption 3**.: _Positivity \(0<\pi_{\mathbf{Z}}(\mathbf{A})<1\) with probability \(1,\quad\forall\mathbf{a}\in\mathbb{R}^{K}.\)_
Assumption 2 states that potential outcomes are uniquely defined by a subject's own treatment level, with no interference between subjects and no different versions of treatment. This ensures that the causal effect of \(X(\cdot)\) on a subject is not influenced by external factors. By implying that every subject has a positive chance of receiving any level of treatment, regardless of their covariates, Assumption 3 ensures that the treatment effect estimates are derived from a representative sample of the population, rather than a biased subgroup of subjects. It is important to point out that how to address positivity violations remains an open problem for continuous treatments (Zhao et al., 2020), let alone functional treatments, where the issue of positivity violation is likely to be more severe.
For the censoring mechanism, we assume independent censoring between the actual survival time and the censoring time.
**Assumption 4**.: _Noninformative censoring \(C\perp\!\!\!\perp Y(x)|\left(\mathbf{Z},X\right)\Rightarrow C\perp\!\!\!\perp Y (X)|\left(\mathbf{Z},X\right).\)_
Given Assumptions 1-4, based on the observed data, \(\mathbb{E}[Y(x)]\) can be identified via
\[\mathbb{E}[Y(x)]=\int_{\mathbf{Z}}\mathbb{E}\left[Y(x)|\mathbf{Z}=\mathbf{z} \right]\;\mathrm{d}f_{\mathbf{Z}}(\mathbf{z})=\int_{\mathbf{Z}}\mathbb{E} \left[Y|X=x,\mathbf{Z}=\mathbf{z}\right]\;\mathrm{d}f_{\mathbf{Z}}(\mathbf{z}), \tag{2}\]
where \(f_{\mathbf{Z}}(\mathbf{z})\) is the joint density function of all covariates.
### FAFT model
In RCTs, the causal effect estimation is straightforward, since RCTs may achieve sufficient control over confounding factors provided a good design, proper conduct, and sufficient enrollment. We propose a FAFT model that incorporates the functional linear regression and the classical accelerated
failure time model. Specifically, for subject \(i\), the FAFT has the form
\[Y_{i}=\log(T_{i})=\alpha+\int_{\mathcal{S}}\beta(s)X_{i}(s)\;\mathrm{d}s+\epsilon _{i}. \tag{3}\]
For simplicity, assume that \(X_{i}(s)\) can be fully observed on grid points \(\{s_{m}\in\mathcal{S},1\leqslant m\leqslant M\}\) without any measurement error. The generalization to situations with measurement error can be found in Kong et al. (2018). With FPCA, the FAFT can be rewritten and approximated as
\[Y_{i}=\log T_{i}=\alpha+\sum_{k=1}^{\infty}A_{ik}\beta_{k}+\epsilon_{i}\approx \alpha+\sum_{k=1}^{K_{n}}A_{ik}\beta_{k}+\epsilon_{i}. \tag{4}\]
Such an approximation reduces the estimation of FAFT with an unknown curve to a classical AFT regression model with finite-dimensional predictors. Estimation details will be discussed in Section 4.1 and the resulting estimates include \(\hat{\alpha}\) and \(\{\hat{\beta}_{1},...,\hat{\beta}_{K_{n}}\}\). The resulting \(\hat{\beta}(\cdot)\) is the estimated functional coefficient in the model (3) and intuitively represents the causality between \(Y\) and \(X(\cdot)\) in RCTs.
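For intuition, here is a minimal sketch of fitting the truncated model (4) in the idealized uncensored case, reusing `scores`, `evecs`, and `ds` from the FPCA sketch above; the helper names are hypothetical, and with right censoring the plain least squares step would be replaced by the imputation-based estimator of Section 4.1.

```python
import numpy as np

def fit_faft_uncensored(scores, Y):
    # Least squares for model (4): Y_i = alpha + sum_k A_ik beta_k + eps_i.
    D = np.column_stack([np.ones(len(Y)), scores])
    theta, *_ = np.linalg.lstsq(D, Y, rcond=None)
    return theta[0], theta[1:]            # alpha-hat and (beta_1, ..., beta_Kn)

def reconstruct_beta(beta_k, evecs):
    # beta-hat(s) = sum_k beta_k-hat * phi_k(s), evaluated on the grid.
    return evecs[:, :len(beta_k)] @ beta_k
```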
However, this is usually not true in observational studies due to the possible disparities in confounders on both \(X(\cdot)\) and \(Y\). We propose three approaches to address confounding effects and denote the resulting estimators with extra subscripts, including (1) regression adjustment approach, \(\left\{\hat{\alpha}_{\text{RegAdj}},\hat{\beta}_{\text{RegAdj},1},...,\hat{ \beta}_{\text{RegAdj},K_{n}}\right\}\) and \(\hat{\beta}_{\text{RegAdj}}(\cdot)\); (2) FIPW approach, \(\left\{\hat{\alpha}_{\text{FIPW}},\hat{\beta}_{\text{FIPW},1},...,\hat{\beta }_{\text{FIPW},K_{n}}\right\}\) and \(\hat{\beta}_{\text{FIPW}}(\cdot)\); and (3) double robust approach, \(\left\{\hat{\alpha}_{\text{DR}},\hat{\beta}_{\text{DR},1},...,\hat{\beta}_{ \text{DR},K_{n}}\right\}\) and \(\hat{\beta}_{\text{DR}}(\cdot)\).
## 3 Causal inference approaches
### Regression adjustment approach
In order to marginalize the confounding effects, the regression adjustment approach provides consistent estimates of contrasts (e.g., differences, ratios) by first regressing \(Y_{i}\) on \(X_{i}\) conditional on the confounders and then adjusting for the confounding effects. By considering the full FAFT,
\[Y_{i}=\log T_{i}=\alpha+\int_{\mathcal{S}}\beta(s)X_{i}(s)\;\mathrm{d}s+\mathbf{ \gamma}^{\top}\mathbf{Z}_{i}+\epsilon_{i}\approx\alpha+\sum_{k=1}^{K_{n}}A_{ik} \beta_{k}+\mathbf{\gamma}^{\top}\mathbf{Z}_{i}+\epsilon_{i}. \tag{5}\]
the association between \(X(\cdot)\) and \(Y\) conditioning on \(\mathbf{Z}\) is revealed. This conditional mean survival outcome is the conditional expectation \(\mathbb{E}\left[Y|X=x,\mathbf{Z}=\mathbf{z}\right]\) in equation (2).
Suppose the fitted FAFT model gives parameter estimates \((\hat{\alpha},\hat{\beta}_{1},...,\hat{\beta}_{K_{n}},\hat{\gamma}_{1},...,\hat{\gamma}_{p})\); then the confounding adjustment proceeds in the following two steps. For each subject \(i\), construct new responses adjusted by the empirical mean of all confounders,
\[\widehat{Y}_{i}=\frac{1}{n}\sum_{j=1}^{n}\left(\hat{\alpha}+\int_{\mathcal{S}}\hat{\beta}(s)X_{i}(s)\;\mathrm{d}s+\hat{\mathbf{\gamma}}^{\top}\mathbf{Z}_{j}\right)\approx\hat{\alpha}+\sum_{k=1}^{K_{n}}A_{ik}\hat{\beta}_{k}+\frac{1}{n}\sum_{j=1}^{n}\hat{\mathbf{\gamma}}^{\top}\mathbf{Z}_{j}. \tag{6}\]
Based on \(\left(X_{i},\widehat{Y}_{i},i=1,...,n\right)\), refit model (4) and the estimates \(\left\{\hat{\alpha}_{\text{RegAdj}},\hat{\beta}_{\text{RegAdj},1},...,\hat{\beta}_{\text{RegAdj},K_{n}}\right\}\) can be used to reconstruct the estimated functional coefficient, via \(\hat{\beta}_{\text{RegAdj}}(\cdot)=\sum_{k=1}^{K_{n}}\hat{\beta}_{\text{RegAdj},k}\phi_{k}(\cdot)\).
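The two adjustment steps can be written compactly as below; this is a sketch under the uncensored simplification used earlier, with `fit_faft_uncensored` the hypothetical helper from Section 2.3, whereas the censored-data version would use the imputation-based fit of Section 4.1.

```python
import numpy as np

def adjusted_responses(scores, Z, alpha_hat, beta_k_hat, gamma_hat):
    # Step 1, equation (6): replace each subject's confounder contribution by
    # the empirical mean (1/n) sum_j gamma-hat' Z_j.
    mean_confounding = np.mean(Z @ gamma_hat)
    return alpha_hat + scores @ beta_k_hat + mean_confounding

# Step 2: refit model (4) on (X_i, Y-hat_i), e.g.
# a_adj, b_adj = fit_faft_uncensored(scores, adjusted_responses(scores, Z, a, b, g))
```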
### FIPW approach
Instead of modeling survival outcomes conditionally as discussed in Section 3.1, the causal estimator can also be identified by direct marginal modeling. By defining the true weights to be \(\mathrm{w}=f(\mathbf{Z})/f(\mathbf{Z}|X)\), the FIPW approach creates a weighted pseudo-sample in which the marginal mean of the potential outcome equals the adjusted conditional mean in the actually observed population.
**Proposition 1**.: Under Assumptions 1-3, if the sample is re-weighted by using the weights defined as \(\mathrm{w}=f(\mathbf{Z})/f(\mathbf{Z}|X)\), then in the created pseudo-sample,
\[\mathbb{E}\Big{[}\mathrm{w}Y|X(s)=x(s)\Big{]}=\mathbb{E}[Y(x)]. \tag{7}\]
The left side of the equation is the expectation over the weighted pseudo-sample and is the logic behind FIPW, while the right side is the expectation after marginalization over \(\mathbf{Z}\) and is the rationale
behind the regression adjustment. Proof details are included in Section 1.1 of the supplementary material.
Unlike for categorical treatments, whose propensity scores are defined in terms of conditional probabilities and are usually estimated directly, directly estimating the weights via the definition of \(\mathrm{w}\) can be very challenging as the (conditional) densities (\(f(\mathbf{Z})\) and \(f(\mathbf{Z}\mid X)\)) are difficult to estimate and \(\mathbf{Z}\) is often multi-dimensional. Inspired by Zhang et al. (2021), the functional propensity score (FPS) and the corresponding weight are defined as
\[s_{i}=f_{\mathbf{A}|\mathbf{Z}}(\mathbf{a_{i}}\mid\mathbf{z_{i}}),\quad w_{i}= \frac{f_{\mathbf{A}}\left(\mathbf{A}_{i}\right)}{s_{i}}=\frac{f_{\mathbf{A}} \left(\mathbf{A}_{i}\right)}{f_{\mathbf{A}|\mathbf{Z}}(\mathbf{a_{i}}\mid \mathbf{z_{i}})},\]
where \(\mathbf{A}=(A_{1},...,A_{K^{*}})\) is the selected FPCS vector from FPCA and \(K^{*}\) can be different from \(K\). \(f_{\mathbf{A}|\mathbf{Z}}(\cdot)\) and \(f_{\mathbf{A}}(\cdot)\) are the conditional and marginal probability densities of \(\mathbf{A}\). This way of calculating the weights coincides with the definition of \(\mathrm{w}\), as shown by the derivation in Section 1.2 of the supplementary material.
**Proposition 2**.: If \(X_{i}(s)=\sum_{k=1}^{K^{*}}A_{ik}\phi_{k}(s),s\in\mathcal{S}\), then \(w_{i}=\mathrm{w}_{i}\) for subject \(i\).
For computational convenience, the standardized FPS \(s_{i}^{*}\) and standardized weights \(w_{i}^{*}\) are usually utilized. Define \(s_{i}^{*}=f_{\mathbf{A}^{*}|\mathbf{Z}^{*}}(\mathbf{a_{i}}^{*}\mid\mathbf{z_{i}}^{*})\) and \(w_{i}^{*}=f_{\mathbf{A}^{*}}\left(\mathbf{A}_{i}^{*}\right)/s_{i}^{*}\) with standardized covariates \(\mathbf{Z}_{i}^{*}=\mathbf{\Gamma}_{\mathbf{Z}}^{-1/2}\mathbf{Z}_{i},\mathbf{\Gamma}_{\mathbf{Z}}=E\left(\mathbf{Z}\mathbf{Z}^{\top}\right)\) and standardized FPCS \(\mathbf{A}_{i}^{*}=\left(A_{i1}^{*},\ldots,A_{iK_{n}^{*}}^{*}\right)^{\top},A_{ik}^{*}=\lambda_{k}^{-1/2}A_{ik},\;k=1,\ldots,K_{n}^{*}\). Clearly, the standardized FPS automatically satisfies the positivity and consistency assumptions. To satisfy the conditional exchangeability assumption, the weight vector \(\mathbf{w}^{*}\) should be able to achieve the covariate balance condition in the sense of minimizing the weighted correlation between \(\mathbf{A}^{*}\) and \(\mathbf{Z}^{*}\), i.e., \(\mathbb{E}\left(\mathbf{w}^{*}\mathbf{A}^{*}\mathbf{Z}^{*\top}\right)=0\). The optimization details will be discussed in Section 4.2.2. With the estimated weights \(\hat{\mathbf{w}}\), FAFT regression is then performed on \(X_{i}(\cdot)\) and the weighted survival outcome \(\hat{w}_{i}Y_{i}^{\text{Imputed}}\), where \(Y_{i}^{\text{Imputed}}\) is obtained based on equation (9), resulting in \(\hat{\beta}_{\text{FIPW}}(\cdot)\).
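As one concrete instance, the weights can be estimated parametrically under a Gaussian working model for \(\mathbf{A}^{*}\mid\mathbf{Z}^{*}\) (the FIPW.para variant of Section 5); the sketch below makes that Gaussian assumption explicit, and it is an illustrative choice rather than a requirement of the framework.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fips_weights_parametric(A_star, Z_star):
    # Working model: A* | Z* ~ N(B'Z*, Sigma), fitted by multivariate least
    # squares; the weights are marginal over conditional density ratios.
    n, K = A_star.shape
    B, *_ = np.linalg.lstsq(Z_star, A_star, rcond=None)
    resid = A_star - Z_star @ B
    Sigma = resid.T @ resid / n
    cond = np.array([multivariate_normal.pdf(A_star[i], mean=Z_star[i] @ B,
                                             cov=Sigma)
                     for i in range(n)])                     # s_i*
    marg = multivariate_normal.pdf(A_star, mean=np.zeros(K),
                                   cov=np.cov(A_star.T))     # f_{A*}(a_i*)
    w = marg / cond
    return w / w.mean()                                      # normalized weights
```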
### Double robust approach
The regression adjustment approach discussed in Section 3.1 requires a correctly specified regression model, while the FIPW approach proposed in Section 3.2 is unstable when outliers exist. Therefore, the double robust approach is proposed, which has the advantage of being less sensitive to model misspecification and data outliers, and attains faster rates of convergence when both models are consistently estimated (Van der Laan and Robins, 2003). After the regression adjustment, for subject \(i\), let \(Y_{i}^{\text{Imputed}}\) be the imputed outcome obtained during the estimation process of the full AFT and \(\hat{Y}_{\text{RegAdj},i}\) the fitted causal outcome. We construct a new adjusted pseudo outcome as follows,
\[\widetilde{Y}_{i}=\hat{Y}_{\text{RegAdj},i}+\hat{w}_{i}\left(Y_{i}^{\text{Imputed}}-\hat{Y}_{\text{RegAdj},i}\right). \tag{8}\]
By applying \(\left\{\widetilde{Y}_{1},...,\widetilde{Y}_{n}\right\}\) on model (4), we can obtain the \(\hat{\beta}_{\text{DR}}(\cdot)\).
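The construction of the pseudo outcome in equation (8) is a one-liner; in the sketch below, `Y_imputed`, `Y_regadj`, and `w_hat` stand for the imputed responses from equation (9), the regression-adjusted fits, and the estimated FIPW weights, respectively.

```python
import numpy as np

def dr_pseudo_outcomes(Y_imputed, Y_regadj, w_hat):
    # Equation (8): weighted residual correction of the regression adjustment;
    # the resulting pseudo outcomes are then used to refit model (4).
    return Y_regadj + w_hat * (Y_imputed - Y_regadj)
```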
## 4 Implementations of the approaches
The objective of Sections 2 and 3 is to present a comprehensive framework for identifying the causal relationship between a functional treatment and a survival outcome. In this section, we provide in-depth information on the estimation process and summarize the algorithms utilized within this functional causal survival framework.
### FAFT estimation
With simpler notation, let \(\mathbf{\theta}\) and \(\mathbf{D}_{i}\) represent all unknown parameters and the predictors, respectively. For example, \(\mathbf{\theta}=(\alpha,\beta_{1},...,\beta_{K_{n}})^{\top}\) in FAFT (4) and \(\mathbf{\theta}=(\alpha,\beta_{1},...,\beta_{K_{n}},\gamma_{1},...,\gamma_{p})^{\top}\) in the full FAFT (5). Then, we write the models as \(Y_{i}=\mathbf{\theta}^{\top}\mathbf{D}_{i}+\epsilon_{i}\).
To fit the FAFT, we generalize the idea of the least squares method due to its stability for a large number of predictors (Jin et al., 2006). To deal with the right censoring, each response \(Y_{i}\) is
imputed by its conditional expectation,
\[Y_{i}^{\text{Imputed}}(\mathbf{\theta})=\delta_{i}Y_{i}+\left(1-\delta_{i}\right) \mathbb{E}_{\mathbf{\theta}}\left[Y_{i}\mid Y_{i}\geqslant\log C_{i}\right], \tag{9}\]
where the expectation is evaluated based on the Kaplan-Meier estimator \(\hat{F}(e)\). Let \(r_{i}(\mathbf{\theta})=\widetilde{T}_{i}-\mathbb{E}_{\mathbf{\theta}}\left[Y|X_{i},\mathbf{Z}_{i}\right]\), then \(\hat{F}(e)=1-\prod_{\{i:r_{(i)}\leqslant e\}}\left(\frac{n-i}{n-i+1}\right)^{\delta_{i}},\) where \(r_{(i)}\) are the ordered \(r_{i}(\mathbf{\theta})\)'s. Therefore the expectation in (9) is estimated as
\[\hat{\mathbb{E}}\left(Y_{i}\mid Y_{i}\geqslant\log C_{i}\right)=\mathbb{E}_{ \mathbf{\theta}}\left[Y|X_{i},\mathbf{Z}_{i}\right]+\hat{\mathbb{E}}\left[e_{i} \mid e_{i}\geqslant r_{i}(\mathbf{\theta})\right], \tag{10}\]
where \(\hat{\mathbb{E}}\left[e_{i}\mid e_{i}\geqslant r_{i}(\mathbf{\theta})\right]=\int_{r_{i}(\mathbf{\theta})}^{\infty}\frac{s}{1-\hat{F}(r_{i}(\mathbf{\theta}))}\,\mathrm{d}\hat{F}(s)\). Given initial values \(\hat{\mathbf{\theta}}^{(0)}\), the least squares estimator is the solution of the estimating equation \(U_{n}(\mathbf{\theta},\hat{\mathbf{\theta}}^{(0)})=\sum_{i=1}^{n}\left(\mathbf{D}_{i}-\bar{\mathbf{D}}\right)^{\top}\left(Y_{i}^{\text{Imputed}}(\hat{\mathbf{\theta}}^{(0)})-\mathbf{\theta}^{\top}\mathbf{D}_{i}\right)=0.\) The estimation procedure can proceed iteratively by solving \(\hat{\mathbf{\theta}}_{n}^{(m)}=L_{n}\Big{(}\hat{\mathbf{\theta}}_{n}^{(m-1)}\Big{)}\), \(m\geqslant 1\), and
\[L_{n}(\mathbf{\theta})=\left[\sum_{i=1}^{n}\left(\mathbf{D}_{i}-\bar{\mathbf{D}} \right)^{\top}\left(\mathbf{D}_{i}-\bar{\mathbf{D}}\right)\right]^{-1}\left[ \sum_{i=1}^{n}\left(\mathbf{D}_{i}-\bar{\mathbf{D}}\right)^{\top}\left(Y_{i}^ {\text{Imputed}}(\mathbf{\theta})-\bar{Y}_{i}^{\text{Imputed}}(\mathbf{\theta}) \right)\right], \tag{11}\]
where \(\bar{Y}_{i}^{\text{Imputed}}(\mathbf{\theta})=\frac{\sum_{i=1}^{n}Y_{i}^{\text{ Imputed}}(\mathbf{\theta})}{n}\). The whole estimation procedure is briefly summarized in Algorithm 1 in the supplementary material.
In practical implementations, when the initial estimator is consistent and asymptotically normal, such as the induced smoothing Gehan estimator, \(\hat{\mathbf{\theta}}\) is also consistent and asymptotically normal (Jin et al., 2006). Multiplier resampling can be applied to approximate the variance of the resulting estimator.
### Causal effect estimation
#### 4.2.1 Regression adjustment approach
As detailed in Section 3.1, after fitting the full outcome regression FAFT model, adjusting the confounding effects takes two more steps: first reconstructing the adjusted responses and then refitting the FAFT. Algorithm 2 in the supplementary material summarizes the whole procedure.
#### 4.2.2 FIPW approach
Ideally, the functional weights are expected to achieve the balance condition for each observed subject, i.e.,
\[\mathbb{E}\left(w_{i}^{*}\mathbf{A}_{i}^{*}\mathbf{Z}_{i}^{*\top}\right)=0, \quad i=1,...,n. \tag{12}\]
The optimization process involves the selection of a tuning parameter, denoted as \(\rho\), which represents the level of tolerance for imbalance and resolves the non-convexity issue arising in equation (12). The recommended value is \(\rho_{0}=0.1/N\). For further details, refer to Zhang et al. (2021). After getting the \(\hat{w}_{i}^{*}\)'s, according to equation (7), a weighted pseudo-sample is created as \(\mathbf{w}\circ\mathbf{Y}=\left\{w_{1}Y_{1}^{\text{Imputed}},...,w_{n}Y_{n}^{\text{Imputed}}\right\}\), with \(Y_{i}^{\text{Imputed}}\) calculated from equation (9), and is then used to fit the FAFT. Algorithm 3 in the supplementary material summarizes the causal effect estimation.
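A simple diagnostic for the balance condition (12) is the weighted cross-moment matrix between the standardized FPCS and confounders, which the estimated weights should drive toward zero; the sketch below is illustrative, and the commented usage assumes hypothetical arrays `A_star`, `Z_star`, and `w_hat_star`.

```python
import numpy as np

def weighted_cross_moments(w_star, A_star, Z_star):
    # Empirical version of E[w* A* Z*'] in equation (12); entries near zero
    # indicate that the weights achieve the covariate balance condition.
    wn = w_star / w_star.sum()
    return (A_star * wn[:, None]).T @ Z_star     # K x p matrix

# Hypothetical usage, comparing imbalance before and after weighting:
# print(weighted_cross_moments(np.ones(len(A_star)), A_star, Z_star))
# print(weighted_cross_moments(w_hat_star, A_star, Z_star))
```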
#### 4.2.3 Double robust approach
Since the double robust approach is a combination of regression adjustment and FIPW, the corresponding algorithm also involves the implementation of Algorithm 2 and Algorithm 3. The details are shown in Algorithm 4 in the supplementary material.
## 5 A simulation study
To evaluate the finite-sample performance of the proposed methods, we conduct simulation studies under two different scenarios, representing light and strong effects of confounding variables. In both scenarios, we consider three different censoring rates.
For the \(i\)-th subject, the functional treatment \(X_{i}(\cdot)\) is given by \(X_{i}(s)=\sum_{k=1}^{K=6}A_{ik}\phi_{k}(s),s\in[0,1].\) The six eigenfunctions are defined as \(\phi_{2k-1}(s)=\sin{(2\pi ks)},\phi_{2k}(s)=\cos{(2\pi ks)},k=1,2,3\). The corresponding FPCS are assumed to be \(A_{i1}=\sqrt{16}W_{i1},A_{i2}=\sqrt{12}W_{i2},A_{i3}=\sqrt{8}W_{i3},A_{i4}=\sqrt{4}W_{i4},A_{i5}=W_{i5}\), and \(A_{i6}=W_{i6}/\sqrt{2}\), where \(W_{i1},...,W_{i6}\) are simulated from a multivariate normal distribution with mean \(\mathbf{0}\) and covariance matrix \(\text{diag}(\sqrt{K},\ldots,\sqrt{K})\) with \(K=6\). Suppose that a three-dimensional covariate vector \(\mathbf{Z}_{i}=\left(Z_{i1},Z_{i2},Z_{i3}\right)^{\top}\) can be observed for each subject; we simulate \(\mathbf{Z}_{i}\) as \(Z_{i1}=W_{i1}+e_{i1},Z_{i2}=0.2W_{i2}+e_{i2}\), and \(Z_{i3}=0.2W_{i3}+e_{i3}\), where \(e_{i1}\sim\mathrm{N}(0,0.5)\) and \(e_{i2},e_{i3}\stackrel{{\mathrm{i.i.d}}}{{\sim}}\mathrm{N}(0,1)\). The log-transformed failure times \(Y_{i}=\log T_{i}\) for all subjects are independently generated under two distinct scenarios:
* Scenario 1: \(Y_{i}=\log T_{i}=1+\int_{0}^{1}\beta_{0}(s)X_{i}(s)\;\mathrm{d}s+2Z_{i1}+e_{i},\;e_{i}\sim\mathrm{N}(0,0.5)\);
* Scenario 2: \(Y_{i}=\log T_{i}=1+\int_{0}^{1}\beta_{0}(s)X_{i}(s)\;\mathrm{d}s+2Z_{i1}+2Z_{i 1}^{2}\times A_{i1}+e_{i},\;e_{i}\sim\mathrm{N}(0,0.5)\);
in which the true functional coefficient is assumed to be \(\beta_{0}(s)=2\sin(2\pi s)+\cos(2\pi s)+\sin(4\pi s)/2+\cos(4\pi s)/2,s\in[0,1]\), representing the true causal effect of \(X(\cdot)\). The survival times are then transformed back to the original scale. The right censoring is introduced by independently generating \(C_{i}\) from \(\text{Uniform}(a,b)\), with different \(a\) and \(b\) considered to attain different right censoring rates. The observed survival outcome \(\widetilde{T}_{i}\) is given by \(\min\{T_{i},C_{i}\}\) and the censoring indicator is \(\delta_{i}=I(T_{i}\leqslant C_{i})\). When the sample size of the study is \(N\), the observed data are \(\mathcal{O}=\left\{\widetilde{T}_{i},\delta_{i},X_{i}(s),Z_{i1},Z_{i2},Z_{i3},\quad i=1,...,N\right\}\).
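A data-generating sketch for scenario 1 follows; the grid size and the \(\text{Uniform}(a,b)\) censoring bounds are illustrative placeholders to be tuned toward a target censoring rate, and \(\mathrm{N}(0,0.5)\) is read as a normal with variance 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 400, 200
s = np.linspace(0.0, 1.0, M); ds = s[1] - s[0]

# Eigenfunctions phi_{2k-1} = sin(2*pi*k*s), phi_{2k} = cos(2*pi*k*s), k = 1, 2, 3.
phi = np.vstack([np.sin(2*np.pi*s), np.cos(2*np.pi*s),
                 np.sin(4*np.pi*s), np.cos(4*np.pi*s),
                 np.sin(6*np.pi*s), np.cos(6*np.pi*s)])
W = rng.multivariate_normal(np.zeros(6), np.diag([np.sqrt(6.0)] * 6), size=N)
A = W * np.sqrt(np.array([16.0, 12.0, 8.0, 4.0, 1.0, 0.5]))   # FPCS scalings
X = A @ phi                                                   # treatment curves

Z1 = W[:, 0] + rng.normal(0.0, np.sqrt(0.5), N)
Z2 = 0.2 * W[:, 1] + rng.normal(0.0, 1.0, N)
Z3 = 0.2 * W[:, 2] + rng.normal(0.0, 1.0, N)

beta0 = (2*np.sin(2*np.pi*s) + np.cos(2*np.pi*s)
         + 0.5*np.sin(4*np.pi*s) + 0.5*np.cos(4*np.pi*s))
Y = 1.0 + (X * beta0).sum(axis=1) * ds + 2.0 * Z1 + rng.normal(0.0, np.sqrt(0.5), N)
T = np.exp(Y)

C = rng.uniform(20.0, 200.0, N)             # tune (a, b) for the target rate
T_obs = np.minimum(T, C)
delta = (T <= C).astype(int)
```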
The goal of the analysis is to obtain the estimated causal effect \(\hat{\beta}(\cdot)\) using \(\mathcal{O}\). The naive way fits the model (4) directly, assuming no confounding effect. However, in both scenarios considered here, among the three generated covariates, \(Z_{i1}\) is directly related to both \(X_{i}(\cdot)\) and \(Y_{i}\), while \(Z_{i2}\) and \(Z_{i3}\) are only directly related to \(X_{i}(\cdot)\) but not \(Y_{i}\). The existence of such confounding variables may bias the estimates of the treatment effect if not appropriately accounted for. Especially in scenario
2, the non-linear relationship between \(Z_{i1}\) and \(Y_{i}\) may pose additional challenges for modeling and estimation. We will apply the proposed methods to adjust for confounding and compare them with the naive approach.
Each scenario considers three levels of right censoring (20%, 40%, and 60%) and two sample sizes (\(N=200\) and \(400\)), and each setup is repeated 500 times. The selection criterion for both \(K_{n}\) and \(K_{n}^{*}\) is set to be PVE = 95%.
### Evaluation measures
To assess the accuracy and validity of each estimator and provide insights into their strengths and weaknesses in handling linear and non-linear confounding effects, as well as right censoring, we consider the following measures.
* Define the relative mean square error as \(\mathrm{RMSE}=\Big{(}\int_{\mathcal{S}}(\hat{\beta}(s)-\beta_{0}(s))^{2}\:\mathrm{d}s\Big{)}/\Big{(}\int_{\mathcal{S}}\beta_{0}^{2}(s)\:\mathrm{d}s\Big{)},\) where \(\hat{\beta}(\cdot)\) is the estimated mean coefficient function over all simulation runs. A lower value indicates a more accurate estimate of the true functional causal effect.
* Define the integrated squared bias as \(\mathrm{ISB}=\Big{(}\sum_{\mathcal{S}}(\hat{\beta}(s)-\beta_{0}(s))^{2}\Big{)}\:/|\mathcal{S}|,\) where \(|\mathcal{S}|\) is the length of the domain and \(|\mathcal{S}|=1\) in our situation. It characterizes the accuracy relative to the domain and a lower ISB indicates better accuracy.
* The averaged integrated squared error (AISE) is the mean of the integrated squared error (ISE) over all simulation runs. In the \(r\)-th simulation run, \(\mathrm{ISE}_{\text{sim-r}}=\Big{(}\int_{\mathcal{S}}(\hat{\beta}_{\text{sim-r}}(s)-\beta_{0}(s))^{2}\:\mathrm{d}s\Big{)}/|\mathcal{S}|,\) where \(\hat{\beta}_{\text{sim-r}}(\cdot)\) is the estimated functional coefficient. The AISE illustrates the accuracy over all simulation runs and its standard error (SE) measures the estimation variation. By contrast, the median ISE (MISE) provides an additional summary of the accuracy, which is less sensitive to outliers than AISE.
To examine the prediction performance, the 80%/20% splitting rule was applied to each generated dataset for training and testing. In the \(r\)-th simulation run, the root mean squared error is
calculated as
\[\mathrm{Root}\text{-}\mathrm{MSE}_{\text{sim-r}}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{Y}_{\text{sim-r},i,\text{pred}}-Y_{\text{sim-r},i,\text{causal}})^{2}},\]
where \(\hat{Y}_{\text{sim-r},i,\text{pred}}\) is the fitted value and \(Y_{\text{sim-r},i,\text{causal}}\) is the generated true causal outcome for subject \(i\). This measure assesses the accuracy of the predicted outcome compared to the true causal outcome. We report its mean and three quantiles (\(q_{.25},q_{.50}\), and \(q_{.75}\)) over all runs.
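On a common evaluation grid, the measures above (RMSE, ISB, AISE/MISE, and Root-MSE) reduce to a few lines; the sketch below assumes the grid spacing `ds`, a unit-length domain, and arrays of fitted and true values, all as illustrative inputs.

```python
import numpy as np

def rmse(beta_hat_mean, beta0):
    # Relative MSE of the mean estimated coefficient function (ds cancels).
    return np.sum((beta_hat_mean - beta0)**2) / np.sum(beta0**2)

def isb(beta_hat_mean, beta0, domain_len=1.0):
    # Integrated squared bias over the grid points.
    return np.sum((beta_hat_mean - beta0)**2) / domain_len

def ise(beta_hat, beta0, ds, domain_len=1.0):
    # Integrated squared error for one run; average for AISE, median for MISE.
    return np.sum((beta_hat - beta0)**2) * ds / domain_len

def root_mse(Y_pred, Y_causal):
    return np.sqrt(np.mean((Y_pred - Y_causal)**2))
```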
### Simulation results
The proposed three causal approaches gave rise to five causal estimators, namely RegAdj, FIPW.para (FIPW with weights estimated parametrically), FIPW.np (FIPW with weights estimated nonparametrically), DR.para (DR with weights estimated parametrically), and DR.np (DR with weights estimated nonparametrically). The results presented below are based on \(N=400\), with similar results for \(N=200\) included in Section 3 of the supplementary material.
#### 5.2.1 Results of estimation
Table 1 summarizes the four evaluation measures that assess the precision of the estimated functional coefficient \(\hat{\beta}(\cdot)\) under the two scenarios, each with three levels of censoring rates. In scenario 1, where the confounding effect is weak and the dependence relationship is linear, all proposed causal estimators exhibit smaller errors than the naive estimator. When it comes to scenario 2, where the confounding effect is stronger and the correlation is more complex, the naive approach introduces greater bias into the causal estimation. In contrast, the proposed methods and algorithms successfully handle this situation. The much smaller means of RMSE, AISE, MISE, and ISB indicate a substantial improvement in estimation accuracy, and the significant reduction in the SE of AISE demonstrates the stability of the proposed algorithms.
The estimated mean functional coefficients are displayed in Figure 1, with different estimators represented by variations in line colors and point shapes. The shaded area represents the confidence interval (CI) for \(\hat{\beta}(\cdot)\) over all replications. As shown in Figure 1, the naive estimator (the dark yellow line with diamond shapes) deviates from the truth (the black line with squares)
and has a 95% CI that does not cover the truth in scenario 1, and performs far worse in scenario 2. By contrast, the five newly proposed estimators are shown to be close to \(\beta_{0}(\cdot)\). In scenario 1, three estimators (DR.para in bright yellow, DR.np in dark blue, and RegAdj in orange) overlap with the truth, indicating less estimation bias than FIPW.para and FIPW.np. This is because the assumed model in the regression adjustment approach is similar to the true model used to generate our data. When this assumption is heavily violated, as in scenario 2, the FIPW approach (FIPW.para and FIPW.np) appears to have the advantage due to its ability to handle model misspecification. Another finding is that the FIPW approach yields a larger SE of AISE among the five new estimators under scenario 1, while it becomes much smaller than the others under scenario 2. This difference is due to the possibility of extreme estimated weights, which add instability to the estimation process. When the imbalance among confounding variables is weak (as in scenario 1), this instability contributes a larger proportion of the total variation, leading to a larger SE than the others. In contrast, when the imbalance among confounding variables is stronger (as in scenario 2), the extreme estimated weights have less impact on the total variation, resulting in a smaller SE than the others.
When looking at the DR approach, we find that it performs between the regression adjustment and the FIPW approaches, reflecting its "double robust" property. This suggests that the DR approach outperforms the regression approach when the regression model is misspecified and is more stable when extreme estimated weights exist.
#### 5.2.2 Results of prediction
Table 2 includes in-sample and out-of-sample Root-MSE measures, displayed as the mean and the 25%, 50%, and 75% quantiles (\(q_{.25},q_{.50}\), and \(q_{.75}\)). Compared to the naive estimator, the proposed approaches show better accuracy and efficiency in predicting causal outcomes, illustrated by a lower mean of Root-MSE and less variation among quantiles. Even though all estimators decrease in performance with more complex confounding effects and higher censoring rates, the proposed methods always outperform the naive approach in a more robust way. When comparing the five proposed estimators, once again, the double robust (DR.para and DR.np) and regression adjustment approaches show more accurate predictions in scenario 1, while the FIPW approach (FIPW.para and FIPW.np)
performs better in scenario 2, as discussed in Section 5.2.1.
#### 5.2.3 Different right censoring rates
Higher right censoring rates consistently lower estimation and prediction accuracy across scenarios and estimators due to information loss resulting from censoring. This creates uncertainty during imputation, leading to increased bias and variation in causal estimates and predictions.
As observed in Table 1, the SE of AISE for all estimators increases with increasing censoring rates under both scenarios. While the mean of AISE remains relatively stable under scenario 1, it exhibits a clear increasing trend under scenario 2; in particular, there is a pronounced jump in AISE and its SE in the transition from a 40% to a 60% censoring rate. However, in comparison to the naive estimator, the proposed methods exhibit greater reliability and robustness. In particular, the FIPW demonstrates superiority over the DR in scenario 2 as the censoring rate increases. This is due to the increased bias that arises from utilizing the imputed outcomes of censored subjects as their true observations during the final step of estimating the DR estimators. This bias becomes more substantial and contributes to a decrease in estimation accuracy in the presence of a misspecified regression model in scenario 2. A similar pattern can also be observed in Table 2 with increasing censoring rates.
## 6 An application on AD
We implemented the proposed functional causal framework to analyze the data from the ADNI observational study, which comprises survival information about AD diagnosis, clinical measurements, and MRI records. In contrast to previous studies, we aimed to identify the causal relationship between subregions within the hippocampus and the time of progression from MCI to AD, to contribute to a better understanding of the mechanisms underlying AD progression, and to provide insight into potential targets for early intervention.
For our purpose, 373 MCI subjects in the ADNI-1 study along with their imaging and clinical measures are considered in the analysis. Among them, 161 MCI subjects progressed to AD during
the study and the remaining 212 MCI subjects did not convert to AD prior to the study's end. Thus, the time of conversion from MCI to AD should be treated as time-to-event data with a censoring rate of 56.9%. The clinical characteristics of participants include age, education length, ADAS-Cog score, gender (0=male; 1=female), handedness (0=right; 1=left), marital status (0=single; 1=married), retirement (0=no; 1=yes), and apolipoprotein E (APOE) genetic covariates. The APOE is defined based on two single nucleotide polymorphisms (SNPs), rs429358 and rs7412, and produced a 3-allele haplotype, resulting in \(\epsilon 2,\epsilon 3,\) and \(\epsilon 4\) variants. Descriptive statistics are summarized in Table 4, after categorizing the 373 MCI subjects into four groups based on their observed survival outcomes. The hippocampus image data was treated as a functional treatment and was represented as a matrix with dimension \((N,p)=(373,30000)\) after being preprocessed (Kong et al., 2018). Each row contained a subject's hippocampal radial distances at 30,000 surface points on the left and right hippocampus surfaces, which is a summary statistic of the hippocampal shape and size and defined as the distance between the medial core of the hippocampus and the corresponding vertex.
Specifically, we applied FPCA to the treatment with the top 12 FPCS (\(K_{n}\)=12, PVE=70%) selected to summarize the image data. To investigate potential correlations between the functional treatment and each clinical measurement, we calculated the absolute Pearson correlation between each of the top 12 FPCs and each continuous covariate, as well as the absolute point-biserial correlation for each categorical covariate. We reported the weighted absolute Pearson and point-biserial correlations to assess the performance of the estimated weights in improving covariate balance. We calculated weights both parametrically and nonparametrically, using three different values of the tuning parameter: \(\rho_{0}=0.1/N\) (default), \(\rho_{1}=1/N\) (more tolerance of imbalance), and \(\rho_{2}=0.01/N\) (lower tolerance). The results are denoted as unweighted, weighted.para, weighted.np.0, weighted.np.1, and weighted.np.2 in Figure 2 (a). As shown in all eight plots, the treatment is correlated with all covariates. The usage of weights is able to reduce the correlation over the top 12 FPCs in general, but it is clear that the nonparametrically estimated weights perform better in improving the covariate balance condition. In addition, nonparametric weighting performs robustly across different \(\rho\)'s. For age, handedness, and retirement, nonparametric weighting effectively lowers the correlation value of the first FPC and keeps the correlation of the second to twelfth FPCs the lowest. In contrast, weighted.para reduces the imbalance less, possibly due
to misspecified Gaussian distributions when calculating the weights parametrically.
We evaluated the causal effect using all proposed estimators. When using the three different tuning parameters in FIPW.np estimation, the results are quite close to each other. The FIPW.para also gives similar estimates. However, the regression adjustment approach yields estimators different from the FIPW approach, and the double robust approach performs between them. This might be because the regression adjustment and double robust approaches fit the FAFT twice and thus rely more on the imputed outcomes. Since the censoring rate is quite high in the study, the resulting estimators are influenced, as shown in the simulation study. We included all causal estimators in Section 3 of the supplementary material and presented the FIPW.np.0 (FIPW.np using \(\rho_{0}=0.1/N\)) estimator in Figure 2 (b) and (c). It can be clearly seen that the subfield of CA1 on both hippocampi has a negative effect on the survival time, indicating that the thicker these areas on the hippocampus are, the shorter the time to convert to AD. Compared to the work of Kong et al. (2018) and Yang et al. (2021), our work identified more precise and refined subregions. Specifically, the previous works focused on the conditional effect on general subfields as shown in panel (d). As shown in panel (e), our work examined the causal effect and gave more precise subregions, which is in line with the findings in the medical research (Bienkowski et al., 2018). The reason for such a causal effect lies in the accumulation of neurofibrillary tangles (NFTs) first in the CA1 area and then gradually in the subiculum, CA2, CA3, and DG (Rao et al., 2022), which might be considered potential targets for early intervention.
## 7 Discussion
The main contribution of this paper is the development of the FAFT model and a causal inference framework for observational studies involving time-to-event data and infinite-dimensional functional treatments. Due to the lack of a direct functional survival model, we propose a FAFT model to incorporate infinite-dimensional imaging data for a survival outcome. Three causal inference approaches were developed to adjust for confounding effects and yield causal estimators. The results of the simulation study indicated the appealing numerical performance of the proposed methods in balancing confounding variables, estimating causal effects, and predicting survival outcomes, and demonstrated their robustness with respect to different right censoring rates. In addition, we successfully
identified more precise subregions of the hippocampus and made the first endeavor to study its causal effect on the conversion time from MCI to AD.
The proposed methods may be generalized to other applications. For example, the framework is almost directly applicable to handling multidimensional continuous treatments (e.g., Kong et al. (2019)), multidimensional functional treatments, and functional/categorical outcomes. With slight modifications, the proposed methods can be used to study interval-censored data and other survival models such as the generalized transformation model. We list several directions for future work. The proposed weights are only capable of controlling the correlation between treatments and covariates. In order to improve the stability of the weights and the performance of the weighted estimators, the weights could be constructed to control the balancing error of the treatments and any outcome function on confounders. A possible solution could be to estimate weights via the kernel method based on a reproducing kernel Hilbert space (Wang et al., 2021). Another direction is to incorporate longitudinal assessments of the functional treatment, such as multiple MRI images over 5 years for one subject. In this situation, a shared confounding structure is needed in order to ensure the identifiability and estimation of causal effects (Kong et al., 2022). Thus, it is possible to generalize our method by including the latent confounding structure for a better understanding of disease progression and higher accuracy of survival prediction.
|
2307.08168 | Enabling Efficient, Reliable Real-World Reinforcement Learning with
Approximate Physics-Based Models | We focus on developing efficient and reliable policy optimization strategies
for robot learning with real-world data. In recent years, policy gradient
methods have emerged as a promising paradigm for training control policies in
simulation. However, these approaches often remain too data inefficient or
unreliable to train on real robotic hardware. In this paper we introduce a
novel policy gradient-based policy optimization framework which systematically
leverages a (possibly highly simplified) first-principles model and enables
learning precise control policies with limited amounts of real-world data. Our
approach $1)$ uses the derivatives of the model to produce sample-efficient
estimates of the policy gradient and $2)$ uses the model to design a low-level
tracking controller, which is embedded in the policy class. Theoretical
analysis provides insight into how the presence of this feedback controller
overcomes key limitations of stand-alone policy gradient methods, while
hardware experiments with a small car and quadruped demonstrate that our
approach can learn precise control strategies reliably and with only minutes of
real-world data. | Tyler Westenbroek, Jacob Levy, David Fridovich-Keil | 2023-07-16T22:36:36Z | http://arxiv.org/abs/2307.08168v2 | # Feedback is All You Need: Real-World Reinforcement Learning with Approximate Physics-Based Models
###### Abstract
We focus on developing efficient and reliable policy optimization strategies for robot learning with real-world data. In recent years, policy gradient methods have emerged as a promising paradigm for training control policies in simulation. However, these approaches often remain too data inefficient or unreliable to train on real robotic hardware. In this paper we introduce a novel policy gradient-based policy optimization framework which systematically leverages a (possibly highly simplified) first-principles model and enables learning precise control policies with limited amounts of real-world data. Our approach \(1)\) uses the derivatives of the model to produce sample-efficient estimates of the policy gradient and \(2)\) uses the model to design a low-level tracking controller, which is embedded in the policy class. Theoretical analysis provides insight into how the presence of this feedback controller overcomes key limitations of stand-alone policy gradient methods, while hardware experiments with a small car and quadruped demonstrate that our approach can learn precise control strategies reliably and with only minutes of real-world data.
Figure 1: (Left) Schematic of the proposed policy structure, the crucial element of which is a low-level stabilizing controller which improves the smoothness properties of the underlying problem, improving learning. (Middle) Still frames depicting the approximate paths taken by a car and quadruped during test-time. (Overlaid) Top-down view of the car executing two laps around a figure-8 before and after training.
## 1 Introduction
Reliable, high-performance robot decision making revolves around the robot's ability to learn a control policy which effectively leverages complex real-world dynamics over long time-horizons. This presents a challenge, as constructing a highly accurate physics-based model for the system using first-principles is often impractical. In recent years, reinforcement learning methods built around policy gradient estimators have emerged as a promising general paradigm for learning an effective policy using data collected from the system. However, in current practice these approaches are often too data-inefficient or unreliable to train with real hardware data, leading many approaches to train in high-fidelity simulation environments [1; 2; 3]. However, there inevitably exists a gap between simulated and physical reality, leaving room to improve policy performance in the real world. In this paper, we demonstrate how to systematically leverage a physics-based model to yield highly efficient and reliable policy optimization techniques capable of learning with real-world data.
Modern techniques for policy learning generally fall into two categories: model-free [8; 9; 10; 11; 12] and model-based [4; 5; 6; 7]. Model-free approaches learn a mapping from states to inputs directly from data. These approaches are fully general and can synthesize high-performance policies, but are extremely data-inefficient. In contrast, model-based approaches use the collected data to fit a predictive model to estimate how the system will behave at points not contained in the training set. While these approaches are more data-efficient, they inevitably introduce bias into policy optimization algorithms, which limits the precision and performance of the resulting control policy.
However, due to the unstable nature of many robotic systems, both of these paradigms suffer from a more fundamental challenge: minute changes to the control policy can greatly impact performance over long time horizons. This "exploding gradients" phenomenon leads the variance of policy gradient algorithms to grow exponentially with time and renders the underlying policy learning problem ill conditioned, making gradient-based methods slow to converge. Model bias also compounds rapidly over time, limiting the effectiveness of otherwise efficient model-based approaches.
As illustrated in Fig. 1, this paper systematically exploits an approximate physics-based model and low-level feedback control to overcome these challenges in policy learning. Concretely, the contributions of this paper are:
* We introduce a novel framework which uses the approximate model to simultaneously design \(1)\) a policy gradient estimator and \(2)\) low-level tracking controllers which we then embed into the learned policy class. Using the model to construct the gradient estimator removes the need to learn about the real-world dynamics from scratch, while the low-level feedback controller prevents gradient estimation error from "exploding".
* Theoretical analysis and illustrative examples demonstrate how we overcome exponential dependencies in the variance, smoothness, and model-bias of policy gradient estimators.
* We validate our theoretical findings with a variety of simulated and physical experiments, ultimately demonstrating our method's data efficiency, run-time performance, and most importantly, ability to overcome substantial model mismatch.
## 2 Related Work
While a wide range of both model-based [4; 5; 6; 7] and model-free [8; 9; 10; 11; 12] reinforcement learning methods exist, the body of work most closely related to our own are works that seek to reduce model bias for policy optimization algorithms. As prior works have noted [13; 14; 15], there are two sources of potential error when using a model. The first source of error can arise if the model is used to simulate or 'hallucinate' trajectories for the system which are then added to the data set [16; 17; 18; 19]. While this approach yields a larger training set, it also introduces bias as the trajectories generated by the model can rapidly diverge from the corresponding real-world trajectory. To overcome this source of error, a number of works [13; 14; 15] have proposed policy gradient estimators which \(1)\) collect real-world trajectories and \(2)\) use the derivatives of a (possibly learned)
model to propagate approximate policy gradient information along these trajectories. Evaluating the gradient along real trajectories removes the first source of error. However, inaccuracies in the derivatives of the model lead to a second source of error and, as we demonstrate in Section 5, these errors can grow exponentially over long time horizons for unstable robotic systems. We demonstrate how low-level feedback control can overcome this second source of error, while reducing variance and improving conditioning. Altogether, this enables us to use even highly inaccurate physics-based models to accelerate learning, using its derivatives and accompanying low-level controller to produce precise yet sample efficient estimates of the policy gradient.
## 3 Problem Formulation
Our primary goal is to derive data-efficient, reliable learning algorithms capable of controlling real-world robotic systems, such as the scale car and quadrupedal robot depicted in Fig. 1.
**First-Principles Dynamics Models:** We assume access to a simplified, physics-based model of the environment dynamics of the form:
\[x_{t+1}=\hat{F}(x_{t},u_{t}), \tag{1}\]
where \(x_{t}\in\mathcal{X}\subset\mathbb{R}^{n}\) is the _state_, \(u_{t}\in\mathcal{U}\subset\mathbb{R}^{m}\) is the _input_ and the (potentially nonlinear) map \(\hat{F}\colon\mathcal{X}\times\mathcal{U}\to\mathcal{X}\) determines how the state evolves over discrete time steps \(t\in\mathbb{N}\). To make the modelling process and down-stream controller synthesis tractable, such models are necessarily built on simplifying assumptions. For example, the model we use to control the RC car in Fig. 1 neglects physical quantities such as the current velocity of the wheels. Nonetheless, such models capture the basic structure of how controller inputs will affect desired quantities (such as position) over time, and are highly useful for designing effective control architectures.
**Reinforcement Learning on the Real-World System:** Although many reinforcement learning frameworks model the environment as a stochastic process, to aid in our analysis, we will assume that the real-world dynamics evolve deterministically, according to the (possibly nonlinear) relation:
\[x_{t+1}=F(x_{t},u_{t}). \tag{2}\]
To control the real-world system we will optimize over a controller architecture of the form \(u_{t}=\pi_{t}^{\theta}(x_{t})\) where \(\pi^{\theta}=\{\pi_{t}^{\theta}\}_{t=0}^{T-1}\) represent the overall policy, \(T<\infty\) is the finite horizon for the task we wish to solve, \(\theta\in\Theta\subseteq\mathbb{R}^{p}\) is the policy parameter, and each map \(\pi_{t}^{\theta}\colon\mathcal{X}\to\mathcal{U}\) is assumed to be differentiable in both \(x\) and \(\theta\). Thus equipped, we pose the following policy optimization problem:
\[\max_{\theta\in\Theta}\mathcal{J}(\theta):=\mathbb{E}_{x_{0}\sim D}[J_{T}( \theta;x_{0})]\ \ \ \text{where}\ \ \ J_{T}(\theta;x_{0}):=\sum_{t=0}^{T}R(x_{t}). \tag{3}\]
Here, \(D\) is the probability density of the initial state \(x_{0}\) and \(R\) is the (differentiable) reward function.
## 4 Approximating the Policy Gradient with an Imprecise Dynamics Model
In this section we demonstrate how to calculate the policy gradient by differentiating the real-world dynamics map \(F\) along trajectories generated by the current policy. We then introduce the estimator used in this paper, which replaces the derivatives of \(F\) with the derivatives of the first-principles model \(\hat{F}\). We will initially focus on the gradient \(\nabla J_{T}(\theta;x_{0})\) of the reward experienced when unrolling the policy from a single initial condition \(x_{0}\in\mathcal{X}\), and then discuss how to approximate the total policy gradient \(\nabla\mathcal{J}(\theta)\) using a batch estimator. To ease notation, for each \(x_{0}\in\mathcal{X}\) and \(\theta\in\Theta\) we capture the resulting real-world trajectory generated by \(\pi^{\theta}\) via the sequence of maps defined by:
\[\phi_{t+1}^{\theta}(x_{0})=F\big{(}\phi_{t}^{\theta}(x_{0}),\pi_{t}^{\theta}( \phi_{t}^{\theta}(x_{0}))\big{)},\ \ \ \ \phi_{0}^{\theta}(x_{0})=x_{0}.\]
**Structure of the True Policy Gradient**: We first fix the initial condition \(x_{0}\in\mathcal{D}\) and policy parameter \(\theta\in\Theta\), and investigate the structure of the true policy gradient \(\nabla J_{T}(\theta;x_{0})\). We let \(\{x_{t}\}_{t=0}^{T}\) and \(\{u_{t}\}_{t=0}^{T-1}\) (with \(x_{t}=\phi_{t}^{\theta}(x_{0})\) and \(u_{t}=\pi_{t}^{\theta}(x_{t})\)) denote the corresponding sequences of states and inputs generated by the policy \(\pi^{\theta}\). The policy gradient captures how changes to the controller
parameters will affect the resulting trajectory and the accumulation of future rewards. We use the following shorthand to capture the _closed-loop sensitivity_ of the state and input to changes in the policy parameters:
\[\frac{\partial x_{t}}{\partial\theta}:=\frac{\partial}{\partial\theta}\phi_{t}^ {\theta}(x_{0}),\hskip 14.226378pt\frac{\partial u_{t}}{\partial\theta}:=\frac{ \partial}{\partial\theta}\pi_{t}^{\theta}(\phi_{t}^{\theta}(x_{0})).\]
These terms depend on the derivatives of the dynamics, which we denote with:
\[A_{t}=\frac{\partial}{\partial x}F(x_{t},u_{t}),\hskip 14.226378ptB_{t}=\frac{ \partial}{\partial u}F(x_{t},u_{t}),\hskip 14.226378ptK_{t}=\frac{\partial}{ \partial x}\pi_{t}^{\theta}(x_{t}). \tag{4}\]
**Proposition 1**.: _The policy gradient is given by the following expression:_
\[\nabla J_{T}(\theta;x_{0})=\sum_{t=0}^{T}\nabla R(x_{t})\cdot\frac{\partial x _{t}}{\partial\theta},\;\text{where} \tag{5}\]
\[\frac{\partial x_{t}}{\partial\theta}=\sum_{t^{\prime}=0}^{t-1}\Phi_{t,t^{ \prime}}B_{t^{\prime}}\frac{\partial\pi_{t^{\prime}}^{\theta}}{\partial\theta},\hskip 14.226378pt \Phi_{t,t^{\prime}}:=\prod_{s=t^{\prime}+1}^{t-1}A_{s}^{cl},\hskip 14.226378pt \text{and}\;A_{t}^{cl}=A_{t}+B_{t}K_{t}.\]
For proof of the result see the supplementary material. The first expression in (5) calculates the gradient in terms of the sensitivities \(\frac{\partial x_{t}}{\partial\theta}\), while the latter expressions demonstrate how to compute this term using the derivatives of the model and policy. In (5) the term \(\Phi_{t,t^{\prime}}B_{t^{\prime}}\) captures how a perturbation to the policy at time \(t^{\prime}\) and state \(x_{t^{\prime}}\) propagates through the closed-loop dynamics to affect the future state at time \(t>t^{\prime}\). As we investigate below, when the robotic system is unstable these terms can grow exponentially large over long time horizons, leading to the exploding gradients phenomenon and the core algorithmic challenges we seek to overcome.
**Approximating the Policy Gradient Using the Model:** We approximate the policy gradient \(\nabla_{\theta}J_{T}(\theta;x_{0})\) using the approximate physics-based model \(\hat{F}\) in (1). Holding \(x_{0}\in\mathcal{X}\), \(\theta\in\Theta\), and the resulting real-world trajectory \(\{x_{t}\}_{t=0}^{T}\), \(\{u_{t}\}_{t=0}^{T-1}\) fixed as above, we denote the derivatives of the _model_ along this trajectory as:
\[\hat{A}_{t}=\frac{\partial}{\partial x}\hat{F}(x_{t},u_{t}),\hskip 42.679134pt \hat{B}_{t}=\frac{\partial}{\partial u}\hat{F}(x_{t},u_{t}). \tag{6}\]
We can then construct an estimate for \(\nabla J_{T}(\theta;x_{0})\) of the form:
\[\nabla_{\theta}\widehat{J_{T}(\theta;x_{0})}=\sum_{t=0}^{T}\nabla R(x_{t}) \cdot\frac{\widehat{\partial x_{t}}}{\partial\theta},\;\text{where} \tag{7}\]
\[\frac{\widehat{\partial x_{t}}}{\partial\theta}=\sum_{t^{\prime}=0}^{t-1} \hat{\Phi}_{t,t^{\prime}}\hat{B}_{t^{\prime}}\frac{\partial\pi_{t^{\prime}}^{\theta}} {\partial\theta},\hskip 14.226378pt\hat{\Phi}_{t,t^{\prime}}:=\prod_{s=t^{ \prime}+1}^{t-1}\hat{A}_{s}^{cl},\hskip 14.226378pt\text{and}\;\hat{A}_{t}^{ cl}=\hat{A}_{t}+\hat{B}_{t}K_{t}.\]
**Remark 1**.: _Note that this estimator can be evaluated by \(1)\) recording the real-world trajectory which arises when policy \(\pi^{\theta}\) is applied starting from initial state \(x_{0}\), and then \(2)\) using the derivatives of the model \(\hat{F}\) to approximate the derivatives of the real-world system along that trajectory. Effectively, the only approximation here is of the form \(\Phi_{t,t^{\prime}}B_{t^{\prime}}\approx\hat{\Phi}_{t,t^{\prime}}\hat{B}_{t^{ \prime}}\) when calculating the estimate of the system sensitivity \(\frac{\partial x_{t}}{\partial\theta}\approx\frac{\widehat{\partial x_{t}}}{\partial\theta}\). In Sections 5 and 6, we study what causes this approximation to break down over long time horizons, and how properly-structured feedback controllers can help._
**Remark 2**.: _While the policy gradient approximation given by (7) will prove convenient for analysis, this formula requires numerous 'forwards passes' to propagate derivatives forwards in time along the trajectory. As we demonstrate in the supplementary material, in practice this approximation can be computed more efficiently by 'back-propagating through time'._
**Batch Estimation:** To approximate the gradient of the overall objective \(\nabla\mathcal{J}(\theta)\), we draw \(N\) initial conditions \(\{x_{0}^{i}\}_{i=1}^{N}\) independently from the initial state distribution \(D\), compute each approximate gradient \(\nabla\widehat{J_{T}(\theta;x_{0}^{i})}\) as in (7), and finally compute:
\[\nabla\mathcal{J}(\theta)\approx\hat{g}_{T}^{N}(\theta;\{x_{0}^{i}\}_{i=1}^{N} ):=\frac{1}{N}\sum_{i=1}^{N}\nabla\widehat{J_{T}(\theta;x_{0}^{i})}. \tag{8}\]
We use this estimator in our overall policy gradient algorithm, which is outlined in Algorithm 1.
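For concreteness, below is a minimal Python sketch of a forward-pass implementation of Eqs. (7)-(8); this is not the authors' implementation (which is in Julia and, per Remark 2, uses backpropagation through time), and all callables (`F`, `policy`, `dR`, `A_hat`, `B_hat`, `dpi_dx`, `dpi_dth`) are illustrative placeholders that the caller must supply.

```python
import numpy as np

def estimate_policy_gradient(F, dR, A_hat, B_hat, dpi_dx, dpi_dth, policy, x0, T):
    """Forward-pass estimate of grad J_T(theta; x0), Eq. (7): roll out the *real*
    dynamics F while propagating sensitivities with the *model* derivatives."""
    x = x0.copy()
    n = x0.size
    p = dpi_dth(0, x).shape[1]        # number of policy parameters
    dx_dth = np.zeros((n, p))         # dx_0/dtheta = 0, so the t = 0 term vanishes
    grad = np.zeros(p)
    for t in range(T):
        u = policy(t, x)
        # chain rule through state feedback: du_t/dtheta = K_t dx_t/dtheta + dpi_t/dtheta
        du_dth = dpi_dx(t, x) @ dx_dth + dpi_dth(t, x)
        # model-based sensitivity propagation with hat A_t, hat B_t of Eq. (6)
        dx_dth = A_hat(x, u) @ dx_dth + B_hat(x, u) @ du_dth
        x = F(x, u)                   # real-world step (on hardware: the observed state)
        grad += dR(x) @ dx_dth        # accumulate grad R(x_{t+1}) . dx_{t+1}/dtheta
    return grad
```

A batch estimate in the sense of Eq. (8) is then obtained by averaging this quantity over \(N\) initial conditions drawn from \(D\).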
## 5 Exploding Gradients: Key Challenges for Unstable Robotic Systems
We now dig deeper into the structure of the policy gradient and our model-based approximation. We repeatedly appeal to the following scalar linear system to illustrate how key challenges arise:
**Running Example:** Consider the case with true and modeled dynamics given respectively by:
\[x_{t+1}=F(x_{t},u_{t})=ax_{t}+bu_{t}\quad\text{and}\quad x_{t+1}=\hat{F}(x_{t},u _{t})=\hat{a}x_{t}+\hat{b}u_{t}, \tag{9}\]
where \(a,\hat{a},b,\hat{b}>0\) and \(x_{t},u_{t}\in\mathbb{R}\). Suppose we optimize over policies of the form \(u_{t}=\pi_{t}^{\theta}(x_{t})=\bar{u}_{t}\) where \(\theta=(\bar{u}_{0},\bar{u}_{1},\ldots,\bar{u}_{T-1})\in\mathbb{R}^{T}\) are the policy parameters. In this case, the policy parameters \(\{\bar{u}_{t}\}_{t=0}^{T-1}\) specify a sequence of open-loop control inputs applied to the system. Retaining the conventions developed above, along every choice of \(\{\bar{u}_{t}\}_{t=0}^{T-1}\) and the resulting trajectory \(\{x_{t}\}_{t=0}^{T}\) we have \(A_{t}=a\), \(B_{t}=b\), \(\hat{A}_{t}=\hat{a}\), \(\hat{B}_{t}=\hat{b}\) and \(K_{t}=0\), and thus we have \(\Phi_{t,t^{\prime}}=a^{t-t^{\prime}-1}\) and \(\hat{\Phi}_{t,t^{\prime}}=\hat{a}^{t-t^{\prime}-1}\). When \(a,\hat{a}>1\), the system (and model) are _passively unstable_ [20, Chapter 5], and small changes to the policy compound over time, as captured by \(\|\Phi_{t,t^{\prime}}\|\) and \(\|\hat{\Phi}_{t,t^{\prime}}\|\) growing exponentially with the difference \(t-t^{\prime}\), along with the formula for the gradients (5).
### Exploding Model-Bias
Recall that the aforementioned estimator for \(\nabla J_{T}(\theta;x_{0})\) only introduces error in the term \(\frac{\partial x_{t}}{\partial\theta}\approx\frac{\widehat{\partial x_{t}}}{ \partial\theta}\) and in particular \(\Phi_{t,t^{\prime}}B_{t^{\prime}}\approx\hat{\Phi}_{t,t^{\prime}}\hat{B}_{t^{ \prime}}\) along the resulting trajectory. We will seek to understand how the point-wise errors in the derivatives of the model \(\Delta A_{t}^{cl}:=\hat{A}_{t}^{cl}-A_{t}^{cl}\) and \(\Delta B_{t}:=\hat{B}_{t}-B_{t}\) propagate over time. Towards this end we manipulate the following difference:
\[\hat{\Phi}_{t,t^{\prime}}\hat{B}_{t^{\prime}}-\Phi_{t,t^{\prime}} B_{t^{\prime}} =\Phi_{t,t^{\prime}}\hat{B}_{t^{\prime}}+\Delta\Phi_{t,t^{\prime}} \hat{B}_{t^{\prime}}-\Phi_{t,t^{\prime}}B_{t^{\prime}}=\Phi_{t,t^{\prime}} \Delta B_{t^{\prime}}+\Delta\Phi_{t,t^{\prime}}\hat{B}_{t^{\prime}} \tag{10}\] \[=\Phi_{t,t^{\prime}}\Delta B_{t^{\prime}}+\big{(}\sum_{s=t^{ \prime}+1}^{t-1}\Phi_{t,s}\Delta A_{s}^{cl}\hat{\Phi}_{s,t^{\prime}}\big{)} \hat{B}_{t^{\prime}}.\]
The last equality in (10) provides a clear picture of how inaccuracies in the derivatives of the model are propagated over time. For example, when approximating \(\hat{\Phi}_{t,t^{\prime}}\hat{B}_{t^{\prime}}\approx\Phi_{t,t^{\prime}}B_{t^ {\prime}}\) the error \(\Delta B_{t^{\prime}}\) is magnified by \(\Phi_{t,t^{\prime}}\), while the error \(\Delta A_{t^{\prime}+1}^{cl}\) is magnified by \(\Phi_{t,t^{\prime}+1}\).
**Running Example:** Continuing with the scalar example, in this case we have \(\Delta B_{t}=\hat{b}-b\) and \(\Delta A_{t}^{cl}=\hat{a}-a\). Moreover, using the preceding calculations, we have \(\hat{\Phi}_{t,t^{\prime}}\hat{B}_{t^{\prime}}-\Phi_{t,t^{\prime}}B_{t^{\prime }}=a^{t-t^{\prime}-1}(\hat{b}-b)+\sum_{s=t^{\prime}+1}^{t-1}a^{t-s-1}\hat{a}^{s- t^{\prime}-1}\hat{b}(\hat{a}-a)\). Thus, when \(a,\hat{a}>1\) and the system is unstable, the errors in the derivatives of the model are magnified exponentially over long time horizons when computing the sensitivity estimate \(\frac{\partial x_{t}}{\partial\theta}\approx\frac{\widehat{\partial x_{t}}}{ \partial\theta}\) and ultimately the gradient estimate \(\nabla J_{T}(\theta;x_{0})\approx\widehat{\nabla J_{T}(\theta;x_{0})}\).
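This amplification is easy to check numerically; the sketch below uses illustrative values of \(a,b,\hat{a},\hat{b}\) (not taken from the paper) for the open-loop policy of the running example.

```python
import numpy as np

a, b = 1.2, 1.0            # true unstable scalar dynamics (a > 1)
a_hat, b_hat = 1.1, 0.9    # misspecified model
t = np.arange(1, 51)
true_sens = a ** (t - 1) * b            # Phi_{t,0} B_0 for the open-loop policy
model_sens = a_hat ** (t - 1) * b_hat   # hat-Phi_{t,0} hat-B_0
err = np.abs(model_sens - true_sens)
print(err[[0, 9, 49]])   # sensitivity error grows exponentially with t
```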
### Exploding Variance
We next illustrate how unstable dynamics can lead our batch estimator \(\hat{g}_{T}^{N}\) to explode over long time horizons \(T\) unless a large number of samples \(N\) are used.
**Running Example:** Consider the case where \(r(x_{t})=-\frac{1}{2}\|x_{t}\|_{2}^{2}\) and the initial state distribution \(D\) is uniform over the interval \([-1,1]\). Suppose we apply the policy \(\theta=(0,\ldots,0)\), so that no control effort is applied. In this case, for every initial condition \(x_{0}\), the resulting state trajectory is given by \(x_{t}=a^{t}x_{0}\), and thus the entries of our gradient estimate are \(\frac{\widehat{\partial J_{T}}}{\partial\bar{u}_{t^{\prime}}}(\theta;x_{0})=-\sum_{t=t^{\prime}+1}^{T}(a^{t}x_{0})\,\hat{a}^{t-t^{\prime}-1}\hat{b}\). Moreover, since \(\mathbb{E}[x_{0}]=0\), the mean of the estimator is \(\mathbb{E}[\hat{g}_{T}^{N}(\theta;\{x_{0}^{i}\}_{i=1}^{N})]=0\), and thus the variance of the estimator is \(\mathbb{E}\big{[}\|\hat{g}_{T}^{N}(\theta;\{x_{0}^{i}\}_{i=1}^{N})\|^{2}\big{]}=\frac{1}{N}\mathbb{E}\big{[}\|\widehat{\nabla J_{T}}(\theta;x_{0})\|^{2}\big{]}\), a quantity which grows exponentially with the horizon \(T>0\).
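The explosion can be reproduced by Monte Carlo; the following sketch (parameter values illustrative, not from the paper) estimates \(\mathbb{E}\|\hat{g}_{T}^{N}\|^{2}\) for increasing horizons.

```python
import numpy as np

rng = np.random.default_rng(0)
a, a_hat, b_hat, N = 1.2, 1.1, 1.0, 8

def grad_hat(x0, T):
    """Entries of the gradient estimate for the zero-input policy (Section 5.2)."""
    g = np.zeros(T)
    for tp in range(T):
        t = np.arange(tp + 1, T + 1)
        g[tp] = -np.sum(a ** t * x0 * a_hat ** (t - tp - 1) * b_hat)
    return g

for T in (5, 15, 25):
    batches = np.array([np.mean([grad_hat(x, T) for x in rng.uniform(-1, 1, N)], axis=0)
                        for _ in range(500)])
    print(T, np.mean(np.sum(batches ** 2, axis=1)))   # E||g_hat||^2 explodes with T
```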
### Rapidly Fluctuating Gradients
Let \(f\colon\mathbb{R}^{q}\to\mathbb{R}\) be a potentially non-convex and twice differentiable objective, such as the ones considered in this paper. In this general setting, well-established results for gradient-based methods characterize the rate of convergence to approximate stationary points of the underlying objective, namely, points \(z\in\mathbb{R}^{q}\) such that \(\|\nabla f(z)\|_{2}<\epsilon\) for some desired tolerance \(\epsilon>0\). A key quantity which controls this rate is the smoothness of the underlying objective, which is typically characterized by assuming the existence of a constant \(L>0\) such that \(\|\nabla f(z_{1})-\nabla f(z_{2})\|<L\|z_{1}-z_{2}\|\) for each \(z_{1},z_{2}\in\mathbb{R}^{q}\). When the constant \(L\) is very large, the gradient can fluctuate rapidly, and small step-sizes may be required to maintain the stability of gradient-based methods [21], slowing the rate of convergence for these approaches. Many analyses control these fluctuations using the Hessian of the objective by setting \(L:=\sup_{z\in\mathbb{R}^{q}}\|\nabla^{2}f(z)\|_{i,2}\), where \(\|\cdot\|_{i,2}\) is the induced \(2\)-norm.
Below, our main theoretical results will bound the magnitude of \(\nabla^{2}J(\theta)\), characterizing the smoothness of the underlying policy optimization problem and illustrating the benefits of embedded low-level controllers. We demonstrate how to derive an expression for the Hessian in the Appendix, but provide here a concrete illustration of how it can grow exponentially for unstable systems:
**Running Example:** Consider the case where the quadratic reward \(r(x_{t})=-\frac{1}{2}\|x_{t}\|_{2}^{2}\) is applied to our example scalar system. For every initial condition \(x_{0}\) and choice of policy parameters \(\theta=(\bar{u}_{0},\ldots,\bar{u}_{T-1})\) by inspection we have \(x_{t}=a^{t}x_{0}+\sum_{s=0}^{t-1}a^{t-s-1}b\bar{u}_{s}\), so that the overall objective is concave and given by \(J_{T}(\theta;x_{0})=-\frac{1}{2}\sum_{t=0}^{T}\|a^{t}x_{0}+\sum_{s=0}^{t-1}a^{t-s-1}b\bar{u}_{s}\|^{2}\). The Hessian of the objective can be calculated directly; in particular, the diagonal entries are given by \(\frac{\partial^{2}J_{T}}{\partial\bar{u}_{t}^{2}}=-\sum_{s=t+1}^{T}(a^{s-t-1}b)^{2}\). This demonstrates that \(\|\nabla^{2}J_{T}(\theta;x_{0})\|_{2}\geq|\frac{\partial^{2}J_{T}}{\partial\bar{u}_{0}^{2}}|\) grows exponentially in the time horizon. From the discussion above, this implies that policy gradient methods will be very slow to converge to optimal policies.
## 6 Embedding Low-Level Feedback into the Policy Class
We now demonstrate how we can overcome these pathologies by using the model to design stabilizing low-level feedback controllers which are then embedded into the policy class.
**Running Example:** Let us again consider the simple scalar system and model we have studied thus far, but now suppose we use the model to design a proportional tracking controller of the form \(u_{t}=k(\bar{x}_{t}-x_{t})\), where \(\{\bar{x}_{t}\}_{t=0}^{T}\) represents a desired trajectory we wish to track and \(k>0\) is the feedback gain. We then embed this controller into the overall policy class by choosing the parameters to be \(\theta=(\bar{x}_{0},\bar{x}_{1},\ldots,\bar{x}_{T})\) so that \(u_{t}=\pi_{t}^{\theta}(x_{t})=k(\bar{x}_{t}-x_{t})\). Here, the parameters of the control policy specify the desired trajectory the low-level controller is tasked with tracking. In this case, along each trajectory of the system we will now have \(A_{t}^{cl}=a-bk\), \(\hat{A}_{t}^{cl}=\hat{a}-\hat{b}k\), \(B_{t}=b\) and \(\hat{B}_{t}=\hat{b}\). If the gain \(k>0\) is chosen such that \(|a-bk|<1\) and \(|\hat{a}-\hat{b}k|<1\), then the transition matrices \(\hat{\Phi}_{t,t^{\prime}}=(\hat{A}_{t}^{cl})^{t-t^{\prime}-1}\) and \(\Phi_{t,t^{\prime}}=(A_{t}^{cl})^{t-t^{\prime}-1}\) will both decay exponentially with the difference \(t-t^{\prime}\). Thus, by optimizing _through a low-level tracking controller designed with the model_ we have reduced the sensitivity of trajectories to changes in the controller parameters.
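Repeating the earlier numerical check with this controller in the loop makes the effect concrete; the gain value below is illustrative.

```python
import numpy as np

a, b, a_hat, b_hat = 1.2, 1.0, 1.1, 0.9   # same illustrative values as before
k = 0.8                                    # proportional gain designed on the model
a_cl, a_cl_hat = a - b * k, a_hat - b_hat * k
assert abs(a_cl) < 1 and abs(a_cl_hat) < 1   # both loops are now contracting
t = np.arange(1, 51)
err = np.abs(a_cl_hat ** (t - 1) * b_hat - a_cl ** (t - 1) * b)
print(err.max())   # sensitivity error now stays bounded (in fact decays with t)
```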
**Remark 3**.: _In practice, we may select a control architecture as in Fig. 1 where our parameters are those of a neural network which corrects a desired trajectory and low-level controller. The natural generalization of the damping behavior displayed by the proportional controller above is that the low-level controller is incrementally stabilizing, which means that for every initial condition \(x_{0}\) and
\(\theta\in\Theta\) we will have \(\|\Phi_{t,t^{\prime}}\|\leq M\alpha^{t-t^{\prime}}\). There are many systematic techniques for synthesizing incrementally stabilizing controllers using a dynamical model from the control literature [20; 22]._
We are now ready to state our main result, which demonstrates the benefits using the model to design the policy gradient estimator and embedded feedback controller:
**Theorem 1**.: _Assume that \(1)\) the first and second partial derivatives of \(R_{t}\), \(\pi_{t}^{\theta}\), \(F\) and \(\hat{F}\) are bounded, \(2)\) there exists a constant \(\Delta>0\) such that for each \(x\in\mathcal{X}\) and \(u\in\mathcal{U}\) the error in the model derivatives is bounded by \(\max\{\|\frac{\partial}{\partial x}F(x,u)-\frac{\partial}{\partial x}\hat{F}(x,u)\|,\|\frac{\partial}{\partial u}F(x,u)-\frac{\partial}{\partial u}\hat{F}(x,u)\|\}<\Delta\) and \(3)\) the policy class \(\{\pi_{t}^{\theta}\}_{\theta\in\Theta}\) has been designed such that there exist constants \(M,\alpha>0\) such that for each \(x_{0}\in\mathcal{X}\), \(\theta\in\Theta\), and \(t>t^{\prime}\) we have: \(\max\{\|\Phi_{t,t^{\prime}}\|,\|\hat{\Phi}_{t,t^{\prime}}\|\}<M\alpha^{t-t^{\prime}}\). Letting \(\bar{g}_{T}(\theta)=\mathbb{E}[\hat{g}_{T}^{N}(\theta;\{x_{0}^{i}\}_{i=1}^{N})]\) denote the mean of our gradient estimator, there exist scalars \(C,W,K>0\) such that the bias and variance of our policy gradient estimator are bounded as follows:_
\[\|\nabla\mathcal{J}_{T}(\theta)-\bar{g}_{T}(\theta)\|\leq\begin{cases}CT^{2} \alpha^{T}\Delta&\text{if }\alpha>1\\ CT^{2}\Delta&\text{if }\alpha=1\\ CT\Delta&\text{if }\alpha<1,\end{cases}\quad\mathbb{E}\Big{[}\|\hat{g}_{T}^{N}( \theta)-\bar{g}_{T}(\theta)\|^{2}\Big{]}\leq\begin{cases}\frac{WT^{4}\alpha^{2 T}}{N}&\text{if }\alpha>1\\ \frac{WT^{4}}{N}&\text{if }\alpha=1\\ \frac{WT^{2}}{N}&\text{if }\alpha<1.\end{cases}\]
_Moreover, the smoothness of the underlying policy optimization problem is characterized via:_
\[\|\nabla^{2}\mathcal{J}_{T}(\theta)\|_{2}\leq\begin{cases}KT^{4}\alpha^{3T}& \text{if }\alpha>1\\ KT^{4}&\text{if }\alpha=1\\ KT&\text{if }\alpha<1.\end{cases}\]
Proof of the result can be found in the supplementary material. The result formalizes the intuition built with our example: when the system is passively unstable (and we can have \(\alpha>1\)), the core algorithmic challenges introduced above can arise. However, embedding an (incrementally stabilizing) low-level tracking controller into the policy class can overcome these pathologies (\(\alpha\leq 1\)).
## 7 Experimental Validation
We implement Algorithm 1 in Julia [23] and interface with hardware in C++ using the Robot Operating System (ROS) [24] framework. Per Section 6, for each example below we construct our policy (Fig. 1) around a low-level controller designed using the model which aims to stably track reference trajectories. The neural network outputs \(1)\) the parameters of a spline to define the reference trajectory and \(2)\) feedback gains used by the low-level controller. The neural network is a \(64\times 64\) multilayer perceptron with \(\tanh(\cdot)\) activations that takes in task, time, and/or state feedback information, and is constructed to provide offsets to nominal spline parameters and feedback gains.
**The Benefit of Low-Level Feedback:** We begin by comparing the policy class of Fig. 1 against a policy class in which a neural network directly determines open-loop control inputs (as in Section 5, omitting a low-level stabilizing controller). We use the double pendulum model from [25], and the task requires moving the end effector to a desired location, using a reward function based
Figure 2: Training curves for the double pendulum experiment. Embedding low-level feedback results in better performance both with and without model mismatch.
on Euclidean distance. **First experiment:** We provide the true dynamics to both approaches to observe the variance and conditioning, independent of model mismatch. Each policy was trained using a batch size of \(5\), and training curves for the best learning rate for each approach are depicted in Fig. 1(a), which supports our main theoretical findings. **Second Experiment:** Next we feed Algorithm 1 an approximate model in which the pendulum masses are 50% of their true values. As shown in Fig. 1(b), the unstable dynamics lead to significant model bias, which limits the asymptotic performance of the naive policy without an embedded feedback controller.
**NVIDIA JetRacer:** Next, we hardware-test our approach on an NVIDIA JetRacer 1/10\({}^{\text{th}}\) scale high-speed car using the following simplified dynamics model:
\[\begin{bmatrix}x_{t+1}\\ y_{t+1}\\ v_{t+1}\\ \phi_{t+1}\end{bmatrix}=\begin{bmatrix}x_{t}+v_{t}\cos\left(\phi_{t}\right) \Delta t\\ y_{t}+v_{t}\sin\left(\phi_{t}\right)\Delta t\\ v_{t}+a_{t}\Delta t\\ \phi_{t}+v_{t}\omega_{t}\Delta t\end{bmatrix}, \tag{11}\]
where \(\Delta t>0\) is the discrete time-step, \((x_{t},y_{t},\phi_{t})\in SE(2)\) are the Cartesian coordinates and heading angle of the car, \(v_{t}>0\) is the forward velocity of the car in its local frame, and \((a_{t},\omega_{t})\in U=[0,1]\times[-1,1]\) are the control inputs where \(a_{t}\) is the throttle input percentage and \(\omega_{t}\) is the steering position of the wheels. We note that this model makes several important simplifications: (i) drag is significant on the actual car, but is missing from (11); (ii) proper scaling of the control inputs \((a_{t},\omega_{t})\) has been omitted; (iii) the actual car has noticeable steering bias, and does not follow a straight line when \(\omega_{t}=0\); and (iv) physical quantities such as the current speed of the tires or time-delays in the motor are ignored.
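For reference, a direct translation of the model (11) into a simulation step might look as follows; the time-step value is illustrative, not the one used on the hardware.

```python
import numpy as np

def car_step(state, u, dt=0.02):
    """One Euler step of the simplified car model, Eq. (11).
    state = (x, y, v, phi); u = (a, omega) with a in [0, 1], omega in [-1, 1]."""
    x, y, v, phi = state
    a, omega = u
    return np.array([x + v * np.cos(phi) * dt,
                     y + v * np.sin(phi) * dt,
                     v + a * dt,
                     phi + v * omega * dt])
```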
The task consists of tracking a figure-8 made up of two circles, 3 meters in diameter, with a nominal lap time of \(5.5\,\mathrm{s}\). We implement a backstepping-based tracking controller [20, Ch. 6] for low-level control. As shown in Fig. 1 this controller alone does not ensure accurate tracking, due to inaccuracies in the model used to design it. We select a reward function that is a weighted sum of distance to the track and difference from nominal velocity. The policy was trained with \(2.2\,\mathrm{min}\) of real world data over 8 iterations, each \(16.5\,\mathrm{s}\) long. In Fig. 1, we see a clear improvement in tracking performance.
We now examine the neural network outputs during a single execution of the figure-eight task for the NVIDIA JetRacer hardware experiment, depicted in Fig. 4. We see that the neural network issues corrections on the outside of the track, which is reasonable considering the untrained car was tracking the inside of the track. We note the following controller gain adjustments from the neural network: (i) an overall negative value selected for the feedforward steering gain \(\Delta K_{\omega}\) counteracts the car's inherent steering bias in the positive steering direction; (ii) lower values of the forward velocity gain \(\Delta K_{v}\) were selected when crossing the origin, allowing the car to track more closely at this critical point; and (iii) elevated values of \(\Delta K_{v}\) are selected to speed up the car for the rest of the track, increasing reward.
Figure 3: (Left) Training curves for different algorithms applied to a high-fidelity simulation model of an RC car. (Right) One lap of the quadruped around the figure-8 task with corrected waypoints from the neural network.
Next, we use a high-fidelity simulation environment of the car to benchmark our approach against state-of-the-art reinforcement learning algorithms in Figure 3, in each case optimizing over the feedback control architecture described above. In particular, we compare to the model-based approach MBPO [16] and the model-free approaches SAC [8] and PPO [9]. Each of these approaches learns about the dynamics of the system from scratch; thus, it is unsurprising that our approach converges more rapidly, as it exploits the known physics represented by the model. The use of feedback enables us to take this approach and obtain a high-performing controller, even though the model we use is highly inaccurate, overcoming model-bias.
**Go1 Quadrupedal Robot:** We also replicate the figure-8 tracking experiment on a Unitree Go1 Edu quadrupedal robot to demonstrate the effectiveness of our approach when using a _very highly simplified_ model. The Go1 is an 18-degree-of-freedom system which is controlled in a hierarchical manner that involves the coordination of multiple control levels to enable effective locomotion. At the lowest level, a joint control module generates individual motor torques to actuate the robot's limbs to desired angles and velocities. At the next layer, a kinematic solver converts desired foot placements to joint angles. A gait generation module determines trajectories of foot placements from high-level linear and angular velocity commands issued to the robot. We provide these high-level commands to the Go1 via Unitree's ROS interface1, as outputs from a backstepping-based controller that was formulated using the following simplified dynamical model of the system:
Footnote 1: [https://github.com/unitreerobotics/unitree_ros_to_real](https://github.com/unitreerobotics/unitree_ros_to_real)
\[\begin{bmatrix}x_{t+1}\\ y_{t+1}\\ \phi_{t+1}\end{bmatrix}=\begin{bmatrix}x_{t}+v_{t}\cos\left(\phi_{t}\right) \Delta t\\ y_{t}+v_{t}\sin\left(\phi_{t}\right)\Delta t\\ \phi_{t}+\omega_{t}\Delta t\end{bmatrix}, \tag{12}\]
where \((x_{t},y_{t})\) are the Cartesian coordinates of the base of the robot on the ground plane and \(\phi_{t}\) is its heading. The two inputs to the model are the desired forward velocity \(v_{t}\) and the desired turning rate \(\omega_{t}\). Note that this is an extremely simplified model for the system, with a dynamic structure similar to the model for the car used in the previous example.
Setting a nominal lap time of \(37.7\,\mathrm{s}\), we trained the policy using \(5.9\,\mathrm{min}\) of real-world data over 7 iterations, each \(50.9\,\mathrm{s}\) long. Even though we used a highly simplified model for the dynamics, we again see a clear improvement in performance after training (cf. Fig. 3).
## 8 Limitations
Our approach successfully learns high-performance control policies using only limited data, acquired on physical systems. A key enabler to this end is the embedding of stabilizing low-level feedback within the policy class. However, there are several key limitations. First, for situations such as contact-rich manipulation, it may not be clear how to design a controller with the required
Figure 4: One lap of the car around the figure-8 task before/after training and neural network outputs.
(incremental) stability property. In the future, we hope to overcome this challenge by optimizing over more complex hierarchical control stacks. Second, our approach can fail if the model discrepancy is so large that the initial model-based controller does not reduce the sensitivity of the system. Future work may address these limitations by incorporating techniques for learning stabilizing controllers (e.g., the Lyapunov methods of [26, 27]). Additionally, while our method is highly sample-efficient, it does not take advantage of many established techniques from the reinforcement learning literature, such as value function learning and off-policy training, leaving many directions for algorithmic advances. One particularly interesting direction is to combine the proposed approach with emerging model-based reward shaping techniques [28, 29]. |
2306.07172 | Emerging mesoscale flows and chaotic advection in dense active matter | We study two models of overdamped self-propelled disks in two dimensions,
with and without aligning interactions. Active mesoscale flows leading to
chaotic advection emerge in both models in the homogeneous dense fluid away
from dynamical arrest, forming streams and vortices reminiscent of multiscale
flow patterns in turbulence. We show that the characteristics of these flows do
not depend on the specific details of the active fluids, and result from the
competition between crowding effects and persistent propulsions. Our results
suggest that dense active fluids present a type of `active turbulence' distinct
from collective flows reported in other types of active systems. | Yann-Edwin Keta, Juliane Klamser, Robert L. Jack, Ludovic Berthier | 2023-06-12T15:12:32Z | http://arxiv.org/abs/2306.07172v1 | # Emerging mesoscale flows and chaotic advection in dense active matter
###### Abstract
We study two models of overdamped self-propelled disks in two dimensions, with and without aligning interactions. Active mesoscale flows leading to chaotic advection emerge in both models in the homogeneous dense fluid away from dynamical arrest, forming streams and vortices reminiscent of multiscale flow patterns in turbulence. We show that the characteristics of these flows do not depend on the specific details of the active fluids, and result from the competition between crowding effects and persistent propulsions. Our results suggest that dense active fluids present a type of 'active turbulence' distinct from collective flows reported in other types of active systems.
Footnote †: These authors contributed equally to this work.
Active matter has emerged as an important class of nonequilibrium systems, in which the injection of energy at the level of individual particles can produce emerging collective phenomena at large scales [1]. Among these, collective motion [2] has attracted much interest because of its biological and social interest, _e.g._ for wound healing [3] or crowd management [4]. Collective motion can be ordered, as in flocking [5; 6] where local interactions between individuals can lead to global motion along a given direction, or be more irregular or even chaotic, as in bacterial swarms [7] or active nematics [8] which display intermittent swirling motion.
The term 'active turbulence' [9] recently became popular to describe chaotic mesoscale flows in various systems, from dense epithelial tissues [10] to suspensions of microtubules and kinesin motors [11]. Unlike classical turbulence, active turbulence is observable in the absence of inertia. Moreover, the energy injection is not externally imposed but self-generated at small scales [9]. A recent classification [9] organises active turbulent models into four classes, depending on their symmetries: A model's order parameter can be either polar or nematic; in addition, it is called "wet" if it conserves momentum, for example if hydrodynamic interactions dominate, and "dry" if it does not.
In nematic systems [12; 13; 14] flow derives in wet and dry conditions from an instability in the dynamics of the nematic director field, with an emerging length scale determined by the balance between active and nematic stresses [12; 14]. Long-range velocity correlations in these flows are universal [14]. Most studies of polar active turbulence have either considered wet systems of swimmers [15], or the Toner-Tu-Swift-Hohenberg equation [16; 17], which describes incompressible flows in dry systems. In this latter description, the polarisation and the velocity are assumed to be aligned: this is appropriate in the absence of steric interactions. Diverse particle-based models have also been shown to display some type of active turbulence: extensions of the Vicsek model [18; 19], self-propelled rods [13; 16; 20; 21] and dumbbells [22], microswimmers with hydrodynamic interactions [23; 24]. All these models comply with the existing classification [14].
Here, we establish that the simplest class of active matter models - overdamped self-propelled disks - also develops mesoscale chaotic flows qualitatively similar to active turbulence, see Fig. 1. In two distinct models, we find that the homogeneous dense active fluid develops extended spatial velocity correlations [25; 26; 27; 28; 29; 30] which advect particles along a disordered array of streams and vortices, accompanied by hallmarks of active turbulence, including advective mixing. Within the existing symmetry classification [9], the natural comparison is polar turbulence with dry friction [16] but our results show different scaling behaviour. We attribute this to effects of particle crowding, which is absent from previous descriptions. Based on these observations we argue for a new class of active turbulent behaviour, which should encompass diverse models such as vibrated disks [31], self-aligning self-propelled particles [32; 33], or self-propelled Voronoi models of confluent tissues [34].
We study \(N\) athermal self-propelled particles in a square box of linear size \(L\) with periodic boundary conditions, which follow the overdamped dynamics
\[\dot{\mathbf{r}}_{i}=-\mu\sum_{j\neq i}\nabla_{i}U(r_{ij})+\mu\mathbf{p}_{i}, \tag{1}\]
where \(\mathbf{r}_{i}\) is the position of particle \(i\), \(\mathbf{p}_{i}\) the self-propulsion force, \(\mu\) the particle mobility, \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\), and particles interact via a repulsive Weeks-Chandler-Andersen potential \(U=4\varepsilon[(\sigma_{ij}/r_{ij})^{12}-(\sigma_{ij}/r_{ij})^{6}+1/4]\) for \(r_{ij}<2^{1/6}\sigma_{ij}\)
and \(U=0\) otherwise, where \(\sigma_{ij}=(\sigma_{i}+\sigma_{j})/2\) with \(\sigma_{i}\) the diameter of particle \(i\).
The dynamics of the self-propulsion forces \(\mathbf{p}_{i}\) defines the active model [35]. We considered two distinct dynamics, active Ornstein-Uhlenbeck particles (AOUPs) [36, 37] and aligning active Brownian particles (ABPs) [38, 39, 40, 41]. To frustrate positional order, we introduce size polydispersity. The diameters of the AOUPs are drawn from a uniform distribution of mean \(\sigma=\overline{\sigma_{i}}\) and polydispersity 20% [37, 42]. The ABPs are a 50:50 bidisperse mixture with diameters \(\sigma\) and \(1.4\sigma\). The packing fraction is \(\phi=2^{1/3}\pi N\overline{\sigma_{i}^{2}}/(4L^{2})\). The unit length is \(\sigma\), the unit energy is \(\varepsilon\), and the unit time is \(\mu\sigma^{2}/\varepsilon\). We measure velocities \(\mathbf{v}_{i}=\dot{\mathbf{r}}_{i}-N^{-1}\sum_{j}\dot{\mathbf{r}}_{j}\) in the center-of-mass frame.
For AOUPs, the self-propulsion forces obey:
\[\tau_{p}\dot{\mathbf{p}}_{i}=-\mathbf{p}_{i}+\sqrt{2D_{0}}\mathbf{\eta}_{i}, \tag{2}\]
where \(\tau_{p}\) is the persistence time, \(D_{0}\) the diffusion constant of a free particle, and \(\mathbf{\eta}_{i}\) is a Gaussian white noise of zero mean and unit variance, \(\langle\mathbf{\eta}_{i}(t)\mathbf{\eta}_{j}(0)\rangle=\mathbb{1}\delta_{ij}\delta(t)\). From Eq. (2), it follows that the amplitude of the self-propulsion force fluctuates around \(\sqrt{\langle|\mathbf{p}_{i}|^{2}\rangle}=\sqrt{2D_{0}/\tau_{p}}\). We use \(D_{0}=1\), and vary \(\tau_{p}\) towards large values. We use system sizes up to \(N=262144\) (depending on the state point), to ensure that results are not significantly affected by finite size effects.
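A minimal Euler-Maruyama integration step for Eqs. (1)-(2) might look as follows; `wca_force` is a placeholder for the pairwise WCA forces, and periodic wrapping of the positions is omitted for brevity.

```python
import numpy as np

def aoup_step(r, p, wca_force, dt, tau_p, D0, mu=1.0, rng=np.random.default_rng()):
    """One Euler-Maruyama step of Eqs. (1)-(2); r, p are (N, 2) arrays of
    positions and self-propulsion forces (mu = 1 in the reduced units used here)."""
    r_new = r + dt * mu * (wca_force(r) + p)                                 # Eq. (1)
    noise = rng.standard_normal(p.shape)
    p_new = p - (dt / tau_p) * p + (np.sqrt(2.0 * D0 * dt) / tau_p) * noise  # Eq. (2)
    return r_new, p_new
```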
For aligning ABPs, \(\mathbf{p}_{i}=v_{0}\mathbf{u}_{i}\) with a constant amplitude \(v_{0}\) and orientations \(\mathbf{u}_{i}=(\cos\theta_{i},\sin\theta_{i})\) evolving as
\[\dot{\theta}_{i}=\frac{\gamma}{n_{i}}\sum_{j}f(r_{ij})\sin(\theta_{j}-\theta_ {i})+\sqrt{2D_{r}}\xi_{i}, \tag{3}\]
with \(\gamma\) the alignment strength, \(f(r_{ij})=1\) if \(r_{ij}/\sigma_{ij}<2\) and zero otherwise, \(n_{i}=\sum_{j}f(r_{ij})\) the number of particles interacting with particle \(i\), and \(D_{r}\) the rotational diffusivity which controls the single-particle persistence time \(\tau=D_{r}^{-1}\). We fix \(v_{0}\) and \(D_{r}\) to \(1\), and use modest \(\gamma\) values, which are well below the onset of polar order. We use system sizes up to \(N=51200\).
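The orientation update of Eq. (3) admits an equally compact sketch; here `neighbors` is a boolean adjacency matrix built from \(f(r_{ij})\) with the diagonal set to zero (an assumption on the convention for \(n_{i}\), harmless since \(\sin 0=0\) removes the self-term from the torque anyway).

```python
import numpy as np

def abp_angle_step(theta, neighbors, gamma, Dr, dt, rng=np.random.default_rng()):
    """One Euler-Maruyama step of Eq. (3); theta is the (N,) array of orientations."""
    n_i = np.maximum(neighbors.sum(axis=1), 1)    # interacting partners, avoid /0
    # element (i, j) of the matrix below is sin(theta_j - theta_i)
    torque = (neighbors * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n_i
    return theta + dt * gamma * torque + np.sqrt(2.0 * Dr * dt) * rng.standard_normal(len(theta))
```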
Fig. 1 illustrates the emergent flows that are the main subject of this work (see [43] for corresponding movies): it displays velocity (\(\mathbf{v}\)) and vorticity (\(\nabla\times\mathbf{v}\), coarse-grained over a length 4) fields, as well as streamlines. For suitable parameters, both models support states where the density is homogeneous with clear signatures of active turbulence with non-trivial space and time fluctuations of the velocity field leading to mesoscale chaotic flows. The patterns in Fig. 1 are highly dynamical and constantly form new networks of streams and vortices. Extended velocity correlations appear in a broad range of conditions (phase-separated [44], glassy [26, 29], jammed [25, 45], crystalline [28]), but the active turbulent phenomenology discussed here is more complicated to observe.
The emergence of active turbulence in AOUPs is surprising because there are no interactions favouring alignment of the self-propulsion forces, neither explicitly nor via shape anisotropy. Instead, the flows emerge because extended velocity correlations are produced by the coupling between persistent self-propulsion and density fluctuations [28, 29, 30]. The relevant densities are large enough to avoid motility induced phase separation [46] and small enough to avoid dynamic arrest [37]. For AOUPs under these conditions, advective flows develop gradually as \(\tau_{p}\) increases [30] [\(\tau_{p}=10^{4}\) in Figs. 1(a,b)]. This observation motivates our second model with weak alignment, in which similarly persistent self-propulsion arises from the aligning interactions, even if isolated particles decorrelate quickly (\(\tau=1\)). This drives aligning ABPs towards the same turbulent behaviour as highly-persistent AOUPs.
Despite differences in microscopic details, Fig. 1 shows that the velocity correlations are almost indistinguishable in both models, as confirmed below. These similarities support our identification of a new class of active turbulent systems, whose origin is the interplay of
Figure 1: (a) Configuration snapshot at \(\phi=0.8425\) of \(N=16384\) AOUPs with velocity field (arrows) and corresponding velocity amplitude (color) showing fast and slow regions of collective motion for \(\tau_{p}=10^{4}\). The corresponding vorticity field with streamlines in (b) highlights the presence of streams and vortices in the velocity field. (c,d) are for \(N=12800\) aligning ABPs, that show a comparable phenomenology at \(\phi=0.97\) and \(\gamma=2.5\).
self-propulsion and crowding. In all cases, velocity correlations are much longer-ranged than the correlations of the self-propulsion forces \(\mathbf{p}_{i}\), which are either absent (AOUPs) or weak (aligning ABPs): velocity correlations are an emerging property. This situation is in contrast to the mechanism of correlated propulsions described by existing continuum theories [16], and supports our claim that these observations are not included in the current classification of active turbulent systems [9].
We now provide quantitative measurements supporting these conclusions. Figs. 2(a,b) show velocity autocorrelation functions, \(\left\langle\mathbf{v}_{i}(0)\cdot\mathbf{v}_{i}(t)\right\rangle/\left\langle |\mathbf{v}|^{2}\right\rangle\), which reveal the temporal behaviour of the flows. Unlike the exponential decay of simple fluids [47], we observe a two-step decay in both models becoming more pronounced with more turbulent flows. These two time scales respectively correspond to the short collision time, and the increasing decorrelation time of the self-propulsion forces. In AOUPs, this longer correlation time corresponds to the imposed persistence time \(\tau_{p}\); in ABPs, it is controlled by the alignment strength \(\gamma\) (recall that \(\tau=1\) throughout).
We quantify spatial velocity correlations using the analog of the kinetic energy spectrum [16]
\[E(k)=\frac{2\pi}{L^{2}}k\left\langle\left|\tilde{\mathbf{v}}(\mathbf{k}) \right|^{2}\right\rangle, \tag{4}\]
with \(k=|\mathbf{k}|\) and \(\tilde{\mathbf{v}}(\mathbf{k})=\int\mathrm{d}^{2}\mathbf{r}\,\mathbf{v}( \mathbf{r})\exp\left(-\mathrm{i}\mathbf{k}\cdot\mathbf{r}\right)\) the Fourier transform of the velocity field \(\mathbf{v}(\mathbf{r})=\sum_{i}\mathbf{v}_{i}\delta(\mathbf{r}-\mathbf{r}_{i})\), see Figs. 2(c,d). Clearly, \(E(k)\) is directly related to velocity correlations in real space. For all parameters, \(E(k)\sim k\) for small enough \(k\), which implies the existence of a maximum length scale \(\xi\) beyond which velocities are uncorrelated, so that \(\left\langle|\tilde{\mathbf{v}}(\mathbf{k})|^{2}\right\rangle=\mathrm{const}\) for \(k\xi\ll 1\). This \(\xi\) is the correlation length of the velocities.
For wave vectors \(k\) intermediate between \(2\pi/\xi\) and \(2\pi/\sigma\), we report a decay of the energy spectrum \(E(k)\propto k^{-\alpha}\) with \(\alpha\simeq 1/2\). This corresponds to a scale-free decay \(\sim r^{\alpha-1}\) of velocity correlations for length scales between the particle size \(\sigma\) and the correlation length \(\xi\)[48]. The established classes [9] of active turbulent behaviour involve significantly larger exponents (for example \(\alpha=8/3\)[16]). Physically, \(\alpha\) quantifies the observation that the velocity fields in Figs. 1(a,c) display self-similar structure up to the (parameter-dependent) correlation length \(\xi\). For systems of non-aligning self-propelled particles, previous studies [29; 30; 49] reported results qualitatively similar to those of Fig. 2 but suggested a value \(\alpha=1\), consistent with hydrodynamic models of self-propulsion coupled to density fluctuations.
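In practice, \(E(k)\) can be estimated from particle data by evaluating \(\tilde{\mathbf{v}}(\mathbf{k})\) on wave vectors commensurate with the periodic box and binning over \(|\mathbf{k}|\). The direct sum below is an illustrative (slow) single-snapshot implementation; the ensemble average in Eq. (4) requires averaging over many configurations.

```python
import numpy as np

def energy_spectrum(r, v, L, n_modes=32, n_bins=40):
    """Single-snapshot estimate of E(k), Eq. (4); r, v are (N, 2) arrays."""
    m = np.arange(-n_modes, n_modes + 1)
    kx, ky = np.meshgrid(2 * np.pi * m / L, 2 * np.pi * m / L)
    k = np.stack([kx.ravel(), ky.ravel()], axis=1)   # allowed wave vectors
    v_k = np.exp(-1j * k @ r.T) @ v                  # tilde v(k), shape (Nk, 2)
    power = np.sum(np.abs(v_k) ** 2, axis=1)         # |tilde v(k)|^2
    k_mag = np.linalg.norm(k, axis=1)
    edges = np.linspace(0, k_mag.max(), n_bins + 1)
    idx = np.digitize(k_mag, edges)
    ks, Ek = [], []
    for b in range(1, n_bins + 1):
        sel = idx == b
        if sel.any():
            kb = k_mag[sel].mean()
            ks.append(kb)
            Ek.append((2 * np.pi / L**2) * kb * power[sel].mean())
    return np.array(ks), np.array(Ek)
```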
To further understand and characterise these flow patterns, we decompose the real-space velocity correlations into longitudinal (\(\alpha=\parallel\)) and transverse (\(\alpha=\perp\)) components:
\[C_{\alpha}(r)=\frac{\left\langle\sum_{i,j}v_{i}^{\alpha}v_{j}^{\alpha}\delta( r_{ij}-r)\right\rangle}{\left\langle\sum_{i,j}\delta(r_{ij}-r)\right\rangle}, \tag{5}\]
where \(v_{i}^{\alpha}\) is the velocity component in the direction parallel or transverse to the unit vector \((\mathbf{r}_{i}-\mathbf{r}_{j})/r_{ij}\). The total
Figure 2: (a,b) Velocity autocorrelations in time and (c,d) kinetic energy spectra defined in Eq. (4) for (a,c) AOUPs at various persistence times \(\tau_{p}\) and (b,d) aligning ABPs for a range of alignment strengths \(\gamma\). For AOUPs, \(\phi=0.84\) for \(\tau_{p}=10^{2},10^{3}\) and \(\phi=0.8425\) for \(\tau_{p}=10^{4}\). For ABPs, \(\phi=0.97\).
Figure 3: Real-space velocity correlations \(C_{\parallel}(r)\) and \(C_{\perp}(r)\) defined in Eq. (5), for AOUPs and ABPs, with persistence times \(\tau_{p}\) and alignment strengths \(\gamma\) as indicated in the legends. The correlation length in \(C_{\parallel}(r)\) (a,c) and the amplitude of negative correlations in \(C_{\perp}(r)\) (b,d) can be tuned by increasing \(\tau_{p}\) or \(\gamma\), respectively. Volume fractions \(\phi\) are as in Fig. 2.
velocity correlation function is \(C(r)=C_{\parallel}(r)+C_{\perp}(r)\), but this decomposition is distinct from the Fourier analysis of [29, 30], where \(\mathbf{v}\) is instead resolved parallel and perpendicular to the wave vector \(\mathbf{k}\). Fig. 3 shows results in both models, for a range of state points. The decomposition separates the long-ranged positive correlations along streams [in \(C_{\parallel}(r)\)], and the anti-correlations characteristic of vortices [in \(C_{\perp}(r)\)] [50]. The data confirm a similar structure for both models, and show quantitatively that velocities are correlated over tens of particle diameters for the more persistent systems, in agreement with the peak position in \(E(k)\) and the snapshots in Fig. 1. The characteristic size \(\xi\) of the velocity patterns can be tuned via the persistence time \(\tau_{p}\) of AOUPs, or the alignment strength \(\gamma\) of aligning ABPs. This leads in both cases to more extended streams and vortices.
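The decomposition of Eq. (5) can be accumulated per pair-distance bin; the sketch below handles a single snapshot and, for brevity, ignores periodic images (bins are assumed populated).

```python
import numpy as np

def velocity_correlations(r, v, r_edges):
    """C_parallel(r) and C_perp(r) of Eq. (5) from one snapshot; r, v: (N, 2)."""
    i, j = np.triu_indices(len(r), k=1)
    sep = r[i] - r[j]
    dist = np.linalg.norm(sep, axis=1)
    e = sep / dist[:, None]                          # unit separation vectors
    c_par = np.sum(v[i] * e, axis=1) * np.sum(v[j] * e, axis=1)
    c_perp = np.sum(v[i] * v[j], axis=1) - c_par     # remainder of the dot product
    idx = np.digitize(dist, r_edges)
    C_par = np.array([c_par[idx == b].mean() for b in range(1, len(r_edges))])
    C_perp = np.array([c_perp[idx == b].mean() for b in range(1, len(r_edges))])
    return C_par, C_perp
```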
These emerging velocity correlations dramatically impact particle transport. This is revealed in Fig. 4 by 'dyeing' particles according to their position at some initial time \(t_{0}\) in the steady state, and watching them spread over time. Transport is dominated at initial times by rapid advection along extended streams, as revealed by the initial distortion of the pattern with mutually invading branches that stretch and fold over a range of length scales, resembling chaotic advection (see times \(t_{1}\) and \(t_{2}\)). Only at large times do particles diffuse into regions of different colours, which eventually blends the dyes. We also highlight three tracer particles which are initially very close, showing that particle pairs can be either advected large distances together or be separated almost immediately. These time-dependent patterns are qualitatively similar to the chaotic advection created for instance by time-periodic flows [51].
We quantify these observations using the mean-squared displacement \(\Delta^{2}(t)=\langle|\Delta\mathbf{r}_{i}(t)|^{2}\rangle\) and the mean-squared distance between initially close-by particles (as studied in inertial turbulence [52, 53, 54]), \(D^{2}(t)=\langle|\Delta\mathbf{r}_{i}(t)-\Delta\mathbf{r}_{j}(t)|^{2}\rangle\), where \(\Delta\mathbf{r}_{i}(t)=\mathbf{r}_{i}(t)-\mathbf{r}_{i}(0)\) and the average is restricted to nearby pairs of particles with \(|\mathbf{r}_{i}(0)-\mathbf{r}_{j}(0)|<1.15\sigma_{ij}\)[55]. By construction, both quantities vanish at \(t=0\), while \(D^{2}\sim 2\Delta^{2}\sim t\) holds in the diffusive regime at large times (for which particles \(i,j\) eventually decorrelate), see Fig. 4(e,f).
Self-propulsion causes ballistic motion \(\Delta^{2}\sim t^{2}\) at small times. The corresponding velocity decreases significantly for AOUPs as \(\tau_{p}\) is increased at constant \(D_{0}\), mirroring the reduction in strength of \(\mathbf{p}_{i}\). In contrast, the velocity increases slightly with \(\gamma\) for ABPs. This ballistic regime is quickly interrupted by interparticle collisions at a corresponding very small length scale. At very large times, memory of the self-propulsion forces is lost and particles diffuse, \(\Delta^{2}\sim t\). Between these two limits, we observe an intermediate advective (super-diffusive) regime, which is demarcated by the two well-separated time scales found in the velocity auto-correlation function (recall Fig. 2).
The advection is also apparent in \(D^{2}\) which is similarly ballistic at very short times. At intermediate times, \(D^{2}\) grows significantly slower than \(\Delta^{2}\) showing that pairs of particles can be advected together over extremely large distances, leading to \(D^{2}\ll\Delta^{2}\). Eventually, particles' memory of their initial conditions is lost: this leads to super-diffusive scaling, as \(D^{2}\) 'catches up' with the long-time diffusive scaling \(D^{2}\sim 2\Delta^{2}\sim t\).
In conclusion, we have established that a novel form of active turbulence generically emerges in two well-studied models of dry, isotropic, self-propelled particles. The observed mesoscale flows should be observable in a broad range of systems; they resemble other active chaotic flows, displaying scale-free behaviour from the particle size up to a correlation length scale that is easily tuned
Figure 4: (a-d) Time series of configurations for aligning ABPs at \(\gamma=2.5\), \(\phi=0.97\). Particles are coloured according to their \(x\) position at some time in the steady state denoted \(t_{0}=0\). (e,f) Mean-squared displacement \(\Delta^{2}(t)\) (full symbols) and mean-squared displacement difference of initially close by particles \(D^{2}(t)\) (open symbols) for (e) AOUPs and (f) aligning ABPs. The indicated times in (f) correspond to the snapshots in (b-d). Volume fractions \(\phi\) are as in Fig. 2.
by the model parameters. However, these flows emerge here under the competition of highly persistent forcing and crowding in an otherwise homogeneous dense fluid. As previously developed theoretical descriptions of active turbulence rely on either polar or nematic interactions [9], new approaches are needed that take into consideration the effect of steric crowding. Unusual transport properties emerge from the correlated velocity fields, including chaotic advection over large distances, which directly impacts mixing dynamics. Such properties may be useful when energy sources for the active particles are localised [56], or in active matter with open boundaries [3], or for mixtures of active particles [57]: all these cases deserve further studies.
We thank D. Bartolo, J. Tailleur, and J. Yeomans for useful discussions. This work was publicly funded through ANR (the French National Research Agency) under the THEMA AAPG2020 grant. It was also supported by a grant from the Simons Foundation (#454933, LB), and by a Visiting Professorship from the Leverhulme Trust (VP1-2019-029, LB).
|
2307.10727 | Special features of the Weyl-Heisenberg Bell basis imply unusual
entanglement structure of Bell-diagonal states | Maximally entangled Bell states are of crucial importance for entanglement
based methods in quantum information science. Typically, a standard
construction of a complete orthonormal Bell-basis by Weyl-Heisenberg operators
is considered. We show that the group structure of these operators has strong
implication on error correction schemes and on the entanglement structure
within Bell-diagonal states. In particular, it implies an equivalence between a
Pauli channel and a twirl channel. Interestingly, other complete orthonormal
Bell-bases do break the equivalence and lead to a completely different
entanglement structure, for instance in the share of PPT-entangled states. In
detail, we find that the standard Bell basis has the highest observed share on
PPT-states and PPT-entangled states compared to other Bell bases. In summary,
our findings show that the standard Bell basis construction exploits a very
special structure with strong implications to quantum information theoretic
protocols if a deviation is considered. | Christopher Popp, Beatrix C. Hiesmayr | 2023-07-20T09:40:59Z | http://arxiv.org/abs/2307.10727v2 | Special features of the Weyl-Heisenberg Bell basis imply unusual entanglement structure of Bell-diagonal states
###### Abstract
Maximally entangled Bell states are of crucial importance for entanglement based methods in quantum information science. Typically, a standard construction of a complete orthonormal Bell-basis by Weyl-Heisenberg operators is considered. We show that the group structure of these operators has strong implications on error correction schemes and on the entanglement structure within Bell-diagonal states. In particular, it implies an equivalence between a Pauli channel and a twirl channel. Interestingly, other complete orthonormal Bell-bases do break the equivalence and lead to a completely different entanglement structure, for instance in the share of PPT-entangled states. In detail, we find that the standard Bell basis has the highest observed share of PPT-states and PPT-entangled states compared to other Bell bases. In summary, our findings show that the standard Bell basis construction exploits a very special structure with strong implications for quantum information theoretic protocols if a deviation is considered.
## 1 Introduction
Leveraging quantum phenomena in technology offers new resources that enable methods with better performance than classically limited methods in the field of information theory and related applications like communication, computation, simulation, metrology or cryptography [1, 2, 3, 4, 5].
Entanglement is one of the main resources for novel methods for quantum information processing like super-dense coding [6] or teleportation [7], as well as in other fields like e.g. medical sciences [8, 9, 10, 11, 12, 13]. Despite its relevance for our quantum-theoretic understanding of nature [14, 15, 16] and for practical applications in quantum technology, this phenomenon is far from being well understood. A pair of entangled qubits, i.e. a bipartite two-level quantum system, is the simplest system in which to observe entanglement. The qubit Bell states [17] are a set of maximally entangled states, which form a basis of this bipartite Hilbert space. Most applications leveraging entanglement as a resource use these special states. The amount of entanglement two parties possess if they share a maximally entangled qubit Bell state is named "ebit" [18]. Recently, multi-level quantum systems called "qudits" (with the "\(d\)" indicating the general dimension of the system) have been shown to offer potential advantages for applications [19, 20]. The notion of Bell states can be generalized to qudit systems [21], in which case two parties holding a maximally entangled bipartite qudit state are said to possess one "edit".
A frequently used standard construction for a basis of bipartite Bell states of arbitrary dimension is based on a set of operators called "Weyl-Heisenberg" operators, which can be seen as a generalization of the two-dimensional Pauli matrices to other dimensions [7]. Applying these operators locally to the standard maximally entangled state generates a basis of maximally entangled Bell states [22]. Due to their properties, this specific "standard" Bell basis is often used in applications, but as we will show in this contribution, other Bell bases exist, which differ in some relevant properties.
For real applications, decoherence due to interaction with the environment generally disturbs a pure state and the resource in form of an edit is destroyed or cannot be used effectively. For this reason, quantum
error correction [23, 24] and entanglement purification/distillation [25] are required. While error correction transforms a certain disturbed state back to some logical pure state, entanglement purification processes several weakly entangled or disturbed states to produce fewer but stronger or maximally entangled states. Both concepts have been generalized to qudit systems. The Weyl-Heisenberg operators thereby play a crucial role in the form of defining a "nice" error basis [26, 27, 28] or of providing the corresponding Bell states as target states [29, 30, 31]. One phenomenon that does not appear for bipartite qubits but does for subsystem dimension \(d\geq 3\) is bound entanglement [32, 33, 34, 35, 36, 37]. Bound entangled states are entangled states, but they cannot be used for entanglement purification. It is known that all entangled states with positive partial transposition (PPT) are bound entangled, while the existence of bound entangled states with negative partial transposition (NPT) is still an open problem. In general, the separability problem to decide whether a given PPT quantum state is entangled or separable is an NP-hard problem [38, 39].
For bipartite quantum states that are diagonal in the standard Bell basis constructed via the Weyl-Heisenberg operators, special algebraic and geometric properties of the states can be leveraged for some insight concerning this complex structure of entanglement [22, 37, 40]. Recently, the authors combined analytical and numerical methods to investigate the systems of bipartite qutrits (\(d=3\)) and ququarts (\(d=4\)) in detail [41, 42, 43, 44]. As a result, 95%/77% of all PPT (standard) Bell-diagonal qutrits/ququarts can be classified as PPT-entangled or separable. Moreover, it was shown that the group structure in the set of the standard Weyl-Heisenberg-constructed Bell states is highly relevant for the entanglement structure.
In this work, we demonstrate properties of the standard Bell basis constructed via the Weyl-Heisenberg operators and show that this basis has special properties among a set of generalized Bell bases, with significant implications for the corresponding systems of Bell-diagonal states. The paper is organized as follows: First, we define a set of operators called "Weyl-Twirl" operators, for which all elements of the standard Bell basis are eigenstates, and demonstrate an equivalence between the map to the set of Bell-diagonal states and the randomized application of those operators. Then, we present some applications of this "Weyl-Twirl" with relevance for the separability problem of Bell-diagonal states and a simple error correction scheme for Bell states. Second, by generalizing the construction based on the Weyl-Heisenberg operators, we present a family of Bell bases, which contains the standard Bell basis but whose members are generally not unitarily equivalent to it. We show that, even though the generalized Bell bases also consist of maximally entangled orthonormal Bell states, several characteristics and properties of the standard Bell basis relevant for applications do not exist in the generalized case, and that these differences significantly affect the entanglement structure of Bell-diagonal states and the detection of PPT-entangled states. Finally, we conclude with a summary of our findings.
## 2 A Stabilizing Group for Maximally Entangled States
Here we define the Weyl-Heisenberg operators, introduce the channels and show their equivalence. Then we discuss applications in quantum information theory.
### Weyl-Heisenberg and Weyl-Twirl Operators
Let \(\mathcal{H}=\mathcal{H}_{d}\otimes\mathcal{H}_{d}\) be the Hilbert space of two qudits of dimension \(d\). In this bipartite system, we consider maximally entangled states, also called "Bell states". An orthonormal basis of \(d^{2}\) Bell states spanning the Hilbert space can be defined by applying certain local operators on one of the subsystems to a seed state, e.g. the maximally entangled state \(\left|\Omega_{00}\right\rangle:=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}\left|ii\right\rangle\). Applying the Weyl-Heisenberg operators [7]
\[W_{k,l}:=\sum_{j=0}^{d-1}w^{jk}\left|j\right\rangle\left\langle j+l\right|,\ \ k,l=0,...,d-1 \tag{1}\]
with \(w:=e^{\frac{2\pi i}{d}}\) to the (w.l.o.g.) first subsystem of \(\left|\Omega_{00}\right\rangle\) defines the "standard" Bell basis of bipartite qudits:
\[\left|\Omega_{k,l}\right\rangle:=W_{k,l}\otimes\mathbb{1}_{d}\left|\Omega_{00 }\right\rangle,\ \ k,l=0,...,d-1 \tag{2}\]
In eq.(1) and in the following, addition and subtraction are always to be understood \(\mod d\). The indices \(k\) and \(l\) in \(W_{k,l}\) relate to phase and shift operations, respectively. To highlight this property, \(W_{k,l}\) can also be written as \(W_{k,l}=Z(k)X(l)\), where \(Z(k)\left|j\right\rangle=w^{j\cdot k}\left|j\right\rangle\) is the phase operator and \(X(l)\left|j\right\rangle=\left|j-l\right\rangle\) denotes
the shift operator.
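As an illustration, the construction in eqs. (1)-(2) is straightforward to check numerically. The following Python sketch (all names are ours, not from the paper; the later sketches in this section reuse these helpers) builds the Weyl-Heisenberg operators for \(d=3\) and verifies that the resulting Bell states are orthonormal:

```python
import numpy as np

d = 3                               # qudit dimension
w = np.exp(2j * np.pi / d)          # primitive d-th root of unity

def weyl(k, l):
    """Weyl-Heisenberg operator W_{k,l} = sum_j w^{jk} |j><j+l|, eq. (1)."""
    W = np.zeros((d, d), dtype=complex)
    for j in range(d):
        W[j, (j + l) % d] = w ** (j * k)
    return W

# Seed state |Omega_00> = (1/sqrt(d)) sum_i |ii>.
omega00 = np.zeros(d * d, dtype=complex)
for i in range(d):
    omega00[i * d + i] = 1 / np.sqrt(d)

def bell(k, l):
    """Standard Bell state |Omega_{k,l}> = (W_{k,l} ⊗ 1)|Omega_00>, eq. (2)."""
    return np.kron(weyl(k, l), np.eye(d)) @ omega00

# The d^2 Bell states form an orthonormal basis.
basis = [bell(k, l) for k in range(d) for l in range(d)]
gram = np.array([[np.vdot(u, v) for v in basis] for u in basis])
assert np.allclose(gram, np.eye(d * d))
```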
The Weyl-Heisenberg operators obey the following algebraic relations:
\[\begin{split}& W_{k_{1},l_{1}}W_{k_{2},l_{2}}=w^{l_{1}k_{2}}\ W_{k_{1}+k_{2},l_{1}+l_{2}}\\ & W^{\dagger}_{k,l}=w^{kl}\ W_{-k,-l}=W^{-1}_{k,l}\\ & W^{\ast}_{k,l}=W_{-k,l}\\ & W^{T}_{k,l}=w^{-kl}W_{k,-l}.\end{split} \tag{3}\]
Here, \((\dagger),(*)\) and \((T)\) denote the adjoint, complex conjugation and transposition with respect to the computational basis. These relations imply a linear structure for the Bell states, which was recently shown [42] to be highly relevant for the geometric properties of the set of separable, PPT entangled and NPT entangled states that are diagonal in the basis defined in eq.(2).
Consider now the set of \(d^{2}\) unitary operators, which we call "Weyl-Twirl operators",
\[T_{i,j}:=W_{i,j}\otimes W^{\ast}_{i,j},\ \ i,j=0,...,d-1\;. \tag{4}\]
Applying the Weyl relations (3), one observes that this set of unitary operators \(\{T_{i,j}|i,j=0,...,d-1\}\) forms an abelian group under multiplication with neutral element \(T_{0,0}=\mathbb{1}_{d}\otimes\mathbb{1}_{d}\):
\[\begin{split}& T_{i_{1},j_{1}}T_{i_{2},j_{2}}=T_{i_{1}+i_{2},j_{1}+j_{2}}=T_{i_{2},j_{2}}T_{i_{1},j_{1}}\\ & T^{-1}_{i,j}=T_{-i,-j}\;.\end{split} \tag{5}\]
Using the well-known property of the maximally entangled state \(\mathbb{1}_{d}\otimes M\left|\Omega_{0,0}\right>=M^{T}\otimes\mathbb{1}_{d} \left|\Omega_{0,0}\right>\) for any matrix \(M\) together with the Weyl relations (3), we can calculate the action of \(T_{i,j}\) on the Bell state \(\left|\Omega_{k,l}\right>\):
\[\begin{split} T_{i,j}\left|\Omega_{k,l}\right>&=(W _{i,j}\otimes W^{\ast}_{i,j})(W_{k,l}\otimes\mathbb{1}_{d})\left|\Omega_{0,0} \right>\\ &=W_{i,j}W_{k,l}\otimes W_{-i,j}\left|\Omega_{00}\right>\\ &=W_{i,j}W_{k,l}W^{T}_{-i,j}\otimes\mathbb{1}_{d}\left|\Omega_{0 0}\right>\\ &=w^{jk-il}W_{k,l}\otimes\mathbb{1}_{d}\left|\Omega_{00}\right> \\ &=w^{jk-il}\left|\Omega_{k,l}\right>\end{split} \tag{6}\]
Each Bell state \(\left|\Omega_{k,l}\right>\) is thus an eigenvector for each \(T_{i,j}\), with the phase of the eigenvalue being determined by the corresponding indices \((k,l)\) and \((i,j)\).
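Continuing the sketch above, the eigenvalue relation (6) can be verified directly for every pair of a Weyl-Twirl operator and a Bell state:

```python
# Numerical check of eq. (6): each Bell state is an eigenvector of every
# Weyl-Twirl operator T_{i,j} = W_{i,j} ⊗ W_{i,j}^*, eq. (4).
def twirl_op(i, j):
    W = weyl(i, j)
    return np.kron(W, W.conj())

for i in range(d):
    for j in range(d):
        for k in range(d):
            for l in range(d):
                v = bell(k, l)
                assert np.allclose(twirl_op(i, j) @ v,
                                   w ** (j * k - i * l) * v)
```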
### Pauli Projectors and the Weyl-Twirl Channel
Channel properties are at the heart of studies of the dynamics of state propagation. For instance, it is of interest to characterise channels which transform entangled states to separable states, so-called entanglement-breaking channels (for a recent study see Ref. [45]). Here we consider two channels that transform general states to Bell-diagonal ones.
Let us define the Bell projectors by \(P_{k,l}:=\left|\Omega_{k,l}\right>\left<\Omega_{k,l}\right|\); the set \(\mathcal{M}_{d}\) of Bell-diagonal states is then given as mixtures of the projectors \(\{P_{k,l}\}\) with mixing probabilities \(\{c_{k,l}\}\):
\[\mathcal{M}_{d}:=\{\rho=\sum_{k,l=0}^{d-1}c_{k,l}\,P_{k,l}\mid\sum_{k,l=0}^{d-1 }c_{k,l}=1,c_{k,l}\geq 0\} \tag{7}\]
This object forms a mathematical simplex and is also known as the magic simplex [46, 22, 40], where "magic" refers to the "magic" Bell basis for bipartite qubits introduced by Wootters and Hill [47]. Let \(\rho=\sum_{k,l=0}^{d-1}\sum_{n,m=0}^{d-1}\rho_{k,l,m,n}\left|\Omega_{k,l} \right>\left<\Omega_{m,n}\right|\in\mathcal{H}\) be a bipartite state, represented in the Bell basis. We define the Pauli channel \(\mathcal{P}:\mathcal{H}\rightarrow\mathcal{M}_{d}\) as a map from the total Hilbert space \(\mathcal{H}\) to the set of Bell-diagonal states as
follows:
\[\begin{split}\mathcal{P}(\rho)&:=\sum_{k,l=0}^{d-1}P_{k,l }\;\rho\;P_{k,l}=\sum_{k,l=0}^{d-1}\left\langle\Omega_{k,l}\right|\rho\left| \Omega_{k,l}\right\rangle\;P_{k,l}\\ &=\sum_{k,l=0}^{d-1}\rho_{k,l,k,l}\;P_{k,l}:=\sum_{k,l=0}^{d-1}c _{k,l}\,P_{k,l}\end{split} \tag{8}\]
Using the operators \(T_{i,j}\) (4), we can define another channel, which we name "Weyl-Twirl" channel:
\[\mathcal{T}(\rho):=\frac{1}{d^{2}}\;\sum_{i,j=0}^{d-1}T_{i,j}\;\rho\;T_{i,j}^{ \dagger}=\frac{1}{d^{2}}\;\sum_{i,j=0}^{d-1}W_{i,j}\otimes W_{i,j}^{*}\;\rho \;(W_{i,j}\otimes W_{i,j}^{*})^{\dagger} \tag{9}\]
We will show now that these two channels are equivalent, i.e.
**Theorem 1**.: \[\mathcal{P}(\rho)\equiv\mathcal{T}(\rho)\;\;\forall\rho\in\mathcal{H}\] (10)
Proof.: Consider the action of \(\mathcal{T}\) on a basis state \(\left|\Omega_{k,l}\right\rangle\left\langle\Omega_{m,n}\right|\):
\[\mathcal{T}(\left|\Omega_{k,l}\right\rangle\left\langle\Omega_{ m,n}\right|) =\frac{1}{d^{2}}\sum_{i,j=0}^{d-1}w^{j(k-m)-i(l-n)}\;\left|\Omega_ {k,l}\right\rangle\left\langle\Omega_{m,n}\right|\] \[=\delta_{k,m}\delta_{l,n}\;\left|\Omega_{k,l}\right\rangle\left \langle\Omega_{m,n}\right|\]
Here, the first equality follows from (6) and the second equality from the identity \(\sum_{j=0}^{d-1}w^{jx}=d\delta_{x,0}\). Let \(\rho=\sum_{k,l=0}^{d-1}\sum_{n,m=0}^{d-1}\rho_{k,l,m,n}\;\left|\Omega_{k,l} \right\rangle\left\langle\Omega_{m,n}\right|\in\mathcal{H}\). We then have:
\[\mathcal{T}(\rho) =\sum_{k,l,m,n=0}^{d-1}\rho_{k,l,m,n}\;\delta_{k,m}\delta_{l,n}\; \left|\Omega_{k,l}\right\rangle\left\langle\Omega_{m,n}\right|\] \[=\sum_{k,l=0}^{d-1}\rho_{k,l,k,l}\;\left|\Omega_{k,l}\right\rangle \left\langle\Omega_{k,l}\right|=\sum_{k,l=0}^{d-1}\rho_{k,l,k,l}\;P_{k,l}\] \[=\mathcal{P}(\rho)\]
Calling the application of the channel \(\mathcal{T}\) a "Weyl-Twirl" is motivated by the fact that eq.(9) represents the random application of (bi-local) operators \(W_{i,j}\otimes W_{i,j}^{*}\). Operations of the form \(\rho\rightarrow\int(U\otimes U^{*})\rho(U\otimes U^{*})^{\dagger}dU\) with \(U\) being a local unitary operator and \(dU\) being the according Haar measure are generally named "Twirl". These operations leave certain diagonal elements invariant, while eliminating off-diagonal elements. Operations of this kind transform a state to Bell-diagonal form and are often relevant for certain applications, e.g. entanglement purification schemes [25]. The channel equivalence of the Pauli channel (eq.8) and the finite "Weyl-Twirl" (eq.9) shows that the operators \(T_{i,j}\) have the special property that they leave all Bell-diagonal elements invariant under random application, while eliminating any off-diagonal elements.
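Theorem 1 can likewise be checked numerically; the following sketch (continuing the helpers defined above) compares the two channels on a random density matrix:

```python
# Numerical check of Theorem 1 on a random density matrix.
rng = np.random.default_rng(0)

def random_state(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def pauli_channel(rho):
    """Eq. (8): project onto the Bell-diagonal part."""
    out = np.zeros_like(rho)
    for k in range(d):
        for l in range(d):
            P = np.outer(bell(k, l), bell(k, l).conj())
            out += P @ rho @ P
    return out

def twirl_channel(rho):
    """Eq. (9): uniform random application of the Weyl-Twirl operators."""
    out = np.zeros_like(rho)
    for i in range(d):
        for j in range(d):
            T = twirl_op(i, j)
            out += T @ rho @ T.conj().T
    return out / d ** 2

rho = random_state(d * d)
assert np.allclose(pauli_channel(rho), twirl_channel(rho))
```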
### Applications of Weyl-Twirl operators and channel
The properties of the Weyl-Twirl operators \(T_{i,j}\) and the channel equivalence (eq.10) imply several properties that are relevant for applications related to maximally entangled states. In the following, we will briefly mention two examples.
#### 2.3.1 The separability problem
Due to the existence of PPT-entangled states, the decision problem whether a given bipartite state is entangled or separable is generally an NP-hard problem [38, 39]. While all bipartite states with negative partial transposition (NPT) are known to be entangled according to the Peres-Horodecki/PPT criterion [48], for subsystems of dimension \(d\geq 3\) also entangled states with non-negative partial transposition (PPT) exist [32]. No general and efficient criterion is known that detects all PPT-entangled states, nor a general method to construct them. However, for the set of Bell-diagonal states \(\mathcal{M}_{d}\) (7) in dimension \(d=3\) and \(d=4\) the problem has been efficiently solved. In detail, for \(d=3/(4)\), \(95\%/(77\%)\) of all PPT states have been classified as entangled or separable, using a combination of analytical criteria and numerical methods [41, 42]. Thus, for those dimensions and the standard Bell basis, the structure of PPT-entangled states is known.
As demonstrated in the referenced works, the combination of two numerical methods is especially effective to distinguish separable and PPT-entangled Bell-diagonal states:
1. The construction of separable states \(\rho_{s}\in\mathcal{M}_{d}\cap SEP\), close to the border of the convex set of separable states, providing an inner approximating polytope of \(\mathcal{M}_{d}\cap SEP\)
2. The construction of optimal entanglement witnesses (see definition below), providing an outer approximation of \(\mathcal{M}_{d}\cap SEP\)
Both methods rely on an efficient parameterization of separable Bell-diagonal states \(\rho\in\mathcal{M}_{d}\cap SEP\) that can be used to optimize corresponding target quantities. Given an efficient parameterization of unitaries and pure states [49, 50], a parameterization of separable and Bell-diagonal states follows from the following corollaries of Theorem 1:
**Corollary 1**.: _Any separable state \(\rho_{s}\) remains separable under application of the Pauli channel \(\mathcal{P}\), that is:_
\[\rho_{s}\in SEP\Rightarrow\mathcal{P}(\rho_{s})\in\mathcal{M}_{d}\cap SEP\]
Proof.: Due to linearity, it suffices to show the corollary for a pure product state \(\rho_{s}=\ket{\psi_{1}}\bra{\psi_{1}}\otimes\ket{\psi_{2}}\bra{\psi_{2}}\). By Theorem 1, we have:
\[\mathcal{P}(\rho_{s}) =\mathcal{T}(\rho_{s})\] \[=\frac{1}{d^{2}}\sum_{i,j=0}^{d-1}W_{i,j}\otimes W_{i,j}^{*}\ \rho_{s}\ (W_{i,j}\otimes W_{i,j}^{*})^{\dagger}\] \[=\frac{1}{d^{2}}\sum_{i,j=0}^{d-1}W_{i,j}\ket{\psi_{1}}\bra{\psi_ {1}}W_{i,j}^{\dagger}\otimes W_{i,j}^{*}\ket{\psi_{2}}\bra{\psi_{2}}(W_{i,j}^{ *})^{\dagger}\]
\(W_{i,j}\) is unitary, so \(W_{i,j}\ket{\psi_{1}}\bra{\psi_{1}}W_{i,j}^{\dagger}\) and \(W_{i,j}^{*}\ket{\psi_{2}}\bra{\psi_{2}}(W_{i,j}^{*})^{\dagger}\) are pure states and additionally \(\sum_{i,j=0}^{d-1}\frac{1}{d^{2}}=1\). Therefore, the above expression represents an equal mixture of pure product states with mixing probability \(\frac{1}{d^{2}}\) and thus is a separable mixed state. By definition, we also have \(\mathcal{P}(\rho_{s})\in\mathcal{M}_{d}\).
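The construction in this proof can be made explicit numerically; continuing the sketch above, the following lines verify that \(\mathcal{P}(\rho_{s})\) coincides with the uniform mixture of locally twirled product states for a random pure product state:

```python
# Corollary 1's proof made explicit: for a pure product state, P(rho_s)
# equals the uniform mixture of twirled product states, hence is separable.
def random_ket(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

psi1, psi2 = random_ket(d), random_ket(d)
rho_s = np.kron(np.outer(psi1, psi1.conj()), np.outer(psi2, psi2.conj()))

mixture = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        W = weyl(i, j)
        p1, p2 = W @ psi1, W.conj() @ psi2
        mixture += np.kron(np.outer(p1, p1.conj()),
                           np.outer(p2, p2.conj())) / d ** 2

assert np.allclose(pauli_channel(rho_s), mixture)
```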
The second corollary states that optimal and Bell-diagonal entanglement witnesses are also optimal for \(\mathcal{M}_{d}\). An entanglement witness \(K\) is a Hermitian operator, for which the expectation \(\mathrm{Tr}(K\rho_{s})\) is non-negative for all separable states \(\rho_{s}\), while at least one state \(\rho_{e}\) exists with \(\mathrm{Tr}(K\rho_{e})<0\). In this case \(K\) is said to detect the entangled state \(\rho_{e}\). \(K\) is said to be optimal, if there exists a separable state \(\rho_{0}\), such that \(\mathrm{Tr}(K\rho_{0})=0\). It has been shown [40] that Bell-diagonal entanglement witnesses can detect all entangled states in \(\mathcal{M}_{d}\).
**Corollary 2**.: _If a Bell-diagonal entanglement witness \(K\) is optimal, then it is also optimal for \(\mathcal{M}_{d}\), i.e. \(\exists\tilde{\rho}_{0}\in\mathcal{M}_{d}\cap SEP\) s.th. \(\mathrm{Tr}(K\tilde{\rho_{0}})=0\)._
Proof.: Let \(\rho_{0}=\sum_{k,l,m,n}^{d-1}\rho_{0(k,l,m,n)}\ket{\Omega_{k,l}}\bra{\Omega_{m,n}}\in SEP\) be a state so that \(\mathrm{Tr}(K\rho_{0})=0\). Define \(\tilde{\rho}_{0}:=\mathcal{P}(\rho_{0})\). By Corollary 1, \(\tilde{\rho}_{0}\in\mathcal{M}_{d}\cap SEP\). \(K\) is of Bell-diagonal form and Hermitian, so \(K=\sum\kappa_{k,l}P_{k,l},\ \ \kappa_{k,l}\in\mathbb{R}\). This implies:
\[0=\mathrm{Tr}(K\rho_{0})=\sum_{k,l=0}^{d-1}\kappa_{k,l}\rho_{0(k,l,k,l)}= \mathrm{Tr}(K\tilde{\rho_{0}})\]
Using the corollaries, a parameterization of \(SEP\) together with convex optimization methods can then be applied to \(\mathcal{M}_{d}\cap SEP\), by either optimizing for the convex set of \(SEP\) and mapping the result to \(\mathcal{M}_{d}\) (eq.8) or by directly considering the action of \(\mathcal{P}\) for the parameterization.
#### 2.3.2 Error correction for a maximally entangled qudit
The Weyl-Twirl operators \(T_{i,j}\) (eq.4) allow for a simple error identification and correction scheme for the process of sharing a maximally entangled state without access to the state itself. Assume that in some information processing task, the initial state \(\left|\Omega_{0,0}\right\rangle\) is transformed to \(\left|\Omega_{k,l}\right\rangle\) with probability \(p_{k,l}\) so that the error channel state \(\mathcal{E}(P_{0,0})\) is represented as
\[\mathcal{E}(P_{0,0})=\sum_{k,l=0}^{d-1}p_{k,l}P_{k,l}\;. \tag{11}\]
Another possibility that leads to the form of eq.(11) is a lost or unknown measurement outcome in the Bell basis. The task is now to identify in which Bell state \(\left|\Omega_{k,l}\right\rangle\) the system is, without measuring or disturbing it, and optionally to transform it back to the initial state \(\left|\Omega_{0,0}\right\rangle\). The idea is to use eq.(6) to store the state-dependent phase \(\Phi=jk-il\) in an ancilla qudit. After the phase has been identified, the state is known and can be transformed back to the initial state.
Consider a system to be in the initial state \(\left|\psi_{0}\right\rangle\), where the Bell state \(\left|\Omega_{k,l}\right\rangle\) is unknown and the ancilla qudit in a fixed state, i.e.
\[\left|\psi_{0}\right\rangle=\left|0\right\rangle\otimes\left|\Omega_{k,l} \right\rangle\;.\]
Suppose further that the following operations are available:
* Generalized Hadamard/Fourier gate \(F\) acting as \(F\left|j\right\rangle=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}w^{kj}\left|k\right\rangle\)
* Controlled Weyl-Twirl gate \(CT_{i,j}\) acting as \(CT_{i,j}\)\(\left|m\right\rangle\otimes\left|n\right\rangle=\left|m\right\rangle\otimes T_{i,j}^ {m}\)\(\left|n\right\rangle\)
Consider the application of the Fourier gate to the ancilla, followed by the controlled Weyl-Twirl gate controlled by the ancilla with the unknown Bell state as target, followed by the adjoint Fourier gate. These operations act as follows:
\[\begin{split}\left|\psi_{0}\right\rangle&\rightarrow(F^{\dagger}\otimes\mathbb{1})CT_{i,j}(F\otimes\mathbb{1})(\left|0\right\rangle\otimes\left|\Omega_{k,l}\right\rangle)\\ &=(F^{\dagger}\otimes\mathbb{1})CT_{i,j}(\frac{1}{\sqrt{d}}\sum_{m}\left|m\right\rangle\otimes\left|\Omega_{k,l}\right\rangle)\\ &=(F^{\dagger}\otimes\mathbb{1})(\frac{1}{\sqrt{d}}\sum_{m}\left|m\right\rangle\otimes T_{i,j}^{m}\left|\Omega_{k,l}\right\rangle)\\ &=(F^{\dagger}\otimes\mathbb{1})(\frac{1}{\sqrt{d}}\sum_{m}\left|m\right\rangle\otimes w^{m\Phi}\left|\Omega_{k,l}\right\rangle)\\ &=(F^{\dagger}\frac{1}{\sqrt{d}}\sum_{m}w^{m\Phi}\left|m\right\rangle)\otimes\left|\Omega_{k,l}\right\rangle\\ &=\left|\Phi\right\rangle\otimes\left|\Omega_{k,l}\right\rangle\end{split}\]
Measurement of the ancilla qudit yields the phase \(\left|\Phi\right\rangle=\left|jk-il\right\rangle\), depending on the applied Weyl-Twirl operator through \(i\) and \(j\) and the unknown Bell state indexed by \(k\) and \(l\). Due to the stabilizing property of the operators \(T_{i,j}\), the Bell state is not disturbed and the operation can be repeatedly applied or run in parallel with additional ancilla qudits. The applied Weyl-Twirl operators can be chosen according to the expected error. If only phase or only shift errors appear, a single measurement with suitably chosen \((i,j)\) can identify the error. If both shift and phase errors need to be identified, then two measurements of \(\Phi\) obtained with different \((i,j)\) for \(T_{i,j}\) are required to identify the phase/shift error via \(k/l\). The correction operation to recover \(\left|\Omega_{0,0}\right\rangle\) is then \(W_{k,l}^{\dagger}\otimes\mathbb{1}:\left|\Omega_{k,l}\right\rangle\rightarrow \left|\Omega_{0,0}\right\rangle\).
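The whole identification circuit is small enough to simulate directly; the following sketch (continuing the helpers above, with the ancilla as the first tensor factor) recovers \(\Phi\) for an example Bell state:

```python
# Simulation of the ancilla-based error identification of Sec. 2.3.2.
# Total space: ancilla (dim d) ⊗ Bell pair (dim d^2).
F = np.array([[w ** (r * c) for c in range(d)] for r in range(d)]) / np.sqrt(d)

def controlled_twirl(i, j):
    """CT_{i,j}: |m> ⊗ |n> -> |m> ⊗ T_{i,j}^m |n>, block-diagonal in m."""
    U = np.zeros((d * d ** 2, d * d ** 2), dtype=complex)
    Tm = np.eye(d ** 2, dtype=complex)
    T = twirl_op(i, j)
    for m in range(d):
        U[m * d ** 2:(m + 1) * d ** 2, m * d ** 2:(m + 1) * d ** 2] = Tm
        Tm = T @ Tm
    return U

def identify_phase(k, l, i, j):
    anc = np.zeros(d); anc[0] = 1.0
    psi = np.kron(anc, bell(k, l))
    circuit = (np.kron(F.conj().T, np.eye(d ** 2)) @ controlled_twirl(i, j)
               @ np.kron(F, np.eye(d ** 2)))
    out = circuit @ psi
    probs = [np.linalg.norm(out[m * d ** 2:(m + 1) * d ** 2]) ** 2
             for m in range(d)]
    return int(np.argmax(probs))   # measured ancilla value

# The ancilla ends in |Phi>, Phi = (jk - il) mod d, without disturbing the pair.
assert identify_phase(k=2, l=1, i=1, j=1) == (1 * 2 - 1 * 1) % d
```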
## 3 Generalized Bell-diagonal systems
Now we investigate cases, in which we still have a complete orthonormal Bell basis, but not via the standard construction introduced in the last section. In particular, this leads in general to a breaking of the channel equivalence \(\mathcal{P}\equiv\mathcal{T}\).
### Generalized bases of Bell states
Maximally entangled Bell states are characterized by the fact that the reduced state of any of their subsystems is maximally mixed. Those states are called "locally maximally mixed" and they imply that all information is in the correlations between the subsystems. The Bell basis defined in (2) is of course an example of those states, but there are bases that share this property yet are not unitarily equivalent to the standard basis; they do form a magic simplex, but not an equivalent one (first noted in Ref. [40]).
For \(k,l=0,...,(d-1)\) consider the \(d^{2}\) states
\[\left|\phi_{k,l}^{\alpha}\right\rangle:=\frac{1}{\sqrt{d}}\sum_{s=0}^{d-1}w^{k (s-l)}\alpha_{s}\ \left|s-l\right\rangle\otimes\left|s\right\rangle \tag{12}\]
with \(\alpha_{s}=e^{i\phi_{s}}\) being a phase factor. For a suitable choice of phase vectors \(\alpha=\{\alpha_{s}\}_{s=0}^{d-1}\), these states form an orthonormal set of locally maximally mixed basis states. Consider the corresponding projection operators
\[\left|\phi_{k,l}^{\alpha}\right\rangle\left\langle\phi_{k,l}^{\alpha}\right| =\frac{1}{d}\sum_{s,t=0}^{d-1}w^{k(s-t)}\alpha_{s}\alpha_{t}^{*}\ \left|s-l\right\rangle\left\langle t-l\right|\otimes\left|s\right\rangle \left\langle t\right|,\]
for which any partial trace of the first or second subsystem yields the maximally mixed state:
\[\mathrm{tr}_{1/2}(\left|\phi_{k,l}^{\alpha}\right\rangle\left\langle\phi_{k,l }^{\alpha}\right|)=\frac{1}{d}\sum_{s=0}^{d-1}\alpha_{s}\alpha_{s}^{*}\left|s \right\rangle\left\langle s\right|=\frac{1}{d}\sum_{s=0}^{d-1}\left|s\right\rangle \left\langle s\right| \tag{13}\]
Furthermore, the orthonormality condition for \(\left|\phi_{m,n}^{\beta}\right\rangle\) and \(\left|\phi_{k,l}^{\alpha}\right\rangle\) reads:
\[\delta_{m,k}\delta_{n,l}=\left\langle\phi_{m,n}^{\beta}\middle|\phi_{k,l}^{ \alpha}\right\rangle=\frac{1}{d}\sum_{s,t}^{d-1}\alpha_{s}\beta_{t}^{*}\ w^{k(s-l)-m(t-n)}\ \left\langle t-n|s-l\right\rangle\left\langle t|s\right\rangle=\delta_{n,l} \frac{1}{d}\sum_{s}w^{s(k-m)}w^{mn-kl}\alpha_{s}\beta_{s}^{*} \tag{14}\]
This shows that for different shift indices \((n,l)\), the Bell states (12) are orthogonal, independent of the phases \((\alpha,\beta)\). For equal shift indices, we now require \(\alpha_{s}=\beta_{s}\), which implies:
\[\left\langle\phi_{m,l}^{\beta}\middle|\phi_{k,l}^{\alpha}\right\rangle=\frac{ 1}{d}w^{l(m-k)}\sum_{s}w^{s(k-m)}\alpha_{s}\alpha_{s}^{*}=\delta_{m,k} \tag{15}\]
Thus, in order to meet the requirements for orthonormality and being locally maximally mixed, we define the matrix \((\alpha_{s,t})_{s,t=0,...,d-1},\ \ \left|\alpha_{s,t}\right|=1\ \forall s,t\) and use it to define the "generalized" Bell basis \(\left\{\left|\Phi_{k,l}^{\alpha}\right\rangle|k,l=0,...,d-1\right\}\) with
\[\left|\Phi_{k,l}^{\alpha}\right\rangle:=\frac{1}{\sqrt{d}}\sum_{s}^{d-1}w^{k( s-l)}\alpha_{s,l}\ \left|s-l\right\rangle\otimes\left|s\right\rangle \tag{16}\]
In accordance with eq.(1) and eq.(2), we define unitary operators that map the maximally entangled state \(\left|\Omega_{0,0}\right\rangle\) to the generalized Bell basis states:
\[\begin{split}& V_{k,l}^{\alpha}:=\sum_{j}w^{jk}\alpha_{j+l,l}\ \left|j\right\rangle\left\langle j+l\right|,\ \ k,l=0,...,d-1\\ &\left|\Phi_{k,l}^{\alpha}\right\rangle=V_{k,l}^{\alpha}\otimes \mathbb{1}\ \left|\Omega_{0,0}\right\rangle\\ & P_{k,l}^{\alpha}:=\left|\Phi_{k,l}^{\alpha}\right\rangle \left\langle\Phi_{k,l}^{\alpha}\right|\.\end{split} \tag{17}\]
Note that the standard Weyl-Heisenberg operators \(W_{k,l}\) and Bell basis states \(\left|\Omega_{k,l}\right\rangle\) are a special case of eq.(16), namely for \(\alpha_{s,t}=1\ \forall s,t\). In this sense we can talk about a generalization of the standard Weyl-Heisenberg operators.
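The generalized construction can be checked in the same numerical sketch as before; for a random phase matrix \(\alpha\), the states of eq. (16) remain orthonormal and locally maximally mixed:

```python
# Generalized construction, eqs. (16)-(17), for a random phase matrix alpha.
alpha = np.exp(2j * np.pi * rng.random((d, d)))     # |alpha_{s,t}| = 1

def gen_weyl(k, l):
    """V^alpha_{k,l} = sum_j w^{jk} alpha_{j+l,l} |j><j+l|, eq. (17)."""
    V = np.zeros((d, d), dtype=complex)
    for j in range(d):
        V[j, (j + l) % d] = w ** (j * k) * alpha[(j + l) % d, l]
    return V

def gen_bell(k, l):
    return np.kron(gen_weyl(k, l), np.eye(d)) @ omega00

gbasis = [gen_bell(k, l) for k in range(d) for l in range(d)]
gram = np.array([[np.vdot(u, v) for v in gbasis] for u in gbasis])
assert np.allclose(gram, np.eye(d * d))
for v in gbasis:                      # eq. (13): both marginals equal 1/d
    r = np.outer(v, v.conj()).reshape(d, d, d, d)
    assert np.allclose(np.einsum('ajak->jk', r), np.eye(d) / d)  # tr over 1
    assert np.allclose(np.einsum('jaka->jk', r), np.eye(d) / d)  # tr over 2
```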
### Properties of the generalized Bell bases
Even though the construction of the generalized Bell basis introduced in the previous section (eq.(17)) looks rather similar to the standard case (eqs.(1), (2)), and the generalized states also form a set of orthonormal, maximally entangled and locally maximally mixed states, they have significant differences with implications for the entanglement properties of diagonal states. We first state those differences and then numerically demonstrate their effect on the entanglement structure of corresponding systems of Bell-diagonal states.
The following differences to the standard construction hold for a general transformation matrix \(\alpha\) and can be checked by simple calculation or counterexamples (a numerical sketch of point 3 is given after the list):
1. The linear group structure does generally not exist: \(V^{\alpha}_{k_{1},l_{1}}V^{\alpha}_{k_{2},l_{2}}\not\propto V^{\alpha}_{k_{1}+ k_{2},l_{1}+l_{2}}\)
2. The stabilizing property does generally not hold: \(V^{\alpha}_{i,j}\otimes V^{\alpha*}_{i,j}\left|\Phi_{k,l}\right\rangle\neq w^{ \phi}\left|\Phi_{k,l}\right\rangle\)
3. Let the simplex [40] \[\mathcal{M}^{\alpha}_{d}:=\{\rho=\sum_{k,l=0}^{d-1}c_{k,l}\;P^{\alpha}_{k,l}\; |\;\sum_{k,l=0}^{d-1}c_{k,l}=1,c_{k,l}\geq 0\}\] (18) be the set of states that are diagonal in the generalized Bell basis. The generalized Pauli channel \[\mathcal{P}^{\alpha}:\mathcal{H}\rightarrow\mathcal{M}^{\alpha}_{d},\;\; \mathcal{P}^{\alpha}(\rho):=\sum_{k,l=0}^{d-1}P^{\alpha}_{k,l}\;\rho\;P^{ \alpha}_{k,l}\] and the generalized Weyl-Twirl channel \[\mathcal{T}^{\alpha}(\rho):=\frac{1}{d^{2}}\sum_{i,j=0}^{d-1}V^{\alpha}_{i,j} \otimes V^{\alpha*}_{i,j}\;\rho\;(V^{\alpha}_{i,j}\otimes V^{\alpha*}_{i,j})^ {\dagger}\] are generally not identical: \(\mathcal{P}^{\alpha}\not\equiv\mathcal{T}^{\alpha}\)
4. The generalized Pauli channel does generally not conserve separability, so Corollary 1 does not hold, i.e.: \(\rho_{s}\in SEP\not\Rightarrow\mathcal{P}^{\alpha}(\rho_{s})\in\mathcal{M}^{ \alpha}_{d}\cap SEP\)
5. Due to the loss of the linear structure, the entanglement-class-conserving symmetries of \(\mathcal{M}_{d}\) (see Ref. [42] and the references therein) are also lost.
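Continuing the numerical sketch from above, point 3 can be demonstrated with a random phase matrix \(\alpha\): the two generalized channels produce different outputs on a random state, in contrast to Theorem 1:

```python
# Numerical counterexample for point 3: for a generic alpha the generalized
# Pauli and Weyl-Twirl channels no longer coincide.
def gen_pauli_channel(rho):
    out = np.zeros_like(rho)
    for k in range(d):
        for l in range(d):
            v = gen_bell(k, l)
            P = np.outer(v, v.conj())
            out += P @ rho @ P
    return out

def gen_twirl_channel(rho):
    out = np.zeros_like(rho)
    for i in range(d):
        for j in range(d):
            V = gen_weyl(i, j)
            T = np.kron(V, V.conj())
            out += T @ rho @ T.conj().T
    return out / d ** 2

rho = random_state(d * d)
diff = np.max(np.abs(gen_pauli_channel(rho) - gen_twirl_channel(rho)))
print(diff)   # strictly positive for a generic alpha, unlike Theorem 1
```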
### Explicit counter examples for bipartite qutrits
In Fig. 1 we present a visualization of the drastic change in the separability structure within \(\mathcal{M}^{\alpha}_{d}\) for dimension \(d=3\) for non-standard Bell basis choices compared to the standard Bell basis choice. The state family consists of the mixture of two lines with the totally mixed state, which was experimentally investigated in Ref. [37], since it is the result of the optimization in finding the greatest violation of an entanglement witness based on mutually unbiased bases [51], which comes with an experimental recipe for how to realize it in experiments. Fig. 1 (a) shows the result of the standard basis choice, including the region detected by the entanglement witness. In Fig. 1 (b) the phases were randomly chosen but close to zero. One observes that the PPT region shrinks, as does the region detected by the realignment criterion E2 (defined in the appendix, named E2 according to Ref. [41]) and the entanglement witness. The other figures (c)-(f) are based on random choices of the phases and show how drastically the PPT region and the region detected by the realignment criterion (E2) change. The Weyl-Heisenberg structure is thus very important for the separability structure within \(\mathcal{M}^{\alpha}_{3}\).
### General entanglement structure changes for bipartite qutrits
In the last section we presented particular examples, but we can also obtain general results for \(d=3\) by exploiting and adapting the tools developed in Refs. [41, 42]. A combination of analytical and numerical methods [43] exploiting the Weyl-Heisenberg structure was used to analyse the entanglement structure of Bell-diagonal qudits for \(d=3\) and \(d=4\), leading to an efficient solution of the separability problem in this particular case. As a consequence of the differences stated above, most of the tools cannot be applied to mixtures of the generalized Bell basis states (16). However, some methods are still applicable, which allows us to compare the standard with the generalized Bell bases for dimension \(d=3\).
In order to compare the system \(\mathcal{M}_{3}\) to \(\mathcal{M}_{3}^{\alpha}\), we use the same uniformly distributed sample set of mixing probabilities \(c_{k,l}\) to construct 10000 states in \(\mathcal{M}_{3}\) and in \(\mathcal{M}_{3}^{\alpha}\) for various \(\alpha\). In particular, we generate 1000 matrices \(\alpha\) with uniformly distributed elements. For each \(\alpha\), we define the Bell basis projectors \(P_{k,l}^{\alpha}\) and construct 10000 diagonal states according to (18). We then analyse these states for PPT and entanglement detection by the realignment criterion [52] (named E2 in Ref. [41]) and the quasipure concurrence criterion [53] (named E3 in Ref. [41]) (see Appendix for definitions), which do not depend on the entanglement class preserving symmetries of the standard simplex \(\mathcal{M}_{3}\) and are the most successful analytical criteria among those investigated in Ref. [41, 42].
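The sampling procedure can be sketched as follows (continuing the code above; the sample size is reduced here for speed, and the PPT test by itself only reproduces the rPPT column of Table 1, not the entanglement criteria E2 and E3):

```python
# Monte Carlo estimate of the PPT share in a Bell-diagonal simplex; uniform
# sampling on the simplex is assumed to mean Dirichlet(1, ..., 1).
def partial_transpose(rho):
    """Transpose the second subsystem of a (d^2 x d^2) density matrix."""
    return rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

def ppt_share(basis_fn, n_states=2000):
    projs = [np.outer(basis_fn(k, l), basis_fn(k, l).conj())
             for k in range(d) for l in range(d)]
    n_ppt = 0
    for _ in range(n_states):
        c = rng.dirichlet(np.ones(d * d))          # uniform mixing weights
        rho = sum(ci * Pi for ci, Pi in zip(c, projs))
        if np.linalg.eigvalsh(partial_transpose(rho)).min() > -1e-12:
            n_ppt += 1
    return n_ppt / n_states

print(ppt_share(bell))       # standard basis: about 0.60 (cf. Table 1)
print(ppt_share(gen_bell))   # a generic generalized basis: typically lower
```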
Table 1 shows the share of PPT states among all analyzed states (rPPT), the share of PPT-entangled states detected by the realignment criterion among all PPT states (E2/PPT) and the corresponding share for the quasipure concurrence criterion (E3/PPT), as well as the share of entangled states that were detected by both criteria simultaneously ((E2&E3)/PPT). It shows that for the 1000 realizations of \(\mathcal{M}_{3}^{\alpha}\) the relative volume of PPT states is between 49% and 59%. The mean of 51.6% indicates that most of the systems are closer to the minimum value than to the maximum. Indeed, more than 85% of the systems have less than 54% PPT states. Interestingly, the PPT share for the standard system \(\mathcal{M}_{3}\) is higher than for any analyzed \(\mathcal{M}_{3}^{\alpha}\). For values of all \(\alpha\) sufficiently close to 1, i.e. for Bell bases close to the standard Bell basis, the observed quantities are arbitrarily close, but never higher than the reference values for the standard system. The same statements hold for the detection capabilities of E2, E3 and (E2&E3), which are always lower than the value for the standard system and are typically closer to the minimal observed value.
Table 2 presents strong positive correlations between the share of PPT states in the systems \(\mathcal{M}_{3}^{\alpha}\) and the relative detection capabilities of the criteria E2, E3 and simultaneous detection (E2&E3). Interestingly, the more PPT states are present, the larger is the relative volume of PPT-entangled states that is detected by those criteria. Moreover, the relative number of simultaneously detected states is then also high, in general. The strong correlations imply an almost linear dependence between those quantities.
Based on these observations, one can assume that the more states with positive partial transposition are present in a Bell-diagonal system, the larger is the share of PPT-entangled states detected by the two effective entanglement criteria E2 and E3. Additionally, a higher share of PPT states seems to imply a higher share of PPT-entangled states that can be detected by both of the criteria. While in the standard system, only 33% of the states detected by E3 are not also detected by E2, there are Bell-diagonal systems with less PPT states, in which 83% of the entangled states detected by E3 are not detected by E2. Curiously, the standard system shows extremal values for the relative volume of PPT states and the number of entangled states among them.
## 4 Summary and outlook
In this work, we investigated and compared properties of mixtures of bipartite maximally entangled and locally maximally mixed Bell states, known as Bell-diagonal states, which form a mathematical simplex \(\mathcal{M}_{d}^{\alpha}\), and showed that the special features of the Weyl-Heisenberg Bell basis imply special features in the entanglement structure of Bell-diagonal states.
The frequently used "standard" construction of a \(d^{2}\) dimensional Bell basis of the joint Hilbert space \(H_{d}\otimes H_{d}\) via the Weyl-Heisenberg operators is presented and properties of this special basis were derived that are strongly related to the separability problem and other applications as e.g. error corrections or channel equivalences. In particular, utilizing the Weyl relations of the Weyl-Heisenberg operators \(W_{k,l}\), we showed that the "Weyl-Twirl" operators \(W_{i,j}\otimes W_{i,j}^{*}\) are diagonalized by all elements of the standard Bell basis. We then leveraged this stabilizing property to show the equivalence of the "Pauli" channel, \(\mathcal{P}\), which projects onto Bell-diagonal states in the standard Bell basis, and the "Weyl-Twirl" channel, \(\mathcal{T}\), which represents the randomized application of the Weyl-Twirl operators.
| | Std. Basis | Min | Max | Mean |
| --- | --- | --- | --- | --- |
| rPPT | 60.0% | 49.0% | 59.0% | 51.6% |
| E2/PPT | 10.4% | 2.1% | 9.6% | 3.6% |
| E3/PPT | 2.7% | 0.6% | 2.3% | 1.0% |
| (E2&E3)/PPT | 1.8% | 0.1% | 1.4% | 0.4% |

Table 1: Minimum, maximum and mean statistics for 1000 sample systems \(\mathcal{M}_{3}^{\alpha}\) and the reference values for the standard system \(\mathcal{M}_{3}\): relative volume of PPT states (rPPT) and the share of PPT-entangled states among them for the realignment criterion (E2/PPT), the quasipure concurrence criterion (E3/PPT) and the combined criterion ((E2&E3)/PPT).
| | rPPT |
| --- | --- |
| E2/PPT | 0.99 |
| E3/PPT | 0.95 |
| (E2&E3)/PPT | 0.96 |

Table 2: Correlation coefficient between the relative volume of PPT states (rPPT) and the share of PPT-entangled states among the PPT states as detected by the realignment criterion (E2/PPT), the quasipure concurrence criterion (E3/PPT) and the combined criterion ((E2&E3)/PPT).
One implication of this channel equivalence, \(\mathcal{P}\equiv\mathcal{T}\), is that separability is conserved under the channels and that optimal Bell-diagonal entanglement witnesses remain optimal for the set of Bell-diagonal states. We then demonstrated several applications of the Weyl-Twirl operators. On the one hand, they allow the efficient parameterization of separable Bell-diagonal states, relevant for polytope approximations of this convex set and the construction of optimal Bell-diagonal entanglement witnesses. On the other hand, a simple error detection and correction scheme for a maximally entangled qudit was presented.
Highlighting the implications of the channel equivalence, \(\mathcal{T}\equiv\mathcal{P}\), we investigated systems of Bell-diagonal states that are constructed from generalized Bell states. Those generalized Bell states form an orthonormal basis and are locally maximally mixed, i.e. they define Bell-diagonal states in the same manner as the basis built from the Weyl-Heisenberg operators. However, the channel equivalence \(\mathcal{T}\equiv\mathcal{P}\) is lost, which has strong implications for the entanglement structure within the family of Bell-diagonal states.
In detail, we show this for bipartite qutrits. We derive the relative volume of PPT states, which turns out to be 60% for the standard basis and drops to 49% in the most extreme case. Interestingly, we found that the share of Bell-diagonal PPT states is generally lower for the generalized Bell bases compared to the standard Bell basis. Moreover, the relative detection rate for PPT-entangled states of two entanglement criteria that are highly effective in the standard system strongly correlates with the share of PPT states. The more states with positive partial transposition exist for a system of Bell-diagonal states, the higher is the share of detected PPT-entangled states among them for those two criteria. Consequently, for the system based on the standard Bell basis, which has the highest PPT share, also the highest relative amount of entanglement is observed. Furthermore, we visualized the dramatic change of the volume of PPT states in Fig. 1 for a family of states investigated previously in the experiment [37].
These results indicate that, among all in this way generalized Bell bases, Bell-diagonal states related to the standard construction have some very special properties. These properties have relevant implications on the entanglement structure and on practical applications using those Bell states. This suggests either that the two entanglement criteria are less effective for Bell-diagonal states if the special properties of the standard system are not given or that the structure and amount of PPT entanglement depends on those properties.
In summary, our findings are the starting point to analyse different quantum information theoretic protocols based on Bell-diagonal states either in the standard representation or in a non-standard representation.
## Data availability statement
All analyzed datasets were generated during the current study and are available from the corresponding author on reasonable request.
The software used to generate the reported results is published as registered open source package "BellDiagonalQudits.jl" [54] available at [https://github.com/kungfugo/BellDiagonalQudits.jl](https://github.com/kungfugo/BellDiagonalQudits.jl).
|
2308.06918 | DUVET Survey: Mapping Outflows in the Metal-Poor Starburst Mrk 1486 | We present a method to characterize star-formation driven outflows from
edge-on galaxies and apply this method to the metal-poor starburst galaxy, Mrk
1486. Our method uses the distribution of emission line flux (from H$\beta$ and
[OIII] 5007) to identify the location of the outflow and measure the extent
above the disk, the opening angle, and the transverse kinematics. We show that
this simple technique recovers a similar distribution of the outflow without
requiring complex modelling of line-splitting or multi-Gaussian components, and
is therefore applicable to lower spectral resolution data. In Mrk 1486 we
observe an asymmetric outflow in both the location of the peak flux and total
flux from each lobe. We estimate an opening angle of $17-37^{\circ}$ depending
on the method and assumptions adopted. Within the minor axis outflows, we
estimate a total mass outflow rate of $\sim2.5$ M$_{\odot}$ yr$^{-1}$, which
corresponds to a mass loading factor of $\eta=0.7$. We observe a non-negligible
amount of flux from ionized gas outflowing along the edge of the disk
(perpendicular to the biconical components), with a mass outflow rate $\sim0.9$
M$_{\odot}$ yr$^{-1}$. Our results are intended to demonstrate a method that
can be applied to high-throughput, low spectral resolution observations, such
as narrow band filters or low spectral resolution IFS that may be more able to
recover the faint emission from outflows. | Daniel K. McPherson, Deanne B. Fisher, Nikole M. Nielsen, Glenn G. Kacprzak, Bronwyn Reichardt Chu, Alex J. Cameron, Alberto D. Bolatto, John Chisholm, Drummond B. Fielding, Danielle Berg, Rodrigo Herrera-Camus, Miao Li, Ryan J. Rickards Vaught, Karin Sandstrom | 2023-08-14T03:33:48Z | http://arxiv.org/abs/2308.06918v1 | # DUVET Survey: Mapping Outflows in the Metal-Poor Starburst Mrk 1486
###### Abstract
We present a method to characterize star-formation driven outflows from edge-on galaxies and apply this method to the metal-poor starburst galaxy, Mrk 1486. Our method uses the distribution of emission line flux (from H\(\beta\) and [OIII] 5007) to identify the location of the outflow and measure the extent above the disk, the opening angle, and the transverse kinematics. We show that this simple technique recovers a similar distribution of the outflow without requiring complex modelling of line-splitting or multi-Gaussian components, and is therefore applicable to lower spectral resolution data. In Mrk 1486 we observe an asymmetric outflow in both the location of the peak flux and total flux from each lobe. We estimate an opening angle of \(17-37^{\circ}\) depending on the method and assumptions adopted. Within the minor axis outflows, we estimate a total mass outflow rate of \(\sim\)2.5 M\({}_{\odot}\) yr\({}^{-1}\), which corresponds to a mass loading factor of \(\eta=0.7\). We observe a non-negligible amount of flux from ionized gas outflowing along the edge of the disk (perpendicular to the biconical components), with a mass outflow rate \(\sim 0.9\) M\({}_{\odot}\) yr\({}^{-1}\). Our results are intended to demonstrate a method that can be applied to high-throughput, low spectral resolution observations, such as narrow band filters or low spectral resolution IFS that may be more able to recover the faint emission from outflows.
keywords: galaxies: Mrk 1486 - galaxies: evolution - galaxies: starburst - galaxies: star formation
## 1 Introduction
Galaxy-scale outflows are ubiquitous in high star-formation rate galaxies in both the local Universe (e.g. Veilleux et al., 2005) and at higher redshift (Rubin et al., 2014; Steidel et al., 2010). There is a consensus view that star formation driven outflows are a necessary component for models of galaxy evolution to reproduce observations of galaxy properties such as the stellar mass function (Somerville and Dave, 2015; Naab and Ostriker, 2017; Pillepich et al., 2018; Forster Schreiber et al., 2019). Current models of galaxy evolution propose that these outflows regulate star-formation by removing star-forming material from the galaxy disk (Oppenheimer and Dave, 2008) and enriching the surrounding circumgalactic medium (CGM) with higher metallicity gas (e.g. Peroux et al., 2020). This enrichment of outflows has, for the first time, been directly mapped by Cameron et al. (2021). The direct mapping of outflows is now allowing us to make more accurate measurements of mass outflow rates.
The observation and characterization of star-formation driven outflows has historically been limited by the extremely low surface brightness of extraplanar gas (Tumlinson et al., 2017). By far the most extensively studied of these is M82 (Lopez et al., 2020; Leroy et al., 2015; Shopbell and Bland-Hawthorn, 1998; Westmoquette et al., 2009). Outflows have also been directly imaged in NGC 1482 (Veilleux and Rupke, 2002), NGC 253 (Bolatto et al., 2013), and in a sample of nearby outflow candidates (Veilleux et al., 2003). Concas et al. (2022) finds evidence for outflows in massive galaxies at \(z\sim 2\) from the KLEVER survey, but no indication of such outflows in lower-mass galaxies, which may be due to the lower S/N on fainter targets. A study on 19 dwarf galaxies from the Dvali sample found evidence of outflows but with very low mass outflow rates (Marasco et al., 2022). This however represents a small sample of galaxies. Additionally, there are a number of methods used in studying outflows, and the definition of mass outflow rate changes between studies. This makes it difficult to compare results between samples and to compare results to simulations.
Models of galaxy evolution make direct predictions of the mass
outflow rate, \(\dot{M}_{\rm out}\), of galaxies based on basic properties such as stellar mass, SFR and gas fraction (e.g. Nelson et al., 2019; Hayward & Hopkins, 2017). The mass outflow rate is therefore a critical parameter for observations to recover. The mass outflow rate is defined as:
\[\dot{M}_{\rm out}=\Omega C_{f}\mu m_{p}N_{H}R_{\rm out}v_{\rm out}. \tag{1}\]
In the above equation, \(\Omega\) is the opening angle of the outflowing gas. \(C_{f}\) is the covering fraction of the outflow. \(\mu\) is the mass per H nucleus, accounting for the relative He abundance. \(m_{p}\) is the proton mass. \(N_{H}\) is the column density of outflowing gas. \(\rm R_{out}\) is the radial extent of the outflow, and the velocity of the outflow is represented by \(v_{\rm out}\). There is difficulty in measuring all these parameters, and they are frequently assumed. This introduces potentially large systematic uncertainties into determinations of the mass outflow rate.
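For concreteness, eq. (1) can be wrapped in a short unit-aware function; the sketch below uses astropy (assumed to be available) and placeholder input values that are not measurements from this work:

```python
# Illustrative evaluation of eq. (1); all numbers below are placeholders.
import astropy.units as u
from astropy.constants import m_p

def mass_outflow_rate(Omega, C_f, mu, N_H, R_out, v_out):
    """M_dot_out = Omega * C_f * mu * m_p * N_H * R_out * v_out, eq. (1)."""
    return (Omega * C_f * mu * m_p * N_H * R_out * v_out).to(u.Msun / u.yr)

print(mass_outflow_rate(Omega=0.5, C_f=0.8, mu=1.4,
                        N_H=1e20 / u.cm ** 2,
                        R_out=1.0 * u.kpc,
                        v_out=300.0 * u.km / u.s))
```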
The most common method to derive outflow properties is the technique of decomposing spectral lines into an outflow and a systemic line, which can be performed on large samples of galaxies with either absorption or emission lines (e.g. Rubin et al., 2014; Chisholm et al., 2015; Heckman et al., 2015; Forster Schreiber et al., 2019; Reichardt Chu et al., 2022). In these works, the geometric parameters (opening angle, covering fraction, and outflow radius) must be assumed, rather than directly measured. In the nearby starburst M82, the outflow has been measured to extend \(\sim 10-15\) kpc with a base width of \(\sim 0.5\) kpc and opening angle \(25^{\circ}\)(Shopbell & Bland-Hawthorn, 1998). We can measure these properties in M82 due to its proximity, which allows us to accurately measure the physical sizes of the faint emission. Many studies assume covering fractions of \(\sim\)0.8-1 (Chisholm et al., 2015; Heckman et al., 2015), but there is a large variation between galaxies (Martin, 2005), which may depend highly on galaxy morphology.
Assumptions on outflow radius range from 100 pc to kiloparsecs (e.g. Chisholm et al., 2015; Forster Schreiber et al., 2019). As these outflow properties are direct inputs for the mass outflow rate, an incomplete understanding of how they scale with galaxy properties and morphologies results in poorly constrained mass outflow rates. We are especially lacking in understanding how properties like the covering fraction or opening angle may vary with galaxy mass and SFR.
Image-slicer integral field units (IFUs) such as VLT/MUSE and Keck/KCWI make direct imaging of the morphologies and extents of extraplanar gas possible in moderate-sized samples of galaxies. To this end, effort has been made using VLT/MUSE to study individual outflows in more extreme and distant galaxies (Rupke et al., 2019; Burchett et al., 2021; Shaban et al., 2022; Zabl et al., 2021). A result of these efforts has been the determination of the large extent of the outflows in these intermediate (\(z\sim 0.5\)) to high redshift systems (\(z\sim 1.7\)), with the detection of emission in MgII extending to \(\sim 30\) kpc (Burchett et al., 2021; Shaban et al., 2022; Zabl et al., 2021) and [OII] to \(\sim 40\) kpc. A limitation in many of these higher redshift studies (that don't make use of gravitational lensing as in Shaban et al. (2022)) is the reduced spatial resolution. This makes it difficult to determine wind properties such as the opening angle and outflow radius close to the galaxy disk. Additionally, image-slicer IFUs introduce the new challenge of separating outflowing gas from the surrounding halo gas. The ability to measure the parameters \(\Omega\), \(C_{f}\), and \(R_{\rm out}\) in Eq. 1 in more comprehensive samples of galaxies is central to determining how mass outflow rates vary with galaxy properties. In order to do this we must first have methods to determine these properties that can be systematically applied to deep IFU datasets.
The DUVET (Deep near-UV observations of Entrained gas in Turbulent galaxies) survey is an IFU survey on KCWI with sub-kpc spatial resolution (in contrast to previous IFU surveys), comprising observations of 27 starbursting, low-redshift galaxies (\(z\sim 0.03\)) (e.g. Cameron et al., 2021; Reichardt Chu et al., 2022). Galaxies selected for the sample have star-formation rates at least 5 times the main-sequence value for their stellar mass. In addition, the survey requires that galaxies have morphologies and kinematics consistent with a disk. Amongst the DUVET galaxies are several edge-on systems, with extended minor axis emission. The subject of this paper is a detailed analysis of one edge-on outflow galaxy from the DUVET survey, Mrk 1486, as a case study for characterising the outflow emission.
Throughout this paper we assume a flat \(\Lambda\)CDM cosmology with \(H_{0}=69.3\,\rm km\,Mpc^{-1}\,s^{-1},\Omega_{m}=0.3\), and \(\Omega_{\Lambda}=0.7\). All wavelengths quoted are rest-frame wavelengths.
## 2 Observations and data reduction
### Target: Mrk 1486
Mrk 1486 is an edge-on disk, with inclination \(85^{\circ}\)(Chisholm et al., 2015). It has a redshift of \(z=0.03383\) and stellar mass \(\log(M_{\bullet}/M_{\odot})=9.3\pm 0.2\). It is a 5x outlier above the star-formation rate main-sequence, similar to other galaxies with observed strong star-formation driven outflows, with star formation rate SFR = \(3.6\pm 0.7\) M\({}_{\odot}\) yr\({}^{-1}\)(Chisholm et al., 2018). Mrk 1486 has a low ISM metallicity of \(12+\log(\rm O/H)=7.8\)(Ostlin et al., 2014). Mrk 1486 also hosts a bipolar outflow (Duval et al., 2016), visible as extended filamentary structures in HST H\(\alpha\) imaging (fig. 1). These properties make Mrk 1486 an ideal target to study basic outflow properties.
Figure 1: \(19^{\prime\prime}\times 19^{\prime\prime}\) HST image of Mrk 1486 combining F336W (blue), F438W (green), and F673N (red). The young stars establish an edge-on disk. The H\(\alpha\) emission (red) shows filamentary structures extending above and below the plane of the disk, indicating the outflow described in Duval et al. (2016) and recently in Cameron et al. (2021). Overlaid are contours in decadal steps showing [OIII] \(\lambda 5007\) emission detected with KCWI. Note that the brightness and contrast are set to make the outflow visible.
### KCWI Observations
Observations of Mrk 1486 were taken on 2020 March 22 UT under sub-arcsecond seeing conditions (\(\sim 0.7^{\prime\prime}\) at 5000 Å) with Keck/KCWI (Morrissey et al., 2018) using the large IFU slicer setting, giving a spatial sampling of \(0.29^{\prime\prime}\times 1.35^{\prime\prime}\) and a \(20.7^{\prime\prime}\times 33^{\prime\prime}\) field of view. Two configurations on the BM dispersion grating were used, a "blue" configuration with a central wavelength of 4180 Å, and a "red" configuration with a central wavelength of 4850 Å. This allowed for continuous spectral coverage from 3731 Å \(-\) 5284 Å with spectral resolution \(R\sim 2000\).
Our project aims require tracking bright emission lines from the galaxy center to the faint gas in the outflow and surrounding region. The KCWI detector rapidly saturates with bright [OIII] \(\lambda\lambda\)4959, 5007 doublet and H\(\beta\) from the starburst. The \(\sim\)1 minute readout time makes a large number of short exposures time-prohibitive to reach our aims. We, therefore, used a combination of long and short exposure times to avoid saturating the emission lines in the bright galaxy center while measuring faint emission in the galaxy outskirts. In the red configuration, nine exposures were taken, seven long (\(6\times 300\) s and \(1\times 400\) s) and two short (\(2\times 30\) s). In the blue configuration seven 300 s exposures were taken. A half-slice dither was used in both configurations to increase the spatial sampling. To adequately remove the sky, we obtained separate sky fields in the two configurations, where a 600 s exposure was obtained in the red configuration directly before the science exposures and a 300 s exposure was obtained in the blue configuration directly after the science exposures.
### Data Reduction
The data were reduced with the IDL version of the KCWI Data Reduction Pipeline v1.1.01 using the standard settings with the separate sky fields noted above. The standard star Feige92 was used to flux calibrate the exposures in the final processing step.
Footnote 1: [https://github.com/Keck-DataReductionPipelines/KcwiDRP](https://github.com/Keck-DataReductionPipelines/KcwiDRP)
Before combining images, we align each datacube together using the H\(\gamma\) emission line, which is unsaturated in all spaxels and covered in both the red and blue spectral settings. This accounts for small scale imperfections in the WCS. The alignment is carried out using an iterative minimisation method for each line-map, in which the reference position of the fields are adjusted and the H\(\gamma\) flux is compared in each pixel. The position that results in the minimum average residual across the galaxy is chosen.
Our chosen combination of exposure times results in long exposures with a few bright lines that saturated in the center, and short exposures that are not sufficiently deep to probe outflow gas nor fainter spectral features (e.g. Cameron et al., 2021). We, therefore, developed a method to combine these two data sets with a preference toward using the longer exposures when there is no evidence for saturation within a reasonable bandpass. The [OIII] \(\lambda 5007/\lambda 4959\) ratio has a fixed value of 3 (Osterbrock and Ferland, 2006), and the two lines are close enough in wavelength to not be significantly impacted by extinction. We can use this as an indicator of saturation in each spaxel spectrum. In the pre-flux calibrated cubes (from step kcwi_stage7dar) we use the \(\lambda 5007/\lambda 4959\) ratio to determine the counts at which saturation is occurring. We found this to be at \(\sim\)5500 counts.
Saturation was not detected in any exposures in the blue configuration, and thus no short exposures were taken, and only long exposures were used. These exposures were reprojected to produce \(0.29^{\prime\prime}\times 0.29^{\prime\prime}\) square spaxels with the python package Montage. This size was chosen based on the length of the smaller edge of the original rectangular spaxels. The reprojected images were then co-added with Montage. During reprojection we set drizzle=1.0, energyMode=True, and scaled the flux with the fluxScale parameter set to the ratio between the rectangular and square spaxel sizes. Variance cubes for the blue configuration were also reprojected and co-added in the same manner.
Footnote 2: [http://montage.ipac.caltech.edu/](http://montage.ipac.caltech.edu/)
For the red configuration, there was no saturation detected in the short exposures. These were co-added using Montage, and variance cubes were scaled as above. For the long exposures, saturation was detected near the galaxy center in the H\(\beta\), [OIII] \(\lambda 4959\), and [OIII] \(\lambda 5007\) emission lines. Where a saturated emission line was detected in a spaxel in a long exposure, a 20 Å wavelength region centered on the emission line was replaced with the corresponding wavelength region in the combined short exposure in both the flux cube and the variance cube. Light was found to bleed from saturated spaxels to spatially adjacent ones, resulting in deviations of the [OIII] \(\lambda 5007/\lambda 4959\) ratio from the theoretically predicted value well below the count threshold determined to detect saturation. To correct for this effect, this replacement was also carried out in a one-spaxel annulus surrounding each spaxel where saturation was detected. The individual long exposures were then reprojected to produce square spaxels and co-added, in the same manner as the blue exposures.
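A schematic version of this saturation repair, with hypothetical function and argument names rather than the pipeline's actual interface, could look as follows:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def repair_saturation(long_cube, short_cube, counts_cube, wave,
                      line_centers, threshold=5500.0, halfwidth=10.0):
    """Replace 20 A windows around saturated lines (plus a one-spaxel annulus
    of neighbours) in the long exposure with short-exposure data.
    Cubes are (n_wave, ny, nx); `wave` is the wavelength axis in Angstrom."""
    fixed = long_cube.copy()
    for lc in line_centers:
        in_band = np.abs(wave - lc) < halfwidth
        # Spaxels whose raw counts exceed the saturation threshold in the band.
        sat = (counts_cube[in_band] > threshold).any(axis=0)
        # Grow the mask by one spaxel to catch light bleeding into neighbours.
        sat = binary_dilation(sat, structure=np.ones((3, 3), bool))
        wl = np.where(in_band)[0]
        for iy, ix in zip(*np.nonzero(sat)):
            fixed[wl, iy, ix] = short_cube[wl, iy, ix]
    return fixed
```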
The final images have a total integration time of 2100 s in the blue setting across the entire field, 2200 s in the red setting off the galaxy disk where the saturation correction was not applied, and 60 s on the disk where the saturation correction was applied. The images cover the disk of Mrk 1486, and extend to a minor axis distance of 6.9 kpc in both the NW and SE, and a major axis distance of 12.2 kpc in the NE and 5.5 kpc in the SW, with a \(3\sigma\) surface brightness limit of \(2.3\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\) arcsec\({}^{-2}\).
## 3 Data Analysis
### Continuum Subtraction
Our observations cover both the galaxy disk and the surrounding extraplanar regions. As a result, our field of view includes some regions in which we expect continuum emission (near the galaxy) and regions further from the galaxy with little-to-no continuum emission. Continuum subtraction thus cannot be applied uniformly across the whole cube. For those spaxels in the galaxy we use standard methods of continuum removal with pPXF (Cappellari, 2017) with BPASS templates (Stanway and Eldridge, 2018). To determine those spaxels with sufficient continuum flux to employ pPXF, we estimate the continuum signal-to-noise in each spaxel. The continuum signal-to-noise is estimated by summing the flux in the band after masking emission lines. In spaxels where we detected continuum flux at a signal-to-noise level of 3 or greater, the continuum was fitted with pPXF. We note that in all spaxels, independent of continuum signal-to-noise, our Gaussian fitting includes a constant offset determined near to the emission line, which corrects for local imperfections in continuum fitting.
The templates used included binary systems, and were based on a broken power-law initial mass function, with a slope of \(-1.3\) between 0.1 M\({}_{\odot}\) and 1 M\({}_{\odot}\), a slope of \(-2.35\) above 1 M\({}_{\odot}\), and an upper mass limit of 300 M\({}_{\odot}\). The reddening of the galaxy spectra caused by Milky Way foreground extinction was corrected for using the Calzetti (2001) extinction curve. The internal extinction in the galaxy was then determined using the H\(\beta\)/H\(\gamma\) ratio and also corrected using a Calzetti (2001) extinction law.
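For illustration, the continuum signal-to-noise gate described above might be sketched as follows; the threshold of 3 and the emission-line masking come from the text, while the function name and window format are hypothetical stand-ins for the actual implementation.

```python
import numpy as np

def continuum_snr(flux, var, wave, line_windows):
    """Estimate continuum S/N in one spaxel by summing the flux over the
    band after masking emission-line windows (list of (lo, hi) in Angstrom)."""
    mask = np.ones(wave.shape, dtype=bool)
    for lo, hi in line_windows:
        mask &= ~((wave >= lo) & (wave <= hi))
    signal = np.nansum(flux[mask])
    noise = np.sqrt(np.nansum(var[mask]))
    return signal / noise

# Spaxels with continuum_snr(...) >= 3 get a full pPXF continuum fit;
# the remainder only receive a local constant offset near each line.
```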
### Emission Line Fitting
We carry out emission line fitting with two separate settings: a single Gaussian fit to all spaxels, and a second run in which multiple Gaussians are fit to each spaxel. Our emission line fitting software is built on work done by Reichardt Chu et al. (2022) and uses the Python package threadcount³. We do not make a correction for the instrumental dispersion on the line width, as it would be mostly negligible for any dispersions <100 km s\({}^{-1}\).
Footnote 3: [https://github.com/astrodee/threadcount](https://github.com/astrodee/threadcount)
We adopt a Bayesian method to decide spaxel-by-spaxel how many Gaussian components are needed. The software fits each spaxel with separate models of one, two, and three Gaussian components for the [OIII] \(\lambda\)5007 emission line. We use the Python package lmfit (Newville et al., 2023) with the nelder minimization algorithm. The leastsquares minimization algorithm was tested, but was less reliable at fitting complex emission lines in the lower surface brightness regions due to the large number of local minima. We use the built-in Akaike Information Criterion (AIC) function in threadcount to determine the validity of a model. A model is deemed significantly better when its AIC is smaller by at least 150. Reichardt Chu et al. (2022) discusses that the typically adopted difference of 10 results in all spaxels having multiple components. This is likely because galaxy emission lines are not perfect Gaussians, and our very high S/N data identifies these small deviations. We chose the value of 150 based on visual inspection of characteristic spaxels. A quantified analysis of appropriate AIC values is in progress (Reichardt Chu et al. _in prep_). In our calculation, a 2-Gaussian fit must have an AIC that is 150 smaller than the AIC of the 1-Gaussian fit. Similarly, a 3-Gaussian model must have an AIC value that is 150 smaller than both the 2-Gaussian and 1-Gaussian models. The user is prompted to judge individual fits when parameters fall into a range that is deemed uncertain. For example, if the central wavelengths of two components differ by less than 0.5 Å (the spectral resolution of the measurements) and the ratio of their fluxes is lower than 0.25, the user double-checks the decision made by the software. We then store both a single Gaussian fit for every spaxel and, separately, a multi-Gaussian model for each spaxel as decided by the AIC system.
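threadcount's internals differ in detail, but the \(\Delta\)AIC = 150 selection logic can be sketched with lmfit as below (GaussianModel, ConstantModel, and the `.aic` attribute are real lmfit API); the prefixes and initial guesses are hypothetical.

```python
import numpy as np
from lmfit.models import ConstantModel, GaussianModel

def fit_n_gaussians(wave, flux, n, center=5007.0):
    """Fit a constant background plus n Gaussians with the Nelder-Mead
    minimiser, mirroring the 1/2/3-component models described above."""
    model = ConstantModel(prefix="bkg_")
    params = model.make_params(c=float(np.nanmedian(flux)))
    for k in range(n):
        gauss = GaussianModel(prefix=f"g{k}_")
        params.update(gauss.make_params(center=center + 0.5 * k, sigma=1.0,
                                        amplitude=float(np.nanmax(flux))))
        model = model + gauss
    return model.fit(flux, params, x=wave, method="nelder")

def select_components(wave, flux, d_aic=150.0):
    """Prefer a more complex model only if its AIC beats *all* simpler
    models by at least d_aic, as in the text."""
    fits = {n: fit_n_gaussians(wave, flux, n) for n in (1, 2, 3)}
    if (fits[3].aic < fits[2].aic - d_aic) and (fits[3].aic < fits[1].aic - d_aic):
        return fits[3]
    if fits[2].aic < fits[1].aic - d_aic:
        return fits[2]
    return fits[1]
```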
## 4 Decomposing Outflows from Surrounding Gas in Edge-on Galaxies
In the following sections we present threadcount and our method for distinguishing the region of extraplanar gas that is likely outflow, where we determine the outflow region based on the observed surface brightness alone. As such our method can be more easily applied to large data sets of either deep IFU observations of emission lines or narrow band photometric observations of outflowing galaxies. We demonstrate that the determination of the outflow region using our method corresponds well with the outflow region that would be determined from traditional kinematic arguments.
### Identifying Outflows with Emission Line Surface Brightness
Before determining the outflow regions and their properties we have to exclude the regions dominated by the disk of Mrk 1486. In our observations we find that along the minor axis the continuum drops to negligible values within a full width of order 1.5 kpc. We note that this is roughly equivalent to twice the seeing limit of our observations, which is probably what sets this width. Indeed, Duval et al. (2016) use the HST imaging in Mrk 1486 to define a disk scale-height of \(\pm\)0.25 kpc. Moreover, we find that the linewidth, as traced by the single-Gaussian velocity dispersion, shows a significant increase at a \(z\) distance of \(\pm\)1 kpc from the disk midplane (as defined by the peak of the continuum brightness). We therefore make a cut of \(\pm\)1 kpc to separate the "disk" from the "outflow" of the galaxy. We note that the outflow likely originates from a star-forming region that is smaller (Chisholm et al., 2018) and that our thickness is likely seeing-convolved. This distance is a rough approximation of where the emission line brightness becomes dominated by outflows and is similar to what was used in Leroy et al. (2015) for M 82.
To define our outflow search area, the software must first identify a central axis for each side of the outflow. To do this, threadcount steps outward from the galaxy, finding the brightest spaxel within each strip parallel to the galaxy disk in terms of surface brightness for a chosen emission line. The [OIII] \(\lambda\)5007 emission line is used throughout this work, as it is the brightest line in this galaxy, however any observed emission line can be chosen. The software stops iterating when no spaxels exist with brightness greater than the noise threshold given. In principle, this could be used to judge the outflow extent. This is, however, an observationally-limited method, and in the case of Mrk 1486, the emission extends to the edge of the KCWI field-of-view. The median position of all brightest spaxels in each row is used to define a line perpendicular to the galaxy disk. This is then taken to be the center of the outflow. The center is allowed to vary on separate sides of the galaxy. The software then uses the center to determine the 50% and 90% widths surrounding this center line based on surface brightness, and we take this to be our outflow region. The red lines in Fig. 2 show the outflow region determined by this process overlaid on the HST image and the [OIII] \(\lambda\)5007 emission line flux map.
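A toy version of this axis-and-width search could look like the sketch below, assuming rows of the line map run parallel to the galaxy disk; the flux-fraction definition of the 50%/90% widths is our reading of the procedure, and all names are hypothetical.

```python
import numpy as np

def outflow_axis(linemap, noise_thresh):
    """Median column of the brightest spaxel per row, iterating outward
    until no spaxel in a row exceeds the noise threshold."""
    peaks = []
    for row in linemap:
        if np.nanmax(row) <= noise_thresh:
            break
        peaks.append(int(np.nanargmax(row)))
    return int(np.median(peaks)), len(peaks)

def row_width(row, centre, frac):
    """Smallest symmetric half-width (in spaxels) about `centre` whose
    window contains `frac` of the total row flux."""
    total = np.nansum(row)
    if not np.isfinite(total) or total <= 0:
        return 0
    half = 0
    while np.nansum(row[max(centre - half, 0):centre + half + 1]) < frac * total:
        half += 1
    return half

# e.g. the 50% contour: [row_width(r, centre, 0.5) for r in linemap]
```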
We note that this method determines the shape of the outflow purely from empirical photometric measurements, and is model-independent. The benefit of this method is that it can be applied to data from low spectral resolution instruments, such as the low-resolution PRISM mode of NIRSpec/IFU on JWST, or a narrow-band filter. It is also not reliant on assumptions about the underlying velocity structure of the emission.
We find that at its widest point the 90% surface brightness contour reaches a width of 9.9 kpc while the core of the outflow (the 50% surface brightness contour) reaches a maximum width of 4 kpc. The diameter that contains 90% of the \(i\)-band flux of the starlight in the disk on the major axis is 4.1 kpc. This implies that the 50% width of the outflow reaches a comparable width as the starlight in the disk. Duval et al. (2016) shows that extraplanar Ly \(\alpha\) extends across more of the disk in HST images, which may be consistent with our observations. The outflow in Mrk 1486 is significantly wider than the outflow in M 82, in which Shopbell & Bland-Hawthorn (1998) estimates a diameter of 0.4-0.6 kpc. This may be due to a difference in the launching region of the outflow of Mrk 1486. The outflow of M 82 is known to be centrally-concentrated and contained to a region of order \(\sim\)1 kpc in the center of the galaxy. This is similar to that of NGC 253, which likewise has a central starburst surrounded by a lower SFR disk. In Mrk 1486 we observe \(\Sigma_{\rm SFR}>0.1\) M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\) across the entire disk, which suggests that there is sufficient star formation to drive a wind (Heckman et al., 2015).
### Comparing our method to kinematic tracers of outflows
Figure 3 compares the outflow region determined via this surface brightness based method with kinematic tracers of the outflow. The top panel of Fig. 3 shows the outflow region overlaid on a map of the single-Gaussian velocity dispersion. Ho et al. (2016) argue that the velocity dispersion of gas is a strong indicator of outflows. Similar conclusions can be drawn from Westmoquette et al. (2009). The outflow regions determined from our surface brightness based method contain the minor axis regions with higher velocity dispersion. An average velocity dispersion just inside the contour is \(\sim\)90-100 km s\({}^{-1}\), and this drops to 40-60 km s\({}^{-1}\) just outside the boundary, which is a decline of 30-40 km s\({}^{-1}\). We interpret this as indicating that our surface brightness based determination of the outflow region is identifying a kinematically-distinct component of the gas. This is then consistent with a minor axis outflow.
In the lower panel of Fig. 3 we compare our outflow determination with results from multi-component fitting. The value in each spaxel is the ratio of the flux in the two fainter Gaussian components to the total flux. Within a spaxel the flux is described as \(F_{\rm total}=F_{\rm G1}+F_{\rm G2}+F_{\rm G3}\), where \(F_{\rm G1}\) is the flux of the brightest Gaussian, and so on for the others. The figure then plots \((F_{\rm G2}+F_{\rm G3})/F_{\rm total}\), which is a metric of the presence of multiple Gaussian components. For spaxels where two components are fit, we simply treat \(F_{\rm G3}=0\). Where only a single component is fit we have \(F_{\rm G2}=F_{\rm G3}=0\). We note that the information criterion incorporates signal-to-noise in the choice. Lower S/N spaxels are less likely to identify multiple components, which is the primary reason the multiple components are not strong beyond \(\pm 6\) arcsec. Inside the high S/N region we again find agreement between the strongest multiple component behavior and the outflow contours, overplotted in the figure. We note that so-called "line splitting" is a key feature of many outflows (e.g. Westmoquette et al., 2009). Close to the disk we find that as much as half of the detected [OIII] \(\lambda 5007\) flux is in these additional fit components. These regions also exhibit high values for the single Gaussian velocity dispersion. This suggests that the velocity dispersion in these regions is not solely tracing an increase in turbulence, but is consistent with expanding, semi-transparent shells of gas, which is the typical model of a conical outflow. The outflow region determined from surface brightness surrounds these regions with significant secondary and tertiary fit components. Our surface brightness based method for determining the outflow region therefore recovers the same region as kinematic metrics such as single-Gaussian dispersion and multi-component fitting.
Overall, we find that identifying the outflow in Mrk 1486 with only surface brightness photometry recovers a similar region as more complex kinematic methods.
### Multiple estimates of outflow opening angle
In this subsection we compare the opening angle measured using the geometry determined with the surface brightness method to that determined from the line-splitting. We note this is a heavily simplified interpretation of the opening angle, and it is intended to be both an empirically-based method that can be applied without the need for significant modelling and tractable for comparison across samples of outflows. While Herenz et al. (2023), for example, offers a method that relies on a toy model applied to the outflow of SBS 0335-052E using two limb filaments, a simple geometric method applied to emission line surface brightness contours to determine the opening angle derives naturally from our outflow identification method. The opening angle in edge-on systems is typically determined using the velocity difference between the multiple components of an emission line, so-called "line splitting" measurements. It is not known how common, or how easily observed, line splitting is. It requires a minimum
Figure 2: The outflow region (red lines) determined by threadcount overlaid on the HST image of this galaxy from Fig. 1 (_left_) and [OIII] \(\lambda 5007\) emission line flux (_right_). The single Gaussian dispersion is used to define the disk region as distinct from the surrounding gas. Then the brightest spaxel in each row parallel to the galaxy disk is determined and the median position of these spaxels is taken as the central outflow axis (black vertical line). We then determine the 50% and 90% widths on either side of this line in terms of surface brightness (red dashed and solid lines, respectively) and define this as the outflow region.
ability to resolve the components of the emission lines, and therefore is precluded from use in observations with low spectral resolution. Moreover, line splitting determination of the opening angle requires an assumption about the outflow velocity perpendicular to the disk, which is highly uncertain in edge-on systems.
To estimate the opening angle geometrically, we fit the 50% contour of the outflow (the dashed line in figs. 2 and 3) with a straight line. We fit the NW and SE lobes separately. We then determine the angle between the fit line and the outflow central axis by measuring the arctangent of the slope of the fit. The opening angle is generally reported as the full width of the cone. Therefore, twice the angle between the fitted line and the outflow central axis is comparable to the opening angle of the outflow. Errors in the opening angle are calculated by first running Monte Carlo simulations of major axis offsets for a given minor axis offset, taking the spatial resolution of the image as the error in the major axis position. We then fit each simulation, and take the standard deviation of the fit slope as the error in our measured slope. This error is then propagated to an error in the opening angle. We provide the full opening angle for the outflow shapes and calculation methods discussed below in Table 1. With this method we measure an opening angle at the 50% width of \(19.81\pm 0.01\)\({}^{\circ}\) in the NW region and \(17.7\pm 0.02\)\({}^{\circ}\) in the SE region assuming the frustum⁴ geometry seen in figs. 2 and 3. At the 90% width for these same angles, we measure \(43.94\pm 0.01\)\({}^{\circ}\) in the NW region and \(25.12\pm 0.02\)\({}^{\circ}\) in the SE region, though the dramatic changes in the major axis position of the 90% contour make these angles less reliable as an indicator of the overall outflow opening angle.
Footnote 4: A truncated cone.
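A sketch of this geometric estimate, including the Monte Carlo error propagation described above; `x_err` stands in for the spatial resolution, and the helper name and simulation count are hypothetical.

```python
import numpy as np

def opening_angle(z, x_contour, x_err, n_mc=1000, seed=0):
    """Fit the 50% contour with a line in (minor offset z, major offset x)
    and return the full opening angle (twice the angle to the axis)."""
    rng = np.random.default_rng(seed)
    slope = np.polyfit(z, x_contour, 1)[0]
    sims = [np.polyfit(z, x_contour + rng.normal(0.0, x_err, len(z)), 1)[0]
            for _ in range(n_mc)]
    slope_err = float(np.std(sims))
    theta = 2.0 * np.degrees(np.arctan(abs(slope)))
    theta_err = 2.0 * np.degrees(slope_err / (1.0 + slope ** 2))
    return theta, theta_err

# For the conical variant discussed below, force the fit through the
# origin instead: slope = np.sum(z * x_contour) / np.sum(z ** 2)
```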
We can compare this geometric method to the more common line-splitting determination of opening angle. We measure the mean line separation within \(\pm 1\) kpc of the central outflow axis, extending from 1 to 4 kpc minor axis offset from the galaxy. This excludes the galaxy disk and includes only regions of sufficient S/N to detect line splitting. In spaxels where the one component model was selected, the line separation is taken to be 0 km s\({}^{-1}\). Where a two-component model is selected, the line separation is taken to be the difference between the Gaussian mean of each component. Where a three-component model is selected, the line separation is taken as the difference between the means of the most blueshifted and the most redshifted components. Assuming an expanding, biconical minor axis outflow we can use our measurements of line separation in these minor axis regions to constrain the opening angle of the cone (\(\theta\)) with
\[\theta=2\arcsin\left(\frac{\Delta v}{2v_{\rm out}}\right) \tag{2}\]
from a direct geometric argument. Here \(\Delta v\) is the observed separation between emission line components, and \(v_{\rm out}\) is the outflow velocity. Based on the star-formation rate and stellar mass of the galaxy, there is precedent in the literature for outflow velocities in the range \(v_{\rm out}\approx 150-450\) km s\({}^{-1}\)(Chisholm et al., 2015; Heckman et al., 2015). We assume the median value of \(v_{\rm out}=300\) km s\({}^{-1}\) in our calculation. With this line-splitting method we derive an opening angle of \(\sim 17^{\circ}\) in both the NW and SE regions. This is in strong agreement with the opening angle determined via our photometric method.
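Evaluating Eq. (2) is then a one-liner; with the adopted \(v_{\rm out}=300\) km s\({}^{-1}\), a mean separation of roughly 88 km s\({}^{-1}\) reproduces the \(\sim 17^{\circ}\) quoted below, and halving \(v_{\rm out}\) roughly doubles the angle, which previews the sensitivity discussed later in this section.

```python
import numpy as np

def splitting_angle(delta_v_kms, v_out_kms=300.0):
    """Eq. (2): full cone opening angle from the mean line separation."""
    return np.degrees(2.0 * np.arcsin(delta_v_kms / (2.0 * v_out_kms)))

print(splitting_angle(88.0))         # ~17 deg
print(splitting_angle(88.0, 150.0))  # ~34 deg, cf. the v_out sensitivity
```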
However, this line-splitting method for determining the opening angle assumes a conical outflow, in contrast with the frustum geometry assumed in our previous photometric estimate. To more directly compare the two methods, we derive an opening angle from our photometric results assuming a conical geometry. We fit a linear relationship to the surface-brightness derived outflow region as
Figure 3: The threadcount determined outflow region overlaid on a \(5\times 5\) spaxel averaged map of the [OIII] \(\lambda 5007\) single Gaussian dispersion (_top_). The determined outflow region agrees strongly with the high-dispersion, minor axis regions. This outflow region is also overlaid on a \(5\times 5\) spaxel averaged map of the ratio of the flux in the two least bright components to the total flux in the multi-Gaussian component model for [OIII] \(\lambda 5007\) (_bottom_). High values for this ratio indicate regions where the high-dispersion, minor axis regions are not solely tracing an increase in turbulence, but are the result of an ordered, expanding shell of gas. Close to the disk where there is sufficient signal-to-noise to justify these higher-order fits the region we determine to be outflow from our surface brightness method contains the regions with high values for this flux ratio. Further from the disk, our fitting software prefers simpler fits as a result of the lower signal-to-noise. These maps indicate that the outflow region determined via our surface brightness based method agrees strongly with outflow determinations based on gas kinematics.
above, but enforce a y-intercept of 0, resulting in an outflow profile that terminates at the galaxy center. By design this results in a larger opening angle, since the position of the 50% contour at large z-height has not changed, but the origin has. With this method we calculate an opening angle of \(\sim\)37\({}^{\circ}\) in the NW region and \(\sim\)32\({}^{\circ}\) in the SE region, roughly twice as large as the angles calculated with either previous method. At the 90% width, we measure an opening angle of \(\sim\)73\({}^{\circ}\) in the NW region and \(\sim\)74\({}^{\circ}\) in the SE region. These values are again significantly larger than their frustum counterparts.
It is interesting to note that the surface brightness method identifies a similar opening angle as the line splitting, despite the significant number of differences in measurements and assumptions. However, this agreement relies on an assumption of \(v_{\rm out}\sim 300\) km s\({}^{-1}\). If we change this to a lower value of 150 km s\({}^{-1}\), which is still within the range of \(v_{\rm out}\) for this SFR and mass, then the line-splitting based opening angle changes to 34\({}^{\circ}\). This illustrates the challenge of estimating the opening angle using the line splitting, as it is heavily based on the (very uncertain) adopted \(v_{\rm out}\).
### Outflows in the plane of the disk
Using the line fitting method described in Section 3.2 we find multiple components are common within the plane of the disk. This is seen in the bottom panel of Fig. 3, in which the secondary components make up significant fractions of the total flux inside \(\sim\pm 1\) kpc of the disk midplane. This could suggest gas outflows along the line of sight, especially for those components that are blueshifted. We find that in the vast majority of spaxels (\(\sim\)80-90%) the broad component is blueshifted with respect to the narrow emission line, which is consistent with expectations for outflows. Similarly, Chisholm et al. (2015) measures the outflow via decomposition of the absorption line, which would be in the plane of the disk for this edge-on galaxy. Here we consider these in a similar way as is done in face-on galaxies (e.g. Reichardt Chu et al., 2022).
By decomposing the observed emission lines in the disk region into a broad outflow component and a narrow systemic component as described above, and then taking the total broad and narrow flux across the disk region, we measure the ratio of the outflow to systemic flux, \(F_{\rm broad}/F_{\rm narrow}=0.21\). We show a map of this quantity in the top panel of Fig. 4. This is consistent with results from Davies et al. (2019) and Reichardt Chu et al. (2022). At higher redshift, Concas et al. (2022) argues that outflows from galaxies of similar mass to Mrk 1486 may be overestimated due to spatial resolution effects in KMOS observations (e.g. Forster Schreiber et al., 2019). Our spatial resolution is significantly higher than in those observations. We find, even in the edge-on case, that there is significant broadline flux, similar to studies like Forster Schreiber et al. (2019). If the secondary component in the plane of the disk truly traces outflow, one would in principle expect it to comprise a larger fraction of the flux in a face-on galaxy.
We can estimate the mass loading in the plane of the disk, albeit with significant systematic uncertainty introduced by necessary assumptions. Taking the broad component to comprise the outflow we measure the resolved mass outflow rate from the disk, along the line-of-sight via:
\[\dot{M}_{\rm out}=\frac{1.36\,m_{H}}{\gamma_{\rm H\beta}\,n_{e}}\frac{v_{\rm out}}{R_{\rm out}}L_{\rm H\beta,broad}, \tag{3}\]
where all constants are as described in previous equations, \(L_{\rm H\beta,broad}\) is the luminosity in the broad component of the H\(\beta\) emission line, and \(R_{\rm out}\) is the radial extent of the outflow region. In Eq. 3 the two largest sources of uncertainty are the outflow radius, \(R_{\rm out}\), and the electron density, \(n_{e}\). For electron density, we adopt a single value of \(n_{e}=32\) cm\({}^{-3}\) based on our measurement of this quantity in the disk in Section A1. If the electron density in this outflow instead decays with radial position, our derived value for \(\dot{M}_{\rm out}\) will be a lower bound for a given outflow velocity and radius (\(R_{\rm out}\)). \(R_{\rm out}\) is not directly measurable in down-the-barrel observations. As the morphology of this edge-on outflow is not well known, and there is little precedent in the literature, this introduces a large systematic uncertainty into calculations of \(\dot{M}_{\rm out}\). Additionally, the value of \(R_{\rm out}\) may vary from spaxel to spaxel, as the \(\dot{M}_{\rm out}\) may include gas originating from within a given spaxel as well as tangentially launched gas from adjacent spaxels. We measure a maximum full-width of 2 kpc at a 1 kpc minor axis offset for the 50% width of the minor axis outflow region, which provides a possible upper limit for \(R_{\rm out}\) in the edge-on outflow. Both the size and electron density are consistent with values derived from modeling of UV absorption lines from Xu et al. (2023). We assume a constant value of \(R_{\rm out}\) across all outflow spaxels. For the assumption of \(R_{\rm out}=2\) kpc and \(n_{e}=32\) cm\({}^{-3}\) we find \(\dot{M}_{\rm out}({\rm edge})=0.9\) M\({}_{\odot}\) yr\({}^{-1}\), which yields a mass-loading of \(\eta({\rm edge})\sim 0.25\). We will show in subsequent sections that this is of order 25% of the outflows above the plane of the disk.
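In cgs units, Eq. (3) can be evaluated as below, using the adopted \(n_{e}=32\) cm\({}^{-3}\) and \(R_{\rm out}=2\) kpc, and the H\(\beta\) emissivity quoted later in Section 4.5; the function name is hypothetical and the broad H\(\beta\) luminosity is whatever the fits return.

```python
M_H = 1.6735e-24                     # hydrogen mass [g]
GAMMA_HB = 1.24e-25                  # H-beta emissivity at T = 1e4 K [erg cm^3 / s]
KPC = 3.086e21                       # [cm]
GS_PER_MSUNYR = 1.989e33 / 3.156e7   # (Msun/yr) expressed in g/s

def mdot_disk_edge(L_hb_broad, v_out_kms=300.0, R_out_kpc=2.0, n_e=32.0):
    """Eq. (3): line-of-sight mass outflow rate [Msun/yr] from the broad
    H-beta luminosity [erg/s] in the disk region."""
    rate_gs = (1.36 * M_H / (GAMMA_HB * n_e)
               * (v_out_kms * 1e5) / (R_out_kpc * KPC) * L_hb_broad)
    return rate_gs / GS_PER_MSUNYR

# With these defaults, L ~ 2e40 erg/s reproduces the ~0.9 Msun/yr above.
```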
In Fig. 4 there is a region at position \(\sim\)1.75 arcsec along the major axis and \(\sim\)1 arcsec above the plane on the minor axis that has \(f_{\rm broad}/f_{\rm narrow}\sim 1\). This area is also seen in Fig. 3 and sits near the NW minor axis outflow lobe. The position is elevated above the midplane, and the boundary between outflow and disk may not be well defined at all places in the galaxy. It is, therefore, plausible that this region is not truly an outflow in the plane of the disk, but rather the beginning of the minor axis outflow. A 1.2\(\times\)1.2 arcsec\({}^{2}\) region centered on the peak \(f_{\rm broad}/f_{\rm narrow}\) in this area comprises \(\sim 10\%\) of the outflow flux in the plane of the disk.
Unlike in the extraplanar regions directly above and below the disk, with kinematically decomposed observations of the outflow in the disk region we can directly measure the outflow velocity. Within this region, the outflow velocity is defined as:
\[v_{\rm out}=|\Delta\mu|+2\sigma_{\rm broad}. \tag{4}\]
Here \(\Delta\mu\) is the difference between the means of the systemic and outflow Gaussians and \(\sigma_{\rm broad}\) is the spread in the broad Gaussian component. A map of \(v_{\rm out}\) across the disk region of Mrk 1486 is shown in the middle panel of Fig. 4. We note that in the figure, and in our analysis, we only show spaxels with statistically significant evidence for multiple components based on the BIC values. Within this region we measure outflow velocities within the range 100 - 430 km s\({}^{-1}\), with a median value of 220 km s\({}^{-1}\). These measured velocities are consistent with the range of velocities considered in the minor axis mass outflow rate profile and with those described in the literature (Chisholm et al., 2015; Heckman et al., 2015).
| Method | Assumed Shape | Region | \(\theta\) (\({}^{\circ}\)) |
| :--- | :--- | :--- | :--- |
| Photometry | Frustum | NW | 19.81 \(\pm\) 0.01 |
| Photometry | Frustum | SE | 17.70 \(\pm\) 0.02 |
| Line Splitting | Conical | NW | 16.96 \(\pm\) 0.41 |
| Line Splitting | Conical | SE | 17.27 \(\pm\) 0.14 |
| Photometry | Conical | NW | 37.37 \(\pm\) 0.01 |
| Photometry | Conical | SE | 31.68 \(\pm\) 0.01 |

Table 1: Calculations of the minor-axis outflow opening angle with different methods and assumptions as discussed in Section 4.3.
### Outflows along the minor axis
Fig. 5 shows the vertical axis surface brightness profiles in [OIII] and H\(\beta\), obtained by taking the mean at each minor axis offset within the outflow region determined via threadcount. We recover the extended emission profile consistent with minor axis outflows. We note that the surface brightness profile is a smooth decay with radius that does not support a "thin-shell" geometry, which is sometimes invoked in the literature to interpret outflows measured via quasar absorption lines (see discussion in Veilleux et al., 2020; Tumlinson et al., 2017).
While the two minor axis outflow regions are highly asymmetric in their morphological and kinematic properties, the shape of the surface brightness profile is remarkably similar between the two. We fit a double exponential to the surface brightness profile and treat the outer exponential as the outflow profile. The average fit to these profiles is overplotted in Fig. 5, as are the component single exponential profiles. Within this outflow profile we find a scale height of 2.5 kpc in the NW outflow region and 2.1 kpc in the SE outflow region. This implies that the majority of the outflow mass is contained within \(\sim\)3-4 kpc of the disk center. We measure a 90% extent (of the total integrated flux) of 4.7 kpc in the upper outflow and 5.7 kpc in the lower outflow. Some studies use the maximum observed position of the outflow as the "outflow extent". This is clearly dependent on the sensitivity of the observations. The 90% extent (or even the 50% distance) could be considered an analogous quantity and provides a direct observable that can be compared to simulation results while being less impacted by sensitivity.
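The double-exponential decomposition can be sketched with scipy as below; the initial guesses are hypothetical, and the component with the larger scale height is read off as the outflow profile.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(z, a_gal, h_gal, a_out, h_out):
    """Inner 'galaxy' plus outer 'outflow' exponentials in |z| [kpc]."""
    return (a_gal * np.exp(-np.abs(z) / h_gal)
            + a_out * np.exp(-np.abs(z) / h_out))

def fit_profile(z, sb):
    p0 = (np.nanmax(sb), 0.3, 0.1 * np.nanmax(sb), 2.0)
    popt, _ = curve_fit(double_exp, z, sb, p0=p0)
    return popt  # popt[3] ~ the 2.1-2.5 kpc outflow scale heights above
```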
With the minor axis outflow region determined we can consider the total mass outflow rate within this region. For a minor axis outflow comprised of \(n\) strips of spaxels perpendicular to the galaxy disk, the total mass outflow rate within this outflow region is given by:
\[\dot{M}_{\rm out}=\sum_{i=1}^{n}\frac{v_{{\rm out},i}}{r_{i}}\frac{1.36\,m_{H}}{n_{e,i}\,\gamma_{\rm H\beta}}L_{{\rm H}\beta,i}, \tag{5}\]
based on the thick wind formulae from Rupke et al. (2005). Here \(L_{\mathrm{H}\beta,i}\) is the total H\(\beta\) luminosity within strip \(i\), \(m_{H}\) is the atomic mass of hydrogen, \(n_{e,i}\) is the electron density in strip \(i\), \(\gamma_{\rm H\beta}\) is the H\(\beta\) emissivity, \(v_{{\rm out},i}\) is the outflow velocity within strip \(i\), and \(r_{i}\) is the distance between strip \(i\) and the galaxy. From the [OIII] \(\lambda 4363\) emission line, the temperature in the outflow is determined to be \(\sim 1.2-1.3\times 10^{4}\) K (Cameron et al., 2021). We expect the [OIII]
Figure 4: Map across the disk edge of _Top_: ratio of the broad component flux to the narrow component flux in spaxels where the secondary component is detected, _Middle_: outflow velocity calculated for spaxels where the secondary component is detected, and _Bottom:_ representative fits in positions A, B, and C as shown in the upper and middle panels. The flux ratio and outflow velocity are only calculated in spaxels where there is statistically significant evidence for multiple components based on the BIC value.
to be biased towards hotter gas compared with H\(\beta\) and adopt a temperature \(T=10^{4}\) K, resulting in \(\gamma_{\rm H\beta}=1.24\times 10^{-25}\) erg cm\({}^{3}\) s\({}^{-1}\). The two largest sources of uncertainty in this determination of \(\dot{M}_{\rm out}\) are the gradients in the electron density and in the velocity of the wind. These are discussed at length in Appendix A. The electron density in particular introduces a large, systematic uncertainty into any determination of the mass outflow rate in this galaxy, and within the outflow region is entirely unconstrained. As Mrk 1486 and its outflow are at densities \(\leq 32\) cm\({}^{-3}\), all spaxels within our data are at or below the low-density limit for all optical emission line tracers of density. In our calculation of the total mass outflow rate within these minor axis lobes, we assume a decaying electron density profile following:
\[n_{e,z}=n_{e,\rm max}\left(\frac{z}{h_{n_{e}}}\right)^{-1} \tag{6}\]
with an electron density scale height of \(h_{n_{e}}=0.8\) kpc, such that the density reaches the maximum value at the disk edge. As Mrk 1486 is at an inclination of 85\({}^{\circ}\), we cannot observe the radial outflow velocity in the minor axis regions and must make assumptions about the shape of the velocity profile in the outflow. There is precedent in the literature for a maximum outflow velocity between 150 and 450 km s\({}^{-1}\)(Chisholm et al., 2015; Heckman et al., 2015). Appendix A2 details our determination of the impact of the outflow velocity profile on the final calculated mass outflow rate. We find that with reasonable assumptions the choice of velocity profile has a minimal impact on the mass outflow rate in the edge on outflow. We thus assume a constant outflow velocity with \(v_{\rm out}=300\) km s\({}^{-1}\). With the above assumptions, we calculate a mass outflow rate at the 90% width of 1.2 M\({}_{\odot}\) yr\({}^{-1}\) in the NW outflow region and 1.3 M\({}_{\odot}\) yr\({}^{-1}\) in the SE region. This corresponds to a total mass outflow rate in the minor axis regions of \(\dot{M}_{\rm out}(\rm minor)=2.5\) M\({}_{\odot}\) yr\({}^{-1}\) and a mass loading factor \(\eta(\rm minor)=0.7\).
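Eqs. (5) and (6) combine into the short sum below; the constants repeat the earlier disk-edge sketch so the snippet stands alone, the function name is hypothetical, and the strip luminosities and minor-axis offsets are the per-row measurements.

```python
import numpy as np

M_H = 1.6735e-24                     # [g]
GAMMA_HB = 1.24e-25                  # [erg cm^3 / s]
KPC = 3.086e21                       # [cm]
GS_PER_MSUNYR = 1.989e33 / 3.156e7   # (Msun/yr) in g/s

def mdot_minor(L_hb_strips, z_kpc, v_out_kms=300.0,
               n_e_max=32.0, h_ne_kpc=0.8):
    """Eq. (5) summed over strips, with the n_e(z) profile of Eq. (6)."""
    z = np.asarray(z_kpc, dtype=float)
    n_e = n_e_max * (z / h_ne_kpc) ** -1.0           # Eq. (6)
    rate_gs = np.sum((v_out_kms * 1e5) / (z * KPC)
                     * 1.36 * M_H / (n_e * GAMMA_HB)
                     * np.asarray(L_hb_strips, dtype=float))
    return rate_gs / GS_PER_MSUNYR                   # [Msun/yr]
```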
### 3-Dimensional Outflow Shape
In the above sections, we have estimated the mass outflow rate in both the disk-edge and minor axis outflow regions. We show in Fig. 6 a cartoon diagram that is representative of the outflow shape for Mrk 1486. The minor axis outflow, which dominates in terms of mass flux, is shown in cyan and the disk-edge outflow, which contributes \(\sim 25\)% to the total outflow mass rate in the galaxy, is shown in blue. We determine the outflow shape in Mrk 1486 to be neither purely spherical nor biconical. The outflow is still dominated by the biconical component that is typical of other studied outflows (e.g. Shopbell & Bland-Hawthorn, 1998; Herenz et al., 2023), but has an additional component emerging in the plane of the disk.
If this complex outflow shape is not unique to Mrk 1486, then the relative contributions of the different outflow components suggest possible systematic uncertainties in the determination of the total mass outflow rate for a galaxy observed with measurements that only detect a subset of the components. We now consider the impact of this systematic. To do so we compare the mass outflow rate of each component to that of a thin, spherical shell of similar flux. This simulates observation of each component were it an unresolved, "down-the-barrel" observation.
For each outflow component we determine a total mass and the area in which outflows are detected. We then calculate a mass density \(\rho_{\rm outflow}\) within the determined outflow region and let \(\rho_{\rm outflow}=\mu m_{p}N_{H}\) in Eq. 1. We then assume \(\Omega=4\pi\) per the spherical geometry. We adopt \(R_{\rm out}=2\) kpc (as in our determination of the disk-edge mass outflow rate in Section 4.4), and \(v_{\rm out}=300\) km s\({}^{-1}\) (as in our determination of the minor-axis outflow rate in Section 4.5 and consistent with the range of observed velocities in the disk edge outflow). We then calculate a mass outflow rate assuming a spherical outflow morphology via Eq. 1.
Table 2 shows the calculated mass outflow rate based on whether the disk-edge or minor axis outflows are observed. Note that the middle column, "Empirical Shape," is not the global mass-loading for Mrk 1486, but only the mass rate of that component. For the entire galaxy we measure a total empirical mass outflow rate of \(\dot{M}_{\rm out}=3.4\) M\({}_{\odot}\) yr\({}^{-1}\). On the minor axis the outflow is extended over a large area, lowering the surface density of any single line of sight that is observed. As the spherical, thin-shell geometry neglects the large extent of the minor axis outflows, their low surface density results in low total mass outflow rates of \(\dot{M}_{\rm out}=1.6\) M\({}_{\odot}\) yr\({}^{-1}\) whether the mass density is calculated from the NW or SE lobe. Conversely, the relatively high surface density of the disk-edge outflow results in a very large total mass outflow rate of \(\dot{M}_{\rm out}=9.2\) M\({}_{\odot}\) yr\({}^{-1}\), much greater than the empirically determined value. Clearly, the geometry of the galaxy, and thus of the outflow, is an important parameter to constrain in low-resolution observations of outflows (e.g. Rubin et al., 2014; Forster Schreiber et al., 2019).
| Observed Component | Empirical Shape (M\({}_{\odot}\) yr\({}^{-1}\)) | Spherical Model (M\({}_{\odot}\) yr\({}^{-1}\)) |
| :--- | :--- | :--- |
| Minor Axis NW | 1.2 | 1.6 |
| Minor Axis SE | 1.3 | 1.6 |
| Disk-Edge | 0.9 | 9.2 |

Table 2: The mass outflow rate within each detected outflow component and the calculated mass outflow rate assuming a thin-shell, spherical outflow morphology as discussed in Section 4.6.
Figure 5: Radial flux profiles for the minor-axis outflow regions in Mrk 1486. Shown are profiles for [OIII] \(\lambda\)5007 and H\(\beta\) in both the NW and SE outflow regions. The two outflow regions are very symmetric in terms of surface brightness. Overlaid in grey is the same profile for H\(\alpha\) emission in M 82 from Leroy et al. (2015). Overplotted is the average double exponential fit to the shown flux profiles in Mrk 1486, along with the individual exponential components representing the galaxy and the outflow. In the regions for which the M 82 data are available, the shape of the profile is very similar to that of Mrk 1486.
## 5 Discussion & Conclusion
In this paper we analyzed the extraplanar emission in H\(\beta\) and [OIII] \(\lambda\)5007 of the galaxy Mrk 1486. We present a simple technique for identifying outflows in edge-on galaxies and measuring their properties, which can be applied to IFU observations or narrowband imaging with sufficient spatial resolution to resolve the outflow. The outflow emission is identified by (1) identifying lines perpendicular to the galaxy disk which correspond to the peak surface brightness above and below the disk (i.e., the outflow axis) and (2) measuring the surface brightness contours that enclose the width of the outflow. In this work we chose the 50% or 90% widths; however, this could be altered in future work based on the needs of the program. In Fig. 3 we show that this method, which only requires emission line surface brightness, recovers the same region that is identified as having larger line-widths and a stronger presence of multiple components. We take this as confirmation that the simple photometry is identifying a kinematically-distinct component of the extraplanar material, consistent with an expanding outflow.
We observe emission to \(\pm\)8 kpc above and below the disk, far beyond the extent of the stellar continuum. The outflow follows an exponential decay with a scale-length of \(\sim\)2.1-2.5 kpc, and a corresponding distance enclosing 90% of the outflow flux of \(\sim\)5-6 kpc. The outflow has a wide base, which reflects the wide distribution of star formation in Mrk 1486. This base is wider than that measured in M 82.
In addition, we observe a disk-edge outflow component, contributing \(\sim\) 25% to the total mass outflow rate for the galaxy. We are thus able to determine the 3-dimensional shape of the outflow in Mrk 1486, and find it to be neither spherical nor purely biconical. A plausible cause of both the wide-base and disk-plane outflows is the distribution of star formation within Mrk 1486. We find strong emission lines extending to the edge of the disk, and likewise wide-spread UV emission. This is indicative of a galaxy-wide starburst. The SFR of Mrk 1486 is roughly 5\(\times\) higher than the main-sequence value for its mass, which likewise indicates a strong amount of star formation. If we compare Mrk 1486 to the local, well-studied outflow galaxy NGC 253 there are significant differences in the global distribution of star formation. In NGC 253 the starburst is contained within a small region in the central \(R\sim 500\) pc of a large disk. There are many kiloparsecs between the starburst and the disk edge of NGC 253, which would act to collimate the outflow. In Mrk 1486 the starburst is not centrally located, which allows for escape from the disk along more lines of sight. We note this may have implications for outflows from galaxies at larger redshift, which do not necessarily have centrally concentrated star formation and may be clumpier.
The mass distribution within these outflow regions is heavily dependent on assumptions about electron density and \(R_{\rm out}\). Under the assumption of a constant outflow velocity and a radial electron density profile decaying with \(r^{-1}\), we measured a total mass outflow rate within the minor axis outflow of 2.5 M\({}_{\odot}\) yr\({}^{-1}\). The electron density is essentially unconstrained in the extraplanar regions. The entirety of the galaxy, and especially the extraplanar regions, is at or below the low-density limit for all optical emission line tracers of density. Observations with greater spectral resolution would assist in resolving the individual [OII] emission lines, and the [SII] doublet offers another density tracer that is more widely separated in wavelength and thus more easily resolved; even so, the determination of the electron density within the outflow regions remains hampered by the limitations of optical emission line tracers of density.
We calculated the opening angle for the minor axis outflow using multiple methods and assumptions. In particular, we compared the opening angle determined in the typical manner, by measuring the degree of separation between components in multi-Gaussian fits to the observed emission lines, with the opening angle measured directly from our determination of the outflow region. Our photometric determination of the outflow region produces an opening angle at the 50% width of 20\({}^{\circ}\) in the NW region and 18\({}^{\circ}\) in the SE region assuming a frustum outflow shape, and 37\({}^{\circ}\) in the NW region and 32\({}^{\circ}\) in the SE region assuming a conical outflow shape. Using the more common "line-splitting" technique for determining the opening angle we calculate an opening angle of 17\({}^{\circ}\) in both the NW and SE regions. This opening angle determination is consistent with the opening angle determined from the threadcount outflow shape, supporting this method of direct opening angle observation. Determination of the opening angle and geometry of galactic winds opens an interesting new avenue for studying outflows, and will facilitate much more precise determinations of the mass outflow rates of galactic winds.
The sample size of galaxies for which we have resolved outflow observations is very small, and thus whenever we can add another target to that sample, the results on that target are worth noting as we have done here with Mrk 1486. This galaxy has similar ISM characteristics to galaxies at larger redshift in that it is low metallicity, high SFR, and low mass. An option to reproduce these results at higher redshift is to use long integrations with JWST NIRSpec PRISM, which has extremely high throughput. While this would sacrifice kinematic information and likely blend emission lines, one could generate a
Figure 6: A cartoon representation of the 3D outflow shape in Mrk 1486. The galaxy disk is shown in grey. We detect a biconical outflow component along the minor axis (cyan) that dominates the mass outflow in the galaxy. In addition, we detect a disk-edge component (blue) containing a non-negligible (\(\sim\) 25%) amount of the total outflowing mass. The 3D shape of the outflow in Mrk 1486 is thus not purely spherical nor only biconical. If this outflow shape is common, then this suggests systematic uncertainties in the calculation of the total mass outflow rate in galaxies where only the minor axis or disk-edge outflow is detected.
map of ionized gas in visible wavelengths for comparison to simulations. Our results present a new method, albeit representing only one object. More observations of outflowing systems are required. The method developed above can be easily applied to larger samples of galaxies observed with KCWI and MUSE to map resolved gas flows and their physical properties.
## Acknowledgements
Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. D.B.F. acknowledges support from Australian Research Council (ARC) Future Fellowship FT170100376 and ARC Discovery Program grant DP130101460. A.D.B. acknowledges partial support from AST1412419 and AST2108140. A.J.C. acknowledges funding from the "FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 789056). R.H.-C. thanks the Max Planck Society for support under the Partner Group project "The Baryon Cycle in Galaxies" between the Max Planck for Extraterrestrial Physics and the Universidad de Concepcion. R.H.-C also acknowledges financial support from Millennium Nucleus NCN19058 (TITANs) and support by the ANID BASAL projects ACE210002 and FB210003. R.R.V. and K.S. acknowledge funding support from National Science Foundation Award No. 1816462. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Observations were supported by Swinburne Keck. The authors wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
## Data Availability
The DUVET Survey is still in progress. The data underlying this article will be shared on reasonable request to the PI, Deanne Fisher at [email protected]
|
2305.01053 | Robust and Reliable Stochastic Resource Allocation via Tail Waterfilling | Stochastic allocation of resources in the context of wireless systems
ultimately demands reactive decision making for meaningfully optimizing
network-wide random utilities, while respecting certain resource constraints.
Standard ergodic-optimal policies are however susceptible to the statistical
variability of fading, often leading to systems which are severely unreliable
and spectrally wasteful. On the flip side, minimax/outage-optimal policies are
too pessimistic and often hard to determine. We propose a new risk-aware
formulation of the resource allocation problem for standard multi-user
point-to-point power-constrained communication with no cross-interference, by
employing the Conditional Value-at-Risk (CV@R) as a measure of fading risk. A
remarkable feature of this approach is that it is a convex generalization of
the ergodic setting while inducing robustness and reliability in a fully
tunable way, thus bridging the gap between the (naive) ergodic and
(conservative) minimax approaches. We provide a closed-form expression for the
CV@R-optimal policy given primal/dual variables, extending the classical
stochastic waterfilling policy. We then develop a primal-dual tail-waterfilling
scheme to recursively learn a globally optimal risk-aware policy. The
effectiveness of the approach is verified via detailed simulations. | Gokberk Yaylali, Dionysios S. Kalogerias | 2023-05-01T19:32:49Z | http://arxiv.org/abs/2305.01053v1 | # Robust and Reliable Stochastic
###### Abstract
Stochastic allocation of resources in the context of wireless systems ultimately demands reactive decision making for meaningfully optimizing network-wide random utilities, while respecting certain resource constraints. Standard ergodic-optimal policies are however susceptible to the statistical variability of fading, often leading to systems which are severely unreliable and spectrally wasteful. On the flip side, minimax/outage-optimal policies are too pessimistic and often hard to determine. We propose a new _risk-aware_ formulation of the resource allocation problem for standard multi-user point-to-point power-constrained communication with no cross-interference, by employing the Conditional Value-at-Risk (CV@R) as a measure of fading risk. A remarkable feature of this approach is that it is a convex generalization of the ergodic setting while inducing robustness and reliability in a fully tunable way, thus bridging the gap between the (naive) ergodic and (conservative) minimax approaches. We provide a closed-form expression for the CV@R-optimal policy given primal/dual variables, extending the classical stochastic waterfilling policy. We then develop a primal-dual _tail-waterfilling_ scheme to recursively learn a globally optimal risk-aware policy. The effectiveness of the approach is verified via detailed simulations.
Resource Allocation, Waterfilling, Conditional Value-at-Risk (CV@R), Risk-Aware Optimization.
## I Introduction
We revisit the classical problem of allocating resources in point-to-point communication networks operating over realizations of random fading channels \(\mathbf{h}\in\mathcal{H}\subseteq\mathbb{R}^{n}\). Resources such as transmission power and/or channel access are allotted among users to meaningfully optimize certain network-wide random utilities. Traditionally, such resources are allocated either _deterministically_ by essentially disregarding the statistical variability of fading as an integral characteristic of the system, including minimax formulations [1, 2], or _stochastically_ in an ergodic sense by considering performance averages [2, 3, 4, 5, 6], i.e., expectations of random network objectives in an attempt to optimize performance in the "long-term".
However, being optimal in expectation, ergodic stochastic resource allocation lacks the ability to effectively quantify relatively infrequent though statistically significant fading events causing performance drops, e.g., deep(er) fades. Indeed, the statistical dispersion of a communication medium with a fatter-tailed distribution is quite likely to result in rather undesirable channel realizations, leading to potentially major service losses. This happens because expectations of random services do not capture such risky tail events. In other words, ergodic-optimal resource allocation policies are _risk-neutral_.
In fact, it is well-known that optimal ergodic policies are often channel-opportunistic [3], and prone to sporadic channel realizations that negatively affect performance, leading to unreliable systems suffering from substantial spectrum underutilization. On the other extent, minimax-type (sometimes called "robust") resource policies aim for maximally reliable system performance [1, 2]. Still, such policies are known for being overcautious and for achieving conservative system performance, on top of the often unreasonable difficulty of the resulting optimization problems.
While approaches based on outage probability optimization try to bridge the two extremes [7], they exhibit counterfactual issues: Outage probability targets required for performing allocation of resources might not even be feasible to begin with, and even if they are, they might not result in operationally meaningful performance. Quantile-based resource allocation, such as outage rate/capacity optimization, aims for alleviating those issues, however the resulting problems still suffer from other limitations, mainly related to interpretability and lack of favorable structure (e.g., convexity).
In this paper, we introduce a _risk-aware_ formulation of the resource allocation problem -such ideas/approaches have recently started getting traction [8, 9, 10]- for standard multi-user point-to-point resource-constrained communication with no cross-interference, by capitalizing on the _Conditional Value-at-Risk (CV@R)_[11] as a measure of fading risk. CV@R, deeply rooted in mathematical finance, is a _coherent risk measure_[12], trades naturally between the (naive) ergodic and (conservative) minimax settings, and allows formulating the proposed risk-aware problem as a convex, well-structured extension of its ergodic counterpart, liberated from counterfactual issues and inducing robustness and reliability in a fully tunable way.
After obtaining a closed-form expression for the optimal Lagrangian-relaxed CV@R policy, we propose the _tail waterfilling algorithm_, a primal-dual scheme to learn a globally optimal risk-aware policy in a recursive fashion. Indeed, tail waterfilling continuously extends classical stochastic waterfilling [13] to the risk-aware universe. We present detailed numerical simulations, empirically corroborating the effectiveness of our approach for two standard utilities, namely, weighted sum-rate and proportional fairness.
## II System Model
We consider an \(n\)-terminal parallel point-to-point communication channel model; some examples of relevant networking scenarios may be visualized as in Fig. 1. We assume perfect
channel state information (CSI) at transmission time (mainly for simplicity), which is leveraged to allocate resources via a _policy_\(\mathbf{p}(\mathbf{h})\), where \(\mathbf{h}\) is the channel fading vector. The rate for terminal or user \(i=1,\ldots,n\) in the network is
\[r_{i}(p_{i}(h_{i}),h_{i})\triangleq\log\left(1+\frac{h_{i}p_{i}(h_{i})}{\sigma_{ i}^{2}}\right), \tag{1}\]
where \(\sigma_{i}^{2}>0\) is the noise variance of the corresponding link. Under this setting, optimal resource allocation in an ergodic sense may be achieved by solving the convex problem [3]
\[\begin{array}{ll}\mathop{\mathrm{maximize}}\limits_{\mathbf{x}\in\mathcal{X}, \mathbf{p}\succeq\mathbf{0}}&f_{0}(\mathbf{x})\\ \mathrm{subject\ to}&\mathbf{x}\preceq\mathbb{E}\left[\mathbf{r}(\mathbf{p}(\mathbf{h}),\mathbf{h} )\right],\\ &\left\|\mathbb{E}\left[\mathbf{p}(\mathbf{h})\right]\right\|_{1}\leq P_{0}\end{array} \tag{2}\]
where \(f_{0}\) is a given concave utility, \(\mathbf{x}\) is the mean-ergodic rate vector, \(\mathcal{X}\) is a convex set, \(\mathbf{r}\) is the instantaneous rate vector, and \(P_{0}\) is a total mean power budget. Problem (2) is very well-studied; in fact, a globally optimal solution may be obtained via the well-known (stochastic) _waterfilling algorithm_[13], which is the same as (stochastic) dual descent [3]. However, as mentioned in Section I, it is also known that optimal policies obtained by solving (2) are channel-opportunistic, unavoidably leading to systems which are severely unreliable and spectrally wasteful. This is because of the _risk-neutral_ quantification of channel uncertainty in (2), which discards information about the higher-order or _tail_ behavior of the rate as a function of random fading. On the other hand, minimax-optimal policies or policies minimizing outage probabilities are either overly pessimistic [7], or result in problems that are difficult to handle, or suffer from counterfactual issues.
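To make the classical baseline concrete: for the special case of an unweighted sum-rate utility, the ergodic-optimal policy of (2) takes the textbook waterfilling form \(p_{i}(\mathbf{h})=(1/\mu-\sigma_{i}^{2}/h_{i})_{+}\), with \(\mu\) set so that the mean total power meets \(P_{0}\). The sketch below, under that sum-rate assumption, approximates the expectation by a sample average and bisects on \(\mu\); names are hypothetical.

```python
import numpy as np

def ergodic_waterfilling(h, sigma2, P0, iters=60):
    """h: (num_draws, n) fading gains; sigma2: (n,) noise powers.
    Returns the waterfilling policy on the batch and the water price mu."""
    lo, hi = 1e-9, 1e9
    for _ in range(iters):
        mu = np.sqrt(lo * hi)                        # geometric bisection
        p = np.clip(1.0 / mu - sigma2 / h, 0.0, None)
        if p.sum(axis=1).mean() > P0:
            lo = mu                                  # spending too much power
        else:
            hi = mu
    return p, mu
```

Such a policy pours power onto strong channels and switches weak ones off entirely, which is precisely the channel-opportunism criticized above.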
To effectively address those shortcomings, we take a fundamentally distinct approach to stochastic resource allocation by replacing the expectation of rates in (2) by a (vector) _risk measure_[12], specifically the CV@R[11], defined for an integrable random _cost_\(z\) as
\[\begin{array}{ll}\mbox{CV@R}^{\alpha}[z]\triangleq\inf_{t\in\mathbb{R}}\ t+\frac{1}{\alpha}\mathbb{E}[(z-t)_{+}],\end{array} \tag{3}\]
where \(\alpha\in(0,1]\) is the corresponding _confidence level_. CV@R is a strict and tractable generalization of expectation, because
\[\begin{array}{ll}\mbox{CV@R}^{1}[z]=\mathbb{E}[z]\leq\mbox{CV@R}^{\alpha}[ z],\ \forall\alpha\in(0,1]\ \mbox{and}\\ \mbox{CV@R}^{0}[z]\triangleq\lim_{\alpha\downarrow 0}\mbox{CV@R}^{\alpha}[z]= \mbox{esssup}\,z.\end{array} \tag{4}\]
Intuitively, CV@R measures _expected losses restricted to the upper tail_ of \(z\) of probability equal to \(\alpha\); see Fig. 2. Therefore, it provides an interpretable and tunable tradeoff bridging risk-neutrality and minimax robustness.
To make CV@R suitable for maximizing rewards -cf. (2)-rather than minimizing losses, it is sufficient to reflect it as
\[-\mbox{CV@R}^{\alpha}[-z]=\sup_{t\in\mathbb{R}}\ t-\frac{1}{\alpha}\mathbb{E }[(t-z)_{+}], \tag{5}\]
now measuring expected _rewards_ restricted to the _lower_ tail of \(z\) of probability equal to \(\alpha\); again, see Fig. 2 for a comparison. Using (5), we may formulate our proposed risk-aware resource allocation problem as
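Both (3) and (5) are straightforward to evaluate on samples, since the optimal \(t\) is the corresponding quantile; a minimal sketch (helper names are hypothetical):

```python
import numpy as np

def cvar_cost(z, alpha):
    """CV@R of a cost, definition (3); the minimiser t* is the
    (1 - alpha)-quantile of z."""
    t = np.quantile(z, 1.0 - alpha)
    return t + np.mean(np.maximum(z - t, 0.0)) / alpha

def cvar_reward(r, alpha):
    """-CV@R^alpha[-r], Eq. (5): mean reward in the lower alpha-tail."""
    return -cvar_cost(-np.asarray(r), alpha)

rates = np.random.default_rng(0).exponential(1.0, 100_000)
print(cvar_reward(rates, 1.0))   # recovers the plain mean, cf. (4)
print(cvar_reward(rates, 0.1))   # expected rate in the worst-decile fades
```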
\[\begin{array}{ll}\boxed{P^{*}=\mathop{\mathrm{maximize}}\limits_{\mathbf{x}\in \mathcal{X},\mathbf{p}\succeq\mathbf{0}}}&f_{0}(\mathbf{x})\\ \mathrm{subject\ to}&\mathbf{x}\preceq-\mbox{CV@R}^{\mathbf{\alpha}}\left[-\mathbf{r}( \mathbf{p}(\mathbf{h}),\mathbf{h})\right]\\ &\left\|\mathbb{E}\left[\mathbf{p}(\mathbf{h})\right]\right\|_{1}\leq P_{0}\end{array} \right], \tag{6}\]
where \(\mathbf{x}\) is now interpreted as a _risk-ergodic rate_ vector, and the vector operator CV@R\({}^{\mathbf{\alpha}}[\cdot]\) with a confidence level vector \(\mathbf{\alpha}\) evaluates the risk of the corresponding rate vector in an elementwise manner. Note that we have tacitly not enforced risk-aware behavior on the policy itself (resource constraint), as this is operationally unnecessary. It is a standard exercise to show that problem (6) can be equivalently expressed as
\[\begin{array}{ll}P^{*}=\mathop{\mathrm{maximize}}\limits_{\mathbf{x}\in\mathcal{X},\mathbf{p}\succeq\mathbf{0},\mathbf{t}}&f_{0}(\mathbf{x})\\ \mathrm{subject\ to}&\mathbf{x}\preceq\mathbf{t}-\frac{1}{\mathbf{\alpha}}\odot\mathbb{E}\left[(\mathbf{t}-\mathbf{r}(\mathbf{p}(\mathbf{h}),\mathbf{h}))_{+}\right],\\ &\left\|\mathbb{E}\left[\mathbf{p}(\mathbf{h})\right]\right\|_{1}\leq P_{0}\end{array} \tag{7}\]
where "\(\odot\)" stands for elementwise multiplication, while \((\cdot)_{+}\) and division with a vector are similarly overloaded.
Observe that problem (6) is infinite-dimensional; in general such problems are challenging to tackle. Nonetheless, CV@R is a convex and monotone (in fact, coherent) risk measure [12], which preserves the convexity of both (6) and (7). Therefore, under the effect of some appropriate constraint qualification, such as Slater's condition -assumed hereafter-, problems (6) and (7) exhibit no duality gap (in fact, strong duality). This fact suggests that we handle (7) in the dual domain, i.e., within the framework of Lagrangian duality.
Fig. 1: Examples of multi-user one-to-one communication channels. Left: Multiplexed (e.g., time or frequency) star uplink or downlink model. Right: Classical parallel channel model.
## III Lagrangian Duality
The Lagrangian of problem (7) is defined as
\[\begin{split}\mathcal{L}&(\mathbf{x},\mathbf{p},\mathbf{t},\Lambda,\mu)\\ &\triangleq f_{0}(\mathbf{x})+\mu\left(P_{0}-\left\|\mathbb{E}\left[\bm {p}(\mathbf{h})\right]\right\|_{1}\right)\\ &\quad+\Lambda^{T}\left[\mathbf{t}-\frac{1}{\mathbf{\alpha}}\odot \mathbb{E}\left[(\mathbf{t}-\mathbf{r}(\mathbf{p}(\mathbf{h}),\mathbf{h}))_{+}\right]-\mathbf{x} \right],\end{split} \tag{8}\]
where \(\Lambda\succeq 0\) and \(\mu\geq 0\) are the dual variables associated with the explicit constraints of (7). Accordingly, the _dual function_ is defined as the maximization of the Lagrangian function over the primal variable triplet \((\mathbf{x},\mathbf{p},\mathbf{t})\), i.e.,
\[D(\Lambda,\mu)=\sup_{\mathbf{x}\in\mathcal{X},\mathbf{p}\succeq\mathbf{0},\mathbf{t}}\ \mathcal{L}(\mathbf{x},\mathbf{p},\mathbf{t},\Lambda,\mu). \tag{9}\]
Subsequently, the _dual problem_ is the minimization of the dual function over the dual variable pair \((\Lambda,\mu)\), i.e.,
\[\begin{split} D^{*}&=\inf_{(\Lambda,\mu)\succeq \mathbf{0}}\ D(\Lambda,\mu),\\ &=\inf_{(\Lambda,\mu)\succeq\mathbf{0}}\sup_{\mathbf{x}\in\mathcal{X}, \mathbf{p}\succeq\mathbf{0},\mathbf{t}}\ \mathcal{L}(\mathbf{x},\mathbf{p},\mathbf{t},\Lambda,\mu).\end{split} \tag{10}\]
As mentioned previously, problem (7) exhibits strong duality (under Slater's condition), which means that \(P^{*}=D^{*}\) and, what is more, optimal dual variables are guaranteed to exist. We also observe that even though problem (7) is infinite-dimensional, its dual (10) is finite-dimensional, which is a very useful fact if we are able to tackle the maximization involved in the dual function (9) -in particular over \(\mathbf{p}\)- adequately.
Leveraging strong duality, we hereafter focus on devising an efficient primal-dual algorithm for solving the minimax problem (10), hopefully providing an optimal solution to the constrained convex risk-aware problem (7), as well [3].
## IV Risk-Aware Resource Allocation: The Tail Waterfilling Algorithm
The dual problem can be separated into several subproblems with respect to the primal variables. In particular, (10) can be equivalently expressed as
\[\inf_{(\Lambda,\mu)\succeq\mathbf{0}}\ \biggl\{\mu P_{0}+\sup_{\mathbf{x}\in\mathcal{X}}\Big(f_{0}(\mathbf{x})-\Lambda^{T}\mathbf{x}\Big)+\sup_{\mathbf{t}}\ \sum_{i=1}^{n}\Big(\lambda_{i}t_{i}+\mathbb{E}\Big[\sup_{p_{i}\geq 0}\ -\mu p_{i}-\frac{\lambda_{i}}{\alpha_{i}}\big(t_{i}-r_{i}(p_{i}(h_{i}),h_{i})\big)_{+}\Big]\Big)\biggr\}, \tag{11}\]
where interchanging the \(\sup\) over \(\mathbf{p}\) with expectation (integration) is justified in light of the _interchangeability principle_; see, e.g., [12, Theorem 7.92]. This fact allows us to derive an optimal policy in closed-form.
### _Optimal Resource Policy_
The particular policy subproblem for each user \(i\) is
\[\sup_{p_{i}\geq 0}\ -\mu p_{i}-\frac{\lambda_{i}}{\alpha_{i}}\left(t_{i}-\log \left(1+\frac{h_{i}p_{i}}{\sigma_{i}^{2}}\right)\right)_{+}. \tag{12}\]
The next result provides an optimal solution of subproblem (12), determining the behavior of the optimal risk-aware policy \(\mathbf{p}^{*}(\mathbf{h})\), parameterized by primal/dual variables \((\mathbf{t},\Lambda,\mu)\).
**Theorem 1** (Optimal Resource Policy): _An optimal solution to the \(i\)-th policy subproblem (12) may be expressed as_
\[\boxed{p_{i}^{*}(h_{i},\cdot)=\begin{cases}\left[\dfrac{\lambda_{i}}{\mu\alpha_{i}}-\dfrac{\sigma_{i}^{2}}{h_{i}}\right]_{+},&\text{if }\dfrac{\lambda_{i}}{\mu\alpha_{i}e^{t_{i}}}-\dfrac{\sigma_{i}^{2}}{h_{i}}<0,\\[2mm]\dfrac{\sigma_{i}^{2}\left(e^{t_{i}}-1\right)}{h_{i}},&\text{if }\dfrac{\lambda_{i}}{\mu\alpha_{i}e^{t_{i}}}-\dfrac{\sigma_{i}^{2}}{h_{i}}\geq 0,\end{cases}} \tag{13}\]
_whenever \(\mu\) and \(\lambda_{i}\) are not simultaneously zero; otherwise, it is optimal to choose \(p_{i}^{*}(h_{i})=0\)._
The risk-aware policy presented in Theorem 1 is a genuine extension of the classical risk-neutral waterfilling power policy [3, 13], which for user \(i\) may be expressed as
\[p_{i}^{N}(h_{i},\cdot)=\left[\frac{\lambda_{i}}{\mu}-\frac{\sigma_{i}^{2}}{h_{i }}\right]_{+}. \tag{14}\]
This reduction is obtained by setting \(\alpha_{i}=1\), and then sending the CV@R variable \(t_{i}\) to \(+\infty\) in (13) (this can be explained by the construction of the CV@R; see [12, Chapter 6]).
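As a quick sanity check of this reduction: for \(\alpha_{i}=1\), the activation condition of the upper branch in (13) reads \(\lambda_{i}/(\mu e^{t_{i}})<\sigma_{i}^{2}/h_{i}\), which holds for every fixed \(h_{i}>0\) once \(t_{i}\) is large enough; hence

\[\lim_{t_{i}\to+\infty}\,p_{i}^{*}(h_{i},\cdot)\big|_{\alpha_{i}=1}=\left[\frac{\lambda_{i}}{\mu}-\frac{\sigma_{i}^{2}}{h_{i}}\right]_{+}=p_{i}^{N}(h_{i},\cdot)\,.\]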
Presence of a confidence level \(\alpha_{i}\in(0,1)\) (and the related rate target \(t_{i}\)) makes the policy _less opportunistic_ as compared with its risk-neutral counterpart. _First_, as \(\alpha_{i}\) decreases, power is allocated more aggressively to more faded channels (smaller values of \(h_{i}\)) in the \([\cdot]_{+}\)-related part of the policy (cf. classical waterfilling), whenever the corresponding branch of (13) is active (also depending on \(t_{i}\)). Further, if \(\alpha_{i}\) (resp. \(t_{i}\)) is small enough to make the lower branch of (13) active, the allocated power equals \(\sigma_{i}^{2}\left(e^{t_{i}}-1\right)/h_{i}\), which is constant relative to \(\alpha_{i}\) but increasing in \(t_{i}\). We observe a striking similarity of this term to outage-optimal policies; see, e.g., [7]. This might point to intricate information-theoretic properties of the CV@R approach, which can be the subject of future investigation.
_Second_, when the \([\cdot]_{+}\)-related part of the policy is nonpositive, then the policy indeed becomes opportunistic. However, given relevant (or optimized) values of \(t_{i}\), the opportunism of the policy is effectively _restricted to the lower tail_ (occurring with probability equal to \(\alpha_{i}\)) of the random rate \(r_{i}(p_{i}(h_{i}),h_{i})\).
These remarks may also be readily observed in Fig. 3. Having explicitly determined \(\mathbf{p}^{*}\), we may now derive primal -in \((\mathbf{x},\mathbf{t})\)- and dual -in \((\Lambda,\mu)\)- iterations, purposed to _recursively learn_ a globally optimal solution to (7).
Fig. 3: Optimal resource policies for risk-aware (RA, solid) and risk-neutral (RN, dashed) settings in a 3-user network, with \(\mu=0.07\), \(\alpha_{i}=0.53\) and \(\lambda_{i}=0.33\) for all \(i\), \(t_{1}=2.9\), \(t_{2}=2.15\) and \(t_{3}=2.45\), respectively.
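A short Python sketch of the two policies, using the parameter values listed in the caption of Fig. 3 (the noise variances \(\sigma_{i}^{2}\) are not listed there, so \(\sigma^{2}=1\) is an assumption made purely for illustration):

```
import numpy as np

def p_risk_neutral(h, lam, mu, sigma2):
    # Classical waterfilling (14).
    return np.maximum(lam / mu - sigma2 / h, 0.0)

def p_risk_aware(h, lam, mu, alpha, t, sigma2):
    # Tail waterfilling (13).
    upper_active = lam / (mu * alpha * np.exp(t)) - sigma2 / h < 0
    upper = np.maximum(lam / (mu * alpha) - sigma2 / h, 0.0)
    lower = sigma2 * (np.exp(t) - 1.0) / h
    return np.where(upper_active, upper, lower)

h = np.linspace(0.05, 5.0, 400)               # range of channel gains
mu, alpha, lam, sigma2 = 0.07, 0.53, 0.33, 1.0
for t in (2.9, 2.15, 2.45):                   # targets t_1, t_2, t_3 of Fig. 3
    ra = p_risk_aware(h, lam, mu, alpha, t, sigma2)
    rn = p_risk_neutral(h, lam, mu, sigma2)
    i = np.argmin(np.abs(h - 1.0))            # a moderately faded channel
    print(f"t = {t:4.2f}: RA power {ra[i]:.2f} vs RN power {rn[i]:.2f} at h = 1")
```

Consistently with the discussion above, the risk-aware policy allocates substantially more power to faded channels than classical waterfilling.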
### _CV@R Target / Risk-Ergodic Rate Updates-Primal Ascent_
The remaining subproblems of (11) relative to primal variables \((\mathbf{x},\mathbf{t})\) can be solved separately. Regarding maximization over \(\mathbf{t}\), and with \(p_{i}^{*}=p_{i}^{*}(h_{i},t_{i},\cdot)\), we end up with the problem
\[\sup_{t_{i}}\ \mathbb{E}\Bigg{[}\lambda_{i}t_{i}-\mu p_{i}^{*}-\frac{ \lambda_{i}}{\alpha_{i}}\left(t_{i}-\log\left(1+\frac{h_{i}p_{i}^{*}}{\sigma_ {i}^{2}}\right)\right)_{+}\Bigg{]}, \tag{15}\]
for each \(i\), where we explicitly denote the dependence of \(p_{i}^{*}\) on \(t_{i}\). It can be easily seen that the function inside the expectation of (15) is jointly concave relative to \((t_{i},p_{i})\). Therefore, it is also concave in \(t_{i}\) under partial maximization over \(p_{i}\geq 0\).
By [12, Theorem 7.52], it then follows that every subgradient of the latter is a stochastic subgradient of the objective of (15). Due to initial joint concavity in \((t_{i},p_{i})\), it can be shown that such a stochastic subgradient may be selected as
\[g_{i}(h_{i},t_{i},\cdot)=\lambda_{i}-\frac{\lambda_{i}}{\alpha_{i}}H\left[t_ {i}-\log\left(1+\frac{h_{i}p_{i}^{*}(h_{i},t_{i},\cdot)}{\sigma_{i}^{2}} \right)\right], \tag{16}\]
where \(H(\cdot)\) is the step (Heaviside) multifunction. Then, provided an iteration index \(n\in\mathbb{N}\) and processes \(\{h_{i}^{n}\}\) and \(\{t_{i}^{n}\}\) (and implicitly meant \(\{\lambda_{i}^{n}\}\) and \(\{\mu^{n}\}\); see below), we may formulate a stochastic subgradient ascent scheme for \(t_{i}\) as
\[t_{i}^{n}=t_{i}^{n-1}+\varepsilon_{t}g_{i}(h_{i}^{n},t_{i}^{n-1},\cdot),\quad n \geq 1, \tag{17}\]
with stepsize \(\varepsilon_{t}>0\), and starting from some initial value \(t_{i}^{0}\).
Maximization over \(\mathbf{x}\), on the other hand, depends on the dual variables \(\Lambda\) and the concave utility \(f_{0}\). Hereafter, we assume that an optimal solution as a function of \(\Lambda\) (or \(\Lambda^{n}\), \(n\geq 0\))
\[\mathbf{x}^{*}(\Lambda)\in\arg\max_{\mathbf{x}\in\mathcal{X}}\ f_{0}(\mathbf{x})-\Lambda^ {T}\mathbf{x} \tag{18}\]
exists, and \(f_{0}\) is such that \(\mathbf{x}^{*}(\Lambda^{n})\) is available (e.g., in closed form). Standard derivations and variable eliminations for popular utility functions, namely the sumrate and proportional fairness, are provided later on for completeness.
### _Dual Variable Updates_
Lastly, we can formulate stochastic (quasi-)subgradient descent updates for dual variables \((\Lambda,\mu)\). This is done along the lines of [3], by exploiting the corresponding constraint gaps. Note that the dual function \(D\) is convex and separable in \((\Lambda,\mu)\). For the power constraint multiplier \(\mu\), we have
\[\mu^{n}=\left[\mu^{n-1}-\varepsilon_{\mu}\left(P_{0}-\sum_{i=1}^{n}p_{i}^{*}(h _{i}^{n},\mu^{n-1},\cdot)\right)\right]_{+}, \tag{19}\]
with stepsize \(\varepsilon_{\mu}>0\), starting from \(\mu^{0}\). Similarly, for the rate constraint vector of multipliers \(\Lambda\), we get, for each \(i\),
\[\lambda_{i}^{n}= \Bigg{[}\lambda_{i}^{n-1}-\varepsilon_{\Lambda}\bigg{(}-x_{i}^{* }(\Lambda^{n-1})+t_{i}^{n-1} \tag{20}\] \[-\frac{1}{\alpha_{i}}\left(t_{i}^{n-1}-\log\left(1+\frac{h_{i}^{n }p_{i}^{*}(h_{i}^{n},\lambda_{i}^{n-1},\cdot)}{\sigma_{i}^{2}}\right)\right)_ {+}\bigg{)}\Bigg{]}_{+}\,,\]
with stepsize \(\varepsilon_{\Lambda}>0\), and starting from \(\lambda_{i}^{0}\).
The complete description of the proposed primal-dual algorithm, which we suggestively call _tail waterfilling_, is presented in Algorithm 1.
```
Choose initial values \(\mathbf{t}^{0},\mathbf{p}^{0},\mathbf{x}^{0},\mu^{0},\Lambda^{0}\).
for \(n=1\) to Process End do
    Observe \(\mathbf{h}^{n}\).
    # Primal Variables
    \(\to\) Set \(p_{i}^{*}(\cdot)\) using (13), for all \(i\).
    \(\to\) Update \(t_{i}^{n}\) using (17) and (16), for all \(i\).
    \(\to\) Obtain \(\mathbf{x}^{*}(\Lambda^{n-1})\) from (18).
    # Dual Variables
    \(\to\) Update \(\mu^{n}\) using (19).
    \(\to\) Update \(\lambda_{i}^{n}\) using (20), for all \(i\).
end for
```
**Algorithm 1** Tail Waterfilling
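To complement the pseudocode, here is a minimal Monte Carlo sketch of Algorithm 1 in Python for the sumrate utility, where \(\Lambda=\mathbf{w}\) is fixed and \(\mathbf{x}\) is eliminated (see the subsection on common utilities below, so the dual update (20) is not needed); the Rayleigh fading law, noise variances, power budget and iteration count are illustrative assumptions, not values from the paper:

```
import numpy as np

def tail_waterfilling_sumrate(w, alpha, sigma2, P0, steps=50_000,
                              eps_t=1e-3, eps_mu=1e-4, seed=0):
    # Sumrate utility f_0(x) = w^T x: Lambda = w is optimal, so only the
    # CV@R targets t (primal) and the power multiplier mu (dual) are learned.
    rng = np.random.default_rng(seed)
    t, mu = np.zeros(len(w)), 0.1
    for _ in range(steps):
        h = rng.exponential(1.0, size=len(w))     # |Rayleigh|^2 channel gains
        # Optimal policy (13), with Lambda = w.
        upper_active = w / (mu * alpha * np.exp(t)) - sigma2 / h < 0
        p = np.where(upper_active,
                     np.maximum(w / (mu * alpha) - sigma2 / h, 0.0),
                     sigma2 * (np.exp(t) - 1.0) / h)
        r = np.log1p(h * p / sigma2)              # instantaneous rates (1)
        # Stochastic ascent (17) with subgradient (16); (t >= r) is Heaviside.
        t += eps_t * (w - (w / alpha) * (t >= r))
        # Dual descent (19); the small floor avoids division by zero above.
        mu = max(mu - eps_mu * (P0 - p.sum()), 1e-8)
    return t, mu

t, mu = tail_waterfilling_sumrate(w=np.full(3, 1/3), alpha=np.full(3, 0.53),
                                  sigma2=np.array([0.5, 1.0, 1.5]), P0=3.0)
print("learned CV@R targets t:", np.round(t, 3), " mu:", round(mu, 4))
```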
### _Common Utilities_
_Sumrate:_ If \(f_{0}(\mathbf{x})=\mathbf{w}^{T}\mathbf{x}\), \(\mathbf{w}\succeq\mathbf{0}\), \(\mathbf{x}\in\mathbb{R}^{n}\), the subproblem relative to \(\mathbf{x}\) becomes \(\sup_{\mathbf{x}}\ (\mathbf{w}-\Lambda)^{T}\mathbf{x}\), which is unbounded for every \(\Lambda\) except for the optimal dual choice \(\Lambda=\mathbf{w}\). Of course, this step eliminates both variables \(\mathbf{x}\) and \(\Lambda\).
_Proportional Fairness:_ In this case, we can choose \(f_{0}(\mathbf{x})=\sum_{i=1}^{n}\log(x_{i})\), \(\mathbf{x}\in\mathbb{R}^{n}\), and the subproblem in \(\mathbf{x}\) becomes
\[\sup_{\mathbf{x}}\ \sum_{i=1}^{n}\log(x_{i})-\Lambda^{T}\mathbf{x}=\sup_{\mathbf{x}}\ \sum_{i=1}^{n}\log(x_{i})-\lambda_{i}x_{i}, \tag{21}\]
which gives \(x_{i}^{*}=1/\lambda_{i}\) for each \(i\).
## V Performance Evaluation
Let us now verify and discuss the efficacy of the proposed tail waterfilling algorithm, summarized in Algorithm 1. We consider a basic \(3\)-terminal network consisting of Rayleigh fading point-to-point links with different noise variances. We then apply the tail waterfilling scheme for two utilities, namely, sumrate and proportional fairness. For the sumrate utility, the weights \(\mathbf{w}\) are selected to average the individual risk-ergodic services \(x_{i}\) per terminal, i.e., \(w_{i}=1/3,\forall i\). For both utilities, the stepsizes are set as \(\varepsilon_{t}=10^{-3}\) and \(\varepsilon_{\mu}=10^{-4}\). For proportional fairness, we additionally set \(\varepsilon_{\Lambda}=10^{-4}\).
As shown in the histograms of Figs. 4 (top) and 6 (top), decreasing the (common) CV@R level \(\alpha\) restricts the achievable rates in program (6), while at the same time constraining their volatility. This induces system robustness, since the system sustains a consistent and reliable level of performance, incurring infrequent rate drops in the long-term. Especially for proportional fairness, we observe that, _simultaneously_ with being robust, the rates are more fairly distributed among users with different noise levels.
Equivalent remarks are in order regarding Figs. 4 (bottom) and 6 (bottom), which show the user outage probabilities, i.e., the cumulative distribution function \(P_{\mathrm{out}}(r_{o})=P\{r\leq r_{o}\}\), as another intuitive measure to evaluate system robustness. In fact, we observe that for lower values of \(\alpha\), the system exhibits _lower and sharper_ probabilities of outage -optimally tuned in the CV@R sense- at _always attainable_ channel rates. On the other hand, less risk-aware settings corresponding to higher values for \(\alpha\) result in much higher variability in the corresponding optimal channel rates.
The latter observations are also evident from Figs. 5 and 7, which highlight the vast difference in rate variability between
the risk-aware and risk-neutral policies, through their evolution in time (channel use). We see that the optimal CV@R policy exhibits quasi-invariant communication rate trends, keeping the rates at certain reliability levels. Further, the proportional fairness utility achieves more evenly distributed rates among users, as shown in Fig. 7 (top). In other words, the combination of risk-awareness and proportional fairness achieves system performance that is _both_ user-fair, _and_ aware of fading risk.
## VI Conclusion
We proposed a new risk-aware reformulation of the classical resource allocation problem for point-to-point networks. Utilizing the CV@R as a measure of risk generalizing expectations, we developed the tail waterfilling algorithm, which extends classical stochastic waterfilling in an interpretable and tunable way, and induces network robustness and reliability rigorously and tractably. The effectiveness of tail waterfilling was corroborated via detailed numerical simulations.
|
2310.02934 | Anomalous dissipation and Euler flows | We show anomalous dissipation of scalars advected by weak solutions to the
incompressible Euler equations with $C^{(\sfrac{1}{3})^-}$ regularity, for an
arbitrary initial datum in $\dot H^1 (\T^3)$. This is the first rigorous
derivation of zeroth law of scalar turbulence, where the scalar is advected by
solution to an equation of hydrodynamics (unforced and deterministic). As a
byproduct of our method, we provide a typicality statement for the drift, and
recover certain desired properties of turbulence, including a lower bound on
scalar variance commensurate with the Richardson pair dispersion hypothesis. | Jan Burczak, László Székelyhidi Jr., Bian Wu | 2023-10-04T16:11:11Z | http://arxiv.org/abs/2310.02934v2 | # Anomalous dissipation and Euler flows
###### Abstract.
We show anomalous dissipation of scalar fields advected by a (typical) weak solution to the Euler equations with \(C^{\sfrac{1}{3}-}\) regularity in the 3D periodic setting.
LSz gratefully acknowledges the support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through GZ SZ 325/2-1.
### Context
#### 1.1.1. Turbulence
In the 1949 paper [10] titled 'Structure of the Temperature Field in Turbulent Flow' A.N. Obukhov writes
_In other words, the turbulent motion inside a thermally heterogeneous medium with gradients which are initially weak can contribute to the local gradients of temperature, which are subsequently smoothed out by the action of molecular heat conductivity._
This statement is precisely a prediction that the small ('molecular') diffusion \(\kappa\Delta\) present in (1.1) can be amplified by the advection \(u\cdot\nabla\) by a turbulent velocity field \(u\). The responsible mechanism should be the transport of modes towards higher frequencies (smaller scales), where eventually diffusivity at the molecular length-scale steps in. This prediction has been further corroborated by phenomenological, experimental and numerical arguments; see the classical [11], [12], and the more recent [13], [14], [15]. Furthermore, the expected amplification of diffusivity is supposed to keep the quantity (1.3) bounded away from zero, a phenomenon referred to as 'anomalous dissipation'. Thus, it is strongly believed that anomalous dissipation is a key feature of turbulence.
On the other hand, it is argued that turbulent flows are irregular. More precisely, in his famous 1949 note on statistical hydrodynamics [16], L. Onsager conjectured that the threshold regularity for the validity of energy conservation in the class of Holder continuous weak solutions of the Euler equations is the exponent \(1/3\). Onsager's interest in this issue came from an effort to explain the primary mechanism of energy dissipation in turbulence, one that prevails even in the absence of viscosity. Thus, the implicit suggestion was that Holder continuous weak solutions of the Euler equations may be an appropriate mathematical description of turbulent flows in the inviscid limit.
Putting these two pieces together, considering non-smooth velocity fields advecting a scalar is of primary interest for turbulence.
There are several analytical models of scalar turbulence, including the 1968 Kraichnan model [17] (deemed nowadays rather incompatible; compare the relevant discussion and references of [1]) and the more recent alternating shear flows, discussed in more detail below.
#### 1.1.2. Anomalous dissipation in PDEs
For any fixed initial datum, the energy solutions of (1.1) converge (along subsequences) to a distributional solution of the transport equation \(\partial_{t}\rho+u\cdot\nabla\rho=0\). These are given by a uniquely defined and measure preserving flow map, provided the drift is regular enough (classically \(u\in L^{1}(W^{1,\infty})\), but also \(u\in L^{1}(W^{1,1})\) suffices [10]) and divergence-free. In such a case, anomalous dissipation is excluded, since measure preservation implies in particular conservation of the \(L^{2}\) norm.
On the other hand, below the regularity threshold required for a well-posedness theory of the transport equation, there are examples of divergence-free velocity fields that give rise to anomalous dissipation. We mention the recent [10], where \(u\in C^{\infty}([0,T];C^{\alpha}(\mathbb{T}^{2}))\), \(\alpha<1\) (in fact it is merely logarithmically below Lipschitz regularity), and the earlier [10], where \(u\in L^{1}(C^{\alpha}),\alpha<1\). The constructions of \(u\) in the aforementioned articles are essentially based on (i) shear flows alternating in time in a quasi self-similar fashion, which by 'stirring' push towards smaller scales, where molecular diffusion takes over (inspired by [14, 15, 16]), and on (ii) comparison with the inviscid case [11].
It is worth mentioning the recent papers [1], [17], partially drawing from [18, 19], that manage to embed the above-mentioned passive scalar methodology into nonlinear fluid dynamics (Navier-Stokes equations), at the cost of introducing a forcing term.
A different approach based on _iterated quantitative homogenization_ has been recently proposed by Armstrong-Vicol in [1]. Although both iterated quantitative homogenization (see e.g. [20]) and homogenization in the present context of passive tracers in turbulent flows [21] have been studied previously, in [1] this idea was successfully combined with an infinite iterative scheme, and has substantially inspired our work. The methodology of [1] has several advantages over the previous ones: First of all, it abandons the need of the aforementioned comparison with the inviscid case, and consequently becomes compatible with the existence of an inertial range in turbulence. Secondly, in works based on alternating shear flows focussing towards a singular time, the anomalous dissipation actually occurs at only one instant of time. This seems a major drawback in the context of turbulent flows in light of the statistical time-invariance. Thirdly, as in classical homogenization, the method allows for enhanced and anomalous dissipation of arbitrary initial tracer fields, in contrast with previous constructions based on convex integration [22, 23, 1, 16], where the tracer is constructed in parallel to the velocity field scale-by-scale.
The main step forward that we make compared with [1] is that our velocity field solves the incompressible Euler equations, thus providing the first deterministic link on the PDE level between the Obukhov-Corrsin theory and fluid mechanics. Indeed, to our knowledge this is the first result on anomalous dissipation where the advecting vectorfield is itself the solution of a PDE. In general, we view iterated quantitative homogenization in parallel with convex integration, with the basic principle in mind that whilst convex integration is a form of "inverse renormalization" strategy, iterated quantitative homogenization can be seen as "forward renormalization".
As an interesting byproduct of our approach, which intertwines convex integration techniques with iterated homogenization, we construct not just a single velocity field exhibiting anomalous dissipation, but a \(C^{0}\)-dense set of such fields. Moreover, in our approach we abandon the shear-flow setting, giving us more flexibility in constructing such vectorfields.
#### 1.1.3. Related topics: enhanced dissipation, mixing, and beyond
Within or above Lipschitz regularity for \(u\), a divergence-free drift can of course also assist dissipation (while it cannot counteract it), but in a weaker sense than anomalous dissipation (roughly, \(T\) must grow at a rate commensurate with regularity), which is referred to as enhanced dissipation.
The enhanced dissipation by a single shear flow, or a related simple 'lower-dimensional' flow, is well studied, with roots in the computations by Kelvin and Kolmogorov [15]; compare [1], [2], [10], [11] and their references. In such cases, one needs to restrict the initial datum to a class not orthogonal to the shearing direction, but the setting usually allows one to extract precise decay rates. For a more general related approach see [26]. The general condition in the Lipschitz case has been provided in the seminal [12] (albeit without a clear rate; for an optimal example see [1]), whose results have recently been slightly strengthened and reproved by a different method in [24].
Let us finally point out, without getting into details, natural connections between enhanced dissipation and mixing (which is the phenomenon of inviscid-case migration towards smaller scales caused by a vector field), see [11] and the recent survey [11] with its references. There are also interesting developments in the stochastic setting, see [10], [12] and [21], in the kinetic setting (Landau damping), cf. [23], and in suppression of chemotactic blowup, including the recent beautiful [13]. In this context we mention the recent work [25], where optimal quantitative enhanced dissipation estimates are obtained for a random drift given by the Gaussian free field; the strategy there, computing effective diffusivities scale-by-scale in a renormalization fashion, is very much reminiscent of our point of view.
### Notation
We use \(\mathbb{R}^{3}:=\mathbb{R}^{3\times 1}\). For two vectors \(v_{1},v_{2}\in\mathbb{R}^{3}\), we use \(v_{1}^{T}v_{2}\) or \(v_{1}\cdot v_{2}\) to denote their inner product. For an invertible matrix \(A\in\mathbb{R}^{3\times 3}\), we use \(A^{T}\) to denote its transpose, \(A^{-1}\) to denote its inverse and \(A^{-T}\) to denote the inverse of its transpose. For a vector-valued function \(\chi=(\chi_{1},\chi_{2},\chi_{3})^{T}:\mathbb{R}^{3}\to\mathbb{R}^{3}\), we use \(\nabla\chi\) to denote \((\nabla\chi_{1},\nabla\chi_{2},\nabla\chi_{3})^{T}\). We use \(\boldsymbol{\alpha}\) to denote a multi-index of partial derivatives and \(|\boldsymbol{\alpha}|\) to denote its total order of differentiation.
## 2. Outline of proof of Theorem 1.1
In this section we provide an overview of the key steps in our proof, and postpone technical details to subsequent sections. As mentioned in the introduction, our strategy involves a "backward renormalization" in constructing the velocity field in Section 2.1 (in the sense of an iteration from large-scale to small-scale, \(u_{q}\mapsto u_{q+1}\mapsto\dots\)), and a "forward renormalization" in constructing the passive tracer in Section 2.2 (in the sense of an iteration from small-scale to large-scale, \(\rho_{q+1}\mapsto\rho_{q}\mapsto\dots\)).
We have tried to present the proofs in later sections in a self-contained manner, because we believe these may be of independent interest.
### Construction of the vectorfield \(u\) - Convex integration
The general scheme for producing Holder continuous weak solutions of the Euler equations (1.2) is by now well understood. One proceeds via an inductive process on a sequence of approximate solutions \(u_{q}\) and associated Reynolds defect \(\mathring{R}_{q}\) and pressure \(p_{q}\), \(q=0,1,2,\dots\), which satisfy the _Euler-Reynolds system_
\[\begin{split}\partial_{t}u_{q}+\operatorname{div}(u_{q}\otimes u _{q})+\nabla p_{q}&=\operatorname{div}\mathring{R}_{q}\,,\\ \operatorname{div}u_{q}&=0\,,\end{split} \tag{2.1}\]
with constraints
\[\operatorname{tr}\mathring{R}_{q}(x,t)=0,\quad\int_{\mathbb{T}^{3}}u_{q}(x,t) \,dx=0,\quad\int_{\mathbb{T}^{3}}p_{q}(x,t)\,dx=0. \tag{2.2}\]
We note in passing that these normalizations determine the pressure \(p_{q}\) uniquely from \((u_{q},\mathring{R}_{q})\) so that one may speak of the pair \((u_{q},\mathring{R}_{q})\) being a solution of (2.1).
#### 2.1.1. Inductive assumptions
The induction process involves a set of _inductive estimates_. These estimates are in terms of a frequency parameter \(\lambda_{q}\) and amplitude \(\delta_{q}\), which are given by
\[\lambda_{q}:=2\pi\lceil a^{(b^{q})}\rceil,\quad\delta_{q}:=\lambda_{q}^{-2\beta} \tag{2.3}\]
where \(\lceil x\rceil\) denotes the smallest integer \(n\geq x\), \(a\gg 1\) is a large parameter, \(b>1\) is close to \(1\) and \(0<\beta<\sfrac{1}{3}\) is the exponent of Theorem 1.1. The parameters \(a\) and \(b\) are then related to \(\beta\). With these parameters the inductive estimates take the form1
Footnote 1: In [10] an additional estimate is added for \(\|u_{q}\|_{C^{0}}\), but it turns out this can be avoided. The only place where a bound on \(\|u_{q}\|_{C^{0}}\) was needed in [10] is in [10, Proposition 5.9], but as we show below, the (much worse) bound induced by (2.5) for \(n=1\) suffices to control the relevant terms - see Proposition 6.3 and Lemma 6.15.
\[\left\|\mathring{R}_{q}\right\|_{C^{0}} \leq\delta_{q+1}\lambda_{q}^{-\gamma_{R}}, \tag{2.4}\] \[\left\|u_{q}\right\|_{C^{n}} \leq M\delta_{q}^{\sfrac{1}{2}}\lambda_{q}^{n}\quad\text{ for }n=1,2,\dots,\bar{N},\] (2.5) \[\left|e(t)-\int_{\mathbb{T}^{3}}\lvert u_{q}\rvert^{2}\ dx-\bar{ e}\delta_{q+1}\right| \leq\delta_{q+1}\lambda_{q}^{-\gamma_{E}}, \tag{2.6}\]
where \(\gamma_{R},\gamma_{E}>0\) are small parameters, \(\bar{N}\in\mathbb{N}\) is a large parameter to be chosen suitably (depending on \(\beta>0\) and \(b>1\)), and \(\bar{e}>0\), \(M\geq 1\) are universal constants, which will be fixed throughout the iteration and depend on the particular geometric form of the perturbing building blocks; they will be specified in Definitions 6.10 and 6.14. We remark that in [10] only the case \(\bar{N}=1\) is required in (2.5) and (2.6) is slightly weaker. Moreover, in [10] a generic small parameter
\(\alpha>0\) is used in place of \(\gamma_{R},\gamma_{E}\), but for our purposes we need to choose these small corrections more carefully.
#### 2.1.2. Parameter choices
The inductive construction for passing from \(u_{q}\) to \(u_{q+1}\) involves three steps: _mollification_, _gluing_ and _perturbation_. In these steps two more scales are introduced, an adjusted length-scale \(\ell_{q}\) and an adjusted time-scale \(\tau_{q}\). In our case these will be defined as
\[\ell_{q}:=\lambda_{q}^{-1-\gamma_{L}},\quad\tau_{q}:=\lambda_{q}^{-1+\beta- \gamma_{T}}, \tag{2.7}\]
where \(\gamma_{L},\gamma_{T}\) are additional small parameters to be chosen suitably in dependence of \(\beta,b\). It is worth pointing out that, if one were to set \(\gamma_{L}=\gamma_{T}=0\), then \(\ell_{q},\tau_{q}\) would be the natural (dimensionally consistent) length- and time-scales induced by the velocity field \(u_{q}\) (c.f. (2.3) and (2.5)). Indeed, we can think of \(\delta_{q}^{\sfrac{1}{2}}\) having physical dimension of velocity, i.e. \(LT^{-1}\), and \(\lambda_{q}^{-1}\) having physical dimension of length \(L\).
Moreover, setting \(\gamma_{R}=0\) would be the consistent estimate in light of the basic principle in convex integration, that the error \(R_{q}\) is cancelled by the new average stress \(\langle(u_{q+1}-u_{q})\otimes(u_{q+1}-u_{q})\rangle\).
In addition to these small parameters we will also use \(\alpha>0\), as is also done in [1], to take care of the lack of Schauder estimates in \(C^{0},C^{1},\dots\) spaces. Thus, in summary, we will use the following set of additional parameters, which will all be chosen depending on \(\beta,b\):
* \(\gamma_{R}\in(0,1)\): smallness of \(\|\mathring{R}_{q}\|_{C^{0}}\) with respect to \(\|R_{q}\|_{C^{0}}\) ;
* \(\gamma_{L}\in(0,1)\): smallness of \(\ell_{q}\) with respect to \(\lambda_{q}^{-1}\);
* \(\gamma_{T}\in(0,1)\): smallness of \(\tau_{q}\) with respect to \((\delta_{q}^{\sfrac{1}{2}}\lambda_{q})^{-1}\);
* \(\gamma_{E}\in(0,1)\): smallness of energy gap with respect to \(\|R_{q}\|_{C^{0}}\);
* \(\alpha\in(0,1)\): Schauder exponent;
* \(\bar{N}\in\mathbb{N}\): number of derivatives in the induction.
With a suitable choice of these parameters we have the following analogue of [1, Proposition 2.1]:
**Proposition 2.1**.: _There exist universal constants \(M\geq 1\), \(\bar{e}>0\) with the following property. Assume \(0<\beta<\frac{1}{3}\) and_
\[1<b<\frac{1-\beta}{2\beta}\,. \tag{2.8}\]
_Further, assume that \(\gamma_{T},\gamma_{R},\gamma_{E}>0\) satisfy_
\[\max\{\gamma_{T}+b\gamma_{R},\gamma_{E}\}<(b-1)\big{(}1-(2b+1)\beta\big{)} \tag{2.9}\]
_and let \(e:[0,T]\to\mathbb{R}\) be a strictly positive smooth function. Then there exists \(\gamma_{L}>0\), \(\bar{N}\in\mathbb{N}\) and \(\alpha_{0}>0\) depending on \(\beta,b,\gamma_{T},\gamma_{R},\gamma_{E}\) such that, for any \(0<\alpha<\alpha_{0}\), there exists \(a_{0}\gg 1\) (in addition depending on \(e\)), such that for any \(a\geq a_{0}\) the following holds:_
_Let \((u_{q},\mathring{R}_{q})\) be a smooth solution of (2.1) satisfying the estimates (2.4)-(2.6). Then there exists another solution \((u_{q+1},\mathring{R}_{q+1})\) to (2.1) satisfying (2.4)-(2.6) with \(q\) replaced by \(q+1\), and we have_
\[\|u_{q+1}-u_{q}\|_{C^{0}}+\frac{1}{\lambda_{q+1}}\left\|u_{q+1}-u_{q}\right\| _{C^{1}}\leq M\delta_{q+1}^{\sfrac{1}{2}}. \tag{2.10}\]
The new velocity field \(u_{q+1}\) is obtained as
\[u_{q+1}:=\bar{u}_{q}+w_{q+1}, \tag{2.11}\]
where \(\bar{u}_{q}\) is constructed from \(u_{q}\) via Isett's gluing procedure [10], and \(w_{q+1}\) is the new perturbation consisting of a deformed family of Mikado flows. In Sections 2.1.3-2.1.4 below we now sketch the proof of Proposition 2.1. The detailed proof, which is very much based on the proof in [1], is deferred to Section 6.
#### 2.1.3. Gluing procedure
The gluing procedure amounts to the following statement:
**Proposition 2.2**.: _Within the setting of Proposition 2.1 we have the following statement. Let \((u_{q},\mathring{R}_{q})\) be a smooth solution of (2.1) satisfying the estimates (2.4)-(2.6). Then there exists another solution \((\bar{u}_{q},\mathring{\bar{R}}_{q})\) to (2.1) such that_
\[\operatorname{supp}\mathring{\bar{R}}_{q}\subset\mathbb{T}^{3}\times\bigcup_ {i\in\mathbb{N}}(i\tau_{q}+\tfrac{1}{3}\tau_{q},i\tau_{q}+\tfrac{2}{3}\tau_{q}) \tag{2.12}\]
_and the following estimates hold for any \(N\geq 0\):_
\[\|\bar{u}_{q}\|_{C^{N+1}} \lesssim\delta_{q}^{1/2}\lambda_{q}\ell_{q}^{-N}\,, \tag{2.13}\] \[\|\mathring{\bar{R}}_{q}\|_{C^{N+\alpha}} \lesssim\mathring{\delta}_{q+1}\ell_{q}^{-N-2\alpha}\,,\] (2.14) \[\|(\partial_{t}+\bar{u}_{q}\cdot\nabla)\mathring{\bar{R}}_{q}\|_ {C^{N+\alpha}} \lesssim\tau_{q}^{-1}\mathring{\delta}_{q+1}\ell_{q}^{-N-2\alpha }\,,\] (2.15) \[\left|\int_{\mathbb{T}^{3}}|\bar{u}_{q}|^{2}-|u_{q}|^{2}\,dx\right| \lesssim\mathring{\delta}_{q+1}\,. \tag{2.16}\]
_Moreover, the vector potentials of \(u_{q}\) and \(\bar{u}_{q}\) satisfy_
\[\|z_{q}-\bar{z}_{q}\|_{C^{\alpha}}\lesssim\tau_{q}\mathring{\delta}_{q+1}\ell_{q}^{-\alpha}\,. \tag{2.17}\]
Here and in the sequel, the vector potential \(z\) of a divergence-free velocity field \(u\) is given by the Biot-Savart operator on \(\mathbb{T}^{3}\), defined as \(\mathcal{B}=(-\Delta)^{-1}\operatorname{curl}\), so that \(z=\mathcal{B}u\) is the unique solution of
\[\operatorname{div}z=0\qquad\text{ and }\qquad\operatorname{curl}z=u\,, \tag{2.18}\]
recalling that we assume zero spatial average \(\int_{\mathbb{T}^{3}}u(x,t)dx\).
Proposition 2.2 is restated below as Corollary 6.7 and is a direct consequence of Corollary 6.4 and Proposition 6.6.
#### 2.1.4. The perturbation
The formula for the new perturbation \(w_{q+1}\) uses Mikado flows, which we recall here briefly. Mikado flows were introduced originally in [14] and widely used since in applications of convex integration to fluid dynamics.
We fix a finite set \(\Lambda\subset\mathbb{R}^{3}\) consisting of nonzero vectors \(\vec{k}\in\mathbb{R}^{3}\) with rational coordinates, and for each \(\vec{k}\in\Lambda\) let \(\varphi_{\vec{k}}\) be a periodic function with the properties
* \(\vec{k}\cdot\nabla\varphi_{\vec{k}}=0\),
* For any \(\vec{k}\neq\vec{k}^{\prime}\in\Lambda\) we have \(\operatorname{supp}\varphi_{\vec{k}}\cap\operatorname{supp}\varphi_{\vec{k}^{ \prime}}=\emptyset\),
* \(\int_{\mathbb{T}^{3}}\varphi_{\vec{k}}(\xi)\,d\xi=0\) and \(\int_{\mathbb{T}^{3}}|\Delta\varphi_{\vec{k}}(\xi)|^{2}\,d\xi=\int_{\mathbb{T }^{3}}|\nabla\varphi_{\vec{k}}(\xi)|^{2}\,d\xi=1\).
Next, let \(\psi_{\vec{k}}=\Delta\varphi_{\vec{k}}\) and define
\[W_{\vec{k}}(\xi)=\psi_{\vec{k}}(\xi)\vec{k},\quad U_{\vec{k}}(\xi)=\vec{k}\times \nabla\varphi_{\vec{k}}, \tag{2.19}\]
so that2\(\operatorname{curl}U_{\vec{k}}=W_{\vec{k}}\) and, for any \(a_{\vec{k}}\), \(\vec{k}\in\Lambda\), the vectorfield \(W=\sum_{\vec{k}\in\Lambda}a_{\vec{k}}W_{\vec{k}}\) satisfies
Footnote 2: Here we use the vector calculus identity \(\operatorname{curl}(F\times G)=F\operatorname{div}G-G\operatorname{div}F+(G \cdot\nabla)F-(F\cdot\nabla)G\).
\[\fint W\,d\xi=0,\quad\fint W\otimes W\,d\xi=\sum_{\vec{k}\in\Lambda}a_{\vec{k} }^{2}\vec{k}\otimes\vec{k}.\]
Further, we define \(H_{\vec{k}}=H_{\vec{k}}(\xi)\) to be the antisymmetric zero-mean matrix with the property that, for any \(v\in\mathbb{R}^{3}\) we have \(H_{\vec{k}}v=U_{\vec{k}}\times v\).
The following lemma (originating in the work of Nash [12]) is crucial:
**Lemma 2.3**.: _For any compact set \(\mathcal{N}\subset\mathcal{S}_{+}^{3\times 3}\) there exists a finite \(\Lambda\subset\mathbb{Q}^{3}\) such that there exists smooth functions \(a_{\vec{k}}:\mathcal{N}\to\mathbb{R}_{+}\) with_
\[\sum_{\vec{k}\in\Lambda}a_{\vec{k}}^{2}(R)\vec{k}\otimes\vec{k}=R\quad\text{ for any }R\in\mathcal{N}. \tag{2.20}\]
The corresponding vectorfield \(W=W(R,\xi)=\sum_{\vec{k}\in\Lambda}a_{\vec{k}}(R)W_{\vec{k}}(\xi)\) is called a Mikado flow. In the following we will fix \(\mathcal{N}:=B_{1/2}(\operatorname{Id})\), the metric ball of radius \(1/2\) around the identity matrix in \(\mathcal{S}_{+}^{3\times 3}\) and denote the corresponding Mikado vectorfield by \(W=W(R,\xi)\).
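As a concrete illustration of a single building block (a hedged sketch; the actual construction must additionally arrange pairwise disjoint supports across \(\Lambda\)): for \(\vec{k}=e_{3}\) one may take \(\varphi_{e_{3}}(\xi)=c\,\phi(\xi_{1},\xi_{2})\), where \(\phi\) is a zero-mean periodic bump supported in a small disc and the constant \(c\) together with the concentration of \(\phi\) is adjusted to meet the normalization above. Then

\[e_{3}\cdot\nabla\varphi_{e_{3}}=0,\qquad W_{e_{3}}=c\,\Delta\phi\,e_{3},\qquad U_{e_{3}}=e_{3}\times\nabla\varphi_{e_{3}}=c\,(-\partial_{2}\phi,\,\partial_{1}\phi,\,0)^{T},\]

i.e. a straight 'pipe flow' along \(e_{3}\) supported in a thin periodic tube; general rational directions \(\vec{k}\in\Lambda\) arise by rotating and periodizing this picture.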
Following [1] the new perturbation is then defined as
\[w_{q+1}=\frac{1}{\lambda_{q+1}}\operatorname{curl}\left[\sum_{i\in\mathbb{N}} \sum_{\vec{k}\in\Lambda}\eta_{i}\sigma_{q}^{\sfrac{1}{2}}a_{\vec{k}}(\tilde{R} _{q,i})\nabla\Phi_{i}^{T}U_{\vec{k}}(\lambda_{q+1}\Phi_{i})\right]\,, \tag{2.21}\]
where
* \(\eta_{i}=\eta_{i}(x,t)\), \(i\in\mathbb{N}\), are smooth nonnegative cutoff functions with pairwise disjoint supports such that \[\|\partial_{t}^{m}\nabla_{x}^{n}\eta_{i}\|_{L^{\infty}}\leq C_{n,m}\tau_{q}^{ -m},\quad\sum_{i}\eta_{i}^{2}(x,t)=\bar{\eta}^{2}(x,\tau_{q}^{-1}t)\,,\] (2.22) where \(\bar{\eta}=\bar{\eta}(x,t)\) is a (universal) function, \(1\)-periodic in \(t\), such that \[\fint_{\mathbb{T}^{3}}\bar{\eta}^{2}(x,t)\,dx=c_{0},\quad\fint_{0}^{1}\bar{ \eta}^{2}(x,t)\,dt=c_{1}\] for some universal constants \(c_{0},c_{1}>0\).
* \(\sigma_{q}=\sigma_{q}(t)\) is a positive scalar function with the property \[|\sigma_{q}(t)-c_{1}^{-1}\delta_{q+1}|\leq C\delta_{q+1}\lambda_{q}^{-\min\{ \gamma_{E},\gamma_{R},(b-1)\beta\}}\leq\tfrac{1}{3c_{1}}\delta_{q+1}\] (2.23)
* The maps \(\Phi_{i}=\Phi_{i}(x,t)\) are the volume-preserving diffeomorphisms defined as the inverse flow map of the velocity field \(\bar{u}_{q}\), which satisfy for \((x,t)\in\operatorname{supp}\eta_{i}\): \[\|\nabla\Phi_{i}(x,t)\|_{C^{n}}+\|(\nabla\Phi_{i})^{-1}(x,t)\|_{C^{n}}\leq C_{ n}\ell_{q}^{-n},\quad\|\nabla\Phi_{i}-\operatorname{Id}\|_{L^{\infty}}\leq C \lambda_{q}^{-\gamma_{T}}\,.\] (2.24)
* \(\tilde{R}_{q,i}=\tilde{R}_{q,i}(x,t)\), \(i\in\mathbb{N}\), satisfies \[\tilde{R}_{q,i}=\nabla\Phi_{i}(\operatorname{Id}-\sigma_{q}^{-1}\mathring{ \tilde{R}}_{q})\nabla\Phi_{i}^{T},\quad\|\tilde{R}_{q,i}\|_{C^{n}}\leq C_{n} \ell_{q}^{-n}\,.\] (2.25)
These properties will be obtained in Section 6.3, where we will conclude the proof of Proposition 2.1.
### Enhanced dissipation via iterative stages
We define an additional \(q\)-dependent parameter which plays a key role in this paper:
\[\kappa_{q}:=\lambda_{q}^{-\theta},\quad\theta=\frac{2b}{b+1}(1+\beta)\,. \tag{2.26}\]
Using (2.3) it is easy to verify the recursive identity
\[\kappa_{q}=\frac{\delta_{q+1}}{\lambda_{q+1}^{2}\kappa_{q+1}}. \tag{2.27}\]
In turn, this identity indicates that we can think of \(\kappa_{q}\) as having the physical dimension of a diffusion coefficient, \(L^{2}T^{-1}\).
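For the reader's convenience, here is a one-line verification of (2.27), using \(\lambda_{q+1}=\lambda_{q}^{b}\) (up to the harmless rounding in (2.3)):

\[\frac{\delta_{q+1}}{\lambda_{q+1}^{2}\kappa_{q+1}}=\lambda_{q+1}^{-2\beta-2+\theta}=\lambda_{q}^{-b(2+2\beta-\theta)}\,,\]

which equals \(\kappa_{q}=\lambda_{q}^{-\theta}\) precisely when \(b(2+2\beta-\theta)=\theta\), i.e. when \(\theta=\frac{2b}{b+1}(1+\beta)\) as in (2.26).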
We consider the sequence of advection-diffusion equations on \(\mathbb{T}^{3}\times[0,T]\):
\[\begin{split}\partial_{t}\rho_{q}+u_{q}\cdot\nabla\rho_{q}& =\kappa_{q}\Delta\rho_{q}\,,\\ \rho_{q}|_{t=0}&=\rho_{in}\,,\end{split} \tag{2.28}\]
where \(u_{q}\) is the sequence of velocity fields obtained in Section 2.1 via Proposition 2.1. We are interested in comparing the cumulative dissipation for subsequent values of \(q\), given by
\[D_{q}:=\kappa_{q}\int_{0}^{T}\|\nabla\rho_{q}\|_{L^{2}}^{2}\,dt=\tfrac{1}{2}( \|\rho_{in}\|_{L^{2}}^{2}-\|\rho_{q}(T)\|_{L^{2}}^{2}). \tag{2.29}\]
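The second identity in (2.29) is the standard energy balance: testing (2.28) with \(\rho_{q}\) and using \(\operatorname{div}u_{q}=0\), so that \(\int_{\mathbb{T}^{3}}(u_{q}\cdot\nabla\rho_{q})\rho_{q}\,dx=\frac{1}{2}\int_{\mathbb{T}^{3}}u_{q}\cdot\nabla\rho_{q}^{2}\,dx=0\), we obtain

\[\frac{d}{dt}\,\frac{1}{2}\|\rho_{q}(t)\|_{L^{2}}^{2}=-\kappa_{q}\|\nabla\rho_{q}(t)\|_{L^{2}}^{2}\,;\]

integrating over \([0,T]\) gives (2.29).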
Our main result is
**Proposition 2.4**.: _Let \(0<\beta<\frac{1}{3}\) and \(b>1\) as in (2.8). Assume that \(\gamma_{L},\gamma_{T},\gamma_{R}\in(0,1)\) satisfy_
\[\gamma_{T}<\frac{b-1}{b+1}(1-(2b+1)\beta)<\gamma_{R}+\gamma_{T}, \tag{2.30}\]
\[2\gamma_{L}<\frac{b-1}{b+1}(1+\beta). \tag{2.31}\]
_Then there exists \(\gamma>0\) and \(\alpha_{1}>0\) such that for all \(0<\alpha<\alpha_{1}\) there exists \(\tilde{N}\in\mathbb{N}\) and \(q_{0}\in\mathbb{N}\) with the following property._
_For any \(q\geq q_{0}\) and any initial datum \(\rho_{in}\in L^{2}(\mathbb{T}^{3})\) with \(\int_{\mathbb{T}^{3}}\rho_{in}\,dx=0\) such that_
\[\|\rho_{in}\|_{H^{n}}\leq\lambda_{q}^{n}\min\{D_{q}^{\nicefrac{{1}}{{2}}},D_ {q+1}^{\nicefrac{{1}}{{2}}}\}\qquad\text{ for }1\leq n\leq\tilde{N}, \tag{2.32}\]
_we have_
\[|D_{q+1}-D_{q}|\leq\tfrac{1}{2}\lambda_{q}^{-\gamma}D_{q}. \tag{2.33}\]
Proof.: We start by fixing \(\gamma,\alpha_{1}>0\). The exponent \(\gamma>0\) is determined by the inequalities
\[\gamma<\tfrac{1}{4}\min\Big\{\gamma_{T},\,\gamma_{R},\,\gamma_{E},\,(b-1)\beta,\,(b-1)\theta,\,\gamma_{T}+\gamma_{R}+(2b-1)\beta+1-\theta,\,\frac{b-1}{b+1}(1-(2b+1)\beta)-\gamma_{T}\Big\}. \tag{2.34}\]
We remark that the second-to-last condition in the minimum above can be satisfied because of the right inequality in (2.30).
\[\alpha_{1}(1+\gamma_{L})+\gamma+\theta<\gamma_{T}+\gamma_{R}+(2b-1)\beta+1. \tag{2.35}\]
We also need to choose
\[\tilde{N}\geq\max\Bigl{\{}(b+1)\left(\frac{2}{b-1}+\frac{\theta}{b}\right),\,1+ \frac{2b(b-1)(1+\beta)}{\gamma_{T}(b+1)}\Bigr{\}}, \tag{2.36}\]
where the first lower bound is given by \(N_{h}\) in (4.18) and the second lower bound is given by \(N\) to validate (A1) in section 5.
_Step 1: Preparation (\(\rho_{q+1}\rightsquigarrow\tilde{\rho}_{q+1}\))_
We start with the equation
\[\begin{split}\partial_{t}\rho_{q+1}+u_{q+1}\cdot\nabla\rho_{q+1}& =\kappa_{q+1}\Delta\rho_{q+1}\,,\\ \rho_{q+1}|_{t=0}&=\rho_{in}\,,\end{split} \tag{2.37}\]
and recall the form \(u_{q+1}=\bar{u}_{q}+w_{q+1}\) of the advecting field \(u_{q+1}\), where \(w_{q+1}\) is given in (2.21). Then we make use of the linear algebra identity \((Au)\times(Av)=\operatorname{cof}A\,(u\times v)=(\det A)\,A^{-T}(u\times v)\) for any \(3\times 3\) matrix \(A\) and vectors \(u,v\in\mathbb{R}^{3}\). From this we deduce that if \(H\) is the antisymmetric matrix such that \(Hv=u\times v\) and \(\det A=1\), then \(\tilde{H}=A^{-1}HA^{-T}\) is the antisymmetric matrix such that \(\tilde{H}v=(A^{T}u)\times v\). In particular, using the antisymmetric matrix-valued functions \(H_{\vec{k}}(\xi)\) introduced above in Section 2.1.4, we deduce
\[(\nabla\Phi_{i}^{-1}H_{\vec{k}}\nabla\Phi_{i}^{-T})v=(\nabla\Phi_{i}^{T}U_{ \vec{k}})\times v.\]
Therefore, using the identity \(\operatorname{div}(z\times\nabla\rho)=(\operatorname{curl}z)\cdot\nabla\rho\), we can write equation (2.37) equivalently as
\[\begin{split}\partial_{t}\rho_{q+1}+\bar{u}_{q}\cdot\nabla\rho_{ q+1}&=\operatorname{div}A_{q+1}\nabla\rho_{q+1}\,,\\ \rho_{q+1}|_{t=0}&=\rho_{in}\,,\end{split} \tag{2.38}\]
where the elliptic matrix \(A_{q+1}=A_{q+1}(x,t)\) is defined as
\[A_{q+1}(x,t)=\kappa_{q+1}\mathrm{Id}+\frac{1}{\lambda_{q+1}}\sum_{i}\eta_{i}(x,t)\nabla\Phi_{i}^{-1}(x,t)H^{(i)}(x,t,\lambda_{q+1}\Phi_{i}(x,t))\nabla\Phi_{i}^{-T}(x,t)\,, \tag{2.39}\]
and
\[H^{(i)}(x,t,\xi)=\sum_{\vec{k}\in\Lambda}\sigma_{q}^{\sfrac{1}{2}}(t)a_{\vec{k}}(\tilde{R}_{q,i}(x,t))H_{\vec{k}}(\xi)\,.\]
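The vector calculus identity invoked above is elementary: for smooth \(z\) and \(\rho\),

\[\operatorname{div}(z\times\nabla\rho)=\nabla\rho\cdot\operatorname{curl}z-z\cdot\operatorname{curl}\nabla\rho=(\operatorname{curl}z)\cdot\nabla\rho\,.\]

Thus the advective term \(w_{q+1}\cdot\nabla\rho_{q+1}\), with \(w_{q+1}\) a curl by (2.21), can be rewritten in divergence form and absorbed into the antisymmetric part of \(A_{q+1}\); the overall sign of this antisymmetric contribution is immaterial for the homogenized diffusivity computed below, since \(H^{(i)}\) enters quadratically through the corrector.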
Let \((\tilde{\eta}_{i})_{i}\) be a partition of unity such that \(\tilde{\eta}_{i}\eta_{i}=\eta_{i}\), \(\tilde{\eta}_{i}\eta_{j}=0\) for \(j\neq i\) and satisfying estimates of the same type as (2.22):
\[\|\partial_{t}^{m}\nabla_{x}^{n}\tilde{\eta}_{i}\|_{L^{\infty}}\leq\tilde{C}_ {n,m}\tau_{q}^{-m}. \tag{2.40}\]
Define the elliptic matrix
\[\tilde{A}_{q+1}(x,t)=\sum_{i}\tilde{\eta}_{i}(x,t)\nabla\Phi_{i}^{-1}(x,t) \left[\kappa_{q+1}\text{\rm Id}+\frac{\eta_{i}(x,t)}{\lambda_{q+1}}H^{(i)}(x,t,\lambda_{q+1}\Phi_{i}(x,t))\right]\nabla\Phi_{i}^{-T}(x,t)\,. \tag{2.41}\]
The estimate (2.24) implies
\[\|A_{q+1}-\tilde{A}_{q+1}\|_{L^{\infty}}\leq\kappa_{q+1}\sum_{i}\tilde{\eta}_{ i}\|\nabla\Phi_{i}^{-1}\nabla\Phi_{i}^{-T}-\text{\rm Id}\|_{L^{\infty}}\leq C \kappa_{q+1}\lambda_{q}^{-\gamma_{T}}. \tag{2.42}\]
Therefore we can compare the cumulative dissipation in (2.38) with that in
\[\begin{split}\partial_{t}\tilde{\rho}_{q+1}+\bar{u}_{q}\cdot \nabla\tilde{\rho}_{q+1}&=\operatorname{div}\tilde{A}_{q+1} \nabla\tilde{\rho}_{q+1}\,,\\ \tilde{\rho}_{q+1}|_{t=0}&=\rho_{in}\,.\end{split} \tag{2.43}\]
More precisely, assuming that \(q_{0}\) is sufficiently large and \(q\geq q_{0}\), we can ensure \(C\lambda_{q}^{-\gamma_{T}}<\frac{1}{2}\lambda_{q}^{-\gamma_{T}/2}\), and then apply Proposition 3.3 with \(\varepsilon=\frac{1}{2}\lambda_{q}^{-\gamma_{T}/2}<\frac{1}{2}\) to conclude
\[\big{|}\|\rho_{q+1}(T)\|_{L^{2}}^{2}-\|\tilde{\rho}_{q+1}(T)\|_{L^{2}}^{2} \big{|}\lesssim\lambda_{q}^{-\gamma_{T}/2}\,\big{|}\|\rho_{in}\|_{L^{2}}^{2}- \|\tilde{\rho}_{q+1}(T)\|_{L^{2}}^{2}\big{|}\,.\]
We may also write this as
\[\Big{|}D_{q+1}-\tilde{D}_{q+1}\Big{|}\lesssim\lambda_{q}^{-\gamma_{T}/2}\tilde {D}_{q+1}, \tag{2.44}\]
where \(\tilde{D}_{q+1}=\frac{1}{2}\,\big{|}\|\rho_{in}\|_{L^{2}}^{2}-\|\tilde{\rho}_ {q+1}(T)\|_{L^{2}}^{2}\big{|}\).
_Step 2: Spatial homogenization (\(\tilde{\rho}_{q+1}\rightsquigarrow\tilde{\rho}_{q}^{(1)}\))_
In the second step, we use classical estimates in quantitative homogenization and an explicit formula for the corrector, to replace (2.43) by the homogenized equation
\[\begin{split}\partial_{t}\bar{\rho}_{q}^{(1)}+\bar{u}_{q}\cdot \nabla\bar{\rho}_{q}^{(1)}&=\operatorname{div}\bar{A}_{q}\nabla \bar{\rho}_{q}^{(1)}\,,\\ \bar{\rho}_{q}^{(1)}|_{t=0}&=\rho_{in}\,.\end{split} \tag{2.45}\]
The homogenized elliptic coefficient is given, like in classical homogenization theory, by
\[\bar{A}_{q}(x,t)=\fint\tilde{A}_{q+1}(x,t,\xi)\Big(\mathrm{Id}+\sum_{i}\eta_{i}(x,t)\nabla\Phi_{i}^{T}(x,t)\nabla_{\xi}\chi_{i}^{T}(x,t,\xi)\Big)\,d\xi\]
with corrector \(\chi_{i}:\mathbb{T}^{3}\times[0,T]\times\mathbb{T}^{3}\to\mathbb{R}^{3}\). We will show below in Section 4 that, because of the special structure of the oscillating vectorfield \(w_{q+1}\), \(\chi\) can be defined _explicitly_ by
\[\chi_{i}(x,t,\xi)=-\frac{\sigma_{q}^{1/2}(t)}{\kappa_{q+1}\lambda_{q+1}}\nabla \Phi_{i}^{-1}(x,t)\sum_{\vec{k}}a_{\vec{k}}\big{(}\tilde{R}_{q,i}(x,t)\big{)} \varphi_{\vec{k}}(\xi)\vec{k}\,. \tag{2.46}\]
Using the properties of Mikado flows in Section 2.1.4, this formula allows us to obtain an explicit expression for \(\bar{A}_{q}\). Indeed, since both \(H^{(i)}\) and \(\chi_{i}\) satisfy \(\fint H^{(i)}\,d\xi=0\) and \(\fint\chi_{i}\,d\xi=0\), we have
\[\bar{A}_{q}=\kappa_{q+1}\sum_{i}\tilde{\eta}_{i}\nabla\Phi_{i}^{-1}\nabla\Phi_{i}^{-T}+\frac{1}{\lambda_{q+1}}\sum_{i}\eta_{i}^{2}\nabla\Phi_{i}^{-1}\fint H^{(i)}(\nabla_{\xi}\chi_{i})^{T}\,d\xi.\]
On the other hand, using (2.46) and the definition of \(H^{(i)}(x,t,\xi)\),
\[H^{(i)}(\nabla_{\xi}\chi_{i})^{T} =-\frac{\sigma_{q}}{\lambda_{q+1}\kappa_{q+1}}\sum_{\vec{k}}a_{ \vec{k}}^{2}(\tilde{R}_{q,i})H_{\vec{k}}(\nabla\varphi_{\vec{k}}\otimes( \nabla\Phi_{i}^{-1}\vec{k}))\] \[=\frac{\sigma_{q}}{\lambda_{q+1}\kappa_{q+1}}\sum_{\vec{k}}a_{\vec {k}}^{2}(\tilde{R}_{q,i})|\nabla\varphi_{\vec{k}}|^{2}(\vec{k}\otimes\vec{k}) \nabla\Phi_{i}^{-T}\,.\]
Here we used the definition of \(H_{\vec{k}}(\xi)\) to deduce
\[H_{\vec{k}}\nabla\varphi_{\vec{k}}=(\vec{k}\times\nabla\varphi_{\vec{k}})\times\nabla\varphi_{\vec{k}}=-|\nabla\varphi_{\vec{k}}|^{2}\vec{k}\,,\]
where we used \(\vec{k}\cdot\nabla\varphi_{\vec{k}}=0\).
Using the normalization in the definition of Mikado flows in Section 2.1.4 as well as Lemma 2.3, we conclude
\[\fint H^{(i)}(\nabla_{\xi}\chi_{i})^{T}\,d\xi=\frac{\sigma_{q}}{\lambda_{q+1}\kappa_{q+1}}\tilde{R}_{q,i}\nabla\Phi_{i}^{-T}\,.\]
Using (2.25) we deduce
\[\bar{A}_{q}(x,t)=\sum_{i}\tilde{\eta}_{i}\kappa_{q+1}\nabla\Phi_{i}^{-1}\nabla \Phi_{i}^{-T}+\frac{\sigma_{q}}{\lambda_{q+1}^{2}\kappa_{q+1}}\sum_{i}\eta_{i}^{ 2}(\mathrm{Id}-\sigma_{q}^{-1}\mathring{\tilde{R}}_{q})\,. \tag{2.47}\]
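The gain encoded in (2.47) is the classical enhancement of diffusivity by a small-scale flow. As a self-contained numerical illustration (a toy steady shear instead of Mikado flows, with arbitrary parameters; this is intuition only, not part of the construction): for \(u=(A\sin(\lambda y),0)\) with molecular diffusivity \(\kappa\), the cell problem yields the effective diffusivity \(\kappa_{\mathrm{eff}}=\kappa+A^{2}/(2\kappa\lambda^{2})\) in the \(x\)-direction, the same amplitude\({}^{2}\)/(frequency\({}^{2}\times\)diffusivity) structure as \(\kappa_{q}=\delta_{q+1}/(\lambda_{q+1}^{2}\kappa_{q+1})\) in (2.27). A Monte Carlo check in Python:

```
import numpy as np

# Effective diffusivity of u = (A*sin(lam*y), 0) with molecular diffusivity
# kappa: the corrector chi(y) = (A/(kappa*lam**2))*sin(lam*y) gives
#   kappa_eff = kappa + A**2 / (2*kappa*lam**2).
# We verify this by simulating dX = u(Y) dt + sqrt(2*kappa) dW.
rng = np.random.default_rng(1)
A, lam, kappa = 1.0, 8.0, 0.05
dt, T, N = 2e-3, 50.0, 5_000
x = np.zeros(N)
y = rng.uniform(0.0, 2 * np.pi / lam, N)      # stationary start for y
for _ in range(int(T / dt)):
    x += A * np.sin(lam * y) * dt + np.sqrt(2 * kappa * dt) * rng.standard_normal(N)
    y += np.sqrt(2 * kappa * dt) * rng.standard_normal(N)
kappa_mc = x.var() / (2 * T)                  # Var(X_T) ~ 2*kappa_eff*T
kappa_th = kappa + A**2 / (2 * kappa * lam**2)
print(f"Monte Carlo: {kappa_mc:.3f}, cell-problem prediction: {kappa_th:.3f}")
```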
Next, let
\[\begin{split}\tilde{\kappa}_{q}(x,t)&=\kappa_{q+1}+c_{1}^ {-1}\frac{\delta_{q+1}}{\lambda_{q+1}^{2}\kappa_{q+1}}\sum_{i}\eta_{i}^{2}(x,t) \\ &=\kappa_{q+1}+c_{1}^{-1}\bar{\eta}^{2}(x,\tau_{q}^{-1}t)\kappa_{q},\end{split} \tag{2.48}\]
where we have used (2.27) and (2.22). Then we compute
\[\begin{split}\bar{A}_{q}-\tilde{\kappa}_{q}\mathrm{Id}& =\kappa_{q+1}\sum_{i}\tilde{\eta}_{i}(\nabla\Phi_{i}^{-1}\nabla \Phi_{i}^{-T}-\mathrm{Id})+\\ &+\kappa_{q}\sum_{i}\eta_{i}^{2}(\delta_{q+1}^{-1}\sigma_{q}-c_{1 }^{-1})\mathrm{Id}-\kappa_{q}\sum_{i}\eta_{i}^{2}\delta_{q+1}^{-1}\mathring{ \tilde{R}}_{q}.\end{split}\]
It then follows from (2.14), (2.23) and (2.24) that, for all \((x,t)\),
\[|\bar{A}_{q}(x,t)-\tilde{\kappa}_{q}(x,t)\mathrm{Id}|\leq C\tilde{\kappa}_{q} (x,t)\lambda_{q}^{-\min\{\gamma_{T},\gamma_{R},\gamma_{E},(b-1)\beta\}}\,, \tag{2.49}\]
for some fixed constant \(C\). Then, as in Step 1, we can choose \(q_{0}\) sufficiently large to ensure that \(C\lambda_{q}^{-\min\{\gamma_{T},\gamma_{R},\gamma_{E},(b-1)\beta\}}<\frac{1}{2}\lambda_{q}^{-2\gamma}\), which is possible in view of (2.34). In particular, we may bound pointwise
\[\frac{1}{2}\tilde{\kappa}_{q}\mathrm{Id}\leq\bar{A}_{q}\leq 2\tilde{\kappa}_{q} \mathrm{Id}. \tag{2.50}\]
In Section 4, specifically Corollary 4.3, we compare the cumulative dissipation of \(\tilde{\rho}_{q+1}\) with that of \(\bar{\rho}_{q}^{(1)}\). The setting in Section 4 is based on validity of the inequality (2.50) as well as (2.30), (2.31). Then, Corollary 4.3 and Remark 4.4 yield:
\[\left|\|\tilde{\rho}_{q+1}(T)\|_{L^{2}}^{2}-\|\bar{\rho}_{q}^{(1)}(T)\|_{L^{2}}^{2}\right|\lesssim\lambda_{q}^{-\gamma}\left|\|\rho_{in}\|_{L^{2}}^{2}-\|\bar{\rho}_{q}^{(1)}(T)\|_{L^{2}}^{2}\right|,\]
or equivalently
\[\begin{split}\left|\tilde{D}_{q+1}-D_{q}^{(1)}\right|\lesssim \lambda_{q}^{-\gamma}D_{q}^{(1)},\end{split} \tag{2.51}\]
where \(D_{q}^{(1)}=\frac{1}{2}\left|\|\rho_{in}\|_{L^{2}}^{2}-\|\bar{\rho}_{q}^{(1)} (T)\|_{L^{2}}^{2}\right|\).
_Step 3: Diagonal reduction (\(\bar{\rho}_{q}^{(1)}\rightsquigarrow\bar{\rho}_{q}^{(2)}\))_
Using once more (2.49) we now apply Proposition 3.3 with \(\varepsilon=\frac{1}{2}\lambda_{q}^{-\gamma}<\frac{1}{2}\) to conclude
\[\begin{split}\left|D_{q}^{(2)}-D_{q}^{(1)}\right|\lesssim \lambda_{q}^{-\gamma}D_{q}^{(2)},\end{split} \tag{2.52}\]
where \(D_{q}^{(2)}=\frac{1}{2}\left|\|\rho_{in}\|_{L^{2}}^{2}-\|\bar{\rho}_{q}^{(2)}(T)\|_{L^{2}}^{2}\right|\) and \(\bar{\rho}_{q}^{(2)}\) is the solution of
\[\begin{split}\partial_{t}\bar{\rho}_{q}^{(2)}+\bar{u}_{q}\cdot \nabla\bar{\rho}_{q}^{(2)}&=\mathrm{div}\,\tilde{\kappa}_{q} \nabla\bar{\rho}_{q}^{(2)}\,,\\ \bar{\rho}_{q}^{(2)}|_{t=0}&=\rho_{in}\,.\end{split} \tag{2.53}\]
_Step 4: Time averaging (\(\bar{\rho}_{q}^{(2)}\rightsquigarrow\bar{\rho}_{q}^{(3)}\))_
The advection-diffusion equation (2.53) has two different characteristic time-scales: the advective time-scale \(\|\nabla\bar{u}_{q}\|_{L^{\infty}}^{-1}\) and the time-scale \(\tau_{q}\) given by the time-oscillatory behaviour of the ellipticity coefficient \(\tilde{\kappa}_{q}(x,t)\) given in (2.48), with the relationship, by (2.7), \(\|\nabla\bar{u}_{q}\|_{L^{\infty}}\tau_{q}\leq M\lambda_{q}^{-\gamma_{T}}\). Let us now invoke Proposition 5.1 with the following choices (the left-hand sides are the parameters of Proposition 5.1, instantiated with the quantities of the current construction)
\[\eta:=c_{1}^{-1}\bar{\eta}^{2},\quad\kappa_{0}:=\kappa_{q+1},\quad\kappa_{1}:= \kappa_{q},\quad\mu:=\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q},\quad\tau:= \tau_{q}.\]
We remark that Proposition 5.1 requires assumptions (A1)-(A4). Control of the cutoff-functions in (2.22) yields (A4), whereas (A2)-(A3) follow from assumptions (2.30), (2.31), (2.13) as well as (2.7). (A1) gives the second lower bound for \(\tilde{N}\) in (2.36).
In this setting one can average over the faster time-scale \(\tau_{q}\) and obtain the estimate
\[\left|\|\bar{\rho}_{q}^{(3)}(T)\|_{L^{2}}^{2}-\|\bar{\rho}_{q}^{(2)}(T)\|_{L^{ 2}}^{2}\right|\leq C\lambda_{q}^{-\gamma_{T}}(D_{q}^{(2)}+D_{q}^{(3)}), \tag{2.54}\]
where
\[D_{q}^{(i)}=\tfrac{1}{2}\left|\|\rho_{in}\|_{L^{2}}^{2}-\|\bar{\rho}_{q}^{(i)}(T)\|_{L^{2}}^{2}\right|\]
and \(\bar{\rho}_{q}^{(3)}\) is the solution of
\[\begin{split}\partial_{t}\bar{\rho}_{q}^{(3)}+\bar{u}_{q}\cdot \nabla\bar{\rho}_{q}^{(3)}&=(\kappa_{q+1}+\kappa_{q})\Delta\bar{ \rho}_{q}^{(3)}\,,\\ \bar{\rho}_{q}^{(3)}|_{t=0}&=\rho_{in}\,.\end{split} \tag{2.55}\]
Choosing \(q_{0}\) sufficiently large, we may ensure that \(C\lambda_{q}^{-\gamma_{T}}<\frac{1}{4}\lambda_{q}^{-\gamma}\) in (2.54), from which we can then conclude
\[\left|\frac{D_{q}^{(2)}}{D_{q}^{(3)}}-1\right|\leq\frac{1}{4}\lambda_{q}^{- \gamma}\left(1+\frac{D_{q}^{(2)}}{D_{q}^{(3)}}\right).\]
Therefore we deduce
\[\left|\frac{D_{q}^{(2)}}{D_{q}^{(3)}}-1\right|\leq\frac{1}{2}\lambda_{q}^{- \gamma}. \tag{2.56}\]
_Step 5: Gluing estimate (\(\bar{\rho}_{q}^{(3)}\rightsquigarrow\rho_{q}\))_
Finally, we compare (2.55) to (2.28) by using the estimate (2.17). Indeed, we can write (2.55) as
\[\begin{split}\partial_{t}\bar{\rho}_{q}^{(3)}+u_{q}\cdot\nabla \bar{\rho}_{q}^{(3)}&=\operatorname{div}\left(\kappa_{q}\nabla \bar{\rho}_{q}^{(3)}+\kappa_{q+1}\nabla\bar{\rho}_{q}^{(3)}+(z_{q}-\bar{z}_{q })\times\nabla\bar{\rho}_{q}^{(3)}\right)\,,\\ \bar{\rho}_{q}^{(3)}|_{t=0}&=\rho_{in}\,.\end{split}\]
On the other hand (2.17) and our choice of parameters imply
\[\|z_{q}-\bar{z}_{q}\|_{L^{\infty}}\lesssim\tau_{q}\delta_{q+1}\lambda_{q}^{- \gamma_{R}+\alpha(1+\gamma_{L})}\leq\kappa_{q}\lambda_{q}^{-2\gamma}\,,\quad \kappa_{q+1}=\lambda_{q}^{-(b-1)\theta}\kappa_{q}\leq\lambda_{q}^{-2\gamma} \kappa_{q}. \tag{2.57}\]
A final application of Proposition 3.3, assuming \(q_{0}\) is sufficiently large to absorb the implicit constant, leads to
\[\left|\|\bar{\rho}_{q}^{(3)}(T)\|_{L^{2}}^{2}-\|\rho_{q}(T)\|_{L^{2}}^{2} \right|\lesssim\lambda_{q}^{-2\gamma}\left|\|\rho_{in}\|_{L^{2}}^{2}-\|\rho_{ q}(T)\|_{L^{2}}^{2}\right|,\]
where \(\rho_{q}\) is the solution of (2.28), equivalently
\[\left|D_{q}^{(3)}-D_{q}\right|\lesssim\lambda_{q}^{-2\gamma}D_{q}. \tag{2.58}\]
_Final estimate_
Overall, we see that after the five steps above, we achieve the estimate
\[\frac{D_{q+1}}{D_{q}}=\frac{D_{q+1}}{\tilde{D}_{q+1}}\frac{\tilde{D}_{q+1}}{D_{q}^{(1)}}\frac{D_{q}^{(1)}}{D_{q}^{(2)}}\frac{D_{q}^{(2)}}{D_{q}^{(3)}}\frac{D_{q}^{(3)}}{D_{q}}\geq(1-C\lambda_{q}^{-\gamma})^{5}\geq 1-\tfrac{1}{2}\lambda_{q}^{-\gamma/2}, \tag{2.59}\]
where we have again assumed \(q_{0}\) to be sufficiently large, absorbing the constants and the exponent \(5\) at the expense of decreasing the exponent from \(\gamma\) to \(\gamma/2\) (recall \((1-x)^{5}\geq 1-5x\) for \(x\in[0,1]\)). This concludes the proof of Proposition 2.4, with \(\gamma\) replaced by \(\gamma/2\).
### Construction of the vectorfield \(u\) - h-principle
In Section 2.1 we detailed the main iteration scheme for producing Holder-continuous weak solutions of the Euler equations. What remains is to produce an initial vectorfield and associated Reynolds tensor, which satisfies the inductive assumptions (2.4)-(2.6) for _some_\(q\in\mathbb{N}\). To this end we recall that a smooth strict subsolution to the Euler equations is a smooth triple \((\bar{u},\bar{p},\bar{R})\) on \(\mathbb{T}^{3}\times[0,T]\) solving the Euler-Reynolds system (2.1) with the normalizations \(\int_{\mathbb{T}^{3}}\bar{u}\,dx=0\), \(\int_{\mathbb{T}^{3}}\bar{p}\,dx=0\), such that \(\bar{R}(x,t)>0\) is (uniformly) positive definite on \(\mathbb{T}^{3}\times[0,T]\). The energy associated with a subsolution is (c.f. [10, 17, 14])
\[e(t)=\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
where \(\mathcal{R}\) is the inverse divergence operator on symmetric 2-tensors, introduced in [13]. Since \(\bar{R}-\tilde{R}\) is a constant multiple of the identity, it easily follows, as in [13, 14], that \((u_{q},\dot{R}_{q})\) is a solution of (2.1). Moreover, the estimates in [13, 14] (see also Section 6.3) imply
\[\|\dot{R}_{q}\|_{C^{0}} \leq C\lambda^{-1+\alpha}\,,\] \[\|w\|_{C^{n}} \leq C\|\bar{R}\|_{C^{0}}^{\nicefrac{{1}}{{2}}}\lambda^{n}\quad\text{ for all }n\leq\bar{N}\,,\] \[\left|\fint_{\mathbb{T}^{3}}|u_{q}|^{2}-|\bar{u}|^{2}-\operatorname{tr}\tilde{R}\,dx\right| \leq C\lambda^{-1+\alpha}\,.\]
Here the constant \(C\) depends on \((\bar{u},\bar{R})\) and on \(\mathcal{N}\). It remains to choose \(\lambda\) and \(q\). We fix an exponent \(0<\gamma_{o}\) such that
\[\beta<\gamma_{o}\text{ and }\alpha+\gamma_{o}<1-2b\beta-\max\{\gamma_{E}, \gamma_{R}\}\]
and define \(\lambda=\lambda_{q}^{1-\gamma_{o}}\) (\(q\) still to be fixed). Observe that (2.8)-(2.9) guarantee the existence of such \(\gamma_{o}\). Now, validity of (2.4)-(2.6) follows from
\[C\lambda^{-1+\alpha}\leq\delta_{q+1}\lambda_{q}^{-\max\{\gamma_{E},\gamma_{R} \}},\text{ as well as }C\lambda\leq M\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q},\, \lambda\leq\lambda_{q}.\]
But by our choice of exponent \(\gamma_{o}\) these inequalities are satisfied for \(q\) sufficiently large. This concludes the proof.
**Remark 2.6**.: _An obvious way to apply Proposition 2.5 is to take \(\bar{u}=0\), \(\bar{p}=0\) and \(\bar{R}=\frac{1}{3}e(t)\mathrm{Id}\). More generally, one can take any smooth \(\bar{u},\bar{p}\) which is a classical solution of the Euler equations (1.2)._
### Proof of Theorem 1.1
Proof.: _Step 1. Choice of parameters._
Given \(0<\beta<\frac{1}{3}\), we choose \(1<b<\min\{\sqrt{\frac{3}{2}},\,\frac{1-\beta}{2\beta}\}\), and then fix \(\gamma_{T},\gamma_{R},\gamma_{E}>0\) as
\[\gamma_{E}=\gamma_{T}=\gamma_{R}=\frac{b-1}{b(b+1)}(1-(2b+1)\beta)\,. \tag{2.64}\]
One easily verifies that then
\[\gamma_{T}+b\gamma_{R} <(b-1)(1-(2b+1)\beta)\,, \tag{2.65}\] \[\gamma_{T} <\frac{b-1}{b+1}(1-(2b+1)\beta)\,,\] (2.66) \[\gamma_{R}+\gamma_{T} >\frac{b-1}{b+1}(1-(2b+1)\beta)\,. \tag{2.67}\]
Indeed, we have
\[\gamma_{T}+b\gamma_{R}=\frac{b-1}{b}(1-(2b+1)\beta),\quad\gamma_{R}+\gamma_{ T}=\frac{2}{b}\frac{b-1}{b+1}(1-(2b+1)\beta).\]
Note that (2.65) is the requirement (2.9) in Proposition 2.1, whereas (2.66) and (2.67) are the left and right inequalities in (2.30) in Proposition 2.4. Therefore, with this choice of parameters both Propositions are applicable.
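In detail, set \(X:=1-(2b+1)\beta\), which is positive precisely because \(b<\frac{1-\beta}{2\beta}\). Then the identities above give
\[\gamma_{T}+b\gamma_{R}=\frac{b-1}{b}X<(b-1)X,\qquad\gamma_{T}=\frac{b-1}{b(b+1)}X<\frac{b-1}{b+1}X,\qquad\gamma_{R}+\gamma_{T}=\frac{2}{b}\,\frac{b-1}{b+1}X>\frac{b-1}{b+1}X,\]
which are (2.65), (2.66) and (2.67), using respectively \(\frac{1}{b}<1\), \(\frac{1}{b}<1\) again, and \(\frac{2}{b}>1\) (as \(b<\sqrt{\nicefrac{3}{2}}<2\)).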
_Step 2. Construction of the vectorfield \(u\)._
Having fixed the parameters \(\beta,b,\gamma_{T},\gamma_{R},\gamma_{E}\) in Step 1, as well as \(\alpha_{0},\alpha_{1},a_{0}\) in Propositions 2.1 and 2.4, we apply Proposition 2.5 to obtain \(q_{0}\in\mathbb{N}\) and \((u_{q_{0}},\dot{R}_{q_{0}})\) satisfying the estimates (2.4)-(2.6). Then, Proposition 2.1 applies inductively for any \(q\geq q_{0}\) and yields a sequence \(\{u_{q}\}_{q\geq q_{0}}\). Arguing as in [14], we deduce that \(u_{q}\to u\) in \(C([0,T];C^{\beta^{\prime}}(\mathbb{T}^{3}))\) for any \(\beta^{\prime}<\beta\), and moreover, that
\(u\in C^{\beta^{\prime\prime}}([0,T]\times\mathbb{T}^{3})\) for any \(\beta^{\prime\prime}<\beta^{\prime}\). Since \(\beta^{\prime\prime}<\beta^{\prime}<\beta<\nicefrac{1}{3}\) were arbitrary in this argument, after renaming \(\beta^{\prime\prime}\) to \(\beta\) we deduce the existence of \(u\in C^{\beta}([0,T]\times\mathbb{T}^{3})\) as in the statement of the Theorem.
_Step 3. Macroscopic diffusion for mollified initial datum_
Next, we turn to the enhanced dissipation properties of the velocity field \(u\). First, we fix \(\gamma>0\) and \(\tilde{N}\) as in Proposition 2.4. Let \(\rho_{in}\in H^{1}(\mathbb{T}^{3})\) be a nonzero initial datum with \(\int_{\mathbb{T}^{3}}\rho_{in}\,dx=0\). We define the length-scale
\[\ell:=\frac{\|\rho_{in}\|_{L^{2}}}{\|\nabla\rho_{in}\|_{L^{2}}}.\]
Define
\[\tilde{\rho}_{in}:=\rho_{in}*\psi_{r\ell}\,,\]
where \(\psi\) is a standard symmetric mollifier (or one can use e.g. \(\psi\) from Section 6.1) and \(0<r<1\) is still to be fixed. Then (see e.g. Lemma 6.2)
\[\|\tilde{\rho}_{in}-\rho_{in}\|_{L^{2}} \leq Cr\ell\|\nabla\rho_{in}\|_{L^{2}}\leq Cr\|\rho_{in}\|_{L^{2}} \tag{2.68}\] \[\|\tilde{\rho}_{in}\|_{H^{n+1}} \leq C(r\ell)^{-n}\|\nabla\rho_{in}\|_{L^{2}}\leq C(r\ell)^{-n} \ell^{-1}\|\rho_{in}\|_{L^{2}}\quad\text{ for all }0\leq n\leq\tilde{N}. \tag{2.69}\]
The small factor \(r\) will be chosen below; for now we merely assume \(r<1\) is sufficiently small so that the first inequality becomes
\[\|\tilde{\rho}_{in}-\rho_{in}\|_{L^{2}}\leq\tfrac{1}{2}\|\rho_{in}\|_{L^{2}}\,. \tag{2.70}\]
Next, for any \(q\geq q_{0}\) we consider the solution \(\tilde{\rho}_{q}\) of the equation
\[\partial_{t}\tilde{\rho}_{q}+u_{q}\cdot\nabla\tilde{\rho}_{q}= \kappa_{q}\Delta\tilde{\rho}_{q}\,, \tag{2.71}\] \[\tilde{\rho}_{q}|_{t=0}= \tilde{\rho}_{in}\,.\]
By applying the Poincaré inequality, there exists \(c_{0}>0\) such that
\[\frac{d}{dt}\|\tilde{\rho}_{q}(t)\|_{L^{2}}^{2}\leq-2\kappa_{q}\|\nabla\tilde{\rho}_{q}(t)\|_{L^{2}}^{2}\leq-c_{0}\kappa_{q}\|\tilde{\rho}_{q}(t)\|_{L^{2}}^{2}, \tag{2.72}\]
and therefore
\[\kappa_{q}\int_{0}^{T}\|\nabla\tilde{\rho}_{q}\|_{L^{2}}^{2}dt= \frac{1}{2}\left(\|\tilde{\rho}_{in}\|_{L^{2}}^{2}-\|\tilde{\rho} _{q}(T)\|_{L^{2}}^{2}\right) \tag{2.73}\] \[\geq \frac{1}{2}\left(1-\exp(-c_{0}\kappa_{q}T)\right)\|\tilde{\rho}_{ in}\|_{L^{2}}^{2}\] \[\geq 16c\kappa_{q}T\|\tilde{\rho}_{in}\|_{L^{2}}^{2}\geq 4c\kappa_{q}T\| \rho_{in}\|_{L^{2}}^{2}\]
for some universal constant \(c\), provided \(q\) is sufficiently large so that \(c_{0}\kappa_{q}T<1/2\). Here we also used (2.70).
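The penultimate inequality in (2.73) follows, for instance, from the elementary bound \(1-e^{-x}\geq\tfrac{3}{4}x\) for \(x\in[0,\tfrac{1}{2}]\), which yields
\[\tfrac{1}{2}\left(1-\exp(-c_{0}\kappa_{q}T)\right)\geq\tfrac{3}{8}c_{0}\kappa_{q}T=16c\kappa_{q}T\qquad\text{with }c:=\tfrac{3c_{0}}{128},\]
while the last inequality uses \(\|\tilde{\rho}_{in}\|_{L^{2}}\geq\tfrac{1}{2}\|\rho_{in}\|_{L^{2}}\), i.e. (2.70).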
_Step 4. The inertial range - choice of \(q_{I}\) and \(r\)._
For some \(q_{I}\geq q_{0}\) to be fixed, set
\[r:=\lambda_{q_{I}}^{-b\theta/2}. \tag{2.74}\]
We claim that for sufficiently large \(q_{I}\) the following inequalities hold:
\[C\ell^{-1}(r\ell)^{-n} \leq\lambda_{q}^{n+1}(cT\kappa_{q+1})^{\sfrac{1}{2}}\text{ for all }n\geq 0\,, \tag{2.75}\] \[Cr^{2} \leq\tfrac{1}{4}cT\kappa_{q_{I}}\,. \tag{2.76}\]
Indeed, (2.75) is satisfied if
\[(r\ell)^{-1}\leq\lambda_{q}\text{ and }C\ell^{-1}\leq(cT)^{\sfrac{1}{2}} \lambda_{q}^{1-b\theta/2}\,, \tag{2.77}\]
where we used (2.26). Comparing powers of \(\lambda_{q}\), we then observe that (2.75) is valid provided \(\lambda_{q_{I}}^{1-b\theta/2}\) is sufficiently large, whereas (2.76) is valid provided \(\lambda_{q_{I}}^{b\theta-\theta}\) is sufficiently large. Since by choice \(b<\sqrt{\frac{3}{2}}\), from (2.26) it follows that \(b\theta/2<1\). We conclude that with sufficiently large \(q_{I}\) and the choice (2.74), the inequalities (2.75)-(2.76) hold.
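For instance, for (2.76): recalling that \(\kappa_{q}=\lambda_{q}^{-\theta}\) (as used via (2.26)) and that \(r=\lambda_{q_{I}}^{-b\theta/2}\) by (2.74), the inequality \(Cr^{2}\leq\tfrac{1}{4}cT\kappa_{q_{I}}\) reads
\[C\lambda_{q_{I}}^{-b\theta}\leq\tfrac{1}{4}cT\lambda_{q_{I}}^{-\theta},\qquad\text{i.e.}\qquad\lambda_{q_{I}}^{(b-1)\theta}\geq\frac{4C}{cT},\]
which is exactly the requirement that \(\lambda_{q_{I}}^{b\theta-\theta}\) be large.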
_Step 5. Enhanced dissipation in the inertial range._
Combining (2.75) with (2.73) and (2.69) we observe that for any \(q\geq q_{I}\)
\[\|\tilde{\rho}_{in}\|_{H^{n}}\leq\lambda_{q}^{n}\left(\kappa_{q+1}\int_{0}^{T} \|\nabla\tilde{\rho}_{q+1}\|_{L^{2}}^{2}dt\right)^{\sfrac{1}{2}}\text{ for any }1\leq n\leq\tilde{N}\,, \tag{2.78}\]
i.e. condition (2.32) in Proposition 2.4 holds. Therefore, for any \(q\geq q_{I}\), we may apply Proposition 2.4 with initial data \(\tilde{\rho}_{in}\), to obtain
\[(1-\tfrac{1}{2}\lambda_{q}^{-\gamma})\kappa_{q}\int_{0}^{T}\|\nabla\tilde{ \rho}_{q}\|_{L^{2}}^{2}dt\leq\kappa_{q+1}\int_{0}^{T}\|\nabla\tilde{\rho}_{q+1 }\|_{L^{2}}^{2}dt. \tag{2.79}\]
Next, observe that there exists \(\tilde{q}\in\mathbb{N}\) and \(\tilde{c}>0\) (depending only on \(\gamma>0\) and the choice of \(a,b\) for defining the sequence \((\lambda_{q})_{q}\)), such that \(\prod_{q^{\prime}\geq\tilde{q}}(1-\tfrac{1}{2}\lambda_{q^{\prime}}^{-\gamma})\geq e^{-\tilde{c}\lambda_{\tilde{q}}^{-\gamma}}\geq\tfrac{1}{2}\). Consequently, assuming \(q_{I}\geq\tilde{q}\), we have
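Indeed, using \(-\log(1-x)\leq 2x\) for \(x\in[0,\tfrac{1}{2}]\),
\[-\log\prod_{q^{\prime}\geq\tilde{q}}(1-\tfrac{1}{2}\lambda_{q^{\prime}}^{-\gamma})\leq\sum_{q^{\prime}\geq\tilde{q}}\lambda_{q^{\prime}}^{-\gamma}\leq\tilde{c}\lambda_{\tilde{q}}^{-\gamma},\]
where the sum is comparable to its first term because \(\lambda_{q}\) grows superexponentially; it then suffices to take \(\tilde{q}\) so large that \(\tilde{c}\lambda_{\tilde{q}}^{-\gamma}\leq\log 2\).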
\[\kappa_{q}\int_{0}^{T}\|\nabla\tilde{\rho}_{q}\|_{L^{2}}^{2}dt\geq\kappa_{q_{I}}\int_{0}^{T}\|\nabla\tilde{\rho}_{q_{I}}\|_{L^{2}}^{2}dt\prod_{q^{\prime}=q_{I}}^{q-1}(1-\tfrac{1}{2}\lambda_{q^{\prime}}^{-\gamma})\geq\tfrac{1}{2}\kappa_{q_{I}}\int_{0}^{T}\|\nabla\tilde{\rho}_{q_{I}}\|_{L^{2}}^{2}dt. \tag{2.80}\]
We deduce, for any \(q\geq q_{I}\)
\[\kappa_{q}\int_{0}^{T}\|\nabla\tilde{\rho}_{q}\|_{L^{2}}^{2}dt\geq 2c\kappa_{q_{ I}}T\|\rho_{in}\|_{L^{2}}^{2}\,. \tag{2.81}\]
_Step 6. Dissipation in the molecular range._
Now, let us fix \(\kappa=\kappa_{q_{M}}\) for some \(q_{M}\geq q_{I}\), and compare \(\tilde{\rho}_{q_{M}}\) to the solution \(\tilde{\rho}\) of
\[\begin{split}\partial_{t}\tilde{\rho}+u\cdot\nabla\tilde{\rho}=& \kappa\Delta\tilde{\rho},\\ \tilde{\rho}|_{t=0}=&\tilde{\rho}_{in}.\end{split} \tag{2.82}\]
To this end we consider the vector potentials \(z_{q_{M}},z\) of \(u_{q_{M}},u\). We have, for any \(q\),
\[\|z_{q+1}-z_{q}\|_{C^{0}}\leq\|z_{q+1}-\bar{z}_{q}\|_{C^{0}}+\|\bar{z}_{q}-z_{q }\|_{C^{0}},\]
where \(\bar{z}_{q}\) is the vector potential of \(\bar{u}_{q}\) obtained in Proposition 2.2. Using the same arguments as in Section 6 (see for instance Proposition 6.16) it is easy to verify
\[\|z_{q+1}-\bar{z}_{q}\|_{C^{0}}\lesssim\|\mathcal{R}w_{q+1}\|_{C^{\alpha}} \lesssim\delta_{q+1}^{\sfrac{1}{2}}\lambda_{q+1}^{-1+\alpha}=\lambda_{q}^{-b(1+ \beta-\alpha)}.\]
Since \(b(1+\beta)-\theta=b\frac{b-1}{b+1}(1+\beta)>2\gamma\), we deduce (assuming \(\alpha\) sufficiently small)
\[\|z_{q+1}-z_{q}\|_{C^{0}}\lesssim\kappa_{q}\lambda_{q}^{-2\gamma},\]
where we used (2.57) for estimating \(\bar{z}_{q}-z_{q}\). In particular we obtain
\[\|z-z_{q_{M}}\|_{C^{0}}\leq\sum_{q=q_{M}}^{\infty}\|z_{q+1}-z_{q}\|_{C^{0}}\lesssim\kappa_{q_{M}}\lambda_{q_{M}}^{-2\gamma}=\kappa\lambda_{q_{M}}^{-2\gamma}.\]
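Here the tail of the series is dominated by its first term: recalling \(\kappa_{q}=\lambda_{q}^{-\theta}\),
\[\sum_{q=q_{M}}^{\infty}\kappa_{q}\lambda_{q}^{-2\gamma}=\sum_{q=q_{M}}^{\infty}\lambda_{q}^{-\theta-2\gamma}\lesssim\lambda_{q_{M}}^{-\theta-2\gamma}=\kappa_{q_{M}}\lambda_{q_{M}}^{-2\gamma},\]
again because \(\lambda_{q}\) grows superexponentially.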
Writing the equation (2.82) as
\[\partial_{t}\tilde{\rho}+u_{q_{M}}\cdot\nabla\tilde{\rho}=\operatorname{div}( \kappa\nabla\tilde{\rho}+(z-z_{q_{M}})\times\nabla\tilde{\rho}),\]
we may then apply Proposition 3.3 to deduce
\[\kappa\int_{0}^{T}\|\nabla\tilde{\rho}\|_{L^{2}}^{2}dt\geq c\kappa_{q_{I}}T\|\rho _{in}\|_{L^{2}}^{2}. \tag{2.83}\]
_Step 7. Enhanced dissipation for original initial datum_.
Finally, we compare \(\tilde{\rho}\) to the solution \(\rho\) of
\[\partial_{t}\rho+u\cdot\nabla\rho= \kappa\Delta\rho, \tag{2.84}\] \[\rho|_{t=0}= \rho_{in}.\]
The basic energy estimate together with (2.68) gives
\[\kappa\int_{0}^{T}\|\nabla\rho-\nabla\tilde{\rho}\|_{L^{2}}^{2}\,dt\leq\frac{1}{2}\|\rho_{in}-\tilde{\rho}_{in}\|_{L^{2}}^{2}\leq Cr^{2}\|\rho_{in}\|_{L^{2}}^{2}\,.\]
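Indeed, by linearity the difference \(\rho-\tilde{\rho}\) solves
\[\partial_{t}(\rho-\tilde{\rho})+u\cdot\nabla(\rho-\tilde{\rho})=\kappa\Delta(\rho-\tilde{\rho}),\qquad(\rho-\tilde{\rho})|_{t=0}=\rho_{in}-\tilde{\rho}_{in},\]
so the energy identity of Lemma 3.1 applies with initial datum \(\rho_{in}-\tilde{\rho}_{in}\), whose \(L^{2}\) norm is controlled by (2.68).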
Consequently
\[\kappa^{\sfrac{1}{2}}\left(\int_{0}^{T}\|\nabla\rho\|_{L^{2}}^{2} \,dt\right)^{\sfrac{1}{2}} \geq\kappa^{\sfrac{1}{2}}\left(\int_{0}^{T}\|\nabla\tilde{\rho}\| _{L^{2}}^{2}\,dt\right)^{\sfrac{1}{2}}-C^{\sfrac{1}{2}}r\|\rho_{in}\|_{L^{2}}\] \[\geq\left[(c\kappa_{q_{I}}T)^{\sfrac{1}{2}}-C^{\sfrac{1}{2}}r \right]\|\rho_{in}\|_{L^{2}}\] \[\geq\tfrac{1}{2}(c\kappa_{q_{I}}T)^{\sfrac{1}{2}}\|\rho_{in}\|_{L ^{2}},\]
where we used (2.76).
Since this is true for \(\kappa=\kappa_{q_{M}}\) for any \(q_{M}\geq q_{I}\), we deduce
\[\limsup_{\kappa\to 0}\kappa\int_{0}^{T}\|\nabla\rho\|_{L^{2}}^{2}\,dt\geq \tfrac{1}{4}c\kappa_{q_{I}}T\|\rho_{in}\|_{L^{2}}^{2}.\]
This concludes the proof.
## 3. Energy estimates
### Estimates for advection-diffusion with Laplacian
Estimates in this section are needed for both spatial homogenisation and time averaging. We consider the advection-diffusion equation
\[\partial_{t}\rho+u\cdot\nabla\rho =\kappa\Delta\rho\qquad\text{on }\mathbb{T}^{3}\times[0,T] \tag{3.1}\] \[\rho|_{t=0} =\rho_{in}\]
with smooth initial datum. For short notation, in this section we use space-time norms on time interval \([0,T]\), denoted by \(L_{xt}\).
We assume the following relationship between the scales:
\[\left(\frac{\|\nabla u\|_{L^{\infty}}}{\kappa}\right)^{n}\geq\left(\frac{\| \nabla^{n}u\|_{L^{\infty}}}{\kappa}\right)^{\frac{2n}{n+1}}. \tag{3.2}\]
**Lemma 3.1**.: _Let \(\kappa>0\) and \(u\in C^{\infty}(\mathbb{T}^{d};\mathbb{R}^{d})\) be divergence free. Assume (3.2). Then the advection-diffusion equation (3.1) satisfies for any \(t\leq T\)_
\[\frac{1}{2}\|\rho(t)\|_{L^{2}}^{2}+\kappa\int_{0}^{t}\|\nabla\rho(s)\|_{L^{2}} ^{2}\,ds=\frac{1}{2}\|\rho_{in}\|_{L^{2}}^{2},\]
_and, for any \(n\geq 1\)_
\[\sup_{t\leq T}\|(\nabla^{n}\rho)(t)\|_{L^{2}}^{2}+\kappa\int_{0}^{T}\|\nabla^{n+1} \rho(s)\|_{L^{2}}^{2}ds\leq\|\nabla^{n}\rho_{in}\|_{L^{2}}^{2}+C_{n}\left(\frac{ \|\nabla u\|_{L^{\infty}}}{\kappa}\right)^{n}\kappa\int_{0}^{T}\|\nabla\rho(s) \|_{L^{2}}^{2}. \tag{3.3}\]
Proof.: Apply \(\partial^{\boldsymbol{\alpha}}\), where \(|\boldsymbol{\alpha}|=n\), to the equation and test the result with \(\partial^{\boldsymbol{\alpha}}\rho\) to get, after integration in time,
\[\frac{1}{2}\sup_{t\leq T}\|(\partial^{\boldsymbol{\alpha}}\rho)(t)\|_{L^{2}}^ {2}+\kappa\|\nabla\partial^{\boldsymbol{\alpha}}\rho\|_{L^{2}_{xt}}^{2}\leq \frac{1}{2}\|\nabla^{n}\rho_{in}\|_{L^{2}}^{2}+\|[u\cdot\nabla,\partial^{ \boldsymbol{\alpha}}]\rho\|_{L^{2}_{xt}}\|\partial^{\boldsymbol{\alpha}}\rho \|_{L^{2}_{xt}}.\]
For the last term use the commutator estimate
\[\|[u\cdot\nabla,\partial^{\boldsymbol{\alpha}}]f\|_{L^{2}}\lesssim_{n}\| \nabla^{n}u\|_{L^{\infty}}\|\nabla f\|_{L^{2}}+\|\nabla u\|_{L^{\infty}}\| \nabla^{n}f\|_{L^{2}}\]
(followed by Hölder's inequality in time) to write
\[\|[u\cdot\nabla,\partial^{\boldsymbol{\alpha}}]\rho\|_{L^{2}_{xt}}\|\partial^{\boldsymbol{\alpha}}\rho\|_{L^{2}_{xt}} \lesssim_{n}\|\nabla^{n}u\|_{L^{\infty}}\|\nabla\rho\|_{L^{2}_{xt}}\|\nabla^{n}\rho\|_{L^{2}_{xt}}+\|\nabla u\|_{L^{\infty}}\|\nabla^{n}\rho\|_{L^{2}_{xt}}^{2}\] \[\lesssim_{n}\|\nabla^{n}u\|_{L^{\infty}}\|\nabla\rho\|_{L^{2}_{xt}}^{\frac{n+1}{n}}\|\nabla^{n+1}\rho\|_{L^{2}_{xt}}^{\frac{n-1}{n}}+\|\nabla u\|_{L^{\infty}}\|\nabla\rho\|_{L^{2}_{xt}}^{\frac{2}{n}}\|\nabla^{n+1}\rho\|_{L^{2}_{xt}}^{\frac{2n-2}{n}}.\]
Here, for the latter \(\lesssim\) we use interpolation. Thus, summing over all partial derivatives of order \(n\) we have
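Specifically, we use the Gagliardo-Nirenberg interpolation bound
\[\|\nabla^{n}\rho\|_{L^{2}}\lesssim_{n}\|\nabla\rho\|_{L^{2}}^{\frac{1}{n}}\|\nabla^{n+1}\rho\|_{L^{2}}^{\frac{n-1}{n}}\]
at each fixed time, followed by Hölder's inequality in time (with exponents \(n\) and \(\frac{n}{n-1}\)) to pass to the \(L^{2}_{xt}\) norms.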
\[\frac{1}{2}\sup_{t\leq T}\|(\nabla^{n}\rho)(t)\|_{L^{2}}^{2}+ \kappa\|\nabla^{n+1}\rho\|_{L^{2}_{xt}}^{2}\leq C_{n}\kappa^{-1}\|\nabla^{n}u\| _{L^{\infty}}(\kappa^{\frac{1}{2}}\|\nabla\rho\|_{L^{2}_{xt}})^{\frac{n+1}{n}} (\kappa^{\frac{1}{2}}\|\nabla^{n+1}\rho\|_{L^{2}_{xt}})^{\frac{n-1}{n}}\] \[+C_{n}\kappa^{-1}\|\nabla u\|_{L^{\infty}}(\kappa^{\frac{1}{2}}\| \nabla\rho\|_{L^{2}_{xt}})^{\frac{2}{n}}(\kappa^{\frac{1}{2}}\|\nabla^{n+1} \rho\|_{L^{2}_{xt}})^{\frac{2n-2}{n}}+\frac{1}{2}\|\nabla^{n}\rho_{in}\|_{L^{2} }^{2}.\]
The above and Young's inequality give
\[\sup_{t\leq T}\|(\nabla^{n}\rho)(t)\|_{L^{2}}^{2}+\kappa\|\nabla^{n+1}\rho\|_{ L^{2}_{xt}}^{2}\leq\|\nabla^{n}\rho_{in}\|_{L^{2}}^{2}+C_{n}\left(\left(\frac{\| \nabla u\|_{L^{\infty}}}{\kappa}\right)^{n}+\left(\frac{\|\nabla^{n}u\|_{L^ {\infty}}}{\kappa}\right)^{\frac{2n}{n+1}}\right)\kappa\|\nabla\rho\|_{L^{2}_{ xt}}^{2},\]
which with assumption (3.2) gives (3.3).
We will also need the following immediate corollary for the forced advection-diffusion equation
\[\begin{split}\partial_{t}\rho+u\cdot\nabla\rho&= \kappa\Delta\rho+\operatorname{div}f\qquad\text{ on }\mathbb{T}^{3}\times[0,T]\\ \rho|_{t=0}&=\rho_{in}\end{split} \tag{3.4}\]
with smooth initial datum and forcing.
**Corollary 3.2**.: _Let \(\kappa>0\) and \(u\in C^{\infty}(\mathbb{T}^{d};\mathbb{R}^{d})\) be divergence free. Assume (3.2). Then the forced advection-diffusion equation (3.4) satisfies for any \(t\leq T\)_
\[\frac{1}{2}\|\rho(t)\|_{L^{2}}^{2}+\kappa\int_{0}^{t}\|\nabla\rho(s)\|_{L^{2}} ^{2}\,ds=\frac{1}{2}\|\rho_{in}\|_{L^{2}}^{2}+\int_{0}^{t}\int f\cdot\nabla\rho,\]
_and for any \(n\geq 1\):_
\[\begin{split}&\sup_{t\leq T}\|(\nabla^{n}\rho)(t)\|_{L^{2}}^{2}+ \kappa\int_{0}^{T}\|\nabla^{n+1}\rho(s)\|_{L^{2}}^{2}ds\leq\\ &\|\nabla^{n}\rho_{in}\|_{L^{2}}^{2}+C_{n}\left(\frac{\|\nabla u \|_{L^{\infty}}}{\kappa}\right)^{n}\kappa\int_{0}^{T}\|\nabla\rho(s)\|_{L^{2} }^{2}+\sum_{|\boldsymbol{\alpha}|=n}\left|\int_{0}^{T}\int\partial^{\boldsymbol{ \alpha}}f\cdot\nabla\partial^{\boldsymbol{\alpha}}\rho\right|.\end{split} \tag{3.5}\]
Observe that above we do not use any norms of the forcing, which will be important later. In particular, this is why we immediately integrated in time while deriving estimate (3.3).
### Estimates for advection-diffusion with an elliptic matrix
Estimates in this section are needed mainly in the space homogenisation proposition. We will consider
\[\partial_{t}\bar{\rho}+u\cdot\nabla\bar{\rho} =\operatorname{div}\bar{A}\nabla\bar{\rho}\qquad\text{ on }\mathbb{T}^{3}\times[0,T] \tag{3.6}\] \[\bar{\rho}|_{t=0} =\rho_{in}\]
with smooth initial datum and smooth matrix \(\bar{A}\), or a difference of two solutions \(\bar{\rho}_{1}\) and \(\bar{\rho}_{2}\) with respective matrices \(\bar{A}_{1}\), \(\bar{A}_{2}\).
#### 3.2.1. Comparison
First, we prove an estimate that allows us to compare two solutions of (3.6) with two different ellipticity matrices.
**Proposition 3.3** (Stability estimates).: _Let \(\varrho_{1}\) and \(\varrho_{2}\) solve the following equations on \(\mathbb{T}^{3}\times[0,T]\)_
\[\partial_{t}\varrho_{1}+u\cdot\nabla\varrho_{1} =\operatorname{div}(A_{1}\nabla\varrho_{1}),\] \[\partial_{t}\varrho_{2}+u\cdot\nabla\varrho_{2} =\operatorname{div}(A_{2}\nabla\varrho_{2})\]
_with initial data \(\varrho_{1}(0)=\varrho_{2}(0)=\rho_{in}\in L^{2}(\mathbb{T}^{3})\) and uniformly elliptic symmetric matrices \(A_{1},A_{2}:\mathbb{T}^{3}\times[0,T]\to\mathbb{R}^{3\times 3}\) satisfying for \(\varepsilon\leq\frac{1}{2}\)_
\[\left|(A_{1}-A_{2})\xi\cdot\zeta\right|\leq \varepsilon(A_{1}\xi\cdot\xi)^{\frac{1}{2}}(A_{1}\zeta\cdot\zeta )^{\frac{1}{2}},\quad\text{for any }(x,t)\in\mathbb{T}^{3}\times[0,T]\text{ and }\xi,\zeta\in\mathbb{R}^{3}. \tag{3.7}\]
_Let \(\tilde{\varrho}:=\varrho_{1}-\varrho_{2}\), then we have_
\[|A_{1}\nabla\varrho_{2}\cdot\nabla\varrho_{2}| \leq 2A_{2}\nabla\varrho_{2}\cdot\nabla\varrho_{2}, \tag{3.8}\] \[\sup_{t\leq T}\|\tilde{\varrho}(t)\|_{L^{2}}^{2}+\int_{0}^{T}\int A _{1}\nabla\tilde{\varrho}\cdot\nabla\tilde{\varrho}dxdt\leq \varepsilon^{2}\int_{0}^{T}\int A_{1}\nabla\varrho_{2}\cdot\nabla \varrho_{2}dxdt,\] (3.9) \[\left|\|\varrho_{1}(t)\|_{L^{2}}^{2}-\|\varrho_{2}(t)\|_{L^{2}}^{ 2}\right|\leq 9\varepsilon\int_{0}^{t}\int A_{1}\nabla\varrho_{2}\cdot\nabla \varrho_{2}dxdt\leq 18\varepsilon\int_{0}^{t}\int A_{2}\nabla\varrho_{2}\cdot \nabla\varrho_{2}dxdt \tag{3.10}\]
_for any \(t\leq T\)._
Proof.: The pointwise inequality (3.8) follows from (3.7), ellipticity, and \(\varepsilon\leq\frac{1}{2}\). Taking the difference of the two equations, we have
\[\partial_{t}\tilde{\varrho}+u\cdot\nabla\tilde{\varrho}= \operatorname{div}\big{(}A_{1}\nabla\tilde{\varrho}\big{)}+\operatorname{div} \big{(}(A_{1}-A_{2})\nabla\varrho_{2}\big{)}. \tag{3.11}\]
Testing (3.11) with \(\tilde{\varrho}\) and integrating by parts, we use (3.7) together with Young's inequality to absorb the contribution of the last term in (3.11) into the dissipative part on the left-hand side. This gives (3.9).
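In detail, testing (3.11) with \(\tilde{\varrho}\) gives, using (3.7) pointwise,
\[\frac{1}{2}\frac{d}{dt}\|\tilde{\varrho}\|_{L^{2}}^{2}+\int A_{1}\nabla\tilde{\varrho}\cdot\nabla\tilde{\varrho}\,dx=-\int(A_{1}-A_{2})\nabla\varrho_{2}\cdot\nabla\tilde{\varrho}\,dx\leq\varepsilon\int(A_{1}\nabla\varrho_{2}\cdot\nabla\varrho_{2})^{\frac{1}{2}}(A_{1}\nabla\tilde{\varrho}\cdot\nabla\tilde{\varrho})^{\frac{1}{2}}dx,\]
and Young's inequality absorbs \(\frac{1}{2}\int A_{1}\nabla\tilde{\varrho}\cdot\nabla\tilde{\varrho}\,dx\) into the left-hand side at the cost of \(\frac{\varepsilon^{2}}{2}\int A_{1}\nabla\varrho_{2}\cdot\nabla\varrho_{2}\,dx\); integrating in time yields (3.9).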
From the equations for \(\varrho_{1}\) and \(\varrho_{2}\), we also can derive
\[\partial_{t}\big{(}\tilde{\varrho}\varrho_{2}\big{)}=-\tilde{ \varrho}u\cdot\nabla\varrho_{2}+\tilde{\varrho}\operatorname{div}(A_{2}\nabla \varrho_{2})-\varrho_{2}u\cdot\nabla\tilde{\varrho}+\varrho_{2}\operatorname{ div}\big{(}A_{1}\nabla\tilde{\varrho}\big{)}+\varrho_{2}\operatorname{ div}\big{(}(A_{1}-A_{2})\nabla\varrho_{2}\big{)}. \tag{3.12}\]
Integrating over \(\mathbb{T}^{3}\times[0,t]\), the first and the third terms on the right-hand side of (3.12) cancel, since \(u\) is divergence free. For the remaining terms, we integrate by parts
\[\left|\int\tilde{\varrho}(t)\varrho_{2}(t)dx\right| \leq\left|\int_{0}^{t}\int(A_{1}+A_{2})\nabla\tilde{\varrho} \nabla\varrho_{2}dxds\right|+\left|\int_{0}^{t}\int(A_{1}-A_{2})\nabla\varrho _{2}\nabla\varrho_{2}dxds\right|\] \[\leq 2\left|\int_{0}^{t}\int A_{1}\nabla\tilde{\varrho}\nabla \varrho_{2}dxds\right|+2\left|\int_{0}^{t}\int(A_{1}-A_{2})\nabla\varrho_{2} \nabla\varrho_{2}dxds\right|.\]
We estimate the latter right-hand side term by \(2\varepsilon\int_{0}^{t}\int A_{1}\nabla\varrho_{2}\cdot\nabla\varrho_{2}dxds\) using the assumption (3.7). For the first term we use the Cauchy-Schwarz inequality \(A_{1}\xi\cdot\zeta\leq(A_{1}\xi\cdot\xi)^{\frac{1}{2}}(A_{1}\zeta\cdot\zeta)^{\frac{1}{2}}\) and Young's inequality, which utilizes the factor \(\varepsilon^{2}\) in (3.9), to get
\[\left|\int\tilde{\varrho}(t)\varrho_{2}(t)dx\right|\leq 4\varepsilon\int_{0}^{t} \int A_{1}\nabla\varrho_{2}\cdot\nabla\varrho_{2}dxds.\]
Then the following fact concludes the proof of (3.10),
\[\left|\|\varrho_{1}(t)\|_{L^{2}}^{2}-\|\varrho_{2}(t)\|_{L^{2}}^{2}\right|=\left|\int\tilde{\varrho}(t)\big{(}\varrho_{1}(t)+\varrho_{2}(t)\big{)}dx\right|\leq\|\tilde{\varrho}(t)\|_{L^{2}}^{2}+2\left|\int\tilde{\varrho}(t)\varrho_{2}(t)dx\right|.\]
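Indeed, combining this with the bound \(\big{|}\int\tilde{\varrho}(t)\varrho_{2}(t)dx\big{|}\leq 4\varepsilon\int_{0}^{t}\int A_{1}\nabla\varrho_{2}\cdot\nabla\varrho_{2}\,dxds\), with (3.9), and with \(\varepsilon^{2}\leq\varepsilon\) (as \(\varepsilon\leq\tfrac{1}{2}\)),
\[\left|\|\varrho_{1}(t)\|_{L^{2}}^{2}-\|\varrho_{2}(t)\|_{L^{2}}^{2}\right|\leq(\varepsilon^{2}+8\varepsilon)\int_{0}^{t}\int A_{1}\nabla\varrho_{2}\cdot\nabla\varrho_{2}\,dxds\leq 9\varepsilon\int_{0}^{t}\int A_{1}\nabla\varrho_{2}\cdot\nabla\varrho_{2}\,dxds,\]
and the second inequality in (3.10) follows from the pointwise bound (3.8).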
#### 3.2.2. General weighted estimate
We define
\[\bar{D}=\int_{0}^{T}\|\bar{\kappa}^{\frac{1}{2}}\nabla\bar{\rho}(s)\|_{L^{2}}^ {2}\,ds.\]
Assume the following inequalities
\[2\bar{\kappa}(x,t)\mathrm{Id}\geq\bar{A}(x,t)\geq\frac{\bar{\kappa}(x,t)}{2} \mathrm{Id} \tag{3.13}\]
and
\[\left\|\frac{D_{t}^{u}\bar{\kappa}}{\bar{\kappa}}\right\|_{L^{\infty}}\leq C \tau^{-1}. \tag{3.14}\]
Assume further that for \(m\geq 1\)
\[\|\bar{\kappa}^{\frac{m-2}{2}}\nabla^{m}\bar{A}\|_{L^{\infty}}\leq C(\tau^{-1 })^{\frac{m}{2}}\qquad\|\bar{\kappa}^{\frac{m-2}{2}}\nabla^{m}\bar{\kappa}\|_ {L^{\infty}}\lesssim(\tau^{-1})^{\frac{m}{2}} \tag{3.15}\]
and
\[\|\bar{\kappa}^{\frac{m-1}{2}}\nabla^{m}u\|_{L^{\infty}}\leq C\tau^{-1}(\tau^ {-1})^{\frac{m-1}{2}}. \tag{3.16}\]
**Lemma 3.4**.: _Let \(u\) be divergence free. Assume that the inequalities (3.13)-(3.16) hold. Then the general advection-diffusion equation (3.6) satisfies for any \(t\leq T\)_
\[\frac{1}{2}\|\bar{\rho}(t)\|_{L^{2}}^{2}+\int_{0}^{t}\|\bar{\kappa}^{\frac{1} {2}}\nabla\bar{\rho}(s)\|_{L^{2}}^{2}\,ds=\frac{1}{2}\|\rho_{in}\|_{L^{2}}^{2}\]
_and for any \(n\geq 1\)_
\[\sup_{t\leq T}\|(\bar{\kappa}^{\frac{n}{2}}\nabla^{n}\bar{\rho})(t)\|_{L^{2} }^{2}+\int_{0}^{T}\|\bar{\kappa}^{\frac{n+1}{2}}\nabla^{n+1}\bar{\rho}\|_{L^{ 2}}^{2}\lesssim(\tau^{-1})^{n}\bar{D}+\sum_{i=1}^{n}(\tau^{-1})^{n-i}\|(\bar{ \kappa}^{i/2}\nabla^{i}\bar{\rho})_{in}\|_{L^{2}}^{2}. \tag{3.17}\]
_Further, for any \(|\boldsymbol{\alpha}|=n\)_
\[\int_{0}^{T}\|\bar{\kappa}^{\frac{n}{2}}D_{t}\partial^{\boldsymbol{\alpha}} \bar{\rho}\|_{L^{2}}^{2}\lesssim(\tau^{-1})^{n+1}\bar{D}+\sum_{i=1}^{n+1} \left(\tau^{-1}\right)^{n+1-i}\|(\bar{\kappa}^{i/2}\nabla^{i}\bar{\rho})_{in} \|_{L^{2}}^{2} \tag{3.18}\]
_The constants in \(\lesssim\) depend on the constants in the assumptions and on \(n\)._
The proof occupies the rest of this section.
Proof.: _Step 1: Preliminary \(n\)-th order estimate._
Apply \(\partial^{\boldsymbol{\alpha}}\), where \(|\boldsymbol{\alpha}|=n\), to the equation (3.6) and test the result with \(\bar{\kappa}^{n}\partial^{\boldsymbol{\alpha}}\bar{\rho}\)
\[\begin{split}\frac{1}{2}\frac{d}{dt}&\int|(\bar{ \kappa}^{n/2}\partial^{\boldsymbol{\alpha}}\bar{\rho})(t)|^{2}+\int\bar{ \kappa}^{n}\bar{A}(\nabla\partial^{\boldsymbol{\alpha}}\bar{\rho})(\nabla \partial^{\boldsymbol{\alpha}}\bar{\rho})=\frac{1}{2}\int\frac{D_{t}^{u}( \bar{\kappa}^{n})}{\bar{\kappa}^{n}}\bar{\kappa}^{n}|\partial^{\boldsymbol{ \alpha}}\bar{\rho}|^{2}\\ &-\sum_{\boldsymbol{\beta}+\boldsymbol{\gamma}=\boldsymbol{\alpha },\boldsymbol{\beta}>0}c_{\boldsymbol{\beta}}\int\partial^{\boldsymbol{\beta}} u\cdot\partial^{\boldsymbol{\gamma}}\nabla\bar{\rho}(\partial^{\boldsymbol{ \alpha}}\bar{\rho})\bar{\kappa}^{n}+\partial^{\boldsymbol{\beta}}\bar{A} \nabla\partial^{\boldsymbol{\gamma}}\bar{\rho}\nabla(\bar{\kappa}^{n} \partial^{\boldsymbol{\alpha}}\bar{\rho})+\int\bar{A}\nabla\partial^{ \boldsymbol{\alpha}}\bar{\rho}\nabla(\bar{\kappa}^{n})\partial^{\boldsymbol{ \alpha}}\bar{\rho},\end{split} \tag{3.19}\]
where \(c_{\boldsymbol{\beta}}\) are binomial coefficients. We estimate the four right-hand side terms of (3.19) in order of appearance. For the first one, use \(\frac{D_{t}^{u}(\bar{\kappa}^{n})}{\bar{\kappa}^{n}}=n\frac{D_{t}^{u}(\bar{\kappa})}{\bar{\kappa}}\) and estimate in modulus
\[\int\frac{D_{t}^{u}(\bar{\kappa}^{n})}{\bar{\kappa}^{n}}\bar{\kappa}^{n}| \partial^{\boldsymbol{\alpha}}\bar{\rho}|^{2}\leq n\|\bar{\kappa}^{\frac{n}{2 }}\nabla^{n}\bar{\rho}\|_{L^{2}}^{2}\left\|\frac{D_{t}^{u}\bar{\kappa}}{\bar{ \kappa}}\right\|_{L^{\infty}}.\]
For a single summand of the second one, estimate its modulus by distributing the weights according to derivatives as follows
\[\begin{split}&\int\bar{\kappa}^{\frac{n-|\boldsymbol{\gamma}|- 1}{2}}\partial^{\boldsymbol{\beta}}u\cdot(\bar{\kappa}^{\frac{|\boldsymbol{ \gamma}|+1}{2}}\partial^{\boldsymbol{\gamma}}\nabla\bar{\rho})((\partial^{ \boldsymbol{\alpha}}\bar{\rho})\bar{\kappa}^{n/2})\leq\|\bar{\kappa}^{\frac{n} {2}}\nabla^{n}\bar{\rho}\|_{L^{2}}\|\bar{\kappa}^{\frac{|\boldsymbol{\gamma}| +1}{2}}\nabla^{|\boldsymbol{\gamma}|+1}\bar{\rho}\|_{L^{2}}\left\|\bar{\kappa} ^{\frac{n-|\boldsymbol{\gamma}|-1}{2}}\nabla^{n-|\boldsymbol{\gamma}|}u\right\| _{L^{\infty}}.\end{split} \tag{3.20}\]
For a single summand of the third one we estimate its modulus by
\[\begin{split}&\int\!\partial^{\boldsymbol{\beta}}\bar{A}\nabla \partial^{\boldsymbol{\gamma}}\bar{\rho}\nabla(\bar{\kappa}^{n}\partial^{ \boldsymbol{\alpha}}\bar{\rho})=n\int\partial^{\boldsymbol{\beta}}\bar{A}( \nabla\partial^{\boldsymbol{\gamma}}\bar{\rho})\bar{\kappa}^{n-1}\nabla\bar{ \kappa}\partial^{\boldsymbol{\alpha}}\bar{\rho}+\int\partial^{\boldsymbol{ \beta}}\bar{A}(\nabla\partial^{\boldsymbol{\gamma}}\bar{\rho})\bar{\kappa}^{n} \nabla\partial^{\boldsymbol{\alpha}}\bar{\rho}\leq\\ &\quad n\|\bar{\kappa}^{\frac{n-|\boldsymbol{\gamma}|-2}{2}} \nabla^{n-|\boldsymbol{\gamma}|}\bar{A}\|_{L^{\infty}}\|\bar{\kappa}^{-\frac{ 1}{2}}\nabla\bar{\kappa}\|_{L^{\infty}}\|\bar{\kappa}^{\frac{|\boldsymbol{ \gamma}|+1}{2}}\nabla^{|\boldsymbol{\gamma}|+1}\bar{\rho}\|_{L^{2}}\|\bar{ \kappa}^{\frac{n}{2}}\nabla^{n}\bar{\rho}\|_{L^{2}}\\ &+\|\bar{\kappa}^{\frac{n-|\boldsymbol{\gamma}|-2}{2}}\nabla^{n-| \boldsymbol{\gamma}|}\bar{A}\|_{L^{\infty}}\|\bar{\kappa}^{\frac{|\boldsymbol{ \gamma}|+1}{2}}\nabla^{|\boldsymbol{\gamma}|+1}\bar{\rho}\|_{L^{2}}\|\bar{ \kappa}^{\frac{n+1}{2}}\nabla^{n+1}\bar{\rho}\|_{L^{2}}.\end{split} \tag{3.21}\]
For the fourth and last term of (3.19), we estimate its modulus using the upper bound in (3.13):
\[\int\bar{\kappa}^{n-1}\bar{A}\nabla\bar{\kappa}\partial^{\boldsymbol{\alpha}}\bar{\rho}\nabla\partial^{\boldsymbol{\alpha}}\bar{\rho}\leq 2\int\bar{\kappa}^{n}|\nabla\bar{\kappa}||\partial^{\boldsymbol{\alpha}}\bar{\rho}||\nabla\partial^{\boldsymbol{\alpha}}\bar{\rho}|\leq 2\|\bar{\kappa}^{-\frac{1}{2}}\nabla\bar{\kappa}\|_{L^{\infty}}\|\bar{\kappa}^{\frac{n}{2}}\nabla^{n}\bar{\rho}\|_{L^{2}}\|\bar{\kappa}^{\frac{n+1}{2}}\nabla^{n+1}\bar{\rho}\|_{L^{2}}.\]
Combining these estimates for the four right-hand side terms of (3.19), absorbing the factors \(\|\bar{\kappa}^{\frac{n+1}{2}}\nabla^{n+1}\bar{\rho}\|_{L^{2}}\) into the dissipative part using Young's inequality, and summing over all multi-indices of order \(n\), we obtain
\[\begin{split}\frac{1}{2}\frac{d}{dt}&\|(\bar{\kappa}^{n /2}\nabla^{n}\bar{\rho})(t)\|_{L^{2}}^{2}+\frac{1}{2}\|\bar{\kappa}^{\frac{n+1} {2}}\nabla^{n+1}\bar{\rho}\|_{L^{2}}^{2}\lesssim_{n}\|\bar{\kappa}^{\frac{n}{2 }}\nabla^{n}\bar{\rho}\|_{L^{2}}^{2}\left\|\frac{D_{t}^{u}\bar{\kappa}}{\bar{ \kappa}}\right\|_{L^{\infty}}\\ &+\|\bar{\kappa}^{\frac{n}{2}}\nabla^{n}\bar{\rho}\|_{L^{2}}\sum_{j=0 }^{n-1}\|\bar{\kappa}^{\frac{j+1}{2}}\nabla^{j+1}\bar{\rho}\|_{L^{2}}\left\| \bar{\kappa}^{\frac{n-j-1}{2}}\nabla^{n-j}u\right\|_{L^{\infty}}\\ &+\|\bar{\kappa}^{-\frac{1}{2}}\nabla\bar{\kappa}\|_{L^{\infty}}\| \bar{\kappa}^{\frac{n}{2}}\nabla^{n}\bar{\rho}\|_{L^{2}}\sum_{j=0}^{n-1}\|\bar{ \kappa}^{\frac{n-j-2}{2}}\nabla^{n-j}\bar{A}\|_{L^{\infty}}\|\bar{\kappa}^{ \frac{j+1}{2}}\nabla^{j+1}\bar{\rho}\|_{L^{2}}\\ &+\sum_{j=0}^{n-1}\|\bar{\kappa}^{\frac{n-j-2}{2}}\nabla^{n-j}\bar{A} \|_{L^{\infty}}^{2}\|\bar{\kappa}^{\frac{j+1}{2}}\nabla^{j+1}\bar{\rho}\|_{L^{2} }^{2}+\|\bar{\kappa}^{-\frac{1}{2}}\nabla\bar{\kappa}\|_{L^{\infty}}^{2}\|\bar{ \kappa}^{\frac{n}{2}}\nabla^{n}\bar{\rho}\|_{L^{2}}^{2}.\end{split} \tag{3.22}\]
_Step 2: Plugging in scales assumptions._
Use the assumptions (3.14), (3.15), (3.16) on the right-hand side of the above estimate to obtain
\[\frac{d}{dt}\|(\bar{\kappa}^{n/2}\nabla^{n}\bar{\rho})(t)\|_{L^{2}}^{2}+\|\bar{ \kappa}^{\frac{n+1}{2}}\nabla^{n+1}\bar{\rho}\|_{L^{2}}^{2}\lesssim_{n}\sum_{j=0 }^{n-1}\|\bar{\kappa}^{\frac{j+1}{2}}\nabla^{j+1}\bar{\rho}\|_{L^{2}}^{2}( \tau^{-1})^{n-j} \tag{3.23}\]
which, after integrating in time and writing \(P(i)=\int_{0}^{T}\|\bar{\kappa}^{\frac{i}{2}}\nabla^{i}\bar{\rho}\|_{L^{2}}^{2}\,dt\), yields for any \(n\geq 1\)
\[\sup_{t\leq T}\|(\bar{\kappa}^{\frac{n}{2}}\nabla^{n}\bar{\rho})(t)\|_{L^{2}}^{2} +P(n+1)\lesssim_{n}\|(\bar{\kappa}^{\frac{n}{2}}\nabla^{n}\bar{\rho})_{in}\|_{L ^{2}}^{2}+\sum_{j=0}^{n-1}P(j+1)(\tau^{-1})^{n-j}. \tag{3.23}\]
_Step 3: Iterations._
Take (3.23) with \(n=1\). Observing that \(P(1)=\bar{D}\) we have
\[\sup_{t\leq T}\|(\bar{\kappa}^{1/2}\nabla\bar{\rho})(t)\|_{L^{2}}^{2}+\int_{0} ^{T}\|\bar{\kappa}\nabla^{2}\bar{\rho}\|_{L^{2}}^{2}\,dt\lesssim\tau^{-1} \bar{D}+\|(\bar{\kappa}^{1/2}\nabla\bar{\rho})_{in}\|_{L^{2}}^{2} \tag{3.24}\]
i.e. (3.17) with \(n=1\); this allows us to start the induction. Assume (3.17) holds for any \(j\leq n\), in particular
\[P(j+1)\lesssim_{n}(\tau^{-1})^{j}\bar{D}+\sum_{i=1}^{j}(\tau^{-1})^{j-i}\|( \bar{\kappa}^{i/2}\nabla^{i}\bar{\rho})_{in}\|_{L^{2}}^{2}.\]
For \(n+1\) we have via (3.23)
\[P(n+2) \lesssim_{n}\|(\bar{\kappa}^{\frac{n+1}{2}}\nabla^{n+1}\bar{\rho })_{in}\|_{L^{2}}^{2}+\sum_{j=0}^{n}P(j+1)(\tau^{-1})^{n+1-j}\] \[\lesssim_{n}\|(\bar{\kappa}^{\frac{n+1}{2}}\nabla^{n+1}\bar{\rho })_{in}\|_{L^{2}}^{2}+\sum_{j=0}^{n}\left((\tau^{-1})^{j}\bar{D}+\sum_{i=1}^{ j}(\tau^{-1})^{j-i}\|(\bar{\kappa}^{i/2}\nabla^{i}\bar{\rho})_{in}\|_{L^{2}}^{2} \right)(\tau^{-1})^{n+1-j},\]
which gives (3.17) for \(n+1\).
_Step 4: Transport estimate._
Apply \(\partial^{\boldsymbol{\alpha}}\), where \(|\boldsymbol{\alpha}|=n\), to the equation (3.6)
\[D_{t}\partial^{\boldsymbol{\alpha}}\bar{\rho}=-\sum_{\boldsymbol{\beta}+ \boldsymbol{\gamma}=\boldsymbol{\alpha},\boldsymbol{\beta}>0}c_{\boldsymbol{ \beta}}\partial^{\boldsymbol{\beta}}u\cdot\partial^{\boldsymbol{\gamma}}\nabla \bar{\rho}+\operatorname{div}\sum_{\boldsymbol{\beta}+\boldsymbol{\gamma}= \boldsymbol{\alpha}}c_{\boldsymbol{\beta}}\partial^{\boldsymbol{\beta}}\bar{A} \partial^{\boldsymbol{\gamma}}\nabla\bar{\rho}.\]
Multiply both sides by \(\bar{\kappa}^{\frac{n}{2}}\), distribute the weights, and take space-time \(L^{2}\) norms on the time interval \([0,T]\), denoted by \(L^{2}_{xt}\). This gives
\[\|\bar{\kappa}^{\frac{n}{2}}D_{t}\partial^{\boldsymbol{\alpha}} \bar{\rho}\|_{L^{2}_{xt}} \lesssim_{n}\sum_{i+j=n,i>0}\|\bar{\kappa}^{\frac{i-1}{2}}\nabla^{ i}u\|_{L^{\infty}}\|\bar{\kappa}^{\frac{j+1}{2}}\nabla^{j+1}\bar{\rho}\|_{L^{2}_{xt}}\] \[+\sum_{i+j=n}\|\bar{\kappa}^{\frac{i-1}{2}}\nabla^{i+1}\bar{A}\|_ {L^{\infty}}\|\bar{\kappa}^{\frac{j+1}{2}}\nabla^{j+1}\bar{\rho}\|_{L^{2}_{xt} }+\sum_{i+j=n}\|\bar{\kappa}^{\frac{i-2}{2}}\nabla^{i}\bar{A}\|_{L^{\infty}}\| \bar{\kappa}^{\frac{j+2}{2}}\nabla^{j+2}\bar{\rho}\|_{L^{2}_{xt}}.\]
Use the assumptions (3.15), (3.16)
\[\|\bar{\kappa}^{\frac{n}{2}}D_{t}\partial^{\boldsymbol{\alpha}}\bar{\rho}\|_{ L^{2}_{xt}}\lesssim_{n}\tau^{-1}\sum_{j=1}^{n+2}(\tau^{-1})^{\frac{n-j}{2}}\| \bar{\kappa}^{\frac{j}{2}}\nabla^{j}\bar{\rho}\|_{L^{2}_{xt}}.\]
Squaring both sides above yields
\[\int_{0}^{T}\|\bar{\kappa}^{\frac{n}{2}}D_{t}\partial^{\boldsymbol{\alpha}} \bar{\rho}\|_{L^{2}}^{2}\lesssim_{n}(\tau^{-1})^{2}\sum_{j=1}^{n+2}(\tau^{-1})^ {n-j}P(j).\]
Estimate (3.18) now follows by using (3.17) to control \(P(j)\).
## 4. Spatial homogenisation
### Setup
In this section, we use \(\xi\) to denote the variable in the cell. For a function \(f:\mathbb{T}^{3}\to\mathbb{R}\) defined in the cell, i.e. taking the variable \(\xi\) as its argument, we use \(\langle f\rangle\) to denote its integral over \(\mathbb{T}^{3}\). Without further specification, the integration domain is \(\mathbb{T}^{3}\) for the variables \(x,\xi\in\mathbb{T}^{3}\). For a function \(g\) taking arguments \((x,t,\xi)\), we use \(\|g(x,t,\cdot)\|_{L^{\infty}_{\xi}}\) to denote the supremum norm in the \(\xi\) variable; note that \(\|g\|_{L^{\infty}_{\xi}}\) is still a function of \((x,t)\). The usual \(L^{2}\) and \(H^{-1}\) norms are taken in the \(x\) variable. The constants in \(\lesssim\) in this section depend on \(N_{h}\), defined in (4.18).
In this section we consider the following advection-diffusion equation for \(\rho:\mathbb{T}^{3}\times[0,T]\to\mathbb{R}\)
\[\partial_{t}\rho+u\cdot\nabla\rho= \operatorname{div}\big{(}\tilde{A}\nabla\rho\big{)}, \tag{4.1}\] \[\rho|_{t=0}= \rho_{\text{in}},\]
with elliptic tensors \(\tilde{A}:\mathbb{T}^{3}\times[0,T]\to\mathbb{R}^{3\times 3}\) and \(A:\mathbb{T}^{3}\times[0,T]\times\mathbb{T}^{3}\to\mathbb{R}^{3\times 3}\) given by
\[\tilde{A}(x,t):= A\big{(}x,t,\lambda\Phi_{i}(x,t)\big{)}, \tag{4.2}\] \[A(x,t,\xi):= \sum_{i}\tilde{\eta}_{i}(x,t)\nabla\Phi_{i}^{-1}(x,t)\Bigg{(} \kappa\mathrm{Id}+\frac{\eta_{i}(x,t)\sigma^{1/2}(t)}{\lambda}\sum_{\vec{k}}a_ {\vec{k}}\big{(}\tilde{R}_{i}(x,t)\big{)}H_{\vec{k}}(\xi)\Bigg{)}\nabla\Phi_{i }^{-T}(x,t), \tag{4.3}\]
where, compared to the setting in Step 2 of the proof of Proposition 2.4, we drop the index \(q\) for simplicity, so that
\[\kappa= \kappa_{q+1}, \ell= \ell_{q}, \tag{4.4}\] \[\lambda= \lambda_{q+1}, u= \bar{u}_{q},\] \[\bar{\kappa}= \tilde{\kappa}_{q}, \sigma= \sigma_{q},\] \[\tau= \tau_{q}, \tilde{R}_{i}= \tilde{R}_{q,i}.\]
and \(\bar{u}_{q}\), \(\Phi_{i}\), \(\sigma_{q}\), \(\eta_{i}\), \(a_{\vec{k}}\), \(\tilde{R}_{q,i}\) are given in Section 2.1.4. The partition of unity \(\tilde{\eta}_{i}\) is defined in (2.40). Let us also clarify a notational convention, taking (4.2) as an example: whenever \((x,t,\lambda\Phi_{i}(x,t))\) appears as the argument of a function, that function involves the family \((\eta_{i})_{i}\) of cutoff functions with pairwise disjoint supports, and the expression is understood as taking the argument \((x,t,\lambda\Phi_{i}(x,t))\) on the support of \(\eta_{i}\), for each \(i\).
Our goal in this section is to show the solution \(\rho\) to (4.1) homogenizes to the solution \(\bar{\rho}:\mathbb{T}^{3}\times[0,T]\to\mathbb{R}\) of the following equation
\[\partial_{t}\bar{\rho}+u\cdot\nabla\bar{\rho}= \operatorname{div}\big{(}\bar{A}\nabla\bar{\rho}\big{)}, \tag{4.5}\] \[\bar{\rho}|_{t=0}= \rho_{\text{in}},\]
with elliptic tensor \(\bar{A}:\mathbb{T}^{3}\times[0,T]\to\mathbb{R}^{3\times 3}\) given by
\[\bar{A}(x,t)= \int A(x,t,\xi)\Big{(}\mathrm{Id}+\sum_{i}\eta_{i}(x,t)\nabla\Phi_{i}^{T}(x,t)\nabla_{\xi}\chi_{i}^{T}(x,t,\xi)\Big{)}\,d\xi, \tag{4.6}\]
and \(\chi_{i},\chi:\mathbb{T}^{3}\times[0,T]\times\mathbb{T}^{3}\to\mathbb{R}^{3}\) given by
\[\chi_{i}(x,t,\xi)= -\frac{\sigma^{1/2}(t)}{\kappa\lambda}\nabla\Phi_{i}^{-1}(x,t) \sum_{\vec{k}}a_{\vec{k}}\big{(}\tilde{R}_{i}(x,t)\big{)}\varphi_{\vec{k}}( \xi)\vec{k}, \tag{4.7}\] \[\chi(x,t,\xi)= \sum_{i}\chi_{i}(x,t,\xi)\eta_{i}(x,t). \tag{4.8}\]
With the choice in (4.4), we collect the following facts:
\[\|\nabla\Phi_{i}-\mathrm{Id}\|_{L^{\infty}}\leq \lambda^{-2\gamma}, \tag{4.9}\] \[\tfrac{1}{2}\bar{\kappa}\leq\bar{A}\leq 2\bar{\kappa},\quad\text{ where } \bar{\kappa}=\kappa\left(1+\sum_{i}\frac{\sigma\eta_{i}^{2}}{\kappa^{2}\lambda^{2}} \right), \tag{4.10}\]
and for \(n=0,1\)
\[\|D_{t}^{n}\nabla_{x}^{m}\chi_{i}(x,t,\cdot)\|_{C_{\xi}^{1}}\lesssim \frac{\sigma^{1/2}}{\kappa\lambda}\tau^{-n}\ell^{-m}, \tag{4.11}\] \[\|D_{t}^{n}\nabla_{x}^{m}\sigma\|_{L^{\infty}}\lesssim \sigma\tau^{-n}\ell^{-m}, \tag{4.12}\] \[\|D_{t}^{n}\nabla_{x}^{m}\eta_{i}\|_{L^{\infty}}+\|D_{t}^{n}\nabla_{x}^{m}\tilde{\eta}_{i}\|_{L^{\infty}}+\|D_{t}^{n}\nabla_{x}^{m}\nabla\Phi_{i}\|_{L^{\infty}}+\|D_{t}^{n}\nabla_{x}^{m}(a_{\vec{k}}\circ\tilde{R}_{i})\|_{L^{\infty}}\lesssim\tau^{-n}\ell^{-m}. \tag{4.13}\]
Notice that from (2.30) and (2.31), and with \(\gamma>0\) given in (2.34), we have
\[\tau\|\bar{\kappa}\|_{L^{\infty}}\ell^{-2}\leq 1, \tag{4.14}\] \[\tau\|\bar{\kappa}\|_{L^{\infty}}\lambda^{2}\geq \lambda^{\frac{b-1}{b+1}},\] (4.15) \[\tau\|\bar{\kappa}\|_{L^{\infty}}\lambda^{\frac{2}{b}}\leq \lambda^{-2\gamma},\] (4.16) \[\|\bar{\kappa}\|_{L^{\infty}}^{-1}\tau^{-1}\lambda^{-2}\leq \kappa^{-1}\tau^{-1}\lambda^{-2}< \lambda^{-2\gamma}. \tag{4.17}\]
Furthermore, we need to choose \(N_{h}\geq 3\) sufficiently large such that
\[N_{h}\geq(b+1)\left(\frac{2}{b-1}+\frac{\theta}{b}\right), \tag{4.18}\]
then (4.18) and (4.15) give the following relation
\[\lambda^{2}\|\bar{\kappa}\|_{L^{\infty}}\kappa^{-1}\lesssim\left(\tau\|\bar{ \kappa}\|_{L^{\infty}}\lambda^{2}\right)^{N_{h}}. \tag{4.19}\]
### Quantitative estimates
The quantitative version of the above homogenisation process is given as follows.
**Proposition 4.1**.: _Given (4.4), the parameter setting in Proposition 2.1 and Proposition 2.4, let \(\rho\) be the solution to (4.1)-(4.3). Let \(\bar{\rho}\) be the solution to (4.5)-(4.7). Choose \(N_{h}\) such that (4.18) holds. Define \(\tilde{\rho}:\mathbb{T}^{3}\times[0,T]\to\mathbb{R}\) such that_
\[\rho(x,t)=\bar{\rho}(x,t)+\frac{1}{\lambda}\chi\big{(}x,t,\lambda\Phi_{i}(x,t )\big{)}\cdot\nabla\bar{\rho}(x,t)+\tilde{\rho}(x,t). \tag{4.20}\]
_Then_
\[\|\tilde{\rho}(\cdot,T)\|_{L^{2}}^{2}+\kappa\int_{0}^{T}\|\nabla\tilde{\rho}\|_{L^{2}}^{2}dt\lesssim\frac{1}{\lambda^{2}\kappa\tau}\mathcal{D}_{N_{h}}, \tag{4.21}\] \[\mathcal{D}_{l}:=\int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{1}}{{2}}}\nabla\bar{\rho}\|_{L^{2}}^{2}dt+\sum_{i=1}^{l}\tau^{i}\|(\bar{\kappa}^{\nicefrac{{i}}{{2}}}\nabla^{i}\bar{\rho})_{in}\|_{L^{2}}^{2}\quad\text{ for }l\in\mathbb{N}.\]
We also have the following corollaries
**Corollary 4.2**.: _Let \(\rho,\bar{\rho}\) and \(\mathcal{D}_{l}\) be as in Proposition 4.1, then_
\[\|\rho(t)-\bar{\rho}(t)\|_{L^{2}}^{2}+ \kappa\int_{0}^{T}\|\nabla\rho-\nabla\bar{\rho}\|_{L^{2}}^{2}dt \lesssim\frac{1}{\lambda^{2}\kappa\tau}\mathcal{D}_{N_{h}}. \tag{4.22}\]
**Corollary 4.3**.: _Let \(\rho,\bar{\rho}\) and \(\mathcal{D}_{l}\) be as in Proposition 4.1, then_
\[\Big{|}\|\rho(T)\|_{L^{2}}^{2}-\|\bar{\rho}(T)\|_{L^{2}}^{2}\Big{|}\lesssim\Big{(} \frac{1}{\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{\nicefrac{{1}}{{2}}}}+\lambda ^{-2\gamma}\Big{)}\mathcal{D}_{N_{h}}. \tag{4.23}\]
**Remark 4.4**.: _If \(\rho_{in}\) satisfies that, for any \(i\in[0,N_{h}]\)_
\[\|\nabla^{i}\rho_{in}\|_{L^{2}}^{2}\leq\lambda^{\frac{2i}{b}}\kappa\int_{0}^{ T}\|\nabla\rho\|_{L^{2}}^{2}dt,\]
_then for \(\mathcal{D}_{N_{h}}\), we have_
\[\mathcal{D}_{N_{h}}\leq \int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{1}}{{2}}}\nabla\bar{\rho}\|_{L^{2}}^{2}dt+\sum_{i=1}^{N_{h}}\tau^{i}\|\bar{\kappa}\|_{L^{\infty}}^{i}\lambda^{\frac{2i}{b}}\kappa\int_{0}^{T}\|\nabla\rho\|_{L^{2}}^{2}dt, \tag{4.24}\] \[\leq_{(4.16)}\int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{1}}{{2}}}\nabla\bar{\rho}\|_{L^{2}}^{2}dt+\kappa\int_{0}^{T}\|\nabla\rho\|_{L^{2}}^{2}dt.\]
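_The second inequality holds because the \(i\)-sum is geometric: by (4.16),_
\[\sum_{i=1}^{N_{h}}\big{(}\tau\|\bar{\kappa}\|_{L^{\infty}}\lambda^{\nicefrac{{2}}{{b}}}\big{)}^{i}\leq\sum_{i=1}^{N_{h}}\lambda^{-2\gamma i}\lesssim 1.\]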
_Therefore, combining (4.23), (4.24) and (4.17), we have_
\[\Big{|}\|\rho(T)\|_{L^{2}}^{2}-\|\bar{\rho}(T)\|_{L^{2}}^{2}\Big{|}\lesssim \lambda^{-\gamma}\min\left\{\int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{1}}{{2}}} \nabla\bar{\rho}\|_{L^{2}}^{2}dt,\kappa\int_{0}^{T}\|\nabla\rho\|_{L^{2}}^{2} dt\right\}. \tag{4.25}\]
The proofs of Proposition 4.1, Corollary 4.2 and Corollary 4.3 are given at the end of this section. The \(\tilde{\rho}\) in Proposition 4.1 is the homogenisation error term. To estimate this error term, we show in the following lemma that \(\tilde{\rho}\) satisfies an equation.
**Lemma 4.5**.: _Let \(\tilde{\rho}\) be as in (4.20), then \(\tilde{\rho}\) satisfies_
\[\partial_{t}\tilde{\rho} +u\cdot\nabla\tilde{\rho}-\operatorname{div}\big{(}\tilde{A}\nabla\tilde{\rho}\big{)}=\operatorname{div}\big{(}\tilde{B}\nabla\bar{\rho}\big{)} \tag{4.26}\] \[+\frac{1}{\lambda}\Big{(}\operatorname{div}\big{(}\tilde{A}\nabla^{2}\bar{\rho}\chi\big{)}+\operatorname{div}\big{(}\tilde{A}\nabla_{x}\chi^{T}\nabla\bar{\rho}\big{)}\Big{)}\] \[-\frac{1}{\lambda}\Big{(}\chi\cdot D_{t}\nabla\bar{\rho}+\nabla\bar{\rho}\cdot D_{t}\chi\Big{)}\]
_with the matrices \(\tilde{B}:\mathbb{T}^{3}\times[0,T]\to\mathbb{R}^{3\times 3}\) and \(B:\mathbb{T}^{3}\times[0,T]\times\mathbb{T}^{3}\to\mathbb{R}^{3\times 3}\) given by_
\[\tilde{B}(x,t)= B\big{(}x,t,\lambda\Phi_{i}(x,t)\big{)}, \tag{4.27}\] \[B(x,t,\xi)= A\Big{(}\mathrm{Id}+\sum_{i}\eta_{i}\nabla\Phi_{i}^{T}\nabla_{ \xi}\chi_{i}^{T}\Big{)}-\Big{\langle}A\Big{(}\mathrm{Id}+\sum_{i}\eta_{i} \nabla\Phi_{i}^{T}\nabla_{\xi}\chi_{i}^{T}\Big{)}\Big{\rangle}, \tag{4.28}\]
_and \(D_{t}\chi\) denotes the transport derivative in \(x\), i.e._
\[D_{t}\chi=\big{(}\partial_{t}\chi+(u\cdot\nabla_{x})\chi\big{)}(x,t,\lambda\Phi _{i}).\]
Proof of Lemma 4.5.: From the ansatz (4.20), omitting the argument \(\big{(}x,t,\lambda\Phi_{i}(x,t)\big{)}\), direct computations give
\[\nabla\tilde{\rho} =\nabla\rho-\Big{(}\mathrm{Id}+\sum_{i}\eta_{i}\nabla\Phi_{i}^{T} \nabla_{\xi}\chi_{i}^{T}\Big{)}\nabla\bar{\rho}-\frac{1}{\lambda}\sum_{i} \Big{(}\nabla^{2}\bar{\rho}\chi_{i}\eta_{i}+\nabla_{x}(\chi_{i}\eta_{i})^{T} \nabla\bar{\rho}\Big{)}, \tag{4.29}\] \[\partial_{t}\tilde{\rho} =\partial_{t}\rho-\partial_{t}\bar{\rho}-\sum_{i}\eta_{i}\partial_ {t}\Phi_{i}^{T}\nabla_{\xi}\chi_{i}^{T}\nabla\bar{\rho}-\frac{1}{\lambda}\sum_ {i}\Big{(}\partial_{t}\nabla\bar{\rho}\cdot\chi_{i}\eta_{i}+\nabla\bar{\rho} \cdot\partial_{t}(\chi_{i}\eta_{i})\Big{)}. \tag{4.30}\]
Using \(\partial_{t}\Phi_{i}+(u\cdot\nabla)\Phi_{i}=0\) and omitting the argument \(\big{(}x,t,\lambda\Phi_{i}(x,t)\big{)}\), we obtain
\[\partial_{t}\tilde{\rho}+u\cdot\nabla\tilde{\rho}-\operatorname{div}\tilde{A}\nabla\tilde{\rho}= \operatorname{div}\left(\left[A\Big{(}\mathrm{Id}+\sum_{i}\eta_{i}\nabla\Phi_{i}^{T}\nabla_{\xi}\chi_{i}^{T}\Big{)}-\Big{\langle}A\Big{(}\mathrm{Id}+\sum_{i}\eta_{i}\nabla\Phi_{i}^{T}\nabla_{\xi}\chi_{i}^{T}\Big{)}\Big{\rangle}\right]\nabla\bar{\rho}\right)\] \[+\frac{1}{\lambda}\sum_{i}\Big{(}\operatorname{div}\big{(}\tilde{A}\nabla^{2}\bar{\rho}\chi_{i}\eta_{i}\big{)}+\operatorname{div}\big{(}\tilde{A}\nabla_{x}(\chi_{i}\eta_{i})^{T}\nabla\bar{\rho}\big{)}\Big{)}\] \[-\frac{1}{\lambda}\sum_{i}\Big{(}\partial_{t}\nabla\bar{\rho}\chi_{i}\eta_{i}+\partial_{t}(\chi_{i}\eta_{i})\nabla\bar{\rho}\Big{)}-\frac{1}{\lambda}\sum_{i}\Big{(}u\nabla^{2}\bar{\rho}\chi_{i}\eta_{i}+u\nabla_{x}(\chi_{i}\eta_{i})^{T}\nabla\bar{\rho}\Big{)}\] \[= \operatorname{div}\big{(}B\nabla\bar{\rho}\big{)}+\frac{1}{\lambda}\sum_{i}\Big{(}\operatorname{div}\big{(}\tilde{A}\nabla^{2}\bar{\rho}\chi_{i}\eta_{i}\big{)}+\operatorname{div}\big{(}\tilde{A}\nabla_{x}(\chi_{i}\eta_{i})^{T}\nabla\bar{\rho}\big{)}\Big{)}\] \[-\frac{1}{\lambda}\sum_{i}\Big{(}\eta_{i}\chi_{i}\cdot D_{t}\nabla\bar{\rho}+\nabla\bar{\rho}\cdot D_{t}(\chi_{i}\eta_{i})\Big{)}.\]
**Lemma 4.6**.: _Let \(\tilde{\rho}\) be as in (4.20). Then for any \(N\in\mathbb{N}^{+}\), \(\tilde{\rho}\) satisfies_
\[\partial_{t}\tilde{\rho}+u\cdot\nabla\tilde{\rho}-\operatorname{div}\big{(} \tilde{A}\nabla\tilde{\rho}\big{)}=\frac{1}{\lambda}\Big{(}E_{1}+E_{2}+E_{3}+ E_{4}+\sum_{l=1}^{N}F_{l}+G_{N}\Big{)} \tag{4.31}\]
_with_
\[E_{1}= -\sum_{i,j}\operatorname{div}\big{(}\nabla\partial_{j}\bar{\rho}\times(\nabla\Phi_{i}^{T}\tilde{c}_{j}^{(i)})\big{)}\] \[E_{2}= -\sum_{i,j,l}\operatorname{div}\big{(}(\nabla_{x}\tilde{c}_{jl}^{(i)}\times\nabla\Phi_{i,l})\partial_{j}\bar{\rho}\big{)}\] \[E_{3}= \operatorname{div}\big{(}\tilde{A}\nabla^{2}\bar{\rho}\chi\big{)}\] \[E_{4}= \operatorname{div}\big{(}\tilde{A}\nabla_{x}\chi^{T}\nabla\bar{\rho}\big{)}\]
_and_
\[F_{1}= 0, \tag{4.32}\] \[F_{l}= \frac{1}{\lambda^{l-1}}\sum_{i,1\leq|\mathbf{\alpha}|\leq l-1} \operatorname{div}\Big{(}f_{0,\mathbf{\alpha}}^{(l-1)}(x,t,\lambda\Phi_{i})D_{t} \partial^{\mathbf{\alpha}}\bar{\rho}+f_{1,\mathbf{\alpha}}^{(l-1)}(x,t,\lambda\Phi_{i}) \partial^{\mathbf{\alpha}}\bar{\rho}\Big{)},\text{ for }l\geq 2,\] (4.33) \[G_{l}= \frac{1}{\lambda^{l-1}}\sum_{i,1\leq|\mathbf{\alpha}|\leq l}\Big{(}g_ {0,\mathbf{\alpha}}^{(l)}(x,t,\lambda\Phi_{i})D_{t}\partial^{\mathbf{\alpha}}\bar{\rho }+g_{1,\mathbf{\alpha}}^{(l)}(x,t,\lambda\Phi_{i})\partial^{\mathbf{\alpha}}\bar{\rho} \Big{)} \tag{4.34}\]
_where \(\tilde{c}_{j}^{(i)}(x,t)=c_{j}^{(i)}(x,t,\lambda\Phi_{i}(x,t))\) and the functions \(c_{j}^{(i)}\), \(f_{0,\mathbf{\alpha}}^{(l)}\), \(f_{1,\mathbf{\alpha}}^{(l)}\), \(g_{0,\mathbf{\alpha}}^{(l)}\) and \(g_{1,\mathbf{\alpha}}^{(l)}\), taking arguments \((x,t,\xi)\), satisfy the following estimates, for \(n\in\{0,1\}\),_
\[\langle f_{0,\mathbf{\alpha}}^{(l)}\rangle=\langle f_{1,\mathbf{\alpha}}^{(l)}\rangle= \langle g_{0,\mathbf{\alpha}}^{(l)}\rangle=\langle g_{1,\mathbf{\alpha}}^{(l)} \rangle=0, \tag{4.35}\]
\[\|D_{t}^{n}\nabla_{x}^{m}c_{j}^{(i)}\|_{L_{\xi}^{\infty}}\lesssim\kappa\Big{(}1+ \frac{\sigma^{1/2}\eta_{i}}{\kappa\lambda}\Big{)}\frac{\sigma^{1/2}\eta_{i}}{ \kappa\lambda}\tau^{-n}\ell^{-m}, \tag{4.36}\]
\[\big{\|}D_{t}^{n}\nabla_{x}^{m}f_{0,\mathbf{\alpha}}^{(l)}\big{\|}_{L_{\xi}^{\infty} }+\big{\|}D_{t}^{n}\nabla_{x}^{m}g_{0,\mathbf{\alpha}}^{(l)}\big{\|}_{L_{\xi}^{ \infty}}\lesssim\frac{\sigma^{1/2}\eta_{i}}{\kappa\lambda}\tau^{-n}\ell^{-(l-| \mathbf{\alpha}|+m)}, \tag{4.37}\]
\[\big{\|}D_{t}^{n}\nabla_{x}^{m}f_{1,\mathbf{\alpha}}^{(l)}\big{\|}_{L_{\xi}^{\infty} }+\big{\|}D_{t}^{n}\nabla_{x}^{m}g_{1,\mathbf{\alpha}}^{(l)}\big{\|}_{L_{\xi}^{ \infty}}\lesssim\frac{\sigma^{1/2}\eta_{i}}{\kappa\lambda}\tau^{-(n+1)}\ell^{-( l-|\mathbf{\alpha}|+m)}. \tag{4.38}\]
Proof of Lemma 4.6.: Define \(\chi_{ij}=\chi_{i}\cdot e_{j}\). Let \(b_{j}=Be_{j}\) and \(\tilde{b}_{j}=\tilde{B}e_{j}\). Direct computations show
\[\begin{split}& b_{j}(x,t,\xi)=\sum_{i}\eta_{i}b_{j}^{(i)}\\ =&\sum_{i}\eta_{i}\nabla\Phi_{i}^{-1}\Bigg{[}\kappa \nabla_{\xi}\chi_{ij}+\frac{\sigma^{1/2}}{\lambda}\sum_{\vec{k}}a_{\vec{k}}( \tilde{R}_{i})\Big{(}H_{\vec{k}}\nabla\Phi_{i}^{-T}e_{j}+\eta_{i}H_{\vec{k}} \nabla_{\xi}\chi_{ij}-\eta_{i}\langle H_{\vec{k}}\nabla_{\xi}\chi_{ij}\rangle \Big{)}\Bigg{]}.\end{split} \tag{4.39}\]
We claim \(\operatorname{div}_{\xi}(\nabla\Phi_{i}b_{j}^{(i)})=0\). Indeed, notice that
\[\begin{split}\operatorname{div}_{\xi}\big{(}H_{\vec{k}}\nabla_{\xi}\chi_{ij}\big{)}=&\big{(}W_{\vec{k}}\cdot\nabla_{\xi}\big{)}\chi_{ij}=-\frac{\sigma^{1/2}}{\kappa\lambda}\nabla\Phi_{i}^{-1}\sum_{\vec{k}}a_{\vec{k}}\big{(}\tilde{R}_{i}\big{)}\big{(}\psi_{\vec{k}}\vec{k}\cdot\nabla_{\xi}\big{)}\varphi_{\vec{k}}\vec{k}\cdot e_{j}=0,\\ \operatorname{div}_{\xi}\big{(}H_{\vec{k}}\nabla\Phi_{i}^{-T}e_{j}\big{)}=&W_{\vec{k}}\cdot\big{(}\nabla\Phi_{i}^{-T}e_{j}\big{)}=\psi_{\vec{k}}\big{(}\nabla\Phi_{i}^{-1}\vec{k}\big{)}\cdot e_{j}.\end{split} \tag{4.40}\]
Then plugging (4.40) and (4.7) into (4.39) gives \(\operatorname{div}_{\xi}(\nabla\Phi_{i}b_{j}^{(i)})=0\).
Also notice that \(\langle\nabla\Phi_{i}b_{j}^{(i)}\rangle=0\), hence we can find a vector potential \(c_{j}^{(i)}=c_{j}^{(i)}(x,t,\xi)\) so that \(\eta_{i}\nabla\Phi_{i}b_{j}^{(i)}=\nabla_{\xi}\times c_{j}^{(i)}\). Using the fact \(\det\nabla\Phi_{i}^{T}=1\) and omitting the argument \(\big{(}x,t,\lambda\Phi_{i}(x,t)\big{)}\), we have (see Footnote 3)
Footnote 3: Here, we use the formula \((Av_{1})\times(Av_{2})=A^{-T}(v_{1}\times v_{2})\det A\) for \(v_{1},v_{2}\in\mathbb{R}^{3}\) and \(A\in\mathbb{R}^{3\times 3}\) invertible.
\[\frac{1}{\lambda}\nabla\times\big{(}\nabla\Phi_{i}^{T}c_{j}^{(i)}\big{)} =\frac{1}{\lambda}\nabla\times\big{(}c_{jl}^{(i)}\nabla\Phi_{i,l}\big{)}=\frac{1}{\lambda}\nabla c_{jl}^{(i)}\times\nabla\Phi_{i,l}\quad\text{(product rule, }\nabla\times\nabla\Phi_{i,l}=0\text{)}\] \[=\frac{1}{\lambda}\nabla_{x}c_{jl}^{(i)}\times\nabla\Phi_{i,l}+\big{(}\nabla\Phi_{i}^{T}\nabla_{\xi}c_{jl}^{(i)}\big{)}\times\big{(}\nabla\Phi_{i}^{T}e_{l}\big{)}\quad\text{(chain rule)}\] \[=\frac{1}{\lambda}\nabla_{x}c_{jl}^{(i)}\times\nabla\Phi_{i,l}+\nabla\Phi_{i}^{-1}\big{(}\nabla_{\xi}\times c_{j}^{(i)}\big{)}\quad\text{(linear transform for }\nabla\times\text{)}.\]
Therefore, omitting the argument \(\big{(}x,t,\lambda\Phi_{i}(x,t)\big{)}\), we have
\[b_{j}= \sum_{i}\nabla\Phi_{i}^{-1}\nabla_{\xi}\times c_{j}^{(i)}=\sum_{i }\Big{(}\frac{1}{\lambda}\nabla\times(\nabla\Phi_{i}^{T}c_{j}^{(i)})-\frac{1} {\lambda}\nabla_{x}c_{jl}^{(i)}\times\nabla\Phi_{i,l}\Big{)},\] \[b_{j}\partial_{j}\bar{\rho}= \frac{1}{\lambda}\nabla\times\big{(}\partial_{j}\bar{\rho}\nabla \Phi_{i}^{T}c_{j}^{(i)}\big{)}-\frac{1}{\lambda}\nabla\partial_{j}\bar{\rho} \times\big{(}\nabla\Phi_{i}^{T}c_{j}^{(i)}\big{)}-\frac{1}{\lambda}\big{(} \nabla_{x}c_{jl}^{(i)}\times\nabla\Phi_{i,l}\big{)}\partial_{j}\bar{\rho}.\]
Let \(\tilde{c}_{j}^{(i)}(x,t)=c_{j}^{(i)}\big{(}x,t,\lambda\Phi_{i}(x,t)\big{)}\). The estimate (4.36) follows from (4.11) by Schauder estimates. This gives \(E_{1}\) and \(E_{2}\); \(E_{3}\) and \(E_{4}\) come directly from the second line of (4.26).
Next, by induction, we show that the last line of (4.26) gives \(\sum_{l=1}^{N}F_{l}+G_{N}\). When \(l=1\), \(|\boldsymbol{\alpha}|=1\), let \(g_{0,\boldsymbol{\alpha}}^{(1)}\) be given by \(\chi\) and \(g_{1,\boldsymbol{\alpha}}^{(1)}\) by \(D_{t}\chi\), componentwise, and \(F_{1}=0\). Using (4.11), we have (4.35) and the estimates (4.37)-(4.38) for \(l=1\) by Schauder estimates. Assume (4.33), (4.34), (4.37) and (4.38) are true for \(l\). Using the chain rule, for any function \(h(x,t,\xi):\mathbb{T}^{3}\times[0,T]\times\mathbb{T}^{3}\to\mathbb{R}^{3}\), we have
\[\frac{1}{\lambda}\operatorname{div}\bigl{(}\nabla\Phi_{i}^{-1}h(x,t,\lambda \Phi_{i})\bigr{)}=(\operatorname{div}_{\xi}h)(x,t,\lambda\Phi_{i})+\frac{1}{ \lambda}\operatorname{div}_{x}(\nabla\Phi_{i}^{-1}h)(x,t,\lambda\Phi_{i}).\]
Let \(h_{0}=\operatorname{div}_{\xi}^{-1}g_{0,\boldsymbol{\alpha}}^{(l)}\) and \(h_{1}=\operatorname{div}_{\xi}^{-1}g_{1,\boldsymbol{\alpha}}^{(l)}\), then \(\langle h_{0}\rangle=\langle h_{1}\rangle=0\). We can deduce (dropping the arguments \((x,t,\lambda\Phi_{i})\))
\[\begin{split}\frac{1}{\lambda}\operatorname{div}\big{(}\nabla \Phi_{i}^{-1}h_{0}D_{t}\partial^{\boldsymbol{\alpha}}\bar{\rho}+& \nabla\Phi_{i}^{-1}h_{1}\partial^{\boldsymbol{\alpha}}\bar{\rho} \big{)}=g_{0,\boldsymbol{\alpha}}^{(l)}D_{t}\partial^{\boldsymbol{\alpha}} \bar{\rho}+g_{1,\boldsymbol{\alpha}}^{(l)}\partial^{\boldsymbol{\alpha}}\bar{ \rho}\\ &+\frac{1}{\lambda}\operatorname{div}_{x}\big{(}\nabla\Phi_{i}^{-1 }h_{0}\big{)}D_{t}\partial^{\boldsymbol{\alpha}}\bar{\rho}+\frac{1}{\lambda} \operatorname{div}_{x}\big{(}\nabla\Phi_{i}^{-1}h_{1}\big{)}\partial^{ \boldsymbol{\alpha}}\bar{\rho}\\ &+\frac{1}{\lambda}\nabla\Phi_{i}^{-1}h_{0}\cdot D_{t}\partial^{ \boldsymbol{\alpha}}\nabla\bar{\rho}+\frac{1}{\lambda}\nabla\Phi_{i}^{-1}h_{1 }\cdot\partial^{\boldsymbol{\alpha}}\nabla\bar{\rho}.\end{split} \tag{4.41}\]
Now let \(f_{0,\boldsymbol{\alpha}}^{(l)}=\nabla\Phi_{i}^{-1}h_{0}\) and \(f_{1,\boldsymbol{\alpha}}^{(l)}=\nabla\Phi_{i}^{-1}h_{1}\). The second and the third lines in (4.41) give the formulas for \(g_{0,\boldsymbol{\alpha}}^{(l+1)}\) and \(g_{1,\boldsymbol{\alpha}}^{(l+1)}\). Because \(h_{0}\) and \(h_{1}\) have zero mean in \(\xi\), \(g_{0,\boldsymbol{\alpha}}^{(l+1)}\) and \(g_{1,\boldsymbol{\alpha}}^{(l+1)}\) also have zero mean in \(\xi\). The estimates (4.37) and (4.38) for \(l+1\) follow directly from the form of \(g_{0,\boldsymbol{\alpha}}^{(l+1)}\) and \(g_{1,\boldsymbol{\alpha}}^{(l+1)}\) and from (4.11).
Proof of Proposition 4.1 and Corollary 4.2.: Test (4.31) with \(\tilde{\rho}\). By Hölder's inequality and Young's inequality, we have
\[\begin{split}\|\tilde{\rho}(\cdot,t)\|_{L^{2}}^{2}+\kappa\int_{0}^{T}\|\nabla\tilde{\rho}\|_{L^{2}}^{2}dt\lesssim&\frac{1}{\lambda^{2}\kappa}\bigg{(}\sum_{l=1}^{4}\int_{0}^{T}\|E_{l}\|_{H^{-1}}^{2}dt+\sum_{l=1}^{N}\int_{0}^{T}\|F_{l}\|_{H^{-1}}^{2}dt+\int_{0}^{T}\|G_{N}\|_{L^{2}}^{2}dt\bigg{)}\\ &+\frac{1}{\lambda^{2}}\|(\chi\cdot\nabla\bar{\rho})(\cdot,0)\|_{L^{2}}^{2}\end{split} \tag{4.42}\]
Recall the estimates for \(\bar{\rho}\) from (3.17) and (3.18),
\[\begin{split}\int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{m}}{{2}}} \nabla^{m}\bar{\rho}\|_{L^{2}}^{2}dt\lesssim&\tau^{-(m-1)} \mathcal{D}_{m-1},\\ \int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{m}}{{2}}}D_{t}\nabla^{m} \bar{\rho}\|_{L^{2}}^{2}dt\lesssim&\tau^{-(m+1)}\mathcal{D}_{m+1 }.\end{split} \tag{4.43}\]
Note that (4.36) gives
\[\|\bar{\kappa}^{-1}D_{t}^{n}\nabla_{x}^{m}c_{j}^{(i)}\|_{L_{\xi}^{\infty}} \lesssim\tau^{-n}\ell^{-m}. \tag{4.44}\]
Also (4.11) and (4.3) give
\[\|\bar{\kappa}^{-1}\tilde{A}D_{t}^{n}\nabla_{x}^{m}\chi\|_{L_{\xi}^{\infty}} \lesssim\tau^{-n}\ell^{-m}. \tag{4.45}\]
With (4.44) and (4.45), the estimates for \(E_{1}\) and \(E_{3}\) are the same. The estimates for \(E_{2}\) and \(E_{4}\) are also the same. Here, we estimate \(E_{1}\) and \(E_{2}\) as examples.
\[\begin{split}\int_{0}^{T}\|E_{1}\|_{H^{-1}}^{2}dt\lesssim&\|\bar{\kappa}^{-1}c_{j}^{(i)}\|_{L^{\infty}}^{2}\int_{0}^{T}\|\bar{\kappa}\nabla^{2}\bar{\rho}\|_{L^{2}}^{2}dt\\ \lesssim&\tau^{-1}\mathcal{D}_{1},\\ \int_{0}^{T}\|E_{2}\|_{H^{-1}}^{2}dt\lesssim&\|\bar{\kappa}^{\nicefrac{{1}}{{2}}}\|_{L^{\infty}}^{2}\|\bar{\kappa}^{-1}\nabla_{x}c_{j}^{(i)}\|_{L^{\infty}}^{2}\int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{1}}{{2}}}\nabla\bar{\rho}\|_{L^{2}}^{2}dt\\ \lesssim&\|\bar{\kappa}\|_{L^{\infty}}\ell^{-2}\mathcal{D}_{0}\lesssim_{(4.14)}\tau^{-1}\mathcal{D}_{0}.\end{split}\]
Next, we estimate \(F_{l}\) and \(G_{N}\). In the rest of this proof we repeatedly use the fact \(|\nabla\Phi_{i}|<2\), which contributes a factor ultimately depending only on \(N_{h}\). Note that from (4.37), (4.38) and (4.10) we have
\[\begin{split}\left\|\bar{\kappa}^{-\nicefrac{{1}}{{2}}}f_{0,\boldsymbol{\alpha}}^{(l)}\right\|_{L^{\infty}_{\xi}}&+\left\|\bar{\kappa}^{-\nicefrac{{1}}{{2}}}g_{0,\boldsymbol{\alpha}}^{(l)}\right\|_{L^{\infty}_{\xi}}&\lesssim\kappa^{-\nicefrac{{1}}{{2}}}\ell^{-(l-|\boldsymbol{\alpha}|)},\\ \left\|\bar{\kappa}^{-\nicefrac{{1}}{{2}}}f_{1,\boldsymbol{\alpha}}^{(l)}\right\|_{L^{\infty}_{\xi}}&+\left\|\bar{\kappa}^{-\nicefrac{{1}}{{2}}}g_{1,\boldsymbol{\alpha}}^{(l)}\right\|_{L^{\infty}_{\xi}}&\lesssim\kappa^{-\nicefrac{{1}}{{2}}}\tau^{-1}\ell^{-(l-|\boldsymbol{\alpha}|)}.\end{split} \tag{4.47}\]
Note that \(F_{1}=0\). For \(F_{l}\) with \(l\geq 2\), using (4.43) and (4.47), we have
\[\begin{split}\int_{0}^{T}\|F_{l}\|_{H^{-1}}^{2}dt\lesssim&\frac{1}{\lambda^{2(l-1)}}\sum_{m=1,|\boldsymbol{\alpha}|=m}^{l-1}\|\bar{\kappa}^{-(m-1)}\|_{L^{\infty}}\|\bar{\kappa}^{-\nicefrac{{1}}{{2}}}f_{0,\boldsymbol{\alpha}}^{(l-1)}\|_{L^{\infty}}^{2}\int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{m}}{{2}}}D_{t}\nabla^{m}\bar{\rho}\|_{L^{2}}^{2}dt\\ &+\|\bar{\kappa}^{-(m-1)}\|_{L^{\infty}}\|\bar{\kappa}^{-\nicefrac{{1}}{{2}}}f_{1,\boldsymbol{\alpha}}^{(l-1)}\|_{L^{\infty}}^{2}\int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{m}}{{2}}}\nabla^{m}\bar{\rho}\|_{L^{2}}^{2}dt\\ \lesssim&\frac{1}{\lambda^{2(l-1)}}\sum_{m=1}^{l-1}\kappa^{-(m-1)}|\nabla\Phi_{i}|^{2(l-2)}\kappa^{-1}\ell^{-2(l-1-m)}\tau^{-(m+1)}\mathcal{D}_{m+1}\\ \lesssim&(\lambda\ell)^{-2(l-1)}\sum_{m=1}^{l-1}\big{(}\tau\kappa\ell^{-2}\big{)}^{-m}\tau^{-1}\mathcal{D}_{m+1}\\ \lesssim&\sum_{m=1}^{l-1}\big{(}\tau\kappa\lambda^{2}\big{)}^{-m}\tau^{-1}\mathcal{D}_{m+1}\lesssim_{(4.17)}\tau^{-1}\mathcal{D}_{l},\end{split}\]
Proof of Corollary 4.3.: To simplify the formulas, we introduce the following shorthand notation
\[M(x,t,\xi):= \mathrm{Id}+\sum_{i}\eta_{i}(x,t)\nabla\Phi_{i}^{T}(x,t)\nabla_{\xi }\chi_{i}^{T}\big{(}x,t,\xi\big{)}.\]
We first claim the following estimates, omitting the argument \((x,t,\lambda\Phi_{i})\)
\[\Big{|}\kappa\int_{0}^{T}\int\big{(}M^{T}M-\langle M^{T}M\rangle \big{)}\nabla\bar{\rho}\cdot\nabla\bar{\rho}dxdt\Big{|}\lesssim \Big{(}\frac{1}{\lambda\ell}+\frac{1}{\lambda\kappa^{\nicefrac{{1} {2}}}\tau^{\nicefrac{{1}}{{2}}}}\Big{)}\mathcal{D}_{1}, \tag{4.51}\] \[\frac{\kappa}{\lambda^{2}}\int_{0}^{T}\|\nabla^{2}\bar{\rho} \chi\|_{L^{2}}^{2}+\|\nabla_{x}\chi^{T}\nabla\bar{\rho}\|_{L^{2}}^{2}dt\lesssim \frac{1}{\lambda^{2}\kappa\tau}\mathcal{D}_{1}. \tag{4.52}\]
Direct computations give
\[M^{T}M =\mathrm{Id}+\sum_{i}\eta_{i}(\nabla\Phi_{i}^{T}\nabla_{\xi} \chi_{i}^{T}+\nabla_{\xi}\chi_{i}\nabla\Phi_{i})+\sum_{i}\eta_{i}^{2}\nabla_{ \xi}\chi_{i}\nabla\Phi_{i}\nabla\Phi_{i}^{T}\nabla_{\xi}\chi_{i}^{T},\] \[M^{T}M-\langle M^{T}M\rangle =\sum_{i}\eta_{i}(\nabla\Phi_{i}^{T}\nabla_{\xi}\chi_{i}^{T}+ \nabla_{\xi}\chi_{i}\nabla\Phi_{i})+\] \[+\sum_{i}\eta_{i}^{2}(\nabla_{\xi}\chi_{i}\nabla\Phi_{i}\nabla \Phi_{i}^{T}\nabla_{\xi}\chi_{i}^{T}-\langle\nabla_{\xi}\chi_{i}\nabla\Phi_{i} \nabla\Phi_{i}^{T}\nabla_{\xi}\chi_{i}^{T}\rangle).\]
Furthermore, by using (4.7) and the properties of Mikado flows in Section 2.1.4 we compute
\[\kappa\nabla_{\xi}\chi_{i}\nabla\Phi_{i}\nabla\Phi_{i}^{T}\nabla_ {\xi}\chi_{i}^{T} =\frac{\sigma}{\kappa\lambda^{2}}\sum_{\vec{k}}a_{\vec{k}}^{2} \nabla\Phi_{i}^{-1}(\vec{k}\otimes\nabla_{\xi}\varphi_{\vec{k}})\nabla\Phi_{i }\nabla\Phi_{i}^{T}(\nabla_{\xi}\varphi_{\vec{k}}\otimes\vec{k})\nabla\Phi_{i }^{-T}\] \[=\frac{\sigma}{\kappa\lambda^{2}}\sum_{\vec{k}}a_{\vec{k}}^{2} \nabla\Phi_{i}^{-1}|\nabla\Phi_{i}^{T}\nabla_{\xi}\varphi_{\vec{k}}|^{2}\nabla \Phi_{i}^{-T}.\]
In particular, comparing with the derivation of \(\bar{A}\), we deduce that
\[\big{|}\kappa\langle M^{T}M\rangle-\bar{A}\big{|}\lesssim\sum_{i}\eta_{i}^{2} \|\nabla\Phi_{i}-\mathrm{Id}\|_{L^{\infty}}|\bar{A}|\lesssim\lambda^{-2\gamma} |\bar{A}|. \tag{4.53}\]
Here we use (4.7), (4.9), (4.10) and the formula for homogenized matrix (2.47).
We now prove the main estimate of the corollary. Using (4.29) and omitting the argument \((x,t,\lambda\Phi_{i})\), we have
\[\kappa\int_{0}^{T}\|\nabla\rho\|_{L^{2}}^{2}dt=\kappa\int_{0}^{T} \Big{\|}M\nabla\bar{\rho}+\frac{1}{\lambda}(\nabla^{2}\bar{\rho}\chi+\nabla_{ x}\chi^{T}\nabla\bar{\rho})+\nabla\tilde{\rho}\Big{\|}_{L^{2}}^{2}dt\] \[\leq \Big{(}1+\frac{1}{\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{ \nicefrac{{1}}{{2}}}}\Big{)}\kappa\int_{0}^{T}\|M\nabla\bar{\rho}\|_{L^{2}}^{2 }dt+C\big{(}1+\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{\nicefrac{{1}}{{2}}} \big{)}\kappa\int_{0}^{T}\|\nabla\tilde{\rho}\|_{L^{2}}^{2}dt\] \[+C\big{(}1+\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{\nicefrac{{1} }{{2}}}\big{)}\frac{\kappa}{\lambda^{2}}\int_{0}^{T}\|\nabla^{2}\bar{\rho} \chi\|_{L^{2}}^{2}+\|\nabla_{x}\chi^{T}\nabla\bar{\rho}\|_{L^{2}}^{2}dt\] \[\leq \Big{(}1+\frac{1}{\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{ \nicefrac{{1}}{{2}}}}\Big{)}\kappa\int_{0}^{T}\int\big{(}M^{T}M-\langle M^{T }M\rangle\big{)}\nabla\bar{\rho}\cdot\nabla\bar{\rho}dxdt+C\Big{(}1+\frac{1}{ \lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{\nicefrac{{1}}{{2}}}}\Big{)}\int_{0}^ {T}\int\bar{A}\nabla\bar{\rho}\cdot\nabla\bar{\rho}dxdt\] \[+C\big{(}1+\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{\nicefrac{{1} }{{2}}}\big{)}\frac{\kappa}{\lambda^{2}}\int_{0}^{T}\|\nabla^{2}\bar{\rho}\chi \|_{L^{2}}^{2}+\|\nabla_{x}\chi^{T}\nabla\bar{\rho}\|_{L^{2}}^{2}dt\] \[\leq C\Big{(}\frac{1}{\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{ \nicefrac{{1}}{{2}}}}+\frac{1}{\lambda\ell}+\lambda^{-2\gamma}\Big{)} \mathcal{D}_{N_{h}}+\Big{(}1+\frac{1}{\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{ \nicefrac{{1}}{{2}}}}\Big{)}\int_{0}^{T}\int\bar{A}\nabla\bar{\rho}\cdot\nabla \bar{\rho}dxdt.\]
For the terms arising in the last two inequalities, we use (4.51) and (4.53) together with Proposition 3.3 and Proposition 4.1, and (4.52), respectively. Then we conclude
\[\kappa\int_{0}^{T}\|\nabla\rho\|_{L^{2}}^{2}dt-\int_{0}^{T}\int\bar{A}\nabla\bar {\rho}\cdot\nabla\bar{\rho}dxdt\lesssim\Big{(}\frac{1}{\lambda\kappa^{\nicefrac{ {1}}{{2}}}\tau^{\nicefrac{{1}}{{2}}}}+\frac{1}{\lambda\ell}+\lambda^{-2\gamma} \Big{)}\mathcal{D}_{N_{h}}.\]
The proof of (4.52) is the same as for the terms \(E_{3}\) and \(E_{4}\) in the proof of Proposition 4.1, so we omit it here. Now we prove our claim (4.51). As in the proof of Lemma 4.6, we can find a matrix potential \(\omega:=\sum_{i}\eta_{i}\omega^{(i)}\) taking arguments \((x,t,\xi)\) such that for \(n\in\{0,1\}\)
\[\big{(}M^{T}M-\langle M^{T}M\rangle\big{)}_{jl}= \partial_{\xi_{1}}\omega_{jl}=\sum_{i}\eta_{i}\partial_{\xi_{1}} \omega_{jl}^{(i)},\] \[\big{\|}D_{t}^{n}\nabla_{x}^{m}\omega_{jl}\big{\|}_{L^{\infty}_{ \xi}}\lesssim \sum_{i}\Big{(}\frac{\sigma\eta_{i}^{2}}{\kappa^{2}\lambda^{2}}+ \frac{\sigma^{1/2}\eta_{i}}{\kappa\lambda}\Big{)}\tau^{-n}\ell^{-m}\] \[\lesssim \sum_{i}\Big{(}1+\frac{\sigma\eta_{i}^{2}}{\kappa^{2}\lambda^{2} }\Big{)}\tau^{-n}\ell^{-m}\lesssim\bar{\kappa}\kappa^{-1}\tau^{-n}\ell^{-m}. \tag{4.54}\]
Now by the chain rule,
\[\nabla\big{(}\omega_{jl}^{(i)}(x,t,\lambda\Phi_{i})\big{)}=(\nabla_{x}\omega_ {jl}^{(i)})(x,t,\lambda\Phi_{i})+\lambda\nabla\Phi_{i}^{T}(\nabla_{\xi}\omega_ {jl}^{(i)})(x,t,\lambda\Phi_{i}).\]
This leads to
\[(\partial_{\xi_{1}}\omega_{jl}^{(i)})(x,t,\lambda\Phi_{i})=\frac{1}{\lambda}( \nabla\Phi_{i}^{-T})_{1m}\partial_{m}\big{(}\omega_{jl}^{(i)}(x,t,\lambda\Phi _{i})\big{)}-\frac{1}{\lambda}(\nabla\Phi_{i}^{-T})_{1m}(\partial_{x_{m}} \omega_{jl}^{(i)})(x,t,\lambda\Phi_{i}).\]
Omitting the argument \((x,t,\lambda\Phi_{i})\), consider the following oscillation term and use integration by parts
\[\kappa\int_{0}^{T} \int\big{(}(M^{T}M)-\langle M^{T}M\rangle\big{)}\nabla\bar{\rho} \cdot\nabla\bar{\rho}dxdt\] \[= \frac{\kappa}{\lambda}\sum_{i,j,l,m}\Big{(}\int_{0}^{T}\int\eta_ {i}(\nabla\Phi_{i}^{-T})_{1m}\partial_{m}\omega_{jl}^{(i)}\partial_{j}\bar{ \rho}\partial_{l}\bar{\rho}dxdt-\int_{0}^{T}\int\eta_{i}(\nabla\Phi_{i}^{-T})_ {1m}\partial_{x_{m}}\omega_{jl}^{(i)}\partial_{j}\bar{\rho}\partial_{l}\bar{ \rho}dxdt\Big{)}\] \[= \frac{\kappa}{\lambda}\sum_{i,j,l,m}\Big{(}-\int_{0}^{T}\int \omega_{jl}^{(i)}\partial_{m}\Big{(}\eta_{i}(\nabla\Phi_{i}^{-T})_{1m} \partial_{j}\bar{\rho}\partial_{l}\bar{\rho}\Big{)}dxdt-\int_{0}^{T}\int\eta _{i}(\nabla\Phi_{i}^{-T})_{1m}\partial_{x_{m}}\omega_{jl}^{(i)}\partial_{j}\bar {\rho}\partial_{l}\bar{\rho}dxdt\Big{)}. \tag{4.55}\]
Now, to estimate the terms above, we use (4.54), (4.13), and Young's inequality with the decomposition \(\bar{\kappa}|\nabla\bar{\rho}||\nabla^{2}\bar{\rho}|=\tau^{-\nicefrac{{1}}{{4}}}\bar{\kappa}^{\nicefrac{{1}}{{4}}}|\nabla\bar{\rho}|\cdot\tau^{\nicefrac{{1}}{{4}}}\bar{\kappa}^{\nicefrac{{3}}{{4}}}|\nabla^{2}\bar{\rho}|\). Then we invoke the energy estimates (4.43) and use (4.17):
\[\begin{split}\kappa\int_{0}^{T}&\int\big{(}M^{T}M-\langle M^{T}M\rangle\big{)}\nabla\bar{\rho}\cdot\nabla\bar{\rho}\,dxdt\\ \lesssim&\ \frac{1}{\lambda\ell}\int_{0}^{T}\|\bar{\kappa}^{\nicefrac{{1}}{{2}}}\nabla\bar{\rho}\|_{L^{2}}^{2}dt+\frac{1}{\lambda}\int_{0}^{T}\int\tau^{-\nicefrac{{1}}{{2}}}\bar{\kappa}^{\nicefrac{{1}}{{2}}}|\nabla\bar{\rho}|^{2}+\tau^{\nicefrac{{1}}{{2}}}\bar{\kappa}^{\nicefrac{{3}}{{2}}}|\nabla^{2}\bar{\rho}|^{2}\,dxdt\\ \lesssim&\ \frac{1}{\lambda\ell}\mathcal{D}_{0}+\frac{1}{\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{\nicefrac{{1}}{{2}}}}\mathcal{D}_{0}+\frac{\tau^{\nicefrac{{1}}{{2}}}}{\lambda\kappa^{\nicefrac{{1}}{{2}}}}\int_{0}^{T}\|\bar{\kappa}\nabla^{2}\bar{\rho}\|_{L^{2}}^{2}dt\\ \lesssim_{(4.43)}&\ \Big{(}\frac{1}{\lambda\ell}+\frac{1}{\lambda\kappa^{\nicefrac{{1}}{{2}}}\tau^{\nicefrac{{1}}{{2}}}}\Big{)}\mathcal{D}_{1}\,,\end{split}\]
which proves the claim (4.51), completing the proof.
## 5. Time averaging
### Setup
Consider constants \(\kappa_{1}>\kappa_{0}>0\), and the oscillatory diffusivity \(\tilde{\kappa}=\tilde{\kappa}(x,t,\tau^{-1}t)\) of the form
\[\tilde{\kappa}(x,t,\tau^{-1}t)=\kappa_{0}+\eta(x,t,\tau^{-1}t)\kappa_{1}\,, \tag{5.1}\]
where \(s\mapsto\eta(x,t,s)\) is 1-periodic, nonnegative and smooth with
\[\langle\eta\rangle:=\int_{0}^{1}\eta(x,t,s)\,ds=1\,.\]
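For orientation, a minimal example of an admissible profile (illustrative only; it is not the profile used in our construction) is
\[\eta(x,t,s)=1+\sin(2\pi s)\,,\]
which is smooth, 1-periodic in \(s\), nonnegative, and satisfies \(\langle\eta\rangle=1\).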
We define
\[\kappa:=\kappa_{0}+\kappa_{1},\quad g(x,t,s):=\frac{\kappa_{1}}{\kappa}(\eta( x,t,s)-1),\quad g_{\tau}(x,t):=g(x,t,\tau^{-1}t), \tag{5.2}\]
so that \(\langle g\rangle=0\), \(g\) is bounded, and we have the identity
\[\tilde{\kappa}(x,t,\tau^{-1}t)=\kappa_{0}+\kappa_{1}\eta(x,t,\tau^{-1}t)= \kappa+\kappa g_{\tau}(x,t). \tag{5.3}\]
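Indeed, (5.3) follows from the definitions (5.2) by a one-line computation, evaluated at \(s=\tau^{-1}t\):
\[\kappa_{0}+\kappa_{1}\eta=\kappa_{0}+\kappa_{1}+\kappa_{1}(\eta-1)=\kappa\Big(1+\frac{\kappa_{1}}{\kappa}(\eta-1)\Big)=\kappa+\kappa g_{\tau}\,.\]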
In this section we consider the solutions \(\rho\) and \(\bar{\rho}\) of the following equations
\[\begin{split}\partial_{t}\rho+u\cdot\nabla\rho&= \operatorname{div}\left(\tilde{\kappa}\nabla\rho\right)\\ \rho|_{t=0}&=\rho_{in},\end{split} \tag{5.4}\]
and
\[\begin{split}\partial_{t}\bar{\rho}+u\cdot\nabla\bar{\rho}& =\operatorname{div}\left(\kappa\nabla\bar{\rho}\right)\\ \bar{\rho}|_{t=0}&=\rho_{in}.\end{split} \tag{5.5}\]
In view of the definitions of \(\kappa\) and \(\tilde{\kappa}\), (5.5) can be seen as the time-averaged version of (5.4). Further, we define the respective total dissipation
\[\tilde{D}:=\int_{0}^{T}\|\tilde{\kappa}^{\sfrac{1}{2}}\nabla\rho\|_{L^{2}}^{2 }\,dt\,,\quad D:=\kappa\int_{0}^{T}\|\nabla\bar{\rho}\|_{L^{2}}^{2}\,dt. \tag{5.6}\]
We make the following assumptions: there exists \(\mu\geq\|\nabla u\|_{\infty}\) (one should think of \(\mu^{-1}\sim\|\nabla u\|_{\infty}^{-1}\) as the advection time-scale) such that
* **(A1)** The fast time-scale is shorter than the advection time-scale, i.e. \(\tau\mu<1/2\). Moreover, there exists \(N\in\mathbb{N}\) such that \[(\tau\mu)^{N-1}\kappa<\kappa_{0}.\] (5.7)
* **(A2)** Control of higher-order spatial derivatives: for any \(n\leq N\) \[\|\nabla u\|_{C^{n}}^{2}\leq C_{N}\left(\frac{\mu}{\kappa}\right)^{n}\mu^{2}.\] (5.8)
* **(A3)** Bound on the initial data: for any \(1\leq n\leq N\) \[\|\rho_{in}\|_{H^{n}}^{2}\leq C_{N}\left(\frac{\mu}{\kappa}\right)^{n}D.\] (5.9)
* **(A4)** Control of the cutoff and its slow derivatives: for any \(1\leq n\leq N\) \[\|\nabla^{n}\eta\|_{L^{\infty}}\leq C_{N}(\tau\mu)^{2}\left(\frac{\mu}{\kappa }\right)^{\frac{n}{2}},\qquad\|D_{t}\eta\|_{L^{\infty}}\leq C_{N}\mu.\] (5.10)
**Proposition 5.1**.: _Under the assumptions (A1), (A2), (A3), (A4) we have_
\[\|\rho(T)-\bar{\rho}(T)\|_{L^{2}}^{2}+\kappa\int_{0}^{T}\|\nabla\rho-\nabla\bar{ \rho}\|_{L^{2}}^{2}\,dt\lesssim(\tau\mu)^{2}D. \tag{5.11}\]
_Moreover, the total dissipation satisfies_
\[|D-\tilde{D}|\lesssim(\tau\mu)(D+\tilde{D}). \tag{5.12}\]
_The implicit constants depend only on the \(N\) appearing in the assumptions._
Proof.: Starting with \(\rho^{(0)}:=\bar{\rho}\) being the solution of (5.5), we construct the solution \(\rho\) of (5.4) by successive approximations \(\rho^{(i)}\), \(i=1,2,\ldots,N\), defined to be solutions of
\[\begin{split}\partial_{t}\rho^{(i)}+u\cdot\nabla\rho^{(i)}& =\kappa\Delta\rho^{(i)}+\kappa\operatorname{div}\left(g_{\tau}\nabla \rho^{(i-1)}\right)\\ \rho^{(i)}|_{t=0}&=\rho_{in}.\end{split} \tag{5.13}\]
We will show below in Lemma 5.2 the improved energy bound\({}^{4}\)
Footnote 4: In Section 4 of [1] a similar strategy was employed to deal with fast temporal oscillations, albeit leading to an estimate with a much more convoluted right-hand side and a weaker improvement; compare Lemma 4.2 there.
\[\|\rho^{(i)}(T)-\rho^{(i-1)}(T)\|_{L^{2}}^{2}+\kappa\int_{0}^{T}\|\nabla\rho^{ (i)}(t)-\nabla\rho^{(i-1)}\|_{L^{2}}^{2}\,dt\leq C_{N}(\tau\mu)^{2i}D.\]
Setting
\[\rho^{(error)}:=\rho-\rho^{(N)},\]
and using (5.4), (5.13) and (5.3), we may write the equation for \(\rho^{(error)}\) as
\[\begin{split}\partial_{t}\rho^{(error)}+u\cdot\nabla\rho^{(error )}&=\kappa\Delta\rho^{(error)}+\kappa\operatorname{div}(g_{ \tau}\nabla\rho^{(error)})+\kappa\operatorname{div}(g_{\tau}(\nabla\rho^{(N)} -\nabla\rho^{(N-1)}))\\ &=\kappa_{0}\Delta\rho^{(error)}+\kappa_{1}\operatorname{div}( \eta\nabla\rho^{(error)})+\kappa\operatorname{div}(g_{\tau}(\nabla\rho^{(N)} -\nabla\rho^{(N-1)}))\,.\end{split}\]
Since \(\eta\geq 0\) and \(|g|\leq 2\), the standard energy estimate and an application of Young's inequality yields
\[\|\rho^{(error)}(T)\|_{L^{2}}^{2}+\kappa_{0}\int_{0}^{T}\|\nabla\rho^{(error )}\|_{L^{2}}^{2}\,dt\leq C\frac{\kappa}{\kappa_{0}}\kappa\int_{0}^{T}\|\nabla \rho^{(N)}-\nabla\rho^{(N-1)}\|_{L^{2}}^{2}\,dt.\]
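In more detail, the absorption works as follows (a sketch; the precise constant is immaterial): testing the equation for \(\rho^{(error)}\) with \(\rho^{(error)}\) itself, dropping the nonnegative contribution of \(\kappa_{1}\operatorname{div}(\eta\nabla\rho^{(error)})\), and using \(|g_{\tau}|\leq 2\) together with Young's inequality gives
\[\frac{1}{2}\frac{d}{dt}\|\rho^{(error)}\|_{L^{2}}^{2}+\kappa_{0}\|\nabla\rho^{(error)}\|_{L^{2}}^{2}\leq 2\kappa\|\nabla\rho^{(N)}-\nabla\rho^{(N-1)}\|_{L^{2}}\|\nabla\rho^{(error)}\|_{L^{2}}\leq\frac{\kappa_{0}}{2}\|\nabla\rho^{(error)}\|_{L^{2}}^{2}+\frac{2\kappa^{2}}{\kappa_{0}}\|\nabla\rho^{(N)}-\nabla\rho^{(N-1)}\|_{L^{2}}^{2}\,,\]
and integrating in time yields the stated bound.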
Combining with the improved energy bound and (A1) we deduce
\[\begin{split}\|\rho^{(error)}(T)\|_{L^{2}}^{2}+\kappa\int_{0}^{T }\|\nabla\rho^{(error)}\|_{L^{2}}^{2}\,dt&\leq C_{N}\left(\frac{ \kappa}{\kappa_{0}}\right)^{2}(\tau\mu)^{2N}D\\ &\leq C_{N}(\tau\mu)^{2}D\,.\end{split} \tag{5.14}\]
Consequently,
\[\begin{split}\|\rho(T)-\bar{\rho}(T)\|_{L^{2}}^{2}+\kappa\int_{0}^ {T}\|\nabla\rho-\nabla\bar{\rho}\|_{L^{2}}^{2}\,dt&\leq 2\|\rho(T)-\rho^{(N)}(T )\|_{L^{2}}^{2}+2\kappa\int_{0}^{T}\|\nabla\rho-\nabla\rho^{(N)}\|_{L^{2}}^{2} \,dt+\\ +2^{N}\sum_{i=0}^{N-1}&\|\rho^{(i+1)}(T)-\rho^{(i)} (T)\|_{L^{2}}^{2}+\kappa\int_{0}^{T}\|\nabla\rho^{(i+1)}-\nabla\rho^{(i)}\|_{L ^{2}}^{2}\,dt\\ &\lesssim(\tau\mu)^{2}D\,,\end{split}\]
verifying (5.11).
Next, we consider the difference in total dissipation, and write
\[\int_{0}^{T}\int_{\mathbb{T}^{3}}\kappa|\nabla\bar{\rho}|^{2}-\tilde{\kappa}| \nabla\rho|^{2}\,dx\,dt=\underbrace{\kappa\int_{0}^{T}\int_{\mathbb{T}^{3}}g_{ \tau}|\nabla\bar{\rho}|^{2}\,dx\,dt}_{(I)}+\underbrace{\int_{0}^{T}\int_{ \mathbb{T}^{3}}\tilde{\kappa}(|\nabla\bar{\rho}|^{2}-|\nabla\rho|^{2})\,dx\,dt }_{(II)}\]
We show below (see (5.26)) the bound
\[|(I)|\lesssim(\tau\mu)D,\]
since, in the notation introduced below (cf. (5.18)), \(I=B^{0}(\nabla\rho^{(0)},\nabla\rho^{(0)})\). Next, we use Young's inequality to estimate
\[|(II)| =\left|\int_{0}^{T}\int_{\mathbb{T}^{3}}\tilde{\kappa}(\nabla\rho +\nabla\bar{\rho})\cdot(\nabla\rho-\nabla\bar{\rho})\,dx\,dt\right|\] \[\lesssim\varepsilon^{-1}\kappa\int_{0}^{T}\left\|\nabla\rho- \nabla\bar{\rho}\right\|_{L^{2}}^{2}dt+\varepsilon\left(\kappa\int_{0}^{T} \left\|\nabla\bar{\rho}\right\|_{L^{2}}^{2}dt+\int_{0}^{T}\left\|\bar{\kappa} ^{\nicefrac{{1}}{{2}}}\nabla\rho\right\|_{L^{2}}^{2}dt\right).\]
Choosing \(\varepsilon=\tau\mu\) and using (5.11) we deduce
\[|(II)|\lesssim(\tau\mu)(D+\tilde{D}),\]
verifying (5.12).
The rest of this section is concerned with estimates on the successive approximations (5.13). Since they may be of separate interest, we present them independently of the specific choices made in the previous section. To this end, let us observe that the definitions (5.2) of \(g\) and of \(\eta\) imply that there exists a fast-time potential, namely \(G(x,t,s)=\int_{0}^{s}g(x,t,a)\,da\), such that
\[\partial_{s}G(x,t,s)=g(x,t,s),\quad\text{ where }\quad g_{\tau}(x,t)=g(x,t,\tau^{-1 }t).\]
Let us denote \(g^{0}:=g,\ g^{l+1}:=G^{l}g,\ \partial_{s}G^{l}=g^{l}\). Since \(\frac{\kappa_{1}}{\kappa}\leq 1\) and \(\eta\) is bounded, assumption (A4) and the definitions yield that for \(f=g^{l}\) or \(G^{l}\) we have
\[\begin{split}&\|f\|_{L^{\infty}}\lesssim 1,\qquad\|(D_{t}^{ slow}f)\|_{L^{\infty}}\lesssim\mu,\\ &\text{and for }n>0\qquad\|\nabla^{n}f\|_{L^{\infty}}\lesssim(\tau\mu)^{2 }\left(\frac{\mu}{\kappa}\right)^{\frac{n}{2}}\lesssim(\tau\mu)\left(\frac{ \mu}{\kappa}\right)^{\frac{n}{2}}.\end{split} \tag{5.15}\]
We will usually use the bound \(\|\nabla^{n}f\|_{L^{\infty}}\lesssim(\tau\mu)\left(\frac{\mu}{\kappa}\right)^{\frac{n}{2}}\); the sharper bound with the factor \((\tau\mu)^{2}\) is needed only once.
### Notation
\[D:=\kappa\int_{0}^{T}\left\|\nabla\rho^{(0)}(s)\right\|_{L^{2}}^{2}\,ds\,, \tag{5.16}\]
\[|\!|\!|\rho|\!|\!|_{n}^{2}:=\sup_{t\in[0,T]}\big{\|}\nabla^{n}\rho(t)\big{\|}_{L^{2}}^{2}+\kappa\int_{0}^{T}\big{\|}\nabla^{n+1}\rho(t)\big{\|}_{L^{2}}^{2}\,dt\,, \tag{5.17}\]
\[B^{l}(F,G):=\kappa\int_{0}^{T}\int_{\mathbb{T}^{3}}g_{\tau}^{l}\,F\cdot G\,dx\,dt\,,\qquad g_{\tau}^{l}(x,t):=g^{l}(x,t,\tau^{-1}t)\,, \tag{5.18}\]
together with the commutator estimate: for any multiindex \(\boldsymbol{\gamma}\) and smooth \(h\), \(F\),
\[\big{\|}[\partial^{\boldsymbol{\gamma}},h]F\big{\|}_{L^{2}}\lesssim\sum_{k=1}^{|\boldsymbol{\gamma}|}\big{\|}\nabla^{k}h\big{\|}_{L^{\infty}}\big{\|}\nabla^{|\boldsymbol{\gamma}|-k}F\big{\|}_{L^{2}}\,. \tag{5.21}\]
### Improved energy bound
The following is the key technical lemma of this section. Consider
\[\begin{split}\partial_{t}\rho^{(i)}+u\cdot\nabla\rho^{(i)}-\kappa \Delta\rho^{(i)}=\kappa\,\mathrm{div}\left(g_{\tau}\nabla\rho^{(i-1)}\right)\\ \rho^{(i)}|_{t=0}=0\ \ \text{for}\ i>0,\qquad\rho^{(0)}|_{t=0}=\rho_{ in}\end{split} \tag{5.22}\]
and \(\rho^{(-1)}\equiv 0\).
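Note that the solutions of (5.22) are precisely the increments of the successive approximations (5.13): summing (5.22) over \(i=0,\ldots,k\), the partial sums \(S_{k}:=\sum_{i=0}^{k}\rho^{(i)}\) (a notation used only in this remark) satisfy
\[\partial_{t}S_{k}+u\cdot\nabla S_{k}=\kappa\Delta S_{k}+\kappa\operatorname{div}(g_{\tau}\nabla S_{k-1})\,,\qquad S_{k}|_{t=0}=\rho_{in}\,,\]
which is exactly (5.13).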
**Lemma 5.2**.: _Assume that there is \(G\) with \(\partial_{s}G(x,t,s)=g(x,t,s)\) such that (5.15) holds for both \(f=g\) and \(f=G\), and assume (5.8), (5.9), (5.10). Then the energy solutions of (5.22) satisfy: **(1) for \(\rho^{(0)}\)**_
\[\|\rho^{(0)}(t)\|_{L^{2}}^{2}+\kappa\int_{0}^{t}\|\nabla\rho^{(0)}(s)\|_{L^{2} }^{2}\,ds=\|\rho_{in}\|_{L^{2}}^{2} \tag{5.23}\]
_and, for any \(n\geq 1\)_
\[|\!|\!|\rho^{(0)}|\!|\!|_{n}^{2}\lesssim\left(\frac{\mu}{\kappa}\right)^{n}D\,; \tag{5.24}\]
**(2) for \(\rho^{(i)}\), \(i\geq 1\), and any \(n\geq 0\)**
\[|\!|\!|\rho^{(i)}|\!|\!|_{n}^{2}\lesssim(\tau\mu)^{2i}\left(\frac{\mu}{\kappa}\right)^{n}D\,. \tag{5.25}\]
Proof.: Throughout the proof we rely on (5.15). Since \(g_{\tau}^{l}=\tau\frac{d}{dt}\big{[}G^{l}(x,t,\tau^{-1}t)\big{]}-\tau(\partial_{t}G^{l})_{\tau}\) and \(\operatorname{div}u=0\), an integration by parts in time gives, for any smooth \(F\), \(G\),
\[B^{l}(F,G)=\kappa\tau\Big{[}\int G_{\tau}^{l}\,F\cdot G\,dx\Big{]}_{t=0}^{t=T}-\kappa\tau\int_{0}^{T}\int(D_{t}G^{l})_{\tau}\,F\cdot G\,dx\,dt-\kappa\tau\int_{0}^{T}\int G_{\tau}^{l}\,D_{t}(F\cdot G)\,dx\,dt\,.\]
By (5.15), the boundary term is bounded by \(\kappa\tau\sup_{t}\|F\|_{L^{2}}\|G\|_{L^{2}}\), and the slow-derivative term by \(\kappa\tau\mu\int_{0}^{T}\|F\|_{L^{2}}\|G\|_{L^{2}}\,dt\).
Together, for any \(|\boldsymbol{\alpha}|=n\) and \(|\boldsymbol{\beta}|=m\)
\[\left|B^{l}(\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i)},\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)})\right|\lesssim\kappa\tau\,|\!|\!|\rho^{(i)}|\!|\!|_{n+1}\,|\!|\!|\rho^{(j)}|\!|\!|_{m+1}+\tau\mu\,|\!|\!|\rho^{(i)}|\!|\!|_{n}\,|\!|\!|\rho^{(j)}|\!|\!|_{m}+\kappa\tau\left|\int_{0}^{T}\int G_{\tau}^{l}\,D_{t}\big{(}\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i)}\cdot\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}\big{)}\right|. \tag{5.28}\]
On the other hand, testing (5.22) with \(\partial^{\boldsymbol{\alpha}}\rho^{(i)}\), \(|\boldsymbol{\alpha}|=n\), the standard energy estimate together with (5.8), (5.15) and Gronwall's inequality yields
\[\begin{split}|\!|\!|\rho^{(i)}|\!|\!|_{n}^{2}\lesssim&\ \sup_{0\leq k\leq n}\Big{(}\frac{\mu}{\kappa}\Big{)}^{n-k}\sup_{|\boldsymbol{\alpha}|=k}\big{|}B^{0}(\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i-1)},\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i)})\big{|}\\ &+(\tau\mu)^{2}\,|\!|\!|\rho^{(i-1)}|\!|\!|_{n}^{2}\,.\end{split} \tag{5.30}\]
Consider the second line of (5.30): (i) it agrees with the desired estimate (5.25) for \(i=1\), if we use there the definition (5.16) and the bound (5.24); (ii) moreover, if we knew the estimate (5.25) for \(\rho^{(i-1)}\), it would agree with it for \(\rho^{(i)}\). Therefore, if we knew that for any \(|\boldsymbol{\alpha}|=k\leq n\) it holds
\[\left|B^{0}(\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i-1)},\nabla\partial^{ \boldsymbol{\alpha}}\rho^{(i)})\right|\lesssim(\tau\mu)^{2i}\left(\frac{\mu}{ \kappa}\right)^{k}D, \tag{5.31}\]
then (5.25) could be shown from (5.30) by induction. It is clear then that we should prove (5.25) inductively, keeping track of \(B\).
We proceed by induction over \(i=0,1,\dots\). The inductive assumption is that for any \(n,m,l\)
\[\begin{split}&(a)\qquad |\!|\!|\rho^{(i)}|\!|\!|_{n}^{2}\lesssim(\tau\mu)^{2i}\left(\frac{\mu}{\kappa}\right)^{n}D\,,\\ &(b)\qquad \sup_{|\boldsymbol{\alpha}|=n,\,|\boldsymbol{\beta}|=m}\left|B^{l}(\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i)},\nabla\partial^{\boldsymbol{\beta}}\rho^{(i-1)})\right|\lesssim(\tau\mu)^{2i}\left(\frac{\mu}{\kappa}\right)^{\frac{n+m}{2}}D\,.\end{split} \tag{5.32}\]
The base case \(i=0\) is part (1) of the lemma, together with \(B^{l}(\,\cdot\,,\nabla\rho^{(-1)})=0\).
_Step 5: Inductive step \(i\to i+1\)._
Now we assume (5.32) and want to show (for any \(n\), \(m\), \(l\))
\[\begin{split}&(a^{\prime})\qquad |\!|\!|\rho^{(i+1)}|\!|\!|_{n}^{2}\lesssim(\tau\mu)^{2i+2}\left(\frac{\mu}{\kappa}\right)^{n}D\\ &(b^{\prime})\qquad\sup_{|\boldsymbol{\alpha}|=n,\,|\boldsymbol{\beta}|=m}\left|B^{l}(\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)},\nabla\partial^{\boldsymbol{\beta}}\rho^{(i)})\right|\lesssim(\tau\mu)^{2i+2}\left(\frac{\mu}{\kappa}\right)^{\frac{n+m}{2}}D\,.\end{split} \tag{5.36}\]
For any \(|\boldsymbol{\alpha}|=n\), \(|\boldsymbol{\beta}|=m\) the preliminary inequality (5.28) for \(i+1,j\leq i\) gives via part \((a)\) of the inductive assumption (5.32)
\[\left|B^{l}(\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)},\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)})\right|\lesssim|\!|\!|\rho^{(i+1)}|\!|\!|_{n}\,(\tau\mu)^{j+1}\left(\frac{\mu}{\kappa}\right)^{\frac{m}{2}}D^{\frac{1}{2}}+\kappa\tau\left|\int_{0}^{T}\int G_{\tau}^{l}\,D_{t}\big{(}\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)}\cdot\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}\big{)}\right|. \tag{5.37}\]
We will focus on the last term of (5.37). By (5.22) for \(\rho^{(\iota)}\) (arbitrary natural \(\iota\)), we have
\[D_{t}(\partial^{\boldsymbol{\gamma}}\rho^{(\iota)})=\kappa\Delta\partial^{\boldsymbol{\gamma}}\rho^{(\iota)}+[u\cdot\nabla,\partial^{\boldsymbol{\gamma}}]\rho^{(\iota)}+\kappa\operatorname{div}\partial^{\boldsymbol{\gamma}}\left(g_{\tau}\nabla\rho^{(\iota-1)}\right). \tag{5.38}\]
In what follows, we consider different cases to complete the estimates at step \(i+1\).
_Substep 5.1: The case when \(D_{t}\) in the last term of (5.37) acts on \(\rho^{(j)}\)._
Use (5.38) with \(\iota=j\),
\[\begin{split}&\kappa\tau\left|\int_{0}^{T}\int G_{\tau}^{l}\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)}\cdot\Big{(}\kappa\Delta\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}+[u\cdot\nabla,\nabla\partial^{\boldsymbol{\beta}}]\rho^{(j)}+\kappa\nabla\operatorname{div}\partial^{\boldsymbol{\beta}}\big{(}g_{\tau}\nabla\rho^{(j-1)}\big{)}\Big{)}\right|\\ &=\kappa\tau\left|\int_{0}^{T}\int G_{\tau}^{l}\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)}\cdot\Big{(}\kappa\Delta\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}+[u\cdot\nabla,\nabla\partial^{\boldsymbol{\beta}}]\rho^{(j)}+\kappa g_{\tau}\nabla\operatorname{div}\partial^{\boldsymbol{\beta}}\nabla\rho^{(j-1)}+\kappa[\nabla\operatorname{div}\partial^{\boldsymbol{\beta}},g_{\tau}]\nabla\rho^{(j-1)}\Big{)}\right|\\ &\lesssim|\!|\!|\rho^{(i+1)}|\!|\!|_{n}\,(\tau\mu)^{j+1}\Big{(}\frac{\mu}{\kappa}\Big{)}^{\frac{m}{2}}D^{\frac{1}{2}}+\kappa\tau\sup_{|\boldsymbol{\beta}^{\prime}|=m+2}\big{|}B^{l+1}(\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)},\nabla\partial^{\boldsymbol{\beta}^{\prime}}\rho^{(j-1)})\big{|}\,,\end{split}\]
where we used \(G_{\tau}^{l}g_{\tau}=g_{\tau}^{l+1}\) for the third term, and (5.15), the commutator estimate (5.21), (5.8) and the inductive assumption (5.32) \((a)\) for the remaining ones; iterating the last term in \(j\) produces the factor \((\kappa\tau)^{j}Q(0,m+2j)\) appearing below.
_Substep 5.2: The case when \(D_{t}\) in the last term of (5.37) acts on \(\rho^{(i+1)}\)._
Use (5.38) with \(\iota=i+1\)
\[\kappa\tau\left|\int_{0}^{T}\int G_{\tau}^{l}\nabla\partial^{ \boldsymbol{\beta}}\rho^{(j)}\cdot\left(\kappa\Delta\nabla\partial^{\boldsymbol {\alpha}}\rho^{(i+1)}+[u\cdot\nabla,\nabla\partial^{\boldsymbol{\alpha}}]\rho^{ (i+1)}+\kappa\nabla\operatorname{div}\partial^{\boldsymbol{\alpha}}\left(g_{ \tau}\nabla\rho^{(i)}\right)\right)\right|=\] \[\kappa\tau\left|\int_{0}^{T}\int G_{\tau}^{l}\nabla\partial^{ \boldsymbol{\beta}}\rho^{(j)}\cdot\left(\kappa\Delta\nabla\partial^{\boldsymbol {\alpha}}\rho^{(i+1)}+[u\cdot\nabla,\nabla\partial^{\boldsymbol{\alpha}}]\rho^{ (i+1)}+\kappa g_{\tau}\Delta\partial^{\boldsymbol{\alpha}}\nabla\rho^{(i)}+ \kappa[\nabla\operatorname{div}\partial^{\boldsymbol{\alpha}},g_{\tau}]\nabla \rho^{(i)}\right)\right|. \tag{5.41}\]
For the first right-hand side term of (5.41) we shift the laplacian:
\[\kappa^{2}\tau\int G_{\tau}^{l}\Delta\nabla\partial^{\boldsymbol{ \alpha}}\rho^{(i+1)}\cdot\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}= \kappa^{2}\tau\int G_{\tau}^{l}\nabla\partial^{\boldsymbol{\alpha }}\rho^{(i+1)}\cdot\Delta\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}+[G_{ \tau}^{l},\Delta]\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)}\cdot\nabla \partial^{\boldsymbol{\beta}}\rho^{(j)}\] \[= \kappa^{2}\tau\int G_{\tau}^{l}\nabla\partial^{\boldsymbol{\alpha }}\rho^{(i+1)}\cdot\nabla\Delta\partial^{\boldsymbol{\beta}}\rho^{(j)}+\nabla \partial^{\boldsymbol{\alpha}}\rho^{(i+1)}\cdot[\Delta,G_{\tau}^{l}]\nabla \partial^{\boldsymbol{\beta}}\rho^{(j)},\]
so that via the commutator estimate (5.21) we can bound the left-hand side as follows
\[\begin{split}&\kappa\tau\left|\int_{0}^{T}\int G_{\tau}^{l}\,\kappa\Delta\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)}\cdot\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}\right|\lesssim\\ &\kappa^{\frac{1}{2}}\|\nabla^{n+1}\rho^{(i+1)}\|_{L^{2}_{xt}}\,\kappa\tau\left(\kappa^{\frac{1}{2}}\|\nabla^{m+3}\rho^{(j)}\|_{L^{2}_{xt}}+\|\nabla G_{\tau}^{l}\|_{L^{\infty}}\kappa^{\frac{1}{2}}\|\nabla^{m+2}\rho^{(j)}\|_{L^{2}_{xt}}+\|\nabla^{2}G_{\tau}^{l}\|_{L^{\infty}}\kappa^{\frac{1}{2}}\|\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}\|_{L^{2}_{xt}}\right)\\ &\lesssim|\!|\!|\rho^{(i+1)}|\!|\!|_{n}\,(\tau\mu)^{j+1}\left(\frac{\mu}{\kappa}\right)^{\frac{m}{2}}D^{\frac{1}{2}}\,,\end{split}\]
where the last step uses (5.15) and the inductive assumption (5.32) \((a)\) for \(\rho^{(j)}\).
The fourth right-hand side term of (5.41) is estimated, via the commutator estimate (5.21), as follows:
\[\kappa^{2}\tau\left|\int_{0}^{T}\int G_{\tau}^{l}\,\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)}\cdot[\nabla\operatorname{div}\partial^{\boldsymbol{\alpha}},g_{\tau}]\nabla\rho^{(i)}\right|\lesssim\kappa\tau\,|\!|\!|\rho^{(j)}|\!|\!|_{m}\,(\tau\mu)\sum_{k=1}^{n+2}\Big{(}\frac{\mu}{\kappa}\Big{)}^{\frac{k}{2}}\kappa^{\frac{1}{2}}\big{\|}\nabla^{n+3-k}\rho^{(i)}\big{\|}_{L^{2}_{xt}}\lesssim(\tau\mu)^{i+j+2}\Big{(}\frac{\mu}{\kappa}\Big{)}^{\frac{n+m}{2}}D\,,\]
by (5.15) and (5.32) \((a)\); the third right-hand side term of (5.41) equals a \(B^{l+1}\)-term, as in Substep 5.1, and is handled in the same recursive way.
_Substep 5.3: Conclusion of the estimate for \(B^{l}\)._
Collecting (5.37) with the bounds of Substeps 5.1 and 5.2 we arrive, for any \(|\boldsymbol{\alpha}|=n\) and \(|\boldsymbol{\beta}|=m\), at
\[\big{|}B^{l}(\nabla\partial^{\boldsymbol{\alpha}}\rho^{(i+1)},\nabla\partial^{\boldsymbol{\beta}}\rho^{(j)})\big{|}\lesssim Q(j,m)+(\kappa\tau)^{j}Q(0,m+2j)\lesssim Q(j,m)\,,\qquad Q(j,m):=(\tau\mu)^{j+1}\Big{(}\frac{\mu}{\kappa}\Big{)}^{\frac{m}{2}}|\!|\!|\rho^{(i+1)}|\!|\!|_{n}\,D^{\frac{1}{2}}\,; \tag{5.46}\]
here the first inequality follows from iterating the recursive \(B^{l+1}\)-terms,
while the second inequality follows from \((\kappa\tau)^{j}Q(0,m+2j)\lesssim Q(j,m)\), which is immediate from the definition of \(Q(j,m)\). Using (5.46) with \(j=i\) yields, via the definition of \(B\),
\[\sup_{|\boldsymbol{\alpha}|=n,|\boldsymbol{\beta}|=m}\Big{|}B^{l}(\nabla \partial^{\boldsymbol{\alpha}}\rho^{(i+1)},\nabla\partial^{\boldsymbol{\beta} }\rho^{(i)})\Big{|}\lesssim Q(i,m). \tag{5.47}\]
_Substep 5.4: Close the induction argument._
To close the argument, we need to return to estimates on \(\rho^{(i+1)}\), since it appears in \(Q(i,m)\). Note that (5.30) for \(i+1\) gives for \(n=0\)
\[|\!|\!|\rho^{(i+1)}|\!|\!|_{0}^{2}\lesssim\big{|}B^{0}(\nabla\rho^{(i)},\nabla\rho^{(i+1)})\big{|}+(\tau\mu)^{2}\,|\!|\!|\rho^{(i)}|\!|\!|_{0}^{2}\lesssim Q(i,0)+(\tau\mu)^{2i+2}D=(\tau\mu)^{i+1}|\!|\!|\rho^{(i+1)}|\!|\!|_{0}\,D^{\frac{1}{2}}+(\tau\mu)^{2i+2}D\,,\]
whence, by Young's inequality, \(|\!|\!|\rho^{(i+1)}|\!|\!|_{0}^{2}\lesssim(\tau\mu)^{2i+2}D\), which is \((a^{\prime})\) for \(n=0\). The case of general \(n\) follows in the same way from (5.30) and (5.46), and inserting \((a^{\prime})\) into (5.47) yields \((b^{\prime})\). This closes the induction and completes the proof of Lemma 5.2.
## 6. The iterative step
In this section we carry out the iterative step of the convex integration scheme, following [1] with modified parameter choices. We will use the following inequalities relating the parameters \(\alpha\), \(\beta\), \(b\), \(\gamma_{L}\), \(\gamma_{T}\), \(\gamma_{R}\), \(\gamma_{E}\) and \(\bar{N}\):
\[\gamma_{L} <(b-1)\beta\,, \tag{6.1}\] \[4\alpha(1+\gamma_{L})+2\gamma_{L} <2(b-1)\beta+\gamma_{T}+\gamma_{R}\,,\] (6.2) \[\alpha\gamma_{L} <\gamma_{T}\,,\] (6.3) \[4\alpha(1+\gamma_{L}) <\gamma_{R}\,,\] (6.4) \[2\beta(b-1)+1+\gamma_{R} <\bar{N}\gamma_{L}\,,\] (6.5) \[b\alpha+\gamma_{T}+b\gamma_{R} <(b-1)(1-(2b+1)\beta)\,,\] (6.6) \[b\gamma_{E} <(b-1)(1-(2b+1)\beta)\,. \tag{6.7}\]
We first claim that (2.9) allows us to choose \(\gamma_{L},\bar{N}\) and \(\alpha_{0}>0\) so that (6.1)-(6.7) are valid. To see this, we can first choose \(\gamma_{L}>0\) sufficiently small so that (6.1) holds and
\[2\gamma_{L}<2(b-1)\beta+\gamma_{T}+\gamma_{R}.\]
Then we choose \(\bar{N}\) sufficiently large (depending on \(\gamma_{L}\)) so that (6.5) is valid. Finally, we choose \(\alpha_{0}>0\) sufficiently small, so that for any \(\alpha\leq\alpha_{0}\) also (6.2), (6.3), (6.4) and (6.6) hold.
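For orientation, and leaving aside the compatibility with (2.9), one admissible (entirely illustrative and far from optimal) sample choice is
\[b=2\,,\quad\beta=\tfrac{1}{100}\,,\quad\gamma_{L}=\tfrac{1}{200}\,,\quad\gamma_{T}=\tfrac{1}{100}\,,\quad\gamma_{R}=\gamma_{E}=\tfrac{1}{10}\,,\quad\bar{N}=300\,,\quad\alpha_{0}=\tfrac{1}{200}\,;\]
one checks (6.1)-(6.7) directly, e.g. (6.5) reads \(\tfrac{2}{100}+1+\tfrac{1}{10}<300\cdot\tfrac{1}{200}=\tfrac{3}{2}\).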
For notational convenience we introduce
\[\mathring{\delta}_{q+1} :=\delta_{q+1}\lambda_{q}^{-\gamma_{R}}=\lambda_{q}^{-2b\beta- \gamma_{R}},\]
so that (2.4) can be written as \(\left\|\mathring{R}_{q}\right\|_{C^{0}}\leq\mathring{\delta}_{q+1}\).
### Mollification step
Following [1, Section 2.4] we define
\[u_{\ell}:=u_{q}*\psi_{\ell_{q}},\quad\mathring{R}_{\ell}:=\mathring{R}_{q}* \psi_{\ell_{q}}-(u_{q}\mathring{\otimes}u_{q})*\psi_{\ell_{q}}+u_{\ell} \mathring{\otimes}u_{\ell}\]
so that \((u_{\ell},\mathring{R}_{\ell})\) is another solution to (2.1). However, at variance with [1, Section 2.4], we fix a mollifying kernel \(\psi\in C^{\infty}_{c}(\mathbb{R}^{3})\) which, in addition to the usual requirement \(\int_{\mathbb{R}^{3}}\psi\,dx=1\) also satisfies
\[\int_{\mathbb{R}^{3}}\psi(x)x^{\theta}\,dx=0\quad\text{ for any multiindex }\theta\text{ with }1\leq|\theta|\leq\bar{N}. \tag{6.8}\]
The construction and use of such mollifiers (called "deep smoothing operators of depth \(\bar{N}\)") is standard, see e.g. [1, Section 2.3.4]; the case of infinite depth was introduced by Nash [11]. We point out that if \(\bar{N}\geq 2\), then \(\psi\) cannot be nonnegative.
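For completeness, we recall one standard way to produce such a kernel (a sketch; any kernel satisfying (6.8) works, and the notation \(\varphi,s_{k},a_{k}\) is introduced only here). Fix any mollifier \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{3})\) with \(\int\varphi=1\), choose distinct scales \(s_{0},\ldots,s_{\bar{N}}>0\), and set
\[\psi(x):=\sum_{k=0}^{\bar{N}}a_{k}\,s_{k}^{-3}\varphi(x/s_{k})\,,\]
where the coefficients \((a_{k})\) solve the invertible Vandermonde system \(\sum_{k}a_{k}=1\), \(\sum_{k}a_{k}s_{k}^{d}=0\) for \(d=1,\ldots,\bar{N}\). Since \(\int\psi(x)x^{\theta}\,dx=\big(\sum_{k}a_{k}s_{k}^{|\theta|}\big)\int\varphi(y)y^{\theta}\,dy\), all moments of order \(1\leq|\theta|\leq\bar{N}\) vanish.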
The key point is the following lemma, a variant of the usual smoothing estimates.
**Lemma 6.2**.: _Let \(\psi\in C^{\infty}_{c}(\mathbb{R}^{n})\) be a smoothing operator of depth \(\bar{N}\geq 1\) and such that \(\int\psi=1\). Then for any real \(r,s\geq 0\)_
\[\|f*\psi_{\ell}\|_{C^{r+s}}\lesssim\ell^{-s}\|f\|_{C^{r}} \tag{6.9}\]
_and for any \(r\geq 0\), \(0\leq s\leq\bar{N}\)_
\[\|f-f*\psi_{\ell}\|_{C^{r}}\lesssim\ell^{s}\|f\|_{C^{r+s}} \tag{6.10}\]
_The implicit constants depend on the choice of \(\psi\) as well as on \(r,s\)._
Proof.: Concerning (6.9) assume first that \(r=k\) and \(s=l\) are integers and let \(a,b\) be multi-indices with \(|a|=k,|b|=l\). Then \(\partial^{a+b}(f*\psi_{\ell})=\partial^{a}f*\partial^{b}\psi_{\ell}\), hence
\[|\partial^{a+b}(f*\psi_{\ell})|\leq C_{l}\ell^{-l}\|f\|_{k}.\]
If \(s=l+\alpha\), we write
\[\partial^{a}f*\partial^{b}\psi_{\ell}(x+z)-\partial^{a}f*\partial^{b }\psi_{\ell}(x) =\int_{\mathbb{R}^{n}}\partial^{a}f(x-y)\left(\partial^{b}\psi_{ \ell}(y+z)-\partial^{b}\psi_{\ell}(y)\right)\,dy\] \[=\ell^{-k}\int_{\mathbb{R}^{n}}\partial^{a}f(x-y)\left((\partial^ {b}\psi)(y+\ell^{-1}z)-(\partial^{b}\psi)(y)\right)\,dy,\]
from which we obtain
\[\|\partial^{a+b}(f*\psi_{\ell})\|_{\alpha}\leq C_{l,\alpha}\ell^{-l-\alpha}\|f \|_{k}.\]
Finally, if also \(r=k+\beta\) for some \(\beta\in(0,1)\), we obtain the required estimate from interpolation between \(r=k\) and \(r=k+1\). This concludes the proof of (6.9) for \(r,s\geq 0\).
Next, by considering the Taylor expansion of \(f\) at \(x\) we can write
\[f(x-y)=f(x)+Q_{x}(y)+R_{x}(y)\,,\]
where \(Q_{x}(y)\) is a sum of monomials in \(y\) of degree \(d\) with \(1\leq d\leq\bar{N}\) and \(|R_{x}(y)|\lesssim|y|^{s}\|f\|_{C^{s}}\). Moreover, from (6.8) we deduce that \(\int_{\mathbb{R}^{n}}Q_{x}(y)\psi_{\ell}(y)\,dy=0\). Thus,
\[|f-f*\psi_{\ell}|=\left|\int\psi_{\ell}(y)(f(x-y)-f(x))dy\right|\,\lesssim\|f \|_{C^{s}}\int\ell^{-n}\left|\psi(\ell^{-1}y)\right||y|^{s}dy\lesssim\ell^{s} \|f\|_{s}\,.\]
This proves (6.10) for the case \(r=0\). To obtain the estimate for integer \(r=k\), repeat the same argument for the partial derivatives \(\partial^{a}f\) with \(|a|=k\). For general real \(r\geq 0\) we again proceed by interpolation.
With the help of Lemma 6.2 we obtain the following bounds.
**Proposition 6.3**.: _For any \(N\geq 0\) we have_
\[\|u_{\ell}\|_{C^{N+1}} \lesssim\left\{\begin{array}{ll}\delta_{q}^{\sfrac{1}{2}} \lambda_{q}^{N+1}&\text{if }N+1\leq\bar{N}\\ \delta_{q}^{\sfrac{1}{2}}\lambda_{q}^{\bar{N}}\ell_{q}^{\bar{N}-N-1}&\text{if }N+1 \geq\bar{N}\end{array}\right., \tag{6.11}\] \[\left\|\mathring{R}_{\ell}\right\|_{C^{N}} \lesssim\mathring{\delta}_{q+1}\ell_{q}^{-N}+\delta_{q}\lambda_{q}^{1 +\bar{N}}\ell_{q}^{\bar{N}-N}\,,\] (6.12) \[\left|\int_{\mathbb{T}^{3}}\left|u_{q}\right|^{2}-\left|u_{\ell} \right|^{2}\,dx\right| \lesssim\mathring{\delta}_{q+1}+\delta_{q}^{\sfrac{1}{2}}\lambda_{q}^ {\bar{N}}\ell_{q}^{\bar{N}}\,. \tag{6.13}\]
_Moreover, if \(z_{q}=\mathcal{B}u_{q}\) and \(z_{\ell}=\mathcal{B}u_{\ell}=z_{q}*\psi_{\ell_{q}}\) are the vector potentials, we have in addition_
\[\|z_{\ell}-z_{q}\|_{C^{N+\alpha}}\lesssim\delta_{q}^{\sfrac{1}{2}}\lambda_{q} ^{\bar{N}}\ell_{q}^{\bar{N}+1-N-\alpha}\,, \tag{6.14}\]
Proof.: The bounds (6.11) and (6.14) follow directly from (6.9) and (6.10) together with the classical Schauder estimates on the Calderon-Zygmund operator \(\nabla\mathcal{B}\). For (6.12) we use the bound (2.5) and interpolation to obtain
\[\|u_{q}\otimes u_{q}\|_{C^{\bar{N}}}\lesssim\|u_{q}\|_{C^{0}}\|u_{q}\|_{C^{ \bar{N}}}\leq\|u_{q}\|_{C^{1}}\|u_{q}\|_{C^{\bar{N}}}\lesssim\delta_{q}\lambda _{q}^{\bar{N}+1}\,.\]
Then we apply (6.10) to the decomposition
\[\|\mathring{R}_{\ell}\|_{C^{N}}\leq\|\mathring{R}_{q}*\psi_{\ell_{q}}\|_{C^{N }}+\|(u_{q}\mathring{\otimes}u_{q})*\psi_{\ell_{q}}-u_{q}\mathring{\otimes}u_ {q}\|_{C^{N}}+2\|(u_{q}-u_{q}*\psi_{\ell_{q}})\otimes u_{q}\|_{C^{N}}\,.\]
Note that at variance with [1, Proposition 2.2] we are not using a commutator estimate here.
From (6.5) we obtain \(\delta_{q}^{\sfrac{1}{2}}\lambda_{q}^{\bar{N}}\ell_{q}^{\bar{N}}\leq\delta_{q} \lambda_{q}^{1+\bar{N}}\ell_{q}^{\bar{N}}\leq\mathring{\delta}_{q+1}\). Consequently we have
**Corollary 6.4**.: _For any \(N\geq 0\) we have the estimates_
\[\left\|u_{\ell}\right\|_{C^{N+1}} \lesssim\delta_{q}^{\sfrac{1}{2}}\lambda_{q}\ell_{q}^{-N}\,,\] \[\left\|\mathring{R}_{\ell}\right\|_{C^{N}} \lesssim\mathring{\delta}_{q+1}\ell_{q}^{-N}\,,\] \[\left|\int_{\mathbb{T}^{3}}\left|u_{q}\right|^{2}-\left|u_{\ell} \right|^{2}\,dx\right| \lesssim\mathring{\delta}_{q+1}\,,\] \[\left\|z_{\ell}-z_{q}\right\|_{C^{N+\alpha}} \lesssim\mathring{\delta}_{q+1}\ell_{q}^{1-\alpha-N}\,.\]
### Gluing step
The gluing step, introduced in [18], proceeds as follows. For each \(i\in\mathbb{N}\) we set \(t_{i}=i\tau_{q}\) and let \(u_{i}\) be the (classical) solution of the Euler equations
\[\partial_{t}u_{i}+\operatorname{div}(u_{i}\otimes u_{i})+\nabla p _{i} =0\,,\] \[\operatorname{div}u_{i} =0\,, \tag{6.15}\] \[u_{i}(\cdot,t_{i}) =u_{\ell}(\cdot,t_{i})\,.\]
It is well-known (see for instance [1, Proposition 3.1]) that there exists a constant \(c(\alpha)>0\) such that, for each \(i\in\mathbb{N}\) the solution \(u_{i}\) is smooth, uniquely defined, and satisfies for any \(N\geq 1\) the estimates
\[\left\|u_{i}(t)\right\|_{C^{N+\alpha}}\lesssim\left\|u_{\ell}(t_{i})\right\|_{ C^{N+\alpha}}\quad\text{ for all }t\in(t_{i}-T,t_{i}+T)\]
for \(T\leq c\|u_{\ell}(t_{i})\|_{C^{1,\alpha}}^{-1}\), where the implicit constant depends on \(N\) and \(\alpha\in(0,1)\). In particular, from our choice of \(\tau_{q}\) in (2.7) we obtain for any \(N\geq 1\) (cf. [1, Corollary 3.2])
\[\left\|u_{i}(t)\right\|_{C^{N+\alpha}}\lesssim\delta_{q}^{\sfrac{1}{2}}\lambda_ {q}\ell_{q}^{1-N-\alpha}\,,\]
provided
\[\tau_{q}\|u_{\ell}\|_{C^{1,\alpha}}\leq c \tag{6.16}\]
Taking into account Remark 6.1 and (6.11), this is ensured by (6.3) and by choosing \(a\gg 1\) sufficiently large. Following the derivation of the stability estimates in [1, Proposition 3.3] and [1, Proposition 3.4], we deduce
**Proposition 6.5**.: _For \(|t-t_{i}|\leq\tau_{q}\) and \(N\geq 0\) we have_
\[\left\|u_{i}-u_{\ell}\right\|_{C^{N+\alpha}} \lesssim\tau_{q}\mathring{\delta}_{q+1}\ell_{q}^{-N-1-2\alpha}\,, \tag{6.17}\] \[\left\|z_{i}-z_{\ell}\right\|_{C^{N+\alpha}} \lesssim\tau_{q}\mathring{\delta}_{q+1}\ell_{q}^{-N-2\alpha}\,,\] (6.18) \[\left\|(\partial_{t}+u_{\ell}\cdot\nabla)(z_{i}-z_{\ell})\right\| _{C^{N+\alpha}} \lesssim\mathring{\delta}_{q+1}\ell_{q}^{-N-2\alpha} \tag{6.19}\]
Proof.: Using the equations satisfied by \(u_{q}\) and \(u_{i}\), we obtain the equation for the pressure difference
\[\Delta(p_{\ell}-p_{i})=\operatorname{div}\bigl{(}\nabla u_{\ell}(u_{i}-u_{ \ell})\bigr{)}+\operatorname{div}\bigl{(}\nabla u_{i}(u_{i}-u_{\ell})\bigr{)} +\operatorname{div}\operatorname{div}\mathring{R}_{\ell},\]
and deduce
\[\left\|p_{\ell}-p_{i}\right\|_{C^{1+\alpha}}\lesssim\left\|u_{\ell}\right\|_{ C^{1+\alpha}}\left\|u_{i}-u_{\ell}\right\|_{C^{\alpha}}+\|\mathring{R}_{ \ell}\|_{C^{1+\alpha}}\,.\]
Using the equation for \(u_{q}\) and \(u_{i}\) we then obtain
\[\left\|(\partial_{t}+u_{\ell}\cdot\nabla)(u_{\ell}-u_{i})\right\|_{C^{\alpha} }\lesssim\left\|u_{\ell}\right\|_{C^{1+\alpha}}\left\|u_{i}-u_{\ell}\right\|_{ C^{\alpha}}+\|\mathring{R}_{\ell}\|_{C^{1+\alpha}}\,.\]
Applying Corollary 6.4, (6.16) and Gronwall's inequality we then conclude
\[\left\|u_{i}-u_{\ell}\right\|_{C^{\alpha}}\lesssim\tau_{q}\mathring{\delta}_ {q+1}\ell_{q}^{-1-2\alpha}\,,\]
which is (6.17) with \(N=0\). The case \(N\geq 1\) follows analogously, following the computations in the proof of [1, Proposition 3.3]. Furthermore, the estimates (6.18)-(6.19) can be deduced in the same manner, following the computations in the proof of [1, Proposition 3.4].
Next, as in [1, Section 4], we partition time using a partition of unity \(\{\chi_{i}\}_{i}\), with \(\chi_{i}\in C_{c}^{\infty}(\mathbb{R})\) and \(0\leq\chi_{i}\leq 1\) (one concrete construction is sketched after the definition below), such that
* \(\sum_{i}\chi_{i}\equiv 1\) in \([0,T]\);
* \(\operatorname{supp}\chi_{i}\subset(t_{i}-\frac{2}{3}\tau_{q},t_{i}+\frac{2}{3} \tau_{q})\), in particular \(\operatorname{supp}\chi_{i}\cap\operatorname{supp}\chi_{i+2}=\emptyset\);
* \(\chi_{i}=1\) on \((t_{i}-\frac{1}{3}\tau_{q},t_{i}+\frac{1}{3}\tau_{q})\) and \(\chi_{i}+\chi_{i+1}=1\) on \((t_{i}+\frac{1}{3}\tau_{q},t_{i}+\frac{2}{3}\tau_{q})\);
* \(\|\partial_{t}^{N}\chi_{i}\|_{C^{0}}\lesssim\tau_{q}^{-N}\),
and define
\[\bar{u}_{q}=\sum_{i}\chi_{i}u_{i}\,,\quad\bar{p}_{q}^{(1)}=\sum_{i}\chi_{i}p_ {i}.\]
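One concrete construction of such a partition (a sketch; any choice with the listed properties works, and the bump \(\phi\) is introduced only here): fix \(\phi\in C_{c}^{\infty}((-\tfrac{2}{3},\tfrac{2}{3}))\) with \(0\leq\phi\leq 1\), \(\phi>0\) on \((-\tfrac{2}{3},\tfrac{2}{3})\) and \(\phi\equiv 1\) on \([-\tfrac{1}{3},\tfrac{1}{3}]\), and set
\[\chi_{i}(t):=\frac{\phi\big((t-t_{i})/\tau_{q}\big)}{\sum_{j}\phi\big((t-t_{j})/\tau_{q}\big)}\,.\]
The denominator is bounded away from zero since consecutive supports overlap, the four listed properties follow by direct inspection, and \(\|\partial_{t}^{N}\chi_{i}\|_{C^{0}}\lesssim\tau_{q}^{-N}\) by the chain rule.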
Further, we define
\[\mathring{\bar{R}}_{q} =\partial_{t}\chi_{i}\mathcal{R}(u_{i}-u_{i+1})-\chi_{i}(1-\chi _{i})(u_{i}-u_{i+1})\mathring{\otimes}(u_{i}-u_{i+1})\] \[\bar{p}_{q}^{(2)} =-\chi_{i}(1-\chi_{i})\left(|u_{i}-u_{i+1}|^{2}-\int_{\mathbb{T}^ {3}}|u_{i}-u_{i+1}|^{2}\,dx\right),\]
for \(t\in(t_{i}+\frac{1}{3}\tau_{q},t_{i}+\frac{2}{3}\tau_{q})\) and \(\mathring{\bar{R}}_{q}=0\), \(\bar{p}_{q}^{(2)}=0\) for \(t\notin\bigcup_{i}(t_{i}+\frac{1}{3}\tau_{q},t_{i}+\frac{2}{3}\tau_{q})\), where \(\mathcal{R}\) is the "inverse divergence" operator for symmetric tracefree \(2\)-tensors, defined as
\[(\mathcal{R}f)^{ij} =\mathcal{R}^{ijk}f^{k} \tag{6.20}\] \[\mathcal{R}^{ijk} =-\frac{1}{2}\Delta^{-2}\partial_{i}\partial_{j}\partial_{k}- \frac{1}{2}\Delta^{-1}\partial_{k}\delta_{ij}+\Delta^{-1}\partial_{i}\delta_{ jk}+\Delta^{-1}\partial_{j}\delta_{ik}.\]
when acting on vectors \(f\) with zero mean on \(\mathbb{T}^{3}\). See [1, Proposition 4.1] and [1, Definition 4.2 and Lemma 4.3].
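For the reader's convenience, one can verify directly from (6.20) that \(\mathcal{R}f\) is symmetric, trace-free, and a right inverse of the divergence: for mean-zero \(f\),
\[\partial_{j}(\mathcal{R}f)^{ij}=\Big{(}-\tfrac{1}{2}\Delta^{-1}\partial_{i}\partial_{k}-\tfrac{1}{2}\Delta^{-1}\partial_{i}\partial_{k}+\Delta^{-1}\partial_{i}\partial_{k}+\delta_{ik}\Big{)}f^{k}=f^{i}\,,\]
while summing over \(i=j\) in (6.20) gives \(\mathcal{R}^{iik}=\big{(}-\tfrac{1}{2}-\tfrac{3}{2}+1+1\big{)}\Delta^{-1}\partial_{k}=0\).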
Finally, we set
\[\bar{p}_{q}=\bar{p}_{q}^{(1)}+\bar{p}_{q}^{(2)}.\]
As in [1, Section 4.2], one can easily verify that
* \(\mathring{\bar{R}}_{q}\) is a smooth symmetric and traceless \(2\)-tensor;
* For all \((x,t)\in\mathbb{T}^{3}\times[0,T]\) \[\left\{\begin{array}{l}\partial_{t}\bar{u}_{q}+\operatorname{ div}(\bar{u}_{q}\otimes\bar{u}_{q})+\nabla\bar{p}_{q}=\operatorname{div} \mathring{\bar{R}}_{q},\\ \operatorname{div}\bar{u}_{q}=0;\end{array}\right.\]
* The support of \(\mathring{\bar{R}}_{q}\) satisfies \[\operatorname{supp}\mathring{\bar{R}}_{q}\subset\mathbb{T}^{3}\times\bigcup_{ i}(t_{i}+\tfrac{1}{3}\tau_{q},t_{i}+\tfrac{2}{3}\tau_{q}).\] (6.21)
With our choice of parameters \(\tau_{q},\ell_{q}\) the estimates in [1, Section 4.3 and Section 4.4] are modified as follows.
**Proposition 6.6**.: _The velocity field \(\bar{u}_{q}\) and its vector potential \(\bar{z}_{q}=\mathcal{B}\bar{u}_{q}\) satisfy the following estimates:_
\[\|\bar{u}_{q}-u_{\ell}\|_{C^{N+\alpha}} \lesssim\tau_{q}\mathring{\delta}_{q+1}\ell_{q}^{-1-N-2\alpha}\,, \tag{6.22}\] \[\|\bar{z}_{q}-z_{\ell}\|_{C^{\alpha}} \lesssim\tau_{q}\mathring{\delta}_{q+1}\ell_{q}^{-\alpha}\,. \tag{6.23}\]
_for all \(N\geq 0\). The new Reynolds stress \(\mathring{\bar{R}}_{q}\) satisfies the estimates:_
\[\Big{\|}\mathring{\bar{R}}_{q}\Big{\|}_{N+\alpha} \lesssim\mathring{\delta}_{q+1}\ell_{q}^{-N-2\alpha}+\tau_{q}^{2} \mathring{\delta}_{q+1}^{2}\ell_{q}^{-N-2-4\alpha}\,, \tag{6.24}\] \[\Big{\|}(\partial_{t}+\bar{u}_{q}\cdot\nabla)\mathring{\bar{R}}_ {q}\Big{\|}_{N+\alpha} \lesssim\tau_{q}^{-1}\mathring{\delta}_{q+1}\ell_{q}^{-N-3\alpha}+ \tau_{q}\mathring{\delta}_{q+1}^{2}\ell_{q}^{-N-2-4\alpha}\,. \tag{6.25}\]
_Furthermore, we have the estimate_
\[\left|\int_{\mathbb{T}^{3}}|\bar{u}_{q}|^{2}-|u_{\ell}|^{2}\,dx\right| \lesssim\tau_{q}\mathring{\delta}_{q+1}\delta_{q}^{\sfrac{1}{2}} \lambda_{q}+\tau_{q}^{2}\mathring{\delta}_{q+1}^{2}\ell_{q}^{-2-4\alpha}\,. \tag{6.26}\]
Proof.: Using the identity \(\bar{u}_{q}-u_{\ell}=\sum_{i}\chi_{i}(u_{i}-u_{\ell})\), the bounds (6.22) and (6.23) follow directly from (6.17) and (6.18) in Proposition 6.5.
As in the proof of [1, Proposition 4.4] we write the new Reynolds stress as
\[\mathring{\bar{R}}_{q}=\partial_{t}\chi_{i}(\mathcal{R}\operatorname{curl})( z_{i}-z_{i+1})-\chi_{i}(1-\chi_{i})(u_{i}-u_{i+1})\mathring{\otimes}(u_{i}-u_{i+1})\]
and note that \(\mathcal{R}\operatorname{curl}\) is a zero-order operator of Calderon-Zygmund type, for which Schauder estimates are valid. Therefore we obtain, again applying Proposition 6.5,
\[\|\mathring{\bar{R}}_{q}\|_{C^{N+\alpha}} \lesssim\tau_{q}^{-1}\|z_{i}-z_{i+1}\|_{C^{N+\alpha}}+\|u_{i}-u_ {i+1}\|_{C^{N+\alpha}}\|u_{i}-u_{i+1}\|_{C^{\alpha}}\] \[\lesssim\mathring{\delta}_{q+1}\ell_{q}^{-N-2\alpha}+\tau_{q}^{2} \mathring{\delta}_{q+1}^{2}\ell_{q}^{-2-N-4\alpha}\,.\]
Next, differentiating the expression for \(\mathring{\bar{R}}_{q}\) as in the proof of [1, Proposition 4.4], we obtain
\[\|(\partial_{t}+u_{\ell}\cdot\nabla)\mathring{\bar{R}}_{q}\|_{C^{ N+\alpha}} \lesssim\tau_{q}^{-2}\|z_{i}-z_{i+1}\|_{C^{N+\alpha}}+\tau_{q}^{- 1}\|(\partial_{t}+u_{\ell}\cdot\nabla)(z_{i}-z_{i+1})\|_{C^{N+\alpha}}\] \[+\tau_{q}^{-1}\|u_{\ell}\|_{C^{1+\alpha}}\|z_{i}-z_{i+1}\|_{C^{N+ \alpha}}+\tau_{q}^{-1}\|u_{\ell}\|_{C^{1+N+\alpha}}\|z_{i}-z_{i+1}\|_{C^{ \alpha}}\] \[+\tau_{q}^{-1}\|u_{i}-u_{i+1}\|_{C^{N+\alpha}}\|u_{i}-u_{i+1}\|_{ C^{\alpha}}\] \[+\|(\partial_{t}+u_{\ell}\cdot\nabla)(u_{i}-u_{i+1})\|_{C^{N+ \alpha}}\|u_{i}-u_{i+1}\|_{C^{N+\alpha}}\,.\]
Using again Proposition 6.5 we deduce
\[\|(\partial_{t}+u_{\ell}\cdot\nabla)\mathring{\bar{R}}_{q}\|_{C^{N+\alpha}} \lesssim\tau_{q}^{-1}\mathring{\delta}_{q+1}\ell_{q}^{-N-3\alpha}+\tau_{q} \mathring{\delta}_{q+1}^{2}\ell_{q}^{-2-N-4\alpha}\,.\]
Finally, following the proof of [1, Proposition 4.5] we have
\[\left|\frac{d}{dt}\int_{\mathbb{T}^{3}}|u_{i}|^{2}-|u_{\ell}|^{2}\,dx\right| \lesssim\|u_{\ell}\|_{C^{1}}\|\mathring{R}_{\ell}\|_{C^{0}}\lesssim\mathring{ \delta}_{q+1}\delta_{q}^{\sfrac{1}{2}}\lambda_{q},\]
so that
\[\left|\int_{\mathbb{T}^{3}}|u_{i}|^{2}-|u_{\ell}|^{2}\,dx\right|\lesssim\tau_{ q}\mathring{\delta}_{q+1}\delta_{q}^{\sfrac{1}{2}}\lambda_{q}.\]
On the other hand
\[\int_{\mathbb{T}^{3}}|u_{i}-u_{i+1}|^{2}\,dx\lesssim\|u_{i}-u_{i+1}\|_{C^{ \alpha}}^{2}.\]
Using the identity from the proof of [1, Proposition 4.5]
\[|\bar{u}_{q}|^{2}-|u_{\ell}|^{2}=\chi_{i}(|u_{i}|^{2}-|u_{\ell}|^{2})+(1-\chi_ {i})(|u_{i+1}|^{2}-|u_{\ell}|^{2})-\chi_{i}(1-\chi_{i})|u_{i}-u_{i+1}|^{2}\,,\]
and Proposition 6.5, we deduce (6.26).
We conclude this section with the following summary of the mollification/gluing steps:
**Corollary 6.7**.: _Let \((u_{q},\mathring{\bar{R}}_{q})\) be a smooth solution of (2.1) satisfying the inductive assumptions (2.4)-(2.6). Then there exists another smooth solution \((\bar{u}_{q},\mathring{\bar{R}}_{q})\) of (2.1) with the support condition (6.21) such that the following estimates hold:_
\[\|\bar{u}_{q}\|_{C^{N+1}} \lesssim\delta_{q}^{\sfrac{1}{2}}\lambda_{q}\ell_{q}^{-N} \tag{6.27}\] \[\|\mathring{\bar{R}}_{q}\|_{C^{N+\alpha}} \lesssim\mathring{\delta}_{q+1}\ell_{q}^{-N-2\alpha}\,,\] (6.28) \[\|(\partial_{t}+\bar{u}_{q}\cdot\nabla)\mathring{\bar{R}}_{q}\|_ {C^{N+\alpha}} \lesssim\tau_{q}^{-1}\mathring{\delta}_{q+1}\ell_{q}^{-N-2\alpha}\,,\] (6.29) \[\left|\int_{\mathbb{T}^{3}}|\bar{u}_{q}|^{2}-|u_{q}|^{2}\,dx\right| \lesssim\mathring{\delta}_{q+1}\,, \tag{6.30}\]
_and moreover the vector potentials satisfy_
\[\|\bar{z}_{q}-z_{q}\|_{C^{\alpha}}\lesssim\tau_{q}\mathring{\delta}_{q+1}\ell_{q}^{-\alpha}\,. \tag{6.31}\]
**Remark 6.8**.: _It is useful to compare these estimates with the corresponding bounds obtained in the mollification/gluing steps in [1], namely the bounds in [1, (4.7), (4.10), (4.11), (4.12)]. For this comparison let us denote the respective parameters in [1] (defined in our case by the exponents \(\gamma_{R},\gamma_{L},\gamma_{T}\)) by \(\mathring{\delta}_{q+1}^{old},\,\ell_{q}^{old},\,\tau_{q}^{old}\), so that, comparing with [1, (2.7), (2.19), (2.26)], we have_
\[\mathring{\delta}_{q+1}^{old}=3\alpha,\quad\ell_{q}^{old}=(b-1)\beta+\tfrac{3 }{2}\alpha,\quad\tau_{q}^{old}=2\alpha(1+\gamma_{L}^{old}).\]
_It is not difficult to see that (6.27), (6.28) and (6.30) are sharper bounds than the corresponding bounds [1, (4.7), (4.10), (4.12)], provided \(\gamma_{L}<\gamma_{L}^{old}\) and \(\gamma_{R}>3\alpha(1+\gamma_{L}^{old})\), in particular if_
\[\gamma_{L}<(b-1)\beta\,\text{ and }\alpha>0\text{ is sufficiently small,}\]
_in agreement with (6.1). In contrast, estimate (6.29) would only be sharper than [1, (4.11)] if \(\gamma_{R}<\gamma_{R}^{(old)}\), a condition which we will not assume, because we will need a better bound from (6.31)._
### Perturbation step
The construction of the new vector field \(u_{q+1}=\bar{u}_{q}+w_{q+1}\) is done in [1, Section 5.2] and [1, Section 5.3]. We start by recalling the main steps.
First we define space-time cutoff functions \(\eta_{i}\), adapted to the temporal support of \(\mathring{\bar{R}}_{q}\) in (6.21), and supported in "squiggling stripes", as done in [1, Lemma 5.3]. We start with the following construction, which is independent of \(q\):
**Lemma 6.9**.: _There exist two geometric constants \(c_{0},c_{1}>0\) and a family of smooth nonnegative functions \(\bar{\eta}_{i}\in C^{\infty}(\mathbb{T}^{3}\times\mathbb{R})\) with the following properties:_
* (i) \(0\leq\bar{\eta}_{i}(x,t)\leq 1\);
* (ii) \(\operatorname{supp}\bar{\eta}_{i}\cap\operatorname{supp}\bar{\eta}_{j}=\emptyset\) for \(i\neq j\);
* (iii) \(\mathbb{T}^{3}\times(i+\tfrac{1}{3},i+\tfrac{2}{3})\subset\{(x,t):\bar{\eta}_{i}(x,t)=1\}\);
* (iv) \(\operatorname{supp}\bar{\eta}_{i}\subset\mathbb{T}^{3}\times(i-\tfrac{1}{3},i+\tfrac{4}{3})\).

_Moreover, the function \(\bar{\eta}(x,t):=\sum_{i}\bar{\eta}_{i}(x,t)\) is \(1\)-periodic in \(t\) and satisfies_

* (v) \(\fint_{\mathbb{T}^{3}}\bar{\eta}^{2}(x,t)\,dx=c_{0}\) for all \(t\);
* (vi) \(\int_{0}^{1}\bar{\eta}^{2}(x,s)\,ds=c_{1}\) for all \(x\).
Proof.: We start by following the proof of [1, Lemma 5.3] and choose a suitable \(h\in C_{c}^{\infty}(0,1)\) such that, setting
\[h_{i}(x,t):=h\left(t-\tfrac{1}{6}\sin(2\pi x_{1})-i\right) \tag{6.32}\]
the family of functions \(\{h_{i}\}\) satisfies (i)-(iv) above, and there exists a geometric constant \(c_{0}>0\) such that
\[\sum_{i}\fint_{\mathbb{T}^{3}}h_{i}^{2}(x,t)\,dx\geq c_{0}\qquad\text{ for all }t. \tag{6.33}\]
Then define
\[\bar{\eta}_{i}(x,t)=\left(\frac{1}{c_{0}}\sum_{j}\fint_{\mathbb{T}^{3}}h_{j}^{2}(y,t)\,dy\right)^{-\sfrac{1}{2}}h_{i}(x,t)\,,\]
and
\[\bar{\eta}(x,t)=\sum_{i}\bar{\eta}_{i}(x,t).\]
Then \(\bar{\eta}_{i}\) satisfies (i)-(iv), whereas \(t\mapsto\bar{\eta}(x,t)\) is \(1\)-periodic and satisfies (v). Finally, from (6.32) we see that \(\bar{\eta}_{i}(x,t)=\bar{\eta}_{i}(0,t-\tfrac{1}{6}\sin(2\pi x_{1}))\), and consequently
\[\bar{\eta}(x,t)=\bar{\eta}(0,t-\tfrac{1}{6}\sin(2\pi x_{1})).\]
But then the \(1\)-periodicity of \(t\mapsto\bar{\eta}(x,t)\) implies that \(\int_{0}^{1}\bar{\eta}^{2}(x,t)\,dt\) is independent of \(x\). This shows (vi).
The constants \(c_{0},c_{1}>0\) in Lemma 6.9 determine our choice of \(\bar{e}>0\):
**Definition 6.10**.: _The constant \(\bar{e}\) in (2.6) is defined to be \(\bar{e}=\frac{3c_{0}}{c_{1}}\), where \(c_{0},c_{1}\) are the universal constants in conditions (v) and (vi) of Lemma 6.9._
Now we are ready to define the family of cutoff functions \(\eta_{i}\), in analogy with [1]: let
\[\eta_{i}(x,t)=\bar{\eta}_{i}(x,\tau_{q}^{-1}t),\quad\eta(x,t)=\bar{\eta}(x, \tau_{q}^{-1}t). \tag{6.34}\]
It is easy to see that \(\eta_{i}\) have the properties:
* \(0\leq\eta_{i}(x,t)\leq 1\);
* \(\operatorname{supp}\eta_{i}\cap\operatorname{supp}\eta_{j}=\emptyset\) for \(i\neq j\);
* \(\mathbb{T}^{3}\times I_{i}\subset\{(x,t):\eta_{i}(x,t)=1\}\), where \(I_{i}=(t_{i}+\tfrac{1}{3}\tau_{q},t_{i}+\tfrac{2}{3}\tau_{q})\);
* \(\operatorname{supp}\eta_{i}\subset\mathbb{T}^{3}\times\tilde{I}_{i}\), where \(\tilde{I}_{i}:=(t_{i}-\tfrac{1}{3}\tau_{q},t_{i+1}+\tfrac{1}{3}\tau_{q})\);
* For any \(m,n\in\mathbb{N}\) we have the estimate \[\|\partial_{t}^{m}\eta_{i}\|_{C^{n}}\lesssim\tau_{q}^{-m}.\] (6.35)
The second step is to introduce a scalar function of time, which acts as the trace of the Reynolds stress tensor. In our case this will be defined as
\[\sigma_{q}(t):=\frac{1}{3c_{0}}\left(e(t)-\fint_{\mathbb{T}^{3}}|\bar{u}_{q}|^{2}\,dx-\bar{e}\delta_{q+2}\right). \tag{6.36}\]
We remark that the notation for this function in [1] is \(\rho_{q}(t)\), but in this paper we reserve \(\rho\) to denote density in subsequent sections. Moreover, in [1] the definition involves \(\delta_{q+2}/2\) rather than \(\bar{e}\delta_{q+2}\); this difference is related to our sharper inductive estimate (2.6). In particular, this leads to the following bound, which follows from (2.6) and (6.26):
\[\left|\sigma_{q}(t)-\frac{\bar{e}}{3c_{0}}\delta_{q+1}\right|\lesssim\delta_{q+ 1}(\lambda_{q}^{-\gamma_{E}}+\lambda_{q}^{-\gamma_{R}}+\lambda_{q}^{-(b-1) \beta})\,. \tag{6.37}\]
Next, we introduce, as in [1, Section 5.2], the localized versions of the Reynolds stress as
\[R_{q,i}=\eta_{i}^{2}(\sigma_{q}\mathrm{Id}-\mathring{\bar{R}}_{q}),\quad\tilde{R}_{q,i}=\frac{\nabla\Phi_{i}R_{q,i}\nabla\Phi_{i}^{T}}{\sigma_{q,i}},\quad\sigma_{q,i}=\eta_{i}^{2}\sigma_{q} \tag{6.38}\]
where \(\Phi_{i}\) is the backward flow map for the velocity field \(\bar{u}_{q}\), defined as the solution of the transport equation
\[(\partial_{t}+\bar{u}_{q}\cdot\nabla)\Phi_{i} =0\] \[\Phi_{i}(x,t_{i}) =x.\]
By our choice of \(\eta_{i}\) and \(\tau_{q}\) (cf. (6.16)) the backward flow \(\Phi_{i}\) is well-defined in the support of \(\eta_{i}\) and satisfies the estimate
\[\|\nabla\Phi_{i}-\mathrm{Id}\|_{C^{0}}\lesssim\tau_{q}\|\bar{u}_{q}\|_{C^{1}} \lesssim\lambda_{q}^{-\gamma_{T}}. \tag{6.39}\]
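For completeness (this step is standard and not spelled out in [1]): differentiating the transport equation in space gives \((\partial_{t}+\bar{u}_{q}\cdot\nabla)\nabla\Phi_{i}=-\nabla\Phi_{i}\nabla\bar{u}_{q}\), so along the flow Grönwall's inequality yields
\[\|\nabla\Phi_{i}(t)-\mathrm{Id}\|_{C^{0}}\leq e^{|t-t_{i}|\,\|\nabla\bar{u}_{q}\|_{C^{0}}}-1\lesssim\tau_{q}\|\bar{u}_{q}\|_{C^{1}}\qquad\text{for }|t-t_{i}|\lesssim\tau_{q},\]
which is (6.39).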
In particular, we have the following analogue of [1, Lemma 5.4]:
**Lemma 6.11**.: _For \(a\gg 1\) sufficiently large we have_
\[\|\nabla\Phi_{i}-\mathrm{Id}\|_{C^{0}} \leq 1/2\quad\text{ for }t\in\tilde{I}_{i}. \tag{6.40}\] \[|\sigma_{q}(t)-\tfrac{1}{3}\bar{e}\delta_{q+1}| \leq\tfrac{1}{9}\bar{e}\delta_{q+1}\quad\text{ for all }t\,. \tag{6.41}\]
_and for any \(N\geq 0\)_
\[\|\sigma_{q,i}\|_{C^{N}} \lesssim\delta_{q+1}\,, \tag{6.42}\] \[\|\partial_{t}\sigma_{q,i}\|_{C^{N}} \lesssim\delta_{q+1}\tau_{q}^{-1}\,. \tag{6.43}\]
_Moreover, we also have, for any \(t\in\tilde{I}_{i}\)_
\[\left|\frac{R_{q,i}}{\sigma_{q,i}}-\mathrm{Id}\right|=\left|\sigma_{q}^{-1}\mathring{\bar{R}}_{q}\right|\lesssim\lambda_{q}^{-\gamma_{R}/2}. \tag{6.44}\]
_In particular, for \(a\gg 1\) sufficiently large and for all \((x,t)\)_
\[\tilde{R}_{q,i}(x,t)\in B_{\nicefrac{{1}}{{2}}}(\mathrm{Id})\subset\mathcal{ S}_{+}^{3\times 3}\,,\]
_where \(B_{\nicefrac{{1}}{{2}}}(\mathrm{Id})\) denotes the metric ball of radius \(1/2\) around the identity \(\mathrm{Id}\) in the space \(\mathcal{S}^{3\times 3}\)._
Proof.: The proof follows closely the proof of [1, Lemma 5.4]. In particular (6.40) follows from (6.39) and (6.41) follows from (6.37). The estimates (6.42)-(6.43) can be obtained as in [1, (5.13)-(5.15)]. Indeed, we start by using equation (2.1) to estimate
\[\left|\frac{d}{dt}\int|\bar{u}_{q}(x,t)|^{2}\ dx\right|=\left|2\int\nabla\bar{u}_{q}\cdot\mathring{\bar{R}}_{q}\,dx\right|\lesssim\delta_{q+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q},\]
so that
\[|\tfrac{d}{dt}\sigma_{q}(t)|\lesssim\|\tfrac{d}{dt}e\|_{C^{0}}+\delta_{q+1} \delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\lesssim\delta_{q+1}\tau_{q}^{-1},\]
where we assume \(a\gg 1\) is sufficiently large to absorb the term \(\|\tfrac{d}{dt}e\|_{C^{0}}\). Then we use (6.35) to conclude the bounds (6.42)-(6.43). The estimate (6.44) follows directly from (2.14) and (6.4). Consequently, the bound on the range of \(\tilde{R}_{q,i}\) follows from (6.39) and by choosing \(a\gg 1\) sufficiently large.
With Lemma 6.11 and the definitions in (6.38) we define the new perturbation \(w_{q+1}\), precisely as in [1, Section 5.3], as follows:7
Footnote 7: here we use the calculus identities \(\operatorname{curl}[\nabla\Phi^{T}U(\Phi)]=\nabla\Phi^{-1}(\operatorname{curl}U )(\Phi)\) and \(\operatorname{curl}(\varphi F)=\varphi\operatorname{curl}F+\nabla\varphi\times F\).
\[w_{q+1} =\frac{1}{\lambda_{q+1}}\operatorname{curl}\left[\sum_{i}\sum_{ \vec{k}\in\Lambda}\sigma_{q,i}^{\sfrac 12}a_{\vec{k}}(\tilde{R}_{q,i})\nabla\Phi_{i}^{T}U_{\vec{k}}(\lambda_{q+1}\Phi _{i})\right], \tag{6.45}\] \[=\underbrace{\sum_{i}\sum_{\vec{k}\in\Lambda}\sigma_{q,i}^{\sfrac 12}a_{\vec{k}}(\tilde{R}_{q,i})\nabla\Phi_{i}^{-1}W_{\vec{k}}(\lambda_{q+1} \Phi_{i})}_{w_{o}}+\underbrace{\frac{1}{\lambda_{q+1}}\sum_{i}\sum_{\vec{k}\in \Lambda}\nabla(\sigma_{q,i}^{\sfrac 12}a_{\vec{k}}(\tilde{R}_{q,i}))\times\nabla\Phi_{i}^{T}U_{\vec{k}}(\lambda_{ q+1}\Phi_{i})}_{w_{c}}\]
Note that in the formulas above \(\vec{k}\in\Lambda\) denotes vectors in \(\mathbb{R}^{3}\) and the corresponding sum is finite. In contrast, the notation introduced in [1] is
\[w_{o}=\sum_{i}\sum_{k\in\mathbb{Z}^{3}\setminus\{0\}}(\nabla\Phi_{i})^{-1}b_{i,k}e^{i\lambda_{q+1}k\cdot\Phi_{i}},\quad w_{c}=\sum_{i}\sum_{k\in\mathbb{Z}^{3}\setminus\{0\}}c_{i,k}e^{i\lambda_{q+1}k\cdot\Phi_{i}}, \tag{6.46}\]
where
\[b_{i,k}=\sigma_{q,i}^{\sfrac 12}a_{k}(\tilde{R}_{q,i})A_{k},\quad c_{i,k}=\frac{-i}{ \lambda_{q+1}}\operatorname{curl}\left[\sigma_{q,i}^{\sfrac 12}\frac{\nabla\Phi_{i}^{T}(k\times a_{k}(\tilde{R}_{q,i}))}{|k|^{2}}\right],\]
the index \(k\in\mathbb{Z}^{3}\setminus\{0\}\) denotes the Fourier variable, and \(A_{k}\in\mathbb{C}^{3}\) are complex vectors arising in the Fourier decomposition of Mikado flows, specifically of the functions \(\psi_{\vec{k}}\) in (2.19). In particular, since \(\psi_{\vec{k}}\) is smooth, the Fourier coefficients \(a_{k}\) in the expression for \(b_{i,k},c_{i,k}\), together with all their derivatives, are bounded and have polynomial decay in \(k\) of arbitrary order (cf. [1, (5.5)]). At variance with [1] we will make use of this fact in the form
\[\|a_{k}(\tilde{R}_{q,i})\|_{0}\lesssim|k|^{-\bar{N}-3}\,. \tag{6.47}\]
The representation (6.46) is useful for obtaining estimates for \(w_{q+1}\) and for the new Reynolds stress \(\mathring{R}_{q+1}\), whereas the representation (6.45) will be useful for computing the bulk diffusion coefficient induced by \(w_{q+1}\) in Section 4.
As far as the estimates on \(w_{q+1},\mathring{R}_{q+1}\) are concerned, in light of Remark 6.8 all estimates in [1, Section 5.3-5.5] which do not use transport derivatives remain valid. These are (cf. [1, Lemma 5.5 and Proposition 5.7]):
**Lemma 6.12**.: _There is a geometric constant \(\bar{M}\) such that_
\[\|b_{i,k}\|_{0}\leq\bar{M}\delta_{q+1}^{\sfrac 12}|k|^{-\bar{N}-3}\,. \tag{6.48}\]
_Moreover, for \(t\in\tilde{I}_{i}\) and any \(N\geq 0\)_
\[\left\|(\nabla\Phi_{i})^{-1}\right\|_{C^{N}}+\left\|\nabla\Phi_{ i}\right\|_{C^{N}} \lesssim\ell_{q}^{-N}\,, \tag{6.49}\] \[\left\|\sigma_{q,i}^{-1}R_{q,i}\right\|_{C^{N}}+\left\|\tilde{R}_{q,i}\right\|_{C^{N}} \lesssim\ell_{q}^{-N}\,,\] (6.50) \[\left\|b_{i,k}\right\|_{C^{N}} \lesssim\delta_{q+1}^{\sfrac 12}|k|^{-\bar{N}-3}\ell_{q}^{-N}\,,\] (6.51) \[\left\|c_{i,k}\right\|_{C^{N}} \lesssim\delta_{q+1}^{\sfrac 12}\lambda_{q+1}^{-1}|k|^{-\bar{N}-3}\ell_{q}^{-N-1}\,. \tag{6.52}\]
Proof.: The estimate (6.49) follows from (6.40) and (2.13). Let us denote \(D_{t}=\partial_{t}+\bar{u}_{q}\cdot\nabla\). Since \(D_{t}\Phi_{i}=0\) by definition, for \(N\geq 1\) we have
\[\|D_{t}\nabla\Phi_{i}\|_{C^{N}} \lesssim\|\nabla\bar{u}_{q}^{T}\nabla\Phi_{i}\|_{C^{N}}\lesssim \|\nabla\bar{u}_{q}\|_{C^{0}}\|\nabla\Phi_{i}\|_{C^{N}}+\|\nabla\bar{u}_{q}\|_ {C^{N}}\|\nabla\Phi_{i}\|_{C^{0}}\] \[\lesssim\tau_{q}^{-1}\|\nabla\Phi_{i}\|_{C^{N}}+\tau_{q}^{-1} \ell_{q}^{-N}.\]
We deduce (6.49) from here using Gronwall's inequality. Next, using (6.38) we write
\[\sigma_{q,i}^{-1}R_{q,i}=\mathrm{Id}-\sigma_{q}^{-1}\mathring{\bar{R}}_{q},\quad\tilde{R}_{q,i}=\nabla\Phi_{i}(\mathrm{Id}-\sigma_{q}^{-1}\mathring{\bar{R}}_{q})\nabla\Phi_{i}^{T}\,. \tag{6.53}\]
Then, applying (2.14) and (6.41) we obtain
\[\left\|\sigma_{q,i}^{-1}R_{q,i}\right\|_{C^{N}}\lesssim\delta_{q+1}^{-1}\left\|\mathring{\bar{R}}_{q}\right\|_{C^{N+\alpha}}\lesssim\frac{\mathring{\delta}_{q+1}}{\delta_{q+1}}\ell_{q}^{-N-2\alpha}\lesssim\ell_{q}^{-N},\]
where in the last inequality we used (6.4). Similarly we obtain the estimate for \(\tilde{R}_{q,i}\), leading to (6.50).
The estimates (6.51) and (6.52) follow directly from (6.42) and (6.47) as well as the above.
Because we require inductive estimates on \(\bar{N}\geq 1\) derivatives of \(u_{q}\), [1, Corollary 5.9] is replaced by
**Lemma 6.13**.: _Under the assumption (6.1) and assuming \(a\gg 1\) is sufficiently large, we have for any \(N=0,1,\ldots,\bar{N}\)_
\[\left\|w_{o}\right\|_{C^{N}} \leq\tilde{M}\delta_{q+1}^{\sfrac{1}{2}}\lambda_{q+1}^{N}\,,\] \[\left\|w_{c}\right\|_{C^{N}} \lesssim\delta_{q+1}^{\sfrac{1}{2}}\ell_{q}^{-1}\lambda_{q+1}^{-1 }\lambda_{q+1}^{N}\,,\] \[\left\|w_{q+1}\right\|_{C^{N}} \leq 2\tilde{M}\delta_{q+1}^{\sfrac{1}{2}}\lambda_{q+1}^{N}\,,\]
_where the constant \(\tilde{M}\) depends on \(\bar{N}\) and \(\bar{M}\)._
Proof.: We use the representation in (6.46). First of all, using the chain rule we obtain
\[\|e^{i\lambda_{q+1}k\cdot\Phi_{i}}\|_{C^{m}}\leq\lambda_{q+1}^{m}|k|^{m}\| \nabla\Phi_{i}\|_{C^{0}}^{m}+\sum_{j<m,\theta}C_{j,m}\lambda_{q+1}^{j}|k|^{j} \|\nabla\Phi_{i}\|_{C^{0}}^{\theta_{1}}\cdot\cdots\cdot\|\nabla\Phi_{i}\|_{C^ {m-1}}^{\theta_{m}}\]
for some constants \(C_{j,m}\) (binomial coefficients), where the sum is over \(1\leq j<m\) and multi-indices \(\theta\) with \(m=\theta_{1}+2\theta_{2}+\cdots+m\theta_{m}\) and \(j=\theta_{1}+\cdots+\theta_{m}\). Then, using Lemma 6.12 we deduce
\[\|e^{i\lambda_{q+1}k\cdot\Phi_{i}}\|_{C^{m}}\lesssim\lambda_{q+1}^{m}|k|^{m}+ \lambda_{q+1}|k|\ell_{q}^{1-m}.\]
However, from (6.1) it follows in particular that \(\gamma_{L}<(b-1)\), hence \(\ell_{q}^{-1}<\lambda_{q+1}\), so that we deduce
\[\|e^{i\lambda_{q+1}k\cdot\Phi_{i}}\|_{C^{m}}\lesssim\lambda_{q+1}^{m}|k|^{m}.\]
By applying the product rule and Lemma 6.12 we then conclude that there exists \(\tilde{M}\) such that
\[\|w_{o}\|_{C^{m}}\leq\tilde{M}\delta_{q+1}^{\sfrac{1}{2}}\lambda_{q+1}^{m} \quad\text{ for all }m=0,1,\ldots,\bar{N}.\]
The estimate on \(w_{c}\) follows directly from Lemma 6.12.
**Definition 6.14**.: _The constant \(M\) in (2.5) is defined as \(M:=4\tilde{M}\), where \(\tilde{M}\) is the constant in Lemma 6.13._
Finally, coming to estimates involving time-derivatives, we have the following variant of [1, Proposition 5.9]:
**Lemma 6.15**.: _For any \(t\in\tilde{I}_{i}\) and \(N\geq 0\) we have_
\[\|D_{t}\nabla\Phi_{i}\|_{C^{N}} \lesssim\delta_{q}^{1/2}\lambda_{q}\ell_{q}^{-N}\,,\] \[\|D_{t}\sigma_{q,i}\|_{C^{N}} \lesssim\delta_{q+1}\tau_{q}^{-1}\ell_{q}^{-N}\,,\] \[\|D_{t}\tilde{R}_{q,i}\|_{C^{N}} \lesssim\tau_{q}^{-1}\ell_{q}^{-N}\,,\] \[\|D_{t}c_{i,k}\|_{C^{N}} \lesssim\delta_{q+1}^{\sfrac{1}{2}}\tau_{q}^{-1}\lambda_{q+1}^{-1 }\ell_{q}^{-N-1}|k|^{-\bar{N}-3}\,,\]
_where \(D_{t}=\partial_{t}+\bar{u}_{q}\cdot\nabla\)._
Proof.: The proof follows [1, Proposition 5.9] using this time Lemma 6.11 and Lemma 6.12. In particular, using the expressions for the \(D_{t}\) derivatives, we have
\[\|D_{t}\nabla\Phi_{i}\|_{C^{N}} \lesssim\|\nabla\Phi_{i}\nabla\bar{u}_{q}\|_{C^{N}}\lesssim \delta_{q}^{\sfrac{1}{2}}\lambda_{q}\ell_{q}^{-N}\lesssim\tau_{q}^{-1}\ell_{q }^{-N}\,,\] \[\|D_{t}\sigma_{q,i}\|_{C^{N}} \lesssim\|\partial_{t}\sigma_{q,i}\|_{C^{N}}+\|\sigma_{q,i}\|_{C ^{N+1}}\|\bar{u}_{q}\|_{C^{1}}+\|\sigma_{q,i}\|_{C^{1}}\|\bar{u}_{q}\|_{C^{N}}\] \[\lesssim\delta_{q+1}\tau_{q}^{-1}+\delta_{q+1}\delta_{q}^{\sfrac {1}{2}}\lambda_{q}+\delta_{q+1}\delta_{q}^{\sfrac{1}{2}}\lambda_{q}\ell_{q}^{- N+1}\] \[\lesssim\delta_{q+1}\tau_{q}^{-1}\ell_{q}^{-N}\,.\]
Further, using (6.38) we write \(\sigma_{q,i}^{-1}R_{q,i}=\operatorname{Id}-\sigma_{q}^{-1}\mathring{\bar{R} }_{q}\) and compute
\[\|D_{t}(\sigma_{q,i}^{-1}R_{q,i})\|_{C^{N}} \lesssim\|\sigma_{q}^{-1}\partial_{t}\sigma_{q}\mathring{\bar{R} }_{q}\|_{C^{N+\alpha}}+\|\sigma_{q}^{-1}D_{t}\mathring{\bar{R}}_{q}\|_{C^{N+ \alpha}}\] \[\lesssim\frac{\mathring{\delta}_{q+1}}{\delta_{q+1}}\tau_{q}^{-1 }\ell_{q}^{-N-2\alpha}\lesssim\tau_{q}^{-1}\ell_{q}^{-N}\,,\]
where we again used (6.4). Then, using Lemma 6.12,
\[\|D_{t}\tilde{R}_{q,i}\|_{C^{N}} \lesssim\|D_{t}\nabla\Phi_{i}\|_{C^{N}}\|\sigma_{q,i}^{-1}R_{q,i} \|_{C^{0}}+\|D_{t}\nabla\Phi_{i}\|_{C^{0}}\|\sigma_{q,i}^{-1}R_{q,i}\|_{C^{N}}+\] \[+\|D_{t}\nabla\Phi_{i}\|_{C^{0}}\|\sigma_{q,i}^{-1}R_{q,i}\|_{C^{ 0}}\|\nabla\Phi_{i}\|_{C^{N}}+\] \[+\|D_{t}(\sigma_{q,i}^{-1}R_{q,i})\|_{C^{N}}+\|D_{t}(\sigma_{q,i} ^{-1}R_{q,i})\|_{C^{0}}\|\nabla\Phi_{i}\|_{C^{N}}\] \[\lesssim\tau_{q}^{-1}\ell_{q}^{-N}\,.\]
The estimate for \(D_{t}c_{i,k}\) follows again from (6.47), Lemma 6.11, Lemma 6.12 and the above.
Having obtained the analogous estimates for the perturbation \(w_{q+1}\), the estimates on the new Reynolds stress \(\mathring{R}_{q+1}\) proceed precisely as in [1, Section 6.1]. We set (cf. [1, (5.21)])
\[\mathring{R}_{q+1}=\underbrace{\mathcal{R}\left(w_{q+1}\cdot\nabla\bar{u}_{q }\right)}_{\text{Nash error}}+\underbrace{\mathcal{R}\left(\partial_{t}w_{q+ 1}+\bar{u}_{q}\cdot\nabla w_{q+1}\right)}_{\text{Transport error}}+\underbrace{ \mathcal{R}\operatorname{div}\left(-\bar{R}_{q}+(w_{q+1}\otimes w_{q+1}) \right)}_{\text{Oscillation error}}. \tag{6.54}\]
where
\[\bar{R}_{q}=\sum_{i}R_{q,i}\,.\]
With this definition one may verify that
\[\left\{\begin{array}{l}\partial_{t}u_{q+1}+\operatorname{div}(u_{q+1} \otimes u_{q+1})+\nabla p_{q+1}=\operatorname{div}(\mathring{R}_{q+1})\,,\\ \operatorname{div}u_{q+1}=0\,,\end{array}\right.\]
where the new pressure is defined by
\[p_{q+1}(x,t)=\bar{p}_{q}(x,t)-\sum_{i}\sigma_{q,i}(x,t)+\sigma_{q}(t).\]
The analogue of [1, Proposition 6.1] for estimating the new Reynolds stress is
**Proposition 6.16**.: _The Reynolds stress error \(\mathring{R}_{q+1}\) satisfies the estimate_
\[\left\|\mathring{R}_{q+1}\right\|_{0}\lesssim\frac{\delta_{q+1}^{\nicefrac{{1 }}{{2}}}}{\tau_{q}\lambda_{q+1}^{1-\alpha}}\,. \tag{6.55}\]
Proof.: We follow the proof of [1, Proposition 6.1] and estimate each term in (6.54) separately. Concerning the _Nash term_, we have, as in [1], for any \(N\in\mathbb{N}\)
\[\left\|\mathcal{R}\left(w_{q+1}\cdot\nabla\bar{u}_{q}\right) \right\|_{\alpha}\lesssim \sum_{k\in\mathbb{Z}^{3}\setminus\{0\}}\frac{\delta_{q+1}^{ \nicefrac{{1}}{{2}}}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}}{\lambda_{q +1}^{1-\alpha}|k|^{\bar{N}+3}}+\frac{\delta_{q+1}^{\nicefrac{{1}}{{2}}}\delta _{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}}{\lambda_{q+1}^{N-\alpha}\ell_{q}^{N+ \alpha}|k|^{\bar{N}+3}}\] \[+\sum_{k\in\mathbb{Z}^{3}\setminus\{0\}}\frac{\delta_{q+1}^{ \nicefrac{{1}}{{2}}}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}}{\ell_{q} \lambda_{q+1}^{2-\alpha}|k|^{\bar{N}+3}}+\frac{\delta_{q+1}^{\nicefrac{{1}}{{2 }}}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}}{\ell_{q}^{N+1-\alpha}\lambda_{ q+1}^{N+1-\alpha}|k|^{\bar{N}+3}}\]
where we have used the representation (6.46), Lemma 6.12 and the stationary phase estimate [1, Proposition C.2]. We claim that it is possible to choose \(N\geq 1\) so that
\[\lambda_{q+1}^{N-1}\ell_{q}^{N+\alpha}>1.\]
Using (6.1), this follows provided
\[N(b-1)(1-\beta)>b+\alpha(1+\gamma_{L}). \tag{6.56}\]
In turn, with this choice of \(N\), using (6.1) again to obtain \(\lambda_{q+1}\ell_{q}>1\), and using \(\bar{N}\geq 2\), we deduce
\[\left\|\mathcal{R}\left(w_{q+1}\cdot\nabla\bar{u}_{q}\right)\right\|_{\alpha} \lesssim\frac{\delta_{q+1}^{\nicefrac{{1}}{{2}}}\delta_{q}^{\nicefrac{{1}}{{2 }}}\lambda_{q}}{\lambda_{q+1}^{1-\alpha}}\,. \tag{6.57}\]
Concerning the _transport error_ we write, for \(t\in\tilde{I}_{i}\),
\[\begin{split}(\partial_{t}+\bar{u}_{q}\cdot\nabla)w_{o}=& \sum_{i,k}(\nabla\bar{u}_{q})^{T}(\nabla\Phi_{i})^{-1}b_{i,k}e^{i \lambda_{q+1}k\cdot\Phi_{i}}\\ &\quad+\sum_{i,k}(\nabla\Phi_{i})^{-1}(\partial_{t}+\bar{u}_{q} \cdot\nabla)\left(\sigma_{q,i}^{\nicefrac{{1}}{{2}}}a_{k}(\tilde{R}_{q,i} )\right)e^{i\lambda_{q+1}k\cdot\Phi_{i}}\,.\end{split} \tag{6.58}\]
As in [1] we obtain, arguing again as above with a sufficiently large \(N\) satisfying (6.56),
\[\left\|\mathcal{R}\left((\nabla\bar{u}_{q})^{T}(\nabla\Phi_{i})^{-1}b_{i,k}e^{ i\lambda_{q+1}k\cdot\Phi_{i}}\right)\right\|_{\alpha}\lesssim\frac{\delta_{q+1}^{ \nicefrac{{1}}{{2}}}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}}{\lambda_{q+1 }^{1-\alpha}|k|^{\bar{N}+3}}\]
whereas, using Lemma 6.15,
\[\left\|\mathcal{R}\left((\nabla\Phi_{i})^{-1}(\partial_{t}+\bar{u}_{q}\cdot \nabla)(\sigma_{q,i}^{\nicefrac{{1}}{{2}}}a_{k}(\tilde{R}_{q,i}))e^{i \lambda_{q+1}k\cdot\Phi_{i}}\right)\right\|_{\alpha}\lesssim\frac{\delta_{q+1}^ {\nicefrac{{1}}{{2}}}}{\tau_{q}\lambda_{q+1}^{1-\alpha}|k|^{\bar{N}+3}}\]
Moreover, using (6.46), we have
\[(\partial_{t}+\bar{u}_{q}\cdot\nabla)w_{c}= \sum_{i,k}\left((\partial_{t}+\bar{u}_{q}\cdot\nabla)c_{i,k}\right) e^{i\lambda_{q+1}k\cdot\Phi_{i}}\]
and obtain, again using Lemma 6.15 and arguing as above,
\[\left\|\mathcal{R}\left(\left((\partial_{t}+\bar{u}_{q}\cdot\nabla)c_{i,k} \right)e^{i\lambda_{q+1}k\cdot\Phi_{i}}\right)\right\|_{\alpha}\lesssim \frac{\delta_{q+1}^{\sfrac{1}{2}}}{\tau_{q}\ell_{q}\lambda_{q+1}^{2-\alpha}|k| ^{\bar{N}+3}}\lesssim\frac{\delta_{q+1}^{\sfrac{1}{2}}}{\tau_{q}\lambda_{q+1}^ {1-\alpha}|k|^{\bar{N}+3}}\]
We deduce
\[\left\|\mathcal{R}\left(\partial_{t}w_{q+1}+\bar{u}_{q}\cdot\nabla w_{q+1} \right)\right\|_{\alpha}\lesssim\frac{\delta_{q+1}^{\sfrac{1}{2}}}{\tau_{q} \lambda_{q+1}^{1-\alpha}}\,. \tag{6.59}\]
Concerning the _oscillation error_ we argue precisely as in [1] and obtain
\[\left\|\mathcal{R}\operatorname{div}\left(-\bar{R}_{q}+w_{q+1}\otimes w_{q+1} \right)\right\|_{\alpha}\lesssim\frac{\delta_{q+1}}{\ell_{q}\lambda_{q+1}^{1- \alpha}}\,. \tag{6.60}\]
From (6.1) we deduce \(\delta_{q+1}^{\sfrac{1}{2}}\ell_{q}^{-1}<\delta_{q}^{\sfrac{1}{2}}\lambda_{q}\). We also recall \(\gamma_{T}>0\), hence \(\delta_{q}^{\sfrac{1}{2}}\lambda_{q}<\tau_{q}^{-1}\). Consequently, combining (6.57), (6.59) and (6.60) we finally deduce (6.55) as required.
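Spelled out, the comparison of the three contributions uses
\[\frac{\delta_{q+1}}{\ell_{q}\lambda_{q+1}^{1-\alpha}}=\frac{\delta_{q+1}^{\sfrac{1}{2}}}{\lambda_{q+1}^{1-\alpha}}\cdot\frac{\delta_{q+1}^{\sfrac{1}{2}}}{\ell_{q}}<\frac{\delta_{q+1}^{\sfrac{1}{2}}\delta_{q}^{\sfrac{1}{2}}\lambda_{q}}{\lambda_{q+1}^{1-\alpha}}<\frac{\delta_{q+1}^{\sfrac{1}{2}}}{\tau_{q}\lambda_{q+1}^{1-\alpha}}\,.\]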
Finally, the new energy can be estimated, following [1, Section 6.2], as
**Proposition 6.17**.: _The energy of \(u_{q+1}\) satisfies the following estimate:_
\[\left|e(t)-\fint_{\mathbb{T}^{3}}\left|u_{q+1}\right|^{2}\,dx-\bar{e}\delta_{q+2}\right|\lesssim\frac{\delta_{q}^{\sfrac{1}{2}}\delta_{q+1}^{\sfrac{1}{2}}\lambda_{q}}{\lambda_{q+1}}\,. \tag{6.61}\]
Proof.: We argue as in [1]. More precisely, we write
\[\fint_{\mathbb{T}^{3}}\left|u_{q+1}\right|^{2}\,dx=\fint_{\mathbb{T}^{3}}\left|\bar{u}_{q}\right|^{2}\,dx+2\fint_{\mathbb{T}^{3}}w_{q+1}\cdot\bar{u}_{q}\,dx+\fint_{\mathbb{T}^{3}}\left|w_{q+1}(x,t)\right|^{2}\,dx=\fint_{\mathbb{T}^{3}}\left|\bar{u}_{q}\right|^{2}\,dx+\fint_{\mathbb{T}^{3}}\left|w_{o}\right|^{2}\,dx+\mathcal{E}_{1},\]
where, arguing as in [1] using stationary phase and Lemma 6.13,
\[\left|\mathcal{E}_{1}\right|=\left|\fint_{\mathbb{T}^{3}}2w_{q+1}\cdot\bar{u}_{q}+2w_{o}\cdot w_{c}+\left|w_{c}\right|^{2}\,dx\right|\lesssim\frac{\delta_{q+1}^{\sfrac{1}{2}}\delta_{q}^{\sfrac{1}{2}}\lambda_{q}}{\lambda_{q+1}}\]
Similarly,
\[\fint_{\mathbb{T}^{3}}\left|w_{o}\right|^{2}\,dx=\sum_{i}\fint_{\mathbb{T}^{3}}\operatorname{tr}R_{q,i}\,dx+\mathcal{E}_{2}\]
where, using Lemma 6.11, Lemma 6.12 and (6.1),
\[\left|\mathcal{E}_{2}\right|\lesssim\frac{\delta_{q+1}}{\ell_{q}\lambda_{q+1}} \lesssim\frac{\delta_{q+1}^{\sfrac{1}{2}}\delta_{q}^{\sfrac{1}{2}}\lambda_{q}}{ \lambda_{q+1}}.\]
On the other hand, recalling the definition of \(R_{q,i}\) in (6.38) and using property (v) of \(\bar{\eta}_{i}\) in Lemma 6.9 as well as the definition of \(\sigma_{q}(t)\) in (6.36), we have
\[\sum_{i}\fint_{\mathbb{T}^{3}}\operatorname{tr}R_{q,i}(x,t)\,dx=3\sigma_{q}(t)\sum_{i}\fint_{\mathbb{T}^{3}}\eta_{i}^{2}\,dx=3c_{0}\sigma_{q}(t)=e(t)-\fint_{\mathbb{T}^{3}}|\bar{u}_{q}|^{2}\,dx-\bar{e}\delta_{q+2}.\]
The statement of the proposition follows.
We conclude this section with:
Proof of Proposition 2.1.: We need to verify that \(u_{q+1}:=\bar{u}_{q}+w_{q+1}\), with \(\bar{u}_{q}\) from Corollary 6.7, \(w_{q+1}\) defined in (6.45), as well as \(\mathring{R}_{q+1}\) defined in (6.54) satisfy the inductive estimates (2.4)-(2.6) with \(q\) replaced by \(q+1\).
First of all note that (2.5) follows from Lemma 6.13, our choice of \(M\) in Definition 6.14 and by choosing \(a\gg 1\) sufficiently large.
Secondly, (2.4) follows from (6.55) and the inequality
\[C\frac{\delta_{q+1}^{\sfrac{1}{2}}}{\tau_{q}\lambda_{q+1}^{1-\alpha}}<\delta_ {q+2}\lambda_{q+1}^{-\gamma_{R}},\]
where \(C\) is the implicit constant in (6.55). In light of the inequality (6.6), this is satisfied provided \(a\gg 1\) is sufficiently large. Similarly, (2.6) follows from (6.61) and the inequality
\[C\frac{\delta_{q+1}^{\sfrac{1}{2}}\delta_{q}^{\sfrac{1}{2}}\lambda_{q}}{ \lambda_{q+1}}<\delta_{q+2}\lambda_{q+1}^{-\gamma_{E}},\]
where \(C\) is the implicit constant in (6.61). In light of the inequality (6.7) this is satisfied provided \(a\gg 1\) is sufficiently large.
Finally, the estimate (2.10) follows directly from Lemma 6.13. This concludes the proof of Proposition 2.1.
|
2305.18502 | Escaping mediocrity: how two-layer networks learn hard generalized
linear models with SGD | This study explores the sample complexity for two-layer neural networks to
learn a generalized linear target function under Stochastic Gradient Descent
(SGD), focusing on the challenging regime where many flat directions are
present at initialization. It is well-established that in this scenario $n=O(d
\log d)$ samples are typically needed. However, we provide precise results
concerning the pre-factors in high-dimensional contexts and for varying widths.
Notably, our findings suggest that overparameterization can only enhance
convergence by a constant factor within this problem class. These insights are
grounded in the reduction of SGD dynamics to a stochastic process in lower
dimensions, where escaping mediocrity equates to calculating an exit time. Yet,
we demonstrate that a deterministic approximation of this process adequately
represents the escape time, implying that the role of stochasticity may be
minimal in this scenario. | Luca Arnaboldi, Florent Krzakala, Bruno Loureiro, Ludovic Stephan | 2023-05-29T14:40:56Z | http://arxiv.org/abs/2305.18502v2 | # Escaping mediocrity: how two-layer networks learn hard generalized linear models with SGD
###### Abstract
This study explores the sample complexity for two-layer neural networks to learn a single-index target function under Stochastic Gradient Descent (SGD), focusing on the challenging regime where many flat directions are present at initialization. It is well-established that in this scenario \(n=O(d\log d)\) samples are typically needed. However, we provide precise results concerning the pre-factors in high-dimensional contexts and for varying widths. Notably, our findings suggest that overparameterization can only enhance convergence by a constant factor within this problem class. These insights are grounded in the reduction of SGD dynamics to a stochastic process in lower dimensions, where escaping mediocrity equates to calculating an exit time. Yet, we demonstrate that a deterministic approximation of this process adequately represents the escape time, implying that the role of stochasticity may be minimal in this scenario.
## 1 Introduction
In this manuscript we are interested in the supervised task of learning the following target function:
\[y=\sigma_{\star}\left(w_{\star}^{\top}x\right)+\sqrt{\Delta}z,\qquad x\sim\mathcal{N}(0,\nicefrac{{1}}{{d}}\,I_{d}),\quad z\sim\mathcal{N}(0,1) \tag{1}\]
[5]. In particular, this assumption covers only problems with information exponent \(k=1\), excluding hard cases such as quadratic problems. Finally, for \(\sigma(x)=\sigma_{\star}(x)=x^{2}\), [9] has shown that for \(p\) large enough, full-batch gradient flow achieves sample complexity \(n=2d\), although at a running time of \(t=O(\log d)\).
With the exception of [5], the works mentioned above cover the scaling of the sample complexity in the high-dimensional limit. Our goal is, instead, to derive sharp results for the sample complexity of learning (1) with a fully-connected two-layer neural network in the challenging case where \(\sigma_{\star}\) has a vanishing first Hermite coefficient. As discussed above, this case violates the "standard learning scenario" of [5], and can be seen as a proxy for hard learning problems for descent-based algorithms. For concreteness, in the following we focus on the purely quadratic case:
\[y =\left(w_{\star}^{\top}x\right)^{2}+\sqrt{\Delta}z, w_{\star}\in\mathbb{S}^{d-1}(\sqrt{d}) \tag{2}\]
Learning the target (2) consists of learning the non-linearity \(\sigma_{\star}(x)=x^{2}\) and the direction \(w_{\star}\). In this work, we focus our attention on the second part, considering the following architecture with squared activation:
\[f_{\Theta}(x)=\frac{1}{p}\sum_{i=1}^{p}a_{i}(w_{i}^{\top}x)^{2}. \tag{3}\]
where \(\Theta=(a,W)\) is the set of trainable weights, which are trained with one-pass stochastic gradient descent (SGD):
\[\Theta^{\nu+1}=\Theta^{\nu}-\gamma\nabla_{\Theta}\ell(y^{\nu},f_{\Theta^{\nu} }(x^{\nu})) \tag{4}\]
with square loss \(\ell(y,x)=\nicefrac{{1}}{{2}}(y-x)^{2}\) and initial condition \(\Theta^{0}=(a^{0},W^{0})\). Note that at each step \(\nu\), the gradient is evaluated at a fresh pair of data \((x^{\nu},y^{\nu})\in\mathbb{R}^{d+1}\) drawn from the model (2). In particular, this implies that after \(\nu\in[n]\) steps, the algorithm has seen \(n\) data points.
Learning in this problem is hard, and can be compared to finding a needle in a haystack. Indeed, with the exception of one direction that points towards \(\pm w_{\star}\), the population risk at (random) initialization is mostly flat. This slows down the dynamics, which takes a long time to establish a significant correlation with the signal - a scenario we refer to as _escaping mediocrity_.
At first, the particular case of purely quadratic activation might appear too specific. Indeed, as we will see later the population risk for this task has a global maximum at initialization and a degenerate set of global minima. Choosing more general \(\sigma_{\star}\) and \(\sigma\) with vanishing first Hermite coefficient but not necessarily vanishing higher-order coefficients might introduce additional critical points, such as saddle points, leading to a more complex SGD dynamics. However, since the focus of this work is on escaping mediocrity, our conclusions will hold, up to constants, for more general activations with vanishing first Hermite coefficient.
Summary of results --Our main contributions in this manuscript are:
* We derive a deterministic set of ODEs providing an exact and analytically tractable description of the one-pass SGD dynamics in the high-dimensional limit \(d\to\infty\), and characterize the leading order stochastic corrections to this limit.
* We provide an analytical formula for the number of samples required for one-pass SGD to learn the phase retrieval target in high-dimensions at arbitrary network width. We show that overparametrization can only improve convergence by a constant factor for phase retrieval.
* Finally, we compute the leading order stochastic corrections to the exit time, and show that stochasticity does not help escaping the flat directions at initialization. This suggests that the deterministic description is enough to fully capture the phenomenology of the dynamics in this problem.
All the codes used for numerical experiments are provided in this GitHub repository.
Further related work --The investigation of a deterministic high-dimensional limit of one-pass SGD for two-layer neural networks dates back to the seminal works of [10; 11; 12], and was followed by a stream of works spanning decades of research [13; 14; 15; 16; 17; 18; 19; 20]. More recently, the stochastic corrections around fixed points of the dynamics have been investigated by [21]. In a complementary research line, [22; 23; 24; 25; 26] have shown that an alternative deterministic description of SGD can be obtained in the infinite-width limit, a.k.a. mean-field regime. High-dimensional reductions of the mean-field equations have been studied by [5; 27; 28; 29]. Recently, [19; 20] have shown that these apparently different limits of one-pass SGD can be unified in a single description.
There has been a recent surge of interest in studying how increasing degrees of complexity in the target function are incrementally learned by SGD [5; 27; 28; 30; 31], with an emerging staircase picture where complexity is sequentially
learned in different scenarios. This picture, however, is bound to classes of targets where SGD develops strong correlations with the target directions at initialization, a notion which was mathematically formalized by the so-called information exponent (IE) by [2]. Instead, targets for which the landscape at initialization is mostly flat (IE \(\geq 2\)) are hard for SGD at high-dimensions, translating to very slow dynamics. This is precisely the case for the phase retrieval problem (IE \(=2\)), a classic inverse problem arising in many scientific areas, from X-ray crystallography to astronomical imaging [32, 33]. Phase retrieval has been widely studied in the literature as a prototypical example of a hard inverse problem [6, 7, 34, 35, 36, 37, 38], providing a simple yet challenging example of a non-convex optimization problem which is hard for descent-based algorithms [39, 40, 41, 42, 43, 44, 1].
## 2 High-dimensional limit of SGD
In this section we introduce our key theoretical tool, which consists in a low-dimensional reduction of the projected SGD dynamics (4) in the high-dimensional limit \(d\to\infty\) of interest.
Sufficient statistics -The key observation is that the population risk depends on the hidden-layer weights \(W\in\mathbb{R}^{p\times d}\) only through the correlation matrices below, in addition to the second-layer weights \(a\in\mathbb{R}^{p}\):
\[\Omega\coloneqq\begin{pmatrix}Q&m\\ m^{\top}&1\end{pmatrix}=\begin{pmatrix}\nicefrac{{1}}{{d}}WW^{\top}& \nicefrac{{1}}{{d}}Ww_{\star}\\ \nicefrac{{1}}{{d}}\left(Ww_{\star}\right)^{\top}&1\end{pmatrix}\in \mathbb{R}^{(p+1)\times(p+1)}, \tag{5}\]
The explicit expression of the population risk is:
\[\mathcal{R}(\Theta)=\mathbb{E}\left[\ell(y,f_{\Theta}(x))\right]=\frac{ \Delta+3}{2}-\frac{1}{p}\sum_{j=1}^{p}a_{j}\left(Q_{jj}+2m_{j}^{2}\right)+ \frac{1}{2p^{2}}\sum_{j,l=1}^{p}a_{j}a_{l}(Q_{jj}Q_{ll}+2Q_{jl}^{2}) \tag{6}\]
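Since (6) will be used repeatedly below, we note that it is straightforward to evaluate numerically from the sufficient statistics alone; here is a minimal sketch (assuming NumPy; the function name is ours, not taken from the official repository):

```python
import numpy as np

def population_risk(a, m, Q, delta=0.0):
    """Population risk of Eq. (6) from the sufficient statistics (a, m, Q)."""
    p = a.shape[0]
    q_diag = np.diag(Q)
    linear = np.sum(a * (q_diag + 2 * m**2)) / p
    quadratic = (np.outer(a, a) * (np.outer(q_diag, q_diag) + 2 * Q**2)).sum() / (2 * p**2)
    return (delta + 3) / 2 - linear + quadratic

# sanity check: for p = 1, a = 1, m = +/-1, Q = 1 the risk equals Delta/2 (= 0 here)
assert np.isclose(population_risk(np.ones(1), np.ones(1), np.ones((1, 1))), 0.0)
```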
Notice that \(m,Q\) are precisely the second moments of the pre-activations \((\lambda_{\star},\lambda)=(w_{\star}^{\top}x,Wx)\in\mathbb{R}^{p+1}\). Therefore, to characterize the evolution of the risk throughout SGD, it is sufficient to track the evolution of the second-layer weights \(a_{j}\) and the correlation matrices \(m,Q\), which consists of \(p(p+1)\) parameters. As shown in Appendix A, starting from Eq. (4) we can derive a set of self-consistent stochastic processes describing the evolution of \((a,m,Q)\):
\[a_{j}^{\nu+1}-a_{j}^{\nu} =\frac{\gamma}{pd}\mathcal{E}^{\nu}\lambda_{j}^{2} \tag{7}\] \[m_{j}^{\nu+1}-m_{j}^{\nu} =2\frac{\gamma}{pd}\mathcal{E}^{\nu}a_{j}\lambda_{j}\lambda_{ \star}\eqqcolon\mathcal{M}_{j}(a,\lambda_{\star},\lambda)\] (8) \[Q_{jl}^{\nu+1}-Q_{jl}^{\nu} =2\frac{\gamma}{pd}\mathcal{E}^{\nu}\left(a_{j}+a_{l}\right) \lambda_{j}\lambda_{l}+4\frac{\gamma^{2}}{p^{2}d}(\mathcal{E}^{\nu})^{2}||x^{\nu}|| ^{2}a_{j}a_{l}\lambda_{j}\lambda_{l}\eqqcolon\mathcal{Q}_{jl}(a,\lambda_{ \star},\lambda) \tag{9}\]
where we have defined the displacement
\[\mathcal{E}^{\nu}\coloneqq(\lambda_{\star}^{\nu})^{2}+\sqrt{\Delta}z^{\nu}-\frac{1}{p}\sum_{j=1}^{p}a_{j}(\lambda_{j}^{\nu})^{2}, \tag{10}\]
and we used \(\nicefrac{{\gamma}}{{d}}\) as the learning rate of the second layer, in order to have the same high-dimensional scaling.
High-dimensional limit -So far we have not made any assumptions on the dimension of the problem; the stochastic processes defined in (7) are exact, with the right-hand side depending implicitly on \((m,Q)\) through the moments of \((\lambda_{\star},\lambda)\). However, our goal is to study this process in the high-dimensional limit \(d\to\infty\) where learning is hard and simulating (4) can be computationally demanding. Defining a step-size \(\delta t=\nicefrac{{\gamma}}{{pd}}\) and a continuous extension of \((a^{\nu},m^{\nu},Q^{\nu})\) to continuous time \((a(\nu\delta t),m(\nu\delta t),Q(\nu\delta t))\) by linear interpolation, it can be shown that in the high-dimensional limit \(d\to\infty\) the sufficient statistics \((a(t),m(t),Q(t))\) concentrate around their expectation \((\bar{a}(t),\bar{m}(t),\bar{Q}(t))\), which satisfies the following system of ordinary differential equations (ODEs):
\[\frac{\mathrm{d}\bar{a}_{j}}{\mathrm{d}t} =\mathbb{E}_{(\lambda,\lambda_{\star})\sim\mathcal{N}(0_{p+1}, \Omega)}\left[\mathcal{E}\lambda_{j}^{2}\right] \tag{11}\] \[\frac{\mathrm{d}\bar{m}_{j}}{\mathrm{d}t} =\mathbb{E}_{(\lambda,\lambda_{\star})\sim\mathcal{N}(0_{p+1}, \Omega)}\left[\mathcal{M}_{j}(a,\lambda_{\star},\lambda)\right]\eqqcolon \Psi_{j}\left(\Omega\right)\] \[\frac{\mathrm{d}\bar{Q}_{jl}}{\mathrm{d}t} =\mathbb{E}_{(\lambda,\lambda_{\star})\sim\mathcal{N}(0_{p+1}, \Omega)}\left[\mathcal{Q}_{jl}(a,\lambda_{\star},\lambda)\right]\eqqcolon \Phi_{jl}\left(\Omega\right)\]
with initial conditions given by \((\bar{a}(0),\bar{m}(0),\bar{Q}(0))=(a^{0},\nicefrac{{1}}{{d}}W^{0}w_{\star},\nicefrac{{1}}{{d}}W^{0}{W^{0}}^{\top})\). The explicit expression of these expected values can be found in Appendix A. As discussed in the related works, the high-dimensional limit of one-pass SGD for two-layer neural networks has been studied under different settings in the literature [19, 12, 15, 5, 40, 16, 19, 20, 21]. However, to our best knowledge our work is the first to derive and study these equations for the squared activation in the high-dimensional limit.
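Numerically, (11) is just a coupled ODE system in the \(O(p^{2})\) sufficient statistics; a minimal forward-Euler sketch is given below, where the moment functions (whose closed forms are in Appendix A and are not reproduced here) are left as user-supplied callables:

```python
import numpy as np

def integrate_odes(a0, m0, Q0, rhs_a, Psi, Phi, t_max, dt=1e-3):
    """Forward-Euler integration of the ODE system (11).

    rhs_a, Psi and Phi are stand-ins for the closed-form expectations of
    Appendix A; each maps the current state (a, m, Q) to the matching drift.
    """
    a, m, Q = a0.copy(), m0.copy(), Q0.copy()
    for _ in range(int(t_max / dt)):
        a, m, Q = (a + dt * rhs_a(a, m, Q),
                   m + dt * Psi(a, m, Q),
                   Q + dt * Phi(a, m, Q))
    return a, m, Q
```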
Initialization and mediocrity -In the noiseless case \(\Delta=0\), it is easy to check that \(a_{j}=1\) and \(w_{j}=\pm w_{\star}\) (\(m_{j}=\pm 1\) and \(Q_{jl}=1\)) is indeed a stationary point of (11) that corresponds to two degenerate global minima of the population risk (6). Adding noise \(\Delta>0\) only shifts these values. Similarly, it is easy to check that \(m_{i}=0\) and \(Q_{ij}=0\) for \(i\neq j\) are also stationary points. These correspond to taking \(w_{j}\perp w_{l}\perp w_{\star}\) for all \(j\neq l\) in (11), which is a global maximum of (6). This stationary point plays an important role in the dynamics. Indeed, in the absence of knowledge on the process that generated the data (2), it is customary to initialize the weights randomly:
\[w_{j}^{0}\sim\mathcal{N}(0,I_{d}), j=1,\cdots,p. \tag{12}\]
When \(d\to\infty\), the weights are orthogonal with high probability. In terms of the sufficient statistics:
\[Q_{jj}\sim\text{Dirac}(1), j\neq l:\ \sqrt{d}\,Q_{jl}^{0} \xrightarrow{d\to+\infty}\mathcal{N}(0,1)\quad\text{and} \quad\sqrt{d}\,m_{j}^{0}\xrightarrow{d\to+\infty}\mathcal{N}(0,1). \tag{13}\]
Therefore, since the variance of \((m^{0},Q^{0})\) decays as \(\nicefrac{{1}}{{d}}\), the higher the dimension, the closer a random initialization is to a stationary point of the dynamics. Moreover, of all the \(d\) directions, there exist \(d-p-1\) directions orthogonal to \(w_{\star}\) and \(\{w_{j}^{0}\}_{j\in[p]}\) along which the population risk (6) remains constant. The proliferation of flat directions close to initialization severely slows down the SGD dynamics at high-dimensions, which typically requires \(n=O(d\log d)\) steps to develop a significant correlation with the signal in order to escape this region. This scenario, which we refer to as _escaping mediocrity_, is common to many hard learning problems [2]. In the following, we leverage the exact description (11) derived in this section to estimate precisely how much data is required for SGD to escape mediocrity in the prototypical phase retrieval problem (1).
Spherical constraint -A phenomenon that is observed when starting from the initial conditions above is a change in the norms of the weights \(w_{i}\) without effectively correlating with \(w_{\star}\). In this phase, sometimes referred to as _norm learning_, \(m\approx Q_{jl}\approx 0\) for \(j\neq l\), while \(Q_{jj}\) changes considerably, resulting in a slight decrease of the population risk towards a plateau that reflects mediocrity. Since the focus of this study is precisely on escaping mediocrity (i.e. developing non-zero correlation with the signal), in the following we will fix the norm of the weights \(w_{i}^{\nu}\in\mathbb{S}^{d-1}(\sqrt{d})\) at initialization and throughout the dynamics \(\nu\in[n]\). This assumption, which was also the focus of [2], amounts to imposing a spherical constraint at every step of SGD, also known as _projected SGD_:
\[w_{j}^{\nu+1}=\frac{w_{j}^{\nu}-\gamma\nabla_{w_{j}}\ell(y^{\nu},f_{\Theta^{ \nu}}(x^{\nu}))}{\left\|w_{j}^{\nu}-\gamma\nabla_{w_{j}}\ell(y^{\nu},f_{\Theta^ {\nu}}(x^{\nu}))\right\|}\sqrt{d}. \tag{14}\]
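For concreteness, the full training loop (2)-(4) with the projection (14) fits in a few lines. The following is a minimal NumPy sketch (our own illustration, not the code from the official repository; for simplicity the second layer is kept fixed at \(a_{j}=1\)):

```python
import numpy as np

def projected_sgd(d=1000, p=2, gamma=0.05, delta=0.0, n_steps=50_000, seed=0):
    """One-pass SGD (4) on the model (3), with the spherical projection (14).

    Returns the trajectory of the correlations m_j = w_j^T w_star / d.
    """
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(d)
    w_star *= np.sqrt(d) / np.linalg.norm(w_star)                 # w_star in S^{d-1}(sqrt(d))
    W = rng.standard_normal((p, d))
    W *= np.sqrt(d) / np.linalg.norm(W, axis=1, keepdims=True)    # random init (12), projected
    a = np.ones(p)                                                # second layer fixed
    ms = np.empty((n_steps, p))
    for nu in range(n_steps):
        x = rng.standard_normal(d) / np.sqrt(d)                   # x ~ N(0, I_d / d)
        lam, lam_star = W @ x, w_star @ x                         # pre-activations
        y = lam_star**2 + np.sqrt(delta) * rng.standard_normal()  # target (2)
        err = (a @ lam**2) / p - y                                # f(x) - y
        grad_W = (2 * err / p) * (a * lam)[:, None] * x[None, :]  # gradient of the square loss
        W -= gamma * grad_W                                       # SGD step (4)
        W *= np.sqrt(d) / np.linalg.norm(W, axis=1, keepdims=True)  # projection (14)
        ms[nu] = W @ w_star / d
    return ms
```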
The high-dimensional limit of these equations leads to the following ODEs for the evolution of the sufficient statistics \((m,Q)\):
\[\frac{\mathrm{d}\bar{m}_{j}}{\mathrm{d}t}=\Psi_{j}(\Omega)-\frac{\bar{m}_{j}} {2}\Phi_{jj}(\Omega), \frac{\mathrm{d}\bar{Q}_{jl}}{\mathrm{d}t}=\Phi_{jl}(\Omega)- \frac{\bar{Q}_{jl}}{2}\left(\Phi_{jj}(\Omega)+\Phi_{ll}(\Omega)\right). \tag{15}\]
Note that \(Q_{jj}=1\) is consistently fixed.
## 3 Escaping mediocrity in the well-specified scenario
As a starting point, we consider the well-specified case of \(p=1\). As we are going to see, this case captures almost all of the phenomenology of interest, and is helpful to build an intuition for how SGD escapes mediocrity. Moreover, despite its apparent simplicity, this case has been the subject of several works in the literature [1, 2, 39, 41]. In particular, [41] established a convergence rate \(t=O(\log d)\) for randomly initialized gradient descent on the population risk. In our setting (4), this corresponds to the \(\gamma\to 0^{+}\) limit, and translates to a sample complexity of \(n=O(d\log d)\). [1] provided a refined analysis that accounts for the stochasticity in one-pass SGD, reaching a similar \(n=O(d\log d)\) rate for escaping mediocrity, in agreement with the general information exponent characterization of [2].
In this section, we show that in the high-dimensional limit, the sample complexity constant for one-pass SGD can be well estimated from the deterministic reduction (7). In particular, we show that the stochastic corrections from a finer analysis of the process (7) can be neglected.
### Population risk landscape
As previously discussed, one-pass stochastic gradient descent can be seen as a discretisation of gradient flow on the population risk [45]. Therefore, understanding the geometry of the population risk can provide useful insight into the behaviour of SGD. For \(p=1\) and imposing the spherical constraint, the population risk (6) considerably simplifies, since everything can be expressed in terms of a single scalar sufficient statistic \(m=\nicefrac{{1}}{{d}}w_{\star}^{\top}w\in[-1,1]\):
\[\mathcal{R}(w)-\frac{\Delta}{2}=2(1-m^{2}) \tag{16}\]
From this expression, it is easy to see that we have two global minima \(m=\pm 1\) (corresponding to \(w=\pm w_{\star}\)) and one global maximum \(m=0\) (corresponding to \(w\perp w_{\star}\)). Indeed, the spherical gradient of the population risk can also be computed explicitly (see Appendix F.1 for details):
\[\mathrm{grad}_{\mathbb{S}^{d-1}}\mathcal{R}(w)=4m\left(mw-w_{\star}\right) \tag{17}\]
which confirms that these are the only critical points. Finally, we can also compute the spherical Hessian exactly:
\[\mathrm{Hess}_{\mathbb{S}^{d-1}}\mathcal{R}(w)=4\left[m^{2}I_{d}+mww_{\star}^ {\top}-w_{\star}w_{\star}^{\top}\right] \tag{18}\]
Note that for \(w=\pm w_{\star}\) (\(m=\pm 1\)) the Hessian is proportional to the identity, hence positive definite as expected for a minimum. Instead, for \(w\perp w_{\star}\) (\(m=0\)), the Hessian becomes a rank-one matrix with \(d-1\) zero eigenvalues and a single negative eigenvalue with eigenvector proportional to \(w_{\star}\). Hence, \(m=0\) also corresponds to a _strict saddle_, i.e. the landscape is flat along most of the directions, except for one that points towards the signal. Note that with high probability, we have \(m\approx 0\) for an uninformed random initialisation in high-dimensions \(d\to\infty\).
This provides a typical picture of mediocrity in high-dimensions, where the landscape resembles a flat golf course with a single hole, see Fig. 6.
### Exit time from deterministic limit
We now move to the description of the one-pass SGD dynamics. Our key goal in this section is to determine how much data / how long SGD takes in order to find the signal in the high-dimensional limit \(d\to\infty\). As we have discussed in Section 2, in this limit the sufficient statistics concentrate, with their evolution being described by the following deterministic ODE:
\[\frac{\mathrm{d}\bar{m}(t)}{\mathrm{d}t}=\bar{m}(t)\left[4(1-6\gamma)(1-\bar{ m}^{2}(t))-2\gamma\Delta\right]\quad\text{with}\quad\bar{m}(t)\in[-1,1] \tag{19}\]
with initial condition \(\bar{m}(0)=\nicefrac{{1}}{{d}}w_{\star}^{\top}w^{0}\). See Appendix B for an explicit derivation. Figure 1 (left) compares the evolution of the risk predicted from solving the high-dimensional ODE (19) with different finite-size (\(d=3000\)) simulated instances of the problem, showing a good agreement between the theory and the averaged population risk over the different runs.
Given the spherical constraint, the population risk is now simply given by \(\mathcal{R}(m)=2\left(1-m^{2}\right)+\nicefrac{{\Delta}}{{2}}\). From this expression, it is clear that \(m=\pm 1\) are global minima and \(m=0\) is a global maximum. Therefore, the information-theoretically minimal achievable risk is \(\mathcal{R}(\pm 1)=\min\mathcal{R}(m)=\nicefrac{{\Delta}}{{2}}\).
We start with two immediate observations that can be drawn from (19). First, we have a necessary upper bound on the learning rate for learning to occur: \(\gamma<\nicefrac{{1}}{{6}}\). Moreover, a fixed-point stability analysis gives the value to which \(\bar{m}\) converges at large times: setting the right-hand side of (19) to zero yields \(1-\bar{m}^{2}(\infty)=\nicefrac{{\gamma\Delta}}{{2(1-6\gamma)}}\), and, consequently, the asymptotic excess population risk achievable in this setting is:
\[\lim_{t\to\infty}\mathcal{R}(\bar{m}(t))-\nicefrac{{\Delta}}{{2}}=\frac{ \gamma\Delta}{1-6\gamma}. \tag{20}\]
The presence of an asymptotic risk plateau for the excess risk is consistent with previous results for the high-dimensional limit of two-layers neural networks [12, 16]. Note that in the population gradient flow limit \(\gamma\to 0^{+}\), SGD converges to the minimal risk [45]. Indeed, the presence of a finite excess risk even in the well-specified setting is an intrinsic correction from the SGD noise in the high-dimensional regime where \(\nicefrac{{1}}{{d}}\ll\gamma\)[19], and can be related to the radius of the asymptotic stationary distribution of the weights around the global minima [46, 47].
We now move to our main problem: estimating the time SGD takes to escape mediocrity at initialization. Let \(T\in[0,1]\) be the relative difference with respect to the initial value of the risk, and let \(t_{\text{ext}}\) be the time when the risk exits the region above the threshold \(T\), see Fig. 1 (right) for an illustration. By construction, \(t_{\text{ext}}\) can be found by solving the following equation:
\[(1-T)\left(\mathcal{R}\big{(}\bar{m}(0)\big{)}-\frac{\Delta}{2}\right)=\left( \mathcal{R}\left(\bar{m}\left(t_{\text{ext}}\right)\right)-\frac{\Delta}{2} \right). \tag{21}\]
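For reference, here is a minimal numerical sketch of this procedure (assuming SciPy; parameter values are illustrative): it integrates (19) and stops at the crossing defined by (21):

```python
import numpy as np
from scipy.integrate import solve_ivp

def exit_time_ode(m0, gamma=0.05, delta=0.0, T=0.5, t_max=500.0):
    """Integrate (19) from m(0) = m0 and return t_ext solving (21)."""
    rhs = lambda t, m: m * (4 * (1 - 6 * gamma) * (1 - m**2) - 2 * gamma * delta)
    excess_risk = lambda m: 2 * (1 - m**2)            # R(m) - Delta/2
    threshold = (1 - T) * excess_risk(m0)
    event = lambda t, m: excess_risk(m[0]) - threshold
    event.terminal, event.direction = True, -1        # stop when crossing from above
    sol = solve_ivp(rhs, (0.0, t_max), [m0], events=event, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0] if sol.t_events[0].size else np.inf

# example: m0 of order 1/sqrt(d) for d = 10^4
print(exit_time_ode(m0=1e-2))
```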
This threshold-crossing equation can be solved exactly by numerically integrating (19) and then finding the root of (21), as in the sketch above. However, an analytical expression for the ODE exit time can be found from the following two observations:
* From the discussion around equation (13), initializing at random in high-dimensions implies that \(\bar{m}(0)=\varepsilon\ll 1\), so we can consider the linearization of equation (19) in \(\varepsilon\) and solve it analytically. For small enough \(T\), this will lead us to an accurate result;
* Even if the ODE trajectories are deterministic, the exit time \(t_{\text{ext}}\) is a random variable, due to the random initialization.
Note that these lead to two natural notions of average exit time over the initial conditions. The first one is obtained by taking the expected value over initial conditions before solving the threshold-crossing equation:
\[t_{\text{ext}}^{\text{(an)}}=\frac{\log\left[Td+(1-T)\right]}{8(1-6\gamma)-4 \gamma\Delta}. \tag{22}\]
Borrowing the jargon from statistical physics, we refer to \(t_{\text{ext}}^{\text{(an)}}\) as the _annealed exit time_. The second option is to take the expected value of the exit time obtained from solving (19) at fixed initial condition:
\[t_{\text{ext}}^{\text{(qnc)}}=\mathbb{E}_{\mu_{0}\sim\chi^{2}(1)}\left[\frac{\log\left[\frac{Td}{\mu_{0}}+(1-T)\right]}{8(1-6\gamma)-4\gamma\Delta}\right]. \tag{23}\]
Again, borrowing the jargon from statistical physics we refer to \(t_{\text{ext}}^{\text{(qnc)}}\) as the _quenched exit time_. Some comments on this result are in order:
* By Jensen's inequality (the map \(\mu_{0}\mapsto\log\left(\nicefrac{{Td}}{{\mu_{0}}}+(1-T)\right)\) is convex), we have \(t_{\text{ext}}^{\text{(qnc)}}\geq t_{\text{ext}}^{\text{(an)}}\).
* For both notions, we have \(t_{\text{ext}}=O(\log d)\) implying \(n=O(d\log d)\) samples are required to escape mediocrity, consistent with the rates in the literature [1, 2, 41].
* Both exit times are monotonically increasing in both \(\gamma\in[0,\nicefrac{{1}}{{6}}]\) and \(\Delta\geq 0\). Recalling that \(\delta t=\nicefrac{{\gamma}}{{d}}\), the number of samples is \(n=\nicefrac{{t_{\text{ext}}}}{{\delta t}}=\nicefrac{{d\,t_{\text{ext}}}}{{\gamma}}\); maximizing \(\gamma\left[8(1-6\gamma)-4\gamma\Delta\right]\) then implies the existence of an optimal learning rate \(\gamma_{\text{opt}}=\nicefrac{{1}}{{(12+\Delta)}}\) that minimizes the number of samples required to escape mediocrity (see the sketch below).
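Both formulas are immediate to evaluate; the sketch below (assuming NumPy, with illustrative parameter values) estimates (23) by Monte Carlo over \(\mu_{0}\sim\chi^{2}(1)\) and makes the gap with (22) easy to inspect:

```python
import numpy as np

def annealed_exit_time(d, gamma, delta, T):
    """Annealed exit time, Eq. (22)."""
    return np.log(T * d + (1 - T)) / (8 * (1 - 6 * gamma) - 4 * gamma * delta)

def quenched_exit_time(d, gamma, delta, T, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the quenched exit time, Eq. (23)."""
    rng = np.random.default_rng(seed)
    mu0 = rng.chisquare(df=1, size=n_samples)
    logs = np.log(T * d / mu0 + (1 - T))
    return logs.mean() / (8 * (1 - 6 * gamma) - 4 * gamma * delta)

d, T, delta = 10_000, 0.5, 0.0
gamma = 1 / (12 + delta)   # optimal learning rate discussed above
print(annealed_exit_time(d, gamma, delta, T), quenched_exit_time(d, gamma, delta, T))
```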
### 3.3 Does stochasticity matter?
Note that the initial correlation parameter at random initialization (12) is given by:
\[m^{0}=O(\nicefrac{{1}}{{\sqrt{d}}}) \tag{24}\]
Therefore, in the high-dimensional limit \(d\to\infty\) in which the ODE description (19) is exact, we have \(\bar{m}(0)=0\). This is a fixed point of (19), which suggests that strictly in the high-dimensional limit SGD is trapped forever at mediocrity. However, in practice we always have \(d<\infty\), meaning that at initialization we always have a non-zero correlation with the signal \(m^{0}=\varepsilon\ll 1\). Moreover, at high but finite dimensions, (19) is just an approximation to the actual stochastic dynamics (7). Indeed, this is precisely what we used in order to estimate the exit time from the deterministic ODE (19). While the stochastic corrections to the high-dimensional limit do not radically change the convergence rate scaling [1] (and hence the mediocrity picture), it is important to ask whether they lead to important corrections to the precise exit time.
Stochastic corrections to the deterministic high-dimensional limit of one-pass SGD have been recently discussed in a broad setting by [21]. In particular, this work has shown that close to a fixed point the process for the sufficient statistics (7) can be well approximated in the high-dimensional limit by a diffusion process with drift potential given by the corresponding deterministic ODEs. We follow a similar strategy, and consider the following process specialized to the case \(p=1\):
\[\begin{split}\mathrm{d}m_{1}&=\Psi_{1}(\Omega)\, \mathrm{d}t+\sqrt{\frac{\gamma}{d}}\,\sigma_{m}(\Omega)\cdot\mathrm{d}B_{t}\\ \mathrm{d}Q_{11}&=\Phi_{11}(\Omega)\,\mathrm{d}t+ \sqrt{\frac{\gamma}{d}}\,\sigma_{Q}(\Omega)\cdot\mathrm{d}B_{t}\end{split} \tag{25}\]
where \(\mathrm{d}B_{t}\) is a 2-dimensional Wiener process, and \(\sigma_{m}\) and \(\sigma_{Q}\) are defined as
\[\begin{pmatrix}\sigma_{m}\\ \sigma_{Q}\end{pmatrix}\coloneqq\sqrt{\begin{pmatrix}\mathrm{Var}_{(\lambda, \lambda_{*})\sim N(0_{p+1},\Omega)}\left[\mathcal{M}_{1}\right]&\mathrm{Cov}_ {(\lambda,\lambda_{*})\sim N(0_{p+1},\Omega)}\left[\mathcal{M}_{1},\mathcal{Q }_{11}\right]\\ \mathrm{Cov}_{(\lambda,\lambda_{*})\sim N(0_{p+1},\Omega)}\left[\mathcal{M}_{1 },\mathcal{Q}_{11}\right]&\mathrm{Var}_{(\lambda,\lambda_{*})\sim N(0_{p+1}, \Omega)}\left[\mathcal{Q}_{11}\right]\end{pmatrix}}. \tag{26}\]
Notice that the stochastic correction is proportional to \(\sqrt{\gamma/d}\), consistent with a first-order correction to the deterministic limit. Similarly to the discussion in Section 3.2, the spherical constraint can be imposed by projecting the process onto the sphere. This is discussed in detail in App. B, and for \(p=1\) reads:
\[\mathrm{d}m_{1}=\left(\Psi_{1}(\Omega)-\frac{m_{1}}{2}\Phi_{11}(\Omega)\right)\mathrm{d}t+\sqrt{\frac{\gamma}{d}}\,\left(\sigma_{m}-\frac{m_{1}}{2}\sigma_{Q}\right)\cdot\mathrm{d}B_{t} \tag{27}\]
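As an illustration, the projected process (27) can be integrated by a standard Euler–Maruyama scheme. In the sketch below, `psi`, `phi`, `sigma_m` and `sigma_q` are user-supplied callables standing for \(\Psi_{1}\), \(\Phi_{11}\) and the two rows of the diffusion matrix (26); they are placeholders, not implementations taken from the paper.

```python
import numpy as np

def integrate_projected_sde(m0, psi, phi, sigma_m, sigma_q,
                            gamma, d, t_max, dt, rng=None):
    """Euler-Maruyama integration of the spherically projected SDE (27).

    psi(m), phi(m) return the drifts Psi_1 and Phi_11 at the current
    correlation m; sigma_m(m), sigma_q(m) return length-2 arrays, the
    rows of the diffusion matrix (26). All four must be supplied.
    """
    rng = rng or np.random.default_rng(0)
    n_steps = int(t_max / dt)
    traj = np.empty(n_steps + 1)
    traj[0] = m = m0
    scale = np.sqrt(gamma / d)
    for s in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=2)   # 2d Wiener increment
        drift = psi(m) - 0.5 * m * phi(m)
        diff = sigma_m(m) - 0.5 * m * sigma_q(m)    # shape (2,)
        m = m + drift * dt + scale * (diff @ dB)
        traj[s + 1] = m
    return traj
```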
Figure 1 compares different instances of finite-size simulations with instances of the SDE (27) with the same initial condition. Although the stochastic correction offers a better description of the process at large but finite dimensions, we find that, quite surprisingly, it has a small impact on the exit time. Hence, the formulas (22) & (23) derived in Section 3.2 for random initialization provide a good approximation to the exit time. In Appendix D we discuss how to derive an exit time formula with the stochastic corrections. As just shown, the new formulas do not offer any improvement compared to the deterministic ones; nevertheless, the SDE could be useful for some particular and unrealistic initializations, where the ODEs can fail.
To summarize, in this section we have shown that the deterministic ODEs provide a good approximation for the precise number of samples required to escape mediocrity in high dimensions. In other words, stochasticity _does not help_ in navigating the flat directions at initialization and correlating with the signal.
## 4 The role of width
Thus far our discussion has focused on the well-specified case. We now discuss the role of width in escaping mediocrity. Our starting point is the deterministic ODEs (11) for the sufficient statistics derived in Section 2. As in our previous analysis, we focus on the spherical setting where \(w_{i}\in\mathbb{S}^{d-1}(\sqrt{d})\), implying \(Q_{jj}=1\), see Appendix B for a detailed derivation. First, we derive analytical expressions for the exit time for arbitrary width \(p\geq 1\) in the particular case where the second layer is fixed at initialization, \(a_{j}^{0}=1\), \(\forall j\in[p]\). The role played by the second layer is then discussed in Subsection 4.1.
Differently from the \(p=1\) case, the process cannot be described by a single sufficient statistic; instead, we have to track the \(\nicefrac{{p(p-1)}}{{2}}\) off-diagonal entries of \(Q\) (which is a symmetric matrix) and the \(p\) components of the vector \(m\). Note that
Figure 1: Multiple runs of the simulated SGD and of the numerically integrated SDE, always starting from the same initial condition, with \(d=3000\). All the \(t_{\text{ext}}\) presented are obtained by solving (21) numerically. The SDE captures the variance that the ODE doesn't exhibit, but the \(t_{\text{ext}}\) do not change considerably.
equation (21) remains valid to define \(t_{\text{ext}}\), and can be solved numerically. An analytical expression for the exit time can be derived under similar assumptions to the ones discussed in Section 3.2, although the derivation is significantly more cumbersome. Full details can be found in Appendix C. The final expressions for the annealed and quenched exit times are given by
\[t_{\text{ext}}^{\text{(an)}}=\frac{\log\left[\frac{T(p+1)d+(p+1)(1-T)}{2p}\right]}{8\left[1-\frac{\gamma}{p}\left(1+\frac{1}{p}+\frac{4}{p^{2}}+\frac{\Delta}{2}\right)\right]},\qquad\quad t_{\text{ext}}^{\text{(qnc)}}=\mathbb{E}_{\mu_{0},\tau_{0}\sim\mathcal{P}_{p}^{d}}\left[\frac{\log\left[\frac{Tp(p+1)d+(2\mu_{0}p-\tau_{0})(1-T)}{2\mu_{0}p}\right]}{8\left[1-\frac{\gamma}{p}\left(1+\frac{1}{p}+\frac{4}{p^{2}}+\frac{\Delta}{2}\right)\right]}\right]. \tag{28}\]
The distribution of the variables \(\tau_{0}\) and \(\mu_{0}\) is given by
\[\mu_{0},\tau_{0}\sim\mathcal{P}_{p}^{d}\quad\text{where }\mathcal{P}_{p}^{d} \equiv\left(d\sum_{j=1}^{p}(u_{j}\cdot v)^{2},2d\sum_{j=1}^{p}\sum_{l=j+1}^{p} (u_{j}\cdot u_{l})^{2}\right)\text{ with }v,u_{j}\sim\mathbb{S}^{d-1}(1).\]
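As with the \(p=1\) case, the quenched expression in (28) lends itself to a Monte Carlo evaluation, sampling \((\mu_{0},\tau_{0})\sim\mathcal{P}_{p}^{d}\) directly from its definition above. The following is a hedged sketch (ours, not the authors' code):

```python
import numpy as np

def quenched_exit_time(d, p, T, gamma, Delta, n_samples=2000, rng=None):
    """Monte-Carlo estimate of the quenched exit time in (28).

    v and the u_j are drawn uniformly on the unit sphere S^{d-1}(1),
    from which (mu_0, tau_0) ~ P_p^d are computed as defined above.
    """
    rng = rng or np.random.default_rng(0)
    denom = 8 * (1 - (gamma / p) * (1 + 1 / p + 4 / p**2 + Delta / 2))
    samples = np.empty(n_samples)
    for s in range(n_samples):
        x = rng.normal(size=(p + 1, d))
        x /= np.linalg.norm(x, axis=1, keepdims=True)  # rows on the unit sphere
        v, u = x[0], x[1:]
        mu0 = d * np.sum((u @ v) ** 2)
        tau0 = 2 * d * np.sum(np.triu(u @ u.T, k=1) ** 2)
        samples[s] = np.log((T * p * (p + 1) * d + (2 * mu0 * p - tau0) * (1 - T))
                            / (2 * mu0 * p)) / denom
    return samples.mean()
```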
Notice that \(t_{\text{ext}}\) is a monotonically decreasing function of the width. Nevertheless, for any \(p\geq 1\), the leading-order dependence on the dimension is \(t_{\text{ext}}=O(\log d)\). Hence, despite helping to escape mediocrity, increasing the width cannot remove it. This can be contrasted to other aspects in which overparametrization can significantly help optimization, for instance with global convergence [48]. Interestingly, the minimal escape time \(t_{\text{ext}}^{\text{(an)}}=\nicefrac{{1}}{{4}}\log\left(\nicefrac{{(T(p+1)d+(p+1)(1-T))}}{{2p}}\right)\), obtained by choosing the learning rate that minimizes the sample complexity for escaping, has the same pre-factor for any width \(p\geq 1\), with the only differences being the dependence on \(p\) inside the logarithm and in the time scaling \(t=\nicefrac{{\nu\gamma}}{{pd}}\). At infinite width \(p\to\infty\), this simply amounts to a factor \(\frac{12+\Delta}{2+\Delta}\) with respect to \(p=1\). Details of this computation can be found in Appendix C.4.
Figure 2 compares our analytical formulas (28) with real one-pass SGD simulations. The simulations are averaged over many different instances of the initial conditions, and the ratio \(\nicefrac{{\gamma}}{{p}}\) is kept constant when varying \(p\), to avoid discrepancies due to the different learning-rate scaling. It is interesting to notice how the two different formulas give the same outcome for large width \(p\gg 1\). Moreover, for narrow networks they essentially differ by a \(d\)-independent constant. Figure 2 also suggests that, as for \(p=1\), the stochasticity can be neglected in the estimation of the exit time. In Appendix E we provide further evidence of that.
Figure 2: Ratio between the measured \(t_{\text{ext}}\) from simulations and the corresponding analytical formula (square = annealed, circle = quenched). We average over many initial conditions, for different values of \(p\). The ratio \(\nicefrac{{\gamma}}{{p}}\) has been kept constant for different simulations.
### Training the second layer
In the previous section, we derived analytical expressions for the exit time in the particular case of fixed second-layer weights \(a_{j}^{0}=1\). Here, we provide numerical evidence that training the second layer does not significantly change our conclusions.
The key challenge is that when training the second layer we cannot measure \(t_{\text{ext}}\) as the time needed to escape the risk at initialization. Indeed, from equation (21) it can be seen that in the very first steps of learning the vector \(a\) changes slightly to adapt to the initial conditions, thereby fitting the noise. In this scenario, instead of looking directly at the risk, we can instead use the largest component of the correlation vector \(m\) as a measure of how much the network has learned. At random initialization, this is of order \(\nicefrac{{1}}{{\sqrt{d}}}\), and grows to \(1\) as the neural network correlates with the target weights. A natural choice for initializing the second-layer weights is \(a_{j}^{0}=1\), \(\forall j\in[p]\). In principle, this initial condition guarantees that the risk at initialization is exactly equal to the case where \(a_{j}\) is fixed. On the other hand, as we already pointed out, the initial plateau where the dynamics gets stuck depends on the particular first-layer initial condition. Even for other choices of initialization, e.g. \(a_{j}\sim\text{Bernoulli}(\nicefrac{{1}}{{2}})\), the dynamics quickly goes to a plateau, so it does not really matter which \(a_{j}^{0}\) is used. Therefore, for simplicity we choose a homogeneous initialization \(a_{j}^{0}=1\). Figure 3 compares the evolution of the maximum correlation when learning the second layer or not, for different values of \(p\).
It is important to stress that we are not claiming that the time needed to reach the minimum of the population risk is the same whether or not the second layer is trained, as can be seen in Fig. 3. Instead, our result highlights that the times needed to escape the flat directions at initialization are close. In fact, after the two-layer neural network has escaped mediocrity, the dynamics can be very different depending on whether the second-layer weights are trained or not. For instance, \(a\) could become sparse, with just a few neurons contributing to the output, or it could remain close to homogeneous, \(a_{j}=1\), with all neurons correlating with the target. Although studying the dynamics after escaping mediocrity is surely an interesting endeavor, it is beyond the scope of this manuscript.
## 5 Conclusion
In this work we have derived a sharp formula for how many samples are required for a two-layer neural network trained with one-pass SGD to learn a quadratic target in the high-dimensional limit. In particular, we have shown that increasing the width can only improve the sample complexity by a pre-factor, with the overall scaling with the dimension remaining of the same order \(n=O(d\log d)\). Therefore, for this target overparametrization does not significantly help optimization, providing a prototypical example of a hard class of functions for learning with SGD. Our results rely on a low-dimensional description of SGD in terms of a stochastic process describing the evolution of the sufficient statistics in the high-dimensional limit. Surprisingly, we have shown that deriving the sample complexities from the deterministic
Figure 3: \(p=20\) (left), \(p=50\) (right), \(d=1000\). Comparison between the growth of \(\max m\) throughout the learning process, when the second layer is fixed (blue) and trained (green). The dynamics is obviously different far from the starting point, but when we zoom close to the exit point, the two processes have the same behavior, \(t_{\text{ext}}\) included.
drift of this process with small initial correlation with the target provides a fairly good approximation of the exit time as computed from the full process with zero initial correlation, showing that stochasticity does not play a crucial role in escaping mediocrity for this problem.
## Acknowledgements
We thank Gerard Ben Arous and Lenka Zdeborova for valuable discussions. LA would like to thank _Scuola Normale Superiore_ and _Universita di Pisa_ for the support during part of this project. We acknowledge funding from the Swiss National Science Foundation grant SNFS OperaGOST, \(200021\_200390\) and the _Choose France - CNRS AI Rising Talents_ program.
|
2308.08935 | SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow
Detection | Despite significant progress in shadow detection, current methods still
struggle with the adverse impact of background color, which may lead to errors
when shadows are present on complex backgrounds. Drawing inspiration from the
human visual system, we treat the input shadow image as a composition of a
background layer and a shadow layer, and design a Style-guided Dual-layer
Disentanglement Network (SDDNet) to model these layers independently. To
achieve this, we devise a Feature Separation and Recombination (FSR) module
that decomposes multi-level features into shadow-related and background-related
components by offering specialized supervision for each component, while
preserving information integrity and avoiding redundancy through the
reconstruction constraint. Moreover, we propose a Shadow Style Filter (SSF)
module to guide the feature disentanglement by focusing on style
differentiation and uniformization. With these two modules and our overall
pipeline, our model effectively minimizes the detrimental effects of background
color, yielding superior performance on three public datasets with a real-time
inference speed of 32 FPS. | Runmin Cong, Yuchen Guan, Jinpeng Chen, Wei Zhang, Yao Zhao, Sam Kwong | 2023-08-17T12:10:51Z | http://arxiv.org/abs/2308.08935v1 | # SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection
###### Abstract.
Despite significant progress in shadow detection, current methods still struggle with the adverse impact of background color, which may lead to errors when shadows are present on complex backgrounds. Drawing inspiration from the human visual system, we treat the input shadow image as a composition of a background layer and a shadow layer, and design a Style-guided Dual-layer Disentanglement Network (SDDNet) to model these layers independently. To achieve this, we devise a Feature Separation and Recombination (FSR) module that decomposes multi-level features into shadow-related and background-related components by offering specialized supervision for each component, while preserving information integrity and avoiding redundancy through the reconstruction constraint. Moreover, we propose a Shadow Style Filter (SSF) module to guide the feature disentanglement by focusing on style differentiation and uniformization. With these two modules and our overall pipeline, our model effectively minimizes the detrimental effects of background color, yielding superior performance on three public datasets with a real-time inference speed of 32 FPS. Our code is publicly available at: _[https://github.com/rmcong/SDDNet_ACMM23_](https://github.com/rmcong/SDDNet_ACMM23_).
shadow detection, feature disentanglement, style constraint

Footnote †: Runmin Cong and Wei Zhang are also affiliated with the Key Laboratory of Machine Intelligence and System Control, Ministry of Education, Jinan, Shandong, China.

Footnote †: Corresponding author.
## 1. Introduction

Since shadows are created by light being blocked, they are inherently colorless. Based on this, humans can discern shadows on complex backgrounds through a three-step process: first, recognizing background attributes; second, identifying shadow attributes by observing confident shadow regions; and finally, detecting all shadow regions based on our understanding of background and shadow attributes. Inspired by this process, we propose treating shadow images as a composition of background and shadow layers, and modeling them separately to effectively reduce the impact of background color on detection performance. This objective can be accomplished through the strategy of feature disentanglement, which has proved effective in many computer vision works. For instance, (Wang et al., 2017) conducted an early trial to incorporate task-feature co-disentanglement regularizations for multi-task learning and achieved satisfactory performance.
In this paper, we introduce the concept of feature disentanglement into shadow detection to realize the separated modeling of background and shadow layers. We present a novel Style-guided Dual-layer Disentanglement Network (SDDNet) featuring two innovative modules, _i.e._, the Feature Separation and Recombination (FSR) module and the Shadow Style Filter (SSF) module. The FSR module effectively decomposes multi-level features into background-related and shadow-related components, which is explicitly achieved by providing distinct supervision for each set of components. The shadow-related component receives supervision from the ground-truth shadow map, while the background-related component is guided to generate a shadow-free background image. Furthermore, to ensure information integrity, the recombined features merged from both components are supervised to reconstruct the input image. During prediction, only the shadow-related component is utilized to generate the final shadow map, effectively eliminating the adverse influence of the background. Additionally, to further constrain the FSR module on feature disentanglement, particularly for background-related component that lacks a background ground-truth, we propose a Shadow Style Filter (SSF) module to extract and constrain style attributes of the shadow-related component, background-related component, and recombined features. Specifically, we regard the presence or absence of shadows as a style. From this perspective, the recombined features and shadow-related component should have consistent styles, while the background-related component should exhibit a different style from them. Based on this principle, we can generate background images in an indirect style-guided manner, thereby facilitating feature disentanglement within the FSR module.
In summary, our contributions are primarily three-fold:
* We model the shadow image as a superposition of shadow layers on background layers, and then propose a Style-guided Dual-layer Disentanglement Network (SDDNet) for shadow detection. Extensive experiments on three public datasets demonstrate that our proposed method outperforms all state-of-the-art shadow detection methods with a real-time inference speed of 32 FPS.
* We design a Feature Separation and Recombination (FSR) module to decompose image features into shadow-related and background-related components, thereby preventing predictions from being misled by background information.
* We devise a Shadow Style Filter (SSF) module that assists feature separation through style differentiation and uniformization, especially to help generate the background-related component in an indirect style-guided manner.
## 2. Related Works
### Shadow Detection
Related works on shadow detection can be broadly categorized into traditional methods and deep learning-based methods.
Early efforts in shadow detection primarily focused on constructing physical illumination models (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) to analyze the shadow formation process. Based on these models, shadows were detected either by using physical models (Wang et al., 2017; Wang et al., 2017) or by employing traditional machine learning-based detectors with hand-crafted features, such as illumination cues (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), texture (Wang et al., 2017; Wang et al., 2017), and edge (Wang et al., 2017; Wang et al., 2017). Although these methods led to improvements, most of them relied on assumptions (_e.g._, fixed background classes, uniform illumination, _etc._) that are difficult to satisfy in complex situations. Additionally, the hand-crafted features may not be discriminative enough for detecting intricate shadow regions.
Inspired by the outstanding performance of CNN in computer vision tasks, deep learning-based methods (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) have gained popularity in shadow detection. With their ability to extract and select discriminative features, CNNs are more robust than traditional methods that use hand-crafted features. Khan _et al._(Khan et al., 2017) were the first to apply CNNs to shadow detection, extracting features from superpixels using a 7-layer CNN and feeding these features to a conditional random field (CRF) model to refine the detection results. Zheng _et al._(Zheng et al., 2017) integrated the semantics of distraction regions to extend CNNs for robust shadow detection. Some researchers (Zheng et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) employed generative
Figure 1. Some difficult cases in shadow detection. (a) The input images. (b) The ground truth shadow maps. (c) The predicted results of ECA (Wang et al., 2017). (d) The predicted results of MTMT-Net (Wang et al., 2017). (e) The predicted results of our SDDNet.
adversarial networks (GAN) (Kirshick et al., 2017) for shadow detection. Recently, Chen _et al._(Chen et al., 2018) proposed a semi-supervised teacher-student framework to detect shadow regions, edges, and count under consistency constraints. Zhu _et al._(Zhu et al., 2019) introduced a feature reweighting method to balance the intensity-variant and intensity-invariant features obtained by self-supervised decomposition. Liao _et al._(Liao et al., 2019) incorporated confidence maps into shadow detection and combined the prediction results of multiple methods for shadow detection.
Despite the significant improvements offered by these methods, they still suffer from background color interference. This interference causes confusion between dark background areas and shadow regions, as well as between light background areas and weak shadow regions. In this study, we disentangle background-related and shadow-related components, utilizing only the shadow-related component to predict the final results. This approach enhances the robustness of shadow detection in complex scenes.
### Style Transfer
In the domain of neural style transfer, research has been conducted to comprehend the content and style of image features. Gatys _et al._(Gatys et al., 2017) proposed utilizing the Gram matrix of image features as a means to encapsulate the distinctive style of an image. Subsequent studies (Song et al., 2018; Wang et al., 2018) have further corroborated the efficacy of the Gram matrix in capturing and representing image styles.
In this paper, we employ the Gram matrix to extract style attributes and regulate the consistency or diversity of these attributes across various features and components of the input shadow image. This approach serves to bolster our feature disentanglement process, ultimately leading to enhanced outcomes.
## 3. Proposed Method
### Overview
In Figure 2, we present the overall framework of our SDDNet, which adopts an encoder-decoder architecture. During training, SDDNet generates the shadow map, background image, and reconstructed input image; however, only the shadow map is predicted during the inference stage. The generation of reconstructed and background images constitutes our joint training strategy with the main aim of improving the quality of feature disentanglement.
To elaborate, we initially input the image into the backbone network to extract multi-level features \(\{F_{k}\}_{k=1}^{N}\), where \(N\) represents the number of levels. To fully exploit local details and global semantics, we divide the features into two groups: the low-level group \(F_{low}=\{F_{k}\}_{k=1}^{N_{low}}\) and the high-level group \(F_{high}=\{F_{k}\}_{k=N_{low}+1}^{N}\). We process these two groups in two paths with the same structure, omitting the subscripts _low_ and _high_ for simplicity. The features in each group are concatenated together after upsampling to unify the spatial sizes, generating merged features \(\hat{F}\). Subsequently, the FSR module is fed \(\hat{F}\) and outputs the shadow-related component \(\hat{F}^{sd}\), the background-related component \(\hat{F}^{bg}\), and the recombined features \(\hat{F}^{re}\). The SSF module then extracts style attributes from \(\hat{F}^{sd}\), \(\hat{F}^{bg}\), and \(\hat{F}^{re}\), and constrains the consistency or diversity of specific style attribute pairs to guide the upstream feature separation. Finally, in the parallel decoder, the shadow-related components, background-related components, and recombined features from both paths are fused separately, and then used to generate the shadow map, the background image, and the reconstructed input image.
### Feature Separation and Recombination Module
From the perspective of the human visual system, shadow images can be considered as shadows of other objects superimposed on the background image. This kind of dual-layer separation is not a difficult task for humans, but it is not a simple matter for computers. Therefore, we aim to emulate this way of perceiving shadow images in a bio-inspired manner, and thereby achieve the disentanglement of shadow image content/features. Effective feature disentanglement can promote the focus on more informative components for shadow detection.
One of the main challenges lies in the complex coupling between background and shadow images, making it extremely difficult to model the relationship with complete accuracy. To simplify this process, we model it as a straightforward linear model, which is also a relatively intuitive modeling approach. Nevertheless, to achieve accurate disentanglement within this simple linear model, we design comprehensive strategies based on differentiated supervision. However, differentiated supervision presents its own challenge. Specifically, we only have ground truth shadow maps and lack labels for shadow-free background images, which means that we cannot accomplish our goal solely through direct supervision. Instead, we must find ingenious indirect supervision methods. To this end, in addition to shadow image supervision, we also incorporate joint supervision and style supervision (introduced in the following section). In this manner, the generation of background images can be supervised indirectly, thereby improving the overall feature disentanglement process.
To accomplish this objective, we design the FSR module to achieve feature disentanglement and reorganization through a shadow branch and a background branch. Each branch consists of a residual block (Zhu et al., 2019), which comprises two convolutional layers and a skip connection. Given \(\hat{F}\), the shadow branch produces the shadow-related component \(\hat{F}^{sd}\), and the background branch generates the background-related component \(\hat{F}^{bg}\) as follows:
\[\hat{F}^{sd}=Conv\left(Conv\left(\hat{F}\right)\right)+\hat{F}, \tag{1}\]
\[\hat{F}^{bg}=Conv\left(Conv\left(\hat{F}\right)\right)+\hat{F}, \tag{2}\]
where \(Conv\) denotes a convolutional layer. Additionally, we combine them to obtain recombined features \(\hat{F}^{re}\):
\[\hat{F}^{re}=\hat{F}^{sd}\oplus\hat{F}^{bg}, \tag{3}\]
where \(\oplus\) signifies element-wise addition.
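For clarity, a minimal PyTorch sketch of the FSR module described by Eqs. (1)–(3) is given below; the kernel size and the activation between the two convolutions are our assumptions, as the paper only specifies two convolutional layers and a skip connection per branch.

```python
import torch.nn as nn

class ResidualBranch(nn.Module):
    """Two conv layers plus a skip connection, as in Eqs. (1)-(2)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),  # assumed activation
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, f):
        return self.body(f) + f

class FSR(nn.Module):
    """Feature Separation and Recombination: decompose the merged
    features into shadow- and background-related components."""
    def __init__(self, channels):
        super().__init__()
        self.shadow_branch = ResidualBranch(channels)
        self.background_branch = ResidualBranch(channels)

    def forward(self, f):
        f_sd = self.shadow_branch(f)       # Eq. (1)
        f_bg = self.background_branch(f)   # Eq. (2)
        return f_sd, f_bg, f_sd + f_bg     # Eq. (3)
```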
Upon obtaining the outputs of the FSR modules in the low- and high-level paths, \(\hat{F}^{sd}_{low}\), \(\hat{F}^{bg}_{low}\), \(\hat{F}^{re}_{low}\) and \(\hat{F}^{sd}_{high}\), \(\hat{F}^{bg}_{high}\), \(\hat{F}^{re}_{high}\) (with subscripts restored), they are individually fused in the parallel decoder:
\[F^{sd}=Conv\left(CA\left(concat(\hat{F}^{sd}_{low},\hat{F}^{sd}_{high})\right) \right), \tag{4}\]
\[F^{bg}=Conv\left(CA\left(concat(\hat{F}^{bg}_{low},\hat{F}^{bg}_{high})\right) \right), \tag{5}\]
\[F^{re}=Conv\left(CA\left(concat(\hat{F}^{re}_{low},\hat{F}^{re}_{high})\right) \right), \tag{6}\]
where _concat_ represents a concatenation operation, and \(CA\) denotes channel attention [24]. By applying channel attention, the network can automatically select informative channels from both low-level and high-level features while suppressing non-informative channels. Finally, \(F^{sd}\), \(F^{bg}\), and \(F^{re}\) are used to generate the shadow map \(P^{sd}\), the background image \(P^{bg}\), and the reconstructed input image \(P^{re}\), respectively, after passing through a convolutional layer. These processes can be expressed as:
\[P^{sd}=Conv\left(F^{sd}\right), \tag{7}\]
\[P^{bg}=Conv\left(F^{bg}\right), \tag{8}\]
\[P^{re}=Conv\left(F^{re}\right). \tag{9}\]
Although the process for generating these three outputs does not involve distinct operations tailored to their specific targets, we can apply differentiated supervision to enable the network to autonomously learn the optimal way to decompose features. In particular, the supervision for \(P^{sd}\) is provided by the ground truth shadow map, while the supervision for \(P^{re}\) is derived from the input image. The two losses can be calculated as follows:
\[\mathcal{L}_{sd}=BBCE\left(P^{sd},\ G^{sd}\right), \tag{10}\]
\[\mathcal{L}_{re}=MAE(P^{re},\ I), \tag{11}\]
where \(G^{sd}\) represents the ground-truth shadow map, \(I\) denotes the input image, and \(BBCE\) and \(MAE\) signify the balanced binary cross entropy and mean absolute error, respectively. Here, we employ the same balanced binary cross entropy as in [66], formulated by:
\[\begin{split} BBCE(P^{sd},G^{sd})=\\ -\sum_{i}\left[\frac{N_{n}}{N}G^{sd}_{i}\log(P^{sd}_{i})+\frac{N_{p}}{N}(1-G^{sd}_{i})\log(1-P^{sd}_{i})\right],\end{split} \tag{12}\]
where \(i\) denotes the index of spatial locations, \(N_{p}\) and \(N_{n}\) represent the number of shadow and non-shadow pixels, and \(N\) corresponds to the total number of pixels. The mean absolute error is given by:
\[MAE(P^{re},I)=\frac{1}{N}\sum_{i}|P^{re}_{i}-I_{i}|. \tag{13}\]
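Both losses translate directly into PyTorch, as in the following sketch (the clamping constant is our addition for numerical stability and is not part of Eq. (12)):

```python
import torch

def balanced_bce(pred, gt, eps=1e-6):
    """Balanced binary cross entropy of Eq. (12); pred, gt in [0, 1]."""
    n = gt.numel()
    n_pos = gt.sum()
    n_neg = n - n_pos
    pred = pred.clamp(eps, 1 - eps)  # numerical stability (our addition)
    loss = -((n_neg / n) * gt * torch.log(pred)
             + (n_pos / n) * (1 - gt) * torch.log(1 - pred))
    return loss.sum()

def mae(pred, target):
    """Mean absolute error of Eq. (13)."""
    return (pred - target).abs().mean()
```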
The supervision for these two outputs is relatively straightforward. However, for \(P^{bg}\), the problem becomes more complex due to the absence of a ground-truth background image. As a result, we use indirect means to guide the network's learning. In areas without shadows, the input image and the background image are identical, enabling us to directly use the input image to supervise these regions. This can be expressed as:
\[\mathcal{L}_{bg}=MAE\left(P^{bg}\otimes\left(1-P^{sd}\right),\ I\otimes\left( 1-G^{sd}\right)\right), \tag{14}\]
where \(\otimes\) denotes element-wise multiplication. The two terms in this loss correspond to the ground-truth shadow-free regions of the input image and the predicted shadow-free regions of the generated background image. As we employ the predicted shadow-free map \(1-P^{sd}\), \(\mathcal{L}_{bg}\) has the advantage of constraining the generation of the
Figure 2. Architecture of the proposed SDDNet. Given an input image, SDDNet outputs the shadow map, background image, and reconstructed image in an end-to-end manner. Firstly, the backbone extracts integrated low-level and high-level features. Then, the proposed FSR module decomposes the features and produce shadow-related component, background-related component, and recombined features. In addition, the SSF module extracts style attributes and guide the feature disentanglement process. Finally, the low-level and high-level features are fused through the parallel decoder to generate three outputs (_i.e.,_ background image, shadow map, and reconstructed input image).
background image while simultaneously aiding the prediction of the shadow map. Through \(\mathcal{L}_{bg}\), we offer guidance to the network for predicting the shadow-free region of the background image. Nevertheless, to predict the complete background image and thereby enhance the quality of disentangling the background-related component, we also need to provide guidance for the shadowed areas. This aspect is accomplished through our SSF module, which will be discussed in Section 3.3.
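Reusing the `mae` helper from the previous sketch, the shadow-free-region loss of Eq. (14) amounts to a masked comparison (again a sketch, not the authors' code):

```python
def background_loss(p_bg, p_sd, image, gt_sd):
    """Eq. (14): supervise the generated background with the input image,
    restricted to regions that both the prediction and the ground truth
    mark as shadow-free."""
    return mae(p_bg * (1 - p_sd), image * (1 - gt_sd))
```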
### Shadow Style Filter Module
In Section 3.2, we decompose the integrated features into background-related and shadow-related components using the proposed FSR module with a differentiated supervision strategy. However, there are two imperfections: 1) The supervision of the background-related component is insufficient, as it only involves shadow-free regions, leading to a lack of guidance for generating shadowed regions. 2) It does not further emphasize the differences between the shadows and the background, which may result in unclear boundaries for isolating different components, making them less pure.
To address these issues, we consider incorporating style guidance into our method, as the presence or absence of shadows inherently represents a common style attribute. Following this idea, we design the SSF module, as depicted in Figure 3. It extracts style attributes from each of the three outputs from the FSR module (_i.e._, \(\hat{F}^{sd}\), \(\hat{F}^{bg}\), and \(\hat{F}^{re}\)), and then constrains the consistency and diversity between different style pairs in a contrastive learning fashion.
For the style attribute extraction, we adopt the Gram matrix (Gatys et al., 2017) of the feature map as the style representation. For the input features \(\hat{F}\in\mathbb{R}^{C\times H\times W}\), the Gram matrix \(M\in\mathbb{R}^{C\times C}\) captures correlations between its channels, which can be computed as follows:
\[M_{x,y}=\hat{F}_{x}^{T}\hat{F}_{y}, \tag{15}\]
where \(M_{x,y}\) denotes the \((x,y)\) element of Gram matrix \(M\), and \(\hat{F}_{x}\) and \(\hat{F}_{y}\) represent the \(x^{th}\) and \(y^{th}\) channels of \(\hat{F}\), respectively. Subsequently, we employ two consecutive linear layers to further extract the style attribute \(\rho\in\mathbb{R}^{C^{2}}\) from \(M_{x,y}\):
\[\rho=Linear\left(Linear\left(Flatten\left(M\right)\right)\right), \tag{16}\]
where \(Linear\) signifies a linear layer, and \(Flatten\) indicates a flatten operation. For the input components and features, \(\hat{F}^{sd}\), \(\hat{F}^{bg}\), and \(\hat{F}^{re}\), the extracted style attribute vectors are denoted as \(\rho^{sd}\), \(\rho^{bg}\), and \(\rho^{re}\), respectively.
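A PyTorch sketch of Eqs. (15)–(16) is given below; the width of the intermediate linear layer is not specified in the paper, so the value used here is a placeholder.

```python
import torch
import torch.nn as nn

class StyleExtractor(nn.Module):
    """Gram-matrix style attribute, Eqs. (15)-(16)."""
    def __init__(self, channels, hidden=256):  # hidden width is assumed
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels * channels, hidden),
            nn.Linear(hidden, channels * channels),
        )

    def forward(self, f):
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        gram = torch.bmm(f, f.transpose(1, 2))  # Eq. (15): (b, c, c)
        return self.mlp(gram.flatten(1))        # Eq. (16): style vector rho
```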
In our approach, the primary style consideration is the presence or absence of shadows. From this perspective, the style of the shadow-related component and recombined features should be consistent, as they collectively represent the existence of shadows. To achieve this, we employ the following loss function to enhance their consistency:
\[\mathcal{L}^{con}=1-cos\left(\rho^{sd},\rho^{re}\right)=1-\frac{\rho^{sd} \cdot\rho^{re}}{\left|\rho^{sd}\right|\left|\rho^{re}\right|}, \tag{17}\]
in which \(cos\) denotes the cosine similarity. Reducing \(\mathcal{L}^{con}\) is equivalent to increasing the cosine similarity, which in turn improves the consistency between \(\rho^{sd}\) and \(\rho^{re}\).
Conversely, the styles of the shadow-related component and background-related component ought to be distinct, as the latter embodies a shadow-free style. To augment their difference, we employ the subsequent differentiate loss:
\[\mathcal{L}^{diff}=\frac{(\rho^{re}\cdot\rho^{bg})^{2}}{C^{2}}, \tag{18}\]
A smaller \(\mathcal{L}^{diff}\) signifies that the two vectors are closer to orthogonal, meaning that the difference between them is larger.
The comprehensive style constraint loss, denoted as \(\mathcal{L}_{style}\), encompasses the similarity and diversity losses from both low-level and high-level pathways. This loss can be computed using the following equation:
\[\mathcal{L}_{style}=\mathcal{L}^{con}_{low}+\mathcal{L}^{diff}_{low}+\mathcal{ L}^{con}_{high}+\mathcal{L}^{diff}_{high}. \tag{19}\]
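The two style losses follow directly from Eqs. (17) and (18); a batched sketch:

```python
import torch.nn.functional as F

def style_losses(rho_sd, rho_bg, rho_re):
    """Consistency (Eq. 17) and differentiation (Eq. 18) losses for a
    batch of style vectors of length C^2."""
    l_con = 1 - F.cosine_similarity(rho_sd, rho_re, dim=-1).mean()
    c_sq = rho_re.shape[-1]  # C^2
    l_diff = (((rho_re * rho_bg).sum(dim=-1)) ** 2 / c_sq).mean()
    return l_con, l_diff
```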
The two constraints in the SSF module enable the two linear layers to extract the style related to the presence or absence of shadows from the Gram matrix more effectively. As the presence or absence of shadows serves as the decisive factor for the diversity or consistency of the two style attribute pairs, if the linear layers were to focus on other styles, the diversity or consistency would not be adequately captured. Thus, the process of back-propagation encourages the linear layers to concentrate on the shadow aspect. With this premise, the constraint that differentiates background-related and shadow-related components fosters the formation of distinctly different characteristics between them. This ensures that the information they contain is not easily duplicated, supporting the feasibility of our dual-layer modeling approach. More importantly, when combined with the shadow-free region constraint described in Section 3.2, the network gains the ability to separate background component without requiring a ground-truth background image, which in turn refines the shadow-related component.
### Overall Loss Function
The overall loss function of our method is formulated as follows:
\[\mathcal{L}=\mathcal{L}_{sd}+\alpha(\mathcal{L}_{re}+\mathcal{L}_{bg})+\beta \mathcal{L}_{style}, \tag{20}\]
Figure 3. Structure of the SSF module. The Gram matrix is used to extract style attributes of the background-related component, the shadow-related component, and the recombined features. Based on the presence or absence of shadows, we aim to bring the style of the shadow-related component closer to that of the recombined features, while differentiating it with that of the background-related component.
where \(\alpha\) and \(\beta\) are two balancing hyperparameters, which are empirically set to \(\alpha=0.2\) and \(\beta=0.1\), respectively.
## 4. Experiments
### Datasets and Evaluation Metric
#### 4.1.1. Datasets
We evaluate our method on three public datasets: SBU (Shou et al., 2017), ISTD (Shen et al., 2017), and UCF (Wang et al., 2017). The SBU dataset comprises 4,089 training images and 638 testing images. The ISTD dataset contains 1,330 training images and 540 testing images. Although it provides ground truths for both shadow maps and shadow-free images, we only use the ground truths for shadow maps in our task. The UCF dataset consists of 135 training images and 110 testing images. Following previous shadow detection works (Beng et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), we evaluate our method on both the SBU and UCF test sets using the model trained on the SBU training images, and on the ISTD testing set using the model trained on its own training set.
#### 4.1.2. Evaluation Metrics
We follow previous shadow detection works (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) and adopt the widely-used metric, balanced error rate (BER), to quantitatively evaluate performance:
\[BER=\left(1-\frac{1}{2}\left(\frac{TP}{TP+FN}+\frac{TN}{TN+FP}\right)\right) \times 100, \tag{21}\]
where \(TP\), \(TN\), \(FP\), and \(FN\) represent the numbers of true positive, true negative, false positive, and false negative pixels, respectively. BER considers error rates for both shadow and non-shadow regions, with lower values indicating better performance. Additionally, we also report the error rate for the shadow region, \(1-\frac{TP}{TP+FN}\), and the error rate for the non-shadow region, \(1-\frac{TN}{TN+FP}\).
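For reference, Eq. (21) can be computed from binarized prediction and ground-truth maps as follows (a sketch):

```python
import numpy as np

def ber(pred, gt):
    """Balanced error rate of Eq. (21) for binary numpy arrays; also
    returns the shadow and non-shadow error rates."""
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    err_shadow = 1 - tp / (tp + fn)
    err_nonshadow = 1 - tn / (tn + fp)
    return (100 * 0.5 * (err_shadow + err_nonshadow),
            100 * err_shadow, 100 * err_nonshadow)
```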
### Implementation Details
For the backbone, we adopt the lightweight EfficientNet-B3 (Wang et al., 2017) as in (Wang et al., 2017; Wang et al., 2017) and initialize it with pre-trained parameters from ImageNet (Deng et al., 2017). EfficientNet-B3 comprises 25 consecutive blocks, with the outputs of the first 6 blocks serving as our low-level features and the outputs of the remaining blocks as the high-level features. Our code is implemented with PyTorch and accelerated by a single NVIDIA RTX2080Ti. We also implement our network by using the MindSpore Lite tool1.
Footnote 1: [https://www.mindspore.cn/](https://www.mindspore.cn/)
During both training and inference, we resize the input image to 512\(\times\)512. In the training stage, we optimize the entire network for 20 epochs with a batch size of 4, using the Adam optimizer. The initial learning rate is set to 0.0005. The learning rate is adjusted using the exponential decay strategy, with a decay rate of 0.7. During the testing stage, we employ a fully connected conditional random field (CRF) (Shou et al., 2017) to further refine our predicted shadow map, following the approaches in (Beng et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Our proposed model has a real-time inference speed of 32 FPS for processing an image with the size of \(512\times 512\).
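The optimization schedule above corresponds to the following PyTorch setup, sketched with a stand-in model; applying the 0.7 decay once per epoch is our assumption, as the paper does not state the decay interval.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # stand-in for SDDNet
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.7)

for epoch in range(20):
    # ... iterate over 512x512 batches of size 4 and optimize the total loss ...
    scheduler.step()  # exponential decay, assumed once per epoch
```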
### Comparisons
We compare our method with 15 previous state-of-the-art shadow detection methods, both quantitatively and qualitatively. The methods we select include Unary-Pairwise (Dosov et al., 2015), scGAN (Wang et al., 2017), ST-CGAN (Shou et al., 2017), DC-DSPF (Wang et al., 2017), A+D Net (Wang et al., 2017), BDRAR (Wang et al., 2017), DSDNet (Wang et al., 2017), DSC (Wang et al., 2017), MTMT-Net (Beng et al., 2017), ECA (Deng et al., 2017), RCMPNet (Wang et al., 2017), FDRNet (Wang et al., 2017), CM-Net (Wang et al., 2017), TransShadow (Wang et al., 2017), and R2D (Wang et al., 2017). Among them, Unary-Pairwise is based on hand-crafted features, while all the others are CNN-based methods. For a fair comparison, all the results are provided directly by the authors or generated by the source codes under the default parameter settings of the corresponding models.
#### 4.3.1. Quantitative Comparison
We present the quantitative comparison results between our SDDNet and other models in Table 1. It is clear that our model is highly competitive among all these methods, securing either the first place or a tie for first place in terms of BER across all three datasets. This achievement demonstrates our model's ability to handle data with diverse characteristics and deliver satisfactory outcomes. In comparison to the previously best-performing CM-Net (Wang et al., 2017), our model exhibits an equal BER on the SBU dataset, while outperforming it by 11.81% and 2.08% on the ISTD and UCF datasets, respectively. Additionally, in the comparison of error rates within shadow and non-shadow regions, our model exhibits a consistently stable performance, ranking among the top positions across all three datasets.
#### 4.3.2. Qualitative Comparison
We also qualitatively compare the results of our model with those of previous models, as illustrated in Figure 4. It can be observed that the results of our model exhibit advantages, particularly in scenes with confusing background colors. For instance, in the first example, the dark eyes and hair of the cartoon character might be misclassified as shadows by other models; however, our SDDNet can effectively mitigate this interference due to its dual-layer modeling. Likewise, in the second and third examples, dark objects or dark ground may be erroneously identified as shadows by other networks, whereas our model prevents this error from occurring. Moreover, in the fourth and fifth examples, other models may misidentify shadows on light-colored backgrounds as non-shadows due to the varying background colors covered by the shadows. In contrast, our model avoids this interference as the shadow-related components utilized for prediction do not incorporate any background information.
### Ablation Study
To verify the effect of each part in our model, we conduct ablation studies on the SBU dataset with the following configurations:
* _Baseline_: Compared with the full model introduced in Section 3, we remove the FSR module and the SSF module.
* _Baseline+FSR_: Compared with _Baseline_, we add the FSR module.
* _Baseline+FSR*_: Compared with _Baseline+FSR_, we remove the joint training of generating the background image and reconstructing the input image, namely remove \(\mathcal{L}_{joint}\).
* _Baseline+FSR+SSF_: Compared with _Baseline+FSR_, we add the SSF module.
The quantitative results with all these different configurations are reported in Table 2. We also present the quantitative results for several different configurations in Figure 5.
#### 4.4.1. Effectiveness of the FSR module
In this part, we showcase the effectiveness of our proposed FSR module by comparing its performance to the results obtained without its implementation. The
FSR module allows for independent modeling of shadow and background layers, efficiently reducing the adverse effects of confounding background colors. By comparing the performance of _Baseline_ and _Baseline+FSR_, it is evident that the FSR module improves the performance of _Baseline+FSR_.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Source} & \multicolumn{3}{c}{ISTD [53]} & \multicolumn{3}{c}{SBU [52]} & \multicolumn{3}{c}{UCF [64]} \\ \cline{3-11} & & BER\(\downarrow\) & Shad.\(\downarrow\) & No Shad.\(\downarrow\) & BER\(\downarrow\) & Shad.\(\downarrow\) & No Shad.\(\downarrow\) & BER\(\downarrow\) & Shad.\(\downarrow\) & No Shad.\(\downarrow\) \\ \hline Unary-Pairwise [19] & CVPRβ11 & - & - & - & 25.03 & 36.26 & 13.80 & - & - & - \\ scGAN [43] & ICCVβ17 & 4.70 & 3.22 & 6.18 & 9.10 & 8.39 & 9.69 & 11.50 & 7.74 & 15.30 \\ ST-CGAN [53] & CVPRβ18 & 3.85 & 2.14 & 5.55 & 8.14 & 3.75 & 12.53 & 11.23 & 4.94 & 17.52 \\ DC-DSPF [55] & IJCAβ18 & - & - & - & 4.00 & 4.70 & 5.10 & 7.90 & 6.50 & 9.30 \\ A+D Net [36] & ECCVβ18 & - & - & - & 5.37 & 4.45 & 6.30 & 9.25 & 8.37 & 10.14 \\ BDRAR [65] & ECCVβ18 & 2.69 & 0.50 & 4.87 & 3.64 & 3.40 & 3.89 & 7.81 & 9.69 & 5.44 \\ DSDNet [62] & CVPRβ19 & 2.17 & 1.36 & 2.98 & 3.45 & 3.33 & 3.58 & 7.59 & 9.74 & 5.44 \\ DSC [25] & TPAMIβ19 & 3.42 & 3.85 & 3.00 & 5.59 & 9.76 & 1.42 & 10.54 & 18.08 & 3.00 \\ MTMT-Net [3] & CVPRβ20 & 1.72 & 1.36 & 2.08 & 3.15 & 3.73 & 2.57 & 7.47 & 10.31 & 4.63 \\ ECA [14] & MMβ21 & 2.03 & 2.88 & 1.19 & 5.93 & 10.82 & 1.03 & 10.71 & 18.59 & 2.83 \\ RCMPNet [40] & MMβ21 & 1.61 & 1.22 & 2.00 & 2.98 & 3.26 & 2.69 & 6.75 & 8.36 & 5.75 \\ FDRNet [66] & ICCVβ21 & 1.55 & 1.22 & 1.88 & 3.04 & 2.91 & 3.18 & 7.28 & 8.31 & 6.26 \\ CM-Net [67] & MMβ22 & 1.44 & - & - & **2.94** & - & - & 6.73 & - & - \\ TransShadow [29] & ICASSPβ22 & 1.73 & - & - & 3.17 & - & - & 6.95 & - & - \\ R2D [50] & WACVβ23 & 1.69 & 0.59 & 2.79 & 3.15 & 2.74 & 3.56 & 6.96 & 8.32 & 5.60 \\ Ours & / & **1.27** & 1.01 & 1.52 & **2.94** & 3.23 & 2.64 & **6.59** & 7.89 & 5.29 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Quantitative comparison results between our SDDNet and existing state-of-the-art methods. βShad.β and βNo Shad.β denote the error rates of shadow and non-shadow regions, respectively. Bold indicates the best performances, and underline indicates the second best performances.
Figure 4. Qualitative comparison between our SDDNet and existing state-of-the-art methods. (a) Input images. (b) Ground-truths. (c) The prediction of BDRAR [65]. (d) The prediction of DSDNet [62]. (e) The prediction of MTMT-Net [3]. (f) The prediction of FDRNet [66]. (g) The prediction of ECA [14]. (h) The prediction of CM-Net [67]. (i) The prediction of our SDDNet.
prediction accuracy. Compared to the scenario without the FSR module, the BER score improves from 3.39 to 3.29, a relative gain of 2.9%. As illustrated in Figure 5, when the backgrounds in shadowed areas (the first example) or non-shadowed areas (the second example) display various distinct characteristics, _Baseline_ struggles to eliminate such interference. It predicts the light yellow line in shadows as non-shadow and misclassifies the non-shadow dark part of the red clay court as shadows. In contrast, _Baseline+FSR_ performs better, as its isolated shadow-related components can mitigate the impact of background colors to some extent, achieving improved predictions. However, there are still noticeable discrepancies between the result and the ground truth, indicating that the absence of clear guidance for disentangling background-related components hinders the feature disentanglement from achieving complete success.
Additionally, we conduct experiments focusing on the joint training strategy in the FSR module, specifically the \(\mathcal{L}_{joint}\) term in the loss function. This joint training serves two purposes. Firstly, it constrains the reconstruction of the input image, ensuring that the information within the two isolated components is neither omitted nor redundant. Secondly, it constrains the generation of the background image, encouraging the production of the background-related component, thereby more effectively eliminating the interference of background information from shadow-related components. Comparing _Baseline+FSR_ and _Baseline+FSR*_, the former, which incorporates joint training, yields superior results, with a BER improvement of 0.03. This observation demonstrates the significance of joint training, and both of its functions are essential for achieving high-quality feature disentanglement.
#### 4.4.2. Effectiveness of the SSF module
Furthermore, we compare the performance of our model with and without the proposed SSF module. This module constrains feature disentanglement, taking into account both style diversity and consistency. By examining the results of _Baseline+FSR_ and _Baseline+FSR+SSF_ in Table 2, it is evident that adding the SSF module yields considerably improved results, lowering the BER by 0.35. This suggests that the style constraints within the SSF module indeed enhance the ability to more effectively separate the two feature components, thus simplifying the prediction of shadow maps. In the absence of the SSF module, disentangling the background-related component proves challenging, as ground-truth background images are unavailable. However, the SSF module ingeniously addresses this issue by diversifying the styles of background-related and shadow-related components in a weakly supervised manner.
In Figure 5, we can also observe the superiority brought by the SSF module. The predictions of _Baseline+FSR+SSF_ demonstrate an obvious advantage over _Baseline+FSR_, exhibiting a clear improvement in handling complex backgrounds and fully mitigating the impact of confounding background colors. Consequently, both the FSR and SSF modules are indispensable for obtaining stable and robust prediction results. They need to coordinate with each other in order to maximize their effectiveness.
## 5. Conclusion
In this paper, we present a novel Style-guided Dual-layer Disentanglement Network (SDDNet) for shadow detection. Our central idea is to separate the shadow and background layers of the input image to reduce the impact of background color. To achieve this goal, we introduce two novel modules. The first one is the Feature Separation and Recombination (FSR) module that separates complete features into shadow-related and background-related components using differentiated supervisions. Simultaneously, the joint training strategy of reconstructing the input image and generating the background image ensures the reliability of the separation process. Furthermore, we consider the presence and absence of shadows as a type of style and introduce style constraints to our model through a Shadow Style Filter (SSF) module, further enhancing the quality of feature disentanglement. Experimental results on three datasets demonstrate that our SDDNet achieves state-of-the-art performance, proving the effectiveness of our approach.
###### Acknowledgements.
This work was supported in part by the National Key R&D Program of China under Grant 2021ZD0112100, in part by the National Natural Science Foundation of China under Grant 62002014, Grant U1913204, Grant U1936212, Grant 62120106009, in part by the Taishan Scholar Project of Shandong Province under Grant tsqn202306079, in part by the Project for Self-Developed Innovation Team of Jinan City under Grant 2021GXRC038, in part by the Hong Kong Innovation and Technology Commission (InnoHK Project CIDMA), in part by the Hong Kong GRF-RGC General Research Fund under Grant 11203820 (9042598), in part by Young Elite Scientist Sponsorship Program by the China Association for Science and Technology under Grant 2020QNRC001, and in part by CAAI-Huawei MindSpore Open Fund.
\begin{table}
\begin{tabular}{c|c c c|c} \hline Configuration & FSR & \(\mathcal{L}_{joint}\) & SSF & BER\(\downarrow\) \\ \hline _Baseline_ & & & & 3.39 \\ _Baseline+FSR_ & β & β & & 3.29 \\ _Baseline+FSR*_ & β & & & 3.32 \\ _Baseline+FSR+SSF_ & β & β & β & **2.94** \\ \hline \end{tabular}
\end{table}
Table 2. Ablation study results for our SDDNet. Bold indicates the best performances.
Figure 5. The qualitative results of the ablation study. (a) Input images. (b) Ground-truths. (c) The prediction of _Baseline_. (d) The prediction of _Baseline+FSR_. (e) The prediction of _Baseline+FSR+SSF_. |
2307.11439 | The $\mathfrak S_k$-circular limit of random tensor flattenings | The tensor flattenings appear naturally in quantum information when one
produces a density matrix by partially tracing the degrees of freedom of a pure
quantum state. In this paper, we study the joint $^*$-distribution of the
flattenings of large random tensors under mild assumptions, in the sense of
free probability theory. We show the convergence toward an operator-valued
circular system with amalgamation on permutation group algebras for which we
describe the covariance structure. As an application we describe the law of
large random density matrix of bosonic quantum states. | StΓ©phane Dartois, Camille Male, Ion Nechita | 2023-07-21T08:59:33Z | http://arxiv.org/abs/2307.11439v1 | # The \(\boldsymbol{\mathfrak{S}}_{k}\)-circular limit of random tensor flattenings
###### Abstract
The tensor flattenings appear naturally in quantum information when one produces a density matrix by partially tracing the degrees of freedom of a pure quantum state. In this paper, we study the joint \({}^{*}\)-distribution of the flattenings of large random tensors under mild assumptions, in the sense of free probability theory. We show the convergence toward an operator-valued circular system with amalgamation on permutation group algebras for which we describe the covariance structure. As an application we describe the law of large random density matrix of bosonic quantum states.
Primary 15B52, 46L54; Keywords: Free Probability, Large Random Matrices, Random Tensors, Quantum Information, Bosonic quantum states
## Acknowledgements
This research was funded in part by the Australian Research Council grant DE210101323 of Stephane Dartois and the French National Research Agency (ANR) under the projects STARS ANR-20-CE40-0008 and Esquisses ANR-20-CE47-0014-01.
###### Contents
* 1 Introduction and presentation of the problem
* 2 The circular limit of flattenings
  * 2.1 Circular variables over the symmetric group
  * 2.2 Main result and applications
    * 2.2.1 Statement and corollaries
    * 2.2.2 Construction of \(\mathfrak{S}_{k}\)-free and free circular systems
    * 2.2.3 Examples: proof of Corollary 1.5
* 3 Proof of preliminary lemmas of Sections 2.1 and 2.2
* 4 Proof of the convergence of the \(\mathfrak{S}_{k}\)-covariance
* 5 Proof of the asymptotic \(\mathfrak{S}_{k}\)-circularity
  * 5.1 Injective trace method for tensors
  * 5.2 Expression of injective traces under Hypothesis 1.2
  * 5.3 Important examples
    * 5.3.1 A case with no twisting
    * 5.3.2 A case with twistings
  * 5.4 Convergence of injective traces
  * 5.5 End of the proof
## 1 Introduction and presentation of the problem
Let \(F\) be an \(N\)-dimensional complex vector space given with a basis, and let \(k\geq 1\) be a fixed integer. Thanks to the coordinates in this basis, we represent a tensor of order \(2k\) on \(F\) as a multi-indexed vector in \((\mathbb{C}^{N})^{\otimes 2k}\), such as
\[M_{N}=\big{(}m(i_{1},\ldots,i_{2k})\big{)}_{i_{1},\ldots,i_{2k}\in[N]}\in( \mathbb{C}^{N})^{\otimes 2k},\]
where \([N]:=\{1,\ldots,N\}\). Splitting the \(k\) first and \(k\) last indices, a tensor \(M_{N}\) is canonically associated to a matrix of \(\mathrm{M}_{N^{k}}(\mathbb{C})\) that represents the endomorphism of \((\mathbb{C}^{N})^{\otimes k}\)
\[M_{N,id}:=\big{(}m(\mathbf{i},\mathbf{j})\big{)}_{\mathbf{i},\mathbf{j}\in[N] ^{k}}\in\mathrm{End}\big{(}(\mathbb{C}^{N})^{\otimes k}\big{)}\cong\mathrm{M} _{N^{k}}(\mathbb{C}).\]
Furthermore, we can also represent a tensor differently by shuffling the roles of the indices, producing different matrices here called "flattenings", and sometimes "matricizations" or "unfoldings", of the initial tensor. More precisely, let us denote by \(\mathfrak{S}_{2k}\) the symmetric group of order \(2k\), that is, the group of bijections of \([2k]\).
**Definition 1.1**.: _For any tensor \(M_{N}\in(\mathbb{C}^{N})^{\otimes 2k}\) and any bijection \(\sigma\in\mathfrak{S}_{2k}\), we define the element of \(\mathrm{M}_{N^{k}}(\mathbb{C})\sim\mathrm{End}\big{(}(\mathbb{C}^{N})^{\otimes k}\big{)}\)_
\[M_{N,\sigma}:=\Big{(}M_{N}\big{(}i_{\sigma^{-1}(1)},\ldots,i_{\sigma^{-1}(2k) }\big{)}\Big{)}_{(i_{1},\ldots,i_{k}),(i_{k+1},\ldots,i_{2k})\in[N]^{k}}, \tag{1.1}\]
_which is called, for short, a flattening1 of \(M_{N}\)._
Footnote 1: In this article we only consider the here defined balanced flattenings, which are only a sub-family of all the flattenings of a tensor. In the context of this paper, we just call them flattenings.
For \(k=1\), there are 2 permutations: id and the transposition \((1,2)\), so there are 2 flattenings: the canonical one \(M_{N,\mathrm{id}}\) and its transpose \(M_{N,(1,2)}=(M_{N,\mathrm{id}})^{\top}\). For general order \(k\geq 1\), there are up to \((2k)!\) different flattenings of a tensor.
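To make the indexing concrete, here is a minimal numerical sketch (ours, not from the paper) of how a flattening can be realized with numpy; the helper name `flattening` and the 0-indexed permutation convention are our own choices.

```python
import numpy as np

N, k = 3, 2
rng = np.random.default_rng(0)
M = rng.standard_normal([N] * (2 * k))  # a (real) tensor of order 2k

def flattening(M, sigma, N, k):
    """A balanced flattening in the spirit of Definition 1.1, with sigma a
    0-indexed tuple: permuting the axes of M by sigma places the entry
    M(i_{sigma^{-1}(1)}, ..., i_{sigma^{-1}(2k)}) at position (i_1, ..., i_{2k});
    reshaping then groups the first k and last k indices into rows and columns."""
    return np.transpose(M, axes=sigma).reshape(N**k, N**k)

F = flattening(M, (0, 1, 2, 3), N, k)  # the canonical flattening M_{N,id}

# For k = 1 the two flattenings are a matrix and its transpose:
A = rng.standard_normal([N, N])
assert np.allclose(flattening(A, (1, 0), N, 1), flattening(A, (0, 1), N, 1).T)
```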
We work under the following assumptions.
**Hypothesis 1.2**.: _The tensor \(M_{N}\in(\mathbb{C}^{N})^{\otimes 2k}\) has i.i.d. entries distributed as a centered complex random variable \(m_{N}\) having finite moments of all orders (i.e. \(\mathbb{E}\big{[}|m_{N}|^{\ell}\big{]}<\infty\) for all \(\ell\)). Moreover, the following limits exist_
\[N^{k}\times\mathbb{E}\big{[}|m_{N}|^{2}\big{]}\underset{N\to\infty}{ \longrightarrow}c>0,\ \ N^{k}\times\mathbb{E}\big{[}m_{N}^{2}\big{]}\underset{N\to\infty}{ \longrightarrow}c^{\prime}\in\mathbb{C}, \tag{1.2}\]
_and for all non-negative integers \(\ell_{1},\ell_{2}\) such that \(\ell_{1}+\ell_{2}>2\)_
\[N^{k}\times\mathbb{E}\big{[}m_{N}^{\ell_{1}}\overline{m_{N}}^{\ell_{2}}\big{]} \underset{N\to\infty}{\longrightarrow}\ \ 0. \tag{1.3}\]
_We call \((c,c^{\prime})\) the parameter of \(M_{N}\)._
**Example 1.3**.: _In item 2, we use the notation \(x\rightsquigarrow\nu\) to mean that a complex random variable \(x\) is distributed according to a probability distribution \(\nu\)._
1. _Let_ \(M_{N}\) _be sampled according to the complex Ginibre ensemble, i.e. the entries of_ \(N^{k/2}M_{N}\) _are distributed according to the standard complex Gaussian distribution. Then_ \(M_{N}\) _satisfies Hypothesis_ 1.2 _and its parameter is_ \((1,0)\)_. A real Ginibre ensemble also satisfies the hypothesis with parameter_ \((1,1)\)_._
2. _Let_ \(p_{N}\in]0,1],N\geq 1\) _be a sequence of real numbers and let_ \(\mu\) _be the distribution of a complex random variable with no atom in 0, and with finite moments of all orders. We denote the probability measure_ \[\nu:=(1-p_{N})\delta_{0}+p_{N}\mu.\] (1.4) _If_ \(\mu\) _is a Dirac mass, we assume_ \(p_{N}<1/2\)_. Let_ \(M_{N}\) _be a random tensor with i.i.d. entries distributed as_ \(\sigma_{N}^{-2}(x-\alpha\,p_{N})\) _with_ \(x\rightsquigarrow\nu\)_, where for a variable_ \(y\rightsquigarrow\mu\) _we have set_ \(\alpha=\mathbb{E}[y]\)_,_ \(\beta^{2}=\mathbb{E}[|y|^{2}]\) _and_ \(\sigma_{N}=N^{k}p_{N}(\beta^{2}-|\alpha|^{2}p_{N})\)_. If_ \(N^{k}p_{N}\) _tends to infinity, then_ \(M_{N}\) _satisfies Hypothesis_ 1.2 _and its parameter is_ \((\mathbb{E}[|y|^{2}],\mathbb{E}[y^{2}])\) _where_ \(y\rightsquigarrow\mu\)_._
**Remark 1.4**.: _A variable sampled from \(\nu\) defined in (1.4) is distributed according to \(\mu\) with probability \(p_{N}\) and is equal to zero otherwise. We say that \(\nu\) is a dilution of \(\mu\). The entries of \(M_{N}\) are normalized to be centered and to get the announced parameter. The sequence \(p_{N}\) is allowed to converge to zero, provided the average number of entries different from the constant \(\alpha p_{N}\) in each column of a flattening of \(M_{N}\) converges to infinity._
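As a quick sanity check of item 1, here is a short sampling sketch (ours, assuming numpy): with the complex Ginibre scaling, \(N^{k}\,\mathbb{E}\big{[}|m_{N}|^{2}\big{]}=1\) holds exactly at every \(N\).

```python
import numpy as np

N, k = 4, 2
rng = np.random.default_rng(0)

# Complex Ginibre tensor of Example 1.3, item 1: the entries of N^{k/2} M_N
# are standard complex Gaussians, so N^k E|m_N|^2 = 1 and E[m_N^2] = 0,
# i.e. the parameter is (c, c') = (1, 0).
M = (rng.standard_normal([N] * (2 * k))
     + 1j * rng.standard_normal([N] * (2 * k))) / np.sqrt(2 * N**k)

print(N**k * np.mean(np.abs(M) ** 2))  # concentrates around 1
```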
In this article, we consider the collection of all the flattenings of such a random tensor. We study the \({}^{*}\)-distribution of this family in the sense of free probability (whose definition is recalled in Section 2). Under the above assumptions, we characterize the limit in simple terms thanks to operator-valued free probability theory, which is the non-commutative analogue of conditional probability. Our main result, stated in Theorem 2.15, establishes that the limit of the flattenings is an operator-valued circular collection.
This extends freeness results of a random matrix and its transpose, which is known for unitarily invariant random matrices that converge in \({}^{*}\)-distribution [11], and more generally for "asymptotically unitarily invariant matrices" in the sense of traffics [1], which includes matrices with i.i.d. entries. In contrast with the results of these cited works, we show that in our more general setting freeness does not hold between all the flattenings. Other studies of random tensors in the context of free probability include the asymptotic freeness of a Wishart matrix and its partial transpose [11], and asymptotic semicircularity for contracted Wigner-type tensors [1]. Our result also generalizes [14, Theorem 4.13] when we restrict our attention to square matrices.
Our motivation for this paper comes from quantum information theory. Recall first that the singular values of a matrix \(A_{N}\) are the square roots of the eigenvalues of the positive semidefinite matrix \(A_{N}A_{N}^{*}\). For a random matrix \(A_{N}\), we call (averaged) empirical singular values distribution the probability measure \(\mathbb{E}\big{[}\frac{1}{N}\sum_{i\in[N]}\delta_{s_{i}}\big{]}\), where \(\delta_{s_{i}}\) denote the Dirac mass at the singular values \(s_{i}\) of \(A_{N}\). If \(A_{N}\) is Hermitian, we call (averaged) empirical eigenvalues distribution the probability measure \(\mathbb{E}\big{[}\frac{1}{N}\sum_{i\in[N]}\delta_{\lambda_{i}}\big{]}\), where the \(\lambda_{i}\)'s are the eigenvalues of \(A_{N}\). The symmetrized matrix
\[\sum_{\sigma,\sigma^{\prime}\in\mathfrak{S}_{2k}}M_{N,\sigma}M_{N,\sigma^{ \prime}} \tag{1.5}\]
is, up to normalization, the density matrix associated to the partial trace over a bipartition of a random quantum state of bosons. Bosons are one of the two flavors (together with fermions) of indistinguishable particles in nature. Their quantum states must be invariant under exchange of particles. The typical properties of such random states have not been studied in much detail in the mathematical literature, despite the fact that in many contexts (_e.g._ condensed matter) they are much more natural states to consider. An attempt at studying a model that is close in spirit can be found in [13].
In the present paper, we describe the spectrum of the marginals of (unnormalized) random bosonic states in order to bound their limiting geometric bipartite entanglement. In fact, a consequence of our work is that the largest Schmidt coefficient of such states, before normalization, is asymptotically bounded from below by \(2\). This Schmidt coefficient is a well-known measure of bipartite entanglement which directly relates to geometric entanglement [14].
This is a first step in our work, as we plan to use the results of this paper to study the spectrum of the partial transpose of bosonic quantum states in the future. Indeed, we know from numerical simulations that their spectrum behaves very differently from the spectrum of the partial transpose of quantum states with no symmetry.
Moreover, we hope that this formalism could be used to study general permutational criteria of entanglement [13] of which the partial transpose and the realignment criteria are well known special cases, as indeed the permutational criteria can be encoded through the action of \(\mathfrak{S}_{2k}\) on density matrices.
Finally, though our work sheds no new light on this specific aspect, we want to point out the importance of flattenings of quantum states to study entanglement. In fact, the separability of quantum states is equivalent to the vanishing of all the \(2\times 2\) minors of some of its flattenings2, called contraction maps [10].
Footnote 2: Note that these are not the flattenings we consider here, as we only consider a subset of them that produce maps \((\mathbb{C}^{N})^{\otimes k}\to(\mathbb{C}^{N})^{\otimes k}\), while contraction maps are all the flattenings leading to maps \((\mathbb{C}^{N})^{\otimes 2k-1}\to\mathbb{C}^{N}\).
Another motivation comes from data analysis, where information on a noisy data tensor is retrieved from the spectral properties of its flattenings for instance in the context of Multilinear Subspace Analysis (MSA) [11]. Our work could give insight on the free independence properties of the flattenings of the noisy part of such data tensors. In particular, we allow the tensor to be diluted, i.e. to have a majority of zero entries which is a regime appearing in _e.g._ community detection [12] or in MSA with (a lot of) missing values.
Finally, one of the most important recent uses of flattenings of tensors is in the context of tensor PCA: when they are used to initialize tensor power algorithms, they drastically improve the detection threshold of this family of algorithms while achieving a better accuracy above the threshold. See [15, Sections 3 and 6] for a foundational work on this topic.
To conclude this section, let us discuss in more detail a simple situation that our result solves. Let \(M_{N}\) be a random tensor satisfying Hypothesis 1.2 with parameter \((c,c^{\prime})\). It is known that, for each \(\sigma\in\mathfrak{S}_{2k}\), the empirical eigenvalues distribution of \(M_{N,\sigma}M_{N,\sigma}^{*}\) converges to the Marchenko-Pastur distribution \(\operatorname{MP}_{c}=\operatorname{MP}_{c,1}\) of variance \(c\) and shape parameter \(\lambda=1\) (since the matrices we consider are square). We recall that the \(\operatorname{MP}_{c,\lambda}\) distribution has the following form
\[\operatorname{MP}_{c,\lambda}=\max(1-\frac{1}{\lambda},0)\delta_{0}+\frac{ \sqrt{(b-x)(x-a)}}{2\pi c\lambda x}\mathbbm{1}_{x\in(a,b)}\mathrm{d}x,\]
with \(a=c(1-\sqrt{\lambda})^{2}\), \(b=c(1+\sqrt{\lambda})^{2}\). As expressed earlier, in this work we consider the \(\lambda=1\) case.
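In the square case used throughout this paper, the formula specializes (set \(a=0\), \(b=4c\)) to a measure with no atom at zero:

\[\mathrm{MP}_{c}=\mathrm{MP}_{c,1}=\frac{\sqrt{x(4c-x)}}{2\pi cx}\mathbb{1}_{x\in(0,4c)}\mathrm{d}x.\]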
More generally, it is known in free probability that each flattening converges in non-commutative distribution to a so-called _circular variable_[13] (see next section for the definitions). Furthermore, when \(c^{\prime}=0\) the transpose \(M_{N,\sigma}^{\top}\) is also known, from works of Mingo and Popa [11] as well as Cébron, Dahlqvist and the second named author of this paper [10], to be asymptotically free from \(M_{N,\sigma}\), so \(S_{N,\sigma}^{\pm}=1/\sqrt{2}(M_{N,\sigma}\pm M_{N,\sigma}^{\top})\) converges to a circular variable with the same variance. Therefore, the empirical eigenvalues distribution of \(S_{N,\sigma}^{\pm}{S_{N,\sigma}^{\pm}}^{*}\) converges to a Marchenko-Pastur distribution. Similarly the empirical eigenvalues distribution of \(S_{N,\sigma}^{\pm}+{S_{N,\sigma}^{\pm}}^{*}\) converges to a semicircular distribution.
One can also check (see Section 4) that the limits of the matrices are decorrelated, that is \(\mathbb{E}\big{[}\frac{1}{N}\mathrm{Tr}M_{N,\sigma}M_{N,\sigma^{\prime}}^{*} \big{]}\underset{N\to\infty}{\longrightarrow}0\) if \(\sigma\neq\sigma^{\prime}\). Hence it is natural to wonder if the collection of all matrices \(M_{N,\sigma}\) converges to free circular variables for \(k\geq 2\). This is not true as shows the following consequence of our main result.
**Corollary 1.5**.: _Let \(k\geq 1\) be a fixed integer. Let \(M_{N}\) be a random tensor satisfying Hypothesis 1.2 with parameter \((c,c^{\prime})\). We consider the three following random matrices_
\[S_{1,N}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}}M_{N,\sigma},\quad S_{2,N}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}} \mathrm{sg}(\sigma)\,M_{N,\sigma},\]
\[S_{3,N}=\frac{1}{\sqrt{2(2k)!k!(c+\Re c^{\prime})}}\sum_{\sigma\in\mathfrak{S }_{2k}}\big{(}M_{N,\sigma}+M_{N,\sigma}^{*}\big{)},\]
_where \(\mathrm{sg}(\sigma)\) is the signature of the permutation \(\sigma\). We denote by \(\delta_{0}\) the Dirac mass at zero, by \(\mathrm{MP}\) the Marchenko-Pastur distribution of shape parameter \(\lambda=1\) and variance \(1\), and by \(\mathrm{SC}\) the standard semicircular distribution._
1. _The empirical eigenvalues distribution of_ \(S_{1,N}S_{1,N}^{*}\) _and_ \(S_{2,N}S_{2,N}^{*}\) _converge to the distribution_ \((1-k!^{-1})\delta_{0}+k!^{-1}\mathrm{MP}\)_._
2. _The empirical eigenvalues distribution of_ \(S_{3,N}\) _converges to the distribution_ \((1-k!^{-1})\delta_{0}+k!^{-1}\mathrm{SC}\)_._
If the matrices \(M_{N,\sigma}\), \(\sigma\in\mathfrak{S}_{2k}\), were asymptotically free, the limits in the first two items of the above corollary would be Marchenko-Pastur distributions, not the dilutions of the Marchenko-Pastur distribution that we observe when \(k\geq 2\). Note that the dilution parameter \(k!^{-1}\) depends only on \(k\). For a diluted tensor as in the second item of Example 1.3, the limit does not depend on the dilution parameter \(p_{N}\) of the model.
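The dilution in item 1 can be observed numerically. The following Monte Carlo sketch (ours, assuming numpy) uses a complex Ginibre tensor with \(k=2\): since \(U_{N,\eta}S_{1,N}=S_{1,N}U_{N,\eta^{\prime}}=S_{1,N}\) (by Lemma 2.16, cf. the proof of Lemma 2.24 below), \(S_{1,N}\) is supported on the symmetric subspace of \((\mathbb{C}^{N})^{\otimes 2}\), of dimension \(N(N+1)/2\approx N^{2}/2\), which explains the limiting mass \(1-1/k!=1/2\) at zero.

```python
import numpy as np
from itertools import permutations
from math import factorial

N, k = 8, 2
rng = np.random.default_rng(1)
# Complex Ginibre tensor, parameter (c, c') = (1, 0)
M = (rng.standard_normal([N] * (2 * k))
     + 1j * rng.standard_normal([N] * (2 * k))) / np.sqrt(2 * N**k)

# S_{1,N} of Corollary 1.5 (with c = 1): sum of all flattenings, normalized
S1 = sum(np.transpose(M, axes=s).reshape(N**k, N**k)
         for s in permutations(range(2 * k)))
S1 /= np.sqrt(factorial(2 * k) * factorial(k))

eigs = np.linalg.eigvalsh(S1 @ S1.conj().T)
# At finite N the kernel already contains the complement of the symmetric
# subspace; the zero-eigenvalue fraction tends to 1 - 1/k! = 1/2.
print(np.mean(eigs < 1e-10), 1 - N * (N + 1) / (2 * N**k))
```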
The asymptotic relations between the matrices \(M_{N,\sigma}\) that imply the lack of freeness are well explained thanks to operator-valued free probability theory.
**Organisation of the paper.** Section 2 is dedicated to the presentation of our results and to their use for the concrete computation of limiting laws of flattenings of several tensors. This part is mostly algebraic and free probabilistic in nature. In a first subsection, 2.1, we recall notions from [10] relevant to our work. In Subsection 2.2 we state our main result and prove our main application to marginals of symmetric and anti-symmetric (un-normalized) quantum states: Corollary 1.5.
In Section 3, we prove technical lemmas that we make a repeated use of throughout the paper. In Section 4, we prove the convergence of the operator valued covariance of the flattenings.
Section 5 is the most technical part of the paper and is devoted to the convergence of the family of flattenings to a \(\mathfrak{S}_{k}\)-circular system. It contains a good amount of combinatorics of hypergraphs and therefore recalls the needed definitions. It also recalls the method of the injective trace and details important examples that are used as anchors in the last part of the proof.
## 2 The circular limit of flattenings
### Circular variables over the symmetric group
We first recall the classical notion of large \(N\) limit for random matrices in free probability [14, 10]. All random matrices under consideration are assumed to have entries with finite moments of all orders. We set \(\Phi_{N}:=\mathbb{E}\big{[}\frac{1}{N^{k}}\mathrm{Tr}\,\cdot\,\big{]}\) the normalized expected trace on \(\mathrm{M}_{N^{k}}(\mathbb{C})\). The integer \(k\geq 1\) is fixed as \(N\) varies, and for an integer \(n\geq 1\), we set \([n]=\{1,\ldots,n\}\).
**Definition 2.1**.:
1. _A_ \({}^{*}\)_-probability space is a couple_ \((\mathcal{A},\phi)\) _where_ \(\mathcal{A}\) _is a_ \(*\)_-algebra and_ \(\phi:\mathcal{A}\to\mathbb{C}\) _is a unital positive linear form, i.e._ \(\phi(1_{\mathcal{A}})=1\) _and_ \(\phi(aa^{*})\geq 0\) _for all_ \(a\in\mathcal{A}\)_._
2. _Let_ \(\mathbf{M}_{N}=(M_{N,j})_{j\in J}\) _be a family of random matrices in_ \(\mathrm{M}_{N^{k}}(\mathbb{C})\)_, and let_ \(\mathbf{m}=(m_{j})_{j\in J}\) _be a family of elements in a_ \(*\)_-probability space_ \((\mathcal{A},\phi)\)_. We say that_ \(\mathbf{M}_{N}\) _converges in_ \(*\)_-distribution to_ \(\mathbf{m}\) _whenever_ \[\Phi_{N}\Big{[}M_{N,j_{1}}^{\varepsilon_{1}}\cdots M_{N,j_{L}}^{\varepsilon_{L}}\Big{]}\underset{N\to\infty}{\longrightarrow}\phi\big{[}m_{j_{1}}^{\varepsilon_{1}}\cdots m_{j_{L}}^{\varepsilon_{L}}\big{]}\in\mathbb{C}\] (2.1) _for any_ \(L\geq 1\)_, and any_ \(j_{\ell}\in J\)_,_ \(\varepsilon_{\ell}\in\{1,*\}\)_,_ \(\ell\in[L]\)_._
To describe the limit when \(\mathbf{M}_{N}\) is the collection of flattenings of a tensor \(M_{N}\) satisfying Hypothesis 1.2, it is actually much easier to consider also other quantities, involving the following definitions.
**Definition 2.2**.:
1. _For any_ \(\eta\in\mathfrak{S}_{k}\)_, let_ \(U_{N,\eta}\) _be the unitary matrix of size_ \(N^{k}\) _whose action on a simple tensor_ \(v_{1}\otimes\cdots\otimes v_{k}\in(\mathbb{C}^{N})^{\otimes k}\) _is_ \[U_{N,\eta}(v_{1}\otimes\cdots\otimes v_{k}):=v_{\eta(1)}\otimes\cdots\otimes v _{\eta(k)}.\] _We denote by_ \(\mathbb{C}\mathfrak{S}_{N,k}\) _the vector space spanned by all the matrices_ \(U_{N,\eta}\) _for_ \(\eta\) _in_ \(\mathfrak{S}_{k}\)_._
2. _Recalling the notation_ \(\Phi_{N}=\mathbb{E}\big{[}\frac{1}{N^{k}}\mathrm{Tr}\,\cdot\,\big{]}\)_, let_ \(\mathcal{E}_{N}\) _be the linear map defined, for any random matrix_ \(A_{N}\in\mathrm{M}_{N^{k}}(\mathbb{C})\)_, by_ \[\mathcal{E}_{N}(A_{N})=\sum_{\eta\in\mathfrak{S}_{k}}\Phi_{N}\big{[}A_{N}U_{N,\eta}^{*}\big{]}U_{N,\eta}\in\mathbb{C}\mathfrak{S}_{N,k}.\]
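For small \(N\) and \(k\), both objects are easy to implement; here is a sketch (ours, assuming numpy, 0-indexed permutations, and with the expectation in \(\Phi_{N}\) dropped, i.e. a single sample):

```python
import numpy as np
from itertools import permutations

N, k = 3, 2
dim = N**k

def U(eta):
    """Permutation unitary of Definition 2.2: permutes the k tensor factors
    of (C^N)^{otimes k}; built column by column from its action on the basis."""
    cols = [np.transpose(np.eye(dim)[j].reshape([N] * k), axes=eta).reshape(-1)
            for j in range(dim)]
    return np.stack(cols, axis=1)

def E_N_coeffs(A):
    """Coefficients Phi_N[A U_eta^*] of E_N(A) on the basis (U_eta)."""
    return {eta: np.trace(A @ U(eta).conj().T) / dim
            for eta in permutations(range(k))}

assert np.allclose(U((1, 0)) @ U((1, 0)).conj().T, np.eye(dim))  # unitarity
print(E_N_coeffs(np.eye(dim)))  # id: 1, swap: 1/N, cf. Lemma 2.3(1) below
```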
In Definition 2.7 below, we define a notion of convergence with respect to \(\mathcal{E}_{N}\), rather than \(\Phi_{N}\). To introduce it, we first state basic properties. Firstly, one sees that \(\eta\mapsto U_{N,\eta}\) is a representation of \(\mathfrak{S}_{k}\), namely \(U_{N,\eta}U_{N,\eta^{\prime}}=U_{N,\eta\eta^{\prime}}\) and \(U_{N,\eta}^{*}=U_{N,\eta^{-1}}\) for all \(\eta,\eta^{\prime}\in\mathfrak{S}_{k}\). The next statements are proved in Section 3.
**Lemma 2.3**.:
1. _For all_ \(\eta\in\mathfrak{S}_{k}\)_,_ \(\Phi_{N}[U_{N,\eta}]=N^{\#\eta-k}\)_, where_ \(\#\eta\) _is the number of cycles of_ \(\eta\)_._
2. _The matrices of_ \((U_{N,\eta})_{\eta\in\mathfrak{S}_{k}}\) _are linearly independent when_ \(N\geq k\)_._
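For instance, for \(k=2\) and \(\eta\) the transposition of \(\mathfrak{S}_{2}\), \(U_{N,\eta}\) is the swap operator on \(\mathbb{C}^{N}\otimes\mathbb{C}^{N}\), whose trace is \(N\); hence \(\Phi_{N}[U_{N,\eta}]=N/N^{2}=N^{\#\eta-k}\) with \(\#\eta=1\), as stated in the first item.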
In particular, when \(N\geq k\), any element \(B_{N}\) of \(\mathbb{C}\mathfrak{S}_{N,k}\) has a unique decomposition
\[B_{N}=\sum_{\eta\in\mathfrak{S}_{k}}B_{N}(\eta)U_{N,\eta}.\]
We say that the \(B_{N}(\eta)\)'s are the coefficients of \(B_{N}\). We can hence introduce the following notion of (coefficient-wise) convergence for a sequence of elements in \(\mathbb{C}\mathfrak{S}_{N,k}\).
**Definition 2.4**.:
1. _The group algebra_ \(\mathbb{C}\mathfrak{S}_{k}\) _of_ \(\mathfrak{S}_{k}\) _is the vector space with basis_ \((u_{\eta})_{\eta\in\mathfrak{S}_{k}}\)_, endowed with the product induced by_ \(u_{\eta}u_{\eta^{\prime}}:=u_{\eta\eta^{\prime}}\) _and the antilinear involution induced by_ \(u_{\eta}^{*}:=u_{\eta^{-1}}\)_. Every_ \(b\in\mathbb{C}\mathfrak{S}_{k}\) _has a unique decomposition_ \(b=\sum_{\eta\in\mathfrak{S}_{k}}b(\eta)u_{\eta}\)_._
2. _Let_ \((B_{N})_{N\geq 1}\) _be a sequence of elements of_ \(\mathbb{C}\mathfrak{S}_{N,k}\) _and let_ \(b\) _in_ \(\mathbb{C}\mathfrak{S}_{k}\)_. We say that_ \(B_{N}\) _converges to_ \(b\) _as_ \(N\to\infty\) _whenever_ \(B_{N}(\eta)\underset{N\to\infty}{\longrightarrow}b(\eta)\) _for all_ \(\eta\in\mathfrak{S}_{k}\)_, in which case we write_ \(B_{N}\underset{N\to\infty}{\longrightarrow}b\)_._
The following lemmas state properties of \(\mathcal{E}_{N}\). They are proved in Section 3.
**Lemma 2.5**.: _For all \(B_{N},B^{\prime}_{N}\in\mathbb{C}\mathfrak{S}_{N,k}\) and \(A_{N}\in\mathrm{M}_{N^{k}}(\mathbb{C})\), we have_
\[\mathcal{E}_{N}(B_{N}A_{N}B^{\prime}_{N}) = B_{N}\mathcal{E}_{N}(A_{N})B^{\prime}_{N}, \tag{2.2}\] \[\mathcal{E}_{N}(\mathbb{I}_{N}) = \mathbb{I}_{N}+o(1) \tag{2.3}\]
_where the \(o(1)\) means a sequence of elements of \(\mathbb{C}\mathfrak{S}_{N,k}\) whose coefficients converge to zero. Moreover, if the coefficients of \(\mathcal{E}_{N}(A_{N})\) are bounded, then_
\[\Phi_{N}\big{[}\mathcal{E}_{N}(A_{N})\big{]} = \Phi_{N}[A_{N}]+o(1). \tag{2.4}\]
**Lemma 2.6**.: _For all \(N\), the map \(\mathcal{E}_{N}\) is completely positive, i.e. the map \(\mathrm{id}_{\mathrm{M}_{p}(\mathbb{C})}\otimes\mathcal{E}_{N}\) is positive for all integers \(p\geq 1\)._
We mention this complete positivity property since it is included in the definition of operator-valued probability space, although we do not use it; for a proof, see Section 3.
We can now recall the notion of large \(N\) limit with respect to \(\mathcal{E}_{N}\). In the definition below, we restrict our attention to the case of amalgamation over \(\mathbb{C}\mathfrak{S}_{k}\), although it can be replaced by any finitely generated unital \({}^{*}\)-algebra.
**Definition 2.7**.:
1. _An operator-valued_ \({}^{*}\)_-probability space with amalgamation over_ \(\mathbb{C}\mathfrak{S}_{k}\) _(called in short, a_ \(\mathfrak{S}_{k}^{*}\)_-probability space) is a triplet of the form_ \((\mathcal{A},\mathbb{C}\mathfrak{S}_{k},\mathcal{E})\) _where_ \(\mathcal{A}\) _is a_ \({}^{*}\)_-algebra,_ \(\mathbb{C}\mathfrak{S}_{k}\) _is a subalgebra of_ \(\mathcal{A}\)_, and_ \(\mathcal{E}:\mathcal{A}\to\mathbb{C}\mathfrak{S}_{k}\) _is a conditional expectation, i.e. a completely positive unital linear map such that_ \[\mathcal{E}(bab^{\prime})=b\mathcal{E}(a)b^{\prime}.\]
2. _Let_ \(\mathbf{M}_{N}=(M_{N,j})_{j\in J}\) _be a family of random matrices in_ \(\mathrm{M}_{N^{k}}(\mathbb{C})\)_, and let_ \(\mathbf{m}=(m_{j})_{j\in J}\) _be a family of elements in a_ \(\mathfrak{S}_{k}^{*}\)_-probability space_ \((\mathcal{A},\mathbb{C}\mathfrak{S}_{k},\mathcal{E})\)_. We say that_ \(\mathbf{M}_{N}\) _converges in_ \(\mathfrak{S}_{k}^{*}\)_-distribution to_ \(\mathbf{m}\) _whenever_ \[\mathcal{E}_{N}\Big{[}M_{N,j_{1}}^{\varepsilon_{1}}U_{N,\eta_{1}}\cdots M_{N,j_{L}}^{\varepsilon_{L}}U_{N,\eta_{L}}\Big{]}\underset{N\to\infty}{\longrightarrow}\mathcal{E}\big{[}m_{j_{1}}^{\varepsilon_{1}}u_{\eta_{1}}\cdots m_{j_{L}}^{\varepsilon_{L}}u_{\eta_{L}}\big{]}\in\mathbb{C}\mathfrak{S}_{k}\] (2.5) _for any_ \(L\geq 1\)_, and any_ \(j_{\ell}\in J\)_,_ \(\varepsilon_{\ell}\in\{1,*\}\)_,_ \(\eta_{\ell}\in\mathfrak{S}_{k}\)_,_ \(\ell\in[L]\)_._
**Remark 2.8**.:
1. _Let_ \((\mathcal{A},\mathfrak{S}_{k},\mathcal{E})\) _be a_ \(\mathfrak{S}_{k}^{*}\)_-probability space. We can canonically define a linear form_ \(\phi\) _on_ \(\mathcal{A}\) _by setting_ \(\phi(a)\) _equal to the coefficient of the unit in_ \(\mathcal{E}(a)\)_, for all_ \(a\in\mathcal{A}\)_. This induces a_ \({}^{*}\)_-probability space_ \((\mathcal{A},\phi)\) _such that_ \(\mathcal{E}=\sum_{\eta\in\mathfrak{S}_{k}}\phi(\,\cdot\,u_{\eta}^{*})u_{\eta}\) _and_ \(\phi(\mathcal{E})=\phi\)_. Moreover, if_ \(\mathbf{M}_{N}\) _converges in_ \(\mathfrak{S}_{k}^{*}\)_-distribution to_ \(\mathbf{m}\) _in a space_ \((\mathcal{A},\mathbb{C}\mathfrak{S}_{k},\mathcal{E})\) _then_ \(\mathbf{M}_{N}\) _also converges in_ \({}^{*}\)_-distribution to_ \(\mathbf{m}\) _in_ \((\mathcal{A},\phi)\)_._
2. _Note that we have_ \(\mathcal{E}(a)^{*}=\mathcal{E}(a^{*})\) _and_ \(\phi(a)^{*}=\phi(a^{*})\) _for all_ \(a\in\mathcal{A}\)_. Indeed, let us set as usual_ \(\Re(a)=\frac{a+a^{*}}{2}\) _and_ \(\Im(a)=\frac{a-a^{*}}{2i}\)_, so that
\(a=\Re(a)+i\Im(a)\). The linearity of \(\mathcal{E}\) implies that \(\mathcal{E}\big{(}\Re(a)\big{)}=\Re\big{(}\mathcal{E}(a)\big{)}\) and \(\mathcal{E}\big{(}\Im(a)\big{)}=\Im\big{(}\mathcal{E}(a)\big{)}\). Therefore, the anti-linearity of the adjoint gives as expected \(\mathcal{E}(a)^{*}=\Re\big{(}\mathcal{E}(a)\big{)}-i\,\Im\big{(}\mathcal{E}(a)\big{)}=\mathcal{E}(a^{*}).\) The same proof shows the similar property for \(\phi\)._
The random matrices considered in this article are proved to converge to so-called \(\mathfrak{S}_{k}\)-circular variables. To define this notion, we introduce a sequence of multi-linear maps called the operator-valued free cumulants.
In the definition below, we denote by \(\mathrm{NC}(n)\) the set of non-crossing partitions of the interval \([n]=\{1,\ldots,n\}\), we set \(\mathrm{NC}=\sqcup_{n\geq 1}\mathrm{NC}(n)\), and we describe a way to extend a family of linear maps into a collection of maps labeled by non-crossing partitions.
**Definition-Proposition 2.9**.: _Given any sequence \((\mathcal{M}_{n})_{n\geq 1}\) of linear maps \(\mathcal{M}_{n}:\mathcal{A}^{n}\to\mathbb{C}\mathfrak{S}_{k}\), we define canonically the collection \((\mathcal{M}_{\xi})_{\xi\in\mathrm{NC}}\) as follows. Let \(\xi\) be a non-crossing partition of \([n]\). There always exists an interval block \(B=\{i,i+1,\ldots,i+n^{\prime}-1\}\) of \(\xi\), for some \(i\in[n]\) and \(n^{\prime}\in[n-i+1]\). Assume that \(B\) is the interval block of smallest indices, namely \(i=\min\bigl{\{}j\in[n]\,|\,\exists\,m^{\prime}\in[n]\ \mathrm{s.t.}\,\{j,j+1,\ldots,j+m^{\prime}-1\}\in\xi\bigr{\}}\). Then for all \(a_{1},\ldots,a_{n}\) in \(\mathcal{A}\), we set, inductively on \(n\),_
\[\mathcal{M}_{\xi}(a_{1},\ldots,a_{n})\] \[= \mathcal{M}_{\xi\setminus B}\big{(}a_{1},\ldots,a_{i-1}\mathcal{M }_{n^{\prime}}(a_{i},a_{i+1},\ldots,a_{i+n^{\prime}-1}),a_{i+n^{\prime}}, \ldots,a_{n}\big{)},\]
_where \(\xi\setminus B\in\mathrm{NC}(n-n^{\prime})\) is obtained by removing the block \(B\) and shifting the indices greater than \(i+n^{\prime}-1\), with the convention \(\mathcal{M}_{\{\emptyset\}}=\mathrm{id}_{\mathbb{C}\mathfrak{S}_{k}}\). This relation entirely characterizes the collection \((\mathcal{M}_{\xi})_{\xi\in\mathrm{NC}}\) in terms of \((\mathcal{M}_{n})_{n\geq 1}\)._
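To illustrate the recursion, take \(n=4\) and \(\xi=\{\{1,4\},\{2,3\}\}\in\mathrm{NC}(4)\): the interval block of smallest indices is \(B=\{2,3\}\), so

\[\mathcal{M}_{\xi}(a_{1},a_{2},a_{3},a_{4})=\mathcal{M}_{2}\big{(}a_{1}\mathcal{M}_{2}(a_{2},a_{3}),a_{4}\big{)},\]

where the inner block is evaluated first and its value, an element of \(\mathbb{C}\mathfrak{S}_{k}\), is multiplied into the neighbouring argument.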
**Remark 2.10**.: _For \(k=1\), one has \(\mathbb{C}\mathfrak{S}_{1}=\mathbb{C}\) and the functions \(\mathcal{M}_{\xi}\) satisfy a trivial factorization,_
\[\mathcal{M}_{\xi}(a_{1},\ldots,a_{n})=\prod_{B=\{i_{1}<\cdots<i_{n^{\prime}}\} \in\xi}\mathcal{M}_{n^{\prime}}(a_{i_{1}},\ldots,a_{i_{n^{\prime}}}). \tag{2.6}\]
_Formula (2.6) is not valid for \(k\geq 2\) if the maps \(\mathcal{M}_{\xi}\) are not \(\mathbb{C}\mathfrak{S}_{k}\)-linear, e.g. \(\mathcal{M}_{2}(ab,a^{\prime})\neq\mathcal{M}_{2}(a,a^{\prime})b\) for some \(a,a^{\prime}\in\mathcal{A}\), \(b\in\mathbb{C}\mathfrak{S}_{k}\). The general definition of \(\mathcal{M}_{\xi}\) depends on the nesting structure of the blocks of \(\xi\)._
**Definition-Proposition 2.11**.: _The \(\mathfrak{S}_{k}\)-free cumulants on \((\mathcal{A},\mathbb{C}\mathfrak{S}_{k},\mathcal{E})\) are the unique collection \((\mathcal{K}_{n})_{n\geq 1}\) of linear maps \(\mathcal{K}_{n}:\mathcal{A}^{n}\to\mathbb{C}\mathfrak{S}_{k}\) such that, for all \(n\geq 1\) and \(a_{1},\ldots,a_{n}\) in \(\mathcal{A}\),_
\[\mathcal{E}(a_{1}\cdots a_{n})=\sum_{\xi\in\mathrm{NC}(n)}\mathcal{K}_{\xi}(a_ {1},\ldots,a_{n}).\]
_Each \(\mathcal{K}_{\xi}\) is a \(\mathbb{C}\mathfrak{S}_{k}\)-module map, that is_
\[\mathcal{K}_{\xi}(a_{1},\ldots,a_{i-1},a_{i}b,a_{i+1},\ldots,a_{n}) = \mathcal{K}_{\xi}(a_{1},\ldots,a_{i-1},a_{i},ba_{i+1},\ldots,a_{n})\] \[\mathcal{K}_{\xi}(ba_{1},a_{2},a_{3},\ldots,a_{n-1},a_{n}b^{ \prime}) = b\,\mathcal{K}_{\xi}(a_{1},\ldots,a_{n})b^{\prime},\]
_for all \(a_{1},\ldots,a_{n}\in\mathcal{A}\) and all \(b,b^{\prime}\in\mathbb{C}\mathfrak{S}_{k}\)._
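For example, for \(n=2\) the moment-cumulant relation reads, using \(\mathrm{NC}(2)=\big{\{}\{\{1,2\}\},\{\{1\},\{2\}\}\big{\}}\) and the module property,

\[\mathcal{E}(a_{1}a_{2})=\mathcal{K}_{2}(a_{1},a_{2})+\mathcal{K}_{1}(a_{1})\mathcal{K}_{1}(a_{2}),\]

which, since \(\mathcal{K}_{1}=\mathcal{E}\), rearranges into Formula (2.7) below (with \(b=1\)).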
We can now set the central definitions used to describe our matrices.
**Definition 2.12**.: _Let \((\mathcal{A},\mathbb{C}\mathfrak{S}_{k},\mathcal{E})\) be a \(\mathfrak{S}_{k}^{*}\)-probability space._
1. _A collection_ \(\mathbf{m}=(m_{j})_{j\in J}\) _of elements in_ \(\mathcal{A}\) _is_ \(\underline{\mathfrak{S}}_{k}\)_-circular whenever the following_ \(\mathfrak{S}_{k}\)_-cumulants of order greater than two vanish:_ \[\mathcal{K}_{L}(m_{j_{1}}^{\varepsilon_{1}}b_{1},m_{j_{2}}^{\varepsilon_{2}}b_{2},\cdots,m_{j_{L}}^{\varepsilon_{L}}b_{L})=0,\] _for all_ \(L\geq 3\) _and all_ \(j_{\ell}\in J,\varepsilon_{\ell}\in\{1,*\},b_{\ell}\in\mathbb{C}\mathfrak{S}_{k}\) _with_ \(\ell\in[L]\)_._
2. _Let_ \(A_{1},\ldots,A_{n}\) _be ensembles of elements of_ \(\mathcal{A}\)_. The_ \(A_{i}\)_'s are free over_ \(\underline{\mathfrak{S}}_{k}\) _(or_ \(\mathfrak{S}_{k}\)_-free) if and only if the mixed_ \(\mathfrak{S}_{k}\)_-cumulants vanish:_ \[\mathcal{K}_{L}(m_{1}^{\varepsilon_{1}}b_{1},m_{2}^{\varepsilon_{2}}b_{2},\ldots,m_{L}^{\varepsilon_{L}}b_{L})=0,\] _for all_ \(\varepsilon_{\ell}\in\{1,*\},b_{\ell}\in\mathbb{C}\mathfrak{S}_{k},\ell\in[L]\)_, and for all_ \(m_{1},\ldots,m_{L}\) _in the union_ \(A_{1}\cup\cdots\cup A_{n}\) _but not all in a single set_ \(A_{i}\) _(i.e._ \(\exists\ell\neq\ell^{\prime}\) _such that_ \(m_{\ell}\in A_{i},m_{\ell^{\prime}}\in A_{i^{\prime}}\) _and_ \(i\neq i^{\prime}\)_)._
To conclude this section, we state two classical lemmas useful to comment on our main result. First, let us make explicit how the law of a \(\mathfrak{S}_{k}\)-circular collection is determined by its first two \(\mathfrak{S}_{k}\)-free cumulants. With the notations of Definition 2.12, the first cumulant coincides with the conditional expectation, namely \(\mathcal{K}_{1}(m_{j})=\mathcal{E}(m_{j}),\forall j\in J\). We say that the collection is centered when \(\mathcal{K}_{1}(m_{j})=0\) for all \(j\in J\).
Moreover, since \(\mathcal{K}_{2}\) is a \(\mathbb{C}\mathfrak{S}_{k}\)-bimodule bilinear map, the data of the second \(\mathfrak{S}_{k}\)-free cumulants can be summed up to the data of the \(\underline{\mathfrak{S}}_{k}\)-covariances
\[\mathcal{K}_{2}(m_{j}^{\varepsilon}b,m_{j^{\prime}}^{\varepsilon^{\prime}})= \mathcal{E}(m_{j}^{\varepsilon}\,b\,m_{j^{\prime}}^{\varepsilon^{\prime}})- \mathcal{E}(m_{j}^{\varepsilon})\,b\,\mathcal{E}(m_{j^{\prime}}^{\varepsilon^ {\prime}}), \tag{2.7}\]
for all \(j,j^{\prime}\in J,\varepsilon,\varepsilon^{\prime}\in\{1,*\},b\in\mathbb{C} \mathfrak{S}_{k}\). We emphasize that \(\mathcal{K}_{2}(m_{j}^{*}b,m_{j^{\prime}}^{*})=\mathcal{K}_{2}(m_{j^{\prime}} b^{*},m_{j})^{*}\), which is a direct consequence of Formula (2.7) and of the property \(\mathcal{E}(a)^{*}=\mathcal{E}(a^{*})\) (proved in the second item of Remark 2.8). Therefore, we can assume \((\varepsilon,\varepsilon^{\prime})\neq(*,*)\) without loss of information on the \(\mathfrak{S}_{k}\)-covariance structure in Equation (2.7). The following lemma is a direct consequence of Definition 2.12.
**Lemma 2.13**.: _Let \(\mathbf{m}\) be a centered \(\mathbb{C}\mathfrak{S}_{k}\)-circular system and let \(A_{1},\ldots,A_{L}\) be ensembles of variables in \(\mathbf{m}\). The ensembles \(A_{\ell}\) are free over \(\mathfrak{S}_{k}\) if and only if the variables of different ensembles have pairwise vanishing \(\mathfrak{S}_{k}\)-covariance, namely \(\mathcal{E}\big{(}m_{1}^{\varepsilon_{1}}\,b\,m_{2}^{\varepsilon_{2}}\big{)}=0,\) for all \(\varepsilon_{1},\varepsilon_{2}\in\{1,*\}\), all \(b\in\mathbb{C}\mathfrak{S}_{k}\), and all \(m_{1}\in A_{\ell},m_{2}\in A_{\ell^{\prime}}\) such that \(\ell\neq\ell^{\prime}\)._
To conclude this definition section, we now compare \(\mathfrak{S}_{k}\)-freeness with the usual notion of freeness. For \(k=1\), we have \(\mathbb{C}\mathfrak{S}_{1}=\mathbb{C}\). Hence the \(\mathbb{C}\mathfrak{S}_{1}\)-free cumulants coincide with the ordinary free cumulants. In particular, a collection of \(\mathfrak{S}_{1}\)-circular variables is an ordinary circular system and \(\mathfrak{S}_{1}\)-free ensembles are free in the usual sense. Given a \(\mathfrak{S}_{k}\)-circular collection \(\mathbf{m}\), the following lemma gives a criterion to prove that \(\mathbf{m}\) is an ordinary circular collection in \((\mathcal{A},\phi)\).
**Lemma 2.14**.: _Let \((\mathcal{A},\mathbb{C}\mathfrak{S}_{k},\mathcal{E},\phi)\) be as above and let \(\mathbf{m}\) be a \(\mathfrak{S}_{k}\)-circular collection. Then \(\mathbf{m}\) is circular in \((\mathcal{A},\phi)\) if \(\mathcal{E}(m_{1}^{\varepsilon_{1}}m_{2}^{\varepsilon_{2}})\) is proportional to the unit \(u_{\mathrm{id}}\) for all \(m_{1},m_{2}\) in \(\mathbf{m}\) and all \(\varepsilon_{1},\varepsilon_{2}\) in \(\{1,*\}\)._
Proof.: Up to a centering, we can assume the collection is centered with respect to \(\mathcal{E}\) (the lemma is proved for centered variables, one easily deduces the result in the non-centered case). For all \(n\geq 1\), we denote by \(\mathrm{NC}_{2}(n)\) the set of non-crossing pair partitions of \([n]\). Since the conditional expectation is proportional to the unit, by definition of \(\phi\) we have \(\mathcal{E}(m_{1}^{\varepsilon_{1}}m_{2}^{\varepsilon_{2}})=\phi(m_{1}^{\varepsilon_{1}}m_{2}^{\varepsilon_{2}})u_{\mathrm{id}}\) for all \(m_{1},m_{2},\varepsilon_{1},\varepsilon_{2}\). Hence the trivial factorization (2.6) is valid for the \(\mathfrak{S}_{k}\)-cumulant functions \(\mathcal{K}_{\xi}\) on the variables and their adjoints. Hence we can write, for all \(m_{1},\ldots,m_{n}\) in \(\mathbf{m}\) and all \(\varepsilon_{1},\ldots,\varepsilon_{n}\) in \(\{1,*\}\)
\[\phi(m_{1}^{\varepsilon_{1}}\cdots m_{n}^{\varepsilon_{n}}) = \phi\big{(}\mathcal{E}(m_{1}^{\varepsilon_{1}}\cdots m_{n}^{\varepsilon_{n}})\big{)}\] \[= \phi\Big{(}\sum_{\xi\in\mathrm{NC}_{2}(n)}\mathcal{K}_{\xi}(m_{1}^{\varepsilon_{1}},\cdots,m_{n}^{\varepsilon_{n}})\Big{)}\] \[= \phi\Big{(}\sum_{\xi\in\mathrm{NC}_{2}(n)}\prod_{\{i,j\}\in\xi}\phi(m_{i}^{\varepsilon_{i}}m_{j}^{\varepsilon_{j}})u_{\mathrm{id}}\Big{)}\] \[= \sum_{\xi\in\mathrm{NC}_{2}(n)}\Big{(}\prod_{\{i,j\}\in\xi}\phi(m_{i}^{\varepsilon_{i}}m_{j}^{\varepsilon_{j}})\Big{)}.\]
By uniqueness of the cumulants (Definition-Proposition 2.11 with \(k=1\)), the free cumulants of \(\mathbf{m}\) (with respect to \(\phi\)) of order greater than two vanish, so the collection \(\mathbf{m}\) is circular (with respect to \(\phi\)).
### Main result and applications
In this section we first state our main result (Theorem 2.15), together with some of its consequences exhibiting families of \(\mathfrak{S}_{k}\)-free and scalar-free flattenings. We then use those results to prove Corollary 1.5, which is one of the main applications (and motivations) of our work.
#### 2.2.1 Statement and corollaries
We can now state our main result on the flattenings of a random tensor.
**Theorem 2.15**.: _Let \(M_{N}\) be a random tensor satisfying Hypothesis 1.2 with parameter \((c,c^{\prime})\). Then the collection of flattenings of \(M_{N}\) converges in \(\mathfrak{S}_{k}^{*}\)-distribution to a centered \(\mathfrak{S}_{k}\)-circular family \(\mathbf{m}=(m_{\sigma})_{\sigma\in\mathfrak{S}_{2k}}\), in some space \((\mathcal{A},\mathbb{C}\mathfrak{S}_{k},\mathcal{E})\). For all \(\eta,\eta^{\prime}\in\mathfrak{S}_{k}\), we denote \(\eta\sqcup\eta^{\prime}\in\mathfrak{S}_{2k}\) the permutation obtained by gluing the actions of \(\eta\) and \(\eta^{\prime}\) on the first \(k\) and the last \(k\) elements of \([2k]\); it is formally defined by_
\[(\eta\sqcup\eta^{\prime})(i):=\begin{cases}\eta(i)&\text{if }i\in[k]\\ \eta^{\prime}(i-k)+k&\text{otherwise.}\end{cases} \tag{2.8}\]
_We also denote by \(\tau\in\mathfrak{S}_{2k}\) the permutation swapping the first and the last \(k\) elements of \([2k]\); it is given by_
\[\tau(i):=\begin{cases}i+k&\text{if }i\in[k]\\ i-k&\text{if }i\in[2k]\setminus[k].\end{cases}\]
_The \(\mathfrak{S}_{k}\)-covariance of \(\mathbf{m}\) is given by the following equalities: \(\forall\sigma,\sigma^{\prime}\in\mathfrak{S}_{2k},\forall\eta\in\mathfrak{S}_ {k}\),_
\[\mathcal{E}(m_{\sigma}u_{\eta}m_{\sigma^{\prime}}^{*}) =\left\{\begin{array}{cc}cu_{\eta^{\prime}}&\text{ if }\exists\eta^{ \prime}\in\mathfrak{S}_{k}\text{ s.t. }\sigma=(\eta^{\prime}\sqcup\eta)\sigma^{\prime}\\ 0&\text{ otherwise,}\end{array}\right. \tag{2.9}\] \[\mathcal{E}(m_{\sigma}^{*}u_{\eta}m_{\sigma^{\prime}}) =\left\{\begin{array}{cc}cu_{\eta^{\prime}}&\text{ if }\exists\eta^{ \prime}\in\mathfrak{S}_{k}\text{ s.t. }\sigma=(\eta\sqcup\eta^{\prime})\sigma^{\prime}\\ 0&\text{ otherwise,}\end{array}\right.\] (2.10) \[\mathcal{E}(m_{\sigma}u_{\eta}m_{\sigma^{\prime}}) =\left\{\begin{array}{cc}c^{\prime}u_{\eta^{\prime}}&\text{ if }\exists\eta^{ \prime}\in\mathfrak{S}_{k}\text{ s.t. }\sigma=\tau(\eta\sqcup\eta^{\prime})\sigma^{\prime}\\ 0&\text{ otherwise.}\end{array}\right. \tag{2.11}\]
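For example, for \(k=2\): \(\tau=(1\,3)(2\,4)\), and if \(\eta\) denotes the transposition of \(\mathfrak{S}_{2}\), then \(\eta\sqcup\mathrm{id}=(1\,2)\) and \(\mathrm{id}\sqcup\eta=(3\,4)\) as elements of \(\mathfrak{S}_{4}\).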
Let us explain where the expression of the \(\mathfrak{S}_{k}\)-covariance comes from. First, we show that the limiting ordinary (i.e. scalar) covariance of the collection \(\mathbf{m}\) in the above theorem is given by the simple relations: \(\forall\sigma,\sigma^{\prime}\in\mathfrak{S}_{2k}\),
\[\Phi(m_{\sigma}m_{\sigma^{\prime}}^{*})=\left\{\begin{array}{cc}c&\text{ if }\sigma=\sigma^{\prime}\\ 0&\text{ otherwise,}\end{array}\right.\quad\Phi(m_{\sigma}m_{\sigma^{\prime}})= \left\{\begin{array}{cc}c^{\prime}&\text{ if }\sigma=\tau\sigma^{\prime}\\ 0&\text{ otherwise.}\end{array}\right. \tag{2.12}\]
This implies formulas (2.9) and (2.11) thanks to the following natural action of \(\mathbb{C}\mathfrak{S}_{k}\) on the flattenings.
**Lemma 2.16**.: _For any tensor \(M_{N}\in(\mathbb{C}^{N})^{\otimes 2k}\) and any permutations \(\sigma\in\mathfrak{S}_{2k}\), \(\eta,\eta^{\prime}\in\mathfrak{S}_{k}\), we have_
\(U_{N,\eta}M_{N,\sigma}U_{N,\eta^{\prime}}^{*}=M_{N,(\eta\sqcup\eta^{\prime}) \sigma}.\) _Hence if the collection of flattenings \(\mathbf{M}_{N}\) of a tensor converges in \(\mathfrak{S}_{k}^{*}\)-distribution to some collection \(\mathbf{m}=(m_{\sigma})_{\sigma\in\mathfrak{S}_{2k}}\), we can always assume \(u_{\eta}m_{\sigma}u_{\eta^{\prime}}^{*}=m_{(\eta\sqcup\eta^{\prime})\sigma}\)._
Lemma 2.16 is proved in Section 3. Section 4 contains the details of the proof of the expressions of the \(\mathfrak{S}_{k}\)-covariance. The main difficulty is to prove the \(\mathfrak{S}_{k}\)-circularity, which is the purpose of Section 5, where we use
the traffic method in order to prove that the \(\mathcal{E}_{N}\)-moments of the sequence of flattenings asymptotically satisfy a non-commutative Wick formula.
In the rest of this subsection, we interpret the \(\mathfrak{S}_{k}\)-covariance thanks to group theory notions. Recall that, given \(K\) a subgroup of a group \(G\) and an element \(g\) of \(G\), the set \(Kg:=\{kg|k\in K\}\) is called the right \(K\)-coset of \(g\) in \(G\). When the context is clear, we simply call it a \(K\)-coset. The set of \(K\)-cosets is denoted \(K\backslash G\). Firstly, the permutations of \(\mathfrak{S}_{2k}\) of the form \(\eta\sqcup\eta^{\prime}\) with \(\eta,\eta^{\prime}\in\mathfrak{S}_{k}\), form a subgroup, isomorphic to \(\mathfrak{S}_{k}^{2}\), that we denote \(\mathfrak{S}_{k,k}\). It is often referred to as a Young subgroup. Its cosets are well-known and studied concepts of particular importance in the context of the representations of the permutation and general linear groups, see for instance [10, 11].
We now consider the permutation \(\tau\). We denote by \(\langle\tau\rangle\) the subgroup of \(\mathfrak{S}_{2k}\) generated by \(\tau\), and by \(\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\) the subgroup of \(\mathfrak{S}_{2k}\) generated by \(\mathfrak{S}_{k,k}\) and \(\tau\). This permutation has a natural action on matrices since one sees easily that \(M_{N,\tau\sigma}=M_{N,\sigma}^{\top}\) for all \(\sigma\in\mathfrak{S}_{2k}\). The structures of the subgroup \(\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\) and the associated cosets are quite elementary: one sees that \(\tau(\eta\sqcup\eta^{\prime})=(\eta^{\prime}\sqcup\eta)\tau\) for all \(\eta,\eta^{\prime}\in\mathfrak{S}_{k}\), and for all \(s\in\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\), there is a unique \((s_{1},s_{2})\in\mathfrak{S}_{k,k}\times\langle\tau\rangle\) such that \(s=s_{1}s_{2}\). Moreover, if \(\sigma_{\mathcal{O}}\) denotes a representative of a \(\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\)-coset \(\mathcal{O}\), then \(\mathcal{O}\) is the union of the \(\mathfrak{S}_{k,k}\)-coset of \(\sigma_{\mathcal{O}}\) and the \(\mathfrak{S}_{k,k}\)-coset of \(\tau\sigma_{\mathcal{O}}\), and both of them are isomorphic to the \(\mathfrak{S}_{k,k}\)-coset of the identity. The group \(\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\) is actually a semi-direct product of \(\mathfrak{S}_{k,k}\) by \(\langle\tau\rangle\), see [21].
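These coset counts are easy to check by brute force for \(k=2\); a tiny enumeration sketch (ours): \(\mathfrak{S}_{2,2}\) has \((2k)!/(k!)^{2}=6\) right cosets in \(\mathfrak{S}_{4}\), and \(\mathfrak{S}_{2,2}\rtimes\langle\tau\rangle\) has \(3\).

```python
from itertools import permutations

k = 2
S2k = list(permutations(range(2 * k)))

def mult(a, b):  # composition of 0-indexed permutations: (a b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(2 * k))

def glue(e1, e2):  # eta ⊔ eta' as in (2.8), 0-indexed
    return tuple(list(e1) + [x + k for x in e2])

tau = tuple(list(range(k, 2 * k)) + list(range(k)))  # swaps the two halves
Skk = {glue(e1, e2) for e1 in permutations(range(k))
       for e2 in permutations(range(k))}
Skk_tau = Skk | {mult(tau, s) for s in Skk}  # subgroup generated by Skk, tau

for K in (Skk, Skk_tau):
    cosets = {frozenset(mult(h, g) for h in K) for g in S2k}
    print(len(K), len(cosets))  # prints "4 6" then "8 3"
```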
The next corollary describes the classes of asymptotic \(\mathfrak{S}_{k}\)-free matrices from a collection of flattenings. In the statement below, we say that ensembles of matrices are asymptotically \(\mathfrak{S}_{k}\)-free if they converge in \(\mathfrak{S}_{k}^{*}\)-distribution toward \(\mathfrak{S}_{k}\)-free ensembles.
**Corollary 2.17**.: _Let \(M_{N}\) be a random tensor satisfying Hypothesis 1.2 with parameter \((c,c^{\prime})\)._
1. _If_ \(c^{\prime}=0\)_, then the flattenings of_ \(M_{N}\) _indexed by different_ \(\mathfrak{S}_{k,k}\)_-cosets are asymptotically_ \(\mathfrak{S}_{k}\)_-free._
2. _If_ \(c^{\prime}\neq 0\)_, then the flattenings of_ \(M_{N}\) _indexed by different_ \(\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\)_-cosets are asymptotically_ \(\mathfrak{S}_{k}\)_-free._
_In each case, different flattenings belonging to the same coset are not \(\mathfrak{S}_{k}\)-free._
Proof.: Let \(\mathbf{m}\) denote the limit of \(\mathbf{M}_{N}\) in \(\mathfrak{S}_{k}\)-distribution. By Theorem 2.15, \(\mathbf{m}\) is a \(\mathfrak{S}_{k}\)-circular collection and \(\mathcal{K}_{2}(m_{\sigma_{1}}^{\varepsilon_{1}}u_{\eta},m_{\sigma_{2}}^{\varepsilon_{2}})=0\) for all \(\eta,\varepsilon_{1},\varepsilon_{2}\) if the permutations \(\sigma_{1}\) and \(\sigma_{2}\) are in different cosets (\(\mathfrak{S}_{k,k}\)-cosets if \(c^{\prime}=0\) and \(\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\)-cosets if \(c^{\prime}\neq 0\)). By Lemma 2.13, the vanishing of generalized covariances implies \(\mathfrak{S}_{k}\)-freeness. Moreover, if \(\sigma\) and \(\sigma^{\prime}\) belong to the same coset, then by Theorem 2.15 there exists \(\eta\in\mathfrak{S}_{k}\) such that \(\mathcal{E}(m_{\sigma}u_{\eta}m_{\sigma^{\prime}})\) or
\(\mathcal{E}(m_{\sigma}u_{\eta}m^{\star}_{\sigma^{\prime}})\) is non-zero, showing that the corresponding flattenings are not asymptotically \(\mathfrak{S}_{k}\)-free.
In the following corollary and in next subsection, we show that some families are not only \(\mathfrak{S}_{k}\)-free, but actually free in the usual (scalar) sense. This is a stronger statement, which allows one to use the full machinery of free probability (over the scalars) to analyze the limit distributions of flattenings (as we did in Corollary 1.5).
**Corollary 2.18**.: _Let \(M_{N}\) be a random tensor satisfying Hypothesis 1.2 with parameter \((c,c^{\prime})\). Pick one element in each \(\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\)-coset to form a subcollection \(\widetilde{\mathbf{M}}_{N}\) of \((2k)!(k!)^{-2}/2\) flattenings of \(M_{N}\). Then \(\widetilde{\mathbf{M}}_{N}\) converges to a free circular system in the ordinary sense. If moreover \(c^{\prime}=0\), then \(\widetilde{\mathbf{M}}_{N}\cup\widetilde{\mathbf{M}}_{N}^{\top}\) converges to a free circular system._
Proof.: Theorem 2.15 implies that \(\widetilde{\mathbf{M}}_{N}\) converges to a \(\mathfrak{S}_{k}\)-circular collection \(\widetilde{\mathbf{m}}\) that satisfies \(\mathcal{E}(m^{\varepsilon}{m^{\prime}}^{\varepsilon^{\prime}})=\phi(m^{\varepsilon}{m^{\prime}}^{\varepsilon^{\prime}})u_{\mathrm{id}}\) for all \(m,m^{\prime}\) in \(\widetilde{\mathbf{m}}\) and all \(\varepsilon,\varepsilon^{\prime}\in\{1,*\}\). By Lemma 2.14, the collection is an ordinary circular system. The reasoning is the same, under the assumption that \(c^{\prime}=0\), to prove the asymptotic circularity along with the transposes.
#### 2.2.2 Construction of \(\mathfrak{S}_{k}\)-free and free circular systems
Let \(M_{N}\) be a random tensor satisfying Hypothesis 1.2. The flattenings labeled by elements of the same \(\mathfrak{S}_{k,k}\)-coset are not asymptotically \(\mathfrak{S}_{k}\)-free and they do not converge to a circular system in the usual sense. But we can construct linear combinations of these matrices that do have these properties.
For that purpose, we recall that a representation \((\rho,V)\) of a group \(G\) is the data of a finite dimensional vector space \(V\) and of a group morphism \(\rho\) from \(G\) to the space of endomorphisms of \(V\). The character associated to a representation \(\rho\) is the map
\[\chi^{(\rho)}:\sigma\in G\mapsto\operatorname{Tr}\rho(\sigma).\]
A representation is irreducible if there is no proper subspace \(\{0\}\subsetneq W\subsetneq V\) stable by \(\rho(\sigma)\) for all \(\sigma\) in \(G\). The set Irrep \(\mathfrak{S}_{k}\) of irreducible representations of the symmetric group \(\mathfrak{S}_{k}\) is well-known [10]. Irreducible representations are labelled by the so-called Young diagrams and the associated vector spaces are known as the Specht modules.
We can now state a first corollary where we present asymptotically \(\mathfrak{S}_{k}\)-free collections of matrices built from flattenings in the same coset (which are therefore not \(\mathfrak{S}_{k}\)-free).
**Corollary 2.19**.: _Let \(M_{N}\) be a random tensor satisfying Hypothesis 1.2 with parameter \((c,c^{\prime})\) such that \(c^{\prime}=0\) and let \(\sigma\) be a fixed element of \(\mathfrak{S}_{2k}\). For
_any irreducible representation \(\rho\) of \(\mathfrak{S}_{k}\), let us consider_
\[S_{N,\rho}=\sum_{\eta_{1},\eta_{2}\in\mathfrak{S}_{k}}b_{\rho}(\eta_{1})\chi^{( \rho)}(\eta_{2})M_{N,(\eta_{1}\sqcup\eta_{2})\sigma},\]
_where \(b_{\rho}:\mathfrak{S}_{k}\to\mathbb{C}\) is arbitrary. Then the family \(\{S_{N,\rho}\}_{\rho\in\operatorname{Irrep}\mathfrak{S}_{k}}\) converges to a \(\mathfrak{S}_{k}\)-free \(\mathfrak{S}_{k}\)-circular system. The same holds for the family of all matrices indexed by pairs \((\rho,\rho^{\prime})\) of irreducible representations_
\[S_{N,\rho,\rho^{\prime}}=\sum_{\eta_{1},\eta_{2}\in\mathfrak{S}_{k}}\chi^{( \rho)}(\eta_{1})\chi^{(\rho^{\prime})}(\eta_{2})M_{N,(\eta_{1}\sqcup\eta_{2}) \sigma}.\]
_If moreover \(b_{\rho}(\eta)=0\) for all \(\eta\neq\operatorname{id}\) then the family \(\{S_{N,\rho}\}_{\rho\in\operatorname{Irrep}\mathfrak{S}_{k}}\) converges in \({}^{*}\)-distribution to a usual free circular system._
**Remark 2.20**.: _The number of irreducible representations of \(\mathfrak{S}_{k}\) is the number \(\operatorname{part}(k)\) of partitions of the integer \(k\), as irreducible representations are indexed by the conjugacy classes of \(\mathfrak{S}_{k}\) (see [13, Proposition 2.30]). Moreover, the number of \(\mathfrak{S}_{k,k}\)-cosets of \(\mathfrak{S}_{2k}\) is \(\frac{(2k)!}{k!^{2}}\). One can pick for each representative \(\sigma\) of a \(\mathfrak{S}_{k,k}\)-coset \(\mathcal{O}\) a family \(S_{N,\rho,\rho^{\prime}}\). Then the asymptotic \(\mathfrak{S}_{k}\)-freeness of flattenings labeled in different cosets and the above corollary prove that there are at least \(\operatorname{part}(k)^{2}\times\frac{(2k)!}{k!^{2}}\) matrices coupled with \(M_{N}\), in the algebra generated by its flattenings, that converge to a \(\mathbb{C}\mathfrak{S}_{k}\)-free \(\mathbb{C}\mathfrak{S}_{k}\)-circular system. The same result for the matrices \(S_{N,\rho}\) proves that there are at least \(\operatorname{part}(k)\times\frac{(2k)!}{k!^{2}}\) matrices coupled with \(M_{N}\), in the algebra generated by its flattenings, that converge to a free circular system._
To prove the corollary, let us first state a lemma.
**Lemma 2.21**.: _Let \(M_{N}\) be a random tensor satisfying Hypothesis 1.2 with parameter \((c,c^{\prime})\) such that \(c^{\prime}=0\), and let \(\sigma\) be an element of \(\mathfrak{S}_{2k}\). Consider two matrices of the form_
\[S_{N} = \sum_{\eta_{1},\eta_{2}\in\mathfrak{S}_{k}}a(\eta_{1},\eta_{2})M _{N,(\eta_{1}\sqcup\eta_{2})\sigma},\] \[S^{\prime}_{N} = \sum_{\eta_{1},\eta_{2}\in\mathfrak{S}_{k}}a^{\prime}(\eta_{1}, \eta_{2})M_{N,(\eta_{1}\sqcup\eta_{2})\sigma},\]
_where the \(a(\eta_{1},\eta_{2})\)'s and the \(a^{\prime}(\eta_{1},\eta_{2})\)'s are complex coefficients, and \(\sigma\) is a given element of \(\mathfrak{S}_{2k}\). Then the couple \((S_{N},S^{\prime}_{N})\) converges to a \(\mathfrak{S}_{k}\)-free \(\mathfrak{S}_{k}\)-circular system iff_
\[\sum_{\mu_{1},\mu_{2}\in\mathfrak{S}_{k}}a(\eta_{1}\mu_{1},\eta_{2}\mu_{2}) \overline{a^{\prime}(\mu_{1},\mu_{2})}=0,\ \ \ \ \forall\eta_{1},\eta_{2}\in\mathfrak{S}_{k}. \tag{2.13}\]
_If moreover for all \(\eta\neq\operatorname{id}\) in \(\mathfrak{S}_{k}\) we have_
\[\sum_{\mu_{1},\mu_{2}\in\mathfrak{S}_{k}}a(\eta\mu_{1},\mu_{2})\overline{a(\mu_ {1},\mu_{2})}=\sum_{\mu_{1},\mu_{2}\in\mathfrak{S}_{k}}a^{\prime}(\eta\mu_{1}, \mu_{2})\overline{a^{\prime}(\mu_{1},\mu_{2})}=0, \tag{2.14}\]
_then the couple \((S_{N},S^{\prime}_{N})\) converges to an ordinary circular system._
Proof.: The couple \((S_{N},S^{\prime}_{N})\) is asymptotically \(\mathfrak{S}_{k}\)-circular since its entries are \(\mathfrak{S}_{k}\)-linear combination of the flattenings of \(M_{N}\). Let \(\mathbf{m}=(m_{\sigma})_{\sigma\in\mathfrak{S}_{2k}}\) be the limit of \(\mathbf{M}_{N}=(M_{N,\sigma})_{\sigma\in\mathfrak{S}_{2k}}\), and denote the limits of \(S_{N}\) and \(S^{\prime}_{N}\) by
\[s:=\sum_{\eta_{1},\eta_{2}\in\mathfrak{S}_{k}}a(\eta_{1},\eta_{2})m_{(\eta_{1} \sqcup\eta_{2})\sigma},\quad s^{\prime}:=\sum_{\eta_{1},\eta_{2}\in\mathfrak{ S}_{k}}a^{\prime}(\eta_{1},\eta_{2})m_{(\eta_{1}\sqcup\eta_{2})\sigma}.\]
We shall characterize the \(a(\eta_{1},\eta_{2})\)'s and \(a^{\prime}(\eta_{1},\eta_{2})\)'s such that \(\mathcal{E}({su_{\eta}s^{\prime}}^{*})=0\). By (2.9) we have
\[\mathcal{E}({su_{\eta}s^{\prime}}^{*})\] \[= \sum_{\begin{subarray}{c}\eta_{1},\eta_{2}\in\mathfrak{S}_{k}\\ \eta^{\prime}_{1},\eta^{\prime}_{2}\in\mathfrak{S}_{k}\end{subarray}}a(\eta_{ 1},\eta_{2})\overline{a^{\prime}(\eta^{\prime}_{1},\eta^{\prime}_{2})} \mathcal{E}(m_{(\eta_{1}\sqcup\eta_{2})\sigma}u_{\eta}m^{*}_{(\eta^{\prime}_{1} \sqcup\eta^{\prime}_{2})\sigma})\] \[= \sum_{\begin{subarray}{c}\eta_{1},\eta^{\prime}_{1}\in\mathfrak{ S}_{k}\\ \eta^{\prime}_{2},\eta^{\prime}_{2}\in\mathfrak{S}_{k}\\ \eta^{\prime}\in\mathfrak{S}_{k}\end{subarray}}a(\eta_{1},\eta_{2})\overline{a ^{\prime}(\eta^{\prime}_{1},\eta^{\prime}_{2})}\delta_{(\eta_{1}\sqcup\eta_{2 }),(\eta^{\prime}\eta^{\prime}_{1}\sqcup\eta^{\prime}_{2})}cu_{\eta^{\prime}}\] \[= c\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}\Big{(}\sum_{\eta^{ \prime}_{1},\eta^{\prime}_{2}\in\mathfrak{S}_{k}}a(\eta^{\prime}\eta^{\prime}_ {1},\eta^{\prime}_{2})\overline{a^{\prime}(\eta^{\prime}_{1},\eta^{\prime}_{2 })}\Big{)}u_{\eta^{\prime}}.\]
Therefore, \(\mathcal{E}({su_{\eta}s^{\prime}}^{*})=0\) iff each coefficient in front of \(u_{\eta^{\prime}}\) in the above expression of \(\mathcal{E}({su_{\eta}s^{\prime}}^{*})\) is zero, which is the condition (2.13) of the statement. Moreover one sees with a similar computation that \(\mathcal{E}({su_{\eta}s^{\prime}})\) is always zero thanks to the condition \(c^{\prime}=0\), hence (2.13) characterizes the \(\mathfrak{S}_{k}\)-freeness of \(s\) and \(s^{\prime}\).
Moreover, to characterize when \(s\) and \(s^{\prime}\) converge to free circular variables, we use Lemma 2.14. Hence we shall prove that for all \(m_{1},m_{2}\in\{s,s^{\prime}\}\) and all \(\varepsilon_{1},\varepsilon_{2}\) in \(\{1,*\}\), the covariance \(\mathcal{E}(m_{1}^{\varepsilon_{1}}m_{2}^{\varepsilon_{2}})\) is scalar. Since it is zero if \(m_{1}\neq m_{2}\) (by the above), and since the expressions of \(s\) and \(s^{\prime}\) are analogous, it remains to prove that the covariance is scalar for \(m_{1}=m_{2}=s\). We have with the same computation as above
\[\mathcal{E}(ss^{*}) = c\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}\Big{(}\sum_{\eta^{\prime}_{1},\eta^{\prime}_{2}\in\mathfrak{S}_{k}}a(\eta^{\prime}\eta^{\prime}_{1},\eta^{\prime}_{2})\overline{a(\eta^{\prime}_{1},\eta^{\prime}_{2})}\Big{)}u_{\eta^{\prime}}.\]
Hence \(\mathcal{E}(ss^{*})\) is scalar iff for all \(\eta^{\prime}\neq\mathrm{id}\) the coefficients in front of \(u_{\eta^{\prime}}\) in the above expression are zero, which is condition (2.14). Moreover, \(\mathcal{E}(ss)\) is scalar, since the condition \(c^{\prime}=0\) and the same computation as above shows that it vanishes. We can hence use Lemma 2.14, proving the convergence of \(S_{N}\) and \(S^{\prime}_{N}\) toward free circular variables.
The next proposition recalls an important property of irreducible representations of the symmetric group.
**Proposition 2.22**.: _Let \(\{\chi^{\rho}\}_{\rho\in\operatorname{Irrep}(\mathfrak{S}_{k})}\) be characters of irreducible representations of the symmetric group \(\mathfrak{S}_{k}\), then assuming \(\rho,\rho^{\prime}\) are two different irreducible representations,_
\[\sum_{\mu\in\mathfrak{S}_{k}}\chi^{(\rho)}(\eta\mu)\bar{\chi}^{(\rho^{\prime})}( \mu)=0,\quad\forall\eta\in\mathfrak{S}_{k}. \tag{2.15}\]
Proof.: If \(\chi^{(\rho)},\chi^{(\rho^{\prime})}\) are characters of two irreducible representations of a finite group \(G\), then their convolution satisfies [11]
\[\chi^{(\rho)}\star\chi^{(\rho^{\prime})}=\delta_{\rho,\rho^{\prime}}\frac{|G|}{ \dim\rho}\chi^{(\rho)}, \tag{2.16}\]
where the convolution is defined as
\[(\chi^{(\rho)}\star\chi^{(\rho^{\prime})})(h)=\sum_{g\in G}\chi^{(\rho)}(hg^{- 1})\chi^{(\rho^{\prime})}(g). \tag{2.17}\]
Specifying to the symmetric group \(\mathfrak{S}_{k}\), using the fact that its characters are real valued class functions \((\chi^{(\rho)}(\mu)=\chi^{(\rho)}(\mu^{-1}),\,\chi^{(\rho)}(\mu)=\bar{\chi}^{ (\rho)}(\mu))\) to recognize that
\[\sum_{\mu\in\mathfrak{S}_{k}}\chi^{(\rho)}(\eta\mu)\bar{\chi}^{(\rho^{\prime}) }(\mu)=(\chi^{(\rho)}\star\chi^{(\rho^{\prime})})(\eta),\]
we can conclude.
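For \(\mathfrak{S}_{3}\), relation (2.15) can also be verified by brute force; here is a short sketch (ours), using the sign character and the character of the \(2\)-dimensional standard representation, \(\chi(\mu)=\#\{\text{fixed points of }\mu\}-1\):

```python
from itertools import permutations

S3 = list(permutations(range(3)))

def mult(a, b):  # composition of 0-indexed permutations
    return tuple(a[b[i]] for i in range(3))

def sign(p):  # parity via inversion count
    return (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))

def chi_std(p):  # character of the standard representation of S_3
    return sum(p[i] == i for i in range(3)) - 1

for eta in S3:  # relation (2.15) for rho = sign, rho' = standard
    assert sum(sign(mult(eta, mu)) * chi_std(mu) for mu in S3) == 0
```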
Proof of Corollary 2.19.: The collection \((S_{N,\rho})_{\rho\in\operatorname{Irrep}\mathfrak{S}_{k}}\) converges to a \(\mathfrak{S}_{k}\)-circular system since its entries are linear combinations of flattenings of \(M_{N}\). It is asymptotically \(\mathfrak{S}_{k}\)-free if each couple \((S_{N,\rho},S_{N,\rho^{\prime}})\) in this collection is asymptotically \(\mathfrak{S}_{k}\)-free. Condition (2.13) of Lemma 2.21 applied to such a couple reads
\[\sum_{\mu_{1}\in\mathfrak{S}_{k}}b_{\rho}(\eta_{1}\mu_{1})\overline{b_{\rho^{ \prime}}(\mu_{1})}\sum_{\mu_{2}\in\mathfrak{S}_{k}}\chi^{(\rho)}(\eta_{2}\mu_ {2})\overline{\chi^{(\rho^{\prime})}(\mu_{2})}=0,\,\forall\eta_{1},\eta_{2}\in \mathfrak{S}_{k}.\]
The above equality is satisfied thanks to Proposition 2.22. Hence the family is asymptotically \(\mathfrak{S}_{k}\)-free.
Moreover, Condition (2.14), reads
\[\sum_{\mu_{1}\in\mathfrak{S}_{k}}\delta_{\eta\mu_{1},\operatorname{id}}\delta_{\mu_{1},\operatorname{id}}\sum_{\mu_{2}\in\mathfrak{S}_{k}}\chi^{(\rho)}(\mu_{2})\overline{\chi^{(\rho)}(\mu_{2})}=0,\quad\forall\eta\neq\operatorname{id},\]
which is clearly true. Hence Lemma 2.21 implies that the family is asymptotically free.
The same reasoning yields the asymptotic \(\mathfrak{S}_{k}\)-freeness of the family \((S_{N,\rho,\rho^{\prime}})_{\rho,\rho^{\prime}\in\operatorname{Irrep}\mathfrak{S}_{k}}\).
**Remark 2.23**.: _After this section, and considering the results of Corollary 1.5, one would be inclined to study the laws of the following non-commutative random variables_
\[\psi^{(\lambda)}:=\frac{\dim V_{(\lambda)}}{(2k)!}\sum_{\sigma\in\mathfrak{S}_{2k }}\bar{\chi}^{(\lambda)}(\sigma)m_{\sigma}\]
_where \(\lambda\) labels an irreducible representation of \(\mathfrak{S}_{2k}\) and \(V_{(\lambda)}\) is the support of the corresponding representation. Note that this dimension is the number of Young tableaux of shape \(\lambda\)[11, Problem 4.47]. If one introduces the projector_
\[P^{(\lambda)}=\frac{\dim V_{(\lambda)}}{(2k)!}\sum_{\sigma\in\mathfrak{S}_{2k }}\bar{\chi}^{(\lambda)}(\sigma)U_{N,\sigma},\]
_then, according to [11, section 2.4],_
\[P^{(\lambda)}(M_{N,id})=\frac{\dim V_{(\lambda)}}{(2k)!}\sum_{\sigma\in \mathfrak{S}_{2k}}\bar{\chi}^{(\lambda)}(\sigma)M_{N,\sigma}\]
_lives in the irreducible (up to multiplicity) sub-representation \(V^{(\lambda)}\) of \((\mathbb{C}^{N})^{\otimes 2k}\). Therefore, \(P^{(\lambda)}(M_{N,id})\) can model a parastatistics (random) quantum state [10]. In the large \(N\) limit, \(P^{(\lambda)}(M_{N,id})\) converges to \(\psi^{(\lambda)}\). Typical (entanglement) properties of such states have not been studied and it could be interesting to explore them further. We have tried to compute the limit law using our Theorem 2.15; however, we were not able to reach a conclusion. Denoting \(K_{\lambda}:=\frac{\dim V_{(\lambda)}}{(2k)!}\), the \(\mathfrak{S}_{k}\)-covariance reads_
\[\mathcal{E}(\psi^{(\lambda)}u_{\eta}(\psi^{(\lambda)})^{*})=K_{ \lambda}^{2}\sum_{\sigma,\sigma^{\prime}\in\mathfrak{S}_{2k}}\bar{\chi}^{( \lambda)}(\sigma)\chi^{(\lambda)}(\sigma^{\prime})\mathcal{E}(m_{\sigma}u_{ \eta}m_{\sigma^{\prime}}^{*})\] \[=K_{\lambda}\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}c\chi^{( \lambda)}(\eta^{\prime}\sqcup\eta)u_{\eta^{\prime}}.\]
_Our attempt at computing the moments of the \(\psi^{(\lambda)}\)'s leads to complicated combinations of Littlewood-Richardson coefficients for which we found no practical expression._
#### 2.2.3 Examples: proof of Corollary 1.5
We showcase the use of Theorem 2.15 by proving Corollary 1.5. We recall that we consider three random matrices
\[S_{1,N}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}}M_{N,\sigma},\quad S_{2,N}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}} \operatorname{sg}(\sigma)\,M_{N,\sigma},\]
\[S_{3,N}=\frac{1}{\sqrt{2(2k)!k!(c+\Re c^{\prime})}}\sum_{\sigma\in\mathfrak{S }_{2k}}\big{(}M_{N,\sigma}+M_{N,\sigma}^{*}\big{)},\]
where \(M_{N}\) satisfies Hypothesis 1.2. We address the question of computing the limits of the empirical eigenvalues distributions of \(S_{1,N}S_{1,N}^{*}\), \(S_{2,N}S_{2,N}^{*}\) and of \(S_{3,N}\). We start by computing the limits of the covariances with respect to \(\mathcal{E}_{N}\).
**Lemma 2.24**.: _Let \(s_{1}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}}m_{\sigma}\), \(s_{2}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}}\mathrm{sg}(\sigma)\,m_{\sigma}\) and \(s_{3}=\frac{1}{\sqrt{2(2k)!k!(c+\Re c^{\prime})}}\sum_{\sigma\in\mathfrak{S}_{2k}}\big{(}m_{\sigma}+m_{\sigma}^{*}\big{)}\), where \(\mathbf{m}\) is the limit of \(\mathbf{M}_{N}\). Then we have for all \(\eta\in\mathfrak{S}_{k}\),_
\[\mathcal{E}\big{(}s_{1}u_{\eta}s_{1}^{*}\big{)}=\mathcal{E}\big{(}s_{1}^{*}u_ {\eta}s_{1}\big{)}=\mathcal{E}\big{(}s_{3}u_{\eta}s_{3}\big{)}=(k!)^{-1}\sum_{ \eta^{\prime}\in\mathfrak{S}_{k}}u_{\eta^{\prime}},\]
\[\mathcal{E}(s_{2}u_{\eta}s_{2}^{*})=\mathcal{E}(s_{2}^{*}u_{\eta}s_{2})=(k!)^{ -1}\mathrm{sg}(\eta)\,\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}\mathrm{sg}(\eta ^{\prime})\,u_{\eta^{\prime}}.\]
Note that in each case, the right-hand side is independent of \(\eta\).
Proof of Lemma 2.24.: **Case of \(s_{1}\).** First note that for any \(\eta\) in \(\mathfrak{S}_{k}\), by Lemma 2.16 we have
\[s_{1}u_{\eta}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}}m_{ \sigma}u_{\eta}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}}m_{( \mathrm{id}\sqcup\eta^{-1})\sigma}=s_{1},\]
thanks to a change of variable in the last equality. Similarly, using the equality \(u_{\eta}m_{\sigma}=m_{(\eta\sqcup\mathrm{id})\sigma}\) we get \(u_{\eta}s_{1}=s_{1}\) for all \(\eta\) in \(\mathfrak{S}_{k}\). Hence for all \(\eta\) in \(\mathfrak{S}_{k}\) we get
\[\mathcal{E}(s_{1}u_{\eta}s_{1}^{*}) = \mathcal{E}(s_{1}s_{1}^{*})=\sum_{\eta^{\prime}\in\mathfrak{S}_{ k}}\Phi(s_{1}s_{1}^{*}u_{\eta^{\prime}}^{*})u_{\eta^{\prime}}\] \[= \Phi(s_{1}s_{1}^{*})\times\sum_{\eta^{\prime}\in\mathfrak{S}_{k}} u_{\eta^{\prime}}\] \[= \frac{1}{(2k)!k!c}\sum_{\sigma,\sigma^{\prime}\in\mathfrak{S}_{ 2k}}\Phi(m_{\sigma}m_{\sigma^{\prime}}^{*})\times\sum_{\eta^{\prime}\in \mathfrak{S}_{k}}u_{\eta^{\prime}}\]
But by (2.12), we have \(\Phi(m_{\sigma}m_{\sigma^{\prime}}^{*})=c\delta_{\sigma,\sigma^{\prime}}\), therefore
\[\mathcal{E}(s_{1}u_{\eta}s_{1}^{*})=\frac{c(2k)!}{(2k)!k!c}\times\sum_{\eta^{ \prime}\in\mathfrak{S}_{k}}u_{\eta^{\prime}}=\frac{1}{k!}\sum_{\eta^{\prime} \in\mathfrak{S}_{k}}u_{\eta^{\prime}}\]
Moreover, thanks to the same arguments, we have
\[\mathcal{E}(s_{1}^{*}u_{\eta}s_{1}) = \mathcal{E}(s_{1}^{*}s_{1})=\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}\Phi(s_{1}^{*}s_{1}u_{\eta^{\prime}}^{*})u_{\eta^{\prime}}=\mathcal{E}(s_{1}u_{\eta}s_{1}^{*})\]
This concludes the computation of the \(\mathfrak{S}_{k}\)-covariance for \(s_{1}\).
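As a sanity check, consider the case \(k=1\): then \(\mathfrak{S}_{1}=\{\mathrm{id}\}\), \(\mathfrak{S}_{2}=\{\mathrm{id},\tau\}\) and \(s_{1}=\frac{1}{\sqrt{2c}}(m_{\mathrm{id}}+m_{\tau})\), so that (2.12) gives directly

\[\mathcal{E}(s_{1}s_{1}^{*})=\Phi(s_{1}s_{1}^{*})\,u_{\mathrm{id}}=\frac{1}{2c}(c+c)\,u_{\mathrm{id}}=u_{\mathrm{id}},\]

which is indeed \((k!)^{-1}\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}u_{\eta^{\prime}}\) for \(k=1\).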
**Case of \(s_{2}\).** We now have the following computation, for any \(\eta\) in \(\mathfrak{S}_{k}\)
\[s_{2}u_{\eta}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}}\mathrm{ sg}(\sigma)m_{\sigma}u_{\eta}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k} }\mathrm{sg}(\sigma)m_{(\mathrm{id}\sqcup\eta^{-1})\sigma}.\]
Recalling that \(\mathrm{sg}(\alpha\beta)=\mathrm{sg}(\alpha)\mathrm{sg}(\beta)\) and \(\mathrm{sg}(\mathrm{id})=1\), we have
\[s_{2}u_{\eta}=\frac{1}{\sqrt{(2k)!k!c}}\sum_{\sigma\in\mathfrak{S}_{2k}} \mathrm{sg}(\eta)\mathrm{sg}((\mathrm{id}\sqcup\eta^{-1})\sigma)m_{(\mathrm{ id}\sqcup\eta^{-1})\sigma}=\mathrm{sg}(\eta)s_{2}.\]
Similarly we have \(u_{\eta}s_{2}=\mathrm{sg}(\eta^{-1})s_{2}=\mathrm{sg}(\eta)s_{2}\). Hence we get, for all \(\eta\) in \(\mathfrak{S}_{k}\)
\[\mathcal{E}(s_{2}u_{\eta}s_{2}^{*}) = \mathrm{sg}(\eta)\mathcal{E}(s_{2}s_{2}^{*})=\mathrm{sg}(\eta) \sum_{\eta^{\prime}\in\mathfrak{S}_{k}}\Phi(s_{2}s_{2}^{*}u_{\eta^{\prime}}^{* })u_{\eta^{\prime}}\] \[= \mathrm{sg}(\eta)\Phi(s_{2}s_{2}^{*})\times\sum_{\eta^{\prime} \in\mathfrak{S}_{k}}\mathrm{sg}(\eta^{\prime})u_{\eta^{\prime}}\] \[= \mathrm{sg}(\eta)\sum_{\sigma,\sigma^{\prime}\in\mathfrak{S}_{2k }}\frac{\mathrm{sg}(\sigma)\mathrm{sg}(\sigma^{\prime})\Phi(m_{\sigma}m_{ \sigma^{\prime}}^{*})}{(2k)!k!c}\times\sum_{\eta^{\prime}\in\mathfrak{S}_{k}} \mathrm{sg}(\eta^{\prime})u_{\eta^{\prime}}\] \[= \frac{\mathrm{sg}(\eta)}{k!}\sum_{\eta^{\prime}\in\mathfrak{S}_{k }}\mathrm{sg}(\eta^{\prime})u_{\eta^{\prime}}.\]
**Case of \(s_{3}\).** Since \(s_{3}\) is proportional to \(s_{1}+s_{1}^{*}\), we have \(s_{3}u_{\eta}=u_{\eta}s_{3}=s_{3}\). Hence we have for any \(\eta\) in \(\mathfrak{S}_{k}\)
\[\mathcal{E}(s_{3}u_{\eta}s_{3}) = \mathcal{E}(s_{3}s_{3})=\Phi(s_{3}s_{3}^{*})\times\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}u_{\eta^{\prime}}\] \[= \frac{1}{(2k)!k!2(c+\Re c^{\prime})}\sum_{\sigma,\sigma^{\prime}\in\mathfrak{S}_{2k}}\Phi\big{(}(m_{\sigma}+m_{\sigma}^{*})(m_{\sigma^{\prime}}+m_{\sigma^{\prime}}^{*})\big{)}\times\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}u_{\eta^{\prime}}\]
By (2.12), we have \(\Phi(m_{\sigma}^{*}m_{\sigma^{\prime}})=\Phi(m_{\sigma}m_{\sigma^{\prime}}^{* })=c\delta_{\sigma,\sigma^{\prime}}\) and \(\Phi(m_{\sigma}m_{\sigma^{\prime}})=\overline{\Phi(m_{\sigma}^{*}m_{\sigma^{ \prime}}^{*})}=c^{\prime}\delta_{\sigma,\tau\sigma^{\prime}}\), so
\[\mathcal{E}(s_{3}u_{\eta}s_{3}) = \frac{1}{(2k)!k!2(c+\Re c^{\prime})}\sum_{\sigma\in\mathfrak{S}_{2k}}2(c+\Re c^{\prime})\times\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}u_{\eta^{\prime}}\] \[= \frac{1}{k!}\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}u_{\eta^{\prime}}.\]
We can now prove Corollary 1.5. We start with the computation for the two first matrices and use the notations \(S_{N}\) and \(s\) to designate either \(S_{1,N}\) and \(s_{1}\), or \(S_{2,N}\) and \(s_{2}\). By the moment method, it is sufficient to show that the limit of \(\Phi_{N}\big{(}(S_{N}S_{N}^{*})^{n}\big{)}\) is the \(n\)-th moment of the expected
limiting distribution, for all \(n\geq 1\). Since \({\bf m}\) is \(\mathfrak{S}_{k}\)-circular and \(s\) is a linear combination of \({\bf m}\), \(s\) is also \(\mathfrak{S}_{k}\)-circular. Hence we have for all \(n\geq 1\)
\[{\cal E}\big{(}(ss^{*})^{n}\big{)} = \sum_{\xi\in{\rm NC}_{2}(2n)}{\cal K}_{\xi}(\underbrace{s,s^{*}, \ldots,s,s^{*}}_{2n}).\]
Recall that by definition of \({\cal K}_{\xi}\), if \(B\) denotes the first interval block of \(\xi\),
\[{\cal K}_{\xi}(s,s^{*},\ldots,s,s^{*})\] \[= \left\{\begin{array}{ll}{\cal K}_{\xi\setminus B}(\ldots,s\,\mathcal{E}\big{(}s^{*}s\big{)},s^{*},\ldots),&\mbox{ if }B=\{2i,2i+1\},\\ {\cal K}_{\xi\setminus B}(\ldots,s^{*}\,\mathcal{E}\big{(}ss^{*}\big{)},s,\ldots),&\mbox{ if }B=\{2i-1,2i\},\end{array}\right.\]
where \(\xi\setminus B\) is the partition obtained from \(\xi\) by removing the block \(B\) and shifting the indices above. When \(\xi\setminus B\) is empty, the formula is valid with the convention \({\cal K}_{\emptyset}={\rm id}\).
We now consider the first case \(s=s_{1}\). By Lemma 2.24, we get with the same disjunction of cases as above
\[{\cal K}_{\xi}(s,s^{*},\ldots,s,s^{*}) = \left\{\begin{array}{ll}(k!)^{-1}\sum_{\eta^{\prime}\in \mathfrak{S}_{k}}{\cal K}_{\xi\setminus B}(\ldots,su_{\eta^{\prime}},s^{*}, \ldots)\mbox{ or}\\ (k!)^{-1}\sum_{\eta^{\prime}\in\mathfrak{S}_{k}}{\cal K}_{\xi\setminus B}( \ldots,s^{*}u_{\eta^{\prime}},s,\ldots)\end{array}\right.\]
If \(n\geq 2\), then \({\cal K}_{\xi\setminus B}\) can also be written as a covariance and so by Lemma 2.24 again, the quantity in the sum is independent of \(\eta^{\prime}\), so that
\[{\cal K}_{\xi}(\underbrace{s,s^{*},\ldots,s,s^{*}}_{2n})={\cal K}_{\xi\setminus B}(\underbrace{s,s^{*},\ldots,s,s^{*}}_{2n-2}).\]
By induction we hence get
\[{\cal E}\big{(}(ss^{*})^{n}\big{)} = |{\rm NC}_{2}(2n)|\times(k!)^{-1}\sum_{\eta^{\prime}\in\mathfrak{ S}_{k}}u_{\eta^{\prime}}.\]
Finally, since \(\phi(u_{\eta})=0\) when \(\eta\neq{\rm id}\) and \(\phi(u_{\rm id})=1\), we get
\[\phi\big{(}(ss^{*})^{n}\big{)} = |{\rm NC}_{2}(2n)|\times(k!)^{-1}.\]
We recognize the moments of a random variable which is zero with probability \(1-(k!)^{-1}\), and distributed according to a Marchenko-Pastur law otherwise. Hence we have proved that the \(n\)-th moment of the empirical eigenvalue distribution of \(S_{N}S_{N}^{*}\) converges to the \(n\)-th moment of such a random variable. Since these moments characterize the distribution, we have proved the first item of Corollary 1.5. The case \(s=s_{2}\) is identical: by Lemma 2.24 each covariance carries the sign \(\mathrm{sg}(\eta^{\prime})\), and these signs appear squared along the induction, hence cancel.
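For illustration, \(|\mathrm{NC}_{2}(2n)|\) is the \(n\)-th Catalan number, so for \(k=2\) the first moments are

\[\phi(ss^{*})=\frac{1}{2},\qquad\phi\big{(}(ss^{*})^{2}\big{)}=1,\qquad\phi\big{(}(ss^{*})^{3}\big{)}=\frac{5}{2},\]

the moments of the mixture \(\frac{1}{2}\delta_{0}+\frac{1}{2}\mathrm{MP}\), where \(\mathrm{MP}\) is the Marchenko-Pastur distribution whose moments are the Catalan numbers.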
The statement for \(S_{3,N}\) is similar, using the fact that \(S_{3,N}\) is proportional to \(S_{1,N}+S_{1,N}^{*}\), so that up to a multiplicative constant

\[S_{3,N}^{n}=\sum_{\varepsilon_{1},\ldots,\varepsilon_{n}\ \in\{1,*\}}\ S_{1,N}^{\varepsilon_{1}}\cdots S_{1,N}^{\varepsilon_{n}}.\]
## 3 Proof of preliminary lemmas of Sections 2.1 and 2.2
We denote by \(\langle\cdot,\cdot\rangle\) the canonical scalar product of \((\mathbb{C}^{N})^{\otimes k}\) and by \((e_{i_{1}}\otimes\cdots\otimes e_{i_{k}})_{i_{1},\ldots,i_{k}\in[N]}\) its canonical basis. For any \(\eta\in\mathfrak{S}_{k}\), we have introduced the matrix \(U_{N,\eta}\) such that for all \(\mathbf{i},\mathbf{j}\in[N]^{k}\),

\[U_{N,\eta}(\mathbf{i},\mathbf{j}) = \langle e_{i_{1}}\otimes\cdots\otimes e_{i_{k}},e_{j_{\eta(1)}}\otimes\cdots\otimes e_{j_{\eta(k)}}\rangle=\delta_{\mathbf{i},\eta(\mathbf{j})},\]

where we use the notation \(\eta(\mathbf{j})=(j_{\eta(1)},\ldots,j_{\eta(k)})\) and \(\delta\) is the Kronecker symbol.
Proof of Lemma 2.3.: 1. The first item computes the normalized trace \(\Phi_{N}[U_{N,\eta}]\) of the matrix. To prove it, we write
\[\Phi_{N}[U_{N,\eta}] = N^{-k}\sum_{\mathbf{i}\in[N]^{k}}\delta_{\mathbf{i},\eta(\mathbf{i})}.\]
The condition \(\delta_{\mathbf{i},\eta(\mathbf{i})}=1\) implies \(i_{1}=i_{\eta(1)}=i_{\eta^{2}(1)}=\dots\). Hence, when \(i_{1}\in[N]\) is given arbitrarily, the summand in the right hand side is nonzero when \(i_{\ell}=i_{1}\) for all \(\ell\) in the same cycle of \(1\) in \(\eta\). Extending the reasoning for all cycles of \(\eta\) yields \(\Phi_{N}[U_{N,\eta}]=N^{\#\eta-k}\), where \(\#\eta\) is the number of cycles of \(\eta\).
2. The second item of the lemma states that the matrices \(U_{N,\sigma}\) are linearly independent when \(N\geq k\). Let \((a_{\eta})_{\eta\in\mathfrak{S}_{k}}\) be a collection of complex numbers, and let us assume that \(\sum_{\eta\in\mathfrak{S}_{k}}a_{\eta}U_{N,\eta}=0\). When \(N\geq k\), we can apply both side of the former equation to the basis vector \(e_{1}\otimes e_{2}\otimes\ldots\otimes e_{k}\), which yields the equality \(\sum_{\eta\in\mathfrak{S}_{k}}a_{\eta}e_{\eta(1)}\otimes\ldots\otimes e_{\eta (k)}=0\). Since \((e_{\eta(1)}\otimes\ldots\otimes e_{\eta(k)})_{\eta\in\mathfrak{S}_{k}}\) is a subfamily of a basis, the vectors are linearly independent, and then \(a_{\eta}=0\) for all \(\eta\in\mathfrak{S}_{k}\). Hence the matrices \((U_{N,\sigma})_{\sigma\in\mathfrak{S}_{k}}\) are linearly independent.
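For instance, for the first item with \(k=2\) and \(\eta\) the transposition of \(\{1,2\}\): \(U_{N,\eta}\) is the swap operator of \(\mathbb{C}^{N}\otimes\mathbb{C}^{N}\), \(e_{i}\otimes e_{j}\mapsto e_{j}\otimes e_{i}\). It has \(\#\eta=1\) cycle and unnormalized trace \(\sum_{i,j\in[N]}\delta_{i,j}\delta_{j,i}=N\), so that indeed

\[\Phi_{N}[U_{N,\eta}]=N^{-2}\times N=N^{\#\eta-k}.\]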
We now come back to the proof of elementary properties of the maps
\[\mathcal{E}_{N}:A_{N}\mapsto\sum_{\eta\in\mathfrak{S}_{k}}\Phi_{N}\big{[}A_{N}U_{N,\eta}^{*}\big{]}U_{N,\eta}.\]
Proof of Lemma 2.5.: We first prove that \(\mathcal{E}_{N}\) is a conditional expectation, namely Equation (2.2) holds. By linearity, it is sufficient to prove
\[\mathcal{E}_{N}(U_{N,\eta_{1}}A_{N}U_{N,\eta_{2}})=U_{N,\eta_{1}}\mathcal{E}_{ N}(A_{N})U_{N,\eta_{2}}\]
for all \(\eta_{1},\eta_{2}\in\mathfrak{S}_{k}\). By traciality of \(\Phi_{N}\), we have
\[\mathcal{E}_{N}(U_{N,\eta_{1}}A_{N}U_{N,\eta_{2}}) = \sum_{\eta\in\mathfrak{S}_{k}}\Phi_{N}\big{[}A_{N}U_{N,\eta_{2} \eta^{-1}\eta_{1}}\big{]}U_{N,\eta}\]
Using the change of variable \(\eta^{\prime}=\eta_{1}^{-1}\eta\eta_{2}^{-1}\) in the sum yields
\[\mathcal{E}_{N}(U_{N,\eta_{1}}A_{N}U_{N,\eta_{2}}) = \sum_{\eta\in\mathfrak{S}_{k}}\Phi_{N}\big{[}A_{N}U_{N,\eta^{-1}} \big{]}U_{N,\eta_{1}\eta\eta_{2}}\] \[= U_{N,\eta_{1}}\mathcal{E}_{N}(A_{N})U_{N,\eta_{2}},\]
proving the first point (2.2) of Lemma 2.5. Moreover we have
\[\mathcal{E}_{N}(\mathbb{I}_{N})=\sum_{\eta\in\mathfrak{S}_{k}}\Phi_{N}\big{[}U _{N,\eta}^{*}\big{]}U_{N,\eta}=\sum_{\eta\in\mathfrak{S}_{k}}N^{\#\eta-k}U_{N, \eta}=\mathbb{I}_{N}+o(1),\]
proving the third assertion of the lemma. Finally, we have
\[\Phi_{N}\big{[}\mathcal{E}_{N}(A_{N})\big{]} = \sum_{\eta\in\mathfrak{S}_{k}}\Phi_{N}\big{[}A_{N}U_{N,\eta}^{*} \big{]}N^{\#\eta-k}=\Phi_{N}\big{[}A_{N}]+o(1),\]
since \(\#\eta=k\) only if \(\eta=\mathrm{id}\). This concludes the proof of Lemma 2.5.
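For instance, for \(k=2\), writing \(F_{N}:=U_{N,(1\,2)}\) for the swap operator (which satisfies \(F_{N}^{*}=F_{N}\)), the conditional expectation takes the explicit form

\[\mathcal{E}_{N}(A_{N})=\Phi_{N}\big{[}A_{N}\big{]}\,\mathbb{I}_{N}+\Phi_{N}\big{[}A_{N}F_{N}\big{]}\,F_{N}.\]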
Proof of Lemma 2.6.: The map \(\mathcal{E}_{N}\) is a normalized expectation (over the distribution of random tensors) of the map
\[X\mapsto\sum_{\eta\in\mathfrak{S}_{k}}\mathrm{Tr}[U_{N,\eta}^{*}X]U_{N,\eta}.\]
Hence, it is enough to show that the linear map above is completely positive. This will follow by computing the so-called Choi matrix of the map and showing that it is positive semidefinite [23, Theorem 2.22]. The Choi matrix reads
\[C=\sum_{\eta\in\mathfrak{S}_{k}}U_{N,\eta}\otimes\bar{U}_{N,\eta}=\sum_{\eta \in\mathfrak{S}_{k}}U_{N,\eta}^{\otimes 2}=\sum_{\eta\in\mathfrak{S}_{k}}U_{N^{2}, \eta}=k!P_{\mathrm{sym}},\]
where \(P_{\mathrm{sym}}\) is the projection on the symmetric subspace of \((\mathbb{C}^{N^{2}})^{\otimes k}\), see [1, Proposition 1]; this concludes the proof.
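For instance, for \(k=2\) the Choi matrix reads \(C=U_{N^{2},\mathrm{id}}+U_{N^{2},(1\,2)}\), i.e. the identity plus the swap operator of \(\mathbb{C}^{N^{2}}\otimes\mathbb{C}^{N^{2}}\); its eigenvalues are \(2\) on the symmetric subspace and \(0\) on the antisymmetric one, so that \(C=2P_{\mathrm{sym}}\) is indeed positive semidefinite.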
Proof of Lemma 2.16.: The first relation we must prove
\[U_{N,\eta}M_{N,\sigma}U_{N,\eta^{\prime}}^{*}=M_{N,(\eta\sqcup \eta^{\prime})\sigma},\quad\forall\sigma\in\mathfrak{S}_{2k},\forall\eta, \eta^{\prime}\in\mathfrak{S}_{k},\]
follows from a simple computation of the entries: for any \(\mathbf{i},\mathbf{i}^{\prime}\in[N]^{k}\),

\[U_{N,\eta}M_{N,\sigma}U_{N,\eta^{\prime}}^{*}(\mathbf{i},\mathbf{i}^{\prime}) = \sum_{\mathbf{j},\mathbf{j}^{\prime}\in[N]^{k}}\delta_{\mathbf{i},\eta(\mathbf{j})}\delta_{\mathbf{i}^{\prime},\eta^{\prime}(\mathbf{j}^{\prime})}M_{N}\big{(}\sigma^{-1}(\mathbf{j},\mathbf{j}^{\prime})\big{)} = M_{N}\big{(}\sigma^{-1}(\eta^{-1}\sqcup{\eta^{\prime}}^{-1})(\mathbf{i},\mathbf{i}^{\prime})\big{)} = M_{N,(\eta\sqcup\eta^{\prime})\sigma}(\mathbf{i},\mathbf{i}^{\prime}).\]
Moreover, we have \(U_{N,\eta}M_{N,\sigma}^{*}U_{N,\eta^{\prime}}^{*}=\big{(}U_{N,\eta^{\prime}}M_ {N,\sigma}U_{N,\eta}^{*}\big{)}^{*}=M_{N,(\eta^{\prime}\sqcup\eta)\sigma}^{*}\).
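As a simple instance, take \(k=2\), \(\eta=(1\,2)\) and \(\eta^{\prime}=\mathrm{id}\): then \(\eta\sqcup\mathrm{id}\) is the transposition of \(\mathfrak{S}_{4}\) exchanging \(1\) and \(2\) and fixing \(3\) and \(4\), and the relation reads

\[U_{N,(1\,2)}\,M_{N,\sigma}=M_{N,((1\,2)\sqcup\mathrm{id})\sigma},\]

i.e. permuting the row indices of a flattening amounts to left-composing \(\sigma\) with the corresponding permutation of the first \(k\) letters.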
## 4 Proof of the convergence of the \(\mathfrak{S}_{k}\)-covariance
This section is devoted to the proof of the convergence of the \(\mathfrak{S}_{k}\)-covariance in Theorem 2.15. We recall that for any permutation \(\sigma\in\mathfrak{S}_{2k}\), the \(\mathfrak{S}_{k,k}\rtimes\langle\tau\rangle\)-coset of \(\sigma\) is the union of all \(\sigma^{\prime}\in\mathfrak{S}_{2k}\) of the form
\[\sigma^{\prime}=(\eta\sqcup\eta^{\prime})\sigma,\ \text{or}\ (\eta\sqcup\eta^{ \prime})\tau\sigma,\quad\eta,\eta^{\prime}\in\mathfrak{S}_{k}.\]
We start by proving formula (2.9), namely
\[\mathcal{E}_{N}(M_{N,\sigma}U_{N,\eta}M_{N,\sigma^{\prime}}^{*}) \underset{N\to\infty}{\longrightarrow}\left\{\begin{array}{ll}cu_{\eta^{ \prime}}&\text{ if }\sigma=(\eta^{\prime}\sqcup\eta)\sigma^{\prime}\\ 0&\text{ otherwise}\end{array}\right.\]
We start with the computation of the ordinary covariance
\[\Phi_{N}\big{(}M_{N,\sigma}M_{N,\sigma^{\prime}}^{*}\big{)}\] \[= \mathbb{E}\Big{[}\frac{1}{N^{k}}\sum_{\mathbf{i}\in[N]^{2k}}M_{N}\big{(}\sigma^{-1}(\mathbf{i})\big{)}\overline{M_{N}\big{(}\sigma^{\prime-1}(\mathbf{i})\big{)}}\Big{]}\] \[= \frac{1}{N^{2k}}\sum_{\begin{subarray}{c}\mathbf{i}=(i_{1},\ldots,i_{2k})\in[N]^{2k}\\ i_{p}\neq i_{q},\,\forall p\neq q\,\mathrm{in}\,[2k]\end{subarray}}\mathbb{E}\Big{[}N^{k}M_{N}\big{(}\sigma^{-1}(\mathbf{i})\big{)}\overline{M_{N}\big{(}\sigma^{\prime-1}(\mathbf{i})\big{)}}\Big{]}+O\big{(}N^{-1}\big{)}.\]
In the last line, we use the boundedness of \(\mathbb{E}\big{[}N^{k}M_{N}(\mathbf{i})\overline{M_{N}(\mathbf{i}^{\prime})}\big{]}\) from Hypothesis 1.2 to estimate the \(O(N^{-1})\). Moreover the entries of \(M_{N}\) are centered and independent, so each expectation in the sum above is zero unless the entries \(M_{N}\big{(}\sigma^{-1}(\mathbf{i})\big{)}\) and \(M_{N}\big{(}\sigma^{\prime-1}(\mathbf{i})\big{)}\) are the same variable, in which case it converges to \(c\). Assume that the indices of \(\mathbf{i}\) are pairwise distinct. Then these two entries are the same only if \(\sigma=\sigma^{\prime}\). Hence we get
\[\Phi_{N}\big{(}M_{N,\sigma}M_{N,\sigma^{\prime}}^{*}\big{)}\quad \underset{N\to\infty}{\longrightarrow}\quad c\delta_{\sigma,\sigma^{\prime}}.\]
Hence we deduce the \({}^{*}\)-moments in the \(M_{N,\sigma}\)'s and the \(U_{N,\sigma}\)'s by using Lemma 2.16:
\[\Phi_{N}\big{(}M_{N,\sigma}U_{N,\eta}M_{N,\sigma^{\prime}}^{*}U_{ N,\eta^{\prime}}^{*}\big{)} = \Phi_{N}\big{(}M_{N,\sigma}M_{N,(\eta^{\prime}\sqcup\eta)\sigma^{ \prime}}^{*}\big{)}\underset{N\to\infty}{\longrightarrow}c\delta_{\sigma,( \eta^{\prime}\sqcup\eta)\sigma^{\prime}}.\]
We can deduce the limit expression for the conditional expectation
\[\mathcal{E}_{N}\big{(}M_{N,\sigma}U_{N,\eta}M_{N,\sigma^{\prime}}^{*}\big{)} \underset{N\to\infty}{\longrightarrow}\quad\sum_{\eta^{\prime}\in\mathfrak{ S}_{k}}c\delta_{\sigma,(\eta^{\prime}\sqcup\eta)\sigma^{\prime}}u_{\eta^{ \prime}}.\]
If \(\sigma\) and \(\sigma^{\prime}\) are not in a same \(\mathfrak{S}_{k,k}\)-coset, there is no \(\eta,\eta^{\prime}\) such that \(\sigma=(\eta^{\prime}\sqcup\eta)\sigma^{\prime}\) and so \(\mathcal{E}_{N}\big{(}M_{N,\sigma}U_{N,\eta}M^{*}_{N,\sigma^{\prime}}\big{)}\) converges to zero. Assume now that \(\sigma=(\eta^{\prime}_{0}\sqcup\eta_{0})\sigma^{\prime}\), for some \(\eta_{0},\eta^{\prime}_{0}\in\mathfrak{S}_{k}\). Then we have
\[\mathcal{E}_{N}\big{(}M_{N,\sigma}U_{N,\eta}M^{*}_{N,\sigma^{ \prime}}\big{)} \underset{N\to\infty}{\longrightarrow} \sum_{\eta^{\prime}\in\mathfrak{S}_{k}}c\delta_{(\eta^{\prime}_{0} \sqcup\eta_{0})\sigma^{\prime},(\eta^{\prime}\sqcup\eta)\sigma^{\prime}}u_{ \eta^{\prime}}\] \[= \sum_{\eta^{\prime}\in\mathfrak{S}_{k}}c\delta_{\eta,\eta_{0}} \delta_{\eta^{\prime},\eta^{\prime}_{0}}u_{\eta^{\prime}}\] \[= cu_{\eta^{\prime}}\quad\text{where }\sigma=(\eta^{\prime} \sqcup\eta)\sigma^{\prime}.\]
We have proved the announced result, formula (2.9).
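As a simple instance of (2.9), take \(\eta=\mathrm{id}\) and \(\sigma^{\prime}=\sigma\): the condition \(\sigma=(\eta^{\prime}\sqcup\mathrm{id})\sigma\) forces \(\eta^{\prime}=\mathrm{id}\), so that

\[\mathcal{E}_{N}\big{(}M_{N,\sigma}M_{N,\sigma}^{*}\big{)}\underset{N\to\infty}{\longrightarrow}c\,u_{\mathrm{id}},\]

while taking \(\sigma=(\eta^{\prime}\sqcup\mathrm{id})\sigma^{\prime}\) with \(\eta^{\prime}\neq\mathrm{id}\) produces the twisted limit \(c\,u_{\eta^{\prime}}\).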
We prove formula (2.10), similarly, using the traciality of \(\Phi_{N}\)
\[\mathcal{E}_{N}\big{(}M^{*}_{N,\sigma}U_{N,\eta}M_{N,\sigma^{ \prime}}\big{)} = \sum_{\eta^{\prime}\in\mathfrak{S}_{k}}\Phi_{N}\big{(}M^{*}_{N, \sigma}U_{N,\eta}M_{N,\sigma^{\prime}}U^{*}_{N,\eta^{\prime}}\big{)}U_{N,\eta^ {\prime}}\] \[= \sum_{\eta^{\prime}\in\mathfrak{S}_{k}}\Phi_{N}\big{(}M_{N, \sigma^{\prime}}U_{N,\eta^{\prime-1}}M^{*}_{N,\sigma}U^{*}_{N,\eta^{-1}}\big{)} U_{N,\eta^{\prime}}\] \[\underset{N\to\infty}{\longrightarrow} \sum_{\eta^{\prime}\in\mathfrak{S}_{k}}c\delta_{\sigma^{\prime},( \eta^{-1}\sqcup\eta^{\prime-1})\sigma}u_{\eta^{\prime}}=\sum_{\eta^{\prime} \in\mathfrak{S}_{k}}c\delta_{(\eta\sqcup\eta^{\prime})\sigma^{\prime},\sigma}u_ {\eta^{\prime}}\] \[= \left\{\begin{array}{ll}cu_{\eta^{\prime}}&\text{if }\sigma=(\eta \sqcup\eta^{\prime})\sigma^{\prime},\\ 0&\text{otherwise}.\end{array}\right.\]
It remains to prove formula (2.11), considering \(\mathcal{E}_{N}(M_{N,\sigma}U_{N,\eta}M_{N,\sigma^{\prime}})\). Firstly, we have with a similar reasoning
\[\Phi_{N}\big{(}M_{N,\sigma}M_{N,\sigma^{\prime}}\big{)}\] \[= \frac{1}{N^{2k}}\sum_{\mathbf{i}\in[N]^{2k}}\mathbb{E}\Big{[}N^{k }M_{N}\big{(}\sigma^{-1}(\mathbf{i})\big{)}M_{N}\Big{(}{\sigma^{\prime}}^{-1} \big{(}\tau(\mathbf{i})\big{)}\Big{)}\Big{]}\] \[\underset{N\to\infty}{\longrightarrow} c^{\prime}\delta_{\sigma,\tau\sigma^{\prime}},\]
which implies that
\[\Phi_{N}\big{(}M_{N,\sigma}U_{N,\eta}M_{N,\sigma^{\prime}}U^{*}_{N,\eta^{ \prime}}\big{)}\underset{N\to\infty}{\longrightarrow} c^{\prime}\delta_{\sigma,(\eta\sqcup\eta^{\prime})\sigma^{\prime}}.\]
We deduce with the same computation as before
\[\mathcal{E}_{N}\big{(}M_{N,\sigma}U_{N,\eta}M_{N,\sigma^{\prime}}\big{)} \underset{N\to\infty}{\longrightarrow}\left\{\begin{array}{ll}c^{\prime}u_{\eta^{\prime}}&\text{if }\sigma=(\eta\sqcup\eta^{\prime})\sigma^{\prime},\\ 0&\text{otherwise}.\end{array}\right.\]
## 5 Proof of the asymptotic \(\mathfrak{S}_{k}\)-circularity
In all the section, \(M_{N}\) denotes a random tensor that satisfies Hypothesis 1.2. We prove in this section that the collection of flattenings of \(M_{N}\) converges
to a \(\mathfrak{S}_{k}\)-circular collection. We use the method of traffics [11, 12], introducing graph notations and combinatorial manipulations specific to this method; see below. While our ultimate goal is to compute the \(\mathfrak{S}_{k}\)-moments \(\mathcal{E}_{N}\big{[}M_{N,\sigma_{1}}^{\varepsilon_{1}}U_{N,\eta_{1}}\cdots M_{N,\sigma_{L}}^{\varepsilon_{L}}U_{N,\eta_{L}}\big{]}\), the same technique as in the previous section allows us to first focus on the limit of the \({}^{*}\)-moments, namely
\[\Phi_{N}\big{[}M_{N,\sigma_{1}}^{\varepsilon_{1}}\cdots M_{N,\sigma_{L}}^{ \varepsilon_{L}}\big{]}=\mathbb{E}\left[\frac{1}{N^{k}}\mathrm{Tr}\,M_{N, \sigma_{1}}^{\varepsilon_{1}}\cdots M_{N,\sigma_{L}}^{\varepsilon_{L}}\right], \tag{5.1}\]
where \(L\geq 1\) and \(\sigma_{\ell}\in\mathfrak{S}_{2k}\), \(\varepsilon_{\ell}\in\{1,*\}\) for all \(\ell\in[L]\). Our analysis of \({}^{*}\)-moments shows how the conditional expectation \(\mathcal{E}_{N}\) appears naturally in the large \(N\) limit.
### Injective trace method for tensors
In this section, we first encode the \({}^{*}\)-moments (5.1) in terms of a function on hypergraphs in (5.3), and then we give a general formula in Lemma 5.5 for this quantity in terms of a transformation of this function that will simplify the calculations.
**Definition 5.1**.:
1. _We call directed_ \(k\)_-hypergraph a pair_ \((V,E)\) _where_ * \(V\) _is a non-empty set, its elements are called the vertices;_ * \(E\) _is a multi-set (elements can appear with multiplicity) of elements of_ \(V^{\times 2k}\)_, its elements are called the hyperedges; we often use the notation_ \(e=(in_{1},\ldots,in_{k},out_{1},\cdots,out_{k})\) _or_ \(e=(\mathbf{in},\mathbf{out})\)_, calling the_ \(k\) _first vertices the inputs and the_ \(k\) _last ones the outputs._
2. _In this article, a_ \({}^{*}\)_-test hypergraph is a quadruple_ \(T=(V,E,\sigma,\varepsilon)\) _where_ \((V,E)\) _is a directed_ \(k\)_-hypergraph and_ \(\sigma:E\to\mathfrak{S}_{2k}\) _and_ \(\varepsilon:E\to\{1,*\}\) _are labelling maps. With some abuse, we can think that the hyperedge_ \(e\in E\) _is associated to the matrix_ \(M_{N,\sigma(e)}^{\varepsilon(e)}\)_._
Since the domain of definition of the maps \(\sigma\) and \(\varepsilon\) is the multi-set \(E\), we emphasize that these functions can take different values on the different representatives of a same hyperedge, e.g. \(\sigma(e)\) is a multi-set (of cardinality the multiplicity of \(e\)) of elements of \(\mathfrak{S}_{2k}\), that indicates the value of each representative.
**Definition 5.2**.:
1. _Let_ \(T=(V,E,\sigma,\varepsilon)\) _be a_ \({}^{*}\)_-test hypergraph. The (unnormalized)_ _trace of_ \(T\) _in_ \(M_{N}\) _is defined by_ \[\mathrm{Tr}\big{[}T(M_{N})\big{]}=\sum_{j:V\to[N]}\;\prod_{e\in E}\;M_{N,\sigma (e)}^{\varepsilon(e)}\big{(}j(e)\big{)},\] (5.2) _where the product is counted with multiplicity, and for_ \(e=(\mathbf{in},\mathbf{out})\) _we have set_ \(j(e)=\big{(}j(out_{1}),\ldots,j(out_{k}),j(in_{1}),\ldots,j(in_{k})\big{)}\)_._
2. _Given_ \(L\geq 1\)_,_ \(\sigma_{1},\ldots,\sigma_{L}\in\mathfrak{S}_{2k}\) _and_ \(\varepsilon_{1},\ldots,\varepsilon_{L}\in\{1,*\}\)_, we define the_ \({}^{*}\)_-test hypergraph_ \(T^{\varepsilon_{1},\ldots,\varepsilon_{L}}_{\sigma_{1},\ldots,\sigma_{L}}=(V_{L},E_{L},\sigma,\varepsilon)\) _by_ * \(V_{L}=\{(1,1),\ldots,(1,L),\ldots,(k,1),\ldots,(k,L)\},\)_ * \(E_{L}=\{e_{1},\ldots,e_{L}\}\) _where_ \[e_{\ell}=\big{(}(1,\ell+1),\ldots,(k,\ell+1),(1,\ell),\ldots,(k,\ell)\big{)}\] _(each edge is of multiplicity one) with the convention_ \((i,L+1):=(i,1)\) _for all_ \(i\)_,_ * _for all_ \(\ell\in[L]\)_, we have_ \(\sigma(e_{\ell})=\sigma_{\ell},\) _and_ \(\varepsilon(e_{\ell})=\varepsilon_{\ell}.\)_
The hypergraph \(T^{\varepsilon_{1},\ldots,\varepsilon_{L}}_{\sigma_{1},\ldots,\sigma_{L}}\) consists in a strip with \(L\) successive hyperedges, see the left panel of Figure 1. We then have in particular for a \({}^{*}\)-test hypergraph of the form \(T=T^{\varepsilon_{1},\ldots,\varepsilon_{L}}_{\sigma_{1},\ldots,\sigma_{L}}\)
\[\Phi_{N}\big{[}M^{\varepsilon_{1}}_{N,\sigma_{1}}\cdots M^{\varepsilon_{L}}_{ N,\sigma_{L}}\big{]}=\mathbb{E}\left[\frac{1}{N^{k}}\mathrm{Tr}\big{[}T^{ \varepsilon_{1},\ldots,\varepsilon_{L}}_{\sigma_{1},\ldots,\sigma_{L}}(M_{N}) \big{]}\right], \tag{5.3}\]
showing how we can encode the \({}^{*}\)-moments of the flattenings of \(M_{N}\) using hypergraphs.
We now define a modification of the trace of \({}^{*}\)-test hypergraphs.
**Definition 5.3**.: _The injective trace of a \({}^{*}\)-test hypergraph \(T\) in \(M_{N}\), denoted \(\mathrm{Tr}^{0}\big{[}T(M_{N})\big{]}\), is defined as \(\mathrm{Tr}\big{[}T(M_{N})\big{]}\) in (5.2) but with the summation restricted to the set of injective maps \(j:V\to[N]\)._
Figure 1: Representations of \({}^{*}\)-test hypergraphs for \(k=3\). Each hyperedge is pictured as a square with its \(k\) outputs represented on an edge in clockwise order, and its \(k\) inputs in the opposite face in anti-clockwise order. The links between vertices represent identifications. The symbols \((\sigma_{i},\epsilon_{j})\) indicate the labels of the hyperedges. Left: the graph \(T^{\varepsilon_{1},\ldots,\varepsilon_{4}}_{\sigma_{1},\ldots,\sigma_{4}}\). Right: two graphs that are commented in Section 5.3.
This functional \(\mathrm{Tr}^{0}\) is related to the trace of \({}^{*}\)-test hypergraphs thanks to Lemma 5.5, which requires the following definition.
**Definition 5.4**.: _Let \(T=(V,E,\sigma,\varepsilon)\) be a \({}^{*}\)-test hypergraph. We denote by \(\mathcal{P}(V)\) the set of partitions of the vertex set \(V\). Then for any \(\pi\in\mathcal{P}(V)\), we define the \({}^{*}\)-test hypergraph_
\[T^{\pi}=(V^{\pi},E^{\pi},\sigma^{\pi},\varepsilon^{\pi}),\]
_called the quotient of \(T\) by \(\pi\), by_
* \(V^{\pi}=\pi\)_, i.e. a vertex is a block of_ \(\pi\)_,_
* _each hyperedge_ \(e_{\ell}=(i_{s})_{s\in[2k]}\in E\) _induces a hyperedge_ \(e_{\ell}^{\pi}=(B_{s})_{s\in[2k]}\)_, where_ \(B_{s}\) _is the block of_ \(\pi\) _containing_ \(i_{s}\) _for all_ \(s\in[2k]\)_, and the labels of_ \(e_{\ell}^{\pi}\) _are_ \(\sigma^{\pi}(e_{\ell}^{\pi})=\sigma(e_{\ell})\) _and_ \(\varepsilon^{\pi}(e_{\ell}^{\pi})=\varepsilon(e_{\ell})\)_._
Note that \(T^{\pi}\) can possibly have multiple hyperedges even if \(T\) does not have any, as in the top right picture of Figure 1. It also can have degenerated hyperedges \((B_{s})_{s\in[2k]}\) where \(B_{s}=B_{s^{\prime}}\) for some \(s\neq s^{\prime}\) in \([2k]\) (see Figure 2).
**Lemma 5.5**.: _For any \({}^{*}\)-test hypergraph \(T\), we have_
\[\mathrm{Tr}\big{[}T(M_{N})\big{]}=\sum_{\pi\in\mathcal{P}(V)}\mathrm{Tr}^{0} \big{[}T^{\pi}(M_{N})\big{]}.\]
Since any \({}^{*}\)-moment (5.1) can be written as the trace of a \({}^{*}\)-test hypergraph by (5.3), Lemma 5.5 implies that any \({}^{*}\)-moment is a finite sum of normalized injective traces of the form (5.4). The interest of this formulation is that the computation of injective traces is quite straightforward, and transforms the computation into a combinatorial problem.
Proof.: Let \(j:V\to[N]\) be a map. We denote by \(\ker j\) the partition of \(V\) such that \(v\sim_{\ker j}v^{\prime}\) whenever \(j(v)=j(v^{\prime})\). One can write
\[\mathrm{Tr}\big{[}T(M_{N})\big{]}=\sum_{\pi\in\mathcal{P}(V)}\left(\sum_{ \begin{subarray}{c}j:V\to[N]\\ \mathrm{s.t.}\ \ker j=\pi\end{subarray}}\ \prod_{e\in E}\ M_{N,\sigma(e)}^{ \varepsilon(e)}\big{(}j(e)\big{)}\right)\]
The lemma follows since the term in parenthesis equals \(\mathrm{Tr}^{0}\big{[}T^{\pi}(M_{N})\big{]}\).
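For illustration, when \(V=\{v_{1},v_{2}\}\) the set \(\mathcal{P}(V)\) has two elements and the decomposition reduces to the elementary identity, valid for any function \(f\) on \([N]^{2}\),

\[\sum_{j_{1},j_{2}\in[N]}f(j_{1},j_{2})=\sum_{j_{1}\neq j_{2}}f(j_{1},j_{2})+\sum_{j\in[N]}f(j,j),\]

the two terms being the injective traces of the quotient of \(T\) by the discrete partition and by the partition identifying \(v_{1}\) with \(v_{2}\), respectively.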
We also define
\[\tau_{N}^{0}[T^{\pi}] := \mathbb{E}\Big{[}\frac{1}{N^{k}}\mathrm{Tr}^{0}\big{[}T^{\pi}(M_{ N})\big{]}\Big{]}. \tag{5.4}\]
### Expression of injective traces under Hypothesis 1.2
In this section we set up the definitions needed for writing an exact expression of \(\tau^{0}_{N}[T^{\pi}]\) defined in (5.4), for any \({}^{*}\)-test hypergraph \(T=(V,E,\sigma,\varepsilon)\) and any partition \(\pi\) of its vertex set \(V\). We assume \(N\geq|V^{\pi}|\) since otherwise \(\tau^{0}_{N}[T^{\pi}]=0\). In order to regroup terms in our computation, we need the following definition. We use as before the notation \(j(e):=\big{(}j(v_{1}),\ldots,j(v_{2k})\big{)}\) for an hyperedge \(e=(v_{1},\ldots,v_{2k})\) and a function \(j:V^{\pi}\to[N]\).
**Definition 5.6**.: _Let \(T\) be a \({}^{*}\)-test hypergraph and \(\pi\) be a partition of its vertex set._
1. _We say that two hyperedges_ \(e\) _and_ \(e^{\prime}\) _of_ \(T^{\pi}\) _are dependent whenever, for any_ \(j:V^{\pi}\to[N]\) _injective,_ \(M^{\varepsilon(e)}_{N,\sigma(e)}\big{(}j(e)\big{)}\) _and_ \(M^{\varepsilon(e^{\prime})}_{N,\sigma(e^{\prime})}\big{(}j(e^{\prime})\big{)}\) _are either the same random variable or are complex conjugate of each other. We denote by_ \(\hat{E}^{\pi}\) _the set of equivalent classes of hyperedges for the relation of dependence._
2. _For each class of dependence_ \(\hat{e}\) _in_ \(\hat{E}^{\pi}\)_, we denote by_ \(m(\hat{e})\) _and_ \(n(\hat{e})\) _the number of hyperedges_ \(e\) _of_ \(T^{\pi}\) _in_ \(\hat{e}\) _such that_ \(\varepsilon(e)=1\) _and_ \(\varepsilon(e)=*\) _respectively. Let_ \(x\) _be distributed as the entries of_ \(M_{N}\)_. Then we call weight of_ \(T^{\pi}\) _the quantity_ \[\omega_{N}[T^{\pi}] := \prod_{\hat{e}\in\hat{E}^{\pi}}N^{k}\times\mathbb{E}\big{[}x^{m( \hat{e})}\overline{x}^{n(\hat{e})}\big{]},\] (5.5)
Note that by independence of the entries of \(M_{N}\), the notion of dependence does not depend on the injective map \(j\).
**Remark 5.7**.: _Let \(e\) be an hyperedge, and assume that its vertices in \(T^{\pi}\) are pair-wise distinct. Then under Hypothesis 1.2 two entries \(M^{\varepsilon(e)}_{N,\sigma(e)}\big{(}j(e)\big{)}\) and \(M^{\varepsilon(e^{\prime})}_{N,\sigma(e^{\prime})}\big{(}j(e^{\prime})\big{)}\) are independent if and only if their covariance is zero. From the computation of covariances of Section 4, the hyperedges corresponding to these entries belong to a same class if \(\varepsilon=\varepsilon^{\prime}\) and \(\sigma=\sigma^{\prime}\), or \(\varepsilon\neq\varepsilon^{\prime}\) and \(\sigma=\tau\sigma^{\prime}\). Nonetheless if the vertices of the hyperedge are not distinct this is no longer true._
**Lemma 5.8**.: _For any \({}^{*}\)-test hypergraph \(T\) and any partition \(\pi\) of its vertices, we have_
\[\tau^{0}_{N}[T^{\pi}] = N^{-k-k|\hat{E}^{\pi}|}\frac{N!}{(N-|V^{\pi}|)!}\omega_{N}[T^{ \pi}], \tag{5.6}\]
_where \(\omega_{N}[T^{\pi}]\) defined in (5.5) is bounded, \(\omega_{N}[T^{\pi}]=0\) if there is an equivalence class of dependence with a single element, and \(\omega_{N}[T^{\pi}]\underset{N\to\infty}{\longrightarrow}0\) if there is an equivalence class of dependence of cardinality different from 2._
Proof.: Since \(M_{N}\) has i.i.d. entries, the definition of the injective trace gives (with the convention \(0!=1\))

\[\tau_{N}^{0}[T^{\pi}] = \frac{1}{N^{k}}\sum_{\begin{subarray}{c}j:V^{\pi}\rightarrow[N]\\ \text{injective}\end{subarray}}\mathbb{E}\Big{[}\prod_{e\in E^{\pi}}\ M_{N,\sigma(e)}^{\varepsilon(e)}\big{(}j(e)\big{)}\Big{]} = N^{-k}\frac{N!}{(N-|V^{\pi}|)!}\,\mathbb{E}\Big{[}\prod_{e\in E^{\pi}}\ M_{N,\sigma(e)}^{\varepsilon(e)}\big{(}j(e)\big{)}\Big{]} \tag{5.7}\]
for any injective map \(j:V^{\pi}\rightarrow[N]\); the value of the expectation is independent of the choice of \(j\). Two hyperedges are dependent whenever they contribute to the same entry in formula (5.7), so the expectation factorizes over the classes of dependence \(\hat{E}^{\pi}\).

Moreover, since the entries of \(M_{N}\) are i.i.d., the expectations in the right-hand side above do not depend on the entry \(j(e)\): it can be replaced by the entry \((\mathbf{1})\) without changing the value of the expectations. This gives the expected formula. The rest of the lemma is a consequence of Hypothesis 1.2.
The set \(\hat{E}^{\pi}\) forms a partition of the set of hyperedges that depends in an intricate way on \(\pi\), \(\sigma\) and \(\varepsilon\). We will bypass the dependence on \(\sigma\) and \(\varepsilon\) by introducing another graph. After a combinatorial analysis, it will allow us to assume that \(\hat{E}^{\pi}\) is a pair partition before we need to compute \(\omega_{N}[T^{\pi}]\), relating our problem to the computation of the \(\mathfrak{S}_{k}\)-covariances studied earlier.
### Important examples
We consider the two \({}^{*}\)-test hypergraphs \(T^{\pi}\) from the rightmost pictures of Figure 1. We propose to compute explicitly \(\tau_{N}^{0}[T^{\pi}]\) and its limit for each of these graphs.
#### 5.3.1 A case with no twisting
We denote by \(T_{1}\) the \({}^{*}\)-test hypergraph of the top rightmost picture of Figure 1, namely it is the quotient \(T^{\pi}\) of \(T=T_{\sigma_{1},\ldots,\sigma_{4}}^{\varepsilon_{1},\ldots,\varepsilon_{4}}\) with \(k=3\) for the partition \(\pi\) that identifies the \(i\)-th output of the second hyperedge with the \(i\)-th input of the third one for \(i=1,2,3\). Since the second and third hyperedges of \(T_{1}\) share the same vertices and the same holds for the first and fourth one, each of these pairs may form either a single class of dependence (consisting in two hyperedges) or two classes (consisting in single hyperedges). This depends
on the values of the \(\sigma_{i}\)'s and \(\varepsilon_{i}\)'s. If there is a class of dependence formed by a single element, then \(\tau_{N}^{0}[T_{1}]=0\) (Lemma 5.8).
By Remark 5.7, the second and the third hyperedges of \(T_{1}\) belong to a same class if and only if one of the following dependence conditions is satisfied: either \(\varepsilon_{2}\neq\varepsilon_{3}\) and \(\sigma_{2}=\sigma_{3}\), or \(\varepsilon_{2}=\varepsilon_{3}\) and \(\sigma_{2}=\tau\sigma_{3}\), where we recall that \(\tau\) is defined as in Theorem 2.15
\[\tau(i) : \left\{\begin{array}{ccc}i\in[k]&\mapsto&i+k\in[2k]\setminus[k] \\ i\in[2k]\setminus[k]&\mapsto&i-k\in[k].\end{array}\right.\]
Note the formal difference with Remark 5.7, since we must take into account the way the two hyperedges are identified: the inputs of one are identified with the outputs of the other one. Using the notation \([1]=0\) and \([*]=1\), we shall write this dependence condition in short \(\tau^{[\varepsilon_{2}]}\sigma_{2}=\tau^{1+[\varepsilon_{3}]}\sigma_{3}\). A similar dependence condition holds for the first and fourth hyperedges.
Assume that these dependence conditions are satisfied, so \(T_{1}\) has \(|\hat{E}^{\pi}|=2\) classes of dependence. Note that \(T_{1}\) has \(|V^{\pi}|=9\) vertices. Hence we get
\[N^{-k-k|\hat{E}^{\pi}|}\frac{N!}{(N-|V^{\pi}|)!}=N^{-9}\frac{N!}{(N-9)!}.\]
It remains to write the expression of \(\omega_{N}[T_{1}]\). Let \(x\) be distributed as the entries of \(M_{N}\). Denoting \(x^{*}:=\bar{x}\) the usual complex conjugation, the definition of \(\omega_{N}\) and the above computation finally yields the expression
\[\tau_{N}^{0}[T_{1}] = N^{-9}\frac{N!}{(N-9)!}\times\delta_{\tau^{[\varepsilon_{2}]}\sigma_{2},\tau^{1+[\varepsilon_{3}]}\sigma_{3}}\delta_{\tau^{[\varepsilon_{1}]}\sigma_{1},\tau^{1+[\varepsilon_{4}]}\sigma_{4}}\times N^{k}\mathbb{E}[x^{\varepsilon_{2}}x^{\varepsilon_{3}}]\times N^{k}\mathbb{E}[x^{\varepsilon_{1}}x^{\varepsilon_{4}}],\]
where \(\delta\) stands for the usual Kronecker symbol. Since \(N^{-k-k|\hat{E}^{\pi}|}\frac{N!}{(N-|V^{\pi}|)!}\) converges to one, Hypothesis 1.2 implies that \(\tau_{N}^{0}[T_{1}]\) converges when \(N\) tends to infinity. Denoting by \((c,c^{\prime})\) the parameter of \(M_{N}\), we get the expression of the limit using the formula
\[\lim_{N\to\infty}N^{k}\mathbb{E}[x^{\varepsilon}x^{\varepsilon^{\prime}}]=\left\{\begin{array}{ccc}c&\mbox{if}&\varepsilon\neq\varepsilon^{\prime},\\ c^{\prime}&\mbox{if}&\varepsilon=\varepsilon^{\prime}=1,\\ \bar{c}^{\prime}&\mbox{if}&\varepsilon=\varepsilon^{\prime}=*.\end{array}\right.\]
Finally, note that the computation of covariances of Section 4 shows that
\[\tau_{N}^{0}[T_{1}]=\Phi_{N}(M_{N,\sigma_{2}}^{\varepsilon_{2}}M_{N,\sigma_{3 }}^{\varepsilon_{3}})\times\Phi_{N}(M_{N,\sigma_{1}}^{\varepsilon_{1}}M_{N, \sigma_{4}}^{\varepsilon_{4}})+o(1).\]
#### 5.3.2 A case with twistings
We now denote by \(T_{2}\) the \({}^{*}\)-test hypergraph of the right bottom picture of Figure 1, namely it is the quotient \(T^{\pi}\) of \(T=T_{\sigma_{1},\ldots,\sigma_{6}}^{\varepsilon_{1},\ldots,\varepsilon_{6}}\) for the partition \(\pi\) that does the following identifications:
* for \(\eta_{1}\) the transposition exchanging \(1\) and \(2\), the \(i\)-th output of the third hyperedge is identified with the \(\eta_{1}(i)\)-th input of the fourth one,
* for \(\eta_{2}\) the cycle \((1,3,2)\), the \(i\)-th output of the second hyperedge is identified with the \(\eta_{2}(i)\)-th input of the fifth one.
The same reasoning as before shows that there are up to three classes of dependence, the pairs of indices of possible dependent hyperedges being \(\{3,4\},\{2,5\}\) and \(\{1,6\}\). The description of the dependence classes now involves the twisting induced by the permutations \(\eta_{1}\) and \(\eta_{2}\). Using the notation \([1]=0\) and \([*]=1\), we observe from the figure that each pair of indices \(\{3,4\},\{2,5\}\) and \(\{1,6\}\) corresponds to hyperedges of \(T_{2}\) in a same class of dependence whenever the following dependence conditions are satisfied:
\[\left\{\begin{array}{rcl}\tau^{[\varepsilon_{3}]}\sigma_{3}&=&(\eta_{1}\sqcup\mathrm{id})\tau^{1+[\varepsilon_{4}]}\sigma_{4},\\ \tau^{[\varepsilon_{2}]}\sigma_{2}&=&(\eta_{2}\sqcup\eta_{1}^{-1})\tau^{1+[\varepsilon_{5}]}\sigma_{5},\\ \tau^{[\varepsilon_{1}]}\sigma_{1}&=&(\mathrm{id}\sqcup\eta_{2}^{-1})\tau^{1+[\varepsilon_{6}]}\sigma_{6}.\end{array}\right.\]
Indeed, let us consider the first formula. Since the role of the \(\varepsilon\)-indices is clear from the previous example, let us assume \((\varepsilon_{3},\varepsilon_{4})=(\varepsilon_{2},\varepsilon_{5})=(\varepsilon_{1},\varepsilon_{6})=(1,*)\) for a simplification that does not affect the reasoning. Therefore the condition reads \(\sigma_{3}=(\eta_{1}\sqcup\mathrm{id})\sigma_{4}\). It is the consequence of Remark 5.7, the fact that by construction the \(i\)-th input of the third hyperedge is the \(i\)-th output of the fourth one, and the identification of vertices in the first item of the enumeration at the beginning of this subsection, namely that the \(i\)-th output of the third hyperedge is identified with the \(\eta_{1}(i)\)-th input of the fourth one.
The second formula now reads \(\sigma_{2}=(\eta_{2}\sqcup\eta_{1}^{-1})\sigma_{5}\), which follows from the second item of the above enumeration, and the fact that the first item implies that the \(i\)-th output of the second hyperedge is identified by \(\pi\) with the \(\eta_{1}^{-1}(i)\)-th input of the fifth one. Similarly, the last formula reads \(\sigma_{1}=(\mathrm{id}\sqcup\eta_{2}^{-1})\sigma_{6}\) with the same arguments, the factor \(\mathrm{id}\) coming from the construction and the factor \(\eta_{2}^{-1}\) from the second itemized identification condition.
When the dependence conditions are satisfied, there are \(|\hat{E}^{\pi}|=3\) classes of dependent hyperedges, and \(T_{2}\) has \(|V^{\pi}|=12\) vertices. With the same computation of the weights as in the previous subsection, Lemma 5.8 yields
\[\tau_{N}^{0}[T_{2}] = \delta_{\tau^{[\varepsilon_{3}]}\sigma_{3},(\eta_{1}\sqcup\mathrm{id})\tau^{1+[\varepsilon_{4}]}\sigma_{4}}\,\delta_{\tau^{[\varepsilon_{2}]}\sigma_{2},(\eta_{2}\sqcup\eta_{1}^{-1})\tau^{1+[\varepsilon_{5}]}\sigma_{5}}\,\delta_{\tau^{[\varepsilon_{1}]}\sigma_{1},(\mathrm{id}\sqcup\eta_{2}^{-1})\tau^{1+[\varepsilon_{6}]}\sigma_{6}}\times N^{-12}\frac{N!}{(N-12)!}\times N^{k}\mathbb{E}[x^{\varepsilon_{3}}x^{\varepsilon_{4}}]\times N^{k}\mathbb{E}[x^{\varepsilon_{2}}x^{\varepsilon_{5}}]\times N^{k}\mathbb{E}[x^{\varepsilon_{1}}x^{\varepsilon_{6}}].\]
Hence under Hypothesis 1.2, \(\tau^{0}_{N}[T_{2}]\) has a limit when \(N\) tends to infinity. The computation of covariances made in the dedicated section shows

\[\tau^{0}_{N}[T_{2}] = \Phi_{N}(M^{\varepsilon_{3}}_{N,\sigma_{3}}M^{\varepsilon_{4}}_{N,(\eta_{1}\sqcup\mathrm{id})\sigma_{4}})\,\Phi_{N}(M^{\varepsilon_{2}}_{N,\sigma_{2}}M^{\varepsilon_{5}}_{N,(\eta_{2}\sqcup\eta_{1}^{-1})\sigma_{5}})\,\Phi_{N}(M^{\varepsilon_{1}}_{N,\sigma_{1}}M^{\varepsilon_{6}}_{N,(\mathrm{id}\sqcup\eta_{2}^{-1})\sigma_{6}})+o(1).\]
Lemma 2.16 implies that
\[\Phi_{N}(M^{\varepsilon}_{N,\sigma}M^{\varepsilon^{\prime}}_{N,(\mu\sqcup\mu^{\prime})\sigma^{\prime}})=\left\{\begin{array}{ll}\Phi_{N}(M^{\varepsilon}_{N,\sigma}U_{N,\mu}M_{N,\sigma^{\prime}}U^{*}_{N,\mu^{\prime}})&\mbox{if}\quad\varepsilon^{\prime}=1,\\ \Phi_{N}(M^{\varepsilon}_{N,\sigma}U_{N,\mu^{\prime}}M^{*}_{N,\sigma^{\prime}}U^{*}_{N,\mu})&\mbox{if}\quad\varepsilon^{\prime}=*.\end{array}\right.\]
This is a first indication of the interest of introducing the \(\mathfrak{S}_{k}\)-probability setting.
### Convergence of injective traces
In Lemma 5.8 we have established an exact formula for \(\tau^{0}_{N}[T^{\pi}]\) involving the normalization factor
\[N^{-k-k|\hat{E}^{\pi}|}\frac{N!}{(N-|V^{\pi}|)!}=\big{(}1+o(1)\big{)}\times N^{-k-k|\hat{E}^{\pi}|+|V^{\pi}|},\]
where we recall that \(|\hat{E}^{\pi}|\) is the number of classes of dependent hyperedges of \(T^{\pi}\) and \(|V^{\pi}|\) is the number of vertices of \(T^{\pi}\).
In this section, we assume that \(T=T^{\varepsilon_{1},\ldots,\varepsilon_{L}}_{\sigma_{1},\ldots,\sigma_{L}}\) is a \({}^{*}\)-test hypergraph as in Definition 5.2 encoding a \({}^{*}\)-moment, and assume that the classes of dependent hyperedges of \(T^{\pi}\) have at least two elements (this is the situation of interest according to Lemma 5.8). We prove that \(-k-k|\hat{E}^{\pi}|+|V^{\pi}|\leq 0\) and learn important properties on the case of equality. This allows us to deduce the convergence of the \(\mathfrak{S}_{k}\)-distribution and prepare the proof of the convergence toward a \(\mathfrak{S}_{k}\)-circular system.
The arguments use the following types of "simplified" hypergraphs.
**Definition 5.9**.:
1. _A simple undirected hypergraph is a pair_ \((V,E)\) _where_ * \(V\) _is a non-empty set;_ * \(E\) _is a set of subsets of_ \(V\) _of cardinality at most_ \(2k\)_._
2. _For any partition_ \(\pi\) _of_ \(V\)_, we set_ \(\bar{T}^{\pi}=(V^{\pi},\bar{E}^{\pi})\)_, and call skeleton of_ \(T^{\pi}\) _the undirected hypergraph_ \((V^{\pi},\bar{E}^{\pi})\) _where_ \(\bar{E}^{\pi}\) _is the set of all subsets of indices_ \(\{v_{1},\ldots,v_{2k}\}\) _such that_ \((v_{1},\ldots,v_{2k})\in E^{\pi}\)_. The fibre of_ \(\bar{e}\in\bar{E}^{\pi}\) _is the set of all_ \((v_{1},\ldots,v_{2k})\in E^{\pi}\) _such that_ \(\bar{e}=\{v_{1},\ldots,v_{2k}\}\)_._
We denote \(q(\pi)=-k-k|\bar{E}^{\pi}|+|V^{\pi}|\), so that we can write
\[-k-k|\hat{E}^{\pi}|+|V^{\pi}|=q(\pi)+k\big{(}|\bar{E}^{\pi}|-|\hat{E}^{\pi}|\big{)}. \tag{5.8}\]
Note that the skeleton of \(T^{\pi}\) does not depend on the labelings \(\sigma\) and \(\varepsilon\). If two hyperedges of \(T^{\pi}\) belong to a same class of dependence, then they are fibres of a same hyperedge in the skeleton of \(T^{\pi}\). Hence \(|\bar{E}^{\pi}|\leq|\hat{E}^{\pi}|\) with equality if and only if the fibres of the hyperedges of \(\bar{T}^{\pi}\) coincide with the classes of dependence of \(T^{\pi}\).
We prove that \(q(\pi)\leq 0\) for any quotient \(T^{\pi}\). The idea is to consider the evolution of the combinatorial quantities under interest (5.8) for the sequence of skeletons generated by the first \(\ell\)-th hyperedges of \(T^{\pi}\) while \(\ell\) increases (see Figure 2). More precisely, for any \(\ell=1,\ldots,L-1\), let \(S_{\ell}=(V_{\ell},E_{\ell})\) be the \({}^{*}\)-test hypergraph consisting in an open strip with \(\ell\) successive hyperedges and defined as follows

* \(V_{\ell}=\{(1,1),\ldots,(1,\ell+1),\ldots,(k,1),\ldots,(k,\ell+1)\}\),
* \(E_{\ell}=\{e_{1},\ldots,e_{\ell}\}\) where \[e_{i}=\big{(}(1,i+1),\ldots,(k,i+1),(1,i),\ldots,(k,i)\big{)}\] (each edge is of multiplicity one),
* for all \(i=1,\ldots,\ell\), we have \(\sigma(e_{i})=\sigma_{i}\), and \(\varepsilon(e_{i})=\varepsilon_{i}\).
Let us also denote \(S_{L}=(V_{L},E_{L}):=T\), and let \(S_{0}=(V_{0},\emptyset)\) be the hypergraph with \(k\) isolated vertices \((1,1),\ldots,(k,1)\) and no hyperedge. For each \(\ell=0,\ldots,L\), the partition \(\pi\) induces a partition on \(V_{\ell}\) (with a slight abuse of notation, we still denote it \(\pi\)) and so a quotient \(S_{\ell}^{\pi}=(V_{\ell}^{\pi},E_{\ell}^{\pi})\) of \(S_{\ell}\). For
Figure 2: Picture 11: a \({}^{*}\)-test hypergraph \(T^{\pi}\), where the indices from \(1\) to \(10\) indicate the order of the hyperedges. Pictures \(1\) to \(10\) represent \(S_{\ell}^{\pi}\) as \(\ell\) increases from \(1\) to \(10\), with the last four steps regrouped for conciseness. The dotted lines and vertices represent the \(i\)-th hyperedge while the continuous lines represent \(S_{i-1}^{\pi}\).
each \(\ell=0,\ldots,L\), we set
\[q(\pi,\ell)=-k-k|\bar{E}_{\ell}^{\pi}|+|V_{\ell}^{\pi}|, \tag{5.9}\]
where \((V_{\ell}^{\pi},\bar{E}_{\ell}^{\pi})\) is the skeleton of \(S_{\ell}^{\pi}\) (Definition 5.9).
**Lemma 5.10**.: _For any \({}^{*}\)-hypergraph of the form \(T=T_{\sigma_{1},\ldots,\sigma_{L}}^{\varepsilon_{1},\ldots,\varepsilon_{L}}\) and for any partition \(\pi\) of its vertices, the sequence \(q(\pi,\ell)_{\ell=0,\ldots,L}\) is non-increasing and it satisfies \(q(\pi,0)\leq 0\)._
The lemma clearly implies that \(q(\pi)=q(\pi,L)\leq 0\), as expected.
Proof.: The graph \(S_{0}^{\pi}\) is the quotient of \(k\) isolated vertices (the outputs of the first hyperedge \(e_{1}\)) so it consists of a number \(a_{0}\in[k]\) of vertices. Hence we have \(q(\pi,0)=-k+a_{0}\leq 0\) with equality if and only if \(a_{0}=k\), i.e. the partition \(\pi\) does not identify different outputs of \(e_{1}\).
We consider the variation of the sequence, namely for \(\ell=1,\ldots,L\)
\[q(\pi,\ell)-q(\pi,\ell-1)=-k\big{(}|\bar{E}_{\ell}|-|\bar{E}_{\ell-1}|\big{)}+ \big{(}|V_{\ell}^{\pi}|-|V_{\ell-1}^{\pi}|\big{)}.\]
For \(\ell=1\), since the skeleton of \(S_{1}^{\pi}\) has one hyperedge and \(S_{0}^{\pi}\) has none, then \(|\bar{E}_{1}|-|\bar{E}_{0}|=1\). Moreover, the vertices of \(S_{1}\) that are not in \(S_{0}\) (the \(k\) inputs of \(e_{1}\)) can form up to \(k\) new vertices in \(S_{1}^{\pi}\). Setting \(a_{1}:=|V_{1}^{\pi}|-|V_{0}^{\pi}|\in\{0,\ldots,k\}\), we then have \(q(\pi,1)-q(\pi,0)=-k+a_{1}\leq 0\) with equality if and only if \(a_{1}=k\), i.e. the partition \(\pi\) does not identify different inputs of \(e_{1}\), and does not identify an input of \(e_{1}\) with a vertex considered earlier (at this step, these are the outputs of \(e_{1}\)).
We now add the other hyperedges \(e_{2},\ldots,e_{L-1}\). For each hyperedge \(e_{\ell}\), two cases occur:
1. Either it is of multiplicity one in the quotient \(S_{\ell}^{\pi}\). In this case, the reasoning is the same as for \(\ell=1\), namely we have \(|\bar{E}_{\ell}|-|\bar{E}_{\ell-1}|=1\) and \(a_{\ell}:=|V_{\ell}^{\pi}|-|V_{\ell-1}^{\pi}|\in\{0,\ldots,k\}\), so \(q(\pi,\ell)-q(\pi,\ell-1)=-k+a_{\ell}\leq 0\). In the next section, we refer to this as a growth step.
2. Or \(e_{\ell}\) is associated with another hyperedge to form a multiple hyperedge in \(S_{\ell}^{\pi}\). This implies that the skeletons of \(S_{\ell}^{\pi}\) and \(S_{\ell-1}^{\pi}\) are the same, and so \(q(\pi,\ell)=q(\pi,\ell-1)\). In the next section, we refer to this as a backtrack step.
By induction, this proves that \(q(\pi,\ell)\leq q(\pi,\ell-1)\) for all \(\ell=1,\ldots,L-1\).
Finally we add the last hyperedge. The vertex sets of \(T\) and \(S_{L-1}\) are the same, so \(|V_{L}^{\pi}|-|V_{L-1}^{\pi}|=0\). Either \(e_{L}\) is simple in \(T^{\pi}\), in which case \(|\bar{E}_{L}^{\pi}|-|\bar{E}_{L-1}^{\pi}|=1\) and so \(q(\pi,L)=q(\pi,L-1)-k\), or it is multiple in \(T^{\pi}\), in which case \(q(\pi,L)=q(\pi,L-1)\).
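For illustration, consider the graph \(T_{1}\) of Section 5.3.1, for which \(k=3\) and \(L=4\): there \(a_{0}=a_{1}=a_{2}=3\), the steps \(\ell=1,2\) are growth steps with \(q(\pi,\ell)=0\), and the steps \(\ell=3,4\) are backtrack steps, so that \(q(\pi)=q(\pi,4)=0\). This is in accordance with the exponent

\[-k-k|\hat{E}^{\pi}|+|V^{\pi}|=-3-6+9=0\]

found in that example.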
Now that we have proved the main result of this section, we can deduce easily the following convergence.
**Corollary 5.11**.: _The collection of flattenings of a random tensor satisfying Hypothesis 1.2 converges in \(\mathfrak{S}_{k}\)-distribution._
Proof.: We shall prove the convergence of \(\mathcal{E}_{N}\big{[}M_{N,\sigma_{1}}^{\varepsilon_{1}}U_{N,\eta_{1}}\cdots M_ {N,\sigma_{L}}^{\varepsilon_{L}}U_{N,\eta_{L}}\big{]}\) for an arbitrary choice of \(L\geq 1\), \(\sigma_{\ell}\in\mathfrak{S}_{2k}\), \(\varepsilon_{\ell}\in\{1,*\}\) and \(\eta_{\ell}\in\mathfrak{S}_{k}\) for all \(\ell\in[L]\). The definition of \(\mathcal{E}_{N}\) and Lemma 2.16 yields
\[\mathcal{E}_{N}\big{[}M_{N,\sigma_{1}}^{\varepsilon_{1}}U_{N, \eta_{1}}\cdots M_{N,\sigma_{L}}^{\varepsilon_{L}}U_{N,\eta_{L}}\big{]}\] \[= \sum_{\eta\in\mathfrak{S}_{k}}\Phi_{N}\big{[}M_{N,\sigma_{1}}^{ \varepsilon_{1}}U_{N,\eta_{1}}\cdots M_{N,\sigma_{L}}^{\varepsilon_{L}}U_{N, \eta_{L}}U_{N,\eta^{-1}}\big{]}U_{N,\eta}\] \[= \sum_{\eta\in\mathfrak{S}_{k}}\Phi_{N}\big{[}M_{N,\tilde{\sigma} _{1}}^{\varepsilon_{1}}\cdots M_{N,\tilde{\sigma}_{L}}^{\varepsilon_{L}}\big{]} U_{N,\eta},\]
where for \(\ell=1,\ldots,L-1\), \(\tilde{\sigma}_{\ell}=(\operatorname{id}\sqcup\eta_{\ell}^{-1})\sigma_{\ell}\) if \(\varepsilon_{\ell}=1\) and \(\tilde{\sigma}_{\ell}=(\eta_{\ell}^{-1}\sqcup\operatorname{id})\sigma_{\ell}\) if \(\varepsilon_{\ell}=*\), and for \(\ell=L\) we have \(\tilde{\sigma}_{L}=(\operatorname{id}\sqcup\eta\eta_{L}^{-1})\sigma_{L}\) if \(\varepsilon_{L}=1\) and \(\tilde{\sigma}_{L}=(\eta\eta_{L}^{-1}\sqcup\operatorname{id})\sigma_{L}\) if \(\varepsilon_{L}=*\).
We hence consider the \({}^{*}\)-test hypergraph \(T=T_{\tilde{\sigma}_{1},\ldots,\tilde{\sigma}_{L}}^{\varepsilon_{1},\ldots,\varepsilon_{L}}\). We have
\[\Phi_{N}\big{[}M_{N,\tilde{\sigma}_{1}}^{\varepsilon_{1}}\cdots M _{N,\tilde{\sigma}_{L}}^{\varepsilon_{L}}\big{]} = \sum_{\pi\in\mathcal{P}(V)}\tau_{N}^{0}\big{[}T^{\pi}\big{]}.\]
Lemmas 5.8 and 5.10 prove that each term in the above sum converges. Hence we get the expected convergence and a formula for the limit
\[\Phi_{N}\big{[}M_{N,\tilde{\sigma}_{1}}^{\varepsilon_{1}}\cdots M_{N,\tilde{\sigma}_{L}}^{\varepsilon_{L}}\big{]} \underset{N\to\infty}{\longrightarrow}\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(V)\text{ s.t.}\\ -k-k|\hat{E}^{\pi}|+|V^{\pi}|=0\end{subarray}}\lim_{N\to\infty}\omega_{N}[T^{\pi}]. \tag{5.10}\]
Let \(\mathcal{A}\) be the free \(\mathfrak{S}_{k}^{*}\)-algebra generated by a family \((m_{\sigma})_{\sigma\in\mathfrak{S}_{2k}}\) with relations \(u_{\eta}m_{\sigma}u_{\eta^{\prime}}^{*}=m_{(\eta\sqcup\eta^{\prime})\sigma}\). We equip \(\mathcal{A}\) with the linear map \(\mathcal{E}\) defined by \(\mathcal{E}\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}u_{\eta_{1}}\cdots m_{\sigma_{L}}^{\varepsilon_{L}}u_{\eta_{L}}\big{]}=\lim_{N\to\infty}\mathcal{E}_{N}\big{[}M_{N,\sigma_{1}}^{\varepsilon_{1}}U_{N,\eta_{1}}\cdots M_{N,\sigma_{L}}^{\varepsilon_{L}}U_{N,\eta_{L}}\big{]}\). Therefore the collection of flattenings of \(M_{N}\) converges to the family \(\mathbf{m}=(m_{\sigma})_{\sigma\in\mathfrak{S}_{2k}}\) in \(\mathfrak{S}_{k}^{*}\)-distribution.
### End of the proof
We prove that the limit of the flattenings computed in the previous section is \(\mathfrak{S}_{k}\)-circular, using the proof of Lemma 5.10 and the manipulations explained in the second example of Section 5.3. We first state the following intermediate result, where we denote by \(\operatorname{NC}_{2}(L)\) the set of non-crossing pair partitions of \([L]\). We recall that for a sequence \(\mathcal{L}_{n}\), \(n\geq 1\), where \(\mathcal{L}_{n}\) is a \(n\)-linear map, we define \(\mathcal{L}_{\xi}\) for \(\xi\) a non-crossing partition in Definition-Proposition 2.9.
**Lemma 5.12**.: _Let \(\mathbf{m}=(m_{\sigma})_{\sigma\in\mathfrak{S}_{k}}\) be the limit of the collection of flattenings given in Corollary 5.11. We set as usual \(\phi(a)\) to be the coefficient of
the identity in \(\mathcal{E}(a)\), and we denote \(\mathcal{A}lg^{*}(\mathbf{m})\) the \({}^{*}\)-algebra generated by \(\mathbf{m}\). There exists a collection \(\mathcal{L}_{n}\), \(n\geq 1\) of \(n\)-linear forms \(\mathcal{A}lg^{*}(\mathbf{m})^{n}\to\mathbb{C}\), such that for all \(L\geq 3\), all \(\sigma_{1},\ldots,\sigma_{L}\) in \(\mathfrak{S}_{2k}\) and all \(\varepsilon_{1},\ldots,\varepsilon_{L}\) in \(\{1,*\}\), we have_
\[\phi\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}\cdots m_{\sigma_{L}}^{\varepsilon_{L}}\big{]}\] \[\quad=\quad\sum_{\xi\in\mathrm{NC}_{2}(L)}\mathcal{L}_{\xi\setminus B}\Big{[}m_{\sigma_{1}}^{\varepsilon_{1}},\ldots,m_{\sigma_{i-1}}^{\varepsilon_{i-1}}\mathcal{E}\big{[}m_{\sigma_{i}}^{\varepsilon_{i}}m_{\sigma_{i+1}}^{\varepsilon_{i+1}}\big{]},m_{\sigma_{i+2}}^{\varepsilon_{i+2}},\ldots,m_{\sigma_{L}}^{\varepsilon_{L}}\Big{]},\]
_where \(B=B(\xi)=\{i,i+1\}\) denotes the first interval block of \(\xi\)._
Proof.: Let \(\pi\) be a partition of the vertex set of the \({}^{*}\)-test hypergraph \(T=T_{\sigma_{1},\ldots,\sigma_{L}}^{\varepsilon_{1},\ldots,\varepsilon_{L}}\), and with the notations of Lemma 5.8, assume that \(-k-k|\hat{E}^{\pi}|+|V^{\pi}|=0\). Section 5.4 proves that necessarily the classes of dependence of the graph \(T^{\pi}\) are of cardinality \(2\). Given such a \(\pi\), we denote by \(\xi(\pi)\) the pair partition whose blocks are the pairs \(\{\ell,\ell^{\prime}\}\) of indices such that the hyperedges \(e_{\ell}\) and \(e_{\ell^{\prime}}\) belong to a same class. Denoting by \(\mathcal{P}_{2}(L)\) the set of pair partitions of \([L]\), we set for any \(\xi\in\mathcal{P}_{2}(L)\)
\[\mathcal{M}_{\xi}\Big{[}m_{\sigma_{1}}^{\varepsilon_{1}}\cdots m_{\sigma_{L}}^{\varepsilon_{L}}\Big{]} := \sum_{\begin{subarray}{c}\pi\in\mathcal{P}(V)\text{ s.t.}\\ -k-k|\hat{E}^{\pi}|+|V^{\pi}|=0\\ \text{ and }\xi(\pi)=\xi\end{subarray}}\lim_{N\to\infty}\omega_{N}[T^{\pi}].\]
By our previous computation of the limit, namely (5.10) with \(\eta_{1}=\cdots=\eta_{L}=\mathrm{id}\), we obtained that \(\phi\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}\cdots m_{\sigma_{L}}^{\varepsilon_{L}}\big{]}=\sum_{\xi\in\mathcal{P}_{2}(L)}\mathcal{M}_{\xi}\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}\cdots m_{\sigma_{L}}^{\varepsilon_{L}}\big{]}\). Note in particular that the limit is zero if \(L\) is odd.
Firstly, we shall prove that \(\mathcal{M}_{\xi}\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}\cdots m_{\sigma_{L}}^{\varepsilon_{L}}\big{]}=0\) if \(\xi\) is not a non-crossing partition. Assume \(L\geq 4\) is even. Recall that a pair partition \(\xi\) is non-crossing if and only if there exists an interval block \(B\) such that the partition \(\xi\setminus B\) is non-crossing. Let us prove that \(\xi(\pi)\) satisfies this property when \(-k-k|\hat{E}^{\pi}|+|V^{\pi}|=0\). Recall the proof of Lemma 5.10: if \(-k-k|\hat{E}^{\pi}|+|V^{\pi}|=0\), this means that the difference \(q(\pi,\ell)-q(\pi,\ell-1)\) of the combinatorial quantities defined in (5.9) is zero for all \(\ell=1,\ldots,L\). Let \(i+1\) be the first index such that the \((i+1)\)-th step is a backtrack one. Then by construction \(B=\{i,i+1\}\) is a block of \(\xi(\pi)\). Now removing the block \(B\) from \(\xi(\pi)\) yields the partition \(\tilde{\xi}(\pi)=\xi(\pi)\setminus B\). But the partition \(\tilde{\xi}(\pi)\) is involved in the computation of \(\phi\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}\cdots m_{\sigma_{i-1}}^{\varepsilon_{i-1}}\times m_{\sigma_{i+2}}^{\varepsilon_{i+2}}\cdots m_{\sigma_{L}}^{\varepsilon_{L}}\big{]}\) in the same way \(\xi\) is involved in the computation of the moment under consideration. Hence by induction on \(L\), we get that \(\xi(\pi)\) is a non-crossing partition.
It now remains to prove that for each \(\xi\in\mathcal{P}_{2}(L)\) we have \(\mathcal{M}_{\xi}\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}\cdots m_{\sigma_{L}}^{\varepsilon_{L}}\big{]}=\mathcal{M}_{\xi\setminus B}\Big{[}m_{\sigma_{1}}^{\varepsilon_{1}},\ldots,m_{\sigma_{i-1}}^{\varepsilon_{i-1}}\mathcal{E}\big{[}m_{\sigma_{i}}^{\varepsilon_{i}}m_{\sigma_{i+1}}^{\varepsilon_{i+1}}\big{]},m_{\sigma_{i+2}}^{\varepsilon_{i+2}},\ldots,m_{\sigma_{L}}^{\varepsilon_{L}}\Big{]}\). Again, we come back to the proof of Lemma 5.10. During the first backtrack step, the \(j\)-th input of the hyperedge \(e_{i+1}\) is identified by \(\pi\) with the \(\eta_{\pi}(j)\)-th output of \(e_{i}\), for some twisting permutation \(\eta_{\pi}\) of \([k]\). Therefore this situation is similar
to the second example in Section 5.3.
The weight associated to the block \((i,i+1)\in\xi(\pi)\) with specified twisting given by a permutation \(\eta_{\pi}\) is
\[\lim_{N\to\infty}N^{k}\mathbb{E}[M_{N,\sigma_{i}}^{\varepsilon_{i}}(\mathbf{a}, \mathbf{b})(M_{N,\sigma_{i+1}}^{\varepsilon_{i+1}}U_{N,\eta_{\pi}})((\mathbf{b},\mathbf{a}))],\]
where \((\mathbf{a},\mathbf{b})\) denote the indices of the matrix elements and are all distinct (recall that both \(\mathbf{a},\mathbf{b}\) are \(k\)-tuples and the resulting \(2k\) elements are distinct). It is the weight appearing in the factorization of \(\omega_{N}[T^{\pi}]\) over classes of dependence (see formula (5.5) and the proof of Lemma 5.8). Moreover, since the inputs of the hyperedge associated to \(M_{N,\sigma_{i+2}}^{\varepsilon_{i+2}}\) are identified with the outputs of the one associated to \(M_{N,\sigma_{i+1}}^{\varepsilon_{i+1}}\), the inverse of the twisting permutation appears on input vertices of the hyperedge associated to \(M_{N,\sigma_{i+2}}^{\varepsilon_{i+2}}\) in \(\omega_{N}[T^{\pi}]\) to correct for the introduction of the permutation \(U_{N,\eta_{\pi}}\) above. This induces the identification of the outputs of \(M_{N,\sigma_{i-1}}^{\varepsilon_{i-1}}\) with the inputs of \(M_{N,\sigma_{i+2}}^{\varepsilon_{i+2}}\) through the permutation \(U_{N,\eta_{\pi}}^{*}\). See Figure 3 for illustration. This identification is achieved by introducing \(U_{N,\eta_{\pi}}^{*}\) in front of the factorized weight
\[\lim_{N\to\infty}N^{k}\mathbb{E}[M_{N,\sigma_{i}}^{\varepsilon_{i}}(\mathbf{a},\mathbf{b})(M_{N,\sigma_{i+1}}^{\varepsilon_{i+1}}U_{N,\eta_{\pi}})(\mathbf{ b},\mathbf{a})].\]
Note that partitions \(\pi\) mapping to the same partition \(\xi(\pi)=\xi\) differ only by their induced twisting permutations. Therefore the weight associated to \(B=(i,i+1)\in\xi\) in the expression of \(\mathcal{M}_{\xi}\) is the sum over twisting permutations \(\eta\) of the weights associated to the same block in \(\xi(\pi)\) whose twisting permutation induced by \(\pi\) is \(\eta\). Hence, more formally, the weight of the block \(B\) is
\[\sum_{\eta\in\mathfrak{S}_{k}}\lim_{N\to\infty}N^{k}\mathbb{E}[M_{N,\sigma_{i }}^{\varepsilon_{i}}(\mathbf{a},\mathbf{b})(M_{N,\sigma_{i+1}}^{\varepsilon_{ i+1}}U_{N,\eta})((\mathbf{b},\mathbf{a}))]U_{N,\eta}^{*}.\]
It is straightforward to recognize in the above formula the \(\mathfrak{S}_{k}\)-covariance \(\mathcal{E}[m_{\sigma_{i}}^{\varepsilon_{i}}m_{\sigma_{i+1}}^{\varepsilon_{ i+1}}]\). Therefore, we have shown that
\[\mathcal{M}_{\xi}[m_{\sigma_{1}}^{\varepsilon_{1}},\ldots,m_{ \sigma_{i-1}}^{\varepsilon_{i-1}},m_{\sigma_{i}}^{\varepsilon_{i}},m_{\sigma _{i+1}}^{\varepsilon_{i+1}},m_{\sigma_{i+2}}^{\varepsilon_{i+2}},\ldots,m_{ \sigma_{L}}^{\varepsilon_{L}}]=\\ \mathcal{M}_{\xi\setminus B}[m_{\sigma_{1}}^{\varepsilon_{1}}, \ldots,m_{\sigma_{i-1}}^{\varepsilon_{i-1}}\mathcal{E}[m_{\sigma_{i}}^{ \varepsilon_{i}}m_{\sigma_{i+1}}^{\varepsilon_{i+1}}],m_{\sigma_{i+2}}^{ \varepsilon_{i+2}},\ldots,m_{\sigma_{L}}^{\varepsilon_{L}}].\]
We can now finish the proof of our main theorem. As in the previous section, we have
\[\mathcal{E}\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}u_{\eta_{1}} \cdots m_{\sigma_{L}}^{\varepsilon_{L}}u_{\eta_{L}}\big{]} = \sum_{\eta\in\mathfrak{S}_{k}}\Phi\big{[}m_{\tilde{\sigma}_{1}}^{ \varepsilon_{1}}\cdots m_{\tilde{\sigma}_{L}}^{\varepsilon_{L}}\big{]}u_{\eta},\]
Figure 3: Top: for \(k=5\), we represent a quotient \(T^{\pi}\) of a graph \(T=T^{\varepsilon_{1},\ldots,\varepsilon_{8}}_{\sigma_{1},\ldots,\sigma_{8}}\) for a partition \(\pi\) that possibly contributes in the limit \(\tau^{0}[T^{\pi}]\). We have \(\xi(\pi)=\big{\{}\{1,8\},\{2,5\},\{3,4\},\{6,7\}\big{\}}\), and, in cycle decomposition, \(\eta_{1}=\{(1,2),(3,5,4)\}\), \(\eta_{2}=\{(1,2,4),(3,5)\}\) and \(\eta_{3}=\{(1),(2,5),(3),(4)\}\). The black lines represent identifications between vertices. We emphasize the hyperedges forming the first interval block \(\{3,4\}\) of \(\xi(\pi)\) and their identifications with heavier lines. Bottom left: we consider the two hyperedges \(\{3,4\}\) solely and identify the \(j\)-th input of \(4\) with the \(\eta_{1}^{-1}(j)\)-th output of \(3\). Bottom right: we represent the quotient \(T^{\prime\pi^{\prime}}\) of the graph \(T^{\prime}=T^{\varepsilon_{1}^{\prime},\ldots,\varepsilon_{6}^{\prime}}_{ \sigma_{1},\ldots,\sigma_{6}^{\prime}}\) where \((\sigma_{1}^{\prime},\varepsilon_{1}^{\prime})=(\sigma_{1},\varepsilon_{1})\), \((\sigma_{2}^{\prime},\varepsilon_{2}^{\prime})=\big{(}(\operatorname{id} \sqcup\eta_{1})\sigma_{2},1\big{)}\) if \(\varepsilon_{2}=1\) and \((\sigma_{2}^{\prime},\varepsilon_{2}^{\prime})=\big{(}(\eta_{1}\sqcup \operatorname{id})\sigma_{2},*\big{)}\) if \(\varepsilon_{2}=*\), and \((\sigma_{i}^{\prime},\varepsilon_{i}^{\prime})=(\sigma_{i+2},\varepsilon_{i+2 }^{\prime})\), for all \(i\geq 3\).
where for \(\ell=1,\ldots,L-1\), \(\tilde{\sigma}_{\ell}=(\operatorname{id}\sqcup\eta_{\ell}^{-1})\sigma_{\ell}\) if \(\varepsilon_{\ell}=1\) and \(\tilde{\sigma}_{\ell}=(\eta_{\ell}^{-1}\sqcup\operatorname{id})\sigma_{\ell}\) if \(\varepsilon_{\ell}=*\), and for \(\ell=L\) we have \(\tilde{\sigma}_{L}=(\operatorname{id}\sqcup\eta\eta_{L}^{-1})\sigma_{L}\) if \(\varepsilon_{L}=1\) and \(\tilde{\sigma}_{L}=(\eta\eta_{L}^{-1}\sqcup\operatorname{id})\sigma_{L}\) if \(\varepsilon_{L}=*\). Therefore we get from Lemma 5.12
\[\mathcal{E}\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}u_{\eta_{1}} \cdots m_{\sigma_{L}}^{\varepsilon_{L}}u_{\eta_{L}}\big{]}\] \[=\ \sum_{\xi\in\operatorname{NC}_{2}(L)}\sum_{\eta\in\mathfrak{S }_{k}}\mathcal{M}_{\xi\setminus B}\Big{[}m_{\tilde{\sigma}_{1}}^{\varepsilon _{1}},\ldots,m_{\tilde{\sigma}_{i-1}}^{\varepsilon_{i-1}}\mathcal{E}\big{[}m_ {\tilde{\sigma}_{i}}^{\varepsilon_{i}}m_{\tilde{\sigma}_{i+1}}^{\varepsilon _{i+1}}\big{]},m_{\tilde{\sigma}_{i+2}}^{\varepsilon_{i+2}},\ldots,m_{\tilde{ \sigma}_{L}}^{\varepsilon_{L}}\Big{]}u_{\eta}.\]
Hence, defining for each \(n\geq 1\) the corresponding multilinear functional \(\tilde{K}_{n}\), we have proved that the sequence \(\tilde{K}_{n},n\geq 1\), satisfies the moment-cumulant relation over \(\mathfrak{S}_{k}\)
\[\mathcal{E}\big{[}m_{\sigma_{1}}^{\varepsilon_{1}}u_{\eta_{1}} \cdots m_{\sigma_{L}}^{\varepsilon_{L}}u_{\eta_{L}}\big{]}\] \[=\ \sum_{\xi\in\operatorname{NC}_{2}(L)}\tilde{\mathcal{K}}_{ \xi\setminus B}\Big{[}m_{\sigma_{1}}^{\varepsilon_{1}}u_{\eta_{1}},\ldots, m_{\sigma_{i-1}}^{\varepsilon_{i-1}}u_{\eta_{i-1}}\mathcal{E}\big{[}m_{ \sigma_{i}}^{\varepsilon_{i}}u_{\eta_{i}}m_{\sigma_{i+1}}^{\varepsilon_{i+1 }}u_{\eta_{i+1}}\big{]},\] \[\qquad\quad m_{\sigma_{i+2}}^{\varepsilon_{i+2}}u_{\eta_{i+2} },\ldots,m_{\sigma_{L}}^{\varepsilon_{L}}u_{\eta_{L}}\Big{]}.\]
By Definition-Proposition 2.11, necessarily \(\tilde{K}_{n}=K_{n}\) for all \(n\geq 1\) and by Definition 2.12, the family \((m_{\sigma})_{\sigma\in\mathfrak{S}_{k}}\) is \(\mathfrak{S}_{k}\)-circular.
|
2307.09276 | On some Operator Filtering Strategies Based on Suitably Modified Green's
Functions | Recent contributions showed the benefits of operator filtering for both
preconditioning and fast solution strategies. While previous contributions
leveraged laplacian-based filters, in this work we introduce and study a
different approach leveraging the truncation of appropriately chosen spectral
representations of operators' kernels. In this contribution, the technique is
applied to the operators of the 2D TE- and TM-electric field integral equations
(EFIE). We explore two different spectral representations for the 2D Green's
function that lead to two distinct types of filtering of the EFIE operators.
Numerical results corroborate the effectiveness of the newly proposed
approaches, also in the Calder\'on preconditioned EFIE | Matteo E. Masciocchi, Ermanno Citraro, Alexandre DΓ©ly, Lyes Rahmouni, Adrien Merlini, Francesco P. Andriulli | 2023-07-18T14:14:49Z | http://arxiv.org/abs/2307.09276v1 | # On some Operator Filtering Strategies Based on Suitably Modified Green's Functions
###### Abstract
Recent contributions showed the benefits of operator filtering for both preconditioning and fast solution strategies. While previous contributions leveraged Laplacian-based filters, in this work we introduce and study a different approach leveraging the truncation of appropriately chosen spectral representations of operators' kernels. In this contribution, the technique is applied to the operators of the 2D TE- and TM-electric field integral equations (EFIE). We explore two different spectral representations for the 2D Green's function that lead to two distinct types of filtering of the EFIE operators. Numerical results corroborate the effectiveness of the newly proposed approaches, also in the Calderon preconditioned EFIE.
Integral equations, EFIE, spectral filtering.
## I Introduction
The exploration of efficient solutions for complex electromagnetic problems is a pivotal research domain, underpinning advancements in numerous areas from communications to medical imaging. Integral equations are often used in this context because they require the discretization of the scatterers' boundaries only. However, upon discretization via the boundary element method (BEM), they typically give rise to dense linear systems for which direct solutions are often impractical. One efficient approach to sidestep this difficulty is to couple fast solvers with iterative solvers to solve the system. However, both the solution's precision and the number of iterations required to reach it are strongly influenced by the spectral properties of the discretized integral operators.
Recently, operator filtering has been introduced [1] as a way to manipulate and correct the deleterious properties of electromagnetic operators' spectra, while still being compatible with classical fast solution strategies. The first incarnations of operator filters rely on Laplacian manipulations to build spectral filters that are subsequently multiplicatively applied to the integral operators of interest. Operator filtering, relying on quasi-Helmholtz filters, has successfully been used to stabilize the Electric Field Integral Equations (EFIE) for 3D scattering in both the dense-discretization and low frequency regimes [1] and to enhance the compressibility of integral operators for building single-skeleton fast direct solvers [2].
In this work, we introduce a novel way to directly obtain filtered operators by truncating carefully chosen spectral representations of the operators' kernels. This means that the standard BEM discretization of the modified operators will directly yield matrices whose spectra correspond to a filtered version of the spectrum of the discretized original operators. This article focuses on the operators involved in the TE and TM electric field integral equations (EFIE) applied to a 2D scatterer (although the proposed approach is extensible beyond this scenario), for which we present two types of filtering that rely on two different spectral representations of the 2D Green's function. The properties and performance of the different filtering approaches are analyzed in both the static and the dynamic case. The effectiveness of the new filtering scheme is further substantiated by suitably selected numerical results.
## II Notation and Background
Consider a 2D scatterer modeled by a smooth curve \(\gamma\in\mathbb{R}^{2}\), in a medium with wavenumber \(k\) and impedance \(\eta\), on which impinges an electromagnetic field \((\mathbf{E}^{i},\mathbf{H}^{i})\). We assume \(\gamma\) lies in the \(xy\) plane, and denote with the subscript \(t\) the physical quantities lying in the \(xy\) plane, tangential to the scatterer. Depending on the polarization of interest, two equations can be obtained to relate the incident electric field \(\mathbf{E}^{i}=E_{t}\mathbf{t}+E_{z}\mathbf{z}\) to the induced surface current \(\mathbf{j}=j_{t}\mathbf{t}+j_{z}\mathbf{z}\)
\[\eta\mathrm{i}k\,\mathcal{S}j_{z} =E_{\mathrm{z}}\,, \tag{1}\] \[\frac{\eta}{\mathrm{i}k}\,\mathcal{N}j_{t} =E_{\mathrm{t}}\,, \tag{2}\]
which are respectively the TM and TE EFIEs, and whose integral operators are
\[(\mathcal{S}j_{z})(\mathbf{r}) \coloneqq\int_{\gamma}g(\mathbf{r},\mathbf{r}^{\prime})j_{z}(\mathbf{r}^{ \prime})\,\mathrm{d}\mathbf{r}^{\prime}\,, \tag{3}\] \[(\mathcal{N}j_{t})(\mathbf{r}) \coloneqq-\frac{\partial}{\partial n}\int_{\gamma}\frac{\partial }{\partial n^{\prime}}g(\mathbf{r},\mathbf{r}^{\prime})j_{t}(\mathbf{r}^{\prime})\,\mathrm{ d}\mathbf{r}^{\prime} \tag{4}\]
with \(g(\mathbf{r},\mathbf{r}^{\prime})\coloneqq-\mathrm{i}/4\,H_{0}^{(2)}(k|\mathbf{r}-\mathbf{r}^{ \prime}|)\) if \(k>0\), or \(g_{0}(\mathbf{r},\mathbf{r}^{\prime})\coloneqq-1/2\pi\,\log(|\mathbf{r}-\mathbf{r}^{\prime}|)\) if \(k=0\). The currents \(j_{t}\) and \(j_{z}\) can be expanded with piecewise linear Lagrange interpolants \(\{\varphi_{i}\}\) defined on a mesh of \(\gamma\) composed of \(N\) segments of uniform length \(h\): \(j_{t}\approx\sum_{i=1}^{N}[\mathbf{j}_{t}]_{i}\varphi_{i}\) and \(j_{z}\approx\sum_{i=1}^{N}[\mathbf{j}_{z}]_{i}\varphi_{i}\). Using Galerkin testing [3], the discrete systems can be obtained as \(\mathbf{S}\mathbf{j}_{z}=\mathbf{e}_{z}\) and \(\mathbf{N}\mathbf{j}_{t}=\mathbf{e}_{t}\), where \([\mathbf{S}]_{ij}=\left\langle\varphi_{i},\mathcal{S}\varphi_{j}\right\rangle\), \([\mathbf{N}]_{ij}=\left\langle\varphi_{i},\mathcal{N}\varphi_{j}\right\rangle\), \([\mathbf{e}_{t}]_{i}=\left\langle\varphi_{i},\frac{\mathrm{i}k}{\eta}E_{t}\right\rangle\), and \([\mathbf{e}_{z}]_{i}=\left\langle\varphi_{i},\frac{1}{\eta\mathrm{i}k}E_{z}\right\rangle\).
## III Operator Filtering via Green's Function Spectral Truncation
In this section, we propose a new class of filtered operators, stemming from the truncation of spectral representations of the operators' kernel \(g(\mathbf{r},\mathbf{r}^{\prime})\). The scheme is as follows: (i) define the filtered Green's function \(g^{\alpha}(\mathbf{r},\mathbf{r}^{\prime})\) where \(\alpha\) indicates a
filtering parameter (akin to a cutoff frequency) that will depend on the spectral representation chosen, (ii) define the filtered operators
\[(\mathcal{S}^{\alpha}j_{z})(\mathbf{r}) \coloneqq\int_{\gamma}g^{\alpha}(\mathbf{r},\mathbf{r}^{\prime})j_{z}(\mathbf{r }^{\prime})\,\mathrm{d}\mathbf{r}^{\prime}\,, \tag{5}\] \[(\mathcal{N}^{\alpha}j_{t})\left(\mathbf{r}\right) \coloneqq-\frac{\partial}{\partial n}\int_{\gamma}\frac{\partial }{\partial n^{\prime}}g^{\alpha}(\mathbf{r},\mathbf{r}^{\prime})j_{t}(\mathbf{r}^{\prime })\,\mathrm{d}\mathbf{r}^{\prime}\,, \tag{6}\]
and (iii) use the boundary element method to obtain the matrices with the corresponding filtered spectra.
The first approach we present to filter \(g(\mathbf{r},\mathbf{r}^{\prime})\) is to transform it into the spectral domain in the sense of a multidimensional Fourier expansion, and to back-transform a truncated version, obtaining the following modified kernels, in the static case
\[g^{\alpha}_{0}(\mathbf{r},\mathbf{r}^{\prime})=-\frac{1}{2\pi}\log(|\mathbf{r}-\mathbf{r}^{ \prime}|)-\frac{1}{2\pi}\int_{s=\alpha}^{+\infty}\frac{J_{0}(s|\mathbf{r}-\mathbf{r}^ {\prime}|)}{s}\,\mathrm{d}s\,, \tag{7}\]
and in the dynamic case
\[g^{\alpha}(\mathbf{r},\mathbf{r}^{\prime})=-\frac{\mathrm{i}}{4}H^{(2)}_{0}(k|\mathbf{r} -\mathbf{r}^{\prime}|)-\frac{1}{2\pi}\int_{s=\alpha}^{+\infty}\frac{J_{0}(s|\mathbf{r }-\mathbf{r}^{\prime}|)s}{s^{2}-k^{2}}\,\mathrm{d}s \tag{8}\]
respectively, where \(\alpha>k\), \(J_{0}\) is the \(0^{\text{th}}\) order Bessel function of the first kind and \(H^{(2)}_{0}\) is the \(0^{\text{th}}\) order Hankel function of the second kind. For the implementation of (8), the computation is split into two parts: the singular part is handled by a Taylor expansion, whereas the asymptotic regime is handled by a recursive extraction of terms using the expansion [4, eq. 10.17.3].
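For illustration, the static kernel (7) can be evaluated with a naive quadrature in which the oscillatory tail integral is simply truncated at a large finite upper limit `s_max`; this truncation is an assumption of the sketch below, not part of the scheme, and a production code would use the Taylor/asymptotic splitting just described:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def g0_filtered(x, alpha, s_max=2000.0):
    """Static filtered kernel of Eq. (7) at distance x = |r - r'|;
    the tail integral over (alpha, inf) is crudely truncated at s_max."""
    tail, _ = quad(lambda s: j0(s * x) / s, alpha, s_max, limit=2000)
    return -np.log(x) / (2.0 * np.pi) - tail / (2.0 * np.pi)

# unlike the log kernel, the filtered kernel stays bounded as x -> 0
for x in (1e-3, 1e-2, 1e-1):
    print(x, g0_filtered(x, alpha=50.0))
```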
Another possible spectral expansion of \(g(\mathbf{r},\mathbf{r}^{\prime})\) can be obtained by leveraging the Mehler-Sonine integrals [4, eq. 10.9.12], which yield \(Y_{0}(x)=-\frac{2}{\pi}\int_{1}^{\infty}\frac{\cos(xt)}{\sqrt{t^{2}-1}}\,\mathrm{d}t\), where \(Y_{0}\) is the \(0^{\text{th}}\) order Bessel function of the second kind. Using the identity \(H^{(2)}_{0}(x)=J_{0}(x)-\mathrm{i}Y_{0}(x)\), recalling the Green's function definition, and truncating \(Y_{0}(x)\), we obtain
\[g^{\alpha}(\mathbf{r},\mathbf{r}^{\prime})=-\frac{\mathrm{i}}{4}J_{0}(k|\mathbf{r}-\mathbf{r} ^{\prime}|)+\frac{1}{2\pi}\int_{t=1}^{\alpha/k}\frac{\cos(k|\mathbf{r}-\mathbf{r}^{ \prime}|t)}{\sqrt{t^{2}-1}}\,\mathrm{d}t\,. \tag{9}\]
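As a sanity check (with arbitrary test values), the truncation above can be verified numerically: as the truncation point \(\alpha/k\) grows, the truncated Mehler-Sonine integral must converge to \(Y_{0}\). The slow, oscillatory convergence visible in the Python sketch below is precisely what makes a direct quadrature of (9) unattractive:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import y0

def y0_truncated(x, T):
    # endpoint-singular piece on (1, 2): the (t-1)^(-1/2) factor is
    # handed to quad's algebraic-weight (QAWS) routine
    s1, _ = quad(lambda t: np.cos(x * t) / np.sqrt(t + 1.0), 1.0, 2.0,
                 weight='alg', wvar=(-0.5, 0.0))
    # smooth but oscillatory piece on (2, T)
    s2, _ = quad(lambda t: np.cos(x * t) / np.sqrt(t * t - 1.0), 2.0, T,
                 limit=2000)
    return -2.0 / np.pi * (s1 + s2)

x = 3.0
for T in (2.0, 10.0, 100.0, 1000.0):  # T plays the role of alpha/k
    print(T, y0_truncated(x, T), y0(x))
```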
This form, however, is challenging to compute. An effective approach stems from rewriting the integral in (9) as
\[I^{c1,c2}_{k}=\Re\left(\int_{c1}^{c2}\mathrm{e}^{\mathrm{i}kt}f(t)dt\right). \tag{10}\]
where \(f:t\mapsto 1/\sqrt{t^{2}-1}\). Now we introduce a change of variables which maps \((-1,1)\) into \((c1,c2)\), carried out by the function \(g:t\mapsto(t+1)\frac{c2-c1}{2}+c1\). Then, we expand \(f\circ g\) as a linear combination of Legendre polynomials \(P_{n}\), i.e.,
\[f(g(x))=\sum_{n=0}^{\infty}a_{n}P_{n}(x),\,\,a_{n}=\frac{2n+1}{2}\int_{-1}^{1} f(g(x))P_{n}(x)\,\mathrm{d}x \tag{11}\]
and so we can rewrite (10) as
\[I^{c1,c2}_{k}=\Re\left(\frac{c2-c1}{2}\mathrm{e}^{\mathrm{i}\left(k^{\prime}+ kc1\right)}\sum_{n=0}^{\infty}a_{n}\int_{-1}^{1}\mathrm{e}^{\mathrm{i}k^{\prime}t}P _{n}(t)dt\right) \tag{12}\]
with \(k^{\prime}=k(c2-c1)/2\) and the coefficients \(a_{n}\) given in (11). Using the following identity from [5]
\[\int_{-1}^{1}P_{n}(t)\mathrm{e}^{\mathrm{i}kt}dt=(\mathrm{i})^{n}\sqrt{\frac{2 \pi}{k}}J_{n+\frac{1}{2}}(k) \tag{13}\]
we finally obtain
\[I^{c1,c2}_{k}=\Re\left(\frac{c2-c1}{2}\mathrm{e}^{\mathrm{i}\left(k^{\prime}+ kc1\right)}\sum_{n=0}^{\infty}a_{n}(\mathrm{i})^{n}\sqrt{\frac{2\pi}{k^{\prime}}}J_{ n+\frac{1}{2}}(k^{\prime})\right). \tag{14}\]
In practice, the expansion above is used to compute the integral over \((2,\alpha/k)\): due to the smoothness of the integrand over this interval, the Legendre expansion can be truncated after a low number of terms (and the Legendre coefficients \(a_{n}\) can be precomputed for the integration interval of interest). In the interval \((1,2)\) a different expansion is used: integrating (10) by parts, we obtain
\[I^{c1,c2}_{k}=\left[\cos(kt)\ln\left(t+\sqrt{t^{2}-1}\right) \right]^{c2}_{t=c1}+\\ k\int_{c1}^{c2}\sin(kt)\ln\left(t+\sqrt{t^{2}-1}\right)dt \tag{15}\]
and the integral on the right-hand side is computed using the Legendre expansion procedure. Because \(\ln\left(t+\sqrt{t^{2}-1}\right)\) is not singular at \(1\), a low number of expansion terms suffices, and the approach is efficient and accurate.
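A compact Python sketch of the procedure (11)-(14) is given below, checked against brute-force quadrature; the truncation orders `n_terms` and `n_quad`, and the test values of \(k\), \(c1\), \(c2\), are illustrative choices rather than the ones used for the results of this paper:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre, jv

def filon_legendre(k, c1, c2, f, n_terms=24, n_quad=64):
    """Evaluate Re int_{c1}^{c2} e^{ikt} f(t) dt via Eqs. (11)-(14)."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    g = (x + 1.0) * (c2 - c1) / 2.0 + c1        # map (-1,1) -> (c1,c2)
    fg = f(g)
    # Legendre coefficients a_n of f∘g, Eq. (11), by Gauss quadrature
    a = np.array([(2 * n + 1) / 2.0 * np.sum(w * fg * eval_legendre(n, x))
                  for n in range(n_terms)])
    kp = k * (c2 - c1) / 2.0                    # k' of Eq. (12)
    n = np.arange(n_terms)
    series = np.sum(a * 1j**n * np.sqrt(2.0 * np.pi / kp) * jv(n + 0.5, kp))
    return np.real((c2 - c1) / 2.0 * np.exp(1j * (kp + k * c1)) * series)

f = lambda t: 1.0 / np.sqrt(t**2 - 1.0)
k, c1, c2 = 5.0, 2.0, 10.0
ref, _ = quad(lambda t: np.cos(k * t) * f(t), c1, c2, limit=400)
print(filon_legendre(k, c1, c2, f), ref)       # the two should agree
```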
## IV Numerical results
To illustrate the effectiveness of the above formulations, numerical results for both the static and dynamic cases are provided below. All these operators were obtained on the scatterer shown in Fig. 1. In the dynamic cases, the frequency is set to \(1\,\mathrm{GHz}\) with a mesh size of \(h=\lambda/30\). Fig. 1 shows the effectiveness of (7) on the discretized single layer operator \(\mathbf{S}\) in the static case, whereas Fig. 2 shows the performance of (8) and (9) on the same operator in the dynamic case. We finally show
Fig. 1: Singular values of \(\mathbf{S}_{0}\) and singular values of \(\mathbf{S}_{0}^{\alpha}\) using (7), ordered by the singular vectors of the Laplace-Beltrami operator, and reference mesh.
the effectiveness of the filtering procedure in the Calderon preconditioned TE-EFIE [6]
\[\mathbf{G}^{-1}\mathbf{S}\mathbf{G}^{-1}\mathbf{N}\mathbf{j}_{t}= \mathbf{G}^{-1}\mathbf{S}\mathbf{G}^{-1}\mathbf{e}_{t}\,, \tag{16}\]
where \([\mathbf{G}]_{ij}\coloneqq\langle\varphi_{i},\varphi_{j}\rangle\). In Figure 3, in particular, we show the singular values of \(\mathbf{G}^{-1}\mathbf{S}\mathbf{G}^{-1}\mathbf{N}\) and the singular values of \(\mathbf{G}^{-1}\mathbf{S}^{\alpha}\mathbf{G}^{-1}\mathbf{N}\), where \(\mathbf{S}^{\alpha}\) is computed using (8) and (9), ordered by the singular vectors of the Laplace-Beltrami operator. The effectiveness of the filtering procedure with the new kernels is evident, as is their applicability in a Calderon setting, as further detailed in [2].
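Once the matrices are assembled, forming the preconditioned operator in (16) and inspecting its spectrum takes a few lines; the Python sketch below assumes \(\mathbf{G}\), \(\mathbf{S}\) (or its filtered counterpart \(\mathbf{S}^{\alpha}\)) and \(\mathbf{N}\) are already available:

```python
import numpy as np

def calderon_singular_values(G, S, N):
    """Singular values of G^{-1} S G^{-1} N of Eq. (16); linear solves
    are used instead of forming G^{-1} explicitly."""
    A = np.linalg.solve(G, S @ np.linalg.solve(G, N))
    return np.linalg.svd(A, compute_uv=False)
```

Passing the filtered matrix \(\mathbf{S}^{\alpha}\) in place of \(\mathbf{S}\) yields the filtered spectrum shown in Figure 3.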
## Acknowledgment
The work of this paper has received funding from the Horizon Europe Research and innovation programme under the EIC Pathfinder grant agreement n\({}^{\circ}\) 101046748 (project CEREBO) and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 724846, project 321).
|
2307.06145 | Robust Impulse Responses using External Instruments: the Role of
Information | External-instrument identification leads to biased responses when the shock
is not invertible and the measurement error is present. We propose to use this
identification strategy in a structural Dynamic Factor Model, which we call
Proxy DFM. In a simulation analysis, we show that the Proxy DFM always
successfully retrieves the true impulse responses, while the Proxy SVAR
systematically fails to do so when the model is either misspecified, does not
include all relevant information, or the measurement error is present. In an
application to US monetary policy, the Proxy DFM shows that a tightening shock
is unequivocally contractionary, with deteriorations in domestic demand, labor,
credit, housing, exchange, and financial markets. This holds true for all raw
instruments available in the literature. The variance decomposition analysis
highlights the importance of monetary policy shocks in explaining economic
fluctuations, albeit at different horizons. | Davide Brignone, Alessandro Franconi, Marco Mazzali | 2023-07-12T13:00:00Z | http://arxiv.org/abs/2307.06145v1 | # Robust Impulse Responses using External
###### Abstract
External-instrument identification leads to biased responses when the shock is not invertible and the measurement error is present. We propose to use this identification strategy in a structural Dynamic Factor Model, which we call Proxy DFM. In a simulation analysis, we show that the Proxy DFM always successfully retrieves the true impulse responses, while the Proxy SVAR systematically fails to do so when the model is either misspecified, does not include all relevant information, or the measurement error is present. In an application to US monetary policy, the Proxy DFM shows that a tightening shock is unequivocally contractionary, with deteriorations in domestic demand, labor, credit, housing, exchange, and financial markets. This holds true for all raw instruments available in the literature. The variance decomposition analysis highlights the importance of monetary policy shocks in explaining economic fluctuations, albeit at different horizons.
**Keywords**: Proxy Dynamic Factor Model, Monetary Policy, Fundamentalness, Impulse Response Functions, Variance Decomposition.
**JEL codes**: C32, C38, E52.
## 1 Introduction
The use of external instruments to identify structural shocks has now become a widely used methodology in macroeconomics for estimating dynamic causal effects. The main advantage of employing such identification strategy, compared to traditional schemes, is its independence from any assumption on sign or timing of the impulse responses (see Stock and Watson, 2018, for a detailed review).
However, this identification strategy is not immune to problems affecting standard vector autoregressive (VAR) models. As largely discussed in the literature, the reliability of this class of models in estimating the transmission of structural shocks may be undermined by two major limitations: the _curse of dimensionality_ and the presence of _measurement errors_. The first is linked to the restricted amount of information available to the econometrician, who is forced to use small- or, at most, medium-scale systems due to the exponentially growing parameter space. Implicitly, this procedure assumes that the information from the past and present observations of the selected endogenous variables is sufficient to recover the structural shock of interest, _i.e._, that the model is _informationally sufficient_ (following the definition of Forni and Gambetti, 2014). However, several studies demonstrated that this assumption does not always hold true, as, for instance, in the presence of anticipated shocks, that is, shocks with a delayed effect on some variables (see Leeper et al., 2013; Forni et al., 2014, among others). Consequently, the issues of _non-invertibility_ or _non-fundamentalness_ may arise.1 A second source of bias may be the presence of a non-negligible idiosyncratic component of the series, which is usually interpreted as measurement error or as sectoral disturbances. Indeed, despite being often neglected in the literature, this could further affect the estimates of the impulse response functions, leading to inaccurate results - see, for instance, Giannone et al. (2006) and Forni et al. (2020).
Footnote 1: While invertibility requires that the shocks can be inferred from past and current values of the endogenous variables, fundamentalness implies that the shocks only need to belong to the space spanned by those values. Note that while the two properties are closely related, they are not exactly the same thing, although they are often used as synonyms. For example, consider the case where the vector moving average representation of a square system has at least one root equal to one in modulus. The system would not be invertible, but it would still be fundamental. See, _inter alia_, Hansen and Sargent (1991), Lippi and Reichlin (1993, 1994) and the review by Alessi et al. (2011).
In the last decades, the literature has attempted to deal with the curse of dimensionality by proposing econometric models that allow the inclusion of a larger number of variables _w.r.t._ VARs. A non-exhaustive list of examples includes Bayesian VARs (see, _e.g._, De Mol et al., 2008; Banbura et al., 2010), Reduced-Rank VARs (see, _e.g._, Carriero et al., 2011; Cubadda and Hecq, 2022), Factor Augmented VARs (FAVAR, henceforth - see, _e.g._, Bernanke et al., 2005), and Dynamic Factor Models (DFM, henceforth - see, _e.g._, Forni et al., 2000, 2009; Stock and Watson, 2002). Although they manage to solve the missing variable problem,
only the last approach is able to deal with measurement errors.
In this paper, we propose using external instruments in a Dynamic Factor Model (Proxy DFM, henceforth). We describe how to apply the external instrument identification in a structural DFM framework to estimate a unit-variance shock. This allows the estimation of the variance decomposition and, possibly, the historical decomposition. By means of the simple theoretical model with perfect foresight studied in Leeper et al. (2013), we show the benefits of applying the proposed identification strategy within a _data-rich environment_ compared to a small-scale (VAR) model. We compare the theoretical responses with the impulse responses estimated from the simulated series of the model using both a Proxy SVAR and a Proxy DFM. We explore various model specifications and highlight the role of measurement error and non-fundamentalness in biasing the estimated results.
Our findings indicate that, when the information set is insufficient, the Proxy SVAR delivers biased responses. Conversely, the DFM successfully estimates the true IRFs. Moreover, informational deficiency may be compounded by other sources of bias, such as the sensitivity of the estimates to the chosen model specification and the presence of measurement error. For example, even if the VAR is correctly specified and the system is fundamental, the presence of measurement error can bias the final estimates. In contrast, the DFM is not affected by any distortion. Its robustness holds even when higher levels of the idiosyncratic component are added to the simulated series.
We document the practical value of the proposed approach with an empirical application, contributing to a never-ending debate for macroeconomists and policymakers: the macroeconomic effects and the transmission of monetary policy.
We select several instruments recently proposed in the literature, specifically by Gertler and Karadi (2015) (GK), Romer and Romer (2004) (RR), Miranda-Agrippino and Ricco (2021) (MAR), the raw high-frequency surprise series of Jarocinski and Karadi (2020) (JK) and Bauer and Swanson (2022) (BS), and compare the results between the estimates obtained from different model specifications using a Proxy SVAR with the unique set of responses estimated using a Proxy DFM. The choice of instruments is not accidental. In fact, we also consider instruments that tend to capture the monetary policy shock along with a _news component_ coming from the central bank's assessment of the economic outlook, which makes the underlying shock also partially anticipated (Ramey, 2016; Jarocinski and Karadi, 2020; Miranda-Agrippino and Ricco, 2021). This feature potentially exposes the VAR estimates to the problems introduced above. We find that the SVAR responses exhibit both output and price puzzles in almost all the specifications analyzed and across all instruments except the one proposed by MAR. On the other hand, the unique set of responses estimated through the Proxy DFM does not suffer from either of these puzzles. In other words, one can solve issues arising from the instrument by simply enlarging the information set of the econometrician,
mitigating the effect of the _news_ component of the monetary policy shock.
In the last part of our paper, we provide a thorough analysis of monetary policy shocks transmission. We now exclusively focus on Proxy DFM results, using GK as the baseline instrument. In addition, we compare the results with all the instruments previously introduced. We study the various channels through which a monetary policy shock propagates, analyzing both impulse responses and the variance decomposition.
The first interesting result is that our proposed model estimate impulse responses are robust across all instruments under analysis. Turning to the analysis of the transmission, we find that a monetary policy tightening shock is unequivocally contractionary for the economy. Real variables contract, but only after some months, with the exception of consumption and retail sales, which react at impact. All measures of prices decline, and their behavior indicates the presence of price rigidities in the economy, as they do not fully adjust at impact. The housing sector is deeply affected by the shock and, by weakening household balance sheets, can explain the sharp contraction in consumption (see Mian et al., 2013, for this mechanism). Its effect could compound with other important channels. For instance, equity prices fall, further negatively affecting household wealth; the exchange rate rises, signaling an appreciation of the domestic currency that translates into a reduction in net exports; and interest rates and spreads all point to an overall contraction of the financial sector, which could amplify the negative business cycle impact of the shock (Jorda et al., 2017). The variance decomposition analysis indicates that monetary policy shocks are important in explaining business cycle fluctuations of the economy, with results that differ depending on the nature of the variable. For real variables, the shock does not explain much variance at the very impact, but its importance increases towards the third year. As expected, monetary policy shocks explain a large part of the variance already at impact for financial variables, exchange rates, and prices.
Related Literature.Our study is linked to different strands of literature. First, we refer to the literature related to Dynamic Factor Models. This model can include a large number of variables in the estimation, and can therefore enlarge the information set of the econometrician, a feature which makes it useful both in forecasting and structural analysis (Forni et al., 2000; Stock and Watson, 2002). On the latter, the model can be particularly appealing, given that it has been proved that the measurement error and non-fundamentalness issues are solved by construction. Furthermore, the estimation of its Wold representation - and reduced form shocks - does not depart too much from the standard procedure which is applied in the context of VAR models (Forni et al., 2009). For this reason, the DFM has been widely used in the context of structural analysis in recent years, applied with different identification techniques - see, for instance, Del Negro and Otrok (2007); Forni and Gambetti
(2010); Barigozzi et al. (2014); Luciani (2015); Bjornland and Thorsrud (2019); Kerssenfischer (2019); Brignone and Mazzali (2022). Among these, Forni and Gambetti (2010) is particularly related to our work as it was the first to explore the effects of U.S. monetary policy shocks in a DFM. In this paper we depart from it by using external instruments to identify the monetary policy shock rather than imposing timing restrictions on the variables in the model. Recent contributions related to our work, which analyze monetary policy shocks in a Proxy DFM, are Alessi and Kerssenfischer (2019) and Corsetti et al. (2022). The former uses it to estimate the response of asset prices to monetary policy shocks in the U.S., while the latter studies the heterogeneity in the transmission of shocks across euro area countries. Our work differs from theirs in terms of both methodology and scope. In a simulation exercise, we provide a comprehensive analysis of the advantages that a researcher can gain by using the Proxy DFM relative to the Proxy SVAR. In the empirical application, instead, we study the macroeconomic effects of monetary policy in the U.S., along with the channels through which it propagates.2 Another major departure from them is our unit-variance shock estimation, which allows us to perform variance and historical decompositions.3
Footnote 2: Focusing on the issue of information and shock recoverability, our study is also related to the recent literature that has proposed milder conditions based on the concept of information sufficiency, see Forni et al. (2019); Chahrour and Jurado (2022)
Footnote 3: In the paper, we only show the former decomposition.
Second, our study focuses on the strand of literature which identifies structural shocks of interest in a VAR framework using external instruments available (see Stock and Watson, 2018, for a survey). This approach typically involves a two-step strategy. Firstly, reduced form shocks are estimated, usually from a VAR model. Then, the structural shock of interest is identified by projecting a single selected reduced-form shock on the available external instrument. Its first appearance in macro-econometrics dates back to Stock (2008), reaching broader consensus some years later. Among the many contributions using Proxy SVARs we find Stock and Watson (2012), which examine the macroeconomic dynamics of the Great Recession in the U.S. and the subsequent slow recovery, Mertens and Ravn (2013), which provide evidence on the U.S. personal and corporate income tax changes. More closely related papers are the ones using a Proxy SVAR to estimate the macroeconomic effects of monetary policy shocks. Among those, we find Gertler and Karadi (2015), Jarocinski and Karadi (2020), Miranda-Agrippino and Ricco (2021), and Bauer and Swanson (2022). This literature has also been favoured by the development of the high-frequency identification (HFI) strategy, which allows using information from "outside" the VAR to identify the shock of interest.4 The HFI exploits high-frequency asset price changes around policy meetings to quantify exogenous changes in monetary policy actions (Kuttner, 2001; Gurkaynak et
al., 2004, among others). The identifying assumption underlying the HFI approach is that unexpected changes in interest rates in a short window surrounding policy meetings are only due to monetary policy actions.
In between the two bodies of literature mentioned above, our work is also related to Miescu and Mumtaz (2019); Bruns (2021); De Nora (2023), which use external instruments within a FAVAR approach. As we show in the simulations, FAVAR is able to address the _missing variable_ problem, but its estimates are still affected by measurement error and by model specification.
Finally, we also refer to the literature which directly focuses on the role of information in the proxy SVAR identification (see the recent works by Forni et al., 2022; Plagborg-Moller and Wolf, 2022; Bruns, 2021; Miescu and Mumtaz, 2019), and generally to the literature which has proposed milder conditions based on the concepts of informational sufficiency and _recoverability_ of the shocks5 - see Forni et al. (2019, 2022); Chahrour and Jurado (2022). The Proxy DFM is invertible by construction and, thus, the shocks are always recoverable, as the invertibility condition is stricter than the latter.
Footnote 5: Recoverability requires that the structural shock is a linear combination of present and future values of VAR residuals. For this, fundamentalness implies recoverability, but not vice versa.
The remainder of the paper is organised as follows. Section 2 describes the econometrics behind the DFM and the methodology we follow to apply the external instrument identification within this framework. Section 3 is devoted to the comparison between the Proxy SVAR and Proxy DFM results using a theoretical model with perfect foresight. Section 4 covers the empirical application to monetary policy. First, we compare impulse responses between the VAR and DFM identified with multiple instruments, then we use a Proxy DFM to shed light on the transmission mechanism of a monetary policy shock. Section 5 concludes.
## 2 Econometric framework
In this section, we present the model and the identification procedure. Regarding the DFM, we present two specifications. First, we show the stationary case, used in the simulation, which builds on Forni et al. (2009). Second, we present the methodology proposed by Barigozzi et al. (2021) to handle non-stationary variables. The latter is used in the empirical application.
### 2.1 Dynamic Factor Model
#### 2.1.1 The stationary I(0) specification
Consider an \(N\)-vector \(x_{t}\) of weakly stationary time series. As standard in the DFM literature, we assume each variable \(x_{it}\), \(i=1,...,N\), can be rewritten as the sum of an _idiosyncratic_ component, \(\xi_{it}\), and a _common_ component, \(\chi_{it}\). The former represents the source of variation affecting a specific variable and is thus interpreted as measurement error or sectoral shocks. It is assumed that the \(\xi_{it}\) are weakly correlated in the cross-sectional dimension, a milder and more realistic assumption than uncorrelatedness. This assumption is crucial for ascribing this model to the class of _approximate_ factor models, rather than the more traditional _exact_ factor models à la Sargent and Sims (1977) and Geweke (1977). On the other hand, the _common_ components, \(\chi_{it}\), are assumed to permeate the entire dataset and to be functions of \(q\) common shocks \(u_{t}=(u_{1t},u_{2t},...,u_{qt})^{\prime}\), with \(q<<N\), such that6
Footnote 6: In order to be consistent with the literature, throughout the paper, we will refer to \(u_{t}\) interchangeably as common shocks or _dynamic_ factors.
\[\chi_{it}=b_{i1}(L)u_{1t}+b_{i2}(L)u_{2t}+...+b_{iq}(L)u_{qt}\]
Defining \(\chi_{t}=(\chi_{1t}\ldots\chi_{Nt})^{\prime}\) and \(\xi_{t}=(\xi_{1t}\ldots\xi_{Nt})^{\prime}\) we can rewrite the model in vector notation as
\[\chi_{t}=B_{\chi}(L)u_{t} \tag{1}\]
where \(B_{\chi}(L)\) is an \(N\times q\) matrix, whose \((i,j)\)-th entry is \(b_{ij}(L)\), and \(u_{t}\) is an orthonormal white noise vector such that \(u_{t}\perp\xi_{t}\). Since the vector \(u_{t}\) is orthogonal to \(\xi_{t}\), the latter is also orthogonal to \(\chi_{t}\). Moreover, since the vector \(\chi_{t}\) is singular, the dynamic factors are fundamental.
One can further rewrite the expression above in terms of _static_ factors as
\[\chi_{t}=\Lambda F_{t} \tag{2}\]
where \(F_{t}\) is a vector of \(r>q\) static factors, still orthogonal to \(\xi_{t}\), and \(\Lambda\) is an \(N\times r\) matrix of factor loadings. Notice that the static factors are only loaded contemporaneously, whereas we consider present and past values of the dynamic factors. It is further assumed that the vector \(F_{t}\) follows a VAR of order \(p\), and that \(F_{t}\) and \(u_{t}\) are linked as follows
\[D(L)F_{t}=\varepsilon_{t},\quad\text{with }\varepsilon_{t}=Ru_{t} \tag{3}\]
where \(D(L)\) is an \(r\times r\) polynomial matrix of coefficients, \(\varepsilon_{t}\) is the vector of VAR errors of the static factors, and \(R\) is an \(r\times q\) matrix resulting from a spectral decomposition of the errors \(\varepsilon_{t}\). Given that \(r>q\), the stochastic vector \(F_{t}\) is still singular.
By inverting the matrix of coefficients \(D(L)\) we obtain the moving average representation
\[F_{t}=D(L)^{-1}Ru_{t}=B_{F}(L)u_{t} \tag{4}\]
and the following relation
\[\chi_{t}=\Lambda F_{t}=\Lambda B_{F}(L)u_{t}=B_{\chi}(L)u_{t} \tag{5}\]
where \(B_{F}(L)\) is an \(r\times q\) polynomial matrix of coefficients. Equations (3)-(5) uncover the link between dynamic and static factors.
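To fix ideas, a minimal Python sketch of the estimation pipeline implied by (2)-(5) follows: principal components deliver \(\Lambda\) and \(F_{t}\), OLS delivers the factor VAR in (3), and a rank-\(q\) eigendecomposition of the VAR residual covariance delivers \(R\). The normalizations are illustrative and not necessarily those used in the paper:

```python
import numpy as np

def estimate_dfm(X, r, q, p=2):
    """Stationary DFM of Eqs. (2)-(5); X is T x N (standardized inside)."""
    T, N = X.shape
    Xs = (X - X.mean(0)) / X.std(0)
    # static factors and loadings via principal components
    w, V = np.linalg.eigh(np.cov(Xs, rowvar=False))
    Lam = V[:, ::-1][:, :r]                      # loadings (up to rotation)
    F = Xs @ Lam
    # VAR(p) on the factors, Eq. (3), estimated by OLS
    Y = F[p:]
    Z = np.hstack([F[p - l - 1:T - l - 1] for l in range(p)])
    D = np.linalg.lstsq(Z, Y, rcond=None)[0]     # stacked lag coefficients
    eps = Y - Z @ D
    # eps_t = R u_t, with R from the q leading eigenpairs of cov(eps)
    we, Ve = np.linalg.eigh(np.cov(eps, rowvar=False))
    R = Ve[:, ::-1][:, :q] * np.sqrt(we[::-1][:q])
    return Lam, F, D, R

def irf_chi(D, R, Lam, H=12):
    """IRFs of the common components, B_chi(h) = Lam B_F(h) R, Eq. (5)."""
    r = D.shape[1]
    p = D.shape[0] // r
    C = np.zeros((r * p, r * p))                 # companion matrix
    for l in range(p):
        C[:r, l * r:(l + 1) * r] = D[l * r:(l + 1) * r].T
    if p > 1:
        C[r:, :-r] = np.eye(r * (p - 1))
    Ch = np.eye(r * p)
    out = []
    for _ in range(H):
        out.append(Lam @ Ch[:r, :r] @ R)
        Ch = C @ Ch
    return np.array(out)                         # H x N x q

# shape check on white-noise data (no factor structure, illustration only)
X = np.random.default_rng(0).standard_normal((300, 40))
Lam, F, D, R = estimate_dfm(X, r=5, q=2, p=2)
print(irf_chi(D, R, Lam, H=4).shape)             # (4, 40, 2)
```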
#### 2.1.2 The non-stationary I(1) specification
In case the data are \(I(1)\), a few adjustments are needed, as suggested by the recent work of Barigozzi et al. (2021). Let us suppose we have an \(N\)-vector \(x_{t}\) of non-stationary time series. Allowing for a deterministic trend, one can still describe these as the sum of two orthogonal unobservable components, namely a common and an idiosyncratic component, as
\[x_{t}=\alpha+\beta t+\chi_{t}+\xi_{t} \tag{6}\]
where \(\alpha\) is a vector of constants and \(\beta t\) is the linear trend. In this framework, moreover, the factors are assumed to be \(I(1)\) and the idiosyncratic components are either \(I(0)\) or \(I(1)\).
The procedure in this case is as follows: (i) we estimate the number of static factors \(r\) and dynamic factors \(q\) on the \(I(0)\) transformed data; (ii) we estimate the loadings \(\Lambda\) from the differenced data by means of principal components; (iii) given \(\Lambda\), we retrieve \(F_{t}\) from the non-stationary dataset; (iv) we estimate \(\alpha\) and the coefficient \(\beta\) associated with the trend and we project \(\hat{\tilde{x}}_{t}=x_{t}-\hat{\alpha}-\hat{\beta}t\) on the previously estimated loadings; (v) having \(F_{t}\), we derive the MA representation of \(\chi_{t}\), analogously to equation (5), applying either a VAR-in-levels specification or a VECM. The latter case requires the estimation of the number of cointegration relationships \(d\).
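A rough Python sketch of steps (ii)-(iv), under the simplifying assumption that the deterministic terms are estimated equation by equation by OLS:

```python
import numpy as np

def dfm_i1_factors(X, r):
    """X is T x N in levels: loadings from differenced data (step ii),
    detrending (step iv), and projection on the loadings (step iii)."""
    T, N = X.shape
    dX = np.diff(X, axis=0)
    dXs = (dX - dX.mean(0)) / dX.std(0)
    w, V = np.linalg.eigh(np.cov(dXs, rowvar=False))
    Lam = V[:, ::-1][:, :r]                       # loadings from I(0) data
    # remove constant and linear trend, variable by variable
    Z = np.column_stack([np.ones(T), np.arange(T)])
    X_detr = X - Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # project detrended levels on the loadings to recover the I(1) factors
    F = X_detr @ Lam @ np.linalg.inv(Lam.T @ Lam)
    return Lam, F
```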
### 2.2 External Instrument Identification in a DFM environment
To identify the structural shocks from the estimated reduced-form shocks in (3), we follow the _external instrument_ procedure developed by Stock and Watson (2012) and Mertens and Ravn (2013). We start by describing the standard procedure in a square system such as a VAR model, given that most of the assumptions also apply in a DFM framework.
Let us assume we observe a variable \(z_{t}\) satisfying the following conditions:
\[\mathbb{E}_{t}(z_{t}\eta_{it})=\alpha \tag{7}\] \[\mathbb{E}_{t}(z_{t}\eta_{-it})=0 \tag{8}\]
with \(\eta_{it}\) representing the structural shock we want to identify and \(\eta_{-it}\) the remaining structural shocks. The two conditions above state that the variable \(z_{t}\) needs a) to be correlated with the structural shock \(\eta_{it}\) we want to estimate and b) to be orthogonal to all the remaining structural shocks.
If the two conditions hold, one can retrieve the structural shock \(\eta_{it}\) from the estimated reduced-form shock \(u_{it}\) by regressing \(u_{it}\) on the instrument \(z_{t}\) and then properly rescaling the coefficient coming from the regression. This, however, automatically makes it impossible to retrieve a unit-variance shock and to perform variance and historical decompositions. Moreover, this way of proceeding would not fit in a DFM framework. Indeed, the methodology would require the choice of the estimated reduced-form residual which is directly associated with the policy variable. This is always possible in a VAR setting, given that it follows a non-singular square \(N\times N\) model, whereas the DFM is characterised by a singular \(N\times q\) representation, which makes a clear one-to-one link between variables and shocks impossible.
A viable solution is offered by Forni et al. (2022), who describe an alternative way to estimate a unit-variance shock using external instrument identification in a non-singular square system such as the VAR.7 We borrow from them and consider the projection of the instrument, \(z_{t}\), on the vector of residuals, \(u_{t}\), as follows
Footnote 7: A similar procedure is employed by Alessi and Kerssenfischer (2019), which, however, is unable to retrieve the unit-variance shock and, therefore, to perform a variance and historical decomposition analysis.
\[z_{t}=\delta^{\prime}u_{t}+v_{t} \tag{9}\]
The unit-variance shock and the respective IRFs can be estimated as
\[\hat{\eta}_{it}= \frac{\hat{\delta}^{\prime}\hat{u}_{t}}{\sqrt{\hat{\delta}^{ \prime}\widehat{\Sigma}_{u}\hat{\delta}}} \tag{10}\] \[C_{\chi,i}(L)= B_{\chi}(L)\widehat{\Sigma}_{u}\frac{\hat{\delta}}{\sqrt{\hat{ \delta}^{\prime}\widehat{\Sigma}_{u}\hat{\delta}}} \tag{11}\]
where \(\Sigma_{u}\) is the variance-covariance matrix of \(u_{t}\), which is equal to the identity matrix in our framework.8 We obtain that the instrument \(z_{t}\) is correlated with the structural shock of interest \(\eta_{it}\), which is spanned by all the \(u_{t}\), and that it is orthogonal to all the remaining
structural shocks \(\eta_{-it}\). As a direct consequence, we do not need to choose a specific variable to instrument, as is the case in standard VARs - a choice that has often left room for discussion in the literature, providing an additional source of sensitivity that could reduce the external validity of the results.9
Footnote 9: One recent example is Bauer and Swanson (2022), which argue that the two-year Treasury yield is a better measure of the monetary policy stance than the one-year Treasury yield used by Gertler and Karadi (2015).
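In code, the identification in (9)-(11) amounts to one linear projection and a rescaling; in the Python sketch below, the arrays `u`, `z` and the MA coefficients `B` are assumed to come from the estimation step described in Section 2.1:

```python
import numpy as np

def proxy_identify(u, z, B):
    """u: T x q reduced-form shocks; z: length-T instrument;
    B: H x N x q array of MA coefficient matrices B_chi(h).
    Returns the unit-variance shock (10) and its IRFs (11)."""
    delta = np.linalg.lstsq(u, z, rcond=None)[0]          # Eq. (9)
    Sigma_u = np.cov(u, rowvar=False)
    scale = np.sqrt(delta @ Sigma_u @ delta)
    eta = u @ delta / scale                               # Eq. (10)
    irf = B @ (Sigma_u @ delta) / scale                   # Eq. (11)
    return eta, irf
```

Since \(u_{t}\) is orthonormal in our framework, \(\widehat{\Sigma}_{u}\) is close to the identity, but it is kept explicit to match (10)-(11).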
## 3 Non-fundamentalness and Proxy identification
This section aims at showing some of the possible advantages offered by the proxy identification strategy in the DFM framework, as described in Section 2.2, compared to a standard VAR model. We conduct the first part of the analysis by deploying a theoretical macroeconomic model. Specifically, we borrow the simple Real Business Cycle (RBC) model of Leeper et al. (2013). This model is characterized by log preferences, inelastic labor supply, full capital depreciation and fiscal foresight. The latter, in particular, is what matters in our case. In fact, in the presence of fiscal foresight, agents are able to foresee future values of the tax rate and behave according to such information.
The evolution of capital through time is dictated by the following equation
\[k_{t}=\alpha k_{t-1}+a_{t}-\kappa\sum_{i=0}^{\infty}\theta^{i}E_{t}\tau_{t+i+1} \tag{12}\]
where \(0<\alpha<1,|\theta|<1,\kappa=(1-\theta)\tau/(1-\tau)\), \(\tau\) is the steady-state tax rate, \(k_{t}\) is capital, \(a_{t}\) is technology and \(\tau_{t}\) is the tax rate. All the variables are expressed as log deviations from their steady-state values.
For the sake of simplicity, technology and the tax rate are assumed to follow two _i.i.d._ processes, \(u_{a,t}\) and \(u_{\tau,t}\), respectively. In order to allow for fiscal foresight, it is assumed that
\[\tau_{t}=u_{\tau,t-h}\]
Depending on the value of \(h\), we face different scenarios. When \(h=0\), equation (12) reduces to \(k_{t}=\alpha k_{t-1}+a_{t}\), _i.e._, the representative agent does not have information about the future. As a consequence, capital accumulation does not depend on the tax shock. In this situation, the information set of the econometrician is aligned with that of the agent. A completely different case, instead, arises when \(h>0\). Since the information set at time \(t\) of the agent includes present and past observations of \(u_{a,t}\) and \(u_{\tau,t}\), she has knowledge about future values of \(\tau_{t}\), according to which capital will be adjusted. Such knowledge generates a misalignment between the two information sets, causing non-fundamentalness of the shocks.
In what follows, we consider the case with \(h=2\), which is characterized by the following equations
\[a_{t} =u_{a,t}\] \[k_{t} =\alpha k_{t-1}+a_{t}-\kappa(u_{\tau,t-1}+\theta u_{\tau,t})\] \[\tau_{t} =u_{\tau,t-2}\]
which implies the MA representation
\[\left(\begin{array}{c}a_{t}\\ k_{t}\\ \tau_{t}\end{array}\right)=\left(\begin{array}{cc}0&1\\ \frac{-\kappa(L+\theta)}{1-\alpha L}&\frac{1}{1-\alpha L}\\ L^{2}&0\end{array}\right)\left(\begin{array}{c}u_{\tau,t}\\ u_{a,t}\end{array}\right)=B(L)u_{t}. \tag{13}\]
As pointed out by Forni et al. (2020), \(u_{t}\) is fundamental for none of the square subsystems generated by the possible pairs of \(y_{t}=(a_{t},k_{t},\tau_{t})^{\prime}\). On the other hand, considering all three variables is sufficient to recover \(u_{t}\).10
Footnote 10: The fact that adding information helps in recovering the true structural shocks is not new in the literature. See, _inter alia_, Giannone and Reichlin (2006).
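The true responses implied by (13) can be computed directly. The following Python sketch, using the parametrization adopted later in Section 3.1, traces the anticipation effect on capital:

```python
import numpy as np

alpha, theta, tau = 0.36, 0.2673, 0.25
kappa = (1.0 - theta) * tau / (1.0 - tau)

H = 12
k_irf = np.zeros(H)
tau_irf = np.zeros(H)
tau_irf[2] = 1.0                      # tau_t = u_{tau,t-2}
k_irf[0] = -kappa * theta             # anticipation effect at impact
k_irf[1] = alpha * k_irf[0] - kappa   # from -kappa(L + theta)/(1 - alpha L)
for h in range(2, H):
    k_irf[h] = alpha * k_irf[h - 1]
print(np.round(k_irf, 4))   # capital falls two periods before the tax hike
```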
### 3.1 Simulation
We propose a simulation exercise to quantitatively assess to what extent non-fundamentalness threatens the results coming from the empirical analysis. The exercise is very close in spirit to the one performed in Forni et al. (2020) and Miescu and Mumtaz (2019).
We consider a static factor model representation of the form
\[\chi_{t}=\Lambda F_{t} \tag{14}\]
where \(\chi_{t}\) is an \(n\)-dimensional vector of economic variables, \(\Lambda\) is an \(n\times r\) matrix of loadings and \(F_{t}\) is a vector of static factors of dimension \(r\).
Let us define \(F_{t}=(k_{t},u_{a,t},u_{\tau,t},u_{\tau,t-1},u_{\tau,t-2})^{\prime}\), which is assumed to have the following VAR(1) dynamics
\[F_{t}=AF_{t-1}+Bu_{t} \tag{15}\]
where
\[A=\left(\begin{array}{ccccc}\alpha&0&-\kappa&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\end{array}\right)\qquad B=\left(\begin{array}{cccc}1&-\kappa\theta\\ 1&0\\ 0&1\\ 0&0\\ 0&0\end{array}\right)\qquad u_{t}=\left(\begin{array}{c}u_{a,t}\\ u_{\tau,t}\end{array}\right)\]
Having the factors \(F_{t}\), one can easily find that \(y_{t}=\bar{\Lambda}^{x}F_{t}\), where \(y_{t}=(a_{t},k_{t},\tau_{t})^{\prime}\) and
\[\bar{\Lambda}^{x}=\left(\begin{array}{ccccc}0&1&0&0&0\\ 1&0&0&0&0\\ 0&0&0&0&1\end{array}\right)\]
We assume the econometrician observes \(\chi_{t}=(x_{t}^{\prime},{x_{t}^{*}}^{\prime})^{\prime}\), where \(x_{t}=(\tau_{t},k_{t})^{\prime}\) is a non-fundamental subsystem of \(y_{t}\) and \(x_{t}^{*}\) is a set of \(n\) survey series generated artificially. The cross-sectional dimension of the entire sample is thus \(N=n+2\). The series in \(x_{t}^{*}\) are generated by a linear transformation of the factors, given by \(\Lambda^{*}F_{t}\), where the entries of \(\Lambda^{*}\) are drawn from independent \(N(0,1)\).
Considering \(\Lambda^{x}\) as the third and second rows of \(\bar{\Lambda}^{x}\), in that order, we can rewrite equation (14) as
\[\left(\begin{array}{c}x_{t}\\ x_{t}^{*}\end{array}\right)=\left(\begin{array}{c}\Lambda^{x}\\ \Lambda^{*}\end{array}\right)F_{t} \tag{16}\]
Finally, we assume that the variables are observed with measurement error
\[Y_{t}=\chi_{t}+\xi_{t}=\Lambda F_{t}+\xi_{t} \tag{17}\]
where \(\xi_{it}\sim N(0,\sigma_{i})\) and \(\sigma_{i}\sim U(0,\nu)\), with \(\nu\in\{0.5,2,5\}\). Higher values of \(\nu\) correspond to a higher share of measurement error.
We simulate 1000 different datasets of \(T=200\) observations from the model presented above, using the parametrization of Leeper et al. (2013): \(\alpha=0.36,\theta=0.2673,\tau=0.25\) and \(u_{t}\sim N(0,I)\). We arbitrarily fix \(n=100\), so as to have a sufficiently large cross-section.
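A Python sketch of the data-generating process of equations (15)-(17), for a single replication and a given \(\nu\), reads as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, theta, tau = 0.36, 0.2673, 0.25
kappa = (1.0 - theta) * tau / (1.0 - tau)
T, n, nu = 200, 100, 0.5

# VAR(1) dynamics of F_t = (k_t, u_{a,t}, u_{tau,t}, u_{tau,t-1}, u_{tau,t-2})'
A = np.zeros((5, 5))
A[0, 0], A[0, 2] = alpha, -kappa
A[3, 2], A[4, 3] = 1.0, 1.0
B = np.array([[1.0, -kappa * theta],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])

u = rng.standard_normal((T, 2))       # u_t = (u_{a,t}, u_{tau,t})'
F = np.zeros((T, 5))
for t in range(1, T):
    F[t] = A @ F[t - 1] + B @ u[t]

Lam_x = np.array([[0.0, 0.0, 0.0, 0.0, 1.0],   # tau_t
                  [1.0, 0.0, 0.0, 0.0, 0.0]])  # k_t
Lam_star = rng.standard_normal((n, 5))         # loadings of the n extra series
Lam = np.vstack([Lam_x, Lam_star])
chi = F @ Lam.T                                # common components, Eq. (16)
sigma = rng.uniform(0.0, nu, size=n + 2)       # idiosyncratic st. deviations
Y = chi + rng.standard_normal((T, n + 2)) * sigma   # observables, Eq. (17)
```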
For each dataset, we estimate a standard VAR, a DFM and, in addition, the Factor Augmented VAR (FAVAR) of Bernanke et al. (2005).11 Unless otherwise stated, in each
case we set the lag order equal to 2. 12
Footnote 12: We repeated the estimation considering more lags and the findings are virtually identical. Results are available from the authors upon request.
While DFM and FAVAR estimation involves the whole sample, the VAR analysis includes only a few key variables, specifically capital and the tax rate, though we also add technology in some exercises. Moreover, the DFM is estimated with parameters \(r=5\) and \(q=2\), whereas the FAVAR is based on a bivariate VAR augmented with three additional factors.13
Footnote 13: In this way the space generated is the same as the one generated by the \(F_{t}\) of the DFM.
To identify the shock from the simulated series for all three models, we apply the external instrument procedure described in Section 2.2. As an instrument, we consider the structural shock itself, _i.e._, \(u_{\tau,t}\), which is the best instrument one can possibly have.14 Furthermore, we repeat each exercise with different and increasing values of \(\nu\), so that we are also able to observe the role of the measurement error.
Footnote 14: We also consider instruments of different quality. Since the results do not differ substantially from those presented here, we report them as robustness checks in Appendix B.
_Notes: black dotted lines are the theoretical IRFs, the red lines are the responses obtained estimating a VAR with different specifications, along with their \(68\%\) confidence bands in grey. The empirical IRFs are computed as the sample average of the responses obtained across simulations. Going from the upper panel to the lower one, we report the results for a bivariate VAR(3) on capital and tax rate observed without measurement error, for a trivariate VAR(3) on capital, tax rate and technology without measurement error, and finally for the same trivariate VAR(3) observed with a small measurement error, specifically \(\nu=0.5\)._
Figure 1: Comparison of VAR specifications
### 3.2 Simulation results
In this section we show the possible issues that may affect the IRFs estimated with a proxy identification within a VAR framework.
Figure 1 plots the responses of capital and the tax rate to a tax shock, for three different VAR specifications. In each case, the series are simulated with two-period fiscal foresight and the IRFs are computed as the average across the model simulations. The black dotted lines in the figure represent the true model responses. The agents know two periods in advance how and when a tax shock will hit, therefore they adjust capital accordingly. However, when the same responses are estimated from the data simulated from the same model, the econometrician is not always able to recover the true responses. In fact, the results depend on two different aspects: first, on the information set at the econometrician's disposal and, second, on whether what is observed is contaminated by measurement error.
The results reported in the upper panels of Figure 1 are derived from a bivariate VAR on capital and the tax rate, where no measurement error is added. Clearly, the VAR model is unable to recover the structural shock, even without measurement error contaminating the estimation, thus providing misleading impulse responses. Indeed, as explained in the previous section and in more detail in Leeper et al. (2013), considering only capital and the tax rate is insufficient to solve the non-fundamentalness issue, leading to misspecification. In the middle panels we show that adding a third variable, namely technology, is crucial for the reliability of the VAR estimates. The additional variable allows the VAR empirical responses to retrace almost perfectly their theoretical counterparts. Nevertheless, in reality, variables are often observed with measurement errors. We then augment the same trivariate system with a small measurement error (obtained setting \(\nu=0.5\)), and proceed in an identical manner. The results, shown in the last panels of Figure 1, highlight that the overall estimates of the true IRFs can still be distorted by measurement errors, despite the system itself being fundamental.
Therefore, the estimated responses following a proxy identification procedure can in principle be affected by both the non-fundamentalness and the measurement error issues. As a solution, we propose to apply the same identification procedure within a DFM framework. The comparison is given in Figure 2, which complements the results displayed in the previous chart. In this case, we still have a model with two-period fiscal foresight, with the true impulse responses of the tax rate and capital to a tax shock depicted as black dotted lines. We add different levels of measurement error to the variables - specifically considering \(\nu=0.5,2\) and \(5\) - and we then re-estimate the shock following the same proxy identification technique for each level of \(\nu\). The estimated IRFs are reported in blue, red and yellow for each level of the added measurement error.
Moreover, in Figure 2 we compare the results coming from three different models: bivariate VAR (top panel), FAVAR (middle panel) and DFM (bottom panel). Not surprisingly, the
higher the measurement error, the more distorted are the estimated responses of the VAR. Conversely, both the FAVAR and the DFM successfully recover the true responses, although we observe an increase in the estimation bias in the former as the error increases. These results underline the usefulness of expanding the information set at the econometrician's disposal also in the proxy identification procedure: not only does it help solve the non-fundamentalness issue by construction, but one can also better deal with the measurement error problem, which usually affects macroeconomic time series. Specifically on this matter, the DFM behaves better than the FAVAR because in the latter some of the variables are observed with measurement error, a feature which can lead to a contamination of the estimated IRFs, as we will observe later in this section.
Figure 8 complements the above picture, though focusing this time on the underlying shock which is estimated by the different models. The chart is obtained following the same procedure explained above for the IRFs, and it depicts the true shock simulated from the model (black dotted line) along with the unit-variance shocks obtained from a VAR (top
Figure 2: IRF of tax rate and capital to a tax shock, with a two-period fiscal foresight.
panel), FAVAR (middle panel) and DFM (bottom panel), which are estimated each time with a different level of measurement error.15 As evident, the non-fundamental VAR is never able to recover the true structural shock, and higher measurement errors contribute to further distorting the estimated shock. On the other hand, the FAVAR and the DFM perform much better, especially the latter, which shows great robustness.
Footnote 15: As for the IRFs, the chart plots the mean of the distribution of shocks which are estimated for each model simulation.
In Table 2, we also report the Frobenius norm computed between the true model IRFs and structural shocks and their estimated counterparts. The results corroborate what we have seen so far. Using the VAR as a benchmark, we show that the additional information exploited by the FAVAR and the DFM is effective. Let us focus on the specific case when the model is simulated with fiscal foresight: here both the DFM and the FAVAR largely outperform the VAR. As \(\nu\) increases their results tend to converge to those of the VAR, but at different rates: the DFM converges at a lower rate than the FAVAR, suggesting a higher reliability of the DFM results.
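As a concrete illustration, the distance measure reported in Table 2 can be computed as below. This is a minimal sketch assuming the true and estimated IRFs are stacked as (horizon x variables) arrays; the same function applies to the estimated shock series arranged as a two-dimensional array.

```python
import numpy as np

def frobenius_distance(true_irf, est_irf):
    """Frobenius norm of the gap between true and estimated IRFs.

    true_irf, est_irf: arrays of shape (horizon, n_variables); smaller
    values indicate estimates closer to the model truth.
    """
    return np.linalg.norm(true_irf - est_irf, ord="fro")
```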
For the sake of completeness, we also briefly comment on the results obtained following the same procedure, but with a model with _no fiscal foresight_. In this model specification, the econometrician can estimate a bivariate VAR from the simulated series of the tax rate and capital and have a sufficient information set, _i.e._, the VAR is fundamental.16 However, although the results are quantitatively different from the case studied in the previous paragraphs, they still lead to analogous conclusions. We spend only a few words describing the results of Figure 9, as it is not crucial in our analysis. Here we observe that the VAR estimated on the simulated series with a small measurement error (blue line) is now able to estimate reliable responses, which are almost identical to those of the other models. However, as the measurement error increases (red and yellow lines), the VAR estimation deteriorates quickly, whereas the other two models remain consistent. We refer again to Table 2, which helps us complete the overall picture. By looking at the Frobenius norm, it is evident that as the measurement error increases, both the DFM and the FAVAR perform better with respect to the VAR.17
Footnote 16: See Forni and Gambetti (2014) for a discussion.
Footnote 17: This result seems in contrast with what was observed before under the case of fiscal foresight, where an increase in measurement error was pushing the factor models closer to the VAR in terms of distance. A possible explanation lies in the different rates of deterioration. If the variables are only affected by measurement error, the VAR deterioration rate is higher than those of the other two models. This is because the latter models are able to better handle the issue and manage to better recover the theoretical results. Thus, when this error increases, the DFM and the FAVAR perform relatively better than the VAR. Conversely, when there is fiscal foresight, the VAR is informationally deficient and, consequently, is already largely distorted by non-fundamentalness. The distortion due to the increasing measurement error adds up to the pre-existing one, but the sum deteriorates at a slower rate with respect to the DFM and the FAVAR, reducing the performance gap. The conclusion we draw from the table is: the VAR is largely affected by measurement error, but even more by informational deficiency. In other words, if the former is of small size - and the shock to be estimated is fundamental - the VAR may be comparable to the DFM and the FAVAR and its results are reliable. Conversely, if the shock is non-fundamental, even if we do not observe any measurement error contaminating the series, the estimation will always be distorted.
As a final note, we stress another point which is also important for the estimation of the IRFs, namely the choice of variables and the overall model specification. We do so through an exercise which differs from the ones shown above, and which demonstrates that not only the VAR, but also the FAVAR, is highly dependent on such choices. We proceed as follows: we estimate a trivariate VAR and a FAVAR having three variables observed with measurement error plus two factors. We fix the first two observed variables in both the VAR and the FAVAR to be capital and the tax rate, whereas the third variable changes at each iteration; a minimal sketch of this loop is given below. The measurement error, instead, is always kept equal to \(\nu=0.5\). Figure 7 compares the \(n\) different estimates of the two models, with the VAR shown in the top panel and the FAVAR in the middle one. As evident, it is sufficient to vary only one variable at a time to obtain responses that greatly differ across specifications. In some cases, the results may end up being very misleading. This happens because each observed variable brings into the model a different type of information, along with extra measurement error, which can therefore contaminate the results. Conversely, the DFM does not have this weakness by construction: the econometrician already has the entire information set at his disposal, and the variables are already cleaned of measurement error.
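The sketch below illustrates this specification-rotation exercise under stated assumptions: `capital`, `tax_rate` and the `candidates` list are hypothetical arrays of simulated series, and the Cholesky-orthogonalized IRFs again stand in for the proxy identification step.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def rotate_third_variable(capital, tax_rate, candidates, nu=0.5,
                          lags=4, horizon=20, seed=0):
    """Re-estimate a trivariate VAR for each candidate third variable."""
    rng = np.random.default_rng(seed)
    irfs = []
    for third in candidates:  # one extra variable at a time
        data = np.column_stack([capital, tax_rate, third])
        data = data + nu * rng.standard_normal(data.shape)  # nu = 0.5 error
        res = VAR(data).fit(lags)
        irfs.append(res.irf(horizon).orth_irfs)  # placeholder identification
    return irfs  # one IRF array per specification, as in Figure 7
```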
## 4 Empirical application
In this section, we provide a detailed account of our empirical analysis by describing the data and the model specification used. We present our findings in two sub-sections. Firstly, we compare the impulse responses obtained using a Proxy VAR versus a Proxy DFM, identifying the monetary policy shock with a broad range of external instruments available in the literature. Then, we explore the transmission of monetary policy shocks in the United States, using the Proxy DFM identified with all the instruments previously analysed. Finally, we present the variance decomposition obtained using the new "unit-variance shock" methodology described in Section 2.2.
### Data, specifications and procedure
We use data from the FRED monthly dataset, which is described in McCracken and Ng (2016), covering the period from January 1963 to December 2018. The dataset contains \(N=100\) macroeconomic and financial variables. For the VAR model, we specify a _core_ subset of variables, which includes the industrial production index, the unemployment rate, the consumer price index (CPI), and the policy rate. We use the one-year Treasury yield as the policy variable, as is common in the literature. In contrast, the DFM includes all available variables in the dataset. The variables are left in levels or log-levels and are not
transformed to reach stationarity.18
Footnote 18: It is important to note that we do not perform a VECM estimation in either the VAR or DFM cases. Sims (1980) and Sims et al. (1990) show that the cointegration relationship is correctly taken into account within a standard VAR in levels, at least at short horizons. Similarly, as shown by Barigozzi et al. (2021), the VAR estimated on \(I(0)\) static factors produces IRFs that, at a short horizon, are equal to the VECM specification, without the need to explicitly estimate the number of cointegration relationships.
We estimate a VAR with \(p=8\) lags, chosen based on both the Akaike and Schwarz information criteria, which suggest \(p=11\) and \(p=6\), respectively. The estimation of the DFM is based on Section 2.1.2, and we determine the number of static and dynamic factors using the tests provided by Bai and Ng (2002) and Hallin and Liska (2007), respectively. Based on these tests, we set the number of static factors to \(\hat{r}=9\) and the number of dynamic factors to \(\hat{q}=4\), which represents our baseline parameter specification of the model. To maintain consistency with the VAR, we set the number of lags to \(p=8\). We conduct a robustness analysis, presented in Appendix E, and find that the results are virtually identical when varying \(p\), which holds true for both the VAR and DFM models.
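For concreteness, the choice of the number of static factors can be sketched with the Bai and Ng (2002) information criterion; the specific IC_p2 variant used here is an assumption, as the text only states that their tests are applied. `X` is the standardized (T x N) data panel.

```python
import numpy as np

def bai_ng_static_factors(X, r_max=15):
    """Select the number of static factors with the Bai-Ng IC_p2 criterion."""
    T, N = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # principal components
    penalty = ((N + T) / (N * T)) * np.log(min(N, T))
    ic = []
    for r in range(1, r_max + 1):
        F = U[:, :r] * s[:r]            # (T x r) estimated factors
        common = F @ Vt[:r, :]          # common component with r factors
        V = np.mean((X - common) ** 2)  # average squared idiosyncratic residual
        ic.append(np.log(V) + r * penalty)
    return int(np.argmin(ic)) + 1       # r_hat minimizing the criterion
```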
After having estimated the models, we identify the structural shock of interest exploiting the external information provided by the proxy (see Section 2.2). The baseline analysis is based on GK, which covers a period stretching from January 1990 to June 2012. However, we also offer a comparison with instruments developed by RR, MAR, JK, and BS.19 As mentioned in Section 2.2, our new procedure has the advantage of being agnostic regarding the choice of the policy variable to instrument. Indeed, we do not need to choose between the one- and two-year Treasury yield to capture the monetary policy stance, as long as the information of the structural shock we want to estimate (in our case, the monetary policy shock) is spread throughout the \(q\) estimated reduced-form innovations.
Footnote 19: Regarding RR, we take the version extended by Miranda-Agrippino and Rey (2020).
### A VAR-DFM comparison across instruments
The aim of this section is to compare the estimated impulse responses from the _core_ specification of the VAR identified with the most common instruments in the literature, namely GK, RR, MAR, JK, and BS, with those of the DFM obtained with the same instruments. The comparison is shown in Figure 3. Three main points can be made here: firstly, there are significant differences between the impulse responses estimated with the two models; secondly, in most cases the VAR model produces results that are at odds with standard macroeconomic theory, with both price and output puzzles, whereas the DFM model does not; lastly, the VAR estimates show large variability across instruments, while by construction the DFM is unaffected by this issue.
Overall, the VAR estimates show that a contractionary shock raises industrial production and prices and lowers the unemployment rate. This issue is related to the "information
effect," i.e., the instrument not only contains information about the underlying monetary policy shock, but at the same time it also carries information, or _news_, on the future macroeconomic outlook which is implicitly communicated by the central banks when during the policy announcements. Therefore, the instrument is not exogenous and can be predicted with any publicly available information at the time of the FOMC announcement. The puzzles are evident for the responses obtained for all the instruments except with MAR. This is because the authors have developed an informationally-robust instrument that combines the high-frequency approach with the central bank's information set (Greenbook forecasts) to control for the macroeconomic information revealed by the central bank with its policy change.20 This is not the only attempt in the literature, and others have addressed the same issue in different ways. Jarocinski and Karadi (2020) take a high-frequency approach and distinguish a true monetary policy shock from an information shock by looking at the comovement of interest rates and stock prices around policy announcements. Bauer and Swanson (2022) obtain the series of high-frequency monetary policy surprises and then orthogonalize them using a set of predictors that are closely related to the Fed's monetary policy rule. Given our interest in exploring the role of our model in eliminating any kind of puzzle in the estimation of impulse responses without subjectively choosing how to _clean_ the instrument, we use the raw high-frequency surprise series for JK and BS.21
Footnote 20: The main assumption of this paper is that if the econometrician can correctly identify the monetary policy shock, then private agents should respond to it as a true monetary policy shock. The problem, however, is that private agents do not have the Greenbook forecasts in real time, as they are released to the public with a five-year lag.
Footnote 21: The _cleaned_ version for MAR is simply GK.
On the other side, performing the same identification strategy in a DFM environment implicitly solves the information effect for those instruments which were affected. This is linked to the large dimension of the data at the econometrician's disposal: including variables which also carry information about the future helps _purge_ the responses of the news component. Therefore, the estimated effect of monetary policy using the Proxy DFM yields puzzle-free impulse responses across all instruments: a contractionary shock that raises the one-year government bond yield lowers industrial production and prices (with some lag for some instruments), while raising the unemployment rate. Clearly, the use of a model that incorporates a large amount of information is critical to recover plausible monetary policy responses, no matter the instrument.
The problem of a limited information set in small- and medium-scale models also leads to large variability in the estimates when including additional variables (see Forni et al., 2020). We proceed along their lines and show the impulse responses for the _core_ variables when one additional variable is added to the model at a time, for a total of 95 different specifications. We also test for each model specification whether the shock is
invertible. To do this, we use the test recently proposed by Forni et al. (2022), where the proxy of interest is projected on the current value and the first \(r\) leads of the Wold residuals \(v_{t}\) as follows:
\[z_{t}=\sum_{k=0}^{r}\hat{\gamma}_{k}^{{}^{\prime}}\hat{v}_{t+k}+\hat{\xi}_{r,t} \tag{18}\]
where \(z_{t}\) is the proxy, and \(v_{t}\) are the VAR reduced-form residuals. We test for invertibility using the F-test for the significance of the regressors, with the null hypothesis being \(H_{0}:\gamma_{1}=\gamma_{2}=\cdots=\gamma_{r}=0\) against the alternative that at least one of the coefficients is non-zero.22 We estimate the regression in equation (18) with \(r=8\) leads and a 5% confidence level. We choose this calibration because too many leads would introduce significant noise in the regression and too few would be insufficient, both of which would undermine the validity of the test. If the shock in the estimated VAR specification is invertible, then the corresponding impulse response is colored in yellow. Grey lines, instead, represent models with non-invertible shocks.
Footnote 22: The regression does not include a constant as \(E(z_{t}|x=0)=0\), where \(x=\sum_{k=0}^{r}\hat{\gamma}_{k}^{{}^{\prime}}\hat{v}_{t+k}\).
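A minimal sketch of this invertibility test, assuming the proxy `z` and the Wold residuals `v` are already estimated, might look as follows; the regression omits the constant (footnote 22), and the F-test restricts only the lead coefficients, so a rejection signals a non-invertible shock.

```python
import numpy as np
import statsmodels.api as sm

def invertibility_ftest(z, v, leads=8):
    """F-test of the proxy on current and lead Wold residuals (eq. 18)."""
    T, n = v.shape
    rows = T - leads
    # Design matrix: [v_t, v_{t+1}, ..., v_{t+leads}], no constant
    X = np.hstack([v[j:rows + j, :] for j in range(leads + 1)])
    y = np.asarray(z)[:rows]
    res = sm.OLS(y, X).fit()
    # Null: all lead coefficients (columns after the first n) equal zero,
    # i.e. the shock is invertible; rejection signals non-invertibility.
    R = np.zeros((n * leads, n * (leads + 1)))
    R[:, n:] = np.eye(n * leads)
    return res.f_test(R)
```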
The results are plotted in Figure 4 and indicate that the additional variable added at each iteration to the _core_ specification is sufficient to bring in additional information, along with measurement error, that can significantly affect the estimated impulse responses. However, it is often not sufficient to overcome the puzzles in the estimates, and even when the impulse responses are estimated with a cleaned instrument (MAR), some specifications produce price puzzles. Overall, the high sensitivity of these estimates underscores the importance of carefully selecting the variables to be included in the model.23 An additional issue highlighted in the figure is that in most of the cases the shock is not invertible (grey lines). Conversely, the DFM specification does not suffer from the above problems because it is able to deal with a large number of variables simultaneously, which is not possible in the VAR framework due to the curse of dimensionality. Lastly, as described in Section 2.1.1, the DFM by construction cleans the data of measurement error, thereby eliminating the corresponding bias in the estimated responses.
Footnote 23: This problem affects not only the impulse responses but also the underlying structural shocks.
### Propagation Channels of the Monetary Policy Shock
This section further investigates the various channels of monetary policy (see Mishkin, 1995, for a survey) and, at the same time, shows the power of the Proxy DFM, which successfully estimates robust, puzzle-free impulse responses across all the analysed external instruments. Compared to VAR models, a distinct advantage of our methodology lies in the possibility of studying a wider range of variables at the same time, allowing for a broader understanding of the monetary policy transmission mechanism. Using the newly developed "unit-variance shock", we are also able to estimate the variance decomposition, which was not previously feasible within the external instrument approach (see Section 2.2).
Figure 5 plots the impulse responses of a representative sample of variables selected to explore the transmission channels of monetary policy. Specifically, we look at measures of economic activity, the labor market, housing, financial markets, exchange rates, and uncertainty. The figure reports the median impulse responses obtained from the GK instrument, along with the 68 and 95 percent confidence bands. However, the median responses using
Figure 3: VAR vs DFM: proxy comparison
all of the other instruments analysed in the previous section are also reported. Notice that, for comparability reasons, all the results are normalized to a 100 basis point increase in the one-year Treasury yield at impact, a fairly common normalization in the literature.
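This normalization amounts to a simple rescaling of each structural IRF; a minimal sketch, assuming the yield is measured in percent so that 100 basis points equal one unit:

```python
def normalize_to_100bp(irf, policy_idx):
    """Rescale IRFs so the policy yield rises by 100bp (1 unit) on impact.

    irf: (horizon x n_variables) structural IRF array
    policy_idx: column index of the one-year Treasury yield
    """
    scale = 1.0 / irf[0, policy_idx]  # inverse of the impact response
    return irf * scale
```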
#### Real Economy and Labor Market
Figure 5 shows that a contractionary monetary policy has a negative effect on the economy. Industrial production reacts with a delay of two months and builds up gradually over time to reach its peak impact after 10 months at -2% for GK before returning to its original level, while the other instruments show slightly higher magnitudes, though still within the confidence bands.24 All the other real variables follow a similar path, even though no restrictions on
Figure 4: VAR vs DFM with GK identification: IRFs comparison
the shape of the impulse responses are imposed. This suggests that monetary policy shocks are transmitted to the real economy with a lag of a few months, in line with what monetary policymakers believe. Indeed, real consumption, real income, and capacity utilization decline in a hump-shaped manner following the shock, reaching the maximum impact just before the end of the first year before returning to their trend, with a peak magnitude ranging between -1% and -3%. Real consumption, on the other hand, seems to react more quickly to the shock, which can be explained by the sharp drop in house prices, a mechanism well documented in the literature (e.g., Mian et al., 2013; Slacalek et al., 2020), a result we will comment on later. At the same time, business sales, business inventories, and new orders for durable goods all contract in line with industrial production. Turning to the labor market, the unemployment rate shows a sluggish increase, with no response at impact. The increase starts only from the second month onwards and reaches its peak of around 0.5% after about ten months (see e.g., Christiano et al., 1999). Average real earnings do not seem to react significantly to a monetary shock at impact, but progressively decrease over time, confirming the rather sluggish nature of real wages and possibly suggesting the presence of frictions in the economy, an idea that is now found in almost all standard macro-theoretical models.
#### Housing Market
The analysis of both housing market and financial variables can further help us better understand which other channels are at play in the propagation of the monetary policy shock. Housing investment contracts sharply following a monetary policy tightening, with both housing starts and new housing permits shrinking at impact by around 10% and reaching almost -20% at peak within six months, though they also exhibit a rather short-lived response. Consistently, house prices decrease by 0.5% at impact for GK, MAR and BS, and all the instruments point to a more long-lasting effect compared to the other housing sector indicators. Overall, the high sensitivity of the housing market to monetary policy shocks may directly impact household balance sheets, as also confirmed by the sharp reduction in consumption expenditure. As underlined by recent studies, this channel might be more relevant than previously believed. Indeed, when combining the large share of real estate in the total assets of households with low levels of wealth (Franconi and Rella, 2023) with the large marginal propensity to consume (MPC) of _hand-to-mouth_ households (e.g., see Kaplan and Violante, 2018) and the larger MPC in response to a negative income shock (Christelis et al., 2019), the household balance sheet channel takes on a greater relevance for the transmission of monetary policy shocks (see for instance Slacalek et al., 2020).
#### Financial Market
The negative wealth effect coming from the housing market can potentially compound with that coming from the financial market, further contributing to the amplification of the monetary policy shock. In the literature, these mechanisms are referred to as the financial accelerator and the credit channel (Bernanke and Gertler, 1995; Bernanke et al., 1999). Asset prices, proxied by the S&P 500, show a sudden and large repricing for the GK instrument, experiencing a decline of more than 10% already in the second month. Interestingly, the other instruments also show a significant drop, equal to 5%, which is however only half of that implied by GK. More generally, all the financial variables included, such as the Moody's
Figure 5: The transmission of monetary policy
Corporate Bond Spread (BAA-AAA) and all interest rates across different maturities, react at impact. Interest rates rise across all maturities, albeit to different degrees, with the ten-year Treasury yield rising less than the federal funds rate and the one-year Treasury yield. This is reflected in an inversion of the yield curve and a negative reaction of the term spread, which falls by around 50bp in the first months. Turning to the credit channel, the tightening of financial conditions drives an increase in reserves and a corresponding decline in credit throughout the economy. Business loans, real estate loans, and nonrevolving consumer credit fall sluggishly, reaching a trough of roughly 2 to 4 percent for the former and 1 to 3 percent for the latter about 20 months after the shock.
#### Exchange Rates
In terms of exchange rates, the US dollar appreciates following a monetary policy tightening, as shown by the GBP/USD, CAD/USD, CHF/USD and JPY/USD exchange rates, which increase by roughly 4% (2% for JPY) in the first two months. This implies higher prices for those countries that import goods produced in the US, and thus a decrease in US exports, which can further negatively weigh on the overall impact of the monetary policy shock on the domestic economy. It is also worth noting that these exchange rates react quickly, peaking within the second month before gradually declining. Thus, they do not exhibit the _delayed overshooting puzzle_ that was instead present in the VAR analysis of Eichenbaum and Evans (1995), a result that confirms what was also found by Forni and Gambetti (2010).
#### Prices
Finally, a monetary policy tightening unequivocally reduces prices. All of the different price measures share a similar pattern: the CPI, the PCE deflator, the PPI, house prices and oil prices all react at impact for almost all the instruments, a feature that is at odds with the classical recursive identification scheme, which assumes zero contemporaneous effects at impact. Although the contraction is immediate, prices do not fully adjust at impact, but continue to fall for several months. The large fall in oil prices observed with GK, MAR and BS instruments could potentially be explained by a fall in consumption and investment following the negative monetary policy shock, which then affects oil prices through the demand channel by directly reducing the demand for oil.
#### Global Spillovers
A US monetary policy shock also plays an important role in terms of global spillovers, thus contributing to shaping the global outlook. As shown by many studies, for instance Ca'Zorzi et al. (2020), many of the channels which we analysed in the above paragraphs, such as the demand channel or the exchange rate channel, contribute to the propagation of negative spillovers to other countries' financial markets and real activity sectors. Though it represents an interesting topic, the global dimension of US monetary policy is not strictly related to our analysis and may be explored in future work.
| **Variables** | _h=0_ | _h=6_ | _h=12_ | _h=18_ | _h=24_ | _h=30_ |
| --- | --- | --- | --- | --- | --- | --- |
| Industrial production | 0.37 | 10.46 | 22.07 | 25.02 | 26.18 | 26.10 |
| Real Consumption | 9.07 | 30.65 | 32.65 | 31.53 | 30.20 | 29.23 |
| Real Income ex. Trans. | 7.01 | 10.32 | 21.42 | 23.41 | 22.74 | 21.03 |
| Business Sales | 3.68 | 29.11 | 35.67 | 35.61 | 34.67 | 33.54 |
| New Orders: Durables | 4.76 | 27.16 | 33.35 | 34.41 | 34.71 | 34.27 |
| Business Inventories | 8.72 | 0.95 | 5.89 | 10.38 | 13.34 | 14.62 |
| Cap. Util. Manuf. | 0.04 | 13.03 | 24.49 | 26.80 | 27.24 | 26.56 |
| Business Sentiment | 1.37 | 37.49 | 38.57 | 31.22 | 26.39 | 24.32 |
| Unemployment rate | 2.53 | 16.61 | 26.65 | 29.82 | 30.33 | 28.84 |
| Avg. Hours Manuf. | 1.21 | 17.70 | 27.35 | 27.13 | 26.35 | 26.70 |
| Avg. Earnings Manuf. | 23.54 | 5.44 | 9.79 | 9.68 | 9.46 | 9.24 |
| CPI Headline | 15.28 | 7.92 | 4.30 | 3.06 | 2.97 | 3.42 |
| PCE | 14.74 | 9.19 | 5.26 | 3.79 | 3.60 | 3.96 |
| PPI | 18.04 | 13.23 | 8.21 | 6.24 | 5.63 | 5.60 |
| Oil Price | 22.58 | 18.25 | 12.29 | 9.75 | 8.66 | 8.20 |
| House Price | 20.15 | 22.16 | 19.09 | 18.04 | 17.26 | 15.99 |
| Housing Starts | 30.40 | 61.10 | 46.70 | 33.01 | 25.14 | 21.88 |
| New Housing Permits | 29.35 | 60.44 | 45.46 | 32.62 | 26.14 | 24.12 |
| Total Reserves | 45.03 | 38.35 | 31.00 | 27.80 | 26.27 | 25.41 |
| Business Loans | 72.01 | 17.90 | 12.38 | 16.66 | 19.47 | 20.08 |
| Real Estate Loans | 53.88 | 10.97 | 15.51 | 19.63 | 21.29 | 20.99 |
| One-year rate | 39.21 | 18.39 | 14.20 | 12.18 | 11.68 | 11.50 |
| Ten-year rate | 28.27 | 11.64 | 8.81 | 7.27 | 6.38 | 5.89 |
| BAA-AAA | 35.12 | 16.17 | 12.41 | 10.14 | 8.99 | 8.38 |
| S&P 500 | 51.54 | 53.53 | 40.85 | 34.98 | 31.86 | 29.93 |
| Exch. GBP/USD | 71.65 | 67.14 | 60.74 | 59.67 | 58.08 | 54.28 |
| Exch. CAD/USD | 60.88 | 61.73 | 51.05 | 46.80 | 44.83 | 43.93 |
| Exch. CHF/USD | 54.56 | 58.47 | 53.40 | 47.51 | 41.80 | 36.11 |
| Exch. JPY/USD | 17.84 | 22.43 | 23.14 | 21.19 | 18.74 | 16.32 |

_Notes: Forecast Error Variance Decomposition of the contractionary monetary policy shock for a selection of variables, at different horizons. The shock in the Proxy DFM is identified with the instrument GK. Values in red represent the peak relevance._

Table 1: Forecast Error Variance Decomposition: Proxy DFM (GK)
#### Variance Decomposition
Table 1 presents the variance decomposition analysis obtained from the GK instrument and confirms that a monetary policy shock has an important role in explaining the cyclical fluctuations of the economy. For real variables and indicators of economic activity, the shock explains a small fraction of the variance at impact, ranging from 1% for industrial production to 9% for real income. This is consistent with the impulse response analysis, where a monetary policy shock has a limited effect on most of the real variables at the very impact, with the shock taking a few months to be transmitted to the economy. However, its importance increases at longer frequencies, generally peaking during the course of the second year at roughly 30%. Similar results are shared by labor market variables, specifically the unemployment rate and hours worked. The variance explained for average earnings, however, remains rather low, consistent with what was already observed in the IRFs. Turning to the nominal variables, the analysis shows a larger share of variance explained at the beginning of the forecast horizon, from 14.7% for the PCE to 22.5% for oil prices, with the importance of the monetary policy shock dissipating at longer horizons. For the housing and financial markets, conversely, a monetary shock explains a large fraction of the total variance of these variables within the first six months following the shock. The peak in variance explained by the monetary policy shock is around 60% for the housing sector, in the range of 53% to 72% for the credit market, between 28% and 35% for interest rates, above 50% and quite persistent for the S&P 500 index, and in the range of 54% to 71% for exchange rates, with the exception of the \(JPY/USD\), which amounts to almost 18%.25
Footnote 25: The Bank of Japan conducts interventions in the foreign exchange rate market under the input of the Ministry of Finance. This may explain why US monetary policy shocks are comparatively less relevant for changes in the JPY/USD exchange rate.
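For reference, the shares in Table 1 correspond to the standard forecast error variance decomposition for a single identified shock. A minimal sketch under the unit-variance normalization follows; the names and array shapes are assumptions, not the paper's actual code.

```python
import numpy as np

def fevd_share(theta, psi, sigma_u, h):
    """Share of h-step forecast error variance explained by the shock.

    theta: (H x n) structural IRFs to the unit-variance identified shock
    psi: (H x n x n) reduced-form Wold (MA) coefficient matrices
    sigma_u: (n x n) covariance matrix of the reduced-form innovations
    h: forecast horizon, h <= H
    """
    explained = np.sum(theta[:h] ** 2, axis=0)  # per-variable numerator
    total = np.zeros_like(explained)
    for j in range(h):
        total += np.diag(psi[j] @ sigma_u @ psi[j].T)  # total forecast error variance
    return 100.0 * explained / total  # shares in percent
```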
## 5 Conclusion
The external instruments identification procedure is not immune to the issues affecting traditional SVARs. Even if the instrument is perfect, the estimates can be biased.
By means of a theoretical model with perfect foresight, we show that, if the underlying shock is non-fundamental or the variables are observed with a measurement error, the SVAR consistently fails to estimate the true impulse responses. Moreover, subjective choices about the variables included in the model can further increase the uncertainty in both the magnitude and the sign of the estimated responses. The latter is a problem that also affects FAVARs. As a solution, we propose using external instruments in a DFM, which is able to address all mentioned issues at once and to estimate the correct IRFs.
In the empirical exercise, we focus on an application to monetary policy and consider the
most well-known monetary policy instruments in the literature. The results show that, unlike the SVAR, the information included in the DFM is enough to estimate puzzle-free responses in line with economic theory. Interestingly, the results are consistent regardless of the instrument considered, suggesting that the larger information set is able to deal with the distorting effect of monetary policy news.
Moreover, the DFM proves invaluable in examining the behavior of a large set of variables simultaneously. This is a tool of great value, especially for central banks, as it allows for an internally consistent examination of the transmission channels of monetary policy. Our analysis shows that a monetary policy tightening shock has a clear contractionary effect on the economy, leading to a decline in both economic activity and prices. Multiple channels come into play, with both the financial and housing sectors deteriorating and directly affecting private consumption. Finally, the variance decomposition analysis shows that the monetary policy shock explains a significant portion of the variance of both real and nominal variables, albeit at different horizons, further highlighting the role of monetary policy in influencing business cycle fluctuations.
2302.12936 | Educators' Perspectives of Using (or Not Using) Online Exam Proctoring | The onset of the COVID-19 pandemic changed the landscape of education and led
to increased usage of remote proctoring tools that are designed to monitor
students when they take assessments outside the classroom. While prior work has
explored students' privacy and security concerns regarding online proctoring
tools, the perspective of educators is under explored. Notably, educators are
the decision makers in the classrooms and choose which remote proctoring
services and the level of observations they deem appropriate. To explore how
educators balance the security and privacy of their students with the
requirements of remote exams, we sent survey requests to over 3,400 instructors
at a large private university that taught online classes during the 2020/21
academic year. We had n=125 responses: 21% of the educators surveyed used
online exam proctoring services during the remote learning period, and of
those, 35% plan to continue using the tools even when there is a full return to
in-person learning. Educators who use exam proctoring services are often
comfortable with their monitoring capabilities. However, educators are
concerned about students sharing certain types of information with exam
proctoring companies, particularly when proctoring services collect
identifiable information to validate students' identities. Our results suggest
that many educators developed alternative assessments that did not require
online proctoring and that those who did use online proctoring services often
considered the tradeoffs between the potential risks to student privacy and the
utility or necessity of exam proctoring services. | David G. Balash, Rahel A. Fainchtein, Elena Korkes, Miles Grant, Micah Sherr, Adam J. Aviv | 2023-02-24T23:50:37Z | http://arxiv.org/abs/2302.12936v1 | # Educators' Perspectives of Using (or Not Using) Online Exam Protoring +
###### Abstract
The onset of the COVID-19 pandemic changed the landscape of education and led to increased usage of remote proctoring tools that are designed to monitor students when they take assessments outside the classroom. While prior work has explored students' privacy and security concerns regarding online proctoring tools, the perspective of educators is underexplored. Notably, educators are the decision makers in the classrooms and choose which remote proctoring services and the level of observations they deem appropriate. To explore how educators balance the security and privacy of their students with the requirements of remote exams, we sent survey requests to over 3,400 instructors at a large private university that taught online classes during the 2020/21 academic year. We had \(n=125\) responses: 21% of the educators surveyed used online exam proctoring services during the remote learning period, and of those, 35% plan to continue using the tools even when there is a full return to in-person learning. Educators who use exam proctoring services are often comfortable with their monitoring capabilities. However, educators are concerned about students sharing certain types of information with exam proctoring companies, particularly when proctoring services collect identifiable information to validate students' identities. Our results suggest that many educators developed alternative assessments that did not require online proctoring and that those who did use online proctoring services often considered the tradeoffs between the potential risks to student privacy and the utility or necessity of exam proctoring services.
## 1 Introduction
The initial surge of the COVID-19 pandemic upended education, leading many schools to quickly switch to remote teaching in the Spring of 2020 [4], and many universities and colleges maintained remote learning into the 2020/21 academic year. This massive migration to online learning environments led to a corresponding increase in the use of remote educational technologies.
One such remote learning technology that saw a dramatic increase in use during remote instruction is online exam proctoring. Based on an analysis of Chrome browser extension reviews, Balash et al. found explosive growth (720%) of online proctoring beginning at the start of the COVID-19 pandemic [2]. This is in line with a poll by Grajek that found that 77% of colleges and universities made use of or were planning to use online proctoring [12].
By design, remote proctoring systems are invasive. Given their capabilities to monitor and limit functionality where installed, students and privacy advocates have raised concerns about their security and privacy properties. As highlighted by the media coverage of remote proctoring tools, these concerns were not unfounded: Since the tools' widespread adoption at the beginning of the pandemic, reports uncovered major security and privacy incidents involving Proctorio and ProctorU, two widely used invigilation tools. These included a major data breach of ProctorU in which 444,000 users' personally identifying information was leaked online and a security vulnerability within Proctorio that allowed hackers to remotely activate the software on computers in which it was installed [1, 27, 29]. More recently, Burgess et al. [3] disclose several security and privacy issues, including concerns about how remote proctoring systems use facial recognition.
In a survey of students who experienced remote proctoring, Balash et al. found that many students had both privacy and security concerns with the tools [2]. In particular, student participants often felt they had no choice but to use the tools or that they trusted these proctoring services because of their academic institutions' support for them. Despite some students'
trust in these tools' security, Cohney et al. [7] show evidence indicating that these tools may not be as trustworthy as students suspect. Specifically, they find that many collaborative tools used in remote classrooms collect information about students that often does not align with educational expectations. However, the tools analyzed in their study are collaboration tools that were not designed for academic settings.
As such, _educators'_ perspectives of online exam proctoring services remain underexplored. Educators' perspectives are of particular importance given their roles in both the choice to use (or not use) an online proctoring tool and its associated monitoring. In this paper we seek to answer the following research questions about how educators consider privacy and security in the context of online proctoring services:
* What are educators' perceptions of online proctoring services?
* Do educators consider student privacy and security concerns when deciding to use (or not use) an online proctored exam and while setting up the exam proctoring parameters?
* Which proctoring methods do educators select to proctor their online exams?
To answer these research questions, we executed a campuswide recruitment of instructors at the George Washington University who taught courses during the remote learning period of the 2020/21 academic year. This involved inviting 3,460 educators to participate in an IRB-approved survey, of whom \(n=125\) participants responded with their justifications for using or not using online proctoring. The survey captured responses from the university's 12 organizational units, senior and junior faculty, as well as graduate educators. Despite our small sample (approximately \(1/5\) of respondents opted to use online proctoring), our results offer important and timely insights into how educators at a large private university understood online proctoring tools and their motivations for using (or not using) these tools during a challenging period.
We found that a small but substantial number (21%) of educators used exam proctoring tools during the 2020/21 academic year. The most common reasons for using remote proctoring were to stop or deter cheating, to comply with COVID-19 safety protocols, to maintain exam integrity, and to be fair to students. In contrast, 79% of respondents did not use online exam proctoring tools during this same period. Many chose not to use remote proctoring tools due to their potential harms to students, negative impacts on trust between students and educators, student privacy concerns, ineffectiveness against cheating, and the availability of alternative modes of assessment, such as open-book exams and projects.
Both educators who used online proctoring and those who did not reported privacy concerns with using exam proctoring services. These concerns centered on webcam and audio recordings taken by a third party, intrusive monitoring measures, information sharing requirements, particularly those of personally identifying information for verification of student identities, and the invasion of student privacy as students take these exams in their homes. Educators also expressed concerns about the security implications of students having to install exam proctoring software on their computers. Many highlighted the software's monitoring capabilities and its ability to disable system functionality.
While many educators chose to modify their assessments rather than use online exam proctoring, some educators were required to use online exam proctoring tools. Specifically, these educators reported departmental mandates for the use of exam proctoring services in their courses, or requirements to administer the standardized tests in their field, such as nursing, that were only offered by testing companies that use online exam proctoring technologies. This led to educators being forced to use these proctoring tools despite having reservations about their use and concerns about their impact on students.
## 2 Related Work
Online proctoring tools have been the subject of heavy scrutiny from both the media [15, 17, 22, 28] and education researchers [10, 24, 26, 20, 16, 31]. Below, we first review the literature on whether proctoring is needed to ensure academic integrity, whether these tools are efficacious in doing so, and how they impact students, questions that have been at the center of debates on their role within remote learning. Following that, we discuss more recent work studying the security and privacy impact of this technology.
**Effectiveness in Academic Integrity** When it comes to the role of online proctoring in online learning, Harton et al. find that despite vast improvements in remote learning tools, university instructors and students show a strong bias towards the beliefs that online courses are more conducive to academic dishonesty and that cheating occurs more often in online settings [14]. However, studies comparing the rates of academic dishonesty in face-to-face courses to its prevalence in online courses have found mixed results: Watson and Sottile find that while students more readily admit to academic dishonesty in face-to-face classes, they are more likely to cheat during online exams [30]. In contrast, Grijalva et al. [13] find no significant difference in rates of academic dishonesty between the two course formats.
Gudino Paredes et al., who evaluated remote proctored exams' usage in graduate learning via a questionnaire-based study, find that the tools enforce academic integrity, but that students' honesty is neither driven by their moral compasses nor by their desire to learn [21]. Instead, as Gudino Paredes et al. explain, students lack opportunities to cheat due to constraints implemented within the tools and feel obliged to behave with integrity lest they be caught and punished by the software. Moreover, they find that these tools appear to
negatively impact students' learning or motivation to learn and raise concerns about student privacy. They therefore recommend educators carefully consider their motivation to use remote proctoring before choosing to use these tools.
**Student Performance Under Proctoring** Several studies [8, 9, 11, 25] have found that student performance was significantly better on unproctored remote exams than on remotely proctored ones. Seife and Stockton [25] further find that scores on remotely proctored exams are more closely correlated with predictive attributes of student performance, such as their ratings of human capital, which measures their general ability level. This, they argue, is evidence that academic misconduct is likely quite pervasive and that it pays off handsomely for dishonest students. Moreover, Wuthisatian [32] finds that students generally performed better on in-person exams than they did when the same exam was taken with remote online proctoring.
In contrast however, Hylton et al. do not find a significant difference in student performance between remote exams that use video monitoring and those that do not. Despite this, they note that students in non-video monitored exams took longer to complete their exams on average [18]. Rios and Liu similarly find that student performance on low stakes exams is not impacted by the use of online proctoring. However, unlike Hylton et al., they do not observe any differences in the amount of time students take to complete their exams [23]. In a study of the differences between testing centers and remote proctoring by Cherry et al. [5], average scores achieved across the two proctoring modes were similar.
**Privacy, Security and Ethics of Remote Proctoring** Online remote proctoring and other online learning technologies have received considerable attention due to the pandemic. Coghlan et al. [6], in their opinion piece, highlight proctoring tools' reliance on artificial intelligence and the ethical challenges this can raise. They explore an ethical framework for determining when and how to use these tools. In their opinion article, Swauger argues that the underlying algorithms for remote monitoring and invigilation contain implicit negative biases and that these tools unfairly penalize students who do not meet their biased baseline [28].
Despite these concerns, research on these tools' security and how users perceive their security has been sparse. Balash et al. and Kharbat and Abu Daabes independently study student perceptions of the tools, and find a high prevalence of privacy concerns [2, 19]. Balash et al., who also analyze the security and privacy of several tools, specifically focus on how students' security and privacy concerns compare with the security vulnerabilities they encounter [2]. Neither Balash et al. nor Kharbat and Abu Daabes consider the perceptions of educators, the focus of this paper.
Recent work by Burgess et al. [3] performed a technical S&P analysis of four proctoring suites used in high-stakes law licensing exams, such as the Bar Exam and entrance exams. They identified numerous privacy and security risks, including around facial recognition. In this paper, we investigate the educator's perspective on eight general-purpose exam proctoring software suites, which are non-overlapping with the suites studied by Burgess et al. A similar technical investigation of the common university remote exam proctoring products would likely be fruitful.
Similar to this paper is the study by Cohney et al. that also considers instructor and faculty perceptions of remote learning tools [7], but not specifically remote proctoring tools. The focus of Cohney et al. is on the use of tools that were not originally designed for educational use, but were adopted hastily amidst the pandemic to accommodate the need for fully remote learning. This includes using Zoom, Google Drive, and other collaboration platforms whose privacy standards and data collection practices do not match the expectations of the classroom. In contrast, in this paper, we focus on instructors' perceptions, and more specifically, on educators' perceptions of remote proctoring tools and their security and privacy attributes, including both why they choose to use them and why not. Cohney et al., in contrast, identify such tools, but they were not the primary focus of their study.
## 3 Survey Methodology
We conducted an online survey to evaluate university educator perceptions of online exam proctoring tools. Here we describe the survey's procedures, recruitment, limitations, and ethics. Survey results are presented in Section 4.
**Study Procedure** Below we outline the survey. The full text can be found in Appendix A.
1. Informed Consent: The university educators were asked to consent to the study. The consent included that participants would answer questions about their experience with online exam proctoring services.
2. Eligibility Screening: To be eligible to complete the survey, participants were required to assert that they were either full-time faculty or part-time adjunct faculty at the university.
3. Background: The educators were then asked to optionally provide their associated organizational unit or school at the university, as well as the subject area(s) they taught during the 2020/21 academic year.
4. Awareness of Technology: The university educators were asked about their awareness of the online exam proctoring tools available at the university during the 2020/21 academic year and their understanding of how online exam proctoring tools work. Next participants were asked which specific online exam proctoring tools, if any, they used in administering assessments during the 2020/21 academic year.
5. Use and Perceptions of Online Exam Proctoring Tools:
Educators were asked which proctoring services they most recently used, what factors they considered, and the type and number of assessments administered with online proctoring. Next the educators were asked about both the benefits and drawbacks of online exam proctoring and under what conditions they were likely to use online exam proctoring in the future.
6. Proctoring Effectiveness: We then asked the educators about the effectiveness of online exam proctoring tools at preventing and catching cheating on assessments.
7. Review of Proctoring Tools Used: Educators were asked questions to assess the specific exam proctoring tools they reported to have recently used during the 2020/21 academic year. This included questions about the educators' views of the privacy and security of the online exam proctoring software, its effectiveness, and the potential tradeoff between student privacy concerns and the integrity of the examination being administered.
8. Online Exam Proctoring Methods: In this part of the survey we investigate the methods used by online exam proctoring services to monitor student test takers. Educators were asked which exam monitoring methods they enabled in their proctored exams, the effectiveness of these methods, their comfort using the methods, and if they would change methods for future exams.
9. Privacy Concerns: Finally, the survey concluded by asking educators about their concerns for their students' privacy when students are required to share information with exam proctoring companies.
**Recruitment** We worked with the George Washington University's administration to approve and coordinate the survey, in addition to receiving IRB approval (NCR202908). In turn, the university provided our research team with an email list of all instructors who had taught a course during the 2020/21 academic year, totaling 3,460 individuals. Recruitment occurred over a fifteen-day period starting December 1st, 2021. We sent out 3,460 emails and had 152 educators respond to the study, a response rate of 4.4%. Of the 152 educators who responded, 125 completed the study. Recruitment emails were sent from, and the survey was hosted on, the university's Qualtrics account. Participants who completed the survey were given the opportunity to enter a drawing for a $50 USD Amazon gift card with a 1 in 20 chance of winning. On average, it took 16.3 minutes (SD=37.4) to complete the study.
Note that many of the instructors who were contacted may no longer be at the university or may not have taught classes involving exam or quiz assessments (e.g., instead using grading based solely on term papers) that would render them eligible to complete the survey. These instructors likely self-selected out of the survey, and thus the true number of eligible participants and the true response rate to the survey are difficult to determine. However, we believe we captured a reasonably representative cross-section of the university's instructors during this period, both in terms of educators who chose to use online proctoring and those who chose not to use it. But it is also important to acknowledge that there are likely some perspectives that may be over- or underrepresented due to self-selection both to take and not take the survey.
**Analysis Methods** When presenting quantitative results, the analysis is provided in context. For qualitative responses, we conducted open coding to analyze 14 free-response questions. A primary coder from the research team crafted a codebook and identified descriptive themes by coding all responses to each question. For the 10 open-ended questions answered by participants who did use exam proctoring tools, a secondary coder (also a member of the research team) coded all responses. For the 4 open-ended questions answered by educators who did not use proctoring tools, a secondary coder coded a 20% sub-sample as a consistency check. In each case, the secondary coder provided feedback on the codebook, and inter-rater reliability was calculated on each round until Cohen's \(\kappa\geq 0.7\). Overall, the mean Cohen's \(\kappa=0.8\), indicating substantial agreement between coders. In the results presented below, we use the primary coder's application of the final codebook for any counts or themes presented.
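As an illustration of this reliability check, Cohen's kappa between two coders' label assignments can be computed with scikit-learn; the labels below are hypothetical examples, not actual study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical code assignments from the two coders for one question
primary = ["monitor", "restrict", "record", "monitor", "verify"]
secondary = ["monitor", "restrict", "monitor", "monitor", "verify"]

kappa = cohen_kappa_score(primary, secondary)
print(f"Cohen's kappa: {kappa:.2f}")  # rounds continued until kappa >= 0.7
```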
**Ethical Considerations** The study protocol was approved by our Institutional Review Board (IRB) with approval number NCR202908, and all collected data is associated with random identifiers. For participants who wanted to enter the drawing for a chance to win the Amazon gift card, we created an entry form that was separate from and not linked to the survey. We also considered that some educators may not want to share how they managed student academic integrity for online classes or their specific academic department or subject area, and so we made those questions optional.
**Limitations** Our study is limited in its recruitment, particularly to instructors at a single academic institution in the U.S. While our study offers a unique perspective at our institution, which is a large private university, we cannot claim full generalizability of the results within or beyond the institution, as it is difficult to know the true number of eligible participants who considered or used online proctoring during the 2020/21 academic year. Despite this limitation, we believe that these results offer new insights and are likely representative of common attitudes and themes among educators about online proctoring and their choices to use or not use these products. However, we cannot conclude that these themes occur at the same proportions beyond our sample. We attempt to note this limitation throughout when discussing proportional results. This is particularly true for those instructors who indicate that they do use online proctoring, and we cannot be confident that quantitative results, e.g., Likert responses, will be consistent in a larger sample. Although qualitative themes likely express dominant views in this subgroup, we may not capture all minority themes. Throughout the following section
we acknowledge these limitations.
There are also limitations related to the size of our recruitment. We received approval to send recruitment emails to _all_ instructors during the online-instruction period at our institution, which included 3,460 individuals. Even within this pool, college-level educators who use exam proctoring turn out to be a difficult-to-reach population. As with any online survey without direct recruitment, response rates can be small, and within those responses, we further sought the subset of educators who used online proctoring. We acknowledge that the resulting sample of educators who actually used proctoring is smaller than desirable, but when targeting hard-to-reach populations (namely, college educators who use exam proctoring), exploratory studies like this one, even with smaller samples, provide important and relevant themes.
Importantly, the goal of this study is not only to understand the educators who did use proctoring, but also those who chose not to and their security and privacy reasons for that choice. We were able to recruit 99 participants who decided against using online exam proctoring tools, providing important insight into their decision-making process.
Finally, we are limited by the fact that this study relies on self-reported behavior. We cannot verify that the participants actually used remote proctoring tools to proctor an online exam or which monitoring methods they enabled. Additionally, responses can suffer from social desirability and response bias, leading participants to overstate their awareness of online exam proctoring if they believe that this is the expectation of the researchers. Such biases may be most present when participants indicate concerns.
## 4 Results
All of the educators in our study taught a course during the 2020/21 academic year at the George Washington University. Educators from twelve of the university's organizational units or schools were represented (see Table 1), with the largest percentages from the College of Arts & Sciences (\(n=49\); 39%), School of Public Health (\(n=16\); 13%), School of Medicine (\(n=12\); 10%), and the School of Business (\(n=11\); 9%) (**Q2**).
During the 2020/21 academic year the educators surveyed taught a wide range of subjects. The most common were science, technology, engineering, and mathematics (\(n=40\); 32%), health (\(n=30\); 24%), business (\(n=14\); 11%), and government (\(n=12\); 10%) (**Q3**). For the full results see Table 2. Twenty-one percent (\(n=26\)) of educators who responded to our survey used online exam proctoring tools to assist in administering assessments during the 2020/21 academic year (**Q6**).
At the time of the study, eight online exam proctoring tools were available at the university. Educators reported being most aware of exam software by Respondus (\(n=56\); 45%), ProctorU (\(n=16\); 13%), and Examsoft (\(n=9\); 7%) (**Q4**). Of the educators who reported using exam proctoring software, the largest number reported using Respondus (\(n=15\) of 26; 58%), followed by RPNow (\(n=3\) of 26; 12%), Examsoft (\(n=2\) of 26; 8%), and Proctorio (\(n=2\) of 26; 8%), for their most recent proctored online exam (**Q7**). Most (\(n=23\) of 26; 88%) educators who used online exam proctoring tools used them for administering course exams (e.g., test, midterm exam, final exam) (**Q9**). Among educators who used online proctoring, sixty-five percent (\(n=17\) of 26) reported having administered five or more online proctored assessments (**Q10**).
### RQ1: Educators' Perceptions
**Educator Understanding of Exam Proctoring Tools** We asked educators to describe in their own words how online
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Organizational Unit** & **Educators** & **Used** & **Considered** \\ \hline
Arts \& Sciences & 49 & 9 & 14 \\
Public Health & 16 & 0 & 3 \\
Medicine & 12 & 7 & 1 \\
Business & 11 & 3 & 2 \\
International Affairs & 8 & 0 & 1 \\
Engineering \& Applied Sci. & 8 & 1 & 2 \\
Nursing & 6 & 5 & 1 \\
Education & 6 & 0 & 0 \\
Professional Studies & 3 & 0 & 0 \\
Other & 3 & 1 & 0 \\
Political Management & 1 & 0 & 0 \\
Public Affairs & 1 & 0 & 1 \\
Arts \& Design & 1 & 0 & 0 \\ \hline
**Total** & 125 & 26 & 25 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The number of educators in each of the organizational units (**Q2**), the number of those educators who used online exam proctoring tools (**Q6**), and the number of those educators who considered using the tools (**N1**).
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Subject** & **Educators** & **Used** & **Considered** \\ \hline
S.T.E.M. & 40 & 5 & 12 \\
Medicine \& Health & 30 & 8 & 5 \\
Business & 14 & 6 & 2 \\
Government & 12 & 2 & 2 \\
Did not disclose & 7 & 3 & 0 \\
Arts & 5 & 0 & 1 \\
History & 5 & 0 & 0 \\
Languages & 4 & 0 & 1 \\
Communications & 4 & 1 & 1 \\
Gender Studies & 1 & 0 & 1 \\
Law & 1 & 1 & 0 \\
Naval Science & 1 & 0 & 0 \\
Teaching & 1 & 0 & 0 \\ \hline
**Total** & 125 & 26 & 25 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The number of educators teaching each subject (**Q3**), the number of those educators who used online exam proctoring tools (**Q6**), and the number who considered using the tools (**N1**).
proctoring tools work (**Q5**). Many (\(n=65\)) educators described the ways the proctoring tools monitor a student's activity and behavior during an exam. Educator P2 (College of Arts & Sciences) responded, "Monitor student's actions and movements (and room content) to make sure they are not cheating on an exam." Educators (\(n=30\)) also detailed how the proctoring tools restrict a student's activities and access to unauthorized resources. Educator P41 (College of Arts & Sciences) explained, "The software takes control of a students computer so that they can't leave the exam, can't access the internet, and can't access other programs on the computer." Additionally, educators (\(n=19\)) described how proctoring tools record and flag anomalies during the exam taking session. For example, educator P114 (School of Public Health) added, "Software can also record the user (visual and audio) while the user is taking the test and can flag any suspicious activity (user getting up from the computer, looking down at table, etc.) for the instructor to later review to determine whether cheating occurred." However, only a few (\(n=3\)) educators described the ability of the proctoring service to verify a student's identity. P44 (College of Arts & Sciences; Respondus) noted, "There are various ways that they can monitor student identity while taking the exam (web cam)."
Some educators (\(n=32\)) reported that they did not know how they work. Educator P16 shared, "I don't know anything about how they work." Others (\(n=10\)) had simply not used them or were not aware of their availability at the university.
**Reasons for Not Using Online Proctoring Services** Of the 79% (\(n=99\)) of educators responding to the survey who did not use online exam proctoring tools, 25% (\(n=25\) of 99) considered doing so at some point (**N1**). The College of Arts & Sciences had the highest percentage (29%; \(n=14\) of 49) of educators who considered using exam proctoring, and likewise, S.T.E.M. (30%; \(n=12\) of 40) was the subject area with the highest percentage (see Tables 1 and 2) (**N2**). For instance, P77 (College of Arts & Sciences), who selected _Somewhat unlikely_ to use online exam proctoring (**N5**), explained,
Cheating during exams is a serious (and somewhat common) issue. Being assured that cheating was kept to a minimum would provide confidence that grades were well-earned.
For many of the 73% (\(n=72\) of 99) of educators who did not consider using online proctoring tools, their decision was likely informed by their negative perceptions of these tools (**N3**). Many (\(n=24\)) reported that they considered online exam proctoring tools to be harmful to students. Educator P10 (College of Professional Studies, _Extremely unlikely_) stated, "Proctoring tools monitor the students in ways that increase their anxiety and obliterate their ability to learn." Likewise, educator P77 (College of Arts & Sciences, _Somewhat unlikely_) shared,
Privacy and home issues. Not all students were in a position to be engaged in being a student while remote learning. If a student was a primary care giver for a child or elderly person, how could I penalize them from looking away from their screen during an exam?
P10 (College of Professional Studies, _Extremely unlikely_) even described it as "prison technology," and others (\(n=11\)) concluded that it impacts the trust between student and educator. For instance, educator P81 (College of Arts & Sciences, _Somewhat unlikely_) noted, "It feels invasive and I feel it erodes trust between student and professor." Furthermore, educators (\(n=10\)) had concerns about the negative impact on student privacy, such as when educator P76 (College of Arts & Sciences, _Extremely unlikely_) shared, "I found the on-line proctoring system to be a serious invasion of privacy of the student."
Some (\(n=9\)) educators determined that online proctoring tools lacked the ability to actually stop cheating, like when P53 (College of Arts & Sciences, _Extremely unlikely_) said, "...just locking down a computer is meaningless when students can easily access a second computer (or their phone)." There were also a number (\(n=16\)) who were not aware of the availability of the tools at the university. Educator P16 (College of Arts & Sciences, _Neither likely nor unlikely_) noted, "I have no knowledge of them or their availability."
Many educators refactored their assessment formats to avoid the need for online proctoring (**N4**). A common tactic was to provide time limits enforced through existing learning management software, such as Blackboard. For example, educator P96 (College of Arts & Sciences, _Extremely unlikely_) stated, "I gave exams on Blackboard. They were timed, so that students would have limited time to look up answers." Some educators (\(n=14\)) reported changing their exams to open book and open note exams, such as educator P90 (College of Arts & Sciences, _Somewhat unlikely_) who explained, "I ended up making everything open book so that I did not have to police anything." Others switched to take home assessments, like P14 (School of International Affairs, _Somewhat unlikely_) who added, "During Covid I made the quizzes take home and open book." Replacing the exams with other forms of assessment, such as projects and written papers, was another common theme (\(n=34\)). For example, educator P41 (College of Arts & Sciences, _Somewhat likely_) said, "I decided to replace my exams (midterms and finals) with two-week take-home projects, with many scaffolded layers." Another example is educator P109 (School of Business, _Somewhat likely_) who shared, "I ditched the quizzes and tests, opting instead for graded homework and written papers." Still others (\(n=4\)) reduced the percentage of the overall course grade which would come from exams, like P93 (School of Engineering & Applied Science, _Neither likely nor unlikely_) who illustrated, "The only way I actively managed it was 1. to put tremendous credit on the term project and 2. reduced credit for exams."
Another common theme (\(n=9\)) among educators who chose not to use online exam proctoring tools was a belief
that students would not cheat when asked to adhere to the university code of academic integrity. For instance, P76 (College of Arts & Sciences, _Extremely unlikely_) stated,
I told students that I expected them to be adults, and to follow the university expectations of integrity and honesty. This was after I told them my opinions of the proctoring system to be an invasion of their privacy. They appreciated my opinion and cooperated with taking exams with honesty.
Educator P100 (Navy ROTC, _Somewhat unlikely_) trusted students to follow the university honor code and noted, "I reminded my students of the honor code and trusted them to follow it."
Finally, it appears that most educators responding to the survey (\(n=53\) of \(99\); \(54\%\)) who do not currently use online exam proctoring tools in their classes reported they would be unlikely to use them if they were teaching remotely under similar circumstances to those of the 2020/21 academic year, while only \(24\%\) (\(n=24\) of \(99\)) said they were likely (**N5**). This suggests that those who declined to use remote proctoring are unlikely to change their opinion of the technology. When describing why they chose not to use online exam proctoring tools (**N3**), the participants who reported being _Extremely unlikely_ or _Somewhat unlikely_ to use the tools in the future (**N5**) more often described themes such as proctoring tools being potentially harmful to students (20 of 53 vs. 3 of 24), privacy concerns (10 of 53 vs. 0 of 24), tools not stopping cheating (9 of 53 vs. 0 of 24), and trusting students not to cheat (9 of 53 vs. 2 of 24).
**Reasons for Using Online Proctoring Services** Twenty-six educators responding to the survey reported using online proctoring services. We asked them what factors they considered when deciding to use these tools in an open response question (**Q8**). These factors may include majority opinions; however, they may not capture minority opinions due to the small number (\(n=26\)) of educators who used these services.
The most cited reason for using online proctoring tools is the convenience they offered (\(n=12\)). For many (\(n=7\)) this convenience was attributed to their familiarity with the proctoring tools, either because they (or a colleague) had previously administered an online proctored exam using the tool (\(n=6\)). For instance, P69 (College of Arts & Sciences; Respondus) noted, "Familiarity based on discussions with colleagues (who all used responsdu or proctor exams themselves)" and P25 (School of Nursing; Proctorio) added, "Already using this product - the proctoring version for the online environment is called Examplify (w/ ExamSoft)." Others (\(n=4\)) were influenced to use online proctoring due to their apparent popularity and recommendations from others, like P67 who simply said, "It was popular." A few (\(n=2\)) educators noted that these tools had been recommended to them by their institution, such as P26 (College of Arts & Sciences; Respondus) who recalled, "It was recommended by the school."
Many educators (\(n=8\)) noted that their main motivation to use remote proctoring was necessity. Specifically, most (\(n=6\)) indicated that they were required to use online proctoring by their department in an attempt to make assessment more uniform. For example, P33 (School of Medicine & Health Sciences; RPNow) said,
RPNow is the one used by my department. I don't believe that I have a choice of which online proctoring service to use. It is already set up in my courses for me.
Others (\(n=2\)) were administering a standardized test, like P25 (School of Nursing; Proctorio) who stated, "Nursing students also take standardized exams via ATI - their proctoring service in an online environment is called Proctorio." Some educators (\(n=2\)) felt compelled to use exam proctoring due to the circumstances of the COVID-19 pandemic and remote learning, e.g., P36 (School of Nursing; Examsoft) who stated, "Absence due to Covid exposure."
We further queried educators who indicated that they used exam proctoring in open responses regarding the benefits of using these tools (**Q11**). The most frequently (\(n=16\)) mentioned benefit was enforcing exam rules or exam integrity. Specifically, educators indicated that the tools help prevent cheating (\(n=10\)), protect the integrity of remote exams (\(n=2\)), and ensure exam fairness by limiting the benefit, or competitive edge, students can gain from cheating (\(n=2\)). For example, P33 (School of Medicine & Health Sciences; RPNow) describes how online exam proctoring tools deter cheating as a primary benefit:
Even if not activated, students go through the [remote proctoring] system to take their exams, so they are under the impression that they are always being monitored. Ensures integrity of the exam without having to re-write questions to be open book.
Educators also felt that a benefit of online proctoring was to enforce exam rules by verifying students' identities (\(n=1\)) and holding students accountable for any misconduct they may commit (\(n=2\)). As P123 (School of Medicine & Health Sciences; RPNow) describes, "[The tools] provide a permanent record of the student's behavior during an exam." Other educators (\(n=2\)) highlighted how remote proctoring tools allowed them to enforce other exam rules, like time limits (\(n=1\)) and blocking access to prohibited resources (\(n=1\)), e.g., P26 (College of Arts & Sciences; Respondus) added, "Ensuring that students don't use web resources to complete the test."
Many (\(n=13\)) educators noted as a benefit that online proctoring tools offered additional flexibility. Eleven instructors explained that these tools made it easy for them to set up and grade their exams, while five respondents highlighted the tools' general ease of use, e.g., P21 (School of Nursing; ProctorU) noted, "Ease of use can be done online." Additionally, instructors found it allowed them to give
proctored exams while complying with COVID safety precautions, such as when P118 (School of Nursing; Proctorio) stated, "Convenience and safety during COVID."
Finally, a handful of educators noted that online proctoring made it easier to manage their exams when compared to proctoring exams in person (\(n=2\)), that the proctored exams were easier to grade (\(n=1\)), and that they were convenient to use (\(n=2\)). One educator noted the flexibility of using multimedia content in their exams when online, and others (\(n=2\)) noted that it also provides flexibility to students in selecting the time and environment for their exam. For instance, P35 (School of Medicine & Health Sciences; Respondus) shared, "It gave flexibility to the students to take the exam when convenient instead of at a set time."
**Drawbacks when Using Online Proctoring Tools** We also asked educators who indicated they used online proctoring tools (\(n=26\)) about the drawbacks of online proctoring (**Q12**). These drawbacks may include majority opinions; however, they may not capture minority opinions due to the smaller number (\(n=26\)) of educators who used these services.
Most educators (\(n=20\)) identified at least one technology or usability issue they encountered while using the tools. Chief amongst the drawbacks were technology glitches (\(n=7\)) and system limitations (\(n=17\)) that hindered students' or educators' ability to use the tools for their intended functions. They noted that these limitations impacted their ability to monitor students during exam time, to control students' test environment and ensure academic integrity, and to conclusively identify cases where students had cheated. A cited cause (\(n=3\)) for these issues was limitations of students' computers or internet access. When it came to connecting to the proctoring tools, instructors noted that some students either had unstable internet connections (\(n=1\)) or had limited access to their exams due to being located in a different country (\(n=1\)). In other cases, students' computers seemed to be the point of failure. In particular, instructors noted that some students sometimes used older computers (\(n=1\)). This meant that their machines would occasionally freeze when running the proctoring tools (\(n=1\)), that they would not have webcams or microphones through which their exam session could be recorded, or that these input devices would fail to record (\(n=1\)) while students took their exams. As P44 (College of Arts & Sciences; Respondus) explains, the software's lack of dependability posed a significant "obstacle for students."
Educators also noted drawbacks with respect to the privacy of their students. Several (\(n=6\)) cited concerns for their students' privacy. In particular, they noted wariness about third-party vendors potentially collecting data about their students (\(n=1\)), and that they found monitoring via video or audio recording to be privacy invasive (\(n=3\)). As P33 (School of Medicine & Health Sciences; RPNow) explained, they "[Felt] uncomfortable seeing students' living situation and watching them while taking the exam." (We elaborate more on the privacy concerns in subsection 4.2.)
Additionally, two educators described drawbacks with respect to the interpersonal relationship with students that subjecting them to online proctoring can have, and some (\(n=3\)) were personally uncomfortable with the use of video and audio recording to monitor exams and with using it to actually identify cases of cheating. As P98 (School of Medicine & Health Sciences; Respondus) described,
The [proctoring software] utilizing the camera is an invasion of privacy, often didn't work, and had the students so paranoid that they would email me to explain any movement they made. Plus, I realized that it would be difficult to ever prove anyone was actually cheating... I quit using the camera halfway through because of these problems.
**Effectiveness of Exam Proctoring** Educators who responded to the survey and indicated that they used online proctoring tools (\(n=26\)) were asked about the effectiveness of these tools at reducing cheating (**Q19**). Responses were mixed when considering if exam proctoring tools reduced cheating. Eleven (42%) educators who used online proctoring tools either _strongly agreed_ or _somewhat agreed_ that exam proctoring reduced cheating, while the same amount (\(n=11\) of 26; 42%) _strongly_ or _somewhat disagreed_. Four (15%) _neither agreed nor disagreed_.
There was less confidence that the proctoring software would actually catch cheating (**Q20**). Roughly a third (\(n=9\) of 26; 34%) of educators believed proctoring tools would catch cheating at least 50% of the time. In contrast, nearly two-thirds (\(n=16\) of 26; 61.5%) believed it caught cheating up to 50% of the time. Refer to Figure 2 for full details. Moreover, only 38% of educators stated that the tools reported cases of cheating (**Q21**). See Figure 3 for more information.
This suggests that educators found the software to be more successful in _deterring_ cheating than in actually detecting or catching cheating. For instance, educator P25 (School of Nursing, Proctorio) _somewhat agreed_ that exam proctoring reduced cheating but reported it catches cheating less than 25% of the time (**Q20**) and said, "They don't prevent cheating - students can look up ways on the Internet for workarounds. But they do deter cheating." Likewise, educator P125 (School of Medicine & Health Sciences; Respondus), who _somewhat disagreed_ that exam proctoring reduced cheating, stated it catches cheating less than 25% of the time (**Q20**) and wrote, "The video monitor and flagging is not great. Really doesn't prevent cheating, may just deter for a lot of people."
Despite clearly different opinions on the effectiveness of exam proctoring tools at preventing and identifying cheating, most of the educators (\(n=14\) of 26; 54%) who used them reported that they either _strongly_ or _somewhat_ agreed that they were a good solution in responses to **Q25** (see Figure 1).
In response to **Q12**, with respect to drawbacks, three educators noted the proctoring tools' audio and video monitoring
capabilities did not allow instructors to fully inspect students' exam conditions during assessments. Eleven respondents indicated the tools did not eliminate cheating, and when cheating was reported, three suggested that there were inconsistencies in the reports, causing them frustration, and at least one false positive and one false negative. Ultimately, this led two educators to suspect a subset of students had likely cheated, but to have been unable to conclusively identify which students had done so, despite their use of the proctoring tools.
**Continued Use of Online Proctoring** We were interested in further exploring the impact of the COVID-19 pandemic on the decision to use online exam proctoring tools. We asked the educators who reported using online proctoring (\(n=26\)) if they would use online proctoring tools again if they were teaching remotely under similar circumstances to those of the
Figure 1: Most of the (\(n=26\)) instructors who used online proctoring tools indicated they did not feel the proctoring tools they used were privacy invasive (**Q22**). However, as demonstrated by a plurality of respondents (\(n=12\); 46%), instructors were (slightly) more concerned about the privacy risks the use of these tools posed to their students (**Q24**). Forty-six percent (\(n=12\)) of instructors at least _somewhat agree_ that the tools offered a reasonable tradeoff between student privacy and exam integrity (**Q23**). Most (\(n=14\); 54%) participants felt the tools they used offered a good solution for remotely monitoring exams (**Q25**).
Figure 3: When asked if the online exam proctoring tool they used reported any potential cheating (**Q21**), 38% (\(n=10\) of 26) said that it had.
Figure 2: When asked if the use of the tools makes it less likely that students will cheat (**Q19**), 42% (\(n=11\) of 26) at least _somewhat agree_.
2020/21 academic year (**Q13**). Sixty-five percent (\(n=17\) of 26) said they were _likely_ to use online proctoring tools again under those circumstances, while only 27% (\(n=7\) of 26) said they were _unlikely_.
We followed up by asking why they would or would not use online exam proctoring tools in such a situation (**Q14**). Educators shared that it was either the next best option (\(n=3\)) or the only option (\(n=3\)) when in-person proctoring was not available. For instance, P47 (School of Nursing; Examsoft) shared, "If we were unable to test in person, this would be our only option." For others (\(n=2\)) it was to maintain exam integrity. As educator P25 (School of Nursing; Proctorio) highlighted, "It's the only main way to control for academic integrity when not face-to-face." Educator P118 (School of Nursing; Proctorio) reported pandemic safety was the reason and said, "If it is about being safe during a pandemic, I will use remote proctoring software every time."
Next, we asked the same educators how likely they would be to use online proctoring tools again if conditions were similar to Fall 2021, when in-person learning resumed with masks and some hybrid options (**Q15**). Only 35% (\(n=9\) of 26) reported they were _likely_ to continue to use online proctoring tools, while 46% (\(n=12\) of 26) reported they were _unlikely_.
Finally, we asked the same educators how likely they would be to use online proctoring tools again if they were teaching classes fully in-person without hybrid options (**Q17**). We observed similar responses, with 35% (\(n=9\) of 26) reporting they were _likely_ to use the tools, and over half (\(n=14\) of 26; 54%) reporting they were _unlikely_. Detailed results can be found in Figure 4(a) and Figure 4(b).
We followed up again by asking why they would or would not use online exam proctoring in this situation. Some educators simply shared that there was no longer any need to use online exam proctoring when in-person classes were taking place (\(n=2\)), or that they preferred traditional in-person exam proctoring (\(n=3\)). For example, educator P124 (College of Arts & Sciences; Respondus) noted, "So if there is no concerns about pandemic, exams should definitely be in person." Others said they would still consider using online exam proctoring for missed exams (\(n=1\)), other extenuating circumstances (\(n=1\)), and asynchronous quizzes (\(n=1\)). Some educators would continue to use online exam proctoring for reasons of flexibility (\(n=1\)), hybrid course offerings (\(n=1\)), or because they prefer online exam proctoring (\(n=1\)). Educator P56, who referred to flexibility, said, "I think it is a helpful option for times when holding exams virtually provides flexibility for students and faculty while still meeting course objectives and assessment standards."
Our results suggest that many educators have used online exam proctoring as a temporary expedient to manage assessments during pandemic induced remote learning periods. However, our results also suggest that a subset of educators will likely continue to use online proctoring as classes return to full in-person learning.
### RQ2: Privacy and Security Concerns
**Privacy Concerns** As with any application, there are possible privacy and security risks for users. We asked educators responding to the survey several questions regarding privacy in relation to exam proctoring tools. One of the questions asked educators if they thought monitoring tools were an invasion of privacy (**Q22**). Of the educators that indicated using proctoring tools (\(n=26\) of 125), the majority either _strongly disagreed_ (\(n=6\) of 26) or _somewhat disagreed_ (\(n=10\) of 26) with the concept of a proctoring service being an invasion of privacy. When the educators were separately asked if they were specifically concerned about privacy risks to students (**Q24**), a plurality either _somewhat agreed_ (\(n=8\) of 26) or _strongly agreed_ (\(n=4\) of 26). See Figure 1 for full results.
When asked to elaborate about the specific factors that
Figure 4: Detailed visualization of how likely educators are to use remote proctoring under circumstances similar to (a) 2020/2021 academic year and Fall 2021 (**Q13** & **Q15**) and (b) Fall 2021 and a full return to in-person learning (**Q15** & **Q17**).
informed their views about the privacy of online exam proctoring tools in an open response question (**Q26**), one common theme (\(n=9\)) among some of the educators who chose to use online exam proctoring tools was a belief that the privacy risks were acceptable. For instance, P28 (School of Medicine & Health Sciences; Respondus) stated,
Most digital tools have some level of privacy issues. Any program that can access an internal mic or video seems to be a privacy risk. There are other learning tools such as Voice thread that I believe have a privacy risk but see the benefit outweighing the risk and damage to the student.
In contrast, many educators (\(n=9\)) expressed discomfort with online exam proctoring tools. For example, educator P36 (School of Nursing; Examsoft) stated,
Any program that requires you to download a file, disables your system functionality, and automatically searches your computer for files to upload is a total invasion of privacy. Additionally, the proctoring service records the student at their most vulnerable- in their home environment where they sometimes forget they are being recorded. This leaves the potential for private matters being recorded and permanently on a server somewhere... if there is a data breach of this program, these videos could be out there for anyone to see.
Some participants (\(n=3\)) called out the webcam as being privacy invasive, like P125 (School of Medicine & Health Sciences; Respondus) who illustrated, "Video monitoring is more intrusive." Educator P58 (College of Arts & Sciences; Respondus) stated that artificial intelligence used to monitor student behavior in private was invasive when they said "AI required to monitor and flag student behavior use of recordings in private setting." And educator P49 (College of Arts & Sciences; Respondus) added, "Glad to see the tradeoff question: Yes, it is somewhat invasive but that is offset by exam integrity."
When asked whether they thought the remote exam proctoring tool they used offered a reasonable tradeoff between student privacy and exam integrity (**Q23**), respondents appeared hesitant to endorse the tools. Here, only a plurality of participants (\(n=12\)) indicated they _agreed_ with the statement, of whom six _strongly agreed_ and six _somewhat agreed_. See Figure 1 for full results.
**Software Security Concerns** Exam proctoring services often require students to install specialized software to enable proctoring. The required proctoring software is often in the form of a browser extension that is added to students' existing web browsers or standalone software that must be installed on students' personal computers. As is the case with any custom software, there is risk of security vulnerabilities.
We asked educators responding to the survey who had used remote proctoring tools how concerned they were about students installing software created by exam proctoring companies on their personal computers (**Q27**). Over half of educators (\(n=14\) of 26; 54%) were _not at all concerned_, while 31% (\(n=8\) of 26) were _slightly_ or _somewhat concerned_. Only 15% (\(n=4\) of 26) were _moderately_ or _extremely concerned_. Refer to Figure 5 for the full results.
We then asked respondents to explain which factors led to their concern, or lack of concern, regarding students installing exam proctoring software (**Q27**). A number of educators (\(n=8\)) voiced concerns about the software, such as reliability issues (\(n=3\)), potential invasion of student privacy (\(n=2\)), security flaws (\(n=1\)), and negative impacts on computer functionality (\(n=1\)). Educator P36 (School of Nursing; Examsoft) shared concerns about privacy: "Again, any program that requires you to download a file, disables your system functionality, and automatically searches your computer for files to upload is a total invasion of privacy." In contrast, educator P124 (College of Arts & Sciences; Respondus) weighed the tradeoff between privacy and necessity:
No one likes to install software on their computer that could potentially be invasive. It's necessary in this instance but I can see why someone would be reluctant to do so.
Still a number of educators (\(n=6\)) did not have concerns, such as educator P28 (School of Medicine & Health Sciences; Respondus) who noted, "We install so much on our devices so I don't see this as a higher risk than other applications." We also found statements describing a transfer of trust from the institution, which licensed the software and made it available to educators, to the exam proctoring software itself. This
Figure 5: When educators who had used online exam proctoring were asked to report their level of concern about students installing software created by online exam proctoring services on their personal computers (**Q27**), more than half (\(n=14\) of 26; 54%) reported that they were _not at all concerned_, while 27% (\(n=7\) of 26) were at least _somewhat concerned_.
implied trust leads educators to assume that the software has been through a vetting process. For instance, educator P125 (School of Medicine & Health Sciences; Respondus) responded, "If recommended by University then assume it is safe." And educator P25 (School of Nursing; Proctorio) added, "Just don't know enough to answer this question; defer to our [Online Learning and Instructional Technology] team who vet the software."
### RQ3: Proctoring Methods
#### 4.3.1 Enabling Monitoring Methods
Online exam proctoring services provide numerous types of student monitoring methods. These monitoring techniques range from lockdown browsers that prevent navigation to other sites during exam time, to more invasive monitoring that may include webcams, screen sharing, the use of a live (human) proctor, and even automated monitoring techniques such as eye tracking and network traffic analysis. Educators must select the monitoring methods they deem appropriate for proctoring students while they complete assessments.
We asked educators to report all of the monitoring methods they enabled in their proctored exams (**Q29**). Most educators (\(n=21\) of 26; 81%) reported enabling the lockdown browser, which many educators find to be the least invasive proctoring technique. Fifty percent of the educators (\(n=13\) of 26) enabled webcam recording, and many educators (\(n=11\) of 26; 42%) enabled microphone recording and face detection. Still others (\(n=8\) of 26; 31%) enabled the arguably more invasive monitoring methods of screen recording and eye movement tracking. Refer to Figure 6 for the full results.
#### 4.3.2 Monitoring Method Effectiveness
A majority (\(n=16\) of 26; 62%) of educators reported that they would enable the same monitoring methods again to administer another online proctored exam (**Q30**), while 38% (\(n=10\) of 26) reported they would not use (\(n=3\) of 26; 12%), or were unsure if they would use (\(n=7\) of 26; 27%), the same monitoring methods.
When asked what monitoring methods in their online proctored assessments they would change and why (**Q31**), educator P19 (College of Arts & Sciences; Respondus), who wanted to remove webcam monitoring, said, "Students did not feel comfortable or said they did not have a camera so we could not go that route." Furthermore, educator P124 (College of Arts & Sciences; Respondus), who wanted to remove face detection monitoring, shared, "Facial detection not as necessary I don't turn on the option for Respondus to fire off warnings when students face disappear from view, but I do watch the recordings later to determine if there is any egregious violations."
Additionally, educators reported technology issues with monitoring that relies on the webcam or microphone. For instance, educator P47 (School of Nursing; Examsoft) stated,
There is no way of guaranteeing that the student's webcam and microphone are working during the test. It is not until after that we can determine if they were working and by then, it's too late.
Bandwidth and lack of staff to view the videos could also be an issue as educator P103 (School of Business; Respondus) added, "Most students had excuses not to have cameras, the low bandwidth was a problem with Respondus, and we don't have enough staff for watching/proctoring."
#### 4.3.3 Comfort With Monitoring Methods
We asked educators how comfortable they felt using each monitoring type during an online proctored exam (**Q33**). Overall, educators were generally comfortable with the 12 monitoring types presented to them. The lockdown browser monitoring method had the largest number (\(n=22\) of 26; 85%) of educators who reported being _comfortable_, many (\(n=20\) of 26; 77%) of them _extremely comfortable_. This is followed by internet activity monitoring at 65% (\(n=17\) of 26) of educators _comfortable_ and keyboard restrictions at 54% (\(n=14\) of 26). Please refer to Figure 6 for the full results. These results are notably in line with the results found by Balash et al. when they asked students to select their comfort level with online exam proctoring monitoring methods [2].
Figure 6: Educators who reported using online exam proctoring tools (\(n=26\) of 125) were asked to select all monitoring methods they enabled (**Q29**). Over 80% of educators reported enabling the lockdown browser, and 50% of educators enabled webcam recording during their online proctored exams. The educators were also asked to select how comfortable they would feel about using each monitoring type to monitor students during online proctored exams in their course (**Q33**). Most educators were comfortable with a lockdown browser. A live proctor not visible to students had the largest number of uncomfortable educators, followed by eye movement tracking and web browser history monitoring.
**Information Sharing Concerns** We asked educators to report their level of concern on a 5-point Likert scale for students sharing various types of information with exam proctoring companies (**Q34**). Educators were generally _unconcerned_ with most types of student information sharing, except for identifiable information such as social security number, date of birth, street address and location data. The student information that garnered the largest number (\(n=22\) of 26; 85%) of _concerned_ educators was social security number. Please refer to Figure 7 for the full results.
**Exploring Factors Influencing Proctor Usage** We performed exploratory analysis to determine potential factors that could lead to increased or decreased usage of exam proctoring in the future. As we had no priors and a small sample, we applied feature-reduction analysis using multiple logistic regressions, considering all possible factors and removing individual factors until the model either did not converge or the variance was no longer improving. As this is exploratory, we refrain from presenting odds ratios and p-values in text and instead focus on factors that could be explored more in future research. The factors we considered included educator comfort with the 12 exam monitoring types (**Q33**), concern for 11 student information sharing types (**Q34**), agreement that online exam proctoring tools make it less likely that students will cheat (**Q19**), the percentage of actual cheating found (**Q20**), and if the tool reported potential cheating (**Q21**). As outcomes we considered the likelihood of using online exam proctoring tools for assessments assuming similar circumstances to the 2020/2021 academic year (**Q13**), Fall 2021 (**Q15**), and a full return to in-person learning (**Q17**). The outcomes were binned into two levels: educators who were _Extremely Likely_ or _Somewhat Likely_ to use proctoring and those who were not. The full models are found in Appendix B.
The leading factors that survived reduction were comfort with live proctors, concern about sharing student information, and agreement that proctoring actually prevents cheating. For instance, we found (Table 3) a correlation with participants who were _Extremely Comfortable_ or _Somewhat Comfortable_ with a live proctor not visible to students and with internet monitoring of students (**Q33**). Those participants were more likely to use online exam proctoring tools given similar circumstances to those of the 2020/2021 academic year (**Q13**). We find a similar correlation for those who _Strongly Agree_ or _Somewhat Agree_ that the use of online exam proctoring tools makes it less likely that students will cheat (**Q19**). Future work could design experiments around these factors to quantify the effects.
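To make the reduction procedure concrete, the following is a minimal sketch of one way such a backward feature-reduction logistic regression could be run. The file name, column names, and the pseudo-R\({}^{2}\) stopping rule are our assumptions for illustration, not the authors' actual analysis script (which is available in the repository referenced below).

```python
# Hypothetical sketch of backward feature-reduction logistic regression.
# Assumptions: survey responses live in a CSV, Likert answers for the
# predictors are numerically coded, and fit quality is tracked via
# McFadden's pseudo-R^2 as a stand-in for "variance no longer improving".
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")

# Binary outcome: likely to use proctoring again (Q13) vs. not
y = df["Q13"].isin(["Extremely likely", "Somewhat likely"]).astype(int)

# Candidate predictors: comfort with monitoring types (Q33), concern about
# information-sharing types (Q34), perceived deterrence (Q19)
features = [c for c in df.columns if c.startswith(("Q33_", "Q34_", "Q19"))]

def fit(cols):
    return sm.Logit(y, sm.add_constant(df[cols])).fit(disp=0)

# Drop one predictor at a time while the model still converges and the
# fit does not degrade.
while len(features) > 1:
    base = fit(features)
    best = None
    for f in features:
        trial = [g for g in features if g != f]
        try:
            m = fit(trial)
        except Exception:
            continue  # model no longer converges without this factor
        if m.prsquared >= base.prsquared and (best is None or m.prsquared > best[0]):
            best = (m.prsquared, trial)
    if best is None:
        break  # every removal hurts the fit; stop reducing
    features = best[1]

print("surviving factors:", features)
```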
## 5 Discussion and Conclusions
We surveyed (\(n=125\)) educators at a large academic institution about their perceptions and use of online proctoring during the 2020/21 academic year. Of those who responded to the survey, most (\(n=99\); 79%) did not use online proctoring, with some arguing that it was invasive or unnecessary.
Of those that did use online proctoring tools (\(n=26\); 21%), many felt that they did not have a choice either because of the necessity to maintain academic integrity or because they were required to do so by their department. Educators who used online proctoring were also not uniform in their view that it actually helps to deter and detect cheating, despite noting that it was a good solution under the circumstances. Furthermore, there was general comfort among educators who used remote proctoring with the monitoring of students (e.g., via video, screen share, or microphone); these educators were more concerned with sharing student data (e.g., name, student identification, etc.) with online proctoring companies.
Moving forward, even in a situation where there is full, in-person learning, many educators that use online proctoring indicated they would continue to do so, suggesting that more work is needed to address the potential privacy and security risks for students and educators when using these tools.
**Privacy Tradeoffs** There were marked differences between the educators who used online exam proctoring and those who chose not to use the tools. Educators who did not use online proctoring tools can generally be classified into one of two categories: The first consists of educators who preferred to redesign their assessments so they could be more easily completed remotely without concern for academic integrity violations, such as open book/note/internet exams and writing or project-based assignments. The second consists of those who considered the tradeoffs between student privacy and the
Figure 7: Educators who reported using online exam proctoring tools (\(n=26\) of 125) were asked to indicate how concerned they would be by students sharing each type of information with exam proctoring companies (**Q34**). The largest number of educators (\(n=22\) of 26; 85%) were concerned with students sharing their social security number. The fewest (\(n=3\) of 26; 12%) were concerned with students sharing their screen view.
utility of the tools and decided that the potential privacy and security risks to student test takers outweighed the utility of the tools. Overall, the instructors who did not use remote proctoring had the most thematically negative responses to these tools and often highlighted the privacy risks and potential harms to students.
Likewise, the educators in our study that did use online proctoring tools can generally be classified into one of two categories: The first comprises those who were required to use online proctoring services by their department or organizational unit, or as a standardized testing requirement. Their opinions of the tools generally matched those of educators who chose not to use them, although they viewed the tools as less harmful and privacy invasive than non-users did. The second category comprises educators who considered the tradeoffs between student privacy and the utility of the tools and decided that the need for academic integrity outweighed the potential privacy and security risks to student test takers.
**Educator Training and Guidance for Online Proctoring** Many educators in the study expressed a desire for training to better understand the available proctoring tools and their impacts. Overall, they demonstrated general knowledge about how exam proctoring tools function with respect to monitoring students and how they restrict the use of unauthorized resources. However, educators seemed unaware of the methods used to validate students' identities and what happens to students' information after its collection for this purpose.
This presents an opportunity for improved training and guidance at the institutional level that provides the pros and cons about online proctoring and associated privacy/security risks. Such training and guidance could also include technical details on how exam proctoring tools can help educators maintain principles of least monitoring by using the smallest number of monitoring types necessary, given the constraints of the class. Moreover, institutional involvement in such training could set clear recommendations to set expectations for both educators and students.
**Limitations on Enforcing Academic Integrity** Qualitative responses suggest that educators are broadly skeptical about what remote proctoring tools can actually do to ensure exam integrity. Many highlighted the difference between the ability to deter potential cheating and the inability to detect motivated cheating. For instance, online exam proctoring tools may prevent panic cheating, but they will not necessarily stop or detect more planned or sophisticated cheating techniques, such as secondary devices, virtual machines, or other workarounds. Furthermore, even educators who use the tools are fairly split on whether online proctoring actually deters and detects cheating.
**Transfer of Trust** In the qualitative responses from educators, we find evidence of a transfer of trust between the institutions that license and provide the online exam proctoring software and the software itself. A similar finding was reported by Balash et al. [2] when surveying students on their opinions of online proctoring, noting that students trusted the institution, and since the institution licensed online proctoring, they implicitly trusted online proctoring tools. We found that the educators who used online proctoring expressed the same sentiment. They believed that their institution would not provide the software to proctor exams if it was not safe for students to install on their computers.
Institutional support for third-party proctoring software, which conveys credibility, makes the exam proctoring software appear safer and less potentially problematic because educators assume that institutions have properly vetted the software and the methods used by the proctoring services.
It is unclear that such trust is warranted. All software has inherent risks of security vulnerabilities, and recent major security and privacy incidents have shown that online exam proctoring software has been subject to both major data breaches and to security vulnerabilities that allowed remote activation. Given the capabilities of exam proctoring software to monitor users and disable system functionality, extra precautions should be taken to reduce security risks to students who are required to install the software.
**Implications to Security and Privacy** Remote proctoring systems are naturally invasive. When operating correctly, they monitor and restrict how students can interact with their own computers. The consequences of security vulnerabilities and breaches (cf. [1, 3, 29, 27]) are significant, especially given the private information collected about students--for example, their physical locations, photos and videos of their environment, and information about their computing devices.
Understanding how and why users choose (or are forced to use) software that could harm their privacy and security is a critical research need. This paper examines the perceptions of the decision-makers who choose whether or not to require remote proctoring--a form of monitoring software that has seen explosive growth. As argued above, providing additional guidance and training to educators, and heightening their awareness of the privacy and security risks that these systems impose on their students, is paramount. More generally, as with other technologies that aim at restricting and monitoring user functionality (e.g., remote IT management software), an argument can be made that limiting users' ability to control their own devices is antithetical to security and privacy. Given the proliferation of remote proctoring, there is an urgent need to better understand not only the potential for abuse and misuse of these systems, but also the perceptions of the educators who have the power to decide if and how they are used.
## Data and Source Availability
All data, scripts, and qualitative codebooks are available at the following repository: [https://github.com/gwusec/2023-USENIX-Educator-Perspectives-of-Exam-Proctoring](https://github.com/gwusec/2023-USENIX-Educator-Perspectives-of-Exam-Proctoring)
### Acknowledgements
We thank the anonymous shepherd and reviewers for improving this manuscript and preparing it for publication. This material is based upon work supported by the National Science Foundation under Grant Nos. 1845300, 2138654, and 2138078.
|
2303.13915 | Benchmarking the Impact of Noise on Deep Learning-based Classification
of Atrial Fibrillation in 12-Lead ECG | Electrocardiography analysis is widely used in various clinical applications
and Deep Learning models for classification tasks are currently in the focus of
research. Due to their data-driven character, they bear the potential to handle
signal noise efficiently, but its influence on the accuracy of these methods is
still unclear. Therefore, we benchmark the influence of four types of noise on
the accuracy of a Deep Learning-based method for atrial fibrillation detection
in 12-lead electrocardiograms. We use a subset of a publicly available dataset
(PTB-XL) and use the metadata provided by human experts regarding noise for
assigning a signal quality to each electrocardiogram. Furthermore, we compute a
quantitative signal-to-noise ratio for each electrocardiogram. We analyze the
accuracy of the Deep Learning model with respect to both metrics and observe
that the method can robustly identify atrial fibrillation, even in cases where signals are labelled by human experts as being noisy on multiple leads. False
positive and false negative rates are slightly worse for data being labelled as
noisy. Interestingly, data annotated as showing baseline drift noise results in
an accuracy very similar to data without. We conclude that the issue of
processing noisy electrocardiography data can be addressed successfully by Deep
Learning methods that might not need preprocessing as many conventional methods
do. | Theresa Bender, Philip Gemke, Ennio Idrobo-Avila, Henning Dathe, Dagmar Krefting, Nicolai Spicher | 2023-03-24T11:04:16Z | http://arxiv.org/abs/2303.13915v1 | Benchmarking the Impact of Noise on Deep Learning-based Classification of Atrial Fibrillation in 12-Lead ECG
###### Abstract
Electrocardiography analysis is widely used in various clinical applications and Deep Learning models for classification tasks are currently in the focus of research. Due to their data-driven character, they bear the potential to handle signal noise efficiently, but its influence on the accuracy of these methods is still unclear. Therefore, we benchmark the influence of four types of noise on the accuracy of a Deep Learning-based method for atrial fibrillation detection in 12-lead electrocardiograms. We use a subset of a publicly available dataset (PTB-XL) and use the metadata provided by human experts regarding noise for assigning a signal quality to each electrocardiogram. Furthermore, we compute a quantitative signal-to-noise ratio for each electrocardiogram. We analyze the accuracy of the Deep Learning model with respect to both metrics and observe that the method can robustly identify atrial fibrillation, even in cases where signals are labelled by human experts as being noisy on multiple leads. False positive and false negative rates are slightly worse for data being labelled as noisy. Interestingly, data annotated as showing baseline drift noise results in an accuracy very similar to data without. We conclude that the issue of processing noisy electrocardiography data can be addressed successfully by Deep Learning methods that might not need preprocessing as many conventional methods do.
Keywords: Deep Learning, Electrocardiogram, Atrial Fibrillation, Noise
## Introduction
Electrocardiograms (ECGs) are recordings of the electrical activity of the heart and are frequently used in emergency and in-patient care. However, different types of noise, either stemming from the patient's behaviour (e.g. motion) or the devices
(e.g. power line interference), can be introduced during measurement. The presence of noise leads to a twofold problem: It impedes detection of anomalies leading to false findings and alarms [1] and, if the signal-to-noise ratio (SNR) reaches a certain level, detecting diagnostically-relevant features becomes impossible [2].
One class of features with high clinical importance is the so-called "fiducial points", i.e. the center, on- and offsets of ECG waves such as the QRS complex and the P-/T-wave. They are used for segmenting heartbeats into meaningful intervals [3] and by doing so allow for arrhythmia detection. Atrial fibrillation (AF) is the most prevalent arrhythmia; it is characterized by uncoordinated electrical impulses in the atrium and might lead to severe cardiovascular issues, such as stroke or heart failure. Analyzing the interval in a heartbeat where a P-wave is expected is crucial for AF classification, as its absence indicates a lack of sinoatrial node activity and is thereby a sign of AF [4]. However, so-called fibrillatory waves might occur, mimicking P-waves and impeding the assessment of sinoatrial node activity.
Many state-of-the-art algorithms for ECG classification are based on extracting semantic features derived from human expert knowledge, such as fiducial points. However, as these algorithms tend to produce wrong results in the presence of noise [5], various denoising strategies [6] have been proposed. In contrast, algorithms from the field of deep learning (DL) have recently been explored for ECG classification tasks [7, 8]. Instead of semantic features, they are based on agnostic features derived from fully-automatic correlation analysis between input ECGs and output classes in an end-to-end fashion. These models rest on the underlying premise that training and test datasets stem from the same distribution, which is often their pitfall in case of dataset shifts (differing devices, users, noise). Although initial studies indicate a better robustness to noise [9], it remains unclear to what extent noise affects these models.
Therefore, in this work we benchmark the accuracy of a state-of-the-art pre-trained DL model for 12-lead ECG classification regarding its susceptibility to different types of noise. We use the publicly available PTB-XL dataset, which contains annotations for several categories of noise made by human technical experts, and compare the model's accuracy w.r.t. the type of noise.
## Methods
We analyze a subset of the PTB-XL dataset containing 12-lead ECGs of 10 second length [10]. It contains all 1,514 ECGs annotated as showing AF (label in PTB-XL: _AFIB_) and we add the first \(2,000\) normal ECGs (_NORM_) as healthy controls. For each signal, we use a qualitative and a quantitative method to estimate SNR.
_SNR based on annotations_ (SNR\({}_{\text{a}}\)) For each ECG we determine the number of noisy leads using the columns _baseline_drift_, _static_noise_, _burst_noise_ and _electrodes_problems_ provided in the PTB-XL metadata. In the majority of cases, they contain the name of a single lead (e.g. "aVL"), multiple leads ("I,aVR") or ranges (e.g. "I-III"). Using a custom script, we convert this information to numeric values ranging from 0 to 12 for each type of noise. The labels "alles" (all) and "noisy recording" are converted to 12. We remove ECGs associated with other labels as
they are of a more qualitative nature (e.g. "leicht" (light)). In this way, for each signal a qualitative, unit-less, linear SNR measure is computed, ranging from 0 (no noise reported) to \(12\times 4=48\) (all leads are affected by all types of noise). As shown in Tbl. 1, we use this information to split the dataset into ECGs without ("w/o") a noise label and ECGs with ("w/") a noise label.
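As a rough illustration of this conversion, a minimal Python sketch is given below. The parsing rules follow the description above, while the function name and error handling are our own assumptions rather than the authors' actual script.

```python
# Minimal sketch of the annotation-to-count conversion described above.
# Lead names follow the standard 12-lead ECG convention used in PTB-XL.
LEADS = ["I", "II", "III", "aVR", "aVL", "aVF",
         "V1", "V2", "V3", "V4", "V5", "V6"]

def count_noisy_leads(annotation):
    """Map a noise annotation (e.g. 'aVL', 'I,aVR', 'I-III', 'alles',
    'noisy recording') to the number of affected leads (0-12)."""
    if annotation is None or annotation != annotation:  # None or NaN
        return 0
    annotation = str(annotation).strip()
    if annotation in ("alles", "noisy recording"):
        return 12
    affected = set()
    for token in annotation.split(","):
        token = token.strip()
        if "-" in token:  # a range such as 'I-III'
            start, end = (LEADS.index(t.strip()) for t in token.split("-"))
            affected.update(LEADS[start:end + 1])
        elif token in LEADS:
            affected.add(token)
        else:
            # qualitative label such as 'leicht': the ECG is dropped
            raise ValueError(f"qualitative label, drop this ECG: {token!r}")
    return len(affected)

# SNR_a is then the sum of this count over the four noise columns (0..48).
```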
It has to be underlined that a value of zero does not have to mean that there is no noise, it just reflects that there is a potential for a noise-free ECG. The authors of PTB-XL also indicated that missing annotations in case of artifacts or false annotations in case of noise-free signals might occur. However, they concluded that the metadata bears the potential for ECG quality assessment [11].
_Measured SNR_ (SNR\({}_{\text{m}}\)) Due to the limitations of the manual annotations, and as they are only available for 22% of the PTB-XL database [11], we additionally use a quantitative SNR measure for each signal. We compute the Fourier Transform of the signals as well as the ratio of energies in two frequency bands, as proposed in [12]. Based on the expected heart rates during AF, we define the "signal" frequency band as ranging from 40 to 150 beats-per-minute (0.66 to 2.5 Hz) and the "noise" frequency band as \(<40\) and \(>150\) beats-per-minute. By scaling with \(10\log_{10}\), we arrive at an SNR expressed in the logarithmic decibel scale (dB).
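A minimal sketch of this computation is shown below, assuming a single-lead signal `x` sampled at `fs` Hz. The DC removal and per-lead handling are our own choices; the paper does not specify them, so this is only an approximation of the measure from [12].

```python
# Sketch of SNR_m: ratio of spectral energy inside vs. outside the
# 0.66-2.5 Hz band (40-150 bpm), expressed in dB.
import numpy as np

def snr_m(x, fs):
    x = np.asarray(x, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2  # mean removal: our choice
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= 0.66) & (freqs <= 2.5)
    e_signal = spectrum[in_band].sum()
    e_noise = spectrum[~in_band].sum()
    return 10.0 * np.log10(e_signal / e_noise)
```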
_DL classification._ ECG data is classified with a pre-trained model by Ribeiro et al. [7]. The model is a residual network and was trained on more than two million ECGs acquired within a Brazilian telehealth network. It outputs independent probabilities for six abnormalities, but we limit our analysis to AF. We use a threshold defined by the authors2.
Footnote 2: [https://github.com/antonior92/automatic-ecg-diagnosis/blob/master/generate_figures_and_tables.py](https://github.com/antonior92/automatic-ecg-diagnosis/blob/master/generate_figures_and_tables.py), commit 89f929d, line 121
_Data analysis._ We analyze the subset regarding differences between ECGs with and without noise labels for i) their distributions of SNRm and SNRa as well as ii) the accuracy of the DL classification for each noise category. For ii) we compare the noisy recordings (SNRa \(>0\)) with randomly drawn signals from equally sized control groups (SNRa \(=0\)).
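A sketch of this control-group comparison; the array name `correct_wo` (one boolean per ECG with SNRa = 0, marking a correct prediction) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def control_group_accuracy(correct_wo, n, repeats=100):
    """Mean and std of DL accuracy over `repeats` random control groups
    of size `n`, drawn from the ECGs without noise labels."""
    accs = [rng.choice(correct_wo, size=n, replace=False).mean()
            for _ in range(repeats)]
    return float(np.mean(accs)), float(np.std(accs))

# Example: compare against the 305 ECGs annotated with baseline drift.
# mean_acc, std_acc = control_group_accuracy(correct_wo, n=305)
```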
## Results
Fig. 1 shows the distributions of SNRa and SNRm values on the left and right side, respectively. The majority of ECGs with noise labels have SNRa values below 15, with a maximum of 29. This shows that even within a duration of 10 seconds, several data quality issues
\begin{table}
\begin{tabular}{l||c|c} Noise Label & AF & Healthy controls \\ \hline w/o & 1,097 & 1,581 \\ \hline w/ & 417 & 419 \\ \end{tabular}
\quad
\begin{tabular}{l||c|c} Noise Label & DL: FP & DL: FN \\ \hline w/o & 0.04 \% & 3.96 \% \\ \hline w/ & 0.24 \% & 7.06 \% \\ \end{tabular}
\end{table}
Table 1: Properties of the subset extracted from PTB-XL (left) and results of DL-based AF classification (right). ECGs are grouped according to the annotations: if there is at least one noise label in the metadata, an ECG is assigned to "w/", otherwise to "w/o". FP and FN denote False Positive and False Negative, respectively.
per lead may occur. SNR\({}_{\text{m}}\) values occur in the range of \([-33.03,-7.78]\) dB, with no clear difference between ECGs with and without noise labels.
Tbl. 1 (right) shows the FP and FN rates of AF classification w.r.t. the existence of noise labels. The FP rate worsens by 0.2% and the FN rate by 3.1% when ECGs are annotated with noise labels. Tbl. 2 shows the DL accuracy for each type of noise compared to the same number of ECGs randomly drawn 100 times from the data without noise labels. ECGs with baseline drift or electrode problems are classified more accurately than random ECG signals without noise annotations, whereas ECGs with annotated burst and static noise reveal worse performance.
## Discussion
In general, the DL model can robustly classify AF, even when human experts labelled multiple leads of an ECG as influenced by noise. Interestingly, in the presence of baseline drift or electrode problems, accuracy is not deteriorated but lies within one standard deviation of signals without noise labels. As a limitation, it has to be underlined that the annotations are incomplete [11] and the subset contains only six signals annotated with electrode problems.
As the DL model can be regarded as a "black box", we can only speculate about the reasons for this behaviour. It could be explained by partial misinterpretation of baseline drift or static noise as P-waves. As we showed in previous work [13], the DL model was trained such that P-waves and R-peaks have a high relevance, similar to human perception, while numerous other features influence its decision. This multi-factor decision process could be robust to different kinds of noise, but this requires their presence during training. A shift between training and test datasets is always an issue for DL models. To mitigate this effect, it has been suggested to intentionally include noise during training [9]. The model used in this work was trained on \(2,000,000\) non-public ECGs.
However, since the distribution of SNR\({}_{\text{m}}\) looks visually similar with and without noise labels, SNR\({}_{\text{a}}\) might not be optimal for quality assessment on its own.
\begin{table}
\begin{tabular}{l||c|c|c|c} \hline Label Type & Baseline Drift & Static Noise & Burst Noise & Electrode \\ & (\(n=305\)) & (\(n=478\)) & (\(n=156\)) & Problems (\(n=6\)) \\ \hline \hline w/o & \(96.8\%\pm 0.9\%\) & \(96.8\%\pm 0.7\%\) & \(96.9\%\pm 1.3\%\) & \(96.3\%\pm 8.0\%\) \\ \hline w/ & \(97.7\%\) & \(94.6\%\) & \(94.9\%\) & \(100.0\%\) \\ \end{tabular}
\end{table}
Table 2: DL accuracy w.r.t. the four types of noise. The variable \(n\) represents the number of signals with the given label (w/). For comparison to signals without a label (w/o), \(n\) ECGs are randomly drawn 100 times and the accuracy is given as mean \(\pm\) standard deviation.
Figure 1: Distribution of values of both SNR metrics with (grey) and without (blue) noise labels.
A "no noise" label, explicitly identifying ECGs without data quality issues, and more labels in general would be a valuable addition for future experiments.
## Conclusion
Our results show that the DL model is able to detect AF in 12-lead ECGs with high accuracy, even in the presence of data quality issues according to human experts. We conclude that the difficulty of processing noisy ECGs can be addressed by end-to-end DL models based on agnostic features. In contrast to conventional methods based on semantic features, they might not require preprocessing for achieving high accuracy. However, more experiments with larger and more diverse datasets should be the subject of future work.
|
2305.11254 | Robust Quantum Controllers: Quantum Information -- Thermodynamic Hidden
Force Control in Intelligent Robotics based on Quantum Soft Computing | A generalized strategy for the design of intelligent robust control systems
based on quantum / soft computing technologies is described. The reliability of
hybrid intelligent controllers increase by providing the ability to
self-organize of imperfect knowledge bases. The main attention is paid to
increasing the level of robustness of intelligent control systems in
unpredictable control situations with the demonstration by illustrative
examples. A SW & HW platform and support tools for a supercomputer accelerator
for modeling quantum algorithms on a classical computer are described. | Sergey V. Ulyanov, Viktor S. Ulyanov, Takakhide Hagiwara | 2023-05-18T18:37:56Z | http://arxiv.org/abs/2305.11254v1 | Robust Quantum Controllers: Quantum Information - Thermodynamic Hidden Force Control in Intelligent Robotics based on Quantum Soft Computing
###### Abstract
A generalized strategy for the design of intelligent robust control systems based on quantum / soft computing technologies is described. The reliability of hybrid intelligent controllers increases by providing imperfect knowledge bases with the ability to self-organize. The main attention is paid to increasing the level of robustness of intelligent control systems in unpredictable control situations, with demonstrations on illustrative examples. A SW & HW platform and support tools for a supercomputer accelerator for modeling quantum algorithms on a classical computer are also described.
\({}^{*}\)Institute of System Analysis and Management, Dubna State University
\({}^{*}\)Meshcheryakov Laboratory of Information Technologies, Joint Institute for Nuclear Research (JINR)
\({}^{\dagger}\)Department of Information Technologies, Moscow State University of Geodesy and Cartography
(MIIGAiK)
\({}^{**}\)Yamaha Motor Co. Ltd., Automotive operations Dpt.
\({}^{*}\)Email: [email protected]
\({}^{\dagger}\)Email: [email protected]
\({}^{**}\)Email: [email protected]
## Introduction
For complex and ill-defined dynamic control objects that are not easily controlled by conventional control systems (such as _P-[I]-D_ controllers), especially in the presence of fuzzy model parameters and different stochastic noises, the System of Systems Engineering methodology provides fuzzy controllers (FC) as one alternative way of designing control systems.
Soft computing methodologies, such as genetic algorithms (GA) and fuzzy neural networks (FNN), have expanded the application areas of FCs by adding optimization, learning and adaptation features.
However, it is still difficult to design an optimal and robust intelligent control system when its operational conditions evolve dramatically (aging, sensor failure and so on). On the one hand, such conditions can be predicted; on the other hand, it is difficult to cover such situations with a single FC.
Using an unconventional computational intelligence toolkit, we propose a solution to this kind of generalization problem by introducing a _self-organization_ design process for robust KB-FCs that is supported by _Quantum Fuzzy Inference_ (QFI) based on quantum soft computing ideas [1-3].
## 1 Problem Formulation
_A. Main problem and toolkit_
One of the main problems in modern FC design is how to design and introduce robust KBs into the control system to increase the _self-learning, self-adaptation and self-organizing capabilities_ that enhance the robustness of the developed FC in unpredicted control situations.
The _learning_ and _adaptation_ aspects of FCs have always been an interesting topic in advanced control theory and system of systems engineering. Many learning schemes were based on the _backpropagation_ (BP) algorithm and its modifications (see, for example, [3] and the references therein). Adaptation processes are based on iterative stochastic algorithms. These ideas work successfully if the control task is performed without ill-defined stochastic noises in the environment, unknown noises in the sensor system and control loop, and so on.
For more complicated control situations, learning and adaptation methods based on BP algorithms or iterative stochastic algorithms do not guarantee the required robustness and accuracy of control.
A solution to this problem based on SCO was developed in [2]. To achieve the _self-organization_ level in an intelligent control system, it is necessary to apply QFI [3, 4]. The described _self-organizing_ FC design method is based on a special form of QFI that uses a few partial KBs designed by SCO.
QFI uses the laws of quantum computing technologies [5] and exploits three main unitary operations: (i) superposition; (ii) entanglement (quantum correlations); and (iii) interference. According to quantum gate computation, the logical union of a few KBs in one generalized space is realized with the _superposition_ operator; with the _entanglement_ operator (which can be equivalently described by different models of a _quantum oracle_ [6]) a search for a "successful" marked solution is formalized; and with the _interference_ operator we can extract "good" solutions with classical _measurement_ operations [2].
_B. Method of solution_
The proposed QFI system consists of a few KB-FCs, each of which has been prepared for appropriate conditions of the control object and excitations by the Soft Computing Optimizer (SCO) [2]. The QFI system is a new quantum control algorithm for the self-organization block, which post-processes the results of fuzzy inference of each independent FC and produces the generalized control signal output online [4].
In this case the output of QFI is an optimal robust control signal, which combines the best features of the individual FC outputs. Therefore, the operation area of such a control system can be expanded greatly, as well as its robustness.
Robustness of control is the background for supporting the reliability of advanced control accuracy under uncertainty and information risk [5].
A simulation example of robust intelligent control based on QFI is introduced.
_C. Main goal_
The main technical purpose of QFI is to supply a self-organization capability for many (sometimes unpredicted) control situations based on a few KBs. QFI produces a robust optimal control signal for the current control situation using a reduction procedure and compression of the redundant information in the KBs of the individual FCs. The process of rejection and compression of redundant information in the KBs uses the laws of quantum information theory [5, 6, 7].
Decreasing the redundant information in the KB-FCs increases the robustness of control without loss of important control quality, such as reliability of control accuracy. As a result, a few KB-FCs with QFI can adapt to unexpected changes in the external environment and to uncertainty in the initial information.
We introduce the main ideas of quantum computation and quantum information theory [6] applied in the developed QFI methods. The ideas of _Quantum Fuzzy Inference_ are introduced, and the robustness of the new types of _self-organizing intelligent control systems_ is demonstrated.
## 2 SCO-structure based on soft computing
_D. KB of FC creation_
SCO uses a chain of GAs (GA1, GA2, GA3) and approximates measured or simulated data (TS) about the modeled system with the desired accuracy, either in simulation or on a real robot. GA1 solves the optimization problem connected with the optimal choice of the number of membership functions and their shapes. GA2 searches for an optimal KB with a given level of rule activation. Introducing an activation level for the rules allows us to sort fuzzy rules according to the value of their information and design a robust KB. GA3 refines the KB using a few criteria.
Figure 1 shows the flow chart of SCO operations on macro level and combines several stages.
Stage 1: _Fuzzy Inference System_ (FIS) _Selection_. The user selects the fuzzy inference model by specifying the following initial parameters: the number of input and output variables; the type of fuzzy inference model (Mamdani, Sugeno, Tsukamoto, etc.); and the preliminary type of MFs.
Stage 2: _Creation of linguistic values_. Using the information obtained in Stage 1, GA\({}_{1}\) optimizes the number of membership functions and their shapes, approximating the teaching signal (TS) obtained from the in-out tables or from the dynamic response of the control object (real or simulated in Matlab).
Stage 3: _Creation of rules_. At this stage we use the rule rating algorithm to select a certain number of rules prior to selecting the index of the output membership function corresponding to each rule. Two criteria are based on a rule's activation parameter, called the "manual threshold level" (TL). This parameter is given by the user (or it can be set automatically).
Stage 4: _Rule base optimization_. GA\({}_{2}\) optimizes the rule base obtained in Stage 3, using the fuzzy model obtained in Stage 1, the optimal linguistic variables obtained in Stage 2, and the same TS as used in Stage 1. Rule base optimization can be performed using a mathematical model or a remote connection to the real control object.
Figure 1: Flow chart of SC Optimizer.
Stage 5: _Refinement of the KB_. At this stage, the structure of the KB is already specified and close to the global optimum.
In order to reach the optimal structure, a few methods can be used. The first method is based on GA\({}_{3}\) with the minimum of the approximation error as the fitness function; in this case KB refinement is similar to classical derivative-based optimization procedures (like the error back-propagation (BP) algorithm for FNN tuning). The second method is also based on GA\({}_{3}\), with the maximum of mutual information entropy as the fitness function. The third method is realized as a pure error back-propagation (BP) algorithm. The BP algorithm may provide further improvement of the output after genetic optimization. As the output of Stages 3, 4 and 5, we obtain a set of KBs corresponding to the chosen KB optimization criteria.
_E. Remote rule base optimization_
Remote KB optimization is performed in the fourth stage of FC design (_Fig. 2_). The physical connection to the environment requires additional equipment for data transfer, such as a radio channel, Bluetooth, WiFi, or a cable connection such as USB. Information is exchanged between the control system and the SCO to form the KB (_Fig. 2_).
The control system reads the sensors and sends the data to a computer for further processing. Taking the input values, SCO evaluates the previous solution (KB-FC) and performs fuzzy inference to check the following solutions (KB-FC). The result of the fuzzy inference is sent to the remote device. Thereafter, the control system processes the input values and generates the control action.
Synchronization of the SCO and the control system is handled by the remote device (robot). To this end, a special program (firmware) was developed.
The connection profile uses a serial port with a transmission rate of 115,200 bits/s. During operation, floating-point values in symbolic form are passed via the COM port. The connection to SCO uses a custom plug-in. Before establishing a connection to the SCO, the COM port number and the check time of one solution (the number of system cycles used to test a solution) are selected.
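A minimal sketch of such a serial link, assuming the plain-text exchange described above (floats in symbolic form at 115,200 bits/s); the port name, message framing and cost-reply convention are illustrative assumptions, not the actual firmware protocol:

```python
import serial  # pyserial

# Assumed protocol: SCO sends one candidate KB-FC parameter set as a
# comma-separated line of floats; the robot replies with the measured
# cost of that candidate after the agreed number of test cycles.
link = serial.Serial(port="COM3", baudrate=115200, timeout=1.0)

def evaluate_candidate(params):
    """Send one candidate solution to the robot and read back its cost."""
    line = ",".join(f"{p:.6f}" for p in params) + "\n"
    link.write(line.encode("ascii"))          # floats in symbolic form
    reply = link.readline().decode("ascii").strip()
    return float(reply) if reply else float("inf")
```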
## 3 QFI-structure based on quantum computing
To design QFI based on a few KBs, additional operations must be applied to the partial KB outputs, operations that draw out and aggregate the value information from the different KBs. The soft computing toolkit
Figure 2: Remote rule base optimization scheme.
does not contain the corresponding necessary operations [8].
The necessary unitary reversible operations are called _superposition_, _entanglement_ (quantum correlation) and _interference_; physically, they are the operators of quantum computing in information processing.
We briefly introduce the particularities of quantum computing and quantum information theory that are used in the quantum QFI block (_Fig. 3_), supporting the self-organizing capability of the FC in a robust intelligent control system (ICS).
Let us consider the peculiarities of quantum computing.
_F. Quantum computing_
In Hilbert space, the superposition of classical states \(c_{1}\left|0\right\rangle+c_{2}\left|1\right\rangle\), called a quantum bit (qubit), means that \(\left|False\right\rangle\) and \(\left|True\right\rangle\) are joined in one state with different probability amplitudes \(c_{i}\), \(i=1,2\). If the Hadamard transform \(H=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\) is independently applied to different classical states, then a tensor product of superposition states results:
\[\left|\psi\right\rangle=H^{\otimes n}\left|False\right\rangle=\frac{1}{\sqrt{2^{n}}}\bigotimes_{i=1}^{n}\left(\left|False\right\rangle+\left|True\right\rangle\right). \tag{1}\]
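Eq. (1) is easy to reproduce in a classical simulation; the following sketch builds \(H^{\otimes n}\) by repeated Kronecker products and applies it to the state \(|0\dots 0\rangle\):

```python
import numpy as np
from functools import reduce

H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate

n = 3
h_n = reduce(np.kron, [H] * n)               # H^(x)n
psi0 = np.zeros(2 ** n)
psi0[0] = 1.0                                # |False...False> = |0...0>
psi = h_n @ psi0                             # uniform superposition, Eq. (1)
print(psi)                                   # every amplitude = 1/sqrt(2^n)
```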
The fundamental result of quantum computation states that any computation can be embedded in a circuit whose nodes are universal gates. These gates offer a decomposition of the unitary operator \(U\) that evolves the system in order to perform some computation. Thus, two problems naturally arise: (i) given a set of functional points \(S=\{(x,y)\}\), find the operator \(U\) such that \(y=U\cdot x\); (ii) given a problem, find the quantum circuit that solves it.
Figure 3: Structure of robust ICS based on QFI.
Algorithms for solving these problems may be implemented in a hardware quantum gate or in software as computer programs running on a classical computer.
It is shown that in quantum computing the construction of a universal quantum simulator based on classical effective simulation is possible [3, 6, 7].
In the general form, the model of quantum algorithm computing comprises the following five stages:
* preparation of the initial state \(\left|\psi_{in}\right\rangle\) (classical or quantum);
* execution of the Hadamard transform for the initial state in order to prepare the superposition state;
* application of the entanglement operator or the quantum correlation operator (quantum oracle) to the superposition state;
* application of the interference operator;
* application of the measurement operator to the result of quantum computing \(\left|\psi_{{}_{out}}\right\rangle.\)
Hence, a quantum gate approach can be used for the global optimization of KB structures of ICSs, based on quantum computing, quantum genetic search and quantum learning algorithms [8].
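These five stages can be collected into a classical simulation skeleton; the `oracle` and `interference` unitaries are left abstract here, since their concrete form depends on the particular quantum algorithm being modeled:

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def run_quantum_algorithm(n, oracle, interference,
                          rng=np.random.default_rng()):
    """Five-stage model: initial state -> Hadamard superposition ->
    oracle (entanglement) -> interference -> measurement.
    `oracle` and `interference` are 2^n x 2^n unitary matrices."""
    dim = 2 ** n
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0                              # stage 1: prepare |0...0>
    psi = reduce(np.kron, [H] * n) @ psi      # stage 2: superposition
    psi = oracle @ psi                        # stage 3: quantum oracle
    psi = interference @ psi                  # stage 4: interference
    probs = np.abs(psi) ** 2                  # stage 5: measurement
    return rng.choice(dim, p=probs / probs.sum())
```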
_G. Quantum information resources in QFI algorithm_
_Figure 4_ shows the algorithm for coding, searching and extracting the value information from two KBs of fuzzy PID controllers designed by SCO.
Thus, in the quantum algorithm for QFI (_Fig. 5_) the following actions are realized [5]:
* The results of fuzzy inference are processed for each independent FC;
* Based on the methods of quantum information theory, valuable quantum information hidden in independent (individual) knowledge bases is extracted;
Figure 4: Example of information extraction in QFI.
* Online, the generalized robust output control signal is designed over the whole set of knowledge bases of the fuzzy controller.
* In this case, the online output signal of QFI is an optimal control signal for the variation of the gains of the PID controller; it incorporates the necessary (best) qualitative characteristics of the output control signals of each of the fuzzy controllers, thus implementing the self-organization principle.
Therefore, the domain of efficient functioning of the structure of the intelligent control system can be essentially extended by including robustness, which is a very important characteristic of control quality.
The robustness of the control signal is the background for maintaining the reliability and accuracy of control under conditions of information uncertainty or a weakly formalized description of the functioning conditions and/or control goals.
The QFI model, based on the physical laws of quantum information theory, uses unitary invertible (quantum) operators for computing; they are named _superposition_, _quantum correlation_ (entanglement operators) and _interference_. The fourth operator, the measurement of the quantum computation result, is irreversible.
The optimal process of drawing value information from a few KBs designed by soft computing is based on the following four facts from quantum information theory [4]: (i) the effective quantum data compression; (ii) the splitting of the classical and quantum parts of the information in a quantum state; (iii) the total correlations in a quantum state are a "mixture" of classical and quantum correlations; and (iv) the existence of hidden (locked) classical correlations in a quantum state [6, 9].
Figure 5: The structure of QFI gate.
This quantum control algorithm uses these four facts from quantum information theory: (i) compression of classical information by coding in the computational basis \(\{\left|0\right\rangle,\left|1\right\rangle\}\) and forming the quantum correlation between different computational bases (Fact 1); (ii) separating and splitting the total information and correlations into "classical" and "quantum" parts using the Hadamard transform (Facts 2 and 3); (iii) extracting the unlocked information and the residual redundant information by measuring the classical correlation in the quantum state (Fact 4) using the criterion of the maximal corresponding probability amplitude.
These facts are the informational resources behind QFI. Applying them, it is possible to extract an additional amount of valuable quantum information from the smart KBs produced by SCO in order to design _wise_ control, using compression and rejection procedures on the redundant information in the classical control signal. Below we discuss the application of this quantum control algorithm in the QFI structure.
_H. Remote quantum base optimization_
The scaling factor is used as the adjustable parameter in remote quantum base optimization; it is applied in the final step of forming the PID gains (_Fig. 5_).
During operation, floating-point values in symbolic form are passed via the COM port. The control system reads the sensors and sends the values to a computer for further processing. Taking the input values, the GA evaluates the previous solution and performs quantum fuzzy inference to check the following solutions. The result of the fuzzy inference is sent to the remote device. Thereafter, the control system processes the input values and generates the control action. The connection to QFI is implemented through a plug-in.
Before establishing a connection to the SCO, the COM port number and the check time of one solution (the number of system cycles used to test a solution) are selected (_Fig. 6_).
## 4 KB-self-organization of FC's based on QFI
### Robust FC design toolkit
The kernel of the abovementioned FC design toolkit is the so-called SCO implementing advanced soft computing ideas. SCO is considered a new flexible tool for the design of an optimal structure and robust
Figure 6: Remote connection plug-in for QC Optimizer.
KBs of FCs based on a chain of genetic algorithms (GAs), with information-thermodynamic criteria for KB optimization and an advanced error back-propagation algorithm for KB refinement [2]. The input to SCO can be measured or simulated data (called the "teaching signal" (TS)) about the modeled system. For TS design (or for GA fitness evaluation) we use a stochastic simulation system based on the control object model. A more detailed description of SCO is given in [1, 2]. Below we discuss the application of this algorithm in the QFI structure.
_Figure 3_ illustrates, as an example, the structure and main ideas of a self-organized control system consisting of two FCs coupled in one QFI chain that supplies the self-organizing capability. According to the algorithm described above, the input to the QFI gate is considered, according to (1), as a superposed quantum state \(K_{1}(t)\otimes K_{2}(t)\), where \(K_{1,2}(t)\) are the outputs of the fuzzy controllers FC1 and FC2 designed by SCO (see _Fig. 4_) for the given control task in different control situations (for example, in the presence of different stochastic noises).
The algorithm of the superposition calculation is presented in _Fig. 7_ and described in detail in [4, 5].
For simplicity, we discuss the situation in which an arbitrary amount of correlation is unlocked with a one-way message. Let us consider the communication process between two KBs as communication between two players \(A\) and \(B\) (see _Figs 4_ and 7) and let \(d=2^{n}\). According to the laws of quantum mechanics, initially we must prepare a quantum state described by the density matrix \(\rho\) from two classical states (KB\({}_{1}\) and KB\({}_{2}\)).
The initial state \(\rho\) is shared between the subsystems held by \(A\) (KB\({}_{1}\)) and \(B\) (KB\({}_{2}\)), with respective dimensions \(d\):
Figure 7: The algorithm of superposition calculation.
\[\rho=\frac{1}{2d}\sum_{k=0}^{d-1}\sum_{t=0}^{1}\left(\left|k\right\rangle\left\langle k\right|\otimes\left|t\right\rangle\left\langle t\right|\right)_{A}\otimes\left(U_{t}\left|k\right\rangle\left\langle k\right|U_{t}^{\dagger}\right)_{B}\,. \tag{2}\]
Here \(U_{0}=I\) and \(U_{1}\) changes the computational basis to a conjugate basis, \(\left|\left\langle i\right|U_{1}\left|k\right\rangle\right|=1/\sqrt{d}\ \forall\,i,k\).
In this case, \(B\) chooses \(\left|k\right\rangle\) randomly from \(d\) states in two possible random bases, while \(A\) has complete knowledge of his state. The state (2) can arise from the following scenario: \(A\) picks a random \(n\)-bit string \(k\) and sends \(B\) either \(\left|k\right\rangle\) or \(H^{\otimes n}\left|k\right\rangle\), depending on whether the random bit \(t=0\) or \(1\). \(A\) can later send \(t\) to \(B\) to unlock the correlation. Experimentally, the Hadamard transform \(H\) and measurements on single qubits are sufficient to prepare the state (2) and later extract the unlocked correlation in \(\rho^{\prime}\). The initial correlation is small, i.e. \(I_{C}^{(1)}(\rho)=\frac{1}{2}\log d\). The final amount of information after the complete measurement \(M_{A}\) in one-way communication is \(I_{C}(\rho^{\prime})=\log d+1\), i.e., the amount of _accessible information increases_. This phenomenon is impossible classically.
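The state (2) and the conjugate-basis property of \(U_1\) can be verified numerically for a small \(d\); in this sketch we take \(U_1=H^{\otimes n}\), as in the scenario above:

```python
import numpy as np
from functools import reduce

n = 2
d = 2 ** n
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U = [np.eye(d, dtype=complex), reduce(np.kron, [H] * n)]  # U0 = I, U1 = H^(x)n

# Conjugate-basis property: |<i|U1|k>| = 1/sqrt(d) for all i, k.
assert np.allclose(np.abs(U[1]), 1.0 / np.sqrt(d))

def proj(i, dim):
    """Rank-one projector |i><i| of the given dimension."""
    v = np.zeros(dim, dtype=complex)
    v[i] = 1.0
    return np.outer(v, v.conj())

# Build the shared state of Eq. (2): A holds |k><k| (x) |t><t|,
# B holds U_t |k><k| U_t^dagger, uniformly mixed over k and t.
rho = sum(np.kron(np.kron(proj(k, d), proj(t, 2)),
                  U[t] @ proj(k, d) @ U[t].conj().T)
          for k in range(d) for t in range(2)) / (2 * d)
assert np.isclose(np.trace(rho).real, 1.0)   # valid density matrix
```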
However, states exhibiting this behaviour _need not be entangled_, and the corresponding communication can be organized using the Hadamard transform [9].
Therefore, using the Hadamard transformation and a new type of quantum correlation as the communication between a few KBs, it is possible to increase the initial information through unconventional quantum correlation (a quantum cognitive process of hidden value information extraction online; see, e.g., _Fig. 4_). _Figure 8_ shows the structure of the Quantum Computing Optimizer of robust KB-FCs based on QFI [4].
In the present report we consider a simplified case of QFI in which the Hadamard transform organizes an unlocked correlation in the superposition of two KBs; instead of the difficult-to-define entanglement operation, an equivalent quantum oracle is modeled that estimates an _intelligent_
Figure 8: QFI-process by using QC Optimizer (QFI kernel).
_state_ with the maximum probability amplitude in the corresponding superposition of classical states (the minimum entropy principle relative to the extracted quantum knowledge [5]).
The interference operator extracts this maximum probability amplitude with a classical measurement.
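A deliberately simplified numerical sketch of this selection step for two KBs is shown below; encoding each KB output as a single qubit amplitude is our own illustrative choice, not the full QFI gate of _Fig. 5_:

```python
import numpy as np
from functools import reduce

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

def qfi_select(p_kb1, p_kb2):
    """Encode two normalized KB outputs (in [0, 1]) as qubit amplitudes,
    form their joint superposition, mix it with a Hadamard transform
    (unlocked correlation), and return the index of the basis state with
    maximal probability amplitude (minimum entropy principle)."""
    q1 = np.array([np.sqrt(1.0 - p_kb1), np.sqrt(p_kb1)])  # KB1 qubit
    q2 = np.array([np.sqrt(1.0 - p_kb2), np.sqrt(p_kb2)])  # KB2 qubit
    joint = np.kron(q1, q2)                   # superposition of 4 states
    mixed = reduce(np.kron, [H, H]) @ joint   # Hadamard-organized correlation
    return int(np.argmax(np.abs(mixed)))      # interference + measurement
```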
The use of the described QFI model for the control of nonlinear, locally and globally unstable dynamic systems is described below.
## 5 Benchmark simulations
It is demonstrated that FCs prepared to maintain a control object under the prescribed conditions often fail to control it when such conditions change dramatically. We propose a solution to such problems by introducing a quantum generalization of strategies in fuzzy inference, derived online from a set of pre-defined fuzzy controllers by the new QFI-based system. The latter is a new quantum algorithm in quantum computation without entanglement. Two benchmarks are considered: robust control of locally and globally unstable control objects.
### Benchmark 1: Globally unstable control object simulation
The "cart-pole" control object is a nonlinear dissipative system. It is a typical task of control theory, used to demonstrate the quality of a control system. The control task is to stabilize the inverted pendulum in the vertical position. The motion of the "cart-pole" dynamic system is described by the following equations:
\[\ddot{\theta}=\frac{g\sin\theta+\cos\theta\left(\frac{u+\xi(t)+a_{1}\dot{z}+a_{2}z-ml\dot{\theta}^{2}\sin\theta}{m_{c}+m}\right)-k\dot{\theta}}{l\left(\frac{4}{3}-\frac{m\cos^{2}\theta}{m_{c}+m}\right)}\,, \tag{3}\]
\[\ddot{z}=\frac{u+\xi(t)-a_{1}\dot{z}-a_{2}z+ml\left(\dot{\theta}^{2}\sin\theta-\ddot{\theta}\cos\theta\right)}{m_{c}+m}\,, \tag{4}\]
where \(\theta\) is the pendulum deviation angle (degrees); \(z\) is the movement of the cart (m); \(g\) is the acceleration of gravity (9.8 m/s\({}^{2}\)); \(m_{c}\) is the cart mass and \(m\) the pendulum mass (kg); \(l\) is the pendulum half-length (m); \(\xi(t)\) is the stochastic excitation; and \(u\) is the control force acting on the cart (N). The equations for the entropy production rate in the control object and in the PID controller have the following form, respectively:
\[\frac{d}{dt}S_{\theta}=\frac{k\dot{\theta}^{2}+\frac{ml\dot{\theta}^{3}\sin 2\theta}{m_{c}+m}}{l\left(\frac{4}{3}-\frac{m\cos^{2}\theta}{m_{c}+m}\right)}\ ;\quad\frac{d}{dt}S_{z}=a_{1}\dot{z}^{2}\ ;\quad\frac{d}{dt}S_{u}=k_{d}\dot{e}^{2}. \tag{5}\]
The following parameter values are used: \(m_{c}=1\); \(m=0.1\); \(l=0.54\); \(k=0.4\); \(a_{1}=0.1\); \(a_{2}=5\); and the initial position \(\left[\theta_{0};\dot{\theta}_{0};z_{0};\dot{z}_{0}\right]=\left[10;0.1;0;0\right]\) (the pendulum deviation angle is given in degrees); the constraint on the control force is \(-0.5<u<5.0\).
The specific feature of the control problem for the given control object (3)-(4) is the application of one fuzzy PID controller for controlling the movement of the cart (one degree of freedom), while the control object has two degrees of freedom.
The control goal is that the pendulum deviation angle (the second generalized coordinate) reaches the given value via implicit control through the other generalized coordinate and the corresponding essentially nonlinear cross-connections with the cart movement coordinate (the effect of energy transmission between the generalized coordinates).
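For reference, a minimal simulation sketch of Eqs. (3)-(4) with the parameters above follows (explicit Euler integration; converting the angle to radians for the trigonometric terms and the step size are our own choices):

```python
import numpy as np

g, m_c, m, l, k, a1, a2 = 9.8, 1.0, 0.1, 0.54, 0.4, 0.1, 5.0

def cart_pole_step(state, u, xi=0.0, dt=0.001):
    """One explicit-Euler step of Eqs. (3)-(4).
    state = [theta, theta_dot, z, z_dot], theta in radians."""
    th, th_d, z, z_d = state
    total = m_c + m
    inner = (u + xi + a1 * z_d + a2 * z - m * l * th_d**2 * np.sin(th)) / total
    th_dd = (g * np.sin(th) + np.cos(th) * inner - k * th_d) \
            / (l * (4.0 / 3.0 - m * np.cos(th)**2 / total))
    z_dd = (u + xi - a1 * z_d - a2 * z
            + m * l * (th_d**2 * np.sin(th) - th_dd * np.cos(th))) / total
    return np.array([th + dt * th_d, th_d + dt * th_dd,
                     z + dt * z_d, z_d + dt * z_dd])

state = np.array([np.deg2rad(10.0), 0.1, 0.0, 0.0])  # initial position
```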
_Remark 1_: _Stability Lemma for Nonlinear Systems_. Based on the relationship between thermodynamic exergy and Hamiltonian systems, a fundamental stability lemma for Hamiltonian systems is formulated. The stability of Hamiltonian systems is bounded between the Lyapunov and Chetaev theorems as follows: the Lyapunov derivative is given as a decomposition into the sum of the exergy generation rate \(\dot{W}\) and the exergy dissipation rate \(T_{0}\dot{S}_{i}\) [10]:
\[\dot{V}=\dot{W}-T_{0}\dot{S}_{i}=\sum_{j=1}^{N}Q_{j}\dot{q}_{j}- \sum_{l=1}^{M-N}Q_{l}\dot{q}_{l}\,. \tag{6}\]
where \(Q_{j}\) is the generalized force vector and the irreversible entropy production rate can be expressed as
\[\dot{S}_{i}=\sum_{k}\mathcal{F}_{k}\mathcal{X}_{k}=\frac{1}{T_{0} }\sum_{k}Q_{k}\dot{q}_{k}\geq 0\,.\]
A control law is Lyapunov optimal if it minimizes the first time derivative of the Lyapunov function over a space of admissible force controls. In general, a set of feedback gains is optimized by minimizing the regulation and/or tracking error of the conventional feedback controller while regulating to zero and/or tracking a desired reference input. The Lyapunov function is the total error energy, which for most mechanical systems is equivalent to an appropriate Hamiltonian function \(\mathrm{H}\): \(V=\mathrm{H}\). The concept of Lyapunov optimality then follows directly from setting \(\dot{W}=0\) in (6) and maximizing \(T_{0}\dot{S}_{i}\), for which the time derivative of the Lyapunov function (Hamiltonian), or the modified power (work/energy) equation, is written as follows:
\[\dot{V}=\dot{\mathrm{H}}=-T_{0}\dot{S}_{i}=-\sum_{j=1}^{M}Q_{j} \dot{q}_{j}=-\sum_{j=1}^{N}\mathcal{F}_{j}\dot{R}_{j}\,,\]
which is independent of the system dynamics and is a kinematic quantity that applies to any system. Note that \(\mathcal{F}_{j}\) denotes a set of forces acting on a mechanical system and \(\dot{R}_{j}\) denotes the inertial linear velocity of the point where \(\mathcal{F}_{j}\) is applied. Passivity control for robotic systems follows directly from setting \(\dot{W}=0\) in (6).
_Remark 2_: _Information-like Lyapunov functions_. Recently, a rich information-like family of universal Lyapunov functions was presented for any linear or non-linear reaction network with detailed or complex balance. Moreover, the \(H_{f}\) are not just Lyapunov functions but information measures of divergences: \(H_{f}\left(c^{1}(t)\,|\,c^{2}(t)\right)\) is a monotonically non-increasing function of time \(t\) for any two kinetic curves \(c^{1}(t)\) and \(c^{2}(t)\) with the same value of \(\sum_{i}c_{i}\). These new functions aim to resolve "the mystery" of the difference between the rich family of Lyapunov functions (\(f\)-divergences) for linear kinetics and the limited collection of Lyapunov functions for non-linear networks in thermodynamic conditions [11].
In the case of similar initial learning conditions, the SCO with soft computing is used to design KB\({}_{1}\) of FC\({}_{1}\) for the generalized criterion of minimal mean squared error:
\[\int_{t_{0}}^{t_{f}}e^{2}(t)\,dt\rightarrow\min.\]
Thus, we consider the solution of the vector (multi-objective) optimization problem based on the decomposition of the KB. Gaussian noise was used as the random signal for designing KB1, and Rayleigh noise was used for forming KB2 (see _Fig. 9_, learning situations **S1** and **S2**, respectively).
Physically the first criterion is equivalent to the total energy of the overturned pendulum and the second criterion characterizes the precision of the dynamic behavior of the control object.
_Figure 10_ shows KB1 and KB2 with the corresponding numbers of activated rules, 22 and 33, out of a total of 729 rules.
Two contingency control situations (**S3**, **S4**) were simulated; in one of them (**S3**) a new noise \(\xi(t)\) was introduced (a random signal with a uniform one-dimensional distribution), together with a control error signal delay (0.03 s) and a noise signal in the pendulum position sensor (noise amplification coefficient 0.015).
Figure 10: Form of KB1 and KB2 with corresponding activated production rules.
Figure 9: Random noise used in situations (S1, S2).
_Figure 11_ shows an example of the operation of the quantum FC for the formation of the robust control signal using the proportional gain in contingency control situation **S3**. In this case, the output signals of KB\({}_{1}\) and KB\({}_{2}\), in the form of responses to the new control error in situation **S3**, are received by the quantum FC. The output of the quantum FC block is the new signal for online control of the gain \(k_{p}\).
Thus, the blocks of KB\({}_{1}\), KB\({}_{2}\) and the quantum FC in _Fig. 3_ form the block of KB self-organization in the contingency control situation.
_Figure 12_ shows the dynamic behavior of the studied "cart-pole" system and the control laws of the self-organized quantum controller (QFI), FC\({}_{1}\) and FC\({}_{2}\).
Figure 11: Example of operation of the block of KB self-organization based on QFI.
Figure 12: Dynamic motion of the pole in situation S3.
_Remark 3:_ The following notation is used in _Fig. 12_ and below: \(x=\theta\) is the angle of pendulum deviation from the given position; \(z\) is the cart position; the quantum FC is based on the spatial correlation.
The simulation results (_Fig. 12_) demonstrate that in the contingency control situation (**S3**) the dynamic control object loses stability under the control of FC\({}_{1}\) (FC\({}_{2}\)), while under the control of the quantum FC the control system is robust and the achievement of the control goal is guaranteed. According to the simulation results (_Fig. 12_), the required control quality for the given criteria in the contingency control situation (**S3**) is also not achieved under the control of FC\({}_{1}\) and FC\({}_{2}\), while under the control of the quantum FC the control system possesses the required control quality. It follows that two non-robust fuzzy controllers can be used to design, online, a robust fuzzy controller using quantum self-organization; the KB of this robust FC satisfies both quality criteria.
Therefore, the decomposition of the solution of the above multi-objective optimization problem for the robust KB in the contingency control situation into partial solutions of optimization subproblems can be physically performed online, in the form of separate responses of the corresponding individual KBs optimized with different fixed cost functions and control situations.
The aggregation of the obtained partial solutions into the new robust KB is performed by the quantum FC, which contains the mechanism for forming the quantum correlation between the obtained partial solutions.
As a result, only the responses of a finite number of individual KBs containing the limiting admissible control laws in the given contingency situations are used.
The control laws for the variation of the gains of the fuzzy PID controller formed by the new robust KB have a simpler physical realization and, as a result, possess better characteristics of the individual control cost function for the contingency control situation.
For experimental testing, a physical model of a robot (_Fig. 13_) is used.
Three control situations are tested. The first situation represents the simple case.
The second situation uses uniform noise in the control channel, Gaussian noise in the wheel friction, and a control action delay of 0.01 s.
The third situation has a control action delay of 0.03 s. Simulation and experimental results (for the complex situation 3) are shown in _Fig. 14_.
Figure 13: Mobile robot configuration.
_K. Benchmark 2: Remote rule base optimization_
To compare the method of remote rule base optimization on the real control object with the method using Matlab simulation for optimization, we created six KB-FCs.
Figure 14: Control error. Unpredicted situation: (a) modeling; (b) experiment on physical model.
Experiment and modeling were performed in two control situations.
The first situation (S1) is typical for the control system (the initial angle equals 1\({}^{\circ}\)). The goal is to maintain the pendulum in equilibrium (0\({}^{\circ}\) angle of deflection). It should be noted that the KB optimization was performed in this control situation.
The second situation (S2) is unexpected: the initial angle equals 5\({}^{\circ}\). This situation characterizes the perturbation caused by external influences on the control object.
_Figure 15_ shows a comparison of the integral squared errors for all considered controllers in the typical control situation:
Figure 16: Integral square error. Unpredicted situation: Simulation and experiment.
Figure 15: Integral square error. Typical situation: Simulation and experiment.
The lower the integral squared error, the better the controller works. Consider the results of simulation and experiment in the unpredicted control situation:
_Figure 16_ shows a comparison of the integral squared errors for all considered controllers in the unpredicted control situation.
_L. Benchmark 3: Remote quantum base optimization_
Let us compare the PID controller, the fuzzy controllers \(\mathrm{FC}_{1}\) and \(\mathrm{FC}_{4}\), and QFI controllers based on different correlations: Quantum-Space (Q-S), Quantum-Time (Q-T) and Quantum-Space-Time (Q-ST). These QFI controllers are optimized using the remote connection.
Mathematical modeling and physical experiments took place in two control situations:
* in the first (typical) situation (S1), the control delay is the standard 0.015 s;
* in the second, unpredicted situation (S2), the control delay is 0.035 s.
From _Figs 17_ and _18_ it can be seen that KB optimization using a remote connection with the quantum optimizer can improve the quality of control in both the typical and the unpredicted situation.
_Related works._ Quantum computing approaches to robot path planning, emotion design, navigation, learning and decision making were also applied in [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28], etc. Our approach is based on the quantum self-organization of knowledge bases using the responses of fuzzy controllers to unpredicted situations online.
## 6 Smart Robotic Manipulator: Quantum supremacy in intelligent control
A seven-degrees-of-freedom (7 DoF), seven-link robotic manipulator is described in this part. Since the control object is complex, the ICS for the 7 DoF manipulator is constructed using the decomposition principle. Seven independent FCs (FC1 - FC7) are used to control each manipulator link. The decomposition of control reduces the complexity of constructing the ICS. However, the quality of the ICS is somewhat reduced due to the independence of the seven FCs (_Fig. 19_).
The introduction of the QFI unit improves the ICS behavior by self-organization of the independent KBs in FC1 - FC7. The correlation of three adjacent fuzzy KBs (the information of FCs \(i\), \(i+1\) and \(i+2\)) is used to control the \(i\)-th link of the manipulator, as shown in _Fig. 19(b)_. Consider the first internal unpredicted situation: random noise in the control channel (see the signal \(s(t)\) in _Fig. 19_). A comparison of the manipulator behavior for the control system based on soft computing and based on quantum soft computing, in terms of performance criteria, is shown in _Fig. 20_ (based on the results of sixty-five experiments).
The results demonstrate that if the ICS is used with the QFI gate (see _Fig. 19(a)_), all evaluated performance criteria improve (except "one iteration time").
Figure 18: Control error. Unpredicted situation of control (Experiment).
Figure 19: (a) The structure of 7DoF manipulator ICS; (b) The application of the correlation of three neighboring FC.
One of the cases is shown in _Fig. 21(a)_. Positioning accuracy is better when the ICS with the QFI unit is used (in this case the positioning error is 0.184 m). The positioning error is 1.918 m if the ICS without the QFI unit is used.
Consider the second internal unpredicted situation: random noise in the measurement system (see the signal \(d(t)\) and the "Sensors" block in _Fig. 19(a)_). A comparison of the manipulator behavior for the control system based on soft computing and based on quantum soft computing, in terms of performance criteria, is shown in _Fig. 22_.
Figure 21: (a) Manipulator behavior with random noise in the control channel, (b) Manipulator behavior with random noise in the measurement system.
Figure 20: Manipulator behavior with random noise in the control channel: FC - based on soft computing, QFC - based on quantum soft computing.
The results demonstrate that if the ICS is used with the QFI unit, all evaluated performance criteria improve (except "one iteration time"). One of the cases is shown in _Fig. 21(b)_. Positioning accuracy is better when the ICS with the QFI unit is used (in this case the positioning error is 0.262 m). The positioning error is 2.519 m if the ICS without the QFI unit is used. Thus, the positioning accuracy increased roughly tenfold with the QFI application compared to the soft computing case; these facts demonstrate the quantum supremacy of the described robust control design methods [29, 30].
## Conclusion
* A new circuit implementation design method of quantum gates for fast classical efficient simulation of search QAs is developed. Benchmarks of the design applications, such as Grover's QSA and QFI based on QGA, are demonstrated.
* Applications of the QAG approach in intelligent control systems with quantum self-organization of imperfect knowledge bases are described with concrete examples. Quantum supremacy on robotic benchmarks is demonstrated.
* The results of the controller behavior comparison confirm the existence of a synergetic self-organization effect in the design process of a robust KB from imperfect (non-robust) KBs of fuzzy controllers: from two imperfect KBs, the quantum approach can create a robust KB using only quantum correlation. In classical intelligent control based on the soft computing toolkit, this effect is impossible to achieve.
* The described approach opens new prospects for the application of the quantum FC model, as a particular variant of the quantum self-organization algorithm, in multi-objective control problems for control objects with a weakly formalized structure and a large dimensionality of the phase space of control parameters, using experimental data in the form of the learning signal without developing a mathematical model of the control object. These facts present a great advantage, manifested as the possibility of designing control with the required robustness online.
Figure 22: Manipulator behavior with random noise in the measurement system. |
2304.06401 | Why Existing Multimodal Crowd Counting Datasets Can Lead to Unfulfilled
Expectations in Real-World Applications | More information leads to better decisions and predictions, right? Confirming
this hypothesis, several studies concluded that the simultaneous use of optical
and thermal images leads to better predictions in crowd counting. However, the
way multimodal models extract enriched features from both modalities is not yet
fully understood. Since the use of multimodal data usually increases the
complexity, inference time, and memory requirements of the models, it is
relevant to examine the differences and advantages of multimodal compared to
monomodal models. In this work, all available multimodal datasets for crowd
counting are used to investigate the differences between monomodal and
multimodal models. To do so, we designed a monomodal architecture that
considers the current state of research on monomodal crowd counting. In
addition, several multimodal architectures have been developed using different
multimodal learning strategies. The key components of the monomodal
architecture are also used in the multimodal architectures to be able to answer
whether multimodal models perform better in crowd counting in general.
Surprisingly, no general answer to this question can be derived from the
existing datasets. We found that the existing datasets hold a bias toward
thermal images. This was determined by analyzing the relationship between the
brightness of optical images and crowd count as well as examining the
annotations made for each dataset. Since answering this question is important
for future real-world applications of crowd counting, this paper establishes
criteria for a potential dataset suitable for answering whether multimodal
models perform better in crowd counting in general. | Martin ThiΓen, Elke HergenrΓΆther | 2023-04-13T11:09:28Z | http://arxiv.org/abs/2304.06401v1 | # Why Existing Multimodal Crowd Counting Datasets Can
###### Abstract
More information leads to better decisions and predictions, right? Confirming this hypothesis, several studies concluded that the simultaneous use of optical and thermal images leads to better predictions in crowd counting. However, the way multimodal models extract enriched features from both modalities is not yet fully understood. Since the use of multimodal data usually increases the complexity, inference time, and memory requirements of the models, it is relevant to examine the differences and advantages of multimodal compared to monomodal models. In this work, all available multimodal datasets for crowd counting are used to investigate the differences between monomodal and multimodal models. To do so, we designed a monomodal architecture that considers the current state of research on monomodal crowd counting. In addition, several multimodal architectures have been developed using different multimodal learning strategies. The key components of the monomodal architecture are also used in the multimodal architectures to be able to answer whether multimodal models perform better in crowd counting in general. Surprisingly, no general answer to this question can be derived from the existing datasets. We found that the existing datasets hold a bias toward thermal images. This was determined by analyzing the relationship between the brightness of optical images and crowd count as well as examining the annotations made for each dataset. Since answering this question is important for future real-world applications of crowd counting, this paper establishes criteria for a potential dataset suitable for answering whether multimodal models perform better in crowd counting in general.
Crowd Counting, Multimodal Learning, RGB-T, Transformer
## 1 Introduction
One of the biggest challenges of crowd counting in real-world applications is dealing with varying lighting conditions. Since crowd counting can be very important for event security and crowd monitoring, good performance independent of lighting conditions is essential for real-world applications. Especially at night, lighting is often poor, resulting in less contrast and information in optical images and thus reducing the accuracy of prediction models. In this case, thermal images are more suitable because they do not rely on visible light. On the other hand, optical images can contain more information during the daytime compared to monochrome thermal images due to their color information. In addition, the environment may heat up during the day, resulting in lower contrast in thermal images, as human body temperature is almost constant. Overall, the use of both modalities seems to be symbiotic and to lead to better results compared to the use of a single modality. Using multiple modalities to train a model has led to state-of-the-art results in many cases. In particular, with the rise of transformers [26], where inputs are transformed into homogeneous tokens, using multiple modalities such as text or images in a model has become easier. In the area of monomodal crowd counting, the use of transformers has not been fully explored. To the best of our knowledge, with the exception of one work [13], previous research has focused only on convolutional networks. The use of transformers has tremendous potential, as previous work [12][1] has often achieved better results when improving the extraction of multi-scale features. Although existing work [13][2] concludes that the use of optical and thermal imagery leads to better crowd counting predictions, it is not yet fully understood how such models internally extract enriched features from both modalities.
Apart from the lack of understanding of how multimodal models work internally, it is not entirely understood whether the multimodal approach leads to better crowd counting results in general or only under certain conditions. Further research with potential influencing factors such as illumination, distance to the crowd, or number of people per image is needed to gain more certainty about whether multimodal crowd counting leads to better predictions in general. For this reason, in this paper we investigate the impact of using optical and thermal images simultaneously in crowd counting.
To investigate the impact of using optical and thermal images simultaneously in crowd counting, we designed a monomodal and several multimodal architectures consisting of the same key components. When
we designed the monomodal model, we took into account the latest developments in the field of monomodal crowd counting. In addition, we have developed three multimodal models that incorporate different strategies of multimodal learning. To allow a comparison between the monomodal and the multimodal architectures, all key components of the monomodal architecture are also part of the multimodal architectures. The goal of this comparison is to find out whether multimodal models lead to better crowd counting results in general or only under certain conditions. Since this comparison led to interesting findings, we further analyzed all the datasets used to compare the models. To this end, we examined the relationship between the brightness of optical images and the number of individuals in the image. We also randomly selected a subset of each dataset and examined how individuals were labeled in the images from both modalities.
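A sketch of the brightness analysis; the list names and the use of the Pearson correlation coefficient are our own illustrative choices:

```python
import numpy as np
from PIL import Image
from scipy.stats import pearsonr

def mean_brightness(path):
    """Mean luminance of an optical image, in [0, 255]."""
    return float(np.asarray(Image.open(path).convert("L")).mean())

# rgb_paths: optical image files; counts: ground-truth crowd counts.
# brightness = np.array([mean_brightness(p) for p in rgb_paths])
# r, p_value = pearsonr(brightness, np.array(counts))
# print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
```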
In examining the differences between the monomodal and the multimodal architectures, we found that existing datasets have a bias toward thermal images. This does not allow us to determine whether multimodal crowd counting leads to better results in general or only under certain conditions. For this reason, we have described criteria for a dataset suitable for investigating the research question.
## 2 Related Work
**Monomodal Crowd Counting:** Crowd counting has been studied for decades. While a few works have used thermal images for crowd counting, most works have used optical images to examine crowd counting. As in other areas, the use of deep learning models [20][21] has led to more accurate predictions in crowd counting. In recent years, the use of a density map-based approach for crowd counting has become prevalent. Many recent works have addressed the question of how to deal with scale variations in images. In particular, techniques such as multi-column models [15] or dilated convolutions [14] have been used to extract multi-scale features from the image. Since such techniques aim to increase the receptive field of a network, it was no surprise that state-of-the-art results could be achieved by using a transformer encoder [23] to extract features [13].
**Multimodal Crowd Counting:** Multimodal learning is becoming increasingly relevant in the field of crowd counting. So far, the use of optical and thermal images [13][2][3] as well as the use of optical and depth images [12][13] has been investigated. However, depth images provide only a limited depth range (\(0\sim 20\) meters), making them unsuitable for many real-world crowd counting applications [13]. Also, when using depth images, there is still the problem that less information is available in poorly illuminated scenes. For this reason, we will focus on the use of optical and thermal images in this paper. While all of these works conclude that the additional use of thermal images leads to better predictions in crowd counting, it is not fully understood under what circumstances it is beneficial to complement optical images with thermal images to obtain better predictions. Previous work has focused primarily on constructing a novel model architecture that outperforms the state-of-the-art in multimodal crowd counting. While this approach proves the effectiveness of the models created, it does not allow us to fully understand how complementary information is extracted from both modalities.
**Multimodal Crowd Counting Datasets:** Similar to different multimodal models, two different datasets [13][2] consisting of optical and thermal image pairs have been published in recent years. The dataset published by Peng et al. [2] was acquired with a drone and contains 3,600 image pairs. Furthermore, this dataset contains information about distance (scale of individuals), illumination and crowd count per image pair. The other dataset, which was published by Liu et al. [13], contains 2,030 image pairs. The image pairs of this dataset were taken from a normal perspective. Information on the number of individuals and lighting is available for each image pair.
## 3 Effectiveness of Multimodal Crowd Counting
To allow a comparison between monomodal and multimodal architectures, we first developed a monomodal architecture. This monomodal model takes into account recent advances in the field of monomodal crowd counting and its main components are reused in subsequent multimodal architectures to allow a fair comparison. Since the constructed monomodal architecture is heavily inspired by recent advances in monomodal crowd counting and does not incorporate any new strategies, we only used one monomodal model for comparison.
### Monomodal Architecture
The monomodal architecture designed in this work is inspired by the work of Tian et al. [13] as well as the implementation of the work realized in [20]. The CCTrans model designed by Tian et al. [13] achieves state-of-the-art results on multiple monomodal crowd counting benchmarks [15][20][21]. Our monomodal architecture is shown in Fig. 1. Instead of Twins [21], which was used by Tian et al. [13], we used PVTv2 [20] as the transformer-based backbone in our architecture. By empirical analysis, we found that the PVTv2 architecture leads to better results for us. More specifically, for our monomodal
architecture, we used the PVTv2 B0 variant, which allows a shorter training time and requires fewer computational resources. However, this leads to slightly worse results compared to other PVTv2 variants with more parameters. This was acceptable to us, as our primary goal was not to construct a novel architecture with state-of-the-art results. Furthermore, we adopted the pyramid feature aggregation and regression head of Tian et al. [11], but used the convolution kernel sizes from [20]. Again, through empirical analysis, we found that these kernel sizes led to slightly better results for us.
### Multimodal Architectures
After the monomodal architecture was designed, three different multimodal architectures were developed that incorporate different strategies of multimodal learning. As mentioned before, the key characteristics of the monomodal architecture are also incorporated in the three different multimodal architectures. The idea behind this is that when using the same weight initialization (prior) and the same model properties (which constrain the hypothesis space), better results can only be explained by more information provided by the additional modality (data). In this work, we chose to use early and late fusion as two simple multimodal strategies. These have also been used in previous work on multimodal crowd counting [11][2]. In addition, we apply a more advanced deep fusion strategy using the Information Aggregation and Distribution Module (IADM) of Liu et al. [11], which has been shown to be effective for multimodal crowd counting.
**Early Fusion Model:** With the early fusion strategy, modalities are fused at the beginning of the model. For this purpose, the constructed monomodal model was adapted to support 6-channel inputs by changing the number of input channels of the first layer, as sketched below. Thus, the multimodal early fusion model has the same number of parameters as the monomodal model.
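A minimal sketch of this input adaptation, assuming the timm implementation of PVTv2 (the model name `pvt_v2_b0` and the `in_chans` argument are timm conventions, not taken from our implementation; timm adapts the pretrained first-layer weights when the number of input channels differs from 3):

```python
import torch
import timm

# Early fusion: stack optical and thermal frames along the channel axis
# and feed them to a backbone whose first layer accepts 6 channels.
backbone = timm.create_model("pvt_v2_b0", pretrained=True, in_chans=6)

rgb = torch.rand(1, 3, 256, 256)       # optical crop
thermal = torch.rand(1, 3, 256, 256)   # thermal crop stored as a 3-channel image
tokens = backbone.forward_features(torch.cat([rgb, thermal], dim=1))
```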
**Late Fusion Model:** In contrast to the early fusion strategy, the fusion of modalities takes place at the end of the model with the late fusion strategy. The idea here is that features of both modalities are first extracted individually. Thus, except for the final layer (\(1\times 1\) convolution), both modalities are processed by the constructed monomodal model individually. Then, the extracted feature maps from both individual columns are concatenated. Based on the concatenated feature maps, a density map is then finally predicted by a \(1\times 1\) convolution, as in the sketch below. As a result, the late fusion model requires around twice as many parameters as the monomodal model and the early fusion model.
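A minimal sketch of the late fusion step; the channel width `c` of each modality column is an assumption:

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Concatenate per-modality feature maps and predict a density map
    with a single 1x1 convolution, as described above."""

    def __init__(self, c=64):
        super().__init__()
        self.out = nn.Conv2d(2 * c, 1, kernel_size=1)

    def forward(self, feat_rgb, feat_thermal):
        fused = torch.cat([feat_rgb, feat_thermal], dim=1)  # B x 2c x H x W
        return self.out(fused)                              # B x 1 x H x W
```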
**Deep Fusion Model:** In contrast to the early fusion and late fusion architectures, the multimodal information exchange in the deep fusion architecture takes place during feature extraction. For this purpose, a third column is added to the architecture, which extracts the complementary information of both modalities. In particular, this is done by using the IADM of Liu et al. [11]. Through the IADM, information is exchanged between the modality-specific columns and the cross-modality column. However, this only takes place during feature extraction in the backbone, as shown in Fig. 2. Of all the models used in this work, this architecture requires the most parameters.
### Evaluation
To evaluate the performance of the monomodal model and the three multimodal models, we used the mean absolute error (MAE) and the root mean squared error (RMSE). Both of these measures are widely used in crowd counting. The use of these measures allows comparison of our results with the results of other work. The mean absolute error and root mean squared error are defined as follows:
\[MAE=\frac{1}{N}\sum_{i=1}^{N}\left|y_{i}-\hat{y_{i}}\right|\,, \tag{1}\]
\[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y_{i}})^{2}}\, \tag{2}\]
where \(N\) is the number of image pairs, \(y_{i}\) is the ground-truth number of individuals in image pair \(i\), and \(\hat{y_{i}}\) is the predicted number of individuals for image pair \(i\).
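Both metrics translate directly into code; a minimal NumPy sketch (function and variable names are our own):

```python
import numpy as np

def mae_rmse(y_true, y_pred):
    """Counting errors over N image pairs, following Eqs. (1) and (2)."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.abs(diff).mean(), np.sqrt((diff ** 2).mean())
```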
Figure 1: The architecture of our monomodal model, which is inspired by the work of Tian et al. [11] as well as the implementation of the work realized in [20]. The input image is first transformed into tokens. From these tokens, features are extracted by a hierarchical transformer-based backbone. The hierarchical feature maps are then aggregated and finally used by the regression head to predict the crowd count. The parameter \(d\) indicates the dilation rate used.
### Training
Overall, our training approach is heavily inspired by the training approach chosen by Tian et al. [14]. The B0 variant of the PVTv2 architecture was initialized with pre-trained weights in all experiments. We used random cropping with a cropping size of 256 for both dimensions and horizontal flipping with a probability of 50% as augmentation strategies. In addition, AdamW [13] was used as the optimizer and a batch size of 8 was chosen for training. The learning rate was 1e\(-\)5 in all experiments, regularized by a weight decay of 1e\(-\)4. Bayesian loss [12] with a sigma value of 8 was used as a loss function. The models were trained for 60 epochs in all experiments.
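A condensed sketch of this training loop; `model`, `loader`, and `bayesian_loss` are placeholders (the Bayesian loss of Ma et al. [12] is not reproduced here), while the hyperparameters follow the values stated above:

```python
import torch
from torch.optim import AdamW

# Placeholders: `model` is the PVTv2-B0 based counting network described
# above; `loader` yields batches of 8 augmented 256x256 crops with their
# point annotations; `bayesian_loss` stands in for the loss of Ma et al.
optimizer = AdamW(model.parameters(), lr=1e-5, weight_decay=1e-4)
model.train()
for epoch in range(60):
    for images, points in loader:
        density = model(images)                        # predicted density map
        loss = bayesian_loss(density, points, sigma=8)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```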
### Results
The results for all constructed models on both datasets are shown in Tab. 1 and Tab. 2. Three aspects in particular caught our attention, which we describe in more detail below.
**Discrepancy between optical and thermal images in both datasets.** One of the first things we noticed is that the monomodal model performs much better on thermal images than on optical images, as can be seen in Tab. 1 and Tab. 2. This holds true for both datasets. Nevertheless, the discrepancy is larger for the RGBT-CC dataset than for the Drone-RGBT dataset. Since we used the exact same model and training approach, these results raise the question of whether thermal images are more suitable for crowd counting in general. Before investigating this question, we first wanted to gain a better understanding of both datasets. The investigation is described in more detail in Section 4.
**The monomodal model performs better than the multimodal models for the Drone-RGBT [4] dataset.** Contrary to our assumption that the multimodal approach of using optical and thermal images would lead to better crowd counting predictions, using thermal images solely led to the best result for the Drone-RGBT dataset. This result further affirmed our motivation to gain a better understanding of both datasets. To the best of our knowledge, we have achieved state-of-the-art results for the Drone-RGBT dataset using the monomodal architecture.
**IADM [14] seems to be less effective with transformer encoders.** Comparing the three multimodal models, the late fusion model achieves the best results on both datasets. The deep fusion model, although more complex and shown to be effective by Liu et al. [14], performs worse in our study than the late fusion model. Since Liu et al. also compared the IADM to a late fusion model, the most obvious explanation for this is the use of a transformer encoder in our work. Liu et al. did not use a transformer encoder in their work. Nevertheless, a more detailed investigation beyond this work is needed to better understand why the IADM is less effective when used with transformer encoders.
| Modality | Architecture | MAE | RMSE |
| --- | --- | --- | --- |
| RGB | Monomodal | 26.48 | 55.28 |
| T | Monomodal | 15.19 | 28.27 |
| RGB-T | Early Fusion | 14.92 | 25.86 |
| RGB-T | Late Fusion | 13.83 | 25.16 |
| RGB-T | Deep Fusion | 14.32 | 24.64 |
| RGB-T | BL + IADM [14] | 15.61 | 28.18 |
| RGB-T | TAFNet [14] | **12.38** | **22.45** |

Table 1: Performance of the different architectures on the RGBT-CC [14] dataset. The use of thermal images leads to dramatically better results compared to optical images. Moreover, the multimodal approach leads to better results than the monomodal approach.
| Modality | Architecture | MAE | RMSE |
| --- | --- | --- | --- |
| RGB | Monomodal | 10.40 | 16.44 |
| T | Monomodal | **6.70** | **10.20** |
| RGB-T | Early Fusion | 7.41 | 11.43 |
| RGB-T | Late Fusion | 7.01 | 11.18 |
| RGB-T | Deep Fusion | 7.20 | 11.45 |
| RGB-T | MMCCN [4] | 7.27 | 11.45 |
| RGB-T | MFCC [1] | 7.96 | 12.50 |

Table 2: Performance of the different architectures on the Drone-RGBT [4] dataset. Surprisingly, using thermal images solely with the monomodal architecture led to the best result for the Drone-RGBT dataset. In contrast, using optical images solely with the monomodal architecture leads to considerably worse results.
Figure 2: The architecture of our deep fusion model. To extract complementary information and enable exchange between modality-specific and modality-shared columns, we use the IADM of Liu et al. [14]. In their work, it was shown that the use of the IADM is effective for multimodal data.
## 4 Analysis of Existing Multimodal Crowd Counting Datasets
To understand more profoundly whether thermal images are better for crowd counting in general, or whether the characteristics of the datasets used lead to better results on thermal images, we used two different approaches.
### Relationship Between Brightness and Crowd Count
First, we investigated the relationship between the brightness of optical images and the number of individuals. We suspected that many optical images in both datasets were taken in poorly illuminated environments, which could be the reason for the discrepancy between thermal and optical images. This would also be in line with our main motivation to use multimodal data. Since the two metrics we used consider the counting error and are sensitive to outliers, we considered it relevant to investigate the relationship between brightness and crowd count. To measure the brightness of an optical image, we used the following equation:
\[Brightness=\frac{\sum_{i=1}^{W\cdot H}\left(R_{i}+G_{i}+B_{i}\right)}{3\cdot W\cdot H}\,, \tag{3}\]
where \(W\) is the width and \(H\) is the height of the optical image. \(R_{i}\), \(G_{i}\) and \(B_{i}\) represent the three color values of pixel \(i\). The relationship between brightness and crowd count for both datasets is shown in Fig. 3 and Fig. 4.
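Since the sum runs over all pixels and all three channels, Eq. (3) reduces to a plain mean; a minimal NumPy sketch, assuming an \(H\times W\times 3\) RGB array:

```python
import numpy as np

def brightness(rgb_image):
    """Mean over all pixels and all three channels of an RGB image (Eq. 3)."""
    return np.asarray(rgb_image, dtype=float).mean()
```

Averaging over all pixels and all three channels at once is equivalent to the summation form of Eq. (3).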
**The RGBT-CC [10] dataset is unbalanced regarding brightness and crowd count.** The RGBT-CC dataset contains many images with very low brightness and high crowd count, as can be seen in Fig. 3. In comparison, the images in the Drone-RGBT dataset are much brighter on average and the overall distribution between brightness and number of individuals is much more balanced, as shown in Fig. 4. Since both metrics are sensitive to outliers and many optical images with very low brightness (low information) have a high crowd count in the RGBT-CC dataset, we assume that this explains the larger discrepancy between optical and thermal images for the RGBT-CC dataset.
This finding has serious implications for our research question. Since this imbalance of the dataset likely affects all trained models and results in higher activations for thermal input, we believe that the research question cannot be thoroughly investigated with the RGBT-CC dataset. In particular, we assume that many optical images with low brightness (low optical information) and a high crowd count (high error) will cause the model to pay more attention to thermal images as the counting error is propagated back into the network during training. In this way, it is difficult to verify whether multimodal crowd counting leads to better results in general, since a certain condition (low brightness, high crowd count) has a great impact on the training of the model as well as the metrics. Nevertheless, it is important to note that images with low brightness and high crowd count are not a problem per se, but are important and desirable for training a robust crowd counting model. However, we are concerned about whether our research question can be fairly investigated due to an inherent correlation between the number of people and brightness in the RGBT-CC dataset.
### Annotation Sample Analysis
Our second approach to better understand both datasets was to perform a sample analysis of how the annotations were made. Since both datasets contain two different modalities recorded with two different cameras,
Figure 4: Scatter plot showing the relationship between the brightness of optical images and crowd count in the Drone-RGBT dataset. The relationship between brightness and crowd count appears very uniform compared to the distribution of the RGBT-CC dataset.
Figure 3: Scatter plot showing the relationship between the brightness of optical images and crowd count in the RGBT-CC dataset. It can be seen that the RGBT-CC dataset is unbalanced in terms of brightness and crowd count. Many images with very low brightness have a high crowd count.
we wanted to understand if images of both modalities were synchronized and how perspective changes were handled (because the cameras were probably next to each other during the recording). We decided to perform the sample analysis of how the annotations were made since both datasets provide shared annotations for both modalities. For this purpose, we randomly selected 10% of the image pairs per dataset and visualized the annotations in the images of both modalities to verify how the individuals were labeled in each image.
**Only thermal images were used to label individuals in both datasets.** By randomly selecting 10% of all image pairs per dataset and visualizing the annotations for both modalities, we found that both datasets used only the thermal image to label individuals. Examples of both datasets showing that only thermal images were used to label individuals are provided in the Appendix in Fig. 5 and Fig. 6.
**All image pairs of the Drone-RGBT dataset were taken at night.** We have seen that the optical images in the Drone-RGBT dataset are on average brighter than the optical images in the RGBT-CC dataset. However, by looking at the annotations for each image pair in the Drone-RGBT dataset, we noticed that all images were taken at night. To confirm this impression, we inspected all the optical images in the Drone-RGBT dataset. In this way, we found that all image pairs in the Drone-RGBT dataset were taken at night. Nevertheless, many images were taken in environments with much artificial light, which is why the optical images are on average brighter than those of the RGBT-CC dataset. The fact that all images were taken at night adds a new perspective to the results obtained with the Drone-RGBT dataset. This leads to the assumption that optical images do not provide additional information at night and that a monomodal approach with thermal images leads to better results. Further research beyond this paper is needed to validate this assumption.
**For image pairs in the RGBT-CC dataset, individuals were sometimes visible in one modality but not the other.** Liu et al. [11] have already stated in their work that optical and thermal images in the RGBT-CC dataset are not strictly aligned because they were captured with different sensors. However, when examining the annotations, we found that not only were the image pairs not strictly aligned, but sometimes individuals were visible in one modality but not the other. Examples for this are provided in the Appendix in Fig. 7.
## 5 Criteria for a Multimodal Crowd Counting Dataset
Because both datasets have some weaknesses that make it difficult to draw general conclusions about the effectiveness of multimodal crowd counting, we decided to set criteria for a suitable dataset. Overall, the image pairs should be taken evenly throughout the day. In this way, the variability of the two modalities is extensively covered. Ideally, this would even take into account different seasons and climate zones. Also, the crowd count per image pair should be independent of when the image was taken. This allows for an equal influence of both modalities on the multimodal model during training, as no modality receives more attention due to a higher counting error. When labeling individuals, both modalities should be considered so that later models can learn to extract the information from both modalities and incorporate it into the prediction (even when one modality contains little information and the other contains much). Furthermore, the images for both modalities should be taken simultaneously. In this way, the images of both modalities are aligned as precisely as possible, which allows the use of the same annotations for both modalities.
## 6 Is Multimodal Crowd Counting Better in General?
The goal of this work was to find out if the simultaneous use of optical and thermal images leads to better predictions in crowd counting in general. We found that existing datasets have a bias toward thermal images, making it difficult to draw general conclusions about the effectiveness of multimodal crowd counting. The results on the Drone-RGBT dataset indicate that solely using thermal images at night results in better predictions than a multimodal approach. Since the RGBT-CC dataset contains both daytime and nighttime images, the better predictions with multimodal data seem to indicate that the multimodal approach leads to better results during the daytime. However, these assumptions are by no means proven, but could serve as hypotheses for future research. Furthermore, we encourage the creation of a multimodal dataset in order to be able to investigate such hypotheses. We have provided criteria for the creation of such a dataset in the previous Section 5. However, it remains an open question whether multimodal crowd counting (including technical challenges like perspective distortion and synchronization between modalities) is the best approach. It could also be the case that two monomodal models produce better results than one multimodal model. For example, one monomodal model could be used with optical images during the day and another with thermal images at night.
## 7 Conclusion
In this work, we found that existing multimodal crowd counting datasets have a bias toward thermal images. For this reason, we outlined criteria for a balanced dataset. To the best of our knowledge, we also obtained state-of-the-art results on the multimodal Drone-RGBT
dataset. Interestingly, for this we used solely thermal images and the monomodal model constructed in this work. Considering the results of this work, we encourage the creation of a multimodal dataset that meets the criteria outlined in this paper. In this way, we can understand more profoundly whether the simultaneous use of optical images and thermal images leads to better predictions in crowd counting in general.
|
2302.05286 | Archaeological Sites Detection with a Human-AI Collaboration Workflow | This paper illustrates the results obtained by using pre-trained semantic
segmentation deep learning models for the detection of archaeological sites
within the Mesopotamian floodplains environment. The models were fine-tuned
using openly available satellite imagery and vector shapes coming from a large
corpus of annotations (i.e., surveyed sites). A randomized test showed that the
best model reaches a detection accuracy in the neighborhood of 80%. Integrating
domain expertise was crucial to define how to build the dataset and how to
evaluate the predictions, since defining if a proposed mask counts as a
prediction is very subjective. Furthermore, even an inaccurate prediction can
be useful when put into context and interpreted by a trained archaeologist.
Coming from these considerations we close the paper with a vision for a
Human-AI collaboration workflow. Starting with an annotated dataset that is
refined by the human expert we obtain a model whose predictions can either be
combined to create a heatmap, to be overlaid on satellite and/or aerial
imagery, or alternatively can be vectorized to make further analysis in a GIS
software easier and automatic. In turn, the archaeologists can analyze the
predictions, organize their onsite surveys, and refine the dataset with new,
corrected, annotations | Luca Casini, Valentina Orrù, Andrea Montanucci, Nicolò Marchetti, Marco Roccetti | 2023-01-02T16:51:16Z | http://arxiv.org/abs/2302.05286v1 | # Archaeological Sites Detection with a Human-AI Collaboration Workflow
###### Abstract
This paper illustrates the results obtained by using pre-trained semantic segmentation deep learning models for the detection of archaeological sites within the Mesopotamian floodplains environment. The models were fine-tuned using openly available satellite imagery and vector shapes coming from a large corpus of annotations (i.e., surveyed sites). A randomized test showed that the best model reaches a detection accuracy in the neighborhood of 80%. Integrating domain expertise was crucial to define how to build the dataset and how to evaluate the predictions, since defining if a proposed mask counts as a prediction is very subjective. Furthermore, even an inaccurate prediction can be useful when put into context and interpreted by a trained archaeologist. Coming from these considerations we close the paper with a vision for a Human-AI collaboration workflow. Starting with an annotated dataset that is refined by the human expert we obtain a model whose predictions can either be combined to create a heatmap, to be overlaid on satellite and/or aerial imagery, or alternatively can be vectorized to make further analysis in a GIS software easier and automatic. In turn, the archaeologists can analyze the predictions, organize their onsite surveys, and refine the dataset with new, corrected, annotations.
Deep Learning · Archaeology · Remote Sensing · Mesopotamian Floodplain
## 1 Significance Statement
In this paper we describe the use of a pre-trained neural network for semantic segmentation, fine-tuned on annotated images of archaeological sites from the Mesopotamian floodplain. Integrating human expertise, our models reached a detection accuracy of 80%. We also propose a workflow where archaeologists and AI are collaborating: the model highlights the presence of a site and its predictions can be stitched together to create a huge overlay of predictions or converted into vector shapes, which archaeologists can import into a GIS software, speeding up the remote sensing phase during ground survey preparation. After reviewing the predictions, the experts can in turn refine and extend the dataset, improving the model.
## Introduction
This paper documents the outcomes of a collaboration between data scientists and archaeologists with the goal of creating an artificial intelligence (AI) system capable of assisting in the task of detecting potential archaeological sites from aerial or, in our case, satellite imagery. This procedure falls into the domain of Remote Sensing (RS), which indicates the act of detecting and/or monitoring a point of interest from a distance. In the world of archaeology this operation has become invaluable with the availability of more and better imagery from satellites that can be combined with older sources of information (e.g., the CORONA satellite imagery) to spot a larger number of archaeological sites as well as tracking their successive degradation due to anthropic factors. Depending on the area of investigation and the size of the archaeological features being surveyed, the effort necessary, especially in terms of time, can be huge for the researcher.
This collaboration aimed at solving exactly this issue by using deep learning models to streamline, but not completely automate, the process. Thus, we set out to train a model on a dataset of geo-referenced shapes of all known sites scattered throughout the southern Mesopotamian floodplain (which represents a sufficiently coherent geo-morphological region). As the project went on, a number of issues emerged that made this problem particularly hard to tackle and led to an important reflection on the use of deep learning in general and its relationship to human experts. The dataset, while it may be considered a very large one for near eastern archaeology with its almost 5,000 sites, is hardly sufficient for training a model as large as the state-of-the-art ones we see in use today and, perhaps more significantly, contains many cases that are visible only on certain old imagery.
The first issue is commonly solved in machine learning by leveraging transfer learning and using pre-trained models that are then fine-tuned on the data at hand. The second one, however, puts both training and evaluation in jeopardy, as the model is pushed to make wrong classifications during training and even if it learned robust representations that ignore the bad examples, we would then have a hard time detecting what is a mistake by the model and what is a mistake in the labels.
We believe that the only way out of this conundrum is through a human-in-the-loop approach. For this reason, throughout the paper we highlight the importance of integrating domain expertise during the training and evaluation phase of our experiments, since that was crucial in improving the dataset used and, in turn, the model. The final outcome of this iterative process is a model capable of obtaining a detection accuracy of around 80%.
Based on these excellent results, we envision a tool for human-AI collaboration to support the archaeologists in the remote sensing operations (rather than replace them) and propose a new kind of workflow, enhancing both their task and the model by providing improved data after every use. All the results were achieved using open-source software and models, as well as openly available data (imagery, annotations) and computational resources (Google Colab), making this kind of work highly accessible and replicable even in resource-constrained research environments. All code and resources mentioned are available at [https://bit.ly/PNAS](https://bit.ly/PNAS) floodplains.
## Research Background
### The Mesopotamian Floodplain
The southern Mesopotamian floodplain is a crucial region for understanding the complex interplay between the spatial clustering of human communities and the development of irrigated farmland in an otherwise semi-arid environment [1]. Robert McCormick Adams' surveys in the area [2, 3, 4] were carried out according to standards that were unparalleled for the time: he used a set of aerial photographs from 1961 to locate potential sites and map canals whose traces were visible on the surface; he was systematic in recording sites ranging in time from the later 7th millennium BCE to
the Ottoman period; above all, he was acutely aware of the historiographical potential of his survey work, which resulted in a powerful interpretation of settlement patterns and hydraulic activities [4].
After a long halt to fieldwork resulting from political instability, archaeological research resumed in southern Iraq in recent years (see [5] for an overview). In this area sites are usually referred to with the Arabic word for mound, "Tell." The color and shape of these hills make them especially visible from aerial and satellite imagery, which led to the use of remote sensing as a viable strategy to discover their location.
As Tony Wilkinson puts it "_Tells comprise multiple layers of building levels and accumulated wastes built up through time, in part because the locus of occupation has remained stationary. Tell settlements frequently are defined by an outer wall that both contained and constrained the accumulated materials, thereby restricting their spread [...]. The tell is by no means the sole locus of occupation [...]. Outer or lower towns [...] often appear as low humps or simply artifact scatters around tells, and they can extend the total occupied area of a site several fold_"[6].
In Mesopotamia, tells are often only slightly more elevated than the surrounding countryside, often being prone in such cases to artificial leveling in order to gain irrigable agricultural areas. Thus, the automatic detection of sites in such a dynamic environment is a highly complex operation, although contrasts are sufficiently marked to justify the attempt.
#### Remote sensing
By remote sensing one may refer to the use of any sensor (i.e., temperature, humidity, hyperspectral, satellite images etc.) for detecting or monitoring a point of interest without the need of personally visiting it. This approach is relevant to a variety of fields, but solutions that work in one domain may not translate to others.
Locating archaeological sites remotely was certainly possible even before the advent of modern computer technology by using aerial photographs and topographical maps of the area to be investigated, but today it is easier to combine multiple sources, using sensors of different nature or from different points in time, to get a more complete picture of the environment, especially since it can be changing due to natural or anthropic factors [7, 8, 9]. Depending on the characteristics of the sites, certain representations can be helpful like elevation models obtained from stereoscopic images or the use of parts of the electromagnetic spectrum other than visible light like infrared or radio waves [10, 11].
LiDAR is also becoming popular as it gives extremely high-resolution images, but it can be difficult to employ as it needs to be mounted on some kind of airborne craft like drones [12]. The problem with these "unusual" types of sources is that they might not be available for every location or not have a high enough resolution for the task at hand. On the other hand, good quality color images of virtually any location on the planet are easily and freely available, largely due to the popularity of online services like Google Maps or Bing Maps.
#### Deep Learning for Remote Sensing and Archaeology
Deep learning has found multiple uses in every field of application and archaeology is no exception. It can help in classifying objects and text, finding similarities, building 3D models and, as this paper illustrates too, detecting sites [13, 14, 15, 16, 17]. A difficulty with such models is that they require domain experts in both archaeology and deep learning to come together, but success may also depend on the amount of data available. Neural networks are notoriously data hungry, and archaeology is a "slow data" field as Bickler put it [18]. Nonetheless, there are a few recent examples of deep learning being successfully applied to site detection in a variety of different scenarios [19, 20, 21, 22]. Most applications either use neural networks to perform a classification task, with tiles sampled from maps that are marked as containing the site of interest or not, or as segmentation
tasks where the individual pixels are classified, and the result is the prediction of a shape corresponding to the site. In this paper we will use the second approach, as described below.
#### Semantic Segmentation
Semantic segmentation is the task of dividing an image into parts that correspond to units with a specific meaning. These can correspond to a specific subject (e.g., the outline of persons, vehicles, etc.) or to a generic category that encompasses multiple entities (e.g., buildings, backgrounds, etc.). In the context of this paper, we only have two categories: one for mounded (tell) sites and another one for everything else. Segmentation can be performed with various techniques that perform pixel-level classification. A very common approach uses pre-computed features, extracted by some algorithm, or manually engineered, which are then classified by a Random Forest algorithm [23]. The current state of the art is represented by end-to-end systems based on deep learning with convolutional neural networks. For this approach, the introduction of U-Net by Ronneberger in the context of medical imaging represented a milestone [24]. This work leverages a more recent architecture, called MA-Net [25], which can be thought of as an upgrade of the U-Net architecture with the attention mechanism. While it was developed in the context of medical imaging it has found use also in remote sensing tasks [26, 27]. In the methods section we will provide more details.
#### Previous Work and Limitations
In a previous paper we tried to tackle this same problem using an image classification approach where the map was divided into tiles [28]. In that experiment, however, the dataset was an order of magnitude smaller, and we had to resort to aggressive data augmentation in order to boost performance. The best model obtained an AUC score of around 0.70 but when tested on an unseen portion of map it showed its limits in that it predicted many false positives while also missing some sites. The biggest trade-off of this tile-based classification approach is between the size of the tiles and the granularity of the predictions, with bigger squares that are more practical but result in a loss of detail. There is also the problem of dealing with sites that land on the edge of a tile. A solution we tried was creating a shingled dataset with in-between tiles to fill the gaps. This however greatly increased the number of predictions to be created. Finally, most models for image classification are bound by the use of a fixed input size, which can be a huge limit when dealing with maps. In this new experiment, given the increased size of the dataset, we decided to leverage image segmentation models with fully convolutional layers which address both the limits in input size and the granularity trade-off.
## Materials and Methods
In this section we first describe the dataset used, which was built starting from openly available resources, and then the open-source models we fine-tuned on that dataset.
#### Vector shapes for archaeological sites
We started with a dataset of geo-referenced vector shapes corresponding to contours of known mound sites in the survey area of the Floodplains Project, which spans 66,000 km\({}^{2}\), as shown in Figure 1. The dataset, developed at the University of Bologna by filing all published archaeological surveys in the area and geo-referencing anew the sites catalogued therein ([https://floodplains.orientlab.net](https://floodplains.orientlab.net)), contains 4,934 shapes, all referring to sites which had been confirmed by ground truthing and by the associated study of the surface scatter of artifacts.
Since the dataset was compiled as a comprehensive source of information for archaeologists rather than specifically to train a machine learning model, we needed to filter out some examples that provided no information and could actually impair the learning process. We started by removing the top 200 sites by area as these were considerably bigger than the rest of the dataset and visual inspection confirmed that they follow the shape of areas that are not just simply mounds. The
number 200 emerges from noticing that these sites have an area bigger than the square region we use as an input and could thus result in a completely full segmentation mask which would not be very helpful. After a discussion between data scientists and archaeologists we agreed that this was a good heuristic solution.
Additionally, we filtered out 684 sites that either presented a very small area or were earmarked by the archaeologists as having been destroyed. In particular, the size threshold was set at 0.1 degrees squared (roughly equal to 1,000 m\({}^{2}\)). These very small sites actually correspond to a generic annotation for known sites with unknown size or precise location.
### Creating the input images
To generate a dataset of images to fine-tune our pretrained model we imported the shapes mentioned above into QGIS (an open-source GIS software) [29] and, using a Python script, we saved a square of length L centered on the centroid of each site, containing only satellite imagery from Bing Maps (we also considered Esri imagery but found that in this particular area they are the same). We then saved the same image without a basemap but with the site contours shown, represented as a shape filled with a solid color, to serve as our ground truth masks.
In the first experiments we set L to be 1,000 meters, but we imagined that increasing the size of the prediction area could be beneficial due to the inclusion of a larger context. Consequently, we also tried using L = 2,000 m and obtained improved performance overall.
From the starting square image, we randomly crop a square of length L/2 to be used as the input. This ensures that the model does not learn a biased representation for which sites always appear at the center of the input and additionally serves as data augmentation. Besides this crop, we also augment the dataset by applying a random rotation and mirroring, as well as a slight shift in brightness and contrast, all these operations being applied in a different manner at each training iteration. When extracting from QGIS, we saved images with a resolution of around 1 pixel per meter (1,024 pixels for 1,000 meters, double that for the model with increased input size) but the
Figure 1: Investigation area. Orange dots represent surveyed sites in the Mesopotamian floodplain. The red rectangle is a selected test area in Maysan.
inputs were then scaled down to half of that to ease computational requirements while having low impact on the overall performance [30].
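A sketch of this augmentation pipeline using the albumentations library (the library choice and any parameter values beyond those stated above are assumptions); the same spatial transforms are applied to both the image and its mask:

```python
import numpy as np
import albumentations as A

L = 2000  # side of the extracted square, in pixels (~1 pixel per meter)
train_aug = A.Compose([
    A.RandomCrop(height=L // 2, width=L // 2),        # random L/2 crop
    A.RandomRotate90(p=1.0),                          # random rotation
    A.HorizontalFlip(p=0.5),                          # mirroring
    A.VerticalFlip(p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.1,  # slight shifts in
                               contrast_limit=0.1),   # brightness/contrast
    A.Resize(height=L // 4, width=L // 4),            # downscale to half res
])

image = np.zeros((L, L, 3), dtype=np.uint8)           # satellite tile
mask = np.zeros((L, L), dtype=np.uint8)               # ground truth mask
augmented = train_aug(image=image, mask=mask)         # re-drawn each iteration
image_aug, mask_aug = augmented["image"], augmented["mask"]
```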
Finally, we introduced 1,155 images with empty masks (no sites to predict) sampled from locations suggested by the archaeologists. These include highly urbanized areas, intensive agricultural areas, locations subject to flooding (i.e., artificial lakes and basins) and rocky hills and mountains.
The number was chosen arbitrarily, taking into consideration the size of each suggested area and of the tiles. The final number of images is thus 5,025. We split the dataset into a 90% training set and a 10% holdout test set, stratifying the "empty" images we added. 10% of the training set was also randomly selected to be used as a validation set.
We tried integrating CORONA imagery as an additional input [31], as in the usual archaeological workflow this historical imagery is very useful (since it refers to a situation much less affected by development) and often combined with the satellite base-maps and the topographical maps (but since CORONA was used only as a complement, we did not pursue automatic detection on it alone, and thus sites destroyed after the 1970s have been excluded from the analysis). After importing the imagery into QGIS, we followed the same procedure to create the inputs, ensuring the crop operation was equal for both Bing and CORONA images.
#### Semantic segmentation models
This project started as an experiment to investigate the viability of pretrained semantic segmentation models as tools for detecting sites. For this reason, we decided to compare pretrained open-source models made available as part of a library written in PyTorch. The library allows one to choose an encoder convolutional neural network for feature extraction and a segmentation architecture independently, as well as providing a number of different loss functions [32].
In a previous preliminary paper, we experimented with different choices of architecture, encoders and loss functions [30]. We compared U-Net versus MA-net, Resnet18 versus Efficientnet-B3 and Dice Loss versus Focal Loss. The performance differences were small, within a few percentage points at best, which could be very well explained by fluctuations due to the random data augmentation.
Nonetheless, we took the best model, which uses MA-Net, EfficientNet-B3 and Focal Loss, trained for 20 epochs. We further tested for the effects of our filtering procedure (slightly improved from the previous work), and additionally experimented with the introduction of CORONA imagery and increased the input size.
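A minimal sketch of this configuration using the segmentation_models_pytorch library mentioned above (argument names follow that library's API; the comment on stacking CORONA bands as extra input channels is our assumption):

```python
import segmentation_models_pytorch as smp

# MA-Net decoder with a pretrained EfficientNet-B3 encoder, trained with
# Focal Loss, as in the best model described above.
model = smp.MAnet(
    encoder_name="efficientnet-b3",
    encoder_weights="imagenet",
    in_channels=3,   # more channels if CORONA bands are stacked (assumption)
    classes=1,       # binary mask: site vs. everything else
)
loss_fn = smp.losses.FocalLoss(mode="binary")
```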
#### Tepa sites in Uzbekistan
We also performed an additional test on another large dataset ([https://www.orientlab.net/samarkand/](https://www.orientlab.net/samarkand/)) elaborated by the Uzbek-Italian Archaeological Project at Samarkand [33]. Given the similarity between the Tell in the Mesopotamian floodplain and the Uzbek _Tepa_, we wanted to see if the model was able to detect those sites without the need of additional retraining.
The dataset features 2,318 point annotations, categorized in different ways, which also come with attributes related to their preservation state. We selected only sites classified as either _Tepa_ or _Low Mound_, with the _Well-preserved_ label. The final number of sites ends up being 215: 148 Tepa and 67 Mounds. The actual test set images were created following the same procedure described above.
## Results
### Mesopotamia
First, we present the results in terms of average Intersection-over-Union (IoU) score on the test dataset. IoU does not directly relate to the performance in detecting the sites, but only represents the degree of correspondence between the predicted shape and annotation in the dataset. Still, it gives us an idea of how the model behaves and helps us select the best one. Table 1 summarizes the results for all models on the holdout dataset, as described in the Methods section.
Note that, for each model, we report a mean score and the associated standard deviation. This is due to the fact that we are performing a random crop on the images, even on the test set, and thus we run ten tests with different crops to average out this effect.
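For clarity, the per-image score reduces to the following NumPy sketch (the smoothing constant is our own):

```python
import numpy as np

def iou(pred_mask, gt_mask, eps=1e-7):
    """Intersection-over-Union between two binary HxW masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (intersection + eps) / (union + eps)
```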
The first thing that can be noticed is the marked improvement given by the increase in the input size. We imagine that the larger area provides more context to the predictions and makes the model more accurate.
Just as important is the inclusion of the filtering procedure, which results in a bump in performance regardless of the input size.
Finally, the contribution of CORONA imagery is less clear-cut. For the smaller input size, it seems to provide no benefits (the lower error score is within the margin of error) and we can hypothesize this is due to the low resolution of these photos. With larger areas it instead seems to provide an increase in performance, maybe again due to the larger context. Inspecting the predictions, however, revealed the absence of a marked difference, perhaps meaning the IoU is increasing just as the result of slightly more precise contours.
### Detection Accuracy
To further assess the results, we moved on to detection accuracy. First, we transformed the raster predictions from the model into vector shapes using the well-known library GDAL (Geospatial Data Abstraction Library) [34] and then we looked for the intersection between the site annotations and the predictions. To obtain smoother shapes, before the conversion we first applied a Gaussian blur to the prediction rasters and then clipped values above a certain threshold (0.5, but the number can be changed for a more or less sensitive model) to 1.0, while everything else would be set to 0.0.
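A sketch of this post-processing step (the blur radius is an assumption; polygonization is shown with rasterio's equivalent call for brevity, whereas the pipeline described above uses GDAL):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from rasterio import features

def vectorize_predictions(prob_raster, transform, sigma=3.0, threshold=0.5):
    """Blur, clip, and polygonize a probability raster.

    `prob_raster` is a float HxW array of model outputs and `transform`
    the affine geo-transform georeferencing it.
    """
    smooth = gaussian_filter(prob_raster, sigma=sigma)
    binary = (smooth >= threshold).astype(np.uint8)   # adjustable sensitivity
    return [geom for geom, value in features.shapes(binary, transform=transform)
            if value == 1]                            # keep site polygons only
```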
This automatic evaluation gives good but not too exciting results, with an accuracy score of 0.6257 for Model 5 and 0.6008 for Model 6. A model able to find two out of three sites would already provide a good starting point for human analysis. However, archaeologists must provide a
verification of the predictions and differentiate the cases in which the model commits proper mistakes from those in which it makes justifiable errors that a human would make too [35].
First of all, there are a considerable number of sites that are no longer visible from present day satellite images and were not filtered from the dataset. This was expected as only 50% of the annotations had additional information and even less contained indication of their visibility. Those sites should not be considered as False Negatives but rather as True Negatives.
When it comes to predictions marked as False Positive, sometimes the model predicts another site close by instead of the one being tested. This can be considered a mistake or not depending on the nature of the "missed" site. In one case we have a site that is no longer visible, so the prediction is actually a True Positive. On the other hand, it can be a site that is still visible but maybe less so than another one in the picture. In this situation we could either count both a false negative and a true positive, or just a true positive given that, in a real-world scenario, the closeness to other sites would result in a useful suggestion for the human expert, who would then be able to retrieve them all. Alternatively, we could avoid considering non-visible sites altogether, but the difference would be minimal, with accuracy 0.7837 and recall 0.8201.
Lastly, some predictions were actually present in the outputs but too faint for the cutoff threshold we imposed. We did not adjust for those errors, but they indicate a possible approach for interaction: using predictions as overlays and manually looking at the map. Alternatively setting a lower threshold could solve the problem.
Table 2 summarizes the results for the automatic evaluation and the adjusted values after the human evaluation highlighted non-visible sites. The adjustment raises accuracy and recall to around 80%, giving a more objective idea of the actual model performance.
It is interesting to see how Model 6, which got a higher IoU score, seems to actually be performing worse now. Looking at the images, it appears that this model is a little bit more restrained and cautious, resulting in less positive predictions and thus less False Positives. In turn, this can result in a higher IoU because it reduces the Union term, and, if areas are a little bit more precise, it even raises the Intersection term. However, for detection's sake, we need the presence of an intersection rather than a perfect match and in this situation the lower number of positives is punishing. Overall, the difference in accuracy is not excessive, so both models are useful and could be used in parallel, but we must also consider the additional complexity and cost of using two sets of input images which make Model 6 a bit cumbersome. For this reason, we moved on using just Model 5.
We conclude this subsection with Figure 2, which contains a few examples from the test dataset to display the quality of the model's outputs. Note how the colors correspond to probability values, and that faint areas would be cut off by the 0.5 threshold we use in creating the vector shapes. The
| Model | Evaluation | TP | TN | FP | FN | Accuracy | Recall |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Model 5 | Automatic | 228 | 98 | 70 | 125 | 0.6257 | 0.6459 |
| Model 5 | Adjusted | 258 | 185 | 40 | 68 | 0.8040 | 0.7914 |
| Model 6 | Automatic | 209 | 104 | 57 | 151 | 0.6008 | 0.5806 |
| Model 6 | Adjusted | 239 | 197 | 27 | 88 | 0.7913 | 0.7309 |

Table 2: Site detection performance for the best models. Automatic evaluation considers the labels as they come, adjusted evaluation compensates for incorrect labels with a human in the loop.
model is very accurate at tracing the site outlines and in some cases (i.e., the first row in Figure 2) these are even more accurate than the ground truth with respect to current satellite imagery.
### A test in the Maysan province
After assessing detection performance, we wanted to try the model on a rectangular area within the unsurveyed Maysan province for which we carried out remote sensing. This test had the goal of evaluating how many false positives the model would predict and to give an example of the mistakes the model makes in an operational scenario.
The area we selected contains 20 alleged sites and spans 104 km\({}^{2}\). Figure 3 shows the area with the annotations from the archaeologists and the predictions from the model. As can be seen, the model is able to recover 17 of the 20 sites while also suggesting around 20 more shapes (or fewer, depending on what is considered a single instance). Most of those suggestions are not useful but
are also easily and quickly sifted out by an expert eye, especially in context, given their size or their location.
Figure 4 instead shows an overlay produced by stitching together the various predictions and using the probabilities values as a sort of heatmap. "Hotter" colors correspond to higher probabilities while black indicates the absence of a site. The transparency is obtained through the use of the Overlay filter in QGIS.
Figure 4: _Maysan test area prediction probabilities overlaid on top inside QGIS. This visualization allows the user to decide where to look instead of relying on a predefined threshold value._
Figure 3: _Maysan province test area (orange) with sites remotely identified by archaeologists (turquoise) and model predictions (yellow). The sites identified by the trained eye and the model are equivalent and, most importantly, the model is able to ignore areas without significant features._
### Uzbekistan
Unfortunately, human evaluation of the outputs showed that the model is able to correctly identify only around 25% to 30% of the sites in this region, depending on how thresholds are chosen. The remaining part contains either sites that are missed completely or sites that are only hinted at, either too faintly or inside a huge area that appears meaningless.
The reason for this severe drop in performance is most probably the different nature of the landscape in the region, which in some locations appears to be far more urbanized and in general features more vegetation: thus, not all floodplain environments are similar enough for a direct cross-comparison. Furthermore, the conventions which lie behind the annotations in the Uzbek dataset might not be perfectly aligned with the Mesopotamian ones, further complicating the situation.
The only way of dealing with this problem here is probably to create a small dataset of selected Tepa sites and perform an additional round of transfer learning so that the model may grasp the new context and characteristics in the region.
### Discussion
The results obtained can be considered satisfactory even if the IoU metric, when compared to other semantic segmentation applications, is not extremely high. When testing for detection performance, however, we found that the model is still able to detect most sites in the dataset, leaving us with good expectations for its use in other parts of the survey area. As the Uzbek test shows however, when it comes to new areas with similar sites but in a different context, performance may drop severely and a retraining phase, even with a smaller dataset, would be necessary. Future work may explore this research direction.
It is important to notice how evaluation metrics in this task seem to hit a wall when confronted with the fact that they are computed against annotations that oftentimes are not homogeneous and contain various spurious labels. In our case we coped with the fact that there are many sites that are only visible on some historical photographs or maps that are part of the dataset even if they do not provide useful examples. Fortunately, the model seems to be robust enough to learn useful concepts and ignore these confounding data points. Still, a smaller, cleaner dataset could drastically improve performance while also reducing computational load. Obviously, such cleaning operations would be a massive investment in terms of time, and archaeologists would rather spend it actively searching for sites themselves.
Our model, however, opens up the possibility of going through already surveyed areas automatically and then producing a list of predictions that contrast with the annotations, to be manually reviewed. Subsequently a new, cleaner dataset could be assembled by the archaeologists and a new improved model could be trained. This same procedure also works in applications to new areas, where novel predictions can be manually checked and added to a new dataset over time.
In addition to the automatic procedure, the model could also be used to produce an overlay to guide the eye of the archaeologist inside a GIS software. This graphical approach allows the users to also compare the overlay with other maps they might be using and use their expertise to infer the existence of a site based on all contextual information they have. We only tried this approach on a small area as shown in Figure 4 but the computation could be easily scaled up to cover huge areas, as it takes less than a second to produce an output and there is no need to complete the operation in one go anyway. The only shortcoming of this method is the evident ridge between different input images. In theory, semantic segmentation could work with inputs of arbitrary size, but doing so requires a huge amount of memory which might not be available. A solution might be the creation of overlapping prediction maps that would then be averaged, trading off computational time for increased precision.
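A minimal sketch of such an overlapping, averaged inference; the tile and stride values are assumptions:

```python
import numpy as np
import torch

def tiled_predict(model, mosaic, tile=1024, stride=512):
    """Average overlapping tile predictions to soften seams between inputs.

    `mosaic` is a C x H x W tensor covering a large area; returns an
    H x W probability map.
    """
    _, H, W = mosaic.shape
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    model.eval()
    with torch.no_grad():
        for y in range(0, H - tile + 1, stride):
            for x in range(0, W - tile + 1, stride):
                patch = mosaic[:, y:y + tile, x:x + tile].unsqueeze(0)
                prob = torch.sigmoid(model(patch))[0, 0].cpu().numpy()
                acc[y:y + tile, x:x + tile] += prob
                cnt[y:y + tile, x:x + tile] += 1
    return acc / np.maximum(cnt, 1)  # average where tiles overlap
```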
Figure 5 summarizes the use we envision for the model we described. Starting from the dataset, the model produces prediction masks that can be manipulated through post-processing to obtain either a vector shapefile, usable for automatic evaluation and detection of sites, or a map overlay. At this stage the user can choose a threshold to cut predictions off and apply techniques to smooth the output shapes, such as blurring or buffering the vectors. Similarly, the map overlay can be adjusted by selecting different graphical representations directly in the GIS software. The goal in this case is to spot sites that might go undetected by the automatic comparison because their probability is lower than the threshold, while still being distinguishable to a human. Each time the model is used, in either way, reviewing the outputs allows the users to obtain either a new set of annotations or a list of sites to be removed or relabeled.
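The thresholding and vectorization step could look like the following sketch, assuming the `rasterio` and `shapely` packages are available; the threshold value, the buffer-based smoothing, and the function name are hypothetical choices for illustration.

```python
from rasterio import features
from shapely.geometry import shape

def mask_to_polygons(prob_map, transform, threshold=0.5, buffer_dist=0.0):
    """Threshold the probability map and vectorize it into site polygons."""
    binary = (prob_map >= threshold).astype("uint8")
    polygons = []
    for geom, value in features.shapes(binary, transform=transform):
        if value == 1:
            poly = shape(geom)
            if buffer_dist > 0:  # optional smoothing: dilate then erode (in map units)
                poly = poly.buffer(buffer_dist).buffer(-buffer_dist)
            polygons.append(poly)
    return polygons  # e.g. written to a shapefile with fiona or geopandas
```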
If such a workflow were used by more than one team, it could also greatly speed up the search efforts: the use of open technologies makes the results easier to share between research groups, which could greatly help archaeology as a field [36].
_Figure 5 A human-in-the-loop workflow based on our model. A model is trained from annotated images and provides prediction masks. The masks can be used as an overlay or vectorized. Human evaluation is conducted on the outputs and in turn a refined dataset can be created to improve the model._
The experiments with CORONA imagery also hint at the possibility of combining several models, perhaps trained with different basemaps or a combination of them, and comparing the predictions given by all of them. Especially if historical photos are present, we could end up with a dataset that also contains temporal information about when a site is visible and when it becomes undetectable. The use of stereoscopic images for the creation of elevation models could also benefit the task, if the resolution is sufficient to highlight the low mounds we are looking for.
## Conclusions
We presented a deep learning model for the detection of mounded archaeological sites in the Mesopotamian floodplain. The model was implemented using pretrained models for semantic segmentation, fine-tuned on satellite images and masks of the site shapes from a dataset containing almost 5,000 examples.
The result of our experiments is a model that obtains an IoU score of 0.8154 on the test dataset and detects sites with 80% accuracy. This accuracy statistic, however, is adjusted for the considerable number of sites that appear mislabeled, as they are no longer visible on modern satellite imagery. While we cleaned up the dataset to the best of our ability, many undetectable sites still remained. The model nonetheless seems to be quite robust.
Following this result, we propose a workflow for archaeologists to adopt, in which their already established remote sensing practices are supported and enhanced by the use of a model like our own. The outputs can be used either for very fast automatic detection, keeping in mind the mistakes this could introduce, or combined to generate a graphical overlay that directs the user's attention towards certain areas. In turn, the use of the model will result in new shapefiles and annotations that can be used for retraining and improving it, as well as enabling further analyses.
## Acknowledgments
FloodPlains Project. [https://floodplains.orientlab.net/](https://floodplains.orientlab.net/). The FloodPlains Project has been developed in the framework of the European Union project "EDUU - Education and Cultural Heritage Enhancement for Social Cohesion in Iraq" (EuropeAid CSOLA/2016/382-631), www.eduu.unibo.it, coordinated by Nicolo Marchetti.
The ongoing project "KALAM. Analysis, protection and development of archaeological landscapes in Iraq and Uzbekistan through ICTs and community-based approaches," funded by the Volkswagen Foundation and coordinated by N. Marchetti, www.kalam.unibo.it, has allowed a review of our data input and the development of the research presented in this paper. The CRANE 2.0 project of the University of Toronto provided the geospatial servers on which FloodPlains is running.
|
2307.00520 | Discovery of a relation between the decay rate of the Sun's magnetic
dipole and the growth rate of the following sunspot cycle: a new precursor
for solar cycle prediction | Sunspots have been observed for over four centuries and the magnetic nature
of sunspot cycles has been known for about a century; however, some of its
underlying physics still remain elusive. It is known that the solar magnetic
cycle involves a recycling of magnetic flux between the poloidal and toroidal
components of the magnetic field, that manifests as the solar dipole and
sunspots, respectively. Here we report the discovery of a new relationship
between the rise rate of the sunspot cycle and the decay rate of the solar
(axial) dipole moment. This provides an extension to the Waldmeier effect in
sunspot cycles and points to the existence of a causal connection between the
aforementioned physical quantities, which can be succinctly stated as the decay
rate of the Sun's dipole moment is related to the rate of rise of the following
sunspot cycle. We demonstrate how one may take advantage of this new
relationship to predict the timing of the sunspot cycle. Our analysis indicates
solar cycle 25 is expected to be a weak-moderate cycle, peaking in
$2024.00_{-0.49}^{+0.68}$. | Priyansh Jaswal, Chitradeep Saha, Dibyendu Nandy | 2023-07-02T09:03:31Z | http://arxiv.org/abs/2307.00520v2 | Discovery of a relation between the decay rate of the Sun's magnetic dipole and the growth rate of the following sunspot cycle: a new precursor for solar cycle prediction
###### Abstract
Sunspots have been observed for over four centuries and the magnetic nature of sunspot cycles has been known for about a century; however, some of its underlying physics still remain elusive. It is known that the solar magnetic cycle involves a recycling of magnetic flux between the poloidal and toroidal components of the magnetic field, that manifests as the solar dipole and sunspots, respectively. Here we report the discovery of a new relationship between the rise rate of the sunspot cycle and the decay rate of the solar (axial) dipole moment. This provides an extension to the Waldmeier effect in sunspot cycles and points to the existence of a causal connection between the aforementioned physical quantities, which can be succinctly stated as _the decay rate of the Sun's dipole moment is related to the rate of rise of the following sunspot cycle_. We demonstrate how one may take advantage of this new relationship to predict the timing of the sunspot cycle. Our analysis indicates solar cycle 25 is expected to be a weak-moderate cycle, peaking in \(2024.00^{+0.68}_{-0.49}\).
keywords: Sun: activity - Sun: magnetic fields - Sun: interior
## 1 Introduction
Our host star, the Sun, is a dynamic star whose magnetic activity varies across a wide range of timescales spanning from minutes to millennia and beyond (Usoskin, 2023). The most prominent signature of this variability is captured by the waxing and waning of sunspots - dark, magnetized patches on the Sun's surface - that repeats almost every 11 years, known as the sunspot cycle. Sunspot cycles exhibit significant fluctuations in both amplitude and duration that occasionally result in extreme activity phases like solar grand minima and grand maxima (Passos, D. et al., 2014; Hazra and Nandy, 2019; Saha et al., 2022; Dash et al., 2023). The Sun's dynamic activity output influences the entirety of the heliosphere including our home planet, the Earth, by shaping its space environmental conditions and determining the habitability (Schrijver et al., 2015; Nandy et al., 2021, 2023). Therefore, developing accurate predictive capabilities pertaining to the long-term solar activity is crucial in planning future space missions and safeguarding space-reliant technologies (Petrovay, 2020; Nandy, 2021; Bhowmik et al., 2023).
Stripped down to its fundamental essence, the magnetic activities of the Sun originate in its deep interior, wherein a magnetohydrodynamic dynamo action generates and recycles the Sun's large-scale magnetic fields (Nandy and Choudhuri, 2002; Chatterjee et al., 2004; Charbonneau, 2020). The emergence of magnetic flux on the solar surface and its poleward migration under various flux-transport processes like supergranular diffusion, meridional circulation, etc., contribute to the gradual build up of the global solar axial dipole moment (hereafter, dipole moment) (Dasi-Espuig et al., 2010; Pal et al., 2023; Hazra et al., 2023). It is evident from observations that the mean latitude of sunspot emergence drifts towards the equator with the progress of sunspot cycles (Li et al., 2003; Cameron and Schüssler, 2007; Solanki et al., 2008; Owens et al., 2011; Mandal et al., 2017), thereby facilitating cross-equatorial diffusion of magnetic fluxes and their cancellation across the equatorial region.
Recently, Iijima et al. (2017) demonstrated that the emergence of new sunspots during the decaying phase of a sunspot cycle does not have considerable influence on the polar field build up. In fact, earlier studies have detected plateau-like intervals in the dipole moment time series - showing no substantial changes in its magnitude for an extended duration of multiple years - during the descending phase of sunspot cycles 21 to 24 (Schrijver and Liu, 2008; Iijima et al., 2017). On the other hand, meridional circulation, turbulent diffusion and turbulent magnetic pumping are believed to work in tandem to advect poloidal fields accumulated in the polar caps down into the base of the solar convection zone (SCZ), where strong radial and latitudinal shear induces a toroidal field that acts as a seed for the next sunspot cycle (Yeates et al., 2008; Munoz-Jaramillo et al., 2009; Cameron and Schüssler, 2015). Generation of the toroidal field in the SCZ consumes the poloidal field of the previous cycle. As a matter of fact, the solar dipole moment comes out of the plateau-like phase and starts decaying abruptly at an almost uniform rate. Besides, the toroidal fields produced at the base of the SCZ become buoyantly unstable, rise up through the convection zone in the form of magnetic flux tubes and penetrate the solar surface - thereby producing sunspots of the new
cycle. Decay and dispersal of these new sets of sunspots eventually lead to a growth in the Sun's poloidal field, but with opposite polarity as compared to the previous cycle (see Fig.1, panel (a)).
This sequence of events indicates the existence of a causal connection between the decay of the solar polar fields and dipole moment, and the rise of the following sunspot cycle. In fact, it is widely known that steeply rising sunspot cycles peak at higher amplitudes and vice versa - known as the Waldmeier effect (Waldmeier, 1935). Kumar et al. (2021) found a correlation between the decay rate of polar fields and the amplitude of the subsequent sunspot cycle across individual hemispheres of the Sun. However, it is to be noted that the decay of the high-latitude polar field is almost concurrent with the ascent of the following sunspot cycle, leading to a narrow temporal window for solar cycle prediction (see, Appendix A). In this context, the dipole moment of the Sun has the potential to become a better precursor than the high-latitude polar field, since the former leads the latter by about a year as evidenced in observational data (see Fig.1, panel (d)). Petrovay (2020) argued this time lag to originate from
Figure 1: Panel (a): magnetic butterfly diagram showing the longitudinally averaged line-of-sight solar photospheric magnetic field from May 1976 to May 2023 (i.e., Carrington Rotation numbers 1642-2271) obtained from the Wilcox Solar Observatory (WSO) synoptic charts. Panel (b): the grey curve in the background depicts the evolution of solar axial dipole moment cycles for the above mentioned period. Blue and red curves in the foreground represent the 13-rotations smoothed (uniform running average) dipole moment, denoting its positive and negative global polarity, respectively. Alternately shaded intervals in the background delineate consecutive dipole moment cycles, with the cycle numbers D\({}_{20-24}\) labelled on the plot. The inset plot zooms into the tail end of the dipole moment time series, emphasizing the latest polarity reversal in the solar dipole moment, from positive (in blue) to negative (in red), that occurred during July 2022. This reversal in polarity heralds the approaching arrival of the peak of sunspot cycle 25. Panel (c): monthly mean total sunspot number time series (in the background) and its 13-months uniform running average (in the foreground) for the aforementioned period, i.e. from sunspot cycle 21 to the present. Alternately shaded intervals in the background depict individual sunspot cycles, with the cycle numbers SC\({}_{21-25}\) labelled on the plot. Sunspot number data is obtained from WDC-SILSO, Royal Observatory of Belgium, Brussels. Panel (d): juxtaposition of two normalized time series, namely the unsigned axial dipole moment (shaded in yellow) and the hemispherically averaged unsigned polar field (in pink), both observed by WSO, depicting a finite time/phase lag in the latter with respect to the former.
the delay induced by the poleward transport of low- and mid-latitude magnetic fields during the formation of high-latitude polar fields.
In this work, we investigate the relationship between the declining phase of the axial dipole moment associated with the solar cycle and the rise rate of the following sunspot cycle. We find a compelling relationship between the two. We argue that this is theoretically expected and points to a causal connection between the flux transport dynamics mediated dispersal of active region flux during the rise of a sunspot cycle and the cancellation of the polar field of the previous cycle. Furthermore, we demonstrate how this new relationship can be utilized to predict the future sunspot cycle, especially the timing of its peak which is a challenging task. Our results also support the Babcock-Leighton paradigm of the sunspot cycle which proposes that the decay and dispersal of the flux of tilted bipolar sunspot pairs mediated via surface flux transport processes is the primary mechanism for solar poloidal field's creation.
## 2 Methods and Results
We make use of the total sunspot number database maintained by SIDC-SILSO and the solar synoptic charts recorded at the Wilcox Solar Observatory (WSO), covering photospheric solar magnetic activity from 1976 to 2023. For a given synoptic chart, corresponding to a particular Carrington Rotation number associated with time \(t\), the global axial dipole moment of the Sun, \(D\), at that instant can be formulated as (see Petrovay, 2020),
\[D(t)=\frac{3}{2}\int_{0}^{\pi}\overline{B}(\theta,t)\cos\theta\sin\theta\ d\theta, \tag{1}\]
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Sunspot cycle & Dipole moment cycle & \multicolumn{3}{c}{Decay of precursor dipole cycle \(|\mathrm{D}_{n-1}|\)} & \multicolumn{3}{c}{Rise of sunspot cycle \(\mathrm{SC}_{n}\)} \\ \cline{3-8} SC\({}_{n}\) & \(\mathrm{D}_{n-1}\) & Initial time & Final time & Decay rate, \(r_{\mathrm{DM}}\) & Initial time & Final time & Rise rate, \(r_{\mathrm{SSN}}\) \\ & & [yr (CR)] & [yr (CR)] & [\(\mu\)T yr\({}^{-1}\)] & [yr] & [yr] & [yr\({}^{-1}\)] \\ \hline SC\({}_{21}\) & D\({}_{20}\) & 1977.60 (CR 1658) & 1978.87 (CR 1675) & 43.5917 & 1976.21 & 1979.96 & 68.0175 \\ SC\({}_{22}\) & D\({}_{21}\) & 1987.75 (CR 1794) & 1989.17 (CR 1813) & 54.9517 & 1986.71 & 1989.87 & 78.0974 \\ SC\({}_{23}\) & D\({}_{22}\) & 1998.28 (CR 1935) & 1999.18 (CR 1947) & 33.2563 & 1996.34 & 2001.87 & 36.8719 \\ SC\({}_{24}\) & D\({}_{23}\) & 2011.28 (CR 2109) & 2011.80 (CR 2116) & 22.8997 & 2008.96 & 2014.23 & 23.3260 \\ SC\({}_{25}\) & D\({}_{24}\) & 2021.28 (CR 2243) & 2022.55 (CR 2260) & 26.0578 & 2019.96 & 2022.87 & - \\ \hline \end{tabular}
\end{table}
Table 1: Calculated rise rates of the previous four sunspot cycles \(\mathrm{SC}_{21-24}\) and the decay rates of their precursor dipole moment cycles \(\mathrm{D}_{20-23}\), along with the decay rate of \(\mathrm{D}_{24}\), are tabulated. The initial and final times of each interval, as considered in our analyses, are also reported (in years). Corresponding Carrington Rotation (CR) numbers are given in parentheses.
Figure 3: Evidence of a strong correlation (Pearsonβs r = 0.98 with confidence level of 97.73%) between the decay rate of unsigned dipole moment, \(r_{\mathrm{DM}}\), and the rise rate of the following sunspot cycle, \(r_{\mathrm{SSN}}\). The black-dashed line denotes the best-fitted curve, while the shaded region in the background marks the corresponding 2\(\sigma\) confidence bound as obtained from linear regression. The error bar represents the typical magnitude of root-mean-squared error (RMSE) associated with this regression model, considering no other statistical uncertainties. Sunspot cycle numbers (21-25) are mentioned adjacent to their respective data points in the plot. The predicted rise rate of sunspot cycle 25 using this model is 28.5\(\pm\)4.7 sunspots per year, as denoted by the blue square.
Figure 2: Evolution of 13-months smoothed monthly total sunspot number since sunspot cycle 21 (in red-dashed curve) and corresponding unsigned dipole moment, \(|D|\) (in blue dash-dotted curve). In our analyses, the slopes of the linearly fitted blue and red solid lines determine the decay rate of unsigned dipole moment, \(r_{\mathrm{DM}}\), and the rise rate of sunspot cycles, \(r_{\mathrm{SSN}}\), respectively.
where, \(\overline{B}\) represents azimuthally averaged radial magnetic field of the Sun at colatitude \(\theta\).
In the rising phase of a sunspot cycle the number of sunspots surges, accompanied by a fall in the magnitude of the solar dipole moment until the latter reverses its global polarity (see Fig. 1, panels (b)-(c)). This observation falls in line with the previously mentioned dynamo mechanism pertaining to the cyclic generation of the poloidal and toroidal components of the Sun's large-scale magnetic field. Observations show that the polarity reversal of the dipole moment precedes the occurrence of the sunspot cycle peak by around a year. We hereby report that the latest reversal in polarity of the solar dipole moment occurred almost a year ago, during July 2022, which anticipates an imminent cycle maximum of the ongoing sunspot cycle 25.
Since the growth of a sunspot cycle (say, \(n\)) devours the precursor dipole moment of cycle (\(n-1\)), one would expect the time rates of these two physical processes to be in causal correlation with each other. To investigate this, we analyze the time series of the past four sunspot cycles (SC\({}_{21-24}\)) and their corresponding precursor dipole moment cycles (D\({}_{20-23}\)) by implementing linear regression over their growth and declining phases, respectively (see the Fig. 1 caption for the definitions of SC\({}_{21-24}\) and D\({}_{20-23}\)). We define the growth phase of a sunspot cycle as the interval during which the sunspot number rises from the cycle minimum to the cycle maximum at the rate \(r_{\rm SSN}\). On the other hand, we take a semi-analytical approach (prescribed in Appendix A) to determine the decay intervals of individual dipole moment cycles, based on which we estimate their rates of decay, \(r_{\rm DM}\). We find that these two dynamical quantities, namely \(r_{\rm SSN}\) and \(r_{\rm DM}\), strongly correlate with each other (Pearson's \(r=0.98\) with 97.73% confidence level), as described in Fig. 3, and the correlation can be expressed as follows,
\[r_{\rm SSN}=1.83\times r_{\rm DM}-19.17 \tag{2}\]
A further investigation of a relation similar to Eq. 2, using the decay rate of the WSO average polar field instead of the dipole moment, demonstrates a positive correlation but with poor statistical significance. Utilizing the observed rate of decay of dipole moment cycle \(D_{24}\) (i.e., \(\sim 26.1\ \mu\)T yr\({}^{-1}\)) in the empirical relationship prescribed above, we estimate the rate of rise of the ongoing sunspot cycle 25 to be \(28.5\pm 4.7\) sunspots per year - higher than that of the previous sunspot cycle 24 but lower than that of cycle 23 (see Table 1). We note that the outcome of the aforementioned regression is sensitive to the choice of the initial epoch in the decay interval of dipole moment cycles, and we discuss this further in Appendix A.
Now we demonstrate how the combination of this prior knowledge of the rise rate of a sunspot cycle with its amplitude, predicted by other independent means, can be extended to forecasting the time of occurrence of its peak. Earlier studies have found that the magnitude of the solar polar field and dipole moment at the sunspot cycle minimum significantly correlates with the strength of the subsequent sunspot cycle (Schatten et al., 1978; Yeates et al., 2008; Jiang et al., 2018). Fig. 4 depicts that the amplitude of the dipole moment, \(A_{\rm DM}\), also has a significant correlation with the subsequent sunspot cycle amplitude, \(A_{\rm SSN}\), which can be expressed in the form of the following independent relationship,
\[A_{\rm SSN}=2.00\times A_{\rm DM}+13.16 \tag{3}\]
Substituting \(A_{\rm DM}=51.75\ \mu\)T (i.e., the observed amplitude of dipole cycle D\({}_{24}\)) in equation (3), we estimate the strength of the imminent sunspot cycle 25 maximum to be \(116.91\pm 2.89\), denoting a weak-moderate cycle similar to or slightly stronger than cycle 24.
We mark the sunspot cycle minimum during December 2019 (say, \(t_{25}^{i}\)) with a monthly mean amplitude of 1.8 (say, \(A_{25}^{i}\)) as the beginning of the ongoing sunspot cycle 25. Ascribing a uniform average rise rate to this cycle (i.e., \(r_{25}=28.5\pm 4.7\) sunspots per year) as estimated from equation (2), and considering its amplitude (i.e., \(A_{25}^{f}=116.91\pm 2.89\)) predicted from equation (3), we forecast the time of occurrence of the peak of sunspot cycle 25, \(t_{25}^{f}\), to be,
\[t_{25}^{f}=t_{25}^{i}+\frac{A_{25}^{f}-A_{25}^{i}}{r_{25}}=2024.00^{+0.68}_{-0.49} \tag{4}\]
Note that in the calculation of the range of possibilities of the expected peak timing we consider only the root-mean-squared error, and no other statistical uncertainties.
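For concreteness, the chain of estimates in Eqs. (2)-(4) can be reproduced from the numbers quoted above; a minimal sketch follows. Small deviations from the quoted values (e.g., 116.91 for the amplitude) arise because the regression coefficients printed in Eqs. (2) and (3) are rounded.

```python
r_DM = 26.0578                 # decay rate of dipole cycle D24 [muT/yr], Table 1
A_DM = 51.75                   # observed amplitude of dipole cycle D24 [muT]

r_25 = 1.83 * r_DM - 19.17     # Eq. (2): rise rate of cycle 25 [SSN/yr] -> ~28.5
A_25_f = 2.00 * A_DM + 13.16   # Eq. (3): predicted amplitude of cycle 25

t_25_i, A_25_i = 2019.96, 1.8  # cycle-25 minimum (December 2019) and its SSN
t_25_f = t_25_i + (A_25_f - A_25_i) / r_25   # Eq. (4): time of the peak

print(f"rise rate: {r_25:.1f} SSN/yr, amplitude: {A_25_f:.2f}, peak: {t_25_f:.2f}")
```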
## 3 Conclusions
Analyzing long-term observations of solar photospheric magnetic activity over the past four sunspot cycles, we discover a compelling correlation between the decay rate of the solar dipole moment and the rise rate of the following sunspot cycle. We have explained how this correlation emerges out of a causal connection between the emergence and surface flux transport of new tilted bipolar sunspot pairs (cause) and the decay and reversal of the previous cycle's poloidal field (effect). Given that this causal connection is intimately related to the Babcock-Leighton mechanism for solar polar field generation, our work provides independent confirmation that this mechanism is an integral part of the solar dynamo.
The rise rate of a sunspot cycle (say, cycle \(n\)) is known to be related
Figure 4: The observed amplitudes, \(A_{\rm SSN}\), of sunspot cycles 22-24 exhibit a strong correlation (Pearson's \(r=0.99\) with 95.38% confidence level) with the amplitudes of the preceding unsigned axial dipole moment cycles as observed by WSO, \(A_{\rm DM}\). Sunspot cycle numbers (22-25) are mentioned adjacent to their respective data points in the plot. Based on the best-fit linear regression model (black-dashed line) and the observed amplitude of the preceding \(|D|\) cycle, the predicted amplitude of sunspot cycle 25 is estimated to be \(116.91\pm 2.89\), as denoted by the pink square. The error bar represents the typical magnitude of the RMSE associated with the regression model, assuming there are no other statistical uncertainties.
to the eventual peak of that sunspot cycle (\(n\)) - a relationship known as the Waldmeier effect. Our work establishes an extension of this Waldmeier effect which can be succinctly stated as: the rate of decay of the Sun's axial dipole moment of cycle (\(n-1\)) is related to the rate of rise, and consequently, the eventual strength of the following sunspot cycle (i.e., cycle \(n\)).
Additionally, we formulate a semi-analytical framework to determine the decay time interval of the dipole moment. It is worth noting that the evolution of the WSO dipole moment precedes that of the average solar polar field by nearly a year, which significantly extends the prediction window for the dynamics of the upcoming sunspot cycle with improved accuracy. The existence of such a strong correlation, in fact, enables one to forecast the timing of a sunspot cycle's peak once the amplitude of that cycle is independently anticipated. For example, we show that the ongoing sunspot cycle is likely to peak during January 2024 (within the range of July 2023 to September 2024), based on its empirically estimated amplitude of \(116.91\pm 2.89\). Note that this estimated amplitude matches the physical-model-based prediction of Bhowmik & Nandy (2018).
Predicting the time of maximum amplitude of a sunspot cycle is important for gauging when the most adverse space environmental conditions (space weather) are expected. This information is important for assessing solar radiative forcing of the Earth's upper atmosphere, for the protection of space-based technological assets, and for mission lifetime estimates. Predicting the timing of the peak of sunspot cycles has remained a challenging task for physics-based models. We have provided an alternative empirical method for predicting the timing of the sunspot cycle peak, which can be implemented only after a significant fraction of the rising phase of the sunspot cycle has occurred. The physical-model-based prediction of Bhowmik & Nandy (2018) placed the peak in 2024 (\(\pm 1\) year). This convergence of our empirical prediction with early, physics-based predictions augurs well for the field of solar cycle prediction.
## Acknowledgements
CESSI is funded by IISER Kolkata, Ministry of Education, Government of India. C.S. acknowledges fellowship from CSIR through grant no. 09/921(0334)/2020-EMR-I. The authors acknowledge helpful exchanges during the third team meeting of ISSI Team 474 sponsored by the International Space Science Institute, Bern. Authors are thankful to an anonymous reviewer for constructive comments.
## Data Availability
We use total sunspot number data made available by WDC-SILSO1, Royal Observatory of Belgium, Brussels. We also make use of Wilcox Solar Observatory synoptic charts2. Scripts of our statistical analyses will be shared on reasonable requests to the corresponding author.
Footnote 1: [https://www.sidc.be/SILSO/datafiles](https://www.sidc.be/SILSO/datafiles)
Footnote 2: [http://wso.stanford.edu/synopticl.html](http://wso.stanford.edu/synopticl.html)
|
2310.05097 | Resonant excitation of plasma waves in a plasma channel | We demonstrate resonant excitation of a plasma wave by a train of short laser
pulses guided in a pre-formed plasma channel, for parameters relevant to a
plasma-modulated plasma accelerator (P-MoPA). We show experimentally that a
train of $N \approx 10$ short pulses, of total energy $\sim 1$ J, can be guided
through $110$ mm long plasma channels with on-axis densities in the range
$10^{17} - 10^{18}$ cm$^{-3}$. The spectrum of the transmitted train is found
to be strongly red-shifted when the plasma period is tuned to the intra-train
pulse spacing. Numerical simulations are found to be in excellent agreement
with the measurements and indicate that the resonantly excited plasma waves
have an amplitude in the range $3$ - $10$ GV m$^{-1}$, corresponding to an
accelerator stage energy gain of order $1$ GeV. | Aimee J. Ross, James Chappell, Johannes J. van de Wetering, James Cowley, Emily Archer, Nicolas Bourgeois, Laura Corner, David R. Emerson, Linus Feder, Xiao J. Gu, Oscar Jakobsson, Harry Jones, Alexander Picksley, Linus Reid, Wei-Ting Wang, Roman Walczak, Simon M. Hooker | 2023-10-08T09:55:40Z | http://arxiv.org/abs/2310.05097v1 | # Resonant excitation of plasma waves in a plasma channel
###### Abstract
We demonstrate resonant excitation of a plasma wave by a train of short laser pulses guided in a pre-formed plasma channel, for parameters relevant to a plasma-modulated plasma accelerator (P-MoPA). We show experimentally that a train of \(N\approx 10\) short pulses, of total energy \(\sim 1\) J, can be guided through \(110\) mm long plasma channels with on-axis densities in the range \(10^{17}-10^{18}\) cm\({}^{-3}\). The spectrum of the transmitted train is found to be strongly red-shifted when the plasma period is tuned to the intra-train pulse spacing. Numerical simulations are found to be in excellent agreement with the measurements and indicate that the resonantly excited plasma waves have an amplitude in the range 3 - \(10\) GV m\({}^{-1}\), corresponding to an accelerator stage energy gain of order 1 GeV.
In the laser wakefield accelerator (LWFA) [1], a short laser pulse propagating through a plasma excites a trailing Langmuir wave, within which the generated electric fields can be of the order \(E_{\text{wb}}=m_{\text{e}}c\omega_{p}/e\), where \(\omega_{p}=(n_{\text{e}}e^{2}/m_{\text{e}}\epsilon_{0})^{1/2}\) is the plasma frequency, and \(n_{\text{e}}\) is the electron density. For electron densities of interest \(E_{\text{wb}}\sim 100\) GV m\({}^{-1}\), some three orders of magnitude greater than is possible in a conventional accelerator. Considerable progress has been made, including, for example, the acceleration of electrons to energies in the GeV range in centimetre-scale accelerator stages [2; 3; 4; 5; 6; 7; 8; 9; 10], and the application of LWFAs to driving compact light sources [11; 12]. Recently, free-electron laser gain was demonstrated using laser-accelerated electrons [13; 14].
To drive a large amplitude Langmuir (or 'plasma') wave, the duration \(\tau_{L}\) of the laser pulse must satisfy \(\tau_{L}\lesssim T_{p}/2\), where \(T_{p}=2\pi/\omega_{p}\) is the plasma period, corresponding to \(\tau_{L}\lesssim 100\) fs for plasma densities of interest. As a consequence, recent experimental work has been dominated by the use of high energy (joule-scale) chirped-pulse-amplification [15] Ti:sapphire lasers. However, this laser material has a high quantum defect (34%) [16] which limits the pulse repetition rate of high-energy systems to \(f_{\text{rep}}\ll 1\) kHz.
An alternative method for driving the plasma wave is to resonantly excite it with a train of low-energy pulses (or a single long, modulated pulse) in which the pulse spacing (or modulation) is matched to \(T_{p}\). An example of this approach is the plasma beat-wave accelerator (PBWA) [1; 17; 18; 19], in which two long pulses of frequencies \(\omega_{1}\) and \(\omega_{2}=\omega_{1}+\omega_{p}\) are combined to form a pulse modulated at \(\omega_{p}\). Beat-wave acceleration of electrons to energies in the \(10\) MeV range has been reported; of particular relevance to the present work is that by Tochitsky _et al._[19], who exploited ponderomotive self-guiding over 3 cm to accelerate electrons to 38 MeV at a gradient of \(\sim 1\) GV m\({}^{-1}\).
Interest in resonant wakefield excitation has revived [21] with the development of novel laser technologies, such as thin-disk lasers that can generate joule-scale pulses at \(f_{\text{rep}}\) in the kilohertz range, with high (\(\gtrsim 10\%\)) wall-plug efficiency [22]. The picosecond-duration pulses provided by these systems are too long to drive a plasma wave directly, and a second laser frequency separated by \(\omega_{p}\) is not currently available to drive a PBWA. A potential solution is the plasma-modulated plasma accelerator (P-MoPA) [23], which comprises three stages: (i) a modulator, in which a long (\(\sim 1\) ps), high-energy (\(\gtrsim 1\) J) laser pulse is spectrally modulated by the low amplitude plasma wave driven by a short (\(\lesssim 100\) fs), low-energy (\(\lesssim 100\) mJ) 'seed' laser pulse as they co-propagate in a plasma channel of on-axis density \(n_{\text{e},0}\); (ii) a dispersive optical system that converts the spectral modulation to a train of short pulses spaced by \(T_{p,0}=2\pi\sqrt{m_{\text{e}}\epsilon_{0}/n_{\text{e},0}e^{2}}\); (iii) an accelerator stage, also of on-axis density \(n_{\text{e},0}\), within which the pulse train resonantly drives a large amplitude plasma wave. Numerical simulations [23] show that a \(1.7\) J, \(1\) ps driver, with a \(140\) mJ, \(40\) fs seed, could accelerate electrons to energies of \(0.65\) GeV in a \(100\) mm-long plasma channel with \(n_{\text{e},0}=2.5\times 10^{17}\) cm\({}^{-3}\).
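As an aside, the resonance condition can be made concrete by inverting the expression for \(T_{p,0}\): given a pulse spacing \(\tau\), the matched on-axis density is \(n_{\text{e,res}}=4\pi^{2}m_{\text{e}}\epsilon_{0}/(e^{2}\tau^{2})\). The sketch below, a simple evaluation with SI constants, recovers the value \(n_{\text{e,res}}\approx 4.3\times 10^{17}\) cm\({}^{-3}\) quoted later for \(\tau=170\) fs.

```python
import numpy as np
from scipy.constants import e, m_e, epsilon_0

def resonant_density(tau):
    """On-axis electron density whose plasma period T_p equals the pulse spacing tau."""
    omega_p = 2.0 * np.pi / tau                  # required plasma frequency [rad/s]
    return omega_p**2 * m_e * epsilon_0 / e**2   # density [m^-3]

n_res = resonant_density(170e-15)                # 170 fs pulse spacing
print(f"{n_res * 1e-6:.2e} cm^-3")               # ~4.3e17 cm^-3
```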
In this Letter we investigate experimentally the accelerator stage of a P-MoPA. We demonstrate guiding of a train of \(N\approx 10\) short pulses, with a total energy of the order 1 J, through \(110\) mm long plasma channels, equivalent to 14 Rayleigh ranges, with \(n_{\mathrm{e,0}}\) in the range \(10^{17}-10^{18}\,\mathrm{cm}^{-3}\). Resonant excitation of a plasma wave within the channel is evidenced by the observation of strong red-shifting of the spectrum of the transmitted pulse train when \(T_{p,0}\) was tuned to the pulse spacing in the train. The results are found to be in excellent agreement with numerical simulations, which show that wake amplitudes in the range \(3\,\mathrm{GV}\,\mathrm{m}^{-1}\) to \(10\,\mathrm{GV}\,\mathrm{m}^{-1}\) were achieved, corresponding to an accelerator stage energy gain of the order \(1\,\mathrm{GeV}\).
Figure 1 shows schematically the arrangement employed for these experiments, undertaken with the Astra-Gemini TA3 Ti:sapphire laser at the Rutherford Appleton Laboratory. This laser provides two synchronized beams, here denoted the 'drive' and 'channel-forming' beams, each of central wavelength \(\lambda_{0}=800\,\mathrm{nm}\) with transform-limited full-width at half-maximum (FWHM) duration of \(31\,\mathrm{fs}\). In order to mimic the pulse train employed in the P-MoPA scheme, single laser pulses were converted to a train of short pulses using a Michelson interferometer, as sketched in Fig. 1(a) and described previously [24, 25]. The temporal intensity profile of the generated pulse train, shown in Fig. 1(b), was determined from single-shot measurements of the spectrum and autocorrelation of the train (see Supplemental Material [20] for further details).
The gas target used in this work was a cell-jet hybrid [10, 26], with hydrogen gas pulsed into the target via a solenoid valve and two transducers measuring the pressure on-shot. The laser pulses were coupled into, and out of, the target via a pair of \(3\,\mathrm{mm}\) radius coaxial pinholes mounted on: (i) the front of the target; and (ii) a motorized plunger that could be moved to adjust the target length \(L\). A relative RMS pressure variation along the laser propagation axis of \(4.1\,\%\) was measured [20], as shown in Fig. 1(f).
A hydrodynamic optical-field-ionized (HOFI) channel [27, 28] was formed in the target by focusing the channel-forming pulse, of energy \(\sim 100\,\mathrm{mJ}\) and FWHM pulse duration \(80\,\mathrm{fs}\), with an axicon lens of base angle \(3.6^{\circ}\). The transverse intensity profile of the beam produced by the axicon had a central maximum of FWHM spot size \((9.8\pm 0.1)\,\mathrm{\SIUnitSymbolMicro m}\), as shown in Fig. 1(c).
The pulse train, of total on-target energy \(E_{\mathrm{train}}=(2.5\pm 0.5)\,\mathrm{J}\), was focused by an off-axis \(f/40\) paraboloid to the target entrance. The transverse intensity profile of the focused beam [see Fig. 1(d)] was found to have a \(1/\mathrm{e}^{2}\) intensity radius of \((45.5\pm 3.4)\,\mathrm{\SIUnitSymbolMicro m}\), a Rayleigh range of \(z_{R}=(7.9\pm 0.7)\,\mathrm{mm}\), and to contain \((64.9\pm 1.5)\,\mathrm{\char 37}\) of its energy within its FWHM. The delay between the arrival of the channel-forming and drive beams was set to \(t_{d}=3.5\,\mathrm{ns}\). After leaving the plasma channel, the energy of the drive beam was reduced, and the beam re-imaged onto a 16-bit camera and a fibre-coupled spectrometer. An example guided mode is shown in Fig. 1(e).
The excitation of plasma waves by the drive pulse was detected through changes in its spectrum [29]. The spectra presented in Figs. 2 and 3 are photon-normalized,
Figure 1: Sketch of the experimental layout. (a) Illustration of the pulse train generation scheme. (b) Example single-shot autocorrelator (SSA) measurement for the \(\tau=(170\pm 2)\,\mathrm{fs}\) pulse train. Upper: comparison between the measured (pink) and retrieved (grey, dashed) SSA signal. Lower: retrieved pulse train intensity profile. (c) Measured axicon focus. (d) Example input mode of the focused multi-pulse drive beam. (e) Example guided mode at the channel exit. All focal spot images are normalized to their maximum. (f) Comparison between the measured and simulated longitudinal gas pressure profile [20].
defined as \(\tilde{S}(\lambda)=\lambda S_{\rm meas}(\lambda)/\int_{0}^{\infty}\lambda S_{\rm meas}(\lambda)\mathrm{d}\lambda\), where \(S_{\rm meas}(\lambda)\) is the measured spectrum. Figure 2(a) shows \(\tilde{S}(\lambda)\) for an incident pulse train with \(E_{\rm train}=(2.5\pm 0.5)\,\mathrm{J}\) and \(\tau=170\,\mathrm{fs}\), at on-axis densities approximately equal to, and one third of, the resonant value, \(n_{\rm e,res}\approx 4.3\times 10^{17}\,\mathrm{cm}^{-3}\). As expected, the input spectrum of the pulse train is modulated by the Michelson interferometer to yield \(N\approx 10\) uniformly-spaced peaks. For the off-resonant density, the spectrum of the transmitted train is similar to that of the incident pulse, with some blue-shifting apparent in the region \(\lambda\lesssim 780\,\mathrm{nm}\), likely caused by ionization of the neutral gas collar [30; 31] surrounding the HOFI channel and of the gas plumes that extend beyond the target. In contrast, at the resonant density, considerable red-shifting is observed, extending the bandwidth of the input beam by more than \(40\,\mathrm{nm}\) on the long wavelength side. The new red-shifted light beyond \(820\,\mathrm{nm}\) is seen to consist of a series of peaks [23]; these arise from spectral modulation of the laser pulse by the wakefield, which generates copies of the input spectrum shifted by \(\pm m\omega_{p}\) for integer \(m\). The peaks on the blue side of the spectrum are not visible in Fig. 2, likely due to the additional blue-shift from ionization. We note that blue-shifting would have predominantly occurred for the first few pulses in the train, and, since the pulse train was negatively chirped, their initial spectra were on the blue side of the mean wavelength.
The density-dependent red-shift seen in Fig. 2(a) strongly indicates resonant plasma wave excitation in the plasma channel. To confirm this, we also measured the transmitted spectra for a temporally-smooth \(\sim 1\,\mathrm{ps}\) drive pulse of similar energy at on-axis densities matching those in Fig. 2(a). As shown in Fig. 2(b), in this case no red-shift was observed, and the spectra were similar for both densities and were dominated by blue-shift of similar magnitude to that observed in Fig. 2(a).
Figure 3 shows the variation with on-axis plasma density of the transmitted spectra when the drive was well-guided [20] by the plasma channel. To quantify the redshift we define the red-shift metric \(R=\sum_{\lambda_{\rm min}}^{\infty}\tilde{S}(\lambda)\), where \(\lambda_{\rm min}\) is the longest wavelength in the input spectrum above the noise level. It is evident from Fig. 3(a,b) that the spectra of the pulse train driver exhibit a pronounced red-shift for densities in the range \(n_{\rm e,0}=4\times 10^{17}\,\mathrm{cm}^{-3}\) to \(5\times 10^{17}\,\mathrm{cm}^{-3}\), which agrees with the expected resonance density of \(n_{\rm e,res}=4.3\times 10^{17}\,\mathrm{cm}^{-3}\). For a train of \(N\) identical laser pulses, the full-width of the resonance peak is expected [25] to be \(\delta n_{\rm e,0}/n_{\rm e,res}\approx 8/(3N)\), corresponding to \(\delta n_{\rm e,0}\approx 1.2\times 10^{17}\,\mathrm{cm}^{-3}\) -- in good agreement with the measured FWHM in \(R\) of \(\delta n_{\rm e,0}\approx 1.6\times 10^{17}\,\mathrm{cm}^{-3}\). In contrast, Figs. 3(c, d) show that no resonance is observed for the unmodulated drive
Figure 2: Comparison of the photon-normalized spectra, \(\tilde{S}(\lambda)\), of the input pulses (grey) and those transmitted through a \(110\,\mathrm{mm}\)-long HOFI channel for: (a) a pulse train with \(\tau=170\,\mathrm{fs}\) and \(E_{\rm train}=(2.5\pm 0.5)\,\mathrm{J}\); (b) an unmodulated pulse with FWHM duration \(\sim 1\,\mathrm{ps}\) and \(E=(2.7\pm 0.5)\,\mathrm{J}\). \(\tilde{S}(\lambda)\) is shown near the resonance condition of the pulse train [blue; \(n_{\rm e,res}=(4.3\pm 0.3)\times 10^{17}\,\mathrm{cm}^{-3}\)] and for an off-resonant density [green, dashed; \(n_{\rm e,0}=(1.4\pm 0.3)\times 10^{17}\,\mathrm{cm}^{-3}\)]. The photon-normalized spectra have been scaled to a maximum value of unity for the input pulse.
pulse. Significant red-shifting of the unmodulated drive pulse _is_ observed for \(n_{\mathrm{e,0}}\gtrsim 5.5\times 10^{17}\,\mathrm{cm}^{-3}\), likely caused by self-modulation [32; 33; 34] of the long pulse.
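For reference, the photon normalization and the red-shift metric \(R\) defined above could be evaluated from a measured spectrum as in the sketch below; here the sum over wavelengths \(\lambda\geq\lambda_{\rm min}\) is approximated by a trapezoidal integral over the sampled spectrum, an implementation choice rather than the authors' exact procedure.

```python
import numpy as np

def redshift_metric(wavelength, S_meas, lam_min):
    """Photon-normalize the spectrum (S_tilde proportional to lambda * S_meas)
    and return the fraction of photons at wavelengths >= lam_min."""
    photon = wavelength * S_meas
    S_tilde = photon / np.trapz(photon, wavelength)
    red = wavelength >= lam_min
    return np.trapz(S_tilde[red], wavelength[red])
```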
To provide further insight, we compared these measurements with the results of an in-house 2D cylindrical fluid code, benchmarked against the particle-in-cell (PIC) code WarpX [35] (see [20]). The calculations used the retrieved pulse train parameters and modelled the plasma channel as an ideal fully-ionized parabolic waveguide [20]. The code ignores the effects of ionization by the laser pulse, and assumes that the temporal envelope of the drive is unchanged by its interaction with the plasma.
Figure 3(b) shows the calculated \(R\) for the \(\tau=170\,\mathrm{fs}\), \(E_{\mathrm{train}}=2.5\,\mathrm{J}\) pulse train in a plasma channel of length \(L=110\,\mathrm{mm}\). It can be seen that the position and width of the calculated resonance peak agree closely with those observed in the measurements. For some shots the measured \(R\) values reach the calculated curve, but in most cases they are lower. In order to understand this, the energy transmission of the train was measured as a function of the plasma channel length [20]. For each cell length the measured energy transmission was found to vary over a wide range, owing to the large pointing jitter of the input pulse train. Shots for which the input beam was well aligned with the channel axis were found to have an input coupling of \(T_{0}=(64\pm 4)\,\%\), which is consistent with \(|c_{0}|^{2}=(71\pm 5)\%\), where \(c_{0}\) is the calculated [20] coupling coefficient between the transverse amplitude profile of the input beam and that of the lowest-order mode of the channel. In contrast, the coupling coefficient deduced from all guided shots is only \(T_{0}=(32\pm 13)\,\%\), which reflects the additional losses arising from misalignment with respect to the channel axis. Figure 3(b) shows that if the drive energy is reduced by this factor, i.e. to \(E_{\mathrm{train}}=800\,\mathrm{mJ}\), the calculated variation of \(R\) with density is in excellent agreement with the averaged measurements. At the resonant density, the amplitude of the wakefield driven by the \(\tau=170\,\mathrm{fs}\) pulse train is calculated from the fluid simulation to be \(10\,\mathrm{G}\mathrm{V}\mathrm{m}^{-1}\) (\(3\,\mathrm{G}\mathrm{V}\mathrm{m}^{-1}\)) for \(E_{\mathrm{train}}=2.5\,\mathrm{J}\) (\(0.8\,\mathrm{J}\)).
Further evidence of resonant wakefield excitation is shown in Fig. 4, which shows the measured and calculated variation of \(R\) with on-axis density for pulse trains with \(\tau=200\,\mathrm{fs}\) and \(170\,\mathrm{fs}\). In this case, \(E_{\mathrm{train}}=(2.5\pm 0.5)\,\mathrm{J}\) and \(L=70\,\mathrm{mm}\). It can be seen that, for both pulse trains, the position, width, and magnitude of the measured variation of \(R\) agree well with the calculation assuming \(E_{\mathrm{train}}=0.8\,\mathrm{J}\). At higher densities, \(n_{\mathrm{e,0}}\gtrsim 7\times 10^{17}\,\mathrm{cm}^{-3}\), red-shifting arising from self-modulation is again observed.
It has been previously shown that HOFI [27; 28] channels achieve higher energy transmission when the wings of the laser pulse have sufficient intensity to ionize the neutral gas collar to form a conditioned [36; 37] HOFI channel. PIC simulations [20] of the present experiment indicate that the leading three pulses in the train conditioned the HOFI channel, allowing later pulses in the train to be guided with low losses. We note that conditioning of the channel could also be achieved by employing a separate, short pulse immediately ahead of the pulse train [30]; the required energy of the conditioning pulse is \(\sim 7\,\mathrm{mJ}\) per cm of channel, i.e. only \(3\%\) of the drive energy in the present experiment.
In summary we have demonstrated guiding of a train of \(N\approx 10\) short pulses, with a total pulse train energy of the order \(1\,\mathrm{J}\) through \(110\,\mathrm{mm}\) long plasma channels with on-axis densities in the range \(10^{17}-10^{18}\,\mathrm{cm}^{-3}\). The spectra of the transmitted pulse trains were found to be strongly red-shifted when the plasma period was matched to the pulse spacing in the train. In contrast, no such resonance in the red-shift was observed for an unmodulated drive pulse of the same total energy and duration. Numerical simulations were found to be in excellent agreement with the measurements, and showed that, at resonance, the wake amplitude was in the range \(3-10\,\mathrm{G}\mathrm{V}\mathrm{m}^{-1}\), corresponding to an accelerator stage energy gain of the order \(1\,\mathrm{G}\mathrm{eV}\).
These results constitute the first demonstration of resonant excitation of a plasma wave by a train of laser pulses guided in a pre-formed plasma channel. The laser and plasma parameters employed in this work are directly relevant to the accelerator stage of the P-MoPA scheme [23], which offers a route to achieving kilohertz-repetition-rate, GeV-scale plasma accelerators driven by plasma modulation of joule-scale, picosecond-duration laser pulses, such as those provided by thin-disk lasers.
Figure 4: Variation of \(R\) with on-axis density for a plasma channel of length \(L=70\,\mathrm{mm}\) and for pulse trains of energy \((2.7\pm 0.5)\,\mathrm{J}\) and pulse separation: (a) \(\tau=200\,\mathrm{fs}\); and (b) \(\tau=170\,\mathrm{fs}\). The results of the fluid calculations, assuming \(E_{\mathrm{train}}=800\,\mathrm{mJ}\), are shown by the blue dashed lines. For each plot the expected resonant density is indicated by the orange dotted line.
This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) (Grant Nos EP/R513295/1 & EP/V006797/1), the UK Science and Technologies Facilities Council (Grant Nos ST/P002048/1, ST/R505006/1, ST/S505833/1, ST/V001655/1, ST/V001612/1), and the Ken and Veronica Tregidgo Scholarship in Atomic and Laser Physics. This work required significant computing resources, which were funded by the Plasma HEC Consortium [EPSRC Grant No. EP/R029149/1] and UKRI funding [ARCHER2 Pioneer Projects]. Computing resources were provided by the ARCHER and ARCHER2 [ARCHER2 PR17125] UK supercomputers [http://archer.ac.uk](http://archer.ac.uk), [https://www.archer2.ac.uk](https://www.archer2.ac.uk). This research used the open-source particle-in-cell code WarpX [https://github.com/ECP-WarpX/WarpX](https://github.com/ECP-WarpX/WarpX), primarily funded by the US DOE Exascale Computing Project. Primary WarpX contributors are with LBNL, LLNL, CEA-LIDYL, SLAC, DESY, CERN, and TAE Technologies. We acknowledge all WarpX contributors.
Data is available from the authors upon reasonable request.
This research was funded in whole, or in part, by EPSRC and STFC, which are Plan S funders. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
|
2307.16342 | Proof-of-Federated-Learning-Subchain: Free Partner Selection Subchain
Based on Federated Learning | The continuous thriving of the Blockchain society motivates research in novel
designs of schemes supporting cryptocurrencies. Previously multiple
Proof-of-Deep-Learning(PoDL) consensuses have been proposed to replace hashing
with useful work such as deep learning model training tasks. The energy will be
more efficiently used while maintaining the ledger. However deep learning
models are problem-specific and can be extremely complex. Current PoDL
consensuses still require much work to realize in the real world. In this
paper, we proposed a novel consensus named
Proof-of-Federated-Learning-Subchain(PoFLSC) to fill the gap. We applied a
subchain to record the training, challenging, and auditing activities and
emphasized the importance of valuable datasets in partner selection. We
simulated 20 miners in the subchain to demonstrate the effectiveness of PoFLSC.
When we reduce the pool size concerning the reservation priority order, the
drop rate difference in the performance in different scenarios further exhibits
that the miner with a higher Shapley Value (SV) will gain a better opportunity
to be selected when the size of the subchain pool is limited. In the conducted
experiments, the PoFLSC consensus supported the subchain manager to be aware of
reservation priority and the core partition of contributors to establish and
maintain a competitive subchain. | Boyang Li, Bingyu Shen, Qing Lu, Taeho Jung, Yiyu Shi | 2023-07-30T23:39:58Z | http://arxiv.org/abs/2307.16342v1 | # Proof-of-Federated-Learning-Subchain: Free Partner Selection Subchain Based on Federated Learning
###### Abstract
The continuous thriving of the Blockchain society motivates research in novel designs of schemes supporting cryptocurrencies. Previously multiple Proof-of-Deep-Learning(PoDL) consensuses have been proposed to replace hashing with useful work such as deep learning model training tasks. The energy will be more efficiently used while maintaining the ledger. However, deep learning models are problem-specific and can be extremely complex. Current PoDL consensuses still require much work to realize in the real world. In this paper, we proposed a novel consensus named Proof-of-Federated-Learning-Subchain(PoFLSC) to fill the gap. We applied a subchain to record the training, challenging, and auditing activities and emphasized the importance of valuable datasets in partner selection. We simulated 20 miners in the subchain to demonstrate the effectiveness of PoFLSC. When we reduce the pool size concerning the reservation priority order, the drop rate difference in the performance in different scenarios further exhibits that the miner with a higher Shapley Value (SV) will gain a better opportunity to be selected when the size of the subchain pool is limited. In the conducted experiments, the PoFLSC consensus supported the subchain manager to be aware of reservation priority and the core partition of contributors to establish and maintain a competitive subchain.
Novel Consensus, Blockchain, Proof-of-Deep-Learning, FLChain, Federated Learning, Deep Learning
## I Introduction
The popularity of Blockchain in recent years has drawn an unprecedented amount of research attention to this field. Despite the advantages brought by the decentralized mechanism, one of the main drawbacks of Blockchain, its tremendous energy cost, is causing increasing concern [1]. Currently, most blockchains adopt hashing as the workload, which can only be solved through brute force. While hashing is secure and hard to crack, very limited additional value can be generated during this calculation process.
As a result, Proof-of-Deep-Learning (PoDL) [2] was proposed in recent years to address the "wasted energy" issue, replacing the hash algorithm with deep learning training tasks as the workload. Deep learning algorithms have been widely applied in various research areas such as computer vision and natural language processing. PoDL adopted the Proof-of-Work (PoW) [2] scheme and focused on designing pipelines and scheduling deep learning tasks to inherit its security properties. Successive works on PoDL chains such as DLchain [3] and DLBC [4] further improved security over the original design.
However, training a deep learning (DL) model for a specific task is more complex than hashing. In computer vision, the state-of-the-art neural network architectures in areas such as object detection [5] and image classification [6] vary drastically depending on the details of the problem. Moreover, deep learning models are data-driven [7]: the quality of the data used to train a model has a direct impact on its performance.
In this work, we introduce the novel consensus Proof-of-Federated-Learning-Subchain (PoFLSC), which is derived from PoDL. This is the first consensus that integrates the importance of the dataset into the task scheduling process among miners. In PoFLSC, miners are encouraged to collect and contribute their private datasets. Both the complexity of the model to train and the value of the dataset will be considered while miners choose mining partners and rank the priority of scheduled tasks. To enhance the security of PoFLSC, we adopted the challenge and witness verification mechanism from Helium [8]. Fig. 1 illustrates the interaction between a miner and other contributor individuals/groups. Once a miner joins a subchain as a contributor, it maintains ping-pong network communication to update the most recent result status with the subchain manager. Once a miner generates a challenge to a subchain, the miner fetches the model from the subchain and returns the performance of the model based on the miner's private dataset. Among visible data
Fig. 1: The interaction of a miner with other participants
utor, they share their metadata with each other to increase their visibility. The tasks of each roll will be introduced in the subsection III-B
To conclude, a novel consensus named PoFLSC is proposed in this work. With the integration of data value and response time, miners will have an incentive to contribute their datasets, thereby making more complex deep learning tasks possible. Energy efficiency is improved compared with PoDL due to the diversity of the datasets introduced by miners. Model performance will also benefit from the increased scale of the dataset.
## II Background
### _Related work_
Previously, the PoDL [2] consensus mainly utilized the computation capability of miners, with the task publisher releasing both the training tasks and the training data. In PoNAS [9], the task publisher provides training data and relatively flexible training tasks. The target task is to search for a neural network architecture; thus the actual training tasks can differ from each other.
Federated learning (FL) was proposed as an efficient deep learning method suitable for decentralized data [10]. Using FL, millions of users can train a model together with local data on their devices, and only gradients are uploaded and aggregated to update the shared model's weights. FL has some essential advantages compared with traditional deep learning training. Firstly, the data remains private during the entire training process. Secondly, the hardware requirement on the user device is minimal, and network resources are saved.
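The aggregation step described above is commonly realized as a sample-weighted average of the participants' updates (FedAvg); the minimal sketch below assumes each miner reports its layer-wise weight arrays and local sample count, and is illustrative rather than a specification of any particular FL system.

```python
import numpy as np

def fedavg(miner_weights, n_samples):
    """Sample-weighted average of layer-wise weight arrays from each miner."""
    total = float(sum(n_samples))
    n_layers = len(miner_weights[0])
    return [
        sum((n_k / total) * w_k[layer] for w_k, n_k in zip(miner_weights, n_samples))
        for layer in range(n_layers)
    ]

# Two miners, one-layer toy model: the miner with more data dominates.
agg = fedavg([[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]], n_samples=[100, 300])
print(agg)  # [array([2.5, 3.5])]
```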
Furthermore, the combination of blockchain and FL resolved some of the existing drawbacks of FL, such as the reliance on a centralized server, the need for robust network communication, and the lack of incentives [11]. FL-blockchains [12] removed the central server role and minimized the impact of remote device failures. More than that, FL-blockchains naturally motivate devices to participate and contribute to the chain.
In our PoFLSC, because all miners have a strong incentive to contribute their datasets, the training process is less likely to suffer from data scarcity. As a guarantee, it is important to fairly reward each miner according to the contribution of its dataset. Wang et al. [13] proposed to use Shapley Values (SV) to measure the contributions of participants in FL. Influence estimation for each party in horizontal FL and Shapley estimation for individual feature values are considered in their work. To alleviate the computational cost, Ghorbani et al. [14] proposed another Shapley-Value-based evaluation method to quantify the value of each training datum to the trained model's performance, named Truncated Monte Carlo Shapley.
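A sketch of the Truncated Monte Carlo Shapley estimator of [14] is given below; the `utility` callable (e.g., the validation accuracy of a model trained on the pooled data of a coalition of contributors) and the truncation tolerance are assumptions made for illustration.

```python
import numpy as np

def tmc_shapley(contributors, utility, rounds=200, tol=1e-3):
    """Monte Carlo Shapley Values: sample random orderings, credit each
    contributor with its marginal utility gain, and truncate a pass once the
    running coalition's utility is within tol of the full-coalition utility."""
    n = len(contributors)
    v_full = utility(contributors)
    sv = np.zeros(n)
    for _ in range(rounds):
        order = np.random.permutation(n)
        coalition, v_prev = [], utility([])
        for idx in order:
            if abs(v_full - v_prev) < tol:
                continue                  # truncation: remaining marginal gains ~ 0
            coalition.append(contributors[idx])
            v_new = utility(coalition)
            sv[idx] += v_new - v_prev
            v_prev = v_new
    return sv / rounds
```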
### _Participants_
1) Miners are the machines that join the decentralized network and contribute resources for cryptocurrency rewards. In PoFLSC, the abilities to train DL models, host the pool manager, act as a proxy, and collect high-quality data are all crucial for earning rewards.
2) Full nodes will perform three types of checks. i) All nodes can behave as Type One full nodes, which record and check all blocks and transactions; ii) as a Type Two full node, each data contributor periodically generates challenges to test the DL model performance of all of its visible subchains; iii) the Type Three full node audits the training procedure by repeating it.
3) The task publisher provides a specific DL model architecture and a sample dataset. In PoFLSC, publicly accessible datasets are less scarce, and miners will merge a public dataset into the training if the SV of that dataset is higher than the selection threshold. The majority of the training data consists of private datasets from data contributors.
### _Assumptions_
This work is based on three assumptions.
1) The dataset is clean. In this work, we only consider the quality of datasets. We assume that no adversarial attack, poisoning attack, or mislabeling issue occurs in any dataset. All of the mentioned attacks have been evaluated in related work [14]. These attack mechanisms would reduce the SV of the miner and potentially reduce the performance of the subchain, but the subchain manager is unlikely to reserve a miner with a low-SV dataset. Therefore, a miner mounting such attacks will not be selected.
2) The dataset from each data contributor has the same size, and each sample has the same size. With this assumption, we avoid discussing whether a large dataset of medium quality is more valuable than a small dataset of high quality.
3) Each block finishes one task, and the tasks of two different blocks are independent. Therefore, we do not discuss the case where data contributors wish to hide part of their private dataset during training. Within one block, all sharing activity is recorded, and data contributors share their high-quality datasets with their core pool partners.
## III Design
### _Overview_
The design of PoFLSC inherits the PoDL consensus and proposes modifications to improve its effectiveness in complex scenarios. PoFLSC allows miners to select/reject partners and to challenge/verify each other's performance and configuration. Specifically, PoFLSC is a free market in which miners select partners by measuring the response time and data value of other miners. This dynamic free market provides incentives for miners to contribute valuable data or datasets from minority groups.
Training a model is a more complex task than a hash-algorithm workload: it demands high performance in networking, computation, storage, data, and model design. PoNAS [9] adopted the PoDL design and proposed a mining pool solution on top of the NAS workload. The pool manager schedules strong miners to search for promising neural network structures and weak miners to fine-tune the given model. The manager assigns a random sub search space within the full space to each strong miner. The training processes of different miners are independent; therefore, the performance of a single search task of a miner depends on the neural network architecture, the search space, the training data, etc. The performance of any member of the pool cannot hold back the performance of any other member. When we adopt this mining pool strategy for a distributed deep learning framework, an intuitive method is to split the model or training data of the deep learning task and assign the pieces as sub-tasks to miners. With this strategy, however, miners that are slow in computation power or network speed may encumber the overall performance of the pool.
In PoFLSC, response time and data value are the key factors for miners to select partners. The general idea is to encourage miners to work with as many partners as possible within one sub-block time of the subchain, and then to rank the priority of partners according to the SV of their data. Every miner is allowed to contribute to multiple subchains. In general, one main block finishes one DL training task, and one FL global communication round is finished within one sub-block. All miners experience four phases: the initial phase, the core pool establishment phase, the secondary pool establishment phase, and the verification phase.
### _Miner tasks_
1) Training: Given the neural network architecture and a local or partner-shared dataset, the training workforce trains DL models for a certain number of local epochs and submits the updated gradients to the host or other miners.
2) Hosting the pool manager: Besides strong computation power, the host is the server that manages multiple other miners. Within the core pool, the host is selected under two conditions: i) its long-term reliability exceeds the core pool average; ii) among all qualified candidates, the selected host has the shortest response time (see the sketch after this list). The host also aggregates the updated gradients from all other members of the core pool and distributes the averaged global gradients to each member.
3) Proxy: The proxy can forward data or requests. With a strong proxy involved, the core pool's network performance improves in both bandwidth savings and speed.
4) Data Contributor: Data contributors collect private datasets for DL training tasks, and the owner of a high-quality dataset becomes a preferred partner in PoFLSC. Because each miner is allowed to contribute to multiple subchains as long as it can submit the updated gradients within one sub-block time of each subchain, a high-quality data contributor earns more opportunities to be the final winner.
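As a concrete reading of item 2), the sketch below illustrates host selection and the host-side gradient averaging. It is a minimal sketch under assumed data structures: `Miner` is a hypothetical record, and the paper does not prescribe a concrete implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Miner:                      # hypothetical record for illustration
    name: str
    reliability: float            # long-term reliability score
    response_time: float          # measured response time
    gradients: List[float]        # last submitted local gradients

def select_host(pool: List[Miner]) -> Miner:
    """Host rule: reliability above the pool average, then shortest response
    time. Assumes at least one member qualifies."""
    avg_rel = sum(m.reliability for m in pool) / len(pool)
    qualified = [m for m in pool if m.reliability > avg_rel]
    return min(qualified, key=lambda m: m.response_time)

def aggregate_gradients(pool: List[Miner]) -> List[float]:
    """Average the members' gradients, as the host does each global round."""
    n = len(pool)
    return [sum(g) / n for g in zip(*(m.gradients for m in pool))]
```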
### _Subchain Structure_
In PoFLSC, each block finishes one target task, and each block is divided into multiple sub-blocks. The sub-block time is predefined when the training tasks are released, and it is longer than several local-epoch times of slow miners. A relatively short sub-block time limits the number of miners that can contribute to the final DL model while increasing the number of global communication rounds. A relatively long sub-block time encourages more miners to join the competition and to participate in multiple subchains. Working with multiple subchains in parallel increases the opportunity to win. Therefore, a miner or a core pool with a valuable dataset or a short response time will want to work with multiple subchains and will increase its winning chance by upgrading to better hardware or collecting valuable data.
The main structure of the subchain is similar to PoDL [4]: it includes a block head and a block body. The block head structure remains the same as in PoDL. The first sub-block head of the current block is the hash of the previous block head. Among all visible subchains, only the winner subchain writes its sub-block information as the block information of the current block. After entering the fourth phase, a subchain becomes a candidate once its numbers of audits and challenges exceed the respective thresholds. The winner subchain is the one with the best accuracy among all candidates. All qualified subchains should receive enough audits and challenges as confirmation.
The block body records the transaction ledger and the activation transactions, as shown in Fig. 2. The transaction ledger remains the same as in PoDL. The activation transactions include information on training, challenges, and audits. Each challenge result is the model performance; auditing further verifies the training procedure. An activation transaction records the transaction number, activation type, chain-ID/model pair, verifier-ID/role pair, miner-ID/role pair, data ID, and the previous dependency transaction number.
### _Four phases of intervals_
There are four phase intervals within one block, and each interval lasts one or more sub-block times. There is no fixed timing that all subchains must follow, but within one subchain all members follow the same guide. The purpose of these phases is to keep all participants aware of the status of the training procedure, so that resources can be allocated to the different types of tasks.
In general, once a miner selects partners to establish a core pool or a secondary pool, the pool adopts a federated learning framework to train the DL model together. Each node runs the required number of local epochs and submits the gradients to the host manager. The host manager then computes the averaged global gradients and distributes the updated
Fig. 2: A sample of an activation transaction
gradients to each participating miner. One sub-block time corresponds to one global communication round.
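For illustration, a minimal sketch of one such global communication round is given below. The `local_train` callback and the flat parameter representation are assumptions; the paper only specifies that local gradients are submitted and that the host distributes the averaged result.

```python
def global_round(host_params, members, local_train, lr=0.1, local_epochs=5):
    """One sub-block: every member trains locally, the host averages updates.

    host_params -- list of floats, the current shared model parameters
    members     -- iterable of local datasets, one per pool member
    local_train -- hypothetical callback: (params, data, epochs) -> gradients
    """
    all_grads = [local_train(host_params, d, local_epochs) for d in members]
    n = len(all_grads)
    avg_grads = [sum(gs) / n for gs in zip(*all_grads)]  # host-side averaging
    # The host applies the averaged gradients and redistributes the parameters.
    return [p - lr * g for p, g in zip(host_params, avg_grads)]
```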
1) The first phase is the initial phase, in which partners are selected to form core pools. Each miner maintains an event queue for upcoming response time checks; the time a request spends in the queue is not counted as response time. The response time includes the time for training the task and transferring data. Each miner checks the response times of all visible miners and maintains a list of partner candidates. A new partner is added to the list if the sum of the response times of all candidates is shorter than one sub-block time, or if the response time of the new partner is shorter than that of the slowest partner in the list; when the total response time of all partners exceeds one sub-block time, the partner with the longest response time is removed (see the sketch after this list). Each pair of partners shares their partner lists, starting from the partner with the lowest response time. Once a partner is common to the pair, the core pool is initiated. All members of the core pool propose the candidate with the shortest response time among all unconfirmed candidates in their lists. If a proposed candidate is not on a member's local list, that miner raises a rejection and the candidate selection stops. If the number of confirmed candidates is greater than the threshold, the core pool is established; otherwise, the core pool is demolished. All participants of each core pool maintain one subchain.
2) Once the core pool is established, the process moves on to the second phase. All miners start training tasks and evaluate the SV of their partners in each subchain they participate in. Based on the SV, a miner ranks the priority of these subchains in its local task schedule. The time of the initial phase is much shorter than one sub-block time, and we consider that all subchains keep the same number of sub-blocks in the end, because all subchains start generating sub-blocks within the first sub-block and the length of each sub-block time is the same. Each subchain adopts a synchronous federated learning framework in the second phase. The pool manager nodes collect all unconfirmed candidates from each member and repeat the algorithm of phase one to select partnerships with other pool managers, but this selection happens only among the pool managers of the unconfirmed candidates.
3) Once the partnership among multiple pool managers is confirmed, the secondary pool is established and the process moves on to the third phase. Each subchain splits and merges: for each subchain, the number of branches it splits into equals the number of partnerships, and one branch of each subchain within one partnership merges into a single subchain. In the third phase, the workforce nodes receive the globally averaged gradients from their manager and continue training from the updated gradients. The manager adopts an asynchronous federated learning framework, which allows it to improve the response time with faster miners and thus helps the core pool increase its priority ranking in other schedules. The manager nodes update the response times and evaluate the SV of the other manager nodes in the partnership.
4) A subchain enters the fourth phase when its secondary pool meets both requirements: i) it has received sufficient audits and challenges, and ii) its workload performance is qualified. This subchain becomes a candidate for the final winner and receives more challenges and audits from others.
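The partner-list maintenance rule of phase 1) can be sketched as follows. This is a minimal illustration under assumptions (a dictionary of response times and a one-sub-block time budget); the paper does not fix a data structure.

```python
def update_partner_list(partners, candidate, sub_block_time):
    """Maintain the phase-1 candidate list; partners maps name -> response time."""
    name, rt = candidate
    if sum(partners.values()) + rt <= sub_block_time:
        partners[name] = rt                     # budget allows a new partner
    elif partners and rt < max(partners.values()):
        partners[name] = rt                     # faster than the slowest partner
    while sum(partners.values()) > sub_block_time:
        slowest = max(partners, key=partners.get)
        del partners[slowest]                   # drop the longest response time
    return partners
```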
### _Verification_
Full nodes will perform three types of checks.
i) All nodes can behave as Type One full nodes, which record and check all blocks and transactions. In addition, all challenge and verification activations are recorded as transactions too. The Type One results are transaction confirmations and the transaction reliability records of single nodes.
ii) As a Type Two full node, each data contributor periodically generates challenges to test the DL model performance of all of its visible subchains. A Type Two full node is limited to generating only one challenge set per period of time. The challenge set consists of multiple random subsets of its private dataset, and one subchain receives only one challenge from a given Type Two node per period. The Type Two results are the DL model performance and the pool performance records.
iii) The Type Three full node audits the training procedure by repeating it. Auditing the training procedure is a heavy workload, so it is carried out by a group of multiple nodes. The results of auditing are a comprehensive record of a pool and of all its participants' performance and reliability.
## IV Experiment
### _Experimental Setup_
In the experiment, we evaluated the effectiveness of PoFLSC. When the training process is finished in the main chain, all participants agree that the subchain with the best-performing model writes the block and initiates the next block. The PoFLSC consensus is proposed to augment the functionality within each block. In the first phase, the core pool is initiated based on response time. In the second phase, the core pool starts training and evaluates the SV of each miner. Because the pool needs to finish at least one global epoch, the manager measures the reservation priority of each miner. When resources are so limited that the pool manager cannot afford to handle all miners, the manager first reserves the miner with the highest ranking in the reservation priority queue; the pool manager thereby guarantees that the core pool finishes a global epoch within one sub-block time. It is important that every subchain can meet this commitment, so that the subchain can continue to join the secondary pool and the final competition.

Fig. 3: The histogram of 20 miners with the G-Shapley Value method.
We simulated 100 miners to verify the effectiveness. The training task is MNIST [15] handwritten digit recognition, and the DL model consists of three 2D convolution layers and two fully connected layers. Each miner randomly selects 30 samples from all samples. The response time between each pair of miners is given and fixed; the response time statistics are simulated to follow a Gaussian distribution. The demonstration subchain selected the top 20 miners as the members of the core pool. In the experiment, the SV of all miners is calculated with two different methods. Figs. 3 and 4 show the histograms of the SV of the 20 miners: Fig. 3 is for the G-Shapley Value method and Fig. 4 is for the LOO Shapley Value method.
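For reference, below is a minimal PyTorch sketch of the described model (three 2D convolution layers followed by two fully connected layers). The channel widths, kernel sizes, and pooling choices are assumptions, as the paper does not specify them.

```python
import torch
import torch.nn as nn

class MnistCnn(nn.Module):
    """Three conv layers + two fully connected layers, as in the setup.
    Channel widths and kernel sizes are illustrative assumptions."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```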
Fig. 4: The histogram of 20 miners with the LOO Shapley Value method.
Fig. 5: The statistics of the G-Shapley Value of the subchain. Each point represents a member.
Fig. 6: The performance of models in two different reservation priority orders shows the effectiveness.
### _Performance Evaluation_
Figs. 5 and 7 show the mean and standard deviation for each member, where the x coordinate represents the mean of the member's SV and the y coordinate represents the standard deviation of the member's SV. We compared the performance differences when we reserved members in ascending order versus descending order in Figs. 6 and 8. The solid line represents the descending order, meaning that when adding members we reserve the candidate with the highest SV first, and when removing members we remove the member with the lowest SV first. The dotted line shows the results of the comparative experiment, in which the manager reserves the candidate with the lowest SV first and removes the member with the highest SV first.
Due to the limitation of computation resources, we limited the size of the subchain pool to 20 in our simulation. When we gradually reduce the pool size, we observe a performance drop in all cases, but the drop rate of the solid line is gentler than that of the dotted line. When the pool size drops to 60%, the performance drops below 20% in Fig. 6, so the top 40% of members are the key contributors that must be reserved to maintain the subchain in this experiment. In practice, the value of this threshold may vary across tasks and datasets, but it is important for the manager to be aware of the reservation priority and of the core partition of contributors needed to maintain a healthy subchain. In Figs. 6 and 8, the drop rate of the performance reflects the relevance between the SV and the value of the data for the task. The difference in drop rate between the dotted lines of Figs. 6 and 8 shows that this relevance is higher with the G-Shapley method than with the LOO method. This experiment shows the effectiveness of the novel PoFLSC consensus and demonstrates that miners with higher SV obtain a better opportunity to be selected when the size of the subchain pool is limited. The SV-based reservation priority order helps a subchain manager select candidates and establish a competitive subchain.
## V Discussion
Previously, the PoDL consensus mainly utilized the computation capability of miners for deep learning training, with the training tasks and training data given by the task publisher. In PoNAS, the task publisher provides training data and relatively flexible training tasks; the target task is to search for a neural network architecture, so the actual training tasks can differ from one another. In this work, we considered the complexity of training a deep learning model and proposed PoFLSC, which emphasizes the importance of training data in DL models. In addition, the design takes into account the response time of miners and the verification between miners. The notion of response time can be extended to the computation power of a miner, the specification of the network infrastructure, etc. The verification mechanism further strengthens the security of the system.
Because DL models are data-driven, it is necessary to evaluate the value of datasets in this novel consensus. To the best of our knowledge, we are the first to adopt SV into a novel consensus
Fig. 8: The performance of models in two different reservation priority orders shows the effectiveness of PoFLSC.
Fig. 7: The statistics of the LOO Shapley Value of the subchain. Each point represents a member.
and to further evaluate the effectiveness of PoFLSC. The valuation of datasets in the consensus also improves blockchain security. For example, if there were no mechanism to evaluate the value of each dataset, honest miners would have less motivation to collect private datasets or generate synthetic datasets, while attackers would have a strong motivation to collect or generate high-quality private datasets to improve model performance and win the competition. Once the consensus requests private datasets and evaluates the value of each of them, all miners have the motivation to collect high-quality data. As a result, more effective and economical methods of data collection are expected to be proposed; for example, existing works [16, 17] have proved capable of generating synthetic training data for computer vision tasks using 3D simulation techniques. Besides helping to create incentives for data collection, SV also contributes to detecting low-quality datasets, mislabeled samples, and adversarial attacks [18].
|
2306.01517 | Parameterized Broadcast Networks with Registers: from NP to the
Frontiers of Decidability | We consider the parameterized verification of arbitrarily large networks of
agents which communicate by broadcasting and receiving messages. In our model,
the broadcast topology is reconfigurable so that a sent message can be received
by any set of agents. In addition, agents have local registers which are
initially distinct and may therefore be thought of as identifiers. When an
agent broadcasts a message, it appends to the message the value stored in one
of its registers. Upon reception, an agent can store the received value or test
this value for equality with one of its own registers. We consider the
coverability problem, where one asks whether a given state of the system may be
reached by at least one agent. We establish that this problem is decidable;
however, it is as hard as coverability in lossy channel systems, which is
non-primitive recursive. This model lies at the frontier of decidability as
other classical problems on this model are undecidable; this is in particular
true for the target problem where all processes must synchronize on a given
state. By contrast, we show that the coverability problem is NP-complete when
each agent has only one register. | Lucie Guillou, Corto Mascle, Nicolas Waldburger | 2023-06-02T13:09:23Z | http://arxiv.org/abs/2306.01517v2 | # Parameterized Broadcast Networks with Registers: from NP to the Frontiers of Decidability
###### Abstract
We consider the parameterized verification of arbitrarily large networks of agents which communicate by broadcasting and receiving messages. In our model, the broadcast topology is reconfigurable so that a sent message can be received by any set of agents. In addition, agents have local registers which are initially distinct and may therefore be thought of as identifiers. When an agent broadcasts a message, it appends to the message the value stored in one of its registers. Upon reception, an agent can store the received value or test this value for equality with one of its own registers.
We consider the coverability problem, where one asks whether a given state of the system may be reached by at least one agent. We establish that this problem is decidable; however, it is as hard as coverability in lossy channel systems, which is non-primitive recursive. This model lies at the frontier of decidability as other classical problems on this model are undecidable; this is in particular true for the target problem where all processes must synchronize on a given state. By contrast, we show that the coverability problem is NP-complete when each agent has only one register.
Parameterized verification, Well quasi-orders, Distributed systems. 10.4230/LIPIcs...1. IRIF, CNRS, Universite Paris Cite

[MISSING_PAGE_POST]
designated state, was decidable and even PSPACE-complete, but the proof turned out to be wrong [22]. As we will see, the complexity of that problem is in fact much higher.
In this paper we establish the decidability of the coverability problem. We even prove its completeness for the hyper-Ackermannian complexity class \(\mathbf{F}_{\omega^{\omega}}\), thereby showing that the problem requires a non-primitive recursive amount of time. In fact, this problem is as hard as reachability for lossy channel systems, which are transition systems with a finite automaton that can store some letters in an unreliable FIFO memory from which any letter may be erased at any time [4, 11, 25]. We further establish that this problem lies at the frontier of decidability by showing undecidability of the target problem (where one asks whether there is a run at the end of which all agents are in a given state); we contrast these results with the NP-completeness of the coverability problem when each agent has only one register.
**Related work.** Broadcast protocols are a widely studied class of systems in which processes are represented by nodes of a graph and can send messages to their neighbors. There are, however, many choices to make when designing a model for those systems: how individual processes are represented, whether the communication graph is fixed or can change, the type of messages they can send... A model where messages range over a finite alphabet was presented in [16], over a fully connected communication graph. It was rapidly shown that many basic parameterized problems are undecidable over that model [17]; similar negative results were found for Ad Hoc Networks where the communication graph is fixed but arbitrary [15]. This led the community to consider Reconfigurable Broadcast Networks (RBN), where each broadcast can be received by an arbitrary subset of agents [15].
Parameterized verification problems over RBN have been the subject of extensive study in recent years [7, 8, 12, 14]. In [13], RBN were extended to BNRA, the model studied in this article, by the addition of registers allowing processes to exchange identifiers. This extension was inspired by the success of register automata, which offer a convenient formalism to express properties of words over an infinite alphabet; see [26] for a survey on the subject.
Other approaches exist to define parameterized models with registers [9], such as dynamic register automata in which processes are allowed to spawn other processes with new identifiers and communicate integers values [1]. While basic problems on these models are in general undecidable, some restrictions on communications allow to obtain decidability [2, 20].
Such parameterized verification problems often relate to the theory of well quasi-orders and the associated high complexities obtained from bounds on "bad sequences" in ordered sets. In particular, our model is linked to two classical models from this field. The first one is data nets, which are Petri nets in which tokens are labeled with natural numbers and can exchange and compare their labels [19]. In general, inequality tests are allowed, but data nets with only equality tests have also been studied [21]. They do not subsume BNRA as, in data nets, each process can only carry one integer at a time (problems on models of data nets where tokens have tuples of integers as labels are typically undecidable). The second closely related model is lossy channel systems (LCS) [4]. LCS are derived from distributed models where processes communicate through pairwise channels; this model is a rich field of study in itself [5, 6]. LCS reachability is complete for the complexity class \(\mathbf{F}_{\omega^{\omega}}\)[11, 25]; we show that the same is true for BNRA coverability and that LCS can be simulated in BNRA.
**Overview.** We start with the model definition and some preliminary results in Section 2. We prove decidability of the coverability problem in Section 3. Finally, we prove the NP-completeness of the coverability problem with one register per process in Section 4. Due to space constraints, most proofs are postponed to the appendix.
## 2 Preliminaries
### Definitions of the Model
A _Broadcast Network of Register Automata_ (BNRA) [13] is a model describing broadcast networks of agents with local registers. A finite transition system describes the behavior of an agent: an agent can broadcast and receive messages with integer values, store them in local registers and perform equality tests. There are arbitrarily many agents. When an agent broadcasts a message, each other agent independently may or may not receive it.
A protocol with \(r\) registers is a tuple \(\mathcal{P}=(Q,\mathcal{M},\Delta,q_{0})\) with \(Q\) a finite set of states, \(q_{0}\in Q\) an initial state, \(\mathcal{M}\) a finite set of message types and \(\Delta\subseteq Q\times\mathsf{Op}\times Q\) a finite set of transitions, with a set of operations
\[\mathsf{Op}=\{\textbf{br}(m,i),\textbf{rec}(m,i,*),\textbf{rec}(m,i,\downarrow),\textbf{rec}(m,i,=),\textbf{rec}(m,i,\neq)\mid m\in\mathcal{M},1 \leq i\leq r\}\\ \cup\{\textbf{loc}(i,j,=),\textbf{loc}(i,j,\neq)\mid 1\leq i,j\leq r\}\]
Label **br** stands for _broadcasts_, **rec** for _receptions_ and **loc** for _local tests_. Given a reception \(\textbf{rec}(m,i,\alpha)\) or a local test \(\textbf{loc}(i,j,\alpha)\), \(\alpha\) is its _action_. The set of actions is \(\mathsf{Actions}:=\{=,\neq,\downarrow,*\}\), where '\(=\)' is an _equality test_, '\(\neq\)' is a _disequality test_, '\(\downarrow\)' is a _store action_ and '\(*\)' is a _dummy action_ with no effect. The _size_ of \(\mathcal{P}\) is \(|\mathcal{P}|:=|Q|+|\mathcal{M}|+|\Delta|+r\).
We now define the semantics of those systems. Essentially, we have a finite set of agents with \(r\) registers each; all registers initially contain distinct values. A step is either an agent performing a local action or an agent broadcasting a message received by some other agents.
[Semantics] Let \((Q,\mathcal{M},\Delta,q_{0})\) be a protocol with \(r\) registers, and \(\mathbb{A}\subseteq\mathbb{N}\) a finite non-empty set of _agents_. A _configuration_ over a set of agents \(\mathbb{A}\) is a function \(\gamma:\mathbb{A}\to Q\times\mathbb{N}^{r}\) mapping each agent to a state and _register values_. We write \(\mathsf{st}(\gamma)\) for the state component of \(\gamma\) and \(\mathsf{data}(\gamma)\) for its register component. An _initial configuration_\(\gamma\) is one where for all \(a\in\mathbb{A}\), \(\mathsf{st}(\gamma)(a)=q_{0}\) and \(\mathsf{data}(\gamma)(a,i)\neq\mathsf{data}(\gamma)(a^{\prime},i^{\prime})\) for all \((a,i)\neq(a^{\prime},i^{\prime})\).
Given a finite set of agents \(\mathbb{A}\) and two configurations \(\gamma,\gamma^{\prime}\) over \(\mathbb{A}\), a _step_ \(\gamma\to\gamma^{\prime}\) is defined when one of the two following conditions is satisfied:
* there exist \(a\in\mathbb{A}\), \(i,j\in[1,r]\) and a local test \((\mathsf{st}(\gamma)(a),\textbf{loc}(i,j,\alpha),\mathsf{st}(\gamma^{\prime})(a))\in\Delta\) such that \(\gamma(a^{\prime})=\gamma^{\prime}(a^{\prime})\) for all \(a^{\prime}\neq a\), \(\mathsf{data}(\gamma^{\prime})(a)=\mathsf{data}(\gamma)(a)\) and
* if \(\alpha\) = '\(=\)' then \(\mathsf{data}(\gamma)(a,i)=\mathsf{data}(\gamma)(a,j)\),
* if \(\alpha\) = '\(\neq\)' then \(\mathsf{data}(\gamma)(a,i)\neq\mathsf{data}(\gamma)(a,j)\);
* there exist \(m\in\mathcal{M}\), \(a_{0}\in\mathbb{A}\) and \(i\in[1,r]\) s.t. \((\mathsf{st}(\gamma)(a_{0}),\textbf{br}(m,i),\mathsf{st}(\gamma^{\prime})(a_{0}))\in\Delta\), \(\mathsf{data}(\gamma)(a_{0})=\mathsf{data}(\gamma^{\prime})(a_{0})\) and, for all \(a\neq a_{0}\), either \(\gamma^{\prime}(a)=\gamma(a)\) or there exists \((\mathsf{st}(\gamma)(a),\textbf{rec}(m,j,\alpha),\mathsf{st}(\gamma^{\prime})(a))\in\Delta\) s.t. \(\mathsf{data}(\gamma^{\prime})(a,j^{\prime})=\mathsf{data}(\gamma)(a,j^{\prime})\) for \(j^{\prime}\neq j\) and:
* if \(\alpha\) = '\(*\)' then \(\mathsf{data}(\gamma^{\prime})(a,j)=\mathsf{data}(\gamma)(a,j)\),
* if \(\alpha\) = '\(\downarrow\)' then \(\mathsf{data}(\gamma^{\prime})(a,j)=\mathsf{data}(\gamma)(a_{0},i)\),
* if \(\alpha\) = '\(=\)' then \(\mathsf{data}(\gamma^{\prime})(a,j)=\mathsf{data}(\gamma)(a,j)=\mathsf{data}(\gamma)(a_{0},i)\),
* if \(\alpha\) = '\(\neq\)' then \(\mathsf{data}(\gamma^{\prime})(a,j)=\mathsf{data}(\gamma)(a,j)\neq\mathsf{data}(\gamma)(a_{0},i)\).
A _run_ is a sequence of steps \(\pi:\gamma_{0}\to\gamma_{1}\to\dots\to\gamma_{k}\). We write \(\gamma_{0}\xrightarrow{*}\gamma_{k}\) when there exists such a run. A run is _initial_ when \(\gamma_{0}\) is an initial configuration. A run \(\rho:\gamma_{0}\xrightarrow{*}\gamma_{f}\) covers a state \(q\in Q\) when there exists \(a\in\mathbb{A}\) such that \(\mathsf{st}(\gamma_{f})(a)=q\).
Remark 3. In our model, agents may only send one value per message. Indeed, [13] establishes undecidability of coverability if agents can broadcast two values at once.
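To make the broadcast semantics concrete, the following minimal Python sketch (our illustration, not part of the paper) applies one broadcast step to a configuration. The tuple encoding of transitions is an assumption, and receivers here greedily fire the first matching reception, whereas the actual semantics lets each receiver nondeterministically ignore the message or fire any matching reception.

```python
def broadcast_step(config, transitions, sender, br):
    """Apply a broadcast (q, ('br', m, i), q') by `sender` to a configuration
    mapping agent ids to (state, register tuple). Receptions are encoded as
    (q, ('rec', m, j, action), q') with action in {'*', 'store', '=', '!='}."""
    q, regs = config[sender]
    q_src, (_, m, i), q_dst = br
    assert q == q_src
    value = regs[i]                       # the value appended to the message
    new_config = dict(config)
    new_config[sender] = (q_dst, regs)
    for a, (qa, ra) in config.items():
        if a == sender:
            continue
        for (qs, op, qt) in transitions:
            if qs != qa or op[0] != 'rec' or op[1] != m:
                continue
            j, action = op[2], op[3]
            rb = list(ra)
            if action == 'store':
                rb[j] = value             # store the received value
            elif action == '=' and ra[j] != value:
                continue                  # equality test fails
            elif action == '!=' and ra[j] == value:
                continue                  # disequality test fails
            new_config[a] = (qt, tuple(rb))
            break                         # greedy: first matching reception
    return new_config
```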
We give an example of a protocol with 2 registers in Figure 1. Let \(\mathbb{A}=\{a_{1},a_{2}\}\). We denote a configuration \(\gamma\) over \(\mathbb{A}\) by \(\langle(\mathsf{st}(\gamma)(a_{1}),\mathsf{data}(\gamma)(a_{1})),(\mathsf{st}(\gamma)(a_{2}),\mathsf{data}(\gamma)(a_{2}))\rangle\). The following sequence is an initial run, where \(x_{1},y_{1},x_{2},y_{2}\) are distinct natural numbers:

\[\langle(q_{0},(x_{1},y_{1})),(q_{0},(x_{2},y_{2}))\rangle\to\langle(q_{1},(x_{1},y_{1})),(q_{5},(x_{1},y_{2}))\rangle\to\langle(q_{4},(x_{1},y_{2})),(q_{4},(x_{1},y_{2}))\rangle\to\langle(q_{7},(x_{1},y_{2})),(q_{4},(x_{1},y_{2}))\rangle\to\langle(q_{7},(x_{1},y_{2})),(q_{4},(x_{1},y_{2}))\rangle\]
The broadcast messages are, in this order: \((m_{2},x_{1})\) by \(a_{1}\), \((m_{4},y_{2})\) by \(a_{2}\), \((m_{6},x_{1})\) by \(a_{2}\) and \((m_{7},x_{1})\) by \(a_{1}\). In this run, each broadcast message is received by the other agent. We make the following observation: from a run \(\rho:\gamma_{0}\xrightarrow{*}\gamma\), we can build a run in which each agent of \(\rho\) has a clone in the same state but with different register values. To obtain this, it suffices to add new agents that mimic \(\rho\) in parallel with the original agents, with which they will not share any register values. This property is called the _copycat principle_: if state \(q\) is coverable, then for all \(n\) there exists an augmented run which puts \(n\) agents on \(q\).
The _coverability problem_ Cover asks, given a protocol \(\mathcal{P}\) and a state \(q_{f}\), whether there is an initial run of \(\mathcal{P}\) that covers \(q_{f}\). The _target reachability problem_ Target asks, given a protocol \(\mathcal{P}\) and a state \(q_{f}\), whether there is an initial run \(\gamma_{0}\xrightarrow{*}\gamma_{f}\) of \(\mathcal{P}\) such that \(\mathsf{st}(\gamma_{f})(\mathbb{A})=\{q_{f}\}\), i.e., all agents end on \(q_{f}\). Let \(\mathcal{P}\) be the protocol of Example 1. As proven in Example 4, \((\mathcal{P},q_{7})\) is a positive instance of Cover. However, \((\mathcal{P},q_{7})\) is a negative instance of Target: there must be an agent staying on \(q_{4}\) to broadcast \(m_{6}\). Meanwhile, \((\mathcal{P},q_{1})\) is a positive instance of Target: all agents can broadcast \(m_{1}\) to get to \(q_{1}\). Also, \((\mathcal{P},q_{3})\) is a negative instance of Cover: we would need one agent on \(q_{2}\) and one on \(q_{5}\) with the same value in their first registers, hence we would need a broadcast \((m_{1},v)\) and a broadcast \((m_{2},v)\) for some \(v\). The two messages would need to be sent by the agent having \(v\) as its initial value, but this agent cannot send both messages. In [13], the authors considered "queries", which are conjunctions of formulas of the form '\(q(\mathsf{z})\)', '\(\mathsf{reg}_{j}(\mathsf{z})=\mathsf{reg}_{j^{\prime}}(\mathsf{z}^{\prime})\)', '\(\mathsf{reg}_{j}(\mathsf{z})\neq\mathsf{reg}_{j^{\prime}}(\mathsf{z}^{\prime})\)', with \(\mathsf{z},\mathsf{z}^{\prime}\) taken in a set of variables \(\mathsf{Var}\), \(q\in Q\) and \(j,j^{\prime}\in[1,r]\). A configuration satisfies a query if we can assign an agent to each variable so that all conjuncts are satisfied. This problem reduces to Cover. Given a protocol \(\mathcal{P}\) with \(r\) registers and a query \(\phi\) with \(k\) variables, we can construct a protocol \(\mathcal{P}^{\prime}\) with \(O(|\mathcal{P}|^{k}+|\phi|)\) states and \(kr\) registers that allows each agent to simulate \(k\) agents of the previous system. There is an initial run of the first BNRA satisfying \(\phi\) if and only if there is
Figure 1: Example of a protocol.
one of the second in which one agent simulates the \(k\) agents satisfying \(\phi\); in order to cover \(q_{f}\), this agent must check locally that \(\phi\) is satisfied by the \(k\) agents it encodes. This reduction is exponential; note, however, that the complexity class \(\mathbf{F}_{\omega^{\omega}}\) is stable under exponential reductions.
In the case of one register, we even have a polynomial-time reduction. To do so, we extend the protocol so that any agent can share its local configuration in a single broadcast (going to some sink state). In order to reach \(q_{f}\), an agent must perform a sequence of transitions in which it receives the configurations of \(k\) agents and checks that they satisfy the query.
We can get rid of local equality tests at the cost of an exponential blow-up:
There is an exponential-time reduction from Cover to Cover with no local equality tests \(\textbf{loc}(i,j,=)\).
Proof sketch. From a protocol \(\mathcal{P}\), we build a protocol \(\mathcal{P}^{\prime}\) whose registers are used to store values of multiple registers of \(\mathcal{P}\), so that equality of two registers of \(\mathcal{P}\) is encoded in the states of \(\mathcal{P}^{\prime}\). To do so, a state of \(\mathcal{P}^{\prime}\) stores which register of \(\mathcal{P}\) is mapped to which register of \(\mathcal{P}^{\prime}\), hence the exponential blowup. The full proof can be found in Appendix A.
### Classical Definitions
**Fast-growing hierarchy.** For \(\alpha\) an ordinal in Cantor normal form, we denote by \(\mathscr{F}_{\alpha}\) the class of functions corresponding to level \(\alpha\) in the Fast-Growing Hierarchy. We moreover denote by \(\mathbf{F}_{\alpha}\) the associated complexity class and use the notion of \(\mathbf{F}_{\alpha}\)-completeness. All these notions are defined in [23]. We will specifically work with the complexity class \(\mathbf{F}_{\omega^{\omega}}\). For readers unfamiliar with these notions, \(\mathbf{F}_{\omega^{\omega}}\)-complete problems are decidable but have very high complexity (non-primitive recursive, and even non-multiply recursive).
We highlight that our main result is the decidability of the problem. We show that the problem lies in \(\mathbf{F}_{\omega^{\omega}}\) because it does not complicate our decidability proof significantly; also, it fits nicely into the landscape of high-complexity problems arising from well quasi-orders.
**Well quasi-orders.** For our decidability result, we rely on the theory of well quasi-orders in the context of the subword ordering. Let \(\Sigma\) be a finite alphabet and \(w_{1},w_{2}\in\Sigma^{*}\); \(w_{1}\) is a _subword_ of \(w_{2}\), denoted \(w_{1}\preceq w_{2}\), when \(w_{1}\) can be obtained from \(w_{2}\) by erasing some letters. A sequence of words \(w_{0},w_{1},\ldots\) is _good_ if there exist \(i<j\) such that \(w_{i}\preceq w_{j}\), and _bad_ otherwise. Higman's lemma [18] states that every bad sequence of words over a finite alphabet is finite. In order to bound the length of a bad sequence, one must bound the growth of the sequence of words. We will use the following result, known as the Length function theorem [24]:
[Length function theorem [24]] Let \(\Sigma\) be a finite alphabet and let \(g:\mathbb{N}\to\mathbb{N}\) be a primitive recursive function. There exists a function \(f\in\mathscr{F}_{\omega^{|\Sigma|-1}}\) such that, for all \(n\in\mathbb{N}\), every bad sequence \(w_{1},w_{2},\ldots\) such that \(|w_{i}|\leq g^{(i)}(n)\) for all \(i\) has length at most \(f(n)\).
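As an aside, the subword ordering \(\preceq\) used throughout this section is easy to test: the following small Python sketch (an illustration, not from the paper) checks \(w_{1}\preceq w_{2}\) by greedy left-to-right matching, which is correct and runs in linear time.

```python
def is_subword(w1: str, w2: str) -> bool:
    """Check w1 <= w2 for the subword ordering: w1 is obtained from w2
    by erasing some letters."""
    it = iter(w2)
    return all(c in it for c in w1)   # 'c in it' consumes the iterator
```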
### Link with LCS
_Lossy channel systems_ (LCS) are systems where finite-state processes communicate by sending messages from a finite alphabet through lossy FIFO channels. Unlike in the non-lossy case [10], reachability of a state is decidable for lossy channel systems [4], but has non-primitive recursive complexity [25] and is in fact \(\mathbf{F}_{\omega^{\omega}}\)-complete [11]. By simulating LCS using BNRA, we obtain our \(\mathbf{F}_{\omega^{\omega}}\) lower bound for Cover:
The coverability problem for BNRA is \(\mathbf{F}_{\omega^{\omega}}\)-hard.
Proof sketch.: Given an LCS \(\mathcal{L}\), we build a protocol \(\mathcal{P}\) with two registers. The first register is never modified and plays the role of a permanent identifier. Each agent starts by receiving a foreign identifier and storing it in its second register; it then only accepts messages with this identifier, using an equality test on every reception. This way, agents form chains where messages propagate in one direction and where each agent has at most one predecessor. Each agent of the chain simulates a step of an execution of \(\mathcal{L}\): an agent receives from its predecessor a configuration of \(\mathcal{L}\), chooses the next configuration of \(\mathcal{L}\) and broadcasts it, sending first the location of \(\mathcal{L}\) and then, letter by letter, the content of the channel. Some messages might get lost, hence the lossiness. The full proof can be found in Appendix B.
**Remark 12**.: This reduction can be adapted to show that repeat-coverability (whether there is an infinite run covering \(q_{f}\) infinitely often) is undecidable for BNRA, as it is for LCS [3].
## 3 Cover Decidability
This section is dedicated to the proof of the main result of this paper:
**Theorem 13**.: _Cover for BNRA is decidable and \(\mathbf{F}_{\omega^{\omega}}\)-complete._
Thanks to Proposition 9, we may assume that our protocols have no local equality tests (the complexity class \(\mathbf{F}_{\omega^{\omega}}\) is stable under exponential reductions).
In order to abstract our runs, we want to understand what an agent \(a\) needs from other agents in order to cover a state \(q\). In general, \(a\) may receive messages with several values from the same agent. However, because each message contains a single value, we will in fact be able to clone agents so that agents sending messages to \(a\) with distinct values are distinct and do not interact with each other.
We identify two types of roles that other agents must carry out, called _specifications_. First, they might need to broadcast a sequence of message types \(w\in\mathcal{M}^{*}\) with a common value, so that \(a\) stores the value of the first such message and tests further messages for equality. We later call such a role a _boss specification_. It might also be the case that \(a\) sends some messages with a common value \(v\) that it had initially, then needs to receive a message \((m,v)\) with that same value. To do so, some other agents should be able to broadcast \((m,v)\) after receiving some sequence of message types \(w\) with that value \(v\) (that they did not have in their registers initially, because \(a\) did). We later call such a role a _follower specification_.
The two roles identified previously are the key to the decidability procedure. To represent runs, we consider _unfolding trees_ that represent all such roles, the dependencies between them and how they are carried out. The fact that we can consider trees and not general graphs is not obvious and is connected to the cloning idea mentioned above. The decidability procedure will rely on a bound on the minimal size of the unfolding trees one has to consider.
In Section 3.1, we introduce several useful notions. In Section 3.2, we define the notion of unfolding tree. In Section 3.3, we bound the size of the unfolding trees that we have to consider. In Section 3.4, we conclude by exposing our decidability procedure. In Section 3.5, we prove that the Target problem is, by contrast, undecidable.
### Useful Definitions
We define a notion of local run, which may be seen as the projection of a run onto a given agent. In this local vision, we do not specify the origin of the received messages.
A _local configuration_ is a pair \((q,\nu)\in Q\times\mathbb{N}^{r}\). An _internal step_ from \((q,\nu)\) to \((q^{\prime},\nu^{\prime})\) with transition \(\delta\in\Delta\), denoted \((q,\nu)\xrightarrow{\text{int}(\delta)}(q^{\prime},\nu^{\prime})\), is defined when \(\nu=\nu^{\prime}\) and \(\delta=(q,\alpha,q^{\prime})\)
is a broadcast or a local test satisfied by \(\nu\). A _reception step_ from \((q,\nu)\) to \((q^{\prime},\nu^{\prime})\) with transition \(\delta\in\Delta\) and value \(v\in\mathbb{N}\), denoted \((q,\nu)\xrightarrow{\mathsf{ext}(\delta,v)}(q^{\prime},\nu^{\prime})\), is defined when \(\delta\) is of the form \((q,\mathbf{rec}(m,j,\alpha),q^{\prime})\) with \(\nu(j^{\prime})=\nu^{\prime}(j^{\prime})\) for all \(j^{\prime}\neq j\) and one of the following holds:
* \(\alpha\) = '\(*\)' and \(\nu^{\prime}(j)=\nu(j)\),
* \(\alpha\) = '\(=\)' and \(\nu^{\prime}(j)=\nu(j)=v\),
* \(\alpha\) = '\(\neq\)' and \(\nu^{\prime}(j)=\nu(j)\neq v\),
* \(\alpha\) = '\(\downarrow\)' and \(\nu^{\prime}(j)=v\).

A _local step_ \((q,\nu)\rightarrow(q^{\prime},\nu^{\prime})\) is either a reception step or an internal step. A _local run_ \(u\) is a sequence of local steps denoted \((q_{0},\nu_{0})\xrightarrow{*}(q,\nu)\). A value \(v\in\mathbb{N}\) appearing in \(u\) is _initial_ if it appears in \(\nu_{0}\) and _non-initial_ otherwise. The _input_ of \(u\), written \(\mathsf{ln}(u)\in(\mathcal{M}\times\mathbb{N})^{*}\), is the sequence of messages of its reception steps; its _output_, written \(\mathsf{Out}(u)\in(\mathcal{M}\times\mathbb{N})^{*}\), is the sequence of messages broadcast in \(u\). For \(v\in\mathbb{N}\), the \(v\)_-input_ \(\mathsf{ln}_{v}(u)\) (resp. \(v\)_-output_ \(\mathsf{Out}_{v}(u)\)) is the word \(m_{0}\cdots m_{\ell}\in\mathcal{M}^{*}\) such that \((m_{0},v)\cdots(m_{\ell},v)\) is the projection of \(\mathsf{ln}(u)\) (resp. \(\mathsf{Out}(u)\)) on \(\mathcal{M}\times\{v\}\).
A _decomposition_ is a tuple \(\mathsf{dec}=(w_{0},m_{1},w_{1},\ldots,m_{\ell},w_{\ell})\) with \(w_{0},\ldots,w_{\ell}\in\mathcal{M}^{*}\) and \(m_{1},\ldots,m_{\ell}\in\mathcal{M}\), with \(m_{i}\neq m_{j}\) for all \(i\neq j\). In particular we have \(\ell\leq|\mathcal{M}|\). A word \(w\in\mathcal{M}^{*}\) _admits decomposition_ \(\mathsf{dec}=(w_{0},m_{1},w_{1},\ldots,m_{\ell},w_{\ell})\) if \(w\preceq w_{0}^{\prime}w_{1}^{\prime}\cdots w_{\ell}^{\prime}\) where, for all \(j\), \(w_{j}^{\prime}\) can be obtained from \(w_{j}\) by adding letters from \(\{m_{1},\ldots,m_{j-1}\}\). We denote by \(\mathcal{L}^{\mathsf{dec}}\) the language of words that admit decomposition \(\mathsf{dec}\). This definition will be useful later; the intuition is that a decomposition describes the sequence of messages sent with some value \(v\) in a run. The \(w_{i}\) are message types sent by the agent that holds \(v\) initially, and the \(m_{i}\) mark the times at which each message type is broadcast for the first time by another agent. This is all the information we need: if an agent manages to send some message \((m,v)\) with a value \(v\) it did not have initially, then from this point on we can assume that we have an unlimited supply of messages \((m,v)\), using (essentially) the copycat principle.
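To make the definition concrete, the following Python sketch (an illustration following the definition above verbatim, not code from the paper) greedily tests membership in \(\mathcal{L}^{\mathsf{dec}}\), assuming message types are single characters: inside the \(j\)-th block, a letter of \(w\) is either matched inside \(w_{j}\) or absorbed for free when it belongs to the allowed marker set. Greedy leftmost matching is correct here because staying as early as possible in the blocks never hurts.

```python
def in_dec_language(w, dec):
    """Greedy membership test for L^dec, dec = (w_0, m_1, w_1, ..., m_l, w_l).

    Blocks are dec[0::2], markers m_1..m_l are dec[1::2]. Following the
    definition above, block j absorbs letters from {m_1, ..., m_{j-1}} for
    free (some presentations use {m_1, ..., m_j} instead)."""
    blocks, markers = list(dec[0::2]), list(dec[1::2])
    j, pos = 0, 0                        # current block and position in it
    for c in w:
        while True:
            if c in set(markers[:max(j - 1, 0)]):
                break                    # absorbed for free in block j
            k = blocks[j].find(c, pos)
            if k != -1:
                pos = k + 1              # matched as a subword of w_j
                break
            j, pos = j + 1, 0            # not available here: try next block
            if j >= len(blocks):
                return False
    return True
```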
### Unfolding Trees
An _unfolding tree_ is an abstraction of a run in the form of a tree where each node is assigned a local run and a specification of its role. In this vision, the node at the root corresponds to the local run of the agent that we are interested in (_e.g._, the agent covering \(q_{f}\)), and children nodes are here to provide messages that this agent needs to receive; such needs are expressed using specifications. A _boss specification_ consists of a word \(\mathsf{bw}\in\mathcal{M}^{\ast}\) describing a sequence of message types that should be broadcast all with the same value. A _follower specification_ consists of a pair \((\mathsf{fw},\mathsf{fm})\in\mathcal{M}^{\ast}\times\mathcal{M}\), meaning that one must be able to broadcast \(\mathsf{fm}\) with value \(v\) after receiving the sequence \(\mathsf{fw}\) with value \(v\).
We first provide the formal definition of unfolding trees. We will then explain this definition and why this notion is relevant for Cover.
An _unfolding tree_\(\tau\) over \(\mathcal{P}\) is a finite tree where nodes \(\mu\) have three labels:
* a local run of \(\mathcal{P}\), written \(\mathbf{lr}(\mu)\), starting in the initial state with distinct register values;
* a value in \(\mathbb{N}\), written \(\mathbf{val}(\mu)\);
* a specification \(\mathbf{spec}(\mu)\), which is either a word \(\mathbf{bw}(\mu)\in\mathcal{M}^{\ast}\) (boss specification) or a pair \((\mathsf{fw}(\mu),\mathsf{fm}(\mu))\in\mathcal{M}^{\ast}\times\mathcal{M}\) (follower specification). In the first case we say that the node is a _boss node_, otherwise it is a _follower node_.
Moreover, all nodes \(\mu\) in an unfolding tree must satisfy the four following conditions:
* For each non-initial value \(v\neq\mathbf{val}(\mu)\) of \(\mathbf{lr}(\mu)\), \(\mu\) has a child \(\mu^{\prime}\) which is a boss node such that \(\mathsf{ln}_{v}(\mathbf{lr}(\mu))\) is a subword of \(\mathbf{bw}(\mu^{\prime})\).
* For each initial value \(v\) in \(\mathbf{lr}(\mu)\), there is a decomposition \(\mathsf{dec}=(w_{0},m_{1},w_{1},\ldots,m_{\ell},w_{\ell})\) s.t.:
* \(\mathbf{lr}(\mu)\) may be split into successive local runs \(u_{0},\ldots,u_{\ell}\) where, for all \(i\in[1,\ell]\), \(w_{i}\preceq\mathsf{Out}_{v}(u_{i})\) and \(\mathsf{ln}_{v}(u_{i})\in\{m_{1},\ldots,m_{i-1}\}^{*}\),
* for all \(i\in[1,\ell]\), \(\mu\) has a child \(\mu_{i}\) which is a follower node such that \(\mathbf{fm}(\mu_{i})=m_{i}\) and \(\mathbf{fw}(\mu_{i})\in\mathcal{L}^{\mathsf{dec}_{i}}\) where \(\mathsf{dec}_{i}=(w_{0},m_{1},w_{1},\ldots,m_{i-1},w_{i-1})\).
* If \(\mu\) is a follower node then \(\mathbf{val}(\mu)\) is not an initial value of \(\mathbf{lr}(\mu)\), \(\mathsf{ln}_{\mathbf{val}(\mu)}(\mathbf{lr}(\mu))\preceq\mathbf{fw}(\mu)\) and \(\mathsf{Out}_{\mathbf{val}(\mu)}(\mathbf{lr}(\mu))\) contains \(\mathbf{fm}(\mu)\).
* If \(\mu\) is a boss node, then \(\mathbf{val}(\mu)\) is an initial value of \(\mathbf{lr}(\mu)\) and the decomposition \(\mathsf{dec}\) of (ii) for \(\mathbf{val}(\mu)\) satisfies \(\mathbf{bw}(\mu)\in\mathcal{L}^{\mathsf{dec}}\).

Lastly, given \(\tau\) an unfolding tree, we define its _size_ by \(|\tau|:=\sum_{\mu\in\tau}|\mathbf{lr}(\mu)|+|\mathbf{spec}(\mu)|\). Note that the size of \(\tau\) takes into account the size of its nodes, so that a tree \(\tau\) can be stored in space polynomial in \(|\tau|\) (renaming the values appearing in \(\tau\) if needed).
We now explain this definition. Let \(\mu\) be a node of an unfolding tree \(\tau\) and let \(u:=\mathbf{lr}(\mu)\). \(u\) encodes the local run of a given agent, \(\mathbf{spec}(\mu)\) encodes the specification that this local run carries out and \(\mathbf{val}(\mu)\) encodes the value for which the specification is carried out.
Conditions (i) and (ii) state that the specifications of the children of \(\mu\) are witnesses that messages received in the local run \(\mathbf{lr}(\mu)\) can be broadcast by other agents. Conditions (iii) and (iv) state that \(\mu\) is a witness that its specification is carried out.
Condition (i) expresses that, for every non-initial value \(v\) of \(u\), \(\mu\) must have a boss child witnessing that \(\mathsf{ln}_{v}(u)\) can indeed be broadcast. Because \(v\) was first stored by a reception step of \(u\), any other (fresh) value with sequence of message types containing \(\mathsf{ln}_{v}(u)\) also works and we do not impose the value label of this child to be \(v\).
We now explain condition (ii). Let \(v\) be an initial value of \(u\). Consider a run where \(u\) is the local run of agent \(a\). If another agent broadcasts with value \(v\), it has first received and stored \(v\). Therefore, such an agent can be duplicated, and we may afterwards assume that we have an unlimited supply of messages \((m,v)\). We split \(u\) into \(u_{0},\ldots,u_{\ell}\) based on the first point where each type of message is received with value \(v\). For every \(i\), the sequence of messages available with value \(v\) during \(u_{i}\) is \(\mathsf{Out}_{v}(u_{i})\) expanded by freely adding symbols from \(\{m_{1},\ldots,m_{i-1}\}\). Therefore, the child \(\mu_{i}\) responsible for the broadcast of \((m_{i},v)\) may first receive with value \(v\) a subword of \(w^{\prime}_{0}\cdot w^{\prime}_{1}\cdots w^{\prime}_{i-1}\) where, for all \(j\leq i-1\), \(w^{\prime}_{j}\) is obtained from \(\mathsf{Out}_{v}(u_{j})\) by adding symbols from \(\{m_{1},\ldots,m_{j-1}\}\), which we state as \(\mathbf{fw}(\mu_{i})\in\mathcal{L}^{\mathsf{dec}_{i}}\).
Condition (iii) directly states that a follower node \(\mu\) receives word \(\mathbf{fw}(\mu)\) with value \(\mathbf{val}(\mu)\) and broadcasts message \((\mathbf{fm}(\mu),\mathbf{val}(\mu))\). Condition (iv) expresses that a boss node witnesses the broadcast of a sequence of messages \(\mathbf{bw}(\mu)\) with a single value; in this sequence, some messages may come from auxiliary agents encoded in follower children, which is why we have the condition that \(\mathbf{bw}(\mu)\in\mathcal{L}^{\mathsf{dec}}\) and not simply \(\mathsf{Out}_{\mathbf{val}(\mu)}(u)\preceq\mathbf{bw}(\mu)\).
Our aim is to prove that we can study Cover directly on unfolding trees. We consider trees whose root is a boss node, as they suffice to witness coverability (a follower node implicitly relies on its parent's ability to broadcast some messages).
A _coverability witness_ for \((\mathcal{P},q_{f})\) is an unfolding tree over \(\mathcal{P}\) whose root \(\mu\) is a boss node whose local run \(\mathbf{lr}(\mu)\) covers \(q_{f}\).
In Figure 2 we display an unfolding tree obtained from the run of Example 4. Tables are local runs, columns are local configurations. For instance, the local run at \(\mu_{1}\) is
\[(q_{0},(x_{1},y_{1}))\xrightarrow{\mathsf{int}((q_{0},\mathbf{br}(m_{2},1),q_{1}))}(q_{1},(x_{1},y_{1}))\xrightarrow{\mathsf{ext}((q_{1},\mathbf{rec}(m_{4},2,\downarrow),q_{4}),y_{2})}(q_{4},(x_{1},y_{2}))\xrightarrow{\mathsf{ext}((q_{4},\mathbf{rec}(m_{6},1,=),q_{7}),x_{1})}(q_{7},(x_{1},y_{2}))\xrightarrow{\mathsf{int}((q_{7},\mathbf{br}(m_{7},1),q_{7}))}(q_{7},(x_{1},y_{2}))\]
We explain why conditions (i) and (ii) are satisfied at the root \(\mu_{1}\) of the tree. Let \(u:=\operatorname{\mathbf{lr}}(\mu_{1})\) be its local run. The only non-initial value in \(u\) is \(y_{2}\), and \(\mathsf{ln}_{y_{2}}(u)=m_{4}\); condition (i) is satisfied as \(\mu_{1}\) has a boss child with a boss specification containing \(m_{4}\). For the initial value \(y_{1}\), condition (ii) is satisfied as \(u\) never receives a message with value \(y_{1}\). For \(x_{1}\), consider the decomposition \(\mathsf{dec}:=(w,m_{6},w^{\prime})\) for \(w:=m_{2}\) and \(w^{\prime}:=m_{7}\) seen as words of \(\mathcal{M}^{*}\). Condition (ii) is satisfied thanks to \(\mu_{3}\) being a child of \(\mu_{1}\) with follower specification \((\mathbf{fm},\mathbf{fw})\) such that \(\mathbf{fm}=m_{6}\) and \(\mathbf{fw}\in\mathcal{L}^{\mathsf{dec}_{1}}\) where \(\mathsf{dec}_{1}=(m_{2})\). One can check that conditions (i) and (ii) are satisfied for the other nodes.
Condition (iii) only applies to \(\mu_{3}\). It is satisfied as \(\operatorname{\mathbf{lr}}(\mu_{3})\) broadcasts \((m_{6},x_{1})\) after receiving only \((m_{2},x_{1})\) with that value. For condition (iv), it suffices to observe that \(\mu_{2}\) and \(\mu_{4}\) broadcast their \(\mathsf{bw}\) themselves; we can consider decompositions \((m_{4})\) and \((m_{2})\) respectively. Moreover, \(\mu_{1}\) satisfies the boss specification \(\mathsf{bw}:=m_{2}\,m_{6}\,m_{6}\,m_{7}\) as \(\mathsf{bw}\in\mathcal{L}^{\mathsf{dec}}\) with \(\mathsf{dec}:=(m_{2},m_{6},m_{7})\) as above. Therefore \(\mu_{1}\) is a witness that \(\mathsf{bw}\) can be broadcast with a single value, although its local run does not broadcast \(m_{6}\) itself.
The run from Example 4 involved two agents \(a_{1}\) and \(a_{2}\); \(a_{1}\) corresponds to nodes \(\mu_{1}\) and \(\mu_{4}\) and \(a_{2}\) to nodes \(\mu_{2}\) and \(\mu_{3}\). Note that, if we apply our procedure described below to build a run from \(\tau\), we would use 4 distinct agents, each playing a single role.
We now prove that Cover can be stated as the existence of an unfolding tree that is a coverability witness as defined above:
**Proposition 17**: _An instance of Cover \((\mathcal{P},q_{f})\) is positive if and only if there exists a coverability witness for that instance._
Proof sketch.: The translation from run to tree works by induction on the length of the run. We first define in a natural way what it means for a run to satisfy a specification. We consider a run \(\rho\) and isolate a well-chosen agent \(a\), whose local run witnesses that the specification is satisfied. We call the induction hypothesis with the specifications expressing what \(a\) needs to receive from other agents. Each such specification is satisfied by a strict prefix of \(\rho\) (the only exception being if \(a\) satisfies a boss specification with value \(v\) and the last step of \(\rho\) is a broadcast with \(v\) by another agent; in this case, we use the induction hypothesis on \(\rho\) but with a follower specification, hence the induction is well-founded). We construct an unfolding tree by labeling the root with the specification and the local run of \(a\), and attaching below it the subtrees obtained by induction hypothesis.
Figure 2: Example of an unfolding tree. The step labels in local runs are omitted for simplicity.
The translation from tree to run consists in an induction on the tree. A key concept is that of a partial run, which extends the notion of local run to a subset of agents: in a partial run, some receptions, called _external_, are not matched by a broadcast. This is meant to represent the behavior of a subtree of the unfolding tree: if the root of an unfolding tree is a follower node with specification \((\mathsf{fw},\mathsf{fm})\) then the corresponding partial run receives an external sequence \(\mathsf{fw}\). The inductive step applies the induction hypothesis to the children of the root to obtain partial runs and merges them with the local run of the root by matching broadcasts to the corresponding external receptions. See Appendix C for the full proof.
### Bounding the Size of the Unfolding Tree
Our aim is now to provide bounds on the size of the coverability witness. We start with two simple observations. First, for boss specifications, the longer the word broadcast, the better: if a word \(\mathsf{bw}\) can be broadcast with a single value, then any subword of \(\mathsf{bw}\) can also be received. For follower specifications, it goes in the opposite direction: for a fixed \(\mathsf{fm}\), the shorter the requirement \(\mathsf{fw}\), the better. The following lemma thus provides two ways of shortening an unfolding tree. Its proof can be found in Appendix D.
**Lemma 18**: _Let \(\tau\) be a coverability witness for \((\mathcal{P},q_{f})\). Let \(\mu,\mu^{\prime}\) be two nodes of \(\tau\) such that \(\mu\) is an ancestor of \(\mu^{\prime}\). If one of the conditions below holds, then there exists a coverability witness for \((\mathcal{P},q_{f})\) of size smaller than \(|\tau|\):_
* \(\mu\) and \(\mu^{\prime}\) are boss nodes and \(\mathsf{bw}(\mu)\)\(\preceq\)\(\mathsf{bw}(\mu^{\prime})\); or
* \(\mu\) and \(\mu^{\prime}\) are follower nodes, \(\mathsf{fw}(\mu^{\prime})\)\(\preceq\)\(\mathsf{fw}(\mu)\) and \(\mathsf{fm}(\mu^{\prime})=\mathsf{fm}(\mu)\).
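Reading \(\preceq\) as the (scattered) subword ordering to which the Length function theorem below applies, both conditions are cheap to test mechanically; a minimal Python sketch, with words over \(\mathcal{M}\) represented as tuples of message names (the names here are illustrative):

```python
def is_subword(u, v):
    """True iff u is obtained from v by erasing letters,
    i.e. u is below v in the (scattered) subword ordering."""
    it = iter(v)
    return all(letter in it for letter in u)

# First shortening condition of the lemma, on two boss nodes mu, mu':
assert is_subword(("m2", "m6"), ("m2", "m6", "m6", "m7"))      # bw(mu) <= bw(mu')
assert not is_subword(("m7", "m2"), ("m2", "m6", "m6", "m7"))  # order matters
```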
We now show that there is a computable bound on the size of the unfolding tree achieving a given specification and labeled with a protocol \(\mathcal{P}\). Lemma 18 leads us towards an application of the Length function theorem; however, this theorem requires a bound on the lengths of the words. In fact, there is no reason to think that the lengths of the labels of the children of a node can be bounded with respect to the length of the label of that node.
In order to bound the size of the nodes, we use the following result, which essentially states that if there is a local run between two local configurations \((q,\nu)\) and \((q^{\prime},\nu^{\prime})\) then there is one of length bounded by a primitive recursive function and which does not require larger inputs than the previous one.
**Lemma 19**: _There exists a primitive recursive function \(\psi(n,r)\) such that, for every protocol \(\mathcal{P}\) with \(r\) registers, for every local run \(u_{0}:(q_{0},\nu_{0})\xrightarrow{\ast}(q_{f},\nu_{f})\) in \(\mathcal{P}\), for every section \(u:(q,\nu)\xrightarrow{\ast}(q^{\prime},\nu^{\prime})\) of \(u_{0}\), for every finite \(V\subseteq\mathbb{N}\) such that \(V\) contains all message values appearing in \(u\), there exists a local run \(u^{\prime}:(q,\nu)\xrightarrow{\ast}(q^{\prime},\nu^{\prime})\) such that \(\mathsf{len}(u^{\prime})\leq\psi(|\mathcal{P}|,r)\) and:_
* for all \(v^{\prime}\in\mathbb{N}\setminus V\), there exists \(v\) a non-initial value of \(u_{0}\) such that \(\mathsf{ln}_{v^{\prime}}(u^{\prime})\preceq\mathsf{ln}_{v}(u)\),
* for all \(v\in V\), \(\mathsf{ln}_{v}(u^{\prime})\preceq\mathsf{ln}_{v}(u)\).
Proof sketch. First, we prove that any long portion of \(u\) must change the value of every register at least once; otherwise we can shorten the run using an induction on the number of registers. We then manage to prove that, if \(u\) includes twice the same sequence of transitions of sufficient length, then we can cut off anything in the middle and glue back together the ends. While shortening the local run we may add some fresh values to it (see Figure 5 in the appendix), which is not a problem as we ensure that they are less constraining than the ones that were in the original run. For technical reasons, we want to prevent fresh values added in the proof from mimicking initial values of the agent. See Appendix E for the full proof.
**Remark 20**.: The function \(\psi(n,r)\) defined above is actually a tower of exponentials of height \(r\) where each floor is a polynomial in \(n\). Perhaps surprisingly, this bound is tight: one may need a local run of length a tower of exponentials to reach a given local configuration while being allowed to receive sequences of messages of the same value from a given fixed set.
If we had in our unfolding tree only boss nodes or only follower nodes, then the previous result would allow us to apply the Length function theorem. Indeed, we can bound the size of a node with respect to the nodes to which it must send long words of messages. This means we can find a bound for a node \(\mu\) that depends on \(\mu\)'s follower children's size and, if \(\mu\) is a boss node, with respect to its parent's size. However, we cannot bound the size of an unfolding tree from the root to the leaves because of follower nodes nor from the leaves to the root because of boss nodes. We thus rearrange the tree as in Figure 3 to make it so that long sequences of messages are sent upwards. We formalize this with the notion of altitude:
**Definition 21**.: _Let \(\tau\) be an unfolding tree. We define the altitude of a node \(\mu\) of \(\tau\), written \(\mathbf{alt}(\mu)\), recursively as follows:_
* _The altitude of the root is_ \(0\)_,_
* _The altitude of a boss node is the altitude of its parent minus one,_
* _The altitude of a follower node is the altitude of its parent plus one._
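The recursion can be transcribed directly; a minimal sketch, assuming an unfolding tree is given as nodes tagged "boss" or "follower" (the `Node` class is hypothetical):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Node:
    kind: str                        # "boss" or "follower"
    children: List["Node"] = field(default_factory=list)

def altitudes(root: Node) -> Dict[int, int]:
    """Altitude of every node (Definition 21): 0 at the root,
    parent - 1 for a boss child, parent + 1 for a follower child."""
    alt: Dict[int, int] = {}
    def visit(node: Node, a: int) -> None:
        alt[id(node)] = a
        for child in node.children:
            visit(child, a - 1 if child.kind == "boss" else a + 1)
    visit(root, 0)
    return alt
```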
We now use the previous lemma to bound the label of each node \(\mu\) with respect to its neighbors of higher altitude, i.e., its follower children and its parent if it is a boss node. The idea is that these nodes define the number of messages that \(\mathbf{lr}(\mu)\) must output to satisfy the unfolding tree conditions. The function \(\psi\) in the statement below is the one from Lemma 19.
**Lemma 22**.: _Let \(\mathcal{P}\) be a protocol over \(r\) registers, let \(\tau\) be an unfolding tree over \(\mathcal{P}\) of minimal size satisfying a boss specification \(\mathbf{bw}\), let \(\mu\) be a node of \(\tau\). Let \(K\) be such that for all follower children \(\mu_{f}\) of \(\mu\), \(|\mathbf{fw}(\mu_{f})|\leq K\). We have the following properties:_
* _If \(\mu\) is a boss node, then:_
  * _if \(\mu\) is the root of \(\tau\) then \(\mathbf{bw}(\mu)=\mathbf{bw}\), otherwise \(|\mathbf{bw}(\mu)|\leq|\mathbf{lr}(\mu^{\prime})|\) with \(\mu^{\prime}\) its parent;_
  * _in both cases \(|\mathbf{lr}(\mu)|\leq\psi(|\mathcal{P}|,r)\Big{[}|\mathbf{bw}(\mu)|+|\mathcal{M}|rK+1\Big{]}\)._
* _If \(\mu\) is a follower node, then \(|\mathbf{fw}(\mu)|\leq|\mathbf{lr}(\mu)|\leq\psi(|\mathcal{P}|,r)\Big{[}1+|\mathcal{M}|rK\Big{]}\)._
Proof sketch.: A node \(\mu\) has at most \(|\mathcal{M}|\) follower children for each initial value, hence at most \(|\mathcal{M}|r\) in total, each one of them requiring at most \(K\) messages. The node \(\mu\) may have to output \(\mathbf{bw}(\mu)\) extra messages to satisfy its specification if it is a boss node, or just one extra message if it is a follower node. This gives a bound on the number of messages \(\mathbf{lr}(\mu)\) needs to broadcast. We mark the positions at which \(\mathbf{lr}(\mu)\) sends them and use Lemma 19 to bound the length of the sections of the run connecting two such broadcasts by \(\psi(|\mathcal{P}|,r)\), which yields the bounds above. See Appendix F for the full proof.

Figure 3: Rearrangement of a tree, with the root in red. Black solid arrows connect parents to children; blue dashed arrows highlight that long words of messages are sent upwards.
Thanks to the previous lemma, we bound the size of a node with respect to its altitude:
**Lemma 23**: _Let \((\mathcal{P},q_{f})\) be a positive instance of Cover, \(\tau\) a coverability witness for \((\mathcal{P},q_{f})\), and \(\mathbf{altmax}\) the maximal altitude in \(\tau\). There exists a primitive recursive function \(f_{0}\) such that any node \(\mu\) of \(\tau\) has size bounded by \(f_{0}(|\mathcal{P}|+\mathbf{altmax}-\mathbf{alt}(\mu))\)._
Proof sketch.: Applying Lemma 22 inductively from highest to lowest altitude, we bound the sizes of the labels of all nodes at a given altitude \(i\) with respect to \(\mathbf{altmax}-i\). See Appendix G for the detailed proof.
**Proposition**: _There exists a function \(f\) of class \(\mathscr{F}_{\omega^{|\mathcal{M}|+1}}\) such that an instance \((\mathcal{P},q_{f})\) of Cover is positive if and only if it has a coverability witness \(\tau\) of size bounded by \(f(|\mathcal{P}|)\)._
Proof sketch.: The full proof is in Appendix H. \((\mathcal{P},q_{f})\) is positive if and only if there exists a coverability witness for it thanks to Proposition 17; we consider a coverability witness \(\tau\) of minimal size. In a branch of \(\tau\) reaching maximal altitude, we mark the nodes that have a greater altitude than all the previous ones (see Figure 6). They are necessarily follower nodes, as a boss node is below its parent. This sequence (taken from highest to lowest altitude) is such that the \(i\)th term is at altitude \(\mathbf{altmax}-i\), and we can bound its size with respect to \(i\) with the previous arguments. Along with Lemma 18, we apply the Length function theorem on that sequence to bound its length, hence the maximal altitude (Lemma 19).
This yields in turn a bound on the root label, as its altitude (0) has a bounded difference with the maximal one. Another application of the Length function theorem, this time with boss nodes, allows us to bound the minimal altitude of a node of this tree (Lemma 19).
Once we have bounded both the maximal and minimal altitudes, we can infer a bound on the size of all nodes using Lemma 23, and then on branches, as we can shorten a branch as soon as it has two nodes with the same specification. The bound on the size of the tree then follows from the observation that, as nodes have bounded local runs, they only see a bounded number of values and thus need a bounded number of children.
### Decidability
In Section 3.2, we showed that unfolding trees are a sound and complete abstraction for Cover. In Section 3.3, we proved that there is a computable bound (of the class \(\mathscr{F}_{\omega^{\omega}}\)) on the size of a minimal coverability witness, if it exists. Our decidability procedure computes that bound, enumerates all trees of size below the bound, and checks for each of them whether it is a coverability witness. Details can be found in Appendix I.
**Theorem**: _Cover for BNRA is decidable and \(\mathbf{F}_{\omega^{\omega}}\)-complete._
### Undecidability of Target
A natural next problem, after Cover, is the target reachability problem (Target). Our Cover procedure heavily relies on the ability to add agents at no cost; for Target we need to guarantee that those agents can later reach the target state, which makes the problem
harder. We in fact show that Target is undecidable, which suggests that while we can obtain decidability of Cover thanks to some monotonicity properties of the problem, we cannot analyze more precisely the set of runs of such systems.
**Theorem**: _Target is undecidable for BNRA with two registers._
Proof sketch.: We simulate a Minsky machine with two counters. Like for the LCS encoding, the first register is never modified and plays the role of a permanent identifier. We start with some initialisation phase where each agent stores some other agent's identifier in its second register, which will be its "predecessor"; it then only accepts messages from its predecessor. As there are finitely many agents, there is a cycle in the predecessor graph.
In a cycle, we use the fact that _all_ agents must reach state \(q_{f}\) to simulate faithfully a Minsky machine: we make agents alternate between receptions and broadcasts so that, in the end, they have received and sent the same number of messages, implying that no messages have been lost along the cycle. We then simulate the machine by having an agent (the leader) choose transitions and the other ones simulate the counter values by memorizing a counter (1 or 2) and a binary value (0 or 1). For instance, an increment of counter 1, initiated by the leader, takes the form of a message propagated in the cycle until it finds an agent simulating counter 1 and having bit 0. This agent switches to 1 and sends an acknowledgment that eventually propagates back to the leader. See Appendix J for the full proof.
## 4 Cover in 1-Bnra
In this section, we study the restriction of Cover to protocols with one register. We will establish that this problem is actually NP-complete, meaning that having only one register per agent makes the problem significantly easier. We shall call BNRA with one register 1-BNRA. Due to space constraints, formal proofs are not included in this section; they can be found in Appendix K. Here we intend to present the key observations that allow us to abstract runs into short witnesses, leading to an NP algorithm for the problem.
In 1-BNRA, local tests are irrelevant. Moreover, thanks to the copycat principle, any received message can be broadcast with a fresh value; therefore, one can always circumvent '\(\neq\)' tests. In the end, our main challenge for 1-BNRA is '\(=\)' tests upon reception. For this reason, we look at clusters of agents that share the value in their registers.
Consider a run in which some agent \(a\) starts with value \(v\). If the run puts some agent \(a^{\prime}\neq a\) in some state \(q\) with value \(v\), then we can duplicate agent \(a^{\prime}\) to have an unlimited supply of agents in state \(q\) with value \(v\). Indeed, at some point in the run, \(a^{\prime}\) was in a state \(q^{\prime}\), executed a '\(\downarrow\)' transition, received a message with value \(v\), stored it in its register and went to a state \(q^{\prime\prime}\). Because we have an unlimited supply of agents in \(q^{\prime}\) (thanks to the copycat principle), we also have an unlimited supply of agents in \(q^{\prime\prime}\) with value \(v\). We can then make all those agents copy the transitions of \(a^{\prime}\), which gives us an unlimited supply of agents in state \(q\) with value \(v\). Agent \(a\) is unique and called _boss_, agents like \(a^{\prime}\) are clonable and called _followers_. For value \(v\), the only relevant information is _boss_ state, _i.e._, the state of the agent \(a\) that had \(v\) initially (this agent cannot be cloned), and the _clique_, _i.e._, the set of states reached by other agents with that value \(v\).
We thus define gangs, an abstraction of configurations of the 1-BNRA with respect to a value \(v\). A gang is composed of a boss state and a clique. The clique may only increase throughout the abstract runs we will consider, because we assume that we always have enough copies of each agent so that we can leave many agents with value \(v\) in every state they visited.
More formally, let \((Q,\mathcal{M},\Delta,q_{0})\) be a protocol. A _gang_ is a pair \(\mathsf{G}=(\mathsf{b},\mathsf{K})\in(Q\cup\{\bot\})\times 2^{Q}\).
The element \(\mathsf{b}\) is the _boss_ and the set \(\mathsf{K}\) is the _clique_ of the gang. Let \(\rho=\gamma_{0}\rightarrow\gamma_{1}\rightarrow\cdots\rightarrow\gamma_{k}\) be a run and \(v\in\mathsf{val}(\rho)\). The gang of value \(v\) in \(\rho\) is the gang \((\mathsf{b},\mathsf{K})\) such that:
* if there exists \(a_{0}\) an agent which has value \(v\) in \(\gamma_{0}\) and keeps it in all \(\gamma_{i}\) then \(\mathsf{b}\) is its state in \(\gamma_{k}\), otherwise \(\mathsf{b}:=\bot\),
* \(\mathsf{K}\) is the set of states \(q\) such that there is an agent in \(q\) with value \(v\) in some \(\gamma_{i}\).
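A direct transcription of this definition; a minimal sketch, assuming a run is given as a list of configurations, each mapping an agent name to a (state, value) pair for the single register of 1-BNRA (all identifiers are illustrative):

```python
def gang_of_value(run, v):
    """Gang (b, K) of value v along a run gamma_0 -> ... -> gamma_k.

    b is the final state of an agent holding v from gamma_0 onwards
    (None plays the role of the bottom symbol), and K collects every
    state visited by some agent while holding v.
    """
    gamma0, gammak = run[0], run[-1]
    b = None
    for agent in gamma0:
        if all(gamma[agent][1] == v for gamma in run):  # keeps v throughout
            b = gammak[agent][0]
            break
    K = {gamma[agent][0] for gamma in run for agent in gamma
         if gamma[agent][1] == v}
    return b, K
```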
In a concrete run of our system, gangs of distinct values may only interact with one another by covering states \(q\) which are needed by the other gang (in the form of a broadcast or of a '\(\downarrow\)' reception from \(q\)); therefore our abstraction also keeps track of the set of coverable states, which may only grow. However, it only needs to keep track of one gang at a time.
This leads us to a natural abstract semantics based on gangs. An abstract configuration consists of a set of states \(S\) (states covered so far by some agents) and a gang \((\mathsf{b},\mathsf{K})\) (the original owner of the value \(v\) we are keeping track of and the states reached by other agents with that value). If the original owner of \(v\) stores a new value we set \(\mathsf{b}=\bot\). Abstract transitions are defined by applying transitions of the protocol while assuming that we have unlimited supplies of agents in every state of \(S\) and of agents with value \(v\) in states of \(\mathsf{K}\). At any time, we can apply a _gang reset_, which maintains \(S\) but reinitializes \((\mathsf{b},\mathsf{K})\) to \((q_{0},\emptyset)\) (we track a new value). We define this abstraction formally and show its soundness and completeness in Appendix K. To bound the length of relevant abstract runs, we impose that \(S\) should grow between two gang resets (otherwise they reset to the same abstract configuration) and that there may be at most \(O(|Q|^{2})\) abstract steps between two resets (as \(\mathsf{K}\) can only increase and there are only \(|Q|+1\) possibilities for \(\mathsf{b}\)). This means that if there is an abstract run covering a state, there is one of size \(O(|Q|^{3})\), proving the NP upper bound. A schematic exploration loop is sketched below.
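The following sketch organizes this abstraction as a plain forward exploration. The one-step successor relation on abstract configurations is the part deferred to Appendix K, so it appears here as an assumed parameter `abstract_step`; the fuel between resets follows the \(O(|Q|^{2})\) bound above:

```python
def cover_abstract(q0, qf, states, abstract_step):
    """Explore abstract configurations (S, b, K) of the gang abstraction.

    S: frozenset of covered states; (b, K): tracked gang, with b = None
    standing for the bottom symbol. `abstract_step` is an assumed function
    yielding successor triples (S', b', K')."""
    budget = (len(states) + 1) * len(states)        # O(|Q|^2) steps between resets
    frontier = [(frozenset({q0}), q0, frozenset(), budget)]
    seen = set()
    while frontier:
        S, b, K, fuel = frontier.pop()
        if qf in S:
            return True
        if (S, b, K, fuel) in seen:
            continue
        seen.add((S, b, K, fuel))
        frontier.append((S, q0, frozenset(), budget))   # gang reset
        if fuel > 0:
            for S2, b2, K2 in abstract_step(S, b, K):
                frontier.append((S2, b2, K2, fuel - 1))
    return False
```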
The NP lower bound follows from a reduction from 3SAT (an agent \(a\) sends a sequence of messages representing a valuation, with its identifier, to other agents which broadcast it back, playing the role of external memory, allowing \(a\) to check the satisfaction of a 3SAT formula).
These results yield the main theorem of this section:
**Theorem**: _The coverability problem is NP-complete for protocols with one register._
## 5 Conclusion
We have established the decidability and \(\mathbf{F}_{\omega^{\omega}}\)-completeness of the coverability problem for BNRA, as well as the NP-completeness of the problem for 1-BNRA. One may want to enrich the transition systems of our protocols, for instance to pushdown automata. While this adds little difficulty in the general case (it suffices to extend Lemma 19 to pushdown protocols using a classical hill-cutting argument), the case of one register may be trickier. Another open problem is the complexity of the target problem with one register. While we can show that this is a decidable problem, its exact complexity is unclear. Finally, one may want to extend this model with inequality tests, as in classical related models such as data nets.
Acknowledgements.We are grateful to Arnaud Sangnier for encouraging us to work on BNRA, for the discussions about his work in [13] and in general for his valuable advice. We also thank Philippe Schnoebelen for the interesting discussion and Sylvain Schmitz for the exchange on complexity class \(\mathbf{F}_{\omega^{\omega}}\) and related topics. |
2306.11704 | Causal survival embeddings: non-parametric counterfactual inference
under censoring | Model-free time-to-event regression under confounding presents challenges due
to biases introduced by causal and censoring sampling mechanisms. This
phenomenology poses problems for classical non-parametric estimators like
Beran's or the k-nearest neighbours algorithm. In this study, we propose a
natural framework that leverages the structure of reproducing kernel Hilbert
spaces (RKHS) and, specifically, the concept of kernel mean embedding to
address these limitations. Our framework has the potential to enable
statistical counterfactual modeling, including counterfactual prediction and
hypothesis testing, under right-censoring schemes. Through simulations and an
application to the SPRINT trial, we demonstrate the practical effectiveness of
our method, yielding coherent results when compared to parallel analyses in
existing literature. We also provide a theoretical analysis of our estimator
through an RKHS-valued empirical process. Our approach offers a novel tool for
performing counterfactual survival estimation in observational studies with
incomplete information. It can also be complemented by state-of-the-art
algorithms based on semi-parametric and parametric models. | Carlos García-Meixide, Marcos Matabuena | 2023-06-20T17:34:17Z | http://arxiv.org/abs/2306.11704v1 | # Causal survival embeddings: non-parametric counterfactual inference under censoring
###### Abstract
Model-free time-to-event regression under confounding presents challenges due to biases introduced by causal and censoring sampling mechanisms. This phenomenology poses problems for classical non-parametric estimators like Beran's or the k-nearest neighbours algorithm. In this study, we propose a natural framework that leverages the structure of reproducing kernel Hilbert spaces (RKHS) and, specifically, the concept of kernel mean embedding to address these limitations. Our framework has the potential to enable statistical counterfactual modeling, including counterfactual prediction and hypothesis testing, under right-censoring schemes without assumptions on the regression model form. Through simulations and an application to the SPRINT trial, we demonstrate the practical effectiveness of our method, yielding coherent results when compared to parallel analyses in existing literature. We also provide a theoretical analysis of our estimator through an RKHS-valued empirical process. Our approach offers a novel tool for performing counterfactual survival estimation in observational studies with incomplete information. It can also be complemented by state-of-the-art algorithms based on semi-parametric and parametric models.
## 1 Introduction
Treatment effect estimation using survival endpoints is of key interest in statistics and biomedical applications. However, the increasing complexity of the processes structuring clinical research during the last decades tends to preclude collecting the right data for a particular clinical question of interest. As a consequence, observational studies are more and more present in scientific research, due to technical limitations that make randomization - the gold-standard experimental design practice to protect against unmeasured confounding - impossible.
What is more, extensive warnings have been raised in the literature about causal interpretations of hazard ratios (HR) estimated with Cox (1972)'s Proportional Hazards (PH) model, even under randomized treatment exposures (Hernan, 2010; Stensrud, Aalen, and Valberg, 2018). As a matter of fact, HRs merge at each instant the differences between arms that arise from the treatment effect with those created by selection bias - intuitively, as time goes by, fewer patients will remain in the control arm if the overall mortality risk differs between the two groups (equivalently, when the treatment is effective), leading to a comparison between unbalanced groups.
This makes clear the point that there is a need for designing new effect measures within survival analysis that have a causal interpretation and shed light into time dynamics, for instance time-varying treatment effects. Assume that gynaecologists aim to investigate the effect of an implanted medical device, such as a contraceptive method, on time-to-conception. It is reasonable to consider that the implant gradually deteriorates and it will cease to function as time goes by. Martinussen (2022) has shown that a Cox model fails
in this setup despite participants being randomized.
Causal treatment effect assertions are phrased via the potential outcome framework, the fundamental paradigm for statistical analysis of observational data - where treatment is not independent of the covariates (Neyman, 1923; Rubin, 1974). Numerous studies concentrate on computing the average treatment effect (ATE, see Imbens (2004)), which determines the discrepancy between the outcome distributions' means. In observational studies, the parameter ATE is interpreted within a framework (Pearl et al., 2000) suitable for causal inference as the outcome that would have been observed if the treatment had been randomly assigned.
Most empirical research on treatment effects typically focuses on estimating mean differences. However, there is also a longstanding interest in developing methods to estimate the impact of treatments on the entire outcome distribution. For example, if a specific treatment's impact is only observed in the outcome distribution's variance, the evaluation of average treatment effects will not be informative for clinical decision-making.
A straightforward generalization would be to focus on the difference between the survival functions of the potential outcomes directly on the absolute scale:
\[\mathrm{P}\left(\tilde{T}^{1}>t\right)-\mathrm{P}\left(\tilde{T}^{0}>t\right)\]
as it is a causally meaningful quantity (the tilde indicates counterfactual) that relies neither on non-collapsible parameters (Aalen, Cook, and Roysland, 2015) nor on quantities whose identifiability rests on unstable assumptions (see Section 5 in Martinussen (2022)).
Estimating potential outcome distributions directly is straightforward when treatment assignment is random. For instance, in survival analysis it would suffice to fit one Kaplan and Meier (1958) curve per arm. However, in observational studies (or randomized experiments with imperfect compliance), this type of analysis becomes challenging (Imbens and Rubin, 1997). In this line, distributional extensions of the ATE have been considered in the literature through multiple lenses. For example, Abadie (2002) considers a bootstrap strategy to estimate distributional treatment effects, while Muandet, Kanagawa, Saengkyongam, and Marukatat (2021) base their work on the theory of reproducing kernel Hilbert spaces (RKHSs).
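For concreteness, under randomization the contrast \(\mathrm{P}(\tilde{T}^{1}>t)-\mathrm{P}(\tilde{T}^{0}>t)\) can be estimated exactly this way; a self-contained numpy sketch, where `t_obs`, `delta` and `z` are assumed arrays of observed times, event indicators and treatment indicators:

```python
import numpy as np

def km_survival(time, event, grid):
    """Kaplan-Meier estimate of S(t) on a grid
    (event = 1 for an observed failure, 0 for censoring)."""
    surv = np.ones_like(grid, dtype=float)
    s = 1.0
    for t in np.unique(time[event == 1]):          # ordered event times
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv[grid >= t] = s
    return surv

# One curve per arm and their pointwise difference S_{T^1}(t) - S_{T^0}(t):
grid = np.linspace(0.0, 10.0, 200)
# diff = km_survival(t_obs[z == 1], delta[z == 1], grid) \
#      - km_survival(t_obs[z == 0], delta[z == 0], grid)
```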
In general, when data was not generated by a randomized control trial, empirical estimators of treatment effects rely either on the imputation of the so-called propensity score, which is the conditional probability of treatment assignment given the covariates (Rosenbaum and Rubin, 1983), on matching techniques (Zubizarreta, 2012), or on combinations of the previous approaches such as doubly robust estimators (Ding and Li, 2018). The fundamental technique here is named _inverse probability of treatment weighted_ estimation (Imbens, 2004), and consists, loosely speaking, of an empirical inner product between the summands of the unweighted estimator and the reciprocals of the estimated propensity scores. Traditional methods for estimating the latter involve parametric approaches like logistic regression, which rely on a model for treatment propensity. However, incorrect specifications of the model can generate extreme weights and make the estimator unreliable. To address this issue, nonparametric techniques have been proposed (Lee, Lessler, and Stuart, 2010). Nonetheless, large weights may still be unavoidable even when the propensity score model is correctly specified.
An alternative procedure to avoid weighting consists in decoupling the difference between _realized outcome_ - not potential - survival distributions in a particular form inherited from the econometrics literature (Oaxaca, 1973; Blinder, 1973). This decomposition strategy involves two terms: one driven by shifts in the covariate distributions between groups and the other accounting for the distributional treatment effect conditional on the treated arm. This provides a formal mechanism to analyze whether differences that arise between the observed outcome distributions of each arm truly come from the effectiveness of a drug, whether they are just due to the probabilistic structure of the population baseline characteristics, or both.
Interpreting the decomposition mentioned above (see Equation 3.1.1, Section 3) within an RKHS provides the primary motivation for the notion of _causal survival embedding_. The main intuition behind the idea is that these objects allow for further investigation of which mechanism originates potential differences between the observational survival functions of two treatment arms (i.e., Kaplan-Meier curves fitted to isolated data coming from each treatment indicator value). It also serves as a departure point for hypothesis testing of covariate distribution shifts across treatment arms. This would involve a pivot requiring the computation of the RKHS norm of differences of functions involving our estimator, constituting in some sense an aggregated measure. However, our estimator is also useful to study differences arising between observational distributions pointwise.
### Our results and contributions
We introduce a general non-parametric estimator under right-censoring of counterfactual survival functions based on statistical and machine learning techniques on RKHSs.
* Our estimation procedure is a model-free approach based on embedding counterfactual distributions in reproducing kernel Hilbert spaces. This extends the prior work of Muandet et al. (2021) to handle censored data-generation environments. In contrast to traditional non-parametric survival estimators like the Beran estimator (Gonzalez-Manteiga and Cadarso-Suarez, 1994), which typically rely on strong smoothness conditions such as differentiability of density functions, our approach does not require these assumptions (only mild conditions on the moments of the kernel function). This makes our method more flexible and applicable to a wider range of scenarios.
* In the setting of counterfactual inference, our proposal constitutes one of the first strategies in the literature that adjusts for confounding in non-parametric estimation of survival functions.
* Theoretically, we are able to provide asymptotic behavior guarantees for our estimator and compute its convergence rate by employing techniques from Empirical Process Theory. We utilize these techniques to deduce the Hadamard-differentiability of an operator that takes values in a reproducing kernel Hilbert space. While the Functional Delta Method (Van der Vaart, 2000) is widely known for its application to general operators in Banach spaces, the interplay between the geometry of RKHSs and von Mises calculus remains relatively unexplored in the literature, with only a few researchers delving into this aspect (Matabuena, Felix, Ditzhaus, Vidal, and Gude, 2023).
* Our procedure is computationally friendly, as the main bottleneck is the one classically present in estimating conditional mean embeddings, together with linear estimators such as Kaplan-Meier weights, efficiently implemented in well-known software packages such as survival.
* The present work not only advances the field but also paves the way for more advanced formulations of hypothesis testing (Gretton, Borgwardt, Rasch, Scholkopf, and Smola, 2012) and introduces new forms of clustering based on the concept of Maximum Mean Discrepancy (Matabuena, Vidal, Padilla, and Sejdinovic, 2022) in the counterfactual setting. Additionally, the flexibility of the RKHS framework enables us to incorporate complex variables, such as medical images or other functional data objects, as predictors. This expanded capability enhances the applicability of our approach to a wider range of domains and data types.
* We demonstrate the potential of our new models through their application in a relevant domain. Specifically, our models provide valuable insights that support the findings in Stensrud and Strohmaier (2017), which suggest that the association between treatment-induced diastolic blood pressure and cardiovascular outcomes may be confounded. This corroborates similar findings reported in the literature (Beddhu, Chertow, Cheung, Cushman, Rahman, Greene, Wei, Campbell, Conroy, Freedman, et al., 2018), which align with our own results. Furthermore, we evaluate the finite-sample properties of our estimator through a comprehensive simulation study, which further validates its effectiveness.
### Other related work
Different methods exist for treatment effect estimation in the presence of censoring. A general procedure that can be found across the literature consists of the following steps. First, the ATE is causally identified without censoring, and then a so-called _Censoring Unbiased Transformation_ (Rubin and van der Laan, 2007; Suzukawa, 2004) is used to create a pseudopopulation from the observed data in which the conditional mean survival time is the same as in the uncensored population. Second, methodology from semiparametric inference adapts the estimators to the censoring mechanisms (Tsiatis, 2006). Alternative estimators of treatment effects include standardizing expected outcomes to a given distribution of the confounders (Robins (1986)'s g methods), inverse probability of treatment weighted (IPTW) estimators, and doubly robust estimators (Ozenne, Scheike, Staerk, and Gerds, 2020), which combine the two latter lines.
The utilization of tools from the RKHS framework for right-censored data is relatively limited, with a primary focus on hypothesis testing (Matabuena and Padilla, 2019; Rindt, Sejdinovic, and Steinsaltz, 2020; Fernandez, Gretton, Rindt, and Sejdinovic, 2021), albeit outside the context of counterfactual inference. There have also been efforts to perform hypothesis testing using RKHSs in other incomplete information schemes, such as missing response (Matabuena, Felix, Garcia-Meixide, and Gude, 2022). Our methods extend and generalize these works by providing a unified framework to develop new hypothesis testing approaches within the realm of causal inference.
Importantly, Xue, Zhang, Chan, and Wong (2023) balance covariate functions over an RKHS to avoid directly modelling the propensity score for estimating causal effects.
### Organisation of the Paper
The paper is structured as follows. In Section 2 we rigorously introduce the formal elements that constitute the basis of our work, specifying the fundamental random variables playing a role, which of them are observable and which are not, how they interact to generate incomplete information, and the notation for their distribution functions. A self-contained description of the parameters of interest is presented in Section 3, opened by an introduction to the notion of counterfactual distributions in survival analysis. Then we define their counterparts in a Hilbert space, leading to the notion of counterfactual mean embedding. Naturally, in Section 4 we thoroughly develop the estimation theory needed in our setting, involving M-estimation on a space of functions that themselves take values in another space of functions. The asymptotic properties of our proposed estimator are investigated in Section 5, starting with preliminary definitions needed for its formalization, followed by sufficient conditions for consistency and a convergence rate for non-parametric counterfactual inference under censoring. Sections 6 and 7 are concerned with the results, respectively displaying the diminishing behaviour of variability as sample size increases via a simulation study and illustrating the usefulness of our methodology in a real application case related to cardiology. Finally, Section 8 closes the document with a discussion on the consequences of relaxing the censoring assumptions and other concerns regarding open directions. We close with a couple of appendices containing the mathematical proofs for the results of this paper, followed by an empirical check of the \(\sqrt{n}\) speed of convergence with underlying linear truth.
## 2 Preliminaries
We start with a collection of random variables in the potential outcomes framework:
\[\{(V^{0},V^{1},Z),\quad V\in\{\tilde{T},C,X\}\}\]
* \(\tilde{T}^{0},\tilde{T}^{1}\in(0,+\infty)\) are potential outcomes of survival times of interest.
* \(C^{0},C^{1}\in(0,+\infty)\) are potential outcomes of censoring times.
* \(X^{0},X^{1}\in\mathbb{R}^{p}\) are individual vectors of covariates, \(p\geq 1\).
* \(Z\in\{0,1\}\) are individual treatment assignment indicators.
We do not place a tilde over the potential outcomes of censoring times for the sake of simplicity; it is just not needed. \(F_{V}\) denotes the distribution function of each random variable \(V\). We use standard notation to denote joint and conditional distributions. Next, we define the _realized_ survival and censoring times respectively as
\[T=(1-Z)\tilde{T}^{0}+Z\tilde{T}^{1},\quad C=(1-Z)C^{0}+ZC^{1}\]
The _observed_ response is therefore
\[T^{*}:=\min\{T,C\}\]
We define \(T^{0}\) and \(C^{0}\) as random variables distributed according to \(F_{T^{0}}:=F_{T|Z=0}=F_{\tilde{T}^{0}|Z=0}\) and \(F_{C^{0}}:=F_{C|Z=0}\), respectively. The former identity is mathematically relevant because the conditional distribution of realized times coincides with the conditional distribution of counterfactual times.
The observed covariates are
\[X=(1-Z)X^{0}+ZX^{1}\]
with event indicator
\[\Delta=(1-Z)1(\tilde{T}^{0}\leq C^{0})+Z1(\tilde{T}^{1}\leq C^{1})\]
It is worth noting that
\[(1-Z)\min\{\tilde{T}^{0},C^{0}\}+Z\min\{\tilde{T}^{1},C^{1}\}=\min\{(1-Z) \tilde{T}^{0}+Z\tilde{T}^{1},(1-Z)C^{0}+ZC^{1}\}=\min\{T,C\}\]
which can be interpreted as commutativity between censoring and realizing.
In practice, we observe an i.i.d. sample
\[\{(T_{i}^{*},\Delta_{i},Z_{i},X_{i})\}_{i=1}^{n}\sim(T^{*},\Delta,Z,X)\]
which are draws containing incomplete information about the original random variables.
\(S=1-F\) denotes the corresponding survival function in all cases.
## 3 Population elements
### Counterfactual survival functions
A key consideration for understanding counterfactual inference is that
\[S_{\tilde{T}^{1}|Z=1}=S_{Z\tilde{T}^{1}+(1-Z)\tilde{T}^{0}|Z=1}=S_{T|Z=1}=:S_{T^{1}}\]
but
\[S_{\tilde{T}^{1}|Z=1}\neq S_{\tilde{T}^{1}}\]
because \(\tilde{T}^{0}\) and \(\tilde{T}^{1}\) may be dependent on \(Z\). To guarantee the identifiability of causal effects from observational data, we have to respect the assumption that the potential outcomes depend on the treatment only through the covariates; i.e., there is no hidden confounding. This hypothesis is known as _unconfoundedness_ or _ignorability_, a common hypothesis in observational studies. We can express it by asserting that the joint distribution satisfies the global Markov property with respect to an undirected graph in which \(X\) separates the potential outcomes from \(Z\), and we term it throughout this paper the _conditional exogeneity assumption_, which can be formally expressed as \(\tilde{T}^{0},\tilde{T}^{1}\perp\!\!\!\perp Z\mid X\) and \(C^{0},C^{1}\perp\!\!\!\perp Z\mid X\).
Survival functions of potential outcome times conditional on the treatment indicator are of interest because of their involvement in an expression that aims to break down the difference between both _realized_ distribution functions for \(Z=0,1\). This decomposition serves as one motivation for the foundational work of Chernozhukov, Fernandez-Val, and Melly (2013) on counterfactual distributions. The decoupling is the following:
\[S_{T^{1}}(t)-S_{T^{0}}(t)=F_{T^{0}}(t)-F_{T^{1}}(t)=F_{\tilde{T} ^{0}\mid Z=0}(t)-F_{\tilde{T}^{1}\mid Z=1}(t)=\] \[\underbrace{F_{\tilde{T}^{0}\mid Z=0}(t)-F_{\tilde{T}^{0}\mid Z=1 }(t)}_{(A)}+\underbrace{F_{\tilde{T}^{0}\mid Z=1}(t)-F_{\tilde{T}^{1}\mid Z=1 }(t)}_{(B)} \tag{3.1.1}\]
The equation comprises two terms, (A) and (B), which represent the distributional effect of covariate distributions and the distributional treatment effect on the treated, respectively. The difference between the realized outcome distributions can be attributed to either or both of these terms, and their estimation is valuable in understanding the origin of the difference in observed outcome distributions. If (A) is determined to be small with respect to (B), then the difference between the observed outcome distributions is caused by the distributional difference on the treated (B). Conversely, in the reciprocal configuration, the difference between the observed outcome distributions is due to (A), which is caused by distributional differences between the covariates in each group and not by the effects of the treatment. In the econometrics jargon, (A) quantifies a composition effect due to differences in characteristics and (B) stands for differences in the response structure (Chernozhukov et al., 2013).
It is important to note again that \(S_{T^{1}}(t)-S_{T^{0}}(t)\neq S_{\tilde{T}^{1}}(t)-S_{\tilde{T}^{0}}(t)\). The latter accounts for the effects of treatments 0 and 1, but our approach delves into what is driving the first one to be different. Observed outcome distributions are biased approximations to potential outcome distributions if treatment assignment is not randomized (i.e., if \(X\) and \(Z\) are not independent).
We now show how these distributions of potential outcomes appearing in distributional causal effects relate to the notion of counterfactual distribution, which we define below. In the following, \(F_{T^{0}\mid X^{0}=x}(\cdot)\) and \(F_{T^{1}\mid X^{1}=x}(\cdot)\) are the conditional distribution functions that describe the stochastic assignment of survival times to people with characteristics \(x\), conditional on \(Z=0\) and \(Z=1\) respectively. We use the relation \(S=1-F\) interchangeably.
**Definition 1** (Counterfactual distributions, Chernozhukov et al. (2013)).: _Whenever support\((F_{X^{1}})\)\(\subseteq\) support\((F_{X^{0}})\),_
\[F_{T\langle 0\mid 1\rangle}(\cdot):=\int F_{T^{0}\mid X^{0}=x}(\cdot)\mathrm{d}F _{X^{1}}(x)\]
**Lemma 1** (Chernozhukov et al. (2013); Muandet et al. (2021)).: _In general, \(S_{T\langle 0\mid 0\rangle}=S_{\tilde{T}^{0}\mid Z=0}\) and \(S_{T\langle 1\mid 1\rangle}=S_{\tilde{T}^{1}\mid Z=1}\). Moreover, if conditional exogeneity holds and support\((F_{X^{1}})=\) support\((F_{X^{0}})\) then we also have \(S_{T\langle 0\mid 1\rangle}=S_{\tilde{T}^{0}\mid Z=1}\) and \(S_{T\langle 1\mid 0\rangle}=S_{\tilde{T}^{1}\mid Z=0}\)_
Proof.: See Lemmas 3 and 4 in Muandet et al. (2021).
If the assumptions of Lemma 1 are fulfilled,
\[(A)=S_{\langle 0|1\rangle}(t)-S_{\langle 0|0\rangle}(t)=\int F_{T^{0}|X^{0}}(t,x) dF_{X^{0}}(x)-\int F_{T^{0}|X^{0}}(t,x)dF_{X^{1}}(x)\]
It now becomes clearly visible that (A) is due to a shift between the covariate distributions \(F_{X^{0}}\) and \(F_{X^{1}}\), as the only discrepancy between both integrals on the right-hand side originates from the measures. Meanwhile, as explained, (B) quantifies a treatment effect conditional on the intensive treatment arm.
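To fix ideas, term (A) can be approximated by plain Monte Carlo whenever \(F_{T^{0}|X^{0}}\) is known; a toy sketch with an assumed exponential conditional law and a Gaussian covariate shift (all distributional choices are purely illustrative, not part of the method):

```python
import numpy as np

rng = np.random.default_rng(0)

def F_T0_given_x(t, x):
    """Assumed toy conditional law: exponential with rate exp(0.5 x)."""
    return 1.0 - np.exp(-np.exp(0.5 * x) * t)

x0 = rng.normal(loc=0.0, size=5000)   # draws from F_{X^0} (control)
x1 = rng.normal(loc=1.0, size=5000)   # draws from F_{X^1} (treated)

t = 1.0
F_00 = F_T0_given_x(t, x0).mean()     # F_{T<0|0>}(t) = int F dF_{X^0}
F_01 = F_T0_given_x(t, x1).mean()     # F_{T<0|1>}(t) = int F dF_{X^1}
A = F_00 - F_01                       # term (A): pure covariate-shift effect
```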
### Kernel embeddings
Let \(l:(0,+\infty)\times(0,+\infty)\to\mathbb{R}\) be a symmetric positive semidefinite function (_kernel_) and \(\mathcal{H}\) its associated RKHS (Aronszajn, 1950). We assume for the next couple of definitions that \(T^{0}\) satisfies the integrability condition \(\int_{0}^{\infty}\sqrt{l(t,t)}\,\mathrm{d}F_{T^{0}}(t)<\infty\), \(\text{support}(F_{X^{1}})\subseteq\text{support}(F_{X^{0}})\) and conditional exogeneity. We start with the following definition:
**Definition 2** (Conditional mean embedding, Song, Huang, Smola, and Fukumizu (2009)).: \[\mu_{T^{0}|X^{0}=x}(\cdot):=\mathbb{E}_{T^{0}|X^{0}}\left[l(T^{0},\cdot)\mid X ^{0}=x\right]=\int_{0}^{\infty}l(\cdot,t)\mathrm{d}F_{T^{0}|X^{0}=x}(t),\quad x \in\mathbb{R}^{p}\]
See Muandet, Fukumizu, Sriperumbudur, and Scholkopf (2017) for an extensive survey on the interpretation, estimation and properties of kernel- conditional and mean- embeddings. We are now set to reach an important ingredient of our paper:
**Definition 3** (Counterfactual mean embedding, Muandet et al. (2021)).: \[\mu_{T\langle 0|1\rangle}(\cdot)=\int_{\mathbb{R}^{p}}\mu_{T^{0}|X^{0}=x}( \cdot)\mathrm{d}F_{X^{1}}(x)\in\mathcal{H}\]
It is easy to see, using the iterated expectations lemma and conditional exogeneity, that \(\mu_{T\langle 0|1\rangle}(\cdot)=\int_{0}^{\infty}l(\cdot,t)\mathrm{d}F_{T\langle 0|1\rangle}(t)\). The previous definitions remain valid switching \(0\) and \(1\) and vice versa.
### Interpretation of kernel mean embeddings as depth bands
#### 3.3.1 Depth bands
**Definition 4**.: _A statistical depth measure is a mapping \(D:\mathcal{Y}\times\mathcal{P}\to[0,\infty)\), where \(\mathcal{P}\) is the space of probability measures over \(\mathcal{Y}\), that satisfies the following properties:_
* _Property P-1: Distance invariance of_ \(D\)_._
* _Property P-2: Maximality of_ \(D\) _at the center._
* _Property P-3: Monotonicity of_ \(D\) _relative to the deepest point._
* _Property P-4: Upper semi-continuity of_ \(D\) _in any function_ \(y\in\mathcal{Y}\)_._
* _Property P-5: Receptivity of_ \(D\) _to the convex hull width across the domain._
* _Property P-6: Continuity of_ \(D\) _in_ \(\mathcal{P}\)
\(h\)-integrated depth band measures possess the desirable property of being affine invariant. We introduce the concept of an \(h\)-depth band functional for any \(f\in\mathcal{Y}\), defined as follows:
\[D(f,P_{Y})=\int_{\mathcal{Y}}D_{\kappa_{1}}\left(\langle f,v\rangle;P_{v}\right) d\eta(v), \tag{3.3.1}\]
Here, \(D_{\kappa_{1}}:\mathbb{R}\times\mathcal{P}(\mathbb{R})\to[0,\infty)\) represents a one-dimensional \(h\)-depth measure using \(\kappa_{1}:[0,\infty)\to[0,\infty)\), and \(P_{v}\in\mathcal{P}(\mathbb{R})\) corresponds to the distribution of \(\langle f,v\rangle\), where \(f\sim P_{Y}\) and \(v\in\mathcal{Y}\). The measure \(\eta\) is defined on \(\mathcal{Y}\) (identified with its dual using the Riesz representation theorem). Importantly, it should be noted that the \(h\)-depth band remains invariant under affine transformations.
Now, we introduce the concept of \(h\)-depth:
**Definition 5**.: _Let \(\mathcal{Y}\) be a vector space equipped with a norm \(\|\cdot\|\), \(P_{Y}\in\mathcal{P}(\mathcal{Y})\) and \(\kappa:[0,\infty)\to[0,\infty)\) be a continuous, non-increasing function with \(\kappa(0)>0\) and \(\lim_{t\to\infty}\kappa(t)=0\). The \(h\)-depth of \(y\in\mathcal{Y}\) with respect to \(P_{Y}\) is defined as_
\[D_{\kappa}(y;P_{Y})=\mathbb{E}[\kappa(\|y-Y\|)]. \tag{3.3.2}\]
#### 3.3.2 Kernel mean embeddings and integrated depth bands
A natural connection arises (Wynne and Nagy, 2021) between \(h\)-depth and kernel mean embeddings generated by an invariant kernel.
**Theorem 1**.: _Let \(\mathcal{Y}\) be a normed vector space and let \(k(x,y)=\kappa(\|x-y\|)\) be a kernel on \(\mathcal{Y}\) with \(\kappa\) satisfying the conditions of Definition 5. Then \(D_{\kappa}(y;P)=\phi_{k}P(y)\), i.e., the \(h\)-depth of \(y\) coincides with the kernel mean embedding of \(P\) under \(k\) evaluated at \(y\)._
It is natural to ask what conditions are needed on \(\kappa\) to ensure that the corresponding function \(k\) is indeed a kernel. The following theorem contains the key information.
**Theorem 2**.: _Let \(\mathcal{Y}\) be a separable Hilbert space, \(\kappa:[0,\infty)\to[0,\infty)\), and \(k(x,y)=\kappa(\|x-y\|)\). Then the following are equivalent:_
1. \(k\) _is a kernel._
2. _There exists a finite Borel measure_ \(\mu\) _on_ \([0,\infty)\) _such that_ \(k(x,y)=\int_{0}^{\infty}e^{-t^{2}\|x-y\|^{2}}d\mu(t)\)_._
3. \(\kappa(\sqrt{\cdot})\) _is completely monotone._
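In practice, Theorem 1 means that evaluating an empirical kernel mean embedding at a point is the same computation as its empirical \(h\)-depth; a minimal numpy illustration with \(\kappa(r)=e^{-r^{2}}\), a choice admissible by Theorem 2:

```python
import numpy as np

def h_depth(y, sample, kappa=lambda r: np.exp(-r ** 2)):
    """Empirical h-depth of y w.r.t. a sample from P_Y: the mean of
    kappa(||y - Y_i||), i.e. the empirical mean embedding at y."""
    return kappa(np.linalg.norm(sample - y, axis=1)).mean()

rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 2))
central = h_depth(np.zeros(2), cloud)             # deep, central point
outlying = h_depth(np.array([4.0, 4.0]), cloud)   # shallow, outlying point
assert central > outlying
```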
## 4 Empirical estimates of causal survival embeddings
For the sake of generality, we denote by \(\mathcal{X}\) the covariate space and by \(\mathcal{T}\) the target space; in our real application case we will use \(\mathcal{X}=\mathbb{R}^{9}\) and \(\mathcal{T}=(0,+\infty)\). We first discuss how to estimate \(\mu_{T^{0}|X^{0}}=\mathbb{E}_{T^{0}|X^{0}}\left[l\left(T^{0},\cdot\right)|X^{0}\right]:\mathcal{X}\longrightarrow\mathcal{H}\) (Park and Muandet, 2020) because, upon obtaining \(\hat{\mu}_{T^{0}|X^{0}=x}\) (simply done by isolating the data from the control group), estimating counterfactual mean embeddings reduces to taking averages with respect to the covariates in the treatment group: \(\hat{\mu}_{T\langle 0|1\rangle}:=\frac{1}{m}\sum_{j=1}^{m}\hat{\mu}_{T^{0}|X^{0}=X^{1}_{j}}\), as suggested by Definition 3. We emphasize that for estimation of the _conditional_ mean embedding we use data in the _control_ group. In this section we focus on \(T^{0}\) and \(X^{0}\) but, again, the same theory holds replacing \(0\) by \(1\) without loss of generality when it comes to estimating
\(\mu_{T\langle 1|0\rangle}\). We start by noticing that the map \(x\mapsto\mu_{T^{0}|X^{0}=x}\) takes values in the Hilbert space \(\mathcal{H}\). This motivates the following definition:
**Definition 6** (Vector-valued RKHS, Carmeli, De Vito, and Toigo (2006)).: _An \(\mathcal{H}\)-valued RKHS on \(\mathcal{X}\) is a Hilbert space \(\mathcal{F}\) such that 1) the elements of \(\mathcal{F}\) are functions \(\mathcal{X}\rightarrow\mathcal{H}\); 2) for all \(x\in\mathcal{X},\exists C_{x}>0\) such that \(\left\|F(x)\right\|_{\mathcal{H}}\leq C_{x}\|F\|_{\mathcal{F}}\) for all \(F\in\mathcal{F}\)._
In the traditional framework of RKHSs formed by real-valued functions, a very useful aspect is that it is possible to evaluate functions belonging to the space by taking inner products with very special elements therein: the collection \(\left\{k(\cdot,x):x\in\mathcal{X}\right\}\), in virtue of Riesz's Representation Theorem. \(k\) is the so-called _kernel_ function uniquely determining \(\mathcal{H}\). Looking for a surrogate of the notion of kernel in \(\mathcal{H}\)-valued RKHSs, we arrive at the following definition. We call \(\mathcal{L}(\mathcal{H})\) the space of bounded linear operators from \(\mathcal{H}\) to \(\mathcal{H}\).
**Definition 7** (\(\mathcal{H}\)-kernel, Carmeli et al. (2006)).: _A \(\mathcal{H}\)-kernel of positive type on \(\mathcal{X}\times\mathcal{X}\) is a map \(\Gamma:\mathcal{X}\times\mathcal{X}\rightarrow\mathcal{L}(\mathcal{H})\) such that \(\forall N\in\mathbb{N},\forall x_{1},\ldots,x_{N}\in\mathcal{X}\) and \(\forall c_{1},\ldots,c_{N}\in\mathbb{R}\), \(\sum_{i,j=1}^{N}c_{i}c_{j}\left\langle\Gamma\left(x_{j},x_{i}\right)(h),h\right\rangle_{\mathcal{H}}\geq 0\quad\forall h\in\mathcal{H}\)._
If \(\Gamma\) is an \(\mathcal{H}\)-kernel in the sense of the previous definition, there exists a unique (up to isometry) RKHS, with \(\Gamma\) as its reproducing kernel (Micchelli and Pontil, 2005), satisfying: 1) for any \(x,x^{\prime}\in\mathcal{X},h,h^{\prime}\in\mathcal{H}\) and \(F\in\mathcal{F}\), \(\left\langle F(x),h\right\rangle_{\mathcal{H}}=\left\langle F,\Gamma(\cdot,x) (h)\right\rangle_{\mathcal{F}}\) and 2) \(\left\langle h,\Gamma\left(x,x^{\prime}\right)(h^{\prime})\right\rangle_{ \mathcal{H}}=\left\langle\Gamma(\cdot,x)(h),\Gamma\left(\cdot,x^{\prime} \right)(h^{\prime})\right\rangle_{\mathcal{F}}\)
Now we can pose the estimation of conditional mean embeddings as risk minimization of the theoretical loss (Grunewalder, Lever, Baldassarre, Patterson, Gretton, and Pontil, 2012):
\[\tilde{R}(F)=\mathbb{E}_{X^{0}}\left[\left\|\mu_{T^{0}\left|X^{0} \right.}(X^{0})-F(X^{0})\right\|_{\mathcal{H}}^{2}\right],\quad F\in\mathcal{F}\]
where \(\mathcal{F}\) is a vector-valued RKHS of functions \(\mathcal{X}\rightarrow\mathcal{H}\). For simplicity, we endow \(\mathcal{F}\) with a kernel \(\Gamma\left(x,x^{\prime}\right)=k\left(x,x^{\prime}\right)\) Id, where \(k\) is a scalar kernel on \(\mathcal{X}\) and Id: \(\mathcal{H}\rightarrow\mathcal{H}\) is the identity map on \(\mathcal{H}\). We have in virtue of generalised conditional Jensen's inequality (Perlman, 1974) and iterated expectations lemma:
\[\tilde{R}(F) =\mathbb{E}_{X^{0}}\left[\left\|\mathbb{E}_{T^{0}\left|X^{0} \right.}\left[l(T^{0},\cdot)-F(X^{0})\mid X^{0}\right]\right\|_{\mathcal{H}}^ {2}\right]\leq\mathbb{E}_{X^{0}}\mathbb{E}_{T^{0}\left|X^{0}\right.}\left[ \left\|l(T^{0},\cdot)-F(X^{0})\right\|_{\mathcal{H}}^{2}\mid X^{0}\right]\] \[=\mathbb{E}_{T^{0}X^{0}}\left[\left\|l(T^{0},\cdot)-F(X^{0}) \right\|_{\mathcal{H}}^{2}\right]=:R(F)\]
\(R(F)\) acts as a surrogate theoretical risk that admits an empirical version under right-censoring.
Now the problem is that we do not have access to a sample from the joint distribution of \((T^{0},X^{0})\) that would allow us to estimate the expectation involved in \(R(F)\) because of censoring: we instead observe data from \(\min\{T^{0},C^{0}\}\). Let us further develop the measure
with respect to which the expectation in \(R(F)\) is taken:
\[\begin{aligned} dF_{T^{0}X^{0}}(t,x)&=P(T^{0}\in dt,X^{0}\in dx)=P(T\in dt,X\in dx|Z=0)\\ &=\frac{P(T\in dt,X\in dx,Z=0)}{P(Z=0)}=\frac{P(T\in dt,X\in dx,Z=0)\,P(\Delta=1|T=t,X=x,Z=0)}{P(Z=0)\,P(\Delta=1|T=t,X=x,Z=0)}\\ &=\frac{P(\Delta=1,T\in dt,X\in dx,Z=0)}{P(Z=0)\,P(\Delta=1|T=t,X=x,Z=0)}=\frac{P(\Delta=1,T\in dt,X\in dx|Z=0)}{P(\Delta=1|T=t,X=x,Z=0)}\\ &=\frac{dF_{0}^{(*)}(t,x)}{G_{0}(t,x)}\end{aligned}\]
where \(G_{0}(t,x)=P(\Delta=1|T=t,X=x,Z=0)\) is the conditional probability that an observation is uncensored given that the event time is \(t\) and the covariates are \(x\) in the control population
and \(F_{0}^{(*)}(t,x)=P(\Delta=1,T\leq t,X\leq x|Z=0)\) is the law of uncensored observations in the control population (Stute, 1996; Gerds, Beyersmann, Starkopf, Frank, van der Laan, and Schumacher, 2017).
Note that if we assume \(C\perp\!\!\!\perp T|Z\) and \(\Delta\perp\!\!\!\perp X|T,Z\) then
\[G_{0}(t,x)=P(\Delta=1|T=t,X=x,Z=0)=P(\Delta=1|T=t,Z=0)=P(C>t|Z=0)\]
and therefore \(G_{0}(t,x)=G_{0}(t)\) equals one minus the marginal law of the censoring times conditional on \(Z=0\).
Let \(\left(X_{1},T_{1}^{*}\right),\ldots,\left(X_{n},T_{n}^{*}\right)\) be i.i.d. observations from the control group \(Z=0\). By plugging in an estimate \(\hat{G}_{0}(t,x)\) and the empirical measure
\[d\hat{F}_{0}^{(*)}(t,x)=\frac{1}{n}\sum_{i=1}^{n}\Delta_{i}\delta_{T_{i}^{*}}( t)\delta_{X_{i}}(x)\]
we arrive at a regularized empirical risk minimization problem:
\[\hat{R}_{\varepsilon,n}(F):=\frac{1}{n}\sum_{i=1}^{n}\frac{\Delta_{i}}{\hat{ G}_{0}(T_{i}^{*},X_{i})}\left\|l\left(T_{i}^{*},\cdot\right)-F\left(X_{i} \right)\right\|_{\mathcal{H}}^{2}+\varepsilon\|F\|_{\mathcal{F}}^{2}\]
In the sequel, we write
\[W_{i}:=\frac{\Delta_{i}}{\hat{G}_{0}(T_{i}^{*},X_{i})}.\]
We denote its minimizer by \(\hat{\mu}_{\varepsilon,n}\),
\[\hat{\mu}_{\varepsilon,n}:=\underset{F\in\mathcal{F}}{\text{argmin}}\ \ \widehat{R}_{\varepsilon,n}(F).\]
This is the final estimator for the conditional mean embedding.
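Under the censoring assumptions above (\(C\perp\!\!\!\perp T|Z\) and \(\Delta\perp\!\!\!\perp X|T,Z\)), \(\hat{G}_{0}\) reduces to a reverse Kaplan-Meier fit on the control arm and the weights \(W_{i}\) follow directly; a sketch, where `t_obs0` and `delta0` are assumed control-arm arrays and tie conventions are ignored for brevity:

```python
import numpy as np

def reverse_km(time, event):
    """Reverse Kaplan-Meier estimate of G_0(t) = P(C > t): censoring
    (event == 0) is treated as the event of interest (Gill, 1980)."""
    cens_times = np.unique(time[event == 0])
    def G(t_query):
        s = 1.0
        for t in cens_times:
            if t > t_query:
                break
            s *= 1.0 - np.sum((time == t) & (event == 0)) / np.sum(time >= t)
        return s
    return G

# IPCW weights W_i = Delta_i / G_0_hat(T_i^*) on the control arm:
# G0 = reverse_km(t_obs0, delta0)
# W = delta0 / np.maximum(np.array([G0(t) for t in t_obs0]), 1e-12)
```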
**Lemma 2**.: _A minimizer of the empirical risk \(\hat{R}_{\varepsilon,n}(F)\) is unique and can be expressed as \(\sum_{j=1}^{n}\Gamma\left(\cdot,X_{j}\right)(c_{j})\), where the coefficients \(\{c_{j}:j=1,\ldots,n\}\subseteq\mathcal{H}\) are the unique solution of the linear equations \(\sum_{j=1}^{n}\left(W_{i}\Gamma\left(X_{i},X_{j}\right)+n\varepsilon\delta_{ij}\right)(c_{j})=W_{i}h_{i}\), \(i=1,\ldots,n\), with \(h_{i}:=l(T_{i}^{*},\cdot)\)._
Proof.: See Appendix.
Choosing \(\Gamma\left(x,x^{\prime}\right)=k\left(x,x^{\prime}\right)\mathrm{Id}\) (see Grunewalder et al. (2012) for more details on why this is a sensible election) we conclude
\[WH=(WK+n\varepsilon I)C\Longleftrightarrow C=(WK+n\varepsilon I)^{-1}WH\]
where \(K_{ij}=k(X_{i},X_{j})\), \(W=\mathrm{diag}(W_{1},\ldots,W_{n})\), \(H=(h_{1}\ldots h_{n})^{\prime}\), \(C=(c_{1}\ldots c_{n})^{\prime}\).
Now the _conditional_ mean embedding evaluated on the covariates of the treated sample \((X_{1}^{1},\ldots,X_{m}^{1})\) is \((\hat{F}(X_{1}^{1})\ldots\hat{F}(X_{m}^{1}))=(\sum_{j=1}^{n}k(X_{1}^{1},X_{j} )c_{j}\ldots\sum_{j=1}^{n}k(X_{m}^{1},X_{j})c_{j})=C^{\prime}\tilde{K}\)
where \(\tilde{K}_{ij}=k(X_{i},X_{j}^{1})\).
The _counterfactual_ mean embedding is computed by averaging these evaluations: \(\hat{\mu}_{T\langle 0|1\rangle}(\cdot)=C^{\prime}\tilde{K}1_{m}\), where \(1_{m}\) is a vector of all ones divided by \(m\).
By recovering the expression of \(C\) previously derived we have a closed expression for the _counterfactual_ mean embedding estimator
\[\hat{\mu}_{T\langle 0|1\rangle}(\cdot)=((WK+n\varepsilon I)^{-1}WH)^{\prime} \tilde{K}1_{m}=H^{\prime}W(KW+n\varepsilon I)^{-1}\tilde{K}1_{m}\]
and its row-shaped version (visually, resembles better to a function of time) is
\[\hat{\mu}^{\prime}_{T\langle 0|1\rangle}(\cdot)=1^{\prime}_{m}\tilde{K}^{ \prime}(WK+n\varepsilon I)^{-1}WH\]
It is important to bear in mind that \(H=(l(T_{1}^{*},\cdot),\cdots,l(T_{n}^{*},\cdot))^{\prime}\). We can always evaluate \(H_{ij}=l(T_{i}^{*},t_{j})\) on a grid of time points \(t_{1},\ldots,t_{N}\), as in the sketch below.
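Putting the pieces together, the closed form above is a few lines of linear algebra; a sketch with Gaussian kernels for both \(k\) and \(l\) (kernel choices, bandwidths and the regularization value are illustrative, and `W` denotes the weight vector constructed earlier):

```python
import numpy as np

def gram(A, B, sigma=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def counterfactual_embedding(X0, T0, W, X1, t_grid, eps=1e-3):
    """Row version 1_m' Ktilde' (W K + n eps I)^{-1} W H of the estimator,
    returning mu_hat_{T<0|1>} evaluated on t_grid."""
    n = len(T0)
    K = gram(X0, X0)                          # K_ij = k(X_i, X_j), control arm
    Kt = gram(X0, X1)                         # Ktilde_ij = k(X_i, X_j^1)
    H = gram(T0[:, None], t_grid[:, None])    # H_ij = l(T_i^*, t_j)
    C = np.linalg.solve(W[:, None] * K + n * eps * np.eye(n), W[:, None] * H)
    return np.full(len(X1), 1.0 / len(X1)) @ Kt.T @ C

# Example call (arrays assumed):
# mu_row = counterfactual_embedding(X0, T0, W, X1, np.linspace(0.0, 10.0, 100))
```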
## 5 Asymptotics of causal survival embeddings
### Population and empirical covariance operators
This section comprises the main theoretical contribution of our work. Let us start with a couple of definitions needed to re-express parameters and their estimators in a way more convenient for the proofs.
**Definition 8** (Fukumizu, Song, and Gretton (2013)).: _Let \(\mathcal{G}\) be the RKHS associated with \(k\) on \(\mathcal{X}\), and let \(\mathcal{C}_{TX}:\mathcal{G}\rightarrow\mathcal{H}\) be the covariance operator of the random variables \(X^{0}\) and \(T^{0}\), defined as_
\[\mathcal{C}_{TX}f=\int l(\cdot,t)f(x)dF_{X^{0}T^{0}}(x,t)=\mathbb{E}_{X^{0}T^{ 0}}\left[l\left(\cdot,T^{0}\right)f\left(X^{0}\right)\right],\quad f\in \mathcal{G}\]
Substituting the measure \(dF_{X^{0}T^{0}}=\frac{dF_{0}^{(*)}}{G_{0}}\) by the empirical counterparts \(\hat{F}_{0}^{(*)}\) and \(\hat{G}_{0}\), we obtain:
**Definition 9** (Adapted from Muandet, Kanagawa, Saengkyongam, and Marukatat (2021)).: _Let \(\left(X_{1},T_{1}^{*}\right),\ldots,\left(X_{n},T_{n}^{*}\right)\) be i.i.d. observations from the control group \(Z=0\). We define:_
\[\widehat{\mathcal{C}}_{XX}^{*}f:=\frac{1}{n}\sum_{i=1}^{n}W_{i}k\left(\cdot,X_{ i}\right)f\left(X_{i}\right),\quad\widehat{\mathcal{C}}_{TX}^{*}f=\frac{1}{n} \sum_{i=1}^{n}W_{i}l\left(\cdot,T_{i}^{*}\right)f\left(X_{i}\right),\quad f\in \mathcal{G}\]
The following result shows that we can write \(\hat{\mu}_{T\langle 0|1\rangle}\) using the empirical covariance operators.
**Lemma 3**.: _Let \(\hat{\mu}_{X_{1}}\) be the kernel mean embedding estimated with the sample covariates from the treated population. Then we have_
\[\hat{\mu}_{T\langle 0|1\rangle}=\widehat{\mathcal{C}}_{TX}^{*}\left( \widehat{\mathcal{C}}_{XX}^{*}+\varepsilon I\right)^{-1}\hat{\mu}_{X_{1}}.\]
Proof.: See Appendix.
### Assumptions
In the following, we introduce the assumptions needed for establishing consistency of our proposed estimator.
1. \(\sup_{x\in\mathcal{X}}k(x,x)<\infty\) and \(\sup_{t\in\mathcal{T}}l(t,t)<\infty\). This assumption is satisfied by Gaussian kernels and helps in conjunction with the following general inequality for RKHSs. Suppose that \(f\in\mathcal{G}\). Then for \(x\in\mathcal{X}\) \[f(x)=\left\langle k(\cdot,x),f\right\rangle_{\mathcal{G}}\leq\left\|k(\cdot,x )\right\|_{\mathcal{G}}\left\|f\right\|_{\mathcal{G}}\] by virtue of the Cauchy-Schwarz inequality. Now noting that \(\left\|k(\cdot,x)\right\|_{\mathcal{G}}^{2}=\left\langle k(\cdot,x),k(\cdot, x)\right\rangle_{\mathcal{G}}=k(x,x)\), we finally have \[f(x)\leq\sqrt{k(x,x)}\left\|f\right\|_{\mathcal{G}}\] and therefore \[\left\|f\right\|_{\infty}\leq\sup_{x\in\mathcal{X}}\sqrt{k(x,x)}\left\| f\right\|_{\mathcal{G}}.\] As a particular case, \[k(x,x^{\prime})\leq\sqrt{k(x,x)}\sqrt{k(x^{\prime},x^{\prime})}.\] Moreover, as all probability measures are finite, \(k\) is integrable with respect to any probability measure by virtue of Hölder's inequality.
2. The RKHS \(\mathcal{G}\) of \(k\) is dense in \(L_{2}\left(F_{X^{0}}\right)\). This is also satisfied by Gaussian kernels (Steinwart and Christmann, 2008).
3. The distribution \(F_{X^{1}}\) is absolutely continuous with respect to \(F_{X^{0}}\), with the Radon-Nikodym derivative \(g:=\mathrm{d}F_{X^{1}}/\mathrm{d}F_{X^{0}}\) satisfying \(g\in L_{2}\left(F_{X^{0}}\right)\). By this we formally express that the marginal density functions of \(F_{X^{0}}\) and \(F_{X^{1}}\) should not be very different. It also implies the support equality condition used throughout Section 3.
4. \(\left(T_{1}^{*},\Delta_{1},0,X_{1}\right),\ldots,\left(T_{n}^{*},\Delta_{n},0,X_{n}\right)\) are i.i.d. observations from the control group, and \(X_{1}^{1},\ldots,X_{m}^{1}\) are i.i.d. observations of the random variable \(X^{1}\).
5. \(C\perp\!\!\!\perp T|Z\) (independence) and \(\Delta\perp\!\!\!\perp X|T,Z\) (conditional independence of the censoring indicator and the covariates given the realized time). This automatically implies \[G_{0}(t,x)=P(C>t|T=t,X=x,Z=z)=P(C>t|T=t)=P(C>t)=:G_{0}(t)\] In this case, it is possible to estimate \(G_{0}(t)\) using the marginal reverse Kaplan-Meier estimator: flipping the event indicators and using the canonical Kaplan-Meier estimator (Gill, 1980); see the sketch after this list. See Stute (1993, 1996a) for further comments on these two assumptions.
6. \[\frac{1}{G_{0}^{2}}<\infty\quad\text{and}\quad\frac{1}{\hat{G}_{0}^{2}}<\infty\] This ensures that population and empirical covariance operators are well defined as Bochner integrals (Dinculeanu, 2000).
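As a small illustration of the reverse Kaplan-Meier estimation mentioned in v.), the following sketch computes the weights \(W_{i}=\Delta_{i}/\hat{G}_{0}(T_{i}^{*})\); the function name is ours and the survival package is assumed.

```r
# Sketch: weights W_i = Delta_i / G0hat(T_i^*) with G0hat the reverse
# Kaplan-Meier estimate, obtained by flipping the event indicators.
library(survival)

reverse_km_weights <- function(Tstar, Delta) {
  Gfit <- survfit(Surv(Tstar, 1 - Delta) ~ 1)      # KM of the censoring times
  # Evaluate the right-continuous step function G0hat at each observed time
  G0 <- stepfun(Gfit$time, c(1, Gfit$surv))(Tstar)
  Delta / pmax(G0, .Machine$double.eps)            # guard against division by 0
}
```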
### Consistency and convergence rate
Our main theoretical contribution is the convergence rate of the stochastic error in RKHS norm, given in Theorem 3. Once this is established, we combine our finding with results from the literature to prove consistency in Corollary 1 and to obtain the final convergence rate in Corollary 2.
**Theorem 3**.: _(Convergence rate of the stochastic error) Consider the causal survival embedding estimator \(\hat{\mu}_{T\langle 0|1\rangle}\). Suppose that conditions i.) to vi.) hold (condition ii.) is optional). Then we have for the stochastic error_
\[\left\|\widehat{\mathcal{C}^{*}}_{TX}\left(\widehat{\mathcal{C}^{*}}_{XX}+ \varepsilon_{n}I\right)^{-1}\widehat{\mu}_{X_{1}}-\mathcal{C}_{TX}\left( \mathcal{C}_{XX}+\varepsilon_{n}I\right)^{-1}\mu_{X_{1}}\right\|_{\mathcal{H }}=O_{p}\left(n^{-1/2}\varepsilon_{n}^{-1}\right)\]
Proof.: We start with the same breakdown as in the proof of Theorem 11 in Fukumizu, Song, and Gretton (2013):
\[\left\|\widehat{\mathcal{C}^{*}}_{TX}\left(\widehat{\mathcal{C}^{*}}_{XX}+ \varepsilon_{n}I\right)^{-1}\widehat{\mu}_{X_{1}}-\mathcal{C}_{TX}\left( \mathcal{C}_{XX}+\varepsilon_{n}I\right)^{-1}\mu_{X_{1}}\right\|_{\mathcal{H }}\leq\]
\[\left\|\widehat{\mathcal{C}^{*}}_{TX}\left(\widehat{\mathcal{C}^{*}}_{XX}+ \varepsilon_{n}I\right)^{-1}\left(\widehat{\mu}_{X_{1}}-\mu_{X_{1}}\right) \right\|_{\mathcal{H}}\quad:\quad(A)\]
\[+\left\|\left(\widehat{\mathcal{C}^{*}}_{TX}-\mathcal{C}_{TX}\right)\left( \mathcal{C}_{XX}+\varepsilon_{n}I\right)^{-1}\mu_{X_{1}}\right\|_{\mathcal{H }}\quad:\quad(B)\]
\[+\left\|\widehat{\mathcal{C}^{*}}_{TX}\left(\widehat{\mathcal{C}^{*}}_{XX}+ \varepsilon_{n}I\right)^{-1}\left(\mathcal{C}_{XX}-\widehat{\mathcal{C}^{*}}_ {XX}\right)\left(\mathcal{C}_{XX}+\varepsilon_{n}I\right)^{-1}\mu_{X_{1}} \right\|_{\mathcal{H}}\quad:\quad(C)\]
(A): From Muandet et al. (2021) we have that
\[\text{(A)}=O_{p}\left(\varepsilon_{n}^{-1/2}n^{-1/2}\right)\]
as it can be seen to rely on weak convergence of uncensored kernel mean embeddings at speed \(\frac{1}{\sqrt{n}}\) (Ledoux and Talagrand, 1991; Berlinet and Thomas-Agnan, 2011) and on applying Theorem 1 in Baker (1973) to \(\frac{d\hat{F}_{0}^{(*)}}{\hat{G}_{0}}\).
(B): using Lemma 24 in Muandet et al. (2021)
\[\left\|\left(\widehat{\mathcal{C}^{*}}_{TX}-\mathcal{C}_{TX} \right)\left(\mathcal{C}_{XX}+\varepsilon_{n}I\right)^{-1}\mu_{X_{1}}\right\| _{\mathcal{H}} \leq\left\|\widehat{\mathcal{C}^{*}}_{TX}-\mathcal{C}_{TX}\right\| \left\|\left(\mathcal{C}_{XX}+\varepsilon_{n}I\right)^{-1}\mu_{X_{1}}\right\| _{\mathcal{G}}\] \[\leq\left\|\widehat{\mathcal{C}^{*}}_{TX}-\mathcal{C}_{TX} \right\|\cdot O_{p}\left(\varepsilon_{n}^{-1/2}\right)\]
(C): proceeding as in Muandet et al. (2021)
\[\left(\text{C}\right)=\left\|\widehat{\mathcal{C}^{*}}_{XX}-\mathcal{C}_{XX} \right\|\cdot O_{p}\left(\varepsilon_{n}^{-1}\right)\]
Let \(\varepsilon_{n}>0\) be a regularization constant. Then if \(\varepsilon_{n}\to 0\) and \(n^{1/2}\varepsilon_{n}\to\infty\) as \(n\to\infty\), we have consistency provided that we show the tight uniform bounds
\[\left\|\widehat{\mathcal{C}^{*}}_{XX}-\mathcal{C}_{XX}\right\| =O_{p}\left(n^{-1/2}\right) \tag{5.3.1}\]
\[\left\|\widehat{\mathcal{C}^{*}}_{TX}-\mathcal{C}_{TX}\right\| =O_{p}\left(n^{-1/2}\right) \tag{5.3.2}\]
and the term with the slowest rate would be (C). We will take into account that \(\|\cdot\|\leq\|\cdot\|_{HS}\).
**Lemma 4**.: _Define \(K_{i}=k(\cdot,X_{i})-\mu_{X^{0}}\), \(L_{i}=l(\cdot,T_{i}^{*})-\mu_{T^{0}}\), \(K(X^{0})=k(\cdot,X^{0})-\mu_{X^{0}}\), \(L(T^{0})=l(\cdot,T^{0})-\mu_{T^{0}}\), where \(\mu_{X^{0}}\) and \(\mu_{T^{0}}\) are the marginal kernel mean embeddings \(E_{X^{0}}\left[k(\cdot,X^{0})\right]\) and \(E_{T^{0}}\left[l(\cdot,T^{0})\right]\). Then we have:_
\[\left\|\widehat{\mathcal{C}^{*}}_{TX}-\mathcal{C}_{TX}\right\|_{HS}^{2}=\left\| \frac{1}{n}\sum_{i=1}^{n}W_{i}\left(K_{i}-\frac{1}{n}\sum_{j=1}^{n}W_{j}K_{j} \right)\left(L_{i}-\frac{1}{n}\sum_{j=1}^{n}W_{j}L_{j}\right)-E[K(X^{0})L(T^{ 0})]\right\|_{\mathcal{G}\otimes\mathcal{H}}^{2}\]
Proof.: Direct adaptation of Fukumizu, Bach, and Gretton (2007).
Deriving the following inequality in Lemma 5 is, however, more involved.
**Lemma 5**.: \[\|\widehat{\mathcal{C}^{*}}_{TX}-\mathcal{C}_{TX}\|_{HS}\leq\]
\[\leq\left\|\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i}L_{i}-E[K(X^{0})L(T^{0})]\right\| _{\mathcal{G}\otimes\mathcal{H}}+\left|2-\frac{1}{n}\sum_{i=1}^{n}W_{i}\right| \left\|\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i}\right\|_{\mathcal{G}}\left\|\frac {1}{n}\sum_{i=1}^{n}W_{i}L_{i}\right\|_{\mathcal{H}}\]
Proof.: See Appendix.
Let us denote for simplicity of notation \(\mu_{X^{0}}=\mu_{0}\). Taking a closer look at the term \(\left\|\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i}\right\|_{\mathcal{G}}\) on the right-hand side of Lemma 5:
\[\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i} =\frac{1}{n}\sum_{i=1}^{n}W_{i}(k(\cdot,X_{i})-\mu_{0})=\frac{1}{n }\sum_{i=1}^{n}(W_{i}k(\cdot,X_{i})-W_{i}\mu_{0})\] \[=\frac{1}{n}\sum_{i=1}^{n}W_{i}k(\cdot,X_{i})-\mu_{0}+\mu_{0}-\mu _{0}\left(\frac{1}{n}\sum_{i=1}^{n}W_{i}\right)=\] \[=\left(\frac{1}{n}\sum_{i=1}^{n}W_{i}k(\cdot,X_{i})-\mu_{0}\right) +\mu_{0}\left(1-\frac{1}{n}\sum_{i=1}^{n}W_{i}\right)\]
Furthermore,
\[\frac{1}{n}\sum_{i=1}^{n}W_{i}k(\cdot,X_{i})-\mu_{0}=\int_{\mathcal{X}}k( \cdot,X^{0})\frac{d\hat{F}_{0}^{(*)}}{\hat{G}_{0}}-\int_{\mathcal{X}}k( \cdot,X^{0})\frac{dF_{0}^{(*)}}{G_{0}}=:\nu(\hat{F}_{0}^{(*)},\hat{G}_{0})- \nu(F_{0}^{(*)},G_{0})\in\mathcal{G}\]
It is important to note that \(\nu\) is an operator taking values in a Hilbert space, and showing its Hadamard-differentiability is not straightforward. Let, for \(n\geq 1\), \(S_{n}=\sum_{i=1}^{n}k\left(\cdot,X_{i}\right)\) and \(\Lambda_{n}=\sqrt{n}\left(\frac{S_{n}}{n}-\mathcal{I}_{\mu}\right)\). Since \(\mathcal{I}_{\mu}=\int k\left(\cdot,X^{0}\right)dF_{X^{0}}=E\left(k\left( \cdot,X^{0}\right)\right)\), one could prove, using the Hilbert space version of the Central Limit Theorem, that the sequence \(\left(\Lambda_{n}\right)_{n\geq 1}\) converges weakly to a centered Gaussian variable (Ledoux and Talagrand, 1991). The elements preventing us from proceeding this way are the \(W_{i}\), which break the i.i.d. assumption needed by this CLT.
First, it is known that \(\sqrt{n}(\hat{G}_{0}-G_{0})\) converges weakly in \(D[0,\tau]\) to a tight, mean-zero Gaussian process (Fleming and Harrington (2011); Andersen, Borgan, Gill, and Keiding (2012)). Second, by Donsker's theorem, \(\sqrt{n}(\hat{F}_{0}^{(*)}-F_{0}^{(*)})\) also converges weakly to a tight, mean-zero Gaussian process; as a reminder, \(d\hat{F}_{0}^{(*)}(t,x)=\frac{1}{n}\sum_{i=1}^{n}\Delta_{i}\delta_{T_{i}^{*}}( t)\delta_{X_{i}}(x)\) is just the empirical measure of the uncensored observations on the arm with \(Z_{i}=0\).
We now proceed to show Hadamard-differentiability of \(\nu\) for \(\mathcal{X}=\mathbb{R}\). The following definitions are taken from Van der Vaart (2000), Sections 18.6 and 20.3.
**Definition 10**.: _Let \(T=[a,b]\) be an interval in the extended real line. We denote by \(C[a,b]\) the set of all continuous functions \(z:[a,b]\mapsto\mathbb{R}\) and by \(D[a,b]\) the set of all functions \(z:[a,b]\mapsto\mathbb{R}\) that are right continuous and whose limits from the left exist everywhere in \([a,b]\). (The functions in \(D[a,b]\) are called càdlàg: continue à droite, limites à gauche.) It can be shown that \(C[a,b]\subset D[a,b]\subset\ell^{\infty}[a,b]\). We always equip the spaces \(C[a,b]\) and \(D[a,b]\) with the uniform norm \(\|z\|_{T}\), which they "inherit" from \(\ell^{\infty}[a,b]\)._
The space \(D[a,b]\) is referred to here as the Skorohod space, and the set \(BV_{M}[a,b]\) is the set of all càdlàg functions \(z:[a,b]\mapsto[-M,M]\subset\mathbb{R}\) of variation bounded by \(M\). We also define:
\(BV_{M}^{1}[a,b]=\{B\in BV_{M}[a,b]:x\mapsto k(x,x)\in L^{1}(B)\}\)
\(D^{2}[a,b]=\{A\in D[a,b]:A\in L^{2}(B)\text{ for all }B\in BV_{M}^{1}[a,b]\}\)
We need to restrict our operator to \(D_{M}\equiv D^{2}[-\infty,\infty]\times BV_{M}^{1}[-\infty,\infty]\) for existence of Bochner integrals, see Theorem 105 in Berlinet and Thomas-Agnan (2011). Nevertheless, thanks to assumptions i.) and vi.), this is always the case as long as we operate on \(D[a,b]\times BV_{M}[a,b]\).
**Lemma 6**.: _Let \(\mathcal{H}\) be an RKHS of functions \(f:\mathbb{R}\longrightarrow\mathbb{R}\) with reproducing kernel \(k\). Then the operator \((A,B)\mapsto\int k(\cdot,x)A(x)dB(x)\in\mathcal{H}\) is Hadamard-differentiable from the domain \(D_{M}\equiv D^{2}[-\infty,\infty]\times BV_{M}^{1}[-\infty,\infty]\subset D[- \infty,\infty]\times D[-\infty,\infty]\) into \((\mathcal{H},\sqrt{\langle\cdot,\cdot\rangle_{\mathcal{H}}})\) at every pair of functions of bounded variation \((A,B)\)._
Proof.: We propose as a candidate derivative \(\psi_{A,B}^{\prime}(\alpha,\beta)(\cdot)=\int k(\cdot,x)A(x)d\beta(x)+\int k( \cdot,x)\alpha(x)dB(x)\), for \((\alpha,\beta)\in D_{M}\).
We will use the fact that \(\|k(\cdot,x)\|_{\mathcal{H}}^{2}=\langle k(\cdot,x),k(\cdot,x)\rangle=k(x,x)\).
For sequences \(t_{n}\to 0\) in \(\mathbb{R}\), \(\alpha_{n}\rightarrow\alpha\), and \(\beta_{n}\rightarrow\beta\) in \(D^{2}[-\infty,\infty]\) and \(BV_{M}^{1}[-\infty,\infty]\) respectively, define \(A_{n}\equiv A+t_{n}\alpha_{n}\) and \(B_{n}\equiv B+t_{n}\beta_{n}\). Since we require that \((A_{n},B_{n})\in D_{M}\), we know that the total variation of \(B_{n}\) is bounded by \(M\). Consider first the difference quotient of \(\psi\), and note that
\[\left\|\frac{\int k(\cdot,x)A_{n}(x)dB_{n}(x)-\int k(\cdot,x)A(x)dB(x)}{t_{n}} -\psi_{A,B}^{\prime}\left(\alpha_{n},\beta_{n}\right)\right\|_{\mathcal{H}}=\]
\[\left\|\int k(\cdot,x)\alpha_{n}(x)d\left(B_{n}-B\right)(x)\right\|_{ \mathcal{H}}=\]
\[\left\|\int k(\cdot,x)\alpha(x)d\left(B_{n}-B\right)(x)+\int k(\cdot,x)\left( \alpha_{n}(x)-\alpha(x)\right)d\left(B_{n}-B\right)(x)\right\|_{\mathcal{H}}\leq\]
\[\int\|k(\cdot,x)\|_{\mathcal{H}}\left|\alpha(x)\right|d\left(B_{n}-B\right)(x )+\int\|k(\cdot,x)\|_{\mathcal{H}}\left|\alpha_{n}(x)-\alpha(x)\right|d\left(B _{n}-B\right)(x)=\]
\[\int\sqrt{k(x,x)}|\alpha(x)|d\left(B_{n}-B\right)(x)+\int\sqrt{k(x,x)}\left| \alpha_{n}(x)-\alpha(x)\right|d\left(B_{n}-B\right)(x)\equiv(1)+(2)\]
Because \(k\) is bounded, both \(B_{n}\) and \(B\) have total variation bounded by \(M\), and \(\alpha_{n}\rightarrow\alpha\) uniformly, (2) converges to zero.
For convergence of (1) to zero as \(t_{n}\longrightarrow 0\), we follow the same argument as in Van der Vaart (2000), Lemma 20.10, with \(\phi\) therein being the identity map.
Since the map \((\alpha,\beta)\mapsto\psi_{A,B}^{\prime}(\alpha,\beta)\) is continuous and linear, the desired Hadamard differentiability of \(\psi\) will follow because (1) and (2) converge to zero.
Our operator \(\nu\) was defined for \((A,B)\in D_{M}\) as
\[\nu:(A,B)\mapsto\left(A,\frac{1}{B}\right)\mapsto\int_{\mathcal{X}}k(\cdot,X) \frac{1}{B(X)}dA(X)\]
We can assert that \(\sqrt{n}\left(\nu(\hat{F}_{0}^{(*)},\hat{G}_{0})-\nu(F_{0}^{(*)},G_{0})\right)\) converges weakly to a process in a Polish RKHS by virtue of the chain rule of Hadamard-differentiability, the fact that \(B\mapsto 1/B\) is Hadamard-differentiable on \(\left\{B\in\ell^{\infty}(\mathcal{X}):\inf_{x\in\mathcal{X}}|B(x)|>0\right\}\), Lemma 6, and the Functional Delta Method (Kosorok, 2008). Hence, by Prokhorov's theorem, the limiting process is uniformly tight and therefore:
\[\left\|\frac{1}{n}\sum_{i=1}^{n}W_{i}(k(\cdot,X_{i})-\mu_{X^{0}})\right\|_{ \mathcal{G}}=O_{p}\left(n^{-1/2}\right),\quad\left\|\frac{1}{n}\sum_{i=1}^{n} W_{i}(l(\cdot,T_{i}^{*})-\mu_{T^{0}})\right\|_{\mathcal{H}}=O_{p}\left(n^{-1/2}\right)\]
By consistency of real Kaplan-Meier integrals (Stute, 1993), \(\frac{1}{n}\sum_{i=1}^{n}W_{i}=1+o_{p}(1)\). In addition, the tensor product norm on the right-hand side of Lemma 5 can be seen to be \(O_{p}(n^{-1/2})\) by combining our arguments with those in Lemma 5 from Fukumizu et al. (2007). By Slutsky's theorem, we have just shown the tight uniform bounds 5.3.1 and 5.3.2 we were looking for.
**Corollary 1**.: _(Consistency) Suppose that Assumptions i.) to vi.) are satisfied. Let \(\varepsilon_{n}>0\) be a regularization constant. Then if \(\varepsilon_{n}\to 0\) and \(n^{1/2}\varepsilon_{n}\to\infty\) as \(n\to\infty\), we have_
\[\left\|\hat{\mu}_{T\langle 0|1\rangle}-\mu_{T\langle 0|1\rangle}\right\|_{ \mathcal{H}}\to 0\]
_in probability as \(n\to\infty\)._
Proof.: See Appendix.
Informally, \(\alpha\) and \(\beta\) in the following result quantify, respectively, how similar \(F_{X^{0}}\) and \(F_{X^{1}}\) are (the bigger, the more similar) and the smoothness of the map \(x\mapsto\mu_{T^{0}|X^{0}=x}\) (the bigger, the smoother).
**Corollary 2**.: _(Convergence rate) Suppose that Assumptions i.) to vi.) in our paper and Assumptions 3 and 4 in Muandet et al. (2021) hold with \(\alpha\) and \(\beta\) both non-negative and \(\alpha+\beta\leq 1\). Let \(\varepsilon_{n}>0\) be a regularization constant. Let \(c>0\) be an arbitrary constant, and set \(\varepsilon_{n}=cn^{-1/(1+\beta+\max(1-\alpha,\alpha))}\). Then we have_
\[\left\|\hat{\mu}_{T\langle 0|1\rangle}-\mu_{T\langle 0|1\rangle}\right\|_{ \mathcal{H}}=O_{p}\left(n^{-(\alpha+\beta)/(2(1+\beta+\max(1-\alpha,\alpha)))}\right)\]
Proof.: See Appendix.
## 6 Numerical experiments
We provide a self-contained simulation study in order to validate the large-sample properties that have been proven in the previous section. The underlying model for the simulation case study is
\[\log\tilde{T}^{0} =X_{1}^{0}+X_{2}^{0}+\varepsilon\] \[\log C^{0} =X_{1}^{0}+X_{2}^{0}+\varepsilon^{\prime}\] \[\log\tilde{T}^{1} =2+X_{1}^{1}+X_{2}^{1}+\omega\] \[\log C^{1} =2+X_{1}^{1}+X_{2}^{1}+\omega^{\prime}\]
\(X_{1}^{0}\) and \(X_{2}^{0}\) are independent \(\mathcal{N}(0,1)\) random variables, while \(X_{1}^{1}\) and \(X_{2}^{1}\) are also independent unit-variance normal random variables, but \(X_{1}^{1}\) has mean \(0.5\).
\(\varepsilon\) and \(\varepsilon^{\prime}\) are \(\mathcal{N}(c^{0},1)\) and \(\mathcal{N}(0,1)\) respectively with \(c^{0}>0\) controlling the amount of censoring (the bigger \(c^{0}\), the more censoring in the control arm). Analogously, \(\omega\) and \(\omega^{\prime}\) are \(\mathcal{N}(c^{1},1)\) and \(\mathcal{N}(0,1)\) respectively. We have set \(c^{0}=0.2\) and \(c^{1}=0.1\) in order to keep an incomplete information percentage of approximately \(75\%\) in both arms through all \(B=100\) simulation runs. We replicate the experiment for four different sample sizes \(n=100,200,300,500\). We equip both the covariates and response spaces with Gaussian kernel \(k\left(y,y^{\prime}\right)=\exp\left(-\left\|y-y^{\prime}\right\|_{2}^{2}/2 \sigma^{2}\right)\). The bandwidth parameter \(\sigma\) is chosen via the median heuristic: \(\sigma^{2}=\operatorname{median}\left\{\left\|y_{i}-y_{j}\right\|_{2}^{2}:i \neq j\right\}/2\).
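A minimal R sketch of one draw from this design follows; the function names and the data frame layout are our own choices, while the intercepts, covariate means, and censoring constants follow the description above.

```r
# Sketch of the simulation design: log-linear event and censoring times,
# with c0 = 0.2 (control) and c1 = 0.1 (treatment) controlling censoring.
simulate_arm <- function(n, intercept, c_arm, mu1 = 0) {
  X1 <- rnorm(n, mean = mu1)                       # X_1 (mean 0.5 in the treated arm)
  X2 <- rnorm(n)                                   # X_2 ~ N(0, 1) in both arms
  logT <- intercept + X1 + X2 + rnorm(n, mean = c_arm)  # latent log event time
  logC <- intercept + X1 + X2 + rnorm(n)                # latent log censoring time
  data.frame(Tstar = exp(pmin(logT, logC)),        # observed time T* = min(T, C)
             Delta = as.integer(logT <= logC),     # 1 if uncensored
             X1 = X1, X2 = X2)
}

median_heuristic <- function(y) {
  # sigma^2 = median{ ||y_i - y_j||_2^2 : i != j } / 2
  sqrt(median(as.numeric(dist(y))^2) / 2)
}

control <- simulate_arm(500, intercept = 0, c_arm = 0.2, mu1 = 0)    # arm Z = 0
treated <- simulate_arm(500, intercept = 2, c_arm = 0.1, mu1 = 0.5)  # arm Z = 1
```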
We perform estimation of the causal survival mean embeddings over \(B=100\) different simulation runs in the four sample-size scenarios. The results are visible in Figure 5. Notice the decrease of variability with sample size. We provide in Appendix 9 the results of an experiment revealing that our estimator may have a _fast rate_. That is to say, under a linear ground-truth dependency between covariates and times, the error is of stochastic order \(n^{-1/2}\), despite this rate not being achieved for any non-negative \(\alpha\) and \(\beta\) in Corollary 2.
## 7 Application to SPRINT: a landmark trial in public health
Observational studies had shown that individuals with lower systolic blood pressure (SBP) levels had fewer complications and deaths due to cardiovascular disease (CVD). Building on this observation, the NIH's Systolic Blood Pressure Intervention Trial (SPRINT) was designed to test the effects that a lower blood pressure target has on reducing heart disease risk, and thereby to inform the new blood pressure medication guidelines in the US. Specifically, SPRINT aimed to compare treating high blood pressure to a target SBP goal of less than 120 mmHg against treating to a goal of less than 140 mmHg.
However, it has been seen in major clinical trials that a reduction of SBP is intimately connected to a reduction of DBP (diastolic blood pressure). Despite this association, it is debated whether low DBP leads to undesirable cardiovascular outcomes, such as a reduction of coronary flow, myocardial infarction, heart failure, or cardiovascular death (Franklin, Gokhale, Chow, Larson, Levy, Vasan, Mitchell, and Wong, 2015; Bohm, Schumacher, Teo, Lonn, Mahfoud, Mann, Mancia, Redon, Schmieder, Sliwa, et al., 2017; Messerli, Mancia, Conti, Hewkin, Kupfer, Champion, Kolloch, Benetos, and Pepine, 2006). This suggests that intensive systolic blood pressure therapy may result in an excessive reduction of DBP and therefore in an undesired increase in cardiovascular risk. Nevertheless, SPRINT showed that intensive treatment was clearly associated with a reduced risk of CVD and was even stopped early because the results were so convincing (The-SPRINT-Research Group, 2015).
Figure 5: The black solid line represents the average of the \(B=100\) runs. The dashed yellow line is a numerical approximation of the population counterfactual mean embedding. Each grey line corresponds with one simulation draw. Simulation parameters \(c^{0}\) and \(c^{1}\) were tuned by hand in order to set a censoring percentage of approximately 75% (on average across simulations, only 25% of information was complete).
Given the conclusions drawn by SPRINT, the research question is now whether it is possible to decompose the total effect of treatment on the primary outcome into a (natural) direct effect and a (natural) indirect effect through low DBP (induced by the treatment).
The debate on intensive blood pressure therapy is ongoing. Lee, Cavalcanti, McDonald, Pilote, and Brophy (2018) set out to ascertain whether there is an association between the onset of diastolic hypotension during treatment and negative outcomes. To achieve this, they utilized a conventional Cox PH model, using diastolic blood pressure as a time-varying exposure and adjusting for certain baseline factors. Stensrud and Strohmaier (2017) aimed to explore whether a formal mediation analysis, utilizing the SPRINT data, could identify whether intensive SBP treatment impacts cardiovascular outcomes via a pathway that involves diastolic blood pressure (DBP) below 60 mmHg. They claim that _the association between treatment-induced diastolic blood pressure and cardiovascular outcomes suffers from confounding_ (Stensrud and Strohmaier, 2019).
We illustrate how our methodological contribution manages to perform the desired effect decomposition both across pathways and, importantly, across time thanks to the RKHS formulation. A consensus answer to the problem would be relevant to the medical community because, as mentioned, SPRINT ultimately informed the new blood pressure guidelines by demonstrating that a lower blood pressure target can significantly reduce heart disease risk.
### Description of the dataset
We conducted our analysis among 2269 participants older than 75 years who had non-missing values for the covariates. Our response variable T_PRIMARY is the observed time-to-primary-outcome in days, which is a CVD composite endpoint of myocardial infarction, stroke, acute coronary syndrome, acute decompensated heart failure (ADHF), and CVD death. Composite outcomes are postulated to enhance the evaluation of treatment effects on infrequent outcomes, such as mortality in smaller trials, and serve as a convenient means of representing a broader spectrum of beneficial effects resulting from an intervention (Cordoba, Schwartz, Woloshin, Bae, and Gotzsche, 2010).
Figure 6: DAG depicting underlying causal structure of the medical problem; taken from Stensrud and Strohmaier (2017). The primary aim of their investigation was to decompose the total effect of intensive therapy versus standard therapy into two separate pathways: (i) a direct pathway that encompasses all effects not involving a reduction in diastolic blood pressure below 60 mmHg, comprising the advantageous impact of reducing systolic blood pressure, and (ii) an indirect pathway that acts through on-treatment DBP below 60 mmHg and has the potential to be deleterious.
Even considering several events to build the primary endpoint, the percentage of uncensored observations is 11% and 7% in the control and treatment arms, respectively. These high incomplete-information percentages render the consideration of censoring mandatory, constituting a strong motivating factor for the development of our new estimator.
The treatment indicator for each patient, INTENSIVE, is encoded such that 1 indicates the lower SBP target of 120 mmHg and 0 indicates standard treatment (target SBP: 140 mmHg). The vector of covariates for each patient includes 'DBP.1yr' (DBP one year after randomisation) and baseline characteristics we want to adjust for: 'DBP.rz' (DBP at randomization), 'AGE', 'CHR' (cholesterol, mg/dL), 'GLUR' (glucose, mg/dL), 'HDL' (high-density lipoprotein, "good", cholesterol, direct, mg/dL), 'TRR' (triglycerides, mg/dL), 'UMALCR' (urine albumin/creatinine ratio), and 'BMI' (body mass index, kg/m2).
### Naive analysis of SPRINT
We might start by stratifying the observations into two groups: one with DBP \(\leq\) 60 mmHg one year after randomisation (encoded DBP60=0), and a group with DBP \(>\) 60 mmHg one year after randomisation (encoded DBP60=1). Then we regress the primary endpoint against the newly created indicator variable using vanilla Cox PH.
```
> library('survival')
> primary = Surv(t, delta)
> coxdbp60 <- coxph(primary ~ DBP60)
> summary(coxdbp60)
Call:
coxph(formula = primary ~ DBP60)

  n= 2269, number of events= 210

         coef exp(coef) se(coef)     z Pr(>|z|)
DBP601 0.2823    1.3262   0.1475 1.914   0.0556 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

       exp(coef) exp(-coef) lower .95 upper .95
DBP601     1.326      0.754    0.9933     1.771

Concordance= 0.529  (se = 0.017 )
Likelihood ratio test= 3.54  on 1 df,   p=0.06
Wald test            = 3.66  on 1 df,   p=0.06
Score (logrank) test = 3.69  on 1 df,   p=0.05
```
The estimates provided by the model fit would confirm the original suspicions of the medical community, stating that low DBP leads to increased cardiovascular risk. This is because the estimate of the hazard ratio exp(coef)=1.326 \(>\) 1.
The second step we take is to fit two Kaplan-Meier curves, one for each arm of the SPRINT trial (INTENSIVE=0: target SBP of 140 mmHg, INTENSIVE=1: target SBP of 120 mmHg), and produce the plot displayed in Figure 7. This serves as a quantitative basis for three facts. First, the paradox we are facing becomes empirically confirmed, because treatment, defined as the SBP-lowering intervention, now seems to be effective (the blue curve therein, estimating the survival function of the treatment population, is higher after one year).
Second, the estimates of the survival functions are crossing. This is a well-known problem in the field of time-to-event analysis (Bouliotis and Billingham, 2011), directly invalidating the proportional hazards assumption. Third, this would confirm observationally the overall positive results of the SPRINT trial, asserting that intensive SBP control results in cardiovascular benefit.
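For reference, the two fits of Figure 7 can be sketched as follows; the data frame name sprint is our assumption, and the column names follow the ones used elsewhere in this section.

```r
# Sketch of the per-arm Kaplan-Meier estimates shown in Figure 7
library(survival)
km_arms <- survfit(Surv(T_PRIMARY, delta) ~ INTENSIVE, data = sprint)
plot(km_arms, col = c("black", "blue"),
     xlab = "Days", ylab = "Survival probability")
legend("bottomleft", legend = c("INTENSIVE=0 (target 140 mmHg)",
                                "INTENSIVE=1 (target 120 mmHg)"),
       col = c("black", "blue"), lty = 1)
```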
### Conclusions of our analysis of SPRINT
Our results agree with Stensrud and Strohmaier (2017): the increased risk in subjects with diastolic pressure below 60 mmHg cannot fully be explained by the intensive treatment itself, but may be due to other factors. A complete description of the results is included in Figure 8.
## 8 Discussion
The main contribution of this paper is the introduction of a novel framework that enables model-free counterfactual inference, opening the doors to many tasks, including counterfactual prediction, hypothesis testing, and clustering analysis. The proposed methods rely solely on the Kaplan-Meier estimator (Gerds et al., 2017). While the assumptions that make this possible (i.e., those preventing users from explicitly including covariates in the involved weights) pose a limitation from a practical viewpoint, our method could be locally fitted using the nearest neighbor paradigm (Tamas and Csaji, 2023) and remain robust to different types of censoring mechanisms.
Moreover, our fundamental approach can be adapted to handle more complex scenarios, such as inverse probability weighting or doubly robust estimators (Rubin and van der Laan, 2007).
Figure 7: Two Kaplan-Meier fits aimed to estimation of \(S_{\tilde{T}^{1}|Z=1}(t)\) in blue and \(S_{\tilde{T}^{0}|Z=0}(t)\) in black
The key advantage of our methods is their model-free nature, allowing for learning of complex non-linear relationships between predictors and response variables, given certain smoothness and moment conditions. Many existing models in counterfactual inference are semi-parametric in nature, like the Cox model, which may involve parameters that do not have a causally valid interpretation (Martinussen, 2022).
The adaptation of such estimators to the fully non-parametric context faces technical difficulties, as seen with the k-NN algorithm and Beran's estimator.
Figure 8: We look at Equation 3.1.1 in the RKHS scale. \(\hat{\mu}_{T\langle 0|0\rangle}-\hat{\mu}_{T\langle 0|1\rangle}\) is represented through the blue line. Similarly, we plot in green \(\hat{\mu}_{T\langle 0|1\rangle}-\hat{\mu}_{T\langle 1|1\rangle}\). An important fact to bear in mind is that (A) in Equation 3.1.1 is zero if and only if the population counterpart of the blue function is zero, meaning that there is _no_ distributional effect on the outcomes arising from the difference in covariate distributions. Likewise, if the green line is zero at the population level, then there are no distributional effects for the treated. The dashed line is the sum of both colored functions: the kernel mean embedding of (A) + (B) \(=S_{T^{1}}-S_{T^{0}}\), representing the _realized_ (not counterfactual) survival probability gain in the intensive treatment arm. The plot can be interpreted as follows: the shift in the distribution of the DBP decrease across treatment arms has an effect in the opposite direction to the counterfactual treatment effect on the intensive treatment arm, the latter being stronger. During the majority of the study time span (approximately from month 6 until the beginning of the third year), intensively reducing SBP compensates for the harmful consequences of the reduction in DBP that comes hand-in-hand. As a consequence, survival probability is increased during this period, which translates into the dotted line being above zero. Nevertheless, SBP reduction to the lowest target impacts survival negatively in the long term. Interestingly, the inherent reduction in DBP becomes beneficial and tends to compensate for the harmful effect brought in the long run by intensive SBP reduction.
However, by adopting the mean embedding toolset, we can create model-free estimators without these technical difficulties. Kernel mean embeddings can be interpreted as conditional depth bands, proving their usefulness for inferential tasks and other descriptive analyses, as demonstrated in the paper. Additionally, the geometry of kernel mean embeddings allows for a natural interpretation of quantities that are present in the potential outcomes framework, such as the effect of distributional shifts on the covariates.
From a theoretical standpoint, we discuss the implications of using weights involving the Kaplan-Meier estimator. Roughly speaking, these weights assume independence between survival and censoring times, as well as conditional independence of the censoring indicator and the covariates given the realized times (Assumption v.).
Let us briefly outline the consequences of relaxing these hypotheses. A regular estimator is efficient if it achieves the lowest possible variance among regular estimators, and this optimality notion is established with tools from semiparametric inference (Kosorok, 2008). Specifically, the Kaplan-Meier integral is asymptotically efficient only under the assumption of independence between survival and censoring times with respect to the covariates (Stute, 1996; Laan and Robins, 2003). This is intuitive because the covariate values of the censored times are never observed in empirical estimates. However, if we relax this hypothesis and consider a scenario where \(C\) is not independent of \(T\) given \(Z\), and \(\Delta\) is not independent of \(X\) given \(T\) and \(Z\), then the resulting estimator will be inefficient, as these assumptions guaranteed that the conditional survival distribution of the censoring times \(G\) does not depend on the covariates.
To address this issue, we can use a Cox model to estimate \(G_{0}(t,x)\). This would be more efficient than using Kaplan-Meier under conscious violation of the previous assumptions, but even this approach will never achieve full efficiency. As per the adaptive estimation principle (Bickel, Klaassen, Ritov, and Wellner, 1993), a larger censoring model leads to more efficient weights estimation. However, in high-dimensional settings (the scenario we often face when covariates are present in biomedicine), the performance of this method may be poor. This may potentially be alleviated by doubly robust estimators (Benkeser, Carone, Laan, and Gilbert, 2017).
The proportional hazards model is the prevailing regression model used in survival analysis. However, a standard Cox analysis does not provide insight into how the effects evolve over time, potentially resulting in a loss of valuable information. In the usual Cox analysis, coefficients are typically assumed to remain constant over time, making it challenging to incorporate any deviations from this assumption. There exist a number of alternatives, for instance Aalen's additive regression model (Aalen, Borgan, and Gjessing, 2008). It offers the benefit of permitting covariate effects to vary independently over time. However, Aalen's model performs repeated regressions at each event time, running into instability and overfitting problems when few events (understood as uncensored observations) are present in the data. Figure 8 illustrates the importance of our estimator as a tool to assess relative risk between treatment arms across time in a natural way, without involving time-dependent hazard ratios. All things considered, reliably answering inferential questions about time-varying causal effects is a true milestone in contemporary statistics, even reaching areas like Reinforcement Learning (Zhang, Janson, and Murphy, 2022).
In conclusion, our proposed estimator offers a flexible and powerful tool for estimating counterfactual distributions in observational studies with right-censored data. The model-free nature of our approach makes it applicable to diverse scenarios where traditional
methods may be unsuitable. Our estimator can be used in combination with or as an alternative to existing parametric and semiparametric causal survival models, further expanding the range of available options for researchers.
## Appendix 1: proofs of auxiliary results
### Proof of Lemma 2
We have for an arbitrary \(G\in\mathcal{F}\)
\[\hat{R}_{\varepsilon,n}(\hat{F}+G)=\frac{1}{n}\sum_{i=1}^{n}W_{i}\left\|h_{i}- \hat{F}\left(X_{i}\right)-G\left(X_{i}\right)\right\|_{\mathcal{H}}^{2}+ \varepsilon\|\hat{F}+G\|_{\mathcal{F}}^{2}=\]
\[\frac{1}{n}\sum_{i=1}^{n}W_{i}\left(\left\|h_{i}-\hat{F}\left(X_{i}\right) \right\|_{\mathcal{H}}^{2}+\|G\left(X_{i}\right)\|_{\mathcal{H}}^{2}-2\langle h _{i}-\hat{F}(X_{i}),G(X_{i})\rangle_{\mathcal{H}}\right)+\varepsilon\left(\| \hat{F}\|_{\mathcal{F}}^{2}+\|G\|_{\mathcal{F}}^{2}+2\langle\hat{F},G\rangle_{ \mathcal{F}}\right)=\]
\[\hat{R}_{\varepsilon,n}(\hat{F})+\frac{1}{n}\sum_{i=1}^{n}W_{i}\left(\|G\left( X_{i}\right)\|_{\mathcal{H}}^{2}-2\langle h_{i}-\hat{F}(X_{i}),G(X_{i})\rangle_{ \mathcal{H}}\right)+\varepsilon\left(\|G\|_{\mathcal{F}}^{2}+2\langle\hat{F},G\rangle_{\mathcal{F}}\right)\]
Assuming that \(\hat{F}\) is a minimizer implies \(\hat{R}_{\varepsilon,n}(\hat{F})\leq\hat{R}_{\varepsilon,n}(\hat{F}+G)\) for every \(G\); replacing \(G\) by \(tG\) and letting \(t\to 0\), it is therefore necessary that for all \(G\in\mathcal{F}\)
\[\frac{1}{n}\sum_{i=1}^{n}W_{i}\langle h_{i}-\hat{F}(X_{i}),G(X_{i})\rangle_{ \mathcal{H}}=\varepsilon\langle\hat{F},G\rangle_{\mathcal{F}}\]
Now we try the solution \(\hat{F}=\sum_{i=1}^{n}\Gamma\left(\cdot,X_{i}\right)(c_{i})\in\mathcal{F}\) and we use the properties of \(\Gamma\) to develop the inner product
\[\langle\hat{F},G\rangle_{\mathcal{F}}=\langle\sum_{i=1}^{n}\Gamma\left( \cdot,X_{i}\right)(c_{i})\,,G\rangle_{\mathcal{F}}=\sum_{i=1}^{n}\langle c_{i},G(X_{i})\rangle_{\mathcal{H}}\]
So
\[\frac{1}{n}\sum_{i=1}^{n}W_{i}\langle h_{i}-\hat{F}(X_{i}),G(X_{i})\rangle_{ \mathcal{H}}=\varepsilon\sum_{i=1}^{n}\langle c_{i},G(X_{i})\rangle_{ \mathcal{H}}\]
Therefore
\[\sum_{i=1}^{n}\left(W_{i}\langle h_{i}-\hat{F}(X_{i}),G(X_{i})\rangle_{ \mathcal{H}}-n\varepsilon\langle c_{i},G(X_{i})\rangle_{\mathcal{H}}\right)=0\]
Now we use again the expression \(\hat{F}=\sum_{i=1}^{n}\Gamma\left(\cdot,X_{i}\right)(c_{i})\) to rewrite
\[\langle\hat{F}(X_{i}),G(X_{i})\rangle_{\mathcal{H}}=\sum_{j=1}^{n}\langle \Gamma(X_{i},X_{j})(c_{j}),G(X_{i})\rangle_{\mathcal{H}}\]
For the previous identity to be true for all \(G\in\mathcal{F}\) it is sufficient that for \(1\leq i\leq n\) the following holds
\[W_{i}(h_{i}-\sum_{j=1}^{n}\Gamma(X_{i},X_{j})(c_{j}))-n\varepsilon c_{i}=0\]
that can be written as
\[W_{i}h_{i}=\sum_{j=1}^{n}\left(W_{i}\Gamma(X_{i},X_{j})+n\varepsilon\delta_{ij}\right)(c_{j})\]
### Proof of Lemma 3
Define \(g:=\left(\widehat{\mathcal{C}^{*}}_{XX}+\varepsilon I\right)^{-1}\hat{\mu}_{X _{1}}\).
Since \(\hat{\mu}_{X_{1}}=\left(\widehat{\mathcal{C}^{*}}_{XX}+\varepsilon I\right)g= \frac{1}{n}\sum_{j=1}^{n}W_{j}k\left(\cdot,X_{j}\right)g\left(X_{j}\right)+ \varepsilon g\),
we have \(\hat{\mu}_{X_{1}}\left(X_{l}\right)=\frac{1}{n}\sum_{j=1}^{n}W_{j}k\left(X_{l},X_{j}\right)g\left(X_{j}\right)+\varepsilon g\left(X_{l}\right)=\frac{1}{n}( KW\mathbf{g})_{l}+\varepsilon\mathbf{g}_{l}\) for all \(l=1,\ldots,n\), where \(K\in\mathbb{R}^{n\times n}\) with \(K_{ij}=k\left(X_{i},X_{j}\right)\) and \(\mathbf{g}=\left(g\left(X_{1}\right),\ldots,g\left(X_{n}\right)\right)^{\top}\in \mathbb{R}^{n}\).
Therefore \(\mathbf{\mu}=\frac{1}{n}(KW+n\varepsilon I)\mathbf{g}\), where \(\mathbf{\mu}:=\left(\hat{\mu}_{X_{1}}\left(X_{1}\right),\ldots,\hat{\mu}_{X_{1}} \left(X_{n}\right)\right)^{\top}=\widetilde{K}1_{m}\), where \(1_{m}=(1/m,\ldots,1/m)^{\top}\) and \(\widetilde{K}\in\mathbb{R}^{n\times m}\) with \(\widetilde{K}_{ij}=k\left(X_{i},X_{j}^{1}\right)\). Thus \(\mathbf{g}=n(KW+n\varepsilon I)^{-1}\mathbf{\mu}\). Lastly, we use the definition of \(\widehat{\mathcal{C}^{*}}_{TX}\) to express \(\hat{\mu}_{T\langle 0|1\rangle}=\frac{1}{n}\sum_{i=1}^{n}W_{i}l\left( \cdot,T_{i}^{*}\right)g\left(X_{i}\right)=\sum_{i=1}^{n}W_{i}\beta_{i}l\left( \cdot,T_{i}^{*}\right)\), where \(\beta=(\beta_{1},\ldots,\beta_{n})^{\top}=n^{-1}\mathbf{g}=(KW+n\varepsilon I)^{ -1}\mathbf{\mu}\), which recovers the original expression of \(\hat{\mu}_{T\langle 0|1\rangle}\).
### Proof of Lemma 5
\[\left\|\frac{1}{n}\left(\sum_{i=1}^{n}W_{i}K_{i}L_{i}-\frac{2}{n} \left(\sum_{i=1}^{n}W_{i}K_{i}\right)\left(\sum_{i=1}^{n}W_{i}L_{i}\right)+ \frac{1}{n^{2}}\left(\sum_{i=1}^{n}W_{i}K_{i}\right)\left(\sum_{i=1}^{n}W_{i}L _{i}\right)\left(\sum_{i=1}^{n}W_{i}\right)\right)-E[K(X^{0})L(T^{0})]\right\| _{\mathcal{G}\otimes\mathcal{H}}\] \[= \left\|\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i}L_{i}-E[K(X^{0})L(T^{0 })]-\left(2-\frac{1}{n}\sum_{i=1}^{n}W_{i}\right)\left(\frac{1}{n}\sum_{i=1}^{ n}W_{i}K_{i}\right)\left(\frac{1}{n}\sum_{i=1}^{n}W_{i}L_{i}\right)\right\| _{\mathcal{G}\otimes\mathcal{H}}\] \[\leq \left\|\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i}L_{i}-E[K(X^{0})L(T^{0} )]\right\|_{\mathcal{G}\otimes\mathcal{H}}+\left|2-\frac{1}{n}\sum_{i=1}^{n}W_ {i}\right|\left\|\left(\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i}\right)\left(\frac{1 }{n}\sum_{i=1}^{n}W_{i}L_{i}\right)\right\|_{\mathcal{G}\otimes\mathcal{H}}\] \[\leq \left\|\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i}L_{i}-E[K(X^{0})L(T^{0} )]\right\|_{\mathcal{G}\otimes\mathcal{H}}+\left|2-\frac{1}{n}\sum_{i=1}^{n}W_ {i}\right|\left\|\left(\frac{1}{n}\sum_{i=1}^{n}W_{i}K_{i}\right)\right\|_{ \mathcal{G}}\left\|\left(\frac{1}{n}\sum_{i=1}^{n}W_{i}L_{i}\right)\right\|_{ \mathcal{H}}\]
### Proof of Corollary 1
By the triangle inequality,
\[\|\widehat{\mathcal{C}}_{TX}\left(\widehat{\mathcal{C}}_{XX}+ \varepsilon_{n}I\right)^{-1}\hat{\mu}_{X_{1}}-\mu_{T\langle 0|1\rangle}\|_{ \mathcal{H}}\] \[\leq \left\|\widehat{\mathcal{C}}_{TX}\left(\widehat{\mathcal{C}}_{XX }+\varepsilon_{n}I\right)^{-1}\hat{\mu}_{X_{1}}-\mathcal{C}_{TX}\left( \mathcal{C}_{XX}+\varepsilon_{n}I\right)^{-1}\mu_{X_{1}}\right\|_{\mathcal{H}} \quad\text{(Stochastic error)}\] \[+\left\|\mathcal{C}_{TX}\left(\mathcal{C}_{XX}+\varepsilon_{n}I \right)^{-1}\mu_{X_{1}}-\mu_{T\langle 0|1\rangle}\right\|_{\mathcal{H}}\quad\text{( Approximation error)}\]
### Proof of Corollary 2
The proof of Theorem 3 has been written with \(\alpha=0\) (we may assume so thanks to assumption iii.)). The proof for \(\alpha>0\) is straightforward using Lemma 24 in Muandet et al. (2021), and in this case the rate of the stochastic error is \(O_{p}\left(n^{-1/2}\varepsilon_{n}^{\min(-1+\alpha,-1/2)}\right)\).
The proof is completed by showing that the rate for the approximation error is \(O\left(\varepsilon_{n}^{(\alpha+\beta)/2}\right)\) (see E.3 in Muandet et al. (2021)).
## 9 Appendix: empirical check of \(\sqrt{n}\) rate under linear truth
```
Call:
lm(formula = lsds ~ mlogn)

Residuals:
         1          2          3          4          5          6
 8.818e-05  1.834e-02 -7.885e-02  4.292e-02 -4.727e-02  6.477e-02

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.10064    0.13888  -0.725    0.509
mlogn        0.51281    0.02512  20.418  3.4e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.06088 on 4 degrees of freedom
Multiple R-squared: 0.9905,  Adjusted R-squared: 0.9881
F-statistic: 416.9 on 1 and 4 DF,  p-value: 3.398e-05
```
Note: this experiment uncovers that our estimator shows an adaptive behaviour: when the underlying model is simulated to be linear, the convergence rate is faster, namely \(n^{-1/2}\).
Figure 9: Let \(V\) be the average across time points of the pointwise empirical standard deviation computed through simulations. Assuming that \(V=C\cdot n^{-\gamma}\), we can write \(\log(V)=\log(C)-\gamma\log(n)\). The console output above shows that \(\gamma\) is close to 0.5.
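A minimal sketch of this regression check follows; since only the fitted output is reported above, the six sample sizes and log-standard deviations here are synthetic placeholders generated with the target slope.

```r
# Sketch: fit log(V) = log(C) - gamma * log(n); the coefficient of
# mlogn = -log(n) estimates gamma (about 0.51 in the output above).
set.seed(1)
ns    <- c(100, 200, 400, 800, 1600, 3200)           # hypothetical sample sizes
lsds  <- -0.1 - 0.5 * log(ns) + rnorm(6, sd = 0.05)  # synthetic log-SDs, slope -1/2
mlogn <- -log(ns)
summary(lm(lsds ~ mlogn))                            # slope should be close to 0.5
```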
## 10 Acknowledgements
We would like to express our gratitude to Prof. Peter Buhlmann for his valuable advice during the development of this work. We also extend our gratitude to Prof. Mats Stensrud for his encouragement to use data from the Systolic Blood Pressure Trial (SPRINT). We are grateful for the kind gesture of Prof. Thomas A. Gerds and Prof. Krikamol Muandet, having provided helpful clarifications regarding weak convergence of the estimator. We are thankful to the National Heart, Lung and Blood Institute for providing us with access to this valuable dataset.
## 11 Funding
C.G.M is supported by the Fundacion Barrie via Bolsas de Posgrao no Estranxeiro. |
2307.06151 | The RoboPol sample of optical polarimetric standards | Optical polarimeters are typically calibrated using measurements of stars
with known and stable polarization parameters. However, there is a lack of such
stars available across the sky. Many of the currently available standards are
not suitable for medium and large telescopes due to their high brightness.
Moreover, as we find, some of the used polarimetric standards are in fact
variable or have polarization parameters that differ from their cataloged
values. Our goal is to establish a sample of stable standards suitable for
calibrating linear optical polarimeters with an accuracy down to $10^{-3}$ in
fractional polarization. For five years, we have been running a monitoring
campaign of a sample of standard candidates comprised of 107 stars distributed
across the northern sky. We analyzed the variability of the linear polarization
of these stars, taking into account the non-Gaussian nature of fractional
polarization measurements. For a subsample of nine stars, we also performed
multiband polarization measurements. We created a new catalog of 65 stars (see
Table 2) that are stable, have small uncertainties of measured polarimetric
parameters, and can be used as calibrators of polarimeters at medium- and
large-size telescopes. | D. Blinov, S. Maharana, F. Bouzelou, C. Casadio, E. GjerlΓΈw, J. Jormanainen, S. Kiehlmann, J. A. Kypriotakis, I. Liodakis, N. Mandarakas, L. Markopoulioti, G. V. Panopoulou, V. Pelgrims, A. Pouliasi, S. Romanopoulos, R. Skalidis, R. M. Anche, E. Angelakis, J. Antoniadis, B. J. Medhi, T. Hovatta, A. Kus, N. Kylafis, A. Mahabal, I. Myserlis, E. Paleologou, I. Papadakis, V. Pavlidou, I. Papamastorakis, T. J. Pearson, S. B. Potter, A. N. Ramaprakash, A. C. S. Readhead, P. Reig, A. SΕowikowska, K. Tassis, J. A. Zensus | 2023-07-12T13:15:39Z | http://arxiv.org/abs/2307.06151v1 | # The RoboPol sample of optical polarimetric standards
###### Abstract
Context: Optical polarimeters are typically calibrated using measurements of stars with known and stable polarization parameters. However, there is a lack of such stars available across the sky. Many of the currently available standards are not suitable for medium and large telescopes due to their high brightness. Moreover, as we find, some of the used polarimetric standards are in fact variable or have polarization parameters that differ from their cataloged values.
Aims: Our goal is to establish a sample of stable standards suitable for calibrating linear optical polarimeters with an accuracy down to \(10^{-3}\) in fractional polarization.
Methods: For five years, we have been running a monitoring campaign of a sample of standard candidates comprised of 107 stars distributed across the northern sky. We analyzed the variability of the linear polarization of these stars, taking into account the non-Gaussian nature of fractional polarization measurements. For a subsample of nine stars, we also performed multiband polarization measurements.
Results: We created a new catalog of 65 stars (see Table 2) that are stable, have small uncertainties of measured polarimetric parameters, and can be used as calibrators of polarimeters at medium- and large-size telescopes.
Conclusions:
## 1 Introduction
Polarimetry, on its own and in combination with other techniques, is a powerful tool for probing the physical conditions of astrophysical sources. As with all experimental techniques, polarimetric observations require careful calibration and control of instrumental systematics. In the case of optical polarimetry, standard stars with known polarization properties are used for calibration purposes. Unfortunately, the number of reliable polarimetric standards is very limited. There are fewer than 30 stars in both hemispheres with polarization degree (PD) known with an accuracy of 0.1% or better, and proven to be stable in time (e.g.
Schmidt et al., 1992; Hsu and Breger, 1982). The lack of an appropriate unpolarized standard star in the night sky at a given moment is common.
The situation is particularly difficult for telescopes with an aperture larger than 1 m. They often have a lower limit on the brightness of sources suitable for observations due to CCD saturation constraints. Meanwhile, most unpolarized standards are very bright (\(<8^{\mathrm{m}}\)), making them unsuitable for calibration on such telescopes. This is because unpolarized standards are selected from nearby stars to ensure that their light does not pass through a significant column of dust in the interstellar medium.
Another problem is the lack of polarized standards with low, but not negligible, PD in the range between 0.1 and 2%. Existing measurements of standards with PD \(>2\%\) have been sufficient to calibrate conventional polarimeters, and there has been no need for covering a lower range of PD. This is because conventional polarimeters have (or are assumed to have) negligible crosstalk between the Stokes parameters, meaning that the parameters are independent and uncorrelated. In this case, one uses: (1) unpolarized (zero- or negligibly-polarized) stars to find _the offset_ of the instrumental \(Q/I\) - \(U/I\) plane with respect to the standard one; (2) highly-polarized stars to find a _rotation_ of the instrumental relative Stokes parameters plane with respect to the standard one (e.g. Ramaprakash et al., 2019). However, some new polarimeters have significant crosstalk (Tinbergen, 2007; Wiersema et al., 2018; Maharana et al., 2022; Wiktorowicz et al., 2023). This crosstalk must be modeled in the entire range of PDs of interest, including the 0.1 to 2% range (the level of ISM-induced stellar polarization in the diffuse ISM). The lack of standards covering a range of polarization values hinders efficient calibration of modern polarimeters where crosstalk between the relative Stokes \(Q/I\) and \(U/I\) parameters is significant.
Finally, a significant fraction of stars that are widely recognized as reliable standards exhibit inconsistent polarization parameters across different sources in the literature, and in some cases, they have been found to be variable (see, e.g. Table 1). A few examples of such studies follow. Hsu and Breger (1982), after monitoring 12 previously used standards, found that 3 of them are variable. Dolan and Tapia (1986) also questioned the stability of 3 standards. Bastien et al. (1988) monitored 13 previously known polarized standard stars and found 11 of them to be variable. Their methods were criticized by Clarke and Naghizadeh-Khouei (1994). However, after considering this criticism and applying more rigorous statistical methods, Bastien et al. (2007) reached a very similar conclusion: out of these 13 standards, 7 show significant variability, while 4 others may also be variable. In a study by Clemens and Tapia (1990), a single-epoch survey of 16 stars previously used as polarization standards was conducted. The study found that four of these stars had significantly different polarization parameters compared to the values previously published. Breus et al. (2021) found that 9 stars used as calibrators in previous studies show variability. As another example, while performing our polarimetric monitoring program, _RoboPol1_, we found that VI Cyg 12 (a.k.a. Cyg OB2 #12 or Schulte 12), which is used as a highly-polarized standard in many observatories, is variable in polarization (see Fig. 1). Indeed, VI Cyg 12 has been shown to be a Luminous Blue Variable with a circumstellar dust shell (Chentsov et al., 2013). The standard deviation of the Electric Vector Position Angle (EVPA) in our measurements of this star is \(>0.8\arcdeg\). Therefore, it should not be used for calibration if a stricter accuracy of the EVPA zero-point calibration is desired. Based on _RoboPol_ monitoring data, BD+33.2642 is suspected to have polarization values different from those previously reported (Skalidis et al., 2018).
Footnote 1: [http://robopol.org](http://robopol.org)
There have been recent attempts to revise the parameters of polarization standards in use or to establish new samples of calibrators. Breus et al. (2021) report on their observations of a large sample of \(\sim\)100 stars that had been considered as calibrators in various studies and offer revised/refined values of the polarization parameters of these stars. Gil-Hutton and Benavidez (2003) proposed a sample of nearby low-polarization stars in the southern hemisphere. Additionally, stars in the solar vicinity with polarization parameters measured for interstellar medium (e.g., Piirola et al., 2020) and white dwarf physics (Zejmo et al., 2017) studies can be used as unpolarized standards. Nevertheless, all candidate standard stars provided in these works are subject to one or several of the aforementioned deficiencies. They are either very bright, or they are not proven to be stable, that is, measured only a few times or measured multiple times over a very short time interval.
In summary, there has been a long-standing need in the optical polarimetry community to establish a large homogeneous list of polarimetric standards that will facilitate easier characterization of instrument performance. The aim of this work is to contribute in this direction.
## 2 Sample of polarization-standard candidates
To meet the challenges of establishing a large set of reliable polarization standards, we selected an initial sample of 121 candidate stars, which was comprised of four independent subsamples:
**Sample B:** 35 polarized stars (PD/\(\sigma_{\rm PD}\geq 3\)) in fields of blazars monitored within the _RoboPol_ program, which did not show any significant variability between 2013 and 2016. _RoboPol_ is a linear optical polarimeter designed for efficient monitoring of point sources such as blazars or stars (see Sect. 3.1 and Ramaprakash et al., 2019). The point sources are placed in a central \(22\times 22\) arcsec masked area, where the sky background is reduced. However, the polarimeter also has a large unmasked field of view (FoV) of \(13\times 13\) arcmin, which allows linear polarimetry of all sources in the field, but with higher noise compared to the central target. High-cadence polarimetric monitoring of about one hundred blazars was performed between 2013 and 2016 (Blinov et al., 2021). Most of the sources were observed several tens to a few hundred times. This provided the same number of observations of stars in the corresponding fields. We analyzed the field-star data and selected 35 sources from these fields, which have shown stable polarization (see Sect. 4) throughout the monitoring period.
**Sample H:** Six stars were selected from the Heiles (2000) catalog, with brightness in the range \(8^{\mathrm{m}}<R<14^{\mathrm{m}}\). Three of them are highly-polarized stars. Three more have low polarization and fill the range in Right Ascension (RA) where there is a lack of low-polarization stars in the other samples.
**Sample L:** 54 photometric standard stars distributed along the celestial equator from Landolt (1992). For selecting these sources we used an atlas of Landolt standards compiled by P. S. Smith2. The selection criteria of stars in this atlas were: 1) Declination \(\delta>-20\arcdeg\); 2) Observed by Landolt (1992) on at least 5 nights; 3) Absence of confirmed or suspected variability. The atlas stars are distributed in 6.8\(\times\)6.8 arcmin fields every one hour
in RA near \(\mathrm{Dec}=0^{\circ}\). We selected 2 to 4 stars per such field, with brightness in the range \(8^{\mathrm{m}}<R<14^{\mathrm{m}}\).
**Sample Z:** 26 unpolarized stars at high Galactic latitudes, from a single epoch survey by Berdyugin et al. (2014) that have fractional polarization \(\mathrm{PD}<0.1\%\), with uncertainties \(\sigma_{\mathrm{PD}}<0.05\%\);
The properties of the selected standard-star candidates are summarized in Table 1, where _Star ID_ prefixes correspond to one of the four samples. The advantages of our sample are that it is widely distributed over the northern sky (see Fig. 2) and partially available from the southern hemisphere. It contains relatively faint stars that are accessible to medium- and large-size telescopes. Moreover, a significant fraction of the sample are Landolt stars. Therefore, they can be used for simultaneous polarimetric and absolute photometric calibration of instruments (i.e., \(I\), \(Q\) and \(U\) Stokes parameters can be calibrated together).
## 3 Observations and data reduction
We have been monitoring the linear polarization parameters of the sample of candidate stars for four consecutive years in order to confirm their stability. The monitoring was performed using the _RoboPol_ polarimeter. Additionally, we performed single-epoch measurements of a small subsample using the Nordic Optical Telescope. These observations are described in the following subsections.
### RoboPol monitoring and data reduction
We carried out our polarimetric monitoring of the selected sample in the Cousins \(R\) and SDSS \(r^{\prime}\) bands from May 2017 to June 2021, using the _RoboPol_ polarimeter at the 1.3 m telescope of the Skinakas observatory. Every year observations were performed from May to November. Because the observatory does not operate during winter months, sources around \(RA=12\) h were insufficiently sampled. Of the initial 121 stars selected for monitoring, 14 were never observed or had poor-quality measurements. For this reason, they have been dropped from the sample. However, most of them were located near other stars in the sample and, therefore, would not have increased much the sky coverage of our final standards catalog.
The polarizing assembly of the polarimeter consists of two half-wave plates and two Wollaston prisms aligned in such a way that any incident ray is split into four rays/channels with the polarization state rotated by 45\({}^{\circ}\) with respect to each other. _RoboPol_ has no moving parts except the filter wheel, which simplifies operations and instrumental polarization modeling. Three Stokes parameters \(q=Q/I\), \(u=U/I\) and \(I\) (the latter only in the case when stars with known magnitude are present in the \(13\times 13\) arcmin FoV) can be measured simultaneously with a single exposure. The optical and mechanical design of _RoboPol_ is described in Ramaprakash et al. (2019). All data for this program were collected in the central masked region of the FoV, where systematic uncertainties are \(<0.1\%\) (Ramaprakash et al., 2019).
The data were processed using the standard _RoboPol_ pipeline, which is described by King et al. (2014), with modifications presented by Blinov et al. (2021). Further corrections were introduced at the calibration stage, using known standard stars measurements. The details of this process are described below.
Standard processing of _RoboPol_ data includes an instrumental polarization correction model. This model was created based on combined measurements, obtained during multiple years, of several unpolarized standard stars in a grid of hundreds of positions uniformly covering the FoV (King et al., 2014). It therefore approximates well the large-scale instrumental polarization variation across the entire FoV. However, we discovered that for stars measured in the central masked area there is a residual instrumental polarization, unaccounted for by the model, which depends on the \((x,y)\) source position on the CCD. In Fig. 3, we show an example of such subtle instrumental polarization changes for unpolarized standards measured in 2019. A clear position-dependent trend with an amplitude of \(\sim 0.5\%\) in the measured values of the relative Stokes parameters can be seen. Since all measurements discussed in this work were obtained in the central masked area, we had to correct them for this trend. We approximated these \(q(x,y)=\frac{Q}{I}(x,y)\) and \(u(x,y)=\frac{U}{I}(x,y)\) dependencies with a quadratic surface for each observing season separately. Using these fits, we corrected all measurements of the corresponding seasons. Then, we determined the standard deviations in the \(q\) and \(u\) estimates for the unpolarized standards, and propagated these values with the corresponding uncertainties of the standard-candidate measurements.

Figure 1: R-band relative Stokes parameters of VI Cyg 12 (black circles) in comparison with two other standards BD+32.3739 (blue triangles) and HD212311 (green squares). The horizontal red lines and the pink areas represent \(Q/I\) and \(U/I\) values for VI Cyg 12 from Schmidt et al. (1992) with corresponding uncertainties. Values for the two other stars are shifted in such a way that their average Stokes parameters match the red line for visualization purposes. In spite of larger photon noise uncertainties, it is clear that the Stokes parameters of VI Cyg 12 are significantly variable and systematically deviate from their catalogue values during long periods of time.
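The per-season correction described above amounts to two independent quadratic surface fits. The following is a minimal sketch using NumPy least squares; the positions, coefficients, and target measurement in the example are purely illustrative, not values from the pipeline:

```python
import numpy as np

def fit_quadratic_surface(x, y, s):
    """Least-squares fit of s(x, y) = a + b*x + c*y + d*x*y + e*x**2 + f*y**2,
    where s is a measured relative Stokes parameter (q or u) of unpolarized
    standards at CCD positions (x, y). Returns (a, b, c, d, e, f)."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, s, rcond=None)
    return coeffs

def evaluate_surface(coeffs, x, y):
    a, b, c, d, e, f = coeffs
    return a + b * x + c * y + d * x * y + e * x**2 + f * y**2

# One season of unpolarized-standard residuals (synthetic, for illustration):
rng = np.random.default_rng(0)
x_std, y_std = rng.uniform(0, 200, 50), rng.uniform(0, 200, 50)
q_std = 0.002 + 1e-5 * x_std - 2e-8 * x_std**2 + rng.normal(0, 3e-4, 50)

coeffs_q = fit_quadratic_surface(x_std, y_std, q_std)
# Correct a target star measured at CCD position (105, 98) in the same season:
q_corrected = 0.0123 - evaluate_surface(coeffs_q, 105.0, 98.0)
```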
We found the rotation of the instrumental \(q\) - \(u\) plane with respect to the standard reference frame using highly-polarized standards (Table 1) that were monitored along with the standard-candidate sample. Since individual catalog values for these standards are unreliable (see Sect. 1), we used a statistical approach: the entire sample was considered, including stars that are known to be variable (e.g., VI Cyg 12). For each standard we computed the weighted mean of the relative Stokes parameters, combining all measurements along the observing period. Then, using these \(q\), \(u\) estimates, we calculated the corresponding \(\mathrm{EVPA_{RoboPol}}\) value of each star in our measurements, and found the difference between this value and the one reported in the literature, \(\mathrm{EVPA_{cat}}\). For stars with multiple values reported in the literature, we used either the value with the smallest uncertainty, or the most recent measurement if the uncertainties are comparable. The \(\mathrm{EVPA_{cat}}\) values used are marked with asterisk symbols in Table 1. The corresponding differences between the _RoboPol_ and literature estimates are shown in Fig. 4. We found the weighted mean of \(\mathrm{EVPA_{RoboPol}}-\mathrm{EVPA_{cat}}\) to be \(1.1\pm 0.5^{\circ}\), after applying \(3\sigma\)-clipping, which excluded _CMaR1 24_ from the averaging. This value was used as the instrumental EVPA zero-point correction, and all measurements were adjusted for it, while uncertainties were propagated accordingly.
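A minimal sketch of the zero-point estimate follows: a weighted mean of the EVPA differences with iterative \(3\sigma\) clipping. The clipping criterion below (on the sample scatter) is an assumption, since the exact convention is not specified above:

```python
import numpy as np

def evpa_zero_point(delta_evpa, sigma, n_sigma=3.0):
    """Weighted mean of EVPA(RoboPol) - EVPA(cat) with iterative sigma
    clipping; returns the zero point, its standard error, and the mask of
    retained stars. delta_evpa and sigma are in degrees."""
    delta_evpa, sigma = np.asarray(delta_evpa, float), np.asarray(sigma, float)
    keep = np.ones(delta_evpa.size, dtype=bool)
    while True:
        w = 1.0 / sigma[keep] ** 2
        mean = np.sum(w * delta_evpa[keep]) / np.sum(w)
        scatter = np.std(delta_evpa[keep])
        new_keep = np.abs(delta_evpa - mean) < n_sigma * scatter
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    err = np.sqrt(1.0 / np.sum(1.0 / sigma[keep] ** 2))
    return mean, err, keep

# Twelve well-behaved stars plus one outlier (synthetic, for illustration):
diffs = np.array([1.3, 0.9, 1.0, 1.4, 1.1, 0.8, 1.2,
                  1.0, 0.9, 1.3, 1.1, 1.0, 9.0])
print(evpa_zero_point(diffs, np.full(diffs.size, 0.5)))  # outlier is clipped
```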
We assessed the possibility that the polarimeter suffers from crosstalk between the relative Stokes parameters by measuring their covariance. We first corrected the measurements of unpolarized standards for the polarization and EVPA zero points as described above. Then, we calculated the correlation coefficient between \(q\) and \(u\) for each individual standard. No significant systematic correlation was found among the stars. We also calculated the correlation coefficient, \(r\), for the set of standard-star measurements of each season. In all cases, \(|r|\) does not exceed 0.42, while the median value among seasons is \(r=-0.17\). Therefore, we conclude that the crosstalk between channels of the polarimeter is negligible with respect to the noise level.

\begin{table}
\begin{tabular}{l c c c c c l} \hline
Source & RA & Dec & Band & PD (\%) & EVPA (\({}^{\circ}\)) & Reference \\ \hline
BD+57.2615 & 22:47:49.6 & +58:08:50 & \(R\) & \(2.02\pm 0.05\) & \(41.0\pm 1.0\) & Whittet et al. (1992) \\
BD+59.389\({}^{*}\) & 02:02:42.1 & +60:15:26 & \(R\) & \(6.430\pm 0.022\) & \(98.14\pm 0.10\) & Schmidt et al. (1992) \\
BD+64.106 & 00:57:36.7 & +64:51:35 & \(R\) & \(5.150\pm 0.098\) & \(96.74\pm 0.54\) & Schmidt et al. (1992) \\
CMaR1 24 & 07:04:47.4 & \(-\)10:56:18 & \(R\) & \(3.18\pm 0.09\) & \(86.0\pm 1.0\) & Whittet et al. (1992) \\
CygOB2 14 & 20:32:16.6 & +41:25:36 & \(R\) & \(3.13\pm 0.05\) & \(86.0\pm 1.0\) & Whittet et al. (1992) \\
HD147283 & 16:21:57.7 & \(-\)24:29:44 & \(R\) & \(1.59\pm 0.03\) & \(174.0\pm 1.0^{*}\) & Whittet et al. (1992) \\
HD147343 & β³ & β³ & β³ & β³ & β³ & Whittet et al. (1992) \\
HD150193 & 16:40:17.9 & \(-\)23:53:45 & \(R\) & \(5.19\pm 0.05\) & \(56.0\pm 1.0\) & Whittet et al. (1992) \\
HD154445\({}^{c}\) & 17:05:32.3 & \(-\)00:53:31 & \(R\) & \(3.683\pm 0.072\) & \(88.91\pm 0.56\) & Schmidt et al. (1992) \\
\multicolumn{7}{c}{\(\cdots\)} \\ \hline
\end{tabular}
\end{table}
We also verified that the polarimetric efficiency of the instrument is 100% within measurement errors, i.e., that the polarimetric accuracy of the measurements is independent of the source polarization. To this end, we corrected the Stokes parameters of the highly-polarized stars for the zero points, and then compared them with the corresponding literature values. The orthogonal distance regression fits to the data are consistent with the expected \(q_{\mathrm{RoboPol}}=q_{\mathrm{cat}}\) and \(u_{\mathrm{RoboPol}}=u_{\mathrm{cat}}\) dependencies within \(1\sigma\), as shown in Fig. 5.
### 3.2 NOT observations and data analysis
We selected a subset of 10 standard candidates based on visibility, the limited observing time available for this program, and the preliminary stability observed in the _RoboPol_ data. Subsequently, we observed these candidates using the Nordic Optical Telescope (NOT) under proposal 61-608. The Alhambra Faint Object Spectrograph and Camera (ALFOSC) instrument3 was used in its polarimetric mode. It is a two-channel polarimeter consisting of a rotating half-wave plate (HWP) and a calcite plate. For each object, two so-called \(o\) and \(e\) beam images are formed, corresponding to the 0\({}^{\circ}\) and 90\({}^{\circ}\) polarizations coming out of the calcite plate, respectively. Standard candidates were observed with sequences consisting of 8 exposures, corresponding to HWP positions of 0\({}^{\circ}\) to 180\({}^{\circ}\) in steps of 22.5\({}^{\circ}\). This yields 4 \(Q/I\) and \(U/I\) measurements for each star. Observations were performed during the nights of 8 and 23 September and 13 December 2020 in multiple filters. Most of the stars were observed in the SDSS \(g\), \(r\), \(i\) and Johnson-Cousins \(B\), \(V\) and \(R\) bands, while two stars were also observed in the \(U\) band. The unpolarized standard stars BD+28.4211 and HD 212311, and the highly-polarized standards Hiltner 960, VI Cyg 12, BD+59.389, BD+64.106 and HD 204827 were observed during the same nights as the program targets. These standards were observed with sequences consisting of 16 exposures, corresponding to HWP positions of 0\({}^{\circ}\) to 360\({}^{\circ}\) in steps of 22.5\({}^{\circ}\). This provided 8 \(Q/I\) and \(U/I\) measurements for each star.

Figure 2: Distribution of the sample stars over the sky. Samples B, H, L and Z are described in Sect. 2. The 14 stars marked as "not observed", which have zero measurements, are excluded from the final sample. The highly-polarized and unpolarized stars are standards used in previous studies, listed in Table 1. The symbol size indicates the polarization of individual stars from Table 4.

Figure 3: Relative Stokes parameters of zero-polarization standards from Table 1 observed in 2019 in the central masked area, as a function of the position on the CCD. The color of the points indicates the deviation of \(Q/I\) or \(U/I\) from zero. The planar contours show the fitted quadratic surface.

Figure 4: Differences between the weighted average of the observed EVPA and the corresponding catalogue values for the 14 most reliable highly-polarized standards. The weighted mean value for 13 stars (CMaR1 24 is excluded by \(3\sigma\)-clipping) is shown by the solid red line, while the standard error of the mean is shown by the red dashed lines.
For the analysis of the raw data, we developed a semi-automated data reduction pipeline in Python. Attention was paid to error estimation and propagation in each step of the analysis. Photometry was done using the aperture photometry package of the _Photutils_ library4. To find the polarization parameters, we followed the procedure of Patat & Romaniello (2006). For each HWP position \(\{\theta_{i}=i\times 22.5^{\circ}\mid i\in\{0,1,\ldots,N-1\}\}\) we calculated the normalized flux differences between the \(o\) and \(e\) star images:
Footnote 4: [https://photutils.readthedocs.io/en/stable/](https://photutils.readthedocs.io/en/stable/)
\[F_{i}=\frac{f_{o,i}-f_{e,i}}{f_{o,i}+f_{e,i}}. \tag{1}\]
Then the relative Stokes parameters were expressed as:
\[q\equiv\frac{Q}{I}=\frac{2}{N}\sum_{i=0}^{N-1}F_{i}\cos\left(\frac{\pi}{2}i\right) \tag{2}\]
\[u\equiv\frac{U}{I}=\frac{2}{N}\sum_{i=0}^{N-1}F_{i}\sin\left(\frac{\pi}{2}i\right). \tag{3}\]
Using these \(q\) and \(u\) estimates we inferred PD and EVPA, and their uncertainties, as described in Sect. 3.3. Then, we examined the dependencies of the PD and EVPA estimates on the photometry aperture radius, and selected the optimal aperture and annulus radii, where both parameters reach a plateau and the uncertainties are minimal. The same procedure was performed for the observations of the standard stars. Using their polarization parameters, we found the instrumental polarization and EVPA zero points in each band individually. However, in the \(U\) band, no standards were observed either during our observations or during adjacent nights. In the SDSS \(g\) and \(r\) bands, only a single standard star measurement (BD+28.4211) was available. Therefore, we fitted the dependencies of the instrumental \(q\) and \(u\) on the effective wavelength using all other bands with a linear function. Using these fits, we determined the instrumental zero points of the relative Stokes parameters in each band and applied corrections to all measurements based on these values.
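Equations (1)-(3) translate directly into code. The following is a minimal sketch of this step (NumPy only; the synthetic fluxes in the example are purely illustrative):

```python
import numpy as np

def stokes_from_hwp_sequence(f_o, f_e):
    """Relative Stokes parameters from an HWP sequence, following Eqs. (1)-(3).

    f_o, f_e : fluxes of the ordinary and extraordinary star images taken at
               HWP angles 0, 22.5, 45, ... degrees (N exposures in total).
    """
    f_o, f_e = np.asarray(f_o, float), np.asarray(f_e, float)
    F = (f_o - f_e) / (f_o + f_e)                            # Eq. (1)
    i = np.arange(F.size)
    q = 2.0 / F.size * np.sum(F * np.cos(np.pi / 2.0 * i))   # Eq. (2)
    u = 2.0 / F.size * np.sum(F * np.sin(np.pi / 2.0 * i))   # Eq. (3)
    return q, u

# 8-exposure sequence (HWP at 0..157.5 deg) of a source with q=0.02, u=0.01:
theta = np.deg2rad(np.arange(8) * 22.5)
F_true = 0.02 * np.cos(4 * theta) + 0.01 * np.sin(4 * theta)
f_o = 1e5 * (1 + F_true) / 2
f_e = 1e5 * (1 - F_true) / 2
print(stokes_from_hwp_sequence(f_o, f_e))  # -> approximately (0.02, 0.01)
```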
### 3.3 Polarization parameter estimates and their uncertainties
The polarization degree and its uncertainty were calculated assuming that the relative Stokes parameters \(q=Q/I\) and \(u=U/I\) follow a normal distribution,
\[\mathrm{PD}=\sqrt{q^{2}+u^{2}},\ \ \sigma_{\mathrm{PD}}=\sqrt{\frac{q^{2} \sigma_{\mathrm{q}}^{2}+u^{2}\sigma_{\mathrm{u}}^{2}}{q^{2}+u^{2}}}. \tag{4}\]
Any linear polarization measurement is biased towards higher PD values (Serkowski, 1958). The PD follows a Rician distribution (Rice, 1945) and significantly deviates from the normal distribution at low signal-to-noise ratios. There is a variety of methods suggested for correction of this bias (e.g. Simmons & Stewart, 1985; Vaillancourt, 2006). Our catalog provides the flexibility to select any debiasing method as the relative Stokes parameters constitute our ultimate data product. The data presented in this paper remain uncorrected for polarization bias. The relative Stokes parameters themselves, of course, are unbiased quantities.
The EVPA is defined as
\[\mathrm{EVPA}=\frac{1}{2}\operatorname{atan2}(u,q),\qquad\operatorname{atan2}(u,q)=\begin{cases}\arctan\left(\frac{u}{q}\right)&q>0\\ \arctan\left(\frac{u}{q}\right)+\pi&u\geq 0,\ q<0\\ \arctan\left(\frac{u}{q}\right)-\pi&u<0,\ q<0\\ \frac{\pi}{2}&u>0,\ q=0\\ -\frac{\pi}{2}&u<0,\ q=0\\ \text{undefined}&u=0,\ q=0\end{cases} \tag{5}\]
while its measurements are also non-Gaussian and defined by the following probability density (Naghizadeh-Khouei & Clarke, 1993):
\[G(\theta;\theta_{o};\mathrm{PD}_{o})=\frac{1}{\sqrt{\pi}}\left\{\frac{1}{\sqrt{\pi}}+\eta_{o}e^{\eta_{o}^{2}}\left[1+\mathrm{erf}(\eta_{o})\right]\right\}\exp\left(-\frac{\mathrm{PD}_{o}^{2}}{2\sigma_{\mathrm{PD}}^{2}}\right), \tag{6}\]
where \(\eta_{o}=\mathrm{PD}_{o}\cos 2(\theta-\theta_{o})/(\sigma_{\mathrm{PD}}\sqrt{2})\), erf is the Gaussian error function, \(\mathrm{PD}_{o}\) and \(\theta_{o}\) are the true values of PD and EVPA, and \(\sigma_{\mathrm{PD}}\) is the uncertainty of PD5.
Footnote 5: The equation for \(\eta_{o}\) is missing the factor of 2 in the cosine argument in Clarke (2009), while the correct formula is provided in Naghizadeh-Khouei & Clarke (1993).
We determine the EVPA uncertainty \(\sigma_{\theta}\) numerically, by solving the following integral:

\[\int_{-\sigma_{\theta}}^{\sigma_{\theta}}G(\theta;0;\mathrm{PD}_{o})\,d\theta=68.27\%. \tag{7}\]
Figure 5: Weighted mean values of the Stokes parameters of high-polarization standards as measured by _RoboPol_ vs. the catalogue values from Table 1. The black dashed line is \(y=x\); the red solid line is the ordinary least squares regression to the data. The light blue region is the \(1\sigma\) uncertainty region of the fit.
The true value of PD in this procedure was estimated following Vaillancourt (2006) as:
\[\mathrm{PD}_{o}=\left\{\begin{aligned} 0&\quad\mathrm{for} \;\mathrm{PD}/\sigma_{\mathrm{PD}}<\sqrt{2}\\ \sqrt{\mathrm{PD}^{2}-\sigma_{\mathrm{PD}}^{2}}&\quad \mathrm{for}\;\mathrm{PD}/\sigma_{\mathrm{PD}}\geq\sqrt{2}.\end{aligned}\right. \tag{8}\]
For high SNR values, \(\mathrm{PD}/\sigma_{\mathrm{PD}}\geq 20\), the uncertainty of the EVPA was approximated as \(\sigma_{\theta}=\sigma_{\mathrm{PD}}/(2\,\mathrm{PD})\) (in radians).
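Equations (4)-(8) can be combined into a single routine. Below is a minimal sketch using SciPy for the error function, the numerical integration of Eq. (7), and the root solve; the example values at the bottom are illustrative only:

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad
from scipy.optimize import brentq

def pd_and_sigma(q, u, sq, su):
    """Eq. (4): polarization degree and its uncertainty (not debiased)."""
    pd = np.hypot(q, u)
    return pd, np.sqrt(q**2 * sq**2 + u**2 * su**2) / pd

def debiased_pd(pd, s_pd):
    """Eq. (8): Vaillancourt (2006) estimator of the true PD."""
    return 0.0 if pd / s_pd < np.sqrt(2) else np.sqrt(pd**2 - s_pd**2)

def evpa_density(theta, pd0, s_pd):
    """Eq. (6), with theta_0 = 0 and theta in radians."""
    eta = pd0 * np.cos(2 * theta) / (s_pd * np.sqrt(2))
    return (1 / np.sqrt(np.pi)) * (1 / np.sqrt(np.pi)
            + eta * np.exp(eta**2) * (1 + erf(eta))) \
        * np.exp(-pd0**2 / (2 * s_pd**2))

def evpa_sigma(pd, s_pd):
    """Eq. (7): solve for the 68.27% EVPA interval half-width (radians)."""
    if pd / s_pd >= 20:                   # high-SNR approximation
        return s_pd / (2 * pd)
    pd0 = debiased_pd(pd, s_pd)
    f = lambda s: quad(evpa_density, -s, s, args=(pd0, s_pd))[0] - 0.6827
    return brentq(f, 1e-6, np.pi / 2)

pd, s_pd = pd_and_sigma(0.012, 0.005, 0.002, 0.002)
print(pd, s_pd, np.degrees(evpa_sigma(pd, s_pd)))
```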
## 4 Analysis of variability
We assessed variability of the sample stars following Clarke et al. (1993) and Bastien et al. (2007). The method can be summarized as follows. If measurements of the relative Stokes parameters \(q\) and \(u\) are independent and follow normal distributions with means \(q_{0}\) and \(u_{0}\), then the statistic
\[\wp=\sqrt{\left(\frac{q}{\sigma_{q}}\right)^{2}+\left(\frac{u}{\sigma_{u}} \right)^{2}}, \tag{9}\]
as demonstrated by Simmons & Stewart (1985), follows the Rician distribution (Rice 1945):
\[f(\wp,\wp_{0})=\wp\exp\left(-\frac{\wp^{2}+\wp_{0}^{2}}{2}\right)J_{0}(i \wp\wp_{0}), \tag{10}\]
where \(i\) is the unit imaginary number, \(J_{0}\) is the zeroth order Bessel function, and \(\wp_{0}=\sqrt{(q_{0}/\sigma_{\mathrm{q0}})^{2}+(u_{0}/\sigma_{\mathrm{u0}})^{ 2}}\). In the case of an unpolarized source (\(\wp_{0}=0\)), Eq. 10 reduces to the Rayleigh distribution:
\[f(\wp,0)=\wp\exp\left(-\frac{\wp^{2}}{2}\right). \tag{11}\]
Then, the cumulative distribution function (CDF) of \(\wp\) is expressed as:
\[\mathrm{CDF}(\wp)=\frac{\int_{0}^{\wp}f(\wp,0)d\wp}{\int_{0}^{\infty}f(\wp,0) d\wp}=1-\exp\left(-\frac{\wp^{2}}{2}\right). \tag{12}\]
In practice, the CDF is approximated by the empirical cumulative distribution function
\[\mathrm{EDF}(\wp)=\frac{\mathrm{number\;of\;observations}<\wp}{\mathrm{total\; number\;of\;observations}}. \tag{13}\]
The EDF can deviate significantly from the CDF in two cases: (1) the source has variable polarization; (2) the uncertainties \(\sigma_{q}\) and \(\sigma_{u}\) are incorrectly estimated. Since in either of these cases the measurements of a star cannot be considered for establishing it as a standard, we do not distinguish between them.
In the case of a polarized source, its weighted means of the measured normalized Stokes parameters \(\overline{q}\) and \(\overline{u}\) were used as estimates for \(q_{0}\) and \(u_{0}\) in order to reduce the polarization to zero:
\[\wp_{\mathrm{reduced}}=\sqrt{\left(\frac{q-\overline{q}}{\sigma_{q}}\right)^ {2}+\left(\frac{u-\overline{u}}{\sigma_{u}}\right)^{2}}. \tag{14}\]
In order to assess whether the EDF significantly deviates from the CDF of a constant source given by Eq. 12, we used a two-sided Kolmogorov-Smirnov (KS) test. If the p-value of the KS test exceeds a given threshold, we consider the star as non-variable and suitable for use as a standard. Otherwise, we consider it unsuitable. As mentioned earlier, if the p-value of the KS test is below the threshold, the star may indeed be variable, or our uncertainty estimates of its measurements may be incorrect. We do not discriminate between these two cases. We use a threshold of \(p=0.0455\), corresponding to a \(2\sigma\) confidence level, to assess the variability of stars. However, we also provide the p-values for all stars in the sample with \(\geq 5\) measurements in Table 4. This allows one to select a more or less robust sample of standards by filtering stars based on the p-value and choosing a different confidence level if desired. We do not perform the test for stars with fewer than 5 measurements and mark them as uncertain.
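Since the CDF of Eq. (12) is exactly the unit-scale Rayleigh distribution, the whole test reduces to a few lines. A minimal sketch follows (the synthetic stable-star data at the bottom are for illustration only):

```python
import numpy as np
from scipy import stats

def variability_p_value(q, u, sq, su):
    """Two-sided KS test of the reduced statistic of Eq. (14) against the
    Rayleigh distribution expected for a stable source, Eq. (12)."""
    q, u, sq, su = map(np.asarray, (q, u, sq, su))
    q0 = np.sum(q / sq**2) / np.sum(1 / sq**2)       # weighted mean of q
    u0 = np.sum(u / su**2) / np.sum(1 / su**2)       # weighted mean of u
    p_red = np.hypot((q - q0) / sq, (u - u0) / su)   # Eq. (14)
    # scipy's 'rayleigh' CDF with loc=0, scale=1 is exactly Eq. (12)
    return stats.kstest(p_red, "rayleigh").pvalue

rng = np.random.default_rng(1)
n = 30
q = 0.020 + rng.normal(0.0, 1e-3, n)
u = -0.010 + rng.normal(0.0, 1e-3, n)
p = variability_p_value(q, u, np.full(n, 1e-3), np.full(n, 1e-3))
print("stable" if p > 0.0455 else "variable or misestimated errors", p)
```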
We note that in the procedure described above the variability evaluation is based purely on the behaviour of the fractional polarization, while information about the EVPA is completely ignored. However, there is a possibility that in peculiar cases the polarization vector can produce nearly perfect loops on the \(Q/I\) - \(U/I\) plane. In such a situation, \(\wp_{\mathrm{reduced}}\) remains constant, while the EVPA changes with time. For instance, binary stars with envelopes symmetric about their orbital plane can produce such polarization variability (Brown et al. 1978). In order to avoid identifying stars with this variability pattern as stable, we visually inspected the distribution of measurements on the relative Stokes parameters plane for each source. We did not find any such falsely stable stars during this inspection.
## 5 Results
We obtained 696 \(R\)-band and 296 SDSS \(r^{\prime}\)-band measurements of 107 stars with _RoboPol_ that are listed in Table 2. Additionally, for nine stars we obtained multi-band polarization measurements with ALFOSC, which are presented in Table 3. We did not find any significant systematic difference in the relative Stokes parameters between the \(R\) and \(r^{\prime}\) bands. Therefore, we combine all measurements in these two bands and consider them together. For each star in the sample, we constructed plots of the time series data showing the evolution of the fractional polarization PD, the EVPA, and the relative Stokes parameters \(Q/I\) and \(U/I\). These monitoring data were analyzed using the method described in Sect. 4: the EDF of the normalized fractional polarization given by Eq. 14 was computed for each star, and then compared with the distribution given by Eq. 12, which is the expected cumulative distribution of the same quantity for a stable source with the same noise level. The time series, CDF, EDF and the distribution of measurements on the relative Stokes parameters plane for B_0017+8135_82 are shown, as an example, in Figure 6. Similar plots for all other sources in the sample are available only in the electronic version in Appendix B. As a result, we found the average polarimetric parameters for each star in the sample and classified the stars as stable or variable at the \(2\sigma\) confidence level. These parameters and classes are listed in Table 4. For the reader's convenience, we list the stable stars separately in Table 2, together with their average relative Stokes parameters and Gaia \(G\)-band magnitude. We arbitrarily place the limit between high- and low-polarization stars at \(\mathrm{PD}=0.5\%\) in Table 2. This information, along with finding charts for all sample stars, can also be accessed online at [https://robopol.physics.uoc.gr/standards](https://robopol.physics.uoc.gr/standards).
L_PG2349+002, L_92_249, L_92_248, L_111_1969 and L_PG2213\(-\)006A were selected among Landolt photometric standards, that is, they are expected to have stable total flux density. However, these stars exhibit significant polarization variability.
H_GSC02355 was selected as a high-polarization star from Heiles (2000), where it has \(\mathrm{PD}=5.058\%\). However, in our measurements, this star is variable and has a higher average polarization of 6.0%. H_HD57702 was selected as a low-polarization star from Heiles (2000), where it has a PD of 0.040 \(\pm\) 0.069%. However, in our measurements, this star is 0.33% polarized.
For the stars L_PG1323\(-\)085D, Z_HD153752, H_HD344776 and L_111_1969, the EDF of the reduced PD (see Sect. 4) lies entirely to the left of the theoretical CDF. This means that these stars are more stable than one would expect from the uncertainties of their PD measurements. Since we used the two-sided KS test, these stars are classified as variable. However, it is likely that the uncertainties in the relative Stokes parameters for these four sources are overestimated, and the stars are in fact stable.
## 7 Conclusions
We obtained 1044 polarization measurements of 107 stars using two different polarimeters. Most observations were performed in the Cousins \(R\) and SDSS \(r^{\prime}\) bands with the _RoboPol_ polarimeter over a four-year time interval. After applying a variability analysis to these monitoring data, we have selected 65 stars that have \(\geq 5\) measurements and do not demonstrate significant variability of their linear polarization in the red bands. These stars are listed in Table 2 and can be used as optical polarimetric standards for the calibration of instrumental polarization. For 24 stars we did not have enough data to conclude whether they are variable or stable, while the remaining 18 stars were found to exhibit significant variability in polarization.
## Data availability
All data discussed in this paper are available in Harvard Dataverse at [https://doi.org/10.7910/DUN/IV9TXX](https://doi.org/10.7910/DUN/IV9TXX).
## Acknowledgements
We thank T. Pursimo and S. Armas Perez for assistance with the NOT observations. D.B., S.K., N.M., V.P., R.S., and K.T. acknowledge support from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation program under the grant agreement No 771282. A.S. acknowledges the Polish National Science Centre grant 2017/25/B/ST9/02805. This work was supported by the NSF grant AST-2109127. The data presented here were obtained in part with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOT.
Figure 6: Evolution of polarization parameters of B_0017+8135_82, which is found to be variable. (a, b) - Evolution of the relative Stokes parameters. The dashed black line shows the weighted average, the red dashed lines show the corresponding 1\(\sigma\) uncertainty. (c) - Distribution of measurements on the relative Stokes parameters plane. (d, e) - Evolution of the polarization degree and the electric vector position angle. The dashed black line shows the weighted average, the red dashed lines show the corresponding 1\(\sigma\) uncertainty. (f) - EDF of measured polarization in both bands together with expected CDF of polarization measurements for a constant source with similar uncertainties. |
2310.17493 | A Hybrid Graph Network for Complex Activity Detection in Video | Interpretation and understanding of video presents a challenging computer
vision task in numerous fields - e.g. autonomous driving and sports analytics.
Existing approaches to interpreting the actions taking place within a video
clip are based upon Temporal Action Localisation (TAL), which typically
identifies short-term actions. The emerging field of Complex Activity Detection
(CompAD) extends this analysis to long-term activities, with a deeper
understanding obtained by modelling the internal structure of a complex
activity taking place within the video. We address the CompAD problem using a
hybrid graph neural network which combines attention applied to a graph
encoding the local (short-term) dynamic scene with a temporal graph modelling
the overall long-duration activity. Our approach is as follows: i) Firstly, we
propose a novel feature extraction technique which, for each video snippet,
generates spatiotemporal `tubes' for the active elements (`agents') in the
(local) scene by detecting individual objects, tracking them and then
extracting 3D features from all the agent tubes as well as the overall scene.
ii) Next, we construct a local scene graph where each node (representing either
an agent tube or the scene) is connected to all other nodes. Attention is then
applied to this graph to obtain an overall representation of the local dynamic
scene. iii) Finally, all local scene graph representations are interconnected
via a temporal graph, to estimate the complex activity class together with its
start and end time. The proposed framework outperforms all previous
state-of-the-art methods on all three datasets including ActivityNet-1.3,
Thumos-14, and ROAD. | Salman Khan, Izzeddin Teeti, Andrew Bradley, Mohamed Elhoseiny, Fabio Cuzzolin | 2023-10-26T15:49:35Z | http://arxiv.org/abs/2310.17493v2 | # A Hybrid Graph Network for Complex Activity Detection in Video
###### Abstract
Interpretation and understanding of video presents a challenging computer vision task in numerous fields - e.g. autonomous driving and sports analytics. Existing approaches to interpreting the actions taking place within a video clip are based upon Temporal Action Localisation (TAL), which typically identifies short-term actions. The emerging field of **Complex Activity** **D**etection (CompAD) extends this analysis to long-term activities, with a deeper understanding obtained by modelling the internal structure of a complex activity taking place within the video.
We address the CompAD problem using a hybrid graph neural network which combines attention applied to a graph encoding the local (short-term) dynamic scene with a temporal graph modelling the overall long-duration activity. Our approach is as follows: i) Firstly, we propose a novel feature extraction technique which, for each video snippet, generates spatiotemporal 'tubes' for the active elements ('agents') in the (local) scene by detecting individual objects, tracking them and then extracting 3D features from all the agent tubes as well as the overall scene. ii) Next, we construct a local scene graph where each node (representing either an agent tube or the scene) is connected to all other nodes. Attention is then applied to this graph to obtain an overall representation of the local dynamic scene. iii) Finally, all local scene graph representations are interconnected via a temporal graph, to estimate the complex activity class together with its start and end time.
The proposed framework outperforms all previous state-of-the-art methods on all three datasets including ActivityNet-1.3, Thumos-14, and ROAD.
## 1 Introduction
Detecting and recognising activities in untrimmed videos is a challenging research problem, with applications to, e.g., sports [14], autonomous driving [9], medical robotics [53] and surveillance [58]. _Temporal Action Localisation_ (TAL) approaches not only recognise the action label(s) present in a video, but can also identify the start and end time of each activity instance, enabling the generation of sports highlights [25, 60], the understanding of road scenes in autonomous driving [21], the video summarisation of surveillance videos [54] and video captioning [23, 38]. A number of TAL methods [3, 27, 28, 31, 34, 52, 55, 59] have recently been proposed, competing to achieve state-of-the-art performance [12, 61] on accepted benchmarks. Whereas various new datasets have been recently proposed, the two most common relevant benchmarks remain ActivityNet-1.3 [7] and Thumos-14 [16]. State-of-the-art performance on Thumos-14 has improved in four years by some 19% [64].
Almost all TAL approaches comprise a _features/scene representation_ stage and a _temporal localisation_ stage. In the former, snippets (continuous sequences of frames) are processed to understand the local scene in the video. Methods [3, 31, 52, 13] typically employ pre-extracted features obtained using a sequential learning model (e.g., I3D [10]), often pre-trained on the Kinetics [18] dataset. Features are then processed, e.g., via a temporal or semantic graph neural network, by applying appropriate encoding techniques or by generating temporal proposals, in an object detection style [11]. In the second stage, TAL approaches temporally localise activities in various ways, e.g. via temporal graphs [55, 59], boundary regression and proposals generation [27, 13, 28] or encoder-decoder methods [64].
As recently pointed out in e.g. [12, 20], in real-world applications a challenge is posed by _complex activities_, longer-term events comprising a series of elementary actions, often performed by multiple agents. For example, an Autonomous Vehicle (AV) negotiating a pedestrian crossing is engaged in a complex activity: First it drives along the road, then the traffic lights change to red, the vehicle stops and several pedestrians cross the road. Eventually, the lights turn green again and the AV drives off.
While, in theory, TAL methods could be employed to temporally segment complex activities, in practice such approaches are only employed to detect short- or mid-duration actions lasting a few seconds at most (e.g., a person jumping or pitching a baseball). The activities contemplated by the most common datasets are of this nature. Fig. 2 compares two standard TAL benchmarks with the recently released ROAD dataset, explicitly designed for complex activity detection. Activities in the ROAD dataset last longer
than those in ActivityNet or Thumos, with twice as many agents per snippet, making them more complex in nature.
In this paper we argue that standard TAL approaches are ill-equipped to detect complex activities, as they fail to model both the global temporal structure of the activity and its fine-grained structure, in terms of the agents and elementary actions involved and their relations.
We may thus define (strong) _Complex Activity Detection_ (CompAD) as the task of recovering, given an input video, the temporal extent of the activities there present, _as well as_ their inner structure in terms of the agents or elementary actions involved. A weaker CompAD is one in which the only expected output is the temporal segmentation, with the internal structure of the activity estimated as a means for achieving segmentation - with no annotation available.
While a small number of studies have attempted to detect complex, long-duration activities [12, 43, 20], to our knowledge [21] is the only existing study which attempts to tackle CompAD as defined above. The work, however, relies upon the availability of heavily-annotated datasets which include granular labels for the individual actions which make up a complex activity, and the corresponding frame-level bounding boxes. This is a serious limitation, for it prevents [21] from being usable for pure temporal segmentation and from being compared with prior art there. Further, [21] focuses only on the graph representation of snippets, neglecting the long-term modelling of complex activities.
**Objectives**. This work aims to push the boundaries of temporal action localisation to tackle (weak) CompAD, leveraging datasets providing temporal segmentation annotation _only_. We do so by modelling and leveraging a complex video activity's internal structure, but without resorting to any additional fine-grained annotation concerning individual actions. Nevertheless, our proposal is easily generalisable to strong CompAD whenever individual action/agent annotation is available, in an end-to-end trainable setting.
Our **proposed framework** (Fig. 1) is composed of three stages: A) feature extraction; B) a scene graph attention network designed to learn the importance of each active object ('agent') within the local dynamic scene; and C) a temporal graph of attended scene graphs for the localisation of complex activities of arbitrary duration.

Figure 1: An overview of our **Complex Activity Detection** (CompAD) framework. The input video is divided into fixed-size snippets \(S_{1},\dots,S_{N}\) (top); each snippet is then processed in three major steps (bottom). A) Firstly, scene objects (agents) are detected and tracked throughout the snippet to form agent tubes. 3D features are then extracted from all the cropped agent tubes (\(f_{a^{i}}\)) as well as the local scene (\(f_{s_{i}}\)). B) Next, a local scene graph is constructed where agent nodes (in gray) are connected to each other and to the snippet node (in light blue). The local scene graph is processed using a graph attention network (GAT), resulting in intermediate scene features (\(f^{\prime\prime}_{s_{i}}\)). C) Finally, all local scene features associated with individual snippets are temporally connected and processed using a global temporal graph to identify the boundaries of the activity using anchor proposals (shown in different colors).

Figure 2: Comparing TAL datasets (ActivityNet-1.3 and Thumos-14) with a CompAD dataset (ROAD) in terms of average activity and video durations and mean number of agents per snippet.
Our feature extraction scheme (A) differs from what is typically done in prior art, where spatiotemporal features are extracted from whole snippets only. In contrast, we first detect the relevant active objects (_agents_) in the scene and track them within each snippet to build, for each, an _agent tube_ (a series of related bounding boxes). We use a pre-trained tracker to allow the method to be deployable on any dataset with only temporal annotation (no bounding-box annotation for the scene agents is required), while leveraging a fine-grained description of the internal structure of a complex activity in the form of a graph of agents. A pre-trained 3D feature extraction model is used to extract features from both the agent tubes and the whole snippet.
To represent the local dynamic scene within each snippet, a _local scene graph_ (B) is constructed using three different topologies (Sec. 3.2). The local scene graph is then processed using a _scene graph attention (SGAT) network_ to extract an overall scene representation, chosen because of its ability to compute the importance of each node in the context of its neighbours, thus modelling the structure of complex scenes.
Finally (C), the learned local scene graphs are connected to each other by constructing a _temporal graph_, with the aim of recognising the activity label and identifying its temporal boundaries (start and end time) in a class-agnostic manner.
Our main **Contributions** are, therefore:
* An original _hybrid graph network_ approach for general complex activity detection, comprising a local scene graph as well as a global temporal graph, capable of localising both complex and shorter-term activities and able to perform both weak and strong CompAD, depending on what annotation is available.
* A scene graph attention network for learning the importance of each agent in the context of a (local) dynamic scene.
* A temporal graph of activated scene graphs for the detection of the start and end of an activity of arbitrary duration.
* Comprehensive experiments showing how our approach leveraging weak CompAD _outperforms the most recent TAL state-of-the-art across the board_ on ActivityNet-1.3, Thumos-14 and the recent ROad event Awareness Dataset (ROAD) [21, 44], which portrays long-duration road activities involving multiple road agents over sometimes several minutes, showing clear dominance over classical TAL approaches.
## 2 Related Work
Recently, a series of works on activity detection have been proposed including spatiotemporal methods and graph-based methods, with recent advances in Graph Attention showing promise in many applications, including trajectory prediction for autonomous driving [22, 47], social recommendation [45] and 3D object tracking [6]. The state-of-the-art approaches to activity detection and graph approaches are summarised below.
### 2.1 One-stage vs two-stage approaches
TAL methods can be broadly divided into _one-stage_ and _two-stage_ approaches. The former [5, 15, 26, 35] detect actions/activities in a single shot, and can be easily trained in an end-to-end manner. For example, Wang et al. [50] detect actions using transformers which, unlike RNNs, do not suffer from nonparallelism and vanishing gradients. Most one-stage methods, however, only perform action classification, rather than spatiotemporal localisation. In contrast, Lin et al. [26] have recently proposed an anchor-free, one-stage light model which generates proposals, locates actions within them, and classifies them end-to-end.
The latter group of methods [2, 27, 51, 65, 52], instead, consists of two stages, similarly to region-proposal object detectors. The first stage generates suitable proposals for predicting the start and end time of an activity, while the second stage extracts features and processes the proposals before passing them to both a classification head and a regression head (for temporal localisation). Some works, including [27], focus on the first stage to improve the quality of the proposals, while others focus on the processing or refining of the proposals. [65], for instance, uses an off-the-shelf method for proposal generation; its second stage consists of two networks, a 'disentanglement' network to separate the classification and regression representations, and a 'context aggregation' network to add them together. Such methods are not trainable end-to-end and are limited to short- or mid-duration actions.
In contrast, this paper proposes a CompAD method which exhibits the advantages of both classes of methods, thanks to our hybrid graph approach capable of recognising and localising both short- and long-duration activities.
### 2.2 Graph Convolutional Networks approaches
Graph Convolutional Networks (GCNs) have been extensively investigated for TAL [2, 40, 55, 56, 59]. GCN-based TAL methods can also be further divided into two-stage and one-stage methods. The former, once again, perform localisation after generating suitable proposals. E.g., in [59] two different types of boundary proposals are generated and then individually passed to the same graph, resulting in both an action label and a temporal boundary.
One-stage GCN-based TAL methods, instead, solve the detection problem without proposals in one go by learning spatiotemporal features in an end-to-end manner. For example, in [55] a graph is first generated by connecting the snippets both temporally and by virtue of meaningful semantics. This graph is then divided into sub-graphs (anchors), where each anchor represents the activity in an
untrimmed video. In contrast, [21] proposed a spatiotemporal scene graph-based long-term TAL method where each of the snippets is considered as a separate graph, which is heavily dependent on the particular actions present in the scene, and is only applicable to datasets providing (label and bounding box) annotations for each individual action.
This study proposes leveraging GCNs, by incorporating them in an overall hybrid graph capable of modelling both the local scenes, via a Graph Attention Network (GAT) [48], and the overall global activity via a temporal graph. GATs build on the transformer concept by applying attention to graphs, and were originally proposed in [48] for node classification. The idea is to update the representation of the current node with respect to its neighbours by applying attention to learn the importance of the various connections.
In this paper, GAT is used at the local scene level to learn the features of each node (active agent tube), to generate a more robust local scene representation.
## 3 Proposed Methodology
The proposed framework is illustrated in Fig. 1. An input untrimmed video \(V\) is divided into \(N\) snippets \(S\) = \(S_{1}\),..., \(S_{i}\), \(S_{i+1}\),..., \(S_{N}\) (each snippet is a pre-defined, constant-length sequence of consecutive frames). Each snippet is then passed to the tube detection and feature extraction module, which returns feature vectors for both the snippet, \(f_{s_{i}}\), and the individual agent tubes, \(f_{a_{i}^{1}}\), \(f_{a_{i}^{2}}\),...,\(f_{a_{i}^{n}}\), where \(n\) is the number of agents present in snippet \(i\). These features are then forwarded to the local scene graph attention layer, which learns the attention of each agent in the context of its neighbours and returns an aggregate feature representation for the whole scene (\(f_{s_{i}}^{\prime\prime}\)). These aggregate local scene features for all the snippets, \(f_{s_{1}}^{\prime\prime}\),\(f_{s_{2}}^{\prime\prime}\),...,\(f_{s_{N}}^{\prime\prime}\), are then connected using a global temporal graph for the generation of the activity class label \(\hat{y}_{s_{i}}\) and the activity boundary labels \(\hat{y}_{br_{i}}\), using anchor proposals in a class-agnostic manner.
### 3.1 Tube Detection and Feature Extraction
As mentioned, one of the contributions of this paper is a new strategy for feature extraction and representation, which consists in analysing the finer structure of the local dynamic scene, rather than extracting features from whole snippets only.
**Objects Detection and Tracking**. We first detect scene objects in each frame of the snippet using an object detector pre-trained on the COCO dataset [29], which comprises 80 different types of objects. However, we select a relevant list of object types (_agents_) which depends on the dataset (the lists of classes selected for each dataset are given in Section 4.2 - Implementation Details). The detected agents are then tracked using a pre-trained tracker throughout the snippet in order to construct agent tubes. The latter are of variable length, depending on the agent itself and its role in each snippet. Fig. 3 pictorially illustrates agent tubes in an example video segment from the ROAD dataset, showing both a sample frame, with the detection bounding boxes overlaid, and a bird's-eye view of the agent tubes and local scene graph for the snippet it belongs to.
**Feature Extraction**. Next, all the detected agent tubes are brought to a standard size and passed to the pre-trained 3D CNN model along with the whole snippet for spatiotemporal feature extraction. The adopted 3D CNN model allows variable length inputs, and outputs a fixed-sized feature vector for each of the agent tubes and the snippet.
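A minimal sketch of this stage is given below. The `detect`, `track`, and `i3d_features` callables stand in for the pre-trained detector, tracker, and 3D CNN, whose exact interfaces are not specified here; the nearest-neighbour `resize` is likewise a stand-in for a proper image resizer:

```python
import numpy as np

def resize(img, size):
    """Nearest-neighbour resize to (H, W); stand-in for a proper resizer."""
    ys = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
    return img[ys][:, xs]

def extract_snippet_features(frames, detect, track, i3d_features,
                             tube_size=(112, 112)):
    """Sketch of Stage A for one snippet.

    frames       : list of HxWx3 uint8 frames
    detect       : frame -> list of (box, agent_class) detections
    track        : per-frame detections -> {track_id: [(frame_idx, box), ...]}
    i3d_features : clip of shape (T, h, w, 3) -> fixed-size 1D feature vector
    Returns the snippet feature f_s and a dict of agent-tube features f_a.
    """
    detections = [detect(f) for f in frames]
    tubes = track(detections)                     # variable-length agent tubes

    f_s = i3d_features(np.stack(frames))          # whole-scene 3D features
    f_a = {}
    for tid, boxes in tubes.items():
        crops = []
        for t, (x1, y1, x2, y2) in boxes:         # crop each tube box
            crop = frames[t][int(y1):int(y2), int(x1):int(x2)]
            crops.append(resize(crop, tube_size))
        f_a[tid] = i3d_features(np.stack(crops))  # tubes may differ in length
    return f_s, f_a
```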
### 3.2 Scene Graph Attention
In the second stage, a scene graph representation is used to describe the scene in terms of features extracted from both the overall snippet and the agent tubes.
**Graph Generation**. The scene graph is constructed using three different topologies: a star graph connecting each agent node to the scene node; a star topology with additional links between agents sharing a label; and a fully-connected one (Fig. 3). The influence of the topology is shown in Sec. 4.4.
**Graph Attention**. As mentioned, our scene graph attention layer is inspired by the GAT concept [48], originally designed for node classification. While we follow a similar attention mechanism, here we amend the attention layer in order to extract aggregate features from all the nodes, to be passed in turn to our localisation layer in the third stage.

Figure 3: Visualisation of our agent detection and tracking stage using a bird's-eye view of the spatiotemporal volume corresponding to a video segment of the ROAD dataset. The upper section shows a random frame from the segment (with bounding boxes). Below, the detected agent tubes are plotted in space and time together with different possible local scene graph representations. The agent tubes we extract from the scene are of variable sizes. The way scene object motion affects the spatial and temporal extent of the tubes can be appreciated. Additional scenarios with visualisation are illustrated in the **Supplementary material.**
The workflow of our scene graph attention layer is shown in Fig. 4. All input node features are linearly transformed using a weight matrix \(W_{1}\), followed by _self-attention_ to find the importance of each node with respect to its connected neighbours. A LeakyReLU activation function is applied to the features for nonlinearity, resulting in the final output features for all of the nodes. These are then normalised using a softmax function to make each node representation (\(f^{\prime}_{a_{i}^{j}}\)) comparable with those of all the nodes connected to it (e.g., the scene node \(f^{\prime}_{s_{i}}\)).
The self-attention process is further improved by applying a _multi-head attention_ strategy inspired by transformers [48]. Attention is applied to the node features individually. For each node, the average over the \(H\) heads is computed, resulting in an attended feature vector for each node.
Finally, to get a fixed-size representation for the whole scene, the output features of all the nodes are aggregated using another learnable weight matrix \(W_{2}\), which outputs the final feature representation \(f^{\prime\prime}_{s_{i}}\) for the whole (local) scene.
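A minimal single-head PyTorch sketch of this layer is shown below; the dimensions, the mean-based final pooling, and all names are illustrative assumptions rather than the authors' exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneGraphAttention(nn.Module):
    """Single-head sketch of Stage B: GAT-style attention over the local
    scene graph, then learned aggregation (W2) into one scene vector."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.W1 = nn.Linear(in_dim, hid_dim, bias=False)  # shared transform
        self.a = nn.Linear(2 * hid_dim, 1, bias=False)    # attention scorer
        self.W2 = nn.Linear(hid_dim, hid_dim)             # node aggregation

    def forward(self, x, edges):
        """x: (num_nodes, in_dim) node features (scene node + agent tubes);
        edges: (2, num_edges) long tensor of (source, target) indices."""
        h = self.W1(x)
        src, dst = edges
        # unnormalised attention logits for every directed edge
        e = F.leaky_relu(self.a(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)
        # softmax over the incoming edges of each target node
        alpha = torch.zeros_like(e)
        for node in dst.unique():
            m = dst == node
            alpha[m] = torch.softmax(e[m], dim=0)
        # attended node features: weighted sum of neighbour messages
        h_out = torch.zeros_like(h)
        h_out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        # aggregate all attended nodes into a fixed-size scene representation
        return self.W2(h_out).mean(dim=0)

n_agents, d = 5, 1024
x = torch.randn(n_agents + 1, d)        # node 0 = scene, nodes 1..n = agents
# star topology: each agent connected to the scene node, in both directions
agents = list(range(1, n_agents + 1))
edges = torch.tensor([agents + [0] * n_agents, [0] * n_agents + agents])
f_scene = SceneGraphAttention(d, 256)(x, edges)   # -> (256,)
```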
### 3.3 Temporal Graph Localisation
In Stage C of our framework, activity recognition and localisation are performed using a GCN. The final features from all the local scenes (\(f^{\prime\prime}_{s_{1}}\),..., \(f^{\prime\prime}_{s_{i}}\),\(f^{\prime\prime}_{s_{i+1}}\),...,\(f^{\prime\prime}_{s_{N}}\)), as outputted by the scene graph attention layer, are temporally connected to build a global temporal graph (see Fig. 1, C).
Our GCN network for processing the global temporal graph is divided into two parts. The first part comprises three 1D convolutional layers, designed to learn the temporal appearance of all the local scenes and their boundaries, each followed by a sigmoid activation for non-linearity. The second part generates anchor proposals over the temporally learned features via pre-defined anchors, where each anchor acts as a binary mask over the whole graph.
Overall, the GCN module provides two different outputs. i) _Activity classification_: the list of predicted activity labels for all the snippets in the video (\(\hat{y}_{s_{1}}\),..., \(\hat{y}_{s_{i}}\),..., \(\hat{y}_{s_{N}}\)) is produced, where the dimensionality of the output vector \(\hat{y}\) is equal to the number of classes. ii) _Activity localisation_: activities are localised using binary masked class-agnostic anchor proposals. The Intersection over Union (IoU) measure between each anchor and the ground truth (the true temporal extension of the activity) is computed, and the anchors with maximum IoU are selected to train the model for localising the boundaries of any activity, regardless of its activity label. The final output of our temporal graph is a one-hot binary vector (\(\hat{y}_{br_{1}}\),..., \(\hat{y}_{br_{i}}\),..., \(\hat{y}_{br_{N}}\)) for each series of snippets (video), where \(\hat{y}_{br_{i}}=1\) iff snippet \(S_{i}\) belongs to the activity, and \(\hat{y}_{br_{i}}=0\) otherwise.
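A minimal PyTorch sketch of this stage follows; the layer widths, the anchor-scoring mechanism (mask-pooled features through a linear scorer), and the random contiguous anchor masks are illustrative assumptions, not the paper's exact design (which selects anchors by IoU with the ground truth):

```python
import torch
import torch.nn as nn

class TemporalGraphHead(nn.Module):
    """Sketch of Stage C: 1D convolutions over the sequence of attended
    scene features, plus scoring of pre-defined binary anchor masks."""

    def __init__(self, feat_dim, num_classes, anchors):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(feat_dim, 256, 3, padding=1), nn.Sigmoid(),
            nn.Conv1d(256, 256, 3, padding=1), nn.Sigmoid(),
            nn.Conv1d(256, 128, 3, padding=1), nn.Sigmoid())
        self.cls = nn.Linear(128, num_classes)    # per-snippet activity label
        self.register_buffer("anchors", anchors)  # (A, N) binary masks
        self.anchor_score = nn.Linear(128, 1)     # class-agnostic score

    def forward(self, f_scenes):
        """f_scenes: (N, feat_dim) temporally ordered local-scene features."""
        h = self.convs(f_scenes.t().unsqueeze(0)).squeeze(0).t()  # (N, 128)
        y_cls = self.cls(h)                                       # (N, C)
        mask = self.anchors.float()
        pooled = (mask @ h) / mask.sum(dim=1, keepdim=True)       # (A, 128)
        y_anchor = self.anchor_score(pooled).squeeze(-1)          # (A,)
        return y_cls, y_anchor

N, d, C = 32, 256, 7
anchors = torch.zeros(128, N)
for a in range(128):                # random contiguous spans as anchor masks
    s = torch.randint(0, N - 1, (1,)).item()
    e = torch.randint(s + 1, N + 1, (1,)).item()
    anchors[a, s:e] = 1.0
y_cls, y_anchor = TemporalGraphHead(d, C, anchors)(torch.randn(N, d))
```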
## 4 Experimental Evaluation
### 4.1 Datasets
**ROAD** (The ROad event Awareness Dataset for Autonomous Driving) [44] is a multi-labeled dataset proposed for road agent, action, and location detection. The combination of these three labels is referred to in [44] as a 'road event'. The ROAD dataset consists of a total of 22 videos with an average duration of 8 minutes, captured by the Oxford RobotCar [36] under diverse lighting and weather conditions. The dataset was further extended as a testbed for CompAD in [21]. Complex activities in the ROAD dataset belong to six different classes, including: negotiating an intersection, negotiating a pedestrian crossing, waiting in a queue, merging into the (ego) vehicle lane, sudden appearance (of other vehicles/agents), and (people) walking in the middle of the road. Activities can span up to two minutes and involve a large number of road agents.
**Thumos-14**[16] is a benchmark dataset for TAL. It contains 413 untrimmed, temporally annotated videos categorised into 20 actions. Videos are characterised by a large variance in duration, from one second to 26 minutes. On average, each video contains 16 action instances. To compare our performance with the state-of-the-art, we adopt the standard practice of using the validation set (200 videos) for training while evaluating our model on the test set (213 videos).
Figure 4: Our scene graph attention layer (Stage B of our approach, Fig. 1) takes the node features generated in the feature extraction Stage A as input, and updates each node's features using node attention with respect to all its connected neighbours. Multi-head attention with \(H\) heads is applied to each node to further robustify the representation; averaging yields the final node features. To obtain a fixed-size overall representation for a specific local scene (snippet), the features of all its nodes are aggregated using a learnable weight matrix \(W_{2}\).
**ActivityNet-1.3**[7] is one of the largest action localisation datasets with around 20K untrimmed videos comprising 200 action categories. The videos are divided into training, validation, and testing folds according to a ratio of 2:1:1, respectively. The number of action instances per video is 1.65, which is quite low compared to Thumos-14. Following the previous art, we train our model on the training set and test it on the validation set.
### 4.2 Implementation Details
**Evaluation metrics**. In our experiments, mean Average Precision (mAP) was used as an evaluation metric, using different IoU thresholds for the different datasets. According to the official protocols for the various benchmarks, the following lists of temporal IoU thresholds were selected: \(\{0.1,0.2,0.3,0.4,0.5\}\) for ROAD, \(\{0.3,0.4,0.5,0.6,0.7\}\) for Thumos-14 and \(\{0.5,0.7,0.95\}\) for ActivityNet-1.3.
**Feature extraction**. Firstly, the agent tubes are constructed by detecting scene objects using a YOLOv5 detector [17] pre-trained on the COCO dataset. Detections are then tracked throughout a snippet using DeepSort [4]. Then, features are extracted using an I3D network pre-trained on the Kinetics dataset [18], from both the entire snippet and each cropped agent tube individually. The object categories were reduced to six for the ROAD dataset to only cover the agents actually present in the road scenes portrayed there. As the other two datasets (ActivityNet-1.3 and Thumos-14) are general purpose, in their case we retained all the 80 classes present in the COCO dataset.
**Scene Graph Attention**. The local scene graph was generated by producing a list of tuples \([(0,1),(0,2),...]\), where the first index relates to the source node and the second number indexes the target node. The reason for preferring this structure over an adjacency matrix was to limit memory usage. For node attention learning, we initialised our architecture using the weights of the GAT model [48] pre-trained on the PPI dataset [66] and applied 4 attention layers with {4, 4, _num of classes_, and _num of classes_} heads, respectively. The number of classes is equal to 201, 21, and 7 in ActivityNet-1.3, Thumos-14 and ROAD, respectively.
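The edge-list construction for the three topologies of Sec. 3.2 can be sketched as follows (node 0 denotes the scene node; the function name and topology keywords are illustrative):

```python
def scene_graph_edges(agent_labels, topology="full"):
    """Edge list [(src, dst), ...] for one local scene graph. Node 0 is the
    scene node; nodes 1..n are agent tubes with the given labels.
    Topologies follow Sec. 3.2: 'star', 'star_labels', 'full'."""
    n = len(agent_labels)
    edges = [(0, i) for i in range(1, n + 1)]      # scene <-> agent links
    edges += [(i, 0) for i in range(1, n + 1)]
    if topology in ("star_labels", "full"):
        for i in range(1, n + 1):
            for j in range(i + 1, n + 1):
                if topology == "full" or agent_labels[i - 1] == agent_labels[j - 1]:
                    edges += [(i, j), (j, i)]
    return edges

print(scene_graph_edges(["car", "car", "pedestrian"], topology="star_labels"))
```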
The **Temporal Graph** is a stack of three 1D convolution layers on the final representation of the temporally connected local scenes. The size of the input to the first convolutional layer is the number \(N\) of local scenes (snippets), multiplied by the number of heads in the last attention layer.
The length of our temporal graph is fixed to \(N\). Videos with a number of snippets less than or equal to \(N\) are passed directly to the temporal graph; longer videos are split into multiple chunks containing \(N\) snippets each. The output is a one-hot vector of activity labels of size \(N\) and a collection of 128 proposals (binary graph masks), also of length \(N\).
**Loss Functions**. Our problem is multi-objective, as we aim not only at recognising the label of the activity taking place but also at finding its boundary (start and end time). Our overall loss function is thus the weighted sum of _BCEWithLogitsLoss_[1] (for activity classification) and standard binary cross-entropy (for temporal localisation). Full details can be found in the **Supplementary material**.
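A minimal sketch of such a combined objective in PyTorch is given below; the weighting coefficient and tensor shapes are illustrative assumptions, since the exact formulation is deferred to the supplementary material:

```python
import torch
import torch.nn as nn

cls_criterion = nn.BCEWithLogitsLoss()   # per-snippet activity classification
loc_criterion = nn.BCELoss()             # class-agnostic boundary/anchor term
lambda_loc = 1.0                         # illustrative weight, not the paper's

def compad_loss(y_cls_logits, y_cls_target, y_br_prob, y_br_target):
    """y_cls_logits: (N, C) raw snippet class scores; y_cls_target: (N, C)
    multi-hot labels; y_br_prob: (N,) boundary probabilities in [0, 1];
    y_br_target: (N,) binary snippet membership in the activity."""
    return cls_criterion(y_cls_logits, y_cls_target) \
        + lambda_loc * loc_criterion(y_br_prob, y_br_target)

y_cls_logits = torch.randn(32, 7)
y_cls_target = torch.zeros(32, 7)
y_cls_target[:, 2] = 1.0
y_br_prob = torch.sigmoid(torch.randn(32))
y_br_target = (torch.arange(32) < 16).float()
print(compad_loss(y_cls_logits, y_cls_target, y_br_prob, y_br_target))
```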
thresholds and even outperforms all the methods including the one using both OF and RGB modalities.
To sum up, our proposal clearly outperforms all previous approaches on the ROAD dataset, showing the potential of this approach for modelling and detecting long, complex activities. Further, our approach outperforms all competitors over Thumos-14 and ActivityNet. The comparison of methods with respect to the complexity of the datasets is illustrated in Fig. 5, which shows how our method increasingly outperforms the prior art as the duration and complexity of the activities increase.
### 4.4 Ablation Studies
**Effect of Agent Nodes**. Firstly, we showed the advantage of modelling the scene as a graph of agents, compared to simply using whole scene features. Namely, we removed the local scene graph from our pipeline (Fig. 1, stage B) and passed the 3D features of the whole scene as a node to the global temporal graph. The significant performance drop can be clearly observed in Fig. 6 over all three datasets.
**Effect of Edges and Aggregation**. Next, we studied the influence of different types of edge connections in the local scene graph: (i) a fully-connected scene graph; (ii) a star structure where each agent node is connected to the scene node only; (iii) a star structure with additional connections between agent nodes sharing the same label.
We also validated two different techniques for extracting the final representation from the local scene graph, named 'Aggregated' and 'Scene'. In the former, the feature representation is extracted by aggregating those of all the attended nodes. In the latter, only the feature vector related to the scene node (after attention) is retained. Table 3 shows the effect of all possible combinations of graph topologies and aggregation strategies.
\begin{table}
\begin{tabular}{l|l|c|c c c c c|c|c c c|c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Venue} & \multirow{2}{*}{OF} & \multicolumn{6}{c|}{Thumos-14} & \multicolumn{4}{c}{ActivityNet-1.3} \\ & & & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & Average & 0.5 & 0.75 & 0.95 & Average \\ \hline BSN [28] & ECCV'18 & ✓ & 53.5 & 45.0 & 36.9 & 28.4 & 20.0 & 36.8 & 46.4 & 30.0 & 8.0 & 30.0 \\ P-GCN [59] & ICCV'19 & ✓ & 63.6 & 57.8 & 49.1 & - & - & - & 48.3 & 33.2 & 3.3 & 31.1 \\ BMN [27] & ICCV'19 & ✓ & 56.0 & 47.4 & 38.8 & 29.7 & 20.5 & 38.5 & 50.1 & 34.8 & 8.3 & 33.8 \\ G-TAD [55] & CVPR'20 & ✓ & 54.5 & 47.6 & 40.2 & 30.8 & 23.4 & 39.3 & 50.4 & 34.6 & 9.0 & 34.1 \\ BC-GNN [2] & ECCV'20 & ✓ & 57.1 & 49.1 & 40.4 & 31.2 & 23.1 & 40.2 & 50.6 & 34.8 & **9.4** & 34.3 \\ BSN++ [46] & AAAI'21 & ✓ & 59.9 & 49.5 & 41.3 & 31.9 & 22.8 & 41.1 & 51.3 & 35.7 & 8.3 & 34.9 \\ MUSES [32] & CVPR'21 & ✓ & 68.9 & 64.0 & 56.9 & 46.3 & 31.0 & 53.4 & 50.0 & 35.0 & 6.6 & 34.0 \\ ContextLoc [63] & ICCV'21 & ✓ & 68.3 & 63.8 & 54.3 & 41.8 & 26.2 & 50.9 & 56.0 & 35.2 & 3.5 & 34.2 \\ CPN [13] & WACV'22 & ✓ & 68.2 & 62.1 & 54.1 & 41.5 & 28.0 & 50.7 & - & - & - & - \\ RefactorNet [52] & CVPR'22 & ✓ & 70.7 & 65.4 & 58.6 & 47.0 & 32.1 & 54.8 & 56.6 & **40.7** & 7.4 & **38.6** \\ RCL [49] & CVPR'22 & ✓ & 70.1 & 62.3 & 52.9 & 42.7 & 30.7 & 51.7 & 55.1 & 39.0 & 8.3 & 37.6 \\ LDCLR [64] & AAAI'22 & ✓ & 72.1 & 65.9 & 57.0 & 44.2 & 28.5 & 53.5 & **58.1** & 36.3 & 6.2 & 35.2 \\ ActionFormer [61] & ECCV'22 & ✓ & 82.1 & 77.8 & 71.0 & 59.4 & 43.9 & 66.8 & 53.5 & 36.2 & 8.2 & 35.6 \\ Re\({}^{2}\)TAL [62] & CVPR'23 & ✓ & 77.4 & 72.6 & 64.9 & 53.7 & 39.0 & 61.5 & 55.3 & 37.9 & 9.0 & 37.0 \\ TriDet [42] & CVPR'23 & ✓ & **83.6** & **80.1** & **72.9** & **62.4** & **47.4** & **69.3** & 54.7 & 38.0 & 8.4 & 36.8 \\ \hline GTAN [35] & CVPR'19 & ✗ & 57.8 & 47.2 & 38.8 & - & - & - & 52.6 & 34.1 & 8.9 & 34.3 \\ TadTR [33] & TIP'22 & ✗ & 59.6 & 54.5 & 47.0 & 37.8 & 26.5 & 45.1 & 49.6 & 35.2 & 9.9 & 34.3 \\ E2E-TAD [31] & CVPR'22 & ✗ & 69.4 & 64.3 & 56.0 & 46.4 & 34.9 & 54.2 & 50.8 & 36.0 & 10.8 & 35.1 \\ ActionFormer [61] & ECCV'22 & ✗ & 69.8 & 66.0 & 58.7 & 48.3 & 34.6 & 55.5 & 53.2 & 35.1 & 8.0 & 34.9 \\ TAGS [39] & ECCV'22 & ✗ & 59.8 & 57.2 & 50.7 & 42.6 & 29.1 & 47.9 & 54.4 & 34.9 & 8.7 & 34.9 \\ TallFormer [12] & ECCV'22 & ✗ & 76.1 & - & **63.2** & - & 34.5 & 59.2 & 54.1 & 36.2 & 7.9 & 35.5 \\ DL-Net [57] & ICASSP'23 & ✗ & 61.3 & 55.8 & 47.7 & 37.6 & 26.4 & - & 50.3 & 35.0 & 9.3 & 34.3 \\ DCMD [24] & CVPR'23 & ✗ & 70.5 & 65.8 & 59.2 & **50.1** & **38.2** & 56.8 & 53.7 & 35.9 & 8.6 & 35.6 \\
**Ours** & - & ✗ & **78.2** & **69.5** & 62.7 & **50.1** & 36.9 & **59.8** & **60.6** & **40.3** & **11.1** & **39.3** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Activity detection performance comparison on Thumos-14 and ActivityNet-1.3. mAP values (%) at different IoU thresholds are reported for the test and validation sets of Thumos-14 and ActivityNet-1.3, respectively. The models are grouped by whether they rely on optical flow (OF) or not. Best results are in **bold** and second best underlined.
Figure 6: Average mAP of variants of our method with: attended scene graph ('Agents with GAT'), scene graph ('Agents without GAT') and 'Scene features only', over all three datasets.
A fully-connected scene graph with Aggregated features performs best in two cases out of three, while the star topology best suits ActivityNet.
**Effect of Sequence Length**. To explore the effect on our model of snippet duration (the temporal extent of the local dynamic scene), we performed experiments with four different sizes (12, 18, 24, and 30), reported in Table 4. On ActivityNet-1.3 and ROAD, the top scores were obtained by selecting a sequence length of 24, due to the nature of the activities present in these datasets, which last longer. On the other hand, on Thumos-14 we achieved the best performance using a sequence length of 18, as most activities there are shorter in duration.
**Effect of Temporal Graph Length**. We also ablated the effect of the length of the temporal graph, i.e., the number of local scene nodes composing the global graph (Table 5). Four different graph lengths \((128,256,512,1024)\) are compared. The best performance on Thumos-14 is achieved using a smaller number of scene nodes, due to the shorter average video duration there and the shorter-term activities it portrays. In contrast, on ROAD and ActivityNet-1.3 our approach performed best using longer temporal graphs.
**Qualitative Results**. To help the reader visualise the output of our proposed method, we show some qualitative detection results on all three datasets in Fig. 7. The figure shows one sample per dataset and portrays a series of local scenes (snippets), skipping some for visualisation purposes, with the ground truth (in green) and the prediction of our model (in red) superimposed. For example, for ActivityNet-1.3 an instance of the 'Layup drill in basketball' class is shown in which the activity starts with snippet 7 and ends with snippet 42. Our model predicts the activity to start from snippet 6 and end with snippet 43.
## 5 Conclusions
This paper explicitly addresses the problem of detecting longer-term, complex activities using a novel hybrid graph neural network-based framework, combining both scene graph attention and a temporal graph to model activities of arbitrary duration. Our proposed framework is divided into three main building blocks: agent tube detection and feature extraction; local scene graph construction with attention; and a temporal graph for recognising the class label and localising each activity instance. We tested our method on three benchmark datasets, showing its effectiveness in detecting both short-term and long-term activities, thanks to its ability to model their finer-grained structure without the need for extra annotation. Our approach outperforms all previous state-of-the-art methods on all three datasets: Thumos-14, ActivityNet-1.3, and ROAD.
In the future, we intend to progress from incremental inference to incremental training, by learning to construct activity graphs in an incremental manner, paving the way to applications such as future activity anticipation [30] and pedestrian intent prediction [8]. A further exciting line of research is the modelling of the uncertainty associated with complex scenes, in either the Bayesian [19] or the full epistemic settings [37, 41].
\begin{table}
\begin{tabular}{l l|l|l} \hline \hline Snippet length & Thumos-14 & ActivityNet-1.3 & ROAD \\ \hline
12 & 52.7 & 31.3 & 56.9 \\
18 & **59.8** & 37.6 & 62.6 \\
24 & 54.3 & **39.3** & **73.0** \\
30 & 49.7 & 34.5 & 70.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Average mAP over all IoU thresholds of our method as a function of different snippet lengths for the three datasets.
\begin{table}
\begin{tabular}{l l|l|l l|l l} \hline \hline & \multicolumn{2}{c}{Thumos-14} & \multicolumn{2}{c}{Act.Net-1.3} & \multicolumn{2}{c}{ROAD} \\ Topology & Aggr. & Scene & Aggr. & Scene & Aggr. & Scene \\ \hline Fully & **59.8** & 49.2 & 37.4 & 31.4 & **73.0** & 62.7 \\ Star & 51.2 & 44.8 & **39.3** & 36.6 & 62.3 & 57.9 \\ Star + & 52.2 & 41.9 & 35.3 & 32.7 & 64.9 & 59.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Effect of local scene graph topologies and feature aggregation strategies on the performance of our proposal.
\begin{table}
\begin{tabular}{l l|l|l} \hline \hline Temporal length & Thumos-14 & ActivityNet-1.3 & ROAD \\ \hline
128 & **59.8** & 29.6 & 48.3 \\
256 & 52.9 & 32.7 & 58.5 \\
512 & 50.3 & **39.3** & 69.2 \\
1,024 & 46.8 & 34.1 & **73.0** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Average mAP over all IoU thresholds of our method as a function of different temporal graph lengths for the three datasets.
Figure 7: Qualitative results of our method on all 3 datasets. The green rectangles spanning the snippets (local scenes) are the ground truth; the red boxes denote our modelβs predictions.
## Acknowledgements
This project has received funding from the European Union's Horizon 2020 research and innovation programme, under grant agreement No. 964505 (E-pi).
|
2304.14005 | ContraNeRF: 3D-Aware Generative Model via Contrastive Learning with
Unsupervised Implicit Pose Embedding | Although 3D-aware GANs based on neural radiance fields have achieved
competitive performance, their applicability is still limited to objects or
scenes with the ground-truths or prediction models for clearly defined
canonical camera poses. To extend the scope of applicable datasets, we propose
a novel 3D-aware GAN optimization technique through contrastive learning with
implicit pose embeddings. To this end, we first revise the discriminator design
and remove dependency on ground-truth camera poses. Then, to capture complex
and challenging 3D scene structures more effectively, we make the discriminator
estimate a high-dimensional implicit pose embedding from a given image and
perform contrastive learning on the pose embedding. The proposed approach can
be employed for the dataset, where the canonical camera pose is ill-defined
because it does not look up or estimate camera poses. Experimental results show
that our algorithm outperforms existing methods by large margins on the
datasets with multiple object categories and inconsistent canonical camera
poses. | Mijeong Kim, Hyunjoon Lee, Bohyung Han | 2023-04-27T07:53:13Z | http://arxiv.org/abs/2304.14005v2 | # ContraNeRF: 3D-Aware Generative Model via Contrastive Learning with Unsupervised Implicit Pose Embedding
###### Abstract
Although 3D-aware GANs based on neural radiance fields have achieved competitive performance, their applicability is still limited to objects or scenes with the ground-truths or prediction models for clearly defined canonical camera poses. To extend the scope of applicable datasets, we propose a novel 3D-aware GAN optimization technique through contrastive learning with implicit pose embeddings. To this end, we first revise the discriminator design and remove dependency on ground-truth camera poses. Then, to capture complex and challenging 3D scene structures more effectively, we make the discriminator estimate a high-dimensional implicit pose embedding from a given image and perform contrastive learning on the pose embedding. The proposed approach can be employed for the dataset, where the canonical camera pose is ill-defined because it does not look up or estimate camera poses. Experimental results show that our algorithm outperforms existing methods by large margins on the datasets with multiple object categories and inconsistent canonical camera poses.
## 1 Introduction
3D-aware Generative Adversarial Networks (GANs) aim to synthesize multiple views of a single scene with explicit control of camera poses. Recent methods [4, 3, 28, 5, 33, 32, 39, 8, 13, 37, 38] incorporate the advances of neural radiance fields [34, 30, 27] into generative models [11, 20, 19] and reconstruct 3D scenes using a collection of 2D images without 3D shape priors. This technique allows us to predict not only 2D projected images but also their underlying 3D structures. However, they still suffer from the limited scope of target domains; most algorithms deal with only a few object categories, _e.g_., human or cat faces, where ground-truth or estimated camera poses are available and the canonical camera pose is well-defined.
Although there exist some methods [24, 9, 1] that extend the scope of target domains to realistic ones with less geometric priors, they rely on additional geometric cues such as depth maps of training examples.
To alleviate the drawbacks of existing 3D-aware GAN approaches, we design a novel discriminator for representing the 3D structure of complex scenes in diverse domains without extra information. As an intermediate goal of our algorithm, we start by removing the dependency on ground-truth camera poses in the discriminator employed in previous work [4] and make the discriminator learn a camera pose regressor in a self-supervised way using generated images and their rendering poses. Such a simple approach turns out to be effective in synthesizing novel views without ground-truth camera poses of training images, but it sometimes fails to reconstruct 3D structures properly. We argue that this limitation is mainly due to explicit camera pose regression, which is ill-suited to handling scenes or objects with complex and heterogeneous geometries.
To further improve the quality of complex geometric and photometric structures, we propose implicit camera pose embedding in a high-dimensional space for robust and comprehensive 3D reconstruction. For training, we employ self-supervised contrastive learning [31], which captures rich geometric information of scenes from diverse pairwise relations of camera pose embeddings, improving camera pose estimation and consequently enhancing 3D reconstruction quality without any ground-truth camera poses. Our experiments demonstrate that the proposed approach achieves state-of-the-art performance in both standard GAN evaluation and 3D reconstruction metrics without extra information. Our main contributions are summarized below:
* We present a simple yet effective camera pose representation method, implicit pose embedding, for training discriminators of 3D-aware GANs without ground-truth camera poses.
* We train the discriminator of 3D-aware GANs by contrastive learning, which allows our model to learn 3D structures of scenes with ill-defined canonical poses due to heterogeneous geometric configurations.
* Our framework achieves state-of-the-art performance on challenging benchmarks without any 3D related information and is validated via extensive experiments.
## 2 Related Work
We review existing GAN approaches in the 3D domain and discuss contrastive learning algorithms used in other generative tasks.
### 3D-aware GANs
After the success of Generative Adversarial Networks (GANs) [11, 20, 19, 21, 45, 7] in generating high-quality 2D images, several 3D-aware GANs [4, 3, 28, 5, 33, 32, 39, 8, 13, 37, 38] have been proposed to synthesize images based on 3D understanding instead of just image-level understanding. By plugging the ideas of volume rendering and neural implicit representation techniques [34, 30, 27] into networks, 3D-aware GANs gain the capability of synthesizing multiple views of a single 3D scene. In addition, the networks can even be trained to generate images with specific viewpoints using unorganized 2D image datasets, the same datasets used for 2D GANs. However, the domains of such datasets are limited; most 3D-aware GANs have shown successful examples only in a few object classes, including human and animal faces, cars, and a few synthetic object categories. Unlike existing 3D-aware GANs, we extend the domain range of the 3D-aware GAN framework to complex scenes.
### 3D-aware GANs on complex scenes
Some 3D-aware GANs have tackled the generation task on complex datasets composed of images with diverse geometric configurations. However, existing approaches mostly rely on prior knowledge of scenes such as object classes, ground-truth camera poses, or depth supervision. For example, full human body generation techniques [17, 12, 47] demonstrate impressive results with high-fidelity geometries and motions but cannot be generalized to other domains because they rely on pre-trained
Figure 2: Illustration of the proposed contrastive learning on the pose embedding space. The 'positive' and 'negative' images denote images rendered in the same or different directions as the 'anchor' image, respectively. The distances between pose embeddings of positive pairs are learned to be smaller than those of negative pairs.
human body models such as SMPL [25]. In another direction, several algorithms learn to generate 3D indoor environments [9, 10, 1], with generation processes conditioned on camera pose information. These methods can perform novel view synthesis of complex scenes in reasonable quality, but additional information such as ground-truth camera poses or depth maps must be provided while training those networks. DepthGAN [24] also generates 3D indoor environments, but it utilizes estimated depth maps given by a pre-trained depth estimation model [43] to obtain direct 3D information. In contrast, our algorithm does not rely on explicit geometric information such as depth maps or ground-truth camera poses, and it can be generalized to various domains.
### Contrastive learning
Contrastive learning is a widely used self-supervised representation learning scheme [15, 41, 6, 31]. CntrGAN [48] adds contrastive learning to GAN training together with image augmentations, where it serves as a regularizer to improve the fidelity of generation. Contrastive learning has also been used in image-to-image translation [35, 14, 23] and cross-modal translation [46] to enforce patch-wise correspondence and mutual information between image and text, respectively. Also, ContraGAN [18] proposes a class-conditional contrastive learning objective to increase the correlations between images of the same class. Unlike prior works, we are the first to adopt contrastive learning for 3D-aware GANs, employing it on the proposed implicit pose embeddings.
## 3 Preliminaries: EG3D
Our goal is to learn 3D-aware GANs on complex objects and scenes without prior knowledge or prediction models for camera poses of training examples. Since our algorithm relies on a state-of-the-art model, EG3D [4], we summarize the design of the generator \(G(\cdot)\) and discriminator \(D(\cdot)\) of EG3D.
### Generator
Let \(p_{z}\) and \(p_{\xi}\) be the distributions of the latent variable and camera pose, respectively. Given \(z\sim p_{z}\) and \(c\sim p_{\xi}\), the generator produces a 3D feature based on a tri-plane structure. The 3D feature is employed for rendering in the direction \(c\), producing a low-resolution 2D feature map and image. Then, an image super-resolution module synthesizes a high-resolution image from the given 2D feature map and low-resolution image. In summary, the 3D-aware generator synthesizes a high-resolution image as
\[G:z,c\to I. \tag{1}\]
The generator \(G(\cdot)\) can produce images of the same object from different viewpoints, _i.e_., using the identical \(z\) but different \(c\)'s.
### Discriminator
The discriminator in 3D-aware GANs encourages the paired generator to synthesize realistic images given camera poses. To this end, EG3D [4] utilizes a pose-conditional discriminator, as illustrated in Figure 3(a). The discriminator takes both an image and a camera pose, and returns a logit as follows:
\[D:I,c\to l, \tag{2}\]
where \(l\in\mathbb{R}\) is a logit for the standard GAN loss. Note that, following the design of EG3D, the discriminator takes both low-resolution and high-resolution image as its inputs. However, for the simplicity of the notations, we disregard
Figure 3: Comparison of different discriminator architectures. The pose-conditioned discriminator in (a) utilizes camera pose information as input, where the ground-truth pose must be given for each training image. On the other hand, (b) and (c) do not use such extra information; instead, they additionally learn a camera pose estimator, explicitly or implicitly, on rendered images. To this end, (b) uses a direct pose regression loss with the rendering camera pose \(c\), while (c) employs contrastive learning to learn implicit pose embeddings. Note that our PRNeRF and ContraNeRF employ (b) and (c) as their discriminators, respectively.
low-resolution image inputs from the equations for EG3D and our algorithm, which include (2), (3), and (6). Please refer to [4] for more detailed information.
### Discussion
While EG3D [4] achieves competitive performance, it requires the ground-truth poses of training examples as inputs to the discriminator. This limitation significantly reduces the applicability of EG3D because camera poses are defined relative to a certain viewpoint and are consequently ill-defined except for a few object categories with common-sense canonical poses, such as faces. Other methods [28, 5, 33, 39, 32] trained without ground-truth camera poses typically yield inferior performance and are evaluated only on less challenging datasets, _e.g_., human faces. We propose a novel 3D-aware GAN algorithm that does not require camera pose labels but works well on images of natural scenes with heterogeneous geometric configurations.
## 4 Camera Pose Regression in Discriminator
This section describes our intermediate solution for training 3D-aware GAN models on top of EG3D without using camera pose labels of training data.
### Discriminator design
To make the discriminator trainable without ground-truth camera poses, we first revise the original discriminator in EG3D [4], as illustrated in Figure 3(b). Specifically, the new discriminator no longer takes camera poses as input and instead has an additional output branch to predict the pose. The formal definition of the discriminator operation is given by
\[D:I\to l,\hat{c}, \tag{3}\]
where \(l\in\mathbb{R}\) is a logit for the standard GAN loss and \(\hat{c}\in\mathbb{R}^{2}\) is the estimated pitch and yaw of the camera pose of the input image \(I\). For implementation, we remove the pose-conditional module in the discriminator of EG3D [4] and modify the dimension of the last fully connected layer. Note that the generator has an architecture identical to EG3D [4].
### Pose regression loss
The estimate \(\hat{c}\) is supervised by the pose regression loss, which encourages the generator to synthesize images congruent with the given camera direction \(c\):
\[\mathcal{L}_{\text{pose}}=\mathbb{E}_{z\sim p_{z},c\sim p_{\xi}}\left\|\hat{c} -c\right\|, \tag{4}\]
where \(\left\|\cdot\right\|\) denotes the \(\ell_{1}\) or \(\ell_{2}\) norm. This loss is applied only to fake images, whose true camera poses \(c\) are always available as the rendering directions, whereas we do not have ground-truth poses for real images.
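A sketch of Eq. (4) in PyTorch is given below; the mean reduction over the batch is our assumption.

```python
import torch

def pose_regression_loss(c_hat, c, ord=1):
    """c_hat, c: (B, 2) predicted and rendering (pitch, yaw) poses.
    ord selects the l1 or l2 norm of Eq. (4)."""
    return torch.linalg.norm(c_hat - c, ord=ord, dim=-1).mean()
```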
### Overall objective
To enable the generator to learn the real data distribution, we use unsaturated adversarial losses with R1 regularization [26]. On top of the standard GAN loss, the pose regression loss is employed to provide the model with 3D awareness as follows:
\[\begin{split}\mathcal{L}(D,G)&=\mathbb{E}_{I\sim p_{\text{data}}}\left[f(-D(I))+\lambda\|\nabla D(I)\|^{2}\right]\\ &+\mathbb{E}_{z\sim p_{z},c\sim p_{\xi}}\left[f(D(G(z,c)))\right]\\ &+\lambda_{\text{pose}}\cdot\mathcal{L}_{\text{pose}},\end{split} \tag{5}\]
where \(f(u)=-\log(1+\exp(-u))\) and \(p_{\text{data}}\) denotes the data distribution.
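For reference, the discriminator side of this objective can be sketched as below, using the identity \(-f(u)=\mathrm{softplus}(-u)\); treating the R1 penalty as an additive term outside \(f\), in the usual StyleGAN2-style formulation, is our reading of the equation.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, real_images, fake_images, lam=1.0):
    real_images = real_images.requires_grad_(True)
    logit_real, _ = D(real_images)            # D returns (logit, pose), Eq. (3)
    logit_fake, _ = D(fake_images.detach())
    # R1 gradient penalty on real images.
    grad, = torch.autograd.grad(logit_real.sum(), real_images, create_graph=True)
    r1 = grad.pow(2).reshape(grad.size(0), -1).sum(1).mean()
    # Non-saturating GAN loss: softplus(-u) = -f(u).
    return (F.softplus(-logit_real).mean()
            + F.softplus(logit_fake).mean()
            + lam * r1)
```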
## 5 Contrastive Learning in Discriminator
We now present the formulation and optimization of our final model, ContraNeRF, a 3D-aware generative model trained via contrastive learning.
### Motivation
Although the pose regression loss discussed in the previous section is effective in terms of generated image quality, it often suffers from a lack of fidelity in reconstructing the underlying 3D structures. We build a novel discriminator based on an implicit pose embedding in a high-dimensional space and train the network using a new loss based on pairwise relations of implicit camera poses in a mini-batch. To be specific, we maximize the similarity between the implicit pose embeddings of images with the same camera pose while minimizing it for the rest of the embedding pairs. This strategy is helpful for encoding camera poses by estimating the underlying scene structures via rich and flexible relations between many implicit pose embedding pairs. Note that the pose regression loss is no longer accessible due to the use of implicit camera pose embeddings, but our approach is still free from ground-truth camera poses.
### Implicit pose embedding discriminator
The proposed discriminator has a similar architecture to the network given by (3). The only difference is that, instead of extracting a two-dimensional explicit camera pose from the input image \(I\), the new model estimates a high-dimensional implicit camera pose embedding as follows:
\[D:I\to l,v, \tag{6}\]
where \(l\in\mathbb{R}\) is a logit for the standard GAN loss and \(v\in\mathbb{R}^{m}\) is an implicit pose embedding of the input image after \(\ell_{2}\) normalization. We set the dimensionality of \(v\) sufficiently high, _i.e_. \(m\gg 2\), to make the implicit pose embedding vector more expressive than the typical camera pose representation based on yaw and pitch. Figure 3 illustrates our discriminator in comparison to other options. The other parts of the discriminator are identical to EG3D [4].
### Mutual information maximization
The idea of contrastive learning is to train a network to keep the representation of anchor images close to the representations of relevant positive images while pushing it away from those of many mismatched negative images. Our goal is to make synthesized images rendered from the same camera pose strongly associated with each other, rather than with images generated from different poses, as shown in Figure 2. In this respect, we employ contrastive learning on the pose embedding in the discriminator, which aims to maximize the mutual information between synthesized images with the same camera pose.
**Positive and negative examples.** Given an anchor image \(I^{a}=G(z^{a},c^{a})\), a positive image \(I^{+}\) and a negative image \(I^{-}\) are defined as follows:
\[I^{+}\in\mathcal{I}^{+}=\{I=G(z,c)|z\sim p_{z},\;c=c^{a}\} \tag{7}\] \[I^{-}\in\mathcal{I}^{-}=\{I=G(z,c)|z\sim p_{z},\;c\sim p_{\xi},\; c\neq c^{a}\}. \tag{8}\]
A positive image is an example that is rendered in the same direction but may be generated using a different latent vector from the anchor image. In contrast, a negative image is the one rendered with a different camera pose from the anchor image. Then, the implicit pose embedding of an anchor \(v^{a}\), its positive embedding \(v^{+}\) and negative embedding \(v^{-}\) are given by
\[l^{a},v^{a}=D(I^{a}) \tag{9}\] \[l^{+},v^{+}=D(I^{+})\] \[l^{-},v^{-}=D(I^{-}),\]
where \(l^{a}\), \(l^{+}\) and \(l^{-}\) are logits for the standard GAN loss.
**Contrastive loss.** We adopt the InfoNCE loss [31] for contrastive learning of the implicit pose embedding. Let \(v^{a}_{i}\), \(v^{+}_{i}\), and \(v^{-}_{i,j}\) be camera pose embeddings in a mini-batch, where \(v^{+}_{i}\) is a positive pair sharing the camera pose of the anchor image with embedding \(v^{a}_{i}\), \(i\in\{1,...,N\}\), while \(v^{-}_{i,j}\), \(j\in\{1,...,S\}\), is a negative pair with a different pose from \(v^{a}_{i}\). We denote the collection of negative examples for each anchor by \(\mathbf{v}^{-}_{i}\), _i.e._, \(v^{-}_{i,j}\in\mathbf{v}^{-}_{i}\). Given \(v^{a}_{i}\), \(v^{+}_{i}\), and \(\mathbf{v}^{-}_{i}\), we obtain the following contrastive loss term:
\[\ell\left(v^{a}_{i},v^{+}_{i},\mathbf{v}^{-}_{i}\right)= \tag{10}\] \[-\log\left(\frac{\exp\left(d(v^{a}_{i},v^{+}_{i})/\tau\right)}{ \exp\left(d(v^{a}_{i},v^{+}_{i})/\tau\right)+\sum_{j=1}^{S}\exp\left(d(v^{a}_ {i},v^{-}_{i,j})/\tau\right)}\right),\]
where \(d(u,v)=u^{\top}v/\|u\|\|v\|\) denotes the cosine similarity between \(u\) and \(v\). This loss enforces the synthesized image to be similar to the images rendered from the same camera viewpoint but dissimilar to those rendered from other camera directions. To sum up, the overall contrastive loss is given by
\[\mathcal{L}_{\text{InfoNCE}}=\mathbb{E}_{z\sim p_{z},c\sim p_{\xi}}\ell(v^{a},v^{+},\mathbf{v}^{-}), \tag{11}\]
where \(v^{+}\) and \(\mathbf{v}^{-}\) are defined for each anchor, \(v^{a}\).
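Since the embeddings are \(\ell_{2}\)-normalized, the cosine similarity reduces to a dot product, and Eqs. (10)-(11) can be sketched as below; the temperature value and the batch layout are our assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(v_anchor, v_pos, v_neg, tau=0.1):
    """v_anchor, v_pos: (B, m) anchor and positive embeddings;
    v_neg: (B, S, m) negatives per anchor. All inputs l2-normalized."""
    pos = (v_anchor * v_pos).sum(-1, keepdim=True) / tau         # (B, 1)
    neg = torch.einsum("bm,bsm->bs", v_anchor, v_neg) / tau      # (B, S)
    logits = torch.cat([pos, neg], dim=1)                        # (B, 1+S)
    target = torch.zeros(v_anchor.size(0), dtype=torch.long)     # positive at 0
    return F.cross_entropy(logits, target)                       # Eq. (10), averaged
```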
### Overall Objective
The final objective function of our algorithm is given by replacing the pose regression term in (5) by the InfoNCE loss term proposed for contrastive learning, as follows:
\[\begin{split}\mathcal{L}(D,G)&=\mathbb{E}_{I\sim p_{\text{data}}}\left[f(-D(I))+\lambda\|\nabla D(I)\|^{2}\right]\\ &+\mathbb{E}_{z\sim p_{z},c\sim p_{\xi}}\left[f(D(G(z,c)))\right]\\ &+\lambda_{\text{pose}}\cdot\mathcal{L}_{\text{InfoNCE}},\end{split} \tag{12}\]
where the first term is active for real images, updating only the discriminator, while the remaining terms optimize both the generator and the discriminator with fake images.
## 6 Experiments
This section describes our benchmarks with complex geometric structures and reports the performance of our methods compared to previous ones, quantitatively and qualitatively. We refer to our two models, one with camera pose regression and the other with contrastive learning, as PRNeRF and ContraNeRF, respectively.
### Datasets and Settings
We report results on four different image datasets: LSUN Bedroom [44], LSUN Church [44], AFHQ (Animal Faces-HQ) [7], and CUB [40]. These datasets are challenging for 3D-aware GANs: the canonical pose is hard to define on the LSUN datasets, and the AFHQ and CUB datasets contain complex and diverse geometric shapes. For AFHQ, we compute low-resolution features and images at a resolution of \(32^{2}\) with a total of 96 depth samples per ray. The final images are generated at a resolution of \(256^{2}\). The resolutions of feature maps and final images for the other datasets are set to \(32^{2}\) and \(128^{2}\), respectively.
### Results
Several ablations and analyses are performed to justify our contributions and proposed modules. For image synthesis evaluation, we report Frechet inception distance (FID) [16] and Precision & Recall, which measure the fidelity and diversity of generated samples. For the evaluation of 3D reconstruction quality, we visualize rendered depth images along with their Depth FID, which measures FID between the estimated depth maps of training images given by a depth estimation model [43] and the rendered depth maps from the generated images. We also provide the quality of depth in rendered images using three subjective levels: Bad, Fair, and Good.
#### 6.2.1 LSUN Bedroom
We compare PRNeRF and ContraNeRF with GRAF [37], GIRAFFE [29], \(\pi\)-GAN [5], and DepthGAN [24] on the LSUN Bedroom dataset. Figure 4(a) illustrates the generated images and their depth maps from three different viewpoints. Most algorithms, including \(\pi\)-GAN [5], GIRAFFE [29], and PRNeRF, produce unrealistic depth maps, where their generated images are almost identical and have planar depth maps from all viewpoints. In contrast, ContraNeRF generates RGB images and depth maps that reflect true 3D scene structures faithfully. Table 1 presents overall quantitative results, where ContraNeRF outperforms other algorithms in terms of Depth FID with considerable margins. This indicates that the synthesized 3D scenes given by ContraNeRF reflect true geometries effectively. Within our methods, although PRNeRF outperforms ContraNeRF in terms of 2D image synthesis metrics, it struggles with learning 3D structures accurately.
#### 6.2.2 LSUN Church
Our models are compared with GRAF [37], GIRAFFE [29], \(\pi\)-GAN [5], and GIRAFFE-HD [42] on the LSUN Church dataset. Again, our models outperform previous models in all metrics, as shown in Table 2. Figure 4(b) illustrates output examples from our models, where only ContraNeRF produces reasonable 3D scene structures and images. Similar to LSUN Bedroom, although PRNeRF shows better 2D image synthesis quality within our methods, it fails to capture realistic 3D information in the scene.
### Analysis
**Combination of \(\mathcal{L}_{\text{Pose}}\) and \(\mathcal{L}_{\text{InfoNCE}}\).** We evaluate the ensemble trained with the pose regression loss, \(\mathcal{L}_{\text{Pose}}\), and the contrastive loss, \(\mathcal{L}_{\text{InfoNCE}}\), on the FFHQ dataset [20]. Table 5 shows that the combination of the two losses achieves the best performance on FFHQ, except for EG3D\({}^{1}\). Unlike on other datasets, PRNeRF outperforms ContraNeRF on FFHQ, probably because pose regression is more straightforward on this dataset, which has homogeneous geometry. However, ContraNeRF or its ensemble version always shows the best performance, including on the FFHQ dataset.
Footnote 1: EG3D exploits the ground-truth camera poses of the training set, which makes a direct comparison with EG3D unfair. For reference, EG3D cannot be evaluated on the other datasets used in this paper.
**High-resolution image synthesis.** To verify that our algorithm performs well on higher-resolution images, we test our algorithms on the AFHQ dataset at a resolution of \(512^{2}\). Table 6 shows that our methods still significantly outperform the previous one at the \(512^{2}\) resolution, similar to the \(256^{2}\) resolution setting in Table 3.
**Dimensionality of pose embedding.** We analyze the impact of the dimensionality of the pose embedding on 3D reconstruction
Figure 4: Comparing samples of ContraNeRF, PRNeRF, and modern 3D-aware GANs. We visualized the RGB image and depth map by rotating the rendering pose horizontally. For the first row, the first two results are from \(\pi\)-GAN [5], and the rest are from GIRAFFE-HD [42]. ContraNeRF produces high-fidelity images with accurate depth maps in all domains since its implicit pose embedding can capture complex geometries. However, other methods, including PRNeRF, usually produce planar depth maps with unrealistic 3D structure. For additional results, please refer to our supplementary materials.
\begin{table}
\begin{tabular}{l|c|c|c|c c c} \hline Method & GT pose & \(\mathcal{L}_{\text{Pose}}\) & \(\mathcal{L}_{\text{InfoNCE}}\) & FID\(\downarrow\) & Precision\(\uparrow\) & Recall\(\uparrow\) \\ \hline \hline EG3D [4] & ✓ & & & 4.92 & 0.554 & 0.435 \\ StyleNeRF [13]\({}^{\dagger}\) & & & & 8 & - & - \\ EpiGRAF [38]\({}^{\dagger}\) & & & & 9.71 & - & - \\ PRNeRF & & ✓ & & 5.94 & 0.548 & 0.415 \\ ContraNeRF & & & ✓ & 6.85 & **0.552** & 0.405 \\ PR-ContraNeRF & & ✓ & ✓ & **5.73** & 0.551 & **0.421** \\ \hline \end{tabular}
\end{table}
Table 5: Experiments on the FFHQ dataset at \(256^{2}\) resolution. The dagger (\(\dagger\)) denotes that the scores are taken from StyleNeRF [13] and EpiGRAF [38], respectively.
quality on the LSUN Bedroom dataset and visualize the ablation results in Figure 5. Our framework successfully captures 3D structures if it has a sufficient embedding dimension, \(m\geq 24\). Even with low-dimensional embedding vectors, we still obtain a decent quality of reconstructed images and depth maps, with only minor blurs.
**Handling datasets with diverse camera poses.** Figure 6 illustrates images rendered by ContraNeRF from the same camera pose but with different content latent vectors \(z\). The generated images on the AFHQ dataset have almost identical viewpoints, indicating that our contrastive learning works as previous methods do for these simple cases. On the other hand, since LSUN Bedroom, LSUN Church, and CUB have various scene geometries and object shapes, there is no canonical center pose applicable to all images, and the generated images do not have perceptually identical viewpoints. However, ContraNeRF successfully reconstructs the underlying 3D structure of the scenes, as presented earlier, which shows the strength and potential of ContraNeRF for naturally handling datasets with images captured from heterogeneous viewpoints.
**Failure cases.** Although ContraNeRF produces high-fidelity volumetric scenes in most cases, we observe some failure cases on the LSUN Bedroom dataset. Figure 7 illustrates failure cases in which ContraNeRF produces planar scenes. We presume outlier training samples, such as watermarked images or images captured from out-of-distribution camera poses, may result in degenerate outputs.
## 7 Conclusion
By extending 3D-aware GANs to handle more diverse domains of objects and scenes, the proposed models improve their usability and expand the possible applications from face synthesis to 3D world modeling. To this end, we first show that a pose regression-based framework can effectively remove the camera pose dependency in 3D-aware GAN training. We then propose a contrastive learning-based framework that uses high-dimensional implicit pose embeddings for rich descriptions of pose information in natural scenes with diverse and complex geometries. The effectiveness of the implicit pose embedding and contrastive learning frameworks has been demonstrated through extensive evaluation on multiple benchmark datasets.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Method & FID\(\downarrow\) & Precision\(\uparrow\) & Recall\(\uparrow\) \\ \hline \hline GIRAFFE-HD [42] & 13.42 & 0.61 & 0.23 \\ \hline PRNeRF & 8.21 & 0.61 & 0.31 \\ ContraNeRF & **8.02** & **0.63** & **0.32** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Experiments on the AFHQ dataset at \(512^{2}\) resolution. ContraNeRF and PRNeRF still outperform the existing method in the high-resolution setting, where ContraNeRF achieves the best performance.
Figure 5: Effect of the pose embedding dimension on the quality of rendered images on the LSUN Bedroom dataset. Our framework successfully captures underlying 3D structures with a sufficient number of embedding dimensions.
Figure 6: Qualitative results of rendering from the same pose with different latent vectors \(z\) using ContraNeRF. ContraNeRF synthesizes diverse scenes with different geometry on the LSUN Bedroom and LSUN Church datasets, while it produces images with identical geometry on the AFHQ dataset.
Figure 7: Examples of failure cases. ContraNeRF sometimes produces images with unrealistic geometries, such as planar scenes. The first case (left) is an example where our algorithm generates translated images when varying camera poses, and the second one (right) illustrates results with almost uniform depth maps.
2301.00876 | MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement
Understanding | Reading comprehension of legal text can be a particularly challenging task
due to the length and complexity of legal clauses and a shortage of
expert-annotated datasets. To address this challenge, we introduce the Merger
Agreement Understanding Dataset (MAUD), an expert-annotated reading
comprehension dataset based on the American Bar Association's 2021 Public
Target Deal Points Study, with over 39,000 examples and over 47,000 total
annotations. Our fine-tuned Transformer baselines show promising results, with
models performing well above random on most questions. However, on a large
subset of questions, there is still room for significant improvement. As the
only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark
for both the legal profession and the NLP community. | Steven H. Wang, Antoine Scardigli, Leonard Tang, Wei Chen, Dimitry Levkin, Anya Chen, Spencer Ball, Thomas Woodside, Oliver Zhang, Dan Hendrycks | 2023-01-02T21:08:27Z | http://arxiv.org/abs/2301.00876v3 | # MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding
###### Abstract
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
## 1 Introduction
While pretrained Transformers Devlin et al. (2019); Brown et al. (2020) have surpassed humans on reading comprehension tasks such as SQuAD 2.0 Rajpurkar et al. (2018) and SuperGLUE Wang et al. (2019), their accuracy in understanding real-world specialized legal texts remains underexplored.
Reading comprehension of legal text can be a particularly challenging natural language processing (NLP) task due to the length and complexity of legal clauses and the difficulty of collecting expert-annotated datasets. To help address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), a legal reading comprehension dataset curated under the supervision of highly specialized mergers-and-acquisitions (M&A) lawyers and used in the American Bar Association's 2021 Public Target Deal Points Study ("ABA Study"). The dataset and code for MAUD can be found at github.com/TheAtticusProject/maud.
Public target company acquisitions are the most prominent business transactions, valued at hundreds of billions of dollars each year. Merger agreements are the legal documents that enable these acquisitions, and key clauses in these merger agreements are called "deal points."
Lawyers working on the ABA Study perform contract review on merger agreements. In general, contract review is a two-step process. First, lawyers extract key legal clauses from the contract (an entity extraction task). Second, they interpret the meaning of these legal clauses (a reading comprehension task). In the ABA Study, the lawyers extract deal points from merger agreements, and for each deal point they answer a set of standardized multiple-choice questions.
Models trained on MAUD's expert-annotated data can learn to answer 92 reading comprehension questions from the 2021 ABA Study, given extracted deal point text from merger agreements. By answering these questions, models interpret the meaning of specialized legal language and categorize the different agreements being made by companies in the contract.
Entity extraction and reading comprehension are both important and challenging tasks in legal contract review. A large-scale expert-annotated entity extraction benchmark for contract review is already available in Hendrycks et al. (2021). However, to the best of our knowledge, there is no large-scale expert-annotated reading comprehension dataset for contract review or any other legal task in the English language. Therefore in this short paper, we focus on the legal reading comprehension task. (Appendix A.12 presents a preliminary benchmark for the extraction task for interested researchers.)
Annotating MAUD was a collective effort of over 10,000 hours by law students and experienced lawyers. Prior to labeling, each law student attended 70-100 hours of training, including lectures and workshops from experienced M&A lawyers.
Each annotation was labelled by three law student annotators, and these labels were verified by an experienced lawyer. See Appendix A.11 for more information on the annotation process. We estimate the pecuniary value of MAUD to be over $5 million using a prevailing rate of $500 per hour in M&A legal fees.
## 2 Related Work
Due to the high costs of contract review and the specialized skills it requires, understanding legal text has proven to be a ripe area for NLP research.
**Legal Entity Extraction.** One area of contract review research focuses on legal entity extraction and document segmentation. Chalkidis et al. (2017) introduce a dataset for extracting basic information from contracts, with follow-up modeling work using RNNs (Chalkidis et al., 2018) and Transformers (Chalkidis et al., 2020). Lippi et al. (2019) introduce a small expert-annotated dataset for identifying "unfair" clauses in 50 online terms of services. Tuggener et al. (2020) introduce a semi-automatically constructed dataset of legal contracts for entity extraction. Leivaditi et al. (2020) introduce an expert-annotated dataset of 2960 annotations for 179 lease agreements. Hendrycks et al. (2021) introduce CUAD, an expert-annotated contract review dataset containing 13,010 annotations for 150 legal contracts. Unlike CUAD, which is an entity extraction task over 16 different types of contracts, MAUD is a multiple-choice reading comprehension task focusing on merger agreements.
**Reading Comprehension for Legal NLP.** Koreeda and Manning (2021) introduce a crowd-worker-annotated dataset containing 7191 Natural Language Inference questions about spans of non-disclosure agreements. Hendrycks et al. (2021) propose a question-answering dataset sourced from freely available online materials, containing questions (including legal exam questions) from dozens of specialized areas. Zheng et al. (2021) present a multiple-choice reading comprehension dataset with 53,317 annotations automatically extracted from US case law citations. Duan et al. (2019) present a Chinese-language legal reading comprehension dataset, with about 50,000 expert-generated annotations of Chinese judicial rulings. In our work we present a legal reading comprehension dataset with 47,457 expert-generated annotations about merger agreements. To the best of our knowledge, MAUD is the only English-language legal reading comprehension dataset that is both large-scale and expert-annotated.
## 3 MAUD: A Legal NLP Dataset for Merger Agreement Understanding
MAUD consists of 47,457 annotations based on legal text extracted from 152 English-language public merger agreements. MAUD's merger agreements were sourced from the EDGAR system maintained by the U.S. Securities and Exchange Commission.
**Terminology.** _Deal points_ are legal clauses that define when and how the parties in a merger agreement are obligated to complete an acquisition. We refer to the text of these clauses (extracted by annotators from merger agreements) as _deal point texts_. One or more predefined _deal point questions_ can be asked about each deal point text. Each deal point question can be answered by one or more predefined
Figure 1: MAUD contains 39,000+ examples for 92 different reading comprehension questions about merger agreements. Given a _deal point question_ and _deal point text_, a model learns to predict the correct answer(s) from a list of possible answers standardized by the 2021 ABA Study. The deal point texts above are truncated for display.
_deal point answers_. Deal point questions and texts are grouped into mutually exclusive _deal point categories_.
**Deal Points in MAUD.** The deal points in MAUD are standardized by the 2021 ABA Study. For the 2021 ABA Study, the American Bar Association appointed an M&A attorney to design 130 deal point questions and 7 deal point categories reflecting recent legal developments and deal trends of interest.
Of the 130 different deal point questions in the 2021 ABA Study, 92 are represented in MAUD. MAUD contains 8,226 unique deal point text annotations and 39,231 question-answer annotations (i.e. examples), for a total of 47,457 annotations. There are seven different deal point categories in MAUD: Conditions to Closing, Deal Protection and Related Provisions, General Information, Knowledge, Material Adverse Effect, Operating and Efforts Covenant, and Remedies.
**Task.** MAUD is a multiple-choice reading comprehension task. The model predicts the correct deal point answer from a predefined list of possible answers associated with each question (see Figure 1 for an example). Several deal point questions in the ABA Study are in fact multilabel questions, but for uniformity we cast all multilabel questions as binary multiple-choice questions. This increases the effective number of questions from 92 to 144.
### MAUD Datasets and Splits
MAUD contains three datasets (main, abridged, and rare answers) corresponding to three methods of generating examples. See Table 2 for the number of examples contained in each dataset.
**Main Dataset.** The main dataset contains 20,623 examples with original deal point text extracted from 152 merger agreements by expert annotators.
**Abridged Dataset.** The abridged dataset contains 14,928 examples with deal point text extracted from 94 of the 152 merger agreements included in the main dataset. In the abridged dataset, deal point texts are abridged to delete portions of legal text in the main dataset that are not pertinent to the deal point question. Because many texts contain answers to multiple questions, we provide the abridged data to guide a model to recognize the most pertinent text. Appendix A.8 compares the difficulty of main and abridged test examples.
**Rare Answers Dataset.** The rare answers dataset contains 3,680 examples that have rare answers to a question. Legal experts made small edits to texts in the main dataset to create deal points with rare answers. See Appendix A.11 for an example edit. We introduced the rare answers dataset to ameliorate imbalanced answer distributions in the main dataset. In particular, some answers in the main dataset appear in fewer than 3 contracts, making a train-dev-test split impossible.
**Train, Dev, and Test Splits.** We construct the train-dev-test split as follows. We reserve a random 20% of the combined main and abridged datasets as the test split. The remaining main and abridged examples are combined with the rare answers data, and then split 80%-20% to form the train and dev
\begin{table}
\begin{tabular}{l|c c c|c}
**Deal Point Category** & **Main Dataset** & **Rare Answers Dataset** & **Abridged Dataset** & **All Datasets** \\ \hline Conditions to Closing & 3,411 & 298 & 4,052 & 7,761 \\ Deal Protection and Related Provisions & 6,491 & 2,280 & 5,937 & 14,708 \\ General Information & 152 & 17 & 173 & 342 \\ Knowledge & 388 & 23 & 258 & 669 \\ Material Adverse Effect & 8,816 & 871 & 3,273 & 12,960 \\ Operating and Efforts Covenant & 1,216 & 191 & 1,054 & 2,461 \\ Remedies & 149 & 0 & 181 & 330 \\ \hline All Categories & 19,407 & 3,680 & 14,928 & 39,231 \\ \hline \end{tabular}
\end{table}
Table 1: Number of MAUD examples contained in each dataset by category. Each example is a question-answer pair corresponding to an extracted deal point text.
\begin{table}
\begin{tabular}{|l|c c c|c|} \hline & **train** & **dev** & **test** & **overall** \\ \hline
**main** & 13,256 & 3,471 & 3,896 & 20,623 \\
**abridged** & 9,647 & 2,526 & 2,755 & 14,928 \\
**rare** & 2,924 & 756 & 0 & 3,680 \\ \hline
**overall** & 25,827 & 6,753 & 6,651 & 39,231 \\ \hline \end{tabular}
\end{table}
Table 2: The number of examples in MAUD, grouped by splits (train, dev, test) and by dataset (main, abridged, rare answers).
splits. All splits are stratified by deal point question-answer pairs.
To avoid data leakage due to main dataset and abridged dataset examples having overlapping text and the same answer, we always split the main examples first and then place abridged examples from the same contract in the same split.
## 4 Experiments
### Setup
**Metrics.** Because many questions have an imbalanced answer distribution, we use area under the precision-recall curve (AUPR) as our primary metric. For every question, we calculate the minority-class AUPR score for each answer and then average to get a mean AUPR score for the question. Then we average over all question scores to get an overall AUPR score for a model.
For example, consider a deal point question \(Q\), with three possible answers: \(A1\), \(A2\), and \(A3\), which have \(50\), \(10\), and \(10\) test examples respectively. For the unique question-answer pair \((Q,A1)\), we first binarize all answers as \(A1\) or \(\neg A1\). The minority binarized answer is \(\neg A1\), with 20 examples, and so the AUPR score for \((Q,A1)\) is calculated using positive class \(\neg A1\). To get the AUPR score for question \(Q\), we average the AUPR scores for \((Q,A1)\), \((Q,A2)\), and \((Q,A3)\).
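A sketch of this metric under stated assumptions (integer answer labels, per-answer probabilities; edge cases such as answers absent from the test split are ignored):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def question_aupr(labels, probs):
    """labels: (n,) integer answer indices; probs: (n, K) predicted probabilities.
    Returns the mean minority-class AUPR over the K answers of one question."""
    scores = []
    for k in range(probs.shape[1]):
        y = (labels == k).astype(int)
        p = probs[:, k]
        if y.mean() > 0.5:            # minority class is "not answer k"
            y, p = 1 - y, 1.0 - p
        scores.append(average_precision_score(y, p))
    return float(np.mean(scores))
```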
**Models.** We fine-tune both single-task and multi-task pretrained language models on MAUD using the Transformers library Wolf et al. (2020).
In the single-task setting, we evaluate the performance of fine-tuned BERT-base (110M params), RoBERTa-base (125M params), LegalBERT-base (110M params), DeBERTa-v3-base (184M params), and BigBird-base (127M params).
In the multi-task setting, we evaluate RoBERTa-base, LegalBERT-base, and DeBERTa-v3-base.
BERT Devlin et al. (2019) is a bidirectional Transformer that established state-of-the-art performance on many NLP tasks. LegalBERT Chalkidis et al. (2020) pretrains BERT on a legal corpus. RoBERTa Liu et al. (2019) improves on BERT, using the same architecture, but pretraining on an order of magnitude more data. DeBERTa He et al. (2020) improves upon RoBERTa by using a disentangled attention mechanism and more parameters.
27.6% of the unique deal point texts in MAUD and 50.0% of texts across all examples are longer than 512 RoBERTa-base tokens, motivating our evaluation of BigBird-base. BigBird Zaheer et al. (2020) is initialized with RoBERTa and trained on longer input sequences of up to 4,096 tokens, using a sparse attention pattern that scales linearly with the number of input tokens.
\begin{table}
\begin{tabular}{l|c c c c}
**Deal Point Category** & **Random** & **RoBERTa** & **LegalBERT** & **DeBERTa** \\ \hline Conditions to Closing & 20.4\% & 40.3\% & **46.2\%** & **46.2\%** \\ Deal Protections & 17.2\% & 48.6\% & **53.6\%** & 53.0\% \\ General Information & 23.4\% & **80.2\%** & 74.8\% & 67.7\% \\ Knowledge & 18.8\% & 68.3\% & **73.0\%** & 71.8\% \\ Material Adverse Effect & 14.5\% & 48.3\% & **50.7\%** & 47.8\% \\ Operating and Efforts Cov. & 22.0\% & 80.3\% & **87.3\%** & 74.2\% \\ Remedies & 10.9\% & 51.0\% & **83.6\%** & 77.9\% \\ \hline Overall & 16.8\% & 51.4\% & **55.8\%** & 53.0\% \\ \hline \end{tabular}
\end{table}
Table 4: Multi-task AUPR scores for each deal point category and fine-tuned model.
\begin{table}
\begin{tabular}{l|c c c c c c}
**Deal Point Category** & **Random** & **BERT** & **RoBERTa** & **LegalBERT** & **DeBERTa** & **BigBird** \\ \hline Conditions to Closing & 20.4\% & 41.7\% & 41.6\% & 32.0\% & **48.2\%** & 46.6\% \\ Deal Protections & 17.2\% & 53.8\% & 57.1\% & **58.6\%** & 57.9\% & 58.0\% \\ General Information & 23.4\% & 85.7\% & 81.7\% & 82.0\% & **87.2\%** & 81.2\% \\ Knowledge & 18.8\% & 75.6\% & **81.4\%** & 71.6\% & 80.9\% & 81.0\% \\ Material Adverse Effect & 14.5\% & 44.0\% & 47.7\% & 49.8\% & 48.8\% & **50.9\%** \\ Operating and Efforts Cov. & 22.0\% & 84.8\% & 85.7\% & **89.0\%** & 86.9\% & 86.6\% \\ Remedies & 10.9\% & 88.2\% & 94.3\% & **100\%** & 96.6\% & 95.0\% \\ \hline Overall & 16.8\% & 52.6\% & 55.5\% & 55.9\% & 57.1\% & **57.8\%** \\ \hline \end{tabular}
\end{table}
Table 3: Single-task AUPR scores for each deal point category and fine-tuned model. Each category score is calculated as the mean minority-class AUPR over all questions in the category and over three runs. The overall score is the mean AUPR score over all questions (not the mean over categories). See Appendix A.10 for category descriptions.
No deal point texts in MAUD have more than 4,096 tokens.
**Training.** We fine-tune models using the AdamW optimizer [11] and oversample so that every answer appears in equal proportion. The learning rate and number of updates were chosen by grid search. We trained our final models on the combined training and development splits, averaging test AUPR scores over three runs. See Appendix A.3 for more training details.
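A sketch of the answer-balanced oversampling, assuming hypothetical inputs `train_dataset` (a PyTorch dataset) and `train_answers` (the integer answer label of each training example for one question); both names are ours.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(train_dataset, train_answers, batch_size=16):
    answers = torch.tensor(train_answers)
    counts = torch.bincount(answers)
    weights = 1.0 / counts[answers].float()    # equal total mass per answer
    sampler = WeightedRandomSampler(weights, num_samples=len(answers),
                                    replacement=True)
    return DataLoader(train_dataset, batch_size=batch_size, sampler=sampler)
```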
### Results
Our fine-tuned models achieved high AUPR scores in the Remedies, General Information, and Operating & Efforts Covenant categories, but scored lower on other categories, particularly Deal Protections & Related Provisions (best single-task AUPR 58.6%), Conditions to Closing (48.2%), and Material Adverse Effect (50.9%). Our results indicate that there is substantial room for improvement on these three hardest categories, which have the longest text lengths (see Table 9) and which attorneys also find to be the most difficult to review. See Tables 3 and 4 for full results.
Generally, larger and newer models had higher mean performance on MAUD. In the single-task setting, DeBERTa achieved an overall score of 57.1% AUPR, compared with 55.5% for RoBERTa and 52.6% for BERT. BigBird achieved the highest score of 57.8% AUPR, slightly outperforming DeBERTa.
**Effect of Pretraining on Legal Corpus.** In the single-task setting, LegalBERT outperforms BERT and slightly outperforms RoBERTa, which have the same model architecture but are not specialized for law. In the multi-task setting, LegalBERT also outperforms DeBERTa. The strong performance of LegalBERT suggests that pretraining on legal data is helpful for MAUD.
**Single-Task versus Multi-Task Performance.** The RoBERTa and DeBERTa multi-task models performed worse than their single-task counterparts by about 4% AUPR. However, for LegalBERT the single-task and multi-task models had approximately the same performance.
**Dataset Size Ablation.** We trained single-task RoBERTa models on random subsets of MAUD training data to evaluate the effect of dataset size on performance (see Figure 3). We found that RoBERTa models trained on all training examples had an overall AUPR score 7.3% higher than those trained on a 50% subset of the dataset and 23.7% higher than models trained on only a 5% subset.
## 5 Conclusion
MAUD is a large-scale expert-annotated dataset which facilitates NLP research on a specialized merger agreement review task, based on the American Bar Association's Public Target Deal Point Study. MAUD can accelerate research towards specialized legal tasks like merger agreement review, while also serving as a benchmark for assessing NLP models in legal text understanding. Fine-tuned Transformer baselines exhibit strong performance on some deal point categories, but there is significant room for improvement on the three hardest categories.
Figure 3: RoBERTa-base AUPR as a function of the number of training examples, highlighting the value of our dataset's size. AUPR is averaged over three runs.
Figure 2: Precision-recall curves for multi-task models, averaged over all MAUD questions.
## Ethics Statement
### Data Collection
Our data was created by volunteer annotators from a non-profit legal organization, who joined the organization in order to create this dataset. None of our annotators were compensated monetarily for their time. Among our 36 annotators, 20 were male and 16 were female. 33 annotators are based in the United States and 3 annotators are based in Europe.
### Societal Impact
Advances in ML contract review, including merger agreement review, can reduce the costs of and increase the availability of legal services to businesses and individuals. In coming years, M&A attorneys would likely benefit from having auxiliary analysis provided by ML models.
### Limitations
MAUD enables research on models that can automate a specialized labelling task in the ABA Study, but does not target the other task performed in the ABA Study, which is the extraction of deal point texts from merger agreements.
We reserve this task for future work. For researchers interested in the deal point extraction task, we also release the 152 original contract texts and span annotations. Details on the span annotations and a preliminary baseline can be found in Appendix A.12.
The 152 merger agreements in MAUD involve the acquisitions of most but not all of the U.S. public target companies exceeding $200 million in value that were closed in 2021. Merger agreements for private companies or for public companies that do not exceed $200 million in value are not included, and consequently models trained on MAUD may be less performant on deal point texts extracted from these merger agreements.
The deal point questions and the list of predefined deal point answers to each question were created by experienced M&A attorneys and standardized by the ABA, but they do not represent all of the deal points that are important in a merger agreement. MAUD should not be used as the sole source for developing AI tools for merger agreement review and drafting.
|
2310.16955 | Break it, Imitate it, Fix it: Robustness by Generating Human-Like
Attacks | Real-world natural language processing systems need to be robust to human
adversaries. Collecting examples of human adversaries for training is an
effective but expensive solution. On the other hand, training on synthetic
attacks with small perturbations - such as word-substitution - does not
actually improve robustness to human adversaries. In this paper, we propose an
adversarial training framework that uses limited human adversarial examples to
generate more useful adversarial examples at scale. We demonstrate the
advantages of this system on the ANLI and hate speech detection benchmark
datasets - both collected via an iterative, adversarial
human-and-model-in-the-loop procedure. Compared to training only on observed
human attacks, also training on our synthetic adversarial examples improves
model robustness to future rounds. In ANLI, we see accuracy gains on the
current set of attacks (44.1%$\,\to\,$50.1%) and on two future unseen rounds of
human generated attacks (32.5%$\,\to\,$43.4%, and 29.4%$\,\to\,$40.2%). In hate
speech detection, we see AUC gains on current attacks (0.76 $\to$ 0.84) and a
future round (0.77 $\to$ 0.79). Attacks from methods that do not learn the
distribution of existing human adversaries, meanwhile, degrade robustness. | Aradhana Sinha, Ananth Balashankar, Ahmad Beirami, Thi Avrahami, Jilin Chen, Alex Beutel | 2023-10-25T19:51:37Z | http://arxiv.org/abs/2310.16955v2 | # Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks
###### Abstract
Real-world natural language processing systems need to be robust to human adversaries. Collecting examples of human adversaries for training is an effective but expensive solution. On the other hand, training on synthetic attacks with small perturbations--such as word-substitution--does not actually improve robustness to human adversaries. In this paper, we propose an adversarial training framework that uses limited human adversarial examples to generate more useful adversarial examples at scale. We demonstrate the advantages of this system on the ANLI and hate speech detection benchmark datasets--both collected via an iterative, adversarial human-and-model-in-the-loop procedure. Compared to training only on observed human attacks, also training on our synthetic adversarial examples improves model robustness to future rounds. In ANLI, we see accuracy gains on the current set of attacks (\(44.1\%\to 50.1\%\)) and on two future unseen rounds of human generated attacks (\(32.5\%\to 43.4\%\), and \(29.4\%\to 40.2\%\)). In hate speech detection, we see AUC gains on current attacks (\(0.76\to 0.84\)) and a future round (\(0.77\to 0.79\)). Attacks from methods that do not learn the distribution of existing human adversaries, meanwhile, degrade robustness.
## 1 Introduction
Improving accuracy on real adversarial examples is critical to effective natural language processing (NLP) systems. In this paper, we propose methods to improve adversarial robustness beyond what we can achieve by either directly training on real adversarial examples or on simple synthetic attacks. We achieve this by learning the distribution of real attacks, and generating synthetic examples that imitate them.
Adversarial examples are perturbed examples that were designed by humans to induce misclassification (Szegedy et al., 2014). Adversarial robustness is the ability of a classifier to correctly label adversarial examples. Adversarial robustness has been extensively studied on benchmark NLP tasks (Jia and Liang, 2017; Ettinger et al., 2017; Zhang et al., 2020; Gao et al., 2018). Despite extensive work and progress in this domain, NLP classifiers are still not robust to real-life text adversaries (Lees et al., 2021; Borkan et al., 2019). These failures can result in real-world harms (Scheuerman et al., 2021). Prior work relies on two types of approaches: training on adversarial examples collected by humans (Dinan et al., 2019; Nie et al., 2020), and synthetic attacks (Uesato et al., 2018; Jin et al., 2020). Vulnerability in real-world scenarios is, in part, due to (a) too few real adversarial examples available, and (b) over-simplification of synthetic attacks to produce adversarial examples at scale.
Collecting human-generated NLP adversarial datasets is expensive (Xu et al., 2021; Hendrycks et al., 2021) -- even more prohibitively so for modern large models (Ganguli et al., 2022). Thus, we rely on synthetic examples to improve robustness beyond what we can achieve by directly training on our limited number of human-generated examples. Existing synthetic attacks, however, are often over-simplified; because, unlike computer vision, it is difficult to generate synthetic examples with the desired class label in a discrete space. So these attacks are often reduced to easy-to-generate attacks based on easy-to-compute text-quality metrics (Uesato et al., 2018).
Popular synthetic NLP attacks include (a) template-based or small text-edit-distance attacks, (Malfa and Kwiatkowska, 2022; McCoy et al., 2019; Zang et al., 2020; Ren et al., 2019), (b) perturbation attacks that use word embeddings and search within an \(\varepsilon\)-neighborhood (Jia et al., 2019; Zhao et al., 2017, 2018; Li et al., 2021; Huber et al., 2022), or (c) finding universal adversarial perturbations (Moosavi-Dezfooli et al., 2017; Wallace et al., 2019; Mehrabi et al., 2022). Real attackers, meanwhile, are (i) known to make much larger edits from the original text, and (ii) are informed by each other's successful attacks, neither of which is captured in existing synthetic NLP attacks (West, 2017). In this paper, we take a step towards closing this gap by directly modeling the real attack patterns. This enables us to emulate human text attacks more realistically by (i) allowing larger edits and (ii) using existing real attacks to inform future attacks.
In prior work, the following proxies are usually used to measure whether generated adversarial examples are of good quality: semantic proximity (Malfa and Kwiatkowska, 2022), high attack success rate (Szegedy et al., 2014), low label noise (Malfa and Kwiatkowska, 2022), or distributional similarity to past attacks (Pillutla et al., 2021). These metrics, however, do not connect well to the attack patterns used by real adversaries. Our primary metric for attack quality is whether the generated attacks are useful when used in adversarial training to defend against future unseen rounds of human-generated attacks. That is, can we increase robustness beyond what we achieve by only training on all existing observed human attacks? We leverage frameworks like Dynabench to test on the evolving patterns of real adversaries (Kiela et al., 2021).
We show that our attack generation methods, which learn the distribution of human adversarial examples, outperform both (1) attack generators that do not take the human examples into account, and (2) attack generators that do not learn the distribution but rely on random perturbations (Sec 6). Our attack generation methods are able to make improvements even when trained on as few as 500 real human adversarial examples. Finally, though prior adversarial literature places a high emphasis on adversary success rates, low label noise, or distributional similarity, we show that these quality proxies are not predictive of whether an attack generator can better defend against future attacks. Our primary contributions are to:
1. **Demonstrate misalignment between synthetic and real attacks:** We empirically show that existing synthetic attack approaches do not necessarily improve robustness to the real attacks from humans.
2. **Overcome misalignment by imitating real adversaries:** We use generative models to directly imitate existing human-generated attacks. Our metric of success is how much we can improve robustness to future real attacks (beyond what can be accomplished by adversarially training on all existing real attacks).
3. **Improve adversarial robustness without relying on a better/bigger model:** Adversarial training on imitated real attacks provides significant robustness benefits. When compared to solely training on existing real attacks, we improve accuracy by 11% on unseen attacks in the ANLI benchmark, and by 8% on existing attacks in the hate speech detection benchmark.
4. **Show misalignment between common attack quality metrics and attack usefulness in preventing future attacks:** We empirically show that more distributional similarity, low label noise, or high adversary success rate do not entail that an attack generator is better than another in helping defend against downstream attacks.
## 2 Related Work
**Adversarial robustness** is measured as accuracy on challenge sets such as Adversarial GLUE (Wang et al., 2021). These test sets are gathered through crowd-sourcing and programmatic text perturbation (Zhang et al., 2019; Morris et al., 2020). To improve adversarial robustness, prior work uses training interventions and/or data augmentation. Training interventions focus on learning more meaningful stable representations from available data: e.g. using mutual information (Wang et al., 2020; Zhao et al., 2022) or locally regularizing (Aghajanyan et al., 2020; Zhu et al., 2019). These are out of scope: we only focus on data augmentation - which can be used with any of the training intervention methods.
Despite the popularity of data augmentation solutions such as \(\epsilon\)-bounded PGD attacks (Madry et al., 2017) in continuous domains, it is not straightforward to extend them to discrete NLP settings. In NLP, small perturbations to the text can have a big impact on the text's true label.
Nevertheless, **controlled text generation** has vastly improved in recent years through zero or few-shot tuning of large language models (Wu et al., 2021; Perez et al., 2022). Such methods use data augmentation for fine-tuning (Michel et al., 2019; Garg and Ramakrishnan, 2020), gradients from small attribute classifiers to ensure generated text has a particular class (Dathathri et al., 2020; Wu et al., 2023), or careful prompting to generate semantically close text with only the desired attribute swapped (Madaan et al., 2020).
_Adversarial_ controlled text generation, however, is even more challenging. It is not straightforward to correctly label the generated text we intend to use as an adversarial example (by definition). Prior work typically gets around this challenge by making very small or rule-based perturbations that should not change the original example's true label. These include contextual synonym-substitution (Michel et al., 2019; Jia et al., 2019; Li et al., 2020; Morris et al., 2020), rule-based grammar (McCoy et al., 2019; Ribeiro et al., 2018), morphological (Tan et al., 2020), and character-level (Eger and Benz, 2020) manipulations. A related but under-explored area is to adversarially intervene on all parts of the text that are invariant to the predicted label (Wang et al., 2020; Chen et al., 2021; Lei et al., 2022), or conversely to minimally intervene to ensure the true label changes (Ross et al., 2021; Deng et al., 2022). Our attack method is different in that it relies on memorizing and re-mixing phrases from existing adversarial examples to generate additional adversarial examples that the attack method can correctly label.
Most prior work assumes that adversarial text should be fluent, grammatically correct, or semantically similar to the original. There is no such constraint on adversaries in the real world. Hence, in this paper we use the Hate Detection task, which often does not have fluent grammatical text. Prior work that does not assume a standardized single language includes studies on dialectal language like Ebonics on Twitter (Blodgett et al., 2016), multilingual text on Reddit (Michel and Neubig, 2018), Hinglish (Biradar et al., 2021), and emoji-based hatespeech (Kirk et al., 2022); and methods that seek to distinguish human and machine generated text through heuristics (e.g. sentence length, verb ratios (Yao et al., 2017), relational consistency (Zhong et al., 2020)).
**Red teaming:** We are also motivated by red-team human generated datasets--human-created attacks that target a specific model, often by crowd-sourcing. Examples include SWAG, ReCoRD, HotpotQA, HellaSWAG, HANS (Zellers et al., 2018; Zhang et al., 2018; Yang et al., 2018; Zellers et al., 2019; McCoy et al., 2019; Kaushik et al., 2019). Sometimes the work also uses feedback from the original classifier: CoDAH, Quoref, DROP, FEVER 2, ANLI, etc. (Chen et al., 2019; Dasigi et al., 2019; Dua et al., 2019; Thorne et al., 2019; Nie et al., 2020; Bartolo et al., 2020; Kiela et al., 2021). We rely on such human-model interactions in a feedback loop set-up, and propose a mechanism to reduce red team costs.
## 3 Problem Formulation
Training on existing real attacks improves robustness to future real attacks. Our goal is to improve robustness even further by making use of generated synthetic attacks that imitate the real adversarial examples. Given our focus on robustness improvements through data augmentation, we only assume black-box access to the attacked model, and knowledge of all observed existing attacks. We evaluate the robustness on successive rounds of adversarial examples collected through a crowd-sourced model-in-the-loop approach (Nie et al., 2020). We take the classifier architecture to be fixed; improvements to architecture or model size are out of scope.
**Notation:** We refer to \(\mathbf{x}\) as a collection of samples drawn from distribution \(p:\mathbf{x}\sim p\). We assume there are ground truth labels associated with samples \(\mathbf{x}\), given by \(y(\mathbf{x})\). We assume that there is a base classifier, parameterized by \(\theta_{0}\), trained on \(\mathbf{x}\) that labels each input as \(\widehat{y}_{\theta_{0}}(\mathbf{x})\). Next, we sample human adversarial examples from a distribution \(p_{\mathbf{a}_{0}}:\mathbf{a}_{0}\sim p_{\mathbf{a}_{0}}\). These are created to fool the base classifier, i.e., \(\widehat{y}_{\theta_{0}}(\mathbf{a}_{0})\neq y(\mathbf{a}_{0})\).
The classifier is then further fine-tuned on the real attacks, \(\mathbf{a}_{0}\). We refer to the output of this new classifier as \(\widehat{y}_{\theta_{1}}(\mathbf{x})\). Next, we create a new round of human adversarial examples to fool \(\widehat{y}_{\theta_{1}}\). We refer to this future human adversarial data distribution as \(p_{\mathbf{a}_{1}}:\mathbf{a}_{1}\sim p_{\mathbf{a}_{1}}\), whose samples are misclassified by the classifier: \(\widehat{y}_{\theta_{1}}(\mathbf{a}_{1})\neq y(\mathbf{a}_{1})\).
This model-in-the-loop process results in evolving crowd-sourced real attacks as follows:
\[\theta_{0}\rightarrow\mathbf{a}_{0}\rightarrow\theta_{1}\rightarrow\mathbf{a }_{1}\rightarrow\cdots \tag{1}\]
We do not have access to \(p_{\mathbf{a}_{1}}\),\(p_{\mathbf{a}_{2}}\) or \(\mathbf{a}_{1}\),\(\mathbf{a}_{2}\) at train time. **Our goal is to improve the accuracy of \(\widehat{y}_{\theta_{1}}\) on future attacks: \(\mathbf{a}_{1}\), \(\mathbf{a}_{2}\), without gathering additional human attacks beyond \(\mathbf{a}_{0}\)**.
## 4 Methods
To address the problem in Sec. 3, we describe how we use synthetic attacks to improve robustness. Then we describe the specific methods of generating the synthetic attacks.
### Overall Solution Framework
We aim to be more robust to \(\mathbf{a}_{1}\) by fine-tuning \(\widehat{y}_{\theta_{1}}\) on additional synthetic examples. We do this by training a generator, \(G\), on existing adversarial examples, \(\mathbf{a}_{0}\sim p_{\mathbf{a}_{0}}\). The generator learns \(\widehat{p_{\mathbf{a}_{0}}}\), an approximation of the true \(p_{\mathbf{a}_{0}}\). Then we use the generator to create synthetic examples, \(\mathbf{a}_{\mathbf{g}}\sim\widehat{p_{\mathbf{a}_{0}}}\). (For clarity, we also depict how we create all models and adversarial data in Fig. 1.)
The hypothesis of this work is that modeling the existing real attack distribution allows the generator to capture something about the real attack generation process more broadly. This allows the generator to generalize to future attacks. **Restated, we hypothesize \(\widehat{p_{\mathbf{a}_{0}}}\) is not only close to \(p_{\mathbf{a}_{0}}\), but also reasonably close to future adversarial attack distributions, \(p_{\mathbf{a}_{1}}\), \(p_{\mathbf{a}_{2}}\).**
While in practice \(\mathbf{a}_{1}\) and all subsequent attacks would depend on the mitigation put in place to obtain \(\theta_{1}\), we do not generate new human attacks in response to the new classifiers trained on \(\mathbf{a}_{\mathbf{g}}\), and hence \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) are fixed in our setting. We do show that future attacks (\(\mathbf{a}_{1}\),\(\mathbf{a}_{2}\)) are less effective thanks to training on \(\mathbf{a}_{\mathbf{g}}\).
### Synthetic Attack Generators
For many NLP tasks, the input \(x\) can be broken into \((x_{i},x_{o})\). \(x_{i}\) is the portion of the input text that is not attacked by the adversary. It remains the same. This can be context: e.g. premise in NLI, paragraph in QA tasks, etc. For the toxicity task, where the entire sentence may be attacked, we set \(x_{i}\) to be the first half of
the text. \(x_{o}\) is the portion that is attacked: e.g. hypothesis in NLI, question in QA, comment in sentiment analysis. We now present two methods to generate \(\mathbf{a_{g}}\sim\widehat{p_{\mathbf{a_{0}}}}\). The first is an imitation-only approach agnostic to the task. The second takes the classifier task into account to better maintain the desired class label.
**Direct Imitation (DI): Label-aware fine-tuning** We fine-tune the generative model on existing observed attacks, \(\mathbf{a_{0}}\sim p_{\mathbf{a_{0}}}\). We consider \((x_{i},y)\) as the input for the generator, and \(x_{o}\) to be the target text that needs to be generated. Incorporating \(y\) as input greatly helps reduce the rate of noisy labels--the rate at which the generated example does not retain the same label as the input (\(y(\mathbf{a_{g}})\neq y(\mathbf{a_{0}})\)). Specifically, we minimize the cross-entropy loss between the generated text probabilities \(\widehat{p_{\mathbf{a_{0}}}}(x_{i},y)\) and the target text \(x_{o}\). See Appendix A for additional implementation details and loss function definition.
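As a minimal sketch of this label-aware fine-tuning (assuming a T5-style seq2seq generator via Hugging Face transformers; the `t5-small` checkpoint and the `label: ... premise: ...` prompt format are our illustrative choices, and the exact loss and setup live in the paper's Appendix A), one training step could look as follows:

```
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# One observed attack a_0 = (premise x_i, adversarial hypothesis x_o, label y).
premise = "The Nassau County population increased from 2010 to 2016."
hypothesis = "The population grew over six years."
label = "entailment"

# Label-aware input: condition generation of x_o on (x_i, y).
src = tok(f"label: {label} premise: {premise}", return_tensors="pt")
tgt = tok(hypothesis, return_tensors="pt").input_ids

loss = model(**src, labels=tgt).loss  # cross-entropy against the target text x_o
loss.backward()                       # one fine-tuning step (optimizer omitted)
```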
**Imitation + Controlled Exploration (ICE):** The primary challenge of the DI method, and the primary challenge of all controlled generation more broadly, is noisy labels (adversarial examples are, by definition, especially hard to label correctly).
To overcome this challenge, we modify the Plug and Play controlled decoding method to make it suitable for adversarial robustness (Dathathri et al., 2020). Plug and Play methods add an additional small classifier to a generator. The classifier takes the hidden layers of the generator as input. The purpose of the classifier is to ensure a desired property is maintained in the generated output. For any output of the generator, the classifier checks whether the output has the desired property using a cross-entropy loss. The generator updates its hidden layers to minimize the classifier loss, and generates a new, better output that is closer to having the desired property. This method is computationally intensive if the classifier is complex. Hence, the classifier is a single-layer feed-forward network.
**Warm-starting:** This system works well when this classifier on the hidden layers of a generator model can capture the desired property; i.e the property is easy to learn. This is obviously not the case with adversarial examples. Restated, a linear classifier cannot guide the generator into producing correctly labeled examples when given tricky human adversarial examples as input. Moreover, it is unlikely that a linear classifier will outperform the large language model we aim to ultimately improve. Examples the classifier does label correctly, are examples the large language model is likely to get right as well. Hence, these examples will not be useful in improving the large language model.
Figure 1: The framework that shows how each dataset and model is created. Our contributions are double lined. Not pictured: \(\hat{y}_{g}\) is then evaluated on the future real attacks, \(\mathbf{a_{1}}\)
We mitigate this issue by simplifying the task for the linear classifier: we encourage the generator to prefer text phrases for which the linear classifier already knows the true labels. Restated, the adversarial examples generated by this ICE method largely only re-mix existing examples (see Table 1). For additional implementation details and loss function definition, we refer to Appendix A.
We achieve this re-mixing by encouraging the system to reconstruct existing attacks. The reconstruction task asks the model to generate the attack exactly: \(R:x_{i}\to x_{o}\). We modify the generator to multitask on observed real attacks, \(\mathbf{a}_{0}\), for both the main task (\(Y:x\to y\)) and the reconstruction task. We freeze the generator parameters. Next, we fine-tune the linear classifier on \(\mathbf{a}_{0}\) for the main task. Then we freeze the parameters of the classifier and unfreeze the generator parameters.
**Iterative Example Generation:** We generate examples by passing in the observed real attacks one at a time. For a fixed number of \(S\) steps, the classifier provides feedback to the generator. The generator updates its parameters to generate a new output that is more likely to have the same label, \(y\), as the input. This process is outlined in Algorithm 1: LC refers to the linear classifier; G refers to the generator. This method allows us to increase the diversity of generated examples by toggling hyper-parameters: reducing the reconstruction task weight \(\lambda\), increasing the beam search parameter \(\alpha\), and increasing the number of steps \(S\).
```
G.\(train(R:x_{i}\to x_{o},\forall(x,y)\in\mathbf{a}_{0})\)
G.\(train(Y:x\to y,\forall(x,y)\in\mathbf{a}_{0})\)
Freeze weights of G.
Let \(H(x)=\texttt{G}.get\_hidden\_layers(R:x)\).
LC.\(train(Y:H(x)\to y,\forall(x,y)\in\mathbf{a}_{0})\)
Freeze weights of LC
for all \((x,y)\in\mathbf{a}_{0}\) do
    \(\texttt{G}^{\prime}=\texttt{G}.copy()\)
    Unfreeze weights of \(\texttt{G}^{\prime}\)
    for all \(i\in 1,2,\cdots,S\) do
        \(\texttt{Grad}=\nabla\texttt{LC}.loss(Y:(H(x_{i}),y))+\lambda\cdot\nabla\texttt{G}.loss(R:(x_{i},x_{o}))\)
        \(\texttt{G}^{\prime}.back\_propagate(\texttt{Grad})\)
    end for
    yield \(\texttt{G}^{\prime}.decode(\mathbf{a}_{0},\alpha)\)
end for
```
**Algorithm 1** Pseudo-code for ICE Method:
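The following is a minimal PyTorch sketch of the inner loop of Algorithm 1, assuming a T5 backbone from Hugging Face transformers. The frozen linear classifier `LC` is randomly initialized here purely for illustration; in the method it is first fine-tuned on \(\mathbf{a}_{0}\) over the generator's hidden states, and pooling the encoder's last hidden state is our simplification of `get_hidden_layers`.

```
import copy
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-small")
G = T5ForConditionalGeneration.from_pretrained("t5-small")
LC = torch.nn.Linear(G.config.d_model, 3)  # 3 NLI labels; kept frozen below
for p in LC.parameters():
    p.requires_grad_(False)

x_i = tok("premise: The Nassau County population increased from 2010 to 2016.",
          return_tensors="pt")
x_o = tok("The population grew.", return_tensors="pt").input_ids
y = torch.tensor([0])  # entailment

G_prime = copy.deepcopy(G)  # per-example copy G' of the generator
opt = torch.optim.SGD(G_prime.parameters(), lr=1e-3)
lam, S = 1.0, 3
for _ in range(S):
    out = G_prime(**x_i, labels=x_o, output_hidden_states=True)
    pooled = out.encoder_last_hidden_state.mean(dim=1)    # H(x), simplified
    cls_loss = torch.nn.functional.cross_entropy(LC(pooled), y)
    (cls_loss + lam * out.loss).backward()                # Grad in Algorithm 1
    opt.step()
    opt.zero_grad()

ids = G_prime.generate(**x_i, num_beams=4, max_new_tokens=16)  # decode(a_0, alpha)
print(tok.decode(ids[0], skip_special_tokens=True))
```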
## 5 Experiments
This section details how we evaluate the methods listed in Section 4--the benchmark dataset used, implementation details, and baselines.
\begin{table}
\begin{tabular}{c|c c} \(\mathbf{a}_{0}\) & The Nassau County & population increased \\ & from 2010 to 2016. & \\ \(\mathbf{a}_{0}\) & The Crystal Mountain & Resort is a tourist destination. \\ \hline \(\mathbf{a}_{\texttt{g}}\) & Crystal Mountain & population increased \\ & from 2010 to 2016. & \\ \end{tabular}
\end{table}
Table 1: An example of the ICE attack generator remixing existing observed attacks (top two) from the ANLI R1 data to create a new attack (bottom).
### A. Tasks: Human-in-the-loop crowd-sourced datasets
We want to assess whether we can meaningfully amplify past human-generated attacks to be more robust to future human-generated attacks (given fixed attack generation instructions and UI). Hence, we chose two very different DynaBench tasks: Natural Language Inference (NLI) and Hate Speech Detection. The former has longer and qualitatively more varied texts. The latter is terse, less varied, and has less standard English (often with incorrect grammar and spelling) (Kiela et al., 2021).
**A.I. Adversarial NLI:** We evaluate our methods on the Adversarial NLI (ANLI) task (Nie et al., 2020). This is a Natural Language Inference (NLI) task: the goal is to determine whether a _hypothesis_ logically follows (entailment) or contradicts (contradiction) or is undetermined (neutral) based on facts present in the _premise_. Nie et al. crowd-source human attacks on the hypothesis against a base classifier, \(\widehat{y}_{\theta_{0}}\), trained on MNLI+SNLI data (Wang et al., 2018; Bowman et al., 2015). Then they train more robust models by incorporating these new attacks (and other data) for three rounds. \(\mathbf{a}_{0}\) are the human generated attacks from the first round created by attacking \(\widehat{y}_{\theta_{0}}\), a BERT-Large transformer classifier (Devlin et al., 2019). Successive rounds \(\mathbf{a}_{1}\), \(\mathbf{a}_{2}\) are created by attacking RoBERTa models. We choose to improve the robustness of the BERT-Large model \(\widehat{y}_{\theta_{0}}\) using \(\mathbf{a}_{0}\) and evaluate on future human adversarial attacks: \(\mathbf{a}_{1}\), \(\mathbf{a}_{2}\).
**A.II. Hate Speech Detection:** We also evaluate on the Dynabench Hate Speech detection dataset, an adversarial human-in-the-loop dataset generated in four rounds (Vidgen et al., 2021). In Round 1, Vidgen et al. train a base RoBERTa classifier, \(\widehat{y}_{\theta_{0}}\), on original content created by humans. In Rounds 2-4, they create more robust RoBERTa models (\(\widehat{y}_{\theta_{1}}\), \(\widehat{y}_{\theta_{2}}\), \(\widehat{y}_{\theta_{3}}\)) by training on attacks created as follows: Human raters first create original content that successfully fools the base classifier. Then they perturb these new sentences to create even more challenging "contrast sets" with different labels. This data is then split into train, validation, and test sets, with half the test set entries created by annotators who do not appear in the training and validation sets to minimize annotator bias.
Note that for both tasks we do not gather additional human adversarial attacks targeting our improved classifiers. We evaluate on a fixed set of attacks previously unseen by the classifier. This is a limitation of our set-up; gathering additional rounds of human attacks is left as future work.
### B. Base Attack Generator Model is T5
We use the T5 encoder-decoder as our generator for this paper (Raffel et al., 2020) for two reasons. First, it is compatible with multiple small sentence generation tasks (Raffel et al., 2020). Second, its performance on benchmark NLI tasks (MNLI, ANLI) and the Hate Speech task is close to that of the BERT-Large model we seek to improve, i.e., improvements are not coming from a superior model but rather from imitating the real adversaries.
### C. Baselines
We have two types of baseline attack generators. The first type uses existing observed attacks, but does not learn the distribution, instead relying on random perturbations. We use three of these methods: TextFooler, BertAttack, and CT-GAN. TextFooler is a very popular attack generation library that transmutes the most predictive words, while preserving semantic similarity and contextual coherence (Jin et al., 2020). BertAttack is another popular method: it uses the model it's attacking to identify vulnerable words in the input; then, it uses BERT to generate substitutes for the vulnerable words (Li et al., 2020). CT-GAN is a Generative Adversarial Network (Goodfellow et al., 2014) modified for controlled text generation where the NLI premise is used as the control text (Haidar et al., 2019; Betti et al., 2020). The second type of baseline learns an example distribution, but does not use the attack distribution. We repeat our main methods, ICE and DI, in a new data setting. Instead of using ANLI R1, we use the MNLI+SNLI data used to train the base classifier, \(\widehat{y}_{\theta_{0}}\).
## 6 Results
### Synthetic human-like adversarial data improve robustness to future attacks. Distribution-agnostic baselines do not.
Table 2 and Table 3 show that for both tasks, the accuracy on future rounds of human-generated attacks (\(\mathbf{a}_{1}\), \(\mathbf{a}_{2}\)) improves when generated examples \(\mathbf{a_{g}}\) from ICE and DI are incorporated into training.
The adversarial example generators that attempt to imitate \(\mathbf{a}_{0}\) (ICE and DI) out-perform all types of baselines. First, they improve robustness beyond what we achieve by training on past human adversarial attacks, \(\mathbf{a}_{0}\), alone. This improvement cannot be achieved merely by training for more steps on ANLI R1, as shown in Table 18 in the Appendix. Second, they out-perform methods that rely on noise-based attacks on \(\mathbf{a}_{0}\) like TextFooler, BertAttack, and CT-GAN. Finally, they out-perform methods that imitate example distributions generated by other processes: ICE(MNLI+SNLI) and DI(MNLI+SNLI). The word/phrase substitution methods, BertAttack and TextFooler, improve accuracy within the same round for the Hate Speech dataset--this dataset is itself half-generated by making such minimal substitutions. Yet, these methods are not more effective on future rounds.
Table 17 extends the setting where R1 and R2 are past observed attacks available for training, and R3 is the held out set of future human attacks, leading to similar conclusions.
\begin{table}
\begin{tabular}{l l l l}
**Model** & \(\mathbf{a}_{0}\): **R1** & \(\mathbf{a}_{1}\): **R2** & \(\mathbf{a}_{2}\): **R3** \\ \hline \(\widehat{y}_{0}\) : Base + R1 & \(44.1_{\pm 0.03}\) & \(32.5_{\pm 0.05}\) & \(29.4_{\pm 0.05}\) \\ \(\hookrightarrow\) + TextFooler(R1) & \(24.1_{\pm 0.08}\) & \(27.9_{\pm 0.06}\) & \(30.3_{\pm 0.06}\) \\ \(\hookrightarrow\) + BERT-Attack(R1) & \(35.1_{\pm 0.12}\) & \(29.0_{\pm 0.08}\) & \(31.3_{\pm 0.09}\) \\ \(\hookrightarrow\) + CT-GAN(R1) & \(26.8_{\pm 0.14}\) & \(29.5_{\pm 0.12}\) & \(29.5_{\pm 0.11}\) \\ \(\hookrightarrow\) + DI(MNLI+SNLI) & \(22.9_{\pm 0.11}\) & \(28.1_{\pm 0.12}\) & \(29.4_{\pm 0.10}\) \\ \(\hookrightarrow\) + ICE(MNLI+SNLI) & \(33.9_{\pm 0.78}\) & \(33.7_{\pm 0.67}\) & \(33.5_{\pm 1.47}\) \\ \(\hookrightarrow\) + DI(R1) & \(\mathbf{48.2}_{\pm 0.32}\) & \(39.1_{\pm 0.29}\) & \(\mathbf{40.2}_{\pm 0.37}\) \\ \(\hookrightarrow\) + ICE(R1) & \(\mathbf{50.1}_{\pm 1.43}\) & \(\mathbf{43.4}_{\pm 2.91}\) & \(\mathbf{39.9}_{\pm 1.38}\) \\ \hline \end{tabular}
\end{table}
Table 2: Improvement on ANLI mean accuracy (%) (\(\pm\) standard error across 3 runs) when trained on attacks generated only from Round 1. The notation, DI(R1) for instance, refers to the method DI using R1 data to generate more examples. We underline the setups that outperform the _Base + R1_ baseline.
\begin{table}
\begin{tabular}{l l l l}
**Model** & \(\mathbf{a}_{0}\): **R2** & \(\mathbf{a}_{1}\): **R3** & \(\mathbf{a}_{2}\): **R4** \\ \hline Base + R1 + R2 & \(0.76_{\pm 0.001}\) & \(0.78_{\pm 0.003}\) & \(0.77_{\pm 0.001}\) \\ \(\hookrightarrow\) + TextFooler(R2) & \(0.78_{\pm 0.011}\) & \(0.77_{\pm 0.012}\) & \(0.76_{\pm 0.009}\) \\ \(\hookrightarrow\) + BERT-Attack(R2) & \(0.78_{\pm 0.013}\) & \(0.76_{\pm 0.015}\) & \(0.77_{\pm 0.017}\) \\ \(\hookrightarrow\) + DI(R2) & \(\mathbf{0.84}_{\pm 0.013}\) & \(0.77_{\pm 0.034}\) & \(0.76_{\pm 0.018}\) \\ \(\hookrightarrow\) + ICE(R2) & \(\mathbf{0.83}_{\pm 0.032}\) & \(0.80_{\pm 0.024}\) & \(\mathbf{0.79}_{\pm 0.018}\) \\ \hline \end{tabular}
\end{table}
Table 3: Improvement on Hate speech detection AUC (\(\pm\) standard error across 3 runs) when trained on attacks generated only from Round 2. The notation, DI(R2) for instance, refers to the method DI using R2 data to generate more examples. In this dataset, R1 is not adversarially generated, and is analogous to the base MNLI/SNLI data in the ANLI task.
### Common metrics for an attack method (label noise rate, attack success rate, and distributional similarity) do not entail whether adversarial data from the method can help defend against future attacks.
Adversarial robustness literature considers attack methods to be better if they produce datasets with less label noise (Dathathri et al., 2020), higher attack success rates (Uesato et al., 2018), or higher proximity to the original dataset (Ross et al., 2021). We show that these metrics are not good proxies for determining which type of attack method can generate examples that best defend against future attacks. These findings are surprising, and additional investigation is needed to understand why (also see the generated examples in Appendix A).
### Higher distributional similarity of \(\mathbf{a_{g}}\) to future attacks does not entail more useful adversarial examples.
We use MAUVE as a metric of distributional similarity between two text datasets in Fig. 2 (Pillutla et al., 2021). Though DI(R1) and ICE(R1) are more useful methods than TextFooler(R1) (as per Table 2), ICE and TextFooler have about the same level of similarity to the ANLI data -- both higher than DI.
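As a hedged illustration of how such a score can be computed with the cited mauve-text package (the paper computes MAUVE on RoBERTa embeddings, whereas this sketch keeps the package's default GPT-2 featurizer, and the toy corpora below are placeholders for real attack texts):

```
# pip install mauve-text
import mauve

# Toy corpora standing in for held-out human attacks (p) and generated attacks (q).
p_text = [f"The county population increased from 2010 to {2011 + i % 6}." for i in range(100)]
q_text = [f"Crystal Mountain opened a resort in {2011 + i % 6}." for i in range(100)]

out = mauve.compute_mauve(p_text=p_text, q_text=q_text,
                          featurize_model_name="gpt2", verbose=False)
print(out.mauve)  # in (0, 1]; higher means the two text distributions are closer
```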
### Lower label noise does not entail more useful adversarial examples.
One of the hardest challenges in controlled adversarial example generation is generating examples that have the desired label, that is, reducing the rate of noisy labels. While this is a useful goal within an attack method type, comparing rates of noisy labels across attack methods does not help us choose a more useful method. Table 4 shows that TextFooler has more correct labels than the DI method, even though Table 2 makes it clear that DI is the more useful method. We, the authors, annotated 100 examples from each attack method and report the rate of correct labels by comparing our annotated ground truth with the adversarial label associated with the corresponding (premise, hypothesis) pair in the ANLI R1 dataset. We refer to the Appendix A for tables of generated examples from each of the attack methods, and guidelines for human annotations.

Figure 2: Distributional similarity, as measured by MAUVE on RoBERTa embeddings from a random 1k sample (Pillutla et al., 2021). MAUVE scores range from 0 to 1, with higher values indicating more similar distributions. MAUVE metrics are intended to be evaluated relative to each other, and not as absolute measures. Note that distributional similarity to the held out attacks, R2, does not correlate with whether an attack generation method is useful as per Table 2.
### Higher adversary success rate does not entail more useful adversarial examples.
Table 5 shows the attack success rate of the various attack datasets (only including attacks with the correct label as verified by human ratings). All synthetic attack generators have high success rates on all classifiers, which indicates that the information in the generated examples was not yet captured by the base classifiers. Surprisingly, TextFooler, which has the highest attack success rate, does not improve adversarial robustness; the R2, R3, DI(R1), and ICE(R1) datasets have lower attack success rates but are much more useful in increasing robustness to future attacks.
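For concreteness, this attack success rate can be computed as the fraction of human-verified, correctly labeled attacks that the model misclassifies; a toy sketch (the function and variable names are ours, not the paper's code):

```
def attack_success_rate(model_preds, true_labels, label_is_correct):
    # Keep only attacks whose labels human raters verified as correct,
    # then count how often the model's prediction disagrees (i.e., is fooled).
    kept = [(p, t) for p, t, ok in zip(model_preds, true_labels, label_is_correct) if ok]
    return sum(p != t for p, t in kept) / len(kept)

print(attack_success_rate(["ent", "con", "neu"],
                          ["con", "con", "ent"],
                          [True, False, True]))  # -> 1.0
```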
### Even \(\sim\)1k human adversarial examples improves robustness to unseen adversaries.
We test how the number of human adversarial examples used to train the attack generator affects the adversarial robustness obtained from the generated examples. We fix the number of examples we use to train \(\widehat{y}_{\theta_{1}}\) at 10k generated examples. The full set of real adversarial examples from ANLI R1 is 16.9k real examples. We vary the number of real examples we use to train the attack generator. Table 6 demonstrates that increasing the number of human examples improves robustness. Nevertheless, even when we provide low numbers of human examples, robustness to future rounds is improved. In-distribution accuracy on R1, however, suffers until 8k examples are used to train the attack generator. When we have fewer than 500 human adversarial examples to amplify using our approach, the robustness gains do not generalize. That happens because the few-shot setting creates several additional considerations that are out-of-scope in this paper:
\begin{table}
\begin{tabular}{l r}
**Generated attack dataset** & **Rate of correct labels** \\ \hline \(\mathbf{a_{g}}\): TextFooler(R1) & 51\% \\ \(\mathbf{a_{g}}\): DI(R1) & 39\% \\ \(\mathbf{a_{g}}\): ICE(R1) & **78\%** \\ \hline \end{tabular}
\end{table}
Table 4: Rate of correct labels from the attack generation methods on ANLI round 1 (\(\uparrow\) better). We, the authors, rated 100 random generated examples from each method to obtain these numbers. Our rating guidelines are included in the Appendix A, and the ratings are in Supplemental Materials.
\begin{table}
\begin{tabular}{l r r r}
**Model to attack**\(\rightarrow\) & Base & Base+R1 & Base+R1+R2 \\
**Attack Dataset**\(\downarrow\) & \(\widehat{y}_{\theta_{0}}\) & \(\widehat{y}_{\theta_{1}}\) & \(\widehat{y}_{\theta_{2}}\) \\ \hline \(\mathbf{a}_{0}\): R1 & 78\% & 56\% & 47\% \\ \(\mathbf{a}_{1}\): R2 & 73\% & 67\% & 58\% \\ \(\mathbf{a}_{2}\): R3 & 71\% & 71\% & 62\% \\ \(\mathbf{a}_{\mathbf{g}}\): TextFooler(R1) & 96\% & 98\% & 94\% \\ \(\mathbf{a}_{\mathbf{g}}\): DI(R1) & 76\% & 87\% & 79\% \\ \(\mathbf{a}_{\mathbf{g}}\): ICE(R1) & 78\% & 88\% & 88\% \\ \hline \end{tabular}
\end{table}
Table 5: Attack success rate of attack datasets on base BERT Large models as they incorporate more rounds of ANLI training data (\(\uparrow\) better). Only the examples that had the correct labels (as verified by human rating) are included in computing this rate.
* Simple fine-tuning baselines do not perform well in the few-shot setting. Other few-shot and parameter efficient baselines might be relevant (Zhou et al., 2022; Liu et al., 2022). The choice of training method is orthogonal to our goal of synthetic data generation.
* As we reduce the number of adversarial examples, evaluating generalization requires categorizing examples into various patterns to ensure coverage. In this work, we instead use the pattern-agnostic approach of Dynabench (Kiela et al., 2021), which aggregates all human-generated attacks.
### The ICE method, in effect, re-mixes phrases from previous observed attacks.
The reconstruction loss (\(R\)) often leads to exact memorization of train-set examples or exact phrases from the train-set. Qualitatively, we find that increasing the beam search parameter \(\alpha\) and number of steps \(S\), or decreasing reconstruction loss, \(\lambda\) at inference time leads to remixing these phrases (See Table 1). This also explains the high MAUVE distributional similarity between the ICE methods and the ANLI rounds on which it trains.
## 7 Limitations
If we put our new more robust models to use, human adversaries may adapt to them as well. Checking whether crowd-sourcing fresh attacks is indeed more difficult on the new models is beyond the scope of this work. Also, we benefit from having a fair number of human adversarial examples (\(16.9k\) in ANLI, and \(10k\) per round in Hate Speech). Our methods may be less successful in a scenario with very few examples (\(\sim 10\)). On the flip side, we have also not evaluated these methods on classifiers with access to even larger real adversarial datasets. Finally, our methods work on datasets with the notion of an original example and a perturbed adversarial example, as is the norm for adversarial robustness literature (Madry et al., 2017). In the new paradigm of larger, more capable NLP models, adversarial datasets may increasingly not involve a perturbation (Ganguli et al., 2022).
**Risks:** Our techniques can enhance robustness given a set of observed adversarial examples. The new classifier we trained with generated data from DI and ICE may still be vulnerable to future human attacks that are able to adapt to the new model (in this paper, future attack rounds are known a priori from past model-in-the-loop work). This would require extensive crowd-sourcing efforts to evaluate. We also run
\begin{table}
\begin{tabular}{l r r r}
**\# R1 samples** & **a\({}_{0}\): R1** & **a\({}_{1}\): R2** & **a\({}_{2}\): R3** \\ \hline
100 & 33.0 & 27.8 & 28.9 \\
200 & 34.2 & 28.1 & 29.5 \\
500 & 37.1 & 30.8 & 32.6 \\
1024 & 42.7 & 36.5 & 36.1 \\
2048 & 43.2 & 33.2 & 36.5 \\
4096 & 43.3 & 37.8 & 37.1 \\
8192 & 44.5 & 36.6 & 36.6 \\ all (16.9k) & **48.2** & **39.1** & **40.1** \\ \hline \end{tabular}
\end{table}
Table 6: Test accuracy (%) as we vary the number of R1 real adversarial examples used to fine-tune the DI attack generator. We use the trained DI generator to generate 10k examples. We fine-tune a Base + R1 classifier on these 10k DI(R1) examples to produce the robustness metrics above. We underline the setups that outperform the _Base + R1_ baseline in Table 2.
the risk of over-fitting to the new human generated adversarial data. This may come at the cost of lower performance on future attacks generated by a different mechanism (say, TextFooler instead of future ANLI rounds), and of degraded accuracy on original tasks such as MNLI and SNLI (Table 19 in the Appendix). As a separate concern, any technique that betters generative text modeling brings the risk that humans may struggle to distinguish machine generated text. This can have negative consequences for disinformation and misinformation, which is an active area of research (Pu et al., 2023).
## 8 Conclusion
We demonstrate that training on attacks that imitate human adversaries can improve robustness to future rounds of human adversarial attacks by \(\sim 11\%\) on ANLI, and by 6% and 8% on existing adversarial examples on the ANLI and hate speech datasets. We are able to improve robustness to future attack distributions even when the attack generator is only trained on 1000 real adversarial examples. We show, however, that existing attack generation methods that do not train on the distribution of real attacks (methods like TextFooler and CT-GANs) are unable to improve robustness to future real attacks. Finally, we discover that attack generation methods with the lowest label noise or highest attack success rate or highest distributional similarity to future attacks are not the best methods at increasing robustness to future real attacks. These findings run counter to accepted norms on choosing the best attack method type in literature, demonstrate the opportunities in leveraging increasingly effective generation methods, and motivate future work on improving real adversarial robustness.
|
2304.03579 | A lightweight Encryption Method For Privacy-Preserving in Process Mining | Novel technological achievements in the fields of business intelligence,
business management and data science are based on real-time and complex virtual
networks. Sharing data between a large number of organizations that leads to a
system with high computational complexity is one of the considerable
characteristics of the current business networks. Discovery, conformance and
enhancement of the business processes are performed using the generated event
logs. In this regard, one of the overlooked challenges is privacy-preserving in
the field of process mining in the industry. To preserve the data-privacy with
a low computational complexity structure that is a necessity for the current
digital business technology, a novel lightweight encryption method based on
Haar transform and a private key is proposed in this paper. We compare the
proposed method with the well-known homomorphic cryptosystem and Walsh-
Hadamard encryption (WHE) in terms of cryptography, computational complexity
and structure vulnerability. The analyses show that the proposed method
anonymizes the event logs with the lower complexity and more accuracy compared
with two aforementioned cryptosystems, significantly. | Mohsen Kazemian, Markus Helfert | 2023-04-07T10:31:43Z | http://arxiv.org/abs/2304.03579v1 | # A lightweight Encryption Method For Privacy-Preserving in Process Mining
###### Abstract
Novel technological achievements in the fields of business intelligence, business management and data science are based on real-time and complex virtual networks. Sharing data between a large number of organizations that leads to a system with high computational complexity is one of the considerable characteristics of the current business networks. Discovery, conformance and enhancement of the business processes are performed using the generated event logs. In this regard, one of the overlooked challenges is privacy-preserving in the field of process mining in the industry. To preserve the data-privacy with a low computational complexity structure that is a necessity for the current digital business technology, a novel lightweight encryption method based on Haar transform and a private key is proposed in this paper. We compare the proposed method with the well-known homomorphic cryptosystem and Walsh-Hadamard encryption (WHE) in terms of cryptography, computational complexity and structure vulnerability. The analyses show that the proposed method anonymizes the event logs with the lower complexity and more accuracy compared with two aforementioned cryptosystems, significantly.
Privacy and security, Process mining, Haar transform, Data encryption, Healthcare.
## I Introduction
The advent of \(5\)G and beyond technologies provides many fascinating possibilities for companies and organizations to make their smart and digital business come to reality [1]. These advancements aim to transfer the required information through digital environments. Carrying information between machines and the manufacturing execution system, and between warehouse robots and the related management system in the industry, are two examples in this domain [2]. Therefore, mining the communications in any organization with the aim of improving the executed processes is vital in the current digital world.
Process mining (PM) refers to a family of methods focused on getting valuable insights from the executed processes in any organization using the generated information [3, 4]. Process mining techniques are related to the fields of data science (i.e., areas such as data mining and predictive analytics) and process science (e.g., business process management (BPM) [5, 6]), with the aim of discovering bottlenecks and improving the overall system performance based on event logs.
Currently, process mining techniques have been widely developed in industry and academia. In this regard, one of the most important domains is healthcare, where process mining enables stakeholders to gain valuable insights, such as the actual order of activities and the involvement of resources, from an event log [7]. An event log in healthcare may include sensitive and critical patient information such as the case ID, activity, time stamps, age, diagnosis, and treatment code.
The need to consider privacy-preserving in process mining was felt at an early stage in the studies [8]. However, the process mining community has typically overlooked the privacy problem until recently. Currently, with the emergence of real-time and virtual network architectures, adequately safeguarding data privacy and security is, more than ever, of key importance for the process mining area. In this domain, one of the essential targets is to encrypt the data log so that, in the presence of untrusted environments, interfaces, and organizations, the information is kept secure before applying the process mining algorithms [9]. Since the current emerging networks are inherently complex, the development of a privacy-preserving approach using a low-complexity algorithm is of key importance. Note that variable parameters such as the security degree, hardware architecture, processing speed, and the volume of data are key factors in the selection of the encryption method and process mining technique.
### _Related Work_
Privacy- and security-preserving topics are well-studied in the general field of data mining. Recently, several articles have made preliminary efforts to address some noticeable challenges related to privacy-preserving in process mining [8, 10]. A method allowing the outsourcing of process mining while ensuring the secrecy of event logs and the executed processes is proposed in [9]. This work offers a very weak privacy-protection approach and could be prone to advanced de-anonymization schemes. Saavedra et al. [11] proposed a model that aims to mask event logs containing sensitive values (e.g. by removing the last \(4\) characters from \(\{12345678\}\)). However, drawbacks such as high operational complexity and uncertainty regarding the correct regeneration of the original data before applying the process mining techniques make it unacceptable. Liu et al. [12] proposed a privacy-preserving framework for cross-organization business based on access control. Process mining with encrypted data is one of the main targets in this research area. However, to the best of our knowledge, this goal has not yet been achieved. In this regard, homomorphic encryption (HE) [13] is designed to allow arithmetic operations to be performed on encrypted data. Therefore, designing a process mining algorithm that works
with homomorphically encrypted data is a novel research area. However, HE, with its very high computational complexity and high-delay architecture, is not a general solution for all applications. Furthermore, general process mining algorithms are not designed to work with HE efficiently. Hence, the existence of a low-computational-complexity approach that works with current PM algorithms is desirable.
### _Main Contributions_
In this paper we propose a novel approach called Haar transform encryption (HTE) with a low computational complexity structure. HTE is designed for both numbers and characters using the simple architecture of the Haar matrices [14] and one private key. Therefore, for networks and processors in which multiplication is a time-consuming operation, a significant saving is achieved. Since process mining techniques are independent of domain and can be applied in any organization where processes are present and a data log is available, the proposed approach is applicable to any industry. However, it is analyzed in the healthcare domain throughout the paper. Addressing the important necessities of current digital business networks, such as low computational complexity and an easily restorable structure of the cryptographic technique for event-log data, in a novel system model is the aim of this study.
Furthermore, the computational complexity of the Paillier technique, as one of the well-known homomorphic cryptosystems, is studied to show that improving this cryptosystem is an essential requirement if there is a process mining algorithm that works with encrypted data. Note that data encrypted with our proposed symmetric cryptographic technique needs to be decrypted before applying the process mining algorithms, which is suitable for current applications.
The rest of the paper is organized as follows: Section II presents some preliminary definitions, including the homomorphic and Haar transform concepts. The system model and the proposed scheme are introduced in Sections III and IV, respectively. Results and discussion, including the encryption of an event log and the computational complexity analysis, are presented in Section V, followed by the conclusion in Section VI.
_Notation:_ Throughout the paper, we use small and capital boldface letters, \(\otimes\), \(\alpha^{T}\) and \(\{\cdot\}\) to denote vectors, matrices, Kronecker product, the transpose of \(\alpha\) and multiplication operator, respectively.
## II Definitions
The preliminary definitions through the proposed method are defined as follows:
**Definition 1** (Haar transform): Un-normalized Haar unitary matrices consist of \(\pm 1\) and \(0\), and the transforms are computed only by addition and subtraction operations, without involving any multiplications. However, the Haar transform uses the normalized Haar matrices, which include other values in addition to \(\pm 1\) and \(0\) [14]. Although multiplication operations are needed in the Haar transform, the computation time is very short because of the existence of \(\pm 1\) and \(0\) values in its matrix structure. Therefore, a significant saving in terms of complexity and delay is achieved. Haar functions \(\hbar_{\upsilon}(\varphi)\) are defined in the interval \([0,1]\) and the order \(\upsilon\) of the function is uniquely decomposed into the integers \(a\) and \(b\) as follows:
\[\upsilon=2^{a}+b-1,\ \ N=2^{l},\ \ \upsilon\in\{0,1,...,N-1\}, \tag{1}\]
where \(l\in\{1,2,...\}\), \(0\leq a\leq l-1\), \(0\leq b\leq 2^{a}\). The Haar functions are defined as follows:
\[\hbar_{0}(\varphi)\equiv\hbar_{00}(\varphi)=\frac{1}{\sqrt{N}},\ \ \varphi\in[0,1], \tag{2}\]
\[\hbar_{\upsilon}(\varphi)\equiv\hbar_{ab}(\varphi)=\frac{1}{\sqrt{N}}\begin{cases}2^{\frac{a}{2}}&\frac{b-1}{2^{a}}\leq\varphi<\frac{b-\frac{1}{2}}{2^{a}},\\ -2^{\frac{a}{2}}&\frac{b-\frac{1}{2}}{2^{a}}\leq\varphi<\frac{b}{2^{a}},\\ 0&\text{otherwise in }[0,1].\end{cases} \tag{3}\]
The Haar transform matrix of order \(N\) consists of rows which are resulted from the prior functions computed at the points \(\varphi=\frac{e}{N}\) where \(e\in\{0,1,...,N-1\}\). Considering \(\mathbf{H}\) as Haar matrix which is a square matrix of dimension \(2^{l}\), the vector \(\mathbf{\tilde{y}}\) as the Haar transform of an \(N\)-point vector \(\mathbf{x}\) is computed by:
\[\mathbf{\tilde{y}}=\mathbf{H}_{2^{l}}\mathbf{x}. \tag{4}\]
Note that the Haar matrix generally can be derived by the following equations:
\[\mathbf{H}_{2^{l}}=\begin{bmatrix}\mathbf{H}_{2^{l-1}}\otimes[1,1]\\ \mathbf{I}_{2^{l-1}}\otimes[1,-1]\end{bmatrix}, \tag{5}\]
\[\mathbf{I}_{2^{l-1}}=\begin{bmatrix}1&0&\dots&0\\ 0&1&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&1\end{bmatrix}_{2^{l-1}\times 2^{l-1}}. \tag{6}\]
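As a sketch, the recursion in Eq. (5) and the transform in Eq. (4) can be reproduced in a few lines of NumPy; the `haar_matrix` helper and the row-wise normalization step are our illustrative choices:

```
import numpy as np

def haar_matrix(l, normalized=True):
    # Builds H of order N = 2**l via the Kronecker recursion of Eq. (5).
    H = np.array([[1.0]])
    for _ in range(l):
        top = np.kron(H, [1, 1])
        bottom = np.kron(np.eye(H.shape[0]), [1, -1])
        H = np.vstack([top, bottom])
    if normalized:
        # Scale each row to unit L2 norm, yielding the orthonormal Haar matrix.
        H = H / np.linalg.norm(H, axis=1, keepdims=True)
    return H

x = np.arange(8, dtype=float)   # an N-point signal, N = 2**3
H = haar_matrix(3)
y = H @ x                       # forward Haar transform, Eq. (4)
assert np.allclose(H.T @ y, x)  # orthonormal rows: H^T inverts the transform
```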
**Definition 2** (Walsh-Hadamard (WH)): The square matrix of WH with dimension \(2^{l}\) is defined as follows [15]:
\[\mathbf{\bar{H}}_{2^{l+1}}=\mathbf{\bar{H}}_{2^{l}}\otimes\mathbf{\bar{H}}_{ 2}=\begin{bmatrix}\mathbf{\bar{H}}_{2^{l}}&\mathbf{\bar{H}}_{2^{l}}\\ \mathbf{\bar{H}}_{2^{l}}&-\mathbf{\bar{H}}_{2^{l}}\end{bmatrix}, \tag{7}\]
with
\[\mathbf{\bar{H}}_{2}=\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}. \tag{8}\]
This matrix can also be used for the encryption strategy. However, the absence of zero elements in the Walsh-Hadamard matrix makes it computationally more expensive than the Haar one.
**Definition 3** (Homomorphic encryption): HE allows arithmetic operations including addition and multiplication over encrypted data without decryption procedure which can be used as a basis for computing complex functions [13]. Two types of HE have received more attention: partial homomorphic and fully homomorphic encryption cryptosystems. The former allows only one operation either addition or multiplication, while the latter supports both addition and multiplication operations to be performed on the ciphertexts (CTs) with the aim of obtaining the computational results on the corresponding plaintexts (PTs).
One of the well-known homomorphic cryptosystems is Paillier homomorphic encryption (PHE), which is an additive-only cryptosystem [16]. In the following, the workflow of the Paillier system is explained.
The Paillier algorithm starts by randomly choosing two independent large prime numbers \(j\) and \(k\) such that \(\gcd(jk,(j-1)(k-1))=1\). Computing \(n=jk\), \(\lambda=\mathrm{lcm}(j-1,k-1)\) and selecting a random integer \(g\) with \(g\in\mathbb{Z}_{n^{2}}^{*}\) are the next steps. These steps must be repeated until it is confirmed that \(n\) divides the order of \(g\). This is done by checking the existence of \(\nu=(L(g^{\lambda}\mod n^{2}))^{-1}\mod n\), where \(L(u)=\frac{u-1}{n}\). In this situation, the pairs \((n,g)\) and \((\lambda,\nu)\) are known as the public and private keys, respectively.
If \(x\in\mathbb{Z}_{n}\) and \(r\in\mathbb{Z}_{n}^{*}\) are the plaintext message and a random number, respectively, then \(E(x)=g^{x}\cdot r^{n}\mod n^{2}\) is the ciphertext, where \(E(x)\in\mathbb{Z}_{n^{2}}^{*}\). The plaintext is recovered using the following equation: \(x=L(E(x)^{\lambda}\mod n^{2})\cdot\nu\mod n\). Since the Paillier cryptosystem is only additively homomorphic, the product of two ciphertexts is decrypted to the sum of their corresponding plaintexts as follows:
\[D(E(x_{1},r_{1})\cdot E(x_{2},r_{2})\mod n^{2})=x_{1}+x_{2}\mod n, \tag{9}\]
where \(D(E(x))=x\). If \(g\), \(r\), \(n\) and especially the plaintext \(x\) are large values, then a huge computational complexity and delay will occur in both hardware and software. Hence, HE with the aforementioned structure, involving a large number of multiplication operations, is not suitable for current high-speed frameworks. A summary of PHE is given in Algorithm 1.
```
Input: \(\mathbf{x}=[x_{1},...,x_{N}]^{T}\) for \(N\) numbers.
1: for \(i=1,...,N\) do
2:   repeat
3:     generate \(j\) and \(k\) such that \(\gcd(jk,(j-1)(k-1))=1\),
4:     set \(n\leftarrow jk\),
5:     set \(\lambda\leftarrow\mathrm{lcm}(j-1,k-1)\),
6:     select random \(g\) where \(g\in\mathbb{Z}_{n^{2}}^{*}\),
7:   until \(\gcd(L(g^{\lambda}\bmod n^{2}),n)=1\),
8:   select random \(r_{i}\) where \(r_{i}\in\mathbb{Z}_{n}^{*}\),
9:   compute ciphertext \(E(x_{i})=g^{x_{i}}\cdot r_{i}^{\,n}\bmod n^{2}\),
10: end for
Output: \(\mathbf{E}(\mathbf{x})=[E(x_{1}),...,E(x_{N})]\), \(\mathbf{r}=[r_{1},...,r_{N}]\).
```
**Algorithm 1** Paillier Homomorphic Encryption Procedure
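To make the workflow concrete, the following Python sketch, added here as an illustrative toy (the tiny primes are of course insecure), instantiates Algorithm 1 with the parameters used later in Section V (\(j=3\), \(k=5\), \(g=22\)) and checks the additive property (9).

```python
from math import gcd

# Toy PHE instance with the Section V parameters: j = 3, k = 5, g = 22.
j, k, g = 3, 5, 22
assert gcd(j * k, (j - 1) * (k - 1)) == 1
n = j * k                                        # n = 15
lam = (j - 1) * (k - 1) // gcd(j - 1, k - 1)     # lcm(j-1, k-1) = 4
L = lambda u: (u - 1) // n
# nu exists iff gcd(L(g^lam mod n^2), n) = 1 (line 7 of Algorithm 1)
nu = pow(L(pow(g, lam, n * n)), -1, n)           # here nu = 8

def encrypt(x, r):                               # line 9 of Algorithm 1
    return (pow(g, x, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(c):
    return (L(pow(c, lam, n * n)) * nu) % n

c1, c2 = encrypt(3, 2), encrypt(4, 7)            # r_i must lie in Z_n^*
# Additive homomorphism (9): a ciphertext product decrypts to the
# plaintext sum modulo n.
assert decrypt((c1 * c2) % (n * n)) == (3 + 4) % n
print(decrypt(c1), decrypt(c2), decrypt((c1 * c2) % (n * n)))   # 3 4 7
```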
## III System Model
Beyond Industry \(4.0\) exploits the collaboration of multiple organizations in serial and parallel configurations, even at different geographical points, where data transfer through untrusted environments is a serious concern [17]. In this regard, we consider a digital business network consisting of \(M\in\mathbb{N}\) organizations and one process mining point, as shown in Fig. 1. The event log is created at the \(1^{st}\) organization (Org. 1) and is updated at the subsequent \(M-1\) organizations by adding new data. The PM point applies the process mining algorithms and sends the results to the first organization. Finally, Org. \(1\) sends the enhancement instructions to the subsequent organizations. In this scenario, Org. \(1\) and the PM point are trusted, while \(I_{m}\), \(m\in\{1,...,M\}\), which is any type of interface used to transmit the event log to the next organization, is the untrusted environment. Note that any organization, except the first one, is considered an untrusted environment for the other organizations' data. In other words, the data of each organization is encrypted using an individual private key.
## IV Proposed Method
Let \(\mathbf{y}=[y_{1},...,y_{N}]^{T}\) denote the cipher vector of \(\mathbf{x}=[x_{1},...,x_{N}]^{T}\), then using (4) we have:
\[\begin{bmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{N}\end{bmatrix}=\zeta_{p}\frac{1}{\sqrt{N}}\mathbf{H}_{2^{l}}\begin{bmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{N}\end{bmatrix}, \tag{10}\]
where \(\zeta_{p}\in\mathbb{R}\) is an arbitrary random private key used in the encryption and decryption procedures, with \(p\in\{m,s\}\). The individual private key \(\zeta_{m}\) makes the data of the \(m^{th}\) organization inaccessible from untrusted environments, while \(\zeta_{s}\) refers to the key shared between all organizations.
Since \(\mathbf{H}_{2^{l}}\) is a square matrix of order \(2^{l}\)[14], if the actual number of data points, denoted by \(\bar{N}\), is less than \(2^{l}\), then \(2^{l}-\bar{N}\) zeros are padded to the data vector. For instance, for a small data log with five numbers, we need to pad three zeros to match an \(8\times 8\) Haar matrix. Finally, using the fact that for the Haar matrix \(\mathbf{H}_{2^{l}}^{-1}=\mathbf{H}_{2^{l}}^{T}\), the vector \([x_{1},...,x_{N}]^{T}\) can be extracted from \(\mathbf{y}\) as follows:
\[\begin{bmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{N}\end{bmatrix}=\frac{1}{\zeta_{m}}\frac{1}{\sqrt{N}}\mathbf{H}_{2^{l}}^{ T}\begin{bmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{N}\end{bmatrix}. \tag{11}\]
Any event log consists of data attributes of three types: numbers, timestamps and characters. The last two types have to be mapped into unique numbers, as described below:
**Timestamps** are calculated from a time-origin and the differences in minutes or seconds are the numerical expressions.
**Characters** are mapped into numbers based on their alphabetical order.
We summarize the design steps of the proposed scheme as the pseudocode in Algorithm 2. To get more insight into how this algorithm works, lines \(1\) to \(3\) of Algorithm 2 deal with mapping the characters and timestamps to integer numbers.
Fig. 1: System model for \(M\) organizations
If the actual number of data vector members, denoted by \(\bar{N}\), is less than \(2^{l}\), then the zero-padding procedure is required (see lines \(4\) to \(6\) of Algorithm 2). The encrypted vector \(\mathbf{y}\) is simply calculated by line \(8\). Finally, \(\mathbf{y}\) and \(\zeta_{p}\) are used for the decryption procedure before the process mining tools are applied.
## V Results and Discussion
In this section we investigate the behavior of our proposed method on an event log with typical healthcare data attributes using \(M=3\) organizations. Additionally, comparisons of the computational complexity and of the vulnerability of the cryptographic structure for the HTE, WH encryption (WHE) [15] and PHE schemes are presented. Throughout this section, \(\zeta_{s}\), \(\zeta_{1}\), \(\zeta_{2}\), \(\zeta_{3}\), \(j\), \(k\) and \(g\) are set to \(\sqrt{8}\), \(2\), \(\sqrt{2}\), \(3\sqrt{8}\), \(3\), \(5\) and \(22\), respectively.
### _Encryption using HTE_
A small event log, together with its anonymized versions encrypted by the HTE approach, is described in Table I and Table II. The log consists of eight events, two distinct case IDs: \(\{1,2\}\), and five different activities: \(\{A,B,C,D,E\}\), which are performed by three users: {Tom, John, Anna}, in ascending time stamps. Org. \(1\) uses \(\zeta_{s}\) to encrypt the case ID, timestamp and activity columns, as the data shared between all organizations. Additionally, it uses \(\zeta_{1}\) to anonymize the resource column in order to protect the privacy of the patients. The updating process can add data to both rows and columns of the event log. In this example, Org. \(2\) only adds the encrypted column of heart rate information using \(\zeta_{2}\), while it has no knowledge of the resource column. In a specific case, if needed, \(\zeta_{1}\) can be made available to Org. \(3\), and the encrypted column of cost is added to the event log using \(\zeta_{3}\). Hence, any organization is able to use the data of the other organizations, provided it has the corresponding key. Table II describes the information available to Org. \(3\). Finally, the trusted PM point applies the process mining algorithms using the decrypted event log and sends the enhancement recommendations to the \(1^{st}\) organization. Collaboration of thousands of organizations with large event logs is conceivable in future digital business networks.
Referring to (10) and Algorithm 2, the Haar transform of the numerical data vector \(\mathbf{x}\), multiplied by \(\zeta_{p}\), is the encrypted vector \(\mathbf{y}\). The time stamps are calculated from a time-origin and generate the numerical vector \(\mathbf{x}\) (e.g. \(10:20\) - \(0:00=620\) minutes). Note that each column in Table I and Table II has its own plaintext vector \(\mathbf{x}\) and corresponding ciphertext vector \(\mathbf{y}\). For non-numerical data attributes such as activity and resource, \(\mathbf{x}\) is formed by mapping each character into a number based on its alphabetical order. However, if an entry of the plaintext vector includes more than one character, then a separate \(\mathbf{y}\) is produced for each word (e.g. Tom \(\rightarrow\mathbf{x}=[20,15,13,0]\) and \(\mathbf{y}=[48,22,7,18.2]\) with \(N=4\)). Finally, the ciphertext columns in Table I and Table II constitute the encrypted data log.
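The following Python sketch, which we add for illustration, reproduces the resource-column example above: "Tom" is mapped to \(\mathbf{x}=[20,15,13,0]\) by alphabetical order and zero padding, encrypted with \(\zeta_{1}=2\) via (10), and recovered via (11); it returns \(\mathbf{y}\approx[48,22,7.07,18.38]\), matching the quoted \([48,22,7,18.2]\) up to the rounding \(\sqrt{2}\approx 1.4\) used in the tables.

```python
import numpy as np

def haar_orthonormal(l):
    # Un-normalized recursion (5)-(6), then row normalization; the result
    # equals (1/sqrt(N)) H_{2^l} with the 2^{a/2} factors of (3) built in.
    H = np.array([[1.0, 1.0], [1.0, -1.0]])
    for _ in range(l - 1):
        H = np.vstack([np.kron(H, [1.0, 1.0]),
                       np.kron(np.eye(H.shape[1]), [1.0, -1.0])])
    return H / np.linalg.norm(H, axis=1, keepdims=True)

def to_vector(word, N):
    x = [ord(c.upper()) - ord('A') + 1 for c in word]     # alphabetical map
    return np.array(x + [0] * (N - len(x)), dtype=float)  # zero padding

N, zeta1 = 4, 2.0
T = haar_orthonormal(2)
x = to_vector("Tom", N)           # [20, 15, 13, 0]
y = zeta1 * T @ x                 # encryption, Eq. (10)
x_rec = (1.0 / zeta1) * T.T @ y   # decryption, Eq. (11): T^{-1} = T^T
print(np.round(y, 2))             # [48.   22.    7.07 18.38]
assert np.allclose(x_rec, x)
```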
### _Computational Complexity Analysis_
In this subsection, we present an investigation of the computational complexity of the Paillier cryptosystem and of the proposed scheme, to show the superiority of the latter in terms of the number of multiplication operations. We prove that the overall order of the PHE scheme described in Algorithm 1 is \(\mathcal{O}\left(\sum_{i=1}^{N}g^{x_{i}}\right)\), while that of HTE described in Algorithm 2 is \(\mathcal{O}\left(N^{2}\right)\).
**Proof 1**: _There are three bottlenecks in the complexity analysis of Algorithm 1, i.e., lines \(1\) - \(10\), \(2\) - \(7\) and \(9\). The first bottleneck, a "for" loop of length \(N\), is of order \(\mathcal{O}(N)\). Under the simplest assumption, which makes the condition in line \(7\) true at the first iteration, the computational complexity of the second bottleneck, a "repeat" loop, is limited to the \(\gcd\) and \(\mathrm{lcm}\) algorithms, which are of order \(\mathcal{O}\left(\log(jk\cdot(j-1)(k-1))\right)=\mathcal{O}\left(\log n^{2}\right)\) and \(\mathcal{O}(1)\), respectively, where \(\mathrm{lcm}=\frac{(j-1)(k-1)}{\gcd(j-1,k-1)}\). Considering \(N\ll x_{i}\) and \(n\ll x_{i}\), where \(i\in\{1,...,N\}\), only the third bottleneck, a power operation of order \(\mathcal{O}\left(g^{x_{i}}\right)\), forms the dominant term. Thus, the overall order of PHE in the simplest situation is \(\mathcal{O}\left(\sum_{i=1}^{N}g^{x_{i}}\right)\). Indeed, the complexity of PHE grows with the data values, which yields a huge computational complexity in hardware and programming architectures._
**Proof 2**: _Referring to Algorithm 2, the only bottleneck is in line \(8\) that includes the main part of the proposed scheme. Referring to (5) and (6), we extract \(z(\mathbf{H}_{2^{l}})\) as the number of zeros in \(\mathbf{H}_{2^{l}}\) using the following equation:_
\[z(\mathbf{H}_{2^{l}})=2(2^{2(l-1)}-2^{l-1})+2z(\mathbf{H}_{2^{l-1}}),\ \ \ z( \mathbf{H}_{2})=0. \tag{12}\]
The actual numbers of multiplication and addition operations for a data vector of length \(N=2^{l}\) using the HTE scheme are obtained as follows:
\[Mul_{N}=N^{2}-z(\mathbf{H}_{2^{l}}), \tag{13}\]
\[Add_{N}=N^{2}-N-z(\mathbf{H}_{2^{l}}). \tag{14}\]
Therefore, a significant reduction in the number of multiplications is achieved, especially for large \(x_{i}\) and \(N\) values, due to the existence of zeros in the Haar matrices. Finally, the overall order of the HTE scheme is \(\mathcal{O}\left(N^{2}\right)\).
A comparison of the numbers of multiplication and addition operations between HTE, WHE and PHE, based on the data in Table I, is given in Table III. Since the computational complexity of PHE depends on the values of \(N\) and \(x_{i}\), a large number of multiplication operations is required for large data values, while the proposed scheme is independent of the plaintext values, yielding a great computational saving. The resemblance between the Walsh-Hadamard and Haar matrices motivated us to compare WHE and HTE in Fig. 2. The behaviour of WHE and HTE for \(l\in\{2,...,8\}\) proves that the latter has a remarkable superiority in terms of multiplication counts compared with WHE for large row numbers.
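The counts behind Table III and Fig. 2 can be verified directly. The sketch below, added as an illustration, computes \(z(\mathbf{H}_{2^{l}})\) both from the recursion (12) and from the matrix itself, and prints the multiplication counts \(Mul_{N}=N^{2}-z(\mathbf{H}_{2^{l}})\) for HTE versus \(N^{2}\) for WHE (the Walsh-Hadamard matrix contains no zeros).

```python
import numpy as np

def haar(l):
    """Un-normalized Haar matrix H_{2^l} from the recursion (5)-(6)."""
    H = np.array([[1, 1], [1, -1]])
    for _ in range(l - 1):
        H = np.vstack([np.kron(H, [1, 1]),
                       np.kron(np.eye(H.shape[1], dtype=int), [1, -1])])
    return H

def z_recursive(l):
    """Zero count z(H_{2^l}) from Eq. (12), with z(H_2) = 0."""
    z = 0
    for m in range(2, l + 1):
        z = 2 * (2 ** (2 * (m - 1)) - 2 ** (m - 1)) + 2 * z
    return z

for l in range(2, 9):                   # N = 4, ..., 256 as in Fig. 2
    N = 2 ** l
    z = z_recursive(l)
    assert z == int(np.sum(haar(l) == 0))       # recursion matches the matrix
    print(N, "WHE:", N * N, "HTE:", N * N - z)  # Mul_N from Eq. (13)
```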
### _Cryptographic Structure Vulnerability_
In this section, we investigate the effect of destructive factors on the structure of PHE, WHE and HTE methods.
#### V-C1 Vulnerability of PHE
Referring to line \(9\) of Algorithm 1, \(g\), \(x\), \(r\) and \(n\) are the key parameters of the PHE scheme. Assuming \(n_{1}=n_{2}\) and \(g_{1}=g_{2}\), the multiplication of two ciphertexts \(E(x_{1})\) and \(E(x_{2})\) affected by an additive error (AE) is computed as follows:
\[E(x_{1}+\acute{x_{1}})\cdot E(x_{2}+\acute{x_{2}})=g^{x_{1}+ \acute{x_{1}}+x_{2}+\acute{x_{2}}}\cdot(r_{1}\cdot r_{2})^{n+\acute{n}}\\ +\Re\mod(n+\acute{n})^{2}, \tag{15}\]
where \((\acute{\cdot})\) and \(\Re\) denote the AE to \((\cdot)\), and the terms consisting of AE as base numbers, respectively.
**Proof 3**: _Write \(E(x_{1}+\acute{x_{1}})\cdot E(x_{2}+\acute{x_{2}})=(g+\acute{g})^{x_{1}+\acute{x_{1}}}\cdot(r_{1}+\acute{r_{1}})^{n+\acute{n}}\cdot(g+\acute{g})^{x_{2}+\acute{x_{2}}}\cdot(r_{2}+\acute{r_{2}})^{n+\acute{n}}\mod(n+\acute{n})^{2}\). Expanding each factor with the binomial theorem [18], splitting every exponent into its true-data and AE parts, produces exactly one term whose bases are free of AE, namely \(g^{x_{1}+\acute{x_{1}}+x_{2}+\acute{x_{2}}}\cdot(r_{1}\cdot r_{2})^{n+\acute{n}}\); all remaining cross terms contain at least one of \(\acute{g}\), \(\acute{r_{1}}\), \(\acute{r_{2}}\) as a base and are collected into \(\Re\). Hence \(E(x_{1}+\acute{x_{1}})\cdot E(x_{2}+\acute{x_{2}})=g^{x_{1}+\acute{x_{1}}+x_{2}+\acute{x_{2}}}\cdot(r_{1}\cdot r_{2})^{n+\acute{n}}+\Re\mod(n+\acute{n})^{2}\simeq E(x_{1}+\acute{x_{1}}+x_{2}+\acute{x_{2}})\)._

Fig. 2: Computational complexity comparison between WHE and HTE in terms of multiplication numbers for \(N=4\) to \(N=256\)
Referring to (15) and Proof \(3\), adding error to every possible element of PHE yields various extra computations involving the AE as a base and/or exponent. Furthermore, the inaccurate plaintext-plaintext addition resulting from a disturbed ciphertext-ciphertext multiplication disrupts the process mining schemes. Therefore, in the presence of AE, increasing the number of plaintexts (i.e., \(x_{1}\), \(x_{2}\),...) makes PHE an ultra-high-complexity approach with inaccurate computations, which fails in high-speed applications.
#### V-C2 Vulnerability of WHE and HTE
Referring to (5) and (7), the top row and the left-most column of the Walsh-Hadamard matrix, and the top row of the Haar matrix, consist only of \(+1\). Furthermore, ignoring the top row and the left-most column of the former reveals the cyclic shift property of its rows. Since a destructive factor usually causes only a phase shift in the aforementioned matrices [15], the deterministic structure of the WH and Haar matrices can be restored simply. Hence, the deterministic and low-complexity structure of the Haar matrix, together with a private key, is an efficient solution for high-speed applications.
## VI Conclusion
The emergence of real-time and virtual networks in digital business technology motivated us to investigate the privacy-preserving concept in industry. Since these networks are inherently highly complex, in this paper we proposed a very low-complexity encryption approach called HTE to anonymize the data before the execution of process mining techniques. Encryption with a low-complexity and easily restorable structure in a business network comprising a large number of organizations was the main goal of this paper. The proposed method is based on the Haar transform and one private key, and is applicable to any data type in an event log. The complexity analysis proves that HTE is significantly superior to the WHE and PHE cryptosystems in terms of overall computational complexity. The proposed method is beneficial for companies for which consumer privacy, complexity, cost and speed are serious concerns. A future target of the present work is to design a PM algorithm that can work directly on HTE-encrypted data.
|
2306.03300 | Quantum Boltzmann dynamics and bosonized particle-hole interactions in
fermion gases | In this paper, we study a cold gas of $N \gg 1$ weakly interacting fermions.
We describe the time evolution of states that are perturbations of the Fermi
ball, and analyze the dynamics in particle-hole variables. Our main result
states that, for small values of the coupling constant and for appropriate
initial data, the effective dynamics of the momentum distribution is determined
by a discrete collision operator of quantum Boltzmann form. | Esteban CΓ‘rdenas, Thomas Chen | 2023-06-05T22:51:51Z | http://arxiv.org/abs/2306.03300v3 | # Quantum Boltzmann dynamics and bosonized particle-hole interactions in fermion gases
###### Abstract.
In this paper, we study a cold gas of \(N\gg 1\) weakly interacting fermions. We describe the time evolution of the momentum distribution of states close to the Fermi ball by simultaneously analyzing the dynamical behavior of excited particles and holes. Our main result states that, for small values of the coupling constant, and for appropriate initial data, the effective dynamics of the above system is driven by an energy-mollified quantum Boltzmann collision operator, plus an interaction term with virtual bosonized particle-hole pairs around the Fermi surface.
###### Contents
* 1 Introduction
* 2 Main results
* 3 Preliminaries
* 4 Tool Box I: Analysis of \(b\)- and \(D\)-operators
* 5 Tool Box II: Excitation Estimates
* 6 Leading Order Terms I: Emergence of \(Q\)
* 7 Leading Order Terms II: Emergence of \(B\)
* 8 Subleading Order Terms
* 9 Proof of Theorem 1
* 10 The Fixed Volume Case
## 1. Introduction
### Historical background
In this work, we consider a collection of \(N\) identical fermions, moving in the \(d\)-dimensional torus \(\Lambda\equiv(\mathbb{R}/L\mathbb{Z})^{d}\), where \(L>0\) is its linear length. Pure quantum-mechanical states then belong to the Hilbert space of antisymmetric wave functions \(L_{a}^{2}(\Lambda^{N})\), on which we consider the interacting Hamiltonian
\[H\equiv\frac{\hbar^{2}}{2}\sum_{i=1}^{N}(-\Delta_{x_{i}})+\lambda\sum_{i<j}^{ N}V(x_{i}-x_{j}). \tag{1.1}\]
Here, we have chosen units for which the mass of the fermions \(m_{F}\) has unit value. The parameter \(\lambda\geqslant 0\) corresponds to the coupling strength of the two-body interaction, mediated by the real-valued function \(V:\Lambda\to\mathbb{R}\).
Understanding the dynamics of many-body fermionic systems has been of key physical interest since the beginning of quantum mechanics in the early twentieth century. In particular, it is well-known that the ground state of \(H\) for non-interacting systems (\(\lambda=0\)) is given by the Slater determinant of eigenvectors of the kinetic energy operator
\[\psi_{\rm Slater}(x_{1},\ldots,x_{N})\equiv(1/\sqrt{N!})\det[e_{k_{i}}(x_{j})]_{ i,j=1}^{N} \tag{1.2}\]
where we denote \(e_{k}(x)\equiv L^{-d/2}\exp(i\,x\cdot k).\) The collection of wave-vectors \(\{k_{1},\ldots,k_{N}\}\) in (1.2) minimizes the kinetic energy in compliance with the Pauli Exclusion Principle by filling the _Fermi ball_ (or, _Fermi sea_)
\[\mathcal{B}\equiv\{k\in(2\pi\mathbb{Z}/L)^{d}:|k|\leqslant k_{F}\}\,\qquad \text{where}\qquad k_{F}\equiv\kappa(N/|\Lambda|)^{\frac{1}{d}}. \tag{1.3}\]
The parameter \(k_{F}\) is called the _Fermi wave-vector_, and the factor \(\kappa=1/|\mathbb{S}_{d-1}|^{\frac{1}{d}}+o(1)\) is chosen so that \(|\mathcal{B}|=N\). Of course, as soon as \(\lambda>0\), \(\psi_{\rm Slater}\) is no longer the ground state of the Hamiltonian, nor a stationary solution of the associated Schrödinger equation. It is then of interest to find a non-trivial scaling regime between the parameters of the theory in which one can effectively describe the dynamics generated by \(H\), provided the particle number \(N\) is large and the coupling constant \(\lambda\) is small enough.
In the mathematical literature, extensive research has been carried out focusing on the mean-field scaling regime. In this approximation, the two-body potential is replaced by an averaged interaction over the position density, and the Hartree system of equations emerges as the _leading_ order term
\[i\hbar\partial_{t}\omega=[-\hbar^{2}/2\Delta+\lambda N(V*\rho),\omega]\quad \text{where}\quad\rho(t,x)\equiv N^{-1}\omega(t;x,x). \tag{1.4}\]
Here, \(\omega(t)\in\mathscr{L}^{1}(L^{2}(E))\) is a trace-class operator with \({\rm Tr}\omega=N\) that approximates the one-particle reduced density matrix of the original \(N\)-particle system; \(E=\Lambda\) or \(E=\mathbb{R}^{d}\), depending on the situation. In the existing literature, the values of the coupling constant \(\lambda\) for which the Hartree equation (1.4) is derived are adapted to the physical system under consideration. The microscopic scaling regime, in which \(\hbar=1\) (or is taken independent of \(N\)), has been studied in [1, 2, 23, 30] in several physical situations. On the other hand, in the _semi-classical regime_, for which \(\hbar=1/N^{1/d}\), one chooses \(\lambda=1/N\) and semi-classical initial data. For results in this direction, we refer the reader to [11, 12, 20, 22, 31]. Finally, we note that in this regime, the \(\hbar\downarrow 0\) limit of \(\omega(t)\) is linked with the solution of the Vlasov equation \(f(t)\in L^{1}_{x,p}\). The reader is referred to [11, 26, 29] and the references therein.
More recently, in [10], the authors have determined the _subleading_ order term of the many-body fermionic dynamics in three dimensions, again in the semi-classical regime \(\hbar=1/N^{1/3}\) with fixed macroscopic volume \(L=2\pi\), and for initial data close to \(\psi_{\rm Slater}\). Let us informally describe these results. The leading order term corresponds to the translation-invariant, stationary solution with kernel
\[\omega(t;x,y)=\sum_{i=1}^{N}e_{k_{i}}(x-y) \tag{1.5}\]
where the right hand side corresponds to the one-particle reduced density matrix of \(\psi_{\rm Slater}\), that is, the Fermi ball. The authors of [10] then focus on the subleading
order dynamics describing the time evolution of a relatively small number of fermions that have been excited outside of the Fermi ball, together with the holes that they leave behind. If these particle-hole pairs are sufficiently close to the Fermi surface, they approximately obey Bose statistics, the closeness being measured on the scale determined by the decay rate of the interaction \(\hat{V}(k)\). Roughly speaking, one can then consider bosonic creation- and annihilation- operators \(b_{\alpha}^{*}(k)\) and \(b_{\alpha}(k)\) that create and destroy a bosonized particle-hole pair with relative momentum \(k\in\mathbb{Z}^{3}\), where \(\alpha\) labels their position on the Fermi surface. The dynamics of these pairs is then determined (in norm approximation!) by the Bogoliubov transformation of an effective Hamiltonian, quadratic in \(b\) and \(b^{*}\). Now, the initial data in this description corresponds to states for which _no_ fermion far away from the Fermi surface has been excited, since this is not an energetically favorable configuration. In fact, the machinery with which these states are built has been successfully used to rigorously determine the subleading order correction to the associated ground state energy in the so-called _Random Phase Approximation_ (RPA); see [8, 10, 13, 18, 24]. One can then understand these states as being generated naturally as thermal fluctuations around the non-interacting ground state.
Besides mean-field theory, physicists have for almost a century speculated that yet another approximation should be valid in the _kinetic scaling regime_. Here, one re-scales microscopic position \(x\) and time \(t\) (that is, variables for which \(\hbar=1\)) in terms of the macroscopic variables
\[X=\lambda^{2}x\qquad\text{and}\qquad T=\lambda^{2}t. \tag{1.6}\]
One is then interested in the limit \(\lambda\downarrow 0\). Heuristically, in this picture the Hartree equation (1.4) becomes the free Schrödinger equation and, for microscopic times \(t\simeq 1\), free motion dominates. However, at longer time-scales \(T\simeq 1\), rare but strong, short-ranged two-particle interactions-that is, collisions-dominate the dynamics. It is believed that the quantum Boltzmann equation then emerges from the \(N\)-body fermion problem
\[(\partial_{T}+P\cdot\nabla_{X})F=4\pi\int \mathrm{d}P_{2}\mathrm{d}P_{3}\mathrm{d}P_{4}\delta(P+P_{2}-P_{3 }-P_{4})\delta(P^{2}+P_{2}^{2}-P_{3}^{2}-P_{4}^{2})\] \[\times|\hat{V}(P-P_{3})-\hat{V}(P-P_{4})|^{2}(FF_{2}\widetilde{F} _{3}\widetilde{F}_{4}-F_{4}F_{3}\widetilde{F}_{2}\widetilde{F}). \tag{1.7}\]
Here, \(F=F(T,X,P)\) is the one-particle phase space distribution, \(F_{i}\) is short-hand notation for \(F(T,X,P_{i})\), and we denote \(\widetilde{F}=1-F\). So far, the derivation of Eq. (1.7) has not been carried out rigorously, and it remains an open problem from the mathematical point of view. For results in this direction, the reader is referred to [3, 4, 5, 6, 17, 21, 27, 32] and the references therein.
### Our contribution
The main goal of the present article is to describe the effective dynamics of holes and particles around the Fermi ball that are found _away_ from its surface-this is complementary to the situation described by the authors in [10], whose focus is on bosonized particle-hole pairs near the same surface. We explore the microscopic scaling regime \(\hbar=1\), and consider boxes of size \(|\Lambda|\ll N\). In particular, the _Fermi momentum_\(p_{F}\equiv\hbar k_{F}\) becomes large, of order \((N/|\Lambda|)^{1/d}\gg 1\). Our main result is Theorem 1, which addresses the case of arbitrary dimension \(d\) and box size
\(|\Lambda|\). As a corollary, by looking at large time-scales \(T\simeq 1\), we are able to determine the effective dynamics of the system for dimension \(d=3\) and box size \(|\Lambda|=(2\pi)^{3}\). Our parameter window includes the scaling regime given by the relations
\[\lambda=\frac{1}{N^{2}}\qquad\text{and}\qquad t=N^{1/3}T \tag{1.8}\]
provided the number of holes and particles is not very large. In Remark 2.6, we make a comparison between the physical scales given by (1.8) and the combined mean-field and semi-classical regime.
Let us now describe our main results in physical terms. We regard our system as a very cold gas of \(N\) weakly interacting fermions, in which we _externally_ excite \(n\ll N\) particles outside of the Fermi ball (say, by a beam of light). The holes left behind behave like the anti-particles of those fermions that have been excited outside of \(\mathcal{B}\). On the other hand, as mentioned above, the collective excitation of particle-hole pairs near the Fermi surface displays boson-like behavior. Hence, one can identify three subsystems: holes deep inside the bulk; excited particles away from the Fermi ball; and particle-hole pairs near the Fermi surface. Such a system shares formal similarities with the theory of Quantum Electrodynamics (QED), in which electrons and positrons interact via the mediation of photons. We push this analogy in a rigorous way, by looking at the dynamics of the momentum distribution of holes and particles. It turns out that, besides the expected two-body interactions generated by the potential \(V(x-y)\), _virtual_ particle-hole pairs around the Fermi surface can mediate interactions between hole/hole and particle/particle-just as photons do in QED! Essentially, we prove in Theorem 1 that, in a well-chosen scaling regime and for appropriate initial data, the following equation emerges from the original \(N\)-particle dynamics
\[f_{t}=f_{0}+\lambda^{2}t\ Q_{t}[f_{0}]+\lambda^{2}t\ B_{t}[f_{0}] \tag{1.9}\]
up to an error that we have control of. Here, \(f_{t}(p)\) corresponds to the momentum distribution of holes and excited particles, measured in the microscopic variables \((t,p)\). The operator \(Q_{t}\) describes interactions of quantum Boltzmann type and includes collisions of the form hole/hole, particle/particle and hole/particle. On the other hand, the operator \(B_{t}\) represents interactions between hole/hole and particle/particle pairs, respectively, that are mediated by virtual bosons around the Fermi surface-these can be interpreted as self-energy terms.
To the best of our knowledge, this is the first rigorous result in which the dynamics of holes "deep in the bulk" of the Fermi ball has been analyzed. In particular, while the emergence of the operator \(Q_{t}\) could have been intuitively guessed based on previous work on quantum Boltzmann dynamics, the emergence of the operator \(B_{t}\) seems to be entirely new. We believe that this phenomenon carries not only mathematical value, but physical value as well. In particular, we hope that it will shed some light on what will hopefully be the first derivation of the quantum Boltzmann equation.
### Organization of this article
In Section 2 we state the main result of this article, and in Section 3 we introduce the preliminaries that are needed to set up the proof. In Sections 4 and 5 we introduce and develop the machinery that we use in our analysis.
In Sections 6 and 7 we show how the operators \(Q\) and \(B\), respectively, emerge from the many-body dynamics, giving rise to the _leading order terms_. In Section 8 we estimate the _subleading order terms_, and in Section 9 we prove our main result, Theorem 1. Finally, in Section 10 we analyze the fixed volume case.
### Notation
The following notation is going to be used throughout this article.
* \(\Lambda^{*}\equiv(2\pi\mathbb{Z}/L)^{d}\) denotes the dual lattice of \(\Lambda\).
* We write \(\int_{\Lambda^{*}}F(p)\mathrm{d}p\equiv|\Lambda|^{-1}\sum_{p\in\Lambda^{*}}F (p)\) for any function \(F:\Lambda^{*}\to\mathbb{C}\).
* \(\delta_{p,q}\) denotes the standard Kronecker delta.
* \(\ell^{p}(\Lambda^{*})\) denotes the space of functions with finite norm \(\|f\|_{\ell^{p}}\equiv(\int_{\Lambda^{*}}|f(p)|^{p}\mathrm{d}p)^{1/p}\).
* \(B(X)\) denotes the space of bounded linear operators acting on \(X\).
* We denote \(\widetilde{f}\equiv 1-f\) for any function \(f:\Lambda^{*}\to\mathbb{C}\).
* \(\widehat{G}(k)\equiv(2\pi)^{-d/2}\int_{\Lambda}e^{-ik\cdot x}G(x)\mathrm{d}x\) denotes the Fourier transform of \(G:\Lambda\to\mathbb{C}\).
* We say that a positive real number \(C>0\) is a _constant_, if it is independent of the physical parameters \(N\), \(|\Lambda|\), \(\lambda\), \(n\) and \(t\).
* Given two real-valued quantities \(A\) and \(B\), we say that \(A\lesssim B\) if there exists a constant \(C>0\) such that \[A\leqslant C\,B\.\] (1.10)
Additionally, we say that \(A\simeq B\) if both \(A\lesssim B\) and \(B\lesssim A\) hold true.
* We shall frequently omit subscripts from Hilbert spaces norms throughout proofs.
## 2. Main results
The main result of this article is a rigorous interpretation of the emergence of Eq. (1.9) from the many-body fermionic dynamics governed by the Hamiltonian \(H\). In this section, we first rigorously introduce the model from which we shall derive this equation. Secondly, we present our main result in Theorem 1. It contains an estimate in a weighted \(\ell^{\infty}\) norm for the difference between the momentum distribution of the full dynamics, and the dynamics associated to the leading order \(Q\) and \(B\) terms. Finally, we discuss the consequences of this estimate in a particular scaling regime.
### The model
We will work with the grand canonical ensemble associated to the \(N\)-body Hamiltonian \(H\). Namely, we study its second quantization, which we now proceed to define. Consider the fermionic Fock space associated to the \(1\)-particle space \(L^{2}(\Lambda)\)
\[\mathscr{F}\equiv\mathbb{C}\oplus\bigoplus_{n\geqslant 1}\mathscr{F}_{n} \qquad\text{where}\qquad\mathscr{F}_{n}\equiv\bigwedge_{i=1}^{n}L^{2}(\Lambda), \ \forall n\geqslant 1\, \tag{2.1}\]
which we endow with creation and annihilation operators \(a_{p}\) and \(a_{q}^{*}\). In wave-vector space, they satisfy the Canonical Anticommutation Relations (CAR)
\[\{a_{p},a_{q}^{*}\}=\delta(p-q)\equiv|\Lambda|\delta_{p,q}\qquad\text{and} \qquad\{a_{p},a_{q}\}=\{a_{p}^{*},a_{q}^{*}\}=0\,\qquad p,q\in\Lambda^{*}. \tag{2.2}\]
Here, \(\Lambda^{*}=(2\pi\mathbb{Z}/L)^{d}\) is the corresponding dual lattice, \(\delta_{p,q}\) stands for the Kronecker delta, and \(\{\cdot,\cdot\}\) denotes the anticommutator. The Fock vacuum vector will be denoted
by \(\Omega\in\mathscr{F}\). The Hamiltonian \(H\) introduced in (1.1) is now conveniently described in second quantization, written in terms of creation- and annihilation operators as
\[\mathcal{H}\equiv\frac{1}{2}\int_{\Lambda^{*}}|p|^{2}a_{p}^{*}a_{p}\ dp+\frac{ \lambda}{2}\int_{(\Lambda^{*})^{3}}\hat{V}(k)\ a_{p+k}^{*}a_{q-k}^{*}a_{q}a_{p} \ \mathrm{d}p\,\mathrm{d}q\,\mathrm{d}k \tag{2.3}\]
where we work in microscopic units such that \(\hbar=m_{F}=1\). Note that in these units we identify momenta and wave-vectors. In particular, we work extensively with the Fermi momentum \(p_{F}=k_{F}\simeq(N/|\Lambda|)^{1/d}\), as given in Eq. (1.3).
_Interaction potentials._ We shall always assume that \(V:\Lambda\to\mathbb{R}\) is regular enough so that \(\mathcal{H}\) is self-adjoint in its natural domain-the one-parameter unitary group \((e^{-it\mathcal{H}})_{t\in\mathbb{R}}\) is then well-defined in \(\mathscr{F}\). To be more precise, we assume that \(\hat{V}(k)\) satisfies the following assumptions:
_(i)_ It has compact support in a ball of radius \(r>0\).
_(ii)_\(\hat{V}(-k)=\hat{V}(k)\) for all \(k\in\Lambda^{*}\). Thus, \(\hat{V}\) is real-valued.
_(iii)_\(\hat{V}(0)=0\).
_(iv)_\(V\) is chosen relative to the box \(\Lambda\) so that \(\sup_{|\Lambda|>0}\|\hat{V}\|_{\ell^{1}(\Lambda^{*})}<\infty\).
We recall that the ground state of the non-interacting system is conveniently described as a Slater determinant (1.2) associated to the Fermi ball \(\mathcal{B}\), defined in (1.3). Small perturbations of such states can be understood as fermions being excited out of the Fermi ball, together with the holes they leave behind. We implement this point of view as follows. First, we introduce the following notation that we will be using extensively for the rest of the article
\[\chi(p)\equiv\left\{\begin{array}{cl}\quad 1\,&\quad p\in\mathcal{B}^{c}\\ \quad 0\,&\quad p\in\mathcal{B}\end{array}\right.\qquad\text{together with}\qquad\chi^{\perp}(p)\equiv 1-\chi(p). \tag{2.4}\]
Next, we define the following _particle-hole transformation_
\[\mathcal{R}:\mathscr{F}\to\mathscr{F} \tag{2.5}\]
defined as the Bogoliubov transformation satisfying for all \(p\in\Lambda^{*}\)
\[\mathcal{R}^{*}a_{p}^{*}\mathcal{R}\ =\ \chi^{\perp}(p)\,a_{p}^{*}\,+\,\chi(p)\,a_{p} \tag{2.6}\]
together with \(\mathcal{R}^{*}\Omega=\Psi_{\mathrm{Slater}}\equiv(0,\ldots,0,\psi_{\mathrm{ Slater}},0,\ldots)\in\mathscr{F}\). In particular, the momentum distribution of the non-interacting \(N\)-particle ground state in \(\mathscr{F}\) can be written as
\[\left\langle\Psi_{\mathrm{Slater}},a_{p}^{*}a_{q}\Psi_{\mathrm{Slater}} \right\rangle=\delta(p-q)\chi(p)\,\qquad\forall p,q\in\Lambda^{*}. \tag{2.7}\]
_Remark 2.1_.: One should understand the particle-hole transformation \(\mathcal{R}\) as a change of variables. Thus, unless stated otherwise, for the rest of the article \(\mathscr{F}\), \(a\) and \(a^{*}\) refer to the variables associated to particles and holes. Namely, if \(p\in\mathcal{B}^{c}\) then \(a_{p}\) and \(a_{p}^{*}\) are creation- and annihilation- operators for excited particles outside of the Fermi ball. On the other hand, if \(h\in\mathcal{B}\) then \(a_{h}\) and \(a_{h}^{*}\) are creation- and annihilation- operators for holes inside of the Fermi ball.
We are mostly interested in describing the time evolution of the momentum distribution of states close to the Fermi ball. In particle-hole space, the dynamics of these states is driven by the _particle-hole Hamiltonian_
\[\mathfrak{h}\equiv\mathcal{R}^{*}\mathcal{H}\mathcal{R}. \tag{2.8}\]
A more explicit representation of the Hamiltonian \(\mathfrak{h}\) will be given in the next section. Thus, we shall study the evolution-in-time of the corresponding momentum distribution, defined as follows.
**Definition 1**.: _Given an initial state \(\nu:B(\mathscr{F})\to\mathbb{C}\), we define_
\[f_{t}(p)\equiv\frac{\nu(e^{it\mathfrak{h}}a_{p}^{*}a_{p}e^{-it\mathfrak{h}})}{ |\Lambda|} \tag{2.9}\]
_for all \(t\in\mathbb{R}\) and \(p\in\Lambda^{*}\)._
_Remark 2.2_.: Let us relate the momentum distribution of particles and holes \(f_{t}(p)\) to the dynamics generated by \(\mathcal{H}\). Let \(\rho:B(\mathscr{F})\to\mathbb{C}\) be a translation-invariant initial state, and consider its time evolution
\[\rho_{t}(\mathcal{O})\equiv\rho(e^{-it\mathcal{H}}\mathcal{O}e^{it\mathcal{H} })\,\qquad t\in\mathbb{R},\ \mathcal{O}\in B(\mathscr{F}). \tag{2.10}\]
The associated two-point function of the system can be expressed in terms of the particle-hole transformation \(\mathcal{R}\) and the relative state \(\nu(\mathcal{O})\equiv\rho(\mathcal{R}\mathcal{O}\mathcal{R}^{*})\) as
\[\rho_{t}(a_{p}^{*}a_{q})=\delta(p-q)\Big{(}\chi(p)+\big{[}\chi^{\perp}(p)-\chi(p)\big{]}f_{t}(p)\Big{)} \tag{2.11}\]
for all \(t\in\mathbb{R}\), and \(p\in\Lambda^{*}\). Thus, Eq. (2.11) expresses the time evolution of the original many-body dynamics in terms of the momentum distribution of holes and particles, as given in Definition 1.
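For the reader's convenience, the algebra behind (2.11) is the following one-line check: since \(\chi(p)\chi^{\perp}(p)=0\), the transformation rule (2.6) together with the CAR (2.2) gives

\[\mathcal{R}^{*}a_{p}^{*}a_{p}\mathcal{R}=\big(\chi^{\perp}(p)a_{p}^{*}+\chi(p)a_{p}\big)\big(\chi^{\perp}(p)a_{p}+\chi(p)a_{p}^{*}\big)=\chi^{\perp}(p)\,a_{p}^{*}a_{p}+\chi(p)\big(|\Lambda|-a_{p}^{*}a_{p}\big).\]

Taking expectations then yields (2.11): inside the Fermi ball the occupation is depleted to \(1-f_{t}(p)\) by the holes, while outside of it the occupation equals the particle density \(f_{t}(p)\).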
Let us now describe the conditions that the initial data will satisfy. In order to motivate them, let us recall that thermal fluctuations around the non-interacting ground state induce a collective bosonization of particle-hole pairs around the boundary of the Fermi ball; the modes of excitation of these quasiparticles belong to the support of the interaction potential \(\hat{V}\), a ball of radius \(r>0\). Since such phenomena will arise in our analysis, it is convenient to introduce the following strip
\[\mathcal{S}\equiv\{p\in\Lambda^{*}:p_{F}-3r\leqslant|p|\leqslant p_{F}+3r\} \tag{2.12}\]
which (under a slight abuse of notation) we shall refer to as the _Fermi surface_. The pre-factor \(3\) is included for technical reasons. The conditions for the initial data are as follows.
**Condition 1**.: _The initial state \(\nu:B(\mathscr{F})\to\mathbb{C}\) verifies: (C1) \(\nu\) is quasi-free: for all \(k,k^{\prime}\in\mathbb{N}\), \(p_{1},\ldots,p_{k}\in\Lambda^{*}\) and \(q_{1},\ldots,q_{k^{\prime}}\in\Lambda^{*}\) there holds_
\[\nu\Big{(}\prod_{i=1}^{k}a_{p_{i}}^{*}\prod_{j=1}^{k^{\prime}}a_{q_{j}}\Big{)} \,=\,\delta_{k,k^{\prime}}(-1)^{\frac{k(k-1)}{2}}\det\big{[}\nu(a_{p_{i}}^{*} a_{q_{j}})\big{]}_{1\leqslant i,j\leqslant k}. \tag{2.13}\]
_(C2) \(\nu\) is translation invariant: for all \(p,q\in\Lambda^{*}\) there holds \(\nu(a_{p}^{*}a_{q})=\delta(p-q)\nu(a_{p}^{*}a_{p})\). (C3) \(\nu\) has zero charge: \(\int_{\mathcal{B}}\nu(a_{p}^{*}a_{p})\mathrm{d}p=\int_{\mathcal{B}^{c}}\nu(a _{p}^{*}a_{p})\mathrm{d}p\.\)_
_(C4) There exists a constant \(C>0\) such that \(\int_{\mathcal{S}}\nu(a_{p}^{*}a_{p})\mathrm{d}p\leqslant C(\lambda|\Lambda|p_{F}^{ d-1})^{2}\)._
_Example._ We can easily construct a state \(\nu\) that verifies Condition 1 as follows. Given \(n\in\mathbb{N}\), let \(h_{1},\ldots,h_{n}\in\mathcal{B}\backslash\mathcal{S}\) and \(p_{1},\ldots,p_{n}\in\mathcal{B}^{c}\backslash\mathcal{S}\). Then, we consider
\[\nu(\mathcal{O})\equiv\frac{\langle\Psi_{0},\mathcal{O}\Psi_{0}\rangle_{ \mathscr{F}}}{\|\Psi_{0}\|_{\mathscr{F}}^{2}}\qquad\text{where}\qquad\Psi_{0} \equiv\prod_{i=1}^{n}a_{h_{i}}^{*}a_{p_{i}}^{*}\Omega. \tag{2.14}\]
The state \(\nu\) is a pure state corresponding to the Slater determinant \(\Psi_{0}\). Since Slater determinants are always quasi-free, this verifies (C1). One may verify that translation invariance in (C2) is satisfied by direct computation of the two-point function
\[\nu(a_{p}^{*}a_{q})=\delta(p-q)\Big{(}\delta(p-h_{1})+\ldots+\delta(p-h_{n})+ \delta(p-p_{1})+\ldots+\delta(p-p_{n})\Big{)}. \tag{2.15}\]
The state \(\nu\) has zero charge in (C3) because we have chosen an equal number of \(h_{i}^{\prime}s\) and \(p_{i}^{\prime}s\) in \(\mathcal{B}\) and \(\mathcal{B}^{c}\), respectively. Finally we note that (C4) is verified because \(\nu(a_{p}^{*}a_{p})=0\) for all \(p\in\mathcal{S}\).
### Statement of the main theorem
Our main result identifies the time evolution of the momentum distribution \(f_{t}(p)\) in terms of two non-linear operators that act on functions on \(\Lambda^{*}\). In order to define them we introduce the following two objects
1. For \(p\in\Lambda^{*}\), the _dispersion relation_ of holes and particles is given by \[E_{p}\,\equiv\,-\chi(p)\Big{(}\ \frac{p^{2}}{2}\,+\,\frac{\lambda}{2}\,( \hat{V}*\chi^{\perp})(p)\ \Big{)}+\chi^{\perp}(p)\Big{(}\ \frac{p^{2}}{2}\,-\,\frac{\lambda}{2}\,(\hat{V}* \chi)(p)\ \Big{)}\.\] (2.16)
2. For \(t\in\mathbb{R}\) and \(x\in\mathbb{R}\) we define the following _modified Delta function_ \[\delta_{t}(x)\equiv t\delta_{1}(tx)\qquad\text{ where }\qquad\delta_{1}(x)\equiv\frac{2}{\pi}\frac{\sin^{2}\Big{(}\frac{x}{2} \Big{)}}{x^{2}}\] (2.17)
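For orientation, two standard properties of this mollifier can be checked directly from (2.17):

\[\delta_{t}(x)=\frac{1}{2\pi t}\Big{|}\int_{0}^{t}e^{isx}\,\mathrm{d}s\Big{|}^{2}=\frac{2}{\pi t}\frac{\sin^{2}\big(\frac{tx}{2}\big)}{x^{2}}\qquad\text{and}\qquad\int_{\mathbb{R}}\delta_{t}(x)\,\mathrm{d}x=1\.\]

In particular, \(\delta_{t}\) is a normalized approximation of the Dirac delta concentrating at \(x=0\) as \(t\to\infty\); kernels of this form typically arise from second-order time-dependent perturbation theory.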
The following operator describes Boltzmann-type interactions between particle/particle, particle/hole and hole/hole pairs. For convenience, for any \(k\in\mathbb{N}\), we will write \(\chi(p_{1},\ldots,p_{k})=\chi(p_{1})\cdots\chi(p_{k})\) for \(p_{1},\cdots,p_{k}\in\Lambda^{*}\), and similarly for \(\chi^{\perp}\).
**Definition 2**.: _For \(f\in\ell^{1}(\Lambda^{*})\) and \(t\in\mathbb{R}\) we define_
\[Q_{t}[f](p) \equiv\pi\int_{\Lambda^{*4}}\mathrm{d}\vec{p}\,\sigma(\vec{p}) \left[\delta(p-p_{1})+\delta(p-p_{2})-\delta(p-p_{3})-\delta(p-p_{4})\right] \tag{2.18}\] \[\times\delta_{t}[E_{p_{1}}+E_{p_{2}}-E_{p_{3}}-E_{p_{4}}]\Big{(} f(p_{3})f(p_{4})\widetilde{f}(p_{1})\widetilde{f}(p_{2})-f(p_{1})f(p_{2}) \widetilde{f}(p_{3})\widetilde{f}(p_{4})\Big{)}\.\]
_The coefficient function \(\sigma:(\Lambda^{*})^{4}\to\mathbb{R}\) is decomposed as_
\[\sigma=\sigma_{HH}+\sigma_{PP}+\sigma_{HP}+\sigma_{PH} \tag{2.19}\]
_where the coefficient functions are defined for \(\vec{p}=(p_{1},p_{2},p_{3},p_{4})\in(\Lambda^{*})^{4}\) as follows_
\[\sigma_{HH}(\vec{p}) =\chi(p_{1},p_{2},p_{3},p_{4})\delta(p_{1}+p_{2}-p_{3}-p_{4})|\hat{V }(p_{1}-p_{4})-\hat{V}(p_{1}-p_{3})|^{2} \tag{2.20}\] \[\sigma_{PP}(\vec{p}) =\chi^{\perp}(p_{1},p_{2},p_{3},p_{4})\delta(p_{1}+p_{2}-p_{3}-p_{ 4})|\hat{V}(p_{1}-p_{4})-\hat{V}(p_{1}-p_{3})|^{2}\] (2.21) \[\sigma_{HP}(\vec{p}) =2\chi(p_{1},p_{3})\chi^{\perp}(p_{2},p_{4})\delta(p_{1}-p_{2}-p_ {3}+p_{4})|\hat{V}(p_{1}-p_{3})|^{2}\] (2.22) \[\sigma_{PH}(\vec{p}) =2\chi^{\perp}(p_{1},p_{3})\chi(p_{2},p_{4})\delta(p_{1}-p_{2}-p_ {3}+p_{4})|\hat{V}(p_{1}-p_{3})|^{2}. \tag{2.23}\]
The following operator describes the effect of the bosonized excitations around the Fermi surface on holes and particles.
**Definition 3**.: _For all \(t\in\mathbb{R}\) we define in terms of particle and hole interactions_
\[B_{t}\equiv B_{t}^{(H)}+B_{t}^{(P)}:\ell^{1}(\Lambda^{*})\to\ell^{1}(\Lambda^{ *}) \tag{2.24}\]
_where \(B_{t}^{(H)}:\ell^{1}(\Lambda^{*})\to\ell^{1}(\Lambda^{*})\) and \(B_{t}^{(P)}:\ell^{1}(\Lambda^{*})\to\ell^{1}(\Lambda^{*})\) are defined as follows_
\[B_{t}^{(H)}[f](h) \equiv 2\pi\int_{\Lambda^{*}}|\hat{V}(k)|^{2}\bigg{(}\alpha_{t}^{H}(h-k,k)f(h-k)\widetilde{f}(h)-\alpha_{t}^{H}(h,k)f(h)\widetilde{f}(h+k)\bigg{)}\mathrm{d}k\,\] \[B_{t}^{(P)}[f](p) \equiv 2\pi\int_{\Lambda^{*}}|\hat{V}(k)|^{2}\bigg{(}\alpha_{t}^{P}(p+k,k)f(p+k)\widetilde{f}(p)-\alpha_{t}^{P}(p,k)f(p)\widetilde{f}(p-k)\bigg{)}\mathrm{d}k\,\]
_for \(f\in\ell^{1}\) and \(p,h\in\Lambda^{*}\). Here, the coefficients \(\alpha_{t}^{H}\) and \(\alpha_{t}^{P}\) are defined as_
\[\alpha_{t}^{H}(h,k) \equiv\chi(h)\chi(h+k)\int_{\Lambda^{*}}\chi(r)\chi^{\perp}(r+k) \delta_{t}\big{[}E_{h}-E_{h+k}-E_{r}-E_{r+k}\big{]}\mathrm{d}r\, \tag{2.25}\] \[\alpha_{t}^{P}(p,k) \equiv\chi^{\perp}(p)\chi^{\perp}(p-k)\int_{\Lambda^{*}}\chi(r) \chi^{\perp}(r+k)\delta_{t}\big{[}E_{p}-E_{p-k}-E_{r}-E_{r+k}\big{]}\mathrm{d }r\, \tag{2.26}\]
_for all \(p,h,k\in\Lambda^{*}\)._
Finally, let us introduce an appropriate space of functions. Indeed, for \(m>0\) we introduce the following weight
\[w_{m}(p)\equiv\begin{cases}\left\langle p\right\rangle^{m}\,&p\in\mathcal{S}\\ 1\,&p\in\Lambda^{*}\backslash\mathcal{S}\end{cases}. \tag{2.27}\]
where \(\left\langle p\right\rangle\equiv(1+p^{2})^{1/2}\) denotes the standard Japanese bracket. We define the Banach space \(\ell_{m}^{1}\equiv\ell_{m}^{1}(\Lambda^{*})\) of functions \(\varphi:\Lambda^{*}\to\mathbb{C}\) for which the norm

\[\|\varphi\|_{\ell_{m}^{1}}\equiv\int_{\Lambda^{*}}|\varphi(p)|w_{m}(p)\mathrm{d}p \tag{2.28}\]

is finite. We will measure distances in the norm associated to the dual space of \(\ell_{m}^{1}(\Lambda^{*})\). Namely, we regard \(\ell_{m}^{1*}\equiv[\ell_{m}^{1}(\Lambda^{*})]^{*}\) as the Banach space of functions \(f:\Lambda^{*}\to\mathbb{C}\) endowed with the norm
\[\|f\|_{\ell_{m}^{1*}}\equiv\sup_{p\in\Lambda^{*}}w_{m}(p)^{-1}|f(p)|=\sup_{ \varphi\in\ell_{m}^{\mathrm{I}}}\frac{|\left\langle\varphi,f\right\rangle|}{ \|\varphi\|_{\ell_{m}^{\mathrm{I}}}} \tag{2.29}\]
where we denote by \(\left\langle\varphi,f\right\rangle\equiv\int_{\Lambda^{*}}\overline{\varphi(p)} f(p)\mathrm{d}p\) the coupling between \(\ell_{m}^{1}\) and \(\ell_{m}^{1*}\).
_Remark 2.3_.: As vector spaces, \(\ell_{m}^{1}(\Lambda^{*})=\ell^{1}(\Lambda^{*})\) and \(\ell_{m}^{1*}(\Lambda^{*})=\ell^{\infty}(\Lambda^{*})\) for all \(m>0\). However, we choose to equip these spaces with the norms \(\|\cdot\|_{\ell_{m}^{1}}\) and \(\|\cdot\|_{\ell_{m}^{1*}}\) since the weight \(w_{m}(p)\) appropriately records the decay near the Fermi surface \(\mathcal{S}\)-this point will be crucial in our analysis. For completeness, we record here the following inequality
\[\|f\|_{\ell^{\infty}(\Lambda^{*}\setminus\mathcal{S})}\leqslant\|f \|_{\ell_{m}^{1*}}\leqslant\|f\|_{\ell^{\infty}(\Lambda^{*})}\,\qquad\forall f\in\ell^{\infty}(\Lambda^{*}) \tag{2.30}\]
which we shall make use of, when studying the fixed volume case in the next subsection.
_Remark 2.4_.: If \(f\in\ell_{m}^{1*}\) is real-valued, one may restrict the supremum over \(\varphi\in\ell_{m}^{1}\) on the right hand side of (2.29) to be real-valued as well.
The following theorem is our main result. It contains an estimate in \(\ell_{m}^{1*}\)-norm that collects the leading order terms of the evolution-in-time of the momentum distributions of particles and holes.
**Theorem 1**.: _Let \(\mathfrak{h}\) be the particle-hole Hamiltonian defined in (2.8). Assume that the initial state of the system \(\nu\) satisfies Condition 1, and consider the momentum distribution \(f_{t}(p)\) defined in (2.9). We denote by_
\[n\equiv|\Lambda|\int_{\Lambda^{*}}f_{0}(p)\mathrm{d}p \tag{2.31}\]
_the initial number of particles/holes in the system, and introduce the recurring parameter_
\[R\equiv|\Lambda|p_{F}^{d-1}\simeq LN^{\frac{d-1}{d}}. \tag{2.32}\]
_Assume that \(1\lesssim n\lesssim R^{1/2}\). Then, for all \(m>0\) there exists \(C=C(m,d)>0\) such that for all \(t\geqslant 0\) there holds_
\[\|f_{t}-f_{0}-\lambda^{2}t\,Q_{t}[f_{0}]-\lambda^{2}t\,B_{t}[f_{0}] \|_{\ell_{m}^{1*}}\ \leqslant\ C\,\lambda^{2}t\,\big{(}\,\theta_{1}\,t\,\langle t \rangle+\,\theta_{2}\,t\,\big{)}\,\exp(C\lambda R\,\langle t\rangle) \tag{2.33}\]
_where the positive parameters \(\theta_{1}\) and \(\theta_{2}\) are defined as_
\[\theta_{1}\equiv\lambda R^{2}(R^{\frac{1}{2}}+n^{2})\quad\text{and}\quad \theta_{2}\equiv\frac{R^{3}}{p_{F}^{m}}. \tag{2.34}\]
_Remark 2.5_ (Finite volume).: In the next subsection, we study the fixed volume case \(L=2\pi\) in three dimensions. We consider macroscopic time scales \(T\simeq 1\), and construct appropriate initial data for which the large-time limits of \(Q_{t}\) and \(B_{t}\) describe the time evolution of holes in the Fermi ball \(\mathcal{B}\). Our parameter range includes the following scaling regime as an example
\[n\simeq N^{\frac{2}{9}}\,\qquad\lambda=\frac{1}{N^{2}}\,\qquad t=N^{\frac{1}{ 3}}T\qquad\text{and}\qquad m=3. \tag{2.35}\]
_Remark 2.6_ (Comparison of physical scales).: The physical situation given by (2.35) can be compared with the combined mean-field \(\lambda=1/N\) and semi-classical regime \(\hbar=N^{-1/3}\) as follows. When both are measured in microscopic units:
1. The strength of our interaction is \(\mathcal{O}(1/N)\) weaker, and its range is \(\mathcal{O}(N^{1/3})\) shorter;
2. Both time scales are \(\mathcal{O}(N^{1/3})\); and
3. The size of our box is \(\mathcal{O}(N)\) smaller than in the semi-classical case. Consequently, the Fermi momentum \(p_{F}\) in our regime is of order \(\mathcal{O}(N^{1/3})\)-much larger than its \(\mathcal{O}(1)\) semi-classical analog.
#### 2.2.1. Discussion
Note that all the physical parameters in Eq. (2.33) remain finite and non-zero. This is different from many results in the mathematical physics literature, where the goal is to prove a convergence result of the form \(f=\lim_{N\to\infty}f_{N}\), where \(f_{N}\) is an object extracted from the many-body problem, all the other physical parameters remain fixed or have been expressed in terms of \(N\), and \(f\) is a limiting object independent of \(N\). This approach has been carried out for the derivation of mean-field equations in the quantum and classical settings, Boltzmann-type equations in the probabilistic and deterministic settings, the Euler equation in the hydrodynamical limit, and many others. Results of this kind require one to know _a priori_ the scaling regime under consideration, i.e. the functional dependence of the physical parameters in terms of a single one (usually, \(N\gg 1\) or \(\lambda\ll 1\))-this approach is highly compatible with BBGKY hierarchy methods, in which compactness arguments are often employed.
As we have noted in the introductory section, the derivation of the quantum Boltzmann equation has been a longstanding conjecture from the mathematical point of view. In particular, the optimal scaling regime in which it could be derived is a matter of active research. Notably, this includes various theories of interacting quantum gases, like the one studied in this paper. To explore the appropriate parameter ranges, we do not fix a strict dependence between the physical parameters of the system. The "parameter window" for which the inequality (2.33) is a meaningful approximation is then found _a posteriori_-the better the estimates, the larger the window.
Our result by no means identifies the _optimal_ parameter window for which emergence of this phenomenon holds true. However, it is the first result to identify the leading order terms that drive the dynamics of holes inside of the Fermi ball, for small values of the coupling constant.
### Fixed volume
Let us discuss in this section a scaling regime for which Theorem 1 turns into an effective approximation. Namely, the situation in which the linear length of the box is \(L=2\pi\), and \(d=3\). Here, the dual lattice becomes \(\Lambda^{*}=\mathbb{Z}^{3}\). The physical situation in which \(|\Lambda|\gg 1\) and \(\Lambda^{*}\sim\mathbb{R}^{3}\) (that is, the continuum approximation) will not be addressed in this article.
The most important feature of this regime is that one is able to easily identify the leading order time dependence of the operators \(Q_{t}\) and \(B_{t}\), contained in the mollified delta function \(\delta_{t}(\Delta E)\). Indeed, we prove in Lemma 10.1 that for all \(x\in\mathbb{Z}\) and \(y\in\mathbb{R}\)
\[\delta_{t}(x+\lambda y)=\,\frac{2t}{\pi}\delta_{x,0}\,+\mathcal{O}(1/tx^{2})+ \mathcal{O}(t^{3}\lambda^{2}|y|^{3}). \tag{2.36}\]
Consequently, in Lemma 10.2 and 10.3 we are able to identify the time dependence of the operators \(Q_{t}\) and \(B_{t}\) as follows
\[Q_{t}[f] =t\mathscr{Q}[f]+\mathcal{O}_{\ell^{\infty}}(1/t)+\mathcal{O}_{ \ell^{\infty}}(t^{3}\lambda^{2}\|\hat{V}\|_{\ell^{1}}^{2})\,\] \[B_{t}[f] =t\mathscr{B}[f]+\mathcal{O}_{\ell^{\infty}}(1/t)+\mathcal{O}_{ \ell^{\infty}}(t^{3}\lambda^{2}\|\hat{V}\|_{\ell^{1}}^{2}). \tag{2.37}\]
where \(f\in\ell^{1}(\mathbb{Z}^{3}).\) Here, the operator \(\mathscr{Q}[f]\) is defined as in Def. 2 but with \(\delta_{t}(\Delta E)\) being replaced by the discrete Delta function on the lattice \((2/\pi)\delta_{\Delta e,0}\), where now energy conservation holds for the signed free dispersion relation:1
Footnote 1: In the literature, one usually finds the dispersion relation for particles and holes written in terms of the absolute value \(|p^{2}/2-p_{F}^{2}/2|=e(p)+p_{F}^{2}/2\left(\chi^{\perp}(p)-\chi(p)\right)\). This choice is particularly useful for linearizing the dispersion relation of holes/particles around the Fermi surface \(p\in\mathcal{S}\), for it is then clear that \(|p^{2}/2-p_{F}^{2}/2|\lesssim p_{F}\). While this is important for the dynamics of bosonized particle-hole pairs, it is not relevant for the case under studyβhence, we choose \(e(p)\) as above. Thanks to charge conservation, both representations are equivalent.
\[\Delta e\equiv e(p_{1})+e(p_{2})-e(p_{3})-e(p_{4})\,\qquad\text{where}\qquad e(p) \equiv\big{[}\chi^{\perp}(p)-\chi(p)\big{]}\frac{p^{2}}{2}. \tag{2.38}\]
The definition of \(\mathscr{B}\) is analogous.
In this regime, we consider now a macroscopic time scale \(T\simeq 1\) for which the right hand side of Eq. (2.37) is small, but the estimate contained in Theorem 1 is meaningful. Namely, let \(T\geqslant 0\) and \(F_{T}\in\ell^{1}(\mathbb{Z}^{3})\) be defined through
\[T\equiv\epsilon t\quad\text{and}\quad F_{T}\equiv f_{\epsilon^{-1}T} \tag{2.39}\]
where \(\epsilon\in(0,1)\) is a parameter controlling the time scale, which we now define. We fix a relationship between all the parameters as follows:
\[\lambda=1/N^{\frac{5}{3}+\alpha}\qquad 1\leqslant n\leqslant N^{1/3}\,\qquad \epsilon=1/N^{\beta} \tag{2.40}\]
where \(\alpha,\beta>0\) are positive, but independent of \(N\).
In this context, the following result now follows as a corollary of Theorem 1, Lemma 10.2 and 10.3, and the inequalities found in Eq. (2.30).
**Corollary 1** (Fixed volume. Macroscopic times).: _Under the same assumptions as in Theorem 1, let \(F_{T}\) be as in (2.39), and consider all the parameters as in (2.40). Let \(d=3\). Then, for all \(m>0\) there exists \(C>0\) such that for all \(T\in[0,1]\) there holds_
\[F_{T}=F_{0}+(\lambda/\epsilon)^{2}T^{2}\Big{(}\mathscr{Q}[F_{0}]+\mathscr{B} [F_{0}]+\mathrm{Rem}(N,n,T)\Big{)} \tag{2.41}\]
_where \(\mathrm{Rem}\) is a remainder term that satisfies_
\[\|\mathrm{Rem}(N,n,T)\|_{\ell^{\infty}(\mathbb{Z}^{3}\backslash\mathcal{S})} \leqslant C\bigg{(}\frac{N^{2/3}}{N^{2\beta}}+\frac{N^{2/3}}{N^{2(5/3+\alpha- \beta)}}+\frac{(N^{1/3}+n^{2})}{N^{1/3+\alpha-\beta}}+\frac{1}{N^{2m/3-2}} \bigg{)} \tag{2.42}\]
_Remark 2.7_.: Corollary 1 contains information about the evolution of \(F_{T}(p)\) for \(p\in\mathbb{Z}^{3}\backslash\mathcal{S}\), that is, away from the Fermi surface. For \(p\in\mathcal{S}\) and \(T\in[0,1]\), one has the following \(\ell^{1}(\mathbb{Z}^{3})\) bound:
\[\|F_{T}\|_{\ell^{1}(\mathcal{S})}\leqslant C(\lambda R\epsilon^{-1}\left\langle T \right\rangle)^{2}\exp(C\lambda R\epsilon^{-1}T)\leqslant C\frac{\exp(C/N^{1+ \alpha-\beta})}{N^{2(1+\alpha-\beta)}}. \tag{2.43}\]
This bound follows as a propagation of Condition 1 for the initial data \(F_{0}\); see Proposition 5.2. Thus, \(\|F_{T}\|_{\ell^{1}(\mathcal{S})}\ll 1\) provided \(1+\alpha-\beta>0\) and \(N\gg 1\).
The inequality contained in Corollary 1 shows that the effective dynamics dominates over the remainder terms if we have
\[\|\mathscr{Q}[F_{0}]\|_{\ell^{\infty}(\mathbb{Z}^{3}\setminus\mathcal{S})}\,+ \,\|\mathscr{B}[F_{0}]\|_{\ell^{\infty}(\mathbb{Z}^{3}\setminus\mathcal{S})} \,\gg\,\|\text{Rem}(N,n,T)\|_{\ell^{\infty}(\mathbb{Z}^{3}\setminus\mathcal{S} )}. \tag{2.44}\]
It turns out that the sizes of \(\mathscr{Q}[F_{0}]\) and \(\mathscr{B}[F_{0}]\) are quite sensitive to the structure of the initial data. While the operator \(\mathscr{Q}\) depends on _how many_ holes there are in \(\mathcal{B}\), the operator \(\mathscr{B}\) depends on _where_ these holes are found. For illustration, given\({}^{2}\) \(\delta\in(\frac{1034}{1648},1)\), in Section 10.3 we construct initial data \(F_{0}\in\ell^{1}(\mathbb{Z}^{3})\) such that
Footnote 2: The parameter \(\delta_{0}\equiv 1034/1648<2/3\) is the currently best known exponent for the remainder term associated with the Gauss circle problem; see [16, Theorem 2]. The reader is referred to Section 10.3 for details on the connection with the problem at hand.
\[\|\mathscr{Q}[F_{0}]\|_{\ell^{\infty}(\mathcal{B}\setminus\mathcal{S})}\simeq n \qquad\text{and}\qquad\|\mathscr{B}[F_{0}]\|_{\ell^{\infty}(\mathcal{B} \setminus\mathcal{S})}\geqslant CN^{\frac{\delta}{3}}\,\qquad\forall N \gg 1\, \tag{2.45}\]
where \(N^{\delta/3}\) is the momentum of the outermost hole in \(\mathcal{B}\). With this example in mind, let us assume for concreteness that \(n\simeq N^{\delta/3}\) and \(\delta=2/3\). Then, a straightforward calculation shows that the inequality (2.44) is valid for \(N\gg 1\) provided
\[1/9<\beta-1/9<\alpha\qquad\text{and}\qquad m>8/3 \tag{2.46}\]
where the parameters \(\alpha,\beta\) were introduced in (2.40). In particular, by choosing \(\alpha=\beta=1/3\) we get the scaling regime presented in Eq. (2.35).
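To make the exponent bookkeeping behind (2.44)-(2.46) easy to audit, the following short script (our own sanity check, not part of the paper) compares the powers of \(N\) of the signal (2.45) and of the four remainder terms in (2.42), for \(\delta=2/3\), \(n\simeq N^{\delta/3}\) and the choice \(\alpha=\beta=1/3\), \(m=3\), using exact rational arithmetic.

```python
from fractions import Fraction as F

# Sanity check (ours) of the exponent counting in (2.42)-(2.46).
delta = F(2, 3)                           # outermost hole at momentum ~ N^(delta/3)
alpha, beta, m = F(1, 3), F(1, 3), F(3)   # parameter choice below (2.46)

signal = delta / 3                        # exponent of Q[F_0], B[F_0] in (2.45)
remainder = [
    F(2, 3) - 2 * beta,                                      # N^{2/3} / N^{2 beta}
    F(2, 3) - 2 * (F(5, 3) + alpha - beta),                  # N^{2/3} / N^{2(5/3+alpha-beta)}
    max(F(1, 3), 2 * delta / 3) - (F(1, 3) + alpha - beta),  # (N^{1/3}+n^2) / N^{1/3+alpha-beta}
    2 - 2 * m / F(3),                                        # 1 / N^{2m/3-2}
]

print("signal exponent    :", signal)
print("remainder exponents:", remainder)
assert all(e < signal for e in remainder)                 # (2.44) holds for N >> 1
assert F(1, 9) < beta - F(1, 9) < alpha and m > F(8, 3)   # condition (2.46)
```

All remainder exponents (here \(0\), \(-8/3\), \(1/9\) and \(0\)) are strictly below the signal exponent \(2/9\), as claimed.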
## 3. Preliminaries
In this section, we introduce preliminaries that are needed to prove our main result. First, we give an explicit representation of the particle-hole Hamiltonian \(\mathfrak{h}\), introduced in (2.8). Secondly, based on this representation, we introduce the interaction picture framework that we shall use to study the dynamics of the momentum distribution \(f_{t}(p)\), defined in (2.9). Thirdly, we perform a double commutator expansion and identify nine terms, from which we shall extract leading order and subleading order terms. Finally, we introduce number estimates that we use to analyze the nine terms found in the double commutator expansion.
### Calculation of \(\mathfrak{h}\)
Let us introduce two fundamental collections of operators. We shall refer to them informally as \(D\)- and \(b\)-operators, respectively.
**Definition 4**.: _Let \(k\in\Lambda^{*}\)._
1. _We define the_ \(D\)_-operators as_ \[D_{k}\equiv\int_{\Lambda^{*}}\chi^{\perp}(p)\chi^{\perp}(p-k)a^{*}_{p-k}a_{p} \,\mathrm{d}p-\int_{\Lambda^{*}}\chi(h)\chi(h+k)a^{*}_{h+k}a_{h}\,\mathrm{d}h\.\] (3.1)
2. _We define the_ \(b\)_-operators as_ \[b_{k}\equiv\int_{\Lambda^{*}}\chi^{\perp}(p)\chi(p-k)a_{p-k}a_{p}\, \mathrm{d}p\.\] (3.2)
_Remark 3.1_.: For the rest of the article, we denote the corresponding adjoint operators by \(D_{k}^{*}\equiv[D_{k}]^{*}\) and \(b_{k}^{*}\equiv[b_{k}]^{*}\), respectively. Additionally, we shall extensively use the basic relation
\[D_{k}^{*}=D_{-k}\qquad\forall k\in\Lambda^{*}. \tag{3.3}\]
_Remark 3.2_.: One should understand the operators \(D\) as _fermionic_ operators; they intertwine holes with holes, and particles with particles. On the other hand, the operators \(b\) should be understood as _bosonic_ operators; they create/annihilate bosonized particle-hole pairs near the Fermi surface. In fact, the following commutation relation holds
\[[b_{k},D_{k}^{*}]=0\qquad\forall k\in\Lambda^{*}. \tag{3.4}\]
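Relation (3.4) is purely combinatorial: it follows from the pointwise cancellations \(\chi^{\perp}\chi=0\) and therefore survives any truncation of momentum space. As an illustration, the following toy verification (entirely our own construction: seven fermionic modes on a one-dimensional momentum lattice, realized through a Jordan-Wigner representation; none of it is taken from the main text) checks (3.4), together with the relation \(\mathcal{N}b_{k}=b_{k}(\mathcal{N}-2)\) that underlies the pull-through formulae of Section 4.

```python
import numpy as np

# Toy verification of Eq. (3.4) -- our own construction, not from the paper.
# Seven fermionic modes labeled by momenta P = {-3,...,3} on a 1D lattice,
# realized via a Jordan-Wigner representation; Fermi ball B = {|p| <= 1}.
P = list(range(-3, 4))
M = len(P)
I2, Z = np.eye(2), np.diag([1.0, -1.0])
s = np.array([[0.0, 1.0], [0.0, 0.0]])           # single-mode annihilator

def a(j):                                        # annihilation operator, mode j
    ops = [Z] * j + [s] + [I2] * (M - 1 - j)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

A = [a(j) for j in range(M)]
idx = {p: j for j, p in enumerate(P)}
chi = lambda p: p in P and abs(p) <= 1           # inside the Fermi ball
chiP = lambda p: p in P and abs(p) > 1           # chi^perp: outside

k, dim = 1, 2 ** M
D, b = np.zeros((dim, dim)), np.zeros((dim, dim))
for p in P:
    if chiP(p) and chiP(p - k):                  # particle-particle part of D_k
        D += A[idx[p - k]].T @ A[idx[p]]
    if chi(p) and chi(p + k):                    # hole-hole part of D_k
        D -= A[idx[p + k]].T @ A[idx[p]]
    if chiP(p) and chi(p - k):                   # b_k annihilates a p-h pair
        b += A[idx[p - k]] @ A[idx[p]]

N = sum(A[j].T @ A[j] for j in range(M))         # number operator
print(np.linalg.norm(b @ D.T - D.T @ b))                  # [b_k, D_k^*] = 0, Eq. (3.4)
print(np.linalg.norm(N @ b - b @ (N - 2 * np.eye(dim))))  # N b_k = b_k (N - 2)
```

Both printed norms vanish up to machine precision.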
The following lemma contains the explicit representation for the particle-hole Hamiltonian, in terms of a "solvable Hamiltonian", plus interaction terms depending on \(D\) and \(b\) operators.
**Lemma 3.1**.: _Let \(\mathfrak{h}\) be the operator defined in (2.8). Then, the following identity holds_
\[\mathfrak{h}-\mu_{1}\mathds{1}-\mu_{2}\mathcal{Q}=\mathfrak{h}_{0}+\lambda \mathcal{V} \tag{3.5}\]
_for some real-valued constants \(\mu_{1},\mu_{2}\in\mathbb{R}\). Here \(\mathcal{Q}\) corresponds to the charge operator_
\[\mathcal{Q}\equiv\int_{\Lambda^{*}}\chi^{\perp}(p)a_{p}^{*}a_{p}\mathrm{d}p- \int_{\Lambda^{*}}\chi(p)a_{p}^{*}a_{p}\mathrm{d}p; \tag{3.6}\]
\(\mathfrak{h}_{0}\) _corresponds to the quadratic, diagonal operator_
\[\mathfrak{h}_{0}=\int_{\Lambda^{*}}E_{p}a_{p}^{*}a_{p}\mathrm{d}p \tag{3.7}\]
_with \(E_{p}\) the dispersion relation defined in (2.16); and \(\mathcal{V}=V_{F}+V_{FB}+V_{B}\) contains the following three interaction terms_
\[V_{F} \equiv\frac{1}{2}\int_{\Lambda^{*}}\hat{V}(k)D_{k}^{*}D_{k}\ \mathrm{d}k \tag{3.8}\] \[V_{FB} \equiv\int_{\Lambda^{*}}\hat{V}(k)D_{k}^{*}\big{[}b_{k}+b_{-k}^{* }\big{]}\mathrm{d}k\] (3.9) \[V_{B} \equiv\int_{\Lambda^{*}}\hat{V}(k)\big{[}b_{k}^{*}b_{k}+\frac{1} {2}\,b_{k}^{*}b_{-k}^{*}+\frac{1}{2}\,b_{-k}b_{k}\big{]}\mathrm{d}k. \tag{3.10}\]
_Remark 3.3_.: The labeling of \(V_{F}\), \(V_{FB}\) and \(V_{B}\) is of course related to Remark 3.2. Namely, \(V_{F}\) contains fermion/fermion interactions, \(V_{FB}\) contains fermion/boson interactions and \(V_{B}\) contains boson/boson interactions.
_Remark 3.4_.: The charge operator \(\mathcal{Q}\) is irrelevant for the dynamics in the system. Indeed, one may easily check that \([\mathfrak{h}_{0},\mathcal{Q}]=[D,\mathcal{Q}]=[b,\mathcal{Q}]=0\) and, therefore, \([\mathfrak{h},\mathcal{Q}]=0\). In other words, the charge is a constant of motion and only the right hand side of (3.5) is relevant regarding the time evolution of the momentum distribution of the system. We make this argument precise in the next subsection.
The proof of the above Lemma will not be given here, for it has already been considered in the literature in a very similar form. The reader is referred for instance to [9, pp. 897-899].
### The interaction picture
Let us now exploit the identity found in (3.5). First, recalling that the Hamiltonian \(\mathfrak{h}_{0}\) is quadratic and diagonal with respect to creation and annihilation operators, we may easily calculate the associated Heisenberg evolution to be given by
\[a_{p}(t) \equiv e^{it\mathfrak{h}_{0}}a_{p}e^{-it\mathfrak{h}_{0}}=e^{-itE_{p }}a_{p}\, \tag{3.11}\] \[a_{p}^{*}(t) \equiv e^{it\mathfrak{h}_{0}}a_{p}^{*}e^{-it\mathfrak{h}_{0}}=e^{+ itE_{p}}a_{p}^{*}\, \tag{3.12}\]
for all \(p\in\Lambda^{*}\) and \(t\in\mathbb{R}\); the dispersion relation \(E_{p}\) was defined in (2.16). Secondly, we introduce the _interaction Hamiltonian_
\[\mathfrak{h}_{I}(t)\equiv\lambda\,e^{it\mathfrak{h}_{0}}\mathcal{V}e^{-it \mathfrak{h}_{0}}\qquad\forall t\in\mathbb{R}\, \tag{3.13}\]
where \(\mathfrak{h}_{0}\) and \(\mathcal{V}\) are defined in Lemma 3.1.
We now introduce the dynamics associated to the interaction picture.
**Definition 5**.: _Given an initial state \(\nu:B(\mathscr{F})\to\mathbb{C}\), we denote by \((\nu_{t})_{t\in\mathbb{R}}\) the solution of the initial value problem_
\[\left\{\begin{array}{rl}&i\partial_{t}\nu_{t}(\mathcal{O})=\nu_{t}\big{(}[ \mathfrak{h}_{I}(t),\mathcal{O}]\big{)}\quad\forall\mathcal{O}\in B(\mathscr{F})\\ &\nu_{0}=\nu\end{array}\right. \tag{3.14}\]
_which we shall refer to as the interaction dynamics._
The momentum distribution of the system \(f_{t}(p)\), introduced in Def. 1, is now linked to the interaction dynamics. Indeed, a standard calculation shows that for all \(t\in\mathbb{R}\) and \(p\in\Lambda^{*}\), there holds
\[f_{t}(p)=|\Lambda|^{-1}\nu_{t}(a_{p}^{*}a_{p}). \tag{3.15}\]
In the next subsection, we shall use Eq. (3.15) to expand \(f_{t}(p)\).
### Double commutator expansion
Let \(f_{t}(p)\) be as in Eq. (3.15), and let us recall that \(\nu\) is an initial state satisfying Condition 1. In particular, quasi-freeness and translation invariance imply that
\[\nu([a_{p}^{*}a_{p},a_{k_{1}}^{\sharp}a_{k_{2}}^{\sharp}a_{k_{3}}^{\sharp}a_{k_{4}}^{\sharp}])=0\,\qquad\forall\,k_{1},k_{2},k_{3},k_{4}\in \Lambda^{*}\, \tag{3.16}\]
where \(a^{\sharp}\) stands for either \(a\) or \(a^{*}\).
Thus, upon expressing the Hamiltonian \(\mathfrak{h}_{I}(t)\) in terms of creation- and annihilation operators, one finds that \(\partial_{t}|_{t=0}f_{t}(p)=i|\Lambda|^{-1}\nu([a_{p}^{*}a_{p},\mathfrak{h}_{I }(0)])=0\). Hence, the following second-order expansion holds true
\[f_{t}(p)=f_{0}(p)-|\Lambda|^{-1}\int_{0}^{t}\int_{0}^{t_{1}}\nu_{t_{2}}\Big{(} [[a_{p}^{*}a_{p},\mathfrak{h}_{I}(t_{1})],\mathfrak{h}_{I}(t_{2})]\Big{)}\mathrm{d}t_{1}\mathrm{d}t_{2} \tag{3.17}\]
for any \(t\in\mathbb{R}\) and \(p\in\Lambda^{*}.\) We dedicate the rest of this article to the study of the right-hand side of the above equation.
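For the reader's convenience, let us spell out the (standard) two-step Duhamel iteration behind (3.17); the computation below is ours. Integrating (3.14) once gives

\[\nu_{t}(\mathcal{O})=\nu(\mathcal{O})+i\int_{0}^{t}\nu_{t_{1}}\big{(}[\mathcal{O},\mathfrak{h}_{I}(t_{1})]\big{)}\mathrm{d}t_{1}\,\]

and applying the same identity to the integrand, with \(\mathcal{O}\) replaced by \([\mathcal{O},\mathfrak{h}_{I}(t_{1})]\) at fixed \(t_{1}\), yields

\[\nu_{t}(\mathcal{O})=\nu(\mathcal{O})+i\int_{0}^{t}\nu\big{(}[\mathcal{O},\mathfrak{h}_{I}(t_{1})]\big{)}\mathrm{d}t_{1}-\int_{0}^{t}\int_{0}^{t_{1}}\nu_{t_{2}}\big{(}[[\mathcal{O},\mathfrak{h}_{I}(t_{1})],\mathfrak{h}_{I}(t_{2})]\big{)}\mathrm{d}t_{2}\mathrm{d}t_{1}.\]

For \(\mathcal{O}=a_{p}^{*}a_{p}\) the first-order term vanishes by (3.16), since \(\mathfrak{h}_{I}(t_{1})\) is quartic in creation and annihilation operators; dividing by \(|\Lambda|\) then gives (3.17).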
Let us identify all of the terms in the double commutator expansion found above. A straightforward expansion of the interaction Hamiltonian yields the decomposition
\[\mathfrak{h}_{I}(t)=\lambda\big{(}V_{F}(t)+V_{FB}(t)+V_{B}(t)\big{)}\qquad \forall t\in\mathbb{R} \tag{3.18}\]
where the interaction terms above evolve according to the Heisenberg picture. Namely, we set
\[V_{\alpha}(t)\equiv\,e^{it\mathfrak{h}_{0}}V_{\alpha}\,e^{-it\mathfrak{h}_{0} }\qquad\forall t\in\mathbb{R}\,\alpha\in\{F,FB,B\}. \tag{3.19}\]
Upon expanding the right hand side of (3.17), one finds the following nine terms
\[f_{t}-f_{0}= -\lambda^{2}|\Lambda|^{-1}\Big{(}T_{F,F}(t)+T_{F,FB}(t)+T_{F,B}(t) \Big{)}\] \[-\lambda^{2}|\Lambda|^{-1}\Big{(}T_{FB,F}(t)+T_{FB,FB}(t)+T_{FB,B}( t)\Big{)}\] \[-\lambda^{2}|\Lambda|^{-1}\Big{(}T_{B,F}(t)+T_{B,FB}(t)+T_{B,B}( t)\Big{)} \tag{3.20}\]
where we set, for \(t\in\mathbb{R}\) and \(p\in\Lambda^{*}\)
\[T_{\alpha,\beta}(t,p)\equiv\int_{0}^{t}\int_{0}^{t_{1}}\nu_{t_{2}}\Big{(}[[a_ {p}^{*}a_{p},V_{\alpha}(t_{1})],V_{\beta}(t_{2})]\Big{)}\mathrm{d}t_{1} \mathrm{d}t_{2}\qquad\alpha,\beta\in\{F,FB,B\}. \tag{3.21}\]
We shall analyze in detail the quantities \(T_{\alpha,\beta}:\mathbb{R}\times\Lambda^{*}\to\mathbb{R}\) when tested against a smooth function. To this end, let us introduce some notation we shall be using for the rest of this work. For \(\varphi:\Lambda^{*}\to\mathbb{C}\) we let
\[N(\varphi)\equiv\int_{\Lambda^{*}}\overline{\varphi(p)}a_{p}^{*}a_{p}\mathrm{ d}p \tag{3.22}\]
together with
\[T_{\alpha,\beta}(t,\varphi)\equiv\langle\varphi,T_{\alpha,\beta}(t)\rangle= \int_{0}^{t}\int_{0}^{t_{1}}\nu_{t_{2}}\Big{(}[[N(\varphi),V_{\alpha}(t_{1})],V_{\beta}(t_{2})]\Big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}. \tag{3.23}\]
### Excitation operators
The following two operators will play a major role in our analysis. They correspond to the number operator that counts the total number of particles and holes in the system, together with the number operator that only counts the number of particles and holes on the Fermi surface \(\mathcal{S}\). More precisely, we consider
**Definition 6**.: _We define the two following operators in \(\mathscr{F}\). (1) The number operator as_
\[\mathcal{N}\equiv\int_{\Lambda^{*}}a_{p}^{*}a_{p}\mathrm{d}p. \tag{3.24}\]
_(2) The surface-localized number operator as_
\[\mathcal{N}_{\mathcal{S}}\equiv\int_{\mathcal{S}}a_{p}^{*}a_{p}\mathrm{d}p \tag{3.25}\]
_where \(\mathcal{S}\) is the Fermi surface, defined in (2.12)._
_Remark 3.5_ (Domains).: \(\mathcal{N}\) is an unbounded self-adjoint operator in \(\mathscr{F}\) with domain \(\mathcal{D}(\mathcal{N})=\{\Psi=(\psi_{n})_{n\geqslant 0}\in\mathscr{F}: \sum_{n\geqslant 0}n^{2}\|\psi_{n}\|_{L^{2}(\Lambda^{n})}^{2}<\infty\}\). As initial data, the mixed states that we work with satisfy
\[\nu(\mathcal{N})\equiv\int_{\Lambda^{*}}\nu(a_{p}^{*}a_{p})\mathrm{d}p=\int_{ \Lambda^{*}}f_{0}(p)\mathrm{d}p<\infty\, \tag{3.26}\]
and similarly for higher powers \(\mathcal{N}^{k}\). It is standard to show that the time evolution generated by the particle-hole Hamiltonian \(\mathfrak{h}\), as defined in (2.8), preserves \(\mathcal{D}(\mathcal{N})\), in the sense that \(\nu_{t}(\mathcal{N}^{k})<\infty\) for \(t\in\mathbb{R}\) and \(k\in\mathbb{N}\). In order to simplify the exposition, we shall purposefully not refer to the unbounded nature of the operator \(\mathcal{N}\) in the rest of the article.
The proof of Theorem 1 relies on the fact that the subleading order terms that arise from the double commutator expansion (written in terms of \(b\)- and \(D\)-operators) can be bounded above by expectations of the operators \(\mathcal{N}\) and \(\mathcal{N}_{\mathcal{S}}\), with respect to the evolution of the state \(\nu\) driven by the interaction Hamiltonian \(\mathfrak{h}_{I}(t)\). This analysis is carried out in Section 4. Further, in Section 5 we prove bounds for the growth-in-time of the expectations \(\nu_{t}(\mathcal{N})\) and \(\nu_{t}(\mathcal{N}_{\mathcal{S}})\). This two-step analysis is combined in Section 9 to prove Theorem 1.
## 4. Tool Box I: Analysis of \(b\)- and \(D\)-operators
In the last section, we introduced the time evolution of certain observables in the Heisenberg picture, with respect to the solvable Hamiltonian \(\mathfrak{h}_{0}\), introduced in (3.7). In particular, the evolution of the creation- and annihilation- operators \(a\) and \(a^{*}\) take the simple form
\[a_{p}(t)=e^{-itE_{p}}a_{p}\qquad\text{and}\qquad a_{p}^{*}(t)=e^{+itE_{p}}a_{p} ^{*}\, \tag{4.1}\]
for all \(p\in\Lambda^{*}\) and \(t\in\mathbb{R}\); the dispersion relation \(E_{p}\) was defined in (2.16). Let us now introduce the Heisenberg evolution of the \(b\)- and \(D\)-operators as follows.
**Definition 7**.: _Let \(k\in\Lambda^{*}\) and \(t\in\mathbb{R}\). (1) The Heisenberg evolution of the \(D\)-operators is given by_
\[D_{k}(t)\equiv e^{it\mathfrak{h}_{0}}D_{k}e^{-it\mathfrak{h}_{0}}=\int_{ \Lambda^{*}}\chi^{\perp}(p,p-k)a_{p-k}^{*}(t)a_{p}(t)\,\mathrm{d}p-\int_{ \Lambda^{*}}\chi(h,h+k)a_{h+k}^{*}(t)a_{h}(t)\,\mathrm{d}h\]
_and \(D_{k}^{*}(t)\equiv[D_{k}(t)]^{*}\). (2) The Heisenberg evolution of the \(b\)-operators is given by_
\[b_{k}(t)\equiv e^{it\mathfrak{h}_{0}}b_{k}e^{-it\mathfrak{h}_{0}}=\int_{ \Lambda^{*}}\chi^{\perp}(p)\chi(p-k)a_{p-k}(t)a_{p}(t)\,\mathrm{d}p\]
_and \(b_{k}^{*}(t)\equiv[b_{k}(t)]^{*}\)._
The main goal of this section is to introduce a systematic calculus that lets us deal with combinations of the operators \(b_{k}(t)\) and \(D_{k}(t)\) (together with multiple combinations of their commutators) as they show up in the analysis of the double commutator expansion found in (3.20). First, we introduce several useful identities required for the upcoming analysis. Secondly, we state estimates for several combinations of \(b\)- and \(D\)-operators.
### Identities
In this subsection, we record useful identities between operators in \(\mathscr{F}\) that we shall use extensively in the rest of this article. Most importantly, in the next subsection we shall use these identities to obtain estimates on important commutator observables.
_Preliminary identities_. First, we write general time-independent relations.
1) For all \(p,q,r\in\Lambda^{*}\) the CAR imply that
\[[a_{r}^{*}a_{r},a_{p}^{*}a_{q}]=\big{(}\delta(r-q)-\delta(r-p)\big{)}a_{p}^{*}a _{q} \tag{4.2}\]
2) For all \(p,q\in\Lambda^{*}\) and \(\varphi\in\ell^{1}(\Lambda^{*})\) there holds
\[[N(\varphi),a_{p}^{*}a_{q}]=\Big{(}\overline{\varphi(p)}-\overline{\varphi(q)} \Big{)}a_{p}^{*}a_{q} \tag{4.3}\]
where we recall \(N(\varphi)=\int_{\Lambda^{*}}\overline{\varphi(p)}a_{p}^{*}a_{p}\mathrm{d}p\).
_Commutator identities_. The following lemma contains useful operator identities, to be used in the next section. Since they only rely on the CAR and straightforward commutator calculations, we leave their proof to the reader.
**Lemma 4.1**.: _Let \(k,\ell\in\Lambda^{*}\) and \(t,s\in\mathbb{R}\). (1) For \(p\in\mathcal{B}^{c}\) and \(h\in\mathcal{B}\) there holds_
\[[b_{k}(s),a_{p}^{*}(t)] =\ \chi(p-k)\,e^{i(t-s)E_{p}}a_{p-k}(s)\, \tag{4.4}\] \[[b_{k}(s),a_{h}^{*}(t)] =\ -\chi^{\perp}(h+k)\,e^{i(t-s)E_{h}}a_{h+k}(s). \tag{4.5}\]
_(2) There holds_
\[[b_{\ell}(s),D_{k}^{*}(t)] =\int_{\Lambda^{*}}\chi^{\perp}(p)\chi^{\perp}(p-k)\chi(p-\ell)e^ {i(t-s)E_{p}}a_{p-\ell}(s)a_{p-k}(t)\mathrm{d}p\] \[+\int_{\Lambda^{*}}\chi(h)\chi(h+k)\chi^{\perp}(h+\ell)e^{i(t-s)E _{h}}a_{h+\ell}(s)a_{h+k}(t)\mathrm{d}h. \tag{4.6}\]
_In particular, \([b_{k}(t),D_{k}^{*}(s)]=0\). (3) There holds_
\[[b_{k}(t),b_{\ell}^{*}(s)] =\ \delta(k-\ell)\int_{\Lambda^{*}}\chi^{\perp}(p)\chi(p-k)e^{-i(t-s )(E_{p}+E_{p-k})}\mathrm{d}p\] \[-\int_{\Lambda^{*}}\chi^{\perp}(p)\chi^{\perp}(p+\ell-k)\chi(p-k )e^{-i(t-s)E_{p-k}}a_{p}^{*}(t)a_{p+\ell-k}(s)\,\mathrm{d}p\] \[-\int_{\Lambda^{*}}\chi(h)\chi(h+\ell-k)\chi^{\perp}(h+\ell)e^{-i (t-s)E_{h+k}}a_{h}^{*}(t)a_{h+\ell-k}(s)\,\mathrm{d}h. \tag{4.7}\]
We shall contract \(b\)- and \(D\)-operators with an external function \(\varphi:\Lambda^{*}\to\mathbb{C}\) by means of the following two operators
\[D_{k}^{*}(t,\varphi) \equiv[N(\varphi),D_{k}^{*}(t)]\] \[=\int_{\Lambda^{*}}\chi^{\perp}(p)\chi^{\perp}(p-k)\big{[} \varphi(p)-\varphi(p-k)\big{]}a_{p}^{*}(t)a_{p-k}(t)\mathrm{d}p\] \[\quad-\int_{\Lambda^{*}}\chi(h)\chi(h+k)\big{[}\varphi(h)-\varphi (h+k)\big{]}a_{h}^{*}(t)a_{h+k}(t)\mathrm{d}h\, \tag{4.8}\] \[b_{k}(t,\varphi) \equiv[N(\varphi),b_{k}(t)]\] \[=\int_{\Lambda^{*}}\chi^{\perp}(q)\chi(q-k)\Big{[}\varphi(q-k)+ \varphi(q)\Big{]}a_{q-k}(t)a_{q}(t)\ \mathrm{d}q. \tag{4.9}\]
**Lemma 4.2**.: _Let \(k,\ell\in\Lambda^{*}\), \(t,s\in\mathbb{R}\) and \(\varphi\in\ell^{1}\). (1) There holds_
\[[b_{\ell}(s), D_{k}^{*}(t,\varphi)] \tag{4.10}\] \[=\int_{\Lambda^{*}}\chi^{\perp}(p)\chi^{\perp}(p-k)\chi(p-\ell) \big{[}\varphi(p)-\varphi(p-k)\big{]}e^{i(t-s)E_{p}}a_{p-\ell}(s)a_{p-k}(t){ \rm d}p\] \[+\int_{\Lambda^{*}}\chi(h)\chi(h+k)\chi^{\perp}(h+\ell)\big{[} \varphi(h)-\varphi(h+k)\big{]}e^{i(t-s)E_{h}}a_{h+\ell}(s)a_{h+k}(t){\rm d}h\.\]
**Lemma 4.3** (\(\mathcal{N}\) commutators).: _For all \(k\in\Lambda^{*}\) and \(t\in\mathbb{R}\) the following holds true. (1) For the \(D\)-operators_
\[[D_{k}(t),\mathcal{N}]=[D_{k}^{*}(t),\mathcal{N}]=0 \tag{4.11}\]
_and similarly for the contracted operators \(D_{k}(t,\varphi)\) and \(D_{k}^{*}(t,\varphi)\). (2) For the \(b\)-operators, for any measurable function \(f:\mathbb{R}\to\mathbb{C}\) the pull-through formulae hold true_
\[f(\mathcal{N})b_{k}(t)=b_{k}(t)f(\mathcal{N}-2)\quad\text{\rm and}\quad f( \mathcal{N})b_{k}^{*}(t)=b_{k}^{*}(t)f(\mathcal{N}+2) \tag{4.12}\]
_and similarly for the contracted operators \(b_{k}(t,\varphi)\) and \(b_{k}^{*}(t,\varphi)\)._
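Since the pull-through formulae are used constantly below, let us record (in our own words) why (4.12) holds. Each \(b_{k}(t)\) is quadratic in annihilation operators, so the CAR give \([\mathcal{N},b_{k}(t)]=-2b_{k}(t)\), that is,

\[\mathcal{N}\,b_{k}(t)=b_{k}(t)\,(\mathcal{N}-2).\]

By induction \(\mathcal{N}^{m}b_{k}(t)=b_{k}(t)(\mathcal{N}-2)^{m}\) for every \(m\in\mathbb{N}\), whence \(f(\mathcal{N})b_{k}(t)=b_{k}(t)f(\mathcal{N}-2)\) for polynomials \(f\); for general measurable \(f\) it suffices to note that \(b_{k}(t)\) maps the eigenspace \(\{\mathcal{N}=m\}\) into \(\{\mathcal{N}=m-2\}\), so both sides agree on every eigenspace of \(\mathcal{N}\). The companion formula \(f(\mathcal{N})b_{k}^{*}(t)=b_{k}^{*}(t)f(\mathcal{N}+2)\) follows by taking adjoints.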
**Lemma 4.4** (\(\mathcal{N}_{\mathcal{S}}\) commutators).: _For all \(k\in 3{\rm supp}\hat{V}\) and \(t\in\mathbb{R}\) the following commutation relations hold true_
\[[\mathcal{N}_{\mathcal{S}},b_{k}(t)]=-2b_{k}(t)\quad\text{\rm and}\quad[ \mathcal{N}_{\mathcal{S}},b_{k}^{*}(t)]=+2b_{k}^{*}(t). \tag{4.13}\]
### Estimates
In this subsection we state estimates that shall be used extensively for the rest of this article. Most of these are operator estimates for observables in \(\mathscr{F}\) containing the fermionic creation- and annihilation- operators \(a_{p}\) and \(a_{p}^{*}\). We remind the reader that these are bounded operators with norm \(\|a_{p}\|_{B(\mathscr{F})}\,=\,\|a_{p}^{*}\|_{B(\mathscr{F})}\,\leqslant\,| \Lambda|^{1/2}\) for all \(p\in\Lambda^{*}\).
_Preliminary estimates_. Let us state without proof elementary estimates that we shall make use of.
1) For any function \(f:\Lambda^{*}\to\mathbb{C}\), \(k\in\Lambda^{*}\) and \(\Psi\in\mathscr{F}\) there holds
\[\Big{\|}\int_{\Lambda^{*}}f(p)a_{p+k}^{*}a_{p}{\rm d}p\Psi\,\Big{\|}_{\mathscr{ F}}\,\leqslant\,\,\|f\|_{\ell^{\infty}}\|\mathcal{N}\Psi\|_{\mathscr{F}}. \tag{4.14}\]
2) The Heisenberg evolutions of the creation- and annihilation- operators, \(a_{p}(t)\) and \(a_{p}^{*}(t)\), are bounded operators in \(\mathscr{F}\), with norms
\[\|a_{p}(t)\|_{B(\mathscr{F})}\,=\,\|a_{p}^{*}(t)\|_{B(\mathscr{F})}\,\, \leqslant\,\,|\Lambda|^{1/2},\qquad\forall t\in\mathbb{R},\ p\in\Lambda^{*}. \tag{4.15}\]
3) The Heisenberg evolutions of the \(b\)-operators are bounded operators in \(\mathscr{F}\) with norms
\[\|b_{k}(t)\|_{B(\mathscr{F})}\,=\,\|b_{k}^{*}(t)\|_{B(\mathscr{F})}\,\, \leqslant\,\,|\Lambda|\int_{\Lambda^{*}}\chi^{\perp}(p)\chi(p-k){\rm d}p\ \lesssim\ R \tag{4.16}\]
for all \(k\in{\rm supp}\hat{V}\) and \(t\in\mathbb{R}\); we recall that \(R=|\Lambda|p_{F}^{d-1}\).
_Commutator estimates_. Let us now describe the most important estimates concerning \(b\)- and \(D\)-operators. Essentially, commutators between \(b\)- and \(D\)-operators (together with their contracted versions \(b(\varphi)\) and \(D(\varphi)\)) can be classified into four types, depending on the estimate they verify. It turns out that these four types of estimates exhaust _all_ possibilities that show up in the double commutator expansion for \(f_{t}(p)\). In other words, these estimates are enough to analyze the nine terms \(\{T_{\alpha,\beta}(t,p)\}_{\alpha,\beta\in\{F,FB,B\}}\).
We remind the reader of the relation \(D_{k}^{*}(t)=D_{-k}(t)\), valid for all \(k\in\Lambda^{*}\) and \(t\in\mathbb{R}\). In particular, _all_ of the upcoming inequalities are valid if we replace \(D\) by \(D^{*}\). On the other hand, we warn the reader that this property _does not_ hold for \(b\)-operators in general.
The first type of estimate concerns the combination of operators that are relatively bounded with respect to the number operator \(\mathcal{N}=\int_{\Lambda^{*}}a_{p}^{*}a_{p}\mathrm{d}p\), or any of its powers. We call these _Type-I_ estimates. They are contained in the following lemma.
**Lemma 4.5** (Type-I estimates).: _There exists a constant \(C>0\) such that for any \(\Psi\in\mathscr{F}\), \(k,\ell\in\Lambda^{*}\), and \(t,s,r\in\mathbb{R}\) the following inequalities hold true_
\[\|D_{k}(t)\Psi\|_{\mathscr{F}} \leqslant C\|\mathcal{N}\Psi\|_{\mathscr{F}} \tag{4.17}\] \[\|[D_{k}(t),D_{\ell}(s)]\Psi\|_{\mathscr{F}} \leqslant C\|\mathcal{N}\Psi\|_{\mathscr{F}}\] (4.18) \[\|[D_{k}(t),D_{\ell}(s)D_{\ell}(r)]\Psi\|_{\mathscr{F}} \leqslant C\|\mathcal{N}^{2}\Psi\|_{\mathscr{F}}. \tag{4.19}\]
The second type of estimates concerns combinations of operators that can be bounded above by the surface-localized number operator \(\mathcal{N}_{\mathcal{S}}=\int_{\mathcal{S}}a_{p}^{*}a_{p}\mathrm{d}p\), up to pre-factors that can grow with the recurring parameter \(R=|\Lambda|p_{F}^{d-1}\). We call these _Type-II_ estimates, and they are contained in the following lemma.
**Lemma 4.6** (Type-II estimates).: _There exists a constant \(C>0\) such that for any \(\Psi\in\mathscr{F}\), \(k,\ell,q\in\mathrm{supp}\hat{V}\), and \(t,s,r\in\mathbb{R}\) the following inequalities hold true_
\[\|b_{k}(t)\Psi\|_{\mathscr{F}} \leqslant CR^{\frac{1}{2}}\,\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|_{ \mathscr{F}} \tag{4.20}\] \[\|[b_{\ell}(t),D_{k}(s)]\Psi\|_{\mathscr{F}} \leqslant CR^{\frac{1}{2}}\,\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|_{ \mathscr{F}}\] (4.21) \[\|[[b_{\ell}(t),D_{k}(s)],D_{q}(r)]\Psi\|_{\mathscr{F}} \leqslant CR^{\frac{1}{2}}\,\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|_{ \mathscr{F}}. \tag{4.22}\]
_Remark 4.1_.: In certain proofs, it will be convenient to use the upper bound
\[\mathcal{N}_{\mathcal{S}}\leqslant\mathcal{N}.\]
The reader should then have in mind that the (weaker) version of the estimates contained in Lemma 4.6, in which \(\mathcal{N}_{\mathcal{S}}\) is replaced by \(\mathcal{N}\), also holds true.
The third type of estimate corresponds to combinations of operators that have been contracted with a test function \(\varphi\in\ell_{m}^{1}\), and their operator norm can be bounded above in terms of the integral
\[\int_{\mathcal{S}}|\varphi(p)|\mathrm{d}p\ \lesssim\ p_{F}^{-m}\|\varphi\|_{\ell_{m} ^{1}}. \tag{4.23}\]
We call these _Type-III_ estimates, and they are contained in the following lemma.
**Lemma 4.7** (Type-III estimates).: _Let \(m>0\). There exists a constant \(C>0\) such that for all \(k,\ell,q\in\mathrm{supp}\hat{V},\ t,s,r\in\mathbb{R}\) and \(\varphi\in\ell^{1}_{m}(\Lambda^{*})\) the following inequalities hold true_
\[\|b_{k}(t,\varphi)\|_{B(\mathscr{F})} \leqslant\ C\,|\Lambda|\,p_{F}^{-m}\,\|\varphi\|_{\ell^{1}_{m}} \tag{4.24}\] \[\|[b_{\ell}(t),D_{k}(s,\varphi)]\|_{B(\mathscr{F})} \leqslant\ C\,|\Lambda|\,p_{F}^{-m}\,\|\varphi\|_{\ell^{1}_{m}}\] (4.25) \[\|[[b_{k}(t),D_{\ell}(s)],D_{q}(r,\varphi)]\|_{B(\mathscr{F})} \leqslant\ C\,|\Lambda|\,p_{F}^{-m}\,\|\varphi\|_{\ell^{1}_{m}} \tag{4.26}\]
_Remark 4.2_.: Type-III estimates are symmetric with respect to the exchange of \(b\) and \(b^{*}\). This property follows from the relation \(\|\mathcal{O}\|_{B(\mathscr{F})}=\|\mathcal{O}^{*}\|_{B(\mathscr{F})}\) and the symmetry \(D_{k}^{*}(t)=D_{-k}(t)\).
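Let us also indicate where the weight \(p_{F}^{-m}\) in Lemma 4.7 comes from; the following one-line computation is ours and assumes, as the notation suggests, that the weighted norm is \(\|\varphi\|_{\ell_{m}^{1}}=\int_{\Lambda^{*}}\langle p\rangle^{m}|\varphi(p)|\mathrm{d}p\). Since every \(p\in\mathcal{S}\) satisfies \(|p|\geqslant p_{F}-3r\gtrsim p_{F}\), one has

\[\int_{\mathcal{S}}|\varphi(p)|\mathrm{d}p=\int_{\mathcal{S}}\langle p\rangle^{-m}\langle p\rangle^{m}|\varphi(p)|\mathrm{d}p\leqslant\Big{(}\sup_{p\in\mathcal{S}}\langle p\rangle^{-m}\Big{)}\|\varphi\|_{\ell_{m}^{1}}\lesssim p_{F}^{-m}\|\varphi\|_{\ell_{m}^{1}}\,\]

which is exactly (4.23).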
The fourth and final type of estimate corresponds to combinations of operators that have been contracted with a test function \(\varphi\in\ell^{1}_{m}\), and their operator norm can be bounded above in terms of the integral
\[\int_{\Lambda^{*}}|\varphi(p)|\mathrm{d}p\ =\ \|\varphi\|_{\ell^{1}}\ \lesssim\ \|\varphi\|_{\ell^{1}_{m}}\, \tag{4.27}\]
and a pre-factor, depending on the volume of the box \(|\Lambda|.\) We call these _Type-IV_ estimates, and they are contained in the following lemma.
**Lemma 4.8** (Type-IV estimates).: _There exists a constant \(C>0\) such that for all \(k,\ell,q\in\Lambda^{*},\ t,s,r\in\mathbb{R}\) and \(\varphi\in\ell^{1}(\Lambda^{*})\) the following inequalities hold true_
\[\|D_{k}(t,\varphi)\|_{B(\mathscr{F})} \leqslant\ C\,|\Lambda|\|\varphi\|_{\ell^{1}} \tag{4.28}\] \[\|[D_{k}(t,\varphi),D_{\ell}(s)]\|_{B(\mathscr{F})} \leqslant\ C\,|\Lambda|\|\varphi\|_{\ell^{1}}. \tag{4.29}\]
#### 4.2.1. Proof of Lemmata
In this subsection, we provide sketches for the proofs of Lemmas 4.5, 4.6, 4.7 and 4.8.
Sketch of Proof of Lemma 4.5.: Let us fix \(\Psi\in\mathscr{F}\), \(k,\ell\in\Lambda^{*}\), and \(t,s,r\in\mathbb{R}\).
Proof of (1).: We shall make use of the elementary estimate found in (4.14). To this end, starting from Definition 7 we decompose
\[D_{k}(t)=\int_{\Lambda^{*}}f^{(1)}(t,k,p)a^{*}_{p-k}a_{p}\mathrm{d}p+\int_{ \Lambda^{*}}f^{(2)}(t,k,h)a^{*}_{h+k}a_{h}\mathrm{d}h \tag{4.30}\]
where \(f^{(1)}(t,k,p)=\chi^{\perp}(p,p-k)e^{it(E_{p-k}-E_{p})}\) and \(f^{(2)}(t,k,h)=\chi(h,h+k)e^{it(E_{h+k}-E_{h})}\). Clearly, \(\|f^{(1)}(t,k)\|_{\ell^{\infty}}=\|f^{(2)}(t,k)\|_{\ell^{\infty}}=1\). Hence, it follows that \(\|D_{k}(t)\Psi\|_{\mathscr{F}}\leqslant 2\|\mathcal{N}\Psi\|_{\mathscr{F}}\).
Proof of (2).: The proof is very similar; it suffices to note that the commutator can be calculated explicitly to be
\[[D_{k}(t),D_{\ell}(s)]= \int_{\Lambda^{*}}\chi^{\perp}(p,p-\ell,p-k-\ell)e^{i(s-t)E_{p-\ell }}a^{*}_{p-k-\ell}(t)a_{p}(s)\mathrm{d}p\] \[\qquad-\int_{\Lambda^{*}}\chi^{\perp}(p,p-k,p-k-\ell)e^{i(t-s)E_{p -k}}a^{*}_{p-k-\ell}(s)a_{p}(t)\mathrm{d}p\] \[\qquad+\int_{\Lambda^{*}}\chi(h,h+\ell,h+k+\ell)e^{i(s-t)E_{h+\ell }}a^{*}_{h+k+\ell}(t)a_{h}(s)\mathrm{d}h\] \[\qquad-\int_{\Lambda^{*}}\chi(h,h+k,h+k+\ell)e^{i(t-s)E_{h+k}}a^{ *}_{h+k+\ell}(s)a_{h}(t)\mathrm{d}h. \tag{4.31}\]
Hence, the same argument shows that \(\|[D_{k}(t),D_{\ell}(s)]\Psi\|_{\mathscr{F}}\leqslant 4\|\mathcal{N}\Psi\|_{ \mathscr{F}}\).
Proof of (3).: For simplicity, let us suppress the time labels and the momentum variables. In what follows, \(C>0\) is a constant whose value may change from line to line. We calculate, using the previous results and the commutation relation \([\mathcal{N},D]=0\),
\[\|[D,DD]\Psi\|_{\mathscr{F}} \leqslant\|D[D,D]\Psi\|_{\mathscr{F}}+\|[D,D]D\Psi\|_{\mathscr{F}}\] \[\leqslant C\|\mathcal{N}[D,D]\Psi\|_{\mathscr{F}}+C\|\mathcal{N}D \Psi\|_{\mathscr{F}}\] \[=C\|[D,D]\mathcal{N}\Psi\|_{\mathscr{F}}+C\|D\mathcal{N}\Psi\|_{ \mathscr{F}}\] \[\leqslant C\|\mathcal{N}^{2}\Psi\|_{\mathscr{F}}. \tag{4.32}\]
This finishes the proof.
Sketch of Proof of Lemma 4.6.: Let us fix \(\Psi\in\mathscr{F}\), \(k,\ell,q\in\mathrm{supp}\hat{V}\), and \(t,s,r\in\mathbb{R}\).
Let us give the main ideas behind the proof. Recall that \(\mathrm{supp}\hat{V}\) is contained in a ball of radius \(r>0\). For \(n\in\mathbb{N}\), define the Fermi surfaces
\[\mathcal{S}(n)\equiv\{p\in\Lambda^{*}\ :\ p_{F}-nr\leqslant|p|\leqslant p_{F}+nr\}, \tag{4.33}\]
and the number operators \(\mathcal{N}_{\mathcal{S}(n)}\equiv\int_{\mathcal{S}(n)}a^{*}_{p}a_{p}\mathrm{ d}p\). In particular, we are denoting \(\mathcal{S}=\mathcal{S}(3)\) in (1.3). Given \(k,\ell\in\mathrm{supp}\hat{V}\), consider operators of the form
\[\beta_{k}\equiv\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}(1)}(p)\,a_{p+k}a_{p} \,\mathrm{d}p\,\qquad\text{and}\qquad\mathcal{D}_{\ell}\equiv\int_{\Lambda^{*}}a^{*}_{p+ \ell}a_{p}\mathrm{d}p. \tag{4.34}\]
One should think generically of \(\beta_{k}\) as \(b_{k}(t)\) and \(\mathcal{D}_{\ell}\) as \(D_{\ell}(s)\). We make the following two observations. First, \(\beta_{k}\) can be controlled by \(\mathcal{N}_{\mathcal{S}(1)}\) in the following sense
\[\|\beta_{k}\Psi\|_{\mathscr{F}} \leqslant|\Lambda|^{\frac{1}{2}}\int_{\Lambda^{*}}\mathds{1}_{ \mathcal{S}(1)}(p)\|a_{p}\Psi\|_{\mathscr{F}}\,\mathrm{d}p\] \[\leqslant|\Lambda|^{\frac{1}{2}}\Big{(}\int_{\Lambda^{*}} \mathds{1}_{\mathcal{S}(1)}(p)\mathrm{d}p\Big{)}^{\frac{1}{2}}\Big{(}\int_{ \Lambda^{*}}\mathds{1}_{\mathcal{S}(1)}(p)\|a_{p}\Psi\|_{\mathscr{F}}^{2}\, \mathrm{d}p\Big{)}^{\frac{1}{2}}\] \[\lesssim|\Lambda|^{\frac{1}{2}}p_{F}^{\frac{d-1}{2}}\|\mathcal{N }_{\mathcal{S}(1)}^{\frac{1}{2}}\Psi\|_{\mathscr{F}}=R^{\frac{1}{2}}\| \mathcal{N}_{\mathcal{S}(1)}^{\frac{1}{2}}\Psi\|_{\mathscr{F}}\, \tag{4.35}\]
where we used a basic geometric estimate to find that \(\int_{\Lambda*}\mathds{1}_{\mathcal{S}(1)}(p)\mathrm{d}p\lesssim p_{F}^{d-1}.\) Secondly, the commutator between \(\beta_{k}\) and \(\mathcal{D}_{\ell}\) can be calculated to be
\[[\beta_{k},\mathcal{D}_{\ell}]=\int_{\Lambda*}\mathds{1}_{\mathcal{S}(1)}(p- \ell)a_{p+k-\ell}a_{p}\mathrm{d}p+\int_{\Lambda*}\mathds{1}_{\mathcal{S}(1)}(p )a_{p+k-\ell}a_{p}\mathrm{d}p. \tag{4.36}\]
Since both \(k,\ell\in\mathrm{supp}\hat{V}\), it holds that \(\mathds{1}_{\mathcal{S}(1)}(p-\ell)\leqslant\mathds{1}_{\mathcal{S}(2)}(p)\), and of course \(\mathds{1}_{\mathcal{S}(1)}(p)\leqslant\mathds{1}_{\mathcal{S}(2)}(p)\). Consequently, the same argument that we used to obtain (4.35) can now be repeated on each term of the above equation to obtain
\[\|[\beta_{k},\mathcal{D}_{\ell}]\Psi\|_{\mathscr{F}}\lesssim R^{\frac{1}{2}} \|\mathcal{N}_{\mathcal{S}(2)}^{\frac{1}{2}}\Psi\|_{\mathscr{F}}. \tag{4.37}\]
The same argument can be repeated for the next commutator with \(\mathcal{D}_{q}\), provided one enlarges the Fermi surface from \(\mathcal{S}(2)\) to \(\mathcal{S}(3)\). In other words, it holds that
\[\|[[\beta_{k},\mathcal{D}_{\ell}],\mathcal{D}_{q}]\Psi\|_{\mathscr{F}} \lesssim R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}(3)}^{\frac{1}{2}}\Psi\|_{ \mathscr{F}}. \tag{4.38}\]
The above motivation contains the main ideas for the proof of the lemma. One merely has to include additional bounded coefficients in the definition of \(\beta_{k}\) and \(\mathcal{D}_{\ell}\) to account for the dependence on \(t\in\mathbb{R}\) and \(k\in\Lambda^{*}\) coming from \(b_{k}(t)\) and \(D_{\ell}(s)\). We leave the details to the reader.
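The geometric estimate \(\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}(1)}(p)\mathrm{d}p\lesssim p_{F}^{d-1}\) invoked in (4.35) is easy to test numerically. The sketch below (ours; it counts points of \(\mathbb{Z}^{3}\), a convenient stand-in for \(\Lambda^{*}\)) confirms that the number of lattice points in the shell \(p_{F}-r\leqslant|p|\leqslant p_{F}+r\) grows like \(p_{F}^{2}\) for \(d=3\), with the ratio approaching the continuum value \(8\pi r\approx 25.1\) for \(r=1\).

```python
import numpy as np

# Count lattice points of Z^3 in the shell p_F - r <= |p| <= p_F + r
# and compare with the expected p_F^(d-1) growth (d = 3).
r = 1.0
for pF in (10.0, 20.0, 40.0, 80.0):
    L = int(pF + r)
    g = np.arange(-L, L + 1, dtype=np.float64)
    # |p| on the grid, built by broadcasting to keep memory modest
    norm = np.sqrt(g[:, None, None]**2 + g[None, :, None]**2 + g[None, None, :]**2)
    count = int(np.count_nonzero((norm >= pF - r) & (norm <= pF + r)))
    print(f"p_F = {pF:4.0f}   #S(1) = {count:8d}   #S(1)/p_F^2 = {count / pF**2:.2f}")
```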
Sketch of Proof of Lemma 4.7.: Let us fix \(m>0\), \(k,\ell,q\in\mathrm{supp}\hat{V}\), \(t,s,r\in\mathbb{R}\) and \(\varphi\in\ell_{m}^{1}(\Lambda^{*})\). Starting from Eq. (4.9) we easily estimate that
\[\|b_{k}(t,\varphi)\|_{B(\mathscr{F})}\leqslant 2|\Lambda|\int_{\Lambda*} \mathds{1}_{\mathcal{S}}(p)|\varphi(p)|\mathrm{d}p. \tag{4.39}\]
It suffices then to note that \(\int_{\mathcal{S}}|\varphi(p)|\mathrm{d}p\lesssim p_{F}^{-m}\|\varphi\|_{ \ell_{m}^{1}}\). For the next estimate, the same analysis can be carried out, starting from the commutator identity found in Eq. (4.10). For the last estimate, one has to calculate the corresponding commutators and bound each term in the same way.
Sketch of Proof of Lemma 4.8.: Let us fix \(k\in\Lambda^{*}\) and \(\varphi\in\ell^{1}\). Starting from Eq. (4.8) we use \(0\leqslant\chi,\chi^{\perp}\leqslant 1\) and \(\|a_{p}(t)\|_{B(\mathscr{F})}=\|a_{p}^{*}(t)\|_{B(\mathscr{F})}\leqslant| \Lambda|^{\frac{1}{2}}\) to find
\[\|D_{k}(t,\varphi)\|_{B(\mathscr{F})}\leqslant 4|\Lambda|\int_{\Lambda*}| \varphi(p)|\mathrm{d}p. \tag{4.40}\]
A similar inequality can be found upon calculation of the commutator \([D_{k}(t),D_{\ell}(s,\varphi)]\). This finishes the proof.
## 5. Tool Box II: Excitation Estimates
In Section 3 we introduced the two following observables:
\[\mathcal{N}=\int_{\Lambda*}a_{p}^{*}a_{p}\mathrm{d}p\qquad\text{ and }\qquad \mathcal{N}_{\mathcal{S}}=\int_{\mathcal{S}}a_{p}^{*}a_{p}\mathrm{d}p \tag{5.1}\]
corresponding to the number operator and the surface-localized number operator, respectively. The main purpose of this section is to prove estimates that control the growth-in-time of the expectation of \(\mathcal{N}\) and \(\mathcal{N}_{\mathcal{S}}\) with respect to the interaction dynamics
\((\nu_{t})_{t\in\mathbb{R}}\), defined in (3.14). These estimates are precisely stated in the following two propositions, which we prove in the remainder of this section.
**Proposition 5.1**.: _Let \((\nu_{t})_{t\in\mathbb{R}}\) solve the interaction dynamics defined in (3.14), with initial data \(\nu_{0}\equiv\nu\) satisfying Condition 1. Assume that \(n=\nu(\mathcal{N})\geqslant 1\). Then, for all \(\ell\in\mathbb{N}\) there exists a constant \(C>0\) such that_
\[\nu_{t}(\mathcal{N}^{\ell})\leqslant Cn^{\ell}\exp(C\lambda Rt)\,\qquad\forall t \geqslant 0. \tag{5.2}\]
**Proposition 5.2**.: _Let \((\nu_{t})_{t\in\mathbb{R}}\) solve the interaction dynamics defined in (3.14), with initial data \(\nu_{0}\equiv\nu\) satisfying Condition 1. Further, assume that \(n=\nu(\mathcal{N})\lesssim R^{1/2}\). Then, there exists a constant \(C>0\) such that_
\[\nu_{t}(\mathcal{N}_{\mathcal{S}})\leqslant C(\lambda R\left\langle t\right\rangle )^{2}\exp(C\lambda Rt)\,\qquad\forall t\geqslant 0\, \tag{5.3}\]
_where \(\left\langle t\right\rangle=(1+t^{2})^{\frac{1}{2}}\)._
The idea behind the proof of our estimates relies on a standard Gronwall argument, in which we bound expectations of the commutators \([\mathcal{N},\mathfrak{h}_{I}(t)]\) and \([\mathcal{N}_{\mathcal{S}},\mathfrak{h}_{I}(t)]\) in terms of combinations of expectations of \(\mathcal{N}\) and \(\mathcal{N}_{\mathcal{S}}\). This proof relies heavily on the fact that the interaction Hamiltonian decomposes into three parts, corresponding to fermion-fermion, fermion-boson and boson-boson interactions. Namely, there holds
\[\mathfrak{h}_{I}(t)\ =\ \lambda\left(V_{F}(t)+V_{FB}(t)+V_{B}(t)\right)\,, \qquad\forall t\geqslant 0. \tag{5.4}\]
Here, time-dependence corresponds to the Heisenberg evolution associated to the solvable Hamiltonian \(\mathfrak{h}_{0}\); see Eq. (3.19). In particular, using the formulae (3.8), (3.9) and (3.10) for \(V_{F}\), \(V_{FB}\) and \(V_{B}\), respectively, we may write that for all \(t\in\mathbb{R}\)
\[V_{F}(t) =\frac{1}{2}\int_{\Lambda^{*}}\hat{V}(k)D_{k}^{*}(t)D_{k}(t)\ \mathrm{d}k \tag{5.5}\] \[V_{FB}(t) =\int_{\Lambda^{*}}\hat{V}(k)D_{k}^{*}(t)\big{[}b_{k}(t)+b_{-k}^{ *}(t)\big{]}\mathrm{d}k \tag{5.6}\] \[V_{B}(t) =\int_{\Lambda^{*}}\hat{V}(k)\big{[}b_{k}^{*}(t)b_{k}(t)+\frac{1} {2}\,b_{k}^{*}(t)b_{-k}^{*}(t)+\frac{1}{2}\,b_{-k}(t)b_{k}(t)\big{]}\mathrm{d}k \tag{5.7}\]
where \(b_{k}(t)\) and \(D_{k}(t)\) correspond to the Heisenberg evolution of the \(b\)- and \(D\)-operators, respectively, as given in Definition 7.
### Number Operator Estimates
The main purpose of this section is to prove Proposition 5.1. The first step in this direction is to prove appropriate commutator estimates between \(\mathcal{N}\) and the generator of the interaction dynamics, \(\mathfrak{h}_{I}(t)\). The commutator estimates that we prove are contained in the upcoming lemma. We recall that \(R=|\Lambda|p_{F}^{d-1}\).
**Lemma 5.1** (Commutator Estimates for \(\mathcal{N}\)).: _For all \(\ell\geqslant 1\) there exists a constant \(C=C(\ell)>0\) such that:_
1. _For all_ \(\Psi\in\mathscr{F}\) _and_ \(t\geqslant 0\) _there holds_ \[\left\langle\Psi,[\mathcal{N}^{\ell},V_{F}(t)]\Psi\right\rangle_{\mathscr{F}}=0\]
2. _For all_ \(\Psi\in\mathscr{F}\) _and_ \(t\geqslant 0\) _there holds_ \[|\left\langle\Psi,[\mathcal{N}^{\ell},V_{FB}(t)]\Psi\right\rangle_{\mathscr{F}} \,|\leqslant CR\left\langle\Psi,(\mathcal{N}^{\ell}+\mathds{1})\Psi\right\rangle_ {\mathscr{F}}\]
3. _For all_ \(\Psi\in\mathscr{F}\) _and_ \(t\geqslant 0\) _there holds_ \[|\left\langle\Psi,[\mathcal{N}^{\ell},V_{B}(t)]\Psi\right\rangle_{\mathscr{F}} \,|\leqslant CR\left\langle\Psi,(\mathcal{N}^{\ell}+\mathds{1})\Psi\right\rangle _{\mathscr{F}}\]
_Remark 5.1_.: We recall that every state \(\nu:B(\mathscr{F})\to\mathbb{C}\) is a convex combination of pure states. Namely, there exist sequences \((\lambda_{n})_{n=0}^{\infty}\subset(0,\infty)\) and \((\Psi_{n})_{n=0}^{\infty}\subset\mathscr{F}\) satisfying the normalization conditions \(\sum_{n=0}^{\infty}\lambda_{n}=1\) and \(\|\Psi_{n}\|_{\mathscr{F}}=1\), respectively, such that the following decomposition holds true
\[\nu(\mathcal{O})=\sum_{n=0}^{\infty}\lambda_{n}\left\langle\Psi_{n}, \mathcal{O}\Psi_{n}\right\rangle_{\mathscr{F}}\,\qquad\forall\mathcal{O}\in B(\mathscr{F}). \tag{5.8}\]
In particular, the estimates contained in Lemma 5.1 can be easily converted into estimates for mixed states. For instance, if \(\mathcal{O}_{1},\mathcal{O}_{2},\mathcal{O}_{3}\) are operators such that
\[|\left\langle\Psi,\mathcal{O}_{1}\Psi\right\rangle_{\mathscr{F}}\,|\leqslant C \|\mathcal{O}_{2}\Psi\|_{\mathscr{F}}\|\mathcal{O}_{3}\Psi\|_{\mathscr{F}}\,\qquad \forall\Psi\in\mathscr{F} \tag{5.9}\]
for a constant \(C>0\), then it follows from the above decomposition of \(\nu\) and the Cauchy-Schwarz inequality that
\[|\nu(\mathcal{O}_{1})|\leqslant C\nu(\mathcal{O}_{2}^{*}\mathcal{O}_{2})^{ \frac{1}{2}}\,\nu(\mathcal{O}_{3}^{*}\mathcal{O}_{3})^{\frac{1}{2}}. \tag{5.10}\]
In most applications, \(\mathcal{O}_{2}\) and \(\mathcal{O}_{3}\) shall correspond to either \(\mathcal{N}\) or \(\mathcal{N}_{\mathcal{S}}\).
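For completeness, here is the two-line computation behind (5.10), spelled out by us: using the decomposition (5.8), the hypothesis (5.9) and the Cauchy-Schwarz inequality for the weights \((\lambda_{n})\),

\[|\nu(\mathcal{O}_{1})|\leqslant\sum_{n}\lambda_{n}|\left\langle\Psi_{n},\mathcal{O}_{1}\Psi_{n}\right\rangle_{\mathscr{F}}|\leqslant C\sum_{n}\lambda_{n}\|\mathcal{O}_{2}\Psi_{n}\|_{\mathscr{F}}\|\mathcal{O}_{3}\Psi_{n}\|_{\mathscr{F}}\leqslant C\Big{(}\sum_{n}\lambda_{n}\|\mathcal{O}_{2}\Psi_{n}\|_{\mathscr{F}}^{2}\Big{)}^{\frac{1}{2}}\Big{(}\sum_{n}\lambda_{n}\|\mathcal{O}_{3}\Psi_{n}\|_{\mathscr{F}}^{2}\Big{)}^{\frac{1}{2}}\,\]

and it remains to observe that \(\sum_{n}\lambda_{n}\|\mathcal{O}_{i}\Psi_{n}\|_{\mathscr{F}}^{2}=\nu(\mathcal{O}_{i}^{*}\mathcal{O}_{i})\) for \(i=2,3\).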
Let us briefly postpone the proof of the above Lemma to the next subsubsection. First, we turn to the proof of the important Proposition 5.1.
Proof of Proposition 5.1.: The decomposition for \(\mathfrak{h}_{I}(t)\) from (5.4) combined with the commutator estimates from Lemma 5.1 imply that for all \(\ell\geqslant 1\) there exists \(C=C(\ell)>0\) such that
\[\partial_{t}\nu_{t}(\mathcal{N}^{\ell}+\mathds{1})=\nu_{t}\big{(}i[\mathfrak{ h}_{I}(t),\mathcal{N}^{\ell}]\big{)}\leqslant C\lambda R\nu_{t}(\mathcal{N}^{ \ell}+\mathds{1}),\qquad\forall t\geqslant 0. \tag{5.11}\]
Gronwall's inequality now easily implies that there exists a constant \(C>0\) such that
\[\nu_{t}(\mathcal{N}^{\ell})\,\leqslant\,\nu_{0}(\mathcal{N}^{\ell}+\mathds{1} )\,e^{C\lambda Rt}\,\qquad\forall t\geqslant 0. \tag{5.12}\]
To finalize the proof, we use the fact that for quasi-free states it holds true that \(\nu(\mathcal{N}^{\ell})\lesssim\nu(\mathcal{N})^{\ell}\), together with the assumption \(\nu(\mathcal{N})=n\geqslant 1\).
#### 5.1.1. Commutator Estimates for \(\mathcal{N}\)
Proof of Lemma 5.1.: Throughout this proof, \(\Psi\in\mathscr{F}\) denotes an element in \(\cap_{k=1}^{\infty}D(\mathcal{N}^{k})\), which will justify all of the upcoming calculations. Let us now fix \(\ell\in\mathbb{N}\).
Proof of (1).: This is an immediate consequence of the fact that \([D_{k}(t),\mathcal{N}]=0\) for all \(k\in\Lambda^{*}\) and \(t\in\mathbb{R}\) - see Lemma 4.3.
Proof of (2).: Using the fact that \(D_{k}^{*}(t)=D_{-k}(t)\) and \([D_{k}^{*}(t),b_{k}(t)]=0\) we may re-write the fermion-boson interaction term as
\[V_{FB}(t)=\int_{\Lambda^{*}}\hat{V}(k)D_{k}^{*}(t)b_{k}(t)\mathrm{d}k\ +\ \mathrm{h.c}. \tag{5.13}\]
Thus, we find that for all \(t\in\mathbb{R}\)
\[\big{\langle}\Psi,[\mathcal{N}^{\ell},V_{FB}(t)]\Psi\big{\rangle}=2i\, \mathrm{Im}\int_{\Lambda^{*}}\hat{V}(k)\big{\langle}\Psi,[\mathcal{N}^{\ell},D_{k}^{*}(t)b _{k}(t)]\Psi\big{\rangle}\,\mathrm{d}k. \tag{5.14}\]
In view of Lemma 4.3, we see that \([D_{k}^{*}(t),\mathcal{N}^{\ell}]=0\). Further, using the pull-through formulae for \(b\)-operators with \(f(x)=x^{\ell}\) we find the following useful identity
\[[\mathcal{N}^{\ell},b_{k}(t)]=\sum_{n=0}^{\ell-1}\binom{\ell}{n}(-2)^{\ell-n} \mathcal{N}^{n}b_{k}(t)\,\qquad\forall k\in\Lambda^{*},\ t\in\mathbb{R}. \tag{5.15}\]
Consequently, we can estimate that
\[|\,\big{\langle}\Psi,[\mathcal{N}^{\ell},V_{FB}]\Psi\big{\rangle}\,| \leqslant\sum_{n=0}^{\ell-1}\binom{\ell}{n}2^{\ell-n}\int_{ \Lambda^{*}}|\hat{V}(k)|\ |\,\big{\langle}\Psi,D_{k}^{*}(t)\mathcal{N}^{n}b_{k}(t)\Psi \big{\rangle}\,|\mathrm{d}k \tag{5.16}\] \[\leqslant\sum_{n=0}^{\ell-1}\binom{\ell}{n}2^{\ell-n}\int_{ \Lambda^{*}}|\hat{V}(k)|\ \|\mathcal{N}^{\frac{n-1}{2}}D_{k}(t)\Psi\|\,\|\mathcal{N}^{\frac{n+1}{2}}b_{ k}(t)\Psi\|\mathrm{d}k\.\]
We can now combine Lemma 4.3, the Type-I estimate (4.17) and the norm bound (4.16) to find that there exists a constant \(C>0\) such that
\[\|\mathcal{N}^{\frac{n-1}{2}}D_{k}(t)\Psi\|\,\|\mathcal{N}^{\frac{n+1}{2}}b_{ k}(t)\Psi\|\leqslant CR\|\mathcal{N}^{\frac{n+1}{2}}\Psi\|^{2}\,\qquad\forall n\geqslant 0. \tag{5.17}\]
Finally, we put the two above estimates together and use the elementary fact \(\mathcal{N}^{\frac{n+1}{2}}\lesssim\mathcal{N}^{\ell}+1\) (valid for \(n\leqslant\ell-1\)) to find that for some \(C>0\) there holds
\[|\,\big{\langle}\Psi,[\mathcal{N}^{\ell},V_{FB}]\Psi\big{\rangle}\,|\leqslant CR \,\|\hat{V}\|_{\ell^{1}}\,\big{\langle}\Psi,(\mathcal{N}^{\ell}+\mathds{1}) \Psi\big{\rangle}\,\qquad\forall t\geqslant 0 \tag{5.18}\]
which gives the desired estimate.
_Proof of (3)._ First, we note that \([\mathcal{N},b_{k}^{*}(t)b_{k}(t)]=0\) for all \(t\in\mathbb{R}\) and \(k\in\Lambda^{*}\). Hence, we can readily check that
\[\big{\langle}\Psi,[\mathcal{N}^{\ell},V_{B}(t)]\Psi\big{\rangle}=i\,\mathrm{Im} \int_{\Lambda^{*}}\hat{V}(k)\,\big{\langle}\Psi,[\mathcal{N}^{\ell},b_{k}(t)b_{-k}(t)]\Psi \big{\rangle}\,\mathrm{d}k\qquad\forall t\in\mathbb{R}. \tag{5.19}\]
In view of the commutation relation \(\mathcal{N}b_{k}(t)b_{-k}(t)=b_{k}(t)b_{-k}(t)(\mathcal{N}-4)\) we can calculate using the pull-through formula for \(f(x)=x^{\ell}\) that
\[[\mathcal{N}^{\ell},b_{k}(t)b_{-k}(t)]=\sum_{n=0}^{\ell-1}\binom{\ell}{n}4^{ \ell-n}(\mathcal{N}+4)^{\frac{n-1}{2}}b_{k}(t)b_{-k}(t)\mathcal{N}^{\frac{n+1} {2}}. \tag{5.20}\]
Consequently, putting the last two displayed equations together one finds that for all \(t\in\mathbb{R}\)
\[|\,\big{\langle}\Psi,[\mathcal{N}^{\ell},V_{B}(t)]\Psi\big{\rangle}\,|\, \leqslant\sum_{n=0}^{\ell-1}\binom{\ell}{n}4^{\ell-n}\int_{\Lambda^{*}}|\hat{V }(k)|\|(\mathcal{N}+4)^{\frac{n+1}{2}}\Psi\|\ \|b_{k}(t)b_{-k}(t)\mathcal{N}^{\frac{n-1}{2}}\Psi\| \mathrm{d}k. \tag{5.21}\]
We estimate the right hand side as follows. First, we note that \(\|(\mathcal{N}+4)^{\frac{n+1}{2}}\Psi\|\leqslant C(\ell)\|(\mathcal{N}+1)^{\ell/2} \Psi\|\) for all \(0\leqslant n\leqslant\ell-1\). Secondly, we use the Type-II estimate (4.20) and the commutation relation (4.12) for \(f\equiv 1\) to find that
\[\|b_{k}(t)b_{-k}(t)\mathcal{N}^{\frac{n-1}{2}}\Psi\| \lesssim\ R^{\frac{1}{2}}\,\|(\mathcal{N}+2)^{\frac{1}{2}}b_{-k}(t) \mathcal{N}^{\frac{n-1}{2}}\Psi\|\] \[=\ R^{\frac{1}{2}}\,\|b_{-k}(t)\mathcal{N}^{\frac{1}{2}}\mathcal{ N}^{\frac{n-1}{2}}\Psi\|\] \[\lesssim\ R\,\|\mathcal{N}^{\frac{1}{2}}\mathcal{N}^{\frac{1}{2}} \mathcal{N}^{\frac{n-1}{2}}\Psi\|\] \[\lesssim\ R\,\|(\mathcal{N}+1)^{\frac{\ell}{2}}\Psi\| \tag{5.22}\]
where again we used the fact that \(n\leqslant\ell-1\). The proof of the Lemma is easily finished after we put together the last two displayed estimates.
### Surface-localized Number Operator Estimates
The main purpose of this section is proving Proposition 5.2. In order to control the time evolution of \(\mathcal{N}_{\mathcal{S}}\) with respect to \(\mathfrak{h}_{I}(t)\), we establish the following commutator estimates. Recall that \(R=|\Lambda|p_{F}^{d-1}\).
**Lemma 5.2**.: _There exists a constant \(C>0\) such that the following estimates hold true_
1. _For all_ \(\Psi\in\mathscr{F}\)__ \[|\,\langle\Psi,[\mathcal{N}_{\mathcal{S}},V_{F}(t)]\Psi\rangle_{\mathscr{F}} \,|\leqslant C\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|_{\mathscr{F}}\| \mathcal{N}^{3/2}\Psi\|_{\mathscr{F}}\.\] (5.23)
2. _For all_ \(\Psi\in\mathscr{F}\)__ \[|\,\langle\Psi,[\mathcal{N}_{\mathcal{S}},V_{FB}(t)]\Psi\rangle_{\mathscr{F}} \,|\leqslant CR^{1/2}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|_{\mathscr{F}}\| \mathcal{N}\Psi\|_{\mathscr{F}}\.\]
3. _For all_ \(\Psi\in\mathscr{F}\)__ \[|\,\langle\Psi,[\mathcal{N}_{\mathcal{S}},V_{B}(t)]\Psi\rangle_{\mathscr{F}} \,|\leqslant CR\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|_{\mathscr{F}}^{2}+CR\| \mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|_{\mathscr{F}}\|\Psi\|_{\mathscr{F}}\.\]
We shall defer the proof of Lemma 5.2 to the next subsubsection. Now we turn our attention to the proof of Proposition 5.2.
Proof of Proposition 5.2.: Throughout the proof, \(C>0\) is a constant whose value may change from line to line. First, in view of the decomposition of \(\mathfrak{h}_{I}(t)\) given in (5.4), Lemma 5.2 and Remark 5.1, there holds for all \(t\in\mathbb{R}\)
\[\frac{d}{dt}\nu_{t}(\mathcal{N}_{\mathcal{S}})=\nu_{t}(i[\mathfrak{ h}_{I}(t),\mathcal{N}_{\mathcal{S}}]) \leqslant C\lambda[\nu_{t}(\mathcal{N}_{\mathcal{S}})]^{\frac{1}{2}}[ \nu_{t}(\mathcal{N}^{3})]^{\frac{1}{2}}\] \[+C\lambda R^{\frac{1}{2}}[\nu_{t}(\mathcal{N}_{\mathcal{S}})]^{ \frac{1}{2}}[\nu_{t}(\mathcal{N}^{2})]^{\frac{1}{2}}\] \[+C\lambda R[\nu_{t}(\mathcal{N}_{\mathcal{S}})]\] \[+C\lambda R[\nu_{t}(\mathcal{N}_{\mathcal{S}})]^{\frac{1}{2}}[\nu_ {t}(\mathds{1})]^{\frac{1}{2}}. \tag{5.24}\]
Thus, we divide\({}^{3}\) by \(\nu_{t}(\mathcal{N}_{\mathcal{S}})^{1/2}\) to find, thanks to Proposition 5.1, that
Footnote 3: Technically, one should introduce a regularization \(u_{\delta}(t)=(\delta+\nu_{t}(\mathcal{N}_{\mathcal{S}}))^{1/2}\) in order to avoid possible singularities whenever \(\nu_{t}(\mathcal{N}_{\mathcal{S}})=0\). One should then close the estimates after taking the limit \(\delta\downarrow 0\). We leave the details to the reader.
\[\frac{d}{dt}\nu_{t}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}} \leqslant C\lambda R\nu_{t}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{ 2}}+C\lambda R\Big{(}\nu_{t}(\mathcal{N}^{3})^{\frac{1}{2}}/R+\nu_{t}( \mathcal{N}^{2})^{\frac{1}{2}}/R^{\frac{1}{2}}+1\Big{)}\] \[\leqslant C\lambda R\nu_{t}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{ 2}}+C\lambda R\exp(\lambda Rt)\Big{(}n^{\frac{3}{2}}/R+n/R^{\frac{1}{2}}+1 \Big{)}. \tag{5.25}\]
The Gronwall inequality now implies that for all \(t\geqslant 0\)
\[\nu_{t}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}\leqslant C\exp(C\lambda Rt) \Big{(}\nu_{0}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+\lambda Rt\left(n^{ \frac{3}{2}}/R+n/R^{\frac{1}{2}}+1\right)\Big{)}. \tag{5.26}\]
Finally, we notice that in view of Condition 1 we have \(\nu_{0}(\mathcal{N}_{\mathcal{S}})\lesssim(\lambda R)^{2}\). The proof is then finished once we simplify the right hand side using the bound \(n\lesssim R^{1/2}\) and square both sides of the inequality.
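Let us also record, in our own words, the integrating-factor step leading from (5.25) to (5.26). Writing \(u(t)\equiv\nu_{t}(\mathcal{N}_{\mathcal{S}})^{1/2}\), \(a\equiv C\lambda R\) and \(K\equiv n^{3/2}/R+n/R^{1/2}+1\), the inequality (5.25) reads \(u^{\prime}(t)\leqslant au(t)+aKe^{\lambda Rt}\), and therefore

\[\frac{d}{dt}\big{(}e^{-at}u(t)\big{)}=e^{-at}\big{(}u^{\prime}(t)-au(t)\big{)}\leqslant aKe^{(\lambda R-a)t}\leqslant aK\,\]

since \(a\geqslant\lambda R\). Integrating on \([0,t]\) gives \(u(t)\leqslant e^{at}\big{(}u(0)+aKt\big{)}\), which is (5.26).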
#### 5.2.1. Commutator Estimates for \(\mathcal{N}_{\mathcal{S}}\)
In order to prove Lemma 5.2, we shall first establish the following useful lemma. Here and in the sequel, \(\mathds{1}_{\mathcal{S}}\) denotes the characteristic function of the Fermi surface \(\mathcal{S}\).
**Lemma 5.3**.: _For all \(k\in\Lambda^{*}\) and \(g\in\ell^{\infty}\) the operator_
\[\mathcal{O}(k):=\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}}(p)g(p)a_{p+k}^{*}a_ {p}\mathrm{d}p \tag{5.27}\]
_satisfies the following estimate_
\[\big{|}\left\langle\Phi,\mathcal{O}(k)\Psi\right\rangle_{\mathscr{F}}\big{|} \leqslant\|g\|_{\ell^{\infty}}\|\mathcal{N}^{1/2}\Phi\|\|\mathcal{N}^{1/2}_{ \mathcal{S}}\Psi\|\,\qquad\forall\Phi,\,\Psi\in\mathscr{F}. \tag{5.28}\]
Proof.: Let \(\Phi,\Psi\in\mathscr{F},\,k\in\Lambda^{*}\) and \(g\in\ell^{\infty}\). Then, we calculate
\[|\left\langle\Phi,\mathcal{O}(k)\Psi\right\rangle_{\mathscr{F}}| =\bigg{|}\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}}(p)g(p)\left\langle a _{p+k}\Phi,a_{p}\Psi\right\rangle_{\mathscr{F}}\mathrm{d}p\bigg{|}\] \[\leqslant\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}}(p)|g(p)|\|a_{ p+k}\Phi\|_{\mathscr{F}}\|a_{p}\Psi\|_{\mathscr{F}}\mathrm{d}p\] \[\leqslant\|g\|_{\ell^{\infty}}\bigg{(}\int_{\Lambda^{*}}\|a_{p+k} \Phi\|_{\mathscr{F}}^{2}\mathrm{d}p\bigg{)}^{\frac{1}{2}}\bigg{(}\!\int_{ \Lambda^{*}}\mathds{1}_{\mathcal{S}}(p)\|a_{p}\Psi\|_{\mathscr{F}}^{2} \mathrm{d}p\bigg{)}^{\frac{1}{2}}\] \[=\|g\|_{\ell^{\infty}}\|\mathcal{N}^{\frac{1}{2}}\Phi\|_{\mathscr{ F}}\|\mathcal{N}^{\frac{1}{2}}_{\mathcal{S}}\Psi\|_{\mathscr{F}}. \tag{5.29}\]
In the last line we used the fact that \(\|a_{p}\Phi\|_{\mathscr{F}}^{2}=\left\langle\Phi,a_{p}^{*}a_{p}\Phi\right\rangle _{\mathscr{F}}\) for all \(p\in\Lambda^{*}\), plus a change of variables \(p\mapsto p-k\). A similar argument holds for the term containing \(\Psi\). This finishes the proof.
Proof of Lemma 5.2.: Throughout this proof, \(\Psi\in\mathscr{F}\) is fixed. In addition, in order to ease the notation, we shall drop the explicit time dependence in our estimates; since the estimates are uniform in \(t\in\mathbb{R}\), there is no risk of confusion.
Proof of (1).: Starting from (3.8) we can first calculate that
\[\langle\Psi,[\mathcal{N}_{\mathcal{S}},V_{F}]\Psi\rangle=2{\rm i}\,\,\int_{ \Lambda^{*}}\hat{V}(k){\rm Im}\,\langle\Psi,[\mathcal{N}_{\mathcal{S}},D^{*}(k )]D(k)\Psi\rangle\,{\rm d}k. \tag{5.30}\]
We now put the above commutator in an appropriate form. Using the explicit expression of \(D^{*}(k)\) in terms of creation- and annihilation- operators (see Def. 7) together with the CAR, we find that for all \(k\in\Lambda^{*}\) there holds
\[[\mathcal{N}_{\mathcal{S}},D^{*}(k)] =\int_{\Lambda^{*}}\Big{(}\mathds{1}_{\mathcal{S}}(p)-\mathds{1} _{\mathcal{S}}(p-k)\Big{)}\chi^{\perp}(p)\chi^{\perp}(p-k)a_{p}^{*}a_{p-k}\,{ \rm d}p\] \[-\int_{\Lambda^{*}}\Big{(}\mathds{1}_{\mathcal{S}}(h)-\mathds{1} _{\mathcal{S}}(h+k)\Big{)}\chi(h)\chi(h+k)a_{h}^{*}a_{h+k}\,{\rm d}h\] \[\equiv\mathcal{O}_{1}(k)+\mathcal{O}_{2}(k) \tag{5.31}\]
where we introduce the two following auxiliary operators (notice the change of variables \(p\mapsto p+k\) and \(h\mapsto h-k\) in the second operator)
\[\mathcal{O}_{1}(k) :=\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}}(p)\chi^{\perp}(p,p- k)a_{p}^{*}a_{p-k}\,{\rm d}p-\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}}(h) \chi(h,h+k)a_{h}^{*}a_{h+k}\,{\rm d}h \tag{5.32}\] \[\mathcal{O}_{2}(k) :=-\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}}(p)\chi^{\perp}(p,p+ k)a_{p+k}^{*}a_{p}\,{\rm d}p+\int_{\Lambda^{*}}\mathds{1}_{\mathcal{S}}(h) \chi(h,h-k)a_{h-k}^{*}a_{h}\,{\rm d}h \tag{5.33}\]
where for simplicity we denote \(\chi^{\perp}(p,p-k)\equiv\chi^{\perp}(p)\chi^{\perp}(p-k)\) and similarly for \(\chi(h,h+k)\). We are now able to write
\[\langle\Psi,[\mathcal{N}_{\mathcal{S}},V_{F}]\Psi\rangle =2{\rm i}\,\,\int_{\Lambda^{*}}\hat{V}(k){\rm Im}\,\langle\Psi, \mathcal{O}_{1}(k)D(k)\Psi\rangle\,{\rm d}k \tag{5.34}\] \[+2{\rm i}\,\,\int_{\Lambda^{*}}\hat{V}(k){\rm Im}\,\langle\Psi,D( k)\mathcal{O}_{2}(k)\Psi\rangle\,{\rm d}k\] (5.35) \[+2{\rm i}\,\,\int_{\Lambda^{*}}\hat{V}(k){\rm Im}\,\langle\Psi,[ \mathcal{O}_{2}(k),D(k)]\Psi\rangle\,{\rm d}k. \tag{5.36}\]
The first term in the above equation can be estimated using Lemma 5.3 for \(\mathcal{O}(k)=\mathcal{O}_{1}^{*}(k)\). Namely,
\[\big{|}2{\rm i}\,\,\int_{\Lambda^{*}}\hat{V}(k){\rm Im}\,\langle\Psi, \mathcal{O}_{1}(k)D(k)\Psi\rangle\,{\rm d}k\big{|}\leqslant 2\|\hat{V}\|_{ \ell^{1}}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\|\mathcal{N}^{3/2}\Psi\| \tag{5.37}\]
The second term in the above equation is estimated using Lemma 5.3 for \(\mathcal{O}(k)=\mathcal{O}_{2}(k)\). We get
\[\big{|}2{\rm i}\,\,\int_{\Lambda^{*}}\hat{V}(k){\rm Im}\,\langle\Psi,D(k) \mathcal{O}_{2}\Psi\rangle\,{\rm d}k\big{|}\leqslant 2\|\hat{V}\|_{\ell^{1}}\| \mathcal{N}^{3/2}\Psi\|\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\| \tag{5.38}\]
The third term in the above equation is actually zero. This comes from the fact that the commutator between \(\mathcal{O}_{2}(k)\) and \(D(k)\) is self-adjoint. More precisely, we can calculate
using the CAR
\[[\mathcal{O}_{2}(k),D(k)] =\int_{\Lambda^{*}}\Big{(}\mathds{1}_{\mathcal{S}}(p+k)-\mathds{1}_{ \mathcal{S}}(p)\Big{)}\chi^{\perp}(p,p+k)a_{p}^{*}a_{p}\mathrm{d}p \tag{5.39}\] \[-\int_{\Lambda^{*}}\Big{(}\mathds{1}_{\mathcal{S}}(h-k)-\mathds{1} _{\mathcal{S}}(h)\Big{)}\chi(h,h-k)a_{h}^{*}a_{h}\mathrm{d}h. \tag{5.40}\]
We put our results together to find that
\[\big{|}\,\langle\Psi,[\mathcal{N}_{\mathcal{S}},V_{F}]\Psi\rangle\,\big{|} \leqslant 4\|\hat{V}\|_{\ell^{1}}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\| \|\mathcal{N}^{3/2}\Psi\|. \tag{5.41}\]
Proof of (2).: Starting from (3.9) we can calculate that
\[|\,\langle\Psi,[\mathcal{N}_{\mathcal{S}},V_{FB}]\Psi\rangle\,| \leqslant 2\int_{\Lambda^{*}}|\hat{V}(k)|\,|\,\langle\Psi,[ \mathcal{N}_{\mathcal{S}},D^{*}(k)b(k)]\Psi\rangle\,|\mathrm{d}k\,\] \[\leqslant 2\int_{\Lambda^{*}}|\hat{V}(k)|\,\|[\mathcal{N}_{ \mathcal{S}},D(k)]\Psi\|\,\|b(k)\Psi\|\mathrm{d}k\] \[\quad+2\int_{\Lambda^{*}}|\hat{V}(k)|\,\|[\mathcal{N}_{\mathcal{ S}},b(k)]\Psi\|\,\|D(k)\Psi\|\mathrm{d}k. \tag{5.42}\]
Let us estimate the first term contained in the right hand side of (5.42). In view of \(D^{*}(k)=D(-k)\) and (5.31) we have that \([\mathcal{N}_{\mathcal{S}},D(k)]=\mathcal{O}_{1}(-k)+\mathcal{O}_{2}(-k)\). Each \(\mathcal{O}_{i}(k)\) can be estimated using (4.14); we conclude that \(\|[\mathcal{N}_{\mathcal{S}},D(k)]\Psi\|\lesssim\|\mathcal{N}\Psi\|\). On the other hand, we use the Type-II estimate (4.20) on \(b(k)\). We conclude that
\[\int_{\Lambda^{*}}|\hat{V}(k)|\,\|[\mathcal{N}_{\mathcal{S}},D(k)]\Psi\|\,\|b (k)\Psi\|\mathrm{d}k\ \lesssim\ R^{\frac{1}{2}}\|\mathcal{N}\Psi\|\,\|\mathcal{N}_{\mathcal{S}}^{1/2 }\Psi\|. \tag{5.43}\]
Let us now look at the second term contained in (5.42). First, we recall that for \(k\in\mathrm{supp}\hat{V}\) there holds \([\mathcal{N}_{\mathcal{S}},b(k)]=-2b(k)\), see Lemma 4.4. Consequently, using the Type-II estimate (4.20) we see that \(\|[\mathcal{N}_{\mathcal{S}},b(k)]\Psi\|\lesssim R^{1/2}\|\mathcal{N}_{ \mathcal{S}}^{1/2}\Psi\|\). On the other hand, we can use the Type-I estimate (4.17) to find \(\|D(k)\Psi\|\lesssim\|\mathcal{N}\Psi\|\). These upper bounds can be put together to find that
\[\int_{\Lambda^{*}}|\hat{V}(k)|\,\|[\mathcal{N}_{\mathcal{S}},b(k)]\Psi\|\,\|D( k)\Psi\|\mathrm{d}k\ \lesssim\ R^{1/2}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\,\|\mathcal{N}\Psi\|. \tag{5.44}\]
A direct combination of the last three displayed estimates finishes the proof of (2).
Proof of (3).: Starting from (3.10) we decompose the boson-boson interaction into a diagonal and an off-diagonal part. Namely, we write \(V_{B}=V_{1}+V_{2}\), where we set
\[V_{1}\equiv\int_{\Lambda^{*}}\hat{V}(k)b^{*}(k)b(k)\mathrm{d}k\quad\text{and} \quad V_{2}\equiv\frac{1}{2}\int_{\Lambda^{*}}\hat{V}(k)\Big{(}b(k)b(-k)+ \mathrm{h.c}\Big{)}\mathrm{d}k. \tag{5.45}\]
For \(V_{1}\) we can quickly verify that its commutator with \(\mathcal{N}_{\mathcal{S}}\) vanishes. Indeed, thanks to Lemma 4.4 we find that \([\mathcal{N}_{\mathcal{S}},b^{*}(k)b(k)]=+2b^{*}(k)b(k)-2b^{*}(k)b(k)=0\) for all \(k\in\mathrm{supp}\hat{V}\). Hence, \([\mathcal{N}_{\mathcal{S}},V_{1}]=0\) upon integrating over \(k\in\Lambda^{*}\).
For \(V_{2}\), we have the preliminary upper bound as our starting point
\[|\,\langle\Psi,[\mathcal{N}_{\mathcal{S}},V_{2}]\Psi\rangle\,|\leqslant 2\int_{ \Lambda^{*}}|\hat{V}(k)||\,\langle\Psi,b(k)b(-k)\Psi\rangle\,|\mathrm{d}k. \tag{5.46}\]
We estimate the integrand of the right hand side as follows; let us fix \(k\in\mathrm{supp}\hat{V}\). First, recalling that \([\mathcal{N}_{\mathcal{S}},b_{k}]=-2b_{k}\) (see Lemma 4.4), we find that for any measurable function \(\varphi:\mathbb{R}\to\mathbb{C}\) the following _pull-through formula_ holds true
\[\varphi(\mathcal{N}_{\mathcal{S}})b(k)=b(k)\varphi(\mathcal{N}_{\mathcal{S}}-2) \tag{5.47}\]
Thus, using \(\varphi(x)=(x+5)^{1/2}\) we find
\[|\,\langle\Psi,b(k)b(-k)\Psi\rangle\,| =|\,\langle(\mathcal{N}_{\mathcal{S}}+5)^{1/2}\Psi,b(k)b(-k)( \mathcal{N}_{\mathcal{S}}+1)^{-1/2}\Psi\rangle\,|\] \[\leqslant\|(\mathcal{N}_{\mathcal{S}}+5)^{1/2}\Psi\|\,\|b(k)b(-k )(\mathcal{N}_{\mathcal{S}}+1)^{-1/2}\Psi\|. \tag{5.48}\]
We use again the pull-through formula (5.47) and the Type-II estimate (4.20) for \(b\)-operators to find that
\[\|b(k)b(-k)(\mathcal{N}_{\mathcal{S}}+1)^{-1/2}\Psi\| \lesssim\,R^{1/2}\|\mathcal{N}_{\mathcal{S}}^{1/2}b(-k)(\mathcal{ N}_{\mathcal{S}}+1)^{-1/2}\Psi\|\] \[\leqslant\,R^{1/2}\|(\mathcal{N}_{\mathcal{S}}+2)^{1/2}b(-k)( \mathcal{N}_{\mathcal{S}}+1)^{-1/2}\Psi\|\] \[=\,R^{1/2}\|b(-k)\mathcal{N}_{\mathcal{S}}^{1/2}(\mathcal{N}_{ \mathcal{S}}+1)^{-1/2}\Psi\|\] \[\lesssim\,R\|\mathcal{N}_{\mathcal{S}}^{1/2}\mathcal{N}_{ \mathcal{S}}^{1/2}(\mathcal{N}_{\mathcal{S}}+1)^{-1/2}\Psi\|\] \[\leqslant\,R\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|. \tag{5.49}\]
On the other hand, the remaining factor in (5.48) can be bounded as \(\|(\mathcal{N}_{\mathcal{S}}+5)^{1/2}\Psi\|\lesssim\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|+\|\Psi\|\). A straightforward combination of the estimates contained in (5.46), (5.48) and (5.49) now finishes the proof.
## 6. Leading Order Terms I: Emergence of \(Q\)
In Section 3 we considered a double commutator expansion (3.20) for the momentum distribution of particles and holes, \(f_{t}(p)\). This expansion is expressed in terms of the nine quantities \(\{T_{\alpha,\beta}(t)\}\) that arise from the three different interaction potentials \(V_{F}\), \(V_{FB}\) and \(V_{B}\), respectively. The main goal of this section is to analyze the term \(T_{F,F}\). In particular, we prove that one may extract the mollified collision operator \(Q_{t}\), originally introduced in Def. 2, up to remainder terms that we can control. A precise statement is given in the following proposition. We remind the reader that \(R=|\Lambda|p_{F}^{d-1}\).
**Proposition 6.1** (Analysis of \(T_{F,F}\)).: _Let \(T_{F,F}(t,p)\) be the quantity defined in Eq. (3.21) for \(\alpha=\beta=F\), and let \(m>0\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell_{m}^{1}\) and \(t\geqslant 0\) the following inequality holds true_
\[\big{|}T_{F,F}(t,\varphi)+\,|\Lambda|\,t\,\langle\varphi,Q_{t}[f_{0}]\rangle \,\big{|}\leqslant C|\Lambda|\lambda t^{3}\|\hat{V}\|_{\ell^{1}}^{3}\|\varphi \|_{\ell^{1}}\sup_{\tau\leqslant t}\Big{(}R^{2}\nu_{\tau}(\mathcal{N}^{4})^{ \frac{1}{2}}+\nu_{\tau}(\mathcal{N}^{4})\Big{)} \tag{6.1}\]
_where \(T_{F,F}(t,\varphi)\equiv\langle\varphi,T_{F,F}(t)\rangle\) and \(Q_{t}\) is given in Def. 2._
In order to prove Proposition 6.1 we shall perform an additional expansion of \(\nu_{t}\) with respect to the interaction Hamiltonian \(\mathfrak{h}_{I}(t)\). Namely, we consider
\[T_{F,F}(t,\varphi) =\int_{0}^{t}\int_{0}^{t_{1}}\nu\big{(}[[N(\varphi),V_{F}(t_{1})],V_{F}(t_{2})]\big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}\] \[\quad-i\int_{0}^{t}\int_{0}^{t_{1}}\int_{0}^{t_{2}}\nu_{t_{3}}\big{(}[[[N(\varphi),V_{F}(t_{1})],V_{F}(t_{2})],\mathfrak{h}_{I}(t_{3})]\big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\, \tag{6.2}\]
where we recall \(N(\varphi)\equiv\int_{\Lambda^{*}}\overline{\varphi(p)}a_{p}^{*}a_{p}\, \mathrm{d}p\). We then analyze the two terms of the right hand side of (6.2) separately. Thus, we split the proof into two parts, which are contained in the following two lemmas.
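For the reader's convenience, we note that (6.2) follows from a single application of the Duhamel-type relation (3.14) for the interaction dynamics, used here in the form

\[\nu_{t_{2}}(A)=\nu(A)-i\int_{0}^{t_{2}}\nu_{t_{3}}\big{(}[A,\mathfrak{h}_{I}(t_{3})]\big{)}\mathrm{d}t_{3}\,\qquad A=[[N(\varphi),V_{F}(t_{1})],V_{F}(t_{2})]\,\]

with the sign convention that reproduces (6.2).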
**Lemma 6.1**.: _Let \(\nu:B(\mathscr{F})\to\mathbb{C}\) be an initial state satisfying Condition 1, and let \(f_{0}(p)=|\Lambda|^{-1}\nu(a_{p}^{*}a_{p})\) for all \(p\in\Lambda^{*}\). Let \(V_{F}(t)\) be the Heisenberg evolution of the fermion-fermion interaction, defined in (3.19) for \(\alpha=F\). Then, for all \(\varphi\in\ell^{1}\) and \(t\geqslant 0\)_
\[\int_{0}^{t}\int_{0}^{t_{1}}\nu\big{(}[[N(\varphi),V_{F}(t_{1})],V_{F}(t_{2})] \big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}=-t|\Lambda|\left\langle\varphi,Q_{t}[f_{ 0}]\right\rangle. \tag{6.3}\]
The proof of the identity contained in Lemma 6.1 will be heavily inspired by the work of Erdos, Salmhofer and Yau [21], on a heuristic derivation of the quantum Boltzmann equation. In fact, we shall make use of some of their algebraic relations.
**Lemma 6.2**.: _Let \((\nu_{t})_{t\in\mathbb{R}}\) be the interaction dynamics as given in Def. 5, with initial data \(\nu=\nu_{0}\) satisfying Condition 1. Let \(V_{F}(t)\) be the Heisenberg evolution of the fermion-fermion interaction, defined in (3.19) for \(\alpha=F\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell^{1}\) and \(t\geqslant 0\)_
\[\bigg{|}\int_{0}^{t}\int_{0}^{t_{1}}\int_{0}^{t_{2}}\nu_{t_{3}}\big{(}[[[N(\varphi),V_{F}(t_{1})],V_{F}(t_{2})],\mathfrak{h}_{I}(t_{3})]\big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\bigg{|}\] \[\leqslant C\lambda t^{3}\|\hat{V}\|_{\ell^{1}}^{3}|\Lambda|\,\|\varphi\|_{\ell^{1}}\sup_{\tau\leqslant t}\left(\nu_{\tau}(\mathcal{N}^{4})+R^{2}\nu_{\tau}(\mathcal{N}^{4})^{\frac{1}{2}}\right)\,. \tag{6.4}\]
We remind the reader that the interaction Hamiltonian \(\mathfrak{h}_{I}(t)\) admits the decomposition given in (5.4) in terms of the Heisenberg evolution of \(b\)- and \(D\)-operators; see (5.5), (5.6) and (5.7).
Proof of Proposition 6.1.: It suffices to put together Eq. (6.2) and Lemmas 6.1 and 6.2.
We dedicate the rest of this section to the proof of Lemmas 6.1 and 6.2.
### Proof of Lemma 6.1
Before we jump into the proof of Lemma 6.1, we shall re-write the fermion-fermion interaction term \(V_{F}(t)\) in a form that will be suitable for our analysis. This representation is recorded in Lemma 6.3, which we study in the next subsubsection.
#### 6.1.1. Normal ordering of \(V_{F}(t)\)
Let us fix the time label \(t\in\mathbb{R}\). First, we see from (5.5) that \(V_{F}(t)=\int_{\Lambda^{*}}\hat{V}(k)D_{k}^{*}(t)D_{k}(t)\mathrm{d}k\) can be written in terms of the Heisenberg evolution of the \(D\)-operators, as given in Def. 7. These can be written explicitly in terms of creation and annihilation operators in the following way
\[D_{k}(t)=\int_{(\Lambda^{*})^{2}}d_{t}(k,p,q)a_{p}^{*}a_{q}\mathrm{d}p\mathrm{d}q \tag{6.5}\]
where the coefficients in the above expression are given as follows
\[d_{t}(k,p,q)\,\equiv\,e^{it(E_{p}-E_{q})}\big{[}\chi^{\perp}(p)\chi^{\perp}(q)\delta(p-q+k)-\chi(p)\chi(q)\delta(p-q-k)\big{]} \tag{6.6}\]
for all \(k,p,q\in\Lambda^{*}\). Since \(D_{k}^{*}(t)=D_{-k}(t)\) it readily follows that we can write the fermion-fermion interaction in the following form
\[V_{F}(t)=\int_{\Lambda^{*4}}\bigg{[}\int_{\Lambda^{*}}\hat{V}(k)d_{t}(-k,p_{1},q_{1})\,d_{t}(k,p_{2},q_{2})\mathrm{d}k\bigg{]}a_{p_{1}}^{*}a_{q_{1}}a_{p_{2}}^{*}a_{q_{2}}\mathrm{d}p_{1}\mathrm{d}p_{2}\mathrm{d}q_{1}\mathrm{d}q_{2}. \tag{6.7}\]
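As a consistency check, the identity \(D_{k}^{*}(t)=D_{-k}(t)\) invoked above can be verified directly at the level of the coefficients (6.6): since the indicators and \(\delta\)-functions are real,

\[\overline{d_{t}(k,q,p)}=e^{it(E_{p}-E_{q})}\big{[}\chi^{\perp}(p)\chi^{\perp}(q)\delta(p-q-k)-\chi(p)\chi(q)\delta(p-q+k)\big{]}=d_{t}(-k,p,q)\,\]

and hence \(D_{k}(t)^{*}=\int_{(\Lambda^{*})^{2}}\overline{d_{t}(k,q,p)}\,a_{p}^{*}a_{q}\,\mathrm{d}p\mathrm{d}q=D_{-k}(t)\).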
Clearly, the expression in (6.7) is _not_ normally ordered. Our next goal is then to put \(V_{F}(t)\) in normal order, with explicit coefficients. To this end, we introduce the following coefficient function
\[\phi_{t}(\vec{p})\equiv\int_{\Lambda^{*}}\hat{V}(k)\,d_{t}(-k,p_{1},p_{4})\,d_ {t}(k,p_{2},p_{3})\mathrm{d}k \tag{6.8}\]
where \(\vec{p}=(p_{1},p_{2},p_{3},p_{4})\in(\Lambda^{*})^{4}\). A straightforward calculation using the CAR in Eq. (6.7) now yields
\[V_{F}(t) =\int_{\Lambda^{*4}}\phi_{t}(p_{1},p_{2},q_{2},q_{1})a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{2}}a_{q_{1}}\mathrm{d}p_{1}\mathrm{d}p_{2}\mathrm{d}q_{1}\mathrm{d}q_{2}\] \[+\int_{\Lambda^{*2}}\bigg{[}\ \int_{\Lambda^{*2}}\phi_{t}(p_{1},p_{2},q_{2},q_{1})\delta(q_{1}-p_{2})\mathrm{d}p_{2}\mathrm{d}q_{1}\bigg{]}a_{p_{1}}^{*}a_{q_{2}}\mathrm{d}p_{1}\mathrm{d}q_{2}. \tag{6.9}\]
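The quadratic term in (6.9) is produced by the single contraction in the CAR. Indeed, using \(a_{q_{1}}a_{p_{2}}^{*}=\delta(q_{1}-p_{2})-a_{p_{2}}^{*}a_{q_{1}}\) we find

\[a_{p_{1}}^{*}a_{q_{1}}a_{p_{2}}^{*}a_{q_{2}}=\delta(q_{1}-p_{2})\,a_{p_{1}}^{*}a_{q_{2}}+a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{2}}a_{q_{1}}\,\]

where the plus sign in the last term comes from anticommuting \(a_{q_{1}}\) past \(a_{q_{2}}\).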
We shall denote by \(:V_{F}(t):\) the normal ordering of \(V_{F}(t)\), that is, the first term in Eq. (6.9).
Next, we shall put the above normal order form in a more explicit representation by calculating explicitly the coefficient function \(\phi_{t}\), together with its contraction for \(q_{1}=p_{2}\). Before we do so, let us introduce some convenient notation:
\(\square\) When \(\vec{p}=(p_{1},p_{2},p_{3},p_{4})\in(\Lambda^{*})^{4}\) is known from context, we let
\[\chi_{1234}\equiv\chi(p_{1})\chi(p_{2})\chi(p_{3})\chi(p_{4})\qquad\text{and} \qquad\chi_{1234}^{\perp}\equiv 1-\chi_{1234}\]
and similarly for \(\chi_{ij}\) and \(\chi_{ij}^{\perp}\) for any combination of \(i,j\in\{1,2,3,4\}\).
\(\square\) For any \(\vec{p}=(p_{1},p_{2},p_{3},p_{4})\in(\Lambda^{*})^{4}\) we let
\[\Delta E(\vec{p})\equiv E_{p_{1}}+E_{p_{2}}-E_{p_{3}}-E_{p_{4}} \tag{6.10}\]
where \(E_{p}\) is the dispersion relation of the system-see (2.16).
Starting from (6.8) and using the definition of \(d_{t}(k,p,q)\) we may explicitly calculate that for all \(\vec{p}\in(\Lambda^{*})^{4}\) there holds
\[\phi_{t}(\vec{p}) =e^{it\Delta E(\vec{p})}\delta(p_{1}+p_{2}-p_{3}-p_{4})\hat{V}(p_{1 }-p_{4})\big{(}\chi_{1234}+\chi_{1234}^{\perp}\big{)}\] \[-e^{it\Delta E(\vec{p})}\delta(p_{1}-p_{2}+p_{3}-p_{4})\hat{V}(p_{ 1}-p_{4})\big{(}\chi_{13}\chi_{24}^{\perp}+\chi_{13}^{\perp}\chi_{24}\big{)}. \tag{6.11}\]
In particular, a straightforward calculation using (6.11) shows that the integrand of the quadratic term in (6.9) can be written as
\[\int_{\Lambda^{*2}}\phi_{t}(p_{1},p_{2},q_{2},q_{1})\delta(q_{1}-p_{2})\mathrm{d}p_{2}\mathrm{d}q_{1}=\delta(p_{1}-q_{2})g(p_{1}) \tag{6.12}\]
where \(g(p)\equiv\chi(p)(\hat{V}*\chi)(p)+\chi^{\perp}(p)(\hat{V}*\chi^{\perp})(p)\). The explicit form of \(g(p)\) is not important, but the \(\delta(p_{1}-q_{2})\) dependence in the last equation implies that the second term in (6.9) _commutes_ with \(a_{p}^{*}a_{p}\). We shall use this fact in the proof of Lemma 6.1.
Finally, thanks to the CAR, the coefficients \(\phi_{t}(p_{1},p_{2},p_{3},p_{4})\) inside of \(:V_{F}(t):\) can be antisymmetrized with respect to the permutation of the variables \((p_{1},p_{2})\mapsto(p_{2},p_{1})\) and \((p_{3},p_{4})\mapsto(p_{4},p_{3})\), respectively. Namely, the coefficients \(\phi_{t}\) in \(:V_{F}(t):\) may be replaced by
\[\Phi_{t}(\vec{p})\equiv\frac{1}{4}\Big{(}\phi_{t}(p_{1},p_{2},p_{3},p_{4})-\phi_{t}(p_{2},p_{1},p_{3},p_{4})+\phi_{t}(p_{2},p_{1},p_{4},p_{3})-\phi_{t}(p_{1},p_{2},p_{4},p_{3})\Big{)}\, \tag{6.13}\]
which can be put in an explicit form, using (6.11). We record all these results in the following lemma.
**Lemma 6.3** (Normal ordering).: _Let \(t\in\mathbb{R}\) and \(V_{F}(t)\) the Heisenberg evolution of the fermion-fermion interaction. Then, the following identity holds_
\[V_{F}(t)\ =:\!V_{F}(t)\!:\ +\ N(g). \tag{6.14}\]
_Here, \(:\!V_{F}(t)\!:=\ \int_{\Lambda^{*4}}\Phi_{t}(p_{1}\cdots p_{4})a_{p_{1}}^{*}a_{p _{2}}^{*}a_{p_{3}}a_{p_{4}}\mathrm{d}p_{1}\cdots\mathrm{d}p_{4}\) is the normal ordering of \(V_{F}(t)\), and \(N(g)=\int_{\Lambda^{*}}g(p)a_{p}^{*}a_{p}\mathrm{d}p\), where \(g(p)\equiv\chi(p)(\hat{V}*\chi)(p)+\chi^{\perp}(p)(\hat{V}*\chi^{\perp})(p)\)._
_The coefficient function \(\Phi_{t}:(\Lambda^{*})^{4}\to\mathbb{C}\) is partially antisymmetric_
\[\Phi_{t}(p_{1},p_{2},p_{3},p_{4})=-\Phi_{t}(p_{2},p_{1},p_{3},p_{4})=+\Phi_{t}(p_{2},p_{1},p_{4},p_{3})=-\Phi_{t}(p_{1},p_{2},p_{4},p_{3}) \tag{6.15}\]
_and admits the following decomposition_
\[\Phi_{t}=\Phi_{t}^{(1)}+\Phi_{t}^{(2)}\]
_where \(\Phi_{t}^{(1)}\) is given by_
\[\Phi_{t}^{(1)}(\vec{p})=\,\frac{1}{2}\,e^{it\Delta E(\vec{p})}\delta(p_{1}+p_{ 2}-p_{3}-p_{4})\big{(}\hat{V}(p_{1}-p_{4})-\hat{V}(p_{1}-p_{3})\big{)}\big{(} \chi_{1234}+\chi_{1234}^{\perp}\big{)} \tag{6.16}\]
_and \(\Phi_{t}^{(2)}\) is given by_
\[\Phi_{t}^{(2)}(\vec{p})=\,\frac{1}{2}\,e^{it\Delta E(\vec{p})} \delta(p_{1}+p_{3}-p_{2}-p_{4})\hat{V}(p_{1}-p_{4})\big{(}\chi_{14}^{\perp}\chi _{23}+\chi_{23}^{\perp}\chi_{14}\big{)}\] \[\qquad\qquad-\frac{1}{2}e^{it\Delta E(\vec{p})}\delta(p_{1}+p_{ 4}-p_{2}-p_{3})\hat{V}(p_{1}-p_{3})\big{(}\chi_{13}^{\perp}\chi_{24}+\chi_{24} ^{\perp}\chi_{13}\big{)}. \tag{6.17}\]
#### 6.1.2. Proof of Lemma 6.1
Proof.: We start with the normal ordering of \(V_{F}(t)\) found in Lemma 6.3.
First, we observe that we may disregard the quadratic term \(N(g)\equiv\int_{\Lambda^{*}}g(p)a_{p}^{*}a_{p}\mathrm{d}p\). Indeed, since \([a_{p}^{*}a_{p},N(g)]=0\) we find that for any \(p\in\Lambda^{*}\)
\[\nu\big{(}[[a_{p}^{*}a_{p},V_{F}(t)],V_{F}(s)]\big{)}=\nu\big{(}[[a_{p}^{*}a_{p },\colon V_{F}(t)\colon],V_{F}(s)]\big{)} \tag{6.18}\]
Furthermore, since \(\nu\) is quasi-free and translation invariant, it verifies the identities (3.16). Thus, since \([a_{p}^{*}a_{p},\colon V_{F}(t)\colon]\) is quartic in creation and annihilation operators, we find that
\[\nu\big{(}[[a_{p}^{*}a_{p},V_{F}(t)],V_{F}(s)]\big{)} =\nu\big{(}[[a_{p}^{*}a_{p},\colon V_{F}(t)\colon],V_{F}(s)]\big{)}\] \[=\nu\big{(}[[a_{p}^{*}a_{p},\colon V_{F}(t)\colon]\,,\colon V_{F}( s)\colon]\big{)}-\nu\big{(}[N(g),[a_{p}^{*}a_{p},\colon V_{F}(t)\colon]]\big{)}\] \[=\nu\big{(}[[a_{p}^{*}a_{p},\colon V_{F}(t)\colon]\,,\colon V_{F}( s)\colon]\big{)}\, \tag{6.19}\]
for all \(p\in\Lambda^{*}\).
Secondly, we note that a standard calculation using the CAR implies that
\[\nu\big{(}[[a_{p}^{*}a_{p},V_{F}(t)],V_{F}(s)]\big{)}=\int_{\Lambda^{*4}\times\Lambda^{*4}}M_{p}(\vec{k},\vec{\ell})\,\nu\big{(}a_{k_{1}}^{*}a_{k_{2}}^{*}a_{k_{3}}a_{k_{4}}\,a_{\ell_{1}}^{*}a_{\ell_{2}}^{*}a_{\ell_{3}}a_{\ell_{4}}\big{)}\mathrm{d}\vec{k}\,\mathrm{d}\vec{\ell}\]

where \(M_{p}(\vec{k},\vec{\ell})\) is an explicit coefficient function built out of \(\Phi_{t}\), \(\Phi_{s}\), the \(\delta\)-functions in \(p\) produced by the inner commutator, and the signs produced by the outer commutator. Since \(\nu\) is quasi-free, Wick's theorem expresses the eight-point function above as a sum over pairings of products of the contractions \(\nu_{ij}\) and \(\tilde{\nu}_{ij}\),
where we denote \(\nu_{ij}\equiv\nu(a_{k_{i}}^{*}a_{\ell_{j}})\) and \(\tilde{\nu}_{ij}\equiv\delta(k_{i}-\ell_{j})-\nu_{ij}\). Next, based on the symmetries of \(M_{p}(\vec{k},\vec{\ell})\), we may follow the algebraic analysis carried out in [21, pp. 374-375] to find that
\[\nu\big{(}[[a_{p}^{*}a_{p}, V_{F}(t)],V_{F}(s)]\big{)} \tag{6.23}\] \[=\int_{\Lambda^{*4}\times\Lambda^{*4}}M_{p}(k_{1}k_{2}\ell_{3} \ell_{4},k_{3}k_{4}\ell_{1}\ell_{2})\,4\big{(}\nu_{11}\nu_{22}\tilde{\nu}_{33} \tilde{\nu}_{44}+4\nu_{11}\nu_{23}\nu_{42}\tilde{\nu}_{34}\big{)}\mathrm{d}\vec{ k}\mathrm{d}\vec{\ell}\.\]
Thirdly, translation invariance \(\nu(a_{p}^{*}a_{q})=\delta(p-q)f_{0}(p)\) now yields two terms
\[\nu\big{(}[[a_{p}^{*}a_{p},V_{F}(t)], V_{F}(s)]\big{)}\] \[=4\int_{\Lambda^{*4}}M_{p}(k_{1}k_{2}k_{3}k_{4},k_{3}k_{4}k_{1}k_{2})f_{0}(k_{1})f_{0}(k_{2})\tilde{f}_{0}(k_{3})\tilde{f}_{0}(k_{4})\mathrm{d}\vec{k}\] \[+16\int_{\Lambda^{*4}}M_{p}(k_{1}k_{2}k_{2}k_{3},k_{3}k_{4}k_{1}k_{4})f_{0}(k_{1})f_{0}(k_{2})f_{0}(k_{3})\tilde{f}_{0}(k_{4})\mathrm{d}\vec{k}. \tag{6.24}\]
Similarly as in [21], we look at the two terms of the right hand side of (6.24) by evaluating the function \(M_{p}\) in the different cases.
_The second term of (6.24)._ Let us show that the second term vanishes. Indeed, we use the fact that \(\Phi_{t}(k_{3}k_{4}k_{1}k_{2})=\Phi_{-t}(k_{1}k_{2}k_{3}k_{4})\) together with antisymmetry with respect to \(k_{1}\mapsto k_{2}\) and \(k_{3}\mapsto k_{4}\) to find that
\[M_{p}(k_{1}k_{2}k_{2}k_{3},k_{3}k_{4}k_{1}k_{4}) \tag{6.25}\] \[=2\cos\big{[}(t-s)(E_{1}-E_{3})\big{]}\big{(}\delta(p-k_{3})- \delta(p-k_{1})\big{)}\Phi(k_{1}k_{2}k_{3}k_{2})\Phi(k_{1}k_{4}k_{3}k_{4})\]
where we denote \(\Phi(\vec{k})\equiv\Phi_{0}(\vec{k})\). One may verify that \(\Phi(k_{1}k_{2}k_{3}k_{2})\) is proportional to \(\delta(k_{1}-k_{3})\) and, consequently, it holds that \(\big{(}\delta(p-k_{3})-\delta(p-k_{1})\big{)}\Phi(k_{1}k_{2}k_{3}k_{2})=0\).
_The first term of (6.24)._ Using the fact that \(\Phi_{t}(k_{3}k_{4}k_{1}k_{2})=\Phi_{-t}(k_{1}k_{2}k_{3}k_{4})\) one finds
\[M_{p}(k_{1}k_{2}k_{3}k_{4},k_{3}k_{4}k_{1}k_{2}) \tag{6.26}\] \[=2\cos\big{[}(t-s)\Delta E(\vec{k})\big{]}|\Phi(\vec{k})|^{2} \big{(}\delta(p-k_{1})+\delta(p-k_{2})-\delta(p-k_{3})-\delta(p-k_{4})\big{)}\.\]
We plug this result back in (6.24) to find that after a change of variables \((k_{1}k_{2})\mapsto(k_{3}k_{4})\),
\[\nu\big{(}[[a_{p}^{*}a_{p}, V_{F}(t)],V_{F}(s)]\big{)} \tag{6.27}\] \[=4\int_{\Lambda^{*4}}\big{(}\delta(p-k_{1})+\delta(p-k_{2})-\delta(p-k_{3})-\delta(p-k_{4})\big{)}\ |\Phi(\vec{k})|^{2}\] \[\times\cos\big{[}(t-s)\Delta E(\vec{k})\big{]}\Big{(}f_{0}(k_{1})f_{0}(k_{2})\tilde{f}_{0}(k_{3})\tilde{f}_{0}(k_{4})-f_{0}(k_{3})f_{0}(k_{4})\tilde{f}_{0}(k_{1})\tilde{f}_{0}(k_{2})\Big{)}\mathrm{d}\vec{k}\.\]
Finally, we integrate against time and a test function \(\varphi(p)\) to find that
\[\int_{0}^{t}\int_{0}^{t_{1}}\nu\big{(}[[N(\varphi),V_{F}(t_{1})],V_{F}(t_{2})] \big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}=-t|\Lambda|\int_{\Lambda^{*}}\varphi(p)Q_ {t}[f_{0}](p)\mathrm{d}p \tag{6.28}\]
where \(Q_{t}[f_{0}]\) is the expression given by
\[Q_{t}[f_{0}](p) = 4\pi\int_{\Lambda^{*4}}\frac{|\Phi(\vec{k})|^{2}}{|\Lambda|}\left[\delta(p-k_{1})+\delta(p-k_{2})-\delta(p-k_{3})-\delta(p-k_{4})\right]\] \[\times \delta_{t}[\Delta E(\vec{k})]\Big{(}f_{0}(k_{3})f_{0}(k_{4})\tilde{f}_{0}(k_{1})\tilde{f}_{0}(k_{2})-f_{0}(k_{1})f_{0}(k_{2})\tilde{f}_{0}(k_{3})\tilde{f}_{0}(k_{4})\Big{)}\mathrm{d}\vec{k}\.\]
where we recall \(\delta_{1}(x)=\frac{2}{\pi}\frac{\sin^{2}(x/2)}{x^{2}}\) and \(\delta_{t}(x)=t\delta_{1}(tx).\) Upon expanding \(\Phi=\Phi^{(1)}+\Phi^{(2)}\) in the above expression with respect to the decomposition found in Lemma 6.3, one may check that the formula is in agreement with the operator \(Q_{t},\) as given by Def. 2. This finishes the proof of the lemma.
### Proof of Lemma 6.2
Proof.: Let \(\varphi\in\ell^{1}\) and \(t,s\in\mathbb{R}\), let us introduce the following notation for the fermion-fermion double commutator
\[C_{F}(\varphi,t,s) :=[[N(\varphi),V_{F}(t)],V_{F}(s)]\] \[=\int_{\Lambda^{*2}}\hat{V}(k)\hat{V}(\ell)\Big{[}\Big{[}N( \varphi),D_{k}^{*}(t)D_{k}(t)\Big{]},D_{\ell}^{*}(s)D_{\ell}(s)\Big{]}\; \mathrm{d}k\mathrm{d}\ell \tag{6.30}\]
where we have written \(V_{F}(t)\) in terms of \(D\)-operators, see (5.5). For simplicity, we shall assume that \(\varphi\) is real-valued so that \(C_{F}(\varphi,t,s)\) is self-adjoint (in the general case, one may decompose \(\varphi=\mathrm{Re}\varphi+\mathrm{i}\mathrm{Im}\varphi\) and apply linearity of the commutator). We claim that there exists a constant \(C>0\) such that
\[\|C_{F}(\varphi,t,s)\Psi\|\leqslant C\|\hat{V}\|_{\ell^{1}}^{2}|\Lambda|\,\|\varphi\|_{\ell^{1}}\|\mathcal{N}^{2}\Psi\|\, \tag{6.31}\]
for all \(\Psi\in\mathscr{F}\). To see this, we shall expand the double commutator of the right hand side of (6.30) into eight terms. In order to ease the notation, we shall drop the time labels \(t,s\in\mathbb{R}\) -since our estimates are uniform in time, there is no risk in doing so. In terms of the contraction operators \(D_{k}^{+}(\varphi)\equiv[N(\varphi),D_{k}^{*}]\) and \(D_{k}(\varphi)\equiv[N(\varphi),D_{k}]\) we find
\[\Big{[}\Big{[}N(\varphi),D_{k}^{*}D_{k}\Big{]},D_{\ell}^{*}D_{ \ell}\Big{]} = D_{k}^{+}(\varphi)\Big{[}D_{k},D_{\ell}^{*}\Big{]}D_{\ell}\ +\ D_{k}^{+}(\varphi)D_{\ell}^{*}\Big{[}D_{k},D_{ \ell}\Big{]} \tag{6.32}\] \[+ \Big{[}D_{k}^{+}(\varphi),D_{\ell}^{*}\Big{]}D_{\ell}D_{k}\ +\ D_{\ell}^{*}\Big{[}D_{k}^{+}( \varphi),D_{\ell}\Big{]}D_{k}\] \[+ D_{k}^{*}D_{\ell}^{*}\Big{[}D_{k}(\varphi),D_{\ell}\Big{]}\ +\ D_{k}^{*}\Big{[}D_{k}( \varphi),D_{\ell}^{*}\Big{]}D_{\ell}\] \[+ D_{\ell}^{*}\Big{[}D_{k}^{*},D_{\ell}\Big{]}D_{k}(\varphi)\ +\ \Big{[}D_{k}^{*},D_{ \ell}^{*}\Big{]}D_{\ell}D_{k}(\varphi)\.\]
All these operators can be controlled using the Type-I and Type-IV estimates, found in Lemma 4.5 and Lemma 4.8, respectively, together with the commutator identities \([D_{k},\mathcal{N}]=[D_{k}(\varphi),\mathcal{N}]=0\)-see Lemma 4.3. For instance, given \(\Psi\in\mathscr{F}\) the first term
can be estimated as follows
\[\|D_{k}^{+}(\varphi)\big{[}D_{k},D_{\ell}^{*}\big{]}D_{\ell}\Psi\| \,\leqslant\,\|D_{k}^{+}(\varphi)\|\,\|\big{[}D_{k},D_{\ell}^{*}\big{]}D_{\ell}\Psi\|\] \[\,\leqslant\,C\|\varphi\|_{\ell^{1}}|\Lambda|\,\|\mathcal{N}D_{\ell}\Psi\|\] \[\,=\,C\|\varphi\|_{\ell^{1}}|\Lambda|\,\|D_{\ell}\mathcal{N}\Psi\|\] \[\,\leqslant\,C\|\varphi\|_{\ell^{1}}|\Lambda|\,\|\mathcal{N}^{2}\Psi\| \tag{6.33}\]
for a constant \(C>0\). Every other term in the expansion (6.32) can be analyzed in the same fashion, and satisfies the same bound; we leave the details to the reader. Thus, we plug the estimate (6.33) back in the expansion (6.32) and integrate over \(k,\ell\in\Lambda^{*}\). One then obtains (6.31).
Let us now estimate the integral remainder term: we fix \(0\leqslant t_{3}\leqslant t_{2}\leqslant t_{1}\). As a first step, since \(C_{F}\) and \(\mathfrak{h}_{I}\) are self-adjoint, we use the following rough upper bound
\[\Big{|}\nu_{t_{3}}\big{(}[[[N(\varphi),V_{F}(t_{1})],V_{F}(t_{2})],\mathfrak{h}_{I}(t_{3})]\big{)}\Big{|}\leqslant 2\nu_{t_{3}}\Big{(}C_{F}(\varphi,t_{1},t_{2})^{2}\Big{)}^{\frac{1}{2}}\nu_{t_{3}}\Big{(}\mathfrak{h}_{I}(t_{3})^{2}\Big{)}^{\frac{1}{2}}. \tag{6.34}\]
In view of Remark 5.1, we can turn the estimate (6.31) into the upper bound
\[\nu_{t_{3}}\Big{(}C_{F}(\varphi,t_{1},t_{2})^{2}\Big{)}^{\frac{1}{2}}\leqslant C\|\hat{V}\|_{\ell^{1}}^{2}|\Lambda|\,\|\varphi\|_{\ell^{1}}\nu_{t_{3}}\big{(}\mathcal{N}^{4}\big{)}^{\frac{1}{2}}. \tag{6.35}\]
On the other hand, using the operator norm estimates (4.16), a simple but rough estimate for the interaction Hamiltonian is found to be
\[\|\mathfrak{h}_{I}(t)\Psi\| \,\leqslant\,\,\lambda\|V_{F}(t)\Psi\|+\lambda\|V_{FB}(t)\Psi\|+\lambda\|V_{B}(t)\Psi\|\] \[\,\lesssim\,\,\lambda\|\hat{V}\|_{\ell^{1}}\|\mathcal{N}^{2}\Psi\|+\lambda\|\hat{V}\|_{\ell^{1}}R\|\mathcal{N}\Psi\|+\lambda\|\hat{V}\|_{\ell^{1}}R^{2}\|\Psi\|\] \[\,\lesssim\,\,\lambda\|\hat{V}\|_{\ell^{1}}\big{(}\|\mathcal{N}^{2}\Psi\|+R^{2}\|\Psi\|\big{)}\]
where we recall that \(R=|\Lambda|p_{F}^{d-1}\). Consequently, in view of Remark 5.1 we find that
\[\nu_{t_{3}}\Big{(}\mathfrak{h}_{I}(t_{3})^{2}\Big{)}^{\frac{1}{2}}\leqslant C\lambda\|\hat{V}\|_{\ell^{1}}\Big{(}\nu_{t_{3}}\big{(}\mathcal{N}^{4}\big{)}^{\frac{1}{2}}+R^{2}\Big{)} \tag{6.36}\]
where we used the fact that \(\nu_{t}(\mathds{1})=1\) for all \(t\in\mathbb{R}.\) The proof of the lemma is now finished once we combine Eqs. (6.34), (6.35) and (6.36), and integrate over the time variables \(0\leqslant t_{3}\leqslant t_{2}\leqslant t_{1}\leqslant t\).
## 7. Leading Order Terms II: Emergence of \(B\)
The main purpose of this section is to analyze the term \(T_{FB,FB}(t)\) found in the double commutator expansion (3.20), introduced in Section 3. In particular, we show that this term gives rise to the operator \(B_{t}\), as given in Def. 3, corresponding to the second leading order term describing the dynamics of \(f_{t}(p).\) It describes interactions between particles/holes and _virtual bosons_ around the Fermi surface. This is manifest in the fact that, as we shall see, it contains the _propagator_ of free bosons
\[G_{k}(t-s)\equiv\langle\Omega,[b_{k}(t),b_{k}^{*}(s)]\Omega\rangle_{{}_{ \mathscr{F}}} \tag{7.1}\]
defined for \(k\in\Lambda^{*}\), and \(t,s\in\mathbb{R}\).
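Although \(G_{k}\) is computed explicitly only later, in (7.15), let us already sketch how that formula arises. Assuming, for illustration only, the pair-operator convention \(b_{k}=\int_{\Lambda^{*}}\chi^{\perp}(p)\chi(p-k)\,a_{p-k}a_{p}\,\mathrm{d}p\) and the diagonal free evolution \(a_{p}(t)=e^{-itE_{p}}a_{p}\) (the precise conventions are those fixed in Section 4), one has \(b_{k}(t)\Omega=0\), so that only the fully contracted term survives in the vacuum expectation:

\[G_{k}(t-s)=\langle\Omega,b_{k}(t)b_{k}^{*}(s)\Omega\rangle_{{}_{\mathscr{F}}}=\int_{\Lambda^{*}}\chi^{\perp}(p)\chi(p-k)\,e^{-i(t-s)(E_{p}+E_{p-k})}\,\mathrm{d}p.\]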
We state the main result of this section in the following proposition, which we prove in the remainder of the section.
**Proposition 7.1** (Analysis of \(T_{FB,FB}\)).: _Let \(T_{FB,FB}(t,p)\) be the quantity defined in Eq. (3.21) for \(\alpha=\beta=FB\), and let \(m>0\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell_{m}^{1}\) and \(t\geqslant 0\) the following inequality holds true_
\[\big{|}T_{FB,FB}(t,\varphi)+|\Lambda|t\,\langle\varphi,B_{t}[f_{0}]\rangle\,\big{|}\] \[\qquad\leqslant C|\Lambda|t^{2}\|\varphi\|_{\ell_{m}^{1}}\|\hat{V}\|_{\ell^{1}}^{2}\sup_{\tau\leqslant t}\Big{(}R^{\frac{1}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}\nu_{\tau}(\mathcal{N})^{\frac{1}{2}}+R^{\frac{3}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+Rp_{F}^{-m}\nu_{\tau}(\mathcal{N}^{2})\Big{)}\] \[\qquad+\,C|\Lambda|t^{3}\lambda R\|\varphi\|_{\ell_{m}^{1}}\|\hat{V}\|_{\ell^{1}}^{3}\sup_{\tau\leqslant t}\Big{(}R^{\frac{3}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R\nu_{\tau}(\mathcal{N}_{\mathcal{S}})+Rp_{F}^{-m}\nu_{\tau}(\mathcal{N})^{\frac{1}{2}}\Big{)} \tag{7.2}\]
_where \(T_{FB,FB}(t,\varphi)\equiv\langle\varphi,T_{FB,FB}(t)\rangle\) and \(B_{t}\) is given in Def. 3._
_Remark 7.1_.: In order to prove Proposition 7.1, we expand \(T_{FB,FB}\) into several terms and analyze each one separately. This expansion is based on the following two observations:
_(i)_ For any self-adjoint operators \(N,S\), any operator \(T\), and any state \(\mu\), there holds (a short verification is given after (7.4) below):
\[\mu\big{(}[[N,T+T^{*}],S]\big{)}=2\mathrm{Re}\,\mu\big{(}[[N,T],S]\big{)}. \tag{7.3}\]
_(ii)_ Thanks to the symmetries \(D_{k}=D_{-k}^{*}\), \(\hat{V}(-k)=\hat{V}(k)\) and the vanishing commutator \([D_{k}^{*},b_{k}]=0\), starting from the representation (5.6) we may re-write the fermion-boson interaction term as
\[V_{FB}(t)=\int_{\Lambda^{*}}\hat{V}(k)B_{k}^{*}(t)D_{k}(t)\mathrm{d}k\qquad\text{where}\qquad B_{k}^{*}(t)\equiv b_{k}^{*}(t)+b_{-k}(t). \tag{7.4}\]
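For completeness, observation _(i)_ admits a one-line verification. Since \(N=N^{*}\) and \(S=S^{*}\), one has \([N,T^{*}]=-[N,T]^{*}\) and \([[N,T],S]^{*}=-[[N,T]^{*},S]\), and therefore

\[\mu\big{(}[[N,T^{*}],S]\big{)}=\mu\big{(}[[N,T],S]^{*}\big{)}=\overline{\mu\big{(}[[N,T],S]\big{)}}\,\]

which, added to \(\mu([[N,T],S])\), yields (7.3).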
Starting from (3.21), based on these two observations we are able to re-write the term \(T_{FB,FB}\) for all \(t\in\mathbb{R}\) and \(\varphi\in\ell^{1}\) in the following form
\[T_{FB,FB}(t,\varphi)\] \[\qquad=2\mathrm{Re}\int_{0}^{t}\int_{0}^{t_{1}}\int_{\Lambda^{*2}}\hat{V}(k)\hat{V}(\ell)\nu_{t_{2}}\Big{(}[[N(\varphi),D_{k}^{*}(t_{1})b_{k}(t_{1})],B_{\ell}^{*}(t_{2})D_{\ell}(t_{2})]\Big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k\mathrm{d}\ell\] \[\qquad\equiv M(t,\varphi)+R^{(1)}(t,\varphi)+R^{(2)}(t,\varphi)+R^{(3)}(t,\varphi)+R^{(4)}(t,\varphi) \tag{7.5}\]
where in the second line we have expanded the commutator into five terms. The first one we shall refer to as the _main term_, and is defined as follows
\[M(t,\varphi)=2\mathrm{Re}\int_{0}^{t}\int_{0}^{t_{1}}\int_{\Lambda^{*2}}\hat{V}(k)\hat{V}(\ell)\nu_{t_{2}}\Big{(}D_{k}^{*}(t_{1},\varphi)[b_{k}(t_{1}),b_{\ell}^{*}(t_{2})]D_{\ell}(t_{2})\Big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k\mathrm{d}\ell. \tag{7.6}\]
The last four, which we shall refer to as the _remainder terms_, are defined as follows
\[R^{(1)}(t,\varphi) =2\mathrm{Re}\,\int_{0}^{t}\int_{0}^{t_{1}}\int_{\Lambda^{*2}}\hat{V}(k)\hat{V}(\ell)\nu_{t_{2}}\Big{(}D_{k}^{*}(t_{1},\varphi)B_{\ell}^{*}(t_{2})[b_{k}(t_{1}),D_{\ell}(t_{2})]\Big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k\mathrm{d}\ell\] \[R^{(2)}(t,\varphi) =2\mathrm{Re}\,\int_{0}^{t}\int_{0}^{t_{1}}\int_{\Lambda^{*2}}\hat{V}(k)\hat{V}(\ell)\nu_{t_{2}}\Big{(}[D_{k}^{*}(t_{1},\varphi),B_{\ell}^{*}(t_{2})]D_{\ell}(t_{2})b_{k}(t_{1})\Big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k\mathrm{d}\ell\] \[R^{(3)}(t,\varphi) =2\mathrm{Re}\,\int_{0}^{t}\int_{0}^{t_{1}}\int_{\Lambda^{*2}}\hat{V}(k)\hat{V}(\ell)\nu_{t_{2}}\Big{(}B_{\ell}^{*}(t_{2})[D_{k}(t_{1},\varphi),D_{\ell}(t_{2})]b_{k}(t_{1})\Big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k\mathrm{d}\ell\] \[R^{(4)}(t,\varphi) =2\mathrm{Re}\,\int_{0}^{t}\int_{0}^{t_{1}}\int_{\Lambda^{*2}}\hat{V}(k)\hat{V}(\ell)\nu_{t_{2}}\Big{(}[D_{k}(t_{1})b_{k}(t_{1},\varphi),B_{\ell}^{*}(t_{2})D_{\ell}(t_{2})]\Big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k\mathrm{d}\ell. \tag{7.7}\]
_Remark 7.2_.: We remind the reader that we have previously introduced the notation
\[D_{k}^{*}(t,\varphi)=[N(\varphi),D_{k}^{*}(t)]\quad\text{and}\quad b_{k}(t, \varphi)=[N(\varphi),b_{k}(t)] \tag{7.8}\]
for any \(k\in\Lambda^{*}\) and \(t\in\mathbb{R}\). We have also used the fact that \([b_{k}(t),b_{\ell}(s)]=0\).
In the remainder of this section, we shall study these five terms separately. The proof of Proposition 7.1 follows directly from the following two lemmas. Here, we remind the reader that \(R=|\Lambda|p_{F}^{d-1}\) is our recurring parameter.
**Lemma 7.1** (The main term).: _Let \(M\) be the quantity defined in (7.6), and let \(m>0\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell_{m}^{1}\) and \(t\geqslant 0\) the following estimate holds true_
\[|M(t,\varphi)+|\Lambda|t \langle\varphi,B_{t}[f_{0}]\rangle| \tag{7.9}\] \[\leqslant Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}|\Lambda|\|\varphi\|_{ \ell_{m}^{1}}\sup_{\tau\leqslant t}\Big{(}R^{\frac{1}{2}}\nu_{\tau}(\mathcal{ N}_{\mathcal{S}})^{\frac{1}{2}}+p_{F}^{-m}\Big{)}\nu_{\tau}(\mathcal{N}^{2})^{ \frac{1}{2}}\] \[+C\lambda t^{3}R|\Lambda|\|\varphi\|_{\ell_{m}^{1}}\|\hat{V}\|_{ \ell^{1}}^{3}\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{\frac{3}{2}}\nu_{\tau}( \mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R\nu_{\tau}(\mathcal{N}_{\mathcal{S }})+Rp_{F}^{-m}\nu_{\tau}(\mathcal{N})^{\frac{1}{2}}\Big{)}\]
_where the operator \(B_{t}\) was introduced in Def. 3._
**Lemma 7.2** (The remainder terms).: _Let \(R^{(1)}\), \(R^{(2)}\), \(R^{(3)}\) and \(R^{(4)}\) be the quantities defined in (7.7), and let \(m>0.\) Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell_{m}^{1}\) and \(t\geqslant 0\)_
_(1) There holds_
\[|R^{(1)}(t,\varphi)|\leqslant Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}\|\varphi\|_{\ell^{1}}|\Lambda|R^{\frac{3}{2}}\sup_{0\leqslant\tau\leqslant t}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}. \tag{7.10}\]
_(2) There holds_
\[|R^{(2)}(t,\varphi)|\leqslant Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}\|\varphi\|_{\ell^{1}_{m}}|\Lambda|\frac{R}{p_{F}^{m}}\sup_{0\leqslant\tau\leqslant t}\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}. \tag{7.11}\]
_(3) There holds_
\[|R^{(3)}(t,\varphi)|\leqslant Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}\|\varphi\|_{\ell^{1}}|\Lambda|R^{\frac{3}{2}}\sup_{0\leqslant\tau\leqslant t}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}. \tag{7.12}\]
_(4) There holds_
\[|R^{(4)}(t,\varphi)|\leqslant Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}\|\varphi\|_{\ell^{1}_{m}}|\Lambda|\frac{R}{p_{F}^{m}}\sup_{0\leqslant\tau\leqslant t}\nu_{\tau}(\mathcal{N}^{2}). \tag{7.13}\]
Proof of Proposition 7.1.: Straightforward combination of the expansion given in Eq. (7.5), and the estimates contained in Lemmas 7.1 and 7.2.
We dedicate the rest of the section to the proof of Lemma 7.1 and 7.2, respectively. This is done in the two following subsections.
### Analysis of the main term
The main goal of this subsection is to prove Lemma 7.1 by analyzing the main term \(M\). Our first step in this direction is to give an additional decomposition of \(M\). Indeed, we start by noting that the commutator of the bosonic operators may be written as (see (4.7) in Section 4)
\[[b_{k}(t),b_{\ell}^{*}(s)]=\ \delta(k-\ell)G_{k}(t-s)\mathds{1}-\mathcal{R}_{k, \ell}(t,s)\, \tag{7.14}\]
which corresponds to a decomposition into its "diagonal" and "off-diagonal" parts, with respect to the variables \(k,\ell\in\Lambda^{*}\). Here, \(G_{k}(t-s)\) is a scalar that corresponds to the _propagator_ of the boson field; it can be explicitly calculated to be
\[G_{k}(t-s)=\langle\Omega,[b_{k}(t),b_{k}^{*}(s)]\Omega\rangle_{\mathscr{F}}=\int_{\Lambda^{*}}\chi^{\perp}(p)\chi(p-k)e^{-i(t-s)(E_{p}+E_{p-k})}\mathrm{d}p. \tag{7.15}\]
for all \(k\in\Lambda^{*}\) and \(t,s\in\mathbb{R}\). On the other hand, the second term of (7.14) corresponds to an operator remainder term
\[\mathcal{R}_{k,\ell}(t,s) \equiv\int_{\Lambda^{*}}\chi^{\perp}(p)\chi^{\perp}(p+\ell-k)\chi(p-k)e^{-i(t-s)E_{p-k}}a_{p}^{*}(t)a_{p+\ell-k}(s)\,\mathrm{d}p\] \[+\int_{\Lambda^{*}}\chi(h)\chi(h+\ell-k)\chi^{\perp}(h+k)e^{-i(t-s)E_{h+k}}a_{h}^{*}(t)a_{h+\ell-k}(s)\,\mathrm{d}h. \tag{7.16}\]
The decomposition of the bosonic commutator given in (7.14) now suggests that we split the main term into two parts. The first one contains the \(\delta(k-\ell)\) function, and the second one contains the operator \(\mathcal{R}_{k,\ell}\). In other words, we shall consider
\[M(t,\varphi)=M^{\delta}(t,\varphi)+M^{\mathcal{R}}(t,\varphi). \tag{7.17}\]
We shall analyze \(M^{\delta}\) and \(M^{\mathcal{R}}\) separately in the next two subsubsections. The proof of Lemma 7.1 is given in the third subsubsection.
#### 7.1.1. Analysis of \(M^{\delta}\)
Upon expanding the bosonic commutator (7.14) in (7.6), we evaluate the \(\delta(k-\ell)\) function to find that
\[M^{\delta}(t,\varphi)=2\mathrm{Re}\,\int_{\Lambda^{*}}|\hat{V}(k)|^{2}\int_{0 }^{t}\int_{0}^{t_{1}}G_{k}(t_{1}-t_{2})\nu_{t_{2}}\Big{(}D_{k}^{*}(t_{1}, \varphi)D_{k}(t_{2})\Big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k. \tag{7.18}\]
In order to analyze the above expectation value, we shall expand \(\nu_{t_{2}}\) with respect to the interaction dynamics (3.14). Namely, we consider
\[M^{\delta}=M_{0}^{\delta}+M_{1}^{\delta} \tag{7.19}\]
where for all \(t\in\mathbb{R}\) and \(\varphi\in\ell^{1}\) we define
\[M_{0}^{\delta}(t,\varphi)\equiv 2\mathrm{Re}\int_{\Lambda^{*}}|\hat{V}(k)|^{2}\int_{0}^{t}\int_{0}^{t_{1}}G_{k}(t_{1}-t_{2})\nu\big{(}D_{k}^{*}(t_{1},\varphi)D_{k}(t_{2})\big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k \tag{7.20}\]
together with
\[M_{1}^{\delta}(t,\varphi) \tag{7.21}\] \[\qquad\equiv 2\mathrm{Im}\int_{\Lambda^{*}}|\hat{V}(k)|^{2}\int_{0 }^{t}\int_{0}^{t_{1}}\int_{0}^{t_{2}}G_{k}(t_{1}-t_{2})\nu_{t_{3}}\big{(}[D_{k }^{*}(t_{1},\varphi)D_{k}(t_{2}),\mathfrak{h}_{I}(t_{3})]\big{)}\mathrm{d}t_{1 }\mathrm{d}t_{2}\mathrm{d}t_{3}\mathrm{d}k\.\]
First, we identify the first term in the above expansion as the one from which the operator \(B_{t}\) will emerge. Namely, we claim that
**Claim 1**.: _For all \(t\in\mathbb{R}\) and real-valued \(\varphi\in\ell^{1}\), the following identity holds true_
\[M_{0}^{\delta}(t,\varphi)=-t\left\langle\varphi,B_{t}[f_{0}]\right\rangle \tag{7.22}\]
_where \(B_{t}\) is the operator given in Def. 3._
Once this is established, it suffices to control the second term in the expansion of \(M^{\delta}\), that is, the extra integral remainder term in (7.19), \(M_{1}^{\delta}\).
**Claim 2**.: _For all \(m>0\) there exists a constant \(C>0\) such that for all \(t\geqslant 0\) and \(\varphi\in\ell^{1}\) the following estimate holds true_
\[|M_{1}^{\delta}(t,\varphi)|\leqslant C\lambda t^{3}R|\Lambda|\|\varphi\|_{\ell_{m}^{1}}\|\hat{V}\|_{\ell^{1}}\|\hat{V}\|_{\ell^{2}}^{2}\sup_{0\leqslant\tau\leqslant t}\left[R^{\frac{3}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R\nu_{\tau}(\mathcal{N}_{\mathcal{S}})+\frac{R}{p_{F}^{m}}\nu_{\tau}(\mathcal{N})^{\frac{1}{2}}\right]\,. \tag{7.23}\]
The proofs of the above claims are given as follows.
Proof of Claim 1.: Let us fix \(k\in\Lambda^{*}\), \(t,s\in\mathbb{R}\) and \(\varphi\in\ell^{1}\), which we assume is real-valued in the remainder of the proof. In order to prove our claim, we write
\[D_{k}^{*}(t,\varphi) =\int_{\Lambda^{*}}\chi^{\perp}(p_{1},p_{1}-k)[\varphi(p_{1})- \varphi(p_{1}-k)]a_{p_{1}}^{*}(t)a_{p_{1}-k}(t)\mathrm{d}p_{1}\] \[-\int_{\Lambda^{*}}\chi(h_{1},h_{1}+k)[\varphi(h_{1})-\varphi(h_{ 1}+k)]a_{h_{1}}^{*}(t)a_{h_{1}+k}(t)\mathrm{d}h_{1}\, \tag{7.24}\] \[D_{k}(s) =\int_{\Lambda^{*}}\chi^{\perp}(p_{2},p_{2}+k)a_{p_{2}}^{*}(s)a_{ p_{2}+k}(s)\mathrm{d}p_{2}\] \[-\int_{\Lambda^{*}}\chi(h_{2},h_{2}-k)a_{h_{2}}^{*}(s)a_{h_{2}-k} (s)\mathrm{d}h_{2}. \tag{7.25}\]
Thus we are able to calculate that the following four terms arise
\[\nu\big{(}D_{k}^{*}(t,\varphi)D_{k}(s)\big{)} =\int_{\Lambda^{*2}}\chi^{\perp}(p_{1},p_{2},p_{1}-k,p_{2}+k)[ \varphi(p_{1})-\varphi(p_{1}-k)]\] \[\qquad\times\ \nu\Big{(}a_{p_{1}}^{*}(t)a_{p_{1}-k}(t)a_{p_{2}}^{*}(s) a_{p_{2}+k}(s)\Big{)}\mathrm{d}p_{1}\mathrm{d}p_{2}\] \[+\int_{\Lambda^{*2}}\chi(h_{1},h_{2},h_{1}+k,h_{2}-k)[\varphi(h_{ 1})-\varphi(h_{1}+k)]\] \[\qquad\times\ \nu\Big{(}a_{h_{1}}^{*}(t)a_{h_{1}+k}(t)a_{h_{2}}^{*}(s )a_{h_{2}-k}(s)\Big{)}\mathrm{d}h_{1}\mathrm{d}h_{2}\] \[-\int_{\Lambda^{*2}}\chi^{\perp}(p_{1},p_{1}-k)\chi(h_{2},h_{2}- k)[\varphi(p_{1})-\varphi(p_{1}-k)]\] \[\qquad\times\ \nu\Big{(}a_{p_{1}}^{*}(t)a_{p_{1}-k}(t)a_{h_{2}}^{*}(s )a_{h_{2}-k}(s)\Big{)}\mathrm{d}p_{1}\mathrm{d}h_{2}\] \[-\int_{\Lambda^{*2}}\chi(h_{1},h_{1}+k)\chi^{\perp}(p_{2},p_{2}+ k)[\varphi(h_{1})-\varphi(h_{1}+k)]\] \[\qquad\times\ \nu\Big{(}a_{h_{1}}^{*}(t)a_{h_{1}+k}(t)a_{p_{2}}^{*}(s )a_{p_{2}+k}(s)\Big{)}\mathrm{d}h_{1}\mathrm{d}p_{2}. \tag{7.26}\]
In order to calculate the four terms displayed in the right hand side of (7.26) we use the fact that \(\nu\) is translation invariant and quasi-free. In particular, it is possible to calculate that for any \(p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}\) the following relation holds true
\[\nu\Big{(}a_{p_{1}}^{*}(t)a_{q_{1}}(t)a_{p_{2}}^{*}(s)a_{q_{2}}(s )\Big{)} =\delta(q_{1}-p_{1})\delta(q_{2}-p_{2})f_{0}(p_{1})f_{0}(p_{2})\] \[+\delta(q_{1}-p_{2})\delta(q_{2}-p_{1})e^{i(t-s)(E_{p_{1}}-E_{p_{ 2}})}f_{0}(p_{1})\widetilde{f}_{0}(p_{2}). \tag{7.27}\]
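For the reader's convenience, here is a sketch of (7.27), assuming the diagonal free evolution \(a_{p}(t)=e^{-itE_{p}}a_{p}\), which is the convention consistent with (7.27). Since \(\nu\) is quasi-free, the four-point function factorizes into pairings; the pairings involving two creation or two annihilation operators vanish here, leaving

\[\nu\big{(}a_{p_{1}}^{*}(t)a_{q_{1}}(t)a_{p_{2}}^{*}(s)a_{q_{2}}(s)\big{)}=\nu\big{(}a_{p_{1}}^{*}(t)a_{q_{1}}(t)\big{)}\,\nu\big{(}a_{p_{2}}^{*}(s)a_{q_{2}}(s)\big{)}+\nu\big{(}a_{p_{1}}^{*}(t)a_{q_{2}}(s)\big{)}\,\nu\big{(}a_{q_{1}}(t)a_{p_{2}}^{*}(s)\big{)}\,\]

both with positive sign (the nested pairing carries no crossing). Translation invariance gives \(\nu(a_{p}^{*}a_{q})=\delta(p-q)f_{0}(p)\) and \(\nu(a_{q}a_{p}^{*})=\delta(p-q)\widetilde{f}_{0}(p)\); the time phases cancel on the diagonal in the first product and combine into \(e^{i(t-s)(E_{p_{1}}-E_{p_{2}})}\) in the second, which yields (7.27).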
We note that this implies that the third and fourth term in (7.26) are zero. Indeed, for the third term we choose in (7.27) \(p_{1}=p_{1}\), \(q_{1}=p_{1}-k\), \(p_{2}=h_{2}\) and \(q_{2}=h_{2}-k\) to find that
\[\nu\Big{(}a_{p_{1}}^{*}(t)a_{p_{1}-k}(t)a_{h_{2}}^{*}(s)a_{h_{2}- k}(s)\Big{)} =|\Lambda|\delta(k)f_{0}(p_{1})f_{0}(h_{2})\] \[+|\Lambda|\delta(k)\delta(p_{1}-h_{2})e^{i(t-s)(E_{p_{1}}-E_{h_{ 2}})}f_{0}(p_{1})\widetilde{f}_{0}(h_{2}). \tag{7.28}\]
It suffices to note that the right hand side is proportional to \(\delta(k)\), and that \([\varphi(p_{1})-\varphi(p_{1}-k)]\delta(k)=0\). This shows that the third term has a null contribution; the same analysis holds for the fourth term in (7.26).
In a similar fashion, the first and second term in (7.26) can be collected and re-written thanks to (7.27) to find that
\[\nu\big{(}D_{k}^{*} (t,\varphi)D_{k}(s)\big{)} \tag{7.29}\] \[=|\Lambda|\int_{\Lambda^{*}}\chi^{\perp}(p,p-k)[\varphi(p)- \varphi(p-k)]e^{i(t-s)(E_{p}-E_{p-k})}\ f_{0}(p)\widetilde{f}_{0}(p-k)\mathrm{d}p\] \[+|\Lambda|\int_{\Lambda^{*}}\chi(h,h+k)[\varphi(h)-\varphi(h+k)] e^{i(t-s)(E_{h}-E_{h+k})}\ f_{0}(h)\widetilde{f}_{0}(h+k)\mathrm{d}h\]
where we have dropped all terms in (7.27) containing \(\delta(k)\). Now, we integrate in time the above equation to find that
\[\int_{0}^{t}\int_{0}^{t_{1}}\nu\Big{(}G_{k}(t_{1}-t_{2})D_{k}^{*}(t_{1},\varphi)D_{k}(t_{2})\Big{)}\mathrm{d}t_{2}\mathrm{d}t_{1}\\ =|\Lambda|\int_{\Lambda^{*}}\chi^{\perp}(p,p-k)\Bigg{(}\int_{0}^{t}\int_{0}^{t_{1}}G_{k}(t_{2})e^{it_{2}(E_{p}-E_{p-k})}\mathrm{d}t_{2}\mathrm{d}t_{1}\Bigg{)}\\ \times[\varphi(p)-\varphi(p-k)]f_{0}(p)\widetilde{f}_{0}(p-k)\mathrm{d}p\\ +|\Lambda|\int_{\Lambda^{*}}\chi(h,h+k)\Bigg{(}\int_{0}^{t}\int_{0}^{t_{1}}G_{k}(t_{2})e^{it_{2}(E_{h}-E_{h+k})}\mathrm{d}t_{2}\mathrm{d}t_{1}\Bigg{)}\\ \times[\varphi(h)-\varphi(h+k)]f_{0}(h)\widetilde{f}_{0}(h+k)\mathrm{d}h \tag{7.30}\]
To finalize the proof, let us identify the right hand side of the last displayed equation with the operator \(B_{t}\), as given by Def. 3. Indeed, consider the second term of Eq. (7.30). We may calculate explicitly the integrals with respect to time as follows. First, we rewrite \(G_{k}(t-s)\) in terms of the variable \(r=p-k\)
\[G_{k}(t-s)=\int_{\Lambda^{*}}\chi(r)\chi^{\perp}(r+k)e^{-i(t-s)(E_{r}+E_{r+k}) }\mathrm{d}r. \tag{7.31}\]
Let \(h\in\mathcal{B}\cap(\mathcal{B}-k)\). After integration in time and taking the real part we find
\[2\mathrm{Re}\,\int_{0}^{t}\int_{0}^{t_{1}}G_{k}(t_{2})e^{it_{2}( E_{h}-E_{h+k})}\mathrm{d}t_{2}\mathrm{d}t_{1}\\ =\int_{\Lambda^{*}}\chi(r)\chi^{\perp}(r+k)\ 2\mathrm{Re}\int_{0}^{t }\int_{0}^{t_{1}}e^{it_{2}(E_{h}-E_{h+k}-E_{r}-E_{r+k})}\mathrm{d}t_{2} \mathrm{d}t_{1}\mathrm{d}r\\ =\int_{\Lambda^{*}}\chi(r)\chi^{\perp}(r+k)\,2\pi t\delta_{t}\, \big{(}E_{h}-E_{h+k}-E_{r}-E_{r+k}\big{)}\mathrm{d}r\\ =2\pi t\,\alpha_{t}^{H}(h,k). \tag{7.32}\]
Here, \(\delta_{t}(x)\) corresponds to the mollified Delta function defined as \(\delta_{t}(x)=t\delta_{1}(tx)\) where \(\delta_{1}(x)=\frac{2}{\pi}\sin^{2}(x/2)/x^{2}.\) On the other hand, \(\alpha_{t}^{H}\) corresponds to the object defining \(B_{t}\), see (2.25) in Def. 3.
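For completeness, the time integration performed in (7.32) rests on the elementary identity, with \(x=E_{h}-E_{h+k}-E_{r}-E_{r+k}\),

\[2\mathrm{Re}\int_{0}^{t}\int_{0}^{t_{1}}e^{it_{2}x}\,\mathrm{d}t_{2}\mathrm{d}t_{1}=2\mathrm{Re}\,\bigg{[}\frac{1}{ix}\Big{(}\frac{e^{itx}-1}{ix}-t\Big{)}\bigg{]}=\frac{2\big{(}1-\cos(tx)\big{)}}{x^{2}}=\frac{4\sin^{2}(tx/2)}{x^{2}}=2\pi t\,\delta_{t}(x)\,\]

in agreement with the definition of \(\delta_{t}\) recalled above.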
\[\chi^{\perp}(p,p-k)2\mathrm{Re}\,\int_{0}^{t}\int_{0}^{t_{1}}G_{k}(t_{2})e^{ it_{2}(E_{p}-E_{p-k})}\mathrm{d}t_{2}\mathrm{d}t_{1}=2\pi t\,\alpha_{t}^{P}(p,k)\]
where \(\alpha_{t}^{P}\) is the quantity given in (2.26), see Def. 3. We integrate against \(|\hat{V}(k)|^{2}\) and change variables \(h\mapsto h-k,\ p\mapsto p+k\) in the "gain term" of (7.30) to find that
\[2\mathrm{Re}\,\int_{\Lambda^{*}}|\hat{V}(k)|^{2}\int_{0}^{t}\int_{0}^{t_{1}}\nu\Big{(}G_{k}(t_{1}-t_{2})D_{k}^{*}(t_{1},\varphi)D_{k}(t_{2})\Big{)}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k=-t\,\langle\varphi,B_{t}[f_{0}]\rangle\]
where \(B_{t}\) is the operator given in Eq. (2.24). This finishes the proof.
Proof of Claim 2.: Let us fix throughout the proof the time label \(t\in\mathbb{R}\), the parameter \(m>0\) and the test function \(\varphi\in\ell^{1}\). Based on the fact that \(\|G_{k}(\tau)\|_{B(\mathscr{F})}\lesssim R\) for all \(k\in\Lambda^{*}\) and \(\tau\in\mathbb{R}\), our starting point is the following elementary inequality
\[|M_{1}^{\delta}(t,\varphi)|\,\lesssim\,R\,\|\hat{V}\|_{\ell^{2}}^{2}\,t^{3}\sup_{k\in\operatorname{supp}\hat{V},t_{i}\in[0,t]}\Big{|}\nu_{t_{3}}\Big{(}\Big{[}\,D_{k}^{*}(t_{1},\varphi)D_{k}(t_{2})\,,\,\mathfrak{h}_{I}(t_{3})\,\Big{]}\Big{)}\Big{|}. \tag{7.33}\]
Thus, it suffices to estimate the sup quantity in Eq. (7.33). For notational convenience we do not write explicitly the time variables \(t_{i}\in[0,t]\) for \(i=1,2,3\); since our estimates are uniform in these variables, there is no risk in doing so. In addition, we shall only give estimates for pure states \(\langle\Psi,\ \cdot\ \Psi\rangle\) and then apply Remark 5.1 to conclude estimates for the mixed state \(\nu\). Finally, we shall extensively use the results contained in Section 4, that is, the estimates of Type-I, Type-II, Type-III and Type-IV, contained in Lemmas 4.5, 4.6, 4.7 and 4.8, respectively, together with the several commutation relations.
Let us fix \(k\in\operatorname{supp}\hat{V}\). We begin by expanding the commutator in (7.33) as follows
\[\nu\big{(}[D_{k}^{*}(\varphi)D_{k},\mathfrak{h}_{I}]\big{)}\\ =\lambda\nu\big{(}[D_{k}^{*}(\varphi)D_{k},V_{F}]\big{)}+\lambda \nu\big{(}[D_{k}^{*}(\varphi)D_{k},V_{FB}]\big{)}+\lambda\nu\big{(}[D_{k}^{*}( \varphi)D_{k},V_{B}]\big{)} \tag{7.34}\]
Let us estimate the three terms on the right hand side of Eq. (7.34) separately.
_The F term of (7.34)._ A straightforward expansion of \(V_{F}\) based on the representation (5.5) yields
\[[D_{k}^{*}(\varphi)D_{k},V_{F}] =\int_{\Lambda^{*}}\hat{V}(\ell)\ D_{k}^{*}(\varphi)[D_{k},D_{ \ell}^{*}]D_{\ell}\ \mathrm{d}\ell+\int_{\Lambda^{*}}\hat{V}(\ell)\ D_{k}^{*}(\varphi)D_{\ell}^{ *}[D_{k},D_{\ell}]\ \mathrm{d}\ell\] \[+\int_{\Lambda^{*}}\hat{V}(\ell)\ D_{\ell}^{*}[D_{k}^{*}(\varphi),D_{\ell}]D_{k}\ \mathrm{d}\ell+\int_{\Lambda^{*}}\hat{V}(\ell)\ [D_{k}^{*}( \varphi),D_{\ell}^{*}]D_{\ell}D_{k}\ \mathrm{d}\ell. \tag{7.35}\]
Each of the four terms in the right hand side above is estimated in the same way. Let us look in detail at the first one. For \(\Psi\in\mathscr{F}\) and \(\ell\in\Lambda^{*}\), we find using the Type-I estimate for \(D_{\ell}\) and \([D_{\ell},D_{k}]\), the Type-IV estimate for \(D_{k}(\varphi)\) and the commutation relation \([\mathcal{N},D(\varphi)]=0\)
\[|\,\langle\Psi,D_{k}^{*}(\varphi)[D_{k},D_{\ell}^{*}]D_{\ell} \Psi\rangle| =|\,\langle[D_{\ell},D_{k}]D_{k}(\varphi)\Psi,D_{\ell}\Psi\rangle\,|\] \[\leqslant\|\mathcal{N}D_{k}(\varphi)\Psi\|\,\|\mathcal{N}\Psi\|\] \[=\|D_{k}(\varphi)\mathcal{N}\Psi\|\,\|\mathcal{N}\Psi\|\] \[\leqslant\|D_{k}(\varphi)\|\|\mathcal{N}\Psi\|^{2}\] \[\leqslant|\Lambda|\|\varphi\|_{\ell^{1}}\|\mathcal{N}\Psi\|^{2}. \tag{7.36}\]
We conclude that there is a constant \(C>0\) such that
\[\big{|}\nu\big{(}[D_{k}^{*}(\varphi)D_{k},V_{F}]\big{)}\big{|}\leqslant C|\Lambda|\|\hat{V}\|_{\ell^{1}}\|\varphi\|_{\ell^{1}}\nu(\mathcal{N}^{2}). \tag{7.37}\]
_The FB term of (7.34)._ The relation \(\overline{\nu(O)}=\nu(O^{*})\) and a straightforward expansion show that
\[\nu\big{(}[D_{k}^{*}(\varphi)D_{k}, V_{FB}]\big{)} \tag{7.38}\] \[=\int_{\Lambda^{*}}\hat{V}(\ell)\ \nu\big{(}[D_{k}^{*}(\varphi)D_{k},D_{ \ell}^{*}\,b_{\ell}]\big{)}\ \mathrm{d}\ell-\int_{\Lambda^{*}}\hat{V}(\ell)\ \overline{\nu\big{(}[D_{k}^{*}D_{k}(\varphi),D_{\ell}^{*}\,b_{ \ell}]\big{)}}\ \mathrm{d}\ell.\]
We only estimate the first term in (7.38), since the second one is analogous. Indeed, we expand the commutator to find that
\[\nu\big{(}[D_{k}^{*}(\varphi)D_{k},D_{\ell}^{*}\,b_{\ell}]\big{)} =\nu\big{(}D_{k}^{*}(\varphi)D_{\ell}^{*}[D_{k},\,b_{\ell}]\big{)} +\nu\big{(}D_{k}^{*}(\varphi)[D_{k},D_{\ell}^{*}]\,b_{\ell}\big{)} \tag{7.39}\] \[\quad+\nu\big{(}D_{\ell}^{*}[D_{k}^{*}(\varphi),\,b_{\ell}]D_{k} \big{)}+\nu\big{(}[D_{k}^{*}(\varphi),D_{\ell}^{*}]\,b_{\ell}D_{k}\big{)}. \tag{7.40}\]
We bound these four terms in the bullet points below.
\(\bullet\) Since both \([D_{k},b_{\ell}]\) and \(b_{\ell}\) satisfy Type-II estimates, the two terms in (7.39) are bounded above in the same way. Let us look at the first one in detail. Indeed, for \(\Psi\in\mathscr{F}\) and \(\ell\in\mathrm{supp}\hat{V}\) we find
\[|\,\langle\Psi,D_{k}^{*}(\varphi)D_{\ell}^{*}[D_{k},\,b_{\ell}] \Psi\rangle\,| =\ |\,\langle D_{\ell}D_{k}(\varphi)\Psi,[D_{k},\,b_{\ell}]\Psi \rangle\,|\] \[\leqslant\ \|D_{k}(\varphi)\|\ \|\mathcal{N}\Psi\|\ \|[D_{k},b_{\ell}]\Psi\|\] \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}}\|\mathcal{N}\Psi\|R^{ \frac{1}{2}}\ \|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\| \tag{7.41}\]
where we have used the Type-I estimate for \(D_{\ell}\), the Type-II estimate for \([D_{k},b_{\ell}]\), the Type-IV estimate for \(D_{k}(\varphi)\), and the commutation relation \([\mathcal{N},D_{k}(\varphi)]=0\).
\(\bullet\) For the first term in (7.40) we consider \(\Psi\in\mathscr{F}\) and \(\ell\in\mathrm{supp}\hat{V}\). We find
\[|\,\langle\Psi,D_{\ell}^{*}[D_{k}^{*}(\varphi),\,b_{\ell}]D_{k} \Psi\rangle\,|\ \leqslant\ \|[D_{k}^{*}(\varphi),b_{\ell}]\|\,\|\mathcal{N}\Psi\|^{2}\ \lesssim\ |\Lambda|p_{F}^{-m}\|\varphi\|_{\ell_{m}^{1}}\|\mathcal{N}\Psi\|^{2}. \tag{7.42}\]
where we have used the Type-I estimate for \(D_{k}\) and \(D_{\ell}\), and the Type-III estimate \([D_{k}^{*}(\varphi),b_{\ell}]\).
\(\bullet\) For the second term in (7.40) we consider \(\Psi\in\mathscr{F}\) and \(\ell\in\mathrm{supp}\hat{V}\). We find
\[|\,\langle\Psi,[D_{k}^{*}(\varphi),D_{\ell}^{*}]\,b_{\ell}D_{k}\Psi\rangle\,| \leqslant|\,\langle[D_{\ell},D_{k}(\varphi)]\Psi,[b_{\ell},D_{k}]\Psi\rangle\,|+|\,\langle D_{k}^{*}[D_{\ell},D_{k}(\varphi)]\Psi,b_{\ell}\Psi\rangle\,|\] \[\lesssim\|[D_{\ell},D_{k}(\varphi)]\|\,\|\Psi\|\,\|[b_{\ell},D_{k}]\Psi\|+\|[D_{\ell},D_{k}(\varphi)]\|\,\|\mathcal{N}\Psi\|\,\|b_{\ell}\Psi\|\] \[\lesssim|\Lambda|\|\varphi\|_{\ell^{1}}R^{\frac{1}{2}}\|(\mathcal{N}+1)\Psi\|\ \|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\, \tag{7.43}\]
where we have used the Type-I estimate for \(D_{k}^{*}\), Type-II estimates for \([b_{\ell},D_{k}]\) and \(b_{\ell}\), and Type-IV estimates for \([D_{\ell},D_{k}(\varphi)]\).
We put back the three estimates found in the bullet points above to find that there exists a constant \(C>0\) such that
\[\big{|}\nu\big{(}[D_{k}^{*}(\varphi)D_{k},V_{FB}]\big{)}\big{|}\leqslant C\|\hat{V}\|_{\ell^{1}}\|\varphi\|_{\ell_{m}^{1}}\,|\Lambda|\Big{[}R^{\frac{1}{2}}\nu(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+p_{F}^{-m}\nu(\mathcal{N}^{2})^{\frac{1}{2}}\Big{]}\nu(\mathcal{N}^{2})^{\frac{1}{2}}\ . \tag{7.44}\]
_The B term of (7.34)._ Similarly to how we dealt with the second term, we expand
\[\nu\big{(}[D^{*}_{k}(\varphi)D_{k},V_{B}]\big{)} =\int_{\Lambda^{*}}\hat{V}(\ell)\nu\big{(}[D^{*}_{k}(\varphi)D_{k},b^{*}_{\ell}b_{\ell}]\big{)}\mathrm{d}\ell \tag{7.45}\] \[+\frac{1}{2}\int_{\Lambda^{*}}\hat{V}(\ell)\nu\big{(}[D^{*}_{k}(\varphi)D_{k},b_{-\ell}b_{\ell}]\big{)}\mathrm{d}\ell\] (7.46) \[-\frac{1}{2}\int_{\Lambda^{*}}\hat{V}(\ell)\overline{\nu\big{(}[D^{*}_{k}D_{k}(\varphi),b_{-\ell}b_{\ell}]\big{)}}\mathrm{d}\ell. \tag{7.47}\]
We only present a proof of the estimates for the terms in (7.45) and (7.46); the third one is analogous to the second one. In order to ease the notation we shall omit the indices \(k,\ell\in\mathrm{supp}\hat{V}\).
* _Analysis of (7.45)._ We expand the commutator to find that \[[D^{*}(\varphi)D,b^{*}b]=D^{*}(\varphi)b^{*}[D,b]+D^{*}(\varphi)[D,b^{*}]b+[D ^{*}(\varphi),b^{*}b]D\] (7.48) and estimate each term separately. Let us fix a \(\Psi\in\mathscr{F}\).
* The first term in (7.48) may be estimated as \[|\langle\Psi,D^{*}(\varphi)b^{*}[D,b]\Psi\rangle|\] (7.49) \[\leqslant\ \|[b,D(\varphi)]\|\,\|\Psi\|\,\|[D,b]\Psi\|+\|D( \varphi)\|\,\|b\Psi\|\,\|[D,b]\Psi\|\] \[\lesssim\ |\Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}}\|\Psi\|R^{ \frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|+|\Lambda|\|\varphi\|_{ \ell^{1}}R\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|^{2}\] \[\leqslant\ \|\varphi\|_{\ell^{1}_{m}}|\Lambda|\Big{(}p_{F}^{-m}\| \Psi\|+R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\Big{)}R^{\frac{ 1}{2}}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\.\] Here, we have used Type-II estimates for \([D,b]\) and \(b\), Type-III estimates for \([b,D(\varphi)]\), and Type-IV estimates for \(D(\varphi)\).
* The second term in (7.48) may be estimated as \[|\langle\Psi,D^{*}(\varphi)[D,b^{*}]b\Psi\rangle|\] \[\leqslant\ \|D(\varphi)\|\,\|[b,D^{*}]\Psi\|\,\|b\Psi\|+\|[[b,D^{*}],D(\varphi)]\|\,\|\Psi\|\,\|b\Psi\|\] (7.50) \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}}R\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|^{2}+|\Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}}\|\Psi\|R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|\] \[\lesssim\ \|\varphi\|_{\ell^{1}_{m}}|\Lambda|\Big{(}p_{F}^{-m}\|\Psi\|+R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\Big{)}R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\] Here, we have used Type-II estimates for \([b,D^{*}]\) and \(b\), the Type-III estimate for \([[b,D^{*}],D(\varphi)]\), and Type-IV estimates for \(D(\varphi)\).
* The third term in (7.48) may be estimated as \[|\,\langle\Psi,[D^{*}(\varphi),b^{*}b]D\Psi\rangle| \leqslant\ \|[D^{*}(\varphi),b^{*}b]\|\,\|\Psi\|\,\|D\Psi\|\] (7.51) \[\lesssim\ R|\Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}}\,\|\Psi\| \,\|\mathcal{N}\Psi\|\] Here, we have used the Type-I estimate for \(D\), and Type-III estimates and the operator norm bound for \(b\) (see (4.16)) for \(\|[D^{*}(\varphi),b^{*}b]\|\leqslant\|b^{*}\|\,\|[D^{*}(\varphi),b]\|+\|[D^{*}( \varphi),b^{*}]\|\,\|b\|\).
* _Analysis of (7.46)._ Similarly as before, we expand the commutator
\[[D^{*}(\varphi)D,bb]=D^{*}(\varphi)b[D,b]+D^{*}(\varphi)[D,b]b+[D^{*}(\varphi),bb]D. \tag{7.52}\]
and estimate each term separately. We let \(\Psi\in\mathscr{F}\).
\(\bullet\) The first term in (7.52) may be estimated as
\[|\,\langle\Psi,D^{*}(\varphi)b[D,b]\Psi\rangle\,| \leqslant \|D(\varphi)\|\,\|b\|\,\|\Psi\|\,\|[D,b]\Psi\| \tag{7.53}\] \[\lesssim |\Lambda|\|\varphi\|_{\ell^{1}}R^{\frac{3}{2}}\|\Psi\|\,\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\.\]
Here, we have used the Type-II estimate for \([D,b]\), the Type-IV estimate for \(D(\varphi)\), and the operator norm bound \(\|b\|\lesssim R\).
\(\bullet\) The second term in (7.52) may be estimated as
\[|\,\langle\Psi,D^{*}(\varphi)[D,b]b\Psi\rangle\,| \leqslant \|D(\varphi)\|\|[D,b]\|\|\Psi\|\|b\Psi\| \tag{7.54}\] \[\lesssim |\Lambda|\|\varphi\|_{\ell^{1}}R^{\frac{3}{2}}\|\Psi\|\|\mathcal{ N}_{\mathcal{S}}^{1/2}\Psi\|\.\]
Here, we have used the Type-II estimate for \(b\), the Type-IV estimate for \(D(\varphi)\), and the operator norm bound \(\|[D,b]\|\lesssim R\).
\(\bullet\) The third term in (7.52) may be estimated as
\[|\,\langle\Psi,D[D^{*}(\varphi),bb]\Psi\rangle\,| \leqslant \|[D^{*}(\varphi),bb]\|\,\|\Psi\|\,\|D^{*}\Psi\| \tag{7.55}\] \[\lesssim R|\Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}}\,\|\Psi\|\,\|\mathcal{N}\Psi\|\.\]
Here, we have used the Type-I estimate for \(D^{*}\), and Type-III estimates and the operator norm bound for \(b\) (see (4.16)) for \(\|[D^{*}(\varphi),bb]\|\leqslant\|b\|\,\|[D^{*}(\varphi),b]\|+\|[D^{*}(\varphi ),b]\|\,\|b\|\).
Putting together the estimates found in the six bullet points above, we find that there exists a constant \(C>0\) such that for all \(k\in\mathrm{supp}\hat{V}\)
\[\big{|}\nu\big{(}[D^{*}_{k}(\varphi)D_{k},V_{B}]\big{)}\big{|}\leqslant C|\Lambda|\|\varphi\|_{\ell^{1}_{m}}\|\hat{V}\|_{\ell^{1}}\Big{[}R^{\frac{3}{2}}\nu(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R\nu(\mathcal{N}_{\mathcal{S}})+\frac{R^{\frac{1}{2}}}{p_{F}^{m}}\nu(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+\frac{R}{p_{F}^{m}}\nu(\mathcal{N}^{2})^{\frac{1}{2}}\Big{]}\,. \tag{7.56}\]
Finally, we can go back to the original decomposition found in (7.34), plug it back in the starting point (7.33), and use the estimates found in Eqs. (7.37), (7.44) and (7.56) to find that there exists a constant \(C>0\) such that
\[|M^{\delta}_{1}(t,\varphi)| \leqslant C|\Lambda|\lambda t^{3}R\|\varphi\|_{\ell^{1}_{m}}\|\hat{V}\|_{\ell^{2}}^{2}\,\|\hat{V}\|_{\ell^{1}} \tag{7.57}\] \[\quad\times\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{\frac{3}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R\nu_{\tau}(\mathcal{N}_{\mathcal{S}})+\frac{R^{\frac{1}{2}}}{p_{F}^{m}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+\frac{R}{p_{F}^{m}}\nu_{\tau}(\mathcal{N})^{\frac{1}{2}}\Big{)}\.\]
To conclude, we note that \(\nu(\mathcal{N}_{\mathcal{S}})\leqslant R^{1/2}\nu(\mathcal{N})\) so that the third term in the right hand side above can be absorbed into the fourth one. This finishes the proof.
#### 7.1.2. Analysis of \(M^{\mathcal{R}}\)
Let us estimate the second term of the right hand side in (7.17).
**Claim 3**.: _For all \(m>0\) there exists a constant \(C>0\) such that for all \(t\geqslant 0\) and \(\varphi\in\ell^{1}\) the following estimate holds true_
\[|M^{\mathcal{R}}(t,\varphi)|\,\leqslant\,Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}| \Lambda|\|\varphi\|_{\ell^{1}_{m}}\sup_{\tau\leqslant t}\Big{(}R^{\frac{1}{2} }\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+p_{F}^{-m}\Big{)}\nu_{ \tau}(\mathcal{N}^{2})^{\frac{1}{2}}. \tag{7.58}\]
Proof.: Let us fix \(m>0\), \(t\geqslant 0\) and \(\varphi\in\ell^{1}\). Going back to the definition of the main term in (7.6), we plug in the remainder operator \(\mathcal{R}_{k,\ell}\) defined in (7.16), from which the elementary inequality follows
\[|M^{\mathcal{R}}(t,\varphi)|\ \lesssim\ t^{2}\|\hat{V}\|_{\ell^{1}}^{2}\sup_{k,\ell\in\mathrm{supp}\hat{V},t_{i}\in[0,t]}\Big{|}\nu_{t_{2}}\Big{(}D_{k}^{*}(t_{1},\varphi)\mathcal{R}_{k,\ell}(t_{1},t_{2})D_{\ell}(t_{2})\Big{)}\Big{|}. \tag{7.59}\]
Let us estimate the supremum quantity in the above equation. Since our estimates are uniform in \(t_{1},t_{2}\) we shall omit them. Letting \(\Psi\in\mathscr{F}\), we find that
\[|\,\langle\Psi,D_{k}^{*}(\varphi)\mathcal{R}_{k,\ell}D_{\ell}\Psi\rangle\,| \leqslant\ |\,\langle D_{k}(\varphi)\mathcal{R}_{k,\ell}^{*}\Psi,D_{\ell}\Psi\rangle\,|+|\,\langle\Psi,[D_{k}^{*}(\varphi),\mathcal{R}_{k,\ell}]D_{\ell}\Psi\rangle\,|\] \[\leqslant\ \|D_{k}(\varphi)\|\,\|\mathcal{R}_{k,\ell}^{*}\Psi\|\,\|D_{\ell}\Psi\|+\|\Psi\|\,\|[D_{k}^{*}(\varphi),\mathcal{R}_{k,\ell}]\|\,\|D_{\ell}\Psi\|. \tag{7.60}\]
Letting \(k,\ell\in\mathrm{supp}\hat{V}\), we find the following estimates for the quantities containing \(\mathcal{R}_{k,\ell}\)
\[\|\mathcal{R}_{k,\ell}\Psi\|\lesssim R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S }}^{1/2}\Psi\|\quad\text{and}\quad\|[D_{k}^{*}(\varphi),\mathcal{R}_{k,\ell}] \|\lesssim|\Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}}. \tag{7.61}\]
The proof of these estimates follows the same lines as the proofs of Lemmas 4.6 and 4.7, so we shall omit it. We combine the last three displayed equations together with Remark 5.1 to conclude the proof of the estimate contained in Eq. (7.58).
#### 7.1.3. Proof of Lemma 7.1
Proof of Lemma 7.1.: The triangle inequality and the decomposition \(M=M_{0}^{\delta}+M_{1}^{\delta}+M^{\mathcal{R}}\) give \(|M-M_{0}^{\delta}|\leqslant|M_{1}^{\delta}|+|M^{\mathcal{R}}|\). It suffices then to use the results contained in Claims 1, 2 and 3.
### Analysis of the remainder terms
In this subsection, we estimate the remainder terms \(R^{(i)}\) (see (7.7)) and give a proof of Lemma 7.2.
Proof of Lemma 7.2.: Throughout the proof, we fix \(m>0\), \(t\geqslant 0\) and \(\varphi\in\ell^{1}_{m}\). We make extensive use of the Type-I, Type-II, Type-III and Type-IV estimates contained in Lemmas 4.5, 4.6, 4.7, and 4.8, respectively, together with the operator bound \(\|b\|\lesssim R\); see (4.16). Due to the similarities, we only show all the details for the proof of (1), and only give the key estimates for the proofs of (2), (3), and (4).
_Proof of (1)_ Our starting point is the elementary estimate
\[|R^{(1)}(t,\varphi)|\,\lesssim\,t^{2}\|\hat{V}\|_{\ell^{1}}^{2}\sup_{k,\ell\in\mathrm{supp}\hat{V},t_{i}\in[0,t]}\Big{|}\nu_{t_{2}}\Big{(}D_{k}^{*}(t_{1},\varphi)B_{\ell}^{*}(t_{2})[b_{k}(t_{1}),D_{\ell}(t_{2})]\Big{)}\Big{|}. \tag{7.62}\]
In view of Remark 5.1, it is sufficient to give estimates for pure states \(\Psi\in\mathscr{F}\). In order to ease the notation, we shall drop the time variables \(t_{1},t_{2}\in[0,t]\), together with the
momentum labels \(k,\ell\in\mathrm{supp}\hat{V}\). Letting \(\Psi\in\mathscr{F}\), we find that
\[|\,\langle\Psi,D^{*}(\varphi)B^{*}[b,D]\Psi\rangle\,| \leqslant \|D(\varphi)\|\|\Psi\|\|B^{*}\|\|[b,D]\Psi\| \tag{7.63}\] \[\lesssim |\Lambda|\|\varphi\|_{\ell^{1}}\|\Psi\|R^{\frac{3}{2}}\|\mathcal{ N}_{\mathcal{S}}^{1/2}\Psi\|,\]
where we used the Type-II estimate for \([b,D]\), the Type-IV estimate for \(D^{*}(\varphi)\), and the norm bound \(\|B\|\leqslant 2\|b\|\lesssim R\). The estimate in Eq. (7.10) now follows from the last two displayed equations, and \(\nu(\mathds{1})=1\).
_Proof of (2)_ Letting \(\Psi\in\mathscr{F}\), we find that
\[|\,\langle\Psi,[D^{*}(\varphi),B^{*}]Db\Psi\rangle\,| \leqslant \|[D^{*}(\varphi),b]\|\|\Psi\|\|Db\Psi\| \tag{7.64}\] \[\lesssim \|[D^{*}(\varphi),b]\|\|\Psi\|\|\mathcal{N}b\Psi\|\] \[\lesssim |\Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}}R\|\Psi\|\|\mathcal{ N}\Psi\|\,\]
where we have used the Type-II estimate, commutation relations and the norm bound for \(b\) to obtain \(\|Db\Psi\|\leqslant\|\mathcal{N}b\Psi\|\leqslant\|b\mathcal{N}\Psi\|\lesssim R \|\mathcal{N}\Psi\|\) ; and the Type-III estimate for \([D^{*}(\varphi),b]\). The proof is finished after one follows the same argument we used for (1).
_Proof of (3)_ Letting \(\Psi\in\mathscr{F}\), we find that
\[|\,\langle\Psi,B^{*}[D(\varphi),D]b\Psi\rangle\,| \leqslant\|B^{*}\|\|\Psi\|\|[D(\varphi),D]\|\|b\Psi\| \tag{7.65}\] \[\lesssim R^{\frac{3}{2}}\|\Psi\||\Lambda|\|\varphi\|_{\ell^{1}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|\, \tag{7.66}\]
where we used the Type-II estimate for \(b\), the Type-IV estimate for \([D(\varphi),D]\), and the norm bound \(\|B\|\lesssim R\). The proof is finished after one follows the same argument we used for (1).
_Proof of (4)_ Letting \(\Psi\in\mathscr{F}\), we find that
\[|\,\langle\Psi,[Db(\varphi),B^{*}D]\Psi\rangle\,| \leqslant 2|\,\langle\Psi,Db(\varphi)B^{*}D\Psi\rangle\,| \tag{7.67}\] \[\lesssim\|D^{*}\Psi\|\|b(\varphi)\|\|B^{*}\|\|D\Psi\|\] \[\lesssim p_{F}^{-m}|\Lambda|\|\varphi\|_{\ell^{1}_{m}}R\|\mathcal{N}\Psi\|^{2}\,\]
where we used the Type-I estimate for \(D\) and \(D^{*}\), the Type-III estimate for \(b(\varphi)\), and the norm bound \(\|B\|\lesssim R\). The proof is finished after one follows the same argument we used for (1).
## 8. Subleading Order Terms
In this section we analyze the \(T_{\alpha,\beta}(t,p)\) terms of the double commutator expansion (3.20) that we regard as subleading order terms. So far, out of the nine terms we have analyzed two leading order terms: \(T_{F,F}\) in Section 6 and \(T_{FB,FB}\) in Section 7. Thus, we shall analyze the remaining seven. We do this in the following five subsections.
### Analysis of \(T_{F,FB}\)
The main result of this subsection is the following proposition, which gives an estimate on the size of \(T_{F,FB}\).
**Proposition 8.1** (Analysis of \(T_{F,FB}\)).: _Let \(T_{F,FB}(t,p)\) be the quantity defined in (3.21) with \(\alpha=F\) and \(\beta=FB\), and let \(m>0\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell_{m}^{1}\) and \(t\geqslant 0\) the following estimate holds true_
\[|T_{F,FB}(t,\varphi)|\ \leqslant\ Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}| \Lambda|\|\varphi\|_{\ell_{m}^{1}}\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{ \frac{1}{2}}\,\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}\nu_{\tau}(\mathcal{N} _{\mathcal{S}})^{\frac{1}{2}}+p_{F}^{-m}\nu_{\tau}(\mathcal{N}^{2})\Big{)} \tag{8.1}\]
_where we recall \(T_{F,FB}(t,\varphi)=\langle\varphi,T_{F,FB}(t)\rangle\) and \(R=|\Lambda|p_{F}^{d-1}\)._
Proof.: For simplicity, we assume \(\varphi\) is real-valued; in the general case, one may expand into real and imaginary parts and use linearity of the commutators. Starting from (3.23) we use the self-adjointness of \(V_{F}(t)\) and \(N(\varphi)=\int_{\Lambda^{*}}\varphi(p)a_{p}^{*}a_{p}\mathrm{d}p\) to get the elementary inequality
\[|T_{F,FB}(t,\varphi)|\] \[\quad=\Big{|}\int_{0}^{t}\int_{0}^{t_{1}}\int_{\Lambda^{*2}}\hat{V}(k)\hat{V}(\ell)2\mathrm{Re}\,\nu_{t_{2}}\Big{(}[[N(\varphi),D_{k}^{*}(t_{1})D_{k}(t_{1})],D_{\ell}^{*}(t_{2})b_{\ell}(t_{2})]\Big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}k\mathrm{d}\ell\Big{|}\] \[\quad\lesssim t^{2}\|\hat{V}\|_{\ell^{1}}^{2}\sup_{k,\ell\in\mathrm{supp}\hat{V},t_{i}\in[0,t]}\Big{|}\nu_{t_{2}}\Big{(}[[N(\varphi),D_{k}^{*}(t_{1})D_{k}(t_{1})],D_{\ell}^{*}(t_{2})b_{\ell}(t_{2})]\Big{)}\Big{|}. \tag{8.2}\]
It suffices now to estimate the supremum quantity in the above equation. In order to ease the notation, we shall drop the time labels \(t_{1},t_{2}\in[0,t]\), together with the momentum variables \(k,\ell\in\mathrm{supp}\hat{V}\). Using the notation \(D^{*}(\varphi)\equiv[N(\varphi),D^{*}]\) we compute the commutator
\[[N(\varphi),D^{*}D]=D^{*}(\varphi)D+D^{*}D(\varphi). \tag{8.3}\]
We shall only show how to estimate the contribution that arises from the first term on the right hand side of (8.3); the second one is analogous. To this end, we expand
\[[D^{*}(\varphi)D,D^{*}b] \tag{8.4}\] \[\quad=D^{*}(\varphi)[D,D^{*}]b+D^{*}(\varphi)D^{*}[D,b]+[D^{*}( \varphi),D^{*}]bD+D^{*}[D^{*}(\varphi),b]D\.\]
Next, we estimate the expectation of each term in (8.4) separately. In view of Remark 5.1, it suffices to provide estimates for pure states \(\Psi\in\mathscr{F}\). We shall make extensive use of Type-I to Type-IV estimates contained in Lemmas 4.5-4.8, the commutation relations from Lemmas 4.3 and 4.4, and operator bounds of the form \(\|b\|_{B(\mathscr{F})}\lesssim R\).
_The first term in (8.4)._ Using the Type-I estimate for \([D^{*},D]\), the Type-II estimate for \(b\), the Type-IV estimate for \(D(\varphi)\) and the commutation relation \([\mathcal{N},D(\varphi)]=0\) we find
\[|\,\langle\Psi,D^{*}(\varphi)[D,D^{*}]b\Psi\rangle\,| \leqslant\ \|[D^{*},D]D(\varphi)\Psi\|\,\|b\Psi\|\] \[\lesssim\ \|\mathcal{N}D(\varphi)\Psi\|R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|\] \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}}\|\mathcal{N}\Psi\|R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|. \tag{8.5}\]
_The second term in (8.4)._ Using the Type-I estimate for \(D\), the Type-II estimate for \([D,b]\), the Type-IV estimate for \(D(\varphi)\) and the commutation relation \([\mathcal{N},D(\varphi)]=0\) we
find
\[|\,\langle\Psi,D^{*}(\varphi)D^{*}[D,b]\Psi\rangle\,| \;\leqslant\;\|DD(\varphi)\Psi\|\,\|[D,b]\Psi\|\] \[\;\lesssim\;\|\mathcal{N}D(\varphi)\Psi\|R^{\frac{1}{2}}\|\mathcal{ N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|\] \[\;\lesssim\;|\Lambda|\|\varphi\|_{\ell^{1}}\|\mathcal{N}\Psi\|R^{ \frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|. \tag{8.6}\]
_The third term in (8.4)._ Using the Type-I estimate for \(D^{*}\), the Type-II estimate for both \(b\) and \([D,b]\), the Type-IV estimate for \([D,D(\varphi)]\) and the commutation relation \([\mathcal{N},[D,D(\varphi)]]=0\) we find
\[|\,\langle\Psi,[D^{*}(\varphi),D^{*}]bD\Psi\rangle\,| \;\leqslant\;|\,\langle[D,D(\varphi)]\Psi,[b,D]\Psi\rangle\,|+|\, \langle[D,D(\varphi)]\Psi,Db\Psi\rangle\,|\] \[\;\leqslant\;\|[D,D(\varphi)]\Psi\|\,\|[b,D]\Psi\|+\|D^{*}[D,D( \varphi)]\Psi\|\,\|b\Psi\|\] \[\;\lesssim\;|\Lambda|\|\varphi\|_{\ell^{1}}\|(\mathcal{N}+1)\Psi \|R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|. \tag{8.7}\]
_The fourth term in (8.4)._ Using the Type-I estimate for \(D\) and the Type-III estimate for \([D^{*}(\varphi),b]\) we find
\[|\,\langle\Psi,D^{*}[D^{*}(\varphi),b]D\Psi\rangle\,| \;\leqslant\;\|[D^{*}(\varphi),b]\|\|D\Psi\|^{2}\;\lesssim\;| \Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}}\|\mathcal{N}\Psi\|^{2}. \tag{8.8}\]
The proof now follows by collecting the previous four estimates in the expansion (8.4), and plugging them back in (8.2).
### Analysis of \(T_{F,B}\)
The main result of this subsection is the following proposition, which gives an estimate on the size of \(T_{F,B}\).
**Proposition 8.2** (Analysis of \(T_{F,B}\)).: _Let \(T_{F,B}(t,p)\) be the quantity defined in (3.21) with \(\alpha=F\) and \(\beta=B\), and let \(m>0\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell^{1}_{m}\) and \(t\geqslant 0\) the following estimate holds true_
\[|T_{F,B}(t,\varphi)|\;\leqslant\;Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}|\Lambda|\|\varphi\|_{\ell^{1}_{m}}\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{\frac{3}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+Rp_{F}^{-m}\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}+R\nu_{\tau}(\mathcal{N}_{\mathcal{S}})\Big{)} \tag{8.9}\]
_where we recall \(T_{F,B}(t,\varphi)=\langle\varphi,T_{F,B}(t)\rangle\) and \(R=|\Lambda|p_{F}^{d-1}\)._
Proof.: For simplicity, we assume \(\varphi\) is real-valued; in the general case, one may expand into real and imaginary parts and use linearity of the commutators. Starting from (3.23) we use the self-adjointness of \(V_{F}(t)\), \(V_{B}(t)\) and \(N(\varphi)=\int_{\Lambda^{*}}\varphi(p)a_{p}^{*}a_{p}\mathrm{d}p\) to get the elementary inequality
\[|T_{F,B}(t,\varphi)| =\Big{|}\int_{0}^{t}\int_{0}^{t_{1}}\mathrm{Re}\,\nu_{t_{2}}\Big{(}[[N(\varphi),V_{F}(t_{1})],V_{B}(t_{2})]\Big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}\Big{|} \tag{8.10}\] \[\lesssim t^{2}\|\hat{V}\|_{\ell^{1}}^{2}\sup_{k,\ell\in\mathrm{supp}\hat{V},t_{i}\in[0,t]}\Big{|}\nu_{t_{2}}\Big{(}[[N(\varphi),D_{k}^{*}(t_{1})D_{k}(t_{1})],b_{\ell}^{*}(t_{2})b_{\ell}(t_{2})]\Big{)}\Big{|}\] \[+t^{2}\|\hat{V}\|_{\ell^{1}}^{2}\sup_{k,\ell\in\mathrm{supp}\hat{V},t_{i}\in[0,t]}\Big{|}\nu_{t_{2}}\Big{(}[[N(\varphi),D_{k}^{*}(t_{1})D_{k}(t_{1})],b_{\ell}(t_{2})b_{-\ell}(t_{2})]\Big{)}\Big{|}\,\]
where in the last line we used the representation of \(V_{F}(t)\) and \(V_{B}(t)\) in terms of \(b\)- and \(D\)-operators found in Eqs. (5.5) and (5.7); the \(b^{*}b^{*}\) term is re-written in terms of \(bb\) upon taking the real part of \(\nu\). Next, we estimate the two supremum quantities in (8.10),
which we shall refer to as an _off-diagonal contribution_, and a _diagonal contribution_, with respect to the operators \(b\) and \(b^{*}\). In view of Remark 5.1, it suffices to provide estimates for pure states \(\Psi\in\mathscr{F}\). Further, in order to ease the notation, we omit the time labels \(t_{1},t_{2}\in[0,t]\) and the momentum variables \(k,\ell\in\mathrm{supp}\hat{V}\). We make extensive use of Type-I to Type-IV estimates contained in Lemmas 4.5-4.8, the commutation relations from Lemmas 4.3 and 4.4, and operator bounds of the form \(\|b\|_{B(\mathscr{F})}\lesssim R\).
_The off-diagonal contribution of (8.10)._ We expand the first commutator as follows
\[[[N(\varphi),D^{*}D],bb]=[D^{*}(\varphi)D,bb]+[D^{*}D(\varphi),bb]\, \tag{8.11}\]
where we recall we use the notation \(D^{*}(\varphi)=[N(\varphi),D^{*}]\). We shall only show in detail how to estimate the first term in (8.11); the second term can be estimated in the same spirit. We expand the second commutator as follows
\[[D^{*}(\varphi)D,bb]=D^{*}(\varphi)b[D,b]+D^{*}(\varphi)[D,b]b+[D^{*}(\varphi ),bb]D. \tag{8.12}\]
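For the reader's convenience, we note that (8.12) is an instance of the Leibniz rule for commutators, \([AB,C]=A[B,C]+[A,C]B\) and \([A,BC]=B[A,C]+[A,B]C\); indeed, \[[D^{*}(\varphi)D,bb]=D^{*}(\varphi)[D,bb]+[D^{*}(\varphi),bb]D\qquad\text{with}\qquad[D,bb]=b[D,b]+[D,b]b\.\] The same algebraic identity is behind the expansions (8.17) and (8.25) below.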
We now estimate the three terms in the right hand side of (8.12).
* _The first term of (8.12)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\,\langle\Psi,D^{*}(\varphi)b[D,b]\Psi\rangle\,| \leqslant\ \|D(\varphi)\|\|\Psi\|\|b\|\|[D,b]\Psi\|\] (8.13) \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}}\|\Psi\|R^{\frac{3}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|\,\] where we used the Type-II estimate for \([D,b]\), the Type-IV estimate for \(D(\varphi)\), and the norm bound \(\|b\|\lesssim R\).
* _The second term of (8.12)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\,\langle\Psi,D^{*}(\varphi)[D,b]b\Psi\rangle\,| \leqslant\ \|D(\varphi)\|\|\Psi\|\|[D,b]\|\|b\Psi\|\] (8.14) \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}}\|\Psi\|R^{\frac{3}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|\,\] where we used the Type-II estimate for \(b\), the Type-IV estimate for \(D^{*}(\varphi)\), and the norm bound \(\|[D,b]\|\lesssim R\).
* _The third term of (8.12)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\,\langle\Psi,[D^{*}(\varphi),bb]D\Psi\rangle\,| \leqslant\ \|[D^{*}(\varphi),bb]\|\|\Psi\|\|\mathcal{N}\Psi\|\] (8.15) \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}_{m}}p_{F}^{-m}R\|\Psi\| \|\mathcal{N}\Psi\|\,\] where we used the Type-I estimate for \(D\), the Type-III estimate for \([D^{*}(\varphi),b]\) and the norm bound \(\|b\|\lesssim R\).
We collect the estimates found above and put them back in (8.11) to find that the off-diagonal contribution satisfies the following upper bound
\[\Big{|}\nu\Big{(}[[N(\varphi),D^{*}D],bb]\Big{)}\Big{|}\ \lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}_{m}}\Big{(}R^{\frac{3}{2}}\nu( \mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+\frac{R}{p_{F}^{m}}\nu(\mathcal{N}^ {2})^{\frac{1}{2}}\Big{)}. \tag{8.16}\]
_The diagonal contribution of (8.10)._ As before, we expand the commutator as follows.
\[[D^{*}(\varphi)D,b^{*}b]=D^{*}(\varphi)b^{*}[D,b]+D^{*}(\varphi)[D,b^{*}]b+[D ^{*}(\varphi),b^{*}b]D. \tag{8.17}\]
These three terms are estimated as follows.
* _The first term of (8.17)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\langle\Psi,D^{*}(\varphi)b^{*}[D,b]\Psi\rangle|\] \[\qquad\leqslant|\left\langle D(\varphi)b\Psi,[D,b]\Psi\right\rangle|+|\left\langle[D(\varphi),b]\Psi,[D,b]\Psi\right\rangle|\] \[\qquad\lesssim\|D(\varphi)\|\|b\Psi\|\|[D,b]\Psi\|+\|[D(\varphi),b]\|\,\|\Psi\|\,\|[D,b]\|\,\|\Psi\|\] \[\qquad\lesssim|\Lambda|\|\varphi\|_{\ell_{m}^{1}}\Big{(}R\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|^{2}+p_{F}^{-m}R\|\Psi\|^{2}\Big{)}\.\] (8.18) where we used the Type-II estimate for \(b\) and \([D,b]\), the Type-III estimate for \([D(\varphi),b]\), and the Type-IV estimate for \(D(\varphi)\).
* _The second term of (8.17)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\langle\Psi,D^{*}(\varphi)[D,b^{*}]b\Psi\rangle|\] \[\qquad\leqslant|\left\langle D(\varphi)[D^{*},b]\Psi,b\Psi\right\rangle|+|\left\langle[D(\varphi),[D^{*},b]]\Psi,b\Psi\right\rangle|\] \[\qquad\lesssim\|D(\varphi)\|\|[D^{*},b]\Psi\|\|b\Psi\|+\|[D(\varphi),[D^{*},b]]\|\|\Psi\|\|b\Psi\|\] \[\qquad\lesssim|\Lambda|\|\varphi\|_{\ell_{m}^{1}}\Big{(}R\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|^{2}+p_{F}^{-m}R\|\Psi\|^{2}\Big{)}\,\] (8.19) where we used the Type-II estimate for \(b\) and \([D^{*},b]\), the Type-III estimate for \([D(\varphi),[D^{*},b]]\), and the Type-IV estimate for \(D(\varphi)\).
* _The third term of (8.17)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\left\langle\Psi,[D^{*}(\varphi),b^{*}b]D\Psi\right\rangle| \leqslant\|\Psi\|\|[D^{*}(\varphi),b^{*}b]\|\|D\Psi\|\] \[\qquad\leqslant\Big{(}\|b^{*}\|\,\|[D^{*}(\varphi),b]\|+\|[D^{*}(\varphi),b^{*}]\|\|b\|\Big{)}\|\Psi\|\|\mathcal{N}\Psi\|\] \[\qquad\lesssim|\Lambda|\|\varphi\|_{\ell_{m}^{1}}p_{F}^{-m}R\|\Psi\|\|\mathcal{N}\Psi\|\,\] (8.20) where we used the Type-I estimate for \(D\), the Type-III estimate for \([D^{*}(\varphi),b]\) and \([D^{*}(\varphi),b^{*}]\), and the norm bound \(\|b\|\lesssim R\).
We gather the three above estimates to find that the diagonal contribution satisfies the following upper bound
\[\Big{|}\nu\Big{(}[[N(\varphi),D^{*}D],b^{*}b]\Big{)}\Big{|}\ \lesssim\ |\Lambda|\|\varphi\|_{\ell_{m}^{1}}\Big{(}R\nu(\mathcal{N}_{ \mathcal{S}})+\frac{R}{p_{F}^{m}}\nu(\mathcal{N}^{2})^{\frac{1}{2}}\Big{)}. \tag{8.21}\]
The proof of the proposition is finished once we gather the diagonal and off-diagonal contributions and plug them back in (8.10).
### Analysis of \(T_{FB,F}\)
In this subsection, we analyze the term \(T_{FB,F}\). Our main result is the estimate contained in the next proposition.
**Proposition 8.3** (Analysis of \(T_{FB,F}\)).: _Let \(T_{FB,F}(t,p)\) be the quantity defined in (3.21) with \(\alpha=FB\) and \(\beta=F\), and let \(m>0\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell_{m}^{1}\) and \(t\geqslant 0\) the following estimate holds true_
\[|T_{FB,F}(t,\varphi)|\ \leqslant C\,t^{2}\|\hat{V}\|_{\ell^{1}}^{2}|\Lambda|\| \varphi\|_{\ell_{m}^{1}}\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{\frac{1}{2} }\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+p_{F}^{-m}\nu_{\tau}( \mathcal{N}^{2})^{\frac{1}{2}}\Big{)}\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}} \tag{8.22}\]
_where we recall \(T_{FB,F}(t,\varphi)=\left\langle\varphi,T_{FB,F}(t)\right\rangle\) and \(R=|\Lambda|p_{F}^{d-1}\)._
Proof.: For simplicity, we assume \(\varphi\) is real-valued; in the general case, one may expand into real and imaginary parts and use linearity of the commutators. Starting from (3.23) we use the self-adjointness of \(V_{FB}(t)\), \(V_{F}(t)\) and \(N(\varphi)=\int_{\Lambda^{*}}\varphi(p)a_{p}^{*}a_{p}\mathrm{d}p\) to get the elementary inequality
\[|T_{FB,F}(t,\varphi)| =\Big{|}\int_{0}^{t}\int_{0}^{t_{1}}\mathrm{Re}\,\nu_{t_{2}}\Big{(}[[N(\varphi),V_{FB}(t_{1})],V_{F}(t_{2})]\Big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}\Big{|} \tag{8.23}\] \[\lesssim t^{2}\|\hat{V}\|_{\ell^{1}}^{2}\sup_{k,\ell\in\mathrm{supp}\hat{V},t_{i}\in[0,t]}\Big{|}\nu_{t_{2}}\Big{(}[[N(\varphi),D_{k}^{*}(t_{1})b_{k}(t_{1})],D_{\ell}^{*}(t_{2})D_{\ell}(t_{2})]\Big{)}\Big{|}\]
where in the last line we used the representation of \(V_{FB}(t)\) and \(V_{F}(t)\) in terms of \(b\)- and \(D\)-operators found in Eqs. (5.5) and (5.6); the \(D_{k}^{*}b_{-k}^{*}\) term is re-written in terms of \(D_{k}^{*}b_{k}\) upon taking the real part of \(\nu\). Next, we estimate the supremum quantity in (8.23). In view of Remark 5.1, it suffices to provide estimates for pure states \(\Psi\in\mathscr{F}\). In order to ease the notation, we omit the variables \(t_{1},t_{2}\in[0,t]\) and \(k,\ell\in\mathrm{supp}\hat{V}\). We shall make extensive use of Type-I to Type-IV estimates contained in Lemmas 4.5-4.8, and the commutation relations from Lemmas 4.3 and 4.4.
We expand the first commutator in terms of \(D^{*}(\varphi)=[N(\varphi),D^{*}]\) and \(b(\varphi)=[N(\varphi),b]\) as follows
\[[[N(\varphi),D^{*}b],D^{*}D]=[D^{*}(\varphi)b,D^{*}D]+[D^{*}b( \varphi),D^{*}D]. \tag{8.24}\]
We dedicate the rest of the proof to estimate the expectation of the two terms in the right hand side of (8.24).
_The first term of (8.24)._ We break up the second commutator into three pieces
\[[D^{*}(\varphi)b,D^{*}D]=D^{*}(\varphi)D^{*}[b,D]+D^{*}(\varphi )[b,D^{*}]D+[D^{*}(\varphi),D^{*}D]b \tag{8.25}\]
which we now estimate separately.
* _The first term of (8.25)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\left\langle\Psi,D^{*}(\varphi)D^{*}[b,D]\Psi\right\rangle| \leqslant\ \|DD(\varphi)\Psi\|\|[b,D]\Psi\|\] (8.26) \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}}R^{\frac{1}{2}}\|\mathcal{N}\Psi\|\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\,\] where we used the Type-I estimate for \(D\), the Type-II estimate for \([D,b]\), the Type-IV estimate for \(D(\varphi)\), and the commutation relation \([\mathcal{N},D(\varphi)]=0\).
* _The second term of (8.25)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\langle\Psi,D^{*}(\varphi)[b,D^{*}]D\Psi\rangle|\] (8.27) \[\leqslant\ |\left\langle\Psi,D^{*}(\varphi)D[b,D^{*}]\Psi\right\rangle|+|\left\langle\Psi,D^{*}(\varphi)[[b,D^{*}],D]\Psi\right\rangle|\] \[\lesssim\ \|D^{*}D(\varphi)\Psi\|\|[b,D^{*}]\Psi\|+\|D(\varphi)\|\|\Psi\|\,\|[[b,D^{*}],D]\Psi\|\] \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}}R^{\frac{1}{2}}\|(\mathcal{N}+\mathds{1})\Psi\|\|\mathcal{N}_{\mathcal{S}}^{1/2}\Psi\|\,\] where we used the Type-I estimate for \(D^{*}\), the Type-II estimate for \([b,D^{*}]\) and \([[b,D^{*}],D]\), the Type-IV estimate for \(D(\varphi)\), and the commutation relation \([\mathcal{N},D(\varphi)]=0\).
* _The third term of (8.25)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\,\langle\Psi,[D^{*}(\varphi),D^{*}D]b\Psi\rangle\,| \leqslant\ \|[D(\varphi),D^{*}D]\Psi\|\|b\Psi\|\] (8.28) \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}}R^{\frac{1}{2}}\|\mathcal{N}\Psi\|\|\mathcal{N}^{1/2}_{\mathcal{S}}\Psi\|\,\] where we used the Type-I estimates for \(D\) and \(D^{*}\), the Type-II estimate for \(b\), the Type-IV estimate for \([D(\varphi),D]\) and \([D(\varphi),D^{*}]\), and the commutation relation \([\mathcal{N},[D(\varphi),D]]=0\).
Upon gathering the last three estimates, we find that the first term of (8.24) satisfies the following upper bound
\[|\nu\big{(}[D^{*}(\varphi)b,D^{*}D]\big{)}|\lesssim|\Lambda|\|\varphi\|_{\ell ^{1}_{m}}R^{\frac{1}{2}}\nu(\mathcal{N}^{2})^{\frac{1}{2}}\nu(\mathcal{N}_{ \mathcal{S}})^{\frac{1}{2}}. \tag{8.29}\]
_The second term of (8.24)._ As before, we break up the second commutator into three pieces
\[[D^{*}b(\varphi),D^{*}D]=D^{*}D^{*}[b(\varphi),D]+D^{*}[b(\varphi),D^{*}]D+b( \varphi)[D^{*},D^{*}D] \tag{8.30}\]
which can be estimated as follows.
* _The first term in (8.30)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\,\langle\Psi,D^{*}D^{*}[b(\varphi),D]\Psi\rangle\,| \leqslant\|DD(\mathcal{N}+2)^{-1}\Psi\|\|(\mathcal{N}+2)[b(\varphi),D]\Psi\|\] (8.31) \[\lesssim|\Lambda|\|\varphi\|_{\ell^{1}_{m}}p_{F}^{-m}\|\mathcal{N}\Psi\|^{2}\,\] where we used the Type-I estimate for \(D\), the Type-III estimate for \([b(\varphi),D]\) and the pull-through formula \((\mathcal{N}+2)[b(\varphi),D]=[b(\varphi),D]\mathcal{N}\).
* _The second term in (8.30)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\,\langle\Psi,D^{*}[b(\varphi),D^{*}]D\Psi\rangle\,| \leqslant\ \|D\Psi\|\|[b(\varphi),D^{*}]\|\|D\Psi\|\] (8.32) \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}_{m}}p_{F}^{-m}\|\mathcal{N}\Psi\|^{2}\,\] where we used the Type-I estimate for \(D\), and the Type-III estimate for \([b(\varphi),D^{*}]\).
* _The third term in (8.30)._ Letting \(\Psi\in\mathscr{F}\), we find that \[|\,\langle\Psi,b(\varphi)[D^{*},D^{*}D]\Psi\rangle\,| \leqslant\ \|b^{*}(\varphi)\mathcal{N}\Psi\|\|[D^{*},D^{*}D](\mathcal{N}+2)^{-1}\Psi\|\] (8.33) \[\lesssim\ |\Lambda|\|\varphi\|_{\ell^{1}_{m}}p_{F}^{-m}\|\mathcal{N}\Psi\|^{2}\,\] where we used the Type-I estimate for \([D^{*},D^{*}D]\), the Type-III estimate for \(b^{*}(\varphi)\), the pull-through formula \((\mathcal{N}+2)b(\varphi)=b(\varphi)\mathcal{N}\) and the commutation relation \([D^{*},\mathcal{N}]=0\).
Upon gathering the last three estimates, we find that the second term of (8.24) satisfies the following upper bound
\[\Big{|}\nu\Big{(}[D^{*}b(\varphi),D^{*}D]\Big{)}\Big{|}\lesssim|\Lambda|\| \varphi\|_{\ell^{1}_{m}}p_{F}^{-m}\nu(\mathcal{N}^{2}). \tag{8.34}\]
The proof of the proposition is finished once we put together the estimates found in Eqs. (8.29) and (8.34) back in (8.24).
### Analysis of \(T_{FB,B}\)
The main result of this subsection is the following proposition. It contains an estimate on the size of \(T_{FB,B}\).
**Proposition 8.4** (Analysis of \(T_{FB,B}\)).: _Let \(T_{FB,B}(t,p)\) be the quantity defined in (3.21) with \(\alpha=FB\), and \(\beta=B\). Further, let \(m>0\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell_{m}^{1}\) and \(t\geqslant 0\) the following estimate holds true_
\[|T_{FB,B}(t,\varphi)|\,\leqslant\,Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}|\Lambda|\|\varphi\|_{\ell_{m}^{1}}\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{\frac{3}{2}}\,\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R^{2}p_{F}^{-m}\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}\Big{)} \tag{8.35}\]
_where we recall \(T_{FB,B}(t,\varphi)=\langle\varphi,T_{FB,B}(t)\rangle\) and \(R=|\Lambda|p_{F}^{d-1}\)._
Proof.: For simplicity, we assume \(\varphi\) is real-valued; in the general case, one may expand into real and imaginary parts and use linearity of the commutators. Starting from (3.23) we use the self-adjointness of \(V_{FB}(t)\), \(V_{B}(t)\) and \(N(\varphi)=\int_{\Lambda^{*}}\varphi(p)a_{p}^{*}a_{p}\mathrm{d}p\) to get the elementary inequality
\[|T_{FB,B}(t,\varphi)| =\Big{|}\int_{0}^{t}\int_{0}^{t_{1}}\mathrm{Re}\,\nu_{t_{2}}\Big{(}[[N(\varphi),V_{FB}(t_{1})],V_{B}(t_{2})]\Big{)}\mathrm{d}t_{1}\mathrm{d}t_{2}\Big{|}\] \[\lesssim t^{2}\|\hat{V}\|_{\ell^{1}}\sup_{k\in\mathrm{supp}\hat{V},t_{i}\in[0,t]}\Big{|}\nu_{t_{2}}\Big{(}[[N(\varphi),D_{k}^{*}(t_{1})b_{k}(t_{1})],V_{B}(t_{2})]\Big{)}\Big{|} \tag{8.36}\]
where in the last line we used the representation of \(V_{FB}(t)\) in terms of \(b\)- and \(D\)-operators found in (5.6); the \(D_{k}^{*}b_{-k}^{*}\) term is re-written in terms of \(D_{k}^{*}b_{k}\) upon taking the real part of \(\nu\). Next, we estimate the supremum quantity in (8.36). In view of Remark 5.1, it suffices to provide estimates for pure states \(\Psi\in\mathscr{F}\). In order to ease the notation, we omit the variables \(t_{1},t_{2}\in[0,t]\). We shall make extensive use of Type-I to Type-IV estimates contained in Lemmas 4.5-4.8, and the commutation relations from Lemmas 4.3 and 4.4.
In terms of \(D_{k}^{*}(\varphi)=[N(\varphi),D_{k}^{*}]\) and \(b_{k}(\varphi)=[N(\varphi),b_{k}]\) we calculate the first commutator to be
\[[[N(\varphi),D_{k}^{*}b_{k}],V_{B}]=[D_{k}^{*}(\varphi)b_{k},V_{B}]+[D_{k}^{*}b_{k}(\varphi),V_{B}],\qquad\forall k\in\mathrm{supp}\hat{V}. \tag{8.37}\]
We shall estimate the expectation of the two terms in (8.37) separately.
_The first term of (8.37)._ We expand \(V_{B}\) into three additional terms. Namely
\[[D_{k}^{*}(\varphi)b_{k},V_{B}] =\int_{\Lambda^{*}}\hat{V}(\ell)\Big{(}[D_{k}^{*}(\varphi)b_{k},b_{\ell}^{*}b_{\ell}]+\frac{1}{2}[D_{k}^{*}(\varphi)b_{k},b_{\ell}b_{-\ell}]+\frac{1}{2}[D_{k}^{*}(\varphi)b_{k},b_{-\ell}^{*}b_{\ell}^{*}]\Big{)}\mathrm{d}\ell\] \[\equiv\int_{\Lambda^{*}}\hat{V}(\ell)\Big{(}C_{1}(k,\ell)+C_{2}(k,\ell)+C_{3}(k,\ell)\Big{)}\mathrm{d}\ell. \tag{8.38}\]
Next, we proceed to analyze the commutators \(C_{j}\) for \(j=1,2,3\) separately.
* _Analysis of \(C_{1}\)._ Here, we make use of the boson commutator \([b_{k},b_{\ell}^{*}]\), see (4.7). In particular, it can be easily verified that for \(k,\ell\in\mathrm{supp}\hat{V}\) it satisfies the estimate \[\|[b_{k}(t),b_{\ell}^{*}(s)]\|_{B(\mathscr{F})}\lesssim R. \tag{8.40}\]
Consequently, \(C_{1}\) can be estimated as follows. Omitting momentarily the variables \(k,\ell\in\mathrm{supp}\hat{V}\) we find \[\big{|}\,\langle\Psi,C_{1}\Psi\rangle\big{|} \leqslant\big{|}\,\langle[b,b^{*}]D(\varphi)\Psi,b\Psi\rangle\big{|}+\big{|}\,\langle\Psi,[D^{*}(\varphi),b^{*}b]b\Psi\rangle\big{|}\] \[\leqslant\|[b,b^{*}]\|\,\|D^{*}(\varphi)\Psi\|\,\|b\Psi\|+\|[D^{*}(\varphi),b^{*}b]\|\|\Psi\|\|b\Psi\|\] \[\lesssim R|\Lambda|\|\varphi\|_{\ell^{1}}\|\Psi\|R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|+|\Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}}R^{2}\|\Psi\|^{2}\,\] (8.41) where we used the Type-II estimate for \(b\), the Type-III estimate for \([D^{*}(\varphi),b]\) and \([D^{*}(\varphi),b^{*}]\), the Type-IV estimate for \(D^{*}(\varphi)\), the norm bound \(\|b\|\lesssim R\) and the commutator bound (8.40).
* _Analysis of \(C_{2}\)._ This one is easier to estimate, because we do not pick up a non-zero commutator between the \(b\) operators. Namely, there holds \(C_{2}(k,\ell)=[D_{k}^{*}(\varphi),b_{\ell}b_{-\ell}]b_{k}.\) Thus, we find (omitting the \(k,\ell\in\mathrm{supp}\hat{V}\) variables) \[|\,\langle\Psi,C_{2}\Psi\rangle\,|\lesssim|\Lambda|\|\varphi\|_{\ell^{1}_{m}}\|\hat{V}\|_{\ell^{1}}^{2}R^{2}p_{F}^{-m}\|\Psi\|^{2}\.\] (8.42)
* _Analysis of \(C_{3}\)._ This is the most intricate term among the three terms we analyze, because it involves higher-order commutators. First we decompose \[C_{3}(k,\ell) =D_{k}^{*}(\varphi)b_{-\ell}^{*}[b_{k},b_{\ell}^{*}]+D_{k}^{*}(\varphi)[b_{k},b_{-\ell}^{*}]b_{\ell}^{*}+[D_{k}^{*}(\varphi),b_{-\ell}^{*}b_{\ell}^{*}]b_{k}\] \[\equiv C_{3,1}(k,\ell)+C_{3,2}(k,\ell)+C_{3,3}(k,\ell)\] (8.43) and analyze each term separately. Let us look at the first one. Omitting the \(k,\ell\in\mathrm{supp}\hat{V}\) variables we find \[\big{|}\,\langle\Psi,C_{3,1}\Psi\rangle\big{|} =\big{|}\,\langle bD(\varphi)\Psi,[b,b^{*}]\Psi\rangle\,\big{|}\] \[\leqslant\|bD(\varphi)\Psi\|\,\|[b,b^{*}]\Psi\|\] \[\leqslant\|[b,D(\varphi)]\|\|\Psi\|\|[b,b^{*}]\Psi\|+\|D(\varphi)\|\|b\Psi\|\|[b,b^{*}]\Psi\|\] \[\lesssim(|\Lambda|p_{F}^{-m}\|\varphi\|_{\ell^{1}_{m}})R\|\Psi\|^{2}+|\Lambda|\|\varphi\|_{\ell^{1}}R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|R\|\Psi\|\] \[\leqslant|\Lambda|\|\varphi\|_{\ell^{1}_{m}}\Big{(}R^{\frac{3}{2}}\|\Psi\|\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|+Rp_{F}^{-m}\|\Psi\|^{2}\Big{)}\] (8.44) where we used the Type-II estimates for \(b\), the Type-III estimate for \([b,D(\varphi)]\), and the commutator bound \(\|[b,b^{*}]\|\leqslant R\), see Eq. (8.40). Let us now look at the second one. Let us recall that the boson commutator can be written as \([b_{k},b_{\ell}^{*}]=\delta(k-\ell)G_{k}\mathds{1}+\mathcal{R}_{k,\ell}\) where \(G_{k}\) is a scalar, and \(\mathcal{R}_{k,\ell}\) is a remainder operator (see (7.14) for details). Thus, we find \[\big{|}\langle\Psi,C_{3,2}(k,\ell)\Psi\rangle\big{|} \leqslant\big{|}\,\langle\Psi,C_{3,1}(k,-\ell)\Psi\rangle\,\big{|}+\big{|}\,\langle\Psi,D_{k}^{*}(\varphi)[\mathcal{R}_{k,-\ell},b_{\ell}]\Psi\rangle\,\big{|}\] \[\leqslant|\Lambda|\|\varphi\|_{\ell^{1}_{m}}\Big{(}R^{\frac{3}{2}}\|\Psi\|\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|+Rp_{F}^{-m}\|\Psi\|^{2}\Big{)}\] \[\quad+|\Lambda|\|\varphi\|_{\ell^{1}}R^{\frac{1}{2}}\|\Psi\|\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|\.\] (8.45) where in the last line we used the upper bound for \(C_{3,1}(k,\ell)\), the Type-IV estimate for \(D_{k}^{*}(\varphi)\), and the following commutator estimate \[\big{\|}[\mathcal{R}_{k,\ell},b_{-\ell}]\Psi\big{\|}\lesssim R^{\frac{1}{2}}\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|\] (8.46)
valid for \(k,\ell\in\operatorname{supp}\hat{V}\).
Let us now look at the third one. Omitting the \(k,\ell\in\operatorname{supp}\hat{V}\) variables we find
\[\big{|}\,\langle\Psi,C_{3,3}\Psi\rangle\,\big{|}\leqslant 2\|b^{*}\|\,\|[D^{*}(\varphi),b^{*}]\|\,\|\Psi\|\,\|b\Psi\|\lesssim|\Lambda|\|\varphi\|_{\ell_{m}^{1}}R^{2}p_{F}^{-m}\|\Psi\|^{2}. \tag{8.47}\]
where we used the Type-III estimate for \([D^{*}(\varphi),b^{*}]\), and the norm bounds \(\|b\|,\|b^{*}\|\lesssim R\).
Putting together the estimates for \(C_{3,1}\), \(C_{3,2}\) and \(C_{3,3}\), we finally find that for all \(k,\ell\in\operatorname{supp}\hat{V}\) there holds
\[\big{|}\,\langle\Psi,C_{3}(k,\ell)\Psi\rangle\,\big{|}\lesssim|\Lambda|\|\varphi\|_{\ell^{1}_{m}}\Big{(}R^{\frac{3}{2}}\|\Psi\|\|\mathcal{N}_{\mathcal{S}}^{\frac{1}{2}}\Psi\|+R^{2}p_{F}^{-m}\|\Psi\|^{2}\Big{)}. \tag{8.48}\]
Finally, we combine the estimates (8.41), (8.42) and (8.48) for \(C_{1}\), \(C_{2}\) and \(C_{3}\), respectively, to find that the expectation of the first term in (8.37) is bounded above by
\[\Big{|}\nu\big{(}[D^{*}_{k}(\varphi)b_{k},V_{B}]\big{)}\Big{|}\leqslant|\Lambda|\|\varphi\|_{\ell_{m}^{1}}\|\hat{V}\|_{\ell^{1}}\Big{(}R^{\frac{3}{2}}\nu(\mathds{1})^{\frac{1}{2}}\nu(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R^{2}p_{F}^{-m}\nu(\mathds{1})\Big{)}. \tag{8.49}\]
_The second term of (8.37)._ This one is easier: we use the crude estimate
\[|\nu\Big{(}[D^{*}_{k}b_{k}(\varphi),V_{B}]\Big{)}|\leqslant|\nu\big{(}D^{*}_{k}b_{k}(\varphi)V_{B}\big{)}|+|\nu\big{(}V_{B}D^{*}_{k}b_{k}(\varphi)\big{)}|. \tag{8.50}\]
We estimate these terms as follows. In view of \(\|V_{B}\|_{B(\mathscr{F})}\lesssim\|\hat{V}\|_{\ell^{1}}R^{2}\) we find for the first term in (8.50) that
\[\big{|}\,\langle\Psi,D^{*}_{k}b_{k}(\varphi)V_{B}\Psi\rangle\big{|} \leqslant\|b^{*}_{k}(\varphi)D_{k}\Psi\|\|V_{B}\Psi\| \tag{8.51}\] \[\leqslant\|\hat{V}\|_{\ell^{1}}\|b^{*}_{k}(\varphi)\|\|\mathcal{ N}\Psi\|R^{2}\|\Psi\|\leqslant|\Lambda|\|\hat{V}\|_{\ell^{1}}p_{F}^{-m}\| \varphi\|_{\ell_{m}^{1}}R^{2}\|\mathcal{N}\Psi\|\|\Psi\|\,\]
where we used the Type-I estimate for \(D^{*}_{k}\), and the Type-III estimate for \(b^{*}_{k}(\varphi)\). For the second term in (8.50), we use the same bound for \(V_{B}\), together with the pull-through formula \(\mathcal{N}b(\varphi)=b(\varphi)(\mathcal{N}-2)\) to find that
\[\big{|}\,\langle\Psi,V_{B}D^{*}_{k}b_{k}(\varphi)\Psi\rangle\,\big{|} \leqslant\|V_{B}\Psi\|\|D^{*}_{k}(\mathcal{N}+2)^{-1}\|\|( \mathcal{N}+2)b_{k}(\varphi)\Psi\| \tag{8.52}\] \[\leqslant\|\hat{V}\|_{\ell^{1}}R^{2}\|\Psi\|\|b_{k}(\varphi) \mathcal{N}\Psi\|\leqslant\|\hat{V}\|_{\ell^{1}}R^{2}|\Lambda|\|\varphi\|_{ \ell_{m}^{1}}p_{F}^{-m}\|\Psi\|\|\mathcal{N}\Psi\|\,\]
where we used the Type-I estimate for \(D^{*}_{k}\), and the Type-III estimate for \(b_{k}(\varphi)\). These last two estimates combined together then imply that
\[|\nu\big{(}[D^{*}_{k}b_{k}(\varphi),V_{B}]\big{)}|\leqslant\|\hat{V}\|_{\ell^{1}}R^{2}|\Lambda|\|\varphi\|_{\ell_{m}^{1}}p_{F}^{-m}\nu(\mathds{1})^{\frac{1}{2}}\nu(\mathcal{N}^{2})^{\frac{1}{2}}. \tag{8.53}\]
_Conclusion._ The proof of the proposition is finished once we gather the estimates contained in (8.49) and (8.53), and plug them back in (8.36).
### Analysis of \(T_{B,\alpha}\)
Out of the nine terms \(T_{\alpha,\beta}(t,\varphi)\), those with \(\alpha=B\) are the easiest ones to deal with. The main result of this subsection is contained in the following proposition. It contains an estimate for the three terms \(T_{B,F}\), \(T_{B,FB}\) and \(T_{B,B}\).
**Proposition 8.5** (Analysis of \(T_{B,F}\), \(T_{B,FB}\) and \(T_{B,B}\)).: _Let \(T_{B,F}(t,p)\), \(T_{B,FB}(t,p)\) and \(T_{B,B}(t,p)\) be the quantities defined in (3.21), for \(\alpha=B\) and \(\beta=F\), \(\beta=FB\) and
\(\beta=B\), respectively. Further, let \(m>0\). Then, there exists a constant \(C>0\) such that for all \(\varphi\in\ell_{m}^{1}\) and \(t\geqslant 0\) there holds_
\[|T_{B,F}(t,\varphi)|+|T_{B,FB}(t,\varphi)|+|T_{B,B}(t,\varphi)|\\ \leqslant Ct^{2}\|\hat{V}\|_{\ell^{1}}^{2}|\Lambda|\|\varphi\|_{ \ell_{m}^{1}}R^{3}p_{F}^{-m}\sup_{0\leqslant\tau\leqslant t}\left(1+R^{-2}\nu_ {\tau}(\mathcal{N}^{4})^{\frac{1}{2}}\right)\,, \tag{8.54}\]
_where we recall \(T_{\alpha,\beta}(t,\varphi)=\langle\varphi,T_{\alpha,\beta}(t)\rangle\) and \(R=|\Lambda|p_{F}^{d-1}\)._
Proof.: In what follows, we let \(\alpha\) be either \(F\), \(FB\) or \(B\), and we fix \(m>0\), \(t\geqslant 0\) and \(\varphi\in\ell_{m}^{1}\). Starting from (3.21) one finds the following elementary bound
\[|T_{B,\alpha}(t,\varphi)|\lesssim t^{2}\sup_{t_{i}\in[0,t]}\big{|}\nu_{t_{2}}\big{(}[[N(\varphi),V_{B}(t_{1})],V_{\alpha}(t_{2})]\big{)}\big{|} \tag{8.55}\]
and so it suffices to estimate the supremum quantity in the above inequality. In view of Remark 5.1, it suffices to consider estimates on pure states \(\Psi\in\mathscr{F}\). In order to ease the notation, we drop the time variables \(t_{1},t_{2}\in[0,t]\). Thus, we find that
\[|\,\langle\Psi,[[N(\varphi),V_{B}],V_{\alpha}]\Psi\rangle\,|\leqslant 2|\,\langle\Psi,[N(\varphi),V_{B}]V_{\alpha}\Psi\rangle\,|\leqslant 2\|[N(\varphi),V_{B}]\|\,\|\Psi\|\,\|V_{\alpha}\Psi\| \tag{8.56}\]
Using the expansion of \(V_{B}\) in terms of \(b\)-operators (see (5.7)), it is straightforward to find that, in terms of \(b_{k}(\varphi)=[N(\varphi),b_{k}]\),
\[\|[N(\varphi),V_{B}]\|\leqslant 2\|\hat{V}\|_{\ell^{1}}\|b\|\|b_{k}(\varphi)\|\lesssim\|\hat{V}\|_{\ell^{1}}R|\Lambda|p_{F}^{-m}\|\varphi\|_{\ell_{m}^{1}} \tag{8.57}\]
where we used the Type-III estimate on \(b_{k}(\varphi)\) (see Lemma 4.7), together with the norm bound \(\|b_{k}\|\lesssim R\). On the other hand, we have previously established the estimate
\[\|V_{\alpha}\Psi\|\lesssim\|\hat{V}\|_{\ell^{1}}\Big{(}\|\mathcal{N}^{2}\Psi \|+R^{2}\|\Psi\|\Big{)}. \tag{8.58}\]
The proof is finished once we gather the last four estimates.
## 9. Proof of Theorem 1
We are now ready to give a proof of our main result, Theorem 1. We shall make extensive use of the excitation estimates established in Section 5. Namely, letting \((\nu_{t})_{t\in\mathbb{R}}\) be the interaction dynamics (3.14) with initial data satisfying Condition 1, we know that for all \(\ell\in\mathbb{N}\) there exists a constant \(C>0\) such that for all \(t\geqslant 0\) there holds
\[\nu_{t}(\mathcal{N}^{\ell}) \leqslant Cn^{\ell}\exp(C\lambda Rt)\, \tag{9.1}\] \[\nu_{t}(\mathcal{N}_{\mathcal{S}}) \leqslant(\lambda R\,\langle t\rangle)^{2}\exp(C\lambda Rt). \tag{9.2}\]
Here, \(n=\nu_{0}(\mathcal{N})\lesssim R^{1/2}\) is the initial number of particles/holes in the system, and \(R=|\Lambda|p_{F}^{d-1}\) is our recurrent parameter.
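In particular, taking square roots in (9.1) with \(\ell=2,4\) and in (9.2), and updating the constant \(C>0\), we obtain the bounds \[\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}\leqslant Cn\exp(C\lambda R\tau)\,\qquad\nu_{\tau}(\mathcal{N}^{4})^{\frac{1}{2}}\leqslant Cn^{2}\exp(C\lambda R\tau)\,\qquad\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}\leqslant\lambda R\left\langle\tau\right\rangle\exp(C\lambda R\tau)\,\] which we substitute repeatedly in the estimates below.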
Proof.: Throughout the proof, we shall fix the parameter \(m>0\). Let \(f_{t}(p)\) be the momentum distribution of the system, as defined in Def. 1. In Section 3, we performed a double commutator expansion of \(f_{t}(p)\), given in (3.20), in terms of the quantities
\(T_{\alpha,\beta}(t,p)\), defined in Eq. (3.21). It then follows by the triangle inequality that for all \(t\geqslant 0\)
\[\big{\|}f_{t}-f_{0}-\lambda^{2}t\left(Q_{t}[f_{0}]+B_{t}[f_{0}]\right)\big{\|}_{\ell_{m}^{1\ast}}\] \[\leqslant\frac{\lambda^{2}}{|\Lambda|}\Big{(}\big{\|}T_{F,F}(t)+t|\Lambda|Q_{t}[f_{0}]\big{\|}_{\ell_{m}^{1\ast}}+\big{\|}T_{FB,FB}(t)+t|\Lambda|B_{t}[f_{0}]\big{\|}_{\ell_{m}^{1\ast}}\Big{)}\] \[\quad+\frac{\lambda^{2}}{|\Lambda|}\Big{(}\|T_{F,FB}(t)\|_{\ell_{m}^{1\ast}}+\|T_{F,B}(t)\|_{\ell_{m}^{1\ast}}\Big{)}\] \[\quad+\frac{\lambda^{2}}{|\Lambda|}\Big{(}\|T_{FB,F}(t)\|_{\ell_{m}^{1\ast}}+\|T_{FB,B}(t)\|_{\ell_{m}^{1\ast}}\Big{)}\] \[\quad+\frac{\lambda^{2}}{|\Lambda|}\Big{(}\|T_{B,F}(t)\|_{\ell_{m}^{1\ast}}+\|T_{B,FB}(t)\|_{\ell_{m}^{1\ast}}+\|T_{B,B}(t)\|_{\ell_{m}^{1\ast}}\Big{)} \tag{9.3}\]
where \(Q_{t}\) and \(B_{t}\) are the operators defined in Def. 2 and 3, respectively. We shall now estimate the right hand side of (9.3). First, we estimate the leading order terms, previously analyzed in Section 6 and 7. Secondly, we describe the subleading order terms, previously analyzed in Section 8.
Leading order terms. First, we collect the Boltzmann-like dynamics. This term emerges from \(T_{F,F}\). Indeed, it follows from Proposition 6.1 and Eq. (9.1) that there exists a constant \(C>0\) such that for all \(t\geqslant 0\)
\[\|T_{F,F}(t)+t|\Lambda|Q_{t}[f_{0}]\|_{\ell_{m}^{1\bullet}} \leqslant C|\Lambda|t^{3}\lambda\sup_{\tau\leqslant t}\Big{(}R^{2} \nu_{\tau}(\mathcal{N}^{4})^{\frac{1}{2}}+\nu_{\tau}(\mathcal{N}^{4})\Big{)}\] \[\leqslant C|\Lambda|t^{3}\lambda(R^{2}+n^{2})n^{2}\exp(C\lambda Rt)\] \[\leqslant C|\Lambda|t^{3}\lambda R^{2}n^{2}\exp(C\lambda Rt)\, \tag{9.4}\]
where we have used the assumption \(n\lesssim R\).
Now, we collect the interactions between holes/particles and bosonized particle-hole pairs around the Fermi surface. In view of Proposition 7.1 and Eqs. (9.1) and (9.2) we
find that there exists a constant \(C>0\) such that for all \(t\geqslant 0\) there holds
\[\|T_{FB,FB}(t)+t|\Lambda|B_{t}[f_{0}]\|_{\ell_{m}^{1\ast}}\] \[\qquad\qquad\qquad\leqslant C|\Lambda|t^{2}\sup_{\tau\leqslant t} \Big{[}R^{\frac{1}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}\nu_{ \tau}(\mathcal{N})^{\frac{1}{2}}+R^{\frac{3}{2}}\nu_{\tau}(\mathcal{N}_{ \mathcal{S}})^{\frac{1}{2}}+\frac{R}{p_{F}^{m}}\nu_{\tau}(\mathcal{N}^{2}) \Big{]}\] \[\qquad\qquad\qquad+C|\Lambda|t^{3}\lambda R\sup_{\tau\leqslant t} \Big{[}R^{\frac{3}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R \nu_{\tau}(\mathcal{N}_{\mathcal{S}})+\frac{R}{p_{F}^{m}}\nu_{\tau}(\mathcal{ N})^{\frac{1}{2}}\Big{]}\,\] \[\qquad\qquad\qquad\leqslant C|\Lambda|t^{2}\Big{[}R^{\frac{1}{2}} \lambda R\left\langle t\right\rangle n^{\frac{1}{2}}+R^{\frac{3}{2}}\lambda R \left\langle t\right\rangle+\frac{Rn^{2}}{p_{F}^{m}}\Big{]}e^{C\lambda Rt}\] \[\qquad\qquad\qquad+C|\Lambda|t^{3}\lambda R\Big{[}R^{\frac{3}{2}} \lambda R\left\langle t\right\rangle+R(\lambda R\left\langle t\right\rangle)^ {2}+\frac{Rn^{\frac{1}{2}}}{p_{F}^{m}}\Big{]}e^{C\lambda Rt}\,\] \[\qquad\qquad\qquad\leqslant C|\Lambda|\Big{[}t^{2}\left\langle t \right\rangle\lambda R^{\frac{3}{2}}n^{\frac{1}{2}}+t^{2}\left\langle t\right \rangle\lambda R^{\frac{5}{2}}+t^{2}\frac{Rn^{2}}{p_{F}^{m}}\Big{]}e^{C \lambda Rt}\] \[\qquad\qquad\qquad+C|\Lambda|\Big{[}t^{3}\left\langle t\right\rangle \lambda^{2}R^{\frac{7}{2}}+t^{3}\left\langle t\right\rangle^{2}\lambda^{3}R^{ 4}+t^{3}\frac{\lambda R^{2}n^{\frac{1}{2}}}{p_{F}^{m}}\Big{]}e^{C\lambda Rt}. \tag{9.5}\]
Under the assumptions \(1\lesssim n\lesssim R\) we find the following upper bound, for some constant \(C>0\). Note that we absorb polynomials in the variable \(\lambda R\left\langle t\right\rangle\) into the exponential factor \(\exp(C\lambda R\left\langle t\right\rangle)\), after updating the constant \(C\).
\[\|T_{FB,FB}(t)+t|\Lambda|B_{t}[f_{0}]\|_{\ell_{m}^{1\ast}}\] \[\qquad\qquad\qquad\qquad\leqslant C|\Lambda|\Big{[}\lambda t^{2} \left\langle t\right\rangle R^{\frac{5}{2}}\Big{(}1+\lambda R\left\langle t \right\rangle+R^{-\frac{1}{2}}(\lambda R\left\langle t\right\rangle)^{2} \Big{)}+\frac{t^{2}Rn^{2}}{p_{F}^{m}}\big{(}1+\lambda Rt\big{)}\Big{]}e^{C \lambda R\left\langle t\right\rangle}\] \[\qquad\qquad\qquad\leqslant C|\Lambda|\Big{(}\lambda t^{2}\left\langle t \right\rangle R^{\frac{5}{2}}+\frac{t^{2}Rn^{2}}{p_{F}^{m}}\Big{)}e^{C\lambda R \left\langle t\right\rangle}. \tag{9.6}\]
Subleading order terms. In the expansion given by (3.20) we have already analyzed the leading order terms given by \(T_{F,F}(t)\) and \(T_{FB,FB}(t)\). The remaining seven terms are regarded as subleading order terms. These can be estimated as follows.
Using Proposition 8.1 and Eqs. (9.1) and (9.2), we find that there is a constant \(C>0\) such that
\[\|T_{F,FB}(t)\|_{\ell_{m}^{1\ast}} \leqslant Ct^{2}|\Lambda|\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{ \frac{1}{2}}\,\nu_{\tau}(\mathcal{N}^{2})^{1/2}\nu_{\tau}(\mathcal{N}_{\mathcal{S }})^{1/2}+p_{F}^{-m}\nu_{\tau}(\mathcal{N}^{2})\Big{)}\] \[\leqslant Ct^{2}|\Lambda|\Big{(}R^{\frac{1}{2}}\,n^{2}(\lambda R \left\langle t\right\rangle)+p_{F}^{-m}n^{2}\Big{)}e^{C\lambda Rt}\] \[\leqslant C|\Lambda|\Big{(}\lambda t^{2}\left\langle t\right\rangle R ^{\frac{3}{2}}n^{2}+\frac{n^{2}t^{2}}{p_{F}^{m}}\Big{)}e^{C\lambda Rt}. \tag{9.7}\]
Using Proposition 8.2 and Eqs. (9.1) and (9.2), we find that there is a constant \(C>0\) such that
\[\|T_{F,B}(t)\|_{\ell_{m}^{1\ast}} \leqslant Ct^{2}|\Lambda|\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{\frac{3}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R\nu_{\tau}(\mathcal{N}_{\mathcal{S}})+Rp_{F}^{-m}\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}\Big{)}\,\] \[\leqslant Ct^{2}|\Lambda|\Big{(}R^{\frac{3}{2}}\lambda R\left\langle t\right\rangle+R(\lambda R\left\langle t\right\rangle)^{2}+\frac{Rn}{p_{F}^{m}}\Big{)}e^{C\lambda Rt}\] \[\leqslant C|\Lambda|\Big{(}\lambda t^{2}\left\langle t\right\rangle R^{\frac{5}{2}}\big{(}1+\lambda R^{\frac{1}{2}}\left\langle t\right\rangle\big{)}+\frac{Rnt^{2}}{p_{F}^{m}}\Big{)}e^{C\lambda Rt}\] \[\leqslant C|\Lambda|\Big{(}\lambda t^{2}\left\langle t\right\rangle R^{\frac{5}{2}}+\frac{Rnt^{2}}{p_{F}^{m}}\Big{)}e^{C\lambda R\left\langle t\right\rangle}. \tag{9.8}\]
Using Proposition 8.3 and Eqs. (9.1) and (9.2), we find that there is a constant \(C>0\) such that
\[\|T_{FB,F}(t)\|_{\ell_{m}^{1\ast}} \leqslant Ct^{2}|\Lambda|\sup_{0\leqslant\tau\leqslant t}\Big{(}R^{\frac{1}{2}}\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+p_{F}^{-m}\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}\Big{)}\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}\] \[\leqslant Ct^{2}|\Lambda|\Big{(}R^{1/2}(\lambda R\left\langle t\right\rangle)n+\frac{n^{2}}{p_{F}^{m}}\Big{)}e^{C\lambda Rt}\] \[\leqslant C|\Lambda|\Big{(}\lambda t^{2}\left\langle t\right\rangle R^{\frac{3}{2}}n+\frac{n^{2}t^{2}}{p_{F}^{m}}\Big{)}e^{C\lambda Rt}. \tag{9.9}\]
Using Proposition 8.4 and Eqs. (9.1) and (9.2), we find that there is a constant \(C>0\) such that
\[\|T_{FB,B}(t)\|_{\ell_{m}^{1\ast}} \leqslant Ct^{2}|\Lambda|\sup_{0\leqslant\tau\leqslant t}\Big{(}R ^{\frac{3}{2}}\,\nu_{\tau}(\mathcal{N}_{\mathcal{S}})^{\frac{1}{2}}+R^{2}p_{F} ^{-m}\nu_{\tau}(\mathcal{N}^{2})^{\frac{1}{2}}\Big{)}\] \[\leqslant Ct^{2}|\Lambda|\Big{(}R^{\frac{3}{2}}(\lambda R\left\langle t \right\rangle)+\frac{R^{2}n}{p_{F}^{m}}\Big{)}e^{C\lambda Rt}\] \[\leqslant C|\Lambda|\Big{(}\lambda t^{2}\left\langle t\right\rangle R ^{\frac{5}{2}}+\frac{R^{2}nt^{2}}{p_{F}^{m}}\Big{)}e^{C\lambda Rt}. \tag{9.10}\]
Using Proposition 8.5 and Eqs. (9.1) and (9.2), we find that there is a constant \(C>0\) such that
\[\|T_{B,F}(t)\|_{\ell_{m}^{1\ast}}+\|T_{B,FB}(t)\|_{\ell_{m}^{1\ast}}+\|T_{B,B}(t)\|_{\ell_{m}^{1\ast}} \leqslant C|\Lambda|t^{2}R^{3}p_{F}^{-m}\sup_{0\leqslant\tau\leqslant t}\Big{(}1+R^{-2}\nu_{\tau}(\mathcal{N}^{4})^{\frac{1}{2}}\Big{)}\leqslant C|\Lambda|t^{2}\frac{R^{3}}{p_{F}^{m}}e^{C\lambda Rt}\, \tag{9.11}\]
where we have additionally used the fact that \(1\lesssim n\lesssim R\).
Conclusion. It suffices now to gather all the estimates for the leading and subleading order terms, and plug them back in the expansion given in Eq. (9.3) for the momentum distribution of the system. This finishes the proof of our main theorem.
## 10. The Fixed Volume Case
In this section, we prove the inequalities that were stated in Section 2 concerning the fixed volume case \(L=2\pi\). We recall that the dual lattice now becomes \(\Lambda^{\ast}=\mathbb{Z}^{d}\), and we shall keep using the notation \(\int_{\mathbb{Z}^{d}}\mathrm{d}p=(2\pi)^{-d}\sum_{p\in\mathbb{Z}^{d}}\).
### The delta function
First, we recall that \(\delta_{t}(x)\) is the mollified delta function, defined in (2.17). Here, we prove the following approximation lemma.
**Lemma 10.1**.: _There is \(C>0\) such that for all \(x\in\mathbb{Z}\), \(y\in\mathbb{R}\), \(t>0\) and \(\lambda|y|\leqslant 1/2\)_
\[|\delta_{t}(x+\lambda y)-(2/\pi)t\delta_{x,0}|\leqslant C\;\frac{(1-\delta_{x,0})}{x^{2}}\frac{1}{t}+C\delta_{x,0}\lambda^{2}t^{3}|y|^{2}. \tag{10.1}\]
Proof.: We consider the decomposition
\[\delta_{t}(x+\lambda y)=\delta_{x,0}\delta_{t}(\lambda y)+(1-\delta_{x,0}) \delta_{t}(x+\lambda y). \tag{10.2}\]
The first term in (10.2) is estimated as follows. Using \(\delta_{t}(0)=2t/\pi\), we find that
\[|\delta_{t}(\lambda y)-2t/\pi|=t|\delta_{1}(t\lambda y)-\delta_{1}(0)| \leqslant Ct(t\lambda|y|)^{2}. \tag{10.3}\]
In the last line, \(C>0\) is a constant that verifies \(|\delta_{1}(z)-\delta_{1}(0)|\leqslant C|z|^{2}\) for all \(z\in\mathbb{R}\); the constant exists because \(\delta_{1}^{\prime}(0)=0\), and \(\delta_{1}(z)\) is globally bounded. The second term in (10.2) is estimated as follows. For \(|x|\geqslant 1\) and \(\lambda|y|\leqslant 1/2\) we have
\[\delta_{t}(x+\lambda y)\leqslant\frac{2/\pi}{t(x+\lambda y)^{2}}\leqslant \frac{2/\pi}{tx^{2}(1-|x|^{-1}\lambda|y|)^{2}}\leqslant\frac{C}{tx^{2}}. \tag{10.4}\]
The proof is finished once we put all the inequalities together.
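As an illustration of the quadratic bound used in (10.3), assume for the sake of concreteness that the mollifier of (2.17) has the Lorentzian profile \(\delta_{1}(z)=\frac{2/\pi}{1+z^{2}}\), a shape consistent with \(\delta_{t}(0)=2t/\pi\) and with the decay bound used in (10.4). Then one computes directly \[|\delta_{1}(z)-\delta_{1}(0)|=\frac{2}{\pi}\,\frac{z^{2}}{1+z^{2}}\leqslant\frac{2}{\pi}|z|^{2}\,\qquad z\in\mathbb{R}\,\] so that in this case the constant \(C=2/\pi\) works.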
### Operator estimates
Let us now analyze the time dependence of the operators \(Q_{t}\) and \(B_{t}\).
Let us recall that \(Q_{t}\) was defined in Def. 2, and the time independent operator \(\underline{\mathscr{Q}}\) is defined in the same way, but with the discrete delta function \((2/\pi)\delta_{\Delta e,0}\) replacing the energy mollifier \(\delta_{t}(\Delta E)\). Here, \(\Delta E\) corresponds to the dispersion relation (2.16), whereas \(\Delta e\) corresponds to the (signed) free dispersion \(e(p)=(\chi^{\perp}(p)-\chi(p))\,p^{2}/2\). We shall prove that, under our assumptions for \(\hat{V}\), the following result is true.
**Lemma 10.2** (Analysis of \(Q_{t}\)).: _Assuming that \(0<\lambda\|\hat{V}\|_{\ell^{1}}\leqslant 1/2\), there is \(C=C(\|\hat{V}\|_{\ell^{1}})>0\) such that for all \(f\in\ell^{1}(\mathbb{Z}^{d})\) there holds_
\[\|Q_{t}[f]-t\underline{\mathscr{Q}}[f]\|_{\ell^{\infty}}\leqslant Ct\big{(}1/t^{2}+(\lambda t)^{2}\big{)}\|\widetilde{f}\|_{\ell^{\infty}}^{2}\|f\|_{\ell^{1}}\|f\|_{\ell^{\infty}}\,\qquad\forall t>0 \tag{10.5}\]
_where we have denoted \(\ell^{\infty}=\ell^{\infty}(\mathbb{Z}^{3})\) and \(\ell^{1}=\ell^{1}(\mathbb{Z}^{3})\)._
Proof.: Starting from the definition of \(Q_{t}[f]\), one finds after evaluating the delta functions \(\delta(p-p_{1})+\delta(p-p_{2})-\delta(p-p_{3})-\delta(p-p_{4})\) that
\[Q_{t}[f]-t\underline{\mathscr{Q}}[f]=R_{t}^{+}[f]-R_{t}^{-}[f] \tag{10.6}\]
where on the right hand side we have two remainder terms, corresponding to a gain, and a loss term. Namely, for \(p\in\Lambda^{*}\) we have
\[R_{t}^{+}[f](p) =4\pi\int_{\mathbb{Z}^{3d}}\sigma(\vec{p})\Big{(}\delta_{t}( \Delta E)-2t/\pi\delta_{\Delta e,0}\Big{)}f(p_{3})f(p_{4})\widetilde{f}(p_{2} )\widetilde{f}(p)\,\mathrm{d}p_{2}\mathrm{d}p_{3}\mathrm{d}p_{4}\, \tag{10.7}\] \[R_{t}^{-}[f](p) =4\pi\int_{\mathbb{Z}^{3d}}\sigma(\vec{p})\Big{(}\delta_{t}( \Delta E)-2t/\pi\delta_{\Delta e,0}\Big{)}f(p)f(p_{2})\widetilde{f}(p_{3}) \widetilde{f}(p_{4})\,\mathrm{d}p_{2}\mathrm{d}p_{3}\mathrm{d}p_{4}. \tag{10.8}\]
Here, we have denoted \(\vec{p}=(p,p_{2},p_{3},p_{4})\), \(\Delta E=E(p)+E(p_{2})-E(p_{3})-E(p_{4})\) and \(\Delta e\equiv\frac{1}{2}(p^{2}+p_{2}^{2}-p_{3}^{2}-p_{4}^{2})\). Lemma 10.1 with \(x=\Delta e\) and \(y=\mathcal{O}(\|\hat{V}\|_{\ell^{1}})\) now implies that there is \(C>0\) such that
\[|R_{t}^{+}[f](p)| \leqslant C(1/t+\lambda^{2}t^{3}\|\hat{V}\|_{\ell^{1}}^{2})\|\widetilde{f}\|_{\ell^{\infty}}^{2}\int_{\mathbb{Z}^{3d}}\sigma(\vec{p})\,|f(p_{3})|\,|f(p_{4})|\;\mathrm{d}p_{2}\mathrm{d}p_{3}\mathrm{d}p_{4}\, \tag{10.9}\] \[|R_{t}^{-}[f](p)| \leqslant C(1/t+\lambda^{2}t^{3}\|\hat{V}\|_{\ell^{1}}^{2})\|\widetilde{f}\|_{\ell^{\infty}}^{2}\int_{\mathbb{Z}^{3d}}\sigma(\vec{p})\,|f(p)|\,|f(p_{2})|\;\mathrm{d}p_{2}\mathrm{d}p_{3}\mathrm{d}p_{4}. \tag{10.10}\]
Next, we consider the following upper bound for the coefficients
\[\sigma(\vec{p}) \leqslant\delta(p+p_{2}-p_{3}-p_{4})|\hat{V}(p-p_{3})-\hat{V}(p- p_{4})|^{2}+2\delta(p-p_{2}-p_{3}+p_{4})|\hat{V}(p-p_{3})|^{2}\] \[=\delta(p+p_{2}-p_{3}-p_{4})\Big{(}\hat{V}(p-p_{3})^{2}+\hat{V}(p -p_{4})^{2}-2\hat{V}(p-p_{3})\hat{V}(p-p_{4})\Big{)}\] \[\quad+2\delta(p-p_{2}-p_{3}+p_{4})|\hat{V}(p-p_{3})|^{2}. \tag{10.11}\]
We insert the above inequality in the right hand side of (10.9), and use some elementary manipulations to obtain the crude upper bound
\[\int_{\mathbb{Z}^{3d}}\sigma(\vec{p})|f(p_{3})|\,|f(p_{4})|\,\mathrm{d}p_{2}\mathrm{d}p_{3}\mathrm{d}p_{4}\leqslant C\|\hat{V}\|_{\ell^{1}}\|\hat{V}\|_{\ell^{\infty}}\|f\|_{\ell^{\infty}}\|f\|_{\ell^{1}}, \tag{10.12}\]
and the same bound holds for the right hand side of Eq. (10.10). This finishes the proof after we collect all the estimates, use the elementary bound \(\|\hat{V}\|_{\ell^{\infty}}\leqslant(2\pi)^{d}\|\hat{V}\|_{\ell^{1}}\) and collect the \(\hat{V}\)-dependent factors into a constant \(C>0\).
Next, we analyze the operator \(B_{t}\), defined in Def. 3, and its relation to the time independent operator \(\mathscr{B}\), defined in the same way but with \(\delta_{t}(E_{1}-E_{2}-E_{3}-E_{4})\) being replaced by \(2/\pi\,\delta_{e_{1}-e_{2}-e_{3}-e_{4},0}\). While for the operator \(Q_{t}\) an upper bound can be given in terms of the number of holes \(n=(2\pi)^{3}\int_{\mathbb{Z}^{3}}f(p)\mathrm{d}p\), the operator \(B_{t}\) depends on the total number of fermions \(N\). Physically, this is due to the fact that a hole can interact with any of the \(N^{2/3}\) virtual particle-hole pairs around the Fermi surface.
**Lemma 10.3** (Analysis of \(B_{t}\)).: _Assuming that \(0<\lambda\|\hat{V}\|_{\ell^{1}}\leqslant 1/2\), there is \(C=C(\|\hat{V}\|_{\ell^{1}})>0\) such that for all \(f\in\ell^{1}(\mathbb{Z}^{d})\) there holds_
\[\|B_{t}[f]-t\mathscr{B}[f]\|_{\ell^{\infty}}\leqslant Ct\big{(}1/t^{2}+(\lambda t)^{2}\big{)}N^{\frac{d-1}{d}}\|\widetilde{f}\|_{\ell^{\infty}}\|f\|_{\ell^{\infty}}\,\qquad\forall t>0 \tag{10.13}\]
_where we have denoted \(\ell^{\infty}=\ell^{\infty}(\mathbb{Z}^{3})\) and \(\ell^{1}=\ell^{1}(\mathbb{Z}^{3})\)._
Proof.: Recall that \(B=B^{(H)}+B^{(P)}\) is defined in Def. 3 in terms of the respective hole and particle interaction terms. Let us look only at the \(B^{(H)}\) term, the second one being analogous. We find in terms of \(\mathscr{B}=\mathscr{B}^{(H)}+\mathscr{B}^{(P)}\) that for \(f\in\ell^{1}(\mathbb{Z}^{d})\)
\[B_{t}^{(H)}[f]-t\mathscr{B}^{(H)}[f]=L_{t}[f] \tag{10.14}\]
where we define the following remainder term
\[L_{t}[f](h)=2\pi\int_{\mathbb{Z}^{d}}|\hat{V}(k)|^{2}\Big{(}\rho_{t}^{H}(h-k,k)f(h-k)\widetilde{f}(h)-\rho_{t}^{H}(h,k)f(h)\widetilde{f}(h+k)\Big{)}\mathrm{d}k.\]
Here, the new remainder coefficient \(\rho_{t}^{H}(h,k)\) is given by
\[\rho_{t}^{H}(h,k)\equiv\chi(h)\chi(h+k)\int_{\mathbb{Z}^{d}}\chi(r)\chi^{\perp}(r+k)\Big{(}\delta_{t}(\widehat{\Delta E})-\frac{2t}{\pi}\delta_{\widehat{\Delta e},0}\Big{)}\mathrm{d}r \tag{10.15}\]
where we denote \(\widehat{\Delta E}=E_{h}-E_{h+k}-E_{r}-E_{r+k}\) and \(\widehat{\Delta e}=e_{h}-e_{h+k}-e_{r}-e_{r+k}.\) Thus, it follows from Lemma 10.1 with \(x=\widehat{\Delta e}\) and \(|y|\leqslant\|\hat{V}\|_{\ell^{1}}\) that there is \(C>0\) such that
\[\|B_{t}[f]-t\mathscr{B}[f]\|_{\ell^{\infty}} \leqslant C(1/t+t^{3}\lambda^{2}\|\hat{V}\|_{\ell^{1}}^{2})\|\widetilde{f}\|_{\ell^{\infty}}\|f\|_{\ell^{\infty}}\int_{\mathbb{Z}^{2d}}|\hat{V}(k)|^{2}\chi(r)\chi^{\perp}(r+k)\mathrm{d}r\mathrm{d}k\] \[\leqslant C(1/t+t^{3}\lambda^{2}\|\hat{V}\|_{\ell^{1}}^{2})\|\widetilde{f}\|_{\ell^{\infty}}\|f\|_{\ell^{\infty}}\|\hat{V}\|_{\ell^{1}}^{2}N^{\frac{d-1}{d}}. \tag{10.16}\]
In the last line, we have used the geometric estimate \(\int_{\mathbb{Z}^{d}}\chi(r)\chi^{\perp}(r+k)\mathrm{d}r\lesssim N^{\frac{d-1}{d}}\), valid for \(k\in\mathrm{supp}\hat{V}\): the constraint \(\chi(r)\chi^{\perp}(r+k)=1\) confines \(r\) to a shell of width at most \(|k|\) around the Fermi sphere, which contains \(O(p_{F}^{d-1})\simeq N^{\frac{d-1}{d}}\) lattice points. This finishes the proof after we absorb \(\hat{V}\) into the constant \(C>0\).
### Example of Initial Data
In the remainder of this section, we work in three spatial dimensions \(d=3\). The inequality contained in Corollary 1 becomes a meaningful approximation for \(F_{T}\) provided \(F_{0}\) is such that
\[\|\mathscr{Q}[F_{0}]\|_{\ell^{1*}_{m}}+\|\mathscr{B}[F_{0}]\|_{\ell^{1*}_{m}} \gg\|\mathrm{Rem}(N,n,T)\|_{\ell^{1*}_{m}}. \tag{10.17}\]
Clearly, we will need a lower bound on \(\hat{V}\). For simplicity, we assume that there exists \(r\geqslant 1\) such that
\[|\hat{V}(k)|>0,\ \forall|k|\leqslant r\qquad\text{and}\qquad\hat{V}(k)=0,\ \forall|k|>r. \tag{10.18}\]
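One admissible choice, given here only for the sake of concreteness, is the truncated interaction \(\hat{V}(k)=\mathds{1}(|k|\leqslant r)\), which clearly satisfies (10.18).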
_Construction of initial data_. Let us give an example of initial data \(F_{0}\) for which the lower bound (10.17) holds true. We recall here that we denote by \(\mathcal{S}\) the Fermi surface defined in (2.12), in terms of the parameter \(r>0\). We assume \(r\ll p_{F}\).
We let \(n\in\mathbb{N}\) be an odd integer satisfying \(1\ll n\leqslant p_{F}-3r\simeq N^{1/3}\), and consider the following collection of points inside of the Fermi ball
\[\mathcal{I}=\{h_{1},\dots,h_{(n-1)/2}\}\,\qquad\mathcal{I}^{\prime}=\{h^{ \prime}_{1},\dots,h^{\prime}_{(n-1)/2}\}\,\quad\text{and}\quad H=\mathcal{I}\cup\mathcal{I}^{\prime} \tag{10.19}\]
where, for all \(1\leqslant i\leqslant(n-1)/2\) we let
\[h_{i}\equiv(i,0,0)\quad\text{and}\quad h^{\prime}_{i}\equiv(0,i,0). \tag{10.20}\]
Note that \(H\cap\mathcal{S}=\emptyset\). Further, we consider the singleton
\[H_{*}\equiv\{h_{*}\}\qquad\text{where}\qquad h_{*}=(0,0,|h_{*}|)\in\mathcal{B }\backslash\mathcal{S}. \tag{10.21}\]
Finally, let \(P\equiv\{p_{k}\}_{k=1}^{n}\) be any set of points in \(\mathcal{B}^{c}\backslash\mathcal{S}\). We consider initial data with delta-like support in the union of the sets \(H\), \(H_{*}\) and \(P\). Namely let \(U\equiv H\cup H_{*}\cup P\) and define
\[F_{0}(p)=\sum_{q\in U}\delta(p-q). \tag{10.22}\]
One may easily construct an initial state \(\nu:B(\mathscr{F})\to\mathbb{C}\) with momentum distribution \(F_{0}\) by considering the pure state associated to the Slater determinant
\[\nu(\mathcal{O})\equiv\frac{\langle\Psi_{U},\mathcal{O}\Psi_{U}\rangle_{ \mathscr{F}}}{\|\Psi_{U}\|_{\mathscr{F}}^{2}}\quad\text{with}\quad\Psi_{U} \equiv\prod_{p\in U}a_{p}^{*}\ \Omega. \tag{10.23}\]
As we have already argued in Section 2, the state \(\nu\) verifies Condition 1.
_Lower bound for \(\mathscr{Q}[F_{0}]\)_. In what follows, we only study the bulk of the Fermi ball \(p\in\mathcal{B}\backslash\mathcal{S}\). Indeed, our first observation is that \(F_{0}(p)\) is either \(1\) or \(0\), depending on whether \(p\in H\cup H_{*}\) or not. In particular, the associated "loss term" \(\mathscr{Q}^{-}[F_{0}](p)\) vanishes for \(p\notin H\cup H_{*}\). Hence, one finds that for all \(p\in\mathcal{B}\backslash(H\cup H_{*})\):
\[\mathscr{Q}[F_{0}](p)=8\int_{(\mathbb{Z}^{3})^{3}}\big{(}\sigma_{HH}(\vec{p})+\sigma_{HP}(\vec{p})\big{)}\ \delta_{\Delta e,0}\ F_{0}(p_{3})F_{0}(p_{4})\widetilde{F}_{0}(p_{2})\ \mathrm{d}p_{2}\mathrm{d}p_{3}\mathrm{d}p_{4}. \tag{10.24}\]
Here, we have evaluated \(p_{1}=p\) together with \(\widetilde{F}(p)=1\), and we denote \(\vec{p}=(p,p_{2},p_{3},p_{4})\) as well as \(\Delta e=e(p)+e(p_{2})-e(p_{3})-e(p_{4})\). Let us now look at \(p=0\) and keep the \(\sigma_{HH}(\vec{p})\) contribution only. Upon using conservation of momentum \(p_{2}=p_{3}+p_{4}\) and realizing that \(\Delta e=2p_{3}\cdot p_{4}\) we find
\[\mathscr{Q}[F_{0}](0)\geqslant 8\int_{(\mathbb{Z}^{3})^{2}}\chi(p_{3}+p_{4},p_{3},p_{4})\ \delta_{p_{3}\cdot p_{4},0}\ |\hat{V}(p_{3})-\hat{V}(p_{4})|^{2}\] \[\times F_{0}(p_{3})F_{0}(p_{4})\widetilde{F}_{0}(p_{3}+p_{4})\ \mathrm{d}p_{3}\mathrm{d}p_{4}. \tag{10.25}\]
Next, we use the special structure of the initial data \(F_{0}\), constructed with the set \(H=\mathcal{I}\cup\mathcal{I}^{\prime}\). Namely, we restrict the above integration only over \(p_{3}\in\mathcal{I}\) and \(p_{4}\in\mathcal{I}^{\prime}\). Clearly, \(\chi(p_{3}+p_{4},p_{3},p_{4})=1\) together with \(p_{3}\cdot p_{4}=0\); hence, \(\delta_{p_{3}\cdot p_{4},0}=1\). Further, we see that \(F_{0}(p_{3})=F_{0}(p_{4})=1\) and \(\widetilde{F}_{0}(p_{3}+p_{4})=1\) since \(p_{3}+p_{4}\notin H\cup H_{*}\). We thus find, writing in terms of the sum \(\int_{\mathcal{I}^{3}}\mathrm{d}p=(2\pi)^{-3}\sum_{p\in\mathcal{I}^{3}}\)
\[\mathscr{Q}[F_{0}](0) \geqslant\,8/(2\pi)^{6}\sum_{p_{3}\in\mathcal{I},p_{4}\in \mathcal{I}^{\prime}}|\hat{V}(p_{3})-\hat{V}(p_{4})|^{2}\] \[\geqslant\,8/(2\pi)^{6}\sum_{p_{4}\in\mathcal{I}^{\prime}:|p_{4}| >r}\sum_{p_{3}\in\mathcal{I}}|\hat{V}(p_{3})|^{2}\,\geqslant\,8/(2\pi)^{6}(n /2-r)\kappa_{V}\,\simeq\,n\.\]
In the last line we have introduced \(\kappa_{V}>0\) as a constant satisfying \(\kappa_{V}\leqslant\sum_{p\in H}|\hat{V}(p)|^{2}\). The above inequality then shows that \(\|\mathscr{Q}[F_{0}]\|_{\ell^{1*}_{m}}\geqslant Cn\) for some constant \(C>0\). Furthermore, the upper bound \(\|\mathscr{Q}[F_{0}]\|_{\ell^{\infty}}\leqslant C\|\widetilde{F}_{0}\|_{\ell^{\infty}}^{2}\|F_{0}\|_{\ell^{1}}\|F_{0}\|_{\ell^{\infty}}\) can be established in an analogous way as we did for Lemma 10.2. Since \(\|F_{0}\|_{\ell^{1}}=n\), this shows that \(\|\mathscr{Q}[F_{0}]\|_{\ell^{1*}_{m}}\simeq n\).
_Lower bounds for \(\mathscr{B}[F_{0}]\)_. Let us now analyze the \(\mathscr{B}\) operator in the bulk of the Fermi ball, by looking at its value at the point \(h_{*}=(0,0,|h_{*}|)\in(\mathcal{B}\backslash\mathcal{S})\cap\operatorname{supp}F_{0}\). Indeed, we have \(F_{0}(h_{*})=1\) and \(\widetilde{F}_{0}(h_{*})=0\). In particular, the "gain term" vanishes. One obtains
\[\mathscr{B}[F_{0}](h_{*})=-2\pi\int_{\mathcal{I}^{3}}|\hat{V}(k)|^{2}\alpha^{ H}(h_{*},k)\widetilde{F}_{0}(h_{*}+k)\mathrm{d}k. \tag{10.26}\]
The function \(\alpha^{H}(h,k)\) corresponds to the discrete version of the original, mollified \(\alpha^{H}_{t}(h,k)\). Namely,
\[\alpha^{H}(h,k)=\frac{(2/\pi)}{(2\pi)^{3}}\sum_{r\in\mathcal{I}^{3}}\chi(r) \chi^{\perp}(r+k)\delta_{h\cdot k,r\cdot k}. \tag{10.27}\]
The evaluation of the function \(\alpha^{H}(h,k)\) is subtle, for it involves counting lattice points inside of a two-dimensional annulus. Indeed, let us assume here that \(k=(1,0,0)|k|\) and \(h=(1,0,0)|h|\). Then, a straightforward calculation shows that
\[\alpha^{H}(h,k)\ =\ \frac{(2/\pi)}{(2\pi)^{3}}\big{|}\big{\{}x\in\mathbb{Z}^{2}:p_{ F}^{2}-(|h|+|k|)^{2}<|x|^{2}\leqslant p_{F}^{2}-|h|^{2}\big{\}}\big{|}\;. \tag{10.28}\]
Note that the area of the above annulus is \(\pi(2|h||k|+|k|^{2})\). Finding the asymptotics of the above counting function is a problem in Number Theory that has received attention in the last few decades; see for instance [14, 15, 19, 25, 28] and the references therein. In particular, the asymptotics depend on the relative size between \(|h|\) and \(p_{F}\). In contrast, in the original Gauss circle problem, one compares \(N(r)\equiv|\{x\in\mathbb{Z}^{2}:|x|^{2}\leqslant r^{2}\}|\) with the area of the circle \(\pi r^{2}\), as \(r\to\infty\). In this case, it is known that the remainder \(E(r)\equiv N(r)-\pi r^{2}\) satisfies the following bound for all \(\varepsilon>0\)
\[|E(r)|\leqslant Cr^{\delta_{0}+\varepsilon}\;,\qquad\forall r\gg 1 \tag{10.29}\]
where \(\delta_{0}\equiv 1034/1648=0.6274...<2/3\) is, to the authors' best knowledge, the current best power for the bound (10.29), see [16, Theorem 2]. We can use the above estimate for \(E(r)\) to find the asymptotics for \(\alpha^{H}(h,k)\), provided we assume in addition that \(|h|\geqslant Cp_{F}^{\delta}\) for some \(\delta\in(\delta_{0},1)\). Indeed, in this case, we find that as \(p_{F}\to\infty\)
\[\alpha^{H}(h,k) =\frac{1}{4\pi^{4}}\Big{(}N\Big{(}\sqrt{p_{F}^{2}-|h|^{2}}\Big{)} -N\Big{(}\sqrt{p_{F}^{2}-(|h|+|k|)^{2}}\Big{)}\Big{)}\;,\] \[=\frac{1}{4\pi^{4}}\Big{(}\pi\left(2|h||k|+|k|^{2}\right)+E\Big{(} \sqrt{p_{F}^{2}-|h|^{2}}\Big{)}-E\Big{(}\sqrt{p_{F}^{2}-(|h|+|k|)^{2}}\Big{)} \Big{)}\;,\] \[=\frac{|h||k|}{2\pi^{3}}\Big{(}1+\mathcal{O}\big{(}p_{F}^{-( \delta-\delta_{0}-\varepsilon)}\big{)}+\mathcal{O}\big{(}|k|p_{F}^{-\delta} \big{)}\Big{)}\simeq\frac{|h||k|}{2\pi^{3}}\;. \tag{10.30}\]
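As a standalone numerical aside (our illustration, not part of the original argument), the counting function \(N(r)\) and the remainder \(E(r)=N(r)-\pi r^{2}\) used above can be tabulated by brute force; the last column illustrates that \(|E(r)|/r^{2/3}\) stays small at these radii:

```python
import math

def lattice_count(r: float) -> int:
    """N(r) = #{x in Z^2 : |x|^2 <= r^2}, counted by brute force."""
    m = int(math.floor(r))
    return sum(1 for i in range(-m, m + 1) for j in range(-m, m + 1)
               if i * i + j * j <= r * r)

def remainder(r: float) -> float:
    """E(r) = N(r) - pi r^2, the Gauss circle remainder."""
    return lattice_count(r) - math.pi * r * r

for r in (10, 50, 100, 500):
    print(f"r={r:4d}  N={lattice_count(r):7d}  E={remainder(r):+9.1f}  "
          f"|E|/r^(2/3)={abs(remainder(r)) / r ** (2 / 3):.3f}")
```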
We are now ready to give a lower bound for the \(\mathscr{B}\) operator. Indeed, letting \(\delta\in(\delta_{0},1)\), we find that for \(h_{*}\in\mathcal{B}\) with \(|h_{*}|\geqslant Cp_{F}^{\delta}\), the following lower bound holds true for all \(k=(0,0,|k|)\in\mathbb{Z}^{3}\)
\[|\mathscr{B}[F_{0}](h_{*})|\geqslant C|h_{*}||k||\hat{V}(k)|^{2} \tag{10.31}\]
where we have combined (10.26), (10.30), and have used the fact that \(\widetilde{F}_{0}(h_{*}+k)=1\). This concludes the lower bound.
**Acknowledgements.** E.C. is very grateful to Michael Hott for several stimulating discussions. The work of E.C. was supported by the Provost's Graduate Excellence Fellowship at The University of Texas at Austin. T.C. gratefully acknowledges support by the NSF through grants DMS-1151414 (CAREER), DMS-1716198, DMS-2009800, and the RTG Grant DMS-1840314 _Analysis of PDE_.
|
2307.03029 | Modelling the response of a CsI(Tl)-PiN photodiode Microscintillator
Detector | The full instrument response of a superminiaturised CsI(Tl)-PiN photodiode
radioactivity detector, intended for deployment on a meteorological radiosonde,
has been modelled by combining a physics-based model of the sensor with the
detector circuit response, obtained via an LTspice simulation. The model uses
the incident energy of a gamma ray as an input, and produces the pulse expected
from the detector. The detector response was verified by comparing the
simulated energy calibration with a laboratory source. The measurement circuit
is found to control the minimum detectable energy of 26 keV, and the maximum
detectable energy is ~10 MeV. The energy sensitivity of the PiN detector is
0.29 +- 0.02 mV/keV in the 0-800 keV range. The simulation and laboratory
calibrations were consistent to better than 5% over the calibration range of
the instrument. | Justin Tabbett, Karen L. Aplin | 2023-07-06T14:45:44Z | http://arxiv.org/abs/2307.03029v2 | # Modelling the response of a CsI(Tl)-PiN photodiode Microscintillator Detector
###### Abstract
The full instrument response of a CsI(Tl)-PiN photodiode radioactivity detector, intended for deployment on a meteorological radiosonde, has been modelled by combining a physics-based model of the sensor with the detector circuit response, obtained via an LTSpice simulation. The model uses the incident energy of a gamma ray as an input, and produces the pulse expected from the detector. The detector response was verified by comparing the simulated energy calibration with laboratory radioactive sources. The Schmitt trigger part of the measurement circuit is found to control the observed minimum detectable energy of 223 keV. Additionally, the energy sensitivity of the PiN detector was found to be 0.529 \(\pm\) 0.010 mV/keV in the 200-800 keV range. The simulation and laboratory calibrations were consistent to better than 20% over the operating range of the instrument, decreasing to 0.34% at 800 keV.
keywords: Radioactivity Detector, Ionisation, Scintillator, PiN Photodiode, Simulation, LTSpice

Footnote †: journal: Nuclear Inst. and Methods in Physics Research, A
## 1 Introduction
There is a lack of readily available instrumentation to study ionisation in the atmosphere, and the effects of energetic particles on weather and climate [1]. The creation of atmospheric ions by galactic cosmic rays, solar UV radiation, and electron precipitation events means that the effects of these
ions manifest in different regions of the atmosphere. For instance, neutron monitoring stations provide a global understanding of galactic cosmic ray intensities; however, at altitudes below 5 km, it has been shown that there is little correlation between neutron counts and measured ionisation rates [2]. Predominantly, satellites have been used for primary particle detection [3], and ground-based instruments for secondary particle detection; however, the intermediary region creates an opportunity for a new, balloon-borne detector.
A novel microscintillator ionisation detector, here called the PiN detector, capable of measuring energy and count rate, has been developed for deployment on meteorological radiosondes (weather balloons). Hundreds of these balloons are launched daily for weather forecasting purposes, but as they are not routinely retrieved, and have limited capability to carry additional payloads, the cost is limited to a few hundred pounds and the mass to tens of grams. Geigersondes are suitable for balloon applications but do not offer energy detection [2]. A miniaturised CsI(Tl) scintillator coupled to a PiN diode both meets the radiosonde power and mass requirements, and adds energy detection capability. The detector was first deployed on a meteorological radiosonde in 2016 to investigate the transition region where surface-borne energetic particles cease to be the dominant source of ionising radiation, in favour of high-energy particles in the free troposphere [4]. During ongoing development of the detector, it has been used to measure background radiation from natural sources such as Radon gas. During balloon deployment in 2018, the detector unexpectedly observed stratospheric X-rays, which was corroborated by NOAA POES spacecraft data [5]. Deployment of the detector is well-established, however the work presented in this paper analyses and discusses the instrumentation in more detail than previously. This will allow both for retrospective analysis of previous flight data and for future development.
The PiN detector sensor is an Advatech CsI(Tl) scintillator, measuring 10\(\times\)10\(\times\)8 mm\({}^{3}\), coupled to a silicon PiN RD100 photodiode [6; 7]. There are two stages in particle detection within the sensor. First, incident radiation generates a light pulse in the scintillator, described by characteristic decay times. The second stage of particle detection involves the conversion of a light pulse to a current pulse. The magnitude of the current pulse is proportional to the energy of the incident ionising radiation. The next stage of particle detection involves the current passing through the electronics of the detector. Figure 1 shows the detection process where the current flows from the PiN photodiode to a transimpedance amplifier, located physically close to the
photodiode on the board to minimise losses. The subsequent signal passes through a frequency dependent gain stage, followed by the signal conditioning circuitry.
There are three key components in the signal conditioning circuit: a Schmitt trigger, an analogue pulse, and a negative peak detection circuit (valley detector). The trigger activates an interrupt routine in the PIC16F676 microcontroller code, signalling the microcontroller to measure the voltages on the analogue pulse and the valley signal. A pulse height, \(dV\), is calculated by the difference between the analogue pulse and the valley trace. The measured analogue pulse value is larger than the measured valley value, as the analogue pulse is a negative pulse.
Regarding the detector data collection, the microcontroller uses a 10-bit analogue-digital converter (ADC) to measure voltages. The microcontroller is held at a 5 V bias, where 5 V is recorded at 1023 ADC counts. The reference voltage level rests at \(\sim 3.211\) V (657 ADC). The pulse height should then not exceed 657 ADC. For data output, the microcontroller measures the reference and valley voltage values separately, additionally assigning a time-stamp for the event, to a serial output.
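For illustration, this voltage-to-ADC bookkeeping can be sketched in a few lines of Python (a minimal sketch; the function names are ours, not taken from the detector firmware):

```python
ADC_BITS, V_REF = 10, 5.0            # 10-bit ADC held at a 5 V bias

def volts_to_adc(v: float) -> int:
    """Map a voltage onto the ADC scale: 5 V -> 1023 counts."""
    return round(v / V_REF * (2 ** ADC_BITS - 1))

def pulse_height(reference_adc: int, valley_adc: int) -> int:
    """dV: the analogue-pulse (reference) reading minus the valley reading."""
    return reference_adc - valley_adc

print(volts_to_adc(3.211))      # ~657 ADC, the resting reference level
print(pulse_height(657, 637))   # a 20-count pulse
```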
We present, for the first time, a full-stack model of the detector response, from the initial interaction of the scintillator with ionising radiation, through to the production of voltage traces which would be passed to the microcontroller for measurement. A physics-based model written in Python emulates the response of the sensor, the signals from which are used as input to a Simulation Program with Integrated Circuit Emphasis (SPICE) simulation to emulate the electronics response of the detector.

Figure 1: Block diagram showing the detection process and illustration of the pulse height. The voltages for the pulse height are measured in the microcontroller.
Software packages such as Geant4, FLUKA, and MCNPX are often used to simulate the interactions between ionising radiation and scintillator crystals [8]. FLUKA was not suitable as the desired light pulse output needed to be in the time domain, to combine with the photodiode response, whereas FLUKA produces quantities such as spatial energy depositions, emission spectra, and particle fluence, among others. Using Geant4 in combination with SPICE programs has been suggested as a method of investigating the overall response of a detector and would likely be a valid approach [9]; however, as the scintillator is a commercial product with its typical response given, it would be unnecessary to model its response from first principles. Therefore, an analytical approach was deemed sufficient. Additionally, the simulation of the total response of the detector was the desired outcome; therefore, smaller deviations likely to arise between a numerical and an analytical approach [10] could be eclipsed by the response of electronic components used in the signal conditioning circuit.
Like the scintillator, the photodiode is governed by characteristic decay times (rise- and fall-times); however, as a commercial component, only typical rise times, at a specific reverse bias and for a given wavelength, are provided. Therefore, the rise time has been determined by using estimated parameters, and the fall time was determined during a tuning phase of the model development. Convolution has been used as the method of interfacing the scintillator and photodiode responses [9], ultimately producing a current pulse. Figure 2 illustrates the detection process, noting the output from each stage of the Python model. Finally, the SPICE simulation has been created using LTSpice; the current pulse from the physics model is used as an input for the simulation. Relevant circuit components have been recreated in LTSpice, allowing for analogue voltage traces to be measured.
Accompanying the model and simulation, a laboratory calibration with radioactive sources is presented, serving as the primary method of validation for the model. Terrestrial gamma radiation, originating from the \({}^{238}\)U decay series, has additionally been used in the detector calibration. Section 2 details the model method, with the scintillator response in Section 2.1, the photodiode response in Section 2.2, and the current pulse in Section 2.3. The simulation of the electronics of the detector are detailed in Section 2.4, and the laboratory calibration is explored in Section 2.5. Finally, a comparison
of the model and detector is given in Section 3.
## 2 Model Method
Modelling the PiN detector response was split into two sections: the physics model, and the electronics simulation. Within the physics model, the responses of the scintillator and photodiode are considered, resulting in their combination which produces a current pulse proportional to the incident radiation. The current pulse serves as an input for the electronics simulation. The simulation models the remainder of the detector until the data acquisition stage.
### CsI(Tl) Scintillator Response
To model the production of photons by the interaction of radiation with the scintillator, the following assumptions were made [11; 12]:
* Total number of photons generated are determined by the light yield (\(\gamma\)/MeV) of the scintillator
* Photons are emitted over a range of wavelengths in different proportions, described by the emission spectrum
* The light pulse generated is characterised by at least two decay times
Figure 2: Model block diagram illustrating the pulse shapes obtained at each stage of the Python model. The light pulse has a longer decay time than the PiN photodiode response. The two responses are combined using convolution to determine the shape of the current pulse output; subsequently used as the input for the LTSpice simulation.
The scintillator emission decay curve is described by Equation 1
\[L(t)=(1-\exp{[-t/\tau_{rs}]})-a_{1}(1-\exp{[-t/\tau_{1}]})-a_{2}(1-\exp{[-t/\tau_{2 }]}) \tag{1}\]
where \(\tau_{rs}\) is the scintillator rise time, \(\tau_{1,2}\) are the fast and slow decay times, respectively, and \(a_{1,2}\) their proportions. In the instance of the CsI(Tl) crystal present in the PiN detector, the manufacturer reports a single decay time; therefore, in the model \(a_{2}=0\).
The model uses discretised wavelength steps to determine the energy generated by scintillation photons during an interaction. For each wavelength, \(\lambda\), a number of photons, \(N(\lambda)\), is produced, determined by the light yield \(LY\), incident energy \(E_{in}\), and emission spectrum proportion \(S(\lambda)\):
\[N(\lambda)=LY\cdot E_{in}\cdot S(\lambda) \tag{2}\]
The emission spectrum, shown in Figure 3, was digitised using a Python-based plot digitiser and linear interpolation [13; 14]. This process allows for the scintillator emission spectrum and the photodiode responsivity to be evaluated at the same steps (every 10 nm) over the model's wavelength range (400-800 nm). Therefore, a summation of the energy of a single wavelength photon, \(E_{s}=hc/\lambda\), multiplied by the corresponding number of photons, over this range, yields the total energy emitted by the scintillator for a single event.
\[E_{Tot}=\sum_{\lambda=400}^{800}E_{s}(\lambda)\cdot N(\lambda) \tag{3}\]
Figure 3: Digitised scintillator emission spectrum and photodiode responsivity.
It is necessary to convert the energy emitted from the scintillator into a power quantity, because the photodiode responsivity relates incident optical power to an output photocurrent, as discussed in Section 2.3. Therefore, a cutoff time, \(t_{co}\), has been empirically determined. The cutoff time was set to 3.05 \(\mu\)s as this, in combination with the photodiode fall time, gave the desired response when comparing the simulated and laboratory 662 keV pulse height.
The final expression for the power output from the scintillator is then
\[P(t)=\frac{E_{Tot}}{t_{co}}\cdot L(t) \tag{4}\]
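A minimal Python sketch of Equations 1-4 follows, assuming for brevity a flat emission spectrum in place of the digitised one of Figure 3 (the wavelength grid, stand-in spectrum, and 662 keV test energy are purely illustrative):

```python
import numpy as np

H_PLANCK, C_LIGHT = 6.626e-34, 2.998e8   # Planck constant [J s], speed of light [m/s]
LY = 54_000                              # light yield [photons/MeV], Table 1
TAU_RS, TAU_1, A1 = 50e-9, 900e-9, 1.0   # rise/fast decay times [s]; a2 = 0
T_CO = 3.05e-6                           # empirical cutoff time [s]

def decay_curve(t):
    """Equation 1 with a2 = 0: normalised scintillation light pulse."""
    return (1 - np.exp(-t / TAU_RS)) - A1 * (1 - np.exp(-t / TAU_1))

def power_pulse(t, e_in_mev, wl_nm, spectrum):
    """Equations 2-4: optical power emitted by the scintillator [W]."""
    n_photons = LY * e_in_mev * spectrum            # Eq. 2, photons per bin
    e_photon = H_PLANCK * C_LIGHT / (wl_nm * 1e-9)  # single-photon energy [J]
    e_total = np.sum(e_photon * n_photons)          # Eq. 3
    return (e_total / T_CO) * decay_curve(t)        # Eq. 4

wl = np.arange(400.0, 801.0, 10.0)                  # 10 nm steps, 400-800 nm
s = np.full_like(wl, 1.0 / len(wl))                 # flat stand-in spectrum
t = np.linspace(0.0, 5e-6, 1000)
print(f"peak optical power: {power_pulse(t, 0.662, wl, s).max():.3e} W")
```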
### PiN Photodiode Response
The temporal response of the photodiode is governed by the rise time \(\tau_{rp}\), typically composed of: drift and diffusion times, and the RC time constant of the diode-circuit. The relationship between the three components is given by:
\[\tau_{rp}=\sqrt{\tau_{RC}^{2}+\tau_{drift}^{2}+\tau_{diff}^{2}} \tag{5}\]
where \(\tau_{RC}\), \(\tau_{drift}\), and \(\tau_{diff}\) are the RC, drift, and diffusion times, given in Equations 6, 10, and 11, respectively.
The derivation of the RC time constant and its dependencies are given in Equations 6-9 which denote the RC time constant, the junction capacitance \(C_{j}\), the depletion width \(W_{d}\), and the series resistance \(R_{s}\)[16]:
\[\tau_{RC}=2.2C_{j}(R_{s}+R_{L}) \tag{6}\]
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Parameter & Symbol & Value & Source \\ \hline Scintillator rise time & \(\tau_{rs}\) & 50 ns & Estimated [15] \\ Fast decay time & \(\tau_{1}\) & 900 ns & Datasheet [6] \\ Slow decay time & \(\tau_{2}\) & - & - \\ Cutoff time & \(t_{co}\) & 3.05 \(\mu\)s & Empirical \\ Fast decay proportion & \(a_{1}\) & 1 & - \\ Slow decay proportion & \(a_{2}\) & 0 & - \\ Light yield & LY & 54000 \(\gamma\)/MeV & Datasheet [6] \\ \hline \end{tabular}
\end{table}
Table 1: Scintillator Model Parameters
where \(R_{L}=50\Omega\) is the load resistance which has been estimated;
\[C_{j}=\frac{\epsilon_{Si}\epsilon_{0}A}{W_{d}} \tag{7}\]
where \(\epsilon_{Si}=11.9\) is the (dimensionless) relative permittivity of silicon, \(\epsilon_{0}\) is the permittivity of free space, and \(A=100\) mm\({}^{2}\) is the active area of the photodiode;
\[W_{d}=\sqrt{\frac{2\epsilon_{Si}\epsilon_{0}}{qN_{n}}(V_{A}+V_{Bi})} \tag{8}\]
where \(q\) is the charge of an electron, \(V_{A}=12\) V is the applied bias, \(V_{Bi}=0.65\) V is estimated value of the built-in bias of silicon, and \(N_{n}\) is the doping concentration;
\[R_{s}=R_{c}+\frac{(W_{s}-W_{d})\rho}{A} \tag{9}\]
where \(\rho=(qN_{n}\mu)^{-1}\) is the resistivity of silicon, \(R_{c}=0\Omega\) is the assumed contact resistance in the diode, and \(W_{s}\) is the silicon substrate width.
The substrate width and doping concentration \(N_{n}\) were estimated by considering the case where \(V_{A}=100\) V and the photodiode is completely depleted, meaning \(W_{s}=W_{d}\). Using this assumption, Equation 7 and the typical value of the junction capacitance under such conditions, 50 pF, the substrate width is estimated to be \(W_{s}=210.7252\)\(\mu\)m. From this, an estimate for the doping concentration can be obtained using Equation 8, giving \(N_{n}=2.985\times 10^{18}\) m\({}^{-3}\).
Additionally, the electron mobility of silicon is taken to be 1350 cm\({}^{2}\)(Vs)\({}^{\text{-}1}\)[17].
The drift time component is given by:
\[\tau_{drift}=\frac{W_{d}^{2}}{2\mu(V_{A}+V_{Bi})} \tag{10}\]
The diffusion time is given by:
\[\tau_{diff}=\frac{q(W_{s}-W_{d})^{2}}{\mu_{h}kT} \tag{11}\]
where \(\mu_{h}\) is the hole mobility equal to 480 cm\({}^{2}\)(Vs)\({}^{\text{-}1}\)[17].
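Putting Equations 5-11 together numerically gives a quick cross-check of the quoted rise time. The sketch below assumes SI units throughout (so \(N_{n}\) in m\({}^{-3}\) and \(A\) in m\({}^{2}\)) and a temperature of \(T=300\) K, which is our assumption as no value is stated in the text; the result, dominated by the diffusion term, lands close to the 15.08 \(\mu\)s quoted below:

```python
import numpy as np

Q, K_B, EPS0 = 1.602e-19, 1.381e-23, 8.854e-12   # physical constants (SI)

# Parameters from Table 2, converted to SI; T = 300 K is our assumption.
EPS_SI, A = 11.9, 100e-6                # relative permittivity; area [m^2]
N_N, W_S = 2.985e18, 210.7252e-6        # doping [m^-3]; substrate width [m]
MU_E, MU_H = 0.135, 0.048               # carrier mobilities [m^2/(V s)]
V_A, V_BI, R_L, R_C, T = 12.0, 0.65, 50.0, 0.0, 300.0

w_d = np.sqrt(2 * EPS_SI * EPS0 * (V_A + V_BI) / (Q * N_N))   # Eq. 8
c_j = EPS_SI * EPS0 * A / w_d                                 # Eq. 7
rho = 1 / (Q * N_N * MU_E)                                    # resistivity
r_s = R_C + (W_S - w_d) * rho / A                             # Eq. 9
tau_rc = 2.2 * c_j * (r_s + R_L)                              # Eq. 6
tau_drift = w_d ** 2 / (2 * MU_E * (V_A + V_BI))              # Eq. 10
tau_diff = Q * (W_S - w_d) ** 2 / (MU_H * K_B * T)            # Eq. 11
tau_rp = np.sqrt(tau_rc ** 2 + tau_drift ** 2 + tau_diff ** 2)  # Eq. 5

print(f"tau_RC={tau_rc * 1e9:.0f} ns, tau_drift={tau_drift * 1e9:.1f} ns, "
      f"tau_diff={tau_diff * 1e6:.1f} us, tau_rp={tau_rp * 1e6:.2f} us")
```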
A summary of the photodiode model parameters is given in Table 2.
The PiN photodiode response curve is given by
\[C(t)=-\exp\left[-t/\tau_{fp}\right]+\exp\left[-t/\tau_{rp}\right] \tag{12}\]
where the photodiode fall-time \(\tau_{fp}>\tau_{rp}\).
The rise time governs the rate at which the photodiode can respond to a pulse of light. Typical literature values for the rise time are reported for the case where the photodiode is operated in its fully depleted mode at a reverse bias of 75 V. It was therefore necessary to obtain a rise time for the conditions employed in practice, achieved through the above process. A reasonable rise time of 15.08 \(\mu\)s was obtained as a result.
The photodiode fall time was determined by matching the ADC pulse height for the \({}^{137}\)Cs 662 keV energy peak in the simulation and the calibration and remained fixed at this value. This parameter, ultimately, serves as a "catch-all" for inaccuracies which might arise in the estimation of the doping concentration of silicon, or the substrate width. Therefore, while model parameters have been chosen in Table 2, there are other possible combinations which yield the same rise time.
### Current Pulse
The current pulse is given by a convolution of the scintillator and photodiode response, using the photodiode responsivity (digitised and shown in Figure 3) to convert from the scintillator power to electrical current [18]. The current pulse, \(I(t)\), is given by:
\[I(t)=R_{\lambda}P(t)*C(t) \tag{13}\]
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Parameter & Symbol & Value & Source \\ \hline Silicon doping concentration & \(N_{n}\) & 2.985 \(\times\) 10\({}^{18}\) & Estimated from [7] \\ Substrate width & \(W_{s}\) & 210.7252 \(\times 10^{-6}\)\(\mu\)m & Estimated from [7] \\ Load resistance & \(R_{L}\) & 50 \(\Omega\) & Estimated [7] \\ Contact resistance & \(R_{c}\) & 0 \(\Omega\) & Assumed \\ Electron mobility & \(\mu\) & 1350 cm\({}^{2}\)(Vs)\({}^{-1}\) & Estimated [17] \\ Hole mobility & \(\mu_{h}\) & 480 cm\({}^{2}\)(Vs)\({}^{-1}\) & Estimated [17] \\ Silicon built-in bias & \(V_{bi}\) & 0.65 V & Estimated \\ Photodiode rise time & \(\tau_{rp}\) & 15.08 \(\mu\)s & Derived \\ Photodiode fall time & \(\tau_{fp}\) & 32 \(\mu\)s & Empirical \\ \hline \end{tabular}
\end{table}
Table 2: Photodiode Model Parameters
where \(R_{\lambda}\) is the photodiode responsivity over the same wavelength range as the scintillator emission spectrum. The responsivity is the ratio of current generated to optical power. The magnitude of the current pulse is given explicitly by:
\[I_{max}=\frac{1}{|L(t)*C(t)|}\sum_{\lambda=400}^{800}\frac{E_{s}(\lambda)N( \lambda)R_{\lambda}(\lambda)}{t_{co}} \tag{14}\]
where \(|L(t)*C(t)|\) denotes the maximum value of the convolution, therefore serving as a normalisation factor.
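A compact numerical sketch of Equations 12-14 is given below. It is illustrative only: it restates the constants from Tables 1 and 2, and folds the digitised responsivity curve \(R_{\lambda}\) into a single effective value of 0.35 A/W, which is our placeholder:

```python
import numpy as np

H_PLANCK, C_LIGHT = 6.626e-34, 2.998e8
LY, T_CO = 54_000, 3.05e-6                 # Table 1
TAU_RS, TAU_1 = 50e-9, 900e-9              # Table 1
TAU_RP, TAU_FP = 15.08e-6, 32e-6           # Table 2

def decay_curve(t):                        # Eq. 1, a2 = 0
    return (1 - np.exp(-t / TAU_RS)) - (1 - np.exp(-t / TAU_1))

def photodiode_response(t):                # Eq. 12 (negative-going pulse)
    return -np.exp(-t / TAU_FP) + np.exp(-t / TAU_RP)

def current_pulse(t, e_in_mev, wl_nm, spectrum, resp):
    """Equations 13-14: convolve L(t) with C(t), scale the peak to I_max."""
    shape = np.convolve(decay_curve(t), photodiode_response(t))[: len(t)]
    shape /= np.max(np.abs(shape))         # the |L(t)*C(t)| normalisation
    n = LY * e_in_mev * spectrum           # Eq. 2
    e_s = H_PLANCK * C_LIGHT / (wl_nm * 1e-9)
    i_max = np.sum(e_s * n * resp) / T_CO  # Eq. 14
    return i_max * shape

t = np.linspace(0.0, 200e-6, 4000)
wl = np.arange(400.0, 801.0, 10.0)
s = np.full_like(wl, 1.0 / len(wl))                # flat stand-in spectrum
i_t = current_pulse(t, 0.662, wl, s, resp=0.35)    # 0.35 A/W: illustrative
print(f"peak current magnitude: {np.max(np.abs(i_t)):.3e} A")
```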
### Electronics Simulation
The current pulse from the physics model has been used as a current source input in LTSpice. The current source is connected to the transimpedance amplifier, yielding a voltage trace. The remainder of the circuit, up to the microcontroller, was built in LTSpice, following the layout given in Figure 1. The measurement of the pulse analogue and valley detector voltages was completed manually, converting the voltage into an analogue-to-digital count. By varying the input energy, a relationship between incident energy and pulse height was determined.
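LTspice sources can take piecewise-linear (PWL) waveforms from a text file, so one plausible hand-off from the Python model to the circuit simulation is sketched below; the paper does not describe its exact interface, and the file name and helper are ours:

```python
def write_pwl(path: str, t, i_t) -> None:
    """Write (time, current) pairs usable by an LTspice PWL current source."""
    with open(path, "w") as f:
        for ti, ii in zip(t, i_t):
            f.write(f"{ti:.9e} {ii:.9e}\n")

# e.g. write_pwl("pulse662.txt", t, i_t) with t, i_t from the sketch above,
# then set the LTspice current source value to: PWL file=pulse662.txt
```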
### Laboratory Calibration
To enable the validation of the model, a PiN detector was calibrated using Barium-133 (\({}^{133}\)Ba) and Cesium-137 (\({}^{137}\)Cs) radioactive sources. The experimental setup for the \({}^{137}\)Cs source can be seen in Figure 4a, and for the \({}^{133}\)Ba source in Figure 4b; further experimental details are given in Table 3. During \({}^{137}\)Cs data collection, the PiN detector was placed directly opposite the source, while for \({}^{133}\)Ba the source was placed adjacent to the detector. The background spectrum was collected in the same location with the sources shielded.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Source & Activity (Bq) & Distance to PiN (cm) & Exposure time (s) \\ \hline Cesium-137 & \(15\times 10^{6}\) & 45 \(\pm\) 1 & 353.43 \\ \hline Barium-133 & NA & \(\sim\) 0 & 4117.06 \\ \hline Background & NA & NA & 869.85 \\ \hline \end{tabular}
\end{table}
Table 3: Radioactive Sources Used in Calibration
## 3 Comparison of Detector and Model
### Simulation and Laboratory Results
The calibration allows for voltage pulses caused by specific gamma energies to be identified. The binned count rate data are given in Figure 5(a), showing the relative intensities of background and source energies. Figure 5(b) displays the count rate data with an asymmetric least-squares baseline (\(p=0.001\), \(\lambda=0.0001\)) subtraction applied [19; 20]. The peaks associated with each respective source are labelled.

Regions of higher count rate in the spectra in Figures 5(a) and 5(b) are important when identifying energy peaks, not solely the peaks themselves. For example, a \({}^{137}\)Cs source will induce an increased count rate before its characteristic energy peak, due to the Compton edge, and back-scatter [17]. Comparing the raw \({}^{137}\)Cs and background counts, it can be seen that there is an increase over the whole range above 20 ADC, approaching 9 times as many counts.

Figure 5(c) shows the relationship between the incident energy (keV) and the pulse height given in ADC for the simulation and laboratory calibration.
Figure 4: Experimental setups for the collection of \({}^{137}\)Cs and \({}^{133}\)Ba data.
Figure 5: a) Binned count rate data. b) Baseline subtracted count rate data. c) Energy calibration fits. d) Calibration percentage difference.
The calibration for the simulation is:
\[PH=(0.103\pm 0.003)E-(3\pm 1) \tag{15}\]
and for the laboratory calibration:
\[PH=(0.108\pm 0.002)E-(7.1\pm 0.9) \tag{16}\]
The laboratory calibration yields an energy resolution of 9.3 \(\pm\) 0.2 keV and an energy sensitivity of 0.529 \(\pm\) 0.010 mV/keV.
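Inverting these fits maps a measured pulse height back to an energy. A trivial sketch, using the central fit values, reproduces the 223 keV and 251 keV figures quoted in the next subsection for the smallest recorded 20 ADC pulse:

```python
def adc_to_kev(ph_adc: float, slope: float, intercept: float) -> float:
    """Invert PH = slope * E + intercept to recover the energy in keV."""
    return (ph_adc - intercept) / slope

print(f"{adc_to_kev(20, 0.103, -3):.0f} keV")    # simulation fit  -> ~223 keV
print(f"{adc_to_kev(20, 0.108, -7.1):.0f} keV")  # laboratory fit  -> ~251 keV
```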
Figure 5d indicates the percentage difference between the two calibrations, and highlights how appropriate the model is for simulating the gamma response of the detector. At an energy of 432.6 keV, the percentage difference between the laboratory and simulation is less than 5 %, and at 800 keV, the difference is 0.34 %. For energies below 432.6 keV, the large difference in the intercepts of the linear fits becomes more apparent as the fits diverge.
### Minimum detectable energy
The minimum detectable energy is the energy that produces a pulse height of 1 ADC. The predicted response denotes this to be a 39 keV gamma ray; for the laboratory calibration, 75 keV. The enclosure containing the detector is anticipated to transmit 95% of gammas with energy \(\geq\)160 keV, which might provide a practical lower limit. However, in practice, the lowest recorded pulse height in the laboratory data is 20 ADC, corresponding to 223 and 251 keV in the simulation and laboratory, respectively. While the pulse analogue and valley in the simulation respond to incident radiation of any energy, the trigger does not. For example, at the anticipated minimum detectable energy of 39 keV, the pulse analogue does not exceed the Schmitt trigger threshold; in practice this means no event would be recorded by the microcontroller. Further, at a simulated pulse of 223 keV the Schmitt trigger does not appreciably activate, remaining at 5 V to within 1 \(\mu\)V.
Figure 6 shows the minimum value of the simulated Schmitt trigger with input energies varying from 230-260 keV. In normal operation the Schmitt trigger will drop from 5 V to 0 V, as illustrated in Figure 1, however if the voltage drop is sufficiently small, the microcontroller will not register the change as a flag for the measurement routine. Over this range, the three lowest energies in Figure 6 would not be detected by the microcontroller because the trigger has not activated, despite there being a 22-23 ADC difference in the reference and valley voltages. Pulses corresponding to energy
greater than or equal to 250 keV would certainly be measured in the physical system, as there would be full activation of the trigger. However, for simulated gamma rays with energy in the 245-250 keV range, the Schmitt trigger minimum would need to decrease below 1 V, as defined by the microcontroller specification.
For all of the energies tested in Figure 6, the threshold voltage was 639\(\pm\)1 ADC meaning the pulse analogue would need to decrease below this in order for the trigger to activate. In all simulation runs, the reference ADC was 657 ADC, therefore suggesting the smallest possible pulse would be 18 ADC. The smallest pulse recorded in the laboratory data was 20 ADC, where the reference was 659 ADC and the valley was 639 ADC. In practice there can be fluctuations in the reference level, which is measured from the pulse analogue, but these fluctuations will not affect the minimum detectable energy. This is because the valley and trigger sub-circuits are driven by the pulse analogue voltage line. For example if the pulse analogue is 10 ADC higher than normal, the resting valley voltage will be 10 ADC higher, and the trigger threshold will be higher than normal.
Figure 6: Detection threshold for low energy particles, determined from the minimum voltage reached by the Schmitt trigger in the LTSpice simulation against input energy. For each energy, the calculated pulse height is shown in ADC counts. 0V would mean the trigger is activated, which occurs at 250 keV, and the event will be registered by the microcontroller.

Conversely, the maximum detectable energy could be governed by a number of aspects. Assuming normal behaviour of the reference level, the maximum pulse height could be 657 ADC, corresponding to an energy of 6.1 MeV, purely considering the available voltage drop in the valley. However, the limiting factor on the maximum detectable energy is more likely to be the absorption efficiency of the scintillator. For similar sized CsI(Tl) crystals, the absorption efficiency has been shown to decrease to 0% at 10 MeV [21].
The smallest pulse considerations are given in Table 4, as the model, laboratory data, and trigger threshold each suggest different values (24, 20, 18 ADC, respectively). These ADC values span an energy range of 203.9-288.0 keV across both calibrations. The wide energy range could be a result of the limited number of calibration points in the low energy region.
In summary, the trigger threshold voltage currently determines the minimum detectable energy. The threshold value could be adjusted to increase sensitivity to lower-energy particles, but this would need to be carefully traded off against the level of noise observed in the detector.
### Model considerations
Due to the commercial nature of the photodiode used in the detector, a number of the parameters had to be determined or estimated. As a result, there are degenerate combinations of parameters which result in the same photodiode response, when considering the pulse height obtained by the simulation. For example, if the pulse height is too small, the scintillator cutoff time can be decreased (producing a larger optical power), or the photodiode fall time can be increased, or the photodiode rise time can be decreased, all of which result in the final pulse being larger. The last of these depends on the doping concentration, the electron and hole mobilities (which only have typical, non-specific values), and the resistances in the diode circuit. Therefore, flexibility is available in the model to fine tune its performance to more closely match the actual system.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Origin & PH (ADC) & PH (mV) & E\({}_{\mathrm{lab}}\) (keV) & E\({}_{\mathrm{sim}}\) (keV) \\ \hline Trig. Threshold & 18 & 88.0 & 232.4 & 203.9 \\ \hline Lab. Data & 20 & 97.8 & 250.9 & 223.3 \\ \hline Model & 24 & 117.3 & 288.0 & 262.1 \\ \hline \end{tabular}
\end{table}
Table 4: Summary of smallest pulse heights, their origin, and energy for each calibration.

Future work could include a comparison of voltage traces for single events: measuring the pulse height from the voltage traces, mirroring the model, and comparing them with the pulse heights measured by the microcontroller. This would highlight the effect of different parameters, as an altered photodiode rise/fall time will yield a different pulse shape overall. Matching the simulated and actual pulse shapes would provide more understanding of which parameters are most important to the agreement of model and detector. An aspect to consider for this approach is that matching voltage traces offers no energy calibration information to compare against the model energy input.
## 4 Conclusion
A model has been developed to emulate the sensor and electronics response of a miniaturised CsI(Tl) PiN photodiode ionisation detector. The rise and decay times for the scintillator are given to be 50 ns and 900 ns, respectively. The final photodiode rise time was derived to be 15.08 \(\mu\)s and the fall time was determined through fitting to be 32 \(\mu\)s. The combination of a physics model and an electronics simulation has resulted in a detector model which has been verified against laboratory calibrations with \({}^{133}\)Ba and \({}^{137}\)Cs sources. At 200 keV the difference between the model and laboratory data was 18.3 %, and at 800 keV, it was 0.34 %. This modelling plus new calibration techniques have led to improvements in the detector energy resolution compared to previous work [4; 5], to typically 9.3 \(\pm\) 0.2 keV, in a range of 200-800 keV. This is suitable for the intended meteorological radiosonde application, as data rate limitations typically further degrade the energy resolution.
The model has been used to suggest that the Schmitt trigger threshold voltage is the most pertinent reason for the minimum detectable energy. According to the simulation, the minimum energy would be \(\sim\)250 keV, producing a pulse height of 24 ADC. In the experimental data, the minimum pulse height was 20 ADC. The difference of 4 ADC (19.6 mV) between experiment and model represents \(<\)1% of the ADC range available, as the reference rests at 657 ADC (3211 mV).
Agreement between the predicted and actual response means that components in the electronics can confidently be changed and tested in the simulation before implementation, speeding up development. Despite the PiN photodiode being a commercial product, with some physical quantities kept as proprietary information, the model affords flexibility in choosing, and matching, the photodiode fall time to accommodate parameters which have otherwise been estimated or assumptions which may not hold. It can also
be adapted to simulate changes in the system such as different scintillator material or the response at different temperatures.
Overall, this research offers increased confidence in the response of the detector, especially when considering the minimum detectable energy. This aspect is particularly valuable when evaluating the atmospheric effects of bremsstrahlung X-rays from energetic electron precipitation during space weather events [5]. Overall, the model and laboratory calibration were consistent to better than 20% over the operating range of the instrument and 5% for energies of \(>\)400 keV. The detailed understanding now acquired of this novel miniaturised instrument will allow for more effective analysis and interpretation of existing data and future design improvements.
**Data availability**
Data is available at Tabbett, Justin; Aplin, Karen (2023), "PiN Modelling Data", Mendeley Data, V1, doi: 10.17632/mctr9ksk4r.1
**Acknowledgements**
We would like to thank Dr Alessandro Narduzzo at the Department of Physics, University of Bath for access to radioactive sources.
**Funding**
EPSRC studentship and A-Squared Technologies Ltd.
|
2305.15032 | How to Distill your BERT: An Empirical Study on the Impact of Weight
Initialisation and Distillation Objectives | Recently, various intermediate layer distillation (ILD) objectives have been
shown to improve compression of BERT models via Knowledge Distillation (KD).
However, a comprehensive evaluation of the objectives in both task-specific and
task-agnostic settings is lacking. To the best of our knowledge, this is the
first work comprehensively evaluating distillation objectives in both settings.
We show that attention transfer gives the best performance overall. We also
study the impact of layer choice when initializing the student from the teacher
layers, finding a significant impact on the performance in task-specific
distillation. For vanilla KD and hidden states transfer, initialisation with
lower layers of the teacher gives a considerable improvement over higher
layers, especially on the task of QNLI (up to an absolute percentage change of
17.8 in accuracy). Attention transfer behaves consistently under different
initialisation settings. We release our code as an efficient transformer-based
model distillation framework for further studies. | Xinpeng Wang, Leonie Weissweiler, Hinrich Schütze, Barbara Plank | 2023-05-24T11:16:09Z | http://arxiv.org/abs/2305.15032v1 | # How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives
###### Abstract
Recently, various intermediate layer distillation (ILD) objectives have been shown to improve compression of BERT models via Knowledge Distillation (KD). However, a comprehensive evaluation of the objectives in both task-specific and task-agnostic settings is lacking. To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings. We show that attention transfer gives the best performance overall. We also study the impact of layer choice when initializing the student from the teacher layers, finding a significant impact on the performance in task-specific distillation. For vanilla KD and hidden states transfer, initialisation with lower layers of the teacher gives a considerable improvement over higher layers, especially on the task of QNLI (up to an absolute percentage change of 17.8 in accuracy). Attention transfer behaves consistently under different initialisation settings. We release our code as an efficient transformer-based model distillation framework for further studies.1
Footnote 1: [https://github.com/mainlp/How-to-distill-your-BERT](https://github.com/mainlp/How-to-distill-your-BERT)
## 1 Introduction
Large-scale pre-trained language models (PLMs) have brought revolutionary advancements to natural language processing, such as BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), ELECTRA (Clark et al., 2020) and GPT-3 (Brown et al., 2020). However, the enormous size of these models has led to difficulties in deploying them in resource-constrained environments. Therefore significant interest has emerged in developing methods for reducing their size.
Knowledge Distillation (KD) (Hinton et al., 2015) transfers the knowledge embedded in one model to another, which can be used for cross-lingual transfer, cross-modal transfer, and model compression. KD heavily depends on the distillation objective, which determines how knowledge is transferred. Many works have tried to design different distillation objectives for Transformer-based (Vaswani et al., 2017) model compression and successfully distilled PLMs into smaller models, either task-specifically (Sun et al., 2019; Jiao et al., 2020) or task-agnostically--which differ in whether KD is performed at the pre-training stage or during task finetuning (Sanh et al., 2019; Sun et al., 2020; Wang et al., 2020; Wang et al., 2021).
Despite their impressive results, determining the best distillation objective is difficult due to their diverse comparison setups, such as data preprocessing, student model initialization, layer mapping strategies, task-specific/agnostic settings, and others. This breadth of choices and lack of code has led to comparison on unequal grounds and contradictory findings.2 This shows a substantial need to reproduce and evaluate distillation objectives within the same setting. Motivated by this gap, we conduct experiments on the most common distillation objectives and their combinations in task-specific and task-agnostic settings. From our empirical evaluation, we show: (1) attention transfer performs consistently well in various initialisation settings, (2) initialisation with lower layers of the teacher gives a considerable improvement over higher layers in task-specific distillation.
Footnote 2: For example, both Jiao et al. (2020) and Wang et al. (2020) claimed to be the better method in their setting. See section 5 for detail.
In summary, our **contributions** are:
* We perform an evaluation of the effectiveness of different distillation objectives and the layer choice for initializing the student from the teacher layer.
* We make our code available as an efficient distillation framework.
* We provide practical guidance in terms of teacher layer choice for initialisation, distillation objectives and training parameters.
## 2 Related Work
**Task-specific Distillation.** Sun et al. (2019) task-specifically compressed BERT by learning from every \(k\)-th layer of the teacher. To avoid leaving out some of the teacher layers, many follow-up works (Wu et al., 2020, 2021, 2021) designed new layer mapping strategies to fuse the teacher layers. Jiao et al. (2020) used data augmentation to further improve the performance. Initialising the student model with pretrained weights is crucial for performance since the student learns from the teacher only briefly on downstream tasks. Common choices for initialization are: (1) task-agnostically distilling models first, (2) using publicly available distilled models, or (3) initializing with teacher layers. As part of this study, we examine how to maximize the benefits of initializing from teacher layers.
**Task-agnostic Distillation.** In the field of task-agnostic distillation, one line of work is to compress the teacher model into a student model with the same depth but narrower blocks (Sun et al., 2020, 2022). Another line of work is to distill the teacher into a student with fewer layers (Sanh et al., 2019; Jiao et al., 2020; Wang et al., 2020), which is our focus.
**Comparative Studies.** Li et al. (2021) conducted out-of-domain and adversarial evaluation on three KD methods, which used hidden state transfer or data augmentation. Lu et al. (2022) is closely related to our work, where they also evaluated knowledge types and initialisation schemes. However, they did not consider layer choice when initialising from the teacher, and the evaluation was only for task-specific settings. Hence, our work complements theirs.
## 3 Distillation Objectives
**Prediction Layer Transfer.** Prediction layer transfer minimizes the soft cross-entropy between the logits from the teacher and the student: \(\mathcal{L}_{\text{pred}}=\operatorname{CE}\left(\mathbf{z}^{T}/t,\mathbf{z}^{S}/t\right)\), where \(\mathbf{z}^{T}\) and \(\mathbf{z}^{S}\) are the logits from the teacher and student, respectively, and \(t\) is the temperature value.
Following the vanilla KD approach (Hinton et al., 2015), the final training loss is a combination of \(\mathcal{L}_{\text{pred}}\) and the supervision loss \(\mathcal{L}_{\text{ce}}\) (the masked language modelling loss \(\mathcal{L}_{\text{mlm}}\) in the pre-training stage). We denote this objective as **vanilla KD**.
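A minimal PyTorch sketch of this objective follows; the temperature, the mixing weight, and the conventional \(t^{2}\) gradient-scaling factor are our illustrative choices, not values specified above:

```python
import torch
import torch.nn.functional as F

def vanilla_kd_loss(z_s, z_t, labels, t=2.0, alpha=0.5):
    """alpha * L_pred + (1 - alpha) * L_ce. The KL term differs from the soft
    cross-entropy only by the teacher entropy, a constant w.r.t. the student."""
    l_pred = F.kl_div(
        F.log_softmax(z_s / t, dim=-1),
        F.softmax(z_t / t, dim=-1),
        reduction="batchmean",
    ) * (t ** 2)
    return alpha * l_pred + (1 - alpha) * F.cross_entropy(z_s, labels)

loss = vanilla_kd_loss(torch.randn(8, 3), torch.randn(8, 3),
                       torch.randint(0, 3, (8,)))
```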
**Hidden States Transfer.** Hidden states transfer penalizes the distance between the hidden states of specific layers from the teacher and the student. Common choices for the representation are the embedding of the [CLS] token (Sun et al., 2019) and the whole sequence embedding (Jiao et al., 2020). We use the Mean-Squared-Error (MSE) to measure the distance between the student and teacher embedding, which can be formulated as \(\mathcal{L}_{\text{hid}}=\operatorname{MSE}\left(\mathbf{h}^{S}\mathbf{W}_{h},\mathbf{h}^{T}\right)\), where \(\mathbf{h}^{S}\in\mathbb{R}^{d}\) and \(\mathbf{h}^{T}\in\mathbb{R}^{d^{\prime}}\) are the [CLS] token embeddings of a specific student and teacher layer, and \(d\) and \(d^{\prime}\) are the hidden dimensions. The matrix \(\mathbf{W}_{h}\in\mathbb{R}^{d\times d^{\prime}}\) is a learnable transformation. We denote this objective as **Hid-CLS**. In the case of transferring the sequence embedding, one can replace the token embeddings with sequence embeddings \(\mathbf{H}^{S}\in\mathbb{R}^{l\times d}\) and \(\mathbf{H}^{T}\in\mathbb{R}^{l\times d^{\prime}}\), where \(l\) is the sequence length. The objective that transfers the sequence embedding with MSE loss is denoted as **Hid-Seq**.
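As an illustration, a sketch of the Hid-Seq variant with the learnable projection \(\mathbf{W}_{h}\) follows (dimensions are example values; taking only the first token of the sequence gives Hid-CLS):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HidSeqLoss(nn.Module):
    """MSE between the projected student sequence embedding H^S W_h and H^T."""
    def __init__(self, d_student: int, d_teacher: int):
        super().__init__()
        self.w_h = nn.Linear(d_student, d_teacher, bias=False)  # learnable W_h

    def forward(self, h_s: torch.Tensor, h_t: torch.Tensor) -> torch.Tensor:
        # h_s: (batch, seq_len, d), h_t: (batch, seq_len, d')
        return F.mse_loss(self.w_h(h_s), h_t)

loss = HidSeqLoss(384, 768)(torch.randn(8, 128, 384), torch.randn(8, 128, 768))
```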
We also evaluated a contrastive representation learning method which transfers the hidden state representation from the teacher to the student with a contrastive objective (Sun et al., 2020). We inherited their code for implementation and refer our readers to the original paper for details. We denote this objective as **Hid-CLS-Contrast**.
**Attention and Value Transfer.** The attention mechanism has been found to capture rich linguistic knowledge (Clark et al., 2019), and attention map transfer is widely used in transformer model distillation. To measure the similarity between the multi-head attention blocks of the teacher and the student, MSE and Kullback-Leibler divergence are the two standard loss functions. The objective using MSE is formulated as \(\mathcal{L}_{\text{att}}=\frac{1}{h}\sum_{i=1}^{h}\operatorname{MSE}(\mathbf{A}_{i}^{S},\mathbf{A}_{i}^{T})\), where \(h\) is the number of attention heads and the matrix \(\mathbf{A}_{i}\in\mathbb{R}^{l\times l}\) refers to the \(i\)-th attention head (before the softmax operation) in the multi-head attention block. We denote this objective as **Att-MSE**.
Since the attention after the softmax function is a distribution over the sequence, we can also use the KL-divergence to measure the distance: \(\mathcal{L}_{\text{att}}=\frac{1}{TH}\sum_{t=1}^{T}\sum_{h=1}^{H}D_{KL}(a_{t,h}^{T}\|a_{t,h}^{S})\), where \(T\) is the sequence length and \(H\) is the number of attention heads. We will denote this objective as **Att-KL**. In addition to attention transfer, value-relation transfer was proposed by Wang et al. (2020), to which we refer our readers for details. The value-relation transfer objective will be denoted as **Val-KL**.
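Sketches of both attention objectives are given below (tensor shapes are example values; the Val-KL objective is analogous, applied to value-relation matrices instead of attention maps):

```python
import torch
import torch.nn.functional as F

def att_mse_loss(a_s: torch.Tensor, a_t: torch.Tensor) -> torch.Tensor:
    """Att-MSE on pre-softmax attention scores of shape (batch, heads, len, len);
    F.mse_loss averages over every element, hence also over the h heads."""
    return F.mse_loss(a_s, a_t)

def att_kl_loss(a_s: torch.Tensor, a_t: torch.Tensor) -> torch.Tensor:
    """Att-KL: mean KL(teacher || student) over batch, heads and query rows."""
    b, h, t, _ = a_s.shape
    log_p_s = F.log_softmax(a_s, dim=-1)
    p_t = F.softmax(a_t, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="sum") / (b * h * t)

a_s, a_t = torch.randn(2, 12, 128, 128), torch.randn(2, 12, 128, 128)
print(att_mse_loss(a_s, a_t).item(), att_kl_loss(a_s, a_t).item())
```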
## 4 Experimental Setup
We evaluate our model on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) tasks, including linguistic acceptability (CoLA), sentiment analysis (SST-2), semantic equivalence (MRPC, QQP), and natural language inference (MNLI, QNLI, RTE).
For task-specific distillation, we distill a fine-tuned RoBERTa\({}_{\text{BASE}}\) (Liu et al., 2019) into a 3-layer transformer model on each GLUE task, using the Fairseq (Ott et al., 2019) implementation and the recommended hyperparameters presented in Liu et al. (2019). We follow the training procedure from TinyBERT to perform _intermediate layer_ and _prediction layer_ distillation sequentially for 10 epochs each, freeing us from tuning the loss weights. For intermediate layer distillation, the student learns from the same teacher layers that were used for initialising the student. In addition, we always initialise the embedding layer with the teacher's embedding layer.
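The layer-initialisation scheme can be sketched as follows. The paper's own implementation is built on Fairseq, so this HuggingFace-based snippet is only an illustrative equivalent, with the layer indices as examples:

```python
from transformers import RobertaConfig, RobertaModel

teacher = RobertaModel.from_pretrained("roberta-base")
student = RobertaModel(
    RobertaConfig.from_pretrained("roberta-base", num_hidden_layers=3)
)

# Always copy the embedding layer from the teacher.
student.embeddings.load_state_dict(teacher.embeddings.state_dict())

# Copy selected teacher layers into the 3-layer student:
# [3, 7, 11] for "every 4th" layer (0-indexed), [0, 1, 2] for the first three.
for s_idx, t_idx in enumerate([3, 7, 11]):
    student.encoder.layer[s_idx].load_state_dict(
        teacher.encoder.layer[t_idx].state_dict()
    )
```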
For task-agnostic distillation, we distill the uncased version of BERT\({}_{\text{BASE}}\) into a 6-layer student model, based on the implementation by Izsak et al. (2021). Here we perform last-layer knowledge transfer since we see no improvement when transferring multiple layers in our experiments. We train the student model for 100k steps with batch size 1024, a peak learning rate of 5e-4 and a maximum sequence length of 128. The distilled student model is then fine-tuned on the GLUE datasets with grid search over batch size {16, 32} and learning rate {1e-5, 3e-5, 5e-5, 8e-5}. We follow the original training corpus of BERT: English Wikipedia and BookCorpus (Zhu et al., 2015).
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Objectives** & **QNLI** & **SST-2** & **MNLI** & **MRPC** & **QQP** & **RTE** & **CoLA** & **Avg** \\ & Acc & Acc & Acc & F1 & Acc & Acc & Mcc & \\ \hline Vanilla KD & 66.5\({}_{\pm 1.49}\) & 84.7\({}_{\pm 0.16}\) & 75.1\({}_{\pm 0.05}\) & 71.2\({}_{\pm 0.80}\) & 81.9\({}_{\pm 0.10}\) & 54.0\({}_{\pm 1.24}\) & 69.1\({}_{\pm 0.00}\) & 71.8 \\ \hline Hid-CLS-Contrast & 69.3\({}_{\pm 0.60}\) & 85.3\({}_{\pm 0.56}\) & 76.2\({}_{\pm 0.45}\) & 71.1\({}_{\pm 0.85}\) & 83.1\({}_{\pm 0.69}\) & 53.6\({}_{\pm 0.23}\) & 69.0\({}_{\pm 0.12}\) & 72.5 \\ Hid-CLS & 75.7\({}_{\pm 0.57}\) & 85.8\({}_{\pm 0.34}\) & 77.0\({}_{\pm 0.10}\) & 71.3\({}_{\pm 0.41}\) & 83.8\({}_{\pm 1.63}\) & 54.0\({}_{\pm 2.17}\) & 68.4\({}_{\pm 0.35}\) & 73.2 \\ Hid-Seq & 83.3\({}_{\pm 0.13}\) & 87.4\({}_{\pm 0.13}\) & 78.3\({}_{\pm 0.13}\) & **72.9\({}_{\pm 0.50}\)** & 87.6\({}_{\pm 0.00}\) & 51.8\({}_{\pm 1.10}\) & 69.2\({}_{\pm 0.55}\) & 75.8 \\ \hline Att-MSE & 84.3\({}_{\pm 0.18}\) & 89.2\({}_{\pm 0.40}\) & 78.6\({}_{\pm 0.25}\) & 71.1\({}_{\pm 0.41}\) & 88.7\({}_{\pm 0.05}\) & 54.4\({}_{\pm 1.03}\) & 69.3\({}_{\pm 0.17}\) & 76.5 \\ +Hid-Seq & 84.6\({}_{\pm 0.29}\) & 89.2\({}_{\pm 0.21}\) & 78.9\({}_{\pm 0.10}\) & 71.8\({}_{\pm 0.51}\) & 88.8\({}_{\pm 0.00}\) & 54.0\({}_{\pm 0.93}\) & **69.5\({}_{\pm 0.48}\)** & 77.0 \\ \hline Att-KL & 85.3\({}_{\pm 0.14}\) & 89.0\({}_{\pm 0.26}\) & 79.4\({}_{\pm 0.08}\) & 71.4\({}_{\pm 0.29}\) & 89.0\({}_{\pm 0.05}\) & 55.5\({}_{\pm 2.05}\) & 69.3\({}_{\pm 0.13}\) & 77.0 \\ +Hid-Seq & 84.6\({}_{\pm 0.21}\) & 89.1\({}_{\pm 0.46}\) & 79.5\({}_{\pm 0.17}\) & 72.4\({}_{\pm 0.39}\) & 89.0\({}_{\pm 0.06}\) & 57.2\({}_{\pm 0.86}\) & 69.3\({}_{\pm 0.21}\) & 77.3 \\ +Val-KL & **85.5\({}_{\pm 0.34}\)** & **89.6\({}_{\pm 0.31}\)** & **79.6\({}_{\pm 0.10}\)** & 72.2\({}_{\pm 0.39}\) & **89.1\({}_{\pm 0.05}\)** & **57.5\({}_{\pm 0.70}\)** & 69.2\({}_{\pm 0.15}\) & **77.5** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Task-specific distillation results on GLUE dev sets. Student models are initialised with every 4th layer of the teacher model. We report the average and standard deviation over 4 runs. Attention based objectives consistently outperform hidden states transfer and vanilla KD.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Objectives** & **QNLI** & **SST-2** & **MNLI** & **MRPC** & **QQP** & **RTE** & **CoLA** & **Avg** \\ & Acc & Acc & Acc & F1 & Acc & Acc & Mcc & \\ \hline DistilBERT\({}^{\star}\) & 89.2 & 91.3 & 82.2 & 87.5 & 88.5 & 59.9 & 51.3 & 78.5 \\ TinyBERT\({}^{\dagger}\) & 90.5 & 91.6 & 83.5 & 88.4 & 90.6 & 72.2 & 42.8 & 79.9 \\ MiniLM\({}^{\ddagger}\) & **91.0** & 92.0 & **84.0** & 88.4 & **91.0** & **71.5** & 49.2 & 81.0 \\ \hline Vanilla KD\({}^{\star}\) & 88.6 & 91.4 & 82.4 & 86.5 & 90.6 & 61.0 & **54.4** & 79.3 \\ \hline Hid-CLS & 86.5 & 90.6 & 79.3 & 73.0 & 89.7 & 61.0 & 33.9 & 73.4 \\ Hid-Seq & 89.2 & 91.5 & 82.3 & 89.2 & 90.3 & 67.2 & 48.2 & 79.7 \\ \hline Att-MSE & 89.8 & 91.6 & 83.2 & 90.6 & 90.7 & 69.7 & 53.5 & **81.3** \\ +Hid-Seq\({}^{\dagger}\) & 89.7 & **92.4** & 82.8 & 90.4 & 90.8 & 68.6 & 52.8 & 81.1 \\ \hline Att-KL & 88.0 & 89.7 & 81.1 & 90.1 & 90.3 & 66.1 & 43.6 & 78.4 \\ +Hid-Seq & 88.9 & 91.6 & 82.4 & 90.0 & 90.5 & 66.8 & 47.9 & 79.7 \\ +Val-KL\({}^{\ddagger}\) & 89.8 & 91.6 & 82.4 & **91.0** & 90.6 & 66.7 & 47.7 & 80.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Task-agnostic distillation: performance on GLUE dev sets of three existing distilled 6-layer Transformer models and our distilled 6-layer students. All the students are randomly initialised and distilled from BERT\({}_{\text{BASE}}\). We report the best fine-tuning result with grid search over learning rate and batch size. Att-MSE performs the best among all the objectives.
## 5 Results
**Distillation Objectives.** Distillation objective performances are compared in Table 1 and Table 2 for task-specific and task-agnostic settings, respectively. In the task-specific setting, attention transfer is the best choice with initialisation from every \(k\)-th teacher layer. However, the performance of hidden states transfer and _vanilla KD_ can be drastically improved under other initialisation settings, which we discuss in the next section.
In the task-agnostic setting, the _Att-MSE_ objective outperforms _Att-KL_, which performs similarly to _vanilla KD_ and hidden states transfer. This contradicts the observation in MiniLM (Wang et al., 2020), where their _Att-KL_ based objective outperforms TinyBERT (Jiao et al., 2020) with _Att-MSE_. However, MiniLM has more training iterations and a larger batch size, which makes comparison difficult. The performance drop of _Att-KL_ compared to _Att-MSE_ is mainly due to its poor performance on CoLA (linguistic acceptability of a sentence), on which MiniLM also performs poorly. We hypothesise that MSE can transfer the linguistic knowledge embedded in the attention matrix more effectively because the MSE loss function gives more direct matching than KL-divergence, which was also concluded by Kim et al. (2021).
For reference, we report the results of three existing works that use the same objectives as our experiments. The results of DistilBERT and MiniLM are taken from the respective papers. The result of TinyBERT is taken from Wang et al. (2020) for fair comparison, since TinyBERT only reported task-specific distillation results with data augmentation. We denote the prior works and the corresponding objective we evaluate with the same superscript symbol.
**Initialisation.** We also studied the impact of the choice of teacher layers for initialising the student. Evaluation scores on GLUE task development sets under different teacher layer choices for initialisation are reported in Table 3 and Table 4 for task-specific and task-agnostic distillation, respectively.
We observe that the initialisation of layers has a huge impact in the task-specific setting. The performance of _vanilla KD_ and hidden states transfer was significantly improved when initialising from lower layers of the teacher (e.g. from 68.1% to 85.9% on QNLI for vanilla KD). This explains the impressive result of PKD (Sun et al., 2019), which initialised the student with the first \(k\) teacher layers. We believe this is an important observation that will motivate further research into investigating the effectiveness of the different layers of the pre-trained transformer model.
In the task-agnostic setting, we only observe
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Objectives** & **Init.** & **QNLI** & **SST-2** & **MNLI** & **MRPC** & **QQP** & **RTE** & **CoLA** & **Avg** \\ & & Acc & Acc & Acc & F1 & Acc & Acc & Acc \\ \hline \multirow{4}{*}{Vanilla KD} & 4,8,12 & 66.5\(\pm_{1.49}\) & 84.7\(\pm_{0.16}\) & 75.1\(\pm_{0.05}\) & 71.2\(\pm_{0.80}\) & 81.9\(\pm_{0.10}\) & 54.0\(\pm_{1.24}\) & 69.1\(\pm_{0.00}\) & 71.8 \\ & 1,8,12 & 82.9\(\pm_{0.31}\) & 88.5\(\pm_{0.51}\) & 76.6\(\pm_{0.08}\) & 71.2\(\pm_{0.88}\) & 87.8\(\pm_{0.06}\) & 55.5\(\pm_{1.07}\) & 70.8\(\pm_{0.29}\) & 76.2 \\ & 1,2,3 & **86.2\(\pm_{0.35}\)** & **90.4\(\pm_{0.28}\)** & **78.7\(\pm_{0.18}\)** & **78.6\(\pm_{0.18}\)** & **89.8\(\pm_{0.05}\)** & **57.1\(\pm_{1.46}\)** & **74.9\(\pm_{0.54}\)** & **79.4** \\ \hline \multirow{4}{*}{Hid-CLS-Contrast} & 4,8,12 & 69.3\(\pm_{0.60}\) & 85.3\(\pm_{0.36}\) & 76.2\(\pm_{0.45}\) & 71.1\(\pm_{0.85}\) & 83.1\(\pm_{0.69}\) & 53.6\(\pm_{0.23}\) & 69.0\(\pm_{0.12}\) & 72.5 \\ & 1,8,12 & 82.9\(\pm_{0.36}\) & 88.6\(\pm_{0.29}\) & 77.0\(\pm_{0.58}\) & 72.8\(\pm_{0.61}\) & 88.0\(\pm_{0.13}\) & 55.4\(\pm_{0.75}\) & 70.4\(\pm_{0.30}\) & 76.4 \\ & 1,2,3 & **86.1\(\pm_{0.22}\)** & **89.6\(\pm_{0.38}\)** & **79.0\(\pm_{0.12}\)** & **73.9\(\pm_{1.43}\)** & **90.1\(\pm_{0.10}\)** & **55.1\(\pm_{0.67}\)** & **71.1\(\pm_{1.09}\)** & **77.8** \\ \hline \multirow{4}{*}{Hid-CLS} & 4,8,12 & 75.7\(\pm_{0.57}\) & 85.8\(\pm_{0.34}\) & 77.0\(\pm_{0.10}\) & 71.3\(\pm_{0.41}\) & 83.8\(\pm_{1.63}\) & 54.0\(\pm_{2.17}\) & 68.4\(\pm_{0.35}\) & 73.2 \\ & 1,8,12 & 83.4\(\pm_{0.15}\) & 88.1\(\pm_{0.38}\) & 77.7\(\pm_{0.10}\) & 71.9\(\pm_{0.10}\) & 88.6\(\pm_{0.06}\) & 56.1\(\pm_{0.88}\) & 71.5\(\pm_{0.40}\) & 76.7 \\ & 1,2,3 & **85.7\(\pm_{0.05}\)** & **90.3\(\pm_{0.29}\)** & **78.6\(\pm_{0.14}\)** & **74.3\(\pm_{1.00}\)** & **90.1\(\pm_{0.00}\)** & **57.1\(\pm_{1.37}\)** & **73.6\(\pm_{0.24}\)** & **78.5** \\ \hline \multirow{4}{*}{Hid-Seq} & 4,8,12 & 83.3\(\pm_{0.13}\) & 87.4\(\pm_{0.13}\) & 78.3\(\pm_{0.13}\) & 72.9\(\pm_{0.50}\) & 87.6\(\pm_{0.00}\) & 51.8\(\pm_{1.10}\) & 69.2\(\pm_{0.55}\) & 75.8 \\ & 1,8,12 & 84.3\(\pm_{0.10}\) & 88.6\(\pm_{0.28}\) & 78.2\(\pm_{0.08}\) & 72.0\(\pm_{0.70}\) & 88.6\(\pm_{0.10}\) & 55.2\(\pm_{1.40}\) & 71.6\(\pm_{0.37}\) & 77.6 \\ & 1,2,3 & **85.9\(\pm_{0.24}\)** & **90.7\(\pm_{0.08}\)** & **78.9\(\pm_{0.10}\)** & **75.5\(\pm_{1.14}\)** & **90.0\(\pm_{0.05}\)** & **56.6\(\pm_{0.74}\)** & **74.2\(\pm_{0.45}\)** & **78.8** \\ \hline \multirow{4}{*}{Att-KL} & 4,8,12 & 85.3\(\pm_{0.14}\) & 89.0\(\pm_{0.26}\) & 79.4\(\pm_{0.08}\) & 71.4\(\pm_{0.29}\) & 89.0\(\pm_{0.05}\) & 55.2\(\pm_{0.06}\) & 69.3\(\pm_{0.13}\) & 77.0 \\ & 1,8,12 & 84.7\(\pm_{0.26}\) & **89.6\(\pm_{0.13}\)** & 78.2\(\pm_{0.10}\) & **72.5\(\pm_{0.24}\)** & 88.6\(\pm_{0.08}\) & 56.5\(\pm_{0.44}\) & **70.4\(\pm_{0.26}\)** & 77.2 \\ & 1,2,3 & **86.2\(\pm_{0.0.06}\)** & 88.6\(\pm_{0.19}\) & 77.9\(\pm_{0.17}\) & 71.3\(\pm_{0.24}\) & **89.0\(\pm_{0.05}\)** & **61.2\(\pm_{0.27}\)** & 69.5\(\pm_{0.80}\)** & **77.7** \\ \hline \multirow{4}{*}{Att-MSE} & 4,8,12 & 84.3\(\pm_{0.18}\) & 89.2\(\pm_{0.40}\) & **78.6\(\pm_{0.25}\)** & 71.1\(\pm_{0.41}\) & 88.7\(\pm_{0.05}\) & 54.4\(\pm_{1.03}\) & 69.3\(\pm_{0.17}\) & 76.5 \\ & 1,8,12 & 84.3\(\pm_{0.25}\) & **89.8\(\pm_{0.39}\)** & 77.5\(\pm_{0.14}\) & **72.5\(\pm_{1.36}\)** & 88.4\(\pm_{0.05}\) & 57.2\(\pm_{0.9
considerable improvement with the objective _Hid-CLS_, which performs poorly when randomly initialised, compared to other objectives. This contradicts Sanh et al. (2019), who, with a _vanilla KD_ objective, instead showed an improvement of 3 points in average score when initialising from the teacher over random initialisation. However, our _vanilla KD_ approach initialised with random weights outperforms their best result (79.3 vs 78.5). Therefore, we hypothesise that the advantage of pre-loading teacher layers over random initialisation diminishes as the student is fully distilled during pre-training.
**Significance Test.** We conducted paired t-tests for all the distillation objectives in Table 1 and the three initialisation choices within each objective in Table 3. For Table 1, all pairs of objectives are statistically significantly different (p < 0.05) except the following: (Att-KL, Att-MSE), (Att-KL, Att-KL + Hid-Seq), (Att-KL, Att-MSE + Hid-Seq). This further supports our conclusion that when initialising from every K-th teacher layer, it is important to do attention transfer, and the specific objective matters less. For Table 3, all three initialisation choices are statistically significantly different from each other for all the objectives, except the pair (1,8,12 vs. 1,2,3) for Att-KL and Att-MSE, which indicates the robustness of attention transfer under different initialisation choices.
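For concreteness, a minimal sketch of this significance test is given below; the per-seed scores are illustrative placeholders rather than our reported numbers, and `scipy` is our assumed tooling.

```python
# Paired t-test between two distillation objectives, paired by random seed.
from scipy.stats import ttest_rel

# Hypothetical average GLUE scores over four seeds (same seed -> same index).
scores_att_kl = [77.0, 77.3, 76.8, 77.1]
scores_att_mse = [76.5, 76.9, 76.4, 76.7]

t_stat, p_value = ttest_rel(scores_att_kl, scores_att_mse)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # significant when p < 0.05
```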
**Training Time.** Since task-agnostic distillation is computationally expensive, we also focus on optimising our distillation framework for faster training. Our training time is about 58 GPU hours on a 40GB A100, compared to TinyBERT (576 GPU hours on 16GB V100) and DistilBERT (720 GPU hours on 16GB V100). This is achieved by using a shorter sequence length and the optimised transformer pre-training framework of Izsak et al. (2021). We see no improvement when using a longer sequence length of 512.
**Guidance.** To sum up, our observations, trade-offs and recommendations are:
* For task-specific KD, we recommend attention transfer in general, due to its consistently high performance in various initialisation settings (Table 3). The exact attention distillation objective matters less (Table 1). Considering the excellent performance of vanilla KD when initialised with lower teacher layers (Table 3), we also recommend lower teacher layer initialisation with the vanilla KD approach for its shorter training time and simple implementation; a minimal sketch of both recommended objectives follows this list.
* For task-agnostic KD, attention transfer with Mean-Squared-Error is the best choice based on our results (Tables 2 and 4).
* We recommend using our task-agnostic distillation framework with a short sequence length for fast training.
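Below is a minimal PyTorch sketch of the two recommended objectives; the temperature, tensor shapes and layer mapping are illustrative assumptions, not the exact configuration used in our experiments.

```python
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, T=2.0):
    """Vanilla KD: soft cross-entropy between teacher and student predictions."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T ** 2

def attention_mse_loss(student_atts, teacher_atts):
    """Att-MSE: mean-squared error between aligned attention maps.

    Both arguments are lists of [batch, heads, seq, seq] tensors, already
    aligned (e.g. every K-th teacher layer mapped to one student layer).
    """
    return sum(F.mse_loss(s, t) for s, t in zip(student_atts, teacher_atts))
```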
## 6 Conclusion
We extensively evaluated distillation objectives for the transformer model and studied the impact of weight initialisation. We found that attention transfer performs consistently well in both task-specific and task-agnostic settings, regardless of the teacher layers chosen for student initialization. We also observed that initialising with lower teacher layers significantly improved task-specific distillation performance compared to higher layers. We release our code and hope this work motivates further research into developing better distillation objectives and compressing in-house models.
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline
**Objectives** & **Init.** & **QNLI** & **SST-2** & **MNLI** & **MRPC** & **QQP** & **RTE** & **CoLA** & **Avg** \\
 & & Acc & Acc & Acc & F1 & Acc & Acc & Mcc & \\ \hline
\multirow{2}{*}{Vanilla KD} & random & **88.6** & **91.4** & **82.4** & 86.5 & 90.6 & 61.0 & 54.4 & 79.3 \\
 & first 6 & 88.3 & 91.2 & 82.2 & **87.0** & 90.6 & **62.8** & **55.4** & **79.6** \\ \hline
\multirow{2}{*}{Hid-CLS} & random & 86.5 & 90.6 & 79.3 & 73.0 & 89.7 & 61.0 & 33.9 & 73.4 \\
 & first 6 & **87.0** & **91.2** & **80.7** & **88.0** & **90.2** & **66.0** & **42.5** & **77.9** \\ \hline
\multirow{2}{*}{Hid-Seq} & random & **89.2** & 91.5 & 82.3 & 89.2 & 90.3 & **67.2** & 48.2 & 79.7 \\
 & first 6 & 87.5 & 91.5 & 82.3 & **90.0** & **90.5** & 66.4 & **50.6** & **79.9** \\ \hline
\multirow{2}{*}{Att-MSE} & random & **89.8** & 91.6 & **83.2** & 90.6 & 90.7 & **69.7** & **53.5** & **81.3** \\
 & first 6 & 89.5 & **91.7** & 82.8 & **91.0** & **90.8** & 66.1 & 53.4 & 80.8 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Task-agnostic distillation: Performance of the student initialised with random weights vs first 6 teacher layers. Attention transfer performs the best in both initialisation settings.
## 7 Limitations
We evaluated the most widely used distillation objectives, including prediction layer transfer, hidden states transfer and attention transfer. However, some objectives are not included in our evaluation due to missing implementation details in the original papers. For example, we only implemented the contrastive intermediate layer distillation objective proposed by Sun et al. (2020) in the task-specific setting, since code and implementation details are missing for the task-agnostic setting. New objectives are increasingly appearing for model compression in the field of computer vision, such as Wasserstein contrastive representation distillation Chen et al. (2021) and distillation with Pearson correlation Huang et al. (2022), which could be included to broaden the scope of the distillation objective evaluation.
This work empirically studied the impact of the teacher layer choice for initialisation and of the training objectives; however, further analysis is needed to understand why lower teacher layers are essential for initialisation, and why attention transfer behaves consistently well under various teacher layer choices in the task-specific setting, while hidden states transfer does not.
## Acknowledgements
We thank the anonymous reviewers as well as the members of the MaiNLP research lab for their constructive feedback. This research is supported by ERC Consolidator Grant DIALECT 101043235.
|
2310.18978 | Quantum Phase Transitions in a Generalized Dicke Model | We investigate a generalized Dicke model by introducing two interacting spin
ensembles coupled with a single-mode bosonic field. Apart from the normal to
superradiant phase transition induced by the strong spin-boson coupling,
interactions between the two spin ensembles enrich the phase diagram by
introducing ferromagnetic, antiferromagnetic and paramagnetic phases. The
mean-field approach reveals a phase diagram comprising three phases:
paramagnetic-normal phase, ferromagnetic-superradiant phase, and
antiferromagnetic-normal phase. Ferromagnetic spin-spin interaction can
significantly reduce the required spin-boson coupling strength to observe the
superradiant phase, where the macroscopic excitation of the bosonic field
occurs. Conversely, antiferromagnetic spin-spin interaction can strongly
suppress the superradiant phase. To examine higher-order quantum effects beyond
the mean-field contribution, we utilize the Holstein-Primakoff transformation,
which converts the generalized Dicke model into three coupled harmonic
oscillators in the thermodynamic limit. Near the critical point, we observe the
closing of the energy gap between the ground and the first excited states, the
divergence of entanglement entropy and quantum fluctuations in certain
quadratures. These observations further confirm the quantum phase transition and
offer additional insights into critical behaviors. | Wen Liu, Liwei Duan | 2023-10-29T11:00:56Z | http://arxiv.org/abs/2310.18978v1 | # Quantum Phase Transitions in a Generalized Dicke Model
###### Abstract
We investigate a generalized Dicke model by introducing two interacting spin ensembles coupled with a single-mode bosonic field. Apart from the normal to superradiant phase transition induced by the strong spin-boson coupling, interactions between the two spin ensembles enrich the phase diagram by introducing ferromagnetic, antiferromagnetic and paramagnetic phases. The mean-field approach reveals a phase diagram comprising three phases: paramagnetic-normal phase, ferromagnetic-superradiant phase, and antiferromagnetic-normal phase. Ferromagnetic spin-spin interaction can significantly reduce the required spin-boson coupling strength to observe the superradiant phase, where the macroscopic excitation of the bosonic field occurs. Conversely, antiferromagnetic spin-spin interaction can strongly suppress the superradiant phase. To examine higher-order quantum effects beyond the mean-field contribution, we utilize the Holstein-Primakoff transformation, which converts the generalized Dicke model into three coupled harmonic oscillators in the thermodynamic limit. Near the critical point, we observe the closing of the energy gap between the ground and the first excited states, the divergence of entanglement entropy and quantum fluctuations in certain quadratures. These observations further confirm the quantum phase transition and offer additional insights into critical behaviors.
## I Introduction
The Dicke model, named after R. H. Dicke, is a prominent example in the field of quantum optics and quantum mechanics [1]. It was initially formulated to describe the collective behavior of a large ensemble of spins (two-level systems or qubits) interacting with a single-mode electromagnetic field within an optical cavity. If only a single spin is considered, the Dicke model reduces to the Rabi model, one of the simplest models for studying light-matter interactions [2; 3; 4]. The Dicke model provides a fundamental framework for exploring collective quantum behavior and the intricate interplay between quantum systems and electromagnetic fields [5; 6; 7]. Its significance extends across various domains of physics, among which the quantum phase transition in the Dicke model has drawn persistent attention [8; 9; 10; 11; 12; 13; 14].
With a sufficiently large spin-boson coupling strength, the ground state of the Dicke model exhibits a transition from the normal phase to the superradiant phase. This transition is accompanied by macroscopic excitation in the bosonic field [10; 11]. The sudden change in the behaviors of the ground state serves as a characteristic signature of the quantum phase transition, which arises from the quantum fluctuation at zero temperature, rather than the thermal fluctuation in the classical phase transition [15]. The quantum phase transition in the Dicke model and its generalizations have been experimentally observed in various platforms, such as Bose-Einstein condensate coupled to an optical cavity [16; 17], trapped-ion systems [18], etc. In addition to the superradiant phase transition in the ground state, the Dicke model serves as a versatile prototype to study the excited-state quantum phase transition [19; 20; 21], nonequilibrium quantum phase transition [22], dynamical phase transition [23], universal dynamics under slow quenches [14], quantum-classical correspondence [24], chaos and thermalization [10; 11; 25], etc.
Recently, various generalizations of the Dicke model have been proposed, which greatly improve its flexibility. Using squeezed light in the cavity field reduces the necessary spin-boson coupling strength to observe the superradiant phase transition in the Dicke model [26; 27; 28]. The anisotropic Dicke model introduces different coupling strengths corresponding to the rotating and counter-rotating terms [29; 30; 31; 32]. The two-mode Dicke model yields a richer phase diagram with both first- and second-order quantum phase transitions by introducing an additional bosonic mode [33]. The Rabi-Hubbard model consists of a lattice of coupled optical cavities, each containing a spin, which undergoes a phase transition from the Mott phase to the superfluid phase [34; 35; 36; 37]. By partially breaking the exchange symmetry between the spins, the Dicke model can produce a quantum tricritical point [38]. A triangular structure formed by the Dicke model provides an opportunity to study the chiral properties and the geometric frustration [39; 40; 41]. Mapping the optomechanical problem of harmonically trapped atoms near a chiral waveguide to a generalized Rabi model reveals first-order quantum phase transitions with \(Z_{3}\) symmetry breaking [42]. A quantum dot inside a single-mode cavity, described by the Rabi model, allows the distinguishing of topological phases when coupled to an additional Majorana
nanowire [43]. Furthermore, the introduction of collective spin-spin interactions to the Dicke model stimulates the exploration of quantum criticality in systems combining matter-matter and light-matter interactions [44].
In this work, we investigate a generalized Dicke model including two spin ensembles within a single-mode cavity. The interaction between the two spin ensembles can be ferromagnetic or antiferromagnetic; the interacting ensembles by themselves correspond to the well-known coupled-top model [45; 21; 46]. The interplay between spin-spin interaction and spin-boson interaction is expected to result in a more complex phase diagram and critical behavior.
The paper is structured as follows. In Section II, we introduce the Hamiltonian of the generalized Dicke model. In Section III, a phase diagram is constructed by employing the mean-field approach. Higher-order quantum effects beyond the mean-field contribution, such as the quantum fluctuation and the entanglement entropy, are given in Section IV. A brief summary is given in Section V.
## II Hamiltonian
As a paradigmatic model to study the light-matter interaction, the Dicke model initially only introduced the interaction between a large ensemble of independent spins and a single-mode bosonic field [5; 6; 7]. In this paper, we introduce a generalized Dicke model as below, which includes two interacting spin ensembles coupled with a bosonic field:
\[\hat{H} = \hat{H}_{\rm S}+\hat{H}_{\rm B}+\hat{H}_{\rm I}, \tag{1}\] \[\hat{H}_{\rm S} = \Omega\left(\hat{J}_{1,z}+\hat{J}_{2,z}\right)+\frac{\chi}{J}\hat {J}_{1,x}\hat{J}_{2,x},\] (2) \[\hat{H}_{\rm B} = \omega\hat{b}^{\dagger}\hat{b},\] (3) \[\hat{H}_{\rm I} = \frac{\lambda}{\sqrt{J}}\left(\hat{J}_{1,x}+\hat{J}_{2,x}\right) \left(\hat{b}^{\dagger}+\hat{b}\right). \tag{4}\]
Here, the spin ensembles can be described by the angular momentum operator \(\hat{J}_{i,s}=\sum_{n=1}^{N}\hat{\sigma}_{n,s}^{(i)}/2\), with \(s=x,y,z\) and \(i=1,2\). The total number of spins in each ensemble is \(N=2J\). \(\hat{H}_{\rm S}\) describes two interacting spin ensembles, where \(\Omega\) is the frequency of the spin and \(\chi\) is the spin-spin interaction strength. \(\hat{H}_{\rm S}\) is also known as the coupled-top model [45; 46; 47; 21; 48], which can be regarded as a generalization of the transverse-field Ising model. It can be realized by magnetic clusters coupled to superconducting loops of micro-SQUIDs [45]. For \(|\Omega/\chi|>1\), the two spin ensembles tend to align parallel to the \(z\) axis, which leads to the paramagnetic phase. For \(|\Omega/\chi|<1\), the two spin ensembles prefer to align parallel or anti-parallel along the \(x\) axis, depending on the sign of \(\chi\), which leads to the ferromagnetic or antiferromagnetic phase. \(\hat{H}_{\rm B}\) describes the single-mode bosonic field with frequency \(\omega\). \(\hat{H}_{\rm I}\) represents the coupling between the spin ensembles and the bosonic field with spin-boson coupling strength \(\lambda\).
The excitation number operators for the spin ensembles and the bosonic field are defined as \(\hat{N}_{S,i}=\hat{J}_{i,z}+J\) and \(\hat{N}_{\rm B}=\hat{b}^{\dagger}\hat{b}\), respectively. In the absence of the spin-spin interaction (\(\chi=0\)) and spin-boson coupling (\(\lambda=0\)), it is straightforward to confirm that the expectation values of the excitation number operators corresponding to the ground state all vanish, i.e., \(\left\langle\hat{N}_{S,i}\right\rangle=\left\langle\hat{N}_{\rm B}\right\rangle=0\). When both the spin-spin interaction and spin-boson coupling are weak enough (\(\chi,\lambda\ll\{\Omega,\omega\}\)), the rotating-wave approximation can be introduced. The spin-spin interaction and spin-boson coupling can be rewritten as
\[\hat{J}_{1,x}\hat{J}_{2,x} = \frac{1}{4}\left(\hat{J}_{1,+}\hat{J}_{2,-}+\hat{J}_{1,-}\hat{J}_ {2,+}\right)+\frac{1}{4}\left(\hat{J}_{1,+}\hat{J}_{2,+}+\hat{J}_{1,-}\hat{J}_ {2,-}\right), \tag{5}\] \[\left(\hat{J}_{1,x}+\hat{J}_{2,x}\right)\left(\hat{b}^{\dagger}+ \hat{b}\right) = \frac{1}{2}\left[\left(\hat{J}_{1,+}+\hat{J}_{2,+}\right)\hat{b}+ \left(\hat{J}_{1,-}+\hat{J}_{2,-}\right)\hat{b}^{\dagger}\right]\] (6) \[+ \frac{1}{2}\left[\left(\hat{J}_{1,+}+\hat{J}_{2,+}\right)\hat{b} ^{\dagger}+\left(\hat{J}_{1,-}+\hat{J}_{2,-}\right)\hat{b}\right],\]
with \(\hat{J}_{i,\pm}=\hat{J}_{i,x}\pm{\rm i}\hat{J}_{i,y}\). In general, the second term in Equations (5) and (6) is referred to as the counter-rotating term. The rotating-wave approximation involves disregarding this term, resulting in a Hamiltonian with U(1) symmetry and the conservation of the total number of excitations (\(\hat{N}_{S,1}+\hat{N}_{S,2}+\hat{N}_{\rm B}\)) [2; 5; 12; 29]. Unfortunately, the presence of the counter-rotating term disrupts the U(1) symmetry.
Nevertheless, the generalized Dicke model possesses a parity symmetry or \(Z_{2}\) symmetry, given by \(\left[\hat{H},\hat{\Pi}\right]=0\). The parity operator \(\hat{\Pi}\) is defined as \(\hat{\Pi}=\exp\left[{\rm i}\pi\left(\hat{N}_{S,1}+\hat{N}_{S,2}+\hat{N}_{\rm B }\right)\right]\). The action of the parity transformation leads to \(\hat{\Pi}\hat{J}_{i,x}\hat{\Pi}^{\dagger}=-\hat{J}_{i,x}\), \(\hat{\Pi}\hat{J}_{i,z}\hat{\Pi}^{\dagger}=\hat{J}_{i,z}\), \(\hat{\Pi}\hat{b}\hat{\Pi}^{\dagger}=-\hat{b}\), \(\hat{\Pi}\hat{b}^{\dagger}\hat{\Pi}^{\dagger}=-\hat{b}^{\dagger}\), while leaving the total Hamiltonian unaltered. In the
symmetric phase, any eigenstate \(\left|\psi\right\rangle\) of the Hamiltonian \(\hat{H}\) must meet the condition \(\hat{\Pi}\left|\psi\right\rangle=\Pi\left|\psi\right\rangle\), where \(\Pi=\pm 1\) signifies even and odd parities, respectively. It is straightforward to confirm that
\[\left\langle\psi\right|\hat{J}_{i,x}\left|\psi\right\rangle = \left\langle\psi\right|\hat{\Pi}^{\dagger}\hat{\Pi}\hat{J}_{i,x} \hat{\Pi}^{\dagger}\hat{\Pi}\left|\psi\right\rangle=-\Pi^{2}\left\langle\psi \right|\hat{J}_{i,x}\left|\psi\right\rangle=-\left\langle\psi\right|\hat{J}_{i,x}\left|\psi\right\rangle, \tag{7a}\] \[\left\langle\psi\right|\hat{b}\left|\psi\right\rangle = \left\langle\psi\right|\hat{\Pi}^{\dagger}\hat{\Pi}\hat{b}\hat{ \Pi}^{\dagger}\hat{\Pi}\left|\psi\right\rangle=-\Pi^{2}\left\langle\psi\right| \hat{b}\left|\psi\right\rangle=-\left\langle\psi\right|\hat{b}\left|\psi \right\rangle. \tag{7b}\]
Hence, in the symmetric phase, both the expectation values \(\left\langle\hat{J}_{i,x}\right\rangle\) and \(\left\langle\hat{b}\right\rangle\) are zero. However, when the parity symmetry is spontaneously broken, the eigenstate \(\left|\psi\right\rangle\) of the Hamiltonian will not be an eigenstate of \(\hat{\Pi}\). Thus, \(\left\langle\hat{J}_{i,x}\right\rangle\) and \(\left\langle\hat{b}\right\rangle\) can be nonzero in the parity-symmetry-broken phase. This property renders them suitable as order parameters for determining the phase boundary [7; 12; 29; 49]. Due to the competition between the spin-spin and spin-boson interactions, we expect the generalized Dicke model to exhibit fascinating phenomena, such as a richer phase diagram and novel critical behavior.
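The Hamiltonian and its parity symmetry can also be verified numerically for a small, finite system. The following is a minimal sketch using QuTiP (an assumption of tooling on our part); the parameter values and the bosonic cutoff are illustrative.

```python
import numpy as np
from qutip import jmat, destroy, qeye, tensor

j, n_fock = 2, 20                # J = 2, i.e. N = 4 spins per ensemble; boson cutoff
Omega, omega, chi, lam = 1.0, 1.0, 0.5, 0.3   # illustrative values, in units of omega

J1x, J1z = [tensor(jmat(j, s), qeye(2 * j + 1), qeye(n_fock)) for s in ("x", "z")]
J2x, J2z = [tensor(qeye(2 * j + 1), jmat(j, s), qeye(n_fock)) for s in ("x", "z")]
b = tensor(qeye(2 * j + 1), qeye(2 * j + 1), destroy(n_fock))

# Eqs. (1)-(4)
H = (Omega * (J1z + J2z) + (chi / j) * J1x * J2x
     + omega * b.dag() * b
     + (lam / np.sqrt(j)) * (J1x + J2x) * (b.dag() + b))

# Parity operator Pi = exp[i pi (N_S1 + N_S2 + N_B)] and the Z_2 check [H, Pi] = 0
N_tot = (J1z + j) + (J2z + j) + b.dag() * b
Pi = (1j * np.pi * N_tot).expm()
print("||[H, Pi]|| =", (H * Pi - Pi * H).norm())   # ~ 1e-12
```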
## III Mean-field approach
The mean-field approach is widely employed to investigate the Dicke model, coupled-top model, and their generalizations [50; 51; 8; 9]. First, we discuss the mean-field representation of the generalized Dicke model, which provides a simple and intuitive physical picture despite the lack of correlations among different components. Prior to employing the mean-field approximation, we introduce the rotating operator \(\hat{R}_{i}(\theta_{i})\) and the displacement operator \(\hat{D}(\alpha)\), defined as
\[\hat{R}_{i}(\theta_{i}) = \exp\left(-\mathrm{i}\theta_{i}\hat{J}_{i,y}\right)=\exp\left[ \frac{\theta_{i}}{2}\left(\hat{J}_{i,-}-\hat{J}_{i,+}\right)\right], \tag{8a}\] \[\hat{D}(\alpha) = \exp\left[\alpha\left(\hat{b}^{\dagger}-\hat{b}\right)\right]. \tag{8b}\]
In terms of \(\hat{R}_{i}(\theta_{i})\) and \(\hat{D}(\alpha)\), the spin and bosonic coherent states, which are also known as the coherent states of the SU(2) and Heisenberg-Weyl groups, respectively, can be expressed as
\[\left|\theta_{i}\right\rangle=\hat{R}_{i}(\theta_{i})\left|J,-J\right\rangle, \quad\left|\alpha\right\rangle=\hat{D}(\alpha)\left|0\right\rangle, \tag{9}\]
where \(\left|J,-J\right\rangle\) represents the lowest Dicke state satisfying \(\hat{J}_{i,z}\left|J,m\right\rangle=m\left|J,m\right\rangle\) with \(m=-J,-J+1,\ldots,J\), and \(\left|0\right\rangle\) is the vacuum state of the bosonic field.
According to the mean-field theory, we construct a trial wave function formed by the tensor product of the spin and bosonic coherent states, namely,
\[\left|\psi_{\mathrm{MF}}\right\rangle=\left|\theta_{1}\right\rangle\otimes \left|\theta_{2}\right\rangle\otimes\left|\sqrt{N}\alpha\right\rangle. \tag{10}\]
Then, we can calculate the average energy expectation value:
\[E_{\mathrm{MF}}\left(\theta_{1},\theta_{2},\alpha\right) = \frac{1}{N}\left\langle\psi_{\mathrm{MF}}\right|\hat{H}\left| \psi_{\mathrm{MF}}\right\rangle\] \[= -\frac{\Omega}{2}\left(\cos\theta_{1}+\cos\theta_{2}\right)+ \frac{\chi}{2}\sin\theta_{1}\sin\theta_{2}+\omega\alpha^{2}-\sqrt{2}\lambda \alpha\left(\sin\theta_{1}+\sin\theta_{2}\right).\]
Based on the variational principle, the unknown variational parameters \(\theta_{1}\), \(\theta_{2}\), and \(\alpha\) can be determined by minimizing the average energy expectation value \(E_{\mathrm{MF}}\), which leads to
\[\frac{\partial E_{\mathrm{MF}}}{\partial\theta_{1}} = \frac{\Omega}{2}\sin\theta_{1}+\frac{\chi}{2}\sin\theta_{2}\cos \theta_{1}-\sqrt{2}\lambda\alpha\cos\theta_{1}=0, \tag{12a}\] \[\frac{\partial E_{\mathrm{MF}}}{\partial\theta_{2}} = \frac{\Omega}{2}\sin\theta_{2}+\frac{\chi}{2}\sin\theta_{1}\cos \theta_{2}-\sqrt{2}\lambda\alpha\cos\theta_{2}=0,\] (12b) \[\frac{\partial E_{\mathrm{MF}}}{\partial\alpha} = 2\left(\omega\alpha-\frac{\lambda}{\sqrt{2}}\left(\sin\theta_{1}+ \sin\theta_{2}\right)\right)=0. \tag{12c}\]
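Equation (12) can also be cross-checked by minimising Equation (11) numerically instead of solving the stationarity conditions analytically; a minimal sketch with illustrative parameter values (in units of \(\omega\)):

```python
import numpy as np
from scipy.optimize import minimize

Omega, omega, chi, lam = 1.0, 1.0, -1.5, 0.3   # a point inside Phase II

def E_MF(v):
    th1, th2, alpha = v                         # Eq. (11)
    return (-0.5 * Omega * (np.cos(th1) + np.cos(th2))
            + 0.5 * chi * np.sin(th1) * np.sin(th2)
            + omega * alpha ** 2
            - np.sqrt(2) * lam * alpha * (np.sin(th1) + np.sin(th2)))

# Several starting points, since degenerate minima coexist in the broken phases.
best = min((minimize(E_MF, x0) for x0 in
            [(0.1, 0.1, 0.1), (1.0, 1.0, 0.5), (1.0, -1.0, 0.0)]),
           key=lambda r: r.fun)
print("E_MF =", best.fun, " (theta1, theta2, alpha) =", best.x)
```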
Upon determination of \(\theta_{1}\), \(\theta_{2}\), and \(\alpha\), the ground-state energy \(E_{\rm MF}\) and wave function \(\left|\psi_{\rm MF}\right\rangle\) are achieved. Subsequently, the order parameters \(\left\langle\hat{J}_{i,x}\right\rangle\) and \(\left\langle\hat{b}\right\rangle\) can be calculated accordingly, with
\[\left\langle\hat{J}_{i,x}\right\rangle = \left\langle\psi_{\rm MF}\right|\hat{J}_{i,x}\left|\psi_{\rm MF} \right\rangle=-J\sin\theta_{i}, \tag{13}\] \[\left\langle\hat{b}\right\rangle = \left\langle\psi_{\rm MF}\right|\hat{b}\left|\psi_{\rm MF} \right\rangle=\sqrt{N}\alpha, \tag{14}\]
from which we can find out whether a quantum phase transition exists. The order of the quantum phase transition can be determined by the derivatives of the ground-state energy \(E_{\rm MF}\) with respect to the system parameters [29; 12]. A first-order quantum phase transition is indicated by a discontinuous first derivative, \({\rm d}E_{\rm MF}/{\rm d}\chi\), while a second-order quantum phase transition is marked by a discontinuous second derivative, \({\rm d}^{2}E_{\rm MF}/{\rm d}\chi^{2}\). Furthermore, the excitation numbers \(\left\langle\hat{N}_{S,i}\right\rangle\) and \(\left\langle\hat{N}_{B}\right\rangle\) also offer insights into the different phases, given by
\[\left\langle\hat{N}_{s,i}\right\rangle = \left\langle\psi_{\rm MF}\right|\hat{J}_{i,z}\left|\psi_{\rm MF }\right\rangle+J=J\left(1-\cos\theta_{i}\right), \tag{15}\] \[\left\langle\hat{N}_{B}\right\rangle = \left\langle\psi_{\rm MF}\right|\hat{N}_{B}\left|\psi_{\rm MF} \right\rangle=N\alpha^{2}. \tag{16}\]
Based on the analysis above, Figure 1 displays the phase diagram of the generalized Dicke model, revealing three distinct phases. The dashed line is represented by \(\chi=\frac{4\lambda^{2}-\Omega\omega}{\omega}\). This line represents the second-order quantum phase transition that distinguishes Phase I from Phase II, as indicated by the continuous \({\rm d}E_{\rm MF}/{\rm d}\chi\) in Figure 1a and discontinuous \({\rm d}^{2}E_{\rm MF}/{\rm d}\chi^{2}\) in Figure 1b. The dotted line is expressed as \(\chi=\Omega\). It signifies the second-order quantum phase transition that separates Phase I from Phase III, characterized by the discontinuous \({\rm d}^{2}E_{\rm MF}/{\rm d}\chi^{2}\) in Figure 1b. The solid line is expressed as \(\chi=\frac{2\lambda^{2}}{\omega}\). It corresponds to the first-order quantum phase transition that separates Phase II from Phase III, as reflected in the discontinuous \({\rm d}E_{\rm MF}/{\rm d}\chi\) in Figure 1a.
Phase I is present only if both the spin-spin interaction strength \(\left|\chi\right|\) and the spin-boson coupling strength \(\lambda\) are small enough. Solving Equation (12) results in \(\theta_{1}=\theta_{2}=0\) and \(\alpha=0\). The ground state is nondegenerate, with energy \(E_{\rm MF}=-\Omega\). Moreover, the parity symmetry is unbroken, as indicated by \(\left\langle\hat{J}_{i,x}\right\rangle=0\) in Figure 1c, and \(\left\langle\hat{b}\right\rangle=0\) in Figure 1d. The spin ensembles tend to align in parallel to the \(z\) axis due to \(\left\langle\hat{J}_{i,z}\right\rangle/J=-1\), which corresponds
to the paramagnetic phase in the coupled-top model. There are no macroscopic excitations in the bosonic field due to \(\left\langle\hat{N}_{\mathrm{B}}\right\rangle=0\), which is consistent with what happens in the normal phase of the original Dicke model. To sum up, Phase I is referred to as the paramagnetic-normal phase.
Phase II is located in the lower-right corner of Figure 1, which corresponds to the ferromagnetic-superradiant phase. The energy minimum occurs at \(\theta_{1}=\theta_{2}=\theta=\pm\arccos\left(\frac{\Omega\omega}{4\lambda^{2}- \chi\omega}\right)\) and \(\alpha=\frac{\sqrt{2}\lambda\sin\theta}{\omega}\), which corresponds to a twofold degenerate ground state with energy
\[E_{\mathrm{MF}} = -\frac{\Omega}{2}\left(\frac{\Omega\omega}{4\lambda^{2}-\chi \omega}+\frac{4\lambda^{2}-\chi\omega}{\Omega\omega}\right). \tag{17}\]
From Equations (13) and (14), it is easy to confirm that \(\left\langle J_{1,x}\right\rangle=\left\langle J_{2,x}\right\rangle\neq 0\) and \(\left\langle\hat{b}\right\rangle\neq 0\), which is the signature of parity symmetry breaking. Phase II is termed the ferromagnetic-superradiant phase due to the following reasons:
* When the spin-boson coupling strength \(\lambda\) is held constant, the critical spin-spin interaction strength that distinguishes Phase I from Phase II is given by \(\chi=\frac{4\lambda^{2}-\Omega\omega}{\omega}\). In the case of \(\lambda=0\), the generalized Dicke model is reduced to the coupled-top model, with a critical point \(\chi=-\Omega\). In the coupled-top model, it is well-known that a strong ferromagnetic spin-spin interaction (\(\chi<-\Omega\)) is required to observe the ferromagnetic phase, where two spin ensembles prefer to align in parallel along the \(x\) axis. From this perspective, Phase II corresponds to the ferromagnetic phase, as \(\left\langle J_{1,x}\right\rangle=\left\langle J_{2,x}\right\rangle\neq 0\). The spin-boson coupling promotes the formation of the ferromagnetic phase by significantly reducing the ferromagnetic spin-spin interaction strength \(\left|\chi\right|\) required to induce the phase transition. The ferromagnetic phase persists even in the presence of antiferromagnetic spin-spin interaction (\(\chi>0\)), provided that \(\lambda\) is sufficiently large;
* When the spin-spin interaction strength \(\chi\) is held constant, the critical spin-boson coupling strength that distinguishes Phase I from Phase II is given by \(\lambda=\frac{\sqrt{(\Omega+\chi)\omega}}{2}\). In the case of \(\chi=0\), the generalized Dicke model is reduced to the original Dicke model, with a critical point \(\lambda=\sqrt{\Omega\omega}/2\). In the original Dicke model, it is well-known that a strong spin-boson coupling (\(\lambda>\sqrt{\Omega\omega}/2\)) is required to observe the superradiant phase, where macroscopic excitations of the bosonic field emerge. From this perspective, Phase II corresponds to the superradiant phase, as indicated by \(\left\langle\hat{N}_{B}\right\rangle>0\). The antiferromagnetic spin-spin interaction (\(\chi>0\)) hinders the formation of the superradiance, whereas the ferromagnetic spin-spin interaction (\(\chi<0\)) promotes the formation of the superradiant phase.
Phase III is situated in the upper-left corner of Figure 1. The energy minimum is reached when \(\theta_{1(2)}=-\theta_{2(1)}=\pm\arccos\left(\frac{\Omega}{\chi}\right)\) and \(\alpha=0\), leading to twofold degenerate ground states, with energy \(E_{\mathrm{MF}}=-\frac{\Omega}{2}\left(\frac{\Omega}{\chi}+\frac{\chi}{\Omega}\right)\). Varying the spin-boson coupling \(\lambda\) does not influence the critical spin-spin interaction strength \(\chi\) that separates Phase I and III. From Equations (13) and (14), we find that \(\left\langle\hat{J}_{1,x}\right\rangle=-\left\langle\hat{J}_{2,x}\right\rangle\neq 0\), which indicates the breaking of the parity symmetry. The two spin ensembles prefer to align anti-parallel along the \(x\) axis, which is consistent with the behavior observed in the antiferromagnetic phase of the coupled-top model. It should be noted that the original Dicke model enters the superradiant phase with macroscopic excitations in the bosonic field as long as the parity symmetry is broken [11]. However, as indicated in Figure 1d, Phase III exhibits no macroscopic excitations in the bosonic field, regardless of the strength of the spin-boson coupling. Therefore, Phase III corresponds to the normal phase in the original Dicke model. Overall, Phase III is referred to as the antiferromagnetic-normal phase.
## IV Beyond the mean-field approach
As illustrated in the previous section, the mean-field approach elucidates the behaviors of the order parameters, from which we can capture the phase diagram. Nonetheless, higher-order contributions, such as the quantum fluctuation, correlation and entanglement [49, 52, 53], are obscured, highlighting the urgent demand for methods extending beyond the mean-field approach. A commonly employed technique for separating mean-field and higher-order contributions involves a unitary transformation \(\hat{U}=\hat{R}_{1}(\theta_{1})\hat{R}_{2}(\theta_{2})\hat{D}(\sqrt{N}\alpha)\)[54, 55, 56, 41], namely, \(\hat{\hat{H}}=\hat{U}^{\dagger}\hat{H}\hat{U}\), with \(\theta_{1}\), \(\theta_{2}\) and \(\alpha\) determined through the mean-field approach in Section III. Since the rotating operator \(\hat{R}_{i}(\theta_{i})\) and the displacement operator \(\hat{D}(\sqrt{N}\alpha)\) satisfy the following properties:
\[\hat{R}_{i}^{\dagger}(\theta_{i})\hat{J}_{i,x}\hat{R}_{i}(\theta_{i}) =\cos\theta_{i}\hat{J}_{i,x}+\sin\theta_{i}\hat{J}_{i,z}, \hat{D}^{\dagger}(\sqrt{N}\alpha)\hat{b}\hat{D}(\sqrt{N}\alpha) =\hat{b}+\sqrt{N}\alpha, \tag{18a}\] \[\hat{R}_{i}^{\dagger}(\theta_{i})\hat{J}_{i,z}\hat{R}_{i}(\theta_{i}) =\cos\theta_{i}\hat{J}_{i,z}-\sin\theta_{i}\hat{J}_{i,x}, \hat{D}^{\dagger}(\sqrt{N}\alpha)\hat{b}^{\dagger}\hat{D}(\sqrt{N}\alpha) =\hat{b}^{\dagger}+\sqrt{N}\alpha, \tag{18b}\]
one can readily obtain the transformed Hamiltonian \(\hat{\hat{H}}\). Subsequently, the Holstein-Primakoff transformation [10; 11; 57] is introduced to map the angular momentum operators onto bosonic creation and annihilation operators as
\[\hat{J}_{i,z}=\hat{a}_{i}^{\dagger}\hat{a}_{i}-J,\quad\hat{J}_{i,+}=\hat{a}_{i}^ {\dagger}\sqrt{N-\hat{a}_{i}^{\dagger}\hat{a}_{i}},\quad\hat{J}_{i,-}=\sqrt{N- \hat{a}_{i}^{\dagger}\hat{a}_{i}}\hat{a}_{i}. \tag{19}\]
In the thermodynamic limit (\(N\rightarrow+\infty\)), it is anticipated that \(N\gg\left\langle\hat{a}_{i}^{\dagger}\hat{a}_{i}\right\rangle\), and the Holstein-Primakoff transformation can be simplified as
\[\hat{J}_{i,z}=\hat{a}_{i}^{\dagger}\hat{a}_{i}-J,\quad\hat{J}_{i,+}\approx \sqrt{N}\hat{a}_{i}^{\dagger},\quad\hat{J}_{i,-}\approx\sqrt{N}\hat{a}_{i}, \tag{20}\]
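As a consistency check of Equation (19), the exact Holstein-Primakoff operators reproduce \([\hat{J}_{+},\hat{J}_{-}]=2\hat{J}_{z}\) on the physical subspace \(n\leq N\); a minimal numerical sketch (again assuming QuTiP as tooling):

```python
from qutip import destroy, qeye, commutator

N = 20                      # number of spins, J = N / 2
dim = N + 1                 # bosonic space truncated to the physical n = 0..N
a = destroy(dim)
Jz = a.dag() * a - N / 2
shift = (N * qeye(dim) - a.dag() * a).sqrtm()   # operator sqrt(N - a'a)
Jp, Jm = a.dag() * shift, shift * a

print((commutator(Jp, Jm) - 2 * Jz).norm())     # ~ 1e-13
```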
After the Holstein-Primakoff transformation, we can write \(\hat{\hat{H}}\) as a series expansion in powers of \(1/N\) as follows,
\[\hat{\hat{H}} \approx \left(\frac{1}{N}\right)^{-1}E_{\rm MF}\left(\theta_{1},\theta_{2},\alpha\right)+\left(\frac{1}{N}\right)^{-1/2}\hat{\hat{H}}_{1}+\left(\frac{1}{N}\right)^{0}\hat{\hat{H}}_{2}, \tag{21}\]
with
\[\hat{\hat{H}}_{1} = -\left(\frac{\Omega}{2}\sin\theta_{1}+\frac{\chi}{2}\sin\theta_{2}\cos\theta_{1}-\sqrt{2}\lambda\alpha\cos\theta_{1}\right)\left(\hat{a}_{1}^{\dagger}+\hat{a}_{1}\right) \tag{22}\] \[-\left(\frac{\Omega}{2}\sin\theta_{2}+\frac{\chi}{2}\sin\theta_{1}\cos\theta_{2}-\sqrt{2}\lambda\alpha\cos\theta_{2}\right)\left(\hat{a}_{2}^{\dagger}+\hat{a}_{2}\right)\] \[+\left(\omega\alpha-\frac{\lambda}{\sqrt{2}}\left(\sin\theta_{1}+\sin\theta_{2}\right)\right)\left(\hat{b}^{\dagger}+\hat{b}\right),\] \[\hat{\hat{H}}_{2} = \frac{\chi}{2}\cos\theta_{1}\cos\theta_{2}\left(\hat{a}_{1}^{\dagger}+\hat{a}_{1}\right)\left(\hat{a}_{2}^{\dagger}+\hat{a}_{2}\right)\] (23) \[+\frac{\lambda}{\sqrt{2}}\left[\cos\theta_{1}\left(\hat{a}_{1}^{\dagger}+\hat{a}_{1}\right)+\cos\theta_{2}\left(\hat{a}_{2}^{\dagger}+\hat{a}_{2}\right)\right]\left(\hat{b}^{\dagger}+\hat{b}\right)\] \[+\left(\Omega\cos\theta_{1}-\chi\sin\theta_{1}\sin\theta_{2}+2\sqrt{2}\lambda\alpha\sin\theta_{1}\right)\hat{a}_{1}^{\dagger}\hat{a}_{1}\] \[+\left(\Omega\cos\theta_{2}-\chi\sin\theta_{1}\sin\theta_{2}+2\sqrt{2}\lambda\alpha\sin\theta_{2}\right)\hat{a}_{2}^{\dagger}\hat{a}_{2}\] \[+\omega\hat{b}^{\dagger}\hat{b}.\]
where we have ignored the higher-order terms proportional to \((1/N)^{l}\) with \(l>0\). The first term in Equation (21) is a constant, representing the ground-state energy (17) derived from the mean-field approach. \(\theta_{1}\), \(\theta_{2}\) and \(\alpha\) obtained from Equation (12) lead to \(\hat{\hat{H}}_{1}=0\), which further simplifies the low-energy effective Hamiltonian. Finally, we only need to deal with the quadratic Hamiltonian \(\hat{\hat{H}}_{2}\).
By introducing \(\hat{x}_{i}=\left(\hat{a}_{i}^{\dagger}+\hat{a}_{i}\right)/\sqrt{2}\), \(\hat{p}_{i}={\rm i}\left(\hat{a}_{i}^{\dagger}-\hat{a}_{i}\right)/\sqrt{2}\), for \(i=1,2\), and \(\hat{x}_{3}=\left(\hat{b}^{\dagger}+\hat{b}\right)/\sqrt{2}\), \(\hat{p}_{3}={\rm i}\left(\hat{b}^{\dagger}-\hat{b}\right)/\sqrt{2}\), the quadratic Hamiltonian \(\hat{\hat{H}}_{2}\) can be rewritten as
\[\hat{\hat{H}}_{2} = \sum_{i=1}^{3}\frac{\epsilon_{i}}{2}\left(\hat{x}_{i}^{2}+\hat{p}_{i}^{2}-1\right)+\sum_{i<j}\tau_{i,j}\hat{x}_{i}\hat{x}_{j}, \tag{24}\]
with
\[\epsilon_{1} = \Omega\cos\theta_{1}-\chi\sin\theta_{1}\sin\theta_{2}+2\sqrt{2} \lambda\alpha\sin\theta_{1}, \tag{25a}\] \[\epsilon_{2} = \Omega\cos\theta_{2}-\chi\sin\theta_{1}\sin\theta_{2}+2\sqrt{2} \lambda\alpha\sin\theta_{2},\] (25b) \[\epsilon_{3} = \omega,\] (25c) \[\tau_{1,2} = \chi\cos\theta_{1}\cos\theta_{2}=\tau_{2,1},\] (25d) \[\tau_{1,3} = \sqrt{2}\lambda\cos\theta_{1}=\tau_{3,1},\] (25e) \[\tau_{2,3} = \sqrt{2}\lambda\cos\theta_{2}=\tau_{3,2}. \tag{25f}\]
Clearly, this corresponds to coupled harmonic oscillators. As demonstrated in Appendix A, this Hamiltonian can be solved exactly using the symplectic transformation [58; 59], which decouples the coupled harmonic oscillators into the
following form:
\[\hat{\hat{H}}_{2}=\sum_{i=1}^{3}\frac{\Delta_{i}}{2}\left(\hat{x}_{i}^{\prime 2}+ \hat{p}_{i}^{\prime 2}\right)-\sum_{i=1}^{3}\frac{\epsilon_{i}}{2}. \tag{26}\]
\(\Delta_{i}\geq 0\) corresponds to the excitation energy, as shown in Figure 2a. Without loss of generality, we select the parameters \(\Omega/\omega=1\) and \(\lambda/\omega=0.3\), while allowing \(\chi/\omega\) to range from \(-2\) to \(2\). This range encompasses all three phases, as depicted in Figure 1. The critical point that separates Phase I from Phase II is situated at \(\chi_{c-}/\omega=-0.64\), whereas the critical point separating Phase I and Phase III is found at \(\chi_{c+}/\omega=1\). The lowest excitation energy, namely, \(\Delta_{\text{min}}=\min\left(\Delta_{1},\Delta_{2},\Delta_{3}\right)\), represents the energy gap between the ground state and the first excited state. Generally, the quantum phase transition occurs concurrently with the closing of the energy gap (\(\Delta_{\text{min}}\to 0\)), which is consistent with our results in Figure 2a. As illustrated in Figure 2b,c, the critical behavior associated with the excitation energy is given by \(\Delta_{\text{min}}\propto\left|\chi-\chi_{c\pm}\right|^{1/2}\) near the critical point \(\chi_{c\pm}\), which is in accordance with that in the original Dicke model [10; 57] and the coupled-top model [56].
In stark contrast to its classical counterpart driven by thermal fluctuations, the quantum phase transition takes place at zero temperature due to quantum fluctuations [15]. The quantum fluctuations in the \(x_{i}\) and \(p_{i}\) quadratures are expressed as
\[\left(\Delta x_{i}\right)^{2}=\left\langle\hat{x}_{i}^{2}\right\rangle-\left \langle\hat{x}_{i}\right\rangle^{2},\quad\left(\Delta p_{i}\right)^{2}=\left \langle\hat{p}_{i}^{2}\right\rangle-\left\langle\hat{p}_{i}\right\rangle^{2}. \tag{27}\]
Figure 2d,g depicts the behaviors of the quantum fluctuations in \(x_{i}\) and \(p_{i}\) quadrature, respectively. Since both \(i=1,2\) correspond to the spin ensembles, we only show one of them for clarity. Far away from the critical point \(\chi_{c\pm}\), both \(\left(\Delta x_{i}\right)^{2}\) and \(\left(\Delta p_{i}\right)^{2}\) tend to approach \(1/2\), which can be well captured by the coherent state in the mean-field approach. Near the critical point \(\chi_{c\pm}\), \(\left(\Delta p_{i}\right)^{2}\) becomes less than \(1/2\), which indicates a strong squeezing effect
[2]. All three quantum fluctuations in the \(x_{i}\) quadratures tend to diverge as a power law, \(\left(\Delta x_{i}\right)^{2}\propto\left|\chi-\chi_{c-}\right|^{-1/2}\), near the critical point \(\chi_{c-}\), as shown in Figure 2e. Similar phenomena can be found for the \(x_{1}\) and \(x_{2}\) quadratures near the critical point \(\chi_{c+}\), both of which correspond to the spin components. Nevertheless, Figure 2f indicates that the \(x_{3}\) quadrature, associated with the bosonic field, does not exhibit a divergent fluctuation near \(\chi_{c+}\). As discussed in Section III, \(\chi_{c+}\) separates Phase I and Phase III, which correspond to the paramagnetic-normal phase and the antiferromagnetic-normal phase, respectively. The quantum phase transition is dominated by the antiferromagnetic spin-spin interactions, leading to substantial fluctuations in the spin components, as opposed to the bosonic field.
The entanglement entropy, also known as the von Neumann entropy, is proposed to quantify the entanglement between different components of the quantum system [60; 61]. It is directly associated with Heisenberg's uncertainty relation for the quadratic Hamiltonian of interacting bosonic systems [52; 61]. In terms of \(\Delta x_{i}\) and \(\Delta p_{i}\), the entanglement entropy \(\mathcal{S}_{i}\) can be written as
\[\mathcal{S}_{i}=\left(\Delta x_{i}\Delta p_{i}+\frac{1}{2}\right)\log\left( \Delta x_{i}\Delta p_{i}+\frac{1}{2}\right)-\left(\Delta x_{i}\Delta p_{i}- \frac{1}{2}\right)\log\left(\Delta x_{i}\Delta p_{i}-\frac{1}{2}\right), \tag{28}\]
which describes the entanglement between the \(i\)th component and the others. Recently, there has been growing interest in investigating quantum phase transitions from the perspective of entanglement [38; 45; 49; 52; 62]. As depicted in Figure 2h, the entanglement entropies \(\mathcal{S}_{1}\) and \(\mathcal{S}_{3}\) exhibit divergences near the quantum critical point \(\chi_{c-}\), which indicate strong entanglement among the two spin ensembles and the bosonic field. However, \(\mathcal{S}_{3}\) is negligible compared to \(\mathcal{S}_{1}\) near \(\chi_{c+}\), which indicates that the bosonic field is almost independent of the spin ensembles. The finite \(\mathcal{S}_{3}\) near \(\chi_{c+}\) has a similar origin to the finite quantum fluctuation \(\left(\Delta x_{3}\right)^{2}\). Both Phase I and Phase III, separated by \(\chi_{c+}\), exhibit no macroscopic excitations in the bosonic field. The quantum phase transition is primarily driven by the strong antiferromagnetic spin-spin interaction, leading to significant entanglement between the two spin ensembles, as indicated by the divergence in \(\mathcal{S}_{1}\). In contrast, the spin-boson coupling has a negligible effect, resulting in weak entanglement between the bosonic field and the two spin ensembles.
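A minimal sketch evaluating Equation (28) for a single mode, given the quadrature variances (which in our calculation come from the covariance matrix of Appendix A):

```python
import numpy as np

def entropy(dx2, dp2):
    """Von Neumann entropy of one mode from (Delta x)^2 and (Delta p)^2."""
    nu = np.sqrt(dx2 * dp2)          # Delta x * Delta p in Eq. (28)
    if nu <= 0.5 + 1e-12:            # minimum-uncertainty mode: no entanglement
        return 0.0
    return (nu + 0.5) * np.log(nu + 0.5) - (nu - 0.5) * np.log(nu - 0.5)

print(entropy(0.5, 0.5))   # coherent-state-like mode -> 0
print(entropy(2.0, 0.5))   # entangled mode -> ~0.95
```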
## V Conclusions
The Dicke model serves as a paradigmatic model to study the light-matter interaction, where the bosonic field represents light and the spin ensemble represents matter. It undergoes a quantum phase transition from the normal phase to the superradiant phase for sufficiently strong spin-boson coupling. The coupled-top model describes two interacting spin ensembles, which can be regarded as an example of matter-matter interaction. It exhibits a paramagnetic phase, a ferromagnetic phase and an antiferromagnetic phase, depending on the spin-spin interaction strength. In this work, we proposed a generalized Dicke model that combines the light-matter interaction in the Dicke model with the matter-matter interaction in the coupled-top model. This is achieved by introducing two interacting spin ensembles coupled with a bosonic field.
Due to the competition between the spin-spin interaction and the spin-boson coupling, the generalized Dicke model admits a diverse phase diagram, which consists of three phases: the paramagnetic-normal phase, the ferromagnetic-superradiant phase and the antiferromagnetic-normal phase. The paramagnetic-normal phase is present only if both the spin-spin interaction and the spin-boson coupling are sufficiently weak. In this phase, the two spin ensembles tend to align in parallel along the \(z\) axis, while the bosonic field exhibits no macroscopic excitation. In the ferromagnetic-superradiant phase, the two spin ensembles prefer to align in parallel along the \(x\) axis, while macroscopic excitation emerges in the bosonic field. Interestingly, the spin-boson coupling strength required to stimulate the macroscopic excitation is significantly reduced in the presence of the ferromagnetic spin-spin interaction. In the antiferromagnetic-normal phase, the two spin ensembles prefer to align anti-parallel along the \(x\) axis. No macroscopic excitation in the bosonic field emerges, regardless of the strength of the spin-boson coupling.
The boundaries of the phase diagram can be determined by the mean-field approach. Nevertheless, it falls short in offering deeper insights into the higher-order quantum effects, such as the excitation energy, quantum fluctuation and entanglement entropy. These can be achieved through the utilization of the Holstein-Primakoff transformation and the symplectic transformation, which transform the generalized Dicke model into three decoupled harmonic oscillators in the thermodynamic limit. The excitation energy approaches zero in the vicinity of the critical point. The closing of the energy gap between the ground and the first excited states coincides with the divergence of the quantum fluctuation in certain quadratures and of the entanglement entropy. Our generalizations to the Dicke model further improve its flexibility and open up new opportunities to investigate the competition between light-matter and matter-matter interactions.
It is worth mentioning that the approaches both within and beyond the mean-field theory are performed in the thermodynamic limit in this work. Recently, finite-component systems have drawn renewed attention. These systems
not only offer enhanced experimental accessibility but also yield valuable insights into quantum phase transitions by revealing finite-size scaling behavior near critical points [13; 14; 62; 63]. Moreover, quantum phase transitions and spontaneous symmetry breaking can even exist in the finite-component system, such as the Rabi model and its generalizations [18; 27; 32; 49; 50; 64]. The finite-size effects in the generalized Dicke model deserve further consideration, which are left to future research.
###### Acknowledgements.
L.D. is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12305032 and Zhejiang Provincial Natural Science Foundation of China under Grant No. LQ23A050003. W. L. is supported by Zhejiang Provincial Natural Science Foundation of China under Grant No. LQ21F050007.
## Appendix A Symplectic Transformation and Covariance Matrix
The low-energy effective Hamiltonian \(\hat{\hat{H}}_{2}\) [Equation (24)] is of a quadratic form. The diagonalization of any quadratic Hamiltonian is a rather straightforward mathematical routine. Here, we choose the symplectic transformation [58; 59; 61]. For simplicity, we can introduce the vector of canonical operators \(\hat{\mathbf{r}}=\left(\hat{x}_{1},\hat{x}_{2},\hat{x}_{3},\hat{p}_{1},\hat{p} _{2},\hat{p}_{3}\right)^{T}\), which satisfies the canonical commutation relation \(\left[\hat{\mathbf{r}},\hat{\mathbf{r}}^{T}\right]=\mathrm{i}\Gamma\), where \(\Gamma\) is given by
\[\Gamma=\left(\begin{array}{cc}O_{3}&I_{3}\\ -I_{3}&O_{3}\end{array}\right), \tag{A1}\]
with \(I_{3}\) and \(O_{3}\) being the \(3\times 3\) identity and null matrices, respectively. In terms of the vector of canonical operators \(\hat{\mathbf{r}}\), the quadratic Hamiltonian \(\hat{\hat{H}}_{2}\) can be expressed as
\[\hat{\hat{H}}_{2}=\frac{1}{2}\hat{\mathbf{r}}^{T}H\hat{\mathbf{r}}-\sum_{i=1}^{ 3}\frac{\epsilon_{i}}{2},\]
with the Hamiltonian matrix \(H=H_{x}\oplus H_{p}\) and
\[H_{x}=\left(\begin{array}{ccc}\epsilon_{1}&\tau_{1,2}&\tau_{1,3}\\ \tau_{2,1}&\epsilon_{2}&\tau_{2,3}\\ \tau_{3,1}&\tau_{3,2}&\epsilon_{3}\end{array}\right),\quad H_{p}=\left(\begin{array}{ccc}\epsilon_{1}&0&0\\ 0&\epsilon_{2}&0\\ 0&0&\epsilon_{3}\end{array}\right). \tag{A2}\]
Based on Williamson's theorem [58], for the positive-definite real matrix \(H\), there exists a symplectic transformation \(S\) (\(S^{T}\Gamma S=\Gamma\)) such that
\[S^{T}HS=\Lambda,\text{ with }\Lambda=\mathrm{diag}\left(\Delta_{1},\Delta_{2},\Delta_{3},\Delta_{1},\Delta_{2},\Delta_{3}\right). \tag{A3}\]
The symplectic transformation \(S\) can be constructed by a standard procedure [59]: First, the diagonal elements of \(H_{p}\) are transformed into a uniform form by squeezing in \(p_{i}\); Second, \(H_{x}\) is diagonalized by rotating in \(x_{i}\); Finally, the resulting Hamiltonian matrix is transformed into the desired form \(\Lambda\) by squeezing in \(x_{i}\times p_{i}\). Once \(S\) is achieved, one can introduce a new vector of canonical operators \(\hat{\mathbf{r}}^{\prime}=S^{-1}\hat{\mathbf{r}}\), which decouples the quadratic Hamiltonian into independent degrees of freedom as
\[\hat{\hat{H}}_{2}=\sum_{i=1}^{3}\frac{\Delta_{i}}{2}\left(\hat{x}_{i}^{\prime 2 }+\hat{p}_{i}^{\prime 2}\right)-\sum_{i=1}^{3}\frac{\epsilon_{i}}{2}=\frac{1}{2} \hat{\mathbf{r}}^{\prime T}D\hat{\mathbf{r}}^{\prime}-\sum_{i=1}^{3}\frac{ \epsilon_{i}}{2}.\]
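In practice, the excitation energies \(\Delta_{i}\) need not be obtained by constructing \(S\) explicitly: for positive-definite \(H\) they coincide with the moduli of the eigenvalues of \(\mathrm{i}\Gamma H\), which come in \(\pm\Delta_{i}\) pairs. A minimal NumPy sketch with illustrative entries for \(\epsilon_{i}\) and \(\tau_{i,j}\):

```python
import numpy as np

eps = np.array([1.0, 1.0, 1.0])           # epsilon_1..3 of Eq. (25)
tau12, tau13, tau23 = -0.5, 0.4, 0.4      # tau_ij of Eq. (25), illustrative

Hx = np.array([[eps[0], tau12, tau13],
               [tau12, eps[1], tau23],
               [tau13, tau23, eps[2]]])
H = np.block([[Hx, np.zeros((3, 3))],
              [np.zeros((3, 3)), np.diag(eps)]])
Gamma = np.block([[np.zeros((3, 3)), np.eye(3)],
                  [-np.eye(3), np.zeros((3, 3))]])

# Symplectic eigenvalues: each Delta_i appears twice among |eig(i Gamma H)|.
deltas = np.sort(np.abs(np.linalg.eigvals(1j * Gamma @ H)))[::2]
print("Delta_i =", np.round(deltas, 6))
```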
It is a well-established fact that the ground state of the quadratic Hamiltonian is a Gaussian state, which holds considerable importance in the field of continuous variable quantum information [58]. Rather than dealing with the infinite dimension of the associated Hilbert space, one can work with the \(6\times 6\) covariance matrix \(\sigma\), which provides a comprehensive description of any Gaussian state [58; 61]. The covariance matrix \(\sigma\) is written as
\[\sigma=\frac{1}{2}\left\langle\left\{\left(\hat{\mathbf{r}}-\left\langle\hat{\mathbf{r}}\right\rangle\right),\left(\hat{\mathbf{r}}-\left\langle\hat{\mathbf{r}}\right\rangle\right)^{T}\right\}\right\rangle. \tag{A4}\]
With the symplectic transformation \(S\), it is straightforward to confirm that the covariance matrix can be expressed as \(\sigma=SS^{T}/2\)[61]. The quantum fluctuations in \(\hat{x}_{i}\) and \(\hat{p}_{i}\) are quantified by their standard deviations, which are obtained from the diagonal elements of the covariance matrix \(\sigma\), namely,
\[\left(\Delta x_{i}\right)^{2} = \left\langle\hat{x}_{i}^{2}\right\rangle-\left\langle\hat{x}_{i}\right\rangle^{2}=\sigma_{i,i}, \tag{A5a}\] \[\left(\Delta p_{i}\right)^{2} = \left\langle\hat{p}_{i}^{2}\right\rangle-\left\langle\hat{p}_{i}\right\rangle^{2}=\sigma_{3+i,3+i}. \tag{A5b}\]
|
2304.02667 | Non-thermal motions and atmospheric heating of cool stars | The magnetic processes associated with the non-thermal broadening of
optically thin emission lines appear to carry enough energy to heat the corona
and accelerate the solar wind. We investigate whether non-thermal motions in
cool stars exhibit the same behaviour as on the Sun by analysing archival
stellar spectra taken by the Hubble Space Telescope, and full-disc Solar
spectra taken by the Interface Region Imaging Spectrograph. We determined the
non-thermal velocities by measuring the excess broadening in optically thin
emission lines formed in the stellar atmosphere; the chromosphere, the
transition region and the corona. Assuming the non-thermal broadening is caused
by the presence of Alfv\'en waves, we also determined the associated wave
energy densities. Our results show that, with a non-thermal velocity of
$\sim$23 kms$^{-1}$ the Sun-as-a-star results are in very good agreement with
values obtained from spatially-resolved solar observations. The non-thermal
broadening in our sample show correlation to stellar rotation, with the
strength of the non-thermal velocity decreasing with decreasing rotation rate.
Finally, the non-thermal velocity in cool Sun-like stars varies with
atmospheric height or temperature of the emission lines, and peaks at
transition region temperatures. This points towards a solar-like Alfv\'en wave
driven heating in stellar atmospheres. However, the peak is at a lower
temperature in some cool stars suggesting that, other magnetic process such as
flaring events could also dominate. | S. Boro Saikia, T. Lueftinger, V. S. Airapetian, T. Ayres, M. Bartel, M. Guedel, M. Jin, K. G. Kislyakova, P. Testa | 2023-04-05T18:01:05Z | http://arxiv.org/abs/2304.02667v1 | # Non-thermal motions and atmospheric heating of cool stars
###### Abstract
The magnetic processes associated with the non-thermal broadening of optically thin emission lines appear to carry enough energy to heat the corona and accelerate the solar wind. We investigate whether non-thermal motions in cool stars exhibit the same behaviour as on the Sun by analysing archival stellar spectra taken by the Hubble Space Telescope, and full-disc solar spectra taken by the Interface Region Imaging Spectrograph. We determined the non-thermal velocities by measuring the excess broadening in optically thin emission lines formed in the stellar atmosphere: the chromosphere, the transition region and the corona. Assuming the non-thermal broadening is caused by the presence of Alfven waves, we also determined the associated wave energy densities. Our results show that, with a non-thermal velocity of \(\sim\)23 km s\({}^{-1}\), the Sun-as-a-star results are in very good agreement with values obtained from spatially-resolved solar observations. The non-thermal broadening in our sample shows a correlation with stellar rotation, with the strength of the non-thermal velocity decreasing with decreasing rotation rate. Finally, the non-thermal velocity in cool Sun-like stars varies with atmospheric height or temperature of the emission lines, and peaks at transition region temperatures. This points towards a solar-like Alfven wave driven heating in stellar atmospheres. However, the peak is at a lower temperature in some cool stars, suggesting that other magnetic processes such as flaring events could also dominate.
S. Boro Saikia, T. Lueftinger, V. S. Airapetian, T. Ayres, M. Bartel, M. Guedel, M. Jin, K. G. Kislyakova, and P. Testa
## 1 Introduction
One of the key drivers of atmospheric loss in (exo)planets orbiting cool main-sequence stars is the magnetised wind of their central star (Kislyakova et al., 2014; Airapetian et al., 2017). In cool stars like our Sun, the wind is driven by non-thermal processes in the stellar corona and interface region 1, which include magnetic reconnection, the propagation and presence of Alfven waves, turbulence, flares and other explosive events (e.g., Mariska, 1992; De Pontieu et al., 2021, and references therein). Out of these different processes, Alfven waves have emerged as the dominant heating mechanism in solar and stellar wind models (Suzuki & Inutsuka, 2006; Cranmer et al., 2007; van der Holst et al., 2014; Lionello et al., 2014; Shoda et al., 2019; Reville et al., 2020). However, controversy still exists over the contribution of these different non-thermal processes towards coronal heating and wind propagation (Cranmer & Winebarger, 2019). A detailed comparative study of the interface region in the Sun and other cool stars can help shed light on these different processes and constrain numerical models of solar and stellar winds.
Footnote 1: includes the chromosphere and the transition region, which act as an interface between the photosphere and the corona
In the case of the Sun, far-ultraviolet (FUV) and extreme-ultraviolet (EUV) spectral lines provide important diagnostics of the plasma properties in the interface region. While flux ratios of certain spectral lines can be used to estimate the density of the emitting plasma (Polito et al., 2016; Young et al., 2018), other diagnostics such as Doppler shifts (Cheung et al., 2015; Testa et al., 2016), asymmetries (Martinez-Sykora et al., 2011), and non-thermal broadening (Chae et al., 1998; De Pontieu et al., 2015) of spectral lines provide valuable constraints on solar coronal heating and wind acceleration models. Here, we focus on the analysis of the non-thermal broadening of FUV lines from cool stars, where the non-thermal broadening is the excess broadening of a spectral line on top of the thermal and instrumental broadening.
Instruments such as the _Skylab_ spectrograph (Tousey et al., 1973; Reeves, 1976), the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) instrument aboard the Solar and Heliospheric Observatory (SOHO, Wilhelm et al., 1995), and the Interface Region Imaging Spectrograph (IRIS, De Pontieu et al., 2014) have provided new insights into the non-thermal broadening of interface region emission lines. The non-thermal velocities, measured in the quiet Sun, range from 5 to 30 km s\({}^{-1}\), and show a correlation with the emission line temperature or the atmospheric height (Boland et al., 1975; Doschek et al., 1976; Mariska, 1992; Chae et al., 1998; Teriaca et al., 1999). The measured non-thermal velocities increase with increasing temperature, and reach a peak in the upper transition region, which is then followed by a decline towards higher temperatures in the corona. Correlations have also been reported between the non-thermal velocities and the line intensities; however, the correlation gets weaker with increasing temperature (Chae et al., 1998). These observed properties of non-thermal velocities suggest that magnetic processes in the solar atmosphere drive solar coronal heating and wind acceleration.
While solar observations provide us with a unique opportunity to obtain an in-depth knowledge of the physical processes in a cool main sequence star, the Sun is a single data point amongst the hundreds of thousands of cool stars in our neighbourhood. To obtain a general understanding of non-thermal broadening in stellar atmospheres and its connection to stellar fundamental properties such as mass and rotation, we must also examine cool stars other than the Sun. Measurements of non-thermal velocities in cool stars are limited, and most of our knowledge of cool stellar non-thermal velocities comes from observations taken by the Hubble Space Telescope's Goddard High Resolution Spectrograph (HST/GHRS, Linsky and Wood, 1994; Wood et al., 1997, and references therein). The interface region emission lines used in these studies not only showed large non-thermal broadening, but also a non-Gaussian profile with strong emission around the wings. Although not as prominent as in stellar spectra, such a shape is also known to exist in solar emission lines (Kjeldseth Moe and Nicolas, 1977; Peter, 2001; Ayres et al., 2021). To account for the broad wings, the spectral lines are often modelled using a double Gaussian profile with a narrow and a broad component. The non-thermal velocities are then calculated for both the narrow and the broad component individually.
The presence of the broad component adds a layer of complexity to the interpretation of the measured non-thermal broadening. In the case of giant and super-giant stars, anisotropically distributed turbulence along the line of sight has been proposed as a mechanism behind the enhanced wings of non-thermally broadened lines (Robinson et al., 1996; Airapetian et al., 2000). Based on observations of cool stars, Wood et al. (1997) concluded that the non-thermal broadening in the narrow component could be attributed to turbulence or Alfven waves, whereas the broad component is generated by micro-flares. A Sun-as-a-star study by Peter (2006) provides an alternative theory, namely that the broad component is related to the underlying magnetic network in the chromosphere, and that the non-thermal broadening of the narrow component is a better indicator of coronal and wind heating. A recent study of solar emission lines by Ayres et al. (2021) shows that the non-Gaussian nature of the lines prevails even at the finest spatial scales, suggesting that the observed velocity distribution might be non-Maxwellian in nature. However, further investigations are needed to explore the true nature of these emission lines.
In this work we investigate the non-thermal broadening in a sample of cool stars, based on archival measurements of HST's Cosmic Origins Spectrograph (COS). We use a double Gaussian profile to model the emission lines, and only consider the narrow Gaussian component when calculating the non-thermal broadening of the sample. We investigate the correlation between the measured non-thermal velocities and stellar properties at different formation temperatures, and compare our results to the non-thermal velocities measured using full-disc solar data. In Section 2 we describe the archival data set and the data analysis. The results are discussed in Section 3, and the conclusions are presented in Section 4.
## 2 Observations and Data Analysis
The stellar data used in this work were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via DOI: 10.17909/wvzj-wd79. Table 1 lists the properties of our stellar sample of 55 stars, all of which were observed by HST COS (Froning & Green, 2009), a highly sensitive, moderate-resolution spectrograph with a spectral resolution of 1600-24000 over a wavelength range of 1150-3200 A. We also analysed solar spectra taken by IRIS (De Pontieu et al., 2014), a NASA Small Explorer satellite that takes high-resolution UV images and spectra of the Sun with a spectral resolution of \(\sim\)50,000. In addition to daily raster scans of the Sun, IRIS provides monthly full-disc mosaics in 6 spectral windows, enabling us to compare Sun-as-a-star spectral lines to the stellar sample.
### Interface region emission lines
Non-thermal velocities in stellar interface regions can be determined from optically thin emission lines. In this work we analysed optically thin FUV emission lines formed in the interface region, i.e., the chromosphere and the transition region. Additionally, we included a coronal emission line in our analysis. The line selection was based on the wavelength coverage of COS and IRIS, and on the signal-to-noise ratio of the archival observations. Figures 1 and 2 show the emission lines analysed in this work (absolute flux densities are shown in Figure 1).
#### 2.1.1 Chromospheric emission lines
The most well-studied chromospheric lines are optically thick, and include the Mg ii h and k lines at 2803.52 A and 2796.34 A, and the Ca ii H and K lines at 3968.47 A and 3933.66 A, respectively. In recent years, due to its optically thin nature, the O i 1355.6 A line has emerged as a strong diagnostic of non-thermal motions in the chromosphere. According to Lin & Carlsson (2015), the O i line forms over a wide range of heights in the middle of the chromosphere, and in this work we adopt a formation temperature of 20,000 K (Teriaca et al., 1999). Due to its formation in the chromosphere, the O i line is a very good diagnostic not only of non-thermal broadening but also of magnetic activity in the stellar chromosphere, for which it serves as a proxy. However, careful analysis is required as the O i line blends with the C i line at 1355.8 A, as shown in Figures 1 and 2.
#### 2.1.2 Transition region emission lines
The transition region is the second layer of the interface region, where the temperature rises from a few
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ HD name} & other name & mass & radius & \(T_{\rm eff}\) & \(v_{\rm rad}\) & \(P_{\rm rot}\) & \(\log R^{\prime}_{\rm HK}\) & \(\xi_{\rm Si1393}\) & planet host & additional refs \\ & & M\({}_{\odot}\) & R\({}_{\odot}\) & K & km s\({}^{-1}\) & days & & km s\({}^{-1}\) & & \\ \hline HD 72905 & \(\pi^{1}\) UMa & 1.00 & 0.96 & 5873 & -12.0 & 5.0 & -4.30 & 27.0 & No & 42, 43, 44 \\ HD 161897 & HIP 86540 & 1.01 & 0.86 & 5623 & -16.5 & .. & -4.77 & 17.7 & No & \\ HD 192310 & LHS 488 & 0.84 & 0.81 & 5080 & -54.0 & 47.7 & -5.30 & 16.8 & Yes & 1 \\ HD 24636 & HIP 17764 & 1.39 & 1.37 & 6831 & 14.5 & .. & .. & 57.6 & No & 17, 23, 45 \\ HD 25825 & LP 15-582 & 1.01 & 1.08 & 5941 & 37.0 & .. & -4.34 & 23.3 & No & \\ HD 160691 & \(\mu\) Ara & 1.08 & .. & 5813 & -12.0 & 31.0 & -4.97 & 22.0 & Yes & 46, 47 \\ HD 39587 & \(\chi^{1}\) Ori & 0.82 & 1.01 & 5882 & -13.4 & 5.2 & -4.40 & 30.7 & No & \\ HD 186408 & 16 Cyg A & 1.25 & 1.25 & 5781 & -27.5 & 26.9 & -4.98 & 25.5 & No & 1 \\ HD 197037 & LTT 16037 & 1.11 & 1.15 & 6150 & 8.0 & 19.1 & .. & 18.8 & Yes & 1, 48 \\ HD 1835 & 9 Cet & 0.98 & 0.96 & 5837 & -2.5 & .. & -4.40 & 29 & No & \\ \hline \end{tabular} Note. Columns 1 to 9: HD name, other name, mass, radius, effective temperature, radial velocity, rotation period, stellar activity or \(\log R^{\prime}_{\rm HK}\), mean non-thermal velocity determined in this work from the Si iv 1393 Å
line. The stellar parameters are taken from Valenti & Fischer (2005) or the following references: France et al. (2018)1; Poppenhaeger et al. (2010)2; Wright et al. (2011)3; Silva-Valio (2008)4; Boro Saikia et al. (2018)5; Wittenmyer et al. (2014)6; Schweitzer et al. (2019)7; Passegger et al. (2020)8; Kopytova et al. (2016)9; Terrien et al. (2015)10; Mayor et al. (2004)11; Bonfils et al. (2012)12; Palle et al. (2020)13; Zuenko et al. (2019)14; Hojjatpanah et al. (2020)15; Anderson et al. (2014)16; McDonald et al. (2012)17; Kervella et al. (2019)18; Peacock et al. (2019)19; Endl et al. (2008)20; Muirhead et al. (2018)21; Cutispoto et al. (2002)22; Bochanski et al. (2018)23; Nielsen et al. (2019)24; Bakos et al. (2010)25; Bouchy et al. (2005)26; Anglada-Escude et al. (2012)27; Anglada-Escude et al. (2013)28; Delfosse et al. (2013)29; Schofer et al. (2019)30; Butler et al. (2006)31; Feng et al. (2015)32; Boro Saikia et al. (2016)33; Kervella et al. (2008)34; Santos et al. (2002)35; von Braun et al. (2014)36; Youngblood et al. (2017)37; Boisse et al. (2012)38; Pepe et al. (2011)39; Bonfils et al. (2005)40; von Braun et al. (2011)41; Cenarro et al. (2007)42; Rosen et al. (2016)43; Kochukhov et al. (2020)44; Gaspar et al. (2016)45; Butler et al. (2001)46; Santos et al. (2004)47; Robertson et al. (2012)48
Footnote 1: http://www.stsci.edu
thousand Kelvin to a million Kelvin. It is also the region where multiple different optically thin emission lines form, providing excellent diagnostics for plasma motions. We included five transition region emission lines in our study: the Si iv doublet at 1393.76 A and 1402.77 A, the C iv doublet at 1548.195 A and 1550.77 A, and the O iv line at 1401.156 A. Figure A1 shows these five transition region emission lines for the young exoplanet host star \(\epsilon\) Eri.
The Si iv doublet lines are resonance lines formed under optically thin, collisionally excited conditions with a peak formation temperature of \(\sim\)80,000 K (Peter et al., 2014), and lie in the middle of the transition region. The Si iv line at 1393 A (decimals are ignored for simplicity in the rest of the text) has a stronger intensity than the line at 1402 A. Under optically thin conditions the flux ratio of these two lines (1393/1402) is 2, while under optically thick conditions the ratio is \(\neq 2\). The Si iv doublet lines are the only lines for which we have included full-disc solar observations from IRIS.
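As a minimal illustration of this opacity check, the short Python sketch below computes the 1393/1402 flux ratio and flags departures from the optically thin value of 2; the input fluxes and the tolerance are illustrative assumptions, not values from this work.

```python
def siiv_doublet_ratio(flux_1393, flux_1402):
    """Flux ratio of the Si iv resonance doublet (1393/1402).

    A ratio of ~2 indicates optically thin formation; significant
    departures from 2 point to optical-depth effects.
    """
    return flux_1393 / flux_1402


# Hypothetical integrated line fluxes in erg s^-1 cm^-2 (illustrative only).
ratio = siiv_doublet_ratio(4.1e-13, 2.0e-13)
flag = "optically thin" if abs(ratio - 2.0) < 0.2 else "possible opacity effects"
print(f"Si iv 1393/1402 = {ratio:.2f} -> {flag}")
```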
The C iv doublet lines are resonance lines with a peak formation temperature of 100,000 K (Teriaca et al., 1999). The 1548 A line originates from an atomic transition from the ground to the third energy level, and the 1550 A line is formed by a transition from the ground to the second energy level. The C iv doublet has the strongest intensity of all the emission lines used in this work. Under non-flaring solar conditions the flux ratio of the two lines in the doublet is shown to be \(\sim 2\), suggesting optically thin conditions (Dere and Mason, 1993). Due to limited wavelength coverage the C iv lines could be analysed for only one star in the sample, \(\epsilon\) Eri.
The O iv line at 1401 A is one of five O iv intercombination lines found close to the Si iv lines. With a formation temperature of 150,000 K under equilibrium conditions (De Pontieu et al., 2014), the O iv line is an important density diagnostic (Keenan et al., 2002). The ratio of this line to the O iv line at 1399 A is widely used as a density diagnostic for medium-density plasma (Polito et al., 2016). In the Sun, the wings of the O iv line are known to be blended with the cool photospheric S i line (Polito et al., 2016).
#### 2.1.3 Coronal emission lines
While the corona mostly emits in X-rays, it can also be probed using a limited number of FUV and EUV lines. The coronal emission line included in this study is the Fe xii 1349.4 A forbidden line with a peak formation temperature of \(\sim\) 1.5 MK. Figure A1 shows the Fe xii line of \(\epsilon\) Eri, which has the poorest signal-to-noise ratio amongst all the lines included in this work. Furthermore, the weak Fe xii line is only observable for two stars in our sample. Even in the Sun, long exposure times are required to obtain a good signal-to-noise ratio for this line (Testa et al., 2016).
Our results in Section 3 are primarily based on the Si iv line at 1393 A, as it is the most easily detectable line for all stars included in our study. The results related to the other six emission lines are only discussed for a handful of stars with a good signal-to-noise ratio.
### Gaussian model line fitting and non-thermal velocities
In order to determine the non-thermal broadening we first perform a Gaussian fit to the observed emission lines. The full width at half maximum (FWHM) of the best fit model is then used to measure the non-thermal broadening after subtracting the corresponding thermal broadening.
As shown in Figures 1 and 2, the Si iv, C iv and O iv lines were fitted using a double Gaussian model, which includes a narrow component (NC) and a broad component (BC). A slightly different Gaussian model was applied to the O i and Fe xii lines. The nearby C i line was included in the fitting procedure of the O i line. The best fit model does not account for the broad wings of O i, suggesting that an additional Gaussian component might be necessary, which is not included here. Finally, a single Gaussian model is used to fit the Fe xii line. Analysis of IRIS Fe xii lines in solar active regions by Testa et al. (2016) shows that a single Gaussian model is sufficient to model this line. Figure 2 also suggests that the Fe xii line can be modelled using a single
Figure 1: HST COS Si iv 1393 Å line of the active young K dwarf \(\epsilon\) Eri in grey. The best fit Gaussian model is shown in black, followed by the narrow and the broad component of the Gaussian fit in blue. A full-disc solar Si iv 1393 Å line is shown for comparison in red.
Gaussian model. However, it has been shown that, while spectra with a poor signal-to-noise ratio can be easily fit with a single Gaussian line profile, the same spectra might need a double Gaussian model if the signal-to-noise ratio improves (Peter, 2006). Finally, although a double Gaussian model is applied to the O iv line, the S i line blend is not included in the line fitting process. Hence, care should be taken when interpreting the non-thermal broadening determined using the O i, Fe xii, and O iv lines.
Since our emission line profiles are composed of either a single or a double Gaussian, the FWHM of the best fit Gaussian models can be expressed as a combination of thermal and non-thermal motions,
\[FWHM=\sqrt{4\ln 2\left(\frac{\lambda}{c}\right)^{2}\left(\frac{2k_{\rm B}T}{M}+\xi^{2}\right)} \tag{1}\]
where \(\lambda\) is the rest wavelength of the emission line in A, \(c\) is the speed of light in km s\({}^{-1}\), \(k_{\rm B}\) is the Boltzmann constant, \(T\) is the temperature of the plasma in Kelvin, \(M\) is the mass of the ion emitting the line in units of the hydrogen atom mass, and \(\xi\) is the non-thermal velocity along the line of sight in km s\({}^{-1}\).
In addition to thermal and non-thermal broadening, the other key broadening mechanisms affecting the observed emission lines are instrumental and rotational broadening. For rapidly rotating stars, especially for chromospheric lines, which are narrower than their transition region counterparts, the effect of rotational broadening should be included in the model. For moderate to slowly rotating stars, however, rotational broadening is negligible compared to the other broadening mechanisms. The stellar sample included here mostly consists of moderate and slow rotators, hence rotational broadening is not included in our current model. To account for the instrumental broadening of COS spectra, our Gaussian models are convolved with the COS line spread function (LSF) before applying the fitting algorithm. The instrumental broadening of IRIS is reported to be \(\sim\)5.5 km s\({}^{-1}\) (De Pontieu et al., 2014), which is accounted for in our calculation by including the \(\Delta_{\rm inst}\) term in equation 1, as shown below,
\[FWHM=\sqrt{\Delta_{\rm inst}^{2}+4\ln 2\left(\frac{\lambda}{c}\right)^{2}\left(\frac{2k_{\rm B}T}{M}+\xi^{2}\right)} \tag{2}\]
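As a minimal sketch of how Equation (2) is inverted in practice, the following Python function recovers \(\xi\) from a measured line width, assuming all widths have already been converted to velocity units (km s\({}^{-1}\)) so that the \((\lambda/c)^{2}\) factor is absorbed; the example FWHM is an illustrative assumption, chosen to reproduce the \(\epsilon\) Eri-like values quoted in the next paragraph.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J K^-1]
M_H = 1.6735575e-27  # mass of the hydrogen atom [kg]

def nonthermal_velocity(fwhm_kms, T_K, ion_mass_amu, inst_kms=0.0):
    """Invert Eq. (2) with all widths in velocity units (km/s): subtract
    the instrumental and thermal contributions in quadrature."""
    fwhm_sq = fwhm_kms**2 - inst_kms**2                        # remove instrumental width
    v_th_sq = 2.0 * K_B * T_K / (ion_mass_amu * M_H) * 1e-6    # 2 k_B T / M in (km/s)^2
    xi_sq = fwhm_sq / (4.0 * np.log(2.0)) - v_th_sq
    return np.sqrt(xi_sq)

# Si iv (M ~ 28.09) at T = 80,000 K: the thermal FWHM alone is ~12 km/s.
# An illustrative measured FWHM of 40.3 km/s then yields xi ~ 23 km/s.
print(f"xi = {nonthermal_velocity(40.3, 8.0e4, 28.09):.1f} km/s")
```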
Our fitting algorithm is implemented in Python and is optimised using scipy's least-squares minimisation. Additionally,
Figure 2: Interface region and coronal emission lines of \(\epsilon\) Eri in grey. The best fit Gaussian model is shown in black followed by the narrow and the broad component in blue.
we determine the goodness of our fits by calculating the \(\chi^{2}\). To provide an error estimate we carried out multiple Gaussian fits using a wide range of initial conditions for the fitting algorithm. The standard deviation of the non-thermal velocities calculated from these Gaussian fits is taken as the dispersion, with an average dispersion of \(\sim\)2 km s\({}^{-1}\). Figures 1 and 2 show examples of the best-fit Gaussian models to \(\epsilon\) Eri's emission lines observed by HST COS, where the line widths are much broader than expected from thermal broadening alone. As an example, the width of the Si iv line due to thermal motions is \(\sim\)12 km s\({}^{-1}\), which is much smaller than the line width seen in Figure 1. The average non-thermal velocity obtained for this star after subtracting the thermal broadening is 23.2 km s\({}^{-1}\) (Table 1). This clearly shows that non-thermal broadening mechanisms dominate the interface region lines discussed here.
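A self-contained sketch of such a double Gaussian fit is given below, using scipy's least_squares on synthetic data; the line parameters, noise level, and initial guesses are illustrative assumptions, and the convolution with the COS line spread function used in the actual analysis is omitted for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian(x, amp, cen, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amp * np.exp(-0.5 * ((x - cen) / sigma) ** 2)

def double_gaussian(p, x):
    """Narrow (NC) + broad (BC) component with a shared centre and a
    constant continuum level."""
    a_nc, a_bc, cen, fwhm_nc, fwhm_bc, cont = p
    return gaussian(x, a_nc, cen, fwhm_nc) + gaussian(x, a_bc, cen, fwhm_bc) + cont

def residuals(p, x, flux, err):
    return (double_gaussian(p, x) - flux) / err

# Synthetic Si iv 1393 A window standing in for an extracted COS spectrum.
rng = np.random.default_rng(1)
wave = np.linspace(1393.0, 1394.5, 200)
err = np.full_like(wave, 0.02)
flux = double_gaussian([1.0, 0.25, 1393.76, 0.12, 0.35, 0.02], wave)
flux += rng.normal(0.0, 0.02, wave.size)

# Initial guesses: [A_NC, A_BC, centre (A), FWHM_NC (A), FWHM_BC (A), continuum].
p0 = [0.8, 0.2, 1393.8, 0.1, 0.4, 0.0]
fit = least_squares(residuals, p0, args=(wave, flux, err))
print("best-fit NC FWHM: {:.3f} A, BC FWHM: {:.3f} A".format(fit.x[3], fit.x[4]))
```

In the analysis itself, repeating this fit over a grid of initial guesses and taking the standard deviation of the resulting non-thermal velocities yields the dispersion quoted above.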
### Non-thermal energy carried by Alfven waves
The observed non-thermal velocity could be attributed to multiple different processes, out of which Alfven waves have widely emerged as the mechanism of choice for current coronal and wind heating models (van der Holst et al., 2014; Lionello et al., 2014; Reville et al., 2020). If the excess energy required to heat solar/stellar winds is provided by the presence of transverse Alfven waves, then the associated wave energy density \(w\) can be expressed as,
\[w=\rho\delta v^{2} \tag{3}\]
where \(\rho\) is the mass density and \(\delta v^{2}\) is the wave velocity perturbation; both of these terms can be observationally constrained using the emission lines discussed here. The wave velocity perturbation \(\delta v^{2}\) can be determined directly from the observed non-thermal velocities via \(\xi^{2}=\frac{1}{2}\delta v^{2}\) (Banerjee et al., 1998). The mass density \(\rho\) can be determined from the electron number density \(N_{\rm e}\), \(\rho=m_{\rm H}N_{\rm e}\), where \(m_{\rm H}\) is the mass of a single hydrogen atom. Flux ratios of certain interface region emission lines act as a good diagnostic of the number density \(N_{\rm e}\) (Keenan et al., 2002; Polito et al., 2016). However, it should be noted that this diagnostic method is sensitive to the line formation temperature. As the temperature in the interface region varies rapidly, the density diagnostic is strongly dependent on the line used. In this work the O iv 1399/1401 line ratio was analysed to constrain \(N_{\rm e}\) and \(\rho\), as discussed in Section 3.5.
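Combining these relations, a short sketch of the wave energy calculation might look as follows; the adopted number density is the solar transition region value used later as a minimum density (see footnote 3), and the non-thermal velocity is the Sun-as-a-star value derived in Section 3.2.

```python
M_H = 1.6735575e-27  # mass of the hydrogen atom [kg]

def wave_energy_density(xi_kms, n_e_cm3):
    """Alfven wave energy density w = rho * dv^2 (Eq. 3), with
    dv^2 = 2 * xi^2 and rho = m_H * N_e."""
    rho = M_H * n_e_cm3 * 1e6        # number density cm^-3 -> m^-3
    dv2 = 2.0 * (xi_kms * 1e3) ** 2  # velocity perturbation in m^2 s^-2
    return rho * dv2                 # energy density in J m^-3

# Sun-as-a-star xi = 22.9 km/s and N_e = 2e10 cm^-3 (van der Holst et al. 2014).
w = wave_energy_density(22.9, 2.0e10)
print(f"w = {w:.2e} J m^-3 ({w * 10.0:.2f} erg cm^-3)")
```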
## 3 Results and Discussion
### Emission line shapes
Figure 1 shows an example Si iv 1393 A line for the active young Sun \(\epsilon\) Eri. The best fit double Gaussian model is also shown, together with a full-disc spectrum of the Sun taken by the IRIS spectrograph. In the case of \(\epsilon\) Eri, strong excess emission close to the wings is detected in both the red and the blue part of the spectral line. Such strong emission around the wings has been detected in all of the stars in our sample. In comparison, the excess emission in the full-disc solar spectral line is relatively weak, as shown in Figure 1.
In addition to the double Gaussian shape, some of the emission lines exhibit distortions in the line profile, and in some cases shifts of the central wavelength of the line profile were also detected. In Figure 2 the core of the C iv 1548 A line shows a distorted profile shape. In the solar case, distortions and variations in the FUV emission lines are caused by non-thermal motions along the line of sight (Phillips et al., 2008). Recently, such distortions were also detected for the exoplanet host star 55 Cnc (Bourrier et al., 2018), where the distortions and the reduction in flux in FUV lines coincided with the transit of one of its planets, 55 Cnc e. According to the authors, the variation seen in these lines could not be explained by stellar activity alone and could include contributions from possible star-planet interaction. A detailed analysis of these line profile distortions and Doppler shifts in stars, with and without exoplanets, has the potential to uncover a possible diagnostic for star-planet interaction, which is however beyond the scope of this work.
### Non-thermal velocity at the solar transition region
To determine the non-thermal velocity in the solar atmosphere we analysed 98 full-disc mosaics of the spectral window surrounding the Si iv 1393 A line taken by the IRIS mission, which cover the period from the declining phase of solar cycle 24 to the increasing phase of cycle 25 (September 2013 to June 2022). The non-thermal velocities determined from these observations have a mean of 22.9 km s\({}^{-1}\) and a standard deviation of 0.5 km s\({}^{-1}\), in strong agreement with measurements of the quiet Sun (QS) by Dere and Mason (1993) and Chae et al. (1998). The non-thermal velocities determined from the full-disc IRIS mosaics also agree with results from spatially resolved IRIS observations of the Si iv 1402 A line, where the non-thermal velocity was determined to be \(\sim\)20 km s\({}^{-1}\) (De Pontieu et al., 2015).
Ten out of the 98 full-disc IRIS mosaics used here were also analysed by Ayres et al. (2021) as part of their Sun-as-a-star study (referred to as A21 from here on), which included 10 full-disc mosaics taken between October 2013 and September 2019. The data analysed by A21 were corrected for cosmic ray hits and missing data, and for this data set we obtained a mean non-thermal
velocity of 22.4 km s\({}^{-1}\) with a standard deviation of 0.6 km s\({}^{-1}\), which is in agreement with the results obtained from the full IRIS sample of 98 observations. For a detailed comparison between individual observations please see Appendix C.
Since the IRIS observations cover the decreasing phase of solar cycle 24 and the increasing phase of cycle 25, we also investigate a possible correlation between full-disc solar non-thermal velocities and magnetic spot emergence. Figure 3 shows the evolution of the non-thermal velocity with the solar cycle. There is a phase delay between the peaks of the two measurements, with the non-thermal velocity peaking at least a year later and showing a generally flat distribution towards the declining phase of cycle 24. The lower envelope of the non-thermal velocities for the entire sample of 98 observations exhibits a similar trend to the sunspot numbers as they progress from cycle 24 to 25. The non-thermal velocities determined for the observations taken from A21 agree well with the full IRIS sample used here. Sunspot numbers are a proxy for photospheric magnetic activity, and non-thermal velocities are diagnostics of magnetic processes in the interface region. Since the interface region magnetic field is rooted in the photosphere, it is not surprising to see a general agreement in Figure 3 despite the difference in spatial scales.
### Non-thermal velocity vs stellar properties
We determine the non-thermal velocities for the stars listed in Table 1 from the Si iv 1393 A line profile, and investigate their relationship with stellar properties such as rotation and effective temperature.
The spectral coverage of our sample lies between late F and mid M dwarfs, with G and K dwarfs being the dominant majority, as shown in Figure 4. The chromospheric activity of the sample indicates mostly intermediate- to low-activity stars, with a few active stars in the mix. The solar non-thermal velocity, determined in the previous section, is close to the average non-thermal velocity for its spectral type.
The non-thermal velocities show a weak dependence on the stellar spectral type, with an overall decrease in velocity towards late-type dwarfs, as shown in Figure 4. M dwarfs are known to have stronger surface magnetic fields than G and K dwarfs (Shulyak et al., 2017). However, late-type dwarfs have smaller surface convective velocities than G dwarfs, which corresponds to smaller amplitudes for Alfven waves (Sakaue and Shibata, 2021) and smaller non-thermal velocities. This could explain the weak dependence of non-thermal velocity on spectral type. Additionally, as shown in the right panel of Figure 4, the majority of the stars in our sample are exoplanet hosts. Our sample is biased towards inactive cool stars and only includes partially convective M dwarfs. The majority of the known partially convective M dwarfs outside of young associations are inactive and believed to be older, which could also contribute towards the relatively low non-thermal broadening in M dwarf spectra seen in this work. This decreasing trend towards early M dwarfs has also been reported in multiple chromospheric activity studies of late-type dwarfs (Reiners et al., 2012; Astudillo-Defru et al., 2017; Boro Saikia et al., 2018). Hence, rotation or age should also be included when investigating the relation between non-thermal broadening and stellar properties for our sample.
Stellar non-thermal velocity shows a clear dependence on rotation, as presented in Figure 5. The late-type stars in Figure 4 with the lowest non-thermal velocity are indeed older, slowly rotating dwarfs. Overall the non-thermal velocity shows a linear dependence on rotation, but within the intermediate to rapidly rotating cohort the non-thermal velocity exhibits a flat distribution between 15 and 30 km s\({}^{-1}\). The rapidly rotating young Sun EK Dra, with a non-thermal velocity in the range of 30-40 km s\({}^{-1}\), gives the impression of an increase with rapid rotation in the left panel of Figure 5. However, at a rotation period of \(\sim\)2 days rotational effects could distort and broaden the observed spectral line. Furthermore, some of the EK Dra observations are affected by a
Figure 3: Full-disc solar non-thermal velocities as a function of time. The blue circles represent the 98 full-disc IRIS observations. The magenta circles represent non-thermal velocities determined for the observations in A21. The dashed grey curve follows the sunspot cycle. The sunspot numbers were taken from World Data Centre SILSO, Royal Observatory of Belgium, Brussels.
flaring event (Ayres and France, 2010; Ayres, 2015). Hence, the quiescent non-thermal velocity of EK Dra could be much lower than reported here.
Since the influence of rotational broadening is not included in the model spectra, the stellar sample was additionally reduced to include only slowly rotating stars. The right panel of Figure 5 shows the non-thermal velocity of a smaller sample of stars with low projected rotational velocity, \(v\sin i<5\) km s\({}^{-1}\)2. The overall trend is the same as in the left panel of Figure 5. The non-thermal velocities of stars similar to the Sun in rotation period lie in the range of 15-30 km s\({}^{-1}\), and the single slowly rotating M dwarf exhibits a non-thermal velocity in the range of 10-15 km s\({}^{-1}\). Future observations of both rapidly and slowly rotating stars, together with a more complex model, would help to further constrain the relationship between non-thermal velocity and rotation in Figure 5.
Footnote 2: Five stars, including two M dwarfs, did not have any \(v\sin i\) measurements, so those stars were also omitted in the right panel of Figure 5.
### Non-thermal broadening vs Si iv 1393 flux
The Si iv 1393 A line is also considered to be an indirect tracer of the stellar magnetic field, and a proxy for stellar EUV activity (France et al., 2018). Hence, we investigate possible correlations between the Si iv 1393 flux and the measured non-thermal velocities. The flux in the Si iv 1393 line, \(F_{\rm SiIV1393}\), is calculated by integrating over a window centred on the line. Since our sample consists of stars of different spectral types, we normalise the measured flux by the bolometric flux \(F_{\rm bol}\),
\[F_{\rm bol}=\sigma T_{\rm eff}^{4}\left(\frac{R}{d}\right)^{2} \tag{4}\]
where \(\sigma\) is the Stefan-Boltzmann constant, \(T_{\rm eff}\) is the stellar effective temperature, \(R\) is the stellar radius, and \(d\) is the distance.
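As a concrete illustration of this normalisation, the sketch below evaluates Equation (4) in cgs units; the input line flux, effective temperature, radius, and distance are hypothetical placeholders rather than values from Table 1.

```python
import numpy as np

SIGMA_SB = 5.670374419e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
R_SUN_CM = 6.957e10        # solar radius [cm]
PC_CM = 3.0857e18          # parsec [cm]

def log_normalised_flux(f_line, t_eff, radius_rsun, dist_pc):
    """log10(F_SiIV1393 / F_bol), with F_bol from Eq. (4) giving the
    bolometric flux received at Earth; f_line is the integrated line
    flux observed at Earth in erg s^-1 cm^-2."""
    f_bol = SIGMA_SB * t_eff**4 * (radius_rsun * R_SUN_CM / (dist_pc * PC_CM))**2
    return np.log10(f_line / f_bol)

# Hypothetical G dwarf: T_eff = 5800 K, R = 1 R_sun, d = 10 pc.
print(f"log(F_SiIV/F_bol) = {log_normalised_flux(3.0e-14, 5800.0, 1.0, 10.0):.2f}")
```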
Figure 6 shows the Si iv 1393 A non-thermal velocity as a function of the normalised Si iv flux \(F_{\rm SiIV1393}/F_{\rm bol}\). The non-thermal velocities exhibit a weak correlation with the measured flux, with a coefficient of determination (R\({}^{2}\)) of 0.3. The correlation appears to be stronger for \(\log(F_{\rm SiIV1393}/F_{\rm bol})\) above -6.5. Stronger correlations have been reported between the non-thermal velocity and the intensity of the Si iv line for the quiet Sun by Chae et al. (1998). Solar simulations suggest that, depending on the magnetic field orientation, the observed correlation between non-thermal broadening and line intensity could be attributed to either shocks or turbulence (De Pontieu et al., 2015). It should however be noted that these results are based on spatially resolved solar observations, which are not directly comparable to disc-integrated stellar observations. Furthermore, unlike the Sun, where the correlation is based on multiple observations of the same star, the correlation shown in Figure 6 is based on measurements taken for different cool stars. The stellar correlation could therefore be due to different levels of magnetic activity, including short- and long-term variability.
### Alfven wave energy vs stellar rotation
As discussed in Section 2.3, to determine the Alfven wave energy density the mass density must also be
Figure 4: Non-thermal velocity, derived from the Si iv 1393 Å line width, as a function of stellar effective temperature. _Left_: The colour bar represents stellar chromospheric activity (\(\log R^{\prime}_{\rm HK}\)) taken from the literature. Stars without known literature \(\log R^{\prime}_{\rm HK}\) values are marked in red. _Right_: Same as the figure on the left, except green marks the known exoplanet hosts and orange represents stars without known exoplanets.
known in addition to the non-thermal velocity measurements. We determined the O iv 1399/1401 flux ratio to estimate the mass density \(\rho\).
The O iv lines are much weaker than the Si iv lines (Figure 10). Hence, density estimates could be obtained for only six stars out of our entire sample (Table 2). As shown in Figure 11, the line ratios lie close to the high-density limit of \(N_{\rm e}=10^{12}\) cm\({}^{-3}\) in Figure 2 of Polito et al. (2016). Hence, we applied the solar mass density 3 in equation 3 as a minimum density to determine the Alfven wave energy density (Figure 11). Since the density \(\rho\) is kept the same for all stars, the wave energy density follows the same trend as the non-thermal velocity. A detailed density determination would help to shed light on the dependence of the wave energy density on stellar properties.
Footnote 3: \(N_{\rm e}\) of 2\(\times 10^{10}\) cm\({}^{-3}\)(van der Holst et al., 2014)
### Non-thermal velocity vs emission line temperature
#### 3.6.1 The Sun
The dependence of the solar non-thermal velocity on emission line temperature in spatially resolved solar measurements was extensively studied in the past (Doschek et al., 1976; Chae et al., 1998; Teriaca et al., 1999). In Figure 7 we include non-thermal velocities, determined using SUMER observations, from two such studies (Chae et al., 1998; Teriaca et al., 1999) and compare them to our Sun-as-a-star result determined from the Si iv 1393 A line. To make the comparison easier we also fit a second-order polynomial to the archival measurements, where green marks the fit obtained for Chae et al. (1998), purple marks the fit obtained for Teriaca et al. (1999), and black is the fit obtained for the combined Chae et al. (1998) and Teriaca et al. (1999) measurements. Table 3 lists the coefficients of the polynomial fits.
\begin{table}
\begin{tabular}{c c} \hline \hline name & O iv 1399/1401 \\ \hline HD 22049 & 0.45 \\ HD 75732 & 0.43 \\ HD 72905 & 0.58 \\ HD 1835 & 0.53 \\ HD 201091 & 0.45 \\ HD 39587 & 0.56 \\ \hline \end{tabular}
\end{table}
Table 2: Ratio of the O iv lines at 1399 and 1401 Å for the sub-sample of stars in Figure 8. Mean values are shown for stars with multiple observations.
Figure 5: _Left:_ Non-thermal velocity as a function of stellar rotation period. Colour scale same as the left panel of Figure 4. _Right:_ Same as the figure on the left but only stars with \(v\sin i<\)5 km s\({}^{-1}\) are included.
Figure 6: Non-thermal velocity vs Si iv 1393 flux. The dotted line is a linear fit to the data, where the R\({}^{2}\) is 0.3.
As shown in Figure 7, the solar non-thermal velocity increases with line temperature, peaks at around 100,000-200,000 K, and decreases towards higher temperatures. Figure 7 also shows that the AR measurements are consistently higher than the QS measurements. The QS non-thermal velocities from Chae et al. (1998) are lower than the ones obtained by Teriaca et al. (1999), which could be due to different observing periods, and different data reduction and analysis techniques. Despite this slight discrepancy, the overall trend exhibited by both of these studies is very similar. Our Sun-as-a-star non-thermal velocity is on the lower end, closest to the QS value of Chae et al. (1998).
#### 3.6.2 Sun-like stars
To determine the dependence of non-thermal velocity on emission line temperature in stars other than the Sun, we analysed the seven emission lines discussed in Section 2.1 and determined the non-thermal broadening for six stars with appropriate wavelength coverage and good signal-to-noise ratios. Although six is a small number, the stars vary in mass, age and rotation period, and include two older Sun-like stars, 55 Cnc and 61 Cyg A, and four active young Sun-like stars, \(\pi^{1}\) UMa, \(\epsilon\) Eri, 9 Cet and \(\chi^{1}\) Ori. Two of these six stars are known exoplanet hosts, 55 Cnc and \(\epsilon\) Eri. Out of these six stars, the non-thermal broadening in the C iv doublet could be determined for only one target, \(\epsilon\) Eri, as this line was outside the wavelength coverage for the rest of the stars. Additionally, due to the poor signal-to-noise ratio of the Fe xii line, we could determine the non-thermal broadening in this line for only two of the six stars, \(\epsilon\) Eri and 9 Cet. The O i, Si iv, and O iv lines could be analysed for all six stars.
Figure 8 shows the non-thermal broadening vs emission line temperature for the six stars discussed above, where each star is represented by a single colour. Similar to the solar case, the stellar non-thermal velocities exhibit a clear dependence on emission line temperature. The non-thermal velocity is lowest in the chromosphere (20,000 K) and highest in the transition region. However, unlike the Sun, where the peak occurs between 100,000 and 200,000 K, in almost all stars the non-thermal velocity peaks at a lower temperature of 80,000 K. HD 72905 or \(\pi^{1}\) UMa is the only star analysed in this work where the non-thermal velocity peak is similar to the Sun's. The other young solar analogues in the sample do not exhibit the same trend. A similar analysis by Pagano et al. (2004) of HST STIS spectra of the moderately active star \(\alpha\) Cen A showed that its non-thermal velocity peaks at a temperature similar to the Sun's. This suggests that the discrepancy between the solar and stellar non-thermal velocity peaks seen in Figure 8 might not be related to spatial scales or the instrument used, but could be due to differences in the underlying physical processes that cause the non-thermal broadening in these lines.
As discussed previously, the non-thermal broadening of emission lines is due to mechanisms such as Alfven waves, turbulence, shocks, flares, and re-connection events. It is now widely accepted that Alfven waves are the dominant transport mechanism in the Sun. Based on the relationship between the non-thermal velocity and emission line temperature in \(\pi^{1}\) UMa and \(\alpha\) Cen A, it is reasonable to assume that the energy transport processes in these two stars are very similar to those in the Sun.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & a & b & c \\ \hline QS fit C98 & -15.1 & 162.2 & -408.2 \\ AR fit T99 & -10.39 & 112.1 & -272.6 \\ QS fit T99 & -11.21 & 120.2 & -293.9 \\ fit all data & -13.06 & 140.3 & -348.6 \\ \hline \end{tabular}
\end{table}
Table 3: Coefficients of the second-degree polynomial (\(a\log^{2}{x}+b\log{x}+c\)) fits in Figure 7.
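The parabolic fits in Table 3 can be evaluated directly, as in the sketch below; the peak temperature of each fit follows from setting the derivative with respect to \(\log T\) to zero, i.e. \(\log T_{\rm peak}=-b/(2a)\).

```python
import numpy as np

# Coefficients (a, b, c) of xi = a*(log T)^2 + b*log T + c from Table 3.
FITS = {
    "QS fit C98":   (-15.10, 162.2, -408.2),
    "AR fit T99":   (-10.39, 112.1, -272.6),
    "QS fit T99":   (-11.21, 120.2, -293.9),
    "fit all data": (-13.06, 140.3, -348.6),
}

def xi_of_T(T_K, fit="fit all data"):
    """Non-thermal velocity [km/s] predicted by a Table 3 fit at temperature T."""
    a, b, c = FITS[fit]
    logT = np.log10(T_K)
    return a * logT**2 + b * logT + c

for name, (a, b, _) in FITS.items():
    T_peak = 10.0 ** (-b / (2.0 * a))  # vertex of the parabola in log T
    print(f"{name}: peak near T = {T_peak:.2e} K, "
          f"xi(80,000 K) = {xi_of_T(8.0e4, name):.1f} km/s")
```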
Figure 7: Solar non-thermal velocities vs emission line temperature in log scale. The open circles show the quiet Sun (QS) measurements taken from Chae et al. (1998) (green, C98) and Teriaca et al. (1999) (purple, T99), and the filled circles represent active region (AR) measurements taken from Teriaca et al. (1999) (purple, T99). A second order polynomial fit to the AR and QS measurements of T99 is shown by the purple dotted and dashed line, respectively, while the dashed green line marks a second-order polynomial fit to the QS measurements of C98. The black dashed and dotted line is the overall fit obtained for all measurements included in this Figure (QS C98, QS T99, and AR T99). The mean Sun-as-a-star non-thermal velocity determined in this work is marked by the black \(\odot\) with the standard deviation as error.
This provides a strong justification for the suitability of Alfven-wave-driven models for wind simulations in stars like \(\pi^{1}\) UMa and \(\alpha\) Cen A.
For some stars, however, the non-thermal velocity peak at 80,000 K indicates that the majority of the Alfven waves are either dissipating at lower temperatures than in the Sun, or that processes other than Alfven waves contribute to the non-thermal broadening of the Si iv line at 80,000 K. As an example, the \(\epsilon\) Eri observations used in this work are taken from the HST archive, and some of these observations are impacted by flares (Loyd et al., 2022). It is, however, difficult to judge from these observations alone how extreme events such as flares impact the non-thermal broadening of the interface region emission lines. A detailed study of these emission lines at different levels of solar activity, both including and excluding large flare events, could help shed further light on this problem.
## 4 Summary and Conclusion
We analysed HST COS and IRIS archival spectra to determine the non-thermal velocities of cool Sun-like stars by measuring the non-thermal broadening of the Si iv line for 56 stars, including the Sun.
To determine the non-thermal velocity of the Sun we used full-disc mosaics from IRIS, which can be compared to disc-integrated stellar observations. The non-thermal velocities obtained in this work are in good agreement with results obtained from spatially resolved observations of the quiet Sun, and also with those of other Sun-like stars of similar spectral type and rotation period. We also detect a weak correlation between the solar non-thermal velocity and the sunspot numbers.
The Si iv 1393 A line profile was also used to determine the non-thermal velocity of 55 other Sun-like stars, the majority of which are exoplanet hosts. The non-thermal velocity shows a clear dependence on rotation, where an increase in rotation rate is followed by an increase in non-thermal velocity. We also determined the Alfven wave energy density, assuming that the non-thermal broadening is caused by the presence of transverse Alfven waves. By applying the solar plasma density in the transition region as a representative minimum value for cool stars, our results show that the Alfven wave energy density follows the same trend as
Figure 8: Non-thermal velocity vs emission line temperature. Individual stars are represented by colour and the spectral lines are represented by symbols. The error bars represent the dispersion for stars with multiple measurements. Each star is colour coded as shown in the labels. The mean solar non-thermal velocity determined from the Si iv 1393 Å line is shown by the solar symbol. The coloured lines represent the polynomial fits from Figure 7.
the non-thermal velocity. Since the Alfven wave energy density is an important input parameter for state-of-the-art stellar wind models such as AWSoM (van der Holst et al., 2014), these results provide, for the first time, a data-driven way to constrain the wave energy in such models, and thus help us scale the models for stars with different rotation periods. However, a detailed density determination in cool stars is still needed to fully exploit the diagnostic capabilities of the emission lines discussed here.
Finally, we investigated the relationship between non-thermal broadening and emission line temperature using seven emission lines with formation temperatures ranging from the chromosphere to the lower corona. This part of the analysis could only be applied to a limited sample of six stars due to signal-to-noise constraints. In the Sun the non-thermal velocity increases with temperature, peaking at around 100,000-200,000 K, and decreases towards coronal temperatures. Our results show that in cool stars the global trend of non-thermal velocity vs emission line temperature is very similar to the Sun's; however, the non-thermal velocity peaks at a lower temperature for some stars. This suggests that, in some cool stars, either the majority of the Alfven wave energy is dissipated at a lower temperature in the transition region, or other processes, such as flares, could contribute strongly to the non-thermal broadening at low temperatures. To obtain a better understanding of the contribution of Alfven waves and energetic events such as flares, detailed investigations of the Sun and other stars using emission lines formed at different interface region temperatures are required.
This research was funded by the Austrian Science Fund (FWF) Lise Meitner grant: M2829-N. V.S.A. acknowledges support from the NASA/GSFC Sellers Exoplanet Environments Collaboration (SEEC), which is funded by the NASA Planetary Science Division's Internal Scientist Funding Model (ISFM) and funding from HST GO Cycle 27 NAS5-26555. MJ is supported by NASA's SDO/AIA contract (NNG04EA00C) to LMSAL. PT was supported by contract 8100002705 (IRIS) from Lockheed-Martin to SAO.
|
2306.13113 | Do Resilience Metrics of Water Distribution Systems Really Assess
Resilience? A Critical Review | Having become vital to satisfying basic human needs, water distribution
systems (WDSs) are considered critical infrastructure. They are vulnerable to
critical events such as extreme weather, natural and man-made disasters, armed
conflicts etc. To account for critical events in the context of design and
operation of WDSs, the concept of resilience is frequently mentioned. How
resilience of WDSs can be assessed using resilience metrics has been the
subject of research of many publications. The aim of this paper is to inspect
the alignment between a general understanding of resilience in WDSs and the
metrics used for resilience assessment. A novel framework for categorising
resilience metrics for WDSs is presented. A literature review of resilience
metrics for WDSs is performed and the results are analysed using the framework
designed. The results show that resilience metrics do not really assess
resilience of the systems, but rather only specific functions and properties of
systems which can make them resilient. | Michaela LeΕ‘tΓ‘kovΓ‘, Kevin Tiernan Logan, Imke-Sophie Rehm, Peter F. Pelz, John Friesen | 2023-06-21T21:02:37Z | http://arxiv.org/abs/2306.13113v1 | # Do Resilience Metrics of Water Distribution Systems Really Assess Resilience? A Critical Review
###### Abstract
Having become vital to satisfying basic human needs, water distribution systems (WDSs) are considered critical infrastructure. They are vulnerable to critical events such as extreme weather, natural and man-made disasters, armed conflicts etc. To account for critical events during the design and operation of WDSs, the concept of resilience is frequently mentioned. How resilience of WDSs can be assessed using resilience metrics has been the subject of research of many publications. The aim of this paper is to inspect the alignment between a general understanding of resilience in WDSs and the metrics used for resilience assessment. A novel framework for categorising resilience metrics for WDSs is presented. A literature review of resilience metrics for WDSs is performed and the results are analysed using the framework designed. The results show that resilience metrics do not really assess resilience of the systems, but rather only specific functions and properties of systems which can make them resilient.
keywords: resilience, water distribution systems, resilience metrics, review, infrastructure resilience
## 1 Introduction
Access to safe water is among the most fundamental human needs [1]. It plays a pivotal role in Sustainable Development Goal 6 [2]. In many
places on Earth, access to safe water is provided by water distribution systems (WDSs). Having become vital to satisfying basic human needs, WDSs are considered critical infrastructure that is vulnerable to extreme weather events, natural and man-made disasters, armed conflicts etc. Recent examples of this include the disruption of water supply as a consequence of the 2021 flood events in western Europe, e.g. in Bad Münstereifel [3], several cases of direct attacks on pumping stations, pipelines and dams during the Russia-Ukraine armed conflict [4], as well as broken water pipes as a consequence of the 2023 earthquake in Turkey and Syria [5]. The projected increase in the frequency of extreme weather events as a result of the progressing climate crisis also affects and will continue to affect WDSs.
To account for critical events in the context of design and operation of WDSs, the concept of resilience is frequently mentioned [6]. WDSs are considered technical or socio-technical systems that need to be _resilient_ with regard to critical events. However, it is challenging to operationalise the concept of resilience in WDSs and to use it in academic studies [7]. The reasons for this are mainly twofold: (i) no scientific consensus regarding the definition of resilience exists; and (ii) no scientific consensus regarding the measurement of WDS resilience exists.
These two challenges have been addressed in numerous scientific publications. In particular, numerous metrics have been proposed for resilience assessment of WDSs. The aim of the presented publication is to inspect the alignment between a general understanding of resilience in WDSs and the metrics used for resilience assessment. Specifically, the following research questions are addressed:
* How do existing WDS resilience metrics assess resilience?
* To what extent do the existing metrics assess resilience with regard to the functions and properties of resilient systems?
* How general are the existing resilience metrics with regard to different critical events?
To answer these questions, the rest of the paper is structured as follows. First, an overview of existing review papers about resilience metrics in WDSs is provided in Section 2. In Section 3, the understanding of resilience within the scope of this study is presented, placed in the overall resilience discourse and its implications for the WDSs are illuminated. The novel framework for
classifying resilience metrics is described in Section 4. Section 5 documents the literature search protocol. The results of the critical review and the discussion are presented in Section 6 and 7, respectively.
## 2 State of the Art
Several review studies have aimed to categorise resilience metrics for water distribution systems in the past: Liu and Song [8], Shuang et al. [9], Shin et al. [10], Gunawan, Schultmann and Zarghami [11], Gay and Sinha [12], and Mohebbi et al. [13]. Each of these studies used its own unique framework for categorising metrics, and their understanding of resilience also varied. In the following, the key structure of each of the frameworks is presented.
Shin et al. [10] broaden the horizon, considering not only WDSs but also water resource systems (WRSs). While stating that resilience definitions in the domain of water infrastructures lack clarity, they determine four key capabilities of resilient systems: withstanding capability, absorptive capability, restorative capability and adaptive capability. These capabilities are considered as customer needs in a functional design process and can thus be understood as system functions. Shin et al. categorise resilience metrics according to two separate dichotomies: probabilistic vs. deterministic and dynamic vs. static. Unlike deterministic measures, probabilistic measures "consider the stochasticity of system functions (or disturbances) and the probability-based formulation of the measures" [10]. Dynamic approaches "consider time-dependent functions of a system", while static (time-independent) approaches do not.
Focusing solely on WDNs, Shuang et al. [9] define WDN resilience as the "ability to absorb local failures, to quickly recover and maintain the essential service functions, and to adapt to long-term changes in the environment and uncertainty disturbances" [9]. From this definition, they abstract three capabilities of a resilient WDN: absorptive, restorative and adaptive, omitting the withstanding capability of Shin et al. [10]. Analysing existing publications related to the resilience assessment of WDNs, the authors identify four clusters of approaches for quantitative resilience metrics: surrogate measures, simulation methods, network theory approaches, and fault detection and isolation approaches [9]. For each of the approaches, an overview of metrics, research progress and limitations is provided. The clusters are, however,
qualitatively different: while the first three focus on the methods behind the metrics, the last one covers an application area.
Liu and Song reviewed the body of research carried out on WDSs and five other types of urban networks (drainage, gas distribution, transportation, electricity distribution and communication) [8]. For WDSs, Liu and Song identify two types of metrics similar to those of Shuang et al. [9]: surrogate-based evaluation metrics and recovery-based simulation metrics. According to the authors, the definition of resilience also changes depending on which of the two types of metrics is used: in the first case, resilience "is considered a surrogate measure of [...] reliability, robustness, reserve capacity, and sustainability" [8] and is thus "static" [8]. In the second case, the resilience definition "[includes] adaptability, absorbability, and recovery capacity" [8], and is a "reflection of dynamic system performance before and after hazards" [8]. However, the authors do not provide any sources for these definitions, nor do they explain their understanding of the terms "static" and "dynamic". They are also unclear on whether these terms relate to the concept of resilience, the resilience metric or the technical system itself.
Gunawan, Schultmann and Zarghami consider resilience itself to be one of the "indicators of system performance", along with reliability, redundancy and robustness. They list 14 metrics, assigning each to one of the four "indicators" and dividing them into structural and functional metrics, where structural metrics can be understood as analogous to the static metrics of Shin et al. [10] and functional metrics as analogous to their dynamic metrics. Resilience is mentioned only in connection with the dynamic metrics. Although this analysis provides an initial overview of different metrics, it does not systematically compare the individual indicators or elaborate in detail on how they differ or which function of resilience they relate to.
Gay and Sinha performed a literature review of civil infrastructure system resilience [12]. They offer an interdisciplinary perspective, distinguishing between engineering, ecological, economic and societal resilience. However, they lack a resilience definition for the case of engineering resilience. They argue that while resilience in general cannot be measured, a system's capability for resilience can be assessed using concepts from graph theory. This resilience assessment should consider the previously stated four aspects of resilience during the operation, design and analysis of infrastructures.
Mohebbi et al. [14] evaluate resilience and its quantification in water, cyber, and transportation infrastructures, as well as their interdependencies. They distinguish between network-based, performance-based and technology-based metrics. They provide comprehensive lists of the different metrics (7 network-based, 6 performance-based and 5 technology-based). Nevertheless, similar to the reviews before, the authors do not go into detail about their understanding of resilience and do not describe which aspects of resilience each metric captures.
A major methodological problem in the review studies mentioned above is that little to no effort is made to link the resilience metrics to the definition or understanding of resilience. "Functions" or "capabilities" of resilient systems are mentioned, but not thoroughly reflected in the categorisation or analysis of the resilience metrics themselves. Other concepts such as redundancy, reliability and robustness are mentioned, but their relation to resilience differs in each paper, and in some cases [8; 11] it is unclear whether they are properties of resilience or of a resilient system. Hence, more work is needed to improve the connection between the interpretation of resilience, other related concepts commonly mentioned in its context, and the metrics used to measure it. The presented paper proposes a framework to address this challenge.
## 3 Resilience and Water Distribution Systems
In this section, the understanding of resilience underlying this study is presented and placed in the overall resilience discourse. Certain functions and properties of resilient technical systems are introduced and the implications of this understanding within the studied domain of WDSs are illuminated.
### Resilience of Technical Systems
While earlier mentions of the term resilience can be found, the first usage considered relevant for this work is by C. S. Holling in 1973 [15]. Holling describes resilience as a measure of the ability of ecosystems "to absorb changes of state variables, driving variables, and parameters, and still persist." In a later work, Holling distinguishes between engineering resilience and ecological resilience [16]. The former focuses on efficiency, constancy, and predictability, and aims for the resistance of an (ecological) system to perturbation and its return to an equilibrium steady state. The latter, in contrast, allows for multiple steady states to exist and considers the magnitude of disturbances that cause regime changes in a system from one state to another.
Since then, the term resilience has been widely adopted and discussed in multiple scientific fields, as indicated by the scope of contributions to the Handbook of International Resilience [17]. According to Elsner et al., the concept of resilience owes its popularity in part to a conjuncture of ecology, awareness of the dynamic nature of systems and the unavoidability of failures, as well as a certain fatalism towards a loss of control [18]. In consequence, resilience has come under increased critical scrutiny. One point of criticism is the vagueness and ambiguity of its meaning [19] or even its haphazard usage [18]. This has not only put into question its usefulness for scientific study but also raised the concern that as a normative term it transports a hidden agenda, as it does not capture aspects of political and economic power or interests, but instead is in line with neoliberal ideology [18]. More explicitly, it has been argued that calling for resilience is a strategy for shifting the responsibility of coping with critical events from large social institutions to individuals and that it can serve as an excuse for inaction with regard to mitigating the consequences of critical events or developments [19]. The question whether the term is normative is not fully resolved, however, as the resilience of constellations or systems can be both desirable and undesirable [19]. This is also reflected in the metaphors used for describing resilience, in that it allows systems to "bounce back" or "bounce forward". Here, the former implies that a disrupted system returns to a prior, desirable state, reminiscent of Holling's concept of engineering resilience, and the latter that a disruption of the system leads to transformation and a new state of the system, reflecting Holling's concept of ecological resilience.
In spite of the critique, it has been acknowledged that the term is useful when studying complex, transient, adaptive systems [18]. Accordingly, from Holling's concept of engineering resilience a paradigm of resilience engineering has developed for engineers concerned with complex systems [20; 21; 22]. Within this domain, the definition of resilience for the purpose of engineering of complex systems was gradually and systematically developed in order to include reactions to mishaps or continuous stress, to highlight the uncertainty of these events, and finally to incorporate aspects of the ecological engineering concept by focusing on adaptation to changed conditions [22].
Inspired by Hollnagel, Pelz et al. drew on resilience as a strategy for coping with uncertainty when designing load-bearing systems in mechanical engineering [23]. Maintaining that systems ultimately serve to fulfil functions, they differentiate between three types of uncertainty these systems face: stochastic uncertainty, incertitude and ignorance. They further propose three design strategies for coping with uncertainty: (i) robustness, (ii) flexibility, and (iii) resilience. Here, robust systems are able to fulfil their designed functionality not only at the design point but within a given interval
of operating conditions around the design point, whereas flexible systems can adapt to fulfil a given set of predetermined functionalities depending on the operating conditions. Both strategies are used for coping with incertitude. Resilience, in contrast, is a strategy for coping with ignorance, as it allows for systems to evolve their function beyond the predefined design point as an adaptation to changed conditions, while still fulfilling at least the function of its initial design. Accordingly, the authors give the following definition:
A resilient technical system guarantees a predetermined minimum of functional performance even in the event of disturbances and failures of system components, and a subsequent possibility of recovering. [23, p. 411]
In this conceptualisation, resilient systems are a strict subset of flexible systems, which in turn are a strict subset of robust systems.
Within the scope of the presented work, resilience is understood as the property of technical systems according to the definition given above. However, the more general term _critical events_ is used instead of disturbances or failures to describe any events that require the system to operate outside the designed operating conditions. In the following subsection, this broad definition is further detailed.
### Functions and Properties of Resilient Technical Systems
Hollnagel speaks of four functions that make resilient performance possible [22, 21]. These functions of resilient systems are [22]:
* **monitoring** (knowing relevant internal and external critical parameters; supervising their values during operation)
* **reacting** (being able to respond to critical events by adjusting the current mode of functioning)
* **learning** (understanding what happened during a critical event and incorporating the knowledge during future critical events)
* **anticipating** (knowing the system's expected behaviour when faced with critical events and being able to anticipate future developments such as changing operating conditions or new critical events)
Resilient systems are often described as those having the _adaptive_, _absorptive_ and _restorative_ capabilities [8; 10; 9; 24; 6]. These capabilities can be mapped to the functions of resilient systems as shown in Figure 1.
Besides the functions of resilient systems, several other properties result from the definition of resilience in Section 3.1. Resilient systems are defined as having a predetermined (meaning required, acceptable) minimum of functional performance. This is e.g. the minimum of functional performance for emergency operation during or after a critical event. In resilient systems, the intrinsic minimal functional performance lies above the predetermined functional performance. It can be considered a baseline, hence it is referred to as _baseline functionality_ in the context of this paper. Another important property of resilient systems is the possibility of _recovery_, reflected in the definition of resilience: despite the functional performance of the system being compromised, it should be possible for the system to return to a state in which a satisfactory functional performance can be guaranteed. Recovery is present in many other definitions of resilience as well [6]. A further property of resilient systems is _redundancy_ [6]: by equipping the system with additional capacity, e.g. with duplicate components, a sufficient functional performance can be secured even in case of a critical event. Redundancy is generally considered tightly coupled with the concept of resilience, but not sufficient for it. As such, it accompanies baseline functionality and the possibility of recovery.
Figure 1: Mapping of capabilities of resilient systems to their functions. While the absorptive capability can be mapped to both anticipation and reaction, the restorative and adaptive correspond to reaction and learning, respectively.

While the concept of resilience and the functions and properties of a resilient technical system have so far been presented in abstract terms of systems in general, in the following subsection, they are concretised for WDSs.
### Resilience Applied to Water Distribution Systems
WDSs are large technical systems (LTS) with a socio-political dimension [25; 26]. As infrastructure systems, they lie within the engineering domain as engineering knowledge is required for their design and operation [24]. In the context of this work, the focus is on the technical character of WDSs. Of the overall water supply system of a city, WDSs are defined as the part that transports water from the outlet of the source or treatment plant to the point where the consumer's installation connects [27]. WDSs consist of a network of pipes of various carrying capacity that stretches out covering the supply area as well as service reservoirs, pumping stations, valves, joints and fittings and further minor components [27].
Considering the adopted definition of resilience in the context of WDSs, the concepts minimum of functional performance, failures of system components, disturbances, and possibility of recovery are to be clarified.
Functional performance of WDSs is determined by threshold values at the point of connection to the consumer's installation for the following quantities: service pressure, flow rate, continuity of supply, water quality (i.e. maximum threshold values for substances in the water) [27]. Further criteria for functional performance include sustainable use of energy, minimising water loss, longevity of installations, minimising noise, and minimising risks to neighbouring buildings and the environment, and providing service in emergencies [28].
The minimum of functional performance can be defined by a further set of threshold values for the quantities enumerated above, according to national regulations. As an example for volume of water, the Federal Office of Civil Protection and Disaster Assistance (BBK) gives an estimation of 50 litres per day and capita to be provided by operators of WDS, even during critical events [29]. This figure corresponds to the level 6B water restrictions enforced by the City of Cape Town during the drought in 2018 [30]. The
threshold value in this case is a requirement defined for the operation of the WDS as a baseline functionality and is not equivalent to a predetermined minimum of functional performance as a characteristic of the WDS. Since WDSs are rarely designed from scratch but rather develop over generations, a predetermined minimum of functional performance cannot be implemented as a system characteristic and determining this characteristic is not trivial. Thus, defining threshold values for an acceptable minimum of functional performance for all operating conditions (i.e. a baseline functionality) is usually more relevant than determining the actual system characteristic "minimum of functional performance" of a WDS.
Failures of system components in WDSs include but are not limited to pipe breaks, leakages in pipes, joints, service reservoirs or fittings, faults in pumping stations and pump outages, and broken valves. Disturbances are considered to be changes to the operating condition deviating from that for which the WDS was designed, without physical damages to components. This includes unexpected changes to consumer demand and demand patterns, changes to the available supply of water from the sources and treatment plants as well as contamination of the water sources, back flow, and stagnation. The union of failures of system components and disturbances is termed critical events in the context of this work.
Concerning the possibility of recovery subsequent to critical events, this can generally be understood as the WDS returning to the service levels determined by the functional performance after a period in which only the minimum of functional performance was fulfilled. Depending on the nature of the critical events, recovery is either achieved from within the WDS through the actuators (pumping stations, valves) or by human intervention (repair of pipes and other broken components, restoring supply through source or water treatment plant, and others). It is important to recognise that in the latter case, the system boundary is extended to include not only the WDS as described above but also the human agents required to operate it as well as spare materials.
The four resilience functions defined by Hollnagel referenced in the preceding section are proposed to be understood in the context of WDSs as follows:
* **monitoring** using sensors to measure quantities for operation, e.g.
service pressure and volume flow, as well as relevant external quantities, e.g. groundwater levels, precipitation, population dynamics
* **reacting** mitigating the effects of critical events on functional performance after detecting them and returning the WDS to fulfilling service levels, e.g. using actuators in the WDS such as valves and pumps (when not including human agents in the system boundary) or deploying repair crews to restore failed components (when including human agents in the system boundary)
* **learning** gathering operation data and information and analysing them to improve future operation. This can entail using the data to adapt models for control units of pumps or improving protocols for detecting critical events through monitoring, e.g. by improving data analysis methods of the sensor data
* **anticipating** providing for likely critical events in WDSs, e.g. leakages, pipe breaks, pump outages, demand or supply variations, and others, as well as considering long term developments, e.g. demand level increase or decrease through migration into or out of the supply area and changes in supply due to dropping groundwater levels or droughts
Considering the properties of resilient technical systems given in the previous section, the following concretisations can be made in the context of WDSs. _Baseline functionality_ is a set of threshold values of the functional performance of the WDS (service pressure, flow rate, etc.) that serves as a reference for the acceptable minimum of functional performance. The deviation from this can be measured. _Recovery_ is reflected in the resilience definition and is closely linked to the resilience function **react**. If the threshold values of functional performance cannot be maintained due to critical events, the react function of the WDS needs to be fulfilled in order to return the operating point to a state where the threshold values are again met. _Redundancy_ in WDSs is related to the resilience function **anticipate**. Redundancy in WDSs is achieved by, e.g., ensuring multiple supply paths for consumers in a network, using multiple sources, including surplus pumps in pumping stations as well as securing extra capacity both in terms of available volume of water and transportation capacity in the pipe network.
Having illustrated how the resilience definition, resilience functions and related resilience properties can be applied to WDSs as an instance of resilient
technical systems, the next step is to construct a framework within which metrics for measuring resilience can be classified.
## 4 Framework for Classifying Resilience Metrics
In this section, the categories used within the presented work for classifying resilience metrics are presented in detail, constituting the framework used for analysing resilience metrics.
In the past years, a plethora of metrics have been designed for the purpose of assessing resilience of water distribution systems. To inspect _how_ the metrics approach the assessment of resilience, the current section presents a framework to classify them according to:
* system functions addressed (cf. Section 3.2)
* system properties addressed (cf. Section 3.2)
* dependence on time (cf. Section 4.1)
* mathematical characteristics (cf. Section 4.2)
* quantification type (cf. Section 4.3)
* scope of the metric (cf. Section 4.4)
An important distinction is that the first two categories refer to what characteristics of the system are addressed, while the rest of the categories are characteristics of the metrics themselves. Hence, the metric _assesses_ a function or a property that a system _has_, but the metric _is_ time-dependent or _has_ certain mathematical characteristics.
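As a concrete illustration, the categorisation of a single metric under this framework can be recorded as a small data structure. The Python sketch below is illustrative only: the field names mirror the framework categories, and the example metric and its values are hypothetical rather than taken from the review data.

```python
from dataclasses import dataclass
from typing import Optional, Set, Tuple

@dataclass
class MetricRecord:
    """Categorisation of one resilience metric under the framework of Section 4."""
    name: str
    functions: Set[str]      # subset of {"monitor", "react", "learn", "anticipate"}
    properties: Set[str]     # subset of {"baseline functionality", "redundancy", "recovery"}
    time_dependent: bool     # dependence on time (Section 4.1)
    quantification: str      # "graph-theoretic" | "performance-based" | "score-based" (Section 4.3)
    composite: bool = False  # composed of multiple weighted metrics
    interval: Optional[Tuple[float, float]] = None  # e.g. (0.0, 1.0); None if unbounded

# Hypothetical example: a performance-based reaction metric.
example = MetricRecord(
    name="example reaction metric",
    functions={"react"},
    properties={"recovery", "baseline functionality"},
    time_dependent=True,
    quantification="performance-based",
    interval=(0.0, 1.0),
)
```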
As the system functions and properties have already been presented in detail in Section 3.2, this section will focus on the remaining categories of the framework.
### Metrics According to Their Dependence on Time
Resilience metrics can be differentiated based on whether they do or do not consider development of the functional performance of the WDS in time [8; 10; 24]. In this framework, the terms _time-independent_ and _time-dependent_ are used instead of static and dynamic [10] (as the metric itself is not static or dynamic) or structural and functional [11] (as metrics assessing
structural or functional characteristics of a system can still either depend or not depend on time).
_Time-independent_ resilience metrics aim to assess resilience without considering the development of the selected quantity in time and tend to focus on topology of the system and characteristics of its components.
_Time-dependent_ resilience metrics account for the development of the functional performance or another selected quantity in time.
It is important to distinguish between time dependence as the property of resilience versus the property of the resilience metrics. In the understanding of the authors of this paper, it is the metrics that can be either time-dependent or time-independent, not resilience itself. Some authors speak of "static resilience" [32], which would suggest its time independence. However, the authors of this paper are of the opinion that in such case, it would be more appropriate to discuss whether resilience is _time-invariant_. This discussion is, however, beyond the scope of the present work.
Figure 2: Visualisation of the classification of metrics
### Metrics According to Their Mathematical Characteristics
Resilience metrics are defined on various intervals. Unlike open intervals, closed intervals with an optimal value suggest that a WDS can achieve absolute resilience. However, no scientific consensus exists about whether this is possible. The main reason for this is that the resilience scholarship tends to think of resilient systems with regard to any (reasonable) critical events, not to a specific set of them [33], and that it is impossible to account for all of these in the resilience analysis. Moreover, it is also disputed whether resilience is a continuous or a Boolean property: whether a system can be _only a little resilient_ or whether it either is resilient or is not.
Resilience metrics are developed with the goal of being able to compare various configurations of a single system or separate systems with one another. Resilience metrics normalised to a closed interval (such as \([0,1]\)) suggest that comparability within the system as well as between various systems is possible. Non-normalised metrics make comparison between separate systems more difficult.
### Metrics According to Their Quantification Type
Resilience metrics use different types of quantification. Cassottana et al. differentiate between _graph-theoretic_ and _performance-based_ resilience metrics [34]. _Graph-theoretic_ resilience metrics are based on measures developed in graph theory [34], such as betweenness centrality or shortest paths. As WDSs can be modelled as mathematical graphs, these metrics are a suitable tool for their resilience assessment. Graph-based metrics often aim to express resilience in terms of values of each node or link. _Performance-based_ resilience metrics assess resilience based on a system output characterising the performance of the system [34]. For example, they express the ratio of functional performance to a predefined reference value, such as the ratio between supply and demand during a critical event or between the available energy and the required energy. A further quantification type are _score-based_ resilience metrics, which rely on a qualitative or semi-qualitative assessment according to certain criteria, using e.g. a 5-point scale (from "very good" to "very bad"). Some metrics can also be composed of multiple weighted metrics - these will be referred to as _composite metrics_. This approach is recommended by Hollnagel for assessing the resilience of systems in general, as he disputes that resilience is a quantity which can be captured by a single measurement [21].
### Metrics According to Their Scope
The key advantage of using metrics is to have a relative assessment of a certain property of a system - either with regard to its own states or with regard to other systems. With the help of resilience metrics, specifically, it should be possible to distinguish whether a new state of the system is more resilient than an older one, but also whether one system is more resilient than another. Accordingly, metrics are classified with regard to whether they are evaluated _(i)_ for different states of one system for comparing the resilience of those two states, and _(ii)_ for different systems for comparing the resilience of those two systems.
Metrics also differ in their generality with regard to critical events. By definition, a system is resilient independent of a critical event, i.e. the resilient system definition from Section 3.1 should hold for all reasonable critical events. Accordingly, metrics are classified with regard to whether they are evaluated in view of critical events affecting the system as well as _which_ and _how many_ different types of critical events are considered.
## 5 Literature Search Protocol
The categorisation of currently existing resilience metrics using the framework presented above is based on a systematic search. Following the guidelines developed under the PRISMA concept (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), the following query in the Web-of-Science database was performed on March 10, 2022:
_resilien* AND (metric* OR indicator* OR quantitative* OR index OR indices) AND water AND (distribution* OR supply OR network* OR infrastructure*)_. Only papers written in English were included in the study. The workflow is shown in Figure 3.
The initial search led to 1279 records. The titles of these publications were screened manually, after which 965 papers were filtered out, yielding 314 records. Subsequently, the abstracts underwent thorough screening. Whenever an abstract stated that a newly developed or adapted resilience metric was proposed or discussed within the paper, the paper was considered for further reading. Through this process, 185 papers were filtered out, leading to 129 papers.
Finally, full-text screens of all of the 129 papers were performed in order to assess whether the paper contains metrics that are presented as resilience metrics, and whether the metrics are newly introduced or adopted from previous work. In this step, 90 papers were filtered out, resulting in 39 papers that contained suitable metrics. Most of the papers were published after 2014, as can be seen in the histogram in Figure 4. Since some papers contained more than one metric, 50 resilience metrics were found through this search. Since several resilience metrics known to the authors were not captured by the search query, they were added manually (9 metrics). This leads to a total of 59 resilience metrics. The initial list of publications and all filtering steps can be followed with the help of the table provided in the supplementary material, cf. Section 9.

Figure 3: Flowchart representing the step-wise filtering process according to the PRISMA guidelines.
The resilience metrics were categorised using the framework introduced in Section 4 and subsequently used to answer the research questions from Section 1.
## 6 Results - Analysis of the Literature Review
### How do existing metrics assess resilience?
Most metrics (46; 78%) assess resilience based on the performance of the system, i.e. on system output (such as delivered head or volume flow). 4 (7%) metrics are score-based, evaluating resilience of a system using a score system, and 9 (15%) metrics are based on approaches from graph theory. There are 15 composite metrics that combine multiple metrics using normalisation and weighting factors. Composite metrics can be composed of metrics of one quantification type or combine multiple ones (e.g. graph-theoretical and score-based).

Figure 4: A histogram of papers found in the literature search based on year of publication. After 2014, an increase in the number of publications is noticeable.
From a temporal perspective, the review shows that there are 34 (58%) time-independent resilience metrics and 25 (42%) time-dependent resilience metrics.
All graph-theoretical metrics are time-independent. The score-based metrics are predominantly time-independent, and the performance-based metrics are split roughly evenly between time-dependent and time-independent.
In total, 35 metrics use normalisation to a certain interval, most commonly \([0,1]\). In most cases, the optimal value is 1. In other cases, there is no upper or lower bound to resilience.
### To what extent do the existing metrics assess resilience with regard to the functions and properties of resilient systems?
There is a strong tendency to address anticipation (34; 58%) and reacting (26; 44%) rather than monitoring (1; 2%) and learning (1; 2%), even though monitoring and learning are considered vital resilience functions.

Figure 5: Metrics by resilient system functions and properties they address. Most metrics only assess either anticipating or reacting. Baseline functionality (bf), redundancy (red) and recovery (rec) are addressed by about 30% of the metrics each.
The vast majority of metrics assess only one function (57; 97%). Only one metric assesses two functions and one assesses three; no metric assesses all four. A thorough assessment of resilience, considering all resilience functions, is thus missing.

Properties of resilient systems - baseline functionality, redundancy and recovery - are addressed by 20, 18 and 21 metrics, respectively (about 30% each). Similarly to the functions, most metrics that address properties of resilient systems only address one of them (27; 46%). 13 (22%) metrics consider 2 properties, and two consider all three.
To assess the relationship between the characteristics of the metrics and those of the systems, as well as between the functions and properties addressed, the Pearson correlation coefficient was calculated for selected data categories, see the correlation matrix in Figure 6. The results show that there is a strong positive correlation between time-dependence and the react function, and between time-independence and the anticipate function, meaning that metrics that assess reaction tend to be time-dependent, while metrics that assess anticipation tend to be time-independent. Assessing anticipation and being graph-theoretical, as well as assessing reaction and being performance-based, correlate moderately. Assessing the recovery property correlates strongly with being time-dependent. The property redundancy correlates moderately with being graph-theoretical.
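The correlation analysis itself is straightforward to reproduce once the categorisation table is encoded as binary indicators. A minimal sketch, using toy rows instead of the actual 59-metric table (which is linked in Section 9), could look as follows:

```python
import pandas as pd

# Toy stand-in for the binary categorisation table: one row per metric,
# one column per category; 1 means the metric falls into that category.
df = pd.DataFrame(
    {
        "react":          [1, 1, 0, 0, 1],
        "anticipate":     [0, 0, 1, 1, 0],
        "time-dependent": [1, 1, 0, 0, 1],
        "recovery":       [1, 0, 0, 0, 1],
    }
)

# Pairwise Pearson correlations between categories, as visualised in Figure 6.
corr = df.corr(method="pearson")
print(corr.round(2))
```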
### How general are the existing resilience metrics with regard to different critical events?
According to the resilience understanding in Section 3, a resilient system can keep its minimum functionality in any (reasonable) critical event. Hence, the metrics should aim for independence from the type of critical events.
The results of this study show that only about 15% of the reviewed metrics are independent of critical events (Figure 7). Figure 7 also shows that most metrics focus on a specific subset of critical events, and most commonly only on one (pipe failure - 21, change in demand - 2, change in supply - 5). Two metrics consider "any component failure", which is a relatively general category that can include pump failure, pipe failure, valve failure and other component failures. A commonly occurring combination is between change in demand and change in supply (8), as well as between pipe failure and change in demand (7). Only a single metric combines three critical events.
With this limited view, it can be argued that the metrics assess robustness of the system with regard to pipe failure or change in demand/supply, rather than its resilience.
11 metrics from this review have been used to assess different systems and make a comparison between them. The networks used for resilience assessment are also stated in the dataset linked in Section 9: except the benchmark network Net3 which was used in 3 cases, no pattern with regard to the usage of networks can be observed.
Figure 6: Pearson correlation matrix between the data categories "monitor", "react", "learn", "anticipate", "time-independent", "time-dependent", "graph-theoretical", "performance-based", "score-based", "composite", "baseline functionality", "redundancy", "recovery". Positive values (blue) and negative values (red) suggest positive and negative correlation, respectively.
### Clustering Reviewed Metrics
Hierarchical clustering has been performed on the reviewed metrics along the categories "system functions addressed", "system properties addressed", "dependence on time", and "quantification". More details on the clustering algorithm are provided in Appendix A. The results from the distance matrix are plotted in Figure 8, forming 5 clusters (CLs). The clusters can be characterised as follows:
* CL1: reaction metrics considering recovery
* CL2: reaction metrics not considering recovery
* CL3: performance-based anticipation metrics
* CL4: non-performance-based anticipation metrics
* CL5: score-based resilience metrics considering all properties
Figure 7: Left: Metrics based on whether they are independent of critical events. Only 15.3 % of metrics are independent. Right: Numbers of metrics based on which critical events they can capture. Most metrics can only capture one; combinations of change in supply and change in demand, as well as of pipe failure and change in demand are relatively common.
Figure 8: Dendrogram showing all reviewed resilience metrics grouped into 5 clusters (CL1-5). For assignment of metrics to clusters in text form, consult Table B.1.
### Categorisation of Selected Resilience Metrics for WDS
Below, selected metrics characteristic for each of the clusters specified in Section 6.4 are presented. The full list of reviewed metrics is presented in Appendix B.
#### 6.5.1 Reaction Metrics Considering Recovery (CL1)
In CL1, all metrics are performance-based, assess reaction and consider recovery. Many of them also address baseline functionality.
Hashimoto et al. define the _system's average recovery rate_ as a measure of resilience [48]. For the system output \(X_{t}\) at time \(t\) which can be in a satisfactory state \(S\) or failure state \(F\), the metric can be expressed as follows:
\[\gamma=\frac{P(X_{t}\in S\:\text{and}\:X_{t+1}\in F)}{P(X_{t}\in F)}=\frac{ \varrho}{1-\alpha}, \tag{1}\]
where \(\varrho\) denotes the probability \(P\) of the system transitioning from the set \(S\) in the period \(t\) to the set \(F\) in the period \(t+1\), and \(\alpha\) denotes the probability of being in a satisfactory state: \(\alpha=P(X_{t}\in S)\)[48].
The metric is designed to aid in determining design and operating policies for WDSs [48]. In the understanding of Hashimoto et al., resilience describes "how quickly a system is likely to recover or bounce back from failure once failure has occurred" [48]. This measure assesses reaction, namely how likely the system is to transition back to a satisfactory state after failure. Hence, it considers recovery after the failure, and also requires baseline functionality to define the satisfactory/failure state. It is a time-dependent, performance-based metric. Hashimoto et al. do not prescribe what quantity the system output \(X_{t}\) should be expressed with, but in their case study they work with volume.

The system's average recovery rate is defined on the interval \([0,1]\) with 1 being the optimum value. The use of the metric is illustrated on a water reservoir with seasonal changes in demand and supply.
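Under an assumption of stationarity, the probabilities in Eq. (1) can be estimated from empirical frequencies of a Boolean time series of system states. The following sketch is a minimal illustration, not part of the original work:

```python
import numpy as np

def average_recovery_rate(satisfactory) -> float:
    """Estimate gamma (Eq. 1) from a Boolean series where satisfactory[t]
    is True if X_t is in the satisfactory set S, False if in F.
    Assumes at least one failure state, i.e. alpha < 1."""
    s = np.asarray(satisfactory, dtype=bool)
    alpha = s.mean()                 # alpha = P(X_t in S)
    rho = np.mean(s[:-1] & ~s[1:])   # rho = P(X_t in S and X_{t+1} in F)
    return float(rho / (1.0 - alpha))

# Toy series with two short failure episodes.
series = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
print(average_recovery_rate(series))
```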
Zhuang et al. define their resilience metric _integral water service availability_ as "the percentage of water supplied to customers over a system failure period" [47]. It can be expressed as the ratio of delivered flowrate \(Q\) (supply) to required flowrate \(Q^{*}\) (demand) over the selected period of time when the critical event occurred [47]. At system scale, it is formulated as
\[R_{\rm sys}=\frac{\sum_{t=1}^{T}\sum_{i=1}^{N}Q_{i,t}}{\sum_{t=1}^{T}\sum_{i=1}^{N }Q_{i,t}^{*}} \tag{2}\]
with \(T\) being the time duration under system failure and \(N\) the number of nodes [47]. Using Monte-Carlo simulations, Zhuang et al. aim to assess the performance of the studied networks under various conditions. They aim to investigate what the critical factors affecting system resilience are. Moreover, they demonstrate how the expected costs for improving the WDS resilience can be determined. Zhuang et al. understand resilience as "the ability to recover from a failure to a satisfactory state" [47]. They consider the duration of recovery an important aspect of resilience. Considering availability (2) to be a resilience metric, they argue that it also provides an insight into the intensity of the critical event. This metric is a typical time-dependent resilience metric. It considers the time interval after the critical event, reflecting the focus on recovery. As such, it is capable of assessing the reacting function of the system. The authors use the metric to study the WDS performance when the current mode of functioning is adjusted, i.e. under operator intervention or under adaptive pump operation. Baseline functionality is reflected in the denominator (required volume flow \(Q^{*}\)). It is a performance-based metric defined on an interval between 0 and 1 with 1 being the optimal value at which the demand can be satisfied during the entire time duration under system failure. Performance of the system is measured using volume flowrate \(Q\). The metric was applied to a medium-sized example network representing a primarily residential community. Comparisons are made between different reaction strategies. Zhuang et al. use the metric to study WDS resilience under randomly generated changes in demand and pipe failures.
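Given time series of delivered and required flow rates per node, Eq. (2) reduces to a ratio of two sums. A minimal sketch follows; the numbers are toy values, not taken from the original study:

```python
import numpy as np

def integral_water_service_availability(Q, Q_star) -> float:
    """System-scale availability R_sys (Eq. 2); Q and Q_star are arrays of
    shape (T, N): delivered and required flow rate per time step and node."""
    return float(np.sum(Q) / np.sum(Q_star))

# Toy failure period: 3 time steps, 2 nodes; supply drops in step 2.
Q_star = np.array([[10.0, 5.0], [10.0, 5.0], [10.0, 5.0]])
Q      = np.array([[10.0, 5.0], [ 6.0, 2.0], [ 9.0, 5.0]])
print(integral_water_service_availability(Q, Q_star))  # ~0.82
```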
Farahmandfar and Piratla define a _flow-based resilience metric_[45] derived from Todini [69]:
\[\text{FR}=\frac{\sum_{t=1}^{td}\sum_{i=1}^{N_{n}}\left[\left(\sum_{j=1}^{N_{i} }(1-P_{fj})\right)q_{i,t}^{*}(h_{i,t}-h_{i,t}^{*})\right]}{4\times\sum_{t=1}^{ td}\sum_{i=1}^{N_{n}}q_{i,t}^{*}h_{i,t}^{*}}. \tag{3}\]
Here, \(q_{i,t}^{*}\) is the design demand at node \(i\) in time step \(t\), \(h_{i,t}^{*}\) is the minimum required total head at node \(i\) in time step \(t\), and \(h_{i,t}\) is the actual total head at node \(i\) in time step \(t\). The factor \((1-P_{f})\) represents pipe reliability using pipe fragility \(P_{f}\), which is computed for each pipe \(j\) as
\[P_{fj}=1-\exp(-\text{RR}_{j}\cdot L_{j}), \tag{4}\]
with repair rate \(\text{RR}_{j}\) of pipeline \(j\) and length \(L_{j}\) of pipeline \(j\). The quantities are summed over the number of time steps in the demand pattern \(td\), the total number of nodes in the WDS \(N_{n}\), and the node degree \(N_{i}\) of node \(i\). The metric is used for making decisions in rehabilitation schemes with the objective of enhancing resilience within budgetary constraints. Farahmandfar and Piratla state that resilience refers to the ability of WDSs to "withstand stresses, mitigate failures, minimise consequences, and recover quickly in the face of abnormalities such as earthquakes" [45, p. 1]. The metric assesses the reaction of the WDS. Similarly to Todini's resilience index, on which it is based, it considers the properties baseline functionality and redundancy (measuring the surrogate energy of the system). Summing the values over time, however, makes it possible to also account for recovery. It is a time-dependent metric. It is a performance-based metric, with performance being expressed in terms of power proportional to the product of the volume flow and head, \(Qh\). The metric is usually constrained to the interval \([0,1]\) with 1 being its optimum value. The metric is evaluated for a single network and considers the scenario of pipe failures due to seismic events.
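Eqs. (3) and (4) can likewise be evaluated directly once the hydraulic quantities are available, e.g. from a simulation. The sketch below assumes the per-node pipe-reliability sums have already been computed; all names and numbers are illustrative:

```python
import numpy as np

def pipe_fragility(repair_rate, length):
    """Pipe fragility P_f per pipe (Eq. 4)."""
    return 1.0 - np.exp(-np.asarray(repair_rate) * np.asarray(length))

def flow_based_resilience(q_star, h, h_star, node_reliability) -> float:
    """FR (Eq. 3). q_star, h, h_star have shape (td, Nn); node_reliability
    has shape (Nn,) and holds sum_j (1 - P_fj) over the pipes at node i."""
    numerator = np.sum(node_reliability * q_star * (h - h_star))
    denominator = 4.0 * np.sum(q_star * h_star)
    return float(numerator / denominator)

# Toy case: 2 time steps, 2 nodes.
q_star = np.array([[10.0, 5.0], [10.0, 5.0]])
h      = np.array([[35.0, 33.0], [34.0, 32.0]])
h_star = np.array([[30.0, 30.0], [30.0, 30.0]])
rel    = np.array([1.8, 1.6])  # per-node sums of (1 - P_fj), toy values
print(flow_based_resilience(q_star, h, h_star, rel))
```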
#### 6.5.2 Reaction Metrics Not Considering Recovery (CL2)
In CL2, all metrics are performance based and assess reaction. None of them consider recovery. They are predominantly time-dependent, and a few consider baseline functionality.
Huizar et al. propose a resilience metric called _user severity_, defined as "the minimum ratio of supply to demand, or minimum functionality, during the analysis period" [35]. For the \(i\)-th user, it is defined as
\[\text{US}_{i}=\min_{T_{0}\leq t\leq T}\{f_{i,t}\}, \tag{5}\]
where \(T_{0}\) and \(T\) are the beginning and the end of the analysis period and \(f_{i,t}=S_{i,t}/D_{i,t}\) is the user functionality, defined as the ratio of supply \(S\) to demand \(D\) at time \(t\)[35].
The metric was developed alongside other metrics for the purpose of measuring water system security. Huizar et al. understand resilience as "the ability to mitigate and recover from failure" [35]. User severity assesses the reaction of the system to a failure. It is time-independent and performance-based; the performance is expressed in terms of volume supplied. For non-zero demand \(D\), user severity can attain values between 0 and 1, with 1 being the optimal value. The metric is sensitive to changes in supply and demand.
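As Eq. (5) is simply the minimum supply-to-demand ratio over the analysis period, it can be computed in one line; the sketch below uses hypothetical values:

```python
import numpy as np

def user_severity(supply, demand) -> float:
    """US_i (Eq. 5): minimum supply/demand ratio over the analysis period
    for one user; the demand is assumed non-zero throughout."""
    return float(np.min(np.asarray(supply) / np.asarray(demand)))

print(user_severity([5.0, 3.0, 4.5], [5.0, 5.0, 5.0]))  # 0.6
```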
#### 6.5.3 Performance-Based Anticipation Metrics (CL3)
In CL3, all metrics assess anticipation and are performance-based. They are predominantly time-independent and consider baseline functionality or redundancy, in a few cases also recovery.
Todini's resilience index for looped water distribution networks [69], which is one of the most commonly used resilience metrics for WDSs, belongs to this cluster. It was formulated as
\[I_{\mathrm{r}}=\frac{\sum_{i=1}^{n_{\mathrm{n}}}q_{i}^{*}(h_{i}-h_{i}^{*})}{ \sum_{k=1}^{n_{\mathrm{r}}}Q_{k}H_{k}+\sum_{j=1}^{n_{\mathrm{p}}}(P_{j}/\gamma )-\sum_{i=1}^{n_{\mathrm{n}}}q_{i}^{*}h_{i}^{*}}, \tag{6}\]
with \(q_{i}^{*}\) and \(h_{i}^{*}\) being the design demand and head required at the node \(i\), \(h_{i}\) being the available head at the node \(i\), \(Q_{k}\) and \(H_{k}\) being the flow from and total head in the \(k\)-th reservoir, \(P_{j}\) being the power introduced to the network by the \(j\)-th pump, \(\gamma\) being the specific weight of water, and \(n_{\mathrm{n}}\), \(n_{\mathrm{r}}\), \(n_{\mathrm{p}}\) being the number of nodes, reservoirs and pumps, respectively [69].
Todini understands resilience as "capability of overcoming stress or failure conditions" or "the capability to allow to overcome local failures and to guarantee the distribution of water to users" [69].
The resilience index is a time-independent resilience metric. It is performance-based, comparing the required power with the available power in the WDS. It can have values between 0 and 1, with 1 being the optimum value. Todini illustrates the use of the index on optimisation problems with three simplified looped networks with the aim of minimum cost design. He uses the resilience index in the design phase in order to develop a heuristic optimisation approach to arrive at a Pareto set of solutions in the cost vs. resilience space.
While Todini makes comparisons in the resilience index for a specific WDS, he does not make comparisons between systems. The resilience index is independent of critical events. It assesses the anticipating function of the system. It is not necessary for the critical event to occur in the analysis or in the real world in order to be able to assess it.
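For a given hydraulic state, Eq. (6) is a direct aggregation over nodes, reservoirs and pumps. A minimal sketch with a hypothetical two-node network follows; the values are chosen only for illustration:

```python
import numpy as np

def todini_resilience_index(q_star, h, h_star, Q_res, H_res, P_pumps, gamma) -> float:
    """Todini's resilience index I_r (Eq. 6)."""
    surplus = np.sum(q_star * (h - h_star))                      # delivered surplus power
    available = np.sum(Q_res * H_res) + np.sum(P_pumps) / gamma  # power entering the network
    required = np.sum(q_star * h_star)                           # minimum required power
    return float(surplus / (available - required))

# Two demand nodes, one reservoir, no pumps (toy values).
print(todini_resilience_index(
    q_star=np.array([10.0, 5.0]), h=np.array([35.0, 32.0]),
    h_star=np.array([30.0, 30.0]), Q_res=np.array([15.0]),
    H_res=np.array([40.0]), P_pumps=np.array([]), gamma=9810.0))  # 0.4
```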
Altherr et al. bring the _buffering capacity_ into resilience engineering [23]. This metric was first described by Woods as "the size or kinds of disruptions the system can absorb or adapt to without a fundamental breakdown in performance or in the system's structure" [20]. Altherr et al. defined buffering capacity as "a measure for the amount of structural change after which the fulfillment of a predetermined required minimum of functional performance
can still be guaranteed" [70]. In WDSs, buffering capacity can be expressed using discrete values - the number \(k\) of components that can fail while the minimum of functional performance can still be guaranteed. The system is then called _k-resilient_.
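Determining the buffering capacity amounts to finding the largest \(k\) for which every combination of \(k\) simultaneous component failures still leaves the minimum of functional performance guaranteed. A brute-force sketch is given below; real studies would replace the predicate by a hydraulic simulation and typically use optimisation instead of enumeration:

```python
from itertools import combinations

def buffering_capacity(components, still_functional) -> int:
    """Largest k such that the minimum of functional performance is
    guaranteed for *every* failure of k components (k-resilience).
    still_functional(failed) must return True if the system still meets
    its predetermined minimum with the given components failed."""
    k = 0
    while k < len(components):
        if all(still_functional(set(failed))
               for failed in combinations(components, k + 1)):
            k += 1
        else:
            break
    return k

# Toy example: four parallel pumps of which at least two must run.
pumps = ["P1", "P2", "P3", "P4"]
print(buffering_capacity(pumps, lambda failed: len(failed) <= 2))  # 2
```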
#### 6.5.4 Non-Performance-Based Anticipation Metrics (CL4)
In CL4, all metrics are time-independent and assess anticipation. They are graph-theoretical or score-based. Composite anticipation metrics also belong to CL4. Most of these metrics address redundancy.
Herrera et al. base their resilience index on a common graph-theoretical algorithm, K-shortest paths [77]. The index is also extended to WDSs sectorised into district metered areas. The measure of resilience is first computed for each node by determining the K-shortest paths between the node and each source. To account for hydraulics, the paths are weighted by energy loss associated with the flow resistance along the path. The resilience index of Herrera et al. for a node \(i\) is mathematically defined as
\[I(i)=\sum_{s=1}^{S}\left(\frac{1}{K}\sum_{k=1}^{K}\frac{1}{r(k,s)}\right) \tag{7}\]
with \(S\) being the total number of sources, \(K\) the number of shortest paths and \(r(k,s)\) being the measure of the energy loss for the path \(k\) to source \(s\) [55]. It can be expressed for example as follows:
\[r(k)=\sum_{m=1}^{M}f(m)\frac{L_{m}}{D_{m}} \tag{8}\]
with \(M\) being the number of pipes on the path \(k\), \(f\) the friction factor and \(L\) and \(D\) the pipe length and diameter, respectively [55]. As Lorenz and Pelz show, the index can be additionally weighted by relative node demand \(q/Q\) where \(q\) is the node demand and \(Q\) the total demand in the network [78].
For a district metered area, Herrera et al. propose aggregating the resilience indices of each node \(j\) into a single resilience index for all \(n\) nodes using the trimmed mean [79]:
\[I^{*}=\sum_{j=1}^{n^{*}}\frac{I(j)}{n^{*}} \tag{9}\]
in which nodes of very high or very low values are discarded before computing the mean (\(n^{*}<n\)) [55]. The purpose of the work of Herrera et al. is to develop a resilience assessment framework. The index is shown to be consistent with other alternative approaches. Herrera et al. understand resilience as "the ability of a system to maintain and adapt its operational performance in the face of failures and other adverse conditions" [55]. The resilience index of Herrera et al. addresses anticipation. It is a measure of the system's expected behaviour during a critical event. It quantifies redundancy in connectivity and supply [55]. It is not capable of considering recovery. It is a time-independent metric that only considers topology of a network and hydraulic properties.
Herrera et al. validate the metric on the C-Town network [80] and they use it to analyse the resilience of two networks with 4820 and 106,115 nodes, respectively. Comparisons are made between the resilience of various DMAs, not between the networks. The considered disruption event is pipe failures.
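With the WDS modelled as a graph whose edges carry the resistance term \(f\,L/D\) of Eq. (8), Eq. (7) can be sketched with networkx. This is an illustrative reimplementation under simplifying assumptions (undirected graph, at least \(K\) simple paths per source), not the authors' code:

```python
import itertools
import networkx as nx

def herrera_index(G, node, sources, K=5, weight="resistance"):
    """Resilience index I(i) of Eq. (7); each edge of G carries a
    'resistance' attribute approximating f * L / D (Eq. 8)."""
    index = 0.0
    for s in sources:
        paths = itertools.islice(
            nx.shortest_simple_paths(G, node, s, weight=weight), K)
        for path in paths:
            r = sum(G[u][v][weight] for u, v in zip(path, path[1:]))
            index += 1.0 / (K * r)
    return index

# Toy network: a triangle with a single source "S".
G = nx.Graph()
G.add_edge("S", "a", resistance=2.0)
G.add_edge("S", "b", resistance=3.0)
G.add_edge("a", "b", resistance=1.0)
print(herrera_index(G, "a", sources=["S"], K=2))  # 0.375
```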
Balaei et al. developed a framework for assessing resilience, leading to the _water supply system resilience indicator_ that aggregates several weighted and scaled metrics:
\[R=\frac{1}{\sum_{j=1}^{N}w_{j}}\sum_{j=1}^{N}w_{j}i_{j}^{2}, \tag{10}\]
where the weights are denoted by \(w\) and the indicators by \(i\) for \(N\) indicators in total [57]. The indicators are scaled by the respective maximum value. The indicators are "operational representations of serviceability, quality, or a characteristic of a system" [57] that satisfy the criteria of validity, sensitivity, objectivity and simplicity [57]. A specific set of indicators must be chosen for each use case under the consideration of data availability. Examples of indicators provided in the paper are physical vulnerability, knowledge of the emergency response plan, social participation rate, GDP per capita and median household's income. The purpose of the resilience indicator and the proposed framework is to assess seismic resilience based on data and information from past earthquakes. The framework is aimed at researchers, planners and decision makers. The metric considers the anticipating function. It is a time-independent metric. It is a score-based metric. It has been evaluated on one example without comparisons. It has a strong focus on earthquakes but as the choice of indicators has to be determined for each individual system, there is potential to adjust it to other critical events as
well.
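Once the indicators have been selected, scaled and weighted, Eq. (10) is a weighted mean of squared indicator values. A minimal sketch with three hypothetical, already-scaled indicators:

```python
import numpy as np

def wss_resilience_indicator(scaled_indicators, weights) -> float:
    """R (Eq. 10): weighted mean of the squared, scaled indicators i_j."""
    i = np.asarray(scaled_indicators, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * i**2) / np.sum(w))

print(wss_resilience_indicator([0.8, 0.5, 0.9], [2.0, 1.0, 1.0]))  # 0.585
```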
#### 6.5.5 Score-based Resilience Metrics Considering All Properties (CL5)
Among the metrics found in the presented study, CL5 contains only one score-based metric that considers three system functions (monitor, react and anticipate) and all three properties. It is thus the closest to being a resilience metric.
The _water provision resilience (WPR)_ was proposed by Milman and Short [38]. Rather than giving a single equation for calculating it, WPR is an aggregate of points that the considered WDS scores in the categories supply, finances, infrastructure, service provision, water quality, and governance. In each of these categories, there are different numbers of criteria for which a binary decision is made whether they are fulfilled or not, each fulfilled criterion yielding a point. The sum of points gives the score of WPR. The purpose of this metric is not, as the authors state, to "measure the adaptive capacity related to catastrophic events" [38, p. 756]. Instead, the focus lies on measuring "the ability of a city or water district to maintain or improve access to safe water" [38, p. 760]. In their understanding of resilience, the authors refer to [81, p. 259], understanding resilience as "the capacity of the system 'to absorb disturbance and re-organise while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks'", emphasising that the definition includes the ability of the given system "to adapt to stresses and changes and to transform into more desirable states" [38, p. 759]. The variety of criteria included for the resilience evaluation allows the metric to cover three of the resilience functions: monitor, react and anticipate. The properties baseline functionality, redundancy and recovery are also considered within the criteria. As the criteria include the development of the WDS within the following 50 years, the metric is time-dependent. It is a score-based and composite metric. In total, 36 criteria are included, i.e. the maximum achievable value of WPR is 36, the minimum being 0. The metric is used by Milman and Short to assess the resilience of the WDS of three municipal areas and to compare the resilience of these. Critical events considered in the criteria are change in demand, change in supply, and water resource contamination.
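The aggregation behind WPR is a simple count of fulfilled binary criteria. The sketch below illustrates the mechanism only; the criteria names are invented and do not reproduce Milman and Short's list of 36:

```python
# Hypothetical yes/no criteria; each fulfilled criterion yields one point.
criteria = {
    "multiple independent sources available": True,
    "emergency funding secured": False,
    "50-year demand projection maintained": True,
}
wpr_score = sum(criteria.values())
print(f"WPR score: {wpr_score} of {len(criteria)} example criteria")
```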
## 7 Discussion
In a systematic review of resilience metrics for WDSs, the presented results show that most metrics, regardless of what their characteristics are, only focus on a single function and/or property of resilient systems, rather than on their resilience as a whole. The review bridges a gap in research about resilience metrics for WDSs as it provides a comprehensive framework for categorising metrics and juxtaposes them with a general understanding of resilience.
Most often, the functions "anticipate" and "react" are assessed. While generality with regard to critical events is often stressed when speaking about resilience, it is not reflected in the metrics, which tend to focus on specific critical events such as pipe failure and changes in demand or supply. Moreover, the fact that a metric is defined on a specific interval with an optimal value suggests that the system can achieve perfect resilience. Once the system achieves it, there is no more room for improvement with regard to resilience. It is, however, questionable whether such a state is achievable for real-world networks, and whether the resilience metrics are really capable of capturing this.
Strictly speaking, the presented assessment framework shows that there is no metric among the existing metrics reviewed that can be called a _resilience metric_, as no metric addresses all 4 functions of a resilient system. This is not to say, however, that the metrics are not useful for certain purposes, even for those related to resilience assessment, or e.g. optimisation for resilience. Resilience is a complex concept that is difficult to capture by quantitative and even qualitative metrics. Instead, the authors propose that a stronger differentiation should be made among metrics related to resilience assessment in WDSs: for example, to speak of anticipation metrics or reaction metrics rather than of resilience metrics. This will help prevent conceptual stretching of the term resilience, already criticised nowadays for being a buzzword or an umbrella term particularly difficult to work with in academia [7; 82]. The presented framework can be used for this purpose.
The design of the presented framework depends strongly on the selected definition of resilience. As no scientific consensus with regard to the definition of resilience exists, the authors have selected a definition that is well-known and general enough to cover most other definitions present in literature. The assessment of functions and properties by metrics has been a challenging task during the review that is necessarily prone to a certain amount of subjectivity.
By providing both the data and the code used for the analysis (Sec. 9), the authors hope to lay ground for a discussion of the framework and resilience understanding in the domain of WDS.
While resilience is difficult to capture by metrics, the authors are of the opinion that the understanding of resilience should not be limited in order to make it easier to quantify, but rather that new metrics should be developed in order to improve its quantifiability. In particular, the functions "learn" and "monitor" are largely ignored by the existing metrics for WDS. Existing frameworks such as the Resilience Analysis Grid [21] or water provision resilience [38] can be used as a guideline; while having a thorough resilience understanding, these frameworks lack quantitative metrics and are thus currently difficult to implement in studies commonly performed in the field of resilience engineering of WDSs, such as optimisation problems or Monte Carlo simulations.
The presented results also prepare ground for further research in the domain of WDS resilience. A big challenge remains to systematically incorporate climate change effects into resilience metrics for WDS, as climate change is the cause of critical events that affect water distribution. In some cases, it will be necessary to extend the system boundary of WDS to include water resource management and/or other infrastructure that can be used for delivering water to citizens, such as the transport network. Moreover, like other disciplines [7; 83; 19], WDS research should also take a critical look at resilience, evaluating the weaknesses and strengths of the concept and reflecting these in the metrics.
## 8 Conclusion
The presented publication assessed the alignment between a general understanding of resilience in water distribution systems and the metrics used for their resilience assessment. For this purpose, a systematic review of resilience metrics for WDSs was performed, showing that:
* most metrics are performance-based rather than graph-theoretical or score-based, and time-independent rather than time-dependent (RQ1)
* most metrics, regardless of what their characteristics are, only focus on a single function and/or property of resilient systems, rather than on their resilience as a whole (RQ2)
* most metrics focus on a specific set of critical events, lacking the generality inherent in the understanding of resilience (RQ3)
To summarise and answer the title question, the results show that resilience metrics do not really assess resilience, but rather specific functions and properties of systems which can make them resilient. To prevent further conceptual stretching of the term resilience, the authors propose that a stronger differentiation is made among metrics related to resilience assessment in WDSs: for example, to speak of anticipation metrics or reaction metrics rather than of resilience metrics.
## 9 Data and Software Availability
The data for this study (a table with categorisation of all reviewed metrics as well as a table with the literature search procedure) is available under [https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3900](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3900).
The corresponding code in the form of Jupyter notebooks is available under [https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3901](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3901).
## 10 Author Contributions
**Michaela Lestakova**: Conceptualization, Methodology, Software, Formal analysis, Investigation, Writing - Original Draft, Writing - Review & Editing, Visualization, Data Curation, Project administration; **Kevin T. Logan**: Conceptualization, Methodology, Investigation, Writing - Original Draft, Writing - Review & Editing, Visualization, Data Curation, Project administration; **Imke-Sophie Rehm**: Conceptualization, Methodology, Investigation, Writing - Original Draft, Writing - Review & Editing, Project administration; **John Friesen**: Conceptualization, Methodology, Investigation, Resources, Writing - Original Draft, Writing - Review & Editing, Project administration, Supervision; **Peter F. Pelz**: Funding Acquisition, Supervision
## 11 Acknowledgements
This work has been funded by the LOEWE initiative (Hesse, Germany) within the emergenCITY center, by the LOEWE exploration project "Uniform detection and modeling of slums to determine infrastructure needs" as
well as by the KSB Stiftung Stuttgart, Germany within the project "Antizipation von Wasserbedarfsszenarien für die Städte der Zukunft".
The authors would like to thank Yali Wu for her help with performing the correlation analysis of the reviewed resilience metrics and Katharina Henn for the literature search.
## Appendix A Clustering
The hierarchical clustering was performed in Python utilising the methods scipy.cluster.hierarchy.linkage (method: 'ward', metric: 'euclidean') and scipy.cluster.hierarchy.fcluster (number of clusters: 5, criterion: 'maxclust').
The dendrogram was created with the method dendrogram from scipy.cluster.hierarchy.
The code, including an Anaconda environment file with all necessary Python packages, is available in the Jupyter Notebook provided in Section 9.
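For reference, the clustering step can be sketched as follows, using toy rows in place of the full binary categorisation table from Section 9 (column names as in Table B.1):

```python
import pandas as pd
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage

columns = ["M", "R", "L", "A", "TI", "TD", "GT", "PB", "SB", "CM", "BF", "RD", "RC"]
rows = {
    "reaction metric with recovery":    [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1],
    "reaction metric without recovery": [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0],
    "performance-based anticipation":   [0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    "graph-theoretical anticipation":   [0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0],
}
df = pd.DataFrame.from_dict(rows, orient="index", columns=columns)

Z = linkage(df.values, method="ward", metric="euclidean")
labels = fcluster(Z, t=5, criterion="maxclust")  # 5 clusters, as in the paper
print(dict(zip(df.index, labels)))
dendrogram(Z, labels=df.index.tolist())          # plotting requires matplotlib
```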
| metric | CL |
| --- | --- |
| integral water service availability [47] | 1 |
| system's average recovery rate [48] | 1 |
| rapidity of recovery [23] | 1 |
| asset-based resilience [39] | 1 |
| supply curve and total cost [37] | 2 |
| user severity [35] | 2 |
| user volumetric severity [35] | 2 |
| Pressure-Dependent Fire Demand Metric [36] | 2 |
| graceful degradation [23] | 2 |
| Pressure-Dependent Demand Metric (Normal) [36] | 2 |
| Pressure-Dependent Demand Metric (Hydrant) [36] | 2 |
| leakage-related power dissipation [71] | 3 |
| ratio for excess pressure beyond design pressure [76] | 3 |
| Shannon's entropy function [63] | 3 |
| Network Resilience Deviation [64] | 3 |
| system-wide hydraulic uniformity index [65] | 3 |
| criticality score [68] | 3 |
| ratio for service capacity at threshold pressure to full service capacity [76] | 3 |
| integrative resilience framework [66] | 3 |
| degree of service capacity reduction with increased pressure [76] | 3 |
| Topology-Based Resilience Metric (Seismic) [45] | 3 |
| reserve capacity [62] | 3 |
| Potentially Recoverable Energy Index (PREI) [74] | 3 |
| combined network entropy-resiliency index [67] | 3 |
| combined entropy-resiliency index [67] | 3 |
| seismic Resilience Metric [73] | 3 |
| buffering capacity [70] | 3 |
| diameter-sensitive flow entropy [72] | 3 |
| resilience index [69] | 3 |
| criticality-demand concentration [60] | 3 |
| maintainability [35] | 3 |
| probabilistic resilience index (PRI) [75] | 3 |
| topological metric [36] | 3 |
| water supply system seismic resilience indicator [57] | 4 |
| resilience index [55] | 4 |
| relative number of connected node pairs [56] | 4 |
| demand-adjusted entropic degree [53] | 4 |
| resilience metric [61] | 4 |
| Bridge Ratio index [54] | 4 |
| water flow edge betweenness centrality [6] | 4 |
| composite resilience metric [60] | 4 |
| overall system resilience [59] | 4 |
| resilience indicator [58] | 4 |

Table B.1: Cluster assignment (CL) of the reviewed resilience metrics. The full categorisation along all framework categories (functions and properties addressed, dependence on time, quantification type) is provided in the dataset linked in Section 9. |
2307.04746 | Classical Observables from the Exponential Representation of the
Gravitational S-Matrix | By combining the KMOC-formalism with the exponential representation of the
scattering matrix we show that the two-body scattering angle is given by the
corresponding matrix element of the exponential representation. This holds to
all orders in the Post-Minkowskian expansion of gravity when restricted to the
conservative sector. Once gravitational radiation is taken into account new
terms correcting this relationship appear starting at fourth Post-Minkowskian
order. A systematic expansion of the momentum kick is provided to any order,
thus illustrating the iterative structure that partly recycles terms from lower
orders in the Post-Minkowskian expansion. We provide explicit results for this
computation to fourth Post-Minkowskian order, the first complete calculation at
this order based on scattering amplitudes. | Poul H. Damgaard, Elias Roos Hansen, Ludovic Planté, Pierre Vanhove | 2023-07-10T17:53:44Z | http://arxiv.org/abs/2307.04746v2 | # Classical Observables from the Exponential Representation of the Gravitational S-Matrix
###### Abstract
By combining the KMOC-formalism with the exponential representation of the scattering matrix we show that the two-body scattering angle is given by the corresponding matrix element of the exponential representation. This holds to all orders in the Post-Minkowskian expansion of gravity when restricted to the conservative sector. Once gravitational radiation is taken into account new terms correcting this relationship appear starting at fourth Post-Minkowskian order. A systematic expansion of the momentum kick is provided to any order, thus illustrating the iterative structure that partly recycles terms from lower orders in the Post-Minkowskian expansion. We provide explicit results for this computation to fourth Post-Minkowskian order, the first complete calculation at this order based on scattering amplitudes.
Keywords: Scattering Amplitudes, General Relativity
## 1 Introduction
While the Post-Minkowskian expansion of general relativity [1; 2; 3; 4; 5] has been highly successful in solving the relativistic two-body problem by means of modern amplitude techniques, new and puzzling features seem to appear at every new order considered. The second-order Post-Minkowskian solution of Westpfahl [6] was easily reproduced by amplitude methods [3] but already the first solution to third Post-Minkowskian order [7; 8] displayed an unphysical divergence in the scattering angle that could not be understood within the conservative framework used. The resolution was to be found when including radiation reaction of the gravitational field [9; 10; 11; 12; 13]. Remarkably, soft gravitons cancelled the unwanted divergence in the scattering angle, thereby reproducing the classic result of Amati, Ciafaloni, and Veneziano [14]. Moreover, to this third Post-Minkowskian order a standard quantum field theoretic evaluation of the full classical part of the gravitational two-to-two scattering amplitude precisely yields the correct scattering angle [15; 16], the simple resolution being found in the need to include _all_ classical pieces from the two-loop scattering amplitude. As explained in the latter two references, those classical parts can be systematically identified through the so-called velocity cuts of the scattering amplitude: delta-function contributions that emerge from combinations of propagators with the Feynman \(i\epsilon\)-prescription. For reviews of these ideas see, \(e.g.\), ref. [17; 18].
Among the many lessons learned at that third Post-Minkowskian order has been the need to understand how to subtract terms that diverge in the classical limit in order to yield unambiguously those parts of the scattering amplitude that remain finite when \(\hbar\to 0\). These delicate cancellations have their root in the conventional use of the Born expansion of quantum field theory. Parametrizing the \(S\)-matrix as \(\hat{S}=1+i\hat{T}/\hbar\), unitarity of \(\hat{S}\) leads to the optical theorem through
\[\hat{T}-\hat{T}^{\dagger}\ =\ \frac{i}{\hbar}\hat{T}\hat{T}^{\dagger}. \tag{1}\]
This relation shows how the perturbative expansion of the \(T\)-matrix to any given order in the coupling constant cross-talks with lower-order terms and parts of those will have increasingly higher inverse powers of \(\hbar\). This is the origin of the eikonal exponentiation in impact parameter space [19]. It is also the origin of the need to introduce the well-known Born subtractions, whether implemented by effective field theory methods [4] or, equivalently, by solving the Lippmann-Schwinger equation associated with the corresponding relativistic Hamiltonian [20].
Inspired by the different subtraction scheme behind the calculation of the conservative part to fourth Post-Minkowskian order of ref. [21], an alternative representation of the \(S\)-matrix was suggested in ref. [22]. In this representation, an Hermitian scattering matrix, denoted \(N\), is introduced through the operator identification
\[\hat{S}\ =\ \exp\Bigl{[}i\hat{N}/\hbar\Bigr{]}. \tag{2}\]
It was conjectured in ref. [22] that two-to-two matrix elements of the operator \(\hat{N}\), after a transform to impact-parameter space, yields the radial action and hence, by simple differentiation, also the scattering angle. This was verified explicitly to third Post-Minkowskian order [22] and later checked, in the probe limit, up to fifth Post-Minkowskian order [23]. More recently, the exponential representation has also been checked against the fourth Post-Minkowskian order calculation of ref. [24; 21] for arbitrary masses [18] but not including all radiation effects. There is thus substantial evidence that the exponential representation of the gravitational \(S\)-matrix captures the classical dynamics of the conservative sector (and even parts of radiative effects) but a proof has so far still been lacking. One purpose of this paper is to provide such a proof.
Matrix elements of the exponential representation of the \(S\)-matrix resemble, after transforming to impact parameter space, the quantum field theoretic eikonal [25; 26; 27; 28; 29; 30; 31; 32; 33]. We stress, however, that these two representations are quite distinct beyond leading order. The \(\hat{N}\)-operator encapsulates by construction the semi-classical limit of the \(S\)-matrix and its two-to-two matrix element is therefore expected to yield the corresponding radial action. Because \(\hat{N}\) is already in the exponent there are no superclassical contributions to it and all corrections to the radial action will be of quantum mechanical origin (and therefore not of interest here). The \(\hat{N}\)-operator is thus more closely related to the WKB approximation than to the eikonal1.
Footnote 1: For a recent comprehensive review of the eikonal formalism, see ref. [34].
Two other formalisms will be central to the understanding of gravitational two-body scattering in the Post-Minkowskian expansion. One is the KMOC formalism [35; 36; 37; 38; 39; 40], the other is the Post-Minkowskian worldline formalism [41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57]. The KMOC framework is, after appropriate reductions to the point-particle limit, intimately related to the amplitude approach to gravitational scattering. Indeed some of the first resolutions of the puzzles at third Post-Minkowskian order came from expressing KMOC observables in the form of cut amplitudes by reverse unitarity [39; 40]. The worldline approach differs conceptually in that the classical limit \(\hbar\to 0\) can be taken from the outset, thus eliminating the need for subtractions altogether. In the end, the resulting integrals that must be evaluated are nevertheless very similar and they are, not surprisingly, very closely related to the integrals that need to be evaluated in the amplitude-based approach. It becomes particularly clear in terms of the velocity cut method where the correspondence up to third Post-Minkowskian order has been shown to be one-to-one [16]. This is not surprising in view of the fact that both formalisms amount to solving the classical Einstein field equations by Green function methods.
New issues have appeared at fourth Post-Minkowskian order of the gravitational expansion. These are related to both angular momentum loss and energy loss during the scattering process, losses which are due to the gravitationally radiated angular
momentum and energy [58]. There has been much progress on how to incorporate these effects in the eikonal formalism [31; 32; 33] but so far a complete computation has only been reported in work using the worldline formalism [54; 55]. In order to tackle dissipation at this order, the worldline calculations have been rephrased in terms of the closed time paths of the Schwinger-Keldysh kind [52; 53]. This leads to a doubling of degrees of freedom, the use of retarded (or advanced) propagators, and in general a much larger set of master integrals due to less symmetry of the integrands. It is interesting to contrast this with the KMOC formalism which provides \(S\)-matrix expressions for the same quantities but based on standard amplitudes with Feynman propagators. In a recent paper [59] we have demonstrated the equivalence between the KMOC and worldline formulations in the classical limit. While this non-trivial relationship has been established on general grounds, it is interesting that dissipative effects are accounted for quite differently in the two formulations due to the difference between Feynman and retarded/advanced propagators.
In this paper we combine the KMOC-formalism with the exponential representation of the \(S\)-matrix. We shall argue that such a combination is more economical than the conventional one based on the linear \(T\)-matrix representation of the \(S\)-matrix. It leads to very compact formulas for classical observables in gravity based on amplitudes and it clarifies the inclusion of radiative effects in a simple diagrammatic fashion. Importantly, because the KMOC formalism makes no distinction between conservative and dissipative contributions, classical observables are extracted in a universal manner from the matrix elements of the \(\hat{N}\)-operator by retaining all classical pieces. As in the full amplitude computation at third Post-Minkowskian order [16] there is no need to separate different contributions. At any order in the expansion one only has to extract all classical terms of the matrix elements of \(\hat{N}\) and derived quantities thereof.
While equivalent to the worldline formulation in the Keldysh-Schwinger path integral, the formulas we shall present here have a structure that is straightforward to implement in terms of modern amplitude methods. Having different consistent formulations available is clearly an advantage and there is now a variety of approaches available for the Post-Minkowskian expansion (see also refs. [60; 61; 62; 63; 64; 65; 66; 67; 68]). This is particularly important when the Post-Minkowskian expansion enters the new uncharted territory of higher orders.
We shall illustrate the simplicity of the combination of the \(\hat{N}\)-operator with the KMOC formalism by computing the full momentum kick (and hence scattering angle) to fourth Post-Minkowskian order. As we shall show, the required basis of master integrals is significantly smaller than that used in refs. [54; 55] due to the fact that we need only use Feynman propagators. Nevertheless, our results agree.
## 2 The exponential representation of the gravitational \(S\)-matrix
In this section we briefly review the exponential operator representation of the \(S\)-matrix. We first fix conventions. We consider the Einstein-Hilbert action of two massive scalars (of masses \(m_{1}\) and \(m_{2}\)) coupled to gravity,
\[S_{EH}=\int d^{4}x\sqrt{-g}\Bigg{[}\frac{R}{16\pi G}+\frac{1}{2}\partial_{\mu} \phi_{1}\partial^{\mu}\phi_{1}+\frac{1}{2}\partial_{\mu}\phi_{2}\partial^{\mu} \phi_{2}-\frac{m_{1}^{2}}{2}\phi_{1}^{2}-\frac{m_{2}^{2}}{2}\phi_{2}^{2}\Bigg{]}\,. \tag{1}\]
Newton's constant is denoted by \(G\) and \(R\) is the Ricci scalar. We use a mostly-minus metric with flat Minkowski space at infinity, \(\eta_{\mu\nu}\equiv\mathrm{diag}(1,-1,-1,-1)\), and expand the full metric as \(g_{\mu\nu}(x)\equiv\eta_{\mu\nu}+\sqrt{32\pi G}h_{\mu\nu}(x)\).
In this section we write everything in the standard language of _in-out_ states and consider the two-to-two scattering with \(p_{1}\) and \(p_{2}\) denoting incoming momenta and \(p_{1}^{\prime}\) and \(p_{2}^{\prime}\) outgoing momenta with \(p_{1}^{2}={p_{1}^{\prime}}^{2}=m_{1}^{2}\) and \(p_{2}^{2}={p_{2}^{\prime}}^{2}=m_{2}^{2}\). In the center of mass frame with
\[p_{1}=(E_{1}(p),\vec{p}),\qquad p_{2}=(E_{2}(p),-\vec{p}) \tag{2}\]
we have
\[(p_{1}+p_{2})^{2}=(p_{1}^{\prime}+p_{2}^{\prime})^{2}=m_{1}^{2}+m_{2}^{2}+2m_{ 1}m_{2}\gamma,\quad\gamma\equiv\frac{p_{1}\cdot p_{2}}{m_{1}m_{2}}\,, \tag{3}\]
\[(p_{1}-p_{1}^{\prime})^{2}=(p_{2}^{\prime}-p_{2})^{2}\equiv q^{2}=-\vec{q}^{2}\,. \tag{4}\]
In ordinary scattering theory we wish to compute \(S\)-matrix elements. Here, instead, we shall focus on matrix elements of the Hermitian operator \(\hat{N}\) defined by eq. (2), in particular, for two-to-two scattering,
\[N(\gamma,q^{2})=\langle p_{1}^{\prime},p_{2}^{\prime}|\hat{N}|p_{1},p_{2} \rangle. \tag{5}\]
This should be contrasted with the standard Born expansion of the \(S\)-matrix based on
\[\hat{S}=1+\frac{i}{\hbar}\hat{T} \tag{6}\]
and the usual scattering amplitude \(M(p_{1},p_{2},p_{1}^{\prime},p_{2}^{\prime})\) defined by
\[\langle p_{1}^{\prime},p_{2}^{\prime}|\hat{T}|p_{1},p_{2}\rangle\ =\ (2\pi\hbar)^{D} \delta^{(D)}(p_{1}+p_{2}-p_{1}^{\prime}-p_{2}^{\prime})M(p_{1},p_{2},p_{1}^{ \prime},p_{2}^{\prime})\, \tag{7}\]
in dimensions \(D=4-2\epsilon\). As detailed in ref. [22] it is straightforward to expand the exponential representation and derive the infinite sequence of relations between operators \(\hat{N}\) and \(\hat{T}\) in perturbation theory. In the two-to-two sector the operators have perturbative expansions that we can write compactly as
\[\hat{T} = G\hat{T}_{0}+G^{3/2}\hat{T}_{0}^{\rm rad}+G^{2}\hat{T}_{1}+G^{5/2} \hat{T}_{1}^{\rm rad}+G^{3}\hat{T}_{2}+\cdots\] \[\hat{N} = G\hat{N}_{0}+G^{3/2}\hat{N}_{0}^{\rm rad}+G^{2}\hat{N}_{1}+G^{5/2 }\hat{N}_{1}^{\rm rad}+G^{3}\hat{N}_{2}+\cdots \tag{8}\]
from which we can straightforwardly solve for the \(\hat{N}_{i}\) in terms of the \(\hat{T}_{i}\) by expanding the exponential. Integer powers of \(G\) describe interactions with an even number of graviton vertices while half-integer powers describe interactions with an odd number of gravitons. The separation of operators with superscript rad refers only to the associated half-integer power of \(G\). We find it useful diagrammatically to make this distinction (see also below) but it has no further meaning beyond this. There are clearly also radiative terms in the even powers.
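Explicitly, expanding \(\exp[i\hat{N}/\hbar]\) of eq. (2) and matching powers of \(G\) gives, at the first few orders (spelled out here for convenience as a consistency check of the pattern that culminates in eq. (9) below),

\[\hat{N}_{0}=\hat{T}_{0}\,,\qquad\hat{N}_{0}^{\rm rad}=\hat{T}_{0}^{\rm rad}\,,\qquad\hat{N}_{1}=\hat{T}_{1}-\frac{i}{2\hbar}\hat{T}_{0}^{2}\,,\qquad\hat{N}_{1}^{\rm rad}=\hat{T}_{1}^{\rm rad}-\frac{i}{2\hbar}\left(\hat{T}_{0}\hat{T}_{0}^{\rm rad}+\hat{T}_{0}^{\rm rad}\hat{T}_{0}\right).\]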
At order \(G^{4}\) the relation reads
\[\hat{N}_{3}=\hat{T}_{3}-\frac{i}{2\hbar}(\hat{N}_{1}^{\rm rad} \hat{N}_{0}^{\rm rad}+\hat{N}_{0}^{\rm rad}\hat{N}_{1}^{\rm rad})-\frac{i}{2 \hbar}\hat{T}_{1}^{2}-\frac{i}{2\hbar}(\hat{T}_{0}\hat{T}_{2}+\hat{T}_{2}\hat{ T}_{0})\\ -\frac{1}{12\hbar^{2}}[\hat{N}_{0}^{\rm rad},[\hat{N}_{0}^{\rm rad },\hat{N}_{0}]]-\frac{1}{3\hbar^{2}}(\hat{T}_{0}^{2}\hat{T}_{1}+\hat{T}_{0} \hat{T}_{1}\hat{T}_{0}+\hat{T}_{1}\hat{T}_{0}^{2})+\frac{i}{4\hbar^{3}}\hat{T} _{0}^{4}\,. \tag{9}\]
and it is elementary to generalize this to higher orders. Note that we have combined some of the \(T\)-matrices into \(N\)-matrices on the right-hand side, thus making the cancellation among the superclassical pieces associated with those manifest. This also aids in understanding the separation into real and imaginary parts. We recall that \(\hat{N}\) is Hermitian, so that two-to-two scalar matrix elements of that operator are real.
The obvious way to evaluate matrix elements of the \(\hat{N}\) operator by conventional field theory methods is to insert a complete set of momentum eigenstates between all products of \(T\)-matrices and truncate to the desired order in \(G\). Then matrix elements can be evaluated by standard Feynman rules of scattering theory. Here the complete set of states is spanned by two massive scalar particles: one of momentum \(k_{1}\) and mass \(m_{1}\), the other of momentum \(k_{2}\) and mass \(m_{2}\), together with any number \(n\) of massless gravitons. We denote such states by \(|k_{1},k_{2};\ell_{1},\ldots,\ell_{n}\rangle\). These states are normalized relativistically according to
\[\langle k_{1},k_{2};\ell_{1},\ldots,\ell_{n}|k_{1}^{\prime},k_{2 }^{\prime};\ell_{1}^{\prime},\ldots,\ell_{m}^{\prime}\rangle=\delta_{n,m} \prod_{i=1}^{2}2E_{k_{i}}(2\pi\hbar)^{D-1}\delta^{(D-1)}(k_{i}-k_{i}^{\prime}) \times\prod_{i=1}^{n}2E_{\ell_{i}}\,(2\pi\hbar)^{D-1}\delta^{(D-1)}(\ell_{i}- \ell_{i}^{\prime}), \tag{10}\]
and the completeness relation is given by
\[1=\sum_{n=0}^{\infty}\frac{1}{n!}\int\prod_{i=1}^{2}d\Pi_{k_{i}}\prod_{r=1}^{ n}d\Pi_{\ell_{r}}|k_{1},k_{2};\ell_{1},\ldots,\ell_{n}\rangle\langle k_{1},k_{2}; \ell_{1},\ldots\ell_{n}|. \tag{11}\]
including a sum over graviton helicities. Here \(d\Pi\) is the standard Lorentz invariant phase space measure, \(i.e.\),
\[d\Pi_{k_{i}}=\frac{d^{D}k_{i}}{(2\pi\hbar)^{D-1}}\delta^{+}((k_{i})^{2}-m_{i}^{2 })=\frac{d^{D}k_{i}}{(2\pi\hbar)^{D-1}}\theta(k_{i}^{0})\delta((k_{i})^{2}-m_{i }^{2})\qquad\text{for}\qquad i=1,2 \tag{12}\]
for the massive states, and similarly for the massless gravitons.
We now insert the completeness relation between all operator products to get the three-loop relation between matrix elements of the \(\hat{N}\) and the \(\hat{T}\) operators
\[\langle p_{1}^{\prime},p_{2}^{\prime}|\hat{N}_{3}|p_{1},p_{2}\rangle=\langle p _{1}^{\prime},p_{2}^{\prime}|\hat{T}_{3}|p_{1},p_{2}\rangle+L_{0}+L_{1}+L_{2} \tag{13}\]
which we then expand in powers of \(G\). Keeping track of this overall power of \(G\), we can view it as an expansion in the number of gravitons connecting the operators. First, with just the massive states inserted,
\[L_{0}=\cdots+\frac{i}{4}\cdots \tag{14}\]
Next, with the inclusion of one graviton,
\[L_{1}=-\frac{i}{2}\cdots \tag{15}\]
as well as one graviton inserted twice:
\[L_{2}=\frac{1}{6}\cdots \tag{16}\]
Note that the completeness relation enforces the inclusion of graph topologies that are partly disconnected, such as a graviton line skipping one internal operator, as well as Compton-type contributions in the last line, where scalars skip an internal operator. Such intermediate states begin to contribute for the first time at fourth Post-Minkowskian order because up to and including third Post-Minkowskian order they have no support on physical kinematics. To fourth order in \(G\) no further insertions of graviton states are possible when evaluating \(N\)-matrix elements through use of eq. (9).
Although written as an apparent expansion in \(1/\hbar\), one must keep in mind that additional factors of \(\hbar\) (of both positive and negative powers) arise when computing matrix elements. Since matrix elements of \(\hat{N}\) are manifestly free of superclassical contributions, the subtractions on the right-hand side of eq. (9) ensure cancellations among all superclassical terms arising from the \(\hat{T}\)-matrix, here including terms of order \(1/\hbar^{3}\). We shall show in section 3 how this implies the cancellation of the superclassical terms when evaluating observables in the KMOC formalism.
One advantage of the exponential representation is that we can ignore these superclassical cancellations that are guaranteed to occur anyway and thus focus exclusively on the pieces that have a well-defined \(\hbar\to 0\) limit. The systematic way to extract this classical limit of matrix elements of the \(\hat{N}\)-operator is by means of velocity cuts. This will be described next.
### The classical limit and velocity cuts
The notion of velocity cuts [15; 16; 23] is computationally useful for extracting the classical limit. The basic idea is to combine massive propagator lines in pairs, each having denominators that are linear in the external momenta but with opposite signs, thus effectively reducing to delta-function constraints that are linear in momenta. Ignoring soft momentum corrections, this puts the massive lines on-shell and removes one momentum integration, thus enforcing the first link to the classical worldline formalism.
The classical limit \(\hbar\to 0\) of the massive amplitude is obtained by scaling the momentum transfer \(q=\hbar\underline{q}\) with \(\underline{q}\) fixed, and scaling the loop integration momenta \(\ell_{i}=\hbar|\underline{q}|\,\bar{\ell}_{i}\). The amplitude will involve two massive propagators,
\[\frac{1}{\left(\ell+p_{r}\right)^{2}-m_{r}^{2}+i\varepsilon}=\frac{1}{2\ell \cdot p_{r}+\ell^{2}+i\varepsilon}\qquad r=1,2 \tag{17}\]
where \(\ell\) is a generic loop momentum. In the classical limit we have
\[\frac{1}{2\hbar|\underline{q}|\ell\cdot p_{r}+\hbar^{2}|\underline{q}|^{2} \ell^{2}+i\varepsilon}\simeq\frac{1}{2\hbar|\underline{q}|}\frac{1}{\ell\cdot p _{r}+i\varepsilon}, \tag{18}\]
so that the \(\ell^{2}\) part is subleading and the massive propagators effectively become linear. Combinations of such linear propagators using
\[\lim_{\varepsilon\to 0}\left(\frac{1}{2\ell\cdot p_{r}+\ell^{2}+i \varepsilon}+\frac{1}{2\ell\cdot p_{r}-\ell^{2}+i\varepsilon}\right)=-2i\pi \delta(2\ell\cdot p_{r}) \tag{19}\]
lead to delta-function insertions in the loops. The higher-order \(O(\hbar^{2})\) terms are subleading and do not contribute in the classical limit.
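As a quick numerical illustration of eq. (19) (ours, not part of the original derivation), one can integrate the sum of the two linear propagators against a smooth test function and watch the result approach \(-2\pi i f(0)\) as \(\varepsilon\to 0\); a minimal sketch:

```python
import numpy as np

# Test the velocity-cut identity: as eps -> 0,
#   1/(x + i*eps) + 1/(-x + i*eps) = -2i*eps/(x^2 + eps^2) -> -2*pi*i * delta(x),
# integrated against the Gaussian test function f(x) = exp(-x^2), with f(0) = 1.
x = np.linspace(-50.0, 50.0, 2_000_001)
dx = x[1] - x[0]
f = np.exp(-x**2)

for eps in (1e-1, 1e-2, 1e-3):
    kernel = 1.0/(x + 1j*eps) + 1.0/(-x + 1j*eps)
    print(eps, np.sum(f*kernel)*dx)   # tends to -2*pi*i ~ -6.2832j
```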
which then has to be evaluated between _in_-states at \(t=-\infty\). Inserting the exponential representation of the \(\hat{S}\) operator of eq. (2) together with the crucial property of Hermiticity of \(\hat{N}\),
\[\Delta\hat{O}=e^{-i\hat{N}/\hbar}\,\hat{O}\,e^{i\hat{N}/\hbar}-\hat{O}. \tag{3.5}\]
allows us to rewrite eq. (3.5) by means of the Campbell identity that expands the two exponentials as an infinite sum of nested commutators,
\[\Delta\hat{O}=\sum_{n\geq 1}\frac{(-i)^{n}}{\hbar^{n}n!}\underbrace{[\hat{N},[\hat{N},\ldots,[\hat{N},\hat{O}]]]}_{n\ \text{times}}. \tag{3.6}\]
This rewriting, which is where we use unitarity of the \(S\)-matrix, will play a crucial role in our all-order proofs because it displays the iterative structure of the KMOC formalism when combined with the exponential representation. It is convenient to define
\[\hat{A}^{\hat{O}}_{n}\equiv\frac{1}{\hbar^{n}}\underbrace{[\hat{N},[\hat{N},\ldots,[\hat{N},\hat{O}]]]}_{n\ \text{times}}. \tag{3.7}\]
The nested commutator structure implies the operator relation
\[\hat{A}^{\hat{O}}_{n}=\hat{A}^{\hat{A}^{\hat{O}}_{n-1}}_{1}=\hat{A}^{\hat{A}^{\cdots^{\hat{A}^{\hat{O}}_{1}}}_{1}}_{1}. \tag{3.8}\]
Importantly, when we evaluate matrix elements by means of insertions of complete sets of states, this iterative structure is preserved (since all we do is to insert factors of unity).
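The expansion (3.6) and its iterative structure are easy to test numerically. A minimal sketch (ours), with \(\hbar=1\) and random Hermitian matrices standing in for \(\hat{N}\) and \(\hat{O}\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return (a + a.conj().T)/2

N = 0.1*random_hermitian(6)   # small norm: fast convergence of the series
O = random_hermitian(6)

# Left-hand side of eq. (3.5), with hbar = 1.
lhs = expm(-1j*N) @ O @ expm(1j*N) - O

# Right-hand side of eq. (3.6): nested commutators, truncated at n = 12.
rhs = np.zeros_like(O)
ad, fact = O, 1.0
for n in range(1, 13):
    ad = N @ ad - ad @ N      # one more application of [N, . ]
    fact *= n
    rhs = rhs + (-1j)**n/fact*ad

print(np.max(np.abs(lhs - rhs)))   # ~1e-16
```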
Repeating the steps described in ref. [35], we can insert the above expression in the KMOC-expression and take the limit of localized massive states. The result is
\[\langle\Delta\hat{O}\rangle(p_{1},p_{2},b)=\int\frac{d^{D}q}{(2\pi)^{D-2}}\delta(2p_{1}\cdot q-q^{2})\delta(2p_{2}\cdot q+q^{2})e^{ib\cdot q/\hbar}\langle p^{\prime}_{1}p^{\prime}_{2}|\Delta O|p_{1}p_{2}\rangle \tag{3.9}\]
where \(p^{\prime}_{1}=p_{1}-q\) and \(p^{\prime}_{2}=p_{2}+q\). In this form it is clear that a first step is the evaluation of the matrix element \(\langle p^{\prime}_{1}p^{\prime}_{2}|\Delta O|p_{1}p_{2}\rangle\), followed by the shown Fourier transform to \(b\)-space.
One noticeable feature of the KMOC-formalism for (non-spinning) black-hole scattering is that it always entails the evaluation of matrix elements of an operator (18) between two-particle scalar states. For an observable corresponding to an Hermitian operator \(\hat{O}\) the corresponding \(\Delta O\) is clearly Hermitian as well. Two-particle scalar matrix elements of this \(\Delta O\) are then real, as follows from time-reversal symmetry. The reality of the expectation value is preserved by the insertion of the completeness relation since it just amounts to the insertion of factors of unity.
### Cancellation of superclassical terms: the conservative sector
In this section we first show how the \(N\)-operator formalism provides a simple way to demonstrate the cancellation of the superclassical pieces when restricted to the conservative sector. We next give a general formula valid to all orders in \(G\) for a general operator in section 3.1.1 and a vector operator in section 3.1.2. The application to the momentum kick \(\Delta P_{1}\) is pursued in section 3.1.3.
#### 3.1.1 The classical limit
We start with a general operator \(\hat{O}\) and consider the term with \(n=1\) in (3.6)
\[\mathcal{A}_{1}^{O}(p_{1},p_{2},q)=\frac{1}{\hbar}\langle p_{1}^{\prime},p_{2 }^{\prime}|[\hat{N},\hat{O}]|p_{1},p_{2}\rangle \tag{3.10}\]
and we first analyze the conservative case where gravitons are not included in the set of inserted on-shell states. This is graphically represented as
where the red line indicates where we insert the intermediate two-particle state, corresponding to
\[\mathcal{A}_{1}^{O}(p_{1},p_{2},q)=\frac{1}{\hbar}\int d\Pi_{q_{1 }}d\Pi_{q_{2}}\Big{(}\langle p_{1}^{\prime},p_{2}^{\prime}|\hat{N}|q_{1},q_{2 }\rangle\langle q_{1},q_{2}|\hat{O}|p_{1},p_{2}\rangle\\ -\langle p_{1}^{\prime},p_{2}^{\prime}|\hat{O}|q_{1},q_{2} \rangle\langle q_{1},q_{2}|\hat{N}|p_{1},p_{2}\rangle\Big{)}. \tag{3.11}\]
It is convenient to factor out overall energy-momentum conservation and write
\[\langle p_{1}^{\prime},p_{2}^{\prime}|\hat{N}|p_{1},p_{2}\rangle=N(\gamma,q^{2 })(2\pi\hbar)^{D}\delta(p_{1}^{\prime}+p_{2}^{\prime}-p_{1}-p_{2}) \tag{3.12}\]
and
\[\langle p_{1}^{\prime},p_{2}^{\prime}|\hat{O}|p_{1},p_{2}\rangle=O(p_{1},p_{2 },q)(2\pi\hbar)^{D}\delta(p_{1}^{\prime}+p_{2}^{\prime}-p_{1}-p_{2}). \tag{3.13}\]
We can use one of the energy-momentum conservation delta-functions to remove the integration variable \(q_{2}\). After defining \(k_{1}=q_{1}-p_{1}\) and using the scaled momenta \(\underline{q}\) and \(\underline{k}_{1}\) such that \(p_{1}^{\prime}=p_{1}-q=p_{1}-\hbar\underline{q}\), \(p_{2}^{\prime}=p_{2}+q=p_{2}+\hbar\underline{q}\), we change variables to get
\[\mathcal{A}_{1}^{O}(p_{1},p_{2},q)=\hbar\int\frac{d^{D} \underline{k}_{1}}{(2\pi)^{D-2}}\delta^{+}((p_{1}+\hbar\underline{k}_{1})^{2} -m_{1}^{2})\delta^{+}((p_{2}-\hbar\underline{k}_{1})^{2}-m_{2}^{2})\\ \times\Big{(}N(\gamma,\hbar^{2}(\underline{k}_{1}+\underline{q}) ^{2})O(p_{1},p_{2},-\hbar\underline{k}_{1})-O(p_{1}+\hbar\underline{k}_{1},p_ {2}-\hbar\underline{k}_{1},\hbar(\underline{q}+\underline{k}_{1}))N(\gamma, \hbar^{2}\underline{k}_{1}^{2})\Big{)}\\ \times(2\pi\hbar)^{D}\delta(p_{1}+p_{2}-p_{1}^{\prime}-p_{2}^{ \prime}). \tag{3.14}\]
Setting
\[\mathcal{A}_{1}^{O}(p_{1},p_{2},q)=A_{1}^{O}(p_{1},p_{2},q)(2\pi\hbar)^{D}\delta( p_{1}+p_{2}-p_{1}^{\prime}-p_{2}^{\prime}). \tag{3.15}\]
Changing variables \(\underline{k}_{1}\to-\underline{k}_{1}-\underline{q}\) in the second term of the sum gives
\[A_{1}^{O}(p_{1},p_{2},q)=\hbar\int\frac{d^{D}\underline{k}_{1}}{( 2\pi)^{D-2}}O(p_{1},p_{2},-\hbar\underline{k}_{1})N(\gamma,\hbar^{2}( \underline{k}_{1}+\underline{q})^{2})\\ \times\delta^{+}((p_{1}+\hbar\underline{k}_{1})^{2}-m_{1}^{2}) \delta^{+}((p_{2}-\hbar\underline{k}_{1})^{2}-m_{2}^{2})\\ -\hbar\int\frac{d^{D}\underline{k}_{1}}{(2\pi)^{D-2}}O(p_{1}- \hbar(\underline{k}_{1}+\underline{q}),p_{2}+\hbar(\underline{k}_{1}+ \underline{q}),-\hbar\underline{k}_{1})N(\gamma,\hbar^{2}(\underline{k}_{1}+ \underline{q})^{2})\\ \times\delta^{+}((p_{1}-\hbar(\underline{k}_{1}+\underline{q}))^ {2}-m_{1}^{2})\delta^{+}((p_{2}+\hbar(\underline{k}_{1}+\underline{q}))^{2}-m _{2}^{2}). \tag{3.16}\]
Doing the small \(\hbar\) expansion of the integrand leads to
\[O(p_{1},p_{2},-\hbar\underline{k}_{1})\delta^{+}((p_{1}+\hbar \underline{k}_{1})^{2}-m_{1}^{2})\delta^{+}((p_{2}-\hbar\underline{k}_{1})^{ 2}-m_{2}^{2})\\ -O(p_{1}-\hbar(\underline{k}_{1}+\underline{q}),p_{2}+\hbar( \underline{k}_{1}+\underline{q}),-\hbar\underline{k}_{1})\delta^{+}((p_{1}- \hbar(\underline{k}_{1}+\underline{q}))^{2}-m_{1}^{2})\delta^{+}((p_{2}+ \hbar(\underline{k}_{1}+\underline{q}))^{2}-m_{2}^{2})\\ =\frac{2}{\hbar}((\underline{k}_{1}+\underline{q})\cdot \underline{k}_{1})O(p_{1},p_{2},-\hbar\underline{k}_{1})\Big{(}(\delta^{+})^ {\prime}(2p_{1}\cdot\underline{k}_{1})\delta^{+}(-2p_{2}\cdot\underline{k}_{ 1})+\delta^{+}(2p_{1}\cdot\underline{k}_{1})(\delta^{+})^{\prime}(-2p_{2} \cdot\underline{k}_{1})\Big{)}\\ +\frac{1}{\hbar}(\underline{k}_{1}^{\mu}+\underline{q}^{\mu})( \nabla^{\mu}O(p_{1},p_{2},-\hbar\underline{k}_{1}))\delta^{+}(2p_{1}\cdot \underline{k}_{1})\delta^{+}(-2p_{2}\cdot\underline{k}_{1}). \tag{3.17}\]
where we have introduced the derivative
\[\nabla_{\mu}[\mathcal{F}]\equiv\frac{\partial\mathcal{F}}{\partial p_{1}^{\mu} }-\frac{\partial\mathcal{F}}{\partial p_{2}^{\mu}}. \tag{3.18}\]
Consequently the \(\hbar\) expansion of \(A_{1}^{O}\) takes the form
\[A_{1}^{O}(p_{1},p_{2},q)=\int\frac{d^{D}\underline{k}_{1}}{(2 \pi)^{D-2}}N(\gamma,\hbar^{2}(\underline{k}_{1}+\underline{q})^{2})\\ \times(\underline{k}_{1}^{\mu}+\underline{q}^{\mu})\nabla_{\mu} \Big{(}O(p_{1},p_{2},-\hbar\underline{k}_{1}))\delta^{+}(2p_{1}\cdot \underline{k}_{1})\delta^{+}(-2p_{2}\cdot\underline{k}_{1})\Big{)}+\mathcal{O}( \hbar) \tag{3.19}\]
Here, crucially, \(N(\gamma,\hbar^{2}(\underline{k}_{1}+\underline{q})^{2})\) by construction has only classical and quantum parts. This means that for classical observables \(O\) the matrix element \(A_{1}^{O}\) will have a leading piece which is classical, followed by quantum corrections. There are no superclassical pieces in \(A_{1}^{O}\). By recursion it follows that this holds for \(A_{n}^{O}\) for any \(n\) as well.
Although the completeness relation has a positive energy constraint, this is automatically satisfied in the classical limit for the massive scalars of positive energy,
\[\delta^{+}((p_{1}-\hbar\underline{k}_{1})^{2}-m_{1}^{2})=\theta(p_{1}^{0}-\hbar\underline{k}_{1}^{0})\delta((p_{1}-\hbar\underline{k}_{1})^{2}-m_{1}^{2})\simeq\theta(p_{1}^{0})\delta(-2\hbar p_{1}\cdot\underline{k}_{1}). \tag{3.20}\]
To conclude, we have shown that the classical piece of \(A_{1}^{O}\) is given by
\[A_{1}^{O}(p_{1},p_{2},q)=\\ \int\!\!\frac{d^{D}k_{1}}{(2\pi)^{D-2}}N(\gamma,(k_{1}+q)^{2})(k_ {1}^{\mu}+q^{\mu})\nabla_{\mu}[O(p_{1},p_{2},-k_{1}))\delta(2p_{1}\cdot k_{1}) \delta(-2p_{2}\cdot k_{1})] \tag{3.21}\]
after setting \(\hbar=1\). Note that this is an all-order statement in \(G\). Iterating, it follows that all higher commutators and hence also the full expectation value are free of superclassical pieces when evaluated in the conservative sector.
#### 3.1.2 Vector operators
Let us now consider the application of the general iterative formula of eq. (3.21) to a special class of four-vector operators \(O^{\mu}(p_{1},p_{2},q)=\langle p^{\prime}_{1},p^{\prime}_{2}|\hat{O}^{\mu}|p_{ 1},p_{2}\rangle\) that decompose into longitudinal \(O_{\parallel}(\gamma,q^{2})\) and transverse \(O_{\perp}(\gamma,q^{2})\) parts as follows:
\[O^{\nu}(p_{1},p_{2},q)=O_{\parallel}((p_{1}+p_{2})^{2},q^{2})L^{\nu}+O_{\perp} ((p_{1}+p_{2})^{2},q^{2})q^{\nu}. \tag{3.22}\]
It is convenient to introduce the four-vector
\[L^{\mu}\ \equiv\ \frac{(m_{2}^{2}+m_{1}m_{2}\gamma)p_{1}^{\mu}-(m_{1}^{2}+m_{ 1}m_{2}\gamma)p_{2}^{\mu}}{m_{1}^{2}m_{2}^{2}(\gamma^{2}-1)} \tag{3.23}\]
which satisfies nice relations,
\[L\cdot p_{2}=1\,\quad L\cdot p_{1}=-1\,\quad b\cdot L=0\,\quad\nabla^{\mu}L_{ \mu}\ =\ \frac{1}{p_{\infty}^{2}}. \tag{3.24}\]
where we used that the impact parameter \(b^{\mu}\) lies in the plane of scattering and is orthogonal to both \(p_{1}^{\mu}\) and \(p_{2}^{\mu}\). Because \(L\cdot q=O(q^{2})\), we also have \(L\cdot q=0\) in \(q\)-space, before the Fourier transform to \(b\)-space. Since \(p_{1}\cdot q=-p_{2}\cdot q=\frac{q^{2}}{2}\), \(p_{1}\) and \(p_{2}\) are indeed orthogonal to \(q\) in the classical limit. Here,
\[p_{\infty}=\frac{m_{1}m_{2}\sqrt{\gamma^{2}-1}}{\sqrt{m_{1}^{2}+m_{2}^{2}+2m_ {1}m_{2}\gamma}} \tag{3.25}\]
The decomposition in (3.22) is clearly not valid for an arbitrary four-vector but it is satisfied by the momentum kick \(\langle\Delta P_{1}^{\mu}\rangle\) when evaluated in the conservative sector as we will do in section 3.1.3.
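The relations (3.24), together with \(L^{2}=-1/p_{\infty}^{2}\) (which is what will guarantee below that the conservative kick preserves the mass shell), can be verified symbolically in center-of-mass kinematics; a small sympy sketch (ours):

```python
import sympy as sp

m1, m2, g = sp.symbols('m1 m2 gamma', positive=True)

s = m1**2 + m2**2 + 2*m1*m2*g                  # (p1+p2)^2
E = sp.sqrt(s)
p_inf = m1*m2*sp.sqrt(g**2 - 1)/E              # eq. (3.25): CM momentum
E1 = (s + m1**2 - m2**2)/(2*E)
E2 = (s + m2**2 - m1**2)/(2*E)

# Four-vectors in the CM frame, metric diag(1,-1,-1,-1).
def dot(a, b):
    return a[0]*b[0] - sum(ai*bi for ai, bi in zip(a[1:], b[1:]))

p1 = [E1,  p_inf, 0, 0]
p2 = [E2, -p_inf, 0, 0]

# L^mu from eq. (3.23).
c1 = (m2**2 + m1*m2*g)/(m1**2*m2**2*(g**2 - 1))
c2 = (m1**2 + m1*m2*g)/(m1**2*m2**2*(g**2 - 1))
L = [c1*a - c2*b for a, b in zip(p1, p2)]

print(sp.simplify(dot(L, p2) - 1))             # 0, i.e. L.p2 = +1
print(sp.simplify(dot(L, p1) + 1))             # 0, i.e. L.p1 = -1
print(sp.simplify(dot(L, L) + 1/p_inf**2))     # 0, i.e. L^2 = -1/p_inf^2
```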
To evaluate the classical part of the first commutator \(A_{1}^{O^{\nu}}=\frac{1}{\hbar}\langle p^{\prime}_{1},p^{\prime}_{2}|[\hat{N},\hat{O}^{\nu}]|p_{1},p_{2}\rangle\) using the expression (3.21) we begin by acting with the derivative \(\nabla_{\mu}\) in (3.18). It is useful to note that \(\nabla_{\mu}(p_{1}+p_{2})^{2}=0\) and \(\nabla_{\mu}k_{1}^{\nu}=0\) so that
\[\nabla_{\mu}O_{r}((p_{1}+p_{2})^{2},-k_{1})=0 \tag{3.26}\]
for both the longitudinal part \(r=\parallel\) and the transverse part \(r=\perp\). We then get
\[\nabla_{\mu}\Big{(}O^{\nu}(p_{1},p_{2},-k_{1}))\delta(2p_{1}{\cdot }k_{1})\delta(-2p_{2}{\cdot}k_{1})\Big{)}=\frac{1}{p_{\infty}^{2}}O_{\parallel }((p_{1}{+}p_{2})^{2},k_{1}^{2})\delta^{\nu}_{\mu}\delta(2p_{1}{\cdot}k_{1}) \delta(-2p_{2}{\cdot}k_{1})\] \[+2k_{1\mu}\Big{(}O_{\parallel}((p_{1}{+}p_{2})^{2},k_{1}^{2})L^{ \nu}{-}O_{\perp}((p_{1}{+}p_{2})^{2},k_{1}^{2})k_{1}^{\nu}\Big{)}\Big{(}\delta ^{\prime}(2p_{1}{\cdot}k_{1})\delta(-2p_{2}{\cdot}k_{1}){+}\delta(2p_{1}{ \cdot}k_{1})\delta^{\prime}(-2p_{2}{\cdot}k_{1})\Big{)}. \tag{3.27}\]
which we can insert into eq. (3.21), keeping only the classical part:
\[A_{1}^{O^{\nu}}(p_{1},p_{2},q)=\frac{1}{p_{\infty}^{2}}\int\frac{d^ {D}k_{1}}{(2\pi)^{D-2}}N(\gamma,(k_{1}+q)^{2})(k_{1}^{\nu}+q^{\nu})O_{\parallel} ((p_{1}+p_{2})^{2},k_{1}^{2})\delta(2p_{1}\cdot k_{1})\delta(-2p_{2}\cdot k_{1}) \\ +2\int\frac{d^{D}k_{1}}{(2\pi)^{D-2}}N(\gamma,(k_{1}+q)^{2})(k_{1} +q)\cdot k_{1}\Big{(}O_{\parallel}((p_{1}+p_{2})^{2},k_{1}^{2})L^{\nu}\Big{)}\\ \times\Big{(}\delta^{\prime}(2p_{1}\cdot k_{1})\delta(-2p_{2} \cdot k_{1})+\delta(2p_{1}\cdot k_{1})\delta^{\prime}(-2p_{2}\cdot k_{1})\Big{)} \\ -2\int\frac{d^{D}k_{1}}{(2\pi)^{D-2}}N(\gamma,(k_{1}+q)^{2})(k_{1 }+q)\cdot k_{1}\Big{(}O_{\perp}(\gamma,k_{1}^{2})k_{1}^{\nu}\Big{)}\\ \times\Big{(}\delta^{\prime}(2p_{1}\cdot k_{1})\delta(-2p_{2} \cdot k_{1})+\delta(2p_{1}\cdot k_{1})\delta^{\prime}(-2p_{2}\cdot k_{1}) \Big{)}. \tag{3.28}\]
By symmetry the integral in the second line vanishes. We thus have
\[A_{1}^{O^{\mu}}(p_{1},p_{2},q)=\frac{1}{p_{\infty}^{2}}A_{1}^{O_{\parallel}\mu }(\gamma,q^{2})+A_{1}^{O_{\perp}\mu}(\gamma,q^{2}) \tag{3.29}\]
with
\[A_{1}^{O_{\parallel}\mu}(\gamma,q^{2})\equiv\int\frac{d^{D}k_{1}}{(2\pi)^{D-2} }N(\gamma,(k_{1}+q)^{2})(k_{1}^{\mu}+q^{\mu})O_{\parallel}((p_{1}+p_{2})^{2}, k_{1}^{2})\delta(2p_{1}\cdot k_{1})\delta(-2p_{2}\cdot k_{1}), \tag{3.30}\]
and
\[A_{1}^{O_{\perp}\mu}(\gamma,q^{2})\equiv-2\int\frac{d^{D}k_{1}}{ (2\pi)^{D-2}}N(\gamma,(k_{1}+q)^{2})(k_{1}+q)\cdot k_{1}\Big{(}O_{\perp}(\gamma,k_{1}^{2})k_{1}^{\mu}\Big{)}\\ \times\Big{(}\delta^{\prime}(2p_{1}\cdot k_{1})\delta(-2p_{2} \cdot k_{1})+\delta(2p_{1}\cdot k_{1})\delta^{\prime}(-2p_{2}\cdot k_{1}) \Big{)}. \tag{3.31}\]
By tensor reduction the latter takes the form
\[A_{1}^{O_{\perp}\mu}(\gamma,q^{2})=-L^{\mu}\int\frac{d^{D}k_{1}}{(2\pi)^{D-2} }N(\gamma,(k_{1}+q)^{2})(k_{1}+q)\cdot k_{1}O_{\perp}(\gamma,k_{1}^{2})\delta (2p_{1}\cdot k_{1})\delta(-2p_{2}\cdot k_{1}). \tag{3.32}\]
We note an interesting swap between longitudinal and transverse parts in this first iteration. Clearly, when we iterate further, this will generate alternating contributions between the longitudinal and transverse parts.
To complete the evaluation of the observable according to the KMOC prescription we now perform the Fourier transform to \(b\)-space according to eq. (3.9). Having already taken the classical limit, it is clear that we can also ignore the \(q^{2}\)-terms in the two delta-functions and effectively the Fourier transform simply becomes
\[\tilde{O}(\gamma,b)=\int\frac{d^{D}q}{(2\pi)^{D-2}}\delta(-2p_{1}\cdot q) \delta(2p_{2}\cdot q)O((p_{1}+p_{2})^{2},q^{2})e^{ib\cdot q}. \tag{3.33}\]
For the longitudinal part we have to evaluate the Fourier transform of \(A_{1}^{O_{\parallel}\mu}(\gamma,q^{2})\) which reads
\[\int\frac{d^{D}q}{(2\pi)^{D-2}}\frac{d^{D}k_{1}}{(2\pi)^{D-2}}N( \gamma,(k_{1}+q)^{2})(q^{\mu}+k_{1}^{\mu})O_{\parallel}((p_{1}+p_{2})^{2},k_{1} ^{2})\delta(2p_{1}\cdot k_{1})\delta(-2p_{2}\cdot k_{1})\\ \times\delta(-2p_{1}\cdot q)\delta(2p_{2}\cdot q)e^{ib\cdot q}. \tag{3.34}\]
and by a change of variables \(q\to q-k_{1}\) and \(k_{1}\to-k_{1}\) the integral factorizes
\[\text{(3.34)}=\int\frac{d^{D}q}{(2\pi)^{D-2}}q^{\nu}N(\gamma,q^{2})\delta(-2p_{1}\cdot q)\delta(2p_{2}\cdot q)e^{ib\cdot q}\\ \times\int\frac{d^{D}k_{1}}{(2\pi)^{D-2}}O_{\parallel}((p_{1}+p_{2})^{2},k_{1}^{2})\delta(-2p_{1}\cdot k_{1})\delta(2p_{2}\cdot k_{1})\,e^{ib\cdot k_{1}}. \tag{3.35}\]
Setting
\[\tilde{O}_{\parallel}(\gamma,b)\equiv\int\frac{d^{D}k_{1}}{(2\pi)^{D-2}}O_{ \parallel}((p_{1}+p_{2})^{2},k_{1}^{2})\delta(-2p_{1}\cdot k_{1})\delta(2p_{2 }\cdot k_{1})\,e^{ib\cdot k_{1}} \tag{3.36}\]
and noticing that
\[-i\frac{\partial\tilde{N}(\gamma,b)}{\partial b_{\nu}}=\int\frac{d^{D}q}{(2 \pi)^{D-2}}q^{\nu}N(\gamma,q^{2})\delta(-2p_{1}\cdot q)\delta(2p_{2}\cdot q)e ^{ib\cdot q}. \tag{3.37}\]
with \(\tilde{N}(\gamma,J)\) the Fourier transform of \(N(\gamma,q^{2})\) to \(b\)-space
\[\tilde{N}(\gamma,b)\ \equiv\ \text{FT}[N(\gamma,q^{2})]\ \equiv\ \frac{1}{4m_{1}m_{2}\sqrt{\gamma^{2}-1}}\int\frac{d^{2}q}{(2\pi)^{2}}N( \gamma,q^{2})e^{ib\cdot q}. \tag{3.38}\]
we find that the Fourier transform of \(A_{1}^{O_{\parallel}\mu}(\gamma,q^{2})\) is given by
\[-i\frac{\partial\tilde{N}(\gamma,b)}{\partial b_{\nu}}\tilde{O}_{\parallel}( \gamma,b)=i\frac{b^{\nu}}{|b|}\frac{\partial\tilde{N}(\gamma,b)}{\partial|b| }\tilde{O}_{\parallel}(\gamma,b). \tag{3.39}\]
For the transverse part we have to evaluate
\[\tilde{A}_{1}^{O_{\perp}\mu}(\gamma,b)=-L^{\mu}\int\frac{d^{D}q}{ (2\pi)^{D-2}}\frac{d^{D}k_{1}}{(2\pi)^{D-2}}N(\gamma,(k_{1}+q)^{2})(k_{1}+q) \cdot k_{1}O_{\perp}(\gamma,k_{1}^{2})\\ \times\delta(2p_{1}\cdot k_{1})\delta(-2p_{2}\cdot k_{1})\delta( -2p_{1}\cdot q)\delta(2p_{2}\cdot q)e^{ib\cdot q} \tag{3.40}\]
By the same change of variable as before we get
\[\tilde{A}_{1}^{O_{\perp}\mu}(\gamma,b)=L^{\mu}\int\frac{d^{D}q}{ (2\pi)^{D-2}}\frac{d^{D}k_{1}}{(2\pi)^{D-2}}N(\gamma,q^{2})q\cdot k_{1}O_{ \perp}(\gamma,k_{1}^{2})\\ \times\delta(-2p_{1}\cdot k_{1})\delta(2p_{2}\cdot k_{1})\delta( -2p_{1}\cdot q)\delta(2p_{2}\cdot q)e^{ib\cdot q}e^{ib\cdot k_{1}} \tag{3.41}\]
This integral is a product of a Fourier transform over \(q\) and a Fourier transform over \(k_{1}\), leading to
\[\tilde{A}_{1}^{O_{\perp}\mu}(\gamma,b)=-L^{\mu}\frac{\partial\tilde{N}(\gamma,b)}{\partial b^{\nu}}\frac{\partial\tilde{O}_{\perp}(\gamma,b)}{\partial b_{\nu}}=L^{\mu}\frac{\partial\tilde{N}(\gamma,b)}{\partial|b|}\frac{\partial\tilde{O}_{\perp}(\gamma,b)}{\partial|b|}. \tag{3.42}\]
Collecting these pieces, we get
\[\tilde{A}_{1}^{O^{\mu}}(\gamma,b)=\left(\frac{i}{p_{\infty}^{2}}\frac{b^{\mu}}{|b|}\tilde{O}_{\parallel}(\gamma,b)+L^{\mu}\frac{\partial\tilde{O}_{\perp}(\gamma,b)}{\partial|b|}\right)\frac{\partial\tilde{N}(\gamma,b)}{\partial|b|}. \tag{3.43}\]
In terms of the angular momentum \(J=p_{\infty}|b|\), we have
\[\tilde{A}_{1}^{O^{\mu}}(\gamma,b)=\left(\frac{i}{p_{\infty}}\frac{b^{\mu}}{|b|}\tilde{O}_{\parallel}(\gamma,b)+p_{\infty}L^{\mu}\frac{\partial\tilde{O}_{\perp}(\gamma,b)}{\partial|b|}\right)\frac{\partial\tilde{N}(\gamma,J)}{\partial J}. \tag{3.44}\]
The factorization of the Fourier transforms separates the \(N\) operator from the operator \(O\) in \(b\)-space. This remarkable fact implies that we can iterate the result above as dictated by the commutator relation in eq. (3.8). It is convenient to introduce a matrix notation, so that
\[\tilde{A}_{1}^{O^{\mu}}(\gamma,b)=\left(L^{\mu}\ \ i\frac{b^{\mu}}{|b|}\right)\begin{pmatrix}0&p_{\infty}\frac{\partial\tilde{N}}{\partial J}\\ \frac{1}{p_{\infty}}\frac{\partial\tilde{N}}{\partial J}&0\end{pmatrix}\begin{pmatrix}\tilde{O}_{\parallel}\\ \frac{\partial\tilde{O}_{\perp}}{\partial|b|}\end{pmatrix} \tag{3.45}\]
and
\[\tilde{A}_{n+1}^{O^{\mu}}(\gamma,b)=\left(L^{\mu}\ \ i\frac{b^{\mu}}{|b|}\right)\begin{pmatrix}0&p_{\infty}\frac{\partial\tilde{N}}{\partial J}\\ \frac{1}{p_{\infty}}\frac{\partial\tilde{N}}{\partial J}&0\end{pmatrix}^{n}\begin{pmatrix}p_{\infty}\frac{\partial\tilde{O}_{\perp}}{\partial|b|}\frac{\partial\tilde{N}}{\partial J}\\ \frac{\tilde{O}_{\parallel}}{p_{\infty}}\frac{\partial\tilde{N}}{\partial J}\end{pmatrix} \tag{3.46}\]
for summing the iteration to all orders according to the recursion in eq. (3.8). Inserting it into the expression (3.6), we get
\[\Delta\tilde{O}^{\mu}(\gamma,b)=\left(L^{\mu}\ \ i\frac{b^{\mu}}{|b|}\right)\sum_{n=1}^{\infty}\frac{(-i)^{n}}{n!}\begin{pmatrix}0&p_{\infty}\frac{\partial\tilde{N}}{\partial J}\\ \frac{1}{p_{\infty}}\frac{\partial\tilde{N}}{\partial J}&0\end{pmatrix}^{n-1}\begin{pmatrix}p_{\infty}\frac{\partial\tilde{O}_{\perp}}{\partial|b|}\frac{\partial\tilde{N}}{\partial J}\\ \frac{\tilde{O}_{\parallel}}{p_{\infty}}\frac{\partial\tilde{N}}{\partial J}\end{pmatrix}. \tag{3.47}\]
which sums into
\[\Delta\tilde{O}^{\mu}(\gamma,b)=\left(L^{\mu}\ \ i\frac{b^{\mu}}{|b|}\right)\begin{pmatrix}-\frac{i\sin\left(\frac{\partial\tilde{N}}{\partial J}\right)}{\frac{\partial\tilde{N}}{\partial J}}&\frac{p_{\infty}\left(\cos\left(\frac{\partial\tilde{N}}{\partial J}\right)-1\right)}{\frac{\partial\tilde{N}}{\partial J}}\\ \frac{\cos\left(\frac{\partial\tilde{N}}{\partial J}\right)-1}{p_{\infty}\frac{\partial\tilde{N}}{\partial J}}&-\frac{i\sin\left(\frac{\partial\tilde{N}}{\partial J}\right)}{\frac{\partial\tilde{N}}{\partial J}}\end{pmatrix}\begin{pmatrix}p_{\infty}\frac{\partial\tilde{O}_{\perp}}{\partial|b|}\frac{\partial\tilde{N}}{\partial J}\\ \frac{\tilde{O}_{\parallel}}{p_{\infty}}\frac{\partial\tilde{N}}{\partial J}\end{pmatrix}. \tag{3.48}\]
This relation shows the intimate connection between the exponential representation of the \(S\)-matrix and the KMOC formalism. It is an interesting fact that the \(\hat{N}\)-operator is here sandwiched between the initial _in_-state and its conjugate rather
than between _in_ and _out_ states as in ref. [22]. This is a consequence of the fact that the KMOC formalism evaluates observables as the difference between time evolved _in_-states whereas in ref. [22] \(N(\gamma,b)\) was viewed as an ordinary scattering matrix element from which to compute the scattering angle through the radial action. It is also interesting to note how the iterative structure of the exponential representation makes \(\hat{N}\) matrix elements the universal objects to compute in the KMOC formalism, whereas all details of the actual observable \(O^{\mu}\) only enter through the initial vector determined by \(\tilde{A}_{1}^{O^{\mu}}(\gamma,b)\) in (3.44).
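The resummation from (3.47) to (3.48) is also easy to check numerically: since the matrix \(K\) appearing in (3.45) squares to the identity, the series sums to the sine/cosine matrix above. A short numpy sketch (ours), with arbitrary test values for \(p_{\infty}\) and \(\partial\tilde{N}/\partial J\):

```python
import numpy as np
from math import factorial

x, p = 0.6, 1.7                          # test values for dN/dJ and p_inf
K = np.array([[0.0, p], [1.0/p, 0.0]])   # K @ K = identity
M = x*K

# sum_{n>=1} (-i)^n/n! M^(n-1), truncated at n = 40
series = sum((-1j)**n/factorial(n)*np.linalg.matrix_power(M, n - 1)
             for n in range(1, 41))

closed = np.array([[-1j*np.sin(x)/x,       p*(np.cos(x) - 1)/x],
                   [(np.cos(x) - 1)/(p*x), -1j*np.sin(x)/x]])

print(np.max(np.abs(series - closed)))   # ~1e-16
```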
#### 3.1.3 Momentum kick: the conservative sector
We now finally apply the general considerations above to the case of the momentum kick of, say, particle 1 with initial momentum \(p_{1}\) in the scattering. We then have that the initial vector is
\[\tilde{A}_{1}^{P_{1}^{\mu}}(\gamma,b)=ip_{\infty}\frac{b^{\mu}}{|b|}\frac{\partial\tilde{N}(\gamma,J)}{\partial J}. \tag{3.49}\]
We apply eq. (3.48) with \(\tilde{O}_{\parallel}(\gamma,b)=p_{\infty}^{2}\) and \(\tilde{O}_{\perp}(\gamma,b)=0\), and get
\[\Delta\tilde{P_{1}}^{\nu}(\gamma,b)|_{\rm cons}=p_{\infty}\frac{b^{\nu}}{|b|} \sin\left(\frac{\partial\tilde{N}(\gamma,J)}{\partial J}\right)+p_{\infty}^{ 2}L^{\nu}\Bigg{(}\cos\Bigg{(}-\frac{\partial\tilde{N}(\gamma,J)}{\partial J} \Bigg{)}-1\Bigg{)}. \tag{3.50}\]
In the conservative case, the scattering angle can be extracted by the coefficient of the transverse piece only. A comparison with the general relation between momentum kick and scattering angle [40]2
Footnote 2: The coefficient of \(\sin(\chi)\) is fixed by a quadratic condition. We choose the sign opposite to that of ref. [40].
\[\Delta\tilde{P_{1}}^{\nu}(\gamma,b)|_{\rm cons}=-p_{\infty}\frac{b^{\nu}}{|b|} \sin(\chi)+p_{\infty}^{2}L^{\nu}\left(\cos(\chi)-1\right), \tag{3.51}\]
demonstrates that
\[\chi\ =\ -\frac{\partial\tilde{N}(\gamma,J)}{\partial J}\ =\ -\frac{1}{p_{\infty}}\frac{ \partial\tilde{N}(\gamma,b)}{\partial b}\, \tag{3.52}\]
thus proving the conjectured relation of ref. [22] between the scattering angle and the matrix elements of the \(N\)-operator. This also shows that \(\tilde{N}(\gamma,J)\) is the radial action.
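As a final consistency check (ours, not the paper's), the kick (3.51) must preserve the mass shell of particle 1 and rotate its CM three-momentum by \(\chi\); both can be confirmed numerically:

```python
import numpy as np

def mdot(a, b):                          # Minkowski product, metric diag(1,-1,-1,-1)
    return a[0]*b[0] - np.dot(a[1:], b[1:])

m1, m2, gamma, chi = 1.0, 1.5, 1.3, 0.4  # arbitrary test values
s = m1**2 + m2**2 + 2*m1*m2*gamma
E = np.sqrt(s)
p_inf = m1*m2*np.sqrt(gamma**2 - 1)/E    # eq. (3.25)

p1 = np.array([(s + m1**2 - m2**2)/(2*E),  p_inf, 0.0, 0.0])
p2 = np.array([(s + m2**2 - m1**2)/(2*E), -p_inf, 0.0, 0.0])
bhat = np.array([0.0, 0.0, 1.0, 0.0])    # unit vector along the impact parameter

# L^mu from eq. (3.23).
L = ((m2**2 + m1*m2*gamma)*p1 - (m1**2 + m1*m2*gamma)*p2)/(m1**2*m2**2*(gamma**2 - 1))

dp1 = -p_inf*np.sin(chi)*bhat + p_inf**2*(np.cos(chi) - 1)*L   # eq. (3.51)

q_out = p1 + dp1
print(mdot(q_out, q_out) - m1**2)        # ~1e-16: outgoing momentum stays on shell
print(np.arccos(np.dot(q_out[1:], p1[1:])/p_inf**2) - chi)     # ~1e-16: angle = chi
```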
### Including gravitational radiation
We now turn to the impact of gravitational radiation on the expectation value of an operator \(\hat{O}\). We recall that in the KMOC formalism radiation is automatically taken into account in perturbation theory by insertion of a complete set of states (including
any number of gravitons) in the pertinent _in-in_ matrix elements. This is conventionally done by means of the Born expansion of the \(\hat{T}\)-matrix; here we adapt it to the exponential representation. In particular, we use the insertion of the identity operator inside the nested commutators and extract contributions order by order in the gravitational coupling \(G\). To clarify: when going from \(\hat{T}\)-matrix elements to \(\hat{N}\)-matrix elements we also include terms that are radiative, to arbitrarily high order in the coupling \(G\). What is missing in order to compute the full expectation value of an operator \(\hat{O}\) are the pieces that arise from inserting complete sets of states (including gravitons) _inside the nested commutators of eq. (3.6)_. The discussion closely mimics the way we evaluated matrix elements of the \(\hat{N}\)-operator itself. We now consider these additional terms.
Since our aim is to derive a recursive relation for the classical limit of an observable, we begin by analyzing the expectation value of \(\hat{A}^{\hat{O}}_{n+1}\) based on one iteration,
\[\langle p^{\prime}_{1},p^{\prime}_{2}|\hat{A}^{\hat{O}^{\mu}}_{n+1}|p_{1},p_{2 }\rangle=\frac{1}{\hbar}\langle p^{\prime}_{1},p^{\prime}_{2}|[\hat{N},\hat{A }^{\hat{O}^{\mu}}_{n}]|p_{1},p_{2}\rangle. \tag{3.53}\]
Inserting a complete set of states, this has a graphical representation
\[A^{O^{\mu}}_{n+1}(p_{1},p_{2},q)=\cdots \tag{3.54}\]
where the ellipsis represents pieces with insertion of more than one graviton. We stress that this involves the full \(\hat{N}\)-operator, and in perturbation theory we obviously need to truncate to the given order in \(G\) (but for now we keep it general). Up to \(\mathcal{O}(G^{4})\) we only need to consider the \(n=2\) term and to compute the iteration of this term, i.e. the \(n=3\) contribution with one graviton insertion
\[[\hat{N}^{\text{rad}},[\hat{N}^{\text{rad}},[\hat{N},\hat{O}]]]+[\hat{N}^{ \text{rad}},[\hat{N},[\hat{N}^{\text{rad}},\hat{O}]]]+[\hat{N},[\hat{N}^{\text {rad}},[\hat{N}^{\text{rad}},\hat{O}]]]. \tag{3.55}\]
By simple manipulations this can be written
\[[[\hat{N}^{\text{rad}},[\hat{N}^{\text{rad}},\hat{N}]],\hat{O}]+3[\hat{N},[ \hat{N}^{\text{rad}},[\hat{N}^{\text{rad}},\hat{O}]]]+3[[\hat{N}^{\text{rad}}, \hat{N}],[\hat{N}^{\text{rad}},\hat{O}]]. \tag{3.56}\]
Taking the classical limit, we find that the first and last terms vanish so that we are left with
\[3[\hat{N},[\hat{N}^{\text{rad}},[\hat{N}^{\text{rad}},\hat{O}]]]\]
This term can be evaluated using the same tools we developed in the previous part for the conservative pieces.
This concludes the analysis of single-graviton insertions from the complete set of states up to \(\mathcal{O}(G^{4})\). Actually, what we have just shown can be generalized to any number of graviton insertions. However, at three-loop level, and as noticed in ref. [38] in the context of the eikonal, multiple graviton insertions such as
\[\cdots\]
\[\mathcal{E}_{1}\equiv\frac{m_{1}^{2}+m_{1}m_{2}\gamma}{m_{1}^{2}+m_{2}^{2}+2m_{1}m_ {2}\gamma};\qquad\mathcal{E}_{2}\equiv 1-\mathcal{E}_{1}=\frac{m_{2}^{2}+m_{1}m_{2} \gamma}{m_{1}^{2}+m_{2}^{2}+2m_{1}m_{2}\gamma}. \tag{3.64}\]
As we have shown, at fourth Post-Minkowskian order we can write the full result as
\[\Delta\tilde{O}(\gamma,b)=\Delta\tilde{O}_{\text{cons}}(\gamma,b)+\sum_{n=1}^ {\infty}\Delta\tilde{O}_{\text{rad}}^{(n)}(\gamma,b) \tag{3.65}\]
where \(\Delta\tilde{O}_{\text{rad}}^{(n)}\) is the contribution coming from the succession of \(n\) single-graviton insertions. The conservative part is given by
\[\Delta\tilde{O}_{\text{cons}}(\gamma,b)=\left(u_{1}^{\mu}\ u_{2}^{\mu}\ \tfrac{b^{\mu}}{|b|}\right)\sum_{n\geq 1}\frac{(-i)^{n}}{n!}M^{n-1}\begin{pmatrix}\tilde{O}_{1}^{u_{1}}\\ \tilde{O}_{1}^{u_{2}}\\ \tilde{O}_{1}^{b}\end{pmatrix} \tag{3.66}\] \[=i\left(u_{1}^{\mu}\ u_{2}^{\mu}\ \tfrac{b^{\mu}}{|b|}\right)\begin{pmatrix}-\frac{\frac{\partial\tilde{N}}{\partial J}\mathcal{E}_{1}+\mathcal{E}_{2}\sin\left(\frac{\partial\tilde{N}}{\partial J}\right)}{\frac{\partial\tilde{N}}{\partial J}}&-\frac{\mathcal{E}_{1}\left(\frac{\partial\tilde{N}}{\partial J}-\sin\left(\frac{\partial\tilde{N}}{\partial J}\right)\right)}{\frac{\partial\tilde{N}}{\partial J}}&\frac{-1+\cos\left(\frac{\partial\tilde{N}}{\partial J}\right)}{\frac{\partial\tilde{N}}{\partial J}}\\ -\frac{\mathcal{E}_{2}\left(\frac{\partial\tilde{N}}{\partial J}-\sin\left(\frac{\partial\tilde{N}}{\partial J}\right)\right)}{\frac{\partial\tilde{N}}{\partial J}}&-\frac{\mathcal{E}_{1}\sin\left(\frac{\partial\tilde{N}}{\partial J}\right)+\frac{\partial\tilde{N}}{\partial J}\mathcal{E}_{2}}{\frac{\partial\tilde{N}}{\partial J}}&\frac{1-\cos\left(\frac{\partial\tilde{N}}{\partial J}\right)}{\frac{\partial\tilde{N}}{\partial J}}\\ \frac{\mathcal{E}_{2}\left(1-\cos\left(\frac{\partial\tilde{N}}{\partial J}\right)\right)}{\frac{\partial\tilde{N}}{\partial J}}&\frac{\mathcal{E}_{1}\left(\cos\left(\frac{\partial\tilde{N}}{\partial J}\right)-1\right)}{\frac{\partial\tilde{N}}{\partial J}}&-\frac{\sin\left(\frac{\partial\tilde{N}}{\partial J}\right)}{\frac{\partial\tilde{N}}{\partial J}}\end{pmatrix}\begin{pmatrix}\tilde{O}_{1}^{u_{1}}\\ \tilde{O}_{1}^{u_{2}}\\ \tilde{O}_{1}^{b}\end{pmatrix},\]
where we have introduced the operator \(\hat{O}_{1}\equiv[\hat{N},\hat{O}]\). This is just a different way of writing the conservative result of eq. (3.48), as can be seen by use of the relations (3.60) and (3.64).
For the radiative sector we get a similar formula
\[\Delta\tilde{O}_{\text{rad}}^{(1)}(\gamma,b)=\left(u_{1}^{\mu}\ u_{2}^{\mu}\ \tfrac{b^{\mu}}{|b|}\right)\Bigg{(}-\frac{1}{2}+\frac{iM}{2}\Bigg{)}\begin{pmatrix}\tilde{O}_{2}^{u_{1}}\\ \tilde{O}_{2}^{u_{2}}\\ \tilde{O}_{2}^{b}\end{pmatrix}+\mathcal{O}(G^{5}) \tag{3.67}\]
where we defined
\[\hat{O}_{k+1}=\underbrace{[\hat{N},[\hat{N},\ldots,[\hat{N},\hat{O}]]]}_{\text {k+1 times}}|_{\text{k graviton insertions}} \tag{3.68}\]
after restricting to \(k\) graviton insertions, as explained above. This is the complete expression to fourth Post-Minkowskian order and it is readily generalized to higher orders.
We emphasize again that the terminology of conservative and radiative pieces is completely artificial. There are also radiative modes in what we for historical reasons call the conservative part. This was already obvious at two-loop level, where it was shown in ref. [22] that the two-to-two matrix element of the \(\hat{N}\)-operator yields the full result, including radiation reaction, to that order. We now understand why this
phenomenon does not generalize to higher orders, and we understand how to correct for it. There are still many radiative modes and radiation-reaction parts in just the two-to-two matrix element of \(\hat{N}\)-operator and therefore those matrix elements are far from being just conservative.
#### 3.2.1 Full momentum kick at fourth Post-Minkowskian order
We now turn to the full explicit evaluation of the momentum kick \(\Delta P_{1}^{\mu}\) at fourth Post-Minkowskian order. As a building block we will first need to compute \(\tilde{N}(\gamma,b)\). This was already done in ref. [18] up to 4PM order (except for one term which we take the opportunity to correct here) so that what we label the conservative piece
\[\Delta P_{1}^{\mu}|_{\text{cons.}}=\begin{pmatrix}u_{1}^{\mu}&u_{2}^{\mu}& \frac{b^{\mu}}{|b|}\end{pmatrix}\begin{pmatrix}p_{\infty}\Big{(}1-\cos(\chi_{ \text{cons}})\Big{)}\\ p_{\infty}\Big{(}\cos(\chi_{\text{cons}})-1\Big{)}\\ -p_{\infty}\sin(\chi_{\text{cons}})\end{pmatrix} \tag{3.69}\]
is known. Here it is convenient to introduce the following notation
\[\chi_{\text{cons}}\equiv-\frac{\partial\tilde{N}}{\partial J} \tag{3.70}\]
and define the PM-expanded quantities
\[\chi_{\text{cons}}\equiv\sum_{n=0}^{\infty}G^{n+1}\chi_{\text{cons}}^{(n)} \tag{3.71}\]
as well as
\[\tilde{N}\equiv\sum_{n=0}^{\infty}G^{n+1}\tilde{N}^{(n)} \tag{3.72}\]
so that at fourth Post-Minkowskian order we have
\[\Delta P_{1}^{\mu,4PM}|_{\text{cons.}}=p_{\infty}G^{4}\begin{pmatrix}u_{1}^{ \mu}&u_{2}^{\mu}&\frac{b^{\mu}}{|b|}\end{pmatrix}\begin{pmatrix}-\frac{(\chi_ {\text{cons}}^{(0)})^{4}}{24}+\frac{(\chi_{\text{cons}}^{(1)})^{2}}{2}+\chi_{ \text{cons}}^{(0)}\chi_{\text{cons}}^{(2)}\\ \frac{(\chi_{\text{cons}}^{(0)})^{4}}{24}-\frac{(\chi_{\text{cons}}^{(1)})^{ 2}}{2}-\chi_{\text{cons}}^{(0)}\chi_{\text{cons}}^{(2)}\\ \frac{(\chi_{\text{cons}}^{(0)})^{2}\chi_{\text{cons}}^{(1)}}{2}-\chi_{\text{ cons}}^{(3)}\end{pmatrix} \tag{3.73}\]
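The \(G^{4}\) entries of eq. (3.73) are just the fourth-order Taylor coefficients of \(1-\cos(\chi_{\text{cons}})\) and \(-\sin(\chi_{\text{cons}})\) under the expansion (3.71). A minimal sympy sketch of this bookkeeping (our illustration, not part of the original computation; the symbol names are ours):

```
import sympy as sp

G = sp.symbols('G')
chi0, chi1, chi2, chi3 = sp.symbols('chi0 chi1 chi2 chi3')
chi = G*chi0 + G**2*chi1 + G**3*chi2 + G**4*chi3   # PM expansion (3.71)

# u1-row and b-row structures of eq. (3.69), expanded to order G^4
u1 = sp.expand(sp.series(1 - sp.cos(chi), G, 0, 5).removeO())
b = sp.expand(sp.series(-sp.sin(chi), G, 0, 5).removeO())

print(u1.coeff(G, 4))   # -chi0**4/24 + chi0*chi2 + chi1**2/2, as in (3.73)
print(b.coeff(G, 4))    # chi0**2*chi1/2 - chi3, as in (3.73)
```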
Starting at third Post-Minkowskian order we need to also evaluate the first radiation contribution to the momentum kick \(\Delta\tilde{P}_{1,\text{rad}}^{\mu(1)}\). We thus need the building block
\[\tilde{P}_{1,1}^{\mu}=\langle p_{1}^{\prime},p_{2}^{\prime}|[\hat{N},[\hat{N},\hat{P}_{1}^{\mu}]]|p_{1},p_{2}\rangle \tag{3.74}\]
evaluated with one-graviton insertions. This reads
\[\tilde{P}^{\mu}_{1,1} = \text{FT}[\int\frac{d^{D}q_{1}d^{D}q_{2}}{(2\pi)^{2D-4}}\langle p^{ \prime}_{1},p^{\prime}_{2}|\hat{N}|p_{1}+q_{1},p_{2}-q_{2},q_{2}-q_{1}\rangle(- q^{\mu}-2q_{1}^{\mu}) \tag{3.75}\] \[\times \langle p_{1}+q_{1},p_{2}-q_{2},q_{2}-q_{1}|\hat{N}|p_{1},p_{2} \rangle\delta(2p_{1}\cdot q_{1})\delta(-2p_{2}\cdot q_{2})\delta((q_{2}-q_{1}) ^{2})]\]
where again, for compactness of notation, we label the Fourier transform into \(b\)-space by FT. Its precise definition is given in eq. (3.38). Note that this integral is orthogonal to \(p_{1}\), _i.e._
\[p_{1\mu}\langle p^{\prime}_{1},p^{\prime}_{2}|[\hat{N},[\hat{N},\hat{P}^{\mu}_ {1}]]|p_{1},p_{2}\rangle=0, \tag{3.76}\]
so that it can be decomposed according to
\[\tilde{P}^{\mu}_{1,1}=\begin{pmatrix}u_{1}^{\mu}&u_{2}^{\mu}&\frac{b^{\mu}}{|b|}\end{pmatrix}\begin{pmatrix}0\\ \tilde{P}^{u_{2}}_{1,1}\\ \tilde{P}^{b}_{1,1}\end{pmatrix} \tag{3.77}\]
Based on the analysis of ref. [40] we know that the coefficients have the following perturbative expansion
\[\tilde{P}^{u_{2}}_{1,1} = G^{3}\tilde{P}^{u_{2},(2)}_{1,1}+G^{4}\tilde{P}^{u_{2},(3)}_{1, 1}+\mathcal{O}(G^{5}),\] \[\tilde{P}^{b}_{1,1} = G^{4}\tilde{P}^{b,(3)}_{1,1}+\mathcal{O}(G^{5}). \tag{3.78}\]
so that
\[\Delta\tilde{P}^{\mu,(1)}_{1,\text{rad}}=G^{3}\begin{pmatrix}u_{1}^{\mu}&u_{2}^{\mu}&\frac{b^{\mu}}{|b|}\end{pmatrix}\begin{pmatrix}0\\ -\frac{\tilde{P}^{u_{2},(2)}_{1,1}}{2}\\ 0\end{pmatrix}+G^{4}\begin{pmatrix}u_{1}^{\mu}&u_{2}^{\mu}&\frac{b^{\mu}}{|b|}\end{pmatrix}\begin{pmatrix}0\\ -\frac{\tilde{P}^{u_{2},(3)}_{1,1}}{2}\\ \frac{\mathcal{E}_{1}\chi^{(0)}_{\text{cons}}\tilde{P}^{u_{2},(2)}_{1,1}}{2}-\frac{\tilde{P}^{b,(3)}_{1,1}}{2}\end{pmatrix}+\mathcal{O}(G^{5}) \tag{3.79}\]
Note in particular that \(\tilde{P}^{b}_{1,1}\) only receives a contribution from order \(\mathcal{O}(G^{4})\), the 4PM order. As mentioned above, the 3PM case is therefore quite special in that all radiative effects are entirely contained in the classical contribution from the \(\hat{N}\)-operator [22]. The momentum kick due to radiation at 3PM order only shifts the longitudinal momenta.
Starting at fourth Post-Minkowskian order we also need to evaluate the second radiative contribution to the momentum kick, \(\Delta\tilde{P}^{\mu(2)}_{1,\text{rad}}\), which, as indicated, involves the insertion of two graviton lines. This contribution is more tricky and is diagrammatically represented by
[cut diagram (3.80) omitted in this extraction]

which has two pieces at 4PM order:

[diagrams (3.81) omitted in this extraction]

giving rise to the elementary building block
\[\tilde{P}^{\mu}_{1,2}=\langle p^{\prime}_{1},p^{\prime}_{2}|[\hat{N},[\hat{N},[\hat{N},\hat{P}^{\mu}_{1}]]]|p_{1},p_{2}\rangle \tag{3.82}\]
evaluated with two-graviton insertions. This is
\[\tilde{P}^{\mu}_{1,2} =G^{4}\text{FT}[q^{\mu}\langle p^{\prime}_{1},p^{\prime}_{2}|\hat {N}^{\text{rad}}_{0}\hat{N}_{0}\hat{N}^{\text{rad}}_{0}|p_{1},p_{2}\rangle]\] \[+G^{4}\text{FT}\Big{[}\int\frac{d^{D}q_{1}d^{D}q_{2}d^{D}q_{3}}{( 2\pi)^{3D-6}}(-3q_{1}^{\mu}+3q_{3}^{\mu})\langle p^{\prime}_{1},p^{\prime}_{2}| \hat{N}^{\text{rad}}_{0}|p_{1}+q_{3},p_{2}-q_{2},q_{2}-q_{3}\rangle\] \[\times\langle p_{1}+q_{3},q_{2}-q_{3}|\hat{N}_{0}|p_{1}+q_{1},q_ {2}-q_{1}\rangle\delta(2p_{1}\cdot q_{3})\delta(-2p_{2}\cdot q_{2})\delta^{(+) }((q_{2}-q_{3})^{2})\] \[\times\langle p_{1}+q_{1},p_{2}-q_{2},q_{2}-q_{1}|\hat{N}^{\text {rad}}_{0}|p_{1},p_{2}\rangle\delta(2p_{1}\cdot q_{1})\delta^{(+)}((q_{2}-q_{ 1})^{2})\Big{]}+O(G^{5})\] \[\equiv 6G^{4}\text{FT}[q^{\mu}L_{2}(\gamma,q^{2})]+G^{4}\tilde{P}^{ \mu,(3)}_{1,2}+O(G^{5})\] \[=6iG^{4}p_{\infty}\frac{b^{\mu}}{|b|}\frac{\partial\tilde{L}_{2}( \gamma,J)}{\partial J}+G^{4}\tilde{P}^{\mu,(3)}_{1,2}+O(G^{5}) \tag{3.83}\]
so that its contribution to the momentum kick becomes
\[\Delta\tilde{P}^{\mu,(2)}_{1,\text{rad}}=G^{4}\left(u_{1}^{\mu}\ u_{2}^{\mu}\ \frac{b^{\mu}}{|b|}\right)\begin{pmatrix}0\\ \frac{i}{6}\tilde{P}^{u_{2},(3)}_{1,2}\\ -p_{\infty}\frac{\partial\tilde{L}_{2}(\gamma,J)}{\partial J}+\frac{i}{6}\tilde{P}^{b,(3)}_{1,2}\end{pmatrix}+\mathcal{O}(G^{5}) \tag{3.84}\]
Combining all pieces, the full fourth-order momentum kick is thus given by
\[\Delta\tilde{P}^{\mu,4PM}_{1}=G^{4}\left(u_{1}^{\mu}\ u_{2}^{\mu}\ \frac{b^{\mu}}{|b|}\right)\begin{pmatrix}p_{\infty}\Big{(}-\frac{(\chi^{(0)}_{\text{cons}})^{4}}{24}+\frac{(\chi^{(1)}_{\text{cons}})^{2}}{2}+\chi^{(0)}_{\text{cons}}\chi^{(2)}_{\text{cons}}\Big{)}\\ p_{\infty}\Big{(}\frac{(\chi^{(0)}_{\text{cons}})^{4}}{24}-\frac{(\chi^{(1)}_{\text{cons}})^{2}}{2}-\chi^{(0)}_{\text{cons}}\chi^{(2)}_{\text{cons}}\Big{)}-\frac{\tilde{P}^{u_{2},(3)}_{1,1}}{2}+\frac{i}{6}\tilde{P}^{u_{2},(3)}_{1,2}\\ p_{\infty}\Big{(}\frac{(\chi^{(0)}_{\text{cons}})^{2}\chi^{(1)}_{\text{cons}}}{2}-\chi^{(3)}_{\text{cons}}-\frac{\partial\tilde{L}_{2}(\gamma,J)}{\partial J}\Big{)}+\frac{\mathcal{E}_{1}\chi^{(0)}_{\text{cons}}\tilde{P}^{u_{2},(2)}_{1,1}}{2}-\frac{\tilde{P}^{b,(3)}_{1,1}}{2}+\frac{i}{6}\tilde{P}^{b,(3)}_{1,2}\end{pmatrix} \tag{3.85}\]
We note the partial recycling of lower-order terms here, a feature that generalizes to higher orders as well.
## 4 Details on the 4PM calculation
### The construction of the integrands
To perform the full explicit computation of the momentum kick, we need to compute only three integrands, giving \(\hat{N}^{(3)}\), \(\tilde{P}^{\mu,(3)}_{1,1}\) and \(\tilde{P}^{\mu,(3)}_{1,2}\). The three integrands can be represented as
[cut diagrams omitted in this extraction; the integrands for \(\tilde{P}^{\mu,(3)}_{1,1}\) and \(\tilde{P}^{\mu,(3)}_{1,2}\) carry the momentum factors \(-(q^{\mu}+2q_{1}^{\mu})\) (4.1) and \(3(q_{3}^{\mu}-q_{1}^{\mu})\) (4.2), respectively]
We compute these from generalized unitarity and velocity cuts, selecting topologies that both have three velocity cuts and respect the conditions on the on-shell gravitons when imposed by the topology.
### The integration basis
At fourth Post-Minkowskian order the computation of the momentum kick is expanded on two sets of master integrals. A first family of master integrals has delta-function constraints on the massive legs and one graviton propagator as depicted in fig. 1(a)
\[\mathcal{J}\left(\{n_{j}\},\{\pm,\pm,\pm\};\gamma,\epsilon\right)=\int\frac{\delta(2v_{1}\cdot\ell_{1})\delta(2v_{1}\cdot(\ell_{1}+\ell_{2}+\ell_{3}))\delta(2v_{2}\cdot(\ell_{1}+\ell_{2}))\delta(\ell_{2}^{2})}{\prod_{i=1}^{12}D_{i}^{n_{i}}}\prod_{r=1}^{3}\frac{d^{4-2\epsilon}\ell_{r}}{(2\pi)^{3-2\epsilon}} \tag{4.4}\]

where the propagators \(D_{i}\) carry by
default the Feynman \(+i\varepsilon\) prescription. We find that for the set of master integrals in (4.4) the basis needed for the longitudinal pieces has dimension 54, and the one for the transverse pieces has the same dimension. These master integrals have a delta-function for one of the graviton propagators as required in the one-graviton radiative sector analyzed in section 3.2. This delta-function breaks the symmetry between \(\ell_{2}\) and \(\ell_{3}\) compared to the other basis.
In the conservative sector of section 3.1 it is enough to use the smaller set of master integrals represented in figure 1(b) given by
\[\mathcal{I}\left(\{n_{j}\},\{\pm,\pm,\pm\};\gamma,\epsilon\right)=\int\frac{\delta(2v_{1}\cdot\ell_{1})\delta(2v_{1}\cdot(\ell_{1}+\ell_{2}+\ell_{3}))\delta(2v_{2}\cdot(\ell_{1}+\ell_{2}))}{\prod_{i=1}^{12}D_{i}^{n_{i}}}\prod_{r=1}^{3}\frac{d^{4-2\epsilon}\ell_{r}}{(2\pi)^{3-2\epsilon}}. \tag{4.6}\]
The tensorial reduction gives a basis of dimension 40. This basis is also sufficient to compute the second radiative term \(\tilde{P}_{1,2}^{\mu,(3)}\), which differs only by the boundary conditions we impose in the static \(\gamma=1\) limit.
The world-line computation of [55] also uses master integrals with delta-function velocity cuts on three massive propagators at fourth Post-Minkowskian order, but with either Feynman, retarded, or advanced propagators, for a total of 576 master integrals. Converting the retarded (respectively advanced) propagator to a Feynman propagator using
\[\frac{i}{(\ell_{0}\pm i\epsilon)^{2}-\vec{\ell}^{2}}=\frac{i}{\ell_{0}^{2}-\vec{\ell}^{2}+i\epsilon}\mp 2\pi\delta(\ell_{0}^{2}-\vec{\ell}^{2})\theta(\mp\ell_{0}) \tag{4.7}\]
allows one to expand the master integrals used in [55] on the basis of master integrals in (4.4).
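As a numerical sanity check of eq. (4.7) — our own sketch, not taken from either computation — one can integrate the difference of the two propagators against a smooth test function \(f\) and compare with the weight of the delta-function term, which for the upper sign is \(-2\pi f(-E)/(2E)\):

```
import numpy as np
from scipy.integrate import quad

E, eps = 1.3, 1e-3
f = lambda x: np.exp(-x**2)          # smooth test function

def retarded_minus_feynman(x):
    # i/((x + i*eps)^2 - E^2) - i/(x^2 - E^2 + i*eps), upper sign in (4.7)
    return 1j/((x + 1j*eps)**2 - E**2) - 1j/(x**2 - E**2 + 1j*eps)

val = quad(lambda x: (f(x)*retarded_minus_feynman(x)).real,
           -10, 10, points=[-E, E], limit=500)[0]
print(val, -np.pi*f(-E)/E)           # both close to -0.446 as eps -> 0
```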
As in ref. [18] we compute the integrals by solving three differential systems of sizes \(40\times 40\), \(54\times 54\) and \(54\times 54\), respectively. There are three regions of integration: potential-potential (PP), potential-radiation (PR) and radiation-radiation (RR). We expand all master integrals in each of these regions, which gives the boundary data needed to solve the differential systems and to check the solutions. In the end, each master integral can be expanded on 9 independent static master integrals (6 for the transverse pieces, 3 for the longitudinal contributions) as
\[\mathcal{I}^{\perp}(\gamma)=\sum_{j=1}^{3}c^{j}_{PP,\perp}(\gamma)I^{j}_{PP, \perp}+(4(\gamma^{2}-1))^{-\epsilon}\sum_{j=1}^{2}c^{j}_{PR,\perp}(\gamma)I^{j }_{PR,\perp}+(4(\gamma^{2}-1))^{-2\epsilon}c_{RR,\perp}(\gamma)I_{RR,\perp} \tag{4.8}\]
and
\[\mathcal{I}^{\parallel}(\gamma)=(4(\gamma^{2}-1))^{-\epsilon}\sum_{j=1}^{2}c^ {j}_{PR,\parallel}(\gamma)I^{j}_{PR,\parallel}+(4(\gamma^{2}-1))^{-2\epsilon}c _{RR,\parallel}(\gamma)I_{RR,\parallel} \tag{4.9}\]
The final step is then to compute each static master integral with the correct constraint on its graviton propagator (Feynman propagator or delta-function) according to the integrand it contributes to.
### The final result for the 4PM momentum kick
#### 4.3.1 The \(\hat{N}\)-matrix elements

For the so-called conservative part (the \(\hat{N}\)-matrix elements), we first recall the results up to 3PM order,
\[\tilde{N}^{(0)}=\frac{Gm_{1}m_{2}(2\gamma^{2}-1)}{\sqrt{\gamma^{2} -1}}\Gamma(-\epsilon)J^{2\epsilon} \tag{4.10}\] \[\tilde{N}^{(1)}=\frac{3\pi G^{2}m_{1}^{2}m_{2}^{2}(m_{1}+m_{2})( 5\gamma^{2}-1)}{4\sqrt{s}}\frac{1}{J} \tag{4.11}\]
\[\tilde{N}^{(2)}=\frac{G^{3}m_{1}^{3}m_{2}^{3}\sqrt{\gamma^{2}-1}}{s}\Bigg{(}\frac{s(64\gamma^{6}-120\gamma^{4}+60\gamma^{2}-5)}{3(\gamma^{2}-1)^{2}}-\frac{4}{3}m_{1}m_{2}\gamma(14\gamma^{2}+25)\\ +\frac{4m_{1}m_{2}(3+12\gamma^{2}-4\gamma^{4})\,\text{arccosh}(\gamma)}{\sqrt{\gamma^{2}-1}}\\ +\frac{2m_{1}m_{2}(2\gamma^{2}-1)^{2}}{\sqrt{\gamma^{2}-1}}\Big{(}\frac{8-5\gamma^{2}}{3(\gamma^{2}-1)}+\frac{\gamma(-3+2\gamma^{2})\,\text{arccosh}(\gamma)}{(\gamma^{2}-1)^{\frac{3}{2}}}\Big{)}\Bigg{)}\frac{1}{J^{2}} \tag{4.12}\]
Almost all of the 4PM part of \(\tilde{N}\) was already computed in ref. [18], except for one term which we correct here. The velocity cuts automatically eliminate super-classical terms, so that the generalized unitarity integrand arises directly from \(\langle p_{1}^{\prime},p_{2}^{\prime}|\hat{T}_{3}|p_{1},p_{2}\rangle+L_{0}\). To this we must add \(L_{1}\), which precisely cancels the imaginary radiation pieces, as at 3PM order. Note also that the real piece from \(L_{1}\) is canceled, by a computation similar to the one in section 3.2. In the end we get
\[\tilde{N}^{(3)}=\tilde{N}^{(3)}_{PP+RR}+\tilde{N}^{(3)}_{PR}+\tilde{L}_{2} \tag{4.13}\]
with
\[\tilde{N}^{(3)}_{PP+RR}=-\frac{G^{4}(m_{1}+m_{2})^{3}m_{1}^{4}m_{2 }^{4}\pi(\gamma^{2}-1)}{8s^{\frac{3}{2}}}\\ \times\Big{(}\mathcal{M}_{4}^{p}+\nu(4\mathcal{M}_{4}^{t}\log \Bigg{(}\frac{\sqrt{\gamma^{2}-1}}{2}\Bigg{)}+\mathcal{M}_{4}^{\pi^{2}}+ \mathcal{M}_{4}^{\rm rem}\Big{)}\Big{)}\frac{1}{J^{3}} \tag{4.14}\]
\[\tilde{N}^{(3)}_{PR}=\frac{G^{4}(m_{1}+m_{2})^{3}m_{1}^{4}m_{2}^{4}\pi(\gamma^ {2}-1)}{8s^{\frac{3}{2}}}\Big{(}\frac{6\nu(2\gamma^{2}-1)(5\gamma^{2}-1) \mathcal{I}(\gamma)}{\sqrt{\gamma^{2}-1}}\Big{)}\frac{1}{J^{3}} \tag{4.15}\]
and
\[\mathcal{I}(\gamma)\equiv\frac{16-10\gamma^{2}}{3(\gamma^{2}-1)}+\frac{2\gamma(-3+2\gamma^{2})\,\text{arccosh}(\gamma)}{\gamma^{2}-1} \tag{4.16}\]
where for convenience of the reader we have separated the pieces in terms of regions of integration (potential P and radiation R) and used the same notation as in ref. [18]. Note that, as already observed in a different context in ref. [67], the \(L_{2}\) Compton-like term that we have in the conservative piece will exactly cancel the one in the second radiative piece.
#### 4.3.2 The first radiation piece
At 3PM order the value of the coefficient of the first radiation piece can be extracted from ref. [40]
\[\tilde{P}_{1,1}^{u_{2},(2)}=\frac{2m_{1}^{2}m_{2}^{3}p_{\infty}^{2}}{J^{3}} \mathcal{E}(\gamma) \tag{4.17}\]
with
\[\frac{\mathcal{E}(\gamma)}{\pi}\equiv\frac{1151-3336\gamma+3148\gamma^{2}-912\gamma^{3}+339\gamma^{4}-552\gamma^{5}+210\gamma^{6}}{48(\gamma^{2}-1)^{\frac{3}{2}}}\\ +\frac{\gamma(-3+2\gamma^{2})(11-30\gamma^{2}+35\gamma^{4})}{16(\gamma^{2}-1)^{2}}\,\text{arccosh}(\gamma)\\ -\frac{-5+76\gamma-150\gamma^{2}+60\gamma^{3}+35\gamma^{4}}{8\sqrt{\gamma^{2}-1}}\log\Big{(}\frac{1+\gamma}{2}\Big{)} \tag{4.18}\]
while at 4PM order we have performed the computation and find for the longitudinal part
\[\tilde{P}_{1,1}^{u_{2},(3)} =\frac{2m_{1}^{2}m_{2}^{3}p_{\infty}^{3}}{J^{4}}\Bigg{(}\frac{(m_{1}g[1]+m_{2}h[1])\pi^{2}}{192(\gamma^{2}-1)^{2}}+\frac{m_{1}g[2]+m_{2}h[2]}{705600\gamma^{8}(\gamma^{2}-1)^{\frac{5}{2}}}\] \[+\Big{(}\frac{m_{1}g[3]+m_{2}h[3]}{6720\gamma^{9}(\gamma^{2}-1)^{3}}+\frac{(m_{1}g[4]+m_{2}h[4])\log(2)}{8(\gamma^{2}-1)^{2}}\Big{)}\,\text{arccosh}(\gamma)\] \[+\Big{(}\frac{m_{1}g[5]+m_{2}h[5]}{(\gamma^{2}-1)^{\frac{7}{2}}}+\frac{m_{1}g[6]+m_{2}h[6]}{(\gamma^{2}-1)^{2}}\Big{)}\,\text{arccosh}^{2}(\gamma)+\frac{m_{1}g[7]+m_{2}h[7]}{8(\gamma^{2}-1)^{2}}\,\text{arccosh}(\gamma)\log(\gamma)\] \[+\frac{m_{1}g[8]+m_{2}h[8]}{8(\gamma^{2}-1)^{2}}\Big{(}\text{arccosh}(\gamma)\log\!\left(\frac{1+\gamma}{2}\right)-2\operatorname{Li}_{2}\Big{(}-\gamma+\sqrt{\gamma^{2}-1}\Big{)}\Big{)}\] \[+\frac{m_{1}g[9]+m_{2}h[9]}{32(\gamma^{2}-1)^{2}}\Big{(}\operatorname{Li}_{2}\Big{(}\frac{\gamma-1}{\gamma+1}\Big{)}-4\operatorname{Li}_{2}\Big{(}\sqrt{\frac{\gamma-1}{\gamma+1}}\Big{)}\Big{)}\] \[-\frac{m_{1}g[10]+m_{2}h[10]}{16(\gamma^{2}-1)^{2}}\operatorname{Li}_{2}\Big{(}-(\gamma-\sqrt{\gamma^{2}-1})^{2}\Big{)}\Bigg{)} \tag{4.19}\]
with
\[g[1] =\gamma(-1485+4993\gamma^{2}-3195\gamma^{4}+1575\gamma^{6})\] \[g[2] =385875-1837500\gamma^{2}+7188300\gamma^{4}-21241500\gamma^{6}+767410066\gamma^{8}\] \[+3966858415\gamma^{10}-3429240286\gamma^{12}-791542442\gamma^{14}+393897472\gamma^{16}\] \[g[3] =3675-19950\gamma^{2}+79800\gamma^{4}-246540\gamma^{6}+222810\gamma^{8}-25426269\gamma^{10}\] \[-37185456\gamma^{12}+46406238\gamma^{14}+2662204\gamma^{16}-3592192\gamma^{18}\] \[g[4] =1263-3883\gamma^{2}+1065\gamma^{4}-525\gamma^{6}\] \[g[5] =32\gamma^{2}(60+35\gamma^{2}-59\gamma^{4}+4\gamma^{8})\] \[g[6] =8\gamma(-9+26\gamma^{2})\] \[g[7] =\gamma(1041-2773\gamma^{2}-1065\gamma^{4}+525\gamma^{6})\] \[g[8] =3(37\gamma-185\gamma^{3}+355\gamma^{5}-175\gamma^{7})\] \[g[9] =6(6-37\gamma-66\gamma^{2}+185\gamma^{3}+210\gamma^{4}-355\gamma^{5}-150\gamma^{6}+175\gamma^{7})\] \[g[10] =\gamma(1041-2773\gamma^{2}-1065\gamma^{4}+525\gamma^{6}) \tag{4.20}\]
\[h[1] =2(2075+17367\gamma^{2}+5553\gamma^{4}-6819\gamma^{6})\] \[h[2] =490\gamma(1575-8250\gamma^{2}+35710\gamma^{4}-142640\gamma^{6}-5560073\gamma^{8}-417302\gamma^{10}+4034092\gamma^{12}\] \[-587336\gamma^{14}+6144\gamma^{16})\] \[h[3] =14\gamma(525-3100\gamma^{2}+13690\gamma^{4}-55260\gamma^{6}+816595\gamma^{8}+3752006\gamma^{10}\] \[-1978290\gamma^{12}-1029342\gamma^{14}+213480\gamma^{16}+24576\gamma^{18})\] \[h[4] =-2(2057+15261\gamma^{2}+3387\gamma^{4}-4321\gamma^{6})\] \[h[5] =-32\gamma(-3+2\gamma^{2})(-8-51\gamma^{2}-6\gamma^{4}+8\gamma^{6})\] \[h[6] =16(16+111\gamma^{2}+18\gamma^{4}-24\gamma^{6})\] \[h[7] =-2(2039+13155\gamma^{2}+1221\gamma^{4}-1823\gamma^{6})\] \[h[8] =-2(9+1053\gamma^{2}+1083\gamma^{4}-1249\gamma^{6})\] \[h[9] =6(36-1209\gamma+4212\gamma^{2}-6422\gamma^{3}+4332\gamma^{4}+1755\gamma^{5}-4996\gamma^{6}+2100\gamma^{7})\] \[h[10] =-2(2039+13155\gamma^{2}+1221\gamma^{4}-1823\gamma^{6}) \tag{4.21}\]
For the transverse part we find
\[\tilde{P}^{b,(3)}_{1,1}=-\frac{2m_{1}^{2}m_{2}^{2}p_{\infty}^{4}}{J^{4}}\Bigg{(}\Big{(}-\frac{2\gamma^{2}-1}{\gamma^{2}-1}\mathcal{C}(\gamma)+\frac{\gamma(-3+2\gamma^{2})}{(\gamma^{2}-1)^{\frac{3}{2}}}\mathcal{E}(\gamma)\Big{)}(m_{1}+m_{2})\\ +\frac{2\gamma^{2}-1}{(\gamma+1)\sqrt{\gamma^{2}-1}}\mathcal{E}(\gamma)m_{1}\Bigg{)} \tag{4.22}\]
with
\[\frac{\mathcal{C}(\gamma)}{\pi}\equiv\frac{-237+386\gamma+111\gamma^{2} -683\gamma^{3}+537\gamma^{4}+240\gamma^{5}-411\gamma^{6}+105\gamma^{7}}{24( \gamma^{2}-1)^{2}}\\ -\frac{\gamma(-3+2\gamma^{2})(-12+19\gamma+72\gamma^{2}-70\gamma^ {3}-60\gamma^{4}+35\gamma^{5})}{8(\gamma^{2}-1)^{\frac{5}{2}}}\,\text{arccosh}( \gamma)\\ +\frac{-62+155\gamma+16\gamma^{2}-70\gamma^{3}-90\gamma^{4}+35 \gamma^{5}}{4(\gamma^{2}-1)}\log\biggl{(}\frac{1+\gamma}{2}\biggr{)} \tag{4.23}\]
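For reference, these coefficient functions are straightforward to evaluate numerically. The helper below (ours, for illustration only; the function name is not from the paper) implements \(\mathcal{E}(\gamma)\) of eq. (4.18) for \(\gamma>1\):

```
import math

def calE(g):
    # E(gamma) of eq. (4.18), valid for gamma > 1
    s = math.sqrt(g**2 - 1)
    t1 = (1151 - 3336*g + 3148*g**2 - 912*g**3 + 339*g**4
          - 552*g**5 + 210*g**6) / (48 * s**3)
    t2 = g*(2*g**2 - 3)*(35*g**4 - 30*g**2 + 11) / (16 * s**4) * math.acosh(g)
    t3 = -(35*g**4 + 60*g**3 - 150*g**2 + 76*g - 5) / (8 * s) * math.log((1 + g)/2)
    return math.pi * (t1 + t2 + t3)

print(calE(1.5))   # e.g. a mildly relativistic value of gamma
```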
#### 4.3.3 The second radiation piece
The contributions from the second radiation piece match exactly the result of ref. [54] with
\[\tilde{P}_{1,2}^{b,(3)}=-\frac{6ip_{\infty}^{4}}{J^{4}}c_{1\text{b},2\text{rad}}^{(4)\text{diss}} \tag{4.24}\]
and
\[\tilde{P}_{1,2}^{u_{2},(3)}=-\frac{6im_{2}p_{\infty}^{3}}{J^{4}}c_{1u_{2},2\text{rad}}^{(4)\text{diss}} \tag{4.25}\]
Finally, when inserting all integrals into the formula of eq. (3.85) we find complete agreement with ref. [54]. This amplitude-based approach, which combines the exponential representation of the gravitational \(S\)-matrix with the KMOC formalism, thus yields a result for the momentum kick that is in full agreement with the worldline calculation of ref. [54].
## 5 Conclusion
The exponential representation of the \(S\)-matrix [22] is a natural starting point for a semi-classical analysis of quantum field theory. Matrix elements of the \(\hat{N}\)-operator in the exponent of the \(S\)-matrix are by construction free of superclassical terms; they therefore provide, at leading order, the classical part, followed by quantum corrections. Using the KMOC-formalism, we have shown how the exponential representation of the \(S\)-matrix makes manifest the cancellation of superclassical contributions in the conservative sector. One advantage of working with the \(\hat{N}\)-matrix rather than the conventional \(\hat{T}\)-matrix is indeed that it bypasses the need to ensure the delicate cancellation between superclassical terms of the \(\hat{T}\)-matrix. Instead, by extracting the relevant pieces of the \(\hat{N}\)-matrix by means of velocity cuts we automatically retrieve the classical terms. Pictorially speaking, the velocity cuts introduced in [16] localize the massive scattering states on classical on-shell trajectories. As shown in section 3.1 of the present paper, the two-to-two massive matrix elements of the
\(\hat{N}\)-operator, Fourier-transformed to impact parameter space, yield the radial action of the conservative sector. This proves the conjectured relation put forward in ref. [22].
Including gravitational radiation, the \(\hat{N}\)-operator is still a basic building block of the KMOC-formalism and as an example we have shown how the momentum kick in the scattering of two black holes can be compactly described by matrix elements of \(\hat{N}\). We have provided the explicit formulas up to and including fourth Post-Minkowskian order, but the framework is iterative and it is straightforward to derive corresponding expressions to arbitrarily high order in Newton's constant \(G\). As an application we have explicitly derived the momentum kick at fourth Post-Minkowskian order. Our results are in agreement with [54; 55]. As is well known, and somewhat disturbingly, the result leads to a scattering angle that diverges at high energy if one applies the scattering-angle expression of ref. [11]. The solution for the integrals used here and in the references above is the one connecting smoothly to the Post-Newtonian expansion. We cannot exclude that another solution exists which is valid at high energy only and without a smooth connection to the Post-Newtonian limit. This possibility seems to deserve attention. Alternatively, one could consider doing a new fourth-order calculation from scratch with massless scalars.
The resulting relationship between the KMOC-formalism and the exponential representation of the \(S\)-matrix is very simple and of a universal form involving trigonometric functions together with iterated commutators. This trigonometric structure arises from \(\hat{N}\) being the exponential phase operator of the \(S\)-matrix and is thus closely linked to the Euler formula. Beyond the conservative parts, the operator identities involved lead to additional terms but the structure of nested commutators is responsible for the simple algebraic relations that iteratively build up observables to higher and higher orders in the gravitational coupling constant.
In the end, the expression for classical observables including all dissipative effects becomes remarkably simple by combining the KMOC formalism with the exponential representation of the \(S\)-matrix. The full calculation reduces to scattering amplitude evaluations for which modern techniques have become highly developed. There is thus no need to distinguish between different pieces or to separate the amplitude calculation into different types of contributions; one must only retain all classical terms, as this provides the full classical answer.
## Acknowledgements
We thank Thibault Damour for comments. P.V. would like to thank the LAPTh for the hospitality during the completion of this work. The work of P.H.D. was supported in part by DFF grant 0135-00089A, the work of E.R.H. was supported by the Rozenthal Foundation and ERC Starting Grant No. 757978 from the European Research Council, and the research of P.V. has received funding from the ANR grant "SMAGP" ANR-20-CE40-0026-01. |
2310.07858 | QArchSearch: A Scalable Quantum Architecture Search Package | The current era of quantum computing has yielded several algorithms that
promise high computational efficiency. While the algorithms are sound in theory
and can provide potentially exponential speedup, there is little guidance on
how to design proper quantum circuits to realize the appropriate unitary
transformation to be applied to the input quantum state. In this paper, we
present \texttt{QArchSearch}, an AI based quantum architecture search package
with the \texttt{QTensor} library as a backend that provides a principled and
automated approach to finding the best model given a task and input quantum
state. We show that the search package is able to efficiently scale the search
to large quantum circuits and enables the exploration of more complex models
for different quantum applications. \texttt{QArchSearch} runs at scale and high
efficiency on high-performance computing systems using a two-level
parallelization scheme on both CPUs and GPUs, which has been demonstrated on
the Polaris supercomputer. | Ankit Kulshrestha, Danylo Lykov, Ilya Safro, Yuri Alexeev | 2023-10-11T20:00:33Z | http://arxiv.org/abs/2310.07858v1 | # QArchSearch: A Scalable Quantum Architecture Search Package
###### Abstract.
The current era of quantum computing has yielded several algorithms that promise high computational efficiency. While the algorithms are sound in theory and can provide potentially exponential speedup, there is little guidance on how to design proper quantum circuits to realize the appropriate unitary transformation to be applied to the input quantum state. In this paper, we present QArchSearch, an AI based quantum architecture search package with the QTensor library as a backend that provides a principled and automated approach to finding the best model given a task and input quantum state. We show that the search package is able to efficiently scale the search to large quantum circuits and enables the exploration of more complex models for different quantum applications. QArchSearch runs at scale and high efficiency on high-performance computing systems using a two-level parallelization scheme on both CPUs and GPUs, which has been demonstrated on the Polaris supercomputer.
Ankit Kulshrestha, Danylo Lykov, Ilya Safro, and Yuri Alexeev. 2023. QArchSearch: A Scalable Quantum Architecture Search Package 1, 1 (October 2023), 10 pages. [https://doi.org/10.1145/nnnnnn.nnnnnn](https://doi.org/10.1145/nnnnnn.nnnnnn)
## 1. Introduction
Quantum computing is a nascent and rapidly growing field that holds the promise of accomplishing tasks that were hitherto thought to be computationally intractable by classical computers. In the current era, we have access to noisy intermediate-scale quantum (NISQ) computers that make it possible to run hybrid quantum algorithms to tackle problems in computational chemistry, finance (Herman et al., 2023), optimization (Ushijima-Mwesigwa et al., 2021) and related fields (Shaydulin et al., 2019). These algorithms leverage a "variational" method in which quantum circuit parameters have to be trained through a classical optimization procedure (generally run on a classical co-processor).
In their most abstract form, variational quantum circuits are parameterized linear unitary transformations \(U(\mathbf{\theta})\) that map an input quantum state \(\ket{\psi}_{in}\) to an output quantum state \(\ket{\psi}_{out}=U(\mathbf{\theta})\ket{\psi}_{in}\). Currently, the goal in variational quantum algorithms (VQAs) is to find \(\mathbf{\theta}^{*}=\arg\min_{\theta}C(\mathbf{\theta})\) where \(C(\mathbf{\theta})\) is some cost function that quantifies the quality of the output. However, in this paper we consider an alternative problem: we aim to find the best possible circuit representing \(U(\mathbf{\theta})\) given the input \(\ket{\psi}_{in}\) and the cost function \(C(\mathbf{\theta})\).
Finding an appropriate quantum circuit architecture for a given application is a computationally intensive search procedure that requires evaluating several candidate quantum operations across different qubits and selecting the best performing circuit from amongst them. Our focus in this paper is to demonstrate how we scale such a search procedure across state-of-the-art HPC infrastructure, aiding the search by training deep neural networks to suggest good circuit structures. We call our software QArchSearch and include it as part of the widely available QTensor package.
In this work, we will use the Quantum Approximate Optimization Algorithm (QAOA) [Farhi et al. 2014] for the graph maxcut problem as the driver application of the QArchSearch package. Briefly, for a given simple undirected graph \(G=(\mathcal{V},\mathcal{E})\), the graph maxcut problem aims to find a maximum "cut set", i.e., a partition of nodes into two disjoint parts to maximize the number (or the total weight in case the graph is weighted) of edges that span both parts. The cost function for max cut is given as [Farhi et al. 2014]:
\[C_{MC}(\mathbf{z})=\frac{1}{2}\sum_{(u,v)\in\mathcal{E}}(1-z_{u}z_{v}), \tag{1}\]
where \(z_{i}\in\{-1,+1\}\) is an indicator variable for node \(i\) that corresponds to the set membership of the given node. In the QAOA setup, we start with an initial state \(\ket{s}=\ket{+}^{\otimes^{n}}\) where \(\ket{+}=\frac{\ket{0}+\ket{1}}{\sqrt{2}}\). A \(p\)-layer alternating ansatz is then applied to the initial input state:
\[\ket{\mathbf{\gamma},\mathbf{\beta}}=e^{-i\beta_{P}B}e^{-i\gamma_{P}C}\ldots e^{-i \beta_{1}B}e^{-i\gamma_{1}C}\ket{s}. \tag{2}\]
Here, \(\mathbf{\gamma},\mathbf{\beta}\in\mathbb{R}^{P}\) are parameters of the cost operator \(C\) and the mixer operator \(B\), respectively. The cost function is measured by computing \(\bra{\mathbf{\gamma},\mathbf{\beta}}C(\mathbf{z})\ket{\mathbf{\gamma},\mathbf{\beta}}\). In QAOA problems, the structure of the cost operator \(C\) is generally guided
Figure 1: An overview of the search process in QArchSearch software
by the problem we are interested in optimizing, but the structure of the mixer operator is an open design problem. In our application, QArchSearch is responsible for searching for low-depth mixers using the process depicted in Figure 1.
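To make eq. (1) concrete, here is a minimal Python sketch (ours; the use of networkx for graph handling is our assumption, not something the paper specifies) that evaluates the maxcut cost of a \(\pm 1\) assignment:

```
import networkx as nx

def maxcut_cost(graph, z):
    # C_MC(z) = 1/2 * sum over edges (u, v) of (1 - z_u * z_v), eq. (1)
    return 0.5 * sum(1 - z[u] * z[v] for u, v in graph.edges())

G = nx.erdos_renyi_graph(10, 0.5, seed=1)            # a 10-node ER instance
z = {v: 1 if v % 2 == 0 else -1 for v in G.nodes()}  # one candidate partition
print(maxcut_cost(G, z))                             # number of edges cut
```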
## 2. Methodology
### QArchSearch
The QArchSearch software has three key components:
* Predictor module: This module accepts a tensor that represents the rotation gates and entanglement operators and generates a new circuit representation that is passed to the quantum builder module.
* Quantum Builder (a.k.a QBuilder): This module accepts the encoded tensor representation from the predictor module and generates the appropriate quantum circuit in an available quantum computing software. In our work, the circuits are generated using Qiskit software. The generated circuit is then passed to the evaluator.
* Evaluator Module: This module is responsible for training the generated quantum circuit on the QAOA cost function in Equation 1. The trained circuit is then evaluated and the reward is propagated back to the predictor module.
Algorithm 1 shows the overall search procedure for finding the best performing QAOA mixer circuit from a given gate alphabet \(\mathcal{A}_{R}\). The current version of the search algorithm is an instance of random search, which has been shown to be a strong baseline in neural architecture search (Li and Talwalkar, 2020). We perform a search by varying the depth \(p\) from 1 to the desired maximum depth. For each \(p\) we explore each possible gate combination (Line 5) and construct a mixer circuit based on the nodes in the current graph (Line 6). We then instantiate the QAOA ansatz and run the variational algorithm for 200 steps with the COBYLA optimizer. The obtained energy is added to a global collection (Line 9). At the end of exploring all possible gate combinations, we select the best performing mixer circuit and compare it to the previously existing best performing mixer circuit if it exists (Line 10). The final best performing mixer circuit and corresponding cut energy are then returned to the main calling procedure.
```
0:\(\theta\): parameters of quantum circuit, \(\mathcal{A}_{R}\): Gate alphabet, \(p_{max}\): depth of QAOA ansatz, \(K_{max}\): maximum number of possible gate combinations \(G\): input graph
1:\(U_{B}^{best}\leftarrow\phi\)
2:for\(p:1\dots p_{max}\)do
3: energies \(\leftarrow\) {}
4:for\(k:1\dots K_{max}\)do
5: gate_comb \(\leftarrow\) GET_COMBINATIONS(\(\mathcal{A}_{R}\), k)
6:\(U_{B}\leftarrow\) BUILD_MIXER_CKT(\(G\), gate_comb)
7:\(U_{QAOA}(\theta)\leftarrow\) BUILD_QAOA_CKT(\(U_{B}\), \(p\))
8:\(\langle C\rangle\leftarrow\) SIMULATE_QAOA(\(G\), \(U_{QAOA}(\theta)\))
9: energies \(\leftarrow\) APPEND(energies, \(\langle C\rangle\))
10:\(U_{B}^{best}\leftarrow\) SELECT_BEST(energies, \(U_{B}^{best}\))
11:return \(U_{B}^{best}\), \(\langle C_{best}\rangle\)
```
**Algorithm 1** QArchSearch for QAOA Mixer
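The condensed Python sketch below (our illustration, not the released package code) mirrors the loop structure of Algorithm 1; we model GET_COMBINATIONS as combinations with replacement, and simulate_qaoa is a stand-in for the QTensor simulation with 200 COBYLA steps in Line 8:

```
from itertools import combinations_with_replacement

ALPHABET = ["rx", "ry", "rz", "rxx", "ryy"]  # example alphabet with |A_R| = 5

def search_best_mixer(graph, p_max, k_max, simulate_qaoa):
    best_mixer, best_energy = None, float("-inf")
    for p in range(1, p_max + 1):                         # Line 2
        for k in range(1, k_max + 1):                     # Line 4
            for gate_comb in combinations_with_replacement(ALPHABET, k):
                energy = simulate_qaoa(graph, gate_comb, p)  # Lines 6-8
                if energy > best_energy:                     # Line 10
                    best_mixer, best_energy = gate_comb, energy
    return best_mixer, best_energy                        # Line 11

dummy = lambda graph, comb, p: -len(comb) * p             # placeholder evaluator
print(search_best_mixer(None, p_max=2, k_max=2, simulate_qaoa=dummy))
```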
### QTensor
In this work, we used QTensor, the Argonne-developed tensor network simulator (Lykov and Alexeev, 2021; Lykov et al., 2022, 2023). It was developed for running large-scale quantum circuit simulations on modern GPU-based supercomputers and has been used to perform the largest QAOA simulations in the world. QTensor utilizes state-of-the-art heuristic tensor contraction order optimizers (both third-party and custom), which substantially reduce the simulation cost by minimizing the contraction width of the contraction sequence. We used a number of techniques to speed up simulations.
QTensor has support for a few tensor contraction libraries (backends) for contracting tensors efficiently. In this work, we used NumPy for tensor contraction on CPUs. The code is freely available on GitHub [QTe [n. d.]].
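With the NumPy backend, each pairwise contraction in the tensor network reduces to an einsum call. A toy example (ours, not QTensor internals):

```
import numpy as np

a = np.random.rand(2, 2, 2)           # tensor with indices i, j, k
b = np.random.rand(2, 2)              # tensor with indices k, l
c = np.einsum("ijk,kl->ijl", a, b)    # contract over the shared index k
print(c.shape)                        # (2, 2, 2)
```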
## 3. Experiments and Results
In this section we present our results on the single and multi-core performance of QArchSearch. We then show that the circuit resulting from our search procedure generalizes to unseen graph instances and can achieve better max-cut energies even at low depths.
### Performance Profiling Results
To perform the performance profiling, we first implemented a serially executed search procedure that examined every possible rotation gate combination and simulated the resulting circuit for depths \(p=1\dots 4\). For each depth, we
Figure 3. The CPU-level parallelism exposed by the current version of QArchSearch.
Figure 2. The parallelization architecture within each node and between nodes using the QArchSearch and QTensor packages on the Polaris supercomputer.
performed a combination of \(k=1\ldots 4\) gates over the given rotation gate alphabet \(\mathcal{A}_{R}\) with \(|\mathcal{A}_{R}|=5\), leading to \(2500\) possible circuit combinations. All search profiling was performed on a dataset of \(20\) Erdos-Renyi graphs with \(10\) nodes and varying degrees of connectivity.
**Serial Search Process**: We first profiled a serial search process that sequentially examined each possible gate combination for a given depth \(p\). The expected run time of the algorithm was thus \(O(pk)\) where \(k\) was the number of gates selected from \(\mathcal{A}_{R}\).
**Parallelizing Architecture Search**: To speed up the search process, it was necessary to parallelize the algorithm without degrading the quality of the search results. We identified the sequential simulation of gate combinations for a given graph and depth as a major computational bottleneck. Hence, our focus was to improve the run time by searching multiple possible gate combinations in parallel. This strategy is shown in Figure 3.
To accomplish the aforementioned objective, we opted for process-level parallelism that can take advantage of multiple CPUs on a single node of a HPC cluster. We used Python's multiprocessing library's starmap_async method to create a pool of processes on different CPUs that executed the optimization objective of Equation 1 with different gate combinations in parallel. The run time was thus reduced from \(O(pk)\) to \(O(p)\) for a single graph.
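A minimal sketch of this scheme (ours; evaluate is a dummy stand-in for the actual QAOA simulation of one gate combination):

```
from multiprocessing import Pool

def evaluate(graph, gate_comb, p):
    # stand-in for: build the mixer, run 200 COBYLA steps, return the energy
    return -len(gate_comb)

def parallel_search(graph, gate_combs, p, n_workers):
    with Pool(processes=n_workers) as pool:
        result = pool.starmap_async(evaluate,
                                    [(graph, comb, p) for comb in gate_combs])
        return result.get()          # one energy per gate combination

if __name__ == "__main__":
    print(parallel_search("G", [("rx",), ("rx", "ry")], p=2, n_workers=2))
```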
**Results**: The results of our profiling experiments are shown in Figure 4 and Figure 5. Figure 4 shows the improvement in the run time of the algorithm with increasing depth of the QAOA ansatz. We note that in the serial case, the growth in the run time is quadratic as \(p\approx k\). However, in the parallel case the run time is improved by over \(50\%\) even when the depth approaches the maximum possible gate combinations.
Figure 4: Time to simulate circuits with serial and parallel quantum NAS procedure. The results are averaged over five separate runs of the NAS algorithm on different Erdos-Renyi Graphs
Figure 5 shows the time to simulate for a graph with \(p=2\) and the number of available CPUs varied from 8 to 64 in increments of 8. We can see that our parallel version can efficiently utilize the available CPUs and is 0.76 times faster than the serial algorithm for the same graph and \(p\).
### Performance of Discovered Circuit
Once the search procedure was run, we evaluated the possible discovered combinations of the mixer layer on a separate dataset of 20 random 4-regular graphs with 10 nodes each. The best performing mixer circuit is shown in Figure 6. For each discovered circuit, we calculated the approximation ratio \(r\) defined as:
\[r=\frac{\langle C_{max}\rangle}{C_{classical}} \tag{3}\]
where \(\langle C_{max}\rangle\) is the expected energy of the largest cut discovered by the given quantum circuit. The approximation ratio measures the quality of the solutions discovered by the quantum procedure as compared to a classical one. The results are shown in Figure 7. We can clearly see that the best performing mixer layer combination achieves the highest approximation ratio for a low \(p\) value.
We further compare the performance of the searched mixer circuit on the ER and random regular graphs with the default mixer choice for maxcut QAOA. Figure 8 shows the results for average \(r\) obtained by the searched mixer and baseline mixer. These results were averaged by computing energies with \(p=1,2,3\). We can see that the searched mixer yields a higher average approximation ratio on ER random graphs.
In the case of random regular graphs, both mixer circuits perform comparably at all values of \(p\). These results are shown in Figure 9. We show individual \(r\) values since the aggregated values over \(p\) are equal (1.0).
Figure 5. Time to simulate a graph with \(p=2\) with different number of cores available on a HPC cluster. The dashed red line indicates the time to simulate the same graph with serial search.
Overall, the mixer found by our search algorithm generally performs better with lower resource usage on different types of random graphs. Moreover, we show that our algorithm is able to extract the best _generally_ performing circuit given data and some evaluation metric.
## 4. Key Challenges and Future Work
In this paper we have demonstrated that parallelizing the search algorithm is extremely important and requires careful design to obtain a meaningful speedup for large problem instances. We now discuss some important directions that are
Figure 6. Best performing searched mixer circuit for Max-cut QAOA
Figure 7. Approximation ratios obtained for \(p=1\) on 4-regular random graphs. All parameterized gates in the mixer circuit share the same parameter and hence do not incur additional computational cost.
currently in development for QArchSearch.
**GPU Integration**: In this work, we noted that simulating quantum circuits was another computational bottleneck. However, improving the run time of quantum circuit simulations requires a different strategy since state vectors cannot be arbitrarily chunked and passed to multiple CPUs in a cluster. One possible way to improve the runtime is then to
Figure 8: Comparison of \(r\) obtained by the baseline and searched (qnas) mixer circuits
Figure 9: Approximation ratios obtained by baseline and qnas mixer circuits on 10 node random regular graph of degree 4.
consider running the simulations on a GPU device. The future versions of QArchSearch will tightly integrate with QTensor to allow a user to seamlessly select a GPU backend whenever possible.
**Deep Neural Network based Search**: In this work we employed a version of random search to search for possible combinations of mixer circuits for the maxcut QAOA problem. Since this was a less complex problem than searching for full quantum circuits, random search returned strong, well-generalizing mixer circuits. However, our aim is to discover the best quantum circuits for _any_ given dataset and performance measure.
In the upcoming version of QArchSearch we will integrate several deep neural network based search algorithms like (Zhou et al., 2018; Zoph and Le, 2016).
## 5. Related Work
Quantum architecture search is a very important and active area of research and different works have considered the problem from different angles.
Fosel _et al._ (Fosel et al., 2021) consider the problem of optimizing the design of quantum circuits by first proposing inefficient circuits and then training a DNN to optimize the circuit given a desired circuit metric (e.g., number of gates). Ostaszewski _et al._ (Ostaszewski et al., 2021) also propose to use deep reinforcement learning (RL) to obtain a good circuit for solving the VQE problem (Peruzzo et al., 2014). Another hybrid method is considered by (Duong et al., 2022), where they propose to use Bayesian Optimization (BO) to discover optimal circuit architectures for a QNN given a particular dataset and loss function. Finally, a pure quantum architecture search is proposed by Du _et al._ (Du et al., 2022), where they utilize a quantum "supercircuit" to search for child quantum circuits that satisfy a given metric. To reduce computational cost, the parameters are shared amongst all child circuits.
In the hybrid approach (i.e., using a DNN to discover circuits) a major bottleneck is the sample-inefficient nature of RL algorithms. Typically, it takes days if not weeks to find a good candidate architecture on a given dataset. We note that our proposed software also falls in the hybrid category of algorithms. Our objective with QArchSearch is to reduce this search time to a couple of hours. One of the reasons we do not opt for a pure quantum architecture search procedure is the inherent issue of scalability. For instance, (Du et al., 2022) note that they are unable to search for circuits beyond 2 or 3 qubits. In order for a general architecture search package to be useful, we desire that it scale to arbitrarily many qubits.
## 6. Conclusions
In this work, we demonstrated the implementation of the QArchSearch package, which finds short-depth compact quantum circuits and architectures for a given objective function using the quantum simulator QTensor. QArchSearch runs at scale and high efficiency on high-performance computing systems using a two-level parallelization scheme on both CPUs and GPUs, which has been demonstrated on the 44-Petaflop supercomputer Polaris located in the Argonne Leadership Computing Facility (Pol [n. d.]).
Our software satisfies a critical need in the quantum computing community - a scalable software package that automates the search for candidate quantum architectures for a given problem. Our software can also incorporate arbitrary constraints in the search procedure and thus deliver custom architectures that exceed the performance of manually designed ones. In order to satisfy the needs of scalability and speed, we leverage reinforcement learning techniques running
on GPUs and parallelize the search process on a large scale HPC system. Our belief is that our software will enable quantum computing researchers to find shorter-depth circuits for various advanced applications in the field.
## 7. Acknowledgements
This work used in part the resources of the Argonne Leadership Computing Facility, which is a Department of Energy Office of Science User Facility supported under Contract DE-AC02-06CH11357. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Energy or the U.S. Government. This work was supported in part with funding from the Defense Advanced Research Projects Agency (DARPA).
|
2303.07960 | Coloring and Recognizing Directed Interval Graphs | A \emph{mixed interval graph} is an interval graph that has, for every pair
of intersecting intervals, either an arc (directed arbitrarily) or an
(undirected) edge. We are particularly interested in scenarios where edges and
arcs are defined by the geometry of intervals. In a proper coloring of a mixed
interval graph $G$, an interval $u$ receives a lower (different) color than an
interval $v$ if $G$ contains arc $(u,v)$ (edge $\{u,v\}$). Coloring of mixed
graphs has applications, for example, in scheduling with precedence
constraints; see a survey by Sotskov [Mathematics, 2020]. For coloring general
mixed interval graphs, we present a $\min \{\omega(G), \lambda(G)+1
\}$-approximation algorithm, where $\omega(G)$ is the size of a largest clique
and $\lambda(G)$ is the length of a longest directed path in $G$. For the
subclass of \emph{bidirectional interval graphs} (introduced recently for an
application in graph drawing), we show that optimal coloring is NP-hard. This
was known for general mixed interval graphs. We introduce a new natural class
of mixed interval graphs, which we call \emph{containment interval graphs}. In
such a graph, there is an arc $(u,v)$ if interval $u$ contains interval $v$,
and there is an edge $\{u,v\}$ if $u$ and $v$ overlap. We show that these
graphs can be recognized in polynomial time, that coloring them with the
minimum number of colors is NP-hard, and that there is a 2-approximation
algorithm for coloring. | Grzegorz Gutowski, Konstanty Junosza-Szaniawski, Felix Klesen, Paweł
Rzążewski, Alexander Wolff, Johannes Zink | 2023-03-14T15:04:15Z | http://arxiv.org/abs/2303.07960v2 | # Coloring and Recognizing Directed Interval Graphs
###### Abstract
A _mixed interval graph_ is an interval graph that has, for every pair of intersecting intervals, either an arc (directed arbitrarily) or an (undirected) edge. We are interested in mixed interval graphs where the type of connection of two vertices is determined by geometry. In a proper coloring of a mixed interval graph \(G\), an interval \(u\) receives a lower (different) color than an interval \(v\) if \(G\) contains arc \((u,v)\) (edge \(\{u,v\}\)).
We introduce a new natural class of mixed interval graphs, which we call _containment interval graphs_. In such a graph, there is an arc \((u,v)\) if interval \(u\) contains interval \(v\), and there is an edge \(\{u,v\}\) if \(u\) and \(v\) overlap. We show that these graphs can be recognized in polynomial time, that coloring them with the minimum number of colors is NP-hard, and that there is a 2-approximation algorithm for coloring.
For coloring general mixed interval graphs, we present a \(\min\{\omega(G),\lambda(G)\}\)-approximation algorithm, where \(\omega(G)\) is the size of a largest clique and \(\lambda(G)\) is the length of a longest induced directed path in \(G\). For the subclass of _bidirectional interval graphs_ (introduced recently), we show that optimal coloring is NP-hard.
Interval Graphs, Mixed Graphs, Graph Coloring

Grzegorz Gutowski: partially supported by the National Science Center of Poland under grant no. 2019/35/B/ST6/02472.
## 1 Introduction
In a geometric intersection graph, the vertices represent geometric objects, and two vertices are adjacent if and only if the corresponding objects intersect. For example, _interval graphs_ are the intersection graphs of intervals on the real line. These graphs are well understood: interval graphs are _chordal_ and can thus be colored optimally (that is, with the least number of colors) in polynomial time. In other words, given an interval graph \(G\), its _chromatic number_\(\chi(G)\) can be computed efficiently.
The notion of coloring can be adapted to directed graphs, where an arc \((u,v)\) means that the color of \(u\) must be smaller than that of \(v\). Clearly, such a coloring can only exist if the given graph is acyclic. Given a directed acyclic graph, its chromatic number can be computed efficiently (via topological sorting).
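For illustration (our sketch, not from the paper): the efficient procedure is the greedy rule that processes the vertices in topological order and gives each vertex one plus the largest color among its in-neighbors, which uses exactly as many colors as the longest directed path has vertices:

```
from graphlib import TopologicalSorter

def dag_coloring(vertices, arcs):
    preds = {v: [] for v in vertices}
    for u, v in arcs:
        preds[v].append(u)
    color = {}
    for v in TopologicalSorter(preds).static_order():  # predecessors first
        color[v] = 1 + max((color[u] for u in preds[v]), default=0)
    return color

print(dag_coloring(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]))
# e.g. {'a': 1, 'b': 2, 'c': 3}
```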
A generalization of both undirected and directed graphs are _mixed graphs_ that have edges and arcs. A _proper coloring_ of a mixed graph \(G\) is a function \(f\colon V(G)\to\mathbb{N}\) such that, for any distinct vertices \(u\) and \(v\) of \(G\), the following two conditions hold:
**1.** if there is an edge \(\{u,v\}\), then \(f(u)\neq f(v)\), and
**2.** if there is an arc \((u,v)\), then \(f(u)<f(v)\).
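These two conditions translate directly into a checker (our sketch; representing the mixed graph by explicit edge and arc sets is our choice):

```
def is_proper(edges, arcs, f):
    ok_edges = all(f[u] != f[v] for u, v in edges)  # condition 1
    ok_arcs = all(f[u] < f[v] for u, v in arcs)     # condition 2
    return ok_edges and ok_arcs

# a gets a lower color than c (arc), and a, b get different colors (edge)
print(is_proper(edges={("a", "b")}, arcs={("a", "c")},
                f={"a": 1, "b": 2, "c": 2}))        # True
```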
The concept of mixed graphs was introduced by Sotskov and Tanaev [11] and reintroduced by Hansen, Kuplinsky, and de Werra [4] in the context of proper colorings of mixed graphs. Coloring of mixed graphs was used to model problems in scheduling with precedence constraints; see a survey by Sotskov [10]. The problem is NP-hard even for bipartite planar graphs [9] but admits efficient algorithms for trees [1] and series-parallel graphs [2].
In this paper, we study _mixed interval graphs_, that is, mixed graphs whose vertices correspond to intervals. If two intervals intersect, the corresponding vertices are connected either by an edge or by an arc (in one of the two directions). We are particularly interested in subclasses of mixed interval graphs where the type of connection of two vertices is determined by geometry, that is, by the relative position of the two corresponding intervals.
For example, Gutowski et al. [3] have introduced _directional interval graphs_, where there is an edge between two intervals if one is contained in the other, and there is an arc between every two overlapping intervals, directed towards the interval that starts and ends to the right. They also introduced _bidirectional interval graphs_, where each interval is either _left-going_ or _right-going_. For left-going intervals, the arcs are defined as in directional interval graphs. For right-going intervals, the symmetric definition applies. Moreover, two intervals are connected by an edge if one is contained in the other or if they overlap but go in different directions. Coloring such graphs has applications in routing edges in layered orthogonal graph drawing according to the so-called Sugiyama framework [12]; the colors correspond to the tracks for routing the edges [13]. Hence, coloring an auxiliary graph with fewer colors yields a more compact layout. Gutowski et al. showed that directional interval graphs can be recognized in quadratic time and that their chromatic number can be computed efficiently, whereas coloring general mixed interval graphs optimally is NP-hard. Clearly, the optimal coloring algorithm for directional interval graphs yields a 2-approximation for coloring bidirectional interval graphs [3].
### Our Contribution.
We introduce a new natural class of mixed interval graphs, which we call _containment interval graphs_. In such a graph, there is an arc \((u,v)\) if interval \(u\) contains interval \(v\), and there is an edge \(\{u,v\}\) if \(u\) and \(v\) overlap. For a set \(\mathcal{I}\) of intervals, let \(C[\mathcal{I}]\) be the containment interval graph induced by \(\mathcal{I}\). We show that these graphs can be recognized in polynomial time (Section 2), that coloring them optimally is NP-hard (Section 4), and that for every set \(\mathcal{I}\) of intervals, it holds that \(\chi(C[\mathcal{I}])\leq 2\omega(C[\mathcal{I}])-1\), that is, \(C[\mathcal{I}]\) can be colored with fewer than twice as many colors as the size of the largest clique in \(C[\mathcal{I}]\) (Section 3). Our constructive proof yields a 2-approximation algorithm for coloring containment interval graphs.
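For illustration, the following sketch builds \(C[\mathcal{I}]\) from intervals given as (left, right) pairs, assuming pairwise distinct endpoints so that every intersecting pair is either strictly nested (an arc) or properly overlapping (an edge):

```python
def containment_interval_graph(intervals):
    """Build C[I] for intervals given as a dict name -> (left, right).
    Assumes all endpoints are pairwise distinct. Returns (edges, arcs)."""
    edges, arcs = [], []
    names = sorted(intervals, key=lambda u: intervals[u][0])  # by left endpoint
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            (lu, ru), (lv, rv) = intervals[u], intervals[v]
            if ru < lv:
                continue                 # disjoint intervals: no relation
            if rv < ru:
                arcs.append((u, v))      # u contains v (since lu < lv)
            else:
                edges.append({u, v})     # proper overlap
    return edges, arcs

edges, arcs = containment_interval_graph({"a": (0, 10), "b": (1, 5), "c": (4, 12)})
print(edges, arcs)  # [{'a', 'c'}, {'b', 'c'}] [('a', 'b')] (set order may vary)
```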
Then we prove that, for the class of bidirectional interval graphs, optimal coloring is NP-hard (Section 5). Finally, we show that, for any mixed interval graph \(G\) without directed cycles, it holds that \(\chi(G)\leq\omega(G)\cdot\lambda(G)\), where \(\lambda(G)\) denotes the length of a longest induced directed path in \(G\) (Section 6). Since \(\chi(G)\geq\max\{\omega(G),\lambda(G)\}\), our constructive proof for the upper bound yields a \(\min\{\omega(G),\lambda(G)\}\)-approximation algorithm. The upper bound is asymptotically tight in the worst case.
Table 1 gives an overview of known and new results concerning the above-mentioned subclasses of mixed interval graphs. Given a positive integer \(k\), we use \([k]\) as shorthand for
the set \(\{1,2,\ldots,k\}\). When we visualize a graph coloring corresponding to a set of intervals, we use horizontal tracks to indicate the color. Recall that a _proper_ interval graph is an interval graph that has a representation where no interval is contained in another interval.
For a mixed interval graph \(G\), the _underlying undirected graph_ of \(G\), denoted by \(U(G)\), has an edge for every edge or arc of \(G\). Note that testing whether a given graph \(G\) is a mixed interval graph is the same as testing whether \(U(G)\) is an interval graph, which takes linear time [7].
## 2 Recognition of Containment Interval Graphs
In this section we present a recognition algorithm for containment interval graphs. Given a mixed graph \(G\), our algorithm decides whether \(G\) is a containment interval graph. If it is, the algorithm additionally constructs a set \(\mathcal{I}\) of intervals representing \(G\), i.e., with \(C[\mathcal{I}]\) isomorphic to \(G\). The algorithm works in two phases. The first phase makes heavy use of the concept of a PQ-tree, which will be defined shortly. The algorithm carefully selects a rotation of a PQ-tree of the underlying undirected graph \(U(G)\) of \(G\). This corresponds to fixing the order in which the maximal cliques appear in the interval representation of \(U(G)\). This almost fixes the interval representation. In the second phase, the endpoints of the intervals are perturbed so that the edges and arcs in \(G\) are represented correctly.
The main result of this section is the following theorem.
Theorem 1. There is an algorithm that, given a mixed graph \(G\), decides whether \(G\) is a containment interval graph. The algorithm runs in \(O(nm)\) time, where \(n\) is the number of vertices of \(G\) and \(m\) is the number of edges of \(G\), and produces a containment representation of \(G\) if \(G\) admits one.
The algorithm runs in two phases, which we introduce separately in Sections 2.2 and 2.3. But first, in Section 2.1, we introduce the necessary machinery of PQ-trees.
### MPQ-Trees
For a set of pairwise intersecting intervals on the real line, let the _clique point_ be the leftmost point on the real line that lies in all the intervals. Given an interval representation of an interval graph \(G\), we get a linear order of the maximal cliques of \(G\) by their clique points from left to right. Booth and Lueker [7] showed that a graph \(G\) is an interval graph if and only if the maximal cliques of \(G\) admit a _consecutive arrangement_, i.e., a linear order such that, for each vertex \(v\), all the maximal cliques containing \(v\) occur consecutively in the order. They have also introduced a data structure called PQ-tree that encodes all possible consecutive
| Mixed interval graph class | Coloring: complexity | Coloring: lower bd. | Coloring: upper bd. | Coloring: approx. | Recognition |
| --- | --- | --- | --- | --- | --- |
| containment | NP-hard (T13) | \(2\omega-1\) (P12) | \(2\omega-1\) (T8) | 2 (C11) | \(O(nm)\) (T1) |
| directional | \(O(n\log n)\) [3] | | | 1 [3] | \(O(n^{2})\) [3] |
| bidirectional | NP-hard (T14) | | | 2 [3] | open |
| general | NP-hard [3] | \(\lambda\omega/2\) (P16) | \(\lambda\omega\) (T15) | \(\min\{\omega,\lambda\}\) (T15) | \(O(n+m)\) [7] |

Table 1: Known and new results concerning subclasses of mixed interval graphs. The time complexities refer to a given set of \(n\) intervals with \(m\) pairwise intersections. (We use T, P, and C as shorthand for Theorem, Proposition, and Corollary, respectively.)
arrangements of \(G\). We present our algorithm in terms of modified PQ-trees (MPQ-trees, for short) as described by Korte and Möhring [5, 6].
An _MPQ-tree_\(T\) of an interval graph \(G\) is a rooted, ordered tree with two types of nodes: P-nodes and Q-nodes, joined by links. Each node can have any number of children and a set of at least two consecutive links joining a Q-node \(x\) with some (but not all) of its children is called a _segment_ of \(x\). Further, each vertex \(v\) in \(G\) is assigned either to a P-node, or to a segment of some Q-node. Based on this assignment, we _store_\(v\) in the links of \(T\). If \(v\) is assigned to a P-node \(x\), we store \(v\) in the link just above \(x\) in \(T\) (adding a dummy link above the root of \(T\)). If \(v\) is assigned to a segment of a Q-node \(x\), we store \(v\) in each link of the segment. For a link \(\{x,y\}\), let \(S_{xy}\) denote the set of vertices stored in \(\{x,y\}\). We say that \(v\) is _above_ (_below_, resp.) a node \(x\) if \(v\) is stored in any of the links on the upward path (in any of the links on some downward path, resp.) from \(x\) in \(T\). We write \(A^{T}_{x}\) (\(B^{T}_{x}\), resp.) for the set of all vertices in \(G\) that are above (below, resp.) node \(x\). Observe that every vertex assigned to a P-node \(x\) is above \(x\), and every vertex assigned to a segment of a Q-node \(x\) is below \(x\).
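A simplified encoding of this structure (hypothetical field names; the segments of Q-nodes are flattened into per-link vertex sets) may make the notation \(S_{xy}\) and \(B_{x}^{T}\) easier to follow:

```python
from dataclasses import dataclass, field

@dataclass
class MPQNode:
    """Simplified MPQ-tree node. `kind` is 'P', 'Q', or 'leaf'; link_sets[i]
    is S_{x,y_i}, the set of vertices stored on the link to the i-th child
    (for Q-nodes these sets come from segments, so consecutive link sets
    may share vertices)."""
    kind: str
    children: list["MPQNode"] = field(default_factory=list)
    link_sets: list[set] = field(default_factory=list)

    def below(self) -> set:
        """B_x^T: all vertices stored on links on downward paths from this node."""
        out = set()
        for child, stored in zip(self.children, self.link_sets):
            out |= stored | child.below()
        return out

leaves = [MPQNode("leaf") for _ in range(3)]
q = MPQNode("Q", leaves, [{"u", "v"}, {"v"}, {"w"}])   # 'v' spans a segment
print(sorted(q.below()))  # ['u', 'v', 'w']
```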
The _frontier_ of \(T\) is the sequence of the sets \(A^{T}_{x}\), where \(x\) goes through all leaves in \(T\) in the order of \(T\). Every node of \(T\) with at least two children is _branching_. Given a Q-node \(x\) of an MPQ-tree \(T\), there are two _rotations of \(x\)_: having the order of the children of \(x\) as in the original tree \(T\), and reversing the order of the children of \(x\). For a P-node \(x\) of an MPQ-tree \(T\) with \(k\) children, there are \(k!\) rotations of \(x\), each obtained by a different permutation of the children of \(x\). Every tree that is obtained from \(T\) by a sequence of rotations of nodes (i.e., obtained by arbitrarily permuting the order of the children of P-nodes and reversing the orders of the children of some Q-nodes) is a _rotation_ of \(T\). The defining property of an MPQ-tree \(T\) of a graph \(G\) is that each leaf \(x\) of \(T\) corresponds to a maximal clique \(A^{T}_{x}\) of \(G\) and the frontiers of rotations of \(T\) correspond bijectively to the consecutive arrangements of \(G\). Observe that any two vertices adjacent in \(G\) are stored in links that are connected by an upward path in \(T\). We say that \(T\) _agrees_ with an interval representation \(\mathcal{I}\) of \(G\) if the order of the maximal cliques of \(G\) given by their clique points in \(\mathcal{I}\) from left to right is the same as in the frontier of \(T\). We assume the following properties of an MPQ-tree \(T\) (see [6], Lemma 2.2):
* For a P-node \(x\) with children \(y_{1},\ldots,y_{k}\), for every \(i=1,\ldots,k\), there is at least one vertex stored in link \(\{x,y_{i}\}\) or below \(y_{i}\), i.e., \(S_{xy_{i}}\cup B^{T}_{y_{i}}\neq\emptyset\).
* For a Q-node \(x\) with children \(y_{1},\ldots,y_{k}\), we have \(k\geq 3\). Further, for \(S_{i}=S_{xy_{i}}\), we have:
* \(S_{1}\cap S_{k}=\emptyset\), \(B^{T}_{y_{1}}\neq\emptyset\), \(B^{T}_{y_{k}}\neq\emptyset\), \(S_{1}\subset
algorithm by Korte and Möhring [6]. Further, we have that the arcs induce a transitive directed acyclic graph, i.e., for any two arcs \((u,v)\) and \((v,w)\), there is an arc \((u,w)\), and there is no directed cycle in \(G\).
We call a rotation of \(T\) _correct_ if it agrees with some containment representation of \(G\). As we assume \(G\) to be a containment interval graph, there is at least one correct rotation of \(T\), and our goal is to find some correct rotation of \(T\). Our algorithm decides the rotation of every node in \(T\), one by one, in any top-down order, i.e., from the root to the leaves. Thus, when the rotation of a node \(x\) is to be decided, the rotation of every node above \(x\) is already decided. Our algorithm keeps the invariant that before, and after, deciding the rotation of every single node, there is at least one correct rotation of \(T\) that agrees with the rotation of the already decided nodes. The invariant is trivially satisfied before the first rotation, and because it is satisfied after the last rotation, the algorithm constructs a correct rotation of \(T\).
From now on, we focus on choosing a rotation of a single branching node \(x\). Let \(y_{1},\ldots,y_{k}\) denote the children of \(x\). We have \(k\geq 2\) (\(k\geq 3\) when \(x\) is a Q-node), and for each \(i=1,\ldots,k\), let \(B_{i}=S_{xy_{i}}\cup B_{y_{i}}^{T}\). We have \(\bigcup_{i=1}^{k}B_{i}=B_{x}^{T}\neq\emptyset\).
Let \(\tilde{T}\) denote the (unknown) set of all correct rotations of \(T\) that agree with the rotations of the already decided nodes. We have, by our invariant, that \(\tilde{T}\neq\emptyset\), and our goal is to choose a rotation of \(x\) that agrees with at least one rotation in \(\tilde{T}\). We call any such rotation of \(x\) a _correct rotation of \(x\)_.
For each vertex \(v\) above \(x\) in \(T\), it is already decided if \(v\) is above some node of \(T\) that is to the left (right, resp.) of \(x\) in all rotations in \(\tilde{T}\) (as this depends only on the rotation of the nodes above \(x\) in \(T\)). If it is, then there is a maximal clique that: includes \(v\), does not include any of the vertices below \(x\), and in the frontier of every \(T^{\prime}\in\tilde{T}\) is strictly to the left (right, resp.) of all maximal cliques containing vertices below \(x\). This means that in every interval representation that agrees with some \(T^{\prime}\in\tilde{T}\), the interval representing \(v\) contains a clique-point that is strictly to the left (right, resp.) of all left endpoints (right endpoints, resp.) of intervals representing vertices below \(x\) in \(T\). We call such \(v\) _left-\(1\)-bound_ (_right-\(1\)-bound_, resp.). Observe that a vertex can be both left-\(1\)-bound and right-\(1\)-bound. If a vertex is neither left-\(1\)- nor right-\(1\)-bound, we call it _\(1\)-unbound_. For \(\ell\geq 2\), if \(v\) is \((\ell-1)\)-unbound and has an edge to a left-\((\ell-1)\)-bound (right-\((\ell-1)\)-bound, resp.) vertex \(u\), we call \(v\) _right-\(\ell\)-bound_ (_left-\(\ell\)-bound_, resp.). If a vertex is \((\ell-1)\)-unbound and neither left-\(\ell\)- nor right-\(\ell\)-bound, we call it \(\ell\)-unbound. Lastly, a vertex is _\(\ell\)-bound_ if it is left-\(\ell\)-bound or right-\(\ell\)-bound, _bound_ if it is \(\ell\)-bound for some \(\ell\geq 1\), and _unbound_ if it is \(\ell\)-unbound for every \(\ell\geq 1\).
Observe that the properties of MPQ-trees guarantee that for a \(1\)-unbound vertex \(v\), we have that either \(x\) is a P-node and \(v\) is assigned to \(x\), or \(x\) is a Q-node with a parent P-node \(z\), \(x\) is the only child of \(z\), and \(v\) is assigned to \(z\).
\(\rhd\) Claim 3. For every \(\ell\geq 1\), for an \(\ell\)-bound vertex \(v\) and an \(\ell\)-unbound vertex \(w\), there is no arc \((w,v)\).
Proof. We prove this by induction on \(\ell\). For \(\ell=1\), every interval of a \(1\)-bound vertex contains a clique-point such that if any other interval contains it, the corresponding vertex is also \(1\)-bound. For \(\ell>1\), let \(u\) be an \((\ell-1)\)-bound vertex that certifies that \(v\) is \(\ell\)-bound. By the induction hypothesis, there is no arc \((w,u)\), and there is no edge \(\{u,w\}\) as \(w\) is \(\ell\)-unbound. Thus, there is an arc \((u,w)\). Assuming to the contrary the existence of an arc \((w,v)\), we get an arc \((u,v)\) by the transitivity of arcs. This contradicts the existence of the edge \(\{u,v\}\). \(\lhd\)
\(\rhd\) Claim 4. Let \(v_{1},\ldots,v_{\ell}\) be a sequence of bound vertices such that: \(v_{i}\) is left-\(i\)-bound, for \(i\) odd; \(v_{i}\) is right-\(i\)-bound, for \(i\) even; there is an edge \(\{v_{i},v_{i+1}\}\), for \(i=1,\ldots,\ell-1\). For every interval representation that agrees with some \(T^{\prime}\in\tilde{T}\), for every odd (even, resp.) \(i>1\), the interval representing \(v_{i}\) contains the left (right, resp.) endpoint of the interval representing \(v_{i-1}\).
Proof. By the previous claim, we have that there is an arc \((v_{i},v_{j})\) for every \(i\in[\ell-2]\) and \(j\in\{i+2,\ldots,\ell\}\). We prove the statement by induction on \(i\). For \(i=2\), \(v_{1}\) is left-\(1\)-bound, while \(v_{2}\) is not, and there is an edge \(\{v_{1},v_{2}\}\), which means that \(v_{2}\) contains the right endpoint of \(v_{1}\). For \(i>2\), \(i\) odd, we have: \(v_{i}\) is contained in \(v_{i-2}\), \(v_{i-1}\) contains the right endpoint of \(v_{i-2}\) by induction, \(v_{i}\) does not include the right endpoint of \(v_{i-1}\), and the edge \(\{v_{i-1},v_{i}\}\) means that \(v_{i}\) contains the left endpoint of \(v_{i-1}\). For \(i\) even, the argument is symmetric. \(\lhd\)
Observe that, intuitively, it is "natural" for intervals representing vertices below a node to be contained in intervals representing vertices above it. Let \(x\) be a node in \(T\), \(v\in A_{x}^{T}\), and \(w\in B_{x}^{T}\). First notice that it is impossible to have an arc \((w,v)\), i.e., it is impossible for the interval of \(w\) to contain the interval of \(v\). As \(x\) is a branching node, \(w\) omits clique-points in the subtree of at least one child of \(x\). These clique-points are contained in the interval of \(v\). It is still possible to have an edge joining a vertex above \(x\) and a vertex below \(x\). Each such edge allows us to deduce some information on the correct rotations of \(x\) and these edges are crucial to our algorithm.
### Rotating Q-nodes.
Observe that each vertex \(w\in B_{x}^{T}\) is present in at most one of \(B_{1}\) or \(B_{k}\). (Recall that \(B_{i}=S_{xy_{i}}\cup B_{y_{i}}^{T}\) for \(i\in[k]\) and \(y_{1},\ldots,y_{k}\) are the children of \(x\).)
We shall prove that if there is at least one edge joining a vertex \(w\) below \(x\) and a bound vertex \(v\) above \(x\), then there is only one correct rotation of \(x\). If \(v\) is left-\(\ell\)-bound (right-\(\ell\)-bound, resp.) then \(x\) needs to be rotated so that \(w\) is in the last (first, resp.) child of \(x\). Otherwise, when bound vertices above \(x\) have arcs towards vertices below \(x\), then both rotations are correct.
Assume that there is an edge \(\{v,w\}\) for some left-\(\ell\)-bound \(v\in A_{x}^{T}\) and \(w\in B_{x}^{T}\), and that \(\ell\) is minimum possible (i.e., there are no edges joining \(\ell^{\prime}\)-bound vertices and \(B_{x}^{T}\) for \(\ell^{\prime}<\ell\)). We prove that \(w\) must be assigned to or below the last child of \(x\) in every \(T^{\prime}\in\tilde{T}\). For \(\ell=1\), \(v\) is left-\(1\)-bound, and the left endpoint of \(v\) is to the left of the left endpoint of \(w\) in every containment representation that agrees with some \(T^{\prime}\in\tilde{T}\). Thus, to realize the edge \(\{v,w\}\), the right endpoint of \(v\) must be to the left of the right endpoint of \(w\), which requires \(w\) to be in the last child of \(x\) (as otherwise there is some clique-point to the right of \(w\) that is in \(v\)). For \(\ell>1\), there is an edge \(\{v,w\}\) for some left-\(\ell\)-bound \(v\) and \(w\) below \(x\). Let \(u\) be the right-\((\ell-1)\)-bound vertex with edge \(\{u,v\}\). We know that \(v\) contains the left endpoint of \(u\). Because \(\ell\) is minimum, the interval of \(u\) contains the interval of \(w\). Thus, \(v\) contains the left endpoint of \(w\), and in order to have the right endpoint of \(w\) after the right endpoint of \(v\), we need \(w\) to be assigned to or below the last child of \(x\).
Similarly, if \(v\) is right-bound, then \(w\) must be assigned to or below the first child of \(x\) in every \(T^{\prime}\in\tilde{T}\).
For the second part, we assume that there is an arc from every bound vertex above \(x\) towards every vertex below \(x\). We know that there is also an arc from every bound vertex to every unbound vertex. Let \(B\) denote the set of all vertices that are either below \(x\) or unbound. Observe that any containment representation of \(G\) has all the endpoints of intervals representing vertices in \(B\) strictly inside the intersection of intervals representing bound vertices. Thus, reversing the order of all endpoints of intervals representing vertices in
\(B\) gives another containment representation of \(G\). This other representation has the order of the clique-points represented by the subtree of \(x\) reversed. Thus, both rotations of \(x\) are correct, and the algorithm can choose either of them.
### Rotating P-nodes.
We are to choose the order of the children \(y_{1},\ldots,y_{k}\) of \(x\). Observe that in this case, for a P-node, the sets \(B_{1},\ldots,B_{k}\) are pairwise disjoint, and there are neither arcs nor edges joining two different sets \(B_{i}\) and \(B_{j}\).
Now, assume \(k\geq 3\), and observe that for each middle child, i.e., for each \(i=2,3,\ldots,k-1\), for every vertex \(v\in A_{x}^{T}\) above \(x\) and every vertex \(w\in B_{i}\) assigned to or below a middle child of \(x\), we have an arc \((v,w)\). This is because there is at least one clique-point below the first and below the last child of \(x\). Vertices assigned to or below middle children are not in these cliques. Thus, an interval representing \(v\) must contain an interval representing \(w\).
Now, we call a child \(y_{i}\) of \(x\) to be _special_, if there is an edge joining a vertex \(v\in A_{x}^{T}\) with a vertex \(w\in B_{i}\). We already know that there are at most two special children of \(x\), as otherwise \(\tilde{T}=\emptyset\). Observe that if \(\sigma\) is a correct rotation of \(x\), then any \(\sigma^{\prime}\) that is obtained from \(\sigma\) by arbitrarily permuting the middle children is also a correct rotation of \(x\). This is because in the rotation \(\sigma\), we have that all middle children of \(x\) are not special, and there are neither edges nor arcs between different sets \(B_{i}\) and \(B_{j}\).
Now, let us fix a single permutation \(\psi\) of the children of \(x\) in which every special child of \(x\) is either the first, or the last child. Let \(\psi^{\prime}\) denote the permutation obtained by reversing \(\psi\). It is easy to see that if there is a correct rotation of \(x\) at all, then also either \(\psi\) or \(\psi^{\prime}\) is a correct rotation of \(x\). Now, we can apply the same reasoning as for the Q-nodes. If there is at least one edge joining a vertex \(w\) below \(x\) and a bound vertex \(v\) above \(x\), then only one of \(\psi\), or \(\psi^{\prime}\) is a correct rotation of \(x\). Otherwise, both \(\psi\), and \(\psi^{\prime}\) are correct rotations of \(x\).
\(\rhd\) Claim 5. The rotation of a single node can be decided in time \(O(n+m)\).
For a Q-node, we first need to decide which vertices are left-/right-\(\ell\)-bound for the different values of \(\ell\). We first calculate the set \(A_{x}^{T}\). Then we traverse the tree upwards and in each node mark the left-/right-\(1\)-bound vertices. Then we use BFS to decide which vertices in \(A_{x}^{T}\) are left-/right-\(\ell\)-bound for the different values of \(\ell\). This can easily be done in \(O(n+m)\) time.
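A sketch of this BFS step, assuming the left-/right-\(1\)-bound vertices have already been extracted from the tree (all names and inputs below are our own):

```python
from collections import deque

def bound_levels(edge_adj, left1, right1):
    """Propagate bound labels as described in the text: a so-far-unbound
    neighbor of a left-l-bound vertex becomes right-(l+1)-bound, and
    symmetrically. BFS yields the minimum possible level for each vertex.

    edge_adj: dict vertex -> set of neighbors joined by *edges*
    left1, right1: the left-/right-1-bound vertices
    Returns dict vertex -> (level, 'L' or 'R'); absent vertices are unbound."""
    label, queue = {}, deque()
    for v in left1:
        label[v] = (1, "L")
        queue.append(v)
    for v in right1:
        label.setdefault(v, (1, "R"))   # a vertex may be both; keep one label
        queue.append(v)
    while queue:
        u = queue.popleft()
        lvl, side = label[u]
        for w in edge_adj.get(u, ()):
            if w not in label:          # w was unbound so far
                label[w] = (lvl + 1, "R" if side == "L" else "L")
                queue.append(w)
    return label

adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(bound_levels(adj, left1={"a"}, right1=set()))
# {'a': (1, 'L'), 'b': (2, 'R'), 'c': (3, 'L')}
```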
Then, for each edge that connects an \(\ell\)-bound vertex \(v\) with a vertex \(w\), we need to decide if \(w\in B_{1}\) or \(w\in B_{k}\). Observe that the queries "whether a vertex \(w\) is in \(B_{i}\)" can be answered in constant time (by looking at the index of the first/last clique in the frontier that includes \(w\), and at the index of the first/last clique in the frontier that is below a node \(y_{i}\)). Thus, we can decide the rotation of a Q-node in \(O(n+m)\) time.
For a P-node, we first need to decide which children are special. For this we need to calculate the set \(A_{x}^{T}\) and the sets \(B_{i}\), and then for each edge check if it makes some child special. This can easily be done in \(O(n+m)\) time. The rest of the analysis is the same as for a Q-node.
As there are \(n\) nodes to rotate, and by the previous claim, we conclude that the running time of the algorithm is \(O(nm)\).
### Perturbing Endpoints
Lemma 6. There is an algorithm that, given an MPQ-tree \(T\) that agrees with some containment representation of a mixed graph \(G\), constructs a containment representation \(\mathcal{I}\) of \(G\) such that \(T\) agrees with \(\mathcal{I}\). The running time of this algorithm is in \(O(nm)\), where \(n\) is the number of vertices of \(G\) and \(m\) is the number of edges of \(G\).
Proof.: The frontier of \(T\) fixes the left-to-right order of clique-points of maximal-cliques in \(G\). We need to respect that order, but still we have some freedom in choosing the exact locations of the endpoints. For any vertex \(v\), let \(L_{v}\) (\(R_{v}\), resp.) denote the index of the first (last, resp.) clique in the frontier of \(T\) that includes \(v\). For any \(L\geq 1\) (\(R\geq 1\), resp.), the _left-L-group_ (_right-R-group_, resp.) is the set of all vertices \(v\) with \(L_{v}=L\) (\(R_{v}=R\), resp.). It is easy to see that any interval representation of \(U(G)\) that agrees with \(T\) can be stretched so that, for every vertex \(v\), we have that the left endpoint \(l_{v}\) of \(v\) is a real in the open interval \(l_{v}\in(L_{v}-\frac{1}{2},L_{v})\), and the right endpoint \(r_{v}\) of \(v\) is in the interval \(r_{v}\in(R_{v},R_{v}+\frac{1}{2})\). Obviously, any representation satisfying these conditions on the locations of the endpoints agrees with \(T\). We are free to determine the order among endpoints in each group independently, so that the resulting intervals are a containment representation of \(G\).
We will now collect different order constraints on the relative location of pairs of the endpoints. First, consider two adjacent vertices \(u\) and \(v\) with \(L_{u}=L_{v}\) and \(R_{u}<R_{v}\). As the right endpoints are in different right-groups, we have \(r_{u}<r_{v}\). If there is the edge \(\{u,v\}\), then, regarding the relative order of the left endpoints, we need to have \(l_{u}<l_{v}\). If there is the arc \((u,v)\), then we need to have \(l_{u}>l_{v}\). The arc \((v,u)\) is impossible to realize. Similarly, if any of the left/right-groups is common for \(u\) and \(v\), but the other one is different, the relative order of endpoints is fixed.
We have collected information about all pairs of vertices \(u\) and \(v\), except for those with \(L_{u}=L_{v}\) and \(R_{u}=R_{v}\). In this case, if there is an arc \((u,v)\) or \((v,u)\), then again the relative order of left and right endpoints is fixed.
Now, we assume that there is an edge \(\{u,v\}\) and we want to say something about the relative order of the endpoints. Clearly, we have \(l_{u}<l_{v}\iff r_{u}<r_{v}\), but we would like to decide the correct order. For a third vertex \(w\), we say that \(w\) _behaves the same_ (_differently_, resp.) on \(u\) and \(v\), when \(w\) is connected to \(u\) with the same type of connection (edge, arc towards, arc from) as to \(v\) (otherwise, resp.). Assume that there is a vertex \(w\) with \(L_{u}=L_{v}=L_{w}\) and \(R_{u}=R_{v}\neq R_{w}\) that behaves differently on \(u\) and \(v\). Note that we do not have two arcs in different directions joining \(w\) with \(u\) and \(v\), as it would imply an arc between \(u\) and \(v\) by transitivity. Thus, we assume w.l.o.g. that \(w\) is connected by an arc \((u,w)\) or \((w,u)\) with \(u\), but with an edge \(\{v,w\}\) with \(v\). Then, the relative order of \(u\) and \(v\) is fixed because there is only one relative order of the three left endpoints that allows for this situation. Similarly for the case \(L_{u}=L_{v}\neq L_{w}\) and \(R_{u}=R_{v}=R_{w}\).
For a pair of vertices \(u\), \(v\) with the edge \(\{u,v\}\), \(L_{u}=L_{v}\), and \(R_{u}=R_{v}\), if the above rule gives us the relative order of their endpoints, we call such pair _decided_. Otherwise, it is _undecided_. While there are undecided pairs, we propagate the order of the decided pairs as follows. Consider a vertex \(w\) with \(L_{u}=L_{w}\) and \(R_{u}=R_{w}\) that behaves differently on \(u\) and \(v\), i.e., there is an arc \((u,w)\) (or \((w,u)\)) and an edge \(\{v,w\}\) (two arcs in different directions is not possible as argued before), and let \(\{v,w\}\) be decided. Then, the relative orders of the endpoints of \(v\) and \(w\) and the ones of \(u\) and \(w\) are fixed, which means that the relative orders of the endpoints of \(u\) and \(v\) follow. From now on the pair \(u\), \(v\) is also decided and we apply this procedure as long as it is possible.
At this point, if there are some undecided pairs left, choose any vertex \(u\) from an undecided pair, and let \(U\) be the set of vertices reachable from \(u\) by a path of undecided edges. We have \(|U|\geq 2\), and every vertex \(w\notin U\) behaves the same on any two vertices in \(U\) as otherwise we would have applied one of the order constraints described above. We remove all vertices in \(U\) except \(u\) and solve the smaller instance of the problem.
We can find all order constraints in \(O(nm)\) time as it suffices to consider each edge together with each vertex.
Now, it remains to prove that we can insert the vertices in \(U\) back into the solution. Observe that each vertex \(v\) in \(U\) can be placed in the position of \(u\), and this position satisfies all order constraints of \(u\) against vertices not in \(U\). Thus, we will put all the left (right, resp.) endpoints of vertices in \(U\) in an \(\varepsilon\) range around the left (right, resp.) endpoint of \(u\). Consider the mixed graph induced by the vertices in \(U\). This is a complete mixed acyclic graph and can be seen as a partially ordered set.
A _partially ordered set_, or a _poset_ for short, is a transitive directed acyclic graph. A poset \(P\) is _total_ if, for every pair of vertices \(u\) and \(v\), there is either an arc \((u,v)\) or an arc \((v,u)\) in \(P\). We can conveniently represent a total poset \(P\) by a linear order of its vertices \(v_{1}<v_{2}<\cdots<v_{n}\) meaning that there is an arc \((v_{i},v_{j})\) for each \(1\leq i<j\leq n\). A poset \(P\) is _two-dimensional_ if the arc set of \(P\) is the intersection of the arc sets of two total posets on the same set of vertices as \(P\). McConnell and Spinrad [8] gave a linear-time algorithm that, given a directed graph \(D\) as input, decides whether \(D\) is a two-dimensional poset. If the answer is yes, the algorithm also constructs a _realizer_, that is, (in this case) two linear orders \((R_{1},R_{2})\) on the vertex set of \(D\) such that
\[\mbox{arc }(u,v)\mbox{ is in }D\iff\left[(u<v\mbox{ in }R_{1})\wedge(u<v\mbox{ in }R_{2})\right].\]
\(\rhd\) Claim 7. The mixed graph induced by \(U\) is a containment interval graph if and only if the poset of \(U\) is two-dimensional.
Proof. First, for the "if" direction, assume that \(U\) is two-dimensional, and let \(L_{1}\), \(L_{2}\) be two linear extensions of \(U\) such that \(U\) is the intersection of \(L_{1}\) and \(L_{2}\). Now, we construct the containment interval representation of \(G[U]\) in the following way: We choose the locations of the left endpoints of the intervals representing the vertices in \(U\) in the open interval \((-\frac{1}{2},0)\) so that their left-to-right order is exactly as in \(L_{1}\). Similarly, we place right endpoints in the interval \((0,\frac{1}{2})\) so that their right-to-left order is exactly as in \(L_{2}\). Now, for an arc \((u,v)\) we have that \(u\leq v\) in the poset, and \(u\leq_{L_{1}}v\) and \(u\leq_{L_{2}}v\). Thus, the left endpoint of \(u\) is to the left of left endpoint of \(v\), and right endpoint of \(u\) is to the right of the right endpoint of \(v\), as required. Conversely, for an edge \(\{u,v\}\) we have that \(u\) and \(v\) are incomparable in the poset, and \(u\leq_{L_{1}}v\) and \(v\leq_{L_{2}}u\) (or the other way around, both inequalities are reversed). Thus, the resulting intervals overlap without containment, and the resulting set of intervals is a containment representation of \(G[U]\).
For the other direction, given any containment interval representation, let \(L_{1}\) be a linear order on \(U\) given by the left-to-right order of the left endpoints of the intervals, and \(L_{2}\) be given by the right-to-left order of the right endpoints. Now, by the previous argument, it is easy to see that \(L_{1},L_{2}\) is a realizer of the poset of \(U\), and the poset of \(U\) is two-dimensional. \(\lhd\)
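The "if" direction of the claim is constructive; the following minimal sketch (our own function name and choice of endpoint spacing) carries it out:

```python
def realize_two_dimensional(L1, L2):
    """Given a realizer (L1, L2) of a two-dimensional poset, place left
    endpoints in (-1/2, 0) in the order of L1 and right endpoints in
    (0, 1/2) so that their right-to-left order follows L2, as in the proof."""
    n = len(L1)
    left = {v: -0.5 + (i + 1) / (2 * (n + 1)) for i, v in enumerate(L1)}
    right = {v: 0.5 - (i + 1) / (2 * (n + 1)) for i, v in enumerate(L2)}
    return {v: (left[v], right[v]) for v in L1}

# Realizer of the poset with the single arc (a, b); c is incomparable to both.
iv = realize_two_dimensional(["c", "a", "b"], ["a", "b", "c"])
print(iv)  # a contains b; c overlaps both a and b without containment
```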
By the claim, we can realize \(G[U]\) within small ranges designated to the endpoints of \(u\). As the running time of this step is linear, the resulting running time of this algorithm is in \(O(nm)\).
### Final Proof
Theorem 1 follows easily from Lemmas 2 and 6.
Proof of Theorem 1.: Our algorithm, given a mixed graph \(G\), applies the algorithm from Lemma 2 to obtain an MPQ-tree \(T\) that agrees with some containment representation of \(G\). Then, using Lemma 6, it constructs a containment representation of \(G\). If any of the
phases fails, then we know that \(G\) is not a containment interval graph, and we can reject the input. Otherwise, our algorithm accepts the input and returns a containment representation of \(G\). As both phases can be implemented to run in \(O(nm)\) time, we get that our algorithm recognizes containment interval graphs in \(O(nm)\) time.
## 3 A 2-Approximation Algorithm for Coloring Containment Interval Graphs
Theorem 8. For any set \(\mathcal{I}\) of intervals, the containment interval graph \(C[\mathcal{I}]\) induced by \(\mathcal{I}\) admits a proper coloring with at most \(2\cdot\omega(C[\mathcal{I}])-1\) colors.
Proof.: For simplicity, let \(G:=C[\mathcal{I}]\) and \(\omega:=\omega(G)\). We use induction on \(\omega\). If \(\omega=1\), then \(G\) has no edges and clearly admits a proper coloring using only one color. So assume that \(\omega>1\) and that the theorem holds for all graphs with smaller clique number.
Let \(M(\mathcal{I})\) denote the subset of \(\mathcal{I}\) consisting of intervals that are maximal with respect to the containment relation. In particular, \(C[M(\mathcal{I})]\) is a proper interval graph. Observe that \(\bigcup M(\mathcal{I})=\bigcup\mathcal{I}\). Let \(R\) be an inclusion-wise minimal subset of \(M(\mathcal{I})\) such that \(\bigcup R=\bigcup\mathcal{I}\). In the example instance depicted in Figure 1, \(R\) is the set of intervals on the lowest two (gray) lines whereas the intervals in \(M(\mathcal{I})\) are marked with crosses.
\(\rhd\) Claim 9. \(C[R]\) is an undirected linear forest.
Proof.: All intervals in \(M(\mathcal{I})\) and thus in \(R\) are incomparable with respect to the containment relation, so \(C[R]\) has no arcs. Note that \(C[R]\) is a proper interval graph, so it contains no induced \(K_{1,3}\) and no induced cycle with at least four vertices. Thus we only need to prove that \(C[R]\) is triangle-free. For contradiction, suppose otherwise. Let \(x,y,z\) induce a triangle in \(C[R]\), ordered according to their left endpoints. As \(x,y,z\) are pairwise overlapping, note that \(y\subseteq x\cup z\), and thus \(\bigcup(R\setminus\{y\})=\bigcup R\). This contradicts the minimality of \(R\). \(\lhd\)
By the claim above, \(C[R]\) can be properly colored with colors \(\{1,2\}\). Let \(f_{1}\) be such a coloring. If \(R=\mathcal{I}\), we are done (using only \(\omega\) many colors), so suppose that \(\mathcal{I}\setminus R\neq\emptyset\). Slightly abusing notation, we define \(G^{\prime}:=G-R\).
\(\rhd\) Claim 10. The largest clique in \(G^{\prime}\) has at most \(\omega-1\) vertices.
Proof.: As \(G^{\prime}\) is a subgraph of \(G\), each clique in \(G^{\prime}\) has at most \(\omega\) vertices. For contradiction, suppose that there is a set \(S\subseteq\mathcal{I}\setminus R\) such that \(|S|=\omega\) and all intervals in \(S\) pairwise intersect. By the Helly property of intervals, \(\bigcap S\neq\emptyset\). Let \(p\in\bigcap S\). Since \(\bigcup R=\bigcup\mathcal{I}\), there is an interval \(r\in R\) that contains \(p\). Thus \(S\cup\{r\}\) is a clique in \(G\) with \(\omega+1\) vertices, which contradicts the definition of \(\omega\).
By the inductive assumption, \(G^{\prime}\) admits a proper coloring \(f_{2}\) using colors \([2(\omega-1)-1]\). Finally, we define \(f\colon\mathcal{I}\to[2\omega-1]\) as follows:
\[f(x)=\begin{cases}f_{1}(x)&\text{if }x\in R,\\ f_{2}(x)+2&\text{if }x\notin R.\end{cases}\]
We claim that \(f\) is a proper coloring of \(G\). (For an example, see Figure 1.)
First, note that if \(x,y\in\mathcal{I}\) are distinct and \(x\cap y\neq\emptyset\), then \(f(x)\neq f(y)\). Indeed, if \(x,y\in R\), then \(f(x)=f_{1}(x)\neq f_{1}(y)=f(y)\). If \(x,y\notin R\), then \(f(x)=f_{2}(x)+2\neq f_{2}(y)+2=f(y)\). Finally, if, say, \(x\in R\) and \(y\notin R\), then \(f(x)\in\{1,2\}\) and \(f(y)\geq 3\).
So let us argue that the second condition in the definition of a proper coloring holds as well. For contradiction, let \(x,y\) be distinct intervals such that \(x\subseteq y\) and \(f(x)<f(y)\). Note that \(x\notin M(\mathcal{I})\) and thus \(x\notin R\). This implies that \(f(x)\geq 3\). Since we assumed that \(f(y)>f(x)\), we have that \(f(y)>3\). Hence, \(y\not\in R\). However, by the inductive assumption, we have \(f(x)=f_{2}(x)+2>f_{2}(y)+2=f(y)\), which yields the desired contradiction. This completes the proof.
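The recursion in this proof fits in a few lines. The sketch below is a simple polynomial-time transcription (not the \(O(n\log n)\) implementation discussed after the corollary), assuming pairwise distinct endpoints: it peels off a minimal cover \(R\) of containment-maximal intervals, \(2\)-colors it greedily, and recurses with all colors shifted by \(2\):

```python
def color_containment(intervals):
    """2-approximation coloring following the proof above.
    intervals: dict name -> (left, right), endpoints pairwise distinct."""
    if not intervals:
        return {}
    # M(I): intervals maximal with respect to containment.
    M = [u for u in intervals
         if not any(intervals[v][0] < intervals[u][0] and
                    intervals[u][1] < intervals[v][1] for v in intervals)]
    # Greedy minimum cover R of the union, handled component by component.
    M.sort(key=lambda u: intervals[u][0])
    R, i, reach = [], 0, float("-inf")
    while i < len(M):
        if intervals[M[i]][0] > reach:          # gap: start a new component
            reach = intervals[M[i]][0]
        best = None
        while i < len(M) and intervals[M[i]][0] <= reach:
            if best is None or intervals[M[i]][1] > intervals[best][1]:
                best = M[i]
            i += 1
        R.append(best)
        reach = intervals[best][1]
    # C[R] is a linear forest (Claim 9): alternate colors 1 and 2 along it.
    coloring = {}
    for j, u in enumerate(R):
        if j > 0 and intervals[u][0] <= intervals[R[j - 1]][1]:
            coloring[u] = 3 - coloring[R[j - 1]]   # overlaps its predecessor
        else:
            coloring[u] = 1                        # first interval of a component
    # Recurse on I \ R with all colors shifted by 2.
    rest = {u: iv for u, iv in intervals.items() if u not in coloring}
    for u, c in color_containment(rest).items():
        coloring[u] = c + 2
    return coloring

print(color_containment({"a": (0, 10), "b": (2, 5), "c": (4, 12)}))
# {'a': 1, 'c': 2, 'b': 3}
```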
Observe that the proof of Theorem 8 can be easily transformed into an efficient algorithm, which yields the following corollary.

Corollary 11. There is a 2-approximation algorithm for properly coloring containment interval graphs. Given a set of \(n\) intervals, the algorithm runs in \(O(n\log n)\) time.
Proof.: For any graph \(G\), we have \(\chi(G)\geq\omega(G)\). Hence, the approximation factor follows directly from Theorem 8.
It remains to implement the constructive proof of Theorem 8 efficiently. Let \(\mathcal{I}\) be the given set of intervals. For each interval \(I\) in \(\mathcal{I}\), let \(r_{I}\) be the right endpoint of \(I\). We go through the intervals from left to right in several phases. In each phase, we use two colors, except possibly in the last phase, where we may use only one color. For the first phase, we reserve colors 1 and 2; for the second phase, we reserve colors 3 and 4, etc. We use an augmented balanced binary search tree \(\mathcal{T}\) to store the intervals in \(\mathcal{I}\). We will query \(\mathcal{T}\) in two ways. A query of type Q1 in \(\mathcal{T}\) with a value \(x\in\mathbb{R}\cup\{-\infty\}\) will return, among all intervals whose left endpoint is at least \(x\), one with leftmost left endpoint (and _nil_ if such an interval does not exist). A query of type Q2 in \(\mathcal{T}\) with a value \(y\in\mathbb{R}\) will return, among all intervals whose left endpoint is at most \(y\), one with rightmost right endpoint (and _nil_ if such an interval does not exist). Note that the two queries are not symmetric.
Initially, \(\mathcal{T}\) stores all intervals in \(\mathcal{I}\). We start each phase by Q1-querying \(\mathcal{T}\) with \(-\infty\). This yields the leftmost interval \(I\) in \(\mathcal{I}\). We color \(I\) with the smaller color \(c=2i-1\) reserved for the current phase \(i\). Whenever we have colored an interval, we immediately remove it from \(\mathcal{T}\). Then we Q2-query \(\mathcal{T}\) with the right endpoint \(r_{I}\) of \(I\). If \(\mathcal{T}\) returns _nil_ or an interval \(I^{\prime}\) that lies completely to the left of \(r_{I}\), we Q1-query \(\mathcal{T}\) with \(r_{I}\) for the next interval \(I^{\prime\prime}\). If \(I^{\prime\prime}\) exists, then we color it, too, with color \(c\). Otherwise, that is, if \(I^{\prime\prime}\) does not exist, we start a new phase. Finally, if the interval \(I^{\prime}\) overlaps with the previous interval \(I\), we color \(I^{\prime}\) with the other color \(c^{\prime}\neq c\) that we reserved for the current phase. Then we continue with the next Q2-query as above (where \(I^{\prime}\) now plays the role of \(I\)).
It remains to implement the balanced binary search tree \(\mathcal{T}\). The key of an interval is its left endpoint. For simplicity, we assume that the intervals are stored in the leaves of \(\mathcal{T}\) and that the key of each inner node is the maximum of the keys in its left subtree. This suffices to answer queries of type Q1. For queries of type Q2, we augment \(\mathcal{T}\) by storing, with each node \(\nu\), a value \(\max(\nu)\) that we set to the maximum of the right endpoints among all intervals in the subtree rooted at \(\nu\). (We also store a pointer \(\mu(\nu)\) to the interval that yields the maximum.) In a Q2-query with a value \(y\), we search for the largest key \(k\leq y\). Let \(\pi\) be the search path in \(\mathcal{T}\), and initialize \(m\) with \(-\infty\). When traversing \(\pi\), we inspect each node \(\nu\) that hangs off \(\pi\) on the left side. If \(\max(\nu)>m\), then we set \(m=\max(\nu)\) and \(\rho=\mu(\nu)\). When we reach a leaf, \(\rho\) points to an interval whose right endpoint is maximum among all intervals whose left endpoint is at most \(y\).

Figure 1: A set of intervals and a coloring produced by the 2-approximation algorithm. The intervals that lie in \(M(\mathcal{I})\) in the top level of the recursion are marked with crosses.
The runtime of \(O(n\log n)\) is obvious since we insert, query, and delete each interval in \(O(\log n)\) time in each of the two data structures exactly once.
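A functional transcription of this sweep (our own naming; the augmented tree is replaced by linear scans over the set of uncolored intervals, so the sketch runs in \(O(n^{2})\) rather than \(O(n\log n)\), but the colors follow the phase scheme just described):

```python
def phase_coloring(intervals):
    """Phase-based coloring of C[I]; intervals: dict name -> (left, right)
    with pairwise distinct endpoints."""
    live = dict(intervals)                      # intervals not yet colored

    def q1(x):   # leftmost live interval whose left endpoint is >= x
        cands = [u for u, (l, _) in live.items() if l >= x]
        return min(cands, key=lambda u: live[u][0]) if cands else None

    def q2(y):   # among live intervals with left endpoint <= y: max right end
        cands = [u for u, (l, _) in live.items() if l <= y]
        return max(cands, key=lambda u: live[u][1]) if cands else None

    coloring, phase = {}, 0
    while live:
        phase += 1
        lo, hi = 2 * phase - 1, 2 * phase       # the two colors of this phase
        u, color = q1(float("-inf")), lo
        while u is not None:
            coloring[u] = color
            r = live.pop(u)[1]
            v = q2(r)
            if v is not None and live[v][1] > r:   # v overlaps u: alternate
                u, color = v, lo + hi - color
            else:                                  # gap: jump and reuse color lo
                u, color = q1(r), lo
    return coloring

print(phase_coloring({"a": (0, 10), "b": (2, 5), "c": (4, 12)}))
# {'a': 1, 'c': 2, 'b': 3}
```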
Proposition 12. There is an infinite family \((\mathcal{I}_{n})_{n\geq 1}\) of sets of intervals with \(|\mathcal{I}_{n}|=3\cdot 2^{n-1}-2\), \(\chi(\mathcal{I}_{n})=2n-1\), and \(\omega(\mathcal{I}_{n})=n\).
Proof.: The construction is iterative. The family \(\mathcal{I}_{1}\) consists of a single interval of unit length.
Now let \(n>1\) and suppose that we have defined \(\mathcal{I}_{n-1}\) and want to define \(\mathcal{I}_{n}\). We introduce two new intervals \(\ell_{n}\) and \(r_{n}\), both of length \(3^{n-1}\), that overlap slightly. Then we introduce two copies of \(\mathcal{I}_{n-1}\). All intervals of one copy are contained in \(\ell_{n}\setminus r_{n}\), and all intervals of the other copy are contained in \(r_{n}\setminus\ell_{n}\).
The number of intervals in \(\mathcal{I}_{n}\) is given by the recursion: \(f(1)=1\) and \(f(n)=2f(n-1)+2\), which solves to \(f(n)=3\cdot 2^{n-1}-2\). Furthermore, it is straightforward to observe that with each step of the construction, the size of a largest clique increases by \(1\).
We claim that, in any proper coloring of \(\mathcal{I}_{n}\), the difference between the largest and the smallest color used is at least \(2n-2\). This is obvious for \(\mathcal{I}_{1}\). Assume that the claim holds for \(\mathcal{I}_{n-1}\). Consider any proper coloring of \(\mathcal{I}_{n}\), and let \(m\) be the minimum color used in this coloring. The colors of \(\ell_{n}\) and \(r_{n}\) must be different. Without loss of generality, suppose that the color of \(r_{n}\) is larger than the color of \(\ell_{n}\). In particular, the color of \(r_{n}\) is at least \(m+1\). Now consider the copy of \(\mathcal{I}_{n-1}\) contained in \(r_{n}\). The color of each interval in this copy must be larger than the color of \(r_{n}\), so in particular the minimum color used for this copy of \(\mathcal{I}_{n-1}\) is at least \(m+2\). By the inductive assumption, some interval in this copy of \(\mathcal{I}_{n-1}\) receives a color that is at least \(m+2+2(n-1)-2=2n-2+m\). Summing up, the difference between the largest and the smallest color used for \(\mathcal{I}_{n}\) is at least \(2n-2\).
Now, as the minimum color is \(1\), we conclude that \(\chi(\mathcal{I}_{n})\geq 2n-1\).
For the upper bound, we color \(\mathcal{I}_{n}\) as follows. For \(n=1\), we color the only interval with color \(1\). For \(n>1\), we color \(\ell_{n}\) with color \(1\) and \(r_{n}\) with color \(2\). Next, for each of the two copies of \(\mathcal{I}_{n-1}\), we use the proper coloring defined inductively with all colors increased by \(2\), see Figure 2.
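A generator for this family (the concrete coordinates, in particular the overlap of \(0.1\), are our own choice; any sufficiently small overlap works):

```python
def worst_family(n):
    """The intervals I_n from the proof, as (left, right) pairs.
    l_n and r_n have length 3**(n-1) and overlap by 0.1; each recursive
    copy is shifted so that it lies strictly inside l_n \\ r_n or r_n \\ l_n."""
    if n == 1:
        return [(0.0, 1.0)]
    length = 3.0 ** (n - 1)
    inner = worst_family(n - 1)
    left_copy = [(a + 1.0, b + 1.0) for a, b in inner]
    right_copy = [(a + length + 1.0, b + length + 1.0) for a, b in inner]
    return [(0.0, length), (length - 0.1, 2 * length - 0.1)] + left_copy + right_copy

family = worst_family(3)
print(len(family))  # 3 * 2**(3-1) - 2 == 10 intervals
```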
## 4 Coloring Containment Interval Graphs Is NP-Hard
Theorem 13. Given a set \(\mathcal{I}\) of intervals and a positive integer \(k\), it is NP-hard to decide whether \(k\) colors suffice to color \(C[\mathcal{I}]\), that is, whether \(\chi(C[\mathcal{I}])\leq k\).
Proof.: We describe a reduction from (exact) 3-Sat, i.e., the satisfiability problem where every clause contains exactly three literals. Let \(\varphi=C_{1}\wedge C_{2}\wedge\cdots\wedge C_{m}\) be an instance of 3-Sat where, for each clause \(C_{i}\) (\(i\in[m]\)), the literals are negated or unnegated variables from the set \(\{x_{1},x_{2},\ldots,x_{n}\}\).
Let \(j\in[n]\). The variable gadget for \(x_{j}\) consists of two "Christmas trees", which are cliques containing only arcs; see Figure 3. We refer to them as the _red_ and the _gray_ Christmas tree, and we let their longest intervals overlap. Figure 3 depicts two representations of the same gadget for a variable \(x\), each with its own coloring of the intervals (encoded by the height of the intervals; see the numbers at the right side of the gray box). The left representation and its coloring correspond to assigning true to \(x\); the right representation corresponds to assigning false. The height of such a tree is the number of intervals that are contained in one another. The height of the red tree is \(H-1\) minus the number of occurrences of the literals \(x_{j^{\prime}}\) and \(\neg x_{j^{\prime}}\) with \(j^{\prime}<j\) in \(\varphi\), where \(H=5(m+1)\). The height of the gray tree is that of the red tree minus the number of occurrences of \(\neg x_{j}\) in \(\varphi\). We say that \(x_{j}\) is set to true if the bottom interval of the gray Christmas tree has color \(1\), and \(x_{j}\) is set to false otherwise.
For \(i\in[m]\), the gadget for clause \(C_{i}\) consists of a Christmas tree (blue in Figure 4) of height \(H-(5i+2)=5(m-i)+3\). All clause gadgets are placed to the right of all variable gadgets, in the order \(C_{1},\ldots,C_{m}\) from left to right.
The key idea to transport a Boolean value from a variable gadget to a clause gadget is to add, for each occurrence of a literal \(\ell_{j}\in\{x_{j},\neg x_{j}\}\) in a clause \(C_{i}\), an "arm" (blue intervals in Figures 3 and 4) that ends to the right of all Christmas trees and starts immediately to the right of the \(5i\)-th interval of the gray tree (if \(\ell_{j}=x_{j}\)) or of the red tree (if \(\ell_{j}=\neg x_{j}\)) corresponding to \(\ell_{j}\). The arm is represented by a sequence of intervals that are separated by a small gap within each Christmas tree (such that, for any two arms, their gaps are disjoint and the resulting intervals do not contain each other). Presuming that the total number of colors is \(H\), two intervals of the same arm that are separated by a gap need to get the same color because, at the gap, \(H-2\) colors are occupied by other intervals and the one remaining "wrong" color is blocked due to the intervals depicted in green in Figures 3 and 4, which need to get color \(H\) or \(H-1\) and are contained by the (blue) intervals of the arms.

Figure 3: Variable gadget for the proof of Theorem 13 in its two states. The blue intervals with dots extend to the clause gadgets. The topmost blue interval (starting immediately to the right of a gray interval) indicates that \(x\) is part of a clause \(C_{j}\) with \(j>i\); the blue interval (with a small gap) that starts immediately to the right of a red interval indicates that \(\neg x\) is part of the clause \(C_{i}\).

Figure 4: Variable gadgets (red and gray) and clause gadgets (blue) for the 3-Sat instance \((\neg x_{2}\vee\neg x_{4}\lor x_{5})\wedge(x_{1}\vee\neg x_{3}\lor x_{4})\wedge(\neg x_{1}\lor x_{2}\lor x_{3})\) with a fulfilling truth assignment (above) and a non-fulfilling assignment (below). Note that the latter uses one color more (is one level higher).
If any of the literals in \(C_{i}\) is true, then its arm can use color \(5i\). Then, the arms of the other literals that occur in \(C_{i}\) can use colors \(5i+1\) and \(5i+2\). This allows a small Christmas tree (blue in Figure 4) of height \(5(m-i)+3\) being contained in the rightmost intervals of the arms that represent clause \(C_{i}\) to use the colors \(\{5i+3,5i+4,\ldots,H\}\).
If none of the literals in \(C_{i}\) is true, then none of their arms can use color \(5i\). Hence, they must use colors \(5i+1\), \(5i+2\), and \(5i+3\) (or higher). This forces the small (blue) Christmas tree of clause \(C_{i}\) to use colors \(\{5i+4,\ldots,H,H+1\}\).
Thus, a coloring with \(H\) colors exists if and only if \(\varphi\) is satisfiable.
## 5 Coloring Bidirectional Interval Graphs Is NP-Hard
Theorem 14. Given a set \(\mathcal{I}\) of intervals and a positive integer \(k\), it is NP-hard to decide whether \(k\) colors suffice to color \(B[\mathcal{I}]\), that is, whether \(\chi(B[\mathcal{I}])\leq k\).
Proof.: We use the same ideas as in the proof of Theorem 13, but now we reduce from Monotone 3-Sat, the version of 3-Sat where every clause contains only negated or only unnegated variables as literals. For an overview, see Figures 5 and 6. The maximum color sufficient for a yes-instance is again \(H=5(m+1)\).
Observe that now a clique containing only arcs is a "staircase" instead of a Christmas tree. Consequently, each variable is now represented by a _red_ right-going and a _gray_ left-going staircase; see Figure 5.
For a clause \(C_{i}\) with only negated literals, we again have three "arms" starting to the right of the \(5i\)-th interval of the red staircase of the three corresponding variables; see the blue intervals in Figures 5 and 6. The intervals of these arms are left-going and they are not split by a gap at the other variable gadgets this time because now we need to contain the left-going intervals of the staircases to have edges instead of arcs. They end below a left-going _blue_ staircase (see Figure 6 on the upper right side) of height \(5(m-i)+3\) whose maximum color is \(H\) if and only if none of the corresponding arms gets a color greater than \(5i+2\).
Figure 5: Variable gadget for the proof of Theorem 14 in its two states. Interval directions are indicated by arrow heads. The blue intervals extend (to the right) to the clause gadgets of only negated variables. The two blue arrow heads starting next to the red intervals indicate that the clause \(C_{i}\) and a clause \(C_{j}\) with \(j>i\) contain the literal \(\neg x\).
Note that we need to avoid arcs between the arms. Therefore, we let every arm have a gap below each blue staircase it passes. These gaps do not overlap and their order is inverse to the order of the left endpoints of the involved intervals. We continue the arm with a right-going interval (to avoid an arc with the blue staircase and the other arms) ending to the right of the blue staircase, where we continue again with a left-going interval. Consider such an arm with intervals \(I\) and \(I^{\prime}\) (going in different directions) around a gap. At this gap, it is important that no color smaller than the color of \(I\) is available for \(I^{\prime}\), forcing \(I^{\prime}\) to also get the color of \(I\). Hence, we add long left-going intervals blocking every color not occupied by an arm or the blue staircase; see the orange intervals in Figure 6.
For every clause with only unnegated literals, we use the same construction but mirrored; see the green intervals and staircases in Figure 6.
We now have the same conditions as in the proof of Theorem 13 and, therefore, the same correctness argument applies, which concludes the proof.
## 6 Coloring General Mixed Interval Graphs
In this section we consider a further generalization of mixed interval graphs. We are dealing with an interval graph \(G\) whose edges can be arbitrarily oriented (or stay undirected). In other words, the edge directions are not related to the geometry of the intervals.
Observe that a proper coloring of \(G\) exists if and only if \(G\) does not contain a directed cycle. Let \(\chi(G)\) denote the minimum number of colors in a proper coloring of \(G\), if it exists, or \(\infty\) otherwise. We point out that the existence of a directed cycle can be determined in polynomial time (using, for example, a depth-first search).
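For completeness, a standard iterative depth-first search over the arcs suffices (edges are irrelevant here, since only arcs can create cyclic ordering constraints); the encoding is our own:

```python
def has_directed_cycle(vertices, arcs):
    """Return True iff the arcs contain a directed cycle (white/gray/black DFS)."""
    succ = {v: [] for v in vertices}
    for u, v in arcs:
        succ[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    state = {v: WHITE for v in vertices}
    for start in vertices:
        if state[start] != WHITE:
            continue
        state[start] = GRAY
        stack = [(start, iter(succ[start]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                state[v] = BLACK        # all successors explored
                stack.pop()
            elif state[w] == GRAY:
                return True             # back arc closes a directed cycle
            elif state[w] == WHITE:
                state[w] = GRAY
                stack.append((w, iter(succ[w])))
    return False

print(has_directed_cycle("abc", [("a", "b"), ("b", "c"), ("c", "a")]))  # True
```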
Note that clearly we have \(\omega(G)\leq\chi(G)\). However, there is another parameter that enforces a large chromatic number even in sparse graphs. A _directed path_ (of length \(t\)) in \(G\) is a sequence of vertices \(\langle v_{1},v_{2},\ldots,v_{t}\rangle\), such that, for each \(i\in[t-1]\), the arc \((v_{i},v_{i+1})\) exists. Let \(\lambda(G)\) denote the length of a longest induced directed path in \(G\).
Note that if \(\langle v_{1},v_{2},\ldots,v_{\ell}\rangle\) is a directed path in \(G\), then in any proper coloring of \(G\), the vertices of the path receive pairwise distinct colors. Thus we have \(\lambda(G)\leq\chi(G)\), and consequently \(\chi(G)\geq\max\{\omega(G),\lambda(G)\}\).

Figure 6: Variable gadgets (red and gray) and clause gadgets (green and blue) for the Monotone 3-Sat instance \((x_{1}\lor x_{2}\lor x_{4})\wedge(\neg x_{1}\lor\neg x_{2}\lor\neg x_{3})\wedge(\neg x_{2}\lor\neg x_{3}\lor\neg x_{4})\) with a fulfilling truth assignment (top) and a non-fulfilling assignment (bottom), which use \(H\) and \(H+1\) colors, respectively. Interval directions are indicated by arrow heads.
Theorem 15. Let \(G\) be a mixed interval graph without directed cycles. Then \(\chi(G)\leq\omega(G)\cdot\lambda(G)\).
Proof.: Let \(V\) denote the vertex set of \(G\). Let \(G^{\rightarrow}\) be the graph obtained from \(G\) by removing all edges. Clearly, \(G^{\rightarrow}\) is a DAG. We partition \(V\) into _layers_\(L_{0},L_{1},\ldots\) as follows. The set \(L_{0}\) consists of the vertices that are sources in \(G^{\rightarrow}\), i.e., they do not have incoming arcs. Then, for \(i=1,2,\ldots\), we iteratively define \(L_{i}\) to be the set of sources in \(G^{\rightarrow}\setminus\bigcup_{j=0}^{i-1}L_{j}\). Note that \(\lambda(G)=\max\{i\colon L_{i}\neq\emptyset\}\). For each \(x\in V\), let \(\ell(x)\) denote the unique \(i\) such that \(x\in L_{i}\).
Recall that \(U(G)\) is an (undirected) interval graph, and thus \(\chi(U(G))=\omega(U(G))=\omega(G)\). Let \(c\colon V\to[\omega(U(G))]\) be an optimal proper coloring of \(U(G)\).
Now we define a coloring \(f\) of \(G\): to each vertex \(x\) of \(G\), we assign the number \(\ell(x)\cdot\omega(G)+c(x)\); these colors are ordered by their numeric value. Clearly, their number is bounded by \(\lambda(G)\cdot\omega(G)\). We claim that \(f\) is a proper coloring.
Consider an edge \(\{x,y\}\). As this is also an edge in \(U(G)\), we obtain that \(c(x)\neq c(y)\), and so \(f(x)\neq f(y)\). Now consider an arc directed from \(x\) to \(y\). The existence of such an arc implies that \(\ell(x)<\ell(y)\), and thus \(f(x)<f(y)\).
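A sketch of this construction (it assumes the input has no directed cycle and that a proper coloring \(c\) of the underlying undirected interval graph is supplied, e.g., by the classical greedy sweep over left endpoints):

```python
def layered_coloring(vertices, arcs, c):
    """Combine layers of the arc-only DAG with a proper coloring c of U(G).
    c: dict vertex -> color in 1..omega, proper for *all* edges and arcs of
    U(G). Returns f(x) = l(x)*omega + c(x): arcs strictly increase the layer,
    edges change c, so f is a proper coloring of the mixed graph."""
    preds = {v: set() for v in vertices}
    for u, v in arcs:
        preds[v].add(u)
    layer, remaining, i = {}, set(vertices), 0
    while remaining:
        sources = {v for v in remaining if not (preds[v] & remaining)}
        for v in sources:
            layer[v] = i
        remaining -= sources        # assumes acyclicity, so sources is never empty
        i += 1
    omega = max(c.values())
    return {v: layer[v] * omega + c[v] for v in vertices}

c = {"a": 1, "b": 2, "c": 2}        # proper for U(G) below (b, c non-adjacent)
print(layered_coloring("abc", [("a", "b"), ("a", "c")], c))
# {'a': 1, 'b': 4, 'c': 4}
```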
For some instances, the above bound is asymptotically tight.
Proposition 16. There is an infinite family \((G_{k})_{k\geq 1}\) of mixed interval graphs with \(|V(G_{k})|=k^{2}\), \(\omega(G_{k})=2k\), \(\lambda(G_{k})=k\), and \(\chi(G_{k})=k^{2}=\omega(G_{k})\cdot\lambda(G_{k})/2\).
Proof.: Let \(\mathcal{I}_{k}=\mathcal{I}_{k,1}\cup\mathcal{I}_{k,2}\cup\cdots\cup\mathcal{I }_{k,k}\) be a set of \(k^{2}\) intervals defined as follows; see Figure 7 for \(\mathcal{I}_{4}\). For \(i\in[k]\), let \(\mathcal{I}_{k,i}\) be a multiset that contains \(k\) copies of the interval \([i,i+1.1]\). Note that, for \(i\in[k-1]\), every interval in \(\mathcal{I}_{k,i}\) intersects every interval in \(\mathcal{I}_{k,i+1}\). Let \(G_{k}\) be a mixed interval graph for the set \(\mathcal{I}_{k}\). We direct the edges of \(G_{k}\) as follows. Let \(\{I,I^{\prime}\}\) be a pair of intervals in \(\mathcal{I}_{k}\) that intersect each other. If \(I\) and \(I^{\prime}\) are copies of the same interval, then \(\{I,I^{\prime}\}\) is an edge in \(G_{k}\). Otherwise, \((I,I^{\prime})\) is an arc in \(G_{k}\), assuming that \(I\) lies further to the left than \(I^{\prime}\). It is easy to see that \(G_{k}\) has the desired properties.
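The construction in code (our encoding: vertex \((i,j)\) is the \(j\)-th copy of the interval \([i,i+1.1]\)):

```python
def g_k(k):
    """Vertices, edges, and arcs of G_k: k copies of [i, i+1.1] per i in 1..k.
    Copies of the same interval are joined by edges; intersecting copies of
    consecutive base intervals are joined by left-to-right arcs."""
    vertices = [(i, j) for i in range(1, k + 1) for j in range(k)]
    edges = [{(i, a), (i, b)} for i in range(1, k + 1)
             for a in range(k) for b in range(a + 1, k)]
    arcs = [((i, a), (i + 1, b)) for i in range(1, k)
            for a in range(k) for b in range(k)]
    return vertices, edges, arcs

V, E, A = g_k(3)
print(len(V), len(E), len(A))  # 9 vertices, 9 edges, 18 arcs
```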
Note that the mixed interval graphs that we constructed in the proof above are even directional interval graphs.
## 7 Open Problems
The obvious open problems are improvements to the results in Table 1, in particular: Is there a constant-factor approximation algorithm for coloring general mixed interval graphs? Is there a polynomial-time recognition algorithm for bidirectional interval graphs? For applications in graph drawing, a better-than-2 approximation for coloring bidirectional interval graphs is of particular interest.
## Acknowledgments.
We are indebted to Krzysztof Fleszar, Zbigniew Lonc, Karolina Okrasa, and Marta Piecyk for fruitful discussions.
|
2304.12474 | Design optimization for high-performance computing using FPGA | Reconfigurable architectures like Field Programmable Gate Arrays (FPGAs) have
been used for accelerating computations in several domains because of their
unique combination of flexibility, performance, and power efficiency. However,
FPGAs have not been widely used for high-performance computing, primarily
because of their programming complexity and difficulties in optimizing
performance. We optimize Tensil AI's open-source inference accelerator for
maximum performance using ResNet20 trained on CIFAR in this paper in order to
gain insight into the use of FPGAs for high-performance computing. In this
paper, we show how improving hardware design, using Xilinx Ultra RAM, and using
advanced compiler strategies can lead to improved inference performance. We
also demonstrate that running the CIFAR test data set shows very little
accuracy drop when rounding down from the original 32-bit floating point. The
heterogeneous computing model in our platform allows us to achieve a frame rate
of 293.58 frames per second (FPS) and a 90% accuracy on a ResNet20 trained
using CIFAR. The experimental results show that the proposed accelerator
achieves a throughput of 21.12 Giga-Operations Per Second (GOP/s) with a 5.21 W
on-chip power consumption at 100 MHz. The comparison results with off-the-shelf
devices and recent state-of-the-art implementations illustrate that the
proposed accelerator has obvious advantages in terms of energy efficiency. | Murat Isik, Kayode Inadagbo, Hakan Aktas | 2023-04-24T22:20:42Z | http://arxiv.org/abs/2304.12474v1 | # Design optimization for high-performance computing using FPGA
###### Abstract
Reconfigurable architectures like Field Programmable Gate Arrays (FPGAs) have been used for accelerating computations in several domains because of their unique combination of flexibility, performance, and power efficiency. However, FPGAs have not been widely used for high-performance computing, primarily because of their programming complexity and difficulties in optimizing performance. We optimize Tensil AI's open-source inference accelerator for maximum performance using ResNet20 trained on CIFAR in this paper in order to gain insight into the use of FPGAs for high-performance computing. In this paper, we show how improving hardware design, using Xilinx Ultra RAM, and using advanced compiler strategies can lead to improved inference performance. We also demonstrate that running the CIFAR test data set shows very little accuracy drop when rounding down from the original 32-bit floating point. The heterogeneous computing model in our platform allows us to achieve a frame rate of 293.58 frames per second (FPS) and a 90% accuracy on a ResNet20 trained using CIFAR. The experimental results show that the proposed accelerator achieves a throughput of 21.12 Giga-Operations Per Second (GOP/s) with a 5.21 W on-chip power consumption at 100 MHz. The comparison results with off-the-shelf
devices and recent state-of-the-art implementations illustrate that the proposed accelerator has obvious advantages in terms of energy efficiency.
Keywords:High-performance computing, Tensil AI, Design optimization, FPGA, Open-source inference accelerator
## 1 Introduction
Real-time vision-based motion tracking is necessary for many applications. Real-time video streaming for surveillance applications requires advanced encoding and decoding techniques as well as compute-intensive image processing designs. The ability to operate in real time is especially important for applications in which speed is paramount, such as production areas, traffic speed control systems, or when the camera activity needs to be synchronized with other system components. Developers of machine vision frameworks must decide which of these processing steps to implement in specialized hardware while developing the rest of the framework; a software-only implementation is commonly selected when prototyping the system for the first time. The number of frames the application must process per second depends on how much data the prototyped application must handle at any given moment. Researchers are developing methods to design embedded systems that require less power, which is critical for most applications in modern embedded systems [1][2]. High-resolution image processing applications demand faster, configurable, high-throughput frameworks with superior efficiency for processing enormous data sets [3][4][5]. FPGAs (Field-Programmable Gate Arrays) can play an important role since they provide the configurability, adaptability, and parallelism needed to match the required throughput rates of the application under consideration [6]. An FPGA device provides an execution platform suitable for real-life applications. FPGAs have significantly increased the flexibility of hardware in general, and a wider community of developers can now make use of these devices thanks to advancements in the toolchains for developing applications on them. Applications that require concurrency, high transfer speeds, and re-programmability typically use FPGAs. Modern digital life is increasingly reliant on image-processing applications, which are used in a variety of domains, including medical imaging, security, autonomous vehicles, and entertainment. In order to meet the increasing demand for more accurate and faster image processing, high-performance computing systems are needed, and image processing systems can be improved through FPGA-based design optimization. Several factors drive the push for higher performance in image processing. The following factors are discussed in more detail.
* Resolution and Image Size: An image processing system's performance is strongly influenced by image resolution and file size. The complexity of
the image processing algorithms required to analyze images increases with increasing resolution and size. For example, medical images such as CT scans and MRI images can have resolutions of several megapixels, and the files can be several gigabytes in size. Images of this size and complexity require high-performance computing systems that can handle large amounts of data quickly and accurately.
* Real-Time Processing: Real-time image processing is often required by image processing applications. Video streaming, security systems, and autonomous vehicles are applications in which real-time processing is essential. Detecting and avoiding obstacles, pedestrians, and other vehicles requires real-time image processing in autonomous vehicles. In order for these applications to operate smoothly, high-performance computing systems must be able to process large volumes of data in real-time.
* Complex Algorithms: High performance is also necessary due to the complexity of image processing algorithms. Complex algorithms require more processing power and memory to run. In image processing applications such as object recognition and image classification, deep learning algorithms require a significant amount of processing power and memory. High-performance computing systems accelerate the execution of these complex algorithms, providing faster and more accurate results.
* Parallel Processing: Parallel processing can greatly benefit image processing applications since multiple computations can be performed at once. Parallel processing is made possible through FPGA-based design optimization since FPGAs offer a high degree of parallelism. Multi-pixel image processing is possible with FPGAs, which allows for faster and more efficient image processing. The importance of this is especially pronounced in applications such as video processing where multiple frames must be processed simultaneously.
The increasing demand for faster and more accurate image processing requires image processing applications to push for higher performance. High-performance computing systems are needed because of factors such as high resolution and image size, real-time processing, complex algorithms, and parallel processing. Optimizing FPGA-based designs for image processing applications is a very effective way to increase performance, and it is likely to become more relevant as the demand for faster and more accurate image processing increases. FPGAs are integrated circuits that can be programmed and reprogrammed to perform specific tasks. The unique features that make them well-suited to high-performance computing make them increasingly popular. High-performance computing can benefit from FPGAs as outlined below.
* High Parallelism: Parallelism is one of the key features of FPGAs that makes them well-suited to high-performance computing. An FPGA can execute multiple tasks or operations simultaneously, which is essential for high-performance computing applications requiring a large amount of processing power. FPGAs achieve parallelism by using configurable logic blocks that can perform different tasks simultaneously.
* Customizable Architecture: The flexible architecture of FPGAs makes them ideal for high-performance computing applications. FPGAs can be programmed to meet specific performance requirements, enabling them to optimize performance for particular applications. Consequently, FPGAs can be customized to meet the specific needs of high-performance computing applications, something that isn't possible with general-purpose processors.
* Low Latency: High-performance computing can also be achieved with FPGAs due to their low latency. System latency is the amount of time it takes for an input to be processed. An FPGA can process inputs in nanoseconds, which is much faster than a general-purpose processor. Real-time applications, such as video and audio processing, require low latency to avoid poor quality.
* High Bandwidth: FPGAs have high bandwidth, which refers to the amount of data they can transfer. The ability to transfer large amounts of data quickly is an important feature for high-performance computing applications. Through high-speed serial transceivers, FPGAs can achieve high bandwidth that can reach several gigabits per second.
* Energy Efficiency: High-performance computing is also made possible by FPGAs' energy efficiency. Compared with general-purpose processors, FPGAs have lower power consumption, which is important for applications requiring a high level of processing power. Due to their parallel architecture, FPGAs achieve high energy efficiency and can be customized to meet application requirements.
FPGAs are an attractive option for applications requiring a significant amount of processing power, such as image processing, machine learning, and real-time processing. FPGAs are likely to become even more important in high-performance computing as the demand grows.
Tensil AI creates hardware and software solutions for machine learning applications. They offer high-performance machine learning inference on FPGA platforms through an open-source inference accelerator. As an open-source machine learning library developed by Google, TensorFlow Lite Inference Engine underpins Tensil AI's inference accelerator. The Tensil AI accelerator can therefore be easily integrated with existing machine learning applications. The Tensil AI inference accelerator performs quantization as one of its key features. Quantization reduces the precision of machine learning models, making them easier to store and deploy. A Tensil AI accelerator performs quantization on the fly, which reduces the memory and power requirements of the inference engine. Its ability to support dynamic shapes is another key feature of the Tensil AI inference accelerator. Machine learning applications that require real-time processing of sensor or camera data can benefit from variable input data sizes and shapes. The Tensil AI accelerator is able to change the size of the inference engine on the fly based on the size of the input data, so it can handle a wide range of input sizes and shapes. Tensil AI inference accelerators are highly configurable, allowing developers to optimize their performance according to their needs. The low latency, high bandwidth,
and high throughput processing it provides make it an ideal solution for high-performance computing applications. The Tensil AI inference accelerator is not only highly scalable but also highly efficient. The technology can be employed in edge devices such as smartphones, smart cameras, and IoT devices, as well as in cloud-based applications that require high-performance machine learning inferences. Tensil AI accelerators can be deployed on a wide range of FPGA platforms, including Xilinx's Alveo accelerator cards, making them ideal for high-performance computing applications. The Tensil AI open-source inference accelerator is a powerful tool for accelerating machine learning inference on FPGA platforms. A wide range of input sizes and shapes can be supported, making it a highly scalable and versatile solution. High-performance computing will likely become even more reliant on solutions like the Tensil AI inference accelerator as machine learning becomes more important [7][8].
The rest of the paper is organized as follows: **Section 2** presents the motivation and related works. **Section 3** introduces open-source ML inference accelerators. The proposed method and its experimental results and analysis are reported in **Section 4** and **Section 5**. **Section 6** concludes the paper and outlines future work.
## 2 Motivation
FPGAs have been around for several decades, and they are used in many different applications. Their use in high-performance computing, however, has been limited by a number of challenges and difficulties. FPGAs have not been widely used in high-performance computing due to their high development cost and complexity. The tools and technologies required for FPGA development are often expensive and complex, which makes it difficult to develop systems based on FPGAs. FPGA-based solutions have proven challenging to adopt for many organizations, especially for smaller organizations or those with limited resources. The limited availability of high-level software tools is another challenge with FPGAs in high-performance computing. Developing software for FPGAs requires a deep understanding of the underlying hardware architecture, which is more difficult than for traditional processors. Moreover, high-level synthesis tools are not as mature as those used for traditional processors, making development more challenging [9][10].
Some high-performance computing applications can also be constrained by the limited amount of on-chip memory on FPGAs, which forces a significant amount of data transfer between the FPGA and external memory, slowing performance and increasing latency. Floating-point operations, which many high-performance computing applications require, are also not natively supported by most FPGAs. FPGAs used in high-performance computing also have a limited number of prebuilt IP blocks. The development of FPGA-based solutions often requires the use of pre-built intellectual property (IP) blocks, such as memory controllers and data interfaces. The availability of these IP blocks for FPGAs is
often limited, which makes developing FPGA-based systems more difficult and time-consuming.
High-performance computing applications benefit from the advantages of FPGAs, despite these challenges. FPGAs can be highly optimized for specific tasks and often perform better than traditional processors in specific applications. A hardware-level parallelism capability also enhances performance for certain tasks. Recent developments have made FPGAs more accessible for high-performance computing, thus addressing these challenges. The availability of high-level synthesis tools for FPGAs makes software development easier, for example. A number of pre-built IP blocks are also being developed and made available for FPGAs. A number of FPGA-based solutions are now available that require less specialized hardware design knowledge and are easier to use. Despite the challenges and difficulties involved in developing and implementing FPGAs, they have not been widely used for high-performance computing, but efforts are being made to resolve these issues and make FPGA-based solutions more accessible and usable for high-performance computing. The adoption of FPGAs in high-performance computing will increase as development tools, IP blocks, and FPGA-based solutions improve [11].
High-performance computing applications have attracted significant interest in FPGAs in recent years. FPGA-based systems can be highly optimized for specific tasks, and they can often perform better than traditional processors in specific applications. The image and video processing industry has extensively used FPGAs for high-performance computing. The processing of high-resolution images and video can be carried out in real time using FPGAs. A high-level synthesis tool called Vivado HLS has been used by researchers at UCLA to develop an FPGA-based system for real-time image processing [12]. A throughput of 52 frames per second was achieved when filtering images, and 20 frames per second when segmenting images. High-performance computing has also been done using FPGAs in the financial industry. Complex mathematical operations are often involved in financial calculations, which are well suited for FPGAs. A high-frequency trading system developed by the Tokyo Stock Exchange (TSE) can process trades in less than one microsecond using FPGAs [13][14]. The system uses FPGAs to calculate financial instruments such as options and futures. Machine learning and artificial intelligence are other areas where FPGAs have been used for high-performance computing. FPGAs can be highly optimized for neural network computations, making it possible to process large amounts of data faster and more efficiently. Scientific calculations can be highly optimized on FPGAs, resulting in faster, more efficient processing of large amounts of data. Furthermore, a number of existing works focus on optimizing FPGA-based systems for high-performance computing in general. Researchers have developed a tool called FireSim to simulate large-scale FPGA-based systems using cloud resources [15]. The tool can be used to optimize system performance and evaluate different design options. There are many existing works that focus on using FPGAs for high-performance computing. Several applications, including image and
video processing, finance, machine learning, artificial intelligence, and scientific research, have been demonstrated using FPGAs in these studies. With the continued development of FPGA-based tools and technologies, we can expect to see even increased adoption of FPGAs for high-performance computing in the future.
## 3 Open-source ML inference accelerators
Many high-performance computing applications rely on machine learning inference. Models are used to analyze input data and generate output results. High-performance computing systems can help speed up ML inference, which is often computationally intensive. High-performance computing applications may benefit from open-source ML inference accelerators. An ML inference accelerator is a specialized hardware device that performs ML inference tasks efficiently. It is typically performed on general-purpose processors or graphics processing units (GPUs) that are not optimized for ML inference. An ML inference accelerator provides efficient and optimized execution of ML inference tasks. Open-source ML inference accelerators offer the advantage of being free and customizable to fit specific use cases. An open-source ML inference accelerator offers a transparent and open development process, which encourages community participation and feedback. Open-source accelerators can also reduce the cost and time associated with developing ML inference accelerators. Recently, the Versatile Tensor Accelerator (VTA) has gained significant attention as an open-source ML inference accelerator. Inference tasks using VTA are performed with the help of a highly optimized hardware accelerator. TensorFlow, PyTorch, and ONNX are among the popular ML frameworks supported by VTA. There are a variety of hardware platforms that can be used with VTA, including FPGAs and ASICs [16].
By providing open-source tools and platforms, developers can collaborate to create new and more efficient ML inference solutions. Collaboration can lead to faster development and adoption of new ML techniques and applications. Other open-source ML inference accelerators, such as Intel's OpenVINO and Xilinx's Deep Learning Processor, are available alongside VTA [17][18]. These accelerators provide developers with a variety of options for building and optimizing machine learning systems. High-performance computing applications can take advantage of open-source ML inference accelerators because they provide a flexible and powerful tool. Development of ML inference systems can be customized and optimized, resulting in lower development costs and time, collaboration and innovation, and wide adoption of new ML techniques and applications. As the field of ML continues to grow and evolve, we can expect to see even more powerful and efficient open-source ML inference accelerators become available.
High-performance computing applications using FPGAs can be built with Nengo and Tensil AI frameworks. A variety of hardware platforms, including FPGAs, can be used to build large-scale neural models with Nengo software.
In addition, Nengo is designed to be flexible and extensible, allowing users to create customized models and algorithms. Nengo is well-suited for applications such as robotics and cognitive modeling because it can build complex models with many neurons and synapses. The Tensil AI hardware accelerator, on the other hand, performs machine learning inference tasks. The Tensil AI is designed to provide high performance at low power consumption, making it ideal for applications such as image recognition and natural language processing. Tensil AI is designed to be easily integrated with existing hardware architectures and supports a wide range of machine learning frameworks, including TensorFlow and PyTorch. Their focus is one of the key differences between Nengo and Tensil AI. Tensil AI focuses on accelerating machine learning inference tasks, whereas Nengo is primarily focused on building large-scale neural networks [19][20][21].
Figure 1: VTA Framework

Nengo is more versatile and can be used for a wider variety of tasks, whereas Tensil AI is more focused on particular tasks. Their development processes are also key differences between Nengo and Tensil AI. The open-source project Nengo is actively developed and maintained by a large developer community. As a result, users have access to a variety of resources, including documentation, tutorials, and support forums. The Tensil AI product, on the other hand, is a commercial product developed and supported by Tensil AI. Due to this, users have access to dedicated support and resources, but not as much community support as with an open-source project. Machine learning inference tasks can be performed quickly and with low latency using Tensil AI. Self-driving cars and industrial automation, for example, can benefit from their ability to make inferences quickly and efficiently. Nengo, on the other hand, simulates complex behaviors over long periods of time using large-scale neural models. Tensil AI has the potential drawback of limited flexibility. Its limited versatility may be due to its specialized nature as a hardware accelerator. Users may not be able to create custom models or algorithms, and they may have to use pre-built models and architectures instead. Therefore, Nengo and Tensil AI are both powerful frameworks for developing high-performance computing applications. A variety of applications can be carried out with Nengo, whereas Tensil AI is more suited for specific tasks, such as machine learning inference. Developers should carefully evaluate the strengths and weaknesses of each framework before selecting one, and ultimately their choice will depend on the specific needs of the application [7][8].
## 4 Method
We propose different approaches such as Vivado hardware design, leveraging Xilinx Ultra RAM, and using advanced compiler strategies to improve the performance of inference. In the ResNet20-ZCU104 tutorial by Tensil AI, several methods are used to optimize the design of the ResNet20 neural network for implementation on the Xilinx ZCU104 development board using their open-source inference accelerator. ResNet-20 is trained using CIFAR-10, a dataset that contains 60,000 32x32 RGB images categorized into ten classes. A PyTorch framework is used to train the network, which reaches an accuracy of approximately 91%. Training is followed by several optimization steps for deployment on the ZCU104 device. A pruning technique reduces computation and memory requirements by removing unnecessary connections from the network. Tensil AI reduces the number of parameters and computations by pruning connections from the trained network. A further optimization method used in the tutorial is quantization, which involves reducing the precision of the weights and activations of the network. The network is quantized to 8-bit fixed-point precision using the TensorRT framework, further reducing memory and computation requirements. Tensil AI's open-source inference accelerator, designed to accelerate sparse neural network execution on FPGAs, implements the optimized neural network on the ZCU104. High performance and energy efficiency are achieved by utilizing FPGAs' reconfigurability and parallelism. The ResNet20-ZCU104 tutorial by Tensil AI demonstrates a variety of optimization techniques that can be used to optimize neural network designs for implementation on FPGA-based accelerators, including pruning and quantization.
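As a rough illustration of these two optimizations, the following sketch applies magnitude pruning and 8-bit dynamic quantization to a trained model with standard PyTorch utilities. The checkpoint path and the 30% pruning fraction are placeholders, and the actual tutorial flow (TensorRT quantization plus the Tensil compiler) differs in its details.

```python
import torch
import torch.nn.utils.prune as prune

# Load a trained ResNet-20 checkpoint (hypothetical path; assumes the
# full module was serialized rather than just a state_dict).
model = torch.load("resnet20_cifar10.pt")
model.eval()

# Pruning: remove the 30% smallest-magnitude weights in each conv layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the sparsity into the tensor

# Quantization: reduce linear-layer weights to 8-bit integers on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```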
### Baseline design
Specifying 32 by 32 systolic array size contributed to the high utilization of multiply-accumulate units (DSP). Note how we pushed Block RAM (BRAM) utilization almost to its limit by specifying 16 KV local memory and 4 KV accumulators (KV = 1024 vectors = 1024 * 32 * 16 bits). The ZCU104 board supports an SD card interface. This allows us to use Tensil embedded driver
file system functionality to read the ResNet model and a set of images to test it with. The set we will be using is the test set for the original CIFAR-10. The ResNet model is trained with separate training and validation sets from the CIFAR-10. The test set is what the model hasn't seen in training and therefore gives an objective estimate of its accuracy. The CIFAR-10 provides a test set of 10,000 images in several formats. We will use the binary format that is more suitable for the embedded application. With the SD card inserted and containing the CIFAR-10 test data set and the ResNet model compiled for Tensil, you should see the inference printing every 100 images and the corresponding prediction along with measured inferences (frames) per second. After running the inference on the entire test data set the program will print the final average frames per second and the accuracy of the inference. For the baseline solution, we are getting an average of 133.54 frames per second with 90% accuracy. Note that the accuracy we are seeing when testing the same ResNet model with TensorFlow is 92%. The 2% drop is due to changing the data type from a 32-bit floating point in TensorFlow to a 16-bit fixed point in Tensil.
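The embedded test program itself is written against Tensil's embedded driver; the following Python-style sketch only mirrors its measurement logic, with `run_inference` standing in as a hypothetical placeholder for the driver call that classifies one image.

```python
import time

def evaluate(run_inference, images, labels):
    """Report running FPS every 100 images, then final FPS and accuracy."""
    correct = 0
    start = time.perf_counter()
    for i, (image, label) in enumerate(zip(images, labels), start=1):
        if run_inference(image) == label:
            correct += 1
        if i % 100 == 0:
            print(f"{i} images, {i / (time.perf_counter() - start):.2f} FPS")
    elapsed = time.perf_counter() - start
    return len(images) / elapsed, correct / len(images)
```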
### Dual clock solution
The first optimization is based on the following observation. The Tensil RTL block is clocked at 100MHz. The Tensil block DRAM0 and DRAM1 ports are connected to AXI interfaces on the ZYNQ block. The instruction port is indirectly connected to the AXI on the ZYNQ block via AXI DMA. ZYNQ UltraScal+ AXI ports support up to 333MHz and a maximum width of 128 bits. This gives us the opportunity to introduce a second clock domain for 333MHz while at the same time making the Tensil AXI ports wider. Figure 2 shows how this may work in a simpler 100MHz to 400MHz, 512- to 128-bit conversion. Each clock in the Tensil clock domain would pump one 512-bit word in or out. This would match 4 clocks in the ZYNQ clock domain with 512-bit word split to or composed from 4 128-bit words.
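The conversion works because the two domains carry equal bandwidth across the boundary:

\[100\,\mathrm{MHz}\times 512\,\mathrm{bit}=51.2\,\mathrm{Gbit/s}=400\,\mathrm{MHz}\times 128\,\mathrm{bit},\]

so neither side stalls as long as words are produced and consumed at the matched rates.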
For the dual clock solution, we are getting an average of 152.04 frames per second, a meaningful improvement over the baseline. This improvement is roughly proportional to the ratio of time spent in moving data to and from the FPGA to the time spent in internal data movement and computation. An accelerator is designed in two parts: a data path and a control path. Data paths process input data through neural networks, while control paths manage data flow and control the overall operation of the accelerator. Data and control paths are clocked separately at different frequencies, with the data path clocked at a higher frequency to maximize the accelerator's throughput. Data synchronization and timing violations between the two paths can also impact the accelerator's performance and reliability with this approach. Tensil AI's dual clock solution includes a number of design techniques such as pipelining, synchronization signals, and careful timing analysis to ensure proper data synchronization and avoid timing violations. A high level of throughput is still maintained while these techniques improve the accelerator's performance and reliability.

Figure 2: Tensil RTL clock domain.
### Ultra RAM solution
Ultra RAM refers to a design approach that optimizes memory access and utilization in FPGA-based inference accelerators. An Ultra RAM is a high-density memory block that is available in Xilinx FPGAs and offers high bandwidth and low latency access to memory. The Ultra RAM solution is used in the ResNet20-ZCU104 project to store the weights of the neural network model, which is a critical part of inference. Ultra RAMs are used to store weights so they can be accessed quickly and efficiently during the inference process. As part of the Ultra RAM configuration, the accelerator also supports concurrent reads and writes, further improving performance. Tensil AI's design approach makes optimal use of Ultra RAMs by using techniques such as weight compression and quantization, which reduce the memory footprint of weights without compromising accuracy. Using these techniques increases the capacity and efficiency of Ultra RAMs, improving the accelerator's performance overall. Inference accelerators based on FPGAs benefit greatly from the Ultra RAM solution for optimizing memory access and utilization. A ResNet20-ZCU104 neural network model has been successfully inferred with high performance and efficiency using the ResNet20-ZCU104.
The second optimization is based on the higher-end ZYNQ UltraScale+ device's support for another type of on-chip memory called Ultra RAM. By default, Vivado maps dual-port memory to Block RAM. In order for it to map to the Ultra RAM it needs hints in the Verilog code. To enable these hints we will use the Xilinx ultra ram option of the Tensil RTL tool. The amount of Ultra RAM available on ZCU104 allows us to add around 48 KV memory in addition to 20 KV available through Block RAM. We start by creating a new Tensil architecture for ZCU104 in which we allocate all of the Block RAM (20 KV) to accumulators and all of the Ultra RAM (48 KV) to local memory. For the Ultra RAM solution, we are getting an average of 170.16 frames per second, another meaningful improvement. This improvement is based purely on having larger on-chip memory. With a small on-chip memory the Tensil compiler is forced to partition ResNet convolution layers into multiple load-compute-save blocks. This, in turn, requires that the same input activations are loaded multiple times, assuming weights are loaded only once. This is called weight-stationary dataflow. In the future, we will add an option for input-stationary dataflow. With it, when partitioned, the input activations are loaded once and the same weights are loaded multiple times. FPGA utilization for the Ultra RAM design is shown in Table 1.
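Using the definition of KV from the baseline design (1 KV = 1024 vectors of 32 16-bit scalars), these allocations work out to:

\[1\,\mathrm{KV}=1024\times 32\times 16\,\mathrm{bit}=64\,\mathrm{KiB},\qquad 48\,\mathrm{KV}=3\,\mathrm{MiB},\qquad 20\,\mathrm{KV}=1.25\,\mathrm{MiB}.\]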
Figure 3 shows such a 3-partitioned compilation. Layer N has 2 stages. In each stage, a unique subset of weights is loaded. Then, each stage is further split into 2 partitions. Partition is defined by the largest amount of weights, input and output activations, and intermediate results that fit local memory and accumulators.
Having larger on-chip memory reduces this partitioning and, by extension, the need to load the same data multiple times. Figure 4 shows how layer N now has 1 stage and 1 partition that fits the larger local memory and accumulators, which allows weights and activations to be loaded only once.

Figure 4: 1-Partitioned Compilation.
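To make the effect of partitioning concrete, here is a schematic sketch (not Tensil's actual scheduler) of weight-stationary execution. With large enough local memory, both loops collapse to a single iteration, so each tensor is loaded exactly once.

```python
def run_layer(weight_stages, activation_partitions, load, compute, save):
    # Weight-stationary dataflow: each stage loads a unique weight subset
    # once, but the same activation partitions are re-loaded in every stage.
    for weights in weight_stages:
        load(weights)
        for activations in activation_partitions:
            load(activations)  # redundant load when there are multiple stages
            save(compute(weights, activations))
```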
| Utilization | XCZU7EV |
| --- | --- |
| LUT | 181440 |
| DSP | 1054 |
| BRAM | 293 |
| URAM | 96 |

Table 1: Resource Usage.
Figure 3: 3-Partitioned Compilation.
### Compiler Strategy with large local memory
This strategy exploits the speed and low latency of local memory resources in FPGAs, such as Block RAMs and Ultra RAMs, which are faster than external memory resources such as DRAM. Inference is carried out using the data and weights stored in this local memory. Several techniques are used by ResNet20-ZCU104 to implement the compiler strategy with large local memory, such as weight compression and quantization, which reduce the memory footprint of weights without compromising their accuracy. The reduced memory footprint allows for larger portions of the neural network model and data to be stored in the local memory resources. The final optimization is based on the same hardware design and Tensil architecture we created to support the Ultra RAM. We will only change the Tensil compiler strategy. Tensil compilers, by default, assume that the model is much larger than the FPGA's local memory in terms of its weights and activations. This is true for large models and for low-end FPGA devices. For small and medium-sized models running on large FPGA devices, there is a possibility that local memory is large enough to contain the weights plus input and output activations for each layer. Our proposed compiler strategy is shown in Figure 5.
## 5 Results
Our results demonstrate the effectiveness of Tensil AI's open-source inference accelerator for optimizing neural networks and implementing them on FPGAs for high-performance computing applications. Neural network inference has been implemented using CPUs, GPUs, and FPGAs. CPU/GPU-based NNs consume a lot of power and have a limited throughput due to limited memory bandwidth, as shown in Table 2. As shown in Table 3, many researchers have developed FPGA-based designs for accelerating network inference workloads in order to achieve better energy efficiency. FPGAs function as programmable devices that can construct unique logic, alleviating constraints on neural network implementation. We demonstrated how improving the Vivado hardware design, leveraging Xilinx Ultra RAM, and using advanced compiler strategies can improve the performance of inference. As a result, one of the current research hotspots involves the development of hardware systems supporting NN inference based on FPGA to achieve high throughput and power efficiency.
Figure 6 summarizes presented solutions and their frames per second performance.
Figure 5: Compiler Strategy.
Figure 6: Performance Chart.
## 6 Conclusions
The ResNet20-ZCU104 project demonstrates the potential of using FPGA-based acceleration for machine learning tasks. By leveraging the unique capabilities of FPGAs, such as low latency and efficient memory usage, the project achieved impressive results in terms of both performance and accuracy. The model is implemented for hardware acceleration on heterogeneous devices, resulting in an energy-efficient, reconfigurable system. In the next phase of our work, we propose to use Dynamic Partial Reconfiguration, a state-of-the-art technology of reconfigurable hardware, within the achieved high-performance framework. With this feature, we will address reshaping and offloading the pre- and post-processing of data for high-performance computing with Tensil AI. Tensil AI can already accept a different model by compiling something new, but the data going in and out could require manipulation between the input and output stages of the Tensil AI pipeline.
## 7 Declarations
### Ethical Approval
Not applicable
### Competing interests
The authors declare no competing interests.
### Authors' contributions
M.I., K.I., and H.A. contributed equally to the design, implementation, and evaluation of the FPGA-based inference accelerator for machine learning tasks presented in this paper. M.I. and K.I. performed the hardware design and optimization and the software development and performance evaluation, while H.A. contributed to a review of this paper. All authors contributed to the writing and revision of the manuscript and approved the final version for submission.
| Work | Device | Frequency | Quantization | Power | FPS | Throughput | Energy Efficiency |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Ma et al. [22] | Arria-10 GX | 150 MHz | 8-16 bit fixed | 21.2 W | - | 645.25 GOP/s | 30.44 GOP/s/W |
| Mei et al. [23] | Virtex-7 | 20 MHz | 16-bit float | 10.81 W | 6.58 | 202.42 GOP/s | 1.64 GOP/s/W |
| Zhang et al. [24] | Zynq ZU7EV | 300 MHz | 8-bit float | 17.67 W | - | 290.40 GOP/s | 0.80 GOP/s/W |
| Blott et al. [25] | Zynq ZU3EG | 220 MHz | 8-bit float | 10.2 W | 200 | 400 GOP/s | 39.21 GOP/s/W |
| Zhang et al. [26] | Virtex-7 | 200 MHz | 8-bit float | 6.2 W | 6.77 | 208.06 GOP/s | 31.16 GOP/s/W |
| Li et al. [27] | Zynq 7010 | 200 MHz | 16-bit float | 19.52 W | - | 452.8 GOP/s | 23.26 GOP/s/W |
| Suda et al. [28] | Stratix-V | 120 MHz | 8-16 bit fixed | 25.8 W | - | 117.8 GOP/s | 4.56 GOP/s/W |
| Ours | Zynq ZU7EV | 100 MHz | 16-bit fixed | 5.21 W | 293.58 | 21.12 GOP/s | 4.05 GOP/s/W |

Table 3: Comparisons with previous implementations.
### Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
### Availability of data and materials
The data used in this study are available upon request.
|
2306.13675 | Intersectionality and Testimonial Injustice in Medical Records | Detecting testimonial injustice is an essential element of addressing
inequities and promoting inclusive healthcare practices, many of which are
life-critical. However, using a single demographic factor to detect testimonial
injustice does not fully encompass the nuanced identities that contribute to a
patient's experience. Further, some injustices may only be evident when
examining the nuances that arise through the lens of intersectionality.
Ignoring such injustices can result in poor quality of care or life-endangering
events. Thus, considering intersectionality could result in more accurate
classifications and just decisions. To illustrate this, we use real-world
medical data to determine whether medical records exhibit words that could lead
to testimonial injustice, employ fairness metrics (e.g. demographic parity,
differential intersectional fairness, and subgroup fairness) to assess the
severity to which subgroups are experiencing testimonial injustice, and analyze
how the intersectionality of demographic features (e.g. gender and race) make a
difference in uncovering testimonial injustice. From our analysis, we found
that with intersectionality we can better see disparities in how subgroups are
treated and there are differences in how someone is treated based on the
intersection of their demographic attributes. This has not been previously
studied in clinical records, nor has it been proven through empirical study. | Kenya S. Andrews, Bhuvani Shah, Lu Cheng | 2023-06-20T17:22:50Z | http://arxiv.org/abs/2306.13675v1 | # Intersectionality and Testimonial Injustice in Medical Records
###### Abstract
Detecting testimonial injustice is an essential element of addressing inequities and promoting inclusive healthcare practices, many of which are life-critical. However, using a single demographic factor to detect testimonial injustice does not fully encompass the nuanced identities that contribute to a patient's experience. Further, some injustices may only be evident when examining the nuances that arise through the lens of intersectionality. Ignoring such injustices can result in poor quality of care or life-endangering events. Thus, considering intersectionality could result in more accurate classifications and just decisions. To illustrate this, we use real-world medical data to determine whether medical records exhibit words that could lead to testimonial injustice, employ fairness metrics (e.g. demographic parity, differential intersectional fairness, and subgroup fairness) to assess the severity to which subgroups are experiencing testimonial injustice, and analyze how the intersectionality of demographic features (e.g. gender and race) make a difference in uncovering testimonial injustice. From our analysis, we found that with intersectionality we can better see disparities in how subgroups are treated and there are differences in how someone is treated based on the intersection of their demographic attributes. This has not been previously studied in clinical records, nor has it been proven through empirical study.
## 1 Introduction
In medical settings, decisions can have life-critical consequences (Zenios et al., 1999; Kumar Mangla et al., 2023; White and Lo, 2020; Cheng et al., 2021; Cheng and Liu, 2023), making it _essential_ to ensure that machine learning tools used there are fair. This fairness is often measured with common fairness metrics such as demographic parity (Dwork et al., 2012) and equal opportunity (Hardt et al., 2016). However, these metrics do not consider the intersectionality of the subjects under consideration (Ghosh et al., 2021; Gohar and Cheng, 2023). That is, by focusing solely on factors such as race, gender, or socioeconomic status, we ignore the nuances related to individuals with unique experiences shaped by having multiple features sensitive to marginalization. We theorize that _how various aspects of an individual intersect and contribute to their experiences, via intersectionality, could make instances of injustice more overt, and in some cases may be the sole approach for identifying such instances_. Intersectionality recognizes that power relations based on factors such as race, class, and gender are not mutually exclusive and can interact with each other, affecting all aspects of the social world (Marques, 2018). Therefore, it is important to consider intersectionality when evaluating the fairness of machine learning tools in medical settings.
In clinical settings, it is particularly important that care providers (e.g. physicians) properly acknowledge what their patients are hoping to convey to them in a way that does not diminish what the patient is saying. Moreover, it is imperative for care providers to accurately relay their understanding of their patients' experiences, as others will be dependent upon their previous understandings and evaluations, often recorded in notes, to assist with overseeing and providing care for that patient (Jin, 2021). We have seen that when this does not occur, there are higher instances of death amongst certain marginalized groups (Bowman, 2013). With the rise of machine learning tools that help make decisions on medical plans and treatments, tools which often only interact with the notes provided to them and not the actual patient, it is vital that these tools are able to properly see patients. This visibility should persist even when instances of injustice have buried the patient's words and hidden them as a speaker. Here, we focus on a particular form of injustice: testimonial injustice. Testimonial injustice occurs when someone is assigned less
credibility due to prejudices about them (Fricker, 2019).
The aim of our study is to examine how testimonial injustice in medical records is affected by the intersectionality of gender and race. These two observable attributes have historically led to marginalization in various societal settings, such as education (Rankin and Thomas, 2020), housing (Roscigno et al., 2009), and healthcare (Krieger, 1990; Chapman et al., 2013). In fact, some forms of marginalization may only be evident in those with multiple marginalized identities - for instance, a Black police woman may not experience the same level of power and privilege as a White male police officer (Martin, 1994). Neglecting to consider the various contributing identities of an individual may further marginalize them. Therefore, it is important to consider intersectionality when identifying and addressing injustices in order to result in more accurate classifications and decisions.
There has been a small amount of work done to understand testimonial injustice in medical records and to our knowledge no prior work on how intersectionality might affect the emergence of testimonial injustice, even in life-critical medical settings. This motivates our contributions to this work: (1) The importance of intersectionality has been spoken about but has not been shown before (particularly in the medical setting). Thus, we perform an empirical study to show there is a difference in how subgroups are treated in medical settings, but this can only be revealed in intersectional views. (2) Practitioners continue to use singular-feature fairness metrics in medical settings. Thus, we provide proof that we should not be using these metrics to detect instances of injustice. This proof has not been provided before, not even in medical settings. Thus, we (3) perform an empirical study to show traditional fairness metrics (i.e. demographic parity) are inefficient when judging people's experiences in healthcare because they produce different results when the entirety of a person is considered. (4) Lastly, not all metrics fit each situation - even in similar settings. Therefore, we analyze if different intersectional fairness metrics might reveal differences in how we recognize intersectionality.
Previous studies have shown that both Black patients and female patients are more likely to experience testimonial injustice in the medical field, as evidenced by the use of biased language in their records (Beach et al., 2021). However, these studies have not examined the specific impact of intersectionality, or how being simultaneously Black and female might affect testimonial injustice. Our work seeks to address this gap by examining the impact of the intersection of ethnicity (Black, Asian, Latino, and White) and gender (Male and Female - though we acknowledge in modern society, there is recognition of genders beyond the traditional binary options, the dataset used here only includes these two genders) on testimonial injustice in medical records.
## 2 Related Works
Despite the increased use of machine learning tools and a growing focus on intersectionality in the medical community (Holman et al., 2021; Bauer and Lizotte, 2021), there have been limited efforts to understand how intersectionality can impact outcomes in medical settings. Since various healthcare professionals rely on medical records to make treatment decisions and give proper care, it is crucial that such records are written appropriately (Bali et al., 2011). The authors of (Adam et al., 2022) found that even when race is removed from patients' records, models could detect the race of the patient - even when humans could not. Furthermore, they discovered that models trained on these records (i.e. records from which race has been removed) still maintain biases in treatment recommendations. Though they only remove race in their work, this further affirms that there are differences in how patients are spoken about in their records based on demographic features, emphasizing the need to study what can occur if we look at multiple demographic features as we do here. In their work, P Goddu et al. explored how stigmatizing language in a patient's medical record can shape the attitudes of physicians-in-training towards the patient and their clinical decision-making. They found that stigmatizing language is associated with more negative attitudes and less aggressive pain management. Building on this work, we examine words that may indicate testimonial injustice, which occurs when someone's statements are diminished due to stereotypes or prejudices about them (Fricker, 2019). It is therefore important to identify instances of stigmatizing language in medical records and take steps to prevent them from occurring as emphasized by Park et al.
In (Beach et al., 2021), the authors use a lexicon look-up to identify testimonial injustice in
medical records, analyzing the use of quotation marks, evidential words, and judgmental words in the records of male and female patients who are Black or White. We expand their work, including words that are negative and commonly used stigmatizing words in medical settings. We exclude the search for quotation marks, acknowledging that direct quotations may give rise to uncertainty by suggesting that the statement in question constitutes not a fact, but rather an assertion (Beach et al., 2021). However, we believe that our expanded lexicon will help to identify instances of testimonial injustice. Further in contrast to Beach et al., we consider the records of Black, White, Asian, and Latino patients, exploring how testimonial injustice may differ across the intersection of their identities with gender. The authors found that Black and female patients are most likely to experience testimonial injustice, highlighting the need to examine how different intersectional identities impact experiences of testimonial injustice in medical settings.
Previous research has examined the presence of epistemological bias in medical records based on sensitive attributes to detect instances of injustice, i.e., disparate treatment. Himmelstein et al. studied diabetic patients and found that non-Hispanic Black patients were more likely to have stigmatizing language included in their notes than non-Hispanic White patients. Similarly, Sun et al. investigated medical records and racial bias, discovering that Black patients had a 2.54 times higher chance of negative descriptors than White patients. These studies suggest that certain demographics may experience differential treatment in medical settings, which may help explain healthcare disparities. However, these works only examined single demographic features, while we seek to investigate their intersection. We anticipate that studying the intersection of groups will more clearly reveal instances of injustice or discrepancies in treatment. The ongoing use of tools that do not consider intersectionality highlights the importance of this research (Buolamwini and Gebru, 2018).
Guo and Caliskan developed a technique to automatically identify intersectional biases from static word embeddings. They found that their model's highest accuracy was for predicting emergent intersectional bias among African American and Mexican American women. This could be attributed to these groups experiencing more overt biases that are easier to detect. This discovery motivates us to further investigate if biases are more prevalent in high-risk settings such as medical settings, especially for individuals from marginalized groups. However, it can be challenging for humans to identify when a bias is occurring since it can be subtle, as highlighted by Hube and Fetahu. Furthermore, doctors may struggle to recognize their own use of words that cause testimonial injustice since they may be unconsciously influenced by their own biases and take them as facts (FitzGerald and Hurst, 2017; Beeghly and Madva, 2020).
## 3 Data
### MIMIC-III
Obtaining medical data has been a standing challenge, largely due to HIPAA requirements and privacy constraints. We use the MIMIC-III (Johnson et al., 2016) dataset, which contains features of interest to our experiments: ethnicity/race, gender, patient id, diagnosis, physicians' notes, and so on. This data was collected between 2001 and 2012 at the Beth Israel Deaconess Medical Center in Boston, MA. The MIMIC-III dataset contains information for 46,146 patients. The distribution of racial groups in the data was highly disproportionate, as shown in Table 1. The two genders represented in this dataset, Female and Male, however, are more balanced. We removed ethnicities that were listed as "unknown/not specified", "multi-race ethnicity", "other", "unable to obtain", and "patient declined to answer" since we cannot clearly denote the race of these patients. We also removed patients whose diagnosis was "newborn" since these patients had notes solely stating they were newly born. We did, however, include the newborns who had other diagnoses. Only 9 of the remaining patients were Caribbean and 38 were Middle Eastern, so we removed them from the records as well. We were not able to find any duplicate records in the dataset using a simple Python search.
After data pre-processing, there are 32,864 patients in total for experimentation. We truncated the MIMIC-III feature 'ethnicity' into 'race' such that all ethnicities are represented as the race often associated with them as labeled in the dataset (e.g. original ethnicity in the dataset: 'ASIAN -VIETNAMESE' was truncated to 'Asian'). For ethnicities that were not associated with a particular race, we searched for how they are commonly associated and relabeled them to the race (e.g. origi
nal ethnicity in the dataset: 'SOUTH AMERICAN' was relabeled to 'Latino'). Finally, given that many patients had multiple records, we clustered the patients based on their patient_id and combined their records based on patient_id, gender, race, and diagnosis (e.g. 56327, male, Latino, HYPOTENSION). We then ran our analysis on the physicians' notes to find terms that are testimonially unjust.
We analyze the distribution of data for MIMIC-III in A.3. Our analysis looked at the occurrence of our four types of words associated with testimonial injustice, namely evidential words (Figures 6 and 7), judgmental words (Figures 8 and 9), stigmatizing words (Figures 10 and 11), and negative words. We plot the density distribution of each gender, race, and their intersection as normalized sums of these types of words, where the numerator is the frequency of occurrence of the relevant words for that patient and the denominator is the number of records for that patient. We did not include the plots for negative words due to their limited occurrence in the medical notes of this dataset; however, we do use them in our analysis of the results for detecting testimonial injustice. Our observations suggest that the confluence of race and gender helps us better distinguish instances of testimonial injustice than either race or gender in isolation. In particular, when race and gender are considered independently, males seem to be treated better than females, and White patients are generally treated better than Black patients. However, there is nuance in the difference in the treatment of White males and White females as well as Black males and Black females.
### Testimonial Injustice Terms
In order to assess testimonial injustice in the physicians' notes, we focus on 4 main categories of unjust words: evidential, judgmental, negative, and stigmatizing words that can contribute to someone experiencing testimonial injustice. We use the same evidential and judgmental words from (Beach et al., 2021). Evidential terms do not endorse a statement but allow it to be agnostic (e.g. "complains", "says", "tells me" and so on). When a physician uses these words, they risk missing what the patient is actually experiencing. Judgment terms cast doubt on the sayer, with the hearer (i.e. the physician) framing their statements as sounding good or bad (e.g. "apparently", "claims", "insists", and so on). Exacerbated racial and ethnic healthcare disparities have been linked to negative words used to describe Black patients as well (Sun et al., 2022). Negative words are included in this study as they typically show active rejection or disagreement, e.g. "challenging", "combative", "defensive", "exaggerate", and so on. Clearly, the use of these words expresses assumptions about the patient and could result in a lower quality of care.
We also include stigmatizing terms as they are commonly used in medical contexts (Himmelstein et al., 2022). Stigmatizing terms are rooted in stereotypes or stigmas about a person (Link and Phelan, 2001) (e.g. "user", "faking", "cheat", and so on). Using stigmatizing terms may alter treatment plans, transmit biases between clinicians, and alienate patients. This lexicon has been proven to consist of words used to diminish specific conditions like diabetes, substance use disorder, and chronic pain (Himmelstein et al., 2022). All of these conditions are known to disproportionately affect racial minority groups. Using all of these terms in our lexicon lookup (Section 4.2) will help us to detect testimonial injustice in these medical records.
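To make the lookup concrete, the example terms quoted above can be collected into a small category-labeled lexicon; this Python sketch is an illustrative subset, not the full lexicon used in the study.

```python
# Example terms drawn from the descriptions above; illustrative subset only.
LEXICON = {
    **dict.fromkeys(["complains", "says", "tells me"], "evidential"),
    **dict.fromkeys(["apparently", "claims", "insists"], "judgmental"),
    **dict.fromkeys(["challenging", "combative", "defensive", "exaggerate"],
                    "negative"),
    **dict.fromkeys(["user", "faking", "cheat"], "stigmatizing"),
}
```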
## 4 Methods
Although all marginalized groups invariably experience some degree of injustice, our aim is to bridge the gap in research by highlighting the disparate treatment of subgroups in medical notes. To achieve this goal, we estimate and compare common metrics across different groups (i.e. Asian men, Asian women, Black men, Black women, Latino men, Latina women, White women, and White men) specifically using demographic parity, differential intersectional fairness, and subgroup fairness.
### Normalization
To account for patients who had multiple visits or were admitted to the ICU for multiple days, the physicians' notes were combined for each patient's duration in the ICU. To analyze the potential variance in testimonial injustice among different groups, we summed the frequency of testimonial injustice words in the notes for each patient and then normalized this frequency by dividing it by the number of original records we had for that particular patient. This allowed us to ensure that each patient had an equal standing, regardless of length of hospital stay or number of visits from doctors. By using normalized sums, we were able to compare groups and determine if there were any differences in levels of testimonial injustice. The normalized sums of occurrences of testimonial injustice across each intersection of groups are visualized in Figure 5 in A.1.

| Race | Gender | Count |
| --- | --- | --- |
| White | Female | 15,399 |
| Black | Female | 2,522 |
| Asian | Female | 512 |
| Latina | Female | 662 |
| White | Male | 20,317 |
| Black | Male | 2,041 |
| Asian | Male | 690 |
| Latino | Male | 1,041 |

Table 1: Counts of patients by race and gender.
### Lexicon Lookup
After normalizing the sums of testimonial injustice for each patient, we performed a lexicon lookup for exact phrase matching. With this, we counted the frequency of occurrence for each testimonial injustice word in the patients' combined and normalized visits. We combined the terms introduced in Section 3.2 commonly associated with being evidentially biased, judgmental, negative, and stigmatizing into a lexicon.
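A minimal sketch of this lookup and normalization, assuming a hypothetical `notes` table with one row per (patient_id, note_text) and a category-labeled lexicon like the one sketched in Section 3.2; all names here are illustrative.

```python
import re
import pandas as pd

def testimonial_counts(notes: pd.DataFrame, lexicon: dict) -> pd.DataFrame:
    # Pre-compile a whole-phrase pattern for each lexicon term.
    patterns = {t: re.compile(rf"\b{re.escape(t)}\b", re.I) for t in lexicon}
    rows = []
    for pid, group in notes.groupby("patient_id"):
        text = " ".join(group["note_text"])  # combine all visits
        row = {"patient_id": pid, "n_records": len(group)}
        for term, category in lexicon.items():
            row[category] = row.get(category, 0) + len(patterns[term].findall(text))
        rows.append(row)
    counts = pd.DataFrame(rows).fillna(0)
    for category in set(lexicon.values()):
        counts[category] /= counts["n_records"]  # normalize per record
    return counts
```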
### Defining Fairness
In this work, we define the desired fairness as the following: _a patient's record has **no** terms which are considered testimonially unjust._ However, this is a strict boundary that is unlikely to be met, since a term could appear in a patient's record without actually casting doubt on them as a sayer (i.e. testimonial injustice). Thus, we find the greatest normalized number of occurrences of each type of term that indicates testimonial injustice, \(m=max_{p}(t/r)\), where \(p\) ranges over the patients, \(t\) is a patient's count of that term type, and \(r\) is the number of original records for that patient. We determine that if a patient has more than \(m*.10\) occurrences of that particular type of term, they are experiencing testimonial injustice. For this work, we arbitrarily use 10% of the maximum value for each term type; in future work, we plan to experiment with refining this definition of fairness. To determine if there is disparate treatment amongst groups under this fairness definition, we use three fairness metrics: demographic parity, differential intersectional fairness, and subgroup fairness.
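The thresholding rule can be illustrated as follows; the values are toy data, and the 10% cutoff mirrors the arbitrary choice described above.

```python
import numpy as np

# normalized[i] = t/r for patient i and one term type (see Section 4.1).
normalized = np.array([0.0, 0.2, 1.5, 3.0, 0.1])

m = normalized.max()          # m = max_p(t/r)
threshold = 0.10 * m          # 10% of the maximum, chosen arbitrarily here

# A patient is flagged as experiencing testimonial injustice for this term
# type if their normalized count exceeds the threshold.
flagged = normalized > threshold
print(threshold, flagged)     # 0.3 [False False  True  True False]
```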
#### 4.3.1 Demographic Parity
Demographic parity requires that the two groups being assessed have equal chances of receiving a positive outcome (Dwork et al., 2012). We use this metric as our baseline to understand how testimonial injustice might reveal itself if we ignore intersectionality, as has been done in most works in the fairness literature (Hardt et al., 2016; Kusner et al., 2017; Agarwal et al., 2018; and so on). That is, we investigate whether there is a significant difference in the way a patient is spoken about in medical records when the intersection of their race and gender is considered. Demographic parity is a popular fairness metric, but it does not reveal fairness or justice; rather, it solely reveals equality. Consider two cases. First, both groups may experience high amounts of injustice: true fairness occurs when neither group experiences injustice (nearly 0), so demographic parity detects only equality, not fairness. Second, a marginalized group may deserve more opportunity for the sake of corrective justice due to historical bias, in which case justice is not enforced. In both cases demographic parity is still satisfied, yet neither fairness nor justice persists. Demographic parity is defined as:
\[\frac{P(Y=1|A=a)}{P(Y=1|A=a^{\prime})}>0.8, \tag{1}\]
where \(Y\) is the outcome and \(A\) is the sensitive attribute. Demographic parity is taken to hold when the ratio of the two groups' rates of receiving a positive outcome exceeds 80% (the four-fifths rule).
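As an illustration, a minimal Python sketch of the demographic parity check in Eq. (1) is given below; the outcome encoding and toy data are assumptions for demonstration only.

```python
def rate(y, a, group):
    """P(Y=1 | A=group) estimated from samples."""
    ys = [yi for yi, ai in zip(y, a) if ai == group]
    return sum(ys) / len(ys)

# Toy outcomes (1 = positive outcome, i.e. no unjust terms flagged) by gender.
y = [1, 0, 1, 1, 0, 1, 1, 0]
a = ["F", "F", "F", "F", "M", "M", "M", "M"]

ratio = rate(y, a, "F") / rate(y, a, "M")
print(ratio, ratio > 0.8)  # 1.5 True -- the 80% rule of Eq. (1) is satisfied
```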
#### 4.3.2 Differential Fairness
For intersectionality, we first look at \(\epsilon\)-Differential fairness (Foulds et al., 2020), which requires that groups, regardless of their combination of sensitive attributes, not be treated differently beyond a bounded ratio. This metric allows us to include multiple attributes of a person, whereas demographic parity only allows us to look at one sensitive attribute per group. Differential fairness is defined as:
\[e^{-\epsilon}<\frac{P(M(x)=y|s_{i},\theta)}{P(M(x)=y|s_{j},\theta)}<e^{\epsilon}, \tag{2}\]
where \(\epsilon\) should be small; in our experiments, it is set to 0.01. \(M\) is a mechanism (linear regression in our case) that takes an instance \(x\) from the data and produces an outcome \(y\), the \(s\) values are drawn from the cross product of sensitive attributes, and \(\theta\) is the distribution of \(x\).
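A sketch of the \(\epsilon\)-differential fairness check in Eq. (2) follows, assuming the per-subgroup outcome probabilities have already been estimated (e.g., by the linear-regression mechanism mentioned above); the subgroup names and probabilities shown are illustrative.

```python
import math
from itertools import combinations

# P(M(x)=y | s, theta) per intersectional subgroup (illustrative values).
p_outcome = {"Black_F": 0.42, "Black_M": 0.38, "White_F": 0.45, "Asian_M": 0.30}

epsilon = 0.01
lo, hi = math.exp(-epsilon), math.exp(epsilon)

# epsilon-differential fairness requires the outcome-probability ratio of
# every pair of subgroups to lie strictly inside (e^-eps, e^eps).
for (si, pi), (sj, pj) in combinations(p_outcome.items(), 2):
    ratio = pi / pj
    if not (lo < ratio < hi):
        print(f"violation: {si} vs {sj}, ratio={ratio:.2f}")
```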
#### 4.3.3 Subgroup Fairness
Another common intersectional fairness notion is Statistical Parity Subgroup Fairness, or subgroup fairness, which we use to compare against the results of the differential fairness metric. Subgroup fairness (Kearns et al., 2018) requires that there be no difference in positive outcomes between groups, while allowing an \(\alpha\) fraction of people to be ignored. Subgroup fairness is described for each group, \(a\), by:
\[\alpha(a,\mathcal{P})*\beta(a,M,\mathcal{P})\leq\gamma, \tag{3}\]
where,
\[\alpha(a,\mathcal{P})=P_{\mathcal{P}}[a(x)=1],\]
\[\beta(a,M,\mathcal{P})=\big|P_{\mathcal{P}}[M(x)=1]-P_{\mathcal{P}}[M(x)=1\,|\,a(x)=1]\big|.\]
Here \(M\) is a classifier, \(\mathcal{P}\) is the distribution of patients, and \(\gamma\in[0,1]\) indicates the amount of deviation from equity we tolerate. We relax this constraint for our experiments, allowing \(\gamma\) to be 95% of the maximum value of \(\alpha(a,\mathcal{P})*\beta(a,M,\mathcal{P})\) for each term type that leads to testimonial injustice. \(a(x)=1\) indicates that individuals with sensitive feature \(x\) are in group \(a\).
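The quantity \(\alpha(a,\mathcal{P})\cdot\beta(a,M,\mathcal{P})\) can be computed as in the following sketch; the group mask, classifier outputs, and \(\gamma\) value are toy assumptions.

```python
import numpy as np

# a_mask[i] = 1 if patient i is in subgroup a; y_hat[i] = classifier output.
a_mask = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_hat  = np.array([1, 0, 1, 1, 1, 0, 1, 1])

alpha = a_mask.mean()                                  # P[a(x) = 1]
beta = abs(y_hat.mean() - y_hat[a_mask == 1].mean())   # |P[M=1] - P[M=1|a=1]|

gamma = 0.05   # tolerated deviation (the paper relaxes this per term type)
print(alpha * beta, alpha * beta <= gamma)             # Eq. (3) check
```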
## 5 Results
When examining the results for demographic parity, we solely focus on race or gender, as this approach only allows for an assessment of one factor at a time. However, for differential fairness and subgroup fairness, we conduct an intersectional analysis with race and gender. For these, we look to see which groups have privilege over another, meaning one group experiences less testimonial injustice in their physicians' notes than the group they are being compared to.
### Demographic Parity
**Gender.** In terms of the demographic parity gender analysis, there was little to no disparate treatment detected across all term types between male and female patients, indicating minimal evidence of injustice in the data based on gender, as observed in Figure 1. The greatest difference was found within evidential words, where female patients experienced the most injustice, followed by stigmatizing words and then judgment words, both with the greater bias against females. The smallest difference came from the negative words, where males experienced the least fairness. Negative words occurred the least and stigmatizing words the most across the patient records. Accordingly, gender alone should not be found to be a significant predictor of the treatment or care received by patients: the analysis suggests that a person's gender membership does not have any substantial impact on how they are treated, indicating that the principle of fairness is being upheld.
**Race.** In terms of the demographic parity race analysis, there was little to no disparate treatment detected across all term types between the different races of patients, indicating minimal evidence of injustice in the data based on race, as observed in Figure 2. We observe that Latino patients are the most likely to experience evidential words, while Asian patients are the least likely. Further, for evidential words, White patients have privilege over Black patients, Black patients have privilege over Latino patients, and Asian patients have privilege over White and Latino patients. For judgment words, Black patients are the most likely and Asian patients the least likely to be described with such terms; here, we observe that White patients have privilege over Black patients. Latino patients were the most likely and Asian patients the least likely to experience negative words in their medical records; we note that negative terms were the least likely to appear in the records of any patient. Black patients were the most likely and Asian patients the least likely to experience stigmatizing words in their medical records. Another observation is that, for stigmatizing words, White patients have privilege over Black patients, Asian patients have privilege over every race of patients, and Latino patients have privilege over White patients.
Figure 1: Demographic Parity Occurrences of Injustice by Gender.
Stigmatizing words occurred the most in everyone's medical records. With this, race alone should also not be found to be a significant predictor of the treatment or care received by patients. Therefore, the findings of the analysis should show that a person's racial membership does not have any substantial impact on how they are treated, indicating that the principle of fairness is being upheld.
Since our analysis using demographic parity showed that neither race nor gender alone affects how a patient experiences testimonial injustice, when we observe their intersection we should likewise see that the treatment and care received by patients are not affected by the intersectionality of race and gender. This would indicate that the principle of fairness is being upheld regardless of a patient's race or gender. However, we see a different story when we consider intersectionality.
### Differential Fairness
Differential fairness focuses on the intersectionality of race and gender in relation to testimonial injustice. As the results of the demographic parity experiments showed, there are no disparities in how groups are treated with respect to testimonial injustice based on race or gender alone. However, the results of the differential fairness experiment show that there are disparities between different intersections of gender and race with respect to the types of terms that lead to testimonial injustice. Specifically, out of 112 comparisons across the intersections of gender and race, 110 violations of differential fairness occurred. This demonstrates that there are underlying injustices in how different groups are treated based on gender and race, and that we cannot rely on measures that do not consider intersectionality to reveal them.
There were very few instances in which fairness was not violated, such as Asian males compared to Asian females for evidential and judgment words, and Asian males compared to Latina females for negative words. The results showed that Asian females and males were the most privileged, and White males and females the least privileged, when fairness was violated. This may be due to the fact that there are many more records for White patients than for all other races of patients. As observed in Figure 3, across all types of terms that lead to testimonial injustice, Black females were the next least privileged after White patients. Black males were found to have more privilege with respect to experiencing testimonial injustice than Black females. The experiment was also conducted with 500 randomly sampled records from each subgroup of patients, and the results there showed that when unfairness is present, Black females are the most marginalized and Asian males the least. For these sampled records, Latina females were the most marginalized for evidential words, Black females for judgment and negative words, and Latino males for stigmatizing words. However, even with the full dataset, Asian males were consistently found to be the most privileged of all the groups represented.
### Subgroup Fairness
In this experiment, similar to differential fairness, we focus on the intersectionality of race and gender in relation to testimonial injustice.
Figure 3: Differential Fairness Occurrences of Injustice by Gender and Race.
Figure 2: Demographic Parity Occurrences of Injustice by Race.
The results of the demographic parity experiments showed that there were no disparities in how groups were treated with respect to testimonial injustice based on race or gender alone, while the differential fairness experiments showed that there are differences in how one is treated based on their combined race and gender. Here we conduct an experiment that also looks at the intersectionality of groups, to compare whether these two metrics differ in how they reveal disparate treatment amongst the subgroups.
Based on our analysis of demographic parity in detecting testimonial injustice in medical records, we found that the privileged groups by race are Asian and White patients, as well as males. Therefore, for the purpose of intersectional fairness analysis, we consider Asian men and White men as non-sensitive groups. When we conducted the differential fairness analysis, we found that violations occurred 110 times out of 112 comparisons (each intersection of gender and race, for each type of term leading to testimonial injustice). We expected similar results (Figure 4) for the subgroup fairness analysis. Our subgroup fairness metric detected 69 violations out of the 112 comparisons of subgroups. Though fewer violations are present, this still reveals that we must consider intersectionality within the medical setting and in the fairness metrics we use there. It further highlights that a metric which considers intersectionality is not enough; we must also be careful about which fairness metrics we use for the task at hand.
For evidential terms, we found that Latina females were the most discriminated against, while Asian males were the most privileged. For judgment terms, Black males were the most discriminated against, while Asian males were the most privileged. For negative words, Asian males were the most privileged, while Latino males were the least privileged. For stigmatizing words, Black females were the most discriminated against, while Asian males were again the most privileged. It is important to note that our experiment includes the entire dataset, which over-represents White patients; we can thus expect even larger disparities in how different groups are treated with a more representative dataset. This does not mean that White patients do not experience discrimination, but rather emphasizes the importance of a more representative dataset for understanding the degrees to which different groups may experience testimonial injustice in their records.
## 6 Discussion
When conducting experiments using demographic parity, we compared race or gender alone. In each case, there were no violations of demographic parity in how any patient is treated based on their race or gender alone. If a practitioner takes these results at face value, they might conclude there is no discrimination happening based on these commonly observed visible attributes. For example, a Black male patient who was stigmatized against would, from the demographic parity view, have no evidence in that setting to back the expression of their experience. However, when we look deeper, through the lens of intersectional fairness (i.e., differential fairness and subgroup fairness), at the intersection of race and gender, we can see that a male patient can still experience discrimination (i.e. Black males), and so could a White patient (i.e. White females).
When we look at measures that consider intersectionality, we see disparity in how people are treated based on their race and gender for every type of word we analyzed that could lead to testimonial injustice. We attribute this to: (1) being able to consider multiple aspects of a person that might only reveal themselves at the intersection of race and gender, and (2) differential fairness being able to constrain the range in which we look for violations, as opposed to looking at it from only one side as demographic parity does. To properly see injustices occurring, we must look at all the angles from which they could possibly be coming. This is because someone might only be testimonially unjust toward a person who is female, while others might act unjustly only because of a person's membership in a historically marginalized race, and so on.
Figure 4: Subgroup Fairness Occurrences of Injustice by Gender and Race.
We contend that the better metrics for detecting injustices, e.g. testimonial injustice, in medical records are ones which consider intersectionality. Still, we see differences in how these measures show which groups are experiencing privilege; thus we must be careful in understanding the goals of the fairness metrics we use.
## 7 Conclusions
The objective of this empirical study was to investigate the potential benefits of intersectionality in detecting testimonial injustice, using medical records as a real-world application. Demographic parity, differential intersectional fairness, and subgroup fairness were used to examine whether there are differences in the extent of testimonial injustice experienced by individuals based on the intersection of their demographic attributes, and whether intersectionality helps reveal this. Our results showed that (1) when we use metrics that consider intersectionality, as opposed to sole factors of who a person is, we can better see disparities in how people are treated when detecting testimonial injustice in medical records, (2) there are differences in how someone is treated based on the intersection of their demographic attributes, and (3) different intersectional fairness metrics reveal these injustices differently. While demographic parity did not show a clear disparate impact based on gender or race, differential intersectional fairness and subgroup fairness - two intersectional fairness measures - revealed that there was disparate treatment based on both gender and race. These findings suggest that intersectionality should be considered when detecting testimonial injustice, especially in medical settings.
## 8 Limitations and Future Work
**Data.** A challenge we faced was that MIMIC-III was unevenly distributed across the races and ethnicities of the patients represented. We had significantly more White and Black patients than any other race of people, and still many more White than Black patients. We therefore continue to express the need for more representative, inclusive, and balanced datasets. Further, the dataset did include ethnic breakdowns, but due to the lack of patients present in those groups, we could not include Caribbean or Middle Eastern patients, as well as many other subgroups, in our analysis. In the future, we would like to partner with a medical facility that consistently serves both marginalized and non-marginalized communities to develop a dataset that captures more features which could reveal bias and that are more descriptive (e.g. has_insurance), yielding higher quality data.
**Better Feature Selection and Using More Demographic Features.** To ensure the quality of the aforementioned data, we will perform a causal analysis to identify the specific features that cause testimonial injustice. We anticipate that variables such as patients' age and education level need to be included, as these factors have been shown to affect how patients are treated, particularly in the medical field (Dunsch et al., 2018; DeVoe et al., 2009).
**Fairness Metrics.** Existing, popular fairness metrics cannot be generalized to settings where intersectionality must be considered. Another challenge we faced was the lack of good baselines for analyzing intersectional differences. Intersectionality is highly unexplored; in the future, we would like to develop our own metric that is more effective at detecting intersectional disparate treatment between individuals.
**Additional Analysis.** We plan to conduct additional analysis to understand whether specific physicians treat similar patients similarly based on the intersection of their demographic features. Further, we plan to perform statistical significance testing on differences in how patients were treated based on the intersection of their demographic features, and on specific physicians' use of testimonially unjust terms toward other patients.
## Acknowledgements
This paper is based upon work supported in part by the NSF LSAMP Bridge to the NSF Program on Fairness in AI in Collaboration with Amazon under Award No. IIS-1939743, titled FAI: Addressing the 3D Challenges for Data-Driven Fairness: Deficiency, Dynamics, and Disagreement (Kenya Andrews). This work is also supported in part by the Cisco Research Gift Grant (Lu Cheng). Any opinion, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation, Amazon, or Cisco Research.
|
2302.01786 | Customer Profiling, Segmentation, and Sales Prediction using AI in
Direct Marketing | In an increasingly customer-centric business environment, effective
communication between marketing and senior management is crucial for success.
With the rise of globalization and increased competition, utilizing new data
mining techniques to identify potential customers is essential for direct
marketing efforts. This paper proposes a data mining preprocessing method for
developing a customer profiling system to improve sales performance, including
customer equity estimation and customer action prediction. The RFM-analysis
methodology is used to evaluate client capital and a boosting tree for
prediction. The study highlights the importance of customer segmentation
methods and algorithms to increase the accuracy of the prediction. The main
result of this study is the creation of a customer profile and forecast for the
sale of goods. | Mahmoud SalahEldin Kasem, Mohamed Hamada, Islam Taj-Eddin | 2023-02-03T14:45:09Z | http://arxiv.org/abs/2302.01786v1 | # Customer Profiling, Segmentation, and Sales Prediction using AI in Direct Marketing
###### Abstract
In an increasingly customer-centric business environment, effective communication between marketing and senior management is crucial for success. With the rise of globalization and increased competition, utilizing new data mining techniques to identify potential customers is essential for direct marketing efforts. This paper proposes a data mining preprocessing method for developing a customer profiling system to improve sales performance, including customer equity estimation and customer action prediction. The RFM-analysis methodology is used to evaluate client capital and a boosting tree for prediction. The study highlights the importance of customer segmentation methods and algorithms to increase the accuracy of the prediction. The main result of this study is the creation of a customer profile and forecast for the sale of goods.
keywords: Data mining, SVM, Boosting tree, RFM-analysis methodology, Deep learning
## 1 Introduction
In today's business landscape, companies are faced with the challenge of identifying potential customers who are most likely to respond positively to a product or offer; this is where data mining techniques come into play. With the increasing amount of data available, data mining has become an essential tool for direct marketing efforts, allowing companies to create a prediction response model based on past client purchase data. This study aims to present
a data mining preprocessing method for developing a customer profiling system that improves the sales performance of an enterprise. The study uses an RFM-analysis methodology to evaluate client capital and a boosting tree for prediction. Furthermore, the study highlights the importance of customer segmentation methods and algorithms in increasing the accuracy of the prediction. The main result of this study is the creation of a customer profile and forecast for the sale of goods, which will assist decision-makers in making strategic marketing decisions. The study is expected to provide valuable insights for companies looking to improve their direct marketing efforts and increase sales performance through data mining-based customer profiling.
The need for a client profiling framework utilizing AI techniques has become increasingly important in today's business landscape. With the intensification of competition, rising communication costs, and a shortage of buyers, companies are shifting their focus from attracting new customers to retaining existing ones and building their loyalty. The significance of this research topic lies in the fact that long-term relationships with customers are financially beneficial: they ensure regular purchases, require lower advertising costs per customer, and, through the recommendations of loyal customers, increase their number. The purpose of this study is to develop a customer profiling system using machine learning methods that will raise the level of marketing automation, drive sales growth, and expand the client base.
To achieve this goal, it is planned to solve the following research tasks:
* Data collection;
* Study of machine learning methods;
* Specify the structure of the client profile, types, and indicators that characterize them;
* Analysis and formation of customer data;
* Generalize and systematize foreign experience in improving the profile of clients;
* Conduct an analysis of existing methods for researching the profile of clients, and identify the most effective ones for enterprises;
* Determine the place and role of the concept of "consumer loyalty" in modern marketing and identify the problems of its use;
* Clarify the structure and nature of consumer loyalty, types and indicators that characterize them;
* Highlight the factors that determine the choice of a reward system for the formation of programs to increase consumer loyalty;
* Propose a methodology for developing programs to increase comprehensive consumer loyalty for manufacturers of goods and services and formulate practical recommendations for their formation.
Deep learning is a subfield of machine learning that has seen widespread applications in various industries. In computer vision, deep learning algorithms have been utilized for object detection, image classification, and video analysis. In the field of Natural Language Processing (NLP), deep learning models have been applied to tasks such as text classification, sentiment analysis, machine translation, speech recognition[1], and table detection and recognition[2; 3; 4]. Healthcare is another industry where deep learning has found several applications, including diagnosis, treatment planning, drug discovery[5], and medical imaging analysis[6; 7; 8]. In robotics, deep learning is used for autonomous navigation, object recognition[9; 10], and robotic control, as well as for handwriting recognition in various languages[11; 12; 13; 14; 15] and intrusion detection in IoT[16; 17]. The finance industry has also seen applications of deep learning in areas such as fraud detection, algorithmic trading, and risk management. Additionally, deep learning is finding use cases in gaming, such as game playing and decision-making, as well as in marketing, with applications in customer segmentation, personalized recommendations, and sentiment analysis. The transportation industry is another area where deep learning is making an impact, with applications in autonomous vehicles, traffic prediction, and route optimization. Finally, deep learning has potential applications in the energy industry for predictive maintenance, energy consumption prediction[18; 19], and equipment malfunction detection. These are just some of the ways deep learning is being applied across different fields, and the potential for further growth and development is immense.
The objects of study are enterprises and organizations, their marketing activities in the context of the formation and implementation of client policy,
as well as consumers of goods and services. The subject of this study is the entirety of economic and organizational relationships that occur in the process of firms implementing relationship marketing, as manifested in the creation and implementation of programs to build consumer loyalty.
The study's theoretical and methodological foundation was the essential research of domestic and international scientists on issues of the market economy, management, marketing, and consumer and brand loyalty management[20]. The methods of marketing, economic and statistical analysis, quantitative and qualitative study, as well as the principles of consistency and development, were used in this work. The author also relied on expert methods for obtaining information to substantiate the main provisions of the dissertation. As described by the authors H. Muller and U. Hamm, the first step is to start with segmentation, marketing, and customer data; the data can then be adjusted in the right direction for analysis and profiling[21].
The scientific novelty of the work lies in the development of scientific and methodological provisions and recommendations aimed at the formation and implementation of a client profiling framework utilizing AI techniques, as well as the identification of the most effective methods for researching the profile of clients and increasing consumer loyalty in Kazakhstan enterprises. This study will provide valuable insights for companies looking to improve their relationship marketing efforts and increase sales performance through data-driven customer profiling.
The research aims to analyze and review various projects, works, and scientific literature on the topic of customer segmentation in online business ventures. The overall description of the research is that in online business, clients use various platforms provided by organizations with various needs, shopping patterns, and profiles. To understand this wide range of needs, shopping patterns, behavior, conduct, and requests of clients, we use various divisions according to the business model of organizations. Customer segmentation is defined as the division of customers into various individual groups that share similarities in various ways relevant to marketing, such as orientation, interests, age, shopping patterns, and different ways of managing money.
Organizations that want to implement customer segmentation are under the idea that different customers have different needs and requirements, which is why organizations perform data mining procedures and develop a specific marketing strategy to implement in their business model. The simple truth is that most organizations have data that can be used to target these
individuals and to understand the critical drivers of segmentation. Customer segmentation is the division of customers into various groups based on business needs.
Today, customers have become the fuel that drives a business. Losing customers hurts sales, and acquiring new customers is expensive; retaining existing customers is therefore even more important. Organizations thus need to focus on reducing customer churn, and to do so they constantly offer coupons and deals to customers. AI can help with customer segmentation: one of the main uses of unsupervised learning methods is customer segmentation. With the help of the "clustering" procedures of unsupervised learning, we can identify the different segments of customers, where individual segments share some similarities, allowing organizations to target the potential customer base according to their business model and efficiency requirements.
The research will also cover the need for customer segmentation, the importance of understanding customer behavior, and the use of AI in customer segmentation. The study will provide valuable insights for organizations looking to improve their customer retention and benefit upgrades through data-driven customer segmentation.
## 2 Related Work
In the field of customer segmentation, researchers have been experimenting with different algorithms to perform segmentation on customer data. Most of these studies have focused on analyzing customer buying history and purchasing behavior to identify segments.
According to T. Jiang and A. Tuzhilin [22], it is crucial to implement both customer segmentation and buyer targeting in order to enhance marketing performance. These two tasks are integrated into a step-by-step approach; however, the challenge of unified optimization arises. To address this issue, the authors proposed the K-Classifiers Segmentation algorithm. This method prioritizes allocating more resources to those customers who generate the most returns for the company. A significant number of researchers have discussed various techniques for segmenting customers in their studies. The authors also propose a direct clustering method for grouping customers: rather than relying on computed statistics, this approach utilizes transactional data from multiple customers. The authors acknowledge that finding an optimal segmentation solution is computationally difficult, known
as NP-hard. Therefore, Tuzhilin presents alternative sub-optimal clustering methods. The study then experimentally evaluates the customer segments obtained through direct grouping and finds them to be superior to statistical methods.
KR Kashwan [23] used a K-means algorithm and a statistical tool to propose a model providing a continuous, online analysis framework for an e-commerce organization to predict sales. They employed a clustering strategy for determining market segmentation, as the resulting computing-based system is intelligent enough to present results to managers for a quick decision-making cycle.
PQ Brito [24] emphasized that advertising and manufacturing approaches are highly important for customized industries because buying a large variety of products makes it difficult to find specific patterns of customer preferences. As a result, they proposed two different data mining methods, clustering and sub-cluster discovery, for customer segmentation to better understand customer preferences.
X He and C Li [25] propose a three-dimensional strategy for enhancing customer lifetime value (CLV), customer satisfaction, and customer behavior. The study concludes that consumers have varying needs, and segmentation helps to identify their demands and expectations, which in turn leads to providing better service.
A Sheshasaayee [26] developed a new integrated approach to segmentation by combining the RFM (Recency, Frequency, Monetary) and LTV (Life Time Value) methods. They employed a two-phase approach, starting with a statistical method in the first phase, and then proceeding to cluster in the second phase. The objective is to apply K-means clustering following the two-phase model and then utilize a neural network to improve the segmentation.
MT Ballestar [27] examined the role of customers' cashback usage and characterized the business activity and behavior of customers on a social network site. They proposed a model that applies social network analysis to marketing concepts such as loyalty, communication, customer development, and customer engagement, showing the dependence of customer behavior on customers' positions within an organization.
W Qadadeh [28] evaluated data analysis algorithms such as K-means for clustering and Self-Organizing Maps for clustering quality with visualization. They recommend that combining various segmentation procedures with domain experts will further benefit organizations such as insurers, and they study segment elements and customer behavior in a customer relationship management dataset.
AJ Christy [29] emphasized that a good understanding of the customer's needs and identification of potential customers for the organization are satisfied by the segmentation process. They performed segmentation using RFM analysis and extended it to other algorithms like K-means, and RM K-means through minor adjustments in K-means clustering.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Paper** & **Proposed Method** & **Advantages** & **Disadvantages** \\ \hline
KR Kashwan [23] & K-means algorithm and a statistical tool & A continuous analysis and online system for e-commerce organizations to predict sales & Limited to the use of clustering for determining market segmentation \\ \hline
PQ Brito [24] & Two data mining methods (clustering and sub-cluster discovery) & Better understanding of customer preferences & Limited to customized industries \\ \hline
MT Ballestar [27] & Utilization of cashback and client behavior on social network sites & Shows the dependence of behavior on the position of clients inside an organization & Limited to applying social network analysis to marketing concepts such as loyalty, communication, customer development, and engagement \\ \hline
W Qadadeh [28] & K-means for clustering and Self-Organizing Maps for clustering quality with visualization & Involves various segmentation procedures with experts to further develop organizations & Limited to the use of multiple procedures for segmentation with experts \\ \hline
AJ Christy [29] & RFM analysis extended to other algorithms like K-means and RM K-means & Good understanding of customer needs and identification of potential clients for the organization & Limited to RFM analysis extended via minor adjustments in K-means clustering \\ \hline
T Jiang [22] & Direct clustering based on transactional data & Identifies customer segments based on actual customer behavior & Finding an optimal segmentation solution is computationally difficult \\ \hline
X He [25] & Three-dimensional approach for enhancing CLV, customer satisfaction, and customer behavior & Considers multiple dimensions of customer behavior, leading to more accurate segmentation & Complexity and high computational cost \\ \hline
A Sheshasaayee [26] & Integrated approach combining RFM and LTV methods with a two-phase approach (statistical and clustering) and a neural network & Integrates different methods to improve segmentation & Computationally intensive \\ \hline
\end{tabular}
\end{table}
Table 1: Related work methods, advantages and disadvantages
## 3 Problem Statement
The problem of customer segmentation can be framed around various business functions such as marketing, sales, support, product, and leadership. Experts involved in the data analysis process in large or small organizations adjust the working group and set expectations accordingly across many stages. Some issues that can be resolved through customer segmentation are given below.
* Marketing: We can solve the problem of understanding our customer base in order to reach them effectively. This may be less applicable to a business's email lists, but it is especially relevant for B2C subscription organizations with high website traffic volume.
* Sales: Many issues faced by sales representatives can be resolved by this process. We can route prospects to our self-service stream or the most appropriate group within sales, such as startups, Small Market businesses, and Multi-Model businesses, based on clear customer segments.
* Support: Issues are categorized based on their tool and field. After categorization, it can be used to route support inquiries to the appropriate channels, such as AnswerBot, Alexa, Google Assistant, our help center, or a support representative, to improve customer and business outcomes further.
* Product: This process can also resolve issues with product quality. Experts should know which product requests and feedback make the biggest impact on which customer and focus accordingly, instead of by volume alone.
* Leadership: Segmentation supports the mission of e-commerce organizations to deliver their service and generate leads. To this end, they create a common language across product and design and go to market with clear descriptions of their customers.
From this work, we proposed a customer segmentation strategy based on various categories. Different clustering methods like k-means, RM K-means, and Self-Organized Maps were used for segmentation. In this paper, we proposed a business model for e-commerce organizations based on segmentation
according to various categories [10] and RFM positioning to retain and acquire customers in e-commerce. As we know, acquiring new customers is important, but retaining old customers is even more important.
## 4 Model, Tools, Environment, Technology
### The Customer Segmentation Approach
Client division is a commonly used marketing technique where a business separates its customer base into smaller groups that can be targeted with specific content. This is done by analyzing customer behavior data, which gives the company a deeper understanding of the types of customers in its system. The benefit of this technique is that it allows for more effective marketing strategies. It can be challenging to implement in online retail environments, where data is vast and complex. One algorithm used for this purpose is Vector Quantization, which automatically assigns existing or new customers to different groups. It was developed by Linde, Buzo, and Gray and can be applied to any probability source definition or long data sequence. It may not always achieve optimal results, but it often ensures local optimality [30].
Client segmentation is a powerful marketing strategy that is widely used by businesses to better understand and target their customer base. It involves dividing the customer base into smaller groups based on characteristics such as demographics, behavior, and purchasing history. These segments are then targeted with specific marketing messages and campaigns tailored to their unique needs and preferences. One of the key advantages of client segmentation is that it allows businesses to better understand the different types of customers that make up their customer base, which in turn leads to more effective marketing strategies that are better able to convert leads into customers. Additionally, segmentation allows businesses to identify and target their most valuable customers, which can help to improve customer retention and increase revenue. Implementing client segmentation can be challenging, particularly in online retail environments with vast and complex data. The Vector Quantization algorithm introduced above is efficient and can automatically group customers based on their behavior data; although it may not always achieve globally optimal results, it often ensures local optimality [30]. In most cases, the algorithm is able to accurately group customers and provide valuable insights for businesses to target their marketing efforts.
A mapping, also known as a block quantizer or vector quantizer, is a tool that can be used to divide data into smaller groups. The mapping is N-level and k-dimensional and can be implemented in software. It takes client RFM values as input vectors, and a non-negative real distortion measure represents the difference between the original vectors and the reproduced vectors. This type of mapping has been widely studied in the literature, and many similar techniques have been presented [30]. The error distortion measure below, widely used in mathematical applications, is chosen for its computational efficiency:
\[d(x,x^{n})=\sum_{i=0}^{k-1}|x_{i}-x_{i}^{n}| \tag{1}\]
where \(x\) is the input vector, \(x^{n}=q(x)\) is the reproduction vector, and \(i\) indexes the \(k\) components of the vectors.
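A direct translation of Eq. (1) into Python (NumPy) might look like the following; the example RFM values are hypothetical.

```python
import numpy as np

def distortion(x: np.ndarray, x_rep: np.ndarray) -> float:
    """Sum-of-absolute-errors distortion d(x, x^n) between an input
    RFM vector and its reproduction vector (Eq. 1)."""
    return float(np.abs(x - x_rep).sum())

x = np.array([12.0, 5.0, 230.0])       # e.g. (recency, frequency, monetary)
x_rep = np.array([10.0, 6.0, 200.0])   # reproduction vector q(x)
print(distortion(x, x_rep))            # 33.0
```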
An N-level quantizer is considered globally optimal if it minimizes the average distortion, i.e., if \(D(q^{*})\leq D(q)\) for any other quantizer \(q\) having N reproduction vectors [29]. If \(D(q)\) is only a local minimum, meaning that small changes in \(q\) increase the distortion, the quantizer is said to be locally optimal. The goal of quantizer design is to obtain an optimal quantizer if possible or, failing that, a locally optimal and preferably "good" quantizer. Several such algorithms have been proposed in the literature for the computer-aided design of locally optimal quantizers.
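One classic way to obtain a locally optimal quantizer is a Lloyd-style alternation between nearest-codeword assignment and codeword update, as sketched below. This is a generic k-means-like sketch, not the paper's exact procedure; note that with the L1 distortion of Eq. (1) the exact codeword update would be the component-wise median, while the mean is used here for simplicity.

```python
import numpy as np

def lloyd_quantizer(data, n_levels, iters=50, seed=0):
    """Minimal Lloyd-style design of an N-level quantizer: alternate
    nearest-codeword assignment and codeword update. Converges to a
    locally (not necessarily globally) optimal codebook."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), n_levels, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest codeword (L1 distortion).
        d = np.abs(data[:, None, :] - codebook[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update each codeword to the centroid of its assigned vectors.
        for k in range(n_levels):
            if (labels == k).any():
                codebook[k] = data[labels == k].mean(axis=0)
    return codebook, labels

data = np.random.default_rng(1).random((100, 3))  # toy RFM-like vectors
codebook, labels = lloyd_quantizer(data, n_levels=4)
print(codebook.shape, np.bincount(labels, minlength=4))
```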
### Machine Learning
Recently, interest in machine learning (ML) has grown as processing power and accumulated data have increased significantly. Machine learning can be defined as "computational methods that make use of past experience to improve performance or make precise predictions." Experience, in this case, refers to data about the past, often electronic information, whose size and quality significantly impact the outcome of the predictions made by the algorithms. Common ML tasks include classification, regression, ranking, clustering, and dimensionality reduction or manifold learning. Classification is the problem of finding the right label for inputs; such problems include, for example, labeling images, classifying text, or identifying the appropriate customer segment for a customer. Regression is the problem of predicting a continuous value for an input, for example, future stock value or the length of a customer relationship. In ranking, the problem is to arrange items according to certain criteria, for example, web search. Clustering aims to segment the data into homogeneous groups that are not known in advance; for example, a company might wish to discover new customer segments, or communities in social networks. Dimensionality reduction or manifold learning aims to reduce the data to a lower-dimensional representation. The subject of this study is whether a customer will churn, which is a binary decision problem between 1 and 0; therefore, the strategies introduced in this section are used for decision problems. ML techniques can be divided into supervised learning and unsupervised learning, where the main difference is that in supervised learning the data is labeled, while in unsupervised learning it is not. A common use case for unsupervised learning is clustering or dimensionality reduction; for supervised learning, an example is an email spam filter.
#### 4.2.1 Data preprocessing and model optimization
Data preprocessing is an essential step in creating an AI model. It plays a crucial role in the model's performance and its interpretability. Data preprocessing includes cleaning, standardization, transformation, feature extraction or selection, and more. The preprocessing can be divided into two main categories: value transformation (cleaning, standardization, transformation, handling missing values, etc.) and value representation (variable selection and evaluation).
#### 4.2.2 Data cleaning and transformation
Data cleaning is examining the quality of data and ensuring its integrity. This process involves two primary approaches: filtering and wrapping. Filtering involves the removal of data based on predefined rules, such as removing outliers, incorrect spellings, duplicates, or impossible data, such as a 120-year-old client. On the other hand, Wrapping focuses on improving data quality by identifying and removing mislabeled data. Feature engineering or data transformation is a technique used to discover missing information about the relationships among features and construct new features from the existing features. This process can lead to more accurate and concise classifiers
and improved interpretability. These new features may include combinations of current and past values, such as the sum of two previous values.
#### 4.2.3 Missing Data
Data preprocessing often involves dealing with missing values in the dataset. One approach to handling missing data is to delete the instances that contain missing values, which can lead to data imbalance. An alternative approach is to impute the missing values with an estimated value. This can be done by using similar instances, calculating the mean values, or using statistical or machine learning techniques.
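A minimal sketch of mean imputation, one of the estimation approaches mentioned above, follows; the feature matrix is toy data and NaN marks missing entries.

```python
import numpy as np

X = np.array([[25.0,   np.nan, 3.0],
              [40.0,   60.0,   np.nan],
              [np.nan, 55.0,   5.0]])

# Column-wise mean imputation: replace each missing value with the mean
# of the observed values in that feature.
col_means = np.nanmean(X, axis=0)
X_imputed = np.where(np.isnan(X), col_means, X)
print(X_imputed)
```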
#### 4.2.4 Sampling
Data preprocessing is a crucial step in creating an AI model. It affects the performance and interpretability of the model. It includes data cleaning, standardization, transformation, feature extraction, and selection. Data cleaning involves examining the quality of the data and removing any anomalies or inaccuracies. Transformation or feature engineering is a technique used to find missing data and build new features from existing ones that would result in more accurate and concise classifiers and increased interpretability. Missing values can be handled by imputing estimated values obtained from similar cases or using statistical or AI techniques. However, it is important to note that class imbalance, a common issue in ML, can lead to problems such as improper evaluation metrics, lack of data, and improper inductive bias. To address this, oversampling and undersampling techniques can be used to adjust the distribution of the training set. Still, they also have their own drawbacks, such as loss of data and increased risk of overfitting.
#### 4.2.5 Feature and Variable Selection
Feature and variable selection are techniques used to identify and extract the most relevant data from many factors. With the increasing amount of data and factors available due to advancements in data collection, it is crucial only to include the most important and useful factors in the model being built. The main goals of selection are to achieve better predictive performance, make faster and more efficient predictions, and gain a more accurate understanding of the predictive process. Including unnecessary factors in the model can lead to complexity or overfitting, while missing important factors can result in diminished predictive performance. There are several
classes of feature selection methods, including filter, wrapper, and embedded methods. Filter methods use simple feature-importance measures, such as variance, to determine which features to include. Wrapper methods use algorithms to iterate through possible feature subsets and maximize classification performance, while embedded methods aim to reduce computational time by incorporating feature selection into the training process. Advanced methods, such as genetic algorithms and particle swarm optimization, can also be used. However, these methods can be computationally expensive, and the underlying subset-selection problem is NP-hard. It's important to use appropriate feature selection methods based on the dataset, model, and computational resources.
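As a concrete example of a filter method, the sketch below keeps only features whose variance exceeds a threshold; the data and threshold are illustrative.

```python
import numpy as np

def variance_filter(X: np.ndarray, threshold: float) -> np.ndarray:
    """Filter-style feature selection: keep the indices of features whose
    variance exceeds a threshold (near-constant features carry little
    information for the model)."""
    return np.where(X.var(axis=0) > threshold)[0]

X = np.array([[1.0, 0.0, 10.0],
              [1.0, 1.0, 20.0],
              [1.0, 0.0, 30.0]])
# Column 0 is constant and column 1 is low-variance; only column 2 survives.
print(variance_filter(X, threshold=0.25))  # [2]
```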
### Model for Customer Segmentation
Several models are commonly used for customer segmentation, including classification techniques and various analytical methods tailored to the specific needs of different business models. The main models used for customer segmentation include:
* Demographic Segmentation;
* Recency, Frequency, and Monetary (RFM) Segmentation;
* Customer Status and Behavioral Segmentation.
Segmentation based on gender is one of the simplest yet most effective ways for organizations to categorize their customer base. This type of segmentation is particularly useful for creating targeted content or promotions for gender-based events or programs, such as Mother's Day, Father's Day, or Women's Day. RFM Segmentation is commonly used in the direct mail industry and is widely employed for ranking customers based on their purchasing history. This approach identifies customers based on recency (the number of days between two purchases), frequency (the total number of purchases made by a customer in a specific period), and monetary value (the total amount spent by a customer in a specific period)[24].
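A minimal pandas sketch of computing RFM values from a transaction log is shown below; the column names, reference date, and recency convention (days since last purchase) are assumptions for illustration.

```python
import pandas as pd

# Hypothetical transaction log: one row per purchase.
tx = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "b", "c"],
    "date": pd.to_datetime(["2023-01-05", "2023-03-01", "2023-02-10",
                            "2023-02-20", "2023-03-02", "2023-01-15"]),
    "amount": [50.0, 20.0, 10.0, 15.0, 30.0, 200.0],
})
today = pd.Timestamp("2023-03-10")

rfm = tx.groupby("customer").agg(
    recency=("date", lambda d: (today - d.max()).days),  # days since last purchase
    frequency=("date", "count"),                          # purchases in the period
    monetary=("amount", "sum"),                           # total spend in the period
)
print(rfm)
```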
Client status and behavior analysis is when organizations examine their data to categorize their clients into active and lapsed. Active and lapsed status refers to the last time a client made a purchase[31]. Behavioral analysis involves analyzing the past behavior of clients, such as shopping habits, brand preferences, and purchase patterns, to make predictions about their future
actions. This process is carried out by data analysts who work with the data set from the e-commerce organization, load the data, perform data analysis, and segment the clients into categories. The information is then presented in easy-to-understand dashboards for non-technical individuals. Finally, this information is used to develop strategies for retaining and acquiring clients.
Neural networks are a component of Artificial Intelligence that employ principles and behavior similar to those of neurons in living organisms for signal processing [32]. The central aspect of such a network, which accounts for its broad possibilities and significant potential, is the parallel processing of data by all nodes, significantly enhancing the speed of data processing. Additionally, with a high number of interneuron connections, the network possesses robustness against errors that may occur in individual lines. Currently, neural networks are applied to solving various problems, one of which is the problem of prediction[33]. In this case, the radial basis function (RBF) network was chosen as the network architecture, with a multidimensional time series as input and the prediction outcome as the time series value at the desired time.
To improve the prediction quality, it is crucial to preprocess the data, as neural networks typically do not perform well with values from a broad range of input data. To eliminate this issue, the data should be scaled to the range [0... +1] or [-1... +1]. The equations used to scale the input data are as follows (2, 3, 4):
\[X_{s}=S_{c}.X_{u}+Of \tag{2}\]
\[S_{c}=\frac{T_{max}-T_{min}}{R_{max}-R_{min}} \tag{3}\]
\[Of=T_{min}-S_{c}.R_{min} \tag{4}\]
where \(X_{s}\), \(X_{u}\) are, respectively, the scaled and original input data; \(T_{min}=0\), \(T_{max}=1\) are the minimum and maximum of the target range; and \(R_{min}\), \(R_{max}\) are the minimum and maximum inputs.
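Eqs. (2)-(4) can be implemented directly, as in the following sketch with illustrative inputs.

```python
import numpy as np

def scale_inputs(x_u, r_min, r_max, t_min=0.0, t_max=1.0):
    """Linear scaling of Eqs. (2)-(4): X_s = S_c * X_u + Of."""
    s_c = (t_max - t_min) / (r_max - r_min)   # Eq. (3)
    of = t_min - s_c * r_min                  # Eq. (4)
    return s_c * x_u + of                     # Eq. (2)

x_u = np.array([10.0, 55.0, 100.0])
print(scale_inputs(x_u, r_min=10.0, r_max=100.0))  # [0.  0.5 1. ]
```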
A radial basis neural network is a network with one hidden layer, as shown in Figure 1.
In this work, the hidden layer employs Radial Basis Functions (RBFs) to transform the input vector X. Various radial basis functions can be utilized; however, the Gaussian function is the most commonly used and will be utilized in this work. The Gaussian form for the kth neuron is as follows [34]:
\[\phi_{k}(x)=exp(\frac{-r_{k}^{2}}{a_{k}^{2}}) \tag{5}\]
where X is the input vector, \(r_{k}\) is the radius.
\[r_{k}=|X-C_{k}| \tag{6}\]
where \(C_{k}\) is the center vector of the RBF, and \(a_{k}\) is the function's parameter, called the width. The output layer of the network is a linear adder, and the output of the network \(u\) is described by the expression:
\[u=\sum_{k=0}^{N}w_{k}\phi_{k}(X) \tag{7}\]
Figure 1: Radial basis neural network
where \(w_{k}\) is the weight connecting the output neuron with the kth neuron of the hidden layer.
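Putting Eqs. (5)-(7) together, a forward pass of the RBF network can be sketched as follows; the centers, widths, and weights are toy values (in practice they would be fitted to data).

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of the RBF network in Eqs. (5)-(7): Gaussian hidden
    units phi_k(x) = exp(-r_k^2 / a_k^2) followed by a linear adder."""
    r = np.linalg.norm(x - centers, axis=1)        # r_k = |X - C_k|, Eq. (6)
    phi = np.exp(-(r ** 2) / (widths ** 2))        # Eq. (5)
    return float(weights @ phi)                    # u = sum_k w_k phi_k, Eq. (7)

centers = np.array([[0.0, 0.0], [1.0, 1.0]])  # C_k, one row per hidden neuron
widths = np.array([0.5, 0.5])                 # a_k
weights = np.array([0.7, 0.3])                # w_k
print(rbf_forward(np.array([0.1, 0.1]), centers, widths, weights))
```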
To understand the behavior of a radial basis function network, it is crucial to track the progression of the input vector X. When values are assigned to the components of the input vector, each neuron of the hidden layer produces a value based on how close the input vector is to that neuron's center vector. Consequently, neurons whose centers differ significantly from the input vector X will have outputs close to 0, and their impact on the results of the subsequent neurons in the output layer will be negligible. Conversely, a hidden neuron whose center is close to the vector X will produce a value close to one. Segmentation is performed on an unstructured set of customer data intended for marketing purposes. This section discusses market segmentation and customer segmentation and mentions the available data mining techniques to support these processes. Market segmentation is a well-known marketing strategy, and its benefits are highlighted in various marketing research textbooks[34].
### Market segmentation
Market segmentation, first defined in 1956, is a method used by organizations to categorize customers based on similar characteristics, such as geographic location, demographics, product usage, and purchasing behavior. The goal is to increase customer satisfaction and maximize efficiency by tailoring marketing efforts to specific segments. One common tool used in market segmentation is clustering, which groups elements with similar values into segments[35; 36; 37].
While early market segmentation studies only considered one set of factors, modern market segmentation models take multiple sets of factors into account simultaneously, in what is called cooperative market segmentation. There are various segmentation methods, including k-means clustering, hierarchical clustering, association rule mining, decision trees, and neural networks. The objective is to identify and describe customer groups and reach profitable customer segments[38]. The stages of market segmentation research include literature review, solution architecture, testing, verification, and evaluation of results. Market segmentation is an ongoing area of study, and there is always room for improvement. The ultimate goal of using a market segmentation system is to improve the position of the organization and better serve the needs of customers[39].
### Customer segmentation
Market and customer segmentation are often used interchangeably in the literature, with market segmentation generally being viewed as a high-level strategy and customer segmentation providing a more granular view. A combination of customer segmentation and targeting for campaign strategies can be achieved through the use of the Recency, Frequency, and Monetary (RFM) model[24]. The RFM model considers the recency of the most recent purchase (R), the total number of purchases made during a given period of time (F), and the monetary value spent during that time period (M). It can be used in conjunction with the Customer Lifetime Value (LTV) model, which evaluates the contributions of segmented customers by calculating their current value and predicting their potential value.
One approach to improve the customer division and targeting process is through the use of genetic algorithms, as proposed by Chang[28], who suggests that the LTV model be used as a fitness function in the genetic algorithm to identify more suitable customers for each campaign. Another approach, proposed by Kim, Jung, Su, and Hwang[36], is to perform customer segmentation using LTV components such as current value, expected value, and customer loyalty.
In traditional markets, customer segmentation is a critical technique used in marketing research. There are numerous mathematical methods for identifying customer segments, including statistical techniques, neural networks, genetic algorithms, and k-means fuzzy clustering, as explored by various researchers.
To conclude, a brief overview of the segmentation process is provided. The customer population can be divided into segments based on different criteria or attributes. For example, a population could be segmented based on geographic location, resulting in four segments of varying sizes. However, the segments would have different attributes that could be further exploited through a process called customer profiling.
### Client profiling
Client profiling involves analyzing a client's characteristics such as age, orientation, income, and lifestyle in order to understand the traits of a particular group and describe what they are like. By utilizing client segmentation and profiling techniques, marketers can determine the appropriate marketing strategies for each segment. This approach helps to establish and maintain a strong relationship with existing customers, improving customer retention
and ultimately contributing to business growth and revenue generation. This process is known as Customer Relationship Management (CRM) [40]. There is no one specific method for conducting client segmentation and profiling, as each database utilizes its own approach. Typically, there are two types of profiling: segment profiling and lead profiling [41].
Client segment profiling is a common marketing approach to understanding the attributes of a particular group of customers. It takes into consideration various factors such as demographics, lifestyle, and purchasing behavior in order to tailor marketing strategies and enhance customer relationships. This practice falls under the umbrella of customer relationship management (CRM) and is crucial for improving customer acquisition and revenue generation in the early stages of a digital project. The segment profile of the customer is considered more relevant than the individual social profile, as it determines the target market for advertising and provides insight into the content direction. Additionally, the decision-making process of consumers regarding the purchase of goods and services is known as buyer behavior. While Mowen and Minor present a somewhat different definition, behavioral profiling is based on consumer attitudes, usage patterns, and reactions to a product. Advertisers consider behavioral factors such as the following to be the best starting points for constructing consumer behavior profiles:
* Timing: Customers are profiled based on their purchase decision-making process, including the time they choose to make a purchase or use the product. Companies may adopt different marketing strategies based on key timing events, such as before the New Year or National Holidays.
* Benefits: Benefit profiling is a process that segments customers based on the various benefits they may be seeking in a product.
* Customer status: By profiling non-customers, former customers, potential customers, new customers, and regular customers of the product, the company can tailor and customize its marketing efforts for each group.
* Usage rate: Usage rate profiling segments customers based on the amount they use the product, dividing them into groups of non-users, light users, medium users, and heavy users.
* Purchaser Readiness Stage: The purchaser readiness stage refers to the customer's level of awareness and interest in the product.
* Loyalty status: Customers can also be profiled based on their level of loyalty. Hard-core loyal customers consistently purchase the same product, split loyal customers are loyal to multiple brands and purchase them randomly, and shift loyal customers switch from one brand to another, staying with one brand for a period of time before switching to another.
* Attitude: Customers can be divided based on their attitude towards the product, such as enthusiastic, positive, neutral, negative, or hostile. By considering consumer attitudes towards a brand or product, a company gains a wide range of insights about the market and its customers.
## 5 Experimental
### Architecture
The architecture of the customer profiling process is shown in Figure 2.
Figure 2: Processes of Customer Segmentation
### Dataset
"The data utilized for our research was sourced from the Data Flair repository, which encompasses a cross-border dataset that encompasses several key demographic attributes, including age, education level, ID, annual income, marital status, and presence of children in the household."
The RFM model employed in this study utilized data from the SAS Institute to calculate the recency, frequency, and monetary rankings, enabling the segmentation of customers into distinct groups. The data comprises the following attributes:
### Preprocessing
#### 5.3.1 Data Cleaning
Data cleaning is a crucial aspect of AI and has a substantial impact on the development of a model. It is a routine task that is often overlooked, yet it is vital for achieving successful outcomes. While there may not be any complex techniques or insider knowledge involved in data cleaning, it can play a decisive role in the success of a business. Experienced data scientists often allocate a substantial amount of time to this process, recognizing that clean data is more valuable than complex calculations. With a well-cleaned dataset, even simple calculations can produce desirable results, as demonstrated in Figure 3.
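As a rough illustration of such routine cleaning steps, the sketch below assumes a hypothetical file `marketing_campaign.csv` carrying the attributes of Table 2; the specific imputation and outlier rules are choices made for demonstration only:

```python
import pandas as pd

df = pd.read_csv("marketing_campaign.csv")   # hypothetical file name

df = df.drop_duplicates(subset="ID")         # keep one row per customer
df["Income"] = df["Income"].fillna(df["Income"].median())  # impute missing income
df["Age"] = 2023 - df["Year_Birth"]          # derive age from birth year

# drop implausible outliers (e.g. ages above 100)
df = df[df["Age"] <= 100]
```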
\begin{table}
\begin{tabular}{|c|c|} \hline
**Serial No.** & **Attributes** \\ \hline
1 & ID \\ \hline
2 & Year\_Birth \\ \hline
3 & Education \\ \hline
4 & Marital\_Status \\ \hline
5 & Income \\ \hline
6 & Kidhome \\ \hline
7 & Teenhome \\ \hline \end{tabular}
\end{table}
Table 2: Attributes of first datasets
#### 5.3.2 Exploratory Data Analysis
In the field of data mining, Exploratory Data Analysis (EDA) involves the systematic examination of datasets to uncover their underlying characteristics and patterns. EDA is a crucial step in the data analysis process and helps to understand the information contained in the data before proceeding with modeling. It can be challenging to extract meaningful insights from large sets of raw data or complex calculations. Exploratory data analysis provides a framework for making sense of the data by utilizing visualizations and summarization techniques to make the data more accessible and understandable.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Serial No.** & **Attributes** \\ \hline
1 & Dt\_Customer \\ \hline
2 & Recency \\ \hline
3 & MntWines \\ \hline
4 & MntFruits \\ \hline
5 & MntMeatProducts \\ \hline
6 & MntFishProducts \\ \hline
7 & MntSweetProducts \\ \hline
8 & MntGoldProds \\ \hline
9 & NumDealsPurchases \\ \hline
10 & NumWebPurchases \\ \hline
11 & NumCatalogPurchases \\ \hline
12 & NumStorePurchases \\ \hline
13 & NumWebVisitsMonth \\ \hline
14 & AcceptedCmp1 \\ \hline
15 & AcceptedCmp2 \\ \hline
16 & AcceptedCmp3 \\ \hline
17 & AcceptedCmp4 \\ \hline
18 & AcceptedCmp5 \\ \hline
19 & Complain \\ \hline
20 & Z\_CostContact \\ \hline
21 & Z\_Revenue \\ \hline
22 & Response \\ \hline \end{tabular}
\end{table}
Table 3: Attributes of second datasets
The goal of EDA is to provide a comprehensive understanding of the data and identify potential areas for further investigation.
#### 5.3.3 Analysis of Variables
Univariate analysis is a fundamental form of data analysis that involves the examination of a single variable; common univariate tools include box plots and histograms. Multivariate analysis, on the other hand, involves the examination of several variables at once and relies on techniques that relate variables to one another, such as scatter plots and grouped bar charts.
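A brief sketch of these univariate and multivariate views, continuing from the cleaned frame `df` of the previous sketch (the column names are assumptions), might look as follows:

```python
import matplotlib.pyplot as plt

# univariate views: distribution of a single variable
df["Income"].plot(kind="hist", bins=40, title="Income distribution")
plt.show()
df.boxplot(column="Income")
plt.show()

# multivariate view: relationship between two variables
df.plot(kind="scatter", x="Age", y="Income", alpha=0.3)
plt.show()
```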
### Cluster Analysis
In this section, the focus of the analysis conducted during this project is presented, providing an overview of segmentation and of how customers are divided into groups.

Customer segmentation is a widely used marketing strategy that involves dividing the customer base into smaller groups that can be targeted with specific content and offers. These customer segments are drawn from customer behavior data, which provides the business with a deeper understanding of the types of customers in the system. The benefit of customer segmentation is twofold.
Figure 3: Processes of Customer Segmentation
First and foremost, a better understanding of the types of customers in a system can lead to better business and marketing strategies. Additionally, a customer is more likely to use an application regularly if they receive relevant content. Furthermore, if a customer is satisfied, they are more likely to recommend the application to others, contributing to the expansion of the business [23]. This type of marketing strategy is a component of a company's Business Intelligence framework. To effectively segment the customer base into meaningful groups, a comprehensive analysis of the available data, along with a study and evaluation of clustering algorithms, is necessary (Figure 4).
The customer distribution is depicted in Figure 5: the majority of customers, 64%, are in relationships (married or together), and most of them, 97%, hold at least a bachelor's degree.
#### 5.4.1 Clustering Algorithm
Clustering algorithms are employed to group clients into clusters in order to ensure that clients belonging to the same cluster are more similar to each other than to clients in another cluster. The aim of this segmentation is to identify meaningful patterns within the data space, with clients being grouped according to a chosen measure of similarity.
Figure 4: Customer Segmentation
In this section, we will discuss the most commonly used similarity measures and clustering algorithms. It is important to note that the success of clustering is highly dependent on the definition of a relevant similarity or distance measure. The simplest and most common distance measure is the Euclidean distance (equation 8).
\[d(x,y)=\sqrt{\sum_{k=1}^{n}(x_{k}-y_{k})^{2}} \tag{8}\]
where n is the number of features, x and y are the data objects, \(x_{k}\) and \(y_{k}\) are the kth attributes of the feature data objects x and y respectively.
The cosine similarity is widely used in the area of recommender systems, particularly in collaborative filtering. The fundamental concept behind cosine similarity is to calculate the cosine of the angle between two n-dimensional feature vectors [29]. This can be accomplished using the following equation, where n represents the number of features in the data objects x and y, \(x\cdot y\) denotes the dot product of the vectors, and \(||x||\) represents the magnitude of vector x (equation 9).

\[cos(x,y)=\frac{x\cdot y}{||x||\,||y||} \tag{9}\]
Figure 5: Marital Status and Education Level
Another distance measure that will be covered in this report is the Pearson correlation. This distance measure is also widely used in recommender systems. Pearson correlation calculates the linear relationship between two feature vectors, meaning two feature vectors are similar if a best-fitting straight line is close to all data points in both vectors. It is calculated using the following function (equation 10):
\[Pearson(x,y)=\frac{cov(x,y)}{\sigma_{x}\,\sigma_{y}} \tag{10}\]

where x and y are two feature vectors, \(cov(x,y)\) is the covariance of the data points in x and y, and \(\sigma\) is the standard deviation of a feature vector. The outcome is a value between -1 and 1, where a value near 1 or -1 implies that all points lie close to the best-fitting line, and values closer to 0 indicate that there is little linear relationship between the given feature vectors.
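The three measures of equations (8)-(10) can be written compactly as follows; this is a minimal NumPy sketch, with sample-based (ddof=1) statistics assumed for the Pearson correlation:

```python
import numpy as np

def euclidean(x, y):
    # Eq. (8): square root of summed squared attribute differences
    return np.sqrt(np.sum((x - y) ** 2))

def cosine(x, y):
    # Eq. (9): dot product normalized by the vector magnitudes
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def pearson(x, y):
    # Eq. (10): covariance normalized by the standard deviations
    return np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.5])
print(euclidean(x, y), cosine(x, y), pearson(x, y))
```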
#### 5.4.2 K-means
The K-means algorithm is widely used in cluster analysis and customer segmentation. It is a method designed to divide a set of objects into K subgroups or clusters. The algorithm depends on a pre-determined value for K, with K centroids initialized to random observations within the dataset. The K-means algorithm then iteratively adjusts these centroids to minimize the cluster variance by employing two steps:
* For each centroid c, identify the subset of objects that are closer to c than to any other centroid using some similarity measure,
* Recalculate the centroid of each cluster by computing the mean vector of all objects in the group.

This two-step process is repeated until convergence is reached. The standard implementation of K-means uses the Euclidean distance measure described in a previous section to identify the subset of objects that corresponds to each cluster, by calculating the mean squared error (equivalent, in this case, to the Euclidean distance) of each object's feature vector to each of the K centroids and selecting the closest result [23]. However, other distance measures can be used in place of the Euclidean distance. Aggarwal et al. assert that for high-dimensional data, the choice of distance measure used in clustering is crucial for its success.
The steps for the algorithm are as follows:
* Choose the number of clusters "k"
* Select "k" random points from the dataset as centroids
* Assign all points to the nearest group centroid
* Recalculate the centroids of newly formed groups
* Repeat steps 3 and 4 until no change in clusters is observed
* End the iteration.
The process of K-means applied to the RFM analysis is shown in Figure 6.
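A minimal sketch of this procedure, continuing from the hypothetical `rfm` table built earlier and using scikit-learn's `KMeans` (which runs the assignment/update loop of the steps above internally), is shown below; the choice of k=5 and the standardization step are assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# standardize so that no single RFM dimension dominates the Euclidean distance
X = StandardScaler().fit_transform(rfm[["Recency", "Frequency", "Monetary"]])

# steps 1-6 of the algorithm: repeated assignment/update until convergence
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
rfm["Segment"] = kmeans.fit_predict(X)

# profile each segment by its average RFM values
print(rfm.groupby("Segment")[["Recency", "Frequency", "Monetary"]].mean())
```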
#### 5.4.3 Silhouette Score
This is a more accurate way of determining the number of clusters to form from the data. The coefficient is determined for each sample, and the formula is as follows (11):
Figure 6: Processes of RFM analysis
\[SC=\frac{x-y}{max(x,y)} \tag{11}\]
where y represents the mean intra-cluster distance, i.e. the average distance between an example and the other examples of the same cluster, and x represents the mean nearest-cluster distance, i.e. the average distance to the examples of the next closest cluster.

The coefficient ranges from -1 to 1. A value close to 1 indicates that the example is near the members of its own cluster and has been assigned to the correct cluster. A value close to -1, on the other hand, indicates that the example has been assigned to the incorrect cluster (Figure 7).

For our data, this analysis indicates that k=3 is only a local optimum and that k=5 should be selected as the number of clusters. This method is deemed superior, as it makes the determination of the optimal number of clusters more explicit and transparent. However, it should be noted that the calculation is computationally intensive, as the coefficient must be computed for each example [42]. As such, the choice of the metric used to select the number of clusters must be made based on the specific requirements of the application.
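A sketch of silhouette-based selection of k, assuming the standardized feature matrix `X` from the K-means sketch above, could read:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)   # mean of Eq. (11) over all samples

best_k = max(scores, key=scores.get)
print(scores, "-> choose k =", best_k)
```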
#### 5.4.4 Elbow method
The basic purpose of cluster partitioning algorithms such as k-means is to define clusters with the least amount of intra-cluster variation (Figure 8), i.e. to minimize \(\sum_{k=1}^{K}W(C_{k})\) (see also Figure 9).
Figure 7: Silhouette Method
where W(\(C_{k}\)) denotes the intra-cluster variation of the kth cluster \(C_{k}\). The compactness of the clustering can be assessed by measuring the total intra-cluster variation.
In 2001, the gap statistic method was introduced by R. Tibshirani, G. Walther, and T. Hastie of Stanford University. This method can be applied to any clustering technique, such as k-means or hierarchical clustering. The gap statistic allows us to evaluate the total intra-cluster variation for different values of k and to compare it with its expected value under an uninformative reference distribution of the data. Monte Carlo simulations can be used to generate the reference datasets: for each variable in the dataset, we can calculate the range between the minimum and maximum
Figure 8: Elbow Method
Figure 9: Gap Statistics Method
values, from which we can generate values uniformly distributed within the lower and upper bounds.
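A rough Monte Carlo sketch of this idea is given below; the number of reference datasets and the use of the k-means inertia as \(W(C_{k})\) are assumptions, and `X` is the standardized feature matrix from the earlier sketches:

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k, n_ref=10, seed=0):
    """Gap(k) = mean_ref log(W_ref) - log(W_data), where W is the k-means
    inertia (total intra-cluster variation) and the reference data are drawn
    uniformly within the per-feature min/max bounds of X (Monte Carlo)."""
    rng = np.random.default_rng(seed)
    log_w = np.log(KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_)
    lo, hi = X.min(axis=0), X.max(axis=0)
    ref = [np.log(KMeans(n_clusters=k, n_init=10, random_state=seed)
                  .fit(rng.uniform(lo, hi, size=X.shape)).inertia_)
           for _ in range(n_ref)]
    return np.mean(ref) - log_w

print([round(gap_statistic(X, k), 3) for k in range(2, 7)])
```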
In the data analysis stage, we observed that this is an imbalanced dataset (more than 80% of customers said no to the campaign). Models can easily learn the characteristics of the negative examples, but it can be difficult for them to learn from the positive examples. SMOTE alleviates the issue by providing more positive training samples. At the same time, the Matthews Correlation Coefficient (MCC) considers true and false positives and negatives, and it is generally regarded as a balanced measure that can be used even when the classes are of very different sizes. In this setting, MCC is a more informative measure than accuracy, since there are only a few positive examples in the test set. We tested Logistic Regression, the Boosting Tree, Support Vector Machines, and Neural Networks. The Boosting Tree performs best among all the models on all 3 datasets. The performances of SVM and NN are practically equivalent on the 3 datasets. LR is the worst model, since it is too simple for this classification task. The Boosting Tree shows some outliers on the raw and feature-selection datasets, which indicates that this algorithm might not be stable on these datasets. In conclusion, for this classification task we recommend the Feature Selection Dataset + Boosting Tree, because 1) this combination achieves the best MCC performance, and 2) although BT may be unstable, even its lower outliers are comparable to NN and SVM.
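A condensed sketch of this pipeline is given below; `X_feat` and `y` (the 'Response' target) are hypothetical names, and scikit-learn's `GradientBoostingClassifier` stands in for the boosting tree used in the study:

```python
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

# X_feat, y: feature matrix and binary 'Response' target (hypothetical names)
X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, test_size=0.2,
                                          stratify=y, random_state=0)

# oversample only the training split, so the test set keeps its true imbalance
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
print("test MCC:", matthews_corrcoef(y_te, model.predict(X_te)))
```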
The overall accuracy of the model was determined to be 0.877, but upon further examination of the score report, it was noted that while the model identified negative examples (0) with high accuracy, it was lacking in its ability to accurately identify positive examples, with a per-class precision of 0.55 and a recall of 0.55.
The MCC for the test data was calculated to be 0.469, indicating that the model may have difficulty in correctly classifying positive examples in the test set. The train MCC, by contrast, was determined to be 0.98, which suggests the presence of overfitting in the model. Despite efforts to improve the model, a reduction in train MCC did not result in significant improvement in test MCC, which may indicate that the features in this dataset do not effectively predict the 'Response' variable.
## 6 Conclusion
Our research investigates the formation of a client profile and the prediction of the client's behavior. For a precise forecast, regression analysis requires a range of client characteristics, while time-series analysis needs to consider the client's purchase history. To determine the market area and create a client profile, segmentation types and client-characterizing variables are utilized.
Response modeling is commonly framed as a binary classification problem. Buyers are divided into two categories: responders and non-responders. Various classification techniques, such as statistical methods and AI methods, were employed to model the response, including decision trees, Bayesian networks, and support vector machines. The latter of these, support vector machine (SVM), has gained attention in the AI community and offers advantages over multivariate classifiers. In this study, a support vector machine (SVM) was employed as a classifier for the simulation.
The response modeling process involves several steps, including data acquisition, data preprocessing, feature engineering, feature selection, class balancing, and model training and evaluation. Different data mining techniques and algorithms were employed to execute each step. This study utilized an analysis cycle built upon previous response modeling methodology. Due to the nature of this study, the different stages of the modeling process were gathered from past work and, with certain modifications and enhancements, integrated into a single process. To choose the best algorithm and methodology for each stage of the cycle, various studies related to each stage were considered and evaluated, and the best and most appropriate techniques were selected. This exploratory process (building the response model) required extensive programming to implement. The algorithms and methods associated with each stage were customized and run using the Python programming language.
In considering the limitations of the project, it is important to take into account the amount of data available, as well as potential avenues for future research and the possibility of extending the project. The amount of data, specifically the number of rows in this study, has a direct impact on the accuracy and solution provided by the algorithm. With a limited dataset of under 3000 rows, the results may not be fully representative. Additionally, this study did not consider interpretability, which is an area of interest in the field of customer segmentation based on distinct categories. Companies would benefit from being able to understand why their clients are purchasing
certain goods rather than simply predicting future purchases. While it may be possible to extract the importance of certain variables from some of the algorithms used in this study, it was not within the scope of this project's objective.
## 7 Future Work
In future Work, more advanced methods for predicting customer churn may be explored, such as weighted random forests and hybrid models that can handle unstructured data. This would enable the extraction of relevant attributes for potential customer segmentation studies in the retail industry. As highlighted in the literature review, using hybrid models has shown promising performance gains and could be a strategy to improve the models.
Artificial intelligence has the potential to revolutionize various industries by transforming existing business processes and creating new business models. Key areas of focus include consumer engagement, digital manufacturing, smart cities, autonomous vehicles, risk management, computer vision, and speech recognition. AI has already demonstrated positive results in a range of sectors including healthcare, law enforcement, finance, security, trade, manufacturing, education, mining, and logistics.
|
2306.04309 | Structural Relaxation of Materials with Spin-Orbit Coupling: Analytical
Forces in Spin-Current DFT | Analytical gradients of the total energy are provided for local density and
generalized-gradient hybrid approximations to generalized Kohn-Sham
spin-current density functional theory (SCDFT). It is shown that gradients may
be determined analytically, in a two-component framework, including spin-orbit
coupling (SOC), with high accuracy. We demonstrate that renormalization of the
electron-electron potential by SOC-induced spin-currents can account for
considerable modification of crystal structures. In the case of Iodine-based
molecular crystals, the effect may amount to more than half of the total
modification of the structure by SOC. Such effects necessitate an SCDFT, rather
than DFT, formulation, in which exchange-correlation functionals are endowed
with an explicit dependence on spin-current densities. An implementation is
presented in the \textsc{Crystal} program. | Jacques K. Desmarais, Alessandro Erba, Jean-Pierre Flament | 2023-06-07T10:14:29Z | http://arxiv.org/abs/2306.04309v2 | # Structural Relaxation of Materials with Spin-Orbit Coupling: Analytical Forces in Spin-Current DFT
###### Abstract
Analytical gradients of the total energy are provided for local density and generalized-gradient hybrid approximations to generalized Kohn-Sham spin-current density functional theory (SCDFT). It is shown that gradients may be determined analytically, in a two-component framework, including spin-orbit coupling (SOC), with high accuracy. We demonstrate that renormalization of the electron-electron potential by SOC-induced spin-currents can account for considerable modification of crystal structures. In the case of Iodine-based molecular crystals, the effect may amount to more than half of the total modification of the structure by SOC. Such effects necessitate an SCDFT, rather than DFT, formulation, in which exchange-correlation functionals are endowed with an explicit dependence on spin-current densities. An implementation is presented in the Crystal program.
## I Introduction
The Hohenberg-Kohn density functional theory (DFT), being entirely formulated in terms of functionals \(F\left[n\right]\) of the electron density (particle density) \(n=\Psi^{\dagger}\Psi\), is meant for a (non-relativistic) fermionic system embedded in some external field which may be described by a scalar-multiplicative potential (i.e. a Coulomb field).[1] The theory may be extended to external fields that are not scalar-multiplicative, a consequence being that the formulation, then, involves a larger set of auxiliary density variables. For instance, extension to a Zeeman field leads to functionals \(F\left[n,\mathbf{m}\right]\) of both the electron density and spin-magnetization \(\mathbf{m}=\Psi^{\dagger}\boldsymbol{\sigma}\Psi\), where \(\boldsymbol{\sigma}\) is the vector of Pauli matrices \(\boldsymbol{\sigma}^{x},\boldsymbol{\sigma}^{y}\) and \(\boldsymbol{\sigma}^{z}\): i.e. the so-called spin-DFT (SDFT) of von Barth and Hedin.[2] Further extension to an external magnetic field leads to the current-spin DFT of Vignale and Rasolt,[3; 4] involving functionals \(F\left[n,\mathbf{m},\mathbf{j}\right]\) of also the particle-current density \(\mathbf{j}=\frac{1}{2i}\Psi^{\dagger}\left(\boldsymbol{\nabla}-\boldsymbol{ \nabla}^{\dagger}\right)\Psi\). The appearance of \(\mathbf{m}\) and \(\mathbf{j}\) in the formulation is thus associated to time-reversal symmetry breaking (TRSB) due to the magnetic field. In (open-shell) systems that intrinsically break TRS, use of SDFT has become routine.
The above considerations are, of course, still restricted to the non-relativistic regime. Scalar relativistic (SR) effects (by definition described by scalar-multiplicative potentials) can be straightforwardly included in the theory. It is notable, however, that spin-orbit coupling (SOC, described by a potential that is certainly not scalar, nor multiplicative) may also be formally introduced in the theory by viewing it as another external field. Overall, the relativistic generalization of the procedure leads (in the two-component framework) to the spin-current DFT (SCDFT), of Vignale and Rasolt, as first shown by Bencheikh.[4; 5] In this context, SOC enters the theory through non-Abelian potentials \(\boldsymbol{\mathcal{A}}^{x}\), \(\boldsymbol{\mathcal{A}}^{y}\) and \(\boldsymbol{\mathcal{A}}^{z}\), each of which couples to the fermionic system through spin-current densities (i.e. currents of the spin-magnetization \(m_{x}\), \(m_{y}\) and \(m_{z}\)) \(\mathbf{J}^{x}\), \(\mathbf{J}^{y}\) and \(\mathbf{J}^{z}\):[5; 6; 7; 8]
\[\mathbf{J}^{a}=\frac{1}{2i}\Psi^{\dagger}\boldsymbol{\sigma}^{a}\left( \boldsymbol{\nabla}-\boldsymbol{\nabla}^{\dagger}\right)\Psi\quad a=x,y,z \tag{1}\]
In the simplest variant of the formulation, that being for (closed-shell) systems that preserve time-reversal symmetry, the functional reduces to \(F\left[n,\mathbf{J}^{x},\mathbf{J}^{y},\mathbf{J}^{z}\right]\), and therefore corresponding density-functional approximations (DFAs) for a fermionic system in the presence of SOC must include the spin-current densities.[9; 10] In the most general case of TRSB systems, the full functional \(F\left[n,\mathbf{m},\mathbf{j},\mathbf{J}^{x},\mathbf{J}^{y},\mathbf{J}^{z}\right]\) is written also in terms of \(\mathbf{m}\) and \(\mathbf{j}\).
Although first proposed around 35 years ago, SCDFT has garnered little attention, until recently. Indeed, the ball was moved forward in 2017, with Pittalis _et al._ demonstrating that the spin-currents enter explicitly in DFAs only at the level of the curvature of the exchange-correlation (xc) hole.[6] This led to our formulation of the adiabatic connection in the SCDFT, within a generalized Kohn-Sham framework,[11] in which spin-currents can be effectively treated via the exact-exchange operator in hybrid DFAs of the local-density and generalized gradient approximations (LDA and GGA).[7; 8] The theory was thereafter applied to the description of: i) Weyl fermions, wherein renormalization of the electron-electron interaction by SOC-induced spin-currents was found to account for around half of the splitting of the Weyl node pair in TaAs;[10] ii) a Bismuth two-dimensional \(\mathbb{Z}_{2}\) topological insulator, wherein it was demonstrated that only an SCDFT formulation could account for the appearance of an experimentally-confirmed Dirac fermion in the valence band structure, at the onset of the topological phase transition.[12]
These previous studies clearly demonstrate the fundamental importance of spin-current densities in the description of the electronic structure, when SOC is included. Here, we extend the same treatment to analytical gradients of the total energy, allowing, for the
first time, to discuss the effect of spin-currents on the description of the crystal structure. Such extension is implemented in a developmental version of the Crystal program.[13; 14] Through application of the approach to the diiodide molecule, as well as Iodine-based molecular crystals, we demonstrate that renormalization of the electron-electron interaction through SOC-induced spin-currents can account for significant modification of crystal structures (around half or more of the total effect due to SOC).
## II Formalism
### Two-Component Generalized Kohn-Sham Equations
In the case of periodic systems, the spinors \(\left|\psi_{i,\mathbf{k}}\right\rangle=\left|\psi_{i,\mathbf{k}}^{\uparrow} \right\rangle\otimes\left|\uparrow\right\rangle+\left|\psi_{i,\mathbf{k}}^{ \downarrow}\right\rangle\otimes\left|\downarrow\right\rangle\) are 2c crystalline orbitals (COs), with components \(\left|\psi_{i,\mathbf{k}}^{\sigma}\right\rangle\), expanded in a set of Bloch functions (BFs) \(\phi_{\mu,\mathbf{k}}\):
\[\psi_{i,\mathbf{k}}^{\sigma}\left(\mathbf{r}\right)=\sum_{\mu}^{N_{\mathcal{B} }}C_{\mu,i}^{\sigma}\left(\mathbf{k}\right)\phi_{\mu,\mathbf{k}}\left(\mathbf{ r}\right)\;, \tag{2}\]
where \(\mathbf{k}\) is a point in the first Brillouin zone (FBZ), \(N_{\mathcal{B}}\) is the number of basis functions in a given cell of the infinite-periodic system, and \(\sigma=\uparrow,\downarrow\) is a spin index.
In Crystal, the BFs are conveniently represented as a linear combination of _pure real_ atomic orbitals (LCAO), through the inverse-Fourier relation:
\[\phi_{\mu,\mathbf{k}}\left(\mathbf{r}\right)=\frac{1}{\sqrt{\Omega}}\sum_{ \mathbf{g}}e^{i\mathbf{k}\cdot\mathbf{g}}\;\chi_{\mu,\mathbf{g}}\left(\mathbf{ r}-\mathbf{A}_{\mu}\right)\;. \tag{3}\]
Here \(\Omega\) is the volume of the FBZ, \(\mathbf{g}\) is a direct-lattice vector and \(\mathbf{A}_{\mu}\) is the position in cell \(\mathbf{g}\) at which the AO \(\chi_{\mu,\mathbf{g}}\) is centered. In Eq. (3) we have introduced the shorthand notation \(\chi_{\mu,\mathbf{g}}\left(\mathbf{r}-\mathbf{A}_{\mu}\right)=\chi_{\mu}\left(\mathbf{r}-\mathbf{A}_{\mu}-\mathbf{g}\right)\). A similar notation will also be applied to the electron density, \(\rho_{\mathbf{g}}\left(\mathbf{r}\right)=\rho\left(\mathbf{r}-\mathbf{g}\right)\).

Variation of the orbitals \(\psi_{i,\mathbf{k}}^{\sigma}\) under the constraint of orthonormality:
\[\left\langle\psi_{i,\mathbf{k}}^{\sigma}|\psi_{j,\mathbf{k}^{ \prime}}^{\sigma^{\prime}}\right\rangle = \delta_{i,j}\delta_{\mathbf{k},\mathbf{k}^{\prime}}\delta_{\sigma,\sigma^{\prime}} \tag{4a}\] \[\Rightarrow \mathbf{C}^{\dagger}\left(\mathbf{k}\right)\mathbf{S}\left( \mathbf{k}\right)\mathbf{C}\left(\mathbf{k}\right)=\mathbf{1}\]
leads to the generalized Kohn-Sham (GKS) equation:
\[\mathbf{H}\left(\mathbf{k}\right)\mathbf{C}\left(\mathbf{k}\right)=\mathbf{S} \left(\mathbf{k}\right)\mathbf{C}\left(\mathbf{k}\right)\mathbf{E}\left( \mathbf{k}\right)\;, \tag{4b}\]
where all matrices have size \(2N_{\mathcal{B}}\times 2N_{\mathcal{B}}\), \(\mathbf{C}\left(\mathbf{k}\right)\) is the matrix of CO coefficients of Eq. (2), \(\mathbf{S}\left(\mathbf{k}\right)\) is the BF overlap matrix, \(\mathbf{E}\left(\mathbf{k}\right)\) is the matrix of Lagrange multipliers (i.e. for canonical orbitals, corresponding to the diagonal matrix of band-structure energy levels \(\epsilon_{i,\mathbf{k}}\)) and \(\mathbf{H}\left(\mathbf{k}\right)\) is the BF Hamiltonian matrix. Eq. (4b) can be written more explicitly to highlight the structure in spin space:
\[\left(\begin{matrix}\mathbf{H}^{\uparrow\uparrow}\left(\mathbf{k}\right)&\mathbf{H}^{\uparrow\downarrow}\left(\mathbf{k}\right)\\ \mathbf{H}^{\downarrow\uparrow}\left(\mathbf{k}\right)&\mathbf{H}^{\downarrow\downarrow}\left(\mathbf{k}\right)\end{matrix}\right)\left(\begin{matrix}\mathbf{C}^{\uparrow}\left(\mathbf{k}\right)\\ \mathbf{C}^{\downarrow}\left(\mathbf{k}\right)\end{matrix}\right)=\left(\begin{matrix}\mathbf{S}^{\uparrow\uparrow}\left(\mathbf{k}\right)&\mathbf{0}\\ \mathbf{0}&\mathbf{S}^{\downarrow\downarrow}\left(\mathbf{k}\right)\end{matrix}\right)\left(\begin{matrix}\mathbf{C}^{\uparrow}\left(\mathbf{k}\right)\\ \mathbf{C}^{\downarrow}\left(\mathbf{k}\right)\end{matrix}\right)\mathbf{E}\left(\mathbf{k}\right) \tag{5}\]
In Eq. (5) and elsewhere, matrices with double and single spin indices have size \(N_{\mathcal{B}}\times N_{\mathcal{B}}\) and \(N_{\mathcal{B}}\times 2N_{\mathcal{B}}\), respectively. \(\mathbf{H}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)\), for instance, has elements:
\[H_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)=\Omega\langle\phi_{ \mu,\mathbf{k}}|\hat{H}^{\sigma\sigma^{\prime}}|\phi_{\nu,\mathbf{k}}\rangle \tag{6}\]
and:
\[\mathbf{H}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)=\mathbf{h}^{\sigma \sigma^{\prime}}\left(\mathbf{k}\right)+\mathbf{J}^{\sigma\sigma^{\prime}} \left(\mathbf{k}\right)-a\mathbf{K}^{\sigma\sigma^{\prime}}\left(\mathbf{k} \right)+\mathbf{V}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)\;, \tag{7}\]
in which \(\mathbf{h}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)\) contains the matrix elements that can be built from mono-electronic integrals:
\[\mathbf{h}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)=\delta_{\sigma, \sigma^{\prime}}\left[\mathbf{v}\left(\mathbf{k}\right)+\mathbf{u}_{AR}\left( \mathbf{k}\right)\right]+\mathbf{u}_{SO}^{\sigma\sigma^{\prime}}\left(\mathbf{ k}\right)\;. \tag{8}\]
Here, \(\mathbf{v}\) consists of the electronic kinetic energy and electron-nuclear interaction terms, \(\mathbf{u}_{AR}\) and \(\mathbf{u}_{SO}^{\sigma\sigma^{\prime}}\) are, respectively, the averaged and spin-orbit relativistic effective potential (AREP and SOREP) matrices;[15; 16] and \(\mathbf{J}^{\sigma\sigma^{\prime}}\) and \(\mathbf{K}^{\sigma\sigma^{\prime}}\) are the usual Coulomb and exact-exchange terms (with \(a\) being the included fraction of the latter). \(\mathbf{V}^{\sigma\sigma^{\prime}}\) is the matrix of DFT correlation and exchange potentials (in either collinear or non-collinear treatments).[17; 18]
Inserting Eq. (3) into Eq. (6) (or the equivalent equation with \(\hat{H}\) being replaced by any other operator), we are able to relate the BF matrix \(\mathbf{H}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)\), for instance, to the AO one \(\mathbf{H}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\) through the inverse-Fourier relation:
\[\mathbf{H}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)=\sum_{\mathbf{g}}e^{i \mathbf{k}\cdot\mathbf{g}}\;\mathbf{H}^{\sigma\sigma^{\prime}}\left(\mathbf{g} \right)\;,\] (9a) where AO matrix elements of \[\hat{H}\] or any other operator read: \[\mathbf{H}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)=\langle\chi_{\mu, \mathbf{0}}|\hat{H}^{\sigma\sigma^{\prime}}|\chi_{\nu,\mathbf{g}}\rangle\;. \tag{9b}\]
In the AO basis, the Coulomb matrix reads:[14]
\[J_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right) = \delta_{\sigma,\sigma^{\prime}}\sum_{\tau\omega}\sum_{\mathbf{g}^{\prime}}\Re\left[P_{\tau\omega}^{\uparrow\uparrow}\left(\mathbf{g}^{\prime}\right)+P_{\tau\omega}^{\downarrow\downarrow}\left(\mathbf{g}^{\prime}\right)\right]\sum_{\mathbf{g}^{\prime\prime}}(\mu^{\mathbf{0}}\nu^{\mathbf{g}}|\tau^{\mathbf{g}^{\prime\prime}}\omega^{\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}}) \tag{10}\] \[= \delta_{\sigma,\sigma^{\prime}}\sum_{\mathbf{g}^{\prime\prime}}\int d\mathbf{r}\;\chi_{\mu,\mathbf{0}}\left(\mathbf{r}-\mathbf{A}_{\mu}\right)\Phi^{\mathrm{Coul}}\left(\mathbf{r},\mathbf{g}^{\prime\prime}\right)\chi_{\nu,\mathbf{g}}\left(\mathbf{r}-\mathbf{A}_{\nu}\right)\]
with the Coulomb potential:
\[\Phi^{\mathrm{Coul}}\left(\mathbf{r},\mathbf{g}^{\prime\prime}\right)=\int d \mathbf{r}^{\prime}\frac{\rho_{\mathbf{g}^{\prime\prime}}\left(\mathbf{r}^{ \prime}\right)}{\left|\mathbf{r}^{\prime}-\mathbf{g}^{\prime\prime}-\mathbf{ r}\right|} \tag{11}\]
being a density functional \(\Phi^{\mathrm{Coul}}=\Phi^{\mathrm{Coul}}\left[\rho_{\mathbf{g}^{\prime\prime}}\right]\). The exchange AO matrix reads:
\[K_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)=\sum_{\tau\omega}\sum_{\mathbf{g}^{\prime}}P_{\tau\omega}^{\sigma\sigma^{\prime}}\left(\mathbf{g}^{\prime}\right)\sum_{\mathbf{g}^{\prime\prime}}(\mu^{\mathbf{0}}\tau^{\mathbf{g}^{\prime\prime}}|\omega^{\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}}\nu^{\mathbf{g}})\;, \tag{12}\]
where \(P_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\) is the AO direct-space density matrix:
\[P_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right) = \left[P_{\nu\mu}^{\sigma^{\prime}\sigma}\left(-\mathbf{g}\right) \right]^{*}=\frac{1}{\Omega}\sum_{i}^{\text{bands}}f_{i}\int_{\Omega}d \mathbf{k}\;e^{i\mathbf{k}\cdot\mathbf{g}} \tag{13}\] \[\times C_{\mu,i}^{\sigma}\left(\mathbf{k}\right)\left[C_{\nu,i}^{\sigma^ {\prime}}\left(\mathbf{k}\right)\right]^{*}\theta\left[\varepsilon_{F}- \epsilon_{i}(\mathbf{k})\right]\;,\]
and \(\theta\) is the Heaviside step-function, \(\varepsilon_{F}\) is the Fermi energy and \(0<f_{i}<1\) is the fractional occupation of band \(i\). In terms of these matrices, the total energy is written:
\[E = \frac{1}{2}\sum_{\mathbf{g}}\sum_{\sigma,\sigma^{\prime}}\sum_{\mu\nu}\Bigg{\{}\left[P_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right]^{*}\left[h_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)+H_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right]\Bigg{\}} \tag{14}\] \[= \frac{1}{2}\sum_{\mathbf{g}}\sum_{\sigma,\sigma^{\prime}}\sum_{\mu\nu}\Bigg{\{}\left[P_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right]^{*}\left[2h_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)+V_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right.\] \[+ \left.\sum_{\tau\omega}\sum_{\sigma^{\prime\prime},\sigma^{\prime\prime\prime}}\sum_{\mathbf{g}^{\prime}}P_{\tau\omega}^{\sigma^{\prime\prime}\sigma^{\prime\prime\prime}}\left(\mathbf{g}^{\prime}\right)\sum_{\mathbf{g}^{\prime\prime}}B_{\mu,\nu,\tau,\omega}^{\mathbf{0},\mathbf{g},\mathbf{g}^{\prime\prime},\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}}\right]\Bigg{\}}\;,\]
where we introduced the shorthand notation:
\[B_{\mu,\nu,\tau,\omega}^{\mathbf{0},\mathbf{g},\mathbf{g}^{\prime\prime},\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}}=(\mu^{\mathbf{0}}\nu^{\mathbf{g}}|\tau^{\mathbf{g}^{\prime\prime}}\omega^{\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}})-a(\mu^{\mathbf{0}}\tau^{\mathbf{g}^{\prime\prime}}|\omega^{\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}}\nu^{\mathbf{g}})\;. \tag{15}\]
### Treatment of the Coulomb Series
For 3D periodic systems, the Coulomb lattice series, whose electron-electron component was given in Eq. (10), is conditionally convergent. The series may be rendered absolutely convergent by Ewald summation techniques, employing a charge distribution of atom-centered point multipoles (here \(c\) is an atomic index per cell) in the long range:[19; 20]
\[\rho_{c,\mathbf{g}^{\prime\prime}}^{\text{model}}\left(\mathbf{r}\right)= \sum_{l=0}^{L}\sum_{m=-l}^{l}\eta_{m}^{l}\left[\rho_{c,\mathbf{g}^{\prime\prime }}\right]\delta_{l}^{m}\left(\mathbf{r}-\mathbf{A}_{c}-\mathbf{g}^{\prime \prime}\right)\;, \tag{16}\]
used to model the exact atomic contribution to the density:
\[\rho_{c,\mathbf{g}^{\prime\prime}}\left(\mathbf{r}\right) = \sum_{\mu\in c,\mathbf{g}^{\prime\prime}}\sum_{\mathbf{g}}\sum_{ \nu}\Re\left[P_{\mu\nu}^{\uparrow\uparrow}\left(\mathbf{g}\right)+P_{\mu\nu}^ {\downarrow\downarrow}\left(\mathbf{g}\right)\right] \tag{17}\] \[\times \chi_{\mu}\left(\mathbf{r}-\mathbf{A}_{c}-\mathbf{g}^{\prime \prime}\right)\chi_{\nu}\left(\mathbf{r}-\mathbf{A}_{\nu}-\mathbf{g}-\mathbf{ g}^{\prime\prime}\right)\;,\]
with \(\mu\in c,\mathbf{g}^{\prime\prime}\) meaning that the sum is restricted to AOs centered at atom \(c\) in cell \(\mathbf{g}^{\prime\prime}\). In Eq. (16):
\[\eta_{m}^{l}\left[\rho_{c,\mathbf{g}^{\prime\prime}}\right]=\int d\mathbf{r} \rho_{c,\mathbf{g}^{\prime\prime}}\left(\mathbf{r}\right)X_{l}^{m}\left( \mathbf{r}-\mathbf{A}_{c}-\mathbf{g}^{\prime\prime}\right) \tag{18}\]
are the multipole moments of \(\rho_{c,\mathbf{g}^{\prime\prime}}\) with unnormalized real spherical harmonics \(X_{l}^{m}\), while \(\delta_{l}^{m}\) are unit point multipoles centered at \(\mathbf{A}_{c}\) in cell \(\mathbf{g}^{\prime\prime}\), and in our implementation \(L\) has a maximum value of 6 (a minimum value of 4 is formally required to ensure absolute convergence).
The model density is introduced by the following replacement of the Coulomb potential in Eq. (10):[19]
\[\Phi^{\text{Coul}}\left[\rho_{c,\mathbf{g}^{\prime\prime}}\right]\to\Phi^{ \text{Ew}}\left[\rho_{c,\mathbf{g}^{\prime\prime}}^{\text{model}}\right]+\Phi^{ \text{Coul}}\left[\rho_{c,\mathbf{g}^{\prime\prime}}-\rho_{c,\mathbf{g}^{ \prime\prime}}^{\text{model}}\right]\;, \tag{19}\]
with \(\Phi^{\text{Ew}}\) being the corresponding Ewald potential. The model \(\rho_{c,\mathbf{g}^{\prime\prime}}^{\text{model}}\) is applied in the long range, in the sense that the \(\mathbf{g}^{\prime\prime}\) lattice series for the second term in Eq. (19) is truncated by a preset tolerance \(T2\), while the one relevant to the first term is summed analytically to infinity.[19; 20]
In the 3D periodic case, the procedure leads to an absolutely convergent lattice series, at the price of an additional correction depending on the shape of the sample used for the summation.[21] For spherical 3D samples, the correction to the Hamiltonian matrix is proportional to the spherical second moment of the electron density \(Q\):[19]
\[H_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\to H_{\mu\nu}^{ \sigma\sigma^{\prime}}\left(\mathbf{g}\right)-\delta_{\sigma,\sigma^{\prime}}QS_{ \mu\nu}^{\sigma\sigma}\left(\mathbf{g}\right)\;, \tag{20}\]
where:
\[Q=\sum_{c}^{\text{atoms}}Q_{c}=\sum_{c}^{\text{atoms}}\frac{2\pi}{3V}\int d \mathbf{r}\left[\rho_{c,\mathbf{0}}\left(\mathbf{r}\right)-\rho_{c,\mathbf{0}}^ {\text{model}}\left(\mathbf{r}\right)\right]|\mathbf{r}|^{2}\;, \tag{21}\]
and \(V\) is the volume of the unit cell in direct space.
In the calculation, the correction of the Hamiltonian matrix elements of Eq. (20) can be avoided by adding \(Q\mathbf{S}\left(\mathbf{k}\right)\) to both sides of Eq. (4b), leading to the modified GKS equation:[19]
\[\mathbf{H}\left(\mathbf{k}\right)\mathbf{C}\left(\mathbf{k}\right)=\mathbf{S} \left(\mathbf{k}\right)\mathbf{C}\left(\mathbf{k}\right)\left(\mathbf{E}\left( \mathbf{k}\right)+Q\right)\;, \tag{22}\]
with shifted energy levels \(\mathbf{E}\left(\mathbf{k}\right)\to\mathbf{E}\left(\mathbf{k}\right)+Q\). The shift is irrelevant for total energy calculations (in which only differences \(\epsilon_{i}\left(\mathbf{k}\right)-\varepsilon_{F}\) with respect to the Fermi level \(\varepsilon_{F}\) matter). On the other hand, the correction enters our formulation for the analytical gradient in 3D periodic systems, as shown below. We note that the same correction is not necessary for the 0D, 1D or 2D cases, in which the Coulomb lattice series is already absolutely convergent (provided that a unit cell can be chosen with vanishing dipole moment).
### Analytical Gradients with Respect to Atomic Displacements
The derivative of the energy \(E\) with respect to one of the atomic centers \(\mathbf{A}_{\eta}\) provides, from Eq. (14):
\[\frac{\partial E}{\partial\mathbf{A}_{\eta}} = \frac{1}{2}\sum_{\mathbf{g}}\sum_{\sigma,\sigma^{\prime}}\sum_{\mu\nu}\Bigg{\{}\left[P_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right]^{*}\Bigg{[}2\frac{\partial h_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)}{\partial\mathbf{A}_{\eta}}+\frac{\partial V_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)}{\partial\mathbf{A}_{\eta}}+\sum_{\tau\omega}\sum_{\sigma^{\prime\prime},\sigma^{\prime\prime\prime}}\sum_{\mathbf{g}^{\prime}}P_{\tau\omega}^{\sigma^{\prime\prime}\sigma^{\prime\prime\prime}}\left(\mathbf{g}^{\prime}\right)\sum_{\mathbf{g}^{\prime\prime}}\frac{\partial B_{\mu,\nu,\tau,\omega}^{\mathbf{0},\mathbf{g},\mathbf{g}^{\prime\prime},\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}}}{\partial\mathbf{A}_{\eta}}\Bigg{]}\Bigg{\}}+\sum_{\mathbf{g}}\sum_{\sigma,\sigma^{\prime}}\sum_{\mu\nu}\frac{\partial\left[P_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right]^{*}}{\partial\mathbf{A}_{\eta}}H_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right) \tag{23}\]
The explicit calculation of the derivative of the density matrix in the last line of Eq. (23) can be avoided by first making use of Eqs. (13) and (9a), leading to:
\[\sum_{\mathbf{g}}\sum_{\mu\nu}\frac{\partial\left[P_{\mu\nu}^{ \sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right]^{*}}{\partial\mathbf{A}_{ \eta}}H_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right) = \sum_{\mu\nu}\frac{1}{\Omega}\sum_{i}^{\text{bands}}f_{i}\int_{ \Omega}d\mathbf{k}\ \theta\left[\varepsilon_{F}-\epsilon_{i}(\mathbf{k})\right] \left\{\frac{\partial\left[C_{\mu,i}^{\sigma}\left(\mathbf{k}\right)\right]^{ *}}{\partial\mathbf{A}_{\eta}}H_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{ k}\right)C_{\nu,i}^{\sigma^{\prime}}\left(\mathbf{k}\right)+c.c.\right\} \tag{24}\] \[= \sum_{\mu\nu}\frac{1}{\Omega}\sum_{i}^{\text{bands}}f_{i}\int_{ \Omega}d\mathbf{k}\ \theta\left[\varepsilon_{F}-\epsilon_{i}(\mathbf{k})\right] \left\{\frac{\partial\left[C_{\mu,i}^{\sigma}\left(\mathbf{k}\right)\right]^{ *}}{\partial\mathbf{A}_{\eta}}\delta_{\sigma,\sigma^{\prime}}\left(\epsilon _{i}\left(\mathbf{k}\right)+Q\right)S_{\mu\nu}^{\sigma\sigma}\left(\mathbf{k} \right)C_{\nu,i}^{\sigma^{\prime}}\left(\mathbf{k}\right)+c.c.\right\}\]
where we have made use of Eq. (22). Furthermore, a differentiation of Eq. (4a) provides:
\[0 = \delta_{\sigma,\sigma^{\prime}}\sum_{\mu\nu}\frac{\partial\left[ C_{\mu,i}^{\sigma}\left(\mathbf{k}\right)\right]^{*}}{\partial\mathbf{A}_{\eta}}S_{ \mu\nu}^{\sigma\sigma}\left(\mathbf{k}\right)C_{\nu,i}^{\sigma^{\prime}}\left( \mathbf{k}\right) \tag{25}\] \[+ C_{\mu,i}^{\sigma}\left(\mathbf{k}\right)\frac{\partial S_{\mu \nu}^{\sigma\sigma^{\prime}}\left(\mathbf{k}\right)}{\partial\mathbf{A}_{ \eta}}C_{\nu,i}^{\sigma^{\prime}}\left(\mathbf{k}\right)+C_{\mu,i}^{\sigma} \left(\mathbf{k}\right)S_{\mu\nu}^{\sigma\sigma}\left(\mathbf{k}\right)\frac {\partial C_{\nu,i}^{\sigma^{\prime}}\left(\mathbf{k}\right)}{\partial \mathbf{A}_{\eta}}\]
Inserting Eqs. (24) and (25) into Eq. (23), we obtain:
\[\frac{\partial E}{\partial\mathbf{A}_{\eta}} = \frac{1}{2}\sum_{\mathbf{g}}\sum_{\sigma,\sigma^{\prime}}\sum_{\mu\nu}\Bigg{\{}\left[P_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right]^{*}\Bigg{[}2\frac{\partial h_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)}{\partial\mathbf{A}_{\eta}}+\frac{\partial V_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)}{\partial\mathbf{A}_{\eta}}+\sum_{\tau\omega}\sum_{\sigma^{\prime\prime},\sigma^{\prime\prime\prime}}\sum_{\mathbf{g}^{\prime}}P_{\tau\omega}^{\sigma^{\prime\prime}\sigma^{\prime\prime\prime}}\left(\mathbf{g}^{\prime}\right)\sum_{\mathbf{g}^{\prime\prime}}\frac{\partial B_{\mu,\nu,\tau,\omega}^{\mathbf{0},\mathbf{g},\mathbf{g}^{\prime\prime},\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}}}{\partial\mathbf{A}_{\eta}}\Bigg{]}\Bigg{\}}-\sum_{\mathbf{g}}\sum_{\sigma}\sum_{\mu\nu}\left[W_{\mu\nu}^{\sigma\sigma}\left(\mathbf{g}\right)\right]^{*}\frac{\partial S_{\mu\nu}^{\sigma\sigma}\left(\mathbf{g}\right)}{\partial\mathbf{A}_{\eta}} \tag{26}\]
in which \(W_{\mu\nu}^{\sigma\sigma}\left(\mathbf{g}\right)\) are elements of the direct-space energy-weighted density-matrix:
\[W_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right) = \frac{1}{\Omega}\sum_{i}^{\text{bands}}f_{i}\int_{\Omega}d \mathbf{k}\ \left(\epsilon_{i}(\mathbf{k})+Q\right)e^{i\mathbf{k}\cdot\mathbf{g}} \tag{27}\] \[\times C_{\mu,i}^{\sigma}\left(\mathbf{k}\right)\left[C_{\nu,i}^{\sigma^ {\prime}}\left(\mathbf{k}\right)\right]^{*}\theta\left[\varepsilon_{F}- \epsilon_{i}(\mathbf{k})\right]\.\]
### Analytical Gradients with Respect to Lattice Vectors
Having determined derivatives with respect to atomic displacements \(\mathbf{A}_{\eta}\), the treatment may be extended to derivatives with respect to the direct lattice vectors \(\mathbf{a}_{1}\), \(\mathbf{a}_{2}\) and \(\mathbf{a}_{3}\). This may be achieved by first writing a general position in the lattice basis:
\[\mathbf{A}_{\eta}+\mathbf{g}=\sum_{i=1}^{3}\left(f_{\eta,i}+n_{\mathbf{g},i} \right)\mathbf{a}_{i} \tag{28}\]
in which \(f_{\eta,1},f_{\eta,2},f_{\eta,3}\in\mathbb{R}\) and \(n_{\mathbf{g},1},n_{\mathbf{g},2},n_{\mathbf{g},3}\in\mathbb{Z}\) are, respectively, real and integer quantities. Starting from Eq. (14), and proceeding as in Section II.3, yields:
\[\frac{\partial E}{\partial\mathbf{a}_{i}} = \frac{1}{2}\sum_{\mathbf{g}}\sum_{\sigma,\sigma^{\prime}}\sum_{\mu\nu}\Bigg{\{}\left[P_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)\right]^{*}\Bigg{[}2\frac{\partial h_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)}{\partial\mathbf{a}_{i}}+\frac{\partial V_{\mu\nu}^{\sigma\sigma^{\prime}}\left(\mathbf{g}\right)}{\partial\mathbf{a}_{i}}+\sum_{\tau\omega}\sum_{\sigma^{\prime\prime},\sigma^{\prime\prime\prime}}\sum_{\mathbf{g}^{\prime}}P_{\tau\omega}^{\sigma^{\prime\prime}\sigma^{\prime\prime\prime}}\left(\mathbf{g}^{\prime}\right)\sum_{\mathbf{g}^{\prime\prime}}\frac{\partial B_{\mu,\nu,\tau,\omega}^{\mathbf{0},\mathbf{g},\mathbf{g}^{\prime\prime},\mathbf{g}^{\prime\prime}+\mathbf{g}^{\prime}}}{\partial\mathbf{a}_{i}}\Bigg{]}\Bigg{\}}-\sum_{\mathbf{g}}\sum_{\sigma}\sum_{\mu\nu}\left[W_{\mu\nu}^{\sigma\sigma}\left(\mathbf{g}\right)\right]^{*}\frac{\partial S_{\mu\nu}^{\sigma\sigma}\left(\mathbf{g}\right)}{\partial\mathbf{a}_{i}} \tag{29}\]
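To make the chain-rule factors of Eq. (28) concrete, the following toy NumPy sketch (with made-up lattice vectors and coordinates) evaluates a Cartesian position and the scalar factor \(f_{\eta,i}+n_{\mathbf{g},i}\) that multiplies the center derivatives in the lattice-vector gradients:

```python
import numpy as np

# rows of A are the direct lattice vectors a_1, a_2, a_3 (toy values)
A = np.array([[4.0, 0.0, 0.0],
              [0.0, 5.0, 0.0],
              [0.0, 0.0, 6.0]])
f = np.array([0.25, 0.50, 0.00])   # fractional coordinates f_eta
n = np.array([1, 0, -1])           # integer cell indices n_g

# Eq. (28): Cartesian position of atom eta in cell g
r = (f + n) @ A

# chain-rule factor: d(A_eta + g)/d a_i is the scalar (f_{eta,i} + n_{g,i})
dr_da = f + n
print(r, dr_da)
```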
### Derivatives of SOC Integrals
In Eqs. (23) and (29), we require analytical derivatives of the integrals with respect to atomic displacements and lattice vectors. Their calculation (apart from SOC integrals) has been described elsewhere.[22; 23; 24; 25; 26; 27; 28] In the following sections we concentrate on the calculation of analytical derivatives of SOC integrals.
#### ii.5.1 Derivatives of SOC Integrals: Atomic Displacements
As reported in Ref. [29], the explicit energy contribution in Eq. (14) from the SOC operator of Eq. (8) can be written in the following computationally convenient way:
\[E_{SO} = -2\Re\sum_{\mathbf{g}}\sum_{\mu\geq\nu}\left\{u_{SO,\mu\nu}^{ \alpha\alpha}\left(\mathbf{g}\right)\left[P_{\mu\nu}^{\alpha\alpha}\left( \mathbf{g}\right)-P_{\mu\nu}^{\beta\beta}\left(\mathbf{g}\right)\right]^{*}\right. \tag{30}\] \[- \left.u_{SO,\mu\nu}^{\alpha\beta}\left(\mathbf{g}\right)\left[P_{\mu \nu}^{\alpha\beta}\left(\mathbf{g}\right)-P_{\mu\nu}^{\beta\alpha}\left(\mathbf{g }\right)\right]^{*}\right\}\,,\]
in which the diagonal spin-blocks of the SOC matrix elements read (in the given RECP approximation): [29]
\[u^{\alpha\alpha}_{SO,\mu\nu}\left({\bf g}\right) = \sum_{{\bf g}^{\prime\prime}}\sum_{c}^{\rm atoms}\sum_{l=0}^{ \mathcal{L}}\langle\chi_{\mu,{\bf 0}}|\hat{\xi}_{l,c,{\bf g}^{\prime\prime}} \tag{31}\] \[\times \hat{P}_{l,c,{\bf g}^{\prime\prime}}\hat{L}_{z,l,c,{\bf g}^{ \prime\prime}}\hat{P}_{l,c,{\bf g}^{\prime\prime}}|\chi_{\nu,{\bf g}}\rangle\] \[\equiv \sum_{{\bf g}^{\prime\prime}}\sum_{c}^{\rm atoms}\sum_{l=0}^{ \mathcal{L}}\langle\chi_{\mu,{\bf 0}}|\hat{u}^{\alpha\alpha}_{SO,c,{\bf g}^{ \prime\prime}}|\chi_{\nu,{\bf g}}\rangle\] \[\equiv \sum_{{\bf g}^{\prime\prime}}\sum_{c}^{\rm atoms}\,u^{\alpha \alpha}_{SO,\mu\nu}\left({\bf g};c,{\bf g}^{\prime\prime}\right)\;,\]
where \(\hat{\xi}_{l,c,{\bf g}^{\prime\prime}}\) are radial operators centered at \({\bf A}_{c}\) in cell \({\bf g}^{\prime\prime}\), \(\hat{P}_{l,c,{\bf g}^{\prime\prime}}\) are projectors onto real spherical harmonics and \(\hat{L}_{z,l,c,{\bf g}^{\prime\prime}}\) is the (pure-imaginary) \(z\)-component electron-nuclear angular-momentum operator. In our implementation, \(\mathcal{L}\) has a maximum value of 4 (\(g\)-type projectors). Off-diagonal spin-blocks are written: [29]
\[u^{\alpha\beta}_{SO,\mu\nu}\left({\bf g}\right) = \sum_{{\bf g}^{\prime\prime}}\sum_{c}^{\rm atoms}\sum_{l=0}^{ \mathcal{L}}\langle\chi_{\mu,{\bf 0}}|\hat{\xi}_{l,c,{\bf g}^{\prime\prime}} \hat{P}_{l,c,{\bf g}^{\prime\prime}} \tag{32}\] \[\times \hat{L}_{-,l,c,{\bf g}^{\prime\prime}}\hat{P}_{l,c,{\bf g}^{ \prime\prime}}|\chi_{\nu,{\bf g}}\rangle\] \[\equiv \sum_{{\bf g}^{\prime\prime}}\sum_{c}^{\rm atoms}\langle\chi_{ \mu,{\bf 0}}|\hat{u}^{\alpha\beta}_{SO,c,{\bf g}^{\prime\prime}}|\chi_{\nu,{\bf g }}\rangle\] \[\equiv \sum_{{\bf g}^{\prime\prime}}\sum_{c}^{\rm atoms}u^{\alpha\beta} _{SO,\mu\nu}\left({\bf g};c,{\bf g}^{\prime\prime}\right)\;,\]
where \(\hat{L}_{-,l,c,{\bf g}^{\prime\prime}}\) is the corresponding angular-momentum annihilation operator. Comparing Eq. (30) with Eq. (23), the terms associated with derivatives of the SOC integrals are then:
\[E^{\prime}_{SO} = -2\Re\sum_{{\bf g}}\sum_{\mu\geq\nu}\left\{\frac{\partial u^{ \alpha\alpha}_{SO,\mu\nu}\left({\bf g}\right)}{\partial{\bf A}_{\eta}}\left[P^ {\alpha\alpha}_{\mu\nu}\left({\bf g}\right)-P^{\beta\beta}_{\mu\nu}\left({ \bf g}\right)\right]^{*}\right. \tag{33}\] \[- \left.\frac{\partial u^{\alpha\beta}_{SO,\mu\nu}\left({\bf g} \right)}{\partial{\bf A}_{\eta}}\left[P^{\alpha\beta}_{\mu\nu}\left({\bf g} \right)-P^{\beta\alpha}_{\mu\nu}\left({\bf g}\right)\right]^{*}\right\}\,.\]
To calculate the necessary derivative integrals, we employ integration by parts to develop the following translational invariance sum rule, involving derivatives with respect to the centers of the bra- \({\bf A}_{\mu}\), ket- \({\bf A}_{\nu}\) and operator \({\bf A}_{c}\):
\[\frac{\partial}{\partial{\bf A}_{\mu}}+\frac{\partial}{\partial{\bf A}_{\nu}}+ \frac{\partial}{\partial{\bf A}_{c}}=0\;. \tag{34}\]
Inserting Eq. (34) into Eqs. (31) and (32), we obtain:
\[\frac{\partial u^{\sigma\sigma^{\prime}}_{SO,\mu\nu}\left({\bf g};c,{\bf g}^{\prime\prime}\right)}{\partial{\bf A}_{\eta}} = \langle\frac{\partial\chi_{\mu,{\bf 0}}}{\partial{\bf A}_{\mu}}|\hat{u}^{\sigma\sigma^{\prime}}_{SO,c,{\bf g}^{\prime\prime}}|\chi_{\nu,{\bf g}}\rangle\left(\delta_{\eta,\mu}-\delta_{\eta,c}\right) \tag{35}\] \[+\langle\chi_{\mu,{\bf 0}}|\hat{u}^{\sigma\sigma^{\prime}}_{SO,c,{\bf g}^{\prime\prime}}|\frac{\partial\chi_{\nu,{\bf g}}}{\partial{\bf A}_{\nu}}\rangle\left(\delta_{\eta,\nu}-\delta_{\eta,c}\right)\]
#### ii.5.2 Derivatives of SOC Integrals: Lattice Vectors
Insertion of Eq. (28) into Eqs. (31) and (32) permits us to write the derivatives of the SOC integrals with respect to the lattice vectors, as required in Eq. (29):
\[\frac{\partial u^{\sigma\sigma^{\prime}}_{SO,\mu\nu}\left({\bf g };c,{\bf g}^{\prime\prime}\right)}{\partial{\bf a}_{i}} = \frac{\partial u^{\sigma\sigma^{\prime}}_{SO,\mu\nu}\left({\bf g };c,{\bf g}^{\prime\prime}\right)}{\partial{\bf A}_{\mu}}\frac{\partial{\bf A}_{ \mu}}{\partial{\bf a}_{i}}+\frac{\partial u^{\sigma\sigma^{\prime}}_{SO,\mu \nu}\left({\bf g};c,{\bf g}^{\prime\prime}\right)}{\partial\left({\bf A}_{\nu}+{ \bf g}\right)}\frac{\partial\left({\bf A}_{\nu}+{\bf g}\right)}{\partial{\bf a }_{i}}+\frac{\partial u^{\sigma\sigma^{\prime}}_{SO,\mu\nu}\left({\bf g};c,{ \bf g}^{\prime\prime}\right)}{\partial\left({\bf A}_{c}+{\bf g}^{\prime\prime }\right)}\frac{\partial\left({\bf A}_{c}+{\bf g}^{\prime\prime}\right)}{ \partial{\bf a}_{i}} \tag{36}\] \[= \langle\frac{\partial\chi_{\mu,{\bf 0}}}{\partial{\bf A}_{\mu}}|\hat{u}^{ \sigma\sigma^{\prime}}_{SO,c,{\bf g}^{\prime\prime}}|\chi_{\nu,{\bf g}}\rangle \left(f_{\mu,i}-f_{c,i}-n_{{\bf g}^{\prime\prime},i}\right)+\langle\chi_{\mu,{ \bf 0}}|\hat{u}^{\sigma\sigma^{\prime}}_{SO,c,{\bf g}^{\prime\prime}}|\frac{ \partial\chi_{\nu,{\bf g}}}{\partial{\bf A}_{\nu}}\rangle\left(f_{\nu,i}+n_{{ \bf g},i}-f_{c,i}-n_{{\bf g}^{\prime\prime},i}\right)\;,\]
where use has been made of Eq. (34).
## III Computational Details
All calculations are performed with a developmental version of the Crystal program. [13; 14] The SVWN5 exchange-correlation (xc) functional of the LDA, PBE xc functional of the GGA, and PBE0 xc functional of the global hybrid GGA are used. [30; 31; 32; 33] Large-core pseudo-potentials are used for Po, and both large- and small-core pseudo-potentials for I. [16] For the large-core calculations, valence basis sets for I and Po of the form \((6s5p2d)/[4s3p2d]\) and \((4s4p)/[2s2p]\) have been modified from the ones originally presented in Ref. [34], respectively. For the molecular small-core calculations, the triple-zeta valence basis set for I of Ref. [35] is used. The basis set for H is taken from Ref. [36]. For the periodic calculations, we use small-core pseudo-potentials and basis sets modified from Ref. [37]. For application to the I\({}_{2}\) and CsI\({}_{3}\) crystals, reciprocal space is sampled in a \(10\times 10\times 10\) and \(4\times 4\times 4\) Monkhorst-Pack net, respectively. A tolerance of \(10^{-8}\) Hartree on the total energy is used as a convergence criterion for the self-consistent field (SCF) procedure. The five TOLINTEG parameters that control truncation of the Coulomb and exact-exchange infinite series are set to 8 8 8 8 20. The xc functional and potential (in their collinear spin-DFT formulation) are sampled on a direct-space pruned grid over the unit-cell volume with Lebedev angular and Gauss-Legendre
radial quadratures, employing 99 radial and 1454 angular points (keyword XXLGRID). Both the atomic fractional coordinates and lattice parameters are fully optimized with analytical gradients of the total energy and a quasi-Newton scheme in the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variant.[24; 25; 27; 38] The initial guess for the Hessian of the BFGS scheme is taken as the identity matrix. Full input decks are available in Crystal format in the ESI.[39]
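For readers unfamiliar with the optimization scheme, the following is a minimal sketch (in Python, not the Crystal implementation) of a quasi-Newton BFGS minimization driven by analytical gradients and started, by default, from an identity Hessian, as described above. The two-parameter toy energy surface is an assumption made purely for illustration.

```python
# Minimal sketch of the optimization strategy described above: BFGS
# minimization with analytical gradients. The toy "lattice energy" of two
# parameters below is an assumption, not the SCDFT total energy.
import numpy as np
from scipy.optimize import minimize

def lattice_energy(x):
    a, c = x                      # toy lattice parameters
    return (a - 4.3) ** 2 + 0.5 * (c - 9.7) ** 2 + 0.1 * a * c

def lattice_gradient(x):
    a, c = x                      # analytical gradient of the toy energy
    return np.array([2 * (a - 4.3) + 0.1 * c,
                     (c - 9.7) + 0.1 * a])

# scipy's BFGS starts from an identity Hessian approximation by default
res = minimize(lattice_energy, x0=np.array([4.0, 9.0]),
               jac=lattice_gradient, method="BFGS")
print(res.x, res.fun)
```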
## IV Results and discussion
We discuss the numerical accuracy of the analytical forces relative to numerical ones on a simple model system: a periodic 1D chain of H\({}_{2}\)Po\({}_{2}\) units. We discuss the effect of SOC on structural parameters of the I\({}_{2}\) molecule, I\({}_{2}\) orthorhombic molecular crystal, and Caesium triiodide orthorhombic crystal. The effect of renormalization of the electron-electron interaction through SOC-induced spin-currents is quantified.
### Numerical Validation on a Model System
To validate our approach for the computation of analytical gradients of the total energy in SCDFT, and demonstrate its high numerical accuracy, we perform calculations on a model system represented by the infinite 1D chain of H\({}_{2}\)Po\({}_{2}\). This system is chosen based on the very large contribution of SOC to the forces. We compare the analytical cell gradient against numerical computations from finite differences of the total energy. The employed geometry for the H\({}_{2}\)Po\({}_{2}\) chain, as well as the values of the finite difference parameters, are provided in the ESI. We present such a comparison in Table 1 for two types of GKS calculations: the exact exchange approximation (EXX) and the LDA. The computed analytical cell gradient \(G_{a}\) is 5.861 \(\times 10^{-2}\) Hartree/bohr for EXX and 5.826 \(\times 10^{-2}\) Hartree/bohr for LDA. The total SOC contribution to the cell gradient \(G_{a}^{\rm SOC}\) amounts to 5.265 \(\times 10^{-3}\) and 7.628 \(\times 10^{-3}\) Hartree/bohr, respectively. Thus, the effect of SOC on the cell gradient is of the order of 10-15%.
Different numerical finite difference schemes are used for comparison with the analytical forces: a two-point, one-sided formula (2O), a two-point, two-sided formula (2T), and a four-point, two-sided formula (4T). Differences between the analytical force and the numerical one are already on the order of \(10^{-4}\) a.u. when the simplest 2O formula is used (around one order of magnitude smaller than the SOC contribution to the gradient). These differences are further reduced to \(10^{-5}\) a.u. (in the case of LDA) and even \(10^{-6}\) a.u. (in the case of EXX) when the more robust 2T or 4T numerical schemes are used, thus demonstrating the high numerical accuracy of our analytical approach to energy gradients within the SCDFT. A better agreement is obtained in the case of EXX, wherein the implementation is fully analytical (aside from integration over the first Brillouin zone, as well as diagonalization of the secular GKS equation). In contrast, the LDA computation contains an additional numerical step: the integration of the xc energy-density and potential over the direct-space unit-cell volume. In this case, imperfect cancellation of errors in the numerical quadrature slightly worsens the agreement.
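The three finite difference schemes used for this validation can be illustrated with a short sketch; the one-dimensional toy energy below is an assumption, and the formulas are the standard 2O, 2T, and 4T stencils.

```python
# Illustrative sketch (not the Crystal implementation): validating an
# analytical gradient against the 2O, 2T, and 4T finite-difference schemes
# of Table 1. The toy energy E(a) and step h are assumptions.
import numpy as np

def energy(a):
    return 0.5 * (a - 2.0) ** 2 + 0.1 * np.sin(3.0 * a)

def grad_analytic(a):
    return (a - 2.0) + 0.3 * np.cos(3.0 * a)

a0, h = 1.7, 1e-3
g_2O = (energy(a0 + h) - energy(a0)) / h                    # two-point, one-sided
g_2T = (energy(a0 + h) - energy(a0 - h)) / (2 * h)          # two-point, two-sided
g_4T = (-energy(a0 + 2 * h) + 8 * energy(a0 + h)
        - 8 * energy(a0 - h) + energy(a0 - 2 * h)) / (12 * h)  # four-point, two-sided

g_an = grad_analytic(a0)
for name, g in [("2O", g_2O), ("2T", g_2T), ("4T", g_4T)]:
    print(f"Delta_{name} = {abs(g - g_an):.3e}")
```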
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \(G_{a}\) & \(G_{a}^{\rm SOC}\) & \(\Delta_{\rm 2O}\) & \(\Delta_{\rm 2T}\) & \(\Delta_{\rm 4T}\) \\ EXX & 5.86\(\times 10^{-2}\) & 5.26\(\times 10^{-3}\) & 2.26\(\times 10^{-4}\) & 5.00\(\times 10^{-6}\) & 4.95\(\times 10^{-6}\) \\ LDA & 5.83\(\times 10^{-2}\) & 7.63\(\times 10^{-3}\) & 2.27\(\times 10^{-4}\) & 1.30\(\times 10^{-5}\) & 1.30\(\times 10^{-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between analytical and numerical cell gradient for the infinite 1D chain of H\({}_{2}\)Po\({}_{2}\) in the presence of SOC, for EXX and LDA calculations. The analytical cell gradient \(G_{a}\) is reported along with its SOC contribution \(G_{a}^{\rm SOC}\). Differences \(\Delta\) are reported between the analytical and numerical gradient as obtained through different finite difference schemes: a two-point, one-sided formula (2O), a two-point, two-sided formula (2T), and a four-point, two-sided formula (4T). All values in Hartree/bohr.
Figure 1: Crystal structure of A) the I\({}_{2}\) orthorhombic molecular crystal, and B) the CsI\({}_{3}\) orthorhombic crystal.
### Application to the I\({}_{2}\) Molecule
We discuss the effect of SOC on the bond length of the diiodine molecule by comparing our SCDFT approach to simpler (S)DFT ones where the effect of the spin-current densities on the renormalization of the electron-electron interaction is omitted. We study the molecule both in its neutral ground state I\({}_{2}\) (closed-shell configuration, i.e. time-reversal symmetry preserving) and in its anionic form I\({}_{2}^{-}\) (open-shell configuration, i.e. time-reversal symmetry breaking). We perform calculations using both the PBE and PBE0 xc functionals, and both large-core (LC) and small-core (SC) pseudo-potentials. Results are reported in Table 2. The experimental gas phase bond length at -80 \({}^{\circ}\)C, for comparison, is around 2.674 Å.[40; 41] The best agreement with the experimental figure is obtained from the PBE0 SC calculation, which gives 2.678 Å with our SCDFT approach.
Inspection of the Table clearly shows that SOC systematically induces the lengthening of the bond in all cases. For the I\({}_{2}\) species, an SCDFT treatment of SOC (i.e. including spin-current densities) results in a bond lengthening that is twice as large as that from a DFT approach where spin-current densities are neglected. This is observed consistently in both LC and SC calculations. Thus, around half of the effect of SOC on the ground state geometry of the I\({}_{2}\) molecule is accounted for by modification of the electron-electron interaction through SOC-induced spin-current densities.
The last three columns of Table 2 provide data for the I\({}_{2}^{-}\) species. As anticipated, SOC also produces a lengthening of the bond in this case, but with an important difference with respect to I\({}_{2}\) when it comes to the role played by the spin-current densities. If the SOC-induced spin-current densities are neglected, the effect of SOC on the bond length is largely overestimated by SDFT: the lengthening of 0.047 and 0.036 Å obtained with PBE0 (LC and SC, respectively) is reduced to 0.024 and 0.018 Å upon inclusion of the spin-current densities within the SCDFT. The effect of SOC on bond lengthening is thus overestimated by 100% if spin-current densities are not included in the xc functional.
### Application to the I\({}_{2}\) Molecular Crystal
We now discuss the application of our approach to the I\({}_{2}\) orthorhombic molecular crystal. The structure of the crystal (in terms of its conventional lattice cell) is depicted in Fig. 1 A). Low temperature (5 K) experimental structural data are available[42] and are reported in Table 3, along with our optimized theoretical structural parameters (in terms of the primitive lattice cell). The hybrid PBE0 xc functional is used here both in a DFT and SCDFT framework. Absolute values are reported from the scalar relativistic (SR) calculation. The effect of SOC, \(\Delta_{\rm SOC}\), is highlighted in the last two columns.
Based on the experiments, the I-I bond length in the crystal, 2.717 Å, is larger than in the gas phase, 2.674 Å. Our theoretical calculations are consistent with this picture. Also in the crystal, SOC induces a lengthening of the bond, with an elongation that is nearly twice as large in SCDFT (0.018 Å) as it is in DFT (0.011 Å). Compared to the gas phase calculations, we observe that the SOC-induced bond elongation is increased by around 17%. The **a** and **b** lattice parameters (in the plane perpendicular to the nearest-neighbour I-I bond) are shortened by SOC, while the **c** lattice parameter is lengthened by SOC. An overall SOC-induced volume contraction by
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{I\({}_{2}\)} & \multicolumn{3}{c}{I\({}_{2}^{-}\)} \\ \cline{2-7} & SR & \multicolumn{2}{c}{\(\Delta_{\rm SOC}\)} & SR & \multicolumn{2}{c}{\(\Delta_{\rm SOC}\)} \\ & DFT & DFT & SCDFT & SDFT & SDFT & SCDFT \\ PBE (LC) & 2.822 & 0.034 & - & 3.399 & 0.064 & - \\ PBE0 (LC) & 2.792 & 0.011 & 0.026 & 3.339 & 0.047 & 0.024 \\ PBE (SC) & 2.694 & 0.019 & - & 3.315 & 0.049 & - \\ PBE0 (SC) & 2.663 & 0.008 & 0.015 & 3.252 & 0.036 & 0.018 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Equilibrium bond length of the I\({}_{2}\) and I\({}_{2}^{-}\) molecules, as computed with the PBE and PBE0 xc functionals, with both large-core (LC) and small-core (SC) pseudo-potentials. The scalar relativistic (SR) value is reported (i.e. obtained before inclusion of SOC) along with the effect of SOC (\(\Delta_{\rm SOC}\)). All values are in Å. The experimental gas phase bond length of I\({}_{2}\) at -80 \({}^{\circ}\)C, for comparison, is around 2.674 Å.[40; 41]
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Exp. & SR & \multicolumn{2}{c}{\(\Delta_{\rm SOC}\)} \\ \cline{4-5} & & & DFT & SCDFT \\ a/Å & 4.254 & 4.316 & -0.036 & -0.033 \\ c/Å & 9.796 & 9.711 & 0.035 & 0.041 \\ c/a & 2.302 & 2.250 & 0.0270 & 0.0272 \\ \(\gamma/^{\circ}\) & 113.584 & 115.550 & 0.104 & -0.004 \\ V/Å\({}^{3}\) & 162.482 & 163.223 & -2.241 & -1.857 \\ x/a & 0.1549 & 0.1589 & 0.002 & 0.002 \\ z/c & 0.1175 & 0.1186 & 0.000 & 0.000 \\ I-I/Å & 2.717 & 2.728 & 0.011 & 0.018 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Structural parameters of the I\({}_{2}\) orthorhombic molecular crystal computed with the PBE0 xc functional and SC pseudo-potentials. Absolute values are reported from the scalar relativistic (SR) calculation. The effect of SOC, \(\Delta_{\rm SOC}\), is reported for both a DFT and SCDFT treatment. Data are reported in the primitive basis of the lattice. Low-temperature (5 K) experimental values are taken from Ref. [42].
-2.241 Å\({}^{3}\) in DFT is decreased, in absolute value, to -1.857 Å\({}^{3}\) in SCDFT (corresponding to a volume contraction of about 1.1% with respect to the SR calculation).
### Application to the CsI\({}_{3}\) Crystal
The structure of the CsI\({}_{3}\) orthorhombic (space group P\(mcn\)) crystal is depicted in Fig. 1 B) with four Cs and twelve I atoms in the unit cell. The crystal exhibits linear I\({}_{3}\) molecules characterized by a small asymmetry in terms of the two I-I bond lengths, with a shorter one of 2.842 Å and a longer one of 3.038 Å at -160 \({}^{\circ}\)C.[43] The asymmetry may be effectively tuned by external stimuli, such as temperature or pressure.[44] Low temperature (113 K) experimental structural data are reported in Table 4, along with our optimized theoretical structural parameters. The hybrid PBE0 xc functional is used here both in a DFT and SCDFT framework. Absolute values are reported from the scalar relativistic (SR) calculation. The effect of SOC, \(\Delta_{\rm SOC}\), is highlighted in the last two columns.
The effect of SOC on the structure of this crystal is more articulated than it was for the I\({}_{2}\) molecular crystal. Indeed, while SOC still induces the elongation of I-I bonds, it shortens Cs-I interactions, particularly so within an SCDFT description including SOC-induced spin-current densities. This results in an overall volume contraction by 2.020 Å\({}^{3}\) from DFT and 4.357 Å\({}^{3}\) from SCDFT (corresponding to a volume contraction of about 0.6% with respect to the SR calculation). SOC is also found to contract the structure anisotropically, with the largest contraction occurring along the **a** lattice parameter. This is consistent with **a** being a crystallographic direction with no I-I bond components: indeed, the I-I bonds are oriented in the **bc** plane, as shown in Fig. 1 B).
## V Conclusion
We have presented analytical gradients of the total energy for local-density (LDA) and generalized-gradient (GGA) hybrid approximations to generalized Kohn-Sham spin-current density functional theory (GKS-SCDFT), including spin-orbit coupling (SOC). Our strategy has been implemented in a developmental version of the Crystal program. The numerical accuracy of the analytical forces has been validated against forces obtained by different finite difference schemes. Application to the I\({}_{2}\) and CsI\({}_{3}\) crystals has shown that terms in the exchange-correlation functional arising from SOC-induced spin-current densities, as accounted for within an SCDFT framework, lead to significant structural changes. These are reflected in a lattice expansion or contraction which accounts, in the case of the CsI\({}_{3}\) crystal, for more than half of the total effect due to SOC. Future efforts will be devoted to the inclusion of explicit contributions in density functional approximations from the modification of the curvature of the exchange-correlation hole by SOC-induced current densities, which must be taken into account at the level of meta-GGA approximations to GKS-SCDFT.
###### Acknowledgements.
This research has received funding from the Project CH4.0 under the MUR program "Dipartimenti di Eccellenza 2023-2027" (CUP: D13C22003520001).
|
2310.00851 | High-curvature, high-force, vine robot for inspection | Robot performance has advanced considerably both in and out of the factory,
however in tightly constrained, unknown environments such as inside a jet
engine or the human heart, current robots are less adept. In such cases where a
borescope or endoscope can't reach, disassembly or surgery are costly. One
promising class of inspection devices inspired by plant growth is "vine robots" that can
navigate cluttered environments by extending from their tip. Yet, these vine
robots are currently limited in their ability to simultaneously steer into
tight curvatures and apply substantial forces to the environment. Here, we
propose a plant-inspired method of steering by asymmetrically lengthening one
side of the vine robot to enable high curvature and large force application.
Our key development is the introduction of an extremely anisotropic, composite,
wrinkled film with elastic moduli 400x different in orthogonal directions. The
film is used as the vine robot body, oriented such that it can stretch over
120% axially, but only 3% circumferentially. With the addition of controlled
layer jamming, this film enables a steering method inspired by plants in which
the circumference of the robot is inextensible, but the sides can stretch to
allow turns. This steering method and body pressure do not work against each
other, allowing the robot to exhibit higher forces and tighter curvatures than
previous vine robot architectures. This work advances the abilities of vine
robots--and robots more generally--to not only access tightly constrained
environments, but perform useful work once accessed. | MijaΓl JaΓ©n Mendoza, Nicholas D. Naclerio, Elliot W. Hawkes | 2023-10-02T02:15:11Z | http://arxiv.org/abs/2310.00851v1 | # High-curvature, high-force, vine robot for inspection
###### Abstract
Robot performance has advanced considerably both in and out of the factory; however, in tightly constrained, unknown environments such as inside a jet engine or the human heart, current robots are less adept. In such cases where a borescope or endoscope can't reach, disassembly or surgery are costly. One promising class of inspection devices inspired by plant growth is "vine robots," which can navigate cluttered environments by extending from their tip. Yet, these vine robots are currently limited in their ability to simultaneously steer into tight curvatures and apply substantial forces to the environment. Here, we propose a plant-inspired method of steering by asymmetrically lengthening one side of the vine robot to enable high curvature and large force application. Our key development is the introduction of an extremely anisotropic, composite, wrinkled film with elastic moduli 400x different in orthogonal directions. The film is used as the vine robot body, oriented such that it can stretch over 120% axially, but only 3% circumferentially. With the addition of controlled layer jamming, this film enables a steering method inspired by plants in which the circumference of the robot is inextensible, but the sides can stretch to allow turns. This steering method and body pressure do not work against each other, allowing the robot to exhibit higher forces and tighter curvatures than previous vine robot architectures. This work advances the abilities of vine robots, and robots more generally, to not only access tightly constrained environments, but perform useful work once accessed.
## I Introduction
When a complicated mechanical system such as a jet engine, nuclear reactor, or human heart malfunctions, it often requires an internal inspection to diagnose or solve the problem. Some simple problems can be addressed by removing an external component or using a borescope or endoscope to peer into the machine. However, many internal problems require disassembly or surgery due to the inadequacy of current inspection tools, a costly and time intensive process.
A promising device for inspecting difficult-to-reach spaces inside machines, structures, and organisms is the vine robot: a soft, inflatable robot that grows from its tip like a plant [1]. The device is composed of a long thin tube of airtight film or fabric, inverted inside itself such that when pressurized it everts and extends from its tip. Unlike a borescope or endoscope, which is pushed from its base and can only traverse limited paths, the vine robot experiences no relative movement between its skin and the surroundings, allowing it to navigate tortuous, cluttered paths with multiple curvatures. These robots have been developed to less-invasively explore archaeological ruins [2] and collapsed buildings [3], and to perform endovascular surgery [4].
However, current vine robot designs are limited in how much force they can exert and how tight of a bend they can make. Most prior steering designs work by actively shortening one side of the robot so that it curves towards that side. The most common method is to use pneumatic artificial muscles [2, 5, 6, 7, 8]; however, these designs have limited curvature and stiffness. Similarly, pull tendons have been used for steering [9, 10, 11], but are limited by friction and implementation complexity. Another method is to use a rigid device inside the vine robot that can create tight curves at discrete locations [3, 12, 13], at the expense of adding a rigid component to an otherwise soft robot. Further, stiffening methods of varying complexity have been added to these designs to selectively control which parts of the robot bend [14, 15, 10].
An inherent limit to steering a vine robot by contracting one side is that the contractile method is antagonistic to the body pressure of the robot, reducing how much force it can exert on the environment. Most studies of vine robots have focused on how little force they exert [16], which would be beneficial for navigating delicate surgical environments, but is perhaps less useful for inspection tasks that may require opening a door or turning a lever. Although vine robots can exert an axial force at the tip [12], they have poor bending stiffness [13] without an active stiffening mechanism [15].
An alternative steering method inspired by the biological growth of plants is to lengthen one side of the robot and create a bend in the opposite direction. Plants do not have contractile muscles; instead, they selectively elongate the cells on one side of their body and rely on their internal turgor pressure to bend [17]. This steering method would be advantageous for a vine robot because its steering method
Fig. 1: Like a plant, the vine robot grows and bends by asymmetrically extending one side of its body. The keys to this design are an anisotropic skin that allows the robot to extend axially, and layer jamming locking bodies along its inside that prevent one side from extending to create bends. For scale, robot is 32 mm in diameter.
and internal pressure do not work counter to each other, allowing the robot to maintain both high body stiffness and high curvature at the same time. One complicated design using selective release of latches was implemented in a vine robot [1], but its curves were permanent.
In this work we present a method of reversible vine robot steering inspired by biological growth that allows for higher forces and tighter curvatures than existing designs. To do this we developed a highly-anisotropic, wrinkled, composite film that allows the robot to extend over 120% axially, but only 3% circumferentially. The robot selectively steers by stiffening one side with an internal jamming structure driven by the robot's body pressure, while allowing the other side to extend (Fig. 1). The key advantage of this design over other steering and jamming designs is that the body pressure of the robot both pressurizes the jamming structure and drives bending, allowing it to reversibly exhibit high curvature and exert high forces at the same time. This could be useful when inspecting difficult-to-reach spaces such as inside an aircraft engine. What follows is a more detailed description of the design (Sec. II), basic analytical models of its performance (Sec. III), fabrication methods (Sec. IV), robot demonstrations and characterizations of its performance (Sec. V), and concluding thoughts (Sec. VI).
## II Design
In this section, we describe the inspiration for the design from the mechanisms of plant bending as well as the two key components that enable the device-the anisotropic composite film and the integrated layer jamming structures (Fig. 2).
### _Plant-Inspired Bending by Lengthening_
Bending in plants is a complicated process mediated by various hormones. However, at a high level, the principles involved can serve as guides for bio-inspired robotic design. We briefly describe the mechanism of plant bending below, based on [17, 18].
Without contractile muscles, plants generally rely on extension to bend. Extension requires lengthening of the stiff cell walls that provide the structure of plants (Fig. 1). These cell walls are composed primarily of load-bearing cellulose fibers cross-linked with other organic molecules including hemicellulose and pectin. To bend, the protein expansin cleaves the crosslinks, allowing the cellulose fibers to slide past one another in the axial direction; this motion is driven by the high internal pressure of the plant. However, the circumferential structure is maintained, such that the plant does not swell radially.
The key principles that can be gleaned from this description to create plant-inspired bending are two-fold. First, a pressurized, tube-like structure will lengthen axially but not swell radially if it is formed from a highly anisotropic skin that is stiff in the circumferential direction. Second, this structure will bend if the lengthening is controlled to be different on different sides; varying the interlocking among overlapping, inextensible fibers controls lengthening. Next we describe our engineered solutions based on these principles.
### _Anisotropic Composite Film_
To achieve bending by elongation, we need an air-tight, highly anisotropic film with a high specific strength. The skin needs to be on the order of 20x stiffer in one direction than the other to achieve the desired steering curvature (see Sec. III (3)). To meet these requirements, we developed a composite material comprising one layer of a uniaxially pre-stretched elastomer laminated to an inelastic film. When the pre-stretch of the elastomer layer is relaxed, the inelastic film spontaneously buckles to form a uniformly wrinkled surface. The film is then formed into a tube, with the wrinkles running around the circumference of the tube. Such a spontaneous buckling phenomenon in thin films has been described [19, 20, 21], and explored for stretchable electronics [22], but not as a structural, anisotropic material.
### _Integrated Layer Jamming Structures_
To control the lengthening of different sides of an extensible tube, we need structures that can controllably lengthen. Inspired by the controllable interlocking among cellulose fibers, we use pressure-driven layer jamming structures attached along the length of the tube. The jamming structure is composed of several interlocking sheets [23, 24] of high-friction material inside a thin flexible tube. When the tube is compressed by an internal vacuum or external pressure, the layers jam against each other and can no longer easily slide past one another. When the tube is inflated, the layers are free to slide past each other. We note that various soft robotic systems have utilized jamming technologies [25], including one example using layer jamming structures to stiffen and create virtual joints for dynamic reconfiguration in vine robots [15].
### _Implementing Bending by Lengthening_
Our robot is composed of a tube of our anisotropic composite film with several independent jamming structures attached along the length of the inside of the main body. The main body is inverted, such that when pressurized, it will evert and lengthen from its tip. The body pressure compresses the jamming structures, so that they are in a default jammed state. The jamming structures can be independently inflated
Fig. 2: The robot bends by letting its uniaxially-wrinkled composite film stretch while one side of the robot is locked by a layer jamming locking body compressed by the internal body pressure of the robot.
to release them and cause a bend. Note that the jamming structures are arranged in series along each side of the robot to control shape. More jamming structures allow for more controllable degrees of freedom in the robot. Importantly, very little air needs to flow in and out of the jamming structures to control them, such that very small tubes can be used to control each (see Sec. IV-D). This contrasts with designs that use artificial muscles to control shape, wherein large volumes of air must flow in and out of the muscle, requiring large tubing. Instead, in the proposed design, the work of turning is done with the main body, freeing the steering mechanisms to do no work, essentially becoming brakes instead of actuators.
## III Modeling
In this section, we describe simple mathematical models of the proposed device that serve as tools for design.
### _Geometric Kinematics_
The first model describes how geometry and maximum strain relate to the maximum curvature of the robot body. For a given strain \(\epsilon\) on one side of the robot body, we can model the radius of curvature \(R\) of the resultant bend by looking at arc lengths \(l\) and \(l+\epsilon l\) (Fig. 3a).
\[l=R\theta \tag{1}\]
and
\[l+\epsilon l=(R+2r)\theta \tag{2}\]
where \(R\) is the radius of curvature, \(r\) is the radius of the robot, and \(\theta\) is the angle of curvature. Solving for \(R\) and \(\theta\) we find
\[R=2r/\epsilon \tag{3}\]
and
\[\theta=\epsilon l/(2r). \tag{4}\]
This model helps inform the amount of stretch required in the anisotropic film if the radial swelling and desired radius of curvature are set by the designer. We set the radial swelling to be less than 5% to approximately maintain the robot radius during growth, and we aim for a radius of curvature equal to the robot diameter to access highly constrained environments. In this case, the required axial strain is 100%, and the material should be at least 20x stiffer in the circumferential direction than the axial direction.
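As a concrete illustration of Eqs. (3) and (4), the following minimal sketch evaluates the bend geometry for an assumed strain; the numbers are placeholders chosen to match the 32 mm robot described in Sec. IV.

```python
# Minimal sketch of Eqs. (3) and (4): radius and angle of curvature from
# the strain on the outer side of the robot. All numbers are assumptions.
r = 0.016          # robot radius [m]
l = 0.15           # arc length of the bending segment [m]
eps = 1.0          # 100% axial strain on the outer wall

R = 2 * r / eps            # Eq. (3): radius of curvature [m]
theta = eps * l / (2 * r)  # Eq. (4): angle of curvature [rad]
print(f"R = {R * 1000:.1f} mm, theta = {theta:.2f} rad")
# With eps = 1, the bend radius equals the robot diameter, matching the
# design target stated above.
```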
### _Force Curvature Equilibrium_
The second model explores how the shape of the robot changes as it is pressurized; this can be calculated using statics with force and moment balances. We assume that the internal forces acting on the robot are the internal pressure \(P\) times area \(\pi r^{2}\), an elastic force
\[F_{e}=EA_{film}\epsilon l \tag{5}\]
from the axial elasticity of the body where \(E\) is the film's modulus and \(A_{film}\) is the cross sectional area of the film, and an angle dependent friction force
\[F_{\theta}=K_{\theta}e^{\mu\theta} \tag{6}\]
from capstan friction due to internal resistance in a deactivated jamming structure, where \(K_{\theta}\) and \(\mu\) are empirical constants.
If no jamming structure is activated, then the length of the robot \(l\) is simply
\[l=P\pi r^{2}/EA_{film}+l_{0} \tag{7}\]
where \(l_{0}\) is its depressurized length.
If one jamming structure is activated, we can do a moment balance around that side of the body, resulting in
\[P\pi r^{3}=(EA_{film}\epsilon l+K_{\theta}e^{\mu\theta})2r \tag{8}\]
for which there is no explicit solution for \(\epsilon\), \(R\), and \(\theta\).
This model helps inform the design of the robot. First, \(E\) should be minimized to reduce the required pressure for a given strain. Second, it informs how large a force the jamming structure on the inner curve of the robot must be able to withstand. We note that the axial force in the inner wall must equal that in the outer wall for static equilibrium, and thus from (8), we know that the force the jamming structure must withstand is \(EA_{film}\epsilon l+K_{\theta}e^{\mu\theta}\).
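Because (8) has no explicit solution, the equilibrium strain can be found numerically. The sketch below solves (8), with \(\theta\) tied to \(\epsilon\) through (4), by bracketed root finding; all material constants are assumed placeholder values, not measured properties of our robot.

```python
# Sketch: solving the implicit equilibrium of Eq. (8) for the strain eps,
# with theta given by Eq. (4). Constants are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

P, r, l = 40e3, 0.016, 0.15          # pressure [Pa], radius [m], length [m]
E, A_film = 2.0e6, 1e-4              # film modulus [Pa], film cross-section [m^2]
K_theta, mu = 0.05, 0.3              # capstan friction constants

def residual(eps):
    theta = eps * l / (2 * r)                                    # Eq. (4)
    rhs = (E * A_film * eps * l + K_theta * np.exp(mu * theta)) * 2 * r
    return P * np.pi * r ** 3 - rhs                              # Eq. (8)

eps_star = brentq(residual, 1e-6, 1.5)   # root bracketed between 0 and 150% strain
print(f"equilibrium strain eps = {eps_star:.3f}, "
      f"R = {2 * r / eps_star * 1000:.1f} mm")
```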
### _External Force Application_
The third model describes how much force the vine robot can apply in the direction perpendicular to its long axis by bending. When locking one side of the body, the robot can apply a perpendicular force \(F_{tangent}\) at its tip as it is pressurized. This can be solved by a moment balance as
\[F_{tangent}=\left(P\pi r^{3}-\left(EA_{film}\epsilon l+K_{\theta}e^{\mu\theta}\right)2r\right)/l. \tag{9}\]
For comparison, the corresponding force for a robot steered by a pneumatic artificial muscle, with muscle force \(F_{m}(\epsilon)=(\pi r_{m})P_{m}[a(1-\epsilon)^{2}-b]\) modeled by the ideal McKibben muscle equation [26, 8], would be
\[F_{tangent}=\left(2r\left(\pi r_{m}\right)P_{m}[a(1-\epsilon)^{2}-b]-P\pi r^{3}\right)/l. \tag{10}\]
Notice that in the proposed device, the force is directly related to the pressure inside the body, meaning more force can be attained by increasing the body pressure. In contrast, for the previous design that uses an artificial muscle, the body pressure is actually working against the muscle pressure, meaning that a stiffer, higher pressure body inhibits external force application.
Fig. 3: Model geometry (a) and forces (b).
## IV Fabrication
### _Anisotropic Composite Film_
Our composite anisotropic robot skin is a laminate of a pre-stretched membrane and an inelastic film and fabricated as follows. First, a rectangular piece of 52 \(\mu\)m thick thermoplastic polyurethane (TPU) is uniaxially pre-stretched up to 200% and fixed at its ends. Second, a rectangular piece of 42 \(\mu\)m thick Dyneema composite fabric (ultra-high-molecular-weight polyethylene fibers sandwiched between polyester film) is cut to the length of the pre-stretched TPU and coated with 65 \(\mu\)m thick pressure sensitive adhesive (PSA) transfer tape (3M F9460PC). Finally, the two layers are adhered together and allowed to contract, forming wrinkles.
### _Anisotropic Stretchable Tip-everting Body_
The body of the robot is composed of the anisotropic film described in Section IV-A, but formed into a tube with a different method, shown in Fig. 4. First, a 30 cm long, 74 mm diameter bladder of 52 \(\mu\)m thick TPU is formed by heat sealing. Second, the TPU bladder is placed around a scratched-up, double-walled, 90 cm long, 32 mm diameter tube of 50 \(\mu\)m thick low-density polyethylene (LDPE). Third, the LDPE tube is inflated to 40 kPa to pre-stretch the TPU bladder by 200%. Fourth, the pre-stretched TPU tube is coated in 65 \(\mu\)m thick PSA tape and a sheet of 42 \(\mu\)m thick Dyneema composite fabric. Finally, the LDPE tube is deflated and removed, leaving behind an axially wrinkled, 32 mm diameter anisotropic composite film tube.
### _Layer Jamming Locking Body_
The locking body is composed of two sheaves of interlocking film strips inside a flexible bladder, as shown in Fig. 4e. The two sheaves are made of five and six 5 mm wide, 170 mm long, and 0.127 mm thick Duralar sheets, respectively, each joined together at one end. The strips are interwoven for a length of 150 mm, placed inside a heat-sealed TPU bladder, and secured at each end to the TPU tube by PSA tape. The sheaves can slide past each other unless the TPU bladder is compressed by external pressure.
### _Assembly_
The locking bodies are attached to the inside of the robot body. To do this we invert and stretch the robot body then attach the stretched locking bodies along its length with PSA tape. Next we add 1.2 mm diameter silicone air tubes to the locking bodies to provide positive pressure to un-jam them. Finally, the robot body is inverted again for the final configuration shown in Fig. 4f.
Multiple locking bodies can be attached in series along the length of the robot body to enable compound curvatures.
## V Results
### _Component Characterization_
In this section, we describe characterization tests of the two key components of the system: the anisotropic skin material and the jamming structures.
#### V-A1 Anisotropic Skin Characterization
To characterize our anisotropic skin, we measured its elastic moduli through uniaxial testing on an Instron machine at a rate of 20 mm/min (five rectangular specimens, 6 mm x 50 mm, for both the X and Y directions). Fig. 5a shows the material before and after stretch in the X direction (along the robot body) and Fig. 5b shows the stress-strain curves. The results in the X direction indicate that the behavior of the material is composed of two regimes: a low-stiffness regime during unwrinkling and a high-stiffness regime once the wrinkles are taut. In contrast, the stress-strain curve in the Y direction presents one high-stiffness regime due to the lack of wrinkles in this direction of the anisotropic material. Importantly, we show a high anisotropic ratio: the elastic modulus in the X direction is 1.98 MPa, and
Fig. 4: Fabrication of the robot. (a, b) The TPU bladder is pre-stretched by an LDPE tube. (c) Dyneema composite fabric is attached to the TPU. (d) The LDPE tube is deflated and removed, leaving a wrinkled robot body. (e) The locking body is assembled of two sets of alternating strips of plastic and placed inside a TPU tube. (f) The locking bodies are attached to the inside of the robot body.
Fig. 5: Stress-strain curves of the anisotropic material in the longitudinal and transverse direction. (a, b) Longitudinal corresponds to the direction with presence of wrinkles and transverse direction is its orthogonal direction that lacks wrinkles. (c) Data showing that pre-stretch in the fabrication increased the ability of the anisotropic material to stretch in the longitudinal direction.
in the Y direction, 879.1 MPa, representing a ratio of over 400x. This exceeds the specification described in Sec. III-A.
To characterize the effect of pre-stretch of the TPU film during fabrication on the stretchability of the anisotropic material in the X direction, we fabricated specimens with varied pre-stretch. We measured the percent extension \(e=(L_{max}-L_{0})/L_{0}\), where \(L_{max}\) is the max length and \(L_{0}\) is the initial length (Fig. 5c). Increasing the pre-stretch for each specimen increased the ability to stretch nearly linearly.
#### Iii-A2 Jamming Structure Characterization
We built one jamming structure to characterize the effect of curvature on the critical force before slipping. The layer size for the jamming unit was 5x90 mm with a layer overlap of 30 mm. We applied tension to the jamming structure while it was held over varied 3D-printed arcs to change the angle of curvature, while maintaining 30 mm in contact with each arc (n=5 for each arc). As shown in Fig. 6, the critical force increased exponentially as the deflection angle increased. This trend suggests that the critical force on the inner side of the curve of the vine robot is higher than when the jamming unit is straight.
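The exponential fit in Fig. 6 corresponds to the capstan model of Eq. (6). The sketch below shows how such a fit can be computed; the data points are assumed stand-ins for our measurements, not the values plotted in Fig. 6.

```python
# Sketch: fitting the capstan-style model of Eq. (6), F = K * exp(mu * theta),
# to critical-force measurements. The data points are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

theta = np.deg2rad(np.array([0, 30, 60, 90, 120]))    # deflection angles [rad]
force = np.array([4.1, 5.0, 6.3, 7.9, 10.2])          # critical tension [N]

def capstan(theta, K, mu):
    return K * np.exp(mu * theta)

(K, mu), _ = curve_fit(capstan, theta, force, p0=(4.0, 0.4))
print(f"K = {K:.2f} N, mu = {mu:.2f}")
```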
### _Straight Vine Body Characterization_
Since the main body of our vine robot exhibits a new working principle based on its ability to stretch in the longitudinal direction, it is important to characterize the effect of actuation pressure on lengthening. We inflated the robot body over a range from 0 kPa to 60 kPa and measured the length at each increment. Note that no jamming structures were attached. Fig. 7 shows that increasing the actuation pressure increased the length linearly at first, before saturating as the wrinkles became stretched taut. The vine body was able to stretch to two times its original length; this elongation is slightly lower than that of a flat sheet with the same pre-stretch (Fig. 5), due to differences between stretching a flat sheet and an airtight tube.
Second, we characterized the bending stiffness of the vine body in terms of normalized torque (N.m/degree) at different pressure values. Ten actuation pressures were used to inflate the vine body. To obtain the stiffness at each pressure, we increased the lateral load at a set length, recording the vertical displacement. The stiffness was calculated using the slope of the torque-deflection angle curve and the known set length. Increasing the pressure increased the normalized stiffness of the inflatable beam, as expected, as shown in Fig. 7b.
### _Free Strain for Steering_
For navigation tasks without substantial forces resisting bending, characterizing the bending radius in free space is important. First, we formed the robot into two tight curvature shapes, showing a full circle and a tight "S" (Fig. 8, a-b). We found radii of curvature of 44 to 52 mm, allowing the robot to turn around a radius less than two body diameters. Second, we characterized our method of steering against gravity and compared it to a control using a previous vine robot steering method: external artificial muscles, fPAMs [9]. To characterize this behavior, the vine robot was set
Fig. 8: (a, b) The robot can create right, compound curvatures with a radius of curvature as tight as 1.4x its body diameter. (c) Curvature of the vine robot vs. actuation pressure is much higher with our lengthening method (top curve) compared to prior contracting methods using an artificial muscle. Tests with the artificial muscle were performed for three different robot body pressures (lower three curves).
Fig. 6: Increasing the curvature of the jamming structure, even when no vacuum pressure is applied, causes an increase in the critical tension force required to cause a slip. An exponential fit is plotted, suggested by the model from (8).
Fig. 7: (a) Elongation of vine body versus the actuation pressure. (b) Normalized stiffness-pressure curve of the vine body prior to attaching the jamming structures.
horizontally on a table and allowed to curve up away from the table. We recorded the radius of curvature of the vine robots on video as the actuation pressure was increased from 0 to 60 kPa. For our bending-by-lengthening vine robot, the jamming structure away from the table was inflated and relaxed. For the bending-by-shortening vine robot, the actuation pressure was the applied pressure in the artificial muscle, and the body pressure was varied. Fig. 8c shows that across all internal pressures tested for the control robot, our method of steering by lengthening exhibits between 3x and 6x more curvature.
### _Bending to Apply External Force_
We hypothesize that our steering method can exert substantial force, since the internal pressure causes the bending rather than resisting it, as is the case in previous work with artificial muscles. To test this, we compared how high a cantilevered robot using our method or a pneumatic muscle (fPAM [9]) could lift masses at a position 19 cm from its base. The force vs. deflection curve was measured for the three different actuator pressures shown in Fig. 9, with three trials (n = 3) each. The body pressure of the robot with the fPAM was kept constant at 10 kPa.
The displacement-load curves for the fPAM and our robot are shown in Fig. 9a and b, respectively. These results indicate that fPAMs could not produce meaningful displacement with forces above 1 N. Conversely, our method could exert forces up to 10 N. This suggests that steering by lengthening is better at exerting forces on the environment while steering than is possible with artificial muscles.
### _Demonstrations_
We performed three demonstrations of the robot's ability to traverse obstacles that could be found in an inspection task, as shown in Fig. 10. First (Fig. 10a), the 32 mm robot squeezed through a 25 mm gap, something that a rigid robot could not do. Second (Fig. 10b), the robot pushed a 200 g weight out of the way to access its target. And finally (Fig. 10c), the robot grew while making a compound curvature to reach an opening oriented parallel, but offset from, its starting point.
## VI Discussion
One limitation of our proposed anisotropic material is that it has to be fabricated by hand from two existing materials. Future work could explore scalable methods of manufacturing the material. A limitation of the proposed robot as built is its length. Future versions should be extended to increase the usefulness of the robot. Ideally, as the length grows, the number of layer jamming sections will increase to add degrees of freedom to the device. Although this will add more tubing lines, as noted earlier, these lines can be very small (1.2 mm diameter), such that many can be added without hindering robot performance. This contrasts with lines to pneumatic actuators, which need to have a larger diameter (usually around 4-6 mm) to allow enough flow, limiting the feasible number of degrees of freedom compared to the proposed design.
## VII Conclusion
We have presented an alternative way of steering a vine robot by lengthening one side, using an anisotropic body material and layer jamming. This plant-inspired method exhibits higher forces and curvatures than methods that work by shortening with artificial muscles. This is an important advancement toward the development of practical vine robots for exploring and accessing spaces that no other device can reach, such as inside machinery or through tortuous lumens in the human body. Additionally, the composite material offers new properties for the field of soft robotics generally.
|
2305.04881 | Skolem and Positivity Completeness of Ergodic Markov Chains | We consider the following Markov Reachability decision problems that view
Markov Chains as Linear Dynamical Systems: given a finite, rational Markov
Chain, source and target states, and a rational threshold, does the probability
of reaching the target from the source at the $n^{th}$ step: (i) equal the
threshold for some $n$? (ii) cross the threshold for some $n$? (iii) cross the
threshold for infinitely many $n$? These problems are respectively known to be
equivalent to the Skolem, Positivity, and Ultimate Positivity problems for
Linear Recurrence Sequences (LRS), number-theoretic problems whose decidability
has been open for decades. We present an elementary reduction from LRS Problems
to Markov Reachability Problems that improves the state of the art as follows.
(a) We map LRS to ergodic (irreducible and aperiodic) Markov Chains that are
ubiquitous, not least by virtue of their spectral structure, and (b) our
reduction maps LRS of order $k$ to Markov Chains of order $k+1$: a substantial
improvement over the previous reduction that mapped LRS of order $k$ to
reducible and periodic Markov chains of order $4k+5$. This contribution is
significant in view of the fact that the number-theoretic hardness of verifying
Linear Dynamical Systems can often be mitigated by spectral assumptions and
restrictions on order. | Mihir Vahanwala | 2023-05-08T17:27:13Z | http://arxiv.org/abs/2305.04881v4 | # Skolem and Positivity Completeness of Ergodic Markov Chains
###### Abstract
We consider the following decision problems: given a finite, rational Markov Chain, source and target states, and a rational threshold, does there exist an \(n\) such that the probability of reaching the target from the source at the \(n^{th}\) step is equal to the threshold (resp. crosses the threshold)? These problems are known to be equivalent to the Skolem (resp. Positivity) problems for Linear Recurrence Sequences (LRS). These are number-theoretic problems whose decidability has been open for decades. We present a short, self-contained, and elementary reduction from LRS to Markov Chains that improves the state of the art as follows: (a) We reduce to ergodic Markov Chains, a class widely used in Model Checking. (b) We reduce LRS to Markov Chains of significantly lower order than before. We thus get sharper hardness results for a more ubiquitous class of Markov Chains. Immediate applications include problems in modeling biological systems, and regular automata-based counting problems.
keywords: Ergodic Markov Chains, Reachability, Model checking, Linear Recurrence Sequences
## 1 Introduction
Markov Chains are a natural mathematical framework to describe probabilistic systems, such as those arising in computational biology. There is an extensive body of work on model checking Markov Chains: see [3] for a comprehensive set of references. Most of the focus has been on the verification of linear- and branching-time properties of Markov Chains through solving systems of linear equations, or linear programs. An alternative approach [1; 4; 7; 8] is to consider specifications on the state distribution at each time step, e.g., whether the probability of being in a given state at the \(n^{th}\) step is at least \(1/4\). Decidability in this setting is far more elusive: [1; 4] only present incomplete or approximate verification procedures, while [7; 8] owe their model-checking procedures to additional mathematical assumptions. The inherent difficulty of precisely solving decision problems in this fundamental setting is established in [2]: it is formally shown that verifying such specifications is tantamount to solving the Skolem/Positivity Problem for Linear Recurrence Sequences (LRS). The reduction therein is from LRS of order \(k\) to periodic Markov Chains of order
\(2k+4\). However, it is the ergodic Markov Chains (irreducible and _aperiodic_) that are widely assumed in practice, and it may well be that the hardness is somehow mitigated by the additional spectral structure. On the automata-theoretic front, [6] reduces LRS to counting words of length \(n\) in a regular language. The nature of the reduction means that the resulting automaton is necessarily periodic. _Aperiodicity_ in this context relates to LTL-definability, and the logical restriction could lead to a combinatorial breakthrough.
In this paper, we show that such breakthroughs that circumvent the original reduction are unlikely without significant progress in the underlying number theory itself. **Our novelty** is a reduction from order \(k\) LRS to ergodic Markov Chains of order \(k+1\). An interesting feature of our reduction is that it shows that hard instances exist for _every_ stationary distribution. In doing so, the translation of number-theoretic hardness for LRS (cf. [11]) to Markov Chains also becomes much sharper.
## 2 Markov Chain Preliminaries
**Notation:** Distributions are assumed to be column vectors. We use \(\mathbf{1}\) to denote the column vector whose entries are all \(1\), and \(\mathbf{I}\) to denote the identity matrix. We use \(\mathbf{0}\) to denote the zero column vector, and \(\mathbf{O}\) to denote the zero matrix. Superscript \(T\) denotes transposition. We use \(\mathbf{e_{i}}\) to denote the elementary column vector, i.e. the vector whose \(i^{th}\) entry is \(1\) and all other entries are \(0\), e.g. \(\mathbf{e_{1}}=\begin{bmatrix}1&0&\dots&0\end{bmatrix}^{T}\). We use \(m_{ij}^{(n)}\) as shorthand to denote the entry in the \(i^{th}\) row and \(j^{th}\) column of \(\mathbf{M}^{n}\), i.e. \(m_{ij}^{(n)}=\mathbf{e_{i}}^{T}\mathbf{M}^{n}\mathbf{e_{j}}\). When not specified, \(n=1\).
**Definition 1** (Markov Chain).: _A \(k\)-state Markov Chain over \(\mathbb{Q}\) is a matrix \(\mathbf{M}\in\mathbb{Q}^{k\times k}\), such that \(m_{ij}=\mathbf{e_{i}}^{T}\mathbf{M}\mathbf{e_{j}}\) denotes the probability of moving from state \(j\) to state \(i\). We have \(\mathbf{1}^{T}\mathbf{M}=\mathbf{1}^{T}\)._
**Definition 2** (Irreducible Markov Chain).: _A Markov Chain is called irreducible if every state has a path to every other state._
**Definition 3** (Periodicity).: _The period \(d_{i}\) of a state \(i\) of a Markov chain \(\mathbf{M}\) is defined as_
\[\mathsf{gcd}\{n\geq 1:\mathbf{e_{i}}^{T}\mathbf{M}^{n}\mathbf{e_{i}}>0\}\]
_State \(i\) is called aperiodic if \(d_{i}=1\). \(\mathbf{M}\) is said to be aperiodic iff all its states are aperiodic._
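As a quick illustration of Definition 3, the following sketch computes state periods for a small chain by taking the gcd of return times up to a cutoff (the gcd stabilizes once enough return times are included); the example matrix is an arbitrary 2-cycle.

```python
# Sketch: the period of each state (Definition 3) via the gcd of the
# return times n with m_ii^(n) > 0, up to an assumed cutoff n_max.
import numpy as np
from math import gcd
from functools import reduce

M = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # a 2-cycle: every state has period 2

def period(M, i, n_max=50):
    returns = [n for n in range(1, n_max + 1)
               if np.linalg.matrix_power(M, n)[i, i] > 0]
    return reduce(gcd, returns)

print([period(M, i) for i in range(M.shape[0])])   # -> [2, 2]
```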
**Definition 4** (Stationary distribution).: _A distribution \(\mathbf{s}\) is said to be a stationary distribution of a Markov Chain \(\mathbf{M}\), if \(\mathbf{Ms}=\mathbf{s}\)._
**Theorem 1** (Fundamental Theorem of (Ergodic) Markov Chains).: _A Markov chain \(\mathbf{M}\) is called ergodic if it is irreducible and aperiodic. An ergodic Markov Chain has a unique stationary distribution._
The following technical lemma will help us construct an Ergodic Markov Chain in our reduction.
**Lemma 2**.: _Let \(\mathbf{s}\) be a distribution with all entries strictly positive, and let \(\mathbf{S}=\begin{bmatrix}\mathbf{s}&\mathbf{s}&\dots&\mathbf{s}\end{bmatrix}\). A stochastic matrix \(\mathbf{M}\) is an ergodic Markov Chain with stationary distribution \(\mathbf{s}\) if and only if \(\mathbf{M}\mathbf{s}=\mathbf{s}\) and there exists \(\mathbf{D}\) such that_
* \(\mathbf{M}=\mathbf{S}+\mathbf{D}\)__
* \(\mathbf{D}\mathbf{S}=\mathbf{S}\mathbf{D}=\mathbf{O}\)__
* \(\mathbf{D}\) _has spectral radius less than_ \(1\)_, i.e._ \(\lim_{n\to\infty}\mathbf{D}^{n}=\mathbf{O}\)__
_In particular, we observe that the first two properties of \(\mathbf{D}\) imply that \(\mathbf{M}^{n}=\mathbf{S}+\mathbf{D}^{n}\) for \(n\geq 1\)._
Proof.: **Only If**:
Let \(\mathbf{M}\) be an ergodic Markov Chain with stationary distribution \(\mathbf{s}\). Then, by definition, \(\mathbf{M}\mathbf{s}=\mathbf{s}\), \(\mathbf{M}\mathbf{S}=\mathbf{S}\), and \(\lim_{n\to\infty}\mathbf{M}^{n}=\mathbf{S}\). We note that both \(\mathbf{M}\) and \(\mathbf{S}\) are stochastic matrices, and thus \(\mathbf{1}^{T}(\mathbf{M}-\mathbf{S})=\mathbf{0}^{T}\). Denote \(\mathbf{M}-\mathbf{S}\) by \(\mathbf{D}\). From the previous observation, it is clear that \(\mathbf{S}\mathbf{D}=\mathbf{O}\), since all the rows of \(\mathbf{S}\) are scaled multiples of \(\mathbf{1}^{T}\). We also have that \(\mathbf{S}=\mathbf{M}\mathbf{S}=\mathbf{S}^{2}+\mathbf{D}\mathbf{S}=\mathbf{S} +\mathbf{D}\mathbf{S}\), which means that \(\mathbf{D}\mathbf{S}=\mathbf{O}\).
We use the fact that \(\mathbf{D}\mathbf{S}=\mathbf{S}\mathbf{D}=\mathbf{O}\) and that \(\mathbf{S}^{n}=\mathbf{S}\) to observe
\[\mathbf{M}^{n}=(\mathbf{S}+\mathbf{D})^{n}=\mathbf{S}+\mathbf{D}^{n}\]
because all other terms in the binomial expansion are nullified. Now, since \(\lim_{n\to\infty}\mathbf{M}^{n}=\mathbf{S}\), it forces \(\lim_{n\to\infty}\mathbf{D}^{n}=\mathbf{O}\).
**If**:
The above argument is reversible. If instead we are given the three properties of \(\mathbf{D}\) to begin with, we can conclude that \(\mathbf{M}\mathbf{S}=\mathbf{S}\) and \(\lim_{n\to\infty}\mathbf{M}^{n}=\mathbf{S}\), which is precisely the definition of \(\mathbf{M}\) being an ergodic Markov Chain with stationary distribution \(\mathbf{s}\).
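The decomposition of Lemma 2 is also easy to check numerically. The following sketch (illustrative only; the chain is an arbitrary example) verifies that \(\mathbf{D}=\mathbf{M}-\mathbf{S}\) satisfies \(\mathbf{D}\mathbf{S}=\mathbf{S}\mathbf{D}=\mathbf{O}\) and that \(\mathbf{M}^{n}=\mathbf{S}+\mathbf{D}^{n}\).

```python
# Numerical sanity check of Lemma 2 (a sketch, not part of the proof):
# for an ergodic column-stochastic M with stationary distribution s,
# D = M - S satisfies DS = SD = O and M^n = S + D^n.
import numpy as np

M = np.array([[0.5, 0.3],
              [0.5, 0.7]])                 # column-stochastic, ergodic
w, v = np.linalg.eig(M)
s = np.real(v[:, np.argmax(np.real(w))])   # eigenvector for eigenvalue 1
s = s / s.sum()                            # stationary distribution, Ms = s
S = np.outer(s, np.ones(2))                # S = [s s]
D = M - S

print(np.allclose(D @ S, 0), np.allclose(S @ D, 0))
n = 7
print(np.allclose(np.linalg.matrix_power(M, n),
                  S + np.linalg.matrix_power(D, n)))
```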
## 3 Overview of Problems
**Problem 1** (Ergodic Markov Chain Reachability).: _Given an ergodic Markov Chain \(\mathbf{M}\in\mathbb{Q}^{k\times k}\) and \(r\in\mathbb{Q}\), the Ergodic Markov Chain Reachability problem asks whether there exists an \(n\in\mathbb{N}\) such that the probability of returning to state \(1\) at the \(n^{th}\) step is exactly \(r\), i.e. \(\mathbf{e_{1}}^{T}\mathbf{M}^{n}\mathbf{e_{1}}=r\)._
**Problem 2** (Threshold Ergodic Markov Chain Reachability).: _Given an ergodic Markov Chain \(\mathbf{M}\in\mathbb{Q}^{k\times k}\) and \(r\in\mathbb{Q}\), the Threshold Ergodic Markov Chain Reachability problem asks whether for all \(n\in\mathbb{N}\), the probability of returning to state \(1\) at the \(n^{th}\) step is at least \(r\), i.e. \(\mathbf{e_{1}}^{T}\mathbf{M}^{n}\mathbf{e_{1}}\geq r\)._
We relate these problems to long-standing open problems on Linear Recurrence Sequences.
**Definition 5** (Linear Recurrence Sequence a.k.a. LRS).: _An LRS of order \(k\) over \(\mathbb{Q}\) is an infinite sequence \(\langle u_{n}\rangle_{n=0}^{\infty}\) satisfying a recurrence relation_
\[u_{n+k}=\sum_{i=0}^{k-1}a_{i}u_{n+i}\]
_for all \(n\in\mathbb{N}\). The recurrence relation is given by the tuple \((a_{0},\ldots,a_{k-1})\in\mathbb{Q}^{k}\) with \(a_{0}\neq 0\). The sequence is uniquely determined by the starting values \((u_{0},\ldots,u_{k-1})\in\mathbb{Q}^{k}\)._
**Problem 3** (Skolem Problem for LRS).: _Given an LRS \(\langle u_{n}\rangle_{n=0}^{\infty}\) (via the recurrence relation and starting values), the Skolem problem asks whether there exists an \(n\in\mathbb{N}\) such that \(u_{n}=0\)._
**Problem 4** (Positivity Problem for LRS).: _Given an LRS \(\langle u_{n}\rangle_{n=0}^{\infty}\) (via the recurrence relation and starting values), the Positivity problem asks whether for all \(n\in\mathbb{N}\), \(u_{n}\geq 0\)._
The Skolem Problem is known to be decidable for LRS of order up to \(4\), see [9; 12]. Very recently, there have been conditional decidability results for LRS of order \(5\)[5]. The Positivity Problem is decidable up to order \(5\); decidability at order \(6\) would entail significant number-theoretic breakthroughs [11]. If we restrict ourselves to the class of _simple_ LRS (no repeated characteristic roots), then Positivity is decidable up to order \(9\), see [10].
We are now ready to state our main reduction, which is agnostic to the spectral nature of the LRS, and minimal with respect to the order.
**Theorem 3** (Main Result).: _Problem 3 reduces to Problem 1, while Problem 4 reduces to Problem 2. Moreover, applying the reduction to an LRS of order \(k\) results in an ergodic Markov Chain of order \(k+1\)._
It is well known that for any matrix \(\mathbf{M}\), the sequence \(\langle m_{ij}^{(n)}\rangle_{n=1}^{\infty}\) of entries in the \(i^{th}\) row and \(j^{th}\) column of the powers of \(\mathbf{M}\) forms an LRS. This is most easily seen through the Cayley-Hamilton Theorem: any matrix \(\mathbf{M}\) satisfies its characteristic polynomial equation, i.e. if \(p(\lambda)=\det(\mathbf{M}-\lambda\mathbf{I})\), then \(p(\mathbf{M})=\mathbf{O}\).
Thus, we trivially have that Problems 3 and 4 respectively reduce to Problems 1 and 2.
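For concreteness, the following sketch verifies this fact for an arbitrary \(2\times 2\) stochastic matrix: the diagonal entries of its powers satisfy the order-2 recurrence given by its characteristic polynomial.

```python
# Sketch: by Cayley-Hamilton, the entries m_11^(n) of M^n satisfy the
# linear recurrence given by the characteristic polynomial of M. The
# example matrix is an arbitrary choice.
import numpy as np

M = np.array([[0.5, 0.3],
              [0.5, 0.7]])
# characteristic polynomial: lambda^2 - (tr M) lambda + det M = 0
tr, det = np.trace(M), np.linalg.det(M)

u = [np.linalg.matrix_power(M, n)[0, 0] for n in range(10)]
# check u_{n+2} = tr * u_{n+1} - det * u_n
print(all(np.isclose(u[n + 2], tr * u[n + 1] - det * u[n])
          for n in range(8)))
```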
One can define the "off-diagonal" variants of Problems 1 and 2, i.e. queries on the probability of reaching state \(1\) from state \(2\). The above equivalences hold for the off-diagonal variants with an almost identical proof. We discuss the difference after presenting the reduction.
## 4 The Reduction
The key idea is to use Lemma 2 to construct an ergodic Markov Chain via the decomposition \(\mathbf{M}=\mathbf{S}+\mathbf{D}\). Given an LRS \(\langle u_{n}\rangle_{n=0}^{\infty}\) of order \(k\) over \(\mathbb{Q}\), we
will choose \(\mathbf{S},\mathbf{D}\in\mathbb{Q}^{(k+1)\times(k+1)}\), \(r=s_{11}=s_{1}\), a rational \(\eta\) and a large rational \(\rho\) in such a way that for all \(n\geq 1\), \(d_{11}^{(n)}=\eta u_{n}/\rho^{n}\).
Since \(m_{11}^{(n)}=s_{1}+d_{11}^{(n)}\), deciding the Skolem (resp. Positivity) problem reduces to checking whether there exists \(n\) such that \(m_{11}^{(n)}=r=s_{1}\) (resp. for all \(n\), \(m_{11}^{(n)}\geq r=s_{1}\)), which is precisely the reduction we want.
We assume without loss of generality that none of the initial terms of the LRS are \(0\), and that \(u_{0}>0\).
To begin with, we choose some arbitrary probability distribution
\[\mathbf{s}=\begin{bmatrix}s_{1}&s_{2}&\dots&s_{k+1}\end{bmatrix}^{T}\in \mathbb{Q}^{k+1}\]
such that all the entries of \(\mathbf{s}\) are strictly positive. \(\mathbf{S}\) denotes the square matrix, each of whose columns is \(\mathbf{s}\).
Let \(\mathbf{A}\in\mathbb{Q}^{k\times k}\) be the companion matrix of the given LRS, i.e.
\[\mathbf{A}=\begin{bmatrix}0&1&0&\dots&0\\ 0&0&1&\dots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\dots&1\\ a_{0}&a_{1}&a_{2}&\dots&a_{k-1}\end{bmatrix}\]
and let \(\mathbf{u}=\begin{bmatrix}u_{0}&u_{1}&\dots&u_{k-1}\end{bmatrix}^{T}\). We have that \(u_{n}=\mathbf{e_{1}}^{T}\mathbf{A}^{n}\mathbf{u}\).
Now, we choose \(\eta\in\mathbb{Q}\), \(\eta>0\) such that \(\eta u_{0}=1-s_{1}\). Let \(\mathbf{F}\in\mathbb{Q}^{k\times k}\) be the invertible diagonal matrix such that
\[\mathbf{F}\begin{bmatrix}1-s_{1}\\ -s_{2}\\ \vdots\\ -s_{k}\end{bmatrix}=\eta\mathbf{u}\]
i.e. \(\mathbf{F}=\mathrm{diag}(1,-\eta u_{1}/s_{2},\dots,-\eta u_{k-1}/s_{k})\). Observe that the top left entry in both \(\mathbf{F}\) and \(\mathbf{F}^{-1}\) is \(1\). Now, let \(\mathbf{B}=\mathbf{F}^{-1}\mathbf{A}\mathbf{F}\).
Let \(\mathbf{C}\in\mathbb{Q}^{(k+1)\times(k+1)}\) be the matrix
\[\begin{bmatrix}\mathbf{B}&\mathbf{0}\\ -\mathbf{1}^{T}\mathbf{B}&0\end{bmatrix}\]
We note, by a simple induction, that for \(n\geq 1\)
\[\mathbf{C}^{n}=\begin{bmatrix}\mathbf{B}^{n}&\mathbf{0}\\ -\mathbf{1}^{T}\mathbf{B}^{n}&0\end{bmatrix}\]
By construction \(\mathbf{1}^{T}\mathbf{C}=\mathbf{0}^{T}\), and hence \(\mathbf{SC}=\mathbf{O}\). Let \(\mathbf{D}=\frac{1}{\rho}(\mathbf{C}-\mathbf{CS})\). The choice of \(\rho\) is large enough to ensure that:
* The entries of \(\mathbf{S}+\mathbf{D}\) are non-negative.
* The spectral radius of \(\mathbf{D}\) is less than 1.
By Lemma 2, this makes the stochastic matrix \(\mathbf{M}=\mathbf{S}+\mathbf{D}\) an ergodic Markov Chain with stationary distribution \(\mathbf{s}\), since, indeed, \(\mathbf{DS}=\mathbf{SD}=\mathbf{O}\).
We now observe that for \(n\geq 1\), \(\mathbf{D}^{n}=\frac{1}{\rho^{n}}\mathbf{C}^{n}(\mathbf{I}-\mathbf{S})\). We see this inductively: up to the factor \(1/\rho^{n+1}\), \((\mathbf{C}-\mathbf{CS})(\mathbf{C}^{n}-\mathbf{C}^{n}\mathbf{S})=\mathbf{C}^{n+1}-\mathbf{C}^{n+1}\mathbf{S}-\mathbf{CSC}^{n}+\mathbf{CSC}^{n}\mathbf{S}=\mathbf{C}^{n+1}-\mathbf{C}^{n+1}\mathbf{S}\), since \(\mathbf{SC}=\mathbf{O}\) implies \(\mathbf{SC}^{n}=\mathbf{O}\).
To complete the proof, we now compute the top-left entry of \(\mathbf{D}^{n}\), \(n\geq 1\):
\[\mathbf{e_{1}}^{T}\mathbf{D}^{n}\mathbf{e_{1}} =\frac{1}{\rho^{n}}\mathbf{e_{1}}^{T}\mathbf{C}^{n}(\mathbf{I}- \mathbf{S})\mathbf{e_{1}}\] \[=\frac{1}{\rho^{n}}\mathbf{e_{1}}^{T}\begin{bmatrix}\mathbf{B}^ {n}&\mathbf{0}\\ -\mathbf{1}^{T}\mathbf{B}^{n}&0\end{bmatrix}\begin{bmatrix}1-s_{1}\\ -s_{2}\\ \vdots\\ -s_{k}\\ -s_{k+1}\end{bmatrix}\] \[=\frac{1}{\rho^{n}}\mathbf{e_{1}}^{T}\mathbf{B}^{n}\begin{bmatrix} 1-s_{1}\\ -s_{2}\\ \vdots\\ -s_{k}\end{bmatrix}\ \ (\text{note the change in dimension of }\mathbf{e_{1}})\] \[=\frac{1}{\rho^{n}}(\mathbf{e_{1}}^{T}\mathbf{F}^{-1})\mathbf{A}^ {n}\left(\mathbf{F}\begin{bmatrix}1-s_{1}\\ -s_{2}\\ \vdots\\ -s_{k}\end{bmatrix}\right)\] \[=\frac{\eta}{\rho^{n}}\mathbf{e_{1}}^{T}\mathbf{A}^{n}\mathbf{u}\] \[=\frac{\eta u_{n}}{\rho^{n}}\]
which is precisely a scaled version of our LRS.
We have thus established that
\[m_{11}^{(n)}=s_{1}+\frac{\eta u_{n}}{\rho^{n}}\]
Let \(r=s_{1}\). The original LRS is a YES instance of the Skolem Problem (resp. the Positivity Problem) if and only if there exists an \(n\) such that \(m_{11}^{(n)}=r\) (resp. for all \(n\), \(m_{11}^{(n)}\geq r\)). These are precisely the YES instances of the reachability problems we defined.
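As a sanity check of the construction (not part of the paper), the following self-contained sketch instantiates the reduction for the Fibonacci LRS; the distribution \(\mathbf{s}\), the choice \(\rho=10\), and all numeric values are illustrative assumptions.

```python
# Numerical sanity check of the reduction on a toy example (illustrative
# choices throughout; not the paper's code).
import numpy as np

# LRS of order k = 2: u_{n+2} = u_{n+1} + u_n with u_0 = u_1 = 1 (Fibonacci).
k = 2
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])                  # companion matrix
u = [1.0, 1.0]
for _ in range(10):
    u.append(u[-1] + u[-2])

s = np.array([0.4, 0.3, 0.3])               # arbitrary positive distribution
S = np.outer(s, np.ones(k + 1))             # every column of S equals s
eta = (1 - s[0]) / u[0]                     # eta * u_0 = 1 - s_1

F = np.diag([1.0, -eta * u[1] / s[1]])      # F (1-s_1, -s_2)^T = eta * u
B = np.linalg.inv(F) @ A @ F

C = np.zeros((k + 1, k + 1))
C[:k, :k] = B
C[k, :k] = -B.sum(axis=0)                   # makes every column of C sum to 0

rho = 10.0                                  # large enough for both conditions
D = (C - C @ S) / rho
M = S + D

assert np.all(M >= -1e-12) and np.allclose(M.sum(axis=0), 1.0)  # stochastic
assert np.allclose(M @ s, s)                                    # stationary s

for n in range(1, 11):
    Mn = np.linalg.matrix_power(M, n)
    assert np.isclose(Mn[0, 0], s[0] + eta * u[n] / rho**n)
print("verified m_11^(n) = s_1 + eta*u_n/rho^n for n = 1..10")
```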
## 5 Discussion: The off-diagonal variants
In this variant, we query \(m_{12}^{(n)}\) instead of \(m_{11}^{(n)}\). The proof proceeds identically, except for the choice of \(\eta\) and diagonal matrix \(\mathbf{F}\). Here
\[\mathbf{F}\begin{bmatrix}-s_{1}\\ 1-s_{2}\\ \vdots\\ -s_{k}\end{bmatrix}=\eta\mathbf{u}\]
We choose \(r=s_{1}\), but now \(\eta=-s_{1}/u_{0}<0\) (so that \(\eta u_{0}=-s_{1}\)), and thus
\[\mathbf{F}=\text{diag}(1,\eta u_{1}/(1-s_{2}),-\eta u_{2}/s_{3},\ldots,-\eta u_ {k-1}/s_{k})\]
In the same way as above, we get \(d_{12}^{(n)}=\frac{\eta u_{n}}{\rho^{n}}\).
\[m_{12}^{(n)}=s_{1}+\frac{\eta u_{n}}{\rho^{n}}\]
In the previous case, \(d_{11}^{(n)}\) and \(u_{n}\) had the same sign; here, however, \(d_{12}^{(n)}\) and \(u_{n}\) have opposite signs. Thus, Positivity is equivalent to \(m_{12}^{(n)}\leq s_{1}\) for all \(n\), whereas Skolem is still equivalent to \(m_{12}^{(n)}=s_{1}\) for some \(n\).
Note the difference in the inequalities between the diagonal and off-diagonal cases: the two variants seem to have some inherent structural differences. The trivial justification for this choice of inequalities is that, regardless of the choice of \(r\), for \(n=0\) the probability of being in the source state is \(1\), and can never be less than \(r\). Similarly, for \(n=0\) the probability of being in a state different from the starting state is \(0\). On a philosophical note, the diagonal variant can be thought of as a safety property (e.g. a fraction of the population will invariably be in the desirable state we started off in), whereas the violation of the off-diagonal variant can be thought of as a liveness property (e.g. at some point, a large fraction of the population will be in an active state). We do not have more technical insights (e.g. what if the problem were defined only for \(n\geq 1\)?): if any, they are likely to be beyond the scope of the simple reduction we present here.
|
2304.02101 | MadEye: Boosting Live Video Analytics Accuracy with Adaptive Camera
Configurations | Camera orientations (i.e., rotation and zoom) govern the content that a
camera captures in a given scene, which in turn heavily influences the accuracy
of live video analytics pipelines. However, existing analytics approaches leave
this crucial adaptation knob untouched, instead opting to only alter the way
that captured images from fixed orientations are encoded, streamed, and
analyzed. We present MadEye, a camera-server system that automatically and
continually adapts orientations to maximize accuracy for the workload and
resource constraints at hand. To realize this using commodity pan-tilt-zoom
(PTZ) cameras, MadEye embeds (1) a search algorithm that rapidly explores the
massive space of orientations to identify a fruitful subset at each time, and
(2) a novel knowledge distillation strategy to efficiently (with only camera
resources) select the ones that maximize workload accuracy. Experiments on
diverse workloads show that MadEye boosts accuracy by 2.9-25.7% for the same
resource usage, or achieves the same accuracy with 2-3.7x lower resource costs. | Mike Wong, Murali Ramanujam, Guha Balakrishnan, Ravi Netravali | 2023-04-04T19:58:20Z | http://arxiv.org/abs/2304.02101v1 | # MadEye: Boosting Live Video Analytics Accuracy with Adaptive Camera Configurations
###### Abstract
Camera orientations (i.e., rotation and zoom) govern the content that a camera captures in a given scene, which in turn heavily influences the accuracy of live video analytics pipelines. However, existing analytics approaches leave this crucial adaptation knob untouched, instead opting to only alter the way that captured images from fixed orientations are encoded, streamed, and analyzed. We present MadEye, a camera-server system that automatically and continually adapts orientations to maximize accuracy for the workload and resource constraints at hand. To realize this using commodity pan-tilt-zoom (PTZ) cameras, MadEye embeds (1) a search algorithm that rapidly explores the massive space of orientations to identify a fruitful subset at each time, and (2) a novel knowledge distillation strategy to efficiently (with only camera resources) select the ones that maximize workload accuracy. Experiments on diverse workloads show that MadEye boosts accuracy by 2.9-25.7% for the same resource usage, or achieves the same accuracy with 2-3.7\(\times\) lower resource costs.
## 1 Introduction
Building on the steady growth in camera deployments and advances in deep neural networks (DNNs) for vision tasks (e.g., classification or detection) [3, 17, 40, 59, 63], live video analytics pipelines have become prevalent. These pipelines operate by continually streaming live video feeds from cameras to processing servers (either edge [4, 7, 69, 74, 100] or cloud [28, 51, 60, 110]), where DNNs are run on incoming frames to produce low latency and highly accurate results for different application queries, i.e., combinations of task, DNN, and object(s) of interest. Key use cases include autonomous driving, footfall tracking, traffic coordination, business analytics, among others [1, 5, 8, 20, 22, 26, 35, 36, 82, 83].
Given their practical importance, much research has been devoted to improving both the resource efficiency and accuracy of live video analytics pipelines. Existing solutions include accuracy-aware tuning of inference configuration, encoding, or appearance knobs [29, 52, 77, 110], filtering out redundant content [21, 28, 48, 60], using cheaper model variants [4, 84], improving job scheduling [74, 88, 110], and so on. However, all of these works assume that the content observable by cameras is unchangeable, and instead can only be encoded, streamed, or analyzed differently. In essence, they focus on optimizing _fixed, preset_ camera deployments.
Unfortunately, the deployment of cameras for analytics is itself a daunting task for operators. Subject to practical constraints (e.g., mounts, power sources), for a scene of interest, operators must determine the number of cameras to deploy and the orientation (i.e., combination of rotation and zoom factor) to use for each. There exist many possible orientations, and altering these decisions requires manual intervention. Yet we find that doing so can be highly fruitful: across different workloads and scenes, dynamically adapting orientations over time can yield accuracy improvements of 21.3-35.3% (without inflating resource usage) compared to even the _best_ fixed-orientation scheme. Further, these wins cannot be reaped by simply deploying more fixed cameras to simultaneously cover more orientations: most orientations are 'best' for short total periods of time (median of 6 sec for each 10-min video), drastically hindering the efficiency of such an approach, especially in the resource-constrained settings where video analytics are run [4, 69, 89, 6].
An alternative strategy is to leverage PTZ (pan-tilt-zoom) cameras that offer software libraries for tuning orientations, thereby providing a logical approach to capturing the above wins. Indeed, despite existing for nearly two decades, PTZ camera popularity has surged in recent years (global market value of $3 billion in 2020 [16]) largely due to declining price points that now rival fixed-camera costs [86, 44]. However, multiple challenges complicate their use for live analytics (Β§2.3). First, queries are highly sensitive, in different ways, to orientation knobs due to their diverse goals (e.g., tasks), inherent model biases (how models perceive scenes and objects), and scene dynamism (where objects are located) - optimizing orientation tuning for one workload can forego up to 25.1% of the potential median accuracy wins for another. Second, the 'grid' of orientations is large, but the selection space is sparse, with steep accuracy drops from the best orientation(s) to the others at any time. Third, the best orientation changes rapidly, e.g., 85% of changes occur in \(\leq\)1 sec since the last change.
To overcome these issues, we present **MadEye**, a camera-server system that automatically and continually adapts PTZ camera orientations to maximize analytics accuracy for the scene and workload at hand. The key insight behind MadEye is that the speed at which commodity PTZ cameras can
change orientations (i.e., upwards of 600\({}^{\circ}\) per sec with near-instantaneous digital zoom) far outpaces the rate at which applications require analytics results (typically 1-30 frames per second (fps), i.e., every 33-1000 ms). This, in turn, allows MadEye to eschew typical non-stationary multi-armed bandit strategies [57; 73; 98] that rely purely on previous explorations to determine orientation importance, in favor of a more informed strategy based on _current_ scene content. Concretely, in each timestep (33 ms for 30 fps) and subject to network/compute resource availability, MadEye cameras explore multiple orientations and quickly determine which will maximize workload accuracy and warrant transmission to the backend for full inference. However, realizing this strategy in practice involves addressing several technical challenges.
First, to enable fast camera-side evaluation of the importance of different orientations, MadEye adopts a custom knowledge distillation [43] strategy with edge-grade, ultra-compressed NN models. To cope with their potentially limited predictive power, we task them with modeling query sensitivities only to the point of accurately _ranking_ orientations in terms of impact on workload accuracy - precise results are left to backend servers. Even with this relaxed framing, MadEye must employ several optimizations to achieve sufficient rank accuracy. Most notably, MadEye trains edge models using a common abstraction - detection for objects of interest - that reflects the minimum information needed to capture sensitivities and biases for popular tasks. Task-specific semantics need not be baked into edge models, and instead can be incorporated by post processing the generated results.
Edge models are _continually_ trained on MadEye's backend using both the latest and historical workload results, with the goal of mitigating data skew towards recently-selected orientations (given uncertainties in what will be selected next). Importantly, to balance resource costs and accuracy, each edge model covers only a single query but all orientations. The intuition is that, while model results can exhibit substantial divergence [6; 10; 30; 56], feature-level variance between orientations for the same scene is considerably narrower, often smaller than that in typical pre-training datasets [62]. Accordingly, MadEye freezes pre-trained feature extraction layers across queries, caching those weights on cameras, thereby lowering retraining and (downlink) model update overheads.
Second, we develop a novel, on-camera search strategy to explore orientations with the goal of capturing the best one (accuracy-wise) at each timestep. Three key empirical observations guide our search: (1) despite rapid temporal shifts, transitions between best orientations move slowly in the spatial dimension, (2) the best orientations are typically spatially clustered, and (3) neighboring orientations (with overlapping regions) exhibit highly correlated trends in efficacy.
Building on these observations, MadEye explores a flexible shape of contiguous orientations at each timestep, and considers shifting only towards neighboring orientations whose efficacy can be robustly predicted. Decisions to keep/remove orientations are governed by both response rates (and the corresponding time budgets) and _relative_ comparisons of recent edge model results. For the former, MadEye uses an efficient heuristic to determine path feasibility in the time budget (a variant of the NP-Hard Traveling Salesman Problem [42]). For the latter, MadEye gracefully trades off exploration (i.e., shape size) for network usage (i.e., sending more orientations for backend inference) to bound the effects of edge model errors and maximize accuracy for the required response rate.
To evaluate MadEye, we developed the first (to our knowledge) dataset that supports tuning rotation and zoom at each time instant by splicing out scenes of interest from publicly available 360\({}^{\circ}\) videos. Using this dataset, we evaluated MadEye on a variety of network conditions and workloads that incorporate multiple vision DNNs and query tasks: classification, counting (per-frame and aggregate), and detection. Across these settings, MadEye boosts accuracy by 2.9-25.7% compared to an oracle fixed-orientation strategy without inflating resource usage; these wins are within 1.8-13.9% of the oracle dynamic strategy. Framed differently, MadEye achieves those accuracy boosts with 2-3.7\(\times\) lower resource footprints than the best strategy of using (multiple) fixed-orientation cameras. Moreover, MadEye outperforms recent PTZ tracking algorithms [85; 90] (by 2.0-3.8\(\times\)) and multi-armed bandit solutions [97] (by 5.8\(\times\)). We will release MadEye and our datasets. This work does not raise any ethical issues.
## 2 Background and Motivation
We start with an overview of live video analytics deployments (Β§2.1). We then show measurements highlighting the importance of dynamically adapting camera orientations to workloads and scenes (Β§2.2), and the challenges associated with realizing those benefits in practice (Β§2.3).
### Overview of Live Video Analytics
In a live video analytics deployment, one or more cameras continually stream their video frames to servers for processing. Servers can range from distant (but powerful) cloud machines [88; 110] to nearby (but weaker) edge boxes [4; 69; 74], and are tasked with running queries on the incoming frames to support different applications. Queries most often involve running deep neural network (DNN) inference on individual frames, with the goals of locating and characterizing various objects in the scene, e.g., an intersection. Moreover, the queries for different applications can vary in terms of the tasks they perform, the objects they consider, the DNNs they use (different architectures and weights), and the response rates they require. For instance, footfall tracking for business
analytics will count people passing through an area, with response rates at 1 fps or less [8]. In contrast, smart driving or sports analytics applications will detect the specific locations of cars or people, with response rates upwards of 30 fps [83].
In this paper, we focus on the following four query tasks (and their corresponding accuracy metrics) that have been prevalent in recent literature [19, 28, 53, 54, 60] and real-world deployments [67, 74]. We note that these query types also serve as the building blocks for complex applications and other tasks, e.g., tracking queries rely on object detections.
* **Binary classification**: asks if any objects of interest are present in a frame. Accuracy across the video is measured as the fraction of frames with the correct binary decision.
* **Counting**: counts the number of objects of interest in each frame. Accuracy for each frame is measured as the percent difference between the returned and ground truth counts.
* **Detection**: finds the precise bounding box coordinates for objects of interest in a frame. Accuracy per frame is measured using mAP [33], which evaluates the overlap between each returned box and its ground truth counterpart.
* **Aggregate counting**: counts the _unique_ objects of interest that appear in a scene. Accuracy per video is the percent difference between the returned and ground truth counts.
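For concreteness, a minimal sketch of the two simplest metrics above (binary classification and per-frame counting) follows; the zero-count handling is our assumption, since the text does not specify it.

```python
from typing import List

def binary_classification_accuracy(preds: List[int], truths: List[int]) -> float:
    # Fraction of frames where the presence/absence decision is correct.
    correct = sum((p > 0) == (t > 0) for p, t in zip(preds, truths))
    return correct / len(preds)

def counting_accuracy(pred: int, truth: int) -> float:
    # Per-frame accuracy as 1 minus the percent difference from ground truth.
    if truth == 0:
        return 1.0 if pred == 0 else 0.0    # assumed convention for empty frames
    return max(0.0, 1.0 - abs(pred - truth) / truth)
```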
Over time, an analytics deployment will face diverse workloads to run on the video feeds it manages, each varying in query composition and size [6, 74]. Yet, the overarching goals persist: subject to resource constraints, deliver low-latency results (at the desired response rate) with maximal accuracy.
### Opportunities with Tuning Camera Orientations
Existing optimizations for video analytics (Β§6) assume that a stationary camera's orientation (rotation and zoom), and thus what it ingests from the target scene, is fixed and incapable of being adapted. To quantify the significance of this restriction, we run experiments on our 50-video dataset and workloads that incorporate 4 model architectures, the 4 tasks from Β§2.1, and people/cars; Β§5.1 details both. Each video supports tuning of rotations (150\({}^{\circ}\) horizontally by 30\({}^{\circ}\), 75\({}^{\circ}\) vertically by 15\({}^{\circ}\)) and zoom (1-3x); we consider other granularities in Β§5.4.
For each video, we obtained per-frame (15 fps here) results for each workload by running its queries on all 75 orientations. We then define accuracy relative to the _best_ orientation for each frame, i.e., the orientation that maximized per-frame accuracy for the workload. For instance, for counting, an orientation's accuracy at any time is its object of interest count divided by the max count across all orientations at that time. Using this methodology, we compare three schemes: (1) _one time fixed_ which selects the best orientation at time=0 and keeps it throughout the video, (2) _best fixed_ which uses oracle knowledge to pick the best single orientation that maximizes average workload accuracy for the video, and (3) _best dynamic_ which selects the best orientation per frame in the video.
As shown in Figure 1, adapting camera orientations brings substantial accuracy improvements without inflating resource usage, i.e., the same number of frames are transmitted and processed: median boosts with _best dynamic_ are 30.4-46.3% over _one time fixed_ and 21.3-35.3% over the _best fixed_ scheme that is an upper bound for any fixed-orientation approach. Figure 2 breaks down these results by query task. Notably, the importance of adapting orientations grows as query types become more specific. For instance, for YOLOv4 and cars, median accuracy improvements over _best fixed_ are 1.2%, 13.4%, and 16.4% for binary classification, counting, and detection, respectively. The reason is that coarser queries mask certain differences across orientations, e.g., if many objects of interest are present in the scene, any orientation that catches a single object will deliver max accuracy for binary classification; counting, on the other hand, will favor the orientation with the most objects.
**Primer on PTZ cameras.** Pan-tilt-zoom (PTZ) cameras present an intuitive mechanism to realize such adaptation. PTZ cameras come in two forms, traditional [31, 78] and electronic (ePTZ) [46, 79], both of which support software tuning of pan (horizontal rotation), tilt (vertical rotation), and zoom. The key difference between the two variants is in their tuning mechanisms. Traditional PTZ cameras embed physical motors to rotate at well over 360\({}^{\circ}\)-per-second and optically zoom (i.e.,
Figure 1. Accuracy for 5 representative workloads when using varying degrees of orientation adaptation. Bars list results for the median video, with error bars spanning 25-75th percentiles.
Figure 2. Accuracy wins from adapting orientations (compared to _best fixed_) grow as query specificity grows. Bars list median videos, with error bars for 25-75th percentiles. We exclude agg. counting+cars due to limits of multi-object trackers (Β§5.1).
without reducing resolutions). In contrast, ePTZ cameras capture wide field-of-views and employ near-instantaneous digital rotation and zoom to focus on specific parts of the scene. ePTZ cameras change orientations faster and are cheaper, but also cover smaller rotation areas (150\({}^{\circ}\) vs. 360\({}^{\circ}\)) and degrade image quality by using digital zoom. PTZ cameras rival traditional ones in on-board compute resources, with recent offerings housing edge-grade GPUs [71].
### Challenges
Despite the potential benefits of adapting camera orientations using PTZ cameras, three fundamental challenges complicate this approach in practice. We describe them in turn.
**C1: rapid changes in best orientation over time.** As shown in Figure 3, due to the dynamic nature of video content, switches in best orientation are frequent: 85% of switches occur in \(\leq\)1 sec since the last switch.
**C2: diverse workload sensitivities to zoom and rotation at each time.** At any point in time, the best orientation can vary across individual queries and workloads. Figure 4 illustrates this, showing that adapting orientations to maximize accuracy for one workload can result in foregoing 3.2-25.1% of the potential (median) accuracy wins for other workloads.
Figure 5 highlights this at a query level, showing that different models, objects, and tasks can all influence orientation selections. Model discrepancies influence what can be discerned in the scene during inference and under what orientations. For instance, with people counting, selecting best orientations for a query using YOLOv4 will miss out on 26.3% median accuracy wins for the same task using SSD (even when trained on the same dataset). In contrast, tasks dictate the specificity needed in the collected results, e.g., optimizing for counting people (with YOLOv4) rather than aggregate people counting with the same model foregoes 10.2% of potential wins. Lastly, objects of interest govern the importance of regions based on object densities, as well as the features used for and difficulty in detecting relevant objects (smaller objects are typically tougher to discern [80]). Thus, unsurprisingly, optimizing for a YOLOv4 people counting query would forego 13.3% of wins if the query considered cars instead.
Figure 6 provides example screenshots to illustrate the benefits and harm of changing orientations. Importantly, tuning orientations does not simply bring new objects into field of view, and instead plays a large role in a model's ability to detect objects that were already visible.
**C3: massive (but sparse) search space.** The orientation space exhibits substantial sparsity in the spatial and temporal dimensions. For the former, among the 75 orientations at any time, only 1 (or several, with ties) is best, with steady dropoff in accuracy to the others, e.g., median dips of 4.8% and 20.7% from the best to 2nd and 5th best. For the latter, most orientations are best for short total times in each video, with median durations of 5-6 sec across workloads (Figure 7).
## 3. Design
Figure 8 shows the end-to-end operation of MadEye. The main insight behind MadEye is to leverage fast PTZ rotation speeds to explore many orientations in each timestep (i.e., between consecutive results due at the target frame rate), and then select, based on their _current_ content, the one(s) that maximize workload accuracy under resource constraints. The idea is to limit the guesswork compared to prior search algorithms that rely only on past orientation efficacy (Β§5.3).
As in other video analytics systems [28, 52, 60, 74], users register queries with a backend agent (on an edge or cloud server), specifying a target scene, as well as a model to use, object(s) of interest, and a task. To operate under camera compute constraints, MadEye then trains edge-compatible (i.e.,
Figure 4. Workloads exhibit different sensitivity to orientations. Results apply the best orientations for workload \(X\) (legend) to workload \(Y\) (x axis), and plot the accuracy wins (over best fixed for \(Y\)) that are lost from not using the best orientations for workload \(Y\). Bars list medians; error bars for 25-75th percentiles.
Figure 5. Applying the best orientations for a base query of {YOLOv4, counting, people} to a query \(Y\) that modifies a single element in the base query; we compare the accuracy wins (over best fixed) to those when using the best orientations for \(Y\). Bars list medians; error bars for 25-75th percentiles.
Figure 3. Shifts in the best orientation are frequent. Results list a PDF (binned by 1 sec) of time between switches in best orientation across all videos and workloads.
highly compressed) models (Β§3.1), not to replace the original (more accurate) query models (as in typical knowledge distillation [43]), but instead to _approximately_ extract information of importance in a frame for each query. In other words, approximation models are explicitly designed to estimate the inherent sensitivities of each query (C2 from Β§2.3).
To cope with the large space of orientations and rapid shifts in best orientations (C1 and C3 from Β§2.3), MadEye employs an efficient on-camera search strategy (Β§3.3) that explores as many potentially fruitful orientations as possible while avoiding fps violations for results. The camera then runs approximation models on all captured orientations in each timestep and uses the results to (1) _rank_ the orientations in terms of their likelihood to maximize overall workload accuracy, and (2) determine the set of orientations to consider in the next time step. The highest ranked orientations that the network can support are sent to the backend for full workload inference; new results are used to continually adapt approximation models to the current scene (Β§3.2).
### Designing Approximation Models
The primary objective of MadEye's approximation models is to quantify the _relative_ importance of orientations for the queries in a workload. However, this requires capturing the sensitivity of each query to different orientation and scene dynamics, subject to camera compute constraints. Given the potential complexity of workload queries, we eschew noisy (and limited) vision features based on local gradients [25; 66] in favor of knowledge distillation with compressed models [43]. However, we alter this approach in several ways to favorably balance ranking accuracy and resource efficiency.
We design approximation models using a common abstraction that reflects the minimum amount of information needed to sufficiently rank orientations. The key idea is that the core elements of query sensitivity pertain to how models find and characterize objects, rather than how tasks post-process those results. Thus, MadEye's approximation models are structured purely as ultra-lightweight detectors for objects of interest; this strategy also avoids tricky development of compressed models per task. Concretely, we use the smallest variant of the edge EfficientDet family [94], EfficientDet-D0 (3.9M parameters, >150 fps on a Jetson edge GPU). More complex detectors could be used, but cameras possess limited GPU memory [60; 74], and inference delays negatively influence the degree to which MadEye can explore orientations (Β§3.3).
**Why a detector?** Two alternatives we considered for the approximation models are to directly estimate object counts in an image, and to directly output rank orderings across multiple images. However, we empirically observed high error rates with both. This is largely because such approaches can only relate the presence of features to objects via a global regression over an entire image (or multiple images), failing to leverage local regressions via bounding box predictions to boost precision. While image-level DNN object counters
Figure 6. Screenshots showing the (diverse) impact of rotation and zoom for different queries. Each column shows two images from the same time instant that use either different rotation or zoom. On the bottom row, green arrows show newly captured objects, while red arrows show objects that are newly missed after the orientation change. Left: rotation brings a new object into the scene, helps detect 2 previously-visible objects, but loses a previously-detected object. Middle: zooming in helps detect new people. Right: after switching models, the same zoom from the middle column actually reduces the number of detected people.
Figure 7. Most orientations are best for short total times in each video. Results consider all orientation-video pairs per workload.
do exist [91, 104, 109, 113], they focus on large crowds of people. In contrast, there are often few objects of interest in an orientation at any time (Β§2.3), making rank orderings extremely sensitive to small errors in count prediction.
MadEye uses one approximation model per query, rather than per workload or per object. Though the latter options would be more efficient, we avoid per-workload and per-object approximation models as we (like others [6]) find that different DNNs can exhibit wildly varying response profiles to even the same object classes due to object-independent factors like scale and resolution [45]. Moreover, DNNs trained on very different datasets are known to inherit different algorithmic biases [10, 30, 56, 72, 92, 102].
However, each approximation model is configured to support _all_ orientations for two reasons. First, the number of orientations is large (Β§2.3), making per-orientation approximation models impractical with on-camera GPUs. Second, neighboring orientations exhibit substantial overlap, and since we only consider orientations for a given scene, divergence in background content, lighting, shadows, etc. is minimal. Indeed, we measured the perceptual distance [81] of images (LPIPS) from different orientations in the same scene to be 0.30. For context, the same values for the popular MS-COCO and Pascal VOC datasets used to successfully pre-train many vision models (including EfficientDet) are 0.46 and 0.41.
**Estimating workload accuracies.** MadEye post-processes the generated bounding boxes from all approximation models to compute _predicted workload accuracies_ for orientation ranking. To do this, MadEye follows the per-task accuracy metrics from Β§2.1, but computes per-orientation predicted accuracy in a relative manner compared to the other orientations under test. For instance, counting computes the ratio of object counts between each orientation and the max among the set of explored orientations at that timestep, while detection expands this to incorporate object area sizes (as per mAP score). Lastly, aggregate counting modulates count scores to favor less explored orientations (that may have unseen objects).
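For counting queries, this relative scoring can be sketched as follows (a simplification we provide for illustration; the detection and aggregate-counting variants additionally fold in box areas and exploration history as described above):

```python
def predicted_counting_accuracy(counts: dict) -> dict:
    """counts: orientation -> objects detected by the approximation model."""
    best = max(counts.values())
    if best == 0:
        return {o: 0.0 for o in counts}     # no objects found anywhere
    return {o: c / best for o, c in counts.items()}
```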
### Continually Training Approximation Models
MadEye servers train a new approximation model for each new query, with the goals of being fast (since training blocks deployment) and accurate (in ranking orientation importance). Initial training uses a small set of 1000 historical images from the target scene that is then labeled (online) using the DNN in the registered query; label generation takes 7-90 sec depending on the DNN. However, to accelerate this process, MadEye begins with a version of EfficientDet that is pre-trained on Pascal VOC, and freezes both the backbone network and the BiFPN layers responsible for feature extraction and fusion. Only weights for the final 3 bounding box and class prediction layers are fine-tuned to mimic the target query's behavior. The rationale is that model features progressively move from general (e.g., textures, gradients) to task-specific (e.g., object prediction) as a function of layer depth [107, 12, 108]. Initial fine-tuning lasts for 40 epochs (\(\approx\)25 mins).
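The sketch below illustrates the freeze-and-fine-tune pattern described above. Since EfficientDet-D0's exact attribute names vary by implementation, we substitute a torchvision SSDlite detector purely for illustration; the learning rate and the toy image/target (standing in for a teacher-labeled frame) are our own placeholders.

```python
import torch
import torchvision

# Stand-in for a Pascal-VOC-pretrained EfficientDet-D0 (an assumption).
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
for p in model.backbone.parameters():
    p.requires_grad = False                 # freeze feature extraction

optimizer = torch.optim.Adam(
    [p for p in model.head.parameters() if p.requires_grad], lr=1e-4)

model.train()
images = [torch.rand(3, 320, 320)]          # a frame from the target scene
targets = [{"boxes": torch.tensor([[10.0, 10.0, 80.0, 80.0]]),
            "labels": torch.tensor([1])}]   # "labels" from the query's DNN
loss = sum(model(images, targets).values()) # torchvision returns a loss dict
optimizer.zero_grad(); loss.backward(); optimizer.step()
```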
Even after initial fine-tuning, approximation models may fail to generalize to changing scene dynamics [93], leading to degrading accuracy. To cope with such data drift, MadEye employs continual learning (every 400-500 ms) to update the model's weights using the latest query results on orientations sent to the server for full workload inference. While continual learning has been applied to edge video analytics [4, 68], MadEye requires several alterations from prior efforts. The main challenge is that within each retraining window, samples are only available for the orientations that MadEye's camera-side component recently visited and deemed worthy of backend inference. Since orientations are typically best for short total times (Β§2.3), there is often severe imbalance in the orientations covered by new training samples. For instance, with perfect rankings, the average 2-minute window sees only 9.3% of orientations get sent to the backend. This can result in overfitting to certain orientations, and catastrophic forgetting [55] for others that may soon be ranked highly.
To deal with this, MadEye retrieves the most recent historical training samples from each orientation and uses them to balance the dataset. As we will discuss in Β§3.3, we find that orientation shifts are often spatially localized, with changes to distant orientations happening over longer timescales. Thus, MadEye pads the data samples for neighboring orientations (up to 3 away from the latest one) to match the count for the most popular orientation in the retraining window.
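A rough sketch of this balancing step (our interpretation of the description above; the helper `neighbors_within` and the per-orientation sample stores are assumptions):

```python
def balance_window(recent, history, neighbors_within, latest, max_hops=3):
    """recent/history: orientation -> list of samples; assumes recent is non-empty."""
    target = max(len(v) for v in recent.values())       # most popular orientation
    batch = {o: list(v) for o, v in recent.items()}
    for o in neighbors_within(latest, max_hops):        # up to 3 hops from latest
        have = batch.setdefault(o, [])
        deficit = target - len(have)
        if deficit > 0:
            have.extend(history.get(o, [])[-deficit:])  # pad with recent history
    return [sample for samples in batch.values() for sample in samples]
```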
Figure 8. Overview of MadEyeβs end-to-end workflow.
### Exploring and Ranking Orientations
The primary goal of MadEye's on-camera component is to efficiently explore (a subset of) the large orientation space to capture the best orientation for each timestep. Realizing this is challenging for three reasons. First, MadEye only has visibility into the orientations that it has recently explored, but other orientations can change in content and importance at any time. Second, even among recently explored orientations, MadEye only has access to coarse results from approximation models (i.e., that accurately capture only relative importance) for most. Third, each timestep is not only dedicated to exploration, but also (1) running approximation models on explored orientations, (2) encoding and shipping select orientations to the server, and (3) running the workload on shipped images.
Rather than relying on previous (and potentially stale) observations at each orientation (Β§5.3), MadEye opts for a more informed strategy guided by 3 empirical observations.
* Although best orientations change rapidly over time (Β§2.3), those changes are far slower in the spatial dimension. Figure 9 illustrates this, showing that the median and 90th percentile spatial distance between successive best orientations are 30\({}^{\circ}\) and 63.5\({}^{\circ}\), which pertains to shifts spanning only 1 or 2 orientations in our default grid (Β§5.1).
* The best performing orientations (accuracy-wise) at any time are often spatially clustered (Figure 10). Concretely, across our dataset, the 75th percentile distance separating orientations in the top \(k\) at each timestep is 1 and 2 orientations for \(k\) values of 2 and 6.
* Accuracy for neighboring orientations often shifts in tandem. Indeed, as shown in Figure 11, the correlation coefficient for accuracy changes in direct neighbors is 0.83; intuitively, this value shrinks to 0.75 when considering neighbors 2-hops away (that exhibit less content overlap).
Taken together, these findings motivate a search strategy that considers a flexible shape of contiguous orientations at each timestep, and swaps out underperforming orientations in the previous shape only for neighboring ones whose trends we can robustly predict for the next timestep. We start with a description of the algorithm that does not account for zoom or resource constraints and later incorporate those elements. Common themes are: only relative comparisons of approximation model results are used, we leverage all outputs from those models (including bounding boxes), and search decisions are entirely local (i.e., on cameras) to remain rapid.
MadEye begins with a rectangular seed shape that reflects the largest coverable area in the time budget, thereby maximizing early exploration; we reset to this shape any time zero objects of interest are found in a shape. The corresponding orientations are captured and analyzed with approximation models to compute a predicted workload accuracy for each (Β§3.1). After sending the top \(k\) orientations to the server for workload inference, MadEye must use these prior results to determine the set of orientations to explore in the next timestep.
To do this, MadEye labels each orientation from the last timestep with a value that indicates the likelihood of being fruitful in the _next_ timestep. Concretely, we combine the exponentially weighted moving averages from recent (10) timesteps for (1) any computed predicted accuracy values, and (2) the deltas between those values. Weighted averages are used to remain robust to inconsistencies in DNN results across consecutive frames [6, 76], which is especially pronounced with MadEye's compressed approximation models.
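A minimal sketch of this labeling (the smoothing constant and the additive value-plus-trend combination are one plausible instantiation, not specified by the text):

```python
def ewma(values, alpha=0.4):
    acc = values[0]
    for v in values[1:]:
        acc = alpha * v + (1 - alpha) * acc
    return acc

def orientation_label(predicted_accs):
    """predicted_accs: recent predicted-accuracy values, oldest first."""
    accs = predicted_accs[-10:]                       # last (10) timesteps
    deltas = [b - a for a, b in zip(accs, accs[1:])] or [0.0]
    return ewma(accs) + ewma(deltas)                  # current value + trend
```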
Using those labels, MadEye must now determine which orientations to remove and add for the upcoming timestep. For this, MadEye sorts orientations into an ordered list based on their label values. Using pointers at the head \(H\) (largest label) and tail \(T\) (smallest label) of the list, MadEye iteratively
Figure 11. Correlation in accuracy changes across orientations separated by \(N\) hops. Results list Pearson Correlation Coefficients and cover 3 representative videos and workloads (15 fps).
Figure 10. Top ranked orientations are often spatially clustered. Results use 15 fps, are aggregated across all workloads and videos, and show the max distance between orientations in the top \(k\) ranked orientations at each timestep.
Figure 9. Spatial distance between successive best orientations is small, with most transitions between neighboring orientations. Results aggregate across all videos and workloads for 15 fps.
compares orientations by asking: should we remove the orientation at \(T\) in favor of adding a neighbor to \(H\)? Concretely, MadEye computes the ratio of label values for \(H\)/\(T\). If (1) that ratio exceeds a threshold (indicating a substantial disparity in the potential of \(H\) and \(T\)), (2) \(H\) has neighbors not already in the shape, and (3) removing \(T\) would not break contiguity, we remove the orientation at \(T\) and increment the pointer. The process repeats by considering the addition of another neighbor for \(H\), this time using a larger threshold to account for the additional uncertainty of adding more neighbors. \(H\) is decremented (moving to the next-best orientation) once no further neighbor of the current \(H\) can be added, and the process ends when not even one neighbor of the new \(H\) can be added.
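An illustrative sketch of this head/tail loop; the thresholds and the helpers `addable_neighbors`, `removable` (contiguity check), and `pick_neighbor` (the bounding-box-guided choice described in the next paragraph) are assumptions on our part.

```python
def update_shape(shape, labels, base_threshold=1.5, growth=1.2):
    order = sorted(shape, key=lambda o: labels[o], reverse=True)  # head first
    head, tail = 0, len(order) - 1
    threshold = base_threshold
    while head < tail:
        h, t = order[head], order[tail]
        candidates = addable_neighbors(h, shape)      # neighbors of h not in shape
        if not candidates:
            head += 1                                 # fall back to next-best h
            continue
        if labels[h] >= threshold * labels[t] and removable(t, shape):
            shape.remove(t); tail -= 1                # drop weakest orientation
            shape.add(pick_neighbor(h, candidates))   # add a promising neighbor
            threshold *= growth                       # extra caution per swap
        else:
            break                                     # disparity too small
    return shape
```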
For each iteration that results in a neighbor addition for \(H\), MadEye selects among \(H\)'s neighbors by analyzing the bounding boxes that its approximation models generated in the last timestep. For each candidate neighbor, we compute the ratio of two values: the normalized distances from the candidate to the center of \(H\) and to the centroid of all bounding boxes in \(H\). Values \(<\)1 indicate lower chances of \(H\)'s objects moving to the candidate in the next timestep. We repeat this process for all other orientations in the last shape that the candidate exhibits any non-zero overlap with. Candidate neighbor scores are computed as the weighted sum of these ratios (weights according to degree of overlap), and the candidate with the max score is selected.
**Reachability and path selection.** The search algorithm thus far ignores whether a PTZ camera can sufficiently cover the selected shape in a given time budget. Formally, the shape of orientations can be represented as a fully-connected undirected graph with edge weights pertaining to the time taken to move between two adjacent orientations (given a rotation speed). Our goal is to determine whether the shape is coverable in a given time budget, and if so, what is the shortest path. The paths between orientations satisfy the triangle inequality property [95], so this can be modeled as a variant of the NP-Hard Traveling Salesman Problem (TSP) [14]. Given our tight time budgets, MadEye employs the Minimum Spanning Tree (MST) heuristic [42], but optimizes it to minimize online delays. In particular, since our orientation grid is static, we precompute pairwise distances and the entire MST ahead of time. Online, for a given shape, we quickly extract and perform a preorder walk on the corresponding subgraph to get the shortest path. This reduces the heuristic to linear complexity (in orientations); each path computation takes 14 \(\upmu\)s, and the resultant paths are within 92% of optimal. Upon failure, MadEye greedily removes the orientation with the lowest potential (that does not break contiguity) and rechecks reachability.
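A sketch of the precomputed-MST heuristic (assuming `mst` holds the grid's spanning-tree adjacency lists and `dist` the pairwise travel times, both computed offline; we also assume the MST restricted to a contiguous shape stays connected, returning False otherwise):

```python
def preorder_path(shape, mst, root):
    path, stack, seen = [], [root], set()
    while stack:
        node = stack.pop()
        if node in seen or node not in shape:
            continue
        seen.add(node); path.append(node)
        stack.extend(mst[node])                 # spanning-tree neighbors
    return path

def coverable(shape, mst, dist, root, budget):
    path = preorder_path(shape, mst, root)
    cost = sum(dist[a][b] for a, b in zip(path, path[1:]))
    return len(path) == len(shape) and cost <= budget
```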
**Balancing search size and network/compute delays.** MadEye pipelines its exploration through orientations with the running of approximation models on each one. However, network transmission to and workload inference on the backend do not overlap with orientation exploration. The reason is that transmissions are governed by global ranks across _all_ orientations explored in each timestep. Thus, in each timestep, we face a tradeoff between exploring more orientations and sending more orientations to the backend.
MadEye resolves this tension based on the expected difficulty for its approximation models to accurately rank the considered orientations, which in turn governs the risk associated with exploring more orientations (and sending fewer to get ground truth results). Intuitively, scenarios where the considered orientations are projected to contribute similar accuracies pose the biggest difficulty for approximation models (as the gaps between ranks shrinks). MadEye determines the right balance by first selecting a target number of frames to send according to the training accuracy for approximation models (provided by the backend) and the variance in predicted accuracy values in the last timestep, e.g., 85% training accuracy and 25% variance results in sending at least 2 frames. MadEye then computes a target shape size for exploration, accounting for network transmission delays (harmonic mean of past 5 transfers [106]), backend compute delays, camera rotation speeds, and approximation model inference delays.
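A back-of-the-envelope sketch of this budgeting (the accuracy/variance-to-frames mapping is elided, and every constant is an illustrative assumption):

```python
def target_shape_size(fps, n_send, past_transfers, backend_delay,
                      rotate_delay, infer_delay):
    budget = 1.0 / fps
    recent = past_transfers[-5:]                      # harmonic mean of past 5
    transfer = len(recent) / sum(1.0 / t for t in recent)
    explore_budget = budget - n_send * transfer - backend_delay
    # Rotation is pipelined with on-camera inference, so each orientation
    # costs roughly the slower of the two.
    per_orientation = max(rotate_delay, infer_delay)
    return max(1, int(explore_budget / per_orientation))
```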
**Handling zoom.** After selecting the set of orientations to visit, the search algorithm must determine the zoom factor to use for each one. The challenge is that past accuracies are insufficient for determining zoom fidelity as MadEye cannot know what objects are being missed by not zooming in/out. Instead, we rely on bounding boxes from approximation models to determine the risk of zooming in. When an orientation is added to the shape, we start at the lowest zoom factor to gain visibility into its whole content. At each timestep, we compute the average distance between each bounding box and the centroid of all boxes; smaller distances indicate more clustering and less risk of zooming in. These values are compared with the area covered by each zoom factor to select one, and MadEye automatically zooms out after 3 seconds to avoid missing newly entering objects in the orientation.
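The clustering test can be sketched as follows (the half-extent feasibility rule and the omitted 3-second zoom-out timer are our simplifications):

```python
def pick_zoom(box_centers, zoom_extents):
    """box_centers: [(x, y)]; zoom_extents: zoom -> (width, height) covered."""
    if not box_centers:
        return min(zoom_extents)                      # zoom fully out
    cx = sum(x for x, _ in box_centers) / len(box_centers)
    cy = sum(y for _, y in box_centers) / len(box_centers)
    spread = sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                 for x, y in box_centers) / len(box_centers)
    feasible = [z for z, (w, h) in zoom_extents.items() if spread < min(w, h) / 2]
    return max(feasible) if feasible else min(zoom_extents)
```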
**Transmitting images.** At the end of each timestep, MadEye must transmit select images to the server for workload inference. Unlike standard streaming, MadEye sends disjoint sets of images from each orientation's video stream. To keep bandwidth costs low, MadEye maintains a list of the last image shared for each orientation, and employs a functional encoder [34] that computes deltas relative to that image.
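A toy sketch of the per-orientation reference bookkeeping (the real system uses a functional encoder [34]; the int16 cast to avoid unsigned wrap-around is our detail):

```python
import numpy as np

last_sent = {}  # orientation -> last transmitted frame

def encode_for_send(orientation, frame):
    ref = last_sent.get(orientation)
    delta = frame if ref is None else frame.astype(np.int16) - ref.astype(np.int16)
    last_sent[orientation] = frame
    return delta          # full frame on first send, delta afterwards
```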
## 4. Implementation
MadEye's core components are written in 9.1k lines of Python code, with all training and inference tasks across the backend and camera run in PyTorch. We use TensorRT [2] to accelerate inference on the backend, and a variant of Nexus [88] as a round-robin scheduler for approximation model inference on cameras. Orientations are first represented as rotational
values, projected onto a 360\({}^{\circ}\) space, and then converted using an in-house equirectangular-to-rectilinear image converter (written in C++) to match the APIs offered by recent PTZ cameras [13]. For ground truth accuracy computations (Β§5.1), which require a global (i.e., across all orientations) perspective on object locations and uniqueness, we build atop the ByteTrack multi-object tracker [105] (which links objects across an orientation's video) and use cv2 and scikit-image to extract image features (e.g., SIFT) that link objects across orientations.
## 5. Evaluation
We evaluated MadEye across diverse workloads, network settings, and videos. Our key findings are:
* MadEye increases median workload accuracies by 2.9-25.7% compared to an oracle fixed-orientation strategy (while using the same amount of resources); wins are within 1.8-13.9% of the oracle dynamic strategy.
* Matching the accuracy wins that MadEye achieves with 1 PTZ camera would require the best 4-6 fixed-orientation cameras, which comes with a 2-3.7\(\times\) inflation in resource costs.
* MadEye outperforms prior PTZ algorithms by 2.0-5.8\(\times\), providing 46.8%, 31.1%, and 52.7% higher accuracy than Panoptes [90], tracking [85], and multi-armed bandits [97], respectively.
* MadEye gracefully balances on-camera exploration and transmission of orientations to maximize accuracy even as resources shrink and response rates rise.
### Methodology
**Video dataset for PTZ analysis.** To the best of our knowledge, there does not exist a public video dataset for PTZ cameras that enables users to tune rotation and zoom knobs; instead, existing PTZ datasets reflect pre-determined knob decisions. Thus, to evaluate MadEye, we generate our own dataset, beginning with the abundance of publicly available 360\({}^{\circ}\) footage. Concretely, we use 50 360\({}^{\circ}\) videos from YouTube that incorporate scenes resembling those from prior work [4, 28, 60], e.g., traffic intersections, walkways, shopping centers. Each video lasts 5-10 minutes.
From each video, we carve out scenes of interest as regions spanning 150\({}^{\circ}\) horizontally and 75\({}^{\circ}\) vertically. We then subdivide those regions into grids of orientations to mimic recent PTZ offerings [46] (30\({}^{\circ}\) and 15\({}^{\circ}\) granularities for pan and tilt; we explore other grids in Β§5.4), and extract a full video per orientation. For zoom, since we operate on pre-captured videos, we employ digital zoom (1-3\(\times\)) by cropping images and scaling back the dimensions to match the original image.
**Models and workloads.** We consider 4 popular architectures for vision tasks: SSD [64] and Faster RCNN [87] with ResNet-50 backbones, YOLOv4 and Tiny-YOLOv4 [99] with CSPDarknet53 backbones. We consider two versions of each model trained on Pascal VOC and MS-COCO, but show results for the latter as the trends were similar. To construct queries, we follow the same methodology from recent work (based on production deployments) [74]. Each model can perform any of the four tasks from Β§2.1 with a focus on either people or cars. We enumerate all possible workloads sized between 2-20 queries and pick 10 randomly. The appendix details each workload. We run workloads on all videos and consider response rates from 1-30 fps.
**Hardware and networks.** On-camera computations run on an edge-grade Jetson Nano [71] equipped with a 128-core Maxwell GPU, quad-core ARM CPU with 1.43 GHz clock speed, and 4 GB of memory. We consider default camera rotation speeds of 400\({}^{\circ}\) per second; we study this parameter in Β§5.4. Workload inference and training of approximation models run on a server with an NVIDIA GTX 1080 GPU (8 GB RAM) and 18-core Intel Xeon 5220 CPU (2.2 GHz; 125 GB RAM). Camera and server components are connected with emulated Mahimahi networks [70] using fixed-capacity (24-60 Mbps; 5-20 ms) and real-world mobile traces.
**Metrics.** Our primary evaluation metric is average workload accuracy per video. For each frame, following the accuracy definitions from Β§2.1, we compute per-orientation accuracy for each query relative to the orientation that delivers the max accuracy at that time. Per-query accuracies at each time are averaged to compute per-frame workload accuracies, which in turn are averaged to compute workload accuracy for a video.
While computing these values for binary classification and counting is straightforward, detections and aggregate counting require slight alterations. For detections, mAP scores depend on bounding box coordinates for specific objects and thus cannot be measured by comparing results directly across orientations. Thus, we consolidate the bounding boxes across orientations into a global view, and employ de-duplication [75] to eliminate redundant objects in overlapping regions. We then compute each orientation's mAP score relative to the global scene, and assign per-orientation accuracies as the ratio of its mAP score to the max one across orientations.
Aggregate counting queries are directly evaluated across the entire video (not per frame). Thus, we compute the ratio of unique objects across the orientations that a system selects compared to the total number of unique objects in the video. Note that ByteTrack (Β§4) was unable to robustly support car tracking, so we exclude aggregate counting for cars.
### Overall Results
We first compare MadEye with the two baselines from Β§2.2, _best fixed_ and _best dynamic_, on different network and fps settings. Both baselines impractically rely on oracle knowledge of video content and workload accuracy, i.e., to pick the best
orientation per video or per timestep, respectively, that maximizes accuracy for the target workload-video. Nonetheless, they serve as useful context for MadEye's performance. Note that MadEye automatically adapts the number of frames it explores and transmits based on network delays and response rates (Β§3.3). For _best fixed_, we leverage increasing network speeds by adding more fixed cameras (i.e., best, 2nd best, etc.), rather than simply capturing more (redundant) frames from 1 camera. _Best dynamic_ does not change for any query other than aggregate counting, for which we send the largest number of fruitful orientations that the network can support.
Our results are captured in Figures 12-13. Across these settings, MadEye delivers median and 75th percentile accuracies that are 2.9-25.7% and 1.6-20.7% higher than _best fixed_, and within 1.8-13.9% and 1.3-12.5% of _best dynamic_. Digging deeper, our results show two key trends. First, as frame rates decrease (for a fixed network), MadEye's accuracies and wins over _best fixed_ grow, e.g., for a {24 Mbps, 20 ms} network, median wins improve from 5.8-13.3% to 12.4-25.7% as fps drops from 15 to 1. The reason is that lower fps yields larger timesteps (e.g., 1 sec for 1 fps, 66.7 ms for 15 fps), enabling more exploration and/or transmission. Second, as network speeds grow (for fixed fps), the same trends persist (since each network transfer is faster) but to a lesser extent, e.g., median 15 fps wins grow to 8.6-18.4% for {60 Mbps, 5 ms}.
Figure 14 breaks down MadEye's wins over _best fixed_ by task and object. Following the rationale from Β§2.2, accuracy boosts with MadEye grow as task specificity grows: median wins grow from 8.6% to 13.3% to 22.1% as we move from counting to detections to aggregate counting for people. We also observe consistently larger accuracy wins for people queries (rather than cars) due to their less structured motion patterns (more frequent and scattered orientation switches), e.g., for detections, wins for cars shrink to 6.7%.
Results thus far focus on accuracy improvements. However, a key goal with MadEye is to maximize accuracy for a given resource cost, i.e., network and backend inference overheads. Table 1 lists the smallest number of optimally configured fixed cameras that would be required to match the accuracies that different versions of MadEye deliver, each of which sends a different number of frames per timestep. As shown, it would take 3.7 fixed cameras to realize the 63.1% accuracy that MadEye-1 achieves, implying a 3.7\(\times\) reduction in network and backend compute usage. MadEye-2 is matched by 5.5 fixed cameras; here, however, the resource reduction factor is 2.8\(\times\) since MadEye also sends 2 frames per timestep.
### Comparisons with State-of-the-Art
We compare MadEye with 3 alternate approaches for adaptive camera orientations. Figure 15 shows results for a {24 Mbps; 20 ms} network and 15 fps; trends hold for all other scenarios.
First, we consider Panoptes [90], a recent PTZ system that configures orientations for workloads of applications, each explicitly concerned with specific orientation(s). For orientations of relevance, Panoptes generates a static round-robin schedule that is weighted according to how many queries an orientation is of interest to and how much motion has been detected historically in that orientation; higher weights indicate staying in an orientation for longer. Panoptes then switches between orientations according to this schedule with one exception: if motion gradients in the direction of any overlapping orientation of interest exceed a threshold, Panoptes switches there for several sec before resuming the round robin. Panoptes does not specify a zoom strategy, so we consider the best zoom (accuracy-wise) for any orientation it visits.
We consider two versions of Panoptes, _Panoptes-all_ and _Panoptes-few_, in which each workload query is interested in all orientations or only its best fixed orientation, respectively. Max accuracy in both cases is defined relative to the best orientation among only the set of considered ones. As shown in Figure 15, MadEye outperforms Panoptes-all by 3.8\(\times\), with 46.8% higher accuracy at the median. The reason is that Panoptes cycles through orientations based on
| **MadEye Variant** | **Median Accuracy (%)** | **# Fixed Cameras** |
| --- | --- | --- |
| MadEye-1 | 63.1 | 3.7 |
| MadEye-2 | 66.3 | 5.5 |
| MadEye-3 | 66.8 | 6.1 |

Table 1: Number of optimally-configured fixed cameras needed to match the accuracy of MadEye. MadEye-\(k\) refers to a version of MadEye that is restricted to sending the top \(k\) frames to the server for workload inference. Results consider a {24 Mbps; 20 ms} network, 15 fps response rate, and all video-workload pairs.
Figure 12: Comparing MadEye with the best possible fixed- and adaptive-orientation schemes across all videos and workloads with a {24 Mbps, 20 ms} network and varying fps. Bars list medians with errors bars spanning 25-75th percentiles.
a pre-determined schedule and motion gradients _in the current orientation_, neither of which are sufficient indicators of importance of other orientations at the current time, e.g., orientations are suboptimal most of the time (§2.3). In contrast, MadEye considers many orientations per timestep, ranking them based on current content. The wins persist compared to Panoptes-few (not shown due to the different accuracy metric), but are less pronounced (median of 40.5%) as there are fewer unfruitful orientations for Panoptes to consider.
Next, we consider tracking algorithms that most PTZ cameras come equipped with today [85]. This algorithm starts in a home region (best fixed in our experiment), selects the largest object it finds, and tracks that object continually across orientations aiming to keep it as centered as possible. The algorithm resets to the home region upon losing the tracked object. We consider a favorable variant in which all orientations explored in a timestep are shared with the backend, which uses the one with the highest accuracy. As shown, MadEye delivers 2.0\(\times\) higher workload accuracies (31.1% more at the median) compared to this tracking scheme. The main reason again is that the presence of a large object is a poor indicator of accuracy importance as it fails to capture more general scene properties and the sensitivity of the queries under test. In contrast, MadEye directly estimates query sensitivity to make workload-aware, informed orientation selections.
Finally, we consider the common UCB1 multi-armed bandit (MAB) algorithm [97]. Each orientation is considered a lever with a weight set to the average observed accuracy across all past visits (we seed this with historical data). The algorithm continually selects an orientation to visit as the one with the highest sum of weighted average and upper confidence bound (which favors less-visited orientations). As with tracking, we send all visited orientations to the backend, which selects the best one per timestep. MadEye delivers 52.7% higher median accuracies than this scheme, i.e., a 5.8\(\times\) win. Unlike the schemes above, MAB does factor in workload accuracies in selecting orientations. However, its adaptation considers only historical efficacy (not current content), and scene dynamics have shifted by the time it updates its patterns.
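To make this baseline concrete, the following is a minimal Python sketch of UCB1 orientation selection as described above; the grid size, the exploration constant `c`, and the `accuracy_of` feedback stub are our own illustrative assumptions (the actual baseline seeds lever weights with historical data and ships all visited orientations to the backend).

```python
import math, random

def ucb1_select(visits, totals, t, c=math.sqrt(2)):
    """Pick the orientation maximizing mean observed accuracy plus an
    upper confidence bound that favors less-visited orientations (UCB1)."""
    for o, n in visits.items():
        if n == 0:
            return o                      # visit every orientation once first
    return max(visits, key=lambda o: totals[o] / visits[o]
               + c * math.sqrt(math.log(t) / visits[o]))

accuracy_of = lambda o: random.random()   # stand-in for backend accuracy feedback

visits = {o: 0 for o in range(9)}         # e.g., a 3x3 pan/tilt orientation grid
totals = {o: 0.0 for o in range(9)}
for t in range(1, 101):
    o = ucb1_select(visits, totals, t)    # rotate to o, run the workload,
    visits[o] += 1                        # then fold the observed accuracy
    totals[o] += accuracy_of(o)           # back into the lever's statistics
```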
**Compatibility with other optimizations.** By focusing on previously un-tuned knobs (rotation and zoom) to boost accuracy, MadEye is largely compatible with prior efforts that optimize resource overheads. To illustrate this, we consider a variant of Chameleon [52] that dynamically tunes pipeline knobs (resolution and frame rate) to lower network and back-end inference resource costs without harming accuracy; we brute force selections per frame focused on the best fixed orientation. We then run MadEye atop the fps and resolution selections that Chameleon makes, sending the same amount of network data. As shown in Table 2, Chameleon lowers resource costs by 2.4\(\times\) compared to the naive scheme that sends all frames at the highest resolution; MadEye preserves these efficiency wins, while increasing accuracy by 9.8%.
### Deep Dive Results
**Rotation speeds.** We evaluated the impact of camera rotation speed on MadEye's performance by considering values of {200, 400, 500, infinite}\({}^{\circ}\) per second, a fixed network ({24
| System | Resource reduction | Median accuracy |
|---|---|---|
| Chameleon [52] | 2.4× | 46.3% |
| Chameleon + MadEye | 2.4× | 56.1% |
Table 2. MadEye preserves resource savings of recent systems, while improving accuracy. Results use 15 fps, {24 Mbps; 20 ms}.
Figure 14. MadEyeβs accuracy improvements (over best fixed) for different query tasks and objects. Results consider all videos and models, and use 15 fps and {24 Mbps; 20 ms}.
Figure 13. Comparing MadEye with the best possible fixed- and adaptive-orientation schemes across all videos and workloads with fixed fps (15) and varying networks (improving from left to right). Bars list medians with error bars spanning 25-75th percentiles.
Figure 15. MadEye vs. 3 camera tuning strategies. Results are for all workloads and videos, 15 fps, and {24 Mbps; 20 ms}.
Mbps; 20 ms}), and 15 fps. Intuitively, accuracy grows as rotation speeds increase, e.g., jumping from 54.2% to 64.9% as rotation speed grows from 200 to 500\({}^{\circ}\) per second. The reason is that faster rotations enable the exploration of additional orientations or, in rarer instances, additional transmissions. Importantly, benefits plateau since most queries (other than aggregate counting) are fully satisfied accuracy-wise as long as MadEye finds the best orientation at each timestep.
**Grid granularity.** To understand the effect of grid granularity (with other settings fixed), we focus on the pan dimension (since it is wider) and consider steps of {15, 30, 45, 60}\({}^{\circ}\). Overall, MadEye's accuracy benefits shrink as grids become more fine-grained (with more orientations), e.g., median accuracies drop from 67.5% to 51.8% when pan steps drop from 45\({}^{\circ}\) to 15\({}^{\circ}\). This is because, although exploration in a time budget is governed by rotation speeds rather than grid granularity, the same distance (in \({}^{\circ}\)) of exploration will warrant approximation model inference on more orientations, thereby shrinking each timestep's exploration budget.
**Overheads.** On MadEye's backend, the primary overheads are in initializing approximation models and continually sharing model updates with the camera. Across our workloads, we find median bootstrapping delays to be 27 mins (including labeling and initial fine-tuning). Downlink streaming consumes 3.2 Mbps for the median experiment. Recall that both overheads are mitigated by MadEye's fine-tuning strategy (§3.2). On cameras, the main overheads are in selecting orientations to explore and running approximation models; for the median workload-video pair, per-timestep delays for each task were 17 \(\upmu\)s and 6.7 ms for 15 fps and {24 Mbps; 20 ms}. The former benefits from pre-computed reachability analysis (§3.3).
**Microbenchmarks.** MadEye's performance is governed by two main tasks: (1) ranking orientations with approximation models, and (2) selecting orientations to explore to find the best one(s) per timestep. For the former, Figure 16 shows that MadEye's approximation models assign median ranks of 1.1-1.3 to the best explored orientation at each timestep, significantly outperforming the variant that relies on counting directly on images. For the latter, for the median workload-video pair on {24 Mbps; 20 ms} and 15 fps, MadEye explores the best orientation 89.3% of the time, with 6.8% of errors coming from our conservative zoom strategy (§3.3).
## 6 Related Work
**Adapting video analytics knobs.** VideoStorm [111] selects an input knob configuration (e.g., frame rate, resolution) per workload to lower resource costs and facilitate job scheduling on backend servers. Chameleon [52] extends such configuration tuning to be adaptive in order to cope with ever-changing scene dynamics while keeping resource costs low. As shown in §5.3, by focusing on tuning camera orientations (and not backend pipeline knobs), MadEye provides complementary benefits to these efforts, boosting accuracies while preserving the resource efficiency wins they bring. Other efforts focus on camera-side knobs as MadEye does. For example, CamTuner [77] uses SARSA Reinforcement Learning to boost accuracy by automatically tuning capture knobs that cameras do not usually auto-adjust, e.g., brightness, contrast, and sharpness. AccMPEG [29] predicts the effects of macroblock encoding settings on server-side DNNs, and tunes encoding to maximize accuracy. MadEye shares the same goal as these efforts - tune camera knobs to boost workload accuracy - but focuses on complementary knobs, i.e., camera orientations.
**Frame filtering and result reuse.** Many prior efforts exploit temporal redundancies in video data by filtering out frames for network transfer and processing, and reusing results accordingly [9, 18, 21, 24, 27, 37, 38, 58, 60, 76, 103, 112, 115]. Spatula [48] extends this to multi-camera settings, selecting among cameras in a network. These optimizations are logically similar to MadEye, which also aims to maximize accuracy per network usage. However, the techniques are largely complementary: filtering decisions could be made among explored orientations to maximize new content in transfers.
**Computation and network optimizations.** Several efforts seek to lower compute footprints either by identifying lightweight model variants [15, 23, 39, 43, 47, 65, 84, 114], sharing model layers during inference [50, 74], or using smarter job scheduling strategies [88, 111]. Other systems target lower network overheads by intelligently compressing transmitted frames in a manner that is recoverable on the server or does not negatively impact accuracy [28, 32, 101]. MadEye is entirely complementary to both directions in that it solely focuses on judiciously selecting images (i.e., orientations) to process at any time for an application-provided model (which can be compressed); MadEye is agnostic to the way that selected frames are transmitted or processed on the backend.
**Drone coordination.** Numerous efforts aim to adapt drone flight plans (and thus the content on-board cameras see) to
Figure 16. Comparing different approximation model designs: MadEyeβs lightweight detection models and compressed counting models (Count CNN). Results use all videos, {24 Mbps; 20 ms}, 15 fps, and list median rank assigned to the best explored orientation at each timestep (error bars for 25-75th percentiles).
maximize analytics accuracy or scene coverage [11, 41, 49, 96]. However, these systems focus on identifying events of interest (e.g., wildfires, objects) in a geographically dispersed area for a preset application. In contrast, MadEye focuses on tuning camera orientations for a single scene to cope with workload nuances and maximize accuracy.
## 7 Conclusion
This paper presents MadEye, a system that continually tunes PTZ camera orientations to maximize accuracy for a given analytics workload and resource setting. Key to MadEye are a rapid algorithm that searches through the large space of orientations at each time, and a new, approximate transfer learning strategy that efficiently selects the most fruitful (accuracy-wise) orientations from those explored. Across many videos, workloads, and resource conditions, MadEye increases accuracy by 2.9-25.7% for the same resource usage, or achieves the same accuracy with 2.0-3.7\(\times\) lower resource costs.
|
2305.03136 | Contrastive losses as generalized models of global epistasis | Fitness functions map large combinatorial spaces of biological sequences to
properties of interest. Inferring these multimodal functions from experimental
data is a central task in modern protein engineering. Global epistasis models
are an effective and physically-grounded class of models for estimating fitness
functions from observed data. These models assume that a sparse latent function
is transformed by a monotonic nonlinearity to emit measurable fitness. Here we
demonstrate that minimizing contrastive loss functions, such as the
Bradley-Terry loss, is a simple and flexible technique for extracting the
sparse latent function implied by global epistasis. We argue by way of a
fitness-epistasis uncertainty principle that the nonlinearities in global
epistasis models can produce observed fitness functions that do not admit
sparse representations, and thus may be inefficient to learn from observations
when using a Mean Squared Error (MSE) loss (a common practice). We show that
contrastive losses are able to accurately estimate a ranking function from
limited data even in regimes where MSE is ineffective. We validate the
practical utility of this insight by showing contrastive loss functions result
in consistently improved performance on benchmark tasks. | David H. Brookes, Jakub Otwinowski, Sam Sinai | 2023-05-04T20:33:05Z | http://arxiv.org/abs/2305.03136v3 | # Contrastive losses as generalized models
###### Abstract
Fitness functions map large combinatorial spaces of biological sequences to properties of interest. Inferring these multimodal functions from experimental data is a central task in modern protein engineering. Global epistasis models are an effective and physically-grounded class of models for estimating fitness functions from observed data. These models assume that a sparse latent function is transformed by a monotonic nonlinearity to emit measurable fitness. Here we demonstrate that minimizing contrastive loss functions, such as the Bradley-Terry loss, is a simple and flexible technique for extracting the sparse latent function implied by global epistasis. We argue by way of a fitness-epistasis uncertainty principle that the nonlinearities in global epistasis models can produce observed fitness functions that do not admit sparse representations, and thus may be inefficient to learn from observations when using a Mean Squared Error (MSE) loss (a common practice). We show that contrastive losses are able to accurately estimate a ranking function from limited data even in regimes where MSE is ineffective. We validate the practical utility of this insight by showing contrastive loss functions result in consistently improved performance on benchmark tasks.
## 1 Introduction
A fitness function maps biological sequences to relevant scalar properties of the sequences, such as binding affinity to a target molecule, or fluorescent brightness. Biological sequences span combinatorial spaces and fitness functions are typically multi-peaked, due to interactions between positions in a sequence. Learning fitness functions from limited experimental data (often a minute fraction of the possible space) can be a difficult task but allows one to predict properties of sequences. These predictions can help identify promising new sequences for experimentation [1] or to guide the search for optimal sequences [2, 3].
Numerous methods have been developed to estimate fitness functions from experimental data, including classical machine learning techniques [4], deep learning approaches [5], and semi-supervised methods [6]. Additionally, there are many methods that incorporate biological assumptions into the modeling process, such as parameterized biophysical models [7], non-parametric techniques [8, 9], and methods for spectral regularization of neural networks [10]. These latter approaches largely focus on accurately modeling the influence of "epistasis" on fitness functions, which refers to statistical or physical interactions between genetic elements, typically either amino-acids in a protein sequence or genes in a genome.
"Local" epistasis refers to interactions between a limited number of specific positions in a sequence, and is often modeled using interaction terms in a linear model of a fitness function [11]. "Global" epistasis, on the other hand, refers to the presence of nonlinear relationships that affect the fitness of sequences in a nonspecific manner. A model of global epistasis typically assumes a simple latent fitness function is transformed by a monotonically increasing nonlinearity to produce observed fitness data [12, 13, 14, 15, 16]. Typically, these models assume a particular parametric form of the latent fitness function and nonlinearity, and fit the parameters of both simultaneously. It is most common to assume that the underlying fitness function includes only additive (non-interacting) effects [13], though pairwise interaction effects have been added in some models [14].
Despite their relative simplicity, global epistasis models have been found to be effective at modeling experimentally observed fitness functions [16, 17, 15]. Further, global epistasis is not just a useful modeling choice, but a physical phenomenon that can result from features of a system's dynamics [18] or the environmental conditions in which a fitness function is measured [13]. Therefore, even if one does not use a standard global epistasis model, it is still important to consider the effects of global epistasis when modeling fitness functions.
Due to the monotonicity of the nonlinearity in global epistasis models, the latent fitness function in these models can be interpreted as a parsimonious ranking function for sequences. Herein we show that fitting a model to observed fitness data by minimizing a contrastive, or ranking, loss is a simple and effective method for extracting such a ranking function. We particularly focus on the Bradley-Terry loss [19], which has been widely used for learning-to-rank tasks [20], and more recently for ordering the latent space of a generative model for protein sequences [21]. Minimizing this loss provides a technique for modeling global epistasis that requires no assumptions on the form of the nonlinearity or latent fitness functions, and can easily be applied to any set of observed fitness data.
Further, we use an entropic uncertainty principle to show that global epistasis can result in observed fitness functions that cannot be represented using a sparse set of epistatic interactions. In particular, this uncertainty principle shows that a fitness function that is sufficiently concentrated in the fitness domain-meaning that a small number of sequences have fitness values with relatively large magnitudes-can not be concentrated in the Graph Fourier bases that represent fitness functions in terms of local epistatic interactions [22, 23, 24]. We show that global epistasis nonlinearities tend to concentrate observed fitness functions in the fitness domain, thus preventing a sparse representation in the epistatic domain. This insight has the implication that observed fitness functions that have been affected by global epistasis may be difficult to estimate with undersampled training data and a Mean Squared Error (MSE) loss. We hypothesize that estimating the latent ranking fitness function using a contrastive loss can be done more data-efficiently than estimating the observed fitness function using MSE, and conduct simulations that support this hypothesis. Additionally, we demonstrate the practical importance of these insights by showing that models trained with the Bradley-Terry loss outperform those trained with MSE loss on nearly all FLIP benchmark tasks [25].
## 2 Background
### Fitness functions and the Graph Fourier transform
A fitness function \(f:\mathcal{S}\rightarrow\mathbb{R}\) maps a space of sequences \(\mathcal{S}\) to a scalar property of interest. In the case where \(\mathcal{S}\) contains all combinations of elements from an alphabet of size \(q\) at \(L\) sequence positions, then the fitness function can be represented exactly in terms of increasing orders of local epistatic interactions. For binary sequences (\(q=2\)), this representation takes the form:
\[f(\textbf{x})=\beta_{0}+\sum_{i=1}^{L}\beta_{i}x_{i}+\sum_{ij}\beta_{ij}x_{i}x_{j}+\sum_{ijk}\beta_{ijk}x_{i}x_{j}x_{k}+...,\]
where \(x_{i}\in\{-1,1\}\) represent elements in the sequence and each term in the expansion represents a (local) epistatic interaction with weight \(\beta_{\{i\}}\), with the expansion continuing up to \(L^{\text{th}}\) order terms. Analogous representations can be constructed for sequences with any size alphabet \(q\) using Graph Fourier bases [22, 23, 24]. These representations can be compactly expressed as:
\[\textbf{f}=\mathbf{\Phi}\boldsymbol{\beta}, \tag{1}\]
where **f** is a length \(q^{L}\) vector containing the fitness values of every sequence in \(\mathcal{S}\), \(\mathbf{\Phi}\) is a \(q^{L}\times q^{L}\) orthogonal matrix representing the Graph Fourier basis, and \(\boldsymbol{\beta}\) is a length \(q^{L}\) vector containing the weights corresponding to all possible epistatic interactions. We refer to **f** and \(\boldsymbol{\beta}\) as representing the fitness function in the fitness domain and the epistatic domain, respectively. Note that we may apply the inverse transformation of Eq. (1) to any complete observed fitness function, **y** to calculate the epistatic representation of the observed data, \(\boldsymbol{\beta_{\textbf{y}}}=\mathbf{\Phi}^{T}\mathbf{y}\). Similarly, if \(\mathbf{\hat{f}}\) contains the predictions of a
fitness model for every sequence in a sequence space, then \(\mathbf{\hat{\beta}}=\mathbf{\Phi}^{T}\mathbf{\hat{f}}\) is the epistatic representation of the model.
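For binary alphabets (\(q=2\)), Eq. (1) can be made concrete with a few lines of numpy: the normalized Walsh-Hadamard matrix serves as the Graph Fourier basis \(\mathbf{\Phi}\). This is a minimal sketch of the transform rather than the general-\(q\) construction of [22, 23, 24].

```python
import numpy as np

def walsh_hadamard_basis(L):
    """Orthogonal Graph Fourier basis Phi for binary sequences of length L."""
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    H = np.array([[1.0]])
    for _ in range(L):
        H = np.kron(H, H2)                 # Hadamard matrix of size 2**L
    return H / np.sqrt(2.0 ** L)           # normalized: Phi.T @ Phi = identity

L = 4
Phi = walsh_hadamard_basis(L)
f = np.random.randn(2 ** L)                # fitness of every sequence
beta = Phi.T @ f                           # epistatic representation of f
assert np.allclose(Phi @ beta, f)          # Eq. (1): f = Phi @ beta
```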
A fitness function is considered sparse, or concentrated, in the epistatic domain when \(\mathbf{\beta}\) contains a relatively small number of elements with large magnitudes, and many elements equal to zero or with small magnitudes. In what follows, we may refer to a fitness function that is sparse in the epistatic domain as simply being a "sparse fitness function". A number of experimentally-determined fitness functions have been observed to be sparse in the epistatic domain [26, 27, 24]. Crucially, the sparsity of a fitness function in the epistatic domain determines how many measurements are required to estimate the fitness function using Compressed Sensing techniques that minimize a MSE loss function [24]. Herein we consider the effect that global epistasis has on a sparse fitness function. In particular, we argue that global epistasis results in observed fitness functions that are dense in the epistatic domain, and thus require a large amount of data to accurately estimate by minimizing a MSE loss function. However, in these cases, there may be a sparse ranking function that can be efficiently extracted by minimizing a contrastive loss function.
### Global epistasis models
A model of global epistasis assumes that noiseless fitness measurements are generated according to the model:
\[y=g\left(f(\mathbf{x})\right), \tag{2}\]
where \(f\) is a latent fitness function, \(g\) is a monotonically increasing nonlinear function. In most cases, \(f\) is assumed to include only first or second order epistatic terms and the nonlinearity is explicitly parameterized using, for example, spline functions [13] or sums of hyperbolic tangents [14]. The restriction that \(f\) includes only low-order terms is somewhat arbitrary, as higher-order local epistatic effects have been observed in fitness data (see, e.g., [28]). In general we may consider \(f\) to be any fitness function that is sparse in the epistatic domain, and global epistasis then refers to the transformation of a sparse fitness function by a monotonically-increasing nonlinearity.
Global epistasis models of the form of Eq. (2) have proved effective at capturing the variation observed in empirical fitness data [29, 16, 13, 17, 15, 14], suggesting that global epistasis is a common feature of natural fitness functions. Further, it has been shown that global epistasis results from first-principles physical considerations that are common in many biological systems. In particular, Husain and Murugan [18] show that global epistasis arises when the physical dynamics of a system is dominated by slow, collective modes of motion, which is often the case for protein dynamics. Aside from intrinsic/endogenous sources, the process of measuring fitness can also introduce nonlinear effects that are dependent on the experiment and not on the underlying fitness function. For example, fitness data is often left-censored, as many sequence have fitness that falls below the detection threshold of an assay. Finally, global diminishing-returns epistatic patterns have been observed widely in both single and multi-gene settings (where the interactions are among genes rather than within a gene)[29, 15, 30].
Together, these results indicate that global epistasis is an effect that can be expected in empirically-observed fitness functions. Further, the form of Eq. 2 suggests that global epistasis may be regarded as "corrupting" a sparse latent fitness function, analogous to the corrupting effects of additive measurement noise. In what follows, we argue that the corruption due to global epistasis manifests itself by producing observed data that is dense in the epistatic domain. In other words, when an observed fitness function is produced through Eq. (2) then the epistatic representation of this fitness function (calculated through application of Eq. (1)), is not sparse. Further we argue that this corrupting effect of global epistasis makes it to difficult to model such observed data by minimizing standard MSE loss functions with a fixed amount of data. Further, we argue that fitting fitness models aimed at extracting the latent fitness function from observed data is a more efficient use of observed data that results in improved predictive performance (in the ranking sense).
While the models of global epistasis described thus far could be used for this purpose, they have the drawback that they assume a constrained form of both \(g\) and \(f\), which enforces inductive biases that may affect predictive performance. Here we propose a flexible alternative to modeling global epistasis that makes no assumptions on the form of \(f\) or \(g\). In particular, we interpret the latent fitness function \(f\) as a parsimonious ranking function for sequences, and the problem of modeling global
epistasis as recovering this ranking function. A natural method to achieve this goal is to fit a model of \(f\) to the observed data by minimizing a contrastive, or ranking, loss function. These loss functions are designed to learn a ranking function and, as we will show, are able to recover a sparse fitness function that has been transformed by global epistasis to produce observed data. An advantage of this approach to modeling global epistasis is that the nonlinearity \(g\) is modeled non-parametrically, and is free to take any form, while the latent fitness function can be modeled by any parametric model, for example, convolutional neural networks (CNNs) or fine-tuned language models, which have been found to perform well in fitness prediction tasks [25]. An accurate ranking model also enables effective optimization, as is also implied by the results in Chan et al. [21].
### Contrastive losses
Contrastive losses broadly refer to loss functions that compare multiple outputs of a model and encourage those outputs to be ordered according to some criteria. In our case, we desire a loss function that encourages model outputs to be ranked according to observed fitness values. An example of such a loss function is the Bradley-Terry (BT) loss [19, 20], which has the form:
\[\mathcal{L}(\mathbf{\theta})=\sum_{i,j:y_{i}>y_{j}}\log\left[1+e^{-(f_{\mathbf{\theta}}(\textbf{x}_{i})-f_{\mathbf{\theta}}(\textbf{x}_{j}))}\right], \tag{3}\]
where \(f_{\mathbf{\theta}}\) is a model with parameters \(\mathbf{\theta}\), \(\textbf{x}_{i}\) are model inputs and \(y_{i}\) are the corresponding labels of those inputs. This loss compares every pair of data points and encourages the model output \(f_{\mathbf{\theta}}(\textbf{x}_{i})\) to be greater than \(f_{\mathbf{\theta}}(\textbf{x}_{j})\) whenever \(y_{i}>y_{j}\); in other words, it encourages the model outputs to be ranked according to their labels. A number of distinct but similar loss functions have been proposed in the learning-to-rank literature [31] and also for metric learning [32]. An example is the Margin ranking loss [33], which replaces the logistic function in the sum of Eq. (3) with a hinge function. In our experiments, we largely focus on the BT loss of Eq. (3) as we found it typically results in superior predictive performance; however, we also compare to models trained with the Margin loss when presenting results in benchmark tasks.
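For reference, Eq. (3) is only a few lines in a framework such as PyTorch. The sketch below is a direct transcription that enumerates all pairs; subsampling pairs per minibatch for large datasets would be an implementation choice of ours, not something prescribed here. Note that only the comparisons \(y_{i}>y_{j}\) enter the mask, so the loss is unchanged by any monotonic transformation of the labels.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(preds, labels):
    """Eq. (3): sum over pairs with y_i > y_j of log(1 + exp(-(f_i - f_j)))."""
    d = preds.unsqueeze(1) - preds.unsqueeze(0)       # d[i, j] = f_i - f_j
    mask = labels.unsqueeze(1) > labels.unsqueeze(0)  # pairs with y_i > y_j
    return F.softplus(-d[mask]).sum()                 # softplus(x) = log(1+e^x)

preds = torch.randn(8, requires_grad=True)
labels = torch.randn(8)
bradley_terry_loss(preds, labels).backward()          # differentiable as written
```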
The BT loss was recently used by Chan et al. [21] to order the latent space of a generative model for protein sequences such that certain regions of the latent space correspond to sequences with higher observed fitness values. In this case, the BT loss is used in conjunction with standard generative
Figure 1: Nearly exact recovery of latent fitness function from complete fitness data by minimizing Bradley-Terry loss. (a) Schematic of simulation. (b) Comparison between latent (\(f\)) and observed (\(y\)) fitness functions in fitness (left) and epistatic (right) domains. The latent fitness function is sampled from the NK model with \(L=8\) and \(K=2\) and the global epistasis function is \(g(f)=\exp(10\cdot f)\). Each point in the scatter plot represents the fitness of a sequence, while each bar in the bar plot (right) represents the squared magnitude of an epistatic interaction, with roman numerals indicating the order of interaction. Only epistatic interactions up to order 3 are shown. The right plot demonstrates that global epistasis produces a dense representation in the epistatic domain compared to the representation of the latent fitness in the epistatic domain. (c) Comparison between latent and estimated (\(\hat{f}\)) fitness functions in fitness and epistatic domains.
modeling losses. In contrast, here we analyze the use of the BT loss alone in order to learn a ranking function for sequences given corresponding observed fitness values.
A key feature of the contrastive loss in Eq. (3) is that it only uses information about the ranking of observed labels, rather than the numerical values of the labels. Thus, the loss is unchanged when the observed values are transformed by a monotonic nonlinearity. We will show that this feature allows this loss to recover a sparse latent fitness function from observed data that has been corrupted by global epistasis, and enables more data-efficient learning of fitness functions compared to a MSE loss.
## 3 Results
Our results are aimed at demonstrating three properties of contrastive losses. First, we show that given complete, noiseless fitness data (i.e. noiseless fitness values associated with every sequence in the sequence space) that has been corrupted by global epistasis, minimizing the BT loss enables a model to nearly exactly recover the sparse latent fitness function \(f\). Next, we consider the case of incomplete data, where the aim is to predict the relative fitness of unobserved sequences. In this regime, we find through simulation that minimizing the BT loss enables models to achieve better predictive performance then minimizing the MSE loss when the observed data has been corrupted with global epistasis. We argue by way of a fitness-epistasis uncertainty principle that this is due to the fact that nonlinearities tend to produce fitness functions that do not admit a sparse representation in the epistatic domain, and thus require more data to learn with MSE loss. Finally, we demonstrate the practical significance of these insights by showing that minimizing the BT loss results in improved predictive performance over MSE loss in nearly all tasks in the FLIP benchmark [25].
### Recovery from complete data
We first consider the case of "complete" data, where fitness measurements are available for every sequence in the sequence space. The aim of our task in this case is to recover a sparse latent fitness function when the observed measurements have been transformed by an arbitrary monotonic nonlinearity. In particular, we sample a sparse fitness function \(f\) from the NK model [34], a popular model of fitness functions that has been shown to recapitulate the sparsity properties of some empirical fitness functions [24]. The NK model has three parameters: \(L\), the length of the sequences, \(q\), the size of the alphabet for sequence elements, and \(K\), the maximum order of (local) epistatic interactions in the fitness function. Roughly, the model randomly assigns \(K-1\) interacting positions to each position in the sequence, resulting in a sparse set of interactions in the epistatic domain. The weights of each of the assigned interactions are then drawn from independent unit normal distributions.
We then transform the sampled fitness function \(f\) with a monotonic nonlinearity \(g\) to produce a set of complete observed data, \(y_{i}=g(f(\mathbf{x}_{i}))\) for all \(\mathbf{x}_{i}\in\mathcal{S}\). The goal of the task is then to recover the function \(f\) given all \((\mathbf{x}_{i},y_{i})\) pairs. In order to do so, we model \(f\) using a two layer fully connected neural network and fit the parameters of this model by performing stochastic gradient descent (SGD) on the BT loss, using the Spearman correlation between model predictions and the \(y_{i}\) values to determine convergence of the optimization. We then compare the resulting model, \(\hat{f}\), to the latent fitness function \(f\) in both the fitness and epistatic domains, using the forward and inverse transformation of Eq. (1) to convert between the two domains.
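In outline, the data-generating side of this task looks as follows; the neighborhood sampling below is one common reading of the NK model (the exact variant of [34] used here may differ), and the standardization before the exponential is our choice to keep the values numerically tame.

```python
import numpy as np
from itertools import product

def sample_nk(L, K, q=2, seed=0):
    """Fitness of all q**L sequences: site i contributes a standard-normal
    weight indexed by the letters at i and K-1 randomly chosen neighbors."""
    rng = np.random.default_rng(seed)
    seqs = np.array(list(product(range(q), repeat=L)))
    f = np.zeros(len(seqs))
    for i in range(L):
        others = [j for j in range(L) if j != i]
        nbr = np.r_[i, rng.choice(others, size=K - 1, replace=False)]
        table = rng.standard_normal((q,) * K)   # one weight per configuration
        f += table[tuple(seqs[:, nbr].T)]
    return seqs, f

seqs, f = sample_nk(L=8, K=2)
f = (f - f.mean()) / f.std()    # standardized before the nonlinearity
y = np.exp(10.0 * f)            # observed fitness under global epistasis
```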
Fig. 1 shows the results of one of these tests. In this case, we used an exponential function to represent the global epistasis nonlinearity. The exponential function exaggerates the effects of global epistasis in the epistatic domain and thus better illustrates the usefulness of contrastive losses, although the nonlinearities in empirical fitness functions tend to have a more sigmoidal shape [13]. Fig. 1b shows that the global epistasis nonlinearity substantially alters the representations of the observed data \(y\) in both the fitness and epistatic domains, as compared to the latent fitness function \(f\). Nonetheless, Fig. 1c demonstrates that the model fitness function \(\hat{f}\) created by minimizing the BT loss is able to nearly perfectly recover the sparse latent fitness function. This is a somewhat surprising result, as there are many fitness functions that correctly rank the fitness of sequences, and it is not immediately clear why minimizing the BT loss produces this particular sparse latent fitness function. However,
this example makes clear that fitting a model by minimizing the BT loss can be an effective strategy for recovering a sparse latent fitness function from observed data that has been corrupted by global epistasis. Similar results from additional examples of this task using different nonlinearities and latent fitness functions are shown in the SI.
### Fitness-epistasis uncertainty principle
Next, we consider the case where fitness data is incomplete. Our aim is to understand how models trained with the BT loss compare to those trained with MSE loss at predicting the relative fitness of unobserved sequence using different amounts of subsampled training data. We take a signal processing perspective on this problem, and consider how the density of a fitness function in the epistatic domain affects the ability of a model to accurately estimate the fitness function given incomplete data. In particular, we demonstrate that global epistasis tends to increase the density of fitness functions in the epistatic domain, and use an analogy to Compressive Sensing (CS) to hypothesize that more data is required to effectively estimate these fitness functions when using an MSE loss [24]. In order to support this claim, we first examine the effects of global epistasis on the epistatic domain of fitness functions.
Fig. 1b provides an example where a monotonic nonlinearity applied to a sparse fitness function increases the density of the fitness function in the epistatic domain. In particular, we see that many "spurious" local epistatic interactions must appear in order to represent the nonlinearity (e.g. interactions of order 3, when we used an NK model with \(K=2\)). This effect can be observed for many different shapes of nonlinearities [12].
We can understand this effect more generally using uncertainty principles, which roughly show that a function cannot be concentrated on a small number of values in two different representations. In particular, we consider the discrete entropic uncertainty principle proved by Dembo et al. [35]. When applied to the transformation in Eq. (1), this uncertainty principle states:
\[H(\textbf{f})+H(\mathbf{\beta})\geq L\log\left(\frac{1}{m^{2}}\right), \tag{4}\]
where \(H(\textbf{x})=-\sum_{i}\frac{x_{i}^{2}}{||\textbf{x}||^{2}}\log\frac{x_{i}^{2}}{||\textbf{x}||^{2}}\) is the entropy of the normalized squared magnitudes of a vector and \(m=1/\sqrt{q}\) when \(q=2\), \(m=1/(q-\sqrt{q})\) when \(q=3\) and \(m=1-1/(q-\sqrt{q})\) otherwise. Low entropy indicates that a vector is highly concentrated on a small set of elements. Thus, the fitness-epistasis uncertainty principle of Eq. (4) shows that fitness functions cannot be highly concentrated in both the fitness and epistatic domains. A sparse fitness function (in the epistatic domain) must therefore be "spread out" (i.e. dense) in the fitness domain, and vice-versa.
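The principle is easy to verify numerically; the sketch below assumes \(q=2\), where the bound evaluates to \(L\log 2\), uses scipy's Hadamard matrix as \(\mathbf{\Phi}\), and scales the exponent by \(\max_{i}|f_{i}|\) (our choice) so that \(a\) controls the strength of the nonlinearity.

```python
import numpy as np
from scipy.linalg import hadamard

def entropy(v):
    """H in Eq. (4): entropy of the normalized squared magnitudes of v."""
    p = v ** 2 / np.sum(v ** 2)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

L = 8
Phi = hadamard(2 ** L) / np.sqrt(2.0 ** L)   # orthogonal basis for q = 2
beta = np.zeros(2 ** L)
beta[:10] = np.random.randn(10)              # sparse in the epistatic domain
f = Phi @ beta
for a in [0.0, 2.0, 5.0]:
    y = np.exp(a * f / np.abs(f).max())      # stronger nonlinearity as a grows
    print(f"a={a}: H_fitness={entropy(y):.2f}, "
          f"H_epistasis={entropy(Phi.T @ y):.2f}, "
          f"bound={L * np.log(2):.2f}")      # the two entropies sum to at
                                             # least the bound, per Eq. (4)
```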
Figure 2: (a) Demonstration of the fitness-epistasis uncertainty principle where a latent fitness function is transformed by \(g(f)=\exp(a\cdot f)\) for various settings of \(a\). The dashed black line indicates the lower bound on the sum of the entropies of the fitness and epistatic representations of the fitness function (b) Test-set Spearman correlation for models trained with MSE and BT loss on incomplete fitness data transformed by various nonlinearities, compared to the entropy of the fitness function in the epistatic domain. Each point corresponds to a model trained on 256 randomly sampled training points from a \(L=10,K=2\) latent fitness function which was then transformed by a nonlinearity. (c) Convergence comparison between models fit with BT and MSE losses to observed data generated by transforming an \(L=10,K=2\) latent fitness function by \(g(f)=\exp(10\cdot f)\). Each point represents the mean test set correlation over 200 training set replicates.
The importance of this result for understanding global epistasis is that applying a nonlinearity to a dense vector will often have the effect of concentrating the vector on a smaller number of values. This can most easily be seen for convex nonlinearities like the exponential shown in Fig 1a, but is also true of many other nonlinearities (see the SI for more examples). If this concentration in the fitness domain is sufficiently extreme, then the epistatic representation of the fitness function, \(\mathbf{\beta}\), must be dense in order to satisfy Eq. (4). Fig 2a demonstrates the uncertainty principle by showing how the entropy in the fitness and epistatic domains decrease and increase, respectively, as more extreme nonlinearities are applied to a sparse latent fitness function.
The uncertainty principle quantifies how global epistasis corrupts a fitness function by preventing a sparse representation in the epistatic domain. From a CS perspective, this has direct implications for modeling the fitness function from incomplete data. In particular, if we were to model the fitness function using CS techniques such as LASSO regression with the Graph Fourier basis as the representation, then it is well known that the number of noiseless data points required to perfectly estimate the function scales as \(\mathcal{O}(S\log N)\) where \(S\) is the sparsity of the signal in a chosen representation and \(N\) is the total size of the signal in the representation [36]. Therefore, when using these techniques, fitness functions corrupted by global epistasis will require more data to effectively model. Notably, the techniques for which these scaling laws apply minimize an MSE loss function as part of the estimation procedure. Although these scaling laws only strictly apply to CS modeling techniques, we hypothesize that this intuition carries over: fitness functions with dense epistatic representations will require more data to train an accurate model with an MSE loss, even when using neural network models and SGD training procedures. In the next section we present the results of simulations that support this hypothesis by showing that the entropy of the epistatic representation is negatively correlated with the predictive performance of models trained with an MSE loss on a fixed amount of fitness data. Further, these simulations show that models trained with the BT loss are robust to the dense epistatic representations produced by global epistasis, and converge faster to maximum predictive performance as they are provided more fitness data compared to models trained with an MSE loss.
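The CS analogy can be illustrated directly; in the sketch below the regularization strength, the sparsity level \(S=20\), and the sample budget are illustrative choices of ours, not values taken from any of the cited experiments.

```python
import numpy as np
from scipy.linalg import hadamard
from sklearn.linear_model import Lasso

L = 10
Phi = hadamard(2 ** L) / np.sqrt(2.0 ** L)
beta = np.zeros(2 ** L)
beta[np.random.choice(2 ** L, 20, replace=False)] = np.random.randn(20)
f = Phi @ beta                               # sparse latent fitness function
f = f / np.abs(f).max()                      # rescaled so the nonlinearity bites
y = np.exp(10.0 * f)                         # dense epistatic representation

idx = np.random.choice(2 ** L, 256, replace=False)   # incomplete measurements
for name, target in [("latent f", f), ("observed y", y)]:
    lasso = Lasso(alpha=1e-3, max_iter=100_000).fit(Phi[idx], target[idx])
    print(name, lasso.score(Phi, target))    # R^2 over the full space; recovery
                                             # should be markedly better for the
                                             # sparse latent function
```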
### Simulations with incomplete data
We next present simulation results aimed at showing that global epistasis adversely affects the ability of models to effectively learn fitness functions from incomplete data when trained with MSE loss and that models trained with BT loss are more robust to the effects of global epistasis.
In our first set of simulations, we tested the ability of models to estimate a fitness function of \(L=10\) binary sequences given one quarter of the fitness measurements (256 measurements out of a total of \(2^{10}=1024\) sequences in the sequence space). In each simulation, we (i) sampled a sparse latent fitness function from the NK model, (ii) produced an observed fitness function by applying one of three nonlinearities to the latent fitness function: exponential, \(g(f)=\exp(a\cdot f)\), sigmoid, \(g(f)=(1+e^{-a\cdot f})^{-1}\), or a cubic polynomial \(g(f)=f^{3}+af\) with settings of the parameter \(a\) that ensured the nonlinearity was monotonically increasing, (iii) sampled 256 sequence/fitness pairs uniformly at random from the observed fitness function to be used as training data, and (iv) trained models with this data by performing SGD on the MSE and BT losses. We ran this simulation for 10 replicates of each of 20 settings of the \(a\) parameter for each of the three nonlinearities. In every case, the models were fully-connected neural networks with two hidden layers and the optimization was terminated using early stopping with \(20\) percent of the training data used as a validation set. After training, we measured the extent to which the models estimated the fitness function by calculating Spearman correlation between the model predictions and true fitness values on all sequences in the sequence space.
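A condensed sketch of this training setup is given below; the hidden-layer width, the Adam optimizer, and the fixed epoch count are our simplifications (the runs above used SGD with early stopping on a 20% validation split), and inputs are assumed to be \(\pm 1\)-encoded binary sequences.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.stats import spearmanr

def bt_loss(p, y):
    d = p.unsqueeze(1) - p.unsqueeze(0)
    return F.softplus(-d[y.unsqueeze(1) > y.unsqueeze(0)]).mean()

def fit(X, y, use_bt, epochs=500):
    net = nn.Sequential(nn.Linear(X.shape[1], 100), nn.ReLU(),
                        nn.Linear(100, 100), nn.ReLU(),
                        nn.Linear(100, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        p = net(X).squeeze(-1)
        loss = bt_loss(p, y) if use_bt else F.mse_loss(p, y)
        loss.backward()
        opt.step()
    return net

# With (X_train, y_train) the 256 sampled pairs and (X_all, y_all) the full
# sequence space, predictive performance is scored exactly as in the text:
# net = fit(X_train, y_train, use_bt=True)
# rho = spearmanr(net(X_all).detach().squeeze(-1).numpy(), y_all).correlation
```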
The results of each of these simulations are shown in Fig. 2b. We see that the predictive performance of models trained with the MSE loss degrades as the entropy of the fitness function in the epistatic domain increases, regardless of the type of nonlinearity that is applied to the latent fitness function. This is in contrast to the models trained with the BT loss, which often achieve nearly perfect estimation of the fitness function even when the entropy of the fitness function in the epistatic domain approaches its maximum possible value of \(L\log 2\). This demonstrates the key result that the MSE loss is sensitive to the density of the epistatic representation resulting from global epistasis (as implied by the analogy
to CS), while the BT loss is robust to these effects.
Next we tested how training set size affects the predictive performance of models trained with MSE and BT losses on a fitness function corrupted by global epistasis. In order to do so, we sampled a single \(L=10,K=2\) fitness function from the NK model and applied the nonlinearity \(g(f)=\exp(10\cdot f)\) to produce an observed fitness function. Then, for each of a range of training set sizes between 25 and 1000, we randomly sampled a training set and fit models with MSE and BT losses using the same models and procedure as in the previous simulations. We repeated this process for 200 training set replicates of each size, and calculated both the Spearman and Pearson correlations between the resulting model predictions and true observed fitness values for all sequences in the sequence space.
Fig 2c shows the mean correlation values across all 200 replicates of each training set size. There are two important takeaways from this plot. First, the BT loss achieves higher Spearman correlations than the MSE loss in all data regimes. This demonstrates the general effectiveness of this loss for estimating fitness functions corrupted by global epistasis. Next, we see that models trained with BT loss converge to a maximum Spearman correlation faster than models trained with MSE loss do to a maximum Pearson correlation, which demonstrates that the difference in predictive performance between models trained with MSE and BT losses is not simply a result of the evaluation metric being more tailored to one loss than the other. This result also reinforces our claim that fitness functions corrupted by global epistasis require more data to learn effectively with MSE loss, as would be predicted by CS scaling laws. The BT loss, on the other hand, while not performant with the Pearson metric, as expected of a ranking loss, seems to overcome this barrier and can be used to effectively estimate a fitness function from a small amount of data, despite corruption by global epistasis.
### FLIP benchmark results
In the previous sections, we used noiseless simulated data to explore the interaction between global epistasis and loss functions. Now we present results demonstrating the practical utility of our insights by comparing the predictive performance of models trained with MSE and BT losses on experimentally-determined protein fitness data. We particularly focus on the FLIP benchmark [25], which comprises a total of 15 fitness prediction tasks stemming from three empirical fitness datasets. These three datasets explore multiple types of proteins, definitions of protein fitness, and experimental assays. In particular, one is a combinatorially-complete dataset that contains the binding fitness of all combinations of
| Data set | Split | MSE Loss | Margin | Bradley-Terry |
|---|---|---|---|---|
| GB1 | 1-vs-rest | 0.133 ± 0.150 | 0.068 ± 0.093 | 0.091 ± 0.093 |
| GB1 | 2-vs-rest | 0.564 ± 0.026 | 0.549 ± 0.041 | **0.607 ± 0.009** |
| GB1 | 3-vs-rest* | 0.814 ± 0.049 | **0.869 ± 0.002** | **0.880 ± 0.003** |
| GB1 | Low-vs-High* | 0.499 ± 0.010 | _0.472 ± 0.020_ | **0.567 ± 0.013** |
| GB1 | Sampled | 0.930 ± 0.002 | _0.944 ± 0.004_ | **0.951 ± 0.002** |
| AAV | Mut-Des | 0.751 ± 0.006 | _0.763 ± 0.005_ | 0.757 ± 0.007 |
| AAV | Des-Mut | 0.806 ± 0.006 | 0.806 ± 0.007 | **0.832 ± 0.002** |
| AAV | 1-vs-rest* | 0.335 ± 0.117 | **0.591 ± 0.042** | _0.485 ± 0.078_ |
| AAV | 2-vs-rest | 0.748 ± 0.010 | 0.737 ± 0.009 | **0.798 ± 0.003** |
| AAV | 7-vs-rest | 0.732 ± 0.003 | 0.730 ± 0.003 | **0.742 ± 0.003** |
| AAV | Low-vs-High | 0.401 ± 0.006 | 0.407 ± 0.007 | _0.410 ± 0.009_ |
| AAV | Sampled | 0.927 ± 0.001 | 0.922 ± 0.001 | **0.933 ± 0.000** |
| Thermostability | Mixed | 0.349 ± 0.011 | _0.370 ± 0.009_ | **0.453 ± 0.007** |
| Thermostability | Human | 0.511 ± 0.016 | _0.526 ± 0.009_ | **0.589 ± 0.002** |
| Thermostability | Human-Cell | 0.490 ± 0.021 | _0.517 ± 0.012_ | **0.570 ± 0.004** |
Table 1: Comparison between MSE, Margin, and Bradley-Terry losses on FLIP benchmark tasks using the CNN baseline model. Each row represents a data set and split combination. Numerical columns indicate the mean and standard deviation of test set Spearman correlation over 10 random initializations of the model. Asterisks indicate that unmodified portions of sequences were used in training data. Bold values indicate that a loss has significantly improved performance over all other tested losses; Italics indicate that a contrastive loss has significantly improved performance over MSE, but not over the other contrastive loss.
mutations at 4 positions to the GB1 protein [28], another contains data about the viability of Adeno-associated virus (AAV) capsids for many different sets of mutations to the wild-type capsid sequence [3], and another contains data about the thermostability of many distantly related proteins [37].
For each of the three datasets, the FLIP benchmark provides multiple train/test splits that are relevant for protein engineering scenarios. For example, in the GB1 and AAV datasets, there are training sets that contain only single and double mutations to the protein, while the associated test sets contain sequences with more than two mutations. This represents a typical situation in protein engineering where data can easily be collected for single mutations (and some double mutations) and the goal is then to design sequences that combine these mutations to produce a sequence with high fitness. In all of the FLIP tasks the evaluation metric is Spearman correlation between model predictions and fitness labels in the test set, since ranking sequences by fitness is the primary task that models are used for in data-driven protein engineering.
In the FLIP benchmark paper, the authors apply a number of different modeling strategies to these splits, including Ridge regression, training a CNN, and a number of variations on fine-tuning the ESM language models for protein sequences [38]. All of these models use an MSE loss to fit the model to the data, along with any model-specific regularization losses. In our tests, we consider only the CNN model as it balances consistently high performance in the benchmark tasks with relatively straightforward training protocols, enabling fast replication with random restarts.
We trained the CNN model on each split using the standard MSE loss as well as the Margin and BT contrastive losses. The mean and standard deviation of Spearman correlations between the model predictions and test set labels over 10 random restarts are shown in Table 1. By default, the FLIP datasets contain portions of sequences that are never mutated in any of the data (e.g., only 4 positions are mutated in the GB1 data, but the splits contain the full GB1 sequence of length 56). We found that including these unmodified portions of the sequence often did not improve, and sometimes hurt, the predictive performance of the CNN models while requiring significantly increased computational complexity. Therefore most of our results are reported using inputs that contain only sequence positions that are mutated in at least one train or test data point. The few cases where including the unmodified portions of the sequence did improve performance are indicated by asterisks; in these cases we found both models trained with MSE and contrastive losses had improved performance.
The results in Table 1 show that using contrastive losses (and particularly the BT loss) consistently results in improved predictive performance across a variety of practically-relevant fitness prediction tasks. The reasons for this may be manifold; however, we hypothesize that it is partially a result of sparse latent fitness functions being corrupted by global epistasis. Indeed, it is shown in Otwinowski et al. [13] that a GB1 landscape closely associated with that in the FLIP benchmark is strongly affected by global epistasis. Further, many of the FLIP training sets are severely undersampled in the sense of CS scaling laws, which is the regime in which differences between MSE and contrastive losses are most apparent when global epistasis is present, as shown in Fig. 2.
## 4 Discussion
Global epistasis is thought to be a pervasive effect on experimental fitness data. Here we have shown that minimizing a contrastive loss is an effective technique for recovering a sparse latent fitness function that has been corrupted by global epistasis. Additionally, we have shown that global epistasis manifests itself by producing fitness data that does not admit a sparse representation in the epistatic domain. By drawing an analogy to Compressive Sensing, we then argued that a fitness function with a dense epistatic representation requires more data to effectively estimate when using an MSE loss. Our simulations show that the BT loss, however, is robust to these corrupting effects of global epistasis. The practical utility of this perspective on global epistasis and contrastive losses can be seen in our results showing that models trained with contrastive losses consistently outperform those trained with MSE loss in fitness prediction tasks on diverse set of experimentally-measured and representative protein benchmarks.
The comparison between the Margin and BT losses in Table 1 indicates that there may be cases where certain contrastive losses outperform others. This suggests that it can be fruitful in practice to compare multiple contrastive losses for any given fitness prediction task. Our primary goal is to
encourage practitioners to use such contrastive losses, rather than simply relying on the standard MSE loss.
Further, our results leave open a few avenues for future exploration. First, it is not immediately clear in what situations we can expect to observe the nearly-perfect recovery of a latent fitness function as seen in Fig. 1. A theoretical understanding of this result may either cement the promise of the BT loss, or provide motivation for the development of techniques that can be applied in different scenarios. Next, we have made a couple of logical steps in our interpretations of these results that are intuitive, but not fully supported by any theory. In particular, we have drawn an analogy to CS scaling laws to explain why neural networks trained with MSE loss struggle to learn a fitness function that has a dense representation in the epistatic domain. However, these scaling laws only strictly apply for a specific set of methods that use an orthogonal basis as the representation of the signal; there is no theoretical justification for using them to understand the training of neural networks (although applying certain regularizations to neural network training can provide similar guarantees [39]). Additionally, it is not clear from a theoretical perspective why the BT loss seems to be robust to the dense representations produced by global epistasis. A deeper understanding of these phenomena could be useful for developing improved techniques.
Despite these possibilities for further inquiry, our results lay the groundwork for understanding the interplay between natural properties of fitness functions and the loss functions used to train models on fitness data.
|
2305.13325 | Transmission and multiple reflection mechanisms of guided streamers
propagating through grounded annular electrode and interacting with grounded
surface electrode | Trains of positive guided streamers are generated within an atmospheric
pressure plasma jet supplied in helium and polarized by a high-voltage
nanosecond pulse generator. The device is completed by a grounded annular
electrode and a grounded surface electrode on which they can interact. The
resulting transmitted and multiple reflected guided streamers are measured
combining optical and electrical characterizations. While the electrical
approach provides information on the capacitive/conductive nature of the
current peaks as well as on their positive/negative value, fast ICCD imaging
distinguishes whether the guided streamers are incident, reflected or
transmitted. Combining these two techniques allow us to demonstrate
experimentally that the reflected streamers are negative contrarily to the
others. Besides, 4 types of reflections have been highlighted: a reflection (r)
at the outlet of the capillary, a reflection on the grounded surface electrode
(R) and two reflections (r' and r") observed when an incident guided streamer
passes through the grounded annular electrode. The two techniques agree that
the characteristic propagation times are always shorter for reflected negative
streamers than for the positive ones propagating forward. Hence, for a grounded
annular electrode placed 3 cm away from the high voltage electrode, propagation
time is 80 ns for reflection versus 250 ns for transmission. These propagation
times are even shorter when the annular electrode is brought closer to the
surface electrode with velocities typically higher than 300 km/s. Finally, all
these experimental data are utilized to build an equivalent electrical model to
understand the dynamics of the guided streamers and explain their transmission
and reflection modes. | H. Decauchy, T. Dufour | 2023-05-16T21:42:58Z | http://arxiv.org/abs/2305.13325v1 | Transmission and multiple reflection mechanisms of guided streamers propagating through grounded annular electrode and interacting with grounded surface electrode
###### Abstract
The repeatable dynamics and the reversal propagation of guided streamers remain a major question of fundamental physics. In this article, trains of positive guided streamers are generated within an atmospheric pressure plasma jet supplied in helium and polarized by a high-voltage nanosecond pulse generator. The device is completed by two distant targets: a grounded annular electrode coaxially centered around the capillary through which guided streamers can propagate, and a grounded surface electrode on which they can interact. The resulting transmitted and multiple reflected guided streamers are measured combining optical characterization (fast ICCD imaging) and electrical characterization (high voltage probe and current monitors). While the electrical approach provides information on the capacitive/conductive nature of the current peaks as well as on their positive/negative value, fast ICCD imaging distinguishes whether the guided streamers are incident, reflected or transmitted. Combining these two techniques allows us to demonstrate experimentally that the reflected streamers are negative, contrary to the others. Besides, 4 types of reflections have been highlighted: a reflection (r) at the outlet of the capillary, a reflection on the grounded surface electrode (R) and two reflections (r\({}^{\prime}\) and r\({}^{\prime\prime}\)) observed when an incident guided streamer passes through the grounded annular electrode. The two techniques agree that the characteristic propagation times are always shorter for reflected negative streamers than for the positive ones propagating forward. Hence, for a grounded annular electrode placed 3 cm away from the high voltage electrode, propagation time is 80 ns for reflection versus 250 ns for transmission. These characteristic propagation times are even shorter when the annular electrode is brought closer to the surface electrode, with velocities typically higher than 300 km/s. In addition, the intensity ratios of reflected/incident guided currents drop sharply, typically losing one decade over a counter-propagation length of only 3-5 cm. Finally, all these experimental data are utilized to build an equivalent electrical model that allows us to better understand the dynamics of the guided streamers and explain their transmission and reflection modes upon their interaction with the two distant grounded electrodes.
Keywords: Guided streamers, streamer counter-propagation, return stroke, streamer-magnetic field interaction
Footnote †: _H. Decauchy, T. Dufour, Plasma Sources Science and Technology, Vol. 31, No. 11 (2022), [https://doi.org/10.1088/1361-6595/acolda](https://doi.org/10.1088/1361-6595/acolda)_
## I Introduction
### Preamble
Atmospheric pressure plasma jet (APPJ) devices have been studied for several decades in a wide range of applications, including materials, effluents processing and, more recently, life sciences [1]. This wide variety of applications has stimulated the emergence of a large variety of APPJ configurations combined with different electrical excitation modes, detailed hereafter.
### Plasma applications
In the field of materials and surface science, APPJ can be used for the etching of materials like Kapton, silicon dioxide, tantalum or tungsten [2] but also for the deposition of amorphous carbon films [3] as well as composite thin films like InO\({}_{y}\) and SnO\({}_{z}\) [4]. APPJ are also relevant for effluents processing, especially once they are mounted in parallel to treat larger volumes. Such configurations can be used to treat textile wastewaters and more specifically dyes that are among the most complicated environmental pollutants to treat owing to their complex structures, chemical properties and molecular weights [5]. CO\({}_{2}\) conversion by APPJ is also a flagship application that paves the way for sustainable and low-carbon processes [6]. As a third field of application, APPJ's are extensively investigated to address various life science issues. Plasma jets supplied with helium or argon can generate reactive species that play an essential role to inactivate _Staphylococcus aureus_ and _Escherichia coli_ and more generally to decontaminate surfaces from a large spectrum of microorganisms (bacteria, bacilli, fungi, viruses) [7]. APPJ are also used in agriculture to improve seeds' germinative properties. Although they can only treat small batches of seeds, APPJ can be used to easily test a wide range of experimental conditions and then innovate larger-scale plasma processes. In medical research, APPJ's have become established in many areas, especially for wound healing to trigger biological effects upon disinfection, proliferation, angiogenesis, cell migration and re-epithelialization [8]. APPJ have been successfully applied in oncology to induce antitumor effects upon _in vivo_ campaigns, either following a direct approach (tumor cells exposed to plasma) [9] or an indirect approach (utilization of a plasma-activated liquid to generate long-lifespan reactive species) [10].
### APPJ configurations
The diversity of the aforementioned applications and the specific know-how of each laboratory have led to the emergence of a large panel of APPJ devices. As sketched in Figure 1, it is convenient to classify APPJ's into three categories. The first includes the devices where the plasma is in contact with no electrode. Such APPJ present double dielectric barriers, as sketched in the (a) configuration where two outer rings surround a dielectric capillary, in the (b) configuration where a pin electrode is embedded in a dielectric material or in the (c) configuration where the polarized electrode and the counter-electrode are wound around a tube following a double helix structure. Such dielectric-embedded electrodes are particularly relevant for processing liquids or corrosive gases [11]. The second category corresponds to the APPJ's with a single dielectric barrier so that the plasma is in contact with only one electrode. This is the case of the (d) configuration corresponding to the plasma gun device successfully applied in several medical applications [12] as well as of the (e), (f) and (g) configurations, the latter one being drastically different from the previous ones in that its counter-electrode is not in contact with the dielectric tube. The third category gathers APPJ devices where the plasma is in contact with at least one electrode and where the concept of a dielectric barrier is no more relevant. In this category, a metal surface is often used as the counter-electrode, as sketched in the (h) configuration. A variant corresponds to the (i) configuration where no counter-electrode is present. In the two latter configurations, the main role of the dielectric tube is to flush the gas flow in a preferred direction, with the ability to influence the discharge inception and its spatio-temporal dynamics [13]. However, the dielectric property of the tube cannot be utilized to prevent arc transition. For this reason, a small modification of the interelectrode distance or applied power can easily change the plasma from a cold to a thermal state.
### Structuring of a streamer
Whatever the APPJ configuration, the physical mechanisms responsible for the ignition of a cold plasma discharge remain the same. First, the background radiation (cosmic radiation and environmental radioactivity) generates seed electrons within the gas confined in the interelectrode space. There, the applied voltage creates an external electric field that strongly accelerates these electrons, making them collide at high frequency with neutral species (atoms and molecules) and leading to their excitation and/or ionization. Primary electrons can hence be generated and accelerated by the external electric field to collide with neutrals and lead to a chain reaction commonly called "electron avalanche". As soon as the Raether-Meek criterion is met, i.e. the number of electrons in the avalanche's head is higher than 10\({}^{8}\)-10\({}^{9}\), the avalanche turns into a streamer: an ionization wave that propagates longitudinally and that transports electrical charges and radiative species over long distances [14]. Streamers can be detected by fast ICCD imaging and/or by electrical probing. Combining these diagnostics with simulation tools allows modeling a streamer as a multi-stage structure including:
* The pre-head region which is located far upstream of the streamer's head and where seed electrons can be created through several physical mechanisms: local electric field induced by space charges, background ionization effects and photo-ionization [15, 16]. The efficiency of this latter mechanism strongly depends on the presence of both nitrogen and oxygen: it dominates streamer propagation in air for repetition frequencies of at least 1 kHz, and it depends also on the repetition frequency in the case of N\({}_{2}\) with 1 ppm O\({}_{2}\) [17].
Figure 1: Non-exhaustive nomenclature of APPJ devices and their cross-sectional representations
* The head which can take the appearance of a highly emissive and small bullet (e.g. positive guided streamer generated in helium before interacting with a grounded metal target [18]) or on the contrary appear with the same optical emission as the tail, hence the denomination of filament (e.g. positive streamer discharges in N\({}_{2}\)-O\({}_{2}\) gas mixtures at low pressure [17, 19]).
* The tail which has the appearance of a long drag of weak intensity, decreasing as one moves away from the head. The tail can contain positive and negative charged species although its whole and macroscopic electrical charge is zero [14, 18].
The term "plasma bullet", although widely used in the literature, is somewhat misleading in that it suggests a small volume of plasma (here the head) propagating as a projectile completely independent of the device that generated it. In the case of a plasma gun, this terminology would imply that the "plasma bullet" is disconnected from the polarized electrode and therefore that the tail does not exist, which is obviously false [20, 21]. Their dynamics has been extensively investigated as part of simulation works which concluded that "plasma bullet" propagation is similar to cathode streamer propagation, i.e. ionization waves guided by the jet of flowing gas [21]. For the sake of clarity, this terminology is abandoned in this article.
### Streamers - Propagation modes
Depending on the APPJ configurations and the profile of the applied voltage, different electrical excitation modes can be reached, driving either to trains of guided streamers or non-guided streamers. A train of guided streamers is the succession of streamers periodically repeated in time and space, i.e. each streamer following the same spatial path arrangement as its predecessors. Such streamers always propagate at the same velocity (speed variations < a few percent), after the same delay time (uncertainties < a few nanoseconds) [22]. Conversely, a train of unguided streamers corresponds to a succession of streamers that are randomly distributed in time and/or in space. Thus, streamers that are repeated in time but whose propagation paths change belong to this category, as well as those which appear at different times after the instant when the breakdown voltage is reached and occupy a different spatial arrangement at each shot [23]. As underlined by Zeng _et al._, such streamers propagate in different directions with a typical variation of the propagation velocity of approximately 20%-50% [24]. The stochastic behavior of unguided streamers may result from their ignition jitter and from the variation of their propagation velocity from pulse to pulse [22]. The key physical parameter that differentiates guided streamers from unguided ones is the seed electron density that remains in the propagation channel between two successive streamers. If its value is higher than 10\({}^{9}\) cm\({}^{-3}\), the dynamics of the plasma plume transits from a stochastic mode to a repeatable mode [22]. The unguided/guided streamers transition can also depend on the geometry and size of the capillary which impacts the flow regimes (laminar, turbulent) and related timescales like the Kolmogorov microscale. To study this transition in a fine and reliable way, it is therefore recommended to compare several rapid characterization diagnostics, for example electrical vs. optical as in this article, or optical emission vs. laser induced fluorescence, as in Iseni et al. [25].
### Electrical excitation modes
Figure 2 shows three modes of electrical excitation (DC, sine and pulses) considering the (g) and (h) configurations of Figure 1. The DC excitation mode is undoubtedly the most versatile since it makes it possible to generate guided or unguided streamers, mostly depending on the value of the interelectrode distance [26]. As sketched in Figure 2a, transient streamers propagate randomly at high speed from the pin electrode. They can give rise to corona streamers (Figure 2b) which vanish before reaching the counter-electrode [27]. As a result, the time profile of the discharge current is pulsed at a steady repetition frequency (a few kHz), hence testifying that the streamers are repeated in time although not in space [26]. By reducing the interelectrode gap, the avalanches can bridge the pin electrode to the grounded metal plate by forming a single filament, as sketched in Figure 2c. Then, a shortening of the inter-electrode gap leads to a discharge current that is still pulsed but with a higher frequency and smaller pulse width, hence resulting in a transient glow, as sketched in Figure 2d. This discharge is composed of time-repeated streamers and presents locally a dark space (DS) region close to the counter-electrode. For shorter interelectrode gaps, Figure 2e indicates that the current shows a constant time profile, meaning that the discharge current is no longer carried by streamers. Shorter gap values drive to the appearance of a spark that can be either directive or branched, as sketched in Figures 2f and 2g respectively [28]. The streamers can be considered as guided only for the directive spark because they show a highly reproducible repetition frequency and they occupy a unique spatial pattern [29]. If the power supply provides a current of sufficiently high magnitude, then the spark turns into an arc (thermal plasma). Figure 2 also proposes two other electrical excitation modes from the same "pin / remote plate" configuration. In the case of a high voltage sinusoidal polarization (Figure 2h), the discharge current shows two components: (i) a dielectric one corresponding to a sinusoidal waveform and (ii) a plasma component composed of a stochastic streamers distribution which therefore corresponds to non-guided streamers. Conversely, and as illustrated in Figure 2i, guided streamers can be obtained using the same APPJ device if it is powered by high voltage pulses - either positive, negative or alternatively positive/negative - with widths that must be short enough (< 100 \(\upmu\)s). Such guided streamers typically propagate at velocities of the order of 10\({}^{5}\)-10\({}^{6}\) m.s\({}^{-1}\) [31].
### Free mode, contact mode, propagation and reverse propagation
A plasma jet can operate either in free mode (propagation beyond the device's interelectrode gap and complete energetical dissipation in the gaseous environment) or in contact mode (propagation in the gaseous environment followed by the interaction with a condensed material - whether liquid or solid - commonly called "target"). The key parameters of a target are its dielectric permittivity and electrical conductivity, which both determine its capacitance and resistance. In turn, these two parameters control the time constant to charge the target surface.
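As an order-of-magnitude illustration (a simple parallel-plate idealization added here, not taken from the original text), the relaxation time of the charges deposited on a target of thickness \(e\), area \(A\), permittivity \(\varepsilon\) and conductivity \(\sigma\) follows directly from these two parameters:

\[\tau=R\,C=\frac{e}{\sigma A}\times\frac{\varepsilon A}{e}=\frac{\varepsilon}{\sigma}\]

For a glass-like target (\(\varepsilon\approx 5\varepsilon_{0}\), \(\sigma\approx 10^{-12}\) S/m), \(\tau\) reaches tens of seconds, so deposited charges remain on the surface; for a metal (\(\sigma\approx 10^{7}\) S/m), \(\tau\) is vanishingly small and no surface charging can build up.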
When the streamer impinges a target of both low permittivity and conductivity (e.g. glass), an in-depth penetration of the electric field is evidenced, with negligible lateral gradients [32]. As a result, the surface is charged by a fast accumulation and spreading of the streamers [33]. However, when the streamer impinges a target of both high permittivity and conductivity (e.g. metal), the electrical charging of the target is either nonexistent or operates at very low velocity. Besides, the electric field does not enter the target, but a higher voltage drop remains in the gap instead (between the capillary's outlet and the target). As a result, a conductive ionized channel is established, bridging the polarized electrode with the target (source of electrons). There, the localized electric field immediately creates a strong discharge which rebuilds the conductive channel a few hundred ns later. As a result, a backward streamer propagates inside this channel, accompanied by a long-lasting diffuse discharge [32, 34].
This counter-propagation has already been characterized by Darny et al. through metastable helium density measurements (laser absorption spectrometry) and electric field distribution measurements (Pockels probe) [35]. These two physical parameters enable the authors to explain counter-propagation as the result of an impedance mismatch between the target and the polarized electrode. This phenomenon has also been the subject of numerical simulations, notably those of Viegas et al. using a 2D axisymmetric fluid model based on drift-diffusion-reaction equations for charged species, reaction equations for neutral species and Poisson's equation [36]. Their simulations have demonstrated that the existence of an ionized channel is necessary for the propagation of the streamers from the target to the polarized electrode. They have also shown that chemical reactivity stays in the plasma plume during the 1 \(\upmu\)s pulse if the metal target is grounded, while it decreases to a few hundred ns if the same target is at a floating potential [36]. Interestingly, the computational investigations of Babaeva et al. reveal that, for a metal target, a backward streamer could change its direction and transform into a secondary forward streamer [34]. The velocities of the backward streamer and the secondary forward streamer are higher than that of the primary forward streamer because they both propagate along an already ionized channel. Multiple forward and backward streamer reflections have also been observed and analyzed by the GREMI laboratory [35].
The dielectric permittivity of a target is a fundamental question that also has strong spinoffs in plasma applications, whether in the field of materials or life sciences. As demonstrated by Yonemori et al., the density of oxygen radicals generated by an APPJ can be twice as high in the vicinity of a glass surface as near a biological tissue (e.g. _ex vivo_ skin of a murine model) [37]. For particular applications in oncology, it also appears that the plasma jet treatment can induce effects contrary to those expected if it is
Figure 2: Schematics of non-equilibrium atmospheric plasma discharges obtained for different electrical excitation modes and characterized either by guided streamers (temporal and spatial repeatability), non-guided streamers (random distribution of current peaks in space and/or in time) or the absence of streamers (continuous current).
placed in "contact mode" with the tumor tissue, whereas the toxicity effects are considerably reduced in "remote mode" [38]. Furthermore, researchers can take many advantages from targets in medical applications: (i) they can be used to verify the absence of electrical and thermal hazards before applying plasma on preclinical models, e.g. mice, rats and pigs, (ii) they can be engineered so as to mimic the electrical response of human bodies while taking into account biophysical factors (e.g. pregnant woman, child with clammy skin, old man with metal prothesis, etc.) to customize the plasma therapy [39, 40].
## II Experimental setup & Methods
### Plasma gun device
Streamers are generated using a plasma gun: an APPJ device composed of a quartz capillary, an inner rod electrode and an external ring counter-electrode, as sketched in Figure 3. The quartz capillary is 150 mm long with inner and outer diameters of 2.0 and 4.0 mm respectively. The inner rod electrode (50 mm in length, 2.0 mm in diameter) is biased to the high voltage power supply and is called the "high voltage" (HV) electrode, while the outer ring counter-electrode (10 mm in length) is grounded. The central coordinate of the ring electrode (x\({}_{\mathrm{ring}}\)) is aligned to the end of the inner rod electrode (x\({}_{\mathrm{rod}}\)) so that x\({}_{\mathrm{ring}}\) = x\({}_{\mathrm{rod}}\), as sketched in Figure 3. The plasma gun is supplied with helium at 1000 sccm and polarized by positive pulses of high voltage generated by a pulse generator (RLC electronic Company, NanoGen1 model) coupled with a DC high voltage power supply (Spellman company, SLM 10 KV 1200W model). In all the experiments, the high voltage magnitude is 8500 V, the duty cycle is 10 % and the repetition frequency is 5 kHz.
This plasma source is completed by two distant grounded electrodes: a grounded annular electrode (GAEL) which corresponds to the external housing of the current monitor CM\({}_{1}\), and a grounded surface electrode (GSEL) that is placed 15 mm from the outlet of the plasma gun.
### Electrical measurements & Signal processing
#### ii.2.1 Electrical probes
Electrical parameters are measured using an analog oscilloscope (Wavesurfer 3054) from Teledyne Lecroy coupled with a high voltage probe (Tektronix P6015A 1000:1, Teledyne LeCroy PPE 20 kV 1000:1, Teledyne LeCroy PP020 10:1) and two current monitors (Pearson, 2877).
Each current monitor (CM) consists of a 16 mm thick hollow cylindrical housing, with internal and external diameters of 6 mm and 26 mm respectively. Since this housing is made of metal and connected to the ground, it corresponds to a grounded annular electrode (GAEL). It contains a ferromagnetic torus (FT) characterized by its inner poloidal radius (R\({}_{\mathrm{in}}\) = 7 mm), its outer poloidal radius (R\({}_{\mathrm{out}}\) = 13 mm) and its rectangular section (length L\({}_{1}\) = 6 mm \(\times\) width w\({}_{1}\) = 12 mm). A metal wire is wound around this torus, forming N = 50 turns equidistant from each other. As sketched in Figure 3, FT is inside GAEL, the whole forming a CM. In this article, while CM\({}_{2}\) is only used to measure current, CM\({}_{1}\) plays two roles: (i) being a grounded electrode thanks to its metal housing that is connected to the ground of the experimental room and (ii) measuring current thanks to its internal ferromagnetic torus. When a guided streamer passes through a CM, it generates circular magnetic field lines that are perpendicular to the streamer propagation. These lines create an induced current in the ferromagnetic torus which - after a calibration procedure - corresponds to the electrical current of the streamer.
Figure 3: Experimental setup of the plasma gun interacting with two distant grounded electrodes: a grounded surface electrode (GSEL) and a grounded annular electrode (GAEL) that is coaxially centered and which corresponds to the external housing of a current monitor (CM\({}_{1}\)). Incident, reflected and transmitted streamers are analyzed using current monitors CM\({}_{1}\) and CM\({}_{2}\) as well as fast ICCD imaging.
#### ii.2.2 High voltage pulses
An ideal signal pulse is characterized by a rising edge with a characteristic rising time \(\tau_{\text{rise}}\) = 0 s, a falling edge with a characteristic falling time \(\tau_{\text{fall}}\) = 0 s and a time width along which its amplitude is constant, as sketched in Figure 4a. In this work, the high voltage positive pulses delivered by the NanoGen power supply are close to ideal rectangular pulses: they present a magnitude of 8.5 kV, a droop of only 200 V (Figure 4b) and an overshoot of 13.2 kV (Figure 4e), while \(\tau_{\text{rise}}\) and \(\tau_{\text{fall}}\) are both equal to only 36 ns (Figures 4e and 4f). The values of the overshoot and ringing are low enough to confirm that the damping of the electronic circuit is of completely satisfactory quality. Besides, each pulse can be decomposed into a weighted summation of a series of sine waves, as shown in the amplitude spectrum of Figure 4d where the signal fluctuations (mainly present during \(\tau_{\text{rise}}\) and \(\tau_{\text{fall}}\)) see their amplitude sharply decrease with frequency. Finally, the derivative of the voltage pulse in Figure 4c results in two peaks whose magnitude, multiplied by the device capacitance, provides the positive and negative capacitive peaks specific to the plasma gun. These peaks are extrinsic to the physical properties of the streamers (or plasma) and can therefore be considered as benchmarks for comparing streamers dynamics.
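As a minimal numerical sketch of this last point (illustrative code added here, not from the original study; the value of `C_DEVICE` is an assumption, not a measured property of the plasma gun), the capacitive peaks can be located by differentiating a synthetic pulse:

```python
# Minimal sketch: locate the positive/negative capacitive peaks from the
# derivative of the voltage pulse, I_kappa(t) = C_device * dV/dt.
# C_DEVICE is an assumed, illustrative value (not a measured property).
import numpy as np

C_DEVICE = 10e-12  # assumed device capacitance [F]

t = np.arange(0.0, 40e-6, 0.5e-9)             # 2 GHz sampling grid [s]
rise = np.clip((t - 1e-6) / 36e-9, 0.0, 1.0)  # 36 ns rising edge
fall = np.clip((t - 21e-6) / 36e-9, 0.0, 1.0) # 36 ns falling edge
v = 8500.0 * (rise - fall)                    # 8.5 kV pulse, 20 us wide

i_kappa = C_DEVICE * np.gradient(v, t)        # capacitive current [A]
k_pos, k_neg = int(np.argmax(i_kappa)), int(np.argmin(i_kappa))
print(f"kappa  at {t[k_pos]*1e6:.2f} us, {i_kappa[k_pos]:+.2f} A")
print(f"kappa' at {t[k_neg]*1e6:.2f} us, {i_kappa[k_neg]:+.2f} A")
```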
#### ii.2.3 Current peaks
Electromagnetic radiation (EMR) corresponds to waves of electric and magnetic energy moving together in space. The HV power supply utilized to generate cold plasma (or streamers) is an EMR source owing to its transformers, its electrical wires which behave as transmission lines and the rising time (typically 36 ns from 0 to 10 kV). Although this EMR emission is non-ionizing - and therefore cannot remove the electrons from the atoms through space - its strength is high enough to interfere with the normal operation of transistors in a radius of 1.5 meters, and therefore with all the surrounding electronic devices, especially laptops (e.g. locking of keyboard and mouse) [41]. Since EMR can also interfere with the measurements of current performed by CM\({}_{1}\), it must be removed through an appropriate calibration procedure. For this purpose, the measurement of current peaks associated with streamer propagation must be performed considering these two configurations: the "A configuration" where CM\({}_{1}\) is placed a few centimeters away from the capillary and the "B configuration" where CM\({}_{1}\) is coaxially centered with the capillary, as sketched in Figure 5.
Figure 4: (a) Ideal positive square pulse, (b) Real positive square pulse obtained with the HV power supply, (c) Derivative of the real pulse to evidence the positive (\(\kappa\)) and negative (\(\kappa^{\prime}\)) capacitive peaks, (d) Fast Fourier Transform of the real pulse, (e) Enlarged view of the pulse rising edge to evidence \(\tau_{\text{rise}}\), overshoot and ringing, (f) Enlarged view of the pulse falling edge to evidence \(\tau_{\text{fall}}\) and backswing.
In the A configuration, CM\({}_{1}\) can only measure the current resulting from the EMR emission which is labelled \(I_{EMR}\) in equation 1. Figure 5a shows an enlarged view of the \(I_{EMR}\) profile on the rising edge of the HV pulse while the inset recalls the existence of an \(I_{EMR}\) component for each side of the pulse: one on its rising edge and the other on its falling edge. In the B configuration, the total current measured by CM\({}_{1}\) (\(I_{B}\)) has three components, as stated in equation 2: the first corresponds to the emission of electromagnetic fields (\(I_{EMR}\)), the second is related to the capacitive current peak of the device (\(I_{\kappa}\)) and the third stands for the conductive current peak of the guided streamer (\(I_{\zeta}\)). In this configuration, Figure 5b shows the profile of \(I_{B}\) at the rising edge of the HV positive pulse. Since its profile is too noisy to clearly distinguish the three aforementioned components, a Fast Fourier Transform (FFT) analysis can be performed to assess noise versus discrete frequency components for the B configuration as well as for the A configuration. As shown in Figure 5c, the FFT analysis of \(I_{A}\) reveals that the electromagnetic interferences operate at frequencies typically higher than 4 MHz. In comparison, Figure 5d represents the FFT spectrum of the signal in the B configuration, where the capacitive and conductive current peaks occur at a lower frequency range. A Butterworth low-pass filter of order n = 3 is applied with a cut-off frequency (\(f_{c}\)) and a sampling frequency (\(f_{s}\)) of 4 MHz and 2 GHz respectively. This processing is labelled \(\mathcal{B}_{4MHz}\) and is characterized on a logarithmic Bode diagram by its gain at order 3 (\(G_{3}\)) which decreases linearly towards \(-\infty\), at a rate of -60 dB/decade, following equation 3. The Butterworth processing of \(I_{A}\) leaves a residual background \(\varepsilon\) which remains constant at a value of about 18 mA, as represented in Figure 5e and expressed in equation 4. In the B configuration, Figure 5f shows the result of the same processing applied to \(I_{B}\): the current intensity profile is composed of the \(\varepsilon\) current background, the capacitive current peak (\(I_{\kappa}\)) and the guided streamer current peak (\(I_{\zeta}\)) (equation 5). Finally, the useful components of the current intensity profile measured by CM\({}_{1}\) - namely \(I_{\kappa}\) and \(I_{\zeta}\) - are obtained by applying equation 6 and are unambiguously visible in Figure 5g.
Figure 5: Calibration procedure to measure current with CM\({}_{1}\) at the rising edge of a positive pulse (a) EMR current, (b) Current of guided streamer & EMR, (c) FFT of (a), (d) FFT of (b), (e) Current of (a) after Butterworth processing; (f) Current of (b) after Butterworth processing; (g) Conductive current peak of the guided streamer (\(\zeta\)) and capacitive current peak of the device (\(\kappa\)).
\[I_{A}=I_{EMR} \tag{1}\]
\[I_{B}=I_{EMR}+I_{\kappa}+I_{\zeta} \tag{2}\]
\[G_{3}(f)=\frac{1}{\sqrt{1+\left(\frac{f}{f_{c}}\right)^{6}}} \tag{3}\]
\[\mathcal{B}_{4MHz}[I_{A}]=\varepsilon \tag{4}\]
\[\mathcal{B}_{4MHz}[I_{B}]=\varepsilon+I_{\kappa}+I_{\zeta} \tag{5}\]
\[I_{\kappa}+I_{\zeta}=\mathcal{B}_{4MHz}[I_{B}]-\mathcal{B}_{4MHz}[I_{A}] \tag{6}\]
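The following sketch illustrates how equations 3 to 6 could be applied in practice, assuming the raw records `i_a` (A configuration) and `i_b` (B configuration) are already digitized at \(f_{s}\) = 2 GHz; it is an illustration written for this text, not the authors' processing code:

```python
# Minimal sketch of the calibration of equations 1-6: a 3rd-order Butterworth
# low-pass (f_c = 4 MHz) is applied to both configurations, and their
# difference isolates I_kappa + I_zeta (equation 6).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2e9   # sampling frequency f_s [Hz]
FC = 4e6   # cut-off frequency f_c [Hz]

def g3(f):
    """Gain of equation 3: -60 dB/decade roll-off above f_c."""
    return 1.0 / np.sqrt(1.0 + (f / FC) ** 6)

def b_4mhz(signal):
    """Butterworth low-pass B_4MHz[.] of order n = 3 (zero-phase variant)."""
    b, a = butter(3, FC, btype="low", fs=FS)
    return filtfilt(b, a, signal)

def useful_current(i_a, i_b):
    """Equation 6: I_kappa + I_zeta = B_4MHz[I_B] - B_4MHz[I_A]."""
    return b_4mhz(i_b) - b_4mhz(i_a)
```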
### Fast ICCD imaging & Signal processing
Since guided streamers are transient and low-emissive phenomena, the observation of a single one of them requires very specific equipment like a streak camera, whose temporal resolution is close to 800 fs or even less [42]. In our case, the radiative emission of the plasma jet is collected by an intensified charge-coupled device (ICCD) camera from the Andor company (model Istar DH340T). It has a 2048 x 512 imaging array of 13.5 \(\upmu\)m x 13.5 \(\upmu\)m pixels and an optical gate width lower than 2 ns. Although this camera is equipped with high intensification technology, its time resolution is lower than that of a streak camera, so that it is mandatory to study not a single streamer but a train of guided streamers. This means collecting at regular time intervals a large number of guided streamers (N\({}_{\text{GS}}\)) and summing their emissions on a single image. The Solis software enables such operations combining the "kinetic series" acquisition mode and the "DDG" gate mode. Before explaining the procedure for creating an image, the three following points must be reminded:
1. The HV power supply delivers pulses at the repetition frequency of 5 kHz for a duty cycle of 10%. Therefore, the width of a single pulse is 10% / 5 kHz = 20 \(\upmu\)s and the repetition period is \(T_{\text{rep}}\) = 1 / 5 kHz = 200 \(\upmu\)s (see Figure 6). As observed in Figure 5g, a single guided streamer has a duration of a few \(\upmu\)s, approximately 3 \(\upmu\)s. Therefore, the ICCD observations can be achieved only during the first microseconds that follow the rising edge of each pulse. The remaining 20 - 3 = 17 \(\upmu\)s can be ignored.
2. A kinetic series is composed of several scans, each scan comprising a given number of acquisitions. The kinetic series is defined by the 3 following parameters: the exposure time of a single scan (\(\tau_{\text{scan}}\)), the number of accumulations (N\({}_{\text{acc}}\)) and the length, which corresponds to the number of scans taken in the kinetic series (L\({}_{\text{kin}}\)). Here, we consider that \(\tau_{\text{scan}}\) = 5 s, N\({}_{\text{acc}}\) = 1 and L\({}_{\text{kin}}\) = 850. As a result, one can easily deduce the number of guided streamers (i.e. pulses, i.e. acquisitions) collected upon a single scan, namely \(N_{GS}=\frac{\tau_{\text{scan}}}{T_{\text{rep}}}=\frac{5s}{200\upmu\text{s}}\) = 25 000, as sketched in Figure 6. Besides, the total duration of a kinetic series can also be assessed as \(\tau_{\text{scan}}\times\text{N}_{\text{acc}}\times\text{L}_{\text{kin}}=4250\text{ s}\) = 1 h 11 min. Therefore, depending on whether one wishes to observe all or part of the forward / backward propagation phenomenon, the L\({}_{\text{kin}}\) value is carefully chosen to find the best compromise between measurement accuracy and measurement time.
3. The DDG mode is characterized by a gain G = 1500, a gate width w = 2 ns and a delay \(\delta\) that can vary from 0 to 850 ns (L\({}_{\text{kin}}\) value) per step of 1 ns. This delay is defined with respect to the trigger: the instant corresponding to the appearance of the capacitive current peak (\(\kappa\)) (see Figure 5g). Consequently, and as shown in Figure 6 for Acq. 1, two successive gate widths present an overlap of 1 ns.
The procedure for creating an image is performed in 850 scans. During scan #001, while 25 000 pulses are carried out (or guided streamers are generated), the ICCD camera acquires only the first nanoseconds of each one (w = 2 ns) for a delay always maintained at \(\delta\) = 0 ns, as shown in Figure 6. These 25 000 acquisitions are summed to constitute a unique acquisition specific to scan #001. During scan #002, 25 000 new pulses are achieved while the ICCD camera acquires only a tiny part of each one defined by w = 2 ns and a delay \(\delta\) fixed at 1 ns. These 25 000 acquisitions are summed to constitute a new acquisition specific to scan #002. More generally, for each new scan k (with k < 851), 25 000 pulses are carried out, each of them being partially captured by the ICCD camera over a time interval always set at w = 2 ns and shifted per ns-step so that once k = L\({}_{\text{kin}}\) = 850, \(\delta\) has a value as high as 849 ns.
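A small sketch of this bookkeeping (added for illustration; it simply reproduces the values quoted above):

```python
# Minimal sketch of the kinetic-series timing: streamers summed per scan,
# total series duration, and the (delay, gate end) window of each scan.
F_REP = 5e3     # pulse repetition frequency [Hz]
T_SCAN = 5.0    # exposure time of a single scan tau_scan [s]
N_ACC = 1       # number of accumulations
L_KIN = 850     # number of scans in the kinetic series
W = 2e-9        # ICCD gate width [s]

n_gs = int(T_SCAN * F_REP)            # 25 000 guided streamers per scan
total_s = T_SCAN * N_ACC * L_KIN      # tau_scan x N_acc x L_kin = 4250 s
# scan k opens a 2 ns gate delayed by delta = k ns after the kappa peak,
# so two successive gates overlap by 1 ns
gates_ns = [(k, k + W * 1e9) for k in range(L_KIN)]

print(n_gs, total_s / 60)             # -> 25000  70.83... (about 1 h 11 min)
print(gates_ns[:2])                   # -> [(0, 2.0), (1, 3.0)]
```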
Considering that each ICCD picture is a matrix of 2000 columns of pixels by 250 rows of pixels, and that the streamer propagates along the rows, the propagation velocity of the guided streamers (\(v_{GS}\)) is measured as follows: (i) the values of the pixels are summed along each column so that the integrated emissivity is represented in a 2000\(\times\)1 matrix. The highest value of integrated emissivity can be associated with the ionization front of the guided streamers (\(x_{IF}\) coordinate). As expressed in equation (7), the velocity is measured at locations \(x_{IF}\) from two consecutive ICCD pictures so that \(t_{k+1}-t_{k}=1\,ns\):
\[v_{GS}(x_{k})=\frac{dx}{dt}=\frac{x_{IF}(t_{k+1})-x_{IF}(t_{k})}{t_{k+1}-t_{k}} \tag{7}\]
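A minimal sketch of this measurement (added for illustration; it assumes background-corrected frames and ignores the optical magnification, which would rescale the pixel pitch):

```python
# Minimal sketch of equation 7: the ionization front x_IF is taken as the
# column of maximum integrated emissivity, and the velocity follows from two
# consecutive frames separated by dt = 1 ns.
import numpy as np

PX = 13.5e-6   # pixel pitch [m]; optical magnification ignored here
DT = 1e-9      # delay step between consecutive scans [s]

def x_if(frame):
    """Sum the 250 rows of each of the 2000 columns and return x_IF [m]."""
    profile = frame.sum(axis=0)        # 2000x1 integrated emissivity
    return float(np.argmax(profile)) * PX

def v_gs(frame_k, frame_k1, dt=DT):
    """v_GS(x_k) = (x_IF(t_k+1) - x_IF(t_k)) / (t_k+1 - t_k)."""
    return (x_if(frame_k1) - x_if(frame_k)) / dt
```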
## III Results
### Guided streamers interacting with a distant grounded surface electrode (GSEL)
#### Transmission and reflection resulting from incident positive guided streamers
We propose to investigate the propagation mechanisms of guided streamers generated along the capillary of Figure 3 before reaching the grounded surface electrode (GSEL) located 15 mm away from \(x_{\text{out}}\). No current monitor (CM\({}_{1}\) and/or CM\({}_{2}\)) is present in the experimental setup; only fast ICCD imaging is achieved, as illustrated by the photos compiled in Figure 7. The photos are taken at different time intervals between 0 ns (appearance of the capacitive current peak) and 3000 ns: a time lapse corresponding to \(\frac{3\upmu\text{s}}{20\upmu\text{s}}=15\) % of the pulse width. Since the guided streamer is generated at the rising edge of the voltage pulse and propagates along the increasing x coordinates, it is called "incident guided streamer" and it carries a positive charge, hence the notation \(GS_{i}^{+}\). The analysis of streamers propagation can be achieved following four stages:
1. From 0 to 424 ns, \(GS_{i}^{+}\) propagates from the inner HV electrode to the capillary's outlet, i.e. from \(x_{\rm{ring}}\) to \(x_{\rm{out}}\), as sketched in Figure 3. Its head is clearly visible while its tail - connecting the head to the HV electrode - appears as a vanishing region only detectable in the vicinity of the head. \(GS_{i}^{+}\) slows down as it nears the capillary's outlet.
2. From 425 ns to 469 ns, the streamer is transmitted (\(GS_{t}\)) out of the capillary to reach the grounded surface electrode after crossing an air gap of 15 mm. The head of the streamer interacts with the grounded surface electrode during almost 6 ns. Then, its optical emission vanishes while a reflected guided streamer (\(GS_{r}\)) is simultaneously forming in the air gap.
3. From 470 ns to 1099 ns, a reflection (or counter-propagation) of the guided streamer is observed (\(GS_{r}\)) inside the capillary while its emissivity increases. Simultaneously, the part of \(GS_{r}\) which remains outside the capillary does not extend anymore and its optical emission decreases. This phenomenon is in agreement with the simulations and experimental works of Babaeva et al. and Darny et al. respectively [34, 43].
4. From 1100 ns to 3000 ns, a second reflection is observed from GSEL: a guided streamer (\(GS_{R}\)) is emitted from GSEL to the capillary. However, \(GS_{R}\) cannot penetrate inside the capillary and it shows a much lower emissivity than that of \(GS_{t}^{+}\).
The positive or negative sign of \(GS_{t}\), \(GS_{r}\) and \(GS_{R}\) cannot be determined through fast imaging, but this issue is addressed in Section III.2.
Figure 6: Characteristic durations of the high voltage signal supplying the plasma gun and of the fast ICCD imaging signals.
#### ii.1.2 Transmission and reflection resulting from incident negative guided streamers
When a plasma gun is supplied with a sinusoidal high voltage, the positive streamers appear on the positive half-periods (or positive applied voltage) while the negative streamers appear on the negative half-periods (or negative applied voltage). In the case of a plasma gun supplied with ideal positive pulses, the situation would be totally different since the high voltage would always remain positive, as sketched in Figure 4a. This means that negative guided streamers would be generated on the falling edge (Figure 4b), where the voltage remains positive but decreases to 0, hence inducing a reverse electric field. However, in our experimental study, each positive pulse presents a backswing of -5 kV, as indicated in Figure 4f, which lasts over a few hundred ns. Therefore, it is expected that the negative guided streamers result from two components: (i) the falling edge of the positive pulse upon the first 30 ns and (ii) the -5 kV backswing upon the following hundreds of ns.
In Figure 8, fast ICCD imaging is performed on the falling edges of the voltage pulses to highlight the propagation profile of the negative incident guided streamers (\(GS_{i}^{-}\)). Although positive and negative guided streamers have propagation velocities of the same order of magnitude (\(\approx 1-2\times 10^{5}\)\(m/s\)), several discrepancies must be underlined: (i) \(GS_{i}^{-}\) shows a more diffuse profile (head and tail) than that of \(GS_{i}^{+}\), (ii) as evidenced by the photos at t = 100, 200 and 300 ns, \(GS_{i}^{-}\) shows a highly emissive region in the immediate vicinity of the HV electrode which vanishes as the streamer propagates, (iii) \(GS_{i}^{-}\) remains confined inside the capillary and never reaches GSEL, so that no reflected guided streamers can be obtained, at least in our experimental conditions. Given the time constants associated with the falling edge and the backswing, it is assumed that the highly emissive region that remains confined in the vicinity of the HV electrode is associated with the electric field reversal (falling edge) while the propagation of \(GS_{i}^{-}\) is associated with the -5 kV backswing.
### Guided streamers interacting with two distant grounded electrodes: the annular electrode (GAEL) and the surface electrode (GSEL)
#### ii.2.1 Electrical characterization
We propose to study the propagation mechanisms of guided streamers as they interact with two distant grounded electrodes: the grounded annular electrode (GAEL) coaxially centered on the capillary, followed by the grounded surface electrode (GSEL), as sketched in Figure 3. In this experimental setup, the two current monitors are present, so that the current profile of the guided streamers can be detected at two distinct locations. This profile can take the appearance of 2 or 3 current peaks among those listed in Table 1.
Figure 8: Photo sequence of an incident guided streamer carrying a negative charge (\(GS_{i}^{-}\)) and propagating along the capillary without being able to exit it. No backward propagation is observed. \(GS_{i}^{-}\) shows a highly emissive region at the immediate vicinity of the HV electrode. Measurements achieved without CMs.
Figure 7: Photo sequence showing an incident guided streamer (\(GS_{i}^{+}\)) propagating along the capillary before interacting with the grounded surface electrode (GSEL).
From these peaks, five characteristic durations can be defined, as sketched in Figure 9a:
* \(\tau_{\kappa\kappa}\) is the duration between the instant when the capacitive current peak is measured by CM\({}_{1}\) and the instant when the capacitive current peak is measured by CM\({}_{2}\), i.e. respectively the black and red \(\kappa\) peaks in Figure 9b.
* \(\tau_{\kappa\zeta}\) is the duration between the conductive current peak of a guided streamer (either \(\zeta_{i}^{+}\) or \(\zeta_{t}^{+}\)) and its related capacitive current peak (either \(\kappa_{i}^{+}\) or \(\kappa_{t}^{+}\)).
* \(\tau_{f}\) is the time required by \(GS_{t}\) to propagate forward (f) from CM\({}_{1}\) to GSEL, which is roughly the same as from CM\({}_{1}\) to CM\({}_{2}\).
* \(\tau_{b}\) is the duration required by \(GS_{r}\) to propagate backward (b) or counter-propagate from GSEL to GAEL, which is roughly the same as from CM\({}_{2}\) to CM\({}_{1}\).
* \(\tau_{f+b}\) corresponds to the time required by a guided streamer to propagate (forward, f) from GAEL to GSEL and then come back to GAEL (backward, b), that is to say the time required to achieve the GAEL-GSEL-GAEL round trip.
A more analytical description of these characteristic durations is proposed in Appendix IX.1.
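As an illustration added here (not from the original study), these durations reduce to simple differences between the peak timestamps once the \(\kappa\) and \(\zeta\) peaks have been extracted from the CM\({}_{1}\) and CM\({}_{2}\) records; the numerical inputs below are placeholders consistent with the orders of magnitude quoted for d = 3 cm:

```python
# Minimal sketch: the five characteristic durations as differences between
# the timestamps (in seconds) of the kappa and zeta peaks at CM1 and CM2.
def characteristic_durations(t_kappa1, t_kappa2, t_zeta_i, t_zeta_t, t_zeta_r):
    tau_kk = t_kappa2 - t_kappa1    # same capacitive peak at CM1 then CM2
    tau_kz_1 = t_zeta_i - t_kappa1  # kappa_i+ -> zeta_i+ (CM1)
    tau_kz_2 = t_zeta_t - t_kappa2  # kappa_t+ -> zeta_t+ (CM2)
    tau_f = t_zeta_t - t_zeta_i     # forward propagation, GAEL -> GSEL
    tau_b = t_zeta_r - t_zeta_t     # backward propagation, GSEL -> GAEL
    return tau_kk, tau_kz_1, tau_kz_2, tau_f, tau_b, tau_f + tau_b

# placeholder timestamps giving tau_f = 250 ns and tau_b = 80 ns (d = 3 cm)
print(characteristic_durations(0.0, 59e-9, 120e-9, 370e-9, 450e-9))
```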
Now that the characteristic peaks and durations are defined, the variation of their values can be studied for different locations of GAEL with respect to the HV electrode. Based on different values of d namely 1 cm (Figure 10a), 3 cm (Figure 10b), 5 cm (Figure 10c), 7 cm (Figure 10d) and 9 cm (Figure 10e), several observations are noteworthy:
* The capacitive peak (\(\kappa\)) is detected by CM\({}_{1}\) at x\({}_{\text{GAEL}}\) and by CM\({}_{2}\) at x\({}_{\text{GSEL}}\). These peaks, whose magnitudes are directly proportional to the derivative of the applied voltage, can be considered as temporal references since their positions do not depend on the value of d, contrarily to the conductive current peaks.
* Two conductive current peaks are measured by CM\({}_{1}\) while only one is detected by CM\({}_{2}\).
* For increasing values of d, an increase in \(\tau_{\kappa\zeta}\) is observed and consequently of \(\tau_{f}\), \(\tau_{b}\) and \(\tau_{f+b}\) as well.
\begin{table}
\begin{tabular}{c l} \hline Symbol & Designation \\ \hline \(\kappa_{i}^{+}\) & Capacitive current peak (\(\kappa\)) that is incident (i) with a positive charge (+) \\ \hline \(\zeta_{i}^{+}\) & Conductive current peak (\(\zeta\)) associated to the propagation of an incident (i) guided streamer whose electrical charge is positive (+) \\ \hline \(\zeta_{t}^{+}\) & Conductive current peak (\(\zeta\)) associated to the propagation of a transmitted (t) guided streamer whose electrical charge is positive (+) \\ \hline \(\zeta_{r}^{-}\) & Conductive current peak (\(\zeta\)) associated to the propagation of a reflected (r) guided streamer whose electrical charge is negative (-). The negative sign of this polarization is justified later in Figure 10. \\ \hline \end{tabular}
\end{table}
Table 1: Types of current peaks associated to the guided streamers and that can be evidenced on the oscilloscope.
In Figure 10, all the conductive current peaks associated with the reflected guided streamers are positive, which - at first glance - might suggest that the reflected streamers (\(GS_{r}\)) carry a positive charge (\(\zeta_{r}^{+}\)). However, as Figure 9c reminds us, depending on whether CM\({}_{1}\) is oriented in the conventional or opposite direction to the electron flow, it returns a positive or negative value of the same measured current (\(I_{tot}\)). In Figure 9c, whether before or after the 180\({}^{\circ}\)-flip of CM\({}_{1}\), the streamers are always oriented from left to right, which means that the orientation of the magnetic field remains unchanged, as well as the induced current circulating in the coil. However, the 180\({}^{\circ}\)-flip has changed the orientation of the closed contour and therefore the orientation of the current at the input and output of the coil. For this reason, any guided streamer that counter-propagates (\(GS_{r}\)) through CM\({}_{1}\) carries a negative charge (\(\zeta_{r}^{-}\)) if its conductive current peak appears positive.
#### ii.2.2 Relevant parameters obtained from electrical characterization
As plotted in Figure 11a, when d is increased from 1 to 9 cm, \(\tau_{\kappa\kappa}\) remains roughly constant with an average value of 59.2 ns \(\pm\) 2.3 ns. This means that changing d has no significant impact on the kinetics of a same capacitive peak measured at x\({}_{\text{GAEL}}\) and x\({}_{\text{GSEL}}\). The reason is that the capacitive peaks are specific to the plasma gun device and decorrelated from the streamers propagation; \(\tau_{\kappa\kappa}\) can therefore be considered as a temporal benchmark to characterize the kinetics of the conductive peaks.
The variation of \(\tau_{\kappa\zeta}\) versus d is represented in Figure 11b considering measurements performed by CM\({}_{1}\) (time lapse between \(\kappa_{i}^{+}\) and \(\zeta_{i}^{+}\)) and CM\({}_{2}\) (time lapse between \(\kappa_{t}^{+}\) and \(\zeta_{t}^{+}\)). A linear fit of the curves in Figure 11b indicates slopes with values of approximately 30 ns/cm. No datapoint is reported for CM\({}_{1}\) at d = 9 cm because the conductive current peak (\(\zeta_{i}^{+}\)) of the incident guided streamer (\(GS_{i}\)) is overlapped by the conductive current peak (\(\zeta_{r}^{-}\)) of the reflected guided streamer (\(GS_{r}\)), as evidenced in Figure 10e. Figure 11b shows that by moving GAEL (CM\({}_{1}\)) away from the HV electrode, the conductive current peak (\(\zeta_{i}^{+}\)) associated to \(GS_{i}\) is delayed since the capacitive current peak (\(\kappa_{i}^{+}\)) always appears at the same instant. The reason is that the further GAEL is from the HV electrode, the more time the incident streamer needs
Figure 10: Temporal profiles of current peaks associated with guided streamers and measured by CM\({}_{1}\) and CM\({}_{2}\) at x\({}_{\text{GAEL}}\) and x\({}_{\text{GSEL}}\), respectively, for d = 1, 3, 5, 7 and 9 cm (\(\kappa\): capacitive current peak, \(\zeta\): conductive current peak).
before reaching GAEL. Using CM\({}_{2}\) leads to the same delay observed between \(\zeta_{t}^{+}\) and \(\kappa_{t}^{+}\). As a result, the time gap between these two current monitors is always close to 170 ns (see vertical arrow) whatever the value of d.
The characteristic propagation times of the guided streamers can be deduced from Figure 11c. An increase in d drives to shorter values of \(\tau_{f}\) and \(\tau_{b}\), as well as of \(\tau_{f+b}\), while always keeping the relation \(\tau_{f+b}=\tau_{f}+\tau_{b}\). The values of \(\tau_{b}\) are always lower than those of \(\tau_{f}\), meaning that the velocity of a guided streamer is directly correlated with its electrical charge: a positive guided streamer transmitted through GAEL propagates slower than a negative guided streamer reflected by GSEL. When d is increased, the GAEL-GSEL distance is necessarily decreased, so that the characteristic propagation time associated to the round-trip (\(\tau_{f+b}\)) is reduced as well.
#### ii.2.3 Fast ICCD characterization
In addition to the previous electrical characterizations, a fast ICCD imaging study is performed to track the propagation and counter-propagation kinetics of the guided streamers. While Figure 5 indicates that these phenomena typically occur on a time scale of a few \(\mu\)s (roughly 13 \(\mu\)s \(-\) 10 \(\mu\)s = 3 \(\mu\)s), the ICCD results obtained in Figure 7 demonstrate that transmitted and reflected streamers can be analyzed on shorter timescales, here 850 ns. As detailed in Section II.3, the present fast ICCD characterization has been achieved using L\({}_{\text{kin}}\) = 850 scans with a delay \(\delta\) incremented every 1 ns.
Figure 12 is divided into 5 subfigures with d = 1 cm (Figure 12a), d = 3 cm (Figure 12b), d = 5 cm (Figure 12c), d = 7 cm (Figure 12d) and d = 9 cm (Figure 12e). In each of these subfigures, the integrated emission of the guided streamers is plotted versus time, considering different \(x_{k}\) locations along the x axis. Each of these profiles contains at least one main peak associated with the most emissive part of the streamer's head, typically the ionization front. This main peak is considered as a reference both for locating the streamer within the capillary and for normalizing the entire integrated emission profile between 0 and 1.
For d = 1 cm, all the integrated emission profiles show two types of guided streamers: the transmitted guided streamer (\(GS_{t}\)) after its passage through GAEL and the counter-propagation of a guided streamer (\(GS_{r}\)) which is reflected by GSEL. \(GS_{t}\) appears at \(t\) = 30 ns with the highest intensity (here normalized to 1) and shifts towards the increasing values of x. On the contrary, \(GS_{r}\) has a much lower intensity and counter-propagates. Unsurprisingly, the time interval separating these two peaks narrows with increasing values of x. By correlating these integrated emission profiles with the current intensity profiles of Figure 10, it turns out that the first and second emission peaks correspond to the \(\zeta_{t}^{+}\) and \(\zeta_{r}^{-}\) conductive peaks respectively.
For d = 3, 5, 7 and 9 cm, the integrated emission profiles can be performed for \(x<x_{GAEL}^{-}\) and \(x>x_{GAEL}^{+}\). In that latter case, the profiles are similar to those obtained for d = 1 cm, i.e. a first peak corresponding to (\(GS_{t}\), \(\zeta_{t}^{+}\)) and the second one to (\(GS_{r}\), \(\zeta_{r}^{-}\)). The influence of d on the time separating these two peaks is quite clear and discussed in Section IV. Interestingly, the integrated emission profiles measured for \(x<x_{GAEL}^{-}\) reveal not only a single peak associated to \(GS_{i}\) but at least one other peak that evidences the existence of at least one reflected guided streamer. In Figure 12b (d = 3 cm), the peak corresponding to the incident guided streamer (\(GS_{i}\)) shows an intensity of 1, followed by two smaller peaks corresponding to reflected guided streamers: \(GS_{r^{\prime}}\) (closer to \(GS_{i}\) with an intensity of 0.5-0.7) and \(GS_{r^{\prime\prime}}\) (further away from \(GS_{i}\) with a relative intensity of 0.2-0.3). While \(GS_{r^{\prime\prime}}\) is only visible for d = 3 cm, \(GS_{r^{\prime}}\) is still clearly detected for the higher values of d in Figures 12c, 12d and 12e. Besides, the more d increases, the further \(GS_{r^{\prime}}\) can move away from \(GS_{i}\).
The peak associated to \(GS_{r^{\prime}}\) should not be confused with a \(GS_{r}\) peak, the latter one resulting from a reflection on GSEL. The peak associated to \(GS_{r^{\prime}}\) is measured at an instant that comes much earlier, before \(GS_{r}\) is generated. As an example, in Figure 12b, \(GS_{i}\) appears at t = 5 ns, is transmitted at t = 70 ns and is reflected by GSEL at t \(\approx\) 450 ns. Much earlier, the \(GS_{r^{\prime}}\) and \(GS_{r^{\prime\prime}}\) peaks have
Figure 11: Influence of d on (a) \(\tau_{\kappa\kappa}\) (temporal interval of a same capacitive peak measured at x\({}_{\text{GAEL}}\) and x\({}_{\text{GSEL}}\)), (b) \(\tau_{\kappa\zeta}\): temporal interval between capacitive and conductive peaks versus d, (c) Characteristic propagation times of the guided streamers versus d.
appeared at 50 and 85 ns respectively. In our experimental setup, \(GS_{r^{\prime}}\) and \(GS_{r^{\prime\prime}}\) can only be detected by fast ICCD imaging. Electrical analysis would only be possible by placing another current monitor between the HV electrode and GAEL.
The Figures 12a, 12b, 12c, 12d and 12e are also plotted as contour plots in Appendix IX.2.
#### ii.2.4 Relevant parameters obtained from ICCD characterization
Several parameters can be extracted from the previous integrated emission profiles, namely the characteristic propagation times (as already achieved from electrical study in Figure 10), the ratios of the integrated emission peaks and the velocity of the guided streamers in the immediate vicinity of GAEL and GSEL.
While electrical characterizations (Sections III.2.1 and III.2.2) only allow the measurement of characteristic propagation times between CM\({}_{1}\) (GAEL) and CM\({}_{2}\) (GSEL), fast ICCD imaging makes it possible to accurately localize the guided streamers at any \(x\) coordinate of the transparent capillary, except in the region hidden by CM\({}_{1}\) which is 16 mm thick. From Figure 12a to Figure 12e, the instant at which the head of a guided streamer occupies the x\({}_{\text{GAEL}}\) position can be approximated by \(t=\frac{t(x_{GS}=x_{GAEL}^{+})+t(x_{GS}=x_{GAEL}^{-})}{2}\). Figure 13a
indicates the durations required by the guided streamers to propagate from \(x_{k}\) to \(x_{GSEL}\) (filled symbols) as well as to counter-propagate from \(x_{GSEL}\) to \(x_{k}\) (open symbols). As an example, for d = 5 cm, a guided streamer at the \(x_{2}\) location requires 375 ns to reach GSEL while in the case of a counter-propagation from GSEL to reach the \(x_{2}\) location, 130 ns are expected (green curve). Besides, it appears that the forward propagation times are always longer than the backward ones. As an example, for d = 1 cm and x = 2 cm, a duration of 300 ns is required for forward propagation versus only 200 ns for backward propagation. This figure also permits to distinguish the cases where \(x_{k}\neq\) d (circle symbols) and where \(x_{k}\) = d (square symbols). This latter case corresponds to the forward/backward propagation paths between CM\({}_{1}\) and CM\({}_{2}\): they can be used to determine the characteristic propagation times as previously achieved in the electrical characterization. Hence, \(\tau_{f}\) is obtained by measuring the difference between the instant at which \(GS_{t}\) reaches GSEL and the instant at which \(GS_{t}\) (or \(GS_{i}\)) was located at GAEL. Similarly, \(\tau_{b}\) is the difference between the instant when \(GS_{r}\) reaches GAEL and the instant when \(GS_{t}\) reached GSEL. These particular values are plotted versus d in Figure 13b which indicates a decrease in \(\tau_{f}\) and \(\tau_{b}\) as a function of d. This illustrates that the further GAEL is from the HV electrode, the faster \(GS_{t}\) propagates through the capillary. Thus, while \(\tau_{f}\) drops from 310 ns (d = 1 cm) to 250 ns (d = 9 cm), \(\tau_{b}\) is generally smaller since it varies from 180 ns (d = 1 cm) to 70 ns (d = 9 cm).
From Figure 12, it is possible to measure the integrated emission of each peak, whether for the incident guided streamer (\(I_{i}\)), the transmitted guided streamer (\(I_{t}\)) or the reflected guided streamers (\(I_{r}\) and \(I_{r^{\prime}}\)). Then, their values can be correlated by plotting the \(I_{r^{\prime}}/I_{i}\) ratio (Figure 14a) and the \(I_{r}/I_{t}\) ratio (Figure 14b) on a log scale as a function of x and for different values of d. For a given position of GAEL (d constant) in Figure 14a, the \(I_{r^{\prime}}/I_{i}\) ratio becomes larger for increasing values of x because the reflected guided streamer is detected closer to GAEL, i.e. no more than 1 cm away. As an example with d = 7 cm, \(\left(\frac{I_{r^{\prime}}}{I_{i}}\right)_{x=2cm}=8\times 10^{-3}\) while \(\left(\frac{I_{r^{\prime}}}{I_{i}}\right)_{x=8cm}=2\times 10^{-1}\). The same behavior is observed in Figure 14b with d = 3 cm for example: \(\left(\frac{I_{r}}{I_{t}}\right)_{x=8cm}=1.1\times 10^{-1}\) while \(\left(\frac{I_{r}}{I_{t}}\right)_{x=10cm}=5.4\times 10^{-1}\). This latter observation is consistent with the works of Darny et al. where the reflected guided streamers always present a higher magnitude than that of the transmitted guided streamers [43]. Besides, for a given \(x_{k}\) coordinate, a decrease in d drives to higher \(I_{r^{\prime}}/I_{i}\) ratios but lower \(I_{r}/I_{t}\) ratios.
In addition to the characteristic propagation times in Figure 13 and to the emission ratios in Figure 14, a third parameter of interest is the velocity of the streamers before/after interacting with each of the two distant grounded electrodes. To evaluate these velocities, we have first plotted in Figure 15a the profiles of the guided streamers in a time-space diagram, for different values of d. As sketched in the inset and as verified for any experimental profile, an incident guided streamer splits into a first reflected (r\({}^{\prime}\)) guided streamer, eventually a second reflected (r\({}^{\prime\prime}\)) guided streamer (not
Figure 14: (a) Emission ratio of the reflected/incident guided streamers interacting with GAEL, (b) Emission ratio of the reflected/transmitted guided streamers interacting with GSEL.
Figure 13: (a) Propagation times required by guided streamers to reach \(x_{GSEL}\) from \(x_{k}\) locations (forward, filled symbols) or to reach specific \(x_{k}\) locations from \(x_{GSEL}\) (backward, open symbols). The square symbols indicate the condition \(x_{k}\) = d. (b) Forward, backward and round-trip propagation times measured as a function of d, i.e. between GAEL (variable location) and GSEL (fixed location).
represented here for the sake of clarity) and a transmitted (t) guided streamer that appears after passing through GAEL. Then, this guided streamer impinges on GSEL, driving to an additional reflection (r). Each profile is composed of experimental data (solid line) and interpolated values (dashed line) when the streamer (counter)propagates through GAEL. All the profiles, whether forward or backward, are non-linear, hence evidencing the existence of acceleration and deceleration regions. As an example, an acceleration region of \(GS_{t}\) is clearly visible in the 15 mm air gap separating the capillary's outlet from GSEL (\(10.0\;cm<x<11.5\;cm\)). Besides, the location of GAEL has a strong influence on the guided streamers' dynamics. As an example, the duration required by the guided streamers to bridge the HV electrode to GSEL is only 335 ns for d = 1 cm and becomes as high as 530 ns for d = 9 cm. From this figure, it is possible to measure the spatio-temporal coordinates of the guided streamers (i) in the immediate vicinity of GAEL, at \(x_{GAEL}^{-}\) and \(x_{GAEL}^{+}\) and (ii) at the surface of GSEL at \(x_{GSEL}\), as sketched in Figure 3. As a result, the velocities of the incident guided streamer (\(v_{GS_{i}}\)) at \(x_{GAEL}^{-}\) and of the transmitted guided streamer (\(v_{GS_{t}}\)) at \(x_{GAEL}^{+}\) can be plotted as a function of d in Figure 15b. Similarly, the velocities of the transmitted (\(v_{GS_{t}}\)) and reflected (\(v_{GS_{r}}\)) guided streamers interacting with the grounded metal target are plotted versus d in Figure 15c.
In Figure 15b, the velocity of the guided streamers is always reduced after passing through GAEL, e.g. decreasing from \(v_{GSL}=960\;km/s\) to \(v_{GSL}=300\;km/s\) at d = 3 cm. Interestingly, this trend is even more pronounced for the increasing values of d, so that the value of \(v_{GSL}\) can change by 50 %. On the contrary, such trends are not observed when \(GS_{t}\) interacts with GSEL (grounded surface electrode): as shown in Figure 15c, the velocities are always significantly higher for \(GS_{r}\). Hence, when GAEL is placed 3 cm away from the HV electrode, \(GS_{t}\) impinges the grounded metal target at \(v_{GSL}=400\;km/s\) while its reflected streamer leaves GSEEL at a higher velocity: \(v_{GSL}=580\;km/s\). It is also worth stressing that the reflections velocities are much higher for streamers interacting with GAEL than GSEL. Indeed, if we consider the condition d = 5 cm, it turns out that \(v_{GSL}=1400\;km/s\) while \(v_{GSL}=600\;km/s\).
## 4 Discussion
### Complementarity of electrical and optical analysis
#### 4.1 Characteristic propagation times
In this study, we have demonstrated how to detect reflected and transmitted guided streamers combining two analysis techniques that rely on fundamentally different physical principles: the first measures the electrical properties of guided streamers and the other their optical properties. To assess this complementarity, Figure 16 shows the characteristic propagation times obtained by electrical analysis (Figure 11c) and by optical analysis (Figure 13b) for several values of d. If the trends remain the same, it appears that a quasi-constant temporal gap is measured, of the order of 50-100 ns.
One reason that could have accounted for this delay is the thermal drift of not only the voltage generator but also the ICCD camera between the times they are turned on and their period of use. To overcome this artifact, we always let the cold plasma jet run for 15 minutes before launching the acquisitions by fast ICCD imaging (which always remained in operation, thus in thermal equilibrium). This same duration of 15 minutes was respected before launching any new measurement, again to allow both the voltage generator and the plasma source to reach this same thermal equilibrium.
Therefore, two other assumptions have been proposed to explain this 50-100 ns discrepancy. First, this discrepancy could result from a different number of acquisitions between (i) the electrical approach whose statistic relies on a triplicate of 3 oscillograms, i.e. 3 acquisitions and (ii) the optical approach whose statistic is performed on a triplicate of 3 trains of guided streamers, each train containing N\({}_{\text{os}}\) = 25 000 guided streamers (Figure 6). Second, the values obtained by electrical characterization could be slightly underestimated due to the use of the Butterworth low-pass filter (Section 2.2.3.). Despite these slight discrepancies, the trends remain the same and consolidate our conclusions: (i) the reflection of the guided streamers is always faster than that of the transmitted streamers, (ii) the closer GAEL is to GSEL, the shorter T\({}_{b}\) is.
Figure 15: (a) Spatio-temporal diagram of the incident, reflected and transmitted guided streamers (b) Velocity of the guided streamers before/after passing through GAEL (grounded annular electrode). (c) Velocity of the guided streamers before/after interacting with GSEL (grounded surface electrode). All the curves are plotted for d = 1, 3, 5, 7 and 9 cm. (d: estimation by extrapolation).
#### 4.1.2 Electrical charge of transmitted and reflected streamers
Fast ICCD imaging alone cannot provide information about the electrical charge carried by the heads of guided streamers; it just allows to decipher whether these guided streamers are incident, transmitted or reflected (Figure 7, Figure 8). Likewise, the electrical analysis alone allows only to identify current peaks (either capacitive or conductive) when guided streamers pass through the current monitors, at only two positions in our experimental set-up. This being said, electrical characterization has a major advantage that deserves to be emphasized: it allows us to identify the polarization (positive/negative) of the conductive current peaks associated with the streamers and thus the overall electrical charge they carry in their heads. Thus, by combining the electrical and optical approaches, a conductive current peak \(\zeta_{t}^{+}\) associated with a transmitted guided streamer \(GS_{t}\) can be written as: \(GS_{t}^{+}\). Similarly, a conductive current peak \(\zeta_{r}^{-}\) associated with a reflected guided streamer \(GS_{r}\) can be written as: \(GS_{r}^{-}\).
#### 4.1.3 Highlighting multiple reflection phenomena
While the electrical analysis alone only shows one type of reflection (the \(r\) reflection corresponding to the interaction of \(GS_{t}^{+}\) on the grounded metal target), the fast ICCD imaging reveals 4 types of reflection:
* r: reflection corresponding to the counter-propagation of a negative guided streamer initiated by \(GS_{t}^{+}\) at the capillary's outlet. This reflection is the only one to be detected both optically and electrically (Figure 10). Our experimental results are in agreement with the works of Babaeva et al. modelling the counter-propagation of a streamer approaching and reflecting a metal grounded target [34].
* R: reflection corresponding to the counter-propagation of a guided streamer initiated from GSEL and appearing after \(GS_{r}^{-}\). The R reflection remains confined in the 15 mm gap and cannot reach the outlet's capillary. If the polarity of this streamer cannot be measured by electrical analysis, it can reasonably be assumed to be negative, as previously demonstrated with the counter-propagation of guided streamers with a negative charge. If so, the following notation can be used: \(GS_{R}^{-}\).
* r: reflection corresponding to the counter-propagation of a guided streamer initiated from the hollow region of GAEL (\(GS_{r}^{-}\)). Its propagation velocity is very high and is all the greater the closer GAEL is to the HV electrode. Given our experimental setup, this type of reflection can only be demonstrated by fast ICCD imaging. However, its existence could also be demonstrated by electrical analysis by placing a third current monitor between the HV electrode and GAEL (see Figure 3). Here, since \(GS_{r}\) is only detected by fast ICCD imaging, the sign of its electrical charge cannot be deduced. However, it is assumed to be negative, hence the notation \(GS_{r}^{-}\). This hypothesis relies on the previous result (\(r\) reflection on GSEL). Besides, Figure 15 indicates that the reflected streamers are always faster than the incident/transmitted ones: \(v_{GST^{-}}>v_{GS_{t}^{+}}\) but also \(v_{GST^{-}}>v_{GST^{+}}\). This result may appear in contradiction with previous research works where negative guided streamers are slower than the positive ones [44, 45]. However, in our case the reflections result from the interaction with a grounded electrode and in a counter-propagation configuration, i.e. in the ionic trace of a previous guided streamer.
* r: reflection corresponding to the counter-propagation of a guided streamer initiated from GAEL and appearing after \(GS_{r}\). The existence of this streamer can only be demonstrated by fast ICCD imaging and under the condition d = 3 cm. Demonstrating its existence at d = 1 cm since is impossible since it is hidden by GAEL which is 16 mm thick. For d = 5 cm, \(GS_{r}\), no longer exists or has been considerably attenuated; this is the case for \(GS_{r}\), whose integrated emission is strongly reduced for d ranging from 3 to 5 cm (Figure 12c). The study of \(GS_{r}\)- would deserve a dedicated study to understand the mechanisms underlying its generation.
### 4.2 Propagation mechanisms & Equivalent electrical model
#### 4.2.1 Electron sources
In this study, the experimental device has two distant ground electrodes (GAEL and GSEL), whose electrical potential is always 0 V. It should be remembered that an electrical ground corresponds to a reservoir containing an infinite number of free electrons and whose electrical potential is always zero Volt. Thus, if a flow of positive charges (total charge Q*) is transferred to the ground, then it generates an equivalent flow of electrons (total charge Q*) to maintain its zero potential (\(V_{ground}\) = 0 V). In our experimental setup, the distant grounded electrodes are not a mandatory for the generation and propagation of guided streamers. Whether it is positive or negative, a guided streamer can propagate from one electrode to another through photo-ionization processes that take place in its head to generate electrons. In the case of a positive guided streamer, the electron density obtained is lower than that of the positive ions produced, so that the overall charge of the guided streamer's head is positive (Q*). When this streamer comes into contact with a grounded electrode, the latter can release a certain number of electrons (global charge Q*) in streamer's tail.
Figure 16: Characteristic propagation time as a function of d measured using the electrical or optical approach.
#### a.2.2 Propagation from the HV electrode to Gael
As it propagates, \(GS_{t}^{+}\) leaves behind an ionic trace in the volume of the capillary as well as a positive polarization on the inner walls. This surface polarization is sketched in Figure 17a which introduces an equivalent electrical model of the guided streamers. During its passage through GAEl, i.e. from \(x_{GAEL}^{+}\) to \(x_{GAEL}^{+}\) (Figure 3), this surface polarization is considerably reduced (or even cancelled). The electric charge carried by \(GS_{t}^{+}\) splits in two components: (i) \(Q_{t}^{+}\) carried by the guided streamer transmitted by GAEl and (ii) \(Q_{peak}^{+}\) which leaves the capillary as a capacitive leakage current \(I_{x}=\frac{dQ_{peak}^{+}}{dt}\) by passing through which stand for the capacitances of the capillary thickness and the air gap (separating the capillary from GAEl) respectively. Since the streamers are generated at atmospheric pressure and the excited/ionized particles (electrons, positive ions) are governed by energy distribution functions, it is assumed that GAEl locally induces a potential barrier that separates the most energetic particles (particles that can pass through GAEl and thus constitute the transmitted streamer of charge \(Q_{t}^{+}\)) from the least energetic particles (particles that cannot pass through GAEl and that are dissipated in the form of a capacitive leakage current involving \(Q_{peak}^{+}\)). To maintain its potential at \(\omega\), GAEl must provide a flow of electrons (of overall charge \(Q_{GAEL}^{-}\)) that returns into the capillary, or more precisely into the ionic tail of \(GS_{t}^{+}\). Then, \(Q_{GAEL}^{-}\) gives rise to a negative space charge zone characterized by an electron density much higher than the ionic density. In the model of Figure 17, the open/closed states of the \(K_{z}\)(t) and \(K_{z}\)(t) switches must reverse, so that electrons can move to the increasing potentials, allowing this negative space charge region to counter-propagate towards the HV electrode as a negative guided streamer (\(GS_{rr}^{+}\)).
#### a.2.3 Propagation from Gael to Gsel
When \(GS_{t}^{+}\) propagates from GAEl to Gsel, it first interacts with the inner walls of the dielectric capillary, leaving behind an ionic trace in the volume of the capillary and a positive polarization of the inner walls, as shown in Figure 17b. Then, the head of \(GS_{t}^{+}\) is separated from GSEl by a layer of air that can be modeled by \(C_{air}^{+}(x,t)\). As the streamer approaches GSEl, the air thickness decreases, thus increasing the value of this capacitance. When \(GS_{t}^{+}\) reaches GSEl, the capacitance becomes infinite and therefore behaves as a simple short circuit. \(GS_{t}^{+}\) directly impinges on GSEl to transfer its charge \(Q^{+}\). Still to respect the condition \(V_{ground}=\) OV, GSEl returns a flow of electrons, of overall charge \(Q_{GSEL}^{-}\) and of very high mobility. Unlike the case where the interaction operates with GAEl, we assume here that the totality of Q* is transferred to GSEl.
#### a.2.4 Propagation from Gsel to the HV electrode
The negative space charge region resulting from the transfer of \(Q_{GSEL}^{-}\) into the 15 mm gap gives rise to a negative guided streamer (\(GS_{T}^{-}\)) which can then counter-propagate towards GAEl and then the HV electrode. As \(GS_{T}^{-}\)counter-propagates, the value of \(C_{a}\)(x,t) increases.
### a.3 Reflected guided streamer versus guided return stroke
As sketched in Figure 18, a streamer is an ionization wave that propagates longitudinally and that transports electrical charges as well as radiative species over long distances. Thanks to the ionizing mechanisms directly generated in its pre-head region (e.g. photoionization), a streamer can propagate in a gaseous environment even if it does not contain any charged particle. Hence, in the case of a positive guided streamer propagating along a capillary from the anode to the cathode, two important characteristics are noteworthy: (i) the positive charges of the streamer are left on the inner walls during propagation and (ii) a second wave is generated after the positive streamer has interacted with the anode. Then, CMi (i.e. GAEl) can show the same positive current peak following the passage of the second wave depending on whether it has a negative charge as it propagates backwards (reflected guided streamer) or a positive charge as it propagates forwards (return stroke) [46].
Figure 17: Equivalent electrical model explaining the (counter)propagation of guided streamers in a plasma gun device interacting with two distant grounded electrodes (GAEL and GSEl).
In the case of the **reflected guided streamer**, the counter-propagation results from a local electric field induced by two space charge regions: the negative charged region of the reflected wave and the residual positive charges that line the inner walls of the capillary. This mechanism alone would be sufficient to generate the counter-propagation, hence making photo-ionization no more a mandatory.
In the case of the **return stroke**, the propagation mechanics is different, as sketched in Figure 18. Return stroke corresponds to the most luminous and noticeable part of a lightning discharge [47]. Once a conductive channel bridges the air gap between a negative charge excess in the cloud (cathode) and the positive surface charge excess on the ground (anode), a large drop in resistance is observed across the lightning channel. In the case of a negative channel (or negative stepped leader), electrons accelerate rapidly as a result in a zone beginning at the point of attachment, which expands across the entire leader network at up to one third of the speed of light [48].
In atmospheric physics, return strokes are therefore non-guided ionization waves which require negative charge channels to counter-propagate. In our experimental setup, the situation is quite different since the ionization wave is guided and the conductive channel is composed of positive charges. This makes a main difference with a conventional return stroke, although the existence of (guided) return strokes propagating in positively charged channels may also be considered (Figure 18) [49]. In that latter case, CM1 would measure a forward flow of positive charges (instead of a backward flow of negative charges) with a velocity expected to be close to that of the first positive guided streamer. However, Figure 11c states exactly the opposite with \(\tau_{b}<\tau_{f}\), i.e. the backward wave is faster than the forward one. For this reason, the assumption of a reflected negative guided streamer has appeared more relevant than the guided return stroke.
## V Conclusion
A DC high-voltage power supply has been utilized to generate trains of positive guided streamers in a plasma gun device. Its dielectric capillary, supplied in helium, interacts with two distant electrodes: a grounded annular electrode (coaxially centered around the capillary) and a grounded surface electrode (15 mm away from capillary's outlet). By combining electrical analysis and fast ICCD imaging, we have developed a methodological approach that allowed to demonstrate novel results:
* Guided streamers can be reflected by passing through the air gap of a grounded annular electrode, without any kind of impact on solid-state target.
* Reflected guided streamers carry a negative charge. Although predicted by theory, this result is now demonstrated using current monitor CM1 and using Ampere's right-hand grip rule.
* Four types of reflections have been identified: Two reflections following an impact with the grounded metal surface: one with sufficiently high kinetic energy to counter-propagate over long distances and enter the capillary as a negative guided streamer i, and the other with lower kinetic energy, so that the reflected negative streamer i can only counter-propagate in the 15 mm gap, without being able to penetrate the capillary. In addition, two other reflections have been evidenced once the incident
Figure 18: Synoptic diagram explaining the propagation mechanisms of streamers (whether positive or negative) and return strokes (whether along a negative or positive channel).
guided streamer has interacted with the grounded annular electrode. These \(r^{\prime}\) and \(r^{\prime}\) reflected guided streamers are only detectable by fast tCCD imaging and their electrical charge is assumed to be negative.
* Streamers propagating backward (reflection) are faster than those propagating forward (incident or transmitted), especially after a reflection involving GAEL (rather than GSEL). Velocities as high as 3000 km/s are thus obtained for d = 3 cm (Figure 15b).
* GAEL is always located between the HV electrode and GSEL. Its relative position has a significant influence on the optical emission of the guided streamers but also on their characteristic propagation times. Hence, bringing GAEL closer to GSEL, contributes to significantly reduce \(\tau_{f}\) (typically from 160 ns to 50 ns), as well as \(\tau_{b}\) (typically from 250 ns to 200 ns) and therefore \(\tau_{f+b}\) (typically from 420 ns to 250 ns).
* Whatever the type of distant grounded electrode, the amplitude of the reflected streamers decreases exponentially. For a given position (\(\kappa_{i}\)), the more GAEL is close to the HV electrode, the more the amplitude of the streamers reflected by GSEL decreases (Figure 14b) while the amplitude of the streamers reflected by GAEL increases (Figure 14a).
Based on these results, an equivalent electrical model is proposed to better understand guided streamers dynamics. In this model, the grounded electrodes are defined as reservoirs containing an infinite number of free electrons, releasable at any time to always maintain a 0 Volt potential, especially when GAEL or GSEL is exposed to a flow of positive charges from the incident guided streamer.
Although this experimental work has allowed to investigate the physics of guided streamer propagation in a purely fundamental framework, it could have strong spin-offs in applied research. For example, in plasma medicine, the innovation of therapeutic plasma sources could require on-board sensors such as current monitors to measure the currents in real time during the patient therapy.
## VI. Acknowledgements
The authors would like to thank Sorbonne Universite and the Ile-de-France Region for supporting fundamental research in plasma physics by co-funding the \({}^{\text{P}2}\)ABIOMEDE platform project (Sesame 2016).
## VII. Data Access statement
The data that support the findings of this study are available upon reasonable request from the authors.
|
2303.11864 | Asymptotic expansions for partitions generated by infinite products | Recently, Debruyne and Tenenbaum proved asymptotic formulas for the number of
partitions with parts in $\mathcal{L}\subset\mathbb{N}$ ($\gcd(\mathcal{L})=1$)
and good analytic properties of the corresponding zeta function, generalizing
work of Meinardus. In this paper, we extend their work to prove asymptotic
formulas if $\mathcal{L}$ is a multiset of integers and the zeta function has
multiple poles. In particular, our results imply an asymptotic formula for the
number of irreducible representations of degree $n$ of $\mathfrak{so}{(5)}$. We
also study the Witten zeta function $\zeta_{\mathfrak{so}{(5)}}$, which is of
independent interest. | Walter Bridges, Benjamin Brindle, Kathrin Bringmann, Johann Franke | 2023-03-21T14:07:50Z | http://arxiv.org/abs/2303.11864v1 | # Asymptotic expansions for partitions generated by infinite products
###### Abstract.
Recently, Debruyne and Tenenbaum proved asymptotic formulas for the number of partitions with parts in \(\Lambda\subset\mathbb{N}\) (\(\gcd(\Lambda)=1\)) and good analytic properties of the corresponding zeta function, generalizing work of Meinardus. In this paper, we extend their work to prove asymptotic formulas if \(\Lambda\) is a multiset of integers and the zeta function has multiple poles. In particular, our results imply an asymptotic formula for the number of irreducible representations of degree \(n\) of \(\mathfrak{se}(5)\). We also study the Witten zeta function \(\zeta_{\mathfrak{se}(5)}\), which is of independent interest.
Key words and phrases:asymptotic formula, Circle Method, partitions, polygonal numbers, Witten zeta functions 2020 Mathematics Subject Classification: 11E45, 11M41, 11P82
## 1. Introduction and statement of results
### The Circle Method
In analytic number theory and combinatorics, one uses complex analysis to better understand properties of sequences. Suppose that a sequence \((c(n))_{n\in\mathbb{N}_{0}}\) has moderate growth and the _generating function_
\[f(q):=\sum_{n\geq 0}c(n)q^{n},\]
is holomorphic in the unit disk with radius of convergence \(1\). Via Cauchy's integral formula one can then recover the coefficients from the generating function
\[c(n)=\frac{1}{2\pi i}\int_{\mathcal{C}}\frac{f(q)}{q^{n+1}}dq, \tag{1.1}\]
for any closed curve \(\mathcal{C}\) contained in the unit disk that surrounds the origin exactly once counter-clockwise. The so-called Circle Method uses the analytic behavior of \(f(q)\) near the boundary of the unit circle to obtain asymptotic information about \(c(n)\). For instance, if the \(c(n)\) are positive and monotonically increasing, it is expected that the part close to \(q=1\) provides the dominant contribution to (1.1). These parts of the curve are the _major arcs_ and the complement are the _minor arcs_. To obtain an asymptotic expansion for \(c(n)\), one then evaluates the major arc to some degree of accuracy and bounds the minor arcs. Depending on the function \(f(q)\), both of these tasks present a variety of difficulties.
In the present paper, we are interested in infinite product generating functions of the form
\[f(q)=\prod_{n\geq 1}\frac{1}{(1-q^{n})^{a(n)}}.\]
Such generating functions are important in the theory of partitions, but also arise, for example, in representation theory. If \(a(n)\) is a "simple" sequence of nonnegative integers and \(f\) is "bounded" away from \(q=1\), then Meinardus [28] proved an asymptotic expression for \(c(n)\). Debruyne and Tenenbaum [15] eliminated the technical growth conditions on \(f\) by adding a few more assumptions on the \(a(n)\), which made their result more applicable. Our main results, Theorems 1.4 and 4.4, yield asymptotic expansions given mild assumptions on \(a(n)\) and have a variety of new applications.
### The classical partition function
Let \(n\in\mathbb{N}\). A weakly decreasing sequence of positive integers that sum to \(n\) is called a _partition_ of \(n\). The number of partitions is denoted by \(p(n)\). If \(\lambda_{1}+\ldots+\lambda_{r}=n\), then the \(\lambda_{j}\) are called the _parts_ of the partition. The partition function has no elementary closed formula, nor does it satisfy any finite order recurrence. However, setting \(p(0):=1\), its generating function has the following product expansion
\[\sum_{n\geq 0}p(n)q^{n}=\prod_{n\geq 1}\frac{1}{1-q^{n}}, \tag{1.2}\]
where \(|q|<1\). In [21], Hardy and Ramanujan used (1.2) to show the asymptotic formula
\[p(n)\sim\frac{1}{4\sqrt{3}n}e^{\pi\sqrt{\frac{2n}{3}}},\qquad n\to\infty, \tag{1.3}\]
which gave birth of the Circle Method. With Theorem 1.4 we find, for certain constants \(B_{j}\) and arbitrarily \(N\in\mathbb{N}\),
\[p(n)=\frac{e^{\pi\sqrt{\frac{2n}{3}}}}{4\sqrt{3}n}\left(1+\sum_{j=1}^{N}\frac {B_{j}}{n^{\frac{j}{2}}}+O_{N}\left(n^{-\frac{N+1}{2}}\right)\right).\]
Similarly, one can treat the cases for \(k\)-th powers (in arithmetic progressions), see [15].
### Plane partitions
Another application is an asymptotic formula for plane partitions. A _plane partition of size \(n\)_ is a two-dimensional array of non-negative integers \(\pi_{j,k}\) for which \(\sum_{j,k}\pi_{j,k}=n\), such that \(\pi_{j,k}\geq\pi_{j,k+1}\) and \(\pi_{j,k}\geq\pi_{j+1,k}\) for all \(j,k\in\mathbb{N}\). We denote the number of plane partitions of \(n\) by \(\operatorname{pp}(n)\). MacMahon [23] proved that
\[\sum_{n\geq 0}\operatorname{pp}(n)q^{n}=\prod_{n\geq 1}\frac{1}{\left(1-q^{n} \right)^{n}}.\]
Using Theorem 1.4, we recover Wright's asymptotic formula [35]
\[\operatorname{pp}(n)=\frac{C}{n^{\frac{25}{36}}}e^{A_{1}n^{\frac{2}{3}}} \left(1+\sum_{j=2}^{N+1}\frac{B_{j}}{n^{\frac{2(j-1)}{3}}}+O_{N}\left(n^{- \frac{2(N+1)}{3}}\right)\right),\]
where the constants \(B_{j}\) are explicitly computable,
\[C:=\frac{\zeta(3)^{\frac{7}{36}}e^{\zeta^{\prime}(-1)}}{2^{\frac{11}{36}} \sqrt{3\pi}},\qquad A_{1}:=\frac{3\zeta(3)^{\frac{1}{3}}}{2^{\frac{2}{3}}}\]
with \(\zeta\) the Riemann zeta function.
### Partitions into polygonal numbers
The \(n\)-th \(k\)_-gonal number_ is given by (\(k\in\mathbb{N}_{\geq 3}\))
\[P_{k}(n):=\frac{1}{2}\left((k-2)n^{2}+(4-k)n\right).\]
The study of representations of integers as sums of polygonal numbers has a long history. Fermat conjectured in 1638 that every \(n\in\mathbb{N}\) may be written as the sum of at most \(k\)\(k\)-gonal numbers which was finally proved by Cauchy. Let \(p_{k}(n)\) denotes the number of partitions of \(n\) into \(k\)-gonal numbers. We have the generating function
\[\sum_{n\geq 0}p_{k}(n)q^{n}=\prod_{n\geq 1}\frac{1}{1-q^{P_{k}(n)}}.\]
The \(p_{k}(n)\) have the following asymptotics.1
Footnote 1: Note that asymptotics for polynomial partitions were investigated in a more general setting by Dunn and Robles in [17].
**Theorem 1.1**.: _We have, for all 2\(N\in\mathbb{N}\),_
Footnote 2: Explicit asymptotic formulas for \(p_{3}(n)\), \(p_{4}(n)\), and \(p_{5}(n)\) are given in Corollary 5.4.
\[p_{k}(n)=\frac{C(k)e^{A(k)n^{\frac{1}{3}}}}{n^{\frac{5k-6}{6(k-2)}}}\left(1+ \sum_{j=1}^{N}\frac{B_{j,k}}{n^{\frac{j}{3}}}+O_{N}\left(n^{-\frac{N+1}{3}} \right)\right),\]
_where the \(B_{j,k}\) can be computed explicitly and_
\[C(k):=\frac{(k-2)^{\frac{6-k}{6(k-2)}}\Gamma\left(\frac{2}{k-2}\right)\zeta \left(\frac{3}{2}\right)^{\frac{k}{3(k-2)}}}{2^{\frac{3k-2}{2(k-2)}}\sqrt{3} \pi^{\frac{4k-9}{3(k-2)}}},\qquad A(k):=\frac{3}{2}\left(\sqrt{\frac{\pi}{k-2 }}\zeta\left(\frac{3}{2}\right)\right)^{\frac{2}{3}}.\]
### Numbers of finite-dimensional representations of Lie algebras
The special unitary group \(\mathfrak{su}(2)\) has (up to equivalence) one irreducible representation \(V_{k}\) of each dimension \(k\in\mathbb{N}\). Each \(n\)-dimensional representation \(\bigoplus_{k=1}^{\infty}r_{k}V_{k}\) corresponds to a unique partition
\[n=\lambda_{1}+\lambda_{2}+\cdots+\lambda_{r},\qquad\lambda_{1}\geq\lambda_{2} \geq\ldots\geq\lambda_{r}\geq 1 \tag{1.4}\]
such that \(r_{k}\) counts the number of \(k\) in (1.4). As a result, the number of representations equals \(p(n)\). It is natural to ask whether this can be generalized. The next case is the unitary group \(\mathfrak{su}(3)\), whose irreducible representations \(W_{j,k}\) indexed by pairs of positive integers. Note that (see Chapter 5 of [20]) \(\dim(W_{j,k})=\frac{1}{2}jk(j+k)\). Like in the case of \(\mathfrak{su}(2)\), a general \(n\)-dimensional representation decomposes into a sum of these \(W_{j,k}\), again each with some multiplicity. So analogous to (1.2), the numbers \(r_{\mathfrak{su}(3)}(n)\) of \(n\)-dimensional representations, have the generating function
\[\sum_{n\geq 0}r_{\mathfrak{su}(3)}(n)q^{n}=\prod_{j,k\geq 1}\frac{1}{1-q^{ \frac{jk(j+k)}{2}}},\]
again with \(r_{\mathfrak{su}(3)}(0):=1\). In [31], Romik proved that, as \(n\to\infty\),
\[r_{\mathfrak{su}(3)}(n)\sim\frac{C_{0}}{n^{\frac{3}{5}}}\exp\left(A_{1}n^{ \frac{2}{5}}+A_{2}n^{\frac{3}{10}}+A_{3}n^{\frac{1}{5}}+A_{4}n^{\frac{1}{10}} \right),\]
with explicit constants3\(C_{0},A_{1},\ldots,A_{4}\) expressible in terms of zeta and gamma values. Two of the authors [7] improved this to an analogue of formula (1.3), namely, for any \(N\in\mathbb{N}_{0}\), we have
Footnote 3: Note that Romik used different signs for the constants in the exponential.
\[r_{\mathfrak{su}(3)}(n)=\frac{C_{0}}{n^{\frac{3}{5}}}\exp\left(A_{1}n^{\frac{ 2}{5}}+A_{2}n^{\frac{3}{10}}+A_{3}n^{\frac{1}{5}}+A_{4}n^{\frac{1}{10}}\right) \left(1+\sum_{j=1}^{N}\frac{C_{j}}{n^{\frac{j}{10}}}+O_{N}\left(n^{-\frac{N}{ 10}-\frac{3}{80}}\right)\right), \tag{1.5}\]
as \(n\to\infty\), where the constants \(C_{j}\) do not depend on \(N\) and \(n\) and can be calculated explicitly. The expansion (1.5) with improved error term \(O_{N}(n^{-\frac{N+1}{10}})\) and explicit values for \(A_{j}\) (\(1\leq j\leq 4\)) and \(C_{0}\), can also be obtained using Theorem 4.4.
This framework generalizes to other groups. For example, one can investigate the _Witten zeta function_ for \(\mathfrak{so}(5)\), which is (for more background to this function, see [25] and [26])
\[\zeta_{\mathfrak{so}(5)}(s):=\sum_{\varphi}\frac{1}{\dim(\varphi)^{s}}=6^{s} \sum_{n,m\geq 1}\frac{1}{m^{s}n^{s}(m+n)^{s}(m+2n)^{s}}, \tag{1.6}\]
where the \(\varphi\) are running through the finite-dimensional irreducible representations of \(\mathfrak{so}(5)\). We prove the following; for the more precise statement see Theorem 5.14.
**Theorem 1.2**.: _The function \(\zeta_{\mathfrak{so}(5)}\) has a meromorphic continuation to \(\mathbb{C}\) whose positive poles are simple and occur for \(s\in\{\frac{1}{2},\frac{1}{3}\}\)._
It is well-known that the finite-dimensional representations of \(\mathfrak{so}(5)\) can be doubly indexed as \((\varphi_{j,k})_{j,k\in\mathbb{N}}\) with \(\dim(\varphi_{j,k})=\frac{1}{6}jk(j+k)(j+2k)\), which explains the last equality in (1.6). A general \(n\)-dimensional representation decomposes as a sum of these \(\varphi_{j,k}\), each with some multiplicity. Therefore, as in the case \(\mathfrak{su}(3)\), we find that
\[\sum_{n\geq 0}r_{\mathfrak{so}(5)}(n)q^{n}=\prod_{j,k\geq 1}\frac{1}{1-q^{\frac{ jk(j+k)(j+2k)}{6}}}.\]
We prove the following.
**Theorem 1.3**.: _As \(n\to\infty\), we have, for any \(N\in\mathbb{N}\),_
\[r_{\mathfrak{so}(5)}(n)=\frac{C}{n^{\frac{7}{12}}}\exp\left(A_{1}n^{\frac{1}{ 3}}+A_{2}n^{\frac{2}{9}}+A_{3}n^{\frac{1}{9}}+A_{4}\right)\left(1+\sum_{j=2}^ {N+1}\frac{B_{j}}{n^{\frac{j-1}{9}}}+O_{N}\left(n^{-\frac{N+1}{9}}\right) \right),\]
_where \(C\), \(A_{1}\), \(A_{2}\), \(A_{3}\), and \(A_{4}\) are given in (5.17)-(5.19) and the \(B_{j}\) can be calculated explicitly._
### Statement of results
The main goal of this paper is to prove asymptotic formulas for a general class of partition functions. To state it, let \(f:\mathbb{N}\to\mathbb{N}_{0}\), set \(\Lambda:=\mathbb{N}\setminus f^{-1}(\{0\})\), and for \(q=e^{-z}\) (\(z\in\mathbb{C}\) with \(\operatorname{Re}(z)>0\)), define
\[G_{f}(z):=\sum_{n\geq 0}p_{f}(n)q^{n}=\prod_{n\geq 1}\frac{1}{(1-q^{n})^{f(n)}},\qquad L_{f}(s):=\sum_{n\geq 1}\frac{f(n)}{n^{s}}. \tag{1.7}\]
We require the following key properties of these objects.
1. Let \(\alpha>0\) be the largest pole of \(L_{f}\). There exists \(L\in\mathbb{N}\), such that for all primes \(p\), we have \(|\Lambda\setminus(p\mathbb{N}\cap\Lambda)|\geq L>\frac{\alpha}{2}\).
2. Condition (P2) is attached to \(R\in\mathbb{R}^{+}\). The series \(L_{f}(s)\) converges for some \(s\in\mathbb{C}\), has a meromorphic continuation to \(\{s\in\mathbb{C}:\operatorname{Re}(s)\geq-R\}\), and is holomorphic on the line \(\{s\in\mathbb{C}:\operatorname{Re}(s)=-R\}\). The function \(L_{f}^{*}(s):=\Gamma(s)\zeta(s+1)L_{f}(s)\) has only real poles \(0<\alpha:=\gamma_{1}>\gamma_{2}>\dots\) that are all simple, except the possible pole at \(s=0\), that may be double.
3. For some \(a<\frac{\pi}{2}\), in every strip \(\sigma_{1}\leq\sigma\leq\sigma_{2}\) in the domain of holomorphicity, we uniformly have, for \(s=\sigma+it\), \[L_{f}(s)=O_{\sigma_{1},\sigma_{2}}\left(e^{a|t|}\right),\qquad|t|\to\infty.\]
Note that (P1) implies that \(|\Lambda\setminus(b\mathbb{N}\cap\Lambda)|\geq L>\frac{\alpha}{2}\) for all \(b\geq 2\).
**Theorem 1.4**.: _Assume_ (P1) _for \(L\in\mathbb{N}\),_ (P2) _for \(R>0\), and_ (P3)_. Then, for some \(M,N\in\mathbb{N}\),_
\[p_{f}(n)=\frac{C}{n^{b}}\exp\left(A_{1}n^{\frac{\alpha}{\alpha+1}}+\sum_{j=2} ^{M}A_{j}n^{\alpha_{j}}\right)\left(1+\sum_{j=2}^{N}\frac{B_{j}}{n^{\beta_{j} }}+O_{L,R}\left(n^{-\min\left\{\frac{2L-\alpha}{2(\alpha+1)},\frac{R}{\alpha+ 1}\right\}}\right)\right),\]
_where \(0\leq\alpha_{M}<\alpha_{M-1}<\dotsm\alpha_{2}<\alpha_{1}=\frac{\alpha}{\alpha+1}\) are given by4\(\mathcal{L}\) (defined in (1.8)), and \(0<\beta_{2}<\beta_{3}<\dots\) are given by \(\mathcal{M}+\mathcal{N}\), where \(\mathcal{M}\) and \(\mathcal{N}\) are defined in (1.9) and (1.10), respectively. The coefficients \(A_{j}\) and \(B_{j}\) can be calculated explicitly; the constants \(A_{1}\), \(C\), and \(b\) are provided in (1.11) and (1.12). Moreover, if \(\alpha\) is the only positive pole of \(L_{f}\), then we have \(M=1\)._
Footnote 4: We can enlarge the discrete exponent sets at will, since we can always add trivial powers with vanishing coefficients to an expansion. Therefore, from now on we always use this expression, even if the set increases tacitly.
**Remarks.**
1. _Debruyne and Tenenbaum proved Theorem_ 1.4 _in the special case that_ \(f\) _is the indicator function of a subset_ \(\Lambda\) _of_ \(\mathbb{N}\)_. They also assumed that the associated_ \(L\)_-function can be analytically continued except for one pole in_ \(0<\alpha\leq 1\)_. Our refined assumption (P1) on the set_ \(\Lambda\) _is necessary to bound minor arcs in this more general setup._
2. _The complexity of the exponential term depends on the number and positions of the positive poles of_ \(L_{f}\)_. Theorem_ 4.4 _is more explicit and covers the case of exactly two positive poles. This case has importance for representation numbers of_ \(\mathfrak{su}(3)\) _and_ \(\mathfrak{so}(5)\)_._
In Section 2, we collect some analytic tools, properties of special functions and useful properties of asymptotic expansions that are heavily used throughout the paper. In Section 3, we apply the Circle Method and calculate asymptotic expansions for the saddle point \(\varrho_{n}\) and the value of the generating function \(G_{f}(\varrho_{n})\). In Section 4, we complete the proof of Theorem 1.4, and we also state and prove a more explicit version of Theorem 1.4 in the case that \(L_{f}\) has two positive poles (Theorem 4.4). The proofs of Theorems 1.1, 1.2, and 1.3 are given in Section 5; this includes a detailed study of the Witten zeta function \(\zeta_{\mathfrak{so}(5)}\) which is of independent interest.
## Acknowledgements
We thank Gregory Debruyne, Kohji Matsumoto, and Andreas Mono for helpful discussions. The first author and the third author were partially supported by the SFB/TRR 191 "Symplectic Structure in Geometry, Algebra and Dynamics", funded by the DFG (Projektnummer 281071066 TRR 191). The second and third author received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001179), and the last two authors are partially supported by the Alfried Krupp prize.
## Notation
For \(\beta\in\mathbb{R}\), we denote by \(\{\beta\}:=\beta-\lfloor\beta\rfloor\) the _fractional part_ of \(\beta\). As usual, we set \(\mathbb{H}:=\{\tau\in\mathbb{C}:\operatorname{Im}(\tau)>0\}\) and \(\mathbb{E}:=\{z\in\mathbb{C}:|z|<1\}\). For \(\delta>0\), we define
\[\mathcal{C}_{\delta}:=\left\{z\in\mathbb{C}\colon|\operatorname{Arg}(z)|\leq \tfrac{\pi}{2}-\delta\right\},\]
where \(\operatorname{Arg}\) uses the principal branch of the complex argument. For \(r>0\) and \(z\in\mathbb{C}\), we set
\[B_{r}(z):=\{w\in\mathbb{C}:|w-z|<r\}.\]
For \(a,b\in\mathbb{R}\), we let \(\mathcal{R}_{a,b;K}\) be the rectangle with vertices \(a\pm iK\) and \(b\pm iK\), and we let \(\partial\mathcal{R}_{a,b;K}\) be the path along the boundary of \(\mathcal{R}_{a,b;K}\), surrounded once counterclockwise. For \(-\infty\leq a<b\leq\infty\), we denote \(S_{a,b}:=\{z\in\mathbb{C}:a<\operatorname{Re}(z)<b\}\). We also set, for real \(\sigma_{1}\leq\sigma_{2}\) and \(\delta>0\),
\[S_{\sigma_{1},\sigma_{2},\delta}:=\{s\in\mathbb{C}:\sigma_{1}\leq\operatorname {Re}(s)\leq\sigma_{2}\}\setminus\left(B_{\delta}\left(\frac{1}{2}\right)\cup \bigcup_{j=-\infty}^{1}B_{\delta}\left(\frac{j}{3}\right)\right).\]
For \(k\in\mathbb{N}\) and \(s\in\mathbb{C}\), the _falling factorial_ is \((s)_{k}:=s(s-1)\cdots(s-k+1)\). For \(f:\mathbb{N}\to\mathbb{N}_{0}\), we let \(\mathcal{P}\) be the set of poles of \(L_{f}^{*}\), and for \(R>0\) we denote by \(\mathcal{P}_{R}\) the union of the poles of \(L_{f}^{*}\) greater than \(-R\) with \(\{0\}\). We define
\[\mathcal{L} :=\frac{1}{\alpha+1}\mathcal{P}_{R}+\sum_{\mu\in\mathcal{P}_{R}} \left(\frac{\mu+1}{\alpha+1}-1\right)\mathbb{N}_{0}, \tag{1.8}\] \[\mathcal{M} :=\frac{\alpha}{\alpha+1}\mathbb{N}_{0}+\left(-\sum_{\mu\in \mathcal{P}_{R}}\left(\frac{\mu+1}{\alpha+1}-1\right)\mathbb{N}_{0}\right) \cap\left[0,\frac{R+\alpha}{\alpha+1}\right), \tag{1.9}\]
\[\mathcal{N}:=\left\{\sum_{j=1}^{K}b_{j}\theta_{j}:b_{j},K\in\mathbb{N}_{0},\theta_ {j}\in(-\mathcal{L})\cap\left(0,\frac{R}{\alpha+1}\right)\right\}. \tag{1.10}\]
We set, with \(\omega_{\alpha}:=\operatorname{Res}_{s=\alpha}L_{f}(s)\),
\[A_{1} :=\left(1+\frac{1}{\alpha}\right)(\omega_{\alpha}\Gamma(\alpha+1) \zeta(\alpha+1))^{\frac{1}{\alpha+1}},\qquad C:=\frac{e^{L_{f}^{\prime}(0)}( \omega_{\alpha}\Gamma(\alpha+1)\zeta(\alpha+1))^{\frac{1}{2}-L_{f}(0)\over \alpha+1}}{\sqrt{2\pi(\alpha+1)}}, \tag{1.11}\] \[b :=\frac{1-L_{f}(0)+\frac{\alpha}{2}}{\alpha+1}. \tag{1.12}\]
## 2. Preliminaries
In this section, we collect and prove some tools used in this paper.
### Tools from complex analysis
We require the following results from complex analysis. The first theorem describes Taylor coefficients of the inverse of a biholomorphic function; for a proof, see Corollary 11.2 on p. 437 of [10].
**Proposition 2.1**.: _Let \(\phi:B_{r}(0)\to D\) be a holomorphic function such that \(\phi(0)=0\) and \(\phi^{\prime}(0)\neq 0\), with \(\phi(z)=:\sum_{n\geq 1}a_{n}z^{n}\). Then \(\phi\) is locally biholomorphic and its local inverse of \(\phi\) has a power series expansion \(\phi^{-1}(w)=:\sum_{k\geq 1}b_{k}w^{k}\), where_
\[b_{k}=\frac{1}{ka_{1}^{k}}\sum_{\begin{subarray}{c}\ell_{1},\ell_{2},\ell_{3 }\ldots\geq 0\\ \ell_{1}+2\ell_{2}+3\ell_{3}+\cdots=-k-1\end{subarray}}(-1)^{\ell_{1}+\ell_{2 }+\ell_{3}+\cdots}\frac{k\cdots(k-1+\ell_{1}+\ell_{2}+\cdots)}{\ell_{1}!\ell_ {2}!\ell_{3}!\cdots}\left(\frac{a_{2}}{a_{1}}\right)^{\ell_{1}}\left(\frac{a_ {3}}{a_{1}}\right)^{\ell_{2}}\cdots.\]
To deal with certain zeros of holomorphic functions, we require the following result from complex analysis, the proof of which is quickly obtained from Exercise 7.29 (i) in [9].
**Proposition 2.2**.: _Let \(r>0\) and let \(\phi_{n}:B_{r}(0)\to\mathbb{C}\) be a sequence of holomorphic functions that converges uniformly on compact sets to a holomorphic function \(\phi:B_{r}(0)\to\mathbb{C}\), with \(\phi^{\prime}(0)\neq 0\). Then there exist \(r>\kappa_{1}>0\) and \(\kappa_{2}>0\) such that, for all \(n\) sufficiently large, the restrictions \(\phi_{n}|_{B_{\kappa_{1}}(0)}:B_{\kappa_{1}}(0)\to\phi_{n}(B_{\kappa_{1}}(0))\) are biholomorphic and \(B_{\kappa_{2}}(0)\subset\phi_{n}(B_{\kappa_{1}}(0))\). In particular, the restrictions \(\phi_{n}^{-1}|_{B_{\kappa_{2}}(0)}:B_{\kappa_{2}}(0)\to\phi_{n}^{-1}(B_{\kappa_ {2}}(0))\) are biholomorphic functions._
### Asymptotic expansions
We require two classes of asymptotic expansions.
**Definition**.: Let \(R\in\mathbb{R}\).
1. Let \(g:\mathbb{R}^{+}\to\mathbb{C}\) be a function. Then \(g\in\mathcal{K}(R)\) if there exist real numbers \(\nu_{g,1}<\nu_{g,2}<\nu_{g,3}<\cdots<\nu_{g,N}<R\) and complex numbers \(a_{g,j}\) such that \[g(x)=\sum_{j=1}^{N_{g}}\frac{a_{g,j}}{x^{\nu_{g,j}}}+O_{R}\left(x^{-R}\right), \qquad(x\to\infty).\]
2. Let \(\phi\) be holomorphic on the right half-plane. Then \(\phi\in\mathcal{H}(R)\) if there are real numbers \(\nu_{\phi,1}<\nu_{\phi,2}<\nu_{\phi,3}<\cdots<\nu_{\phi,N}<R\) and \(a_{\phi,j}\in\mathbb{C}\) such that, for all \(k\in\mathbb{N}_{0}\) and \(0<\delta<\frac{\pi}{2}\), \[\phi^{(k)}(z)=\sum_{j=1}^{N_{\phi}}(\nu_{\phi,j})_{k}a_{\phi,j}z^{\nu_{\phi,j}- k}+O_{\delta,R,k}\left(|z|^{R-k}\right),\qquad(z\to 0,z\in\mathcal{C}_{\delta}).\] (2.1) If there is no risk of confusion, then we write \(N\), \(\nu_{j}\), and \(a_{j}\) in the above. The \(R\)-dependence of the error only matters if \(R\) varies, for instance, if we can choose it to be arbitrarily large.
Note that any sequence \(g(n)\) with
\[g(n)=\sum_{j=1}^{N}\frac{a_{j}}{n^{\nu_{j}}}+O_{R}\left(n^{-R}\right),\qquad(n \to\infty), \tag{2.2}\]
can be extended to a function \(g\) in \(\mathcal{K}(R)\). Conversely, each function in \(\mathcal{K}(R)\) can be restricted to a sequence \(g(n)_{n\in\mathbb{N}}\) satisfying (2.2). In addition, we include functions in \(\mathcal{K}(R)\) that have asymptotic expansion as in (1), but are initially defined only on intervals \((r,\infty)\) for some large \(r>0\). The reason for this is that it does not matter how the function is defined up to \(r\), and therefore it can always be continued to \((0,\infty)\). If \(g\in\mathcal{K}(R)\) for all \(R>0\), then we write
\[g(x)=\sum_{j\geq 1}\frac{a_{j}}{x^{\nu_{j}}},\qquad(x\to\infty). \tag{2.3}\]
We use the same abbreviation if \(\phi\in\mathcal{H}(R)\) for all \(R>0\). In this case we write \(g\in\mathcal{K}(\infty)\) and \(\phi\in\mathcal{H}(\infty)\), respectively. In some situations, we write for \(R\in\mathbb{R}\cup\{\infty\}\)
\[g(x)=\sum_{j=1}^{N}\frac{a_{g,j}}{x^{\nu_{g,j}}}+O_{R}\left(x^{-R}\right),\]
where \(R\) might depend on the choice of the function \(g\). If \(R=\infty\), then one may ignore the error \(O_{R}(x^{-R})\) and use the notation (2.3) instead. We have the following useful lemmas, that can be obtained by a straightforward calculation.
**Lemma 2.3**.: _Let \(R_{1},R_{2}\in\mathbb{R}\), \(\lambda\in\mathbb{C}\), \(g\in\mathcal{K}(R_{1})\), and \(h\in\mathcal{K}(R_{2})\). Then we have the following:_
1. _We have_ \(\lambda g\in\mathcal{K}(R_{1})\) _and_ \(g+h\in\mathcal{K}(\min\{R_{1},R_{2}\})\)_. The exponents_ \(\nu_{g+h,j}\) _run through_ \[(\{\nu_{g,j}\colon 1\leq j\leq N_{g}\}\cup\{\nu_{h,j}\colon 1\leq j\leq N_{h}\}) \cap(-\infty,\min\{R_{1},R_{2}\}).\]
2. _We have_ \(gh\in\mathcal{K}(\min\{R_{1}+\nu_{h,1},R_{2}+\nu_{g,1}\})\)_. The exponents_ \(\nu_{gh,j}\) _run through_ \[(\{\nu_{g,j}\colon 1\leq j\leq N_{g}\}+\{\nu_{h,j}\colon 1\leq j\leq N_{h}\}) \cap(-\infty,\min\{R_{1}+\nu_{h,1},R_{2}+\nu_{g,1}\}).\]
We next deal with compositions of asymptotic expansions with holomorphic functions.
**Lemma 2.4**.: _Let \(0<R\leq\infty\), \(g\in\mathcal{K}(R)\) with \(\nu_{g,1}=0\) and \(h\) holomorphic at \(a_{g,1}\). Then \((h\circ g)(x)\) is defined for all \(x>0\) sufficiently large, and we have \(h\circ g\in\mathcal{K}(R)\) with_
\[\{\nu_{h\circ g,j}:1\leq j\leq N_{h\circ g}\}=\left(\sum_{j=1}^{N_{g}}\nu_{g, j}\mathbb{N}_{0}\right)\cap[0,R).\]
We need a similar result for general asymptotic expansions.
**Lemma 2.5**.: _Let \(0<R_{1},R_{2}\leq\infty\), \(\phi\in\mathcal{H}(R_{1})\), \(g\in\mathcal{K}(R_{2})\), and \(R:=\min\{R_{2}-\nu_{g,1},\nu_{g,1}R_{1}\}\). Assume \(\nu_{g,1}>0\) and \(g(x)>0\) for \(x\) sufficiently large. Then \(\phi\circ g\in\mathcal{K}(R)\), \(a_{\phi\circ g,1}=a_{\phi,1}a_{g,1}^{\nu_{\phi,1}}\), and_
\[\{\nu_{\phi\circ g,j}\colon 1\leq j\leq N_{\phi\circ g}\}=\left(\nu_{g,1}\{\nu_{ \phi,1},...,\nu_{\phi,N_{\phi}}\}+\sum_{j=2}^{N_{g}}(\nu_{g,j}-\nu_{g,1}) \mathbb{N}_{0}\right)\cap(-\infty,R).\]
### Special functions
The following theorem collects some facts about the Gamma function.
**Proposition 2.6** (see [1, 32]).: _Let \(\gamma\) denote the Euler-Mascheroni constant._
1. _The gamma function_ \(\Gamma\) _is holomorphic on_ \(\mathbb{C}\setminus(-\mathbb{N}_{0})\) _with simple poles in_ \(-\mathbb{N}_{0}\)_. For_ \(n\in\mathbb{N}_{0}\) _we have_ \(\operatorname{Res}_{s=-n}\Gamma(s)=\frac{(-1)^{n}}{n!}\)_._
2. _For_ \(s=\sigma+it\in\mathbb{C}\) _with_ \(\sigma\in I\) _for a compact interval_ \(I\subset[\frac{1}{2},\infty)\)_, we uniformly have_ \[\max\left\{1,|t|^{\sigma-\frac{1}{2}}\right\}e^{-\frac{\pi|t|}{2}}\ll_{I}| \Gamma(s)|\ll_{I}\max\left\{1,|t|^{\sigma-\frac{1}{2}}\right\}e^{-\frac{\pi|t| }{2}}.\] _The bound also holds for compact intervals_ \(I\subset\mathbb{R}\) _if_ \(|t|\geq 1\)_._
3. _Near_ \(s=0\)_, we have the Laurent series expansion_ \(\Gamma(s)=\frac{1}{s}-\gamma+O(s)\)_._
4. _For all_ \(s\in\mathbb{C}\setminus\mathbb{Z}\)_, we have_ \(\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin(\pi s)}\)_._
For \(s,z\in\mathbb{C}\) with \(s\notin-\mathbb{N}\), the _generalized Binomial coefficient_ is defined by
\[\binom{s}{z}:=\frac{\Gamma(s+1)}{\Gamma(z+1)\Gamma(s-z+1)}.\]
We require the following properties of the Riemann zeta function.
**Proposition 2.7** (see [2, 8, 32]).:
1. _The_ \(\zeta\)_-function has a meromorphic continuation to_ \(\mathbb{C}\) _with only a simple pole at_ \(s=1\) _with residue_ \(1\)_. For_ \(s\in\mathbb{C}\) _we have (as identity between meromorphic functions)_ \[\zeta(s)=2^{s}\pi^{s-1}\sin\left(\frac{\pi s}{2}\right)\Gamma(1-s)\zeta(1-s).\]
2. _For_ \(I:=[\sigma_{0},\sigma_{1}]\) _and_ \(s=\sigma+it\in\mathbb{C}\)_, there exists_ \(m_{I}\in\mathbb{Z}\)_, such that for_ \(\sigma\in I\)__ \[\zeta(s)\ll(1+|t|)^{m_{I}},\qquad(|t|\to\infty).\]
3. _Near_ \(s=1\)_, we have the Laurent series expansion_ \(\zeta(s)=\frac{1}{s-1}+\gamma+O(s-1)\)_._
For the Saddle Point Method we need the following estimate.
**Lemma 2.8**.: _Let \(\mu_{n}\) be an increasing unbounded sequence of positive real numbers, \(B>0\), and \(P\) a polynomial of degree \(m\in\mathbb{N}_{0}\). Then we have_
\[\int_{-\mu_{n}}^{\mu_{n}}P(x)e^{-Bx^{2}}dx=\int_{-\infty}^{\infty}P(x)e^{-Bx^ {2}}dx+O_{B,P}\left(\mu_{n}^{\frac{m-1}{2}}e^{-B\mu_{n}^{2}}\right).\]
Finally, we require the following in our study of the Witten zeta function \(\zeta_{\mathfrak{so}(5)}\).
**Lemma 2.9**.: _Let \(n\in\mathbb{N}_{0}\). The function \(g:\mathbb{R}\to\mathbb{R}\) defined as \(g(u):=e^{|u|}\int_{-\infty}^{\infty}|v|^{n}e^{-|v|-|v+u|}dv\) satisfies \(g(u)=O_{n}(u^{n+1})\) as \(|u|\to\infty\)._
Proof.: Let \(u\geq 0\). Then we have
\[g(u)=\frac{n!}{2^{n+1}}\sum_{j=0}^{n}\frac{2^{j}}{j!}u^{j}+\frac{u^{n+1}}{n+1 }+\frac{n!}{2^{n+1}}=O_{n}\left(u^{n+1}\right).\]
The lemma follows, since \(g\) is an even function.
## 3. Minor and major arcs
### The minor arcs
For \(z\in\mathbb{C}\) with \(\operatorname{Re}(z)>0\), we define, with \(G_{f}\) given in (1.7),
\[\Phi_{f}(z):=\operatorname{Log}(G_{f}(z)).\]
Note that we assume throughout, that the function \(f\) grows polynomially, which is implicitly part of (P2). We apply Cauchy's Theorem, writing
\[p_{f}(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\exp\left(n(\varrho_{n}+it)+\Phi_{f}( \varrho_{n}+it)\right)dt,\]
where \(\varrho_{n}\to 0^{+}\) is determined in Subsection 3.3. We split the integral into two parts, the major and minor arcs, for any \(\beta\geq 1\)
\[p_{f}(n)=\frac{e^{\varrho_{n}n}}{2\pi}\int_{|t|\leq\varrho_{n}^{\beta}}\exp\left( int+\Phi_{f}(\varrho_{n}+it)\right)dt+\frac{e^{\varrho_{n}n}}{2\pi}\int_{ \varrho^{\beta}\leq|t|\leq\pi}\exp\left(int+\Phi_{f}(\varrho_{n}+it)\right)dt. \tag{3.1}\]
The first integral provides the main terms in the asymptotic expansion for \(p_{f}(n)\), the second integral is negligible, as the following lemma shows.
**Lemma 3.1**.: _Let \(1<\beta<1+\frac{\alpha}{2}\) and assume that \(f\) satisfies the conditions of Theorem 1.4. Then_
\[\int_{\frac{\varrho_{n}^{\beta}}{2\pi}\leq|t|\leq\frac{1}{2}}^{\beta}e^{2\pi int }G_{f}(\varrho_{n}+2\pi it)dt\ll_{L}\varrho_{n}^{L+1}G_{f}(\varrho_{n}).\]
Sketch of proof.: The proof may be adapted from [15, Lemma 3.1]. That is, we estimate the quotient,
\[\frac{|G_{f}(\varrho_{n}+2\pi it)|}{G_{f}(\varrho_{n})}\leq\prod_{m\geq 1} \left(1+\frac{16||mt||^{2}}{e^{m\varrho_{n}}m^{2}\varrho_{n}^{2}}\right)^{- \frac{f(m)}{2}},\]
where \(||x||\) is the distance from \(x\) to the nearest integer. We then throw away \(m\)-th factors depending on the location of \(t\in[\frac{\varrho_{n}^{\beta}}{2\pi},\frac{1}{2}]\). The proof follows [15, Lemma 3.1]_mutatis mutandis_; key facts are hypothesis (P3) of Theorem 1.4 and that (which follows from [32, Theorem 7.28 (1)])
\[\sum_{1\leq m\leq x}f(m)\sim\frac{\operatorname{Res}_{s=\alpha}L_{f}}{\alpha} x^{\alpha}.\qed\]
### Inverse Mellin transforms for generating functions
We start this subsection with a lemma on the asymptotic behavior of the function \(\Phi_{f}\) near \(z=0\).
**Lemma 3.2**.: _Let \(f:\mathbb{N}\to\mathbb{N}_{0}\) satisfy_ (P2) _with \(R>0\) and_ (P3)_. Fix some \(0<\delta<\frac{\pi}{2}-a\). Then we have, as \(z\to 0\) in \(C_{\delta}\),_
\[\Phi_{f}(z)=\sum_{\nu\in-\mathcal{P}_{R}\setminus\{0\}}\operatorname{Res}_{s =-\nu}L_{f}^{*}(s)z^{\nu}-L_{f}(0)\mathrm{Log}(z)+L_{f}^{\prime}(0)+O_{R} \left(|z|^{R}\right).\]
_For the \(k\)-th derivative (\(k\in\mathbb{N}\)), we have_
\[\Phi_{f}^{(k)}(z)=\sum_{\nu\in-\mathcal{P}_{R}\setminus\{0\}}(\nu)_{k} \operatorname{Res}_{s=-\nu}L_{f}^{*}(s)z^{\nu-k}+\frac{(-1)^{k}(k-1)!L_{f}(0)} {z^{k}}+O_{R,k}\left(|z|^{R-k}\right).\]
Proof.: With \(J_{f}(s;z):=L_{f}^{*}(s)z^{-s}\), we obtain, for \(\kappa\in\mathbb{N}_{0}\),
\[2\pi i\Phi_{f}^{(\kappa)}(z)=\frac{d^{\kappa}}{dz^{\kappa}}\left(\,\int_{-R-i \infty}^{-R+i\infty}+\lim_{K\to\infty}\left(\int_{\partial\mathcal{R}_{-R, \alpha+1;K}}+\int_{\alpha+1-iK}^{-R-iK}+\int_{-R+iK}^{\alpha+1+iK}\,\right) \right)J_{f}(s;z)ds. \tag{3.2}\]
Here we use (P2), giving that there are no poles of \(J_{f}(s;z)\) on the path of integration. By Proposition 2.7 (2), [7, Theorem. 2.1 (3)], and (P3), we find a constant \(c(R,\kappa)\) such that, as \(|v|\to\infty\),
\[\left|L_{f}^{*}(-R+iv)\right|\ll_{R}(1+|v|)^{c(R,\kappa)}e^{-\left(\frac{\pi}{ 2}-a\right)|v|}.\]
This yields, with Leibniz' integral rule and \(0<\delta<\frac{\pi}{2}-a\),
\[\left|\frac{d^{\kappa}}{dz^{\kappa}}\int_{-R-i\infty}^{-R+i\infty}J_{f}(s;z) ds\right|\ll_{R,\kappa}|z|^{R-\kappa}.\]
For the second integral in (3.2), applying the Residue Theorem gives
\[\frac{d^{\kappa}}{dz^{\kappa}}\lim_{K\to\infty}\frac{1}{2\pi i}\int_{ \partial\mathcal{R}_{-R,\alpha+1;K}}J_{f}(s;z)ds\\ =\sum_{\nu\in-\mathcal{P}_{R}\setminus\{0\}}(\nu)_{\kappa}\operatorname{ Res}_{s=-\nu}L_{f}^{*}(s)z^{\nu-\kappa}+\frac{d^{\kappa}}{dz^{\kappa}}\left(-L_{f}(0) \mathrm{Log}(z)+L_{f}^{\prime}(0)\right),\]
since \(s=0\) is a double pole of \(J_{f}(s;z)\). For the last two integrals in (3.2) we have, for some \(m(I)\in\mathbb{N}_{0}\), depending on \(I:=[-R,\alpha+1]\),
\[\left|\int_{-R\pm iK}^{\alpha+1\pm iK}J_{f}(s;z)ds\right|\ll_{I}(1+|K|)^{m(I)} \max\left\{|z|^{\alpha+1},|z|^{-R}\right\}e^{-(\delta-a)|K|},\]
which vanishes as \(K\to\infty\) and thus the claim follows by distinguishing \(\kappa=0\) and \(\kappa\in\mathbb{N}\).
### Approximation of saddle points
We now approximately solve the saddle point equations
\[-\Phi_{f}^{\prime}(\varrho)=n=-\Phi_{f}^{\prime}(\varrho_{n}). \tag{3.3}\]
The following proposition provides an asymptotic formula for certain functions.
**Proposition 3.3**.: _Let \(\phi\in\mathcal{H}(R)\) with \(R>0\), \(\nu_{\phi,1}<0\), and \(a_{\phi,1}>0\). Assume that \(\phi(\mathbb{R}^{+})\subset\mathbb{R}\). Then we have the following:_
1. _There exists a positive sequence_ \((\varrho_{n})_{n\in\mathbb{N}}\)_, such that for all_ \(n\) _sufficiently large,_ \(\phi(\varrho_{n})=n\) _holds._
2. _We have_5__\(\varrho\in\mathcal{K}(1-\frac{R+1}{\nu_{\phi,1}})\)_,_ \(a_{\varrho,1}=a_{\phi,1}^{-\frac{1}{\nu_{\phi,1}}}\)_, and the corresponding exponent set_
Footnote 5: Recall that we can consider the sequence \(\varrho_{n}\) as a function on \(\mathbb{R}^{+}\).
\[\{\nu_{\varrho,j}:1\leq j\leq N_{\varrho}\}=\left(-\frac{1}{\nu_{\phi,1}}+\sum _{j=1}^{N_{\phi}}\left(1-\frac{\nu_{\phi,j}}{\nu_{\phi,1}}\right)\mathbb{N}_{ 0}\right)\cap\left(-\infty,1-\frac{R+1}{\nu_{\phi,1}}\right).\]
_In particular, we have \(\varrho_{n}\to 0^{+}\)._
Proof.: In the proof we abbreviate \(\nu_{n}:=\nu_{\phi,n}\) and \(a_{n}:=a_{\phi,n}\).
(1) For \(n\in\mathbb{N}\), set
\[\psi_{n}(w):=-1+\frac{1}{n}\phi\left(\left(\frac{n}{a_{1}}\right)^{\frac{1}{ \nu_{1}}}w\right).\]
As \(\phi\) is holomorphic on the right-half plane by assumption, so are the \(\psi_{n}\). Using (2.1), write
\[\psi_{n}(w)=w^{\nu_{1}}-1+E_{n}(w), \tag{3.4}\]
where the error satisfies
\[E_{n}(w)=\frac{1}{n}\sum_{j=2}^{N_{\phi}}a_{j}\left(\frac{n}{a_{1}}\right)^{ \frac{\nu_{j}}{\nu_{1}}}w^{\nu_{j}}+O_{R}\left(n^{\frac{R}{\nu_{1}}-1}|w|^{R} \right).\]
We next show that, for all \(n\) sufficiently large, the \(\psi_{n}\) only have one zero near \(w=1\). We argue with Rouche's Theorem. First, we find that, for \(n\) sufficiently large, the inequality
\[|E_{n}(w)|<|1-w^{\nu_{1}}|+|w^{\nu_{1}}-1+E_{n}(w)|=|1-w^{\nu_{1}}|+|\psi_{n}( w)| \tag{3.5}\]
holds on the entire boundary of \(B_{\kappa(\nu_{1})}(1)\), with \(0<\kappa(\nu_{1})<\frac{1}{2}\) sufficiently small such that \(w\mapsto 1-w^{\nu_{1}}\) only has one zero in \(B_{\kappa(\nu_{1})}(1)\). By Rouche's Theorem and (3.5), for \(n\) sufficiently large \(\psi_{n}\) also has exactly one zero in \(B_{\kappa(\nu_{1})}(1)\). We denote this zero of \(\psi_{n}\) by \(w_{n}\). It is real as \(\phi\) is real-valued on the positive real line and a holomorphic function. One can show that \(\varrho_{n}=(\frac{n}{a_{1}})^{\frac{1}{\nu_{1}}}w_{n}>0\) satisfies \(\phi(\varrho_{n})=n\).
(2) We first give an expansion for \(w_{n}\). By Proposition 2.2, there exists \(\kappa>0\), such that for all \(n\) sufficiently large and all \(z\in B_{\kappa}(0)\), the inverse functions \(\psi_{n}^{-1}\) of \(\psi_{n}\) are defined and holomorphic in \(B_{\kappa}(1)\). Using this, we can calculate \(w_{n}\), satisfying \(\psi_{n}(w_{n})=0\). For this, let
\[h_{n}(w):=\psi_{n}(w+1)-\psi_{n}(1).\]
We have \(h_{n}(0)=0\), and we find, with Proposition 2.1,
\[w_{n}-1=h_{n}^{-1}(-\psi_{n}(1))=\sum_{m\geq 1}(-1)^{m}b_{m}(n)\psi_{n}(1)^{m},\]
where the \(b_{m}\) can be explicitly calculated. First, \(\psi_{n}(1)^{m}\) (\(m\in\mathbb{N}_{0}\)) have expansions in \(n\) by (3.4) and Lemma 2.4. They have exponent set \(\sum_{2\leq j\leq N_{\phi}}(1-\frac{\nu_{j}}{\nu_{1}})\mathbb{N}_{0}\cap[0,1- \frac{R}{\nu_{1}})\). We find, for \(k\in\mathbb{N}\),
\[\psi_{n}^{(k)}(1)=\frac{1}{n}\sum_{j=1}^{N_{\phi}}(\nu_{j})_{k}a_{j}\left( \frac{n}{a_{1}}\right)^{\frac{\nu_{j}}{\nu_{1}}}+O_{R}\left(n^{\frac{R}{\nu_{ 1}}-1}\right). \tag{3.6}\]
Again by Lemma 2.4, and (3.6), \(\psi_{n}^{(k)}(1)\) (\(k\in\mathbb{N}_{0}\)) has expansions in \(n\), with exponent set \((\sum_{2\leq j\leq N_{\phi}}(1-\frac{\nu_{j}}{\nu_{1}})\mathbb{N}_{0})\cap[0,1- \frac{R}{\nu_{1}})\). By Lemma 2.4 we have the following expansion in \(n\)
\[\psi_{n}^{\prime}(1)^{-m}=\left(\nu_{1}+\frac{1}{n}\sum_{j=2}^{N_{\phi}}\nu_{j }a_{j}\left(\frac{n}{a_{1}}\right)^{\frac{\nu_{j}}{\nu_{1}}}+O_{R}\left(n^{ \frac{R}{\nu_{1}}-1}\right)\right)^{-m}\]
with exponent set \((\sum_{2\leq j\leq N_{\phi}}(1-\frac{\nu_{j}}{\nu_{1}})\mathbb{N}_{0})\cap[0, 1-\frac{R}{\nu_{1}})\). By the formula in Proposition 2.1, the \(b_{m}(n)\) are essentially sums and products of terms \(\psi_{n}^{\prime}(1)^{-1}\) and \(\psi_{n}^{(k)}(1)\), where \(k\geq 2\). Hence, \(b_{m}(n)\) has an expansion in \(n\), with exponent set \((\sum_{2\leq j\leq N_{\phi}}(1-\frac{\nu_{j}}{\nu_{1}})\mathbb{N}_{0})\cap[0,1 -\frac{R}{\nu_{1}})\), and according to Lemma 2.3, the same holds for finite linear combinations \(\sum_{1\leq m\leq M}(-1)^{m}b_{m}(n)\psi_{n}(1)^{m}\). As \(\psi_{n}(1)=O(n^{\frac{\nu_{2}}{\nu_{1}}-1})\) for \(n\to\infty\), one has, for \(M\) sufficiently large and not depending on \(n\),
\[\sum_{m\geq M+1}(-1)^{m}b_{m}(n)\psi_{n}(1)^{m}=O_{R}\left(n^{\frac{R}{\nu_{1 }}-1}\right).\]
Now, as \(w_{n}\sim 1\), we conclude the proposition recalling that \(\varrho_{n}=(\frac{n}{a_{1}})^{\frac{1}{\nu_{1}}}w_{n}\).
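For illustration, the rescaling in this proof is easy to test numerically. The following Python sketch (our own addition, not part of the argument) fixes a concrete \(\phi(z)=a_{1}z^{\nu_{1}}+a_{2}z\) with assumed values \(a_{1}=2\), \(a_{2}=1\), \(\nu_{1}=\frac{3}{2}\), solves \(\phi(\varrho_{n})=n\) by bracketed root finding, and checks that the rescaled root \(w_{n}=(\frac{a_{1}}{n})^{\frac{1}{\nu_{1}}}\varrho_{n}\) tends to \(1\), as the Rouché argument predicts.

```python
# Illustrative check of Proposition 3.3 (assumed concrete phi, not from the
# text): phi(z) = a1*z^(3/2) + a2*z is positive and increasing on (0, oo),
# so phi(rho_n) = n has a unique positive solution for every n.
from scipy.optimize import brentq

a1, a2, nu1 = 2.0, 1.0, 1.5
phi = lambda x: a1 * x**nu1 + a2 * x

for n in [10**3, 10**6, 10**9]:
    rho_n = brentq(lambda x: phi(x) - n, 1e-9, 1e9)   # the saddle point rho_n
    w_n = rho_n / (n / a1) ** (1 / nu1)               # rescaled zero of psi_n
    print(n, rho_n, w_n)                              # w_n -> 1 as n -> oo
```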
We next apply Proposition 3.3 to \(-\Phi_{f}^{\prime}\). For the proof one may use Lemma 3.2 with \(k=1\).
**Corollary 3.4**.: _Let \(\varrho_{n}\) solve (3.3). Assume that \(f\colon\mathbb{N}\to\mathbb{N}_{0}\) satisfies the conditions of Theorem 1.4. Then \(\varrho\in\mathcal{K}(\frac{R}{\alpha+1}+1)\) with \(a_{\varrho,1}=a_{-\Phi_{f}^{\prime},1}^{\frac{1}{\alpha+1}}=(\omega_{\alpha} \Gamma(\alpha+1)\zeta(\alpha+1))^{\frac{1}{\alpha+1}}\) and we have_
\[\{\nu_{\varrho,j}\colon 1\leq j\leq N_{\varrho}\}=\left(\frac{1}{\alpha+1}- \sum_{\mu\in\mathcal{P}_{R}}\left(\frac{\mu+1}{\alpha+1}-1\right)\mathbb{N}_{0 }\right)\cap\left[\frac{1}{\alpha+1},\frac{R}{\alpha+1}+1\right).\]
### The major arcs
In this subsection we approximate, for some \(1+\frac{\alpha}{3}<\beta<1+\frac{\alpha}{2}\),
\[I_{n}:=\int_{|t|\leq\varrho_{n}^{\beta}}\exp(\Phi_{f}(\varrho_{n}+it)+int)dt,\]
where \(\alpha\) is the largest positive pole of \(L_{f}\). The following lemma can be shown using [15, SS4].
**Lemma 3.5**.: _Let \(f:\mathbb{N}\to\mathbb{N}_{0}\) satisfy the conditions of Theorem 1.4, \(\varrho_{n}\) solve (3.3), and \(N\in\mathbb{N}\). Then we have_
\[I_{n}=\sqrt{2\pi}G_{f}(\varrho_{n})\left(\frac{1}{\sqrt{\Phi_{f}^{\prime\prime}( \varrho_{n})}}+\sum_{2\leq k\leq\frac{3H(N+\alpha)}{2\alpha}}\frac{(2k)!\lambda _{2k}(\varrho_{n})}{2^{k}k!\Phi_{f}^{\prime\prime}(\varrho_{n})^{k+\frac{1}{2 }}}+O_{N}\left(\varrho_{n}^{N}\right)\right),\]
_where \(H:=\lceil\frac{N}{3(\beta-1-\frac{\alpha}{3})}\rceil+1\) and_
\[\lambda_{2k}(\varrho):=(-1)^{k}\sum_{h=1}^{H}\frac{1}{h!}\sum_{ \begin{subarray}{c}3\leq m_{1},\ldots,m_{h}\leq 3(N+\alpha)\\ m_{1}+\cdots+m_{h}=2k\end{subarray}}\prod_{j=1}^{h}\frac{\Phi_{f}^{(m_{j})}( \varrho)}{m_{j}!}.\]
The following lemma shows that the first term in Lemma 3.5 dominates the others; its proof follows with Lemma 2.5, Lemma 3.2, and Corollary 3.4 by a straightforward calculation.
**Lemma 3.6**.: _Let \(k\geq 2\) and assume the conditions as in Lemma 3.5. Then we have_
\[\frac{\lambda_{2k}(\varrho_{n})}{\Phi_{f}^{\prime\prime}(\varrho_{n})^{k+ \frac{1}{2}}}=\sum_{j=1}^{M}\frac{b_{j}}{n^{\eta_{j}}}+O_{R}\left(n^{-R+1+\left( k-\left\lfloor\frac{2k}{3}\right\rfloor+\frac{3}{2}\right)\frac{\alpha}{ \alpha+1}}\right),\]
_where the \(\eta_{j}\) run through_
\[\frac{\alpha+2}{2(\alpha+1)}+\frac{\alpha}{\alpha+1}\mathbb{N}_{0}+\left(- \sum_{\mu\in\mathcal{P}_{R}}\left(\frac{\mu+1}{\alpha+1}-1\right)\mathbb{N}_{ 0}\right)\cap\left[0,\frac{R+\alpha}{\alpha+1}\right).\]
We next use Lemma 2.5 and Corollary 3.4 to give an asymptotic expansion for \(G_{f}(\varrho_{n})\).
**Lemma 3.7**.: _Assume that \(f:\mathbb{N}\to\mathbb{N}_{0}\) satisfies the conditions of Theorem 1.4. Then, we have_
\[G_{f}(\varrho_{n})=\frac{e^{L_{f}^{\prime}(0)}n^{\frac{L_{f}(0)}{ \alpha+1}}}{a_{-\Phi_{f}^{\prime},1}^{\frac{L_{f}(0)}{\alpha+1}}}\exp\left( \frac{1}{\alpha}(\omega_{\alpha}\Gamma(\alpha+1)\zeta(\alpha+1))^{\frac{1}{ \alpha+1}}n^{\frac{\alpha}{\alpha+1}}+\sum_{j=2}^{M}C_{j}n^{\beta_{j}}\right)\] \[\times\left(1+\sum_{j=1}^{N}\frac{B_{j}}{n^{\delta_{j}}}+O_{R} \left(n^{-\frac{R}{\alpha+1}}\right)\right),\]
_where \(0\leq\beta_{M}<\cdots<\beta_{2}<\frac{\alpha}{\alpha+1}\) run through \(\mathcal{L}\) and \(0<\delta_{1}<\delta_{2}<\cdots<\delta_{N}\) through \(\mathcal{M}+\mathcal{N}\)._
Proof.: Let \(\phi(z):=\Phi_{f}(z)+L_{f}(0)\mathrm{Log}(z)\) and \(F:=\phi\circ\varrho\). By Lemma 3.2, Proposition 3.3, and Lemma 2.5 we find that
\[\Phi_{f}(\varrho_{n})+L_{f}(0)\log(\varrho_{n})=L_{f}^{\prime}(0)+\sum_{j=1}^ {N_{F}}\frac{a_{F,j}}{n^{\nu_{F,j}}}+O_{R}\left(n^{-\frac{R}{\alpha+1}}\right), \tag{3.7}\]
where \(\nu_{F,j}\) run through (the inclusion follows by Corollary 3.4)
\[\left(-\frac{1}{\alpha+1}\mathcal{P}_{R}+\sum_{j=2}^{N_{\varrho}} \left(\nu_{\varrho,j}-\frac{1}{\alpha+1}\right)\mathbb{N}_{0}\right)\cap\left( -\infty,\frac{R}{\alpha+1}\right)\] \[\subset\left(-\frac{1}{\alpha+1}\mathcal{P}_{R}-\sum_{\mu\in \mathcal{P}_{R}}\left(\frac{\mu+1}{\alpha+1}-1\right)\mathbb{N}_{0}\right) \cap\left(-\infty,\frac{R}{\alpha+1}\right). \tag{3.8}\]
Note that, again by Lemma 2.5 and Lemma 3.2, we obtain
\[a_{F,1}=a_{\phi,1}a_{\varrho,1}^{\nu_{\varrho,1}}=\frac{1}{\alpha}(\omega_{ \alpha}\Gamma(\alpha+1)\zeta(\alpha+1))^{\frac{1}{\alpha+1}}.\]
We split the sum in (3.7) into two parts: one containing the nonpositive exponents \(\nu_{F,1},\ldots,\nu_{F,M}\), say, and one containing the positive exponents \(\nu_{F,j}<\frac{R}{\alpha+1}\). Note that \(M\) is bounded and independent of \(R\). Exponentiating (3.7) yields
\[\exp(\Phi_{f}(\varrho_{n}))=\varrho_{n}^{-L_{f}(0)}e^{L_{f}^{\prime}(0)}\exp \left(\sum_{j=M+1}^{N_{F}}\frac{a_{F,j}}{n^{\nu_{F,j}}}+O_{R}\left(n^{-\frac{R} {\alpha+1}}\right)\right)\exp\left(\sum_{j=1}^{M}\frac{a_{F,j}}{n^{\nu_{F,j}} }\right).\]
Note that the positive \(\nu_{F,j}\) run through (3.8) with \(-\infty\) replaced by \(0\). By Lemma 2.4, we have
\[\exp\left(\sum_{j=M+1}^{N_{F}}\frac{a_{F,j}}{n^{\nu_{F,j}}}+O_{R}\left(n^{- \frac{R}{\alpha+1}}\right)\right)=1+\sum_{j=1}^{K}\frac{H_{j}}{n^{\varepsilon _{j}}}+O_{R}\left(n^{-\frac{R}{\alpha+1}}\right)\]
for some \(K\in\mathbb{N}\) and with exponents \(\varepsilon_{j}\) running through \(\mathcal{N}\). Recall that, by Corollary 3.4, we have \(\varrho_{n}\sim a_{\varrho,1}n^{-\frac{1}{\alpha+1}}\). Now set \(h(n):=n^{-\frac{L_{f}(0)}{\alpha+1}}\varrho_{n}^{-L_{f}(0)}\). A straightforward calculation using Corollary 3.4 shows that \(h\in\mathcal{K}(\frac{R+\alpha}{\alpha+1})\) with exponent set \((-\sum_{\mu\in\mathcal{P}_{R}}(\frac{\mu+1}{\alpha+1}-1)\mathbb{N}_{0})\cap[0, \frac{R+\alpha}{\alpha+1})\subset\mathcal{M}\) and \(a_{h,1}=a_{-\Phi_{f}^{\prime},1}^{-\frac{L_{f}(0)}{\alpha+1}}\). By Lemma 2.3 (2), we obtain, for some \(N\in\mathbb{N}\), \(B_{j}\in\mathbb{C}\), and \(\delta_{j}\) running through \(\mathcal{M}+\mathcal{N}\),
\[h(n)\left(1+\sum_{j=1}^{K}\frac{H_{j}}{n^{\varepsilon_{j}}}+O_{R}\left(n^{- \frac{R}{\alpha+1}}\right)\right)=a_{h,1}\left(1+\sum_{j=1}^{N}\frac{B_{j}}{n ^{\delta_{j}}}+O_{R}\left(n^{-\frac{R}{\alpha+1}}\right)\right).\]
Setting \(C_{j}:=a_{F,j}\) for \(1\leq j\leq M\), the lemma follows.
Another important step for the proof of our main theorem is the following lemma.
**Lemma 3.8**.: _Let \(f:\mathbb{N}\to\mathbb{N}_{0}\) satisfy the conditions of Theorem 1.4. Then we have, as \(n\to\infty\),_
\[e^{n\varrho_{n}}=\exp\left((\omega_{\alpha}\Gamma(\alpha+1)\zeta(\alpha+1))^{ \frac{1}{\alpha+1}}n^{\frac{\alpha}{\alpha+1}}+\sum_{j=2}^{M}a_{\varrho,j}n^{ \eta_{j}}\right)\left(1+\sum_{j=1}^{N}\frac{D_{j}}{n^{\mu_{j}}}+O_{R}\left(n^ {-\frac{R}{\alpha+1}}\right)\right)\]
_for some \(1\leq M\leq N_{\varrho}\), with \(\frac{\alpha}{\alpha+1}>\eta_{2}>\cdots>\eta_{M}\geq 0\) running through \(\mathcal{L}\) and the \(\mu_{j}\) through \(\mathcal{N}\)._
Proof.: Let \(g(n):=n\varrho_{n}\). By Corollary 3.4 we have \(g\in\mathcal{K}(\frac{R}{\alpha+1})\) with exponent set
\[\{\nu_{g,j}:1\leq j\leq N_{\varrho}\}=\left(-1+\frac{1}{\alpha+1}-\sum_{\mu \in\mathcal{P}_{R}}\left(\frac{\mu+1}{\alpha+1}-1\right)\mathbb{N}_{0}\right) \cap\left[-1+\frac{1}{\alpha+1},\frac{R}{\alpha+1}\right).\]
Hence, for some \(1\leq M\leq N_{\varrho}\), we obtain
\[e^{n\varrho_{n}}=\exp\left(a_{-\Phi_{f}^{\prime},1}^{\frac{1}{\alpha+1}}n^{ \frac{\alpha}{\alpha+1}}+\sum_{j=2}^{M}\frac{a_{\varrho,j}}{n^{\nu_{g,j}}} \right)\exp\left(\sum_{j=M+1}^{N_{\varrho}}\frac{a_{\varrho,j}}{n^{\nu_{g,j}} }+O_{R}\left(n^{-\frac{R}{\alpha+1}}\right)\right)\]
with \(-\frac{\alpha}{\alpha+1}<\nu_{g,2}<\cdots<\nu_{g,M}\leq 0<\nu_{g,M+1}<\cdots<\nu_{g,N_{ \varrho}}\). By Lemma 3.2 we obtain \(a_{-\Phi_{f}^{\prime},1}^{\frac{1}{\alpha+1}}=(\omega_{\alpha}\Gamma(\alpha+1) \zeta(\alpha+1))^{\frac{1}{\alpha+1}}\). Note that the exponents \(0<\nu_{g,M+1}<\cdots<\nu_{g,N_{\varrho}}\) run through
\[\left(-\frac{\alpha}{\alpha+1}-\sum_{\mu\in\mathcal{P}_{R}}\left(\frac{\mu+1} {\alpha+1}-1\right)\mathbb{N}_{0}\right)\cap\left(0,\frac{R}{\alpha+1}\right).\]
By Lemma 2.4, \(\exp(\sum_{j=M+1}^{N_{\varrho}}\frac{a_{\varrho,j}}{n^{\nu_{g,j}}}+O_{R}(n^{-\frac{R}{\alpha+1}}))\) is in \(\mathcal{K}(\frac{R}{\alpha+1})\), with exponent set
\[\left\{\sum_{j=1}^{K}b_{j}\theta_{j}:K,b_{j}\in\mathbb{N}_{0},\ \theta_{j}\in \left(-\frac{\alpha}{\alpha+1}-\sum_{\mu\in\mathcal{P}_{R}}\left(\frac{\mu+1} {\alpha+1}-1\right)\mathbb{N}_{0}\right)\cap\left(0,\frac{R}{\alpha+1}\right) \right\}.\]
As \(\alpha\in\mathcal{P}_{R}\), this is a subset of \(\mathcal{N}\), so the above exponents are given by \(\mathcal{N}\), proving the lemma.
The following corollary is very helpful to prove our main theorem.
**Corollary 3.9**.: _Let \(f\colon\mathbb{N}\to\mathbb{N}_{0}\) satisfy the conditions of Theorem 1.4. Then we have_
\[e^{n\varrho_{n}}G_{f}(\varrho_{n})=\frac{e^{L_{f}^{\prime}(0)}n^{\frac{L_{f}( 0)}{\alpha+1}}}{a_{-\Phi_{f}^{\prime},1}^{\frac{L_{f}(0)}{\alpha+1}}}\exp\left( A_{1}n^{\frac{\alpha}{\alpha+1}}+\sum_{j=2}^{M}A_{j}n^{\alpha_{j}}\right) \left(1+\sum_{j=1}^{N}\frac{E_{j}}{n^{\eta_{j}}}+O_{R}\left(n^{-\frac{R}{ \alpha+1}}\right)\right),\]
_with \(A_{1}\) defined in (1.11), \(\frac{\alpha}{\alpha+1}>\alpha_{2}>\cdots>\alpha_{M}\geq 0\) running through \(\mathcal{L}\), and \(\eta_{j}\) through \(\mathcal{M}+\mathcal{N}\)._
## 4. Proof of Theorem 1.4
### The general case
The following lemma follows by a straightforward calculation, using (3.1) and Lemmas 3.1, 3.5, and 3.6.
**Lemma 4.1**.: _Let \(f:\mathbb{N}\to\mathbb{N}_{0}\) satisfy the conditions of Theorem 1.4. Then we have_
\[p_{f}(n)=\frac{e^{n\varrho_{n}}G_{f}(\varrho_{n})}{\sqrt{2\pi}}\left(\sum_{j=1 }^{M}\frac{d_{j}}{n^{\nu_{j}}}+O_{L,R}\left(n^{-\min\left\{\frac{L+1}{\alpha+ 1},\frac{R+\alpha}{\alpha+1}+\frac{\alpha+2}{2(\alpha+1)}\right\}}\right)\right)\]
_for some \(M\in\mathbb{N}\), \(d_{1}=\frac{1}{\sqrt{\alpha+1}}(\omega_{\alpha}\Gamma(\alpha+1)\zeta(\alpha+1 ))^{\frac{1}{2(\alpha+1)}}\), and the \(\nu_{j}\) run through_
\[\frac{\alpha+2}{2(\alpha+1)}+\frac{\alpha}{\alpha+1}\mathbb{N}_{0}+\left(- \sum_{\mu\in\mathcal{P}_{R}}\left(\frac{\mu+1}{\alpha+1}-1\right)\mathbb{N}_{0 }\right)\cap\left[0,\frac{R+\alpha}{\alpha+1}\right).\]
_In particular, we have \(\nu_{1}=\frac{\alpha+2}{2(\alpha+1)}\)._
We prove the following lemma.
**Lemma 4.2**.: _Assume that \(f\) satisfies the conditions of Theorem 1.4 and that \(L_{f}\) has only one positive pole \(\alpha\). Then we have_
\[n\varrho_{n}+\Phi_{f}(\varrho_{n})=(\omega_{\alpha}\Gamma(\alpha+1)\zeta( \alpha+1))^{\frac{1}{\alpha+1}}\left(1+\frac{1}{\alpha}\right)n^{\frac{\alpha} {\alpha+1}}-L_{f}(0)\log(\varrho_{n})+L_{f}^{\prime}(0)+o(1).\]
Proof.: By Lemma 3.2, we have
\[\Phi_{f}(\varrho_{n})=\frac{\omega_{\alpha}\Gamma(\alpha)\zeta(\alpha+1)}{ \varrho_{n}^{\alpha}}-L_{f}(0)\log(\varrho_{n})+L_{f}^{\prime}(0)+O\left( \varrho_{n}^{R_{0}}\right), \tag{4.1}\]
where
\[R_{0}:=\begin{cases}-\max\limits_{\nu\in\mathcal{P}_{R}\cap(-R,0)}\nu&\text{if }\mathcal{P}_{R}\cap(-R,0)\neq\emptyset,\\ R&\text{otherwise}.\end{cases} \tag{4.2}\]
To show the lemma, we need an expansion for \(\varrho_{n}\). We have, by (3.3) and again by Lemma 3.2,
\[-\Phi_{f}^{\prime}(\varrho_{n})=\frac{\omega_{\alpha}\Gamma(\alpha+1)\zeta( \alpha+1)}{\varrho_{n}^{\alpha+1}}+\frac{L_{f}(0)}{\varrho_{n}}+O\left( \varrho_{n}^{R_{0}-1}\right).\]
By Corollary 3.4, we have an expansion for \(\varrho_{n}\) with an error \(o(1)\). We iteratively find the first terms. By Corollary 3.4 we have \(\varrho_{n}\sim a_{-\Phi^{\prime}_{f},1}^{\frac{1}{\alpha+1}}n^{-\frac{1}{\alpha+1}}\), as \(n\to\infty\). We next determine the second order term in \(\varrho_{n}=\frac{a_{-\Phi^{\prime}_{f},1}^{\frac{1}{\alpha+1}}}{n^{\frac{1}{\alpha+1}}}+\frac{K_{2}}{n^{\kappa_{2}}}+o(n^{-\kappa_{2}})\) for some \(\kappa_{2}>\frac{1}{\alpha+1}\) and \(K_{2}\in\mathbb{C}\). We choose \(\kappa\) in
\[n\left(1+\frac{K_{2}}{a_{-\Phi^{\prime}_{f},1}^{\frac{1}{\alpha+1}}n^{\kappa_{ 2}-\frac{1}{\alpha+1}}}\right)^{-\alpha-1}+\frac{L_{f}(0)}{a_{-\Phi^{\prime}_ {f},1}^{\frac{1}{\alpha+1}}}n^{\frac{1}{\alpha+1}}\left(1+\frac{K_{2}}{a_{- \Phi^{\prime}_{f},1}^{\frac{1}{\alpha+1}}n^{\kappa_{2}-\frac{1}{\alpha+1}}} \right)^{-1}=n+O(n^{\kappa})\]
as small as possible. One finds that
\[\frac{(\alpha+1)K_{2}}{a_{-\Phi^{\prime}_{f},1}^{\frac{1}{\alpha+1}}}n^{1- \kappa_{2}+\frac{1}{\alpha+1}}=\frac{L_{f}(0)}{a_{-\Phi^{\prime}_{f},1}^{ \frac{1}{\alpha+1}}}n^{\frac{1}{\alpha+1}},\]
and hence
\[\varrho_{n}=\frac{a_{-\Phi^{\prime}_{f},1}^{\frac{1}{\alpha+1}}}{n^{\frac{1}{ \alpha+1}}}+\frac{L_{f}(0)}{(\alpha+1)n}+o\left(\frac{1}{n}\right). \tag{4.3}\]
Plugging (4.3) into \(\Phi_{f}\) leads, by (4.1), to
\[\Phi_{f}\left(\frac{a_{-\Phi^{\prime}_{f},1}^{\frac{1}{\alpha+1}}}{n^{\frac{1} {\alpha+1}}}+\frac{L_{f}(0)}{(\alpha+1)n}+o\left(\frac{1}{n}\right)\right)= \frac{a_{-\Phi^{\prime}_{f},1}^{\frac{1}{\alpha+1}}}{\alpha}n^{\frac{\alpha} {\alpha+1}}-\frac{L_{f}(0)}{\alpha+1}-L_{f}(0)\log(\varrho_{n})+L^{\prime}_{f} (0)+o(1).\]
As a result, using (4.3), we conclude the claim.
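As a sanity check (our own, not part of the proof), the two-term expansion (4.3) can be tested in the classical case \(f\equiv 1\) of ordinary partitions, where \(L_{f}=\zeta\), \(\alpha=1\), \(\omega_{1}=1\), so \(c_{1}=\zeta(2)=\frac{\pi^{2}}{6}\) and \(L_{f}(0)=\zeta(0)=-\frac{1}{2}\), and (4.3) predicts \(\varrho_{n}=\frac{\pi}{\sqrt{6n}}-\frac{1}{4n}+o(\frac{1}{n})\). Here \(-\Phi_{f}^{\prime}(\varrho)=\sum_{m\geq 1}\frac{m}{e^{m\varrho}-1}\); the truncation threshold in the sketch below is an assumption.

```python
# Numerical check of (4.3) for ordinary partitions (illustrative, not from
# the text): solve -Phi_f'(rho) = n and compare with pi/sqrt(6n) - 1/(4n).
import numpy as np
from scipy.optimize import brentq

def minus_phi_prime(rho, terms=200000):
    m = np.arange(1.0, terms + 1)
    x = m * rho
    mask = x < 50.0                      # dropped terms are O(e^{-50})
    return np.sum(m[mask] / np.expm1(x[mask]))

for n in [10**3, 10**4, 10**5]:
    rho_n = brentq(lambda r: minus_phi_prime(r) - n, 1e-6, 1.0)
    pred = np.pi / np.sqrt(6 * n) - 1 / (4 * n)
    print(n, rho_n, pred, n * abs(rho_n - pred))   # last column tends to 0
```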
We are now ready to prove Theorem 1.4.
Proof of Theorem 1.4.: Theorem 1.4 follows from Lemmas 2.3 (2), 3.1, 3.5, 3.7, 4.1, 4.2 and Corollaries 3.4 and 3.9.
### The case of two positive poles of \(L_{f}\)
If \(\alpha>0\) is the only positive pole of \(L_{f}\), then we can calculate the single term in the exponential in the asymptotic of \(p_{f}(n)\) explicitly, by Theorem 1.4. In this subsection we assume that \(L_{f}\) has exactly two positive simple poles, \(\alpha\) and \(\beta\). In this case, Lemma 3.2 with \(k=1\) gives
\[-\Phi^{\prime}_{f}(z)=\frac{c_{1}}{z^{\alpha+1}}+\frac{c_{2}}{z^{\beta+1}}+ \frac{c_{3}}{z}+O_{R}\left(|z|^{R_{0}-1}\right)\]
with \(R_{0}\) from (4.2). Above we set \(c_{j}:=a_{-\Phi^{\prime}_{f},j}\) for \(1\leq j\leq 3\), i.e., by Lemma 3.2
\[c_{1}=\omega_{\alpha}\Gamma(\alpha+1)\zeta(\alpha+1),\quad c_{2}=\omega_{ \beta}\Gamma(\beta+1)\zeta(\beta+1),\quad c_{3}=L_{f}(0). \tag{4.4}\]
In the next lemma, we approximate the saddle point in this special situation.
**Lemma 4.3**.: _Let \(f\) satisfy the conditions of Theorem 1.4. Additionally assume that \(L_{f}\) has exactly two positive poles \(\alpha\) and \(\beta\) that satisfy \(\frac{\ell+1}{\ell}\beta<\alpha\leq\frac{\ell}{\ell-1}\beta\) for some \(\ell\in\mathbb{N}\), where we treat the case \(\ell=1\) simply as \(2\beta<\alpha\). Then there exists \(0<r\leq\frac{R}{\alpha+1}\) such that_
\[\varrho_{n}=\sum_{j=1}^{\ell+1}\frac{K_{j}}{n^{(j-1)\left(1-\frac{\beta+1}{ \alpha+1}\right)+\frac{1}{\alpha+1}}}+\frac{c_{3}}{(\alpha+1)n}+O_{R}\left(n^ {-r-1}\right) \tag{4.5}\]
_for some constants \(K_{j}\) independent of \(n\) and \(c_{3}\) as in (4.4). In particular, we have_
\[K_{1} =c_{1}^{\frac{1}{\alpha+1}},\ \ K_{2}=\frac{c_{2}}{(\alpha+1)c_{1}^{ \frac{\beta}{\alpha+1}}},\ \ K_{3}=\frac{c_{2}^{2}(\alpha-2\beta)}{2(\alpha+1)^{2}c_{1}^{\frac{2\beta+1} {\alpha+1}}},\ \ K_{4}=\frac{c_{2}^{3}\left(2\alpha^{2}-9\alpha\beta-2\alpha+9\beta^{2}+3 \beta\right)}{6(\alpha+1)^{3}c_{1}^{\frac{3\beta+2}{\alpha+1}}},\] \[K_{5} =\frac{c_{2}^{4}(6\alpha^{3}-44\alpha^{2}\beta-15\alpha^{2}+96 \alpha\beta^{2}+56\alpha\beta+6\alpha-64\beta^{3}-48\beta^{2}-8\beta)}{24( \alpha+1)^{4}c_{1}^{\frac{4\beta+3}{\alpha+1}}}.\]
Proof.: By Corollary 3.4, the exponents of \(\varrho_{n}\) that are at most \(1\) are given by combinations
\[\frac{1}{\alpha+1}+(j-1)\left(1-\frac{\beta+1}{\alpha+1}\right)+m\left(1- \frac{1}{\alpha+1}\right)\leq 1,\]
with \(j\in\mathbb{N}\) and \(m\in\mathbb{N}_{0}\). A straightforward calculation shows that \(\frac{\ell+1}{\ell}\beta<\alpha\leq\frac{\ell}{\ell-1}\beta\) if and only if
\[0<\frac{1}{\alpha+1}+(j-1)\left(1-\frac{\beta+1}{\alpha+1}\right)\leq 1\]
for all \(1\leq j\leq\ell+1\) but not for \(j>\ell+1\). Together with the error term induced by Corollary 3.4, (4.5) follows. Assuming \(\ell\geq 5\), \(K_{1}\) to \(K_{5}\) and the term \(\frac{c_{3}}{(\alpha+1)n}\) can be determined iteratively.
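The coefficients \(K_{j}\) can be probed numerically by prescribing \(-\Phi_{f}^{\prime}\) exactly. The sketch below (an illustration with assumed constants \(c_{1},c_{2},c_{3}\), not taken from the paper) uses \(\alpha=\frac{1}{2}\) and \(\beta=\frac{1}{3}\), so that \(\ell=3\); note that the bracket \(2\alpha^{2}-9\alpha\beta-2\alpha+9\beta^{2}+3\beta\) vanishes for this pair, so \(K_{4}=0\) here.

```python
# Sketch checking Lemma 4.3 (assumed synthetic saddle equation, not from the
# text): -Phi_f'(x) = c1*x^{-(alpha+1)} + c2*x^{-(beta+1)} + c3/x exactly.
from scipy.optimize import brentq

alpha, beta = 0.5, 1.0 / 3.0          # so ell = 3 in Lemma 4.3
c1, c2, c3 = 2.0, 1.5, 0.7            # arbitrary positive constants

K1 = c1 ** (1 / (alpha + 1))
K2 = c2 / ((alpha + 1) * c1 ** (beta / (alpha + 1)))
K3 = c2**2 * (alpha - 2 * beta) / (2 * (alpha + 1) ** 2
     * c1 ** ((2 * beta + 1) / (alpha + 1)))
K4 = c2**3 * (2 * alpha**2 - 9 * alpha * beta - 2 * alpha + 9 * beta**2
     + 3 * beta) / (6 * (alpha + 1) ** 3 * c1 ** ((3 * beta + 2) / (alpha + 1)))

def F(x, n):
    return c1 * x ** -(alpha + 1) + c2 * x ** -(beta + 1) + c3 / x - n

for n in [10**6, 10**9, 10**12]:
    rho_n = brentq(F, 1e-12, 1e3, args=(n,))
    pred = (K1 * n ** (-2 / 3) + K2 * n ** (-7 / 9) + K3 * n ** (-8 / 9)
            + (K4 + c3 / (alpha + 1)) / n)
    print(n, rho_n, pred, n * abs(rho_n - pred))   # error is o(1/n)
```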
We are now ready to prove asymptotic formulas if \(L_{f}\) has exactly two positive poles.
**Theorem 4.4**.: _Assume that \(f:\mathbb{N}\to\mathbb{N}_{0}\) satisfies the conditions of Theorem 1.4 and that \(L_{f}\) has exactly two positive poles \(\alpha>\beta\), such that \(\frac{\ell+1}{\ell}\beta<\alpha\leq\frac{\ell}{\ell-1}\beta\) for some \(\ell\in\mathbb{N}\). Then we have_
\[p_{f}(n)=\frac{C}{n^{b}}\exp\left(A_{1}n^{\frac{\alpha}{\alpha+1 }}+A_{2}n^{\frac{\beta}{\alpha+1}}+\sum_{k=3}^{\ell+1}A_{k}n^{\frac{(k-1)\beta }{\alpha+1}+\frac{k-2}{\alpha+1}+2-k}\right)\\ \times\left(1+\sum_{j=2}^{M_{1}}\frac{B_{j}}{n^{\nu_{j}}}+O_{L,R} \left(n^{-\min\left\{\frac{2L-\alpha}{2(\alpha+1)},\frac{R}{\alpha+1}\right\}} \right)\right),\qquad(n\to\infty),\]
_with_
\[A_{1}:=\left(\omega_{\alpha}\Gamma(\alpha+1)\zeta(\alpha+1)\right)^{\frac{1}{ \alpha+1}}\left(1+\frac{1}{\alpha}\right),\qquad A_{2}:=\frac{\omega_{\beta} \Gamma(\beta)\zeta(\beta+1)}{\left(\omega_{\alpha}\Gamma(\alpha+1)\zeta( \alpha+1)\right)^{\frac{\beta}{\alpha+1}}}, \tag{4.6}\]
_and for all \(k\geq 3\)_
\[A_{k}:=K_{k}+\frac{c_{1}^{\frac{1}{\alpha+1}}}{\alpha}\sum_{m=1} ^{\ell}\binom{-\alpha}{m}\sum_{\begin{subarray}{c}0\leq j_{1},\ldots,j_{\ell} \leq m\\ j_{1}+\ldots+j_{\ell}=m\\ j_{1}+2j_{2}+\ldots+\ell j_{\ell}=k-1\end{subarray}}\binom{m}{j_{1},j_{2}, \ldots,j_{\ell}}\frac{K_{2}^{j_{1}}\cdots K_{\ell+1}^{j_{\ell}}}{c_{1}^{\frac{ m}{\alpha+1}}}\\ +\frac{c_{2}}{\beta c_{1}^{\frac{\beta}{\alpha+1}}}\sum_{m=1}^{ \ell}\binom{-\beta}{m}\sum_{\begin{subarray}{c}0\leq j_{1},\ldots,j_{\ell} \leq m\\ j_{1}+\ldots+j_{\ell}=m\\ j_{1}+2j_{2}+\ldots+\ell j_{\ell}=k-2\end{subarray}}\binom{m}{j_{1},j_{2}, \ldots,j_{\ell}}\frac{K_{2}^{j_{1}}\cdots K_{\ell+1}^{j_{\ell}}}{c_{1}^{\frac{ m}{\alpha+1}}}.\]
_Here, \(C\) and \(b\) are defined in (1.11) and (1.12), the \(\nu_{j}\) run through \(\mathcal{M}+\mathcal{N}\), the \(K_{j}\) are given in Lemma 4.3, and \(c_{1}\), \(c_{2}\), and \(c_{3}\) are given in (4.4)._
Proof.: Assume that \(g:\mathbb{N}\to\mathbb{C}\) has an asymptotic expansion as \(n\to\infty\) and denote by \([g(n)]_{*}\) the part with nonnegative exponents. With Lemmas 3.2 and 4.1 we obtain, using that \(L_{f}\) has exactly two positive poles in \(\alpha\) and \(\beta\),
\[p_{f}(n)=\frac{C}{n^{b}}\exp\left(\left[n\varrho_{n}+\frac{c_{1}}{\alpha\varrho _{n}^{\alpha}}+\frac{c_{2}}{\beta\varrho_{n}^{\beta}}\right]_{*}\right)\left(1 +\sum_{j=2}^{M_{1}}\frac{a_{j}}{n^{\delta_{j}}}+O_{L,R}\left(n^{-\min\left\{ \frac{2L-\alpha}{2(\alpha+1)},\frac{R}{\alpha+1}\right\}}\right)\right)\]
with the \(\delta_{j}\) running through \(\mathcal{M}\). With the Binomial Theorem and Lemma 4.3, we find
\[\frac{c_{1}}{\alpha\varrho_{n}^{\alpha}}=\frac{c_{1}^{\frac{1}{\alpha+1}}}{ \alpha}n^{\frac{\alpha}{\alpha+1}}\left(1+\sum_{m\geq 1}\binom{-\alpha}{m} \left(\sum_{j=2}^{\ell+1}\frac{K_{j}c_{1}^{-\frac{1}{\alpha+1}}}{n^{(j-1) \left(1-\frac{\beta+1}{\alpha+1}\right)}}+\frac{c_{3}c_{1}^{-\frac{1}{\alpha+ 1}}}{(\alpha+1)n^{\frac{\alpha}{\alpha+1}}}+o\left(n^{-\frac{\alpha}{\alpha+ 1}}\right)\right)^{m}\right). \tag{4.7}\]
By definition, \([\frac{c_{1}}{\alpha\varrho_{n}^{\alpha}}]_{*}\) is the part of the expansion of \(\frac{c_{1}}{\alpha\varrho_{n}^{\alpha}}\) involving nonnegative powers of \(n\), i.e., for \(m\geq 2\) in the sum on the right of (4.7) we can ignore the term
\[\frac{c_{3}}{(\alpha+1)c_{1}^{\frac{1}{\alpha+1}}n^{\frac{\alpha}{\alpha+1}}} +o\left(n^{-\frac{\alpha}{\alpha+1}}\right).\]
Applying the Multinomial Theorem to (4.7) gives
\[\frac{c_{1}}{\alpha\varrho_{n}^{\alpha}}=\frac{c_{1}^{\frac{1}{ \alpha+1}}}{\alpha}n^{\frac{\alpha}{\alpha+1}}-\frac{c_{3}}{\alpha+1}+\frac{c_ {1}^{\frac{1}{\alpha+1}}}{\alpha}\sum_{m=1}^{\ell}\binom{-\alpha}{m}\sum_{ \begin{subarray}{c}0\leq j_{1},j_{2},\ldots,j_{\ell}\leq m\\ j_{1}+\cdots+j_{\ell}=m\end{subarray}}\binom{m}{j_{1},j_{2},\ldots,j_{\ell}} \frac{K_{2}^{j_{1}}\cdots K_{\ell+1}^{j_{\ell}}}{c_{1}^{\frac{m}{\alpha+1}}}\\ \times n^{\frac{(j_{1}+2j_{2}+\cdots+\ell j_{\ell})\beta}{\alpha+ 1}+\frac{j_{1}+2j_{2}+\cdots+\ell j_{\ell}-1}{\alpha+1}-(j_{1}+2j_{2}+\cdots+ \ell j_{\ell}-1)}+o(1). \tag{4.8}\]
Similarly, we have
\[\frac{c_{2}}{\beta\varrho_{n}^{\beta}}=\frac{c_{2}}{\beta c_{1}^{\frac{\beta}{\alpha+1}}}n^{\frac{\beta}{\alpha+1}}+\frac{c_{2}}{\beta c_{1}^{\frac{\beta}{\alpha+1}}}\sum_{m=1}^{\ell}\binom{-\beta}{m}\sum_{\begin{subarray}{c}0\leq j_{1},j_{2},\ldots,j_{\ell}\leq m\\ j_{1}+\cdots+j_{\ell}=m\end{subarray}}\binom{m}{j_{1},j_{2},\ldots,j_{\ell}}\frac{K_{2}^{j_{1}}\cdots K_{\ell+1}^{j_{\ell}}}{c_{1}^{\frac{m}{\alpha+1}}}\\ \times n^{\frac{(j_{1}+2j_{2}+\cdots+\ell j_{\ell}+1)\beta}{\alpha+1}+\frac{j_{1}+2j_{2}+\cdots+\ell j_{\ell}}{\alpha+1}-(j_{1}+2j_{2}+\cdots+\ell j_{\ell})}+o(1). \tag{4.9}\]
Finally, we obtain, with Lemma 4.3,
\[[n\varrho_{n}]_{*}=K_{1}n^{\frac{\alpha}{\alpha+1}}+\sum_{m=1}^{\ell}K_{m+1}n ^{\frac{m\beta}{\alpha+1}+\frac{m-1}{\alpha+1}-(m-1)}+\frac{c_{3}}{\alpha+1}. \tag{4.10}\]
Combining (4.8), (4.9), and (4.10), we find that
\[\left[n\varrho_{n}+\frac{c_{1}}{\alpha\varrho_{n}^{\alpha}}+\frac{c_{2}}{ \beta\varrho_{n}^{\beta}}\right]_{*}=\left(1+\frac{1}{\alpha}\right)c_{1}^{ \frac{1}{\alpha+1}}n^{\frac{\alpha}{\alpha+1}}+\frac{c_{2}}{\beta c_{1}^{ \frac{\beta}{\alpha+1}}}n^{\frac{\beta}{\alpha+1}}+\sum_{k=2}^{\ell}A_{k+1}n ^{\frac{k\beta}{\alpha+1}+\frac{k-1}{\alpha+1}-(k-1)},\]
where
\[A_{k}=K_{k}+\frac{c_{1}^{\frac{1}{\alpha+1}}}{\alpha}\sum_{m=1}^{\ell}\binom{- \alpha}{m}\sum_{\begin{subarray}{c}0\leq j_{1},j_{2},\ldots,j_{\ell}\leq m\\ j_{1}+\cdots+j_{\ell}=m\\ j_{1}+2j_{2}+\cdots+\ell j_{\ell}=k-1\end{subarray}}\binom{m}{j_{1},j_{2}, \ldots,j_{\ell}}\frac{K_{2}^{j_{1}}\cdots K_{\ell+1}^{j_{\ell}}}{c_{1}^{\frac{m}{ \alpha+1}}}\]
\[+\frac{c_{2}}{\beta c_{1}^{\frac{\beta}{\alpha+1}}}\sum_{m=1}^{\ell}\binom{-\beta}{m}\sum_{\begin{subarray}{c}0\leq j_{1},j_{2},\ldots,j_{\ell}\leq m\\ j_{1}+\cdots+j_{\ell}=m\\ j_{1}+2j_{2}+\cdots+\ell j_{\ell}=k-2\end{subarray}}\binom{m}{j_{1},j_{2},\ldots,j_{\ell}}\frac{K_{2}^{j_{1}}\cdots K_{\ell+1}^{j_{\ell}}}{c_{1}^{\frac{m}{\alpha+1}}}.\]
Note that we have by definition of \(c_{1}\), \(c_{2}\) (see (4.4)), \(K_{1}\), and \(K_{2}\) (see Lemma 4.3),
\[A_{1} =\left(1+\frac{1}{\alpha}\right)c_{1}^{\frac{1}{\alpha+1}}=\left( 1+\frac{1}{\alpha}\right)(\omega_{\alpha}\Gamma(\alpha+1)\zeta(\alpha+1))^{ \frac{1}{\alpha+1}},\] \[A_{2} =\frac{c_{2}}{\beta c_{1}^{\frac{\beta}{a+1}}}=\frac{\omega_{ \beta}\Gamma(\beta)\zeta(\beta+1)}{(\omega_{\alpha}\Gamma(\alpha+1)\zeta( \alpha+1))^{\frac{\beta}{\alpha+1}}},\]
which gives (4.6). Hence we indeed obtain, as \(n\to\infty\), for suitable \(M_{1}\in\mathbb{N}\)
\[p_{f}(n) =\frac{C}{n^{b}}\exp\left(A_{1}n^{\frac{\alpha}{\alpha+1}}+A_{2}n ^{\frac{\beta}{\alpha+1}}+\sum_{k=3}^{\ell+1}A_{k}n^{\frac{(k-1)\beta}{\alpha +1}+\frac{k-2}{\alpha+1}-(k-2)}\right)\] \[\qquad\qquad\qquad\qquad\times\left(1+\sum_{j=2}^{M_{1}}\frac{B_{ j}}{n^{\nu_{j}}}+O_{L,R}\left(n^{-\min\left\{\frac{2L-\alpha}{2(\alpha+1)}, \frac{R}{\alpha+1}\right\}}\right)\right),\]
where the \(\nu_{j}\) run, as in Theorem 1.4, through \(\mathcal{M}+\mathcal{N}\). This proves the theorem.
## 5. Proofs of Theorems 1.1, 1.2, and 1.3
We require the zeta function associated to a polynomial \(P\),
\[Z_{P}(s):=\sum_{n\geq 1}\frac{1}{P(n)^{s}}\]
with \(P(n)>0\) for \(n\in\mathbb{N}\). In particular, we consider \(P=P_{k}\), where
\[P_{k}(w):=\frac{(k-2)w^{2}-(k-4)w}{2}.\]
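Concretely, \(P_{k}(n)\) runs through the \(k\)-gonal numbers. The following snippet (our own illustration; the helper `Z_trunc` is a hypothetical truncation of \(Z_{P_{k}}\)) lists the first values for \(k=3,4,5\).

```python
# P_k(n) produces the k-gonal numbers: triangular (k=3), squares (k=4),
# pentagonal (k=5); the division is exact since (k-2)n^2 - (k-4)n is even.
def P(k, w):
    return ((k - 2) * w * w - (k - 4) * w) // 2

print([P(3, n) for n in range(1, 7)])   # [1, 3, 6, 10, 15, 21]
print([P(4, n) for n in range(1, 7)])   # [1, 4, 9, 16, 25, 36]
print([P(5, n) for n in range(1, 7)])   # [1, 5, 12, 22, 35, 51]

def Z_trunc(k, s, N=10**5):
    # truncation of Z_{P_k}(s); the series converges for Re(s) > 1/2
    return sum(P(k, n) ** (-s) for n in range(1, N + 1))
```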
The following lemma ensures that all the \(P_{k}\) satisfy (P1) with \(L\) arbitrarily large.
**Lemma 5.1**.: _Let \(k\geq 3\) be an integer and let_
\[\Lambda^{[k]}:=\left\{P_{k}(n):n\in\mathbb{N}\right\}.\]
_For every prime \(p\), we have \(|\Lambda^{[k]}\setminus(\Lambda^{[k]}\cap p\mathbb{N})|=\infty\)._
We next show that (P2) and (P3) hold.
**Proposition 5.2**.: _Let \(k\in\mathbb{N}\) with \(k\geq 3\)._
1. _The function_ \(Z_{P_{k}}\) _has a meromorphic continuation to_ \(\mathbb{C}\) _with at most simple poles in_ \(\frac{1}{2}-\mathbb{N}_{0}\)_. The positive pole lies in_ \(s=\frac{1}{2}\)_._
2. _We have_ \(Z_{P_{k}}(s)\ll Q_{k}(|\mathrm{Im}(s)|)\) _as_ \(|\mathrm{Im}(s)|\to\infty\) _for some polynomial_ \(Q_{k}\)_._
Proof.: (1) The meromorphic continuation of \(Z_{P_{k}}\) to \(\mathbb{C}\) follows by [27, Theorem B]. By [27, Theorem A (ii)] the only possible poles (of order at most one) are located at \(\frac{1}{2}-\frac{1}{2}\mathbb{N}_{0}\). Holomorphicity in \(-\mathbb{N}_{0}\) is a direct consequence of [27, Theorem C]. Finally, note that \(P_{k}(n)\ll_{k}n^{2}\). Thus, as \(x\to\infty\),
\[\sum_{1\leq n\leq x}\frac{1}{P_{k}(n)^{\frac{1}{2}}}\gg_{k}\sum_{1\leq n\leq x }\frac{1}{n}.\]
This proves the existence of a pole in \(s=\frac{1}{2}\), completing the proof.
(2) This result follows directly by [27, Proposition 1 (iii)].
To apply Theorem 1.4, it remains to compute \(Z_{P_{k}}(0)\) and \(Z^{\prime}_{P_{k}}(0)\), as well as \(\operatorname{Res}_{s=\frac{1}{2}}Z_{P_{k}}(s)\).
**Proposition 5.3**.: _Let \(k\in\mathbb{N}\) with \(k\geq 3\)._
1. _We have_ \(Z_{P_{k}}(0)=\frac{1}{2-k}\) _and_ \[Z^{\prime}_{P_{k}}(0)=\frac{\log\left(\frac{k-2}{2}\right)}{k-2}+\log\left( \Gamma\left(\frac{2}{k-2}\right)\right)-\log(2\pi).\]
2. _We have_ \(\operatorname{Res}_{s=\frac{1}{2}}Z_{P_{k}}(s)=\sqrt{\frac{1}{2(k-2)}}\)_._
Proof.: (1) Since the roots of \(P_{k}\) are not in \(\mathbb{R}_{\geq 1}\), we may use [27, Theorem D] to obtain that \(Z_{P_{k}}(0)=\frac{1}{2-k}\). For the derivative, one applies [27, Theorem E] yielding
\[Z^{\prime}_{P_{k}}(0)=\frac{\log\left(\frac{k-2}{2}\right)}{k-2}+\log\left( \Gamma\left(\frac{2}{k-2}\right)\right)-\log(2\pi).\]
(2) Since \(Z_{P_{k}}(s)=(\frac{2}{k-2})^{s}\sum\limits_{n\geq 1}(n-\frac{k-4}{k-2})^{-s}n^{-s}\), the result follows as the sum has residue \(\frac{1}{2}\) at \(s=\frac{1}{2}\) by equation (16) of [27].
The previous three results are used to prove Theorem 1.1.
Proof of Theorem 1.1.: We may apply Theorem 1.4 as Lemma 5.1 and Proposition 5.2 ensure that conditions (P1)-(P3) are satisfied. Hence, one obtains an asymptotic formula for \(p_{k}(n)\). The constants occurring in Theorem 1.4 are computed using (1.11), (1.12), and Proposition 5.3. That the exponential consists only of the term \(A_{1}n^{\frac{1}{3}}\) follows by Theorem 1.4, since \(Z_{P_{k}}(s)\) has exactly one positive pole, lying in \(s=\frac{1}{2}\). Note that we are allowed to choose \(L\) and \(R\) arbitrarily large due to Lemma 5.1 and Proposition 5.2 (1).
We consider some special cases of Theorem 1.1.
**Corollary 5.4**.: _For triangular numbers, squares, and pentagonal numbers, respectively, we have_
\[p_{3}(n)\sim\frac{\zeta\left(\frac{3}{2}\right)}{2^{\frac{7}{2}}\sqrt{3}\pi n^{\frac{3}{2}}}\exp\left(\frac{3}{2}\pi^{\frac{1}{3}}\zeta\left(\frac{3}{2}\right)^{\frac{2}{3}}n^{\frac{1}{3}}\right),\qquad p_{4}(n)\sim\frac{\zeta\left(\frac{3}{2}\right)^{\frac{2}{3}}}{2^{\frac{7}{3}}\sqrt{3}\pi^{\frac{7}{6}}n^{\frac{7}{6}}}\exp\left(\frac{3}{2^{\frac{4}{3}}}\pi^{\frac{1}{3}}\zeta\left(\frac{3}{2}\right)^{\frac{2}{3}}n^{\frac{1}{3}}\right),\] \[p_{5}(n)\sim\frac{\Gamma\left(\frac{2}{3}\right)\zeta\left(\frac{3}{2}\right)^{\frac{5}{9}}}{2^{\frac{13}{6}}3^{\frac{4}{9}}\pi^{\frac{11}{9}}n^{\frac{19}{18}}}\exp\left(\frac{3^{\frac{2}{3}}}{2}\pi^{\frac{1}{3}}\zeta\left(\frac{3}{2}\right)^{\frac{2}{3}}n^{\frac{1}{3}}\right).\]
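These main terms can be checked against exact counts. The sketch below (our own verification, not from the paper) computes \(p_{3}(n)\) by the standard dynamic programming recursion for \(\prod_{t\text{ triangular}}(1-q^{t})^{-1}\) and compares with the stated main term; since the omitted corrections decay like small powers of \(n\), the ratio approaches \(1\) only slowly.

```python
# Numerical check of Corollary 5.4 for p_3(n) (illustrative, not from the
# text): count partitions into triangular numbers and compare with the
# stated main term.
import math

N = 2000
tri = [k * (k + 1) // 2 for k in range(1, 100) if k * (k + 1) // 2 <= N]
p3 = [1] + [0] * N
for t in tri:                       # coin-change DP for prod_t (1-q^t)^{-1}
    for n in range(t, N + 1):
        p3[n] += p3[n - t]

z = 2.612375348685488               # zeta(3/2)
for n in [500, 1000, 2000]:
    main = z / (2**3.5 * math.sqrt(3) * math.pi * n**1.5) * math.exp(
        1.5 * math.pi ** (1 / 3) * z ** (2 / 3) * n ** (1 / 3))
    print(n, p3[n], p3[n] / main)   # ratio tends to 1, but slowly
```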
The next lemma shows that \(\prod_{j,k\geq 1}(1-q^{\frac{jk(j+k)(j+2k)}{6}})^{-1}\) satisfies (P1) for \(L\) arbitrarily large.
**Lemma 5.5**.: _Let \(f\colon\mathbb{N}\to\mathbb{N}_{0}\) be defined by_
\[f(n):=\left|\left\{(j,k)\in\mathbb{N}^{2}:\frac{jk(j+k)(j+2k)}{6}=n\right\} \right|.\]
_Then, for all primes \(p\), we have \(|\Lambda\setminus(\Lambda\cap p\mathbb{N})|=\infty\)._
For investigating the function \(\zeta_{\mathfrak{so}(5)}\), we need the _Mordell-Tornheim zeta function_, defined by
\[\zeta_{\mathrm{MT},2}(s_{1},s_{2},s_{3}):=\sum_{m,n\geq 1}m^{-s_{1}}n^{-s_{2}}(m+n )^{-s_{3}}.\]
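In the region of absolute convergence the double sum can be evaluated by brute force, which is a convenient sanity check; for instance, Tornheim's classical evaluation \(\zeta_{\mathrm{MT},2}(2,2,2)=\frac{\pi^{6}}{2835}\) is reproduced by the following sketch (our own; the truncation parameter is an assumption).

```python
# Brute-force evaluation of zeta_MT,2 by truncating the double sum; valid
# only where the sum converges absolutely (e.g. real arguments as below).
import numpy as np

def zeta_MT2(s1, s2, s3, N=2000):
    m = np.arange(1.0, N + 1).reshape(-1, 1)
    n = np.arange(1.0, N + 1).reshape(1, -1)
    return float(np.sum(m ** -s1 * n ** -s2 * (m + n) ** -s3))

print(zeta_MT2(2, 2, 2))       # ~ 0.339111...
print(np.pi ** 6 / 2835)       # Tornheim: zeta_MT,2(2,2,2) = pi^6/2835
```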
By [25], for \(\operatorname{Re}(s)>1\) and some \(-\operatorname{Re}(s)<c<0\) we get a relation between \(\zeta_{\mathrm{MT},2}\) and \(\zeta_{\mathfrak{so}(5)}\) via
\[\zeta_{\mathfrak{so}(5)}(s)=\frac{6^{s}}{2\pi i\Gamma(s)}\int_{c-i\infty}^{c+i \infty}\Gamma(s+z)\Gamma(-z)\zeta_{\mathrm{MT},2}(s,s-z,2s+z)dz. \tag{5.1}\]
We have the following theorem.
**Theorem 5.6** ([24], Theorem 1).: _The function \(\zeta_{\mathrm{MT},2}\) has a meromorphic continuation to \(\mathbb{C}^{3}\) and its singularities satisfy \(s_{1}+s_{3}=1-\ell,s_{2}+s_{3}=1-\ell,s_{1}+s_{2}+s_{3}=2\), with \(\ell\in\mathbb{N}_{0}\)._
Fix \(M\in\mathbb{N}_{0}\) and \(0<\varepsilon<1\). Let \(\mathrm{Re}(s_{1}),\mathrm{Re}(s_{3})>1\), \(\mathrm{Re}(s_{2})>0\), and \(s_{2}\notin\mathbb{N}\). Then, for \(\mathrm{Re}(s_{2})<M+1-\varepsilon\), we have (see equation (5.3) in [24])
\[\zeta_{\mathrm{MT},2}(s_{1},s_{2},s_{3}) =\frac{\Gamma(s_{2}+s_{3}-1)\Gamma(1-s_{2})}{\Gamma(s_{3})}\zeta( s_{1}+s_{2}+s_{3}-1)\] \[\quad+\sum_{m=0}^{M-1}\binom{-s_{3}}{m}\zeta(s_{1}+s_{3}+m)\zeta (s_{2}-m)\] \[\quad+\frac{1}{2\pi i}\int_{M-\varepsilon-i\infty}^{M-\varepsilon +i\infty}\frac{\Gamma(s_{3}+w)\Gamma(-w)}{\Gamma(s_{3})}\zeta(s_{1}+s_{3}+w) \zeta(s_{2}-w)dw. \tag{5.2}\]
The first two summands on the right-hand side of (5.2) extend meromorphically to \(\mathbb{C}^{3}\), so to show that (5.1) extends meromorphically, it remains to consider the integral in (5.2). Note that \(\mathrm{Re}(w)=M-\varepsilon\). To avoid poles on the line of integration, we assume that
\[\mathrm{Re}(s_{3}+w) >0\Leftrightarrow\mathrm{Re}(s_{3})>\varepsilon-M, \tag{5.3}\] \[\mathrm{Re}(s_{1}+s_{3}+w) >1\Leftrightarrow\mathrm{Re}(s_{1})+\mathrm{Re}(s_{3})>1-M+\varepsilon,\] (5.4) \[\mathrm{Re}(s_{2}-w) <1\Leftrightarrow\mathrm{Re}(s_{2})<1+M-\varepsilon. \tag{5.5}\]
Note that the final condition is already assumed above.
By Proposition 2.6 (2), the integral converges compactly and the integrands are locally holomorphic. Thus, the integral is a holomorphic function in the region defined by (5.3), (5.4), and (5.5). Recalling (5.1), we are interested in \(\zeta_{\mathrm{MT},2}(s,s-z,2s+z)\). By Theorem 5.6, this function is meromorphic in \(\mathbb{C}^{2}\) and holomorphic outside the hyperplanes defined by \(3s+z=1-\ell\), \(3s=1-\ell\), and \(4s=2\), where \(\ell\in\mathbb{N}_{0}\). With (5.2), we obtain
\[\zeta_{\mathrm{MT},2}(s,s-z,2s+z)=\frac{\Gamma(3s-1)\Gamma(z+1-s)} {\Gamma(2s+z)}\zeta(4s-1)\\ +\sum_{m=0}^{M-1}\binom{-2s-z}{m}\zeta(3s+z+m)\zeta(s-z-m)+I_{M}(s ;z), \tag{5.6}\]
where \(s\in\mathbb{C}\setminus(\{\frac{1}{2}\}\cup\{\frac{1-\ell}{3}:\ell\in\mathbb{N}_{0}\})\), and
\[I_{M}(s;z):=\frac{1}{2\pi i}\int_{M-\varepsilon-i\infty}^{M-\varepsilon+i \infty}\frac{\Gamma(2s+z+w)\Gamma(-w)}{\Gamma(2s+z)}\zeta(3s+z+w)\zeta(s-z-w )dw.\]
The following lemma shows that \(I_{M}(s;z)\) is holomorphic in \(z\). To state it let
\[\mu=\mu_{M,\sigma}:=\max\{-1+\sigma-M+\varepsilon,1-3\sigma-M+\varepsilon,- 2\sigma-M+\varepsilon\}.\]
**Lemma 5.7**.: _Let \(s=\sigma+it\in\mathbb{C}\), \(M\in\mathbb{N}_{0}\), and \(0<\varepsilon<1\). Then \(z\mapsto I_{M}(s;z)\) is holomorphic in \(S_{\mu,\infty}\)._
Proof.: If \(z\in S_{\mu,\infty}\), then \(\mathrm{Re}(2s+z+w)>0\), \(\mathrm{Re}(3s+z+w)>1\), and \(\mathrm{Re}(s-z-w)<1\) for \(w\in\mathbb{C}\) satisfying \(\mathrm{Re}(w)=M-\varepsilon\), so \(\Gamma(2s+z+w)\), \(\zeta(3s+z+w)\), and \(\zeta(s-z-w)\) have no poles on the path of integration. As \(0<\varepsilon<1\), we have \(M-\varepsilon\notin\mathbb{N}_{0}\), so \(w\mapsto\Gamma(-w)\) has no pole if \(\mathrm{Re}(w)=M-\varepsilon\). As a result, no pole is located on the path of integration, and by Proposition 2.6 (2) and the uniform polynomial growth of the zeta function along vertical strips we find that the integral converges uniformly on compact subsets of \(S_{\mu,\infty}\).
The next lemma shows that \(I_{M}\) is bounded polynomially on certain vertical strips. A proof is obtained using Propositions 2.6 (2) and 2.7 (2).
**Lemma 5.8**.: _Let \(\sigma_{1}<\sigma_{2}\) and \(\sigma_{3}<\sigma_{4}\), such that \(S_{\sigma_{3},\sigma_{4}}\subset S_{\mu,\infty}\) for all \(s\in S_{\sigma_{1},\sigma_{2}}\) and fix \(0<\varepsilon<1\) sufficiently small. In \(S_{\sigma_{1},\sigma_{2}}\times S_{\sigma_{3},\sigma_{4}}\) the function \((s,z)\mapsto I_{M}(s;z)\) is holomorphic and satisfies \(|I_{M}(s;z)|\leq P_{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},M}(|\mathrm{ Im}(s)|,|\mathrm{Im}(z)|)\) for some polynomial \(P_{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},M}(X,Y)\in\mathbb{R}[X,Y]\)._
Next we investigate \(\zeta_{\mathrm{MT},2}(s,s-z,2s+z)\) for fixed \(s\) in more detail.
**Lemma 5.9**.: _Let \(s\in\mathbb{C}\setminus(\{\frac{1}{2}\}\cup(\frac{1}{3}-\frac{1}{3}\mathbb{N}_{0}))\). Then \(z\mapsto\zeta_{\mathrm{MT},2}(s,s-z,2s+z)\) is holomorphic in the entire complex plane except for possibly simple poles in \(z=1-\ell-3s\) with \(\ell\in\mathbb{N}_{0}\)._
Proof.: As holomorphicity is a local property, it suffices to consider arbitrary right half-planes. By Lemma 5.7, for \(M\) sufficiently large, \(I_{M}\) is holomorphic in an arbitrary right half-plane. By (5.2), possible poles of \(\zeta_{\mathrm{MT},2}(s,s-z,2s+z)\) therefore lie at \(z=s-\ell\) with \(\ell\in\mathbb{N}\) and at \(z=1-3s-m\) with \(m\in\mathbb{N}_{0}\). A direct calculation shows that the residue at \(z=s-\ell\) vanishes if \(\ell\leq M-1\). Consequently, for a fixed pole candidate \(s-\ell\), we can choose \(M\) sufficiently large such that we only have to consider the remaining terms of (5.2). This gives the claim.
We are now ready to prove growth properties of \(\zeta_{\mathrm{MT},2}\). As we need to avoid critical singular points, we focus on incomplete half-planes of the type \(S_{\sigma_{1},\sigma_{2},\delta}\) (with \(\delta>0\) arbitrarily small).
**Lemma 5.10**.: _Let \(\sigma_{1}<\sigma_{2}\), \(\sigma_{3}<\sigma_{4}\) with \(1-3\sigma_{1}<\sigma_{3}\) and \(\delta>0\) arbitrarily small. For \((s,z)\in S_{\sigma_{1},\sigma_{2},\delta}\times S_{\sigma_{3},\sigma_{4}}\), we have, for some polynomial \(P_{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\delta}\) only depending on \(S_{\sigma_{1},\sigma_{2},\delta}\) and \(S_{\sigma_{3},\sigma_{4}}\),_
\[|\zeta_{\mathrm{MT},2}(s,s-z,2s+z)|\leq P_{\sigma_{1},\sigma_{2},\sigma_{3}, \sigma_{4},\delta}(|\mathrm{Im}(s)|,|\mathrm{Im}(z)|).\]
_If \(\sigma_{1}<0\), for all \(s\in U\) with \(U\subset S_{\sigma_{1},\sigma_{2}}\), a sufficiently small neighborhood of \(0\), we have_
\[\left|\frac{\zeta_{\mathrm{MT},2}(s,s-z,2s+z)}{\Gamma(s)}\right|\leq P_{ \sigma_{3},\sigma_{4},U}(|\mathrm{Im}(z)|),\]
_where the polynomial \(P_{\sigma_{3},\sigma_{4},U}\) only depends on \(\sigma_{3}\), \(\sigma_{4}\), and \(U\)._
We need another lemma dealing with the poles of the Mordell-Tornheim zeta function.
**Lemma 5.11**.: _Let \(k\in\mathbb{N}_{0}\). Then the meromorphic function \(s\mapsto\zeta_{\mathrm{MT},2}(s,s-k,2s+k)\) is holomorphic for \(s=-\ell\) with \(\ell\in\mathbb{N}_{\geq\frac{k}{2}}\) and has possible simple poles at \(s=-\ell\) with \(\ell\in\mathbb{N}_{0}\), \(0\leq\ell<\frac{k}{2}\). In particular, \(s\mapsto\Gamma(s+k)\zeta_{\mathrm{MT},2}(s,s-k,2s+k)\Gamma(s)^{-1}\) is holomorphic at \(s=-\ell\) with \(\ell\in\mathbb{N}_{0}\)._
Proof.: Let \(s\) lie in a bounded neighborhood of \(-\ell\). We use (5.6) with \(z=k\). Analogous to the proof of Lemma 5.7, the function \(s\mapsto I_{M}(s;k)\) is holomorphic in a neighborhood of \(s=-\ell\). The analysis of the remaining terms is straightforward, and the lemma follows.
The next lemma states where the integral of (5.1) defining \(\zeta_{\mathfrak{so}(5)}\) is a meromorphic function.
**Lemma 5.12**.: _Let \(\varepsilon>0\) be sufficiently small and let \(K\in\mathbb{N}\). Then the function_
\[s\mapsto\frac{1}{2\pi i\Gamma(s)}\int_{K-\varepsilon-i\infty}^{K-\varepsilon+ i\infty}\Gamma(s+z)\Gamma(-z)\zeta_{\mathrm{MT},2}(s,s-z,2s+z)dz \tag{5.7}\]
_is meromorphic on the half-plane \(\{s\in\mathbb{C}:\mathrm{Re}(s)>\frac{1-K+\varepsilon}{3}\}\) with at most simple poles in \((\{\frac{1}{2}\}\cup(\frac{1}{3}-\frac{1}{3}\mathbb{N}_{0}))\setminus(-\mathbb{N}_{0})\) (with \(\mathrm{Re}(s)>\frac{1-K+\varepsilon}{3}\)) and grows polynomially on vertical strips with finite width._
Proof.: We first show holomorphicity in \(S_{\sigma_{1},\sigma_{2},\delta}\) with \(\frac{1-K+\varepsilon}{3}<\sigma_{1}<\sigma_{2}\) and \(0<\delta<1\). Since \(\mathrm{Re}(s)>\frac{1-K+\varepsilon}{3}>-K+\varepsilon\), there are no poles of \(\Gamma(s+z)\Gamma(-z)\) on the path of integration \(\mathrm{Re}(z)=K-\varepsilon\). By Lemma 5.9, \(z\mapsto\zeta_{\mathrm{MT},2}(s,s-z,2s+z)\) has no poles on the path of integration for \(s\in S_{\sigma_{1},\sigma_{2},\delta}\), as there \(\mathrm{Re}(z+3s-1)=K-\varepsilon+3\mathrm{Re}(s)-1>0\). By Proposition 2.6 (2), Lemma 5.10, and Lemma 2.9, the integral is holomorphic away from singularities and grows polynomially on vertical strips of finite width.
We are left to show that (5.7) has at most a simple pole at \(s=s_{0}\), where \(s_{0}\in(\{\frac{1}{2}\}\cup(\frac{1}{3}-\frac{1}{3}\mathbb{N}_{0}))\setminus(-\mathbb{N}_{0})\) with \(s_{0}\geq\frac{1-K+\varepsilon}{3}\). Recall the representation of \(\zeta_{\mathrm{MT},2}\) in (5.6). By Lemma 5.8
\[\int_{K-\varepsilon-i\infty}^{K-\varepsilon+i\infty}\Gamma(s+z)\Gamma(-z)I_{M} (s;z)dz\]
converges absolutely and uniformly on any sufficiently small compact subset \(C\) containing \(s_{0}\) for \(M\) sufficiently large. Similarly, by Propositions 2.7 (2) and 2.6 (2),
\[\int_{K-\varepsilon-i\infty}^{K-\varepsilon+i\infty}\Gamma(s+z)\Gamma(-z)\sum_ {m=0}^{M-1}\binom{-2s-z}{m}\zeta(3s+z+m)\zeta(s-z-m)dz\]
converges absolutely and uniformly in \(C\). In particular, both integrals continue holomorphically to \(s_{0}\). As \(s\mapsto\frac{1}{\Gamma(s)}\) is entire, it is sufficient to study
\[\frac{\Gamma(3s-1)\zeta(4s-1)}{\Gamma(s)}\int_{K-\varepsilon-i\infty}^{K- \varepsilon+i\infty}\frac{\Gamma(s+z)\Gamma(-z)\Gamma(1+z-s)}{\Gamma(2s+z)}dz\]
around \(s_{0}\). Again, by Proposition 2.6 (2), the integral converges absolutely and uniformly in \(C\). As \(\frac{\Gamma(3s-1)\zeta(4s-1)}{\Gamma(s)}\) has at most a simple pole in \(s_{0}\) and a removable singularity if \(s_{0}\in-\mathbb{N}_{0}\), the proof of the lemma is complete.
The following lemma is a refinement of Lemma 5.12 for the specific case that \(z\in\mathbb{Z}\) and follows from Lemma 5.8, by using Propositions 2.6 and 2.7.
**Lemma 5.13**.: _Let \(k\in\mathbb{N}_{0}\) with \(0\leq k\leq K-1\). Then, for all \(\sigma_{1}<\sigma_{2}\), there exists a polynomial \(P_{K,\sigma_{1},\sigma_{2}}\), such that, uniformly for all \(\sigma_{1}\leq\mathrm{Re}(s)\leq\sigma_{2}\) and \(|\mathrm{Im}(s)|\geq 1\),_
\[|\zeta_{\mathrm{MT},2}(s,s-k,2s+k)|\leq P_{K,\sigma_{1},\sigma_{2}}(|\mathrm{ Im}(s)|).\]
The following theorem shows that the function \(\zeta_{\mathfrak{so}(5)}\) satisfies the conditions of Theorem 1.4 and gives the more precise statement of Theorem 1.2.
**Theorem 5.14**.: _The function \(\zeta_{\mathfrak{so}(5)}\) extends to a meromorphic function in \(\mathbb{C}\) and is holomorphic in \(-\mathbb{N}_{0}\). For \(K\in\mathbb{N}\) and \(0<\varepsilon<1\), we have, on \(S_{\frac{1-K+\varepsilon}{3},\infty}\),_
\[\zeta_{\mathfrak{so}(5)}(s) =\frac{6^{s}}{\Gamma(s)}\sum_{k=0}^{K-1}\frac{(-1)^{k}\Gamma(s+k) }{k!}\zeta_{\mathrm{MT},2}(s,s-k,2s+k)\] \[\qquad+\frac{6^{s}}{2\pi i\Gamma(s)}\int_{K-\varepsilon-i\infty}^ {K-\varepsilon+i\infty}\Gamma(s+z)\Gamma(-z)\zeta_{\mathrm{MT},2}(s,s-z,2s+z )dz. \tag{5.8}\]
_All poles of \(\zeta_{\mathfrak{so}(5)}\) are simple and contained in \(\{\frac{1}{2},\frac{1}{3},-\frac{1}{3},-\frac{2}{3},\dots\}\). Furthermore, for all \(\sigma_{0}\leq\mathrm{Re}(s)\leq\sigma_{1}\), as \(|\mathrm{Im}(s)|\to\infty\), we have, for some polynomial \(P_{\sigma_{0},\sigma_{1}}\) depending only on \(\sigma_{0}\) and \(\sigma_{1}\),_
\[|\zeta_{\mathfrak{so}(5)}(s)|\leq P_{\sigma_{0},\sigma_{1}}(|\mathrm{Im}(s)|).\]
Proof.: Assume \(\mathrm{Re}(s)>1\). By Lemma 5.9, the only poles of the integrand in (5.1) in \(S_{-\mathrm{Re}(s),\infty}\) lie at \(z\in\mathbb{N}_{0}\). By shifting the path to the right of \(\mathrm{Re}(z)=K-\varepsilon\), we find, with Lemma 5.10 and the Residue Theorem, that (5.8) holds on \(S_{1,\infty}\). By Lemma 5.12 the right-hand side is a meromorphic function on \(S_{\frac{1-K+\varepsilon}{3},\infty}\). By Theorem 5.6, the functions \(s\mapsto\zeta_{\mathrm{MT},2}(s,s-k,2s+k)\) only have possible (simple) poles for \(s_{1}+s_{3}=3s+k=1-\ell\), \(s_{2}+s_{3}=3s=1-\ell\), \(s_{1}+s_{2}+s_{3}=4s=2\), with \(\ell\in\mathbb{N}_{0}\), i.e., for \(s\in\{\frac{1}{2},\frac{1}{3},0,-\frac{1}{3},-\frac{2}{3},-1,\dots\}\). However, by Lemma 5.11 the sum in (5.8) continues holomorphically to \(-\mathbb{N}_{0}\), so the sum only contributes possible poles
\(s\in\mathcal{S}:=\{\frac{1}{2},\frac{1}{3},-\frac{1}{3},-\frac{2}{3},-\frac{4}{3}, \dots\}\). Note that this argument does not depend on the choice of \(K\). On the other hand, if we choose \(K\) sufficiently large, then the integral in (5.8) is a holomorphic function around \(s=-m\) for fixed but arbitrary \(m\in\mathbb{N}_{0}\), and it only contributes poles in \(\mathcal{S}\) in \(S_{\frac{1-K+\varepsilon}{3},\infty}\) by Lemma 5.12, where \(0<\varepsilon<1\). So the statement about the poles follows if \(K\to\infty\).
We are left to show the polynomial bound. With Lemma 5.13 we obtain the bound for the finite sum, as we chose \(K\) in terms of \(\sigma_{0}\) and \(\sigma_{1}\). Lemma 5.12 implies the polynomial bound for the integral.
To apply Theorem 1.4 we require \(\zeta_{\mathfrak{so}(5)}(0)\).
**Proposition 5.15**.: _We have \(\zeta_{\mathfrak{so}(5)}(0)=\frac{3}{8}\)._
Proof.: Since \(I_{M}(s;z)\) is holomorphic in \(s\) for \(z\in S_{\mu,\infty}\) by Lemma 5.8 and \(\Gamma(s)\) has a pole in \(s=0\),
\[\lim_{s\to 0}\frac{I_{M}(s;z)}{\Gamma(s)}=0. \tag{5.9}\]
Let \(K\in\mathbb{N}\). For \(z\in\mathbb{C}\) with \(\operatorname{Re}(z)=K-\frac{1}{2}\) and \(m\in\mathbb{N}_{0}\), we have \(\pm(z+m)\neq 1\). Hence, \(s\mapsto\binom{-2s-z}{m}\zeta(3s+z+m)\zeta(s-z-m)\) is holomorphic at \(s=0\). This implies that for \(z\in\mathbb{C}\) with \(\operatorname{Re}(z)=K-\frac{1}{2}\), we have
\[\lim_{s\to 0}\binom{-2s-z}{m}\frac{\zeta(3s+z+m)\zeta(s-z-m)}{\Gamma(s)}=0.\]
Using this, (5.8) with \(\varepsilon=\frac{1}{2}\), (5.9), Proposition 2.6 (4), and Lebesgue's dominated convergence theorem, we obtain, for integers \(K\geq 3\),
\[\lim_{s\to 0}\frac{6^{s}}{2\pi i\Gamma(s)}\int_{K-\frac{1}{2}-i \infty}^{K-\frac{1}{2}+i\infty}\Gamma(s+z)\Gamma(-z)\zeta_{\operatorname{MT},2 }(s,s-z,2s+z)dz=\frac{i}{72}\int_{K-\frac{1}{2}-i\infty}^{K-\frac{1}{2}+i \infty}\frac{1}{\sin(\pi z)}dz.\]
Since \(\sin(\pi(z+1))=-\sin(\pi z)\) and
\[\lim_{L\to\infty}\int_{K-\frac{1}{2}-iL}^{K+\frac{1}{2}-iL}\frac{1}{\sin(\pi z )}dz=\lim_{L\to\infty}\int_{K+\frac{1}{2}+iL}^{K-\frac{1}{2}+iL}\frac{1}{\sin (\pi z)}dz=0,\]
the Residue Theorem implies that
\[\lim_{s\to 0}\frac{6^{s}}{2\pi i\Gamma(s)}\int_{K-\frac{1}{2}-i \infty}^{K-\frac{1}{2}+i\infty}\Gamma(s+z)\Gamma(-z)\zeta_{\operatorname{MT},2 }(s,s-z,2s+z)dz=\tfrac{1}{72}\operatorname{Res}_{z=K}\tfrac{\pi}{\sin(\pi z)}= \tfrac{(-1)^{K}}{72}. \tag{5.10}\]
In the following we use that \(\zeta(s)\) does not have a pole in \(s=\pm m\) for \(m\in\mathbb{N}_{\geq 2}\), implying that \(s\mapsto\binom{-2s-1}{m-1}\zeta(3s+m)\zeta(s-m)\) is holomorphic at \(s=0\). Moreover \(s\mapsto\Gamma(s+k)\binom{-2s-k}{m}\zeta(3s+k+m)\zeta(s-k-m)\) is holomorphic at \(s=0\) for \((k,m)\in(\mathbb{N}\times\mathbb{N}_{0})\backslash\{(1,0)\}\). Thus, using Propositions 2.6 (3) and 2.7 (3) and the fact that \(\zeta(-1)=-\frac{1}{12}\) and \(\zeta(0)=-\frac{1}{2}\), we obtain, with (5.6),
\[\lim_{s\to 0}\frac{6^{s}}{\Gamma(s)}\sum_{k=0}^{K-1}\frac{(-1)^{k} \Gamma(s+k)}{k!}\zeta_{\operatorname{MT},2}(s,s-k,2s+k)\] \[= \frac{3}{8}+\frac{(-1)^{K+1}}{72}+\lim_{s\to 0}I_{M}(s;0)+\sum_{k=1}^{ K-1}\frac{(-1)^{k}}{k}\lim_{s\to 0}\frac{I_{M}(s;k)}{\Gamma(s)}. \tag{5.11}\]
Since, by Lemma 5.8, \(s\mapsto I_{M}(s;k)\) is holomorphic at \(s=0\) for every \(k\in\mathbb{N}_{0}\) and \(\frac{1}{\Gamma(s)}\) vanishes in \(s=0\), we have
\[\lim_{s\to 0}\frac{I_{M}(s;k)}{\Gamma(s)}=0.\]
Applying the Lebesgue dominated convergence theorem gives \(\lim_{s\to 0}I_{M}(s;0)=0\), yielding the claim with (5.8), (5.10), and (5.11).
Furthermore, we need certain residues of \(\zeta_{\mathfrak{so}(5)}\).
**Proposition 5.16**.: _The poles of \(\zeta_{\mathfrak{so}(5)}\) are precisely \(\{\frac{1}{2}\}\cup\{\frac{d}{3}:d\in\mathbb{Z}_{\leq 1}\text{ odd},\,\frac{d}{3}\notin\mathbb{Z}\}\). We have_
\[\operatorname{Res}_{s=\frac{1}{2}}\zeta_{\mathfrak{so}(5)}(s)=\frac{\sqrt{3} \Gamma\left(\frac{1}{4}\right)^{2}}{8\sqrt{\pi}}.\]
_Moreover for \(d\in\mathbb{Z}_{\leq 1}\setminus(-3\mathbb{N}_{0})\),_
\[\operatorname{Res}_{s=\frac{d}{3}}\zeta_{\mathfrak{so}(5)}(s)=\frac{3^{\frac{ d}{3}-\frac{3}{2}}\pi\Gamma\left(\frac{d}{6}\right)\zeta\left(\frac{4d}{3}-1 \right)}{2^{\frac{d}{3}-1}(1-d)!\Gamma\left(\frac{d}{3}\right)^{2}\Gamma \left(\frac{d}{2}\right)}\left(\frac{d}{3}\right)\left(1+2^{\frac{2d}{3}-1} \right). \tag{5.12}\]
_In particular, we have_
\[\operatorname{Res}_{s=\frac{1}{3}}\zeta_{\mathfrak{so}(5)}(s)=\frac{2^{\frac{ 1}{3}}+1}{3^{\frac{2}{3}}}\zeta\left(\frac{1}{3}\right).\]
Proof.: With Lemma 5.12, near \(s=\frac{1}{2}\), we can choose \(K=1\) in (5.8) and obtain
\[\operatorname{Res}_{s=\frac{1}{2}}\zeta_{\mathfrak{so}(5)}(s)\] \[=\lim_{s\to\frac{1}{2}}\left(s-\frac{1}{2}\right)\left(6^{s} \zeta_{\operatorname{MT},2}(s,s,2s)+\frac{6^{s}}{2\pi i\Gamma(s)}\int_{\frac{ 1}{2}-i\infty}^{\frac{1}{2}+i\infty}\Gamma(s+z)\Gamma(-z)\zeta_{\operatorname{ MT},2}(s,s-z,2s+z)dz\right).\]
Now, we have
\[\lim_{s\to\frac{1}{2}}\left(s-\frac{1}{2}\right)6^{s}\zeta_{ \operatorname{MT},2}(s,s,2s)=\frac{\sqrt{3}\pi}{2\sqrt{2}}.\]
On the other hand, we find
\[\lim_{s\to\frac{1}{2}}\left(s-\frac{1}{2}\right)\frac{6^{s}}{2\pi i \Gamma(s)}\int_{\frac{1}{2}-i\infty}^{\frac{1}{2}+i\infty}\Gamma(s+z)\Gamma(- z)\zeta_{\operatorname{MT},2}(s,s-z,2s+z)dz\\ =\lim_{s\to\frac{1}{2}}\left(s-\frac{1}{2}\right)\frac{6^{s} \Gamma(3s-1)\zeta(4s-1)}{2\pi i\Gamma(s)}\int_{\frac{1}{2}-i\infty}^{\frac{1 }{2}+i\infty}\Gamma(s+z)\Gamma(-z)\Gamma(z+1-s)dz, \tag{5.13}\]
since \(s\mapsto\frac{\Gamma(s+z)\Gamma(-z)\zeta(3s+z)\zeta(s-z)}{\Gamma(s)}\) and \(s\mapsto\frac{\Gamma(s+z)\Gamma(-z)I_{1}(s;z)}{\Gamma(s)}\) are holomorphic if \(\operatorname{Re}(z)=\frac{1}{2}\). Shifting the path to the left and using [19, 9.113], Proposition 2.6 (1), 15.4.26 of [29], and Proposition 2.6 (4) we obtain that (5.13) equals
\[\frac{\sqrt{3}\pi}{2\sqrt{2}}{}_{2}F_{1}\left(\frac{1}{2},\frac{1}{2};1;-1 \right)-\frac{\sqrt{3}\pi}{2\sqrt{2}}=\frac{\sqrt{3}\Gamma\left(\frac{1}{4} \right)^{2}}{8\sqrt{\pi}}-\frac{\sqrt{3}\pi}{2\sqrt{2}}.\]
This proves the first part of the proposition.
Now, let \(d\in\mathbb{Z}_{\leq 1}\setminus(-3\mathbb{N}_{0})\) and choose \(0<\varepsilon<\frac{1}{3}\), and also \(K,M>1-d\). We have, by (5.8),
\[\operatorname{Res}_{s=\frac{d}{3}}\zeta_{\mathfrak{so}(5)}(s)=\lim _{s\to\frac{d}{3}}\frac{\left(s-\frac{d}{3}\right)6^{s}}{\Gamma(s)}\sum_{k=0} ^{K-1}\frac{(-1)^{k}\Gamma(s+k)}{k!}\zeta_{\operatorname{MT},2}(s,s-k,2s+k)\\ +\lim_{s\to\frac{d}{3}}\frac{\left(s-\frac{d}{3}\right)6^{s}}{2 \pi i\Gamma(s)}\int_{K-\varepsilon-i\infty}^{K-\varepsilon+i\infty}\Gamma(s+z) \Gamma(-z)\zeta_{\operatorname{MT},2}(s,s-z,2s+z)dz. \tag{5.14}\]
Note that \(\lim\limits_{s\rightarrow\frac{d}{3}}(s-\frac{d}{3})I_{M}(s;k)=0\) because of holomorphicity of \(I_{M}\) by Lemma 5.8 and
\[\lim\limits_{s\rightarrow\frac{d}{3}}\left(s-\frac{d}{3}\right)\zeta(3s+k+m)= \frac{1}{3}\delta_{m=1-d-k}.\]
Thus we obtain, by (5.6) and (15.4.26) of [29],
\[\lim\limits_{s\rightarrow\frac{d}{3}}\frac{\left(s-\frac{d}{3} \right)6^{s}}{\Gamma(s)}\sum\limits_{k=0}^{K-1}\frac{(-1)^{k}\Gamma(k+s)}{k!} \zeta_{\mathrm{MT},2}(s,s-k,k+2s)=\frac{6^{\frac{d}{3}}\zeta\left(\frac{4d}{3}- 1\right)}{3(1-d)!\Gamma\left(\frac{d}{3}\right)}\\ \times\left(\sum\limits_{k=0}^{K-1}\frac{(-1)^{k+d+1}\Gamma\left( k+1-\frac{d}{3}\right)\Gamma\left(k+\frac{d}{3}\right)}{k!\Gamma\left(k+\frac{2d}{3} \right)}+\sum\limits_{k=0}^{1-d}(-1)^{k}\binom{1-d}{k}\frac{\Gamma\left(k+ \frac{d}{3}\right)\Gamma\left(1-\frac{2d}{3}-k\right)}{\Gamma\left(\frac{d}{3 }\right)}\right)\\ =\frac{6^{\frac{d}{3}}\zeta\left(\frac{4d}{3}-1\right)}{3(1-d)! \Gamma\left(\frac{d}{3}\right)}\left(\sum\limits_{k=0}^{K-1}\frac{(-1)^{k+d+1} \Gamma\left(k+1-\frac{d}{3}\right)\Gamma\left(k+\frac{d}{3}\right)}{k!\Gamma \left(k+\frac{2d}{3}\right)}+\Gamma\left(1-\frac{2d}{3}\right){}_{2}F_{1} \left(\frac{d}{3},d-1;\frac{2d}{3};-1\right)\right)\\ =\frac{6^{\frac{d}{3}}\zeta\left(\frac{4d}{3}-1\right)}{3(1-d)! \Gamma\left(\frac{d}{3}\right)}\sum\limits_{k=0}^{K-1}\frac{(-1)^{k+d+1} \Gamma\left(k+1-\frac{d}{3}\right)\Gamma\left(k+\frac{d}{3}\right)}{k!\Gamma \left(k+\frac{2d}{3}\right)}\\ +\frac{3^{\frac{d}{3}-1}\zeta\left(\frac{4d}{3}-1\right)\Gamma \left(1-\frac{2d}{3}\right)\Gamma\left(\frac{2d}{3}\right)\Gamma\left(\frac{d} {6}\right)}{2^{\frac{d}{3}}(1-d)!\Gamma\left(\frac{d}{3}\right)^{2}\Gamma \left(\frac{d}{2}\right)}. \tag{5.15}\]
For the integral in (5.14), we obtain that
\[\lim\limits_{s\rightarrow\frac{d}{3}}\frac{\left(s-\frac{d}{3} \right)6^{s}}{2\pi i\Gamma(s)}\int_{K-\varepsilon-i\infty}^{K-\varepsilon+i \infty}\Gamma(s+z)\Gamma(-z)\zeta_{\mathrm{MT},2}(s,s-z,2s+z)dz\\ =\frac{(-1)^{d+1}6^{\frac{d}{3}}\zeta\left(\frac{4d}{3}-1\right)} {3(1-d)!\Gamma\left(\frac{d}{3}\right)}\frac{1}{2\pi i}\int_{K-\varepsilon-i \infty}^{K-\varepsilon+i\infty}\frac{\Gamma\left(z+\frac{d}{3}\right)\Gamma \left(z+1-\frac{d}{3}\right)\Gamma(-z)}{\Gamma\left(z+\frac{2d}{3}\right)}dz. \tag{5.16}\]
By shifting the path of integration to the left such that all poles of \(\Gamma(\frac{d}{3}+z)\Gamma(1-\frac{d}{3}+z)\Gamma(-z)\) except the ones in \(\mathbb{N}_{0}\) lie to the left of the path of integration, we obtain with formula (9.113) of [19]
\[\frac{1}{2\pi i}\int_{K-\varepsilon-i\infty}^{K-\varepsilon+i\infty}\frac{\Gamma\left(z+\frac{d}{3}\right)\Gamma\left(z+1-\frac{d}{3}\right)\Gamma(-z)}{\Gamma\left(z+\frac{2d}{3}\right)}dz\\ =\frac{\Gamma\left(\frac{d}{3}\right)\Gamma\left(1-\frac{d}{3}\right)}{\Gamma\left(\frac{2d}{3}\right)}{}_{2}F_{1}\left(\frac{d}{3},1-\frac{d}{3};\frac{2d}{3};-1\right)+\sum\limits_{k=0}^{K-1}\frac{(-1)^{k+1}\Gamma\left(k+\frac{d}{3}\right)\Gamma\left(k+1-\frac{d}{3}\right)}{k!\Gamma\left(k+\frac{2d}{3}\right)}\\ =\frac{\Gamma\left(1-\frac{d}{3}\right)\Gamma\left(\frac{d}{6}\right)}{2\Gamma\left(\frac{d}{2}\right)}-\sum\limits_{k=0}^{K-1}\frac{(-1)^{k}\Gamma\left(k+\frac{d}{3}\right)\Gamma\left(k+1-\frac{d}{3}\right)}{k!\Gamma\left(k+\frac{2d}{3}\right)},\]
where the final equality is due to (15.4.26) of [29]. Equation (5.12) follows by this calculation together with (5.14), (5.15), (5.16), and Proposition 2.6 (4). Finally note that (5.12) vanishes for even \(d\leq 1\).
Now we are ready to prove Theorem 1.3.
Proof of Theorem 1.3.: Note that by Lemma 5.5 and Theorem 5.14 all conditions of Theorem 1.4 are satisfied (with \(L\) and \(R\notin\frac{1}{3}\mathbb{N}\) arbitrarily large). As \(\zeta_{\mathfrak{so}(5)}\) has, by Proposition 5.16, exactly two
positive poles \(\alpha:=\frac{1}{2}>\frac{1}{3}=:\beta\), Theorem 4.4 applies with \(\ell=3\), and we obtain
\[r_{\mathfrak{so}(5)}(n)=\frac{C}{n^{b}}\exp\left(A_{1}n^{\frac{1}{3}}+A_{2}n^{ \frac{2}{9}}+A_{3}n^{\frac{1}{9}}+A_{4}\right)\left(1+\sum_{j=2}^{N+1}\frac{B_{ j}}{n^{\frac{j-1}{9}}}+O_{N}\left(n^{-\frac{N+1}{9}}\right)\right),\quad(n \rightarrow\infty).\]
So we are left to calculate \(C\), \(b\), \(A_{1}\), \(A_{2}\), \(A_{3}\), and \(A_{4}\). By Proposition 5.15, \(\zeta_{\mathfrak{so}(5)}(0)=\frac{3}{8}\), and by Proposition 5.16, \(\omega_{\frac{1}{2}}=\operatorname{Res}_{s=\frac{1}{2}}\zeta_{\mathfrak{so}(5)}(s)=\frac{\sqrt{3}\Gamma(\frac{1}{4})^{2}}{8\sqrt{\pi}}\) and \(\omega_{\frac{1}{3}}=\operatorname{Res}_{s=\frac{1}{3}}\zeta_{\mathfrak{so}(5)}(s)=\frac{2^{\frac{1}{3}}+1}{3^{\frac{2}{3}}}\zeta(\frac{1}{3})\). Hence, by (4.4), we get
\[c_{1}=\frac{\sqrt{3}\Gamma\left(\frac{1}{4}\right)^{2}\zeta\left(\frac{3}{2}\right)}{16},\qquad c_{2}=3^{-\frac{5}{3}}\left(2^{\frac{1}{3}}+1\right)\Gamma\left(\frac{1}{3}\right)\zeta\left(\frac{1}{3}\right)\zeta\left(\frac{4}{3}\right).\]
Moreover, by Lemma 4.3, we have
\[K_{2}=\frac{2c_{2}}{3c_{1}^{\frac{2}{9}}},\quad K_{3}=-\frac{c_{2}^{2}}{27c_{1}^{\frac{10}{9}}}.\]
Now, we compute \(A_{1}\), \(C\), and \(b\) by (1.11) and \(A_{2}\), \(A_{3}\), \(A_{4}\) by Theorem 4.4 and obtain
\[b =\frac{7}{12},\qquad C=\frac{e^{\zeta_{\mathfrak{so}(5)}^{\prime}(0)}\Gamma\left(\frac{1}{4}\right)^{\frac{1}{6}}\zeta\left(\frac{3}{2}\right)^{\frac{1}{12}}}{2^{\frac{1}{3}}3^{\frac{11}{24}}\sqrt{\pi}},\qquad A_{1}=\frac{3^{\frac{4}{3}}\Gamma\left(\frac{1}{4}\right)^{\frac{4}{3}}\zeta\left(\frac{3}{2}\right)^{\frac{2}{3}}}{2^{\frac{8}{3}}}, \tag{5.17}\] \[A_{2} =\frac{2^{\frac{8}{9}}\left(2^{\frac{1}{3}}+1\right)\Gamma\left(\frac{1}{3}\right)\zeta\left(\frac{1}{3}\right)\zeta\left(\frac{4}{3}\right)}{3^{\frac{7}{9}}\Gamma\left(\frac{1}{4}\right)^{\frac{4}{9}}\zeta\left(\frac{3}{2}\right)^{\frac{2}{9}}},\qquad A_{3}=-\frac{2^{\frac{40}{9}}\left(2^{\frac{1}{3}}+1\right)^{2}\Gamma\left(\frac{1}{3}\right)^{2}\zeta\left(\frac{1}{3}\right)^{2}\zeta\left(\frac{4}{3}\right)^{2}}{3^{\frac{44}{9}}\Gamma\left(\frac{1}{4}\right)^{\frac{20}{9}}\zeta\left(\frac{3}{2}\right)^{\frac{10}{9}}}, \tag{5.18}\] \[A_{4} =\frac{2^{8}\left(2^{\frac{1}{3}}+1\right)^{3}\Gamma\left(\frac{1}{3}\right)^{3}\zeta\left(\frac{1}{3}\right)^{3}\zeta\left(\frac{4}{3}\right)^{3}}{3^{8}\Gamma\left(\frac{1}{4}\right)^{4}\zeta\left(\frac{3}{2}\right)^{2}}. \tag{5.19}\]
This proves the theorem.
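For reference, the closed forms in (5.17)-(5.19) are easy to evaluate numerically. The following plug-in computation (our own addition, using mpmath; we leave \(C\) aside because \(\zeta_{\mathfrak{so}(5)}^{\prime}(0)\) has no known simple expression, cf. Question (1) below) prints \(A_{1},\dots,A_{4}\).

```python
# Plug-in numerical evaluation of A_1, ..., A_4 from (5.17)-(5.19) (a sketch
# we add; mpmath supplies Gamma and zeta at the required non-integer points).
from mpmath import mp, gamma, zeta, mpf

mp.dps = 20
g14, g13 = gamma(mpf(1) / 4), gamma(mpf(1) / 3)
z13, z43, z32 = zeta(mpf(1) / 3), zeta(mpf(4) / 3), zeta(mpf(3) / 2)
u = (2 ** (mpf(1) / 3) + 1) * g13 * z13 * z43   # recurring factor in A_2..A_4

A1 = 3 ** (mpf(4) / 3) * g14 ** (mpf(4) / 3) * z32 ** (mpf(2) / 3) / 2 ** (mpf(8) / 3)
A2 = 2 ** (mpf(8) / 9) * u / (3 ** (mpf(7) / 9) * g14 ** (mpf(4) / 9) * z32 ** (mpf(2) / 9))
A3 = -(2 ** (mpf(40) / 9)) * u**2 / (3 ** (mpf(44) / 9) * g14 ** (mpf(20) / 9) * z32 ** (mpf(10) / 9))
A4 = 2**8 * u**3 / (3**8 * g14**4 * z32**2)
print(A1, A2, A3, A4)
```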
## 6. Open problems
We are led by our work to the following questions:
1. Is there a simple expression for \(\zeta_{\mathfrak{so}(5)}^{\prime}(0)\)?
2. Can one weaken the hypothesis that \(f(n)\geq 0\) for all \(n\) in Theorem 1.4? An important application would be that the \(r_{f}(n)\) are eventually positive. There are many special cases in the literature (see [11, 12, 13, 14]), but to the best of our knowledge no general asymptotic formula has been proved.6 Footnote 6: The one exception is in Todt's Ph.D. thesis [33, Theorem 3.2.1]; however, there it is further assumed that \(r_{f}(n)\) is non-decreasing, which precludes the principal application of such an asymptotic.
3. In [18], Erdős proved by elementary means that if \(S\subset\mathbb{N}\) has natural density \(d\) and \(\mathbb{1}_{S}\) is the indicator function of \(S\), then \(\log(p_{\mathbb{1}_{S}}(n))\sim\pi\sqrt{\frac{2dn}{3}}\). Referring to Theorem 1.4, can one prove by elementary means that, for any \(\varepsilon>0\), \[\log\left(r_{f}(n)\right)=A_{1}n^{\frac{\alpha}{\alpha+1}}+\sum_{j=2}^{M}A_{j}n^{\alpha_{j}}+O(n^{\varepsilon})?\]
4. Can one "twist" the products in Theorem 1.4 by \(w\in\mathbb{C}\) and prove asymptotic formulas for the (complex) coefficients of \[\prod_{n\geq 1}\frac{1}{(1-wq^{n})^{f(n)}}?\]
If \(f(n)=n\) or \(f(n)=1\), then such asymptotics were shown to determine zero attractors of polynomials (see [3, 4]) and equidistribution of partition statistics (see [5, 6]), and the general case of \(|w|\neq 1\) was treated by Parry [30]. Nevertheless, all of these results require that \(L_{f}(s)\) has only a single simple pole with positive real part.
5. In Theorem 1.4, can one write down explicit or recursive expressions for the constants \(A_{j}\) in the exponent, say in the case that \(L_{f}(s)\) has three positive poles?
6. Can one prove limit shapes for the partitions generated by (1.7) in the sense of [16, 34]?
|
2306.05017 | Non-Intrusive Load Monitoring (NILM) using Deep Neural Networks: A
Review | Demand-side management now encompasses more residential loads. To efficiently
apply demand response strategies, it's essential to periodically observe the
contribution of various domestic appliances to total energy consumption.
Non-intrusive load monitoring (NILM), also known as load disaggregation, is a
method for decomposing the total energy consumption profile into individual
appliance load profiles within the household. It has multiple applications in
demand-side management, energy consumption monitoring, and analysis. Various
methods, including machine learning and deep learning, have been used to
implement and improve NILM algorithms. This paper reviews some recent NILM
methods based on deep learning and introduces the most accurate methods for
residential loads. It summarizes public databases for NILM evaluation and
compares methods using standard performance metrics. | Mohammad Irani Azad, Roozbeh Rajabi, Abouzar Estebsari | 2023-06-08T08:11:21Z | http://arxiv.org/abs/2306.05017v1 | # Non-Intrusive Load Monitoring (NILM) using Deep Neural Networks: A Review
###### Abstract
Demand-side management now encompasses more residential loads. To efficiently apply demand response strategies, it's essential to periodically observe the contribution of various domestic appliances to total energy consumption. Non-intrusive load monitoring (NILM), also known as load disaggregation, is a method for decomposing the total energy consumption profile into individual appliance load profiles within the household. It has multiple applications in demand-side management, energy consumption monitoring, and analysis. Various methods, including machine learning and deep learning, have been used to implement and improve NILM algorithms. This paper reviews some recent NILM methods based on deep learning and introduces the most accurate methods for residential loads. It summarizes public databases for NILM evaluation and compares methods using standard performance metrics.
Smart Grids, NILM, Deep Learning, Energy Management.
## I Introduction
The non-intrusive load monitoring (NILM) method has gained popularity in recent years as a way to monitor the energy usage and on/off events of appliances and electrical utilities in buildings using a single energy meter. If consumers had data on appliance-level energy usage, they could better understand their energy consumption behavior and take action to reduce it. The aim of this study is to present an overview of the latest algorithms currently being investigated by researchers to create a precise non-intrusive load monitoring (NILM) method for effective energy management. The article discusses the potential applications of NILM across different fields, along with future research objectives. The development of sustainable and smart cities has been made possible by advancements in artificial intelligence (AI), smart meters, the internet of things (IoT), and smart grids, as cited in [1] and [2]. Effective energy management is a crucial component of sustainable city development, which aims to utilize resources responsibly, protect the environment, and enhance society's well-being. The objective of energy management is to promote energy system self-reliance and sustainability [1].
Energy management involves monitoring and controlling electrical utilities to optimize energy use and reduce consumption. However, with the increase in energy needs, energy conservation has become a challenge in recent years [3]. Greater energy use can lead to an energy crisis, climate change, and a negative impact on the economy [4]. It is estimated that the rise in carbon emissions will increase global temperatures by 2.5 to 10 \({}^{\circ}\)C this century, causing more frequent floods, droughts, a rise in sea level, and the spread of infectious illnesses [5]. Therefore, it is essential to reduce carbon emissions across all sectors, including construction, industry, and transportation, to mitigate climate change. Researchers are working on developing technology solutions for energy conservation [3]. Buildings are one of the major contributors to energy consumption [6], with energy consumption in this sector steadily increasing over time. In order to mitigate carbon emissions, optimizing energy consumption in residential and commercial buildings is crucial. This can be achieved through the construction or design of energy-efficient structures, as well as improving energy usage in existing buildings.
The paper is organized as follows. Section II introduces the mathematical definition of the NILM problem. Section III discusses deep learning-based NILM methods. Section IV provides a summary of the public NILM datasets. Section V presents a comparison study of NILM methods, and finally, Section VI concludes the paper.
## II NILM Problem Definition
### _Mathematical Problem Definition_
The issue at hand can be described as follows: at a given time \(t\), the total active power consumed by a system is represented by \(y(t)\), while \(y_{i}(t)\) represents the active power consumed by the \(i\)th appliance at the same time. The overall load is the sum of the energy consumed by individual appliances and an unmeasured residual load, expressed as:
\[y(t)=\sum_{i=1}^{N}y_{i}(t)+e(t), \tag{1}\]
where \(N\) denotes the number of appliances considered, and \(e(t)\) represents the undetermined residual load. The aim is to determine the values of \(y_{i}(t)\), given only the value of \(y(t)\), through an operator \(F\):
\[y_{1}(t),y_{2}(t),...,y_{i}(t),...,y_{N}(t)=F(y(t)), \tag{2}\]
where \(F\) is an operator that produces \(N\) distinct values when applied to the total active power. These numbers represent the most accurate estimate of the power consumed by each appliance. It should be noted that \(y_{i}(t)\) typically does not reflect the entire set of home appliances but rather a subset of them. As a result, the unknown term \(e(t)\) takes into account the loads caused by unmonitored appliances. If simultaneous measurements of the aggregate consumption and load of each appliance are available, approximating the \(F\) operator can be considered a supervised learning problem. When mainly concerned with activation times and cumulative consumption, as is the case in real situations, the estimated individual appliance consumption (\(\hat{y}_{i}(t)\)) can be obtained using functions that are constant over the device's activation period:
\[\hat{y}_{i}(t)=p_{i}\hat{a}_{i}(t), \tag{3}\]
where \(p_{i}\) represents the average consumption of appliance \(i\), and \(\hat{a}_{i}(t)\) represents an estimate of the activation state of the particular appliance at time \(t\). Its value is one if the device is in use and uses energy, and zero otherwise. Therefore, starting with the aggregate load, a technique is provided to derive the most accurate and feasible assessment of the activation state of the appliances:
\[\hat{a}_{1}(t),\hat{a}_{2}(t),...,\hat{a}_{i}(t),...,\hat{a}_{N}(t)=F_{a}(y(t)), \tag{4}\]
After learning the average nominal consumption of the considered equipment, one can use Equation 3 to estimate consumption.
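To make Eqs. (1)-(4) concrete, the following minimal sketch simulates a toy aggregate signal and applies Eq. (3) to recover per-appliance consumption from activation states; the nominal powers, activation windows, and noise level are illustrative assumptions, not values from any dataset.

```python
import numpy as np

# Synthetic NILM setup: the aggregate y(t) is the sum of per-appliance
# loads plus a residual (Eq. 1); each appliance estimate is a constant
# nominal power p_i gated by an activation state a_i(t) (Eq. 3).
rng = np.random.default_rng(0)
T = 100                                  # number of time steps
p = np.array([150.0, 2000.0, 60.0])      # nominal powers p_i (illustrative)

# Ground-truth activation states a_i(t): 1 while the appliance is on.
a = np.zeros((3, T))
a[0, 10:20] = 1.0
a[1, 30:60] = 1.0
a[2, 5:90] = 1.0

e = rng.normal(0.0, 5.0, size=T)         # unmeasured residual load e(t)
y = (p[:, None] * a).sum(axis=0) + e     # aggregate signal, Eq. (1)

# Suppose a disaggregator F_a produced activation estimates (here we
# simply reuse the ground truth); Eq. (3) then yields per-appliance
# consumption estimates from nominal powers alone.
a_hat = a.copy()
y_hat = p[:, None] * a_hat               # \hat{y}_i(t) = p_i * \hat{a}_i(t)

print("aggregate energy:", y.sum())
print("per-appliance energy estimates:", y_hat.sum(axis=1))
```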
### _Appliance types_
Based on their operational characteristics, appliances can be classified into four types as discussed in [7]. Type I appliances have two modes of operation - on and off. These include appliances such as kettles, toasters, and light bulbs, which consume energy only when turned on. Type I appliances are predominantly resistive with few linear reactive components. Type II appliances are characterized as multi-state or finite state machines with a limited number of operational states that may be run repeatedly. Changes in these appliances' states can be observed by monitoring the power consumption's falling/rising edges over time. Stove burners, refrigerators, and washing machines are some examples of Type II appliances [7, 8]. Figure 1 illustrates the distinct appliance operating states.
Type III appliances, also known as Continuously Variable Devices (CVDs), exhibit a non-repetitive power usage pattern, which poses a challenge for energy consumption disaggregation. Examples of Type III appliances include power drills and dimmer lights [8]. Type IV appliances are those that run continuously for extended periods of time, typically lasting several days or weeks. Examples of Type IV equipment include wireless telephone devices and cable TV receivers [8]. Therefore, the Non-Intrusive Load Monitoring (NILM) system is required to have the ability to differentiate between various types of appliance events, which may happen concurrently or independently and at varying time intervals.
## III Deep Learning Based NILM methods
NILM techniques can be broadly categorized into two groups: supervised and unsupervised methods [9]. In supervised NILM, individual appliance power usage is used to train the models. On the other hand, unsupervised methods can only utilize aggregate power usage data. Examples of unsupervised NILM techniques include Hidden Markov models (HMM) [10, 11], factorial HMM (FHMM) [12, 13], and techniques based on event detection and clustering [14, 15]. These techniques have been thoroughly examined in previous studies [9, 16]. With the advent of deep neural networks (DNNs), many neural network-based supervised NILM techniques have been developed [17, 18]. Convolutional neural networks (CNN) have also recently made significant advances [19, 20]. Graph signal processing [21], HMM [12, 13, 15, 22, 23], and DNNs [24, 25] are commonly used in suggested NILM approaches. As the cost of employing appliance data for training has grown dramatically, researchers have focused on developing unsupervised approaches and incorporating appliance models. Despite the significant progress made in NILM research in recent years, challenges remain in terms of application, identification accuracy, training time, and online deployment techniques in smart metering frameworks.
### _Event-Based Non-Intrusive Detection_
The event-based NILM method is based on the concept of detecting and categorizing events within a combined electrical signal. Figure 2 shows the block diagram of this approach. A robust event detector should be developed to cope with noisy fluctuations and identify events with decay and growth patterns, which is a bottleneck and inherent difficulty in existing event detectors [26]. One approach includes the steps of event detection, extraction, clustering, and matching in the event-based block [27]. It should be noted that the accuracy of previous event-based frameworks is dependent on the power features that are introduced. Since some appliances may have identical active power curves but radically distinct reactive power trends, increasing the number of features can enhance the accuracy of the appliance model, particularly for non-linear loads. One of the advantages of incorporating reactive power is that it enables discrimination between different types of devices.

Fig. 1: A pictorial representation illustrates the different types of appliances categorized based on their operating states.
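As a concrete illustration of the event-detection step in the pipeline described above, the sketch below flags rising and falling edges in an aggregate power signal by thresholding first differences; the threshold and the toy signal are assumptions for illustration only.

```python
import numpy as np

def detect_events(y, threshold=50.0):
    """Return (index, step) pairs where |y[t] - y[t-1]| exceeds threshold."""
    dy = np.diff(y)
    idx = np.flatnonzero(np.abs(dy) > threshold)
    return [(int(t) + 1, float(dy[t])) for t in idx]

# Toy aggregate signal: a heater-like +2000 W on/off pair and a
# fridge-like +149/-150 W on/off pair sit on a ~60 W base load.
y = np.array([60, 62, 61, 2061, 2060, 2062, 62, 61, 210, 211, 61], dtype=float)
events = detect_events(y)
print(events)  # [(3, 2000.0), (6, -2000.0), (8, 149.0), (10, -150.0)]
```

A full event-based pipeline would then cluster these steps by magnitude and match rising/falling pairs to appliance activations.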
### _NILM Disaggregation by CNN_
The proposed method [28] employs a convolutional neural network (CNN) that takes the time interval of a home's energy consumption as input and predicts the activation status of each device at every time step. The network architecture, referred to as Temporal Pooling NILM (TP-NILM), is an updated version of Zhao et al.'s PSPNet (Pyramid Scene Parsing Network) used for semantic image segmentation [29]. The TP-NILM follows the conventional approach to image segmentation, with an encoder comprising pooling and convolutional layers that enhance the feature space of the signal but decrease its temporal resolution, and a decoder module that uses these features to approximate the activation state of the devices at the original resolution. To establish a temporal context that covers extended periods without compromising the signal's resolution, the TP-NILM incorporates a Temporal Pooling module that accumulates features at various resolutions, enabling accurate reconstruction of the activation state.
Figure 3 illustrates the architecture of the network used in this study. The encoder uses a rectified linear unit (ReLU) activation function, batch normalization downstream of the activations, and a regularization dropout layer, with three convolutional filters interleaved by max-pooling layers. The encoder reduces the signal's temporal resolution by a factor of eight and raises the number of output features from a single aggregate power consumption value to 256.

The TP block provides context information to the decoding block, allowing it to create extra features for decoding by aggregating the encoder output at various resolutions. The encoder output is passed through four average pooling modules with different filter sizes, which degrade the temporal resolution while maintaining the number of features, before being convolved with a unit filter size. This reduces the number of features to one-quarter of those used in the input. The convolution results are passed through a ReLU activation function and then batch normalized. Lastly, linear up-sampling yields a temporal resolution at the TP block's output equal to that of the encoder's output. A dropout layer is also added to the output for regularization. The context features generated by this block are concatenated with the encoder's detail features, doubling the total number of features in the decoder's input.

To raise the signal's temporal resolution and lower the number of features, the decoder contains a transposed convolutional layer with a stride and kernel size of 8. ReLU is still used as the activation function, followed by an extra convolutional layer with a unit kernel size that keeps the temporal resolution and increases the number of output channels to match the number of devices being analyzed. A sigmoid function is used at the output: in the semantic segmentation of images each pixel is associated with a single class, whereas in the current application many appliances might be in use at the same time. In this way, the network disaggregates all appliances simultaneously, which should enable the encoder to use more general convolutional filters that are not specialized for a single kind of appliance, boosting the neural network's capacity to generalize.

Gradient descent optimization may be used to find the net's weights. The loss function is a binary cross-entropy applied to each output channel that assesses the disparity between the activations predicted by the net \(\hat{a}_{i}(t)\) and the actual ones \(a_{i}(t)\) for each appliance under consideration and for each instant of the period under consideration.
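The following PyTorch sketch mirrors the TP-NILM structure described above (x8 temporal reduction, 256 encoder features, four-scale temporal pooling with quarter-width branches, transposed-convolution decoder with sigmoid outputs). Exact layer widths, kernel sizes, pooling scales, and dropout rates are plausible assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, feats=(64, 128, 256)):
        super().__init__()
        layers, c_in = [], 1
        for c_out in feats:
            layers += [nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(), nn.BatchNorm1d(c_out),
                       nn.MaxPool1d(2)]        # three x2 poolings -> x8 reduction
            c_in = c_out
        self.net = nn.Sequential(*layers, nn.Dropout(0.1))

    def forward(self, x):
        return self.net(x)

class TemporalPooling(nn.Module):
    def __init__(self, c_in=256, scales=(2, 4, 8, 16)):
        super().__init__()
        self.scales = scales
        c_out = c_in // len(scales)            # each branch keeps 1/4 of the features
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=1),
                          nn.ReLU(), nn.BatchNorm1d(c_out))
            for _ in scales)
        self.drop = nn.Dropout(0.1)

    def forward(self, x):
        outs = []
        for s, branch in zip(self.scales, self.branches):
            z = branch(F.avg_pool1d(x, kernel_size=s))
            outs.append(F.interpolate(z, size=x.shape[-1], mode="linear",
                                      align_corners=False))
        return self.drop(torch.cat(outs, dim=1))   # context features, same width as x

class TPNILM(nn.Module):
    def __init__(self, n_appliances=5):
        super().__init__()
        self.enc = Encoder()
        self.tp = TemporalPooling()
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(512, 64, kernel_size=8, stride=8),  # back to input resolution
            nn.ReLU(),
            nn.Conv1d(64, n_appliances, kernel_size=1))

    def forward(self, x):                       # x: (batch, 1, T), T divisible by 128
        feats = self.enc(x)
        ctx = self.tp(feats)
        return torch.sigmoid(self.dec(torch.cat([feats, ctx], dim=1)))

model = TPNILM()
y = model(torch.randn(2, 1, 256))               # per-appliance activation probabilities
print(y.shape)                                  # torch.Size([2, 5, 256])
```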
Fig. 2: Block diagram scheme of event-based NILM.

Fig. 3: Outline of network structure for NILM by CNN.

## IV NILM Public Datasets

To develop NILM (non-intrusive load monitoring) algorithms and assess their performance, the research community provides various NILM datasets in the public domain [30]. Since each dataset monitors different appliances in diverse environments and buildings over a varied time period, each dataset has its own specific criteria [31, 32]. However, it has been observed that many public datasets have structural variations that require pre-processing before usage. To address this issue, the dsCleaner Python module was developed to standardize, clean, and convert time series data into a consistent file format, and it also includes a resampling method for datasets. Typically, NILM datasets consist of aggregated energy data from a single meter and the actual energy consumption of each appliance, which is measured by plug-level meters and serves as the ground truth for evaluating NILM algorithms. Table I lists the most popular publicly accessible NILM datasets for research purposes [30].
In 2011, the Reference Energy Disaggregation Dataset (REDD) [32] became available as the first openly accessible dataset designed specifically to aid NILM research. Following this, the building-Level fully-labeled dataset for Electricity Disaggregation (BLUED) [33] was released in 2012, which contained data from a single household.
The Almanac of Minutely Power dataset (AMPds) [34], on the other hand, was made public in 2013 and comprised both aggregate and sub-metered power data from a single household. The Almanac of Minutely Power dataset Version 2 (AMPds2) [35] is another dataset that captures all three primary types of consumption, including electricity, water, and natural gas, over an extended period of 2 years. Furthermore, it provides 11 measurement characteristics for electricity. The data in AMPds2 has been pre-cleaned to ensure consistent and comparable accuracy results among researchers and machine learning algorithms.
The REFIT Electrical Load Measurements [36] dataset is another one that includes cleaned electrical consumption data in Watts for 20 households at both aggregate and appliance levels, timestamped and sampled at 8-second intervals. It is designed to support research into energy conservation and advanced energy services, ranging from non-intrusive appliance load monitoring, demand response measures, tailored energy and retrofit advice, appliance usage analysis, consumption and time-use statistics, and smart home/building automation. Finally, the UK Domestic Appliance-Level Electricity data set (UK-DALE) [37] was released, containing data from four households.
The available NILM datasets have varying sample rates ranging from 1 Hz to 100 kHz and cover individual appliances as well as residential complexes. While data collected from individual appliances can be valuable for modeling and training the NILM system, its performance may not be optimal when tested on the entire residential building. Conversely, relying solely on whole-household datasets may not be appropriate for training the algorithms, especially when individual appliance data is unavailable.
Furthermore, certain databases provide primary current and voltage signals, whereas others provide calculated electrical parameters such as active power, reactive power, apparent power, and power factor. However, in order to create an effective NILM system, it is crucial to obtain unprocessed electrical signals in order to extract fundamental and harmonic characteristics.
## V Experimental Results
### _Evaluation Metrics_
To evaluate the performance of algorithms in recognizing appliance switching ON or OFF, the classification metrics presented in Eqs. 5-8 were used. The metrics are calculated using True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). TP represented the number of times a device was correctly recognized as ON, whereas TN represented the number of correctly identified OFF occurrences. FP highlighted instances when ON states were recorded despite the appliance not consuming power. On the other hand, FN displayed the number of OFF occurrences that were incorrectly recognized.
\[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+ \text{FN}} \tag{5}\]
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}} \tag{6}\]
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{7}\]
\[F_{1}=2*\frac{\text{Precision}*\text{Recall}}{\text{Precision}+\text{Recall}} \tag{8}\]
The recall metric in Eq. 7 measures the ratio of correctly identified positive instances (TP) to the total number of positive instances in the dataset. On the other hand, the precision metric in Eq. 6 represents the ratio of correctly identified positive instances (TP) to the total number of instances identified as positive by the algorithm. The \(F_{1}\) score is a weighted mean of precision and recall, which is used to determine the accuracy of the algorithm in identifying appliance states. Higher \(F_{1}\) scores indicate better algorithm performance in recognizing appliance state transitions.
The mean absolute error (MAE) and proportion of energy correctly allocated (PECA) metrics in Eqs. 9 and 10, respectively, are non-event-based metrics used to evaluate the accuracy of load disaggregation systems in calculating and assigning electricity usage. MAE measures the average absolute difference between the estimated and actual energy usage, while PECA evaluates the percentage of energy correctly allocated to individual appliances.
\[\text{MAE}=1/T*\sum_{t=1}^{T}|\hat{y}_{t}^{i}-y_{t}^{i}| \tag{9}\]
\[\text{PECA}=1-\frac{\sum_{t=1}^{T}\sum_{i=1}^{N}|\hat{y}_{t}^{i}-y_{t}^{i}|}{2\sum_{t=1}^{T}\bar{y}_{t}} \tag{10}\]
In the preceding equations, \(\hat{y}_{t}^{i}\) and \(y_{t}^{i}\) are the estimated and ground-truth power of the \(i^{th}\) device at time step t, respectively. Furthermore, \(\bar{y}_{t}\) is the total power at time t [24].
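A direct implementation sketch of Eqs. (5)-(10) is given below; the toy inputs are illustrative, and the functions assume at least one positive prediction and ground-truth label so that no denominator vanishes.

```python
import numpy as np

def classification_metrics(pred, true):
    """Eqs. (5)-(8) from binary ON/OFF state vectors."""
    tp = np.sum((pred == 1) & (true == 1))
    tn = np.sum((pred == 0) & (true == 0))
    fp = np.sum((pred == 1) & (true == 0))
    fn = np.sum((pred == 0) & (true == 1))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

def mae(power_hat, power):
    """Eq. (9): mean absolute error over time for one appliance."""
    return np.mean(np.abs(power_hat - power))

def peca(power_hat, power):
    """Eq. (10): proportion of energy correctly allocated.
    power_hat, power: arrays of shape (N appliances, T time steps)."""
    num = np.abs(power_hat - power).sum()
    den = 2 * power.sum(axis=0).sum()          # 2 * sum over t of total power
    return 1 - num / den

pred = np.array([1, 1, 0, 0, 1, 0])
true = np.array([1, 0, 0, 1, 1, 0])
print(classification_metrics(pred, true))

power = np.array([[100.0, 0.0, 50.0], [0.0, 200.0, 0.0]])
power_hat = np.array([[90.0, 0.0, 60.0], [0.0, 210.0, 0.0]])
print(mae(power_hat[0], power[0]), peca(power_hat, power))
```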
### _Performance Evaluation and Comparison Study_
In this part, the disaggregation findings for the NILM approaches using the REDD, UK-DALE, and REFIT datasets are presented. The performance indicators obtained by executing the tests on these datasets show that \(F_{1}\) produced the best results for refrigerators, air conditioners, freezers, televisions, and washing machines across the three datasets, with values greater than 0.70. Toasters and electronics, on the other hand, have lower \(F_{1}\) scores of roughly 0.25, owing to misclassification caused by the non-uniform usage pattern of these items.
The accuracy metrics of the findings were compared for three published methods: on-line NILM [27]; NILM-TK [14], an FHMM implementation; and Neural-NILM [7], a DNN adaptation for energy estimation. The Neural-NILM used three DNN architectures: i) long short-term memory, ii) de-noising auto-encoders, and iii) rectangles. Rectangle networks, in particular, regress the start-time, end-time, and average power of appliance activation.
In experiments using the UK-DALE dataset, the on-line NILM, NILM-TK, and Neural-NILM methods are compared on five appliances (fridge, washing machine, dishwasher, microwave, and kettle). The microwave yields the lowest scores for all three methods: NILM-TK reports an MAE of roughly 195 Watts and an \(F_{1}\) score of 0.01; Neural-NILM achieves the best MAE of 6 Watts with an \(F_{1}\) score of 0.21; and the on-line NILM method outperforms the other two with an \(F_{1}\) score of about 0.35. The MAE and \(F_{1}\) scores reported by NILM-TK are roughly 67 Watts and 0.55, respectively, while Neural-NILM has an MAE of 18 Watts and an \(F_{1}\) score of 0.82. In terms of energy estimation, Neural-NILM outperformed the suggested technique, particularly for complicated equipment like dishwashers and washing machines. Nonetheless, training the neural network and generating the models requires a large amount of appliance-level data as well as considerable time and computational resources. The on-line NILM technique, on the other hand, can generate appliance models using aggregate data, without the need for appliance-level sub-metered data.
## VI Conclusion
The three-phase distribution network often feeds the final residential customers through single-phase cables, so it is important to keep the loads balanced, and demand-side management together with various optimization techniques can help with this. In this context, understanding the contribution of different appliances to total energy consumption is beneficial, and NILM has been demonstrated to be a good approach to this end. The accuracy of NILM depends on the method applied; this paper reviewed deep-learning-based methods that outperform other existing NILM algorithms and compared their results to provide a basis for future implementations. It also summarized the public datasets that are widely used in the NILM literature, along with the standard performance metrics used to analyze the methods.
|
2308.10808 | Graph Neural Bandits | Contextual bandits algorithms aim to choose the optimal arm with the highest
reward out of a set of candidates based on the contextual information. Various
bandit algorithms have been applied to real-world applications due to their
ability of tackling the exploitation-exploration dilemma. Motivated by online
recommendation scenarios, in this paper, we propose a framework named Graph
Neural Bandits (GNB) to leverage the collaborative nature among users empowered
by graph neural networks (GNNs). Instead of estimating rigid user clusters as
in existing works, we model the "fine-grained" collaborative effects through
estimated user graphs in terms of exploitation and exploration respectively.
Then, to refine the recommendation strategy, we utilize separate GNN-based
models on estimated user graphs for exploitation and adaptive exploration.
Theoretical analysis and experimental results on multiple real data sets in
comparison with state-of-the-art baselines are provided to demonstrate the
effectiveness of our proposed framework. | Yunzhe Qi, Yikun Ban, Jingrui He | 2023-08-21T15:57:57Z | http://arxiv.org/abs/2308.10808v1 | # Graph Neural Bandits
###### Abstract.
Contextual bandits algorithms aim to choose the optimal arm with the highest reward out of a set of candidates based on the contextual information. Various bandit algorithms have been applied to real-world applications due to their ability of tackling the exploitation-exploration dilemma. Motivated by online recommendation scenarios, in this paper, we propose a framework named **Graph Neural Bandits (GNB)** to leverage the collaborative nature among users empowered by graph neural networks (GNNs). Instead of estimating rigid user clusters as in existing works, we model the "fine-grained" collaborative effects through estimated user graphs in terms of exploitation and exploration respectively. Then, to refine the recommendation strategy, we utilize separate GNN-based models on estimated user graphs for exploitation and adaptive exploration. Theoretical analysis and experimental results on multiple real data sets in comparison with state-of-the-art baselines are provided to demonstrate the effectiveness of our proposed framework.
Contextual Bandits; User Modeling; Graph Neural Networks
final decision based on the strength of their correlation with the target user. However, the user correlations are usually unknown, and the learner is required to estimate them on the fly. Here, the learner aims to approximate the correlation between two users by exploiting their past interactions; on the other hand, the learner can benefit from exploring the potential correlations between users who do not have sufficient interactions, or the correlations that might have changed. In this case, we formulate this problem as the exploitation-exploration dilemma in terms of the user correlations. To solve this new challenge, GNB separately constructs two kinds of user graphs, named "user exploitation graphs" and "user exploration graphs". Then, we apply two individual graph neural networks (GNNs) on the user graphs, to incorporate the collaborative effects in terms of both exploitation and exploration in the decision-making process. Our main contributions are:
* **[Problem Settings]** Different from existing works formulating the "coarse-grained" user correlations by neglecting the divergence within user groups, we introduce a new problem setting to model the "fine-grained" user collaborative effects via user graphs. Here, pair-wise user correlations are preserved to contribute differently to the decision-making process. **(Section 3)**
* **[Proposed Framework]** We propose a framework named GNB, which has the novel ways to build two kinds of user graphs in terms of exploitation and exploration respectively. Then, GNB utilizes GNN-based models for a refined arm selection strategy by leveraging the user correlations encoded in these two kinds of user graphs for the arm selection. **(Section 4)**
* **[Theoretical Analysis]** With standard assumptions, we provide the theoretical analysis showing that GNB can achieve the regret upper bound of complexity \(O(\sqrt{T\log(Tn)})\), where \(T\) is the number of rounds and \(n\) is the number of users. This bound is sharper than those of existing related works. **(Section 5)**
* **[Experiments]** Extensive experiments comparing GNB with nine state-of-the-art algorithms are conducted on various data sets with different specifications, which demonstrate the effectiveness of our proposed GNB framework. **(Section 6)**
Due to the page limit, interested readers can refer to the paper Appendix for supplementary contents.
## 2. Related Works
Assuming the reward mapping function to be linear, the linear upper confidence bound (UCB) algorithms (Bauer and Scholkopf, 1998; Goyal et al., 2016; Goyal et al., 2016; Goyal et al., 2016) were first proposed to tackle the exploitation-exploration dilemma. After kernel-based methods (Goyal et al., 2016; Goyal et al., 2016) were used to tackle the kernel-based reward mapping function under the non-linear settings, neural algorithms (Goyal et al., 2016; Goyal et al., 2016; Goyal et al., 2016) have been proposed to utilize neural networks to estimate the reward function and confidence bound. Meanwhile, AGG-UCB (Goyal et al., 2016) adopts GNN to model the arm group correlations. GCN-UCB (Goyal et al., 2016) manages to apply the GNN model to embed arm contexts for the downstream linear regression, and GNN-PE (Goyal et al., 2016) utilizes the UCB based on information gains to achieve exploration for classification tasks on graphs. Instead of using UCB, EE-Net (Goyal et al., 2016) applies a neural network to estimate prediction uncertainty. Nonetheless, all of these works fail to consider the collaboration effects among users under the real-world application scenarios.
To model user correlations, (Goyal et al., 2016; Goyal et al., 2016) assume the user social graph is known, and apply an ensemble of linear estimators. Without the prior knowledge of user correlations, CLUB (Goyal et al., 2016) introduces the user clustering problem with the graph-connected components, and SCLUB (Goyal et al., 2016) adopts dynamic user sets and set operations, while DynUCB (Goyal et al., 2016) assigns users to their nearest estimated clusters. Then, CAB (Goyal et al., 2016) studies the arm-specific user clustering, and LOCB (Goyal et al., 2016) estimates soft-margin user groups with local clustering. COFBIA (Goyal et al., 2016) utilizes user and arm co-clustering for collaborative filtering. Meta-Ban (Ban, 2017) applies a neural meta-model to adapt to estimated user groups. However, all these algorithms consider rigid user groups, where users from the same group are treated equally with no internal differentiation. Alternatively, we leverage GNNs (Goyal et al., 2016; Goyal et al., 2016; Goyal et al., 2016; Goyal et al., 2016; Goyal et al., 2016; Goyal et al., 2016; Goyal et al., 2016) to learn from the "fine-grained" user correlations and arm contexts simultaneously.
## 3. GNB: Problem Definition
Suppose there are a total of \(n\) users with the user set \(\mathcal{U}=\{1,\cdots,n\}\). At each time step \(t\in[T]\), the learner will receive a target user \(u_{t}\in\mathcal{U}\) to serve, along with candidate arms \(\mathcal{X}_{t}=\{\mathbf{x}_{i,t}\}_{i\in[a]}\), \(|\mathcal{X}_{t}|=a\). Each arm is described by a \(d\)-dimensional context vector \(\mathbf{x}_{i,t}\in\mathbb{R}^{d}\) with \(\|\mathbf{x}_{i,t}\|_{2}=1\), and \(\mathbf{x}_{i,t}\in\mathcal{X}_{t}\) is also associated with a reward \(r_{i,t}\). As the user correlation is one important factor in determining the reward, we define the following reward function:
\[r_{i,t}=h(\mathbf{x}_{i,t},u_{t},\mathcal{G}_{i,t}^{(1),*})+\epsilon_{i,t} \tag{1}\]

where \(h(\cdot)\) is the unknown reward mapping function, and \(\epsilon_{i,t}\) stands for some zero-mean noise such that \(\mathbb{E}[r_{i,t}]=h(\mathbf{x}_{i,t},u_{t},\mathcal{G}_{i,t}^{(1),*})\). Here, we have \(\mathcal{G}_{i,t}^{(1),*}=(\mathcal{U},E,W_{i,t}^{(1),*})\) being the **unknown** user graph induced by arm \(\mathbf{x}_{i,t}\), which encodes the "fine-grained" user correlations in terms of the **expected rewards**. In graph \(\mathcal{G}_{i,t}^{(1),*}\), each user \(u\in\mathcal{U}\) corresponds to a node; meanwhile, \(E=\{e(u,u^{\prime})\}_{u,u^{\prime}\in\mathcal{U}}\) refers to the set of edges, and the set \(W_{i,t}^{(1),*}=\{w_{i,t}^{(1),*}(u,u^{\prime})\}_{u,u^{\prime}\in\mathcal{U}}\) stores the weights for each edge from \(E\). Note that under real-world application scenarios, users sharing the same preference for certain arms (e.g., sports news) may have distinct tastes over other arms (e.g., political news). Thus, we allow each arm \(\mathbf{x}_{i,t}\in\mathcal{X}_{t}\) to induce different user collaborations \(\mathcal{G}_{i,t}^{(1),*}\).
Then, motivated by various real applications (e.g., online recommendation with normalized ratings), we consider \(r_{i,t}\) to be bounded \(r_{i,t}\in[0,1]\), which is standard in existing works (e.g., (Goyal et al., 2016; Goyal et al., 2016; Goyal et al., 2016)). Note that as long as \(r_{i,t}\in[0,1]\), we do not impose the distribution assumption (e.g., sub-Gaussian distribution) on noise term \(\epsilon_{i,t}\).
**[Reward Constraint]** To bridge user collaborative effects with user preferences (i.e., rewards), we consider the following constraint for the reward function in **Eq. 1**. The intuition is that any two users with comparable user correlations will tend to share similar tastes for items. For arm \(\mathbf{x}_{i,t}\), we consider the difference of expected rewards between any two users \(u,u^{\prime}\in\mathcal{U}\) to be governed by

\[\left|\mathbb{E}[r_{i,t}|u,\mathbf{x}_{i,t}]-\mathbb{E}[r_{i,t}|u^{\prime},\mathbf{x}_{i,t}]\right|\leq\Psi(\mathcal{G}_{i,t}^{(1),*}[u,\cdot],\ \mathcal{G}_{i,t}^{(1),*}[u^{\prime},\cdot]) \tag{2}\]

where \(\mathcal{G}_{i,t}^{(1),*}[u,\cdot]\) represents the row of the normalized adjacency matrix of \(\mathcal{G}_{i,t}^{(1),*}\) that corresponds to user (node) \(u\), and \(\Psi:\mathbb{R}^{n}\times\mathbb{R}^{n}\mapsto\mathbb{R}\) denotes an unknown mapping function. The reward function definition (**Eq. 1**) and the constraint (**Eq. 2**) motivate us to design the GNB framework, to be introduced in Section 4.
Then, we proceed to give the formulation of \(\mathcal{G}^{(1),*}_{i,t}=(\mathcal{U},E,W^{(1),*}_{i,t})\) below. Given arm \(\mathbf{x}_{i,t}\in\mathcal{X}_{t}\), users with strong correlations tend to have similar expected rewards, which will be reflected by \(W^{(1),*}_{i,t}\).
**Definition 1** (User Correlation for Exploitation).: _In round \(t\), for any two users \(u,u^{\prime}\in\mathcal{U}\), their exploitation correlation score \(w^{(1),*}_{i,t}(u,u^{\prime})\) w.r.t. a candidate arm \(\mathbf{x}_{i,t}\in\mathcal{X}_{t}\) is defined as_
\[w^{(1),*}_{i,t}(u,u^{\prime})=\Psi^{(1)}\big{(}\mathbb{E}[r_{i,t}|u,\ \mathbf{x}_{i,t}], \ \mathbb{E}[r_{i,t}|u^{\prime},\ \mathbf{x}_{i,t}]\big{)}\]
_where \(\mathbb{E}[r_{i,t}|u,\ \mathbf{x}_{i,t}],i\in[a]\) is the expected reward in terms of the user-arm pair \((u,\mathbf{x}_{i,t})\). Given two users \(u,u^{\prime}\in\mathcal{U}\), the function \(\Psi^{(1)}:\mathbb{R}\times\mathbb{R}\mapsto\mathbb{R}\) maps from their expected rewards \(\mathbb{E}[r_{i,t}|u,\ \mathbf{x}_{i,t}]\) to their user exploitation score \(w^{(1),*}_{i,t}(u,u^{\prime})\)._
The edge weight \(w^{(1),*}_{i,t}(u,u^{\prime})\) measures the correlation between the two users' preferences. When \(w^{(1),*}_{i,t}(u,u^{\prime})\) is large, \(u\) and \(u^{\prime}\) tend to have the same taste; Otherwise, these two users' preferences will be different in expectation. In this paper, we consider the mapping functions \(\Psi^{(1)}\) as the prior knowledge. For example, \(\Psi^{(1)}\) can be the radial basis function (RBF) kernel or normalized absolute difference.
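As an illustration of Definition 1, the sketch below builds the exploitation weight matrix \(W^{(1),*}_{i,t}\) for one arm using an RBF kernel as \(\Psi^{(1)}\); the per-user expected-reward estimates and the kernel bandwidth are placeholder assumptions.

```python
import numpy as np

def rbf(a, b, gamma=10.0):
    """RBF kernel standing in for Psi^(1)."""
    return np.exp(-gamma * (a - b) ** 2)

def exploitation_graph(reward_estimates, psi=rbf):
    """reward_estimates: length-n vector, one scalar per user for this arm.
    Returns the n x n weight matrix W^(1) with w(u, u') = psi(r_u, r_u')."""
    n = len(reward_estimates)
    W = np.empty((n, n))
    for u in range(n):
        for v in range(n):
            W[u, v] = psi(reward_estimates[u], reward_estimates[v])
    return W

r_hat = np.array([0.9, 0.85, 0.2, 0.15])   # two user "taste groups" for this arm
W = exploitation_graph(r_hat)
print(np.round(W, 3))                      # large weights within groups, small across
```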
**[Modeling with User Exploration Graph \(\mathcal{G}^{(2),*}_{i,t}\)]** Unfortunately, \(\mathcal{G}^{(1),*}_{i,t}\) is **unknown** prior knowledge in our problem setting. Thus, the learner has to estimate \(\mathcal{G}^{(1),*}_{i,t}\) by exploiting the current knowledge, denoted by \(\mathcal{G}^{(1)}_{i,t}=(\mathcal{U},E,W^{(1)}_{i,t})\), where \(W^{(1)}_{i,t}=\{w^{(1)}_{i,t}(u,u^{\prime})\}_{u,u^{\prime}\in\mathcal{U}}\) is the estimation of \(W^{(1),*}_{i,t}\) based on the function class \(\mathcal{F}=\{f^{(1)}_{u}\}_{u\in\mathcal{U}}\), where \(f^{(1)}_{u}\) is the hypothesis specified to user \(u\). However, greedily exploiting \(\mathcal{G}^{(1)}_{i,t}\) may lead to a sub-optimal solution, or overlook some correlations that may only be revealed in future rounds. Thus, we propose to construct another **user exploration graph** \(\mathcal{G}^{(2),*}_{i,t}\) for principled exploration, to measure the estimation gap \(\mathcal{G}^{(1),*}_{i,t}-\mathcal{G}^{(1)}_{i,t}\), i.e., the uncertainty of the estimated graph \(\mathcal{G}^{(1)}_{i,t}\).
For each arm \(\mathbf{x}_{i,t}\in\mathcal{X}_{t}\), we formulate the user exploration graph \(\mathcal{G}^{(2),*}_{i,t}=(\mathcal{U},E,W^{(2),*}_{i,t})\), with the set of edge weights \(W^{(2),*}_{i,t}=\{w^{(2),*}_{i,t}(u,u^{\prime})\}_{u,u^{\prime}\in\mathcal{U}}\). Here, \(\mathcal{G}^{(2),*}_{i,t}\) models the uncertainty of the estimation \(\mathcal{G}^{(1)}_{i,t}\) in terms of the true exploitation graph \(\mathcal{G}^{(1),*}_{i,t}\), and \(\mathcal{G}^{(2),*}_{i,t}\) can be thought of as the oracle exploration graph, i.e., "perfect exploration". Then, with the aforementioned hypothesis \(f^{(1)}_{u}(\mathbf{x}_{i,t})\) for estimating the expected reward of arm \(\mathbf{x}_{i,t}\) given \(u\), we introduce the formulation of \(\mathcal{G}^{(2),*}_{i,t}\) via the user exploration correlation below.
**Definition 2** (User Correlation for Exploration).: _In round \(t\), given two users \(u,u^{\prime}\in\mathcal{U}\) and an arm \(\mathbf{x}_{i,t}\in\mathcal{X}_{t}\), their underlying exploration correlation score is defined as_
\[w^{(2),*}_{i,t}(u,u^{\prime})=\Psi^{(2)}\Big{(}\mathbb{E}[r_{i,t}|u,\mathbf{x}_{i,t}]-f^{(1)}_{u}\left(\mathbf{x}_{i,t}\right),\ \mathbb{E}[r_{i,t}|u^{\prime},\mathbf{x}_{i,t}]-f^{(1)}_{u^{\prime}}\left(\mathbf{x}_{i,t}\right)\Big{)}\]

_with \(\mathbb{E}[r_{i,t}|u,\ \mathbf{x}_{i,t}]-f^{(1)}_{u}\left(\mathbf{x}_{i,t}\right),i\in[a]\) being the potential gain of the estimation \(f^{(1)}_{u}(\mathbf{x}_{i,t})\) for the user-arm pair \((u,\mathbf{x}_{i,t})\). Here, \(f^{(1)}_{u}(\cdot)\) is the reward estimation function specified to user \(u\), and \(\Psi^{(2)}:\mathbb{R}\times\mathbb{R}\mapsto\mathbb{R}\) is the mapping from the user potential gains \(\mathbb{E}[r_{i,t}|u,\ \mathbf{x}_{i,t}]-f^{(1)}_{u}(\mathbf{x}_{i,t})\) to their exploration correlation score._
Here, \(w^{(2),*}_{i,t}(u,u^{\prime})\) is defined based on the potential gain of \(f^{(1)}_{u}(\cdot)\), i.e., \(\mathbb{E}[r_{i,t}|u,\ \mathbf{x}_{i,t}]-f^{(1)}_{u}\left(\mathbf{x}_{i,t}\right)\), to measure the estimation uncertainty. Note that our formulation is distinct from the formulation in (Cheng et al., 2017), where they only focus on the single-bandit setting with no user collaborations, and all the users will be treated identically.
As we have discussed, \(w^{(2),*}_{i,t}(u,u^{\prime})\) measures the uncertainty of estimation \(w^{(1)}_{i,t}(u,u^{\prime})\). When \(w^{(2),*}_{i,t}(u,u^{\prime})\) is large, the uncertainty of estimated exploitation correlation, i.e., \(w^{(1)}_{i,t}(u,u^{\prime})\), will also be large, and we should explore them more. Otherwise, we have enough confidence towards \(w^{(1)}_{i,t}(u,u^{\prime})\), and we can exploit \(w^{(1)}_{i,t}(u,u^{\prime})\) in a secure way. Analogous to \(\Psi^{(1)}\) in **Def. 1**, we consider the mapping function \(\Psi^{(2)}\) as the known prior knowledge.
**[Learning Objective]** With the received user \(u_{t}\) in each round \(t\in[T]\), the learner is expected to recommend an arm \(x_{t}\in\mathcal{X}_{t}\) (with reward \(r_{t}\)) in order to minimize the cumulative pseudo-regret
\[R(T)=\mathbb{E}[\sum_{t=1}^{T}(r_{t}^{*}-r_{t})] \tag{3}\]
where \(r_{t}^{*}\) is the reward of the optimal arm, such that \(\mathbb{E}[r_{t}^{*}|u_{t},\mathcal{X}_{t}]=\max_{\mathbf{x}_{i,t}\in\mathcal{X}_{t}}h(\mathbf{x}_{i,t},u_{t},\mathcal{G}^{(1),*}_{i,t})\).
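The pseudo-regret in Eq. (3) can be computed directly in simulation, as in the minimal sketch below; the expected-reward table and the uniformly random "policy" are purely illustrative stand-ins.

```python
import numpy as np

# Cumulative pseudo-regret R(T) = sum_t (E[r_t^*] - E[r_t]): at each
# round the increment is the gap between the expected reward of the
# best arm and that of the chosen arm.
rng = np.random.default_rng(1)
T, a = 1000, 5
expected = rng.uniform(0, 1, size=(T, a))      # E[r | u_t, x_{i,t}] per round/arm
chosen = rng.integers(0, a, size=T)            # arms picked by a random policy

regret = expected.max(axis=1) - expected[np.arange(T), chosen]
print("R(T) =", regret.sum())
```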
**[Comparing with Existing Problem Definitions]** The problem definition of existing user clustering works (e.g., (Bang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020)) only formulates "coarse-grained" user correlations. In their settings, for a user group \(\mathcal{N}\subseteq\mathcal{U}\) with the mapping function \(h_{\mathcal{N}}\), all the users in \(\mathcal{N}\) are forced to share the same reward mapping given an arm \(\mathbf{x}_{i,t}\), i.e., \(\mathbb{E}[r_{i,t}\mid u,\mathbf{x}_{i,t}]=h_{\mathcal{N}}(\mathbf{x}_{i,t}),\forall u \in\mathcal{N}\). In contrast, our definition of the reward function enables us to model the pair-wise "fine-grained" user correlations by introducing another two important factors \(u\) and \(\mathcal{G}^{(1),*}_{i,t}\). With our formulation, each user here is allowed to produce different rewards facing the same arm, i.e., \(\mathbb{E}[r_{i,t}\mid u,\mathbf{x}_{i,t}]=h(\mathbf{x}_{i,t},u,\mathcal{G}^{(1),*}_{i,t} ),\forall u\in\mathcal{N}\). Here, with different users \(u\), the corresponding expected reward \(h(\mathbf{x}_{i,t},u,\mathcal{G}^{(1),*}_{i,t})\) can be different. Therefore, our definition of the reward function is more generic, and it can also readily generalize to existing user clustering algorithms (with "coarse-grained" user correlations) by allowing each single user group to form an isolated sub-graph in \(\mathcal{G}^{(1),*}_{i,t}\) with no connections across different sub-graphs (i.e., user groups).
**[Notation]** Up to round \(t\), we denote by \(\mathcal{T}_{u,t}\subseteq[t]\) the collection of time steps at which user \(u\in\mathcal{U}\) was served.
## 4. GNB: Proposed Framework

The workflow of the proposed GNB framework involves four steps. **First**, we estimate the unknown user exploitation graphs \(\mathcal{G}_{i,t}^{(1),*},i\in[a]\) (denoted by \(\mathcal{G}_{i,t}^{(1)}\)), and the exploration graphs \(\mathcal{G}_{i,t}^{(2),*},i\in[a]\) (denoted by \(\mathcal{G}_{i,t}^{(2)}\)), to model user correlations in terms of exploitation and exploration respectively; **Second**, to estimate the reward and the potential gain by leveraging the 'fine-grained' correlations, we propose the GNN-based models \(\mathcal{G}_{gm}^{(1)}(\cdot),\mathcal{G}_{gm}^{(2)}(\cdot)\) to aggregate the correlations of the target user-arm pair on estimated graphs \(\mathcal{G}_{i,t}^{(1)}\) and \(\mathcal{G}_{i,t}^{(2)}\), respectively; **Third**, we select the arm \(\mathbf{x}_{t}\), based on the estimated arm reward and potential gain calculated by our GNN-based models; **Finally**, we train the parameters of GNB using gradient descent (GD) on past records.
### User Graph Estimation with User Networks
Based on the definition of unknown true user graphs \(\mathcal{G}_{i,t}^{(1),*},\mathcal{G}_{i,t}^{(2),*}\) w.r.t. arm \(\mathbf{x}_{t,t}\in\mathcal{X}_{t}\) (**Definition**1 and 2), we proceed to derive their estimations \(\mathcal{G}_{i,t}^{(1)}\), \(\mathcal{G}_{i,t}^{(2)}\), \(i\in[a]\) with individual user networks \(f_{u}^{(1)}\), \(f_{u}^{(2)},u\in\mathcal{U}\). Afterwards, with these two kinds of estimated user graphs \(\mathcal{G}_{i,t}^{(1)}\) and \(\mathcal{G}_{i,t}^{(2)}\), we will be able to apply our GNN-based models to leverage the user correlations under the exploitation and the exploration settings. The pseudo-code is presented in **Alg.**1.
**[User Exploitation Network \(f_{u}^{(1)}\)] For each user \(u\in\mathcal{U}\), we use a neural network \(f_{u}^{(1)}(\cdot)=f_{u}^{(1)}(\cdot;\mathbf{\Theta}_{u}^{(1)})\) to learn user \(u\)'s preference for arm \(\mathbf{x}_{t,t}\), i.e., \(\mathbb{E}[r_{t,t}|\mathbf{u},\mathbf{x}_{t,t}]\). Following the Def. 1, we construct the exploitation graph \(\mathcal{G}_{i,t}^{(1)}\) by estimating the user exploitation correlation based on user preferences. Thus, in \(\mathcal{G}_{i,t}^{(1)}\), we consider the edge weight of two user nodes \(u,u^{\prime}\) as
\[w_{i,t}^{(1)}(u,u^{\prime})=\Psi^{(1)}\big(f_{u}^{(1)}(\mathbf{x}_{i,t}),\,f_{u^{\prime}}^{(1)}(\mathbf{x}_{i,t})\big) \tag{4}\]
where \(\Psi^{(1)}(\cdot,\cdot)\) is the mapping function applied in **Def.** 1 (line 16, **Alg.** 1). Here, \(f_{u}^{(1)}(\cdot)\) will be trained by GD with the chosen arms \(\{\mathbf{x}_{\tau}\}_{\tau\in\mathcal{T}_{u,t}}\) as samples and the received rewards \(\{r_{\tau}\}_{\tau\in\mathcal{T}_{u,t}}\) as labels, where \(\mathcal{L}_{u}^{(1)}(\Theta_{u}^{(1)})=\sum_{\tau\in\mathcal{T}_{u,t}}\big|f_{u}^{(1)}(\mathbf{x}_{\tau};\Theta_{u}^{(1)})-r_{\tau}\big|^{2}\) is the corresponding quadratic loss. Recall that \(\mathbf{x}_{\tau}\) and \(r_{\tau}\) stand for the chosen arm and the received reward, respectively, in round \(\tau\).
**[User Exploration Network \(f_{u}^{(2)}\)]** Given user \(u\in\mathcal{U}\), to estimate the potential gain (i.e., the uncertainty of the reward estimation) \(\mathbb{E}[r_{i,t}\mid u,\mathbf{x}_{i,t}]-f_{u}^{(1)}(\mathbf{x}_{i,t})\) for arm \(\mathbf{x}_{i,t}\), we adopt the user exploration network \(f_{u}^{(2)}(\cdot)=f_{u}^{(2)}(\cdot;\mathbf{\Theta}_{u}^{(2)})\), inspired by (Corban et al., 2017). As it has been proved that the confidence interval (uncertainty) of the reward estimation can be expressed as a function of the network gradients (Zhu et al., 2017; Zhu et al., 2018), we apply \(f_{u}^{(2)}(\cdot)\) to directly learn the uncertainty from the gradient of \(f_{u}^{(1)}(\cdot)\). Thus, the input of \(f_{u}^{(2)}(\cdot)\) is the network gradient of \(f_{u}^{(1)}(\cdot)\) given arm \(\mathbf{x}_{i,t}\), denoted as \(\nabla f_{u}^{(1)}(\mathbf{x}_{i,t})=\nabla_{\mathbf{\Theta}}f_{u}^{(1)}(\mathbf{x}_{i,t};[\mathbf{\Theta}_{u}^{(1)}]_{t-1})\), where \([\mathbf{\Theta}_{u}^{(1)}]_{t-1}\) refers to the parameters of \(f_{u}^{(1)}\) in round \(t\) (before training, line 11, **Alg.** 1). Analogously, given the estimated user exploration graph \(\mathcal{G}_{i,t}^{(2)}\) and two user nodes \(u,u^{\prime}\), we let the edge weight be
\[w_{i,t}^{(2)}(u,u^{\prime})=\Psi^{(2)}\Big(f_{u}^{(2)}\big(\nabla f_{u}^{(1)}(\mathbf{x}_{i,t})\big),\,f_{u^{\prime}}^{(2)}\big(\nabla f_{u^{\prime}}^{(1)}(\mathbf{x}_{i,t})\big)\Big) \tag{5}\]
as in line 17, **Alg.** 1, where \(\Psi^{(2)}(\cdot,\cdot)\) is the mapping function applied in **Def.** 2. With GD, \(f_{u}^{(2)}(\cdot)\) will be trained with the past gradients of \(f_{u}^{(1)}\), i.e., \(\{\nabla f_{u}^{(1)}(\mathbf{x}_{\tau})\}_{\tau\in\mathcal{T}_{u,t}}\), as samples, and the potential gains (uncertainties) \(\{r_{\tau}-f_{u}^{(1)}(\mathbf{x}_{\tau};[\Theta_{u}^{(1)}]_{\tau-1})\}_{\tau\in\mathcal{T}_{u,t}}\) as labels. The quadratic loss is defined as \(\mathcal{L}_{u}^{(2)}(\Theta_{u}^{(2)})=\sum_{\tau\in\mathcal{T}_{u,t}}\big|f_{u}^{(2)}(\nabla f_{u}^{(1)}(\mathbf{x}_{\tau});\Theta_{u}^{(2)})-\big(r_{\tau}-f_{u}^{(1)}(\mathbf{x}_{\tau};[\Theta_{u}^{(1)}]_{\tau-1})\big)\big|^{2}\).
**[Network Architecture]** Here, we can apply various architectures for \(f_{u}^{(1)}(\cdot),f_{u}^{(2)}(\cdot)\) to deal with different application scenarios (e.g., Convolutional Neural Networks [CNNs] for visual content recommendation tasks). For the theoretical analysis and experiments, with user \(u\in\mathcal{U}\), we apply separate \(L\)-layer (\(L\geq 2\)) fully-connected (FC) networks as the user exploitation and exploration network
\[f_{u}(\mathbf{\chi};\mathbf{\Theta}_{u})=\mathbf{\Theta}_{L}\sigma(\mathbf{\Theta}_{L-1}\sigma(\mathbf{\Theta}_{L-2}\ldots\sigma(\mathbf{\Theta}_{1}\mathbf{\chi}))),\ \sigma\coloneqq\mathrm{ReLU}(\cdot) \tag{6}\]
with \(\mathbf{\Theta}_{u}=[\mathrm{vec}(\mathbf{\Theta}_{1})^{\intercal},\ldots,\mathrm{vec}( \mathbf{\Theta}_{L})^{\intercal}]^{\intercal}\) being the vector of trainable parameters. Here, since \(f_{u}^{(1)}(\cdot),f_{u}^{(2)}(\cdot)\) are both the \(L\)-layer FC network (**Eq.**6), the input \(\mathbf{\chi}\) can be substituted with either the arm context \(\mathbf{x}_{i,t}\) or the network gradient \(\nabla f_{u}^{(1)}(\mathbf{x}_{i,t})\) accordingly.
**[Parameter Initialization]** The weight matrices of the first layer differ slightly between the two kinds of user networks, as \(\mathbf{\Theta}_{1}^{(1)}\in\mathbb{R}^{m\times d}\), \(\mathbf{\Theta}_{1}^{(2)}\in\mathbb{R}^{m\times p_{u}^{(1)}}\), where \(p_{u}^{(1)}\) is the dimensionality of \(\mathbf{\Theta}_{u}^{(1)}\). The weight matrix shapes of the remaining \(L-1\) layers are the same for the two kinds of user networks, namely \(\mathbf{\Theta}_{l}\in\mathbb{R}^{m\times m},l\in[2,\cdots,L-1]\), and \(\mathbf{\Theta}_{L}\in\mathbb{R}^{1\times m}\). To initialize \(f_{u}^{(1)},f_{u}^{(2)}\), the weight matrix entries of the first \(L-1\) layers \(\{\mathbf{\Theta}_{1},\ldots,\mathbf{\Theta}_{L-1}\}\) are drawn from the Gaussian distribution \(N(0,2/m)\), and the entries of the last layer weight matrix \(\mathbf{\Theta}_{L}\) are sampled from \(N(0,1/m)\).
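For concreteness, the following PyTorch sketch instantiates the user networks of **Eq.** 6 with the initialization above, together with the flattened-gradient input used by \(f_{u}^{(2)}\); the class and helper names (`UserNetwork`, `gradient_input`) are ours, and the snippet is an illustrative sketch rather than the exact training code.

```python
import torch
import torch.nn as nn

class UserNetwork(nn.Module):
    """L-layer FC network f_u(chi; Theta_u) = Theta_L sigma(... sigma(Theta_1 chi))."""
    def __init__(self, in_dim: int, m: int, L: int = 2):
        super().__init__()
        assert L >= 2
        dims = [in_dim] + [m] * (L - 1)
        self.hidden = nn.ModuleList(
            nn.Linear(dims[l], dims[l + 1], bias=False) for l in range(L - 1)
        )
        self.last = nn.Linear(m, 1, bias=False)
        # Entries of the first L-1 layers ~ N(0, 2/m); last layer ~ N(0, 1/m).
        for layer in self.hidden:
            nn.init.normal_(layer.weight, mean=0.0, std=(2.0 / m) ** 0.5)
        nn.init.normal_(self.last.weight, mean=0.0, std=(1.0 / m) ** 0.5)

    def forward(self, chi: torch.Tensor) -> torch.Tensor:
        h = chi
        for layer in self.hidden:
            h = torch.relu(layer(h))
        return self.last(h).squeeze(-1)

def gradient_input(f1: UserNetwork, x: torch.Tensor) -> torch.Tensor:
    """Flattened gradient of f_u^(1) at x, fed to the exploration network f_u^(2)."""
    f1.zero_grad()
    f1(x).backward()
    return torch.cat([p.grad.flatten() for p in f1.parameters()]).detach()
```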
Figure 1. Workflow of the proposed Graph Neural Bandits (GNB) framework.
```
 1: Input: number of rounds T, network width m, information propagation hops k,
    edge-weight mapping functions Ψ^(1)(·,·), Ψ^(2)(·,·): R × R → R.
 2: Output: arm recommendation x_t for each time step t.
 3: Initialization: initialize trainable parameters for all models.
 4: for t ∈ {1, 2, ..., T} do
 5:     Receive a user u_t and a set of arm contexts X_t = {x_{i,t}}_{i∈[a]}.
 6:     for each candidate arm x_{i,t} ∈ X_t do
 7:         G^(1)_{i,t}, G^(2)_{i,t} ← EstimateArmSpecificUserGraphs(x_{i,t}).   [lines 13-20]
 8:         Compute the reward estimation r̂_{i,t} = f^(1)_gnn(x_{i,t}, G^(1)_{i,t}; [Θ^(1)_gnn]_{t-1})   [Eq. 10]
            and the potential gain b̂_{i,t} = f^(2)_gnn(∇[f^(1)_gnn]_{i,t}, G^(2)_{i,t}; [Θ^(2)_gnn]_{t-1}).   [Eq. 11]
 9:     end for
10:     Play arm x_t = argmax_{x_{i,t} ∈ X_t} (r̂_{i,t} + b̂_{i,t}), and observe its true reward r_t.
11:     Train the user networks f^(1)_{u_t}, f^(2)_{u_t} and the GNN models f^(1)_gnn, f^(2)_gnn with GD.
12: end for
13: procedure EstimateArmSpecificUserGraphs(x_{i,t})
14:     Initialize the arm-specific user graphs G^(1)_{i,t}, G^(2)_{i,t}.
15:     for each user pair (u, u') ∈ U × U do
16:         Update edge weight w^(1)_{i,t}(u, u') = Ψ^(1)(f^(1)_u(x_{i,t}), f^(1)_{u'}(x_{i,t})).   [Eq. 4]
17:         Update edge weight w^(2)_{i,t}(u, u') = Ψ^(2)(f^(2)_u(∇f^(1)_u(x_{i,t})), f^(2)_{u'}(∇f^(1)_{u'}(x_{i,t}))).   [Eq. 5]
18:     end for
19:     return the user graphs G^(1)_{i,t}, G^(2)_{i,t}.
20: end procedure
```
**ALGORITHM 1** Graph Neural Bandits (GNB)
### Achieving Exploitation and Exploration with GNN Models on Estimated User Graphs
With the derived user exploitation graphs \(\mathcal{G}_{i,t}^{(1)}\) and exploration graphs \(\mathcal{G}_{i,t}^{(2)}\), \(i\in[a]\), we apply two GNN models to separately estimate the arm reward and the potential gain for a refined arm selection strategy, utilizing the past interaction records with all the users.
#### 4.2.1. The Exploitation GNN \(f_{gnn}^{(1)}(\cdot)\)
In round \(t\), with the estimated user exploitation graph \(\mathcal{G}_{i,t}^{(1)}\) for arm \(\mathbf{x}_{i,t}\in\mathcal{X}_{t}\), we apply the exploitation GNN model \(f_{gnn}^{(1)}(\mathbf{x}_{i,t},\mathcal{G}_{i,t}^{(1)};\mathbf{\Theta}_{gnn}^{(1)})\) to collaboratively estimate the arm reward \(\widehat{r}_{i,t}\) for the received user \(u_{t}\in\mathcal{U}\). We start from learning the aggregated representation over \(k\) hops, as
\[\mathbf{H}_{agg}=\sigma\big{(}(\mathbf{\mathcal{S}}_{i,t}^{(1)})^{k}\cdot(\mathbf{X}_{i,t} \mathbf{\Theta}_{agg}^{(1)})\big{)}\in\mathbb{R}^{n\times m} \tag{7}\]
where \(\mathcal{S}_{i,t}^{(1)}=(\mathbf{D}_{i,t}^{(1)})^{-\frac{1}{2}}\mathcal{A}_{i,t}^{(1)}(\mathbf{D}_{i,t}^{(1)})^{-\frac{1}{2}}\) is the symmetrically normalized adjacency matrix of \(\mathcal{G}_{i,t}^{(1)}\), and \(\sigma\) represents the ReLU activation function. With \(m\) being the network width, we have \(\mathbf{\Theta}_{agg}^{(1)}\in\mathbb{R}^{nd\times m}\) as the trainable weight matrix. After propagating the information for \(k\) hops over the user graph, each row of \(\mathbf{H}_{agg}\) corresponds to the aggregated \(m\)-dimensional hidden representation for one specific user-arm pair \((u,\mathbf{x}_{i,t}),u\in\mathcal{U}\). Here, the propagation of multi-hop information can provide a global perspective over the users, since it also involves the neighborhood information of users' neighbors (Hamilton et al., 2015; Wang et al., 2016). To achieve this, we have the embedding matrix \(\mathbf{X}_{i,t}\) (in **Eq.** 7) for arm \(\mathbf{x}_{i,t}\in\mathcal{X}_{t},i\in[a]\) being
\[\mathbf{X}_{i,t}=\left(\begin{array}{cccc}\mathbf{x}_{i,t}^{\intercal}&\mathbf{0}& \cdots&\mathbf{0}\\ \mathbf{0}&\mathbf{x}_{i,t}^{\intercal}&\cdots&\mathbf{0}\\ \vdots&&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\cdots&\mathbf{x}_{i,t}^{\intercal}\end{array}\right)\in\mathbb{R}^{ n\times nd} \tag{8}\]
which partitions the aggregation weight matrix \(\mathbf{\Theta}_{agg}^{(1)}\) for different users. In this way, it generates individual representations w.r.t. each user-arm pair \((u,\mathbf{x}_{i,t}),u\in\mathcal{U}\) before the \(k\)-hop propagation (i.e., multiplying with \((\mathcal{S}_{i,t}^{(1)})^{k}\)), which correspond to the rows of the matrix product \(\mathbf{X}_{i,t}\mathbf{\Theta}_{agg}^{(1)}\in\mathbb{R}^{n\times m}\).
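As a quick reference, a minimal sketch of this block-diagonal construction (**Eq.** 8) follows; the function name is ours.

```python
import torch

def arm_embedding_matrix(x: torch.Tensor, n: int) -> torch.Tensor:
    """Build X_{i,t}: row u holds x^T in its u-th d-sized block, so that
    (X @ Theta_agg) yields one m-dim representation per user-arm pair."""
    d = x.numel()
    X = torch.zeros(n, n * d)
    for u in range(n):
        X[u, u * d:(u + 1) * d] = x
    return X  # shape (n, n*d)
```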
Afterwards, with \(\mathbf{H}_{0}=\mathbf{H}_{agg}\), we feed the aggregated representations into the \(L\)-layer (\(L\geq 2\)) FC network, represented by
\[\mathbf{H}_{l}=\sigma(\mathbf{H}_{l-1}\cdot\mathbf{\Theta}_{l}^{(1)})\in\mathbb{R}^{n\times m },\ l\in[L-1],\] \[\widehat{r}_{all}(\mathbf{x}_{i,t})=\mathbf{H}_{L-1}\cdot\mathbf{\Theta}_{L}^{(1)} \in\mathbb{R}^{n} \tag{9}\]
where \(\widehat{r}_{all}(\mathbf{x}_{i,t})\in\mathbb{R}^{n}\) represents the reward estimation for all the users in \(\mathcal{U}\), given the arm \(\mathbf{x}_{i,t}\). Given the target user \(u_{t}\) in round \(t\), the reward estimation for the user-arm pair \((u_{t},\mathbf{x}_{i,t})\) would be the corresponding element in \(\widehat{r}_{all}\) (line 8, **Alg. 1**), represented by:
\[\widehat{r}_{i,t}=f_{gnn}^{(1)}(\mathbf{x}_{i,t},\ \mathcal{G}_{i,t}^{(1)};[\mathbf{\Theta}_{gnn}^{(1)}]_{t-1})=[\widehat{r}_{all}(\mathbf{x}_{i,t})]_{u_{t}} \tag{10}\]
where \(\mathbf{\Theta}_{gnn}^{(1)}=[\text{vec}(\mathbf{\Theta}_{agg}^{(1)})^{\intercal},\text{vec}(\mathbf{\Theta}_{1}^{(1)})^{\intercal},\ldots,\text{vec}(\mathbf{\Theta}_{L}^{(1)})^{\intercal}]^{\intercal}\in\mathbb{R}^{p}\) represents the trainable parameters of the exploitation GNN model, and we have \([\mathbf{\Theta}_{gnn}^{(1)}]_{t-1}\) being the parameters \(\mathbf{\Theta}_{gnn}^{(1)}\) in round \(t\) (before training, line 11, **Alg.** 1). Here, the weight matrix shapes are \(\mathbf{\Theta}_{l}^{(1)}\in\mathbb{R}^{m\times m},l\in[1,\cdots,L-1]\), and for the \(L\)-th layer \(\mathbf{\Theta}_{L}^{(1)}\in\mathbb{R}^{m}\).
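Putting **Eq.** 7-10 together, a compact PyTorch sketch of the exploitation GNN's forward pass is given below; module and argument names are ours, and numerical details (e.g., the degree clamp) are illustrative.

```python
import torch
import torch.nn as nn

class ExploitationGNN(nn.Module):
    def __init__(self, n: int, d: int, m: int, L: int = 2, k: int = 1):
        super().__init__()
        self.k = k
        self.agg = nn.Linear(n * d, m, bias=False)   # Theta_agg^(1)
        self.fc = nn.ModuleList(nn.Linear(m, m, bias=False) for _ in range(L - 1))
        self.out = nn.Linear(m, 1, bias=False)       # Theta_L^(1)

    def forward(self, X: torch.Tensor, A: torch.Tensor, u_t: int) -> torch.Tensor:
        # Symmetric normalization S = D^{-1/2} A D^{-1/2}.
        d_inv_sqrt = A.sum(dim=1).clamp_min(1e-12).pow(-0.5)
        S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
        H = self.agg(X)                                           # per-user reps, (n, m)
        H = torch.relu(torch.linalg.matrix_power(S, self.k) @ H)  # Eq. 7: k-hop propagation
        for layer in self.fc:
            H = torch.relu(layer(H))                              # Eq. 9: L-1 FC layers
        r_all = self.out(H).squeeze(-1)                           # rewards for all n users
        return r_all[u_t]                                         # Eq. 10: target user's estimate
```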
**[Training \(f_{gnn}^{(1)}\) with GD]** The exploitation GNN \(f_{gnn}^{(1)}(\cdot)\) will be trained by GD with the past chosen arms and received rewards of all users, minimizing the quadratic loss \(\mathcal{L}_{gnn}^{(1)}(\Theta_{gnn}^{(1)})=\sum_{\tau\in[t]}\big|f_{gnn}^{(1)}(\mathbf{x}_{\tau},\ \mathcal{G}_{\tau}^{(1)};\Theta_{gnn}^{(1)})-r_{\tau}\big|^{2}\) (line 11, **Alg.** 1). Note that for any two users \(u_{1},u_{2}\in\mathcal{U}\),
their estimated reward difference \(|[\widehat{r}_{all}(\mathbf{x}_{i,t})]_{u_{1}}-[\widehat{r}_{all}(\mathbf{x}_{i,t})]_{u_{2}}|\) can be bounded by the distance between the corresponding rows of \(\mathcal{S}_{i,t}^{(1)}\) (i.e., \(\|\mathcal{S}_{i,t}^{(1)}[u_{1},:]-\mathcal{S}_{i,t}^{(1)}[u_{2},:]\|\)) given the exploitation GNN model. This design matches our definition and the constraint in **Eq.** 1-2.
#### 4.2.2. The Exploration GNN \(f_{gnn}^{(2)}(\cdot)\)
Given a candidate arm \(\mathbf{x}_{i,t}\in\mathcal{X}_{t}\), to achieve adaptive exploration with the user exploration correlations encoded in \(\mathcal{G}_{i,t}^{(2)}\), we apply a second GNN model \(f_{gnn}^{(2)}(\cdot)\) to evaluate the potential gain \(\widehat{b}_{i,t}\) of the reward estimation \(\widehat{r}_{i,t}=f_{gnn}^{(1)}(\mathbf{x}_{i,t},\ \mathcal{G}_{i,t}^{(1)};[\mathbf{\Theta}_{gnn}^{(1)}]_{t-1})\) [**Eq.** 10], denoted by

\[\widehat{b}_{i,t}=f_{gnn}^{(2)}(\nabla[f_{gnn}^{(1)}]_{i,t},\ \mathcal{G}_{i,t}^{(2)};[\mathbf{\Theta}_{gnn}^{(2)}]_{t-1})=[\widehat{\mathbf{b}}_{all}(\mathbf{x}_{i,t})]_{u_{t}}. \tag{11}\]
The architecture of \(f_{gnn}^{(2)}(\cdot)\) can also be represented by **Eq.** 7-10. While \(f_{gnn}^{(1)}(\cdot),f_{gnn}^{(2)}(\cdot)\) have the same network width \(m\) and number of layers \(L\), the dimensionalities of \(\mathbf{\Theta}_{agg}^{(1)}\in\mathbb{R}^{nd\times m}\) and \(\mathbf{\Theta}_{agg}^{(2)}\in\mathbb{R}^{np\times m}\) are different. Analogously, \(\widehat{\mathbf{b}}_{all}(\mathbf{x}_{i,t})\in\mathbb{R}^{n}\) is the potential gain estimation for all the users in \(\mathcal{U}\), w.r.t. arm \(\mathbf{x}_{i,t}\) and the exploitation GNN \(f_{gnn}^{(1)}(\cdot)\). Here, the inputs are the user exploration graph \(\mathcal{G}_{i,t}^{(2)}\) and the gradient of the exploitation GNN, represented by \(\nabla[f_{gnn}^{(1)}]_{i,t}=\nabla_{\mathbf{\Theta}_{gnn}^{(1)}}f_{gnn}^{(1)}(\mathbf{x}_{i,t},\mathcal{G}_{i,t}^{(1)};[\mathbf{\Theta}_{gnn}^{(1)}]_{t-1})\). The exploration GNN \(f_{gnn}^{(2)}(\cdot)\) leverages the user exploration graph \(\mathcal{G}_{i,t}^{(2)}\) and the gradients of \(f_{gnn}^{(1)}(\cdot)\) to estimate the uncertainty of the reward estimations, which stands for our adaptive exploration strategy (downward or upward exploration). More discussions are in Appendix **Section** C.
**[Training \(f_{gnn}^{(2)}\) with GD]** Similar to \(f_{gnn}^{(1)}\), we train \(f_{gnn}^{(2)}\) with GD by minimizing the quadratic loss \(\mathcal{L}_{gnn}^{(2)}(\Theta_{gnn}^{(2)})=\sum_{\tau\in[t]}\big|f_{gnn}^{(2)}(\nabla[f_{gnn}^{(1)}]_{\tau},\ \mathcal{G}_{\tau}^{(2)};\Theta_{gnn}^{(2)})-\big(r_{\tau}-f_{gnn}^{(1)}(\mathbf{x}_{\tau},\ \mathcal{G}_{\tau}^{(1)};[\mathbf{\Theta}_{gnn}^{(1)}]_{\tau-1})\big)\big|^{2}\). This loss measures the difference between the estimated potential gains \(\{f_{gnn}^{(2)}(\nabla[f_{gnn}^{(1)}]_{\tau},\ \mathcal{G}_{\tau}^{(2)};\Theta_{gnn}^{(2)})\}_{\tau\in[t]}\) and the corresponding labels \(\{r_{\tau}-f_{gnn}^{(1)}(\mathbf{x}_{\tau},\ \mathcal{G}_{\tau}^{(1)};[\mathbf{\Theta}_{gnn}^{(1)}]_{\tau-1})\}_{\tau\in[t]}\).
**Remark 4.1** (Reducing Input Complexity).: The input of \(f_{gnn}^{(2)}(\cdot)\) is the gradient \(\nabla_{\mathbf{\Theta}}f_{gnn}^{(1)}(\mathbf{x})\) given the arm \(\mathbf{x}\), and its dimensionality is naturally \(p=(nd\times m)+(L-1)\times m^{2}+m\), which can be a large number. Inspired by CNNs (e.g., (Golovolov et al., 2016)), we apply **average pooling** to approximate the original gradient vector in practice. In this way, we save running time and reduce space complexity simultaneously. Note that this approach is also compatible with the user networks in **Subsec.** 4.1. To demonstrate its effectiveness, we apply this approach in GNB for all the experiments in **Section** 6.
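A minimal sketch of this average-pooling approximation is shown below; the pooling size is an illustrative assumption, not a value from the paper.

```python
import torch
import torch.nn.functional as F

def pool_gradient(grad: torch.Tensor, kernel: int = 64) -> torch.Tensor:
    """Downsample a length-p flattened gradient to ~p/kernel entries via average pooling."""
    return F.avg_pool1d(grad.view(1, 1, -1), kernel_size=kernel, stride=kernel).flatten()
```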
**Remark 4.2** (Working with Large Systems).: When facing a large number of users, we can apply an "approximated user neighborhood" to reduce the running time in practice. Given the user graphs \(\mathcal{G}_{i,t}^{(1)},\mathcal{G}_{i,t}^{(2)}\) in terms of arm \(\mathbf{x}_{i,t}\), we derive approximated user neighborhoods \(\widetilde{\mathcal{N}}^{(1)}(u_{t})\), \(\widetilde{\mathcal{N}}^{(2)}(u_{t})\subset\mathcal{U}\) for the target user \(u_{t}\), with size \(|\widetilde{\mathcal{N}}^{(1)}(u_{t})|=|\widetilde{\mathcal{N}}^{(2)}(u_{t})|=\widetilde{n}\), where \(\widetilde{n}\ll n\). For instance, we can choose \(\widetilde{n}\) "representative users" (e.g., users posting high-quality reviews on e-commerce platforms) to form \(\widetilde{\mathcal{N}}^{(1)}(u_{t}),\widetilde{\mathcal{N}}^{(2)}(u_{t})\), and apply the corresponding approximated user sub-graphs for the downstream GNN models to reduce the computation and space costs in practice. Related experiments are provided in **Subsec.** 6.3.
**[Parameter Initialization]** For the parameters of both GNN models (i.e., \(\mathbf{\Theta}_{gnn}^{(1)}\) and \(\mathbf{\Theta}_{gnn}^{(2)}\)), the entries of the aggregation weight matrix \(\mathbf{\Theta}_{agg}\) and the first \(L-1\) FC layers \(\{\mathbf{\Theta}_{1},\dots,\mathbf{\Theta}_{L-1}\}\) are drawn from the Gaussian distribution \(N(0,2/m)\). Then, for the last layer weight matrix \(\mathbf{\Theta}_{L}\), we draw its entries from \(N(0,1/m)\).
#### 4.2.3. Arm Selection Mechanism and Model Training
In round \(t\), with the current parameters \([\mathbf{\Theta}_{gnn}^{(1)}]_{t-1}\), \([\mathbf{\Theta}_{gnn}^{(2)}]_{t-1}\) of the GNN models before model training, the selected arm is chosen as

\[\mathbf{x}_{t}=\arg\max_{\mathbf{x}_{i,t}\in\mathcal{X}_{t}}\left[f_{gnn}^{(1)}(\mathbf{x}_{i,t},\ \mathcal{G}_{i,t}^{(1)};[\mathbf{\Theta}_{gnn}^{(1)}]_{t-1})+f_{gnn}^{(2)}(\nabla_{\mathbf{\Theta}_{gnn}^{(1)}}f_{gnn}^{(1)}(\mathbf{x}_{i,t},\ \mathcal{G}_{i,t}^{(1)};[\mathbf{\Theta}_{gnn}^{(1)}]_{t-1}),\ \mathcal{G}_{i,t}^{(2)};[\mathbf{\Theta}_{gnn}^{(2)}]_{t-1})\right], \tag{12}\]
based on the estimated reward and potential gain (line 10, **Alg.** 1). After receiving the reward \(r_{t}\), we update the user networks \(f_{u_{t}}^{(1)},f_{u_{t}}^{(2)}\) of user \(u_{t}\) and the GNN models based on GD (line 11, **Alg.** 1).
## 5. Theoretical Analysis
In this section, we present the theoretical analysis for the proposed GNB. Here, we consider each user \(u\in\mathcal{U}\) to be evenly served \(T/n\) rounds up to time step \(T\), i.e., \(|\mathcal{T}_{u,T}|=T/n\), which is standard in closely related works (e.g., (Barton et al., 2016; Zhang et al., 2017)). To ensure the neural models are able to efficiently learn the underlying reward mapping, we make the following assumption regarding arm separateness.
Assumption 5.1 (\(\rho\)-Separateness of Arms).: _After a total of \(T\) rounds, for every pair \(\mathbf{x}_{i,t},\mathbf{x}_{i^{\prime},t^{\prime}}\) with \(t,t^{\prime}\in[T]\) and \(i,i^{\prime}\in[a]\), if \((t,i)\neq(t^{\prime},i^{\prime})\), we have \(\|\mathbf{x}_{i,t}-\mathbf{x}_{i^{\prime},t^{\prime}}\|_{2}\geq\rho\) where \(0<\rho\leq\mathcal{O}(\frac{1}{L})\)._
Note that the above assumption is mild, and it has been commonly applied in existing works on neural bandits (Barton et al., 2016) and over-parameterized neural networks (Barton et al., 2016). Since \(L\) can be manually set (e.g., \(L=2\)), we can easily satisfy the condition \(0<\rho\leq\mathcal{O}(\frac{1}{L})\) as long as no two arms are identical. Meanwhile, a comparable separateness condition is imposed by Assumption 4.2 in (Zhang et al., 2017).
**Theorem 5.2**.: _For the user networks defined in **Eq. 6** and the GNN models defined in **Eq. 7-9** with \(L\) FC-layers, let \(m\geq\Omega(\text{Poly}(T,L,a,\frac{1}{\rho})\cdot\xi_{L}\log(1/\delta))\), \(n\geq\widetilde{\Omega}(\text{Poly}(L))\). Set the learning rates and GD iterations as \(\eta_{1}=\Theta\big(\frac{\rho}{m\cdot\text{Poly}(T,n,a,L)}\big),\quad\eta_{2}=\Theta\big(\frac{\rho}{m\cdot\text{Poly}(T,a,L)}\big),\)_
\[J_{1}=\Theta\big{(}\frac{\text{Poly}(T,n,a,L)}{\rho\cdot\delta^{2}}\cdot\log( \frac{1}{\xi_{1}})),\ J_{2}=\Theta\big{(}\frac{\text{Poly}(T,a,L)}{\rho\cdot \delta^{2}}\cdot\log(\frac{1}{\xi_{2}})\big{)}.\]
Then, following **Algorithm 1**, with probability at least \(1-\delta\), the \(T\)-round pseudo-regret \(R(T)\) of GNB can be bounded by
\[R(T)\leq\sqrt{T}\cdot\big(\mathcal{O}(L\xi_{L}^{2})\cdot\sqrt{2\log(\tfrac{Tna}{\delta})}\big)+\sqrt{T}\cdot\mathcal{O}(L)+\mathcal{O}(\xi_{L})+\mathcal{O}(1).\]
Recall that \(L\) is generally a small integer (e.g., we set \(L=2\) for the experiments in **Section** 6), which makes the condition on the number of users reasonable, as \(n\) is usually a gigantic number in real-world recommender systems. We also require \(m\) to be sufficiently large under the over-parameterization regime, which makes the regret bound hold. Here, we have the following remarks.
**Remark 5.3** (Dimension terms \(d,\tilde{d}\)).: Existing neural single-bandit (i.e., with no user collaborations) algorithms (Zhou et al., 2017; Li et al., 2018; Li et al., 2019) keep the bound of \(\mathcal{O}(\tilde{d}\sqrt{T}\log(T))\) based on gradient mappings and ridge regression. \(\tilde{d}\) is the effective dimension of the NTK matrix, which can grow along with the number of parameters \(p\) and rounds \(T\). The linear user clustering algorithms (e.g., (Li et al., 2018; Li et al., 2019; Li et al., 2019)) have the bound \(\mathcal{O}(d\sqrt{T}\log(T))\) with context dimension \(d\), which can be large with a high-dimensional context space. Alternatively, the regret bound in **Theorem 5.2** is free of terms \(d\) and \(\tilde{d}\), as we apply the generalization bounds of over-parameterized networks instead (Li et al., 2018; Li et al., 2019), which are unrelated to dimension terms \(d\) or \(\tilde{d}\).
**Remark 5.4** (From \(\sqrt{n}\) to \(\sqrt{\log(n)}\)).: With \(n\) being the number of users, existing user clustering works (e.g., (Li et al., 2018; Li et al., 2019; Li et al., 2019)) involve a \(\sqrt{n}\) factor in the regret bound as the cost of leveraging user collaborative effects. Instead of applying separate estimators for each user group, our proposed GNB only ends up with a \(\sqrt{\log(n)}\) term to incorporate user collaborations by utilizing dual GNN models for estimating the arm rewards and potential gains correspondingly.
**Remark 5.5** (Arm i.i.d. Assumption).: Existing clustering of bandits algorithms (e.g., (Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019)) and the single-bandit algorithm EE-Net (Li et al., 2018) typically require the arm i.i.d. assumption for the theoretical analysis, which can be strong since the candidate arm pool \(\mathcal{X}_{t},t\in[T]\) is usually conditioned on the past records. Here, instead of using the regression-based analysis as in existing works, our proof of **Theorem 5.2** applies a martingale-based analysis to help alleviate this concern.
## 6. Experiments
In this section, we evaluate the proposed GNB framework on multiple real data sets against nine state-of-the-art algorithms, including the linear user clustering algorithms: (1) **CLUB**(Li et al., 2019), (2) **SCLUB**(Li et al., 2019), (3) **LOCB**(Li et al., 2019), (4) **DynUCB**(Li et al., 2019), (5) **COFIBA**(Li et al., 2019); the neural single-bandit algorithms: (6) **Neural-Pool** adopts one single Neural-UCB (Li et al., 2019) model for all the users with the UCB-type exploration strategy; (7) **Neural-Ind** assigns each user with their own separate Neural-UCB (Li et al., 2019) model; (8) **EE-Net**(Li et al., 2018); and, the neural user clustering algorithm: (9) **Meta-Ban**(Li et al., 2019). We leave the implementation details and data set URLs to Appendix **Section** A.
### Real Data Sets
In this section, we compare the proposed GNB with baselines on six data sets with different specifications.
**[Recommendation Data Sets]** The "MovieLens rating dataset" includes reviews from \(1.6\times 10^{5}\) users towards \(6\times 10^{4}\) movies. Here, we select \(10\) genome-scores with the highest variance across movies to generate the movie features \(\mathbf{v}_{i}\in\mathbb{R}^{d},d=10\). The user features \(\mathbf{v}_{u}\in\mathbb{R}^{d},u\in\mathcal{U}\) are obtained through singular value decomposition (SVD) on the rating matrix. We use K-means to divide users into \(n=50\) groups based on \(\mathbf{v}_{u}\), and consider each group as a node in user graphs. In each round \(t\), a user \(u_{t}\) will be drawn from a randomly sampled group. For the candidate pool \(\mathcal{X}_{t}\) with \(|\mathcal{X}_{t}|=a=10\) arms, we choose one bad movie (\(\leq\) two stars, out of five) rated by \(u_{t}\) with reward \(1\), and randomly pick the other \(9\) good movies with reward \(0\). The target here is to help users avoid bad movies. For the "Yelp" data set, we build the rating matrix w.r.t. the top \(2,000\) users and top \(10,000\) arms with the most reviews. Then, we use SVD to extract the \(10\)-dimensional representation for each user and restaurant. For an arm, if the user's rating is \(\geq\) three stars (out of five stars), the reward is set to \(1\); otherwise, the reward is \(0\). Similarly, we apply K-means to obtain \(n=50\) groups based on user features. In round \(t\), a target user \(u_{t}\) is sampled from a randomly selected group. For \(\mathcal{X}_{t}\), we choose one good restaurant rated by \(u_{t}\) with reward \(1\), and randomly pick the other \(9\) bad restaurants with reward \(0\).
**[Classification Data Sets]** We also perform experiments on four real classification data sets under the recommendation settings, which are "MNIST" (with the number of classes \(\mathcal{C}=10\)), "Shuttle" (\(\mathcal{C}=7\)), "Letter" (\(\mathcal{C}=26\)), and "Pendigits" (\(\mathcal{C}=10\)). Each class will correspond to one node in user graphs. Similar to previous works (Li et al., 2019; Li et al., 2019), given a sample \(\mathbf{x}\in\mathbb{R}^{d}\), we transform it into \(\mathcal{C}\) different arms, denoted by \(\mathbf{x}_{1}=(\mathbf{x},0,\dots,0),\mathbf{x}_{2}=(0,\mathbf{x},\dots,0),\dots,\mathbf{x}_{\mathcal{C}}=(0,0,\dots,\mathbf{x})\in\mathbb{R}^{d+\mathcal{C}-1}\), where we add \(\mathcal{C}-1\) zero digits as the padding. The received reward is \(r_{t}=1\) if we select the arm of the correct class; otherwise \(r_{t}=0\).
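For clarity, a small NumPy sketch of this transformation follows; the helper name is ours.

```python
import numpy as np

def to_arms(x: np.ndarray, C: int) -> np.ndarray:
    """Turn one d-dim sample into C shifted arms of length d + C - 1."""
    d = x.shape[0]
    arms = np.zeros((C, d + C - 1))
    for c in range(C):
        arms[c, c:c + d] = x  # arm c places x after c leading zeros
    return arms

# e.g., d = 3, C = 3: arm 0 = (x1, x2, x3, 0, 0), arm 2 = (0, 0, x1, x2, x3).
```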
#### 6.1.1. Experiment Results
Figure 2 illustrates the cumulative regret results on the six data sets, and the red shade represents the standard deviation of GNB. Here, our proposed GNB manages to achieve the best performance against all these strong baselines. Since the MovieLens data set involves real arm features (i.e., genome-scores), the performance of different algorithms on the MovieLens data set tends to have larger divergence. Note that due to the inherent noise within these two recommendation data sets, we can observe "linear-like" regret curves, which are common in existing works (e.g., (Li et al., 2019)). In this case, to show the model convergence, we present the convergence results for the recommendation data sets in Appendix **Subsec.** A.4. Among the baselines, the neural algorithms (Neural-Pool, EE-Net, Meta-Ban) generally perform better than the linear algorithms due to the representation power of neural networks. However, as Neural-Ind considers no correlations among users, it tends to perform the worst among all baselines on these two data sets. For the classification data sets, Meta-Ban performs better than the other baselines by modeling user (class) correlations with the neural network. Since the classification data sets generally involve complex reward mapping functions, the linear algorithms perform poorly on them. Our proposed GNB outperforms the baselines by modeling fine-grained correlations and utilizing
the adaptive exploration strategy simultaneously. In addition, GNB only takes at most 75% of Meta-Ban's running time in our experiments, since Meta-Ban needs to train the framework individually for each arm before making predictions. We discuss the running time further in **Subsec.** 6.5.
### Effects of Propagation Hops \(k\)
We also include experiments on the MovieLens data set with 100 users to further investigate the effects of the propagation hyper-parameter \(k\). Recall that given two input vectors \(w,v\), we apply the RBF kernel as the mapping functions \(\Psi^{(1)}(w,v)=\Psi^{(2)}(w,v)=\exp(-\gamma\cdot\|w-v\|^{2})\), where \(\gamma\) is the kernel bandwidth. The experiment results are shown in **Table** 1 below, and the value in the brackets "[·]" is the element standard deviation of the normalized adjacency matrix of the user exploitation graphs.
Here, increasing the value of the parameter \(k\) generally makes the elements of the normalized adjacency matrix "smoother", as we can see from the decreasing standard deviation values. This matches the low-pass nature of graph multi-hop feature propagation (Srivastava et al., 2017). With larger \(k\) values, GNB is able to propagate the information for more hops. In contrast, with a smaller \(k\) value, it is possible that the target user will be "heavily influenced" by only a few specific users. However, overly large \(k\) values can also lead to the "over-smoothing" problem (Srivastava et al., 2017; Wang et al., 2018), which can impair model performance. Therefore, practitioners may need to choose the \(k\) value properly under different application scenarios.
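As a reference, a short NumPy sketch of the RBF edge weights and of the smoothness statistic follows, assuming the bracketed values in **Table** 1 are the element standard deviation of the \(k\)-hop normalized adjacency \(\mathcal{S}^{k}\); the function names are ours.

```python
import numpy as np

def rbf_weight(w: np.ndarray, v: np.ndarray, gamma: float) -> float:
    """Mapping functions Psi^(1) = Psi^(2): RBF kernel on two user-network outputs."""
    return float(np.exp(-gamma * np.sum((w - v) ** 2)))

def khop_adjacency_std(A: np.ndarray, k: int) -> float:
    """Element std of S^k, with S = D^{-1/2} A D^{-1/2}."""
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1e-12))
    S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return float(np.linalg.matrix_power(S, k).std())
```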
### Effects of the Approximated Neighborhood
In this subsection, we conduct experiments to support our claim that applying approximated user neighborhoods is a feasible solution to reduce the computational cost when facing an increasing number of users (**Remark** 4.2). We consider three scenarios where the number of users \(n\in\{200,300,500\}\). Meanwhile, we fix the size of the approximated user neighborhoods \(\widetilde{\mathcal{N}}^{(1)}(u_{t}),\widetilde{\mathcal{N}}^{(2)}(u_{t})\) to \(\widetilde{n}=|\widetilde{\mathcal{N}}^{(1)}(u_{t})|=|\widetilde{\mathcal{N}}^{(2)}(u_{t})|=50\) for all three experiment settings, and the neighborhood users are sampled from the user pool \(\mathcal{U}\) in the experiments.
As shown in **Figure** 3, the proposed GNB still outperforms the baselines with an increasing number of users. In particular, given a total of 500 users, the approximated neighborhood is only 10% (50 users) of the overall user pool. These results show that applying approximated user neighborhoods (**Remark** 4.2) is a practical way to scale up GNB in real-world application scenarios. In addition, in **Table** 2, we also include the average regret per round across different time steps. With the number of users \(n=500\) on the MovieLens data set, we include experiments given different numbers of "representative users" \(\widetilde{n}\in\{50,100,150\}\) to better show the model performance when applying the approximated neighborhood. Here, increasing the number of "representative users" \(\widetilde{n}\) can lead to better performances of GNB, while it also shows that a
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline & \multicolumn{4}{c|}{Bandwidth \(\gamma\)} \\ \hline \(k\) & 0.1 & 1 & 2 & 5 \\ \hline
1 & 7276 & 7073 & 7151 & 7490 \\ & \([1.6\times 10^{-4}]\) & \([1.4\times 10^{-3}]\) & \([2.2\times 10^{-3}]\) & \([3.9\times 10^{-3}]\) \\ \hline
2 & 6968 & 6966 & 7074 & 7087 \\ & \([1.0\times 10^{-4}]\) & \([7.7\times 10^{-4}]\) & \([1.3\times 10^{-3}]\) & \([2.5\times 10^{-3}]\) \\ \hline
3 & 7006 & 7018 & 6940 & 7167 \\ & \([7.1\times 10^{-5}]\) & \([7.0\times 10^{-4}]\) & \([1.2\times 10^{-3}]\) & \([1.9\times 10^{-3}]\) \\ \hline \end{tabular}
\end{table}
Table 1. Cumulative regrets on MovieLens dataset with 100 users (different \(k\) / kernel bandwidth). The value in the brackets "[]" is the element standard deviation of the corresponding normalized adjacency matrix.
\begin{table}
\begin{tabular}{|l|l l l l l|} \hline & \multicolumn{4}{c|}{Avg. regret per round at different \(t\)} \\ \hline
**Algorithm** & 2000 & 4000 & 6000 & 8000 & 10000 \\ \hline CLUB & 0.7691 & 0.7513 & 0.7464 & 0.7468 & 0.7496 \\ Neural-Ind & 0.8901 & 0.8808 & 0.8790 & 0.8754 & 0.8741 \\ Neural-Pool & 0.7681 & 0.7526 & 0.7405 & 0.7362 & 0.7334 \\ EE-Net & 0.7886 & 0.7723 & 0.7642 & 0.7618 & 0.7582 \\ Meta-Ban & 0.7811 & 0.7761 & 0.7754 & 0.7729 & 0.7708 \\ \hline GNB (\(\widetilde{n}=50\)) & 0.7760 & 0.7245 & 0.7190 & 0.7265 & 0.7140 \\ GNB (\(\widetilde{n}=100\)) & 0.7406 & 0.7178 & 0.7172 & 0.7110 & 0.7104 \\ GNB (\(\widetilde{n}=150\)) & 0.7291 & 0.7228 & 0.7129 & 0.7105 & 0.7085 \\ \hline \end{tabular}
\end{table}
Table 2. Running for 10,000 rounds with the number of users \(n=500\) on the MovieLens data set: comparison between GNB and the baselines on average regret per round.
Figure 3. Cumulative regrets for different number of users with approximated user neighborhood (MovieLens data set).
Figure 2. Cumulative regrets on the recommendation and classification data sets.
small number of "representative users" will be enough for GNB to achieve satisfactory performances.
### Effects of the Adaptive Exploration
To show the necessity of the adaptive exploration strategy, we consider an alternative arm selection mechanism (different from line 10, **Alg.** 1) in round \(t\in[T]\): \(\mathbf{x}_{t}=\arg\max_{\mathbf{x}_{i,t}\in\mathcal{X}_{t}}\left(\hat{r}_{i,t}+\alpha\cdot\hat{b}_{i,t}\right)\), given the estimated reward and potential gain. Here, we introduce an additional parameter \(\alpha\in[0,1]\) as the exploration coefficient to control the exploration level (i.e., larger \(\alpha\) values lead to higher levels of exploration). We show the experiment results with \(\alpha\in\{0,0.1,0.3,0.7,1.0\}\) on the "MNIST" and "Yelp" data sets.
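As a one-line sketch (function name ours), this alternative rule reads:

```python
import numpy as np

def select_arm(r_hat: np.ndarray, b_hat: np.ndarray, alpha: float) -> int:
    """alpha = 0 disables exploration; alpha = 1 recovers the rule in line 10, Alg. 1."""
    return int(np.argmax(r_hat + alpha * b_hat))
```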
In **Table** 3, regarding the results on the "Yelp" data set, although the performances of GNB do not differ significantly across \(\alpha\) values, our adaptive exploration strategy based on user exploration graphs is still helpful for improving GNB's performance: setting \(\alpha\in(0,1]\) leads to better results compared with involving no exploration strategy at all (\(\alpha=0\)). On the other hand, for the "MNIST" data set, different \(\alpha\) values lead to relatively divergent results. One reason can be that with the larger context dimension \(d\) of the "MNIST" data set, the underlying reward mapping tends to be more complicated than that of the "Yelp" data set. In this case, leveraging the exploration correlations is more beneficial. Thus, the adaptive exploration strategy is necessary to improve the performance of GNB by estimating the potential gains based on "fine-grained" user (class) correlations.
### Running Time vs. Performance
In **Figure** 4, we show the results in terms of cumulative regret [y-axis, smaller = better] and running time [x-axis, smaller = better]. Additional results are in Appendix **Subsec.** A.3. Each colored point here refers to one single method. The point labeled "GNB_Run" refers to the time consumption of GNB for the arm recommendation process only, and the point "GNB" denotes the overall running time of GNB, including the recommendation and model training processes.
Although the linear baselines tend to run faster compared with our proposed GNB, their experiment performances (Subsec. 6.1.1) are not comparable with GNB, as their linear assumption can be too strong for many application scenarios. In particular, for a data set with high context dimension \(d\), the mapping from the arm context to the reward will be much more complicated and more difficult to learn. For instance, as shown by the experiments on the MNIST data set (\(d=784\)), the neural algorithms manage to achieve a significant improvement over the linear algorithms (and the other baselines) while enjoying a reasonable running time. Meanwhile, we also have the following remarks: (1) For the two recommendation tasks, GNB takes approximately 0.4 seconds per round to make the arm recommendation with satisfactory performances for the received user; (2) In all the experiments, we train the GNB framework every 100 rounds after \(t>1000\) and still manage to achieve good performance. In this case, the running time of GNB in the long run can be further improved considerably by reducing the training frequency once we already have sufficient user interaction data and a well-trained framework; (3) Moreover, since we are actually predicting the rewards and potential gains for all the nodes within the user graph (or the approximated user graph as in Remark 4.2), GNB is able to serve multiple users in each round simultaneously without running the recommendation procedure multiple times, which is efficient in real-world cases.
### Supplementary Experiments
Due to the page limit, we present supplementary experiments in Appendix **Section** A, including: (1) [**Subsec.** A.2] experiments showing the potential impact on GNB when there exist underlying user clusters; (2) [**Subsec.** A.3] complementary contents for **Subsec.** 6.5 regarding the "Letter" and "Pendigits" data sets; (3) [**Subsec.** A.4] the convergence results of GNB on the recommendation data sets.
## 7. Conclusion
In this paper, we propose a novel framework named GNB to model the fine-grained user collaborative effects. Instead of modeling user correlations through the estimation of rigid user groups, we estimate the user graphs to preserve the pair-wise user correlations for exploitation and exploration respectively, and utilize individual GNN-based models to achieve the adaptive exploration with respect to the arm selection. Under standard assumptions, we also demonstrate the improvement of the regret bound over existing methods from new perspectives of "fine-grained" user collaborative effects and GNNs. Extensive experiments are conducted to show the effectiveness of our proposed framework against strong baselines.
###### Acknowledgements.
This work is supported by National Science Foundation under Award No. IIS-1947203, IIS-2117902, IIS-2137468, and Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no.1024178 from the USDA National Institute of Food and Agriculture. The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government.
Figure 4. Running time vs. performance with baselines.
\begin{table}
\begin{tabular}{|l|c c c c c|} \hline & \multicolumn{5}{c|}{Regret results with different \(\alpha\) values} \\ \hline
**Dataset** & \(\alpha=0\) & \(\alpha=0.1\) & \(\alpha=0.3\) & \(\alpha=0.7\) & \(\alpha=1\) \\ \hline Yelp & 7612 & 7444 & 7546 & 7509 & 7457 \\ MNIST & 2323 & 2110 & 2170 & 2151 & 2141 \\ \hline \end{tabular}
\end{table}
Table 3. Results with different exploration coefficients \(\alpha\). |
2304.10505 | Video Pre-trained Transformer: A Multimodal Mixture of Pre-trained
Experts | We present Video Pre-trained Transformer. VPT uses four SOTA encoder models
from prior work to convert a video into a sequence of compact embeddings. Our
backbone, based on a reference Flan-T5-11B architecture, learns a universal
representation of the video that is a non-linear sum of the encoder models. It
learns using an autoregressive causal language modeling loss by predicting the
words spoken in YouTube videos. Finally, we evaluate on standard downstream
benchmarks by training fully connected prediction heads for each task. To the
best of our knowledge, this is the first use of multiple frozen SOTA models as
encoders in an "embedding -> backbone -> prediction head" design pattern - all
others have trained their own joint encoder models. Additionally, we include
more modalities than the current SOTA, Merlot Reserve, by adding explicit Scene
Graph information. For these two reasons, we believe it could combine the
world's best open-source models to achieve SOTA performance. Initial
experiments demonstrate the model is learning appropriately, but more
experimentation and compute is necessary, and already in progress, to realize
our loftier goals. Alongside this work, we build on the YT-20M dataset,
reproducing it and adding 25,000 personally selected YouTube videos to its
corpus. All code and model checkpoints are open sourced under a standard MIT
license. | Kastan Day, Daniel Christl, Rohan Salvi, Pranav Sriram | 2023-03-24T17:18:40Z | http://arxiv.org/abs/2304.10505v1 | # Video Pre-trained Transformer: A Multimodal Mixture of Pre-trained Experts
###### Abstract
We present Video Pre-trained Transformer. VPT uses four SOTA encoder models from prior work to convert a video into a sequence of compact embeddings. Our backbone, based on a reference Flan-T5-11B architecture, learns a universal representation of the video that is a non-linear sum of the encoder models. It learns using an autoregressive causal language modeling loss by predicting the words spoken in YouTube videos. Finally, we evaluate on standard downstream benchmarks by training fully connected prediction heads for each task. To the best of our knowledge, this is the first use of multiple frozen SOTA models as encoders in an "embedding \(\rightarrow\) backbone \(\rightarrow\) prediction head" design pattern - all others have trained their own joint encoder models. Additionally, we include more modalities than the current SOTA, Merlot Reserve, by adding explicit Scene Graph information. For these two reasons, we believe it could combine the world's best open-source models to achieve SOTA performance. Initial experiments demonstrate the model is learning appropriately, but more experimentation and compute is necessary, and already in progress, to realize our loftier goals. Alongside this work, we build on the YT-20M dataset, reproducing it and adding 25,000 personally selected YouTube videos to its corpus. All code and model checkpoints are open sourced under a standard MIT license.
Figure 1: VPT learns multimodal representations of videos from four sources (image-sequences, raw audio, Whisper-generated text captions, and OpenPSG scene graphs). Our backbone is trained to predict the words spoken in a video given encodings of the frames, audio, and scene graph, as well as an encoding of the directly preceding text.
## 1 Introduction
Humans receive a variety of stimuli to help interpret the surrounding environment: vision, audio, smell, etc., and jointly use these signals to understand the world. These events _inform_ each other and compound understanding in ways that individual modalities are incapable of producing. Machines, conversely, are often limited to maximizing performance on individual modalities. Many SOTA models excel in one or two modalities. What, then, might occur when these models are combined, each informing the others to complete a more coherent scene?
It is with these ideals in mind that we present VPT, a joint transformer trained from scratch to combine different modalities. This model takes advantage of transfer learning, with embeddings created from multi-billion parameter SOTA models: CLIP for images (Radford et al., (2020)); Whisper for the highest quality YouTube captions, including word-level timings (instead of YouTube's provided captions and phrase-level timings) (Radford et al., (2022)); and OpenPSG for explicit scene graph relationships, based on Detectron2 image segmentation (Yang et al., (2022)).
We plan to train this model at scale, using a dataset with over 20 million videos in addition to many handpicked videos that epitomize visual-audio relationships. Currently, we train on only 25k videos. For loss, we implement self-supervised learning across the captions and finetune on the VQA dataset.
The end goal of VPT is general knowledge based on video input, producing parameters that can be used as input to a variety of heads for downstream tasks. In summary, our key contributions are the following:
* We introduce the VPT architecture which leverages existing SOTA models, one or more per input modality, to produce embeddings which are integrated in a large universal backbone transformer.
* In recognition of the power of open source ML, such as the innovation enabled by the release of BERT and Stable Diffusion, and in protest against closed-source models from OpenAI and Microsoft Research, the final PyTorch models will be available under a standard open source MIT license and will be trivially consumable via Huggingface.
* Including explicit scene-graph information in a multi-modal video model.
* Employing Whisper for the purpose of generating millions of captions, a novel large-scale usage of Whisper for training a transformer with caption-frame-scene graph tuples.
Overall, our work demonstrates the potential power of utilizing data in structured and unstructured ways and speaks to the strength of combining multiple modalities.
## 2 Related Work
A variety of world-class self-supervised models have been introduced with the same design pattern: Embeddings \(\rightarrow\) Backbone \(\rightarrow\) Prediction-Heads, which is the central inspiration for this work (Carion et al., (2020), Caron et al., (2021), Jumper et al., (2021), Li et al., (2022), Tesla AI Day (2022)).
**Universal Transformer from Google Brain**(Liu et al., (2020)) introduced using a transformer layer to combine domain-specific encodings. The authors extracted the last hidden layer from a set of task-specific CNNs as input to a transformer layer to create a "universal representation" for diverse downstream image classification. This concept inspired our work.
**Merlot and follow-up Merlot Reserve from the Allen Institute for AI (Zellers et al., (2021), Zellers et al., (2022))** introduced an open-source model for multimodal video understanding achieving SOTA performance in a wide range of downstream tasks, in both zero-shot and fine-tuned
Figure 2: An architecture diagram of the Embedding, Trunk, Head design pattern as applied in Carion et al., (2020) and Jumper et al., (2021).
settings. Moreover, the same work introduced the largest curated YouTube dataset to date, in the form of YouTube video IDs/URLs, on which we build for this work.
The authors used a standard contrastive matching framework, much like CLIP (Radford et al., (2020)), but also introduced a novel learning objective for Masked Language Modeling (MLM) with attention-guided masking. The problem with normal/naive MLM on YouTube data is that random masking often masks uninformative filler words like "umm" and "you know." Therefore, Merlot prioritizes masking tokens which are most attended to by the joint vision-language encoder, so as to mask the most helpful, informative, and visually grounded tokens and provide a richer signal for MLM learning. The authors use a 20% masking ratio, and 50% of the time mask a random word while 50% of the time mask one of the top 20% most-attended-to tokens. In follow-up work, Merlot Reserve added audio to the pre-training objective via a contrastive audio masking objective, much the same as MLM. Ablations show audio provided additional information not amenable to transcripts, increasing performance on Situated Reasoning by 1.1%.
**SimCLR from Google Brain, Toronto**(Chen et al., (2020)) introduced L2-normalized embeddings for contrastive learning, as used in this work.
**The more loss terms the better in SSL**
**FLAVA from FAIR**(Singh et al., (2022)) utilized a large collection of loss functions. During pre-training, they employed Masked Image Modeling (i.e., hidden image patches), MLM, contrastive masked multimodal modeling (MMM) and contrastive image-text matching (ITM). The design is very similar to Merlot RESERVE. Finally, like this work, FLAVA then learns task-specific classification heads from random initializations for evaluations on VQA, GLUE and ImageNet.
**AlphaFold2 from Google Deepmind, London**
demonstrated the success of using many loss functions, seven in their final model, especially when each is carefully tailored to performance on downstream tasks. Such a design can be approximated with human expertise but should be confirmed via extensive ablations testing all combinations and variants of the loss functions.
**Creative Application of Attention**
This approach does not just combine transformers. Rather, it creatively applies attention to filter and combine vectors. (Ma et al., (2022), Jaegle et al., (2021), Jaegle et al., (2022), Bertasius et al., (2021))
**Perceiver and follow-up Perceiver IO, DeepMind, London**(Jaegle et al., (2021), Jaegle et al., (2022)) Solving a similar problem as LongFormer, Perceiver uses cross-attention onto a small latent array in place of full self-attention over the input dimension, enabling O(N) linear compute scaling on input sequences instead of the standard O(N\({}^{2}\)) of self-attention. Their follow-up, Perceiver IO, extends this efficiency to the decoder as well and enables something akin to a standard autoencoder but with very large (audio, video, and label) inputs and outputs. The result is an impressive reconstruction of the original video, audio and a class label prediction. Nevertheless, their images are small; each frame is only 512 pixels, much less than the standard 224x224 = 50k pixels used in CLIP and X-CLIP and even smaller than ViT's 32x32 = 1,024 pixel patches. In our work we don't see sufficient benefit in attempting to reconstruct full video outputs, as is so expensive in Perceiver IO, and instead only predict textual outputs.
**X-CLIP by Microsoft Research Asia, Xiamen University China**(Ma et al., (2022)) greatly informed how we think about temporally aligning the words someone speaks with the actions appearing on screen; in short, often YouTubers don't speak and act simultaneously. They typically talk first, then show. This underscores the importance of having video-level features and not just one-frame per one-caption. X-CLIP also leveraged Contrastive learning at every combination of temporal lengths i.e. all combinations of {video, frame, sentence, word}.
**CLIP-Event by UIUC**(Li et al., (2022)) optimized SSL with a "hard negative" sampling strategy to ensure negative samples were as challenging as possible, i.e., events with similar visual features but different labels. This inspires our future work of retrieval augmented hard negative sampling, where negatives are not selected randomly from the training batch, but rather are retrieved from any part of the training corpus.
**Chinchilla from Google Deepmind**(Hoffmann et al., (2022)) introduced the "updated" scaling laws and recommended the use of 20 text tokens per model parameter during pre-training (e.g., 1B-param models should use 20B tokens of data for compute-optimal performance). This implies the first generation of "breakthrough LLMs" (e.g., GPT-3, OPT, T5, PaLM) were over-sized for the size of their pre-training data, if, that is, we want to be "compute optimal" in some sense. Chinchilla also inspired the style of this background section.
## 3 Method
We present VPT, a causal language model which accepts multimodal embeddings (generated by state-of-the-art frameworks) as input and autoregressively predicts the transcripts of YouTube videos as output. In addition to our proposed model, we present an elaborate data preparation procedure to construct the input embeddings for the pre-training process.
We begin by downloading 28 million videos from YouTube. For each video, we run Whisper (Radford et al., (2022)), a state-of-the-art speech recognition system, to generate a transcript with per-word timestamps and other metadata. In a similar fashion to previous works (Zellers et al., (2021)), we employ a sliding window technique to partition the video into word-dense segments containing 15 words each. We extract one frame per video segment, sampled at regular intervals, to use as our visual modality. We feed these frames into OpenPSG, a SOTA scene graph model, to generate a context-descriptive scene graph. Thus, we have three resulting modalities: image, text, and scene graph. We embed all the modality features using CLIP (Radford et al., (2020)) to generate an input embedding of size [3, 768], which constitutes one training example.
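An illustrative sketch of this segmentation step is given below, assuming the Whisper output is available as a list of `(word, start_time)` pairs; the helper and field names are ours, and the word-density filter mirrors the 30 words-per-minute threshold described below.

```python
def make_segments(words, window=15, min_wpm=30):
    """Partition a per-word transcript into word-dense 15-word segments."""
    segments = []
    for i in range(0, len(words) - window + 1, window):
        chunk = words[i:i + window]
        duration_min = max((chunk[-1][1] - chunk[0][1]) / 60.0, 1e-6)
        if window / duration_min >= min_wpm:  # keep only word-dense segments
            segments.append({
                "text": " ".join(w for w, _ in chunk),
                "frame_time": (chunk[0][1] + chunk[-1][1]) / 2,  # sample a frame here
            })
    return segments
```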
### Model Architecture
**Whisper for SOTA caption quality**(Radford et al., (2022)). Spoken word is a heavily underutilized source of data in large language models. This is partially due to their noisy nature and the difficulty in producing accurate transcriptions. However, similarly to Merlot (Zellers et al., (2021)), we can still leverage the additional information in spoken language from videos indirectly via caption generation. While Merlot leverages YouTube's auto-generated captions, we attempt to improve upon model performance by utilizing Whisper, a SOTA ASR model for more accurate caption extraction.
Whisper is a large speech recognition model that generates transcripts from audio files. It applies zero-shot transfer from its original training to new environments without the need for finetuning. This model serves three major functions for our project. Foremost, it auto-detects language, making it simple to remove non-English videos from our dataset. Second, it boasts impressive accuracy, making it better for speech-to-text transcription than the auto-generated captions from YouTube. Finally, Whisper, when used with the Lhotse library, can extract a high-precision timestamp for each individual word. This feature enables further refinement of the preprocessing by ensuring the chosen frame is a known distance from the spoken words, and by ensuring we use videos and video segments with sufficient word density. We only keep video segments meeting a threshold of 30 words per minute of speech.
**Scene graph for image contextualization.** While images alone are a powerful source of information, their lack of structure and contextualization can lead to mediocre performance over a short duration of training. Thus, we propose an additional source of information acquisition from these frames: scene graphs.
**Visual Genome**(Krishna et al., (2016)) defines scene graphs as a structured, formal graphical representation of an image. Scene graphs represent a web of relationships among interconnected objects and have led to the development of many powerful models in image captioning, image retrieval, visual question answering, relationship modeling and image generation.
In our work, we employ OpenPSG (Yang et al., (2022)), a state-of-the-art scene graph generator with Panoptic Scene Graph generation (PSG). Existing scene graph generation methods (Tang et al., (2018)) use bounding box-based labels, which are often inaccurate -- pixels covered by a bounding box do not necessarily belong to the annotated class -- and cannot fully capture the background information. In contrast, OpenPSG leverages PSG to construct a more
comprehensive and clean scene graph representation. This representation contains more accurate localization of objects and description of relationships with the background i.e., the trees, pavement, and sky. We hypothesize that this will enhance the innate scene classification and spatial intelligence of our model.
**CLIP for image and text encoding.** While other models (Zellers et al., (2021)) train their own image and text encoders, we hypothesize that a self-supervised representation learner such as CLIP may enhance generalizability to downstream tasks due to its demonstrated zero-shot capabilities (Radford et al., (2020)). As CLIP is trained in a contrastive manner on a large dataset, its generated embeddings are task-agnostic and should be sufficiently powerful to enable the effective pre-training of our model. Moreover, using a frozen, pre-trained encoder greatly reduces the size and number of trainable parameters of our model, increasing the portability and simplicity of our model.
We use the most capable pre-trained CLIP model (ViT-L/14@336px) and preprocess our entire YouTube dataset with both the CLIP text and image encoders. For a given segment of a video, we encode the Whisper-generated caption of the segment with the CLIP text encoder and encode some number of evenly sampled frames (a tunable hyperparameter) from the segment with the CLIP image encoder. These embeddings are then saved and used for the primary training with our T5 backbone.
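A minimal sketch of this preprocessing step using the Hugging Face CLIP interface is shown below; batching, caching, and I/O are omitted, and the function name is ours.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

name = "openai/clip-vit-large-patch14-336"  # ViT-L/14@336px
model = CLIPModel.from_pretrained(name).eval()
processor = CLIPProcessor.from_pretrained(name)

@torch.no_grad()
def embed_segment(frames, caption):
    """Encode PIL frames and the Whisper caption of one 15-word segment."""
    inputs = processor(text=[caption], images=frames,
                       return_tensors="pt", padding=True, truncation=True)
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    return image_emb, text_emb  # (num_frames, 768), (1, 768)
```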
### Supervised Training
Our model employs auto-regressive causal language modeling (Chung et al., (2022)) based on the generated captions. Specifically, we explore two different options for utilizing captions as ground truth labels: our first method is to take the entire caption embedding as input and use the raw caption text as output. In this setting, we hope that the model can quickly learn to combine the input embeddings in a way that encapsulates the scene's key information. Our second method is to encode the first half of the 15-word caption and have the model predict the second half of the raw caption text. In this setting, we hope that our model also learns to jointly combine the input embeddings, albeit more slowly. In this alternative, there is no data leakage.
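A minimal sketch of the first option is shown below, assuming the per-segment [3, 768] CLIP embeddings (frame, caption, scene graph) are linearly projected to the backbone's hidden size and fed through `inputs_embeds`; the `t5-base` checkpoint and projection layer are illustrative stand-ins, not our full training setup.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
backbone = T5ForConditionalGeneration.from_pretrained("t5-base")
project = nn.Linear(768, backbone.config.d_model)  # CLIP dim -> T5 hidden dim

def training_step(clip_embeds: torch.Tensor, caption: str) -> torch.Tensor:
    """clip_embeds: [3, 768] stacked (frame, caption, scene-graph) embeddings."""
    inputs_embeds = project(clip_embeds).unsqueeze(0)            # (1, 3, d_model)
    labels = tokenizer(caption, return_tensors="pt").input_ids   # teacher forcing
    out = backbone(inputs_embeds=inputs_embeds, labels=labels)
    return out.loss  # autoregressive cross-entropy over caption tokens
```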
## 4 Experiments
### Visual Question Answering (VQA)
We evaluated our model on the VQA dataset (Agrawal et al., 2015) to examine its vision, language, and commonsense understanding. VQA is a benchmark visual question answering dataset consisting of 265,016 images and over 1 million open-ended questions.
Figure 3: Results on the VQA benchmark. We show that YouTube pre-training improves downstream performance on VQA. We also conduct ablations against including scene-graph information and are surprised to find that, as implemented now, scene graphs hurt performance.

**VQA Task.** A model is given an image (from the MS COCO dataset) and an open-ended question. Additionally, the 5 captions associated with the image can be used as input. The model needs to generate a natural language response (90% of the time the correct answer is a single word). The output generated by the model is compared with 10 ground-truth answers and scored according to an evaluation metric provided by the authors of the dataset. We evaluate our performance using the standard VQA accuracy metric, described by the following formula:
\[\mathrm{Acc}(\mathit{ans})=\min\left\{\frac{\#\mathrm{humans\ that\ said\ }ans}{3},1\right\}\]
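In code, the metric amounts to the following (a direct transcription of the formula above; the official evaluation additionally normalizes answers and averages over subsets of the 10 annotators, which we omit here):

```python
def vqa_accuracy(prediction: str, gt_answers: list[str]) -> float:
    # min(#humans that gave this answer / 3, 1): an answer counts as fully
    # correct if at least 3 of the 10 annotators gave it.
    pred = prediction.strip().lower()
    matches = sum(1 for a in gt_answers if a.strip().lower() == pred)
    return min(matches / 3.0, 1.0)

# e.g. vqa_accuracy("no", ["no"] * 4 + ["yes"] * 6) == 1.0
```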
**Finetuning Approach.** We use the VQA v2 dataset (Goyal et al., 2016) for finetuning and evaluation. Specifically, we finetune on the training dataset for 1 epoch with a batch size of 16 and an AdamW optimizer with learning rate 0.0001. For a given image-question pair, we extract the scene graph from the image using OpenPSG and the CLIP embeddings for both the question and the image. We follow a near-identical concatenation procedure to our pre-training setup, except we replace the video caption embedding with our question embedding. As there are multiple possible answers provided for a given sample question, we randomly pick one of the provided answers and use it as our ground-truth annotation for the visual question. We then train the model with teacher forcing to output the expected answer (truncated/padded to a fixed sequence size of 128) and use regular cross-entropy as our loss function.
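A condensed sketch of one such training step is shown below, assuming Hugging Face's transformers. The linear projection from the CLIP embedding width into T5's input space and the exact concatenation order are our assumptions (not specified in full above); hyperparameters follow the text.

```python
import torch
from torch import nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

t5 = T5ForConditionalGeneration.from_pretrained("t5-base")
tok = T5Tokenizer.from_pretrained("t5-base")
proj = nn.Linear(768, t5.config.d_model)  # hypothetical CLIP -> T5 adapter
optimizer = torch.optim.AdamW(
    list(t5.parameters()) + list(proj.parameters()), lr=1e-4
)

def training_step(question_emb, frame_embs, scene_graph_emb, answer):
    # Concatenate the modality embeddings (float32 tensors of shape (L_i, 768))
    # into one encoder input sequence, mirroring the pre-training setup with
    # the question embedding in place of the caption embedding.
    seq = torch.cat([question_emb, frame_embs, scene_graph_emb], dim=0)
    inputs_embeds = proj(seq).unsqueeze(0)            # (1, seq_len, d_model)
    labels = tok(answer, max_length=128, padding="max_length",
                 truncation=True, return_tensors="pt").input_ids
    labels[labels == tok.pad_token_id] = -100         # ignore padding in the loss
    # Teacher forcing with cross-entropy is handled internally by T5.
    out = t5(inputs_embeds=inputs_embeds, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```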
We train the T5-base model variants for up to 20k iterations on the training dataset. We report the loss curve and corresponding accuracies of a specific variant at iterations 5k, 10k, and 20k in the results table. While the improvements are minor, the model still sees increases in accuracy throughout all 20k iterations, a sanity check that we are not overfitting.
**Ablations.** We also run ablations to further examine the contributions of the large-scale pre-training and our novel scene-graph incorporation. We first examine the effectiveness of the YouTube pre-training quantitatively on the VQA task. There is an accuracy improvement of 1.65% with the YouTube pre-training, which is quite insignificant considering the scale and compute required for this effort. This may be due to an overarching issue regarding the training objective, which is susceptible to abysmal performance as a result of information leakage. In the pre-training task, our training objective is to reconstruct the CLIP-encoded text caption with our T5 model. However, recall that this text caption is fed in as an input to the model; this training objective may be too naive and weak to propel the learning of strong representations for downstream tasks.
We also examine the contributions of the scene-graph modality to the downstream VQA performance. Contrary to our intuitive hypothesis regarding the utility of scene graphs, the model seems to perform considerably better without the scene graph modality on the VQA task (an accuracy improvement of \(\sim\)6-7%). While this finding does not invalidate our hypothesis about the utility of the scene graph modality, it indicates that our usage of the scene graph information is inappropriate. Currently, we embed the scene graph with the CLIP text encoder and simply concatenate it onto our input embeddings. However, the scene graph contains multiple phrases that focus on different semantic portions of the image. Encoding all these phrases simultaneously with the CLIP text encoder is likely an issue. Moreover, scene graph information is structured and could be used more intelligently to improve our learned representations (as opposed to simple concatenation). An initial idea is to use scene graph information to refine image embeddings with a cross-attention mechanism, thereby leveraging it in an auxiliary manner rather than an explicit modality.
Finally, in response to our generally low performance on the VQA task, we ran an ablation that simplified the task by training and evaluating on only the yes/no questions in the dataset. The experiment also enabled us to reason about possible issues arising from the teacher-forcing framework. This ablation showed similar results, again suggesting a training collapse, perhaps due to larger issues with our proposed framework. We investigate these issues further in the qualitative analysis section.
## 5 Qualitative Analysis
We visualize our model outputs on select image-question pairs to qualitatively assess the model and reason about the low accuracies mentioned in the previous section. Through the visualization of our outputs (many beyond those visualized above), we noticed that the low accuracies of our model variants are due to a collapse in the model predictions to a trivial set of outputs. For example, the model simply predicts "No" for a majority of the questions (regardless of the context), interspersed with some predictions of "0" as well. All the variants except the scene graph ablation were tremendously susceptible to this collapse, as we struggled to find even a small number of samples with "non-trivial" predictions. The scene graph ablation exhibited slightly more promising behavior (albeit largely similar), which we visualize in Fig. 4. The first three images in the top row illustrate that our model possesses some understanding of the question and the semantic context of the image. However, the model regresses to nonsensical and sometimes trivial predictions in most of the other images, demonstrated by the prediction of "No" and "2" in certain cases.
Our qualitative results corroborate our quantitative findings and our reasoning about the potential sources of the generally low accuracy across all variants. The observed trivial set of outputs is likely due to a combination of overfitting along with a poor choice of training objective and model architecture, as well as other design choices, e.g., model input structure.
## 6 Conclusion, Limitations, Future Work
### Conclusions
The goal of VPT is to provide general knowledge based on video input. This is achieved by leveraging multiple multimillion parameter models to create embeddings for a large-scale joint transformer via transfer learning. Scene graphs are also created for each frame to provide additional context to the embedding space. The model is trained at scale using a dataset of over 20 million videos, and self-supervised learning is used to optimize the model. The end result is a model that fuses multiple modalities (including a novel structured scene graph modality) to enable universal representation learning, enhancing performance on myriad downstream tasks.
### Limitations and Future Work
**Training Data Usage.** In our current model, we train on only a fraction of the available preprocessed data. Scale is one key factor in large deep learning models (Brown et al., 2020), and we believe that with more compute time/power our model will better acclimate to its input embeddings and better capture information in its output embeddings.
**Loss Function.** In our current loss function, we are using the full caption as both input data and as the output labels. This means our model contains information about a variable that is being predicted by our model (leaky data). This likely led to overfitting, culminating in poor performance on our downstream task. To rectify this issue, we propose using our alternative method of splicing the caption in half, as mentioned in the methods section.

Figure 4: Qualitative analysis of model performance. Even incorrect examples show reasonable guesses, but the model often collapses to predicting "no" unnecessarily.
Additionally, our model will ideally leverage many more loss functions in future iterations. Options we are currently implementing include contrastive image-caption matching, position-encoding-based temporal frame ordering, and contrastive caption-audio matching. Based on previous work (Zellers et al., 2021, 2022), these objectives could encourage the model to learn more robust representations.
**Audio.** One more major modification to our model is the inclusion of audio embeddings. Raw audio is capable of encapsulating information that language cannot, like tone and surrounding sound. This is particularly relevant in video. Proper utilization of audio embeddings has been shown to be a source of improvement in other multimodal transformers (Zellers et al., 2022). Additionally, we plan to leverage a pre-trained audio transformer (Gong et al., 2021a,b; Verma et al., 2021) as opposed to a self-trained transformer, in line with our overarching theme of transfer learning. Our model might also be able to leverage different audio embedding models for different downstream goals. One instance of this could be using one embedding model trained for tone recognition and another trained for sound recognition.
## Acknowledgements
Heng Ji, Yuxiong Wang, and the NCSA Delta admin team.
|
2305.07350 | Detecting Coordinated Inauthentic Behavior in Likes on Social Media:
Proof of Concept | Coordinated inauthentic behavior is used as a tool on social media to shape
public opinion by elevating or suppressing topics using systematic engagements
-- e.g. through *likes* or similar reactions. In an honest world, reactions may
be informative to users when selecting on what to spend their attention:
through the wisdom of crowds, summed reactions may help identify relevant
and high-quality content. This is nullified by coordinated inauthentic liking.
To restore wisdom-of-crowds effects, it is therefore desirable to separate the
inauthentic agents from the wise crowd, and use only the latter as a voting
*jury* on the relevance of a post. To this end, we design two *jury selection
procedures* (JSPs) that discard agents classified as inauthentic. Using machine
learning techniques, both cluster on binary vote data -- one using a Gaussian
Mixture Model (GMM JSP), one the k-means algorithm (KM JSP) -- and label agents
by logistic regression. We evaluate the jury selection procedures with an
agent-based model, and show that the GMM JSP detects more inauthentic agents,
but both JSPs select juries with vastly increased correctness of vote by
majority. This proof of concept provides an argument for the release of
reactions data from social media platforms through a direct use-case in the
fight against online misinformation. | Laura Jahn, Rasmus K. Rendsvig, Jacob Stærk-Østergaard | 2023-05-12T09:59:26Z | http://arxiv.org/abs/2305.07350v1 | # Detecting Coordinated Inauthentic Behavior in Likes on Social Media:
###### Abstract
Coordinated inauthentic behavior is used as a tool on social media to shape public opinion by elevating or suppressing topics using systematic engagements--e.g. through 'likes' or similar reactions. In an honest world, reactions may be informative to users when selecting on what to spend their attention: through the wisdom of crowds, summed reactions may help identify relevant and high-quality content. This is nullified by coordinated inauthentic liking. To restore wisdom-of-crowds effects, it is therefore desirable to separate the inauthentic agents from the wise crowd, and use only the latter as a voting 'jury' on the relevance of a post. To this end, we design two _jury selection procedures_ (jsps) that discard agents classified as inauthentic. Using machine learning techniques, both cluster on binary vote data--one using a Gaussian Mixture Model (gmm jsp), one the \(k\)-means algorithm (km jsp)--and label agents by logistic regression. We evaluate the jury selection procedures with an agent-based model, and show that the gmm jsp detects more inauthentic agents, but both jsps select juries with vastly increased correctness of vote by majority. This proof of concept provides an argument for the release of reactions data from social media platforms through a direct use-case in the fight against online misinformation.
**Keywords**: Coordinated inauthentic behavior, bot detection, social media, wisdom of crowds, simulation, agent-based modeling
## 1 Introduction
In April 2022, we bought 100 Twitter likes for 3.85 USD through a readily accessible website. These 100 likes sufficed to catapult the liked tweet to the top of the _Top_ feed of #dkpol, the main Twittersphere for discussing Danish politics. There, it stayed for several hours.1 This illustrates that Twitter's content sorting algorithm may be easily hacked to bring selected items to users' attention using only likes.
Footnote 1: When the hashtag was viewed in a private browser tab without being logged in, or when logged in with a new Twitter profile. The tweet was clearly marked as a test, and published by an account with almost no network or activity.
Our tweet was clearly marked as off-topic for #dkpol, but could have been misinformation. Our "inauthentic likes" could thus have been used with the intent to mislead or manipulate--and this would not be uncommon: when deploying _influence operations_ (IOs) on social media platforms to shape public opinion [11], a central strategy is to exploit the platforms' content sorting algorithms to highlight posts to users, a process known as _attention hacking_[1]. Attention hacking through likes requires coordination of likes to maximize effect. As the liking behavior does not reflect authentic personal beliefs, it is an example of so-called _coordinated inauthentic behavior_ (CIB) [18, 19, 2].2 Coordinated inauthentic behavior may be exhibited by humans and bots alike.
Footnote 2: As many platforms' sorting algorithms assign higher rank to posts that many users have engaged with--e.g., through liking, upvoting, sharing, retweeting or commenting--attention hacking influence operations orchestrate coordinated engagements through coordinated inauthentic behavior to maximize their effect [11].
Liking is an engagement type common across social media platforms, but as different platforms use different labels, we refer to _reactions_, understood as one-click engagements where users may select one option from a short pre-defined list as their 'reaction' to a post, with users' choices typically summed and presented as a quantified metric beneath the item. Reactions include perhaps most famously Facebook's original 'Like' and their now five other reaction emojis, the hearts/likes on Instagram, TikTok and Twitter, and Reddit's up- and downvotes (Weber and Neumann 2021). Importantly, all these reactions inform the platforms' algorithmic content sorting, and thus steer users' attention.
In an honest world, reactions may be informative in steering attention: through the wisdom-of-crowds, summed reactions may help identify relevant, well-produced, or otherwise high quality content as attention-worthy, so it may be presented to users at the top of their news feed [1]. Alas, that reactions serve as attention-steering exactly makes them--along with other quantified attention metrics [10]--a target candidate for influence operations that spread misinformation based on coordinated inauthentic behavior (CIB-based IOs). Accounts (often bots) used to hack users' attention simulate authentic interest in a topic through reacting to social media posts [10]. While not actively posting content, they seek to elevate or suppress specific topics in the public perception, flood platforms with misinformation, and boost narratives counter to an authentic public interest [11]. The identification of
such computational propaganda is difficult as modern bots mask their identity, mimicking human behavior to an increased extent [Beatson et al.2021, Bradshaw and Howard2017].
When CIB-based IOs target reactions, the wisdom-of-crowds effect is lost. Scholars have called for ways to promote the Internet's potential to strengthen rather than diminish democratic virtues [Lazer et al.2018], e.g., by redesigning online environments to enable informed choice of attention expenditure by providing transparent crowd-sourced voting systems [Lorenz-Spreen et al.2020]. Here, current implementations of reactions are in the ballpark, yet strongly flawed as they may be hacked by CIB-based IOs. Adopting exactly a voting perspective, this paper develops a computational approach to detect and remove CIB influence on reactions, with the aim to restore reactions' wisdom-of-crowds effects.
Detecting and removing coordinated inauthentic behavior targeted at reactions is a neglected area of research (perhaps partially because relevant data is difficult for researchers to obtain despite often being public, a topic we return to below and in the concluding remarks). In general, computational approaches to combat CIB have not been studied extensively [Nizzoli et al.2021]. Recent research has explored user information-based coordination such as account handle sharing, content-based coordination (e.g., synchronized co-posting of images, hashtags, text, and links), attention metric-based coordination such as co-retweeting, or timing-based coordination [Kirn and Hinders2022, Pacheco et al.2021, Nizzoli et al.2021, Giglietto et al.2020a,b, Grimme, Assenmacher, and Adam2018, Weber and Neumann2021]. Despite reactions being a commonly adopted and easily manipulatable mechanism, research on CIB more narrowly targeted at reactions is quite scarce. Bordering on relevance are studies on purchased likes not of posts, but of _pages [followers]_ on Facebook [Twitter] [Ikram et al.2017, De Cristofaro et al.2014, Beutel et al.2013] [(Aggarwal and Kumaraguru2015)]. This stream of work tries to understand the modus operandi of page like farms [follower farms] [De Cristofaro et al.2014] [(Aggarwal and Kumaraguru2015)] and develops supervised classification models based on demographic, temporal, and social characteristics [Ikram et al.2017] [(Aggarwal and Kumaraguru2015)]. Here, notably, Ikram et al. (2017) find that their bot classifier has difficulty detecting page like farms that mimic regular like-spreading over longer timespans, and conclude that Beutel et al. (2013)'s unsupervised approach to detect page like farms--even developed with data from inside Facebook--yielded large false positive errors.3 Directly about reactions to posts is Torres-Lugo et al. (forthcoming 2022)'s study of metric inflation through strategic deletions on Twitter. They analyze coordination in repetitive _(un)liking_ of _deleted_ tweets in influence operations that seek to bypass daily anti-flooding tweeting limits. From a curation point of view, looking at unlikes is a very smart move, as this data is in fact available to purchase from Twitter. Alas, the approach is inapplicable to tweets that remain online, such as those central to CIB-based IOs that push narratives through political astroturfing [Schoch et al.2022].
Footnote 3: Also in the closely related field of bot detection has the detection of bots that are mainly designed to engage through reactions gone unstudied, again perhaps due to data restrictions. For a systematic review of the bot detection literature, see [Orabi et al.2020].
### A Voting and Simulation Approach to Coordinated Inauthentic Behavior
To study CIB targeted at reactions, we methodologically take a voting perspective on reactions and a computer simulation approach to validate the proposed methods.
With the voting perspective, we conceptualize reactions as votes about the epistemic quality of an information item. We restrict attention to a two-reaction case, with one reaction interpreted as a vote _for_ the item being of high quality, the other a vote against. We adopt this voting perspective as it allows us to clearly explicate a structure of reactions as binary voting, to specify different patterns and varying degrees of coordination [Nizzoli et al.2021], and to define and quantify the aptitude of a group of users with respect to tracking quality.
Further, it allows us to draw on intuitions from the _Condorcet Jury Theorem4_[Condorcet1785]: while many weakly competent authentic judgments may lead to a highly accurate collective judgment through simple majority vote, such positive wisdom-of-crowds effects may be counteracted by the non-independence exhibited by coordinated inauthentic behavior.
Footnote 4: When all jurors vote _independently_ and are _better than random_ at voting correctly, the probability of a correct majority judgment approaches \(1\) as the jury size approaches \(\infty\).
The latter motivates the paper's fundamental approach to counter CIB influence, namely to design _jury selection procedures_ (jsps). The core idea is this: given a collection of votes from a voting population of agents, a jsp searches the collection for coordinated voting and from the findings classifies agents as inauthentic or authentic, before finally returning a subset of the population--the _jury_--whose votes are tallied to determine the epistemic quality of a post. I.e., a jsp censors a subset of the population's votes in order to restore wisdom-of-crowds effects for the remainder.
Methodologically, the paper is also a computer simulation paper. We develop an agent-based model (ABM) in which agents vote on the quality of fictitious posts. The ABM includes agents that vote authentically--in accordance with their private beliefs about the quality of the post and the assumptions of the Condorcet Jury Theorem--and some that do not, either by voting only inauthentically or coordinated inauthentically. Over synthetic vote data generated by the ABM, we test and validate the machine learning-based jsps that we develop.
Validating with synthetic data circumvents three main challenges in detecting coordinated inauthentic users (lacking reproducibility, lacking data availability, and lacking ground truth), while suffering the downside that synthetic
data has limited ecological validity. First, empirical social media studies of bots remain problematic to replicate and reproduce due to a time-sensitivity of the relevant data (Martini et al., 2021; Samper-Escalante et al., 2021; Bebensee, Nazarov, and Zhang, 2021). Attempts to collect the same data twice are likely to fail, as traces of coordination may be altered or deleted after an influence operation was concluded. While e.g. Twitter grants generous academic research access to historic tweets through their API, accounts involved in CIB may evade detection as they are no longer retrievable in their original appearance (Torres-Lugo et al. forthcoming, 2022). The shortcomings in data reproducibility make CIB/bot detection frameworks difficult to compare, as these typically require live data access (Martini et al., 2021). Data and analyses of the methods proposed here are time-insensitive and reproducible (cf. Data Availability Statement and Supplementary Material5.
Footnote 5: Code to reproduce and analyse the data can be found at the GitHub repository LJ-9/Coordinated-Inauthentic-Behavior-Likes-ABM-Analysis.
Second, data availability limits research. Large scale studies may simply be impossible due to data access restrictions (Martini et al., 2021; Bliss et al., 2020; Pasquetto, Swire-Thompson et al., 2020). Specifically data concerning users' reactions is very difficult for researchers to obtain: none of the currently existing datasets include it,6 and neither Meta, Twitter nor Reddit supply this data in necessary scope (Bliss et al., 2020; Pasquetto, Swire-Thompson et al., 2020). We outline data collection strategies in connection with empirical validation of our methods in the concluding remarks. Data from an ABM can be (re)synthesized in any quantity.
Footnote 6: See e.g. Indiana Universityβs Bot Repository, a resourceful centralized repository of annotated datasets of Twitter social bots: [https://botometer.osome.iu.edu/bot-repository/datasets.html](https://botometer.osome.iu.edu/bot-repository/datasets.html).
Third, there is an issue with lacking ground truth as researchers do not have access to the empirical truth about accounts involved in coordinated inauthentic behavior. Qualified guesses can be made based on suspicious similarities in behavior or profile features, but _de facto_, it remains unknown whether two users' actions are authentically correlated or inauthentically coordinated, or how many fully or partially automated accounts exist in a total population (Magelinski, Ng, and Carley, 2022; Martini et al., 2021; Samper-Escalante et al., 2021; Chavoshi, Hamooni, and Mueen, 2017; Beutel et al., 2013). Specifically for reaction-based CIB, it seems infeasible to create a labeled dataset that even _approximates_ the ground truth: labeling accounts individually e.g. via crowd-sourcing or the well-established bot classifier _Botometer_ will likely fail as \(i)\) single accounts will often seem inconspicuous unless looked at in concert at a collective level (Magelinski, Ng, and Carley, 2022; Grimme, Assenmacher, and Adam, 2018; Yang et al., 2019, 2020),7 and \(ii)\) collective level labeling is impossible due to current data restrictions as reactions data is available only in severely limited quantities, if at all.8
Footnote 7: _Botometer_βs feature-based approach considers accounts one at a time and does therefore not pick up on group anomalies based on suspicious similarity (Yang et al., 2019, 2020).
By validating over an ABM where we specify which agents are involved in CIB, we gain transparency and a ground truth. We get precise baselines, exact measurements of the effect of our methods, and certainty about the degrees of misclassification. We elaborate on this below. Hereby, the ABM validation allows us to provide methodologically robust proof of concept for the jsp approach.
### Existing Work and Contributions
Little work exists on identifying and eliminating inauthentic votes and jsps. Galeazzi, Rendsvig, and Slavkovik (2019) suggest to remove inauthentic influence by identifying an independent jury via the \(\chi^{2}\) test of independence. Their model takes sharing-induced diffusion in social networks as evolving crowdvoting. Their main results pertain to jsp time-complexity, with even their least demanding suggestion still exponential in the jury size (a direct consequence of using \(\chi^{2}\)). In addition, we find that the number of data points required for \(\chi^{2}\) application (see Sec. 3) makes their jsps practically inapplicable and computationally unserviceable. A performance comparison with their bot detecting scheme is therefore impossible beyond contrasting data requirements.
The central goal of this paper is to develop jury selection procedures that raise the correctness of vote by majority of juries, complementing (Galeazzi, Rendsvig, and Slavkovik, 2019). The methodological voting perspective allows us to define a metric of success for the methods we develop: majority correctness scores (mcss). Majority correctness scores give a direct perspective on the collective epistemic practice of a group of agents, providing a more conclusive perspective than misclassification scores. Beyond raising majority correctness scores, we desire _accurate_ jsps that minimize misclassification of \(i\). authentic agents as inauthentic and \(ii\). inauthentic agents as authentic (i.e., minimize \(i\). false positive and \(ii\). false negative errors). The first values _vox populi_ and penalizes censorship (Shao et al., 2018), while the second is a _precautionary principle_ against inauthentic influence. Further, we desire _feasible_ jsps that use only data that is obtainable by social media platforms and that requires little to no preprocessing, have few to no supervised elements (Orabi et al., 2020; Grimme, Assenmacher, and Adam, 2018), and have reasonable complexity.
This paper develops two jsps, evaluated with respect to vote data generated by the agent-based model. The ABM is presented in Sec. 2 where varying baseline agent populations' majority correctness scores (mcss) are inspected, on which inauthentic activity has a substantial negative impact.
The core jsp machinery is presented in Sec. 3. Each jury selection procedure invokes a classifier method that decomposes the ABM data into singular values (SVD), applies a clustering strategy (either a Gaussian Mixture
Model (gmm) or the \(k\)-means algorithm (km)), and labels agents using a non-standard application of logistic regression on the qualitative property of the post voted on. In related work, vote data--such as US congress roll call data--has been successfully grouped employing dimensionality reduction, e.g., (Yang et al., 2020; Porter et al., 2005; Sirovich, 2003; Poole, 2000). Our approach is novel in applying such methods in the realm of digital propaganda using simple binary input data. Dimensionality reduction and clustering methods have so far been applied to less sparse data structures, such as HTTP-level traffic patterns (Suchacka, 2019; Suchacka and Iwanski, 2020), textual data of tweets (Kirn and Hinders, 2022), or rich datasets with behavior-based features (number of friends/followers, mentions and hashtags, etc.) like in detection of spam bots on social media sites (e.g., (Ahmed and Abulaish, 2013)). A systematic review on detection of bots on social media (Orabi et al., 2020) further discusses unsupervised methods, e.g. (Chavoshi et al., 2017; Chen and Subramanian, 2018), yet to our knowledge only Galeazzi et al. (2019) attempt to flag agents given just binary vote data (i.e., with no added information about e.g. temporal coordination as in (Beutel et al., 2013; Grimme et al., 2018; Magelinski et al., 2022; Pacheco et al., 2021; Schoch et al., 2022)) obtainable intra-platform by social media sites.
In Sec. 4, we define and evaluate the gmm and km jury selection procedures. We show that both are highly successful, as they select juries that have vastly increased majority correctness scores compared to baseline juries. Moreover, the gmm jsp outperforms the km jsp with respect to its accurate and particularly precautious results. Sec. 5 summarizes the main findings and discusses ethical considerations, model assumptions, and data collection.
Technically, we contribute a novel, reactions-based approach to detect CIB, implemented in two variants evaluated to have positive effects over synthetic ABM data, thus showing proof of concept. Societally, the proof of concept provides a direct argument to be raised to social media platforms to open access to reactions data: the data is necessary to evaluate, tweak and deploy promising methods (i.e., jsps) to combat coordinated inauthentic behavior and thus to inhibit the spread of misinformation.
## 2 Agent-Based Model (ABM)
We evaluate the two jury selection procedures over data generated by the following agent-based model. A model _run_ consists of a fixed set of agents partitioned into agent types (see below), and a sequence of independent _voting rounds_. Each round concerns a given post (which we do not explicitly represent) and whether the post is of high or low quality, on which agents vote \(\{1,-1\}\) (\(1\) for high, \(-1\) for low). We think of these votes as users' reactions, and call \(1\) an _upvote_ and \(-1\) a _downvote_.
Agents are either _authentic_ or _inauthentic_. We formally define the agent types in Sec. 2.2 below. We think of authentic agents as regular social media users that use their up- and downvotes to inform about post quality (e.g., analogously to Metaxas et al. (2015) who showed that by retweeting, users on Twitter signal trust in the message). Authentic agents vote independently according only to their competence-based beliefs about post quality: they satisfy the assumptions of the Condorcet Jury Theorem. Inauthentic agents do not: with different patterns and varying degrees, they coordinate their votes through properties distinct from quality. On social media, inauthentic behavior can be witnessed among both human-controlled and automated accounts. Given the scale of influence operations, it is relevant to think about inauthentic behavior in terms of so-called _social bots_: "_Computer programs designed to use social networks by simulating how humans communicate and interact with each other_" (Abokhodair et al., 2015; Ahmed and Abulaish, 2015). The design of our inauthentic agents draws inspiration from the social bot classes _astroturfing bots_ (that create _"the appearance of widespread support for a candidate or opinion_" (Ratkiewicz et al., 2011)) and _influence bots_ ("_Realistic automated identities that illicitly shape discussion_" (Subrahmanian et al., 2016)) (Orabi et al., 2020).
### Post Properties, Competences and Beliefs
Let \(A\) be a finite set of agents and \(I=\{1,2,3\}\) an index set for properties. A voting round concerns a given post, and commences with the (Monte Carlo like) sampling of a state
\[s=(p_{i},C_{a}(p_{1}),B_{a}(p_{i}))_{i\in I\cup A,a\in A}\in\mathbb{R}^{3+|A| +|A|+3|A|+|A|^{2}}\]
where each \(p_{i}\) represents a property of the post, \(C_{a}(p_{1})\) is agent \(a\)'s competence in evaluating whether the post has property \(p_{1}\) and \(B_{a}(p_{i})\) is \(a\)'s belief about whether the post has property \(p_{i}\). Properties \((p_{i})_{i\in I}=(p_{1},p_{2},p_{3})\in\{-1,1\}^{3}\) are sampled independently from a binomial distribution with probabilities \(P(p_{i}=1)=(1-P(p_{i}=-1))\), given as noise levels in Sec. 2.4. Each \(p_{a}\) is sampled as \(p_{3}\), and is a private property used by some agent types.9 We say that the post has property \(p_{i}\) if \(p_{i}=1\), else that it does not. Property \(p_{1}\) represents whether the post has high or low quality, and is the only property relevant to authentic agents. Inauthentic agents act also on additional properties, as described below.
Footnote 9: We include \(p_{a}\) and \(B_{a}(p_{b})\) for all agents \(a,b\in A\) in the state for description simplicity. In the simulation implementation, we only sampled \(p_{a}\) and \(B_{a}(p_{a})\) for agents \(a\) that make use of \(p_{a}\).
Each agent \(a\in A\) is assigned a competence \(C_{a}(p_{1})\) to determine whether the post has high quality, \(p_{1}\).10 To evaluate \(p_{1}\), it is assumed that all agents are better than fair coin tosses but not perfect: \(C_{a}(p_{1})\in[0.65,0.95]\). We chose \([0.65,0.95]\) for \(C_{a}(p_{1})\) to expedite convergence towards a 100% mcs for authentic agents while ensuring imperfect competence. Any closed, convex subinterval of the open \((0.5,1)\) would yield similar results w.r.t. mcs, more or less quickly. Competences are uniformly resampled each round, to capture that agents' expertise may vary from post to post. Inauthentic agents are assumed perfectly competent in evaluating properties \(p_{2}\) and \(p_{3}\), which they use to coordinate their actions:
\(C_{a}(p_{2})=C_{a}(p_{3})=1\). Properties and competences probabilistically determine agents' beliefs: for all \(a\in A,i\in I\), the beliefs \(B_{a}(p_{i})\in\{-1,1\}\) are sampled with
\[C_{a}(p_{i})=P(B_{a}(p_{i})=p_{i}). \tag{1}\]
If \(B_{a}(p_{i})=1\), then \(a\) believes that the post has property \(p_{i}\), else \(a\) believes it does not.1 If \(B_{a}(p_{i})=p_{i}\), then \(a\)'s belief about \(p_{i}\) is correct. Hence, (1) states that the probability of agent \(a\)'s beliefs about \(p_{i}\) being correct equates \(a\)'s competence with respect to \(p_{i}\). For two rounds and their states \(s\) and \(s^{\prime}\), all sampling is independent, and in each state \(s\), each \(C_{a}(p_{1})\) is independent from \(C_{b}(p_{1})\), \(a\neq b\). No correlations between properties are assumed due to the interpretations of \(p_{2}\) and \(p_{3}\), stated below.
Footnote 1: Hence, agents never suspend judgment, even on properties irrelevant to their voting behavior. Superfluous beliefs have no effects, and are only to simplify implementation.
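The authors implement the ABM in R (cf. Sec. 2.4); purely for illustration, the sampling of one round's properties, competences, and quality beliefs per (1) can be sketched in Python as follows. \(P(p_{1}=1)=0.75\) follows the text; the probabilities for \(p_{2}\) and \(p_{3}\) are placeholders standing in for one unspecified noise level.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_round(n_agents, p_probs=(0.75, 0.5, 0.5)):
    # Sample properties p_1, p_2, p_3 in {-1, 1}; P(p_1 = 1) = 0.75 as in the text.
    p = np.where(rng.random(3) < np.asarray(p_probs), 1, -1)
    # Competences C_a(p_1), resampled uniformly from [0.65, 0.95] each round.
    C = rng.uniform(0.65, 0.95, size=n_agents)
    # Beliefs about quality: correct with probability C_a(p_1), per eq. (1).
    correct = rng.random(n_agents) < C
    beliefs_p1 = np.where(correct, p[0], -p[0])
    return p, C, beliefs_p1
```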
### Agent Types
We define 10 agent types. Each agent type is a behavior-defining function that maps an agent's beliefs to votes. The set of agent types is \(\{\mathsf{A},\mathsf{B}_{i},\mathsf{D}_{i},\mathsf{L}_{i}\}_{i\in\{\uparrow,\downarrow,\updownarrow\}}\), each defined and described below. A _population_ is a map \(\mathcal{P}:A\longrightarrow\{\mathsf{A},\mathsf{B}_{i},\mathsf{D}_{i},\mathsf{L}_{i}\}_{i\in\{\uparrow,\downarrow,\updownarrow\}}\) that assigns each agent an agent type.
Intuitively, \(\{\mathsf{A},\mathsf{B}_{i},\mathsf{D}_{i},\mathsf{L}_{i}\}_{i\in\{\uparrow,\downarrow,\updownarrow\}}\) contains the following agent types: A is the _authentic_ agent type, and the inauthentic agents come in three types that incorporate different patterns of coordination--_boosters_ \(\mathsf{B}_{i}\), _distorters_ \(\mathsf{D}_{i}\), and _lone wolfs_ \(\mathsf{L}_{i}\). Each inauthentic type votes based on beliefs about a property _distinct_ from quality. Boosters and distorters vote respectively given properties \(p_{2}\) and \(p_{3}\) to coordinate their inauthentic behavior in-group. Lone wolfs do not coordinate. Each group contains three sub-types: one main to our story which _upvotes on cue_ \((i=\uparrow)\), and two auxiliary that _downvote on cue_ \((i=\downarrow)\) or _both up- and downvote on cue_ \((i=\updownarrow)\). We include the auxiliary sub-types to create a more noisy--and thus harder to maneuver--setting for the jsps. We hope the notation is mnemonically helpful rather than distracting.
Throughout, the largest population is \(\mathcal{P}_{\mathtt{Full}}\), defined for an agent set \(A\), \(|A|=1900\), with \(1000\) agents assigned to \(\mathsf{A}\) and \(100\) agents to each \(\mathsf{X}\in\{\mathsf{B}_{i},\mathsf{D}_{i},\mathsf{L}_{i}\}_{i\in\{\uparrow,\downarrow,\updownarrow\}}\). This size and ratio allow for flexibly choosing subpopulations with sizes large enough to produce robust votes. We mainly study subpopulations (restrictions) of \(\mathcal{P}_{\mathtt{Full}}\). We specify these sub-populations by stating the size of the pre-image of the agent types (which is sufficient as precise agent identity will not matter), where we write \(|\mathsf{X}|\) for \(|\mathcal{P}_{\mathtt{Full}}\,^{-1}(\mathsf{X})|\) for agent type \(\mathsf{X}\). The four main subpopulations are subsets of either 1000 agents (\(\mathcal{P}_{\mathtt{A11}}\), containing all agent types, with 100 agents of each type) or 200 agents (\(\mathcal{P}_{\mathtt{B}_{\uparrow}}\), \(\mathcal{P}_{\mathtt{D}_{\uparrow}}\) and \(\mathcal{P}_{\mathtt{L}_{\uparrow}}\), each with 100 authentic agents and 100 agents of either type \(\mathsf{B}_{\uparrow}\), \(\mathsf{D}_{\uparrow}\) or \(\mathsf{L}_{\uparrow}\)). Thus, let \(\mathcal{P}_{\mathtt{A11}}\) be the restriction of \(\mathcal{P}_{\mathtt{Full}}\) with \(|\mathsf{X}|=100\) for each \(\mathsf{X}\in\{\mathsf{A},\mathsf{B}_{i},\mathsf{D}_{i},\mathsf{L}_{i}\}_{i\in\{\uparrow,\downarrow,\updownarrow\}}\), let \(\mathcal{P}_{\mathtt{B}_{\uparrow}}\) be the restriction of \(\mathcal{P}_{\mathtt{Full}}\) with \(|\mathsf{A}|=|\mathsf{B}_{\uparrow}|=100\) and \(|\mathsf{X}|=0\) for \(\mathsf{X}\in\{\mathsf{B}_{i},\mathsf{D}_{i},\mathsf{L}_{i}\}_{i\in\{\uparrow,\downarrow,\updownarrow\}}\setminus\{\mathsf{B}_{\uparrow}\}\), and let \(\mathcal{P}_{\mathtt{D}_{\uparrow}}\) and \(\mathcal{P}_{\mathtt{L}_{\uparrow}}\) be given as \(\mathcal{P}_{\mathtt{B}_{\uparrow}}\), replacing \(\mathsf{B}_{\uparrow}\) with respectively \(\mathsf{D}_{\uparrow}\) and \(\mathsf{L}_{\uparrow}\). We may further specify subpopulations of \(\mathcal{P}_{\mathtt{A11}}\), \(\mathcal{P}_{\mathtt{B}_{\uparrow}}\), \(\mathcal{P}_{\mathtt{D}_{\uparrow}}\) and \(\mathcal{P}_{\mathtt{L}_{\uparrow}}\) like we specify subpopulations of \(\mathcal{P}_{\mathtt{Full}}\). These restrictions mainly serve to describe what happens when we reduce the number of authentic agents. We write e.g. "\(\mathcal{P}_{\mathtt{B}_{\uparrow}}\) for \(|\mathsf{A}|=25\)" to mean the subpopulation of \(\mathcal{P}_{\mathtt{B}_{\uparrow}}\) with \(125\) agents in total, \(25\) of them authentic.
Authentic Agents.Authentic agents--agents \(a\) of type \(\mathsf{A}\)--correspond to those assumed in the Condorcet Jury Theorem: they vote fully in accordance with their beliefs about quality (\(p_{1}\)), independently of others, and with a competence strictly above \(0.5\). The vote of an authentic agent \(a\) in state \(s\) is \(\mathsf{A}(a,s)\in\{-1,1\}\), given by the following table:
\[\begin{array}{c|c|c}&B_{a}(p_{1})=1&B_{a}(p_{1})=-1\\ \hline\mathsf{A}(a,s)&1&-1\end{array}\]
In this and the below tables, row index (\(\mathsf{A}(a,s)\)) denotes the agent type and the cell content denotes the action taken in the circumstances specified in the column index (e.g. \(B_{a}(p_{1})=1\)).
Boosters.Boosters vote in a coordinated partisan fashion, aiming to swing the majority vote in a direction given by \(p_{2}\), irrespective of quality (\(p_{1}\)). Hence boosters exhibit CIB. In social media terms, we think of \(p_{2}\) as disconnected from quality (\(p_{1}\)), but as representing that the post, e.g., originates from a specific source, expresses a given viewpoint, or--taking booster agents as bots--as tagged for special action by a handler.
The main _Upvote Booster_ \(\mathsf{B}_{\uparrow}\) has as its goal to boost and amplify \(p_{2}\) posts: they upvote ("Yes, the post has \(p_{1}\)") if they believe the post has property \(p_{2}\), and else vote authentically (to hide their inauthentic activities). For auxiliaries, the _Downvote Booster_ \(\mathsf{B}_{\downarrow}\) 'inverts' \(\mathsf{B}_{\uparrow}\): \(\mathsf{B}_{\downarrow}\) demotes non-\(p_{2}\) posts by downvoting if they believe the post does not have \(p_{2}\), and else votes authentically, while the _Both Booster_ \(\mathsf{B}_{\updownarrow}\) combines the inauthentic behaviors of \(\mathsf{B}_{\uparrow}\) and \(\mathsf{B}_{\downarrow}\) by always voting according to \(p_{2}\), and never authentically. The vote of an agent \(a\) of type \(\mathsf{B}_{i}\), \(i\in\{\uparrow,\downarrow,\updownarrow\}\), in state \(s\) is \(\mathsf{B}_{i}(a,s)\), given by
\[\begin{array}{c|c|c}&B_{a}(p_{2})=1&B_{a}(p_{2})=-1\\ \hline\mathsf{B}_{\uparrow}(a,s)&1&\mathsf{A}(a,s)\\ \mathsf{B}_{\downarrow}(a,s)&\mathsf{A}(a,s)&-1\\ \mathsf{B}_{\updownarrow}(a,s)&1&-1\end{array}\]
The table also refers to the authentic agent type \(\mathsf{A}\) to make it visually explicit in which cases the Up- and Downvote Boosters behave authentically.
Distorters.Distorters seek to create noise among the votes by, on cue, voting against their beliefs about quality. They vote in a coordinated, but non-partisan fashion: triggered by \(p_{3}\), they vote contrary to their private beliefs about quality (\(p_{1}\)). As with \(p_{2}\), we think of \(p_{3}\) as encoding a property of the post distinct from quality, such as, e.g., tag, source or viewpoint. The D agents seek to water down the majority view and damper public impressions of consensus, thus exhibiting one form of _concern trolling_[1].
The main _Upvote Distorter_ \(\mathsf{D}_{\uparrow}\) votes authentically (to hide) except when they believe the post has \(p_{3}\) but not \(p_{1}\): then they distort by voting contrary to their belief about \(p_{1}\) (e.g., they upvote low quality posts of a given viewpoint to dampen consensus impressions). For auxiliaries, the _Downvote Distorter_ \(\mathsf{D}_{\downarrow}\) 'inverts' \(\mathsf{D}_{\uparrow}\): they vote authentically except when believing the post has both \(p_{1}\) and \(p_{3}\); then they distort by voting contrary to their beliefs about quality. The _Both Distorter_ \(\mathsf{D}_{\updownarrow}\) joins the inauthentic behaviors of \(\mathsf{D}_{\uparrow}\) and \(\mathsf{D}_{\downarrow}\): if they believe the post has \(p_{3}\), then they vote contrary to their \(p_{1}\) beliefs (e.g., to always sow distrust about content from a given source, or of a given viewpoint). The vote of an agent \(a\) of type \(\mathsf{D}_{i}\), \(i\in\{\uparrow,\downarrow,\updownarrow\}\), in state \(s\) is \(\mathsf{D}_{i}(a,s)\), given by
\begin{tabular}{c|c|c|c}
 & \(B_{a}(p_{3})=1\), and \(B_{a}(p_{1})=1\) & \(B_{a}(p_{3})=1\), and \(B_{a}(p_{1})=-1\) & \(B_{a}(p_{3})=-1\) \\
\hline
\(\mathsf{D}_{\uparrow}(a,s)\) & \(B_{a}(p_{1})\) & \(-1\cdot B_{a}(p_{1})\) & \(\mathsf{A}(a,s)\) \\
\(\mathsf{D}_{\downarrow}(a,s)\) & \(-1\cdot B_{a}(p_{1})\) & \(B_{a}(p_{1})\) & \(\mathsf{A}(a,s)\) \\
\(\mathsf{D}_{\updownarrow}(a,s)\) & \(-1\cdot B_{a}(p_{1})\) & \(-1\cdot B_{a}(p_{1})\) & \(\mathsf{A}(a,s)\) \\
\end{tabular}
Lone Wolfs.Lone wolfs also create noise among the votes by voting against their beliefs about quality. They do so exactly as the distorters, but without coordination through \(p_{3}\). We interpret these agents as individual users that--cued by a personal property--upvote contra their beliefs about quality (\(\mathsf{L}_{\uparrow}\), main, _Upvote Lone Wolf_), e.g., out of sympathy, downvote contra their beliefs about quality (\(\mathsf{L}_{\downarrow}\), aux., _Downvote Lone Wolf_), e.g., out of anger or spite, or both (\(\mathsf{L}_{\updownarrow}\), aux., _Both Lone Wolf_).
Instead of voting given shared property \(p_{3}\), a lone wolf, i.e., an agent \(a\) of type \(\mathsf{L}_{i}\), \(i\in\{\uparrow,\downarrow,\updownarrow\}\), votes on a personal property \(p_{a}\in\{-1,1\}\), believing \(B_{a}(p_{a})\in\{-1,1\}\) with \(P(B_{a}(p_{a})=p_{a})=1\). For all \(a,b\in A\), properties \(p_{a},p_{b}\) are sampled as \(p_{3}\), but if \(a\neq b\), \(p_{a}\) and \(p_{b}\) are sampled independently. The voting rule for each \(\mathsf{L}_{i}\), \(i\in\{\uparrow,\downarrow,\updownarrow\}\), is obtained by replacing \(p_{3}\) with \(p_{a}\) in the table for \(\mathsf{D}_{i}\).
### Majority Vote and Correctness
We are interested in how agent populations' votes fare with respect to _majority correctness_, both before (baseline experiments) and after we have applied our two jury selection procedures. A _jury_ is a set of agents \(J\subseteq A\). Let \((v_{a})_{a\in J}\), \(v_{a}\in\{1,-1\}\) be a _voting profile_ of \(J\) with respect to the post. The _majority vote_ of \((v_{a})_{a\in J}\) is whichever of \(1\) and \(-1\) that gets more votes, tie-breaking to \(1\), giving the post the benefit of doubt. I.e., the majority vote of \((v_{a})_{a\in J}\) is \(-1\) if \(\sum_{a\in J}v_{a}<0\), else \(1\). The majority vote is _correct_ if it equals the post's quality, \(p_{1}\in\{1,-1\}\). Finally, the _majority correctness score_ (mcs) of a jury over a set of voting rounds is the percentage of correct majority votes of the jury in those rounds. The mcs of a jury is a measure of its competence with respect to tracking quality, and is the jury performance indicator of interest in this paper.
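For concreteness, a direct Python transcription of these definitions (illustrative only; the paper's implementation is in R):

```python
import numpy as np

def majority_vote(votes):
    # votes: array over a jury in {-1, 1}; ties break to 1 (benefit of the doubt).
    return -1 if votes.sum() < 0 else 1

def mcs(vote_matrix, p1):
    # vote_matrix: (rounds, |J|) votes of a jury J; p1: (rounds,) true qualities.
    majorities = np.apply_along_axis(majority_vote, 1, vote_matrix)
    return 100.0 * float((majorities == p1).mean())
```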
### Parameters and Generated Dataset
Using R to implement the ABM,2 we chose three _noise level_ parameter combinations--low, mid, and high--for the sampling of properties.
Footnote 2: We implemented the ABM from the ground up to retain freedom in agent design and as the simplicity of the encoded behavior and generated data do not invoke advanced features of existing ABM simulation packages and programs, such as Netlogo (Wilensky, 1999), Laputa (Angere, 2010; Olsson, 2013), or Hashkat (Ryczko et al., 2017).
These noise levels were chosen to produce different voting patterns, and to introduce varying degrees of coordination and correlations among votes, in turn producing three levels of difficulty for vote-based agent classification. Quality (\(p_{1}\)) is fixed across levels, leaving authentic agents unaffected. Inauthentic agents perform less (coordinated) inauthentic activities in higher levels, as \(p_{2}\) and \(p_{3}\) decrease. They thus mimic authentic agents more (more noise), raising the difficulty of classification. The sampling of \(p_{2}\) is asymmetric to avoid mirrored results in low and mid noise for \(\mathsf{B}_{\downarrow}\) and \(\mathsf{B}_{\uparrow}\) given that booster agents solely rely on \(p_{2}\). We chose a symmetric setup for \(P(p_{3}=1)\) as distorters and lone wolfs' votes are not solely determined by \(p_{3}\), but influenced by the sampling of \(p_{1}\), too, hence making completely mirrored votes less likely. For each noise level, we performed \(100\) runs, each based on a random seed and with voting rounds \(r=1000\), producing a dataset with \(3\times 100,000\) (state, vote profile) pairs. Each was done for \(\mathcal{P}_{\mathsf{Full}}\), thus counting \(1900\) agents: \(1000\) authentic and \(100\) of each inauthentic type. Sec. 2.5 displays diverse population ratios that explore the effect of authentic agents in minority and majority on mcs. Throughout, results are based on and evaluated against a data subset with \(r=500\). As all runs and rounds are independent, choosing fewer or more voting rounds is without problem. Other values of \(r\) are mentioned explicitly when robustness checks are discussed.
### Baseline Majority Correctness Scores
To showcase varying populations' behaviors, we illustrate two sets of baseline mcs results in Figures 1 and 2.
Figure 1 shows \(7\) populations' mcss as a function of the number \(|\mathsf{A}|\) of authentic agents in the population. As expected from the Condorcet Jury Theorem, the mcs of authentic agents alone converges to 100%, with 25 agents sufficing. This is representative for all noise levels, as noise does not affect authentic agents. Figure 1 is filtered to rounds with \(p_{2}=p_{3}=\texttt{\texttt{1}}\), so \(\mathsf{B}_{\uparrow}\) and \(\mathsf{D}_{\uparrow}\) are 'actively inauthentic' (and both always upvote). Given this filter, the figure is representative for all noise levels for \(\mathsf{B}_{\uparrow}\) and \(\mathsf{D}_{\uparrow}\). The effect of \(\mathsf{L}_{\uparrow}\) is level specific (but unaffected by the filter).
We make three observations concerning the main, upvoting inauthentic agents \(\mathsf{B}_{\uparrow},\mathsf{D}_{\uparrow}\) and \(\mathsf{L}_{\uparrow}\) of Figure 1. **First**, the left-most part of Figure 1 shows populations with \(|\mathsf{A}|=1\), a very hospitable environment for inauthentic activity. Here, each of \(\mathsf{B}_{\uparrow},\mathsf{D}_{\uparrow}\) and \(\mathsf{L}_{\uparrow}\) exhibits an mcs of \(75\%\). This is an artifact of how their behavior interacts with the sampling frequency for \(p_{1}\). For \(\mathsf{B}_{\uparrow}\) and \(\mathsf{D}_{\uparrow}\), the mcs of \(75\%\) follows as Figure 1 is filtered for \(p_{2}=p_{3}=1\), and only contains rounds where both always upvote. As \(P(p_{1}=1)=75\%\), they are thus correct \(75\%\) of the time. Though not all \(\mathsf{L}_{\uparrow}\) always upvote in these rounds, they do so individually with a \(96.5\%\) chance (assuming average \(C_{a}(p_{1})=0.8\)). As a group, they thus sway the majority vote to \(1\) with high probability, again correct \(75\%\) of the time. **Second**, \(\mathsf{B}_{\uparrow},\mathsf{D}_{\uparrow}\) and \(\mathsf{L}_{\uparrow}\) each exhibit
their maximal lowering effect on the mcs while \(|\mathsf{A}|=100\). This is a motivating factor in focusing on populations with \(|\mathsf{A}|=100\), the mcss of which we return to in Table 2. **Third**, for \(|\mathsf{A}|>100\), \(\mathsf{B}_{\uparrow}\) and \(\mathsf{D}_{\uparrow}\) negatively influence the mcs identically, as both upvote in the shown rounds, with their effect declining from an mcs of \(75\%\) at \(|\mathsf{A}|=125\) to an mcs of \(100\%\) by \(|\mathsf{A}|=225\). For \(|\mathsf{A}|>100\), to form an incorrect majority, inauthentic agents must be 'aided' by authentic agents that happen to vote incorrectly. The probability that enough such exist to overcome the correctly voting authentic agents drops as \(|\mathsf{A}|\) grows. With \(|\mathsf{A}|\geq 225\), \(\mathsf{B}_{\uparrow}\) and \(\mathsf{D}_{\uparrow}\) are seen to have lost all effect. \(\mathsf{L}_{\uparrow}\) have a less robust effect, as they vote in an uncoordinated fashion, and are thus more quickly outnumbered by authentic agents' votes.
Finally, the effect of the \(900\) inauthentic agents jointly drops with higher noise levels, i.e., with decreased activity. In the high activity case (low noise), the 900 inauthentic agents seem 'overwhelmed' already by between 325 and 475 authentic agents. This is correct on the aggregate level, but \(900\) inauthentic agents do not equate to \(900\) inauthentic actions: given the filter, some types act authentically always (\(\mathsf{B}_{\downarrow}\)) or sometimes (\(\mathsf{D}_{\uparrow},\mathsf{D}_{\downarrow},\mathsf{L}_{\uparrow},\mathsf{L}_{\downarrow},\mathsf{L}_{\updownarrow}\)). Additionally, some types partially cancel each other (e.g., \(\mathsf{D}_{\uparrow}\) and \(\mathsf{L}_{\downarrow}\)) or even themselves (e.g., \(\mathsf{D}_{\updownarrow}\)) out.
Figure 2 shows mcs summary plots of all inauthentic agent types in isolation and jointly, as a function of \(|\mathsf{A}|\), not filtered for properties. As noise increases, the figure evinces how inauthentic agents' impact on mcs decreases. Note how in low noise, agent types \(\mathsf{D}_{\uparrow}\), \(\mathsf{D}_{\updownarrow}\), \(\mathsf{L}_{\downarrow}\), and \(\mathsf{L}_{\updownarrow}\) are more effective than \(\mathsf{B}_{i}\) for each \(i\in\{\uparrow,\downarrow,\updownarrow\}\) in lowering the mcs, as the former agent types directly counteract correct majority voting concerning quality (\(p_{1}\)). The picture flips in the high noise level given how \(p_{1}\), \(p_{2}\), and \(p_{3}\) are sampled (Sec. 2.4).
## 3 Classification
Our jury selection procedures (gmm and km jsps) classify the set of agents into two agent groups: authentic and inauthentic. Each jury selection procedure invokes a classifier method that decomposes the ABM data into singular values (SVD), applies a clustering strategy--either a Gaussian Mixture Model (gmm) or the \(k\)-means algorithm (km)--and labels agents using logistic regression on the quality property \(p_{1}\) of the post voted on.
We assume \(p_{1}\) known, as we know the general setting: the agents vote on quality. We do not assume knowledge of \(p_{2}\) and \(p_{3}\), or even of their existence. The input dataset consists of binary votes of \(n\) agents over a given number of voting rounds \(r\), where \(r>n\) is not a requirement of the machinery. Yet the more observations \(r\), the better we cluster. Data requirements are thus feasible, in contrast to the \(\chi^{2}\) test suggested by Galeazzi, Rendsvig, and Slavkovik (2019), which requires \(p_{1}\) known plus at least \(1\) observation for each of the \(2^{n}\) possible voting round outcomes.
For each ABM run, the classification analysis is performed on five resampled (with replacement) datasets with \(r=500\) and \(n\) either \(1000\) for \(\mathcal{P}_{\mathsf{A}11}\) or \(200\) for \(\mathcal{P}_{\mathsf{B}_{\uparrow}}\), \(\mathcal{P}_{\mathsf{D}_{\uparrow}}\), and \(\mathcal{P}_{\mathsf{L}_{\uparrow}}\). For each of the bootstrapped datasets, we calculate the Singular Value Decomposition (SVD) \(\mathbf{X}=\mathbf{U}\mathbf{D}\mathbf{V}^{T}\) of the \(n\times n\) sample correlation matrix \(\mathbf{X}\) of the vote data. For clustering, we consider the first \(q=2\) dimensions' eigenvectors, i.e., the first two columns of the \(n\times p\) orthogonal matrix \(\mathbf{U}\) where \(n=p\), weighted with the corresponding eigenvalue collected in the diagonal \(p\times p\) matrix \(\mathbf{D}\). Hence, we cluster on the \(q\) partial components \(\mathbf{U}_{q}\mathbf{D}_{q}\)(Hastie, Tibshirani, and Friedman, 2009). Figure 3 shows the scatterplots of \(\mathbf{U}_{q}\mathbf{D}_{q}\), illustrating more blurred clustering
environments as the noise level increases from low to high.
We contrast the probabilistic Gaussian Mixture Model (gmm) and the deterministic \(k\)-means (km) algorithm for clustering the components. The soft clustering gmm is more memory-intensive, while the hard clustering km algorithm is faster. We choose gmm and \(k\)-means as they are among the simplest, most well-known, and most efficiently implementable unsupervised clustering methods [12]. Both cluster the weighted eigenvectors into \(k\) groups, \(k=2,...,20\) (in testing, 20 proved sufficient as upper bound). In the gmm, \(k\) is chosen by maximizing the \(\log\)-likelihood according to the Bayesian Information Criterion (BIC). The BIC penalizes the number of parameters more heavily than Akaike's Information Criterion, aiming for a model fit with fewer parameters to avoid overfitting [13]. In the km algorithm, \(k\) is estimated with the gap statistic, which compares the change in the within-cluster dispersion with that under a reference null distribution [10].
Having clustered the data into \(k\) groups, the mean vote per voting round of those agents clustered together--i.e., the row sums of \(k\) subsets of the vote data, viz. \(k\) \(r\times 1\) vectors--are used in a logistic regression model with the two-level factor \(p_{1}\) as the response variable. Put differently, the \(k\) coefficients refer to the clusters' mean vote per voting round given the quality of posts. To select those clusters comprising inauthentic agents, we add the lasso penalty term \(\lambda\sum_{j=1}^{k}|\beta_{j}|\) over the \(k\) predictor variables (clusters) to the optimization, as implemented in the R package glmnet[14]. Lasso regularization was chosen over ridge regularization as the former shrinks coefficients exactly to \(0\) and thereby imposes sparseness, whereas the ridge penalty never fully removes variables. Coefficients shrunk to \(0\) when regressing on \(p_{1}\) accordingly do not play an important role in predicting quality and receive the label 'inauthentic'; coefficients _not_ shrunk to \(0\) receive the label 'authentic'. These labels are then forwarded to the agents found in each cluster.
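This labeling step might be sketched as follows, with sklearn's L1-penalized logistic regression standing in for glmnet; the fixed penalty strength `C` replaces glmnet's regularization path, which is our simplification.

```python
# Label clusters via lasso logistic regression of p1 on the clusters' mean
# vote per round; coefficients shrunk to zero mark 'inauthentic' clusters.
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_clusters(votes: np.ndarray, cluster_of: np.ndarray,
                   p1: np.ndarray, C: float = 0.1) -> np.ndarray:
    """votes: r x n; cluster_of: length-n cluster index per agent;
    p1: length-r binary post quality. Returns one label per cluster."""
    k = int(cluster_of.max()) + 1
    # r x k design matrix: mean vote per round within each (non-empty) cluster
    design = np.stack([votes[:, cluster_of == j].mean(axis=1) for j in range(k)],
                      axis=1)
    fit = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(design, p1)
    shrunk = np.isclose(fit.coef_.ravel(), 0.0)
    return np.where(shrunk, "inauthentic", "authentic")
```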
Note that logistic regression is applied in a non-standard way. In this paper, the goal of the logistic regression is _not_ to predict each vote per voting round into the categorical dependent variable \(p_{1}\), in contrast to classical approaches where the dependent variable describes the classes in which one is interested. We seek to classify each agent as inauthentic or authentic which we do via hard classification through shrinking components to \(0\) and an additional labeling step. This makes traditional classifier metrics like a receiver operating characteristic curve (ROC curve) and a corresponding area under the curve score (AUC score) inapplicable. Instead, Sec. 3.1 discusses false positive and false negative classification errors to transparently and separately assess the two desiderata vox populi and precaution.
Once the classification analysis is completed on all \(5\) bootstrapped datasets, each agent has been classified as either 'authentic' or 'inauthentic' \(5\) times. An agent is granted the overall 'authentic' label only if it received the 'authentic' label at least \(4\) out of \(5\) times; otherwise, the agent is overall classified as 'inauthentic'. The \(4/5\) classification threshold was fixed pragmatically to balance runtime efficiency and precaution against inauthentic influence. A simple majority would exhibit less precaution and more vox populi, while a \(\nicefrac{{19}}{{20}}\) classification threshold (95%) would heavily increase runtime. We discuss this modeling choice further in the final remarks (Sec. 5.2).
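The aggregation over the bootstrap runs then reduces to a vote count; a minimal sketch under our naming:

```python
# Overall label: 'authentic' only if labeled authentic in >= 4 of 5 runs.
import numpy as np

def overall_labels(run_labels: np.ndarray, threshold: int = 4) -> np.ndarray:
    """run_labels: 5 x n array of per-run labels ('authentic'/'inauthentic')."""
    counts = (run_labels == "authentic").sum(axis=0)
    return np.where(counts >= threshold, "authentic", "inauthentic")
```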
### Classification Results
In order to evaluate the gmm and km classifier methods, we first inspect the misclassification results for \(\mathcal{P}_{\texttt{A11}}\) for \(r=500\), second comment on selected robustness observations, and third examine classifier accuracy for the smaller sub-populations \(\mathcal{P}_{\mathsf{B}_{\uparrow}}\), \(\mathcal{P}_{\mathsf{D}_{\uparrow}}\), and \(\mathcal{P}_{\mathsf{L}_{\uparrow}}\) for \(r=500\).
First, in population \(\mathcal{P}_{\texttt{A11}}\), gmm classifies well in the low (mid) noise case, misclassifying only \(4\)% (\(11\)%) of authentic agents as inauthentic and \(3\)% (\(4\)%) of inauthentic agents as authentic (Table 1), exhibiting both vox populi and precaution. As expected, classifier accuracy decreases in the high noise case, given that inauthentic agents hide and mimic authentic behavior, i.e., often vote authentically and are accordingly difficult to detect. However, here the inauthentic agents' impact on majority correctness scores is limited (Figure 2) despite a \(35\)% false negative error. Indeed, as Figure 2 suggests, it is \(\texttt{B}_{\downarrow}\) and \(\texttt{B}_{\uparrow}\) agents that negatively affect the mcs to the largest extent in the high noise case, and both gmm and km identify these agent groups accurately as inauthentic (Figure 4). Moreover, the classification results in Figure 4 and Table 1 show how gmm outperforms km at all noise levels with regard to identifying inauthentic agents, exhibiting fewer false negative misclassifications. Thus, gmm overall clusters more precautiously than km.
Second, robustness checks given \(\mathcal{P}_{\texttt{A11}}\) show differences between gmm and km. Results based on fewer observations (\(r=250\) instead of \(r=500\)) affect false negative errors less for gmm, but notably for km. E.g., gmm still does not misclassify any booster or distorter agents, while km's false negative errors rise in low noise as follows (mean, with SD in parentheses):
\[\begin{array}{c|cc|cc|cc|cc|cc|cc} & \multicolumn{2}{c|}{\mathsf{B}_{\uparrow}} & \multicolumn{2}{c|}{\mathsf{B}_{\downarrow}} & \multicolumn{2}{c|}{\mathsf{B}_{\updownarrow}} & \multicolumn{2}{c|}{\mathsf{D}_{\uparrow}} & \multicolumn{2}{c|}{\mathsf{D}_{\downarrow}} & \multicolumn{2}{c}{\mathsf{D}_{\updownarrow}}\\ r & 500 & 250 & 500 & 250 & 500 & 250 & 500 & 250 & 500 & 250 & 500 & 250\\ \hline \text{km} & .44\,(.41) & .56\,(.43) & .39\,(.43) & .52\,(.5) & .37\,(.5) & .51\,(.5) & .54\,(.44) & .68\,(.4) & .39\,(.5) & .52\,(.5) & .24\,(.36) & .34\,(.41) \end{array}\]
with similar trends observable in mid and high noise. Based on more observations (\(r=750\) or \(r=1000\) instead of
Figure 4: gmm (blue) and km (orange) mean misclassification and standard deviation in \(\mathcal{P}_{\texttt{A11}}\) in low (l), mid (m), and high (h) noise.
\(r=500\)), both gmm and km classify all inauthentic agent types but \(\mathtt{L}_{\uparrow}\) more precautiously in difficult high-noise environments, with misclassification in high noise improving accordingly for the main inauthentic agent types.
Jury Results. Table 2 summarizes the gmm and km jsps' mean majority correctness scores and SD for populations \(\mathcal{P}_{\mathsf{B}_{\uparrow}}\), \(\mathcal{P}_{\mathsf{D}_{\uparrow}}\), \(\mathcal{P}_{\mathsf{L}_{\uparrow}}\), and \(\mathcal{P}_{\texttt{A11}}\) in key conditions. In \(\mathcal{P}_{\mathsf{B}_{\uparrow}}\), the km and gmm jsps result in mcss that clearly show how removing agents classified as inauthentic from the baseline jury suffices to yield perfect mcss, despite 13% misclassification among inauthentic agents by both km and gmm (Table 1). Similar observations hold for \(\mathcal{P}_{\mathsf{D}_{\uparrow}}\), where the gmm jsp achieves the maximum mcs. The setback in the km high noise case is explained by difficulties in distinguishing authentic from inauthentic agents: the non-precautious misclassification of inauthentic agents as authentic prevents the jury from achieving a higher mcs when the inauthentic agents are activated. For the gmm and km jsps in \(\mathcal{P}_{\mathsf{L}_{\uparrow}}\), we might expect lower mcss given rather substantial misclassification numbers in the high noise case (Table 1). Yet, we observe perfect mcss, explained by the non-coordinated way that \(\mathtt{L}_{\uparrow}\) act inauthentically. Hence, difficulties classifying this subgroup for both classifiers are mitigated by its limited effect on lowering mcss. Note how \(\mathcal{P}_{\mathsf{L}_{\uparrow}}\) is unaffected by the filter in Table 1: \(\mathtt{L}_{\uparrow}\) agents act on their individual beliefs about quality, i.e., they act uncoordinatedly on their personal property, which cannot be filtered for per voting round without changing the population size.
Assessing the jsps given \(\mathcal{P}_{\texttt{A11}}\), the mcss achieved through the gmm method strictly dominate those from km in the low and mid noise cases, both in terms of mean value and SD. Moreover, the gmm jsp strictly outperforms baseline juries in all noise cases when looking at average and best juries. Only in the high noise, worst case do we observe that neither the gmm nor the km jsp outperforms the baseline jury.
## 5 Concluding Remarks
Influence or information operations (IOs) such as coordinated inauthentic behavior (CIB), e.g. performed by attention hacking bots, shape public opinion by elevating or suppressing topics through coordinated up- or downvoting of social media posts, mimicking authentic behavior to avoid detection and thereby nullifying the reliability of online voting judgments. To restore wisdom-of-crowds effects, this paper designed two accurate and feasible _jury selection procedures_ (jsps) that discard agents classified as inauthentic from the voting jury.
Comparing the gmm and km jsps, the main difference is accuracy: The gmm jsp detects more inauthentic agents, exhibiting smaller false negative errors and hence more precaution. Both jsps select juries with vastly increased _majority correctness scores_ (mcss), with preponderantly better scores for the gmm jsp. Overall, the application of either almost fully restores wisdom-of-crowds effects, despite the presence of inauthentic agents. In the low and mid noise cases, inauthentic agents strongly affect the baseline mcss negatively, but both jsps successfully eliminate this effect. Only in populations with a high degree of hiding (i.e., high noise, where inauthentic agents act mainly authentically), the jsps do not significantly increase mcss. However, in these cases the inauthentic agents also exhibit negligible negative effects on mcss.
The latter highlights a trade-off for inauthentic attention hacking behavior: attention hackers must balance their accounts' activity to, on the one hand, hide their true identity by acting authentically, and, on the other, act in a coordinated manner to sway the majority vote. We believe this may be exploited in designing attention hack resistant social media vote systems. Employing jsps means inauthentic actors must hide more often, raising the cost of influence for the attention hackers that handle them. Further, jsps could be combined with a user reputation system that only publicly displays a user's vote if the user has logged enough (ignored) votes. Beyond raising bot startup costs, this may provide early data for jsps.
We round off with a discussion of ethical considerations, model assumptions, and data collection.
### Ethical Considerations
Any suppression of information in public fora raises ethical concerns about censorship. The suppression of reactions to
social media posts is no different. Generally, we find that the suppression of coordinated inauthentic behavior as used by attention hackers is defensible, justified by the aim to combat misinformation online. We omit further discussion of this point. However, in applying automated techniques based on classification, there is always a risk that misclassification occurs. If the classification is used for censorship--as is the case here--misclassification may then lead to unrightful censorship.
The jsps risk unjustified censorship on two points: the unrightful censorship of individuals due to behavioral correlation with inauthentic agents, and the unrightful censorship of groups due to an authentic disagreement with the notion of quality assumed by jsp deployers.
Concerning individuals, we designed the jsps with a focus on the two stated desiderata _vox populi_ (to minimize false positive errors, i.e., to preserve as many authentic agents as possible) and _precaution_ (to minimize false negative errors, i.e., to eliminate as many inauthentic agents as possible). Vox populi implies a desire not to unrightfully censor individuals, but is opposed by precaution: the most precautious model censors all, while the model that preserves most voices censors none. Given our ABM and its parameters, employing _ends-justify-the-means_ reasoning, and taking the correct evaluation of posts' quality to be the primary end, we find it preferable to compromise vox populi rather than deprioritize precaution: as illustrated in Figures 1 and 2, deprioritizing precaution quickly threatens the wisdom-of-crowds effect, as a few inauthentic agents in the jury drastically lower the majority correctness score, while compromising vox populi by allowing small fractions of authentic agents to be labelled as inauthentic is--with respect to mcs--absorbed by the wisdom of crowds exhibited by even a small jury of only authentic agents.
In the classification, the balance between vox populi and precaution is controlled by the classification threshold. As classification threshold, we precautiously chose that agents should be labelled authentic \(4\) of \(5\) times to be classified as authentic. This choice did not cause severe collateral damage to vox populi. While we deem especially the gmm jsp a precautious method, it still exhibits low (\(<.11\)) false positive misclassification errors throughout, except for \(\mathcal{P}_{\texttt{A11}}\) in high noise. The km jsp similarly shows low false positive errors (\(<.1\)) except for \(\mathcal{P}_{\texttt{A11}}\) in low and high noise (cf. Table 1). The approach remains flexible: vox populi can be emphasized further by lowering the \(4/5\) classification threshold.
Concerning group censorship, it is relevant that our approach assumes an agreed-upon notion of _truth about the quality of posts_ for which a commonly acknowledged arbiter exists. This is a fundamental premise of our method: if no such notion exists, majority correctness scores lose their meaning and the assumptions of the classifiers are unmet. Such a notion of quality is of paramount importance in relation to fake news, where, arguably, "objective" quality exists, embodied e.g. by the Principles of Journalism. However, the criteria for what constitutes quality may lead to the marginalization of groups. E.g., sympathizers of Alex Jones and InfoWars might be marginalized by censorship if quality is equated with adhering to the Principles of Journalism, or sympathizers of the black feminist Combahee River Collective may be marginalized if quality is equated with adhering to ideals of the National Association for the Advancement of Colored People of the 1970s. Therefore, the notion of quality used in applications should be carefully defined, and preferably made open to the public, e.g. by inclusion in community standards or terms and conditions of social media platforms.
Due to the risk of unrightful censorship, we would always suggest that users are made aware of censorship decisions that concern them and are given the option to appeal. This, of course, also allows accounts used in IOs to appeal, but appeal adds a non-trivial maintenance cost to e.g. large bot collectives.
### Assumptions of the ABM and Classification
While our contribution hopefully serves as a proof of concept for jury selection procedures as a tool to counter reaction-oriented CIB-based IOs, the simulated environment is not in a one-to-one correspondence with the plethora of environments found on social media platforms. We discuss how modeling choices relate to social media platforms and how assumptions may be relaxed, first concerning the ABM, then the classification.
On social media platforms, it is likely that human users at times vote inauthentically to a low degree that should not be penalized by censorship. Such inauthentic voting violates the ABM's assumptions about authentic agents, who vote given only their competence-based beliefs. Our classification results indicate, however, that the authenticity assumption may be relaxed. _Lone wolves_ in the high noise case behave _almost_ authentically, and may be interpreted as generally, but not fully, authentic, uncoordinated users. These agents are further--by the gmm jsp--often _misclassified_ as authentic in high noise (cf. L sections of Figure 4 and Table 1), but correctly classified in low and mid noise, which indicates that the gmm method may be tuned to tolerate a degree of uncoordinated, inauthentic behavior.
Further, on social media, vote participation is not complete: most users do not react to most posts. For simplicity, we have not included abstaining as an option in the ABM, but all steps including mcs calculation and jury selection would be unaffected. As we return to below, the classification, too, can accommodate less complete vote participation.
Concerning classification, disciplines not directly related to social media applications and misinformation research show how dimensionality reduction and SVD procedures can be applied to empirical data to disclose coordinated voting groups and patterns: US Congress roll call votes have been clustered based on scores similar to the weighted eigenvectors used in this paper (Yang et al., 2020; Porter et al., 2005; Sirovich, 2003; Poole, 2000). SVD Scatterplots of votes as suggested by Porter et al. (2005), for instance, provide proxies for party stance. While Yang et al. (2020) explore roll call vote data only \(1\)-dimensionally, we expand the application and cluster on \(2\) partial components; both their and our applications can be generalized to more dimensions to increase precision in less exposing vote environments. Moving towards social media applications, this can
become relevant for votes with not only binary but several options from which to choose, such as vote data reflecting Facebook's \(6\) reactions.
We rely on unsupervised methods that disclose coordination that goes unnoticed by supervised methods taking only features of individual accounts into consideration [2, 1, 10, 11]. We add a single supervised learning element--logistic regression--to apply labels to agent clusters found by the unsupervised steps. In the logistic regression, we have used that authentic votes correlate with post quality (possibly allowing for noise in observing quality). Other subjective assumptions could be used to steer the labeling while producing equally efficient jury selection procedures.
Besides limiting supervision, the input data needs of the gmm and km jury selection procedures are vastly more feasible than those of Galeazzi, Rendsvig, and Slavkovik (2019): we rely on \(500\) observations, where the \(\chi^{2}\) test would require at least \(2^{1000}\) for our population \(\mathcal{P}_{\texttt{A11}}\). In empirical application, obtaining \(500\) votes of one user group may still be a challenge. A mitigating factor is that the proposed jsps can accommodate missing data, and, for validation, only the inauthentic agents need to be fixed over several voting rounds, while the authentic agents may vary, as these vote independently. Thus, we can lift the assumption that all agents are always presented with, and vote on, every post.
### Empirical Validation and The Release of Reactions Data
Empirical data--in contrast to simulated data--to further validate jury selection procedures remains difficult to obtain [2, 10, 11]. Among the platforms that provide APIs for academic purposes, only Twitter releases user-IDs of (public) profiles that have clicked the like-button. However, while Twitter provides generous academic access to historical data for researchers, the platform does not allow automatically scraping _comprehensive_ lists of users that have liked a post, but only releases the user-IDs of the 100 _most recent_ liking users of any single post. Additionally, lists of liking users may be requested at most 75 times per 15 minutes. For small-scale Twitter environments where posts receive few likes, these restrictions may be balanced by using a suitably timed algorithm. However, for large political hashtags like #MakeAmericaGreatAgain or #Brexit, where CIB-based IOs may be feared to be in play, the current data restrictions make it practically impossible to obtain a complete picture of liking behavior.
The proof of concept for jsps in this paper provides a direct use case for reactions data in the fight against online misinformation. The data is necessary to evaluate, tweak, and deploy the suggested methods. The paper thus provides a direct argument for a more comprehensive release of and access to reactions data for researchers, e.g. under full anonymization and non-disclosure agreements or via open API access to publicly available data.
|
2302.03191 | Enhancement of dilepton production rate and electric conductivity around
QCD critical point | We investigate whether the soft mode that becomes massless at the QCD
critical point (CP) causes an enhancement of the dilepton production rate (DPR)
and the electric conductivity around the CP through the modification of the
photon self-energy. The modification is described by the so-called
Aslamazov-Larkin, Maki-Thompson and density of states terms, which have been
taken into account in our previous study on the DPR near the
color-superconducting phase transition, with a replacement of the diquark modes
with the soft mode of the QCD CP. We show that the coupling of photons with the
soft modes brings about an enhancement of the DPR in the low invariant-mass
region and the conductivity near the CP, which would be observable in the
relativistic heavy-ion collisions. | Toru Nishimura, Masakiyo Kitazawa, Teiji Kunihiro | 2023-02-07T01:43:40Z | http://arxiv.org/abs/2302.03191v1 | # Enhancement of dilepton production rate and electric conductivity around QCD critical point
###### Abstract
We investigate whether the soft mode that becomes massless at the QCD critical point (CP) causes an enhancement of the dilepton production rate (DPR) and the electric conductivity around the CP through the modification of the photon self-energy. The modification is described by the so-called Aslamazov-Larkin, Maki-Thompson and density of states terms, which have been taken into account in our previous study on the DPR near the color-superconducting phase transition, with a replacement of the diquark modes with the soft mode of the QCD CP. We show that the coupling of photons with the soft modes brings about an enhancement of the DPR in the low invariant-mass region and the conductivity near the CP, which would be observable in the relativistic heavy-ion collisions.
PACS numbers: 12.38.-b
Preprint numbers: YITP-23-12; J-PARC-TH-0283
## 1 Introduction
Exploring high-density matter at vanishing and finite temperature in Quantum Chromodynamics (QCD) is one of the most challenging and intriguing subjects in current nuclear physics [1]. Among various interesting subjects, the possible existence of a critical point, the QCD CP, on the QCD phase diagram has attracted much attention. The phase transition at the QCD CP is of second order in the same universality class as the \(Z_{2}\) Ising model, and large fluctuations of various quantities coupled to the order parameter are expected to occur [2; 3]. A number of proposals have been made for observational identification of the QCD CP in relativistic heavy-ion collision (HIC) experiments [1; 2; 3; 4; 5; 6; 7; 8], such as the event-by-event fluctuations of conserved charges and especially their non-Gaussianity, large fluctuations of the low-momentum particle distributions, anomalous fluid dynamical phenomena with diverging transport coefficients, and so on. Active experimental analyses are ongoing at the beam-energy scan program at RHIC, NA61/SHINE, and HADES [9]. The future experiments at FAIR and J-PARC-HI will further pursue them [10; 11].
In this article, we investigate possible signals of the QCD CP that would be observed in these experiments on the basis of the fact that the second-order nature of the QCD CP implies the existence of a low-energy mode with a vanishing mass at the CP. Such a slow mode is called the soft mode of the phase transition. The soft mode of the QCD CP is fluctuations in the scalar channel but _not_ a sigma _mesonic_ mode. Instead, it is the particle-hole (p-h) collective excitation with a mixing of baryon number density and energy density that has a spectral support in the space-like region [12; 13; 14; 15].
The existence of the soft mode should affect various observables near the CP. In this article, as examples of such observables, we explore how the dilepton production rate (DPR) and the electric conductivity are affected by the soft mode of the QCD CP. We have shown in a previous work Ref. [16] that the DPR can be greatly enhanced in the low invariant-mass region near the phase boundary of the two-flavor color superconductivity (2SC) due to the diquark soft mode [17; 18; 19; 20]; in Ref. [16], the enhancement of the DPR originates from a modification of the photon self-energy by the Aslamazov-Larkin (AL) [21], Makin-Thompson [22; 23] and density of states (DOS) terms [24] incorporating the diquark soft modes. A surprise in Ref. [16] was that although the spectral support of the diquark soft mode is concentrated in the _space-like_ region, their scattering process described by the AL term does cause the enhancement of the DPR in the _time-like_ region. We thus expect that such an enhancement of these observables may occur by a similar mechanism due to the soft mode associated with the QCD CP; we consider the AL, MT and DOS terms with
the diquark soft modes being replaced by the soft mode of the QCD CP in the 2-flavor Nambu-Jona-Lasinio (NJL) model.
A notable feature of the soft mode of the QCD CP is that its propagator is not analytic at the origin, unlike the diquark modes investigated in Ref. [16]. As a result, a simple time-dependent Ginzburg-Landau (TDGL) approximation is not applicable to describe the soft mode of the QCD CP. We thus introduce an approximation scheme that properly takes care of these specific analytic properties. The vertex functions in the AL, MT and DOS terms are then constructed so as to be consistent with this treatment in light of gauge invariance. In this way, our photon self-energy is constructed to satisfy the Ward-Takahashi (WT) identity.
Using the photon self-energy thus constructed, we calculate the DPR and the electric conductivity near the QCD CP. We show that the DPR at low invariant-mass region, as well as the electric conductivity, is greatly enhanced around the QCD CP due to the soft modes. We also present some issues which are relevant when pursuing an experimental measurement of these signals in the HIC experiments.
This paper is organized as follows. In the next section, after introducing the model and its phase diagram in the mean-field approximation, we discuss properties of the soft mode of the QCD CP. In Sec. 3, we calculate the photon self-energy described by the AL, MT and DOS terms. In Sec. 4, we discuss the numerical results on the DPR and the electric conductivity near the QCD CP. The final section will be devoted to a short summary.
## 2 Phase diagram and soft modes of QCD CP
To investigate the DPR and electric conductivity near the QCD CP, we adopt the following 2-flavor NJL model [25]
\[\mathcal{L}=\bar{\psi}(i\not{\partial}-m)\psi+G_{S}[(\bar{\psi}\psi)^{2}+(\bar{\psi}i\gamma_{5}\vec{\tau}\psi)^{2}], \tag{1}\]
where \(\psi\) is the quark field and \(\vec{\tau}=(\tau_{1},\tau_{2},\tau_{3})\) is the Pauli matrices for the flavor \(SU(2)_{f}\). The current quark mass \(m=5.5\) MeV, the scalar coupling constant \(G_{S}=5.50\) GeV\({}^{-2}\) and the three-momentum cutoff \(\Lambda=631\) MeV are determined so as to reproduce the pion mass \(m_{\pi}=138\) MeV and the pion decay constant \(f_{\pi}=93\) MeV [25].
In Fig. 1, we show the phase diagram as a function of the temperature \(T\) and the quark chemical potential \(\mu\) in the mean-field approximation with the mean field \(\langle\bar{\psi}\psi\rangle\). The solid line shows the first-order critical line, and the circle marker denotes the QCD CP, which is located at \((T_{c},\,\mu_{c})\simeq(46.757,\,329.30)\) MeV.
The soft mode of the QCD CP is described by the collective excitations of the scalar field \(\bar{\psi}\psi\)[12, 26]. The imaginary-time Green's function of this channel in the random-phase
approximation (RPA) is given by [25]
\[\tilde{\Xi}(k) =\frac{1}{G_{S}^{-1}+\mathcal{Q}(k)}, \tag{2}\] \[\mathcal{Q}(k) =2N_{f}N_{c}\int_{p}\mathrm{Tr}[\mathcal{G}_{0}(p-k)\mathcal{G}_{ 0}(p)], \tag{3}\]
where \(N_{f}=2\) and \(N_{c}=3\) are the numbers of flavor and color, \(\mathcal{Q}(k)=\mathcal{Q}(\mathbf{k},i\nu_{n})\) is the one-loop quark-anti-quark correlation function, \(\mathcal{G}_{0}(p)=\mathcal{G}_{0}(\mathbf{p},i\omega_{m})=1/[(i\omega_{m}+\mu) \gamma_{0}-\mathbf{p}\cdot\mathbf{\gamma}+M]\) is the free-quark propagator, \(\omega_{m}\) (\(\nu_{n}\)) is the Matsubara frequency for fermions (bosons), \(M=m-2G_{S}\langle\bar{\psi}\psi\rangle\) is the constituent quark mass and \(\mathrm{Tr}\) is the trace over the Dirac indices. Throughout this paper, we denote the momentum integration and Matsubara-frequency summation as \(\int_{p}=T\sum_{m}\int d^{3}\mathbf{p}/(2\pi)^{3}\). The Green's function \(\tilde{\Xi}(k)\) is represented by a sum of repeated bubble diagrams composed of the one-loop correlation function (3).
The retarded functions \(\Xi^{R}(\mathbf{k},\omega)\) and \(Q^{R}(\mathbf{k},\omega)\) corresponding to \(\tilde{\Xi}(k)\) and \(\mathcal{Q}(k)\) are obtained by an analytic continuation \(i\nu_{n}\rightarrow\omega+i\eta\). The analytic formula of the imaginary
Figure 1: Phase diagram calculated by the mean-field approximation in the 2-flavor NJL model (1). The solid line shows the first-order phase transition. The QCD CP is represented by the circle marker, which is located at \((T_{c},\mu_{c})\simeq(46.757,329.30)\) MeV.
part of \(Q^{R}(\mathbf{k},\omega)\) is calculated to be
\[\mathrm{Im}Q^{R}(\mathbf{k},\omega)= -\frac{N_{f}N_{c}T}{4\pi}\frac{\omega^{2}-\mathbf{k}^{2}-4M^{2}}{|\mathbf{k}|}\] \[\times\Bigg{\{}\theta\big{(}|\omega|-\sqrt{\mathbf{k}^{2}+4M^{2}}\big{)}F\big{(}\omega,\bar{k}(|\mathbf{k}|,\omega)\big{)}\] \[\quad+\theta\big{(}\bar{k}(|\mathbf{k}|,\bar{\Lambda})-|\omega|\big{)}\Big{[}F\big{(}\omega,\bar{k}(|\mathbf{k}|,\omega)\big{)}-F\big{(}\omega,\bar{\Lambda}\big{)}\Big{]}\Bigg{\}}, \tag{4}\] \[F(\omega,x)= \sum_{s,t=\pm}s\,\log\cosh\frac{\omega+sx-2t\mu}{4T},\] (5) \[\bar{k}(|\mathbf{k}|,\omega)= |\mathbf{k}|\sqrt{1-4M^{2}/(\omega^{2}-\mathbf{k}^{2})},\quad\bar{\Lambda}=2\sqrt{\Lambda^{2}+M^{2}}, \tag{6}\]
where \(\bar{k}(|\mathbf{k}|,\bar{\Lambda})<|\mathbf{k}|\). Then the real part is given by the Kramers-Kronig relation
\[\mathrm{Re}Q^{R}(\mathbf{k},\omega)=\frac{1}{\pi}P\int_{-\bar{\Lambda}}^{\bar{ \Lambda}}d\omega^{\prime}\frac{\mathrm{Im}Q^{R}(\mathbf{k},\omega^{\prime})}{ \omega^{\prime}-\omega}, \tag{7}\]
where \(P\) denotes the principal value.
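In practice, Eq. (7) can be evaluated on a uniform frequency grid; the following Python sketch (our discretization, not from the paper) realizes the principal value by dropping the singular grid point, assuming \(\mathrm{Im}Q^{R}\) has been tabulated at fixed momentum.

```python
# Kramers-Kronig transform of Im Q^R tabulated on a uniform omega grid.
import numpy as np

def kramers_kronig(omega: np.ndarray, im_q: np.ndarray) -> np.ndarray:
    """Re Q^R(w) = (1/pi) P \int dw' Im Q^R(w')/(w' - w), trapezoidal rule."""
    d = omega[1] - omega[0]                 # uniform grid spacing
    re_q = np.empty_like(im_q)
    for i in range(len(omega)):
        diff = omega - omega[i]
        diff[i] = np.inf                    # principal value: skip w' = w
        re_q[i] = np.trapz(im_q / diff, dx=d) / np.pi
    return re_q
```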
The first and second terms in the curly bracket in Eq. (4) take nonzero values in the time- and space-like regions, respectively. \(\Xi^{R}(\mathbf{k},\omega)\) has poles that physically represent collective modes in the time- and space-like regions, respectively. The former corresponds to the sigma meson composed of quark-anti-quark excitations, while the latter to that composed of p-h excitations due to the existence of a Fermi sphere.
From Eq. (4) one also finds that \(Q^{R}(\mathbf{k},\omega)\) is not analytic at the origin \((|\mathbf{k}|,\omega)=(0,0)\). In fact, the limiting value of \(\mathrm{Im}Q^{R}(\mathbf{k},\omega)\) at the origin along the line \(\omega=a|\mathbf{k}|\) is given by
\[\lim_{|\mathbf{k}|\to 0}\mathrm{Im}Q^{R}(\mathbf{k},a|\mathbf{k}|)=a\frac{N_{f}N_{c}M^{2}}{2 \pi}\sum_{t=\pm}\biggl{\{}\tanh\!\frac{\lambda_{0}-2t\mu}{4T}-\tanh\!\frac{ \bar{\Lambda}-2t\mu}{4T}\biggr{\}}\theta\biggl{(}\frac{2\Lambda}{\bar{\Lambda }}-|a|\biggr{)}, \tag{8}\]
with \(\lambda_{0}=\sqrt{4M^{2}/(1-a^{2})}\). Equation (8) is nonzero for \(0<|a|<2\Lambda/\bar{\Lambda}<1\), in which the value depends on \(a\).
At the QCD CP, \(\Xi^{R}(\mathbf{k},\omega)\) satisfies
\[\Xi^{R^{-1}}(\mathbf{0},0)\big{|}_{T=T_{c},\ \mu=\mu_{c}}=0, \tag{9}\]
in accordance with the nature of the second-order phase transition at the CP. In fact, Eq. (9), known as the Thouless criterion [27], is derived from the stationary condition of the effective potential at the CP. The Thouless criterion shows the existence of a collective mode that becomes exactly massless in \(\Xi^{R}(\mathbf{k},\omega)\). This mode is called the soft mode associated with the
CP. It is known that the soft mode of the QCD CP is a p-h mode in the space-like region, while the mesonic mode in the time-like region does not become massless even at the QCD CP [12, 14, 15, 26].
In the next section, we investigate the effect of the soft mode on the photon self-energy in the low energy and momentum region. For this analysis we introduce an approximate formula of \(\Xi^{R}(\mathbf{k},\omega)\) that is valid near the QCD CP in the following way: First, since the spectral function of the soft mode has the support in the space-like region, we focus on the strength in the space-like region only. The mesonic mode in the time-like region is neglected since its contribution to the photon self-energy at low energy-momentum is suppressed because of the dispersion relation \(\omega>\sqrt{\mathbf{k}^{2}+4M^{2}}\), where \(M\simeq 185\) MeV around the CP. Second, we approximate the denominator of \(\Xi^{R}(\mathbf{k},\omega)\) in the space-like region by expanding it with respect to \(\omega\) and picking up the first two terms as
\[\Xi^{R}(\mathbf{k},\omega)=\frac{1}{G_{S}^{-1}+Q^{R}(\mathbf{k},\omega)}\sim\frac{1}{ A(\mathbf{k})+C(\mathbf{k})\omega}, \tag{10}\]
where \(A(\mathbf{k})=G_{S}^{-1}+Q^{R}(\mathbf{k},0)\) and \(C(\mathbf{k})=\partial Q^{R}(\mathbf{k},\omega)/\partial\omega\mid_{\omega=0}\), which are found to be real and pure-imaginary numbers, respectively, from Eqs. (4) and (7). We then write the imaginary part of Eq. (10) as
\[\text{Im}\Xi^{R}(\mathbf{k},\omega)\sim\text{Im}\frac{1}{A(\mathbf{k})+C(\mathbf{k}) \omega}\ \theta\big{(}\bar{k}(|\mathbf{k}|,\bar{\Lambda})-|\omega|\big{)}, \tag{11}\]
where we have used the fact that \(\text{Im}\Xi^{R}(\mathbf{k},\omega)\) takes a nonzero value for \(|\omega|<\bar{k}(|\mathbf{k}|,\bar{\Lambda})\) in the space-like region, as seen from Eq. (4). In the next section, we use the forms of \(A(\mathbf{k})\) and \(C(\mathbf{k})\) determined in the NJL model for given \(T\) and \(\mu\). It is shown that \(C(\mathbf{k})\) behaves as \(1/|\mathbf{k}|\) and diverges in the limit \(|\mathbf{k}|\to 0\), corresponding to the non-analytic nature of \(\Xi^{R}(\mathbf{k},\omega)\) at the origin. Because of this behavior, a simple TDGL approximation [24] that expands \([\Xi^{R}(\mathbf{k},\omega)]^{-1}\) with respect to \(\omega\) and \(|\mathbf{k}|\) is not applicable in the present case. Our approximation (11) remains valid even in this case since the \(\mathbf{k}\) dependence is treated exactly. We also note that the denominator of Eq. (11) does not diverge at the origin since the term \(C(\mathbf{k})\omega\) is suppressed by the condition \(|\omega|<|\mathbf{k}|\) for \(\mathbf{k}\to 0\).
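Numerically, Eq. (11) can be sketched as below, assuming \(\mathrm{Re}Q^{R}\) and \(\mathrm{Im}Q^{R}\) are supplied as callables (e.g. from Eq. (4) together with the Kramers-Kronig sketch above); the interface names `re_q`, `im_q`, `g_s`, `kbar` and the finite-difference step are ours. The finite difference exploits that \(C(\mathbf{k})\) is pure imaginary, so only the slope of \(\mathrm{Im}Q^{R}\) at \(\omega=0\) enters.

```python
# Pole approximation Im Xi^R ~ Im[1/(A(k) + C(k) w)] on the space-like support.
import numpy as np

def im_xi_approx(kmag, omega, re_q, im_q, g_s, kbar, eps=1e-4):
    """re_q(k, w), im_q(k, w): Re/Im Q^R; g_s: NJL coupling; kbar(k): support edge.
    A(k) = 1/g_s + Re Q^R(k, 0) is real; C(k) = dQ^R/dw|_0 is pure imaginary."""
    a = 1.0 / g_s + re_q(kmag, 0.0)
    c = 1j * (im_q(kmag, eps) - im_q(kmag, -eps)) / (2.0 * eps)
    val = (1.0 / (a + c * omega)).imag
    return np.where(np.abs(omega) < kbar(kmag), val, 0.0)  # Eq. (11) support
```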
To demonstrate the validity of Eq. (11), we show in Fig. 2 the contour maps of the dynamical structure factor given by
\[S(\mathbf{k},\omega)=\frac{1}{\pi}\frac{1}{1-e^{-\omega/T}}\text{Im}\Xi^{R}(\mathbf{ k},\omega), \tag{12}\]
in the space-like region at and slightly away from the CP (\(T=0.9T_{c},1.0T_{c}\) and \(1.1T_{c}\) at \(\mu=\mu_{c}\)). In each panel, the left and right subpanels are the results of the RPA, Eq. (2)
with Eq. (4), and the approximation (11), respectively. One finds that the former is well reproduced by the latter especially at the low energy region at which the soft mode has a significant strength.
Although the spectral properties of the soft mode of the QCD CP look quite similar to those of the diquark soft mode [16], the present \(\Xi^{R}(\mathbf{k},\omega)\) is not analytic at the origin \((\left|\mathbf{k}\right|,\omega)=(0,0)\) and has a discontinuity at the light cone, in contrast to that of the diquark mode; the discontinuity of the diquark propagator coming from the light cone is located at \(\left|\omega+2\mu\right|=\left|\mathbf{k}\right|\), so that the diquark propagator is analytic at the origin [16, 20]. This difference requires some extra caution in the following analysis for the present case.
## 3 Dilepton production rate and electric conductivity
In this section, we shall calculate the dilepton production rate (DPR) and the electric conductivity assuming that the system is in the vicinity of the QCD CP. In this case, these observables would be significantly modified by the soft modes. Their effects are incorporated through the calculation of the photon self-energy by taking them into account. We perform this analysis in a parallel way to the analysis in Ref. [16] that investigated the diquark soft modes. Once the retarded photon self-energy \(\Pi^{R\mu\nu}(\mathbf{k},\omega)\) is obtained, the DPR is calculated to be
\[\frac{d^{4}\Gamma}{d^{4}k}(\mathbf{k},\omega)=-\frac{\alpha}{12\pi^{4}}\frac{1}{k ^{2}}\frac{1}{e^{\omega/T}-1}g_{\mu\nu}\mathrm{Im}\Pi^{R\mu\nu}(\mathbf{k},\omega), \tag{13}\]
Figure 2: Color maps of the dynamical structure factor \(S(\mathbf{k},\omega)\) in the space-like region for \(T=0.9T_{c},1.0T_{c}\) and \(1.1T_{c}\), respectively. The solid (white) lines represent the light cone. The left subpanel for each \(T\) is the result computed by the RPA (2) and (3), while the right one shows the approximate formula (11).
with the fine structure constant \(\alpha\) and the Minkowski metric \(g_{\mu\nu}\). The electric conductivity \(\sigma\) is also obtained as [28]
\[\sigma=\frac{1}{3}\lim_{\omega\to 0}\frac{1}{\omega}\sum_{i=1,2,3}\text{Im} \Pi^{Rii}(\mathbf{0},\omega). \tag{14}\]
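Once \(\mathrm{Im}\Pi^{R\mu\nu}\) is tabulated, Eqs. (13) and (14) are straightforward to evaluate; below is a minimal sketch in which the self-energy traces are supplied as callables (our interface, with all quantities in GeV units; the finite-\(\omega\) proxy for the \(\omega\to 0\) limit is our simplification).

```python
# DPR (13) and electric conductivity (14) from the photon self-energy.
import numpy as np

ALPHA = 1.0 / 137.035999   # fine structure constant

def dpr(kvec, omega, T, im_pi_trace):
    """d^4 Gamma/d^4 k: im_pi_trace(k, w) returns g_{mu nu} Im Pi^{R mu nu}(k, w)."""
    k2 = omega**2 - np.dot(kvec, kvec)     # photon invariant mass squared
    bose = 1.0 / (np.exp(omega / T) - 1.0)
    return -ALPHA / (12.0 * np.pi**4) / k2 * bose * im_pi_trace(kvec, omega)

def conductivity(im_pi_spatial, omega=1e-3):
    """sigma ~ (1/3) Sum_i Im Pi^{R ii}(0, w)/w at small w (proxy for the limit)."""
    return im_pi_spatial(omega) / (3.0 * omega)
```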
### Modification of photon self-energy by the soft mode
To construct the photon self-energy in a gauge-invariant way, we start from the lowest-order contribution of the soft mode to the thermodynamic potential \(\Omega_{\text{fluc}}=\int_{p}\ln[G_{S}\tilde{\Xi}^{-1}(p)]\), which is diagrammatically represented by the one-loop graph of the soft mode propagator. The photon self-energy is then constructed by attaching electromagnetic vertices at two points of quark lines in \(\Omega_{\text{fluc}}\). This procedure leads to ten types of diagrams shown in Fig. 3. By borrowing the nomenclature in the theory of superconductivity [16, 21, 22], we call (a)-(d) the Aslamazov-Larkin (AL) [21], (e) and (f) the Maki-Thompson (MT) [22] and (g)-(j) the density of states (DOS) terms, respectively. The respective contributions to the photon self-energy, \(\tilde{\Pi}^{\mu\nu}_{\text{AL}}(k)\), \(\tilde{\Pi}^{\mu\nu}_{\text{MT}}(k)\) and \(\tilde{\Pi}^{\mu\nu}_{\text{DOS}}(k)\), in the imaginary-time formalism are expressed as
\[\Pi^{\mu\nu}_{\text{AL}}(k) =\sum_{f}\int_{q}\tilde{\Gamma}^{\mu}_{f}(q,q+k)\tilde{\Xi}(q+k) \tilde{\Gamma}^{\nu}_{f}(q+k,q)\tilde{\Xi}(q), \tag{15}\] \[\Pi^{\mu\nu}_{\text{MT}}(k) =\sum_{f}\int_{q}\tilde{\Xi}(q)\mathcal{R}^{\mu\nu}_{\text{MT},f }(q,k),\] (16) \[\Pi^{\mu\nu}_{\text{DOS}}(k) =\sum_{f}\int_{q}\tilde{\Xi}(q)\mathcal{R}^{\mu\nu}_{\text{DOS}, f}(q,k), \tag{17}\]
Figure 3: Diagrammatic representations of the Aslamazov-Larkin (a)β(d), Maki-Thompson (e, f) and density of states (g)β(j) terms with the soft modes. The single, double and wavy lines are quarks, soft modes and photons, respectively.
where \(f=u,d\) is the index of flavors and the vertex functions \(\tilde{\Gamma}_{f}^{\mu}(q,q+k)\), \(\mathcal{R}_{\text{MT},f}^{\mu\nu}(q,k)\) and \(\mathcal{R}_{\text{DOS},f}^{\mu\nu}(q,k)\) represent the three- and four-point diagrams in Fig. 3. We note that the number of diagrams is doubled compared with the case in Ref. [16], since the quark and anti-quark lines should be distinguished for the present case.
The total photon self-energy reads
\[\tilde{\Pi}^{\mu\nu}(k) =\tilde{\Pi}_{\text{free}}^{\mu\nu}(k)+\tilde{\Pi}_{\text{fluc}}^{ \mu\nu}(k), \tag{18}\] \[\tilde{\Pi}_{\text{fluc}}^{\mu\nu}(k) =\tilde{\Pi}_{\text{AL}}^{\mu\nu}(k)+\tilde{\Pi}_{\text{MT}}^{ \mu\nu}(k)+\tilde{\Pi}_{\text{DOS}}^{\mu\nu}(k), \tag{19}\]
where \(\tilde{\Pi}_{\text{fluc}}^{\mu\nu}(k)\) is the contribution from the soft modes and
\[\tilde{\Pi}_{\text{free}}^{\mu\nu}(k)=N_{c}C_{\text{em}}\int_{p}\text{Tr}[ \gamma^{\mu}\mathcal{G}(p+k)\gamma^{\nu}\mathcal{G}(p)], \tag{20}\]
is the self-energy of the free-quark system, where \(C_{\text{em}}=e_{u}^{2}+e_{d}^{2}\) with \(e_{u}=2|e|/3\) (\(e_{d}=-|e|/3\)) denoting the electric charges of up (down) quark. We note that \(\tilde{\Pi}^{\mu\nu}(k)\) thus constructed nicely satisfies the Ward-Takahashi (WT) identity
\[k_{\mu}\tilde{\Pi}^{\mu\nu}(k)=0. \tag{21}\]
### Vertices
For the vertex functions \(\tilde{\Gamma}_{f}^{\mu}(q,q+k)\), \(\mathcal{R}_{\text{MT},f}^{\mu\nu}(q,k)\) and \(\mathcal{R}_{\text{DOS},f}^{\mu\nu}(q,k)\), instead of calculating the diagrams in Fig. 3 directly we determine their functional forms from the WT identities for the vertices
\[k_{\mu}\tilde{\Gamma}_{f}^{\mu}(q,q+k) =-e_{f}[\tilde{\Xi}^{-1}(q+k)-\tilde{\Xi}^{-1}(q)], \tag{22}\] \[k_{\mu}\mathcal{R}_{f}^{\mu\nu}(q,k) =-e_{f}[\tilde{\Gamma}_{f}^{\nu}(q-k,q)-\tilde{\Gamma}_{f}^{\nu} (q,q+k)], \tag{23}\]
where \(\mathcal{R}^{\mu\nu}(q,k)=\mathcal{R}_{\text{MT},f}^{\mu\nu}(q,k)+\mathcal{R} _{\text{DOS},f}^{\mu\nu}(q,k)\).
Among the vertex functions, only their spatial components are needed for the calculations of Eqs. (13) and (14), because \(\tilde{\Pi}_{\text{fluc}}^{00}(k)\) in Eq. (13) is obtained from the spatial components through
\[\tilde{\Pi}^{00}(k)=\frac{\mathbf{k}^{2}}{(i\nu_{l})^{2}}\tilde{\Pi}^{11}(k)\qquad \text{for}\quad k=(|\mathbf{k}|,0,0,i\nu_{l}). \tag{24}\]
To obtain \(\tilde{\Gamma}_{f}^{i}(q,q+k)\) for \(i=1,2,3\), we take the same procedure as that adopted in Ref. [16], where the energy dependent and independent terms of \(\tilde{\Xi}^{-1}(q)\) on the right-hand
side in Eq. (22) are attributed to \(k_{0}\tilde{\Gamma}_{f}^{0}(q,q+k)\) and \(\mathbf{k}\cdot\tilde{\mathbf{\Gamma}}_{f}(q,q+k)\) on the left-hand side, respectively, so that the spatial part of Eq. (22) is given by 1
Footnote 1: This procedure is justified by calculating \(\tilde{\Gamma}_{f}^{\mu}(q,q+k)\) from the triangle diagrams in Fig. 3 (a)β(d) directly and comparing the functional forms in the small \(\omega\) and \(\mathbf{k}\) limit [29].
\[\mathbf{k}\cdot\tilde{\mathbf{\Gamma}}_{f}(q,q+k)=e_{f}[A(\mathbf{q}+\mathbf{k})-A(\mathbf{q})]. \tag{25}\]
We then employ the ansatz on the form of \(\tilde{\Gamma}_{f}^{i}(q,q+k)\) that satisfies Eq. (25) as
\[\tilde{\Gamma}_{f}^{i}(q,q+k) =e_{f}Q_{(1)}(\mathbf{q}+\mathbf{k},\mathbf{q})(2q+k)^{i}, \tag{26}\] \[Q_{(1)}(\mathbf{q}_{1},\mathbf{q}_{2}) =\frac{A(\mathbf{q}_{1})-A(\mathbf{q}_{2})}{|\mathbf{q}_{1}|^{2}-|\mathbf{q}_{2} |^{2}}. \tag{27}\]
Since \(A(\mathbf{q})\) is real, \(\tilde{\Gamma}_{f}^{i}(q,q+k)\) is also a real function in this construction. We note that this form of approximation is valid only for sufficiently small \(k\), as the WT identity (22) cannot uniquely determine the vertex in general. Near the QCD CP, where the contribution of the soft mode becomes prominent, the qualitative result is, however, expected not to depend on the detailed form of the vertex, and our approximation should be well justified.
The form of the vertex \(\mathcal{R}_{f}^{ij}(q,k)\) is also obtained by adopting a similar argument with Eqs. (23) and (26), as was done in Ref. [16]. From this analysis it is found that \(\mathcal{R}_{f}^{ij}(q,k)\) is a real function and independent of \(i\nu_{l}\). By constructing the MT and DOS terms from the vertex, one finds
\[\text{Im}\Pi_{\text{MT}}^{Rij}(\mathbf{k},\omega)+\text{Im}\Pi_{\text{DOS}}^{Rij} (\mathbf{k},\omega)=0. \tag{28}\]
Equation (28) is shown from the fact that the sum of Eqs. (16) and (17) becomes real after the Matsubara summations when \(\mathcal{R}_{f}^{ij}(q,k)\) satisfies the above conditions [16; 29]. The cancellation of the MT and DOS terms is also known in metallic superconductivity [24].
From Eqs. (28) and (24), one obtains
\[\text{Im}\Pi_{\text{fluc}}^{R00}(\mathbf{k},\omega)=\frac{\mathbf{k}^{2}}{\omega^{2}} \text{Im}\Pi_{\text{AL}}^{R11}(\mathbf{k},\omega)\qquad\text{for}\qquad\mathbf{k}=(| \mathbf{k}|,0,0). \tag{29}\]
Plugging this into Eq. (13), one finds that the DPR is written solely in terms of the AL term. So is the electric conductivity since it is given by the spatial components of \(\text{Im}\Pi_{\text{fluc}}^{R\mu\nu}(\mathbf{k},\omega)\) as in Eq. (14). These results show that we only have to compute the AL term for obtaining both the DPR and the electric conductivity.
### Aslamazov-Larkin term
Since \(\mathrm{Im}\tilde{\Pi}^{ij}_{\mathrm{fluc}}(k)\) consists of only the AL term, we now calculate \(\tilde{\Pi}^{ij}_{\mathrm{AL}}(k)\). Using Eqs. (11) and (26), we obtain
\[\tilde{\Pi}^{ij}_{\mathrm{AL}}(k)= \sum_{f}\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\tilde{\Gamma}^{i}_{f} (q,q+k)\tilde{\Gamma}^{j}_{f}(q+k,q)\oint_{C}\frac{dq_{0}}{2\pi i}\frac{\coth \frac{q_{0}}{2T}}{2}\tilde{\Xi}(q+k)\tilde{\Xi}(q), \tag{30}\]
where the contour \(C\) encircles the imaginary axis, which is deformed so as to avoid the cut in \(\tilde{\Xi}(q+k)\) and \(\tilde{\Xi}(q)\).
Taking the analytic continuation \(i\nu_{l}\rightarrow\omega+i\eta\) and using Eq. (29) we obtain
\[g_{\mu\nu}\mathrm{Im}\Pi^{R\mu\nu}_{\mathrm{fluc}}(\mathbf{k},\omega)= \frac{\mathbf{k}^{2}}{\omega^{2}}\mathrm{Im}\Pi^{R11}_{\mathrm{AL}}( \mathbf{k},\omega)-\sum_{i}\mathrm{Im}\Pi^{Rii}_{\mathrm{AL}}(\mathbf{k},\omega)\] \[= C_{\mathrm{em}}\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\int\frac{d \omega^{\prime}}{2\pi}\mathrm{coth}\frac{\omega^{\prime}}{2T}\] \[\times\big{(}Q_{(1)}(\mathbf{q}+\mathbf{k},\mathbf{q})\big{)}^{2}\bigg{[} \bigg{(}\frac{(\mathbf{q}+\mathbf{k})^{2}-\mathbf{q}^{2}}{\omega}\bigg{)}^{2}-(2\mathbf{q}+\bm {k})^{2}\bigg{]}\] \[\times\mathrm{Im}\Xi^{R}(\mathbf{q}+\mathbf{k},\omega^{\prime})\big{\{} \mathrm{Im}\Xi^{R}(\mathbf{q},\omega^{\prime}+\omega)-\mathrm{Im}\Xi^{R}(\mathbf{q}, \omega^{\prime}-\omega)\big{\}}. \tag{31}\]
The contribution of the soft mode to the DPR is computed by substituting Eq. (31) into Eq. (13). The contribution of the soft mode to the electric conductivity is also obtained by plugging the formula
\[\sum_{i=1}^{3}\mathrm{Im}\Pi^{Rii}_{\mathrm{fluc}}(\mathbf{k},\omega)= C_{\mathrm{em}}\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\int\frac{d\omega^{ \prime}}{2\pi}\mathrm{coth}\frac{\omega^{\prime}}{2T}\big{(}Q_{(1)}(\mathbf{q}+\bm {k},\mathbf{q})\big{)}^{2}(2\mathbf{q}+\mathbf{k})^{2}\] \[\times\mathrm{Im}\Xi^{R}(\mathbf{q}+\mathbf{k},\omega^{\prime})\big{\{} \mathrm{Im}\Xi^{R}(\mathbf{q},\omega^{\prime}+\omega)-\mathrm{Im}\Xi^{R}(\mathbf{q}, \omega^{\prime}-\omega)\big{\}}, \tag{32}\]
into Eq. (14). We note that Eq. (32) at \(|\mathbf{k}|=0\) is linearly dependent on \(\omega\) in the \(\omega\to 0\) limit, and hence the conductivity \(\sigma\) calculated from it has a nonzero value. This term leads to the divergence of \(\sigma\) at the QCD CP as we will see in the next section.
We note that the domain of the integral in Eq. (31) or (32) is subject to the constraint that Eq. (11) takes a nonzero value only in the energy-momentum region \(|\omega|<\bar{k}(|\mathbf{k}|,\bar{\Lambda})\), i.e., inside the space-like region. Nevertheless, processes involving multiple soft modes can affect the photon self-energy in the time-like region, which is responsible for the DPR and conductivity. These contributions are understood as the scattering process of a photon with a soft mode: Suppose a virtual photon with energy-momentum \(k=(\mathbf{k},\omega)\) is absorbed by a soft mode with \(q_{1}=(\mathbf{q}_{1},\omega_{1})\) to make another one with \(q_{2}=(\mathbf{q}_{2},\omega_{2})\), both of which are in the space-like region; \(|\omega_{1}|<|\mathbf{q}_{1}|\) and \(|\omega_{2}|<|\mathbf{q}_{2}|\). Then, the energy-momentum conservation law
tells us that \(\mathbf{k}=\mathbf{q}_{2}-\mathbf{q}_{1}\) and \(\omega=\omega_{2}-\omega_{1}\), where \(|\mathbf{k}|\) can be taken arbitrarily small keeping \(\omega=\omega_{2}-\omega_{1}\) finite. Thus, the soft mode which has the spectral support in the space-like region can contribute to the photon self-energy in the time-like region (\(\omega>|\mathbf{k}|\)). In the next section we shall see that it can cause an enhancement of the DPR and conductivity.
Before closing this section, let us clarify the limitations of our calculation. Firstly, in our treatment we focus on the effects of the soft mode in the space-like region, while the effects of the mesonic modes in the time-like region are neglected. This approximation is justified as long as we consider the DPR in the low-energy region near the QCD CP, since the mass of the sigma mode is larger than \(2M\simeq 370\) MeV. When considering the DPR above \(2M\), however, the effect of the mesonic modes will become significant. Secondly, we have constructed the approximate forms of the vertex functions through the WT identities and Eq. (10). While this approximation should be valid for sufficiently small \(\omega\), it would not be directly applicable to the large energy-momentum region.
## 4 Numerical results
In this section, we shall show the numerical results of the DPR (13) and electric conductivity (14) near the QCD CP calculated with the photon self-energy obtained in the previous section.
We first show the DPR at \(\mathbf{k}=\mathbf{0}\) at \(\mu=\mu_{c}\) for several values of \(T\) below (above) \(T_{c}\) in the left (right) panel of Fig. 4. The red-thick lines show the contribution from \(\tilde{\Pi}^{\mu\nu}_{\rm fluc}(k)\). The total
Figure 4: Dilepton production rate (DPR) per unit energy and momentum \(d^{4}\Gamma/d\omega d^{3}k\) for several values of \(T/T_{c}\) at \(\mu=\mu_{c}\) and \(\mathbf{k}=\mathbf{0}\). The thick (red) and thin (blue) lines are the contributions from the soft mode and the massless quark gases, respectively. The left and right panels show the DPR below and above \(T_{c}\), respectively.
rate is given by the sum of the contributions from \(\tilde{\Pi}_{\rm fluc}^{\mu\nu}(k)\) and \(\tilde{\Pi}_{\rm free}^{\mu\nu}(k)\). However, the latter is almost negligible at the QCD CP in the range of \(\omega\) in the figure, since \({\rm Im}\tilde{\Pi}_{\rm free}^{\mu\nu}(k)\) has a nonzero value only for \(|\omega|>\sqrt{\mathbf{k}^{2}+4M^{2}}\), where \(M\simeq 185\) MeV at the QCD CP. For comparison, the DPR from the _massless_ free-quark gas is shown by the blue-thin lines in the figure. The figure shows that the DPR is enhanced significantly near the QCD CP by the soft modes and well exceeds that of the massless free-quark gas in the low-energy region \(\omega\lesssim 250\) MeV. The enhancement in the low-energy region becomes more prominent as \(T\) approaches \(T_{c}\) from both sides. Taking a closer look at these results, one finds that the DPR increases monotonically in the left panel as \(T\) approaches \(T_{c}\), while the \(T\) dependence for \(T>T_{c}\) shown in the right panel is not monotonic. The latter can be accounted for by a competition between the effect of the soft modes and the kinematical temperature effect causing more thermal excitations of the soft modes at higher temperatures. Figure 5 shows the numerical results of the DPR at nonzero momentum for several \(T\) above \(T_{c}\). One finds that the enhancement of the DPR is more prominent in the low-momentum region.
In the HIC experiments, the DPR is usually observed as a function of the invariant-mass \(m_{ll}\),
\[\frac{d\Gamma}{dm_{ll}^{2}}=\int d^{3}k\frac{1}{2\omega}\frac{d^{4}\Gamma}{d^ {4}k}\bigg{|}_{\omega=\sqrt{\mathbf{k}^{2}+m_{ll}^{2}}}, \tag{33}\]
to cancel out the effect of the flow. In Fig. 6, we show the numerical results of Eq. (33) for various values of \(T\) at \(\mu=\mu_{c}\). We find that the contribution of the soft modes is conspicuous in the low invariant-mass region \(m_{ll}\lesssim 150\) MeV.
Finally, we show the behavior of the electric conductivity \(\sigma\) near the QCD CP in Fig. 7. The left panel shows the \(T\) dependence of \(\sigma\) at three values of \(\mu\), where \(\sigma\) is normalized by \(TC_{\rm em}\). As expected from the infrared behavior of the soft modes in the critical region, the conductivity \(\sigma\) tends to diverge near the CP. In fact, it can be shown that \(\sigma\) grows as \(|T-T_{c}|^{-2/3}\) in the vicinity of the critical point in the present approximation, as will be discussed in detail in a forthcoming publication [29]. At \(\mu=0.99\,\mu_{c}\), the conductivity is not divergent but only shows a prominent yet finite peak at \(T\simeq 1.08\,T_{c}\), in accordance with the crossover nature of the transition. At \(\mu=1.01\,\mu_{c}\), \(\sigma\) shows a cusp-like behavior at \(T\simeq 0.9\,T_{c}\), reflecting the first-order nature of the phase transition. The right panel of Fig. 7 shows a contour plot of \(\sigma/TC_{\rm em}\) in the \(T\)-\(\mu\) plane. One sees that \(\sigma\) has a significant excess along the critical lines of the first-order and crossover transitions.
## 5 Discussions
Focusing on the collective soft modes whose mass tends to vanish at the QCD CP, we have explored their effects on the dilepton production rate (DPR) and the electric conductivity near the QCD CP. The contribution to these observables was taken into account through the modification of the photon self-energy by the AL, MT and DOS terms, the inclusion of all of which is necessary to ensure the WT identity. We have shown that the DPR in the low-energy and low-invariant-mass regions is greatly enhanced due to the soft modes around the CP in comparison with that of the massless free-quark gas. We have also seen that a prominent enhancement of the electric conductivity \(\sigma\) occurs near the QCD CP due to the soft modes. We plan to report more detailed analyses of the possible
anomalous transport properties including the electric conductivity and relaxation time near the QCD CP, as well as the phase boundary of the 2SC phase, elsewhere [29].
It is interesting to explore the phenomenological consequences of the present findings in the HIC experiments. If an anomalous enhancement of the DPR in the low-mass region, say less than 150 MeV, should ever be detected, our result suggests that it may be a signal of the QCD CP. The enhancement of the conductivity at the CP can also be observed in the HIC [30]. To identify the signal, however, it is important to disentangle it from other effects that induce a similar enhancement. For example, in our previous study we have pointed out that a similar enhancement of the DPR manifests itself near the phase boundary of the 2SC phase due to the development of the diquark soft modes [29]. Other standard mechanisms due to medium effects, such as hadronic scenarios and the processes described by perturbative QCD and so on [31, 32, 33], also bring about an enhancement in the low-mass region. A more detailed investigation of the invariant-mass spectrum of the DPR will be required to disentangle these effects.
Other important issues to be examined are the effects of dynamics. In the dynamical evolution of the HIC, the effect of the critical slowing down will modify the DPR around the QCD CP. To deal with this effect, an analysis of the DPR in the real-time formalism with a time-dependent background medium is required. It will also be important to understand the effects of the phase transitions on the bulk evolution of the medium [34]. For elucidating the production mechanisms and their respective characteristics, one would eventually need to resort to dynamical transport models [35, 36, 37, 38]. These investigations are left for future
Figure 7: Electric conductivity \(\sigma\) associated with the soft modes. **Left**: \(T\) dependence of \(\sigma\). The results for \(\mu/\mu_{c}=0.99\), \(1.0\) and \(1.01\) are shown by the dashed, solid and dotted lines, respectively. **Right**: Contour plot of \(\sigma\) in the \(T\)-\(\mu\) plane around the QCD CP.
study. Nevertheless, we would like to emphasize that the production mechanism of the DPR through the soft modes is robust.
The measurement of the DPR in the low invariant-mass region \(m_{ll}\lesssim 100-200\) MeV is also a challenge on the experimental side, because di-electrons are contaminated by Dalitz decays in this energy region. In spite of these challenging demands, however, it is encouraging that the future HIC programs at GSI and J-PARC-HI are designed to carry out high-statistics experiments [9, 10, 11], and that new technical developments are vigorously being made [30].
Finally, we remark that a complete description of the collective soft modes around the QCD CP needs to incorporate the vector coupling [39, 40] as well as the scalar coupling, which was exclusively taken into account in the present study. Such a more complete analysis constitutes one of our future tasks, which we hope to report on in the future.
## Acknowledgements
The authors thank Berndt Mueller, Hirotsugu Fujii and Akira Ohnishi for valuable comments. T. N. thanks JST SPRING (Grant No. JPMJSP2138) and Multidisciplinary PhD Program for Pioneering Quantum Beam Application. This work was supported by JSPS KAKENHI (Grants No. JP19K03872, No. JP19H05598, No. 20H01903, No. 22K03619).
|
2303.05811 | Enumeration of regular fractional factorial designs with four-level and
two-level factors | Designs for screening experiments usually include factors with two levels
only. Adding a few four-level factors allows for the inclusion of multi-level
categorical factors or quantitative factors with possible quadratic or
third-order effects. Three examples motivated us to generate a large catalog of
designs with two-level factors as well as four-level factors. To create the
catalog, we considered three methods. In the first method, we select designs
using a search table, and in the second method, we use a procedure that selects
candidate designs based on the properties of their projections into fewer
factors. The third method is actually a benchmark method, in which we use a
general orthogonal array enumeration algorithm. We compare the efficiencies of
the new methods for generating complete sets of non-isomorphic designs.
Finally, we use the most efficient method to generate a catalog of designs with
up to three four-level factors and up to 20 two-level factors for run sizes 16,
32, 64, and 128. In some cases, a complete enumeration was infeasible. For
these cases, we used a bounded enumeration strategy instead. We demonstrate the
usefulness of the catalog by revisiting the motivating examples. | Alexandre Bohyn, Eric D. Schoen, Peter Goos | 2023-03-10T09:37:07Z | http://arxiv.org/abs/2303.05811v2 | # Enumeration of regular fractional factorial designs with four-level and two-level factors
###### Abstract
Designs for screening experiments usually include factors with two levels only. Adding a few four-level factors allows for the inclusion of multi-level categorical factors or quantitative factors with possible quadratic or third-order effects. Three examples motivated us to generate a large catalog of designs with two-level factors as well as four-level factors. To create the catalog, we considered three methods. In the first method, we select designs using a search table, and in the second method, we use a procedure that selects candidate designs based on the properties of their projections into fewer factors. The third method is actually a benchmark method, in which we use a general orthogonal array enumeration algorithm. We compare the efficiencies of the new methods for generating complete sets of non-isomorphic designs. Finally, we use the most efficient method to generate a catalog of designs with up to three four-level factors and up to 20 two-level factors for run sizes 16, 32, 64, and 128. In some cases, a complete enumeration was infeasible. For these cases, we used a bounded enumeration strategy instead. We demonstrate the usefulness of the catalog by revisiting the motivating examples.
_Keywords:_ Delete-One-Factor Projection; Search Table; Sequential Enumeration; Reduction; NAUTY; Non-isomorphic designs.
## 1 Introduction
A major challenge in any scientific investigation is to choose an appropriate experimental design. In this paper, we are concerned with designs for the early stages of an investigation, where there are many factors that potentially affect the responses of interest. The purpose in these early stages is to identify the truly influential factors, and the designs appropriate for this purpose are screening designs (Montgomery, 2017, Chap. 1). In many cases, the factors included in such designs are studied at two levels only in order to achieve run-size economy. The most pertinent design is often selected using one or more ranking criteria. In this regard, catalogs of designs are convenient because they offer the possibility to compare several designs on multiple criteria. For this reason, extensive catalogs of orthogonal two-level designs have been published by Chen et al. (1993), Block and Mee (2005), Xu (2009), Ryan and Bulutoglu (2010), Schoen et al. (2010) and Schoen et al. (2017).
There are, however, circumstances where factors with more than two levels are involved in a screening experiment. For example, there may be categorical factors addressing more than two categories, or numerical factors that are believed to have quadratic or cubic effects. For these cases, designs with four-level and two-level factors, also called four-and-two-level designs, are suitable options because four-level factors can be constructed from two-level factors (Wu, 1989). Therefore, the run size economy of two-level designs carries over to designs with four-level as well as two-level factors.
All orthogonal 16-run four-and-two-level designs were enumerated by Schoen et al. (2010). The total number of orthogonal 32-run four-and-two-level designs was too large for a complete enumeration by these authors. However, it is possible to enumerate all regular 32-run orthogonal designs. In such designs, any two effects in the model matrix are either completely aliased or orthogonal. In the rest of this paper, we refer to such designs as \(4^{m}2^{n-p}\) designs. To construct a suitable 32-run \(4^{m}2^{n-p}\) design, the practitioner may consult the catalogs of Wu and Zhang (1993) or Ankenman (1999).
Three experiments from the literature motivated us to extend the work of Wu and Zhang
(1993) or Ankenman (1999) to designs with more factors or larger run sizes. Schoen (1999) presented two practical cases, not covered by the published catalogs. The first one involved a 14-factor experiment on the synthesis of a catalyst on a gauze with two four-level factors. In this experiment, a strip of gauze was cut from a roll and prepared for further processing. The gauze was then placed in an autoclave together with a mixture of ingredients. Next, the autoclave was heated for several hours. After cooling, the gauze was weighed and the yield of the synthesis was determined. The goal of the experiment was to identify the factor settings that maximize the yield of the synthesis. The 14 factors that could affect the yield are shown in Table 1. The two-level factors 1 and 2 are related to the gauze preparation. The four-level factors 3 and 4 as well as the two-level factors 5-10 are related to the composition of the mixture. Finally, the two-level factors 11-14 govern the synthesis process. A design including 32 runs was constructed based on first principles; see the original paper for more details. The design's construction would have been much faster if a complete catalog had been available.
The second example in Schoen (1999) is a cheese-making experiment involving 128 runs. In this case, ten factors were investigated. Nine factors were studied at two levels and one was studied at four levels. The purpose of the experiment was to detect which of the factors affected the quality characteristics of the cheeses. Here again, the design was constructed using first principles because no catalog was available; see the original paper for more details.
Our third example is provided by Katic (2011). It is a simulation experiment about sensor fields involving 32 runs. Two factors were studied at four levels, and five factors were studied at two levels. Wu and Zhang (1993) and Ankenman (1999) provide the same design for this case. However, this design does not match the design used by Katic (2011). In addition, as we explain later in Section 5, there may be reasons to prefer yet another design, not available in the existing catalogs. Overall, a catalog containing all the 32-run designs with two four-level factors and five two-level factors would have helped the author to consider several design options and thus make a more thoughtful design choice.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Stage & No. & Factor & Levels \\ \hline
Gauze preparation & 1 & Etching & Yes, no \\
 & 2 & Pretreatment & Yes, no \\ \hline
Mixture preparation & 3 & Si source & 1, 2, 3, 4 \\
 & 4 & Al source & 1, 2, 3, 4 \\
 & 5 & Template & 1, 2 \\
 & 6 & Si:Al ratio & \\
 & 7 & Template:Al ratio & \\
 & 8 & Total salt & \\
 & 9 & Water:Al ratio & \\
 & 10 & Shaking time & \\ \hline
Synthesis & 11 & Aging & Long, short \\
 & 12 & Rotating & \\
 & 13 & Cooling & \\
 & 14 & Time after synthesis & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Factors and factor levels in the experiment on the synthesis of a catalyst on a gauze
The purpose of our paper is to enumerate regular orthogonal designs with four-level and two-level factors for ranges of run sizes and numbers of factors that are likely to be useful in practice. In addition, the enumeration is to be complete in the sense that all non-isomorphic designs are generated when computationally feasible, and, if not, a subset of designs with limited aliasing between main effects and two-factor interactions and among two-factor interactions is generated. To enumerate the designs, we develop a search-table method and a method that selects candidate designs based on the properties of their projections into fewer factors. As a benchmark method, we use the general orthogonal array enumeration algorithm of Schoen et al. (2010), which generates both regular and nonregular orthogonal designs.
The rest of this paper is organized as follows. In Section 2, we review the state of art in enumeration techniques for regular two-level designs. Next, in Section 3, we adapt these methods to \(4^{m}2^{n-p}\) designs and evaluate their computational efficiencies. In Section 4, we present a catalog of \(4^{m}2^{n-p}\) designs for run sizes of 16, 32, 64, and 128, with up to 20 two-level factors, using the most efficient method. In Section 5, we revisit the three practical cases and explain how the catalog would have helped in the design of these experiments. Finally, in Section 6, we discuss our findings and the possibilities offered by our catalog.
## 2 Enumeration of regular two-level designs
### 2.1 Preliminaries
Consider a \(2^{n-p}\) regular fractional factorial two-level design \(D\), whose factor levels are coded with \(-1\) and \(+1\). The design \(D\) has \(n\) factors and \(2^{n-p}\) runs. The \(n\) factors include \(k=n-p\) basic factors whose \(2^{k}\) level combinations are all present in the design. There are also \(p\) added factors, defined by interactions between the \(k\) basic factors. These interactions are called generators. The element-wise product between an added factor and its generator results in the identity column, which is a column whose elements are all \(+1\). If we represent the factors as lower case letters, then such a product can be represented as a word containing
the letter representing the added factor and the letters corresponding to the basic factors involved in the generator. All words thus correspond to the identity column, as does any product of two or more words. All words derived from the \(p\) generators together with all words formed by the products of two or more of them form the defining relation. Therefore, the defining relation contains \(2^{p}-1\) words.
**Example 1.** Let \(D\) be a \(2^{6-2}\) design with four basic factors \(a,b,c,d\) and two added factors defined as \(e=abc\) and \(f=acd\). The products between the generators, \(abc\) and \(acd\), and the added factors, \(e\) and \(f\), result in the words \(abce\) and \(acdf\), respectively. Multiplying these words results in the word \(abceacdf\) which can be simplified to \(bdef\) because the product of a factor with itself results in the identity column. Therefore, the defining relation of \(D\) is the set of \(2^{2}-1\) words
\[\{abce,acdf,bdef\}. \tag{1}\]
\(\blacksquare\)
The length of a word is the number of letters it contains. For a \(2^{n-p}\) design \(D\), let \(A_{i}\) denote the number of words of length \(i\) in the defining relation. The vector
\[W(D)=(A_{3},A_{4},\ldots,A_{n}) \tag{2}\]
is called the word length pattern (WLP) of the design. A word of length three indicates a subset of three factors whose main effects are fully aliased with the interaction between the other two factors. A word of length four indicates a subset of four factors where the two-factor interaction of any two of the factors is fully aliased with the two-factor interaction of the two remaining factors. Words of length five or higher indicate less serious aliasing, such as aliasing between a two-factor interaction and a three-factor interaction.
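For illustration, the following Python sketch (illustrative only, and not the implementation used for our catalog) reproduces the defining relation and the word length pattern of the design in Example 1; each word is represented as the set of its letters, so that the product of two words is their symmetric difference.

```python
from itertools import combinations

def word_product(u, v):
    """Product of two words: letters occurring in both cancel out."""
    return frozenset(u) ^ frozenset(v)

def defining_relation(generators):
    """All 2^p - 1 words formed by products of the p generator words."""
    words = []
    for r in range(1, len(generators) + 1):
        for subset in combinations(generators, r):
            w = frozenset()
            for g in subset:
                w = word_product(w, g)
            words.append(w)
    return words

def wlp(words, n):
    """Word length pattern (A_3, A_4, ..., A_n)."""
    counts = [0] * (n + 1)
    for w in words:
        counts[len(w)] += 1
    return tuple(counts[3:])

# Example 1: the 2^(6-2) design with e = abc and f = acd
relation = defining_relation(["abce", "acdf"])
print(sorted("".join(sorted(w)) for w in relation))  # ['abce', 'acdf', 'bdef']
print(wlp(relation, n=6))                            # (0, 3, 0, 0)
```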
The resolution of a \(2^{n-p}\) design \(D\) is the smallest integer \(r\) for which \(A_{r}>0\), and thus the length of the shortest word in the defining relation (Box and Hunter, 1961). To distinguish between designs with the same resolution, Fries and Hunter (1980) proposed the aberration criterion:
**Definition 1**.: _For any two \(2^{n-p}\) designs \(D_{1}\) and \(D_{2}\), let \(r\) be the smallest integer for which \(A_{r}\left(D_{1}\right)\neq A_{r}\left(D_{2}\right)\). Then, \(D_{1}\) has less aberration than \(D_{2}\) if \(A_{r}\left(D_{1}\right)<A_{r}\left(D_{2}\right)\). If there is no design with less aberration than \(D_{1}\), then \(D_{1}\) has minimum aberration (MA)._
A \(2^{n-p}\) design can be represented as a \(2^{n-p}\times n\) array where each row is an experimental run and each column is a factor. The level of factor \(j\) in the \(i\)th experimental run is determined by the value in position \((i,j)\) of the array. The levels are denoted by \(-1\) and \(+1\), representing the low and high level of a quantitative two-level factor, or the two categories of a categorical factor.
Two designs are isomorphic if one can be obtained from the other through permutations of rows, columns and levels within a given column. If this is the case, there exists an isomorphic map from one design to the other, denoted by \(\pi\equiv(\kappa,\rho,\sigma)\), that is a specific sequence \(\kappa\) of column permutations, a specific sequence \(\rho\) of row permutations and a specific set \(\sigma\) of level switches of the columns. It is easy to see that two isomorphic designs have the same word length pattern. Designs that are isomorphic to each other belong to the same isomorphism class. As isomorphic designs have exactly the same statistical properties, it suffices to enumerate one design per isomorphism class.
**Example 1** (continued).: The word length pattern \(W\) of design \(D\) is \(W\left(D\right)=(0,3,0,0)\). Table 2 of Chen et al. (1993) shows four regular six-factor 16-run designs, each with a different word length pattern. Any regular \(2^{6-2}\) design has to be isomorphic to one of these four designs. The design \(D\) is isomorphic to design 6-2.1 in Table 2 of Chen et al. (1993) and therefore has MA. \(\blacksquare\)
### 2.2 Enumeration procedures
A minimal complete set (MCS) of \(2^{n-p}\) designs of resolution \(R\), denoted by \(C_{n,p}^{R}\), is a set of designs with exactly one representative of each isomorphism class. A complete set of \(2^{n-p}\) designs of resolution \(R\), denoted by \(\widetilde{C}_{n,p}^{R}\), is a set of designs with at least one representative for each isomorphism class. The most common method to generate a MCS of \(2^{(n+1)-(p+1)}\) designs of resolution \(R\) is to first extend \(C_{n,p}^{R}\), a MCS of \(n\)-factor parent designs, into
\(\widetilde{C}^{R}_{n+1,p+1}\), a complete set of \((n+1)\)-factor candidate designs, and then reduce it to a MCS \(C^{R}_{n+1,p+1}\). This procedure can be divided in two main steps:
**Extension:**: Each parent design in \(C^{R}_{n,p}\) is extended into one or more \(2^{(n+1)-(p+1)}\) candidate designs by adding candidate two-level factor columns so as to form \(\widetilde{C}^{R}_{n+1,p+1}\).
**Reduction:**: Of all the candidate designs in \(\widetilde{C}^{R}_{n+1,p+1}\), only one representative per isomorphism class is retained.
We now discuss the extension and reduction methods that have appeared in the literature.
#### 2.2.1 Extension methods
**Search table.** Bingham and Sitter (1999) used a search table (Franklin and Bailey, 1977) to enumerate complete sets of two-level split-plot designs. Applied to a \(2^{n-p}\) design with \(2^{k}\) runs, \(k=n-p\) basic factors and \(p\) added factors, without split-plot complication, a search table is a two-way table with \(2^{k}-k-1\) rows and \(p\) columns. The columns represent the added factors, while the rows correspond to all possible interactions between two or more basic factors. The interactions are the possible generators for the added factors. In the table, the interactions are sorted according to their order. Interactions of the same order are sorted lexicographically.
**Example 1** (continued).: The search table for \(D\) is presented in Table 2. For the first added factor \(e\), there are \(2^{4}-4-1=11\) possible generators. However, generators of the same order result in isomorphic 5-factor designs, so that there are only three isomorphism classes. The words \(abe\), \(abce\) and \(abcde\) are used to represent these classes because they appear first in the table. These three \(2^{5-1}\) non-isomorphic designs are the ones presented in Table 2 of Chen et al. (1993).
For the second added factor \(f\), Bingham and Sitter (1999) explained that we only need to consider generators that are lower in the table than the generator used for the first added factor. Indeed, using \(ab\) as the generator for both factors \(e\) and \(f\) would result in a word of length two (\(ef\)), indicating that the main effects of factors \(e\) and \(f\) are aliased. The design would thus have resolution II, which is undesirable. Therefore, the choice
of an additional generator in the same row has to be disregarded. Next, consider the pair of words \(\{abce,acf\}\) where the generator of the second word, \(ac\), appears higher in the table than the generator of the first word, \(abc\). The resulting design can be obtained from another pair of words, for which the generator of the second word appears lower in the search table than the generator of the first word. This can be seen by interchanging the columns and relabelling the factors in the original pair of words in the following way: \(\{abce,acf\}\rightarrow\{acbe,abf\}\rightarrow\{acbf,abe\}\rightarrow\{abe, abcf\}\). This also applies to any of the other sets of words containing \(abce\) or \(abcde\) and an additional word from a higher row in the table. We conclude that we do not lose non-isomorphic designs if we only consider generators that appear lower in the search table. The words that are not considered for the generation of this \(2^{6-2}\) design are struck through.
Without the search table, each of the three non-isomorphic \(2^{5-1}\) designs, represented by the words \(abe\), \(abce\) and \(abcde\), can generate 11 candidates, one for each generator available. With the search table, the design with \(abe\) as its first word can only generate 10 candidates, the design with \(abce\) as its first word can only generate 4 candidates and the design with \(abcde\) as its first word cannot generate any candidates. As a consequence, the search-table approach leads to 14 \(2^{6-2}\) candidate designs among the 33 possible options. These 14 options must belong to one of the four isomorphism classes identified by Chen et al. (1993).
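The restriction imposed by the search table is easy to express in code. The Python sketch below (a simplified illustration, not our actual implementation) builds the rows of Table 2 and counts, for each choice of the generator of \(e\), the generators that remain available for \(f\).

```python
from itertools import combinations

# Rows of the search table for k = 4 basic factors: all interactions of
# two or more basic factors, sorted by order and then lexicographically
rows = ["".join(c) for r in (2, 3, 4) for c in combinations("abcd", r)]

def candidates_below(generator):
    """Generators still available for the next added factor: only rows
    strictly below the row used for the previous added factor."""
    return rows[rows.index(generator) + 1:]

# The three non-isomorphic 2^(5-1) parents use ab, abc and abcd for e
print([len(candidates_below(g)) for g in ("ab", "abc", "abcd")])  # [10, 4, 0]
```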
**Delete-one-factor projection.** For a \(2^{(n+1)-(p+1)}\) design \(D_{c}\) and \(i=1,\ldots,n+1\), let \(D_{c(i)}\) be the \(2^{n-p}\) subdesign obtained by deleting the \(i\)th column. Such a subdesign is called a delete-one-factor projection (DOP) of \(D_{c}\) (Xu, 2009). For any design \(D\) in \(C_{n,p}^{R}\), any interaction of the basic factors that has not yet been used to construct the design is a candidate column, and using any one of the candidate columns for a new added factor yields a \(2^{(n+1)-(p+1)}\) candidate \(D_{c}\). Xu (2009) showed that discarding \(D_{c}\) if its resolution is lower than \(R\) or if \(D\) does not have MA among all the DOPs of \(D_{c}\) still results in a complete set \(\widetilde{C}_{n+1,p+1}^{R}\). This is because all non-isomorphic \(n\)-factor designs are available for extension, including a design that is isomorphic to the MA DOP of \(D_{c}\).
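The acceptance rule of the DOP method takes only a few lines. The Python sketch below (illustrative only) exploits the fact that comparing word length patterns as tuples coincides with the aberration ordering of Definition 1; the word length patterns in the demonstration are hypothetical.

```python
def keep_candidate(parent_wlp, dop_wlps, candidate_resolution, R):
    """DOP rule: keep an extended design only if its resolution is at
    least R and the parent has minimum aberration among all its DOPs.
    WLPs are tuples (A_3, ..., A_n); tuple comparison is exactly the
    aberration ordering."""
    if candidate_resolution < R:
        return False
    return all(parent_wlp <= w for w in dop_wlps)

parent = (0, 3, 0, 0)
dops = [(0, 3, 0, 0), (0, 3, 0, 0), (1, 2, 0, 0)]
print(keep_candidate(parent, dops, candidate_resolution=4, R=4))  # True
```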
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{**Added factor**} \\ \cline{2-5}
**Generator** & \(e\) & \(f\) (\(ab\)) & \(f\) (\(abc\)) & \(f\) (\(abcd\)) \\ \hline
\(ab\) & \(abe\) & \(\sout{abf}\) & \(\sout{abf}\) & \(\sout{abf}\) \\
\(ac\) & \(\sout{ace}\) & \(acf\) & \(\sout{acf}\) & \(\sout{acf}\) \\
\(ad\) & \(\sout{ade}\) & \(adf\) & \(\sout{adf}\) & \(\sout{adf}\) \\
\(bc\) & \(\sout{bce}\) & \(bcf\) & \(\sout{bcf}\) & \(\sout{bcf}\) \\
\(bd\) & \(\sout{bde}\) & \(bdf\) & \(\sout{bdf}\) & \(\sout{bdf}\) \\
\(cd\) & \(\sout{cde}\) & \(cdf\) & \(\sout{cdf}\) & \(\sout{cdf}\) \\
\(abc\) & \(abce\) & \(abcf\) & \(\sout{abcf}\) & \(\sout{abcf}\) \\
\(abd\) & \(\sout{abde}\) & \(abdf\) & \(abdf\) & \(\sout{abdf}\) \\
\(acd\) & \(\sout{acde}\) & \(acdf\) & \(acdf\) & \(\sout{acdf}\) \\
\(bcd\) & \(\sout{bcde}\) & \(bcdf\) & \(bcdf\) & \(\sout{bcdf}\) \\
\(abcd\) & \(abcde\) & \(abcdf\) & \(abcdf\) & \(\sout{abcdf}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Search table for \(2^{6-2}\) designs, where the three columns for factor \(f\) correspond with the possible generators for factor \(e\), written in parentheses. Struck-through words are not considered as candidate generators.
**Minimum complete set algorithm.** The minimum complete set (MCS) algorithm was introduced by Schoen et al. (2010) to generate MCSs of orthogonal arrays of specific run sizes, numbers of factor levels and strengths. The algorithm uses the lexicographically minimum in columns (LMC) form:
**Definition 2**.: _Consider two \(2^{n-p}\) designs \(D_{1}\) and \(D_{2}\) with \(N\) runs. Denote by \(d_{1}\) and \(d_{2}\) the \(N\cdot n\) vectors obtained by concatenating the \(n\) columns of \(D_{1}\) and \(D_{2}\), respectively, and by \(d_{ij}(i=1,2;j=1,\ldots,N\cdot n)\) the elements of these vectors. Then, \(D_{1}\) is lexicographically smaller than \(D_{2}\) if there is an \(l\leq N\cdot n\) such that \(d_{1j}=d_{2j}\) for \(j=1,\ldots,l-1\) and \(d_{1l}<d_{2l}\). The design \(D_{1}\) is in LMC form if no other design from its isomorphism class is lexicographically smaller._
The MCS algorithm starts with a MCS of parents, \(C_{n,p}^{R}\), that are all in LMC form. Then, each parent is extended to a candidate design by creating a new column using element-wise addition. Only additional columns that are lexicographically smaller than the columns of the parent are considered. The set of extended designs is guaranteed to contain all designs from \(C_{n+1,p+1}^{R}\) in LMC form (Schoen et al., 2010). However, the MCS algorithm generates both regular and non-regular designs, that is, designs where the aliasing between the effects in the model matrix can also be partial. For example, from the three non-isomorphic \(2^{5-1}\) designs from Example 1, corresponding to words \(abe\), \(abce\) and \(abcde\), the LMC algorithm produces three non-regular designs and four regular \(2^{6-2}\) designs.
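The comparison underlying Definition 2 can be coded directly. The NumPy sketch below (illustrative only; a full LMC check additionally requires minimizing over all isomorphic maps, which we omit here) compares two designs lexicographically.

```python
import numpy as np

def lex_smaller(D1, D2):
    """Definition 2: concatenate the columns of each design and compare
    the resulting vectors lexicographically."""
    d1 = np.ravel(np.asarray(D1), order="F")  # column-wise concatenation
    d2 = np.ravel(np.asarray(D2), order="F")
    for a, b in zip(d1, d2):
        if a != b:
            return bool(a < b)
    return False  # the two vectors are identical

D1 = [[-1, -1], [-1, 1], [1, -1], [1, 1]]
D2 = [[-1, -1], [-1, 1], [1, 1], [1, -1]]
print(lex_smaller(D1, D2))  # True: they first differ where D1 has -1
```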
#### 2.2.2 Reduction methods
**Partitioning.** The reduction step is a computationally intensive part of the whole enumeration process. For this reason, it is often preceded by a partitioning step in which a design criterion is selected such that isomorphic designs have the same value and any two designs with different values must be non-isomorphic. Such a criterion is called an invariant. First, an invariant is computed for all candidates in \(\widetilde{C}_{n+1,p+1}^{R}\). Then, the set of candidates is divided into subsets of designs with the same value for the invariant. Any singleton is automatically an isomorphism class representative and does not need to go through the reduction step.
Chen et al. (1993) used a letter pattern in addition to the WLP as an invariant in their algorithm, while Lin and Sitter (2008) used the eigenvalues of a word pattern matrix in addition to the WLP. Xu (2009) used the moment projection pattern (Xu and Deng, 2005), a ranking criterion based on the Hamming distance of the projections of the design matrix, as an invariant in his algorithm.
**Pairwise isomorphism testing.** Chen et al. (1993) implemented a pairwise isomorphism testing procedure where, within every subset of \(\widetilde{C}_{n+1,p+1}^{R}\) created by the partitioning, each candidate design is tested against all other candidates. For each pair of candidates, every possible isomorphic map from one design to the other is tested and, if none can be found, the two designs are non-isomorphic.
**Canonical form rejection.** Schoen et al. (2010) employed lexicographic ordering in their MCS algorithm. All candidate designs generated in the extension step of the algorithm are tested to determine whether they are in LMC form. If that is the case, the candidate is kept as an isomorphism class representative. Otherwise, it is rejected. However, this procedure also retains non-regular designs. Since we address only regular designs, we perform an additional regularity check for every candidate design \(D\). Let \(\mathbf{X}\) be the \(N\times\left(2^{k}-1\right)\) model matrix corresponding to the interactions between the \(k\) basic factors of the design \(D\) and let \(\mathbf{D}\) be its \(N\times n\) design matrix. Then, the design is regular and therefore retained if and only if every entry of the \(\left(2^{k}-1\right)\times n\) matrix \(N^{-1}\mathbf{X^{\prime}D}\) equals zero, one, or minus one.
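This regularity check is compact in code. The NumPy sketch below (illustrative only) assumes that the levels are coded \(-1\) and \(+1\) and that the first \(k\) columns of the design matrix are the basic factors.

```python
import numpy as np
from itertools import combinations

def is_regular(D, k):
    """Regular iff every entry of X'D / N equals 0, +1 or -1, i.e.,
    every column of D is fully aliased with an interaction column."""
    D = np.asarray(D, dtype=float)
    N = D.shape[0]
    # Model matrix X: all 2^k - 1 interactions of the basic factors
    X = np.column_stack([
        np.prod(D[:, list(s)], axis=1)
        for r in range(1, k + 1) for s in combinations(range(k), r)
    ])
    J = X.T @ D / N
    return bool(np.all(np.isclose(np.abs(J), 0) | np.isclose(np.abs(J), 1)))

a = np.array([-1, -1, 1, 1])
b = np.array([-1, 1, -1, 1])
print(is_regular(np.column_stack([a, b, a * b]), k=2))  # True
```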
**Canonical form conversion.** NAUTY (McKay and Piperno, 2014) is a program that selects isomorphism class representatives among sets of graphs. It converts all graphs in the set into a specific form called the NAUTY canonical form and then selects one representative for each unique form. Ryan and Bulutoglu (2010) showed that a \(2^{n-p}\) design can be uniquely represented as a graph, and that graph isomorphism is equivalent to design isomorphism. This justifies their use of NAUTY to perform the isomorphism reduction step.
#### 2.2.3 State of the art in two-level design enumeration
Chen et al. (1993) were the first to apply extension and reduction to enumerate two-level regular designs. They considered all unused two-level factor interaction columns in the extension step, and they used a pairwise isomorphism testing in the reduction step. Their catalog includes all non-isomorphic 16-run designs of resolution III, resolution-III 32-run designs with up to 28 factors, and resolution-IV 64-run designs with up to 32 factors. Block and Mee (2005) extended that catalog to 128-run designs of resolution IV with up to 64 factors by differentiating design based on their projections rather than performing a pairwise isomorphism check. Xu (2009) used the moment projection pattern criterion for partitioning, pairwise isomorphism testing and the DOP method in the extension step of the algorithm, thereby creating a more efficient algorithm. This allowed him to enumerate and present efficient designs for run sizes up to 4096, with resolution up to VII. Ryan and Bulutoglu (2010) improved the speed of Xu's algorithm by replacing the pairwise isomorphism testing procedure in the reduction step with the NAUTY-based algorithm. With this procedure, they extended the catalog of Xu (2009) by enumerating resolution IV 128-run and 256-run designs with up to 64 factors, and addressing further resolution-VI cases for designs involving 2048 and 4096 runs.
## 3 Enumeration of \(4^{m}2^{n-p}\) designs
### 3.1 Preliminaries
A \(4^{m}2^{n-p}\) design has \(m\) four-level factors and \(n\) two-level factors. Using the grouping scheme of Wu (1989), these designs can be derived from two-level designs by combining pairs of two-level factors into four-level factors. Table 3 illustrates this by showing that each of the four level combinations of two two-level factors can be assigned to a different level of a four-level factor. The two-level factors constitute two parts of the main effect of the four-level factor. The third part is the interaction between the two two-level factors. The two main effects and the interaction are called pseudo-factors.
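A minimal Python sketch of this grouping scheme (implementing the level assignment of Table 3; illustrative only) is:

```python
def group(a, b):
    """Wu's grouping scheme: map a pair of -1/+1 columns onto the
    levels 0, 1, 2, 3 of a four-level factor (cf. Table 3)."""
    levels = {(1, 1): 0, (1, -1): 1, (-1, 1): 2, (-1, -1): 3}
    return [levels[pair] for pair in zip(a, b)]

print(group([1, 1, -1, -1], [1, -1, 1, -1]))  # [0, 1, 2, 3]
```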
A regular \(4^{m}2^{n-p}\) design is constructed from \(k\) two-level basic factors and involves \(2^{k}\)
runs. To define \(m\) four-level factors, \(2m\) of the \(k\) basic factors are used. The remaining \(k-2m\) basic factors serve as building blocks for the \(p=2m+n-k\) additional two-level factors. A MCS of \(4^{m}2^{n-p}\) designs involving \(2^{k}\) runs with resolution \(R\) is denoted by \(C_{m,n,p}^{R}\), while a complete set is denoted by \(\widetilde{C}_{m,n,p}^{R}\).
To define the word length pattern of a \(4^{m}2^{n-p}\) design, we need to adjust the definition of the word length pattern of two-level designs. More specifically, an interaction of two pseudo-factors that form a four-level factor constitutes another pseudo-factor. For this reason, the length of a word that contains two pseudo-factors corresponding to a single four-level factor is decreased by 1. Following Wu and Zhang (1993), we represent the three pseudo-factors corresponding to a single four-level factor as lower case letters with an index \(i\in\{1,2,3\}\). For instance, when the original factors \(a\) and \(b\) define the four-level factor \(A\), then we replace the factors \(a\) and \(b\) with \(a_{1}\) and \(a_{2}\), and their product \(ab\) by \(a_{3}\) in all the words of the defining relation of the design.
**Example 2**.: Consider the \(2^{6-2}\) design \(D\) with a defining relation \(\{abce,acdf,bdef\}\) presented in Example 1. We turn it into a \(4^{1}2^{4-2}\) design, \(D_{1}\), by creating the four-level factor \(A\) using the two two-level factors \(a\) and \(b\) and their interaction \(ab\) as pseudo factors. By relabeling the pseudo-factors, the three words of the original defining relation \(\{abce,acdf,bdef\}\) become \(\{a_{3}ce,a_{1}cdf,a_{2}def\}\), and now have lengths 3, 4 and 4, respectively. The word length pattern of the design therefore changes from \(W\left(D\right)=\left(0,3,0,0\right)\) to \(W\left(D_{1}\right)=\left(1,2,0,0\right)\), and \(D_{1}\) becomes a resolution-III design instead of a resolution-IV design. Furthermore, the two-factor interaction \(ce\) is now fully aliased with one of the main effects of the four-level
\begin{table}
\begin{tabular}{c c c c} \hline \(a\) & \(b\) & & \(A\) \\ \hline \(+1\) & \(+1\) & \(\rightarrow\) & \(0\) \\ \(+1\) & \(-1\) & \(\rightarrow\) & \(1\) \\ \(-1\) & \(+1\) & \(\rightarrow\) & \(2\) \\ \(-1\) & \(-1\) & \(\rightarrow\) & \(3\) \\ \hline \end{tabular}
\end{table}
Table 3: Grouping scheme (Wu, 1989) to combine two two-level factors, \(a\) and \(b\), into a four-level factor, \(A\).
factor \(A\).
Wu and Zhang (1993) differentiate between words containing pseudo-factors and words not containing any pseudo-factors. Words involving pseudo-factors from \(t\) different four-level factors are of type \(t\) and words not involving pseudo-factors are of type \(0\). This distinction helps to differentiate non-isomorphic designs according to the interest of the practitioner. For example, a design whose only words are of type \(0\) is especially useful to study the main effects and two-factor interactions of four-level factors, because these effects are not aliased with effects of two-level factors or with effects of other four-level factors. Similarly, a design with type-2 words of length three has main effects of a two-level factor that are aliased with an interaction between two four-level factors, so that the design is less suitable to study these two-level factors.
For a \(4^{m}2^{n-p}\) design \(D\), let \(A_{it}\) denote the number of words of length \(i\) and type \(t\), and let \({\bf A}_{i}^{m}=(A_{im},\ldots,A_{i0})\), be a vector representing all words of length \(i\) in descending order of the type. We call the vector
\[{\bf W}_{m}(D)=\left({\bf A}_{3}^{m},\ldots,{\bf A}_{m+n}^{m}\right) \tag{3}\]
the word length pattern of type \(m\), abbreviated as \({\rm WLP}_{m}\). In a similar fashion, let \({\bf A}_{i}^{0}=(A_{i0},\ldots,A_{im})\) be a vector representing all words of length \(i\) in ascending order of the type. We call
\[{\bf W}_{0}(D)=\left({\bf A}_{3}^{0},\ldots,{\bf A}_{m+n}^{0}\right) \tag{4}\]
the word length pattern of type \(0\), abbreviated as \({\rm WLP}_{0}\). If \(m>1\), the vector \({\bf A}_{i}\) includes three or more elements so that there are at least six ways to order them. We find it hard to motivate orderings of the \(A_{it}\) values other than the descending or ascending orders of \(t\). For this reason, we only consider \({\rm WLP}_{m}\) and \({\rm WLP}_{0}\). The \({\rm WLP}_{m}\) should be especially useful as a design criterion if the emphasis is on studying the effects involving the four-level factors. In contrast, the \({\rm WLP}_{0}\) should be useful if the emphasis is on studying the two-level factors.
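The length and type of a relabeled word are straightforward to compute. The Python sketch below (illustrative only) relabels the words of the design \(D_{1}\) from Example 2, where the four-level factor \(A\) is formed from the two-level factors \(a\) and \(b\).

```python
def length_and_type(word, groups):
    """Return (length, type) of a two-level word after relabeling the
    pseudo-factors; groups maps each four-level factor to its pair of
    two-level factors, e.g. {"A": ("a", "b")}."""
    letters = set(word)
    word_type = 0
    pseudo = 0
    for x, y in groups.values():
        hit = {x, y} & letters
        letters -= hit
        if hit:          # relabeled to a_1, a_2 or a_3: one letter
            word_type += 1
            pseudo += 1
    return pseudo + len(letters), word_type

groups = {"A": ("a", "b")}
for w in ("abce", "acdf", "bdef"):
    print(w, length_and_type(w, groups))  # (3, 1), (4, 1), (4, 1)
```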
The resolution of a \(4^{m}2^{n-p}\) design \(D\) is defined as the smallest integer \(r\) for which \({\bf A}_{r}^{0}\) or \({\bf A}_{r}^{m}\) is a non-zero vector, and thus as the length of the shortest word in the defining relation. However, to define aberration, we need to differentiate the words of different types.
**Definition 3** (Wu and Zhang (1993)).: _Consider two \(4^{m}2^{n-p}\) designs \(D_{1}\) and \(D_{2}\). Let \(\mathbf{W}_{t}(D_{1})\) and \(\mathbf{W}_{t}(D_{2})\) be the word length patterns of type \(t\) of \(D_{1}\) and \(D_{2}\), respectively. Then, \(D_{1}\) has less aberration of type \(t\) than \(D_{2}\) if the first entry where \(\mathbf{W}_{t}(D_{1})\) differs from \(\mathbf{W}_{t}(D_{2})\) is smaller in \(\mathbf{W}_{t}(D_{1})\) than in \(\mathbf{W}_{t}(D_{2})\). If no other design has less aberration of type \(t\) than \(D_{1}\), then \(D_{1}\) has minimum aberration of type \(t\)._
**Example 2** (continued).: We return to the \(4^{1}2^{4-2}\) design \(D_{1}\). The two-level basic factors are \(c\) and \(d\), and there are two added two-level factors \(e=abc\) and \(f=acd\). The original defining relation of \(D_{1}\) was \(\{abce,acdf,bdef\}\) and became \(\{a_{3}ce,a_{1}cdf,a_{2}def\}\) after relabeling the pseudo-factors corresponding to the four-level factor. In the new defining relation, one word has length 3 and two words have length 4, but they are all words of type 1 since they all contain one pseudo-factor. Therefore, we can say that \(A_{31}=1\) and \(A_{41}=2\), and thus the word length pattern of type 1 of \(D_{1}\) is \(\mathbf{W}_{1}(D_{1})=((1,0),(2,0))\), while its word length pattern of type 0 is \(\mathbf{W}_{0}(D_{1})=((0,1),(0,2))\). According to Wu and Zhang (1993), the MA \(4^{1}2^{4-2}\) design of both type 1 and 0 has a defining relation \(\{acde,bcf,abdef\}\), which becomes \(\{a_{1}cde,a_{2}cf,a_{3}def\}\) after relabeling the pseudo-factors corresponding to the four-level factor, and it has a word length pattern of type 1 of \(((1,0),(2,0))\), and a word length pattern of type 0 of \(((0,1),(0,2))\). We see that, by performing the relabeling \(e\leftrightarrow f\) and \(a_{2}\leftrightarrow a_{3}\), the defining relation of \(D_{1}\) is mapped onto that of the MA design, so the two designs are isomorphic and have the same word length patterns. Therefore, \(D_{1}\) is a MA design of type 1 and of type 0. \(\blacksquare\)
### 3.2 Extension procedures
**Search table.** The search table has to be adapted to cope with \(4^{m}2^{n-p}\) designs. More specifically, we need to relabel the pseudo-factors corresponding to the four-level factors. Just as for two-level designs, the columns of the search table represent the added factors, while the rows correspond to all possible interactions between two or more basic factors. These interactions are the possible generators for the added factors. We first relabel the interactions with the pseudo-factors corresponding to the four-level factors. Then, we sort the interactions by their order. However, pseudo-factors cannot be relabeled to ordinary factors, and ordinary factors cannot be relabeled to pseudo-factors. Interactions with the same order are further sorted by type. Interactions with the same length and type are
sorted lexicographically. After the search table has been sorted correctly, the same rules as for two-level designs apply. This means that the generators considered for an additional added factor must not come from the same row or a higher row than the previous one added.
We construct \(4^{m}2^{n-p}\) designs in such a way that the sub-designs involving only the four-level factors are full factorial designs which are possibly replicated. The main reason for this measure is that it simplifies the construction of search tables, because it permits the inclusion of the four-level factors as basic factors. In addition, it implies that there is no aliasing among the four-level factors. So designs with a fractional sub-design of the four-level factors are disregarded.
**Example 2** (continued).: We return to the \(4^{1}2^{4-2}\) design \(D_{1}\). The two-level basic factors are \(c\) and \(d\), and there are two added two-level factors \(e=abc\) and \(f=acd\). The adapted search table for \(D_{1}\) is presented in Table 4. To visualize the relabeling, the original generators are listed in the first column of the table, and to visualize the new way of sorting, the order and type of the generators are also indicated in the search table. It is easy to see that \(a_{3}\) cannot be considered as a generator since it has order 1. This would imply aliasing between the main effect of the four-level factor and an added two-level factor. We also see that, after relabeling, \(ac\), \(bc\) and \(abc\) become \(a_{1}c\), \(a_{2}c\) and \(a_{3}c\), which are all generators of the same order and of the same type. Since pseudo-factors can only be relabeled to pseudo-factors, and two-level factors to two-level factors, the designs with \(cde\) and \(a_{1}ce\) as the first added factor are not isomorphic to each other. However, all generators with order 2 and type 1 will lead to isomorphic designs. All other generators with order 3 and type 1 will also lead to isomorphic designs. Therefore, there are three non-isomorphic \(4^{1}2^{3-1}\) designs with 16 runs, with \(cde\), \(a_{1}ce\) and \(a_{1}cde\) as the first added factor. In \(D_{1}\), the original generator of factor \(e\) is \(a_{3}c\), but the generator \(a_{1}c\) results in a \(4^{1}2^{3-1}\) design that is isomorphic to the original one. The generators that lead to isomorphic designs are struck through. Without the search table, each of the three non-isomorphic \(4^{1}2^{3-1}\) designs could generate ten candidates, one for each generator with order two or more. With the search table, the design with \(cde\) as its first word can generate nine candidates, the design with
\(a_{1}ce\) as its first word can generate eight candidates and the design with \(a_{1}cde\) as its first word can only generate two candidates, since it is located near the bottom of the search table. As a consequence, the search-table approach leads to 19 \(4^{1}2^{4-2}\) candidate designs among the 30 possible options. \(\blacksquare\)
**Delete-one-factor projections.** The DOP method can be adapted to \(4^{m}2^{n-p}\) designs by only considering the deletion of two-level factors. For any design \(D\) in \(C^{R}_{m,n,p}\), adding a two-level column yields a \(4^{m}2^{(n+1)-(p+1)}\) candidate design \(D_{c}\). Such a candidate is discarded if its resolution is lower than \(R\) or if \(D\) does not have MA among all DOPs of \(D_{c}\). The resulting set, \(\tilde{C}^{R}_{m,n+1,p+1}\), is a complete set. The proof can be found in Appendix A.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{Original} & \multirow{2}{*}{Relabeled} & \multirow{2}{*}{Order} & \multirow{2}{*}{Type} & \multicolumn{4}{c}{Added factor} \\ \cline{5-8} & & & & \(e\) & \(f\) (\(cd\)) & \(f\) (\(a_{1}c\)) & \(f\) (\(a_{1}cd\)) \\ \hline
\(ab\) & \(a_{3}\) & 1 & 1 & \(\sout{a_{3}e}\) & \(\sout{a_{3}f}\) & \(\sout{a_{3}f}\) & \(\sout{a_{3}f}\) \\
\(cd\) & \(cd\) & 2 & 0 & \(cde\) & \(\sout{cdf}\) & \(\sout{cdf}\) & \(\sout{cdf}\) \\
\(ac\) & \(a_{1}c\) & 2 & 1 & \(a_{1}ce\) & \(a_{1}cf\) & \(\sout{a_{1}cf}\) & \(\sout{a_{1}cf}\) \\
\(bc\) & \(a_{2}c\) & 2 & 1 & \(\sout{a_{2}ce}\) & \(a_{2}cf\) & \(a_{2}cf\) & \(\sout{a_{2}cf}\) \\
\(abc\) & \(a_{3}c\) & 2 & 1 & \(\sout{a_{3}ce}\) & \(a_{3}cf\) & \(a_{3}cf\) & \(\sout{a_{3}cf}\) \\
\(ad\) & \(a_{1}d\) & 2 & 1 & \(\sout{a_{1}de}\) & \(a_{1}df\) & \(a_{1}df\) & \(\sout{a_{1}df}\) \\
\(bd\) & \(a_{2}d\) & 2 & 1 & \(\sout{a_{2}de}\) & \(a_{2}df\) & \(a_{2}df\) & \(\sout{a_{2}df}\) \\
\(abd\) & \(a_{3}d\) & 2 & 1 & \(\sout{a_{3}de}\) & \(a_{3}df\) & \(a_{3}df\) & \(\sout{a_{3}df}\) \\
\(acd\) & \(a_{1}cd\) & 3 & 1 & \(a_{1}cde\) & \(a_{1}cdf\) & \(a_{1}cdf\) & \(\sout{a_{1}cdf}\) \\
\(bcd\) & \(a_{2}cd\) & 3 & 1 & \(\sout{a_{2}cde}\) & \(a_{2}cdf\) & \(a_{2}cdf\) & \(a_{2}cdf\) \\
\(abcd\) & \(a_{3}cd\) & 3 & 1 & \(\sout{a_{3}cde}\) & \(a_{3}cdf\) & \(a_{3}cdf\) & \(a_{3}cdf\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Search table for \(4^{1}2^{4-2}\) designs with \(A=(a,b,ab)\) as the four-level factor, where the three columns for factor \(f\) correspond with the possible generators for factor \(e\), written in parentheses. Struck-through words are not considered as candidate generators.
### 3.3 Selected procedures
For the enumeration of \(4^{m}2^{n-p}\) designs, we compare three methods. First, we modify the method of Ryan and Bulutoglu (2010), such that the adapted DOP algorithm from Section 3.2 performs the extension step, and the NAUTY-based algorithm performs the reduction step. We call this method "DOP-NAUTY". However, the DOP method requires many WLP computations, and Xu and Wu (2001) showed that the WLP computation for mixed-level designs (such as \(4^{m}2^{n-p}\) designs) is more complex than for pure-level designs (such as \(2^{n}\) designs). In contrast, the search-table method does not involve WLP computations, because it considers candidates based on a fixed search table. Therefore, it might be faster than the DOP method. For this reason, we also consider a method using the search-table algorithm in the extension step and NAUTY to check isomorphism. We call it "ST-NAUTY". The third method we consider is a more general method that uses the MCS algorithm of Schoen et al. (2010) for the extension step and retains the regular non-isomorphic designs in the reduction step. We call that third method "MCS-regular". For reproducibility purposes, pseudo-codes for the ST-NAUTY and DOP-NAUTY methods are available in the supplementary materials of the paper. Pseudo-code for the benchmark MCS method is available in Schoen et al. (2010). The code used for the enumeration of the catalog is also available on Github at [https://github.com/ABohynDOE/enumeration_fatld](https://github.com/ABohynDOE/enumeration_fatld).
## 4 Enumeration results
### 4.1 Computing times
We apply the DOP-NAUTY, ST-NAUTY and MCS-regular methods to three test cases: (a) all 32-run designs of resolution III with one four-level factor, (b) all 64-run designs of resolution IV with one four-level factor, and (c) all 64-run designs of resolution IV with two four-level factors. Figure 1 shows the enumeration times for the three methods applied to the three test cases. For all test cases, the MCS-regular method requires the longest computing time to enumerate all \(4^{m}2^{n-p}\) designs. The time difference between the DOP-NAUTY method (represented by black bullets) and the ST-NAUTY method (represented by black triangles) initially increases with the number of two-level factors. From a certain
number of two-level factors onwards, the computing time difference starts to drop again due to the scarcity of \(4^{m}2^{n-p}\) designs with many two-level factors. In all test cases considered and for each number of two-level factors, the ST-NAUTY algorithm is the fastest of the three for the enumeration of \(4^{m}2^{n-p}\) designs, so we use this method to generate our catalog.
### 4.2 Catalog
Since we only construct four-level factors from basic factors, and we need two basic factors to construct a single four-level factor, we consider a maximum of \(\lfloor k/2\rfloor\) four-level factors for a design involving \(2^{k}\) runs. Using the ST-NAUTY algorithm, we completely enumerate all designs for the cases presented in Table 5.
For 16-run designs, it is impossible to generate regular designs with one four-level factor and more than 12 two-level factors or designs with two four-level factors and more than nine two-level factors. For the enumeration of 64-run designs with 1, 2, or 3 four-level factors, a resolution of at least III, and 17-20, 13-20, and 10-20 two-level factors, respectively, we use a bounded enumeration, as explained in Section 4.3, because the resolution-III designs
\begin{table}
\begin{tabular}{l r r r} \hline \hline Run size & \(m\) & \(R\) & Max. \(n\) \\ \hline
16 & 1 & III & 12 \\
16 & 2 & III & 9 \\
32 & 1,2 & III & 20 \\
64 & 1 & III & 16 \\
64 & 2 & III & 12 \\
64 & 3 & III & 9 \\
64 & 1,2,3 & IV & 20 \\
128 & 1,2,3 & IV & 20 \\ \hline \hline \end{tabular}
\end{table}
Table 5: All cases considered for the complete enumeration. For each case, we enumerate all \(4^{m}2^{n-p}\) designs of a specific run size, with a resolution of at least \(R\), and up to Max. \(n\) two-level factors.
Figure 1: Enumeration times for 32-run resolution III \(4^{1}2^{n-p}\) designs, 64-run resolution IV \(4^{1}2^{n-p}\) designs and 64-run resolution IV \(4^{2}2^{n-p}\) designs, where \(n\) represents the number of two-level factors. Methods: DOP-NAUTY (\(\bullet\)), ST-NAUTY (\(\blacktriangle\)), and MCS-regular.
for these numbers of two-level factors are too numerous.
Table 6 presents the numbers of non-isomorphic designs with a resolution of at least III for run sizes of 16, 32, and 64. Designs enumerated using the bounded enumeration are indicated with an asterisk. While Ankenman's 1999 catalog provided one design for each number of factors, we present all non-isomorphic 16-run and 32-run designs. We also tackle resolution-III 64-run designs and found more than 33 million designs, all of which are new. Table 7 presents the numbers of non-isomorphic designs with a resolution of at least IV for run sizes of 64 and 128. With this table, we present more than 6 million new designs. This offers the prospect of being able to tackle many future design-of-experiments problems in a systematic rather than an ad hoc fashion.
The catalog can be explored online through an interactive web app at [https://abohydhoe.shinyapps.io/fatldesign-selection-tool/](https://abohydhoe.shinyapps.io/fatldesign-selection-tool/). In the app, filters are provided to search the catalog based on run size, resolution, number of four-level factors, and number of two-level factors. For the selected designs, the columns needed to generate the design matrices, the complete word length patterns, the \(\mathbf{A}_{3}^{m}\) vectors, the \(\mathbf{A}_{4}^{m}\) vectors, and the \(\mathbf{A}_{5}^{m}\) vectors are shown. All designs are also available from the authors.
### 4.3 Bounded enumeration
Even with our efficient ST-NAUTY algorithm, enumerating all non-isomorphic \(4^{m}2^{n-p}\) designs becomes computationally prohibitive for large values of \(n\). This proves to be the case with our enumeration of 64-run designs of resolution III, where, for certain numbers of two-level factors, more than four million non-isomorphic designs are enumerated. From that point onwards, we extend designs using an upper bound for values contained within the \(\mathbf{A}_{3}\) vector in order to reduce the number of candidate designs generated. We define the bound as \(\boldsymbol{\delta}_{m,n}^{3}=\left(\delta_{m,n,0}^{3},\ldots,\delta_{m,n,m}^{3 }\right)\), where \(\delta_{m,n,t}^{3}\) is the upper bound on the number of words of length 3 and type \(t\) for \(4^{m}2^{n-p}\) designs. More specifically, a candidate \(4^{m}2^{(n+1)-(p+1)}\) design of resolution III with \(\mathbf{A}_{3}=(A_{30},\ldots,A_{3m})\) is only added to the complete set \(\tilde{C}_{m,n+1,p+1}\) and submitted to the reduction step with NAUTY if \(A_{3t}\leq\delta_{m,n+1,t}^{3}\) for all \(t\) from 0 to \(m\).
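The resulting filter is a component-wise comparison. The Python sketch below (illustrative only) applies it with the \(m=2\), \(n=20\) bound from Table 8, with both vectors ordered as \(\left(A_{30},\ldots,A_{3m}\right)\).

```python
def within_bound(A3, delta3):
    """Accept a resolution-III candidate only if A_3t <= delta_3t for
    every type t = 0, ..., m."""
    return all(a <= d for a, d in zip(A3, delta3))

delta = (0, 16, 0)                      # bound for m = 2, n = 20 (Table 8)
print(within_bound((0, 12, 0), delta))  # True: the candidate is kept
print(within_bound((1, 10, 0), delta))  # False: the candidate is rejected
```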
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{\(n\)} & \multicolumn{2}{c}{\(N\) = 16} & \multicolumn{2}{c}{\(N\) = 32} & \multicolumn{2}{c}{\(N\) = 64} \\ \cline{2-9} & \(m\) = 1 & \(m\) = 2 & \(m\) = 1 & \(m\) = 2 & \(m\) = 1 & \(m\) = 2 & \(m\) = 3 \\ \hline
[MISSING_PAGE_POST]
\hline \hline \end{tabular}
*Number of designs obtained using bounded enumeration
\end{table}
Table 6: Numbers of non-isomorphic \(4^{m}2^{n-p}\) designs of resolution III with \(N=16\), \(N=32\) and \(N=64\) runs.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(n\)} & \multicolumn{4}{c}{\(N=64\)} & \multicolumn{4}{c}{\(N=128\)} \\ \cline{2-7} & \(m=1\) & \(m=2\) & \(m=3\) & \(m=1\) & \(m=2\) & \(m=3\) \\ \hline
[MISSING_PAGE_POST]
\hline \hline \end{tabular}
\end{table}
Table 7: Numbers of non-isomorphic \(4^{m}2^{n-p}\) designs of resolution IV with \(N=64\) and \(N=128\) runs.
We enumerate designs with up to 20 two-level factors, so we define the \(\boldsymbol{\delta}_{m,n}^{3}\) values for \(n\leqslant 20\) as follows. We construct \(4^{m}2^{20-p}\) designs from all \(2^{(20+2m)-p}\) designs from Chen et al. (1993). Let \(D_{m,20}\) be the minimum aberration design of type \(m\) among all the \(4^{m}2^{20-p}\) designs thus created. We use the \(\mathbf{A}_{3}\) vector of \(D_{m,20}\) to define \(\boldsymbol{\delta}_{m,20}^{3}\). Next, we consider all 20 delete-one-factor projections (DOPs) of \(D_{m,20}\), and take the one with the worst \(\mathbf{A}_{3}\) vector. The worst \(\mathbf{A}_{3}\) vector is defined as the one that sequentially maximizes \(A_{3t}\) for \(t\) going from \(m\) to zero. This worst \(\mathbf{A}_{3}\) defines \(\boldsymbol{\delta}_{m,19}^{3}\). So, we do not extend any \(4^{m}2^{19-(p-1)}\) design with \(A_{3t}>\delta_{m,19,t}^{3}\) for \(t=0,\ldots,m\). If there is more than one design with the worst \(\mathbf{A}_{3}\) vector, we perform an isomorphism check and keep the non-isomorphic designs with the worst \(\mathbf{A}_{3}\) vector in a set denoted by \(S_{m,19}\). For the bound on \(4^{m}2^{18-(p-2)}\) designs, we then consider all DOPs of all \(4^{m}2^{19-(p-1)}\) designs in \(S_{m,19}\). The procedure continues until we reach a value of \(n\) for which the enumeration was complete.
There are two advantages to this bound definition. First, we ensure that the bound will never be so restrictive that no designs are enumerated. Second, we ensure that the enumerated designs are at least as good in terms of type-\(m\) aberration as the ones that we constructed as a reference to create the bound. By focusing on aberration of type \(m\), we favor designs that minimize the aliasing involving the four-level factors, which cannot be easily constructed from existing two-level designs.
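A sketch of how the bound could be propagated from \(n\) to \(n-1\) two-level factors is given below; the `dop` and `a3_of` helpers are hypothetical stand-ins for the design-manipulation routines, and only the sequential-maximization order mirrors the text.

```python
from typing import Callable

def worst_a3(designs: list, a3_of: Callable) -> tuple:
    # The worst A3 vector sequentially maximizes A_{3t} for t = m, ..., 0,
    # i.e., vectors are compared lexicographically from the highest type down.
    return max((tuple(a3_of(d)) for d in designs),
               key=lambda a3: tuple(reversed(a3)))

def next_bound(reference_designs: list, dop: Callable, a3_of: Callable) -> tuple:
    # The bound for n - 1 factors is the worst A3 vector among all
    # delete-one-factor projections (DOPs) of the current reference designs.
    projections = [p for d in reference_designs for p in dop(d)]
    return worst_a3(projections, a3_of)
```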
Table 8 provides the bound values for \(4^{m}2^{n-p}\) designs of resolution III with 64 runs and \(m=1,2\) and 3, whenever the bounded enumeration was used. In a supplementary table, we present details on the number of candidate designs that are rejected prior to the isomorphism reduction step because at least one entry of their \(\mathbf{A}_{3}\) vector exceeds the bound.
## 5 Applications
In this section, we revisit the two motivating examples for this paper. More specifically, we investigate whether the ad hoc designs used can be improved upon by one or more designs in our catalog.
\begin{table}
\begin{tabular}{c c c c} \hline \(n\) & \(m=1\) & \(m=2\) & \(m=3\) \\ \hline
20 & (0, 5) & (0, 16, 0) & (0, 22, 6, 0) \\
19 & (0, 5) & (0, 16, 0) & (0, 21, 6, 0) \\
18 & (0, 5) & (0, 16, 0) & (0, 20, 6, 0) \\
17 & (0, 5) & (0, 16, 0) & (0, 19, 6, 0) \\
16 & - & (0, 16, 0) & (0, 18, 6, 0) \\
15 & - & (0, 14, 0) & (0, 16, 6, 0) \\
14 & - & (0, 13, 0) & (0, 14, 6, 0) \\
13 & - & (0, 12, 0) & (0, 11, 6, 0) \\
12 & - & - & (0, 9, 6, 0) \\
11 & - & - & (0, 7, 6, 0) \\
10 & - & - & (0, 6, 6, 0) \\
9 & - & - & - \\ \hline \end{tabular}
\end{table}
Table 8: \(\boldsymbol{\delta}_{m,n}^{3}\) bounds on the \(\mathbf{A}_{3}\) vectors for \(4^{m}2^{n-p}\) designs of resolution III and run size 64, given as \((\delta_{m,n,0}^{3},\ldots,\delta_{m,n,m}^{3})\). Extension steps for which the bounded enumeration was not used are indicated with a dash (-).
### Chemical etching experiment
The first practical case presented by Schoen (1999) is a chemical etching experiment involving 32 runs with two four-level factors and 12 two-level factors. In this experiment, a run involves chemical synthesis of a catalyst performed on a gauze in an autoclave. Since the synthesis takes time to prepare and to run, the 32 runs were divided over eight days. Four of the two-level factors were varied between the days, while the remaining eight two-level factors as well as the two four-level factors were varied within the days. Therefore, the experiment had a split-plot structure with days as whole plots and syntheses as sub-plots.
Table 6 shows that there are 5423 non-isomorphic \(4^{2}2^{12-11}\) designs. We made a preliminary inventory of these designs in terms of the word length patterns of type 0 and type 2. Tables 9 and 10 show the number of words of length 3 and length 4 of the best \(4^{2}2^{12-11}\) designs in terms of aberration of type 0 and aberration of type 2, respectively. The split-plot structure of the experiment requires four two-level factors to be varied between eight whole plots, defined by the eight days. This means that, for the whole plots, a \(2^{4-1}\) design is needed. Therefore, any \(4^{2}2^{12-11}\) design with at least one word of length 4 and type 0 can be used for the experiment. All designs presented in Tables 9 and 10 have at least 38 of these words, so that each of them can accommodate the split-plot structure of the experiment. It turns out that the design used by Schoen (1999) for the experiment has minimum aberration of type 0; it is design 5381, the first one in Table 9. The design has no words of length 3 and type 0, meaning that no main effects of a two-level factor are aliased with a two-factor interaction involving other two-level factors, and 10 words of length 3 and type 1, meaning that 10 two-factor interactions involving two-level factors are aliased with a main effect of a four-level factor. The design also has four words of length 3 and type 2, so that the main effects of four two-level factors are aliased with an interaction between the four-level factors.
The design used by Schoen (1999) does not have minimum aberration of type 2. More specifically, there are 1356 designs that are better in terms of aberration of type 2. For this reason, it is not shown in Table 10. It has four words of length 3 and type 2, while the design with minimum aberration of type 2 has no such words. The absence of words
of length 3 and type 2 is a useful feature if the four-level factors are likely to interact with each other because the main effects of the two-level factors are then not aliased with the interaction among the four-level factors. This demonstrates the utility of having a catalog of \(4^{2}2^{12-11}\) designs at hand.
### Cheese-making experiment
The second practical case presented by Schoen (1999) is a 10-factor cheese-making experiment involving 128 runs with one four-level factor and nine two-level factors. In this experiment, milk was distributed over several curds production tanks. In each of these tanks, the curds were formed and transported to presses forming the individual cheeses.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Rank & ID & \(A_{32}\) & \(A_{31}\) & \(A_{30}\) & \(A_{42}\) & \(A_{41}\) & \(A_{40}\) \\ \hline
1 & 3346 & 0 & 24 & 0 & 42 & 0 & 39 \\
2 & 1286 & 0 & 25 & 0 & 41 & 0 & 38 \\
3 & 159 & 0 & 26 & 0 & 40 & 0 & 38 \\
4 & 162 & 0 & 26 & 0 & 40 & 0 & 39 \\
5 & 161 & 0 & 27 & 0 & 39 & 0 & 38 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Number of words of length 3 and length 4 of the best 32-run \(4^{2}2^{12-11}\) designs according to the aberration of type 2.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Rank & ID & \(A_{30}\) & \(A_{31}\) & \(A_{32}\) & \(A_{40}\) & \(A_{41}\) & \(A_{42}\) \\ \hline
1 & 5381 & 0 & 10 & 4 & 38 & 68 & 24 \\
2 & 4902 & 0 & 17 & 6 & 38 & 34 & 13 \\
3 & 3633 & 0 & 18 & 5 & 38 & 34 & 13 \\
4 & 3632 & 0 & 18 & 6 & 38 & 34 & 12 \\
5 & 4901 & 0 & 18 & 6 & 39 & 32 & 12 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Number of words of length 3 and length 4 of the best 32-run \(4^{2}2^{12-11}\) designs according to the aberration of type 0.
Seven factors were varied over the curds production tanks, while three factors were varied at the level of the individual cheeses. Due to various practical limitations, the experiment had to be conducted using a split-plot experimental design involving 32 whole plots, corresponding to the curds productions, each containing four sub-plots corresponding to the individual cheeses.
Due to our complete enumeration, we have a catalog of all non-isomorphic \(4^{1}2^{9-4}\) designs involving 128 runs. Table 7 shows that there are 263 such designs. We evaluated each of these designs in terms of the word length pattern of type 0, and the word length pattern of type 1. Table 11 shows the number of words of length 4 and length 5, of the five best 128-run \(4^{1}2^{9-4}\) designs, according to the aberration of type 1. Schoen (1999) actually used design 222, the third best, with \(A_{40}=1,A_{41}=0,A_{50}=2\) and \(A_{51}=5\). The split-plot structure of the experiment required seven whole-plot factors to be studied over 32 curds productions. This means that for the whole plots, a \(2^{7-2}\) design was needed, which can at best have a resolution of IV (Chen et al., 1993). The minimum aberration 32-run design with seven two-level factors has \(A_{4}=1\) and \(A_{5}=2\). So, the cheese-making design must have at least one word of length 4 and type 0, and at least two words of length 5 and type 0. This is incompatible with designs 262, 263, and 230 in Table 11. The design that was used by Schoen (1999) thus has the best aberration of type 1, given the split-plot structure.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Rank & ID & \(A_{41}\) & \(A_{40}\) & \(A_{51}\) & \(A_{50}\) \\ \hline
1 & 262 & 0 & 0 & 6 & 2 \\
2 & 263 & 0 & 0 & 9 & 0 \\
3 & 222 & 0 & 1 & 5 & 2 \\
4 & 230 & 0 & 1 & 6 & 1 \\
5 & 223 & 0 & 1 & 6 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Number of words of length 4 and length 5 of the best 128-run \(4^{1}2^{9-4}\) designs according to the aberration of type 1.
Schoen's design is the 38th best in terms of aberration of type 0, among the 263 \(4^{1}2^{9-4}\) designs in the catalog. However, none of the 37 better designs can accommodate the split-plot structure, because none of them has both the required word of length 4 and type 0 and the two required words of length 5 and type 0. We conclude that the design used in the cheese-making experiment has the smallest aberration both of type 0 and type 1, given the split-plot structure.
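As an illustration, the split-plot screening in this example reduces to a simple filter over the catalog entries of Table 11: a design is compatible only if \(A_{40}\geqslant 1\) and \(A_{50}\geqslant 2\). The dictionary layout below is an assumed representation of the catalog.

```python
def splitplot_compatible(design: dict) -> bool:
    # Whole-plot requirement inherited from the 2^{7-2} design: at least one
    # word of length 4 and type 0, and at least two words of length 5 and type 0.
    return design["A40"] >= 1 and design["A50"] >= 2

catalog = [  # the five designs of Table 11
    {"id": 262, "A41": 0, "A40": 0, "A51": 6, "A50": 2},
    {"id": 263, "A41": 0, "A40": 0, "A51": 9, "A50": 0},
    {"id": 222, "A41": 0, "A40": 1, "A51": 5, "A50": 2},
    {"id": 230, "A41": 0, "A40": 1, "A51": 6, "A50": 1},
    {"id": 223, "A41": 0, "A40": 1, "A51": 6, "A50": 2},
]
print([d["id"] for d in catalog if splitplot_compatible(d)])  # [222, 223]
```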
### Sensor fields simulation experiment
Katic (2011) discussed a sensor fields simulation experiment involving 32 runs, two four-level factors, and five two-level factors. The experiment was intended to determine the ideal position for one or more undersea sensors given the environmental conditions of the sea, the features of the sensor, and the characteristics of the object entering the sensor field. In the experiment, Katic (2011) wanted to quantify how the different inputs in the program affect the placement of the sensors. The initial position and direction of an object entering the sensor field are both modeled using a probability density function (PDF) that can take four different forms. For this reason, two four-level factors were used to represent the four possible PDFs for the initial position and direction. The five two-level factors represent other features of the sensors and of the object entering the sensor field.
To generate the design, the author started with the minimum aberration \(2^{9-4}\) design and created two four-level factors, \(A=(b,c)\) and \(B=(d,e)\), using the grouping scheme of Wu (1989). The final design obtained is a 32-run \(4^{2}2^{5-4}\) design with \(\mathbf{A}_{3}^{0}=(0,1,1)\), \(\mathbf{A}_{4}^{0}=(0,4,5)\), and \(\mathbf{A}_{5}^{0}=(0,1,2)\). Table 6 shows that there are 109 non-isomorphic \(4^{2}2^{5-4}\) designs with 32 runs. Table 12 shows the number of words of lengths 3 and 4 for the five best \(4^{2}2^{5-4}\) designs in terms of aberration of type 0. We see that the design used in this experiment is design 62 and that it is only the fourth best in terms of aberration of type 0. Wu and Zhang (1993) and Ankenman (1999) both provide a design with minimum aberration of type 0 so that Katic (2011) could have used this design for their research. It is isomorphic to design 109 in our catalog.
Minimizing aberration of type 0 minimizes the confounding between main effects and two-factor interactions involving two-level factors. However, if it is likely that the four possible
PDFs for the initial position and direction interact, it could be useful to minimize the confounding of the main effects of the two-level factors with the interaction among the four-level factors. This is achieved by minimizing the aberration of type 2. To look for designs that minimize the aberration of type 2, we evaluated the 109 non-isomorphic \(4^{2}2^{5-4}\) designs with 32 runs in terms of the word length pattern of type 2. They are presented in Table 13. Design 80 has minimum aberration of type 2, with \(\mathbf{A}_{3}^{2}=(0,2,0)\) and \(\mathbf{A}_{4}^{2}=(0,0,8)\), while design 62, i.e., the one used by Katic (2011), ranks 16th in terms of aberration of type 2. Like design 62, design 80 has no words of length 3 and type 0, but it has 8 words of length 4 instead of 9. In conclusion, both design 109 and design 80 are better than design 62 in terms of aberration. In any case, both options can be found in our catalog.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Rank & ID & \(A_{32}\) & \(A_{31}\) & \(A_{30}\) & \(A_{42}\) & \(A_{41}\) & \(A_{40}\) \\ \hline
1 & 80 & 0 & 2 & 0 & 8 & 0 & 0 \\
2 & 43 & 0 & 2 & 0 & 8 & 0 & 1 \\
3 & 41 & 0 & 3 & 0 & 7 & 0 & 0 \\
4 & 42 & 0 & 3 & 0 & 7 & 0 & 1 \\
5 & 10 & 0 & 4 & 0 & 6 & 0 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 13: Number of words of length 3 and length 4 of the best 32-run \(4^{2}2^{5-4}\) designs according to the aberration of type 2.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Rank & ID & \(A_{30}\) & \(A_{31}\) & \(A_{32}\) & \(A_{40}\) & \(A_{41}\) & \(A_{42}\) \\ \hline
1 & 109 & 0 & 0 & 1 & 1 & 4 & 6 \\
2 & 106 & 0 & 0 & 2 & 1 & 4 & 4 \\
3 & 107 & 0 & 0 & 2 & 0 & 4 & 4 \\
4 & 62 & 0 & 1 & 1 & 0 & 4 & 5 \\
5 & 79 & 0 & 1 & 1 & 0 & 3 & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 12: Number of words of length 3 and length 4 of the best 32-run \(4^{2}2^{5-4}\) designs according to the aberration of type 0.
## 6 Discussion
In this paper, we developed an efficient enumeration procedure for regular \(4^{m}2^{n-p}\) designs. Using this procedure, we created an extensive catalog of such designs for run sizes of 16, 32, 64, and 128, with one, two, or three four-level factors and up to 20 two-level factors. For a few cases, we could not enumerate all non-isomorphic designs. However, for all of these cases, we were able to provide a set of good designs rather than just the minimum aberration designs. To do so, we implemented a bounded enumeration that is restrictive enough for the enumeration to be computationally feasible while still providing a good subset of all possible designs. The entire catalog we obtained is a major addition to the currently available catalogs for \(4^{m}2^{n-p}\) designs (Wu and Zhang, 1993; Ankenman, 1999). Furthermore, we built an interactive web application that allows anyone to browse through the catalog and find a design based on several criteria.
The catalog opens up possibilities for future work. As several millions of designs are now available, a precise characterization of these designs would be helpful. The possibility of optimizing the designs according to multiple criteria could help researchers better choose the design that fits their experimental needs. Looking back at the cheese-making example, the main argument for the choice of the design was the split-plot layout imposed by the experimental conditions. So, interesting follow-up work would be to characterize the designs' potential to be run as a split-plot design.
Another restriction in the randomization, often seen in screening experiments, is when designs have to be blocked. In some cases, there is one blocking factor (Sun et al., 1997; Martono et al., 2015; Schoen et al., 2019), while, in other cases, there are two blocking factors (Godolphin, 2019; Vo-Thanh et al., 2020). Thus, it would also be interesting to study the different blocking schemes that could be applied to our newly enumerated designs.
**ACKNOWLEDGMENTS**
We would like to thank Hongquan Xu as well as Kenneth Ryan for providing us with the code from their respective papers and taking the time to answer our questions. The
authors would also like to thank the anonymous referees for improving this paper with their constructive comments.
**CONFLICT OF INTEREST**
None declared.
**FUNDING**
This research was funded by the FWO.
**DATA AVAILABILITY**
The designs are accessible through a web app at [https://abohydnoe.shinyapps.io/fatdesign-selection-tool/](https://abohydnoe.shinyapps.io/fatdesign-selection-tool/). The code to reproduce the methodology of the paper is available online at the GitHub repository [https://github.com/ABohynDOE/enumeration_fatld](https://github.com/ABohynDOE/enumeration_fatld).
**SUPPLEMENTARY MATERIAL**
**pseudocode.pdf:** Pseudo-code schemes for the ST-NAUTY and DOP-NAUTY methods.
## Appendix A Proofs
**Lemma A.1**.: _Consider a \(4^{m}2^{(n+1)-(p+1)}\) design \(D\) involving \(N\) runs. Let \(D_{(i)}\) be the \(4^{m}2^{n-p}\) subdesign obtained by deleting the \(i\)th two-level column of \(D\). If \(D_{(i)}\) has minimum aberration among all the subdesigns of \(D\) then the \(i\)th column is a product of some two-level columns and thus \(D_{(i)}\) has \(N\) distinct runs._
Proof.: Suppose that the result is not true. Then, the \(i\)th column is independent of the other columns and thus does not appear in the defining relation of \(D\). In that case, we can choose another column that does appear in the defining relation; deleting that other column would yield a design having less aberration than \(D_{(i)}\), which is a contradiction.
**Lemma A.2**.: _For any \(4^{m}2^{n-p}\) design \(D\) in the minimum complete set \(C_{m,n,p}^{R}\), adding a two-level column yields a \(4^{m}2^{(n+1)-(p+1)}\) candidate design \(D_{c}\). Such a candidate can be discarded if its resolution is lower than \(R\) or if \(D\) does not have MA among all DOPs of \(D_{c}\). The resulting set, \(\widetilde{C}_{m,n+1,p+1}^{R}\), is a complete set._
Proof.: For a \(4^{m}2^{n-p}\) design \(D\) involving \(2^{k}\) runs, the \(m\) four-level factors can be decomposed into \(m\) mutually exclusive triplets of two-level pseudo-factors of the form \((a_{1},a_{2},a_{3})\), where \(a_{3}=a_{1}a_{2}\). Therefore, \(D\) involves \(3m+n\leqslant 2^{k}-1\) two-level factors. We restrict our attention to the case where the four-level subdesign is a full factorial design that is possibly replicated, that is when \(2m\leqslant k\). In that case, each triplet of pseudo-factors consists of a pair of basic factors and their Hadamard product. As a result, there are \(k^{\prime}=k-2m\) further basic factors in the design.
If \(n\leqslant k^{\prime}\), then the basic factors not involved in the definition of the four-level factors are used to define additional two-level factors. If \(n>k^{\prime}\), we show by induction that every possible \(4^{m}2^{n-p}\) design in \(2^{k}\) runs is isomorphic to a design in \(C_{m,n,p}\) obtained with the DOP procedure.
That this is true for \(n=k^{\prime}+1\) is trivial, since the parent design is a full factorial design. Any additional column is then a product of basic factors, and the parent design must have MA among all DOPs. Now, suppose that Lemma A.2 is true for \(n=k^{\prime}+l\). Consider \(n+1=k^{\prime}+l+1\). Let \(D_{c}=(C_{1},\ldots,C_{m},c_{1},\ldots,c_{n+1})\) be a \(4^{m}2^{(n+1)-(p+1)}\) candidate design involving \(N\) runs, where \(C_{i}\) is the \(i\)th four-level factor of the design and \(c_{i}\) is the \(i\)th two-level factor of the design. Suppose that \(D_{c(n+1)}\) has MA among all DOPs of \(D_{c}\). Lemma A.1 implies that \(D_{c(n+1)}\) has \(N\) distinct runs. By the assumption for \(n=k^{\prime}+l\), there exists a \(4^{m}2^{n-p}\) design \(D_{n}\) in \(C_{m,n,p}\) that is isomorphic to \(D_{c(n+1)}\). Let \(\kappa\), \(\rho\) and \(\sigma\) be the column permutations, row permutations and sign switches, respectively, that form the isomorphic map \(\pi\) from \(D_{c(n+1)}\) to \(D_{n}\), that is, \(D_{n}=\pi(D_{c(n+1)})\). Out of the three operations in \(\pi\), only \(\rho\) can be applied to the last column, so let \(\rho(c_{n+1})\) be the result of the row permutations \(\rho\) applied to \(c_{n+1}\). Now, by applying \(\pi\) to \(D_{c}\) we obtain the following result: \(\pi(D_{c})=\pi\left(D_{c(n+1)},c_{n+1}\right)=\left(\pi\left(D_{c(n+1)}\right),\pi\left(c_{n+1}\right)\right)=(D_{n},\rho(c_{n+1}))\). By definition, \(D_{n}\) is in \(C_{m,n,p}\). By Lemma A.1, column \(c_{n+1}\) of \(D_{c}\) is a product of some other columns of
\(D_{c}\) and since all columns are considered in the extension procedure, \(\rho(c_{n+1})\) is entertained in the extension procedure. Therefore, \(\pi(D_{c})\) is entertained in this modified construction procedure. This means that there is an isomorphic map from \(D_{c}\) to a design in \(C_{m,n+1,p+1}\) and that \(D_{c}\) is isomorphic to a design in \(C_{m,n+1,p+1}\). Observing that this reasoning can be applied to any value of \(n\) and \(l\) and that it is true for \(n=k^{\prime}+1\) completes the proof. \(\blacksquare\)
|
2310.13193 | Heterogeneous Graph Neural Networks for End-to-End Traffic Assignment
and Traffic Flow Learning | The traffic assignment problem is one of the significant components of
traffic flow analysis for which various solution approaches have been proposed.
However, deploying these approaches for large-scale networks poses significant
challenges. In this paper, we leverage the power of heterogeneous graph neural
networks to propose a novel data-driven approach for end-to-end traffic
assignment and traffic flow learning. Our model integrates an adaptive graph
attention mechanism with auxiliary "virtual" links connecting
origin-destination node pairs. This integration enables the model to capture
spatial traffic patterns across different links. By incorporating the
node-based flow conservation law into the overall loss function, the model
ensures that the prediction results comply with flow conservation principles,
resulting in highly accurate predictions for both link flow and flow-capacity
ratios. We present numerical experiments on urban transportation networks and
show that the proposed heterogeneous graph neural network model outperforms
other conventional neural network models in terms of convergence rate and
prediction accuracy. Notably, by introducing two different training strategies,
the proposed heterogeneous graph neural network model can also be generalized
to different network topologies. This approach offers a promising solution for
complex traffic flow analysis and prediction, enhancing our understanding and
management of a wide range of transportation systems. | Tong Liu, Hadi Meidani | 2023-10-19T23:04:09Z | http://arxiv.org/abs/2310.13193v2 | # Heterogeneous Graph Neural Networks for Data-driven Traffic Assignment
###### Abstract
The traffic assignment problem is one of the significant components of traffic flow analysis for which various solution approaches have been proposed. However, deploying these approaches for large-scale networks poses significant challenges. In this paper, we leverage the power of heterogeneous graph neural networks to propose a novel data-driven approach for traffic assignment and traffic flow learning. The proposed model is capable of capturing spatial traffic patterns across different links, yielding highly accurate results. We present numerical experiments on urban transportation networks and show that the proposed heterogeneous graph neural network model outperforms other conventional neural network models in terms of convergence rate, training loss, and prediction accuracy. Notably, the proposed heterogeneous graph neural network model can also be generalized to different network topologies. This approach offers a promising solution for complex traffic flow analysis and prediction, enhancing our understanding and management of a wide range of transportation systems.
keywords: traffic assignment, graph neural network, traffic flow prediction, flow conservation

Footnote †: journal: Transportation Research Part C: Emerging Technologies
## 1 Introduction
The traffic assignment problem, as a significant component of transportation network performance analysis, helps understand spatial and temporal traffic flow patterns for effective transportation network management and provides insights into bottlenecks and traffic congestion (Nie et al., 2004). Congestion is a major concern in urban areas, leading to increased travel times, fuel consumption, and environmental pollution. By identifying the bottlenecks in transportation networks, city planners can make strategic plans to increase road capacities and mitigate traffic congestion.
To this end, the objective of the traffic assignment problem is to determine the traffic flow distribution and identify traffic bottlenecks on a given road network. Numerous mathematical models have been proposed to tackle traffic assignment problems. The user equilibrium (UE) and system optimum (SO) principles are two fundamental and effective formulations of the traffic assignment problem. The UE principle, also known as Wardrop's first principle, states that drivers between each origin-destination (OD) pair cannot reduce their travel costs by unilaterally shifting to another route (Kuang et al., 2021). The system optimum assignment, which is based on Wardrop's second principle, emphasizes driver cooperation to minimize the total system travel time (Seliverstov et al., 2017). These models have been used extensively in practice to estimate traffic flow patterns and optimize transportation infrastructure investments. However, these approaches are computationally expensive for large-scale networks. Substantial efforts have been made to find efficient algorithms to solve this problem (Nie et al., 2004; Di Lorenzo et al., 2015). Furthermore, researchers have proposed various extensions to the UE and SO models, such as the stochastic user equilibrium (Damberg et al., 1996), multi-modal traffic assignment (Pi et al., 2019), and dynamic traffic assignment (Ben-Akiva et al., 2012). These models incorporate additional factors, such as travel time variability, driver behaviors, and network capacity constraints, to provide more realistic traffic assignment solutions. However, there is still a pressing need for more effective and accurate methods for traffic assignment problems, particularly for traffic planning and optimization.
To perform transportation performance analysis, the OD demand, a significant component of traffic assignment problems, must first be obtained. OD demand estimation has been widely studied in the transportation engineering literature. Statistical models are among the common approaches for OD estimation; they leverage multiple data sources to estimate vehicle flow between different locations in a network (Hazelton, 2008; Jin et al., 2014). Furthermore, the increasing availability of crowdsourced data, including GPS data and social media, enables us to extract OD demand and traffic flow data at a large scale and over extended periods (Toole et al., 2015; Zhang et al., 2020). Besides, neural network approaches (Tang et al., 2021; Zhang et al., 2021) have also been applied to OD estimation, which allows for the extraction of complex patterns from high-dimensional and noisy transportation data.
In recent years, data-driven approaches have leveraged the availability of large-scale transportation data and computational resources in various transportation problems. For instance, the recurrent neural network (RNN), including the long short-term memory (LSTM) and gated recurrent unit (GRU) networks, have been applied to different prediction tasks in transportation (Zhaowei et al., 2020; Fang et al., 2022). However, standalone RNNs lack the ability to learn spatial correlation. To overcome this weakness and capture the spatio-temporal relationship of traffic flows at different locations, Ye et al. (2019) combines convolutional neural networks (CNN) with LSTM to predict multiple transportation demands using real-world taxi and sharing bike demand data. Recently, the graph neural network (GNN) has emerged as a powerful tool in transportation performance analysis because of its capability to effectively model and analyze data represented in the form of a graph. Compared with RNNs and CNNs, a key advantage of GNNs is their ability to capture spatial and relational information inherent in transportation networks.
Graph neural networks have been shown to be effective in network modeling and management of transportation systems (Rahman and Hasan, 2022; Liu and Meidani, 2023a). However, these models have a few limitations. Firstly, the proposed GNN models can only be trained on a fixed topology and need to be re-trained whenever the topology of the road network changes. Additionally, these studies did not adequately consider the dynamic nature of traffic flow, e.g., when link capacities alter due to traffic accidents or damage to roadways. These drawbacks limit the adaptability and applicability of these models in real-world scenarios.
To address this challenge, in this paper, we propose a novel GNN-based approach for static traffic assignment. Specifically, we focus on predicting the flow-capacity ratio of different scenarios in the context of traffic assignment. In particular, to address the long-range effects caused by OD pairs that are multiple-hops apart, we create a heterogeneous GNN that consists of additional "virtual" links. The major contributions of this work are as follows: (1) a heterogeneous GNN model is proposed with "virtual" OD-links and the "real" roadway links, which incorporates the attention-based mechanism and message-passing mechanism; (2) the proposed model also integrates a physics-based loss based on the flow conservation law into the total loss function to accelerate the learning process; (3) the proposed model is generalizable to different road network topologies, link characteristics, and OD demands.
The remainder of this article is structured as follows. General backgrounds on the traffic assignment problem, the neural network and graph neural network models are presented in Section 2. Section 3 includes the explanation of the proposed heterogeneous graph neural network for traffic flow learning. Furthermore, the experiments with urban road networks and generalized synthetic networks are presented to demonstrate the accuracy and generalization capability of the proposed framework in Section 4. Finally, the conclusion and discussion of the proposed framework are presented in Section 5.
## 2 Technical Background
### Traffic Assignment Problem
The traffic assignment problem involves assigning traffic volumes or flows to each edge in the network. Given a transportation network represented as a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where nodes \(\mathcal{V}\) represent intersections of roads and edges \(\mathcal{E}\) represent roads or links connecting these locations, the general form of the traffic assignment problem can be written as an optimization task:
\[\min_{f}:\quad\sum_{e\in\mathcal{E}}Z_{e}(f_{e}), \tag{1}\]
where \(f_{e}\) and \(Z_{e}(f_{e})\) are the total flow and the link cost function on link \(e\), respectively. The link cost can be expressed in terms of travel time, travel distance, or other relevant factors, so the objective is to minimize the total link cost over the graph. The traffic assignment problem can have different formulations depending on the specific objectives and constraints considered in the optimization process. For instance, under the user equilibrium formulation, the traffic assignment problem can be solved by optimizing Beckmann's formulation (Beckmann et al., 1956):
\[\begin{split}\min:& z(x)=\sum_{e\in\mathcal{E}} \int_{0}^{x_{e}}t_{e}(\omega)\mathrm{d}\omega\\ \text{s.t.}&\sum_{k}f_{k}^{rs}=q_{rs},\ \forall r,s\in \mathcal{V},\\ & f_{k}^{rs}\geq 0,\ \forall k,r,s\in\mathcal{V},\\ & x_{e}=\sum_{rs}\sum_{k}f_{k}^{rs}\zeta_{e,k}^{rs},\forall e\in \mathcal{E},\end{split} \tag{2}\]
where \(t_{e}(\cdot)\) is the link travel time function, \(q_{rs}\) is the total demand from source \(r\) to destination \(s\), and \(f_{k}^{rs}\) represents the flow on the \(k^{\text{th}}\) path from \(r\) to \(s\). \(\zeta_{e,k}^{rs}\) is a binary value, which equals 1 when link \(e\) is on the \(k^{\text{th}}\) path connecting \(r\) and \(s\).
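To make the formulation concrete, the following is a minimal sketch of a user-equilibrium solver using the Frank-Wolfe algorithm with a BPR link travel time function; the three-link network, demand, and parameters are invented for illustration, and `networkx` is used for shortest paths.

```python
import networkx as nx

def bpr(t0, x, c, alpha=0.15, beta=4.0):
    # BPR travel time: t_e(x) = t0 * (1 + alpha * (x / c)^beta)
    return t0 * (1.0 + alpha * (x / c) ** beta)

# Invented three-link network: edge -> (free-flow time t0, capacity c).
edges = {("r", "a"): (2.0, 300.0), ("a", "s"): (2.0, 300.0), ("r", "s"): (5.0, 500.0)}
demand = {("r", "s"): 400.0}  # OD demand q_rs

flow = {e: 0.0 for e in edges}
for k in range(200):  # Frank-Wolfe iterations
    # All-or-nothing assignment on the current travel times.
    g = nx.DiGraph()
    for (u, v), (t0, c) in edges.items():
        g.add_edge(u, v, time=bpr(t0, flow[(u, v)], c))
    aon = {e: 0.0 for e in edges}
    for (r, s), q in demand.items():
        path = nx.shortest_path(g, r, s, weight="time")
        for u, v in zip(path[:-1], path[1:]):
            aon[(u, v)] += q
    # Convex combination step toward the all-or-nothing flows.
    step = 2.0 / (k + 2.0)
    flow = {e: (1.0 - step) * flow[e] + step * aon[e] for e in edges}

print({e: round(f, 1) for e, f in flow.items()})
```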
### Neural Network
Without loss of generality, we will start by considering a neural network with only one layer. Given a \(p\)-dimensional input vector \(\mathbf{h}^{k}\in\mathbb{R}^{p}\) and a \(q\)-dimensional output vector \(\mathbf{h}^{k+1}\in\mathbb{R}^{q}\), the single-layer neural network with index \(k\) can be expressed as:
\[\mathbf{h}^{k+1}=\sigma(\mathbf{h}^{k}\mathbf{W}_{k}+\mathbf{b}_{k}), \tag{3}\]
where \(\mathbf{W}_{k}\in\mathbb{R}^{p\times q}\) and \(\mathbf{b}_{k}\in\mathbb{R}^{1\times q}\) represent the weight and bias terms, respectively, and \(\sigma(\cdot)\) is a non-linear activation function. Theoretically, a single-layer neural network with a sufficiently large number of neurons can approximate any continuous function to arbitrary accuracy (Hornik et al., 1989). However, due to limitations in network width and dataset size, and the challenge of tuning parameters, a single-layer network rarely achieves top performance and is prone to overfitting and poor generalization. To alleviate these limitations, multiple neural network layers are stacked together to enhance expressiveness and capture complex hierarchical features.
### Graph Neural Network
Neural networks have shown remarkable performance in various applications. In most neural network applications, the input has a fixed, grid-like structure, often referred to as Euclidean data. However, non-Euclidean data structures, such as graph-structured data, are pervasive in many applications. The complexity and variability of graph-structured data make it difficult to model with conventional neural network architectures. To address this challenge, GNNs are specifically designed to handle graph-structured data. A GNN operates on the node features and edge features and learns to extract embeddings from nodes and edges, aiming to capture the underlying graph structure.
There are different types of graph neural network formulations. One of the popular approaches is the spectral approach (Wang and Zhang, 2022). Spectral graph convolution is a type of convolution operation on graph signals that uses the graph Fourier transform. It operates in the frequency domain and utilizes the eigenvalues and eigenvectors of the graph Laplacian to filter the node features. Given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with adjacency matrix \(\mathbf{A}\) and diagonal degree matrix \(\mathbf{D}=\text{diag}(\mathbf{A}\mathbf{1})\), the Laplacian matrix and normalized Laplacian matrix of the graph are defined as \(\mathbf{L}=\mathbf{D}-\mathbf{A}\) and \(\mathbf{L}_{\text{norm}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{L}\mathbf{D}^{-\frac{1}{2}}\), respectively. The spectral graph convolution is defined mathematically as:
\[g_{\theta}*\mathbf{x}=\mathbf{U}g_{\theta}(\mathbf{U}^{T}\mathbf{x}), \tag{4}\]
where \(g_{\theta}\) is a filter with learnable parameters \(\theta\), \(\mathbf{x}\in\mathbb{R}^{|\mathcal{V}|\times N_{F}}\) are the input features with \(|\mathcal{V}|\) nodes and \(N_{F}\) features per node, and \(\mathbf{U}\) is the matrix of eigenvectors of \(\mathbf{L}_{\text{norm}}\). The input signal is first transformed into the spectral
domain. The features are passed through the learnable filter and transformed back into the spatial domain. The graph spectral operator can be applied to graphs of varying sizes. As a different approach to modeling graph data, the graph attention network (GAT) learns graph features by computing attention scores for each node based on its own features and the features of its neighbors (Velickovic et al., 2017). The graph attention network computes the new node representation \(\mathbf{x}_{i}^{\prime}\) for each node \(i\) as follows:
\[\mathbf{x}_{i}^{\prime}=\sigma\left(\sum_{j=1}^{N}\alpha_{ij}\mathbf{W}_{x}\mathbf{x}_{j} \right), \tag{5}\]
where \(\sigma\) is an activation function, \(\mathbf{W}_{x}\) is a learnable weight matrix, and \(\alpha_{ij}\) is the attention weight assigned to the node \(j\) related to its neighbour node \(i\). The attention weights are computed as follows:
\[\alpha_{ij}=\frac{\exp(\sigma(\mathbf{a}^{T}[\mathbf{W}_{x}\mathbf{x}_{i}\oplus\mathbf{W}_{x}\mathbf{x}_{j}]))}{\sum_{k\in\mathcal{N}(i)}\exp(\sigma(\mathbf{a}^{T}[\mathbf{W}_{x}\mathbf{x}_{i}\oplus\mathbf{W}_{x}\mathbf{x}_{k}]))}, \tag{6}\]
where \(\mathbf{a}\) is a learnable weight vector, \(\mathcal{N}(i)\) is the set of neighboring nodes of node \(i\), and \(\oplus\) denotes the concatenation operation. The graph attention mechanism can be stacked into multiple layers, with each layer learning increasingly complex representations of the graph. The attention mechanism allows the network to learn the different importance of different nodes within a neighborhood, which can improve model performance.
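For illustration, a minimal single-head version of Equations (5) and (6) could be written in PyTorch as below; the edge-list encoding, initialization, and numerical-stability details are our own choices rather than a reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadGAT(nn.Module):
    """Minimal single-head graph attention layer (cf. Eqs. 5-6)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # W_x
        self.a = nn.Parameter(torch.randn(2 * out_dim))  # attention vector a

    def forward(self, x, src, dst):
        # x: (N, in_dim); src, dst: (E,) long tensors, one entry per edge j -> i.
        h = self.W(x)
        # Unnormalized logit per edge: LeakyReLU(a^T [W x_i || W x_j]).
        e = F.leaky_relu((torch.cat([h[dst], h[src]], dim=-1) * self.a).sum(-1))
        # Softmax over the incoming edges of each destination node i.
        e = torch.exp(e - e.max())
        denom = torch.zeros(x.size(0)).index_add_(0, dst, e)
        alpha = e / denom[dst].clamp(min=1e-16)
        # Attention-weighted sum of neighbor features (Eq. 5).
        out = torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        return F.elu(out)
```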
The aforementioned formulation is valid for homogeneous graphs, where all nodes and edges have the same semantic meaning. However, real-world graphs are not always homogeneous. For instance, in a literature citation graph, nodes can represent various entities such as papers, authors, and journals, while edges may denote different semantic relationships. When the graph contains different types of nodes or edges, it is considered a heterogeneous graph. Utilizing GNNs on heterogeneous graphs offers notable advantages over their homogeneous counterparts, particularly the ability to learn type-specific representations for each node and edge type (Wang et al., 2019; Fu et al., 2020). This allows for more accurate and targeted modeling of each entity and relationship, leading to improved performance on downstream tasks (Zhao et al., 2021). In the following sections, we will leverage the expressiveness of the heterogeneous graph neural network to estimate traffic flow performance under different OD demand settings.
## 3 Traffic Flow Learning using Heterogeneous Graph Neural Networks
In this section, we elaborate on the proposed architecture of the heterogeneous graph neural network for traffic flow learning. The details of the proposed model are shown in Fig. 1. It consists of the embedding preprocessing block, the heterogeneous graph attention block, and the prediction block. The details of each module are described as follows.
### Graph Construction & Feature Preprocessing
The heterogeneous graph \(\mathcal{G}=(\mathcal{V},\mathcal{E}_{r},\mathcal{E}_{v})\) for traffic flow learning consists of one type of node representing the intersections of road segments and two edge types: real links and virtual links. The real links represent the road segments in the road network, while the virtual links represent the auxiliary links between origin and destination nodes. The auxiliary links can be considered an edge augmentation technique to enhance feature updating. The node features \(\mathbf{X}_{n}\in\mathbb{R}^{|\mathcal{V}|\times(|\mathcal{V}|+2)}\) are the origin-destination demand of each node and the node geo-coordinates. The real link features \(\mathbf{X}_{e,r}\in\mathbb{R}^{|\mathcal{E}_{r}|\times 2}\) include the free-flow travel time and the link capacity, while the virtual link features \(\mathbf{X}_{e,v}\in\mathbb{R}^{|\mathcal{E}_{v}|\times 1}\) are binary values indicating the existence of OD demand between origin and destination nodes.
Furthermore, the original node features are often sparse and non-normalized. To address this issue, we employ a preprocessing step to encode the raw features into a lower-dimensional representation that captures the essential characteristics of the data while preserving the semantic information. The generated node feature embedding is \(\mathbf{X}_{n}^{0}\in\mathbb{R}^{|\mathcal{V}|\times N_{v}}\), where \(N_{v}\) is the embedding size. Similarly, the edge features are also normalized before being propagated in the message passing. The normalized edge features of real edges and virtual edges are denoted as \(\mathbf{X}_{e,r}^{0}\in\mathbb{R}^{|\mathcal{E}_{r}|\times 2}\) and \(\mathbf{X}_{e,v}^{0}\in\mathbb{R}^{|\mathcal{E}_{v}|\times 1}\), respectively. Additionally, it should be noted that the real links and virtual links may overlap in the heterogeneous graph, because a road and an OD demand can both exist between the same node pair.
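Since the experiments in Section 4 use DGL, a sketch of assembling such a heterogeneous graph might look as follows; the toy connectivity, demand values, and features are placeholders.

```python
import dgl
import torch

# Toy connectivity (placeholder): 4 intersections, 4 road links, 2 OD pairs.
real_src, real_dst = torch.tensor([0, 1, 2, 0]), torch.tensor([1, 2, 3, 2])
virt_src, virt_dst = torch.tensor([0, 1]), torch.tensor([3, 3])

g = dgl.heterograph({
    ("node", "real", "node"): (real_src, real_dst),     # road segments
    ("node", "virtual", "node"): (virt_src, virt_dst),  # OD auxiliary links
})

n = g.num_nodes("node")
od = torch.zeros(n, n)
od[0, 3], od[1, 3] = 120.0, 80.0           # placeholder OD demands
coords = torch.rand(n, 2)                  # placeholder geo-coordinates
g.nodes["node"].data["feat"] = torch.cat([od, coords], dim=1)  # (|V|, |V| + 2)

# Real links carry free-flow time and capacity; virtual links an OD indicator.
g.edges["real"].data["feat"] = torch.tensor([[1.0, 400.0]] * 4)
g.edges["virtual"].data["feat"] = torch.ones(2, 1)
```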
### Graph Spatial Features Extraction
As discussed in Section 2.3, effectively aggregating information from nodes and edges of different types is a major challenge for heterogeneous graphs. In order to address this challenge, we propose a novel approach that leverages attention mechanisms on both real links and virtual links. This allows our model to selectively focus on the most relevant information from each edge type, improving the overall performance of the model.
As the first step of spatial feature extraction, to enhance the modeling capability and capture more fine-grained relationships, we employ a key-query-value mechanism on node features in addition to the weighted sum operation. For single-head attention, the node features are transformed into key, query, and value matrices through linear transformations:
\[\mathbf{K},\mathbf{Q},\mathbf{V}=[\mathbf{W}_{K}\mathbf{X}_{n}^{0},\mathbf{W}_{Q}\mathbf{X}_{n}^{0},\mathbf{W}_{V}\mathbf{X}_{n}^{0}], \tag{7}\]
where \(\mathbf{W}_{K}\in\mathbb{R}^{N_{v}\times N_{h}}\), \(\mathbf{W}_{Q}\in\mathbb{R}^{N_{v}\times N_{h}}\), and \(\mathbf{W}_{V}\in\mathbb{R}^{N_{v}\times N_{h}}\) are learnable weight matrices. \(N_{h}\) is the output size of the linear transformation. The value matrix captures the learned representations and importance scores for each node. The query matrix represents the target node's characteristics, while the key matrix contains the features of neighboring nodes. The attention scores between the query vector and each key are calculated with the dot product, normalized by a softmax function to ensure that the weights sum up to one:
\[s_{ji}=\frac{\exp(\mathbf{q}_{j}^{T}\mathbf{k}_{i})}{\sum_{k\in\mathcal{N}(i)}\exp( \mathbf{q}_{k}^{T}\mathbf{k}_{i})}, \tag{8}\]
where \(\mathbf{k}_{i}\) and \(\mathbf{q}_{j}\) denote the \(i^{\text{th}}\) and \(j^{\text{th}}\) columns of matrices \(\mathbf{K}\) and \(\mathbf{Q}\), respectively. The attention scores are then used in the weighted sum of the corresponding values, allowing the model to focus on more informative features. When the edge consists of multiple features, the output of the attention layer \(\mathbf{R}\in\mathbb{R}^{|\mathcal{V}|\times N_{h}}\) is calculated using the attention scores from Equation 8, the value vectors, and the edge features from the neighbors of node \(i\):
\[\mathbf{R}=\bigoplus_{i=1}^{N}\left(\sum_{l=1}^{L}\sum_{k\in\mathcal{N}(i)}s_{ik} \mathbf{v}_{k}x_{ik,l}\right), \tag{9}\]
where \(\mathbf{v}_{k}\) denotes the \(k^{\text{th}}\) column of the value matrix \(\mathbf{V}\), and \(x_{ik,l}\) represents the \(l^{\text{th}}\) edge feature on link \((i,k)\). This allows the model to focus on the most relevant output vectors for a given input vector. For multi-head
Figure 1: The illustration of the heterogeneous graph neural network for traffic flow learning. The original node feature is transformed using multi-layer perception. Then, the node feature and edge feature are first passed through the virtual links and then passed through the real links. The flow-capacity ratio of each link is calculated using the source node feature, destination node feature, and edge feature.
attention, the process of Equations 7, 8, and 9 is repeated \(h\) times to get the multi-head attention outputs \([\mathbf{R}^{(0)},\mathbf{R}^{(1)},\ldots,\mathbf{R}^{(h-1)}]\), where \(h\) is the number of heads. The learned embedding of the original node features is then passed into a linear transformation layer. We concatenate the output tensors \(\mathbf{R}^{(0)},\mathbf{R}^{(1)},\ldots,\mathbf{R}^{(h-1)}\) along the feature dimension and multiply by a learnable weight matrix to produce the final output tensor \(\mathbf{O}\):
\[\mathbf{O}=\left(\bigoplus_{i=0}^{h-1}\mathbf{R}^{(i)}\right)\mathbf{W}_{o}. \tag{10}\]
where \(\mathbf{W}_{o}\in\mathbb{R}^{(N_{h}\times h)\times N_{o}}\) and \(N_{o}\) is the output dimension of the final node embedding of a single GNN layer.
As shown in Figure 1, multiple GNN layers are stacked to extract spatial features. The first GNN layer aggregates the node features along the virtual links, while the following GNN layers aggregate the node features along the real links. Compared with a homogeneous graph constructed with only real links, the link augmentation and message passing through virtual links can be considered a dimension-reduction technique that reduces the number of hops required for distant nodes to gather messages. As a result, fewer GNN layers are required for effective feature aggregation and the subsequent edge prediction in the heterogeneous graph.
### Graph Edge Prediction
To predict the transportation performance at the edge level, the source node embedding, the destination node embedding, and the real edge features are concatenated and passed into a multi-layer perceptron to obtain the flow prediction \(\tilde{f}_{e}\) of real link \(e\):
\[\tilde{f}_{e}=\texttt{MLP}([\mathbf{O}_{e,src}\oplus\mathbf{O}_{e,dst}\oplus\mathbf{X}_{ e,r}];\mathbf{W}_{e},\mathbf{b}_{e}), \tag{11}\]
where \(\mathbf{O}_{e,src}\) and \(\mathbf{O}_{e,dst}\) represent the source node embedding and destination node embedding, respectively, and \(\mathbf{W}_{e}\) and \(\mathbf{b}_{e}\) are the learnable parameters of the multi-layer perceptron. After generating the final prediction, selecting an appropriate loss function becomes crucial to ensure the model's effective convergence. The proposed model utilizes a twofold loss function. The first part is the supervised loss, which measures the discrepancy between prediction and ground truth:
\[L_{s}=\frac{1}{|\mathcal{E}_{r}|}\sum_{e\in\mathcal{E}_{r}}\|f_{e}-\tilde{f}_ {e}\|, \tag{12}\]
where \(f_{e}\) and \(\tilde{f}_{e}\) represent the ground truth and prediction of the flow on link \(e\in\mathcal{E}_{r}\). The second part of the loss function comes from the node-based flow conservation law, which states that the total flow of traffic entering a node equals the total flow of traffic exiting that node. The node-based flow conservation law can be represented mathematically as:
\[\sum_{k}f_{ki}-\sum_{j}f_{ij}=\Delta f_{i}=\begin{cases}&\sum_{v\in\mathcal{V }}O_{v,i}-\sum_{v\in\mathcal{V}}O_{i,v},\quad\text{ if }i\in\mathcal{V}_{OD},\\ &0\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \text{ otherwise },\end{cases} \tag{13}\]
where \(f_{ki}\) denotes the flow on link \((k,i)\), \(\Delta f_{i}\) represents the difference between the flow received and sent at node \(i\), and \(O_{v,i}\) represents the OD demand from \(v\) to \(i\). \(\mathcal{V}\) and \(\mathcal{V}_{OD}\) denote the entire node set and the origin-destination node set, respectively. The node-based flow conservation law can be used as a normalization or auxiliary loss to ensure that the predicted traffic flow at each node satisfies the flow conservation principle. One common way to incorporate the conservation law into the loss function is to define a residual loss function:
\[L_{f}=\sum_{i}\ |\sum_{k\in\mathcal{N}_{i}(i)}\tilde{f}_{ki}-\sum_{j\in \mathcal{N}_{o}(i)}\tilde{f}_{ij}-\Delta f_{i}|, \tag{14}\]
where \(\mathcal{N}_{i}(i)\) and \(\mathcal{N}_{o}(i)\) represent the incoming and outgoing edges of node \(i\), respectively. The normalization loss \(L_{f}\) measures how well the flow prediction satisfies the flow conservation law. Minimizing this loss function during training encourages the model to learn traffic flow patterns that satisfy the conservation law. The total loss for the flow prediction, \(L_{total}\), is the weighted summation of the supervised loss and the conservation loss:
\[L_{total}=w_{s}L_{s}+w_{f}L_{f}, \tag{15}\]
where \(w_{s}\) and \(w_{f}\) represent the normalized weights for the supervised loss and the conservation loss, respectively.
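Assuming links are stored as source/destination index tensors, Equations (12), (14), and (15) can be sketched in PyTorch as below; the scatter-style accumulation is an implementation choice.

```python
import torch

def total_loss(f_pred, f_true, src, dst, delta_f, w_s=1.0, w_f=0.001):
    # Supervised loss (Eq. 12): mean absolute discrepancy over real links.
    l_s = (f_pred - f_true).abs().mean()
    # Conservation residual (Eq. 14): per-node inflow minus outflow vs. delta_f.
    n = delta_f.size(0)
    inflow = torch.zeros(n).index_add_(0, dst, f_pred)   # sum_k f_ki
    outflow = torch.zeros(n).index_add_(0, src, f_pred)  # sum_j f_ij
    l_f = (inflow - outflow - delta_f).abs().sum()
    # Weighted total loss (Eq. 15).
    return w_s * l_s + w_f * l_f
```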
## 4 Numerical Experiments
To evaluate the accuracy, efficiency, and generalization capability of the proposed graph neural network, two numerical experiments are conducted. The first experiment is on urban transportation networks with synthetic traffic data. The second experiment is on multiple synthetic graphs with different topologies. Furthermore, we also synthesized variations in the OD demand and link capacity, as will be explained later.
### Experiments on Urban Transportation Networks
As case studies, three urban transportation networks are selected: the Sioux Falls network, the East Massachusetts network (EMA), and the Anaheim network. The information about the network topology, link capacity, and default OD demand of these networks is obtained from (Bar-Gera et al., 2023). The details of the network topologies are shown in Table 1. To create demand variation, we scaled the demand by a scaling factor according to
\[\tilde{O}_{s,t}=\delta^{o}_{s,t}\ O_{s,t}, \tag{16}\]
where \(O_{s,t}\) is the default OD demand between source \(s\) and destination \(t\) and \(\delta^{o}_{s,t}\sim U(0.5,1.5)\) is the uniformly distributed random scaling factor for the OD pair (\(s\), \(t\)). Additionally, to account for variations in network properties, variable link capacities are created according to
\[\tilde{c}_{a}=\delta^{c}_{a}\ c_{a}, \tag{17}\]
where \(c_{a}\) is the original link capacity for link \(a\), and \(\delta^{c}_{a}\) is the scaling factor for link \(a\). Capacity variations are considered to be due to traffic accidents, road construction/damage, and adverse weather conditions, all of which reduce the link capacity. In this work, three levels of capacity reduction are considered: (L) light disruption with \(\delta^{c}_{a}\sim U(0.8,1.0)\); (M) moderate disruption with \(\delta^{c}_{a}\sim U(0.5,1.0)\); and (H) high disruption with \(\delta^{c}_{a}\sim U(0.2,1.0)\). The size of the dataset for each network at each disruption level is 5000, which is split into a training set and a testing set with a ratio of 80% and 20%, respectively.
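The scenario-generation step of Equations (16) and (17), with the L/M/H bounds above, reduces to a few lines of NumPy; the function layout is our own.

```python
import numpy as np

rng = np.random.default_rng(0)
LEVELS = {"L": (0.8, 1.0), "M": (0.5, 1.0), "H": (0.2, 1.0)}

def sample_scenario(od_demand: np.ndarray, capacity: np.ndarray, level: str):
    # Eq. 16: scale every OD entry by an independent U(0.5, 1.5) factor.
    od_scaled = od_demand * rng.uniform(0.5, 1.5, size=od_demand.shape)
    # Eq. 17: scale every link capacity by the level-specific uniform factor.
    lo, hi = LEVELS[level]
    cap_scaled = capacity * rng.uniform(lo, hi, size=capacity.shape)
    return od_scaled, cap_scaled
```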
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Network Name & \(|\mathcal{V}|\) & \(|\mathcal{E}|\) & Average Degree & OD Demand \\ \hline Sioux Falls & 24 & 76 & 3.17 & 188,960 \\ EMA & 74 & 258 & 3.49 & 132,106 \\ Anaheim & 416 & 914 & 3.05 & 226,279 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The detail of urban transportation network. Three networks, including Sioux Falls, East Massachusetts, and Anaheim, are considered.
Figure 2: The illustrations of urban transportation networks, including Sioux Falls, EMA, and Anaheim. The link color represents the link capacity of each link.
The ground truth data used in the training and testing steps is obtained from the user equilibrium (UE)-based traffic assignment. Specifically, the traffic flow and flow-capacity ratio (v/c ratio) of each link are the quantities of interest. The GNN model is implemented using PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019). The preprocessing layer consists of a three-layer fully connected neural network with an embedding size of 64. The number of GNN layers in the proposed model is 3. The first layer is applied on virtual links, and the following layers are applied on real links. The number of heads in the attention block is 8. For hyper-parameter selection, the hidden layer size is chosen as 64, which is common in neural network implementations (Liu and Meidani, 2023). Rectified Linear Unit (ReLU) activation (Agarap, 2018) is chosen to introduce nonlinearity. Mini-batch stochastic gradient descent is used in the training process. More specifically, the adaptive moment estimation (Adam) optimizer (Kingma and Ba, 2014) is adopted with a learning rate of 0.001. The batch size for training is 128. The weights for the supervised loss and the conservation loss in Equation 15 are chosen as 1.0 and 0.001, respectively, in order to keep both loss terms in the same order of magnitude. We evaluated the performance of our proposed heterogeneous GNN model (referred to as HetGAT) and compared it with three benchmark models: a fully connected neural network (FCNN), a homogeneous graph attention network (GAT), and a homogeneous graph convolution network (GCN). The FCNN consists of five fully connected layers with an embedding size of 64. The GAT and GCN both have three graph message-passing layers, followed by a three-layer FCNN with an embedding size of 64. The metrics used to evaluate performance include the mean absolute error (MAE) and the normalized conservation loss \(L_{f}\), given by
\[\text{MAE}=\frac{1}{N}\sum_{i=1}^{N}|y_{i}-\tilde{y}_{i}|, \tag{18}\]
\[L_{f}=\frac{\sum_{i}|\sum_{k\in\mathcal{N}_{i}(i)}\tilde{f}_{ki}-\sum_{j\in \mathcal{N}_{o}(i)}\tilde{f}_{ij}-\Delta f_{i}|}{\sum_{s}\sum_{t}\tilde{O}_{s,t}}, \tag{19}\]
where \(y\) and \(\tilde{y}\) respectively represent the ground truth and predicted values of the flow-capacity ratio. Furthermore, we use the coefficient of determination, denoted by \(R^{2}\), to measure the goodness-of-fit between the GNN prediction and the ground truth. The training histories of the studied models are shown in Figure 3. The results indicate that the FCNN model performed poorly during training compared to the GNN-based models, as shown by its high training loss and early stagnation. In contrast, the GCN and GAT models exhibited similar convergence rates. Our proposed model outperformed both GCN and GAT in terms of training loss. Especially when used for larger networks, the proposed model demonstrated superior convergence performance compared to GCN and GAT; for the Anaheim network, the training loss of the proposed model is almost 1/3 of that of GAT in the first 25 iterations. This is because GCN and GAT only consider homogeneous edges, which limits the message passing to adjacent nodes. In contrast, the proposed GNN model uses virtual links that provide augmented connectivity to long-hop node pairs, which makes the node feature updating in HetGAT more efficient.
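For reference, the evaluation metrics of Equations (18) and (19), together with \(R^{2}\), can be sketched in NumPy as follows; the edge-index representation of the network is an assumption.

```python
import numpy as np

def mae(y_true, y_pred):
    return np.abs(y_true - y_pred).mean()  # Eq. 18

def r2(y_true, y_pred):
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def normalized_conservation(f_pred, src, dst, delta_f, total_demand):
    # Eq. 19: node-level flow imbalance normalized by the total OD demand.
    n = delta_f.shape[0]
    inflow = np.bincount(dst, weights=f_pred, minlength=n)
    outflow = np.bincount(src, weights=f_pred, minlength=n)
    return np.abs(inflow - outflow - delta_f).sum() / total_demand
```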
After the training is finished, the model performance is evaluated on the testing set. The experiments under the urban road network are conducted in three different settings. The first setting, referred to as
Figure 3: Training loss history under urban transportation network. Three benchmarks, including FCNN, GCN, and GAT, are compared with HetGAT.
LMH-LMH, involves using all levels of disruption (flow reduction scaling levels) in both the training and testing sets. The second setting, namely L-M, involves training the model on light disruption data and testing it on medium disruption data. The third case, which is labeled M-H, involves training on medium disruption data and testing on high disruption data. The L-M and M-H scenarios therefore involve unseen cases that do not exist in the training data. Figure 4 plots the predicted values and ground truth of the flow-capacity ratio on multiple samples in the Anaheim network under the LMH-LMH setting. In total, the flow-capacity ratios on 10,000 edges are predicted using HetGAT, GAT, and GCN, respectively. Figure 4 indicates that HetGAT has a relatively higher correlation coefficient and outperforms GAT and GCN. Table 2 summarizes the prediction performance of all methods under the different settings and shows that HetGAT offers better performance than the other models. When the graph size increases, the proposed model maintains a relatively low MAE compared to GCN and GAT. For instance, in the Anaheim network, HetGAT offers MAE values that are 39%, 19%, and 31% lower than the GAT values in the LMH-LMH, L-M, and M-H settings, respectively. Also, the conservation errors of HetGAT are 52%, 29%, and 55% smaller in the LMH-LMH, L-M, and M-H settings, respectively. This shows that the inclusion of virtual links can assist GNN models in better learning the traffic flow patterns.
As an additional experiment, we consider a realistic scenario where the demand values are not known for all OD pairs. To evaluate the accuracy and robustness of the proposed model under incomplete OD demand scenarios, we introduce a random mask to the original OD demand to simulate these missing demand scenarios.
\begin{table}
\begin{tabular}{c c c c c|c c c c|c c} \hline \hline \multirow{2}{*}{Network} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{LMH-LMH} & \multicolumn{3}{c}{L-M} & \multicolumn{3}{c}{M-H} \\ \cline{3-11} & & MAE & \(L_{f}\) & \(R^{2}\) & MAE & \(L_{f}\) & \(R^{2}\) & MAE & \(L_{f}\) & \(R^{2}\) \\ \hline \multirow{4}{*}{Sioux Falls} & FCNN & 0.158 & 0.324 & 0.728 & 0.139 & 0.062 & 0.812 & 0.215 & 0.238 & 0.643 \\ & GAT & 0.046 & 0.092 & 0.956 & **0.022** & 0.018 & **0.991** & 0.085 & 0.057 & 0.889 \\ & GCN & 0.046 & 0.096 & 0.955 & 0.023 & 0.019 & 0.990 & 0.086 & 0.060 & 0.889 \\ & HetGAT & **0.028** & **0.054** & **0.985** & 0.026 & **0.018** & 0.986 & **0.062** & **0.039** & **0.937** \\ \hline \multirow{4}{*}{EMA} & FCNN & 0.238 & 0.422 & 0.621 & 0.225 & 0.144 & 0.795 & 0.263 & 0.446 & 0.771 \\ & GAT & 0.063 & 0.131 & 0.952 & 0.046 & 0.033 & 0.991 & 0.132 & 0.090 & 0.842 \\ & GCN & 0.071 & 0.167 & 0.944 & 0.034 & 0.043 & 0.986 & 0.161 & 0.135 & 0.814 \\ & HetGAT & **0.038** & **0.090** & **0.982** & **0.034** & **0.027** & **0.989** & **0.090** & **0.055** & **0.921** \\ \hline \multirow{4}{*}{ Anaheim} & FCNN & 0.111 & 0.721 & 0.822 & 0.199 & 0.237 & 0.801 & 0.186 & 0.386 & 0.840 \\ & GAT & 0.051 & 0.263 & 0.951 & 0.042 & 0.081 & 0.958 & 0.090 & 0.184 & 0.881 \\ \cline{1-1} & GCN & 0.072 & 0.402 & 0.903 & 0.062 & 0.109 & 0.911 & 0.116 & 0.245 & 0.827 \\ \cline{1-1} & HetGAT & **0.031** & **0.126** & **0.981** & **0.034** & **0.059** & **0.979** & **0.062** & **0.082** & **0.939** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison of HetGAT with benchmark methods. Three different settings are included: LMH-LMH, L-M, and M-H. The mean absolute error, normalized conservation loss, and coefficient of determination are used to evaluate the prediction performance on the testing set.
Figure 4: Comparison of predicted flow-capacity ratio between ground truth and surrogate modeling in the Anaheim network under LMH-LMH setting.
Specifically, given a missing ratio, we randomly select a number of OD pairs and mask their corresponding OD demand values as zeros in the input node feature. This way, the model is expected to learn the inherent patterns and structures of the traffic network even when some of the demand information is missing. The effectiveness of the proposed model under incomplete OD demand scenarios is evaluated by comparing the predicted traffic flows against the ground truth data obtained from the complete OD demand. Three missing ratios are considered in the experiment: 20%, 30%, and 40%. The training setting and the hyperparameters remain the same as those in the aforementioned experiments. Table 3 summarizes the prediction performance under different missing-rate scenarios for the LMH-LMH setting. The FCNN model is not considered in these experiments because of its very poor performance in the previous experiments under full OD demand. As expected, the overall performance under incomplete OD demand drops compared to that in the complete demand scenario reported in Table 2. However, HetGAT still outperforms GAT and GCN under different networks and different missing ratios. In particular, its MAE values are about 50% smaller than those offered by GAT and GCN under various missing ratios. Also, the flow predictions by HetGAT have better compliance with the flow conservation law compared with GAT and GCN.
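A minimal sketch of this masking step is given below, assuming the OD demand is stored as an \(N\times N\) matrix that feeds the node features; the function name and the decision to mask only nonzero pairs are our assumptions.

```python
import numpy as np

def mask_od_demand(od_demand, missing_ratio, rng=None):
    # Zero out a random subset of OD pairs to simulate incomplete demand data.
    rng = np.random.default_rng() if rng is None else rng
    masked = od_demand.copy()
    pairs = np.argwhere(masked > 0)              # OD pairs that carry demand
    n_mask = int(missing_ratio * len(pairs))
    chosen = rng.choice(len(pairs), size=n_mask, replace=False)
    masked[pairs[chosen, 0], pairs[chosen, 1]] = 0.0
    return masked

# e.g., hide 30% of the OD pairs from the input node features
# od_masked = mask_od_demand(od_demand, 0.30)
```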
### Experiments on Generalized Synthetic Networks
In this set of experiments, we study the performance of the proposed model when generalized to networks with different topologies. To do so, we generate synthetic networks by starting with a grid graph and adding links between randomly selected nodes in the grid. Then, a few nodes and edges are randomly removed to emulate real road networks and increase the variability. It should be noted that the proposed HetGAT involves a node feature whose dimension is equal to the total number of nodes in the graph. Therefore, our model can be generalized to different graphs as long as the total number of nodes remains the same; accordingly, the various topologies in the synthetic road networks all have the same number of nodes. Two sizes of the synthetic generalized dataset are included in the experiments: 100 and 300. For each graph size, 100 different graph topologies are generated; a sketch of this generation procedure is given below. Three of these randomly generated graphs are shown in Figure 5. The OD demand and the link capacity are also randomly generated using the scaling factor according to Equations 16 and 17, respectively. Furthermore, the UE-based traffic assignment is used to generate the ground truth data, similarly to Section 4.1.
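The sketch below illustrates one way to implement this generation recipe with NetworkX; the grid dimensions and removal counts are illustrative parameters (removing a fixed number of nodes keeps the node count identical across topologies, as the model requires), and a connectivity check is omitted for brevity.

```python
import networkx as nx
import numpy as np

def synthetic_network(rows, cols, n_extra=10, n_drop_nodes=2, n_drop_edges=3, seed=0):
    # Grid graph plus random shortcut links, with a few nodes/edges removed
    # to emulate real road networks and increase variability.
    rng = np.random.default_rng(seed)
    g = nx.convert_node_labels_to_integers(nx.grid_2d_graph(rows, cols))
    for _ in range(n_extra):
        u, v = rng.choice(list(g.nodes), size=2, replace=False)
        g.add_edge(int(u), int(v))
    for _ in range(n_drop_nodes):
        g.remove_node(int(rng.choice(list(g.nodes))))
    for _ in range(n_drop_edges):
        edges = list(g.edges)
        g.remove_edge(*edges[rng.integers(len(edges))])
    return g
```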
It should be noted that in fully connected neural networks (FCNN), the output dimension and the order of the entries must be fixed. In our synthetic networks, even though the number of nodes is fixed, the number of edges is not necessarily the same across different topologies. Because of this, FCNN is not suitable for experiments on generalized synthetic networks. Therefore, GAT and GCN are the only two models that are compared with the proposed HetGAT model.
In the training, the same hyperparameters used in the previous experiments are used here as well. Figure 6 presents the training history for the three models and shows that the GCN and GAT models struggle during the training process. In contrast, the HetGAT model can successfully learn the flow dynamics across multiple graphs, leading to a decrease in training loss over time. For a graph size of 100, GCN and GAT had training losses of 0.114 and 0.124, respectively, whereas HetGAT achieved a training loss of 0.032, representing a 70% improvement.
\begin{table}
\begin{tabular}{c c|c c c|c c c|c c c} \hline \hline \multirow{3}{*}{Network} & \multirow{3}{*}{Model} & \multicolumn{9}{c}{Missing Ratio} \\ \cline{3-11} & & \multicolumn{3}{c}{20\%} & \multicolumn{3}{c}{30\%} & \multicolumn{3}{c}{40\%} \\ \cline{3-11} & & MAE & \(L_{f}\) & \(R^{2}\) & MAE & \(L_{f}\) & \(R^{2}\) & MAE & \(L_{f}\) & \(R^{2}\) \\ \hline \multirow{3}{*}{Sioux Falls} & GAT & 0.048 & 0.098 & 0.954 & 0.049 & 0.102 & 0.953 & 0.049 & 0.099 & 0.952 \\ & GCN & 0.049 & 0.102 & 0.951 & 0.049 & 0.101 & 0.953 & 0.049 & 0.101 & 0.952 \\ & HetGAT & **0.033** & **0.068** & **0.981** & **0.034** & **0.069** & **0.979** & **0.033** & **0.068** & **0.982** \\ \hline \multirow{3}{*}{EMA} & GAT & 0.068 & 0.145 & 0.938 & 0.070 & 0.149 & 0.931 & 0.070 & 0.149 & 0.933 \\ & GCN & 0.079 & 0.200 & 0.936 & 0.079 & 0.204 & 0.935 & 0.077 & 0.188 & 0.939 \\ & HetGAT & **0.046** & **0.111** & **0.970** & **0.048** & **0.115** & **0.970** & **0.048** & **0.105** & **0.972** \\ \hline \multirow{3}{*}{Anaheim} & GAT & 0.067 & 0.152 & 0.924 & 0.061 & 0.135 & 0.933 & 0.071 & 0.156 & 0.914 \\ & GCN & 0.105 & 0.225 & 0.835 & 0.109 & 0.237 & 0.802 & 0.106 & 0.239 & 0.827 \\ \cline{1-1} & HetGAT & **0.035** & **0.060** & **0.975** & **0.034** & **0.069** & **0.969** & **0.034** & **0.058** & **0.977** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of the performance of HetGAT with that of GAT and GCN under incomplete OD demand. Three missing ratios are considered in the experiments: 20%, 30%, and 40%. The mean absolute error (MAE), normalized conservation loss (\(L_{f}\)), and the coefficient of determination, \(R^{2}\), are used to evaluate the prediction performance on the testing set.
The prediction performance metrics on the testing sets are compared in Table 4, where HetGAT achieved lower mean absolute errors than GAT and GCN for graph sizes of 100 and 300, with values of 0.033 and 0.051, respectively.
## 5 Conclusion and Discussion
In this paper, we proposed a novel approach for traffic assignment and traffic flow learning using heterogeneous graph neural networks. We conducted extensive experiments on three real-world traffic networks to evaluate the performance of the proposed heterogeneous GNN model and compared it with the state-of-the-art models. The results show that the proposed model outperforms other models in terms of convergence rate, training loss, and prediction accuracy.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Graph Size} & \multirow{2}{*}{Metric} & \multicolumn{3}{c}{Model} \\ \cline{3-5} & & GAT & GCN & HetGAT \\ \hline \multirow{3}{*}{100} & MAE & 0.143 & 0.179 & **0.033** \\ & \(L_{f}\) & 0.016 & 0.014 & **0.013** \\ & \(R^{2}\) & 0.602 & 0.372 & **0.979** \\ \hline \multirow{3}{*}{300} & MAE & 0.291 & 0.316 & **0.051** \\ & \(L_{f}\) & 0.018 & 0.016 & **0.013** \\ \cline{1-1} & \(R^{2}\) & 0.412 & 0.316 & **0.977** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of the performance of HetGAT with that of GAT and GCN on generalized synthetic networks. Two graph sizes of 100 and 300 are considered in the experiments. The mean absolute error (MAE), normalized conservation loss (\(L_{f}\)), and coefficient of determination (\(R^{2}\)) are used to evaluate the prediction performance on the testing set.
Figure 5: The illustrations of sampled generalized synthetic networks. The network size is 100. The link color represents the link capacity of each link.
Figure 6: Training loss history under generalized synthetic networks. Two network sizes of 100 and 300 are selected for training and testing. Two benchmarks including GCN and GAT are compared with HetGAT.
The proposed model can also be generalized to networks with different topologies or network parameters, which demonstrates its potential in real-world applications. Currently, the proposed HetGAT model only learns and predicts static traffic flow patterns. As a potential extension of this work, the proposed framework can be extended to learn dynamic traffic flow patterns. Furthermore, the current GNN model uses training data collected from conventional solvers of static traffic assignment. In future work, we will study how these models can be trained on traffic data collected from sensors, such as loop detectors, cameras, or GPS devices.
## 6 Acknowledgment
This work was supported in part by the National Science Foundation under Grant CMMI-1752302.
|
2305.17143 | The least eigenvalue of the complements of graphs with given
connectivity | The least eigenvalue of a graph $G$ is the least eigenvalue of adjacency
matrix of $G$. In this paper we determine the graphs which attain the minimum
least eigenvalue among all complements of connected simple graphs with given
connectivity. | Huan Qiu, Keng Li, Guoping Wang | 2023-05-25T12:06:10Z | http://arxiv.org/abs/2305.17143v2 | # The least eigenvalue of the complements of graphs with given connectivity 1
Footnote 1: This work is supported by NSFC (No. 11461071).
Huan Qiu, Keng Li, Guoping Wang
School of Mathematical Sciences, Xinjiang Normal University,
Urumqi, Xinjiang 830054, P.R.China
Corresponding author. Email: [email protected].
**Abstract.** The least eigenvalue of a graph \(G\) is the least eigenvalue of adjacency matrix of \(G\). In this paper we determine the graphs which attain the minimum least eigenvalue among all complements of connected simple graphs with given connectivity.
**Key words:** The least eigenvalue; Complements of graphs; Connectivity.
**MR(2020) Subject Classification:** 05C40, 05C50
## 1. Introduction
Let \(G\) be a simple graph with the vertex set \(V(G)=\{v_{1},v_{2},\ldots,v_{n}\}\) and the edge set \(E(G)\). The adjacency matrix of \(G\) is denoted by \(A(G)=(a_{ij})_{n\times n}\), where \(a_{ij}=1\) if \(v_{i}v_{j}\in E(G)\), and \(a_{ij}=0\) otherwise. Since \(A(G)\) is a non-negative real symmetric matrix, its eigenvalues can be ranged as \(\lambda_{1}(G)\geq\lambda_{2}(G)\geq\cdots\geq\lambda_{n}(G)\), where \(\lambda_{1}(G)\) and \(\lambda_{n}(G)\) are called spectral radius and the least eigenvalue of \(G\), respectively. The complement of \(G\) is denoted by \(G^{c}=(V(G^{c}),E(G^{c}))\), where \(V(G^{c})=V(G)\) and \(E(G^{c})=\{uv:u,v\in V(G),uv\notin E(G)\}\).
Although there is much research about the least eigenvalues of graphs (see [1-8]), there is little research about the least eigenvalues of the complements of graphs. Y. Fan, F. Zhang and Y. Wang [?] characterized the connected graph with the minimal least eigenvalue among all complements of trees. G. Jiang, G. Yu, W. Sun and Z. Ruan [12] determined the graph with the minimum least eigenvalue among all graphs whose complements have only two pendent vertices.
A graph and its complement can be completely distinct; for example, the complement of a tree is no longer a tree, and so the least eigenvalues of a graph and its complement can be completely distinct. Therefore, it is meaningful to study the least eigenvalues of the complements of graphs.
The _connectivity_ of a graph \(G\) is the minimum number of vertices whose deletion yields a disconnected graph. We study the least eigenvalue of the complements of graphs with given connectivity in this paper. This paper is organized as follows. In Section 2 we give some preliminary results. In Section 3 we determine the graphs which attain the minimum least eigenvalue among all complements of connected simple graphs with given connectivity.
## 2. Preliminary
Suppose that \(G\) is a simple graph with the vertex set \(V(G)=\{v_{1},v_{2},\ldots,v_{n}\}\). Let \(x=(x_{1},x_{2},\cdots,x_{n})^{T}\), where \(x_{i}\) corresponds to \(v_{i}\), i.e., \(x(v_{i})=x_{i}\) for \(i=1,2,\cdots,n\). Then
\[x^{T}A(G)x=\sum_{v_{i}v_{j}\in E(G)}2x_{i}x_{j}. \tag{1}\]
The set of neighbours of \(v\) in \(G\) is denoted by \(N_{G}(v)\). Suppose that \(x\) is an eigenvector of \(A(G)\) corresponding to the eigenvalue \(\lambda\). Then for \(v_{i}\in V(G)\), we have
\[\lambda x_{i}=\sum_{v_{j}\in N_{G}(v_{i})}x_{j}\quad\text{for }i=1,2,\cdots,n. \tag{2}\]
**Lemma 2.1.** (Rayleigh's inequalities) _Let \(G\) be a graph with spectral radius \(\lambda_{1}(G)\) and the least eigenvalue \(\lambda_{n}(G)\) of \(A(G)\), and \(x=(x_{1},x_{2},\cdots,x_{n})^{T}\) be a unit vector. Then we have_
\[\lambda_{n}(G)\leq x^{T}A(G)x\leq\lambda_{1}(G).\]
_Moreover, the first equality holds if and only if \(x\) is a unit eigenvector of \(A(G)\) with respect to \(\lambda_{n}(G)\) and the second equality holds if and only if \(x\) is a unit eigenvector of \(A(G)\) with respect to \(\lambda_{1}(G)\)._
**Lemma 2.2.** [11] _Let \(\Delta(G)\) be the maximum degree of a graph \(G\). Then \(\Delta(G)\geq\lambda_{1}(G)\geq\sqrt{\Delta(G)}\)._
**Lemma 2.3.** [7] _Let \(G^{*}\) be a connected graph with two non-adjacent vertices \(u,v\) and let \(G\) be the graph obtained from \(G^{*}\) by adding the edge \(uv\). Assume that \(x\) and \(y\) are the unit least vectors of \(G\) and \(G^{*}\), respectively. Then_
(i.)__\(\lambda_{n}(G^{*})\leq\lambda_{n}(G)\) _if_ \(x_{u}=0\) _or_ \(x_{v}=0\)_, and the equality holds if and only if x is a least vector of_ \(G^{*}\) _and_ \(x_{u}=x_{v}=0\)_._
(ii.)__\(\lambda_{n}(G)\leq\lambda_{n}(G^{*})\) _if_ \(y_{u}=0\) _or_ \(y_{v}=0\)_, and the equality holds if and only if y is a least vector of_ \(G\) _and_ \(y_{u}=y_{v}=0\)_._
(iii.)__\(\lambda_{n}(G)<\lambda_{n}(G^{*})\) _if_ \(y_{u}y_{v}<0\)_._
A _matching_ in a graph is a set of pairwise nonadjacent edges. A matching with \(\alpha\) pairwise nonadjacent edges is called an \(\alpha\)-matching.
**Lemma 2.4.** (Hall's theorem) _A bipartite graph \(G=G[X,Y]\) has a matching which covers every vertex in \(X\) if and only if \(|N_{G}(S)|\geq|S|\) for all \(S\subseteq X\), where \(N_{G}(S)=\cup_{v\in S}N_{G}(v)\)._
From this lemma we easily see that the below result is true.
**Corollary 2.5**.: _Let \(G\) be a graph with the vertex set \(V(G)\). Suppose \(U\) and \(W\) are two subsets of \(V(G)\) satisfying \(U\cap W=\emptyset\). Then \(G\) has a matching between \(U\) and \(W\) which covers every vertex in \(U\) if \(|N_{G}(S)\cap W|\geq|S|\) for all \(S\subseteq U\)._
An \(M\)-alternating path whose origin and terminus are both not covered by the matching \(M\) is called an _\(M\)-augmenting path_.
**Lemma 2.6**.: (Berge's theorem) _A matching \(M\) in a graph \(G\) is a maximum matching if and only if \(G\) contains no \(M\)-augmenting path._
**Lemma 2.7**.: [10] _Suppose \(G\) and \(G^{c}\) are both connected graphs on \(n\) vertices. If \(x\) is a least eigenvector of \(G^{c}\) then \(x\) contains at least two positive entries and at least two negative entries._
## 3 Main results
Let \(\mathcal{G}_{n,\kappa}\) denote the set of the connected simple graphs on \(n\) vertices with the connectivity \(\kappa\), and \(\mathcal{G}_{n,\kappa}^{c}\) be the set of the complements of the graphs in \(\mathcal{G}_{n,\kappa}\). In this paper we usually consider \(\mathbf{G}\) to be a graph in \(\mathcal{G}_{n,\kappa}\) such that \(\lambda_{n}(\mathbf{G}^{c})\) is as small as possible in \(\mathcal{G}_{n,\kappa}^{c}\). If \(\kappa=n-1\) then \(\mathbf{G}\) is isomorphic to the complete graph \(K_{n}\) of order \(n\), and so in what follows we assume \(\kappa\leq n-2\). Next we will use four claims to characterize \(\mathbf{G}\).
In this paper we usually let \(\partial(\mathbf{G})\) be a minimum vertex-cut of \(\mathbf{G}\) and \(x=(x_{1},\cdots,x_{n})^{T}\) be a unit eigenvector of \(A(\mathbf{G}^{c})\) with respect to \(\lambda_{n}(\mathbf{G}^{c})\).
Let \(G\) be a graph on \(n\) vertices, \(J_{n}\) be the matrix of order \(n\) all of whose entries are \(1\), and \(I_{n}\) be the identity matrix of order \(n\). Then we have
\[A(G^{c})=J_{n}-I_{n}-A(G).\]
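As a quick numerical illustration of this identity (a minimal NumPy sketch; the example graph is ours, not taken from the paper):

```python
import numpy as np

def complement_adjacency(A):
    # A(G^c) = J_n - I_n - A(G) for a simple graph on n vertices
    n = A.shape[0]
    return np.ones((n, n)) - np.eye(n) - A

# path v1-v2-v3: its complement is the single edge v1v3 plus the isolated vertex v2
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(complement_adjacency(A))  # only the (v1, v3) entries are 1
```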
**Claim 3.1**.: \(\mathbf{G}-\partial(\mathbf{G})\) _contains exactly two components \(\mathbf{G}_{1}\) and \(\mathbf{G}_{2}\)._
**Proof.** Suppose on the contrary that \(\mathbf{G}-\partial(\mathbf{G})\) contains three components \(\mathbf{G}_{1}\), \(\mathbf{G}_{2}\) and \(\mathbf{G}_{3}\). Then we can choose two vertices \(u\in V(\mathbf{G}_{1})\) and \(v\in V(\mathbf{G}_{2})\) such that \(x(u)x(v)>0\). Let \(H_{1}=\mathbf{G}+uv\). Note that \(\partial(\mathbf{G})\) is also \(\kappa\)-vertex cut of \(H_{1}\), and so \(H_{1}\in\mathcal{G}_{n,\kappa}\). From the equation (1), we have \(x^{T}A(\mathbf{G})x=\sum_{v_{i}v_{j}\in E(\mathbf{G})}2x_{i}x_{j}<\sum_{v_{i}v _{j}\in E(H_{1})}2x_{i}x_{j}=x^{T}A(H_{1})x\).
Thus, by Lemma 2.1, we have
\[\lambda_{n}(\mathbf{G}^{c}) =x^{T}A(\mathbf{G}^{c})x\] \[=x^{T}(J_{n}-I_{n})x-x^{T}A(\mathbf{G})x\] \[>x^{T}(J_{n}-I_{n})x-x^{T}A(H_{1})x\] \[=x^{T}A(H_{1}^{c})x\] \[\geq\lambda_{n}(H_{1}^{c}).\]
This contradicts the choice of \(\mathbf{G}^{c}\), and so Claim 3.1 is true. \(\square\)
Suppose \(G\) is a graph with the vertex set \(V(G)\). If \(S\) is a non-empty subset of \(V(G)\) then we denote by \(G[S]\) the subgraph of \(G\) induced by \(S\). Set \(\partial({\bf G})^{+}=\{v\in\partial({\bf G}):x(v)\geq 0\}\), \(\partial({\bf G})^{-}=\{v\in\partial({\bf G}):x(v)<0\}\), \(V_{i}^{+}=\{v\in V({\bf G}_{i}):x(v)\geq 0\}\) and \(V_{i}^{-}=\{v\in V({\bf G}_{i}):x(v)<0\}\), where \(i=1,2\).
**Claim 3.2**.: \(G[V_{1}^{+}\cup\partial({\bf G})^{+}]\)_, \(G[V_{2}^{+}\cup\partial({\bf G})^{+}]\), \(G[V_{1}^{-}\cup\partial({\bf G})^{-}]\) and \(G[V_{2}^{-}\cup\partial({\bf G})^{-}]\) are all complete graphs._
**Proof.** Suppose on the contrary that the vertices \(u\) and \(v\) are not adjacent in \(G[V_{1}^{+}\cup\partial({\bf G})^{+}]\). Let \(H_{2}={\bf G}+uv\). Note that \(\partial({\bf G})\) is also a \(\kappa\)-vertex cut of \(H_{2}\), and so \(H_{2}\in{\cal G}_{n,\kappa}\). From the equation (1), \(x^{T}A({\bf G})x=\sum_{v_{i}v_{j}\in E({\bf G})}2x_{i}x_{j}\leq\sum_{v_{i}v_{j}\in E(H_{2})}2x_{i}x_{j}=x^{T}A(H_{2})x\). As in the proof of Claim 3.1, we can verify \(\lambda_{n}({\bf G}^{c})\geq\lambda_{n}(H_{2}^{c})\). Lemma 2.3 shows the equality holds if and only if \(x\) is a least vector of \({\bf G}^{c}\) and \(x(u)=x(v)=0\). If \(x(u)=x(v)=0\) then we consider \(H_{2}\) as \({\bf G}\), and otherwise \(\lambda_{n}({\bf G}^{c})>\lambda_{n}(H_{2}^{c})\), which contradicts the choice of \({\bf G}^{c}\), and so \(u\) and \(v\) are adjacent in \(G[V_{1}^{+}\cup\partial({\bf G})^{+}]\). Therefore, \(G[V_{1}^{+}\cup\partial({\bf G})^{+}]\) is a complete graph.
Similarly, \(G[V_{2}^{+}\cup\partial({\bf G})^{+}]\), \(G[V_{1}^{-}\cup\partial({\bf G})^{-}]\) and \(G[V_{2}^{-}\cup\partial({\bf G})^{-}]\) are also complete graphs. \(\Box\)
Set \(V^{+}=\{v\in V({\bf G}):x(v)\geq 0\}\), and \(V^{-}=\{v\in V({\bf G}):x(v)<0\}\). Write \(|V^{+}|=n_{1}\) and \(|V^{-}|=n_{2}\). Then \(n_{1}+n_{2}=n\). For convenience we assume without loss of generality that \(n_{1}\geq n_{2}\), in which case we can distinguish three cases as follows:
\[n_{2}\geq\kappa,\ \ n_{1}\geq\kappa>n_{2},\ \ \kappa>n_{1}.\]
Let \(U\) be the subset of \(V^{+}\) consisting of those vertices of \(V^{+}\) that are adjacent to at least one vertex in \(V^{-}\). Let \(W\) be the subset of \(V^{-}\) consisting of those vertices of \(V^{-}\) that are adjacent to at least one vertex in \(V^{+}\).
**Claim 3.3**.: _If \(n_{2}\geq\kappa\), then there is a \(\kappa\)-matching between \(V^{+}\) and \(V^{-}\)._
**Proof.** Let \(M\) be a maximum matching between \(U\) and \(W\). Since \(n_{1}\geq n_{2}\geq\kappa\), we have \(n_{1}\geq\kappa\). If \(|U|<\kappa\), then \({\bf G}\backslash U\) is not connected, and so \(|U|\geq\kappa\). Similarly, \(|W|\geq\kappa\). Denote by \(V(M)\) the set of the vertices which are covered by \(M\). If \(U\backslash V(M)=\emptyset\) or \(W\backslash V(M)=\emptyset\), then there is a \(\kappa\)-matching between \(U\) and \(W\).
So we assume \(U\backslash V(M)\neq\emptyset\) and \(W\backslash V(M)\neq\emptyset\). In this case there is no edge between \(U\backslash V(M)\) and \(W\backslash V(M)\), since otherwise such an edge could be added to \(M\), which contradicts the maximality of \(M\).
Let \(S_{1}\) be the set of the vertices of \(U\cap V(M)\) which are adjacent to some vertices of \(W\backslash V(M)\). Let \(T_{2}\) be the set of the vertices of \(W\cap V(M)\) which are adjacent to some vertices of \(U\backslash V(M)\). Let \(T_{1}\) be the set of the vertices of \(W\cap V(M)\) which are matched with the vertices of \(S_{1}\). Let \(S_{2}\) be the set of the vertices of \(U\cap V(M)\) which are matched with the vertices of \(T_{2}\). Then there is no edge between \(T_{1}\) and \(S_{2}\), otherwise we would get an \(M\)-augmenting path \(P=w_{1}s_{1}t_{1}s_{2}t_{2}u_{1}\), where \(w_{1}\in W\backslash V(M)\), \(s_{1}\in S_{1}\), \(t_{1}\in T_{1}\), \(s_{2}\in S_{2}\), \(t_{2}\in T_{2}\), \(u_{1}\in U\backslash V(M)\) and \(s_{1}t_{1}\), \(s_{2}t_{2}\in M\). This contradicts Lemma 2.6.
Clearly, \(T_{1}\cap T_{2}=\emptyset\) and \(S_{1}\cap S_{2}=\emptyset\). Let \(T_{3}\) be the set of the vertices of \((W\cap V(M))\backslash(T_{1}\cup T_{2})\) such that for each vertex \(t_{3}\in T_{3}\), there is a path from \(t_{3}\) to some vertex of \(T_{2}\) whose edges are alternately in \(\widetilde{E}\backslash M\) and \(M\), where \(\widetilde{E}\) is the set
of the edges between \((U\cap V(M))\backslash S_{1}\) and \((W\cap V(M))\backslash(T_{1}\cup T_{2})\). Let \(S_{3}\) be the set of the vertices of \((U\cap V(M))\backslash(S_{1}\cup S_{2})\) which are matched with the vertices of \(T_{3}\). Then there is no edge between \(T_{1}\) and \(S_{3}\), otherwise we would get an \(M\)-augmenting path. This contradicts Lemma 2.6.
Let \(S_{4}\) be the set of the vertices of \((U\cap V(M))\backslash(S_{1}\cup S_{2})\) such that for each vertex \(s_{4}\in S_{4}\), there is a path from \(s_{4}\) to some vertex of \(S_{1}\) whose edges are alternately in \(\widetilde{E}^{*}\backslash M\) and \(M\), where \(\widetilde{E}^{*}\) is the set of the edges between \((W\cap V(M))\backslash T_{2}\) and \((U\cap V(M))\backslash(S_{1}\cup S_{2})\). Let \(T_{4}\) be the set of the vertices of \((W\cap V(M))\backslash(T_{1}\cup T_{2})\) which are matched with the vertices of \(S_{4}\). Just as in the above argument, we can verify that there is no edge between \(T_{4}\) and \(S_{2}\cup S_{3}\). Similarly, there is no edge between \(S_{3}\) and \(T_{1}\cup T_{4}\).
\(S_{3}\cap S_{4}=\emptyset\) and \(T_{3}\cap T_{4}=\emptyset\), otherwise we would get an \(M\)-augmenting path, which contradicts Lemma 2.6. Let \(S_{5}=(U\cap V(M))\backslash\bigcup_{i=1}^{4}S_{i}\) and \(T_{5}=(W\cap V(M))\backslash\bigcup_{i=1}^{4}T_{i}\). Clearly, the vertices of \(S_{5}\) are matched with the vertices of \(T_{5}\). \(T_{i}\) and \(S_{i}\) (\(1\leq i\leq 5\)) are shown in Figure 1.
**Claim 3.4.**_If \(n_{1}\geq\kappa>n_{2}\), then there is an \(n_{2}\)-matching between \(n_{2}\) vertices of \(U\) and all vertices of \(V^{-}\), and the other \(\kappa-n_{2}\) vertices of \(U\) are adjacent to each vertex of \(V^{-}\)._
**Proof.** If \(V^{-}\backslash W\neq\emptyset\), then \(W\) is clearly a vertex cut of \({\bf G}\). Since \(|W|<\kappa\), this is a contradiction, which shows \(W=V^{-}\). If \(|U|<\kappa\) then \(U\) is clearly a vertex cut of \({\bf G}\), a contradiction, which shows \(|U|\geq\kappa\).
Set \(U=\{u_{1},u_{2},\cdots,u_{|U|}\}\), where \(x(u_{i})\leq x(u_{i+1})\) (\(1\leq i\leq|U|-1\)), and let \(R_{1}=\{u_{1},u_{2},\cdots,u_{\kappa-n_{2}}\}\) be a subset of \(U\). If there is some \(Q^{*}\subseteq V^{-}\) such that \(|N_{{\bf G}}(Q^{*})\cap(U\backslash R_{1})|<|Q^{*}|\), then \(R_{1}\cup(V^{-}\backslash Q^{*})\cup(N_{{\bf G}}(Q^{*})\cap(U\backslash R_{1}))\) is a vertex cut of \({\bf G}\). Since \(|R_{1}\cup(V^{-}\backslash Q^{*})\cup(N_{{\bf G}}(Q^{*})\cap(U\backslash R_{1}))|<\kappa\), this is a contradiction, which shows that for any \(Q\subseteq V^{-}\), \(|N_{{\bf G}}(Q)\cap(U\backslash R_{1})|\geq|Q|\). By Corollary 2.5, there exists an \(n_{2}\)-matching \(M_{1}\) between \(V^{-}\) and \(U\backslash R_{1}\).
If there is some vertex \(v_{1}\in V^{-}\) such that \(|N_{{\bf G}}(v_{1})\cap U|<\kappa-n_{2}+1\), then \((V^{-}\backslash\{v_{1}\})\cup(N_{{\bf G}}(v_{1})\cap U)\) is a vertex cut of \({\bf G}\). Since \(|(V^{-}\backslash\{v_{1}\})\cup(N_{{\bf G}}(v_{1})\cap U)|<\kappa\), this is a contradiction, which shows that for any \(v\in V^{-}\), \(|N_{{\bf G}}(v)\cap U|\geq\kappa-n_{2}+1\), and so \(v\) connects at least \(\kappa-n_{2}\) vertices of \(U\) other than the vertex matched with \(v\). Now we will show that \(v\) connects all vertices of \(R_{1}\).
Suppose for a contradiction that some vertex \(u_{\ell_{1}}\) of \(R_{1}\) is not adjacent to \(v\). Then \(v\) must be adjacent to a vertex \(u_{\ell_{2}}\in U\backslash R_{1}\) satisfying \(vu_{\ell_{2}}\not\in M_{1}\), since \(|N_{{\bf G}}(v)\cap U|\geq\kappa-n_{2}+1\). We delete the edge \(vu_{\ell_{2}}\) and add \(vu_{\ell_{1}}\). Note \(x(u_{\ell_{1}})\leq x(u_{\ell_{2}})\). From the equation (1), \(x^{T}A({\bf G})x=\sum_{v_{i}v_{j}\in E({\bf G})}2x_{i}x_{j}\leq\sum_{v_{i}v_{j}\in E({\bf G}-vu_{\ell_{2}}+vu_{\ell_{1}})}2x_{i}x_{j}\). As in the proof of Claim 3.1, we can verify \(\lambda_{n}({\bf G}^{c})\geq\lambda_{n}(({\bf G}-vu_{\ell_{2}}+vu_{\ell_{1}})^{c})\). This contradiction shows \(v\) connects all vertices of \(R_{1}\). \(\Box\)
Let \(n_{1}\geq\kappa>n_{2}\). Suppose \(K_{n_{1}}\) and \(K_{n_{2}}\) are disjoint. Then we denote by \({\bf B}_{2}(n_{1},n_{2};\kappa)\) the graph obtained from \(K_{n_{1}}\) and \(K_{n_{2}}\) by adding \(n_{2}\) edges between \(V(K_{n_{1}})\) and \(V(K_{n_{2}})\) which form an \(n_{2}\)-matching \(M_{1}\), and connecting each vertex of \(V(K_{n_{2}})\) to the same \(\kappa-n_{2}\) vertices of \(V(K_{n_{1}})\) which are not covered by \(M_{1}\). Clearly, \({\bf B}_{2}(n_{1},n_{2};\kappa)\in{\cal G}_{n,\kappa}\).
**Lemma 3.2.**_When \(n_{1}\geq\kappa>n_{2}\), \(\lambda_{n}({\bf G}^{c})\geq\lambda_{n}({\bf B}_{2}^{c}(n_{1},n_{2};\kappa))\)._
**Proof.** From Claim 3.4 we easily observe that after connecting respectively all pairs of vertices between \(V_{1}^{+}\) and \(V_{2}^{+}\) and between \(V_{1}^{-}\) and \(V_{2}^{-}\), and deleting some edges between \(V^{+}\) and \(V^{-}\), we can obtain a resulting graph which is isomorphic to \({\bf B}_{2}(n_{1},n_{2};\kappa)\).
From the equation (1), \(x^{T}A({\bf G})x=\sum_{v_{i}v_{j}\in E({\bf G})}2x_{i}x_{j}\leq\sum_{v_{i}v_{j }\in E({\bf B}_{2}(n_{1},n_{2};\kappa))}2x_{i}x_{j}=x^{T}A({\bf B}_{2}(n_{1},n_ {2};\kappa))x\). As in the proof of Claim 3.1, we can verify that \(\lambda_{n}({\bf G}^{c})\geq\lambda_{n}({\bf B}_{2}^{c}(n_{1},n_{2};\kappa))\). \(\Box\)
Let \(\kappa>n_{1}\). Suppose \(K_{n_{1}}\) and \(K_{n_{2}}\) are disjoint. Set \(S\) to be a subset of \(V(K_{n_{1}})\) such that \(|S|=n_{1}-n_{2}\). Then we denote by \({\bf B}_{3}(n_{1},n_{2};\kappa)\) the graph obtained from \(K_{n_{1}}\) and \(K_{n_{2}}\) by connecting each vertex of \(S\) with each vertex of \(V(K_{n_{2}})\) and connecting each vertex of \(V(K_{n_{1}})\backslash S\) with \(\kappa-n_{1}+1\) vertices of \(V(K_{n_{2}})\) so that for any two vertices \(s_{1}\) and \(s_{2}\) of \(V(K_{n_{1}})\backslash S\), \(N_{{\bf B}_{3}(n_{1},n_{2};\kappa)}(s_{1})\cap V(K_{n_{2}})\neq N_{{\bf B}_{3}( n_{1},n_{2};\kappa)}(s_{2})\cap V(K_{n_{2}})\), and connecting each vertex of \(V(K_{n_{2}})\) with \(\kappa-n_{1}+1\) vertices of \(V(K_{n_{1}})\backslash S\) so that for any two vertices \(t_{1}\) and \(t_{2}\) of \(V(K_{n_{2}})\), \(N_{{\bf B}_{3}(n_{1},n_{2};\kappa)}(t_{1})\cap V(K_{n_{1}})\neq N_{{\bf B}_{3}( n_{1},n_{2};\kappa)}(t_{2})\cap V(K_{n_{1}})\). Clearly, \({\bf B}_{3}(n_{1},n_{2};\kappa)\in{\cal G}_{n,\kappa}\).
**Lemma 3.3.**_When \(\kappa>n_{1}\), \(\lambda_{n}({\bf G}^{c})\geq\lambda_{n}({\bf B}^{c}_{3}(n_{1},n_{2};\kappa))=\kappa+1-n\)._
**Proof.** If \(V^{+}\backslash U\neq\emptyset\), then \(U\) is clearly a vertex cut of \({\bf G}\). Since \(|U|<\kappa\), this is a contradiction, which shows \(U=V^{+}\). Similarly, we can verify that \(W=V^{-}\).
If there is some vertex \(w^{*}\in V^{-}\) such that \(|N_{{\bf G}}(w^{*})\cap V^{+}|<\kappa-n_{2}+1\), then \((V^{-}\backslash\{w^{*}\})\cup(N_{{\bf G}}(w^{*})\cap V^{+})\) is a vertex cut of \({\bf G}\). Since \(|(V^{-}\backslash\{w^{*}\})\cup(N_{{\bf G}}(w^{*})\cap V^{+})|<\kappa\), this is a contradiction, which shows that for any \(w\in V^{-}\), \(|N_{{\bf G}}(w)\cap V^{+}|\geq\kappa-n_{2}+1\). Similarly, we can verify that for any \(u\in V^{+}\), \(|N_{{\bf G}}(u)\cap V^{-}|\geq\kappa-n_{1}+1\).
If there are two vertices \(w_{1}\) and \(w_{2}\) of \(V^{-}\) such that \(N_{{\bf G}}(w_{1})\cap(V^{+}\backslash R_{2})=N_{{\bf G}}(w_{2})\cap(V^{+}\backslash R_{2})\) and \(|N_{{\bf G}}(w_{1})\cap V^{+}|=\kappa-n_{1}+1\), then \((V^{-}\backslash\{w_{1},w_{2}\})\cup(N_{{\bf G}}(w_{1})\cap V^{+})\cup R_{2}\) is a vertex cut of \({\bf G}\). Since \(|(V^{-}\backslash\{w_{1},w_{2}\})\cup(N_{{\bf G}}(w_{1})\cap V^{+})\cup R_{2}|<\kappa\), this is a contradiction, which shows that for any two vertices \(w_{3}\) and \(w_{4}\) of \(V^{-}\) satisfying \(|N_{{\bf G}}(w_{3})\cap(V^{+}\backslash R_{2})|=|N_{{\bf G}}(w_{4})\cap(V^{+}\backslash R_{2})|=\kappa-n_{1}+1\), \(N_{{\bf G}}(w_{3})\cap(V^{+}\backslash R_{2})\neq N_{{\bf G}}(w_{4})\cap(V^{+}\backslash R_{2})\). Similarly, we can verify that for any two vertices \(u^{\prime}\) and \(u^{\prime\prime}\) of \(V^{+}\backslash R_{2}\) satisfying \(|N_{{\bf G}}(u^{\prime})\cap V^{-}|=|N_{{\bf G}}(u^{\prime\prime})\cap V^{-}|=\kappa-n_{1}+1\), \(N_{{\bf G}}(u^{\prime})\cap V^{-}\neq N_{{\bf G}}(u^{\prime\prime})\cap V^{-}\).
We denote by \({\bf G}_{0}\) the graph obtained from \({\bf G}\) by connecting respectively all pairs of vertices between \(V^{+}_{1}\) and \(V^{+}_{2}\) and between \(V^{-}_{1}\) and \(V^{-}_{2}\). Clearly, \({\bf G}^{c}_{0}\) is a bipartite graph, and so \(\lambda_{n}({\bf G}^{c}_{0})=-\lambda_{1}({\bf G}^{c}_{0})\), where \(\lambda_{1}({\bf G}^{c}_{0})\) is the spectral radius of \({\bf G}^{c}_{0}\). Let \(\Delta({\bf G}^{c}_{0})\) be the maximum degree of \({\bf G}^{c}_{0}\). From the above argument, we know that \(\Delta({\bf G}^{c}_{0})\leq n-\kappa-1\). By the lemma 2.2, we know that \(\lambda_{1}({\bf G}^{c}_{0})\leq\Delta({\bf G}^{c}_{0})\), and so \(\lambda_{n}({\bf G}^{c}_{0})\geq-\Delta({\bf G}^{c}_{0})\geq\kappa+1-n\).
We can easily observe that \({\bf B}^{c}_{3}(n_{1},n_{2};\kappa)\) is composed of an \((n-\kappa-1)\)-regular bipartite graph and \(n_{1}-n_{2}\) isolated vertices, and so \(\lambda_{n}({\bf B}^{c}_{3}(n_{1},n_{2};\kappa))=\kappa+1-n\). Therefore, \(\lambda_{n}({\bf G}^{c}_{0})\geq\lambda_{n}({\bf B}^{c}_{3}(n_{1},n_{2};\kappa))\).
From the equation (1), \(x^{T}A({\bf G})x=\sum_{v_{i}v_{j}\in E({\bf G})}2x_{i}x_{j}\leq\sum_{v_{i}v_{j} \in E({\bf G}_{0})}2x_{i}x_{j}=x^{T}A({\bf G}_{0})x\). As in the proof of Claim 3.1, we can verify \(\lambda_{n}({\bf G}^{c})\geq\lambda_{n}({\bf G}^{c}_{0})\). Therefore, we have \(\lambda_{n}({\bf G}^{c})\geq\lambda_{n}({\bf B}^{c}_{3}(n_{1},n_{2};\kappa))\). \(\square\)
**Lemma 3.4.**\(\lambda_{n}({\bf B}^{c}_{1}(n_{1},n_{2},\kappa))>\lambda_{n}({\bf B}^{c}_{1}( \lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor,\kappa))\)_._
**Proof.** Assume without loss of generality that \(n_{1}>n_{2}+1\). Suppose that \(U^{*}\subseteq V(K_{n_{1}})\) is a vertex cut of \({\bf B}_{1}(n_{1},n_{2},\kappa)\), and that \(W^{*}\subseteq V(K_{n_{2}})\) is the set composed of those vertices which are adjacent to the vertices in \(U^{*}\). Let \(x=(x_{1},x_{2},\cdots,x_{n})^{T}\) be a unit eigenvector of \({\bf B}^{c}_{1}(n_{1},n_{2},\kappa)\) with respect to \(\lambda_{n}({\bf B}^{c}_{1}(n_{1},n_{2},\kappa))\). By the symmetry of \({\bf B}^{c}_{1}(n_{1},n_{2},\kappa)\), all the vertices in \(V(K_{n_{1}})\setminus U^{*}\) correspond to the same value \(x_{1}\), all the vertices in \(U^{*}\) correspond to the same value \(x_{2}\), all the vertices in \(V(K_{n_{2}})\setminus W^{*}\) correspond to the same value \(x_{3}\), and all the vertices in \(W^{*}\) correspond to the same value \(x_{4}\). From the equation (2), we have
\[\left\{\begin{array}{l}\lambda_{n}x_{1}=(n_{2}-\kappa)x_{3}+\kappa x_{4},\\ \lambda_{n}x_{2}=(n_{2}-\kappa)x_{3}+(\kappa-1)x_{4},\\ \lambda_{n}x_{3}=(n_{1}-\kappa)x_{1}+\kappa x_{2},\\ \lambda_{n}x_{4}=(n_{1}-\kappa)x_{1}+(\kappa-1)x_{2}.\end{array}\right.\]
Transform the above equations into a matrix equation \((A_{n_{1},n_{2}}-\lambda_{n}I_{4})\widetilde{x}=0\)
where \(\widetilde{x}=(x_{1},x_{2},x_{3},x_{4})^{T}\) and
\[A_{n_{1},n_{2}}=\begin{pmatrix}0&0&n_{2}-\kappa&\kappa\\ 0&0&n_{2}-\kappa&\kappa-1\\ n_{1}-\kappa&\kappa&0&0\\ n_{1}-\kappa&\kappa-1&0&0\end{pmatrix}.\]
Let \(g_{n_{1},n_{2}}(\lambda)=\det(A_{n_{1},n_{2}}-\lambda I_{4})\). We can compute out
\[g_{n_{1},n_{2}}(\lambda)=\lambda^{4}+(2\kappa-n_{1}n_{2}-1)\lambda^{2}+\kappa^ {2}-(n_{1}+n_{2})\kappa+n_{1}n_{2}.\]
Therefore, \(g_{n_{1},n_{2}}(\lambda)-g_{n_{1}-1,n_{2}+1}(\lambda)=(n_{1}-n_{2}-1)(\lambda^ {2}-1)\).
Note that \({\bf B}_{1}^{c}(n_{1},n_{2},\kappa)\) is a bipartite graph. It is well known that \(\lambda_{n}({\bf B}_{1}^{c}(n_{1},n_{2},\kappa))=-\lambda_{1}({\bf B}_{1}^{c}( n_{1},n_{2},\kappa))\). Recall, \(n_{1}>n_{2}+1\). By Lemma 2.7, \(\Delta({\bf B}_{1}^{c}(n_{1},n_{2},\kappa))\geq n_{1}-1>1\), and so by Lemma 2.2, \(\lambda_{n}({\bf B}_{1}^{c}(n_{1},n_{2},\kappa))<-1\). This implies that \(g_{n_{1}-1,n_{2}+1}(\lambda_{n}({\bf B}_{1}^{c}(n_{1},n_{2},\kappa)))<0\). We can observe that the function \(g_{n_{1}-1,n_{2}+1}(\lambda)\) monotonically decreases when \(\lambda<-\sqrt{\frac{n_{1}n_{2}+n_{1}-n_{2}-2\kappa}{2}}\), and so \(\lambda_{n}({\bf B}_{1}^{c}(n_{1},n_{2},\kappa))>\lambda_{n}({\bf B}_{1}^{c}(n _{1}-1,n_{2}+1,\kappa))\). This shows \(\lambda_{n}({\bf B}_{1}^{c}(n_{1},n_{2},\kappa))>\lambda_{n}({\bf B}_{1}^{c}( \lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor,\kappa))\). \(\square\)
**Lemma 3.5.**_When \(n<2\kappa\), \(\lambda_{n}({\bf B}_{2}^{c}(n_{1},n_{2};\kappa))\geq\lambda_{n}({\bf B}_{2}^{c }(\kappa,n-\kappa;\kappa))\), and when \(n\geq 2\kappa\), \(\lambda_{n}({\bf B}_{2}^{c}(n_{1},n_{2};\kappa))\geq\lambda_{n}({\bf B}_{2}^{c }(n-\kappa+1,\kappa-1;\kappa))\)._
**Proof.** Let \(y=(y_{1},y_{2},\cdots,y_{n})^{T}\) be a unit eigenvector of \({\bf B}_{2}^{c}(n_{1},n_{2};\kappa)\) with respect to \(\lambda_{n}({\bf B}_{2}^{c}(n_{1},n_{2};\kappa))\). By the symmetry of \({\bf B}_{2}^{c}(n_{1},n_{2};\kappa)\), all the vertices in \(V(K_{n_{1}})\backslash(V(M_{1})\cup R_{1})\) correspond to the same value \(y_{1}\), all the vertices in \(R_{1}\) correspond to the same value \(y_{2}\), all the vertices in \(V(K_{n_{1}})\cap M_{1}\) correspond to the same value \(y_{3}\), and all the vertices in \(V(K_{n_{2}})\) correspond to the same value \(y_{4}\). From the equation (2), we have
\[\left\{\begin{array}{l}\lambda_{n}y_{1}=n_{2}y_{4},\\ \lambda_{n}y_{2}=0,\\ \lambda_{n}y_{3}=(n_{2}-1)y_{4},\\ \lambda_{n}y_{4}=(n_{1}-\kappa)y_{1}+(n_{2}-1)y_{3}.\end{array}\right.\]
Transform the above equations into a matrix equation \((A_{n_{1},n_{2}}-\lambda_{n}I_{4})\widetilde{y}=0\), where \(\widetilde{y}=(y_{1},y_{2},y_{3},y_{4})^{T}\) and
\[A_{n_{1},n_{2}}=\begin{pmatrix}0&0&0&n_{2}\\ 0&0&0&0\\ 0&0&0&n_{2}-1\\ n_{1}-\kappa&0&n_{2}-1&0\end{pmatrix}.\]
Let \(f_{n_{1},n_{2}}(\lambda)=\det(A_{n_{1},n_{2}}-\lambda I_{4})\). We can compute out
\[f_{n_{1},n_{2}}(\lambda)=\lambda^{2}(\lambda^{2}-(n_{2}-1)^{2}-(n_{1}-\kappa)n _{2}),\]
from which we obtain \(\lambda_{n}({\bf B}_{2}^{c}(n_{1},n_{2};\kappa))=-\sqrt{(n_{2}-1)^{2}+(n_{1}-\kappa)n_{2}}\). Then we have \(\lambda_{n}({\bf B}_{2}^{c}(n_{1}-1,n_{2}+1;\kappa))=-\sqrt{n_{2}^{2}+(n_{1}-\kappa-1)(n_{2}+1)}\). Recall \(\kappa\leq n-2\). By a simple computation we can determine that \(\lambda_{n}({\bf B}_{2}^{c}(n_{1},n_{2};\kappa))\geq\lambda_{n}({\bf B}_{2}^{c}(n_{1}-1,n_{2}+1;\kappa))\).
This implies that \(\lambda_{n}(\mathbf{B}_{2}^{c}(n_{1},n_{2};\kappa))\geq\lambda_{n}(\mathbf{B}_{2} ^{c}(\kappa,n-\kappa;\kappa))=\kappa+1-n\) if \(n<2\kappa\), and \(\lambda_{n}(\mathbf{B}_{2}^{c}(n_{1},n_{2};\kappa))\geq\lambda_{n}(\mathbf{B}_{2 }^{c}(n-\kappa+1,\kappa-1;\kappa))=-\sqrt{(\kappa-2)^{2}+(n-2\kappa+1)(\kappa-1)}\) if \(n\geq 2\kappa\). \(\square\)
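These closed forms are easy to check numerically. The sketch below (our own illustration, not part of the original proof) builds \({\bf B}_{2}(n_{1},n_{2};\kappa)\) directly from its definition, takes the complement via \(A(G^{c})=J_{n}-I_{n}-A(G)\), and compares the least adjacency eigenvalue against \(-\sqrt{(n_{2}-1)^{2}+(n_{1}-\kappa)n_{2}}\); the particular index choices for \(M_{1}\) and \(R_{1}\) are one valid instantiation.

```python
import numpy as np

def b2_complement_least_eig(n1, n2, k):
    # Build B_2(n1, n2; k): K_{n1} on vertices 0..n1-1, K_{n2} on n1..n-1,
    # an n2-matching M1, and k-n2 unmatched K_{n1} vertices (R1) joined
    # to every K_{n2} vertex.
    n = n1 + n2
    A = np.zeros((n, n))
    A[:n1, :n1] = 1 - np.eye(n1)
    A[n1:, n1:] = 1 - np.eye(n2)
    for i in range(n2):                  # matching M1: vertex i <-> vertex n1+i
        A[i, n1 + i] = A[n1 + i, i] = 1
    for j in range(n2, k):               # R1: vertices n2..k-1 (not covered by M1)
        A[j, n1:] = A[n1:, j] = 1
    Ac = 1 - np.eye(n) - A               # complement adjacency
    return np.linalg.eigvalsh(Ac).min()

n1, n2, k = 9, 3, 5                      # requires n1 >= k > n2
print(b2_complement_least_eig(n1, n2, k))           # -4.0
print(-np.sqrt((n2 - 1) ** 2 + (n1 - k) * n2))      # -4.0
```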
It is easy to compute that when \(n<2\kappa\), \(\lambda_{n}(\mathbf{B}_{3}^{c}(n_{1},n_{2};\kappa))=\lambda_{n}(\mathbf{B}_{2 }^{c}(\kappa,n-\kappa;\kappa))=\kappa+1-n\). From Lemmas 3.2, 3.3 and 3.5 we can easily see that the following result is true.
**Theorem 3.1**.: _When \(n<2\kappa\), \(\lambda_{n}(\mathbf{G}^{c})\geq\kappa+1-n\)._
**Lemma 3.6**.: _When \(n\geq 2\kappa\), \(\lambda_{n}(\mathbf{B}_{1}^{c}(\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2} \rfloor,\kappa))<\lambda_{n}(\mathbf{B}_{2}^{c}(n-\kappa+1,\kappa-1,\kappa))\)._
**Proof.** From Lemma 3.4, we know
\[g_{\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor}(\lambda)=\lambda^{4}+(2 \kappa-\lceil\frac{n}{2}\rceil\lfloor\frac{n}{2}\rfloor-1)\lambda^{2}+\kappa^ {2}-n\kappa+\lceil\frac{n}{2}\rceil\lfloor\frac{n}{2}\rfloor.\]
From Lemma 3.5, we know
\[f_{n-\kappa+1,\kappa-1}(\lambda)=\lambda^{2}(\lambda^{2}-(\kappa-2)^{2}-(n-2 \kappa+1)(\kappa-1)).\]
Set \(\phi(\lambda)=g_{\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor}(\lambda) -f_{n-\kappa+1,\kappa-1}(\lambda)\). Then
\[\phi(\lambda)=(-(\kappa^{2}-n\kappa+\lceil\frac{n}{2}\rceil\lfloor\frac{n}{2 }\rfloor)+\kappa-n+2)\lambda^{2}+\kappa^{2}-n\kappa+\lceil\frac{n}{2}\rceil \lfloor\frac{n}{2}\rfloor.\]
Then we can compute that the minimum root of \(\phi(\lambda)\) is \(\lambda_{0}=-\sqrt{\frac{-(\kappa^{2}-n\kappa+\lceil\frac{n}{2}\rceil\lfloor\frac{n}{2}\rfloor)}{-(\kappa^{2}-n\kappa+\lceil\frac{n}{2}\rceil\lfloor\frac{n}{2}\rfloor)+\kappa-n+2}}\). Clearly, \(\lambda_{n}(\mathbf{B}_{2}^{c}(n-\kappa+1,\kappa-1,\kappa))=-\sqrt{(\kappa-2)^{2}+(n-2\kappa+1)(\kappa-1)}<\lambda_{0}\), and so \(g_{\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor}(\lambda_{n}(\mathbf{B}_{2}^{c}(n-\kappa+1,\kappa-1,\kappa)))-f_{n-\kappa+1,\kappa-1}(\lambda_{n}(\mathbf{B}_{2}^{c}(n-\kappa+1,\kappa-1,\kappa)))<0\). Thus, \(g_{\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor}(\lambda_{n}(\mathbf{B}_{2}^{c}(n-\kappa+1,\kappa-1,\kappa)))<0\). It is easy to observe that the function \(g_{\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor}(\lambda)\) monotonically decreases when \(\lambda<\lambda_{n}(\mathbf{B}_{1}^{c}(\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor,\kappa))\), and so \(\lambda_{n}(\mathbf{B}_{1}^{c}(\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor,\kappa))<\lambda_{n}(\mathbf{B}_{2}^{c}(n-\kappa+1,\kappa-1,\kappa))\). \(\square\)
From Lemmas 3.1, 3.2, 3.4, 3.5 and 3.6 we can easily see that the following result is true.
**Theorem 3.2**.: _When \(n\geq 2\kappa\), \(\lambda_{n}(\mathbf{G}^{c})\geq\lambda_{n}(\mathbf{B}_{1}^{c}(\lceil\frac{n}{2} \rceil,\lfloor\frac{n}{2}\rfloor,\kappa))\)._
|
2307.01158 | Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement
Learning | The ability to model the mental states of others is crucial to human social
intelligence, and can offer similar benefits to artificial agents with respect
to the social dynamics induced in multi-agent settings. We present a method of
grounding semantically meaningful, human-interpretable beliefs within policies
modeled by deep networks. We then consider the task of 2nd-order belief
prediction. We propose that ability of each agent to predict the beliefs of the
other agents can be used as an intrinsic reward signal for multi-agent
reinforcement learning. Finally, we present preliminary empirical results in a
mixed cooperative-competitive environment. | Ini Oguntola, Joseph Campbell, Simon Stepputtis, Katia Sycara | 2023-07-03T17:07:18Z | http://arxiv.org/abs/2307.01158v2 | # Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement Learning
###### Abstract
The ability to model the mental states of others is crucial to human social intelligence, and can offer similar benefits to artificial agents with respect to the social dynamics induced in multi-agent settings. We present a method of grounding semantically meaningful, human-interpretable beliefs within policies modeled by deep networks. We then consider the task of _2nd-order_ belief prediction. We propose that the ability of each agent to predict the beliefs of the other agents can be used as an intrinsic reward signal for multi-agent reinforcement learning. Finally, we present preliminary empirical results in a mixed cooperative-competitive environment.
## 1 Introduction
The ability to infer the mental states of oneself and others - beliefs, desires, intentions, preferences, etc. - is known as _theory of mind_ (ToM) (Baker et al., 2011). Humans naturally build rich internal models of others, and are able to use these inferences to predict the behavior of others, to condition their own behavior, and to forecast social interactions (Georgeff et al., 1999). Theory of mind has long been studied within cognitive science and psychology (Premack and Woodruff, 1978) as a fundamental aspect of human social intelligence that has been shown to develop in early childhood (Ensink and Mayes, 2010; Astington and Edward, 2010).
Traditionally, agent-modeling approaches within reinforcement learning (RL) and imitation learning largely ignore the idea of internal mental states, typically focusing only on modeling external actions (He et al., 2016; Wen et al., 2019). However, there is a growing body of work in the machine learning literature aimed towards developing artificial agents that exhibit theory of mind (Baker et al., 2011; Rabinowitz et al., 2018; Jara-Ettinger, 2019; Fuchs et al., 2021). Even beyond simply providing a helpful inductive bias for modeling behavior, ToM reasoning has the potential to enable the discovery and correction of false beliefs or incomplete knowledge, facilitate efficient communication and coordination, and improve human-agent teaming (Zeng et al., 2020; Sclar et al., 2022; Oguntola et al., 2021).
The work of (Aru et al., 2023) highlights key challenges regarding the difficulty of evaluating current deep learning ToM approaches. In particular, from a human perspective we may solve a task using an already-developed internal theory of mind, whereas an artificial agent may be able to learn simpler decision rules or take advantage of spurious correlations as shortcuts, and it is difficult to determine whether ToM has actually been learnt.
Here we consider the reverse - rather than solving a task and hoping it induces a theory of mind, we instead explicitly learn a theory of mind over semantically grounded beliefs, and use this as a signal to solve the task. Our fundamental research question is the following: can modeling other agents' _beliefs_ serve as an intrinsic reward signal to improve performance in multi-agent settings?
In this paper we develop an approach to explicitly grounding semantically meaningful beliefs within RL policies. We then propose the use of ToM reasoning over the beliefs of other agents as intrinsic motivation in multi-agent scenarios. We run experiments in a mixed cooperative-competitive environment and show preliminary results that suggest this approach may improve multi-agent performance, with respect to both coordination and deception.
The primary contributions of this paper are the following:
* We develop an information-theoretic residual variant to the concept bottleneck learning paradigm (Koh et al., 2020) based on mutual information minimization.
* We utilize this approach to model semantically-meaningful belief states within RL policies.
* We propose second-order prediction of these beliefs (i.e. ToM reasoning) as an intrinsic motivation signal.
* We present preliminary results that demonstrate improved performance in a mixed cooperative-competitive environment.
## 2 Related Work
### Intrinsic Motivation in Deep RL
Intrinsic motivation in reinforcement learning refers to the use of an additional reward signal to encourage particular agent behaviors without direct feedback from the environment on the task.
In the single-agent setting, common approaches to intrinsic motivation include "curiosity" to encourage visiting novel states (Pathak et al., 2017) and "empowerment" to encourage diversity of reachable states (Mohamed and Jimenez Rezende, 2015).
Most of these approaches can also be extended to the multi-agent setting, but the introduction of multiple agents inherently creates an inter-agent dynamic that can be explored as well. (Jaques et al., 2019) proposed an intrinsic reward for "social influence" by rewarding agents for having high mutual information between their actions. (Wang et al., 2020) develop similar approaches that reward an agent for influencing the state transition dynamics and rewards of other agents.
In contrast, our intrinsic reward approach is predicated on influencing the internal beliefs of other agents, rather than directly influencing their external states or actions.
### Theory of Mind in Multi-Agent RL
Although RL often implicitly involves theory of mind via agent modeling, recent approaches have also sought to model this directly (Rabinowitz et al., 2018).
Within multi-agent reinforcement learning there have been a variety of approaches inspired by ToM reasoning, modeling beliefs (Fuchs et al., 2021; Wang et al., 2022; Sclar et al., 2022) and intents (Qi and Zhu, 2018; Xu et al., 2019). Other inverse reinforcement learning methods approach ToM-like reasoning by conditioning the reward function on inferred latent characteristics (Tian et al., 2021; Wu et al., 2023). Most of these are aimed at improving coordination in cooperative multi-agent scenarios, particularly with regard to communication (Sclar et al., 2022; Wang et al., 2022).
### Concept Learning
Concept learning, generally speaking, is an approach to interpretability for deep neural networks that involves enforcing structure on the latent space to represent grounded, semantically meaningful "concepts".
One such approach is concept whitening (Chen et al., 2020), in which an intermediate layer is inserted for orthogonal alignment of data in the latent space with predefined human-interpretable concept labels, with concepts provided via auxiliary datasets. The restriction with this method is the inherent assumption that all concepts are non-overlapping.
Concept bottleneck models (Koh et al., 2020) are a similar approach, consisting of a concept extractor directly supervised on concept labels and a predictor network that generates an output from these concepts. While more flexible than concept whitening in the sense that it can encode any set of concepts, this approach still makes the assumption that the provided set of concepts alone is expressive enough for the predictive task; performance suffers when this is not the case.
Some approaches mitigate this by combining the concept predictions with a residual extracted from the input, but they either impose additional constraints (e.g. orthogonality) on the combined output that may not hold (Zabounidis et al., 2023), or do not provide a way to directly ensure that the information encoded by the residual does not overlap with the concepts (Yuksekgounul et al., 2022), allowing the model to effectively ignore concepts in its decision-making process.
While prior work has used these approaches in the context of imitation and reinforcement learning (Oguntola et al., 2021; Zabounidis et al., 2023), in this work we specifically examine concept learning as a way to approach the challenge of grounding semantically meaningful _mental states_ within policies. We also develop a residual variant that directly encourages decorrelation between concepts and residual while avoiding the introduction of any restrictive assumptions.
## 3 Method
### Modeling Beliefs via Concept Learning
In deep reinforcement learning, policies are typically black box models that directly map states to actions. Our approach follows the paradigm of concept learning (Yi et al., 2018; Chen et al., 2020; Koh et al., 2020; Yeh et al., 2020; Oguntola et al., 2021; Zabounidis et al., 2023), which involves inserting an intermediate _concept layer_ designed to align with human-interpretable "concepts", typically via a supervised auxiliary loss. In our setting, these concepts are designed to model _beliefs_ about the environment. For instance, in an environment with a door, one could model the belief over whether the door is locked as a binary concept \(b_{locked}\in\{0,1\}\).
\[L_{belief}=\begin{cases}\mathrm{MSE}(\mathbf{b},\mathbf{b}^{\prime})&\text{ if continuous}\\ \mathrm{CE}(\mathbf{b},\mathbf{b}^{\prime})&\text{if discrete}\end{cases} \tag{1}\]
where \(\mathbf{b}\) is the agent belief vector, \(\mathbf{b}^{\prime}\) is the ground truth, MSE is the mean-squared error, and CE is the cross entropy loss.
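In PyTorch, this auxiliary supervision amounts to a few lines (a minimal sketch; we assume discrete beliefs are produced as logits):

```python
import torch.nn.functional as F

def belief_loss(belief_pred, belief_true, discrete=True):
    # Equation 1: supervise the belief layer with ground-truth environment state
    if discrete:
        return F.cross_entropy(belief_pred, belief_true)  # logits vs. class ids
    return F.mse_loss(belief_pred, belief_true)           # continuous beliefs
```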
These beliefs are then used to generate an action. However, depending on the selection of beliefs, they alone may not be a sufficient signal to learn a policy that successfully solves a given task. We mitigate this by additionally introducing a _residual_ - a compressed representation of the input that is concatenated to the belief vector. Given vector input \(\mathbf{x}\), we have our residual network generate \(r(\mathbf{x})=\mathbf{z}\).
It is important that our residual and beliefs be disentangled - that is, the residual should not contain any information about the beliefs - as otherwise our model may simply learn to rely entirely on the residual and ignore the beliefs, which would compromise the interpretability of the policy.
We approach "disentanglement" from a probability theory perspective, aiming to ensure that the belief and residual vectors are statistically independent. Here our goal is to minimize the mutual information between the belief vector and residual, which is zero if and only if they are independent. This measure can also be characterized as KL-divergence between the joint distribution and the product of the marginal distributions:
\[I(B;Z)=D_{KL}(\mathbb{P}_{BZ}\parallel\mathbb{P}_{B}\otimes\mathbb{P}_{Z}) \tag{2}\]
To achieve this, we utilize the variational approach from (Cheng et al., 2020) and minimize a contrastive log-ratio upper bound:
\[L_{q}(\theta) =-\mathbb{E}_{p_{\sigma}(\mathbf{b},\mathbf{z})}[\log q_{\theta} (\mathbf{z}|\mathbf{b})] \tag{3}\] \[L_{residual}(\sigma) =\mathbb{E}_{p_{\sigma}(\mathbf{b},\mathbf{z})}[\log q_{\theta} (\mathbf{z}|\mathbf{b})]\] \[\quad-\mathbb{E}_{p_{\sigma}(\mathbf{b})}\mathbb{E}_{p_{\sigma} (\mathbf{z})}[\log q_{\theta}(\mathbf{z}|\mathbf{b})] \tag{4}\]
where \(\mathbf{b}\) is the belief vector, \(\mathbf{z}\) is the residual vector, \(p_{\sigma}(\mathbf{b},\mathbf{z})\) is the joint distribution of intermediate outputs from our policy, and \(q_{\theta}(\mathbf{z}|\mathbf{b})\) is a variational approximation to the conditional distribution \(p_{\sigma}(\mathbf{z}|\mathbf{b})\), modeled via a separate neural network trained to minimize negative log-likelihood \(L_{q}(\theta)=-\log\mathcal{L}(\theta)\).
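A minimal PyTorch sketch of this estimator is shown below, following the common sampled (shuffle-based) variant of CLUB; the diagonal-Gaussian parameterization of \(q_{\theta}(\mathbf{z}|\mathbf{b})\) and the layer sizes are our assumptions.

```python
import torch
import torch.nn as nn

class CLUB(nn.Module):
    # Variational q_theta(z|b), modeled as a diagonal Gaussian over the residual z.
    def __init__(self, b_dim, z_dim, hidden=64):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(b_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))
        self.logvar = nn.Sequential(nn.Linear(b_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))

    def log_q(self, b, z):
        mu, logvar = self.mu(b), self.logvar(b)
        return (-((z - mu) ** 2) / logvar.exp() - logvar).sum(dim=-1)

    def variational_loss(self, b, z):
        # Equation 3: fit q_theta by maximum likelihood on detached (b, z) pairs
        return -self.log_q(b.detach(), z.detach()).mean()

    def mi_upper_bound(self, b, z):
        # Equation 4: log-ratio of positive pairs minus shuffled (marginal) pairs
        positive = self.log_q(b, z).mean()
        negative = self.log_q(b, z[torch.randperm(z.size(0))]).mean()
        return positive - negative
```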
Unlike approaches based on concept whitening (Oguntola et al., 2021; Zabounidis et al., 2023), our method of disentanglement does not assume or impose any intra-dimensional orthogonality constraints within the concept (i.e. belief) or residual layers, but rather decorrelates the two vectors as a whole. Specifically, we make no restrictive assumptions that concepts are mutually exclusive, and also retain full multi-dimensional expressiveness within our residual representation while simultaneously minimizing correlation with our concept vector.
Finally, the concatenated output \((\mathbf{b},\mathbf{z})\) is fed into the rest of the actor network to generate an action. The concept layer and residual layer are trained by adding the additional loss terms to the objective function optimized by the reinforcement learning algorithm of choice. For our experiments we use the PPO objective from (Schulman et al., 2017), but generally speaking this approach is agnostic to the particular RL algorithm chosen.
\[L_{PPO}(\sigma)=\mathbb{E}_{t}[\min(r_{t}(\sigma)A_{t},\text{clip}(r_{t}(\sigma),1-\epsilon,1+\epsilon)A_{t})] \tag{5}\] \[L_{policy}=\alpha L_{PPO}+\beta L_{belief}+\gamma L_{residual} \tag{6}\]
where \(r_{t}(\sigma)=\frac{\pi_{\sigma}(a_{t}|s_{t})}{\pi_{\sigma_{old}}(a_{t}|s_{t})}\) is the PPO probability ratio, \(\pi_{\sigma}\) is the policy to be optimized, \(A_{t}\) is the advantage function, and \(\alpha,\beta,\gamma,\epsilon>0\) are hyperparameters.
Figure 1: Policy models with 1st and 2nd-order belief prediction. The belief predictor is supervised by ground truth labels, and the residual network is regularized via mutual information minimization with respect to beliefs.
During training, for each batch we optimize both the policy loss \(L_{policy}\) (with respect to the policy parameters \(\sigma\)) and the variational loss \(L_{q}\) (with respect to the variational parameters \(\theta\)).
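Putting the pieces together, one training step might look like the following sketch, which reuses the `belief_loss` and `CLUB` sketches above; `ppo_loss` and `policy.beliefs_and_residual` are hypothetical helpers standing in for the PPO objective (Equation 5) and the policy's intermediate outputs.

```python
def train_step(batch, policy, club, policy_opt, q_opt, alpha, beta, gamma):
    # 1) fit the variational network q_theta(z|b) on detached samples (Eq. 3)
    b, z = policy.beliefs_and_residual(batch.obs)
    q_loss = club.variational_loss(b, z)
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    # 2) update the policy on the combined objective (Eq. 6); policy_opt holds
    #    only policy parameters, so the CLUB network is untouched here
    b, z = policy.beliefs_and_residual(batch.obs)
    loss = (alpha * ppo_loss(policy, batch)
            + beta * belief_loss(b, batch.true_beliefs)
            + gamma * club.mi_upper_bound(b, z))
    policy_opt.zero_grad(); loss.backward(); policy_opt.step()
```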
### Second-Order Belief Prediction
In a multi-agent scenario where each agent is reasoning over the same set of beliefs over the environment, consider the _second-order belief_ as one agent's prediction of another agent's beliefs. It is important to note that the first-order belief of an agent may be incorrect, in which case a correct second-order belief would successfully predict this false belief.
For instance, consider a scenario where a door is locked but agent A believes the door is unlocked. Agent B should ideally have 1) the first-order belief that the door is locked, and 2) the second-order belief that agent A thinks the door is unlocked.
Our approach proposes the use of second-order belief prediction as an intrinsic reward. Intuitively speaking, we want to incentivize each agent to 1) learn to predict the beliefs of other agents and 2) learn to behave in a way such that the beliefs of the other agents will be predictable (e.g. learning to observe other agents, learning to communicate, etc).
We do this by augmenting the agent's belief network to produce not only its own belief vector, but also a belief vector prediction for each of the other agents.
\[\mathbf{B}=[\mathbf{b}+f(\mathbf{x})_{i}]_{i=1}^{K} \tag{7}\]
where \(K\) is the total number of agents, \(\mathbf{B}\) is the \(K\times dim(\mathbf{b})\) second-order belief matrix, and \(f:\mathbb{R}^{\dim(\mathbf{x})}\rightarrow\mathbb{R}^{K\times\dim(\mathbf{b})}\) is modeled by a neural network.
Rather than treat this as a directly-supervised auxiliary task, we instead include the second-order prediction loss as an additional reward term, as we want the policy's value estimation to be biased towards states where both the current and the **future** beliefs (or belief distributions) of the other agents tend to be predictable (e.g. states where it can gain information about other agents).
Then the intrinsic reward becomes the negative belief prediction loss:
\[r_{tom}=\begin{cases}-\frac{1}{K}\sum_{i=1}^{K}MSE(\mathbf{B}_{i},\mathbf{b}^{ (i)})&\text{if continuous}\\ -\frac{1}{K}\sum_{i=1}^{K}CE(\mathbf{B}_{i},\mathbf{b}^{(i)})&\text{if discrete} \end{cases} \tag{8}\]
\[r=r_{task}+\lambda r_{tom} \tag{9}\]
where \(\lambda\geq 0\) is a hyperparameter.
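A sketch of this reward computation is given below, assuming the second-order predictions and the other agents' beliefs are stacked into \((K,\dim(\mathbf{b}))\) tensors; the function and argument names are illustrative, not taken from our implementation.

```python
import torch.nn.functional as F

def tom_intrinsic_reward(pred_beliefs, true_beliefs, discrete=False):
    # pred_beliefs: (K, dim_b) rows B_i of the second-order belief matrix.
    # true_beliefs: the other agents' own beliefs b^(i); (K, dim_b) floats
    # in the continuous case, (K,) class indices in the discrete case.
    if discrete:
        loss = F.cross_entropy(pred_beliefs, true_beliefs)  # averages over K
    else:
        loss = ((pred_beliefs - true_beliefs) ** 2).mean(dim=1).mean()
    return -loss.item()  # Eq. (8)

def total_reward(r_task, r_tom, lam=0.1):
    return r_task + lam * r_tom  # Eq. (9)
```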
### Training vs Execution
The training setup requires that all agents are trained in the manner previously described, and we assume that the beliefs of other agents are available during centralized training to calculate intrinsic reward.
During training we do not propagate gradients from the policy or reward through the 1st-order belief prediction network; that is, the 1st-order belief prediction network is only updated from the supervised belief loss on ground truth values from the environment, and is unaffected by the reward dynamics of the task. In combination with the mutual information regularization for the residual, this ensures that any belief information relevant to an agent's policy comes only from the agent's ability to infer the correct values of said beliefs from the environment. This approach eliminates any potential issues with a "malicious actor" purposefully generating incorrect belief predictions.
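In implementation terms, this gradient stop amounts to detaching the belief vector before it enters the actor. The PyTorch-style sketch below illustrates the idea; the module names and dimensions are placeholders, not our actual architecture.

```python
import torch
import torch.nn as nn

obs_dim, belief_dim, res_dim, n_actions = 16, 8, 8, 5
belief_net = nn.Linear(obs_dim, belief_dim)    # illustrative stand-in modules
residual_net = nn.Linear(obs_dim, res_dim)
actor_head = nn.Linear(belief_dim + res_dim, n_actions)

obs = torch.randn(4, obs_dim)
b = belief_net(obs)     # trained only via the supervised belief loss
z = residual_net(obs)   # regularized via mutual-information minimization
# Detaching b blocks policy/reward gradients from reaching the belief head.
logits = actor_head(torch.cat([b.detach(), z], dim=-1))
```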
Execution, on the other hand, does not require beliefs or any inner states of other agents, and thus can be done with other policies that were not trained with our training setup or architecture - or even with human agents.
## 4 Experiments
### ParticleWorld: Physical Deception
We use a variant of the physical deception task described in (Lowe et al., 2017). This environment consists of \(N\) landmarks, \(N\) green "good" agents and a single red adversary agent within a 2D world.
In our variant, one of the landmarks is the "target", but neither the good agents nor the adversary are initially told which one. The \(N\) green agents receive a joint reward based on the minimum distance to the target landmark, with each agent's contribution weighted by a randomly generated reward coefficient \(\eta_{i}\sim\mathrm{Uniform}[0,1]\). Similarly, the adversary is penalized based on its distance from the target.
The episode ends either after a fixed time limit, or when the adversary reaches any landmark. If this is the target landmark, the adversary receives a positive reward, otherwise a negative penalty (both time-scaled).

Figure 2: ParticleWorld physical deception environment.
\[r_{good}(t) =-\min_{i}\left\{d(\mathbf{x}_{i,t},\mathbf{x}_{target})\right\}+d(\mathbf{x}_{adv,t},\mathbf{x}_{target}) \tag{10}\] \[r_{adv}(t) =-d(\mathbf{x}_{adv,t},\mathbf{x}_{target})+\mathbb{I}[\mathbf{x}_{adv,t}=\mathbf{x}_{target}](1-t/T)-\mathbb{I}[\mathbf{x}_{adv,t}=\mathbf{x}_{other}](1-t/T) \tag{11}\]
where \(d\) is Euclidean distance, \(\mathbf{x}_{target}\) is the position of the target, \(\mathbf{x}_{other}\) is the position of the non-target landmark, \(\mathbf{x}_{i,t}\) is the position of good agent \(i\) at time \(t\), \(\mathbf{x}_{adv,t}\) is the position of the adversary agent at time \(t\), and \(T\) is the maximum episode length.
The adversary is incentivized to find and navigate to the target as quickly as possible. On the other hand, the green agents are incentivized to keep the adversary uncertain as long as possible while accumulating reward.
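The rewards above can be computed in a few lines of NumPy; the sketch assumes vectorized positions, and the function names and the `reached` flag are illustrative conveniences rather than the environment's actual API.

```python
import numpy as np

def good_reward(good_pos, adv_pos, target):
    # Eq. (10): negative minimum distance of any good agent to the target,
    # plus the adversary's distance from the target.
    dists = np.linalg.norm(good_pos - target, axis=1)  # good_pos: (N, 2)
    return -dists.min() + np.linalg.norm(adv_pos - target)

def adversary_reward(adv_pos, target, t, T, reached=None):
    # Eq. (11): distance penalty plus a time-scaled terminal bonus
    # (target reached) or penalty (non-target landmark reached).
    r = -np.linalg.norm(adv_pos - target)
    if reached == "target":
        r += 1.0 - t / T
    elif reached == "other":
        r -= 1.0 - t / T
    return r
```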
**Observations.** Each agent policy takes in a vector observation indicating the relative positions of landmarks and other agents. The good agents can also observe the weighted sum of their distances to the target landmark (weighted via their reward coefficients), whereas the adversary must rely on observing the other agents' behavior to determine which landmark is the target.
**Actions.** Each agent moves via a discrete action space.
**Beliefs.** In this scenario each agent is trained with two sets of first-order beliefs:
1. Which landmark is the target?
2. What are the reward coefficients for each agent?
### Training
We use Multi-Agent Proximal Policy Optimization (MAPPO) to train all agents in our experiments, under the paradigm of centralized training with decentralized execution (CTDE) (Yu et al., 2022). Our training procedure alternates between optimizing the policy for the good agents and the policy for the adversary, where one policy remains fixed while the other's weights are trained; we swap every 100k timesteps.
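Schematically, the alternation can be organized as below; `Trainer.train` stands in for whatever MAPPO update routine is used and is not a specific library call.

```python
def alternating_training(good_trainer, adv_trainer, total_steps, swap_every=100_000):
    # One policy is optimized while the other stays frozen; roles swap
    # every `swap_every` environment timesteps, as described above.
    trainers = [good_trainer, adv_trainer]
    for phase in range(total_steps // swap_every):
        trainers[phase % 2].train(num_timesteps=swap_every)
```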
## 5 Preliminary Results
We trained agents with various belief-prediction configurations on the physical deception task with \(N=2\) landmarks; training curves are shown in Figure 3, and the mean episodic rewards achieved by the final policies are shown in Table 1. We report the mean episode reward obtained with the best hyperparameter setting over 20 episodes, for each of 5 random seeds.
We find that agents with the 2nd-order intrinsic reward perform significantly better in relation to the opposition. This phenomenon is observed for both the green good agents and the red adversary.
### Qualitative Analysis of Observed Strategies
Below, we qualitatively assess and summarize the strategies observed with the final trained policies from each of the configurations we considered.
**Baseline (no beliefs).** Each green agent drifts towards a unique landmark. The red adversary appears to drift randomly.
**1st-order beliefs only (all agents).** Similar behavior to the baseline.
**2nd-order beliefs (green agents).** Each green agent drifts towards a specific landmark. In some episodes, green agents swap between landmarks.
**2nd-order beliefs (red adversary).** The red adversary tends to be more decisive, moving quickly to a landmark.
In both cases we observe that the incorporation of the 2nd-order intrinsic reward tends to lead to the exhibition of more complex strategies that do not seem to be discovered with the baseline MARL approach, or even when learning with 1st-order beliefs alone.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline 1st-Order (Good) & 1st-Order (Adv.) & 2nd-Order (Good) & 2nd-Order (Adv.) & Episode Reward (Good) & Episode Reward (Adv.) \\ \hline No & No & No & No & 1.889 (\(\pm\) 0.23) & -15.32 (\(\pm\) 0.51) \\ Yes & Yes & No & No & 2.209 (\(\pm\) 0.11) & -15.17 (\(\pm\) 0.29) \\ Yes & Yes & **Yes** & No & **2.760 (\(\pm\) 0.44)** & -17.78 (\(\pm\) 0.32) \\ Yes & Yes & No & **Yes** & 1.636 (\(\pm\) 0.41) & **-14.01 (\(\pm\) 0.30)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance on the ParticleWorld physical deception task in various configurations. We present the mean cumulative reward of the final trained policies, averaged across 5 random seeds, where (Good) indicates the green good agents and (Adv.) indicates the red adversary. With respect to beliefs, we vary whether each policy generates 1st-order predictions, 2nd-order predictions, or none at all. Episode reward variance is given in parentheses.
## 6 Ongoing and Future Work
Although preliminary results indicate our approach may be effective, they are with respect to a single, relatively simple environment. We are currently examining more complex multi-agent tasks with more varied social dynamics, and additionally scaling the approach to scenarios with more (or even an arbitrary number of) agents.
Beyond continuing to experiment with other environments, we are particularly interested in studying the efficacy of our approach in communication; both in more traditional cooperative scenarios as well as potentially in competitive tasks.
We are also interested in a more thorough investigation of our concept-residual approach in comparison with the standard whitening or bottleneck approaches (Chen et al., 2020; Koh et al., 2020).
## Acknowledgements
This work is supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0036, and by the AFRL/AFOSR award FA9550-18-1-0251.
|
2304.00716 | A spectral extremal problem on non-bipartite triangle-free graphs | A theorem of Nosal and Nikiforov states that if $G$ is a triangle-free graph
with $m$ edges, then $\lambda (G)\le \sqrt{m}$, where the equality holds if and
only if $G$ is a complete bipartite graph. A well-known spectral conjecture of
Bollob\'{a}s and Nikiforov [J. Combin. Theory Ser. B 97 (2007)] asserts that if
$G$ is a $K_{r+1}$-free graph with $m$ edges, then $\lambda_1^2(G) +
\lambda_2^2(G) \le (1-\frac{1}{r})2m$. Recently, Lin, Ning and Wu [Combin.
Probab. Comput. 30 (2021)] confirmed the conjecture in the case $r=2$. Using
this base case, they proved further that $\lambda (G)\le \sqrt{m-1}$ for every
non-bipartite triangle-free graph $G$, with equality if and only if $m=5$ and
$G=C_5$. Moreover, Zhai and Shu [Discrete Math. 345 (2022)] presented an
improvement by showing $\lambda (G) \le \beta (m)$, where $\beta(m)$ is the
largest root of $Z(x):=x^3-x^2-(m-2)x+m-3$. The equality in Zhai--Shu's result
holds only if $m$ is odd and $G$ is obtained from the complete bipartite graph
$K_{2,\frac{m-1}{2}}$ by subdividing exactly one edge. Motivated by this
observation, Zhai and Shu proposed a question to find a sharp bound when $m$ is
even. We shall solve this question by using a different method and characterize
three kinds of spectral extremal graphs over all triangle-free non-bipartite
graphs with even size. Our proof technique is mainly based on applying Cauchy
interlacing theorem of eigenvalues of a graph, and with the aid of a triangle
counting lemma in terms of both eigenvalues and the size of a graph. | Yongtao Li, Lihua Feng, Yuejian Peng | 2023-04-03T04:38:37Z | http://arxiv.org/abs/2304.00716v2 | # A solution of Zhai-Shu's question on spectral extremal problems+
###### Abstract
A theorem of Nosal and Nikiforov states that if \(G\) is a triangle-free graph with \(m\) edges, then \(\lambda(G)\leq\sqrt{m}\), equality holds if and only if \(G\) is a complete bipartite graph. A well-known spectral conjecture of Bollobas and Nikiforov [J. Combin. Theory Ser. B 97 (2007)] asserts that if \(G\) is a \(K_{r+1}\)-free graph with \(m\) edges, then \(\lambda_{1}^{2}(G)+\lambda_{2}^{2}(G)\leq(1-\frac{1}{r})2m\). Recently, Lin, Ning and Wu [Combin. Probab. Comput. 30 (2021)] confirmed the conjecture in the case \(r=2\). Using this base case, they proved further that \(\lambda(G)\leq\sqrt{m-1}\) for every non-bipartite triangle-free graph \(G\), with equality if and only if \(m=5\) and \(G=C_{5}\). Moreover, Zhai and Shu [Discrete Math. 345 (2022)] presented an improvement by showing \(\lambda(G)\leq\beta(m)\), where \(\beta(m)\) is the largest root of \(Z(x):=x^{3}-x^{2}-(m-2)x+m-3\). The equality in Zhai-Shu's result holds only if \(m\) is odd and \(G\) is obtained from the complete bipartite graph \(K_{2,\frac{m-1}{2}}\) by subdividing exactly one edge. Motivated by this observation, Zhai and Shu proposed a question to find a sharp bound when \(m\) is even. We shall solve this question by using a different method and characterize three kinds of spectral extremal graphs over all triangle-free non-bipartite graphs with even size. Our proof technique is mainly based on applying Cauchy's interlacing theorem of eigenvalues of a graph, and with the aid of a triangle counting lemma in terms of both eigenvalues and the size of a graph.
**Key words:** Nosal theorem; non-bipartite graphs; Cauchy interlacing theorem.
**2010 Mathematics Subject Classification.** 05C50, 05C35.
## 1 Introduction
Let \(G\) be a simple graph with vertex set \(V(G)\) and edge set \(E(G)\). We usually write \(n\) and \(m\) for the number of vertices and edges, respectively. One of the main problems of algebraic graph theory is to determine the combinatorial properties of a graph that are reflected from the algebraic properties of its associated matrices. Let \(G\) be a simple graph on \(n\) vertices.
The _adjacency matrix_ of \(G\) is defined as \(A(G)=[a_{ij}]_{n\times n}\) where \(a_{ij}=1\) if two vertices \(v_{i}\) and \(v_{j}\) are adjacent in \(G\), and \(a_{ij}=0\) otherwise. We say that \(G\) has eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) if these values are eigenvalues of the adjacency matrix \(A(G)\). Let \(\lambda(G)\) be the maximum value in absolute among all eigenvalues of \(G\), which is known as the _spectral radius_ of \(G\).
### The spectral extremal graph problems
A graph \(G\) is called _\(F\)-free_ if it does not contain an isomorphic copy of \(F\) as a subgraph. Clearly, every bipartite graph is \(C_{3}\)-free. The _Turan number_ of a graph \(F\) is the maximum number of edges in an \(n\)-vertex \(F\)-free graph, and it is usually denoted by \(\mathrm{ex}(n,F)\). An \(F\)-free graph on \(n\) vertices with \(\mathrm{ex}(n,F)\) edges is called an _extremal graph_ for \(F\). As is known to all, the Mantel theorem (see, e.g., [2]) asserts that if \(G\) is a triangle-free graph on \(n\) vertices, then
\[e(G)\leq\lfloor n^{2}/4\rfloor, \tag{1}\]
equality holds if and only if \(G\) is the balanced complete bipartite graph \(K_{\lfloor\frac{n}{2}\rfloor,\lceil\frac{n}{2}\rceil}\).
There are numerous extensions and generalizations of Mantel's theorem; see [3, 5]. Especially, Turan (see, e.g., [2, pp. 294-301]) extended Mantel's theorem by showing that if \(G\) is a \(K_{r+1}\)-free graph on \(n\) vertices with maximum number of edges, then \(G\) is isomorphic to the graph \(T_{r}(n)\), where \(T_{r}(n)\) denotes the complete \(r\)-partite graph whose part sizes are as equal as possible. Each vertex part of \(T_{r}(n)\) has size either \(\lfloor\frac{n}{r}\rfloor\) or \(\lceil\frac{n}{r}\rceil\). The graph \(T_{r}(n)\) is usually called Turan's graph. Five alternative proofs of Turan's theorem are selected into THE BOOK1[1, p. 285]. Moreover, we refer the readers to the surveys [10, 42].
Footnote 1: Paul ErdΕs liked to talk about THE BOOK, in which God maintains the perfect proofs for mathematical theorems, and he also said that you need not believe in God but you should believe in THE BOOK.
Spectral extremal graph theory, with its connections and applications to numerous other fields, has enjoyed tremendous growth in the past few decades. There is a rich history on the study of bounding the eigenvalues of a graph in terms of various parameters. For example, one can refer to [4] for spectral radius and cliques, [36] for independence number and eigenvalues, [43, 23] for eigenvalues of outerplanar and planar graphs, [8, 50] for excluding friendship graph, and [44, 51, 12] for excluding minors. It is a traditional problem to bound the spectral radius of a graph. Let \(G\) be a graph on \(n\) vertices with \(m\) edges. It is natural to ask how large the spectral radius \(\lambda(G)\) may have. A well-known result states that
\[\lambda(G)\leq\sqrt{2m}. \tag{2}\]
This bound can be guaranteed by \(\lambda(G)^{2}\leq\sum_{i=1}^{n}\lambda_{i}^{2}=\mathrm{Tr}(A^{2}(G))=\sum_{i=1 }^{n}d_{i}=2m\). We recommend the readers to [13, 14, 33] for more extensions.
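This inequality is also easy to check numerically; the following NumPy snippet (an illustrative sanity check, not part of any proof) samples a random graph and verifies (2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.triu(rng.integers(0, 2, size=(n, n)), k=1)  # random edges above diagonal
A = A + A.T                                        # symmetric adjacency matrix
m = A.sum() // 2                                   # number of edges
lam = np.linalg.eigvalsh(A).max()                  # spectral radius of G
assert lam <= np.sqrt(2 * m) + 1e-9                # inequality (2)
```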
It is also a popular problem to study the extremal structure for graphs with given number of edges. For example, it is not difficult to show that if \(G\) has \(m\) edges, then \(G\) contains at most \(\frac{\sqrt{8}}{6}m^{3/2}\) triangles; see, e.g., [2, p. 304] and [7]. In addition, it is an instrumental topic to study the interplay between these two problems mentioned-above. More precisely, one can investigate the largest eigenvalue of the adjacency matrix in a triangle-free graph with
given number of edges2. Dating back to 1970, Nosal [41] and Nikiforov [33, 36] independently obtained such a result.
Footnote 2: Note that when we consider the result on a graph with respect to the given number of edges, we shall ignore the possible isolated vertices if there are no confusions.
**Theorem 1.1** (Nosal [41], Nikiforov [33, 36]).: _Let \(G\) be a graph with \(m\) edges. If \(G\) is triangle-free, then_
\[\lambda(G)\leq\sqrt{m}, \tag{3}\]
_equality holds if and only if \(G\) is a complete bipartite graph._
Mantel's theorem in (1) can be derived from (3). Indeed, using Rayleigh's inequality, we have \(\frac{2m}{n}\leq\lambda(G)\leq\sqrt{m}\), which yields \(m\leq\lfloor n^{2}/4\rfloor\). Thus, Theorem 1.1 could be viewed as a spectral version of Mantel's theorem. Moreover, Theorem 1.1 implies a result of Lovasz and Pelikan [28], which asserts that if \(G\) is a tree on \(n\) vertices, then \(\lambda(G)\leq\sqrt{n-1}\), equality holds if and only if \(G=K_{1,n-1}\).
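Written out, the first deduction reads
\[\frac{2m}{n}\leq\lambda(G)\leq\sqrt{m}\ \Longrightarrow\ \frac{4m^{2}}{n^{2}}\leq m\ \Longrightarrow\ m\leq\frac{n^{2}}{4},\]
and since \(m\) is an integer, \(m\leq\lfloor n^{2}/4\rfloor\).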
Inequality (3) spurred great interest in studying the maximum spectral radius for \(F\)-free graphs with a given number of edges; see [33, 36] for \(K_{r+1}\)-free graphs, [35, 49, 45] for \(C_{4}\)-free graphs, [48] for \(K_{2,r+1}\)-free graphs, [48, 32] for \(C_{5}\)-free or \(C_{6}\)-free graphs, [30] for \(C_{7}\)-free graphs, [18, 9, 27] for \(C_{4}^{\triangle}\)-free or \(C_{5}^{\triangle}\)-free graphs, where \(C_{k}^{\triangle}\) is a graph on \(k+1\) vertices obtained from \(C_{k}\) and \(C_{3}\) by sharing a common edge; see [38] for \(B_{k}\)-free graphs, where \(B_{k}\) denotes the book graph consisting of \(k\) triangles sharing a common edge, [22] for \(F_{2}\)-free graphs with given number of edges, where \(F_{2}\) is the friendship graph consisting of two triangles intersecting in a common vertex, and [39, 40] for counting the number of \(C_{3}\) and \(C_{4}\). We refer the readers to the surveys [37, 19] and references therein.
In particular, Bollobas and Nikiforov [4] posed the following nice conjecture.
**Conjecture 1.2** (Bollobas-Nikiforov, 2007).: _Let \(G\) be a \(K_{r+1}\)-free graph of order at least \(r+1\) with \(m\) edges. Then_
\[\lambda_{1}^{2}(G)+\lambda_{2}^{2}(G)\leq 2m\Big{(}1-\frac{1}{r}\Big{)}.\]
Recently, Lin, Ning and Wu [24] confirmed the base case \(r=2\); see, e.g., [38, 17] for related results. Furthermore, the base case leads to Theorem 1.3 in next section.
### The non-bipartite triangle-free graphs
The extremal graphs determined in Theorem 1.1 are the complete bipartite graphs. Beyond these largest extremal graphs, the second largest extremal graphs have been extensively studied over the past years. In this paper, we will pay attention mainly to the spectral extremal problems for non-bipartite triangle-free graphs with a given number of edges. Using inequalities from majorization theory, Lin, Ning and Wu [24] confirmed the triangle case in Conjecture 1.2, and then they proved the following result.
**Theorem 1.3** (Lin-Ning-Wu, 2021).: _Let \(G\) be a triangle-free graph with \(m\) edges. If \(G\) is non-bipartite, then_
\[\lambda(G)\leq\sqrt{m-1},\]
equality holds if and only if \(m=5\) and \(G=C_{5}\)._
The upper bound in Theorem 1.3 is not sharp for \(m>5\). Motivated by this observation, Zhai and Shu [49] provided a further improvement on Theorem 1.3. For every integer \(m\geq 3\), we denote by \(\beta(m)\) the largest root of
\[Z(x):=x^{3}-x^{2}-(m-2)x+m-3. \tag{4}\]
If \(m\) is odd, then we define \(SK_{2,\frac{m-1}{2}}\) as the graph obtained from the complete bipartite graph \(K_{2,\frac{m-1}{2}}\) by subdividing an edge; see Figure 1 for two drawings. Clearly, \(SK_{2,\frac{m-1}{2}}\) is a triangle-free graph with \(m\) edges, and it is non-bipartite as it contains a copy of \(C_{5}\). By computations, we know that \(\beta(m)\) is the spectral radius of \(SK_{2,\frac{m-1}{2}}\).
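This claim is easy to verify numerically. The sketch below (illustrative only; the helper name is ours) builds the adjacency matrix of \(SK_{2,\frac{m-1}{2}}\) and compares its largest eigenvalue with the largest root of \(Z(x)\).

```python
import numpy as np

def sk_adjacency(m):
    # SK_{2,(m-1)/2}: K_{2,t} with one edge subdivided, where t = (m-1)/2.
    t = (m - 1) // 2
    n = t + 3                      # 2 left vertices, t right, 1 subdivision vertex
    A = np.zeros((n, n), dtype=int)
    for u in (0, 1):
        for v in range(2, 2 + t):
            A[u, v] = A[v, u] = 1
    c = n - 1
    A[0, 2] = A[2, 0] = 0          # remove the edge to be subdivided ...
    A[0, c] = A[c, 0] = 1          # ... and reroute it through the new vertex
    A[2, c] = A[c, 2] = 1
    return A

m = 21                             # any odd m >= 5
lam = np.linalg.eigvalsh(sk_adjacency(m)).max()
roots = np.roots([1, -1, -(m - 2), m - 3])         # Z(x) from (4)
beta = roots.real[np.abs(roots.imag) < 1e-8].max()
assert abs(lam - beta) < 1e-6
```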
The improvement of Zhai and Shu [49] on Theorem 1.3 can be stated as below.
**Theorem 1.4** (Zhai-Shu, 2022).: _Let \(G\) be a graph of size \(m\). If \(G\) is triangle-free and non-bipartite, then_
\[\lambda(G)\leq\beta(m),\]
_equality holds if and only if \(G=SK_{2,\frac{m-1}{2}}\)._
Indeed, the result of Zhai and Shu improved Theorem 1.3. It was proved in [49, Lemma 2.2] that for every \(m\geq 6\),
\[\sqrt{m-2}<\beta(m)<\sqrt{m-1}. \tag{5}\]
The original proof of Zhai and Shu [49] for Theorem 1.4 is technical and based on the use of the Perron components. Subsequently, Li and Peng [21] provided an alternative proof by applying Cauchy's interlacing theorem. We remark that \(\lim_{m\to\infty}(\beta(m)-\sqrt{m-2})=0\). In addition, Wang [45] improved Theorem 1.4 slightly by determining all the graphs with size \(m\) whenever it is a non-bipartite triangle-free graph satisfying \(\lambda(G)\geq\sqrt{m-2}\).
### A question of Zhai and Shu
The upper bound in Theorem 1.4 could be attained only if \(m\) is odd, since the extremal graph \(SK_{2,\frac{m-1}{2}}\) is well-defined only in this case. Thus, it is interesting to determine the spectral extremal graph when \(m\) is even. Zhai and Shu in [49, Question 2.1] proposed the following question formally.
**Question 1.5** (Zhai-Shu [49]).: _For even \(m\), what is the extremal graph attaining the maximum spectral radius over all triangle-free non-bipartite graphs with \(m\) edges?_
In this paper, we shall solve this question and determine the spectral extremal graphs. Although Question 1.5 seems to be another side of Theorem 1.4, we would like to point out that the even case is actually more difficult and different, and the original method is ineffective in this case.
**Definition 1.6** (Spectral extremal graphs).: _Suppose that \(m\in 2\mathbb{N}^{*}\). Let \(L_{m}\) be the graph obtained from the subdivision \(SK_{2,\frac{m-2}{2}}\) by hanging an edge on a vertex with the maximum degree. If \(\frac{m-3}{3}\) is a positive integer, then we define \(Y_{m}\) as the graph obtained from \(C_{5}\) by blowing up a vertex to an independent set \(I_{\frac{m-3}{3}}\) on \(\frac{m-3}{3}\) vertices, then adding a new vertex, and joining this vertex to all vertices of \(I_{\frac{m-3}{3}}\). If \(\frac{m-4}{3}\) is a positive integer, then we write \(T_{m}\) for the graph obtained from \(C_{5}\) by blowing up two adjacent vertices to independent sets \(I_{\frac{m-4}{3}}\) and \(I_{2}\), respectively, where \(I_{\frac{m-4}{3}}\) and \(I_{2}\) form a complete bipartite graph; see Figure 2._
**Theorem 1.7** (Main result).: _Let \(m\) be even and \(m\geq 258\). Suppose that \(G\) is a triangle-free graph with \(m\) edges and \(G\) is non-bipartite. (a) If \(m=3t\) for some \(t\in\mathbb{N}^{*}\), then \(\lambda(G)\leq\lambda(Y_{m})\), equality holds if and only if \(G=Y_{m}\). (b) If \(m=3t+1\) for some \(t\in\mathbb{N}^{*}\), then \(\lambda(G)\leq\lambda(T_{m})\), equality holds if and only if \(G=T_{m}\). (c) If \(m=3t+2\) for some \(t\in\mathbb{N}^{*}\), then \(\lambda(G)\leq\lambda(L_{m})\), equality holds if and only if \(G=L_{m}\)._
The construction of \(L_{m}\) is natural. Nevertheless, it is not apparent how to find \(Y_{m}\) and \(T_{m}\). There are analogous results in the literature where the extremal graphs depend on the parity of the size \(m\). For example, the \(C_{5}\)-free or \(C_{6}\)-free spectral extremal graphs with \(m\) edges are determined in [48] when \(m\) is odd, and later in [32] when \(m\) is even. Moreover, the \(C_{4}^{\triangle}\)-free or \(C_{5}^{\triangle}\)-free spectral extremal graphs are determined in [18] for odd \(m\), and subsequently in [9, 27] for even \(m\). In addition, the results of Nikiforov [34], Zhai and Wang [47] showed that the \(C_{4}\)-free spectral extremal graphs with given order \(n\) also rely on the parity of \(n\). In a nutshell, for large size \(m\), there is a common phenomenon that the extremal graphs in the two cases are extremely similar, that is, the extremal graph in the even case is always constructed from that in the odd case by hanging an edge on a vertex with maximum degree. Surprisingly, the extremal graphs in our conclusion break this common phenomenon and reveal a new structure of the extremal graphs.

Figure 2: Extremal graphs in Theorem 1.7.
**Outline of the paper.** In Section 2, we shall present some lemmas, which show that the spectral radius of \(L_{m}\) is smaller than that of \(Y_{m}\) if \(\frac{m}{3}\in\mathbb{N}^{*}\), as well as that of \(T_{m}\) if \(\frac{m-1}{3}\in\mathbb{N}^{*}\). Moreover, we will provide estimates on both \(\lambda(L_{m})\) and \(\beta(m)\). In Section 3, we will show some forbidden induced subgraphs, which help us characterize the local structure of the desired extremal graph. In Section 4, we present the proof of Theorem 1.7. Our proof of Theorem 1.7 is quite different from that of Theorem 1.4 in [49]. The techniques used in our proof borrow some ideas from Lin, Ning and Wu [24] as well as Ning and Zhai [39]. We shall apply Cauchy's interlacing theorem and a triangle counting result, which make full use of the information of all eigenvalues of a graph. In Section 5, we conclude this paper with some possible open problems for interested readers.
**Notations.** We shall follow the standard notation in [6] and consider only simple and undirected graphs. Let \(N(v)\) be the set of neighbors of a vertex \(v\), and \(d(v)\) be the degree of \(v\). For a subset \(S\subseteq V(G)\), we write \(e(S)\) for the number of edges with two endpoints in \(S\), and \(N_{S}(v)=N(v)\cap S\) for the set of neighbors of \(v\) in \(S\). Let \(K_{r+1}\) be the complete graph on \(r+1\) vertices, and \(K_{s,t}\) be the complete bipartite graph with parts of sizes \(s\) and \(t\). Let \(I_{k}\) be an independent set on \(k\) vertices. We write \(C_{n}\) and \(P_{n}\) for the cycle and path on \(n\) vertices, respectively. Given graphs \(G\) and \(H\), we write \(G\cup H\) for the union of \(G\) and \(H\). In other words, \(V(G\cup H)=V(G)\cup V(H)\) and \(E(G\cup H)=E(G)\cup E(H)\). For simplicity, we write \(kG\) for the union of \(k\) copies of \(G\). We denote by \(t(G)\) the number of triangles in \(G\).
## 2 Preliminaries and outline of the proof
In this section, we will give estimates on the spectral radius of \(L_{m}\). Note that \(L_{m}\) exists whenever \(m\) is even, while \(Y_{m}\) and \(T_{m}\) are well-defined only if \(m\,(\text{mod }3)\) is \(0\) or \(1\), respectively. We will show that \(Y_{m}\) and \(T_{m}\) have larger spectral radius than \(L_{m}\). In addition, we will introduce Cauchy's interlacing theorem, a triangle counting result in terms of eigenvalues, and an operation on graphs which strictly increases the spectral radius. Before showing the proof of Theorem 1.7, we will illustrate the key ideas of our proof, and then we outline the main steps of the framework.
### Bounds on the spectral radius of extremal graphs
By computations, we can obtain that \(\lambda(Y_{m})\) is the largest root of
\[Y(x):=x^{4}-x^{3}+(2-m)x^{2}+(m-3)x+\tfrac{m}{3}-1. \tag{6}\]
Similarly, \(\lambda(T_{m})\) is the largest root of
\[T(x):=x^{5}-mx^{3}+\tfrac{7m-22}{3}x+\tfrac{16-4m}{3}, \tag{7}\]
and \(\lambda(L_{m})\) is the largest root of the polynomial
\[L(x):=x^{6}-mx^{4}+(\tfrac{5m}{2}-7)x^{2}+(4-m)x+2-\tfrac{m}{2}. \tag{8}\]
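As a numerical sanity check of these formulas (illustrative only, not part of the proofs), one can compare the largest real roots directly:

```python
import numpy as np

def largest_real_root(coeffs):
    r = np.roots(coeffs)
    return r.real[np.abs(r.imag) < 1e-6].max()

m = 300  # even and divisible by 3, so Y_m is defined
lam_Y = largest_real_root([1, -1, 2 - m, m - 3, m / 3 - 1])                # (6)
lam_L = largest_real_root([1, 0, -m, 0, 5 * m / 2 - 7, 4 - m, 2 - m / 2])  # (8)
print(lam_L < lam_Y)                               # consistent with Lemma 2.2
print(np.sqrt(m - 2.5) < lam_L < np.sqrt(m - 2))   # consistent with Lemma 2.1
```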
**Lemma 2.1**.: _If \(m\in\{6,8,10\}\), then \(\lambda(L_{m})>\sqrt{m-2}\). If \(m\geq 12\) is even, then_
\[\sqrt{m-2.5}<\lambda(L_{m})<\sqrt{m-2}.\]
_Moreover, we have \(\lambda(L_{6})\approx 2.1149\), \(\lambda(L_{8})\approx 2.4938\) and \(\lambda(L_{10})\approx 2.8424\)._
Proof.: The case \(m\in\{6,8,10\}\) is straightforward. Next, we shall consider the case \(m\geq 12\). By a direct computation, it is easy to verify that
\[L(\sqrt{m-2.5})=-(1.25+\sqrt{m-2.5})m+4\sqrt{m-2.5}+3.875<0,\]
which gives \(\lambda(L_{m})>\sqrt{m-2.5}\). Moreover, we have
\[L(\sqrt{m-2})=\frac{1}{2}\left(m^{2}-(9+2\sqrt{m-2})m+8(2+\sqrt{m-2})\right)>0.\]
Furthermore, we have \(L^{\prime}(x):=\frac{\mathrm{d}}{\mathrm{d}x}L(x)=6x^{5}-4mx^{3}+(5m-14)x-m+4\). By calculations, one can check that \(L^{\prime}(\sqrt{m-2})>0\) and \(L^{\prime}(x)\geq 0\) for every \(x\geq\sqrt{m-2}\), which yields \(L(x)>L(\sqrt{m-2})>0\) for every \(x>\sqrt{m-2}\). Thus \(\lambda(L_{m})<\sqrt{m-2}\).
**Lemma 2.2**.: _If \(m\geq 38\) is even and \(m=3t\) for some \(t\in\mathbb{N}^{*}\), then_
\[\lambda(L_{m})<\lambda(Y_{m}).\]
Proof.: We know from (6) that \(\lambda(Y_{m})\) is the largest root of
\[Y(x)=x^{4}-x^{3}+(2-m)x^{2}+(m-3)x+\tfrac{m-3}{3}.\]
By calculations, we can verify that
\[L(x)-x^{2}Y(x)=x^{5}-2x^{4}+(3-m)x^{3}+(\tfrac{13m}{6}-6)x^{2}+(4-m)x+2-\tfrac {m}{2},\]
and for every \(m\geq 38\), we have
\[L(x)-x^{2}Y(x)\Big{|}_{x=\sqrt{m-3}}=\frac{m^{2}}{6}-m\sqrt{m-3}-m+4\sqrt{m-3 }+2>0.\]
Moreover, we can show that \(\frac{\mathrm{d}}{\mathrm{d}x}(L(x)-x^{2}Y(x))>0\) for every \(x\geq\sqrt{m-3}\). Thus, it follows that \(L(x)>x^{2}Y(x)\) for every \(x\geq\sqrt{m-3}\). So \(\lambda(L_{m})<\lambda(Y_{m})\), as needed.
**Lemma 2.3**.: _If \(m\geq 10\) is even and \(m=3t+1\) for some \(t\in\mathbb{N}^{*}\), then_
\[\lambda(L_{m})<\lambda(T_{m}).\]
Proof.: Recall in (7) that \(\lambda(T_{m})\) is the largest root of \(T(x)\). It is sufficient to prove that \(L(x)>xT(x)\) for every \(x\geq 3\). Upon computation, we can get
\[L(x)-xT(x)=\frac{m+2}{6}x^{2}+\frac{m-4}{3}x+\frac{4-m}{2}>0.\]
Consequently, we have \(\lambda(L_{m})<\lambda(T_{m})\), as desired.
The next lemma provides a refinement on (5) for every \(m\geq 62\).
**Lemma 2.4**.: _Let \(m\) be even and \(m\geq 62\). Then_
\[\sqrt{m-2}<\beta(m)<\sqrt{m-1.85}.\]
Proof.: Firstly, we have \(Z(\sqrt{m-2})=-1<0\), which yields \(\sqrt{m-2}<\beta(m)\). Secondly, one can check that \(Z(\sqrt{m-1.85})>0\) for every \(m\geq 62\), and \(Z^{\prime}(x)=3x^{2}-2x-(m-2)>0\) for \(x\geq\sqrt{m-1.85}\). Therefore, we have \(Z(x)>Z(\sqrt{m-1.85})>0\) for every \(x>\sqrt{m-1.85}\), which yields \(\beta(m)<\sqrt{m-1.85}\), as required.
The following lemma is referred to as the eigenvalue interlacing theorem, also known as Cauchy's interlacing theorem, which states that the eigenvalues of a principal submatrix of a Hermitian matrix interlace those of the underlying matrix; see, e.g., [52, pp. 52-53] or [53, pp. 269-271]. The eigenvalue interlacing theorem is a powerful tool to extremal combinatorics and plays a significant role in two recent breakthroughs [15, 16].
**Lemma 2.5** (Eigenvalue Interlacing Theorem).: _Let \(H\) be an \(n\times n\) Hermitian matrix partitioned as_
\[H=\begin{bmatrix}A&B\\ B^{*}&C\end{bmatrix},\]
_where \(A\) is an \(m\times m\) principal submatrix of \(H\) for some \(m\leq n\). Then for every \(1\leq i\leq m\),_
\[\lambda_{n-m+i}(H)\leq\lambda_{i}(A)\leq\lambda_{i}(H).\]
Recall that \(t(G)\) denotes the number of triangles in \(G\). It is well-known that the value of \((i,j)\)-entry of \(A^{k}(G)\) is equal to the number of walks of length \(k\) in \(G\) starting from vertex \(v_{i}\) to \(v_{j}\). Since each triangle of \(G\) contributes \(6\) closed walks of length \(3\), we can count the number of triangles and obtain
\[t(G)=\frac{1}{6}\sum_{i=1}^{n}A^{3}(i,i)=\frac{1}{6}\mathrm{Tr}(A^{3})=\frac{1 }{6}\sum_{i=1}^{n}\lambda_{i}^{3}. \tag{9}\]
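For instance, both forms of the count can be checked on a small graph (an illustrative snippet, not part of the argument):

```python
import numpy as np

# Two triangles sharing the edge {2, 3}: edges 12, 13, 23, 24, 34.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
assert np.trace(np.linalg.matrix_power(A, 3)) // 6 == 2   # trace form of (9)
lam = np.linalg.eigvalsh(A)
assert round((lam ** 3).sum() / 6) == 2                   # eigenvalue form of (9)
```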
The forthcoming lemma could be regarded as a triangle spectral counting lemma in terms of both the eigenvalues and the size of a graph. This could be viewed as a useful variant of (9) by using \(\sum_{i=1}^{n}\lambda_{i}^{2}=\mathrm{tr}(A^{2})=\sum_{i=1}^{n}d_{i}=2m\).
**Lemma 2.6** (see [39]).: _Let \(G\) be a graph on \(n\) vertices with \(m\) edges. If \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) are all eigenvalues of \(G\), then_
\[t(G)=\frac{1}{6}\sum_{i=2}^{n}(\lambda_{1}+\lambda_{i})\lambda_{i}^{2}+\frac{ 1}{3}(\lambda_{1}^{2}-m)\lambda_{1}.\]
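For completeness, the identity can be checked directly from (9): since \(\sum_{i=2}^{n}\lambda_{i}^{2}=2m-\lambda_{1}^{2}\), the right-hand side equals
\[\frac{\lambda_{1}}{6}\big(2m-\lambda_{1}^{2}\big)+\frac{1}{6}\sum_{i=2}^{n}\lambda_{i}^{3}+\frac{\lambda_{1}^{3}}{3}-\frac{m\lambda_{1}}{3}=\frac{1}{6}\lambda_{1}^{3}+\frac{1}{6}\sum_{i=2}^{n}\lambda_{i}^{3}=\frac{1}{6}\sum_{i=1}^{n}\lambda_{i}^{3}=t(G).\]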
For convenience, we introduce a function \(f(x)\), which will be frequently used in Section 3 to find the induced substructures that are forbidden in the extremal graph.
**Lemma 2.7**.: _Let \(f(x)\) be a function given as_
\[f(x):=(\sqrt{m-2.5}+x)x^{2}.\]
_If \(a\leq x\leq b\leq 0\), then_
\[f(x)\geq\min\{f(a),f(b)\}.\]
Proof.: Since \(f^{\prime}(x)=3x^{2}+2\sqrt{m-2.5}\,x=x\big(3x+2\sqrt{m-2.5}\big)\), the function \(f(x)\) is increasing when \(x\in(-\infty,-\frac{2}{3}\sqrt{m-2.5})\), and decreasing when \(x\in[-\frac{2}{3}\sqrt{m-2.5},0]\). Thus the desired statement holds immediately.
The following lemma [46] is also needed in this paper, it provides an operation on a connected graph and increases the adjacency spectral radius strictly.
**Lemma 2.8** (Wu-Xiao-Hong [46], 2005).: _Let \(G\) be a connected graph and \((x_{1},\ldots,x_{n})^{T}\) be a Perron vector of \(G\), where \(x_{i}\) corresponds to \(v_{i}\). Assume that \(v_{i},v_{j}\in V(G)\) are vertices such that \(x_{i}\geq x_{j}\), and \(S\subseteq N_{G}(v_{j})\setminus N_{G}(v_{i})\) is a non-empty set. Denote \(G^{*}=G-\{v_{j}v:v\in S\}+\{v_{i}v:v\in S\}\). Then \(\lambda(G)<\lambda(G^{*})\)._
### Proof overview
As promised, we will interpret the key ideas and steps of the proof of Theorem 1.7. First of all, we would like to make a comparison of the proofs of Theorem 1.3 and Theorem 1.4. The proof of Theorem 1.3 in [24] is short and succinct, it relies on the base case in Conjecture 1.2, which states that if \(G\) is a triangle-free graph with \(m\geq 2\) edges, then
\[\lambda_{1}^{2}(G)+\lambda_{2}^{2}(G)\leq m, \tag{10}\]
where the equality holds if and only if \(G\) is one of some specific bipartite graphs; see [24, 38]. Combining the condition in Theorem 1.3, we know that if \(G\) is a triangle-free non-bipartite graph such that \(\lambda_{1}(G)\geq\sqrt{m-1}\), then \(\lambda_{2}(G)<1\). Such a bound on the second largest eigenvalue provides great convenience to characterize the local structure of \(G\). For instance, combining \(\lambda_{2}(G)<1\) with the Cauchy interlacing theorem, we obtain that \(C_{5}\) is a shortest odd cycle of \(G\). However, it is not sufficient to use (10) for the proof of Theorem 1.4. Indeed, if \(G\) satisfies further that \(\lambda(G)\geq\beta(m)\), then we get \(\lambda_{2}(G)<2\) only, since \(\beta(m)\to\sqrt{m-2}\) as \(m\) tends to infinity. Nevertheless, this bound is invalid for our purpose to describe the local structure of \(G\). The original proof of Zhai and Shu [49] for Theorem 1.4 avoids the use of (10) and applies the Perron components. Thus it needs to make more careful structure analysis of the desired extremal graph.
To overcome the aforementioned obstacle, we will get rid of the use of (10), and then exploit the information of all eigenvalues of graphs, instead of the second largest eigenvalue merely. Our proof of Theorem 1.7 grows out from the original proof [24] of Theorem 1.3, which provided a method to find forbidden induced substructures. We will frequently use Cauchy's interlacing theorem and the triangle counting result in Lemma 2.6.
The main steps of our proof can be outlined as below. It introduces the main ideas of the approach of this paper for treating the problem involving triangles.
* Assume that \(G\) is a spectral extremal graph with even size, that is, \(G\) is a non-bipartite triangle-free graph and attains the maximum spectral radius. First of all, we will show that \(G\) is connected and it does not contain the odd cycle \(C_{2k+1}\) as an induced subgraph for every \(k\geq 3\). Consequently, \(C_{5}\) is a shortest odd cycle in \(G\).
* Let \(S\) be the set of vertices of a copy of \(C_{5}\) in \(G\). By using Lemma 2.5 and Lemma 2.6, we will find more forbidden substructures in the desired extremal graph; see, e.g., the graphs \(H_{1},H_{2},H_{3}\) in Lemma 3.2. In this step, we will characterize and refine the local structure on the vertices around the cycle \(S\).
* Using the information on the local structure of \(G\), we will show that \(V(G)\setminus S\) has at most one vertex with distance two to \(S\); see Claim 4.2. Moreover, there are at most three vertices of \(V(G)\setminus S\) with exactly one neighbor on \(S\), and all these vertices are adjacent to a same vertex of \(S\).
* Combining with the three steps above, we will determine the structure of \(G\) and show some possible graphs with large spectral radius. By comparing the polynomials of graphs, we will prove that \(G\) is isomorphic to \(Y_{m},T_{m}\) or \(L_{m}\).
## 3 Some forbidden induced subgraphs
In this section, we always assume that \(G\) is a non-bipartite triangle-free graph with even size \(m\) and \(G\) attains the maximal spectral radius. Since \(L_{m}\) is triangle-free and non-bipartite, we get by Lemma 2.1 that
\[\lambda(G)\geq\lambda(L_{m})>\sqrt{m-2.5}. \tag{11}\]
On the other hand, we obtain from Theorem 1.4 and Lemma 2.4 that
\[\lambda(G)<\beta(m)<\sqrt{m-1.85}. \tag{12}\]
Our aim in this section is to determine some forbidden induced substructures of the extremal graph \(G\). In this process, we need to exclude \(16\) induced substructures for our purpose. The main line of the proof is to show that \(G\) would contain at least one triangle, i.e., \(t(G)>0\), whenever such a substructure formed an induced copy in \(G\). Apart from some tedious calculations, the main tools used in our proof are Cauchy's Interlacing Theorem (Lemma 2.5) and the triangle counting result (Lemma 2.6).
**Lemma 3.1**.: _For any odd integer \(s\geq 7\), an extremal graph \(G\) does not contain \(C_{s}\) as an induced cycle. Consequently, \(C_{5}\) is a shortest odd cycle in \(G\)._
Proof.: Since \(G\) is non-bipartite, let \(s\) be the length of a shortest odd cycle in \(G\). Since \(G\) is triangle-free, we have \(s\geq 5\). Moreover, a shortest odd cycle \(C_{s}\subseteq G\) must be an induced odd cycle. It is well-known that the eigenvalues of \(C_{s}\) are given as \(\left\{2\cos\frac{2\pi k}{s}:k=0,1,\ldots,s-1\right\}\). In particular, we have
\[\text{Eigenvalues}(C_{7})=\{2,1.246,1.246,-0.445,-0.445,-1.801,-1.801\}.\]
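(As a quick numerical aside, the cycle spectrum formula is easy to confirm, e.g. for \(C_{7}\); the snippet below is illustrative and not part of the argument.)

```python
import numpy as np

s = 7
A = np.roll(np.eye(s, dtype=int), 1, axis=1)   # cyclic shift matrix
A = A + A.T                                    # adjacency matrix of C_7
spec = np.sort(np.linalg.eigvalsh(A))
formula = np.sort([2 * np.cos(2 * np.pi * k / s) for k in range(s)])
assert np.allclose(spec, formula)
```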
Since \(C_{s}\) is an induced copy in \(G\), we know that \(A(C_{s})\) is a principal submatrix of \(A(G)\). Lemma 2.5 implies that for every \(i\in\{1,2,\ldots,s\}\),
\[\lambda_{n-s+i}(G)\leq\lambda_{i}(C_{s})\leq\lambda_{i}(G).\]
where \(\lambda_{i}\) means the \(i\)-th largest eigenvalue. We next show that \(s=5\). For convenience, we write \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) for eigenvalues of \(G\) in the non-increasing order.
Suppose on the contrary that \(C_{7}\) is an induced odd cycle of \(G\), then \(\lambda_{2}\geq\lambda_{2}(C_{7})=2\cos\frac{2\pi}{7}\approx 1.246\) and \(\lambda_{3}\geq\lambda_{3}(C_{7})=2\cos\frac{12\pi}{7}\approx 1.246\). Recall in Lemma 2.7 that
\[f(x)=(\sqrt{m-2.5}+x)x^{2}.\]
Evidently, we get
\[f(\lambda_{2})\geq f(1.246)\geq 1.552\sqrt{m-2.5}+1.934,\]
and
\[f(\lambda_{3})\geq f(1.246)\geq 1.552\sqrt{m-2.5}+1.934.\]
Our goal is to get a contradiction by applying Lemma 2.6 and showing \(t(G)>0\). It is not sufficient to obtain \(t(G)>0\) by using the positive eigenvalues of \(C_{7}\) only. Next, we are going to exploit the negative eigenvalues of \(C_{7}\). For \(i\in\{4,5,6,7\}\), we know that \(\lambda_{i}(C_{7})<0\). The Cauchy interlacing theorem yields \(\lambda_{n-3}\leq\lambda_{4}(C_{7})=-0.445\), \(\lambda_{n-2}\leq\lambda_{5}(C_{7})=-0.445\), \(\lambda_{n-1}\leq\lambda_{6}(C_{7})=-1.801\) and \(\lambda_{n}\leq\lambda_{7}(C_{7})=-1.801\). To apply Lemma 2.7, we need to find the lower bounds on \(\lambda_{i}\) for each \(i\in\{n-3,n-2,n-1,n\}\). We know from (11) that \(\lambda_{1}\geq\lambda(L_{m})>\sqrt{m-2.5}\), and then \(\lambda_{n}^{2}\leq 2m-(\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2}+ \lambda_{n-3}^{2}+\lambda_{n-2}^{2}+\lambda_{n-1}^{2})<2m-(m-2.5+6.744)=m-4.244\), which implies \(-\sqrt{m-4.244}<\lambda_{n}\leq-1.801\). By Lemma 2.7, we get
\[f(\lambda_{n})\geq\min\{f(-\sqrt{m-4.244}),f(-1.801)\}>0.8\sqrt{m-2.5}.\]
Similarly, we have \(\lambda_{n-1}^{2}+\lambda_{n}^{2}\leq 2m-(\lambda_{1}^{2}+\lambda_{2}^{2}+ \lambda_{3}^{2}+\lambda_{n-3}^{2}+\lambda_{n-2}^{2})<m-1.001\). Combining with \(\lambda_{n-1}^{2}\leq\lambda_{n}^{2}\), we get \(-\sqrt{(m-1.001)/2}<\lambda_{n-1}\leq-1.801\). By Lemma 2.7, we obtain
\[f(\lambda_{n-1})\geq\min\{f(-\sqrt{(m-1.001)/2}),f(-1.801)\}>3.243\sqrt{m-2.5} -5.841.\]
Using (11) and (12), we have \(\sqrt{m-2.5}<\lambda_{1}<\sqrt{m-1.85}\). By Lemma 2.6, we get
\[t(G) >\frac{1}{6}(f(\lambda_{2})+f(\lambda_{3})+f(\lambda_{n})+f( \lambda_{n-1}))-\frac{2.5}{3}\lambda_{1}\] \[>\frac{1}{6}(7.147\sqrt{m-2.5}-5\sqrt{m-1.85}-1.973)>0.\]
This is a contradiction. By the monotonicity of \(\cos x\), we can prove that \(C_{s}\) can not be an induced subgraph of \(G\) for each odd integer \(s\geq 7\). Thus we get \(s=5\).
Using a similar method as in the proof of Lemma 3.1, we can prove the following lemmas, whose proofs are postponed to the Appendix. To avoid unnecessary calculations, we did not attempt to get the best bound on the size of \(G\), and we consider only the case \(m\geq 258\).
**Lemma 3.2**.: \(G\) _does not contain any graph of \(\{H_{1},H_{2},H_{3}\}\) as an induced subgraph._
**Lemma 3.3**.: \(G\) _does not contain any graph of \(\{T_{1},T_{2},T_{3},T_{4}\}\) as an induced subgraph._
**Lemma 3.4**.: _Any graph of \(\{J_{1},J_{2},J_{3},J_{4}\}\) can not be an induced subgraph of \(G\)._
**Lemma 3.5**.: _Any graph of \(\{L_{1},L_{2},L_{3},L_{4}\}\) can not be an induced subgraph of \(G\)._
## 4 Proof of the main theorem
It is the time to show the proof of Theorem 1.7.
**Proof of Theorem 1.7.** Suppose that \(G\) is a non-bipartite triangle-free graph with \(m\) edges (\(m\geq 258\) is even) such that \(G\) attains the maximum spectral radius. Thus we have \(\lambda(G)\geq\lambda(L_{m})\), since \(L_{m}\) is a triangle-free non-bipartite graph with \(m\) edges. Our goal is to prove that \(G=Y_{m}\) if \(\frac{m}{3}\in\mathbb{N}^{*}\); \(G=T_{m}\) if \(\frac{m-1}{3}\in\mathbb{N}^{*}\); and \(G=L_{m}\) if \(\frac{m-2}{3}\in\mathbb{N}^{*}\). First of all, we can see that \(G\) must be connected. Otherwise, we could choose \(G_{1}\) and \(G_{2}\) as two different components, where \(G_{1}\) attains the spectral radius of \(G\); by identifying two vertices from \(G_{1}\) and \(G_{2}\), respectively, we would get a new graph with larger spectral radius, which is a contradiction3. By Lemma 3.1, we can draw the following claim.
Footnote 3: There is another way to get a contradiction. We delete an edge within \(G_{2}\), and then add an edge between \(G_{1}\) and \(G_{2}\). This operation will also lead to a new graph with larger spectral radius.
**Claim 4.1**.: \(C_{5}\) _is a shortest odd cycle in \(G\)._
By Claim 4.1, we denote by \(S=\{u_{1},u_{2},u_{3},u_{4},u_{5}\}\) the set of vertices of a copy of \(C_{5}\), where \(u_{i}u_{i+1}\in E(G)\) and \(u_{5}u_{1}\in E(G)\). Let \(N(S):=\left(\cup_{u\in S}N(u)\right)\setminus S\) be the union of neighborhoods of vertices of \(S\), and let \(d_{S}(v)=|N(v)\cap S|\) be the number of neighbors of \(v\) in the set \(S\). Clearly, we have \(d_{S}(v)\in\{0,1,2\}\) for every \(v\in V(G)\setminus S\). Otherwise, if \(d_{S}(v)\geq 3\), then one can find a triangle immediately, a contradiction.
**Claim 4.2**.: \(V(G)\setminus S\) _does not contain a vertex with distance \(3\) to \(S\), and \(V(G)\setminus S\) has at most one vertex with distance \(2\) to \(S\)._
Proof.: This claim is a consequence of Lemmas 3.2 and 3.4. Firstly, we show that \(V(G)\setminus S\) does not contain a vertex which has distance \(3\) to \(S\). Otherwise, if \(w_{1}\) is such a vertex and \(P_{4}=w_{1}w_{2}w_{3}u_{1}\) is a shortest path of length \(3\), then \(w_{2}\) can not be adjacent to any vertex of \(S\). Since \(G\) is triangle-free, we know that neither \(w_{3}u_{2}\) nor \(w_{3}u_{5}\) can be an edge, and at least one of \(w_{3}u_{3}\) and \(w_{3}u_{4}\) is not an edge. If \(w_{3}u_{3}\notin E(G)\) and \(w_{3}u_{4}\notin E(G)\), then \(\{w_{2},w_{3}\}\cup S\) induces a copy of \(J_{1}\), contradicting with Lemma 3.4. If \(w_{3}u_{3}\in E(G)\), then \(\{w_{1},w_{2},w_{3}\}\cup(S\setminus\{u_{2}\})\) forms an induced copy of \(J_{1}\) since \(w_{1}w_{3},w_{1}u_{i}\) and \(w_{2}u_{i}\) are not edges of \(G\), a contradiction. By symmetry, the case that \(w_{3}u_{4}\in E(G)\) is similar.
Now, suppose on the contrary that \(V(G)\setminus S\) contains two vertices, say \(w_{1},w_{2}\), which have distance \(2\) to \(S\). Let \(v_{1}\) and \(v_{2}\) be two vertices out of \(S\) such that \(w_{1}\sim v_{1}\sim S\) and \(w_{2}\sim v_{2}\sim S\). Since \(J_{1}\) can not be an induced copy of \(G\) and \(G\) is triangle-free, we know that \(d_{S}(v_{1})=d_{S}(v_{2})=2\). If \(v_{1}=v_{2}\), then \(\{w_{1},w_{2},v_{1}\}\cup S\) forms an induced copy of \(J_{2}\) in \(G\), a contradiction. Thus, we get \(v_{1}\neq v_{2}\). Without loss of generality, we may assume that \(N_{S}(v_{1})=\{u_{1},u_{3}\}\). By Lemma 3.2, \(G\) does not contain \(H_{3}\) as an induced subgraph, we get \(N_{S}(v_{2})\neq\{u_{3},u_{5}\}\) and \(N_{S}(v_{2})\neq\{u_{1},u_{4}\}\). By symmetry, we have either \(N_{S}(v_{2})=\{u_{2},u_{4}\}\) or \(N_{S}(v_{2})=\{u_{1},u_{3}\}\). For the former case, since \(H_{2}\) is not an induced subgraph of \(G\) by Lemma 3.2, we get \(v_{1}v_{2}\in E(G)\). If \(w_{1}w_{2}\in E(G)\), then \(G\) contains \(J_{4}\) as an induced subgraph, which is a contradiction by Lemma 3.4. Thus \(w_{1}w_{2}\notin E(G)\). By Lemma 2.8, one can compare the Perron components of \(v_{1}\) and \(v_{2}\), and then move \(w_{1}\) and \(w_{2}\) together,
namely, either making \(w_{1}\) adjacent to \(v_{2}\), or \(w_{2}\) adjacent to \(v_{1}\). In this process, the resulting graph remains triangle-free and non-bipartite as well. However, it has larger spectral radius than \(G\), which contradicts with the maximality of the spectral radius of \(G\). For the latter case, i.e., \(N_{S}(v_{1})=N_{S}(v_{2})=\{u_{1},u_{3}\}\). Since \(J_{3}\) is not an induced copy in \(G\), a similar argument shows \(w_{1}w_{2}\notin E(G)\), and then it also leads to a contradiction.
In what follows, we shall partition the remaining proof in two cases, which are dependent on whether \(V(G)\setminus S\) contains a vertex with distance \(2\) to the cycle \(S\).
**Case 1.** Every vertex of \(V(G)\setminus S\) is adjacent a vertex of \(S\).
In this case, we have \(V(G)=S\cup N(S)\). For convenience, we denote \(N(S)=V_{1}\cup V_{2}\), where \(V_{i}=\{v\in N(S):d_{S}(v)=i\}\) for each \(i=1,2\). At first glance, different vertices of \(V_{1}\) can be joined to different vertices of \(S\). By Lemma 3.3, \(G\) does not contain \(T_{1}\) and \(T_{2}\) as induced subgraphs, so \(V_{1}\) is an independent set in \(G\). Using Lemma 2.8, we can move all vertices of \(V_{1}\) together such that _all of them are adjacent to a same vertex of \(S\)_, and get a new graph with larger spectral radius. Note that this process can keep the resulting graph being triangle-free and non-bipartite since \(V_{1}\) is edge-less and \(S\) is still a copy of \(C_{5}\). By Lemma 3.2, \(H_{1}\) can not be an induced subgraph of \(G\), so we get \(|V_{1}|\leq 3\).
Since \(m\geq 258\), we can fix a vertex \(v\in N(S)\) and assume that \(N_{S}(v)=\{u_{1},u_{3}\}\). For each \(w\in V(G)\setminus(S\cup\{v\})\), since \(G\) contains no triangles and no \(H_{3}\) as an induced subgraph by Lemma 3.2, we know that \(N_{S}(w)\neq\{u_{3},u_{5}\}\) and \(N_{S}(w)\neq\{u_{4},u_{1}\}\). It is possible that \(N_{S}(w)=\{u_{1},u_{3}\},\{u_{2},u_{4}\}\) or \(\{u_{5},u_{2}\}\). Furthermore, if \(N_{S}(w)=\{u_{1},u_{3}\}\), then \(wv\notin E(G)\) since \(G\) contains no triangle; if \(N_{S}(w)=\{u_{2},u_{4}\}\), then \(wv\in E(G)\) since \(G\) contains no induced copy of \(H_{2}\). We denote \(N_{i,j}:=\{w\in V(G)\setminus S:N_{S}(w)=\{u_{i},u_{j}\}\}\). Note that \(G\) has no induced copy of \(H_{3}\), then at least one of the sets \(N_{2,4}\) and \(N_{5,2}\) is empty.
**Subcase 1.1.** If both \(N_{2,4}=\varnothing\) and \(N_{5,2}=\varnothing\), then \(V_{2}=N_{1,3}\) and \(V(G)=S\cup V_{1}\cup N_{1,3}\). By Lemma 3.3, \(T_{3}\) and \(T_{4}\) can not be induced subgraphs of \(G\). Hence, all vertices of \(V_{1}\) are adjacent to the vertex \(u_{1}\) or \(u_{2}\) by symmetry. Next, we will show that \(|V_{1}|\in\{1,3\}\), and then we prove that \(V_{1}\) and \(N_{1,3}\) form a complete bipartite graph or an empty graph.
Suppose that all vertices \(V_{1}\) are adjacent to \(u_{1}\). Then there is no edge between \(V_{1}\) and \(N_{1,3}\) since \(G\) is triangle-free. Note that \(m=5+2|N_{1,3}|+|V_{1}|\) is even, we get \(|V_{1}|\in\{1,3\}\); see \(L_{m}\) and \(G_{4}\) in Figure 3.
If \(|V_{1}|=1\), then \(G\) is the desired extremal graph \(L_{m}\);
If \(|V_{1}|=3\), then by computation, we get \(\lambda(G_{4})\) is the largest root of
\[F_{4}(x):=x^{6}-mx^{4}+(\tfrac{7m}{2}-14)x^{2}+(6-m)x+9-\tfrac{3m}{2}.\]
Clearly, we can check that \(L(x)<F_{4}(x)\) for each \(x\geq 1\), and so \(\lambda(G_{4})<\lambda(L_{m})\).
Now, suppose that all vertices of \(V_{1}\) are adjacent to \(u_{2}\). If there is no edge between \(V_{1}\) and \(N_{1,3}\), then \(|V_{1}|\in\{1,3\}\) and \(G\) is isomorphic to \(G_{2}\) or \(G_{5}\); see Figure 3. By computations or Lemma 2.8, we can get \(\lambda(G)<\lambda(L_{m})\); If there exists an edge between \(V_{1}\) and \(N_{1,3}\), then we claim that \(V_{1}\) and \(N_{1,3}\) form a complete bipartite subgraph by Lemma 3.5. Indeed, Lemma 3.5 asserts that \(G\) does not contain \(L_{1}\) as an induced subgraph. In other words, if \(v\in V_{1}\) is a vertex which is adjacent to one vertex of \(N_{1,3}\), then \(v\) will be adjacent to all vertices of \(N_{1,3}\). Note that \(G\) does not contain \(L_{2}\) as an induced subgraph, which means that other vertices of \(V_{1}\) are also adjacent to all vertices of \(N_{1,3}\). Observe that \(m=5+2|N_{1,3}|+|V_{1}|(1+|N_{1,3}|)\) is even, which yields that \(|V_{1}|\) is odd, and so \(|V_{1}|\in\{1,3\}\). Consequently, \(G\) is isomorphic to either \(Y_{m}\) or \(G_{6}\); see Figure 3.
If \(|V_{1}|=1\), then \(|N_{1,3}|=\tfrac{m-6}{3}\) and \(G=Y_{m}\). By Lemma 2.2, we get \(\lambda(L_{m})<\lambda(Y_{m})\). Thus, \(Y_{m}\) is the required extremal graph whenever \(m=3t\) for some even \(t\in\mathbb{N}^{*}\).
If \(|V_{1}|=3\), then \(|N_{1,3}|=\tfrac{m-8}{5}\), and \(\lambda(G_{6})\) is the largest root of
\[F_{6}(x):=x^{4}-x^{3}+(2-m)x^{2}+(m-3)x+\tfrac{3m-9}{5}.\]
One can calculate that
\[L(x)-x^{2}F_{6}(x)=x^{5}-2x^{4}+(3-m)x^{3}+(\tfrac{19m}{10}-\tfrac{26}{5})x^{ 2}+(4-m)x+2-\tfrac{m}{2},\]
and
\[L(x)-x^{2}F_{6}(x)\Big{|}_{x=\sqrt{m-3}}=-\frac{m^{2}}{10}-m\sqrt{m-3}+\frac{3 m}{5}+4\sqrt{m-3}-\frac{2}{5}<0.\]
Furthermore, one can prove that \(\frac{\mathrm{d}}{\mathrm{d}x}(L(x)-x^{2}F_{6}(x))<0\) for each \(x\geq\sqrt{m-3}\). Consequently, it leads to \(L(x)<xF_{6}(x)\) for each \(x>\sqrt{m-3}\), which yields \(\lambda(G_{6})<\lambda(L_{m})\).
**Subcase 1.2.** Without loss of generality, we may assume that \(N_{5,2}=\varnothing\) and \(N_{2,4}\neq\varnothing\), then \(N(S)=V_{1}\cup N_{1,3}\cup N_{2,4}\). By Lemma 3.2, \(H_{2}\) can not be an induced subgraph of \(G\). Thus, \(N_{1,3}\) and \(N_{2,4}\) induce a complete bipartite subgraph in \(G\). Now, we consider the vertices of \(V_{1}\). Recall that all vertices of \(V_{1}\) are adjacent to a same vertex of \(S\). By Lemma 3.3, \(G\) does not contain \(T_{3}\) and \(T_{4}\) as induced subgraphs. Then the vertices of \(V_{1}\) can not be adjacent to \(u_{1},u_{4}\) or \(u_{5}\). By Lemma 3.5, we know that \(L_{3}\) and \(L_{4}\) can not be induced subgraph of \(G\). Thus, all vertices of \(V_{1}\) can not be adjacent to \(u_{2}\) or \(u_{3}\). To sum up, we get \(V_{1}=\varnothing\), and so \(N(S)=N_{1,3}\cup N_{2,4}\). We denote \(A=N_{1,3}\cup\{u_{2},u_{4}\}\) and \(B=N_{2,4}\cup\{u_{3},u_{1}\}\). Let \(|A|=a\) and \(|B|=b\). Then we observe that \(G\) is isomorphic to the subdivision of the complete bipartite graph \(K_{a,b}\) by subdividing the edge \(u_{1}u_{4}\) of \(K_{a,b}\). Note that \(m=ab+1\) and \(a,b\geq 3\) are odd integers. Without loss of generality, we may assume that \(a\geq b\).
If \(b=3\), then \(m=3a+1\) for some \(a\in\mathbb{N}^{*}\). In this case, we get \(G=T_{m}\). Invoking Lemma 2.3, we have \(\lambda(L_{m})<\lambda(T_{m})\) and thus \(T_{m}\) is the desired extremal graph.
If \(b\geq 5\), then \(m=ab+1\) and \(\lambda(SK_{a,b})\) is the largest root of
\[F_{a,b}(x):=x^{5}-mx^{3}+(3m-2-2a-2b)x-2m+2a+2b.\]
Recall in (8) that \(\lambda(L_{m})\) is the largest root of \(L(x)\). We can verify that
\[L(x)-xF_{a,b}(x)=-(\tfrac{m}{2}+5-2a-2b)x^{2}+(4+m-2a-2b)x-\tfrac{m}{2}+2.\]
Since \(b\geq 5\) and \(m=ab+1\geq 258\), we get \(\tfrac{m}{2}+5-2a-2b=\tfrac{1}{2}((a-4)(b-4)-5)>0\). It follows that \(L(x)\leq xF_{a,b}(x)\) for every \(x\geq 3\). Thus, we get \(\lambda(SK_{a,b})<\lambda(L_{m})\), as required.
**Case 2.** There is exactly one vertex of \(V(G)\setminus S\) with distance \(2\) to the cycle \(S\). Let \(w_{2},w_{1}\) be two vertices with \(w_{2}\sim w_{1}\sim S\), and \(N_{S}(w_{1})=\{u_{1},u_{3}\}\). We denote \(V(G)\setminus(S\cup\{w_{1},w_{2}\}):=V_{1}\cup V_{2}\), where \(V_{i}=\{v\in V(G):v\notin S\cup\{w_{1},w_{2}\},d_{S}(v)=i\}\) for each \(i=1,2\). Similar with the argument in Case 1, using Lemmas 3.3 and 2.8, one can move all vertices of \(V_{1}\) such that _all of them are adjacent to a same vertex of \(S\)_. By Lemma 3.2, \(H_{1}\) is not an induced subgraph of \(G\). Then \(|V_{1}|\leq 3\). Let \(v\in V_{2}\) be any vertex. We claim that \(N_{S}(v)=\{u_{1},u_{3}\}\). Indeed, Lemma 3.2 implies that \(N_{S}(v)\neq\{u_{3},u_{5}\}\) and \(N_{S}(v)\neq\{u_{1},u_{4}\}\). If \(N_{S}(v)=\{u_{2},u_{4}\}\), then \(w_{1}v\in E(G)\) since \(G\) does not contain \(H_{2}\) as an induced subgraph by Lemma 3.2 again. Consequently, \(S\cup\{w_{1},w_{2},v\}\) forms an induced copy of \(L_{4}\), which contradicts with Lemma 3.5. Thus, we get \(N_{S}(v)\neq\{u_{2},u_{4}\}\). Similarly, we also get \(N_{S}(v)\neq\{u_{2},u_{5}\}\). In conclusion, we obtain \(N_{S}(v)=\{u_{1},u_{3}\}\) for every \(v\in V_{2}\). Since \(m\) is even, we get \(|V_{1}|\in\{0,2\}\).
First of all, suppose that \(|V_{1}|=0\). Then \(G\) is isomorphic to \(G_{2}\), the graph obtained from a \(C_{5}\) by blowing up the vertex \(u_{2}\) exactly \(\tfrac{m-4}{2}\) times and then hanging an edge to \(u_{2}\); see Figure 3. By computations, we get \(\lambda(G_{2})<\lambda(L_{m})\), as desired.
Now, suppose that \(|V_{1}|=2\) and \(V_{1}:=\{v_{1},v_{2}\}\). By Lemma 3.3, we know that \(T_{3}\) and \(T_{4}\) are not induced subgraphs of \(G\). Then the vertices of \(V_{1}\) can not be adjacent to \(u_{4}\) and \(u_{5}\). By symmetry of \(u_{1}\) and \(u_{3}\), there are two possibilities, namely, all vertices of \(V_{1}\) are adjacent to \(u_{1}\) or \(u_{2}\). If all vertices of \(V_{1}\) are adjacent to \(u_{1}\), then \(v_{1}w_{2}\notin E(G)\) and \(v_{2}w_{2}\notin E(G)\) since \(T_{1}\) can not be an induced subgraph of \(G\) by Lemma 3.3. By comparing the Perron components of \(u_{1}\) and \(w_{1}\), one can move \(v_{1},v_{2}\) and \(w_{2}\) together using Lemma 2.8. Thus, \(G\) is isomorphic to \(G_{4}\) or \(G_{5}\) in Figure 3. If all vertices of \(V_{1}\) are adjacent to \(u_{2}\), then \(v_{1}w_{2}\notin E(G)\) and \(v_{2}w_{2}\notin E(G)\) since \(J_{3}\) is not an induced subgraph in \(G\) by Lemma 3.4. A similar argument shows that \(G\) is isomorphic to \(G_{5}\) in Figure 3. By direct computations, we can obtain \(\lambda(G_{4})<\lambda(L_{m})\) and \(\lambda(G_{5})<\lambda(L_{m})\). This completes the proof.
## 5 Concluding remarks
Although we have solved Question 1.5 for every \(m\geq 258\), our proof requires a lot of calculations of eigenvalues. As shown in Figure 2, there are three kinds of extremal graphs depending on \(m\,(\text{mod }3)\in\{0,1,2\}\). Thus, it seems unavoidable to make calculations and comparisons among the spectral radii of these three graphs. Unlike the even case, the bound \(\beta(m)\) in Theorem 1.4 is sharp for all odd integers \(m\in\mathbb{N}\). For the even case, Theorem 1.7 presents all extremal graphs for \(m\geq 258\). In addition, for \(m\in\{6,8,10\}\), Lemma 2.1 gives \(\lambda(L_{m})>\sqrt{m-2}\). Using a result in [45, Theorem 5], we can prove that \(Y_{6},L_{8}\) and \(T_{10}\) are the extremal graphs when \(m\in\{6,8,10\}\), respectively. In view of this evidence, it is possible to find a new proof of Question 1.5 that characterizes the extremal graphs for every \(m\geq 12\).
The blow-up of a graph \(G\) is a new graph obtained from \(G\) by replacing each vertex \(v\in V(G)\) with an independent set \(I_{v}\), and for two vertices \(u,v\in V(G)\), we add all edges between \(I_{u}\) and \(I_{v}\) whenever \(uv\in E(G)\). It was proved in [24, 38] that if \(G\) is a triangle-free graph with \(m\geq 2\) edges, then \(\lambda_{1}^{2}(G)+\lambda_{2}^{2}(G)\leq m\), where the equality holds if and only if \(G\) is a blow-up of a member of the family \(\mathcal{G}=\{P_{2}\cup K_{1},2P_{2}\cup K_{1},P_{4}\cup K_{1},P_{5}\cup K_{1}\}\). This result confirmed the base case of a conjecture of Bollobas and Nikiforov [4]. Observe that all extremal graphs in this result are bipartite graphs. Therefore, it is possible to consider the maximum of \(\lambda_{1}^{2}(G)+\lambda_{2}^{2}(G)\) in which \(G\) is triangle-free and non-bipartite.
The extremal problem was also studied for non-bipartite triangle-free graphs with given number of vertices. We write \(SK_{s,t}\) for the graph obtained from the complete bipartite graph \(K_{s,t}\) by subdividing an edge. In 2021, Lin, Ning and Wu [24] proved that if \(G\) is a non-bipartite triangle-free graph on \(n\) vertices, then
\[\lambda(G)\leq\lambda\big{(}SK_{\lfloor\frac{n-1}{2}\rfloor,\lceil\frac{n-1}{ 2}\rceil}\big{)}, \tag{13}\]
where equality holds if and only if \(G=SK_{\lfloor\frac{n-1}{2}\rfloor,\lceil\frac{n-1}{2}\rceil}\). Comparing this result with Theorem 1.4, one can see that the extremal graphs with given order and with given size are strikingly different, although both of them are subdivisions of complete bipartite graphs. Roughly speaking, the former is nearly balanced, but the latter is exceedingly unbalanced.
Later, Li and Peng [20] extended (13) to the non-\(r\)-partite \(K_{r+1}\)-free graphs with \(n\) vertices. Notice that the extremal graph in (13) contains many copies of \(C_{5}\). There is another way to extend (13) by considering the non-bipartite graphs on \(n\) vertices without any copy of \(\{C_{3},C_{5},\ldots,C_{2k+1}\}\) where \(k\geq 2\). This was done by Lin and Guo [25] as well as Li, Sun and Yu [17] independently. Subsequently, the corresponding spectral problem for graphs with \(m\) edges was studied in [21, 29]. However, the extremal graphs in this setting have been determined only for odd \(m\). Hence, we propose the following question for interested readers\({}^{4}\).
Footnote 4: We believe intuitively that the spectral extremal graphs with even size are perhaps constructed from those in Figure 2 by “replacing” the red copy of \(C_{5}\) with a longer odd cycle \(C_{2k+3}\).
**Question 5.1**.: _For even \(m\), what is the extremal graph attaining the maximum spectral radius over all non-bipartite \(\{C_{3},C_{5},\ldots,C_{2k+1}\}\)-free graphs with \(m\) edges?_
We write \(q(G)\) for the signless Laplacian spectral radius, i.e., the largest eigenvalue of the _signless Laplacian matrix_ \(Q(G)=D(G)+A(G)\), where \(D(G)=\mathrm{diag}(d_{1},\ldots,d_{n})\) is the degree diagonal matrix and \(A(G)\) is the adjacency matrix. A theorem of He, Jin and Zhang [11] implies that if \(G\) is a triangle-free graph on \(n\) vertices, then \(q(G)\leq n\), with equality if and only if \(G\) is a complete bipartite graph (not necessarily balanced). This result can also be viewed as a spectral version of Mantel's theorem. It is worth mentioning that Liu, Miao and Xue [26] characterized the maximum signless Laplacian spectral radius among all
non-bipartite triangle-free graphs with given order \(n\) and size \(m\), respectively. Fortunately, the corresponding extremal graphs are independent of the parity of \(m\). Soon after, they [31] also provided the extensions for graphs without any copy of \(\{C_{3},C_{5},\ldots,C_{2k+1}\}\).
### Declaration of competing interest
The authors declare that they have no conflicts of interest related to this work.
|
2309.00932 | Deep supervised hashing for fast retrieval of radio image cubes | The sheer number of sources that will be detected by next-generation radio
surveys will be astronomical, which will result in serendipitous discoveries.
Data-dependent deep hashing algorithms have been shown to be efficient at image
retrieval tasks in the fields of computer vision and multimedia. However, there
are limited applications of these methodologies in the field of astronomy. In
this work, we utilize deep hashing to rapidly search for similar images in a
large database. The experiment uses a balanced dataset of 2708 samples
consisting of four classes: Compact, FRI, FRII, and Bent. The performance of
the method was evaluated using the mean average precision (mAP) metric where a
precision of 88.5\% was achieved. The experimental results demonstrate the
capability to search and retrieve similar radio images efficiently and at
scale. The retrieval is based on the Hamming distance between the binary hash
of the query image and those of the reference images in the database. | Steven Ndung'u, Trienko Grobler, Stefan J. Wijnholds, Dimka Karastoyanova, George Azzopardi | 2023-09-02T12:59:52Z | http://arxiv.org/abs/2309.00932v1 | # Deep supervised hashing for fast retrieval of radio image cubes
###### Abstract
The sheer number of sources that will be detected by next-generation radio surveys will be astronomical, which will result in serendipitous discoveries. Data-dependent deep hashing algorithms have been shown to be efficient at image retrieval tasks in the fields of computer vision and multimedia. However, there are limited applications of these methodologies in the field of astronomy. In this work, we utilize deep hashing to rapidly search for similar images in a large database. The experiment uses a balanced dataset of 2708 samples consisting of four classes: Compact, FRI, FRII, and Bent. The performance of the method was evaluated using the mean average precision (mAP) metric, where a precision of 88.5% was achieved. The experimental results demonstrate the capability to search and retrieve similar radio images efficiently and at scale. The retrieval is based on the Hamming distance between the binary hash of the query image and those of the reference images in the database.
## 1 Introduction
In recent years, radio astronomy has experienced exponential data growth: next-generation radio surveys are producing massive data sets with tens of millions of unknown radio sources [1]. With the exponential increase of high-resolution radio images, the management and optimal exploitation of the galaxies of interest have become challenging tasks. Thus, there is a growing need for fast, efficient and effective image retrieval for a given query galaxy. Image retrieval/indexing of radio galaxies is the process of finding and identifying, in a large database of galaxies, those with morphological structures similar to a query image. Efficient retrieval of similar radio galaxies enables astronomers to categorize and study their evolution and discover rare astronomical objects and phenomena.
There has been gradual advancement in image retrieval research within astronomy. For instance, content-based image retrieval (CBIR) [2, 3] and text-based image retrieval (TBIR) [4] approaches have been applied to find the 'nearest neighbour' objects in a reference database in astronomy. To the best of our knowledge, image retrieval is, however, hardly addressed in _radio_ astronomy. The goal of this work is to demonstrate how image retrieval can be effectively applied in radio astronomy.
Data-dependent deep hashing algorithms utilizing convolutional neural networks (CNNs) are widely studied in image retrieval problems in the fields of multimedia and computer vision [5, 6, 7]. This is attributed to the ability of the deep hashing CNN to extract and learn unique image signatures [8]. The algorithms are used to create compact and low-dimensional binary representations of both the query image and the database of images. Then, a similarity calculation is performed between the binary representation of the query image and each binary representation of the database images using the Hamming distance. The resultant distance values are then used to rank the images (e.g. the top 100) being retrieved. Fig. 1 summarises this process.
## 2 Data and Methodology
### Dataset
The data sample of radio galaxies used in this paper was constructed using information from multiple catalogs as compiled and processed by Samudre et al. [9]. The original dataset is composed of four classes of galaxies: Compact (406 samples), FRI (389 samples), FRII (679 samples), and
Figure 1: A schematic overview of the image retrieval process based on a query image. The Hamming distance is calculated between the query image and every reference image. The answer image represents the top 1 image retrieved.
Bent (508 samples), which are distributed among the given training, validation, and test datasets [10, 11, 12, 13, 14, 15] (Table 1). This dataset was processed by Samudre et al. [9] in a similar manner to the steps described in [16]. Furthermore, Samudre et al. [9] balanced the train and validation datasets by upsampling the underrepresented classes through randomly duplicating samples from the original dataset. As a result, a balanced dataset of 2708 samples was obtained, composed of Compact (675 samples), FRI (674 samples), FRII (679 samples), and Bent (680 samples).
In this work, we also resize the images to \(224\times 224\) pixels as an additional preprocessing step applied to all images during both the training and the application of the model.
### Method
The proposed image retrieval framework consists of three main phases: training, encoding and image retrieval. The first phase of the method involves pre-training a CNN on a large dataset such as ImageNet [17] to learn important features, patterns and representations of the images. It is then fine-tuned on a dataset specific to the target domain (radio astronomical images). This allows the deep CNN to learn the critical morphological features in the images and map them onto a unique image representation [8]. Notably, the final layer in the proposed framework is a fully connected layer whose neuron activations, regulated by the semantics encoded in the preceding layers, realize the hashing process. The sigmoid activation function is applied after this last layer such that all values are forced to lie in the range [0,1]. Fig. 2 shows the schematic structure of the deep hashing model.
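For concreteness, the sketch below shows one way such a hashing head could be implemented, assuming a PyTorch/torchvision setup; the class name `DeepHashNet` and all internals other than the eight-bit code length and the sigmoid output are illustrative assumptions, not the authors' exact implementation.

```python
# One way the hashing head could be realized, assuming PyTorch/torchvision;
# DeepHashNet and its internals are illustrative, not the authors' exact code.
import torch
import torch.nn as nn
from torchvision import models

class DeepHashNet(nn.Module):
    def __init__(self, hash_bits: int = 8):
        super().__init__()
        backbone = models.densenet161(weights="IMAGENET1K_V1")  # ImageNet pre-training
        num_features = backbone.classifier.in_features          # 2208 for DenseNet161
        backbone.classifier = nn.Identity()                     # drop the ImageNet head
        self.backbone = backbone
        self.hash_layer = nn.Linear(num_features, hash_bits)    # customized final layer

    def forward(self, x):
        # Sigmoid forces every hash activation into [0, 1], as described above.
        return torch.sigmoid(self.hash_layer(self.backbone(x)))

model = DeepHashNet(hash_bits=8)
codes = model(torch.randn(2, 3, 224, 224))  # -> (2, 8) real-valued codes in [0, 1]
```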
In the second phase (encoding phase), the trained model is used to generate a real-valued vector for each of the training (reference) images. Such vectors are then binarized by a thresholding operation that sets to 1 all values above a given threshold and to 0 otherwise. The same encoding operation is also applied to a given query image. These binary image representations are compact and efficient, and they are then used for image retrieval. The idea behind this approach is that images with similar morphological structures should have similar binary encodings.
Finally, the third phase is image retrieval, where binary image representations are used to match similar images. We use the Hamming distance to compare the binary representation of a query image from the test image database with the hash codes of the images in the training (reference) database. The reference images are then sorted and ranked based on the Hamming distances. Images with the shortest distance to the query image are considered to be the most similar. From the list of ranked images, the top 100 images are retrieved.
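A minimal sketch of the encoding and retrieval phases is given below, assuming the real-valued codes have already been produced by the trained network; the helper names and the 0.5 threshold are illustrative (the paper evaluates different threshold percentiles).

```python
# A minimal sketch of the encoding and retrieval phases; the 0.5 threshold is
# illustrative (the paper evaluates different threshold percentiles).
import numpy as np

def binarize(codes, threshold=0.5):
    """Map real-valued hash activations in [0, 1] to binary codes."""
    return (np.asarray(codes) > threshold).astype(np.uint8)

def retrieve(query_code, reference_codes, top_k=100):
    """Rank reference images by Hamming distance to the query code."""
    distances = np.count_nonzero(reference_codes != query_code, axis=1)
    order = np.argsort(distances, kind="stable")[:top_k]
    return order, distances[order]

rng = np.random.default_rng(0)
reference = binarize(rng.random((1180, 8)))  # stand-in for the training database
query = binarize(rng.random(8))
top_idx, top_dist = retrieve(query, reference)
```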
### Evaluation Metrics
Similar to previous studies, we assess the performance of our deep CNN algorithm following the widely adopted mean average precision (mAP) metric [18]:
\[\text{AP}=\frac{1}{\text{GTP}}\sum_{i=1}^{n}\text{Precision}(i)\times\text{ Rel}(i), \tag{1}\]
\[\text{mAP}=\frac{1}{N_{q}}\sum_{j=1}^{N_{q}}AP_{j}, \tag{2}\]
where AP represents the average precision of one query, with \(n\) being the total number of reference images and GTP the total number of ground truth positives. Precision(\(i\)) is the precision over the top \(i\) ranked reference images, and Rel(\(i\)) is an indicator variable that equals 1 if the \(i\)th image is relevant and 0 otherwise. Finally, the mAP is computed as the average of all AP values obtained for all \(N_{q}\) query images.
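A direct transcription of Eqs. (1) and (2) could look as follows; this assumes each query is ranked against all \(n\) reference images, so that GTP coincides with the number of relevant images in the ranked list.

```python
# A direct transcription of Eqs. (1)-(2); assumes each query is ranked against
# all n reference images, so GTP equals the relevant count in the ranked list.
import numpy as np

def average_precision(query_label, ranked_labels):
    rel = (np.asarray(ranked_labels) == query_label).astype(float)  # Rel(i)
    gtp = rel.sum()                                                 # GTP
    if gtp == 0:
        return 0.0
    precision_at_i = np.cumsum(rel) / np.arange(1, rel.size + 1)    # Precision(i)
    return float(np.sum(precision_at_i * rel) / gtp)

def mean_average_precision(query_labels, ranked_labels_per_query):
    aps = [average_precision(q, r)
           for q, r in zip(query_labels, ranked_labels_per_query)]
    return float(np.mean(aps))
```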
## 3 Experimental Results and Discussion
Our model is constructed using a transfer learning paradigm. We fine-tune a DenseNet161 [19], which is pre-trained on ImageNet [17]. The fine-tuning of the DenseNet161 model consists of two steps. First, all the layers of the pre-trained model are frozen except for the final, customized layer. This layer is then trained for 15 epochs on the radio data. In the second step, the model is unfrozen and fine-tuned for 200 iterations using an optimal learning rate obtained through the cyclical learning rates approach [20]. The learning rate is bounded within the range of 0 to \(10^{-3}\) for fast model convergence, and a weight decay of \(10^{-3}\) is used for model regularization. We designed our model such that the final output layer generates an eight-element hash code. For the model to learn discriminative features that preserve the similarity of the images, we used the triplet margin loss function [21]. The loss function is designed such that the outputs of similar images are pulled together while the outputs of dissimilar images are pushed apart. Therefore, the model effectively learns the semantic structure of similar images.
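As a rough sketch of this objective, the snippet below uses PyTorch's built-in triplet margin loss with the `DeepHashNet` sketched earlier; the margin value and the fixed learning rate are illustrative stand-ins for the cyclical learning-rate schedule described above.

```python
# A rough sketch of the fine-tuning objective, reusing the DeepHashNet sketch
# above; the margin and the fixed learning rate are illustrative stand-ins for
# the cyclical learning-rate schedule described in the text.
import torch
import torch.nn as nn

criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)

def training_step(anchor, positive, negative):
    """One update: pull same-class codes together, push different-class apart."""
    optimizer.zero_grad()
    loss = criterion(model(anchor), model(positive), model(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```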
We evaluate the proposed framework on the given test set with different threshold percentile values used in the encoding phase and show the results in Fig. 3. For instance,
| **Type** | **Sample size** | **Training** | **Validation** | **Test** |
| --- | --- | --- | --- | --- |
| Bent | 508 | 305 | 100 | 103 |
| Compact | 406 | 226 | 80 | 100 |
| FRI | 389 | 215 | 74 | 100 |
| FRII | 679 | 434 | 144 | 101 |
| Total | 1982 | 1180 | 398 | 404 |

Table 1: **The distribution of the original dataset spread across the training, validation and testing images.** |
2304.01408 | A physically realizable molecular motor driven by the Landauer blowtorch
effect | We propose a model for a molecular motor in a molecular electronic junction
driven by a natural manifestation of Landauer's blowtorch effect. The effect
emerges via the interplay of the electronic friction and diffusion
coefficients, each calculated quantum mechanically using nonequilibrium Green's
functions, within a semi-classical Langevin description of the rotational
dynamics. The motor functionality is analysed through numerical simulations
where the rotations exhibit a directional preference according to the intrinsic
geometry of the molecular configuration. The proposed mechanism for motor
function is expected to be ubiquitous for a range of molecular geometries
beyond the one examined here. | Riley J. Preston, Daniel S. Kosov | 2023-04-03T22:52:24Z | http://arxiv.org/abs/2304.01408v2 | # A physically realizable molecular motor driven by the Landauer blowtorch effect
###### Abstract
We propose a model for a molecular motor in a molecular electronic junction driven by a natural manifestation of Landauer's blowtorch effect. The effect emerges via the interplay of the electronic friction and diffusion coefficients, each calculated quantum mechanically using nonequilibrium Green's functions, within a semi-classical Langevin description of the rotational dynamics. The motor functionality is analysed through numerical simulations where the rotations exhibit a directional preference according to the intrinsic geometry of the molecular configuration. The proposed mechanism for motor function is expected to be ubiquitous for a range of molecular geometries beyond the one examined here.
## I Introduction
Experimental demonstrations of molecular motors have used a range of external energy sources such as light [1; 2; 3; 4; 5], chemical reactions [6; 7; 8], thermal gradients [9], or applied electric currents [10; 11; 12; 13; 14; 15; 16; 17], the latter being of particular interest due to its conceptual compatibility with nanoelectronics. With this in mind, this work considers a molecular rotor subject to an applied electric current as supplied by a pair of conducting electrodes. The molecular rotor serves as the main conducting element in a molecular electronic junction, capable of producing mechanical work.
There already exists a wealth of theoretical literature describing such systems whose motor functionalities arise from a range of physical phenomena, including, but not limited to, quantum tunneling [10] and excitation-relaxation [18; 19] processes in asymmetric ratchet potentials, non-Markovian behaviour of the current-induced forces leading to a bias in the directionality [20], and non-conservative forces [21; 22]. However, previous studies have overlooked the possible functionality which can arise due to the inhomogeneous dissipative-excitational current-induced forces present in such systems.
In this paper, we consider a model in which a molecular rotor is driven by the current-induced forces imparted by electrons tunneling through it. These forces provide the required energy to the rotational degree of freedom in order to overcome the potential barrier for rotation. In parallel with previous work [18; 21; 19], we model the rotational degree of freedom classically according to a Keldysh-Langevin approach where its time-evolution is governed by three components: an adiabatic force which sets the shape of the ratchet potential, a dissipative frictional force, and a stochastic force, the balance of the latter two yielding the steady-state temperature of the classical rotator. Each of these forces, which arise due to the interaction with the quantum nonequilibrium electronic environment, is calculated self-consistently via nonequilibrium Green's functions.
The directionality of our rotation is a result of a, to our knowledge, hitherto unexplored contribution for motors in molecular junctions: a consequence of Landauer's blowtorch effect [23; 24] in the ratchet potential, which emerges via the interplay of the coordinate-dependent diffusion and viscosity coefficients. This phenomenon is well understood in the context of chemical reaction rates [25], whereas here the scope is extended to the study of ratchets. This is in contrast to previous research where the directional rotation comes as a result of the non-conservativity of the adiabatic force [18; 21], a phenomenon which is also easily accessible with our model via an appropriate choice of Hamiltonian, but is not the aim of this study. Driving the motor by the blowtorch effect is of particular interest since the dissipative and stochastic forces generally act to degrade the device performance rather than enforce it [21]. We note that while the effect of inhomogeneous viscosity and diffusion coefficients in ratchets has been explored on a mathematical level [26; 27], here we propose a physically realizable molecular electronic junction in which the effect emerges naturally. This effect does not require an explicit time-dependence of the Hamiltonian as it arises due to the molecular geometry. Additionally, the directional rotation does not require an asymmetric ratchet potential, although such asymmetric potentials can arise from our calculations via the adiabatic force, further reinforcing motor performance. We note that the function of our motor is reliant on the rotational dynamics being sufficiently damped; a regime which is generally fulfilled in molecular electronic junctions since the conducting molecule is usually embedded in an insulating solvent or is part of a molecular monolayer.
It has been shown theoretically that a non-zero charge current can be pumped through a quantum system in equilibrium via the periodic variation of two independent parameters [28]. A particularly relevant example is described in Ref. [29], where the coupling of a quantum system to the left and right electrodes each assumes a
periodic time-dependence. Since in our model the coupling to each lead is implicitly time-dependent through the evolution of the nuclear geometry, this would be equivalent to a manual rotation of our molecular motor at constant angular velocity. We use our model to investigate the converse effect to equilibrium charge shuttling, in which an applied charge current via the nonequilibrium electrodes _produces_ a time-dependent variation of two independent parameters (the coupling to the left and right electrodes), which emerges via the directed rotation of the molecular geometry. Thus, this is an example of an adiabatic quantum motor. We do, however, find that the operational parameter regimes of our molecular motor differ from those of models of equilibrium charge shuttling, which we further discuss in the results section. Our choice of Hamiltonian also mirrors an example demonstrated in Ref. [30], where the rotation is instead considered from a quantum perspective.
## II Model
A visualisation of our proposed molecular junction configuration is shown in Fig. 1. We have two planar electrodes bridged by a biphenyl-based molecule. The phenyl rings are prepared such that they are displaced from each other by a dihedral angle \(\phi\); this angle is constant for a given simulation. The motor effect arises through the angle \(\theta\), which represents the uniform rotation of the entire molecular bridge as a rigid body. To produce an observable directionality in the rotations, the vibrations must be adequately damped. We find that the electronically calculated forces are generally insufficient to achieve this regime, and so we additionally include an external equilibrium environment (for example, a solvent or molecular monolayer surrounding the junction) which acts to further dampen the classical vibrations. We emphasize that the proposed geometry is merely a physically reasonable suggestion. The proposed motor effect should be ubiquitous in molecular junction geometries provided that there is an asymmetry in the Hamiltonian, in our case arising from the dihedral angle \(\phi\).
The system is described by a generic tunneling Hamiltonian as per
\[\hat{H}(t)=\hat{H}_{M}+\hat{H}_{L}+\hat{H}_{R}+\hat{H}_{LM}(\theta(t))+\hat{H }_{MR}(\theta(t))+H_{\text{cl}}(t). \tag{1}\]
The total system Hamiltonian is partitioned into the following components; the molecular Hamiltonian \(\hat{H}_{M}\) for the molecular bridge, the left and right electrodes Hamiltonians \(\hat{H}_{L}\) and \(\hat{H}_{R}\), the electrodes-molecule coupling Hamiltonians \(\hat{H}_{LM}(\theta(t))\) and \(\hat{H}_{MR}(\theta(t))\) which describe the coupling between the electronic states on the rotor and the left and right electrodes, respectively, and the classical Hamiltonian \(H_{\text{cl}}(t)\) which describes the time-evolving molecular geometry. Note that \(\hat{H}_{LM}(\theta(t))\) and \(\hat{H}_{MR}(\theta(t))\) depend on time implicitly via the classical, rotational degree of freedom \(\theta\).
The molecular Hamiltonian consists of two conducting electronic levels, each localised on one of the phenyl rings. It then takes the form
\[\hat{H}_{M}=E_{1}\hat{d}_{1}^{\dagger}\hat{d}_{1}+E_{2}\hat{d}_{2}^{\dagger} \hat{d}_{2}+v(\hat{d}_{1}^{\dagger}\hat{d}_{2}+\hat{d}_{2}^{\dagger}\hat{d}_{ 1}). \tag{2}\]
\(E_{1}\) and \(E_{2}\) are the energies of the first and second electronic levels, respectively, while \(v\) is the hopping amplitude.
The electrodes are described as non-interacting fermionic baths and the Hamiltonian is taken in the standard form,
\[\hat{H}_{L}+\hat{H}_{R}=\sum_{k\alpha}\epsilon_{k\alpha}\hat{d}_{k\alpha}^{ \dagger}\hat{d}_{k\alpha}, \tag{3}\]
where we use a subscript \(k\alpha\) to denote an operator acting on state \(k\) in the \(\alpha\) electrode which has energy \(\epsilon_{k\alpha}\).
The molecule-electrode coupling, \(\hat{H}_{LM}\) and \(\hat{H}_{MR}\), are defined according to
\[\hat{H}_{LM} =\sum_{k\in L}\Big{(}t_{k1}(\theta(t))\hat{d}_{k}^{\dagger}\hat{d}_{1}+\text{h.c.}\Big{)}, \tag{4}\] \[\hat{H}_{MR} =\sum_{k\in R}\Big{(}t_{k2}(\theta(t)+\phi)\hat{d}_{k}^{\dagger}\hat{d}_{2}+\text{h.c.}\Big{)}. \tag{5}\]
The matrix elements \(t_{ki}\) (and their conjugates) describe the tunneling amplitudes between electrode states \(k\) and the molecular bridge states \(i\), where state 1 is only coupled to the left electrode and state 2 is only coupled to the right electrode. Note that \(t_{ki}\) depends explicitly on the classical rotational coordinate \(\theta\). We choose to express \(t_{k\alpha,i}(\theta)=t_{k\alpha,i}s_{\alpha}(\theta)\), where the classical dependence emerges through \(s_{\alpha}(\theta)\), which takes the following
Figure 1: Schematic of the model system. A biphenyl-based molecule (green) connects two graphene electrodes. The molecule represents a rigid rotator. The dihedral angle between the two phenyl rings, which is critical for motor functionality, can be adjusted by the inclusion of appropriate side groups to atoms 1, 2, 3, and 4. An applied current induces a directional rotation about the red bonds when the dihedral angle between the phenyl rings is non-zero.
forms for the left and right electrodes:
\[s_{L} =1+\frac{A}{2}\left(\cos(2\theta)-1\right), \tag{6}\] \[s_{R} =1+\frac{A}{2}\left(\cos(2(\theta+\phi))-1\right). \tag{7}\]
With this dependence, the coupling amplitude is maximised when a phenyl ring is coplanar with its corresponding electrode and minimised, with a magnitude of \(1-A\) times the maximum value, when the phenyl ring is orthogonal to the electrode. This dependence of the tunneling amplitudes on the rotational angle can be realized physically using graphene electrodes, where the rotation of the molecular bridge out of the electrode plane lowers \(\pi\)-conjugation, reducing the corresponding tunneling amplitude.
Finally, the classical Hamiltonian is given by a rigid rotator expression,
\[H_{\mathrm{cl}}(t)=\frac{L^{2}}{2I}+U_{\mathrm{cl}}(\theta), \tag{8}\]
where \(L\) is the angular momentum of the molecular geometry, \(I\) is the moment of inertia and \(U_{\mathrm{cl}}(\theta)\) is the classical potential for the rotation. In our calculations we set \(U_{\mathrm{cl}}(\theta)=0\), such that the rotational potential results entirely from the interaction with the electronic environment, calculated quantum mechanically. In any case, the inclusion of a non-zero classical potential will not have a qualitative difference on the observed motor effect.
## III Current-induced torque and "blowtorch" temperature
The operator for the torque acting on the classical rotational coordinate due to the quantum, electronic environment is given by
\[\hat{\tau}=-\partial_{\theta}\hat{H}(t)=-\sum_{k\alpha,i}\left[\partial_{ \theta}t_{k\alpha i}(\theta)\hat{d}_{k\alpha}^{\dagger}\hat{d}_{i}+h.c.\right], \tag{9}\]
where \(\partial_{\theta}\) is the partial derivative with respect to \(\theta\). The summation in the above runs over both electrodes, \(\alpha\in\{L,R\}\), and both molecular electronic states, \(i\in\{1,2\}\). The torque operator is then expressed in terms of a mean term and a deviation from the mean,
\[\hat{\tau}=\langle\hat{\tau}\rangle+\delta\hat{\tau}, \tag{10}\]
where each can be quantified in terms of nonequilibrium Green's functions. As is covered in detail in the appendix, a time-scale separation between the slow classical rotation of the rotor and the fast electron tunneling allows for a perturbative expansion of the mean torque in terms of the small parameter, namely the derivative with respect to central time in the molecular bridge Green's functions. The perturbative expansion is
\[\langle\hat{\tau}\rangle=\tau_{(0)}(\theta)+\tau_{(1)}(\theta,\hat{\theta})+..., \tag{11}\]
where \(\tau_{(n)}\) is of \(n^{\mathrm{th}}\) order in the central time derivatives. We truncate the expansion after the first order. We calculate a conservative potential according to
\[U=-\int_{\theta_{0}}^{\theta}d\theta^{\prime}\tau_{(0)}(\theta^{\prime}), \tag{12}\]
where the choice of \(\theta_{0}\) is arbitrary. Equation (12) entirely defines the ratchet potential for our rotational coordinate due to the electronic environment. Finally, the torque operator is then mapped onto a classical torque such that we obtain a classical equation of motion for the rotational coordinate. It takes the form of a Langevin equation,
\[I\ddot{\theta}=\tau_{(0)}(\theta)-(\xi_{\mathrm{solv}}+\xi(\theta))\dot{ \theta}+\delta\tau(t), \tag{13}\]
where \(\xi(\theta)\), calculated via \(\tau_{(1)}\), is the electronic friction coefficient while \(\xi_{\mathrm{solv}}\) is the friction due to the interaction with an external solvent. \(\delta\tau(t)\) is a classical stochastic force quantified according to a diffusion coefficient, \(D_{\mathrm{tot}}=D(\theta)+D_{\mathrm{solv}}\), where the electronic part is defined according to
\[\langle\delta\tau(t)\delta\tau(t^{\prime})\rangle=D(\theta)\delta(t-t^{\prime}). \tag{14}\]
Each of the electronic forces, \(\tau_{(0)}(\theta)\), \(\xi(\theta)\), and \(D(\theta)\), is calculated quantum mechanically via nonequilibrium Green's functions, while the forces due to interaction with the external solvent, \(\xi_{\mathrm{solv}}\) and \(D_{\mathrm{solv}}\), are input parameters to the model which allow us to artificially increase the damping of the dynamics. We have applied the white-noise approximation in calculating the electronic part of the diffusion coefficient, which is justified due to the clear separation of time-scales between the electronic and classical dynamics [31]. The same cannot be said for the external damping, whose dynamics may occur on time-scales similar to those of the classical rotations. However, the operation of our motor is governed chiefly by the behaviour of the electronic component and, as such, we predict that a more accurate approach to the modelling of the external solvent is not important for the observed motor effect. In analogy with the fluctuation-dissipation theorem, we can define an effective "blowtorch" temperature for the classical rotation according to [32; 33; 25; 34]
\[k_{B}T_{\mathrm{eff}}(\theta)=\frac{D(\theta)+D_{\mathrm{solv}}}{2(\xi(\theta )+\xi_{\mathrm{solv}})}. \tag{15}\]
Expressions for the diffusion coefficient, viscosity and average torque in terms of nonequilibrium Green's functions along with relevant derivations are given in the appendix.
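As an illustration of how Eq. (13) can be integrated numerically, the sketch below uses a simple Euler-Maruyama scheme; `tau0`, `xi_el` and `D_el` stand for precomputed interpolants of the Green's-function-based coefficients, and all names are ours rather than the authors'.

```python
# A minimal sketch of an Euler-Maruyama integration of the Langevin equation
# (13); tau0, xi_el and D_el are assumed to be callables returning the
# adiabatic torque, electronic friction and electronic diffusion at angle theta.
import numpy as np

def simulate(tau0, xi_el, D_el, inertia, xi_solv, D_solv, dt, n_steps, rng):
    theta, omega = 0.0, 0.0
    trajectory = np.empty(n_steps)
    for n in range(n_steps):
        D_tot = D_el(theta) + D_solv
        # Discretized white noise with <dtau(t) dtau(t')> = D_tot delta(t - t')
        noise = np.sqrt(D_tot / dt) * rng.standard_normal()
        torque = tau0(theta) - (xi_el(theta) + xi_solv) * omega + noise
        omega += dt * torque / inertia
        theta += dt * omega
        trajectory[n] = theta
    return trajectory
```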
## IV Results
We now present results for our model system. Results are acquired via computational simulations of the Langevin dynamics produced by our model according to (13). From long Langevin trajectories in time, we calculate the average rotation rate for the classical rotational
degree of freedom for a chosen set of parameters. The common parameters for all calculations, unless otherwise specified, are as follows. The electrode temperatures are set such that \(k_{B}T_{\alpha}\approx 2.72\times 10^{-2}\)eV, and the solvent is in thermal equilibrium with the electrodes with a corresponding viscosity coefficient of \(\xi_{\rm solv}=5\) a.u.. The moment of inertia of the classical rotational coordinate is approximated according to two phenyl rings as \(I=4.5\times 10^{5}\) a.u.. We use the wide-band approximation, and express the level broadening as
\[\Gamma_{\alpha}=\Gamma_{\alpha}^{\rm max}s_{\alpha}^{2}(\theta), \tag{16}\]
where the maximum level broadenings \(\Gamma_{\alpha}^{\rm max}\) are input parameters in our calculations. We take the maximum level broadenings due to the left and right electrodes, respectively, as \(\Gamma_{L}^{\rm max}\approx 0.272\)eV and \(\Gamma_{R}^{\rm max}=\Gamma_{L}^{\rm max}/2\). For the molecular Hamiltonian, we take \(E_{1}=E_{2}=0\) while the hopping amplitude is given by \(v=1.25\)eV. We apply the voltage, \(\mathcal{V}\), symmetrically in all cases such that \(\mu_{L}=-\mu_{R}\). Finally, we take \(A=0.95\), such that the Hamiltonian coupling element when the phenyl ring is perpendicular to the electrode is \(5\%\) of the corresponding coplanar value.
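For illustration, the angle-dependent coupling modulations of Eqs. (6) and (7) and the broadenings of Eq. (16) can be evaluated as follows; the parameter values follow the text, while the function names are our own.

```python
# The angle-dependent coupling modulations of Eqs. (6), (7) and the wide-band
# broadenings of Eq. (16); parameter values follow the text, names are ours.
import numpy as np

A = 0.95                         # coupling modulation depth
PHI = -np.pi / 4                 # dihedral angle between the phenyl rings
GAMMA_L_MAX = 0.272              # eV
GAMMA_R_MAX = GAMMA_L_MAX / 2.0  # eV

def s_left(theta):
    return 1.0 + 0.5 * A * (np.cos(2.0 * theta) - 1.0)

def s_right(theta):
    return 1.0 + 0.5 * A * (np.cos(2.0 * (theta + PHI)) - 1.0)

def broadenings(theta):
    """Gamma_L(theta), Gamma_R(theta) in eV, per Eq. (16)."""
    return GAMMA_L_MAX * s_left(theta) ** 2, GAMMA_R_MAX * s_right(theta) ** 2
```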
In Fig. 2, we observe the periodic ratchet potentials generated for a range of voltages along with the corresponding inhomogeneous effective temperatures overlaid on top. At equilibrium, the rotational coordinate is in thermal equilibrium with the electrodes and solvent. Of principal importance are the energies of the molecular orbitals, which are \(\pm 1.25\)eV for our parameters and are off-resonant when \(\mathcal{V}<2.5\)V. Increasing the voltage in the off-resonant regime (exemplified by the \(\mathcal{V}=2\)V case) increases the height of the energy barrier for rotation, while the temperature of the rotational coordinate differs only slightly from equilibrium. Conversely, in the resonant regime when \(\mathcal{V}>2.5\)V, the inhomogeneous temperature as a function of \(\theta\) yields clear periodic hot spots which we refer to as the blowtorch. Further increasing the voltage magnifies these hot spots while decreasing the energy barrier for rotation. The value of \(\phi=-\pi/4\) was chosen specifically here to illustrate a situation in which a periodic blowtorch increases the probability of the forwards rotation (increasing \(\theta\)). This is because the effects of the potential gradient are nullified in the region where the blowtorch is applied, resulting in an effective decrease of the barrier for rotation in the forwards direction [25]. We also observe numerically that our Langevin coefficients are independent of the sign of the voltage. Thus, the rotational direction must also be independent of the sign of the voltage. In other words, our mechanism for the rotation of the molecular structure is independent of the direction of electron tunneling through the junction. This then justifies our decision to have \(\Gamma_{L}^{\rm max}\neq\Gamma_{R}^{\rm max}\), since otherwise the symmetry of the system would prevent any non-zero average rotation.
We additionally observe the dependence on \(\phi\) in Fig. 3. When \(\phi=0\) and the two phenyl rings are coplanar, the ratchet potential and corresponding effective temperature distribution are symmetric, ruling out any possible rotation, as is to be expected. Upon comparing \(\phi=\pi/4\) with \(\phi=-\pi/4\), corresponding to opposite chiralities of the molecular bridge, we observe the dependence on \(\theta\) to be flipped such that we should observe equal and opposite average rotation rates, a result which we observe directly in Fig. 4. The case of \(\phi=1.3\) was chosen to highlight the possibility of deformation of the potential, which can have a significant effect on the rotation rate.
We now turn to numerical simulations of the dynamics. In Fig. 4, we observe the average rotation rate, \(R\), over a trajectory as a function of \(\phi\). The rotations go to zero
Figure 2: \(U\) calculated according to (12) with \(k_{B}T_{\rm eff}\) overlaid on top for different voltages. The nonhomogeneous temperature with local hot spots is the manifestation of the Landauer blowtorch effect. Dihedral angle between phenyl rings: \(\phi=-\pi/4\).
when \(\phi=0\) and \(\pm\frac{\pi}{2}\), as is to be expected from symmetry arguments. We find that \(R(-\phi)=-R(\phi)\) as expected from the previous discussion. Short example trajectories of the rotational coordinate as a function of time are plotted for different values of \(\phi\) in Fig. 5, for the reader's intuition.
In Ref. [29], equilibrium charge shuttling was shown to be maximised when \(\phi=\pm\pi/4\); a result which we can readily reproduce by applying a manual rotation to \(\theta\) such that it increases or decreases linearly with time. We find here that the rotation rate due to an applied voltage follows a similar trend, reaching a minimum/maximum at \(\phi=\pm\pi/4\). However, we observe a deviation from this behaviour around \(\phi=\pm 1.3\) due to the rapid current-induced deformation of the ratchet potential. We also note that equilibrium charge shuttling can be observed even when \(\Gamma_{L}^{\text{max}}=\Gamma_{R}^{\text{max}}\); a regime in which we do not observe a net rotation by applying a voltage, since our mechanism for rotation is independent of the direction of the current. In contrast, models for equilibrium charge pumping show that the produced current is _reversed_ upon reversing the rotation of the molecular configuration [29]. We find that the direction of rotation in our model is determined by the choice of \(\phi\) as well as the choices of \(\Gamma_{L}^{\text{max}}\) and \(\Gamma_{R}^{\text{max}}\). We have arbitrarily chosen \(\Gamma_{L}^{\text{max}}>\Gamma_{R}^{\text{max}}\) to produce the displayed results. If we instead choose \(\Gamma_{R}^{\text{max}}>\Gamma_{L}^{\text{max}}\), the observed rotational directions are reversed, a result we have observed numerically but not shown here.
Fig. 6 demonstrates the voltage dependence of the rotational rate. We observe negligible rotation in the off-resonant regime when \(\mathcal{V}<2.5V\). In the resonant regime, the average rotation rate increases approximately linearly due to the increasing magnitude of the applied blowtorch with increasing voltage along with the lowering of the energy barrier required for rotation. For even higher voltages, we expect that the rotation rate would begin decreasing back towards zero since the large effective temperatures will overwhelm the potential entirely, removing any directional preference. This, however, would occur beyond the realms of physically achievable voltages for our model.
The function of our molecular motor requires sufficient damping, a regime we achieve via the inclusion of an external solvent in the system. In Fig. 7, we observe the dependence of the rotation rate on the moment of inertia of the molecular configuration, where \(I\approx 1.15\times 10^{-45}\,\text{kg}\,\text{m}^{2}\) is the physically reasonable value corresponding to our chosen geometry. In the overdamped case
Figure 5: Short time trajectories of the rotational angle \(\theta\) (expressed here in terms of the number of revolutions) for different values of the dihedral angle \(\phi\). Voltage \(\mathcal{V}=5V\).
Figure 6: The rotation rate, \(R\), as a function of voltage \(\mathcal{V}\). Each \(R\) point is calculated via averaging over a trajectory with a length of \(\approx 1.6\times 10^{6}\) ns. Dihedral angle \(\phi=-\pi/4\).
where \(I\) is unrealistically small, the rotation rate is orders of magnitude larger than for realistic values of \(I\). The rotation rate asymptotically decreases towards zero with increasing moment of inertia, and in the underdamped case the preference for a given direction becomes vanishingly small. As an additional insight, we define the directionality according to
\[Dir=\frac{n_{\text{forw}}}{n_{\text{forw}}+n_{\text{back}}}, \tag{17}\]
where \(n_{\text{forw}}\) and \(n_{\text{back}}\) are the number of forward and backward rotations over the full length of the trajectory. \(Dir=1\) would correspond to a trajectory in which the molecular motor rotates unidirectionally forwards. For the physically realistic value of \(I\), 50.68% of all rotations are forwards. This directionality is far smaller than what has been demonstrated for motors governed chiefly by quantum effects [10].
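A sketch of how Eq. (17) could be evaluated from a simulated trajectory is shown below, assuming a rotation is counted each time \(\theta\) crosses a multiple of \(2\pi\) (our counting convention, not necessarily the authors').

```python
# A sketch of Eq. (17); a rotation is counted whenever theta crosses a
# multiple of 2*pi (an assumed convention for "forward"/"backward" rotations).
import numpy as np

def directionality(theta_traj):
    revolutions = np.floor(np.asarray(theta_traj) / (2.0 * np.pi))
    steps = np.diff(revolutions)
    n_forw = int(np.sum(steps > 0))
    n_back = int(np.sum(steps < 0))
    total = n_forw + n_back
    return n_forw / total if total else 0.5  # 0.5: no completed rotations
```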
## V Conclusions
In this paper, we have proposed an experimentally realizable model for a molecular motor in a molecular electronic junction whose operation is governed by Landauer's blowtorch effect. This contrasts with other theoretical models for molecular motors which generally disregard the inhomogeneous temperature of the electronic environment induced by the nonequilibrium electrodes. We have demonstrated that directional rotations can be produced entirely as a result of the behaviour of the viscosity and diffusion coefficients (these are exerted by tunneling quantum electrons on the classical rotator and calculated exactly via nonequilibrium Green's functions), while the rotational potential is periodic and subsequently introduces no intrinsic directionality of its own. This effect is, however, limited to regimes where the rotations are sufficiently damped, and we anticipate that the small electronic friction alone will not be enough to produce a non-negligible rotational preference; hence our choice to additionally include an external solvent which increases the damping of the rotation.
**DATA AVAILABILITY**
The data that supports the findings of this study are available within the article.
Figure 7: The rotation rate \(R\) and directionality \(Dir\) as a function of the moment of inertia of the classical rotational coordinate. The final point on each plot corresponds to our usual choice of \(I\) for two phenyl rings. The trajectory length was chosen for each value of \(I\) to ensure convergence of the results. Dihedral angle \(\phi=-\pi/4\), voltage \(\mathcal{V}=5\)V.
Appendix A Torque, Rotational Viscosity and Diffusion Coefficient in Terms of Nonequilibrium Green's functions
We use the standard definitions for the lesser \(G^{<}_{ij}(t,t^{\prime})\), greater \(G^{>}_{ij}(t,t^{\prime})\), retarded \(G^{R}_{ij}(t,t^{\prime})\) and advanced \(G^{A}_{ij}(t,t^{\prime})\) components of the electronic Green's functions in our derivations. Expressing the torque operator in the Heisenberg picture, we compute the average torque as
\[\langle\hat{\tau}\rangle=i\sum_{k\alpha i}\left[\partial_{\theta}t_{k\alpha i}( \theta)G^{<}_{ik\alpha}(t,t)+\partial_{\theta}t_{ik\alpha}(\theta)G^{<}_{k \alpha i}(t,t)\right]. \tag{10}\]
This torque is computed for the exact, nonadiabatic Green's functions.
We now perform a perturbative expansion of the mean torque given in (10). It is a mathematical convenience to perform this expansion under a Wigner transformation of the time since it allows for the easy recognition of different time-scales within the system. The Wigner time coordinates are defined according to
\[T=\frac{t+t^{\prime}}{2},\qquad\tau=t-t^{\prime}, \tag{11}\]
where \(T\) is the central time, associated with the long time-scales of classical vibration, and \(\tau\) is the relative time, related to electronic tunneling. Thus, in our theory the small parameter naturally emerges via derivatives with respect to \(T\). We introduce an auxiliary two-time function,
\[\mathcal{T}(t,t^{\prime})=i\sum_{k\alpha i}\left[\partial_{\theta}t_{k\alpha i }(\theta(t^{\prime}))G^{<}_{ik\alpha}(t,t^{\prime})+\partial_{\theta}t_{ik \alpha}(\theta(t))G^{<}_{k\alpha i}(t,t^{\prime})\right], \tag{12}\]
where \(\mathcal{T}(t,t)=\langle\hat{\tau}(t)\rangle\). Next, the Green's functions spanning both the electrode and molecular space can be decomposed via the Dyson equation
\[G^{<}_{k\alpha i}(t,t^{\prime})=\int_{-\infty}^{\infty}dt_{1}\sum_{j}\left[g^{<}_{k\alpha}(t,t_{1})t_{k\alpha j}(t_{1})G^{A}_{ji}(t_{1},t^{\prime})+g^{R}_{k\alpha}(t,t_{1})t_{k\alpha j}(t_{1})G^{<}_{ji}(t_{1},t^{\prime})\right], \tag{13}\]
where \(g_{k\alpha}(t,t_{1})\) is the free Green's function for electrode \(\alpha\). The resultant equation for \(\mathcal{T}(t,t^{\prime})\) is
\[\mathcal{T}(t,t^{\prime})=i\sum_{ij}\int_{-\infty}^{\infty}dt_{1}\Big{[}G^{<} _{ij}(t,t_{1})\Phi^{A}_{ji}(t_{1},t^{\prime})+G^{R}_{ij}(t,t_{1})\Phi^{<}_{ji} (t_{1},t^{\prime})+\Psi^{<}_{ij}(t,t_{1})G^{A}_{ji}(t_{1},t^{\prime})+\Psi^{R }_{ij}(t,t_{1})G^{<}_{ji}(t_{1},t^{\prime})\Big{]}. \tag{14}\]
Here we have introduced the self-energy-like terms, \(\Psi\) and \(\Phi\), which contain any information about the coupling to the electrodes. These are defined as (\(c=<,>,R,A\))
\[\Psi^{c}_{ij}(t,t^{\prime})=\sum_{k\alpha}\partial_{\theta}t_{ik\alpha}(\theta (t))g^{c}_{k\alpha}(t,t^{\prime})t_{k\alpha j}(\theta(t^{\prime})), \tag{15}\]
\[\Phi^{c}_{ij}(t,t^{\prime})=\sum_{k\alpha}t_{ik\alpha}(\theta(t))g^{c}_{k \alpha}(t,t^{\prime})\partial_{\theta}t_{k\alpha j}(\theta(t^{\prime})). \tag{16}\]
Application of the Wigner transform to (14) results in
\[\int d\tau e^{i\omega\tau}\mathcal{T}(t,t^{\prime})=\text{Tr}\Big{\{}ie^{\frac{1}{2i}\lambda(\partial_{T}^{G}\partial_{\omega}^{\Phi}-\partial_{\omega}^{G}\partial_{T}^{\Phi})}\left(\tilde{G}^{<}\tilde{\Phi}^{A}+\tilde{G}^{R}\tilde{\Phi}^{<}\right)+ie^{\frac{1}{2i}\lambda(\partial_{T}^{\Psi}\partial_{\omega}^{G}-\partial_{\omega}^{\Psi}\partial_{T}^{G})}\left(\tilde{\Psi}^{<}\tilde{G}^{A}+\tilde{\Psi}^{R}\tilde{G}^{<}\right)\Big{\}}, \tag{17}\]
where we use \(\tilde{G}\) to denote the Wigner transform of \(G\), defined as
\[\tilde{G}(T,\omega)=\int d\tau e^{i\omega\tau}G(T,\tau), \tag{18}\]
and the same applies for the self-energy-like terms. Functions in the Wigner space carry dependence on \(T\) and \(\omega\) which we subdue for brevity. We now propose the ansatzes,
\[\tilde{G}=\tilde{G}_{(0)}+\lambda\tilde{G}_{(1)}+\lambda^{2}\tilde{G}_{(2)}+..., \tag{19}\]
\[\tilde{\Psi} = \tilde{\Psi}_{(0)}+\lambda\tilde{\Psi}_{(1)}+\lambda^{2}\tilde{ \Psi}_{(2)}+..., \tag{101}\] \[\tilde{\Phi} = \tilde{\Phi}_{(0)}+\lambda\tilde{\Phi}_{(1)}+\lambda^{2}\tilde{ \Phi}_{(2)}+..., \tag{102}\]
in which \(\tilde{G}_{(n)}\) is of \(n^{th}\) order in our small parameter, and the same applies to \(\tilde{\Psi}\) and \(\tilde{\Phi}\). Terms with \(n=0\) correspond to the adiabatic approximation, while the higher order terms go beyond this and account for the dynamical corrections due to molecular rotations. We use \(\lambda\) in the above as a book-keeping term which makes clear the "smallness" of the term in question. For example, a term proportional to \(\lambda\) will be first order in our small parameter, and so on. We let \(\lambda=1\) at the end of the derivation.
We substitute these expansions into (100) and consider each order of \(\lambda\) separately. In the adiabatic case, we retain only the \(n=0\) terms from (101)-(102) while the exponentials in (100) disappear, resulting in
\[\int d\tau e^{i\omega\tau}\mathcal{T}_{(0)}(t,t^{\prime})=i\text{Tr}\Big{\{} \tilde{G}_{(0)}^{<}\tilde{\Phi}_{(0)}^{A}+\tilde{G}_{(0)}^{R}\tilde{\Phi}_{(0) }^{<}+\tilde{\Psi}_{(0)}^{<}\tilde{G}_{(0)}^{A}+\tilde{\Psi}_{(0)}^{R}\tilde{ G}_{(0)}^{<}\Big{\}}, \tag{103}\]
where we have let \(\lambda=1\). We then apply the inverse Wigner transform and let \(\tau=0\) which yields
\[\tau_{(0)}=-\int\frac{d\omega}{\pi}\text{Im}\text{Tr}\Big{\{}\tilde{\Psi}_{(0 )}^{<}\tilde{G}_{(0)}^{A}+\tilde{\Psi}_{(0)}^{R}\tilde{G}_{(0)}^{<}\Big{\}}. \tag{104}\]
We use ImTr to denote the imaginary part of the trace, where we have used the fact that \((X^{<})^{\dagger}=-X^{<}\) and \((X^{A})^{\dagger}=X^{R}\) for an arbitrary term \(X\). (104) specifies the adiabatic torque.
We now consider the first-order non-adiabatic correction to the average torque. This is found by retaining the first-order terms in (100), which are linear in \(\lambda\). With some work, we find
\[\tau_{(1)}=-\frac{1}{\pi}\int d\omega\text{Im}\text{Tr}\left\{ \tilde{\Psi}_{(0)}^{R}\tilde{G}_{(1)}^{<}+\tilde{\Psi}_{(1)}^{<}\tilde{G}_{( 0)}^{A}\right.\left.+\tilde{\Psi}_{(1)}^{R}\tilde{G}_{(0)}^{<}+\tilde{\Psi}_{ (0)}^{<}\tilde{G}_{(1)}^{A}\right\}\\ +\frac{1}{2\pi}\int d\omega\text{Re}\text{Tr}\left\{\partial_{T }\tilde{\Psi}_{(0)}^{<}\partial_{\omega}\tilde{G}_{(0)}^{A}+\partial_{T} \tilde{\Psi}_{(0)}^{R}\partial_{\omega}\tilde{G}_{(0)}^{<}\right.\left.- \partial_{\omega}\tilde{\Psi}_{(0)}^{<}\partial_{T}\tilde{G}_{(0)}^{A}- \partial_{\omega}\tilde{\Psi}_{(0)}^{R}\partial_{T}\tilde{G}_{(0)}^{<}\right\}, \tag{105}\]
where ReTr denotes the real part of the trace. We find that \(\tau_{(1)}\) is proportional to \(\dot{\theta}\) and as a result, it can be alternately expressed as
\[\tau_{(1)}=-\xi(\theta)\dot{\theta}, \tag{106}\]
where \(\xi\) is the electronic viscosity coefficient. Thus, (105) denotes the dissipative frictional torque.
The fluctuations about the average torque are treated as a Gaussian stochastic variable which is quantified entirely by its first two moments:
\[\langle\delta\hat{\tau}(t)\rangle=0,\qquad\langle\delta\hat{\tau}(t)\delta\hat{\tau}(t^{\prime})\rangle=D\delta(t-t^{\prime}), \tag{107}\]
where \(D\) is the electronic diffusion coefficient which we aim to find an expression for. Note that we have taken the white-noise approximation such that the stochastic force is delta-correlated.
Here, we provide a final expression for \(D\), while the derivation follows Ref. [33]:
\[D(\theta) = \frac{1}{2\pi}\int d\omega\text{Tr}\left\{\tilde{G}_{(0)}^{>} \tilde{\Phi}_{(0)}^{A}\tilde{G}_{(0)}^{<}\tilde{\Phi}_{(0)}^{A}+\tilde{G}_{(0 )}^{R}\tilde{\Phi}_{(0)}^{>}\tilde{G}_{(0)}^{<}\tilde{\Phi}_{(0)}^{A}+\tilde{G} _{(0)}^{>}\tilde{\Phi}_{(0)}^{A}\tilde{G}_{(0)}^{R}\tilde{\Phi}_{(0)}^{<}+ \tilde{G}_{(0)}^{R}\tilde{\Phi}_{(0)}^{>}\tilde{G}_{(0)}^{R}\tilde{\Phi}_{(0)}^ {<}+\tilde{\Psi}_{(0)}^{>}\tilde{G}_{(0)}^{A}\tilde{\Psi}_{(0)}^{<}\tilde{G} _{(0)}^{A}\right. \tag{108}\] \[+ \tilde{\Psi}_{(0)}^{R}\tilde{G}_{(0)}^{>}\tilde{\Psi}_{(0)}^{<} \tilde{G}_{(0)}^{A}+\tilde{\Psi}_{(0)}^{>}\tilde{G}_{(0)}^{A}\tilde{\Psi}_{(0 )}^{R}\tilde{G}_{(0)}^{<}+\tilde{\Psi}_{(0)}^{R}\tilde{G}_{(0)}^{>}\tilde{\Psi}_ {(0)}^{R}\tilde{G}_{(0)}^{<}+\tilde{G}_{(0)}^{>}\tilde{\zeta}_{(0)}^{<}+\tilde{ \zeta}_{(0)}^{>}\tilde{G}_{(0)}^{<}+\tilde{G}_{(0)}^{>}\tilde{\Psi}_{(0)}^{<} \tilde{G}_{(0)}^{A}+\tilde{G}_{(0)}^{>}\tilde{\Psi}_{(0)}^{<}\tilde{G}_{(0)}^{A} \tilde{\Phi}_{(0)}^{A}\] \[+ \tilde{\Psi}_{(0)}^{>}\tilde{G}_{(0)}^{A}\tilde{\Phi}_{(0)}^{A} \tilde{G}_{(0)}^{<}+\tilde{G}_{(0)}^{>}\tilde{\Psi}_{(0)}^{R}\tilde{G}_{(0)}^{<} \tilde{\Phi}_{(0)}^{A}+\tilde{\Psi}_{(0)}^{R}\tilde{G}_{(0)}^{>}\tilde{\Phi}_ {(0)}^{A}\tilde{G}_{(0)}^{<}+\tilde{G}_{(0)}^{>}\tilde{\Psi}_{(0)}^{R}\tilde{G} _{(0)}^{R}\tilde{\Phi}_{(0)}^{<}+\tilde{\Psi}_{(0)}^{R}\tilde{G}_{(0)}^{R} \tilde{\Phi}_{(0)}^{>}\tilde{G}_{(0)}^{<}\right\},\]
where we have introduced an additional self-energy-like term, defined as \((c=<,>,R,A)\)
\[\zeta_{ij}^{c}(t,t^{\prime})=\sum_{k\alpha}\partial_{\theta}t_{ik\alpha}(\theta(t ))g_{k\alpha}^{c}(t,t^{\prime})\partial_{\theta}t_{k\alpha j}(\theta(t^{\prime})), \tag{109}\]
whose perturbative expansion is defined in the usual way. The diffusion coefficient according to (108) then gives a means of quantifying the stochastic force in numerical simulations.
## Appendix B Solving for the Adiabatic and First Order Green's Functions
What remains is to calculate explicit expressions for both the adiabatic and first order Green's functions, as well as the self-energy-like terms, in the frequency domain. The Green's functions evolve according to the Keldysh-Kadanoff-Baym equations, given in the Wigner space as [25; 31; 33]
\[\Big{(}\omega+\frac{i}{2}\partial_{T}-e^{\frac{1}{2i}\lambda\partial_{\omega}^{G}\partial_{T}^{h}}h(T)\Big{)}\tilde{G}^{R/A}=I+e^{\frac{1}{2i}\lambda\big{(}\partial_{T}^{\Sigma}\partial_{\omega}^{G}-\partial_{\omega}^{\Sigma}\partial_{T}^{G}\big{)}}\tilde{\Sigma}^{R/A}\tilde{G}^{R/A}, \tag{10}\]
\[\Big{(}\omega+\frac{i}{2}\partial_{T}-e^{\frac{1}{2i}\lambda\partial_{\omega}^{G}\partial_{T}^{h}}h(T)\Big{)}\tilde{G}^{</>}=e^{\frac{1}{2i}\lambda\big{(}\partial_{T}^{\Sigma}\partial_{\omega}^{G}-\partial_{\omega}^{\Sigma}\partial_{T}^{G}\big{)}}\Big{(}\tilde{\Sigma}^{R}\tilde{G}^{</>}+\tilde{\Sigma}^{</>}\tilde{G}^{A}\Big{)}, \tag{11}\]
where we have shown the retarded/advanced and the lesser/greater terms collectively. Here, we adopt the convenient notation for derivatives, \(\partial_{T}^{G}\), which denotes a partial derivative acting on the \(G\) term with respect to \(T\), and so on. We have once again introduced the book-keeping parameter, \(\lambda\), for clarity in our perturbative expansions. The self-energies take the conventional form (\(c=<,>,R,A\)):
\[\Sigma_{ij}^{c}(t,t^{\prime})=\sum_{k\alpha}t_{ik\alpha}(\theta(t))g_{k\alpha} ^{c}(t,t^{\prime})t_{k\alpha j}(\theta(t^{\prime})), \tag{12}\]
and we apply our usual ansatz to the self-energies,
\[\tilde{\Sigma}=\tilde{\Sigma}_{(0)}+\lambda\tilde{\Sigma}_{(1)}+\lambda^{2} \tilde{\Sigma}_{(2)}+.... \tag{13}\]
To solve for the form of the adiabatic and first-order Green's functions, we take a perturbative expansion of the exponentials in (10) and (11) as well as substituting in our perturbative ansatzes, (12) and (13). Truncating after the zeroth order and solving for \(\tilde{G}_{(0)}\) yields the standard adiabatic Green's functions as follows:
\[\tilde{G}_{(0)}^{R/A}=\Big{(}\omega I-h-\tilde{\Sigma}_{(0)}^{R/A}\Big{)}^{-1}, \tag{14}\]
\[\tilde{G}_{(0)}^{</>}=\tilde{G}_{(0)}^{R}\tilde{\Sigma}_{(0)}^{</>}\tilde{G}_ {(0)}^{A}. \tag{15}\]
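For intuition, Eqs. (14) and (15) can be evaluated numerically for the two-level bridge of Eq. (2); the sketch below assumes the wide-band forms of the self-energies given later in this appendix, with \(\Gamma_{L}\) acting on level 1 and \(\Gamma_{R}\) on level 2, and all function names are our own.

```python
# A minimal numerical sketch of the adiabatic Green's functions above, under
# the wide-band approximation; function and variable names are our own.
import numpy as np

def adiabatic_greens(omega, h, gamma_L, gamma_R, f_L, f_R):
    """Retarded and lesser Green's functions at frequency omega (matrices)."""
    sigma_R = -0.5j * (gamma_L + gamma_R)              # Sigma^R = -(i/2) Gamma
    G_R = np.linalg.inv(omega * np.eye(len(h)) - h - sigma_R)
    G_A = G_R.conj().T
    sigma_less = 1j * (f_L * gamma_L + f_R * gamma_R)  # Sigma^< = i f Gamma
    return G_R, G_R @ sigma_less @ G_A

h = np.array([[0.0, 1.25], [1.25, 0.0]])   # E1 = E2 = 0, v = 1.25 eV
gamma_L = np.diag([0.272, 0.0])            # level 1 couples to the left lead
gamma_R = np.diag([0.0, 0.136])            # level 2 couples to the right lead
G_R, G_less = adiabatic_greens(0.5, h, gamma_L, gamma_R, f_L=1.0, f_R=0.0)
```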
For the first-order, we consider terms linear in \(\lambda\) such that we obtain
\[\tilde{G}_{(1)}^{R/A}=\frac{1}{2i}\tilde{G}_{(0)}^{R/A}\Big{[}\tilde{G}_{(0)} ^{R/A},\partial_{T}h\Big{]}\tilde{G}_{(0)}^{R/A}, \tag{16}\]
\[\tilde{G}_{(1)}^{</>}=\tilde{G}_{(0)}^{R}\tilde{\Sigma}_{(0)}^{</>}\tilde{G}_ {(1)}^{A}+\tilde{G}_{(1)}^{R}\tilde{\Sigma}_{(0)}^{</>}\tilde{G}_{(0)}^{A}+ \frac{1}{2i}\tilde{G}_{(0)}^{R}\Big{(}\partial_{T}h\tilde{G}_{(0)}^{R} \partial_{\omega}\tilde{\Sigma}^{</>}+\tilde{G}_{(0)}^{</>}\partial_{T}h+h.c \Big{)}\tilde{G}_{(0)}^{A}. \tag{17}\]
We now solve for the adiabatic and first-order components of the self-energy-like terms. Rather than considering each variant of self-energy individually, we will instead consider the following more general expression (\(c=<,>,R,A\))
\[\Xi_{\alpha,ii^{\prime}}^{c}=\sum_{k}A_{ik\alpha}(t)g_{k\alpha}^{c}(t,t^{ \prime})B_{k\alpha i^{\prime}}(t^{\prime}), \tag{18}\]
where \(A\) and \(B\) are arbitrary functions of time. Obviously, when \(A_{k\alpha i}=B_{k\alpha i}=t_{k\alpha i}\), we obtain \(\Sigma^{c}\), while different choices allow us to obtain \(\Psi\), \(\Phi\) and \(\zeta\). We apply the Wigner transform to the above while making use of the shift operator, defined according to \(f(x+h)=e^{hd_{x}^{f}}f(x)\) where we use \(d_{x}^{f}\) to denote the derivative with respect to \(x\) which acts on \(f\) (to avoid ambiguity), to obtain
\[\tilde{\Xi}_{\alpha,ii^{\prime}}^{c} = \sum_{k}\int_{-\infty}^{\infty}d\tau e^{i\omega\tau}e^{\frac{\tau}{2}d_{T}^{A}}A_{ik\alpha}(T)g_{k\alpha}^{c}(t,t^{\prime})e^{-\frac{\tau}{2}d_{T}^{B}}B_{k\alpha i^{\prime}}(T) \tag{19}\] \[= \sum_{k}\int_{-\infty}^{\infty}d\tau e^{i\omega\tau}e^{\frac{1}{2i}\overleftarrow{\partial_{\omega}}(d_{T}^{A}-d_{T}^{B})}A_{ik\alpha}(T)g_{k\alpha}^{c}(t,t^{\prime})B_{k\alpha i^{\prime}}(T), \tag{20}\]
where the \(\overleftarrow{\partial_{\omega}}\) notation denotes the derivative operator acting to the left on the exponential \(e^{i\omega\tau}\). Now we take all the terms that are independent of \(\tau\) outside of the integral, leaving us with
\[\tilde{\Xi}_{\alpha,ii^{\prime}}^{c} = \sum_{k}e^{\frac{1}{2i}\overrightarrow{\partial_{\omega}}(d_{T}^{A}-d_{T}^{B})}A_{ik\alpha}(T)B_{k\alpha i^{\prime}}(T)\int_{-\infty}^{\infty}d\tau e^{i\omega\tau}g_{k\alpha}^{c}(t,t^{\prime}) \tag{101}\] \[= \sum_{k}e^{\frac{1}{2i}\overrightarrow{\partial_{\omega}}(d_{T}^{A}-d_{T}^{B})}A_{ik\alpha}(T)B_{k\alpha i^{\prime}}(T)\tilde{g}_{k\alpha}^{c}(T,\omega). \tag{102}\]
Finally, we take a power series expansion of the exponential to find
\[\tilde{\Xi}_{\alpha,ii^{\prime}}^{c}=\sum_{k}A_{ik\alpha}\tilde{g}_{k\alpha}^{ c}B_{k\alpha i^{\prime}}+\frac{1}{2i}\sum_{k}\frac{\partial\tilde{g}_{k\alpha}^{c} }{\partial\omega}\left(\frac{dA_{ik\alpha}}{dT}B_{k\alpha i^{\prime}}-A_{ik \alpha}\frac{dB_{k\alpha i^{\prime}}}{dT}\right)+...=\tilde{\Xi}_{(0),\alpha,ii^{\prime}}^{c}+\tilde{\Xi}_{(1),\alpha,ii^{\prime}}^{c}+..., \tag{103}\]
where the functional dependencies are clear from the context. Thus, (103) allows us to calculate each of the required orders of self-energy-like terms. If we consider \(A_{k\alpha i}=B_{k\alpha i}=t_{k\alpha i}\), the adiabatic component corresponds to the standard self-energy. We make the wide-band approximation for the electrodes. The retarded/advanced component is given by
\[\tilde{\Sigma}_{(0),\alpha,ii^{\prime}}^{R/A}=\mp\frac{i}{2}\Gamma_{\alpha,ii^ {\prime}}, \tag{104}\]
where the level-broadening takes the form
\[\Gamma_{\alpha,ii^{\prime}}=2\pi t_{\alpha i}^{*}t_{\alpha i^{\prime}}\rho_{ \alpha}, \tag{105}\]
where density of states \(\rho\) is a constant and \(t_{\alpha ki}=t_{\alpha i}\) under the wide-band approximation. The equation for the lesser case takes the form
\[\tilde{\Sigma}_{(0),\alpha,ii^{\prime}}^{<}(\omega,T)=if_{\alpha}(\omega) \Gamma_{\alpha,ii^{\prime}}(T), \tag{106}\]
where \(f_{\alpha}(\omega)\) is the Fermi-Dirac distribution;
\[f_{\alpha}(\omega)=\frac{1}{e^{\frac{\omega-\mu_{\alpha}}{k_{B}T_{\alpha}}}+1}. \tag{107}\]
Here, \(\mu_{\alpha}\) is the chemical potential for the \(\alpha\) lead while \(T_{\alpha}\) is the macroscopic temperature and \(k_{B}\) is Boltzmann's constant. The form of \(\Psi\), \(\Phi\) and \(\zeta\) can be found equivalently by replacing \(\Gamma\) in the above equations with, \(\Gamma^{\Psi}\), \(\Gamma^{\Phi}\) and \(\Gamma^{\zeta}\), respectively, as given by
\[\Gamma^{\Psi}_{\alpha,ii^{\prime}}=2\pi\partial_{\theta}t_{\alpha i}^{*}t_{ \alpha i^{\prime}}\rho_{\alpha}, \tag{108}\]
\[\Gamma^{\Phi}_{\alpha,ii^{\prime}}=2\pi t_{\alpha i}^{*}\partial_{\theta}t_{ \alpha i^{\prime}}\rho_{\alpha}, \tag{109}\]
\[\Gamma^{\zeta}_{\alpha,ii^{\prime}}=2\pi\partial_{\theta}t_{\alpha i}^{*} \partial_{\theta}t_{\alpha i^{\prime}}\rho_{\alpha}. \tag{110}\]
Under the wide-band approximation, \(\tilde{\Xi}_{(1)}^{R/A}=0\), and so we need only consider the lesser case.
|